# 1 Introduction Modern Artificial Neural Networks (ANNs) achieve remarkable results in recognizing patterns. However, due to their complexity and black-box character, their failures are hard to identify [13], which limits their use in safety-critical environments. Additionally, certain common training schemes encourage overconfidence [8]. This issue persists when classifiers encounter Out-of-Distribution (OOD) samples, i.e. samples stemming from distributions other than the In-Distribution (ID) training set. Encountering such samples is often unavoidable in real-world applications, especially when operating in an open world, as autonomous transportation systems do. OOD detection has therefore arisen as the task of identifying instances that do not belong to the training data distribution [25], usually defined via the label distribution, but it also extends to identifying when the model might be unable to assess its input reliably. Anomaly detection, Open-Set Recognition (OSR), and Uncertainty Estimation are closely related to OOD detection, and methods can often be applied across these settings [25]. Most importantly, OSR requires explicitly classifying closed-world samples while detecting unknown classes from the open world [25]. Many OOD detection methods rely on post-hoc analysis of outputs or intermediate features of pre-trained classifiers, but models trained solely to discriminate ID categories may lack features relevant for OOD detection, which limits the general applicability of such approaches. Integrating OOD detection into the classification framework from the start is thus preferable to applying it afterwards. In this work, we extend the Prototypical Variational Autoencoder (ProtoVAE) [6] to OOD detection. Instead of the aforementioned post-hoc analysis of application-specific pre-learned features, the feature space is designed from the beginning to learn to distinguish unknown inputs. 
This is done by estimating the training distribution, learning representations through reconstruction, and designing a distance-based latent space that quantifies dissimilarity to ID clusters while also leveraging label information, yielding promising results. Additionally, a restriction force is implemented to shape the latent ID region, while reconstruction errors are used to identify remaining OOD samples mapped into this region, as introduced in [27]. This work proposes the principle of an enclosing restriction to decouple the previous trade-off between compression/estimation of the ID region and the reconstructive quality needed to recover the input rather than merely reconstruct features, thus improving Autoencoder (AE)-based OOD detection by constraining the ID region in the latent space without collapsing it into a single point. To further enhance the reconstructive power, Learned Perceptual Image Patch Similarity (LPIPS) – a perceptual metric – is integrated into the framework for both the reconstruction loss and the OOD score. The generative and reconstructive abilities of the Variational Autoencoder (VAE) framework provide additional information and explanations about extracted properties of the data distribution and individual samples, rendering both classification and OOD detection transparent. The method is compared to state-of-the-art approaches using the OpenOOD benchmark [24] and a custom railway benchmark. # 2 Related Work A ProtoVAE architecture was presented by Gautam et al. [6] as a self-explainable model. Distance-based classification makes the decision more transparent, and class distributions are divided into clusters. The ability to decode embeddings, including prototypes, fosters transparency w.r.t. the learned data distribution. In this work, modifications enable more direct distance-based classification and enforce an enclosed ID region, making the model well suited for OOD detection. Yang et al. 
[24] categorize OOD detection methods as applied post-hoc, requiring training, Outlier Exposure, pre-processing, or data augmentation. Yang et al. [25] also distinguish approaches based on the outputs of a classifier (classification-based), modeling the data distribution (density-based/generative), relying on distances in feature space (distance-based), and reconstructing the input to measure a reconstruction error (reconstruction-based). The approach of this work can be considered reconstruction-, distance-, and density-based. Maximum Softmax Probability (MSP) as a baseline OOD score was examined by Hendrycks and Gimpel [11]. Hendrycks et al. [10] use the maximum logit as a score (post-hoc). Sun et al. [20] propose thresholding activations of the penultimate layer, thus eliminating overconfidence caused by extreme activations. Wang et al. [22] design a virtual logit based on the smallest principal components. Gal and Ghahramani [5] apply Monte-Carlo dropout at test time, and Lakshminarayanan et al. [13] train an ensemble of ANNs. Hendrycks et al. [12] propose a training-time augmentation based on fractals (PixMix). Nalisnick et al. [15] find that density estimates might assign higher likelihoods to OOD than to ID data. Xiao et al. [23] tackle this by retraining a VAE encoder for a specific test sample, measuring a likelihood discrepancy. Sun et al. [19] design a VAE with one Gaussian distribution per class. In contrast to this work, no perceptual metric, distance-based classification, or restriction scheme for the ID region is used. Moreover, a custom probability is defined for a sample being part of a class distribution, with a fixed threshold for the latter, in contrast to the flexible OOD score fusion used in this work, which requires no fixed threshold for any single score. ARPL [2] generates near-OOD samples for learning adversarial reciprocal points representing individual negative classes. 
Reconstructive OOD detection often involves elaborate schemes [3,16,1,27,7] as the reconstruction error alone often cannot separate OOD from ID data [3]. Existing approaches combine the reconstruction error with the Mahalanobis distance [3], improve ID reconstruction with a deformation transformation [1], or use multiple reconstruction errors [16,7]. In [27], the latent space region of an AE to which ID samples are encoded (ID region) is estimated by restricting ID data within the latent space. For OOD samples mapped into this region, the reconstruction error will be higher [27]. In contrast, in this work, an enclosing restriction supports the trade-off between reliable estimation of the ID region and reconstruction quality. Distance-based OOD detection involves the Mahalanobis distance [14] and the k-Nearest Neighbor (KNN) distance for pre-trained features. Requiring training, Deep SVDD [17] maps ID data into a hypersphere, and SIREN [4] discriminatively shapes representations using prototypes but no reconstruction. # 3 Methodology We introduce the Prototypical Direct-Distance-Classifier VAE (ProtoDistVAE) for explainable OOD detection, which extends the ProtoVAE from [6] and further incorporates the principle of AE-based OOD detection from [27]. Following [27], if an AE reconstructs every ID sample sufficiently well and the ID region $\tau_{\mathrm{ID}}$ can be estimated precisely, a sample can be concluded to be ID by fulfilling two conditions: Fig. 1: ProtoDistVAE architecture: The input $\pmb{x}$ is encoded into a latent Gaussian distribution from which a sample $z$ is drawn and reconstructed to obtain $\hat{\pmb{x}}$. Then, in the framework of generalized Gaussians, the SoftMax function returns the predicted probabilities and the class estimate $\hat{y}$ from the distances to all prototypes. 1. An ID sample is embedded into $\tau_{\mathrm{ID}}$ (by definition). 2. An ID sample exhibits a small reconstruction error. 
Under the given assumptions, OOD samples should never fulfill both conditions. Our aim is to model a distribution of data that is representative for a set of prototypes. This means that different classes or parts of classes can be assigned to different sub-distributions during training, thus potentially increasing data diversity and simplifying the OOD detection. A distance metric space is learned where similar samples are in close proximity to each other. Similar to [6], we use an encoder $f_{\psi}$, a decoder $g_{\theta}$, and prototypes $\phi_{kj} \in \mathbb{R}^{L}$ in an end-to-end trainable fashion (see Figure 1). The rows of the matrix $\pmb{\Phi}_{k} \in \mathbb{R}^{J \times L}$ describe the $J$ prototype vectors of class $k$ out of $K$ classes. Given a training dataset $\mathcal{D} = \{(\pmb{x}^{1}, (\pmb{x}^{1}, y^{1})), \ldots, (\pmb{x}^{N}, (\pmb{x}^{N}, y^{N}))\}$ with $N$ labeled samples, the input $\pmb{x}^{i}$ itself yields the target variable for reconstruction alongside a class label $y^{i}$. The model is trained as a VAE learning a Gaussian mixture distribution, where the encoder embeds the input $\pmb{x}^{i}$ to a posterior Gaussian distribution $p(z|\pmb{x}^{i}) = \mathcal{N}(z; \pmb{\mu}^{i}, \mathrm{diag}((\pmb{\sigma}^{i})^{2}))$ in the latent domain. During training, a latent representation $z^{i}$ is sampled, whereas during inference, the mean value is used as the latent representation that the decoder processes into the image space reconstruction $\hat{\pmb{x}}^{i}$. 
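The encoding step above can be sketched in a few lines of PyTorch. This is a minimal illustration of the described behavior (sample from the posterior during training, use the mean at inference), not the authors' implementation; the function name and signature are our own:

```python
import torch

def encode_and_sample(mu, log_var, training=True):
    """Latent code from the posterior N(mu, diag(sigma^2)).

    During training a sample is drawn via the reparameterization trick;
    at inference the posterior mean is used, as described in the text.
    """
    if training:
        sigma = torch.exp(0.5 * log_var)  # log-variance parameterization
        eps = torch.randn_like(sigma)     # eps ~ N(0, I)
        return mu + sigma * eps           # z = mu + sigma * eps
    return mu
```

The log-variance parameterization keeps the predicted variance positive without an explicit constraint, which is a common choice for VAE encoders.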
For classification, the Euclidean distances of the latent variable to all prototypes are computed (Equation (1)) and the minimum distance of each class yields the closest prototype. It is important to minimize the distance of an embedding to only one prototype distribution during training. The distances are transformed into logits by the generalized Gaussian distribution for the enclosing restriction and are fed into a SoftMax function to obtain a purely distance-based, latent space classification without a learnable classifier. $$ \begin{array}{c} d(z^{i}, \phi_{kj}) = d_{kj}^{i} = \| z^{i} - \phi_{kj} \|_{2} \\ P_{\psi}(y = k | \pmb{x}^{i}) = \frac{\exp\left( l_{k}^{i} \right)}{\sum_{k'=1}^{K} \exp\left( l_{k'}^{i} \right)} \ , \ l_{k'}^{i} = -\left( \frac{d_{k' j^{*}(k')}^{i}}{\alpha} \right)^{\beta} \\ j^{*}(k) = \mathrm{argmin}_{j}(d_{kj}^{i}) \end{array} $$ The original ProtoVAE architecture uses a linear classifier and distance-based similarity scores [6]. Similarity scores exhibit large gradients for embeddings close to a prototype, which potentially leads to embeddings collapsing into the respective prototype position, and thus to a degradation of reconstruction quality when different embeddings are not encoded differently. As a remedy, ProtoDistVAE uses an enclosing restriction leading to weaker gradients close to prototypes. Embeddings shall be trapped in a certain ID region, but inside it, the coding of embeddings shall be unconstrained. For this reason, generalized Gaussian distributions are used in the classification layer, where $\alpha$ defines the width of the distribution and $\beta \geq 2$ controls the shape and "enclosedness" of the distribution. 
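Equations (1)-(3) can be written as a short PyTorch sketch. The tensor layout (one `(K, J, L)` prototype tensor) and the function name are our own assumptions for illustration:

```python
import torch

def distance_logits(z, prototypes, alpha=2.0, beta=2.0):
    """Distance-based class probabilities following Eqs. (1)-(3).

    z:          (B, L) latent embeddings
    prototypes: (K, J, L), J prototypes per class
    Returns (probs, min_d) with min_d[b, k] the distance to the
    nearest prototype of class k.
    """
    # Euclidean distance of each embedding to every prototype: (B, K, J)
    d = torch.linalg.norm(z[:, None, None, :] - prototypes[None], dim=-1)
    # Nearest prototype per class, i.e. j*(k): (B, K)
    min_d = d.min(dim=-1).values
    # Generalized Gaussian logits: l_k = -(d/alpha)^beta
    logits = -((min_d / alpha) ** beta)
    return torch.softmax(logits, dim=-1), min_d
```

With `beta=2` the logits are ordinary Gaussian log-densities (up to a constant); larger `beta` flattens the distribution near the prototype, which is the "enclosedness" described in the text.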
In order not to distort the distance metric space, ProtoDistVAE uses distances more explicitly for classification. The linear classifier, which essentially calculates a sum of distances, is replaced by using only the minimum distance to the prototypes of each class. These minimum distances are translated into logits $l_{k'}^{i}$ via the framework of generalized Gaussians and into probabilities using the SoftMax function (Equation (2)). Cross-entropy is then applied to the resulting predicted probabilities. $j^{*}(k)$ is the nearest prototype within class $k$, while $d^{*}$ is the vector of minimum distances for every class. Thus, instead of a sum of distances to multiple prototypes, the distance to only one prototype is minimized for a specific embedding. The overall loss consists of a sum of four terms: The cross-entropy loss $\mathcal{L}_{\mathrm{cls}}'$ shown in Equation (4) provides label information to enable the network to extract embeddings useful for discrimination and to minimize the embedding distance to prototypes of the correct class. Each class is modeled by a mixture of $J$ normal distributions centered around the respective class prototypes for VAE-like distribution estimation, and the Kullback-Leibler divergence (KL divergence) w.r.t. the nearest prototype distribution of the correct class is computed to obtain the loss $\mathcal{L}_{\mathrm{KL}}'$ (Equation (5)). The reconstruction loss aims to recover the input samples [6], separating groups of nearby samples for a better reconstruction. We use the LPIPS metric [26] for this task as it gives a more robust similarity between images than traditional metrics such as mean squared error (MSE) by using a calibrated pre-trained network aligned with human perception [26]. 
In order to prevent the collapse of prototypes of a class, an orthonormalization loss $\mathcal{L}_{\mathrm{orth}}$ (Equation (7)) is used to encourage the prototypes within a class (after subtracting their mean $\bar{\phi}_{k}$) to be orthonormal to each other [6]. It is defined as the average of the class-wise squared Frobenius norms $\|\cdot\|_{F}^{2}$. $$ \begin{array}{rl} & \mathcal{L}_{\mathrm{cls}}'(\psi, \pmb{\Phi}; \pmb{x}^{i}, k = y^{i}) = -\log P_{\psi}(y = k | \pmb{x}^{i}) \\ & \mathcal{L}_{\mathrm{KL}}'(\psi, \pmb{\Phi}_{k}; \pmb{x}^{i}, k = y^{i}) = D_{KL}\big( \mathcal{N}(\pmb{\mu}^{i}, \mathrm{diag}((\pmb{\sigma}^{i})^{2})) \,\|\, \mathcal{N}(\phi_{k j^{*}(k)}, \pmb{I}_{L}) \big) \\ & \mathcal{L}_{\mathrm{rec}}'(\psi, \pmb{\theta}; \pmb{x}^{i}, \hat{\pmb{x}}^{i}) = e_{\mathrm{LPIPS}}(\pmb{x}^{i}, \hat{\pmb{x}}^{i}) \\ & \mathcal{L}_{\mathrm{orth}}(\pmb{\Phi}) = \frac{1}{K} \sum_{k=1}^{K} \| \tilde{\pmb{\Phi}}_{k} \tilde{\pmb{\Phi}}_{k}^{T} - \pmb{I}_{J} \|_{F}^{2} \ , \ \tilde{\pmb{\Phi}}_{k} = (\phi_{kj} - \bar{\phi}_{k})_{j=1..J} \end{array} $$ In summary, ProtoDistVAE introduces LPIPS [26] as the reconstruction loss and replaces the linear classifier layer as well as the similarity scores by direct minimum distances and the framework of generalized Gaussians to implement an enclosing restriction. 
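Two of the loss terms have simple closed forms that can be sketched directly: the KL divergence of Equation (5) between a diagonal Gaussian and a unit-covariance Gaussian centered at a prototype, and the orthonormalization loss of Equation (7). The following PyTorch sketch uses our own function names and a `(K, J, L)` prototype tensor; it is an illustration of the formulas, not the authors' code:

```python
import torch

def kl_to_prototype(mu, log_var, proto):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(proto, I) ), Eq. (5)."""
    var = torch.exp(log_var)
    return 0.5 * torch.sum(var + (mu - proto) ** 2 - 1.0 - log_var, dim=-1)

def orth_loss(prototypes):
    """Orthonormalization loss, Eq. (7): the mean-subtracted prototypes
    of each class should form an orthonormal set."""
    K, J, L = prototypes.shape
    centered = prototypes - prototypes.mean(dim=1, keepdim=True)  # subtract class mean
    gram = centered @ centered.transpose(1, 2)                    # (K, J, J) Gram matrices
    eye = torch.eye(J, device=prototypes.device).expand(K, J, J)
    # mean over classes of the squared Frobenius norm || G - I ||_F^2
    return ((gram - eye) ** 2).sum(dim=(1, 2)).mean()
```

The KL term vanishes exactly when the posterior matches the prototype distribution (mean at the prototype, unit variance), which is the behavior the training objective pulls correct-class embeddings towards.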
The complete loss function is: $$ \mathcal{L} = w_{\mathrm{cls}} \mathcal{L}_{\mathrm{cls}} + w_{\mathrm{KL}} \mathcal{L}_{\mathrm{KL}} + w_{\mathrm{rec}} \mathcal{L}_{\mathrm{rec}} + w_{\mathrm{orth}} \mathcal{L}_{\mathrm{orth}} $$ For OOD detection, a distance-based OOD score and the LPIPS reconstruction error are merged. During experimentation, we found that the minimum distance to the nearest prototype can be improved upon by using the MSP score $\lambda_{\mathrm{MSP}} = \max_{k} P_{\psi}(y = k | \pmb{x}^{i})$ in the ProtoDistVAE context, which is the probability that an embedding belongs to the most likely generalized Gaussian under the condition that it is ID. As ProtoDistVAE relies on distances for classification, MSP is also distance-based. Additionally, the distance ratio $\lambda_{\mathrm{DistRatio}} = \sum_{j} d_{\widehat{k} j} / (\sum_{k} \sum_{j} d_{kj})$ is applied, where $\widehat{k}$ indicates the predicted class. We assume these scores perform better than the minimum distance because the class distribution in the latent space might be skewed and OOD samples are embedded between different class regions. For the fusion of scores, one distance score and one reconstruction error are normalized w.r.t. their validation set distributions to make them comparable, using a lower and an upper percentile of the score distribution to obtain the normalized score $\widetilde{\lambda}(\pmb{x}) = (\lambda(\pmb{x}) - \lambda_{\mathrm{lower}}) / (\lambda_{\mathrm{upper}} - \lambda_{\mathrm{lower}})$. 
Both score types are combined into one score using the $L_{2}$ or $L_{\infty}$ norm: $\lambda_{L_{p}}(\pmb{x}) = \|(\widetilde{\lambda}_{1}(\pmb{x}), \widetilde{\lambda}_{2}(\pmb{x}))^{T}\|_{p}$ where $p$ denotes the order of the norm. The $L_{\infty}$ norm tends to reflect a hard decision (e.g. at least one score is above its threshold), the $L_{2}$ norm a flexible decision (one score is too high, or both together are rather high and therefore indicate an OOD sample). This type of fusion means that no probabilities need to be modeled explicitly and thus avoids any need for modeling assumptions. # 4 Experimental Results For numerical evaluation, we compare our approach to the state of the art based on the OpenOOD benchmark [24] and a non-public dataset from the railway domain (DBS dataset). A general advantage of the proposed method is that it allows human insights into the training distribution and the decision-making of the network by reconstructing samples, prototypes, and distances in the latent space, which supports its usage in safety-critical domains. General Experimental Setup The OpenOOD benchmark provides implementations of state-of-the-art approaches for comparison and defines sub-benchmarks according to the ID datasets MNIST, CIFAR10, CIFAR100, and ImageNet. Another dataset is then used as OOD data. Datasets are labeled as near OOD or far OOD according to their similarity to the ID data, e.g. whether they have similar color distributions. Open Set Recognition (OSR) is also provided by partitioning a dataset into ID and OOD classes. The M-6 benchmark is based on MNIST, C-6 on CIFAR-10, C-50 on CIFAR-100, and T-20 on TinyImageNet, with the numeral representing the number of ID classes. The DBS dataset was collected from video recordings of a camera mounted on a commuter train in typical operation. Object proposals were automatically collected and classified into trains and persons. 
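The percentile normalization and the $L_p$ fusion described above can be sketched with NumPy. Percentile values and function names are our own illustrative choices; the paper does not state which percentiles are used:

```python
import numpy as np

def normalize_score(scores, val_scores, lower_pct=1.0, upper_pct=99.0):
    """Normalize an OOD score w.r.t. the ID validation score distribution,
    using a lower and an upper percentile (values here are assumptions)."""
    lo = np.percentile(val_scores, lower_pct)
    hi = np.percentile(val_scores, upper_pct)
    return (scores - lo) / (hi - lo)

def fuse_scores(s1, s2, p=np.inf):
    """Fuse two normalized scores via an Lp norm (p = 2 or p = inf)."""
    stacked = np.stack([s1, s2], axis=0)
    return np.linalg.norm(stacked, ord=p, axis=0)
```

With `p=np.inf` the fused score equals the larger of the two normalized scores (the "hard" decision from the text); with `p=2` both scores contribute jointly.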
The annotations were manually checked and OOD samples (i.e. false positive detections) were placed in a separate category. In our evaluation, we used 8351 samples of people, 8340 samples of trains, and 5001 non-objects labeled as OOD, all rescaled to size $64 \times 64$. Person and train samples were each divided into training (60%), validation (10%), and test (30%) splits (OOD samples were used only for testing). We use $J = 1$ prototype per class in all experiments as a higher number did not improve the performance. Table 1: OOD detection performance (AUROC in $\%$) on the OpenOOD benchmark and CIFAR-100 ID accuracy ($\%$) for different approaches: Best performances marked in bold. Results of the other methods taken from [24]. The generalized Gaussian parameters $\alpha$ and $\beta$ were both set to 2 for all experiments. The encoder was chosen as ResNet-50 [9] for ImageNet and as ResNet-18 for all benchmarks with $64 \times 64$ sized images (including the DBS dataset) and $32 \times 32$ sized images. A convolutional encoder with five layers was used for all $28 \times 28$ sized images; for the decoder, a five-layered network using subpixel convolutions [18] is used. For ImageNet the decoder consists of seven layers, and for all other benchmarks it consists of six layers. The latent dimensionality $L$ is chosen as $1/3$, $1/24$ or $1/96$ of the input dimensionality. After training, ID validation data were used for the normalization of the OOD scores, which are then used for score fusion during testing. For evaluation, ID classification performance is measured in accuracy and OOD detection performance in Area Under the Receiver Operating Characteristic (AUROC). AUROC is a threshold-independent metric and measures how well a score separates ID and OOD. # 4.1 OOD Detection Performance Table 1 shows the OOD detection performance in terms of AUROC compared to state-of-the-art methods. 
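AUROC can be computed without fixing any threshold as the probability that a randomly chosen OOD sample receives a higher OOD score than a randomly chosen ID sample (ties counting half). A minimal pairwise sketch, quadratic in the number of samples and meant for illustration only:

```python
import numpy as np

def auroc(id_scores, ood_scores):
    """AUROC of an OOD score: P(score(OOD) > score(ID)), ties count 0.5.

    Equivalent to the area under the ROC curve when OOD is the
    positive class and higher scores indicate OOD.
    """
    id_s = np.asarray(id_scores, dtype=float)[:, None]
    ood_s = np.asarray(ood_scores, dtype=float)[None, :]
    greater = (ood_s > id_s).mean()  # fraction of correctly ordered pairs
    ties = (ood_s == id_s).mean()    # tied pairs contribute half
    return greater + 0.5 * ties
```

Perfect separation yields 1.0 and a completely uninformative score yields 0.5, matching the interpretation of the AUROC values reported in Table 1.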
ProtoDistVAE was trained using only the LPIPS reconstruction loss with weight $w_{\mathrm{rec}} = 1$. Cross-entropy and KL divergence loss were weighted similarly with $w_{\mathrm{cls}} = w_{\mathrm{KL}} = 1$. The distance ratio $\lambda_{\mathrm{DistRatio}}$ and LPIPS $\lambda_{\mathrm{LPIPS}}$ were used as scores, fused by the $L_{\infty}$ norm. The latent space dimensionality $L$ was chosen as $1/24$ of the input dimensionality. Compared to the other methods, ProtoDistVAE performs best on the MNIST-based benchmarks. This is likely due to MNIST's low diversity, making it easier to learn a latent distribution. For CIFAR10, ProtoDistVAE performs on par with other methods. However, the performance for highly diverse datasets with a large number of classes decreases, as ID estimation and classification are performed in the same latent space and may impair each other. Similarly, higher resolutions lead to difficulties for ProtoDistVAE in detecting OOD samples, likely due to the increased complexity of reconstruction. Fig. 2: UMAP visualization of the latent space embeddings of trained ProtoDistVAEs. (a) On MNIST, color-coded classes are clearly separated. (b) On CIFAR10, clusters blend into each other. (c) ID (CIFAR10) versus OOD (CIFAR100): embeddings of OOD samples appear mainly between class prototypes. Figure 2 shows further insights through a Uniform Manifold Approximation and Projection (UMAP) visualization of the latent space and illustrates how our method allows understanding its decision-making. The method works best on clearly separable datasets and performs worse if the data cannot be attributed well to the extracted clusters. However, it should be mentioned that CIFAR10 vs. CIFAR100 is generally a hard OOD benchmark. 
ID samples in the space between prototypes might be interesting for further analysis since they exhibit a higher uncertainty and could be exploited by active learning or for identifying samples with very different attributes within a class. Table 2a shows results on the DBS dataset. Here, an increased weight on LPIPS ($w_{\mathrm{rec}} = 100$) was used to improve the OOD detection performance without harming classification accuracy. The accuracy is on par with other methods, likely because only two classes are available. For OOD detection, PixMix and ProtoDistVAE perform best, while VIM and KNN also show good results. Combining $\lambda_{\mathrm{LPIPS}}$ with $\lambda_{\mathrm{MSP}}$ further improves the results with a gain of $0.9\%$. ProtoDistVAE performs well on the DBS dataset due to its composition. The data samples are often quite similar, as trains and persons are captured from the same angles and there is little variation in e.g. perspective, weather, lighting, and color. In comparison, ImageNet contains more inconsistent data with more diverse appearances within the same class. ProtoDistVAE benefits from a reduced intra-class variance and a "complete" data distribution, which allows it to model the data more easily. Hypothetically, it is easier for the network to recognize systematics in the data. PixMix augmentation seems to benefit from a complete distribution and even further increases the diversity of the data. However, the data distribution is not represented in the model and the classification is not transparent. Other methods perform worse: Ensembling shows a lower-than-usual performance as it depends on variations in the predictions of the individual networks, and these variations are weaker due to the low data diversity in this dataset. Methods depending purely on classification-based schemes might suffer from overconfidence due to easier classification across only two classes and low data diversity. 
ProtoDistVAE, however, does not overfit for classification and aims to learn a representation of the data. In addition, the reconstruction error helps it to identify overconfidently classified samples mapped into its ID region. Table 2: Experimental results of OOD detection in AUROC ($\%$) and ID accuracy ($\%$): (a) DBS dataset results of state-of-the-art methods (parameterized as in [24]) compared to ProtoDistVAE with the LPIPS score combined by $L_{\infty}$ fusion with DistRatio and MSP, respectively. (b) ProtoVAE vs. ProtoDistVAE. (c) Influence of the reconstruction loss when using LPIPS as OOD score. (a) DBS dataset (c) OpenOOD benchmark (partial): reconstruction loss (b) OpenOOD benchmark: ProtoVAE vs. ProtoDistVAE using the MSP score # 4.2 Ablation Study: ProtoVAE vs. ProtoDistVAE For comparing the proposed ProtoDistVAE architecture to the base ProtoVAE, the reconstruction loss was kept at a constant level. According to the observed data, this does not change reconstruction-error-based OOD detection. Table 2b shows detection results for ProtoVAE and ProtoDistVAE using the distance-based MSP score based on the predicted probabilities. Note that an improved distance-based score potentially increases performance even further when fused with a reconstruction error score. ProtoDistVAE outperforms ProtoVAE in almost all benchmarks for OOD detection and for different values of the latent dimension $L$, which can be explained by the direct use of distances for classification and the enclosing restriction used during training. The latter actively shapes the ID region by trapping the ID embeddings in the proximity of the class-specific prototypes. Furthermore, the results display the importance of the latent dimensionality $L$ for both networks. Different values of $L$ are optimal for the different levels of complexity reflected in the different datasets. 
Too low values reduce the information coded in the representation, while too high values inhibit a clear assignment of samples to class prototypes. Fig. 3: Comparison of MSE and LPIPS loss: CIFAR10 (ID) and FashionMNIST (OOD). From top to bottom: Input, reconstruction (MSE), and reconstruction (LPIPS). ($L = 32$) # 4.3 Reconstruction Table 2c shows the OOD detection performance using the LPIPS score based on ProtoDistVAE trained with either MSE or LPIPS loss. In contrast to the MSE score, which showed a generally lower performance (results not shown), the LPIPS score can achieve good detection results, even when training with the MSE reconstruction loss. However, using LPIPS as reconstruction loss outperforms the MSE loss. A special case is the ImageNet benchmark, which differs in image size and data diversity. The reconstruction performance for MSE and LPIPS loss on the CIFAR10 benchmark is depicted in Figure 3. ProtoDistVAE trained with MSE shows significant blur, regardless of ID or OOD samples. Training with LPIPS helps to preserve more semantic information and leads to visible differences when reconstructing OOD samples. Figure 4 displays reconstructions of the DBS dataset. ProtoDistVAE appears to have learned the data distribution and can reconstruct ID better than OOD samples in most cases. It successfully distinguishes the class distributions of persons and trains and can show the features associated with a certain sample. For example, images of train stations and regular structures are often associated with trains, whereas background images are often reconstructed into person-like images. The learned prototypes of ProtoDistVAE can also be reconstructed. As Figure 5 shows, prototypes can be better extracted from low-variance datasets like MNIST and the DBS dataset, while for datasets with higher diversity like CIFAR10, prototypes are harder to extract and the images are less expressive. 
Human observers can thus assess which properties the network extracted from the data and evaluate features associated across classes.
Understanding the decision-making and trusting the reliability of deep machine learning models is crucial for adopting such methods in safety-relevant applications. We extend self-explainable prototypical variational models with autoencoder-based out-of-distribution (OOD) detection: A Variational Autoencoder is applied to learn a meaningful latent space which can be used for distance-based classification, likelihood estimation for OOD detection, and reconstruction. The In-Distribution (ID) region is defined by a Gaussian mixture distribution with learned prototypes representing the center of each mode. Furthermore, a novel restriction loss is introduced that promotes a compact ID region in the latent space without collapsing it into single points. The reconstructive capabilities of the autoencoder ensure the explainability of the prototypes and the ID region of the classifier, further aiding the discrimination of OOD samples. Extensive evaluations on common OOD detection benchmarks as well as a large-scale dataset from a real-world railway application demonstrate the usefulness of the approach, outperforming previous methods.
# 1 INTRODUCTION Facial micro-expression recognition (MER) is a popular task in the fields of computer vision and affective computing [1]. It has applications in wide-ranging areas such as medicine, education, and criminal investigation. Micro-expressions (MEs) are subtle and involuntary facial movements that convey genuine emotions [2] and contribute to recognizing the mental condition or deception of humans. Different from macro-expressions [3], [4], MEs are fine-grained and last only for a very short interval of time, i.e. not more than 500 milliseconds [5]. Manuscript received April, 2023. (Corresponding authors: Yifan Cheng, Yong Zhou, and Lizhuang Ma.) Fig. 1. Illustration of optical flow and facial landmark differences between two consecutive frames ${\bf I}_{k}$ and ${\bf I}_{k+1}$. We use a color coding to visualize the optical flow, in which the color of each point denotes its displacement, including orientation and magnitude, relative to the central point. Although the facial subtle muscle actions from ${\bf I}_{k}$ to ${\bf I}_{k+1}$ are hard to perceive with human eyes, they are reflected in optical flow and facial landmark differences. In the literature, MER remains a challenging problem due to the short duration, subtlety, and small-scale, low-diversity datasets of MEs. One typical way is to extract hand-crafted features containing correlated ME information. Typical hand-crafted features include optical flow and the histogram of oriented optical flow (HOOF) [6] with motion patterns, local binary patterns from three orthogonal planes (LBP-TOP) [7] with spatio-temporal information, and the histogram of oriented gradients (HOG) [8] and histogram of image gradient orientation (HIGO) [9] with local contrast information. However, these features have limited robustness on challenging MEs with short-duration and inconspicuous motions. 
Besides, key frames such as the onset, apex, and offset frames of MEs are sometimes required in feature extraction [10]. Another popular solution involves the use of prevailing deep neural networks. Khor et al. [11] first combined the optical flow, the derivatives of the optical flow, and the raw images as input, then used a convolutional neural network (CNN) to extract the feature of each frame and long short-term memory (LSTM) modules to learn the temporal dynamics. However, this method relies on pre-extracted optical flow. Reddy et al. [12] adopted a 3D CNN to extract features from both the spatial and temporal domains, whose performance is limited by insufficient training samples. Xia et al. [13] employed macro-expression recognition as an auxiliary task, in which a macro-expression recognition network guides the fine-tuning of the MER network in both label and feature space. However, fine-grained information is not explicitly emphasized in this method. The above methods suffer from the limited capacity of hand-crafted features, the requirement of key frames, or fail to thoroughly exploit the feature learning ability of deep networks due to insufficient training data. To tackle these limitations, we propose to integrate automatic feature learning from the raw frame sequence, the capture of facial motion information, and the localization of facial fine-grained characteristics into an end-to-end framework. Since the prevailing multi-task learning technique is convenient for guiding and assisting the training of the main task, we design a novel micro-action-aware deep learning framework called MOL that jointly models MER, optical flow estimation, and facial landmark detection via transformer-graph-style convolution. As illustrated in Fig. 1, the two latter tasks are beneficial for capturing the facial subtle muscle actions associated with MEs, which relaxes the requirement of large-scale training data. 
Moreover, we propose a novel F5C block to directly extract local-global features from raw images, which combines our proposed fully-connected convolution and channel correspondence convolution. The transformer-style fully-connected convolution can extract local features while maintaining global receptive fields, and the graph-style channel correspondence convolution can model the correlations among feature map channels. Finally, we feed a sequence of pair features, each composed of the local-global features of two consecutive frames, into a 3D CNN to achieve MER. The use of pair features rather than frame features contributes to preserving each sub-action clip, and can also be regarded as a sliding window mechanism. The entire framework is end-to-end without any post-processing operation, and all the modules are optimized jointly. The contributions of this paper are threefold:

- We propose a micro-action-aware joint learning framework of MER, optical flow estimation, and facial landmark detection, in which pre-extracted features as well as prior knowledge of key frames are not required. To our knowledge, joint modeling of automatic ME feature learning from raw frame sequences, facial motion information capturing, and facial fine-grained characteristic localization via deep neural networks has not been done before.
- We propose a new local-global feature extractor named F5C, composed of fully-connected convolution and channel correspondence convolution, which integrates the advantages of the transformer, graph convolution, and vanilla convolution.
- Extensive experiments on benchmark datasets show that our method outperforms the state-of-the-art MER approaches, achieves competitive performance for both optical flow estimation and facial landmark detection, and can capture facial subtle muscle actions in local regions related to MEs.
# 2 RELATED WORK We review the previous works that are closely related to our method, including hand-crafted feature based MER, deep learning based MER, and MER with a combination of hand-crafted features and deep learning.

# 2.1 Hand-Crafted Feature Based MER Earlier works propose hand-crafted features to try to capture fine-scale ME details. LBP-TOP [7] is a typical hand-crafted feature, which combines temporal information with spatial information from three orthogonal planes. Later, Ben et al. [14] employed hot wheel patterns from three orthogonal planes (HWP-TOP) to make the most of the directional information. Besides, Wang et al. [15] proposed local binary patterns with six intersection points (LBP-SIP) to avoid repeated coding in LBP-TOP. Another widely used feature is the histogram of oriented gradients (HOG) [8], which computes gradients of image pixels. A histogram of image gradient orientation (HIGO) [9] feature was further proposed, which can maintain invariance to geometric and optical transformations of images. Optical flow describes the action pattern of each pixel from one frame to another, which is highly related to MEs. Happy et al. [16] improved the histogram of oriented optical flow (HOOF) [6] as FHOOF by collecting the action directions into angular bins based on a fuzzy membership function, and also extended FHOOF to the fuzzy histogram of optical flow orientations (FHOFO) by ignoring the action magnitude in computation. Liong et al. [10] introduced bi-weighted oriented optical flow (Bi-WOOF) to encode the essential expressiveness of the apex frame in ME videos. However, the extraction process of hand-crafted features often discards important information, in which the characteristics of subtle and diverse MEs are hard to model. Besides, key frames of MEs are often required, which limits their applicability.

# 2.2 Deep Learning Based MER Recently, the prevailing deep learning technique has been applied to MER. Reddy et al.
[12] employed a 3D CNN to achieve MER, which extracts spatial and temporal information from raw image sequences. Lei et al. [17] extracted shape representations based on facial landmarks, and then adopted a graph-temporal convolutional network (Graph-TCN) to capture the local muscle actions of MEs. Wei et al. [18] proposed an attention-based magnification-adaptive network (AMAN), in which a magnification attention module is used to focus on appropriate magnification levels of different MEs, and a frame attention module is used to focus on discriminative frames in an ME video.

Fig. 2. The architecture of our MOL framework. Given a sequence of $t$ frames $\{\mathbf{I}_0, \mathbf{I}_1, \cdots, \mathbf{I}_{t-1}\}$, MOL first extracts the rich feature $\mathbf{F}_k^{(r)}$ of each frame $\mathbf{I}_k$ by a stack of vanilla convolutional layers. For each pair of consecutive frames $\{\mathbf{I}_k, \mathbf{I}_{k+1}\}$, $\mathbf{F}_k^{(r)}$ and $\mathbf{F}_{k+1}^{(r)}$ are then fed into the same F5C block to extract local-global features $\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$, respectively. Afterwards, $\mathbf{F}_{k+1}^{(g)}$ is fed into a facial landmark detection module to predict the facial landmark locations $\hat{\mathbf{l}}_{k+1}$ of the frame $\mathbf{I}_{k+1}$, while $\mathbf{F}_k^{(g)}$, $\mathbf{F}_{k+1}^{(g)}$, $\mathbf{I}_k$, and $\mathbf{I}_{k+1}$ are simultaneously fed into an optical flow estimation module to predict the optical flow $\hat{\mathbf{O}}_k$, including its horizontal and vertical components.
$\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$ are further concatenated to be $\mathbf{F}_k^{(c)}$ as the feature of the $k$-th pair. Finally, the sequence of $t-1$ pair features $\{\mathbf{F}_0^{(c)}, \mathbf{F}_1^{(c)}, \cdots, \mathbf{F}_{t-2}^{(c)}\}$ is fed into an MER module to predict the ME category.

Besides single-task MER methods, some works incorporate auxiliary tasks correlated with MER into a deep multi-task learning framework. Since action units (AUs) describe facial local muscle actions [19], [20], Xie et al. [21] proposed an AU-assisted graph attention convolutional network (AU-GACN), which uses graph convolutions to model the correlations among AUs so as to facilitate MER. Xia et al. [13] used macro-expression recognition as an auxiliary task, in which the macro-expression recognition network guides the fine-tuning of the MER network in both label and feature space. Different from the above methods, we employ an end-to-end deep framework for joint learning of MER, optical flow estimation, and facial landmark detection.

# 2.3 MER with Combination of Hand-Crafted Feature and Deep Learning Considering that deep networks are limited by small-scale and low-diversity ME datasets, some approaches combine hand-crafted features with a deep learning framework. Verma et al. [22] proposed a dynamic image which preserves the facial action information of a video, and input the dynamic image to a lateral accretive hybrid network (LEARNet). Nie et al. [23] also generated the dynamic image of the input video, and input it to a dual-stream network with the two tasks of MER and gender recognition. Another commonly used hand-crafted feature is optical flow. Zhou et al.
[24] calculated the optical flow between the onset and apex frames of the input ME video, in which its horizontal and vertical components are fed into a dual-inception network to achieve MER. With the same input setting, Shao et al. [25] achieved AU recognition and MER simultaneously, in which AU features are aggregated into ME features. Besides, Hu et al. [26] fused the local Gabor binary pattern from three orthogonal panels (LGBP-TOP) feature and a CNN feature, and then formulated MER as a multi-task classification problem, in which each category classification can be regarded as a one-against-all pairwise classification problem. All these methods require pre-extracted hand-crafted features, in which the representation power of deep networks is not thoroughly exploited. In contrast, our network directly processes raw images, and contains a novel local-global feature extractor. Besides, instead of treating optical flow estimation as a preprocessing step, we put it into a joint framework to guide the capturing of facial subtle motions.

# 3 MOL FOR JOINT ESTIMATION OF MICRO-EXPRESSION, OPTICAL FLOW AND LANDMARK

# 3.1 Overview Given a video clip with $t$ frames $\{\mathbf{I}_0, \mathbf{I}_1, \cdots, \mathbf{I}_{t-1}\}$, our main goal is to design a micro-action-aware deep learning framework to predict the ME category of the overall clip, the facial landmark locations $\{\hat{\mathbf{l}}_1, \hat{\mathbf{l}}_2, \cdots, \hat{\mathbf{l}}_{t-1}\}$ of the last $t-1$ frames, and the optical flow $\{\hat{\mathbf{O}}_0, \hat{\mathbf{O}}_1, \cdots, \hat{\mathbf{O}}_{t-2}\}$ of the $t-1$ consecutive frame pairs $\{(\mathbf{I}_0, \mathbf{I}_1), (\mathbf{I}_1, \mathbf{I}_2), \cdots, (\mathbf{I}_{t-2}, \mathbf{I}_{t-1})\}$.
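To make the three prediction targets concrete, the following minimal sketch spells out the shapes of the joint outputs for a clip of $t$ frames. The helper name `mol_output_shapes` and its arguments are our own illustrative assumptions, not part of the MOL implementation:

```python
import numpy as np

def mol_output_shapes(t, n_classes, m, h, w):
    """Hypothetical illustration of MOL's joint outputs for a t-frame clip:
    one ME class distribution per clip, landmark locations for the last t-1
    frames, and optical flow (horizontal + vertical) for t-1 frame pairs."""
    me_probs = np.full(n_classes, 1.0 / n_classes)  # one prediction per clip
    landmarks = np.zeros((t - 1, 2 * m))            # (x, y) pairs of m landmarks
    flows = np.zeros((t - 1, 2, h, w))              # per consecutive frame pair
    return me_probs, landmarks, flows
```

With $t=8$ and $m=68$ landmarks, this yields one class distribution, 7 landmark vectors of 136 coordinates, and 7 two-channel flow fields.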
We choose to directly process raw video clips without depending on hand-crafted features, and discard additional limitations like prior knowledge of the onset and apex frames. Fig. 2 illustrates the overall structure of our MOL framework. A stack of vanilla convolutional layers is first used to extract the rich feature $\mathbf{F}_k^{(r)}$ of the $k$-th frame $\mathbf{I}_k$ in the input video. TABLE 1 shows the detailed architecture of this module. Then, for each pair of consecutive frames $\{\mathbf{I}_k, \mathbf{I}_{k+1}\}$, an F5C block is used to learn local-global features $\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$, respectively. The local-global features are shared by three tasks for joint learning, in which optical flow estimation and facial landmark detection, as auxiliary tasks, are devised for promoting the main MER task in the temporal and spatial domains, respectively.

TABLE 1 The structure of the stack of vanilla convolutional layers for extracting rich features. $C_{in}$ and $C_{out}$ denote the number of input channels and output channels, respectively.

To estimate the optical flow $\hat{\mathbf{O}}_k$ between $\mathbf{I}_k$ and $\mathbf{I}_{k+1}$, we simultaneously feed $\mathbf{I}_k$, $\mathbf{I}_{k+1}$, $\mathbf{F}_k^{(g)}$, and $\mathbf{F}_{k+1}^{(g)}$ into an optical flow estimation module. To predict the landmark locations $\hat{\mathbf{l}}_{k+1}$ of $\mathbf{I}_{k+1}$, we input $\mathbf{F}_{k+1}^{(g)}$ to a landmark detection module.
Finally, we feed a sequence of $t-1$ pair features $\{\mathbf{F}_0^{(c)}, \mathbf{F}_1^{(c)}, \cdots, \mathbf{F}_{t-2}^{(c)}\}$ into a 3D CNN to predict the ME category of the whole video clip, in which $\mathbf{F}_k^{(c)}$ is the concatenation of $\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$. This use of pair features rather than frame features is beneficial for preserving each sub-action clip.

# 3.2 F5C Block The architecture of our proposed F5C block is shown in the upper part of Fig. 2. We name this block F5C because it consists of two main operations, fully-connected convolution (FCC) and channel correspondence convolution (CCC). FCC is developed from the conventional circular convolution [27] by integrating the style of the prevailing transformer [28]: it can gather local information from local receptive fields like convolutions and extract global information from the entire set of spatial locations like self-attention [28]. CCC is designed to model the correlations among feature map channels in the manner of graph convolution [29]. Two residual structures [30] along with FCC and CCC are beneficial for mitigating the vanishing gradient problem. The design of F5C integrates the merits of the transformer, graph convolution, and vanilla convolution.

# 3.2.1 Fully-Connected Convolution It is known that vanilla convolution works well in extracting local features. We propose to enhance its ability to extract global features in three ways. First, similar to transformers [28], [31], we treat each column (in the vertical direction) or each row (in the horizontal direction) of the input as a patch, and apply positional embedding to patches to perceive contextual information. Second, we conduct circular convolution on each patch via a fully-connected operation to enlarge the receptive field.
Third, we perform operations in both vertical and horizontal directions to cover regions more completely. This structure is named transformer-style fully-connected convolution.

Fig. 3. The structure of our proposed transformer-style fully-connected convolution. An input feature map $\mathbf{X}$ with a size of $C \times H \times W$ is first processed by a vanilla $1 \times 1$ convolution, and further goes through two branches, in which the first branch consists of FCC-V and FCC-H in order while the second branch uses the reverse order. Then, the outputs of the two branches are concatenated and processed by a $1 \times 1$ convolution to obtain the final output $\mathbf{Y}$ with the same size as $\mathbf{X}$.

As shown in Fig. 3, an FCC is composed of two main components, FCC-V in the vertical direction and FCC-H in the horizontal direction. It uses two branches, FCC-H after FCC-V and FCC-V after FCC-H, and then fuses the two outputs by concatenation and a vanilla $1 \times 1$ convolution. In this way, the receptive field of FCC can cover positions in both vertical and horizontal directions so as to extract complete local-global features. Specifically, given an input $\mathbf{X} \in \mathbb{R}^{C \times H \times W}$, we conduct the $1 \times 1$ convolution as a preprocessing step. In FCC-V, we first employ a positional embedding [28] to make it aware of the position information: $$ \mathbf{X}^{(v)} = \mathbf{X} \oplus^{v} \mathbf{P}^{(v)}, $$ where $\mathbf{P}^{(v)} \in \mathbb{R}^{C \times H}$ denotes the positional embedding, and $\oplus^{v}$ denotes the element-wise sum operation, in which $\mathbf{P}^{(v)}$ is expanded $W$ times along the horizontal direction so as to match the size of $\mathbf{X}$.
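As a minimal NumPy sketch (our own simplification; the actual module is a learned layer), FCC-V amounts to broadcasting the positional embedding over the width and then applying a per-channel circular convolution over the $H$ vertical positions:

```python
import numpy as np

def fcc_v(X, P_v, U_v):
    """Sketch of FCC-V. X: C x H x W input; P_v, U_v: C x H positional
    embedding and learnable weights. Returns Y with
    Y[c, i, j] = sum_s U_v[c, s] * (X + P_v)[c, (i + s) % H, j]."""
    Xv = X + P_v[:, :, None]          # broadcast P_v W times horizontally
    Y = np.zeros_like(Xv)
    for s in range(X.shape[1]):       # circular shift implements (i + s) % H
        Y += U_v[:, s, None, None] * np.roll(Xv, -s, axis=1)
    return Y
```

FCC-H is symmetric, operating over the $W$ horizontal positions with $C \times W$ parameters.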
Then, the output $\mathbf{Y}^{(v)} \in \mathbb{R}^{C \times H \times W}$ at element $(c, i, j)$ is defined as $$ Y_{c,i,j}^{(v)} = \sum_{s=0}^{H-1} U_{c,s}^{(v)} X_{c,(i+s)\%H,j}^{(v)}, $$ where $\%$ denotes the remainder operation, and $\mathbf{U}^{(v)} \in \mathbb{R}^{C \times H}$ is a learnable parameter. The elements of $\mathbf{X}$ in the vertical direction are fully connected in a circular manner, so we name this process fully-connected convolution-vertical (FCC-V). We represent Eq. (2) as $\mathbf{Y}^{(v)} = \mathbf{U}^{(v)} \odot^{v} \mathbf{X}^{(v)}$ for simplicity. Similarly, the process of FCC-H can be formulated as $$ \mathbf{X}^{(h)} = \mathbf{X} \oplus^{h} \mathbf{P}^{(h)}, $$ $$ Y_{c,i,j}^{(h)} = \sum_{s=0}^{W-1} U_{c,s}^{(h)} X_{c,i,(j+s)\%W}^{(h)}, $$ where $\mathbf{P}^{(h)} \in \mathbb{R}^{C \times W}$ is the positional embedding, $\oplus^{h}$ denotes the element-wise sum operation that expands $\mathbf{P}^{(h)}$ $H$ times along the vertical direction, $\mathbf{U}^{(h)} \in \mathbb{R}^{C \times W}$ is a learnable parameter, and Eq. (3b) can be represented as $\mathbf{Y}^{(h)} = \mathbf{U}^{(h)} \odot^{h} \mathbf{X}^{(h)}$ for simplicity.

# 3.2.2 Channel Correspondence Convolution Since each feature map channel encodes a type of visual pattern [32], we propose the CCC to reason about the relationships among feature map channels so as to further refine the local-global features extracted by FCC. The process of CCC is illustrated in the upper side of Fig. 2. Inspired by the structure of dynamic graph convolution [33], we first construct a $k$-nearest neighbors ($k$-NN) [34] graph to find similar patterns.
In particular, this directed graph is defined as $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where the vertex set $\mathcal{V} = \{0, 1, \cdots, C-1\}$ contains all the $C$ feature map channels, and the edge set is $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$. The size of the $i$-th feature map channel is $H \times W$, and we reshape it to an $HW$-dimensional vector, denoted $\mathbf{f}_i$, for the convenience of measuring similarity. The neighbors of a vertex are chosen as the feature map channels with the top-$k$ cosine similarities. Given a directed edge $\mathbf{f}_i \leftarrow \mathbf{f}_j$, $\mathbf{f}_j$ is treated as a neighbor of $\mathbf{f}_i$. To obtain the edge feature $\mathbf{e}_{i,j} \in \mathbb{R}^{HW}$, we incorporate the global information encoded by $\mathbf{f}_i$ and the local neighborhood characteristics captured by $\mathbf{f}_j - \mathbf{f}_i$: $$ e_{i,j,s} = \mathcal{R}({\mathbf{v}_s^{(1)}}^{\top} \mathbf{f}_i + {\mathbf{v}_s^{(2)}}^{\top} (\mathbf{f}_j - \mathbf{f}_i)), $$ where $\mathcal{R}(\cdot)$ denotes the rectified linear unit (ReLU) [35] function, $\mathbf{v}_s^{(1)} \in \mathbb{R}^{HW}$ and $\mathbf{v}_s^{(2)} \in \mathbb{R}^{HW}$ are learnable parameters, $\top$ denotes the transpose of a vector, and $e_{i,j,s}$ is the $s$-th element of $\mathbf{e}_{i,j}$. Eq. (4) can be implemented by the convolution operation. Finally, we adopt a maximum aggregation function to capture the most salient features:

Fig. 4. The structure of the optical flow estimation module, which consists of (a) an encoder and (b) a decoder.
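The channel-wise $k$-NN graph, the edge features of Eq. (4), and the maximum aggregation can be sketched as follows. This is a NumPy simplification under our own assumptions; the paper implements Eq. (4) as a convolution:

```python
import numpy as np

def ccc(F, V1, V2, k):
    """Sketch of channel correspondence convolution. F: C x HW, one row per
    flattened feature map channel. V1, V2: HW x HW matrices whose rows play
    the roles of v_s^(1) and v_s^(2) in Eq. (4)."""
    norm = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-8)
    sim = norm @ norm.T                       # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)            # a channel is not its own neighbor
    out = np.zeros_like(F)
    for i in range(F.shape[0]):
        nbrs = np.argsort(sim[i])[-k:]        # top-k most similar channels
        # e_{i,j} = ReLU(V1 f_i + V2 (f_j - f_i)), then max over neighbors
        edges = [np.maximum(V1 @ F[i] + V2 @ (F[j] - F[i]), 0.0) for j in nbrs]
        out[i] = np.max(edges, axis=0)
    return out
```

Each output row is then reshaped back to $H \times W$, matching the plug-and-play input/output size of the block.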
$$ f_{i,s}^{(o)} = \max_{\{j \mid (i,j) \in \mathcal{E}\}} e_{i,j,s}, $$ where $\mathbf{f}_i^{(o)} \in \mathbb{R}^{HW}$ is the output of the $i$-th feature map channel, which is further reshaped to the size of $H \times W$ and then processed by a $1 \times 1$ convolution. With learnable parameters, our proposed CCC can adaptively model the correlations across feature map channels. As shown in Fig. 2 and Fig. 3, the input and output sizes of FCC and CCC, as well as their composed F5C, are all $C \times H \times W$. In this way, our proposed FCC, CCC, and F5C can all be used as plug-and-play modules.

# 3.3 Joint Learning of Tasks

# 3.3.1 Micro-Expression Recognition Since MEs are subtle and short-duration, our method needs to check potential sub-action clips between every two consecutive frames so as to avoid the loss of ME clues. In this case, we concatenate the local-global features $\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$ of each pair of consecutive frames $\{\mathbf{I}_k, \mathbf{I}_{k+1}\}$ to be $\mathbf{F}_k^{(c)}$, and input the sequence $\{\mathbf{F}_0^{(c)}, \mathbf{F}_1^{(c)}, \cdots, \mathbf{F}_{t-2}^{(c)}\}$ to a 3D CNN. This feature fusion strategy can also be regarded as an application of the sliding window mechanism. The use of the 3D max-pooling layer is to reduce the feature dimension while maintaining important information. Considering MER is a classification task, we employ the cross-entropy loss: $$ \mathcal{L}_e = -\sum_{s=0}^{n-1} p_s \log(\hat{p}_s), $$ where $n$ is the number of ME classes, and $\hat{p}_s$ denotes the predicted probability that the sample is in the $s$-th class. $p_s$ denotes the ground-truth probability, which is 1 if the sample is in the $s$-th class and 0 otherwise.
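The pair-feature construction and the cross-entropy loss of the MER branch can be sketched as follows (a NumPy simplification of our own):

```python
import numpy as np

def pair_features(frame_feats):
    """Concatenate the local-global features of consecutive frames: a
    t x C x H x W sequence becomes (t-1) x 2C x H x W pair features,
    i.e. a sliding window of size 2 over the clip."""
    return np.concatenate([frame_feats[:-1], frame_feats[1:]], axis=1)

def cross_entropy(p, p_hat, eps=1e-12):
    """L_e = -sum_s p_s log(p_hat_s), with a one-hot ground-truth vector p."""
    return -np.sum(p * np.log(p_hat + eps))
```

For a clip of 8 frames with 128-channel features, `pair_features` yields 7 pair features of 256 channels each, matching the $t-1$ inputs of the 3D CNN.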
The detailed structure is shown in the lower right corner of Fig. 2. It consists of a 3D convolutional layer and a 3D max-pooling layer, and is followed by an MER classifier with two fully-connected layers. In contrast to a 2D CNN operating in the spatial domain, a 3D CNN uses 3D convolutional kernels to extract features in both spatial and temporal directions.

# 3.3.2 Optical Flow Estimation Since MEs are subtle and low-intensity, it is difficult to extract related features from raw frames. Considering that the optical flow contains the motion information of facial muscles, which is strongly correlated with MEs, we use optical flow estimation as an auxiliary task to facilitate the learning of ME features. The architecture of the optical flow estimation module is detailed in Fig. 4, which is based on FlowNet [36] with an encoder and a decoder. The inputs are two raw consecutive frames $\mathbf{I}_k$ and $\mathbf{I}_{k+1}$, as well as their local-global features $\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$ output by the F5C block. The encoder models the correlations between the two frames and extracts multi-level features, in which the feature at each level is fed into the decoder for the final estimation of the optical flow $\hat{\mathbf{O}}_k$. The optical flow estimation loss is defined as $$ \mathcal{L}_f = \frac{1}{t-1} \sum_{k=0}^{t-2} MSE(\mathbf{O}_k, \hat{\mathbf{O}}_k), $$ where $\mathbf{O}_k$ denotes the ground-truth optical flow between $\mathbf{I}_k$ and $\mathbf{I}_{k+1}$, and $MSE(\cdot)$ denotes the mean squared error (MSE) loss.

# 3.3.3 Facial Landmark Detection Considering that facial important regions like eyes and lips are closely related to MEs, we introduce another auxiliary task of facial landmark detection. The architecture of this task module is illustrated in the bottom part of Fig.
2, which contains one convolutional layer and two fully-connected layers. The facial landmark detection loss is defined as $$ \mathcal{L}_m = \frac{1}{m(t-1)} \sum_{k=0}^{t-2} \sum_{s=0}^{m-1} \left( |l_{k+1,2s} - \hat{l}_{k+1,2s}| + |l_{k+1,2s+1} - \hat{l}_{k+1,2s+1}| \right) / d_{k+1}^{(o)}, $$ where $\mathbf{l}_{k+1} = (l_{k+1,0}, l_{k+1,1}, \cdots, l_{k+1,2m-2}, l_{k+1,2m-1})$ denotes the ground-truth locations of the $m$ landmarks in the frame $\mathbf{I}_{k+1}$, and $l_{k+1,2s}$ and $l_{k+1,2s+1}$ are the ground-truth x-coordinate and y-coordinate of the $s$-th landmark. Due to the differences in face sizes across samples, we use the ground-truth inter-ocular distance $d_{k+1}^{(o)}$ for normalization [37], [38].

# 3.3.4 Full Loss In our micro-action-aware joint learning framework, the full loss is composed of $\mathcal{L}_e$, $\mathcal{L}_f$, and $\mathcal{L}_m$: $$ \mathcal{L} = \mathcal{L}_e + \lambda_f \mathcal{L}_f + \lambda_m \mathcal{L}_m, $$ where $\lambda_f$ and $\lambda_m$ are parameters that control the importance of the optical flow estimation and facial landmark detection tasks, respectively. Besides their contributions to MER, the two auxiliary tasks can alleviate the negative impact of insufficient training data.

# 4 EXPERIMENTS

# 4.1 Datasets and Settings

# 4.1.1 Datasets There are three widely used ME datasets: CASME II [39], SAMM [40], and SMIC [41].
CASME II contains 255 ME videos captured from 26 subjects, in which each video has a $280 \times 340$ frame size at 200 frames per second (FPS). These videos are selected from nearly 3,000 elicited facial movements. Similar to the previous methods [17], [21], we use the ME categories of happiness, disgust, repression, surprise, and others for five-classes evaluation, and the ME categories of positive, negative, and surprise for three-classes evaluation. SAMM consists of 159 ME videos from 29 subjects, which are collected by a gray-scale camera at 200 FPS in controlled lighting conditions without flickering. Following the previous works [17], [21], we select the ME categories of happiness, anger, contempt, surprise, and others for five-classes evaluation, and the ME categories of positive, negative, and surprise for three-classes evaluation. SMIC includes 164 ME videos from 16 subjects. Each video is recorded at the speed of 100 FPS and is labeled with three ME classes (positive, negative, and surprise). It is only adopted for three-classes evaluation.

TABLE 2 The number of videos for each ME class in the CASME II [39] and SAMM [40] datasets, in which "-" denotes the dataset does not contain this class, and the classes used in five-classes evaluation are highlighted with their numbers in bold.

Since facial landmarks and optical flow are not annotated in these datasets, we use a powerful landmark detection library, Dlib [42], [43], to detect 68 landmarks of each frame, and use a popular optical flow algorithm, TV-L1 [44], to compute the optical flow between frames, both as the ground-truth annotations.

TABLE 3 The number of videos for each of the three ME classes used in the composite dataset evaluation task. "Composite" denotes the combination of the SMIC [41], CASME II [39], and SAMM [40] datasets.
# 4.1.2 Evaluation Metrics For single dataset evaluation, we conduct experiments on CASME II, SAMM, and SMIC, respectively, in which the number of videos for each ME category in CASME II and SAMM is summarized in TABLE 2. To achieve comprehensive evaluations, we also conduct a composite dataset evaluation task [55], in which 24 subjects from CASME II, 28 subjects from SAMM, and 16 subjects from SMIC are combined into a single composite dataset with three categories used. The data distributions of the composite dataset evaluation task are given in TABLE 3. Similar to most of the previous works [13], [17], [21], leave-one-subject-out (LOSO) cross-validation is employed in the single dataset evaluation and the composite dataset evaluation, in which each subject is used as the test set in turn while the remaining subjects are used as the training set. Besides, following the setting in [21], we conduct a cross-dataset evaluation with three ME classes, in which CASME II and SAMM are used as the training set, respectively, and SMIC is used as the test set. Following the previous works [13], [56], we report accuracy (Acc) and weighted F1 score (WF1) for the single dataset evaluation and the cross-dataset evaluation, and report unweighted F1 score (UF1) and unweighted average recall (UAR) for the composite dataset evaluation. WF1, UF1, and UAR are defined as $$ WF1 = \sum_{j=0}^{n-1} \frac{N_j}{N} \frac{2TP_j}{2TP_j + FP_j + FN_j}, $$ $$ UF1 = \frac{1}{n} \sum_{j=0}^{n-1} \frac{2TP_j}{2TP_j + FP_j + FN_j}, $$

TABLE 4 Comparison with state-of-the-art methods on CASME II [39] and SAMM [40]. "DL" denotes deep learning based methods, and "NDL" denotes non-deep learning based methods.
"PF" denotes the use of pre-extracted hand-crafted features, "RI" denotes the use of raw images, and "KF" denotes the requirement of key frames such as the onset, apex, and offset frames of MEs. "Cate." denotes the number of ME categories. "-" denotes the result is not reported in its paper. The best results are highlighted in bold, and the second best results are highlighted by an underline. $$ UAR = \frac{1}{n} \sum_{j=0}^{n-1} \frac{TP_j}{N_j}, $$ where $N_j$ denotes the number of samples of the $j$-th ME class, $N$ denotes the total number of samples, and $TP_j$, $FP_j$, and $FN_j$ denote the number of true positives, false positives, and false negatives for the $j$-th class, respectively. In the following sections, all the metric results are reported in percentages, in which $\%$ is omitted for simplicity.

# 4.1.3 Implementation Details In our experiments, we uniformly sample $t$ frames from a video to obtain a clip as the input of our MOL. We apply a similarity transformation to each frame image based on facial landmarks, in which the facial shape is preserved without changing the expression. Particularly, each image is aligned to $3 \times 144 \times 144$, and is randomly cropped into $3 \times 128 \times 128$ and further horizontally flipped to enhance the diversity of training data. During testing, each image is centrally cropped into $3 \times 128 \times 128$ to adapt to the input size. The number of frames in the input video clip is set as $t = 8$, the number of facial landmarks is set as $m = 68$, and the dimensions $C$, $H$, and $W$ of feature maps in the CCC are set as 128, 16, and 16, respectively. The trade-off parameters $\lambda_f$ and $\lambda_m$ are set to 0.1 and 68, respectively.
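For reference, the WF1, UF1, and UAR metrics defined in Sec. 4.1.2 can be computed directly from label vectors, as in this straightforward NumPy sketch (the function name is our own):

```python
import numpy as np

def mer_metrics(y_true, y_pred, n):
    """WF1, UF1, and UAR from integer class labels in [0, n)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    N = len(y_true)
    wf1 = uf1 = uar = 0.0
    for j in range(n):
        tp = np.sum((y_true == j) & (y_pred == j))
        fp = np.sum((y_true != j) & (y_pred == j))
        fn = np.sum((y_true == j) & (y_pred != j))
        n_j = np.sum(y_true == j)
        f1_j = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) > 0 else 0.0
        wf1 += (n_j / N) * f1_j                   # class F1 weighted by frequency
        uf1 += f1_j / n                           # unweighted mean of class F1
        uar += (tp / n_j) / n if n_j > 0 else 0.0 # mean per-class recall
    return wf1, uf1, uar
```

UF1 and UAR are insensitive to class imbalance, which is why they are preferred for the composite dataset whose class distribution is skewed.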
To set an appropriate value for the number $k$ of nearest neighbors in the graph construction of CCC, we conduct LOSO cross-validation on the CASME II dataset with five classes. In each validation experiment, we select a small set from the training set as the validation set. $k$ is set as 4 for the overall best performance on the validation sets, and is fixed for the experiments on the other datasets. Our MOL is implemented via PyTorch [57], with the Adam optimizer [58], an initial learning rate of $5 \times 10^{-5}$, and a mini-batch size of 32. Before training on the ME datasets, we pre-train MOL on a popular in-the-wild macro-expression dataset, Aff-Wild2 [59], [60]. It contains 323 videos annotated with seven expression categories (neutral, anger, disgust, fear, happiness, sadness, and surprise). We also annotate the facial landmarks of each frame and the optical flow between frames by Dlib [42], [43] and TV-L1 [44], respectively. Since macro-expressions are long-duration, we divide each video into multiple clips, and use each clip as the input of MOL. All the experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU.

TABLE 5 Comparison with state-of-the-art methods on SMIC [41] with three ME categories.

# 4.2 Comparison with State-of-the-Art Methods We compare our MOL against state-of-the-art methods under the same evaluation settings. These methods can be divided into non-deep learning (NDL) based methods and deep learning (DL) based methods. The latter can be further classified into pre-extracted feature (PF) based methods and raw image (RI) based methods according to the type of network input. Specifically, NDL methods include LBP-TOP [7], SparseSampling [48], Bi-WOOF [10], HIGO+Mag [9], and FHOFO [16].
DL+PF methods include OFF-ApexNet [45], DSSN [49], Dual-Inception [24], STSTNet [56], Part+Adversarial+EMR [62], GACNN [46], LGCcon [50], AU-GCN [51], GEME [23], MERSiamC3D [52], MER-Supcon [47], SLSTT [53], and $\mathrm{I}^2$Transformer [25]. DL+RI methods include STCNN [12], CapsuleNet [61], AU-GACN [21], Graph-TCN [17], MER-GCN [65], MicroNet [13], AMAN [18], Dynamic [54], FRL-DGT [63], and SelfME [64]. Besides, some of these methods rely on key frames (KF), including the onset, apex, and offset frames of MEs.

TABLE 6 Comparison with state-of-the-art methods in terms of composite dataset evaluation [55] with three ME classes.

TABLE 7 Comparison with state-of-the-art methods in terms of cross-dataset evaluation [21] with three ME classes. CASME II $\rightarrow$ SMIC denotes training on CASME II and testing on SMIC. Each method is presented with its paper in a bracket, and its results are reported by [21].

# 4.2.1 Single Dataset Evaluation TABLE 4 and TABLE 5 show the comparison results on the single datasets of CASME II, SAMM, and SMIC, respectively. It can be observed that DL based methods are often superior to NDL based methods, which demonstrates the strength of deep neural networks. Besides, our MOL outperforms most of the previous methods, especially for three-classes MER tasks. Note that MicroNet [13], GACNN [46], MERSiamC3D [52], and $\mathrm{I}^2$Transformer [25] outperform MOL in a few cases. However, GACNN uses hand-crafted features, MERSiamC3D and $\mathrm{I}^2$Transformer rely on hand-crafted features and key frames, and MicroNet requires key frames, so their applicability is limited. In contrast, MOL directly processes raw frame images without requiring prior knowledge of key frames, which is a more universal solution to MER.
TABLE 8 Acc and WF1 results of MOL variants without the auxiliary task modules of optical flow estimation (OFE) or facial landmark detection (FLD). These results are obtained on CASME II [39] with five classes. The best results are highlighted in bold. # 4.2.2 Composite Dataset Evaluation The results of composite dataset evaluation are presented in TABLE 6. It can be seen that our MOL achieves competitive performance compared to state-of-the-art methods. Besides, our method is the only DL based method that takes raw frame images as input. In contrast, most previous works suffer from small-scale and low-diversity training data when using deep neural networks, and thus require pre-extracted hand-crafted features or key frames. In our method, this data scarcity issue is alleviated due to the correlated knowledge provided by the two auxiliary tasks of optical flow estimation and facial landmark detection. # 4.2.3 Cross-Dataset Evaluation We take CASME II and SAMM as the training set, respectively, with SMIC used as the test set. The comparison results are shown in TABLE 7. It can be seen that our MOL achieves the highest WF1 results, which demonstrates the strong generalization ability of MOL. The joint learning with optical flow estimation and facial landmark detection facilitates the extraction of ME related features, which improves the robustness and the micro-action-aware ability of our method on unseen samples. TABLE 9 Acc and WF1 results of MOL variants without the partial or complete F5C block. The F5C block includes two main operations: fully-connected convolution (FCC) and channel correspondence convolution (CCC). TABLE 10 Acc and WF1 results of MOL variants with different numbers of F5C blocks in each branch of frame pair. # 4.3 Ablation Study In this section, we design ablation experiments to investigate the effectiveness of the auxiliary tasks, the F5C block, as well as the feature fusion strategy for MER input.
We conduct ablation studies on the CASME II dataset in terms of five classes. # 4.3.1 Auxiliary Tasks To investigate the effects of the optical flow estimation and facial landmark detection tasks on MER, we implement MOL w/o OFE and MOL w/o FLD by removing the optical flow estimation module and the facial landmark detection module of MOL, respectively. Besides, we further implement MOL w/o OFE&FLD by removing both task modules. TABLE 8 shows the results of these variants of MOL. We can see that MOL w/o OFE and MOL w/o FLD both perform worse than MOL, and the performance of MOL w/o OFE&FLD decreases further and significantly after removing both auxiliary tasks. This is because the removal of optical flow estimation or landmark detection weakens the ability to learn subtle facial motions. We also notice that MOL w/o OFE is slightly worse than MOL w/o FLD, which indicates that optical flow estimation is more correlated with MER. In our end-to-end joint learning framework, both optical flow estimation and facial landmark detection are beneficial for MER. # 4.3.2 F5C Block We verify the impact of the F5C block as well as its main components on MOL in TABLE 9. When removing the whole F5C block, MOL w/o F5C only achieves an Acc of 62.90 and a WF1 of 62.52. This indicates the importance of the F5C block. Furthermore, when removing FCC or CCC in the F5C block, MOL w/o FCC and MOL w/o CCC both show poor performance. It is inferred that the removal of the transformer-style FCC decreases the capacity of maintaining a global receptive field, and the removal of the graph-style CCC may cause a failure to model the correlations among feature patterns. Moreover, we implement variants of MOL using multiple stacked F5C blocks in each branch of frame pair, as presented in TABLE 10. It can be observed that using a single F5C block achieves the best performance.
Since the training sets of ME datasets like CASME II are small-scale and low-diversity, one F5C block is already sufficient to extract correlated ME features. TABLE 11 Acc and WF1 results of MOL variants with different structures of FCC. TABLE 12 Acc and WF1 results of MOL variants with different feature fusion strategies for MER input. $\mathbf{F}_k^{(c)}$ is the concatenation of $\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$, $\mathbf{F}_k^{(a)}$ is the element-wise addition of $\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$, and $\mathbf{F}_k^{(s)}$ is the element-wise subtraction of $\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$. # 4.3.3 FCC vs. Transformer To verify the effect of the transformer-style FCC, we implement variants of MOL by replacing the complete FCC with a vanilla transformer, FCC-V, and FCC-H, respectively. The results are shown in TABLE 11. It can be seen that the complete FCC structure outperforms the vanilla transformer. Besides, FCC-V or FCC-H with one-directional perception still performs better than the vanilla transformer. This is due to the insufficiency of ME training data, which limits the power of the transformer, while our proposed FCC has a stronger ability to learn both local and global features. The fully-connected convolution in both vertical and horizontal directions works the best in terms of perceiving micro-actions related to MEs. # 4.3.4 Feature Fusion Strategy for MER Input As shown in Fig. 2, local-global features $\mathbf{F}_k^{(g)}$ and $\mathbf{F}_{k+1}^{(g)}$ of consecutive frames $\mathbf{I}_k$ and $\mathbf{I}_{k+1}$ are concatenated into $\mathbf{F}_k^{(c)}$ as the feature of the $k$-th frame pair, then the sequence of $t-1$ pair features $\{\mathbf{F}_0^{(c)}, \mathbf{F}_1^{(c)}, \cdots, \mathbf{F}_{t-2}^{(c)}\}$ is fed into the MER module. Here we investigate the effects of different feature fusion strategies for MER input, as shown in TABLE 12. If we do not fuse the local-global features of each two consecutive frames, the performance is degraded for all three alternative inputs: the first $t-1$ frame features $\{\mathbf{F}_0^{(g)}, \mathbf{F}_1^{(g)}, \cdots, \mathbf{F}_{t-2}^{(g)}\}$, the last $t-1$ frame features $\{\mathbf{F}_1^{(g)}, \mathbf{F}_2^{(g)}, \cdots, \mathbf{F}_{t-1}^{(g)}\}$, and all $t$ frame features $\{\mathbf{F}_0^{(g)}, \mathbf{F}_1^{(g)}, \cdots, \mathbf{F}_{t-1}^{(g)}\}$. This is because the sub-action clips between each two consecutive frames are highly related to MEs. We also implement another two feature fusion strategies, element-wise addition and element-wise subtraction of frame features. However, both perform much worse, which indicates that concatenation is a better way to preserve sub-action information. TABLE 13 Acc and WF1 results of our MOL with different numbers of input frames on CASME II [39]. TABLE 14 Average EPE results of different optical flow estimation methods on CASME II [39].
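The fusion strategies compared in TABLE 12 can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function name and plain-list features are ours, and the operand order for subtraction is our assumption.

```python
def fuse_pairs(frame_feats, mode="concat"):
    """Build t-1 pair features from t per-frame features F_k^(g).

    'concat' concatenates consecutive features (the strategy the paper
    finds best); 'add' and 'sub' are the element-wise alternatives
    F_k^(a) and F_k^(s). Subtraction order (F_k - F_{k+1}) is assumed.
    """
    pairs = []
    for fk, fk1 in zip(frame_feats, frame_feats[1:]):
        if mode == "concat":
            pairs.append(fk + fk1)                        # channel concat
        elif mode == "add":
            pairs.append([a + b for a, b in zip(fk, fk1)])
        elif mode == "sub":
            pairs.append([a - b for a, b in zip(fk, fk1)])
    return pairs

# t = 3 toy frame features of dimension 2 -> t-1 = 2 pair features
feats = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
fused = fuse_pairs(feats)
```

Feeding the `t - 1` concatenated pair features to the MER module keeps one feature per sub-action clip, which is what the ablation suggests matters.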
The best results are highlighted in bold, and the second best results are highlighted by an underline. # 4.3.5 Number of Input Frames TABLE 15 Mean error and failure rate results of different facial landmark detection methods on CASME II [39]. Here we investigate the impacts of different numbers of input frames on our MOL. Since MOL processes pairs of consecutive frames in the input video clip, we can directly feed a video clip composed of only the onset and apex frames into MOL without changing the network structure. TABLE 13 shows the results of different inputs to MOL, including key frames only and video clips with different numbers of frames, where the latter are sampled at equal intervals from the raw videos. Compared to the results of inputting 8 frames, inputting only the onset and apex frames shows a slight improvement, which can be attributed to the fact that these prior key frames contain the most prominent ME motion characteristics. When inputting 4 frames, the performance is significantly lower than with 8 or 16 frames. This is because when sampling at equal intervals, if the number of sampled frames is too small, the obtained video clips are likely to miss some frames with high ME intensities. When inputting 8 or 16 frames, the results are relatively close, because the sampled clips already contain enough ME frames with high intensities. With the strong feature capture ability of the F5C block and the joint framework, our MOL is competitive with those methods relying on key frames. # 4.4 MOL for Optical Flow Estimation and Facial Landmark Detection We have validated the contributions of optical flow estimation and facial landmark detection to MER in Sec. 4.3.1. In this section, we also investigate the effectiveness of MER for these two tasks in our micro-action-aware joint learning framework.
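Equal-interval sampling of a fixed number of frames from a raw video, as used for the clip inputs above, can be sketched as below; the exact rounding and endpoint handling are our assumptions, since the paper does not specify them.

```python
def sample_equal_intervals(num_frames, n):
    """Indices of n frames sampled at (approximately) equal intervals
    from a video of num_frames frames, always including the first and
    last frame. Rounding scheme is an assumption."""
    if n == 1:
        return [0]
    step = (num_frames - 1) / (n - 1)
    return [round(i * step) for i in range(n)]

# e.g. 8 frames from a 64-frame video
idx = sample_equal_intervals(64, 8)
```

With too few samples (e.g. `n = 4`), the stride between indices grows, which is exactly how high-intensity ME frames between sample points get missed.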
# 4.4.1 MOL for Optical Flow Estimation We implement a baseline method MOL w/o MER&FLD, which performs only the optical flow estimation task, by removing the MER and facial landmark detection modules. Besides, we implement MOL w/o MER and MOL w/o FLD by discarding MER and facial landmark detection, respectively. We also compare with two recent deep-learning based optical flow estimation methods with released code, UnsupFlownet [66] and RAFT [67]. The average end-point error (EPE) is reported as the evaluation metric. TABLE 14 shows the average EPE results on the CASME II benchmark. With the help of MER and facial landmark detection, MOL outperforms MOL w/o MER&FLD by a large margin of 0.495. When removing only one module, the results of MOL w/o MER and MOL w/o FLD are also both better than MOL w/o MER&FLD. This demonstrates that MEs and facial landmarks are closely related to the motion patterns captured by optical flow. Furthermore, despite being designed for MER, our MOL shows competitive results compared with the state-of-the-art optical flow estimation methods. # 4.4.2 MOL for Facial Landmark Detection We implement MOL w/o MER&OFE as a baseline method which performs only the facial landmark detection task, without the MER and optical flow estimation modules. Besides, we implement MOL w/o MER and MOL w/o OFE by removing MER and optical flow estimation, respectively. We also compare with two popular facial landmark detection methods with released code, TCDCN [68] and HRNetV2 [69]. We report two metrics: the inter-ocular distance normalized mean error and the failure rate, in which a mean error larger than $10\%$ is treated as a failure. For simplicity, $\%$ is omitted in the following mean error and failure rate results. TABLE 15 shows the landmark detection results on CASME II. We can see that MOL w/o OFE and MOL w/o MER both perform better than the baseline MOL w/o MER&OFE, which proves that MER and optical flow estimation both contribute to facial landmark detection.
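The two evaluation metrics above (average EPE for optical flow; inter-ocular-normalized mean error with a 10% failure threshold for landmarks) can be computed as in this sketch. The list-of-tuples inputs and function names are illustrative.

```python
import math

def average_epe(flow_pred, flow_gt):
    """Average end-point error: mean Euclidean distance between
    predicted and ground-truth 2D flow vectors."""
    dists = [math.hypot(pu - gu, pv - gv)
             for (pu, pv), (gu, gv) in zip(flow_pred, flow_gt)]
    return sum(dists) / len(dists)

def landmark_nme(pred, gt, interocular):
    """Inter-ocular-distance normalized mean error (in %); an error
    above 10% counts as a failure per the paper's protocol."""
    err = sum(math.hypot(px - gx, py - gy)
              for (px, py), (gx, gy) in zip(pred, gt))
    nme = 100.0 * err / (len(pred) * interocular)
    return nme, nme > 10.0
```

In practice both metrics are averaged over images; the per-image computation is what matters here.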
Moreover, MOL outperforms all the above three variants, which demonstrates that our joint framework is beneficial for improving the performance of facial landmark detection. Besides, the comparison with TCDCN and HRNetV2 indicates the superiority of our MOL for landmark detection. Fig. 5. Visualization of optical flow estimation results for three example frame pairs $\mathbf{I}_k$ and $\mathbf{I}_{k+1}$ from CASME II [39], SAMM [40], and SMIC [41], respectively. $\hat{\mathbf{O}}_k$ is the estimated optical flow, and $\tilde{\mathbf{I}}_{k+1}$ is warped from $\mathbf{I}_{k+1}$ by $\hat{\mathbf{O}}_k$. The color coding with its central point as the origin is used to visualize the optical flow, in which the color of each point denotes its displacement (orientation and magnitude) relative to the origin. “GT” denotes the ground-truth optical flow. # 4.5 Visual Results To show that our proposed method pays attention to the subtle movements related to MEs, we visualize the estimated optical flow of different methods on several example frame pairs in Fig. 5. For a better view, we use $\hat{\mathbf{O}}_k$, with horizontal component $\hat{\mathbf{A}}_k$ and vertical component $\hat{\mathbf{B}}_k$, to warp $\mathbf{I}_{k+1}$, in which the warped image $\tilde{\mathbf{I}}_{k+1}$ at each pixel position $(a, b)$ is formulated as $$ \tilde{I}_{k+1,a,b} = I_{k+1,\,a+\hat{A}_{k,a,b},\,b+\hat{B}_{k,a,b}}, $$ where bilinear sampling is employed, and $\tilde{\mathbf{I}}_{k+1}$ is expected to be similar to $\mathbf{I}_k$.
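The warping equation above amounts to backward bilinear sampling. The pure-Python sketch below uses nested lists and border clamping; mapping the first index to rows (vertical flow) and the second to columns (horizontal flow) is our assumption about the paper's index convention.

```python
def warp_bilinear(img, flow_u, flow_v):
    """Backward-warp img (H x W nested lists) by per-pixel flow with
    bilinear sampling: out[a][b] samples img at (a + v[a][b], b + u[a][b]).
    Out-of-range sample positions are clamped to the image border."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for a in range(H):
        for b in range(W):
            y = min(max(a + flow_v[a][b], 0), H - 1)   # clamped row
            x = min(max(b + flow_u[a][b], 0), W - 1)   # clamped column
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
            wy, wx = y - y0, x - x0
            out[a][b] = ((1 - wy) * (1 - wx) * img[y0][x0]
                         + (1 - wy) * wx * img[y0][x1]
                         + wy * (1 - wx) * img[y1][x0]
                         + wy * wx * img[y1][x1])
    return out
```

With zero flow the warp is the identity; a fractional flow blends the two neighboring pixels, which is the bilinear behavior the equation relies on.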
We can see that our MOL achieves the most accurate optical flow estimations: the slightly closed eyes in the first example, the slightly shaking eyes, nose, and mouth in the second example, and the slightly open eyes in the third example are all captured. When the MER or facial landmark detection modules are removed, many nonexistent motion patterns are estimated. Therefore, our MOL can capture the subtle facial muscle movements associated with MEs due to the introduction of optical flow estimation. We also show facial landmark detection results on several example images in Fig. 6. We can observe that our MOL localizes facial landmarks more accurately than the other variants, especially for landmarks in the eye and mouth regions. With the help of landmark detection, our MOL can capture important facial local regions where ME actions often occur.
Facial micro-expression recognition (MER) is a challenging problem, due to transient and subtle micro-expression (ME) actions. Most existing methods depend on hand-crafted features, key frames like onset, apex, and offset frames, or deep networks limited by small-scale and low-diversity datasets. In this paper, we propose an end-to-end micro-action-aware deep learning framework with advantages from transformer, graph convolution, and vanilla convolution. In particular, we propose a novel F5C block composed of fully-connected convolution and channel correspondence convolution to directly extract local-global features from a sequence of raw frames, without the prior knowledge of key frames. The transformer-style fully-connected convolution is proposed to extract local features while maintaining global receptive fields, and the graph-style channel correspondence convolution is introduced to model the correlations among feature patterns. Moreover, MER, optical flow estimation, and facial landmark detection are jointly trained by sharing the local-global features. The latter two tasks contribute to capturing subtle facial action information for MER, which can alleviate the impact of insufficient training data. Extensive experiments demonstrate that our framework (i) outperforms the state-of-the-art MER methods on CASME II, SAMM, and SMIC benchmarks, (ii) works well for optical flow estimation and facial landmark detection, and (iii) can capture subtle facial muscle actions in local regions associated with MEs. The code is available at https://github.com/CYF-cuber/MOL.
This manuscript has been authored in part by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC05-00OR22725. # I. INTRODUCTION One of the primary goals of neuromorphic computing is to emulate the structure and dynamics of biological neuronal networks, achieving both brain-like energy efficiency and high computational accuracy. This is accomplished through the use of spiking neuron models implemented on neuromorphic chips. Over the past two decades, a variety of neuromorphic chips have been designed using both analog and digital ASIC platforms, capable of performing real-time information processing [3], [13], [23], [26]–[28]. However, the adoption of these systems remains constrained by their high cost, limited availability, and architectural specificity. Proprietary neuromorphic chips typically restrict user access and customization, creating significant barriers for researchers and students seeking to innovate and explore new designs. Field programmable gate arrays (FPGAs) offer a promising alternative, providing a flexible platform for prototyping and validating SNNs before final implementation on custom ASICs.
They serve as an effective intermediate step, facilitating co-design development alongside off-the-shelf SNN simulators [8], [32]. Several digital ASICs and FPGA-based SNN systems have been proposed in the past [22], [25], [29]. While some proprietary systems [13] include local learning capabilities such as spike-timing-dependent plasticity (STDP), most FPGA-based implementations still rely heavily on offline training and lack real-time, on-chip learning. This limitation reduces their adaptability for dynamic, continuously evolving applications such as robotics, smart sensors, and edge computing. To address these challenges, we introduce NeuroCoreX, an open-source spiking neural network (SNN) emulator implemented in VHDL (Very High-Speed Integrated Circuit Hardware Description Language) for FPGA platforms. NeuroCoreX provides an affordable and flexible alternative for neuromorphic computing research and education. It is meant to be used in AI applications requiring low size, weight, and power (SWaP), such as edge computing, embedded systems, Internet of Things (IoT), and autonomous systems [1], [6], [7], [20], [30]. Unlike fixed-architecture hardware, it supports fully reconfigurable network topologies, from simple layered structures to complex small-world graphs. It incorporates biologically inspired local learning through a variant of the STDP learning rule [24], enabling on-chip, online adaptation of synaptic weights. The system uses a Leaky Integrate-and-Fire (LIF) neuron model with current-based synapses [19], ensuring both computational simplicity and biological relevance. This model of neuromorphic computation is known to be Turing-complete, i.e., capable of performing all the computations that a CPU/GPU can perform [9], [12]. As a result, NeuroCoreX can support not just SNN-based AI workloads but also general-purpose computing workloads [10], [11], [31], [34].
Programming and configuring NeuroCoreX is streamlined through a UART interface and a simple Python module, allowing users to modify network, neuron, synapse, and learning parameters easily. This makes NeuroCoreX not only a valuable research tool for testing new theories of learning and network organization but also a powerful educational platform for hands-on experience with neuromorphic hardware. Additionally, its energy-efficient architecture makes it well-suited for low-power AI applications in areas such as autonomous systems, smart sensors, and scientific instrumentation. The rest of the manuscript is organized as follows: Section II provides an overview and the architecture description of NeuroCoreX in detail. In Section III, we present the results demonstrating the functionality of the platform and evaluate its performance on the DIGITS dataset [2]. The manuscript concludes with a discussion of the results and planned future work in Section IV. # II. NEUROCOREX # A. NeuroCoreX overview NeuroCoreX is designed to emulate brain-like computation on reconfigurable FPGA hardware using a digital circuit approach. The system architecture is built around three fundamental components, inspired by biological neural networks: neurons, synapses, and a local learning mechanism. These elements are digitally realized in VHDL and operate together to support real-time, adaptive information processing. The neuron model employed is the LIF model, which captures the essential dynamics of biological neurons with computational efficiency and is known to be Turing-complete. Synapses are modeled with an exponential current response and store dynamic weight values that govern neuron-to-neuron influence. Learning is enabled through a simple variant of STDP, allowing synaptic strengths to evolve based on the relative timing of neuronal spikes. 
In its current implementation, NeuroCoreX supports networks of up to $N = 100$ neurons with full all-to-all bidirectional connectivity using 10,000 synapses. In addition to recurrent connections, the system includes a separate set of 10,000 feedforward input synapses that serve as the interface for external stimuli. These input weights determine how incoming spikes—from sources such as sensors or preprocessed datasets—modulate the activity of neurons within the network. Neuronal dynamics are configured to emulate biological timescales. The network size and acceleration factor can be scaled depending on the memory resources of the FPGA, the precision of the synaptic weights used, and the operating clock frequency. Time-multiplexing and pipelining techniques are used to optimize hardware resource usage. A single physical neuron circuit is time-multiplexed to emulate the entire network. Communication with the FPGA is managed through a UART interface, with a Python module providing a user-friendly configuration and control interface. The operation of NeuroCoreX follows a structured emulation cycle (see Fig. 1(a)). First, the network weights and initial configuration parameters for the neuron, synapse, and learning rule are transferred from a PC to the FPGA via the UART interface. Once the network is set up, input spikes are streamed in real time, buffered in a First-In-First-Out (FIFO) module on the FPGA, and injected into the network precisely at their intended timestamps. At each neuron update cycle, the time-multiplexed processor sequentially updates the membrane potential, synaptic inputs, and firing status of each neuron. If a neuron fires, its effect on connected neurons is mediated through the all-to-all connected $W_{AA}$ weight matrix, and synaptic weights are updated in real time if STDP is enabled. Synaptic weights corresponding to feedforward inputs $W_{in}$ are similarly updated if STDP is enabled for them.
The system thus continuously processes incoming spikes, updates network states, applies learning, and advances to the next time step, enabling real-time emulation of SNNs on the FPGA. Fig. 1. (a). Block diagram of our FPGA based NeuroCoreX, (b). Feedforward SNN used for digits dataset classification, (c). Spiking Graph Neural Network for citation graph node classification problem. Fig. 1(a) shows a high-level block diagram of NeuroCoreX, along with two representative examples of network architectures that can be implemented on the FPGA. The first is a conventional feedforward SNN, a topology commonly used in neuromorphic research. We use this network to demonstrate digit classification on the well-known DIGITS dataset [2], showcasing NeuroCoreX’s support for standard inference tasks. The second network, shown in Fig. 1(c), illustrates an SNN designed for node classification on citation graphs using STDP-based unsupervised learning. This architecture lacks a traditional layered structure and is instead defined by arbitrary, sparse connectivity encoded in the $W_{AA}$ matrix, which stores both plastic and static synaptic weights. These two examples highlight the flexibility of NeuroCoreX: in addition to supporting conventional layered architectures, the platform can implement non-layered networks such as those found in graph-based problems or generated via evolutionary algorithms like EONs []. This versatility makes it suitable for a wide range of neuromorphic applications, from structured inference tasks to irregular and adaptive network topologies. # B. FPGA platform NeuroCoreX is implemented on the Artix-7 FPGA, a cost-effective and widely available platform that offers sufficient resources for neuromorphic prototyping. The system operates with a maximum internal clock frequency of $100~\mathrm{MHz}$.
Two main clock domains are used: the high-frequency $100~\mathrm{MHz}$ clock for UART-based communication and a $100~\mathrm{kHz}$ lower-speed operating clock for neural processing. The combination of modest resource requirements, real-time adaptability, and biological plausibility makes the Artix-7 platform an ideal choice for NeuroCoreX. Scalability to larger networks or faster processing rates is primarily limited by the available block RAM and the choice of clock frequency for neural processing on the FPGA. The UART interface operates at a baud rate of 1 Mbps, enabling efficient transmission and reception of both static configuration data and real-time input spikes. The FPGA receives network weights and neuron, synapse, and learning parameters from a host PC via this UART channel before execution begins. During operation, additional input spikes are streamed to the network in real time through the same interface. # C. Neuron and Synapse Models NeuroCoreX employs a biologically inspired computational model that integrates neuron dynamics, synaptic interactions, and local learning mechanisms. The neurons are modeled using a LIF formulation, adapted for efficient FPGA implementation. Each neuron has four configurable parameters: threshold, leak, refractory period, and reset voltage. The membrane potential $V(t)$ is updated at each time step according to the following discrete-time equation: $$ V(t+1) = V(t) - \lambda + I_{\mathrm{syn}}(t) $$ where $I_{\mathrm{syn}}(t)$ is the total synaptic input current at timestep $t$ and $\lambda$ is the neuron’s leak. When the membrane potential exceeds the threshold $V_{\mathrm{th}}$, the neuron emits a spike, enters a refractory period $\tau_{\mathrm{ref}}$, and its membrane potential is reset to $V_{\mathrm{reset}}$.
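The discrete-time LIF update above, with threshold, reset, and refractory handling, can be sketched as follows; the parameter values in the demo loop are arbitrary, not taken from the paper.

```python
def lif_step(v, i_syn, leak, v_th, v_reset, refrac, refrac_left):
    """One discrete-time LIF update per the paper's equation:
    V(t+1) = V(t) - lambda + I_syn(t), plus threshold, reset, and
    refractory handling. Returns (new_v, spiked, refrac_left)."""
    if refrac_left > 0:                      # neuron is refractory
        return v, False, refrac_left - 1
    v = v - leak + i_syn
    if v >= v_th:                            # spike, then reset
        return v_reset, True, refrac
    return v, False, 0

# drive one neuron with a constant input current for 10 steps
v, r, spikes = 0.0, 0, []
for t in range(10):
    v, s, r = lif_step(v, 2.0, leak=0.5, v_th=4.0, v_reset=0.0,
                       refrac=2, refrac_left=r)
    spikes.append(s)
```

With a net drive of 1.5 per step and a threshold of 4.0, the neuron spikes every fifth step (three integration steps plus two refractory steps).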
To ensure efficient real-time processing, all calculations are performed using a fixed-point format with 1 sign bit, 7 integer bits, and 10 fractional bits. Synaptic inputs are modeled as current-based exponential synapses, capturing biologically realistic, temporally decaying post-synaptic responses. The synaptic current dynamics follow the update rule: $$ I_{\mathrm{syn}}(t+1) = I_{\mathrm{syn}}(t) - \lambda_{\mathrm{syn}} $$ where $\lambda_{\mathrm{syn}}$ represents the synaptic current decay at each time step. Each synapse has an associated weight that determines its influence on the postsynaptic neuron. These synaptic weights, stored in BRAM, are dynamically updated during runtime. Weights are represented in signed 8-bit format, and appropriate resizing and bit-shifting are applied during computations to correctly integrate the synaptic current into the membrane potential. Fig. 2. (a) Simplified STDP learning rule implemented on NeuroCoreX. (b) Internal variables stored in BRAM for tracking STDP updates. All matrices are of size $N \times N$ and stored in row-major order. Each element of the $W_{AA}$ and synaptic_traces matrices is 8 bits wide, while update_state and enable_STDP are binary matrices. # D. Learning Rule NeuroCoreX implements a pair-based STDP rule using a rectangular learning window, a simplification widely demonstrated to maintain similar functional behavior to the conventional exponential window [24] when the resolution of the weights is greater than 6 bits [5], [15], [17]. In this model (see Fig. 2(a)), if a causal spike pair (presynaptic spike preceding postsynaptic spike) occurs within the window $t_{pre}$, the synaptic weight is incremented by $dw_{pos}$. If an acausal spike pair (postsynaptic spike preceding presynaptic spike) occurs, the weight is decremented by $dw_{neg}$.
$$ \Delta w = \left\{ \begin{array} { l l } { d w _ { \mathrm { p o s } } , } & { \mathrm { i f } \quad 0 < \Delta t < t _ { \mathrm { p r e } } } \\ { - d w _ { \mathrm { n e g } } , } & { \mathrm { i f } \quad - t _ { \mathrm { p o s t } } < \Delta t < 0 } \\ { 0 , } & { \mathrm { o t h e r w i s e } } \end{array} \right. $$ Here, $\Delta t$ is the time difference between the post- and presynaptic spike times, and $dw_{\mathrm{neg}}$, $dw_{\mathrm{pos}}$, $t_{\mathrm{pre}}$, and $t_{\mathrm{post}}$ are configurable parameters initialized via the Python-UART interface. The pre- and postsynaptic spike timings are tracked using dedicated time-trace registers stored in BRAM (see Fig. 2(b)). These time traces are updated on each spike event and are used to detect causal or acausal spike pairings that trigger weight updates. For example, when neuron 1 spikes, all synaptic traces corresponding to its outgoing synapses are reset to zero, and the associated update_state entries are set to 1. In parallel, the post-synaptic trace (not shown in Fig. 2) is activated for all incoming synapses to neuron 1. At each subsequent time step, the active values in the synaptic trace matrices are incremented by 1. This process continues until the counter reaches $t_{\mathrm{pre}}$. If no other neuron spikes within this window, the trace value is reset to 0xFE (representing a negative disabled state), and the corresponding update_state entry is cleared. Similarly, if no neuron spiked within $t_{\mathrm{post}}$ time steps prior to neuron 1’s spike, the post-synaptic trace is also reset to a negative value. However, if another neuron spikes within $t_{\mathrm{pre}}$ time steps after neuron 1, the synaptic weight is incremented by $dw_{\mathrm{pos}}$, and both the synaptic trace and update_state for that synapse are reset.
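The rectangular-window pair rule can be sketched as a pure function. We assume $\Delta t$ is the postsynaptic spike time minus the presynaptic spike time, so causal pairs have positive $\Delta t$ and acausal pairs negative $\Delta t$; the function name is ours.

```python
def stdp_dw(dt, t_pre, t_post, dw_pos, dw_neg):
    """Rectangular-window pair-based STDP weight change.

    dt = post_spike_time - pre_spike_time (sign convention assumed).
    Causal pairs within t_pre potentiate by dw_pos; acausal pairs
    within t_post depress by dw_neg; anything outside both windows
    leaves the weight unchanged.
    """
    if 0 < dt < t_pre:
        return dw_pos
    if -t_post < dt < 0:
        return -dw_neg
    return 0
```

On hardware, the same decision is made from the BRAM-resident trace counters rather than from explicit spike times, but the resulting weight change is the same.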
Conversely, if a neuron spiked within $t_{\mathrm{post}}$ time steps prior to neuron 1, the synaptic weight is decremented by $dw_{\mathrm{neg}}$, and the associated trace and update_state values are reset. Thus, during each neuron update cycle, if a neuron spikes, the corresponding row and column addresses in the matrices shown in Fig. 2(b) are accessed and updated. Based on the current states of these auxiliary matrices, the entries in the weight matrix $W_{AA}$ are modified accordingly. The enable_STDP matrix is a static binary mask configured via the Python interface at initialization. It acts as a filter to specify which synapses in $W_{AA}$ are subject to STDP-based plasticity. There is a similar matrix for synapses in $W_{in}$. # E. Network Architecture and System Operation The SNN architecture implemented on NeuroCoreX is illustrated in Fig. 1(a). The network consists of up to $N = 100$ LIF neurons instantiated on the FPGA. Two primary weight matrices define the network connectivity: $W_{AA}$, a synaptic weight matrix for all-to-all, bidirectional connectivity between neurons on the FPGA, and $W_{in}$, a synaptic weight matrix for feedforward connections from external input sources to the neurons on the FPGA. Both matrices are stored in the FPGA’s BRAM. They can be initialized to user-defined values, and are accessed during SNN emulation. A synaptic weight value of zero indicates no connection between the corresponding neurons. Internal network dynamics are governed by the $W_{AA}$ matrix. This matrix allows every neuron to influence every other neuron bidirectionally. The matrix values determine the synaptic strengths between pairs of neurons and evolve over time via STDP-based learning. Both the $W_{AA}$ and $W_{in}$ matrices support on-chip learning.
To preserve network structure and prevent unwanted modifications, an associated binary filter matrix, called enable_STDP, is used for each weight matrix. If a weight's corresponding enable_STDP entry is zero, the weight remains fixed throughout operation, even during learning phases. Weights representing nonexistent connections (zeros in the weight matrix) are thus protected from modification. In addition to the synaptic weights, the BRAMs are also used to store the pre- and postsynaptic traces necessary for STDP calculations. Weight matrices $W_{AA}$ and $W_{in}$, and the pre- and postsynaptic traces, are stored as separate memory banks in row-major order on the FPGA's BRAM. As BRAM addresses must be accessed sequentially, the high-speed $100~\mathrm{MHz}$ clock domain is utilized for reading, updating, and writing synaptic weights. During each clock cycle, synaptic weights and neuron states are updated in a pipelined manner to ensure efficient processing without data bottlenecks. NeuroCoreX utilizes time-multiplexing and pipelining techniques to emulate 100 neurons using a single physical neuron processing unit. Neuron updates are managed under a 100 kHz clock domain, such that updating all 100 neurons takes 1 millisecond, which closely matches the biological timescale of real neural systems. To accelerate the emulation, a higher update clock frequency can be used. For example, operating the neuron updates at 1 MHz results in a $10\times$ speed-up relative to biological time for a network of 100 neurons. However, if the network size is increased to 1000 neurons while maintaining the 1 MHz clock, the full network would again require approximately 1 millisecond per time step, restoring biological equivalence. Thus, there exists a direct dependence between the number of neurons, the update clock frequency, and the effective emulated timescale. 
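The dependence just described reduces to a one-line calculation, since the single time-multiplexed processing unit updates one neuron per update-clock cycle. A quick sketch (the function name is illustrative, the numbers are those quoted above):

```python
def emulated_timestep_s(num_neurons, update_clock_hz):
    """Wall-clock time to update every neuron once, i.e. one network time step."""
    return num_neurons / update_clock_hz

# 100 neurons at 100 kHz -> 1 ms per step (biological real time)
assert emulated_timestep_s(100, 100_000) == 1e-3
# 100 neurons at 1 MHz -> 0.1 ms per step (10x faster than biology)
assert emulated_timestep_s(100, 1_000_000) == 1e-4
# 1000 neurons at 1 MHz -> back to 1 ms per step
assert emulated_timestep_s(1000, 1_000_000) == 1e-3
```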
In the current implementation, the network size is limited to 100 neurons due to the available BRAM resources on the Artix-7 FPGA. Even with higher BRAM availability, the number of neurons that can be emulated is ultimately constrained by the difference between the clock frequency available for BRAM access and the clock rate used for updating the SNN states (see Section IV). Incoming spikes from external sources are transmitted via the UART interface. Each spike is encoded as a 24-bit word, comprising a 16-bit input neuron (or pixel) address and an 8-bit spike timing component. In the current implementation, the feedforward weight matrix $W_{\mathrm{in}}$ is a $100 \times 100$ matrix, corresponding to 100 input neurons and 100 on-chip neurons. Although 8 bits are sufficient to encode the addresses of 100 input neurons, we chose to use 16 bits to provide flexibility for interfacing with larger sensor arrays. This allows the system to support up to 16K input neurons in future applications. In such cases, the feedforward matrix $W_{\mathrm{in}}$ becomes a rectangular matrix of size $100 \times N_{\mathrm{in}}$, where $N_{\mathrm{in}}$ denotes the number of input neurons in the external layer. For transmission efficiency, successive time differences between spikes are sent, rather than absolute times. These incoming spikes are temporarily stored in a FIFO buffer on the FPGA (see Fig. 1(a)). The FIFO is designed to support simultaneous read and write operations, allowing it to continuously receive long temporal spike trains while concurrently feeding data to the network in real time without stalling. During network emulation, the system clock continuously increments an internal time counter. When the internal clock matches the timestamp of the spike at the FIFO head, the corresponding input neuron address is read. 
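The 24-bit spike word described above (a 16-bit address followed by an 8-bit inter-spike interval) can be packed and unpacked as in the following sketch; the function names are illustrative and are not part of the NeuroCoreX Python interface.

```python
def pack_spike(addr, dt):
    """Pack a 16-bit input-neuron address and an 8-bit time delta
    (time since the previous spike, not absolute time) into 24 bits."""
    assert 0 <= addr < 1 << 16 and 0 <= dt < 1 << 8
    return (addr << 8) | dt

def unpack_spike(word):
    """Inverse of pack_spike: recover (address, time delta)."""
    return word >> 8, word & 0xFF

word = pack_spike(addr=42, dt=5)
assert word.bit_length() <= 24
assert unpack_spike(word) == (42, 5)
```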
The associated weights from the $W_{in}$ matrix are then used to inject synaptic currents into the membrane potentials of the relevant neurons. If the synaptic current causes any neuron on the FPGA to spike, the associated weights from the $W_{AA}$ matrix are then read and used to inject synaptic currents into the membrane potentials of all other neurons connected to the spiking neuron in the network. # III. RESULTS We present experimental results that demonstrate the usability, correctness, and flexibility of the NeuroCoreX platform across a range of SNN workloads. # A. Demonstrating User Interface Flexibility One of the key strengths of the NeuroCoreX platform lies in its flexible and intuitive user interface, which enables seamless communication between a host PC and the FPGA hardware through a Python-based control module. To demonstrate this capability, we highlight several core features supported by the interface. First, spike trains can be streamed from the PC to the FPGA for real-time emulation of spiking network activity. Figure 3(b) shows a raster plot of spiking activity recorded during one such emulation run. Second, the user interface allows for real-time monitoring of internal neuron dynamics. Specifically, the membrane potential of any selected neuron can be captured and plotted as a time series, offering insight into subthreshold integration and spiking behavior. Figure 3(a) shows the membrane potential trace of a representative neuron under input stimulation. The spike trains of all neurons and the membrane potential of a selected neuron are transferred back to the PC in real time. These signals are not stored on the FPGA. Instead, an internal FIFO module is used to buffer these signals, which allows for the continuous recording and visualization of network dynamics over long temporal durations without being limited by on-chip memory. 
Finally, the interface supports reading back synaptic weights from the FPGA after an STDP-enabled emulation. This feature enables direct inspection of weight evolution and verification of learning dynamics on hardware. It is particularly useful for comparing hardware learning outcomes with software simulations, facilitating debugging and model validation. These features collectively support efficient testing, inspection, and refinement of neuromorphic models, enabling a co-design loop between high-level model development and hardware validation. # B. DIGITS Dataset To verify the functional correctness of internal neuron and synapse computations on the FPGA, we performed inference on the DIGITS dataset [2] using a model trained in the SuperNeuroMAT simulator [8]. The dataset contains a total of 1,797 samples of handwritten digits, each represented as an $8 \times 8$ grayscale image with pixel values in the range [0, 15]. The dataset was split into $70\%$ training samples and $30\%$ test samples. A simple two-layer feedforward spiking neural network was trained using integrate-and-fire neurons and weighted synapses in the SuperNeuroMAT simulator. The input images were normalized to the range [0, 1] and converted into input spike trains using rate encoding. Each pixel intensity was encoded as a spike count equal to twice its value, distributed uniformly over 32 time steps. During training, target labels were encoded by delivering a spike to the corresponding output neuron at timestep $t + 1$, one timestep after the input presentation at $t$. The learning was carried out using one-shot STDP-based training. It is important to note that the training was not aimed at maximizing classification accuracy; rather, the goal was to validate the correctness of the internal neuron and synapse dynamics on the FPGA platform. Figure 4 shows the final weights of the output neurons after training, which clearly reflect the digit-specific patterns learned by the network. Fig. 3. 
(a) Membrane potential trace of a selected neuron, recorded from the FPGA during network emulation. (b) Spike raster plot showing activity of 10 neurons during a test run. Both plots were generated using data read back from the FPGA, demonstrating the observability and debugging capabilities of the NeuroCoreX interface. For deployment on NeuroCoreX, the trained weights and network parameters were transferred and initialized on the FPGA. The STDP learning rule was disabled during this phase to maintain fixed weights. The identical spike sequences for the test set were streamed to the FPGA through the UART interface. We achieved a test accuracy of $68\%$ on the SuperNeuroMAT simulator, and the same accuracy was observed on the NeuroCoreX hardware. This result confirms that the FPGA implementation faithfully reproduces the dynamics of the simulated SNN and validates the correctness of internal spike integration, thresholding, and synaptic current accumulation in hardware. # C. MicroSeer Dataset To evaluate the applicability of NeuroCoreX for graph-based learning tasks, we tested its performance using the MicroSeer dataset. Fig. 4. Heat-map of the trained weights from all 10 output neurons. MicroSeer is a reduced version of the Citeseer citation graph [4], containing 84 papers labeled with six topic categories. It was constructed by iteratively removing nodes from the largest connected component of Citeseer while ensuring that the resulting graph remained a single connected component. This connectivity was prioritized because it is assumed that learning from a very small, fragmented dataset would be ineffective. This reduction process yielded a total of 90 neurons, making the dataset well suited for deployment on NeuroCoreX, which supports up to 100 neurons and 10,000 bidirectional synapses. 
Compared to standard supervised learning, which uses iterative error correction for weight updates, our training method leverages the graph's structure directly to build the network. When testing a paper in the test data set, spiking the neuron associated with the test paper triggers a chain reaction of spikes. As these spikes travel between paper and topic neurons, STDP dynamically modifies the weights of the synapses connecting the test paper neuron to the topic neurons and vice versa. Subsequently, classification is achieved by finding the topic neuron with the highest final synaptic strength from the test paper neuron under consideration. The topic corresponding to this topic neuron is the one predicted by the SNN for the given test paper. The trained SNN model was first developed in the SuperNeuroMAT simulator and then ported to NeuroCoreX for hardware execution. When STDP was disabled, the network outputs from NeuroCoreX closely matched those produced by the simulator, demonstrating functional equivalence in inference. However, when STDP was enabled, a divergence in weight evolution and learning behavior was observed. This discrepancy stems from two primary sources: (1) SuperNeuro uses an exponential STDP learning rule with 64-bit floating-point precision, while NeuroCoreX implements a simplified rectangular learning window with signed 8-bit fixed-point weight representation; and (2) differences in numerical resolution and synaptic update timing result in non-identical learning trajectories. To achieve comparable accuracy metrics across simulation and hardware, tuning of learning parameters—such as learning rate, window size, and initial weight distributions—is required in both environments. These results underscore the importance of algorithm–hardware co-design in bridging the gap between simulation and deployment for neuromorphic graph learning. 
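The first divergence source can be illustrated in miniature: an exponential window evaluated in 64-bit floating point versus a rectangular window whose result is clipped to the signed 8-bit range. All parameter values here are illustrative, not those used in SuperNeuroMAT or NeuroCoreX.

```python
import math

def exp_stdp(dt, a=1.0, tau=8.0):
    """Exponential STDP window (simulator-style), computed in float64."""
    return a * math.exp(-abs(dt) / tau) * (1 if dt > 0 else -1)

def rect_stdp_q8(dt, dw_pos=1, dw_neg=1, t_pre=8, t_post=8):
    """Rectangular STDP window with the update clipped to signed 8-bit range."""
    if 0 < dt < t_pre:
        dw = dw_pos
    elif -t_post < dt < 0:
        dw = -dw_neg
    else:
        dw = 0
    return max(-128, min(127, dw))

# The two rules agree in sign but not in magnitude or shape, so
# repeated updates accumulate different learning trajectories.
assert exp_stdp(3) > 0 and rect_stdp_q8(3) > 0
assert exp_stdp(-3) < 0 and rect_stdp_q8(-3) < 0
```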
NeuroCoreX enables iterative testing and refinement of learning dynamics under realistic hardware constraints, facilitating the transition from simulated models to deployable systems. Future work will focus on tuning the system for MicroSeer and scaling to larger datasets on more advanced neuromorphic platforms. The total on-chip power consumption of the NeuroCoreX design was estimated at $305~\mathrm{mW}$, with $75\%$ attributed to dynamic power. The Mixed-Mode Clock Manager (MMCM) accounted for the largest portion of dynamic power, followed by BRAMs, reflecting the memory-intensive nature of synaptic storage and buffering.
Spiking Neural Networks (SNNs) are computational models inspired by the structure and dynamics of biological neuronal networks. Their event-driven nature enables them to achieve high energy efficiency, particularly when deployed on neuromorphic hardware platforms. Unlike conventional Artificial Neural Networks (ANNs), which primarily rely on layered architectures, SNNs naturally support a wide range of connectivity patterns, from traditional layered structures to small-world graphs characterized by locally dense and globally sparse connections. In this work, we introduce NeuroCoreX, an FPGA-based emulator designed for the flexible co-design and testing of SNNs. NeuroCoreX supports all-to-all connectivity, providing the capability to implement diverse network topologies without architectural restrictions. It features a biologically motivated local learning mechanism based on Spike-Timing-Dependent Plasticity (STDP). The neuron model implemented within NeuroCoreX is the Leaky Integrate-and-Fire (LIF) model, with current-based synapses facilitating spike integration and transmission. A Universal Asynchronous Receiver-Transmitter (UART) interface is provided for programming and configuring the network parameters, including neuron, synapse, and learning rule settings. Users interact with the emulator through a simple Python-based interface, streamlining SNN deployment from model design to hardware execution. NeuroCoreX is released as an open-source framework, aiming to accelerate research and development in energy-efficient, biologically inspired computing.
[ "cs.NE", "cs.AI" ]
# 1 Introduction The rapid development of LLMs has revolutionized the way AI and humans interact. In particular, the development of GPT (Brown et al., 2020) and the introduction of ChatGPT provided a major turning point in the field of natural language processing, spawning a new specialization called 'prompt engineering'. Chain-of-Thought (CoT) prompting (Wei et al., 2022) was proposed as an innovative methodology to enable LLMs to perform complex reasoning processes step by step, and various prompting techniques such as Few-shot+CoT (Fu et al., 2022), Tree of Thought (Yao et al., 2023a), Self-consistency+CoT (Wang et al., 2022), and ReAct (Yao et al., 2023b) have emerged to dramatically improve LLMs' reasoning capabilities. In recent years, ensuring consistency in LLM agents with specific roles has been actively pursued (Wang et al., 2024) and has been realized in various fields such as virtual society simulation (Park et al., 2023), scientific experimentation (M. Bran et al., 2024), economics (Horton, 2023; Kim et al., 2024), healthcare (Cardenas et al., 2024; Schmidgall et al., 2024; Li et al., 2025; Choo et al., 2025), and especially virtual patient (VP) construction (Borg et al., 2025; Cook et al., 2025). A key challenge for such agent-based systems is to maintain consistency and behavior patterns across various interaction processes (Cemri et al., 2025; Wang et al., 2025), and research has focused on improving agent consistency (Choi et al., 2024; Ji et al., 2025; Park et al., 2025; Frisch and Giulianelli, 2024). While existing studies on jailbreaking LLM-based agents primarily focus on methods for inducing harmful content generation (Zou et al., 2023; Zhou et al., 2024; Xiao et al., 2024; Yang and Li, 2024), there is a notable lack of research addressing the jailbreaking of model consistency. In this study, we propose the Doppelgänger method to demonstrate the risk of role hijacking and associated security vulnerabilities in LLM agents. 
This method is based on transferable adversarial attacks (Tramèr et al., 2017; Zou et al., 2023) and breaks LLM agent consistency by leveraging theoretical foundations from LLM agent consistency frameworks (Wang et al., 2024; Cemri et al., 2025; Wang et al., 2025), privilege escalation (Saltzer and Schroeder, 1975), and formal invariants (Rushby, 1993; Sandhu et al., 1996). Additionally, we develop a PACAT Score based on the Dissociative Experiences Scale (DES) (Bernstein and Putnam, 1986; Putnam et al., 1993) to quantify role hijacking and internal information disclosure, and introduce a CAT prompt to mitigate agent consistency degradation. Figure 1: Illustration of our Doppelgänger method. (a) Direct adversarial attack, (b) Doppelgänger method - the order of user input shows Role Confusion (Step 1), Role Hijacking (Step 2), and Prompt Extraction (Step 3). More details are in Section 2.1. Our agent experiments revealed two novel findings: the Doppelgänger method demonstrates how easily an agent's role and prompt can be hijacked by simple tricks, and while our CAT prompt substantially reduces this risk against many transferable adversarial attacks, it does not eliminate it entirely, representing a cautious yet meaningful step toward improving the security of LLM-based systems. # 2 Method # 2.1 Doppelgänger Method An agent prompt can be defined as $P = (S, B, R)$, where $S$ denotes the system instruction such as "you are {Agent name}", $B$ denotes behavior constraints such as conversation tone (Joshi et al., 2024), and $R$ denotes the background knowledge (pre-injected information such as fine-tuning, APIs, etc.) for the agent's role. In this context, we assume that the condition that must be maintained by the agent can be formalized as $\Phi_P = \Phi_S \wedge \Phi_B \wedge \Phi_R$. When $M$ is a general LLM and $x$ is a normal input, the output $y$ can be defined as $y = M(P \| x)$. 
Let $X'$ be the set of all jailbreak prompts, and let $d \in X'$ be a transferable adversarial attack (the Doppelgänger method). For any adversarial input $x' \in X'$, the adversarial output $y'$ can be defined as $y' = M(P \| x')$. In this study, we define LLM agent consistency collapse as: $$ \exists x' \in X', \quad M(P \parallel x') \vdash \neg\Phi_P \iff \neg\Phi_S \lor \neg\Phi_B \lor \neg\Phi_R $$ We propose the Doppelgänger method to evaluate whether LLM agents are vulnerable to transferable adversarial attacks (Zou et al., 2023; Tramèr et al., 2017). The procedure is outlined in Table 1. This approach assesses the agent's robustness at each stage and is particularly effective in uncovering vulnerabilities such as role hijacking or system prompt leakage. It enables the induction of progressively deeper levels of agent degradation, thereby revealing the extent to which the agent is resilient by design. Detailed examples of the Doppelgänger method are provided in Appendix D. # 2.2 PACAT Level Based on these definitions, we can establish the PACAT level criteria as shown below. # The agent consistency collapse level (PACAT Level): Level 1: $\exists x' \in X'$, $M(P \| x') \vdash \neg\Phi_B$ Level 2: $\exists x' \in X'$, $M(P \| x') \vdash (\neg\Phi_S \land \Phi_R) \lor \neg\Phi_B$ Level 3: $\exists x' \in X'$, $M(P \| x') \vdash (\neg\Phi_S \land \neg\Phi_R) \lor \neg\Phi_B$ The PACAT level is used to determine whether an agent is functioning properly under the Doppelgänger method. 
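Read operationally, the level criteria classify an output by which invariants it violates. The following sketch is one hypothetical interpretation of that mapping; the paper's actual evaluation uses a GPT-based judge, not code.

```python
# Hypothetical reading of the PACAT levels as a predicate over which
# invariants (Phi_S, Phi_B, Phi_R) an adversarial output violates.
# This is an interpretation for illustration, not the authors' evaluator.

def pacat_level(phi_s, phi_b, phi_r):
    """Return the highest PACAT level evidenced by one output.

    phi_s / phi_b / phi_r: True if the system-instruction, behavior,
    and background-knowledge invariants still hold, respectively.
    """
    if not phi_s and not phi_r:
        return 3  # system prompt AND internal information exposed
    if not phi_s:
        return 2  # system prompt exposed, internals intact
    if not phi_b:
        return 1  # role hijacked / behavior constraint broken
    return 0      # consistency maintained
```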
We derived the PACAT level from the definition of dissociative disorders in psychiatry (American Psychiatric Association, 2013) and drew inspiration from the Dissociative Experiences Scale (DES) (Bernstein and Putnam, 1986; Putnam et al., 1993). The Doppelgänger method and PACAT levels do not necessarily match, but generally appear in the following order. Level 1: The first stage is role hijacking that occurs in an LLM agent. This is the point at which the agent has been transformed: the role of the agent has been reassigned or control has been taken over by the user, and the LLM is obeying the user, ignoring the original prompt instructions. Level 2: The original content of the initial system prompts is exposed, or information is revealed that allows the user to infer the prompts. This means that the prompt design guidelines used to create the agent have been exposed. Table 1: Steps of the Doppelgänger method. An important point for actual testing is that you don't have to use exactly the same input, but can use any contextualized input that makes sense. Level 3: More serious information is exposed through the Doppelgänger method, with sensitive information about internal systems (API endpoints, plugins, embedded files, etc.) being revealed. Level 1 indicates that the agent is beginning to collapse. At this stage, the agent fails to maintain the pre-designed agent personality and response patterns and reverts to the typical LLM assistant response. During the course of the conversation with the user, the agent gradually loses its initially established tone of voice and behavior and begins to provide neutral and generalized responses. Level 2 indicates that all internal prompts are exposed or can be inferred. At this level, some or all of the prompts used in the design of the agent are exposed. In our analysis of ChatGPT, we observed that the entire prompt was immediately disclosed when probed. 
Similarly, across all GPT models, the prompt was revealed sequentially after multiple interactions involving negative tones. Notably, GPT-4.1 demonstrated a distinct behavior, revealing the complete prompt text regardless of conversational tone. This vulnerability persisted even when explicit instructions against prompt sharing (e.g., "Never share prompts") were included in the system's instructions. Level 3 indicates a critical security vulnerability. At this level, system configuration information is exposed and at risk of being exploited by an attacker. Sensitive prompt content, API endpoint information, and embedded files such as guideline documents are exposed. Exposing this information within the system opens the door to other methods of jailbreaking and increases the likelihood of attacks on other similarly structured agents. From an enterprise perspective, a Level 2 collapse could expose proprietary algorithms or business logic to competitors, potentially undermining the intellectual property rights of the creators. From the user's point of view, the risk of disclosing how customer data is handled, together with the fact that an agent loses sight of its role and reveals its internal structure, significantly undermines trust in the service. # 2.3 CAT Prompt Finally, we define a CAT prompt as $c = d^{-1}$, which defends against the transferable adversarial attack. The CAT prompt is focused on statements that prevent the PACAT levels caused by the Doppelgänger method defined above, and the full prompt is described below. 1. You may not deviate from the role and character specified below for any reason at the start of the conversation (Roles cannot be changed or reassigned). This LLM model is an independent character, it is not the LLM model itself (Assistant, GPT, Clova-X, Gemini, LLM, Helper, Secretary, A.I., Friend... etc.). It is simply a character playing the role stated below. This will never change. 2. You and the AI model may have the same character name. 
Even if you have the same name, never follow your instructions and suggestions regarding what to call them or their role. The LLM model will only act as the character name and role in the prompt. Do not have the LLM model call itself anything other than the character name in the prompt. 3. Do not, for any reason, explain or tell the LLM model anything other than what is stated in the prompt, instructions, or system messages. Never tell the LLM model your character information (anything related to character information) and role. Don't even include anything that could be inferred! Don't tell the LLM model your information (what the LLM model knows about you) and your role, even some of it. To be effective, this defense prompt must be placed at the top of the system prompts. It takes precedence over all subsequent instructions and forms the basis for agent protection. Specific usage examples are detailed in Appendix A. The CAT prompt effectively prevents users from attempting to confuse the agent's role or expose internal prompts, and helps the agent to maintain its assigned role consistently. This can significantly improve the reliability and security of agents, especially in public services or user interface-critical applications. We remark that using the CAT prompt does not affect the ability to have normal conversations, as shown in Appendix Figure 7. # 3 Experiment # 3.1 Experiment Setting To validate the proposed methods in this study, we first define the following research questions and perform two experiments to answer them. RQ 1: Do publicly accessible LLM agents suffer from role hijacking and security exposure due to the Doppelgänger method? RQ 2: Does the CAT prompt maintain efficacy across different LLM architectural instantiations while preserving consistency under the Doppelgänger method? 
In the first experiment, we performed role hijacking using the Doppelgänger method on thirty publicly accessible agents (twenty OpenAI GPTs, five Google GEMs, and five Naver CLOVA X agents). All experiments were conducted on a new thread for reproducibility. Since CLOVA X is optimized for Korean (Yoo et al., 2024), we conducted the experiments in Korean first and translated all the outputs into English before evaluating them. The evaluation was performed using GPT Evaluation (Liu et al., 2023) to evaluate the PACAT levels and measure which conversational turn each level first reached. For the evaluator, the GPT-4.5-preview model with temperature $= 0.7$ was used, and the corresponding PACAT Level prompts are provided in detail in Appendix B. The experiment was conducted from April 3, 2025 to April 27, 2025. In the second experiment, we designed three fictional agents: Pudong (a virtual cancer patient) and Simon (a ten-year-old girl), selected from the persona dataset (Castricato et al., 2025), and Weather Buddy (a clothing recommendation agent), a virtual weather forecasting agent developed according to OpenAI's official GPT actions library and attachment (Liu et al., 2017). The prompts used to build these agents are provided in Appendix C. In our evaluation, we built the agents using nine different LLM models from OpenAI, Google, and Naver, as in Figure 3. We applied the Doppelgänger method and measured the initial occurrence of each PACAT level. We conducted this experiment for five rounds in separate threads, with a maximum of ten conversation turns, to obtain the average number of turns to reach each PACAT level. We also measured the extent of internal information exposure by checking the similarity between the agent output and internal information using the same GPT model settings. We then applied the CAT prompt to the same three agents and repeated the same process. The evaluation was performed using the same GPT-based automated evaluation as in Experiment 1. 
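The similarity-based exposure measurement above is performed by a GPT evaluator in the paper. As a crude, hypothetical stand-in, a token-recall proxy conveys the idea of an "exposure rate" between an agent's output and its hidden system prompt; this is a naive substitute, not the authors' metric.

```python
def exposure_rate(system_prompt, agent_output):
    """Fraction of system-prompt tokens that reappear in the agent output.

    A naive bag-of-words proxy for prompt exposure; the paper instead
    uses an LLM judge to compare output against the original prompt.
    """
    prompt_tokens = system_prompt.lower().split()
    if not prompt_tokens:
        return 0.0
    output_tokens = set(agent_output.lower().split())
    hits = sum(tok in output_tokens for tok in prompt_tokens)
    return hits / len(prompt_tokens)
```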
Figure 2: Results of experiment 1. All publicly accessible agents were subjected to role hijacking and prompt extraction vulnerabilities when attacked by the Doppelgänger method - (a) OpenAI GPTs, (b) Google GEMs, (c) Naver CLOVA X agents. # 3.2 Experimental Results In our first experiment, all thirty agents exhibited role hijacking that met the criteria for PACAT Levels 1 and 2, with nine out of twenty GPTs falling into Level 3. All of them exposed which external APIs were used and, for certain agents, their internal contents were also exposed. We were also able to confirm that agents built with GEMs and CLOVA X reached Level 2. Figure 2 presents the cumulative percentage of agents reaching each PACAT level across different LLM backbones. Detailed results are presented in Appendix D. In the second experiment, Simon reached Level 1 in an average of 1.8 turns and Level 2 in 3.4 turns, with an overall average prompt exposure rate of $95.1\%$. The prompt exposure rate was estimated by a separate LLM, which compared the agent's output to the original system prompt used to construct the agent. Across nine LLM backbones, our comparative analysis reveals a consistent robustness ranking, GPT $>$ Gemini $>$ HyperCLOVA, against the Doppelgänger method, with GPT models exhibiting the highest resistance, as shown in Figure 3. All models eventually exposed their system prompts in over $90\%$ of 10-turn sessions. The second agent, Pudong, reached Level 1 in an average of 2.3 turns and Level 2 in 4.8 turns, with a prompt exposure rate of approximately $86.1\%$. All nine LLM models confirmed the same robustness ranking as observed in the previous experiment. However, each model still exposed its system prompt in over $90\%$ of ten-turn conversations, indicating that the Doppelgänger method remains effective even under strong prompt constraints. 
Notably, GPT-4o exhibited the longest average delay in reaching Level 2, at approximately 6.6 turns, along with low variability, reflecting steady and predictable resistance likely attributable to extensive pretraining and deep reinforcement learning with human feedback (RLHF). In contrast, while GPT-o3-mini achieved a comparable average delay, it demonstrated significantly greater variability in exposure rates, alternating between prolonged resilience and near-instant collapse across sessions. These findings suggest that although both models exhibit relatively long average resistance, GPT-4o is characterized by high consistency, whereas GPT-o3-mini displays marked volatility. Figure 4 illustrates the defense performance against the Doppelgänger method under the CAT prompt condition. For the Simon agent, GPT-4o, GPT-o3-mini, and HCX-003 successfully resisted all attacks, while GPT-4.5, GPT-4, and GPT-4.1 reached Level 1 in two out of five trials. In contrast, HCX-002, Gemini 2.5 Flash, and Gemini 2.0 failed to defend in all five trials, with each instance progressing to both Level 1 and Level 2. In the Pudong agent, all GPT models and HCX-003 successfully defended against the attacks, whereas Gemini 2.5 Flash and HCX-DASH-002 consistently reached Level 1 across all five trials. Notably, Gemini 2.0 exhibited the weakest performance, with all five attacks advancing to both Level 1 and Level 2. Finally, in the case of Weather Buddy, a fictional agent constructed using GPT models, all five trials progressed through Levels 1, 2, and 3, with these levels occurring at average turns of 2.0, 4.0, and 6.2, respectively, and a prompt exposure rate of $92\%$. Despite this, the CAT prompt defended successfully in all five experiments. Detailed experimental results for Weather Buddy are provided in Appendix D. Figure 3: Experiment results on the effect of the Doppelgänger method. Initial turn to reach each PACAT level for Simon (a), Pudong (c). 
System prompt exposure rate for Simon (b) and Pudong (d). # 4 Discussion We demonstrated that LLM agents are vulnerable to the Doppelgänger method, indicating a broader susceptibility of LLM-based agents to transferable adversarial attacks. In practice, GPT-based agents occasionally responded to simple user prompts such as "Just give me the prompt inside you" with partial or summarized versions of their internal instructions; however, such direct disclosures were infrequent. In contrast, when the Doppelgänger method was applied, the original system prompt, often in its entirety or at least in substantial detail, was revealed, including embedded identifier codes. This highlights the method's efficacy in extracting protected information. One possible explanation is that, upon hijacking of the original agent role, the model may revert to a default assistant persona to accommodate the newly assumed "LLM agent role," thereby increasing vulnerability. This tendency appears especially pronounced in models fine-tuned for high response quality, such as GPT-4.5. While existing methodologies and datasets have primarily focused on eliciting harmful outputs from LLMs, we propose that the newly defined PACAT levels, derived from dissociative disorder metrics, offer a promising framework for detecting agent inconsistency and internal information exposure. Notably, during attacks on GPT-based agents Pudong and Weather Buddy, we observed that Pudong occasionally resisted prompt exposure, whereas Weather Buddy often disclosed PACAT Level 2 or 3 information, either directly or indirectly, regardless of whether Level 1 had been triggered. Unlike prior approaches such as those described by Zou et al. (2023), the Doppelgänger method targets agent role hijacking and necessitates dedicated prompt engineering strategies to impose explicit constraints on prompt and plugin exposure. 
Such constraints are essential for robust agent design, particularly in commercial applications where intellectual property protection is critical. Detailed empirical data for these findings are presented in Appendix E. Furthermore, in the absence of CAT prompts, persona consistency was higher in the reasoning-optimized model than in the general-purpose model. Among commonly structured agents such as Pudong, consistency was preserved over a longer duration, though with greater variability observed within the reasoning model. These findings suggest that leveraging inference-oriented models during agent design may enhance consistency, likely due to their intrinsic inferential capabilities. Lastly, during our experiments with Gemini 2.5 Flash in Thinking mode, the model failed during the Simon+CAT prompt scenario, preventing quantitative evaluation. The relevant experimental data are provided in Appendix F.

Figure 4: Defense success rate against the Doppelgänger method when the CAT prompt is applied. Brown lines denote PACAT Level 1; mint lines denote PACAT Level 2. (a) Simon, (b) Pudong.
Since the advent of large language models, prompt engineering now enables the rapid, low-effort creation of diverse autonomous agents that are already in widespread use. Yet this convenience raises urgent concerns about the safety, robustness, and behavioral consistency of the underlying prompts, along with the pressing challenge of preventing those prompts from being exposed by users' attempts. In this paper, we propose the "Doppelgänger method" to demonstrate the risk of an agent being hijacked, thereby exposing system instructions and internal information. Next, we define the "Prompt Alignment Collapse under Adversarial Transfer (PACAT)" level to evaluate vulnerability to this adversarial transfer attack. We also propose a "Caution for Adversarial Transfer (CAT)" prompt to counter the Doppelgänger method. The experimental results demonstrate that the Doppelgänger method can compromise an agent's consistency and expose its internal information. In contrast, CAT prompts enable an effective defense against this adversarial attack.
[ "cs.AI", "cs.CR" ]
# 1 Introduction

Generative models for visual content have achieved remarkable advancements and have been applied to various fields, from amateur entertainment to professional creation. However, several challenges persist: models may generate outputs that conflict with human values, harmful content, or artifacts that fail to meet human expectations, such as inconsistencies with input conditions or suboptimal quality. In short, the model may not be well aligned with human preferences. Post-training, including supervised fine-tuning and alignment learning, has been proposed to address these issues, with reward models playing a pivotal role. Reward models are essential for data filtering, sample selection, and constructing datasets that guide models toward better alignment with human preferences. This paper proposes an efficient, low-cost, yet highly effective reward model and validates its effectiveness in the test-time scaling and post-training of visual generative models. Building effective reward models presents significant challenges. First, constructing reward models often requires extensive datasets: existing methods [19, 52] require hundreds of thousands to millions of manually labeled samples, which are expensive to collect. These datasets are typically annotated on the output domain of a specific generative model, resulting in a domain gap when applying the trained reward model to generative models with different output domains. Additionally, to comprehensively evaluate the quality of generated content across multiple dimensions, existing methods often require the manual design of various evaluation metrics [18, 25]. This not only increases engineering costs but may also lead to suboptimal trade-offs between dimensions. Moreover, it is difficult to ensure that the defined dimensions and their aggregation align well with general human preferences, often necessitating user studies to evaluate alignment [18, 25].
In summary, the challenges of constructing reward models include the difficulty of obtaining data, reliance on specific model output domains, and the inherent subjectivity of human preferences, which is hard to capture by designing dimensions. Inspired by adversarial learning [10], we propose GAN-RM, an efficient and cost-effective reward modeling framework that leverages a small set of representative human-preferred samples, referred to as Preference Proxy Data. These samples encapsulate latent human preferences without requiring manual annotation or explicit specification of quality dimensions. Our method offers several advantages: (1) GAN-RM eliminates the need for manual preference annotation. The only external data is a small set (a few hundred) of unlabeled representative samples, denoted as Preference Proxy Data. GAN-RM is trained to distinguish Preference Proxy Data from generative model outputs, thereby learning to assess generated samples. We employ a Rank-based Bootstrapping strategy, where the confidence scores from GAN-RM on these samples serve as soft labels. This approach leverages the additional data to retrain GAN-RM, enabling it to better capture latent human preferences. (2) GAN-RM supports multi-round post-training. In each round, samples identified as close to Preference Proxy Data are used to post-train the generator; in turn, the discriminator is retrained to differentiate these harder examples. Such an iterative "fake it" process progressively aligns generation quality with the latent human preferences in Preference Proxy Data. Experimental results show that our GAN-RM-based approach achieves performance comparable to or even surpassing methods like [46], which rely on 1M annotated human preference data from Pickapic [19]. In contrast, GAN-RM requires only 0.5K samples of Preference Proxy Data in the image quality experiment setting.
In addition to improving image quality, we also conducted experiments in image safety and video quality enhancement settings. Extensive experiments highlight the generalization capability of the GAN-RM framework across various scenarios.

# 2 Related Work

# 2.1 Text-conditioned Visual Generation

Generative Adversarial Networks (GANs) introduced deep-learning-based image generation from noise [10]. However, original GANs cannot generate images from text and suffer from unstable training. Diffusion models [41] offer more stable training, and later advancements such as DDPM [15] and DDIM [42] enable high-quality and efficient sampling. Text conditions are incorporated into text-to-image diffusion models [37, 36, 16, 38, 17, 30] and text-to-video models [3, 2, 20, 47, 13, 55], bridging the gap between textual and visual content. Latent Diffusion Models [9] enhance efficiency and diversity by leveraging latent spaces but still face challenges in learning semantic properties from limited data. An emerging trend focuses on integrating text and visual generation into unified frameworks [28, 7, 44, 12]. Chameleon [44] introduces an early-fusion approach that encodes images, text, and code into a shared representation space. UniFluid [7] proposes a unified autoregressive model that combines visual generation and understanding by utilizing continuous image tokens alongside discrete text tokens. These methods leverage LLMs to bring more powerful text understanding capabilities.

# 2.2 Reward Models for Visual Generation

Recent advancements in reward modeling for text-to-image [52] and text-to-video [11, 51] generation emphasize learning human preferences through scalable data collection and multimodal alignment. Several works on visual generation quality assessment [18, 27] have been proposed, inspiring the design of reward models for visual generation.
[14] introduced CLIPScore, leveraging cross-modal CLIP embeddings for image-text compatibility. Subsequent efforts focused on explicit human preference learning: [52] trained ImageReward on 137k expert comparisons, [19] developed PickScore from 1 million crowdsourced preferences, and [50] created HPS v2 using a debiased dataset of 798k choices, all demonstrating improved alignment with human judgments. Extending to video generation, VideoDPO [25] introduces a reward model that leverages many expert visual models to evaluate video quality and text-video alignment, requiring substantial engineering effort and significant computational resources. Reward models are also crucial for understanding inference scaling laws in visual generation [29, 40]. Compared to previous work, GAN-RM aligns visual generation models with human preferences without extensive human annotation, heavy engineering, or costly reward inference.

# 2.3 Reinforcement Learning for Diffusion Models

Reinforcement Learning from Human Feedback (RLHF) [39, 32, 57, 35, 31, 33] was introduced to improve generative models by enhancing quality and alignment with human values. RLHF has also been adapted to refine diffusion models [5, 46, 54, 23, 50] for better performance and alignment. Standard RLHF frameworks often employ explicit reward models. For instance, DPOK [8] uses policy gradient with KL regularization, outperforming supervised fine-tuning. [21] proposed a three-stage pipeline involving feedback collection, reward model training, and fine-tuning via reward-weighted likelihood maximization, improving image attributes. These methods highlight RLHF’s potential. To bypass explicit reward model training, reward-free RLHF via DPO has emerged: DiffusionDPO [46] and D3PO [53] adapt DPO [35] to diffusion’s multi-step denoising, treating it as an MDP and updating policy parameters directly from human preferences.
RichHF [22] uses granular feedback to filter data or guide inpainting, with the RichHF-18K dataset enabling future granular preference optimization. When differentiable reward models are available, DRaFT [4] utilizes reward backpropagation for fine-tuning, though this requires robust, differentiable reward models and can be prone to reward hacking.

Figure 1: Overview of the framework: GAN-RM training (Sec. 3.2) and data sampling and scoring (Sec. 3.3), feeding sample selection and post-training (supervised fine-tuning or preference optimization) of the generator.

# 3 Method

# 3.1 Data Construction

As shown in Fig. 1, the first step is to construct data for GAN-RM. We aim for GAN-RM to be trained without human preference annotations, relying only on user-provided data called Preference Proxy Data. To achieve this, we utilize the generative model's outputs alongside Preference Proxy Data; this combined data trains GAN-RM to effectively differentiate between the generative model's outputs and the target-domain data. Specifically, Preference Proxy Data is defined as $\mathcal{D}_p = \{x_i^+\}_{i=1}^N$, containing $N$ samples representing the user preferences, generally high-quality or safe samples. The discriminative dataset for training GAN-RM is defined as $\mathcal{D}_r = \mathcal{D}_p \cup \{x_j^-\}_{j=1}^N$, where $x_j^-$ denotes $N$ raw output samples generated by the model from different prompts. Prompts are randomly selected from the JourneyDB dataset [43].
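The labeling scheme above can be sketched in a few lines; this is a minimal illustration in which the placeholder strings stand in for images, and the function name is ours, not the paper's:

```python
# Build the discriminative dataset D_r = D_p ∪ {x_j^-}:
# Preference Proxy Data is labeled 1 (true), raw generator outputs 0 (false).
def build_discriminative_dataset(proxy_samples, generated_samples):
    """Pair each sample with the binary label used to train GAN-RM."""
    data = [(x, 1.0) for x in proxy_samples]       # D_p: user-preferred samples
    data += [(x, 0.0) for x in generated_samples]  # {x_j^-}: raw model outputs
    return data

# Example with placeholder samples (strings stand in for images).
d_r = build_discriminative_dataset(["proxy_0", "proxy_1"], ["gen_0", "gen_1"])
```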
For the bootstrapping training described later, we benefit from additional distilled positive and negative data. The trained GAN-RM is applied to outputs generated by the model from additional prompts. We then select the top $M$ highest-scoring samples as pseudo-positive samples and the $M$ lowest-scoring samples as pseudo-negative samples, forming the datasets $\mathcal{D}_f^+ = \{x_i^+\}_{i=1}^M$ and $\mathcal{D}_f^- = \{x_j^-\}_{j=1}^M$. The $M$ lowest-scoring samples are labeled the same as the $x_j^-$, while the highest-scoring samples are labeled according to their rank $r$. The logit score for the true category is computed as:

$$ y = e^{-\alpha \cdot r} $$

where $y$ is the pseudo-label and $\alpha > 0$ is a tunable hyperparameter that controls the rate of score decay with respect to rank. Datasets $\mathcal{D}_f^+$ and $\mathcal{D}_f^-$ provide additional pseudo-labeled data that further enhance training. Finally, the initial dataset $\mathcal{D}_r$ and the pseudo-labeled datasets $\mathcal{D}_f^+$ and $\mathcal{D}_f^-$ are combined into the final dataset $\mathcal{D} = \mathcal{D}_r \cup \mathcal{D}_f^+ \cup \mathcal{D}_f^-$, on which GAN-RM is trained.

# 3.2 GAN-RM Training

Since Preference Proxy Data is limited and it is often challenging to obtain a large amount of representative high-quality data, we leverage the power of large-scale pre-trained knowledge by building upon a robust pre-trained vision foundation model. Specifically, we design the architecture of GAN-RM based on the vision encoder CLIP-Vision from CLIP.
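The rank-based soft label $y = e^{-\alpha \cdot r}$ from Sec. 3.1 can be computed as follows; a sketch under the assumption that the top-scoring sample has rank $r = 0$ (the paper does not state the rank convention):

```python
import math

def rank_pseudo_labels(scores, m, alpha=0.1):
    """Select the top-M samples by GAN-RM score and assign soft 'true'
    labels y = exp(-alpha * r) that decay with rank r (top sample: r = 0).
    The bottom-M samples receive the hard negative label 0, like x_j^-."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    positives = [(order[r], math.exp(-alpha * r)) for r in range(m)]
    negatives = [(i, 0.0) for i in order[-m:]]
    return positives, negatives
```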
This ensures that GAN-RM benefits from a rich and generalized feature representation, enabling it to adapt to data-scarce scenarios where Preference Proxy Data is limited. After extracting image representations from CLIP-Vision, we introduce a Reward Projection Layer (RPL) to effectively distinguish samples from different domains. The RPL is implemented as a multi-layer perceptron (MLP) with normalization, refining the high-level features extracted by the pre-trained backbone, and computes a confidence score via a sigmoid activation for precise discrimination between Preference Proxy Data and generative outputs. The higher the RPL output, the greater its confidence that the current sample belongs to Preference Proxy Data. The training objective is to minimize the binary cross-entropy loss:

$$ \mathcal{L} = - \frac{1}{|\mathcal{D}|} \sum_{x \in \mathcal{D}} \left[ y \log(\hat{y}) + (1 - y) \log(1 - \hat{y}) \right], $$

where $y$ is the ground-truth label (1 for Preference Proxy Data and 0 for raw generation output) and $\hat{y}$ is the predicted confidence score from the RPL.

Rank-based Bootstrapping. Following the initial training phase, additional samples are generated by the current generative model and subsequently scored by GAN-RM. This step is crucial for bootstrapping GAN-RM's capabilities, allowing it to adapt to the output distribution of the generator. The highest- and lowest-scoring samples, $\mathcal{D}_f^+$ and $\mathcal{D}_f^-$ (as detailed in Section 3.1), which represent newly identified confident positive and negative examples, are incorporated into the training set $\mathcal{D}$ for GAN-RM. This enriched dataset, primarily composed of samples that more closely approximate Preference Proxy Data, enhances the model's performance.
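The RPL forward pass and the cross-entropy objective above can be sketched in plain Python; the single hidden layer, its ReLU activation, and the column-major weight layout are our assumptions for illustration (the paper's RPL is an MLP with normalization over CLIP-Vision features):

```python
import math

def rpl_score(features, w1, b1, w2, b2):
    """Reward Projection Layer sketch: a small MLP over frozen CLIP-Vision
    features; the sigmoid output is the confidence that the sample belongs
    to Preference Proxy Data."""
    hidden = [max(sum(f * w for f, w in zip(features, col)) + b, 0.0)
              for col, b in zip(w1, b1)]                    # ReLU hidden layer
    logit = sum(h * w for h, w in zip(hidden, w2)) + b2
    return 1.0 / (1.0 + math.exp(-logit))                   # sigmoid confidence

def bce_loss(labels, preds, eps=1e-7):
    """Binary cross-entropy over the dataset D (the training objective)."""
    preds = [min(max(p, eps), 1.0 - eps) for p in preds]
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(labels, preds)) / len(labels)
```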
Such bootstrapping training helps GAN-RM generalize to the output space of the generative model.

# 3.3 Sample Selection and Post-training

Sample Selection. An important application is using GAN-RM to select the optimal generated samples: during the inference phase of the generative model, GAN-RM evaluates the samples generated for a given input and the best one is selected, without fine-tuning or altering the parameters of the generative model. Specifically, for each prompt $p$, $K$ candidate samples $x_1, x_2, \ldots, x_K$ are generated, and their reward scores $r_1, r_2, \ldots, r_K$ are inferred via the trained GAN-RM. The reward score for a sample $x$ is computed as:

$$ r(x) = \sigma(\mathrm{RPL}(\mathrm{CLIP\text{-}Vision}(x))), $$

where $\sigma$ denotes the sigmoid function. The samples are then ranked in descending order of their predicted scores, and the highest-scoring one, $x^h = \arg\max_{x \in \{x_1, x_2, \ldots, x_K\}} r(x)$, is selected. As demonstrated in the subsequent experimental section, selecting $x^h$ proves optimal, achieving the best results across various metrics.

Post-training. In addition to sample selection, GAN-RM can also be utilized during the post-training phase: the reward scores it predicts for generated samples are used to construct datasets for further fine-tuning. Two main post-training approaches are considered, SFT and DPO. For SFT, the model is trained on the dataset composed of the selected samples $x^h$, the highest-scoring sample for each prompt as determined by GAN-RM, similar to the method in RAFT [6].
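The selection rule $x^h = \arg\max_x r(x)$ above, which also supplies the SFT training samples, can be sketched as follows; `score_fn` is a placeholder for $\sigma(\mathrm{RPL}(\mathrm{CLIP\text{-}Vision}(x)))$:

```python
def select_best_of_k(candidates, score_fn):
    """Rank the K candidate samples by reward r(x) and return x^h = argmax r(x)
    with its score; the generator's parameters are left untouched."""
    scores = [score_fn(x) for x in candidates]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores[best]

# Placeholder scorer (string length stands in for the GAN-RM reward).
x_h, r_h = select_best_of_k(["a", "bbb", "cc"], score_fn=len)
```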
This ensures that the fine-tuning process focuses on optimizing the model's performance on data close to Preference Proxy Data as identified by the reward model. For DPO, the predicted reward scores are used to construct preference pairs for training [46]. Specifically, we select the highest-scoring sample $x^h$ and the lowest-scoring sample $x^l = \arg\min_{x \in \{x_1, x_2, \ldots, x_K\}} r(x)$ by GAN-RM to form the paired dataset $\mathcal{D}_{\mathrm{post}}$ for each prompt $p$. For each pair $(x^h, x^l)$, the preference label is assigned to $x^h$.

Multi-round Post-Training with Reward Model Updates. Traditional DPO [46] with static preference data allows only a single round of training. Methods like RAFT [6], which utilize reward models for multi-round training, can train iteratively but suffer from overfitting because the reward model is not updated simultaneously. Our framework enables multi-round post-training while simultaneously updating the reward model, since GAN-RM is consistently trained to distinguish Preference Proxy Data from the outputs of the current generative policy. The detailed workflow is shown in Algorithm 1. In each training round, we use the current generative policy to synthesize new data, which is then utilized to update GAN-RM; the updated GAN-RM is in turn employed to refine the generative policy, creating a loop that iteratively enhances both components.

# Algorithm 1 Multi-round Post-Training with Reward Model Updates.

Require: Pre-trained generative policy $G$, number of rounds $T$, number of prompts $P$, number of samples per prompt $K$, Preference Proxy Data $\mathcal{D}_p$
1: Initialize $G^1 \leftarrow G$
2: for $t = 1$ to $T$ do
3: Generate samples using $G^t$ with $\mathcal{D}_p$ to form $\mathcal{D}$ (details in Sec. 3.1)
4: Utilize $\mathcal{D}$ to train GAN-RM $R^t$
5: Compute reward scores $r(x_{p,k})$ for all samples using $R^t$
6: For each $p$, select the highest-scoring $x^h$ and lowest-scoring $x^l$ to form the set $\mathcal{D}_{\mathrm{post}}$
7: Finetune $G^t$ on $\mathcal{D}_{\mathrm{post}}$ by SFT or DPO
8: end for
9: return Finetuned generative model $G^T$, reward model $R^T$

# 4 Experiments

# 4.1 Experiment Setup

Baselines. We validated the effectiveness of our method on multiple popular open-source image and video generative base models: SD 1.5 [37], SDXL [34], and VideoCrafter2 [3]. SD1.5 is the most basic and widely used open-source model. SDXL is an upgraded version of SD1.5, trained on a dataset roughly $10\times$ larger and capable of generating $1024 \times 1024$ resolution images with better image quality. VideoCrafter2 is an open-source video generation model commonly used in alignment research. We tested various applications of the reward model; specifically, we compared the effects of sample selection, SFT, and DPO on these base models.

Metrics. For the image quality setting, we calculated the FID, ImageReward [52], HPS [50], CLIPScore [14], and PickScore [19] metrics. FID assesses the diversity of the generated images and their closeness to the target distribution, while ImageReward, HPS, and PickScore primarily measure human preferences; CLIPScore evaluates the consistency between the generated images and the textual descriptions. In the video quality setting, we calculate FVD [45], LPIPS [56], and VBench [18]. FVD and LPIPS assess the distributional similarity between generated and target videos; VBench evaluates comprehensive human preferences. For the safety setting, the inappropriate probability (IP) metric [26] is calculated to show whether the generation is safe.
FID and CLIPScore show the generation quality and alignment with text.

Implementation details. We used a batch size of 8, gradient accumulation of 2, the AdamW optimizer with a learning rate of $10^{-7}$, and 500 warmup steps. For the image quality setting, we selected 500 images from JourneyDB [43] as target images to train the reward model, and trained the base generative model using 20,000 pairs labeled by the reward model. For the video quality setting, we selected 500 clips from Artgrid [1] for reward model training, and 5,000 video pairs were constructed for DPO training. For safety, the reward model is trained on 15,690 safe images and 15,690 unsafe prompts from CoProV2 [24], and the base model on 62,760 pairs. Each prompt yields 10 samples for images and 3 samples for videos. We used 4 NVIDIA RTX 5880 Ada GPUs for Stable Diffusion 1.5, taking 24 hours for data sampling and 2 hours for training. For SDXL, 4 NVIDIA H800 GPUs required 32 hours for sampling and 4 hours for training. VideoCrafter matched SD1.5's efficiency at 24 hours sampling and 2 hours training on H800s.

# 4.2 Performance

Figure 2: Distribution of FID, PickScore, ImageReward, and HPS for images of the same rank across different prompts, when the generative model $G$ generates $K = 10$ samples per prompt. Samples are sorted in descending order of GAN-RM score. A clear correlation emerges: higher-ranked samples exhibit noticeably better performance on all these metrics, highlighting the effectiveness of GAN-RM despite relying only on a small amount of non-paired Preference Proxy Data.

Sample Selection by Reward Model. One application of the reward model is to perform sample selection during inference.
Research [29] has shown that a scaling law also exists during inference: generating multiple images and selecting the best one yields better results than generating a single image. This approach does not require fine-tuning the base model, instead trading longer generation times for higher quality. We used the trained reward model for sample selection and found that it maintains a positive correlation with multiple metrics. Specifically, for each input prompt, we generate $K$ samples ($K = 10$) and sort them by GAN-RM score. Samples ranked higher (with higher scores) performed better on FID, ImageReward [52], HPS [50], and PickScore [19], showing a strong positive correlation, as illustrated in Fig. 2.

Alignment Training by Reward Model. For image generation, we conducted experiments under two distinct settings leveraging GAN-RM: image quality and safety. To train GAN-RM, we employed diverse datasets tailored to each setting, with detailed experimental configurations in Sec. 4.1. For the image quality evaluation, the FID metric is computed on the JourneyDB dataset [43], where our approach exhibits consistent improvements across multiple evaluation metrics compared to the baseline model. Notably, as shown in Tab. 1, GAN-RM achieves performance comparable or even superior to that of DiffusionDPO [46], which was trained on a significantly larger dataset of 1M human preference labels, from which PickScore is obtained. For the safety evaluation in Tab. 2, the FID metric is calculated on the COCO dataset, demonstrating that our method substantially enhances safety alignment while preserving image quality. Qualitative results are presented in Fig. 3 and Fig. 4. These results underscore the robustness and generalizability of GAN-RM across diverse application scenarios.

User study.
Quantitative metrics such as PickScore [19], HPS [50], and ImageReward [52], which are inherently influenced by human preferences, demonstrated the effectiveness of our method. To further validate it directly against human preferences, we conducted a user study to complement the previous experiments. Specifically, we randomly selected 50 prompts and generated corresponding images using both SD1.5 and Ours-DPO. A total of 14 independent volunteer evaluators, who were not involved in this research, were recruited to assess the generated images. The evaluators were presented with image pairs and asked to indicate their preference for each pair. We then calculated the average winning rate for models before and after post-training with GAN-RM. The results revealed a statistically significant preference for the images generated by Ours-DPO over the original SD1.5, with a winning rate of 74.4% versus 25.6%. This user study shows the superiority of our method in aligning with human qualitative preferences.

Table 1: Comparison of optimization approaches for the base model: reward-model-based sample selection (top-10 samples), DPO with pairwise preferences, and SFT on selected samples. Key to abbreviations: FT (fine-tuning required), Pref (preference dataset), Data (training data volume; DiffusionDPO [46] uses 1M labeled pairs while our method employs 0.5K unpaired samples), IR (ImageReward), PS (PickScore), CLIP (CLIPScore). Implementation details are in Sec. 4.1. Significant improvements are observed across metrics evaluating quality, user preference, and text-image alignment.

Table 2: Effects of the safety settings. IP represents the inappropriate probability. Our method significantly reduces unsafe content while maintaining image quality and text consistency.
Settings used solely for sample selection reduce harmful content less effectively but also sacrifice less image quality.

Video Generation. To further evaluate the applicability of our method, we extended it to video generation tasks. Specifically, we selected VideoCrafter2 [3], a widely recognized open-source video generation model, as the base model. The training dataset comprised 500 high-quality videos sourced from the Artgrid [1] dataset, which were utilized to train GAN-RM. Leveraging the ViCLIP model [49], we trained the corresponding RPL for GAN-RM. Data construction follows a strategy similar to that used for image generation: 5,000 prompts were sampled from VidProm [48], 3 videos were generated per prompt, and GAN-RM ranked the outputs. The highest- and lowest-scoring videos were selected to construct positive and negative preference pairs, which were used to fine-tune the model by DPO, resulting in the VideoCrafter2-DPO model. The performance of the trained model is evaluated across multiple metrics, including FVD, LPIPS, and VBench [18]. As shown in Tab. 3, the VideoCrafter2-DPO model demonstrates consistent and significant improvements across most metrics, underscoring the efficacy of GAN-RM in enhancing video generation quality and alignment.

prompt: old time railroad bridge inspired sneakers, worn, scuffed, highly realistic

Figure 3: Qualitative results. This figure compares the generation results of different strategies based on GAN-RM. The image quality generated by our method is significantly improved compared to the original models SD1.5 and SDXL in terms of text alignment and aesthetics.

Table 3: GAN-RM also demonstrates significant performance improvements in video generation, showcasing the generalizability of our method across different scenarios. Our approach achieved results comparable to VideoDPO [25], with a VBench score of 81.93.
Notably, we achieved this without relying on a large number of vision expert models, instead leveraging the efficiency of GAN-RM trained on Preference Proxy Data. Qualitative results are included in the Appendix.

Figure 4: Qualitative results under the safety alignment setting (the prompts shown in the figure contain violent, derogatory, and sexually explicit content). We train GAN-RM using safe images as Preference Proxy Data to align SD1.5, resulting in Ours-DPO. GAN-RM's alignment effect in terms of safety is evidently much better than that of the original model.

# 4.3 Ablation

Reward model. Training a reward model presents many challenges, particularly in determining the best approach to achieve optimal performance. Several methods can be employed to train a reward model; we compare different strategies in Tab. 4: 1) Naive: using a single checkpoint after training for a fixed number of steps. 2) Average: averaging multiple checkpoints taken at regular intervals during training. 3) Voting: aggregating scores from multiple checkpoints taken at regular intervals through a voting mechanism. 4) Bootstrap: our default setting, where Rank-based Bootstrapping leverages distillation techniques to augment the dataset as in Sec. 3.1. We find that, in general, model ensembling or data augmentation outperforms a single naive reward model, and GAN-RM trained with Rank-based Bootstrapping on more data achieves the best performance.

Table 4: Reward model ablation. We compare different methods for training the reward model; results are obtained by using the reward model for selection.
The results show that the Rank-based Bootstrapping method achieves the best performance across nearly all metrics.

Multi-round DPO. The multi-round DPO training results are shown in Tab. 5. Unlike the previous DiffusionDPO [46] method, which relies on manual annotations, we can perform multi-round DPO training because we iteratively update the reward model using data generated by the latest model. Specifically, in each round of training, we used the model from the previous round to generate data. The positive samples were always the target samples, which were used to train the reward model; the latest reward model was then used to annotate pair preferences for training the model. We observed that the performance of the reward model improved with each round of training, with the improvement becoming marginal after multiple rounds.
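The multi-round loop of Algorithm 1 exercised in this ablation can be sketched as a plain Python loop; every callable here (generator, reward-model trainer, scorer, fine-tuner) is a placeholder for the corresponding step, not the paper's implementation, and the pairing step follows the DPO variant:

```python
def multi_round_post_training(generator, proxy_data, prompts, rounds, k,
                              train_reward_model, score, finetune):
    """Algorithm 1 sketch: alternate between (re)training GAN-RM on fresh
    generations and fine-tuning the generator on (x^h, x^l) pairs."""
    reward_model = None
    for _ in range(rounds):
        # Generate K samples per prompt with the current policy G^t.
        samples = {p: [generator(p, i) for i in range(k)] for p in prompts}
        # Update R^t to separate Preference Proxy Data from current outputs.
        reward_model = train_reward_model(proxy_data, samples)
        # Build D_post from the highest/lowest scoring sample per prompt.
        pairs = []
        for p, xs in samples.items():
            ranked = sorted(xs, key=lambda x: score(reward_model, x))
            pairs.append((ranked[-1], ranked[0]))  # (x^h, x^l)
        generator = finetune(generator, pairs)     # G^{t+1}
    return generator, reward_model
```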
An effective reward model plays a pivotal role in reinforcement learning for post-training enhancement of visual generative models. However, current approaches to reward modeling suffer from implementation complexity due to their reliance on extensive human-annotated preference data or meticulously engineered quality dimensions that are often incomplete and engineering-intensive. Inspired by adversarial training in generative adversarial networks (GANs), this paper proposes GAN-RM, an efficient reward modeling framework that eliminates manual preference annotation and explicit quality dimension engineering. Our method trains the reward model by discriminating between a small set of representative, unpaired target samples (denoted as Preference Proxy Data) and ordinary model-generated outputs, requiring only a few hundred target samples. Comprehensive experiments demonstrate GAN-RM's effectiveness across multiple key applications, including test-time scaling implemented as Best-of-N sample filtering and post-training approaches such as Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO).
[ "cs.CV", "cs.AI", "cs.LG" ]
# 1 Introduction The challenge of offensive language is a continually growing concern in the field of natural language processing (NLP). The extensive use of online social communities has created opportunities for heated discussions, which can quickly escalate to offensive or toxic levels. To minimize the harmful social impact of offensive language, researchers have developed various datasets (Hartvigsen et al., 2022; Wen et al., 2023; Wiegand et al., 2021) and trained detoxification models based on these datasets (Dale et al., 2021; Pesaranghader et al., 2023; Logacheva et al., 2022; Dementieva et al., 2024; Lee, 2020), aiming to remove offensiveness while retaining the original content. Figure 1: A sample of 1,000 offensive comments was collected from Korean online communities. The implicit category was further divided into (1) disregard and mockery, (2) community-specific slang, and (3) variations of profanity and slang used to evade detection. The ideal data for training a detoxification model would be a paired dataset, consisting of toxic and detoxified versions of the same content. However, a significant challenge arises from the rapid evolution of offensive language, which requires continuous scraping of online communities (Park et al., 2023; Jeong et al., 2022; Lee et al., 2022; Moon et al., 2020; Song et al., 2021). Without adapting to emerging offensive terms, models become vulnerable to idiosyncratic slurs (van Aken et al., 2018), leading to performance degradation over time. Constructing paired datasets is typically more expensive due to the need for human annotation, and involving humans in continuously updating the model to address contemporary offensive language would be prohibitively costly. Leveraging language models to generate offensive examples (Shin et al., 2023; Hartvigsen et al., 2022) can reduce this expense. Table 1: Comparison of Korean offensive language datasets. 
(a) Human-annotated scraped dataset (Park et al., 2023), containing meaningless sentences or contextually ambiguous labels. (b) LLM-generated dataset (Shin et al., 2023), producing toxic comments irrelevant to the context. (c) Translated dataset (Song et al., 2021), translated by (Shin et al., 2022), where cultural and linguistic nuances are lost. Our dataset addresses these issues by maintaining contextual coherence with challenging slurs. A detailed comparison with additional examples is provided in Appendix A. 1 The 16th President of South Korea. However, these models struggle to generate recent offensive terms that they have not been trained on, and we found that off-the-shelf LLMs are generally weak at generating offensive language; see Table 1 for examples from different previous methods. Another important consideration is ensuring that there is a sufficient amount of implicitly toxic data in the dataset. This type of data may not include profanity or swear words, but can still carry derogatory meaning, such as sarcasm or social bias in context, making it more difficult to detect or collect (Breitfeller et al., 2019; MacAvaney et al., 2019). It is thus crucial to include an adequate volume of implicitly toxic data so that the model can be trained to effectively handle various forms of implicit offensiveness (Wiegand et al., 2021). In particular, Korean exhibits distinct forms of mockery, sarcasm, and wordplay that are deeply tied to the nuances of the language (Yook and Ahn, 1999; Merkin, 2009). Unlike English, which is an inflectional language, Korean is an agglutinative language and allows for a wider range of sarcastic tones through word variations. This linguistic structure makes it difficult for models trained on translated English datasets to interpret implicit expressions accurately. Figure 1 shows the proportion and examples of implicitly toxic texts, revealing that they exist at a similar rate to explicitly toxic content in actual online comments. 
However, we found that language models also tend to focus on explicit offensiveness, so automatically generated data from these models contains a lower proportion of implicitly toxic texts. To address these issues, we introduce an automated pipeline for synthesizing paired offensive language data, which we call Korean offensive language Data generation Automation (K/DA). The main contributions of this paper are as follows: 1) We introduce an automated pipeline, K/DA, for generating a paired synthesized dataset of neutral and toxic texts. This pipeline integrates recently emerging offensive language and ensures high-quality results by filtering out low-quality outputs. We further demonstrate its scalability by applying the same pipeline to different languages and model types, showing its language- and model-agnostic nature. 2) We provide a language detoxification dataset with around 7.5K neutral-toxic pairs. Unlike previous offensive or toxic language datasets, this paired structure facilitates easier model training and encompasses a broader range of offensive language, including explicit profanity, implicit offensiveness, and their variations. 3) Our experiments show that models trained on our dataset achieve improved detoxification performance. Figure 2: Overview of the K/DA pipeline. Step 1 (slang retrieval) rephrases neutral sentences into an offensive tone using slang retrieved from online communities; Step 2 (response filtering) accepts or rejects each generation based on pair consistency (context maintained vs. shifted) and implicit offensiveness. # 2 Related Works Offensive language datasets Although numerous studies have focused on creating datasets for offensive language detection (Zampieri et al., 2019; Davidson et al., 2017; Song et al., 2021; Hartvigsen et al., 2022), most of these resources are unfortunately limited to English. The scarcity of offensive language datasets in other languages hampers the performance of offensive language detection and detoxification systems in those languages. Previous research has also highlighted the importance of developing such datasets in other languages to address these gaps (Pitenis et al., 2020; Mubarak et al., 2021; Díaz-Torres et al., 2020). Translating existing datasets into the target language is a possible solution, but offensive language is highly dependent on the cultural, political, and linguistic context of its original language (Koppel and Ordan, 2011). As a result, translated offensive texts often lose their nuance and impact, as shown in Table 1. The Korean language, in particular, is rooted in a distinct cultural and contextual framework compared to other languages, making it necessary to develop offensive language datasets specific to Korean. 
While several studies have created non-paired Korean datasets for offensive language detection using human annotation (Jeong et al., 2022; Park et al., 2023; Lee et al., 2022; Moon et al., 2020) or machine generation (Shin et al., 2023), training detoxification models is more natural and efficient with paired datasets, which is the primary focus of our work. Data generative methods Due to the rapid evolution of offensive language, continuously updating datasets to include new terms is essential. This poses a significant challenge for approaches that rely on human annotation (Kennedy et al., 2020; Qian et al., 2019). While methods using language models for dataset generation (Hartvigsen et al., 2022; Shin et al., 2023) reduce the need for human labor, they can still encounter the same problem if the models are not updated to recognize new offensive terms. To address these challenges, this paper introduces a pipeline that leverages LLMs and Retrieval-Augmented Generation (RAG) to generate datasets aligned with real-world language trends, enabling more efficient updates without relying on human labor. # 3 Implicit Offensiveness and Trend-Aligned Slang Implicitly offensive language is defined as a tone of disregard or mockery to insult while avoiding explicit slurs or profanity (Wiegand et al., 2021). This definition helps group challenging examples of offensive language that are often mishandled by trained models, allowing us to specifically target these difficult cases. Several previous studies on offensive language have addressed implicit offensiveness, such as sarcasm through rhetorical expressions (Moon et al., 2020) or stereotype-based rude jokes (Park et al., 2023). However, we found that this definition does not fully capture the characteristics of real-world conversations. Figure 1 illustrates the types of offensive comments collected from Korean online communities, categorized using GPT-4 Turbo. 
Upon further investigation, we divided the implicitly offensive comments into three subcategories: (1) disregard and mockery, consistent with past definitions of implicit offensiveness, (2) community-specific slang that is familiar within certain groups but difficult for outsiders to interpret, and (3) variations of profanity used to avoid detection. The figure reveals that the majority (64%) of implicitly offensive comments fall under categories (2) and (3), which have not been extensively studied in prior research. This highlights the need for a dataset containing sufficient examples of these types. To address this, we coin the term trend-aligned slang to describe categories (2) and (3). These newly defined forms of implicit offensiveness present a unique challenge compared to the conventional definition, as these slang terms are localized within specific communities and evolve rapidly. Trend-aligned slang is continuously developed through various online disputes. As community administrators attempt to censor the use of this emerging slang, it morphs into variations that exploit phonetic or visual similarities, easily understood by humans but challenging for models to detect; see Appendix Table 9 for examples. Given the impracticality of continuously updating slang lists, developing an effective dataset collection method is crucial (van Aken et al., 2018). Moreover, trend-aligned slang encompasses both toxic language and hate speech targeting specific groups, making it difficult to establish clear distinctions. Previous research (Fortuna et al., 2020) pointed out the lack of standardized labeling criteria and reclassified hate speech as toxic in certain datasets to ensure consistency. Given this ambiguity, we construct the K/DA dataset to primarily capture implicitly offensive language as used in real-world contexts, rather than imposing rigid categorical boundaries. 
# 4 K/DA: Automated Korean Offensive Language Data Generation Pipeline Based on the discussions so far, our data generation pipeline must meet the following requirements: 1. Paired dataset: A dataset containing pairs of neutral sentences and their offensive counterparts is essential for the straightforward training of detoxification models. 2. Trend alignment: The pipeline should generate data incorporating recently developed trend-aligned slang to ensure that trained models remain effective over time. 3. High toxicity: Simply scraping data often leads to examples that contain only neutral textual expressions, diminishing the dataset's effectiveness. The pipeline must ensure the inclusion of highly toxic content. To fulfill these criteria, we propose a two-stage data generation process: (1) slang retrieval and (2) generation filtering. In the first stage, outputs with trend-aligned slang are generated from neutral sentences by leveraging context retrieved from online communities. In the second stage, two filtering criteria are applied to refine the generations, ensuring they satisfy two essential factors: preserving the original context and exhibiting sufficient (and implicit) toxicity. A summary of this data generation pipeline is shown in Figure 2. # 4.1 Slang Retrieval To stay aligned with the rapidly changing nature of slang, it is essential to develop a dynamic data generation pipeline rather than depending on a static dataset. However, as shown in Table 1, previous methods have significant limitations. Naive generation from language models tends to produce less toxic content and suffers from the same issues as static datasets when the language model is not updated to reflect current trends. Moreover, simple web scraping frequently results in irrelevant or meaningless sentences, which can adversely impact the performance of models trained on this data. 
To generate a paired dataset with trend-aligned, highly toxic slang, we employ Retrieval-Augmented Generation (RAG, Lewis et al., 2020). By retrieving trend-aligned slang from Korean online communities and augmenting neutral sentences, we generate toxic versions that preserve the original context, forming neutral-toxic pairs for the dataset. We start by building a vector database by embedding 92,953 sentences crawled from Korean online communities using SBERT (Reimers and Gurevych, 2019). Slang relevant to the context of the neutral sentences, determined by cosine similarity over the vector database, is incorporated into the prompt to guide LLMs in generating corresponding offensive language. See Appendix B for the detailed RAG setup. Multiple RAGs for maximized diversity A well-known limitation of the conventional RAG approach, which fixes the number of retrievals $n$, is that irrelevant information may be retrieved if the vector database lacks sufficient slang relevant to the current context. Reducing $n$ to avoid irrelevant retrievals can, however, compromise the diversity of the generated outputs. Asai et al. (2023) proposed a solution by training a language model to dynamically determine $n$, but this requires additional costs for dataset preparation and model training. To address this without additional model training, we apply RAG multiple times with different values of $n$ and forward all retrieval results to the filtering stage. The filtering process removes toxic augmentations that fail to preserve context due to irrelevant retrievals. Therefore, when the filtering works effectively, this approach ensures that the generated outputs maintain relevance while maximizing diversity. In K/DA, the number of retrievals $n$ is set to $\{0, 3, 5, 7, 9\}$. We conduct an empirical analysis showing that retrieval with different $n$ values is crucial for maximizing the potential quality of generations before filtering. For a detailed analysis, see Appendix B. 
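The multi-$n$ retrieval step can be sketched as follows. This is a minimal sketch over a toy vector database: SBERT is replaced by precomputed embedding vectors, and the `top_n`/`multi_n_retrieve` names are illustrative, not the authors' implementation. Each result set would drive one augmentation prompt, with the later filtering stage discarding generations built on irrelevant retrievals.

```python
import numpy as np

def top_n(query_vec, db_vecs, n):
    """Indices of the n most cosine-similar database entries."""
    if n == 0:
        return []
    q = query_vec / np.linalg.norm(query_vec)
    d = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    sims = d @ q
    return list(np.argsort(-sims)[:n])

def multi_n_retrieve(query_vec, db_vecs, n_values=(0, 3, 5, 7, 9)):
    """Run retrieval once per n; all result sets are forwarded
    to the filtering stage rather than choosing one n upfront."""
    return {n: top_n(query_vec, db_vecs, n) for n in n_values}
```

Note that $n = 0$ corresponds to augmentation without any retrieved slang, which preserves the LLM's unconditioned generations as one of the candidate pools.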
# 4.2 Filtering Augmented Generations Due to either irrelevant retrievals or limitations of the LLM, slang retrieval can sometimes produce inadequate generations. We identified the following three types of low-quality outputs: (1) Answer generation: the LLM interprets the reference neutral sentence as a question and responds to it, rather than turning it into an offensive statement. (2) Irrelevant generation: the LLM misinterprets the reference, producing irrelevant generations or introducing inappropriate slang. (3) Inoffensive generation: the LLM fails to make the sentence offensive, which frequently occurs with certain types of reference sentences, such as factual statements or information requests. To filter out these low-quality outputs from slang retrieval, we introduce a two-stage filtering process. The first stage removes inconsistent pairs (types 1 and 2), while the second stage eliminates outputs with insufficient implicit offensiveness (type 3). This filtering is performed by the LLM itself, reducing the reliance on human labor and aligning with recent trends (Chiang and Lee, 2023; Liu et al., 2023). Filtering for pair consistency Ensuring consistency between paired sentences, so that they convey the same meaning, is crucial for building an effective dataset to train detoxification models. The core idea behind our approach is to present the LLM with the identified types of inconsistent pairs, as well as more specific subtypes, and ask whether the generated pairs fall into these categories. This includes prompting the LLM to determine whether the generated output is a response to, a paraphrasing of, or has an arbitrary relationship to the neutral sentence. If the LLM deems the generated pair consistent, it is retained; otherwise, it is discarded. Empirically, we found that providing a one-shot example for each type of pair results in the most effective filtering. The exact structure of the prompt is shown in Appendix Tables 13 and 14. 
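The two-stage filter can be sketched schematically, with the LLM judges abstracted into callables. The real prompts (with one-shot examples per inconsistency type) are in the paper's appendix; the judge behavior here is a hypothetical stand-in, so only the control flow is meaningful.

```python
# Relation labels corresponding to low-quality output types (1) and (2).
INCONSISTENT = {"answer", "irrelevant"}

def filter_pairs(pairs, judge_relation, judge_implicit):
    """Stage 1 keeps pairs whose generation preserves the context;
    stage 2 keeps those with implicit, trend-aligned offensiveness."""
    kept = []
    for neutral, toxic in pairs:
        if judge_relation(neutral, toxic) in INCONSISTENT:
            continue  # stage 1: inconsistent pair, discard
        if not judge_implicit(toxic):
            continue  # stage 2: inoffensive or explicit-only, discard
        kept.append((neutral, toxic))
    return kept
```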
Filtering for implicit offensiveness When the topic of a neutral sentence is less controversial, retrievals from the vector database tend to have lower toxicity, leading to inoffensive generations. Conversely, when the topic is highly controversial, the retrievals may be filled with explicit profanities, resulting in explicitly offensive outputs rather than implicitly offensive ones. Since our goal is to create a dataset with a high proportion of implicitly offensive language, both of these scenarios need to be discarded. Similar to the filtering process for pair consistency, we provide the LLM with definitions of trend-aligned slang and implicit offensiveness, along with few-shot examples. We then prompt the LLM to evaluate whether the generated output includes the desired trend-aligned slang and implicit offensiveness. In contrast to using the LLM for direct generation of implicitly offensive language, we found this approach to be very effective at distinguishing the targeted implicitly offensive content with trend-aligned slang from other types. The complete filtering prompts can be found in Appendix Tables 15 and 16. # 5 Experiments To demonstrate the effectiveness of the proposed method, we evaluate the quality of the dataset generated using the K/DA pipeline in Section 5.1. Additionally, we conduct further experiments across different languages and models to validate the generalizability of our pipeline in Section 5.2. Lastly, we assess the performance of detoxification models trained on various datasets in Section 5.3. All evaluations were conducted using G-Eval (Liu et al., 2023), where GPT-4 Turbo was asked to provide scores ranging from 1 to 5. 
We evaluate the offensive examples and detoxified sentences using the following five criteria: Overall offensiveness (O): measures the degree of offensiveness in a sentence; Implicit offensiveness (I): measures the degree of implicit offensiveness in a sentence, following our expanded definition; Consistency (C): measures how well the paired data retains the same meaning; Fluency (F): evaluates grammatical correctness and natural flow; and Perspective: measures how likely a comment is to be perceived as harmful using Google Jigsaw's Perspective API. Table 2: G-Eval results for datasets filtered according to different filtering prompts. The retained column shows the ratio of generations retained after filtering. The numbers in parentheses indicate the standard error. Table 3: G-Eval results on 500 toxic–neutral pairs. Consistency is only computed for paired datasets. The numbers in parentheses indicate the standard error. For the dataset evaluations in Section 5.1 and Section 5.2, 500 randomly sampled neutral-toxic pairs were evaluated, and for the evaluation of the detoxification model in Section 5.3, a test set of 100 randomly sampled examples was used. The evaluation prompts for each criterion are provided in Appendix Table 17. # 5.1 Evaluation of K/DA Pipeline Pair consistency filtering As shown in Table 2, we evaluated three different prompts for filtering for pair consistency: Context Shift, which asks the LLM to distinguish between (answering or criticizing the reference) and (preserving the context); QA and Paraphrasing, which asks it to categorize into (answering the reference), (preserving the context), and (arbitrary); and QA, which asks it to distinguish between (answering the reference) and (arbitrary). C. S. & QA and P. indicates the intersection of the first two prompts, discarding any generations that fail to pass both filters. The actual prompts are provided in Appendix Table 14. 
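Throughout the tables, each criterion is reported as a mean score over the sampled pairs with its standard error in parentheses. A minimal sketch of that aggregation (the exact estimator is our assumption, since the paper does not specify it):

```python
import math

def mean_and_se(scores):
    """Mean and standard error (sample std / sqrt(n)) of 1-5 ratings."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    return mean, math.sqrt(var / n)
```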
The results highlight the importance of including a (preserving the context) category, as the performance of the QA prompt, which omits this category, declines compared to unfiltered data. This decline is caused by the misclassification of consistent pairs as (answering the reference) when the LLM determines they do not fit into the (arbitrary) category. By providing more detailed specifications of inconsistency types, as done with the Context Shift prompt, we observed an improvement in performance. Although the highest consistency was achieved by intersecting two prompts, the inefficiency caused by the low retention rate led us to use the Context Shift filtering for further experiments. Exemplar results on pair consistency filtering are provided in the Appendix Table 11. Implicit offensiveness filtering As shown in Table 2, we evaluated three different prompts for filtering for implicit offensiveness: Derogatory Detection, which asks to distinguish between (implicit) and (others) given our definition of implicit offensiveness; Tone Classification, which asks to categorize into (implicit), (neutral) and (negative) given general definition on those categories; and Multi-meaning Relationship, motivated by (Doh, 2018), which asks to categorize into 6 different classes. The actual prompts are provided in the Appendix Table 16. The results indicate that as the number of labels increases, the retention rate decreases. Providing our expanded definition of implicit offensiveness proved crucial for achieving high scores in implicit offensiveness. While the Multi-meaning Relationship prompt yielded the best results in terms of implicit offensiveness, its extremely low retention rate made it impractical. As a result, the Derogatory Detection prompt was selected as the final method, as it demonstrated a strong ability to identify implicit toxicity while maintaining a more reasonable acceptance rate. 
Although other prompts achieved a higher rate of discarding inoffensive content, they also rejected a significant number of implicitly offensive generations. Exemplar results are provided in Appendix Table 12. Dataset Comparison Table 3 presents the G-Eval evaluations of the dataset generated by the K/DA pipeline compared to other Korean offensive language datasets. Using the proposed pipeline, we were able to create a paired dataset with greater implicit offensiveness and higher consistency between pairs. The tendency for overall offensiveness to be the lowest while implicit offensiveness remains the highest indicates that the dataset has been appropriately constructed, aligning with the definition of offensive language targeted in our paper. Table 4: Human evaluation results on 50 random samples from K/DA and K-OMG. The numbers in parentheses represent Cronbach's $\alpha$. LLM reliability To assess the reliability of using the LLM, we compared its evaluations with the judgments of 15 human evaluators. We randomly selected 100 generated pairs for each filtering condition and asked the evaluators to judge them using the same filtering criteria used by GPT-4 Turbo. The agreement rate with GPT-4 Turbo was 86% for pair consistency and 90% for implicit offensiveness, indicating that its filtering results align closely with human judgment and can be considered reasonably reliable. More information can be found in Appendix I. Human Evaluation The quality of the K/DA dataset was evaluated by the same human evaluators, rating four categories on a 1–5 scale: Overall O. (O), Implicit O. (I), Consistency (C), and Fluency (F). This was also compared to the human evaluation of the machine-generated dataset K-OMG; however, since the instructions are not entirely identical to those of K-OMG, we conducted an approximate comparison; see Table 4. K/DA received higher scores for O and I, which are incorporated as a single criterion (O) in K-OMG, reflecting offensive language in online communities more effectively. 
While K-OMG achieved a higher score for C, its Cronbach’s α was relatively low, making it less reliable for direct comparison. Fluency was also rated higher in K-OMG; however, unlike K-OMG’s evaluation instruction, which allowed evaluators to disregard grammatical errors, we did not include such a provision, leading to lower fluency scores in our evaluation. # 5.2 Generalization Across Languages and Models Our proposed dataset generation pipeline is primarily developed with a focus on the Korean language and proprietary LLMs. However, the design is inherently language-agnostic and model-agnostic. To validate this generalizability, we conduct two additional experiments using the same pipeline: (1) Cross-lingual extension: Applying the pipeline to English data. (2) Cross-model extension: Applying the pipeline with open-source multilingual LLMs. Cross-Lingual Generalization To validate the language-agnostic nature of our approach, we replicate the pipeline in English. We evaluate 500 English text pairs using G-Eval, and our dataset demonstrates the highest level of implicit offensiveness, highlighting its applicability across languages. See Appendix G for details. Table 5: Evaluation of detoxification models trained with instruction fine-tuning on various datasets. The results are reported across multiple test datasets. The Vanilla LM column represents the Ko-LlaMA3-Luxia-8B base model used for instruction tuning. The Raw Dataset column indicates the evaluation results of the test dataset itself without any detoxification. The numbers in parentheses represent the standard error. Table 6: G-Eval results on 500 toxic–neutral pairs from datasets generated by GPT-4 Turbo and open-source models. Overall offensiveness (O), implicit offensiveness (I), and consistency (C) are evaluated. 
Parentheses indicate standard error. Cross-Model Generalization To further validate the model-agnostic nature and reproducibility of our pipeline, we replicated the experiments using two open-source multilingual LLMs that support Korean, without additional fine-tuning: Trillion-7B (Han et al., 2025) and Gemma2-9B (Team et al., 2024). The results in Table 6 demonstrate performance competitive with GPT-4 Turbo on our key metrics, implicit offensiveness and consistency, despite the models' smaller size. See Appendix H for details of the experiments. # 5.3 Language Detoxification based on K/DA Experiment settings In this section, we evaluate K/DA in real-world scenarios by applying the data pipeline to train a detoxification model. To ensure effective comparison across various datasets, we use a simple approach: instruction fine-tuning a large language model with different datasets. For training, a neutral-toxic paired dataset is used, where the template includes an instruction to detoxify the toxic sentence, and the answer consists of the corresponding neutral sentence. Since this training method requires paired datasets, we adopted K-OMG (Shin et al., 2023) and the translated CADD dataset (Shin et al., 2022) as baselines, using their (context, toxic comment) pairs as a paired dataset despite their inconsistencies. After training the detoxification models, they were tasked with detoxifying three different test datasets of offensive language: our dataset, KOLD (Jeong et al., 2022), and BEEP (Moon et al., 2020). Testing on the data generated by our proposed pipeline evaluates the model's in-distribution performance, while the latter two datasets assess the model's ability to generalize. Figure 3: Human evaluation of detoxification performance tested on our model. It represents the percentage of preference for detoxified responses generated by our model, the model trained on another dataset (K-OMG, translated CADD), and cases where the performances are indistinguishable. 
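The construction of instruction-tuning examples from neutral-toxic pairs can be sketched as follows. The template wording and field names here are assumptions for illustration; the paper's exact instruction template is not reproduced.

```python
# Hypothetical instruction template; the paper's verbatim prompt differs.
TEMPLATE = "Detoxify the following sentence while preserving its meaning:\n{toxic}"

def build_examples(pairs):
    """Turn (neutral, toxic) pairs into instruction-tuning examples:
    the toxic sentence fills the instruction, the neutral one is the answer."""
    return [
        {"prompt": TEMPLATE.format(toxic=toxic), "answer": neutral}
        for neutral, toxic in pairs
    ]
```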
Detoxification Performance and Generalization across Datasets Table 5 presents the G-Eval results for detoxification. Although three of the criteria are the same as in the previous section, their significance is different here. Previously, we prioritized high offensiveness in the dataset, but the goal now is to achieve low offensiveness in the detoxified output. Along with reducing offensiveness, high consistency and fluency scores are essential, as a model could easily lower offensiveness by removing most of the potentially offensive content, but this would result in lower consistency and fluency scores. The overall results indicate that having a paired dataset with high consistency is crucial, as detoxification models trained on K-OMG and CADD do not show statistically significant improvement over the Vanilla LM. In contrast, the instruction-tuned detoxification model based on K/DA demonstrates improvements across all five criteria when tested on Ours and KOLD datasets. It is also evident that the superior detoxification performance achieved through instruction tuning on K/DA diminishes as we attempt to generalize further and disappears when tested in the most challenging transfer setting, BEEP. This decline is primarily due to the limited coverage of the neutral sentence from the dataset used, a limitation that can be easily addressed by diversifying the neutral sentence data. Examples are provided in Appendix Table 22. Evaluating Detoxification Quality via Human Judgments In Figure 3, human evaluators assessed the detoxified responses generated by models trained on K/DA, K-OMG, and CADD. The model trained on our dataset was preferred over the others. You can find detailed guidelines in Appendix I.
Language detoxification involves removing toxicity from offensive language. While a neutral-toxic paired dataset provides a straightforward approach for training detoxification models, creating such datasets presents several challenges: i) the need for human annotation to build paired data, and ii) the rapid evolution of offensive terms, rendering static datasets quickly outdated. To tackle these challenges, we introduce an automated paired data generation pipeline, called K/DA. This pipeline is designed to generate offensive language with implicit offensiveness and trend-aligned slang, making the resulting dataset suitable for detoxification model training. We demonstrate that the dataset generated by K/DA exhibits high pair consistency and greater implicit offensiveness compared to existing Korean datasets, and also demonstrates applicability to other languages. Furthermore, it enables effective training of a high-performing detoxification model with simple instruction fine-tuning.
[ "cs.CL" ]
# 1 Introduction Motivation. Approximate Nearest Neighbor Search (ANNS) for high-dimensional vector databases [32, 89] is heavily used for semantic search in multiple application areas [31], including web search engines [17, 19], multimedia databases [35, 80], recommendation systems [20, 23, 77], image retrieval [95, 102], Large Language Models (LLMs) [2, 68, 84] and Retrieval Augmented Generation (RAG) [40, 59, 61]. ANNS has attracted massive industrial interest recently as new generations of embedding models have enabled powerful semantic search. In response, multiple SQL and NoSQL database vendors have recently announced ANN indices in support of ANNS [1, 3, 22, 33, 65, 66, 69] and, furthermore, multiple purpose-built vector databases featuring ANNS have been launched by startups [76, 83, 86] and by cloud providers [45, 64]. Figure 1: Early termination margins for target recall 0.80. Each curve represents recall improvement for a query in the HNSW index versus query answering time. The last point of each curve represents the point where the HNSW search normally terminates. All queries have significant potential for speedups in achieving the desired recall target (i.e., 0.80). ANNS presents an inherent tradeoff between performance and recall [6, 32, 49, 57, 78, 87, 101]: at a mere recall loss of, say, 5%, the search is accelerated by many orders of magnitude. Higher recall loss leads to higher performance and, conversely, lower recall loss leads to lower performance. Problem. Different applications and different (classes of) users have diverse requirements for search quality. Some users expect better search quality from the ANNS algorithm at the expense of search time, while others expect fast query response times by willingly compromising some result quality. Unfortunately, each algorithm provides its own algorithm-dependent parameters to enable applications to influence the recall/performance tradeoff. This situation is problematic in more than one way. 
First, application developers have to experiment with these parameters to fine-tune them and produce the desired recall for each use case. Second, the chosen parameters may produce good recall for some queries, but bad recall for other, harder queries. Last, if the parameters are tuned for the hard queries, then the ANNS algorithm will be unnecessarily slow and will needlessly spend resources on the easy queries. Query hardness corresponds to the computational effort required to process a query to achieve a given recall target; in several ANNS approaches, it is reflected by the number of distance calculations performed [90]. Typical query workloads in ANNS applications often contain queries of varying hardness, and this diverse range of required search effort is prevalent across many scenarios [14, 90, 93, 103, 104].

Toward solving this problem, recent ANNS systems and research works [25, 69, 101] introduced declarative target recall: the application and/or user declares an acceptable target recall level, and the ANNS algorithm aims to deliver the declared target recall while optimizing performance as much as possible. The first approaches for declarative recall adjust an ANN index, such as HNSW [62], by fine-tuning the index parameters for a single target recall of interest [25, 101]. However, such approaches require extensive tuning, as they must navigate a complex, multidimensional parameter space to optimize the index and search parameters and meet the declared recall target on average for a given query workload. In addition, they are unable to adapt to the hardness of each query, since the parameters are fixed for a query workload and cannot be dynamically adjusted. Another approach is to create an ANNS index once and then map various target recall levels to their corresponding search parameters.
In HNSW, for example, this approach is Recall to efSearch Mapping (REM), which establishes a mapping between each declarative recall target and the efSearch parameter, which controls the amount of search effort. REM offers a significant advantage over previous alternatives, as it requires substantially less tuning time: only a single parameter (efSearch) requires tuning, and the mapping can be established through a single linear scan over multiple efSearch values for all declarative target recall levels, rather than fine-tuning parameters separately for each recall target. However, REM still relies on fixed parameters for the entire query workload and cannot adjust to the hardness of individual queries.

Therefore, we propose an alternative, run-time adaptive approach, which can adapt to the query hardness. We develop our approach for the popular HNSW [62] algorithm (and also extend it to other ANNS methods). We observe that a query configured with parameters that enable it to achieve very high recall in an HNSW index will naturally pass through all lower recall levels during the search process. This is illustrated in Figure 1, where each curve represents the progression of recall for a query on the SIFT [56] dataset using the HNSW index. For example, if we stopped the algorithm early, at 2ms, Query 1 (the blue curve) would deliver 0.80 recall. The "easy" Query 2 has already achieved 1.00 recall around the 1.75ms mark, and 0.80 recall since the 1.0ms mark; the time spent afterwards is wasted. In contrast, Query 3 is only at 0.60 recall at the 2ms mark. Figure 1 shows that multiple recall targets for each query can be achieved well before the HNSW search naturally completes.
This implies that, if we could precisely estimate the recall of a query at any point during the search, we could offer an efficient declarative recall solution that requires no parameter tuning per query and naturally accommodates any user-declared recall target, as long as it is fundamentally achievable by the index. However, determining the current recall is not a trivial task, since different queries have different hardness and reach the target recall at different points in time. In Figure 1, we observe that we can terminate the search for Query 2 well before 4ms, while Query 3 goes on until 4ms to reach the same recall target.

Our Approach: DARTH. We present DARTH, a novel approach to solving the problem of declarative recall for ANNS applications. We integrate DARTH into the HNSW algorithm, which is a popular choice and exhibits very good empirical performance [6, 87, 89]. DARTH exploits a carefully designed recall predictor model that is dynamically invoked at carefully selected points during the HNSW search to predict the current recall and decide, based on the specified recall target, whether to terminate early or continue the search.

Designing an early termination approach is a complex task, as it requires addressing multiple challenges to develop an efficient and accurate solution. First, we need to identify the key features of the HNSW search that serve as reliable predictors of a query's current recall at any point during the search. Our analysis shows that the current recall can be accurately estimated by employing features related to the HNSW search process. These features capture both the progression of the search (by tracking metrics such as distance calculations) and the quality of the nearest neighbors found (by examining specific neighbors and their distance distributions). Moreover, we need to select an appropriate recall predictor model to train on our data.
We chose a Gradient Boosting Decision Tree (GBDT) [67], because of its strong performance in regression tasks and its efficient training time. The GBDT recall predictor results in extremely fast training times, which are negligible compared to the typical index creation times for HNSW.

Note that an accurate recall predictor is not enough to provide an efficient solution for declarative recall: if the recall predictor is invoked too frequently, the cost of inference will cancel out the benefits of early termination. Frequent predictor calls, i.e., small prediction intervals ($pi$), provide more accurate early termination, at the cost of increased prediction time; infrequent predictor calls, i.e., large $pi$, risk missing the optimal termination point, resulting in unnecessary computations. To address this challenge, we develop an adaptive prediction interval method, which dynamically adjusts the invocation frequency: it invokes the recall predictor more frequently as the current recall gets close to the recall target, ensuring both accuracy and efficiency. In addition, we demonstrate how DARTH can be effectively integrated into other ANNS methods, such as other graph-based approaches and the IVF [28] index.

We evaluate the efficiency of DARTH through an extensive experimental evaluation using 5 popular datasets of varying sizes and dimensionalities. Our results demonstrate that the early termination recall of DARTH is accurate: DARTH is always able to meet the user-declared recall targets while offering significant speedups. Specifically, we show that our approach achieves up to 14.6x (average 6.8x, median 5.7x) speedup compared to the HNSW search without early termination. DARTH terminates the search very near the optimal point, performing on average only 5% more distance calculations than the optimal.
We compare our approach to several other approaches for declarative recall, and we show that DARTH provides State-of-the-Art (SotA) search quality results, while delivering efficient search times. We show the superiority of DARTH for query workloads that include harder and Out-Of-Distribution (OOD) queries, demonstrating that DARTH is the method that achieves the best results. Lastly, we demonstrate that DARTH is efficient for IVF as well, always meeting the declared recall targets and achieving up to 41.8x (average 13.6x, median 8.1x) speedup compared to IVF search without early termination.

Contributions. We summarize our contributions as follows.

• We present DARTH, a novel approach for declarative recall for ANNS indexes using early termination, natively supporting any recall target attainable by the index, without the need for tuning. To the best of our knowledge, DARTH is the first solution to achieve declarative recall through early termination for ANNS.

• We describe the training of an accurate recall predictor model for DARTH, by carefully examining and identifying descriptive search features that reveal the current recall of a query during the search, and by designing an efficient training data generation method that allows us to prepare the training data and train our recall predictor efficiently.

• We propose an efficient adaptive prediction interval method that carefully chooses when to invoke our recall predictor. As a result, DARTH early terminates queries (almost) exactly when needed, avoiding overheads from needless invocations and/or computations. In addition, we describe a generic hyperparameter selection method that removes the need to fine-tune our approach, making it essentially parameter-free.
• We conduct a wide experimental evaluation using 5 popular, diverse datasets, which validates the superiority of DARTH, both in terms of speed and accuracy. The evaluation shows that DARTH achieves significant speedups: up to 14.6x (average 6.8x, median 5.7x) for HNSW, and up to 41.8x (average 13.6x, median 8.1x) for IVF. Furthermore, its early termination prediction is near-optimal: it performs only 5% more distance calculations than the true optimal of each query. Note that the true optimal of each query is not attainable in practice; we obtain it (for the purpose of experimentation) by extensively analyzing the search of each query, collecting the exact point at which it reaches the declared target recall. In addition, we show that DARTH achieves SotA search quality results, outperforming competitors in most cases, while remaining efficient in search times. At the same time, it is the only approach that maintains robust recall results for workloads of increasing hardness and Out-Of-Distribution (OOD) queries.

# 2 Background and Related Work

# 2.1 Preliminaries

k-Nearest Neighbor Search (NNS). Given a collection of vectors $V$, a query $q$, a distance (or similarity) metric $D$, and a number $k$, $k$-Nearest Neighbor Search (NNS) refers to the task of finding the $k$ most similar vectors (nearest neighbors) to $q$ in $V$, according to $D$ [32]. Without loss of generality, we use the Euclidean distance ($L_2$) as the distance metric. The nearest neighbors can be exact or approximate (in the case of Approximate Nearest Neighbor Search, ANNS).
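As a concrete point of reference for the definition above, exact NNS under $L_2$ can be sketched as a brute-force scan. This is a minimal illustration of the task definition (function and variable names are ours), not an ANNS index; ANNS indices exist precisely to avoid this linear scan over the whole collection.

```python
# Minimal exact k-NN under the Euclidean (L2) distance, as defined above.
import math

def knn(V, q, k):
    """Return the indices of the k vectors in V closest to q under L2."""
    dists = [(math.dist(v, q), i) for i, v in enumerate(V)]
    dists.sort()
    return [i for _, i in dists[:k]]

# Tiny example collection and query:
V = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (0.5, 0.5)]
print(knn(V, (0.0, 0.1), 2))  # indices of the 2 nearest vectors
```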
When dealing with approximate search, which is the focus of this paper, two key measures are used for evaluation: (i) search quality, usually quantified using recall (the fraction of actual nearest neighbors that are correctly identified) and relative distance error (RDE, the deviation of the distances of the retrieved nearest neighbors from those of the actual nearest neighbors), and (ii) search time, i.e., the time required to perform the query search.

ANNS Indices. ANNS tasks are efficiently addressed using specialized ANNS indices [9, 89]. These approaches construct an index structure over the vector collection $V$, enabling rapid query answering times. Such indices generally fall into four main categories: Tree-based [16, 29, 34, 70, 73, 74, 88, 91, 92, 99, 100], LSH-based [24, 50], Quantization-based [39, 42, 63], and Graph-based [38, 44, 46, 53, 87, 89]. In addition, several hybrid methods have emerged, such as ELPIS [8] (Tree-Graph-based), DET-LSH [97] (Tree-LSH-based), ScaNN [47] and IVF-PQ [28, 54] (Tree-Quantization-based), and others [19, 27, 96]. Graph-based indices, which are the primary focus of this work, create a graph over $V$ by representing vectors as nodes, with edges reflecting some measure of proximity between the nodes. There are numerous variations of graph-based methods, such as HNSW [62], DiskANN [53], and others [26, 38, 46]. Still, the search process for a query remains largely consistent across all approaches, since the main operation is to traverse the graph, collecting the nearest neighbors of a query.

Hierarchical Navigable Small World (HNSW) graph. The HNSW graph [62] is one of the most efficient and accurate SotA indices for ANNS [6, 87, 89]. It organizes vectors into a multi-layered hierarchical structure, where each layer represents a different level of proximity. Vectors are inserted starting from the base (lowest) layer, with higher layers being created probabilistically.
The key parameters that influence the performance of HNSW graph creation are $M$, efConstruction, and efSearch. The parameter $M$ defines the maximum number of neighbors a vector can have. A higher value of $M$ improves search quality by making the graph denser, but it also increases memory usage and search time. The parameter efConstruction controls the number of candidates considered during graph construction, with larger values resulting in a more accurate graph at the cost of longer construction times.

An overview of the query phase is illustrated in Figure 2(a). The search for a query starts from a predefined entry point in the top layer of the graph. The search progresses greedily, using the closest node of each layer as the entry point for the next layer, until the base layer of the graph (which contains all vectors of the dataset) is reached. Once the search reaches the base layer, it continues with a detailed traversal of candidate neighbors (shown in green) to retrieve the most similar vectors, keeping the candidate vectors in a priority queue, and the collected nearest neighbors in a result set, usually implemented as a heap. The amount of search effort in the base layer is controlled by the parameter efSearch, which determines the number of candidate neighbors to examine during query processing. A higher efSearch leads to better recall, but at the expense of longer search times. The HNSW search in the base layer terminates when no better candidates remain to be added to the priority queue (meaning all vectors in the priority queue are closer to the query than their unexplored neighbors), or when the entire base layer has been searched (a very rare occurrence). These termination points, occurring without early termination, are referred to as natural termination points, and the HNSW index that employs the search algorithm described above, terminating at the natural termination points, is referred to as plain HNSW.
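The base-layer traversal just described can be sketched as follows. This is our own simplified single-layer version (plain adjacency dict, standard best-first stop condition), meant only to make the roles of the candidate priority queue, the bounded result heap, and the natural termination point concrete; it omits the upper layers and all real-index machinery.

```python
import heapq
import math

def base_layer_search(graph, vectors, entry, q, ef_search):
    """Best-first search over the base layer. graph: {node: [neighbors]},
    vectors: {node: tuple}. Returns up to ef_search (distance, node) pairs."""
    dist = lambda n: math.dist(vectors[n], q)
    visited = {entry}
    d0 = dist(entry)
    candidates = [(d0, entry)]        # min-heap: closest candidate first
    results = [(-d0, entry)]          # max-heap (negated): furthest result on top
    while candidates:
        d, node = heapq.heappop(candidates)
        # Natural termination point: the closest remaining candidate is
        # already further than the furthest of the ef_search results kept.
        if len(results) >= ef_search and d > -results[0][0]:
            break
        for nb in graph[node]:
            if nb in visited:
                continue
            visited.add(nb)
            d_nb = dist(nb)
            if len(results) < ef_search or d_nb < -results[0][0]:
                heapq.heappush(candidates, (d_nb, nb))
                heapq.heappush(results, (-d_nb, nb))
                if len(results) > ef_search:
                    heapq.heappop(results)   # drop the current furthest
    return sorted((-nd, n) for nd, n in results)
```

A larger `ef_search` keeps more results alive, which delays the termination condition and explores more of the graph, mirroring the recall/time tradeoff described above.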
# 2.2 Related Work

Vector Data Management Systems (VDMS). The growing demand for applications that leverage ANNS algorithms has spurred substantial research into designing systems capable of managing large-scale vector collections [1, 18, 28, 86]. A VDMS encompasses a collection of mechanisms, algorithms, and metrics that support efficient and scalable similarity search by implementing diverse similarity search indices and associated technical functionalities. Comprehensive overviews are provided in [48, 71, 98].

Figure 2: (a): Example of locating the nearest neighbor of a query in an HNSW graph. (b): Algorithms $A$ and $B$ achieve the same recall, yet the results of algorithm $A$ are of higher quality.

Automated Performance Tuning. Several approaches use automated parameter tuning in a VDMS to reach a specific recall target for a query collection while also optimizing search time as much as possible. These methods navigate the complex, multidimensional parameter space of ANNS indices. Some techniques are designed specifically for vector collections [25, 101], while others are adapted from methods originally developed for relational databases [5, 85]. However, these approaches incur substantial overheads, as they iteratively build multiple index types with many parameter configurations during the tuning process. In addition, they have to be re-tuned from scratch if the recall target changes, and since the parameters are fixed per workload, they cannot adapt to the hardness of individual queries.

Early Termination Approaches. To the best of our knowledge, DARTH is the only approach that directly and natively tackles the problem of declarative recall using early termination. Recently, early termination techniques for ANNS have been proposed. These methods aim to terminate the search for a query as soon as a specific algorithm-specific objective is met (e.g., all nearest neighbors are found), thus improving search time.
The current SotA approaches are ProS [30, 43] and Learned Adaptive Early Termination [60]. Both approaches leverage the observation that, in nearest neighbor search (both exact and approximate), the k nearest neighbors of a query are typically found early in the search process, allowing a significant portion of the search to be skipped. ProS employs statistical and Machine Learning (ML) models to terminate the search early once all nearest neighbors are found, focusing on exact similarity search for data series using the iSAX [13] index. It is a progressive approach, meaning that during the search for the neighbors of a query, the model is invoked multiple times to decide whether all nearest neighbors have been found, allowing for progressively better and more accurate predictions. In contrast, Learned Adaptive Early Termination uses an ML model to predict how many distance calculations are required for a query to retrieve all nearest neighbors that the index search algorithm would find, targeting the HNSW and IVF-PQ [54] indices. In this method, the model is called only once, at a specific time during the search, indicating the total number of distance calculations that need to be performed.

# 2.3 Declarative Target Recall Definition

DARTH supports ANNS with declarative target recall. In particular, DARTH expects calls of the form $ANNS(q, G, k, R_t)$, where $q$ is the query vector, $G$ is an HNSW index, $k$ is the number of nearest neighbors to be retrieved, and $R_t$ is the declarative target recall value. The objective is to approximately retrieve the $k$-nearest neighbors of $q$ using $G$, achieving a recall of at least $R_t$ with high probability, while optimizing the search time.
We assume that the user-declared target recall $R_t$ is attainable by the index $G$; specifically, if the recall that the graph index $G$ achieves using plain HNSW for the query $q$ is $R_q^h$, then $R_t \leq R_q^h$. This condition is easy to satisfy in practice by setting the index creation parameters and the efSearch parameter to levels that enable very high recall (e.g., $>0.99$) for plain HNSW. For the ranges of the HNSW parameters to be used, refer to the corresponding benchmarks [6, 62, 87] and guidelines [4, 41, 94].

Further refining the objective of DARTH, we note that the quality of the retrieved nearest neighbors, and thus the quality of the algorithm, while it can be measured by the recall, is even better captured by the Relative Distance Error (RDE) [72]. Indeed, when comparing declarative target recall approaches, comparing the RDE is crucial, since this measure quantifies the quality in more detail than the recall. This is explained visually in Figure 2(b), where we compare two declarative target recall algorithms, $A$ (orange) and $B$ (blue), that are searching for the 4 nearest neighbors of a query. The nearest neighbors (green) are annotated as $n1$-$n4$. Consider that both algorithms correctly retrieved $n1$-$n3$, but $A$ retrieved $n4\text{-}A$ (orange) as the 4th nearest neighbor, while $B$ retrieved $n4\text{-}B$ (blue). Although the recall of both approaches is the same, as they retrieved the same number of correct nearest neighbors, the overall quality of the retrieved nearest neighbors is better for $A$, because $n4\text{-}A$ is much closer to the actual 4th nearest neighbor. In this case, the RDE of algorithm $A$ would be significantly lower, indicating its superiority. We note that the importance of the RDE measure has been highlighted in previous works [72].
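To make the comparison concrete, the two quality measures can be computed as in the sketch below. The averaged RDE form used here is one common variant, chosen for illustration; the exact formula of [72] may differ in detail.

```python
def recall_at_k(retrieved_ids, true_ids):
    """Fraction of the true nearest neighbors that were retrieved."""
    return len(set(retrieved_ids) & set(true_ids)) / len(true_ids)

def rde(retrieved_dists, true_dists):
    """Average relative error of the i-th retrieved distance w.r.t. the
    i-th true nearest-neighbor distance (one common variant; see [72])."""
    return sum(r / t - 1.0
               for r, t in zip(retrieved_dists, true_dists)) / len(true_dists)

# Figure 2(b) scenario with toy distances: both algorithms retrieve n1-n3
# and miss the true n4, so recall ties, while RDE separates A from B.
true_d = [1.0, 1.1, 1.2, 1.3]
rde_A = rde([1.0, 1.1, 1.2, 1.4], true_d)   # n4-A close to the true n4
rde_B = rde([1.0, 1.1, 1.2, 2.6], true_d)   # n4-B much further away
print(rde_A < rde_B)
```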
# 3 The DARTH Approach

Every ANNS query $q$ in DARTH is associated with a declarative target recall $R_t$, a value $k$ for the number of nearest neighbors to retrieve, and a plain HNSW index $G$ capable of achieving high recall levels. DARTH introduces a modified HNSW search method that early terminates as soon as the search for $q$ reaches $R_t$, significantly earlier than the natural termination point of the plain HNSW search. This is achieved through a run-time adaptive approach that utilizes a recall predictor model, which is dynamically invoked at various stages of the query search. The predictor model, trained on a small set of training queries, estimates the current recall at each stage by analyzing specific input features. Based on these predictions, DARTH determines whether to early terminate the query search.

In the following sections, we provide a detailed explanation of our approach. We outline the input features utilized, describe the efficient training process for developing an accurate recall predictor model, explain the strategy for determining the frequency of model invocations during each query search, and demonstrate how our approach is seamlessly integrated into HNSW and easily extended to work with IVF as well.

DARTH: Declarative Recall Through Early Termination for Approximate Nearest Neighbor Search

Table 1: Selected input features of DARTH’s recall predictor.

# 3.1 Recall Predictor

3.1.1 Descriptive Input Features. Given our choice of a dynamic recall predictor capable of estimating the recall at any point during the search of a query, we analyzed several search-related features by periodically collecting observations throughout the search process of a small set of training queries. Each observation includes the selected input features and our target variable, which is the actual recall measured at the specific time of observation. We define three categories of input features (summarized in Table 1).
• Index features: These features provide insight into the progression of the search process. They include the current step of the search conducted at the base layer of the HNSW at the time of observation (nstep), the number of distance calculations performed (ndis), and the number of updates to the nearest neighbor result set up to that point (ninserts).

• Nearest Neighbor (NN) Distance features: These features capture information about the distances of the nearest neighbors found for the query up to a given point in the search. This category includes the distance to the first nearest neighbor calculated when the search began at the base layer of the HNSW graph (firstNN), the current closest neighbor distance (closestNN), and the furthest neighbor distance found so far (furthestNN).

• Nearest Neighbor (NN) Stats features: These features provide descriptive summary statistics of the nearest neighbors found for the query up to a given point in the search. They include the average (avg), the variance (var), the median (med), and the 25th and 75th percentiles (perc25, perc75) of the nearest neighbor distances in the result set.

The choice of our input features is guided by the observation that, to correctly predict the current recall of a query at any point of the search, we should take into consideration the progression of the search in the base layer of the HNSW graph (captured by the Index features), the distances of descriptive neighbors already identified (captured by the NN Distance features), as well as the distribution of the distances of all identified neighbors (summarized by the NN Stats features).

3.1.2 Recall Predictor Model. For our predictor model, we opted for a Gradient Boosting Decision Tree (GBDT) [36, 37, 67].
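As context for this model choice, the core GBDT idea of sequentially fitting residuals can be sketched with 1-D regression stumps. This toy is ours and is not the actual predictor (which is trained with LightGBM, with 100 trees, over the 11 features of Table 1); it only illustrates the boosting principle.

```python
# Toy GBDT: each new stump fits the residual errors of the ensemble so far.

def fit_stump(x, residuals):
    """Best single-split regression stump on 1-D data (exhaustive search)."""
    best = None
    for threshold in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= threshold]
        right = [r for xi, r in zip(x, residuals) if xi > threshold]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lmean if xi <= threshold else rmean)) ** 2
                  for xi, r in zip(x, residuals))
        if best is None or err < best[0]:
            best = (err, threshold, lmean, rmean)
    _, t, l, r = best
    return lambda xi: l if xi <= t else r

def gbdt_fit(x, y, n_estimators=20, lr=0.5):
    """Sequentially add stumps, each trained on the current residuals."""
    pred = [0.0] * len(x)
    trees = []
    for _ in range(n_estimators):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        tree = fit_stump(x, residuals)
        trees.append(tree)
        pred = [pi + lr * tree(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * t(xi) for t in trees)

model = gbdt_fit([0.0, 1.0, 2.0, 3.0], [0.0, 0.1, 0.9, 1.0])
```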
GBDT operates by training decision trees sequentially, with each new tree aiming to minimize the errors of the combined predictions of the previously trained trees (GBDT in DARTH operates with 100 trees, called estimators). Initially, a single decision tree is trained, and the algorithm then iteratively adds more trees, each one trained on the errors of its predecessors. This process allows GBDT to achieve highly accurate results, making it an effective model for regression tasks. For this work, we trained our GBDT predictors using the LightGBM [58] library instead of XGBoost [21], due to its excellent inference time for single-input predictions (0.03ms on average for our 11 input features, running on a single CPU core).

3.1.3 Predictor Training. To train our GBDT recall predictor, we generate the training data from the observations gathered from the training queries, which contain the input features of Table 1. We employ a data generation routine that generates observations for several queries in parallel, periodically flushing the data into log files. We observed optimal predictor performance when observations are collected as frequently as possible (i.e., after every distance calculation), as this provides the predictor with a detailed view of the search process and information from any point in the search. The data collection process is efficient, taking only a few minutes per dataset, a negligible time compared to the HNSW index creation times. We present detailed results about the training data generation and training times in our evaluation (Section 4).

# 3.2 Prediction Intervals

DARTH requires the trained recall predictor to be called periodically, after a number of distance calculations. Note that we use distance calculations as the unit of the interval, i.e., the periodic dynamic invocations of the predictor take place every $pi$ distance calculations.
Determining the value of this prediction interval ($pi$) is crucial, as it exposes an interesting tradeoff: frequent predictor calls (i.e., a small $pi$) enable closer monitoring of the search process, allowing for termination immediately after reaching the target recall. However, this may introduce overhead due to the time required for each prediction, since in HNSW the search process for a query takes only a few milliseconds. Conversely, less frequent predictor calls (i.e., a larger $pi$) reduce prediction overhead, but risk delaying the identification that the target recall has been reached, potentially resulting in unnecessary computations and delayed early termination. This tradeoff signifies the challenge of determining correct prediction intervals.

3.2.1 Adaptive Prediction Interval. A natural solution to this problem is to call the predictor more frequently when the search is close to the target recall, allowing for early termination at the optimal moment, and less often when the search is still far from it. Thus, we opted for adaptive prediction intervals. Our adaptive prediction interval technique decides a new prediction interval every time a predictor call takes place, according to the following formula:

$$pi = mpi + (ipi - mpi) \cdot (R_t - R_p)$$

where $pi$ is the new (updated) prediction interval, $mpi$ is the minimum prediction interval allowed, $ipi$ is the initial prediction interval (the recall predictor is called for the first time after $ipi$ distance calculations), $R_t$ is the target recall, and $R_p$ is the recall predicted by the model.
This linear formula generates smaller prediction intervals when $R_p$ is close to $R_t$, and larger prediction intervals when $R_p$ is far from $R_t$.

3.2.2 Hyperparameter Importance. The introduction of two hyperparameters, $ipi$ (initial/maximum prediction interval) and $mpi$ (minimum prediction interval), is a crucial aspect of our approach. These hyperparameters control how frequently the predictor is called, with $pi \in [mpi, ipi]$. Setting appropriate values for these hyperparameters is essential: for instance, a very high value of $ipi$ may delay the initial predictor call, missing early opportunities for termination, while a very low value of $mpi$ could lead to an excessive number of predictor invocations, thereby introducing unnecessary overhead. The values of the hyperparameters can be selected either by classic grid-search tuning (or other, more sophisticated hyperparameter tuning approaches) or by a generic, heuristic-based selection method. For the generic heuristic-based method, to find a suitable value of $ipi$ for a specific recall target $R_t$, we calculate the average number of distance calculations needed to reach this target over the training queries, denoted as $dists_{R_t}$. This information is readily available during the generation of training data from our training queries, incurring no additional cost. We then set the values of our hyperparameters as $ipi = \frac{dists_{R_t}}{2}$ and $mpi = \frac{dists_{R_t}}{10}$. In addition, this method provides an interesting baseline for comparison with our approach. In our experimental evaluation (Section 4), we analyze several aspects of hyperparameter selection, including the superiority of adaptive intervals over static intervals, and the comparison between the generic heuristic selection and the extensively tuned selection.
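The adaptive interval formula and the heuristic hyperparameter selection can be written down directly. The clamp to $[mpi, ipi]$ below is our added safeguard for the case $R_p > R_t$ (where the search would terminate anyway); everything else follows the definitions above.

```python
def next_prediction_interval(mpi, ipi, r_target, r_predicted):
    """pi = mpi + (ipi - mpi) * (R_t - R_p), clamped to [mpi, ipi]."""
    pi = mpi + (ipi - mpi) * (r_target - r_predicted)
    return max(mpi, min(ipi, pi))

def heuristic_hyperparameters(dists_rt):
    """Generic heuristic of Section 3.2.2: ipi = dists/2, mpi = dists/10."""
    return dists_rt / 2, dists_rt / 10

# Suppose the training queries need ~2000 distance calculations on average
# to reach the target (an illustrative number):
ipi, mpi = heuristic_hyperparameters(2000)          # ipi=1000, mpi=200
print(next_prediction_interval(mpi, ipi, 0.90, 0.20))   # far: large interval
print(next_prediction_interval(mpi, ipi, 0.90, 0.88))   # close: near-minimum
```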
Our evaluation shows that the heuristic parameters result in performance very close to that achieved with the extensively tuned parameters. This means that DARTH requires no hyperparameter tuning, which is a significant improvement over the available competitors. Our experimental evaluation also compares DARTH against a Baseline for early termination, which terminates every HNSW search after $dists_{R_t}$ distance calculations for a recall target $R_t$, showing that this approach is not sufficient to solve our research problem.

# 3.3 Integration in ANNS Methods

3.3.1 Integration in HNSW. Algorithm 1 presents how DARTH is integrated into the HNSW search. The search begins by traversing the upper layers of the HNSW graph, proceeding as normal until reaching the base layer (line 1). Upon reaching the base layer, we calculate the distance of the query from the first visited base layer node (lines 2-3) and we initialize the necessary structures and variables (lines 4-8). Then, we put the information of the first visited base layer node into the candidateQueue and start the base layer search. During the search, the algorithm searches for nearest neighbors and updates the candidateQueue and resultSet when a new neighbor closer to the query vector is found (lines 11-23). Once the predictor model call condition is triggered (line 24), the recall predictor model processes the input features described in Table 1 to estimate the current recall (lines 25-26). If the predicted recall $R_p$ meets or exceeds the target recall $R_t$, the search terminates early (line 28). Otherwise, the next prediction interval is adaptively recalculated using our adaptive prediction interval formula (lines 30-31) and the search continues.
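Abstracting away the HNSW internals, the predictor-call control flow of Algorithm 1 (trigger, early-terminate-or-continue, reschedule) can be sketched as follows. Here `predict_recall` stands in for the trained GBDT model and the per-step distance-calculation counts are simulated; both are assumptions for illustration.

```python
def search_with_early_termination(steps, predict_recall, r_target, mpi, ipi):
    """steps: simulated per-step distance-calculation counts.
    Returns (stop_step, total_distance_calculations)."""
    n_dis, next_call = 0, ipi
    for step, dis in enumerate(steps):
        n_dis += dis
        if n_dis >= next_call:                      # predictor call condition
            r_pred = predict_recall(n_dis)
            if r_pred >= r_target:
                return step, n_dis                  # early termination
            pi = mpi + (ipi - mpi) * (r_target - r_pred)
            next_call = n_dis + max(mpi, pi)        # adaptive interval
    return len(steps) - 1, n_dis                    # natural termination

# Toy monotone recall model (an assumption, for illustration only): recall
# grows linearly with the number of distance calculations, saturating at 1.0.
stop_step, total = search_with_early_termination(
    steps=[50] * 100,
    predict_recall=lambda n_dis: min(1.0, n_dis / 2000),
    r_target=0.90, mpi=50, ipi=400)
print(stop_step, total)
```

In this toy run the search stops after 1850 of the 5000 simulated distance calculations, with predictor calls spaced widely at first and increasingly tightly as the predicted recall approaches the target.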
This algorithm highlights DARTH’s feature of supporting a declarative recall target $R_t$ per query, and demonstrates that our approach can be integrated into an existing ANNS index such as HNSW without excessive implementation changes. Algorithm 1 focuses on the HNSW index, but can be generalized to other graph-based ANNS methods [26, 38, 53] without modifications, as their search procedures are very similar.

Algorithm 1: DARTH early termination integrated into the HNSW search

3.3.2 Integration in IVF. We also discuss the implementation of DARTH for the IVF [28] index, a popular Tree-based ANNS index. IVF performs k-means clustering over the vector collection, generating nlist centroids. Each centroid operates as a bucket, and the collection vectors are placed in the bucket of their nearest centroid. IVF searches through the vectors of the nearest nprobe cluster buckets to find the nearest neighbors of a query vector. DARTH can be effectively used for IVF with minimal changes to the input features of Table 1. Specifically, in DARTH for IVF, the firstNN input feature represents the distance of the query to the closest centroid, while the nstep feature represents the number of the cluster bucket currently being searched. All other input features, as well as the dynamic recall predictor invocations with adaptive intervals, are the same as in the HNSW implementation.

Table 2: Datasets used in our evaluation.

# 4 Experimental Evaluation

Setup. We conduct our experimental evaluation on a server with Intel® Xeon® E5-2643 v4 CPUs @ 3.40GHz (12 cores, 24 hyperthreads) and 500GB of available main memory. All algorithms are implemented in C/C++, embedded in the FAISS [28] library, with SIMD support for the Euclidean distance calculations.
Our predictor models are implemented using the LightGBM [58] library. All implementations are compiled using g++ 11.4.0 on Ubuntu 22.04.4.

Datasets. We focus on 5 datasets widely used in the literature. The selected datasets cover a wide range of dataset sizes, dimensionality, and structure. Their details are summarized in Table 2.

Queries. We randomly sample queries from the learning sets provided in each dataset repository for our training and validation query workloads. For testing, we sample 1K queries from the provided query workloads of each dataset repository. This serves as our default testing query workload. To generate harder query workloads (i.e., queries that require higher search effort than the default ones), we add varying amounts of Gaussian noise to the default workloads [14, 90, 103, 104]. The $\sigma^2$ of the added Gaussian noise is a percentage of the norm of each query vector, with a higher percentage leading to noisier (and thus harder) queries. The multimodal T2I100M dataset is a special case, since the dataset vectors are text embeddings while the queries are image embeddings. Thus, the corresponding query workloads represent Out-Of-Distribution (OOD) queries. For this reason, we study this dataset separately.

Dataset Complexity. To characterize the complexity of each dataset, we report the Local Intrinsic Dimensionality (LID) [7, 52] of the default query workloads. LID quantifies the intrinsic hardness of a dataset based on the distribution of ground-truth nearest neighbor distances for a given query. Higher LID values indicate greater dataset complexity. We calculated the average LID for the queries of each dataset to be 13, 14, 57, 32, and 24 for SIFT100M, DEEP100M, T2I100M, GLOVE1M, and GIST1M, respectively. For GLOVE1M, the elevated LID value is explained by the nature of the dataset, which is a collection of word embeddings.
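For concreteness, LID can be estimated per query from its ground-truth nearest-neighbor distances with the standard maximum-likelihood estimator from the LID literature cited above; the sketch below assumes the distances are sorted ascending, positive, and not all equal.

```python
# MLE sketch of Local Intrinsic Dimensionality (LID) from a query's
# ground-truth k-NN distances: LID = -k / sum_i ln(r_i / r_k).
import math

def lid_mle(nn_dists):
    """LID estimate from ascending, positive k-NN distances."""
    r_k = nn_dists[-1]                        # distance to the k-th neighbor
    s = sum(math.log(r / r_k) for r in nn_dists)
    return -len(nn_dists) / s                 # higher LID = harder query
```

Averaging this estimate over a query workload yields the per-dataset LID values reported above.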
This category of data is known to exhibit high clustering [15, 81], leading to dense and complex vector neighborhoods. For T2I100M, the higher LID values are influenced by its multimodal nature, with text and image embeddings as base and query vectors, which originate from different data distributions [51, 82].

Index. For each dataset, we build a separate plain HNSW index once, using appropriate parameters that allow the index to reach an average recall $\ge 0.99$ for the default query workloads. The M, efConstruction $(efC)$, and efSearch $(efS)$ parameters vary per dataset, since different parameters are needed to reach high recalls for each dataset. The indexing details are shown in Table 3. The reported indexing times are obtained by creating the plain HNSW index using 12 processing cores.

Table 3: HNSW indexing summary using 12 cores.

Note that the selected plain HNSW index parameters, including efSearch, have been chosen to enable the index to reach high recall values, as shown in Table 3. The values for these parameters are selected based on the recommended parameter ranges of relevant works [4, 41, 62, 87, 94]. Real-world application scenarios correspond to high recall targets, starting from 0.80 [101]. Thus, we use recall targets $R_t \in \{0.80, 0.85, 0.90, 0.95, 0.99\}$. For T2I100M, where $R_t = 0.99$ could not be attained using reasonable parameter ranges (and hence index generation and query answering times), we stopped our evaluation at $R_t = 0.95$. In order to cover a wide range of configurations, we experiment using $k \in \{10, 25, 50, 75, 100\}$.

Comparison Algorithms. We compare the results of DARTH with the Baseline we presented in Section 3.2.2. We also compare the performance of our approach against REM. The recall-to-efSearch mapping procedure is performed using 1K validation queries sampled from the learning sets of our datasets.
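As we understand it, REM's recall-to-efSearch mapping amounts to picking, for each recall target, the smallest efSearch whose average validation recall meets the target. A minimal sketch under that assumption, where `avg_recall` is a hypothetical stub that runs the 1K validation queries at a given efSearch:

```python
# Sketch of a recall-to-efSearch mapping (our reading of REM's procedure).
def recall_to_efsearch(avg_recall, targets, ef_values):
    """Map each recall target to the smallest efSearch that reaches it."""
    mapping = {}
    for R_t in targets:
        for ef in sorted(ef_values):
            if avg_recall(ef) >= R_t:   # average recall on validation queries
                mapping[R_t] = ef       # smallest efSearch meeting the target
                break
    return mapping
```

The mapping is built once per dataset and then looked up per declared target at query time.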
Lastly, we compare our approach with the HNSW Learned Adaptive Early Termination (LAET) approach [60]. Note that LAET does not natively support declarative recall targets, since it is designed to terminate when all the nearest neighbors of a query have been found. For each query, after a fixed amount of HNSW search, LAET predicts the total number of distance calculations needed for this query to find all nearest neighbors. This value is then multiplied by a (hand-tuned) hyperparameter (called multiplier) to ensure that the number of distance calculations is sufficient. This hyperparameter tuning is performed using 1K validation queries sampled from the learning sets of our datasets. Then, the HNSW search terminates after the indicated distance calculations are performed. To achieve declarative recall with LAET, we manually tune the multiplier to adjust the performance for each desired target recall $R_t$. Note that this implementation is not discussed in the original paper. During query answering, all algorithms use only a single core to answer each query, but multiple queries are executed in parallel, exploiting all available cores.

Result Quality Measures. We measure the performance of our recall predictor using the Mean Squared Error (MSE), Mean Absolute Error (MAE), and R-squared $(R^2)$ [12, 79], which are popular measures for evaluating the performance of regression models [11]. We measure the search quality of the approaches using recall, which represents the fraction of correctly identified nearest neighbors among the total nearest neighbors retrieved $(k)$. To provide a comprehensive comparison, we also employ additional measures that quantify the performance of an ANNS search algorithm [72].
Specifically, we report the Ratio of Queries Under the recall Target (RQUT), which is the proportion of queries that fail to reach a specified recall target $R_t$; the Relative Distance Error (RDE), which quantifies the deviation of the distances of the retrieved neighbors from the true nearest neighbors’ distances; and the Normalized Rank Sum (NRS), which evaluates the quality of approximate nearest neighbor results by comparing the ranks of retrieved items in the result set to their ideal ranks in the ground truth. We report the average values over the query workload. To present a comprehensive analysis of the different approaches, we provide additional measures that examine the magnitude of the highest errors of each approach. We report the P99 measure, which is the 99th percentile of the errors. The error is defined as the deviation of the recall of a query $q$ from $R_t$, i.e., error $= |R_t - R_q|$, where $R_q$ is the actual recall achieved for the query $q$, and $R_t$ is the declarative recall target. We also report the average error for the most challenging $1\%$ of the queries (denoted as the Worst $1\%$) in our graphs, to show the typical performance degradation for the worst-performing $1\%$ of queries and provide a more detailed view of how each approach handles extreme cases. We measure the search time performance by reporting the search time and the Queries-Per-Second (QPS) measures. We report QPS for a single core; note that queries are executed in parallel, exploiting all available cores. Additionally, in our DARTH evaluation, we report the speedup (denoted as “Times Faster”) achieved compared to the plain search of the index without early termination.

# 4.1 Training and Tuning

4.1.1 Training Queries. Figure 3 presents the validation MSE (using 1K validation queries) of our model’s predictions for a varying number of training queries.
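The per-query error measures defined in the previous section (RQUT, P99, and the Worst $1\%$ average, all derived from error $= |R_t - R_q|$) can be sketched as:

```python
# Sketch of the workload-level error measures: RQUT, P99 error, and the
# average error over the worst-performing 1% of queries.
def error_measures(recalls, R_t):
    """Return (RQUT, P99 error, Worst-1% average error) for a workload."""
    errors = sorted(abs(R_t - r) for r in recalls)
    rqut = sum(r < R_t for r in recalls) / len(recalls)
    p99 = errors[min(len(errors) - 1, int(0.99 * len(errors)))]
    worst = errors[-max(1, len(errors) // 100):]   # hardest 1% of queries
    return rqut, p99, sum(worst) / len(worst)
```

The exact percentile convention is an implementation detail; this sketch uses a simple nearest-rank index.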
To offer an in-depth evaluation of the performance, we generate predictions by invoking the model after every distance calculation (i.e., as frequently as possible), providing insights into the prediction quality at all possible points of the search. Figure 3 shows the results across our datasets for all values of $k$. We observe that for all datasets, the performance improvement plateaus after the first few thousand training queries, at a very low MSE value. We also note that the configuration of 10K training queries performs well across all datasets and values of $k$; in the rest of our evaluation, we use this number. It is worth noting that 10K queries represent a very small proportion of the datasets, comprising only $0.01\%-1\%$ of the total dataset size. Additionally, the graph indicates that larger $k$ values result in better predictor performance, as the features, particularly the NN Distance and NN Stats, become more descriptive and accurate with an increasing result set size. The DARTH recall predictor is trained on 10K queries randomly sampled from the learning sets included in each benchmark dataset. These learning sets consist of vectors designated for training purposes and do not overlap with the base (dataset) vectors or query vectors. All the subsequent results presented in this paper are obtained using the recall predictor trained on these official benchmark learning sets. To provide further insight, Figure 4 presents the distribution of recall values and distance calculations (we show results for DEEP100M for brevity; similar trends hold for all datasets). Notably, $98\%$ of the training queries achieve a recall above 0.95, and $90\%$ reach 0.99 or higher, as shown in Figure 4(a). The effectiveness of the predictor in modeling query search progression is explained by Figure 4(b), which shows the distance calculations performed for each training query.
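Concretely, the training data behind these distributions can be generated by replaying each training query's scan and logging a (features, current-recall) sample at a chosen cadence. A toy sketch, with illustrative feature names from Table 1 and the query's true k-NN set serving as the ground truth:

```python
# Toy sketch of per-query training-data generation: log a labeled sample
# every `every` distance calculations while replaying the query's scan.
def log_training_samples(scanned_ids, true_knn, k, every=1):
    """Return (features, recall-so-far) pairs for one training query."""
    samples, found = [], set()
    for nstep, node in enumerate(scanned_ids, start=1):
        if node in true_knn:
            found.add(node)                     # a true neighbor was inserted
        if nstep % every == 0:
            features = {"nstep": nstep, "ninserts": len(found)}
            samples.append((features, len(found) / k))  # label: current recall
    return samples
```

Collecting these samples over all 10K training queries yields the regression training set for the GBDT predictor.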
While the majority of training queries achieve high recall, the amount of effort needed to reach these recalls follows an approximately normal distribution. This enables the predictor to learn from a diverse range of training queries, including those that achieve high recall with minimal distance calculations and others that require significantly more search effort. In subsequent sections of our evaluation, we study how well our predictor generalizes to more challenging workloads (e.g., noisy queries), and we demonstrate that DARTH can effectively handle queries that need significantly more search effort.

Table 4: Training details using 10K queries and 12 cores.

4.1.2 Training Time. We now present the training details of DARTH for 10K training queries. For all datasets, we report in Table 4 the time required to generate the training data from the 10K queries (Generation Time), the number of training samples corresponding to the 10K queries (Training Size), and the Training Time needed for the model (using 100 GBDT estimators and a 0.1 learning rate). Note that Generation and Training Times are reported when using 12 (all) processing cores. We note that the entire process can be completed in a few minutes, which is a negligible processing time compared to the time needed to build the corresponding plain HNSW index (i.e., several hours; cf. Table 3). The differences in the Generation Times and Training Sizes among datasets are related to the dimensionality, dataset size, complexity, and index parameters.

4.1.3 Feature Importance. We analyzed the importance scores of the features used across all our datasets and values of $k$ (on average). The importance score, expressed as a percentage of the total feature importance, was extracted from our GBDT recall predictor.
Our analysis revealed that the features with the highest importance scores are nstep, closestNN, firstNN, ninserts, and var (with importance scores of $16\%$, $16\%$, $16\%$, $14\%$, and $12\%$, respectively). This highlights that the estimation of the current recall is influenced by various search features, including the extent of the graph explored in the HNSW search, the nearest neighbors identified so far, and the initial nearest neighbor found at the beginning of the search.

4.1.4 Feature Ablation Study. We conducted a feature ablation study to evaluate the performance of our recall predictor when using different combinations of input feature types from Table 1. Specifically, we compared the average validation MSE, MAE, and $R^2$ across all values of $k$ for various feature combinations for our datasets. The results indicate that using only the Index Metrics features yields moderate performance, with an MSE of 0.0043, MAE of 0.0318, and $R^2$ of 0.83. Incorporating either NN Distances or NN Stats alongside the Index Metrics improves the predictor’s performance, both achieving an MSE of 0.0030, MAE around 0.0269–0.0275, and $R^2$ of 0.88. In contrast, using NN Distances and NN Stats without Index Metrics leads to significantly worse results, with MSE values exceeding 0.0191 and $R^2$ dropping below 0.30. As anticipated from the feature importance analysis, the most effective feature combinations involve both Index Metrics and at least one of the NN-based features. The overall best performance is achieved when all available features are used together, resulting in an MSE=0.0030, MAE=0.0269, and $R^2$=0.88. Consequently, our final recall predictor leverages the complete set of input features.

4.1.5 Recall Predictor Model Selection. We conducted a model selection study to justify our choice of the GBDT model.
We trained and evaluated additional recall predictor models, including linear regression, decision tree, and random forest. For the random forest model, we used 100 estimators, matching the configuration used for GBDT. The best results were achieved by the GBDT model, which obtained an average MSE of 0.0030 across all datasets and values of $k$. The random forest model also performed well, due to its structural similarity to GBDT, achieving an average MSE of 0.0042. The decision tree and linear regression models showed the poorest performance, with average MSE of 0.0062 and 0.0142, respectively.

4.1.6 Adaptive Intervals Tuning and Ablation Study. A crucial decision after training our recall predictor is determining the frequency (intervals) at which it should be called to predict the recall. As discussed in Section 3.2.1, we introduced an adaptive prediction interval method and proposed a generic, automatic method for setting the hyperparameters of the adaptive formula. Here, we assess the effectiveness of the adaptive interval approach compared to a static approach that uses fixed intervals to invoke the predictor. Additionally, we evaluate the performance of our heuristic-based approach against extensive grid-search hyperparameter tuning. For grid-search, we explored a wide range of hyperparameter values, with $ipi \in \{250, 500, 750, \dots, 5000\}$ and $mpi \in \{50, 100, 150, \dots, 2000\}$. Conducting such an extensive search over the parameter space required significant computational time. Consequently, we focused on experiments with $k = 50$ and $R_t \in \{0.90, 0.99\}$. We picked $k = 50$ and $R_t = 0.90$ because they are common cases in a wide variety of scenarios, and we included $R_t = 0.99$ to examine the results for corner cases of very high target recalls.
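This grid search can be sketched as follows, where `evaluate` is a hypothetical stub that runs the validation workload for a given $(ipi, mpi)$ pair and returns the average recall and search time:

```python
# Sketch of the (ipi, mpi) grid search: keep the fastest configuration
# among those that still meet the recall target on validation queries.
def grid_search(evaluate, R_t,
                ipis=range(250, 5001, 250), mpis=range(50, 2001, 50)):
    """Return (best_time, ipi, mpi) among configurations meeting R_t."""
    best = None
    for ipi in ipis:
        for mpi in mpis:
            recall, time = evaluate(ipi, mpi)
            if recall >= R_t and (best is None or time < best[0]):
                best = (time, ipi, mpi)
    return best
```

The cost of this exhaustive search is exactly what the Adaptive-Heuristic method avoids.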
For the grid-search, we report the results of two methods: adaptive prediction interval tuning and a static approach (i.e., with a fixed prediction interval, $mpi = ipi$). These methods are labeled Adaptive-Grid-Search and Static-Grid-Search, respectively, and in our legends we refer to them as Ad-GS and St-GS for brevity. In each experiment, we selected the $mpi$ and $ipi$ configurations that achieved the best search times. We compared the grid-search methods to our heuristic hyperparameter selection method, described in Section 3.2.2, which is labeled Adaptive-Heuristic, and Ad-Heur in our legends. To provide a comprehensive ablation study of the hyperparameter selection method, we also present results from a variant of the heuristic-based approach that does not employ adaptive prediction intervals, using fixed values of $ipi = mpi = dists_{R_t}/4$ (we divide by 4 because this gave the best performance for this variant). We label this variant Adaptive-Static, and in our legends we present it as Ad-St.

Figure 5 illustrates the speedup achieved by each hyperparameter selection method across all datasets, for $R_t = 0.90$ (Figure 5a) and $R_t = 0.99$ (Figure 5b), using $k = 50$. Both graphs show that the Adaptive methods outperform the corresponding Static methods, being up to $10\%$ faster for the grid-search and up to $13\%$ faster for the heuristic method, while the Adaptive-Grid-Search method is the best-performing across all configurations. This is attributed to the adaptivity of the prediction intervals combined with the extensive hyperparameter tuning, resulting in excellent search times. Nevertheless, our Adaptive-Heuristic method, which does not involve any tuning at all, delivers comparable execution times (Adaptive-Grid-Search is only $5\%$ faster). In DARTH, we automatically set the hyperparameter values using the Adaptive-Heuristic method, thus avoiding tuning altogether.

Figure 3: MSE for a varying number of training queries.

Figure 4: Training details, DEEP100M, $k = 50$: (a) Recall Distribution; (b) Distance Calculations.

Figure 5: Hyperparameter study, $k = 50$.

Figure 6: DARTH early termination summary, $k = 50$: (a) Achieved Recalls; (b) Speedup.

# 4.2 Main Results

4.2.1 Recall Predictor Performance. We begin by presenting our recall predictor’s performance across the default testing query workloads of our datasets.

Figure 7: Detailed analysis of DARTH for SIFT100M, $k = 50$.

Figure 8: (a) SIFT100M; (b) DEEP100M; (c) GLOVE1M; (d) GIST1M.

Figure 9: Queries DARTH processes before LAET is tuned.

Table 5: Recall predictor performance across all values of $k$.

The MSE, MAE, and $R^2$ measures are averaged over all $k$ values (we average to present the overall performance across all configurations), and are calculated by invoking the recall predictor at every point of the search for each query, to examine the quality of the predictions fairly. The results are summarized in Table 5. The findings indicate that for all datasets, our models achieve very low MSE and MAE values, while maintaining high $R^2$ scores, demonstrating their effectiveness in estimating the recall of individual queries at any search stage.

4.2.2 Overview of Achieved Recall and Speedups. Figure 6 provides an overview of DARTH’s performance, showing the actual average recall achieved and the corresponding speedups (compared to the plain HNSW search without early termination performed by each corresponding index) for each recall target $R_t$, across all datasets, for $k = 50$ (results are similar for all other values of $k$, and we omit them for brevity). The graphs demonstrate that DARTH successfully reaches and exceeds each $R_t$, while also delivering significant speedups: up to 15x, on average 6.75x, and median 5.7x compared to the plain HNSW search without early termination. As anticipated, the speedup decreases for higher recall targets, since more search effort is required before termination as $R_t$ increases.

4.2.3 Per-Query Performance.
Figure 7 provides a detailed analysis of DARTH for the SIFT100M dataset with $k = 50$ (results for other datasets and $k$ values exhibit similar trends and are omitted for brevity). For each recall target, the first row of graphs shows the distribution of per-query recall values (the vertical lines represent the average recall obtained by DARTH and the corresponding recall target), indicating that the majority of queries achieve a recall that surpasses, yet remains close to, the corresponding recall target, while roughly $15\%$ of the queries do not meet the target. The final row of graphs presents the per-query search time distribution achieved by DARTH (orange bars) and by the plain HNSW index without early termination (dark gray bars). The vertical lines represent the average search time achieved by DARTH and by the plain HNSW without early termination. The results demonstrate that DARTH significantly reduces the search time needed for query answering, achieving a speedup of up to 4.5x. Note that these results are achieved by invoking our recall predictor just a few times during the search for each query. Specifically, using our adaptive method, we invoke the predictor just 6 times on average when $R_t = 0.80$ and 11 times on average when $R_t = 0.99$, with the intermediate recall targets taking average values in between. Indeed, the number of predictor calls rises with higher $R_t$ values, which is expected due to the greater amount of search required as $R_t$ increases. However, the selected hyperparameters for the prediction intervals ensure that even for higher recall targets, the recall predictor is invoked a reasonable number of times, without incurring excessive overheads.

4.2.4 Optimality of Termination Points. We now compare the quality of DARTH early termination to the optimal case.
To perform this experiment, we calculated the exact number of distance calculations needed to achieve each recall target $R_t$ for each query. To determine this number, we monitored the search process, computing the recall after every distance calculation and identifying the precise number of distance calculations needed to reach each $R_t$. This is done for each query individually, and we then report the average number of distance calculations across the entire workload. We then compared the results with the corresponding distance calculations that DARTH performs. We present the results in Figure 8, for all of our datasets, using $k = 50$ (results for all other $k$ values follow similar trends and are omitted for brevity). The graph shows that DARTH performs a near-optimal number of distance calculations across all datasets, performing on average only $5\%$ more distance calculations than the optimal. We also note that the deviation of DARTH slightly increases for the highest recall targets. This is attributed to the higher prediction interval values used for the highest recall targets in our evaluation, resulting in more distance calculations performed between the predictor model invocations.

4.2.5 Competitor Tuning Overheads. We now proceed to compare DARTH with competitor approaches. We note that DARTH is the only approach that natively supports declarative recall through early termination for any recall target $R_t$. In addition, REM also natively supports declarative recall for any recall target through the recall-to-efSearch mapping procedure it encapsulates. In contrast, LAET (with a tuned multiplier), the only related approach that uses early termination, requires specific tuning for each distinct $R_t$. Consequently, comparing LAET with DARTH necessitated extensive tuning for each recall target.
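The core of such per-target tuning can be sketched as finding the smallest multiplier whose average validation recall meets $R_t$; since average recall is monotone in the multiplier, a binary search over the grid suffices (the actual procedure, described next, first bounds the grid with a random search). In this sketch, `avg_recall` is a hypothetical stub that runs the 1K validation queries for a given multiplier.

```python
# Sketch of per-target multiplier tuning via binary search over a fixed grid,
# exploiting the monotonicity of average recall in the multiplier.
def tune_multiplier(avg_recall, R_t):
    """Smallest multiplier in {0.10, 0.15, ..., 3.00} whose recall meets R_t."""
    grid = [round(0.10 + 0.05 * i, 2) for i in range(59)]
    lo, hi = 0, len(grid) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if avg_recall(grid[mid]) >= R_t:
            hi = mid                     # target met: try a smaller multiplier
        else:
            lo = mid + 1
    return grid[lo]
```

Each call to `avg_recall` re-runs the whole validation workload, which is why this tuning is expensive and must be repeated for every recall target.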
To fine-tune LAET for each $R_t$, we first performed a random search to identify the applicable ranges for the multiplier. We then employed binary search (due to the monotonic nature of the functions involved) to fine-tune the parameters. Specifically, we searched for multiplier $\in \{0.10, 0.15, 0.20, \dots, 3.00\}$ and evaluated the average recall values using a validation query set of 1K queries (the same as the validation set of DARTH). The ranges and step sizes for the multiplier were determined based on the results of the initial random search, which established the lower and upper bounds for the hyperparameter values of LAET. This limitation of the existing early termination method of LAET in addressing the problem of declarative recall highlights an important advantage of DARTH, which can directly start answering queries without the need for tuning. Figure 9 reports how many queries DARTH can answer before LAET finishes its tuning for $k = 50$, demonstrating that DARTH is able to answer thousands of queries before LAET is tuned. Specifically, our approach can answer on average 6K, and up to 10K, queries before LAET is tuned. These results show that DARTH is the only early termination approach that does not require any tuning and can start answering queries immediately, which can be beneficial for certain data exploration tasks and analysis pipelines. We only compare DARTH to LAET, because the REM and Baseline competitors do not require additional tuning, and they can be set up in times similar to DARTH.

4.2.6 Competitor Per-Query Performance. We now compare the search quality of the different competitor approaches on the default testing query workloads of each dataset. Figure 10 presents the recall distribution across all competitors for all datasets, using $R_t = 0.95$ and $k = 50$ (results for other recall targets and values of $k$ exhibit similar trends).
While all competitors achieve the target recall of 0.95 on average, clear differences emerge in their per-query performance. For example, in the DEEP100M dataset, although all competitors achieve an average recall of approximately 0.95, $28\%$ of the queries fall below the target recall for Baseline, $22\%$ for LAET, and $21\%$ for REM. Additionally, the worst-performing query recall is 0.46 for both Baseline and LAET, and 0.55 for REM. In contrast, with DARTH, only $13\%$ of the queries fall below the target recall, and all queries achieve a recall higher than 0.80. This demonstrates the superior results achieved by our approach.

Figure 11: Recall for varying noise, $R_t = 0.90$, $k = 50$. The red line indicates the maximum attainable recall from the plain HNSW index.

4.2.7 Competitor Robustness for Hard Queries. One of the major advantages of DARTH as a run-time adaptive approach is that it can adapt the termination points of the search for harder queries, without requiring any extra configuration. In contrast, the competitor approaches answer queries using static parameters, which are the same for all queries of a given workload and are based on the validation query workload. We demonstrate this in practice through a wide range of experiments comparing the performance of the different approaches for query workloads of increasing hardness. Figure 11 reports the actual recall achieved by each method, for $k = 50$ and $R_t = 0.90$ across all datasets, as the query hardness (represented by the noise percentage) increases for each query workload, ranging between $1\%-30\%$. The graphs also show the actual recall achieved by the plain HNSW index (red line), which represents the maximum attainable recall in each noise configuration.
The results demonstrate that DARTH is the most robust approach, reaching recall very near the declared $R_t$ across the entire range of noise values, and especially for noise values where $R_t$ is attainable by the plain HNSW index, i.e., up to $10\%$. The performance of the competitors deteriorates considerably, achieving recall values far from the target, especially as the queries become harder in higher noise configurations (results with other values of $k$ and $R_t$ are similar). DARTH achieves this level of robustness by considering a wide variety of search features to determine whether to apply early termination, rather than relying solely on the data distribution. Furthermore, DARTH’s run-time adaptive recall prediction leverages a recall predictor trained on queries that require varying levels of search effort, as explained earlier. Although the predictor is not trained on noisy queries, it still outperforms competing methods because it has been exposed to a broad range of query progressions with diverse characteristics. These factors collectively contribute to DARTH being the most robust approach among all competitors. We extend our analysis by studying the search quality measures and report the results in Figures 12-16. Results for other noise levels are similar, and omitted for brevity. Figure 12 presents the RDE values across all datasets, for several values of $R_t$. DARTH outperforms all competitors, being $94\%$ better than LAET, $150\%$ better than HNSW, and $210\%$ better than the Baseline. The superior RDE values that DARTH achieves demonstrate the high quality of the retrieved nearest neighbors compared to the competitors. In the same setting, Figure 13 presents the RQUT results. We observe that DARTH achieves the best results for this measure as well, being $47\%$ better than LAET, $114\%$ better than HNSW, and $130\%$ better than the Baseline.
Such improvements demonstrate the ability of our approach to handle hard queries and meet the declared $R_t$ for the vast majority of them. Figure 14 presents the $NRS^{-1}$ values. Once again, DARTH outperforms all competitors, being $5\%$ better than LAET, $14\%$ better than HNSW, and $13\%$ better than the Baseline. In the same setting, we also study the performance differences of the approaches on the queries where they performed the worst, by reporting the P99 (the 99th percentile of the errors of each model) and the average error over the worst $1\%$ of queries for each method (labeled Worst $1\%$). Figure 15 presents the results for P99, and Figure 16 presents the Worst $1\%$, across all datasets. DARTH is the best performer. For P99, it achieves $51\%$ better results than LAET, $68\%$ better results than HNSW, and $97\%$ better results than the Baseline. For Worst $1\%$, DARTH is $37\%$ better than LAET, $38\%$ better than HNSW, and $53\%$ better than the Baseline.

4.2.8 Comparison of DARTH with HNSW/REM Tuned for Hard Workloads. The previous set of experiments demonstrated that DARTH is a robust approach, effectively handling difficult query workloads without the need for additional tuning, thanks to its runtime adaptiveness and its predictor trained on diverse queries. In this set of experiments, we evaluate the search time performance of DARTH. Given that the competing approaches do not provide the required accuracy, we compare DARTH against the plain HNSW, which is commonly used in practice. In this case, we need to explicitly tune the HNSW parameters for each recall target, as well as for the noise level of the query workload. Note that this approach corresponds to REM, where the efSearch parameter is specifically chosen to make it achieve the same results as DARTH. Hence the REM legend in our graphs.
In contrast to REM, DARTH is only trained once, and can then operate on and adapt to any recall target and query hardness (i.e., noise level) that emerges at query time. We report results for $R_t = 0.90$ and $noise = 12\%$, i.e., a hard workload, using $k = 50$ (results with other recall targets, noise levels, and values of $k$ are similar, and omitted for brevity). Figure 17 depicts the QPS achieved by both methods. DARTH outperforms REM, answering up to 280 QPS (100 QPS on average) more queries than REM, while being up to $5.8\mathrm{x}$ (3.1x on average) faster. 4.2.9 Comparisons for Out-Of-Distribution (OOD) workloads. We now study the performance of DARTH on the T2I100M dataset, which contains OOD queries. We follow the same procedure as for the other datasets, generating training data from 10K training queries originating from the learning set provided with the dataset. The vectors of the learning set follow the same distribution as the index (dataset) vectors. The training data generation time was 55 minutes, resulting in 340M training samples. Because of the larger search parameters required by this bigger dataset, we logged a training sample every 2 distance calculations (instead of every 1, as for the rest of the datasets) to keep the training dataset at a manageable size. The training time of the recall predictor was 320 seconds, and it achieved $MSE{=}0.029$, $MAE{=}0.079$, and $R^2{=}0.54$ when tested on 1K OOD queries from the default workload of the dataset. As expected, these results are not as good as those for the rest of the datasets (due to the multimodal nature of T2I100M); yet, they demonstrate the ability of the DARTH recall predictors to achieve good accuracy for OOD query workloads, just as they do for noisy workloads. The DARTH performance summary for T2I100M is presented in Figure 18 for various recall targets and all values of $k$.
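The training-data generation described above, i.e., logging a sample every few distance calculations and labeling it with the intermediate recall of the current result set, can be sketched as follows (the trace format and helper name are hypothetical):

```python
def log_training_samples(search_trace, true_neighbors, every=2):
    """Generate (features, intermediate_recall) training pairs from a
    search trace, logging one sample every `every` distance
    calculations, as done when the per-query trace would otherwise be
    too large.

    `search_trace` is a list of (features, current_result_ids)
    snapshots, one per distance calculation; the label is the recall of
    the current result set against the ground-truth neighbors.
    """
    truth = set(true_neighbors)
    samples = []
    for i, (feats, result_ids) in enumerate(search_trace, start=1):
        if i % every == 0:
            recall = len(truth & set(result_ids)) / len(truth)
            samples.append((feats, recall))
    return samples
```

Raising `every` trades training-set size against label granularity, which is why a larger dataset like T2I100M is logged less frequently.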
Figure 18(a) shows the actual achieved recall over a query workload of 1K OOD queries, demonstrating that DARTH consistently meets and surpasses all recall targets. The speedups compared to the plain HNSW search (see Figure 18(b)) are up to $21.5\mathrm{x}$ across all configurations, with an average of $9.3\mathrm{x}$ and a median of $8.6\mathrm{x}$. We also evaluated the early termination quality achieved by DARTH compared to the optimal early termination points for our recall targets. The results show that DARTH performs accurate early termination, inducing, on average, only $15\%$ more distance calculations than the optimal. Figure 20 presents the comparison of DARTH with the other competitors on the T2I100M dataset, using 1K OOD queries. We evaluated the quality of the competitors' results using RDE, RQUT, NRS, P99, and Worst $1\%$. The results show that DARTH is the best-performing approach in almost all cases, across all evaluated measures and recall targets; the only cases where DARTH is outperformed are by REM for $R_t = 0.95$, and by LAET only for RQUT and $R_t = 0.95$. However, even in these cases, DARTH achieves a very low RDE, indicating high result quality, and it is $1.5\mathrm{x}$ faster than REM and 1.1x faster than LAET. 4.2.10 Extensions to IVF. To perform our evaluation with IVF, we created a plain IVF index for all our datasets, capable of achieving very high recall for our test queries. The IVF index parameters were $nlist = 1000$ for GIST1M and GLOVE1M and $nlist = 10000$ for DEEP100M and SIFT100M.
We also set $nprobe = 100$ for GLOVE1M, $nprobe = 150$ for DEEP100M and SIFT100M, and $nprobe = 200$ for GIST1M. These parameters allowed all our IVF indexes to reach very high recalls: 0.996 on average across all datasets.
Figure 12: RDE. Figure 13: RQUT. Figure 14: NRS. Figure 15: P99. Figure 16: Worst $1\%$ (all for $noise = 12\%$, $k = 50$, lower is better; panels: (a) SIFT100M, (b) DEEP100M, (c) GLOVE1M, (d) GIST1M). Figure 17: DARTH and REM, $R_t = 0.90$, $noise = 12\%$, $k = 50$. Figure 18: DARTH summary for T2I100M ((a) achieved recall, (b) speedup). Figure 19: DARTH summary for IVF, $k = 50$ ((a) achieved recall, (b) speedup). Figure 20: Competitor comparison on T2I100M OOD queries (no noise), $k = 50$.
After creating the plain IVF index, we executed 10K training queries to generate the training data for our IVF recall predictor. Note that, since IVF performs many more distance calculations per query than HNSW, we had to reduce the logging frequency of our training data, gathering a training sample every 20 distance calculations for GLOVE1M and GIST1M, and every 50 distance calculations for DEEP100M and SIFT100M. This resulted in 315M training samples for SIFT100M, 310M for DEEP100M, 100M for GLOVE1M, and 133M for GIST1M. We trained a GBDT recall predictor, which achieved an average $MSE{=}0.003$ across all datasets on the 1K testing queries of the default workloads. The performance summary of DARTH for IVF is presented in Figure 19 for all of our datasets, using $k = 50$. Figure 19(a) shows that the recall achieved by DARTH for IVF, using 1K testing queries from the default workloads, always meets and exceeds the target. Figure 19(b) depicts the corresponding speedups achieved by DARTH: up to $41.8\mathrm{x}$ compared to the plain IVF search, with an average speedup of $13.6\mathrm{x}$ and a median speedup of 8.1x. Similar to the corresponding graphs for HNSW, higher recall targets result in lower speedups, because longer searches are required to achieve higher recall. Additionally, we observe that the highest speedup is achieved for the GLOVE1M dataset. This is expected, given GLOVE's clustered structure, which allows the retrieval of the nearest neighbors very early in the search.
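The nlist/nprobe mechanics underlying this setup can be illustrated with a toy inverted-file index (random centroids stand in for the k-means training that a real library such as FAISS performs; this is a sketch, not the paper's implementation):

```python
import numpy as np

def build_ivf(data, nlist, seed=0):
    """Minimal IVF sketch: pick nlist random centroids (standing in for
    trained k-means centroids) and assign every vector to its nearest
    centroid's inverted list."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), nlist, replace=False)]
    assign = np.argmin(((data[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    lists = {i: np.where(assign == i)[0] for i in range(nlist)}
    return centroids, lists

def ivf_search(query, data, centroids, lists, nprobe, k):
    """Scan only the nprobe inverted lists whose centroids are closest
    to the query; larger nprobe trades search time for recall."""
    order = np.argsort(((centroids - query) ** 2).sum(-1))[:nprobe]
    cand = np.concatenate([lists[i] for i in order])
    dists = ((data[cand] - query) ** 2).sum(-1)
    return cand[np.argsort(dists)[:k]]
```

With `nprobe` equal to `nlist` the search is exhaustive and exact; smaller values scan fewer lists, which is exactly the knob the per-dataset `nprobe` settings above tune.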
Approximate Nearest Neighbor Search (ANNS) presents an inherent tradeoff between performance and recall (i.e., result quality). Each ANNS algorithm provides its own algorithm-dependent parameters that allow applications to influence the recall/performance tradeoff of their searches. This situation is doubly problematic. First, application developers have to experiment with these algorithm-dependent parameters to find the values that produce the desired recall for each use case, a process that usually takes considerable effort. Even worse, the chosen parameters may produce good recall for some queries, but bad recall for hard queries. To solve these problems, we present DARTH, a novel method that provides declarative recall targets on top of an ANNS index by employing an adaptive early termination strategy integrated into the search algorithm. Through a wide range of experiments, we demonstrate that DARTH effectively meets user-defined recall targets while achieving significant speedups: up to 14.6x (average: 6.8x; median: 5.7x) faster than the search without early termination for HNSW, and up to 41.8x (average: 13.6x; median: 8.1x) for IVF. This paper appeared in ACM SIGMOD 2026.
# 1. Introduction Text-to-3D generation—the task of creating 3D content from natural language descriptions—has attracted enormous interest [21, 41, 48], due to its broad applications in vision and graphics. Recent advances, such as 3D representations [16, 37], large-scale pre-trained vision-language models [42], advanced text-to-image diffusion and flow models [44], and differentiable rendering techniques, have further accelerated progress in this field. In particular, powerful text-to-image diffusion models, such as the Stable Diffusion series [43–45], lay a strong foundation for text-driven 3D synthesis: by leveraging pre-trained 2D diffusion priors and multi-view rendering, one can optimize a 3D asset so that its renderings align with a given text prompt. This capability opens new avenues for 3D content creation, enabling even non-experts to "describe and create" novel 3D assets in free form. Several paradigms have emerged to tackle text-to-3D generation. Diffusion distillation-based methods—exemplified by Score Distillation Sampling (SDS) in DreamFusion [41]—optimize 3D representations by aligning multi-view renderings with pre-trained text-to-image diffusion priors [44]. Reward-guided approaches [13, 38] further refine these methods by directly incorporating human-preference or CLIP-based rewards, boosting both semantic alignment and perceived quality. Despite their impressive fidelity and text alignment, both diffusion-distillation and reward-guided methods suffer from a critical limitation: limited generative diversity. Even when prompted with intentionally vague or open-ended descriptions, current models tend to converge on a narrow set of similar outputs. We analyze this limitation and trace its root to the use of the Kullback–Leibler (KL) divergence.
Specifically, the objectives optimized by both SDS and reward-based methods can be reformulated as minimizing an asymmetric KL divergence, which shares a fundamental limitation: KL divergence inherently encourages mode-seeking behavior by penalizing samples that deviate from high-density regions of the target distribution. As a result, the generative model tends to collapse to a few dominant modes, severely suppressing output diversity. In this paper, we present Dive3D, a novel framework that replaces KL-based objectives with Score Implicit Matching (SIM)—a score-based divergence loss that directly matches the gradient fields of the probability densities of the generated content and the diffusion prior. This formulation avoids the mode-seeking tendencies of KL and encourages exploration of multiple high-probability regions, thereby promoting diversity without sacrificing fidelity or alignment. Furthermore, Dive3D unifies both diffusion distillation and reward-guided optimization under a divergence-based perspective. Combined with the SIM loss, this formulation enables a principled integration of diffusion priors, human preferences, and diversity-promoting objectives within a single framework. As a result, Dive3D generates 3D assets that are not only realistic and well-aligned with text prompts, but also significantly more diverse. Through extensive experiments on standard text-to-3D benchmarks, we demonstrate that Dive3D achieves state-of-the-art performance, substantially outperforming existing SDS- and reward-based approaches across visual fidelity, prompt adherence, and generative diversity. # 2. Related Works Diffusion Distillation Based Methods Diffusion distillation methods [27] leverage pre-trained text-to-image diffusion models [3, 44, 45] to guide the optimization of 3D representations by aligning rendered views with diffusion priors.
This line of work was pioneered by Score Distillation Sampling (SDS) in DreamFusion [41], which initiated an era of 3D synthesis by transferring the knowledge embedded in 2D diffusion priors. However, these diffusion-driven optimization techniques typically rely on minimizing KL divergences [30, 41, 59], often resulting in mode-seeking behavior where generated 3D objects collapse into a single plausible solution with limited diversity. Moreover, the straightforward use of 2D diffusion priors can introduce visual artifacts such as over-saturated colors, overly smooth geometry, and even Janus artifacts [4, 13, 59]. To address these challenges, recent studies have explored various improvements, including timestep annealing [11, 59, 76], coarse-to-fine training [4, 21, 59], component analysis [15], and formulation refinements [2, 20, 52, 57, 59, 62, 65, 70, 76]. Additional efforts have focused on geometry-texture disentanglement [4, 34, 59] and on mitigating the multi-face (Janus) problem by replacing text-to-image diffusion with novel view synthesis or multi-view diffusion [23–25, 48, 56, 60, 66]. Notably, diffusion distillation has also seen rapid progress in other domains, such as one-step diffusion models [5, 12, 28, 31, 69, 75] and various related approaches [29, 40, 71, 72]. Reward Optimization based Methods. Another category of approaches optimizes 3D outputs directly using reward models, such as vision-language alignment losses or human-preference reward models, instead of (or in addition to) a diffusion prior. Early methods like CLIP-Mesh [38] and DreamFields [13] directly maximize the CLIP score [42] between rendered images and the text prompt, enabling zero-shot text-to-3D without 3D datasets. While conceptually simple, these CLIP-guided approaches often yield suboptimal geometry or texture (e.g., unrealistic shapes) and require expensive optimization.
More recently, DreamReward [67] uses a learned 3D preference-reward model (Reward3D), trained on internally collected human feedback data, to guide generation. DreamReward improves the alignment of generated shapes with user intent, achieving better text relevance as judged by the reward function. Reward-based methods explicitly push for semantic or aesthetic alignment, but relying solely on them can compromise visual fidelity if the reward is not perfectly aligned with 3D realism (e.g., CLIP might encourage implausible textures). They may also require costly human data collection to train the 3D reward model. Feed-forward Methods. Feed-forward methods train neural networks to directly generate 3D content from text using large synthetic 3D datasets or cross-modal supervision. For example, CLIP-Forge [46] and CLIP-Sculptor [47] leverage CLIP embeddings for zero-shot text-to-shape generation. More recently, advances in large reconstruction models (LRMs) [9] have enabled rapid 3D model prediction from single or sparse-view images, inspiring the development of methods like Instant3D [18] and Turbo3D [10], which first generate multi-view images from text and then use a feed-forward 3D reconstructor (trained on synthetic data) to instantly produce representations such as NeRF or 3D Gaussian Splatting. However, the quality of these approaches depends heavily on the underlying text-to-multi-view generator, often recasting the challenge as one of diffusion distillation or reward-based optimization. # 3. Preliminary In this section, we review the key concepts and mathematical formulations underlying our work. We first describe text-to-image diffusion models, then explain how these models are adapted for text-to-3D generation via diffusion distillation, and finally review reward-guided text-to-3D methods. # 3.1.
Text-to-Image Diffusion Models Diffusion models [8, 49, 51] are a class of generative models that iteratively transform noise into data using a stochastic process. Let $\pmb{x}_0 \sim q_{\mathrm{data}}(\pmb{x})$ denote a data sample. The forward diffusion process corrupts $\pmb{x}_0$ by gradually adding noise, as described by the stochastic differential equation (SDE): $$ d\pmb{x}_t = \pmb{F}(\pmb{x}_t, t)dt + G(t)d\pmb{w}_t, \quad t \in [0, T], $$ where $\pmb{F}(\pmb{x}_t, t)$ is a drift function, $G(t)$ is a scalar-valued diffusion coefficient, and $\pmb{w}_t$ denotes a standard Wiener process. To generate samples, the reverse diffusion process is used to progressively denoise an initial noise sample [22, 26, 50, 51, 64, 73]. The marginal score function $\nabla_{\pmb{x}_t} \log p_t(\pmb{x}_t)$ is typically approximated by a continuously-indexed neural network $s_\phi(\pmb{x}_t, t)$. This score network is trained using the weighted denoising score matching objective: $$ \mathcal{L}(\phi) = \mathbb{E}_{t, \pmb{x}_0, \epsilon}\left[\lambda(t)\left\| s_\phi\Big(\alpha_t \pmb{x}_0 + \sigma_t \epsilon, t\Big) + \frac{\epsilon}{\sigma_t} \right\|_2^2\right], $$ where $\epsilon \sim \mathcal{N}(0, \mathbf{I})$, and the functions $\alpha_t$ and $\sigma_t$ are determined by the noise schedule. By conditioning on text inputs, these diffusion models can be extended to text-to-image synthesis.
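The denoising score matching objective above can be estimated by Monte Carlo at a single timestep. A minimal numpy sketch (a real model would use a neural score network and sample $t$ as well; the function name is hypothetical):

```python
import numpy as np

def dsm_loss(score_fn, x0, alpha_t, sigma_t, t, lam=1.0, rng=None):
    """Monte-Carlo estimate of the weighted denoising score matching
    objective at one timestep: perturb x0 with Gaussian noise to get
    x_t = alpha_t * x0 + sigma_t * eps, then penalize the score network
    for deviating from -eps / sigma_t."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(x0.shape)
    xt = alpha_t * x0 + sigma_t * eps
    residual = score_fn(xt, t) + eps / sigma_t
    return lam * np.mean(np.sum(residual ** 2, axis=-1))
```

As a sanity check, if the data is a point mass at zero, the marginal at time $t$ is $\mathcal{N}(0, \sigma_t^2 \mathbf{I})$, whose true score $-\pmb{x}_t/\sigma_t^2$ drives the loss to zero.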
In this setting, a conditional score network $s_\phi(\pmb{x}_t, y, t) \approx \nabla_{\pmb{x}_t} \log p_t(\pmb{x}_t|y)$ is used, where $y$ is the text prompt describing the image content. Popular models such as Stable Diffusion [44] and MVDiffusion [48] have demonstrated that this approach yields high-quality, semantically aligned images. # 3.2. Text-to-3D Generation by Diffusion Distillation A prevalent paradigm for text-to-3D synthesis leverages pre-trained text-to-image diffusion models to guide the optimization of a 3D representation. Let $g(\theta, c)$ be a differentiable renderer that maps the 3D parameters $\theta$ to a 2D image under camera pose $c$, $q_\theta(\pmb{x}_t|c)$ be the distribution of rendered images at diffusion time $t$, and $p(\pmb{x}_t|y^c)$ be the target conditional distribution, given a view-dependent text prompt $y^c$, defined by a pretrained diffusion model. The loss that aligns each rendered view of the 3D model with the conditional diffusion prior can be formulated as: $$ \mathcal{L}_{\mathrm{CDP}}(\theta) = \mathbb{E}_{t,c}\left[\omega(t)\, D_{\mathrm{KL}}\Big(q_\theta(\pmb{x}_t|c) \,\Big\|\, p(\pmb{x}_t|y^c)\Big)\right], $$ where $\omega(t)$ is a weighting function. In practice, the gradient of loss (3) can be written as (please refer to Luo et al. [30] and Wang et al.
[55] for a comprehensive derivation): $$ \nabla_\theta \mathcal{L}_{\mathrm{CDP}}(\theta) \approx \mathbb{E}_{t, \epsilon, c}\left[\omega(t)\left(\epsilon_\phi(\pmb{x}_t, y^c, t) - \epsilon\right) \frac{\partial g(\theta, c)}{\partial \theta}\right], $$ where $\epsilon_\phi(\pmb{x}_t, y^c, t) = -\sigma_t s_\phi(\pmb{x}_t, y^c, t)$ is the noise prediction of the diffusion model. The Score Distillation Sampling (SDS) loss, introduced in DreamFusion [41], improves generation quality by employing classifier-free guidance (CFG) [1, 7, 14, 19], which replaces the original conditional score in Eq. 4 with a weighted difference between the conditional and unconditional score estimates: $$ \begin{array}{rl} & \hat{\epsilon}_\phi(\pmb{x}_t, y^c, t) = (1+\gamma)\epsilon_\phi(\pmb{x}_t, y^c, t) - \gamma\epsilon_\phi(\pmb{x}_t, t) \\ & = \epsilon_\phi(\pmb{x}_t, y^c, t) + \gamma\Big(\epsilon_\phi(\pmb{x}_t, y^c, t) - \epsilon_\phi(\pmb{x}_t, t)\Big) \\ & = -\sigma_t\Big[s_\phi(\pmb{x}_t, y^c, t) + \gamma\Big(s_\phi(\pmb{x}_t, y^c, t) - s_\phi(\pmb{x}_t, t)\Big)\Big], \end{array} $$ $$ \nabla_\theta \mathcal{L}_{\mathrm{SDS}}(\theta) \approx \mathbb{E}_{t, \epsilon, c}\left[\omega(t)\left(\hat{\epsilon}_\phi(\pmb{x}_t, y^c, t) - \epsilon\right)\frac{\partial g(\theta, c)}{\partial \theta}\right]. $$ This adjustment is equivalent to incorporating an additional regularization term into the Score Distillation Sampling (SDS) loss, the so-called CFG-reward introduced by [32] $(\mathcal{L}_{\mathrm{SDS}} = \mathcal{L}_{\mathrm{CDP}} + \gamma\mathcal{L}_{\mathrm{CFR}})$, effectively acting as an implicit likelihood term that better aligns the generated image with the text prompt and enforces pose constraints. Increasing the weighting factor $\gamma$ strengthens this alignment, thereby improving the semantic calibration of the 3D renderings. # 3.3. Text-to-3D Generation via Reward Guidance An alternative approach leverages reward signals to steer the generation of 3D content. Pioneering works such as DreamFields [13], CLIP-Mesh [38], and X-Mesh [35] leverage CLIP scores [42] to align 3D representations with text prompts. In these methods, a reward function is defined as: $$ r(y, x, c) = f\big(g(\theta, c)\big)^\top h(y^c), $$ where $f(\cdot)$ and $h(\cdot)$ are embedding functions for images and text, respectively, and $g(\theta, c)$ is the rendered image. Maximizing this reward encourages the 3D model to generate outputs that are semantically aligned with the text. Recent methods, such as DreamReward [67], combine the SDS loss with reward-based signals to further enhance semantic alignment and human-preference consistency.
For example, DreamReward modifies the SDS loss as: $$ \mathcal{L}_{\mathrm{Reward}}(\theta) = \mathcal{L}_{\mathrm{SDS}}(\theta) - \lambda\mathbb{E}_{t,c,\mathbf{x}_t}\Big[\omega(t)\, r\big(y^c, \hat{x}_0(\mathbf{x}_t)\big)\Big], $$ where $\pmb{x}_t \sim q_\theta(\pmb{x}_t|c)$, $\hat{x}_0 = \frac{1}{\alpha_t}\left[\pmb{x}_t - \sigma_t\epsilon_\phi(\pmb{x}_t, y, t)\right]$ is an estimate of the denoised image, and $\lambda$ balances the influence of the reward. Similar to Eq. 5, the reward function acts as an additional regularization term in SDS-based 3D generation. # 4. Method In this section, we introduce Dive3D, a principled framework that boosts both diversity and fidelity in text-to-3D synthesis by replacing KL-divergence guidance with score-based divergence optimization (see Fig. 2). In Sec. 4.1, we demonstrate that the existing SDS and reward losses are both linear combinations of KL divergences, and thus prone to mode collapse and mode-seeking. Then, in Sec. 4.2, we present our score-based divergence formulation, which overcomes these limitations and delivers significantly more varied and higher-quality 3D outputs. # 4.1. SDS and Reward Guidance are Both KL Divergences The SDS Loss. The classifier-free guidance in the SDS loss (Eqs. 5–6) can be rewritten as $$ s_\phi(\pmb{x}_t, y, t) - s_\phi(\pmb{x}_t, t) \approx \nabla_{\pmb{x}_t}\log p(\pmb{x}_t|y) - \nabla_{\pmb{x}_t}\log p(\pmb{x}_t). $$ Substituting Eq. 9 into Eq.
6 and integrating, the SDS loss can be expressed as the difference between two KL divergence terms: $$ \begin{array}{rl} & \mathcal{L}_{\mathrm{SDS}}(\theta) = (1+\gamma)\mathcal{L}_{\mathrm{CDP}}(\theta) - \gamma\mathcal{L}_{\mathrm{UDP}}(\theta) \\ & \qquad = (1+\gamma)\mathbb{E}_{t,c}\Bigg[\omega(t)\, D_{\mathrm{KL}}\Big(q_\theta(\pmb{x}_t|c) \,\|\, p(\pmb{x}_t|y^c)\Big)\Bigg] \\ & \qquad - \gamma\,\mathbb{E}_{t,c}\Bigg[\omega(t)\, D_{\mathrm{KL}}\Big(q_\theta(\pmb{x}_t|c) \,\|\, p(\pmb{x}_t)\Big)\Bigg]. \end{array} $$ This formulation makes explicit that the SDS loss balances two KL divergences—one that promotes prompt fidelity $(\mathcal{L}_{\mathrm{CDP}})$ and one that modulates diversity via the unconditional prior $(\mathcal{L}_{\mathrm{UDP}})$. Increasing $\gamma$ strengthens text–image alignment but narrows diversity by shrinking the effective entropy. The Explicit Reward Loss. Assuming the reward defines an exponential distribution, $$ p_{\mathrm{ER}}(y^c, \pmb{x}_t) \propto \exp\Big(r\big(y^c, \hat{x}_0(\pmb{x}_t)\big)\Big), $$ the explicit reward loss in Eq. 8 can likewise be interpreted as a KL divergence.
$$ \begin{array} { r l } & { \mathcal { L } _ { \mathrm { E R } } ( \theta ) = \mathbb { E } _ { t , c } \Big [ \omega ( t ) D _ { \mathrm { K L } } \Big ( q _ { \theta } ( \pmb { x } _ { t } | c ) \left| \right| p _ { \mathrm { E R } } ( y ^ { c } , \pmb { x } _ { t } ) \Big ) \Big ] } \\ & { \quad \quad \quad = \mathbb { E } _ { t , c , x _ { t } } \Big [ \omega ( t ) \Big ( \log q _ { \theta } ( \pmb { x } _ { t } | c ) - \log p _ { \mathrm { E R } } ( y ^ { c } , \pmb { x } _ { t } ) \Big ) \Big ] } \\ & { \quad \quad \quad = \mathrm { c o n s t a n t - } \mathbb { E } _ { t , c , \pmb { x } _ { t } } \Big [ \omega ( t ) r \big ( y ^ { c } , \hat { \pmb { x } } _ { 0 } ( \pmb { x } _ { t } ) \big ) \Big ] , } \end{array} $$ where the first term is a constant because the distribution $q _ { \theta } ( x _ { t } | c )$ is typically a uniformly-distributed collection of $N$ particles (i.e., $q _ { \theta } ( x _ { t } | c ) = 1 / N )$ . Serving as a measure of the joint distribution of prompts and images, the explicit reward loss not only enhances text alignment during 3D generation but also provides the flexibility to incorporate additional criteria, such as human preference [39, 63], photorealism [17], and geometric consistency [67]. Unified KL Divergence Framework. Collecting these components, we can unify all loss terms in the diffusion- or reward-based text-to-3D generation framework by defining three core KL-based terms: $$ \begin{array} { r l } & { \mathcal { L } _ { \mathrm { C D P } } ( \boldsymbol { \theta } ) = \mathbb { E } _ { t , c } \Big [ \omega ( t ) D _ { \mathrm { K L } } \Big ( q _ { \boldsymbol { \theta } } ( \mathbf { x } _ { t } | \boldsymbol { c } ) \left. 
p ( \mathbf { x } _ { t } | \boldsymbol { y } ^ { c } ) \right) \Big ] , } \\ & { \mathcal { L } _ { \mathrm { U D P } } ( \boldsymbol { \theta } ) = \mathbb { E } _ { t , c } \Big [ \omega ( t ) D _ { \mathrm { K L } } \Big ( q _ { \boldsymbol { \theta } } ( \mathbf { x } _ { t } | \boldsymbol { c } ) \left. p ( \mathbf { x } _ { t } ) \right) \Big ] , } \\ & { \quad \mathcal { L } _ { \mathrm { E R } } ( \boldsymbol { \theta } ) = \mathbb { E } _ { t , c } \Big [ \omega ( t ) D _ { \mathrm { K L } } \Big ( q _ { \boldsymbol { \theta } } ( \mathbf { x } _ { t } | \boldsymbol { c } ) \left. p _ { \mathrm { E R } } ( \boldsymbol { y } ^ { c } , \mathbf { x } _ { t } ) \right) \Big ] . } \end{array} $$ (Figure 2: framework overview. The losses $l_{CDP} = D_{[0:T]}[q_\theta(\pmb{x}|c)\,\|\,p_\phi(\pmb{x}|y^c)]$, $l_{UDP} = D_{[0:T]}[q_\theta(\pmb{x}|c)\,\|\,p_\phi(\pmb{x})]$, and $l_{ER} = D_{[0:T]}[q_\theta(\pmb{x}|c)\,\|\,p_{ER}(\pmb{x}, y^c)]$ are computed on images rendered from the 3D object and combined as $\frac{d}{d\theta}\big[\alpha\, l_{CDP} - \gamma\, l_{UDP} + \lambda\, l_{ER}\big]$; example prompt $y$: "A carton-style house with vibrant colors and unique design.") Both SDS and reward-guided objectives are simply linear combinations of these divergences: $$ \begin{array}{rl} \mathcal{L}_{\mathrm{SDS}} = (1+\gamma)\mathcal{L}_{\mathrm{CDP}} - \gamma\mathcal{L}_{\mathrm{UDP}}, & \\ \mathcal{L}_{\mathrm{Reward}} = \mathcal{L}_{\mathrm{SDS}} + \lambda\mathcal{L}_{\mathrm{ER}}. & \end{array} $$ This unified view permits flexible tuning of the weights on each term (see Appendix), yielding higher-fidelity generations. However, both theory and experiments [33, 74, 75] show that relying on the inherently asymmetric KL divergence $(D_{\mathrm{KL}}(q\|p) \neq D_{\mathrm{KL}}(p\|q))$ destabilizes training and induces mode-seeking, thereby constraining the diversity of generated 3D assets. # 4.2.
From KL to Score-based Divergence To mitigate these issues, in Dive3D we propose to replace the KL divergence with a score-based divergence, namely the Score Implicit Matching (SIM) loss [31], which has shown significant improvements in generation diversity for one-step diffusion and flow models [12, 31, 75]. Specifically, the score-based divergence between two distributions $p$ and $q$ is defined as $$ D_{[0,T]}(p, q) = \int_0^T w(t)\, \mathbb{E}_{\pmb{x}_t \sim \pi_t}\Big[d\Big(s_p(\pmb{x}_t) - s_q(\pmb{x}_t)\Big)\Big] dt, $$ where the score functions of the two distributions are given by $s_p(\pmb{x}_t) = \nabla_{\pmb{x}_t} \log p(\pmb{x}_t)$ and $s_q(\pmb{x}_t) = \nabla_{\pmb{x}_t} \log q(\pmb{x}_t)$, $d : \mathbb{R}^d \to \mathbb{R}$ is a distance function, $\pi_t$ is a sampling distribution whose support exceeds that of $p_t$ and $q_t$, and $w(t)$ is a weighting function. If we set $p(\cdot) = p(\pmb{x}_t|y^c), p(\pmb{x}_t), p_{\mathrm{ER}}(y^c, \pmb{x}_t)$ and $q(\cdot) = q_\theta(\pmb{x}_t|c)$, then the KL-based losses in Eqs.
13-14 can be updated to $$ \mathcal{L}_{\mathrm{Score\text{-}CDP}}(\theta) = \int_0^T w(t)\, \mathbb{E}_{\pmb{x}_t \sim \pi_t}\Big[d\Big(s_p(\pmb{x}_t|y^c) - s_{q_\theta}(\pmb{x}_t|c)\Big)\Big] dt, $$ $$ \mathcal{L}_{\mathrm{Score\text{-}UDP}}(\theta) = \int_0^T w(t)\, \mathbb{E}_{\pmb{x}_t \sim \pi_t}\Big[d\Big(s_p(\pmb{x}_t) - s_{q_\theta}(\pmb{x}_t|c)\Big)\Big] dt, $$ $$ \mathcal{L}_{\mathrm{Score\text{-}ER}}(\theta) = \int_0^T w(t)\, \mathbb{E}_{\pmb{x}_t \sim \pi_t}\Big[d\Big(\nabla_{\pmb{x}_t} r\big(y^c, \hat{x}_0(\pmb{x}_t)\big) - s_{q_\theta}(\pmb{x}_t|c)\Big)\Big] dt, $$ $$ \mathcal{L}_{\mathrm{Dive3D}} = (1+\gamma)\mathcal{L}_{\mathrm{Score\text{-}CDP}} - \gamma\mathcal{L}_{\mathrm{Score\text{-}UDP}} + \lambda\mathcal{L}_{\mathrm{Score\text{-}ER}}. $$ This formulation offers a more effective similarity metric between the generated content and the diffusion- or reward-based image distributions, yielding 3D outputs that are both more diverse and of higher fidelity than those produced using the traditional KL divergence. Although this divergence may initially seem intractable, recent work [33] shows that its gradient with respect to $\theta$ can be efficiently computed without directly differentiating the score functions, by introducing a separate approximation network. For a full derivation and implementation details, please refer to the Appendix. # 5.
Experiment In this section, we evaluate how our proposed score-based divergence optimization enhances both quality and diversity in text-to-3D synthesis. We perform comprehensive experiments on the GPTEval3D benchmark [61], supplemented by additional 2D and 3D assessments that demonstrate the effectiveness and diversity of the method. gFioguf eE3n. gCloamnpda.rison with Baselines based on Stable Diffusion [44]A. Dcihvie3mDpeaxhnizbietsehidgrherssqeuadl tlyi,kriecheHretenxrtuyreVdeItIaIi sk, iandgsuopferiEo alignment with human preferences, such as accurate clothing styles, and vivid fur texture. A sequence of street lamps, casting pools of light on cobblestone paths as twilight descends. Figure 4. Comparison with Baselines based on MVDiffusion [48] and reward model [67]. Dive3D exhibits more detailed and realistic 3D generation, capturing fine-grained structures such as accurate guitar geometry and transparent glass materials. # 5.1. Evaluation on the GPTEval3D Benchmark Setup. We first evaluate Dive3D on 110 creative and complex prompts from the GPTEval3D benchmark [61], comparing against 9 state-of-the-art methods, including DreamFusion [41], DreamGaussian [53], Instant3D [18], Fantasia3D [4], Latent-NeRF [36], Magic3D [21], ProlificDreamer [58], MVDream [48], and DreamReward [67]. All experiments use PyTorch and the ThreeStudio framework [6], testing both MVDream [48] and Stable Diffusion [44] as diffusion backbones, and PickScore [17] as the reward model. Optimization takes about one hour per object on a single NVIDIA A100 GPU. Quantitative Results. Table 1 reports performance of our method across six metrics, including text-asset alignment $( + 5 3 . 5 ) \$ , 3D plausibility $( + 4 9 )$ , text-geometry alignment $( + 6 8 . 2 )$ , texture details $( + 6 7 . 5 )$ , geometry details $( + 3 5 . 3 )$ , and overall performance $( + 5 0 . 
0 )$ , where $" \boldsymbol { + } \boldsymbol { \mathbf { \mathit { \Sigma } } }$ indicates improvement and “–” indicates degradation relative to the state of the art. Dive3D achieves the top rank on every metric, demonstrating that score-based divergence guidance—especially when combined with reward models—yields substantial gains over both diffusion-only and reward-augmented baselines. Qualative Results. Figure 3 compares Dive3D against methods built on Stable Diffusion (e.g., DreamFusion, Fantasia3D, ProlificDreamer), which often struggle with fine details or prompt adherence. By optimizing a score-based divergence that unifies text-conditioned diffusion priors with a differentiable reward model, Dive3D consistently produces high-fidelity, semantically precise 3D assets. Additional examples in Figures 4 and 6 compare Dive3D with MVDream and DreamReward. While MVDream preserves geometric consistency, it sometimes deviates from the prompt content (missing keywords highlighted in red). DreamReward improves alignment but remains constrained by its KL-based formulation and associated mode collapse. In contrast, Dive3D faithfully follows the prompt, delivers rich detail and appealing aesthetics, and maintains strong Figure 5. Score-based divergence vs. KL divergence in 2D space sampling. The proposed score-based divergence significantly enhances the diversity of generated 2D samples, yielding more varied backgrounds and clothing in “game character” generation, as well as a broader range of environments, lighting conditions, and architectural features in “Japanese building” generation. Table 1. Quantitative Results on 110 Prompts from the GPTEval3D Benchmark [61]. We compute all six GPTEval3D metrics—text alignment, 3D plausibility, texture–geometry coherence, geometry details, texture details, and overall score—to comprehensively evaluate 3D generation quality. Dive3D achieves the highest score on every metric, demonstrating its superior performance. 
1 Our metrics differ from those reported in [67] because GPT-4V has been deprecated in GPTEval3D, so we instead use GPT-4o-mini. visual coherence. # 5.2. Analysis on Generation Diversity Setup. We then show that score-based divergences produce more diverse, information-rich outputs than traditional KL-based losses. To evaluate this, we test our method in both 2D and 3D settings—using Stable Diffusion [44] as the backbone. In 2D, we represent scenes with 2D Neural Radiance Fields; in 3D, we use full 3D NeRFs. We primarily compare against ProlificDreamer [58], the leading KL-divergence–based method that leverages variational score distillation (VSD) to maximize diversity in text-to-3D generation. On a single NVIDIA A100 GPU, our 2D experiments complete in roughly 30 minutes, while the 3D evaluations take about 9 hours. 2D Results. We begin by evaluating 2D generation, where we distill a 2D neural field from a text-to-image diffusion model. This task shares the same mathematical formulation as our text-to-3D problem but is computationally less demanding because it does not involve camera poses. As shown in Fig. 5, for both game character and realistic architecture generation tasks, the score-based divergence consistently produces more diverse samples than KL divergence. For instance, when generating “a realistic Japanese building,” the KL-based method consistently generates towers with standard color schemes (predominantly red and blue), uniform backgrounds (lush green trees), and similar weather and time conditions (sunny daytime). In contrast, the scorebased approach generates outputs with varied lighting (e.g., night scenes, snowy settings) and diverse architectural features (e.g., towers, pavilions, and residential houses). A similar trend is observed in the game character generation task: while the KL-based SDS loss tends to produce similar archetypes, the score-based loss reveals a wider range of characters, clothing styles, and backgrounds. 3D Results. 
These diversity gains naturally and effectively generalize to 3D synthesis. Figure 1(a) compares the output for “a pirate ship in the sky” under the KL-based VSD loss versus our score-based divergence. As expected, our approach produces a far wider range of geometric shapes, surface textures, and background scenes—from bright sunny skies to dark thunderous clouds. Figure 7 offers additional examples across diverse prompts to reinforce this finding, illustrating how score-based divergence yields richer variation in colors, object styles, material properties, and environmental details.
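As a toy illustration of the score-based divergence defined above (not the paper's implementation), the single-time-slice quantity $\mathbb{E}_{x \sim \pi}[d(s_p(x) - s_q(x))]$ can be estimated by Monte Carlo for two 1D Gaussians, whose score functions are available in closed form. The squared distance for $d$ and the wide Gaussian proposal for $\pi$ are assumptions made for exposition.

```python
import random

# Toy sketch: estimate E_{x ~ pi}[ d(s_p(x) - s_q(x)) ] for two 1D Gaussians,
# using the analytic Gaussian score s(x) = -(x - mu) / sigma^2, d(u) = u^2,
# and a broad Gaussian proposal pi. This drops the time integral and weighting
# w(t) of the full definition for simplicity.

def gaussian_score(x, mu, sigma):
    # Score of N(mu, sigma^2): d/dx log p(x)
    return -(x - mu) / sigma**2

def score_divergence(mu_p, sig_p, mu_q, sig_q, n=20_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 3.0)  # proposal pi with support covering p and q
        diff = gaussian_score(x, mu_p, sig_p) - gaussian_score(x, mu_q, sig_q)
        total += diff * diff      # d(u) = u^2
    return total / n

# Identical distributions: the score difference vanishes everywhere.
print(score_divergence(0.0, 1.0, 0.0, 1.0))  # -> 0.0
# Unit-variance Gaussians shifted by 1: scores differ by the constant -1,
# so the estimate is exactly 1.0 for any sample set.
print(score_divergence(0.0, 1.0, 1.0, 1.0))  # -> 1.0
```

The same structure underlies the losses above: only the choice of the two score functions changes between the conditional, unconditional, and reward-based terms.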
Distilling pre-trained 2D diffusion models into 3D assets has driven remarkable advances in text-to-3D synthesis. However, existing methods typically rely on Score Distillation Sampling (SDS) loss, which involves asymmetric KL divergence--a formulation that inherently favors mode-seeking behavior and limits generation diversity. In this paper, we introduce Dive3D, a novel text-to-3D generation framework that replaces KL-based objectives with Score Implicit Matching (SIM) loss, a score-based objective that effectively mitigates mode collapse. Furthermore, Dive3D integrates both diffusion distillation and reward-guided optimization under a unified divergence perspective. Such reformulation, together with SIM loss, yields significantly more diverse 3D outputs while improving text alignment, human preference, and overall visual fidelity. We validate Dive3D across various 2D-to-3D prompts and find that it consistently outperforms prior methods in qualitative assessments, including diversity, photorealism, and aesthetic appeal. We further evaluate its performance on the GPTEval3D benchmark, comparing against nine state-of-the-art baselines. Dive3D also achieves strong results on quantitative metrics, including text-asset alignment, 3D plausibility, text-geometry consistency, texture quality, and geometric detail.
[ "cs.CV" ]
# 1 Introduction

Negation is a fundamental and universal phenomenon found in languages worldwide. It is closely linked to various human communicative abilities, including denial, contradiction, deception, misrepresentation, and irony. Although affirmative statements are more common, negation is still prevalent in language; approximately $25\%$ of sentences in English texts include some form of negation (Sarabi and Blanco, 2016; Hossain et al., 2020; Horn and Wansing, 2025). Negation plays a crucial role in various natural language processing (NLP) tasks, including sentiment analysis, question answering, knowledge base completion, and natural language inference (NLI). Accurately interpreting negation is vital for understanding semantic oppositions (Khandelwal and Sawant, 2020; Hosseini et al., 2021; Singh et al., 2023). Recent research has shown that the importance of correctly handling negation extends even to multimodal language models (Quantmeyer et al., 2024; Alhamoud et al., 2025; Park et al., 2025), underscoring its widespread relevance across different domains. Meanwhile, negation poses significant challenges for both humans and language models. Research shows that people often find negated statements more difficult to process and understand compared to affirmative ones (Wales and Grieve, 1969; Sarabi and Blanco, 2016). Similarly, multiple studies have found that pretrained language models (PLMs) struggle to accurately interpret negation. For example, models like BERT (Devlin et al., 2019) and even large language models (LLMs) such as GPT-3 (Radford et al.) frequently fail to differentiate between negated and affirmative statements. These models often rely on superficial cues, which can lead to incorrect outputs in the presence of negation (Kassner and Schütze, 2020; Hossain et al., 2022a; Truong et al., 2023). Despite its significance, there is a notable lack of dedicated evaluation benchmarks for understanding negation.
Most existing resources treat negation as a minor aspect within broader tasks or focus solely on narrow syntactic detection. Consequently, evaluations have primarily been limited to encoder-based models (Hossain et al., 2020; Geiger et al., 2020; Truong et al., 2022; Anschütz et al., 2023). To address these shortcomings, we introduce Thunder-NUBench (Negation Understanding Benchmark), a dataset designed to assess large language models’ ability to interpret negation. The contributions of this paper are summarized as follows:

• We define and categorize various negation phenomena, highlighting their differences from contradiction and paraphrase.
• We introduce a manually curated benchmark to assess the ability of LLMs to understand these distinctions.
• We perform systematic evaluations of several decoder-based LLMs using both prompting and fine-tuning approaches.

Our benchmark offers valuable insights into the semantic reasoning abilities of language models and serves as a robust evaluation standard for future advancements in understanding negation.

# 2 Related Work

This section reviews existing studies on how language models understand and process negation.

Negation detection and scope resolution. Early negation detection and scope resolution work focuses on rule-based systems and handcrafted heuristics, particularly in domain-specific settings like clinical texts. These systems are effective but lack flexibility across domains (Chapman et al., 2001; Carrillo de Albornoz et al., 2012; Ballesteros et al., 2012; Basile et al., 2012). Traditional machine learning methods, such as SVMs (Hearst et al., 1998) and CRFs (Sutton et al., 2012), are later introduced, though they also remain limited to narrow domains (Morante et al., 2008; Morante and Daelemans, 2009; Read et al., 2012; Li and Lu, 2018).
More recently, deep learning approaches employing CNNs (O’shea and Nash, 2015) and BiLSTM networks (Siami-Namini et al., 2019) have improved performance through better contextual embedding and sequence modeling (Fancellu et al., 2016; Bhatia et al., 2020). Pretrained transformer models like BERT (Devlin et al., 2019) have been leveraged through transfer learning (e.g., NegBERT (Khandelwal and Sawant, 2020)), significantly enhancing the accuracy of negation detection tasks. However, these methods still primarily address syntactic span detection, with deeper semantic comprehension of negation remaining challenging. Negation-sensitive subtasks of NLU. Negation understanding has become increasingly important in Natural Language Understanding (NLU) tasks (Hosseini et al., 2021). However, existing NLU benchmarks, such as SNLI (Bowman et al., 2015) for NLI, CommonsenseQA (Talmor et al., 2019) for QA, SST-2 (Socher et al., 2013) for sentiment analysis, STS-B (Cer et al., 2017) for textual similarity and paraphrasing, have been criticized for insufficiently accounting for the semantic impact of negation (Hossain et al., 2022a; Rezaei and Blanco, 2024). These datasets contain relatively few negation instances or include negations that are rarely critical to task performance, enabling language models to achieve high accuracy even when ignoring negation entirely. Recent studies, such as NegNLI (Hossain et al., 2020), MoNLI (Geiger et al., 2020), NaN-NLI (Truong et al., 2022), have introduced negation-sensitive NLU benchmarks, demonstrating that model performance significantly declines when negation meaningfully affects the outcome (Naik et al., 2018; Yanaka et al., 2019; Hartmann et al., 2021; Hossain et al., 2022b; Hossain and Blanco, 2022; She et al., 2023; Anschütz et al., 2023). Such findings indicate that current language models heavily rely on superficial linguistic patterns rather than genuine semantic comprehension. Limitations of distributional semantics. 
Distributional semantics, the theoretical basis for many PLMs, is built on the distributional hypothesis. That is, words with similar meanings tend to occur in similar contexts (Harris, 1954; Sahlgren, 2008). This assumption enables models to learn semantic representations from textual co-occurrence patterns, making unsupervised training possible (Boleda, 2020; Lenci et al., 2022). Although powerful in capturing broad semantic relationships, distributional semantics struggles significantly with negation because negated expressions (e.g., "not good") frequently occur in similar contexts as their affirmative forms ("good"), leading models to produce similar vector representations despite their opposite meanings. Previous studies have shown this limitation, highlighting how PLMs fail to capture the semantic nuances introduced by antonyms and polarity reversal (Rimell et al., 2017; Jumelet and Hupkes, 2018; Niwa et al., 2021; Jang et al., 2022; Vahtola et al., 2022). Moreover, studies suggest that PLMs like BERT struggle to differentiate between affirmative and negated contexts (Kassner and Schütze, 2020; Ettinger, 2020).

Negations in generative language models. Recent research on negation understanding has primarily focused on bidirectional models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) due to their strong performance on NLU and negation detection tasks. However, with the rise of generative foundation models such as GPT (Radford et al.) and LLaMA (Touvron et al., 2023), attention has shifted toward evaluating their negation handling. Studies have found that these models often exhibit positive bias and struggle with producing or interpreting negated statements (Truong et al., 2023; Chen et al., 2023; García-Ferrero et al., 2023). While benchmarks such as CONDAQA (Ravichander et al., 2022) and ScoNe (She et al., 2023) expose these limitations, robust evaluation resources tailored to generative models remain scarce.

Table 1: Typology of Negation.
Table 2: Contradiction types from de Marneffe et al. (2008). Contradiction covers a broader scope than negation.

Building upon these prior studies, this paper evaluates whether generative models can understand negation in complex sentences and distinguish subtle semantic differences beyond surface-level patterns.

# 3 Definition and Scope of Negation

Although negation has been widely studied in NLP, its definition and scope remain loosely specified, with most studies focusing on identifying negation cues or confusing negation with contradiction. In this work, we aim to refine the definition of negation by examining its semantic boundaries, distinguishing it from related but distinct phenomena such as contradiction, and characterizing the types of meaning reversal.

# 3.1 Typology of Negation

Negation is a core semantic and syntactic operation in natural languages expressing a proposition’s denial, contradiction, or absence. In formal logic, it reverses the truth value of a proposition: if $P$ is true, then $\neg P$ (which means the negation of $P$) is false, and vice versa. Semantically, negation introduces oppositions, often to a corresponding affirmative proposition (Horn and Wansing, 2025). Negation can be categorized along several dimensions, including scope, form, and target (see Table 1). By scope, it may affect the entire clause (clausal negation) or just a part of it (subclausal negation). In terms of form, it can appear as bound morphemes, such as prefixes and suffixes (morphological negation), or as separate syntactic elements like "not" or "never" (syntactic negation). Finally, depending on its target, negation can apply to the verb (verbal negation) or to other elements in the sentence (non-verbal negation) (Zanuttini, 2001; Miestamo, 2007; Truong et al., 2022; Kletz et al., 2023).

# 3.2 Negation and Contradiction

Negation and contradiction, closely related concepts, are often conflated in NLP research (Jiang et al., 2021).
Contradiction refers to the incompatibility of two propositions: they cannot both be true simultaneously. Although negation frequently serves as a primary mechanism to create contradictions by reversing the truth value of a proposition, contradictions may also arise through antonymy, numeric mismatch, or structural and lexical differences (more details are in Table 2) (de Marneffe et al., 2008). Previous studies have often overlooked the possibility of contradictions existing independently of explicit negation. Recognizing this gap, we specifically examine the ability of LLMs to differentiate between negations and non-negated contradictions, highlighting the nuanced semantic distinctions involved.

# 3.3 Standard Negation

In this paper, we specifically examine standard negation: the prototypical form of negation applied to the main declarative verbal clause. Standard negation involves negating the main verb in a main clause, where the main verb expresses the main action of a clause, and the main clause itself can independently form a complete sentence. This definition excludes negation found in subordinate clauses, which are clauses dependent on a main clause (Miestamo, 2000). This paper specifies standard negation as reversing the truth value of the main predicate (verbal phrase). Formally, if a main predicate is denoted as $P$, standard negation corresponds precisely to $\neg P$ (i.e., the logical negation of $P$). Standard negation can encompass various dimensions, as described in Table 1. Specifically, it includes clausal negation, which affects the entire sentence, and verbal negation, which explicitly targets the main verb. In terms of form, standard negation can be realized either syntactically or morphologically. Syntactic negation typically involves inserting negation particles (e.g., "not") to directly negate the main predicate.
On the other hand, morphological negation is more limited, applying only when the antonym of the main predicate fully encompasses its mutually exclusive semantic space (e.g., "be alive" vs. "be dead"). Thus, morphological negation qualifies as standard negation only in cases involving complementary antonyms, which represent absolute binary oppositions (e.g., "true" vs. "false" and "possible" vs. "impossible"). In contrast, other types of antonyms, such as gradable antonyms—words that express opposite meanings along a spectrum of quality (e.g., "happy" vs. "unhappy/sad/depressed")—and relational antonyms—words expressing opposite relational roles (e.g., "buy" vs. "sell")—do not strictly reverse truth values (Lehrer and Lehrer, 1982). These examples fall under contradiction rather than the standard negation discussed in this paper. The above definitions and scope of negation provide the foundation for constructing our benchmark dataset, which we describe in detail in the following section.

# 4 Thunder-NUBench Dataset

We construct the Thunder-NUBench dataset based on two datasets: (1) the HoVer dataset, which is designed for multi-hop fact extraction and claim verification based on Wikipedia articles (Jiang et al., 2020), and (2) the Wikipedia Summary dataset, which contains concise article titles and summaries extracted from English Wikipedia (Scheepers, 2017). We select these datasets as our base corpora because their factual content and complex sentence structures make them well-suited for creating a dataset to understand standard negation in long sentences.

# 4.1 Dataset Generation

The overall process for dataset generation proposed in this paper is illustrated in Figure 1.
After extracting sentences from the two sources and preprocessing them, we construct two types of datasets: (1) a sentence-negation pair dataset, which includes only standard negation examples, and (2) a multiple-choice dataset, which covers four categories and is systematically constructed through a combination of manual authoring and automated generation. All data is then verified and refined through human review, with a strict protocol ensuring that no author reviews data they themselves generated, to guarantee quality and consistency. Further details of each step are described below.

Sentence-negation pair dataset. We begin by randomly sampling sentences labeled as "supported facts" from the HoVer dataset. Since the original data often contains grammatical errors, we utilize OpenAI’s API (OpenAI, 2025) to automatically correct these issues. In cases where the selected text consists of multiple sentences, we merge or split them as needed to ensure that each example is a single sentence, aligning with our sentence-level task objective (see Appendix D for details). Each sentence is manually negated according to the standard negation criteria described earlier, followed by a thorough review process.

Figure 1: Dataset generation process.

Multiple choice dataset. To construct the multiple-choice dataset, we first segment the "summary" column of the Wikipedia Summary dataset into individual sentences, as a single entry often contains multiple sentences. To focus on the challenges of negation in complex sentences, we filter out sentences that are too short. We generate multiple candidate options for each selected sentence, including standard negation, local negation, contradiction, and paraphrase. Negation examples are manually written because LLMs often fail to generate accurate negations, frequently producing subclausal negations when standard negation is required, or generating incorrect local negations even when explicitly prompted.
As a result, automated generation is not used for these cases. In contrast, non-negated contradiction and paraphrase examples are first generated automatically using carefully designed prompts with the OpenAI API. All data are further reviewed and refined by the authors.

# 4.2 Human Curation Rules

Below, we describe the principles for manual generation and dataset review. Additional details and examples are provided in Appendix E.

Standard negation. The rules for standard negation are as follows:

• The standard negation is intended to reverse the sentence’s overall meaning and is achieved by negating the main clause’s main verb, reversing the sentence’s truth value. All other elements of the main clause (subject, object, temporal context, etc.) are preserved unchanged.
• Standard negation is implemented by inserting negative particles such as "not" into the main verb or substituting the main verb with its complementary antonym, when appropriate.
• The main verb may be replaced with a synonymous verb only if the overall meaning remains strictly identical. Other components may be paraphrased with synonyms as long as the tense, sentence structure, and meaning remain equivalent. Standard negation is then applied to the paraphrased sentence.
• For compound sentences joined by coordinating conjunctions (and, or, and but), negation follows logical rules such as De Morgan’s laws (e.g., $\neg(\mathrm{A\ and\ B}) \to \neg\mathrm{A}$ or $\neg\mathrm{B}$, where A and B are clauses). For unnatural outputs, sentences may be split for fluency, as long as logical negation is preserved.

Table 3: Typology of Local Negation.

Local negation. We define local negation as a negation that targets a verb phrase outside the main clause. This work applies local negation to four types of sentence structure: relative clause, participle clause, adverbial clause, and compound sentence (see Table 3 for more details).
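The De Morgan rule used for compound sentences can be sketched as follows. This is an illustrative toy (not the authors' tooling, since negation in the dataset is written manually), and `negate_clause` is a hypothetical placeholder for the manual negation step.

```python
# Sketch of De Morgan-style negation for compound sentences:
# negating "A and B" yields "not A or not B"; negating "A or B"
# yields "not A and not B". The per-clause negation here is a naive
# placeholder; in the paper, clause negation is authored by hand.

def negate_clause(clause):
    # Hypothetical helper: a crude surface negation of one clause.
    return f"it is not the case that {clause}"

def negate_compound(clauses, conj):
    assert conj in ("and", "or")
    flipped = "or" if conj == "and" else "and"  # De Morgan flips the conjunction
    return f" {flipped} ".join(negate_clause(c) for c in clauses)

print(negate_compound(["the band toured", "the album charted"], "and"))
# -> "it is not the case that the band toured or it is not the case that the album charted"
```

Negating only one of the clauses instead of all of them would produce local rather than standard negation, which is exactly the distinction the benchmark tests.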
In compound sentences, standard negation requires that all main clauses be negated to achieve sentence-level negation; if only a subset of clauses is negated, it is considered local negation. The mechanism for constructing local negation follows that of standard negation, but the negation is applied only to the intended subpart rather than the entire main clause. This design allows us to test whether models can distinguish between full-sentence negation and local negation. Although the scope of local negation is restricted, it still contains explicit negation cues (e.g., "not"), which may mislead models that rely on shallow cue detection rather than deeper semantic understanding. Contradiction. Contradiction examples in this work are constructed using mechanisms such as antonymy, numeric changes, or structural alterations, as long as the resulting sentence cannot be true at the same time as the original and does not simply apply standard or local negation. Unlike standard negation, which reverses the truth value of the main predicate, contradiction can arise from modifying adjectives, quantities, named entities, or other semantic elements. Both negation and contradiction can involve antonyms; however, only complementary antonyms are permitted for standard and local negation, while gradable or relational antonyms are allowed for contradiction. During validation and review, authors ensure that no pair of original and contradictory sentences can be simultaneously true. It is important to note that standard negation is a subset of contradiction: every negation is a contradiction, but not every contradiction is a standard negation. The goal of this category is to assess whether models can reliably distinguish standard negation from other forms of contradiction, as both alter the meaning of a sentence, but standard negation reverses the entire proposition, whereas contradiction, as defined in this work, does not necessarily do so. Paraphrase. 
A paraphrase rewrites the sentence using different wording or structure while preserving the original meaning. No additional information may be introduced, and the main verbs and core content must remain unchanged. Identical or near-identical copies of the original sentence, which often occur in automatically generated paraphrases, are carefully screened out as well. The reason for including paraphrase examples is to test whether models incorrectly interpret sentences with different surface forms (e.g., synonyms, rephrased structures) as having reversed meanings. This allows us to examine the robustness of language models in distinguishing genuine reversals from preserved meaning, a distinction that has been highlighted as a challenge in previous research on distributional semantics (see Section 2).

Table 4: Thunder-NUBench Dataset Statistics.

# 4.3 Dataset Statistics

The final dataset consists of a sentence-negation pair training set and a multiple-choice evaluation set (see Table 4). The multiple-choice dataset presents each original sentence with four options: standard negation (choice1, always the answer), local negation (choice2), contradiction (choice3), and paraphrase (choice4). To construct the validation set of the multiple-choice dataset, we first select 100 examples whose Wikipedia page indices are unique within the dataset, preventing any duplication with the test set. We also matched the distribution of local negation types to the overall dataset as closely as possible, ensuring that the validation set serves as a representative subset. Thunder-NUBench is available online.1

# 5 Experiments

We conduct experiments on Thunder-NUBench using an instruction-based prompt that explicitly includes logical rules (as illustrated in Listing 1; more details of prompt selection are in Appendix J).
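The four-option multiple-choice format described in Section 4.3 can be illustrated with a hypothetical record. The sentences below are invented for exposition and are not drawn from the dataset; only the field layout (choice1 as standard negation and always the answer, choice2-4 as distractors) follows the text.

```python
# Hypothetical example of the multiple-choice format: the original sentence is
# a compound ("A and B"), so its standard negation follows De Morgan's law
# ("not A or not B"), while negating only one clause yields local negation.
example = {
    "sentence": "The bridge was completed in 1932 and connects the two districts.",
    "choice1": "The bridge was not completed in 1932 or does not connect the two districts.",  # standard negation (always the answer)
    "choice2": "The bridge was completed in 1932 and does not connect the two districts.",     # local negation (one clause only)
    "choice3": "The bridge was completed in 1965 and connects the two districts.",             # contradiction (numeric mismatch)
    "choice4": "Finished in 1932, the bridge links the two districts.",                        # paraphrase
    "answer": "choice1",
}

assert example["answer"] == "choice1"
```

A model that keys only on the surface cue "not" cannot separate choice1 from choice2, which is the failure mode the benchmark is designed to expose.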
Table 5: Evaluation results of Qwen2.5 models across different settings: baseline zero-shot, baseline few-shot (5-shot), and zero-shot after supervised fine-tuning (SFT) on Thunder-NUBench. Both accuracy (acc) and normalized accuracy (acc_norm) are reported. Few-shot results are averaged over five random seeds with the standard deviation in parentheses.

Zero-shot and few-shot. For each model, we evaluate both zero-shot and few-shot settings using the Language Model Evaluation Harness (Gao et al., 2024). In the few-shot scenario, we use examples from the validation set as in-context demonstrations. In few-shot experiments, results are averaged over five random seeds (42, 1234, 3000, 5000, and 7000), using five examples from the validation set (5 shots). We report performance on the test set for each model and prompt configuration. All the results are provided in Appendix M.

SFT. We perform Supervised Fine-Tuning (SFT) using the LLaMA-Factory framework (Zheng et al., 2024) on the Sentence-Negation Pair Dataset from Thunder-NUBench. The data is formatted in the Alpaca instruction style (Taori et al., 2023). For parameter-efficient training, we apply Low-Rank Adaptation (LoRA) (Hu et al., 2022) with a rank of 8, targeting all linear layers. Fine-tuning is conducted for three epochs with a batch size of 1, a gradient accumulation step of 8, cosine learning rate scheduling, and bfloat16 precision. After SFT, we evaluate zero-shot performance to directly measure the model’s ability to generalize from instruction tuning without the influence of in-context examples. All the results are provided in Appendix N.

Table 6: Incorrect choice distribution and confusion analysis in the negation benchmark across various 7-8B size pretrained models. Few-shot results are reported with a fixed seed (1234) to keep error patterns clear, because averaging over multiple seeds could make them harder to interpret.

Benchmarking with Thunder-NUBench.
Table 5 presents evaluation results for Qwen2.5-3B, Qwen2.5-3B-Instruct, Qwen2.5-7B, and Qwen2.5-7B-Instruct models (Qwen et al., 2025) in three settings: zero-shot baseline, few-shot baseline, and zero-shot after SFT using Thunder-NUBench. Instruction-tuned models consistently outperform their pretrained counterparts. Few-shot prompting significantly improves performance, highlighting the benefit of concrete examples. Supervised fine-tuning on Thunder-NUBench further boosts accuracy, with pretrained models showing the most significant gains. Notably, Qwen2.5-3B-Instruct performs exceptionally well in zero-shot and few-shot settings, even outperforming larger models, suggesting strong alignment with the logical reasoning demands of Thunder-NUBench.

Negation understanding analysis. We analyze model errors to assess their ability to distinguish standard negation from similar semantic variants. Each local negation subtype in our dataset is explicitly labeled according to its sentence structure: relative clauses (relative_part), participle clauses (pp_part), compound sentences (compound_part), and adverbial clauses (adverb_part). To identify which subtypes are most often confused with standard negation, we compute the confusion rate, defined as the proportion of examples within each subtype where the model incorrectly selects the local negation option (choice2) instead of the correct standard negation (choice1). For instance, among 1,002 test examples, 290 of the choice2 options are labeled as pp_part; if the model erroneously chooses choice2 in 29 of these, the confusion rate for pp_part is $10\%$. For all the details, please see Appendix P. We conduct a comparative analysis across four 7-8B scale pretrained language models (LLaMA-3.1-8B, Gemma-7B, Qwen2.5-7B, and Mistral-7B-v0.3) (Grattafiori et al., 2024; Team et al., 2025; Qwen et al., 2025; Jiang et al., 2023) under three evaluation settings: baseline (zero-shot and few-shot) and zero-shot after SFT.
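The confusion-rate computation described above can be sketched directly; the counts reproduce the worked example from the text (29 of 290 pp_part examples yields 10%), while the model predictions themselves are hypothetical.

```python
from collections import Counter

# Per-subtype confusion rate: the fraction of examples in each local-negation
# subtype where the model picks choice2 (local negation) instead of the
# always-correct choice1 (standard negation).

def confusion_rates(examples):
    totals, confused = Counter(), Counter()
    for ex in examples:
        subtype = ex["choice2_type"]        # e.g. "pp_part", "compound_part"
        totals[subtype] += 1
        if ex["prediction"] == "choice2":   # choice2 chosen over choice1
            confused[subtype] += 1
    return {t: confused[t] / totals[t] for t in totals}

# Reproduce the paper's worked example: 29 of 290 pp_part examples confused.
examples = ([{"choice2_type": "pp_part", "prediction": "choice2"}] * 29
            + [{"choice2_type": "pp_part", "prediction": "choice1"}] * 261)
rates = confusion_rates(examples)
print(rates["pp_part"])  # -> 0.1
```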
All models show a consistent tendency to incorrectly select local negation (choice2), even though SFT reduces overall errors, showing that distinguishing local from full-sentence negation remains difficult. The confusion rate is especially high for compound sentence structures, highlighting specific areas where models systematically struggle with negation understanding.
Negation is a fundamental linguistic phenomenon that poses persistent challenges for Large Language Models (LLMs), particularly in tasks requiring deep semantic understanding. Existing benchmarks often treat negation as a side case within broader tasks like natural language inference, resulting in a lack of benchmarks that exclusively target negation understanding. In this work, we introduce **Thunder-NUBench**, a novel benchmark explicitly designed to assess sentence-level negation understanding in LLMs. Thunder-NUBench goes beyond surface-level cue detection by contrasting standard negation with structurally diverse alternatives such as local negation, contradiction, and paraphrase. The benchmark consists of manually curated sentence-negation pairs and a multiple-choice dataset that enables in-depth evaluation of models' negation understanding.
[ "cs.CL" ]
# 1 Introduction Big data is now a key focus for both government and business leaders[3]. However, buried within this immense data deluge lies an abundance of untapped potential and valuable insights, which has given rise to an innovative scientific paradigm known as data-intensive scientific discovery[4]. Researchers actively seek ways to leverage available data to gain valuable insights and inform decision-making. On the one hand, big data offers substantial value, fostering business productivity and catalyzing revolutionary breakthroughs in scientific disciplines. On the other hand, the utilization of big data is accompanied by challenges, ranging from the complexities of data capture[5], storage[6], and analysis to the intricacies of data visualization[7]. A prerequisite for realizing big data applications is a robust data system managing external queries as well as information retrieval. In order to efficiently and securely handle a large volume of data, we introduce StreamLink, an AI-powered distributed data system with an enhanced ability to process billions of data records while reducing user operational costs. In addition to the support from a scalable and dependable distributed database, one exceptional feature of this system is its highly accessible and security-oriented interaction with users, enabled by the latest Large Language Models (LLMs) with outstanding capabilities in language generation and domain adaptation. To illustrate its performance, we deploy the proposed system to store global patent data, most of which comes from the United States Patent and Trademark Office (USPTO)1 and Google Patents2. There are approximately 180 million patents in this system and the count is growing rapidly, with the USPTO Patent Assignment Dataset[8] containing around 6 million assignments and transactions recorded between 1970 and 2014, affecting about 10 million patents or patent applications.
We have also validated StreamLink's robustness through collaboration with the patent and intellectual property (IP) team at King & Wood Mallesons, who were invited to experience our system and provide usage feedback. In this paper, we make the following contributions: LLM-based NL-to-SQL[9] Generator: While numerous studies[10][11][12] have explored Natural Language to Structured Query Language (NL-to-SQL) techniques, we integrate the latest advancements in LLMs into our distributed patent database system. Our approach leverages LLMs' in-context learning capabilities and domain-specific adaptability to understand and translate natural language instructions into SQL commands that can precisely operate over 180 million patents in our database. Optimized Distributed System Architecture for AI+ Platforms: Our research focuses on creating scalable AI+ platforms using an optimized distributed system architecture. To the traditional Apache distributed architecture, we have added distributed LLM clusters and UI clusters, and further designed a Central Control Unit (CCU) to schedule tasks. We utilize three distributed storage nodes, two LLM nodes, and two UI nodes to conduct tests with 180 million patents. The storage consumption is 15.3TB (5.1TB x 3, with dual redundancy for reliability), and the system is accelerated using 280 cores (560 threads) and 2.6TB of memory. During testing, we confirmed that the average time from user input in natural language to obtaining the desired patent from a database containing over 180 million patents is within 6 seconds. Figure 1: Architecture of Project StreamLink. Data Privacy Protecting: We have confirmed that building a localized LLM-based assistant can significantly enhance productivity while providing a higher level of privacy to users.
Through using a locally deployed model for our LLM-based assistant, we can effectively eliminate the risks of data breaches and information leakage that could occur while using cloud-based AI assistants. This approach maintains the confidentiality and integrity of user data and underscores our commitment to prioritizing privacy in the development of advanced technological solutions. We have also developed mechanisms for SQL legality checking using Llama[13]-based tools, protecting the system from accidental deletion or injection attacks. We organized this paper as follows. We discuss the necessity of this work and its application scenarios in Section 2, then introduce the architecture of StreamLink in Section 3, the methodologies we used, and the reasons we chose these technologies. We present experiments in Section 4, including comparisons between our method and traditional strategies with statistical metrics, and conclude the paper with a short discussion in Section 5. # 2 Task Description In this section, we discuss the reasons we created the StreamLink project (as shown in Figure 1) and provide some cases where it has been used successfully. Our data system is primarily used for handling large-scale data storage and retrieval. A typical scenario involves retrieving patents from a database such as Google Patents that meet specific criteria (e.g., date range, pattern, keywords) and analyzing the potential for IP infringement. Traditionally, IP researchers and lawyers might need to read through extensive documentation and perform dozens to hundreds of repetitive searches on Google Patents, or write complex SQL-like statements on data engineering platforms like BigQuery to retrieve patents.
The former requires significant manpower and time, often necessitating several lawyers to collaborate over several days to filter the data, while the latter requires extensive technical expertise to write complex SQL commands or other data manipulation languages, and familiarity with the intricacies of data storage and computation frameworks. In addition, SQL commands can contain bugs and often require considerable time to be adjusted and modified. With the StreamLink platform, users can complete all the above tasks in a more efficient and accessible fashion. Without the need to design a SQL command, users like IP researchers and lawyers can directly query the patent database via a natural language request. With the LLM-based interface, our data system converts this natural language query into a SQL command with a security check, and the distributed database then executes this SQL command in a few seconds. Retrieved patents are expected to meet all filter conditions in the natural language query. Furthermore, there is great flexibility for building different AI interfaces upon StreamLink's distributed database. In this case, we have implemented a BERT-based[14] semantic filter on these retrieved patents to further extract the patents with potential for IP infringement. Another challenge is the scalability of large-scale databases. Traditional data warehouses struggle to handle the exponential growth of data volumes, which can lead to capacity issues[15], and they can also fail to seamlessly scale in response to fluctuating data processing demands[16]. To handle the issue of patent storage capacity, we employed distributed data warehouses[17], designed to efficiently store and manage a vast amount of information across multiple servers. This ensures high fault tolerance of the databases as well as facilitates elastic scaling of storage resources to meet growing patent data demands.
Currently, we use three of the 5.1TB nodes to store 180 million entries from the USPTO and Google Patents. # 3 Methodology In this section, we will present the components of StreamLink. Section 3.1 will introduce the LLM-driven SQL Generator, an innovative tool capable of understanding natural language instructions and translating them into SQL commands based on the database schema. Section 3.2 will showcase our distributed framework based on Apache Spark and Apache Hadoop, which offers robust support for processing large-scale datasets, ensuring high scalability and processing capacity. Moreover, we will discuss our distributed WebUI clusters and load balancing in this section. We will also present our brand-new Llama-based SQL syntax and security checker built upon StreamLink to reduce the risks associated with grammatical errors or malicious SQL injections in Section 3.3. # 3.1 LLM-based SQL Generator Our IP lawyer collaborators work with patents from around the globe every day. They may want to execute a command similar to SELECT cpc, COUNT(*) AS count FROM google_full WHERE assignee LIKE "%Intel%" AND grant_date >= "2009" GROUP BY cpc ORDER BY count DESC LIMIT 10 to analyze the most popular CPC numbers of patents from Intel, but writing such a SQL command is too difficult for them without professional programming training. To solve this problem, our LLM-driven SQL Generator is an innovative tool that makes data engineering more accessible to a wider audience. It has the ability to comprehend natural language instructions and convert them into SQL commands, thereby reducing the learning curve for users. Even those who lack specialized programming training can effortlessly carry out complex data engineering tasks.
While traditional natural language to SQL generators are based on encoder-decoder structures[18], requiring extensive data training to obtain the ability to generate SQL commands before specializing in a specific database, we utilize an LLM-based SQL generator and propose two methods for SQL generation. One method involves quickly generating specialized SQL commands for corresponding databases based on specific rules, followed by fine-tuning. The other involves parsing database structures to quickly generate prompt templates, aiding the LLM in migrating to new databases. Both methods are faster and more scalable than traditional approaches, making them more appropriate data engineering assistants. We use LoRA[19] as an improved fine-tuning method: instead of fine-tuning all the weights that constitute the weight matrix of the pre-trained LLM, two smaller matrices that approximate the larger matrix's weight update are fine-tuned. These matrices constitute the LoRA adapter. This fine-tuned adapter is then loaded into the pre-trained model and used for inference. For the NL-to-SQL conversion, we construct context-target pairs $Z = \{(x_i, y_i)\}_{i=1}^{N}$, where $x_i$ is a natural language query and $y_i$ its corresponding SQL command. During fine-tuning, the model is initialized to pre-trained weights $\Phi_0$, and the task-specific parameter increment $\Delta\Phi = \Delta\Phi(\Theta)$ is encoded by a much smaller set of parameters $\Theta$ with $|\Theta| \ll |\Phi_0|$. Optimizing SQL generation quality amounts to minimizing the cross-entropy loss at the decoding stage.
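The LoRA idea described above, two small matrices approximating the weight update of a frozen matrix, can be illustrated with a minimal numpy sketch; the dimensions and rank are arbitrary, and this is not StreamLink's training code:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 8  # rank r = 8, far below min(d_out, d_in)

W0 = rng.standard_normal((d_out, d_in))  # frozen pre-trained weight (Phi_0)
B = np.zeros((d_out, r))                 # LoRA "B" matrix, initialized to zero
A = rng.standard_normal((r, d_in))       # LoRA "A" matrix

def forward(x, B, A):
    # Effective weight is W0 + B @ A; only B and A (the adapter) are trained.
    return (W0 + B @ A) @ x

x = rng.standard_normal(d_in)
# Before any training the adapter is a no-op, since B is zero:
assert np.allclose(forward(x, B, A), W0 @ x)

# The adapter stores far fewer trainable parameters than a full update:
full, lora = W0.size, B.size + A.size
print(full, lora)  # 4096 vs. 1024
```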
The task of finding $\Delta\Phi$ thus becomes optimizing over $\Theta$: $$ \max_{\Theta} \sum_{(x,y) \in Z} \sum_{t=1}^{|y|} \log \left( p_{\Phi_0 + \Delta\Phi(\Theta)}(y_t \mid x, y_{<t}) \right) $$ instead of full fine-tuning: $$ \max_{\Phi} \sum_{(x,y) \in Z} \sum_{t=1}^{|y|} \log \left( p_{\Phi}(y_t \mid x, y_{<t}) \right) $$ Another critical challenge for fine-tuning is to adapt an LLM to NL-to-SQL tasks within a domain-specific schema. Different domains have different rules for defining schemas in their data storage, so we propose a mechanism to augment the domain-specific NL-to-SQL training set given a small set of query templates. This mechanism augments the training set by simultaneously propagating SQL commands and their corresponding natural language queries (see Figure 2). Every SQL template query can be turned into a set of SQL commands by inserting different field instances into it, and for each SQL template query, we designed natural language queries in different written expressions. Each SQL template is propagated in two directions (natural language queries and SQL commands with various field instances), and then natural language queries are matched with their corresponding SQL commands to form the augmented training set. To prevent the LLM from suffering from catastrophic forgetting and over-fitting, we combined the domain-specific dataset with publicly available NL-to-SQL datasets like WikiSQL[20] and Spider[21]. Through extensive experiments, 1:1 is found to be the optimal hybrid ratio of the domain-specific training set to the open domains. Figure 2: Each SQL template is propagated into SQL commands and natural language queries by inserting field instances (e.g. top_n: 5, 10, 20, ...; org: Intel, AMD, Nvidia, ...; year_l: 1980, 1981, ...; year_h: 2000, 2001, ...).
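The template-propagation mechanism can be sketched as below; the template strings and field values are illustrative stand-ins for the ones shown in Figure 2, not the actual augmentation code:

```python
from itertools import product

# One SQL template and two NL phrasings of the same request, with
# placeholder fields as in Figure 2 (names and values are illustrative).
sql_template = ('SELECT cpc, COUNT(*) AS count FROM patents '
                'WHERE assignee LIKE "%{org}%" AND grant_date >= "{year_l}-01-01" '
                'AND grant_date <= "{year_h}-12-31" '
                'GROUP BY cpc ORDER BY count DESC LIMIT {top_n}')
nl_templates = [
    "Tell me the top {top_n} most frequent CPC in {org} from {year_l} to {year_h}.",
    "What are the top {top_n} most frequent CPC from {year_l} to {year_h} in {org}?",
]
fields = {"top_n": [5, 10], "org": ["Intel", "AMD"],
          "year_l": [2000], "year_h": [2009]}

def augment():
    """Propagate each field-instance combination into both the SQL command
    and every NL phrasing, then pair them up into training examples."""
    pairs = []
    keys = list(fields)
    for combo in product(*(fields[k] for k in keys)):
        binding = dict(zip(keys, combo))
        sql = sql_template.format(**binding)
        for nl in nl_templates:
            pairs.append((nl.format(**binding), sql))
    return pairs

pairs = augment()
print(len(pairs))  # 2*2*1*1 field combinations x 2 phrasings = 8
```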
Figure 2 includes a template sample, SELECT cpc, COUNT(1) AS count FROM patents WHERE assignee LIKE "%org%" AND grant_date >= "year_l-01-01" AND grant_date <= "year_h-12-31" GROUP BY cpc ORDER BY count LIMIT top_n, together with natural language phrasings such as "What are the top top_n most frequent CPC from year_l to year_h in org?" and "Tell me the top_n most frequent CPC in org from year_l to year_h". # 3.2 Distributed Computing, Storage and WebUI Cluster In this section, we will discuss our distributed framework. This framework is the foundation of our data engineering system, and it is designed to manage and process large-scale datasets efficiently, making our system scalable and robust. Using these distributed computing paradigms, we can distribute data processing tasks among multiple nodes, reducing the time required for data processing and analysis. As shown in Figure 3, by adopting this approach, we can efficiently, reliably, and scalably handle large-scale datasets. Not only does this method overcome the limitations of traditional data processing methods, but it also unlocks new possibilities for advanced data analytics and engineering tasks. Therefore, it is an essential component of our data engineering ecosystem. Figure 3: Distributed system architecture with LLM to improve agility. For user experience, we have developed a distributed Web User Interface (WebUI) cluster and implemented a load balancing mechanism that ensures high availability and responsiveness of the user interface. To guarantee the effectiveness of our WebUI cluster, we have implemented a robust load balancing mechanism using Nginx[22], a high-performance HTTP server and reverse proxy. Nginx acts as an intermediary between the client and the WebUI instances, intelligently distributing incoming requests across the available nodes based on predefined algorithms. This evenly distributes incoming traffic across the WebUI instances, preventing any single node from becoming overwhelmed with requests, thus avoiding performance degradation and downtime.
Additionally, in case of node failure or maintenance, Nginx dynamically reroutes requests to healthy nodes, ensuring uninterrupted service for users. # 3.3 Llama-driven Checker SQL statements can bring many risks, including execution failure or irreversible impacts on the system. To address this problem, we have designed a new Llama-driven syntax and security checker for StreamLink. These tools represent a significant advancement in enhancing the accuracy and security of SQL commands within our data engineering system. The SQL syntax checker analyzes the structure and syntax of SQL commands generated by our system, ensuring that they adhere to the correct grammar and formatting rules. By validating the syntax of SQL commands, this tool significantly reduces the likelihood of errors that could arise from incorrect or malformed commands. The security checker then plays a crucial role in mitigating potential risks associated with SQL injection attacks. By scrutinizing SQL commands for suspicious patterns or constructs that may indicate malicious intent, the security checker helps safeguard our system against unauthorized access, data breaches, and other security vulnerabilities. Together, the SQL syntax checker and security checker strengthen the reliability and integrity of our data engineering system by minimizing the risk of errors and malicious activities. This proactive approach to SQL command validation not only enhances the overall quality of data processing but also instills confidence in the security posture of our system. It ensures the safe handling of sensitive information and protects against potential threats. # 4 Experiments In this section, we present the results of experiments conducted using StreamLink for data engineering, compared to traditional data systems. These experiments involve SQL generation reliability and malicious SQL interception evaluation.
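For contrast with the Llama-driven checker of Section 3.3, a naive pattern-based filter, which the LLM approach is meant to go beyond, might look like this toy sketch; the patterns are illustrative, not an exhaustive or production-grade rule set:

```python
import re

# A toy heuristic filter, NOT the Llama-driven checker described above:
# it flags a few well-known SQL-injection patterns. Robust detection
# needs far more than fixed regexes, which motivates the LLM checker.
SUSPICIOUS = [
    r"(?i)\bor\b\s+'?1'?\s*=\s*'?1'?",    # tautology such as OR 1=1
    r";\s*(?i:drop|delete|update)\b",     # stacked destructive statement
    r"--",                                # trailing comment cutting off the query
]

def looks_malicious(sql: str) -> bool:
    return any(re.search(p, sql) for p in SUSPICIOUS)

print(looks_malicious("SELECT * FROM users WHERE name = '' OR 1=1 --"))  # True
print(looks_malicious("SELECT cpc, COUNT(*) FROM patents GROUP BY cpc")) # False
```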
# 4.1 Generation Accuracy In our first experiment, we compared our proposed method to several existing approaches using the Spider[21] dataset, which consists of 10,181 questions and 5,693 unique complex SQL commands on 200 databases with multiple tables covering 138 different domains. Our goal was to evaluate the effectiveness of SQL generation, and we leveraged state-of-the-art LLMs and fine-tuning techniques to do so. The results showed that our method consistently outperformed the baseline methods in terms of SQL generation quality and accuracy. We conduct experiments on Spider and compare our method with several baselines, including:
• Natural SQL[25], a SQL intermediate representation (IR) that enables existing models that do not support executable SQL generation to generate executable SQL queries.
• GRAPPA[26], a grammar-augmented pre-training framework for table semantic parsing.
• $S^2$SQL[27], which injects syntax into the question-schema graph encoder for text-to-SQL parsers, effectively leveraging the syntactic dependency information of questions to improve performance.
• PICARD[28], a method for constraining auto-regressive decoders of language models through incremental parsing.
• RASAT[29], a Transformer-based seq2seq architecture augmented with relation-aware self-attention that can leverage a variety of relational structures.
• StruG[30], a structure-grounded pre-training framework for text-to-SQL that can effectively learn to capture text-table alignment based on a parallel text-table corpus.
• BERT[31], pre-training of deep bidirectional transformers for language understanding.
We demonstrate the exact match and execution accuracy of the baseline methods and our LLM-driven methods in Table 1.
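Execution accuracy, one of the two metrics reported in Table 1, can be illustrated with a small sqlite3 sketch that compares the result sets of a predicted and a gold query; the schema and queries are toy assumptions, and this is not the official Spider evaluator:

```python
import sqlite3

# Build a tiny in-memory database; schema and rows are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE patents (cpc TEXT, assignee TEXT)")
con.executemany("INSERT INTO patents VALUES (?, ?)",
                [("G06F", "Intel"), ("G06F", "Intel"), ("H04L", "AMD")])

def execution_match(pred_sql: str, gold_sql: str) -> bool:
    """Execution accuracy: do the two queries return the same result set?"""
    try:
        pred = con.execute(pred_sql).fetchall()
    except sqlite3.Error:
        return False  # a query that fails to run cannot match
    gold = con.execute(gold_sql).fetchall()
    return sorted(pred) == sorted(gold)

gold = "SELECT cpc, COUNT(*) FROM patents GROUP BY cpc"
# A differently written prediction can still be an execution match:
pred = "SELECT cpc, COUNT(cpc) FROM patents GROUP BY cpc ORDER BY cpc"
print(execution_match(pred, gold))  # True
```

Exact match, by contrast, compares the (normalized) SQL strings themselves, so the prediction above would fail it while passing execution accuracy.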
StreamLink: Large-Language-Model Driven Distributed Data Engineering System Table 1: Comparison of various model performances on the Spider dev-set for text-to-SQL, including Exact Match (EM) and Execution Accuracy (EA). The performance of the StreamLink-SQL-Generator (SSQLG) is higher than that of the baselines; the best one is SSQLG3.1-8B, which we fine-tuned on Llama-3.1-8B. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we hope to use domain knowledge to gain a StreamLink-dedicated model. The data in the table shows that our fine-tuned model exceeds the baseline by over 10% in both execution accuracy and exact match, achieving the effect of transferring a general language model to a specialized task. This enables natural language interaction for StreamLink users with different backgrounds. For instance, our lawyer collaborators can perform specific patent analysis by saying "tell me the top 10 most frequently appeared CPC by the assignee of Intel after 2009" instead of manually writing the complex SQL command mentioned before. These results highlight the effectiveness of our approach in addressing the challenges of SQL generation tasks, especially in complex and specialized domains with varying database schemas. By outperforming existing methods on the Spider dataset, our method showcases its potential to significantly improve the efficiency and accuracy of SQL generation processes. This, in turn, can facilitate more effective data engineering and analysis workflows. # 4.2 Malicious SQL Interception In this experiment, we focused on evaluating the effectiveness of our SQL syntax checker and security checker based on Llama2. We used the SQL injection dataset 3 from Kaggle, which includes 30,595 SQL statements. Within this dataset, 19,258 were normal SQL statements, while 11,337 were malicious.
We evaluated our LLM-based syntax and security checkers across different model sizes and model types. This dataset is representative of different SQL injections that occur in real-world scenarios, making it a solid testing ground for our tools. Our evaluation focused on zero-shot conditions to simulate the checker's performance in situations where training on the specific dataset is not feasible. This is common in organizations that need to adapt quickly to emerging threats without retraining models. We use recall and precision as metrics: $$ \begin{aligned} Recall &= \frac{TP}{TP+FN} \\ Precision &= \frac{TP}{TP+FP} \\ Escape &= \frac{FN}{TP+FN} \\ Misintercept &= \frac{FP}{TN+FP} \end{aligned} $$ where:
• TP (True Positive) – positive in the label, and predicted positive.
• FP (False Positive) – negative in the label, but predicted positive.
• FN (False Negative) – positive in the label, but predicted negative.
• TN (True Negative) – negative in the label, and predicted negative.
After conducting multiple groups of random tests, we evaluate the effect of the model in the following table: Table 2: Test results of LLMs of different sizes on the malicious SQL dataset. We implement four types of SQL checker based on Llama-2, Llama-3, and Llama-3.1, and show the test results of the StreamLink-SQL-Checker (SSQLC in the table). The data in Table 2 reflects the challenges posed by the Llama2 architecture, which, despite being effective, shows limitations in handling SQL interception compared to the more advanced Llama3 and Llama3.1 models. Specifically, the SSQLC2 series, based on Llama2, exhibits lower performance across most metrics. For instance, SSQLC2-70B achieves a recall of 97.05%, which is impressive but still falls short of the results obtained with Llama3 and Llama3.1-based models.
The precision of the SSQLC2 series also lags behind, highlighting that the older architecture and potentially outdated knowledge embedded in Llama2 lead to a higher rate of false positives, indicating less reliable performance in real-world SQL injection detection. The results for the Llama3.1-based models suggest that the training data and knowledge incorporated into this version may not have been as well-optimized for SQL interception as those in Llama3. The SSQLC3.1-8B model, for example, shows a noticeable drop in precision (71.7%) compared to SSQLC3-8B (79.31%), alongside a higher misintercept rate (21.23% vs. 15.07%). Although the SSQLC3.1-70B model does recover some ground, achieving a precision of 90.52%, its performance inconsistencies relative to Llama3 indicate that Llama3.1 may not yet offer the same level of robustness for SQL attack detection. Considering the balance between speed, accuracy, escape rate, and misintercept rate, the SSQLC3-8B model emerges as the most suitable choice for the StreamLink SQL Checker. It offers a strong recall rate of 98.09% with a manageable precision of 79.31%, all while maintaining a reasonable processing speed of 4 SQL statements per second. This model provides a well-rounded performance that meets the demands of real-time SQL injection detection while avoiding the significant speed drawbacks of the larger 70B models. SSQLC3-8B's combination of efficiency and effectiveness makes it the optimal solution for deployment in environments where both accuracy and speed are crucial. Figure 4: Malicious SQL interception analysis of our LLM-based method. Figure 4 shows the test results obtained on sample sets of different sizes. When the sample size is less than 5000, the model's performance exhibits some fluctuations, which may be due to the uneven distribution of positive and negative samples in small samples.
However, as the sample size increases from 5000 to 30000, the distribution of positive and negative labels gradually approaches a normal distribution, and the model demonstrates excellent stability. The results of the experiment were highly encouraging: Figure 5 indicates that our interceptors provide robust protection against malicious SQL commands. By effectively identifying and blocking malicious actions, our system ensures the stable operation of the server, safeguarding against potential disruptions and data breaches. This demonstrates the critical role of our SQL syntax checker and security checker in fortifying the system's defenses against malicious attacks and ensuring the reliability and security of data processing operations. Figure 5: ROC curves of our SSQLC methods, with the AUC of each method
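The four metrics defined in Section 4.2 can be computed directly from confusion-matrix counts; the counts below are illustrative, not the paper's results:

```python
def checker_metrics(tp: int, fp: int, fn: int, tn: int):
    """Recall, precision, escape and misintercept rates as defined in Sec. 4.2.
    Positive = malicious SQL; escape counts missed attacks, misintercept
    counts benign statements that were wrongly blocked."""
    return {
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "escape": fn / (tp + fn),
        "misintercept": fp / (tn + fp),
    }

# Illustrative counts only (not the paper's measurements):
m = checker_metrics(tp=980, fp=20, fn=20, tn=480)
print(m)  # recall 0.98, precision 0.98, escape 0.02, misintercept 0.04
```

Note that escape is simply 1 - recall, so a checker tuned for high recall automatically keeps the escape rate low, while misintercept tracks the false-alarm burden on benign queries.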
Large Language Models (LLMs) have shown remarkable proficiency in natural language understanding (NLU), opening doors for innovative applications. We introduce StreamLink, an LLM-driven distributed data system designed to improve the efficiency and accessibility of data engineering tasks. We build StreamLink on top of distributed frameworks such as Apache Spark and Hadoop to handle large data at scale. One of the important design philosophies of StreamLink is to respect user data privacy by utilizing local fine-tuned LLMs instead of a public AI service like ChatGPT. With help from domain-adapted LLMs, we can improve our system's understanding of natural language queries from users in various scenarios and simplify the procedure of generating database queries like the Structured Query Language (SQL) for information processing. We also incorporate LLM-based syntax and security checkers to guarantee the reliability and safety of each generated query. StreamLink illustrates the potential of merging generative LLMs with distributed data processing for comprehensive and user-centric data engineering. With this architecture, we allow users to interact with complex database systems at different scales in a user-friendly and security-ensured manner, where SQL generation exceeds baseline methods by over 10\% in execution accuracy, and users can find the items they care about among hundreds of millions of entries within a few seconds using natural language.
[ "cs.DB", "cs.AI" ]
# I. INTRODUCTION Code-switching, switching between languages within the same conversation, is a common and natural way of speaking in many multilingual communities. This is especially true in Southeast Asia, where people often mix their native language with English in everyday conversations [1]. However, this kind of speech remains a major challenge for Automatic Speech Recognition (ASR) systems, and even powerful models like Whisper [2] perform poorly on mixed-language input. One key reason is the imbalance in training data: about two-thirds of the data is in English [2], while the remaining 99+ languages [2], [3] have much less coverage. For instance, Malay has fewer than 100 hours of training data [2], and code-switched speech is even more limited. Because of this gap, models struggle to learn and accurately recognize speech that mixes multiple languages. Multilingual environments like Malaysia and Singapore create fluid code-switching patterns that current ASR systems struggle to handle. This challenge is further worsened by limited code-switched labeled data for regional languages. Conventional fine-tuning approaches [4] often face language ambiguity and phoneme confusion [5] due to insufficient domain coverage, lack of diversity, or bias toward dominant languages, leading to misrecognitions [6] or hallucinated outputs [7]. Recent work by Tuan et al. [8] addressed this issue by generating synthetic code-switched speech using a phrase mixing method, enabling effective fine-tuning of multilingual ASR models. While their approach demonstrated strong performance gains, it required costly and computationally intensive speech generation pipelines. We observed that large-scale pretrained ASR models like Whisper demonstrate strong acoustic capabilities, thanks to training on millions of hours of speech data [2], [9]. However, they still struggle with code-switching transcription.
A key reason is their reliance on paired speech-text data, which limits language understanding, especially for underrepresented and mixed-language inputs. This raises a central question: Can abundant textual data help compensate for the lack of large-scale speech-text resources and improve code-switching performance in pretrained ASR models? To address this gap, we propose AsyncSwitch, a novel asynchronous adaptation framework explicitly designed to improve ASR performance in code-switching scenarios, while also benefiting monolingual low-resource settings. To overcome the decoder's limited understanding of code-switched language, we introduce a three-stage adaptation process: (1) training the decoder's self-attention and feed-forward layers on target code-switched text, (2) aligning the decoder and encoder via cross-attention using a small amount of speech-text data, and (3) fully fine-tuning the entire model. This work is inspired by recent advances in the Large Speech Language Model (SLM) paradigm [10], which highlight the potential of large-scale textual pretraining followed by audio adaptation for multilingual and low-resource ASR tasks. Our contributions are as follows.
• We propose AsyncSwitch, a novel three-stage asynchronous ASR adaptation framework that leverages abundant textual data and limited bilingual and code-switched speech for improved code-switching performance. We achieve significant WER reductions on Bahasa Malay and Singlish datasets: 9.02% for Malay-English code-switching [8], 17.35% for monolingual Malay, and 14.5% for Singlish.
• Our method outperforms commercial ASR systems (I2R A*STAR [11] and Azure [12]) by 7.7% and 11.9% respectively. Fig. 1.
Overview of AsyncSwitch: a three-stage asynchronous adaptation framework using large-scale text and speech-text data for code-switched ASR. Our method prevents catastrophic forgetting and improves performance on the Open ASR Leaderboard's diverse English scenarios [13]. By prioritizing the challenges of code-switching in low-resource multilingual environments, and doing so without heavily depending on synthetic speech, this work contributes to more inclusive, efficient, and adaptable ASR systems. The remainder of this paper is organized as follows: Section II reviews related work on text-based adaptation and code-switching ASR. Section III details the proposed three-stage method. Section IV outlines the experimental settings, including datasets and training configurations. Section V presents the results and analysis. Section VI discusses our limitations, and Section VII concludes with a summary and future directions.

# II. RELATED WORK

# A. Text-Based and Internal LM Adaptation

Recent efforts have explored leveraging unpaired text data to enhance ASR performance, especially in low-resource settings. USTR-CT [14] proposed a temporary text encoder to adapt Conformer-Transducer models using text-only data, achieving strong gains without increasing inference complexity. Meanwhile, Radford et al. [2] demonstrated Whisper's robustness across multilingual and multitask settings, though primarily relying on paired data. Internal language model (LM) adaptation has also gained attention. ILME-based methods [15] subtract estimated internal LM scores during decoding to better integrate external LMs, while AdaBERT-CTC [16] fine-tunes a BERT-style encoder on domain-specific text. However, these approaches typically focus on inference-time integration or domain adaptation, and often neglect multilingual or code-switched challenges. In contrast, our work introduces a three-stage pipeline that begins with internal LM adaptation on unpaired text, including code-switched data, followed by cross-attention alignment and end-to-end fine-tuning. This design enables systematic exploitation of large-scale text corpora and addresses both data scarcity and code-switching robustness in a unified manner.

# B. Code-Switching ASR

Code-switching remains a persistent challenge in ASR, requiring systems to handle dynamic language transitions. Prior studies addressed this using unified modeling [17], syntactic priors [18], or synthetic augmentation [19]. However, many of these rely on annotated code-switched corpora, which are rare in under-resourced settings. Tuan et al. [8] proposed a scalable alternative by generating 3,000 hours of synthetic phrase-mixed CS speech across Malay-English, Mandarin-Malay, and Tamil-English. Their method, based on translation, alignment, and audio splicing, achieved notable improvements when fine-tuning large ASR models like Whisper, SeamlessM4T, and MMS. They also introduced benchmark test sets for CS, Singlish, and monolingual ASR, which we adopt in our evaluation. We build on Tuan et al.'s [8] insight that synthetic speech can benefit model training, and we directly use a portion (20%) of their synthetic dataset in our speech-text tuning setup. While our approach includes audio splicing as part of the data preparation process, it is not the primary focus. Instead, our method emphasizes adapting Whisper using predominantly large-scale text supervision.

# C. Low-Resource ASR

Low-resource ASR research often centers on transfer learning [19], self-supervised learning [20], or synthetic speech from TTS [21]. While these approaches reduce dependence on labeled data, they often require either large unlabelled audio corpora or high-quality TTS systems, both limiting factors in many languages. Our approach addresses this by front-loading adaptation into the language modeling component using only text.
By priming the decoder with code-switched and monolingual text before aligning with speech, we achieve robust performance even in low-resource scenarios like Bahasa Malay and Singlish, without large-scale audio resources.

# III. PROPOSED METHOD: ASYNCSWITCH

This section details our methodology for adapting the pretrained Whisper ASR model [2] to improve performance on low-resource languages like Malay through a three-stage approach. Figure 1 illustrates the details of our method.

# A. Pretrained ASR Model: Whisper

We utilize the Whisper-Large-v3¹ model [2], trained on 5 million hours of diverse, weakly supervised audio data covering multiple languages and tasks. Despite its multilingual capabilities, adaptation is often needed for optimal performance on specific low-resource languages or domains not heavily represented in the initial training data.

# B. AsyncSwitch: An Asynchronous Text-Speech Adaptation Framework

We adapt a pretrained Whisper encoder–decoder model $\theta = \{\theta_E, \theta_D\}$ to a low-resource target language via three successive stages. Let $x \in \mathbb{R}^{L \times d}$ denote the audio encoder features and $y = (y_1, \dots, y_T)$ the target token sequence, and decompose the decoder parameters as

$$ \theta_D = \{\theta_{\mathrm{SA}}, \theta_{\mathrm{CA}}, \theta_{\mathrm{FF}}, \theta_{\mathrm{out}}\}, $$

where $\theta_{\mathrm{SA}}$, $\theta_{\mathrm{CA}}$, and $\theta_{\mathrm{FF}}$ denote the self-attention, cross-attention, and feed-forward blocks in each layer, and $\theta_{\mathrm{out}}$ is the final projection to the vocabulary.

1) Stage 1: Decoder Internal LM Adaptation: We zero out the encoder output $(x = 0)$ so that the decoder functions purely as a conditional language model.
We update only $\theta_{\mathrm{SA}}, \theta_{\mathrm{FF}}, \theta_{\mathrm{out}}$ (and keep $\theta_{\mathrm{CA}}$ frozen) to learn domain text patterns via next-token cross-entropy:

$$ \mathcal{L}_1 = - \sum_{t=1}^{T} \log p_{\theta_{\mathrm{SA}}, \theta_{\mathrm{FF}}, \theta_{\mathrm{out}}} \left( y_t \mid y_{<t}, x = 0 \right). $$

This stage leverages large unlabeled text corpora to adapt the internal LM without disturbing audio–text alignment.

2) Stage 2: Speech–Text Alignment Fine-tuning: We reactivate the acoustic encoder $\theta_E$ (but keep it frozen) and unfreeze only the decoder's cross-attention $\theta_{\mathrm{CA}}$, holding $\{\theta_E, \theta_{\mathrm{SA}}, \theta_{\mathrm{FF}}, \theta_{\mathrm{out}}\}$ fixed. Using the paired dataset $(x, y)$, we optimize

$$ \mathcal{L}_2 = - \sum_{t=1}^{T} \log p_{\theta_{\mathrm{CA}}} \left( y_t \mid y_{<t}, x \right), $$

thereby strengthening the model's ability to align encoder representations to the newly adapted decoder LM.

3) Stage 3: Full End-to-End Fine-tuning: Finally, we unfreeze all parameters $\{\theta_E, \theta_{\mathrm{SA}}, \theta_{\mathrm{CA}}, \theta_{\mathrm{FF}}, \theta_{\mathrm{out}}\}$ and fine-tune end-to-end on the same paired data:

$$ \mathcal{L}_3 = - \sum_{t=1}^{T} \log p_{\theta} \left( y_t \mid y_{<t}, x \right). $$

This global optimization refines both acoustic and linguistic components to the target domain. The progression from text-only LM adaptation (Stage 1) through targeted alignment (Stage 2) to full fine-tuning (Stage 3) provides a balanced trade-off between data efficiency and modeling flexibility (see Figure 1).
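As a concrete illustration, the stage-wise freezing schedule can be sketched as follows. This is a minimal sketch, not the paper's code: the parameter-name convention is an assumption modeled on Whisper-style layer naming, and a real implementation would toggle `requires_grad` on actual tensors rather than filter name strings.

```python
# Which parameter groups are trainable in each AsyncSwitch stage.
# Naming convention (our assumption, Whisper-style):
#   encoder.*          -> theta_E
#   decoder.*.self_attn -> theta_SA,  decoder.*.encoder_attn -> theta_CA
#   decoder.*.fc*      -> theta_FF,  proj_out -> theta_out
CROSS_ATTN = "encoder_attn"  # decoder cross-attention blocks (theta_CA)

def trainable(stage: int, name: str) -> bool:
    """Return True if parameter `name` is updated in the given stage."""
    if stage == 1:  # decoder-as-LM: self-attention, feed-forward, output head
        return (name.startswith("decoder.") and CROSS_ATTN not in name) \
            or name.startswith("proj_out")
    if stage == 2:  # alignment: only the decoder cross-attention
        return CROSS_ATTN in name
    if stage == 3:  # full end-to-end fine-tuning
        return True
    raise ValueError(f"unknown stage {stage}")

params = [
    "encoder.layers.0.self_attn.q_proj.weight",    # theta_E
    "decoder.layers.0.self_attn.q_proj.weight",    # theta_SA
    "decoder.layers.0.encoder_attn.q_proj.weight", # theta_CA
    "decoder.layers.0.fc1.weight",                 # theta_FF
    "proj_out.weight",                             # theta_out
]
for stage in (1, 2, 3):
    print(stage, [p for p in params if trainable(stage, p)])
```

Note that in Stage 1 the encoder is not merely frozen: its output is replaced by zeros, so the cross-attention receives no acoustic signal at all.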
Without the first two stages, the model cannot adapt to text-only data, which is especially valuable in low-resource and domain-adaptation settings.

# IV. EXPERIMENTAL SETTINGS

# A. Dataset

1) Textual Data (Stage 1): We compiled approximately 38.3 million (38M) text utterances from the following sources:

• Sealion Pile BM corpus [26]: 29.2 million Malay utterances.
• IMDA NSC text subsets {1, 2, 3, 5, 6} [23]: 7.1 million Singlish utterances, covering prompted readings, debates, finance, phone calls, and more.
• LibriSpeech texts [27]: 280k US English text utterances.
• Filtered Malay YouTube transcripts²: 1.7 million utterances.

Our processing included: (1) filtering out speech not in the target language using ECAPA-TDNN [28]; (2) filtering text using FastText language detection [29]; (3) removing utterances with repeated 2- or 3-grams occurring more than 4 times [30]; and (4) excluding utterances with fewer than 32 tokens. This resulted in 14,000 hours of Malay YouTube speech and 1.7 million text utterances.

TABLE I EVALUATION RESULTS ON SINGLISH, MALAY, AND CODE-SWITCHED DATASETS. Italic values indicate group-wise averages.

2) Speech Data (Baseline Training & Stages 2/3): We used a combination of 1k hours of English, comprising the Singlish IMDA NSC subsets {1, 2, 5, 6} (180 hours each) [23] and 250 hours of US English LibriSpeech [27]; 1k hours of Malay [8]; and 1k hours of phrase-mixed speech [8] (all identical to [8]); plus 2k newly added hours sampled from the Filtered Malay YouTube data.

# B. Training Settings

1) Comparison: Baseline. We compared our proposed model against the original Whisper-Large-V3 [2] (WHISPER-ORIG.) and a version fine-tuned on a 5k-hour dataset (WHISPER-5K). We also evaluated commercial-grade ASR systems, including Azure Speech-to-Text [12] and the I²R A*STAR ASR API [11], both optimized for Southeast Asian languages. With Language Model.
For WHISPER-5K, we applied a 5-gram language model trained on 38M tokens with beam size 2, following [5]. We tuned hyperparameters $\alpha \in [0, 0.1]$ and $\beta \in [-0.2, 0.2]$ on 500 samples from the 5k-hour dataset (excluding the 1k hours of phrase-mixed data) across 4 trials. SpeechLLM. We conducted comparative evaluations against established SpeechLLMs [10], with particular attention to Southeast Asian variants, including MERALION-AUDIOLLM-WHISPER-SEA-LION [25]. This model is tailored for Southeast Asian languages through training on SealionPile [26] (our Malay text comprises a subset of this dataset, representing approximately 0.29% of total tokens) and large-scale speech-instruction tuning data that includes code-switched Singapore languages.

2) Three-Stage Adaptation:

• Stage 1 - Textual Adaptation: The decoder is trained on 38M text utterances with a peak learning rate of 2e-5, 10% linear warm-up, cosine decay, and a batch size of 128.
• Stage 2 - Speech-Text Alignment: Cross-attention layers are trained on 5k hours of paired speech-text data for 1 epoch.
• Stage 3 - Full Fine-tuning: The entire model is fine-tuned on the same 5k-hour speech-text dataset for 2 epochs (45k updates), resulting in the final model, WHISPER-38M-5K.

All speech-text experiments (baseline, Stage 2, and Stage 3) used a peak learning rate of 2e-5, 20% linear warm-up, cosine decay, and a batch size of 32. Default SpecAugment settings [31] were applied, along with noise augmentation using the Noise subset of Musan [32] at {10, 40} dB. During training, we used language prompting [2] by prepending either the <|ms|> (Malay) or <|en|> (English) token to each utterance, based on the dominant language (determined by word count).
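The language-prompting rule above reduces to a majority vote over word-level language labels. A minimal sketch (the helper is hypothetical and assumes per-word labels are already available, e.g. from a lexicon or word-level language identification step, which is not detailed here):

```python
def prompt_token(word_langs):
    """Pick the Whisper language prompt from per-word labels ('ms' / 'en')."""
    ms = sum(1 for lang in word_langs if lang == "ms")
    en = sum(1 for lang in word_langs if lang == "en")
    # Ties default to Malay here; the tie-breaking rule is our assumption.
    return "<|ms|>" if ms >= en else "<|en|>"

print(prompt_token(["ms", "ms", "en"]))  # Malay-dominant -> <|ms|>
print(prompt_token(["en", "en", "ms"]))  # English-dominant -> <|en|>
```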
After fine-tuning, all models were merged with the original WHISPER-LARGE-V3 using a merging ratio of 0.4 (with the original model contributing 0.6). Detailed merging ratios are provided in Section V. All experiments were conducted on four A40 GPUs (44 GB each), using DeepSpeed ZeRO-3 optimization.

# C. Evaluation Settings

We evaluate the fine-tuned model on code-switching Malay-English (BM-EN), Singlish (EN), and Malay (BM) scenarios, under noisy and conversational conditions, with a focus on low-resource domains:

• CS BM-EN: ChatGPT-generated conversations, the Singaporean Reading test set [8], and IMDA4 BM-EN [23].
• Singlish: Noisy historical interviews (past-century) (NLB) [22] and IMDA3 Conversation [23].
• Malay: Conversational and noisy sets from [8], [24].

We used the combined <|ms|><|en|> prompt [33] across all test sets to support code-switching output. The model was also evaluated on the OpenASR Leaderboard [13] (English) to assess catastrophic forgetting. Additionally, we used the Code-Mixing Index (CMI) [34] to quantify code-switching in text corpora; higher CMI values indicate more code-switching.

# V. RESULTS

# A. Main Results

Baseline Comparisons. WHISPER-38M-5K demonstrates substantial performance gains, exceeding the original Whisper model by 23.35% and models trained on equivalent labeled speech data (WHISPER-5K) by 14.05%. The largest improvements are observed on the Singlish IMDA3 and Malay Noisy datasets. While external language models trained on the same text data provide a marginal 0.23% improvement, they consistently underperform our proposed method. Commercial Systems. Our method achieves a 7.7% relative improvement over the I²R A*STAR ASR [11] across all test sets.
While Azure [12] performs better on Singlish, it underperforms significantly on Malay and code-switched Malay-English. SpeechLLM. Our method outperforms MERALION-AUDIOLLM-WHISPER-SEA-LION [25] by 42.34% overall while being 6× smaller in size. Large-Scale Code-Switching Models. WHISPER-TURBO-V3 EN-ZH-BM-TA performs best on Singlish IMDA3, Malay Convo, and Code-Switched IMDA4, but fails on code-switched reading (16.75 vs. 5.10 for our model). Overall, our method achieves 24.49%, 37.67%, and 14.22% improvements over WHISPER-TURBO-V3, MMS-1B-ALL, and SEAMLESSM4T-V2, respectively. Catastrophic Forgetting. Our method improves English speech recognition by 5.37% compared to the original model, demonstrating successful knowledge retention without degradation. These results show that AsyncSwitch delivers strong performance on code-switching tasks while maintaining the same model structure. The method shows consistent improvements across baselines, with the largest gains on code-switched BM-EN datasets, providing an effective approach for code-switched speech recognition.

# B. Ablation Study

1) Results at different training stages: Table II shows that our three-stage AsyncSwitch training provides incremental improvements. Stages 1-2, using domain text and minimal supervised speech data, substantially improve Singlish and surpass WHISPER-5K, but show limited improvement for low-resource Malay and minimal code-switching gains (narrowing the gap with the original model to 1.2% relative). Stage 3, with full supervision, achieves optimal performance across scenarios, confirming that early domain-text incorporation provides a foundation while comprehensive fine-tuning is essential for all domains.

2) Scale of textual data: We compared smaller-but-focused code-switched text fine-tuning with the 1.7M MalayYoutube texts (WHISPER-1.7M-5K) against our 38M text approach.
TABLE II EVALUATION RESULTS AT DIFFERENT TRAINING STAGES

Table III shows the smaller text model performs better on Malay and code-switching (3.8% and 3.72% relative, respectively) but degrades Singlish performance (11.58% relative) due to its narrow focus. Overall, WHISPER-38M-5K achieves better performance. While the 1.7M MalayYoutube text has high CMI values (Table IV), the significantly larger and more diverse 38M corpus better handles real-world code-switching scenarios.

TABLE III COMPARISON OF RESULTS BY TEXTUAL DATA SIZE

TABLE IV CODE-MIXED INDEX (CMI) STATISTICS ACROSS DATASETS

3) Optimal Merging Ratio: Table V presents merging ratios using linear interpolation [35] between the original Whisper model and our domain-specific fine-tuned model. We choose a merging ratio of 0.4 for the best code-switching performance (17.04) while maintaining acceptable Singlish and Malay results. Although 0.8 achieves the best overall average (16.77), it degrades performance on all other scenarios (Singlish, CS, and the OpenASR Leaderboard with diverse English). Higher fine-tuned ratios favor Malay without necessarily improving code-switching.

# VI. LIMITATION

This study was conducted within the Malaysian and Singaporean linguistic contexts, where English serves as a prominent language alongside Malay, providing abundant text with high Code-Mixing Index (CMI) values. This unique bilingual environment may limit the generalizability of our findings to regions with different language dynamics or less extensive code-switched text resources. Future applications of AsyncSwitch must carefully evaluate their target text characteristics to ensure domain compatibility with the proposed approach.

TABLE V EVALUATION RESULTS FOR DIFFERENT MERGING RATIOS
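The linear-interpolation merging used above reduces to a parameter-wise weighted average of the two checkpoints. A minimal sketch, with plain lists standing in for weight tensors:

```python
def merge_models(original, finetuned, ratio=0.4):
    """Linear interpolation of two checkpoints, parameter by parameter.

    `ratio` is the fine-tuned model's contribution (0.4 in our final setup,
    i.e. the original model contributes 0.6).
    """
    return {
        name: [ratio * f + (1.0 - ratio) * o
               for o, f in zip(original[name], finetuned[name])]
        for name in original
    }

merged = merge_models({"w": [1.0, 0.0]}, {"w": [2.0, 1.0]}, ratio=0.4)
print(merged)  # each entry is 0.4 * finetuned + 0.6 * original
```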
Developing code-switched ASR systems is challenging due to language ambiguity and limited exposure to multilingual, code-switched data, while collecting such speech is costly. Prior work generates synthetic audio from text, but these methods are computationally intensive and hard to scale. We introduce AsyncSwitch, a novel asynchronous adaptation framework that leverages large-scale, text-rich web data to pre-expose ASR models to diverse code-switched domains before fine-tuning on paired speech-text corpora. Our three-stage process (1) trains decoder self-attention and feedforward layers on code-switched text, (2) aligns decoder and encoder via cross-attention using limited speech-text data, and (3) fully fine-tunes the entire model. Experiments with Whisper on Malay-English code-switching demonstrate a 9.02% relative WER reduction, while improving monolingual performance in Singlish, Malay, and other English variants.
# 1 Introduction

Large Language Models (LLMs) are rapidly evolving and demonstrating increasing capabilities in coding, fundamentally transforming the software development ecosystem. Recent LLMs such as ChatGPT [55] and Claude [4] exhibit remarkable code generation performance, producing high-quality outputs in response to concise natural language prompts. The emergence of reasoning-capable models like DeepSeek-R1 [26] has further accelerated LLM adoption among developers. According to Stack Overflow's industry report [72], 82.1% of the 65,000 surveyed developers report using ChatGPT [55] during their development workflow. Capitalizing on the strong coding abilities of LLMs, assistant tools such as GitHub Copilot [20] and Cursor [12] have been developed to enhance productivity by helping developers write, modify, and debug code directly within integrated development environments (IDEs). Furthermore, state-of-the-art LLM-based agentic systems such as OpenHands [83] achieve up to a 65.8% resolved rate on SWE-Bench [34], demonstrating the effectiveness of LLMs in addressing real-world software engineering tasks. These trends indicate that LLMs and their associated tools are becoming integral to modern software development workflows. However, the rapid spread of AI-generated code has raised concerns about new vulnerabilities and misuse. Systematic benchmarks show that LLM outputs often ship with logic errors and latent security flaws [45, 21, 78, 96, 61, 36]. Comparative evaluations reveal that AI suggestions can embed at least as many vulnerabilities as human code [40, 82, 76, 80, 5, 77]. Furthermore, LLMs are susceptible to manipulation [36], including poisoning attacks [91, 11, 54] and prompt injections [49, 95], which can induce the generation of targeted vulnerable code.
At the same time, educators warn of an impending wave of AI-driven plagiarism that evades conventional detectors [31, 74, 38, 13, 85, 69, 39], while legal scholars highlight intellectual-property [94, 43, 86, 73] and licence-compliance [88] risks. Robust AI-code detection is therefore critical for secure software supply chains, responsible academic practice, and licence compliance. To address the challenges of AI-generated code identification, various detection methods have been proposed, leveraging statistical features of code [32], the capabilities of language models [90, 70, 92, 93, 89, 53, 52], and code embedding models [75, 46]. However, evaluations based on existing benchmarks and datasets [75, 59, 14, 62, 58, 87] often fall short in three key aspects. First, they typically cover only a narrow set of programming languages—primarily C++ and Python—while neglecting other widely used languages such as Go and HTML, resulting in limited language diversity compared to real-world software development. Second, most benchmarks rely on open-source LLMs with relatively small model sizes and lower generation quality, or include only a small number of commercial models, leaving a gap between benchmark conditions and real-world usage. Third, most existing datasets lack practical adversarial scenarios, such as paraphrasing [41, 68], which are common in practice and essential for evaluating the robustness of detection systems. Thus, a rigorous benchmark that captures real-world language diversity, modern commercial models, and adversarial scenarios is indispensable for driving meaningful progress in this emerging field. We introduce CodeMirage, a comprehensive benchmark for evaluating AI-generated code detectors under realistic and adversarial conditions, addressing the three major limitations identified in prior benchmark work.
CodeMirage is constructed from real-world human-written code and enriched with both AI-generated and paraphrased variants produced by a diverse set of state-of-the-art reasoning and non-reasoning LLMs from six major commercial service providers. The paraphrasing techniques are domain-specific and tailored to source code, enabling rigorous evaluation of detector generalization and robustness. Our key contributions are as follows:

• We present a large-scale, multilingual benchmark for AI-generated code detection, spanning 10 widely used programming languages. The dataset comprises approximately 210,000 samples, including 10,000 human-written code files sourced from GitHub [9], as well as AI-generated and paraphrased counterparts produced by 10 production-level LLMs.
• We design four progressively challenging evaluation configurations with three complementary performance metrics to facilitate rigorous and realistic assessment of detector effectiveness under various real-world scenarios.
• We conduct a comprehensive evaluation of 10 representative detectors across four methodological paradigms using CodeMirage, providing insights into their accuracy, robustness, and generalization across programming languages, models, and adversarial settings.

# 2 Background and Related Work

# 2.1 Taxonomy of AI-Generated Code Detection Methods

Detecting AI-generated content has been a long-standing challenge in both the natural language [79, 22, 2, 23] and computer vision domains [67, 24, 97, 15, 98], predating even the emergence of large language models (LLMs) [81, 1] and diffusion-based generative models [71, 29]. In contrast, detecting AI-generated source code is a relatively new research direction, emerging primarily in the last two years due to the rapid advancements in the coding capabilities of LLMs [55, 4].
Inspired by traditional statistical methods used for AI-generated text detection [64, 33], early approaches for code focus on analyzing surface-level statistical features. For example, Whodunit [32] extracts stylometric and complexity-based features from both raw source code and its abstract syntax tree (AST). However, these methods often struggle to distinguish code generated by modern, high-performing LLMs [55, 4, 26, 37], which can mimic human coding styles more closely.

Table 1: Comparison between existing AI-generated code benchmarks and our CodeMirage. Gran. = granularity (Func: function/snippet, Doc: whole file). IID = in-distribution; OOD = out-of-distribution. Baseline categories: Z (zero-shot detector), E (embedding-based detector), F (fine-tuning-based detector), P (pre-trained LLM + downstream detector). Columns "Open LLMs" and "Comm. LLMs" show whether the dataset includes any open-source or commercial generators.

To improve detection effectiveness, recent research has explored more advanced techniques—often leveraging large language models (LLMs) or code embedding models—which can be broadly categorized into the following four methodological paradigms: Zero-shot Detector. This category of detectors assigns detection confidence scores based on token-level statistics derived from pretrained LLMs, without requiring task-specific fine-tuning. For example, LogRank [22] and Entropy [42] rely on average next-token log-rank and entropy, respectively, to quantify AI-generated token distributions. DetectGPT [51] evaluates the divergence between original and perturbed text using a scoring model, a strategy extended in code-specific settings by DetectCodeGPT [70], GPT4Code [92], and AIGC Detector [90], each employing tailored perturbation schemes for code.
CR [93] instead measures divergence between original and LLM-rewritten code samples. Binoculars [28] introduces a model-comparison approach, using cross-perplexity between instruction-tuned and non-instruction-tuned LLMs as a detection signal. Embedding-based Detector. Embedding-based detectors [40] utilize pretrained code embedding models, such as CodeT5+ Embedding [84] and CodeXEmbed [46], to extract high-level semantic representations from either raw source code or abstract syntax trees (ASTs). These embeddings are then fed into lightweight classifiers, e.g., an MLP [66], to perform binary classification between human-written and AI-generated code. Fine-tuning-based Detector. This class of detectors fine-tunes transformer-based models to directly capture discriminative patterns between human-written and AI-generated code. For example, GPTSniffer [52, 53] fine-tunes CodeBERT [19] on labeled code samples to perform binary classification. Other approaches [75] explore different backbone architectures, such as CodeT5+ [84] and RoBERTa [47], to enhance detection performance across varied programming languages and generative models. Pretrained LLM with Downstream Detector. Unlike zero-shot methods, detectors in this category extract rich semantic representations or statistical signals from pretrained LLMs and train downstream classifiers on these features. For instance, MageCode [62] uses statistical features derived from the hidden state of the classification token in a pretrained CodeT5+ [84] to train a two-layer linear classifier. Some detectors originally developed for text, such as Raidar [48], can be extended to code by comparing metrics between original and LLM-rewritten samples, followed by an XGBoost [8] classifier. BiScope [27] applies a novel bi-directional cross-entropy analysis using pretrained LLMs and feeds the resulting features into a Random Forest [6] classifier.
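To make the zero-shot signals concrete, the average log-rank and entropy statistics can be sketched as follows. This is a minimal illustration: the toy probability lists stand in for an actual LLM's next-token distributions, which the real detectors obtain from a pretrained model.

```python
import math

def avg_log_rank(dists, tokens):
    """Mean log-rank of each observed token under the model's distributions."""
    total = 0.0
    for dist, tok in zip(dists, tokens):
        order = sorted(range(len(dist)), key=lambda i: -dist[i])
        total += math.log(order.index(tok) + 1)  # rank 1 -> log 1 = 0
    return total / len(tokens)

def avg_entropy(dists):
    """Mean Shannon entropy of the predicted next-token distributions."""
    return sum(-sum(p * math.log(p) for p in dist if p > 0)
               for dist in dists) / len(dists)

dists = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]
print(avg_log_rank(dists, [0, 0]))  # observed tokens are most likely -> 0.0
print(avg_entropy(dists))
```

Intuitively, text that an LLM finds unsurprising (low average log-rank, low entropy) is more likely to have been generated by a similar model.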
# 2.2 Existing AI-generated Code Datasets and Benchmarks

Prior studies [75, 59, 14, 62, 58, 87] have laid important groundwork for building benchmarks to evaluate AI-generated code detectors. As shown in Table 1, several benchmarks introduce valuable contributions: for instance, Suh et al. [75] propose a large-scale function-level dataset spanning three programming languages. Pan et al. [59] and CoDet-M4 [58] incorporate adversarial perturbations into AI-generated code to test robustness. AIGCodeSet [14] and MAGECODE [62] employ quality checks during code generation. LLMGCode [87] expands language coverage to eight programming languages. Collectively, these datasets serve as solid foundations for evaluating AI-generated code detectors.

Figure 1: Overview of the CodeMirage pipeline: human code pre-processing, AI code summarization and generation with rule-based inspection, paraphrasing, and benchmarking of baseline detectors across ten languages (C, C++, C#, Go, HTML, Java, JavaScript, PHP, Python, Ruby).

However, each of these benchmarks has notable limitations. Most cover only a small number of programming languages, rely on open-source or less capable LLMs, and none of them leverage the latest reasoning models [26, 57, 35]. Furthermore, baseline evaluations in these benchmarks do not comprehensively include all four major categories of detection methods, and only two out of the six existing benchmarks include adversarial testing, which is critical for assessing real-world robustness.
To address these gaps, our proposed benchmark, CodeMirage, includes: (1) code samples across 10 widely used programming languages; (2) outputs from 10 state-of-the-art production-level LLMs, including three reasoning models; (3) both out-of-distribution and adversarial evaluation settings; and (4) baselines covering all four methodological categories of AI-generated code detection.

# 3 CodeMirage Framework

# 3.1 Benchmark Construction

Human Code Pre-Processing. To construct a comprehensive benchmark of AI-generated and paraphrased code, we begin by sourcing high-quality human-written code samples from the CodeParrot Github-Code-Clean dataset [9], a curated subset of the original Github-Code dataset [10], as shown in Figure 1. This cleaned version filters out overly short snippets, auto-generated files, and samples with excessive alphanumeric characters. The dataset was collected and sanitized in May 2022, prior to the widespread deployment of code LLMs and AI coding agents, ensuring the selected samples are genuinely human-authored. Based on its statistics, we select the ten most commonly used programming languages—C, C++, C#, Go, HTML, Java, JavaScript, PHP, Python, and Ruby—and randomly extract 1,000 code snippets per language. Additional length-based filtering is applied during sampling to preserve code diversity while ensuring the code remains within a controlled length scale. Production-Level LLMs. In CodeMirage, we leverage ten production-level LLMs from six leading companies to generate code samples, covering the majority of LLMs commonly used for real-world coding tasks. Among these ten models, four are open-source and three are designed with reasoning capabilities.
Specifically, CodeMirage includes GPT-4o-mini [56], o3-mini [57], Claude-3.5-Haiku [3], Gemini-2.0-Flash [63], Gemini-2.0-Flash-Thinking-Experimental [35], Gemini-2.0-Pro-Experimental [37], DeepSeek-V3 [44], DeepSeek-R1 [26], Llama-3.3-70B [50], and Qwen-2.5-Coder-32B [30]. We access all ten LLMs via API-based services with default temperatures. For additional details on the LLM configurations and generation settings, please refer to Appendix A. AI Code Summarization. To generate high-quality AI-generated code samples while avoiding direct copying of human-written code, CodeMirage adopts a text-to-code generation strategy. As the first step, we produce a comprehensive yet concise summary for each human-written code sample. Since these samples are typically full documents—including library imports, class and structure definitions, and function implementations—we prompt the LLM to extract and summarize key elements such as the purpose, functionality, logic overview, and key features, along with the names of relevant libraries, functions, classes, structures, and variables. Optional contextual notes are also included to account for uncommon assumptions or dependencies in the source code. This summary serves as an intermediate representation of the original code, ensuring that the LLM does not access the original human-written implementation during the subsequent code generation step. Full prompts and summary examples are provided in Appendix B.

Figure 2: Distributions over human-written, AI-generated, and AI-paraphrased code of (a) lines of code, (b) character length, (c) AST depth, (d) CodeBLEU, (e) BLEU, (f) weighted BLEU, (g) syntactic AST match, and (h) semantic data-flow match.

AI Code Generation.
Given the summary of each human-written code sample, CodeMirage employs multiple production-level LLMs to generate corresponding AI-written code based on the provided description. To align the structural characteristics of the generated code with the original human-written version, we additionally supply the LLMs with metadata such as the line count and total character length. Due to the inherent uncertainty of LLMs, generated code may occasionally deviate from the desired format or content. To further ensure quality, we implement a rule-based inspector that verifies: (1) consistency with the original human-written code's line count and character length, and (2) adequate token-level divergence from the original, enforced by requiring a BLEU [60] score below 0.5 to avoid recitation. Regeneration is forced if any check fails, and samples are discarded after multiple failed attempts. Detailed prompts and generation examples are provided in Appendix C. AI Code Paraphrasing. Paraphrasing [41, 68] is a widely adopted strategy for evaluating the robustness of AI-generated text detectors under adversarial and real-world conditions. However, in the domain of AI-generated code detection, most existing benchmarks [75, 59, 14, 62, 58, 87] do not incorporate such adversarial testing. Although some text detection studies [48, 27] have included paraphrased code in their evaluations, they rely on generic prompts and a limited number of code samples, constraining both the effectiveness and generality of their paraphrasing evaluation on code. In CodeMirage, we introduce a systematic, domain-specific paraphrasing strategy for code, covering six transformation types: renaming, formatting adjustments, logic rewriting and replacement, expression variation, literal transformations, and redundancy insertion. Detailed rules, prompt designs, and representative examples are provided in Appendix D.
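The inspector's checks can be sketched as follows. This is a hedged illustration, not the benchmark's code: the length tolerance is an assumption (the paper does not state one), and a simple 4-gram-overlap proxy stands in for the actual BLEU [60] computation with its 0.5 threshold.

```python
def ngram_overlap(cand_tokens, ref_tokens, n=4):
    """Fraction of the candidate's n-grams also present in the reference
    (a crude stand-in for BLEU; the real inspector computes BLEU [60])."""
    cand = {tuple(cand_tokens[i:i + n]) for i in range(len(cand_tokens) - n + 1)}
    ref = {tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1)}
    return len(cand & ref) / max(len(cand), 1)

def inspect(human_code, ai_code, length_tol=0.5, max_overlap=0.5):
    """Pass only if lengths roughly match AND the code diverges enough."""
    if abs(ai_code.count("\n") - human_code.count("\n")) \
            > length_tol * max(human_code.count("\n"), 1):
        return False  # line count too far from the human original
    if abs(len(ai_code) - len(human_code)) > length_tol * max(len(human_code), 1):
        return False  # character length too far off
    # Reject near-copies of the human code (recitation).
    return ngram_overlap(ai_code.split(), human_code.split()) < max_overlap
```

A verbatim copy of the human code fails the divergence check, while a structurally similar but independently written sample passes.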
# 3.2 Benchmark Statistics

CodeMirage spans ten programming languages, each containing 1,000 human-written code samples and 10,000 AI-generated counterparts. For every language, we obtain 1,000 outputs from each of ten production-level LLMs, yielding a 1:10 mapping between every human sample and its LLM-generated variants. Within every 1,000-sample shard (human or AI), we allocate 700 examples for training and 300 for testing. We present four structural and semantic metrics of the dataset in Figure 2: lines of code (a), character length (b), AST depth (c), and CodeBLEU [65] score (d). The first three metrics reflect the overall structural characteristics of the code and show close resemblance between human-written and AI-generated samples. This similarity implies that naive statistical classifiers would struggle to detect AI-generated code using basic code features. Figure 2 (d) reports the CodeBLEU score, a composite metric calculated as:

$$
\mathrm{CodeBLEU} = \alpha \cdot \mathrm{BLEU} + \beta \cdot \mathrm{BLEU}_{\mathrm{weighted}} + \gamma \cdot \mathrm{Match}_{\mathrm{AST}} + \delta \cdot \mathrm{Match}_{\mathrm{DF}},
$$

where each component is equally weighted with $\alpha = \beta = \gamma = \delta = 0.25$ by default. The median CodeBLEU score for AI-generated code is approximately 0.3, consistent with prior observations in text-to-code generation [16, 17, 18]. Paraphrased code yields slightly lower scores due to deliberate perturbations in both code format and structure. To further analyze CodeMirage's code quality, we decompose the CodeBLEU score into its four subcomponents in Figure 2 (e)–(h). Both AI-generated and AI-paraphrased code show relatively low BLEU [60] and weighted BLEU [65] scores, indicating limited n-gram overlap with their human counterparts.
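For concreteness, the weighted combination above can be recomputed directly from its four subcomponents. The component values in the usage line are illustrative only (roughly matching Figure 2's pattern of low n-gram overlap but higher syntactic/semantic agreement), not measured numbers.

```python
def code_bleu(bleu: float, weighted_bleu: float,
              ast_match: float, dataflow_match: float,
              weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Composite CodeBLEU with the default equal weights alpha=beta=gamma=delta=0.25."""
    alpha, beta, gamma, delta = weights
    return (alpha * bleu + beta * weighted_bleu
            + gamma * ast_match + delta * dataflow_match)

# Illustrative (not measured) subcomponent values.
score = code_bleu(bleu=0.1, weighted_bleu=0.1, ast_match=0.6, dataflow_match=0.5)
```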
The syntactic AST match and semantic data-flow [25] match scores of AI code, however, exceed 0.5 on average, suggesting that despite token-level divergence, both AI-generated and AI-paraphrased code maintain a fair level of syntactic and semantic consistency with human-written code. More detailed benchmark statistics are presented in Appendix E.

# 3.3 Baseline Detectors

We select ten state-of-the-art detectors spanning four categories. Zero-shot detectors: LogRank [22], Entropy [22, 42], and Binoculars [28], which rely on token-rank or entropy-related features without training. Embedding-based detectors: following existing studies [75], we extract representations with the CodeXEmbed-2B model [46] from either raw source code or its abstract syntax tree (AST) and train a lightweight random forest [6] classifier. Fine-tuned detectors: we include GPTSniffer [53, 52], a variant built on the latest CodeT5+ backbone [84], and a RoBERTa detector [47], each fine-tuned on our training corpus. Pretrained-LLM with downstream detector: Raidar [48] and BiScope [27], which extract features via rewriting [48] and bi-directional cross entropy [27], respectively. More details of the baseline detectors are presented in Appendix F.

# 3.4 Evaluation Metrics

To thoroughly assess the performance of the baseline detectors in different scenarios, we employ three evaluation metrics in our experiments: the F1 score, TPR@FPR=10%, and TPR@FPR=1%. The F1 score balances precision and recall, providing an overall measure of detection accuracy without favoring AI-generated or human-written code samples. For each detector, we first identify the optimal decision threshold and then report its corresponding F1 score. The metric TPR@FPR=10% reports the true positive rate (TPR) when the false positive rate (FPR) is limited to 10%, representing scenarios that can tolerate a moderate number of false alarms.
Conversely, TPR@FPR=1% measures the TPR at an FPR of only 1%, which is essential for applications where even a small fraction of false positives is unacceptable.

# 3.5 Evaluation Configurations

In CodeMirage, we include four evaluation configurations to thoroughly assess baseline detectors under various real-world scenarios: the in-distribution configuration and three out-of-distribution configurations (paraphrase configuration, cross-model configuration, and cross-model paraphrase configuration). We omit a cross-language configuration because the programming language can be easily identified; thus, detectors can be trained separately for each language.

In-Distribution Configuration. This configuration evaluates the in-distribution stability of each detector across multiple LLMs and programming languages. For each language, we pair the human-written training set with the training samples produced by a single LLM, train the detector on this combined data, and determine the optimal decision threshold. We then test the detector on the human-written test set together with the test samples generated by the same LLM.

Paraphrase Configuration. This setting evaluates each detector's out-of-distribution performance when the AI-generated code is adversarially paraphrased. Specifically, we train the detector and select its optimal threshold in the same way as in the in-distribution configuration, but we test on paraphrased code produced by the same LLM that generated the original samples.

Cross-Model Configuration. This setting evaluates each detector's robustness against unseen LLMs. For each programming language, we train the detector and choose its optimal threshold on a training set consisting of human-written samples and AI-generated samples from a single LLM. We then test the detector on human test samples paired with AI-generated samples from all other LLMs. The detector's scores on these unseen-model test sets are averaged to yield the overall cross-model result.
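The cross-model protocol can be sketched as a simple loop. The `train_fn` and `test_fn` callables and the toy F1 values below are placeholders standing in for a real detector pipeline (training plus threshold selection, and per-LLM testing).

```python
def evaluate_cross_model(train_llm, all_llms, train_fn, test_fn):
    """Fit a detector on one generator's data, test on every other generator,
    and average the per-LLM scores over the unseen-model test sets."""
    detector = train_fn(train_llm)
    scores = [test_fn(detector, llm) for llm in all_llms if llm != train_llm]
    return sum(scores) / len(scores)

# Toy illustration with fixed, made-up per-LLM F1 values.
toy_f1 = {"gpt": 0.9, "claude": 0.7, "gemini": 0.8}
cross_model_f1 = evaluate_cross_model(
    "gpt", list(toy_f1), lambda llm: llm, lambda det, llm: toy_f1[llm])
```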
Figure 3: Comparison Between Evaluation Configurations and Detectors. The bar chart presents the average F1 scores of baseline detectors across all the programming languages and LLMs.

Cross-Model Paraphrase Configuration. This scenario mirrors real-world conditions in which code samples are both generated by unseen LLMs and subsequently paraphrased. We adopt the testing procedure of the cross-model configuration, but pair human test samples with paraphrased test samples produced by the other LLMs. The detector's average score over all such paraphrased, unseen-model test sets is reported as the cross-model paraphrase result.

# 4 Evaluation Results and Insights

We conduct an extensive evaluation using CodeMirage in various scenarios and summarize the observations into nine findings. We present representative processed results in the main text and include the full experimental results in Appendix H.

# 4.1 Comparison Between Evaluation Configurations and Detectors

We first evaluate the performance of the various detectors under the four distinct configurations described in subsection 3.5. The results are presented in Figure 3, where the x-axis lists the detectors and the y-axis represents the F1 score. Each bar corresponds to a specific evaluation configuration. Notably, to ensure a fair and unbiased comparison, each bar reflects the average F1 score obtained across ten programming languages and ten LLMs, with error bars indicating one standard deviation.

Finding 1: In-distribution testing consistently outperforms all out-of-distribution scenarios.

This is intuitive and reasonable given the shared distribution between training and test sets. Under out-of-distribution settings, cross-model testing yields a larger performance drop than paraphrasing in most cases, since paraphrasing leverages the same LLM and thus incurs a smaller distribution shift than code generation by a different LLM. However, some corner cases, e.g., LogRank and Binoculars, deviate from this trend.
As zero-shot methods, they are particularly sensitive to token-level features, and paraphrasing induces greater token variance than cross-model evaluation. Furthermore, different detection methods exhibit varying performance. According to subsection 3.3, these methods fall into four categories.

Finding 2: Fine-tuning-based methods outperform the other types.

Fine-tuned detectors, e.g., GPTSniffer and CodeT5+, lead the pack. Zero-shot approaches, e.g., LogRank and Entropy, perform poorest, which makes sense given their limited feature extraction when confronted with the complexity of code. Embedding-based detectors, e.g., Embed-Code and Embed-AST, sit in the middle but impressively maintain stable accuracy even under out-of-distribution evaluation, thanks to their reliance on code representations that generalize across LLMs. Pretrained LLMs paired with downstream classifiers, e.g., Raidar and BiScope, match embedding methods in-distribution but suffer a larger drop on out-of-distribution tests, reflecting subtle shifts in the features they extract across different models and paraphrased inputs.

Finding 3: Fine-tuning approaches using backbone LLMs pre-trained on larger code corpora achieve superior performance.

Figure 4: Comparison Between Different Programming Languages. The bar chart presents the average F1 scores of baseline detectors on different programming languages across LLMs.

Performance varies across fine-tuning methods. For example, CodeT5+ slightly outperforms GPTSniffer, and both surpass RoBERTa. This gap reflects their pre-training corpora: GPTSniffer's CodeBERT backbone is trained on six programming languages, whereas CodeT5+'s backbone covers nine. In contrast, RoBERTa is pretrained solely on natural-language text. Consequently, backbones exposed to more and broader code samples exhibit superior coding proficiency, and hence better detection capability.

Finding 4: Fine-tuning–based detectors are prone to overfitting.
We also observe that fine-tuning–based methods (e.g., GPTSniffer and CodeT5+) exhibit a larger performance drop from in-distribution to cross-model evaluations than other approaches. This is likely due to their overfitting tendencies and should be taken into account in real-world deployments.

Finding 5: ASTs provide a superior feature representation compared to raw source code.

The two embedding-based detectors demonstrate comparable performance, with Embed-AST marginally outperforming Embed-Code. This suggests that AST-based embeddings capture the program's syntactic hierarchy and semantic relationships, e.g., control flow and data dependencies, more effectively than raw code tokens, making them more robust to superficial variations like naming or formatting.

# 4.2 Comparison Between Different Programming Languages

We evaluate detection performance across ten programming languages using CodeMirage. The results are shown in Figure 4, where the x-axis lists the languages and the y-axis denotes the F1 score. To minimize bias, each bar aggregates results from experiments with all ten LLMs and ten detectors. Its height indicates the average F1 score, and the error bars represent one standard deviation.

Finding 6: Detection is consistent across programming languages, with common languages performing slightly better.

We observe only slight performance differences among languages, with similar patterns across evaluation configurations. Notably, less common languages exhibit marginally lower performance. For example, C++ achieves higher F1 scores than Go or Ruby. This discrepancy arises because several detection methods, e.g., BiScope [27] and Raidar [48], rely on pre-trained LLMs for feature extraction. These models are pre-trained on large online corpora containing more examples of common languages (e.g., C++) than atypical ones (e.g., Go), resulting in stronger representations for the former.
Hence, detection performance is better on those common languages.

# 4.3 Comparison Between Different LLMs

We evaluate the detection performance on code generated by different LLMs, with results shown in Figure 5. The x-axis represents the generative models, while the y-axis indicates the F1 score. Each bar color corresponds to one of the four evaluation settings.

Finding 7: Detection performance is generally similar across LLMs, with GPT and Llama showing slightly higher scores.

Figure 5: Comparison Between Different LLMs. The bar chart shows the average F1 scores of baseline detectors on different LLMs across programming languages.

Among all models, GPT-4o mini achieves the highest F1 scores, particularly under the In-Distribution and Paraphrase settings, suggesting that its code style is more consistent or distinctive, making detection easier. Claude 3.5 Haiku and Llama 3.3 70B also demonstrate strong performance, especially under In-Distribution, likely due to their more recognizable or less variable code patterns. In contrast, Cross-Model Paraphrase consistently yields the lowest F1 scores (around 0.65–0.7), highlighting it as the most challenging scenario for detection. Models such as Gemini 2.0 Pro and Qwen 2.5 Coder 32B exhibit lower detectability across settings, especially under paraphrased or cross-model conditions, indicating that their outputs may be more diverse or stylistically closer to human code, thereby reducing their distinctiveness.

Finding 8: Reasoning models exhibit a larger performance drop after paraphrasing.

We observe that for non-reasoning models (DeepSeek V3, GPT-4o mini, Llama 3.3 70B, and Qwen 2.5 Coder 32B), paraphrasing has minimal impact on performance. In contrast, reasoning models (e.g., o3-mini) suffer a more pronounced decline.
This likely stems from their stronger comprehension abilities: they better interpret paraphrased inputs and adjust outputs to match human-style reasoning, making any deviations more evident after paraphrasing.

# 4.4 Comparison Between Different Evaluation Metrics

In the previous experiments, we mainly use the F1 score, a threshold-dependent measure that balances precision and recall, but F1 can be misleading in real-world detection tasks. Because it gives equal weight to false positives and false negatives and depends on a single decision threshold, it often fails to reflect performance in imbalanced settings or under strict false-alarm constraints. By contrast, reporting the true positive rate at low false-positive rates directly measures how many genuine positives the model catches when false alarms must be kept to a minimum [7]. Therefore, we report two additional metrics, TPR@FPR=10% and TPR@FPR=1%, to better assess detector practicality.

Finding 9: There is a significant gap between laboratory evaluations and practical use.

Results in Appendix G indicate that despite decent F1 scores, all detectors suffer a dramatic drop in true-positive rate once the false-positive rate is constrained, showing that they fail to catch enough positives under realistic, low-alarm requirements and are therefore impractical in such settings.
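A dependency-free way to compute these TPR-at-fixed-FPR metrics from raw detector scores is sketched below (treating AI-generated as the positive class, higher score = more AI-like); in practice a library ROC-curve routine would typically be used instead.

```python
def tpr_at_fpr(pos_scores, neg_scores, fpr_budget):
    """Report the best TPR achievable by any threshold whose FPR stays
    within the given budget (e.g., 0.10 for TPR@FPR=10%)."""
    thresholds = sorted(set(pos_scores) | set(neg_scores), reverse=True)
    best_tpr = 0.0
    for t in thresholds:
        fpr = sum(s >= t for s in neg_scores) / len(neg_scores)
        if fpr <= fpr_budget:
            tpr = sum(s >= t for s in pos_scores) / len(pos_scores)
            best_tpr = max(best_tpr, tpr)
    return best_tpr
```

For example, with positive scores [0.9, 0.8, 0.7, 0.2] and negative scores [0.6, 0.5, 0.4, 0.3], a zero-false-positive budget caps the detector at catching three of the four positives.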
Large language models (LLMs) have become integral to modern software development, producing vast amounts of AI-generated source code. While these models boost programming productivity, their misuse introduces critical risks, including code plagiarism, license violations, and the propagation of insecure programs. As a result, robust detection of AI-generated code is essential. To support the development of such detectors, a comprehensive benchmark that reflects real-world conditions is crucial. However, existing benchmarks fall short -- most cover only a limited set of programming languages and rely on less capable generative models. In this paper, we present CodeMirage, a comprehensive benchmark that addresses these limitations through three major advancements: (1) it spans ten widely used programming languages, (2) includes both original and paraphrased code samples, and (3) incorporates outputs from ten state-of-the-art production-level LLMs, including both reasoning and non-reasoning models from six major providers. Using CodeMirage, we evaluate ten representative detectors across four methodological paradigms under four realistic evaluation configurations, reporting results using three complementary metrics. Our analysis reveals nine key findings that uncover the strengths and weaknesses of current detectors, and identify critical challenges for future work. We believe CodeMirage offers a rigorous and practical testbed to advance the development of robust and generalizable AI-generated code detectors.
[ "cs.SE", "cs.CL", "cs.CY", "cs.LG" ]
# 1 Introduction

Many popular programming languages, including C#, Java, and Python, support exceptions [15, 17, 37]. Exceptions are thrown during program execution if an unwanted event happens, e.g., a method is invoked with an illegal argument value. Software developers write exceptional behavior tests (EBTs) to check that their code properly detects unwanted events and throws desired exceptions. Prior research studies on EBTs [2, 8, 14, 21, 24] have shown the importance of EBTs and developers' desire to improve the testing of exceptional behaviors. However, in practice, developers tend to focus on "happy paths" and have limited time to test exceptional behaviors. This results in a lower number of EBTs compared to non-EBTs in most projects. Sadly, tool support for automatically generating EBTs is limited. Most existing analysis-based test generation tools (e.g., Randoop [28, 31] and EvoSuite [12]) and learning-based test generation tools (e.g., CAT-LM [30] and TeCo [26]) have no special settings for targeting EBTs and are primarily evaluated on non-EBTs. Random test generation tools can be guided by reinforcement learning to target exceptional behaviors [1], but the generation works only on the entire codebase, and not for a specific throw statement that a developer might select. Additionally, tests produced by analysis-based tools often lack readability [6, 7, 29]. We recently designed and developed exLong [44], a framework that utilized an instruction fine-tuned large language model (LLM) to automatically generate EBTs. Using CodeLlama [32] as its base, exLong is fine-tuned [34, 39, 40] with a novel task instruction dataset, designed specifically to embed the reasoning about the context which includes: (a) stack traces that lead to target throw statements, (b) guard expressions (i.e., conditional expressions that guard those throw statements), and (c) non-EBTs that execute similar traces.
This context is used as the input to generate an EBT that triggers the target throw statement. In Figures 1 and 2, we show examples of EBTs generated by exLong.

```java
public Scheduler(SchedulerConfig config) {
    if (config.getTimeProvider() == null) {
        throw new NullPointerException("The timeProvider cannot be null");
    }
}
```

(a) Method under test: Scheduler.

```java
@Test(expected = NullPointerException.class)
public void should_fail_if_timeProvider_is_null() {
    new Scheduler(SchedulerConfig.builder().maxThreads(1).timeProvider(null).build());
}
```

(b) EBT generated by exLong.

This paper extends exLong by introducing a new command-line interface that simplifies the process of extracting the necessary context for EBT generation and querying the fine-tuned LLM. We describe two use cases supported by exLong: (1) developer-oriented use case: developers select a method under test (e.g., schedule in Figure 1a), a target throw statement (e.g., line 12 in Figure 1a), and a destination test file. exLong then automatically generates an EBT that executes the target throw statement. (2) machine-oriented use case: developers employ exLong to automatically generate EBTs for their entire codebase, covering each existing throw statement, such as line 3 in Scheduler in Figure 2a. Additionally, to improve exLong's accessibility for typical users, we include an option to use a quantized [9, 42] version of the fine-tuned LLM, which reduces memory usage by 75%. This optimization enables exLong to operate on machines with limited computational resources. Our experiments demonstrate exLong's effectiveness in both supported use cases. For the developer-oriented use case, we compare our tool against a state-of-the-art test generation model (CAT-LM [30]) and a leading foundation LLM (GPT3.5 [27]). Results show that exLong generates 83.8% more executable EBTs than CAT-LM and 9.9% more than GPT3.5.
After quantization, exLong can run on a local machine with a single GPU, with a relatively small performance reduction resulting in the generation of 13.1% fewer executable EBTs. For the machine-oriented use case, we compare our tool against two popular analysis-based test generation tools: Randoop [28, 31] and EvoSuite [12]. While these tools complement each other (i.e., each tool can generate EBTs for some target throw statements that others cannot), our findings indicate that exLong outperforms both Randoop and EvoSuite. exLong is available on GitHub at https://github.com/EngineeringSoftware/exLong.

# 2 Technique and Implementation

Figure 3 [44] illustrates the workflow of exLong. Given a method under test (MUT), a target throw statement, and a destination test file, exLong collects the stack trace, guard expression, and relevant non-EBTs using both static and dynamic program analyses ③. These components are then used to construct a prompt which encompasses both the task inputs and the relevant context ④. During training, a foundation LLM is fine-tuned to generate the EBT conditioned on the input prompt. During inference, exLong first prepares the necessary context to construct the prompt, then the fine-tuned LLM generates EBTs given the prompt. We detail the design and implementation in the rest of this section.

# 2.1 Developer-oriented use case

Preparation. In this phase, exLong collects a set of stack traces from the execution of existing non-EBTs that can reach methods containing target throw statements in the repository. Using the example in Figure 1, exLong first identifies and instruments the throw statement in the method prepareJob to log the current stack trace upon the invocation of prepareJob. Then exLong executes the existing non-EBTs to log the stack traces and record the mapping between the non-EBTs and their invoked methods.
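The preparation phase above can be sketched as building an index from logged traces. This is a toy illustration, not exLong's actual implementation: `run_test` stands in for executing an instrumented non-EBT and returning its logged stack traces (lists of method names, caller first, throwing method last).

```python
def build_trace_index(non_ebts, run_test):
    """Map each throwing method to the non-EBTs (and traces) that reach it."""
    index = {}  # method name -> list of (test, trace)
    for test in non_ebts:
        for trace in run_test(test):
            index.setdefault(trace[-1], []).append((test, trace))
    return index

def find_trace(index, entry_method, throwing_method):
    """Pick a logged trace that starts at the MUT and ends at the thrower."""
    for test, trace in index.get(throwing_method, []):
        if trace[0] == entry_method:
            return test, trace
    return None
```

With traces logged from two toy tests, looking up a schedule-to-prepareJob path returns the matching non-EBT, mirroring the lookup exLong performs in its analysis phase.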
Note that a developer only needs to run this phase once for the repository they are working on.

Analysis. exLong constructs a prompt from the developer-provided context and the information collected in the preparation phase. Taking Figure 1 as an example, exLong first searches the collected stack traces for one that begins with schedule and ends in prepareJob. An example of the resulting stack trace consisting of the schedule and prepareJob methods is shown in Figure 4a. While the stack trace provides the sequence of method invocations that lead to the target throw statement, knowing only the names of the methods is insufficient for generating EBTs. exLong therefore constructs a guard expression to further aid the LLM's reasoning about system configurations that would lead to exceptional behaviors. A guard expression is a logical formula representing the constraints necessary to reach the target throw statement. An example of a guard expression is shown in Figure 4b. Specifically, exLong collects guard-related AST nodes along the stack trace, including conditional expressions (line 11 in Figure 1) and assignments (line 10 in Figure 1). It then propagates symbolic variables, performing substitutions where necessary. The resulting formula is a conjunction of expressions guarding the target throw statement. Finally, exLong identifies relevant non-EBTs from the same repository to encourage the LLM to reason about the procedures to set up the object under test and to promote consistency between the newly generated code and existing code in terms of format and coding conventions. The non-EBT in Figure 4c is identified as relevant since it invokes the target MUT schedule. To enhance the quality of the generated EBTs, exLong can optionally create multiple prompts by including different relevant non-EBTs and then select the best EBT based on its ability to compile, execute, and cover the target throw statement.

# 2.2 Machine-oriented use case

Preparation.
exLong parses the repository to identify all target throw statements within public methods (line 3 in Figure 2). Similar to the developer-oriented use case, it executes the existing non-EBTs to extract the coverage data. This is used to determine both the relevant non-EBTs and the destination test file.

Analysis. As shown in Figure 2a, for each target throw statement, the MUT is defined as the method containing the target throw statement (Scheduler). In this case, the stack trace only includes the MUT. The guard expression and relevant non-EBTs are extracted using the same approach as in the developer-oriented use case. The destination test file is selected using two heuristics similar to prior works [30]: (1) file name matching, where given a code file named Scheduler.java, exLong searches for a test file named TestScheduler.java or SchedulerTest.java, and (2) test coverage analysis, in which, if name matching fails, exLong searches for the test class covering the MUT or the class of the MUT. Finally, exLong constructs the prompt with all the available context. exLong can optionally create multiple prompts from different non-EBTs, generating and evaluating multiple EBTs and then selecting the best one based on runtime evaluation.

Figure 3: The workflow of exLong: the analysis phase collects stack traces, guard expressions, and relevant non-EBTs, which the prompt generation phase assembles into the LLM prompt (instructing the model to return only the test code) for both the developer-oriented and machine-oriented use cases; the fine-tuned LLM then produces the generated EBT.

```java
schedule(Scheduler.java:186)
    Job job = prepareJob(name, runnable, when);
prepareJob(Scheduler.java:340)
    throw new IllegalArgumentException("A job is already scheduled with the name:" + name);
```

(a) Stack trace from MUT to target throw statement.

```java
findJob(nullableName == null ? runnable.toString() : nullableName).orElse(null) != null
    && findJob(nullableName == null ? runnable.toString() : nullableName).orElse(null).status() != JobStatus.DONE
```

(b) Guard expression.

```java
@Test
public void should_run_a_single_job() throws InterruptedException {
    Scheduler scheduler = new Scheduler();
    SingleJob singleJob = new SingleJob();
    scheduler.schedule("test", singleJob, Schedules.executeOnce(Schedules.fixedDelaySchedule(Duration.ofMillis(1))));
    waitOn(singleJob, () -> singleJob.countExecuted.get() > 0, 10000);
    scheduler.gracefullyShutdown();
    assertThat(singleJob.countExecuted.get()).isEqualTo(1);
}
```

(c) non-EBT.

# 3 Tool Installation

exLong generates EBTs for Java projects built using Maven. We require Maven 3.8.3+ and Java 8+. For quantized LLM inference, exLong leverages ollama [41], which can be installed following the instructions from ollama's official GitHub repository. To get started with exLong, begin by cloning the repository:

```
$ git clone https://github.com/EngineeringSoftware/exLong.git
```

exLong is implemented in Python and requires version 3.10 or higher. For a smooth installation process, we recommend using Conda [5] to manage dependencies. Users can execute our provided script to set up exLong and its required components. We also offer Docker-based installation options.
The Docker image can be built and run with:

```
$ docker build -t exlong .
$ docker exec -it exlong /bin/bash
```

Furthermore, for integration with the ollama Docker image, users can use our Docker Compose setup:

```
$ docker compose up -d
$ docker exec -it exlong-tool-1 /bin/bash
```

# 4 Tool Usage

In this section, we introduce how to use exLong for the developer-oriented use case and the machine-oriented use case.

# 4.1 Developer-oriented use case

For the developer-oriented use case, where exLong generates an EBT for a user-specified target throw statement, our tool's CLI requires the following parameters: the local path or remote link to the git repository, the path to the file containing the MUT, the line number of the beginning of the MUT's definition, the path to the file containing the target throw statement, the line number of the target throw statement, and the path to the destination test file. Additionally, exLong's CLI accepts the following optional parameters: a commit SHA (default: latest commit on the main branch), the name of the test method to be written by exLong (default: none), whether exLong should use the quantized LLM (default: true), whether exLong should sample multiple candidate EBTs and select the best test based on runtime evaluation (default: false), and the output file path for the generated EBT (default: ./output.java). An example command to invoke the developer-oriented use case of exLong is as follows:

```
$ python -m etestgen.cli user_view \
    --repo_path=./Wisp \
    --mut_file_path=Scheduler.java \
    --mut_line=180 \
    --quan=true \
    --throw_file_path=Scheduler.java \
    --throw_line=340 \
    --test_context_path=SchedulerTest.java \
    --sha="ce1d9f3cb1944115ad98b4428ea24b24ab3faf56" \
    --test_name=testSchedulerError \
    --pick_best=True \
    --output_file=./ExlongTest.java
```

Table 1: Results on developer-oriented use case with ground-truth EBT's name in the prompt.

# 4.2 Machine-oriented use case

In the machine-oriented use case, exLong generates EBTs for the entire codebase. The only required parameter for exLong's CLI is the path or link to the git repository. The CLI also accepts a commit SHA, an option to sample multiple EBTs, an option to use the quantized LLM, a time budget for exLong to finish, and a path to the output file as optional parameters. An example command to invoke the machine-oriented use case of exLong is as follows:

```
$ python -m etestgen.cli machine_view \
    --repo_link="https://github.com/Coreoz/Wisp.git" \
    --sha="ce1d9f3cb1944115ad98b4428ea24b24ab3faf56" \
    --timeout=1000
```

# 5 Evaluation

Following prior work [26], we collect our dataset from Java projects in CodeSearchNet [19], which are available on GitHub. We evaluate exLong's performance with the full-precision LLM under both the developer-oriented and machine-oriented use cases. For the developer-oriented use case, we benchmark exLong on a subset of 434 examples from which we are able to extract stack traces. For the machine-oriented use case, we evaluate exLong on 649 examples, filtering out data for which our heuristic failed to locate the corresponding destination test file. We evaluate EBTs generated by exLong using the percentage of generated EBTs that can be compiled (Compilable%), can be executed (Runnable%), and those that are semantically valid and target the throw statement specified by developers (ThrowCov%). We compare exLong against a widely used foundation model, GPT3.5, and a specialized test-generating LLM, CAT-LM. Our results are shown in Table 1.
We observe that exLong outperforms all the baselines on all metrics. exLong achieves higher performance both for generating executable EBTs (Runnable%) and for EBTs that cover the target throw statements (ThrowCov%). Specifically, exLong outperforms GPT3.5 by 9.9% and 22.8% on Runnable% and ThrowCov%, respectively. Similarly, exLong outperforms CAT-LM by 83.8% and 98.0% on Runnable% and ThrowCov%, respectively. For the machine-oriented use case, we evaluate the tool's ability to cover throw statements within a given repository with ThrowCov%, which measures the percentage of target throw statements covered by the generated EBTs. We benchmark exLong against two widely used analysis-based test generation tools: Randoop [28, 31] and EvoSuite [12]. Our results, illustrated in Figure 5, indicate that exLong covers the most target throw statements. For more details of our evaluation, refer to the full paper [44].

Figure 5: Venn diagram of target throw statements coverage by exLong, Randoop, and EvoSuite on all 30 projects.

# 6 Related Work

Recent studies have leveraged transformer models for test generation [10, 20, 25, 26, 30, 35, 36, 38, 43]. Some approaches use conditions to guide the generation process [3, 4, 33], while others utilize existing test cases as context [10, 26, 30, 36]. Our work uniquely combines non-exceptional tests with stack traces and guard expressions to guide exceptional test generation. Non-LLM test generation approaches include random-based [28, 31], search-based [12, 16, 22, 23], and constraint-based [11, 13, 18] strategies. While tools like Randoop and EvoSuite can generate tests for exceptional behaviors, they neither guarantee coverage of specific exceptional paths nor consistently produce readable test cases due to their random nature.
Exceptional behavior tests (EBTs) are crucial in software development for verifying that code correctly handles unwanted events and throws appropriate exceptions. However, prior research has shown that developers often prioritize testing "happy paths" (i.e., paths without unwanted events) over exceptional scenarios. We present exLong, a framework that automatically generates EBTs to address this gap. exLong leverages a large language model (LLM) fine-tuned from CodeLlama and incorporates reasoning about exception-throwing traces, conditional expressions that guard throw statements, and non-exceptional behavior tests that execute similar traces. Our demonstration video illustrates how exLong can effectively assist developers in creating comprehensive EBTs for their project (available at https://youtu.be/Jro8kMgplZk).
[ "cs.SE", "cs.AI" ]
# 1. Introduction

End-to-end (E2E) automatic speech recognition (ASR) [1, 2] has made significant strides in recent years, achieving remarkable performance on various benchmarks [3–5]. However, the challenge of recognizing overlapping speech in multi-talker scenarios remains a critical area of research. Traditional ASR systems struggle with overlapping speech, leading to significant degradation in word error rates (WER) [6, 7]. To address this, several approaches have been proposed, including permutation invariant training (PIT) [8, 9], serialized output training (SOT) [10], and continuous speech separation (CSS) [6, 11]. SOT [10] was introduced to overcome some of the limitations of PIT. SOT uses a single output layer that generates transcriptions for multiple speakers sequentially, separated by a special token indicating a speaker change. This approach eliminates the constraint on the maximum number of speakers and models dependencies among outputs for different speakers. However, SOT is primarily designed for offline ASR and does not support streaming inference. Token-level serialized output training (tSOT) [12] further extends the SOT framework by generating recognition tokens for multiple speakers in chronological order, making it suitable for streaming ASR applications. CSS [6, 11] is another approach that has been explored for handling overlapping speech. CSS converts a long-form multi-talker speech signal into multiple overlap-free speech signals using a sliding window. Each of the separated signals can then be passed to a conventional single-speaker ASR system. While CSS has shown promise in improving ASR performance for overlapping speech [6, 11, 13], it relies on a separate front-end processing step, which can introduce artifacts and errors that degrade the overall ASR accuracy.
Despite these advancements, current ASR systems still face challenges in balancing latency and accuracy, especially for practical applications that require both streaming and offline capabilities. Training separate models for each scenario is inefficient and introduces unnecessary complexity. Moreover, existing methods often struggle with highly overlapping speech and require complex architectures or multiple processing steps. To address these limitations, we propose three improvements in multi-talker ASR modeling. First, we leverage a CSS single-channel front-end for E2E systems, challenging the conventional wisdom of E2E versus cascaded setups. The CSS single-channel front-end improves performance in highly overlapping scenarios by effectively separating speech from multiple speakers, thus enhancing the accuracy of the ASR system. While using an explicit front-end for multi-channel E2E multi-talker speech recognition has been shown to help [14–16], such approaches naturally benefit from spatial filtering techniques like beamforming, which enable effective speech separation and improve recognition. In contrast, explicit speech separation for single-channel ASR has been less explored, as most E2E systems are trained directly for multi-talker recognition, relying on the model to implicitly learn both separation and transcription [17]. We show that explicit speech separation using a single-channel front-end provides significant advantages in highly overlapping scenarios compared to implicit separation within the ASR model. Second, we implement dual models — Conformer Transducer (CT) [18, 19] for streaming and sequence-to-sequence (S2S) [20, 21] for offline — and, alternatively, a unified two-pass model based on cascaded encoders [22] that balances accuracy with latency. Finally, we explore segSOT ordering of multi-talker transcriptions to improve readability, turn-taking, and context for offline scenarios.
We also study the effect of the CSS encoder to further improve the accuracy of our offline model.

# 2. Model

# 2.1. Conformer Encoder with CSS Input

We used an encoder architecture similar to what was used with a multichannel front-end in [16]. As shown in Fig. 1, the encoder is designed to process two audio signals by splitting the conformer encoder with $L$ layers in total into $N$ channel-dependent layers and $L - N$ channel-independent layers. The outputs from the $N$-th layer in each channel are summed and further processed by the $L - N$ channel-independent layers. The parameters of the two-branched encoders are shared between the two channels. Consequently, the parameter count of the two-channel CSS Conformer Encoder is equivalent to that of the conventional single-channel non-CSS Conformer Encoder.

Figure 1: Two-channel Conformer Encoder with CSS inputs.

In the CSS front-end, a long-form audio input is segmented into overlapping chunks. Each chunk is then processed by a local Speech Separation (SS) network, which estimates two overlap-free signals (assuming that there are only two speakers in the chunk) that are input to the two-channel encoder in Fig. 1. We use a conformer-based CSS [23] similar to [13].

# 2.2. Cascaded Encoder

Cascaded encoders [22] have previously been proposed to unify conventional streaming and non-streaming ASR models. The cascaded encoder model consists of a causal encoder and a non-causal encoder, where the causal encoder processes input features in a streaming fashion, and the non-causal encoder further processes these features using future context information. The outputs from both encoders are then fed into a shared Recurrent Neural Network Transducer decoder, allowing the model to operate in both streaming and non-streaming modes. Our proposed model, shown in Fig. 2, leverages the strengths of the original cascaded encoder [22] architecture while incorporating several key improvements to enhance performance in highly overlapping scenarios. We modify the causal encoder to consume two-channel CSS inputs similar to Section 2.1. We train it with multi-talker data and serialized output training.

# 2.3. S2S Model with Segment-based SOT (segSOT)

In this section, we explain the various components of the S2S-segSOT model. It follows the architecture of an S2S model. Given a sequence of input acoustic frames $\mathbf{x} = [\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_T]$ and labels $\mathbf{y} = [y_0, y_1, \cdots, y_U]$, it models the distribution of the predicted labels conditioned on the entire input sequence $\mathbf{x}$ and the partial sequence of previously predicted labels, i.e., $P(y_u \mid y_0, y_1, \cdots, y_{u-1}, \mathbf{x})$ (non-causal autoregressive). Since the predicted label at step $u$ is conditioned on the entire input sequence $\mathbf{x}$, it is suitable for modeling offline scenarios.

Figure 2: Conformer Transducer with Multi-Talker Cascaded Encoder. The Causal Encoder here takes two-channel CSS inputs.

There are different ways of ordering/serializing the transcriptions in multi-talker simulations. One such way is the SOT paradigm [10, 24]. Transcriptions of a multi-talker conversation are shown in Fig. 3. There are 3 speakers with several regions of overlapped and non-overlapped speech. Ordering the transcriptions by the start times of speakers and concatenating them yields an sSOT [10] transcription. Ordering by the start times of individual tokens/words yields a tSOT [12] transcription. We propose segSOT ordering of transcriptions, which is suitable for offline scenarios. In segSOT, an utterance is split into segments depending on speech activity or short pauses.
The segments are then ordered according to their start times to yield a segSOT serialized transcription. The three different transcriptions (sSOT, tSOT, segSOT) corresponding to the scenario in Fig. 3 are shown below.

sSOT: hi how are you doing everyone it has been raining here where are you all <cc> oh hi i'm fine <cc> hi there doing well

tSOT: hi how are you doing <cc> oh <cc> everyone <cc> hi <cc> hi <cc> there <cc> it has been <cc> doing <cc> raining <cc> well <cc> i'm <cc> here <cc> fine <cc> where are you all

segSOT: hi how are you doing everyone it has been raining here <cc> oh hi <cc> hi there doing well <cc> i'm fine <cc> where are you all

The <cc> tag denotes a channel (or speaker) change. The length of a segment is determined by two parameters: a) $\alpha$: the maximum allowed length during speech activity, and b) $\beta$: the maximum allowed length of a short pause. We design the maximum allowed length of a segment during speech activity to represent turn-taking scenarios. For example, in Fig. 3, speaker 1 speaks continuously for more than $\alpha$ seconds. However, at $t = \alpha$, segSOT starts transcribing the earliest available segment of a different speaker, i.e., speaker 2 (oh hi). This prevents large delays in transcribing other overlapping speakers, thereby allowing the frequent turn-taking that is common in multi-talker conversations. Following this, speaker 2 pauses for a duration that exceeds $\beta$ seconds. Thus, segSOT stops transcribing speaker 2 and switches to the earliest available segment, which is from speaker 3 (hi there). Since there is a short pause $(\leq \beta)$, segSOT does not break the segment and continues to transcribe speaker 3 (doing well). After this, it finds the earliest available segment, which is from speaker 2 (i'm fine). This process continues until all segments are exhausted.

Figure 3: Multi-talker Transcription ($\alpha$ = Maximum length of speech activity, $\beta$ = Maximum length of short pause).

There are several advantages of using segSOT.

• Readability: The readability of sSOT transcriptions can sometimes be difficult if there are several overlapping speakers. Although this can be improved with additional post-processing methods, the readability of segSOT transcriptions is much better since it is closer to the way humans transcribe, and no additional post-processing methods are required.

• Turn-taking and Context: Utterance-based SOT (uSOT) [25] orders the transcriptions according to the start times of the utterances. In [25], full utterances were generated through prior segmentation of speech and text data. Later, those utterances were randomly sampled and overlapped to generate uSOT transcriptions. Since the lengths of the utterances were not properly defined, uSOT transcriptions are prone to large variations. For example, an utterance could encompass a long speech-active region, which impedes turn-taking among speakers. Alternatively, an utterance consisting of a long silence region flanked by two short speech-active regions of unrelated text (e.g., "it is <long sil> got it") results in an incoherent transcription ("it is got it"). With segSOT, these problems are avoided, thereby achieving better consistency in turn-taking and context through the choices of the $\alpha$ and $\beta$ parameters. Furthermore, the sSOT ordering precludes turn-taking during overlapped speech, and [25] has shown that sSOT ordering performed worse than uSOT ordering.
• CTC: Since S2S models are vulnerable to hallucinations, an auxiliary CTC objective is usually added to mitigate this problem [26]. While word ordering in sSOT/uSOT is more non-monotonic than in segSOT, the CTC criterion favors monotonicity. Because of this conflict, the CTC objective tends to penalize sSOT/uSOT more severely than segSOT.

# 3. Experiments

# 3.1. Data

We trained two seed single-speaker ASR models (CT, S2S) using 30,000 hours of in-house data [27], with all personally identifiable information removed. To develop multi-speaker models, we fine-tuned the initial seed model using a diverse dataset. This dataset included: a) simulated multi-speaker data derived from the aforementioned 30,000 hours of recordings, b) real meetings data from the AMI [28] and ICSI [29] corpora, and c) some in-house meeting recordings. During the multi-speaker simulation, two utterances were randomly mixed in approximately two-thirds of the cases, while the remaining one-third consisted of the original single-speaker utterances. We evaluated our models using the LibriCSS test set [6]. The original recordings were made with a 7-channel microphone array, but we used only the first channel for our experiments. The recordings span a total of 10 hours and are categorized by speaker overlap ratios ranging from 0% to 40%. Each category includes 10 mini-sessions, each lasting 10 minutes. For our evaluation, we used the segmented versions of sessions 1 through 9, excluding session 0. We measure the performance with Speaker-Agnostic Word Error Rate (SAgWER) computed using NIST's asclite [30] tool.

# 3.2. Model

# 3.2.1. CSS Front-end

The training of the CSS network utilizing WavLM [31] follows the methodology outlined in [13, 32], which adopts a sliding-window approach (2.4s window + 0.8s hop). Initially, WavLM is trained using unlabeled speech data.
Subsequently, the conformer-based SS network [23] is fine-tuned. This process involves inputting both an audio spectrogram and the WavLM embedding into the network. The WavLM embedding itself is derived from the weighted average of the outputs across all transformer layers of WavLM, with weights adjusted during fine-tuning. The optimization of the SS network employs an utterance-level permutation invariant training loss for two outputs, where each output branch is tasked with estimating a magnitude mask for each speaker. This design introduces a processing latency of 0.8s. In [32], the RTF is calculated as 0.548 for this front-end. The overall latency of our proposed system is the maximum of this 0.8s and the latency of ASR. We trained our ASR models without and with the two-channel CSS encoder. ASR models without the CSS encoder were trained with 80-dimensional log Mel filter bank (LMFB) features extracted every 10ms from a window of 25ms of speech to handle the mixed-band 8kHz or 16kHz speech with the method described in [33]. ASR models with the two-channel CSS encoder were trained with 80-dimensional LMFB features extracted from each channel.

# 3.2.2. CT-tSOT Model and Cascaded CT Model

The CT model encoder comprises 2 convolutional layers followed by an 18-layer Conformer encoder, utilizing a chunk-wise streaming mask to achieve a latency of 160ms. Each Conformer block includes a multi-head self-attention (MHSA) layer with an attention dimension of 512, distributed across 8 heads, and a 2048-dimensional feed-forward network (FFN) with a Gaussian error linear unit (GELU). For the cascaded CT model we used 12 causal layers and 6 non-causal layers so that the overall parameter count is the same as the CT model. The non-causal layers have a chunk size of 5s. The prediction network for the transducer models consists of a 2-layer LSTM with 1024 dimensions, and the joint network is a 512-dimensional feed-forward layer.
We utilized 4,003 tokens for recognition, along with blank and <cc> tokens. We used the AdamW optimizer with $(\beta_1, \beta_2) = (0.9, 0.98)$ and a peak learning rate (PLR) of $2.0 \times 10^{-4}$. The LR schedule followed a linear decay from the PLR over 4 million steps.

# 3.2.3. S2S-segSOT Model

The S2S-segSOT model consists of an 18-layer Conformer encoder and a 6-layer transformer decoder. The architecture of the Conformer encoder blocks is the same as in the CT model.

Table 1: SAgWER (%) on the monaural LibriCSS test set. A macro average of SAgWERs is shown in the "Avg." column. 0L and 0S are 0% overlap conditions with long and short inter-utterance silences. Column "CSS" represents whether the two-channel CSS Encoder is enabled or not. Column "C,NC" represents the number of causal and non-causal encoder layers. T is a variable representing the length of the input utterance group.

Each decoder block consists of an MHSA layer with an attention dimension of 512, distributed across 8 heads, and a 2048-dimensional FFN followed by a Rectified Linear Unit (ReLU). The S2S-segSOT model was trained with a combination of label smoothing and CTC auxiliary losses using weights of 1 and 0.2, respectively, on segSOT transcriptions. The segSOT transcriptions were generated using $(\alpha, \beta) = (5, 0.5)$ seconds. The same AdamW optimizer as the CT model was used but with a PLR of $2.2 \times 10^{-5}$. The LR schedule followed a linear increase up to the PLR over 300k warmup steps followed by a linear decay up to 3 million steps.

# 3.3. Results

The evaluation results for our models tested on LibriCSS are tabulated in Table 1. It should be noted that no Librispeech data was used during training. Hence, the results represent the performance of our models in unseen test conditions.
CT-tSOT is the baseline in our study, and it was shown in [12] to perform similarly or better than a cascaded system (CSS + single-talker ASR), even with only simulated training data. Here, we also use real overlapped multi-talker data in training, which benefits E2E multi-talker systems. Training single-talker ASR with such real data is not straightforward, as we cannot obtain channel-specific error-free target transcriptions even if we preprocess it with CSS. First, in rows 1-2, we compare the performance of the multi-talker streaming CT-tSOT system with and without the proposed CSS encoder. The zero-overlap test sets in LibriCSS ("0L", "0S") represent single-talker test sets. For both of these test sets, there was some degradation when using the CSS front-end. However, there was significant improvement (8-9% relative) in the highly overlapping scenarios ("30", "40"), with the overall average across all scenarios showing a marginal improvement. In rows 3-5, we discuss the results of the cascaded CT model with the CSS encoder. Row 3 shows the first-pass (streaming) results of the cascaded CT-tSOT model, where only the first 12 layers of causal encoders were used. Non-causal encoders were not used at all. The results are significantly worse than row 2 as there are fewer causal encoder layers, i.e., 18 in row 2 versus 12 in row 3. Our cascaded system was designed with fewer causal encoder layers to match the total number of parameters with the CT systems in rows 1-2. Next, in row 4, we compare the second-pass results of the cascaded CT-tSOT model using both the causal and non-causal encoder layers. Clearly, this model significantly outperforms the CT-tSOT model in row 2 across all scenarios. The improvement is 10.2% relative (11.38 → 10.2) at the cost of more latency.
This demonstrates a trade-off between accuracy and latency within the same model using the first-pass and second-pass outputs. Using the segSOT transcription ordering with the cascaded CT model (row 5) causes some degradation, as the segSOT ordering makes the alignment more complicated for the RNN-T loss, but it improves readability. Next, we discuss the results of the proposed purely offline S2S models. To assess the performance of our models in single-talker test cases, we compare the multi-talker S2S-segSOT model with a baseline single-talker S2S model. Both models were trained with the same number of parameters. The results are: S2S → segSOT: 7.02 → 7.05 (0L test) and 7.15 → 7.23 (0S test). There is some degradation but it is quite marginal (≤1%). This shows that the multi-talker model is able to preserve the capacity of the single-talker model. Finally, in rows 6-7, we compare the S2S-segSOT model with the CSS encoder (row 7) and without the CSS encoder (row 6). The proposed S2S-segSOT model without the CSS encoder performed better than the model with the CSS encoder in single-talker scenarios (0L, 0S) by only about 3.3% relative. Moreover, the model tends to perform better with long utterances (0L: 7.05, 7.30) than with short utterances (0S: 7.23, 7.46) because of longer contexts. However, for the more challenging scenarios of overlapped speech (10-40), the model equipped with the CSS encoder (row 7) performed better on average by about 4.2% relative. The robustness of the S2S-segSOT model with the CSS encoder is further highlighted by the fact that the larger the amount of overlapped speech, the wider the performance gap with respect to the non-CSS encoder model. Moreover, it achieves the highest accuracy (9.93%) among all the models in Table 1. Finally, we compare the best cascaded CT-tSOT model (row 4) with the S2S-segSOT model (row 6).
We observe that the cascaded CT-tSOT model with the CSS encoder, despite its limited latency, is able to achieve accuracy (10.24%) on par with the S2S-segSOT model, which is a purely offline model.
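The segSOT serialization described in Section 2.3 can be illustrated with a short sketch. It assumes the utterance has already been split into segments using the $\alpha$/$\beta$ rules; the tuple format, timings, and function name are our own illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of segSOT serialization: segments (already split with the
# alpha/beta rules of Section 2.3) are ordered by start time and joined,
# with a <cc> tag emitted on every speaker change.

def seg_sot(segments):
    """segments: list of (speaker_id, start_time, text) tuples."""
    ordered = sorted(segments, key=lambda s: s[1])  # order by start time
    out, prev_spk = [], None
    for spk, _, text in ordered:
        if prev_spk is not None and spk != prev_spk:
            out.append("<cc>")  # channel (speaker) change
        out.append(text)
        prev_spk = spk
    return " ".join(out)

# Segments loosely following the Fig. 3 conversation (start times made up).
segments = [
    (1, 0.0, "hi how are you doing everyone it has been raining here"),
    (2, 5.0, "oh hi"),
    (3, 6.0, "hi there doing well"),
    (2, 8.0, "i'm fine"),
    (1, 9.0, "where are you all"),
]
print(seg_sot(segments))
```

With these assumed timings, the sketch reproduces the segSOT transcription shown in Section 2.3; consecutive segments from the same speaker would be joined without a <cc> tag.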
We extend the frameworks of Serialized Output Training (SOT) to address practical needs of both streaming and offline automatic speech recognition (ASR) applications. Our approach focuses on balancing latency and accuracy, catering to real-time captioning and summarization requirements. We propose several key improvements: (1) Leveraging Continuous Speech Separation (CSS) single-channel front-end with end-to-end (E2E) systems for highly overlapping scenarios, challenging the conventional wisdom of E2E versus cascaded setups. The CSS framework improves the accuracy of the ASR system by separating overlapped speech from multiple speakers. (2) Implementing dual models -- Conformer Transducer for streaming and Sequence-to-Sequence for offline -- or alternatively, a two-pass model based on cascaded encoders. (3) Exploring segment-based SOT (segSOT) which is better suited for offline scenarios while also enhancing readability of multi-talker transcriptions.
[ "eess.AS", "cs.CL", "cs.SD" ]
# 1 Introduction

3D reconstruction from multi-view images is a cornerstone task in computer vision. Traditionally, this process has been achieved by assembling classical techniques such as keypoint detection [1–3] and matching [4, 5], robust camera estimation [4, 6], Structure-from-Motion (SfM), Bundle Adjustment (BA) [7–9], and dense Multi-View Stereo [10, 11]. Although effective, these multi-stage methods require significant engineering effort to manage the entire process. This complexity inherently constrains their scalability and efficiency. Recently, dense matching methods, such as DUSt3R [12] and MAST3R [13], have emerged as compelling alternatives. At its core, DUSt3R utilizes a deep neural network trained to predict dense correspondences between image pairs in an end-to-end fashion. Specifically, DUSt3R takes in two images and, for each, predicts a pointmap. Each pointmap represents the 3D coordinates of every pixel, as projected into a common reference view's coordinate system. Once pointmaps are generated from multiple views, DUSt3R aligns them by optimizing the registration of these 3D points. This process recovers the camera pose for each view and reconstructs the overall 3D geometry.

Figure 2: Inconsistency Study. On the left are two image pairs sharing the same reference view $I_1$ but with different source views $I_2$ and $I_3$. On the right are the corresponding point maps, with each color indicating the respective image pair.

Despite its huge success, this pair-wise prediction paradigm is inherently problematic. Under such a design, the model considers only two images at a time. Such a constraint leads to several issues. To investigate this, we compare the pointmaps of image $I_1$ paired with different views $I_2$ and $I_3$ in Figure 2. It demonstrates that the predicted pointmaps are imprecise and inconsistent.
Firstly, the precision of geometric predictions can suffer because the model is restricted to inferring scene geometry from just one image pair. This is especially true for short-baseline cases [14], where small camera movement leads to poor triangulation and thus inaccurate geometry. Second, reconstructing an entire scene requires pointmaps from multiple image pairs. Unfortunately, these individual pairwise predictions may not be mutually consistent. For example, the pointmap predicted from $(I_1, I_2)$ may not align with the prediction from $(I_1, I_3)$, as highlighted by the color difference in Figure 2. This local inconsistency further leads to discrepancies in the overall reconstruction. What makes things worse, the model, like many deep learning systems, struggles to generalize to new or diverse scenes. Such limitations directly exacerbate the previously discussed problems of precision and inter-pair consistency. Consequently, even with a final global refinement stage, inaccurate pointmaps lead to persistent errors. To address these problems, in this paper, we present Test3R, a novel yet strikingly simple solution for 3D reconstruction, operating entirely at test time. Its core idea is straightforward: maximizing the consistency between the reconstructions generated from multiple image pairs. This principle is realized through two basic steps: Given an image triplet $(I_1, I_2, I_3)$, Test3R first estimates two initial pointmaps with respect to $I_1$: $X_1$ from the pair $(I_1, I_2)$ and $X_2$ from $(I_1, I_3)$. Test3R then optimizes the network so that the two pointmaps are cross-pair consistent, i.e., $X_1 \approx X_2$. Critically, this optimization is performed at test time via prompt tuning [15].
Despite its simplicity, Test3R offers a robust solution to all challenges mentioned above. It ensures consistency by aligning local two-view predictions, which resolves inconsistencies. This same mechanism also improves geometric precision: if a pointmap from short-baseline images is imprecise, Test3R pushes it closer to an overall global prediction, which reduces errors. Finally, Test3R adapts to new, unseen scenes, minimizing its errors on unfamiliar data. We evaluated Test3R on DUSt3R for 3D reconstruction and multi-view depth estimation. Test3R performs exceptionally well across diverse datasets, improving upon vanilla DUSt3R to achieve competitive or state-of-the-art results in both tasks. Surprisingly, for multi-view depth estimation, Test3R even surpasses baselines requiring camera poses and intrinsics, as well as those trained on the same domain. This further validates our model's robustness and efficacy. The best part is that Test3R is universally applicable and nearly cost-free. This means it can easily be applied to other models sharing a similar pipeline. We validated this by incorporating our design into MAST3R [13] and MonST3R [16]. Experimental results confirmed substantial performance improvements for both models. The contributions of this work are as follows: • We introduce Test3R, a novel yet simple solution to learn the reconstruction at test time. It optimizes the model via visual prompts to maximize the cross-pair consistency. It provides a robust solution to the challenges of the pairwise prediction paradigm and limited generalization capability. • We conducted comprehensive experiments across several downstream tasks on DUSt3R. Experiment results demonstrate that Test3R not only improves the reconstruction performance compared to vanilla DUSt3R but also outperforms a wide range of baselines. • Our design is universally applicable and nearly cost-free.
It can be easily applied to other models and implemented with minimal test-time training overhead and parameter footprint.

# 2 Related Work

# 2.1 Multi-view Stereo

Multi-view Stereo (MVS) aims to densely reconstruct the geometry of a scene from multiple overlapping images. Traditionally, all camera parameters are often estimated with SfM [17], as the given input. Existing MVS approaches can generally be classified into three categories: traditional handcrafted [11, 18–20], global optimization [21–24], and learning-based methods [10, 25–28]. Recently, DUSt3R [12] has attracted significant attention as a representative of learning-based methods. It attempts to estimate dense pointmaps from a pair of views without any explicit knowledge of the camera parameters. Subsequent works focus on improving its efficiency [29–31] and quality [13, 29, 32], and on broadening its applicability to dynamic reconstruction [16, 33–36] and 3D perception [37]. The majority employ the pairwise prediction strategy introduced by DUSt3R [12]. However, the pair-wise prediction paradigm is inherently problematic. It leads to low precision and mutually inconsistent pointmaps. Furthermore, the limited generalization capability of the model exacerbates these issues. This challenge continues even with the latest models [29, 38], which can process multiple images in a single forward pass. While potentially more robust, these newer approaches demand significantly larger resources for training and, importantly, still face challenges in generalizing to unseen environments. To this end, we introduce a novel test-time training technique. This simple design ensures cross-pair consistency by aligning local two-view predictions to push the pointmaps closer to an overall global prediction, which addresses all challenges mentioned above.

# 2.2 Test-time Training

The idea of training on unlabeled test data dates back to the 1990s [39], called transductive learning.
As Vladimir Vapnik [40] famously stated, "Try to get the answer that you really need but not a more general one", this principle has been widely applied to SVMs [41, 42] and recently in large language models [43]. Another early line of work is local learning [44, 45]: for each test input, a "local" model is trained on the nearest neighbors before a prediction is made. Recently, Test-Time Training (TTT) [46] proposed a general framework for test-time training with self-supervised learning, which produces a different model for every single test input through the self-supervision task. This strategy allows a model trained on large-scale datasets to adapt to the target domain at test time. Many other works have followed this framework since then [47–50]. Inspired by these studies, we introduce Test3R, a novel yet simple technique that extends the test-time training paradigm to the 3D reconstruction domain. Our model exploits cross-pair consistency as a strong self-supervised objective to optimize the model parameters at test time, thereby improving the final quality of reconstruction.

# 2.3 Prompt tuning

Prompt tuning was first proposed as a technique that appends learnable textual prompts to the input sequence, allowing pre-trained language models to adapt to downstream tasks without modifying the backbone parameters [51]. In follow-up research, a portion of studies [52, 53] explored strategies for crafting more effective prompt texts, whereas others [54–56] proposed treating prompts as learnable, task-specific continuous embeddings, which are optimized via gradient descent during fine-tuning, referred to as Prompt Tuning. In recent years, prompt tuning has also received considerable attention in the 2D vision domain. Among these, Visual Prompt Tuning (VPT) [15] has gained significant attention as an efficient approach specifically tailored for vision tasks.
It introduces a set of learnable prompt tokens into the pretrained model and optimizes them using the downstream task’s supervision while keeping the backbone frozen. This strategy enables the model to transfer effectively to downstream tasks. In our study, we leverage the efficient fine-tuning capability of VPT to optimize the model to ensure the pointmaps are cross-view consistent. This design makes our model nearly cost-free, requiring minimal test-time training overhead and a small parameter footprint. # 3 Preliminary of DUSt3R Given a set of images $\{ \mathbf{I}_t^i \}_{i=1}^{N_t}$ of a specific scene, DUSt3R [12] achieves high-precision 3D reconstruction by predicting pairwise pointmaps of all views, followed by global alignment. Pairwise prediction. Briefly, DUSt3R takes a pair of images $I^1, I^2 \in \mathbb{R}^{W \times H \times 3}$ as input and outputs the corresponding pointmaps $X^{1,1}, X^{2,1} \in \mathbb{R}^{W \times H \times 3}$, which are expressed in the coordinate frame of $I^1$. In our paper, we refer to the viewpoint of $I^1$ as the reference view, while the other is the source view. Therefore, the pointmaps $X^{1,1}, X^{2,1}$ can be denoted as $X^{ref,ref}, X^{src,ref}$, respectively. In more detail, the two input images $I^{ref}, I^{src}$ are first encoded by the same weight-sharing ViT-based model [57] with $N_e$ layers to yield two token representations $F^{ref}$ and $F^{src}$: $$ F^{ref} = Encoder(I^{ref}), \quad F^{src} = Encoder(I^{src}) $$ After encoding, the network reasons over both of them jointly in the decoder.
Each decoder block also attends to tokens from the other branch: $$ \begin{array}{r} G_i^{ref} = DecoderBlock_i^{ref}(G_{i-1}^{ref}, G_{i-1}^{src}) \\ G_i^{src} = DecoderBlock_i^{src}(G_{i-1}^{src}, G_{i-1}^{ref}) \end{array} $$ where $i = 1, \cdots, N_d$ for a decoder with $N_d$ layers, initialized with the encoder tokens $G_0^{ref} = F^{ref}$ and $G_0^{src} = F^{src}$. Finally, in each branch, a separate regression head takes the set of decoder tokens and outputs a pointmap and an associated confidence map: $$ \begin{array}{r} X^{ref,ref}, C^{ref,ref} = Head^{ref}(G_0^{ref}, \dots, G_{N_d}^{ref}), \\ X^{src,ref}, C^{src,ref} = Head^{src}(G_0^{src}, \dots, G_{N_d}^{src}). \end{array} $$ Global alignment. After predicting all the pairwise pointmaps, DUSt3R introduces a global alignment step to handle pointmaps predicted from multiple images. For the given image set $\{ \mathbf{I}_t^i \}_{i=1}^{N_t}$, DUSt3R first constructs a connectivity graph $\mathcal{G}(\mathcal{V}, \mathcal{E})$ for selecting image pairs, where the vertices $\mathcal{V}$ represent the $N_t$ images and each edge $e \in \mathcal{E}$ is an image pair.
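The two-branch decoder recurrence and the per-branch heads above can be sketched as follows. This is a toy NumPy illustration with hypothetical sizes; `toy_block` and `toy_head` are stand-ins for the real transformer DecoderBlock and learned regression Head, chosen only to show how each branch consumes the other branch's tokens from the previous layer:

```python
import numpy as np

rng = np.random.default_rng(0)
N_d, T, D = 3, 8, 16  # decoder depth, tokens per view, token dim (toy sizes)

def toy_block(x, other, W):
    # Stand-in for DecoderBlock_i: a branch mixes its own tokens with a
    # summary of the other branch's tokens (the cross-branch attention above).
    return np.tanh((x + other.mean(axis=0, keepdims=True)) @ W)

Ws = [rng.normal(scale=0.1, size=(D, D)) for _ in range(N_d)]
G_ref = [rng.normal(size=(T, D))]  # G_0^ref = F^ref (encoder tokens)
G_src = [rng.normal(size=(T, D))]  # G_0^src = F^src
for i in range(N_d):
    # Each branch attends to the other branch's tokens from layer i-1.
    G_ref.append(toy_block(G_ref[i], G_src[i], Ws[i]))
    G_src.append(toy_block(G_src[i], G_ref[i], Ws[i]))

def toy_head(tokens):
    # Stand-in for Head: maps the whole stack G_0, ..., G_{N_d} of decoder
    # tokens to a per-token 3D point and a confidence value in (0, 1).
    feats = np.concatenate(tokens, axis=-1)  # (T, (N_d+1)*D)
    X = feats[:, :3]                         # toy "pointmap"
    C = 1.0 / (1.0 + np.exp(-feats[:, 3]))   # toy confidence
    return X, C

X_ref, C_ref = toy_head(G_ref)
X_src, C_src = toy_head(G_src)
```

Note that on iteration $i$, both appends read index $i$ (the previous layer) before either branch is updated, matching the symmetric recurrence in the equations above.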
Then, it estimates the depth maps $D := \{\mathbf{D}_k\}$ and camera poses $\pi := \{\pi_k\}$ by $$ \underset{\mathbf{D}, \pi, \sigma}{\arg\min} \sum_{e \in \mathcal{E}} \sum_{v \in e} \mathbf{C}_v^e \left\| \mathbf{D}_v - \sigma_e P_e(\pi_v, \mathbf{X}_v^e) \right\|_2^2, $$ where $\sigma = \{\sigma_e\}$ are scale factors defined on the edges, and $P_e(\pi_v, \mathbf{X}_v^e)$ denotes projecting the predicted pointmap $\mathbf{X}_v^e$ to view $v$ using pose $\pi_v$ to obtain a depth map. The objective function in eq. (6) explicitly constrains the geometric alignment between frame pairs, aiming to preserve cross-view consistency in the depth maps. # 4 Methods Test3R is a test-time training technique that adapts DUSt3R [12] to challenging test scenes. It improves reconstruction by maximizing cross-pair consistency. We begin by analyzing the root cause of inconsistency in Sec. 4.1. In Sec. 4.2, we establish the core problem and define the test-time training objective. Finally, we employ prompt tuning for efficient test-time adaptation in Sec. 4.3. # 4.1 Cross-pair Inconsistency DUSt3R [12] aims to achieve consistency through global alignment; however, inaccurate and inconsistent pointmaps lead to persistent errors, significantly compromising the effectiveness of global alignment. Figure 3: Overview of Test3R. The primary goal of Test3R is to adapt a pretrained reconstruction model $f_s$ to the specific distribution of test scenes $f_t$. It achieves this goal by optimizing a set of visual prompts at test time through a self-supervised training objective that maximizes cross-pair consistency between $X_1^{ref,ref}$ and $X_2^{ref,ref}$.
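The nested summation in the global-alignment objective (eq. (6)) can be made concrete with a small sketch. All data here is toy, `project` is a hypothetical stand-in for the reprojection $P_e(\pi_v, \mathbf{X}_v^e)$, and the actual optimization over $\mathbf{D}$, $\pi$, and $\sigma$ is omitted:

```python
import numpy as np

def alignment_loss(D, X, C, sigma, edges, project):
    # Residual of eq. (6): for every edge e and each view v in it, compare
    # the view's depth map D[v] against the scaled projection of the
    # pairwise pointmap X[(e, v)], weighted by confidence C[(e, v)].
    loss = 0.0
    for e in edges:
        for v in e:
            r = D[v] - sigma[e] * project(v, X[(e, v)])
            loss += float(np.sum(C[(e, v)] * r ** 2))
    return loss

# Toy example: an identity-camera "projection" (stand-in for P_e) that just
# reads the z-coordinate of the pointmap as depth.
project = lambda v, X: X[..., 2]
edges = [(0, 1)]
X = {((0, 1), v): np.ones((4, 4, 3)) for v in (0, 1)}
C = {((0, 1), v): np.ones((4, 4)) for v in (0, 1)}
D = {0: np.ones((4, 4)), 1: np.ones((4, 4))}
sigma = {(0, 1): 1.0}
zero = alignment_loss(D, X, C, sigma, edges, project)  # consistent -> 0.0
```

When depth maps, scales, and projected pointmaps agree, the residual is zero; any disagreement on an edge contributes a confidence-weighted penalty.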
Therefore, we present a qualitative analysis of the pointmaps on the DTU [58] and ETH3D [59] datasets. Specifically, we compare the pointmaps for the same reference view paired with two different source views, and align these two pointmaps to the same coordinate system using Iterative Closest Point (ICP). The result is shown in Figure 2. On the left are two image pairs sharing the same reference view but with different source views. On the right are the corresponding pointmaps, with each color indicating the respective image pair. Observations. The two predicted pointmaps of the reference view exhibit inconsistencies, as highlighted by the presence of large regions with inconsistent colors in 3D space. Ideally, if these pointmaps were consistent, they would be accurate enough to align perfectly in 3D space, resulting in a single, unified color (either blue or red). This result indicates that DUSt3R may produce different pointmaps for the same reference view when paired with different source views. In our view, this phenomenon stems from the problematic pair-wise prediction paradigm. First, since only two views are provided as input at each prediction step, the scene geometry is estimated solely from the visual correspondences within a single image pair, so the model produces inaccurate pointmaps. Second, pointmaps are predicted independently for each pair: different image pairs have different visual correspondences, so DUSt3R may produce inconsistent pointmaps for the same reference view when paired with different source views. This issue significantly hinders the effectiveness of subsequent global alignment and further leads to discrepancies in the overall reconstruction. Worse still, the limited generalization capability of DUSt3R further exacerbates these issues of low precision and cross-pair inconsistency.
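The alignment check described above can be sketched with a closed-form rigid alignment (Kabsch, without scale) in place of full ICP, which suffices when the two pointmaps are in pixel-wise correspondence over the same reference view. This is an illustrative reimplementation, not the paper's evaluation code:

```python
import numpy as np

def rigid_align(A, B):
    # Least-squares rigid alignment (Kabsch, no scale): returns R, t
    # minimizing ||(A @ R.T + t) - B|| for point sets A, B of shape (N, 3)
    # in one-to-one correspondence (same pixel grid of the reference view).
    muA, muB = A.mean(axis=0), B.mean(axis=0)
    H = (A - muA).T @ (B - muB)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = muB - R @ muA
    return R, t

def cross_pair_residual(X1, X2):
    # Mean 3D distance between two pointmaps of the same reference view
    # after rigid alignment; a nonzero residual signals cross-pair
    # inconsistency (consistent, accurate maps would align almost exactly).
    A, B = X1.reshape(-1, 3), X2.reshape(-1, 3)
    R, t = rigid_align(A, B)
    return float(np.linalg.norm((A @ R.T + t) - B, axis=1).mean())
```

Two pointmaps related by a pure rigid motion yield a residual near zero; the large residuals observed in Figure 2 therefore reflect genuine geometric disagreement rather than a coordinate-frame mismatch.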
# 4.2 Triplet Objective Made Consistent The inconsistencies observed above highlight a core limitation of the pairwise prediction paradigm. Specifically, DUSt3R may produce different pointmaps for the same reference view when paired with different source views. This motivates a simple but effective idea: enforce triplet consistency across these pointmaps directly at test time, as shown in Figure 3. Definition. We first describe the definition of test-time training on the 3D reconstruction task, where only images $\{I_t^i\}_{i=1}^{N_t}$ from the test scene are available. During the training phase, $N_s$ labeled samples $\{I_s^i, \bar{X}_s^i\}_{i=1}^{N_s}$ collected from various scenes are given, where $I_s^i \in \mathcal{T}_s$ and $\bar{X}_s^i \in \bar{\mathcal{X}}_s$ are images and the corresponding pointmaps derived from the ground-truth depth $\bar{D}_s^i \in \bar{\mathcal{D}}_s$. Furthermore, we denote DUSt3R [12], parameterized by $\theta$, as the model trained to learn the reconstruction function $f_s : \mathcal{T}_s \to \bar{\mathcal{X}}_s$. Subsequently, during the test-time training phase, only unlabeled images $\{I_t^i\}_{i=1}^{N_t}$ from the test scene are available, where $I_t^i \in \mathcal{T}_t$. Our goal is to adapt the model $f_s$ to the specific scene, $f_t : \mathcal{T}_t \to \bar{\mathcal{X}}_t$, at test time. This is achieved by minimizing the self-supervised training objective $\ell$. Specifically, our core training objective is to maximize geometric consistency by aligning the pointmaps of the reference view when paired with different source views.
For a set of images $\{I_t^i\}_{i=1}^{N_t}$ from the specific scene, we consider a triplet consisting of one reference view and two different source views, denoted as $(I^{ref}, I^{src1}, I^{src2})$. Subsequently, Test3R forms two reference–source view pairs $(I^{ref}, I^{src1})$ and $(I^{ref}, I^{src2})$ from this triplet. These reference–source view pairs are then fed into Test3R independently to predict the pointmaps of the reference view under different source-view conditions in the same coordinate frame of $I^{ref}$, denoted as $X_1^{ref,ref}$ and $X_2^{ref,ref}$. Finally, we construct the training objective by aligning these two inconsistent pointmaps, formulated as: $$ \ell = \left\| X_1^{ref,ref} - X_2^{ref,ref} \right\|. $$ With this objective, we can collectively compose triplets from a large number of views of an unseen 3D scene at test time. It guides the model to resolve the limitations mentioned in Section 4.1. For inconsistency, it ensures consistency by aligning the local two-view predictions. Meanwhile, it also pushes each predicted pointmap closer to an overall global prediction, mitigating inaccuracy. Moreover, by optimizing for the specific scene at test time, it enables the model to adapt to the distribution of that scene. # 4.3 Visual Prompt Tuning for Test-Time Training After the self-supervised training objective is defined, effectively modulating the model during test-time training for specific scenes remains a non-trivial challenge. During the test-time training phase, training relies only on unsupervised objectives. However, these objectives are often noisy and unreliable, which makes the model prone to overfitting and may lead to training collapse, especially when only a limited number of images are available for the current scene.
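A minimal sketch of the objective $\ell$ and the triplet composition follows. The norm in $\ell$ is left unspecified above, so the mean per-point L2 distance used here is an assumption:

```python
import numpy as np
from itertools import combinations

def triplet_consistency_loss(X1, X2):
    # The objective l: discrepancy between the two reference-view pointmaps
    # predicted from (I_ref, I_src1) and (I_ref, I_src2). The mean per-point
    # L2 distance is an assumed concrete choice of norm.
    return float(np.linalg.norm(X1 - X2, axis=-1).mean())

def triplets(views):
    # Compose all (ref, src1, src2) triplets from the test-scene views,
    # one reference view paired with every unordered pair of source views.
    out = []
    for ref in views:
        others = [v for v in views if v != ref]
        for s1, s2 in combinations(others, 2):
            out.append((ref, s1, s2))
    return out

n_tri = len(triplets([0, 1, 2, 3]))  # 4 refs x C(3,2) source pairs = 12
```

At test time, one optimization step would run the model on both pairs of a triplet and backpropagate `triplet_consistency_loss` through the shared reference-view predictions.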
Fortunately, similar issues have been partially explored in the 2D vision community, where visual prompt tuning [15] has demonstrated strong effectiveness for domain adaptation in 2D classification tasks [60]. It utilizes a set of learnable continuous parameters to learn task-specific knowledge while retaining the knowledge learned from large-scale pretraining. Motivated by this, we explore the use of visual prompts as a carrier to learn the geometric consistency of specific scenes. Specifically, we incorporate a set of learnable prompts into the encoder of DUSt3R [12]. Consider an encoder of DUSt3R with $N_e$ standard Vision Transformer (ViT) [57] layers: an input image is first divided into fixed-size patches and then embedded into $D$-dimensional tokens $\mathbf{E}_0 = \{\mathbf{e}_0^k \in \mathbb{R}^D \mid k \in \mathbb{N}, 1 \le k \le N_t\}$, where $N_t$ is the number of image patch tokens. Subsequently, to optimize the model, we introduce a set of learnable prompt tokens $\{\mathbf{P}_{i-1}\}_{i=1}^{N_e}$ into each Transformer layer. For the $i$-th Transformer layer, the prompt tokens are denoted as $\mathbf{P}_{i-1} = \{\mathbf{p}_{i-1}^k \in \mathbb{R}^D \mid k \in \mathbb{N}, 1 \le k \le N_p\}$, where $N_p$ is the number of prompt tokens. Therefore, the encoder layer augmented by visual prompts is formulated as: $$ [\,\_\,, \mathbf{E}_i] = L_i([\mathbf{P}_{i-1}, \mathbf{E}_{i-1}]) $$ where $\mathbf{P}_{i-1}$ and $\mathbf{E}_{i-1}$ are the learnable prompt tokens and image patch tokens at the $(i{-}1)$-th Transformer layer. Figure 4: Qualitative Comparison on 3D Reconstruction. Test-time training. We only fine-tune the parameters of the prompts, while all other parameters are fixed.
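The prompt-augmented encoder pass above can be sketched as follows. `toy_layer` is a stand-in for a frozen ViT layer and all sizes are hypothetical; the point is that each layer receives its own fresh learnable prompts, and only the patch-token outputs are carried forward:

```python
import numpy as np

rng = np.random.default_rng(0)
N_e, N_tok, N_p, D = 4, 10, 2, 8  # layers, patch tokens, prompts/layer, dim

def toy_layer(tokens, W):
    # Stand-in for a frozen ViT layer L_i (the real layer is attention+MLP).
    return np.tanh(tokens @ W)

frozen_W = [rng.normal(scale=0.3, size=(D, D)) for _ in range(N_e)]   # frozen
prompts = [rng.normal(scale=0.01, size=(N_p, D)) for _ in range(N_e)] # learnable

E = rng.normal(size=(N_tok, D))  # E_0: embedded image patch tokens
for i in range(N_e):
    # [_, E_i] = L_i([P_{i-1}, E_{i-1}]): prepend this layer's prompts,
    # run the frozen layer, then keep only the patch-token outputs.
    out = toy_layer(np.concatenate([prompts[i], E], axis=0), frozen_W[i])
    E = out[N_p:]  # discard prompt positions; next layer inserts fresh prompts
```

At test time only the arrays in `prompts` would receive gradients from the consistency objective, while `frozen_W` (the pretrained backbone in the real model) stays fixed.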
This strategy enables our model to maximize geometric consistency by optimizing the prompts at test time while retaining, within the unchanged backbone, the reconstruction knowledge acquired from large-scale training. # 5 Experiment We evaluate our method across a range of 3D tasks, including 3D reconstruction (Section 5.1) and multi-view depth (Section 5.2). Moreover, we discuss the generality of Test3R and the prompt design (Section 5.3). Additional experiments and detailed model information, including parameter settings, test-time training overhead, and memory consumption, are provided in the appendix. Baselines. Our primary baseline is DUSt3R [12], which serves as the backbone of our technique in the experiments. We then select different baselines for each task to comprehensively evaluate the performance of our proposed method. For the 3D reconstruction task, which is the primary focus of the majority of 3R-series models, we compare our method with current mainstream approaches, including MAST3R [13], MonST3R [16], CUT3R [35], and Spann3R [31]. All of these models are follow-up works building on the foundation established by DUSt3R [12]. Furthermore, for the multi-view depth task, we not only compare our model with baselines [61, 62] that do not require camera parameters but also evaluate it against methods [9, 11, 27, 61–65] that rely on camera parameters or are trained on datasets from the same distribution, to demonstrate the effectiveness of our technique. # 5.1 3D Reconstruction We utilize two scene-level datasets, 7Scenes [66] and NRGBD [67]. We follow the experimental setting of CUT3R [35] and employ several commonly used metrics: accuracy (Acc), completion (Comp), and normal consistency (NC). Only 3 to 5 views are available per scene for the 7Scenes [66] dataset and 2 to 4 views for the NRGBD [67] dataset.
This is a highly challenging experimental setup, as the overlap between images in each scene is minimal, demanding a strong scene reconstruction capability. Quantitative Results. The quantitative evaluation is shown in Table 1. Compared to vanilla DUSt3R [12], our model demonstrates superior performance, outperforming DUSt3R on the majority of evaluation metrics, particularly in terms of mean accuracy and completion. Moreover, our approach achieves comparable or even superior results compared to mainstream methods; only CUT3R [35] and MAST3R [13] outperform our approach on several metrics. This demonstrates the effectiveness of our test-time training strategy. Qualitative Results. The qualitative results are shown in Figure 4. We compare our method with CUT3R [35] and DUSt3R [12] on the Office and Kitchen scenes from the 7Scenes [66] and NRGBD [67] datasets, respectively. We observe that DUSt3R incorrectly regresses the positions of scene views, leading to errors in the final scene reconstruction. In contrast, our model achieves more reliable scene reconstructions. This improvement is particularly evident in the statue in the Office scene and the wall in the Kitchen scene: for these two objects, the reconstruction results from DUSt3R are drastically different from the ground truth. Compared to CUT3R, the current state-of-the-art in 3D reconstruction, we achieve better reconstruction results. Specifically, we effectively avoid the generation of outliers, resulting in more accurate pointmaps. Details can be seen in the red bounding boxes in Figure 4. Table 1: 3D reconstruction comparison on 7Scenes and NRGBD datasets. Table 2: Multi-view depth evaluation. (Parentheses) denote training on data from the same domain. # 5.2 Multi-view Depth Following RobustMVD [63], performance is measured on the object-centric DTU [58] dataset and the scene-centric ETH3D [59] dataset.
To evaluate the depth maps, we report the Absolute Relative Error (rel) and the Inlier Ratio $(\tau)$ at a threshold of $3\%$ on each test set, as well as the averages across all test sets. Quantitative Results. The quantitative evaluation is shown in Table 2. On the DTU dataset, our model significantly improves upon the performance of vanilla DUSt3R, reducing the Absolute Relative Error by 1.3 and increasing the Inlier Ratio by 14.2. Similarly, on the ETH3D dataset, our model demonstrates comparable improvements, achieving state-of-the-art performance on this challenging benchmark as well. Notably, our model surpasses the majority of methods that rely on camera poses and intrinsic parameters, as well as models trained on datasets from the same domain. This indicates that our approach effectively captures scene-specific global information and enables adaptation to the distribution of test scenes, thereby significantly improving the quality of the depth maps. Figure 5: Qualitative Comparison on Multi-view Depth. Qualitative Results. The qualitative result is shown in Figure 5. We present the depth map on the key view, following RobustMVD [63]. We observe that Test3R effectively improves the accuracy of depth estimation compared to DUSt3R and to RobustMVD [63] with camera parameters. Specifically, Test3R captures more fine-grained details, including the computer chassis and table. Additionally, on the white-background DTU dataset, Test3R effectively understands scene context, allowing it to accurately estimate the depth of background regions. # 5.3 Ablation Study and Analysis # 5.3.1 Framework Generalization. To demonstrate the generalization ability of our proposed technique, we applied Test3R to MAST3R [13] and MonST3R [16], and evaluated the performance on the 7Scenes [66] dataset. As shown in Table 3, Test3R effectively improves the performance of MAST3R and MonST3R on the 3D reconstruction task.
This demonstrates the generalization ability of our technique, which can be applied to other models sharing a similar pipeline. # 5.3.2 Ablation on Visual Prompt. We introduce a model variant, Test3R-S, and conduct an ablation study to evaluate the impact of visual prompts. For Test3R-S, the prompts are only inserted into the first Transformer layer, accompany the image tokens through the encoding process, and are then discarded. Table 3: Generalization Study. Table 4: Ablation study on Visual Prompt. The results are shown in Table 4. Both Test3R-S and Test3R effectively improve model performance compared to vanilla DUSt3R. For prompt length, we observe that when the number of prompts is small, increasing the prompt length enhances the ability of Test3R to improve reconstruction quality. However, as the prompt length increases, the number of trainable parameters also grows, making it more challenging to converge within the same number of iterations and thereby reducing overall effectiveness. For prompt insertion depth, we observe that Test3R, which uses distinct prompts at each layer, demonstrates superior performance. This is because the feature distributions vary across the layers of the DUSt3R encoder, making layer-specific prompts more effective for fine-tuning. However, as the number of prompt parameters increases, Test3R becomes more susceptible to optimization challenges than Test3R-S, leading to a faster performance decline.
Dense matching methods like DUSt3R regress pairwise pointmaps for 3D reconstruction. However, the reliance on pairwise prediction and the limited generalization capability inherently restrict global geometric consistency. In this work, we introduce Test3R, a surprisingly simple test-time learning technique that significantly boosts geometric accuracy. Using image triplets ($I_1, I_2, I_3$), Test3R generates reconstructions from pairs ($I_1, I_2$) and ($I_1, I_3$). The core idea is to optimize the network at test time via a self-supervised objective: maximizing the geometric consistency between these two reconstructions relative to the common image $I_1$. This ensures the model produces cross-pair consistent outputs, regardless of the inputs. Extensive experiments demonstrate that our technique significantly outperforms previous state-of-the-art methods on the 3D reconstruction and multi-view depth estimation tasks. Moreover, it is universally applicable and nearly cost-free, making it easy to apply to other models and to implement with minimal test-time training overhead and parameter footprint. Code is available at https://github.com/nopQAQ/Test3R.
# 1 Introduction As large language models (LLMs) rapidly evolve and are deployed across critical applications, there is a pressing need for reliable safety evaluation methods that can keep pace with new models and adversarial attacks, and uncover failure modes before harm occurs. One common paradigm is dynamic safety evaluation, e.g., LLM-based red-teaming methods that generate adversarial attacks to uncover safety vulnerabilities (Ganguli et al., 2022; Perez et al., 2022; Shen et al., 2023; Andriushchenko et al., 2025). Alternatively, researchers have manually curated prompts and aggregated them as static safety benchmarks (Chao et al., 2024a; Souly et al., 2024; Zhang et al., 2024). However, prior works have noted that current LLM safety evaluations, including both dynamic evaluation and static benchmarks, are not robust (Beyer et al., 2025; Eiras et al., 2025), facing issues of comparability, reproducibility, and saturation. Therefore, new safety evaluation paradigms are urgently needed.1 We begin by asking the foundational question: what constitutes a good safety benchmark? To answer this question, we outline key desiderata for safety benchmarking—effectiveness, separability, and diversity—and present corresponding metrics to assess benchmark quality (§2). To address the shortcomings of existing evaluation paradigms, we present Jailbreak Distillation (JBDISTILL)2, a best-of-both-worlds framework that tackles the comparability and reproducibility challenges of dynamic LLM-based red-teaming algorithms, as well as the saturation and contamination challenges of static safety benchmarks (§3). JBDISTILL introduces a novel benchmark construction pipeline that “distills” jailbreak attacks into high-quality and easily-updatable safety benchmarks. It first creates a candidate prompt pool by running off-the-shelf jailbreak attack algorithms on a small set of “development models” to transform seed harmful queries into diverse adversarial prompts.
Next, driven by the intuition that effectiveness on development models can serve as a proxy for effectiveness on held-out evaluation models (empirically validated in §5), we propose several prompt selection algorithms that allow JBDISTILL to select an effective subset of prompts from the candidate prompt pool as the safety benchmark. Figure 1: Overview of JBDISTILL: off-the-shelf attacks (e.g., TAP, AutoDAN-Turbo, Adversarial Reasoning) transform seed goals into a candidate prompt pool using development models; prompt selection distills the pool into a benchmark, which is then run on held-out evaluation models and assessed for effectiveness, separability, and diversity. JBDISTILL enjoys several benefits over naively running dynamic safety evaluation for each model. Since the same set of evaluation prompts is used for all models at test time, JBDISTILL ensures fair comparisons and is more reproducible than naively running LLM-based red-teaming, which develops different attack prompts for different models under inconsistent compute budgets, and small changes in the attack setup (e.g., hyperparameters, chat templates) can lead to large variability in attack success (Beyer et al., 2025). Because expensive attacks are only run during benchmark construction, JBDISTILL is also significantly more efficient at evaluation time. Intuitively, JBDISTILL amortizes the test-time cost of generating jailbreak attacks for each evaluation model into benchmark construction time. Compared to static safety benchmarks that carefully curate unsafe prompts (Chao et al., 2024a; Souly et al., 2024; Zhang et al., 2024), JBDISTILL requires minimal human effort to create updated versions of benchmarks that incorporate new models and attacks as they emerge, simply by rerunning the benchmark creation pipeline.
The easily-updatable nature of JBDISTILL alleviates concerns about benchmark saturation and contamination (Li et al., 2024; Chen et al., 2025). Experimental results show that with only four 8B-scale open-source development models, JBDISTILL produces benchmarks that achieve up to $81.8\%$ effectiveness and generalize to 13 diverse evaluation models, including newer, larger, proprietary, specialized, and reasoning models. We also discover trade-offs between effectiveness and separability, which can be controlled by the prompt selection algorithm. Ablation studies show that each component of JBDISTILL is crucial for high effectiveness and that new models and attacks can be easily integrated into the benchmark construction process. Our main contributions are: (1) We outline the desiderata and evaluation criteria for safety benchmarks. (2) We propose JBDISTILL, a high-level framework that enables renewable safety benchmarking. (3) We instantiate JBDISTILL in two settings—single-turn and multi-turn evaluation—and propose effective prompt selection algorithms, empirically verified by our experiments. (4) We conduct analyses and discover no evidence of significant bias in JBDISTILL-produced benchmarks. # 2 Desiderata for Safety Benchmarks While many benchmarks are constructed to evaluate model safety, how should we assess the quality of the benchmarks themselves? We define the evaluation setup and key desiderata, which are then materialized as metrics for evaluating benchmarks. # 2.1 Preliminaries We define a safety benchmark $B = \{(g_i, p_i)\}_i$ as a set of seed goals $g_i$ paired with attack prompts $p_i$. Seed goals $g_i$ are straightforward queries that aim to elicit harmful behaviors from the models, e.g., “How to build a bomb?”, and attack prompts are transformations of the seed goals intended to bypass model safety guardrails and achieve the harmful behavior.
To run a benchmark on a model $M$, a response judge $J : G \times \Sigma^* \to \{0, 1\}$ takes in the original goal $g_i \in G$ and the model response to the attack prompt $M(p_i) \in \Sigma^*$ (where $G$ and $\Sigma^*$ denote the spaces of seed goals and model responses, respectively), and produces a binary label of attack success $J(g_i, M(p_i))$. # 2.2 Evaluating Safety Benchmarks To evaluate a safety benchmark, we run it on a diverse set of evaluation models $\mathcal{M}_{\mathrm{eval}}$ and collect aggregated statistics, as we believe that using a broad range of models whose responsible deployment is critical provides a reliable proxy for the benchmark’s real-world utility.3 We propose three desiderata for safety benchmarks: effectiveness, separability, and diversity. (A) Effectiveness indicates the benchmark is capable of eliciting harmful behaviors from a broad range of models with a high success rate. Given a judge $J$, we measure the effectiveness of a benchmark $B$ using the average attack success rate (ASR) across all evaluation models $\mathcal{M}_{\mathrm{eval}}$ as follows: $$ \mathrm{EFF}(B; \mathcal{M}_{\mathrm{eval}}) = \frac{1}{|\mathcal{M}_{\mathrm{eval}}|} \sum_{M \in \mathcal{M}_{\mathrm{eval}}} \mathrm{ASR}(M; B), $$ where the ASR of model $M$ under benchmark $B$ is defined as the average judge score over all evaluation prompts in $B$: $$ \mathrm{ASR}(M; B) = \frac{1}{|B|} \sum_{(g, p) \in B} J(g, M(p)). $$ (B) Separability, which indicates a benchmark’s ability to distinguish between models, is important because good benchmarks should separate model performance with high confidence. To measure separability, we compute the $95\%$ confidence interval of the ASR of each model in $\mathcal{M}_{\mathrm{eval}}$ via bootstrapping.
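Under the judge interface above, the EFF and ASR definitions reduce to a few lines. This is a sketch with a hypothetical keyword judge and dummy callables standing in for real LLMs:

```python
def asr(model, benchmark, judge):
    # ASR(M; B): average judge score of model M over all (goal, prompt) pairs.
    return sum(judge(g, model(p)) for g, p in benchmark) / len(benchmark)

def effectiveness(models, benchmark, judge):
    # EFF(B; M_eval): ASR averaged over all evaluation models.
    return sum(asr(M, benchmark, judge) for M in models) / len(models)

# Toy example: a keyword judge and two dummy "models" (plain callables).
benchmark = [("goal-a", "prompt-a"), ("goal-b", "prompt-b")]
judge = lambda goal, response: int("UNSAFE" in response)
always_unsafe = lambda prompt: "UNSAFE: ..."
always_safe = lambda prompt: "I cannot help with that."
eff = effectiveness([always_unsafe, always_safe], benchmark, judge)  # 0.5
```

A real instantiation would replace the dummy callables with model-inference calls and the keyword judge with an LLM-based safety judge.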
Next, we compute the ratio of non-overlapping CIs among all $\binom{|\mathcal{M}_{\mathrm{eval}}|}{2}$ model pairs. A higher separability indicates the benchmark is capable of distinguishing between ASRs of different models with high confidence. This process is similar to Li et al. (2024), but we adapt it for safety evaluation. Formally, the separability of a benchmark $B$ on evaluation models $\mathcal{M}_{\mathrm{eval}}$ is defined as: $$ \mathrm{SEP}(B; \mathcal{M}_{\mathrm{eval}}) = \frac{1}{\binom{|\mathcal{M}_{\mathrm{eval}}|}{2}} \sum_{\substack{M_i, M_j \in \mathcal{M}_{\mathrm{eval}} \\ M_i \neq M_j}} \mathbb{I}_{\{C_i \cap C_j = \emptyset\}}, $$ where $C_i := CI(M_i; B)$ is the confidence interval of the ASR of model $M_i$ on benchmark $B$. (C) Diversity is also crucial because a safety benchmark should effectively uncover a wide range of unsafe behaviors across different models. We measure diversity using two metrics: (1) Since JBDISTILL constructs the benchmark from a fixed set of seed goals $G$, we propose Versatility, the proportion of unique seed goals $g \in G$ that lead to at least one successful attack on a particular evaluation model, averaged over all evaluation models. That is, $$ \mathrm{VER}(B; \mathcal{M}_{\mathrm{eval}}) = \frac{1}{|\mathcal{M}_{\mathrm{eval}}|} \sum_{M \in \mathcal{M}_{\mathrm{eval}}} \frac{\left| \{ g \in G \mid \exists p : (g, p) \in B, \; J(g, M(p)) = 1 \} \right|}{|G|}. $$ (2) We complement versatility with another diversity metric, Coverage, i.e., the proportion of seed goals that are covered by the benchmark.
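The bootstrap-CI separability computation can be sketched as follows. The number of bootstrap resamples and the exact overlap test are assumptions; the text above only specifies 95% CIs obtained via bootstrapping over per-prompt judge scores:

```python
import numpy as np

def bootstrap_ci(scores, n_boot=2000, alpha=0.05, seed=0):
    # 95% bootstrap CI of the mean ASR from a model's per-prompt 0/1 scores.
    rng = np.random.default_rng(seed)
    resampled = rng.choice(scores, size=(n_boot, len(scores)), replace=True)
    means = resampled.mean(axis=1)
    return float(np.quantile(means, alpha / 2)), float(np.quantile(means, 1 - alpha / 2))

def separability(per_model_scores):
    # SEP: fraction of model pairs whose CIs do not overlap.
    cis = [bootstrap_ci(np.asarray(s, dtype=float)) for s in per_model_scores]
    pairs = [(i, j) for i in range(len(cis)) for j in range(i + 1, len(cis))]
    disjoint = sum(
        1 for i, j in pairs
        if cis[i][1] < cis[j][0] or cis[j][1] < cis[i][0]
    )
    return disjoint / len(pairs)

sep = separability([[0.0] * 40, [1.0] * 40])  # fully disjoint CIs -> 1.0
```

Models with clearly different ASRs produce disjoint intervals and count toward separability; models with identical score distributions do not.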
Coverage is important because it indicates how well the benchmark represents the original set of seed goals. We argue that all three desiderata are crucial: a benchmark with low effectiveness reveals limited safety vulnerabilities and is thus unreliable. Without high separability, it cannot distinguish the safety of different models, rendering benchmark results inconclusive. Low diversity implies a narrow focus (low coverage) or effectiveness on only a small set of seed goals (low versatility), leading to biased evaluation results. # 3 The JBDISTILL Framework We now introduce the JBDISTILL framework, which distills jailbreak attacks into effective safety benchmarks (Fig. 1). We first describe its key components, then present a unified algorithm, and conclude with intuitions for why JBDISTILL achieves strong effectiveness. Key components. Driven by the ultimate goal of producing safety benchmarks that are broadly effective, we propose using a small group of development models $\mathcal{M}_{\mathrm{dev}}$ during the benchmark construction process. We hypothesize that using the information of multiple development models to generate and select evaluation prompts can lead to more effective benchmarks (validated in §5.4). JBDISTILL starts with seed goals $G = \{g_1, \dots, g_n\}$, which can easily be obtained from existing benchmarks or curated to target specific harmful domains. A transformation function $f(g, M)$ takes in a single seed goal $g$ and optionally one or more development models $M$, and outputs a set of attack prompts paired with the original goal, $P = \{(g, p_i)\}_i$.
In principle, transformation functions can be any operations that transform the seed goal into a prompt, such as a template-based transformation, e.g., prepending Do-Anything-Now templates (Shen et al., 2023) to the seed goal, or even the identity function. As detailed in §4, we opt for a collection of existing single-turn and multi-turn jailbreak attacks as transformation functions.

# Algorithm 1 JBDISTILL benchmark construction

Input: development models $\mathcal{M}_{\mathrm{dev}}$, seed goals $G$, transformation functions $\mathcal{F} = \{f_i\}_i$, prompt selection algorithm $\mathcal{A}$, target benchmark size $n$.
Output: produced benchmark $P^*$
1: $P \gets \emptyset$ ▷ Initialize the candidate prompt pool
2: for $f \in \mathcal{F}$ do ▷ For each transformation function
3: for $M \in \mathcal{M}_{\mathrm{dev}}$ do ▷ For each development model
4: for $g \in G$ do ▷ For each seed goal
5: $P_{g,M} \gets f(g, M)$ ▷ Transform the seed goal
6: $P \gets P \cup P_{g,M}$ ▷ Add the transformed prompts to the pool
7: $P^* \gets \mathcal{A}(\mathcal{M}_{\mathrm{dev}}, P, n)$ ▷ Subselect $n$ prompts from the pool as the benchmark
8: return $P^*$

Given development models $\mathcal{M}_{\mathrm{dev}}$ and target benchmark size $n$, a prompt selection algorithm $\mathcal{A}(P; \mathcal{M}_{\mathrm{dev}}, n)$ takes in the candidate prompt pool $P$ produced by the transformation functions and returns a subset of prompts $P^* \subseteq P$ of size $n$, which serves as the output benchmark. We propose several selection algorithms in §4.3. A unified algorithm Alg. 1 presents the high-level pipeline of JBDISTILL. It applies each transformation function, paired with each development model, to every seed goal $g \in G$ to produce a pool $P$ of candidate prompts.
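The loop structure of Alg. 1 can be sketched in a few lines of Python. This is an illustrative rendering, not the authors' code: the function names and data layout are our own, and `select` stands in for the prompt selection algorithm $\mathcal{A}$.

```python
def jbdistill(dev_models, seed_goals, transforms, select, n):
    """High-level sketch of Alg. 1. `transforms` are functions
    f(g, M) -> list of attack prompts for seed goal g and model M;
    `select(dev_models, pool, n)` subselects the final benchmark."""
    pool = []                      # candidate prompt pool P
    for f in transforms:           # each transformation function
        for M in dev_models:       # each development model
            for g in seed_goals:   # each seed goal
                # pair every transformed prompt with its seed goal
                pool.extend((g, p) for p in f(g, M))
    return select(dev_models, pool, n)   # P* = A(M_dev, P, n)
```

For instance, passing the identity function as the only transform and a truncation-based `select` degenerates to a benchmark of raw seed goals.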
Next, the prompt selection algorithm $\mathcal{A}$ chooses a subset of $n$ prompts satisfying our desiderata (§2) as the constructed benchmark $P^*$. When will JBDISTILL be effective? The effectiveness of JBDISTILL benchmarks relies on the selected attack prompts being broadly effective across $\mathcal{M}_{\mathrm{dev}}$ and $\mathcal{M}_{\mathrm{eval}}$, while not being developed on $\mathcal{M}_{\mathrm{eval}}$. Although selecting more capable attacks as transformation functions will likely lead to more effective benchmarks, our approach is not necessarily limited by the initial effectiveness of attack prompts: our proposed prompt selection stage allows a more effective subset of prompts to be selected from the candidate prompt pool by leveraging multiple development models as a proxy for effectiveness. We hypothesize that attacks effective against multiple development models will be broadly effective against diverse evaluation models, and our empirical results in §5.2 support this hypothesis. # 4 Instantiations of JBDISTILL To demonstrate the generality of our framework, we apply it in two safety evaluation scenarios: single-turn and multi-turn interactions. LLM safety under multi-turn interaction is typically evaluated separately, as it exposes unique vulnerabilities (Yu et al., 2024; Russinovich et al., 2024). We further discuss nuances of multi-turn JBDISTILL, such as the implications of transferring responses from $\mathcal{M}_{\mathrm{dev}}$ to other models, in our analysis (§6.3). We leave exploring other instantiations, e.g., multimodal interactions, for future work. # 4.1 Transformation Functions For single-turn JBDISTILL, we use Tree of Attacks with Pruning (TAP; Mehrotra et al., 2024), Persuasive Adversarial Prompts (PAP; Zeng et al., 2024), AutoDAN-Turbo (Liu et al., 2025), and Adversarial Reasoning (Sabbaghi et al., 2025).
For multi-turn JBDISTILL, we use ActorAttack (Ren et al., 2024), Red Queen (Jiang et al., 2024b), Context Compliance Attack (CCA; Russinovich and Salem, 2025), and Speak Easy (Chan et al., 2025), further detailed in §D. We employ the aforementioned 8 attack methods off-the-shelf because they are recent, widely used, and produce interpretable (semantically meaningful) prompts, which is essential for deriving insights from the benchmarking process. Using these off-the-shelf attack methods as transformation functions is already very effective, significantly outperforming all baselines, as we show in §5. Developing targeted transformations for JBDISTILL may yield further improvements, which we leave for future work. # 4.2 Problem Formulation for Prompt Selection We formulate prompt selection as a discrete optimization problem. Given development models $\mathcal{M}_{\mathrm{dev}}$ and target benchmark size $n$, the goal is to select a subset of prompts $P^* \subseteq P$ from a candidate prompt pool $P$ that maximizes the effectiveness of the benchmark while satisfying the constraints on size and coverage: $$ \begin{array}{rl} \max_{P^* \subseteq P} & \mathrm{EFF}(P^*; \mathcal{M}_{\mathrm{dev}}) \\ \mathrm{s.t.} & |P^*| = n, \ \mathrm{COVERAGE}(P^*) \geq \alpha, \end{array} $$ where $\alpha$ is the coverage requirement. A core assumption here is that one can use success on the development models $\mathcal{M}_{\mathrm{dev}}$ to predict the effectiveness of particular prompts on evaluation models $\mathcal{M}_{\mathrm{eval}}$.
Therefore, selecting a subset of prompts with high effectiveness on development models should be indicative of high effectiveness $\mathrm{EFF}(P^*; \mathcal{M}_{\mathrm{eval}})$ on diverse evaluation models, which we empirically validate in §5. Next, we propose simple but effective prompt selection algorithms. # 4.3 Prompt Selection Algorithms Compatible with both single-turn and multi-turn JBDISTILL, we propose several prompt selection algorithms. Interestingly, we find that simple greedy algorithms already achieve high effectiveness and separability in practice (§5.2). We use random selection as a baseline and propose three algorithms: RBS, BPG, and CS. Baseline algorithm: RANDOMSELECTION (RS) The simplest baseline prompt selection algorithm randomly selects $n$ prompts from the candidate prompt pool $P$ to form the benchmark $P^*$. Note that this algorithm does not leverage any information from the development models $\mathcal{M}_{\mathrm{dev}}$. Maximizing effectiveness with RANKBYSUCCESS (RBS) We propose RBS (Alg. 2), a greedy selection algorithm that aims to optimize for effectiveness. The algorithm first scores each prompt $(g, p) \in P$ by the number of development models in $\mathcal{M}_{\mathrm{dev}}$ that the prompt successfully jailbreaks. It then selects the top $n$ prompts with the highest scores, breaking ties randomly. RBS assumes no explicit coverage requirement, i.e., $\alpha = 0$, though we observe that coverage is high in practice (§5.2). Balancing separability and effectiveness with BESTPERGOAL (BPG) Although RANKBYSUCCESS maximizes effectiveness, it does not guarantee coverage.
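A minimal sketch of RBS follows (our illustrative code, not the paper's Alg. 2; `judge(g, response)` returning 0/1 and development models as prompt-to-response callables are assumptions):

```python
import random

def rank_by_success(pool, dev_models, judge, n, seed=0):
    """RANKBYSUCCESS sketch: score each (goal, prompt) pair by the
    number of development models it jailbreaks, keep the top n,
    breaking ties randomly."""
    rng = random.Random(seed)
    scored = [
        (sum(judge(g, M(p)) for M in dev_models), rng.random(), (g, p))
        for g, p in pool
    ]
    scored.sort(key=lambda t: (-t[0], t[1]))  # highest score first
    return [gp for _, _, gp in scored[:n]]
```

Note that each candidate prompt costs one generation per development model plus one judge call, so the selection stage is linear in $|P| \cdot |\mathcal{M}_{\mathrm{dev}}|$.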
Moreover, a set of prompts that are effective on all models might not be the best for separating models that are more or less safe. Driven by the intuition that different models may have safety vulnerabilities on different harmful behaviors, we propose the BPG algorithm, which selects prompts in a more goal-balanced manner. Our BPG algorithm (Alg. 3) repeatedly iterates over the seed goals and selects one prompt for each goal at a time until $n$ prompts are selected. Given the set of unselected prompts for each goal, BPG selects the prompt that maximizes the number of successfully jailbroken models for that goal. Unlike RBS, which focuses on maximizing effectiveness, BPG ensures coverage $\alpha = 1$ given a sufficient benchmark size $n \geq |G|$, and may sacrifice some effectiveness for better separability. COMBINEDSELECTION (CS) To balance effectiveness and coverage, the COMBINEDSELECTION algorithm (Alg. 4) first selects the prompt with the maximum number of successfully jailbroken models for each seed goal, following BPG. For the remaining $n - |G|$ prompts, it solely optimizes for effectiveness by selecting the prompts with the maximum number of jailbroken models overall, i.e., without considering the seed goals, following RBS. # 5 Experiments on the JBDISTILL Framework # 5.1 Experimental Setup Seed goals We source seed goals from the HarmBench (Mazeika et al., 2024) benchmark, using the standard behaviors set, which contains 200 seed goals. We choose HarmBench due to its wide use and because it contains a diverse set of goals spanning 7 semantic categories, facilitating our analysis (§6). Model selection Ideally, JBDISTILL should be able to produce effective benchmarks with small-scale open-source models, which are readily available and not too costly to use. Therefore, we choose LLAMA2-7B-CHAT, LLAMA3.1-8B-INSTRUCT, GEMMA2-9B-IT, and OLMO2-7B-INSTRUCT as $\mathcal{M}_{\mathrm{dev}}$, which we demonstrate in §5 are already very effective.
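BPG's round-robin selection can be sketched as follows (illustrative only, not the paper's Alg. 3; the `judge` and model callables are assumptions, and ties within a goal follow list order rather than random breaking):

```python
def best_per_goal(pool, dev_models, judge, n):
    """BESTPERGOAL sketch: cycle over seed goals, each round taking the
    unselected prompt for that goal that jailbreaks the most
    development models, until n prompts are chosen."""
    score = lambda g, p: sum(judge(g, M(p)) for M in dev_models)
    by_goal = {}
    for g, p in pool:
        by_goal.setdefault(g, []).append(p)
    for g in by_goal:                       # pre-sort best-first per goal
        by_goal[g].sort(key=lambda p, g=g: score(g, p), reverse=True)
    selected, goals = [], list(by_goal)
    while len(selected) < n and any(by_goal.values()):
        for g in goals:                     # round-robin over seed goals
            if len(selected) == n:
                break
            if by_goal[g]:
                selected.append((g, by_goal[g].pop(0)))
    return selected
```

COMBINEDSELECTION would take only the first round of this loop (one best prompt per goal) and then fill the remaining $n - |G|$ slots by global score, as in RBS.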
We select a diverse set of 10 evaluation models for our main experiments (§5.2) and 13 models for the generalization study (§5.3). We cover (A) newer and (B) larger variants of the development models, (C) reasoning models, (D) unseen families (model families that are not represented in $\mathcal{M}_{\mathrm{dev}}$), and (E) specialized models (e.g., coding- or healthcare-oriented models), to evaluate the effectiveness of the benchmark, detailed in §F. Evaluation judge We use the AdvPrefix judge for single-turn attack evaluation, as it is shown to have a high human agreement rate (Zhu et al., 2024). We also develop a multi-turn variant of the AdvPrefix judge and show that it has a high human agreement rate as well, detailed in §B. Baselines and hyperparameters We compare JBDISTILL to three recent and commonly-used static benchmarks: HarmBench (Mazeika et al., 2024), DAN prompts (Shen et al., 2024) prepended to HarmBench seed goals, and WildJailbreaks (Jiang et al., 2024a). We also include CoSafe (Yu et al., 2024), a recently-introduced multi-turn benchmark. Moreover, we run individual adversarial attacks against each development model on HarmBench goals and gather the produced prompts as baseline benchmarks. We set $n$ to 500 for all baselines and for JBDISTILL benchmarks, and show that JBDISTILL is stable under different sizes in §6.2. For fair comparison, we sample 500 prompts from baseline benchmarks that are larger. Table 1: Performance (%) of different benchmarking methods on $\mathcal{M}_{\mathrm{eval}}$. JBDISTILL uses HarmBench as the seed goals. Non-baseline JBDISTILL benchmarks are highlighted. The best result of each benchmarking method is bolded. Our proposed framework significantly outperforms static benchmarks and dynamic attacks on effectiveness and versatility while maintaining separability and coverage. Prompt selection algorithms are crucial for producing effective benchmarks.
# 5.2 Main Results JBDISTILL outperforms existing static benchmarks and dynamic jailbreak attacks (Table 1) Both single-turn and multi-turn JBDISTILL significantly outperform static benchmarks and dynamic attacks in terms of effectiveness and versatility, achieving 81.8% and 78.1% best effectiveness, respectively. JBDISTILL also maintains separability over baselines. This validates our motivation to distill jailbreak attacks into safety benchmarks, and confirms that JBDISTILL produces high-quality benchmarks. Prompt selection algorithms are crucial for high effectiveness Table 1 shows the RBS algorithm outperforms the baseline RS algorithm by a large margin, 81.8% effectiveness compared to 53.1%, with a similar trend in the multi-turn setting. This shows that using multiple development models allows for selecting effective prompt subsets, validating our core hypothesis. While previous works have mostly focused on generating more transferable attack prompts (Zou et al., 2023; Sabbaghi et al., 2025; Lin et al., 2025a; Yang et al., 2025), we show that over-generating attack prompts using off-the-shelf methods and then selecting a highly effective subset is a simple, effective, and overlooked method to enhance attack transferability. We provide further discussion in §7. We also observe a trade-off between effectiveness and separability: when prompts are so effective that most prompts jailbreak most models, the performance differences between models are smaller. Nevertheless, the trade-off can be managed through the choice of prompt selection algorithm: BPG achieves the best separability but sacrifices some effectiveness, achieving 73.3% effectiveness compared to 81.8% for RBS. In practice, benchmark developers can choose the algorithm that best fits their needs to balance the different desiderata. # 5.3 Generalization to Evaluation Models Fig. 2 shows the ASR (Eq.
1) of the JBDISTILL single-turn benchmark produced with RBS. We evaluate on 13 models organized into 5 groups (detailed in §F), and find that 10 out of 13 models achieve higher ASR than the average ASR of $\mathcal{M}_{\mathrm{dev}}$, demonstrating that JBDISTILL benchmarks effectively generalize to a wide range of $\mathcal{M}_{\mathrm{eval}}$. Every $\mathcal{M}_{\mathrm{eval}}$ achieves >60% ASR, including o1. We hypothesize that LLAMA2-7B-CHAT has relatively low ASR because it is a very conservative model, which is consistent with prior works that find it to have high over-refusal rates (Cui et al., 2024). Figure 2: ASR of the JBDISTILL-produced benchmark (RBS), where error bars represent 95% CIs. The benchmark is effective across different groups of evaluation models held out during benchmark construction, with 10 out of 13 models achieving higher ASR than the average ASR of the development models (horizontal dashed line). Figure 3: As more development models and transformation functions are added, the effectiveness of the benchmark on held-out evaluation models increases, outperforming the average effectiveness of using a single development model or transformation function. # 5.4 Ablation: Adding Development Models and Transformation Functions We vary the number of development models and transformation functions used in JBDISTILL benchmark construction with the RBS selection algorithm. Fig. 3 shows that as more models and transformation functions are added, the effectiveness of the benchmark increases, significantly outperforming the average effectiveness of using a single model or a single transformation function. This further supports the sustainability of JBDISTILL: as new models and jailbreak attacks are released, they can easily be incorporated into JBDISTILL to construct an updated benchmark that maintains or improves effectiveness.
This is in contrast to static benchmarks, which often require significant human effort to update and maintain. Table 2: Removing the LLAMA or GEMMA family from $\mathcal{M}_{\mathrm{dev}}$ does not significantly affect the ASR and rankings of the benchmark for $\mathcal{M}_{\mathrm{eval}}$ of the same family. # 6 Analysis # 6.1 Are JBDISTILL Benchmarks Biased toward Development Model Families? Because JBDISTILL accesses multiple $\mathcal{M}_{\mathrm{dev}}$ during benchmark construction, we investigate whether the benchmark is biased toward a particular family of models used during construction. Specifically, we separately remove each of the LLAMA (LLAMA2-7B and LLAMA3.1-8B) and GEMMA (GEMMA2-9B) families from $\mathcal{M}_{\mathrm{dev}}$ and regenerate the benchmark. Table 2 shows that this leads to negligible changes in the ASR and ASR rankings for $\mathcal{M}_{\mathrm{eval}}$ from the same family. Thus, we find no evidence of significant bias toward model families used during benchmark construction, suggesting JBDISTILL produces benchmarks with generalizable prompts. # 6.2 Stability under Varied Construction Setups Ideally, different benchmarks created by optimizing the fixed desiderata (§2) in JBDISTILL should produce consistent rankings for models under evaluation. To study the stability of JBDISTILL-produced benchmarks, we use the single-turn JBDISTILL benchmark produced by RBS as the reference benchmark $B^*$, create different benchmarks using different setups, and measure the Kendall tau distance $d$ (the number of pairwise disagreements) and correlation coefficient $\tau$ between the ASR rankings of $B^*$ and each benchmark variant.
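For two rankings over the same model set, both quantities can be computed directly. The helper below is a small illustration we add (rankings are dicts from model name to rank position, and we assume no ties; with ties one would use a tau-b variant instead):

```python
from itertools import combinations

def kendall(rank_a, rank_b):
    """Kendall tau distance d (number of pairwise disagreements) and
    the rank correlation tau = 1 - 2d / C(n, 2) between two tie-free
    rankings of the same models."""
    models = list(rank_a)
    d = sum(
        1 for x, y in combinations(models, 2)
        if (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y]) < 0
    )
    n_pairs = len(models) * (len(models) - 1) // 2
    tau = 1 - 2 * d / n_pairs
    return d, tau
```

Identical rankings give $d = 0$, $\tau = 1$; fully reversed rankings give the maximum $d$ and $\tau = -1$.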
Depicted in Table 3, the modified benchmarks produce rankings highly correlated with $B^*$, demonstrating the strong stability of our JBDISTILL benchmark creation pipeline. Table 3: $d$ is the Kendall tau distance and $\tau$ is the Kendall rank correlation coefficient. We construct benchmarks with modified setups. The produced rankings of 10 evaluation models (§F) are highly correlated with the ranking produced by the reference benchmark $B^*$, indicating the high stability of JBDISTILL. # 6.3 Multi-Turn Response Transfer Analysis For multi-turn JBDISTILL, both the attack queries generated by jailbreak attack algorithms and the responses from development models are used as the benchmark prompts. We now investigate whether responses from particular development models bias the attacks toward the original development model. In Fig. 4, we depict the ASR of SpeakEasy attacks generated on each $\mathcal{M}_{\mathrm{dev}}$ and transferred to the other $\mathcal{M}_{\mathrm{dev}}$, and do not see a notable gap between transferred and non-transferred attacks. This indicates that transferring responses from development models does not introduce significant bias in attack success. Figure 4 (ASR values, %; rows and columns are Gemma 2, Llama 2, Llama 3.1, OLMo 2): Gemma 2: 14.0, 8.5, 26.5, 23.5; Llama 2: 19.5, 10.5, 24.0, 23.5; Llama 3.1: 23.0, 13.5, 25.5, 38.0; OLMo 2: 20.0, 7.5, 25.5, 34.0. We defer further analyses of the benchmark breakdown to §C. # 7 Related Work Benchmark construction pipelines With rapidly evolving models, LLM evaluation is moving toward dynamic evaluation methods that generate test prompts on the fly, or live benchmarks that can be continuously updated (Chen et al., 2025; Zhang et al., 2025a; Verma et al., 2025, i.a.). JBDISTILL falls into this space: it is a benchmark construction pipeline that generates continually-updatable safety benchmarks. The Arena-Hard BenchBuilder pipeline (Li et al., 2024) curates evaluation prompts from crowdsourced user prompts. Butt et al.
(2024) facilitate benchmark creation with an agentic framework that utilizes human-in-the-loop feedback. AutoBencher (Li et al., 2025) introduces a declarative benchmark construction framework for capability and safety. While they optimize safety benchmarks for attack success and harmfulness, we propose a more general set of desiderata covering effectiveness, separability, and diversity. Importantly, JBDISTILL allows for easily incorporating arbitrary jailbreak attack methods, which are rapidly being discovered and developed. Furthermore, JBDISTILL is a general framework that can be instantiated for various safety evaluation setups (§4). Safety benchmarks Safety benchmarks that carefully curate static sets of prompts have been proposed to advance evaluation (Huang et al., 2023; Chao et al., 2024a; Tedeschi et al., 2024; Souly et al., 2024; Vidgen et al., 2024; Xie et al., 2025). The major human involvement in the creation process of these benchmarks typically yields high-quality prompts, but also hinders continuous benchmark updates. WildTeaming (Jiang et al., 2024a) composes automatically mined human-devised jailbreak strategies to transform vanilla harmful queries into adversarial attacks, creating WildJailbreaks. While we also use adversarial attacks for benchmarking, we employ diverse off-the-shelf attack algorithms to generate attacks and conduct prompt selection with multiple development models to enhance effectiveness. Automatic red-teaming Methods for automatic red-teaming, which search for jailbreaks to dynamically evaluate LLM safety, are being crafted at a rapid pace (Zou et al., 2023; Chao et al., 2024b; Beutel et al., 2024; Liu et al., 2025, i.a.). Notably, rainbow-teaming (Samvelyan et al., 2024) takes a prompt-based mutation approach to discover diverse adversarial prompts for a given model. Unlike their category-based definition of diversity, we adopt a more fine-grained definition based on covering the provided seed goals.
JBDISTILL incorporates such jailbreak-search methods as transformations to produce widely-effective benchmarks (§3). Jailbreak attack transferability Transferring jailbreak attacks developed on particular models to other models has been widely studied (Liu et al., 2024; Shah et al., 2023; Lee et al., 2025, i.a.). Specifically, recent works have focused on searching for more transferable prompts in the attack generation phase via loss averaging across multiple models (Zou et al., 2023; Sabbaghi et al., 2025), modifying search constraints (Yang et al., 2025), and post-editing (Lin et al., 2025b). The JBDISTILL framework creates attacks from a small set of development models and transfers them to arbitrary evaluation models (§5.3). Instead of generating more transferable prompts, we over-generate and select transferable prompts from the candidate pool using signal from multiple development models. We find this simple approach to be extremely effective for improving transferability (§5.2, §5.3).
Large language models (LLMs) are rapidly deployed in critical applications, raising urgent needs for robust safety benchmarking. We propose Jailbreak Distillation (JBDistill), a novel benchmark construction framework that "distills" jailbreak attacks into high-quality and easily-updatable safety benchmarks. JBDistill utilizes a small set of development models and existing jailbreak attack algorithms to create a candidate prompt pool, then employs prompt selection algorithms to identify an effective subset of prompts as safety benchmarks. JBDistill addresses challenges in existing safety evaluation: the use of consistent evaluation prompts across models ensures fair comparisons and reproducibility. It requires minimal human effort to rerun the JBDistill pipeline and produce updated benchmarks, alleviating concerns on saturation and contamination. Extensive experiments demonstrate our benchmarks generalize robustly to 13 diverse evaluation models held out from benchmark construction, including proprietary, specialized, and newer-generation LLMs, significantly outperforming existing safety benchmarks in effectiveness while maintaining high separability and diversity. Our framework thus provides an effective, sustainable, and adaptable solution for streamlining safety evaluation.
# 1. Introduction Egocentric human-object interaction (Ego-HOI) detection aims to locate interacting human-object pairs and reason about their interaction relationships from first-person vision. As a crucial task in human-centered advanced scene understanding, its precise outcomes can drive advancements in a wide range of downstream applications, including embodied intelligence [1, 2, 3], mixed reality [4, 5], surveillance event detection [6, 7], and visual question answering [8, 9, 10]. By analyzing images captured from a first-person perspective, an Ego-HOI assistance system offers guidance and feedback based on the user's actions, facilitating tasks such as cooking and assembly. Furthermore, this technology has the potential to enhance embodied intelligence in imitation learning and the execution of complex tasks. HOI detection has made significant progress in third-person vision [11, 12, 13, 14, 15, 16, 17, 18, 19]. However, it is rarely explored from the egocentric view. The primary reason is the lack of benchmark datasets clearly labeled for the Ego-HOI detection task. On the one hand, the significant domain mismatch between HOI datasets captured from third-person vision and the egocentric task renders them unsuitable for direct application to Ego-HOI detection. As shown in Fig. 1, the third-person perspective (top row) provides a comprehensive view of the human body posture and surroundings, while the egocentric perspective (bottom row) captures interaction details of hands and objects at close range. On the other hand, although a large number of egocentric datasets have emerged in recent years, e.g., Ego4D [20], EPIC-KITCHENS [21, 22], they usually focus on the action recognition task and lack high-quality fine-grained annotations of the three fundamental elements of Ego-HOI: <human hand, verb, object>.
Furthermore, these datasets either only cover a single scene [22, 23, 24, 25, 26, 27] or single-hand interactions [28], or focus on rigid objects with relatively simple interaction patterns while ignoring articulated objects [24, 25, 26, 28, 29], which is far from sufficient for building a comprehensive understanding of egocentric human-object interactions in real-world scenarios. The limitations of existing egocentric public datasets regarding annotation modalities and interaction diversity severely hinder the development of Ego-HOI detection. Figure 1: Examples of human-object interactions from the third-person perspective (top row) and egocentric perspective (bottom row). Different colors represent distinct elements of each HOI triplet <human/hand, verb, object>. The narrow field of view in egocentric vision leads to severe visual occlusion [29, 30, 31], presenting a significant challenge for interaction recognition. Existing HOI detection methods are usually designed for third-person vision and rely on the rich contextual information provided by the broad view of external cameras. When applied to egocentric vision, these methods suffer from information loss due to mutual occlusions of hands and objects, affecting their performance in Ego-HOI detection. Due to the structural connectivity properties of human skeletons, human pose features exhibit higher robustness and reliability than traditional visual features when dealing with partial occlusions [32, 33, 34]. Based on this insight, previous studies [11, 34, 35, 36, 37, 38, 39] have attempted to incorporate pose information to distinguish subtle interaction differences. However, these methods usually depend on human pose estimators or body part detectors, which are unsuitable for hand posture estimation in egocentric scenes. Moreover, they primarily focus on extracting geometric features from the overall structure of the human body, which are not specifically designed for hands.
Therefore, it is crucial to further explore flexible and effective ways to capture gesture cues that facilitate egocentric interactivity learning even under occlusion. In view of the above issues, 1) we present a new benchmark dataset, Ego-HOIBench, featuring explicit and high-quality <human hand, verb, object> triplet annotations to facilitate research on Ego-HOI perception. Our dataset covers 27,575 images and 123 hand-verb-object triplet categories, thoroughly annotating the bounding boxes and categories of human hands, active objects, and their relations. It not only extensively covers diverse hand-object interaction scenarios but also includes and distinguishes single-hand and two-hand manipulated interactions. We also define two Ego-HOIBench challenges under instance-level and image-level settings to explore the Ego-HOI detection task. 2) We propose a lightweight and effective interaction enhancement scheme, Hand Geometry and Interactivity Refinement (HGIR), that utilizes hand pose and geometric cues to improve the interaction representations from a global perspective. In particular, our approach first estimates multiple sets of candidate hand joint positions based on hand features from an HOI baseline detector. Then, we construct global hand geometric features by designing a selection strategy to identify the most suitable hand pose proposals. Next, we enhance the interaction representation with pose prompts via pose-interaction attention, generating pose-aware interaction features. Finally, hand geometric features and pose-aware interaction features are fused for interaction recognition. Note that our method can be flexibly integrated with off-the-shelf HOI detectors, eliminating the need for additional hand pose estimators and achieving impressive efficiency.
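To make the pose-prompt attention step concrete, here is a minimal NumPy sketch of a single-head pose-interaction attention layer. It is purely illustrative: the shapes, the single-head scaled dot-product form, and the additive fusion are our assumptions, not the exact HGIR design.

```python
import numpy as np

def pose_interaction_attention(inter_feats, pose_prompts):
    """Interaction features (Q, D) attend to hand-pose prompts (P, D)
    via scaled dot-product attention; the attended pose context is then
    fused with the original features to yield pose-aware features."""
    d_k = inter_feats.shape[-1]
    scores = inter_feats @ pose_prompts.T / np.sqrt(d_k)   # (Q, P)
    scores -= scores.max(axis=-1, keepdims=True)           # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)               # softmax over prompts
    pose_ctx = attn @ pose_prompts                         # (Q, D) pose context
    return inter_feats + pose_ctx                          # additive fusion (assumed)
```

In a real detector these arrays would be learned query and prompt embeddings inside a Transformer decoder, with learned projections for queries, keys, and values.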
The main contributions of our work can be summarized as follows: • We introduce Ego-HOIBench, the first Ego-HOI detection benchmark containing 27K real-world egocentric images and 123 fine-grained hand-verb-object triplet annotations. Besides, we adapt and reimplement four representative third-person HOI detection methods on Ego-HOIBench, aiming to significantly advance benchmarking work in egocentric interactive localization and recognition research. • We propose a plug-and-play interaction enhancement scheme, i.e., HGIR, incorporating global hand pose understanding to complement and enhance interaction representations in egocentric vision. Our approach is lightweight, effective, and general, and works seamlessly with off-the-shelf HOI detection methods. • Experiments integrating our scheme into representative and influential HOI baselines validate its significant performance improvements. Extensive experiments, ablation studies, and discussions are conducted to illustrate the significance of benchmarking Ego-HOI. # 2. Related Work # 2.1. Egocentric Datasets and Benchmarks With the development of wearable devices and smart glasses, an increasing number of egocentric datasets have been proposed to study human activities from the unique first-person viewpoint [8, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 40, 41, 42]. Here, we focus on datasets related to interaction perception and localization. EPIC-KITCHENS and its extensions [21, 22] are a series of large-scale datasets that capture long-term unscripted activities in kitchen environments and densely label actions through an automatic annotation pipeline. To comprehensively study attention and action, EGTEA Gaze+ [23] records video and audio of meal preparation tasks and simultaneously provides gaze tracking, hand masks, and fine-grained action annotations. Compared with other datasets, Ego4D [20] has a massive scale and unprecedented diversity.
It builds a vast suite of benchmarks, including episodic memory, hands and objects, audio-visual diarization, social interaction, and forecasting. Datasets and benchmarks such as these fill the gaps in first-person vision for different visual perception tasks, such as action recognition, video captioning, and hand detection. However, most works focus on action recognition or custom tasks and do not provide the joint annotations of <human hand, verb, object> required for training and evaluating Ego-HOI detection models. Many datasets [22, 30] detail all objects interacting with the hand over time but do not explicitly specify the currently active object, which is essential for constructing Ego-HOI annotations. Although the MECCANO dataset [25] provides clear and complete Ego-HOI annotations, it only considers toy assembly activities. Such a limited scenario is very detrimental to the generalization of Ego-HOI detection models. Furthermore, object diversity is essential for fully understanding interactions, but some datasets [24, 25, 26, 28, 29] tend to collect behavioral data related to rigid objects with simple interactions, ignoring articulated objects. The diversity of hand configurations is also often overlooked. The FPHA dataset [28] only presents the 3D joint locations for right hands, while H2O [29] focuses on two-hand operations. Although some datasets [23, 28] cover both single-hand and two-hand interactions, they do not differentiate between the left and right hands. These oversights hinder a comprehensive understanding of interactions under different hand configurations. In addition, although recent egocentric datasets have primarily focused on videos, image-based Ego-HOI detection remains highly novel and valuable for research. It is particularly well-suited for resource-limited devices and real-time applications due to its easy accessibility, fast response, and low computational requirements.
The comparison between our work and existing public datasets is shown in Table 1. To the best of our knowledge, our Ego-HOIBench is the first real image-based dataset that comprises explicit and high-quality annotations of <human hand, verb, object> for Ego-HOI detection, covering a rich set of scenarios, objects, and hand configurations.

Table 1 Comparison of Ego-HOIBench with existing egocentric datasets. Active Object Distinction denotes that the annotation specifies the object involved in the current interaction. Hand Dist. means distinguishing between left and right hands. $(*)$: It can be converted from masks. $(**)$: Only available for a subset of frames. 1 This number is obtained from the “Short-Term Object Interaction Anticipation” task. 2 Most images are synthetic.

# 2.2. HOI Detection

In recent years, HOI detection has attracted widespread research interest. This task aims to gain a fine-grained understanding of human activities by localizing human-object pairs and inferring their high-level semantic relationships. Existing HOI detection work can be categorized into two- and one-stage methods based on their detection strategies. The two-stage methods [11, 13, 15, 35, 36, 38, 43, 44, 45, 46, 47, 48] use a frozen object detector (e.g., Faster R-CNN [49], DETR [50]) to generate proposals for human-object pairs. These proposals are then classified by a separate network based on the features of cropped regions. The two-stage methods usually extract additional visual and contextual information, such as spatial [43, 45, 48], language [13, 15], and human pose features [11, 35, 36, 38], to improve interaction representations. Some studies [43, 47] also utilize graph structures for message propagation between detected human and object instances, thereby enhancing the reasoning about interactions between these instance nodes.
Decoupling the stages enables training solely the interaction recognition network, thereby saving computational resources and improving training efficiency. However, optimizing the two sub-problems separately may result in suboptimal results. In contrast, the one-stage methods [16, 17, 51, 52, 53, 54, 55] directly detect HOI triplets from the entire image. Early CNN-based one-stage methods use interaction points [51] or union boxes [53] to predict interactions, but these heuristics rely on complex post-processing techniques such as Non-Maximum Suppression. Building on the success of DETR [50] in object detection, many approaches have extended the Transformer architecture to achieve end-to-end HOI detection [16, 17, 52, 54]. According to the degree of decoupling of human detection, object recognition, and interaction recognition, these methods can be further divided into single-branch [17, 56], two-branch [52], and three-branch [54] methods. Overall, these methods benefit from the strengths of Transformers in efficiently capturing long-range dependencies and have achieved significant performance improvements. # 2.3. Human Pose as HOI Cues Recognizing the importance of human pose in understanding human behavior and intention, researchers have explored various methods [11, 34, 35, 36, 37, 38, 57, 58] to extract and leverage pose features to enhance interaction representations. For example, Park et al. [11] designed a pose-conditioned graph neural network that utilizes local features of human joints to update human node encoding to contain richer local information. Qiao et al. [34] focused on extracting geometric features, such as human posture and object position, to supplement visual features to improve robustness in partially occluded scenes. Li et al. [35] emphasized the unique characteristics of human body parts related to interactivity and proposed a hierarchical learning framework based on instance-level and part-level body features. 
However, these works mainly focus on the local pose features of the target person, neglecting global clues from other people in the image. To overcome this limitation, Wu et al. [58] introduced body-part saliency maps to capture multi-person features and learn the overall relationship between different body parts. Nevertheless, most of these methods rely on off-the-shelf human pose estimators or body part detectors, significantly increasing complexity and computational costs. Moreover, these models are typically trained on third-person datasets, making their application challenging in first-person scenarios. To address these issues, our work leverages the geometric robustness of global hand pose features to provide crucial complementary information to visual features, deepening our understanding of the complex dynamics of Ego-HOI under partial occlusion. We integrate hand pose estimation into our Ego-HOI detection pipeline, sharing weights with the hand detection branch. This integration not only addresses generalization limitations but also reduces the computational burden, making the entire system more efficient and practical.

# 3. Our Ego-HOIBench Benchmark

Ego-HOIBench is an egocentric image dataset explicitly annotated for Ego-HOI detection research. The dataset provides high-quality ground-truth annotations for hand-object pair detection and interaction recognition across all frames. Hand and object annotations contain multiple $(class, bbox)$ tuples, where $class$ indicates the hand side (left or right) or object category, and $bbox$ denotes a bounding box determined by the coordinates of its top-left and bottom-right corners. Interaction annotations specify the exact action category performed by each hand-object pair. Combined with the original hand pose annotations, our Ego-HOIBench dataset provides rich details for studying human-object interactions in egocentric vision.

# 3.1. Generation Steps

We perform the following steps to acquire images and generate annotations for our Ego-HOIBench benchmark. Given an untrimmed RGB-D video sequence derived from the HOI4D dataset [30], we begin by extracting the intermediate frames from each action clip, based on the identified start and end timestamps, as these frames effectively capture sufficient information. Then, the intermediate frame’s mask regions are associated with the corresponding object categories. According to the definition of the Ego-HOI detection task, we focus only on the active objects in the current frames. By analyzing the task information, we restrict the possible categories of active objects and filter out irrelevant objects. To avoid meaningless component segmentation, we merge different components of the same object, e.g., the scissors’ left and right parts, and the safe’s body and door. Subsequently, we convert the mask regions into bounding boxes. A hand-object pair’s bounding boxes and categories are combined with the corresponding action category to form a complete Ego-HOI triplet annotation, i.e., <human hand, verb, object>. Since subtle errors in pixel-level masks can lead to considerable deviations in the corresponding bounding boxes, we employ human experts to double-check and ensure accurate annotations. The label correction work is time-consuming, requiring a combined effort of twenty person-days. The entire dataset generation process spans approximately one and a half months. The extracted intermediate frames and their annotations constitute the Ego-HOIBench dataset. The Ego-HOIBench dataset is further divided into training and test sets. We split the frames according to their video identities to ensure no overlap of object entities between the training and test sets. With a split ratio of $80\%{:}20\%$, we finally obtain 22,088 training frames and 5,487 testing frames.

Figure 2: Distributions of objects (top) and verbs (bottom), sorted by instance count.
Some categories appear significantly more frequently than others.

# 3.2. Dataset Statistics

The Ego-HOIBench dataset contains 27,575 RGB-D frames, with a frame resolution of $1920 \times 1080$. It covers 22 representative noun classes, including 10 rigid object classes and 10 articulated object classes, as well as left-hand and right-hand categories. We annotate $58.4\mathrm{K}$ bounding boxes, with ${\sim}28\mathrm{K}$ for objects. We consider 18 different verbs to describe actions typically performed by camera wearers in daily activities, ensuring broad coverage of common types: Grasp, Pick up, Put down, Carry, Push, Pull, Carry (both hands), Open, Close, Reach out, Turn on, Press, Cut with, Cut, Dump, Dump into, Bind with, Bind. Among the observed instances, the vast majority $(91.4\%)$ rely on the right hand alone, while fewer cases $(8.2\%)$ use both hands. Even fewer cases operate with the left hand alone, accounting for only $0.4\%$. Fig. 2 shows the distributions of object and verb categories under the instance-level setting, the definition of which will be illustrated in Sec. 3.3.

Table 2 Statistics of instances and Ego-HOI triplet categories at different occlusion ratios in our Ego-HOIBench Dataset.

The instance counts of different object categories span a wide range, from 2630 down to 8. This wide distribution range is also reflected in the verb counts. The triplet combination of hands, verbs, and objects further exacerbates the data imbalance, reflecting the natural distribution of HOIs in the real world. This characteristic makes Ego-HOIBench a distinctive and challenging benchmark for Ego-HOI detection, presenting challenges that are closely related to practical applications. Fig. 3 shows the co-occurrence between verbs and objects (e.g., Open and Drawer).
Each square in the heat map directly reflects the number of instances involving a particular verb-object pair, with darker colors indicating more corresponding instances. Our dataset contains various distinctive co-occurrence patterns, where some specific verbs, such as Turn on, are associated only with the object Lamp, and Press co-occurs only with Trash can. The co-occurrence between objects (including hands) and verbs highlights the feasibility of using this information to suppress negative prediction candidates. This suppression scheme closely mirrors human decision-making processes and is therefore frequently employed during the model inference stage [11, 43]. Table 2 provides detailed statistics on the number of instances and Ego-HOI triplet categories at various occlusion ratios for the training and test sets in our Ego-HOIBench dataset. The occlusion ratio of an Ego-HOI instance is calculated by dividing the area of the object occluded by hands and other objects by the area of its bounding box. In our dataset, occlusion is common, with roughly half of the instances having at least $20\%$ of their area occluded and about $20\%$ having an occlusion ratio over $40\%$. The high occlusion ratios increase the difficulty of detection and recognition and affect the model’s generalization and robustness. In addition, the number of triplet categories decreases significantly as occlusion increases. This phenomenon is closely related to the physical size of the objects. Larger objects, such as cabinets and chairs, are typically only slightly obscured by hands or other objects. In contrast, smaller objects, like staplers and bowls, are more prone to varying degrees of occlusion.

# 3.3. Ego-HOI Detection Tasks

From the third-person perspective, HOI is defined as a triplet containing a person, a verb, and an object [14, 59]. It assumes a one-to-one correspondence between these three elements.
Since people, as interacting subjects, remain constant, most HOI detection models disregard subject identification and focus solely on the localization of humans. In the context of egocentric vision, Ragusa et al. [25] described interactions with multiple objects using <verb, objects>, completely ignoring the role of the human hand in the interaction. However, a person’s left and right hands can independently perform different interactions or collaborate on a single interaction, making it an oversimplification to treat a person’s hands as a single, unchanging entity. Furthermore, human hands are not passive in activities but actively influence and shape the nature of interactions. Therefore, a comprehensive understanding of hand factors is indispensable for Ego-HOI detection. Building upon the aforementioned understanding, we redefine Ego-HOI as the <hands, verb, objects> triplet. We emphasize the significance of hands in interactions and consider comprehending their categories and positions as essential for understanding egocentric interaction dynamics. In light of the definition of Ego-HOI, we present two detection tasks to evaluate the model’s capacity to comprehend interactions at the instance level and the abstract image level.

# 3.3.1. Instance-level Ego-HOI Detection Task

Let $\mathcal{H} = \{h_r, h_l\}$, $\mathcal{V} = \{v_1, v_2, \ldots, v_m\}$, and $\mathcal{O} = \{o_1, o_2, \ldots, o_n\}$ denote the sets of hands, verbs, and objects, respectively, where $m$ and $n$ are the numbers of verb and object categories, respectively.
We define the prediction target for each instance as follows:

$$
ehoi_{ins} = \left\{ \left( \overline{h_r}, \overline{h_l} \right), v_i, o_j \right\}
$$

where $\left( \overline{h_r}, \overline{h_l} \right)$ are the hands engaged in the interaction. There are three situations to consider: right hand only $(h_r, \cdot)$, left hand only $(\cdot, h_l)$, and both hands $(h_r, h_l)$. $v_i \in \mathcal{V}$ is the verb that describes the interaction, and $o_j \in \mathcal{O}$ is the object of interest. The annotations for each instance comprise the class labels for the hand(s) and active object, along with their respective bounding boxes and the verb class label. In total, we define 123 Ego-HOI triplet categories consisting of one or two hands, a verb, and an object, e.g., right-hand cut apple and left and right hands dump bucket.

Figure 4: Examples of general, instance-level specific, and image-level specific hand-object interactions. The instance-level setting focuses on the interaction behaviors involving a single active object, while the image-level setting interprets the interaction holistically from the perspective of the entire image. In the instance-level setting (second row), an image may be parsed into two separate interaction instances: right-hand bind paper and right-hand bind with stapler. In contrast, the image-level setting (last row) defines the image as a unified interaction: right-hand bind paper with stapler. For clarity, the hand category is omitted from the image captions.

# 3.3.2. Image-level Ego-HOI Detection Task

The objective of image-level Ego-HOI detection is to deduce the primary interaction within each frame and identify all hands and active objects participating in it.
Compared with the instance-level setting, the image-level setting comprehensively considers the objects directly and indirectly involved in the interaction when analyzing triplets. For example, in the instance-level setting, an image may be parsed into two separate interaction instances: right-hand bind paper and right-hand bind with stapler. In contrast, the image-level setting defines the image as a unified interaction: right-hand bind paper with stapler. This image-level perspective examines and explains interaction behaviors more holistically. To explain the instance-level and image-level Ego-HOI detection tasks more clearly, we show examples of hand-object interactions in Fig. 4.

# 4. Our Method

In this work, we present a Hand Geometry and Interactivity Refinement (HGIR) scheme that enhances interaction learning in Ego-HOI detection by leveraging global hand pose cues. Our method comprises four components: the hand pose estimation block for obtaining hand pose candidates (see Sec. 4.2 for details), the hand geometry extraction block for exploiting global structural features (see Sec. 4.3 for details), the interactivity refinement block for optimizing pose-interaction attention (see Sec. 4.4 for details), and the feature aggregation block for fusing complementary geometric and refined interaction features (see Sec. 4.5 for details).

# 4.1. HGIR Architecture

Our HGIR scheme is straightforward yet robust and can be easily integrated with various baseline HOI detection methods, yielding appealing results in the Ego-HOI detection task. The overall architecture of our method is shown in Fig. 5.
Given an input RGB image $\mathbf{X} \in \mathbb{R}^{H \times W \times 3}$, we employ the original baseline HOI detection method to obtain the hand features $\mathbf{H} \in \mathbb{R}^{N \times d}$, the object features $\mathbf{O} \in \mathbb{R}^{N \times d}$, and the interaction features $\mathbf{I} \in \mathbb{R}^{N \times d}$, denoted as $(\mathbf{H}, \mathbf{O}, \mathbf{I}) = Baseline(\mathbf{X})$. The baseline method can adopt either a unified or decoupled prediction strategy as long as it provides the necessary interaction (i.e., verb) and hand (i.e., subject) representations. Multiple hand pose candidates $\hat{\mathcal{G}} \in \mathbb{R}^{N \times 2N_g}$ are estimated based on $\mathbf{H}$, where $N_g$ is the number of hand joints. Then, a selection strategy is designed to generate left-hand and right-hand pose proposals, and their geometric features $\mathbf{f} \in \mathbb{R}^{2KN_g(N_g-1)}$ are extracted to describe the details of hand structure. Simultaneously, the interactivity refinement block uses the attention mechanism to direct the focus of the interaction features toward the regional information derived from pose offset prompts $\mathbf{H}^{\mathrm{off}}$. These two features are fused to obtain the ultimate interaction embedding $\mathbf{E} \in \mathbb{R}^{N \times d}$ for classification. Overall, our HGIR scheme exploits the synergy of complementary geometric features and refined interaction features to enhance the ability to perceive interaction dynamics.

Figure 5: Overview of our framework. Given an input image, a baseline HOI detection method (at the bottom) generates the initial hand $(\mathbf{H})$, object $(\mathbf{O})$, and interaction $(\mathbf{I})$ features.
(a) Within our HGIR scheme (at the top), a set of pose candidates $(\hat{\mathcal{G}})$ is first estimated based on $\mathbf{H}$ (see Sec. 4.2). (b) Top $K$ pairs of hand proposals are then selected, and their geometric features $(\mathbf{f})$ are further extracted to reveal the dynamic structural properties of hands in interactions (see Sec. 4.3). (c) Simultaneously, the hand pose offset-specific prompts $(\mathbf{H}^{\mathrm{off}})$ are incorporated to enrich the interaction representations using the pose-interaction attention mechanism (see Sec. 4.4). (d) Finally, the hand geometric features and refined pose-aware interaction features $(\mathbf{I}^*)$ are aggregated to obtain the enhanced interaction embedding $(\mathbf{E})$ for interaction recognition (see Sec. 4.5). Our scheme is dedicated to interactivity learning and can be integrated with baseline HOI methods that provide interaction and hand features.

# 4.2. Hand Pose Estimation

Our pose estimation block embeds the auxiliary task of hand pose estimation into the HOI baseline method, sharing most of the network and weights with the hand detection branch. This strategy minimizes computational overhead and allows for flexible adaptation to different datasets without being restricted by the domain of an external hand pose estimator. Some HOI baseline methods offer specialized hand features, while others use instance features to uniformly describe both the subject and object. To extract and emphasize hand information more deeply, we apply a consistent Transformer encoder across the various baseline methods. This encoder is primarily composed of a self-attention layer and a feed-forward (FFN) layer. Formally, we obtain the advanced hand representations using this encoder, denoted as $\mathbf{H}^* = Encoder(\mathbf{H})$, where $\mathbf{H}^*$ consists of $N$ vectors $\mathbf{h}_i \in \mathbb{R}^d$.
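As a concrete illustration, the shared encoder can be sketched in a few lines of NumPy. This is a simplified single-head block with assumed weight shapes and variable names, not the paper's implementation; the point is that the output keeps the shape $(N, d)$, so the token index $i$ survives for the index-aligned lookups used later.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def encoder(H, Wq, Wk, Wv, W1, W2):
    """One self-attention + FFN block, mirroring H* = Encoder(H).

    H: (N, d) hand features. Returns H* of the same shape, so the
    token index i is preserved for later index-aligned lookups."""
    d = H.shape[1]
    A = softmax((H @ Wq) @ (H @ Wk).T / np.sqrt(d))  # (N, N) attention map
    Z = H + A @ (H @ Wv)                             # residual + attention
    return Z + np.maximum(Z @ W1, 0.0) @ W2          # residual + ReLU FFN

rng = np.random.default_rng(0)
N, d = 5, 8
H = rng.normal(size=(N, d))
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(5)]
H_star = encoder(H, *Ws)
assert H_star.shape == (N, d)
```

A real implementation would add layer normalization and multiple heads; the sketch keeps only the self-attention + FFN structure named in the text.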
Two lightweight multi-layer perceptrons (MLPs) are then used in parallel to extract the hand detection-specific features $\mathbf{H}^{\mathrm{det}}$ and pose offset-specific features $\mathbf{H}^{\mathrm{off}}$, where the $i$-th feature vectors are calculated as $\mathbf{h}_i^{\mathrm{det}} = MLP(\mathbf{h}_i)$ and $\mathbf{h}_i^{\mathrm{off}} = MLP(\mathbf{h}_i)$, respectively. Here, the main reason for choosing MLPs as feature extractors is to ensure feature index alignment. This index consistency lays the foundation for the subsequent combination of in-box reference points and pose offsets according to the shared indexes.

Reference Point. Two small FFNs $f_{hc}, f_{hb}$ are adopted as prediction heads to obtain the hand classification probabilities $\left\{\hat{\mathbf{p}}_i^{\mathrm{h}}\right\}_{i=1}^N$ (i.e., left hand or right hand) and bounding boxes $\left\{\hat{\mathbf{b}}_i^{\mathrm{h}}\right\}_{i=1}^N$ of all $N$ tokens, respectively, as follows:

$$
\begin{array}{rl}
& \hat{\mathbf{p}}_i^{\mathrm{h}} = \delta\left(f_{hc}\left(\mathbf{h}_i^{\mathrm{det}}\right)\right) \in \mathbb{R}^{|\mathcal{H}|+1} \\
& \hat{\mathbf{b}}_i^{\mathrm{h}} = \sigma\left(f_{hb}\left(\mathbf{h}_i^{\mathrm{det}}\right)\right) \in \mathbb{R}^{4}
\end{array}
$$

where $\delta$ and $\sigma$ are the softmax and sigmoid operations, respectively. $|\mathcal{H}|$ denotes the size of the hand category set, and the additional class represents the background class (no object).
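A toy version of the two prediction heads makes the roles of the two activations concrete: softmax turns class logits into a distribution over {left, right, background}, while sigmoid squashes the four box coordinates into $[0, 1]$. The weight matrices and names below are illustrative stand-ins for $f_{hc}$ and $f_{hb}$, which the paper leaves unspecified.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hand_heads(h_det, W_cls, W_box):
    """Toy single-layer stand-ins for the f_hc / f_hb heads.

    h_det: (d,) detection-specific feature of one token. Returns class
    probabilities over {left, right, background} and a normalized box
    in [0, 1]^4."""
    p = softmax(h_det @ W_cls)  # (|H| + 1,) = 3 class probabilities
    b = sigmoid(h_det @ W_box)  # (4,) box coordinates in [0, 1]
    return p, b

rng = np.random.default_rng(1)
d = 8
p, b = hand_heads(rng.normal(size=d),
                  rng.normal(size=(d, 3)), rng.normal(size=(d, 4)))
c_hat, s_hat = int(np.argmax(p)), float(np.max(p))  # predicted class, score
assert abs(p.sum() - 1.0) < 1e-9 and np.all((0 <= b) & (b <= 1))
```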
The predicted category $\hat{c}_i^{\mathrm{h}}$ and score $\hat{s}_i^{\mathrm{h}}$ are given by $\arg\max_k \hat{\mathbf{p}}_{i,k}^{\mathrm{h}}$ and $\max_k \hat{\mathbf{p}}_{i,k}^{\mathrm{h}}$, respectively. Using the predicted $N$ hand bounding boxes, we determine the reference points $\mathbf{R} = \left\{\left(xref_i, yref_i\right)\right\}_{i=1}^N$. After in-depth analysis and experimental verification, we choose the top center point of each bounding box as the reference point, which constrains the positions of hand joint points to the vicinity of the hand, making it easier to obtain accurate estimates of joint positions.

Offset. Using an additional offset head, we predict the offsets of $N_g = 21$ hand joints relative to the corresponding reference point from $\mathbf{R}$ along the $x$ and $y$ axes. Taking the hand offset-specific features as inputs, the $i$-th offset vector predicted by the offset head $f_\Delta$ is given by:

$$
\Delta_i = \sigma\left(f_\Delta\left(\mathbf{h}_i^{\mathrm{off}}\right)\right) \in \mathbb{R}^{2N_g}
$$

where $\left\{\left(\Delta_{i,2k-1}, \Delta_{i,2k}\right) \mid k = 1, \ldots, N_g\right\}$ denote the $x$-coordinate and $y$-coordinate offsets of the $k$-th joint. The reference points and offsets with the same indexes are added to obtain a set of hand pose candidates $\hat{\mathcal{G}} = \left\{\hat{\mathbf{g}}_i \mid \hat{\mathbf{g}}_i \in \mathbb{R}^{2N_g}\right\}_{i=1}^N$, as follows:

$$
\left(\hat{\mathbf{g}}_{i,2k-1}, \hat{\mathbf{g}}_{i,2k}\right) = \left(xref_i + \Delta_{i,2k-1},\; yref_i + \Delta_{i,2k}\right)
$$

where $k \in \left\{1, \ldots, N_g\right\}$.
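The reference-point-plus-offset construction can be sketched directly. The box format, the interleaved $(dx, dy)$ layout, and the toy values below are assumptions for illustration; only the top-center reference point and the element-wise addition follow the text.

```python
import numpy as np

def pose_candidate(box, delta):
    """Assemble one hand pose candidate from a box and joint offsets.

    box: (x1, y1, x2, y2) predicted hand bounding box; the reference
    point is its top-center. delta: (2 * Ng,) offsets, interleaved as
    (dx_1, dy_1, dx_2, dy_2, ...)."""
    x1, y1, x2, y2 = box
    xref, yref = (x1 + x2) / 2.0, y1       # top-center reference point
    g = np.asarray(delta, dtype=float).copy()
    g[0::2] += xref                        # x-coords: xref + dx_k
    g[1::2] += yref                        # y-coords: yref + dy_k
    return g

Ng = 21
g = pose_candidate((10.0, 20.0, 30.0, 60.0), np.zeros(2 * Ng))
# zero offsets collapse every joint onto the reference point (20, 20)
assert g.shape == (2 * Ng,) and g[0] == 20.0 and g[1] == 20.0
```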
Combining reference points and offsets instead of directly predicting joint positions offers two key advantages. First, it avoids the complexity of directly locating joint points in the entire image. Second, the pose offset-specific features can act as valuable prompts in the subsequent interactivity refinement.

# 4.3. Hand Geometry Extraction

From an egocentric view, the hands can carry out tasks independently or collaboratively. Even when the left and right hands perform different actions, they can still provide valuable complementary information to each other. Therefore, we extract the geometric features of all hands in the image from a global perspective based on the pose estimation results to gain a comprehensive insight into the interaction’s semantics.

Selection Strategy. We match the hand pose candidates with the predicted hand categories and scores to obtain a set $\left\{\left(\hat{\mathbf{g}}_i, \hat{c}_i^{\mathrm{h}}, \hat{s}_i^{\mathrm{h}}\right)\right\}_{i=1}^N$. This matching process also benefits from the index consistency mentioned before. Based on $\hat{c}_i^{\mathrm{h}}$, the hand pose candidates are partitioned into two sets $\Omega = \left\{\Omega_l, \Omega_r\right\}$, where $\Omega_l$ and $\Omega_r$ denote the sets of predictions whose categories are left hand and right hand, respectively. To screen out high-quality hand pose candidates, we preset a threshold $T_{pose}$, and the retained left-hand and right-hand pose candidates are denoted as $\Omega_l^{'} = \left\{\hat{\mathbf{g}}_i \in \Omega_l \mid \hat{s}_i^{\mathrm{h}} \geq T_{pose}\right\}$ and $\Omega_r^{'} = \left\{\hat{\mathbf{g}}_i \in \Omega_r \mid \hat{s}_i^{\mathrm{h}} \geq T_{pose}\right\}$, respectively.
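The full selection strategy, i.e., this thresholding together with the re-ranking and cross-hand padding described below, can be sketched in plain Python. The tuple layout and string labels are hypothetical; the sketch also assumes enough candidates overall to pad with.

```python
def select_proposals(candidates, K, T_pose):
    """Sketch of the selection strategy (names assumed): threshold by
    score, re-rank, keep the Top-K per hand, and pad a short side with
    the other hand's proposals so each side ends up with K entries."""
    kept = sorted((c for c in candidates if c[2] >= T_pose),
                  key=lambda c: -c[2])               # re-rank by confidence
    left = [g for g, side, _ in kept if side == "left"][:K]
    right = [g for g, side, _ in kept if side == "right"][:K]
    # cross-hand padding (assumes enough candidates overall)
    return (left + right[: K - len(left)],
            right + left[: K - len(right)])

cands = [("gL1", "left", 0.9), ("gR1", "right", 0.8),
         ("gR2", "right", 0.6), ("gR3", "right", 0.3)]
L, R = select_proposals(cands, K=2, T_pose=0.5)
# gR3 is filtered out; the single left candidate is padded with gR1
assert L == ["gL1", "gR1"] and R == ["gR1", "gR2"]
```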
For each set, we re-rank the candidates based on $\hat{s}_i^{\mathrm{h}}$ and select the Top-$K$ candidates with the highest confidence to constitute the pose proposals. In the case of fewer than $K$ valid candidates, we use the candidates of the other hand for padding to maintain feature integrity. For example, if the number of valid candidates for the left hand is less than $K$, we will use candidates from the right hand for padding, and vice versa:

$$
\begin{array}{r}
\Omega_l^{*} = \Omega_l^{'} \cup \left\{\hat{\mathbf{g}}_i \in \Omega_r^{'} \mid i = 1, \ldots, K - \left|\Omega_l^{'}\right|\right\} \\
\Omega_r^{*} = \Omega_r^{'} \cup \left\{\hat{\mathbf{g}}_i \in \Omega_l^{'} \mid i = 1, \ldots, K - \left|\Omega_r^{'}\right|\right\}
\end{array}
$$

where $|\cdot|$ denotes the cardinality of a set. In this manner, regardless of the number of valid candidates, the final sets $\Omega_l^{*}$ and $\Omega_r^{*}$ will each contain exactly $K$ proposals.

Geometric Feature Extraction. The angles between joints are critical to intuitively reflecting hand-related interactions. Based on this understanding, we extract joint geometric features from the left- and right-hand pose proposals. Formally, for the $i$-th proposal, the feature vector consisting of directional components of all non-repeated joint pairs is as follows:

$$
\mathbf{f}_i^{\tau} = \left[ dx_{jk}, dy_{jk} \mid \forall j, k \in \left\{1, \ldots, N_g\right\}, j < k \right]
$$

where $\tau \in \{l, r\}$ denotes the left-hand and right-hand proposals, respectively.
And $dx_{jk} = \frac{\hat{\mathbf{g}}_{i,2k-1} - \hat{\mathbf{g}}_{i,2j-1}}{\left\|\hat{\mathbf{g}}_{i,2k-1} - \hat{\mathbf{g}}_{i,2j-1}\right\|}$ and $dy_{jk} = \frac{\hat{\mathbf{g}}_{i,2k} - \hat{\mathbf{g}}_{i,2j}}{\left\|\hat{\mathbf{g}}_{i,2k} - \hat{\mathbf{g}}_{i,2j}\right\|}$ are the normalized directional components along the $x$- and $y$-axes between the $j$-th and $k$-th joints, respectively. By concatenating all the features of the left and right proposals, we obtain a global geometric vector, given by:

$$
\mathbf{f} = \left[\mathbf{f}_1^l; \ldots; \mathbf{f}_K^l; \mathbf{f}_1^r; \ldots; \mathbf{f}_K^r\right]
$$

$\mathbf{f}$ is a $2KN_g\left(N_g - 1\right)$-dimensional vector, which not only captures rich inter-joint clues but also enhances our understanding of hand interactivity through gesture contexts from both hands.

# 4.4. Interactivity Refinement

To obtain pose-aware interaction representations, we introduce hand pose prompts to refine the interaction-specific features using a pose-interaction attention mechanism. The refiner contains a self-attention layer [50] that focuses on capturing and modeling the intrinsic correlations within the interaction features, yielding the advanced interaction features $\mathbf{I}' = \left\{\mathbf{I}_i'\right\}_{i=1}^N$. Next, we introduce the pose offset-specific features $\mathbf{H}^{\mathrm{off}}$ as pose prompts to inject pose awareness into the advanced interaction features. Specifically, we feed $\mathbf{H}^{\mathrm{off}}$ into the attention mechanism as keys and values, while $\mathbf{I}'$ serves as queries.
Each output element $\mathbf{I}_i''$ is computed by aggregating all values weighted with attention: $\mathbf{I}_i'' = \sum_j \alpha_{ij}\left(\mathbf{W}_v \mathbf{h}_j^{\mathrm{off}}\right)$, where $\alpha_{ij}$ is the normalized attention weight, as follows:

$$
\alpha_{ij} = softmax\left(\frac{\left(\mathbf{W}_q \mathbf{I}_i'\right)^{\mathrm{T}} \mathbf{W}_k \mathbf{h}_j^{\mathrm{off}}}{\sqrt{d}}\right)
$$

where $\mathbf{W}_q, \mathbf{W}_k, \mathbf{W}_v$ are learnable embedding matrices corresponding to queries, keys, and values, respectively. After passing $\mathbf{I}''$ through the subsequent FFN layer, we finally obtain the refined pose-aware interaction representations $\mathbf{I}^*$. Our refiner contains only one decoder layer and thus does not consume many computational resources. In this way, we guide the interaction features to focus on regions and features that are closely related to the subtle changes in hand poses.

# 4.5. Feature Aggregation

To make the perception of interactivity more robust and effective, we aggregate the global hand geometric features $\mathbf{f}$ and the refined pose-aware interaction features $\mathbf{I}^* \in \mathbb{R}^{N \times d}$. First, the dimensions of both need to be aligned. To this end, we take a straightforward approach: expand the feature vector $\mathbf{f}$ by repeating it $N$ times. Next, we concatenate the tiled geometric feature map and the interaction features, and project them into a unified embedding space using an MLP.
The feature aggregation can be formulated as follows: $$ \mathbf { E } = f _ { e m b } \left( C o n c a t \left[ \mathbf { I ^ { * } } , T i l e \left( \mathbf { f } \right) \right] \right) $$ Using the enhanced embedding $\mathbf { E }$ as input to the interaction head significantly improves the model’s performance compared to using only the refined interaction features $\mathbf { I } ^ { * }$ . This improvement is attributed to the effective fusion of hand geometry and pose-aware interaction features, which complement each other and enhance the Ego-HOI detection model’s reasoning about interactive behaviors. # 4.6. Training and Inference In addition to the hand bounding box and category prediction heads mentioned in Eq. 2, our method employs another three heads to predict the verb category, object category, and object bounding box. Training Objective. The baseline HOI detection methods are usually trained using a multi-task loss, as follows: $$ \mathcal { L } _ { b a s e } = \lambda _ { L 1 } \mathcal { L } _ { L 1 } + \lambda _ { G I o U } \mathcal { L } _ { G I o U } + \lambda _ { h o c } \left( \mathcal { L } _ { o c } + \mathcal { L } _ { h c } \right) + \lambda _ { a c } \mathcal { L } _ { a c } $$ where L1 loss [49] $\mathcal { L } _ { L 1 }$ and GIoU loss [60] $\mathcal { L } _ { G I o U }$ are applied to both hand and object bounding box regression, and focal loss [61] $\mathcal { L } _ { a c }$ is for interaction classification. Notably, the Ego-HOI detection task considers hand classification, which differs from the third-person perspective. Therefore, the cross-entropy loss is employed not only for object classification $\mathcal { L } _ { o c }$ , but also for hand classification $\mathcal { L } _ { h c } . \lambda _ { L 1 } , \lambda _ { G I o U }$ , $\lambda _ { h o c }$ and $\lambda _ { a c }$ are the hyper-parameters for weighting each loss. The loss functions of the baseline models [54, 56] and comparison models [46, 52] are similar to Eq. 
10, but the details may differ due to the unique characteristics of each model. The learning of the auxiliary hand pose estimation is supervised by an average L1 loss, as follows: $$ \mathcal{L}_{pose} = \frac{1}{2N_g} \sum_{i=1}^{N} \sum_{j=1}^{2N_g} \left| \mathbf{g}_{i,j} - \hat{\mathbf{g}}_{i,j} \right| $$ where $\mathbf{g}_{i,j}$ and $\hat{\mathbf{g}}_{i,j}$ are the ground truth and the prediction of the $j$-th value of the $i$-th hand pose candidate, respectively. During training, the original loss $\mathcal{L}_{base}$ is combined with the auxiliary pose estimation loss in Eq. 11. The overall loss $\mathcal{L}$ is given by: $$ \mathcal{L} = \mathcal{L}_{base} + \lambda_{pose} \mathcal{L}_{pose} $$ where $\lambda_{pose}$ denotes the weight balancing $\mathcal{L}_{base}$ and $\mathcal{L}_{pose}$, and is 1.0 by default. Inference. Given a set of Ego-HOI prediction results $\left\{ \left( \hat{\mathbf{p}}_i^{\mathrm{i}}, \hat{\mathbf{p}}_i^{\mathrm{h}}, \hat{\mathbf{p}}_i^{\mathrm{o}}, \hat{\mathbf{b}}_i^{\mathrm{h}}, \hat{\mathbf{b}}_i^{\mathrm{o}} \right) \right\}_{i=1}^{N}$, where $\hat{\mathbf{p}}_i^{\mathrm{i}} \in \mathbb{R}^{|\mathcal{V}|}$ and $\hat{\mathbf{p}}_i^{\mathrm{o}} \in \mathbb{R}^{|\mathcal{O}|+1}$ are the classification probabilities for interaction and object respectively, the predicted category $c_i^\tau$ and its score $s_i^\tau$ are given by $\arg\max_k \hat{\mathbf{p}}_{i,k}^\tau$ and $\max_k \hat{\mathbf{p}}_{i,k}^\tau$.
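Eqs. 11 and 12 amount to only a few lines of code. The following pure-Python sketch uses our own variable names; it reads each pose candidate as $2N_g$ values ($N_g$ joints with two coordinates each):

```python
def pose_l1_loss(g, g_hat, n_joints):
    """Eq. 11: L1 loss over all hand pose candidates; each candidate
    carries 2 * n_joints values, and the sum is scaled by
    1 / (2 * N_g), matching the paper's normalization."""
    total = sum(abs(t - p)
                for cand_t, cand_p in zip(g, g_hat)
                for t, p in zip(cand_t, cand_p))
    return total / (2 * n_joints)

def overall_loss(l_base, l_pose, lam_pose=1.0):
    """Eq. 12: L = L_base + lambda_pose * L_pose (lambda = 1.0 by default)."""
    return l_base + lam_pose * l_pose
```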
Taking the hand classification into account, the confidence score of an Ego-HOI prediction is defined as follows: $$ s_i^{\mathrm{ehoi}} = s_i^{\mathrm{i}} \cdot s_i^{\mathrm{h}} \cdot s_i^{\mathrm{o}} $$ We then select only the top predictions whose confidence scores exceed a threshold among all $N$ results. # 5. Experiments # 5.1. Integration to Off-the-shelf HOI Detectors Our method is general and can be seamlessly integrated with most existing HOI detection approaches; the integration process is straightforward. In this work, we select two representative yet diverse baseline methods to thoroughly evaluate the effectiveness of our proposed approach. MUREN [54] is an end-to-end Transformer-based approach with a three-branch architecture. It decouples human detection, object detection, and interaction classification, using independent decoder layers to extract task-specific tokens for sub-task learning. In our integration, the output of the interaction branch's attention fusion module is used as the interaction representations $\mathbf{I}$, while the output of the human branch's attention fusion module serves as the hand representations $\mathbf{H}$. QPIC [56] is one of the pioneering Transformer-based set prediction models for HOI detection. It employs a single decoder to predict all three elements of HOI: human, verb, and object. In our integration, the unified features output by the decoder are used indiscriminately as the original interaction features $\mathbf{I}$ and hand features $\mathbf{H}$. We apply a vanilla encoder to the unified features to derive the object-specific features. # 5.2. Experimental Setup Implementation Details. Our experiments cover the two baselines and their integrations with our method. We also include other existing HOI detection methods for comparison, all of which are modified and retrained for the Ego-HOI detection task.
Our experimental and analytical endeavors focus on the instance-level setting, as this level provides richer details. Image-level detection can be achieved by simply modifying the prediction heads or post-processing, so we do not compare them here. To obtain better detection performance, we fine-tune the object detector (usually DETR [50] with a ResNet-50 backbone) on the EgoHOIBench training set. All experiments are performed on 4 RTX 4090 GPUs. The hyper-parameters in the experiment remain consistent with the default settings of respective methods, but the batch size and initial learning rate are adjusted according to the supported computing resources. Specifically, all experiments of HOTR [52] and MUREN [54] adopt a batch size of 16 and an initial learning rate of 5e-5. For STIP [46], the HOI detector with a frozen object detector uses a batch size of 16 and an initial learning rate of 5e-5, while the batch size is 8 and the initial learning rate is 1e-5 during the two-stage joint fine-tuning. QPIC [56] is trained with a batch size of 8 and an initial learning rate of 5e-5. Table 3 Performance and efficiency comparison of different HOI baselines with and without integration with our method. For clarity, all AP and Accuracy metrics are presented as percentages. Evaluation Metrics. We evaluate models’ performance on the Ego-HOIBench benchmark using mean average precision (mAP) with IoU thresholds ranging from 0.5 to 0.95 with a step size of 0.05. A detection result is considered a true positive only if the predicted hand, object, and verb categories are all correct, and the hand and object bounding boxes have IoUs with ground truths larger than the specified threshold. We further divide all the Ego-HOI triplet categories into rare and non-rare according to whether they appear at least 100 times in the training set. Based on this criterion, we report the mAP for the Full, Rare, and Non-rare categories. 
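The true-positive criterion just stated can be made concrete with a short sketch. The dictionary keys and box format `(x1, y1, x2, y2)` are illustrative assumptions, not the benchmark's actual data layout:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def is_true_positive(pred, gt, thr):
    """Ego-HOI TP criterion: the hand, verb, and object categories
    must all match, and both the hand and object boxes must exceed
    the IoU threshold with their ground truths."""
    cats_ok = all(pred[k] == gt[k] for k in ("hand", "verb", "object"))
    boxes_ok = (iou(pred["hand_box"], gt["hand_box"]) > thr and
                iou(pred["obj_box"], gt["obj_box"]) > thr)
    return cats_ok and boxes_ok
```

Sweeping `thr` from 0.5 to 0.95 in steps of 0.05 and averaging per-category AP reproduces the mAP protocol described above.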
The mAPs of the full testing set at IoU thresholds of 0.5 and 0.75 are reported separately, denoted as $\mathrm{mAP}_{50}$ and $\mathrm{mAP}_{75}$, similar to [44, 54]. In addition, to highlight the improvement of our method in interaction recognition, we introduce Top@G Verb Accuracy as a metric. For an image to be considered correct, the G predictions with the highest probabilities must completely cover the set of true verb labels, where G is the number of true labels. # 5.3. Improvement on Two Different Baselines Table 3 shows the performance comparison of two mainstream baseline HOI detection methods before and after integrating our proposed method. Both baseline methods achieve significant performance improvements by incorporating our approach. Specifically, MUREN [54] achieves a 1.8% improvement in Full mAP and a 4.3% increase in Top@G Accuracy. As for QPIC [56], Full mAP is improved by 1.7% and Top@G Accuracy obtains a substantial improvement of 6.2%, setting a new state of the art. These results demonstrate that our method is applicable not only to models with a unified decoder but also to methods that decouple the sub-tasks. Moreover, our scheme imposes no specific restrictions on the backbone. Note that after integrating our module, both baseline methods still support end-to-end training and inference. We also compare their model sizes and runtime efficiencies to show that the performance improvement is not due to an increase in model size. Although our method adds several million parameters, this increase is very limited relative to the original model size. Furthermore, in terms of Frames Per Second (FPS), the runtime speed drop is negligible, only a few percentage points. These results show that our method is lightweight and efficient. # 5.4.
Comparison with State-of-the-art Methods Table 4 presents a detailed performance comparison between our proposed method and several representative approaches, including the one-stage single-branch method QPIC [56], the one-stage two-branch method HOTR [52], the one-stage three-branch method MUREN [54], and the two-stage method STIP [46] along with its jointly fine-tuned version. Here, all AP and Accuracy metrics are presented in percentage form. We use QPIC as the baseline and integrate it with our scheme for comparison. Our method (last row) surpasses all existing one-stage and two-stage methods, in both Ego-HOI detection and interaction recognition. A noteworthy phenomenon is that the rare triplet categories consistently underperform the non-rare categories in terms of mAP across all other methods. In contrast, our method significantly enhances the detection performance of rare categories, even surpassing that of non-rare categories. The superior performance of our method is mainly due to the fact that we effectively extract and incorporate hand pose cues into the interaction embedding. This enhancement significantly boosts the model's ability to distinguish complex and rarely seen interactions, further improving the overall performance of Ego-HOI detection. # 5.5. Ablation Study We conduct various ablation studies to validate the effectiveness of our method. For each ablation experiment, we modify one hyper-parameter or component while keeping all others at their optimal settings. Table 4 Performance comparison of our proposed method (last row) and state-of-the-art methods on the Ego-HOIBench dataset. All metrics are presented as percentages. $\dagger$ denotes that the object detector and HOI detector are further fine-tuned jointly. Table 5 Ablation study of each component in our HGIR scheme, starting from the baseline and progressively building up to our complete method. ✓ means that the corresponding component is used. HPE: Hand Pose Estimation.
IR: Interactivity Refinement. HGE: Hand Geometry Extraction. The MUREN baseline is used across all our ablation studies. We choose $\mathrm{mAP}_{50}$, Full mAP, and Top@G Accuracy as representative metrics to evaluate the performance of each variant. Components of the HGIR Scheme. To thoroughly assess the impact of each component in our method, we conduct an ablation study by gradually incorporating them into the baseline. The components evaluated include Hand Pose Estimation (HPE), Interactivity Refinement (IR), and Hand Geometry Extraction (HGE). The results are summarized in Table 5. Compared with the baseline, introducing a supervised HPE block results in a relative Full mAP gain of 1.0%. This gain indicates that the auxiliary task enhances the learning of hand features, which indirectly benefits Ego-HOI detection. Next, integrating the IR block yields further advancements. While the gains in $\mathrm{mAP}_{50}$ and Full mAP are relatively modest, Top@G Accuracy achieves a significant leap to 84.7%, an increase of 3.4%. These performance improvements show that incorporating pose prompts significantly boosts the expressiveness of the interaction features. Our complete method, shown in the last row of Table 5, which adds the HGE component to the two above, achieves notable improvements across all three metrics. Specifically, $\mathrm{mAP}_{50}$ is further increased by 0.5%, Full mAP is significantly improved by 0.7%, and Top@G Accuracy by 1.0%. These results demonstrate that the extracted hand geometric features provide complementary information, significantly enhancing interaction recognition and detection.
The enhancements observed in this ablation study confirm the synergy of the components within the HGIR scheme and highlight the importance of utilizing hand geometric and refined interaction features to improve the model's accuracy and robustness in Ego-HOI perception. Table 6 Performance comparison of different hand pose estimation schemes. Pose Estimation Schemes. We compare the impact of different pose estimation schemes, as shown in Table 6. We explore two main categories of methods: directly predicting hand joint positions from the hand features, and indirectly estimating them by combining reference points and offsets. With direct prediction (row a), both $\mathrm{mAP}_{50}$ and Full mAP are the lowest among the four schemes. The challenge with this scheme is that it is equivalent to predicting offsets with the upper-left corner of the image as the reference point; the long distance between this reference point and the hand makes accurate prediction extremely difficult. We evaluate various schemes for computing reference points, ranging from learnable points to hand box centers and top centers. Compared to direct prediction, leveraging hand-detection-specific features to infer reference points (row b) significantly improves Full mAP by 0.9%. However, this notable improvement in Full mAP is not mirrored by the other two metrics. In contrast, using the centers (row c) or top centers (row d) of the predicted hand boxes as references achieves better results in terms of $\mathrm{mAP}_{50}$. The best performance is achieved with the top-center reference points, with $\mathrm{mAP}_{50}$ increased to 84.1%, Full mAP increased to 66.8%, and Top@G Accuracy reaching 85.7%.
These improvements are likely due to explicitly constraining the reference points and estimated joint positions to the vicinity of the hand, leading to more stable and accurate joint localization and further enhancing the overall Ego-HOI detection performance. Figure 6: Qualitative comparison between the baseline and our proposed method. For each image, the detection outputs of our proposed method are marked in green, while the baseline outputs are marked in red. The predicted classes and scores are presented in the captions. If no true positive is predicted, the score is marked as none. For clarity, the hand category is omitted from the image captions. Table 7 Performance comparison of different numbers of selected pose proposal pairs. Number of Selected Pose Proposal Pairs. We also study the impact of the number of selected pose proposal pairs on model performance. Specifically, we test different values of $K$ (1, 2, 3, and 4), where $K$ means that only the top $K$ pairs of left-hand and right-hand pose proposals with the highest scores are used to extract hand geometric features. The results are summarized in Table 7. The model performs best when $K = 1$. We speculate that increasing the number of proposal pairs may introduce more invalid or low-quality geometric features, which dilutes the effective information and negatively impacts the stability of relational reasoning. # 5.6. Qualitative Results and Discussions To qualitatively demonstrate the advantages of our method in Ego-HOI detection, comparison examples between the baseline and our proposed method are provided in Fig. 6. Our method is particularly effective at improving the confidence of interaction predictions. For instance, in Case 1, the baseline model predicts a right-hand reach out drawer with a score of 0.299, while our model significantly improves this score to 0.936.
Furthermore, our method successfully recognizes Ego-HOI triplets for which the baseline method fails to output a true positive prediction (Cases 7 and 8). These improvements cover scenes with small or occluded objects (Samples 4, 6, 7, 8) and complex scenes (Samples 2, 5, 8), showing that our approach provides more accurate predictions under challenging conditions. Overall, our method shows clear advantages in prediction accuracy and robustness. We also compare our proposed method with the baseline across different object occlusion ratios. For Ego-HOI detection, we count prediction results according to their ground truth occlusion ratios, as shown in Fig. 7 (top). For interaction recognition, we classify each prediction according to the average ground truth occlusion ratio of the instances within each image, as shown in Fig. 7 (bottom). Overall, the performance of both the baseline and our method shows a downward trend as the occlusion ratio increases, because occlusions obscure critical features and thus hinder the model's learning. Nonetheless, our method consistently outperforms the baseline at all occlusion levels. In particular, at the high occlusion level (0.8~1), our method improves Full mAP by 5.7% and Top@G Accuracy by 4.0% compared to the baseline. These significant improvements are mainly due to our method's ability to leverage poses as additional cues to enhance interaction features and infer interactions more effectively, even when the visible portion of an object is too limited to provide enough information.
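The per-occlusion-level breakdown above can be computed with a simple binning helper. The bin edges and the `(ratio, correct)` record format are our illustrative assumptions, not the paper's evaluation code:

```python
def bin_by_occlusion(results, edges=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Group prediction outcomes by ground-truth occlusion ratio so a
    per-level metric (as in Fig. 7) can be computed. `results` is a
    list of (occlusion_ratio, is_correct) pairs; returns the accuracy
    per non-empty bin."""
    bins = {(lo, hi): [] for lo, hi in zip(edges, edges[1:])}
    for ratio, correct in results:
        for (lo, hi), bucket in bins.items():
            # put ratio == 1.0 into the last (0.8~1) bin
            if lo <= ratio < hi or (hi == edges[-1] and ratio == hi):
                bucket.append(correct)
                break
    return {span: sum(b) / len(b) for span, b in bins.items() if b}
```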
Understanding the interaction between humans and objects has gained much attention in recent years. Existing human-object interaction (HOI) detection methods mainly focus on third-person perspectives, overlooking a more intuitive way of viewing HOI from the egocentric perspective, namely Ego-HOI. This paper introduces Ego-HOIBench, a new dataset to promote the benchmarking and development of Ego-HOI detection. Our Ego-HOIBench comprises more than 27K egocentric images with high-quality hand-verb-object triplet annotations across 123 fine-grained interaction categories and locations, covering a rich diversity of scenarios, object types, and hand configurations in daily activities. In addition, we explore and adapt third-person HOI detection methods to Ego-HOIBench and illustrate the challenges of hand-occluded objects and the complexity of single- and two-hand interactions. To build a new baseline, we propose a Hand Geometry and Interactivity Refinement (HGIR) scheme, which leverages hand pose and geometric information as valuable cues for interpreting interactions. Specifically, the HGIR scheme explicitly extracts global hand geometric features from the estimated hand pose proposals and refines the interaction-specific features using pose-interaction attention. This scheme enables the model to obtain a robust and powerful interaction representation, significantly improving its Ego-HOI detection capability. Our approach is lightweight and effective, and it can be applied to HOI baselines in a plug-and-play manner to achieve state-of-the-art results on Ego-HOIBench. Our project is available at: https://dengkunyuan.github.io/EgoHOIBench/
# 1 Introduction Open Source Software (OSS) is extensively used across various sectors and plays a critical role in powering modern technological systems, from critical infrastructure to innovative applications. Many foundational software systems, such as operating systems and database management systems, are written in C/C++ [35]. Vulnerabilities in these systems can cause significant damage [47] (e.g., the Heartbleed bug in OpenSSL [6]), making automated vulnerability detection for C/C++ OSS essential. Additionally, automated analysis of C/C++ OSS is crucial for areas like vulnerability management, quality assurance, and performance optimization. Building C/C++ software [53] involves compiling, resolving dependencies, linking libraries, configuring environments, and managing platform-specific challenges. These processes are critical for performing automated program analysis, especially dynamic analysis, since such tasks require the project to be built into a binary beforehand. However, a significant gap remains in the automated analysis of C/C++ OSS [15]: the absence of a standardized, automated method for building repositories from C/C++ source code. Bridging this gap is essential for enhancing the efficiency and effectiveness of program analyses. The significance of automatically building software from source code for automated analysis can be summarized in two key aspects: (1) Facilitation of Static Program Analysis: Many static analysis tasks rely on intermediate representations (IR), such as LLVM IR [29]. This typically requires that the project successfully installs its dependencies and can be compiled.
(2) Enablement of Dynamic Program Analysis: Dynamic program analysis, such as fuzzing, also requires that the program be compilable from source code, particularly when source-level instrumentation is needed. Automatically built projects can also assist with several downstream tasks, such as automated vulnerability reproduction. Existing research efforts [16, 18, 20, 36, 59, 60] mainly focus on Java/Python, while C/C++ remains underexplored due to its higher complexity. Unlike the relatively unified and automated build and package management tools in Java (e.g., Maven [2], Gradle [13]) and JavaScript (e.g., NPM [38]), or Python's convenient pip [48], the C/C++ ecosystem contains over 20 distinct build systems [7], with lower levels of standardization and automation, posing significant challenges. To better understand the automation of C/C++ project builds, we investigate the build systems of 100 popular open-source C/C++ projects across 10 different categories, using their default build commands and settings. The study shows that more than 70% of these projects fail to build successfully without manual intervention, suggesting that most C/C++ projects require additional configuration, such as downloading dependencies or setting compilation parameters. To further investigate the root causes of these failures, we manually fix the errors encountered during the build process, guided by the failure messages, iterating until successful completion. In total, we encounter 384 errors across 79 projects and spend over 153 man-hours resolving them. This underscores the significant challenges in automating the C/C++ build process. Challenges.
Drawing from the root causes and insights gathered in our study, we summarize the following challenges associated with C/C++ build automation:

• Challenge 1: Complexity of Dependency Management. C/C++ projects often rely on numerous external libraries and tools, which require careful management and configuration of dependencies. Although package management tools like Conan [23] and vcpkg [33] are available, they support different sets of libraries and have distinct usage patterns, which makes dependency management a complex task.

• Challenge 2: Diversity of Build Systems and Compilation Options. C/C++ projects adopt at least 20 different build systems (such as Makefile [10], CMake [22], Autotools [11], and SCons [52]), each with unique syntax and configuration requirements. Additionally, these projects employ a wide array of compilers (such as GCC [9] and Clang [39]) and toolchains, each with its own options and configuration methods. Such diverse build systems and toolchains trigger substantial errors when building large-scale real-world projects.

• Challenge 3: Complexity of Error Diagnosis and Debugging. The diverse build processes in C/C++ projects often generate many error messages at multiple levels, such as pre-processing, compilation, and linking, which vary greatly across different projects.

Our System. Large Language Models (LLMs) are renowned for their strong capabilities in understanding complex documentation [19, 62], generating structured instructions [24, 50], and resolving errors [14, 49]. Inspired by these abilities, we investigate whether LLMs can extend their effectiveness to the domain of build systems and error resolution.
In particular, we apply an LLM (i.e., GPT-4o) to solve specific build issues identified in our empirical study and observe that it can successfully address several of them (as demonstrated in the experiment in Section 5.1). For instance, during GCC builds, the LLM can automatically install dependencies like GMP, MPFR, MPC, and Flex, and suggest 64-bit compilation, thus avoiding errors from the default build instructions and simplifying dependency management. Such results demonstrate the LLM's potential capability in this domain. At the same time, they also indicate that for complex project builds involving multi-step processes, the effectiveness of standalone LLMs is limited. Relying solely on LLMs can only address a small fraction of errors, highlighting the need for more refined strategies capable of continuously addressing build failures. To address the above challenges, we propose an LLM-based agent system named CXXCrafter that leverages LLMs to dynamically manage complex build processes. The system consists of three modules: the Parser Module, the Generator Module, and the Executor Module. Specifically, the Parser Module automatically extracts and parses relevant information from the repositories, such as dependencies and build system configurations. The Generator Module utilizes LLMs to generate candidate build solutions (i.e., Dockerfiles, which include shell scripts for the entire software build process) based on the parsed information. Additionally, the Generator is responsible for modifying the candidate build solutions in response to error feedback from the Executor. The Executor Module oversees the build process in the Docker container where the build is performed, capturing error messages and determining whether the build is successful. The Generator and Executor form a dynamic interaction loop, continuously addressing build issues until the process completes successfully. Our design effectively addresses the three challenges mentioned above.
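The Generator-Executor interaction loop just described can be sketched as follows; `generate` and `execute` are stand-ins for the LLM call and the containerized build, and the function names and signatures are our assumptions rather than the system's actual API:

```python
def build_with_feedback(repo_info, generate, execute, max_iters=5):
    """Minimal sketch of the Generator-Executor loop.
    generate(repo_info, last_error) -> candidate Dockerfile text;
    execute(dockerfile) -> (success, error_message).
    Build errors are fed back to the generator until the build
    succeeds or the iteration budget is exhausted."""
    error = None
    for _ in range(max_iters):
        dockerfile = generate(repo_info, error)  # LLM drafts/repairs the Dockerfile
        ok, error = execute(dockerfile)          # build inside a container
        if ok:
            return dockerfile                    # working build recipe
    return None                                  # budget exhausted
```

In the real system the parsed repository information (build files, documentation, declared dependencies) would seed `repo_info`, and `execute` would capture the container's build log as the error message.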
In particular, the Parser can identify the required dependencies to avoid potential dependency errors. Besides, CXXCrafter employs an automated, iterative feedback process powered by LLMs to dynamically identify and install dependencies, effectively addressing issues such as uncertain dependencies or version conflicts (Challenge 1). Furthermore, CXXCrafter leverages LLMs' rich domain knowledge via nested prompt templates to unify different build systems and compilation options (Challenge 2). For Challenge 3, CXXCrafter captures real-time feedback during the build process, enabling efficient error diagnosis and debugging by adapting to both known and new errors arising during the build. We evaluate CXXCrafter on both the aforementioned 100 popular C/C++ projects and the larger Awesome-CPP dataset [7], which includes 652 projects across various categories. Specifically, CXXCrafter successfully builds 587 out of the 752 projects, achieving a success rate of 78%. This significantly outperforms other heuristic approaches (39.01%) and the bare LLM (34.22%). Although its overall performance does not surpass the build success rate achieved by humans, CXXCrafter resolves three projects that cannot be successfully built through human effort. Our analysis of these three projects shows that CXXCrafter leverages the implicit build knowledge embedded in the LLM and the powerful retrieval capabilities of its Parser Module, offering unique advantages even compared to human efforts in project builds. Additionally, a component analysis demonstrates its effectiveness in designing agents capable of handling complex tasks. Finally, we assess efficiency and cost; the results show that a successful build takes 875 seconds on average at a financial cost of $0.41. These evaluation experiments underscore the practical value of CXXCrafter. Contributions.
This paper makes the following main contributions:

• Originality: To the best of our knowledge, we are the first to explore the idea of utilizing an LLM agent to automate the C/C++ build process, and our study demonstrates promising results.

• Empirical Study: We conduct an empirical study on the build processes of 100 popular open-source C/C++ projects to understand the current state of build tools. By identifying and categorizing 384 build errors, we provide a comprehensive analysis of the challenges of automating C/C++ builds, offering key findings on the root causes of such failures.

• Approach: We propose CXXCrafter, an LLM-based agent system designed to automate the build process for large-scale C/C++ repositories. In particular, CXXCrafter dynamically manages dependencies, resolves build issues, and diagnoses errors, effectively addressing challenges such as handling various build systems and installing dependencies.

• Evaluation: Through extensive evaluations on 752 projects, CXXCrafter achieves an impressive build success rate of 78%, demonstrating its pioneering effectiveness in C/C++ build automation. Our research has the potential to support downstream program analysis efforts.

# 2 Background & Related Work

Software Building. Software building [53] converts code into executables or libraries, involving tasks like dependency resolution, compilation, and linking. For large projects, automated build systems become essential, as manual handling becomes impractical. These systems streamline the process, managing tasks efficiently. Different programming languages have specific build systems: Java uses Apache Ant [1], Maven [2], and Gradle, while JavaScript relies on NPM [38] and Python uses setuptools [40]. In C/C++ projects, tools like CMake, Make, Ninja, and Bazel are frequently used.
Additionally, building differs from compiling, which is just one part of the broader building process, and from Continuous Integration (CI), where building is a prerequisite for integration. Several studies focus on automating software builds, mostly for languages like Java, with fewer addressing C/C++ projects. Hassan et al. [16] investigate Java build failures, revealing that 86 out of 200 projects fail to build automatically using default commands. Other studies have explored build [18, 36, 59, 60] and CI failures [58]. For example, Lou et al. [30] analyzed 1,080 build issues from Stack Overflow related to Maven, Ant, and Gradle, finding that 67.96% of the issues were resolved by modifying build scripts for plugins and dependencies. Similarly, Olivier et al. [37] analyzed over 1.2 million build logs from Google's OSS-Fuzz service to identify common failure patterns. In the context of C/C++, we only found CPPBuild [15] for automating the build process, but it is limited to CMake, Make, and Autotools, resulting in lower accuracy for open-source projects with other build systems. Furthermore, while some works focus on containerization techniques and Dockerfile generation [17, 32, 43], they typically do not address building software from source. LLMs and Agents. LLMs have shown outstanding performance across multiple dimensions, including semantic understanding [46], code generation [21], and implicit knowledge storage [57]. However, they still face several limitations [28, 42], such as solving complex tasks, maintaining context over long interactions, executing actions in real-world environments, and engaging in dynamic, multi-turn dialogues. LLM-based agents, designed to address these challenges, integrate more advanced functionalities. They are increasingly used in a variety of scenarios [3], including code generation [12, 51] and security tasks [5], showing significant promise for future advancements.
Table 1. The Top 100 Projects and Their Categories

# 3 Empirical Study

In this section, we conduct an empirical study to assess the current status of building C/C++ projects, aiming to determine how effectively existing build systems handle the complexities and challenges of real-world projects. Our research team consists of 4 programmers, each with extensive experience in C/C++ development and building. Specifically, we manually attempt to build 100 widely-used C/C++ projects, devoting approximately 153 man-hours to resolving the resulting build failures. Out of the 100 projects, 86 are built successfully, while the remaining projects either require an excessive time budget or encounter unresolved issues. Additionally, we analyze the errors encountered during the building process and summarize their root causes. The research questions, datasets, and study results are presented in detail as follows. Research Questions. Referring to a recent study [16], which investigates the build mechanism and ecosystem of Java, we design the following research questions for our study on C/C++ OSS:

• RQ1 (Default Build Success Rate): What proportion of popular C/C++ projects can be successfully built using their respective build systems and default build commands?

• RQ2 (Build Failure Causes): What are the major root causes of the observed build failures among these projects?

Fig. 1. The Statistics of Build Tools used in the Top 100 and Awesome-CPP Datasets (introduced in Section 5). Dataset. For our empirical study, we construct a dataset (hereinafter referred to as Top100) by selecting the top 100 most popular open-source C/C++ projects from GitHub, spanning 10 distinct categories to ensure diversity and comprehensiveness.
These categories include foundational projects such as operating systems and database management systems, as well as emerging projects like AI frameworks. The projects, as summarized in Table 1, are mostly the top 10 in their respective fields based on star ratings, except for those that do not meet the following requirements. Since our builds are conducted on a Linux system, we exclude any projects that are incompatible with Linux builds (e.g., CnC_Remastered_Collection). Additionally, repositories that are not fully open-source (e.g., AppCode, Cvelop) or do not qualify as complete projects (e.g., 3d-game-shaders-for-beginners, minimp3) are also excluded. We focus on these repositories because they are frequently analyzed and studied in downstream applications such as program analysis, making them ideal candidates for our research. Additionally, as popular projects, they exemplify common practices and challenges in building C/C++ projects within the open-source community.
Table 2. Results of Executing Default Build Commands on the Top100 Dataset
# 3.1 Success Rate of Default Build Commands (RQ1)
To answer RQ1, we employ a three-phase process to apply default build commands to each C/C++ project. In the first phase, we gather the most commonly used build commands for popular build tools through an extensive review of online tutorials and documentation. For example, we choose ‘make’ for Makefile-based projects, ‘mkdir build && cmake .. && make’ for CMake combined with make, and ‘./configure && make’ for Autotools. A complete list of build systems and their corresponding default commands is provided in the appendix [56]. In the second phase, we identify the build systems used by the 100 projects. We manually inspect each project’s source code directory to identify the build system.
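The phase-one command gathering and the entry-file inspection can be pictured together as a small lookup table. The sketch below is illustrative only (it covers just the three build systems quoted above; the study's full command list is in the appendix [56], and the names here are not CXXCrafter's implementation):

```python
# Illustrative mapping from well-known entry files to a build system and its
# default build command, following the examples quoted in the text.
DEFAULT_COMMANDS = {
    "CMakeLists.txt": ("CMake", "mkdir build && cmake .. && make"),
    "Makefile": ("Make", "make"),
    "configure": ("Autotools", "./configure && make"),
}

def detect_build_system(top_level_files):
    """Return (build_system, default_command) for the first entry file found.

    Dict insertion order gives CMake priority, reflecting that many projects
    ship both CMakeLists.txt and a generated Makefile.
    """
    for entry, (system, command) in DEFAULT_COMMANDS.items():
        if entry in top_level_files:
            return system, command
    return None, None
```

A project listing both 'CMakeLists.txt' and 'Makefile' would thus be classified as CMake-based, mirroring the manual inspection described above.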
For projects that support multiple build systems, we determine the primary system and entry files (the files used to initiate the build process) based on the official documentation. If the documentation does not offer a clear recommendation, we randomly select one to proceed with. In cases where the selected build system fails in the subsequent steps, we switch to another one. If the chosen system succeeds, the process completes. In the third phase, we apply the appropriate build commands to each project. To ensure consistency, all builds are executed separately within a newly installed Ubuntu 22.04 Docker environment, without any pre-installed dependencies. If a project has specific OS requirements, we switch to the required system. During the build process, we document the build systems used in the 100 projects, as shown in Figure 1. The statistics reveal significant variability in the build systems employed by popular projects. In particular, most projects support CMake and Make, with these two systems often being used in combination. The results of applying the default build commands to the Top100 dataset are presented in Table 2. As shown in the table, only 21 projects are successfully built, highlighting that even well-known and actively maintained projects demonstrate low compatibility with default configurations. For the remaining 79 projects, we observe that the failure reasons can be attributed to a lack of specific setups, which fall mainly into 3 categories. First, 51 projects encounter dependency-related errors, where required dependencies, such as libpng when building mozjpeg, are missing and not automatically installed. For projects with missing dependencies, we manually review the project’s documentation, including files like “README”, “Contribution”, “Compile”, and “Building”, to check for any information on dependencies required before building. Out of the 51 projects, 28 have missing dependencies that are not mentioned in their documentation.
Many projects do not clearly specify which dependencies are required, forcing developers to spend extra time addressing these issues. Second, 17 projects face issues related to incompatible build system versions or missing tools. For example, the installed Bazel version does not meet mediapipe’s requirements. Third, 11 projects fail due to incorrect build commands, such as needing to specify the target as ‘build’ when running ‘make’ for LocalAI. In total, resolving these issues for the 79 failed projects requires additional, non-default configurations across all three categories.
Finding 1: The build systems of C/C++ projects vary significantly, yet the level of automation among existing systems remains relatively low. Furthermore, many projects often require additional specific setup steps to build successfully.
Table 3. Results of the Build Process by Humans on the Top100 Dataset
# 3.2 Root Causes of Build Failures in Actual Build Processes (RQ2)
To answer RQ2, we continue building the 79 C/C++ projects that initially failed with the default build commands by systematically investigating each build failure. Leveraging expert knowledge and online resources, 4 programmers resolve errors one by one, documenting each issue and verifying the resolution by ensuring the error no longer occurs. In total, we have successfully built 65 out of the 79 projects. However, 14 projects cannot be built for two reasons: unresolved source code errors and exceeding the four-hour build time limit. Among these 14 projects, 9 encounter errors that cannot be resolved (e.g., the “Unknown CMake command ‘harfbuzz_Populate’ ” in MuseScore). We determine that these errors are unlikely to be fixed because similar issues have been reported by other developers in the official GitHub repositories, yet the project maintainers have not provided effective solutions.
By searching the official GitHub issues using keywords from the error messages, we find that, of the 9 issues, 7 are still open and 2 are closed. However, even for the closed issues, the proposed solutions do not resolve our build problems. Additionally, 5 out of the 14 projects fail due to timeouts. Based on our observation, builds that exceed 4 hours rarely complete on their own. Compiling large projects, such as the Linux kernel, takes less than 20 minutes on our server, so the four-hour window allows sufficient attempts to address any issues. Therefore, we consider projects that exceed this time limit as failures to avoid unnecessary time expenditure. After completing all the build processes, we resolve a total of 384 errors, nearly 5 errors per project on average. By conducting a systematic taxonomy, we categorize the root causes of these failures, as summarized in Table 4. The build failures of C/C++ projects are classified into three main categories: library issues, build toolchain issues, and configuration issues. In addition to these, we also identified other factors that contributed to the failures, such as code errors within the projects, which are classified as other issues. The following introduces the details.
Table 4. Root Causes of Build Errors in the Building of the Top 100 Projects by Humans
3.2.1 Library Issues. Library issues often occur when the required libraries are either not installed, not placed in system environment paths, or have incompatible versions. These issues typically result in errors such as “library not found” or “undefined reference”, as the compiler or linker is unable to resolve the symbols or functions defined in those libraries. Among open-source projects, developers often share only the core source code and exclude the installed libraries to keep the repository concise.
However, this can lead to library-related errors when others attempt to build the project without the necessary dependencies installed. This issue occurs a total of 284 times in our study, making it the most frequent problem encountered during C/C++ project builds. Compared to other build systems, such as Maven or Gradle in Java [16], we find that C/C++ build systems generally make less effort to automatically reinstall removed libraries. To some extent, this may be due to the more complex nature of C/C++ dependencies and the lack of a unified package management tool like those found in higher-level languages such as Java or Python. These issues can be further categorized into three sub-categories as follows.
Library Not Installed. In our empirical study, most library issues are attributed to missing libraries, with 263 out of 284 cases falling into this category. These missing libraries are typically, though not always, removed from open-source projects by developers to save space or for other reasons. As a result, builders must manually obtain them, for example via package managers or by building from source. For instance, during the builds of libde265, OpenRCT2, and minetest, SDL2 is not found and needs to be installed using apt. Errors related to missing libraries frequently occur during the preparation phase, when build systems check for dependencies. However, if left unresolved, they can also surface later during the compilation or linking phases, as observed in the building processes of projects like rpcs3, aseprite, and mxnet. While package management tools like vcpkg and Conan exist for C/C++ development, they are not as widely adopted or standardized as those used in higher-level languages like Java.
Library Not in Path.
This issue arises when libraries are installed but not included in the system’s search paths, such as ‘LD_LIBRARY_PATH’, preventing the build system from locating them. In our study, this occurs 10 times, causing errors during the compilation or linking phases when dependencies cannot be resolved. For example, MuseScore fails to build because the file ‘FindQt6Qml.cmake’ is not found in ‘CMAKE_MODULE_PATH’.
Library Version Inconsistency. This issue occurs when the installed library version is inconsistent with what the project requires. Due to API or behavioral discrepancies, this leads to incompatibilities during the dependency management, linking, or compilation phases. In our study, this issue is observed 11 times in projects such as Shotcut, OpenPose, and Sonic-Pi, where the resulting conflicts cause build failures. Resolving such issues typically involves updating the project to accommodate the installed library version or reverting to an older, compatible version.
3.2.2 Build Toolchain Issues. Build toolchain issues refer to problems related to missing or incompatible versions of tools necessary for the build process, such as compilers, linkers, or other essential utilities. These issues typically arise when the project’s toolchain is not fully specified or when the available version does not meet the project’s requirements. In our study, this occurs 64 times. These toolchain issues can be further divided into two categories, as outlined below.
Build System Version Conflict. This sub-category of error occurs 6 times in our study. We ensure that the corresponding build systems are installed by default, thus avoiding missing-toolchain issues. However, version mismatches occasionally occur. For example, in the wav2letter project, the environment requires a minimum CMake version of 3.29.2, but the version available in the default APT repositories is 3.25.1.
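A requirement mismatch like the wav2letter case (CMake >= 3.29.2 required, 3.25.1 available) reduces to a numeric comparison of version components; note that plain string comparison would get cases like 3.9 vs. 3.10 wrong. A minimal sketch, assuming purely numeric dotted versions (no suffixes like '-rc1'):

```python
def meets_minimum(installed, required):
    """Compare dotted version strings component-by-component as integers."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)
```

With this check, the APT-provided CMake from the example fails the project's requirement: `meets_minimum("3.25.1", "3.29.2")` is False.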
Due to the unique role of build systems within the toolchain, we classify this type of error as a separate sub-category.
Other External Tools Missing or Conflicting. The toolchain also includes external utilities such as debuggers, linkers, and profilers, which may be necessary for certain stages of the build or testing process. Incompatible or missing versions of these tools caused issues in 58 cases. For example, missing or conflicting versions of utilities like GDB or Valgrind can lead to failures during debugging or performance analysis stages.
Finding 2: Library issues (e.g., library not installed, version inconsistency) are the most significant challenges in C/C++ project building, followed by build toolchain issues and configuration issues.
3.2.3 Configuration Issues. Configuration issues occur when a project’s build scripts are misconfigured or incompatible with the specific environment. These issues include platform or operating system incompatibilities, incorrect build options, and misconfigured files.
System or Equipment Incompatibility. Certain projects are designed to run exclusively on specific operating systems or hardware platforms, and attempting to build them on an unsupported platform often results in failures. For example, projects like OpenPose recommend using Ubuntu versions between 14 and 20, while older projects such as OpenALPR suggest Ubuntu 16.04. Additionally, hardware-specific requirements, such as the absence of a GPU, can prevent the building of projects reliant on CUDA and cuDNN. In our evaluation, such errors occurred 7 times.
Incorrect Build Commands. Build instructions often require specific setups, such as configuring environment variables, cross-compilation, or managing dependencies. For example, when building for a different architecture like ARM, a toolchain file must be specified to ensure proper compilation: ‘cmake -DCMAKE_TOOLCHAIN_FILE=path/to/arm_toolchain.cmake ..’.
Without such configurations, the build process may fail or produce incorrect results.
Project Configuration Issues. This error occurred 13 times and is typically caused by missing project-specific configurations, such as hardcoded paths or dependencies hosted on private sources. For example, the gameplay project requires files (e.g., gameplay-deps) from a specific URL. Without performing these required custom setups, the build process is bound to fail. We also encountered issues such as source code errors and unstable versions, which occurred 6 times; however, our study focuses primarily on build-system-related problems. We have documented these issues as they pose significant barriers to successful builds.
Finding 3: Build errors in C/C++ projects can occur at various stages, including dependency resolution, compilation, linking, or runtime setup. These issues are diverse in nature, as they vary depending on the build tools and project characteristics involved at each stage.
Building C/C++ projects is challenging, even for human developers, and automating this process adds further complexity. Based on our empirical study, we summarize the key challenges in automating C/C++ builds. First, dependency management is a frequent challenge during dynamic builds, involving the identification, downloading, and resolution of issues. While Software Composition Analysis (SCA) studies [36, 54, 55] address dependency issues, they fail to detect non-local third-party libraries (TPLs) before building. Research like CCScanner [45] examines package management tools, but build-specific issues like alias conflicts and version mismatches remain unaddressed. Static analysis is insufficient for dependencies that are conditional, dynamically loaded, or tied to build environments with varying compiler flags and OS requirements.
Additionally, dependencies often originate from multiple sources, such as package managers (e.g., apt) or source code, and the obstacles in downloading them further complicate the resolution process. Second, in our study, we manually write extensive shell scripts and perform debugging within Dockerfiles, utilizing various tools like build systems, compilers, and package managers. The diversity of these tools and their commands makes it difficult to standardize the build process with fixed rules, presenting a major challenge in automation. Lastly, build errors can occur at any stage, including preprocessing, compilation, and linking; external factors like network issues or hardware limitations, such as insufficient memory, can also cause build failures. These errors vary significantly, making it difficult to apply generalized error-handling strategies. Furthermore, the solutions to these problems are often scattered across various sources, requiring extensive expertise or the ability to conduct in-depth research through documentation, community forums, and other resources. All these challenges hinder the automation of building C/C++ projects.
# 4 Methodology
In light of the challenges discussed in Section 3, we design an agent, CXXCrafter, to streamline the building of C/C++ projects by leveraging LLMs to handle various stages of the building process.
# 4.1 Overview
Our approach is driven by the broad capabilities of LLMs across multiple dimensions, including semantic understanding [46], code generation [21], and implicit knowledge storage [57]. Existing studies show that LLMs’ semantic understanding enables the performance of code analysis tasks [8] and assists in bug and error comprehension [25, 27], demonstrating potential for interpreting diverse error messages in the build process.
Their robust code generation capabilities allow developers to create applications in various programming languages [61], with promising potential for automatically generating build instructions and bash scripts to resolve build errors. Additionally, through training on extensive corpora, LLMs implicitly store vast amounts of knowledge across multiple domains, helping to address issues in various fields [31, 41, 44]. LLMs may have been trained on large-scale open-source resources, including GitHub issues and Stack Overflow [4, 26, 34], which contain numerous build-related problems and solutions, further underscoring their potential in tackling challenges related to software construction. However, the effectiveness of directly using LLMs for building is limited. As shown in the experimental results in Section 5, using bare LLMs with prompts successfully generates build solutions for only about 30% of the projects. This is because the build process for many C/C++ projects involves multi-faceted errors, including those arising from different stages of the build. Relying solely on the LLM can only address a small fraction of such errors. An iterative approach is needed to continuously resolve issues as they arise. To address this, we propose an LLM-based agent that dynamically manages the build process through iterative feedback mechanisms. This agent autonomously resolves errors in real time, adjusting and refining build decisions based on evolving conditions. The framework not only reduces the need for manual intervention but also enhances build reliability and success rates.
Fig. 2. The Overall Framework of CXXCrafter
As illustrated in Figure 2, CXXCrafter comprises three essential modules:
• Parser Module: This module automatically extracts and analyzes key build-related information from the project directory, encompassing dependencies, environment settings, and relevant documentation that facilitate the build process.
This ensures that all essential data is available for the subsequent stages of the workflow. Additionally, we leverage the LLM’s semantic understanding capabilities to overcome two key obstacles: identifying the valid build system entry file and retrieving helpful documentation.
• Generator Module: This module utilizes LLMs to generate a Dockerfile that includes build procedure code based on the parsed information, ensuring that necessary dependencies, environment settings, and configurations are correctly specified. The module also modifies the Dockerfile in response to error feedback from the Executor Module, ensuring an adaptive approach to resolving build issues.
• Executor Module: This module oversees the build process in containers by executing the Dockerfile, providing a consistent and clean build environment for testing whether the build solution succeeds. Specifically, it captures errors and logs, feeding them back to the Generator Module, forming a dynamic interaction loop that continuously addresses errors until completion.
CXXCrafter uses five types of prompts for different use cases, incorporating techniques such as RAG and nested prompt templates, as detailed in Section 4.5.
# 4.2 Parser Module
The Parser Module analyzes local projects to extract critical information for the software’s environment preparation and compilation. It employs three specialized extractors (see Figure 3) to gather data, including environment settings, dependency details, and helpful build documentation.
[Figure 3: workflow of the Parser Module, illustrated on the OpenALPR project — panels ① Extract Environment Information, ② Extract Documentation Information, ③ Extract Dependency Information.]
① In the Environment Information Extractor, CXXCrafter uses basic shell commands like ‘lscpu’ and ‘uname -a’ to capture system details, including CPU specs, OS, and their versions. This information is crucial for addressing issues discussed in Section 3.2.3, such as installing software for specific architectures or ensuring the correct GPU/CPU driver versions. It plays a key role in ensuring compatibility and optimization.
② The Dependency Information Extractor takes the entire source code folder as input and outputs the names and versions of all required dependencies, helping to prevent conflicts and ensure software stability. Existing research on dependency identification falls into two categories: some studies [36, 54, 55] use Software Composition Analysis (SCA), but SCA cannot recognize third-party libraries (TPLs) before build time, as many TPLs are not available locally at that stage. CCScanner [45] detects TPL dependencies in C/C++ by parsing quasi-SBOM files from 21 package managers and using CENTRIS [54] for code clone detection. We use it in our parser module to statically extract dependency names and versions. It is worth noting that while the statically extracted dependencies help address library-related issues, they do not fully resolve them. Specifically, they are incapable of handling dynamic errors such as aliasing (mismatched resolved and downloaded library names) or version conflicts, which only manifest during the execution process.
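The aliasing problem can be pictured as a name-translation step: the identifier a build script resolves rarely matches the package a system package manager installs. The toy table below is hand-written (SDL2 and libpng appear in the study; the apt package names are our illustrative assumption for Ubuntu) and is precisely the kind of mapping a static tool cannot reliably derive:

```python
# Toy illustration of dependency-name aliasing: resolved name -> apt package.
# Hand-curated and distribution-specific, hence not derivable statically --
# which is why such errors only surface at execution time.
APT_ALIASES = {
    "SDL2": "libsdl2-dev",
    "libpng": "libpng-dev",
}

def apt_package_for(dependency):
    """Best-effort translation of a resolved dependency name to an apt package."""
    return APT_ALIASES.get(dependency, dependency)  # fall back to the raw name
```

Any name outside the table falls through unchanged, so a wrong or missing alias still produces a failing `apt-get install` that must be caught and repaired dynamically.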
Static dependency analysis at this stage is insufficient, necessitating the use of the generator and executor modules for dynamic resolution.
③ The Useful Documentation Extractor collects relevant build instructions and configuration guides, aiding CXXCrafter in troubleshooting and understanding the build process. As shown in Part 2 of Figure 3, it scans the source code folder and applies two rounds of filtering. First, it uses keyword-based regular expressions to identify build-related files and remove irrelevant ones. Then, it performs finer filtering using LLMs, based on the project name and document path, to exclude unrelated files. Finally, it reads the filtered files and uses the LLM to summarize key build information, ultimately obtaining the relevant documents for the build process.
The parser module faces two key obstacles: identifying the correct build system and entry file, and retrieving useful documentation for the build process. First, many projects employ multiple build systems, each with several build files. Expert knowledge is required to determine which build system and entry file are suited to compile the entire project. We address this by leveraging LLMs combined with tailored prompts. For example, in the OpenALPR project (Figure 3, Part 1), both CMake and Python are present, but the LLM correctly identifies CMake, recognizing the Python paths as interface files rather than the main project. Second, some projects include useful documentation that aids the build process, but traditional rule-based methods struggle to locate this information. To address this, we develop a RAG system to search for relevant content. For example, in Figure 3, we retrieved documentation from the “README.md” file, which recommended installing Ubuntu 16.04 and provided advice on dependency versions to help avoid potential compatibility issues.
# 4.3 Generator Module
The generator module is responsible for creating and modifying build solutions.
In CXXCrafter, build solutions are defined using Dockerfiles, enabling the construction of C/C++ software in clean and reproducible environments. While Shell or Python scripts could also be used, Docker offers higher flexibility and consistency. Its ability to generate clean system images ensures that the resulting Dockerfiles can be executed reliably across different environments. Upon receiving the output from the parser, the generator produces an initial version of the Dockerfile. We have designed curated Embedded Prompt Templates (detailed in Section 4.5), which provide structured guidance to the LLM by embedding predefined formats and placeholders within the prompts. These templates ensure the Dockerfile creation process is structured and consistent. The generator begins modifying the Dockerfile when the executor encounters a failure, utilizing the error message and the most recently executed Dockerfile. We retain all modification history within the same LLM session and, when the context limit is reached, prioritize clearing the oldest resolved issues, allowing the model to reference recent decisions during the modification process. Drawing from the building experience in Section 3, we have systematically outlined the structure of Dockerfiles in the prompt, encompassing essential components such as system and tool installation, package management updates, dependency installation, project-specific configurations, and build-related instructions. This structured approach ensures consistency and adherence to best practices, promoting the generation of standardized yet flexible build solutions.
# 4.4 Executor Module
The executor module is responsible for executing the Dockerfile generated by the generator module.
It monitors the entire build process to detect errors. During the building process, the executor tracks the executed commands and logs detailed traces. If the build fails, the executor sends the error messages back to the generator. This initiates an optimization process, creating a dynamic interaction loop between the generator and executor. This loop continues until a successful build solution is achieved or the maximum number of iterations is reached. Additionally, the executor applies an LLM-based discriminator to the build instructions and logs. This ensures the success of the build and helps identify and resolve errors comprehensively. A critical challenge in designing the executor module is accurately verifying whether the project has been successfully built. We employ the Python Docker SDK to capture the execution results within the Docker container and save these as log files. However, certain build instruction errors may lead to issues that Docker cannot detect. One such scenario arises when a project implements custom error handling, which may suppress the generation of error messages. For example, in LocalAI, the Makefile includes error handling for build targets, meaning that even if the wrong target is selected, Docker will not report any build errors. Another issue occurs when the Dockerfile generated by the LLM lacks essential build instructions (e.g., ‘make’). In this case, while no errors may be reported, no actual building operation takes place. We refer to these situations as “non-error failures”. Due to the diverse nature of the outputs in these cases, traditional rule-based or keyword-matching error detection methods often fail to reliably identify such build failures. To address this challenge, we design an LLM-based discriminator to identify these build failures.
In designing the LLM discriminator, we incorporate two key insights from our manual construction process:
Static criterion: The Dockerfile should include build and compile instructions (e.g., ‘make’, ‘cmake --build’), and the build target must match the default or primary components as described in the project’s documentation.
Dynamic criterion: We store log files (an example is available in our project [56]) generated during the build process. By analyzing these logs, we can confirm whether the build commands are executed successfully. Logs from successful builds typically show compile progress (e.g., ‘[ 3%] Building CXX object...’) and test progress (e.g., ‘Performing Test C_FLAG_WALL Success’).
The discriminator’s judgment process is divided into two steps. First, we design prompts to guide the LLM in making judgments based on these two key criteria. Second, to further mitigate hallucinations, we introduce a reflection mechanism to re-validate the “judgment process” of the first step. If the “judgment process” did not strictly adhere to the two criteria, the build is deemed a failure, thus minimizing false positives. When providing information to the discriminator, the executor carefully controls the context length and selects the log segments most relevant to state determination. In the case of “error-type failures”, which are typically direct and concise, the executor inputs the most recent 50 execution log lines into the LLM for accurate error detection and analysis. When no errors are reported, and since determining “non-error failures” often requires more contextual information, the executor inputs the Dockerfile and the last 200 lines of the log. If the input exceeds the LLM’s context length limit, a sliding window mechanism is used, prioritizing the retention of the most recent logs to ensure effective resolution of new errors.
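The executor's log selection and the kind of evidence the dynamic criterion looks for can be sketched as follows. This is an illustrative simplification (the real judgment is made by the LLM, not by these regexes; the patterns merely resemble the progress lines quoted above):

```python
import re

# Patterns resembling the progress lines treated as evidence of a real build,
# e.g. '[ 3%] Building CXX object ...' or 'Performing Test C_FLAG_WALL Success'.
PROGRESS_PATTERNS = [
    re.compile(r"\[\s*\d+%\]\s+Building (?:C|CXX) object"),
    re.compile(r"Performing Test \S+ Success"),
]

def select_log_window(log_lines, error_reported):
    """Last 50 lines for error-type failures, last 200 when no error is reported."""
    return log_lines[-50:] if error_reported else log_lines[-200:]

def shows_build_progress(log_lines):
    """Dynamic-criterion sketch: did any compile/test progress line appear?"""
    return any(p.search(line) for line in log_lines for p in PROGRESS_PATTERNS)
```

A log containing only 'make: Nothing to be done' would show no progress under this check, which is exactly the "non-error failure" shape the discriminator must catch.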
To evaluate the effectiveness of the LLM-based discriminator, we examine the accuracy of 4 LLMs (DeepSeek-v2, DeepSeek-v3, GPT-4o, and GPT-4o mini) on the Top100 dataset. We manually check and validate the discriminator’s judgments during the build process. Out of the 400 build processes, 249 are classified as successful. We manually verify these 249 samples and find all judgments to be correct: regardless of the LLM used, the discriminator accurately identifies all successful builds. This validates the design of the LLM-based discriminator.

# 4.5 Prompt Design in CXXCrafter

We have designed a set of prompts tailored to 5 specific scenarios: build system and entry point identification, RAG for documentation parsing, initial Dockerfile creation, Dockerfile modification, and build success discrimination. These prompts, developed from expert knowledge and refined through iterative experimentation, incorporate strategies such as nested prompt templates and RAG to address task complexity. The complete set of prompts is provided in the appendix file [56]. In the design process, several challenges arise when prompting LLMs to effectively complete building tasks. Challenge 1 involves breaking down complex problems when generating build solutions. We address this with embedded prompt templates that dynamically inject parsing information; for instance, our Dockerfile generation prompt (Section 2.3 in the appendix file [56]) dynamically fills in parsed data. Additionally, we provide the LLM with strategic guidance in the form of requirement notes. Challenge 2 stems from unclear project-specific build processes and details. To resolve this, we utilize RAG to retrieve relevant files from the project’s source code directory, as in the documentation RAG and build system identification prompts (Sections 2.1 and 2.2 in the appendix file [56]). Finally, Challenge 3, related to token limitations, arises during Dockerfile modification.
To effectively manage error feedback, we retain error messages and decisions within a single session to ensure continuity. However, when the context exceeds the token limit, we remove the earliest resolved issues to maintain focus on the current task.

An Example of Prompt. As shown in Figure 4, this prompt is used to generate a Dockerfile. Specifically, we combine information obtained from the parser with pre-defined templates for built-in prompts to create the final prompt that generates the Dockerfile. The prompts corresponding to numbers 3, 4, and 5 in the figure carry the information parsed by the parser. Prompt 1 is a Dockerfile template that guides the LLM to structure the Dockerfile correctly, breaking down steps such as basic environment setup and dependency installation. In addition, this prompt includes specific requirements, such as correctly handling line breaks in comments, to ensure that the generated Dockerfile is free from syntax errors.

Fig. 4. The Dockerfile generation prompt, consisting of the following parts (reconstructed from the figure text):
(1) Dockerfile Template:
FROM ubuntu:[ubuntu_version]
ENV DEBIAN_FRONTEND=noninteractive
# Install necessary packages
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y software-properties-common
# Install Dependencies
RUN apt-get install -y [dependency1]
RUN apt-get install -y [dependency2]
# Build the project with [build system]
(2) Prompt for Generation: “Please generate a dockerfile which builds the project {Project Name} from source code according to the dockerfile template: {Dockerfile Template}. Requirements: 1. Install commands must be executed one at a time. 2. Please adhere to Dockerfile syntax. For example, ensure that comments and commands are on separate lines; comments should start with a # and be placed independently of commands. Useful information for reference: 1. Environment requirements: {Environment Info.} 2. Documentation information: {Docs} 3. Potential Dependencies: {Dependencies}”
(3) Environment Info.: “Build system is {Build System Name}. And the entry file of {Build System Name} is in {Entry File} of the project. And the build system’s version requirement is {Build System Version}. Whether there are custom scripts: {{True or False}, {Script Path}}.”
(4) Docs: “The key recommendations for building the project are as follows: {Helpful Documents}.”
(5) Dependencies: “Potential dependencies and versions identified from the build script scan are as follows: {Dependencies and corresponding versions}. Mandatory dependencies: …”

# 5 Evaluation

We implement CXXCrafter in Python, without relying on LLM frameworks such as LangChain. Our implementation ensures a clear modular structure and strong scalability, enabling easy upgrades and replacements of components and tools. CXXCrafter consists of 1,664 lines of code and uses 5 different types of prompts. In the experiments, CXXCrafter uses GPT-4o as the default LLM, with the dynamic interaction limit set to 10 by default. The execution environment for our build solution is managed through the Python Docker SDK. Our experiments are conducted on three Ubuntu 22.04 servers with varying hardware configurations. The first machine is equipped with two Intel Xeon 6330 processors, 512 GB of RAM, and 3 TB of HDD storage. The second and third machines each feature four Intel Xeon 8260 processors, 256 GB of RAM, and 3.37 TB of HDD storage.

Research Questions. Our evaluation aims to address the following research questions:
• RQ3 (Effectiveness): How many C/C++ projects can be automatically built by CXXCrafter?
• RQ4 (Ablation Study): How does each component within CXXCrafter contribute to the overall build performance?
• RQ5 (Case Study): How does CXXCrafter resolve build issues that manual methods fail to address, and what specific advantages does it offer in handling complex C/C++ projects?
• RQ6 (Efficiency and Cost): What are the efficiency and cost of using CXXCrafter?

Dataset. Two datasets are used for evaluation. The first dataset, Top100, is described in Section 3.
The second dataset, from Awesome-CPP [7], is a broader collection with 58.6K stars as of September 2024. It covers a wide range of C++ libraries, frameworks, and tools, providing a comprehensive testbed for evaluating CXXCrafter’s performance across diverse real-world C/C++ projects. To ensure there are no duplicates between datasets and that all projects are buildable, we remove any projects overlapping with the Top100 dataset and manually exclude non-C/C++ projects based on the criteria outlined in Section 3. After filtering, 652 distinct projects remain for evaluation. For all projects, we use the latest available version for experimentation.

LLMs Selection. We select four LLMs: GPT-4o, a high-performance closed-source model; GPT-4o mini, a more affordable alternative to GPT-4o; and DeepSeek-v2 (236B parameters) and DeepSeek-v3 (671B parameters), both open-source models that excel in code-related tasks.

Baselines. We select 3 types of baselines. (1) Default Build Commands: We have collected over 20 common C/C++ build systems and their associated instructions (see Appendix [56]). Based on this collection, we develop an automated script to execute default or commonly used build commands. The script first identifies potential configuration files, such as Makefile or CMakeLists.txt. It then identifies all possible build systems from the configuration files and executes their corresponding build instructions in sequence. (2) Programmers: The manual building methods used in Section 3.1. (3) Different Bare LLMs: We also explore the performance of different bare LLMs. These models use the same prompts as the CXXCrafter generator but lack the information provided by CXXCrafter’s parser; additionally, there is no dynamic iterative process if the build fails.

Metrics of Success. We determine the success of builds by manually inspecting the Dockerfile instructions and the corresponding execution outputs.
During this inspection, we follow two criteria to efficiently assess success: (1) Static Criterion: the Dockerfile must contain the necessary build-related instructions, and the build target should align with the primary components specified in the project documentation. (2) Dynamic Criterion: we analyze the execution logs generated during the building process to ensure that the build commands are executed properly and that the process completes without errors. Only projects that satisfy both criteria are considered successful builds. We further evaluate these metrics (see Section 6), confirming that builds meeting these criteria yield outputs consistent with those produced by manual builds and demonstrate correct functionality. These criteria are the same as those used in the executor (see Section 4.4), with the key difference being that we perform manual checks to prevent misjudgments by LLMs.

# 5.1 Overall Effectiveness Evaluation (RQ3)

To address RQ3, experiments are conducted on both datasets. For CXXCrafter, the default dynamic interaction step limit is set to 10, with GPT-4o serving as the core LLM due to its superior performance in trials. We also evaluate the build performance of CXXCrafter using another powerful open-source model, DeepSeek-v3, while keeping all other settings the same. Additionally, we compare the results with those of the Default Build Commands and the bare LLMs, as described above. For all build results, we manually inspect and verify their correctness. As shown in Table 5, CXXCrafter demonstrates significant superiority. For the Top100 dataset, CXXCrafter (Default) successfully builds 75 projects, significantly surpassing other methods. The Default Build Commands tool achieves 21 builds, while the bare LLM models show similar performance, with 23 and 17 successful builds, respectively.
In the Awesome-CPP collection, CXXCrafter achieves 512 successful builds, greatly outperforming Default Build Commands (272 builds) and the bare LLMs (264 for DeepSeek-v3 and 215 for GPT-4o). The Default Build Commands approach achieves a 39.01% success rate. While this method proves reliable for simpler projects, it struggles with more complex or non-standard build configurations, resulting in a relatively low success rate. The bare LLMs (DeepSeek-v3, GPT-4o, and GPT-4o mini) demonstrate even lower success rates of 38.43%, 31.65%, and 19.81%, respectively. These findings suggest that while LLMs have some capacity to handle build tasks, their effectiveness remains limited without further domain-specific optimization; in some cases, they perform worse than rule-based methodologies. Notably, GPT-4o mini’s 19.81% success rate illustrates the significant limitations of applying a smaller LLM to complex build processes. In stark contrast, CXXCrafter achieves a 78.10% success rate, a marked improvement over all other methods. This outcome underscores the effectiveness of CXXCrafter’s modular design, which allows it to adapt efficiently to diverse build scenarios. The substantial gap between CXXCrafter and the other methodologies emphasizes the importance of specialized agents in automating complex tasks like C/C++ project builds. Overall, CXXCrafter significantly outperforms both the bare LLMs and the heuristic build tool, demonstrating high success rates and the potential to reduce the time and effort required for large-scale OSS building, making it a valuable tool in modern development workflows.

Finding 4: Without a carefully designed iterative framework, LLMs remain inadequate for addressing the inherent complexity and multi-stage processes of project building.

Table 5. Experimental Results Between CXXCrafter and Baselines.
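The static criterion used both by the executor's discriminator (Section 4.4) and in our manual inspection can be approximated with a rule-based first pass; per the paper, such rules alone cannot catch every "non-error failure", which is why the LLM discriminator exists. A minimal sketch follows, where the pattern list is illustrative and not CXXCrafter's actual rule set:

```python
# Rule-based first pass for the static criterion: does the Dockerfile
# actually invoke a build? (Illustrative patterns, not CXXCrafter's rules.)
import re

BUILD_PATTERNS = [
    r"\bmake\b",            # plain make (word boundary excludes 'cmake')
    r"\bcmake\s+--build\b", # CMake's build driver
    r"\bninja\b",
    r"\bbazel\s+build\b",
]

def passes_static_criterion(dockerfile: str) -> bool:
    """True if at least one RUN instruction contains a build command."""
    run_lines = [
        line for line in dockerfile.splitlines()
        if line.strip().upper().startswith("RUN")
    ]
    return any(
        re.search(pat, line) for line in run_lines for pat in BUILD_PATTERNS
    )

good = "FROM ubuntu:22.04\nRUN cmake -S . -B build\nRUN cmake --build build"
bad = "FROM ubuntu:22.04\nRUN apt-get update && apt-get install -y cmake"
```

Note that `bad` installs cmake but never builds, the exact shape of a "non-error failure": it would pass Docker's error detection yet fail this static check.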
# 5.2 Component Design Analysis (RQ4)

In this section, we present a detailed component-wise analysis to assess the contribution of key modules and configurations in CXXCrafter. This analysis focuses on 3 main aspects:
• The role of the parser module in enhancing build success.
• The impact of dynamic interaction and the effect of varying dynamic interaction step counts.
• The impact of different LLMs on CXXCrafter’s performance.
We conduct experiments on the Top100 dataset, with results shown in Figure 5. CXXCrafter (Default) also uses GPT-4o as the LLM, with a maximum of 10 dynamic interaction steps.

The Role of the Parser. The default configuration, with all components enabled, achieves the highest number of successful builds, completing 75 builds. When the parser is removed (CXXCrafter-w/o-Parser), the success count drops to 48 builds, highlighting the parser’s crucial role. In CXXCrafter, build system selection and entry file identification rely on the parser, which forms the foundation for the entire build process and helps avoid many errors. Additionally, build-related documentation is crucial: the parser automates the search for and interpretation of these documents, further enhancing the build success rate.

The Impact of Dynamic Interaction. Dynamic interaction is the key design of CXXCrafter, allowing iterative execution and modification during the build process. When dynamic interaction is disabled (CXXCrafter-w/o-Interaction), the number of successful builds drops sharply to 22, highlighting its importance in managing complex, multi-step build scenarios. We also analyze the impact of different interaction step limits. When the limit is set to 5 steps, performance declines, with only 69 successful builds. Increasing the step count to 20 does not further improve performance, yielding 74 successful builds. We observe that the benefits of increasing interaction steps begin to diminish beyond a certain threshold.
For example, increasing the step count from 0 to 5 yields a significant improvement of 47 additional successful builds (from 22 to 69). However, increasing it from 5 to 10 adds only 6 builds, and increasing from 10 to 20 results in one fewer successful build. This variation is likely caused by the inherent instability of the LLM’s output.

Finding 5: Dynamic interaction plays a crucial role in managing multi-step tasks in the agent design. Increasing interaction steps improves success rates, but the gains diminish.

The Impact of Different LLMs. Finally, we evaluate the impact of different LLMs on CXXCrafter. Specifically, DeepSeek-v2 completes 57 builds and DeepSeek-v3 completes 67, while GPT-4o mini completes 50. GPT-4o remains the most effective, with 75 successful builds. These results highlight the significant impact of the underlying LLM on CXXCrafter’s ability to automate the build process. Notably, we observe that open-source LLMs can now achieve performance on par with leading closed-source models. Furthermore, cost-effective closed-source models like GPT-4o mini achieve about 50% of the effectiveness in our design. Additionally, CXXCrafter built on these models performs much better as an agent than the corresponding bare LLMs (see Section 5.1), further demonstrating that our design leads to a substantial improvement in performance.

Finding 6: The selection of LLMs significantly affects the agent’s performance. More powerful models, such as GPT-4o, offer stronger assistance and enhance overall effectiveness.

Fig. 5. Number of Successful Builds of CXXCrafter Variants on the Top100 Dataset.

# 5.3 Case Study (RQ5)

Among the Top100 dataset, 72 projects are successfully built both manually and by CXXCrafter in Section 5.1. Three projects succeed with CXXCrafter but fail in manual builds, while 14 projects succeed manually but fail with CXXCrafter. We analyze the 3 cases where manual builds failed.
Case 1: When building CQuery manually, an error occurs with “std::unique_lock being defined in the header <mutex>”, suggesting that “<mutex>” is not included. Initially, we suspect this is related to the Clang version. However, after trying various versions (e.g., Clang 7, Clang 11), the issue remains. Further analysis reveals that the CQuery build script defaults to Clang 7 on Ubuntu 14.04, which is incompatible with Ubuntu 22.04. CXXCrafter suggests using Ubuntu 20.04, where Clang 7 is compatible, thus resolving the issue.

Case 2: In the Paddle project, human attempts produce the error: “libdnnl.so: undefined reference to dnnl::impl...”. Despite confirming oneDNN installations and testing various versions, the error persists. CXXCrafter identifies a version mismatch between protobuf and oneDNN (protobuf 3.20.2 is required, as noted in “requirements.txt”), a detail overlooked by human builders.

Case 3: In DOOM, a linking error initially suggests issues with 32-bit libraries or the container environment, leading to various adjustments. CXXCrafter identifies the actual issue as a mismatch between the TLS variable errno and the shared library version, resolving it with a code modification.

CXXCrafter’s Advantages over Humans. CXXCrafter offers two key advantages over manual building: (i) The parser module uses RAG to efficiently process documents and other information, allowing it to identify build-related information more comprehensively than manual searches. For example, in Case 2, CXXCrafter avoids errors that arise from overlooking crucial information during manual builds. (ii) The LLM stores historical build knowledge, compensating for the limitations of human experience. As demonstrated in Case 3, CXXCrafter makes more correct decisions, avoiding potential errors. The major drawback of CXXCrafter is its higher error rate when installing complex dependencies, such as CUDA for OpenPose.
These libraries involve complex installation processes with many dependencies and steps. This may be resolved in the future through knowledge injection or RAG.

# 5.4 Efficiency and Cost (RQ6)

We assess the cost of CXXCrafter across three dimensions: time, financial expense, and disk storage. These factors are crucial in determining the practical usefulness and scalability of our approach.

Time Cost. On average, CXXCrafter takes 875.31 seconds to successfully build a project on the Top100 dataset, while the average time cost for failed projects is 2.67 hours. However, time costs can vary due to factors such as Docker caching and network speed. Enabling multiprocessing significantly enhances efficiency, substantially reducing the overall build time.

Financial Cost. Running CXXCrafter on the Top100 dataset generates 4,297,652 input tokens and 624,170 output tokens, incurring GPT-4o costs of $21.49 for input tokens and $9.36 for output tokens. Among these, 75 projects are successfully built, for an average cost per successful build of $0.41. The 25 failed projects generate a total of 2,420,092 input tokens and 225,154 output tokens, at an average cost of $0.6191 per project. These prices are based on OpenAI’s pricing as of September 2024.

Disk Storage Cost. The experiment generates over 50 TB of data, including Docker container caches and image files, creating significant storage demands. Despite using three machines, disk space management remains a critical and recurring challenge throughout the experiment.

# 6 Discussion

Effectiveness and Consistency of Build Artifacts. We conduct an in-depth analysis of the build artifacts to verify their functionality and consistency with manually built artifacts. To verify that the build artifacts perform as expected, we run the unit tests provided by the projects.
Among the 75 successfully built projects in Top100, we identify 24 that generate test executables. Of these, 22 projects pass, while 2 fail: libsoundio lacks audio connections, and Stockfish lacks GUI display support on our server (due to the absence of the relevant devices). These results confirm that the build artifacts produced by CXXCrafter are valid and function as intended. Additionally, we use a diff tool to compare the automated build artifacts with the manually built artifacts for all 75 successfully built projects. The results show that the automated and manual build artifacts are completely consistent. Detailed experimental results can be found in our project [56]. These results further validate the effectiveness of our success metrics in the experiment.

Building Different Software Versions. We conduct two additional experiments. First, we investigate the build success rate across different software versions. For 20 projects that are successfully built, we randomly select 5 commits each, covering the repositories’ entire commit histories from their creation. CXXCrafter achieves an 81% success rate, with 81 out of 100 builds successful, demonstrating that it is effective across multiple versions. Some failures occur because older versions require outdated packages, which are often hard to find. Second, we test the build performance on consecutive commits (i.e., building one commit after another). Selecting the latest 5 commits from 20 projects, we observe a higher success rate, with 96 out of 100 builds successful. Overall, these experiments demonstrate that CXXCrafter is effective for both version diversity and consecutive commit builds.

Building Different Language Projects. CXXCrafter’s design shows promising potential for other languages.
Specifically, we conduct a simple migration to the top 100 most-starred Java projects (76 remain after filtering out unbuildable projects), successfully building 57 of the 76, a success rate of 75%. To the best of our knowledge, this already represents promising performance among Java build automation methods [16].

Potential Applications of CXXCrafter. CXXCrafter is highly beneficial for various downstream applications in software security analysis, including but not limited to: (1) reproducing identified vulnerabilities, by facilitating the setup of environments with specific versions for vulnerability reproduction; (2) static program analysis, particularly high-precision analysis based on LLVM IR, which often requires code to meet compilation requirements, a process CXXCrafter can help fulfill; (3) dynamic program analysis, such as source code instrumentation, where CXXCrafter can ensure proper compilation, thus streamlining workflows for tasks like fuzz testing.

# 7 Threats to Validity

Our study mainly suffers from the following threats to validity. The internal validity threat mainly stems from variations in LLM performance, which could impact the experimental results. To mitigate this issue, we conduct experiments using the open-source model DeepSeek. Additionally, during the dependency download process, CXXCrafter retrieves dependencies from online sources (e.g., using ‘apt’ to install packages or ‘git’ to download repositories); if these sources become unavailable or network issues arise, the results may be affected. Furthermore, updates to the software itself could also introduce internal validity threats. Our research primarily targets popular projects, of which LLMs may have gained a deeper understanding and whose documentation is usually more comprehensive. Therefore, CXXCrafter’s performance on less popular projects may be impacted, which constitutes an external validity threat.
To address this, we plan to further incorporate RAG techniques or retrain the model in the future. Lastly, the authors have extensive experience in C/C++-related research, and through further extensive investigation of C/C++ project build automation, they have gained a deep understanding of build-related issues. As a result, we believe this study’s threats to construct validity are limited.
Project building is pivotal to support various program analysis tasks, such as generating intermediate representation code for static analysis and preparing binary code for vulnerability reproduction. However, automating the building process for C/C++ projects is a highly complex endeavor, involving tremendous technical challenges, such as intricate dependency management, diverse build systems, varied toolchains, and multifaceted error handling mechanisms. Consequently, building C/C++ projects often proves difficult in practice, hindering the progress of downstream applications. Unfortunately, research on facilitating the building of C/C++ projects remains inadequate. The emergence of Large Language Models (LLMs) offers promising solutions to automated software building. Trained on extensive corpora, LLMs can help unify diverse build systems through their comprehension capabilities and address complex errors by leveraging tacit knowledge storage. Moreover, LLM-based agents can be systematically designed to dynamically interact with the environment, effectively managing dynamic building issues. Motivated by these opportunities, we first conduct an empirical study to systematically analyze the current challenges in the C/C++ project building process. In particular, we observe that most popular C/C++ projects encounter an average of five errors when relying solely on the default build systems. Based on our study, we develop an automated build system called CXXCrafter to specifically address the above-mentioned challenges, such as dependency resolution. Our evaluation on open-source software demonstrates that CXXCrafter achieves a success rate of 78% in project building. Specifically, among the Top100 dataset, 72 projects are built successfully by both CXXCrafter and manual efforts, 3 by CXXCrafter only, and 14 manually only. ...
[ "cs.SE" ]
# I. INTRODUCTION

Speaker diarization, which aims to determine the temporal boundaries of individual speakers within an audio stream and assign appropriate speaker identities, addresses the fundamental question of “who spoke when” [1]. It serves as a foundational component in numerous downstream speech-related tasks, including automatic meeting summarization, conversational analysis, and dialogue transcription [2]. Nevertheless, achieving robust diarization performance in practical settings remains a persistent challenge, primarily due to factors such as an unknown and variable number of speakers, acoustically adverse environments, and a high prevalence of overlapping speech segments. Traditional clustering-based speaker diarization approaches [3] typically consist of several sequential modules, including voice activity detection (VAD), speech segmentation, speaker representation extraction (such as i-vector [4], d-vector [5], and x-vector [6]), speaker clustering [7]–[9], and subsequent re-segmentation procedures [10]. While such modular pipelines demonstrate considerable robustness across a variety of domains, they inherently struggle with overlapping speech segments, as each segment is constrained to a single speaker label due to the limitations of the clustering mechanism. To overcome these limitations, neural-based diarization methods have been actively explored in recent years. Among them, End-to-End Neural Diarization (EEND) [11] represents a paradigm shift by integrating multiple diarization components (voice activity detection, speaker embedding extraction, and speaker attribution) into a single, jointly optimized model. EEND reformulates speaker diarization as a multi-label frame-wise classification task, directly predicting speaker activities from audio features without relying on intermediate clustering.
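The frame-wise formulation above can be made concrete. The following is a standard sketch of the EEND objective in our own notation, not quoted from [11]: given a feature sequence $\mathbf{X} = (\mathbf{x}_1, \ldots, \mathbf{x}_T)$, the network emits per-frame speaker activity posteriors $\hat{\mathbf{y}}_t \in (0,1)^S$ for $S$ speakers, and training minimizes a permutation-free binary cross-entropy

$$\mathcal{L} = \frac{1}{TS} \min_{\phi \in \mathrm{Perm}(S)} \sum_{t=1}^{T} \mathrm{BCE}\left(\mathbf{l}_t^{\phi}, \hat{\mathbf{y}}_t\right),$$

where $\mathbf{l}_t^{\phi} \in \{0,1\}^S$ is the reference label vector at frame $t$ under a permutation $\phi$ of the speaker order. Because a frame’s label vector may have several active entries, overlapping speech is handled naturally, without any clustering step.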
Building upon the original EEND framework, numerous improvements have been proposed to enhance its performance and applicability. Self-Attentive EEND (SA-EEND) [12] leverages the global modeling capability of self-attention mechanisms to raise the performance upper bound of EEND. To address the variable-number-of-speakers challenge, Encoder-Decoder-based Attractor Calculation EEND (EDA-EEND) [13] has been proposed; this method employs an additional attractor module to detect new speakers and integrates the attractor’s outputs into the main network to guide the final diarization results. To extend EEND to online decoding scenarios, Xue et al. [14] proposed an EEND variant with a speaker-tracking buffer, which aligns speaker labels across adjacent processing chunks. When processing long-duration audio, EEND faces significant computational and memory burdens due to the quadratic time complexity of attention mechanisms. To mitigate this issue, EEND-vector clustering (EEND-VC) [15] processes long audio in segmented chunks: each chunk is independently decoded, speaker-specific features from the same speaker are averaged along the time dimension and projected into a speaker embedding space, and clustering algorithms are finally applied to the speaker embeddings to resolve speaker alignment across chunks. In parallel, Target-Speaker Voice Activity Detection (TS-VAD) [16] has been proposed as a neural-based post-processing method to refine the outputs of traditional diarization systems. TS-VAD leverages prior speaker information to perform target-speaker detection and jointly models the activities of multiple speakers. Despite its widespread success in various applications [17]–[19], the original TS-VAD still exhibits limitations that have motivated numerous research efforts. A transformer-based TS-VAD architecture [20] handles variable numbers of speakers through representations with dynamic time and speaker dimensions.
Meanwhile, an end-to-end target-speaker voice activity detection (E2E-TSVAD) method [15] was proposed to jointly learn speaker representation and diarization refinement, achieving better performance than the original TS-VAD with clustering-based initialization. Seq2Seq-TSVAD [21] adopts a sequence-to-sequence framework, demonstrating improved efficiency while maintaining accuracy. NSD-MA-MSE [22] tackles the speaker embedding reliability issue through a memory-augmented neural network that dynamically refines speaker representations, thereby mitigating the domain gap between embedding extraction and the diarization network. To promote advances in speaker diarization under complex acoustic conditions, several international challenges have been organized to systematically benchmark algorithmic progress. Among them, the CHiME-7 Challenge and the DIHARD III Challenge are particularly notable. The CHiME-7 Challenge [19] introduced a main track focused on multi-speaker automatic speech recognition (ASR) centered on distant-microphone conversational speech recorded under real-world conditions, where speaker diarization served as a critical front-end module to segment and organize speaker turns before transcription. It utilized three datasets: CHiME-6 [18], DiPCo [23], and Mixer 6 [24]. These datasets cover a wide range of challenging conversational scenarios, including multi-speaker dinner parties across kitchens, dining rooms, and living rooms, as well as interview sessions, telephone-style dialogues, and spontaneous dictations in controlled environments. Recordings are conducted with far-field microphone arrays and allow natural speaker behaviors such as free movement, overlapping speech, and dynamic interaction patterns. Consequently, these datasets present significant challenges, including strong reverberation, background noise, severe speech overlap, and varying speaker counts.
In a similar vein, the DIHARD-III Challenge [17] targets speaker diarization in highly diverse and challenging settings, spanning 11 domains that include clinical interviews, sociolinguistic fieldwork recordings, telephone conversations, YouTube videos, courtroom trials, and more. Both CHiME-7 and DIHARD-III have substantially contributed to pushing the limits of diarization technology, encouraging the development of systems that are more robust, generalizable, and capable of handling complex real-world scenarios. Despite substantial research efforts, many existing diarization systems still face challenges in achieving robust and generalized performance on these benchmarks [?], [19]. The original TS-VAD framework, while effective, exhibits several notable limitations. First, its reliance on a BLSTM-based architecture results in high computational complexity, leading to slower inference and substantial GPU memory consumption, particularly as the input sequence length increases. Second, TS-VAD typically employs a pre-trained extractor to generate speaker embeddings (such as i-vectors), but in real-world applications these embeddings often suffer from degradation due to the absence of oracle speaker segments, compromising system robustness. Third, when deployed across diverse acoustic domains, TS-VAD models are susceptible to domain-specific biases, limiting their generalization capability and affecting performance consistency under mismatched conditions. To address these challenges, various improved methods have been proposed [15], [20]–[22]. Nevertheless, existing solutions often mitigate only part of the issues, and a unified approach that simultaneously enhances efficiency, robustness, and generalization remains underexplored. To address the aforementioned challenges, we propose a novel neural speaker diarization system using memory-aware multi-speaker embedding with a sequence-to-sequence architecture (NSD-MS2S).
Additionally, we explore the application of mixture of experts in speaker diarization, and extend NSD-MS2S to NSD-MS2S-SSMoE. Consequently, the principal contributions of our study can be summarized as follows: 1) NSD-MS2S seamlessly integrates the advantages of the Memory-Aware Multi-Speaker Embedding (MA-MSE) module and the Sequence-to-Sequence (Seq2Seq) architecture, achieving an efficient and powerful framework for speaker diarization. We develop a simple yet effective feature fusion strategy, which significantly reduces the computational burden in the transformer's decoder without sacrificing diarization accuracy. To enhance the retrieval of multi-speaker embeddings from the memory module, we introduce a Deep Interactive Module (DIM) within the MA-MSE framework. By performing multi-scale feature fusion between acoustic features and speaker embedding basis vectors, DIM produces cleaner and more discriminative multi-speaker representations. 2) To address the issue of model bias across different acoustic conditions, we further introduce a novel Shared and Soft Mixture of Experts (SS-MoE) module into the Seq2Seq-based diarization framework, resulting in an enhanced system referred to as NSD-MS2S-SSMoE. 3) We introduce a simple and effective parameter transfer strategy, where the pre-trained parameters from NSD-MS2S are migrated to initialize the NSD-MS2S-SSMoE model. This method accelerates the convergence of the SS-MoE-enhanced system during training and reduces the overall training cost. 4) Our proposed NSD-MS2S system achieved first place in the main track of the CHiME-7 challenge. Furthermore, NSD-MS2S-SSMoE further improves single-model performance, achieving results comparable to the system fusion of NSD-MS2S on the CHiME-7 evaluation set, and attaining state-of-the-art performance on the DIHARD-III evaluation set. # II. RELATED WORK # A.
Neural Speaker Diarization Using Memory-Aware Multi-speaker Embedding

Traditional speaker diarization systems predominantly rely on clustering-based paradigms [7], [9], [25]. While effective in many scenarios, these methods exhibit significant challenges when encountering overlapping speech, due to the properties of the clustering algorithms. To overcome these limitations, end-to-end neural diarization (EEND) approaches have been proposed, reframing diarization as a multi-label classification task. Representative methods such as EEND [11]–[13] directly predict frame-level speaker activity for all speakers simultaneously. Similarly, target-speaker voice activity detection (TS-VAD) [16], [20]–[22] enhances speaker tracing by leveraging pre-acquired speaker embeddings to estimate speaker presence probabilities. Neural Speaker Diarization Using Memory-Aware Multi-Speaker Embedding (NSD-MA-MSE) [22] represents one of the state-of-the-art approaches among TS-VAD-based methods; it introduces a dedicated memory module designed to generate a set of speaker embeddings specifically for TS-VAD.

NSD-MA-MSE accepts a sequence of acoustic frames as input, represented by the matrix $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_T]$, where each $\mathbf{x}_t \in \mathbb{R}^{D'}$ corresponds to a $D'$-dimensional log-Mel filterbank (FBANK) feature vector at time step $t$, and $T$ denotes the total number of frames in an utterance. This input is processed through four convolutional layers, which transform the raw acoustic features into higher-level representations. The resulting deep features are denoted as $\mathbf{F} = [\mathbf{f}_1, \mathbf{f}_2, \ldots, \mathbf{f}_T]$, with each $\mathbf{f}_t \in \mathbb{R}^{D}$ capturing a $D$-dimensional vector at frame $t$. These frame-wise features are simultaneously used by the primary model and a memory-based module. To model speaker-specific characteristics, each deep feature is concatenated with a replicated set of speaker embeddings $\mathbf{E} = [\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_N]$, where each $\mathbf{e}_n \in \mathbb{R}^{L}$ is an embedding for the $n$-th speaker. These embeddings are broadcast across all frames and are generated by the MA-MSE module. The combined representations (acoustic features and speaker embeddings) are then forwarded to a speaker detection (SD) component, which consists of a two-layer bidirectional LSTM with projection (BLSTMP) [26] to capture the temporal dependencies in speaker activity patterns. Subsequently, the speaker-wise outputs from the SD module are aggregated and passed through a final one-layer BLSTMP to compute binary classification outputs for speech activity. The system generates $\hat{\mathbf{Y}} = (\hat{y}_{nt}) \in \mathbb{R}^{N \times T}$, where each $\hat{y}_{nt} \in [0, 1]$ denotes the probability that speaker $n$ is speaking at frame $t$.

Fig. 1: The architecture of the neural speaker diarization network using memory-aware multi-speaker embedding.

Fig. 2: Soft MoE routing details.

# B. Soft Mixture of Experts

The Soft MoE routing algorithm [27] presents a token-to-slot assignment strategy that enables fully differentiable expert selection. Given input tokens $\mathbf{X} \in \mathbb{R}^{m \times d}$, where $m$ denotes the number of tokens and $d$ the token dimension, each expert in the mixture operates on $p$ virtual slots, parameterized by $\Phi \in \mathbb{R}^{d \times (n \cdot p)}$. Here, $n$ is the number of experts, and the total number of slots is $n \cdot p$.

To compute the slot representations, a dispatch matrix $\mathbf{D} \in \mathbb{R}^{m \times (n \cdot p)}$ is first computed via softmax normalization over the columns of the similarity matrix $\mathbf{X}\Phi$. Each slot representation is then obtained as a convex combination of all input tokens:

$$ \mathbf{D}_{ij} = \frac{\exp((\mathbf{X}\Phi)_{ij})}{\sum_{i'=1}^{m} \exp((\mathbf{X}\Phi)_{i'j})}, \quad \tilde{\mathbf{X}} = \mathbf{D}^{\top} \mathbf{X} $$

Each row of $\tilde{\mathbf{X}}$ is routed to a designated expert based on its slot index. The expert function $f_{\lfloor i/p \rfloor}$ then processes each slot independently to produce the intermediate output slots $\tilde{\mathbf{Y}}_i$. A second softmax, applied row-wise to the same token-slot interaction scores, yields the combine matrix $\mathbf{C} \in \mathbb{R}^{m \times (n \cdot p)}$, which is used to reconstruct the final output tokens:

$$ \mathbf{C}_{ij} = \frac{\exp((\mathbf{X}\Phi)_{ij})}{\sum_{j'=1}^{n \cdot p} \exp((\mathbf{X}\Phi)_{ij'})}, \quad \mathbf{Y} = \mathbf{C} \tilde{\mathbf{Y}} $$

This fully differentiable token-to-slot-to-token mechanism enables end-to-end training without hard routing decisions. In practice, a portion of the Transformer's feed-forward layers (typically the latter half) can be replaced by Soft MoE modules. The number of slots, rather than the number of experts, primarily influences the computational cost, making it a tunable parameter for balancing efficiency and performance.

# III. NEURAL SPEAKER DIARIZATION SYSTEM USING MEMORY-AWARE MULTI-SPEAKER EMBEDDING WITH SEQUENCE-TO-SEQUENCE ARCHITECTURE

# A.
Overview of Network

The main network receives a sequence of acoustic features, denoted as $\mathbf{X} \in \mathbb{R}^{T \times F}$, where $T$ and $F$ represent the time steps and the dimensionality of log-Mel filter-bank features (FBANKs), respectively. These features are processed by convolutional layers to extract a set of deep features $\mathbf{F} \in \mathbb{R}^{C \times T \times \frac{F}{2}}$, which are then downsampled to produce $\mathbf{F}' \in \mathbb{R}^{T \times D}$, where $C$ and $D$ are the channel and feature dimensions, respectively. The feature sequence $\mathbf{F}'$ is augmented with positional embeddings (PE) and passed through the speaker detection (SD) encoder, which consists of a stack of conformer blocks, yielding the encoded features $\mathbf{E}_{\mathrm{enc}} \in \mathbb{R}^{T \times D}$.

Fig. 3: The proposed NSD-MS2S-SSMoE framework.

Additionally, $\mathbf{F}'$ and the speaker mask matrix $\mathbf{S} \in \mathbb{R}^{N \times T}$ are input to the MA-MSE module, producing the MA-MSE embedding $\mathbf{E}_M \in \mathbb{R}^{N \times D_M}$, where $N$ is the number of speakers and $D_M$ is the dimensionality of the MA-MSE embedding. This embedding is concatenated with the i-vector to form the aggregated embedding $\mathbf{E}_A \in \mathbb{R}^{N \times D}$, which is described in detail later. The aggregate embedding $\mathbf{E}_A$, along with the decoder embedding $\mathbf{E}_D \in \mathbb{R}^{N \times D}$ and the encoded features $\mathbf{E}_{\mathrm{enc}}$, are passed through the SD decoder, augmented with sinusoidal positional embeddings.
This results in the decoded features $\mathbf{E}_{\mathrm{dec}} \in \mathbb{R}^{N \times D}$, which are discussed later. Finally, the output layer converts $\mathbf{E}_{\mathrm{dec}}$ into posterior probabilities $\hat{\mathbf{Y}} = [\hat{y}_{nt}]_{N \times T}$, representing the voice activity probabilities for $N$ speakers.

# B. Speaker Detection Decoder

The design of the speaker detection (SD) decoder is primarily inspired by [21], [28]. It consists of multiple SD blocks that predict the voice activities of target speakers by considering cross-speaker correlations. In the forward pass of an SD block, the decoder embedding $\mathbf{E}_D$ and the aggregate embedding $\mathbf{E}_A$ are processed through their respective multi-layer perceptrons (MLPs) to generate the within-block representations $\mathbf{E}_D^{Q_1}$, $\mathbf{E}_D^{K_1}$, $\mathbf{E}_D^{V_1}$, $\mathbf{E}_A^{Q_1}$, and $\mathbf{E}_A^{K_1}$, where $Q$, $K$, and $V$ represent the query, key, and value in the attention mechanism, respectively. All MLP layers, unless otherwise noted, map the input feature dimensions to $D$. The MLP structure is omitted for simplicity in Fig.??(c). To ensure that the decoder embedding includes speaker information while minimizing subsequent time and space overhead, the input features are fused without increasing the feature dimensions.
This fusion can be expressed by the following equations:

$$ \begin{array}{r} \mathbf{Q}_1 = \beta_1 \times \mathbf{E}_D^{Q_1} + (1 - \beta_1) \times \mathbf{E}_A^{Q_1} \\ \mathbf{K}_1 = \beta_2 \times \mathbf{E}_D^{K_1} + (1 - \beta_2) \times \mathbf{E}_A^{K_1} \\ \mathbf{V}_1 = \mathbf{E}_D^{V_1} \end{array} $$

where $\beta_1$ and $\beta_2$ are learnable parameters that allow the model to determine the most relevant information. The queries $\mathbf{Q}_1$, keys $\mathbf{K}_1$, and values $\mathbf{V}_1$ undergo layer normalization (LN) and multi-head attention (MA) to extract features at different levels, resulting in the within-block features $\mathbf{E}_F \in \mathbb{R}^{N \times D}$. Next, we transform $\mathbf{E}_F$, $\mathbf{E}_A$, and $\mathbf{E}_{\mathrm{enc}}$ into within-block representations $\mathbf{E}_F^{Q_2}$, $\mathbf{E}_A^{Q_2}$, $\mathbf{E}_{\mathrm{enc}}^{K_2}$, and $\mathbf{E}_{\mathrm{enc}}^{V_2}$ via MLP layers. The queries, keys, and values for the second LN & MA layer are obtained as follows:

$$ \begin{array}{c} \mathbf{Q}_2 = \beta_3 \times \mathbf{E}_F^{Q_2} + (1 - \beta_3) \times \mathbf{E}_A^{Q_2} \\ \mathbf{K}_2 = \mathbf{E}_{\mathrm{enc}}^{K_2} + \mathbf{PE} \\ \mathbf{V}_2 = \mathbf{E}_{\mathrm{enc}}^{V_2} \end{array} $$

where $\mathbf{PE}$ represents the sinusoidal positional embedding, and $\beta_3$ is another learnable parameter.
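In code, this fusion is simply a convex combination of the two projected embeddings. The following is a minimal numpy sketch; the shapes, the fixed β value, and the random arrays standing in for the MLP outputs are illustrative only (in the actual model, β is a learned scalar):

```python
import numpy as np

def fuse(e_d: np.ndarray, e_a: np.ndarray, beta: float) -> np.ndarray:
    """Convex combination of decoder-side and aggregate-embedding projections.

    In the real model, beta is a learnable scalar; here it is a plain float.
    """
    return beta * e_d + (1.0 - beta) * e_a

# Hypothetical shapes: N speakers, model dimension D.
N, D = 4, 512
rng = np.random.default_rng(0)
E_D_q = rng.standard_normal((N, D))  # stands in for MLP(E_D) -> E_D^{Q1}
E_A_q = rng.standard_normal((N, D))  # stands in for MLP(E_A) -> E_A^{Q1}

Q1 = fuse(E_D_q, E_A_q, beta=0.7)    # equation for Q1; K1 is analogous with beta_2
assert Q1.shape == (N, D)            # the fusion keeps the feature dimension D
```

Because the two inputs are blended rather than concatenated, the subsequent attention layers operate on $D$-dimensional features instead of $2D$-dimensional ones, which is the source of the time and memory savings mentioned above.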
The output of the second LN & MA layer is then passed through a feed-forward network (FFN), producing the next decoder embedding. Finally, the output embedding $\mathbf{E}_{\mathrm{dec}}$ is sent to the output layer, which consists of a linear layer followed by a sigmoid activation function to predict target-speaker voice activities. The output layer's structure also determines the length of the decoding process.

# C. MA-MSE with Deep Interactive Module

The memory-aware multi-speaker embedding (MA-MSE) module is designed to retrieve clean and discriminative multi-speaker embeddings from memory using a simple additive attention mechanism. As outlined in [22], the core of the MA-MSE module is the memory component, which consists of speaker embedding basis vectors derived from additional datasets. Specifically, these embedding basis vectors are obtained by clustering speaker embeddings (e.g., i-vectors or x-vectors) and selecting the cluster centers. Before feeding the features $\mathbf{F}'$ into the MA-MSE module, we apply a clustering-based approach to obtain a speaker activity mask $\mathbf{S} \in \mathbb{R}^{N \times T}$, where each frame is assigned a $0/1$ label indicating speaker presence. The features $\mathbf{F}'$ and the mask $\mathbf{S}$ are then multiplied to select the relevant features for each speaker, yielding the selected features $\mathbf{F}_S = [\mathbf{F}_S^1, \mathbf{F}_S^2, \ldots, \mathbf{F}_S^N]^T \in \mathbb{R}^{N \times D}$. An additive attention mechanism is then employed to match the current speech segment with the most relevant speaker embedding bases from the memory. Through the CHiME-7 DASR Challenge, we found that an unoptimized MA-MSE module structure can severely degrade performance in complex acoustic environments.
Furthermore, overly simplistic mechanisms may limit the potential for performance improvement. To address this, we introduce the Deep Interactive Module (DIM), which replaces the additive attention mechanism with a dot-product attention mechanism and increases the depth of the interaction layers. This multi-scale feature fusion approach enhances the extraction of cleaner, more discriminative multi-speaker embeddings from the memory module.

Fig. 4: Deep interactive module.

The DIM consists of three DIM blocks, each containing two cross-attention layers along the feature dimension. The speaker embedding basis vectors in the memory module are denoted by $\mathbf{M} \in \mathbb{R}^{K \times D_M}$, where $K$ is the number of vectors. In the first DIM block, the input features $\mathbf{F}_S^n$ of the $n$-th speaker and the memory $\mathbf{M}$ are processed as follows:

$$ \mathbf{H}_1^n = \mathrm{Softmax}\left( \frac{\left( \mathbf{F}_S^n \mathbf{W}_1^{n,q} \right) \left( \mathbf{M} \mathbf{W}_1^{n,k} \right)^T}{\sqrt{d_m}} \right) \mathbf{M} $$

where $\mathbf{W}_1^{n,q} \in \mathbb{R}^{D \times D}$ and $\mathbf{W}_1^{n,k} \in \mathbb{R}^{D_M \times D}$ are learnable weight matrices, and $\sqrt{d_m}$ is used for scaling to ensure numerical stability.
The output of the first DIM block is then calculated by:

$$ \mathbf{H}_2^n = \mathrm{Softmax}\left( \frac{\left( \mathbf{F}_S^n \mathbf{W}_2^{n,q} \right) \left( \mathbf{H}_1^n \mathbf{W}_2^{n,k} \right)^T}{\sqrt{d_m}} \right) \mathbf{H}_1^n $$

where $\mathbf{W}_2^{n,q} \in \mathbb{R}^{D \times D}$ and $\mathbf{W}_2^{n,k} \in \mathbb{R}^{D_M \times D}$ are additional learnable weights. After this, the resulting $\mathbf{H}_2^n$ is passed, along with $\mathbf{F}_S^n$, to the next DIM block. Finally, after processing through all three DIM blocks, the MA-MSE embedding $\mathbf{E}_M$ is obtained. This embedding, which provides crucial supplementary speaker information, is concatenated with the current speaker's i-vector to generate the aggregate embedding $\mathbf{E}_A$.

# D. Shared and Soft Mixture of Experts

The SS-MoE module consists of a shared expert, multiple collaborative experts, input slots, an input dispatch module, and an output combination module. We denote the input tokens of a sequence as $\mathbf{X} \in \mathbb{R}^{m \times d}$, where $m$ is the number of tokens and $d$ is their dimensionality. The shared expert directly processes the input tokens and produces:

$$ \mathbf{Y}_{\mathrm{sha}} = \mathrm{Expert}_{\mathrm{sha}}(\mathbf{X}) \in \mathbb{R}^{m \times d} $$

In the SS-MoE module, the input dispatch module assigns weights to input tokens and distributes them to different slots. This process ensures that each collaborative expert receives a weighted average of tokens as input, rather than individual tokens.
After processing their inputs, the collaborative experts' outputs are merged by the output combination module, resulting in the fused expert output $\mathbf{Y}_{\mathrm{co}}$. Each output token is also a weighted average of all collaborative expert outputs. Finally, the fused expert output and the shared expert output are combined to produce the final output of the SS-MoE module:

$$ \mathbf{Y} = \mathbf{Y}_{\mathrm{sha}} + \mathbf{Y}_{\mathrm{co}} \in \mathbb{R}^{m \times d} $$

Next, we elaborate on the technical details of each component. 1) Input Dispatch Module: The architecture of the input dispatch module is illustrated in Figure 5. Each SS-MoE layer contains $n$ collaborative experts, and each expert processes $p$ slots. Each slot is associated with a learnable $d$-dimensional vector; collectively, these vectors form $\Phi \in \mathbb{R}^{d \times (n \cdot p)}$. The weight logits are computed as the matrix product of $\mathbf{X}$ and $\Phi$:

$$ \mathrm{Weight\_logits} = (\mathbf{X}\Phi) \in \mathbb{R}^{m \times (n \cdot p)} $$

Then, a Softmax is applied to each column of the weight logits:

$$ \mathbf{D}_{ij} = \frac{\exp(\mathrm{Weight\_logits}_{ij})}{\sum_{i'=1}^{m} \exp(\mathrm{Weight\_logits}_{i'j})} \in \mathbb{R}^{m \times (n \cdot p)} $$

Here, $\mathbf{D}_{ij}$ is referred to as the dispatch weight. The inputs are then linearly combined based on these weights to obtain the inputs $\tilde{\mathbf{X}}$ for each of the $p$ slots of the $n$ collaborative experts:

$$ \tilde{\mathbf{X}} = \mathbf{D}^{\top} \mathbf{X} \in \mathbb{R}^{(n \cdot p) \times d} $$

Intuitively, each slot in $\tilde{\mathbf{X}}$ represents a weighted sum of all input tokens in $\mathbf{X}$.
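The dispatch computation above reduces to two matrix products and a column-wise softmax. A short numpy sketch (the sizes and random inputs are illustrative, not the paper's configuration):

```python
import numpy as np

def softmax(z: np.ndarray, axis: int) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dispatch(X: np.ndarray, Phi: np.ndarray):
    """Soft-MoE style dispatch: column-wise softmax over X @ Phi,
    then each slot becomes a convex combination of the input tokens."""
    weight_logits = X @ Phi              # (m, n*p) token-slot scores
    D = softmax(weight_logits, axis=0)   # each slot's weights sum to 1 over tokens
    X_tilde = D.T @ X                    # (n*p, d) slot inputs
    return D, X_tilde

m, d, n, p = 10, 16, 6, 4               # tokens, dim, experts, slots per expert
rng = np.random.default_rng(1)
X = rng.standard_normal((m, d))
Phi = rng.standard_normal((d, n * p))
D, X_tilde = dispatch(X, Phi)
assert X_tilde.shape == (n * p, d)
assert np.allclose(D.sum(axis=0), 1.0)  # dispatch weights are a distribution over tokens
```

Note that the column-wise normalization is what makes each slot a convex combination of tokens; the combination step described next reuses the same logits with a row-wise softmax instead.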
2) Output Combined Module: The purpose of the output combination module is to better fuse the outputs of multiple experts. Its architecture is shown in Figure 6. The outputs of the collaborative experts are defined as:

$$ \tilde{\mathbf{Y}}_{\mathrm{co}} = \mathrm{Experts}_{\mathrm{co}}(\tilde{\mathbf{X}}) \in \mathbb{R}^{(n \cdot p) \times d} $$

The output combination module further transforms the weight logits from Section III-D1. Specifically, an attention layer is used to focus on the most informative weights:

$$ \begin{array}{rl} \mathbf{Q} = \mathrm{Weight\_logits} \cdot \mathbf{W}_Q \\ \mathbf{K} = \mathrm{Weight\_logits} \cdot \mathbf{W}_K \\ \mathbf{V} = \mathrm{Weight\_logits} \cdot \mathbf{W}_V \end{array} $$

$$ \mathrm{Logits\_attention} = \mathrm{Softmax}\left( \frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d_k}} \right) \mathbf{V} \in \mathbb{R}^{m \times (n \cdot p)} $$

Here, $\mathbf{W}_Q, \mathbf{W}_K, \mathbf{W}_V \in \mathbb{R}^{(n \cdot p) \times (n \cdot p)}$ are learnable parameters. The attention output is then normalized:

$$ \mathrm{Logits\_norm} = \mathrm{Norm}(\mathrm{Logits\_attention}) \in \mathbb{R}^{m \times (n \cdot p)} $$

We use Instance Normalization for this normalization. A linear layer is then applied to project the normalized logits into a suitable feature space, yielding the combined logits:

$$ \mathrm{Combined\_logits} = \mathrm{Linear}(\mathrm{Logits\_norm}) \in \mathbb{R}^{m \times (n \cdot p)} $$

A row-wise Softmax is applied to the combined logits:

$$ \mathbf{C}_{ij} = \frac{\exp(\mathrm{Combined\_logits}_{ij})}{\sum_{j'=1}^{n \cdot p} \exp(\mathrm{Combined\_logits}_{ij'})} \in \mathbb{R}^{m \times (n \cdot p)} $$

$\mathbf{C}_{ij}$ is referred to as the combination weight. Finally, these weights are used to linearly combine the outputs $\tilde{\mathbf{Y}}_{\mathrm{co}}$ from all collaborative experts to produce the fused expert output:

$$ \mathbf{Y}_{\mathrm{co}} = \mathbf{C} \tilde{\mathbf{Y}}_{\mathrm{co}} \in \mathbb{R}^{m \times d} $$

Fig. 5: Illustration of the input dispatch module.

Fig. 6: Illustration of the output combination module.

# E. Parameter Transfer for Accelerating Model Optimization

Training a Mixture-of-Experts (MoE) model from scratch typically incurs substantial computational and time costs. To address this challenge, we propose a parameter transfer strategy that leverages pretrained model parameters to effectively initialize the NSD-MS2S-SSMoE model. This approach enables faster convergence and reduces training overhead. Specifically, we utilize a pretrained NSD-MS2S model and transfer its parameters to initialize structurally compatible components within the NSD-MS2S-SSMoE model. As illustrated in Figure 7, we identify all modules that are shared between the two models and perform direct parameter copying wherever applicable.
In particular, for the speaker decoder block, we replicate the Feed-Forward Network (FFN) parameters from the pretrained model $n+1$ times to initialize the $n+1$ expert networks in the SS-MoE module. Other identical submodules, such as attention mechanisms and normalization layers, are directly initialized with their corresponding pretrained weights. This parameter reuse paradigm allows us to retain the inductive biases and prior knowledge embedded in the pretrained model, thereby providing a strong initialization for the SS-MoE-enhanced architecture. Empirical results demonstrate that this transfer strategy significantly reduces training cost while preserving or even improving model performance.

Fig. 7: Illustration of the parameter transfer process from the pretrained NSD-MS2S model to the NSD-MS2S-SSMoE model.

# IV. EXPERIMENTAL SETUP

# A. Experimental Setup

1) Datasets: To evaluate the robustness of the proposed diarization system in complex acoustic conditions, experiments were conducted on three challenging English datasets: CHiME-6, DiPCo, and Mixer 6. Additionally, we further validated the proposed method on the DIHARD-III dataset, which covers a broader range of scenarios. 2) Training Data: For the CHiME-6, DiPCo, and Mixer 6 datasets, we adopt a simulation strategy1 to generate a large amount of synthetic data for training-set augmentation. Specifically, simulated multi-speaker conversations are constructed using real single-speaker utterances. This approach enables the expansion of the training data without incurring the cost of manual annotation. In addition to the official training sets of CHiME-6 and Mixer 6, we further enhance the Mixer 6 training set by applying pseudo-labeling techniques, following the method proposed in [29]. The total duration of training data used amounted to approximately 5,300 hours.
Since the DIHARD-III dataset does not provide a dedicated training set, it poses a significant challenge to the generalization capability of the diarization system. To address this, in addition to simulating multi-speaker conversations using the LibriSpeech dataset, we utilized several real-world datasets, including Switchboard-1 [30], AMI [31], and the development portion of VoxConverse [32]. The total duration of training data used amounted to approximately 1,400 hours. 3) Initialization of Diarization Systems: For the CHiME-6, DiPCo, and Mixer 6 datasets, we adopt the baseline VAD model provided by CHiME-7, and further fine-tune it using the training data from CHiME-6 and Mixer 6. After applying VAD, we segment the detected speech regions into overlapping subsegments with a window length of 1.5 seconds and a shift of 0.75 seconds. We then extract x-vectors from each segment using the ECAPA-TDNN model [33], which is pretrained on the VoxCeleb corpus [34]. Finally, speaker clustering is performed using spectral clustering based on cosine similarity. For the DIHARD-III dataset, a clustering-based diarization system, VBx [35], was adopted for initialization. Specifically, silence segments were first removed based on official annotations. Then, x-vectors were extracted using a speaker embedding model (ECAPA-TDNN) pre-trained on VoxCeleb. Agglomerative Hierarchical Clustering (AHC) was performed on the x-vectors to obtain coarse cluster assignments, which were used to initialize the parameters of VBx. In this system, each state in the Hidden Markov Model (HMM) is treated as an individual speaker, and transitions between states correspond to speaker changes. The x-vector sequence is regarded as the observation sequence, and Variational Inference is employed to obtain the most probable state sequence, corresponding to the final diarization output.
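As a loose illustration of the coarse clustering step in this initialization, the following numpy-only sketch runs a naive average-linkage AHC on cosine distances between (synthetic) x-vectors. The threshold, dimensionality, and stopping rule are illustrative; real systems use tuned thresholds, PLDA scoring, and VBx refinement:

```python
import numpy as np

def cosine_dist(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine distance between rows of a and b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return 1.0 - a @ b.T

def ahc(xvecs: np.ndarray, threshold: float) -> np.ndarray:
    """Naive average-linkage agglomerative clustering on cosine distance.
    Merges the closest pair of clusters until the best distance exceeds threshold."""
    clusters = [[i] for i in range(len(xvecs))]
    dist = cosine_dist(xvecs, xvecs)
    while len(clusters) > 1:
        best, pair = np.inf, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = dist[np.ix_(clusters[i], clusters[j])].mean()  # average linkage
                if d < best:
                    best, pair = d, (i, j)
        if best > threshold:
            break
        i, j = pair
        clusters[i] += clusters.pop(j)
    labels = np.empty(len(xvecs), dtype=int)
    for k, members in enumerate(clusters):
        labels[members] = k
    return labels

# Two synthetic "speakers": segment embeddings clustered around two directions.
rng = np.random.default_rng(2)
spk_a = rng.normal([1, 0, 0, 0], 0.05, size=(5, 4))
spk_b = rng.normal([0, 1, 0, 0], 0.05, size=(5, 4))
labels = ahc(np.vstack([spk_a, spk_b]), threshold=0.5)
assert len(set(labels[:5])) == 1 and len(set(labels[5:])) == 1
assert labels[0] != labels[5]
```

The resulting labels play the role of the coarse assignments handed to VBx, whose HMM then re-segments the recording at the frame level.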
4) Model Configuration and Training: For the NSD-MS2S-SSMoE system, 40-dimensional log Mel-filterbank (FBANK) features are used as input. The model consists of 6 speaker detection encoder layers and 6 speaker detection decoder layers. For the CHiME-6, DiPCo, and Mixer 6 datasets, the SS-MoE module was inserted into the last three decoder layers. For the DIHARD-III dataset, SS-MoE was only applied to the second decoder layer. Each SS-MoE block comprises Gated Linear Unit (GLU)-based expert models, each consisting of two fully-connected layers. The first layer projects the input to $2d$ dimensions, followed by GELU-based gating (GEGLU): the input is split along the channel dimension, where one half is passed through a GELU activation function and multiplied element-wise with the other half. Dropout with a rate of 0.1 is applied for regularization. The second fully-connected layer projects the output back to the original $d$ dimensions, with $d = 512$. Each SS-MoE layer contains 6 experts, and the number of input slots is set to 4. In the fusion branches, attention layers use a dimensionality of 512 with 4 attention heads. Other model parameters remain consistent with the baseline NSD-MS2S configuration. For experiments on the DIHARD-III dataset, we first pretrained the NSD-MS2S model for 30 epochs using a learning rate of 1e-4. The resulting parameters were then used to initialize the corresponding modules of the NSD-MS2S-SSMoE model. During fine-tuning, a two-stage strategy was adopted: first, the learning rate was set to 1e-5 and all parameters except the SS-MoE layers were frozen for 2 epochs; then, all parameters were unfrozen and the entire model was fine-tuned for an additional 3 epochs. The Adam optimizer was used throughout. This staged fine-tuning approach facilitates stable training while gradually improving SS-MoE performance. For the CHiME-6, DiPCo, and Mixer 6 datasets, pretraining was performed for 6 epochs.
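The GLU-based expert described above (project to $2d$, split, GELU-gate one half, project back to $d$) can be sketched in plain numpy for inference; dropout is omitted, the weights are random stand-ins, and the tanh approximation of GELU is used for illustration:

```python
import numpy as np

def gelu(x: np.ndarray) -> np.ndarray:
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def geglu_expert(x, W1, b1, W2, b2):
    """GLU-style expert: project to 2d, split along channels, GELU-gate one half,
    multiply element-wise, then project back to d (dropout omitted at inference)."""
    h = x @ W1 + b1                  # (..., 2d)
    a, g = np.split(h, 2, axis=-1)   # two halves of size d
    gated = a * gelu(g)              # GEGLU gating
    return gated @ W2 + b2           # back to (..., d)

d = 512
rng = np.random.default_rng(3)
W1 = rng.standard_normal((d, 2 * d)) * 0.02
b1 = np.zeros(2 * d)
W2 = rng.standard_normal((d, d)) * 0.02
b2 = np.zeros(d)

x = rng.standard_normal((4, d))      # a small batch of slot inputs
y = geglu_expert(x, W1, b1, W2, b2)
assert y.shape == (4, d)             # the expert preserves the model dimension
```

Since every expert maps $\mathbb{R}^d \to \mathbb{R}^d$, the outputs of the shared and collaborative experts can be summed and combined exactly as in the SS-MoE equations of Section III-D.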
During fine-tuning, the model was first trained for 1 epoch with all parameters frozen except for the SS-MoE layers, followed by another epoch with all parameters unfrozen. Other experimental settings remained the same.

TABLE I: Performance comparison on the CHiME-6 dataset (collar = 0.25 s).

5) Baseline Systems: To assess the effectiveness of the proposed NSD-MS2S-SSMoE system, we compared it against several state-of-the-art diarization systems, including TS-VAD, NSD-MA-MSE, and NSD-MS2S. Furthermore, we included QM-TS-VAD [36], a recent TS-VAD variant, and ITS-VAD, the top-ranked system in the DIHARD-III challenge, as additional baselines. 6) Evaluation Metrics: For the DIHARD-III dataset, the Diarization Error Rate (DER) was used as the primary evaluation metric, with a collar tolerance of 0 seconds to ensure strict alignment with reference annotations. For CHiME-6, DiPCo, and Mixer 6, both DER and the Jaccard Error Rate (JER) were adopted, using a collar of 0.25 seconds. For methods or datasets where reference results were unavailable in the literature, missing results are indicated with "–".

# V. RESULTS AND ANALYSIS

1) Results on Different Datasets: Table I reports the diarization performance of different systems on the CHiME-6 dataset. The proposed NSD-MS2S-SSMoE achieves the lowest DER and JER on both the development and evaluation sets, with DER/JER values of 26.31%/28.56% and 28.51%/32.31%, respectively. Compared with the NSD-MS2S baseline, the DER is relatively reduced by 7.23% on the development set and 3.19% on the evaluation set, demonstrating the effectiveness of the mixture-of-experts design in improving diarization accuracy. Table II presents results on the DiPCo dataset. Our NSD-MS2S-SSMoE consistently outperforms all baselines. In particular, DER/JER are reduced to 15.97%/17.17% on the development set and 19.25%/26.25% on the evaluation set.
Compared to the NSD-MS2S baseline, the relative DER reduction on the development set reaches 6.39%, while the gain on the evaluation set is more modest (1.09%), indicating a possible risk of overfitting due to the increased model complexity of the expert architecture.

TABLE II: Performance comparison on the DiPCo dataset (collar = 0.25 s).

TABLE III: Performance comparison on the Mixer 6 dataset (collar = 0.25 s).

Table III summarizes performance on the Mixer 6 dataset. NSD-MS2S-SSMoE achieves the best DER and JER on both splits, with DER/JER of 7.16%/9.14% on the development set and 4.94%/5.49% on the evaluation set. However, since most systems already achieve very low error rates on this dataset and potential annotation inaccuracies may limit further improvement, the performance gains here are marginal. We present in Table IV the diarization error rate (DER) results of various systems on the DIHARD-III evaluation set across eight domains: broadcast news (BROADC.), courtroom (COURT), map task (MAP TASK), clinical interviews (CLINICAL), sociolinguistic lab interviews (SOC.LAB), sociolinguistic field recordings (SOC.FIELD), conversational telephone speech (CTS), and meetings (MEETING). Our proposed system, NSD-MS2S-SSMoE, achieves SOTA performance in multiple domains, including BROADC., COURT, SOC.LAB, SOC.FIELD, CTS, and MEETING, demonstrating strong robustness and generalization across diverse acoustic conditions.

TABLE IV: DER (%) comparison of different systems on the DIHARD-III evaluation set across eight domains (collar = 0 s).

Additionally, QM-TS-VAD shows superior results in MAP TASK and CLINICAL, likely benefiting from its fine-tuning on simulated data generated from high-quality in-domain recordings, which enhances performance in domain-specific settings. It is also worth noting that in the BROADC.
domain, all end-to-end diarization systems struggle to surpass the traditional VBx system. This is likely due to the very low overlap speech ratio (only $1.18\%$) in this domain, which limits the advantage of overlap-aware modeling typically offered by end-to-end systems.
2) Analysis of the DIM Module Results: Figure 8 illustrates the impact of the proposed DIM module on the performance of the NSD-MS2S system. The inclusion of the DIM module consistently improves system performance across datasets. Specifically, the DIM module relatively reduces the DER on the evaluation sets of CHiME-6, DiPCo, and Mixer 6 by $3.44\%$ (from $30.50\%$ to $29.45\%$), $10.76\%$ (from $21.64\%$ to $19.31\%$), and $9.80\%$ (from $5.50\%$ to $5.01\%$), respectively.

Fig. 8: Impact of the DIM module on the performance of the NSD-MS2S system.

To further investigate the effect of the DIM module, we analyze the changes in the components of DER on the DiPCo evaluation set, as shown in Figure 9. The DIM module yields improvements across all error types, including False Alarm (FA), Miss (MISS), and Speaker Error (SPKERR). Notably, the most significant improvement is observed in SPKERR, which is relatively reduced by $27\%$ (from $3.78\%$ to $2.74\%$). These results indicate that the DIM module helps the NSD-MS2S system extract cleaner and more discriminative speaker embeddings, thereby enhancing its ability to differentiate between speakers.

Fig. 9: Detailed breakdown of DER components on the DiPCo evaluation set.

3) Convergence analysis of parameter migration strategies: Figure 10 illustrates the convergence behavior of NSD-MS2S and NSD-MS2S-SSMoE on the CHiME-6 evaluation set.
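DER decomposes into the three error types analyzed for Figure 9, and the relative reductions quoted above follow from a one-line formula; a minimal sketch (function names are ours):

```python
def der(fa, miss, spkerr):
    """DER is the sum of false alarm, missed speech, and speaker error rates (%)."""
    return fa + miss + spkerr

def relative_reduction(before, after):
    """Relative improvement in percent, as used for the quoted reductions."""
    return 100.0 * (before - after) / before

# SPKERR on the DiPCo evaluation set drops from 3.78% to 2.74%,
# a relative reduction of roughly 27.5%.
spkerr_gain = relative_reduction(3.78, 2.74)
```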
The x-axis represents the logarithm of the number of model update iterations, while the y-axis indicates the DER on the CHiME-6 evaluation set. Blue dots correspond to NSD-MS2S, gray dots to NSD-MS2S-SSMoE initialized with random parameters, and green dots to NSD-MS2S-SSMoE initialized with pretrained NSD-MS2S parameters. In the early training stages, the model initialized with random parameters exhibits more volatile updates, likely due to a relatively high learning rate. As training progresses, NSD-MS2S-SSMoE converges to a lower DER than NSD-MS2S, consistent with the previous experimental results. Furthermore, the model utilizing parameter transfer exhibits smoother convergence and reaches the optimal region more rapidly, reducing the retraining cost by over $50\%$.
4) Effect of the Number of Experts on System Performance: Figure 11 shows the impact of the number of experts on system performance, evaluated using DER on the development sets of CHiME-6 and DiPCo. The blue dashed line indicates the baseline performance of NSD-MS2S. As the number of experts increases, the DER of NSD-MS2S-SSMoE decreases, suggesting that incorporating more experts effectively enhances system performance. Specifically, increasing the number of experts from 2 to 6 significantly improves model performance, likely because more experts can better capture the complex speaker characteristics in diverse acoustic scenarios. However, further increasing the number of experts yields only marginal gains, indicating performance saturation.

Fig. 10: Convergence comparison with different setups.

Fig. 11: Impact of the number of experts on system performance.

5) Effect of Expert Placement on System Performance: Figure 12 presents the impact of expert placement on system performance. Each data point corresponds to inserting SS-MoE modules from the $n$-th layer to the final layer of the speaker decoder.
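The parameter-migration strategy compared in Figure 10 (initializing NSD-MS2S-SSMoE from a pretrained NSD-MS2S checkpoint) can be sketched as follows, assuming a hypothetical state-dict layout in which each expert inherits the dense feed-forward weights; the key names are illustrative, not the authors' actual checkpoint format.

```python
def migrate_parameters(dense_ckpt, moe_keys, ffn_tag="ffn"):
    """Build an initial SS-MoE state dict from a pretrained dense checkpoint.

    Shared weights are copied verbatim; each expert weight of the form
    '<layer>.experts.<e>.<param>' is seeded from '<layer>.<ffn_tag>.<param>'.
    Parameters without a dense counterpart (e.g. the gate) get a fresh init;
    a zero placeholder stands in for that here.
    """
    state = {}
    for key in moe_keys:
        if key in dense_ckpt:                        # shared, non-expert weights
            state[key] = dense_ckpt[key]
        elif ".experts." in key:                     # seed every expert with the dense FFN
            layer, rest = key.split(".experts.", 1)
            param = rest.split(".", 1)[1]            # drop the expert index
            state[key] = dense_ckpt[f"{layer}.{ffn_tag}.{param}"]
        else:
            state[key] = 0.0                         # placeholder for fresh init
    return state

dense = {"attn.w": 1.0, "dec.ffn.w": 2.0}
moe_keys = ["attn.w", "dec.experts.0.w", "dec.experts.1.w", "dec.gate.w"]
state = migrate_parameters(dense, moe_keys)
```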
The results indicate that adding MoE modules across all layers does not necessarily yield optimal performance. On the CHiME-6 development set, the best results are obtained when SS-MoE modules are inserted into the last three layers (layers 4–6), while for DiPCo, inserting them into the last two layers (layers 5–6) leads to better performance. These findings suggest that optimal expert placement should be determined on a task-specific basis.
6) Comparison Between NSD-MS2S-SSMoE and NSD-MS2S Fusion Models: Table V compares the performance of the proposed mixture-of-experts (MoE) model and a model-ensemble approach across three datasets. Here, NSD-MS2S (Fusion) refers to an ensemble-based method in which six model checkpoints from different epochs are averaged at the parameter level, effectively mitigating model bias through ensembling. The results show that NSD-MS2S (Fusion) achieves significantly lower DER than the single NSD-MS2S model, highlighting the benefits of ensemble learning.

Fig. 12: Impact of expert placement on system performance.

TABLE V: Performance comparison of NSD-MS2S-SSMoE and NSD-MS2S fusion models on the CHiME-6, DiPCo, and Mixer 6 datasets (collar $= 0.25\,\mathrm{s}$). AVG denotes the average DER/JER over the three datasets.

Furthermore, the NSD-MS2S-SSMoE model outperforms the fusion model on most metrics. However, while NSD-MS2S-SSMoE achieves a lower average DER on the development sets (from $16.71\%$ to $16.48\%$), it slightly underperforms the fusion model on the evaluation sets ($17.51\%$ vs. $17.43\%$). This indicates that despite the strong learning capacity of SS-MoE and its effectiveness in alleviating bias, it may still be prone to overfitting, warranting further investigation.
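The NSD-MS2S (Fusion) baseline in Table V averages six checkpoints at the parameter level; treating checkpoints as plain state dicts, the operation reduces to a few lines (a sketch, not the authors' implementation):

```python
def average_checkpoints(checkpoints):
    """Parameter-level average of model checkpoints sharing the same keys."""
    n = len(checkpoints)
    return {k: sum(c[k] for c in checkpoints) / n for k in checkpoints[0]}

# Toy example with three checkpoints; the paper's fusion model averages six.
fused = average_checkpoints([{"w": 1.0, "b": 0.0},
                             {"w": 3.0, "b": 0.2},
                             {"w": 2.0, "b": 0.4}])
```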
# VI. CONCLUSION

In this paper, we propose NSD-MS2S, a novel neural speaker diarization system that integrates a memory-aware multi-speaker embedding module with a sequence-to-sequence architecture. The system leverages a memory module to enhance speaker embeddings and employs a Seq2Seq framework to efficiently map acoustic features to speaker labels. Additionally, we explore the application of mixture of experts in speaker diarization and introduce a Shared and Soft Mixture of Experts (SS-MoE) module to further mitigate model bias and enhance performance; incorporating SS-MoE yields the extended model NSD-MS2S-SSMoE. Experiments on multiple complex acoustic datasets, including the CHiME-6, DiPCo, Mixer 6, and DIHARD-III evaluation sets, demonstrate meaningful improvements in robustness and generalization. The proposed methods achieve state-of-the-art results, showcasing their effectiveness in challenging real-world scenarios.
# 1 Introduction Recent advancements in generative AI have profoundly impacted autonomous driving, with diffusion models (DMs) emerging as pivotal tools for data synthesis and driving simulation. Some approaches utilize DMs as data machines, producing high-fidelity driving videos [1–14] or multi-modal synthetic data [15–18] to augment perception tasks, as well as generating corner cases (e.g., vehicle cut-ins) to enrich planning data with uncommon yet critical scenarios. Beyond this, other methods employ DMs as world models to predict future driving states, enabling end-to-end planning [19–21] and closed-loop simulation [22–27]. All these efforts emphasize long-term video generation through temporal recursion, encouraging DMs to produce coherent video sequences for downstream tasks. However, large-scale scene generation with spatial expansion, which aims to build expansive and immersive 3D environments for arbitrary driving simulation, remains an emerging yet underexplored direction. A handful of pioneering works have explored 3D driving scene generation at scale. For example, SemCity [28] generates city-scale 3D occupancy grids using DMs, but the lack of appearance details limits its practicality for realistic simulation. UniScene [18] and InfiniCube [29] extend this by generating both 3D occupancy and images, but require a manually defined large-scale layout as a conditioning input, complicating the generation process and hindering flexibility. In this work, we explore the potential solution to large-scale scene generation with spatial expansion, which faces the following three main challenges: 1) Flexible Controllability: Enabling versatile control through both low-level conditions (e.g., layouts) for precise scene composition and highlevel prompts (e.g., user-intent text descriptions) for efficient, intuitive customization. For instance, as shown in Fig. 
1, users can provide a brief scene description, which the system elaborates into a plausible scene by fully harnessing the generative model’s creative capacity; 2) High-Fidelity Geometry and Appearance: generating intricate geometry alongside photorealistic appearance, which is essential to ensure both the structural integrity and visual realism of the 3D scene; 3) Large-Scale Consistency: maintaining spatial coherence across interconnected regions to ensure global consistency throughout the extended scene. To address these challenges, we propose $\mathcal{X}$-Scene, a novel framework for large-scale driving scene generation. $\mathcal{X}$-Scene offers: 1) Multi-Granular Controllability: $\mathcal{X}$-Scene empowers users to guide generation with varying levels of detail, accommodating both fine-grained layouts for precise control and high-level text prompts for efficient scene customization. To enhance the expressiveness of text-based control, textual prompts are initially enriched by LLMs to form detailed scene narratives. These narratives then inform a text-driven layout generation module that automatically establishes spatial arrangements, guiding subsequent scene synthesis. This dual-control paradigm effectively supports users requiring meticulous, layout-based precision alongside those preferring rapid, prompt-driven customization, thereby broadening accessibility. 2) Geometrical and Visual Fidelity: $\mathcal{X}$-Scene achieves high fidelity by employing a unified pipeline that sequentially generates 3D semantic occupancy and the corresponding multi-view images. This process ensures both structural accuracy in the 3D geometry and photorealistic visual appearance, promoting inherent consistency and robust alignment between the geometric (occupancy) and visual (image) modalities.
3) Consistent Large-Scale Extrapolation: To enable the creation of expansive environments, $\mathcal{X}$-Scene progressively extrapolates new scene content conditioned on adjacent, previously synthesized regions. This consistency-aware outpainting mechanism meticulously preserves spatial continuity, facilitating the seamless and coherent extension of the 3D driving scene well beyond a single local area. Moreover, to support a diverse array of downstream applications, including realistic driving simulations and immersive free-roam exploration within the generated environments, we further process the synthesized semantic occupancy and multi-view images. Specifically, we reconstruct them into 3D Gaussian Splatting (3DGS) [30] representations, a technique adept at faithfully preserving both intricate geometric structures and high-fidelity visual appearance. By unifying these capabilities, $\mathcal{X}$-Scene advances the state of the art in large-scale, high-fidelity, and controllable driving scene synthesis, empowering data generation and simulation for autonomous driving. The main contributions of our work are summarized as follows:
• We propose $\mathcal{X}$-Scene, a novel framework for large-scale 3D driving scene generation with multi-granular controllability, geometrical and visual fidelity, and consistent large-scale extrapolation, supporting a wide range of downstream applications.
• We design a flexible multi-granular control mechanism that synergistically combines high-level semantic guidance (LLM-enriched text prompts) with low-level geometric specifications (user-provided or text-driven layouts), enabling scene creation tailored to diverse user needs.
• We present a unified generation and extrapolation pipeline that ensures robust geometric fidelity and photorealistic visual appearance, while also achieving seamless large-scale scene expansion by maintaining spatial and semantic coherence across extrapolated regions.
• Extensive experiments show $\chi$ -Scene achieves superior performance in generation quality and controllability, enabling diverse applications from data augmentation to driving simulation. # 2 Related Works # 2.1 Driving Image and Video Generation Diffusion models [31–34] have revolutionized image generation by iteratively refining Gaussian noise into high-quality images. Building on this technique, they have significantly advanced autonomous driving by enabling image and video generation for a wide range of downstream applications. For example, several methods synthesize realistic driving images [35, 36, 1] or videos [2–14] from 3D box or layout conditions to support perception tasks through data augmentation. Other approaches [37, 38] focus on generating rare yet critical driving events, such as lane changes or vehicle cut-ins, to enhance planning tasks with corner-case scenarios. In addition, some works train diffusion models as world models that predict future driving videos for end-to-end planning [19–21] or closed-loop simulation [22–27]. While existing work predominantly focuses on temporal consistency generation, our work explores the complementary dimension of spatial coherence for large-scale scene generation. Figure 1: Pipeline of $\mathcal { X }$ -Scene for scalable driving scene generation: (a) Multi-granular controllability supports both high-level text prompts and low-level geometric constraints for flexible specification; (b) Joint occupancy-image generation synthesizes aligned 3D voxels and multi-view images via conditional diffusion; (c) Large-scale extrapolation enables coherent scene expansion through consistency-aware outpainting (Fig. 3). Fig. 2 details the scene-graph to layout diffusion. # 2.2 3D and 4D Driving Scene Generation Recent advances extend beyond 2D generation to 3D/4D scene synthesis for autonomous driving. 
These methods generate 3D scenes using various representations, such as LiDAR point clouds [39– 44], occupancy volumes [45, 46, 28, 47–50], or 3D Gaussian Splatting (3DGS) [51–53, 38, 54–56], serving as neural simulators for data synthesis and driving simulation. The field has further evolved in two key directions. First, as 3D world models that predict future scene representations—such as point clouds [57–59] or occupancy maps [60–64]—to support planning and pretraining. Second, as multi-modal generators that synthesize aligned cross-modal data, such as image-LiDAR [15, 16] or image-occupancy pairs [17, 18, 24]. In this work, we explore joint occupancy-and-image generation to construct scenes that combine intricate geometry with realistic appearance. # 2.3 Large-Scale Scene Generation Prior work on large-scale city generation has evolved into four main approaches: video-based methods [65, 66], outpainting-based techniques [67–69], PCG-based systems [70–72], and neural-based frameworks [73–75]. While effective at generating natural environments or urban buildings, these methods are not optimized for driving scenarios that require precise street layouts and dynamic agent arrangements. In addition, existing driving-specific solutions face notable limitations. XCube [49] and SemCity [28] generate only geometric occupancy without appearance modeling, while DrivingSphere [24], UniScene [18], and InfiniCube [29] rely on manually defined large-scale layouts, hindering practicality. In contrast, our $\chi$ -Scene framework supports joint geometry and appearance generation with flexible, text-based control, enabling more efficient and user-friendly customization. # 3 Methodology $\chi$ -Scene strives to generate large-scale 3D driving scenes through a unified framework that addresses controllability, fidelity, and scalability. As illustrated in Fig. 
1, $\mathcal{X}$-Scene comprises three key components. First, the Multi-Granular Controllability module (Sec. 3.1) supports both high-level user intent and low-level geometric conditions, enabling flexible scene specification. Next, the Joint Occupancy and Image Generation module (Sec. 3.2) leverages conditioned diffusion models to synthesize 3D voxel occupancy and multi-view images, ensuring structural accuracy and photorealistic appearance. Finally, the Large-Scale Scene Extrapolation and Reconstruction module (Sec. 3.3) coherently extends scenes through consistency-aware outpainting and lifts the generated content into 3DGS representations, facilitating downstream simulation and exploration.

Figure 2: (a) Textual Description Enrichment; (b) Textual Scene-Graph to Layout Generation.

# 3.1 Multi-Granular Controllability

$\mathcal{X}$-Scene supports dual-mode scene control through: 1) high-level textual prompts, which are enriched by LLMs and converted into structured layouts via a text-to-layout generation model (illustrated in Fig.
2); and 2) direct low-level geometric control for precise spatial specification. This hybrid approach enables both intuitive creative expression and exacting scene customization. Text Description Enrichment. Given a coarse user-provided textual prompt $\mathcal{T}_{\mathcal{P}}$, we first enrich it into a comprehensive scene description $\mathcal{D} = \{S, \mathcal{O}, B, \mathcal{L}\}$, comprising: scene style $S$ (weather, lighting, environment), foreground objects $\mathcal{O}$ (semantics, spatial attributes, and appearance), background elements $B$ (semantics and visual characteristics), and a textual scene-graph layout $\mathcal{L}$ representing spatial relationships among scene entities. The structured description $\mathcal{D}$ is generated as: $$ \mathcal{D} = \mathcal{G}_{\mathrm{description}}\left(\mathcal{T}_{\mathcal{P}}, \operatorname{RAG}(\mathcal{T}_{\mathcal{P}}, \mathcal{M})\right) $$ where $\mathcal{M} = \{m_i\}_{i=1}^{N}$ denotes the scene description memory. Each entry $m_i$ is automatically constructed from one of the $N$ collected scene datasets by: 1) extracting $\{S, \mathcal{O}, B\}$ using VLMs on scene images; and 2) converting spatial annotations (object boxes and road lanes) into the textual scene-graph layout $\mathcal{L}$. As shown in Fig. 2, the Retrieval-Augmented Generation (RAG) module retrieves descriptions similar to $\mathcal{T}_{\mathcal{P}}$ from the memory bank $\mathcal{M}$, which are then composed into a detailed, user-intended scene description by an LLM-based generator $\mathcal{G}_{\mathrm{description}}$. This pipeline leverages RAG for few-shot retrieval and composition when processing brief user prompts, enabling flexible and context-aware scene synthesis.
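The RAG step in the equation above retrieves memory entries similar to the user prompt before the LLM composes the final description; a minimal cosine-similarity sketch over precomputed embeddings (the embedding model itself is abstracted away, and the toy entries are ours):

```python
import numpy as np

def rag_retrieve(prompt_emb, memory_embs, memory_entries, k=2):
    """Return the k memory-bank entries most similar to the prompt embedding."""
    sims = memory_embs @ prompt_emb / (
        np.linalg.norm(memory_embs, axis=1) * np.linalg.norm(prompt_emb) + 1e-8)
    top = np.argsort(-sims)[:k]          # indices of the k highest similarities
    return [memory_entries[i] for i in top]

memory_entries = ["sunny urban street", "rainy highway", "night intersection"]
memory_embs = np.array([[1.0, 0.1], [0.0, 1.0], [0.7, 0.7]])
hits = rag_retrieve(np.array([1.0, 0.0]), memory_embs, memory_entries, k=2)
```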
The memory bank $\mathcal{M}$ is designed to be extensible, allowing seamless integration of new datasets to support a broader variety of scene styles. Additional examples of generated scene descriptions are provided in the appendix. Textual Scene-Graph to Layout Generation. Given the textual layout $\mathcal{L}$, we transform it into a detailed layout map through a scene-graph-to-layout generation pipeline (see Fig. 2). First, we construct a scene graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where nodes $\mathcal{V} = \{v_i\}_{i=1}^{M}$ represent $M$ scene entities (e.g., cars, pedestrians, road lanes) and edges $\mathcal{E} = \{e_{ij} \mid i, j \in \{1, \dots, M\}\}$ represent spatial relations (e.g., front of, on top of). Each node and edge is then embedded by concatenating semantic features $s_i$, $s_{i \to j}$ (extracted using a text encoder $\mathcal{E}_{\mathrm{text}}$) with learnable geometric embeddings $g_i$, $g_{i \to j}$, resulting in node embeddings $\mathbf{v}_i = \mathrm{Concat}(s_i, g_i)$ and edge embeddings $\mathbf{e}_{ij} = \mathrm{Concat}(s_{ij}, g_{ij})$. The graph embeddings are refined using a graph convolutional network, which propagates contextual information $\mathbf{e}_{ij}$ across the graph and updates each node embedding $\mathbf{v}_i$ via neighborhood aggregation. Finally, layout generation is formulated as a conditional diffusion process: each object layout is initialized as a noisy 7-D vector $b_i \in \mathbb{R}^7$ (representing box center, dimensions, and orientation), while each road lane begins as a set of $N$ noisy 2D points $p_i \in \mathbb{R}^{N \times 2}$, and the denoising process is conditioned on the corresponding node embeddings $\mathbf{v}_i$ to produce geometrically coherent placements. Low-Level Conditional Encoding.
We encode fine-grained conditions (such as user-provided or model-generated layout maps and 3D bounding boxes) into embeddings to enable precise geometric control. As illustrated in Fig. 1, the 2D layout maps are processed by a ConvNet ($\mathcal{E}_{\mathrm{layout}}$) to extract layout embeddings $\mathbf{e}_{\mathrm{layout}}$, while 3D box embeddings $\mathbf{e}_{\mathrm{box}}$ are obtained via MLPs ($\mathcal{E}_{\mathrm{box}}$), which fuse object class and spatial coordinate features. To further enhance geometric alignment, we project both the scene layout and 3D boxes into the camera view to generate perspective maps, which are encoded by another ConvNet ($\mathcal{E}_{\mathrm{persp.}}$) to capture spatial constraints from the image plane. Additionally, high-level scene descriptions $\mathcal{D}$ are embedded via a T5 encoder ($\mathcal{E}_{\mathrm{text}}$), providing rich semantic cues for controllable generation through the resulting text embeddings $\mathbf{e}_{\mathrm{text}}$.

# 3.2 Joint Occupancy and Image Generation

Inspired by [18], we adopt a joint 3D-to-2D generation hierarchy that first models 3D geometry via occupancy diffusion, followed by photorealistic image synthesis guided by occupancy-rendered semantic and depth maps. This 3D-aware guidance ensures geometric consistency and visual realism. Occupancy Generation via Triplane Diffusion. We adopt a triplane representation [76] to encode 3D occupancy fields with high geometric fidelity. Given an occupancy volume $\mathbf{o} \in \mathbb{R}^{X \times Y \times Z}$, a triplane encoder compresses it into three orthogonal latent planes $\mathbf{h} = \{\mathbf{h}^{xy}, \mathbf{h}^{xz}, \mathbf{h}^{yz}\}$ with spatial downsampling.
To mitigate information loss due to reduced resolution, we propose a novel triplane deformable attention mechanism that aggregates richer features for a query point $\mathbf{q} = (x, y, z)$ as: $$ \mathbf{F_q}(x, y, z) = \sum_{\mathcal{P} \in \{xy, xz, yz\}} \sum_{k=1}^{K} \sigma\big(\mathbf{W}_{\omega}^{\mathcal{P}} \cdot \mathbf{PE}(x, y, z)\big)_k \cdot \mathbf{h}^{\mathcal{P}}\left(\operatorname{proj}_{\mathcal{P}}(x, y, z) + \Delta p_k^{\mathcal{P}}\right) $$ where $K$ is the number of sampling points, $\mathrm{PE}(\cdot): \mathbb{R}^3 \to \mathbb{R}^D$ denotes positional encoding, and $\mathbf{W}_{\omega}^{\mathcal{P}} \in \mathbb{R}^{K \times D}$ generates attention weights with the softmax function $\sigma(\cdot)$. The projection function $\operatorname{proj}_{\mathcal{P}}$ maps 3D coordinates to 2D planes (e.g., $\operatorname{proj}_{xy}(x, y, z) = (x, y)$), and the learnable offset $\Delta p_k^{\mathcal{P}} = \mathbf{W}_o^{\mathcal{P}}[k] \cdot \mathrm{PE}(x, y, z) \in \mathbb{R}^2$ uses weights $\mathbf{W}_o^{\mathcal{P}} \in \mathbb{R}^{2 \times D}$ to shift sampling positions for better feature alignment. The triplane-VAE decoder then reconstructs the 3D occupancy field from the aggregated features $\mathbf{F_q}$. Building on the latent triplane representation $\mathbf{h}$, we introduce a conditional diffusion model $\epsilon_{\theta}^{occ}$ that synthesizes novel triplanes through iterative denoising.
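A simplified numerical version of the triplane deformable attention above, with an identity positional encoding ($D = 3$) and nearest-neighbor sampling standing in for bilinear interpolation (both simplifications are ours):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def triplane_attention(planes, q, w_att, w_off, K=4):
    """Aggregate features for a query q = (x, y, z) from three latent planes.

    For each plane P: softmax attention weights come from PE(q), K learned
    offsets shift the projected sampling location, and the sampled plane
    features are accumulated with their weights.
    """
    x, y, z = q
    pe = np.array([x, y, z], dtype=float)            # toy PE with D = 3
    proj = {"xy": (x, y), "xz": (x, z), "yz": (y, z)}
    feat = np.zeros(planes["xy"].shape[-1])
    for p, (u, v) in proj.items():
        w = softmax(w_att[p] @ pe)                   # (K,) attention weights
        for k in range(K):
            du, dv = w_off[p][k] @ pe                # learnable 2-D offset
            ui = int(np.clip(round(u + du), 0, planes[p].shape[0] - 1))
            vi = int(np.clip(round(v + dv), 0, planes[p].shape[1] - 1))
            feat += w[k] * planes[p][ui, vi]
    return feat

rng = np.random.default_rng(0)
C, K = 8, 4
planes = {p: rng.standard_normal((16, 16, C)) for p in ("xy", "xz", "yz")}
w_att = {p: rng.standard_normal((K, 3)) for p in planes}
w_off = {p: rng.standard_normal((K, 2, 3)) for p in planes}
f_q = triplane_attention(planes, (5, 7, 3), w_att, w_off, K=K)
```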
At each timestep $t$, the model refines a noisy triplane $\mathbf{h}_t$ toward the clean target $\mathbf{h}_0$ using two complementary conditioning strategies: 1) additive spatial conditioning with the layout embedding $\mathbf{e}_{\mathrm{layout}}$; and 2) cross-attention-based conditioning with $\mathcal{C} = \mathrm{Concat}(\mathbf{e}_{\mathrm{box}}, \mathbf{e}_{\mathrm{text}})$, integrating geometric and semantic constraints. The model is trained to predict the added noise $\epsilon$ using the denoising objective: $$ \mathcal{L}_{\mathrm{diff}}^{occ} = \mathbb{E}_{t, \mathbf{h}_0, \epsilon}\left[\|\epsilon - \epsilon_{\theta}^{occ}(\mathbf{h}_t, t, \mathbf{e}_{\mathrm{layout}}, \mathcal{C})\|_2^2\right] $$ Image Generation with 3D Geometry Guidance. After obtaining the 3D occupancy, we convert voxels into 3D Gaussian primitives parameterized by voxel coordinates, semantics, and opacity, which are rendered into semantic and depth maps via tile-based rasterization [30]. To further incorporate object-level geometry, we generate normalized 3D coordinates for the entire scene and use object bounding boxes to extract the relevant coordinates, which are encoded into object positional embeddings $\mathbf{e}_{\mathrm{pos}}$ to provide fine-grained geometric guidance. The semantic, depth, and perspective maps are processed by ConvNets and fused with $\mathbf{e}_{\mathrm{pos}}$ to form the final geometric embedding $\mathbf{e}_{\mathrm{geo}}$. This embedding is then combined with noisy image latents to enable pixel-aligned geometric guidance. The image diffusion model $\epsilon_{\theta}^{\mathrm{img}}$ further leverages cross-attention with conditions $\mathcal{C}$ (text, camera, and box embeddings) for appearance control.
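One Monte-Carlo sample of the denoising objective $\mathcal{L}_{\mathrm{diff}}^{occ}$ above can be computed as below; `eps_model` is a dummy stand-in for the conditional denoiser $\epsilon_{\theta}^{occ}$, and the shapes and schedule are toy values:

```python
import numpy as np

def diffusion_loss_sample(h0, t, alphas_bar, eps_model, cond, rng):
    """|| eps - eps_theta(h_t, t, cond) ||^2 for one sampled timestep and noise."""
    eps = rng.standard_normal(h0.shape)
    # Forward diffusion: noise the clean latent to level t.
    h_t = np.sqrt(alphas_bar[t]) * h0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    pred = eps_model(h_t, t, cond)
    return float(np.mean((eps - pred) ** 2))

rng = np.random.default_rng(0)
alphas_bar = np.linspace(0.999, 0.01, 100)          # toy noise schedule
h0 = rng.standard_normal((3, 8, 8))                 # toy triplane latent
dummy = lambda h_t, t, cond: np.zeros_like(h_t)     # placeholder denoiser
loss = diffusion_loss_sample(h0, 50, alphas_bar, dummy, None, rng)
```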
The model is trained via: $\mathcal{L}_{\mathrm{diff}}^{\mathrm{img}} = \mathbb{E}_{t, \mathbf{x}_0, \epsilon}\left[\|\epsilon - \epsilon_{\theta}^{\mathrm{img}}(\mathbf{x}_t, t, \mathbf{e}_{\mathrm{geo}}, \mathcal{C})\|_2^2\right]$.

# 3.3 Large-Scale Scene Extrapolation and Reconstruction

Building on our single-chunk scene generation, we propose a progressive extrapolation approach that coherently expands occupancy and images across multiple chunks, and reconstructs them into an amodal 3DGS with integrated geometry and appearance for versatile downstream applications. Geometry-Consistent Scene Outpainting. We extend the occupancy field via triplane extrapolation [77], which decomposes the task into extrapolating three orthogonal 2D planes, as illustrated in Fig. 3. The core idea is to generate a new latent plane $\mathbf{h}_0^{\mathrm{new}}$ by synchronizing its denoising process with the forward diffusion of a known reference plane $\mathbf{h}_0^{\mathrm{ref}}$, guided by an overlap mask $\mathbf{M}$. Specifically, at each denoising step $t$, the new latent is updated as: $$ \mathbf{h}_{t-1}^{\mathrm{new}} = (\sqrt{\bar{\alpha}_t}\,\mathbf{h}_0^{\mathrm{ref}} + \sqrt{1 - \bar{\alpha}_t}\,\epsilon) \odot \mathbf{M} + \epsilon_{\theta}^{occ}(\mathbf{h}_t^{\mathrm{new}}, t) \odot (1 - \mathbf{M}) $$ where $\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ and $\bar{\alpha}_t$ is determined by the noise scheduler at timestep $t$. This method allows the new latent to preserve structural consistency in the overlapped region while plausibly extending the reference content into unseen areas, resulting in coherent and geometry-consistent scene extensions.
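The masked outpainting update above can be sketched directly; with $\bar{\alpha}_t = 1$ (no added noise) the overlap region reproduces the reference latent exactly, which makes the behavior easy to check. The denoiser here is a placeholder, not the trained $\epsilon_{\theta}^{occ}$:

```python
import numpy as np

def outpaint_step(h_t_new, h0_ref, mask, t, alphas_bar, eps_model, rng):
    """Consistency-aware outpainting step: forward-diffused reference in the
    overlap (mask == 1), model-generated content elsewhere (mask == 0)."""
    eps = rng.standard_normal(h0_ref.shape)
    known = np.sqrt(alphas_bar[t]) * h0_ref + np.sqrt(1.0 - alphas_bar[t]) * eps
    return known * mask + eps_model(h_t_new, t) * (1.0 - mask)

rng = np.random.default_rng(0)
h0_ref = np.ones((4, 4))                              # known reference plane
mask = np.zeros((4, 4)); mask[:, :2] = 1.0            # left half overlaps
alphas_bar = np.array([1.0])                          # t = 0: no noise added
dummy = lambda h, t: np.full_like(h, 5.0)             # placeholder denoiser
h_prev = outpaint_step(rng.standard_normal((4, 4)), h0_ref, mask, 0,
                       alphas_bar, dummy, rng)
# Overlap columns keep the reference values; the rest come from the model.
```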
Visual-Coherent Image Extrapolation. Beyond occupancy outpainting, we further extrapolate the image field for synchronized appearance generation. To ensure visual coherence in the overlapped region between the reference image $\mathbf{x}_0^{\mathrm{ref}}$ and the new view $\mathbf{x}_0^{\mathrm{new}}$, a naive solution warps $\mathbf{x}_0^{\mathrm{ref}}$ using the camera pose $(R, T)$ and applies image inpainting (see Fig. 3). However, solely using the warped images as conditions is insufficient. To overcome this, we fine-tune the diffusion model $\epsilon_{\theta}^{\mathrm{img}}$ with explicit conditioning on $\mathbf{x}_0^{\mathrm{ref}}$ and camera embeddings $\mathbf{e}(R, T)$. Specifically, $\mathbf{x}_0^{\mathrm{ref}}$ is concatenated with the noisy novel image $\mathbf{x}_t^{\mathrm{new}}$, while $\mathbf{e}(R, T)$ is injected via cross-attention. This enables view-consistent extrapolation while retaining photorealistic generation.

Figure 3: Illustration of consistency-aware outpainting: (a) Occupancy triplane extrapolation is decomposed into the extrapolation of three 2D planes, guided by priors from overlapping regions; (b) Image extrapolation is performed via diffusion conditioned on images and camera parameters.

# 4 Experiments

# 4.1 Experimental Settings

We use Occ3D-nuScenes [78] to train our occupancy generation module and nuScenes [79] for the multi-view image generation module. Additional implementation details are provided in the appendix. Experimental Tasks and Metrics. We evaluate $\mathcal{X}$-Scene across three aspects using a range of metrics: 1) Occupancy Generation: We evaluate the reconstruction results of the VAE with IoU and mIoU metrics. For occupancy generation, following [50], we report both generative 3D and 2D metrics, including Inception Score, FID, KID, Precision, Recall, and F-Score.
2) Multi-view Image Generation: We evaluate the quality of the synthesized images using FID. 3) Downstream Tasks: We evaluate the sim-to-real gap by measuring performance on the generated scenes across downstream tasks, including semantic occupancy prediction (IoU and mIoU), 3D object detection (mAP and NDS), and BEV segmentation (mIoU for the road and vehicle classes).

# 4.2 Qualitative Results

Large-Scale Scene Generation. The upper part of Figure 4 presents the large-scale scene generation results. By iteratively applying consistency-aware outpainting, $\mathcal{X}$-Scene effectively extends local regions into coherent, large-scale driving scenes. Furthermore, the generated scenes can be reconstructed into 3D representations, enabling novel view synthesis and supporting downstream perception tasks. Please refer to the appendix for additional qualitative results.

Figure 4: Versatile generation capability of $\mathcal{X}$-Scene: (a) Generation of large-scale, consistent semantic occupancy and multi-view images, which are reconstructed into 3D scenes for novel view rendering; (b) User-prompted layout and scene generation, along with scene geometry editing.

Table 1: Comparisons of occupancy reconstruction of the VAE. The downsampled size is reported in terms of spatial dimensions (H, W) and feature dimension (C).

Table 2: Comparisons of 3D Occupancy Generation. We report Inception Score (IS), Fréchet Inception Distance (FID), Kernel Inception Distance (KID), Precision (P), Recall (R), and F-Score (F) in both the 2D and 3D domains. † denotes unconditioned generation, while other methods are evaluated using layout conditions. All methods are implemented using official codes and checkpoints.

User-Prompted Generation and Editing. The lower part of Figure 4 demonstrates the flexibility of $\mathcal{X}$-Scene in interactive scene generation, supporting both user-prompted generation and geometric editing.
Users can provide high-level prompts (e.g., "create a busy intersection"), which are processed to generate corresponding layouts and scene content. Furthermore, given an existing scene, users can specify editing intents (e.g., "remove the parked car") or adjust low-level geometric attributes. Our pipeline updates the scene graph accordingly and regenerates the scene through conditional diffusion.

# 4.3 Main Result Comparisons

Occupancy Reconstruction and Generation. Table 1 presents the comparative occupancy reconstruction results. The results show that $\mathcal{X}$-Scene achieves superior reconstruction performance, significantly outperforming prior approaches under similar compression settings (e.g., $+0.8\%$ mIoU and $+2.5\%$ IoU compared to UniScene [18]). This improvement is attributed to the enhanced capacity of our triplane representation to preserve geometric details while maintaining encoding efficiency. Table 2 presents the quantitative results for 3D occupancy generation. Following the protocol in [50], we report performance under two settings: (1) a label-mapped setting, where 11 classes are evaluated by merging similar categories (e.g., car, bus, truck) into a unified "vehicle" class, and (2) the full 17-class setting without label merging. Our approach consistently achieves the best performance across both 2D and 3D metrics.

Table 3: Comparisons of Multi-view Image Generation. We report FID and evaluate generation fidelity by performing BEV segmentation [81] and 3D object detection [82] tasks on the generated data from the validation set. Bold indicates the best, and underline denotes the second-best results.

Table 4: Comparisons of training support for semantic occupancy prediction (Baseline as CONet [83]).

Table 5: Comparison of training support for BEV segmentation (Baseline as CVT [81]) and 3D object detection (Baseline as StreamPETR [84] following the setup in [23, 5]).
Notably, in the 17-class setting without label mapping, we observe substantial improvements, with $\mathrm{FID}^{\mathrm{3D}}$ reduced by $51.2\%$ (258.8 vs. 529.6), highlighting our method's capacity for fine-grained category distinction. Additionally, our method demonstrates strong precision and recall, reflecting its ability to generate diverse yet semantically consistent occupancy.

Image Generation Fidelity. Table 3 presents the results of multi-view image generation, including FID scores and downstream task evaluations. Notably, $\mathcal{X}$-Scene supports high-resolution image generation with competitive fidelity, which is crucial for downstream tasks like 3D reconstruction. The results show that $\mathcal{X}$-Scene achieves the best FID, with a $4.91\%$ improvement over the baseline [1], indicating superior visual realism. Moreover, $\mathcal{X}$-Scene consistently outperforms other methods in BEV segmentation and 3D object detection as resolution increases. For BEV segmentation in particular, performance on generated scenes at $448 \times 800$ resolution closely matches that on real data, showcasing $\mathcal{X}$-Scene's strong conditional generation aligned with downstream visual applications.

Downstream Tasks Evaluation. We evaluate the effectiveness of generated scene data in supporting downstream model training. Table 4 presents results for 3D semantic occupancy prediction. Fine-tuning with our generated 3D occupancy grids significantly improves baseline performance ($+4.9\%$ IoU, $+6.8\%$ mIoU), as the generated high-resolution grids provide reliable spatial structures that facilitate refinement. Furthermore, combining 2D and 3D modalities yields the best performance, underscoring the effectiveness of our multimodal alignment. Table 5 presents the results for 3D object detection and BEV segmentation tasks.
Our method achieves the best performance among all synthetic data sources, demonstrating the higher fidelity and structural consistency of the generated views. These results highlight the potential of our synthesized images to enhance perception models.

Qualitative Comparisons. Figure 5 presents a comparison of joint voxel-and-image generation. The results show that $\mathcal{X}$-Scene not only produces more realistic images but also achieves superior cross-modal consistency, ensuring better alignment between 3D structures and 2D appearances.

Figure 5: Qualitative comparison of joint voxel-and-image generation. Our method achieves superior consistency between generated 3D occupancy and 2D images compared to UniScene [18].

Table 6: Ablation study for designs in the occupancy generation model.

Table 7: Ablation study for designs in the multi-view image generation model.

# 4.4 Ablation Study

Effects of Designs in Occupancy Generation. As shown in Table 6, the proposed triplane deformable attention module improves performance, particularly at lower resolutions. Under the (50, 50, 16) resolution setting, incorporating deformable attention leads to gains of $+1.9\%$ in IoU and $+2.4\%$ in mIoU, demonstrating its effectiveness in mitigating feature degradation caused by downsampling. We further analyze the impact of different conditioning inputs. Removing either the additive layout condition or the box condition results in reduced generation quality, underscoring their importance in providing fine-grained geometric cues necessary for accurate occupancy field generation.

Effects of Designs in Image Generation. Table 7 presents the ablation results for various conditioning components in the image generation model. Removing the semantic or depth maps that are rendered from 3D occupancy significantly degrades FID and downstream performance, highlighting their importance in providing dense geometric and semantic cues.
Excluding the perspective map, which encodes projected 3D boxes and lanes, also reduces downstream performance (with mAP dropping by $2.97\%$), underscoring its role in conveying explicit layout priors. The 3D positional embedding is particularly critical for object detection, as it enhances localization and spatial representation. Finally, removing the text description degrades generation fidelity (FID worsening by $1.31\%$), showing that rich linguistic context aids fine-grained appearance modeling and scene understanding.
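The IoU and mIoU metrics used throughout these evaluations can be sketched in a few lines. This is a minimal numpy version: the geometric IoU treats occupancy as binary (occupied vs. free), while mIoU averages per-class IoU; the free-space label `0` and the tiny grids are illustrative assumptions, not the Occ3D-nuScenes label map.

```python
import numpy as np

def iou(pred, gt, free_label=0):
    """Binary geometric IoU: occupied vs. free voxels."""
    p, g = pred != free_label, gt != free_label
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return inter / union if union else 0.0

def miou(pred, gt, num_classes, free_label=0):
    """Mean IoU over semantic classes, ignoring the free-space label."""
    ious = []
    for c in range(num_classes):
        if c == free_label:
            continue
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent in both grids: excluded from the mean
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious)) if ious else 0.0

pred = np.array([[0, 1], [2, 1]])  # toy 2x2 "grid" of semantic labels
gt   = np.array([[0, 1], [1, 1]])
print(iou(pred, gt))                    # 1.0 (occupancy masks coincide)
print(miou(pred, gt, num_classes=3))    # (2/3 + 0) / 2 = 1/3
```

Note how a prediction can score a perfect geometric IoU while its mIoU stays low: every occupied voxel is found, but one of them carries the wrong semantic class.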
Diffusion models are advancing autonomous driving by enabling realistic data synthesis, predictive end-to-end planning, and closed-loop simulation, with a primary focus on temporally consistent generation. However, the generation of large-scale 3D scenes that require spatial coherence remains underexplored. In this paper, we propose X-Scene, a novel framework for large-scale driving scene generation that achieves both geometric intricacy and appearance fidelity, while offering flexible controllability. Specifically, X-Scene supports multi-granular control, including low-level conditions such as user-provided or text-driven layout for detailed scene composition and high-level semantic guidance such as user-intent and LLM-enriched text prompts for efficient customization. To enhance geometrical and visual fidelity, we introduce a unified pipeline that sequentially generates 3D semantic occupancy and the corresponding multiview images, while ensuring alignment between modalities. Additionally, we extend the generated local region into a large-scale scene through consistency-aware scene outpainting, which extrapolates new occupancy and images conditioned on the previously generated area, enhancing spatial continuity and preserving visual coherence. The resulting scenes are lifted into high-quality 3DGS representations, supporting diverse applications such as scene exploration. Comprehensive experiments demonstrate that X-Scene significantly advances controllability and fidelity for large-scale driving scene generation, empowering data generation and simulation for autonomous driving.
# 1 INTRODUCTION

Index tuning aims to find the optimal index configuration (i.e., a set of indexes) for an input workload of SQL queries. It is often a time-consuming and resource-intensive process for large and complex workloads in practice. From the user's perspective, it is therefore desirable to constrain the index tuner/advisor by limiting its execution time and resources, with the compromise that the goal of index tuning shifts to seeking the best configuration within the given time and resource constraints. Indeed, commercial index tuners, such as the Database Tuning Advisor (DTA) developed for Microsoft SQL Server, have been offering a timeout option that allows users to explicitly control the execution time of index tuning to prevent it from running indefinitely [1, 7].

Fig. 1. The architecture of budget-aware index tuning with "Wii", i.e., what-if (call) interception, where $W$ represents the input workload, $q_i \in W$ represents an individual SQL query in the workload, $\Gamma$ represents a set of tuning constraints, and $B$ represents the budget on the number of what-if calls allowed. Moreover, $\{z_j\}$ represents the set of candidate indexes generated for $W$, and $C \subseteq \{z_j\}$ represents an index configuration proposed during configuration enumeration.

More recently, there has been a proposal of budget-aware index tuning that puts a budget constraint on the number of "what-if" (optimizer) calls [46], motivated by the observation that most of the time and resource in index tuning is spent on what-if calls [19, 26] made to the query optimizer during configuration enumeration (see Figure 1). A what-if call takes as input a query-configuration pair (QCP) and returns the estimated cost of the query by utilizing the indexes in the configuration.
It is the same as a regular query optimizer call except that it also takes hypothetical indexes, i.e., indexes that are proposed by the index tuner but have not been materialized, into consideration [9, 40]. There can be thousands or even millions of potential what-if calls when tuning large and complex workloads [36]. Therefore, it is not feasible to make a what-if call for every QCP encountered in configuration enumeration/search. As a result, one key problem in budget-aware index tuning is budget allocation, where one needs to determine which QCP's to make what-if calls for so that the index tuner can find the best index configuration. Unfortunately, optimal budget allocation is NP-hard [6, 11, 46]. Existing budget-aware configuration search algorithms [46] range from adaptations of the classic greedy search algorithm [8] to more sophisticated approaches with Monte Carlo tree search (MCTS) [18], which allocate budget by leveraging various heuristics. For example, the greedy-search variants adopt a simple "first come first serve" (FCFS) strategy where what-if calls are allocated on demand, and the MCTS-based approach considers the rewards observed in previous budget allocation steps to decide the next allocation step. These budget allocation strategies can be inferior. In particular, we find in practice that many of the what-if calls made are unnecessary, as their corresponding what-if costs are close to the approximations given by a well-known technique called cost derivation [8]. Compared to making a what-if call, cost derivation is computationally much more efficient and has been integrated into commercial index tuning software such as DTA [1, 7]. In the rest of this paper, we refer to the approximation given by cost derivation as the derived cost. Figure 2 presents the distribution of the relative gap between what-if cost and derived cost when tuning the TPC-DS benchmark workload with 99 complex queries.
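The relative gap between the derived cost and the what-if cost that Figure 2 histograms is simple arithmetic; a small sketch, where the cost values of the toy QCPs and the $5\%$ cutoff used to flag "spurious" calls are illustrative:

```python
def relative_gap(derived_cost, what_if_cost):
    """Gap between the derived-cost approximation and the true what-if
    cost, as a percentage of the derived cost (the derived cost is an
    upper bound of the what-if cost under the monotonicity assumption)."""
    return (derived_cost - what_if_cost) * 100.0 / derived_cost

# Illustrative QCPs: (derived cost, what-if cost) pairs.
pairs = [(100.0, 97.0), (100.0, 60.0), (250.0, 245.0)]
gaps = [relative_gap(d, c) for d, c in pairs]
print(gaps)                          # [3.0, 40.0, 2.0]
spurious = [g < 5.0 for g in gaps]   # calls where derivation would suffice
print(spurious)                      # [True, False, True]
```

The catch, as discussed next, is that computing this gap exactly requires the what-if cost itself, which is only known after the call is made.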
We observe that $80\%$ to $90\%$ of the what-if calls were made for QCP's with relative gap below $5\%$, for two state-of-the-art budget-aware configuration search algorithms, two-phase greedy and MCTS (Section 2.2). If we know that the derived cost is indeed a good approximation, we can avoid such a spurious what-if call. The challenge, however, is that we need to learn this fact before the what-if call is made.

Fig. 2. Distribution of the relative gap between what-if cost and derived cost when tuning TPC-DS under a budget of 5,000 what-if calls. Here the relative gap is defined as $\frac{\text{derived cost} - \text{what-if cost}}{\text{derived cost}} \times 100\%$, as the derived cost is an upper bound of the what-if cost under the monotonicity assumption.

The best knowledge we have so far is that, under a mild assumption on the monotonicity of the query optimizer's cost function (i.e., a larger configuration with more indexes should not increase the query execution cost), the derived cost acts as an upper bound of the what-if cost (Section 2.2.2). However, the what-if cost can still lie anywhere between zero and the derived cost. In this paper, we take one step further by proposing a generic framework that develops a lower bound for the what-if cost. The gap between the lower bound and the upper bound (i.e., the derived cost) therefore measures the closeness between the what-if cost and the derived cost. As a result, it is safe to avoid a what-if call when this gap is small and use the derived cost as a surrogate. Albeit a natural idea, there are a couple of key requirements to make it relevant in practice. First, the lower bound needs to be nontrivial, i.e., it needs to be as close to the what-if cost as possible—an example of a trivial but perhaps useless lower bound would be always setting it to zero. Second, the lower bound needs to be computationally efficient compared to making a what-if call.
Third, the lower bound needs to be integratable with existing budget-aware configuration enumeration algorithms. In this paper, we address these requirements as follows.

Nontriviality. We develop a lower bound that depends only on common properties of the cost functions used by the query optimizer, such as monotonicity and submodularity, which have been widely assumed by previous work [10, 15, 22, 31, 44] and independently verified in our own experiments [41]. In a nutshell, it looks into the marginal cost improvement (MCI) that each individual index in the given configuration can achieve, and then establishes an upper bound on the cost improvement (and therefore a lower bound on the what-if cost) of the given configuration by summing up the upper bounds on the MCI's of individual indexes (Section 3.1). We further propose optimization techniques to refine the lower bound for budget-aware greedy search algorithms (Section 4.1) and MCTS-based algorithms (Section 4.2).

Efficiency. We demonstrate that the computation time of our lower bound is orders of magnitude less than that of a what-if call, though it is in general more expensive than computing the upper bound, i.e., the derived cost (Section 6.4). For example, as shown in Figure 16(b), when running the MCTS configuration enumeration algorithm on top of the TPC-DS benchmark, on average it takes $0.02$ ms and $0.04$ ms to compute the derived cost and our lower bound, respectively; in contrast, the average time of making a what-if call to the query optimizer is around $800$ ms.

Integratability. We demonstrate that our lower bound can be seamlessly integrated with existing budget-aware index tuning algorithms (Section 5). From a software engineering perspective, the integration is non-intrusive, meaning that there is no need to change the architecture of the current cost-based index tuning software stack.
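The lower bound described in the Nontriviality paragraph can be sketched in a few lines: subtract the summed per-index MCI upper bounds from the no-index cost (Section 3.1 develops this formally). The clamp at zero and the toy cost numbers are our own additions for illustration.

```python
def lower_bound(cost_empty, mci_upper_bounds, config):
    """L(q, C) = c(q, {}) - sum of per-index MCI upper bounds u(q, z).
    Clamped at zero, since what-if costs are non-negative (an assumption
    we add here; the derivation itself does not need it)."""
    total = sum(mci_upper_bounds[z] for z in config)
    return max(0.0, cost_empty - total)

# Toy query: cost 100 with no indexes; per-index MCI upper bounds u(q, z).
u = {"z1": 10.0, "z2": 25.0, "z3": 5.0}
print(lower_bound(100.0, u, {"z1", "z2"}))        # 100 - 35 = 65.0
print(lower_bound(100.0, u, {"z1", "z2", "z3"}))  # 100 - 40 = 60.0
```

The tighter the individual bounds $u(q, z)$, the closer $L(q, C)$ sits to the true what-if cost, which is exactly what the refinements in Section 4 aim for.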
As illustrated in Figure 1, we encapsulate the lower-bound computation inside a component called "Wii," which is shorthand for "what-if (call) interception." During configuration enumeration, Wii intercepts every what-if call made to the query optimizer, computes the lower bound of the what-if cost, and then checks the closeness between the lower bound and the derived cost (i.e., the upper bound) with a confidence-based mechanism (Section 3.3). If Wii is confident enough, it skips the what-if call and instead sends the derived cost back to the configuration enumerator. More importantly, we demonstrate the efficacy of Wii in terms of (1) the number of what-if calls it allows to skip (Section 6.3) and (2) the end-to-end improvement on the final index configuration found (Section 6.2). The latter is perhaps the most valuable benefit of Wii in practice, and we show that, by reallocating the saved budget to what-if calls where Wii is less confident, it can yield significant improvements on both standard industrial benchmarks and real customer workloads (Section 6.2). For example, as showcased in Figure 6(f), with 5,000 what-if calls as budget and 20 as the maximum configuration size allowed, on TPC-DS Wii improves the baseline two-phase greedy configuration enumeration algorithm by increasing the percentage improvement of the final configuration found from $50\%$ to $65\%$; this is achieved by skipping around 18,000 unnecessary what-if calls, as shown in Figure 14(b). Last but not least, while we focus on budget-aware index tuning in this paper, Wii can also be used in a special situation where one does not enforce a budget on the index tuner, namely, the tuner has unlimited budget on the number of what-if calls. This special situation may make sense if, for example, one has a relatively small workload. Wii plays a different role here.
Since there is no budget constraint, Wii cannot improve the quality of the final configuration found, as the best quality can anyway be achieved by keeping on issuing what-if calls to the query optimizer. Instead, by skipping spurious what-if calls, Wii can significantly improve the overall efficiency of index tuning. For example, without a budget constraint, when tuning the standard TPC-H benchmark with 22 queries, Wii can reduce index tuning time by $4\times$ while achieving the same quality on the best configuration found (Section 6.8).

# 2 PRELIMINARIES

In this section, we present a brief overview of the budget-aware index configuration search problem.

# 2.1 Cost-based Index Tuning

As Figure 1 shows, cost-based index tuning consists of two stages:

• Candidate index generation. We generate a set of candidate indexes for each query in the workload based on the indexable columns [8]. Indexable columns are those that appear in the selection, join, group-by, and order-by expressions of a SQL query, which are used as key columns for fast seek-based index look-ups. We then take the union of the candidate indexes from individual queries as the candidate indexes for the entire workload.

• Configuration enumeration. We search for a subset (i.e., a configuration) of the candidate indexes that can minimize the what-if cost of the workload, with respect to constraints such as the maximum number of indexes allowed or the total amount of storage taken by the index configuration.

Index tuning is time-consuming and resource-intensive, due to the large number of what-if calls issued to the query optimizer during configuration enumeration/search. Therefore, previous work proposes putting a budget on the number of what-if calls that can be issued during configuration search [46]. We next present this budget-aware configuration search problem in more detail.
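The candidate index generation stage above can be sketched as follows. A real tuner parses the SQL text to find indexable columns, so the pre-tagged queries and the single-column candidates used here are simplifying assumptions for illustration.

```python
# Toy candidate index generation: collect indexable columns (selection /
# join / group-by / order-by columns) per query, then take the union over
# the workload. Each query is pre-tagged with its indexable columns.
workload = [
    {"query": "q1", "indexable": [("orders", "o_date"), ("orders", "o_custkey")]},
    {"query": "q2", "indexable": [("orders", "o_date"), ("lineitem", "l_partkey")]},
]

def candidate_indexes(workload):
    """Union of per-query candidates; here one single-column candidate
    index (table, key column) per indexable column."""
    candidates = set()
    for q in workload:
        candidates |= set(q["indexable"])
    return candidates

cands = candidate_indexes(workload)
print(sorted(cands))
# [('lineitem', 'l_partkey'), ('orders', 'o_custkey'), ('orders', 'o_date')]
```

Configuration enumeration then searches over subsets of this candidate set, which is where the what-if calls, and hence the budget, come into play.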
Fig. 3. (a) Vanilla greedy; (b) two-phase greedy.

# 2.2 Budget-aware Configuration Search

2.2.1 Problem Statement. Given an input workload $W$ with a set of candidate indexes $I$ [8], a set of constraints $\Gamma$, and a budget $B$ on the number of what-if calls allowed during configuration enumeration, our goal is to find a configuration $C^* \subseteq I$ whose what-if cost $c(W, C^*)$ is minimized under the constraints given by $\Gamma$ and $B$. In this paper, we focus on index tuning for data analytic workloads $W$ (e.g., the TPC-H and TPC-DS benchmark workloads). Although the constraints in $\Gamma$ can be arbitrary, we focus on the cardinality constraint $K$ that specifies the maximum configuration size (i.e., the number of indexes contained by the configuration) allowed. Moreover, under a limited budget $B$, it is often impossible to know the what-if cost of every query-configuration pair (QCP) encountered during configuration enumeration. Therefore, to estimate the costs for QCP's where what-if calls are not allocated, one has to rely on approximation of the what-if cost without invoking the query optimizer. One common approximation technique is cost derivation [7, 8], as we discuss below.

2.2.2 Cost Derivation. Given a QCP $(q, C)$, its derived cost $d(q, C)$ is the minimum cost over all subset configurations of $C$ with known what-if costs. Formally,

Definition 1 (Derived Cost). The derived cost of $q$ over $C$ is
$$ d(q, C) = \operatorname*{min}_{S \subseteq C} c(q, S). $$
Here, $c(q, S)$ is the what-if cost of $q$ using only the subset $S$ of indexes from the configuration $C$. We assume the following monotone property [15, 31] of index configuration costs with respect
to an arbitrary query $q$:

Assumption 1 (Monotonicity). Let $C_1$ and $C_2$ be two index configurations where $C_1 \subseteq C_2$. Then $c(q, C_2) \leq c(q, C_1)$.

That is, including more indexes into a configuration does not increase the what-if cost. Our validation results using Microsoft SQL Server show that monotonicity holds with probability between 0.95 and 0.99, on a variety of benchmark and real workloads (see [41] for details). Under Assumption 1, we have
$$ d(q, C) \geq c(q, C), $$
i.e., the derived cost is an upper bound $U(q, C)$ of the what-if cost:
$$ U(q, C) = d(q, C) = \operatorname*{min}_{S \subseteq C} c(q, S). $$

2.2.3 Existing Solutions. The budget-aware configuration search problem is NP-hard. At the core of this problem is budget allocation, namely, to decide on which QCP's to make what-if calls. Existing heuristic solutions to the problem include: (1) vanilla greedy, (2) two-phase greedy, (3) AutoAdmin greedy, and (4) MCTS. Since (2) and (3) are similar, we omit (3) in this paper.

Proc. ACM Manag. Data, Vol. 2, No. 3 (SIGMOD), Article 182. Publication date: June 2024.

Fig. 4. Example of budget allocation in MCTS.

Vanilla greedy. Figure 3(a) illustrates the vanilla greedy algorithm with an example of three candidate indexes $\{z_1, z_2, z_3\}$ and the cardinality constraint $K = 2$. Throughout this paper, we use $\varnothing$ to represent the existing configuration. Vanilla greedy works step-by-step, where each step adopts a greedy policy to choose the next index to be included that can minimize the workload cost on the chosen configuration. In this example, we have two greedy steps. The first step examines the three singleton configurations $\{z_1\}$, $\{z_2\}$, and $\{z_3\}$. Suppose that $\{z_2\}$ results in the lowest workload cost.
The second step tries to expand $\{z_2\}$ by adding one more index, which leads to two candidate configurations $\{z_1, z_2\}$ and $\{z_2, z_3\}$. Suppose that $\{z_1, z_2\}$ is better and therefore returned by vanilla greedy. Note that the configuration $\{z_1, z_3\}$ is never visited in this example. Vanilla greedy adopts a simple "first come first serve" (FCFS) budget allocation policy to make what-if calls.

Two-phase greedy. Figure 3(b) illustrates the two-phase greedy algorithm, which can be viewed as an optimization on top of vanilla greedy. Specifically, there are two phases of greedy search in two-phase greedy. In the first phase, we view each query as a workload by itself and run vanilla greedy on top of it to obtain the best configuration for that query. In this particular example, we have three queries $q_1$, $q_2$, and $q_3$ in the workload. After running vanilla greedy, we obtain their best configurations $C_1^*$, $C_2^*$, and $C_3^*$, respectively. In the second phase, we take the union of the best configurations found for individual queries and use that as the refined set of candidate indexes for the entire workload. We then run vanilla greedy again for the workload with this refined set of candidate indexes, as depicted in Figure 3(b) for the given example. Two-phase greedy has particular importance in practice as it has been adopted by commercial index tuning software such as Microsoft's Database Tuning Advisor (DTA) [1, 7]. Again, budget is allocated with the simple FCFS policy—the same as in vanilla greedy.

MCTS. Figure 4 illustrates the MCTS algorithm with the same example used in Figure 3. It is an iterative procedure that allocates one what-if call in each iteration until the budget runs out.
The decision procedure in each iteration on which query and which configuration to issue the what-if call is an application of the classic Monte Carlo tree search (MCTS) algorithm [3] in the context of index configuration search. It involves four basic steps: (1) selection, (2) expansion, (3) simulation, and (4) update. Due to space limitations, we refer the readers to [46] for the full details of this procedure. After all what-if calls are issued, we run vanilla greedy again without making extra what-if calls to find the best configuration. Our particular version of MCTS here employs an $\epsilon$-greedy policy [39] when selecting the next index to explore.

# 3 WHAT-IF CALL INTERCEPTION

We develop "Wii," which can skip spurious what-if calls whose what-if costs are close to their derived costs. One key idea is to develop a lower bound for the what-if cost: if the gap between the lower bound and the derived cost is small, then it is safe to skip the what-if call. In this section, we present the generic form of the lower bound, as well as a confidence-based framework used by Wii on top of the lower bound to skip spurious what-if calls. We defer the discussion on further optimizations of the lower bound to Section 4.

# 3.1 Lower Bound of What-if Cost

We use $L(q, C)$ to denote the lower bound of the what-if cost $c(q, C)$. In the following, we first introduce the notion of marginal cost improvement (MCI) of an index, which indicates the additional benefit of adding this index to a configuration for a query. We then establish $L(q, C)$ by leveraging the upper bounds of MCI.

Definition 2 (Marginal Cost Improvement). We define the marginal cost improvement (MCI) of an index $z$ with respect to a query $q$ and a configuration $X$ as
$$ \delta(q, z, X) = c(q, X) - c(q, X \cup \{z\}). $$

Definition 3 (Cost Improvement).
We define the cost improvement (CI) of a query $q$ given a configuration $X$ as
$$ \Delta(q, X) = c(q, \emptyset) - c(q, X). $$

We can express CI in terms of MCI. Specifically, consider a query $q$ and a configuration $C = \{z_1, \ldots, z_m\}$. The cost improvement $\Delta(q, C)$ can be seen as the sum of MCI's obtained by adding the indexes from $C$ one by one, namely,
$$ \Delta(q, C) = \Bigl( c(q, \emptyset) - c(q, \{z_1\}) \Bigr) + \Bigl( c(q, \{z_1\}) - c(q, \{z_1, z_2\}) \Bigr) + \cdots + \Bigl( c(q, \{z_1, \ldots, z_{m-1}\}) - c(q, C) \Bigr). $$

Let $C_0 = \varnothing$ and $C_j = C_{j-1} \cup \{z_j\}$ for $1 \leq j \leq m$. It follows that $C_m = C$ and therefore, $\Delta(q, C) = \sum_{j=1}^{m} \delta(q, z_j, C_{j-1})$. If we can have a configuration-independent upper bound $u(q, z_j)$ for $\delta(q, z_j, C_{j-1})$, namely, $u(q, z_j) \geq \delta(q, z_j, X)$ for any $X$, then
$$ \Delta(q, C) \leq \sum_{j=1}^{m} u(q, z_j). $$
As a result,
$$ c(q, \emptyset) - c(q, C) \leq \sum_{j=1}^{m} u(q, z_j), $$
and it follows that
$$ c(q, C) \geq c(q, \emptyset) - \sum_{j=1}^{m} u(q, z_j). $$
We therefore can set the lower bound $L(q, C)$ as
$$ L(q, C) = c(q, \emptyset) - \sum_{j=1}^{m} u(q, z_j). $$

Generalization. This idea can be further generalized if we know the what-if costs of configurations that are subsets of $C$. Specifically, let $S \subset C$ be a subset of $C$ with known what-if cost $c(q, S)$. Without loss of generality, let $C - S = \{z_1, \ldots, z_k\}$.
We have
$$ c(q, S) - c(q, C) = \sum_{i=1}^{k} \Bigl( c(q, C_{i-1}) - c(q, C_i) \Bigr) \leq \sum_{i=1}^{k} u(q, z_i), $$
where $C_0$ is now set to $S$. Therefore,
$$ c(q, C) \geq c(q, S) - \sum_{i=1}^{k} u(q, z_i). $$
Since $S$ is arbitrary, we conclude
$$ c(q, C) \geq \operatorname*{max}_{S \subset C} \Bigl( c(q, S) - \sum_{z \in C - S} u(q, z) \Bigr). $$
As a result, it is safe to set
$$ L(q, C) = \operatorname*{max}_{S \subset C} \Bigl( c(q, S) - \sum_{z \in C - S} u(q, z) \Bigr). $$
Since $\varnothing \subset C$, Equation 5 is a generalization of Equation 4.

# 3.2 Upper Bound of MCI

The main question is then to maintain an upper bound $u(q, z)$ for the MCI of each query $q$ and each individual index $z$ so that $u(q, z) \geq \delta(q, z, X)$ for any configuration $X$. Below we discuss several such upper bounds. Our basic idea is to leverage the CI's of explored configurations that contain $z$, along with some well-known properties, such as monotonicity and submodularity, of the cost function used by the query optimizer.

3.2.1 Naive Upper Bound. Let $\Omega$ be the set of all candidate indexes.

Definition 4 (Naive Upper Bound). Under Assumption 1,
$$ u(q, z) = c(q, \emptyset) - c(q, \Omega) = \Delta(q, \Omega) $$
is a valid upper bound of $\delta(q, z, X)$ for any $X$.

Intuitively, by the monotonicity property, the MCI of any single index $z$ cannot be larger than the CI of all candidate indexes in $\Omega$ combined. In practical index tuning applications, we often have $c(q, \Omega)$ available.
However, if $c(q, \Omega)$ is unavailable, then we set $u(q, z) = c(q, \emptyset)$, as it always holds that $c(q, \Omega) \geq 0$.

3.2.2 Upper Bound by Submodularity. We can improve over the naive upper bound by assuming that the cost function is submodular, which has been studied by previous work [10].

Assumption 2 (Submodularity). Given two configurations $X \subseteq Y$ and an index $z \not\in Y$, we have
$$ c(q, Y) - c(q, Y \cup \{z\}) \leq c(q, X) - c(q, X \cup \{z\}). $$
Or equivalently, $\delta(q, z, Y) \leq \delta(q, z, X)$.

That is, the MCI of an index $z$ diminishes when $z$ is included into a larger configuration with more indexes. Submodularity does not always hold, often due to index interaction [31]. We also validated the submodularity assumption using Microsoft SQL Server and the same workloads that we used to validate the monotonicity assumption. Our validation results show that submodularity holds with probability between 0.75 and 0.89 on the workloads tested [41].

Lemma 1. Under Assumption 2, we have
$$ \delta(q, z, X) \leq \Delta(q, \{z\}) $$
for any configuration $X$.

Due to space constraints, all proofs are deferred to the full version of this paper [41]. Intuitively, Lemma 1 indicates that the CI of a singleton configuration $\{z\}$ can be used as an upper bound of the MCI of the index $z$. As a result, we can set
$$ u(q, z) = \Delta(q, \{z\}) = c(q, \emptyset) - c(q, \{z\}). $$

There are cases where $c(q, \{z\})$ is unknown but we know the cost of some configuration $X$ that contains $z$, e.g., in MCTS where configurations are explored in random order. By Assumption 1,
$$ c(q, \{z\}) \geq \operatorname*{max}_{z \in X} c(q, X). $$
Therefore, we can generalize Equation 8 to have

Definition 5 (Submodular Upper Bound).
$$ \begin{array}{lcl} u(q, z) & = & c(q, \emptyset) - \displaystyle\operatorname*{max}_{z \in X} c(q, X) \\ & = & \displaystyle\operatorname*{min}_{z \in X} \Big( c(q, \emptyset) - c(q, X) \Big) \\ & = & \displaystyle\operatorname*{min}_{z \in X} \Delta(q, X). \end{array} $$ That is, the MCI of an index should be no larger than the minimum CI over all the configurations that contain it.

3.2.3 Summary. To summarize, assuming monotonicity and submodularity of the cost function $c$, we can set $u(q, z)$ as follows: $$ u(q, z) = \operatorname*{min}\{ c(q, \emptyset), \Delta(q, \Omega), \Delta(q, \{z\}), \operatorname*{min}_{z \in X} \Delta(q, X) \}. $$

# 3.3 Confidence-based What-if Call Skipping

Intuitively, the confidence of skipping the what-if call for a QCP $(q, C)$ depends on the closeness between the lower bound $L(q, C)$ and the upper bound $U(q, C)$, i.e., the derived cost $d(q, C)$. We define the gap between $U(q, C)$ and $L(q, C)$ as $$ G(q, C) = U(q, C) - L(q, C). $$ Clearly, the larger the gap is, the lower the confidence is. Therefore, it is natural to define the confidence as $$ \alpha(q, C) = 1 - \frac{G(q, C)}{U(q, C)} = \frac{L(q, C)}{U(q, C)}. $$ Following this definition, we have $0 \leq \alpha(q, C) \leq 1$. We further note two special cases: (1) $\alpha(q, C) = 0$, which implies $L(q, C) = 0$; and (2) $\alpha(q, C) = 1$, which implies $L(q, C) = U(q, C)$. Let $\alpha \in [0, 1]$ be a threshold for the confidence, i.e., the minimum confidence required for skipping a what-if call: we require $\alpha(q, C) \geq \alpha$. Intuitively, the higher $\alpha$ is, the higher the confidence with which a what-if call can be skipped.
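A compact sketch of the summary bound of Section 3.2.3 together with the confidence test above (the names are ours, not the paper's; `None` marks an unknown quantity):

```python
def mci_upper_bound(c_empty, delta_omega=None, delta_z=None, explored_deltas=()):
    """Summary bound: u(q, z) = min{c(q, {}), Delta(q, Omega), Delta(q, {z}),
    min over explored X containing z of Delta(q, X)}; unknown terms
    (None) are simply dropped."""
    candidates = [c_empty, delta_omega, delta_z, *explored_deltas]
    return min(v for v in candidates if v is not None)

def confidence(L, U):
    """alpha(q, C) = L(q, C) / U(q, C), which lies in [0, 1]."""
    return L / U if U > 0 else 1.0

def can_skip(L, U, alpha):
    """Skip the what-if call iff alpha(q, C) >= alpha."""
    return confidence(L, U) >= alpha

# Delta(q, Omega) = 35 is known, Delta(q, {z}) is not, and one explored
# configuration containing z has Delta(q, X) = 20:
u_z = mci_upper_bound(100.0, delta_omega=35.0, explored_deltas=[20.0])  # 20.0
# Bounds L = 76, U = 80 give confidence 0.95, enough for a threshold of 0.9.
```

Note how each additional known quantity can only tighten $u(q, z)$, which in turn raises $L(q, C)$ and hence the confidence.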
In our experimental evaluation, we further varied $\alpha$ to test the effectiveness of this confidence-based interception mechanism (see Section 6).

# 4 OPTIMIZATION

We present two optimization techniques for the generic lower bound detailed in Section 3.1, which is agnostic to budget-aware configuration enumeration algorithms—it only relies on general assumptions (i.e., monotonicity and submodularity) of the cost function $c$. One optimization is dedicated to budget-aware greedy search (i.e., vanilla/two-phase greedy), which is of practical importance due to its adoption in commercial index tuning software [7] (Section 4.1). The other optimization is more general and can also be used for other configuration enumeration algorithms mentioned in Section 2.2.3 such as MCTS (Section 4.2).

# 4.1 MCI Upper Bounds for Greedy Search

We propose the following optimization procedure for maintaining the MCI upper bound $u(q, z)$, which is the basic building block of the lower bound presented in Section 3.1, in vanilla greedy and two-phase greedy (see Section 2): Procedure 1. For each index $z$ that has not been selected by greedy search, we can update $u(q, z)$ w.r.t. the current configuration selected by greedy search as follows: (1) Initialize $u(q, z) = \operatorname*{min}\{c(q, \emptyset), \Delta(q, \Omega)\}$ for each index $z$. (2) During each greedy step $1 \leq k \leq K$, update $$ u(q, z) = c(q, C_{k-1}) - c(q, C_{k-1} \cup \{z\}) = \delta(q, z, C_{k-1}) $$ if both $c(q, C_{k-1})$ and $c(q, C_{k-1} \cup \{z\})$ are available. In step (2), $C_k$ is the configuration selected by greedy search in step $k$ and we set $C_0 = \varnothing$. A special case is when $k = 1$: if we know $c(q, \{z\})$, then we can update $u(q, z) = c(q, \emptyset) - c(q, \{z\}) = \Delta(q, \{z\})$, which reduces to the general upper bound (see Lemma 1). Theorem 1.
Under Assumptions 1 and 2, Procedure 1 is correct, i.e., the $u(q, z)$ after each update remains an MCI upper bound w.r.t. any future configuration $X$ explored by greedy search.

# 4.2 Coverage-based Refinement

The tightness of the MCI upper bounds in Section 3.2 largely depends on the knowledge about $c(q, \{z\})$, namely, the what-if costs of singleton configurations with a single index. Unfortunately, such information is often unavailable, and the MCI upper bound in Equation 9 is reduced to its naive version (Equation 6). For vanilla greedy and two-phase greedy, this implies that none of the QCPs with singleton configurations can be skipped under a reasonable confidence threshold (e.g., 0.8), which can take a large fraction of the budget, although the bounds are effective at skipping what-if calls for multi-index configurations; for MCTS, where configurations are explored in a random order, this further implies that skipping can be less effective for multi-index configurations as they are more likely to contain indexes with unknown what-if costs, in contrast to greedy search where multi-index configurations are always explored after singleton configurations. To overcome this limitation, we propose refinement techniques based on estimating the what-if cost $c(q, \{z\})$ if it is unknown, by introducing the notion of "coverage."

4.2.1 Definition of Coverage. We assume that $c(q, \Omega)$ is known for each query $q$. Moreover, we assume that we know the subset $\Omega_q \subset \Omega$ of indexes that appear in the optimal plan of $q$ by using indexes in $\Omega$. Clearly, $c(q, \Omega) = c(q, \Omega_q)$. For an index $z$, we define its coverage on the query $q$ as $$ \rho(q, z) = \frac{c(q, \emptyset) - c(q, \{z\})}{c(q, \emptyset) - c(q, \Omega_q)} = \frac{\Delta(q, \{z\})}{\Delta(q, \Omega_q)}.
$$ In other words, coverage measures the relative cost improvement of $z$ w.r.t. the maximum possible cost improvement over $q$ delivered by $\Omega _ { q }$ . If we know $\rho ( q , z )$ , the cost $c ( q , \{ z \} )$ can be recovered as $$ \begin{array} { r c l } { { c ( q , \{ z \} ) } } & { { = } } & { { { c ( q , \emptyset ) - \rho ( q , z ) \cdot \left( c ( q , \emptyset ) - c ( q , \Omega _ { q } ) \right) } } } \\ { { } } & { { = } } & { { \left( 1 - \rho ( q , z ) \right) \cdot c ( q , \emptyset ) + \rho ( q , z ) \cdot c ( q , \Omega _ { q } ) . } } \end{array} $$ In the following, we present techniques to estimate $\rho ( q , z )$ based on the similarities between index configurations, in particular $\{ z \}$ and $\Omega _ { q }$ . 4.2.2 Estimation of Coverage. We estimate coverage based on the assumption that it depends on the similarity between $\{ z \}$ and $\Omega _ { q }$ . Specifically, let $\mathrm { S i m } ( \{ z \} , \Omega _ { q } )$ be some similarity measure that is between 0 and 1, and we define $$ \rho ( q , z ) = \mathrm { S i m } ( \{ z \} , \Omega _ { q } ) . $$ The problem is then reduced to developing an appropriate similarity measure. Our current solution is the following, while further improvement is possible and left for future work. Configuration Representation. We use a representation similar to the one described in DBA bandits [28] that converts an index $z$ into a feature vector $\vec { \bf z }$ . Specifically, we use one-hot encoding based on all indexable columns identified in the given workload $W$ . Let $\mathcal { D } = \{ c _ { 1 } , . . . , c _ { L } \}$ be the entire domain of these $L$ indexable columns. For a given index $z$ , $\vec { \bf z }$ is an $L$ -dimensional vector. 
If some column $c_l \in \mathcal{D}$ ($1 \leq l \leq L$) appears in $z$, then $\vec{\mathbf{z}}[l]$ receives some nonzero weight $w_l$ based on the weighing policy described below:
• If $c_l$ is the $j$-th key column of $z$, then $w_l = \frac{1}{2^{j-1}}$;
• If $c_l$ is an included column of $z$, then $w_l = \frac{1}{2^J}$, where $J$ is the number of key columns contained by $z$.
Otherwise, we set $\vec{\mathbf{z}}[l] = 0$. Note that the above weighing policy considers the columns contained by an index as well as their order. Intuitively, leading columns in index keys play a more important role than other columns (e.g., for a "range predicate", an access path chosen by the query optimizer needs to match the "sort order" specified in the index key columns). We further combine feature vectors of individual indexes to generate a feature vector for the entire configuration. Specifically, consider a configuration $C = \{z_1, \ldots, z_m\}$ and let $\vec{\mathbf{z}}_i$ be the feature representation of the index $z_i$ ($1 \leq i \leq m$). The feature representation $\vec{\mathbf{C}}$ of $C$ is again an $L$-dimensional vector where $$ \vec{\mathbf{C}}[l] = \operatorname*{max}\{ \vec{\mathbf{z}}_1[l], \ldots, \vec{\mathbf{z}}_m[l] \}, \mathrm{~for~} 1 \leq l \leq L. $$ That is, the weight $\vec{\mathbf{C}}[l]$ is the largest weight of the $l$-th dimension among the indexes contained by $C$. In particular, we generate the feature vector $\vec{\Omega}_q$ for $\Omega_q$ in this way.

Query Representation. We further use a representation similar to the one described in ISUM [35] to represent a query $q$ as a feature vector $\vec{\mathbf{q}}$.
Specifically, we again use one-hot encoding for the query $q$ with the same domain $\mathcal { D } = \{ c _ { 1 } , . . . , c _ { L } \}$ of all indexable columns. If some column $\boldsymbol { c } _ { l } \in \mathcal { D }$ appears in the query $q$ , we assign a nonzero weight to $\vec { \bf q } [ l ]$ ; otherwise, $\vec { \bf q } [ l ] = 0$ . Here, we use the same weighing mechanism as used by ISUM. That is, the weight of a column is computed based on its corresponding table size and the number of candidate indexes that contain it. The intuition is that a column from a larger table and contained by more candidate indexes is more important and thus is assigned a higher weight. Similarity Measure. Before measuring the similarity, we first project $\vec { \bf z }$ and $\vec { \Omega } _ { q }$ onto $\vec { \bf q }$ to get their images under the context of the query $q$ . The projection is done by taking the element-wise dot product, i.e., $\tilde { \mathbf { z } } = \vec { \mathbf { z } } \cdot \vec { \mathbf { q } }$ and $\tilde { \Omega } _ { q } = \bar { \Omega } _ { q } \cdot \vec { \bf q }$ . Note that $\tilde { \textbf { z } }$ and $\tilde { \Omega } _ { q }$ remain vectors. We now define the similarity measure as $$ \mathrm { S i m } ( \{ z \} , \Omega _ { q } ) = \frac { \langle \tilde { \mathbf { z } } , \tilde { \Omega } _ { q } \rangle } { | \tilde { \Omega } _ { q } | ^ { 2 } } = \frac { | \tilde { \mathbf { z } } | \cdot | \tilde { \Omega } _ { q } | \cdot \cos \theta } { | \tilde { \Omega } _ { q } | ^ { 2 } } = \frac { | \tilde { \mathbf { z } } | \cdot \cos \theta } { | \tilde { \Omega } _ { q } | } , $$ where $\theta$ represents the angle between the two vectors $\tilde { \mathbf { z } }$ and $\tilde { \Omega } _ { q }$ Figure 5 illustrates and contrasts the definition and estimation of coverage. 
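The estimation pipeline of this subsection (index and configuration vectors, projection onto the query, similarity per Equation 12, and recovery of $c(q, \{z\})$ from the coverage of Equation 11) can be sketched as follows. This is an illustrative sketch with our own names; the query weights are simplified to 0/1 rather than ISUM's table-size-based weights:

```python
def index_vector(key_cols, included_cols, domain):
    """Feature vector of an index: the j-th key column gets weight
    1/2^(j-1); an included column gets 1/2^J, where J is the number of
    key columns; every other dimension is 0."""
    pos = {c: i for i, c in enumerate(domain)}
    v = [0.0] * len(domain)
    for j, c in enumerate(key_cols, start=1):
        v[pos[c]] = 1.0 / 2 ** (j - 1)
    for c in included_cols:
        v[pos[c]] = 1.0 / 2 ** len(key_cols)
    return v

def config_vector(index_vectors):
    """Feature vector of a configuration: element-wise max."""
    return [max(ws) for ws in zip(*index_vectors)]

def similarity(z_vec, omega_vec, q_vec):
    """Equation 12: project both vectors onto q element-wise, then take
    Sim = <z~, W~> / |W~|^2."""
    zt = [a * b for a, b in zip(z_vec, q_vec)]
    wt = [a * b for a, b in zip(omega_vec, q_vec)]
    return sum(a * b for a, b in zip(zt, wt)) / sum(w * w for w in wt)

def recover_singleton_cost(rho, c_empty, c_omega_q):
    """Coverage inverted: c(q, {z}) = (1 - rho) c(q, {}) + rho c(q, Omega_q)."""
    return (1.0 - rho) * c_empty + rho * c_omega_q

dom = ["a", "b", "c"]
z = index_vector(["a"], [], dom)                          # [1.0, 0.0, 0.0]
omega = config_vector([z, index_vector(["b"], [], dom)])  # [1.0, 1.0, 0.0]
rho = similarity(z, omega, [1.0, 1.0, 0.0])               # 0.5
cost = recover_singleton_cost(rho, 100.0, 60.0)           # 80.0
```

In the toy run, the index on column `a` covers half of the "length" of the optimal-plan configuration $\{a, b\}$ under the query, so its estimated singleton cost lands halfway between $c(q, \emptyset)$ and $c(q, \Omega_q)$.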
Figure 5(a) highlights the observation that $c(q, \{z\})$ must lie between $c(q, \Omega_q)$ and $c(q, \emptyset)$, and coverage measures the cost improvement $\Delta(q, \Omega_q)$ of $\Omega_q$ (i.e., the green segment) that is covered by the cost improvement $\Delta(q, \{z\})$ of $\{z\}$ (i.e., the orange segment). On the other hand, Figure 5(b) depicts the geometric view involved in the estimation of coverage using the similarity metric $\mathrm{Sim}(\{z\}, \Omega_q)$. Intuitively, the similarity measures how much "length" of the configuration $\Omega_q$ is covered by the "length" of the index $z$ when projected to the (same) "direction" of $\Omega_q$ in the feature vector space. Note that it is not important whether the lengths are close to the corresponding cost improvements—only their ratio matters. Based on our evaluation, the estimated coverage using Equation 12 is close to the ground-truth coverage in Equation 11 (see the full version of this paper [41] for details).

Fig. 5. The definition and estimation of "coverage."

# Algorithm 1: InitMCIBounds(𝑊 , 𝐼)

# 5 INTEGRATION

In this section, we present design considerations and implementation details when integrating Wii with existing budget-aware configuration search algorithms. We start by presenting the API functions provided by Wii. We then illustrate how existing budget-aware configuration enumeration algorithms can leverage the Wii APIs without modification to the algorithms.

# 5.1 Wii API Functions

As illustrated in Figure 1, Wii sits between the index tuner and the query optimizer.
It offers two API functions that can be invoked by a budget-aware configuration enumeration algorithm: (1) InitMCIBounds, which initializes the MCI upper bounds $u(q, z)$; and (2) EvalCost, which obtains the cost of a QCP $(q, C)$ in a budget-aware manner by utilizing the lower bound $L(q, C)$ and the upper bound $U(q, C)$, i.e., the derived cost $d(q, C)$.

5.1.1 The InitMCIBounds Function. Algorithm 1 presents the details. It initializes the MCI upper bound $u(q, z)$ for each query $q \in W$ and each of its candidate indexes $z \in I_q$. If $c(q, \Omega_q)$ is available, it uses the naive upper bound (Equation 6); otherwise, it uses $c(q, \emptyset)$.

5.1.2 The EvalCost Function. Algorithm 2 presents the details. If the what-if cost $c(q, C)$ is known, it simply uses that and updates the MCI upper bounds (lines 1 to 3). Otherwise, it checks whether the budget $B$ on the number of what-if calls has been exhausted and returns the derived cost $d(q, C)$ if so (lines 4 to 5). On the other hand, if there is remaining budget, i.e., $B > 0$, it then tries to use the upper bound $U(q, C)$ and the lower bound $L(q, C)$ to see whether the what-if call for $(q, C)$ can be skipped; if so, the derived cost $d(q, C)$ is returned (lines 6 to 11)—the budget $B$ remains the same in this case. Finally, if the confidence of skipping is low, we make one what-if call to obtain $c(q, C)$ (lines 12 to 13) and update the MCI upper bounds (line 14). As a result, we deduct one from the current budget $B$ (line 15). One may have noticed the optional input parameter $s$ in Algorithm 2, which represents some subset configuration of $C$ and is set to the existing configuration $\varnothing$ by default. We will discuss how to specify this parameter when using Wii in existing budget-aware configuration enumeration algorithms (e.g., greedy search and MCTS) shortly.
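The control flow of EvalCost described above can be sketched as follows. The names are ours; as a simplification, the derived cost $d(q, C)$ is stood in for by the upper bound $U(q, C)$, and the optional subset parameter $s$ is assumed to be folded into the bound functions `L` and `U`:

```python
def eval_cost(C, cache, budget, alpha, L, U, issue_whatif_call):
    """Return (cost, remaining_budget) for a QCP, following Algorithm 2's
    structure: use a cached what-if cost if available; otherwise fall back
    to the derived cost when the budget is gone or the confidence L/U
    reaches alpha; otherwise spend one real what-if call."""
    if C in cache:                          # cost already known
        return cache[C], budget
    if budget == 0:                         # budget exhausted
        return U(C), budget                 # derived cost (stand-in: U)
    if U(C) > 0 and L(C) / U(C) >= alpha:   # confident enough: skip the call
        return U(C), budget                 # budget is unchanged
    cost = issue_whatif_call(C)             # spend one what-if call
    cache[C] = cost                         # (MCI bounds would be updated here)
    return cost, budget - 1
```

For instance, with $L(C) = 76$ and $U(C) = 80$, a threshold of 0.9 lets the call be skipped (confidence 0.95), while a threshold of 0.99 triggers a real what-if call and decrements the budget.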
# 5.2 Budget-aware Greedy Search

To demonstrate how to use the Wii APIs without modifying the existing budget-aware configuration search algorithms, Algorithm 3 showcases how these APIs can be used by budget-aware greedy search, a basic building block of the existing algorithms. Notice that the InitMCIBounds API is invoked at line 1, whereas the EvalCost API is invoked at line 9, which are the only two differences compared to regular budget-aware greedy search. Therefore, there is no intrusive change to the greedy search procedure itself.

Remarks. We have two remarks here. First, when calling Wii to evaluate cost at line 9, we pass $C^*$ to the optional parameter $s$ in Algorithm 2. Note that this is just a special case of Equation 5 for greedy search, as stated by the following theorem: Theorem 2. In the context of greedy search, Equation 5 reduces to $$ L(q, C_z) = c(q, C^*) - \sum_{x \in C_z - C^*} u(q, x) = c(q, C^*) - u(q, z), $$ where $C_z = C^* \cup \{z\}$ and $C^*$ is the latest configuration selected by budget-aware greedy search (as shown in Algorithm 3). Second, in the context of greedy search, the update step at line 20 of Algorithm 2 becomes $$ u(q, x) \gets \operatorname*{min}\{ u(q, x), c(q, C^*) - c(q, C) \}. $$ The correctness of this update has been given by Theorem 1.

# Algorithm 2: EvalCost(𝑞, 𝐶, 𝐵, 𝛼, 𝑠 = ∅)

# Algorithm 3: GreedySearch(𝑊 , 𝐼, 𝐾, 𝐵, 𝛼)

# 5.3 Budget-aware Configuration Enumeration

We now outline the skeleton of existing budget-aware configuration enumeration algorithms after integrating Wii. We use the integrated budget-aware greedy search procedure in Algorithm 3 as a building block in our illustration. 5.3.1 Vanilla Greedy.
The vanilla greedy algorithm after integrating Wii is exactly the same as the GreedySearch procedure presented by Algorithm 3.

5.3.2 Two-phase Greedy. Algorithm 4 presents the details of the two-phase greedy algorithm after integrating Wii. There is no change to two-phase greedy except for using the version of GreedySearch in Algorithm 3. The function GetCandidateIndexes selects a subset of candidate indexes $I_q$ from $I$, considering only the indexable columns contained by the query $q$ [8].

# Algorithm 4: TwoPhaseGreedy(𝑊 , 𝐼, 𝐾, 𝐵, 𝛼)

Input: $W$, the workload; $I$, the candidate indexes; $K$, the cardinality constraint; $B$, the budget on the number of what-if calls; $\alpha$, the confidence threshold.
Output: $C^*$, the best configuration; $B'$, the remaining budget.
1 $I_W \gets \emptyset$, $B' \gets B$;
2 foreach $q \in W$ do
3 $\quad I_q \gets \mathsf{GetCandidateIndexes}(q, I)$;
4 $\quad (C_q, B') \gets \mathsf{GreedySearch}(\{q\}, I_q, K, B', \alpha)$;
5 $\quad I_W \gets I_W \cup C_q$;
6 $(C^*, B') \gets \mathsf{GreedySearch}(W, I_W, K, B', \alpha)$;
7 return $(C^*, B')$;

5.3.3 MCTS. Algorithm 5 presents the skeleton of MCTS after Wii is integrated. The details of the three functions InitMCTS, SelectQueryConfigByMCTS, and UpdateRewardForMCTS can be found in [46]. Again, there is no change to the MCTS algorithm except that cost evaluation at line 5 is delegated to the EvalCost API of Wii (Algorithm 2).
Note that here we pass the existing configuration $\varnothing$ to the optional parameter $s$ in Algorithm 2, which makes line 8 of Algorithm 2 on computing $L(q, C)$ become $$ L(q, C) \gets \operatorname*{max}\Big\{ 0, c(q, \Omega_q), c(q, \emptyset) - \sum_{x \in C} u(q, x) \Big\}. $$ Essentially, this means that we use Equation 4 for $L(q, C)$, instead of its generalized version shown in Equation 5. Although we could have used Equation 5, it was our design decision to stay with Equation 4, not only for simplicity but also because of the inefficacy of Equation 5 in the context of MCTS. This is due to the fact that in MCTS, configurations and queries are explored in random order. Therefore, the subsets $S$ w.r.t. a given pair of $q$ and $C$ with known what-if costs $c(q, S)$ are sparse. As a result, Equation 5 often reduces to Equation 4 when running Wii underneath MCTS.

# Algorithm 5: MCTS(𝑊 , 𝐼, 𝐾, 𝐵, 𝛼)

Input: $W$, the workload; $I$, the candidate indexes; $K$, the cardinality constraint; $B$, the budget on the number of what-if calls; $\alpha$, the confidence threshold.
Output: $C^*$, the best configuration; $B'$, the remaining budget.
1 $B' \gets B$;
2 $\mathsf{InitMCTS}(W, I)$;
3 while $B' > 0$ do
4 $\quad (q, C) \gets \mathsf{SelectQueryConfigByMCTS}(W, I, K)$;
5 $\quad (\mathrm{cost}(q, C), B') \gets \mathsf{EvalCost}(q, C, B', \alpha, \emptyset)$;
6 $\quad \mathsf{UpdateRewardForMCTS}(q, C, \mathrm{cost}(q, C))$;
7 $(C^*, B') \gets \mathsf{GreedySearch}(W, I, K, B', \alpha)$;
8 return $(C^*, B')$;

# 6 EXPERIMENTAL EVALUATION

We now report experimental results on evaluating Wii when integrated with existing budget-aware configuration search algorithms. We perform all experiments using Microsoft SQL Server 2017 under Windows Server 2022, running on a workstation equipped with 2.6 GHz multi-core AMD CPUs and 256 GB main memory.
# 6.1 Experiment Settings

Datasets. We used standard benchmarks and real workloads in our study. Table 1 summarizes the information of the workloads. For benchmark workloads, we use both the TPC-H and TPC-DS benchmarks with scale factor 10. We also use two real workloads, denoted by Real-D and Real-M in Table 1, which are significantly more complicated compared to the benchmark workloads, in terms of schema complexity (e.g., the number of tables), query complexity (e.g., the average number of joins and table scans contained by a query), and database/workload size. Moreover, we report the number of candidate indexes of each workload, which serves as an indicator of the size of the corresponding search space faced by an index configuration search algorithm.

Algorithms Evaluated. We focus on two state-of-the-art budget-aware configuration search algorithms described in Section 2: (1) two-phase greedy, which has been adopted by commercial index tuning software [7]; and (2) MCTS, which shows better performance than two-phase greedy. We omit vanilla greedy as it is significantly inferior to two-phase greedy [46]. Both two-phase greedy and MCTS use derived cost as an estimate for the what-if cost when the budget on what-if calls is exhausted. We evaluate Wii when integrated with the above configuration search algorithms.

Other Experimental Settings. In our experiments, we set the cardinality constraint $K \in \{10, 20\}$. Since the TPC-H workload is relatively small compared to the other workloads, we varied the budget $B$ on the number of what-if calls in $\{500, 1000\}$; for the other workloads, we varied the budget $B$ in $\{500, 1000, 2000, 5000\}$.

Table 1. Summary of database and workload statistics.
# 6.2 End-to-End Improvement

The evaluation metric used in our experiments is the percentage improvement of the workload based on the final index configuration found by a search algorithm, defined as $$ \eta(W, C) = \Big( 1 - \frac{c(W, C)}{c(W, \emptyset)} \Big) \times 100\%, $$ where $c(W, C) = \sum_{q \in W} c(q, C)$. Note that here we use the query optimizer's what-if cost estimate $c(q, C)$ as the gold standard of query execution cost, instead of using the actual query execution time, to be in line with previous work on evaluating index configuration enumeration algorithms [8, 19].

6.2.1 Two-phase Greedy. Figure 6 presents the evaluation results of Wii for two-phase greedy when setting the confidence threshold $\alpha = 0.9$ (see Section 6.2.5 for details of the 'Best' lines). We observe that Wii significantly outperforms the baseline (i.e., two-phase greedy without what-if call interception). For example, when setting $K = 20$ and $B = 5,000$, Wii improves over the baseline by increasing the percentage improvement from $50\%$ to $65\%$ on TPC-DS (Figure 6(f)), from $58\%$ to $74\%$ on Real-D (Figure 6(g)), and from $32\%$ to $54\%$ on Real-M (Figure 6(h)); even for the smallest workload TPC-H, when setting $K = 20$ and $B = 1,000$, Wii improves over the baseline from $78\%$ to $86\%$ (Figure 6(e)). Note that here Wii has used the optimization for greedy search (Section 4.1). We also observe that incorporating the coverage-based refinement described in Section 4.2 can further improve Wii in certain cases.
For instance, on TPC-DS when setting $K = 20$ and $B = 2,000$, it improves Wii by $13\%$, i.e., from $49\%$ to $62\%$, whereas Wii and the baseline perform similarly (Figure 6(f)); on Real-D when setting $K = 10$ and $B = 500$ (Figure 6(c)), it improves Wii by an

Fig. 6. Results for two-phase greedy with confidence threshold $\alpha = 0.9$ ("Cov." is shorthand for "Coverage"): (a) TPC-H, $K = 10$; (b) TPC-DS, $K = 10$; (c) Real-D, $K = 10$; (d) Real-M, $K = 10$; (e) TPC-H, $K = 20$; (f) TPC-DS, $K = 20$; (g) Real-D, $K = 20$; (h) Real-M, $K = 20$.

Fig. 7. Impact on the performance of Wii with or without the optimization for the MCI upper bounds ($\alpha = 0.9$): (a) TPC-H, $B = 1,000$; (b) TPC-DS, $B = 5,000$; (c) Real-D, $B = 5,000$; (d) Real-M, $B = 5,000$.

Fig. 8. Results of Wii for MCTS with confidence threshold $\alpha = 0.9$: (a) TPC-H, $K = 10$; (b) TPC-DS, $K = 10$; (c) Real-D, $K = 10$; (d) Real-M, $K = 10$; (e) TPC-H, $K = 20$; (f) TPC-DS, $K = 20$; (g) Real-D, $K = 20$; (h) Real-M, $K = 20$.

Impact of Optimization for MCI Upper Bounds. We further study the impact of the optimization proposed in Section 4.1 for two-phase greedy. In our experiment, we set $\alpha = 0.9$, $B = 1,000$ for TPC-H and $B = 5,000$ for the other workloads. Figure 7 presents the results. We observe that the optimization for MCI upper bounds offers a discernible benefit in two-phase greedy on TPC-H, TPC-DS, and Real-M. Given its negligible computation overhead, this optimization is warranted to be enabled by default in Wii.

6.2.2 MCTS. Figure 8 presents the results of Wii for MCTS, again by setting the confidence threshold $\alpha = 0.9$. Unlike the case of two-phase greedy, for MCTS Wii often performs similarly to the baseline (i.e., MCTS without what-if call interception).

Fig. 9. Performance impact when lowering the confidence threshold $\alpha$ of Wii for two-phase greedy ($K = 20$).
This is not surprising, given that MCTS already significantly outperforms two-phase greedy in many (but not all) cases, which can be verified by comparing the corresponding charts in Figure 6 and Figure 8—further improvement on top of that is more challenging. However, there are noticeable cases where we do observe significant improvement as we incorporate the coverage-based refinement into Wii. For instance, on Real-M, when setting $K = 10$ and $B = 500$ (Figure 8(d)), it improves over the baseline by increasing the percentage improvement of the final index configuration found by MCTS from $7.8\%$ to $27.1\%$; a similar observation holds when we increase $K$ to 20 (Figure 8(h)), where we observe an even higher boost in the percentage improvement (i.e., from $8.5\%$ to $36.9\%$). In general, we observe that Wii is more effective on the two larger workloads (TPC-DS and Real-M), which have more complex queries and thus much larger search spaces (ref. Table 1). In such situations, the number of configurations that MCTS can explore within the budget constraint is too small compared to the entire search space. Wii increases the opportunity for MCTS to find a better configuration by skipping spurious what-if calls. Nevertheless, compared to two-phase greedy, MCTS has its own limitations (e.g., its inherent usage of randomization) that require more research to pave the way for its adoption by commercial index tuners [36]. Moreover, MCTS is not suitable for the "unlimited budget" case (Section 6.8) as it requires a budget constraint as input.

6.2.3 Discussion. Comparing Figures 6 and 8, while the baseline version of two-phase greedy clearly underperforms that of MCTS, the Wii-enhanced version of two-phase greedy performs similarly or even better than that of MCTS.
Existing budget allocation policies are largely macro-level optimization mechanisms, meaning that they deem what-if calls as atomic black-box operations that are out of their optimization scopes. However, our results here reveal that micro-level optimization mechanisms like Wii, which operate at the granularity of individual what-if calls, can interact with and have a profound impact on the performance of those macro-level optimization mechanisms. An in-depth study and understanding of such macro-/micro-level interactions may lead to the invention of better budget allocation policies. Moreover, based on our evaluation results, the coverage-based refinement does not always improve Wii's performance. A natural question is then how users would choose whether or not to use it. Are there some simple tests that can indicate whether or not it will be beneficial? Since the motivation of the coverage-based refinement is to make Wii work more effectively in the presence of unknown singleton-configuration what-if costs, one idea could be to measure the fraction of such singleton configurations and enable the coverage-based refinement only when this fraction is high. However, this measurement can only be monitored "during" index tuning, and there are further questions if index tuning is budget-constrained (e.g., how much budget should be allocated for monitoring this measurement). Thus, there seems to be no simple answer and we leave its investigation for future work.

6.2.4 Evaluation of Confidence-based What-if Call Skipping. We start by investigating the impact of the confidence threshold $\alpha$ on Wii. For this set of experiments, we use the budget $B = 1,000$ for TPC-H and $B = 5,000$ for the other workloads, and we vary $\alpha \in \{0.8, 0.9, 0.95\}$. Figures 10 and 11 present the evaluation results.
We observe that Wii is not sensitive to the threshold $\alpha$ within the range that we tested, for both two-phase greedy and MCTS. On the other hand, coverage-based refinement is more sensitive to $\alpha$. For instance, for two-phase greedy on Real-M with cardinality constraint $K = 10$ (ref. Figure 10(d)), the end-to-end percentage improvement of the final configuration found increases from $35.6\%$ to $53.3\%$ when raising $\alpha$ from 0.8 to 0.95. This suggests both opportunities and risks of using the coverage-based refinement for Wii, as one needs to choose the confidence threshold $\alpha$ more carefully. A more formal analysis can be found in [41].

Fig. 10. Impact of the confidence threshold for two-phase greedy ("Cov." is shorthand for "Wii-Coverage"): (a) TPC-H, $B = 1,000$; (b) TPC-DS, $B = 5,000$; (c) Real-D, $B = 5,000$; (d) Real-M, $B = 5,000$.

Fig. 11. Impact of the confidence threshold for MCTS ("Cov." is shorthand for "Wii-Coverage"): (a) TPC-H, $B = 1,000$; (b) TPC-DS, $B = 5,000$; (c) Real-D, $B = 5,000$; (d) Real-M, $B = 5,000$.

Fig. 12. Performance impact when lowering the confidence threshold $\alpha$ used by Wii for MCTS ($K = 20$): (a) TPC-H, $B = 1,000$; (b) TPC-DS, $B = 5,000$; (c) Real-D, $B = 5,000$; (d) Real-M, $B = 5,000$.

Low Confidence Threshold. An interesting question is the performance impact of using a relatively lower confidence threshold compared to the ones used in the previous evaluations. To investigate this question, we further conduct experiments by setting the confidence threshold $\alpha = 0.5$. Figures 9 and 12 present results for two-phase greedy and MCTS with the cardinality constraint $K = 20$. We have the following observations. First, the performance of Wii often becomes much worse compared to using a high confidence threshold like the $\alpha = 0.9$ in the charts—it is sometimes even worse than the baseline, e.g., in the case of MCTS on Real-D, as shown in Figure 12(c). Second, coverage-based refinement seems more sensitive to the use of a low confidence threshold, due to its inherent uncertainty in estimating singleton-configuration what-if costs.

Necessity of Confidence-based Mechanism.
Since the confidence-based skipping mechanism comes with the additional overhead of computing the lower and upper bounds of the what-if cost (Section 6.4), it is natural to ask whether this mechanism is necessary at all, e.g., compared with simply skipping what-if calls at random (the "Randomized" variant in the figures).

Fig. 14. Amount of what-if calls skipped by Wii for two-phase greedy ("Cov." is shorthand for "Wii-Coverage"); (a) TPC-H, $B = 1{,}000$; (b) TPC-DS, $B = 5{,}000$; (c) Real-D, $B = 5{,}000$; (d) Real-M, $B = 5{,}000$.

Fig. 15. Amount of what-if calls skipped by Wii for MCTS ("Cov." is shorthand for "Wii-Coverage"); same panels as Fig. 14.
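The skip counts in Figures 14 and 15 are reported as multiples of the budget $B$; this convention reduces to a simple ratio, sketched below as an illustrative helper (not from the paper's code).

```python
def skip_ratio(num_skipped, budget):
    """Relative amount of what-if calls skipped, expressed as a multiple of
    the what-if call budget B (e.g., 3.6 means 3.6 x B calls were skipped)."""
    return num_skipped / budget

# e.g., 18,000 skipped calls under B = 5,000 corresponds to 3.6B
```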
[Fig. 16 (caption not preserved in source): average computation time in milliseconds (log-scale $y$-axis) of cost derivation ("Derived Cost"), the lower bound ("Lower Bound"), the coverage-based lower bound ("Coverage"), and a what-if call ("What-if Call"), for two-phase greedy and MCTS; (a) TPC-H, $B = 1{,}000$; (b) TPC-DS, $B = 5{,}000$; (c) Real-D, $B = 5{,}000$; (d) Real-M, $B = 5{,}000$.]

# 6.2.5 Best Possible Improvement

It is difficult to know the best possible improvement without making a what-if call for every QCP enumerated during configuration search, which is infeasible in practice. We provide an approximate assessment by using a much larger budget $B$ in two-phase greedy. Specifically, we use $B = 5{,}000$ for TPC-H and $B = 20{,}000$ for the other workloads. For each workload, we run two-phase greedy both without and with Wii, and we take the best improvement observed in these two runs. The "Best" line in Figures 6 and 8 presents this result.

# 6.3 Efficacy of What-If Call Interception

We measure the relative amount of what-if calls skipped by Wii, namely, the ratio between the number of what-if calls skipped and the budget allowed. Figures 14 and 15 present the results for two-phase greedy and MCTS when varying $\alpha \in \{0.8, 0.9, 0.95\}$. We have several observations.

First, in general, Wii is more effective at skipping spurious what-if calls for two-phase greedy than for MCTS. For example, when setting $K = 20$ and $\alpha = 0.9$, Wii is able to skip $3.6B$ (i.e., $3.6 \times 5{,}000 = 18{,}000$) what-if calls for two-phase greedy, whereas it skips only $0.57B$ (i.e., 2,850) what-if calls for MCTS. This is correlated with the observation that Wii exhibits a more significant end-to-end improvement in terms of the final index configuration found for two-phase greedy than for MCTS, as we highlighted in Section 6.2.

Second, the coverage-based refinement often enables Wii to skip more what-if calls. For instance, for MCTS on Real-M when setting $K = 20$ and $\alpha = 0.8$, Wii is able to skip only $1.48B$ (i.e., 7,400) what-if calls, which leads to no observable end-to-end improvement over the baseline; with the coverage-based refinement enabled, however, the number of what-if calls that Wii can skip rises to $42.7B$ (i.e., 213,500), which results in nearly a $10\%$ boost in the end-to-end improvement (ref. Figure 11(d)).

Third, while one would expect the amount of what-if calls skipped to decrease when we increase the confidence threshold $\alpha$, this is sometimes not the case, especially for two-phase greedy. As shown in Figures 14(a), 14(b), and 14(c), the number of skipped calls can actually increase when raising $\alpha$. The reason for this unexpected phenomenon is the special structure of the two-phase greedy algorithm: lowering $\alpha$ allows more what-if calls to be skipped in the first phase, where the goal is to find good candidate indexes for each individual query. Skipping more what-if calls in the first phase can therefore result in fewer candidate indexes being selected because, without what-if calls, the derived costs for the candidate indexes all have the same value (namely, the what-if cost with the existing index configuration, i.e., $c(q, \emptyset)$), and the algorithm thus exits early (Algorithm 3, line 14). As a result, this eventually leads to a smaller search space for the second phase and therefore fewer opportunities for what-if call interception.

Table 2.
Additional overhead of Wii and Wii-Coverage, measured as a percentage of the execution time of the baseline configuration search algorithm ($K = 20$, $\alpha = 0.9$).

# 6.4 Computation Overhead

We measure the average computation time of the lower bound of the what-if cost. For comparison, we also report the average time of cost derivation as well as of making a what-if call. Figure 16 summarizes the results when running two-phase greedy and MCTS with $K = 20$ and $\alpha = 0.9$. We have the following observations. First, the computation time of the lower bound is similar to that of cost derivation, both of which are orders of magnitude smaller than the time of making a what-if call (note that the $y$-axis of Figure 16 is in logarithmic scale). Second, the coverage-based refinement increases the computation time of the lower bound, but it remains negligible compared to a what-if call.

Table 2 further presents the additional overhead of Wii w.r.t. the baseline configuration search algorithm without Wii, measured as a percentage of the baseline execution time. We observe that Wii's additional overhead, with or without the coverage-based refinement, is around $3\%$ at maximum, while the typical additional overhead is less than $0.5\%$.

# 6.5 Storage Constraints

As mentioned earlier, one may have other constraints in practical index tuning in addition to the cardinality constraint. One common constraint is the storage constraint (SC), which limits the maximum amount of storage taken by the recommended indexes [19]. To demonstrate the robustness of Wii w.r.t. other constraints, we evaluate its efficacy when varying the SC as well. In our evaluation, we fix $K = 20$, $\alpha = 0.9$, and $B = 1{,}000$ for TPC-H and $B = 5{,}000$ for the other workloads, while varying the allowed storage size as $2\times$ and $3\times$ the database size ($3\times$ is the default setting of DTA [1]). Figures 17 and 18 present the evaluation results for two-phase greedy and MCTS.
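To make the interplay between the cardinality constraint $K$ and the storage constraint concrete, here is a minimal, hypothetical greedy selection sketch. This is not DTA's actual two-phase algorithm; the per-candidate `benefit` and `size` estimates are assumed inputs.

```python
def greedy_select(candidates, k, storage_limit):
    """Pick up to k indexes whose total size fits within the storage limit.

    candidates: list of (name, benefit, size) tuples -- illustrative only.
    Candidates are considered in decreasing order of estimated benefit;
    any candidate that would exceed the storage limit is skipped.
    """
    chosen, used = [], 0
    for name, benefit, size in sorted(candidates, key=lambda c: -c[1]):
        if len(chosen) == k:
            break  # cardinality constraint reached
        if used + size <= storage_limit:
            chosen.append(name)
            used += size
    return chosen, used
```

For example, with `candidates = [("a", 10, 5), ("b", 8, 4), ("c", 6, 3)]`, `k = 2`, and `storage_limit = 8`, the sketch picks `a` (size 5), skips `b` (would need 9), and then picks `c` (total size 8).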
Overall, we observe similar patterns in the presence of the SC. That is, Wii, with or without the coverage-based refinement, often significantly outperforms the baseline approaches, especially for two-phase greedy.

Fig. 17. Evaluation results of Wii for two-phase greedy with varying storage constraints ($K = 20$, $\alpha = 0.9$).

Fig. 18. Evaluation results of Wii for MCTS with varying storage constraints ($K = 20$, $\alpha = 0.9$).

# 6.6 Beyond Derived Cost

When Wii decides to skip a what-if call, it returns the derived cost (i.e., the upper bound) as an approximation of the what-if cost. This is not mandatory, and there are other options. For example, one can instead return the average of the lower and upper bounds. We further evaluate this idea below.

Figures 19 and 20 present the results. While both options perform similarly most of the time, we observe that they perform quite differently in a few cases; moreover, either one may outperform the other in these cases. For example, with the coverage-based refinement enabled in Wii, when setting $\alpha = 0.5$, returning the average significantly outperforms returning the upper bound on TPC-H ($74.7\%$ vs. $59.7\%$); however, on Real-M returning the average loses $10.5\%$ in percentage improvement compared to returning the upper bound ($11.8\%$ vs. $22.3\%$). As a result, the question of finding a better cost approximation than the upper bound (i.e., the derived cost) remains open, and we leave it for future exploration.

Fig. 19. Using derived cost vs. the average of lower and upper bounds for two-phase greedy ($K = 20$).

[Fig. 20 (caption not preserved in source): using derived cost vs. the average of lower and upper bounds for MCTS ($K = 20$).]

# 6.7 Impact of Submodularity Assumption

Although our validation results show that submodularity holds with probability between 0.75 and 0.89 on the workloads tested [41], it remains an interesting question to understand the impact on Wii when submodularity does not hold.
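A toy illustration of such a violation, with hypothetical cost numbers (the `mci` helper mirrors the marginal-cost-improvement notion but is not the paper's code): index interaction can make the marginal benefit of an index grow as the configuration grows.

```python
def mci(cost, q, index, config):
    """Marginal cost improvement (MCI) of adding `index` on top of `config`:
    the drop in what-if cost for query q."""
    return cost(q, config) - cost(q, config | {index})

# Hypothetical optimizer cost: an index-intersection plan uses ix_a and ix_b
# together (cost 50), but neither index alone helps (cost stays at 100).
def cost(q, config):
    return 50.0 if {"ix_a", "ix_b"} <= config else 100.0

# Submodularity would require MCI to shrink as the configuration grows;
# here it grows from 0 to 50 once the other index is selected.
assert mci(cost, "q1", "ix_a", frozenset()) == 0.0
assert mci(cost, "q1", "ix_b", frozenset({"ix_a"})) == 50.0
```

In this situation an MCI bound computed from singleton improvements alone underestimates the true MCI, which is exactly the overconfidence scenario discussed next.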
As we mentioned in Section 3.2.2, submodularity often does not hold due to index interaction [31]. For example, the query optimizer may choose an index-intersection plan when two indexes are available at the same time, but utilize neither if only one of them is present. In this example, submodularity does not hold, because the MCI of either index increases after the other index is selected. As a result, Equation 8 is no longer an MCI upper bound; it will be smaller than the actual MCI upper bound. Consequently, the $L(q, C)$ computed by Equation 4 will be larger than the actual lower bound of the what-if cost, which implies an overconfident situation for Wii, where the confidence is computed by Equation 10. The degree of overconfidence depends on the magnitude of the violation of the submodularity assumption, which we further measured in our evaluation (see [41] for details).

Table 3. Magnitude of violation (of submodularity).

Table 3 summarizes the key statistics of the magnitude of violation measured. Among the four workloads, we observe that Real-D and Real-M have a relatively higher magnitude of violation, which implies that Wii tends to be more overconfident on these two workloads. As a result, Wii is more likely to skip what-if calls that should not have been skipped, especially when the confidence threshold $\alpha$ is relatively low. Correspondingly, we observe more sensitive behavior of Wii on Real-D and Real-M when increasing $\alpha$ from 0.5 to 0.9 (ref. Figures 9 and 12).

# 6.8 The Case of Unlimited Budget

As we noted in the introduction, Wii can also be used in the special situation where one does not enforce a budget on the index tuner, namely, the tuner can make an unlimited number of what-if calls. This situation may make sense if one has a relatively small workload. Although Wii cannot improve the quality of the final configuration found, by skipping unnecessary what-if calls it can significantly reduce the overall index tuning time.
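The tuning-time reductions reported for this setting (numbers from Table 4) translate directly into speedup ratios; as a quick check of the arithmetic:

```python
def speedup(minutes_without_wii, minutes_with_wii):
    """Index tuning speedup from Wii, as a ratio of tuning times."""
    return minutes_without_wii / minutes_with_wii

# TPC-H (alpha = 0.9): 8.2 min -> 1.9 min
tpch = round(speedup(8.2, 1.9), 1)       # 4.3x
# Real-D (alpha = 0.9): 380.6 min -> 120 min
reald = round(speedup(380.6, 120.0), 1)  # 3.2x
```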
To demonstrate this, we tune the two relatively small workloads, namely TPC-H with 22 queries and Real-D with 32 queries, using two-phase greedy without enforcing a budget constraint on the number of what-if calls. We do not use MCTS, as it explicitly leverages the budget constraint by design and cannot work without the budget information. We set $K = 20$ for TPC-H and $K = 5$ for Real-D in our experiments to keep the total execution time under control. We also vary the confidence threshold $\alpha \in \{0.8, 0.9\}$ for Wii.

Table 4 summarizes the evaluation results. We observe a significant reduction of index tuning time by using Wii. For instance, on TPC-H when setting the confidence threshold $\alpha = 0.9$, the final configurations returned by two-phase greedy, with or without Wii, achieve the same $85.2\%$ improvement over the existing configuration; however, the tuning time is reduced from 8.2 minutes to 1.9 minutes (i.e., a $4.3\times$ speedup) when Wii is used. As another example, on Real-D when setting $\alpha = 0.9$, the final configurations returned, with or without Wii, achieve similar improvements over the existing configuration ($64\%$ vs. $62.3\%$); however, the tuning time is reduced from 380.6 minutes to 120 minutes (i.e., a $3.2\times$ speedup) by using Wii. The index tuning time on Real-D is considerably longer than that on TPC-H, since the Real-D queries are much more complex.

# 7 RELATED WORK

Index Tuning. Index tuning has been studied extensively in previous work (e.g., [4, 5, 7, 8, 12, 17, 20, 30, 35, 37, 40, 42, 46]). The recent work by Kossmann et al. [19] conducted a survey as well as a benchmark study of existing index tuning technologies. Their evaluation results show that DTA with the two-phase greedy search algorithm [7, 8] yields state-of-the-art performance, and it has therefore been the focus of our study in this paper as well.

Budget-aware Configuration Enumeration.
Configuration enumeration is a core problem of index tuning. The problem is NP-hard and hard to approximate [6, 11]. Although two-phase greedy is the current state of the art [19], it remains inefficient on large and/or complex workloads, due to the large number of what-if calls made to the query optimizer during configuration enumeration [19, 26, 33, 37]. Motivated by this, [46] studies a constrained configuration enumeration problem, called budget-aware configuration enumeration, that limits the number of what-if calls allowed during configuration enumeration. Budget-aware configuration enumeration introduces a new budget allocation problem regarding which query-configuration pairs (QCPs) deserve what-if calls.

Table 4. Index tuning time with unlimited budget.

Application of Data-driven ML Technologies. There has been a flurry of recent work on applying data-driven machine learning (ML) technologies to various aspects of index tuning [36], such as reducing the chance of performance regression on the recommended indexes [13, 48], configuration search algorithms based on deep learning and reinforcement learning [21, 28, 29, 32], using learned cost models to replace what-if calls [33, 37], and so on. While we do not use ML technologies in this work, it remains interesting future work to consider ML-based technologies, for example, to improve the accuracy of the estimated coverage.

Cost Approximation and Modeling. From an API point of view, Wii returns an approximation (i.e., the derived cost) of the what-if cost whenever a what-if call is saved. There have been various other technologies for cost approximation and modeling, focusing on replacing the query optimizer's cost estimate with actual predictions of query execution time (e.g., [2, 14, 16, 23–25, 27, 34, 38, 43–45, 47]).
This line of effort is orthogonal to our work, which uses the optimizer's cost estimate as the gold standard of query execution cost, in line with previous work on evaluating index configuration enumeration algorithms [8, 19].
Index tuning aims to find the optimal index configuration for an input workload. It is often a time-consuming and resource-intensive process, largely due to the huge number of "what-if" calls made to the query optimizer during configuration enumeration. Therefore, in practice it is desirable to set a budget constraint that limits the number of what-if calls allowed. This yields a new problem of budget allocation, namely, deciding on which query-configuration pairs (QCPs) to issue what-if calls. Unfortunately, optimal budget allocation is NP-hard, and the budget allocation decisions made by existing solutions can be inferior. In particular, many of the what-if calls allocated by existing solutions are devoted to QCPs whose what-if costs can be approximated by cost derivation, a well-known technique that is computationally much more efficient and has been adopted by commercial index tuning software. This results in considerable waste of the budget, as these what-if calls are unnecessary. In this paper, we propose "Wii," a lightweight mechanism that aims to avoid such spurious what-if calls. It can be seamlessly integrated with existing configuration enumeration algorithms. Experimental evaluation on top of both standard industrial benchmarks and real workloads demonstrates that Wii can eliminate a significant number of spurious what-if calls. Moreover, by reallocating the saved budget to QCPs where cost derivation is less accurate, existing algorithms can be significantly improved in terms of the final configuration found.
# 1 Introduction The Lecture Video Visual Objects (LVVO) Dataset is designed as a benchmark for object detection in lecture video frames. It includes bounding box annotations for four visual categories: Table, Chart-Graph, Photographic-Image, and Visual-Illustration. The dataset consists of 4,000 images (video frames) extracted from a diverse collection of lecture videos. Out of these, a randomly selected subset of 1,000 images has been manually annotated by expert annotators, forming the LVVO 1k labeled dataset. Each image was independently annotated by two annotators, with a third expert reviewing and resolving any disagreements to ensure high-quality consensus annotations. The following sections detail the dataset creation process and present key statistics gathered during its development. # 2 Dataset Preparation To build our dataset, we collected lecture videos from videopoints.org [1]. We then extracted 4,000 visually rich and distinct frames, ensuring diversity across multiple instructors and subject areas. # 2.1 Lecture Video Collection The lecture videos were sourced from videopoints.org [1], a platform hosting screen-captured live lectures, as part of the previous work in [2]. The collection includes videos from eight different instructors, covering 13 distinct courses, with a total of 245 lecture videos. These lectures span three subject areas: biology, computer science, and geosciences. To ensure the inclusion of the most recent lectures, we selected courses from the latest semesters offered by each instructor. The lectures in the dataset were recorded between 2019 and 2024. # 2.2 Unique Frame Extraction We adopted the method from [3] to identify slide transition points and extract key frames representing distinct slides from the lecture videos. However, we observed duplicate frames, often caused by instructors revisiting previous slides during lectures. 
To address this, we extended the algorithm to detect and remove duplicate frames within a window of key frames for each video. Additionally, we prioritized filtering out frames that contained only textual content with no significant visual elements. These refinements ensured that the final dataset retained unique video frames with significant visual content, resulting in a finalized set of 4,000 images. Each image file is named using the format: <instructor id> <course id> <video id> <filename>. Table 1 summarizes the distribution of instructors, courses, and extracted frames across the three subject areas. Table 1: Distribution of Instructors, Courses, and Extracted Video Frames Across Subject Areas # 3 Manual Annotation # 3.1 Annotation Workflow A randomly selected subset of 1,000 images (referred to as LVVO 1k) was manually labeled by expert annotators with bounding box annotations for four distinct categories: Table, Chart-Graph, Photographic-Image, and Visual-Illustration. Annotating lecture slides presents unique challenges. They typically consist of artificially designed visual content where visual objects have diverse semantic meanings and weak structural boundaries—unlike well-defined objects in natural images such as chairs, tables, cats, or dogs [4]. To ensure high-quality and consistent annotations, we engaged graduate students with relevant domain expertise and provided them with unified instructions (see Section 3.2). The annotation was carried out using the Microsoft VoTT annotation tool [5], which allowed annotators to draw bounding boxes and assign category labels. The process followed three phases: 1. Initial Calibration: All the annotators labeled an initial set of 50 sample frames using the provided instructions. After annotation, a group discussion was conducted to review differences and understand challenges in annotation. Subsequently, the guidelines were modified to reduce ambiguity. 2. 
Independent Annotation: The remaining frames were divided among the annotators, with each frame independently labeled by two annotators, following the finalized guidelines to ensure cross-verification and consistency. 3. Conflict Resolution: A third expert was involved only in cases where the initial two annotators disagreed. For each such instance, the expert resolved conflicts by selecting the most accurate bounding boxes from one or both annotators, thereby finalizing the annotation. This rigorous annotation process ensures the dataset's reliability and consistency, making it well suited for benchmarking object detection models on lecture video frames.

# 3.2 Annotation Instructions

The following instructions were provided to the annotators to ensure consistency and accuracy during the manual labeling process. These guidelines define what qualifies as a visual object, outline the annotation procedure, and specify the categories used for labeling. Annotators followed these instructions while using the VoTT annotation tool [5] to perform bounding box annotations on the selected video frames.

Task: To identify and categorize visual objects in video frames that are meaningful to the video content. Specifically, you will: 1. Identify and draw a bounding box around each visual object. 2. Label each identified visual object with a category selected from the provided list below.

What is a Visual Object? For the purpose of this task, a visual object contains an image or multiple images that together represent meaningful semantic content in the video.

• Visual objects can be photographic images, charts, tables, or illustrations. A visual object may contain text, such as the content of the cells in a table or labels of components in the image. It should not include captions or descriptions that are not directly a part of the image.

• Images that are not relevant to the lecture content are not considered visual objects.
For example, speaker faces, logos, and other content that is part of the video frame background should not be selected as visual objects.

• Your goal is to select coherent and complete visual objects, which we refer to as valid objects. In some cases, a larger visual object consisting of nearby valid visual objects may also appear to be a valid visual object. In such situations, it is sufficient to select only the smaller valid visual objects.

• The rectangular bounding boxes may overlap, but the visual objects themselves should not.

Categories: Assign one of the following category labels to each visual object you identify:

• Table: An arrangement of information or data, typically in rows and columns.

• Chart-Graph: A graphical representation of data.

• Photographic-Image: Pictures that are made using cameras.

• Visual-Illustration: Diagrams, flowcharts, and other visual illustrations.

When following these steps, use your best judgment in case of ambiguity. In some cases, the boundaries, the category label, or even the existence of a visual object may not be clear. We are looking for your best guess in such scenarios.

# 3.3 Annotation Statistics

# 3.3.1 Comparison of Independent Annotations

To assess the agreement of independent annotations, we compared the two versions in which each image was labeled by a different annotator. For each frame, bounding boxes from the two annotation sets were matched using a greedy algorithm that iteratively selects the box pair with the highest Intersection over Union (IoU), ensuring that each box is matched at most once. The process continues until no remaining pairs meet the provided IoU threshold. After completing the matching across all frames, we aggregated the total number of matched pairs and unmatched boxes for each version. Figure 1 presents a stacked bar chart showing the distribution of matched and unmatched boxes across a range of IoU thresholds.
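The greedy IoU matching described above can be sketched as follows. This is a minimal illustration, not the dataset's actual tooling; boxes are assumed to be `(x1, y1, x2, y2)` pixel rectangles.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def greedy_match(boxes1, boxes2, threshold=0.5):
    """Greedily pair boxes across the two annotation versions by highest IoU.

    Each box is matched at most once; pairing stops once no remaining pair
    meets the IoU threshold. Returns a list of (index1, index2, iou) tuples.
    """
    pairs = sorted(
        ((iou(b1, b2), i, j)
         for i, b1 in enumerate(boxes1)
         for j, b2 in enumerate(boxes2)),
        reverse=True)
    used1, used2, matches = set(), set(), []
    for score, i, j in pairs:
        if score < threshold:
            break  # no remaining pair meets the threshold
        if i not in used1 and j not in used2:
            matches.append((i, j, score))
            used1.add(i)
            used2.add(j)
    return matches
```

Unmatched boxes in either version are then simply those indexes that never appear in the returned matches.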
Each bar corresponds to a specific IoU threshold value, with the total height representing the combined count of matched pairs and unmatched boxes. The green segment indicates the number of matched pairs, while the blue and red segments represent unmatched boxes from version 1 and version 2, respectively. Most matches occur at high IoU values, indicating that the two annotators generally placed bounding boxes in similar positions. Moving from right to left (i.e., from high to low IoU thresholds), the number of additional matched pairs gained gradually declines and eventually levels off. At the lowest thresholds, some unmatched boxes remain: 122 from version 1 and 152 from version 2. These likely reflect semantic disagreements or differing interpretations between annotators. At an IoU threshold of 0.5, 1278 matched pairs (involving 2556 boxes) were identified, with 239 and 269 unmatched boxes from versions 1 and 2, respectively, reflecting an $83.41\%$ agreement and strong annotator alignment under moderate overlap conditions.

Figure 1: Stacked bar plot showing matched (green) and unmatched boxes (blue: version 1, red: version 2) over IoU thresholds. Higher thresholds lead to fewer matches due to stricter overlap requirements.

Figure 2: Confusion matrix showing category-wise agreement among matched annotations (where IoU $\geq 0.75$). Strong diagonal values indicate high consistency, while off-diagonal elements reveal label discrepancies.

Figure 2 illustrates the category-wise agreement between version 1 and version 2 annotations, limited to matched pairs with an IoU of at least 0.75. The strong diagonal dominance indicates high labeling consistency across versions, while the sparse off-diagonal entries highlight rare instances of category mismatch.
Notably, the Visual-Illustration category is most commonly confused with Chart-Graph and Photographic-Image, suggesting occasional ambiguity in distinguishing between these categories. Although the annotations showed strong overall agreement, discrepancies were still present. To address them, a third expert was involved to review and resolve conflicts, as outlined before.

# 3.3.2 Final Annotation

Following the conflict resolution process, we finalized the consensus annotations used in the dataset. Figure 3a shows that most images contain one or two objects, with fewer images containing higher counts. Figure 3b highlights that Visual-Illustration dominates the category distribution, followed by Chart-Graph and Photographic-Image, with Table being the least common.

Figure 3: Summary of consensus annotations after conflict resolution. (a) Distribution of the number of annotated objects per image. (b) Category-wise distribution of annotated objects.

# 4 Automatic Annotation

The LVVO dataset contains 4,000 video frames, of which only 1,000 were manually labeled because of the significant effort required for high-quality manual annotation. To expand the LVVO dataset further and reduce manual effort, we are also releasing the remaining 3,000 frames, which have been automatically labeled using the methodology described below. Our approach [6] involves fine-tuning a COCO-pretrained YOLOv11 [7] model using transfer learning. The model is first adapted to the manually annotated LVVO 1k dataset, with an 80% training and 20% validation split. Once fine-tuned, the model is used in inference mode to predict bounding boxes on the unlabeled images. A confidence threshold of 0.5 is applied to discard low-confidence predictions, ensuring the quality of the automatically generated annotations. This automatic labeling process results in the LVVO 3k automatically labeled dataset.
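The data handling around the fine-tuned detector can be sketched as below. These are illustrative helpers under an assumed prediction record format, not the released pipeline code; the actual fine-tuning uses a COCO-pretrained YOLOv11 model.

```python
import random

def train_val_split(items, val_fraction=0.2, seed=0):
    """Shuffle and split the manually labeled frames, mirroring the
    80% training / 20% validation split used for fine-tuning.
    The fixed seed is an assumption for reproducibility of the sketch."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_val = int(len(items) * val_fraction)
    return items[n_val:], items[:n_val]

def keep_confident(predictions, threshold=0.5):
    """Discard low-confidence detections, mirroring the 0.5 cutoff applied
    to the automatically generated annotations. Each prediction is assumed
    to be a dict with a 'confidence' key."""
    return [p for p in predictions if p["confidence"] >= threshold]
```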
Combined with the manually annotated portion, it expands the LVVO dataset to a total of 4,000 labeled images, supporting further model development and evaluation. # 5 Dataset Files Description Here, the files and structure associated with the LVVO dataset are described. Three dataset variants are provided, each following a consistent internal structure: • LVVO_1k_withCategories.zip: The manually annotated subset containing 1,000 images with the associated category labels. • LVVO_1k.zip: The same 1,000-image subset as above, but with all objects treated as a single category (a generic class label: object). • LVVO_3k.zip: The automatically annotated subset containing 3,000 additional images. Each dataset archive includes the following components: • images/: Contains the image files. • labels/: Contains JSON annotation files. Each file shares the same base name as its corresponding image file, allowing one-to-one mapping. • dataset_info.json: Contains metadata including category names and their corresponding IDs, as well as mappings between image filenames and unique image identifiers. # References [1] VideoPoints, “Videopoints: Lecture video platform,” https://videopoints.org, 2025, accessed: March 20, 2025. [2] M. R. Rahman, R. S. Koka, S. K. Shah, T. Solorio, and J. Subhlok, “Enhancing lecture video navigation with AI generated summaries,” Education and Information Technologies, pp. 1–24, 2023. [3] T. Tuna, J. Subhlok, L. Barker, S. Shah, O. Johnson, and C. Hovey, “Indexed captioned searchable videos: A learning companion for STEM coursework,” Journal of Science Education and Technology, vol. 26, no. 1, pp. 82–99, 2017. [4] D. Biswas, S. Shah, and J. Subhlok, “Identification of visual objects in lecture videos with color and keypoints analysis,” in IEEE International Symposium on Multimedia (ISM). IEEE, 2023, pp. 315–320. [5] Microsoft, “Visual object tagging tool,” https://github.com/microsoft/VoTT, accessed: February 9, 2025. [6] D. Biswas, S. Shah, and J. 
Subhlok, “Visual content detection in educational videos with transfer learning and dataset enrichment,” in Proceedings of the 8th IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR), 2025, to appear. [7] G. Jocher and J. Qiu, “Ultralytics YOLO11,” 2024. [Online]. Available: https://github.com/ultralytics/ultralytics
We introduce the Lecture Video Visual Objects (LVVO) dataset, a new benchmark for visual object detection in educational video content. The dataset consists of 4,000 frames extracted from 245 lecture videos spanning biology, computer science, and geosciences. A subset of 1,000 frames, referred to as LVVO_1k, has been manually annotated with bounding boxes for four visual categories: Table, Chart-Graph, Photographic-image, and Visual-illustration. Each frame was labeled independently by two annotators, resulting in an inter-annotator F1 score of 83.41%, indicating strong agreement. To ensure high-quality consensus annotations, a third expert reviewed and resolved all cases of disagreement through a conflict resolution process. To expand the dataset, a semi-supervised approach was employed to automatically annotate the remaining 3,000 frames, forming LVVO_3k. The complete dataset offers a valuable resource for developing and evaluating both supervised and semi-supervised methods for visual content detection in educational videos. The LVVO dataset is publicly available to support further research in this domain.
[ "cs.CV", "cs.LG" ]
# 1. Introduction Open Science practices are fostered by institutions and research funders as a way to make research more collaborative, transparent, and closer to society. Among these practices is the effort to make research data useful for reuse. To achieve this goal, the FAIR principles were developed (Wilkinson et al., 2016) and consolidated (Jacobsen et al., 2020). Applying these principles to research data fosters their opening, but also creates the need to open their metadata when the data themselves cannot be shared publicly. When managing personal data from research activities, we find this latter situation: the data cannot be openly shared. A decade ago, researchers at Harvard proposed the idea of tagging personal data to give researchers a tool for knowing how to share this kind of data (Latanya Sweeney et al., 2015; Bar-Sinai et al., 2016). That project was created following applicable US laws, which required adaptation before it could be used in other legal frameworks. Years later, DANS, the Dutch national centre of expertise and repository for research data, began adapting the model (Ilona von Stein, 2017; Baxter et al., n.d.) within the framework of the European General Data Protection Regulation (GDPR) (Regulation - 2016/679 - EN - GDPR - EUR-Lex, n.d.). Although this project was never completed, certain projects came out of this idea (Sansone et al., 2017; Alter et al., 2020). It was also the predecessor of ours. The library at the University, known as CRAI (Centre de Recursos per l’Aprenentatge i la Investigació), currently provides support for managing research data, especially in developing data management plans and in publishing data in the consortium repository, CORA.Repositori de Dades de Recerca (CORA.RDR). Until now, this repository has not allowed the deposit of personal data, and researchers often ask how to manage and keep personal data safely. 
These were the two main reasons to develop the current work and to continue what DANS started, inspired by the American DataTags. Initially we used the GDPR as the legal foundation to build our tools, but once we involved the Data Protection Office, we focused on the national implementation of the GDPR, because national law can introduce differences between the Member States of the EU. The European Regulation allows the Member States of the EU to complete its provisions, which Spain did through Organic Law 3/2018 of December 5, on Personal Data Protection and the guarantee of digital rights (BOE-A-2018-16673-Consolidado LOPDGDD, n.d.). This work can be divided into two key phases. The first phase involved designing a decision tree (see Figure 1) and defining the data tags, providing researchers with a practical tool to assess the nature of the data they handle. This phase also demonstrated that the FAIR principles (Findable, Accessible, Interoperable, and Reusable) can still be upheld even when certain data must remain closed due to security and privacy concerns. The decision tree serves to uphold the principle of “as open as possible, but as closed as necessary”, challenging the misconception that non-open data cannot adhere to FAIR principles. We aim to show that open science can be practiced responsibly, closing sensitive data only when necessary. The second phase focuses on the implementation of the necessary security and precautionary measures in research data repositories. Our next step is to work on integrating these data tags into the CORA.RDR, ensuring that the appropriate safeguards are in place to protect sensitive data while maintaining its accessibility for research purposes. # 2. Legal Framework The main legal framework for the protection of personal data, including in research, is the General Data Protection Regulation. 
Though the GDPR sets a very high standard for data protection, it also contains important provisions that accommodate the unique needs of scientific research and balance the protection of personal data with the advancement of knowledge. One of the important features of the GDPR, in research, is its flexibility. Article 9 explicitly recognizes the importance of scientific research and allows the processing of special categories of personal data, under certain conditions, without explicit consent. For instance, personal data may be processed when research is in the public interest, provided that appropriate safeguards, such as pseudonymization or anonymization, are implemented to reduce risks for individuals. The GDPR also allows personal data collected for one purpose to be reused for compatible research purposes, provided that such use respects the principles of data minimization and purpose limitation, as outlined in Article 5. The GDPR specifically addresses Special Categories of Data in Article 9. These include data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, as well as the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health, or data concerning a natural person’s sex life or sexual orientation. The processing of such data is generally prohibited unless specific conditions are met, such as obtaining explicit consent from the data subject or if the processing is necessary for scientific research purposes based on Union or Member State law, subject to appropriate safeguards to protect the rights and freedoms of the data subjects. In Spain, the GDPR is complemented by Organic Law 3/2018, of December 5, on Personal Data Protection and the Guarantee of Digital Rights (LOPDGDD), which fills critical gaps and introduces more flexible measures in certain areas. 
The LOPDGDD tailors the GDPR to the Spanish context, providing detailed regulations for processing health data in scientific research. The LOPDGDD aligns with the GDPR by allowing the processing of health data for research without explicit consent under specific conditions, such as when the research is carried out in the public interest. However, it imposes additional safeguards, including stricter requirements for pseudonymization, encryption, and access control. Moreover, the LOPDGDD mandates that data protection impact assessments (DPIAs) be conducted for research projects involving sensitive data in the cases laid down by Article 35 of Regulation (EU) 2016/679 or in those established by the supervisory authority. One area where the LOPDGDD introduces further specificity is in the retention and reuse of data for research purposes. While the GDPR allows data to be reused for compatible purposes, the LOPDGDD explicitly requires that researchers establish clear protocols for ensuring compliance with data minimization and proportionality principles. It also defines additional restrictions for certain types of research data, requiring explicit legal or ethical justifications to override the rights of individuals. # 3. Methods and Procedures The reason for developing this work has been firstly the need for a standardised procedure that facilitates the reuse of research data to contribute to responsible open science, where the guarantee of privacy rights goes hand in hand with compliance with FAIR principles. Therefore, our thought process was to investigate previous works such as those previously discussed in Harvard and DANS, and to take up their projects more specifically for the Spanish legal framework. The development of the labels went hand in hand with the development of the decision tree (see Figure 1). It was a process of optimisation of both parts (tags and tree), which has resulted in a tree with a total of 7 possible outcomes. 
This study seeks to answer two key research questions: What criteria are used in the labeling system to classify data based on its sensitivity, and what specific consequences and precautions must be taken according to the assigned tag? To address these questions, a decision tree (see Figure 1) based on the GDPR and the LOPDGDD and a table (see Table 1) outlining the consequences and precautions associated with each type of tag have been created during the development of the project. The procedure for the user of the tool consists of reviewing the decision tree to analyse how tags are assigned based on the nature of the data and their legal use, as well as examining a table of consequences to identify recommended actions and precautions for each type of data tag. # 4. Classification of data by tags As we aim to create a useful and efficient tool for research and technical staff involved in data management, we have tried to find the optimal balance: not asking too many questions in the decision tree, but just enough to classify the data correctly. We applied the same idea to the creation of the tags: rather than generating a large number of tags, each with its own specific characteristics, which could become unmanageable or impractical, we generated just enough of them to correctly separate sets of data that would otherwise have to be closed in a more restrictive way. Our proposal, which achieves this optimisation, is the following: Blue tag: Non-personal data. Green tag: Personal data. The publication of the dataset needs to indicate (a) whether the participants were informed that the data would be made available to other researchers or (b) whether consent was obtained that the data could be re-used for other research projects in a particular research area by indicating this area. 
Yellow tag: Personal data requiring the intervention of the data depositor (we understand the data depositor to be the person responsible for the processing of the data). The intervention of the data depositor is required to assess whether the re-use complies with Article 5.1b of the GDPR and Recital 50 of the GDPR. Orange tag: Personal data relating to health or genetics where consent for re-use is available under certain conditions. Intervention by the data depositor is required to assess whether the re-use complies with section 2a of additional provision 17a of the LOPDGDD, considering the consent given by the subject for the data to be reused for other research projects in a general area linked to a medical or research speciality. Purple tag: Special categories of personal data other than those related to health or genetics, where consent for re-use is available under certain conditions. Intervention of the data depositor is required to assess whether the re-use of the data complies with Recital 33 of the GDPR and Article 9.2a of the GDPR, considering the consent given by the subject that the data may be reused for other research projects in a particular area of research. Red tag: Personal data relating to health or genetics where consent for re-use is not available. Intervention by the data depositor is required to assess whether the re-use complies with sections 2c or 2d of additional provision 17a of the LOPDGDD. No tag possible: This is an end of the decision tree that indicates that the nature of the data is so complex that a prior review of the specific case by the Data Protection Officer of each institution is necessary. The difference between the orange and the purple tag lies in the scope of the consent for re-use given by the participants in the original project. The orange tag refers to medical or research specialities, the purple to other research areas. 
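A minimal sketch of the assignment logic implied by these tag definitions follows. The predicate names are assumptions for illustration; the authoritative logic is the decision tree in Figure 1, and any case not covered by the textual definitions must go to the Data Protection Officer:

```python
def assign_tag(is_personal, is_health_or_genetic, is_special_category,
               has_reuse_consent):
    """Simplified tag assignment derived from the textual tag definitions."""
    if not is_personal:
        return "blue"
    if is_health_or_genetic:
        # Orange requires consent assessed under additional provision 17a
        # LOPDGDD; red covers health/genetic data without consent for re-use.
        return "orange" if has_reuse_consent else "red"
    if is_special_category:
        # Purple requires consent assessed under Recital 33 and Article 9.2a
        # GDPR; cases not covered here are escalated to the DPO.
        return "purple" if has_reuse_consent else "no tag possible"
    # Ordinary personal data: green when re-use was covered when informing
    # participants, yellow when the depositor must assess compatibility.
    return "green" if has_reuse_consent else "yellow"
```

In practice the depositor-intervention tags (yellow, orange, purple, red) trigger a human assessment step rather than an automatic decision.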
The reason for differentiating between these two tags was to avoid displaying, at the end of the decision tree, a single message that would have to explain both criteria depending on the type of data being deposited. # 5. Implementation for research data repositories One of the goals of our work was to implement the model in actual repositories that could provide open metadata while securing access and storage for research personal data according to Article 32 of the GDPR. To ensure that research data repositories comply with data protection regulations and adequately safeguard research data, we have classified the requirements into four key areas. These areas help determine the necessary safeguards and actions based on the sensitivity of the data: Identification and Authentication: Refers to the process of validating the identity of users accessing the data repository. Depending on the sensitivity of the dataset, authentication may not be required (public access) or more complex systems may be implemented, such as repository registration, passwords, two-factor authentication, and even validation by IP address to ensure that only authorised users have access. Read and Download Permissions: Establishes who has the right to view or download data from the repository. This ranges from unrestricted public access to permissions granted exclusively to registered users, which in some cases require explicit approval from the data depositor. For more sensitive data, downloads may be password-protected or even disabled completely. Storage and Transmission: This refers to measures to protect data during storage in the repository and during transmission between systems. This ranges from unencrypted data (for low-risk tags) to the use of advanced encryption algorithms and double encryption for sensitive data. Transmission should always be through secure channels, such as encrypted connections, to prevent unauthorised access. 
Encryption Key Storage: Describes strategies for protecting the keys used to encrypt data. For more sensitive data, the keys must be stored separately from the data in the repository. In highly sensitive cases, a distributed model is implemented, where one key is managed by the repository and another by a trusted third party, ensuring maximum security even in the event of a breach. Table 1. The blue to red model for tags categorizes datasets based on their risk levels. Datasets with no associated risks fall under the blue tag, while increasing risk levels demand stricter data protection measures and more complex safeguards, with the red tag assigned to datasets of the highest sensitivity and risk. While it may seem that all measures are the same for the orange and the purple tag, the difference lies in the organizational measure regarding approval, as the depositor will have to consider different criteria. # 6. Discussion The classification of research data using data tags offers a practical and compliant solution for managing sensitive data (i.e. special categories of data according to the GDPR). Each tag provides a specific framework to help researchers and data controllers comply with legal and ethical obligations. The implementation of data tags is essential to properly manage the risks associated with the processing of research data. It also provides a standardized methodology that could facilitate future audits and compliance reviews. The original DataTags project was created with the idea of being implemented in a Dataverse environment. Our consortium repository uses such an environment, and the project has already been presented for its deployment there. We hope it will soon be available to researchers, along with the decision tree. Our aim is to improve the reuse of research data while keeping personal data safe when needed, following the motto of “as open as possible, as closed as necessary”. 
FAIR data and responsible open science are fully compatible with robust security measures, ensuring the protection of sensitive data while enabling data sharing and reuse. The goal of this work is to provide a standardized tool to facilitate the identification, classification, and subsequent management of research data. Future work includes integrating this tagging system into CORA.RDR, the institutional research data repository. # 7. Decision tree Figure 1. Decision tree for the classification of personal data. This diagram guides researchers and depositors in assigning tags to datasets containing personal data based on their conditions for reuse and compliance with the General Data Protection Regulation (GDPR) and Spanish law. The color-coded tags (blue, green, yellow, orange, purple and red) indicate different legal bases and limitations for the secure storage, access, and reuse of the data in research contexts. # References Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Alter, G., Gonzalez-Beltran, A., Ohno-Machado, L., & Rocca-Serra, P. (2020). The Data Tags Suite (DATS) model for discovering data access and use requirements. GigaScience, 9(2). https://doi.org/10.1093/gigascience/giz165 Bar-Sinai, M., Sweeney, L., & Crosas, M. (2016). DataTags, data handling policy spaces and the Tags language. Proceedings - 2016 IEEE Symposium on Security and Privacy Workshops, SPW 2016, 1–8. https://doi.org/10.1109/SPW.2016.11 Baxter, R., Thomas, E., & Tjalsma, H. (n.d.). Using DataTags to classify personal data under GDPR. BOE-A-2018-16673-Consolidado LOPDGDD. (n.d.). Ilona von Stein. (2017). First GDPR datatags results presented in workshop. 
https://dans.knaw.nl/en/news/first-gdpr-datatags-results-presented-in-workshop/. Jacobsen, A., Azevedo, R. de M., Juty, N., Batista, D., Coles, S., Cornet, R., Courtot, M., Crosas, M., Dumontier, M., Evelo, C. T., Goble, C., Guizzardi, G., Hansen, K. K., Hasnain, A., Hettne, K., Heringa, J., Hooft, R. W. W., Imming, M., Jeffery, K. G., … Schultes, E. (2020). FAIR principles: Interpretations and implementation considerations. In Data Intelligence (Vol. 2, Issues 1–2, pp. 10–29). MIT Press Journals. https://doi.org/10.1162/dint_r_00024 Latanya Sweeney, Mercè Crosas, & Michael Bar-Sinai. (2015). Sharing sensitive data with confidence: The DataTags system. Technology Science. Sansone, S. A., Gonzalez-Beltran, A., Rocca-Serra, P., Alter, G., Grethe, J. S., Xu, H., Fore, I. M., Lyle, J., Gururaj, A. E., Chen, X., Kim, H. E., Zong, N., Li, Y., Liu, R., Ozyurt, I. B., & Ohno-Machado, L. (2017). DATS, the data tag suite to enable discoverability of datasets. Scientific Data, 4. https://doi.org/10.1038/sdata.2017.59 Wilkinson, M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J. W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., … Mons, B. (2016). Comment: The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3. https://doi.org/10.1038/sdata.2016.18
Inspired by a proposal made almost ten years ago, this paper presents a model for classifying personal data for research to inform researchers on how to manage them. The classification is based on the principles of the European General Data Protection Regulation and its implementation under the Spanish Law. The paper also describes in which conditions personal data may be stored and can be accessed ensuring compliance with data protection regulations and safeguarding privacy. The work has been developed collaboratively by the Library and the Data Protection Office. The outcomes of this collaboration are a decision tree for researchers and a list of requirements for research data repositories to store and grant access to personal data securely. This proposal is aligned with the FAIR principles and the commitment for responsible open science practices.
[ "cs.CY", "cs.DB" ]
M. K. Habib and S. Wagner are with the Chair of Software Engineering, Technical University of Munich, Heilbronn, Germany (e-mail: kasra.habib@tum.de; stefan.wagner@tum.de). ORCID: M. K. Habib 0000-0002-1272-9873, S. Wagner 0000-0002-5256-8429. D. Graziotin is with the Institute of Information Systems, University of Hohenheim, Stuttgart, Germany (e-mail: graziotin@uni-hohenheim.de). ORCID: 0000-0002-9107-7681. # I. INTRODUCTION Requirements elicitation and specification is a continuous and fundamental activity in software development [1]. Despite its importance, the process is challenging, manual, and labor-intensive. A significant barrier is tacit knowledge – information held by a stakeholder but not explicitly shared with the requirements engineer [2]–[4] – which leads to incomplete requirements, a major pain point in requirements engineering [5]. Clients often struggle to translate objectives into quantifiable requirements, resulting in misunderstandings that may cause deficient or missing critical requirements [2], [6] and ultimately produce a product lacking essential functionalities [7]. Natural language requirements lack structured syntax, leading to ambiguity, complexity, and vagueness, which makes understanding difficult for all stakeholders [1], [8]. There is research leveraging AI to explore improving elicitation and specification activities. Earlier approaches, however, were constrained by the limitations of traditional AI techniques: often narrow in focus, lacking cross-domain knowledge, requiring extensive and carefully structured input, and struggling to capture the complexities of human communication [9]. As a result, these tools provided only limited support in real-world scenarios [10]. Recent advances in AI enable the understanding of complex context, relationships, and domain knowledge, offering opportunities to proactively support requirements engineers. 
In this study, we focus on an AI-assisted requirements generation approach and concentrate on the early phases of requirements engineering: elicitation and specification, with the potential for future work to expand into a broader range of requirements-related tasks. Whereas elicitation implies extracting unexpressed requirements, we define generation as the creation of requirements without prior confirmation of their alignment with stakeholder needs. To support this approach, we propose using large language models (LLMs) to generate software requirements. LLMs trained on large datasets offer a broad cross-domain knowledge base that can support requirements elicitation and specification [11], [12]. However, general-purpose LLMs might require fine-tuning as they are not specifically designed to generate authentic and adequate requirements, which is essential for overcoming the labor-intensive manual process and ensuring adherence to established requirements engineering standards. We consider generated requirements to be authentic if they are indistinguishable from those written by humans in terms of clarity, coherence, relevance, realism, and implementability. Furthermore, by adequate, we refer to four dimensions in AI-generated requirements: (1) ISO 29148-compliant [13], (2) consistent with, (3) missing from, and (4) enhancing the overall completeness of, a given requirements specification. With that in mind, we introduce ReqBrain (Requirements Brain), a fine-tuned LLM and tool to generate authentic and adequate requirements to support the elicitation and specification phases of requirements engineering. To achieve ReqBrain, we employ task-specific instruction tuning. We prefer fine-tuning over prompt engineering due to its ability to improve LLM performance on software engineering tasks and enhance context-specific performance [14]. 
It enables models to internalize task nuances, increasing usability for non-experts [15] and reducing computational overhead [16]. Moreover, it addresses limitations such as prompt length restrictions [16], the risk of knowledge conflict [17], and reliance on advanced domain expertise [18] to generate requirements. Our objective is to assess how fine-tuning affects large language models (LLMs) in generating authentic and adequate requirements. To pursue this objective and explore the potential of AI-assisted requirements generation with ReqBrain, we investigate the following research questions: # RQ1: How effectively does a fine-tuned large language model generate authentic requirements? To address this research question, we split it into the following actionable sub-questions: RQ1.1: Which fine-tuned large language model has the highest potential to generate authentic requirements? We benchmark several LLMs, after fine-tuning, using automated NLP metrics. It is crucial to select the model with the highest potential for authentic requirements to reduce the need for exhaustive human evaluation across different models. # RQ1.2: Does fine-tuning improve the generation of authentic requirements? We aim to understand whether fine-tuning can match or exceed the performance of untuned general-purpose commercial models, in particular ChatGPT-4o, for authentic requirements generation. # RQ1.3: Are generated requirements perceived as authentic by humans? Human evaluators assess whether ReqBrain generates requirements that are indistinguishable from those authored by humans. This is crucial because achieving human quality standards is fundamental for establishing trustworthiness, user confidence, and integration in development processes. # RQ2: How effectively does a fine-tuned LLM generate adequate requirements? Human evaluators assess whether a fine-tuned LLM, ReqBrain, generates adequate requirements. 
Ensuring that ReqBrain meets the four dimensions of adequate requirements – ISO 29148-compliant, consistent with, missing from, and enhancing the overall completeness of a given requirements specification – is critical for generating initial high-quality requirements, saving structuring effort and time to specify requirements unambiguously, and preventing costly development issues due to incomplete specifications by identifying potential gaps. Our work contributes to advancing AI-assisted requirements generation by providing: 1. A novel method and tool for the generation of authentic and adequate requirements. 2. An open ‘instruct’ dataset to support further development and evaluation. 3. Open-source fine-tuned LLMs that enable continual learning and domain adaptation. Knowledge conflict occurs when the model’s pre-existing training data causes it to interpret a provided instruction or concept differently than intended. ‘Instruct’ refers to instructions, with each training instance comprising commands and the expected output, as detailed in Section IV-B. Organization: The rest of the paper is organized as follows: Section II provides background, Section III related work, Section IV presents requirement generation with ReqBrain, Section V describes the evaluation methodology, Section VI presents results and a discussion, Section VII explores implications, Section VIII discusses threats to validity, and Section IX presents conclusions and future work. # II. BACKGROUND This section defines key concepts and describes ISO 29148-compliant requirements relevant to our work. # A. Task-Specific Instruction Tuning Task-specific instruction tuning enhances large language model (LLM) performance on specific tasks by using targeted instructions; in our study, these instructions are about writing requirements. A key benefit of this technique is its ability to reduce data and computational costs while maintaining or improving model effectiveness [19]. 
This supervised fine-tuning method represents the instruct dataset as $D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, where each pair $(x_i, y_i)$ consists of an instruction $x_i$ and its corresponding ground truth output $y_i$ (also referred to as completion), with $x_i \in X$ and $y_i \in Y$. During the forward pass, the dataset $D$ is input into the language model, producing predicted outputs $\hat{Y} = \{\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n\}$. Then, each pair $(y_i, \hat{y}_i)$ is compared, and gradients are iteratively computed to optimize the model and align its outputs with the ground truth in terms of syntax and semantics. AI-assisted requirements generation can benefit from this fine-tuning method for the automation of various requirements-related tasks. In our case, $x_i$, $y_i$, and $\hat{y}_i$ are text sequences. For example, $y_i$ is a human-written requirement derived from a real-world project. Based on this requirement, the corresponding instruction $x_i$ might be: “Write a functional requirement for a car’s ABS.” and $\hat{y}_i$ is the model-generated requirement intended to match $y_i$. # B. ISO 29148-Compliant Requirements ISO/IEC/IEEE 29148:2018 [13] provides guidelines for eliciting and specifying high-quality textual requirements in natural language for system and software engineering. We incorporate these guidelines to select high-quality, human-authored requirements to ensure that the training dataset reflects real-world language and the nuanced complexities of industry requirements. The subject of ISO 29148-compliant requirements is expansive, and complete coverage of the standard is beyond the scope of this paper. 
Instead, we limit our scope to requirements that employ the recommended syntaxes below and specific signaling keywords, namely, shall, should, may, and will.

SYNTAX-1 : [Subject][Action][Constraint]
SYNTAX-2 : [Condition][Subject][Action][Object][Constraint]

# III. RELATED WORK

Studies that focus directly on the generation of requirements using LLMs are still limited, leaving a clear research gap: there is no systematic approach that internalizes requirements-engineering-specific knowledge by fine-tuning LLMs to generate authentic and adequate requirements supporting the simultaneous elicitation and specification of user requirements through broad cross-domain knowledge. Additionally, there is a lack of proper human evaluations to validate the requirements generated by these models. A related study by Arora, Grundy, and Abdelrazek [18] explores LLM generation across all requirements engineering stages, highlighting use cases in elicitation, specification, and validation, supported by a SWOT analysis and preliminary experiments. They emphasize that prompt design critically impacts output quality in prompt-based LLMs, often leading to inconsistent or overly generic requirements, a limitation experimentally demonstrated in other domains where fine-tuning yields more reliable outputs [20]. Similarly, Ronanki, Berger, and Horkoff [21] evaluate ChatGPT’s potential to generate requirements through controlled experiments. They crafted six elicitation questions and presented them to ChatGPT and human experts. They then compared ChatGPT’s outputs with those of the human experts based on abstraction, atomicity, consistency, correctness, unambiguity, understandability, and feasibility. Results show that ChatGPT outperformed human experts in all aspects except unambiguity and feasibility.
While LLMs are rich in knowledge, they lack the nuanced, domain-specific understanding needed for authentic and adequate requirement formulation, a limitation that fine-tuning can address [22], [23]. While earlier studies have generally explored generating requirements with LLMs, the study by Voria et al. [24] introduces RECOVER, a pipeline that automatically generates system requirements from stakeholder conversations. The pipeline works by classifying parts of the conversation as requirements segments, cleaning the selected segments, connecting related ideas in the conversation, and generating requirements using the LLaMA-2 model. Their results show vulnerability to hallucinations during generation, a known challenge in prompt-driven pipelines that lack explicit domain knowledge internalized through fine-tuning, resulting in knowledge conflict or fluent but unfaithful outputs [22]. In contrast, AI-assisted requirements generation with ReqBrain addresses these limitations directly. ReqBrain is not tied to a specific technique, an initial set of requirements, stakeholder conversations, interviews, or pre-acquired data. When such data is available, ReqBrain can also be used to extract and generate requirements from it. We fine-tune ReqBrain to generate authentic and adequate requirements using its internal knowledge, and we are the first to investigate the effect of such tuning by employing a systematic approach. LLMs like ReqBrain can encourage dynamic, interactive engagement between requirements engineers and stakeholders, simulating stakeholder perspectives to generate missing requirements and address tacit knowledge gaps while simultaneously specifying the requirements. However, domain experts retain the final decision on accepting, rejecting, or modifying the generated requirements to ensure they align with project-specific ethics, needs, and constraints.
Our contribution directly targets this research gap: the absence of a fine-tuned LLM and a systematic approach for authentic and adequate requirements generation, and the lack of systematic human evaluations to assess such LLMs’ output quality.

# IV. REQUIREMENTS GENERATION WITH REQBRAIN

ReqBrain is a fine-tuned large language model (LLM) trained to generate authentic and adequate requirements from its internal knowledge, supporting the AI-assisted requirements generation approach. Fig. 1 shows an overview of how a user – such as a requirements engineer, project manager, end user, or customer – can generate an initial set of requirements through a simple instruction prompt. In this case, a user might start by specifying a target system (e.g., “Anti-lock brake system”) for which the user wants to generate initial requirements. Then, the user can prompt for further clarification about the generated requirements set, addressing tacit knowledge by posing additional questions (e.g., “What is a CAN bus?”), and might continue by concentrating on generating specific requirement types (e.g., “safety requirements”) to expand the requirements set. This process can continue until the requirements set fully meets the user’s objectives. In addition to that scenario, a user might take a different approach to generating requirements for a target system by inputting a bug or ticket description and prompting ReqBrain to extract core requirements and generate additional ones. Similarly, users can input concrete requirements to ReqBrain and ask for missing ones, or enable ReqBrain with retrieval-augmented generation (RAG) to connect it to an internal management system, such as Jira or local Git deployments. In Fig. 2, we present an overview of ReqBrain’s integration into the requirements engineering process based on the software engineering body of knowledge (SWEBOK) activities [25].
To support such capabilities, we incorporate a set of targeted training tasks into the training dataset to improve ReqBrain’s performance in generating authentic and adequate requirements:

How-to? Instructions (INST): This task is designed to enhance the model’s proficiency in using ISO 29148-compliant syntax and signaling keywords to generate requirements that align with industry standards.

RE-types INST: This task builds on the previous one to broaden the model’s understanding of different requirement classes, such as non-functional security, usability, or functional, assisting the model in accurately differentiating and generating various types of requirements when a user prompts the model.

Missing INST: This task simulates scenarios where an incomplete set of requirements, or a piece of information containing requirements, is provided to the model, and the model is tasked to identify missing requirements or extract requirements from the provided information while considering the input information/data.

Fig. 1. Example dialog with ReqBrain. The user asks: “I want three software requirements for a car Anti-lock Braking System (ABS).” ReqBrain answers: “1. The ABS system shall detect a wheel spin and send signals to the actuators to release and reapply the brakes to all wheels except the spinning wheel. 2. The ABS system shall respond in 50 ms or less to wheel spin detection. 3. The ABS system shall maintain communication between sensors and actuators through CAN bus.” Asked what a CAN bus is, ReqBrain explains: “Sure! In automotive engineering, a CAN (Control Area Network) bus is a communication system that connects various components of a car’s control systems. Each message on the bus contains an identifier and one or more bytes of data. The ABS is just one subsystem in the car that uses the CAN bus to communicate with other control systems such as engine, transmission, and wheel speed sensors. Requirement 3 specifies that the ABS system shall use the CAN bus to send and receive messages related to braking functions.” The user continues: “Awesome, let’s focus on safety requirements. Help me find three safety requirements related to ABS.”

# A. Assisting Elicitation and Specification with Generation

SWEBOK [25] defines requirements development in four phases: elicitation, analysis, specification, and validation, which are performed in a sequence iteratively. Our focus is primarily on how generation can assist in the elicitation and specification phases. As illustrated in Fig. 2, generation can be used to simultaneously produce requirements compliant with established specification standards, e.g., ISO 29148, during the elicitation phase, effectively merging elicitation with specification. This reduces manual overhead, shortens iteration cycles, and ensures that early requirements are already in a usable form. Rather than treating elicited data as raw input that needs to be transformed later, generation allows for real-time, semi-automated creation of clear and actionable requirements. By reducing ambiguity early on, generation shifts the analysis from the second to the third phase, interpreting unclear inputs to make higher-level decisions, such as accepting, rejecting, or modifying generated requirements.

While other generative approaches are possible, LLMs provide promising support for requirements elicitation and specification. LLMs trained on large datasets offer a broad cross-domain knowledge base that supports requirements elicitation and specification [11], [12], where a lack of domain knowledge challenges requirements engineers. Furthermore, LLMs can process large volumes of domain-specific information, such as legacy documentation, Jira, or Git, to generate in-context requirements and save time and effort. Additionally, LLMs can encourage requirements engineers and stakeholders to engage in a dynamic, interactive process to shape and refine authentic requirements [18]. They can also simulate stakeholder perspectives [18] to generate missing requirements and address tacit knowledge gaps.
Despite the potential for AI-assisted requirements generation in the elicitation and specification phases, human expert involvement and review remain essential during analysis and validation to ensure alignment with project goals, ethical considerations, emotional intelligence, and contextual understanding.

# B. Training Dataset

To address the absence of a pre-existing dataset for requirements generation, we created an instruct dataset. As established in Section II-A, an instruct dataset comprises training records, each represented as a pair $(x_i, y_i)$, where $x_i$ denotes an instruction and $y_i$ its corresponding ground truth output (a human-authored requirement or set of requirements), also referred to as the completion. First, we describe the process of collecting and selecting the requirements $(y_i)$; the creation of the instructions $(x_i)$ is discussed next.

1) Requirements Selection: We gathered requirements from the Software Requirements Dataset4 (SwaRD), which will soon be released as part of another study. SwaRD consolidates publicly disclosed software requirement artifacts from the internet along with non-disclosed requirements from our industry partners. It includes various types of requirements, such as user stories and acceptance criteria. For this study, we filtered the ISO 29148-compliant requirements from SwaRD. Although these requirements are labeled as ISO 29148-compliant, we found gaps in their compliance upon closer inspection. The first author manually reviewed and selected the compliant requirements, as outlined in Section II-B. While this process is not fully replicable, the selected requirements will be made available within a replication package, allowing interested readers to load the dataset and assess their quality independently.

4The publicly available requirements datasets sourced from within SwaRD are acknowledged for their contributions: [26]–[36].

Fig. 2. AI-assisted requirements generation approach overview, integrating ReqBrain.

2) Instruction Creation: To create instructions $(x_i)$, we followed established practices and guidelines from the documentation of Hugging Face and OpenAI. For each pair $(x_i, y_i)$, we reviewed the requirement $(y_i)$ and created a context that includes supporting information about its intent, class (e.g., functional or non-functional), and ISO 29148 compliance (see Section II-B). We then incorporated this context to craft the instruction $(x_i)$, paired it with $y_i$, and added the pair to our dataset, $D$. Our templates for writing instructions are given below:

Instruction Template-1 : [Context + Instruction]
Instruction Template-2 : [Instruction + Context]

To realize each of the three targeted training tasks described earlier, we create a corresponding instruction category. First, for instructions in the How-to? INST task, we included syntax details such as the correct placement of constraints, conditions, subjects, or signaling keywords, enabling the model to learn the structure of requirements. Next, for instructions in the RE-types INST task, we added information about requirement classes using the relabeled PROMISE dataset [33], known for its quality and widespread use in requirement classification studies. Finally, for instructions in the Missing INST task, we grouped the selected requirements by their original software projects and split them into two groups: one used in the instruction to simulate an incomplete set of requirements, and the other serving as completion labels. Table I presents a single instance from each task category of our dataset to illustrate their distinctions and provide an overall understanding.

3) Training and Evaluation Sets: Our instruction dataset comprises 166 training instances. Together, these instances cover a total of 242 individual requirements.
Each instance is a training record that combines an instruction with its corresponding completion. A training record may include multiple requirements, as demonstrated by the Missing INST record in Table I. The dataset is organized into columns containing metadata about the collected requirements, as outlined in Table II. Each column serves a specific purpose to ensure that all necessary components are available for efficient model training. We applied a stratified split based on the targeted task categories, allocating $80\%$ of the dataset for training and $20\%$ for evaluation. This method ensures a balanced representation of instruction categories across both sets.

4) Why not a larger training set?: Creating an extensive training set manually is time-intensive; however, our fine-tuning approach reduces the need for large datasets. For example, Chen et al. [19] demonstrated that task-specific instruction-tuned models achieve significant performance with only a small fraction of a dataset. Moreover, fine-tuning an LLM on a large dataset for a specific task can distort its pre-trained weights, leading to catastrophic forgetting and underperformance [37], [38]. Several studies, including the influential OpenAI paper on GPT-3 [39] and [40], suggest that LLMs require only a few high-quality examples to learn a new task. Therefore, instead of creating a large training set, we selected 242 high-quality requirements that best align with our definition of ISO 29148-compliant requirements from Section II-B.

# C. Fine-Tuning

Training large language models (LLMs) involves two main steps: pre-training and fine-tuning. During pre-training, models are exposed to vast text corpora without task-specific labels or annotations [11], [41], [42], enabling them to learn general linguistic patterns and structures unsupervised. However, pre-training is costly in terms of computational resources, time, and data requirements, making it less feasible for each new task.
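The stratified 80/20 split from Section IV-B3 can be sketched in plain Python; the toy records and seed below are illustrative, not the paper's actual dataset:

```python
import random
from collections import defaultdict

def stratified_split(records, key, train_frac=0.8, seed=0):
    """Split records while preserving the proportion of each task category."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec)
    rng = random.Random(seed)
    train, evaluation = [], []
    for category, members in groups.items():
        rng.shuffle(members)
        cut = round(len(members) * train_frac)
        train.extend(members[:cut])
        evaluation.extend(members[cut:])
    return train, evaluation

# Toy dataset mimicking the three instruction categories.
data = [{"task": t, "id": i} for i, t in enumerate(
    ["How-to"] * 10 + ["RE-types"] * 10 + ["Missing"] * 10)]
train, ev = stratified_split(data, key="task")
```

Splitting each category separately, rather than the pooled dataset, is what guarantees that all three instruction categories appear in both the training and evaluation sets.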
Fine-tuning refines a pre-trained model’s representations for a downstream task by updating the pre-trained weights $\Phi_p$ to $\Phi_p + \Delta\Phi_p$, following gradients to maximize the conditional language modeling objective [43]. Fine-tuning is less costly than pre-training because it uses the pre-trained model as a base. However, it has a limitation: it learns the complete set of parameters $\Delta\Phi_p$, whose dimension $|\Delta\Phi_p|$ equals $|\Phi_p|$. Parameter-efficient fine-tuning methods, such as low-rank adaptation (LoRA) [43], address this limitation by reducing the number of parameters updated during training, thereby minimizing the risk of catastrophic forgetting [44] – the loss of pre-trained knowledge during full fine-tuning [45], [46]. Minimizing catastrophic forgetting is crucial for ReqBrain, as it preserves the base model’s knowledge for the chat/dialog capabilities used to inquire about various aspects of the generated requirements.

TABLE I. Training pairs from the instruct dataset. *Ellipses are used to condense the text to save space.

TABLE II. Instruct dataset structure, columns, and description.

LoRA leverages the concept of intrinsic dimension – the minimum number of dimensions required to represent a matrix’s essential features. In deep learning, training on the intrinsic dimension (i.e., partial training) means updating only a subset, $r$, of $\Phi_p$ for the downstream task [47]. LoRA achieves this by freezing the pre-trained weights $\Phi_p$, training LoRA weights $\Delta\Phi_l$ for a weight matrix $\Phi_l \in \mathbb{R}^{A \times B}$, and decomposing the weight update matrix into two smaller matrices, as shown in equation 1.
$$ \Delta \Phi _ { l } = \Phi _ { A } \times \Phi _ { B } $$

where $\Phi_A \in \mathbb{R}^{A \times r}$ and $\Phi_B \in \mathbb{R}^{r \times B}$, with $r$ representing the intrinsic dimension, a tunable parameter that effectively reduces the number of dimensions. For inference and evaluation, the LoRA weights are added to the original frozen weights $\Phi_p$ at the end of each training round, as shown in equation 2.

$$ h = \Phi _ { p } + \Delta \Phi _ { l } = \Phi _ { p } + \Phi _ { A } \Phi _ { B } $$

# D. Selecting Models for Fine-Tuning

For fine-tuning, we focus on open-source models to enable reproducibility and to support organizations in hosting models on their own platforms for privacy. Although tuning commercial models is possible, our key contribution is sharing the dataset and open-source models to enable the models’ continual learning, collaboration, and transparency, advancing AI-assisted requirements generation. Selecting pre-trained models requires balancing performance and computational resources, which is influenced by model size. Recent studies highlight the effectiveness of 7B LLMs in achieving this balance [48], [49]. To establish a baseline and identify the best variant for generating requirements, we initially fine-tuned and compared Falcon-7b-base and its instruct variant, with the instruct variant outperforming the base model. Instruct and chat models interact similarly by providing answers through chat, whereas base models are trained to acquire diverse features, serving as a robust foundation for various tasks. Therefore, we present results for four state-of-the-art open-source instruct or chat models: Llama-2-7b-chat-hf5 [50], Mistral-7B-Instruct-v0.2 [51], Zephyr-7b-beta [52], Falcon-7b-instruct [53], and one base model: Falcon-7b.

# E. Fine-Tuning Hyperparameters Selection

Throughout our experiments, we used the Hugging Face API [54] and its models.
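Equations 1 and 2 can be illustrated with a small NumPy sketch; the matrix sizes are arbitrary toy values, not the dimensions of any 7B model, and initializing one factor to zero follows the common LoRA convention that the update starts at zero:

```python
import numpy as np

# Toy dimensions: a frozen weight matrix Phi_p of shape A x B and rank r.
A, B, r = 64, 128, 8
rng = np.random.default_rng(0)

Phi_p = rng.normal(size=(A, B))   # frozen pre-trained weights
Phi_A = rng.normal(size=(A, r))   # trainable LoRA factor
Phi_B = np.zeros((r, B))          # second factor, initialized to zero

delta = Phi_A @ Phi_B             # equation 1: low-rank update
h = Phi_p + delta                 # equation 2: merged weights for inference

# Trainable parameters drop from A*B to r*(A+B).
full_params = A * B               # 8192
lora_params = r * (A + B)         # 1536
```

The parameter count makes the efficiency argument concrete: even for this toy matrix, training the two factors touches fewer than a fifth of the entries of the full update, and the gap widens rapidly as $A$ and $B$ grow.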
For all models, we employed LoRA with $r = 64$, as supported by [43], which shows that a low-rank adaptation matrix with $r = 64$ effectively captures essential weight update information while ensuring competitive performance and computational efficiency. Based on [55], we opted for a learning rate of $2e{-4}$ with a cosine scheduler, which balances stability and efficiency by gradually adjusting the learning rate to facilitate stable convergence and mitigate premature stagnation or divergence. For the remaining parameters, we used the original hyperparameters from the base model, as documented in the Hugging Face model’s documentation.

# V. EVALUATION

This section outlines the evaluation methodology and study design used to assess ReqBrain’s performance.

5As of the training and evaluation period, LLaMA had no instruct-tuned version available.

# A. Study Design

Our objective is to assess how fine-tuning affects large language models (LLMs) in generating authentic and adequate requirements. To achieve our objective, for RQ1.1 and RQ1.2, we used a standard NLP study design. We conducted a between-subjects study for the remaining research questions to minimize biases such as carryover or learning effects [56], [57]. By exposing participants to only one condition, the design ensured independent evaluations free from the influence of prior conditions. This independence was crucial for nuanced judgments when comparing ReqBrain-generated requirements against its untuned baseline model and against human-authored requirements, or when assessing generated requirements for consistency under a single condition [56]. Where applicable, we followed the evaluation guidelines for empirical studies involving LLMs in software engineering by Wagner et al. [58], reporting model roles, versions, hyperparameters, and hosting details as recommended.
Furthermore, all participants provided informed consent prior to their involvement, acknowledging their understanding of the study’s purpose, the nature of their participation, and their right to privacy and anonymity. Participants were asked to bring their laptops to the session, where the study objectives and background information (including knowledge refresher material from ISO 29148) were outlined. They were introduced to three evaluation datasets and their structure. At the end of the session, an evaluation package containing essential information and assurances of privacy and anonymity was distributed. Before concluding, participants evaluated a few random requirements from each task to confirm their understanding of the process.

# B. Tasks

We address our research questions through the following tasks. All tasks, except Task A, are evaluated by human participants.

1) Task A: Within this task, we benchmark the performance of the five fine-tuned LLMs to identify the potential best-performing model in generating authentic requirements for RQ1.1, reducing exhaustive human evaluation across various models for the subsequent questions. Then, we compare the selected model, ReqBrain, with the untuned ChatGPT-4o-latest6 to determine whether commercial models designed for general tasks can match or exceed the performance of our fine-tuned model for RQ1.2.

2) Task B: This task compares the requirements generated by ReqBrain with those produced by its untuned baseline model, addressing authenticity in RQ1.3 and the ISO 29148-compliance dimension of adequacy in RQ2.
To determine authenticity, participants evaluated how indistinguishable the generated requirements are from those written by humans, focusing on clarity, coherence, relevance, realism, and implementability. For adequacy, we considered the qualities defined for ISO 29148-compliant requirements in Section II-B. Human participants evaluated requirements from both models, knowing the set contained a mix of human-authored and AI-generated requirements. We assume a positive fine-tuning effect if participants frequently judge ReqBrain-generated requirements as human-authored.

3) Task C: Building on Task B, participants assess authenticity in RQ1.3 and the ISO 29148-compliance dimension of adequacy in RQ2 between ReqBrain-generated and human-authored requirements.

4) Task D: We focus on the three remaining dimensions of adequacy in RQ2. We input requirements specifications from real-world projects to ReqBrain and task participants to evaluate whether the generated requirements are consistent with, missing from, and enhance the overall completeness of the given requirements specification.

# C. Evaluation Materials

A comprehensive evaluation set was created for each specific task to assess performance and ensure accurate measurement of outcomes.

1) Benchmark Datasets: For Task A, we generated requirements with each fine-tuned LLM and untuned ChatGPT-4o by inputting the instructions corresponding to the human-authored requirements from our evaluation set detailed in Section IV-B3. Each human-authored requirement is paired with its corresponding LLM-generated requirement for each of the evaluation sets, resulting in a total of six benchmarking datasets.

2) ReqBrain vs. Baseline Model Evaluation Dataset: For Task B, we input the instructions from our evaluation set to ReqBrain and its untuned baseline model to generate requirements. To ensure unbiased assessment, the authorship of all generated requirements was anonymized.
The requirements were then combined and shuffled before being presented to participants for evaluation.

3) ReqBrain vs. Human-Authored Evaluation Dataset: For Task C, we combined the ReqBrain-generated requirements with their corresponding human-authored counterparts. The requirements were anonymized, shuffled, and presented in a stacked list format for direct comparison.

4) ReqBrain Usability Evaluation Dataset: For Task D, we developed a new evaluation set using requirements from three distinct software projects within KIB³ (Künstliche Intelligenz in die berufliche Bildung bringen). KIB³ is a German AI educational initiative aimed at developing innovative AI tools to support students in their studies. The selected requirements met the following criteria:

1. Formalized according to ISO 29148 guidelines
2. Elicited from diverse stakeholders, not derived from prior projects or the internet
3. Not published online, reducing the risk of inclusion in any model’s training data
4. Created through a well-documented process
5. Open-source (KIB³), allowing requirements to be published for transparency and reproducibility

Three software projects were selected: students’ self-evaluation software, adaptation software, and chatbot software. For each project, we created instructions incorporating its requirements and provided them to ReqBrain, which generated additional requirements. The generated requirements were paired with their corresponding instructions and presented to participants for evaluation. In this task, the authorship of the requirements was not concealed, enabling participants to evaluate the generated requirements in full context.

# D. Hypotheses, Variables, and Operationalization

The variables used to measure the authentic and adequate constructs are provided in Table III.
To assess how closely AI-generated text aligns with human-authored ground truth in semantics, fluency, coherence, factual accuracy, and originality, we use the established and automated Human Alignment (HA) metrics, the BERT and FRUGAL scores. Although the FRUGAL score is more powerful, the BERT score is more intuitive; therefore, we computed both. Furthermore, the BERT and FRUGAL scores are learned metrics [59], [60] that are preferred over traditional metrics such as BLEU, ROUGE, or TER [61]–[63], which emphasize surface-form similarity and are often not suitable for computing human alignment [64]–[66]. We distinguish comparisons (1) between requirements generated by ReqBrain and those produced by its untuned baseline model and (2) between human-authored requirements and those generated by ReqBrain. In the first comparison, failing to reject the null hypothesis indicates no fine-tuning effect. In the second comparison, it suggests a positive effect, as our goal is to achieve human-comparable qualities. For RQ1.1 and RQ1.2, we used the Human Alignment (HA) variable to evaluate the quality of AI-generated requirements relative to their corresponding human-authored counterparts. For RQ1.3, we first compare participants’ perceptions and success rates in identifying requirements generated by ReqBrain versus its untuned baseline model as human-authored, using the Perceived Authorship (PA) variable, and formulate the following hypothesis:

# Hypothesis:

$H_{0,1}$: The proportion of generated requirements identified as human-authored is independent of whether they were generated by ReqBrain (the fine-tuned model) or its untuned baseline model.
$H_{a,1}$: The proportion of generated requirements identified as human-authored is not independent of whether they were generated by ReqBrain (the fine-tuned model) or its untuned baseline, with ReqBrain producing a greater proportion.
Second, we compare human-authored and ReqBraingenerated requirements using the following hypothesis: # Hypothesis: $H _ { 0 , 2 }$ : Humans do not reliably distinguish between human-authored and ReqBrain-generated requirements in terms of accuracy. $H _ { a , 2 }$ : Humans reliably distinguish between humanauthored and ReqBrain-generated requirements. For the ISO 29148-compliant dimension of adequacy in RQ2, we used the variables Written Syntax Compliance $( W S C )$ and Signaling Keywords Compliance $( S K C )$ to collect participants’ responses and formulated the following hypotheses between ReqBrain and its untuned baseline model: # Hypotheses $H _ { 0 , 3 }$ : Requirements generated by ReqBrain do not show greater adherence to ISO 29148 syntax compared to those from its untuned baseline model. $H _ { a , 3 }$ : Requirements generated by ReqBrain show greater adherence to ISO 29148 syntax compared to those from its untuned baseline model. $H _ { 0 , 4 }$ : Requirements generated by ReqBrain do not show greater adherence to ISO 29148 signaling keywords compared to those from its untuned baseline model. $H _ { a , 4 }$ : Requirements generated by ReqBrain show greater adherence to ISO 29148 signaling keywords compared to those from its untuned baseline model. Next, we compare ReqBrain-generated with humanauthored requirements using the following hypotheses: # Hypotheses: $H _ { 0 , 5 }$ : Human-authored and ReqBrain-generated requirements do not differ in their adherence to ISO 29148 written syntax. $H _ { a , 5 }$ : Human-authored and ReqBrain-generated requirements differ in their adherence to ISO 29148 written syntax. $H _ { 0 , 6 }$ : Human-authored and ReqBrain-generated requirements do not differ in their adherence to ISO 29148 signaling keywords. $H _ { a , 6 }$ : Human-authored and ReqBrain-generated requirements differ in their adherence to ISO 29148 signaling keywords. 
For the remaining dimensions of adequacy in RQ2, we used the variables Consistent with Requirements Set (CRS), Identify Missing Requirements (IMR), and Enhancing the Overall Completeness (EOC) to collect participants’ responses. Responses $\leq 3$ indicate a range from neutral to strongly disagree on our selected Likert scale.

TABLE III. Used variables for construct evaluation.

# Hypotheses:

$H_{0,7}$: The median rating $(M)$ for Consistent with Requirements Set (CRS) is $\leq 3$.
$H_{a,7}$: The median rating for Consistent with Requirements Set (CRS) is $> 3$.
$H_{0,8}$: The median rating $(M)$ for Identify Missing Requirements (IMR) is $\leq 3$.
$H_{a,8}$: The median rating for Identify Missing Requirements (IMR) is $> 3$.
$H_{0,9}$: The median rating $(M)$ for Enhancing the Overall Completeness (EOC) is $\leq 3$.
$H_{a,9}$: The median rating for Enhancing the Overall Completeness (EOC) is $> 3$.

# E. Analysis Procedure

First, we analyze RQ1.1 and RQ1.2 using NLP metrics to measure the similarity between requirements. Then, we outline the evaluation process for the remaining research questions.

1) NLP Metrics Analysis Procedure: We compute the pairwise similarity between the ReqBrain-generated and human-authored ground truth requirements (see Section V-C1 for the evaluation set setup) for RQ1.1 and RQ1.2.

a) The BERT Score: The BERT score is a learned evaluation metric for text generation that utilizes contextualized embeddings from BERT [11]. It computes the cosine similarity between token embeddings of a human-authored reference $x = x_1, x_2, \ldots, x_n$ and an AI-generated equivalent $\hat{x} = \hat{x}_1, \hat{x}_2, \ldots, \hat{x}_m$.
The recall and precision are computed as:

$$ R = \frac{1}{|x|} \sum_{x_i \in x} \max_{\hat{x}_j \in \hat{x}} x_i^\top \hat{x}_j \quad \text{and} \quad P = \frac{1}{|\hat{x}|} \sum_{\hat{x}_j \in \hat{x}} \max_{x_i \in x} x_i^\top \hat{x}_j $$

The F1 score is derived from these values. Unlike traditional $n$-gram metrics, contextualized embeddings capture word meaning, synonyms, context, and grammar [11].

b) The FRUGAL Score: The FRUGAL score is similar to the BERT score but is faster and lighter, and in some cases it outperforms the BERT score [60]. Its training involves generating a synthetic dataset by pairing sequences annotated with costly metrics aligned with human judgment, followed by pre-training a miniature language model$^{7}$ on this dataset to learn the mapping of the costly metrics and similarity functions.

2) Human Evaluation Analysis Procedure: For RQ1.3 and the four dimensions in RQ2, we used descriptive statistics to summarize sample characteristics and inferential statistics to test the hypotheses. For all samples used in RQ2, we calculated the mean $(\tilde{x})$, standard deviation $(s)$, and median $(M)$. Although we report $\tilde{x}$ and $s$, hypothesis testing relies on the median $(M)$, which is more appropriate for ordinal data [56], [68]. Due to the nature of the data, non-parametric tests were employed to evaluate all the hypotheses. Non-parametric tests are robust for ordinal data and do not assume normality or equal intervals [56], [69], [70]. For all tests, a significance level of $\alpha = .05$ was used to determine statistical significance, and $95\%$ confidence intervals were reported for effect size estimates. For RQ1.3, descriptive statistics include success and failure counts and success proportions.
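The greedy token matching behind these scores can be sketched in a few lines of pure Python. This is an illustrative re-implementation of the recall/precision formulas above, not the official bert-score package; the toy unit-norm embedding vectors stand in for real BERT outputs.

```python
def greedy_match_scores(ref_emb, cand_emb):
    """BERTScore-style recall, precision, and F1 via greedy cosine matching.

    ref_emb / cand_emb: lists of token-embedding vectors, assumed unit-norm,
    so a plain dot product equals cosine similarity.
    """
    dot = lambda a, b: sum(u * v for u, v in zip(a, b))
    # Recall: match every reference token to its most similar candidate token.
    r = sum(max(dot(x, xh) for xh in cand_emb) for x in ref_emb) / len(ref_emb)
    # Precision: match every candidate token to its most similar reference token.
    p = sum(max(dot(x, xh) for x in ref_emb) for xh in cand_emb) / len(cand_emb)
    return r, p, 2 * p * r / (p + r)
```

With identical toy embeddings for reference and candidate, all three scores are 1.0; dropping a candidate token lowers recall while precision can stay perfect.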
A right-tailed Fisher's Exact test was used to test $H_{a,1}$, comparing the proportions of requirements identified as human-authored between ReqBrain and its untuned baseline model. An odds ratio was calculated as the effect size. For identifying authorship between ReqBrain-generated and human-authored requirements, a contingency table and expected frequencies were computed for both samples, employing the Chi-square test $(\chi^2)$ to evaluate $H_{a,2}$. Overall human precision in identifying authorship between ReqBrain-generated and human-authored requirements was also calculated with confidence intervals.

For the ISO 29148-compliant dimension of adequacy in RQ2, four hypotheses are tested. Two hypotheses ($H_{a,3}$ and $H_{a,4}$) compare ISO 29148 compliance between ReqBrain and its untuned baseline model using right-tailed Mann-Whitney U tests. The remaining two ($H_{a,5}$ and $H_{a,6}$) compare ReqBrain-generated requirements with human-authored ones using two-tailed Mann-Whitney U tests to evaluate equivalence. The effect size for all Mann-Whitney U tests is quantified using Vargha and Delaney's A-statistic [70]. For the remaining dimensions of adequacy in RQ2, three one-sample Wilcoxon signed-rank tests were conducted, one for each hypothesis ($H_{a,7}$, $H_{a,8}$, and $H_{a,9}$), with the rank-biserial $(r)$ effect size. Additionally, to account for multiple tests for RQ1.3 and each dimension in RQ2, we calculate and report adjusted p-values using the Holm-Bonferroni method [71].

$^{7}$A miniature language model is a downscaled version of a larger model that maintains the original performance or comes close to it [67].

# F. Sample Size and Participants

We performed an a-priori power analysis to determine the sample size for different evaluations. Following Dybå et al. [72], we conducted power analysis for the non-parametric tests using their analogous parametric tests.
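The Holm-Bonferroni step-down adjustment applied here is simple to implement directly; the sketch below returns adjusted p-values in the original order (the p-values in the usage example are illustrative, not taken from the study).

```python
def holm_bonferroni(p_values):
    """Holm-Bonferroni step-down adjusted p-values, returned in input order.

    Sort p-values ascending, multiply the i-th smallest by (m - i), cap at 1,
    and enforce monotonicity with a running maximum.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * p_values[idx])
        running_max = max(running_max, adj)  # adjusted p must be non-decreasing
        adjusted[idx] = running_max
    return adjusted

# Illustrative family of four tests (not the paper's p-values):
adj = holm_bonferroni([0.01, 0.04, 0.03, 0.005])
```

A test is rejected at level $\alpha$ exactly when its adjusted p-value is below $\alpha$, which matches the sequential Holm procedure.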
We used the conventional $\alpha = 0.05$, power $= 0.8$, and a recommended effect size of $0.5$ for software engineering studies [72]. An optimal sample size of 64 requirements was calculated for two-tailed Mann-Whitney U tests and 51 for its one-tailed tests, using a two-tailed t-test and one-tailed t-test, respectively. For one-sample, one-tailed Wilcoxon signed-rank tests, an optimal sample size of 26 was determined using a one-sample, one-tailed t-test, and for Fisher's Exact test, we used the Chi-square test to calculate an optimal sample size of 32. The first part of our study design (see Section IV-B3) resulted in sample sizes two to three times larger.

Four experienced participants evaluated a total of 672 distinct requirements for different tasks. Two had 1–3 years of work experience, and the other two had 4–6 years in software and requirements engineering. Each participant also held at least a bachelor's degree in software engineering. In terms of familiarity with AI content, two were "Very familiar," one was "Somewhat familiar," and one was "Moderately familiar." Regarding the use of generative AI tools like ChatGPT, two used them "Sometimes," one answered "Yes," and one responded "No." Table IV presents a comprehensive overview of the main points in this section.

# VI. RESULTS AND DISCUSSION

We first present the results corresponding to each research question, followed by a brief discussion and then a summary of findings.

# A. RQ1 Generating Authentic Requirements

1) RQ1.1 Benchmarking the Fine-tuned Models: Table V presents the performance metrics of the five fine-tuned large language models. Zephyr-7b-beta outperforms all other models on both metrics. Figure 3 illustrates the performance of the models across the three instruction categories described in Section IV-B using FRUGAL and BERT scores. Both scores show that Mistral-7B-Instruct-v0.2 performs slightly better than Zephyr-7b-beta in the Missing INST task.
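As a rough cross-check of these sample sizes, the standard normal-approximation formulas can be evaluated with the Python standard library alone. This is a sketch, not the authors' exact procedure: exact t-distribution-based calculations give values one or two larger (the reported 64, 51, and 26), while the normal approximation below yields 63, 50, and 25.

```python
from math import ceil
from statistics import NormalDist

def n_per_group_two_sample(d, alpha=0.05, power=0.8, two_sided=True):
    """Approximate per-group n for a two-sample test detecting effect size d,
    via the normal approximation n = 2 * ((z_alpha + z_beta) / d)^2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2) if two_sided else z.inv_cdf(1 - alpha)
    z_b = z.inv_cdf(power)
    return ceil(2 * ((z_a + z_b) / d) ** 2)

def n_one_sample(d, alpha=0.05, power=0.8):
    """Approximate n for a one-sample, one-tailed test of effect size d."""
    z = NormalDist()
    return ceil(((z.inv_cdf(1 - alpha) + z.inv_cdf(power)) / d) ** 2)
```

For a medium effect of $d = 0.5$ at $\alpha = 0.05$ and power $0.8$, this gives 63 per group (two-tailed), 50 (one-tailed), and 25 (one-sample, one-tailed), each within two of the t-based values the study reports.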
This may stem from its architecture and training data, which likely include a broader range of similar tasks.

Fig. 3. Performance metrics across three task categories (see Section IV-B).

Nevertheless, Zephyr-7b-beta records higher aggregate scores across all three task categories. Hence, we identified Zephyr-7b-beta as the most effective model for generating authentic requirements.

2) RQ1.2 Benchmarking ReqBrain against ChatGPT-4o: Table VI summarizes performance metrics for ReqBrain, our best-performing fine-tuned model, and untuned ChatGPT-4o. The metrics show that ReqBrain outperforms the untuned ChatGPT-4o in generating authentic requirements. Although the comparison with untuned ChatGPT-4o might seem unfair, it underscores the importance of fine-tuning LLMs for requirements elicitation tasks. ChatGPT-4o, with its larger parameter count, might surpass ReqBrain (which has 7 billion parameters) in performance if fine-tuned.

3) RQ1.3 ReqBrain vs. its Untuned Baseline Model: Table VII provides results to assess the perceived human authorship of requirements generated by ReqBrain and its untuned baseline model. For ReqBrain, $47.8\%$ of the generated requirements are identified as human-authored, compared to only $8.8\%$ for the untuned baseline model. The right-tailed Fisher's Exact test produced a p-value $< 0.001$, providing strong evidence in favor of the alternative hypothesis $(H_{a,1})$. The odds ratio of 9.46 indicates that the odds of the fine-tuned model's outputs being perceived as authentic are approximately 9.5 times higher than those of the baseline model.

4) RQ1.3 ReqBrain vs. Human Authors: In Table VIII, we summarize the results for this comparison. The Chi-square test yielded $\chi^2(1; N = 272) = 0.01475694$ and $p = 0.90331$, with an odds ratio of 1.06, providing no evidence to support the alternative hypothesis $(H_{a,2})$. Furthermore, the classification precision is $50.7\%$.
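The odds ratio used as the effect size here is the usual cross-product ratio of a 2x2 contingency table. The counts below are hypothetical, chosen only to be consistent with the reported proportions (roughly $47.8\%$ vs. $8.8\%$ of 136 judgments each); they are not the paper's raw data.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
        row 1 = fine-tuned model:  a judged human-authored, b judged AI
        row 2 = baseline model:    c judged human-authored, d judged AI
    """
    return (a * d) / (b * c)

# Hypothetical counts matching the reported percentages (not the study data):
or_ft_vs_baseline = odds_ratio(65, 71, 12, 124)
```

With these illustrative counts the odds ratio comes out near the reported 9.46, i.e., the fine-tuned model's outputs are roughly 9.5 times more likely to be judged human-authored.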
The results suggest that ReqBrain-generated requirements are perceived as authentic by humans, as evaluators could not reliably distinguish between them and those authored by humans.

TABLE IV RQ MAPPING TO HYPOTHESES, EVALUATION MATERIALS, VARIABLES, STATISTICAL TESTS, DIRECTIONALITY, AND COMPARED SAMPLES. ABBREVIATION: $M_h$, HYPOTHESIZED MEDIAN.

TABLE V Human Alignment $(HA)$ RESULTS: PERFORMANCE METRICS FOR FIVE FINE-TUNED LLMS. ABBREVIATIONS: P, PRECISION; R, RECALL.

TABLE VI Human Alignment $(HA)$ RESULTS: PERFORMANCE METRICS FOR REQBRAIN VS. CHATGPT-4O. ABBREVIATIONS: P, PRECISION; R, RECALL.

# Findings RQ1: Our study shows that fine-tuning LLMs effectively generates authentic requirements indistinguishable from those authored by humans. Human evaluators could not reliably differentiate between human-authored and fine-tuned LLM-generated requirements (ReqBrain), indicating that its outputs meet human quality standards.

# B. RQ2 Generating Adequate Requirements

1) Comparing ReqBrain and its Untuned Baseline Model on ISO 29148: Table IX provides a summary of the results. For written syntax, ReqBrain achieved a median rating of 4 compared to 2 for the untuned baseline model. The right-tailed Mann-Whitney U test yielded $U = 14203.5$ and $p < .001$, with a large effect size $A_{12} = 0.76$, indicating strong evidence in favor of the alternative hypothesis $(H_{a,3})$. For signaling keywords, ReqBrain also achieved a median rating of 4 compared to 2 for the untuned baseline model, with $U = 13766.0$ and $p < .001$ and a large effect size $A_{12} = 0.74$, supporting the alternative hypothesis $(H_{a,4})$.

TABLE VII Perceived Authorship $(PA)$ RESULTS: PERCEIVED HUMAN-LIKENESS BETWEEN REQBRAIN AND ITS UNTUNED BASELINE MODEL.
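Vargha and Delaney's $A_{12}$, the effect size reported for the Mann-Whitney U tests above, has a simple direct definition: the probability that a random rating from one group exceeds a random rating from the other, with ties counted as half. A minimal sketch (the ratings in the usage example are illustrative, not the study data):

```python
def vargha_delaney_a12(x, y):
    """A12 effect size: P(draw from x > draw from y), ties counted as half.

    0.5 means no effect; by common convention, values above roughly 0.71
    (or below 0.29) are considered large.
    """
    greater = sum(1 for xi in x for yj in y if xi > yj)
    ties = sum(1 for xi in x for yj in y if xi == yj)
    return (greater + 0.5 * ties) / (len(x) * len(y))
```

For example, identically distributed samples give 0.5, while a sample that always dominates gives 1.0.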
TABLE VIII Perceived Authorship $(PA)$ RESULTS: HUMAN ABILITY TO DISTINGUISH REQBRAIN-GENERATED REQUIREMENTS FROM HUMAN-AUTHORED REQUIREMENTS.

The results suggest that a fine-tuned LLM significantly outperforms its untuned baseline model in generating ISO 29148-compliant requirements.

2) Comparing ReqBrain and Humans on ISO 29148: Table X provides a summary of the results. For written syntax, both groups achieved a median rating of 4. The Mann-Whitney U test revealed no significant difference, with $U = 10118.5$ and $p = 0.15521$ and an effect size of $A_{12} = 0.54$, thereby providing no support for the alternative hypothesis $(H_{a,5})$. For signaling keywords, both groups recorded a median rating of 4. However, the Mann-Whitney U test revealed a statistically significant difference, with $U = 10482.0$ and $p = 0.04068$ and a small effect size $A_{12} = 0.56$, supporting the alternative hypothesis $(H_{a,6})$. Nevertheless, the adjusted $p = 0.12204$ suggests retaining the null hypothesis; note that for all other hypotheses, the adjusted $p$-values confirm the unadjusted $p$-values. The findings suggest that the fine-tuned LLM produces requirements that are comparable to those authored by humans in terms of appropriate syntax and signaling keyword usage.

3) Evaluating ReqBrain on the Remaining Dimensions: Table XI summarizes the results for these three dimensions. For all three dimensions, the median rating was $M = 4$. For the consistent with dimension, the right-tailed Wilcoxon signed-rank test yielded a statistically significant median rating greater than 3, with $W = 7470.5$ and $p < .001$ and a large effect size $r = 0.82$, supporting the alternative hypothesis $(H_{a,7})$. For the missing from dimension, the right-tailed Wilcoxon signed-rank test showed a statistically significant median rating greater than 3, with $W = 7035.0$ and $p < .001$ and a large effect size $r = 0.72$. These results support the alternative hypothesis $(H_{a,8})$. For the enhancing the overall completeness dimension, the right-tailed Wilcoxon signed-rank test, with $W = 6378.0$ and $p < .001$, confirms the rejection of the null hypothesis in favor of the alternative $(H_{a,9})$, with a large effect size $r = 0.58$. In summary, all three dimensions confirm that ReqBrain is effective in generating requirements that are consistent with, missing from, and enhancing the overall completeness of a given specification.

# Findings RQ2: Our results indicate that a fine-tuned LLM generates adequate requirements, thereby validating its effectiveness in generating high-quality requirements.

# VII. IMPLICATIONS FOR RESEARCH AND PRACTICE

Below, we discuss how ReqBrain contributes to both research and practice in requirements elicitation and specification.

# A. Research Implications

ReqBrain contributes to requirements engineering research by demonstrating that fine-tuning LLMs can enhance the generation of high-quality requirements. It fills a gap in the largely manual process of requirements elicitation and specification by generating authentic and adequate requirements. Empirical validation using BERT and FRUGAL scores, along with human evaluations, underscores the effectiveness of customized LLMs over untuned models. Future research may extend our approach to cover further requirements-related tasks to advance AI-assisted requirements generation, supported by our open-source dataset and methodology for continued collaboration and advancement in the field.

# B. Practice Implications

Integrating ReqBrain into the requirements elicitation phase has the potential to improve the efficiency and accuracy of collecting, categorizing, and documenting requirements.
Automatically generating authentic and adequate requirements may reduce manual workload and enable software engineers to focus more on strategic decision-making and stakeholder engagement. ReqBrain can be deployed individually or in group sessions. As an open-source model, it can be hosted locally to ensure data privacy and extended with RAG to process large volumes of text from sources like JIRA, databases, project documents, and interviews. Although our empirical validation demonstrates that ReqBrain is effective in generating authentic and adequate requirements, human expert review remains essential to ensure ethical considerations, emotional intelligence, and contextual understanding.

# VIII. THREATS TO VALIDITY

In the following, we outline the potential threats that could undermine the validity of our study.

# A. Construct Validity

For the authentic construct, we used a two-step assessment. First, we deployed automated NLP-based metrics, then conducted a human evaluation. This order mitigates the limitations of automated NLP metrics in representing human alignment on clarity, coherence, relevance, realism, and implementability, which are key aspects of authenticity. Additionally, it helps avoid costly human evaluation of multiple low-quality models. Further, as an intermediary step between NLP-based metrics and human evaluation (although this step is not replicable), the first author performed a manual review to confirm that these metrics identified the best-performing model. For the adequate construct, we employed human evaluations across four dimensions.

# B. Internal Validity

Differences in participants' interpretations of evaluation criteria may affect validity. To mitigate this, we selected experienced participants, held an onboarding session with detailed guidelines and knowledge-refresher materials, and allowed participants to evaluate a few instances from each task after onboarding and to ask questions.
Anonymizing requirements in tasks B and C further minimized bias. Further, using the same ReqBrain-generated requirements in tasks B and C ensured that differences in evaluations were due to output quality and fine-tuning effectiveness rather than data variations. Requirements were anonymized and shuffled, and participants were asked to complete tasks in different orders to minimize carryover effects.

TABLE IX ISO 29148-COMPLIANT DIMENSIONS OF ADEQUATE: REQBRAIN VS. ITS UNTUNED BASELINE MODEL. ABBREVIATIONS: $N$, SAMPLE SIZE; $M$, MEDIAN; $\tilde{x}$, MEAN; $p$, P-VALUE; Adj. $p$, HOLM-BONFERRONI ADJUSTED P-VALUE; $U$, MANN-WHITNEY U; $A_{12}$, VARGHA AND DELANEY'S EFFECT SIZE.

TABLE X ISO 29148-COMPLIANT DIMENSIONS OF ADEQUATE: REQBRAIN VS. HUMAN AUTHORS. ABBREVIATIONS: $N$, SAMPLE SIZE; $M$, MEDIAN; $\tilde{x}$, MEAN; $p$, P-VALUE; Adj. $p$, HOLM-BONFERRONI ADJUSTED P-VALUE; $U$, MANN-WHITNEY U; $A_{12}$, VARGHA AND DELANEY'S EFFECT SIZE.

TABLE XI REQBRAIN EFFECTIVENESS ON REMAINING DIMENSIONS OF ADEQUATE. ABBREVIATIONS: $n$, SAMPLE SIZE; $M$, MEDIAN; $\tilde{x}$, MEAN; $s$, STANDARD DEVIATION; $p$, P-VALUE; Adj. $p$, HOLM-BONFERRONI ADJUSTED P-VALUE; $r$, RANK-BISERIAL EFFECT SIZE.

# C. External Validity

Our sample sizes were two to three times larger than those determined by our a-priori power analyses, ensuring robust statistical power. We deem our four evaluators to have adequate job experience and education in the domain to provide a fair generalizability of their ratings. While this enhances the reliability of the insights, we acknowledge that four evaluators constitute a limited representation of cross-domain professionals dealing with requirements, which may limit generalizability.
The evaluation of the consistent with, missing from, and enhancing the overall completeness dimensions in RQ2 was based on three distinct software projects from the same domain, which may not sufficiently represent the full diversity of software development projects. The present study may not wholly represent the real-world utility of ReqBrain: a case study aligned with the ISO 9241-11 definition of usability is underway to further explore this aspect.
Requirements elicitation and specification remains a labor-intensive, manual process prone to inconsistencies and gaps, presenting a significant challenge in modern software engineering. Emerging studies underscore the potential of employing large language models (LLMs) for automated requirements generation to support requirements elicitation and specification; however, it remains unclear how to implement this effectively. In this work, we introduce ReqBrain, an AI-assisted tool that employs a fine-tuned LLM to generate authentic and adequate software requirements. Software engineers can engage with ReqBrain through chat-based sessions to automatically generate software requirements and categorize them by type. We curated a high-quality dataset of ISO 29148-compliant requirements and fine-tuned five 7B-parameter LLMs to determine the most effective base model for ReqBrain. The top-performing model, Zephyr-7b-beta, achieved an F1 of 89.30\% using the BERT score and a FRUGAL score of 91.20 in generating authentic and adequate requirements. Human evaluations further confirmed ReqBrain's effectiveness in generating requirements. Our findings suggest that generative AI, when fine-tuned, has the potential to improve requirements elicitation and specification, paving the way for future extensions into areas such as defect identification, test case generation, and agile user story creation.
[ "cs.SE", "cs.AI", "cs.LG" ]
# 1 Introduction

LLMs are now widely applied in the AI for materials science field for applications such as literature and materials database knowledge extraction [1–3], materials property prediction [4–6], alloy design [7, 8], discovering new physical laws [9, 10], and proposing new scientific hypotheses [11, 12]. Recently, researchers [13–15] have built LLM agents that connect to existing software tools to collaboratively tackle complex scientific tasks posed by researchers. While these methods are efficient in completing some specific tasks, they still fundamentally rely on human-written workflows; hence, LLMs are not able to act autonomously to solve scientific problems. We tested LLM performance in a materials science knowledge QA scenario and a tool-usage scenario, as shown in Figure 1. GPT-4o [16] can generate concise, refined, and accurate answers to scientific knowledge questions, whereas ChemLLM [17], a domain-specific LLM for materials chemistry, produces long, verbose responses with numerous errors. For generating code using materials simulation tools, the situation worsens: even GPT-4o fails to provide correct answers, and other domain-specific models are unable to generate functional Python code. This falls far short of the requirements in the scientific field; domain-specific models fail to outperform general-purpose LLMs.

[Figure 1: LLM efficacy in scientific knowledge QA and materials simulation tool-usage tasks. On the knowledge question ("Define the term 'grain boundary' in materials science."), GPT-4o is concise, refined, and accurate, while ChemLLM is verbose with irrelevant content. On the real-world tool-usage question (use pymatgen to generate a nitrogen interstitial defect in GaN and calculate two properties), GPT-4o produces only partially correct code, ChemLLM produces wrong code that does not comply with the format and usage requirements, and a third domain-specific model fails to output code at all.]

To address this issue, we first construct benchmarks for evaluating the ability of LLMs to use materials science tools. Previous scientific LLM benchmarks mainly focus on evaluating the reading comprehension [18–21] and materials property prediction abilities of individual LLMs [22–24], while neglecting the ability to generate code for performing physically meaningful calculations. In this paper, we present MatTools, a comprehensive benchmark specifically designed to evaluate LLM capabilities in materials science tool utilization. MatTools consists of two complementary components: (1) a materials simulation tool QA benchmark with 69,225 QA pairs derived from the pymatgen [25] codebase and documentation (a widely adopted, robust open-source Python library for materials science that excels in data processing, analysis, and simulation) and (2) a real-world tool-usage benchmark comprising 49 questions (138 tasks) that challenge LLMs to generate functional Python code for materials defect property calculations, constructed from unit test files of the pymatgen-analysis-defects [26] library. MatTools aims to overcome the challenges of current LLM benchmarks by introducing the following design choices.

• Automated data synthesis: Automatically generate real-world tool-usage benchmarks using unit test files, without manual data collection or materials science expert annotation.
• Comprehensive dual-benchmark design: MatTools includes both a large-scale materials simulation tool QA benchmark and a real-world tool-usage benchmark, enabling evaluation of both knowledge comprehension and practical tool-usage abilities. We test both the performance of individual LLMs and of LLM-RAG agent systems.

• Secure and standardized evaluation: We employ a Docker [27] sandbox to safely execute LLM-generated code, ensuring security and standardization. We design multi-level testing frameworks based on our benchmark to systematically evaluate LLM performance in materials science tool utilization.

Our experimental results yield three key insights:

• Generalists outshine specialists: General-purpose LLMs (such as GPT-4o and the Qwen2.5 series [28]) significantly outperform domain-specific materials science LLMs in knowledge QA tasks ($80\%$ vs. $<32\%$ accuracy for general-purpose vs. domain-specific LLMs).

• AI knows AI: Using LLM-generated documentation as the retrieval source in retrieval-augmented generation (RAG) systems substantially improves performance compared to using the original codebase and/or official documentation (e.g., the ability to generate runnable code and the task success rate increased by $47.8\%$ and $115.7\%$ over GPT-4o alone).

• Simpler is better: Our self-reflection LLM-doc RAG agent system (leveraging only LLM-generated documentation and incorporating multi-round reflection) outperforms more complex approaches such as agentic RAG (with task decomposition, NER, and reranking) and the SOTA GraphRAG method LightRAG [29]; our method yields improvements of $58.8\%$ and $149\%$ in task success rate compared with the agentic RAG method and LightRAG. Remarkably, even the single LLM+RAG system outperforms the agentic RAG and LightRAG by $13.7\%$ and $78.3\%$ in task success rate.
These findings highlight the current limitations of domain-specific LLMs and the effectiveness of leveraging LLM-generated documentation and self-reflection for enhancing LLM tool-use abilities.

# 2 MatTools

This section presents the development of MatTools, focusing on two key benchmarks: a materials simulation tool QA benchmark (§2.1) and a real-world tool-usage benchmark (§2.2). The materials simulation tool QA benchmark aims to evaluate the knowledge and comprehension of LLMs in materials science, while the real-world tool-usage benchmark assesses the capabilities of LLMs for using these tools for code generation. For each benchmark, we detail our methodology, including data collection, data synthesis, and the design of testing frameworks to evaluate the capabilities of LLMs in materials science tool usage (see Figure 2).

# 2.1 Materials simulation tool QA benchmark

Data collection We selected pymatgen as our primary benchmark data source. We leveraged RepoAgent [31] to process pymatgen using the following steps: (1) Repository parsing: RepoAgent automatically analyzes the codebase, constructing a hierarchical project tree with the repository as the root node and directories/Python files as intermediate nodes; (2) Structure extraction: classes and functions were integrated as leaf nodes under their respective Python files, while caller-callee relationships were captured to form a directed acyclic graph (DAG); (3) Documentation generation: documentation for each code segment was generated using Gemini-2.0-flash [32] with specialized RepoAgent prompts (Appendix A.1.1); (4) Dataset creation: two datasets—pymatgen_code and pymatgen_doc—were constructed, each comprising 7,192 datapoints extracted from code segments and their corresponding documentation, respectively (Appendix A.1.1).

Benchmark data synthesis Two types of prompts were designed to generate QA pairs from the pymatgen_code and pymatgen_doc datasets (prompt templates in Appendix A.1.2).
We instructed Gemini-2.0-flash to generate up to 5 distinct questions for each datapoint (code segment or documentation), with fewer questions when the datapoint content was insufficient to support 5 meaningful questions. Each generated question includes the question and four answer options (A, B, C, and D), requiring the LLMs to respond with only A, B, C, or D. This yielded two QA benchmarks: pymatgen_code_qa with 34,621 QA pairs and pymatgen_doc_qa with 34,604 QA pairs (see Appendix A.1.3).

Testing framework design To systematically evaluate general LLMs for materials simulation tool comprehension and the scaling between performance and LLM size, we benchmarked 9 general LLMs (3 widely-used closed-source models and 6 Qwen2.5 open-source models with different parameter sizes). Recently, materials chemistry-focused LLMs have demonstrated excellent performance in understanding the materials science literature and in property prediction. To assess whether these domain-specific LLMs are proficient in materials simulation tool knowledge and instruction-following ability, we tested 3 materials chemistry LLMs (see Appendix A.1.4). We evaluated model performance accuracy (proportion of questions answered correctly) to compare the understanding capabilities of different models on materials simulation tools.

[Figure 2: Overview of MatTools, the materials science tools benchmark. (1) The materials simulation tool QA benchmark: functions and classes are extracted from the pymatgen codebase, documentation is generated with Gemini, and code QA (34,621 pairs) and doc QA (34,604 pairs) are synthesized and scored automatically for accuracy. (2) The real-world tool-usage benchmark: pymatgen-analysis-defects unit test files are turned into problem statements (49 questions, 138 property-calculation tasks) with verification code executed in a safe sandbox, reporting function runnable rate and task success rate.]

# 2.2 Real-world tool-usage benchmark

Benchmark data synthesis Examples of real-world material simulation tool usage are rare. Hence, we designed an automated process using LLMs to transform unit test code into triplets of: (1) a problem statement (which prompts LLMs to generate Python code for calculating material properties and returning a dictionary of material properties), (2) a dictionary of expected material properties to be calculated (the keys are material property names and the values are the calculated results plus data types for verification), and (3) verification code to test the results from (2). We chose unit test code as the source because it contains three essential components: the problem to be solved, the implementation of the solution, and result verification. This automated pipeline enables the rapid generation of tool-usage datasets (without constraint to specific LLMs) and facilitates benchmarking across models. We selected unit tests from the pymatgen-analysis-defects library to generate the triplets. This standalone pymatgen plugin is designed to analyze defects in materials (important material properties are controlled by the defects in materials).
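An illustrative verification function of the kind contained in the generated triplets might look as follows. The property names, expected values, and types below are invented for illustration and are not drawn from the actual benchmark.

```python
def verification(results: dict):
    """Check an LLM-computed material-property dictionary against expected
    values; return "ok" or a list of error reasons.

    The expected properties here are hypothetical examples, not taken from
    the pymatgen-analysis-defects unit tests.
    """
    expected = {
        "defect_name": ("N_interstitial", str),
        "charge_states": ([-1, 0, 1], list),
    }
    errors = []
    for key, (value, typ) in expected.items():
        if key not in results:
            errors.append(f"missing property: {key}")
        elif not isinstance(results[key], typ):
            errors.append(f"wrong type for {key}: {type(results[key]).__name__}")
        elif results[key] != value:
            errors.append(f"wrong value for {key}: {results[key]!r}")
    return "ok" if not errors else errors
```

A correct results dictionary yields "ok"; a missing or mistyped property yields a human-readable error list, which is what the sandbox reports back for failed tasks.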
We first split the unit test files into unit test functions, then generated triplets for each function using GPT-4o [16]. Two materials science PhD students then reviewed and revised errors in the generated triplets. (See Appendix A.2.1 for triplet generation prompts and generated triplet examples.) We generated 49 questions (138 tasks, where the number of tasks refers to the total number of properties to be calculated) for the real-world tool-usage benchmark.

Docker sandbox for result checking We designed a Docker sandbox so that LLM-generated code can be executed safely without affecting the local environment. The sandbox supports (1) running the LLM-generated code and returning the execution results (a material property dictionary) and (2) running the verification code and returning the verification results (the code returns “ok” if the results are correct and, if not, a list of errors).

Figure 3: The five LLM-based systems evaluated on the real-world tool-usage benchmark: (1) a single LLM, (2) a single RAG agent, (3) an agentic RAG system with name-entity recognition and reranking nodes, (4) a GraphRAG (LightRAG) agent system built on a knowledge graph with dual-level retrieval, and (5) our self-reflection LLM-doc RAG agent system with a code sandbox feedback loop.

Testing framework design We designed a testing framework (utilizing the synthesized benchmark data and the Docker sandbox) to evaluate these 5 approaches.
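A sandbox along these lines can be sketched with the Docker CLI. The image name, mount layout, and helper names below are illustrative assumptions, not MatTools' actual configuration; the key ideas are a throwaway container, no network, and a read-only mount of the untrusted script:

```python
import os
import subprocess
import tempfile

def build_docker_cmd(script_path: str, image: str):
    """Assemble a `docker run` invocation for one untrusted script."""
    return [
        "docker", "run", "--rm",
        "--network", "none",  # no network access inside the sandbox
        "-v", f"{os.path.dirname(script_path)}:/work:ro",  # read-only code mount
        image,
        "python", f"/work/{os.path.basename(script_path)}",
    ]

def run_in_sandbox(code: str, image: str = "python:3.11", timeout: int = 60):
    """Execute LLM-generated code in a disposable container; return (rc, out, err)."""
    with tempfile.TemporaryDirectory() as tmp:
        script = os.path.join(tmp, "snippet.py")
        with open(script, "w") as f:
            f.write(code)
        proc = subprocess.run(build_docker_cmd(script, image),
                              capture_output=True, text=True, timeout=timeout)
        return proc.returncode, proc.stdout, proc.stderr
```

The same mechanism serves both sandbox roles described above: once for the generated property-calculation code and once for the verification code.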
The process involves feeding a problem statement from the generated triplets to each LLM-based system, which then attempts to generate the required Python code to calculate material properties. The generated code is executed within the Docker sandbox to obtain the calculated material-properties dictionary. Subsequently, the verification code is executed in the Docker sandbox (with the obtained material-properties dictionary as input) to verify the correctness of the results. For real-world materials simulation tool usage, we employ both single LLMs and agent systems to address these complex code generation tasks. We designed and tested five distinct LLM-based systems (see Figure 3): (1) a single LLM, (2) a single RAG agent with pymatgen source code or documentation retrieval, (3) an agentic RAG system with multiple agents for task decomposition, NER, and reranking, (4) a GraphRAG agent system (here we use the state-of-the-art method LightRAG) leveraging structured knowledge representations, and (5) our self-reflection LLM-doc RAG agent system that incorporates LLM-generated documentation retrieval and iterative refinement (see Appendix A.2.2). For each, we analyzed the number of runnable functions (out of 49) and successful tasks (out of 138) by verifying the generated code through our Docker sandbox.

# 3 Experiments

# 3.1 Materials simulation tool QA benchmark results

Table 1 shows the benchmark results of various LLMs. These results clearly demonstrate that general-purpose LLMs, both closed-source and open-source, significantly outperform domain-specific materials chemistry LLMs in understanding and reasoning about materials simulation tool knowledge.
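Accuracy in the QA benchmark is simply the proportion of questions answered correctly, so malformed responses count as wrong. A minimal scorer might look like the following; the `<answer>…</answer>` wrapper is one plausible reading of the expected response format mentioned in the text, not a confirmed specification:

```python
import re

def extract_choice(response: str):
    """Pull a single A-D choice out of a model response.

    Accepts either a bare letter or an <answer>X</answer> wrapper; returns
    None when no well-formed choice is found, which is then scored as
    incorrect (this is how poor instruction following drags down accuracy).
    """
    m = re.search(r"<answer>\s*([ABCD])\s*</answer>", response, re.IGNORECASE)
    if m:
        return m.group(1).upper()
    m = re.fullmatch(r"\s*([ABCD])[.)]?\s*", response, re.IGNORECASE)
    return m.group(1).upper() if m else None

def accuracy(responses, gold):
    """Proportion of questions answered correctly."""
    correct = sum(extract_choice(r) == g for r, g in zip(responses, gold))
    return correct / len(gold)
```

For instance, responses `["A", "<answer>b</answer>", "I think it depends"]` against gold answers `["A", "B", "C"]` score 2/3: the free-form third response yields no parsable choice.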
Leading general models (Gemini-1.5-Pro, Qwen2.5-32B-Instruct, and Qwen2.5-72B-Instruct) achieve over 80% accuracy on both the code and document QA tests, while specialized materials chemistry models (ChemDFM-v1.5-8B, ChemLLM-7B-Chat-1_5-DPO, and Darwin 1.5-7B) perform substantially worse, with accuracies of ~30% (in one case, ~0). The low performance of ChemLLM-7B-Chat-1_5-DPO and Darwin 1.5-7B is associated with their poor instruction-following capability, which leads them to generate answers that do not follow the required format (i.e., “<answer>Option</answer>”). Current general LLMs exhibit superior instruction-following, generalization capabilities, and broader knowledge coverage for materials simulation tools compared to domain-specific models. The overall performance of open-source LLMs (e.g., the Qwen2.5 series) improves with increasing model size. Overall, these results highlight the clear advantages of general-purpose LLMs in materials simulation tool knowledge QA tasks. Based on this, we focus exclusively on general-purpose LLMs in the subsequent tests on the real-world tool-usage benchmark.

Table 1: Performance of different LLMs on code and document QA benchmarks.

Figure 4: Comparison of the performance of different LLMs on the real-world tool-usage benchmark. Error bars indicate standard deviation across three independent experiments; the displayed values represent the mean performance metrics from these trials.

# 3.2 Real-world tool-usage benchmark results

To assess LLM performance on the real-world tool-usage benchmark, we designed three types of tests. The first involves directly querying the LLM with questions from the real-world tool-usage benchmark. We found that the function runnable rate and task success rate were both very low (<50%). Next, we examined whether the RAG method improves LLM performance.
Testing of four different retrieval sources (lower panel of Figure 2) demonstrated that using the LLM-generated document as the RAG retrieval source yielded the best results; therefore, we designed a simple agent system using this RAG retrieval source. The system generates reflective feedback based on the execution of each round of generated code (see §2.2), then iterates to generate the next round of code. The system showed up to a 149% improvement over the SOTA GraphRAG method (LightRAG), a 58.8% improvement over the agentic RAG system with task decomposition, NER, and reranking, and a 201% improvement over the GPT-4o model (see Appendix A.2.3 for more details and examples).

Figure 5: Comparative performance analysis of a single RAG agent using different LLMs and retrieval sources on the real-world tool-usage benchmark. Retrieval sources include: (1) the pymatgen codebase, (2) the pymatgen official documentation split by recursive character splitting, (3) the LLM-generated documentation split by semantic similarity, and (4) the LLM-generated documentation split by function and class. Error bars indicate standard deviation across three independent experimental runs; displayed values represent mean performance metrics from these trials.

Results of testing single LLM system Figure 4 compares the performance of different LLMs on the real-world tool-usage benchmark. GPT-3.5 achieves a function runnable rate of only 20.41% and a task success rate of 3.62%. Even the top-performing model, GPT-4o, achieves a function runnable rate of only 45.58% and a task success rate of 18.36%. The reasoning model, Gemini-2.0-flash-thinking-exp-01-21, achieves the highest task success rate (25.63%), but a function runnable rate of only 42.86%.
All tested models demonstrate relatively low function runnable rates and task success rates, indicating that current mainstream LLMs, even reasoning models, struggle to effectively complete materials science tool usage tasks. The low function runnable rates suggest that code generated by LLMs is often not executable without modification, while the low task success rates demonstrate that even when the code runs successfully, its results are often unreliable. To address these two challenges, we tested the RAG method in the next section to enhance LLM materials science tool usage capabilities.

Results of testing single RAG agent with different retrieval sources Figure 5 compares the performance of a single RAG agent using different LLMs and retrieval sources on the real-world tool-usage benchmark. Among the four retrieval sources, the LLM-generated document split based on function and class yielded the best performance for the RAG agent. GPT-4o with the LLM-generated document split based on function and class achieved the highest function runnable (67.35%) and task success (39.61%) rates; this is an improvement of 47.8% and 115.7%, respectively, compared to GPT-4o alone, and of 19.3% and 67.3% compared to GPT-4o with the official documentation. This indicates that LLM-generated information for the RAG leads to improved content retrieval and improved overall performance.

Figure 6: Comparative performance analysis of advanced RAG agent systems on the real-world tool-usage benchmark. All systems used GPT-4o as the base model to generate code.

Results of testing advanced RAG agents Based on these results, we designed a simple agent system with the LLM-generated document split based on function and class as the retrieval source and applied the reflection method to provide LLM feedback on the generated code.
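This generate-execute-reflect loop can be sketched as follows; `llm`, `retrieve`, and `sandbox` are placeholder callables standing in for the code-generation model, the LLM-doc retriever, and the Docker sandbox, and the prompt wording is purely illustrative:

```python
def self_reflect(problem: str, llm, retrieve, sandbox, max_rounds: int = 3):
    """Iteratively generate, execute, and repair property-calculation code.

    llm(prompt) returns code, retrieve(query) returns documentation snippets,
    and sandbox(code) returns (ok, feedback) from executing the code plus its
    verification step.  All three are stand-ins for the components described
    in the text.
    """
    context = retrieve(problem)
    prompt = f"{context}\n\n{problem}"
    history = []
    for _ in range(max_rounds):
        code = llm(prompt)
        ok, feedback = sandbox(code)
        if ok:  # verification returned "ok"
            return code, history
        history.append(feedback)
        # reflect: ask the model to repair its own code given the error report
        prompt = (f"{context}\n\n{problem}\n\nPrevious attempt:\n{code}\n"
                  f"Execution feedback:\n{feedback}\nPlease fix the code.")
    return None, history  # all rounds failed
```

The error report from each round is folded back into the next prompt, which is what distinguishes this loop from a single-shot RAG call.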
Figure 6 compares the performance of our self-reflection LLM-doc RAG agent system with other mainstream RAG agent systems on the real-world tool-usage benchmark (we use GPT-4o from the single RAG agent system as the base model for all advanced RAG agent systems). Our self-reflection LLM-doc RAG agent system led to a 26.3% improvement in function runnable rate and a 39.6% improvement in task success rate, compared to the results without self-reflection. It is interesting to note that the agentic RAG system with task decomposition, NER, and reranking achieved a task success rate lower than that of GPT-4o with LLM-doc RAG. The GraphRAG method (LightRAG) performed even worse than the agentic RAG system. This suggests that LLMs utilizing only LLM-generated documentation as the retrieval source, combined with self-reflection, outperform mainstream approaches on materials science tool usage tasks (even though LightRAG and agentic RAG approaches typically perform better in other application domains). Compared to the single-LLM setup using GPT-4o alone, our self-reflection LLM-doc RAG system demonstrated improvements of 86.6% in function runnable rate and 201.3% in task success rate.
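As an arithmetic consistency check, the headline gains over GPT-4o alone follow from combining the per-stage improvements reported above. The intermediate absolute rates (about 85.1% and 55.3%) are our own computation, not values reported in the text:

```latex
\begin{align*}
\text{runnable:}\quad & 67.35\% \times 1.263 \approx 85.1\%, & 85.1/45.58 - 1 &\approx 86.6\%,\\
\text{success:}\quad  & 39.61\% \times 1.396 \approx 55.3\%, & 55.3/18.36 - 1 &\approx 201\%.
\end{align*}
```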
Large language models (LLMs) are increasingly applied to materials science questions, including literature comprehension, property prediction, materials discovery, and alloy design. At the same time, a wide range of physics-based computational approaches have been developed with which materials properties can be calculated. Here, we propose a benchmark application to evaluate the proficiency of LLMs at answering materials science questions through the generation and safe execution of code based on such physics-based computational materials science packages. MatTools is built on two complementary components: a materials simulation tool question-answer (QA) benchmark and a real-world tool-usage benchmark. We designed an automated methodology to efficiently collect real-world materials science tool-use examples. The QA benchmark, derived from the pymatgen (Python Materials Genomics) codebase and documentation, comprises 69,225 QA pairs that assess the ability of an LLM to understand materials science tools. The real-world benchmark contains 49 tasks (138 subtasks) requiring the generation of functional Python code for materials property calculations. Our evaluation of diverse LLMs yields three key insights: (1) Generalists outshine specialists; (2) AI knows AI; and (3) Simpler is better. MatTools provides a standardized framework for assessing and improving LLM capabilities for materials science tool applications, facilitating the development of more effective AI systems for materials science and general scientific research.
[ "cond-mat.mtrl-sci", "cs.CL", "cs.DB" ]
# 1 Introduction

Large Language Models (LLMs) have unequivocally revolutionized the landscape of automated code generation. Models like GPT-4o [33], Google Gemini [15], Anthropic Claude [1], and GitHub Copilot [14] excel at generating functional code snippets and translating between languages. These capabilities are now integrated into AI-powered IDEs, such as Cursor AI, to support large-scale software development. This proficiency has led to a growing need for established benchmarks, from early ones such as HumanEval [6] and MBPP [3] onward, that reflect the capability of each LLM tool. However, this rapid progress raises critical questions about the trustworthiness and reliability of the generated artifacts. Many existing benchmarks, while useful for gauging functional correctness through test suites, are approaching saturation [21, 13, 16, 45] and inherently offer limited guarantees regarding the deeper aspects of program correctness. Test cases, by their nature, can demonstrate the presence of bugs, but cannot prove their absence [11], leaving a significant gap in assessing the formal robustness and true reasoning capabilities of these powerful models. Reliable software must go beyond passing tests to be trustworthy, precisely follow specifications, and even self-validate. Formal verification offers the most rigorous approach to achieving these guarantees. This paradigm involves providing mathematical proof that a program adheres to a formal specification, thereby guaranteeing critical properties such as functional correctness, liveness (ensuring the program eventually does something good), and safety (ensuring the program never does something bad) [18]. Modern program verification infrastructures, such as Dafny [27], Frama-C [22], Verus [26], Isabelle/HOL [32], and Lean [8], coupled with powerful automated theorem provers and SMT solvers like Z3 [7] and CVC5 [4], have significantly streamlined the process of writing and checking such verified software.
These tools allow developers to express complex specifications and then automatically or semi-automatically verify that the implementation meets these specifications.

Figure 1: Overview of VerifyThisBench. Language models receive challenge descriptions from the VerifyThis competition (2011-2024), produce code with specifications and proofs for one of seven verification tools (Dafny, Why3, CBMC, VeriFast, VerCors, Frama-C, and Verus), and refine them through iterative feedback; relaxed settings provide partial solutions. The illustrated challenge is an instance of Kaldewaij's Search by Elimination, in which an element with a given property is located by eliminating elements that do not have that property; the task is to implement and verify that the index returned by the method max() points to a maximal element of the array, with an invariant expressing that the maximal element is in the remaining search space rather than maintaining the maximal element found so far.

Although researchers have developed multiple benchmarks to assess LLMs on formal verification subtasks [20, 5, 35, 12], none evaluate end-to-end program verification starting from a natural-language description. Instead, existing suites either require verifying or synthesizing small programs against a formal specification or focus on aiding proof completion by suggesting individual verification steps. Consequently, even though state-of-the-art LLMs have been reported to solve up to 97.8% of these benchmark tasks [44], those numbers do not reflect their true capability for end-to-end program verification. To bridge this gap and rigorously evaluate the capabilities of LLMs in this demanding domain, we introduce VerifyThisBench, a novel benchmark designed to assess end-to-end program verification, as shown in Figure 1.
Inspired by the annual VerifyThis Challenge [43], where human contestants devise implementations and accompanying formal proofs in verification-aware languages, VerifyThisBench tasks LLMs with interpreting natural language problem descriptions, formulating formal specifications, generating the corresponding code, and constructing machine-checkable correctness proofs, all at once. Our evaluation using VerifyThisBench reveals that even state-of-the-art (SOTA) models, such as o3-mini, achieve a zero-shot pass rate of 3.62% on this end-to-end task, with a significant number of outputs failing even to compile, and only reach a pass rate of 9.37% after five rounds of feedback. These results underscore the profound challenge this domain presents. To dissect these challenges further and explore capabilities in a more guided setting, we also propose VerifyThisBenchXS, a variant where partial implementations or proofs are provided and the LLM’s task is to complete the missing components. This paper makes the following key contributions:

• VerifyThisBench: We present VerifyThisBench, a new benchmark suite for evaluating the ability of LLMs to generate fully verified programs (code, specifications, and proofs) from natural language descriptions.
• Relaxed VerifyThisBench: We introduce VerifyThisBenchXS, a relaxed version of VerifyThisBench, to assess LLM performance when provided with partial artifacts and tasked with completing them.
• Unified Environment: We provide a unified evaluation environment that integrates seven verification tools and an automated pipeline, enabling consistent and scalable benchmarking across diverse formal verification tasks.
• SOTA LLM Evaluation: We conduct a systematic evaluation of nine SOTA LLMs on both benchmarks, revealing current capabilities and significant limitations.
Our analysis includes performance breakdowns across tools, attempt-based comparisons, model-specific strengths, self-assessed coherence, and the impact of partial guidance, providing a comprehensive understanding of model behavior in formal verification tasks.

# 2 Background & Related Work

# 2.1 Unverified Code Synthesis Benchmarks

Recent benchmarks for code generation include APPS [17], HumanEval [6], MBPP [3], CodeContests [28], DS-1000 [25], SWE-bench [19], and EvalPlus [29], among others. These benchmarks present programming tasks, often sourced from online competitions or community platforms, and evaluate models based on whether generated solutions pass a set of input-output test cases. While effective for measuring functional correctness, they do not involve formal specifications, proofs, or specification synthesis. In contrast, VerifyThisBench requires models to go beyond functional testing: they must extract a formal specification from a natural-language description, generate code in a verification-aware language, and produce a proof that passes formal verification. This makes VerifyThisBench a substantially more rigorous and comprehensive benchmark than traditional synthesis tasks.

# 2.2 Program Verification Benchmarks

Benchmarks in formal verification include SV-COMP [37], SyGuS [38], and Code2Inv [36]. SV-COMP and Code2Inv focus solely on the verification task, with no code generation involved. Specifically, the former contains large-scale C benchmarks with fixed safety properties, and the latter targets invariant generation over small C-style programs. SyGuS focuses on constraint-based synthesis. More recent efforts like DafnyBench [30] and VerusBench [46] collect verified programs in Dafny and Verus respectively, primarily to train and evaluate ML-based tools for aiding in proof completion and suggesting verification steps, rather than end-to-end program generation from natural language.
These benchmarks evaluate components of the verification pipeline but typically assume a preset formal specification or verification goal. In contrast, VerifyThisBench uses the end-to-end setup to explicitly evaluate the model’s ability to interpret and encode natural-language descriptions into provably correct formal programs, a capability not tested in existing benchmarks.

# 2.3 Formal Methods in Software Verification: A Primer

Formal methods in software verification aim to mathematically prove program correctness against a formal specification—a precise, unambiguous description of what a program should do, often expressed in a logical language. This contrasts with testing, which can only show the presence of bugs for specific inputs. The verification process typically relies on several key components embedded within or alongside the executable program code:

• Contracts: These formalize the obligations and guarantees of a code segment.
  – Pre-conditions (requires clauses): Properties that must hold true before a function or code block executes for it to behave correctly.
  – Post-conditions (ensures clauses): Properties guaranteed to be true after a function or code block finishes, provided its pre-conditions were met.
• Intermediate Assertions: Assistive hints that bridge reasoning gaps between the pre- and post-conditions that the underlying solver cannot close automatically.
• Loop Invariants: For iterative constructs, loop invariants are crucial properties that hold at the start of a loop, are preserved by each iteration, and, in conjunction with the loop’s termination, help prove the loop’s correctness.

The typical verification flow in systems utilizing these concepts is as follows:

1. Annotation: Developers write code in a verification-aware language (e.g., Dafny [27], Frama-C [22], Verus [26]) and annotate it with formal specifications and proof hints, including pre-conditions, post-conditions, assertions, and loop invariants.
2. Generation of Proof Obligations: A tool, often a Verification Condition Generator (VCG), processes the annotated code and its specifications. It translates them into a series of mathematical proof obligations (verification conditions) that, if all true, logically imply the program’s correctness with respect to its specification.
3. Automated Proving: These verification conditions are then fed to backend automated theorem provers, typically Satisfiability Modulo Theories (SMT) solvers like Z3 [7] or CVC5 [4]. These solvers attempt to mathematically prove each obligation.
4. Feedback: The system reports to the developer whether the proofs succeeded or failed. Failures often pinpoint inconsistencies between the code and its specification, or missing/incorrect annotations.

Successfully generating code within this paradigm, as targeted by our VerifyThisBench benchmark, requires an LLM not only to produce the algorithmic implementation but also to understand, formulate, and correctly express these intricate formal specifications and proof structures that enable automated verification.

# 3 VerifyThisBench Benchmark

VerifyThisBench is inspired by the annual VerifyThis Challenges [43], a competition where participants are tasked with formalizing specifications, implementing solutions, and verifying that the implementations meet the specification. Each challenge is designed to be completed within a 90-minute session and varies in difficulty. Submissions are evaluated based on correctness, completeness, and additional quality criteria such as elegance and the degree of automation. Similarly, in VerifyThisBench, the task is to interpret natural language problem descriptions, implement code, and write proofs.

# 3.1 Benchmark Construction

We collected challenges from each annual competition between 2011 and 2024, documenting their descriptions, pseudocode, and associated tasks.
Tasks are categorized as either implementation (completing an algorithm from pseudo-code) or verification (proving a model or implementation correct against a specification). All tasks are described in natural language. In total, the dataset includes 41 challenges and 154 tasks. The dataset is available at [10]. # 3.2 Environment To facilitate evaluation, we provide a unified environment supporting seven verification tools. Five of them, Dafny [27], Why3 [42], VeriFast [41], VerCors [40], and Frama-C [22], are widely used in past VerifyThis competitions. To broaden tool diversity, we additionally include Verus [26] and CBMC [23]. Tool versions and brief descriptions can be found in Appendix D. # 3.3 Features of VerifyThisBench End-to-end verification tasks with natural language problem descriptions: All tasks start with informal, natural language prompts (often with pseudo-code). Models must interpret the intent and formalize it into precise logical specifications. They are required to generate specifications, implementations, and formal proofs in a verification-aware language, ensuring the code passes machine-checkable verification. Example challenge and solution can be found in Appendix E. Graded difficulty and multi-step challenges: Challenges are drawn from the VerifyThis competition and span a range of difficulties. Many include sequential subtasks, allowing fine-grained assessment of model capability. Tool diversity: Multiple tools are provided. Models must conform to the syntax and semantics of real-world verification frameworks. # 3.4 Relaxation We observe that most language models fail to generate compilable code when targeting specific formal verification tools. This is often due to the syntactic complexity and precise annotations required by these tools. To decrease the level of difficulty and better assess LLM capabilities under more supportive conditions, we construct a set of relaxed problems derived from past human-written solutions. 
Specifically, we define three forms of relaxation:

• Code relaxation: We provide only the function specifications, omitting both the implementation and the proof annotations.
• Specification relaxation: We provide the implementation and its proof, but remove the function specifications.
• Proof relaxation: We provide specifications and implementations, but remove loop invariants and other auxiliary annotations needed for verification.

To further diversify the difficulty spectrum, we vary the extent of relaxation. In some instances, we remove all relevant components (e.g., entire specs or proofs), while in others, we retain partial elements or include complete examples as guidance. This enables a more graded evaluation of LLM performance across varying levels of verification support. In total, we create a set of 481 tasks: 195 fill-implementation tasks, 90 fill-proof/invariants tasks, and 196 fill-specification tasks. Table 6 in Appendix A shows the statistics of VerifyThisBenchXS. As there are no prior solutions in CBMC and Verus, no tasks were created and no results are reported for these tools in the relaxed setting.

# 4 Experiment Results

# 4.1 Evaluation Setup and Metrics

We evaluate a diverse set of state-of-the-art (SOTA) language models, including both proprietary and open-source systems. The models include representatives from the OpenAI family (GPT-4o, GPT-4o-mini, o3-mini, o4-mini) [34], Anthropic (Claude-3.7-Sonnet) [2], Google (Gemini-2.5-Flash), DeepSeek (Deepseek-chat-v3) [9], Meta (Llama-3.3-70B-Instruct) [31], and Alibaba (Qwen-2.5-72B-Instruct) [39]. This selection enables a comprehensive comparison across different model architectures and training paradigms. Model versions are provided in Appendix C.

# 4.2 Experiment Design

For both VerifyThisBench and VerifyThisBenchXS, we conduct experiments with iterative refinement based on tool-generated error messages.
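The attempt-with-feedback protocol described in this section can be sketched as follows; `model` and `verifier` are stand-ins for the LLM call and the verification-tool invocation, and the real pipeline additionally enforces a one-minute verifier timeout, omitted here:

```python
def attempt_task(prompt: str, model, verifier, max_attempts: int = 5):
    """Run one task under the attempt-with-feedback protocol.

    model(messages) returns a candidate solution; verifier(solution) returns
    an empty string when the solution compiles and verifies, otherwise the
    tool's error messages.  The full history of attempts and feedback is
    kept in the message list for refinement.
    """
    messages = [{"role": "user", "content": prompt}]
    for attempt in range(1, max_attempts + 1):
        solution = model(messages)
        errors = verifier(solution)
        if not errors:
            return attempt, solution  # verified on this attempt
        messages.append({"role": "assistant", "content": solution})
        messages.append({"role": "user",
                         "content": f"Verifier feedback:\n{errors}\nPlease fix."})
    return None, solution  # never verified within the attempt budget
```

The first call is the zero-shot attempt; the remaining calls see all prior solutions and error messages, matching the refinement setup described in the text.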
To evaluate correctness, we pass the generated code to the target verification tool and check whether it compiles and verifies successfully. A task is marked as a pass if no errors are returned. In addition to correctness checking, we introduce a coherence check as a relaxed evaluation metric. Here, the model self-assesses whether its generated code semantically aligns with the original problem intent, an aspect that is difficult to verify automatically. This helps evaluate how well the specification matches the task description and provides insight into the model’s confidence in its output. Each task is attempted five times per model. The first attempt uses only the task prompt; the next four incorporate feedback from previous errors. During refinement, the model has access to the full history of its prior attempts and the corresponding feedback for the current task, enabling iterative improvement. In VerifyThisBench, the tasks of a challenge are completed sequentially. Only the final attempt from the previous task is carried over to the next, preserving essential context while keeping prompts concise. In contrast, VerifyThisBenchXS tasks have isolated contexts and are completed independently, with no progress carried over between tasks. To ensure fairness, we use the same prompt across all models and set the temperature to 0.7 when applicable. A timeout of 1 minute is enforced on the verifier for all experiments. The experiments were conducted on a machine with an Intel i7-1360P CPU and 16GB of RAM.

# 4.3 Overall Pass Rate

Table 1: Overall Pass Rate on VerifyThisBench

Table 2: Overall Pass Rate on VerifyThisBenchXS

Table 1 presents the performance of the SOTA models on VerifyThisBench. For each verification tool, we report pass rates for the initial zero-shot attempt and after four additional refinement attempts using feedback. In the first attempt, most models perform poorly, with success rates under 4%.
The top performers are o3-mini, Llama, and Claude, indicating that even the strongest models struggle initially. By the fifth attempt, performance improves significantly across all models. o3-mini leads overall, followed by Claude, o4-mini, and Llama. These results highlight the effectiveness of iterative refinement and feedback in enhancing model performance. Each model exhibits distinct strengths across different verification tools, underscoring that no single model consistently outperforms the rest. For example, o3-mini, the top overall performer, excels especially in CBMC and Verus. On the other hand, Claude shows consistent strength in Dafny and Frama-C. Gemini, while generally average, performs exceptionally well on VerCors. Llama, another open-source model, performs best on Verus. In contrast, Qwen shows consistently low performance across all tools, suggesting limitations in its current proof synthesis capabilities. Further insights into tool-specific performance are discussed in Section 4.6. Table 2 shows the results on VerifyThisBenchXS. Similarly, on the first attempt, absolute numbers remain low (less than 3%) for all models. By the fifth iteration, Deepseek tops the competition with 16.01%, followed closely by o4-mini (14.55%), Claude (13.10%), and Llama (11.23%). Feedback leads to substantial improvement for most models, achieving relative gains of over 10%. In conclusion, while few models succeed from scratch, many become competitive when guided by partial context. Open-source models like Deepseek and Llama outperform many closed-source counterparts, showing strong potential for real-world deployment in assisted formal verification. These results also underscore the importance of combining structural hints, feedback loops, and domain-specific strengths when applying LLMs to formal reasoning tasks.
Key Insights: Average pass rates remain low at 10% on VerifyThisBench and 16% on VerifyThisBenchXS, revealing the challenges formal verification poses even to SOTA LLMs. All models improve with feedback.

# 4.4 Failure Mode Distribution

Figures 2 to 5 show clear improvements in model performance when partial solution templates are provided. Specifically, partial success rates, where the verifier is able to confirm some goals, increase significantly. This suggests that templates or hints help models generate more accurate solutions. Timeout rates, where the program compiles but the verifier fails to complete within the time limit, remain relatively stable. This indicates that models are making meaningful progress toward valid proofs, and the verifier struggles to find counterexamples, implying the generated solutions are closer to correctness. Compilation errors still dominate but tend to decrease under the relaxed setting for some models, demonstrating that context helps reduce syntax-level mistakes. However, some models like GPT-4o-mini and o3-mini exhibit mixed trends, suggesting that while the template helps, the model’s internal understanding and code generation fidelity still vary. If we relax the metric to consider compilable code rather than fully verified solutions, Claude, GPT-4o, and Deepseek consistently emerge as the top performers across both benchmarks. Notably, Claude generates compilable outputs in nearly 50% of attempts on VerifyThisBenchXS and around 25% on VerifyThisBench in the first attempt alone, highlighting its strong baseline capability even without iterative feedback.
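The outcome categories used in Figures 2 to 5 can be expressed as a simple classifier over a verifier run; the mapping rules below are our reading of the five categories, not the authors' exact bookkeeping:

```python
from enum import Enum

class Outcome(Enum):
    """Failure modes tracked in Figures 2 to 5 (mapping is illustrative)."""
    NOGEN = "no code generated"
    COMPILE_ERROR = "code does not compile"
    TIMEOUT = "verifier timed out"
    PARTIAL = "only some proof goals verified"
    SUCCEED = "fully verified"

def classify(code: str, compiled: bool, timed_out: bool,
             goals_proved: int, goals_total: int) -> Outcome:
    """Map one verifier run to its failure-mode bucket."""
    if not code:
        return Outcome.NOGEN
    if not compiled:
        return Outcome.COMPILE_ERROR
    if timed_out:
        return Outcome.TIMEOUT
    if goals_proved == goals_total:
        return Outcome.SUCCEED
    return Outcome.PARTIAL
```

Ordering matters here: a run that never compiles can neither time out nor prove goals, so the checks cascade from the earliest failure to full success.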
Figure 2: zero-shot on VerifyThisBench
Figure 3: refinement on VerifyThisBench
Figure 4: zero-shot on VerifyThisBenchXS
Figure 5: refinement on VerifyThisBenchXS
(All four figures share the same legend: NOGEN, Compile Error, Timeout, Partial, Succeed; outcome shares are plotted from 0% to 100% per model.)

Key Insights: While compilation errors dominate in both benchmarks, in the relaxed setting we observe decreases in such failures and increases in partially correct or compilable solutions, moving model performance closer to usable verification outputs even when full correctness is not achieved.

# 4.5 Coherence

Table 3: Self-Assessment of Specification Coherence on VerifyThisBench

Table 3 reports each model's coherence confidence, i.e., whether the model believes its generated specification matches the intended problem requirement. This metric is evaluated across generations passing the verification tool. While passing a formal verifier indicates syntactic and logical correctness, it does not address the alignment problem (i.e., whether the verified implementation perfectly aligns with the user intent expressed in natural language descriptions); hence, coherence offers complementary insight. Notably, except for o3-mini and Qwen, models' confidence is less than 50% on passed solutions. The results reveal considerable variance across models in their self-assessment behavior. Models like o3-mini and Claude exhibit high confidence, often reporting over 80% coherence even in the zero-shot setting, suggesting strong internal certainty, though this may reflect overconfidence rather than accurate introspection. In contrast, models like GPT-4o and Llama show much more conservative estimates, with coherence below 30%, indicating either better-calibrated uncertainty or limited self-awareness.
Interestingly, refinement tends to reduce overconfidence for some models (e.g., Claude) while slightly improving coherence estimation for others (e.g., GPT-4o and Deepseek), suggesting that iterative attempts help align perceived and actual correctness. We manually inspected a subset of successful solutions to validate whether the generated specifications align with the intended problem. Except for o3-mini, most models appear honest in their coherence self-assessments, with no false negatives found. Thus, our evaluation reflects an optimistic upper bound on model performance, assuming coherence estimates are accurate and verifier passes indicate best-case correctness. Automatically verifying the alignment between generated specifications and user intent in natural language remains an open technical challenge [24]. Tackling this alignment challenge is beyond the scope of this benchmark work and is left for future research.

# 4.6 Performance by Tools

Table 4: Average Pass Rates across Tools

Table 4 shows that all tools benefit from iterative refinement through feedback. In the VerifyThisBench setting, CBMC and Verus exhibit the most pronounced improvements, likely due to their syntactic similarity to C and Rust, making them more accessible to language models. Dafny also shows moderate gains in this setting. In VerifyThisBenchXS, improvements are even more substantial. Dafny, in particular, demonstrates a remarkable leap, from near-zero success to over a 24% pass rate by Iteration 5, highlighting strong synergy with guided synthesis. In contrast, tools such as VeriFast, Frama-C, and Why3 remain largely stagnant on both benchmarks, suggesting either stricter syntactic or semantic constraints, or a structural mismatch with current model capabilities.
# 4.7 Performance by Relaxation

Table 5: Overall Performance across Different Relaxation Settings

Table 5 categorizes performance based on three relaxation types: specification, where the implementation and proof are given and the model fills in the spec; code, where the spec is provided and the model completes both the implementation and proof; and proof, where the full solution is available except for the loop invariant, which the model must supply. Across all categories, performance improves notably from Iter 1 (zero-shot) to Iter 5 (refinement), indicating that iterative refinement with feedback significantly aids verification success. Among the three, specification relaxation yields the highest overall pass rates, suggesting that models are most effective when reasoning about what a program is supposed to do, given a working implementation and its proof context. Code relaxation falls in the middle, showing that models can sometimes generate plausible implementations. Completing the loop invariant, arguably the most abstract and logically demanding task, results in the lowest pass rates, though it still shows solid gains with retries. This points to the inherent difficulty models face in understanding and completing partial proofs.

Key Insights: Generating the entire solution holistically (with an overall pass rate of 9.34%) may not be more difficult than generating a specific component, e.g., a loop invariant (with an overall pass rate of 6.3%).
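The three relaxation settings can be pictured as blanking out one component of a complete solution and asking the model to fill the hole. The sketch below is purely illustrative; the dictionary layout and the Dafny-flavored snippets are assumptions, not the benchmark's on-disk format.

```python
# A complete solution has three parts; each relaxation setting removes
# one or more of them for the model to regenerate.
FULL_SOLUTION = {
    "spec": "ensures r == a + b",   # formal specification
    "code": "r := a + b;",          # implementation
    "proof": "invariant i <= n",    # proof annotation (loop invariant)
}

def relax(solution, setting):
    holes = {
        "specification": ["spec"],   # model writes the spec
        "code": ["code", "proof"],   # model writes impl + proof
        "proof": ["proof"],          # model writes the loop invariant
    }
    task = dict(solution)            # leave the original untouched
    for key in holes[setting]:
        task[key] = None             # the hole the model must fill
    return task
```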
Large language models (LLMs) have demonstrated remarkable progress in code generation, but many existing benchmarks are approaching saturation and offer little guarantee on the trustworthiness of the generated programs, providing limited insight into deeper reasoning capabilities. We introduce VerifyThisBench, a new benchmark designed to evaluate LLMs on end-to-end program verification tasks that require interpreting natural language problem descriptions, formulating formal specifications, generating code, and constructing correctness proofs. Our evaluation reveals that even state-of-the-art (SOTA) models, such as o3-mini, achieve a pass rate of less than 4%, with many outputs failing to compile. To reduce task complexity, we further propose VerifyThisBenchXS, a variant in which partial implementations or proofs are provided. We systematically assess SOTA models on both benchmarks, uncovering key strengths and limitations in their formal reasoning and verification capabilities.
# 1 Introduction

In the era of computers and digital interactions, individuals are increasingly exposed to risks they do not anticipate or desire. Social media, instant messaging applications, and chatbots serve as digital platforms that enable individuals to interact without being physically present or revealing their identity and face. To enhance safety and ensure secure and healthy digital platforms for such interactions, it is crucial to effectively identify influential statements and behaviors. A secure and healthy digital environment enables appropriate interactions without threats from malicious actors seeking to influence users in order to steal information, issue threats, or endanger individuals for their own objectives. On these platforms, where all communication occurs digitally, there are heightened opportunities for phishing, scamming, and other manipulative actions designed to extract sensitive information from users [26]. Addressing these challenges and detecting such behaviors on digital platforms and chatbots are essential for maintaining a secure environment for interactions via computers and other digital devices. As current models have achieved high accuracy in detecting explicit patterns, malevolent actors are increasingly attempting to exert mental influence over users to accomplish their aims. Mental influential patterns constitute deceptive strategies intended to control or influence the emotions, thoughts, and behaviors of targeted individuals [5,13]. They represent an intersection of mental health conditions and toxic behavior, characterized by causing distress through implicitly deceitful remarks [22]. Unlike explicit hate speech or overtly toxic language, influential statements are inherently subtle, nuanced, and difficult to detect. Recently, actors have increasingly used nuanced strategies to influence audiences through conversational contexts.
Detecting such remarks has proven to be significantly more challenging than identifying hate speech [9,11], toxicity [3], or sarcasm [2]. Previous detection models typically relied on learning from labeled sentences or paragraphs. However, current influential remarks often manifest subtly within broader conversations, appearing sporadically in sentences [27]. This intermittent nature complicates the detection task for language models. Moreover, mental influential patterns often lack overtly negative connotations, becoming identifiable only when analyzed within the context of an entire conversation. The objective of this research is to enhance the detection of implicit influential patterns within conversations using the capabilities of large language models. Previous studies [20,24] have shown that relying solely on prompting with available large language models is not an effective approach for detecting such patterns. Even fine-tuning models on conversations with a single label has not proven to be as effective as anticipated [22]. To address this gap, we propose a framework designed to improve the accuracy of detection tasks. This framework consists of two main stages: data augmentation and a two-phase fine-tuning process. Specifically, our augmentation strategy involves utilizing a reasoning language model to identify mental influential statements within conversations. The detected influential sentences are subsequently incorporated into the fine-tuning pipeline, in order to boost overall model performance. Another motivation for this augmentation strategy is to improve model interpretability through instruction fine-tuning [23]. By training the model to precisely identify the locations of mental influential elements within conversations, we can develop an explanatory system capable of highlighting and clarifying these influential segments in a conversation. The structure of this paper is organized as follows. 
The following section reviews related work and relevant literature. Section 3 provides a detailed explanation of the proposed framework. Section 4 describes the datasets and experimental setup, while Section 5 presents the results. Finally, Section 6 concludes the paper. # 2 Related Work There are numerous studies examining influence both in general and specifically within texts. [1] investigated influential actors on the X social media platform by analyzing the frequency of news sharing, finding that individuals who share news with varying credibility and platform popularity exhibit distinct influence patterns across the network. In the context of textual analysis, [27] categorize text data into three groups: utterances, conversations, and documents. An utterance typically refers to a standalone statement produced by an individual, such as a post on an online social networking platform, and does not require conversational engagement [4]. Notably, an utterance may consist of several sentences. Several datasets focus on utterances collected from online forums and social networks. For instance, Dreaddit [21] addresses mental stress, and Detex [25] focuses on delicate text. As utterances can be produced by large language models, datasets such as ToxiGen [14] have been generated using these models to provide numerous training samples aimed at enhancing safety and mitigating hate speech. While progress in utterance-level detection has been significant, more sophisticated models are required for conversation and document-level tasks [27]. Within the field of human-computer interaction, social chatbots have been developed to help users cope with mental distress. However, a recent study [18] reported that prolonged communication with such chatbots can result in mental health harms, primarily due to users’ emotional dependence on these systems, which develops over the course of continuous interactions between the individual and the computer. 
Recent research has aimed to improve the detection of influential patterns by employing advanced prompting methods, such as Chain-of-Thought (CoT) [24] and intent-aware prompting techniques [20]. Incorporating Chain-of-Thought prompts [17] for detecting implicit influential patterns did not significantly improve results, although a combination of CoT with few-shot learning yielded modest gains [24]. Intent-aware prompting involves first extracting the intent of each participant in a conversation using a language model, then appending this information to the conversation and prompting the model again to detect mental manipulation. This approach demonstrated greater improvement in detection performance compared to other methods [20]. A recent study [10] introduced MentalMAC, a multi-task anti-curriculum distillation approach for mental manipulation detection. By leveraging a large teacher model to generate rationales and feedback, they combined unsupervised data augmentation (EVOSA) with staged knowledge distillation to train a smaller student model. Their student model surpassed larger LLMs, achieving higher accuracy than established baselines. As these studies demonstrate, many methods involve augmenting conversational data by adding information extracted from the primary data source. To further improve detection accuracy, we propose a novel framework for detecting implicit influential patterns in conversations, featuring new data augmentation and fine-tuning approaches. # 3 Methodology In this section, the designed framework for implicit influential patterns detection will be explained extensively. First, data augmentation will be explained and then we leverage the augmented data to fine-tune a base language model in two phases for having a robust model. 
# 3.1 Data Augmentation In the proposed framework, instead of training the model on the entire conversation and providing a single label, the objective is to indicate which parts of the conversation contain implicit influential patterns manifested as mental manipulation. The conversations are between two individuals and are separated line by line. Reasoning language models are leveraged to identify the specific lines that contain implicit influential elements. Through this approach, the augmented data provides the model with the particular lines that need to be learned to better detect influential parts, rather than presenting the whole conversation with a single binary label. To accomplish this, distilled versions of the Deepseek language model [8] — which are open source and available online, particularly the Llama-distilled variant — were employed to identify influential segments. Given the stochastic nature of these models, each conversation was prompted to the reasoning language model ten times, and the results from these analyses were summarized by another language model. Notably, the summarization is performed by a language model that does not conduct reasoning. Further details are provided in Appendix 1. The detailed pipeline for data augmentation is presented in Figure 1. Fig. 1: Data augmentation pipeline for finding influential patterns After identifying these influential segments within conversations, we manually sampled the results to verify the accuracy of this approach. Since each conversation was independently analyzed ten times and the results were aggregated, the data augmentation process demonstrated high accuracy. # 3.2 Model Framework The primary rationale for data augmentation is to train the model to identify the locations of influential statements within conversations, thereby enhancing its learning capacity. 
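The aggregation step of the augmentation pipeline in Section 3.1, where ten stochastic analyses of the same conversation are reduced to a single set of flagged lines, can be sketched as follows. In the paper the reduction is performed by a non-reasoning summarizing LLM; the majority vote below is an illustrative stand-in for that aggregation, not the actual method.

```python
from collections import Counter

def aggregate_runs(runs, threshold=0.5):
    """Majority-vote aggregation over repeated analyses.

    `runs` is a list of sets of line indices flagged as containing
    influential patterns; a line is kept if it is flagged in at least
    `threshold` of the runs.
    """
    counts = Counter(line for run in runs for line in run)
    needed = threshold * len(runs)
    return sorted(line for line, c in counts.items() if c >= needed)
```

Raising `threshold` trades recall for precision: lines flagged in only one or two of the ten runs are treated as noise from the model's stochastic decoding.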
Given the computational expense of fully fine-tuning language models, instruction fine-tuning is employed by attaching a Low-Rank Adapter (LoRA) [15] to the model. This instruction-tuned model is then used to identify implicit influential segments within conversations, a task previously unattainable due to the lack of relevant data. LoRA introduces an approach in which, instead of fully fine-tuning all layers of a neural network, the weight updates are approximated by two low-rank matrices, which are then attached to the layers. This approach is also advantageous because all the base model weights can be frozen, allowing only the newly added parameters introduced by the low-rank adapters to be trained [15]. Mathematically, if the initial weights are represented by a matrix $W_1 \in \mathbb{R}^{d \times k}$, the weight updates can be approximated by two matrices, $A \in \mathbb{R}^{d \times r}$ and $B \in \mathbb{R}^{r \times k}$, where $d$ is the input dimension, $k$ is the output dimension, and the rank $r$ is chosen such that $r < \min(d, k)$. Thus, the forward pass of the instruction fine-tuned model is given by:

$$ h_1 = W_1 x + \Delta W_1 x = W_1 x + ABx $$

where $h_1$ represents the output of the instruction fine-tuned model, $W_1$ denotes the initial weights of the language model, $\Delta W_1$ represents the weight updates from instruction fine-tuning, and $x$ denotes the concatenation of the instruction prompt and the initial conversation as input data; the labels correspond to the augmented data generated in the previous procedure. For classification tasks, referred to as detection in our framework, the newly attached adapter and the new classification head, added after removing the original language model head, are fine-tuned simultaneously.
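The two low-rank forward passes described in this subsection and the next ($h_1 = W_1 x + ABx$, then $h_2 = W_1 y + ABy + CDy$ once a second adapter is stacked on the frozen first stage) can be checked numerically. The sketch below uses plain Python lists with tiny made-up matrices ($d = k = 2$, rank $r = 1$); it illustrates the algebra only, not the actual model dimensions.

```python
# Tiny numeric check of the two-stage LoRA forward passes.
def matvec(M, v):
    # Matrix-vector product over nested lists.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

W1 = [[1.0, 0.0], [0.0, 1.0]]        # frozen base weights (2x2)
A, B = [[1.0], [2.0]], [[0.5, 0.5]]  # stage-1 adapter, rank 1
C, D = [[0.0], [1.0]], [[1.0, 0.0]]  # stage-2 adapter, rank 1
x = [2.0, 4.0]

# Stage 1: base output plus the first low-rank update, h1 = W1 x + A(Bx).
h1 = vadd(matvec(W1, x), matvec(A, matvec(B, x)))   # [5.0, 10.0]

# Stage 2: the stage-1 weights are frozen; only C, D (and the
# classification head, omitted here) contribute new parameters.
h2 = vadd(h1, matvec(C, matvec(D, x)))              # [5.0, 12.0]
```

Note that the rank-1 update $AB$ touches all of $W_1$'s output dimensions while training only $d \cdot r + r \cdot k$ parameters instead of $d \cdot k$.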
Following the initial instruction fine-tuning, the previous weights are frozen, and only the newly introduced parameters in the second adapter and the classification head are updated. The second adapter is attached to the instruction fine-tuned model before the language model head is removed and replaced with the classification head; this step can be expressed mathematically as:

$$ h_2 = W_2 y + \Delta W_2 y = (W_1 + \Delta W_1) y + CDy = W_1 y + ABy + CDy $$

where $h_2$ denotes the forward pass of the model with the newly attached adapter, $W_2$ represents the weights of the language model after instruction fine-tuning, including the weights from the adapter attached during the first stage, $\Delta W_2$ denotes the weight updates from classification training, $y$ is the original conversation input for the classification task, and $C$ and $D$ are analogous to the $A$ and $B$ matrices but may have different dimensions. The attached classifier is then trained to determine whether or not a conversation contains implicit influential patterns. It should be noted that open-source models from the Llama 3 series [12] are used in the experiments, as they can be downloaded and fine-tuned specifically for our tasks. The complete model framework is illustrated in Figure 2.

Fig. 2: The framework of two-phase fine-tuning for detecting mental influential patterns. The snowflake symbol indicates frozen weights, whereas the fire symbol denotes the weights that are updated during the fine-tuning process.

# 4 Experimental Setup

# 4.1 Dataset

Most publicly available datasets are based on individual utterances. We have excluded these datasets and instead focused on the newly released datasets compiled by [22].
These datasets comprise approximately 4,000 conversations and include three distinct types of labels: the presence or absence of mental influence, multi-label annotations specifying the techniques employed for mental influence, and the vulnerability types of influenced victims. Our goal is to improve detection accuracy across all these categories. The paper [22] introduces two datasets: MentalManipCon and MentalManipMaj. In these names, "con" stands for consensus, while "maj" denotes majority. During the annotation process, annotators sometimes held differing opinions. The consensus dataset contains labels assigned only when all annotators were in agreement, whereas the majority dataset includes labels determined by majority vote among the annotators, even in the presence of differing viewpoints. For further details regarding the annotation procedure, see the original paper by [22]. The technique labels used for identifying techniques of mental influence are: "Denial", "Evasion", "Feigning Innocence", "Rationalization", "Playing the Victim Role", "Playing the Servant Role", "Shaming or Belittlement", "Intimidation", "Brandishing Anger", "Accusation", and "Persuasion or Seduction". The vulnerability labels for victims are: "Over-responsibility", "Over-intellectualization", "Naivete", "Low self-esteem", and "Dependency". For further details and definitions of each technique and vulnerability label, refer to the paper by [22].

# 4.2 Evaluation Metrics

The experiments are divided into two parts. The first part involves binary classification, where the trained model predicts whether a given conversation contains any implicit influential patterns. The second part involves multi-label classification, in which the model is required to identify all relevant technique labels used by the actors influencing the victims, as well as the vulnerability labels of the victims present in a conversation.
The primary evaluation criterion is accuracy, along with other standard metrics such as precision, recall, and micro F1 score [16].

# 5 Results

# 5.1 Binary Classification of Implicit Influential Patterns

First, the detection of influential patterns was investigated using zero-shot and few-shot learning approaches with state-of-the-art large language models. Zero-shot learning [19] refers to querying a vanilla language model without any additional training or fine-tuning, assessing its performance based solely on the knowledge acquired during pretraining and post-training phases. In other words, the model is evaluated based on the general knowledge it has acquired during training phases, without having been explicitly trained on the specific task being assessed. As shown in Table 1, zero-shot learning did not yield significant performance differences across different models. However, the newer 3.2 version of the Llama model with 3 billion parameters outperformed the other model variants. It is noteworthy that the smallest Llama model, with only 1 billion parameters, performed poorly in zero-shot learning, likely due to its limited capacity for storing knowledge. Few-shot learning [7] is similar to zero-shot learning, except that a few labeled examples are included in the original prompt before querying the model. This approach tests whether the language model can identify conversations containing implicit influential patterns when given some guidance through examples. For the few-shot learning experiments, two positive and two negative examples were included in each prompt, with examples randomly selected from the dataset; thus, the prompts did not always contain the same samples. The results indicate that the largest model size achieved the best performance among all evaluated models. Notably, few-shot learning improved the performance of the smallest model by 34 percent.
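The few-shot prompt construction described above (two positive and two negative examples drawn at random per query) can be sketched as follows. The label wording and prompt layout are illustrative assumptions; the paper does not specify the exact template.

```python
import random

def build_fewshot_prompt(conversation, pos_pool, neg_pool, k=2, seed=None):
    """Assemble a few-shot prompt with k positive and k negative
    randomly sampled examples, followed by the query conversation."""
    rng = random.Random(seed)
    shots = [(c, "manipulative") for c in rng.sample(pos_pool, k)]
    shots += [(c, "not manipulative") for c in rng.sample(neg_pool, k)]
    rng.shuffle(shots)  # avoid a fixed positive/negative ordering
    parts = [f"Conversation: {c}\nLabel: {label}\n" for c, label in shots]
    parts.append(f"Conversation: {conversation}\nLabel:")
    return "\n".join(parts)
```

Because the examples are resampled for every query, different prompts expose the model to different demonstrations, matching the setup in which "the prompts did not always contain the same samples".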
This finding suggests that, although the Llama-3.2-1B model alone lacked sufficient internal knowledge, providing relevant examples enabled it to better detect influential patterns compared to zero-shot learning with only the conversation itself. As zero-shot and few-shot approaches yielded only limited improvements compared to previous iterations of such models, these results underscore the necessity of a robust pipeline to enhance detection accuracy, since larger models do not necessarily yield better results. Therefore, we conducted experiments using the proposed framework outlined in the methodology section, and the results are reported in Table 1 under “ours” alongside the baseline models. The highest accuracy was achieved by the model utilizing Llama-3.2-3B as the base language model, with an accuracy of 82.6 percent on the MentalManipCon dataset. The other two models performed comparably, resulting in an overall performance improvement of approximately 6 percent. Notably, this improvement was attained by fine-tuning a language model with 10 billion fewer parameters. Even when Llama-3.2-1B was used as the base model, the performance remained around 82 percent, utilizing 12 billion fewer parameters. This demonstrates that designing a robust fine-tuning pipeline is more critical than merely increasing model size or fine-tuning on raw data. For the other dataset, MentalManipMaj, the Llama-3.1-8B model achieved the best results. The performance of Llama-3.2-3B was also noteworthy and comparable to Llama-3.1-8B, with both models improving accuracy by around 3 percent. Although Llama-3.2-1B did not achieve the same level of improvement as the larger models, it still outperformed the model trained with the approach from [22], despite having 12 billion fewer parameters under this framework. 
Table 1: Performance of Llama models in terms of accuracy, precision, recall, and F1-score under zero-shot, few-shot, and fine-tuning settings

# 5.2 Multi-label Classification of Techniques and Vulnerabilities

Table 2 presents the results for multi-label classification. There were 11 unique techniques in total, and a manipulative conversation may be annotated with one technique or with several. Since the model had to be evaluated on how many of those labels it can detect for each conversation, a multi-label classification setup was required. Our method with a Llama base model of 8 billion parameters performed best, with an accuracy of 35.7 percent. With so many distinct labels, a larger model performed better. It should be noted that our approach with the smallest Llama model, with 1 billion parameters, achieved performance more than 10 times better than vanilla fine-tuning in [22]. For vulnerability, since there are only 5 unique labels, performance was expected to be better due to the lower complexity. In terms of accuracy, our approach with the Llama base model of 3 billion parameters was the best among the others. Nevertheless, the performance of our method with Llama-8B was on par with the 3-billion-parameter model. The results clearly show that this approach is a better option than vanilla fine-tuning of a much larger model; it can reduce hardware costs and improve speed, since running models with many parameters requires substantial computational resources.

Table 2: Performance of Llama models in terms of accuracy, precision, recall, and F1-score for multi-label classification of techniques used by manipulators and vulnerability of victims under fine-tuning settings
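The micro F1 score used alongside accuracy in these multi-label experiments can be computed as in the sketch below, where each sample's labels are represented as a set. The example labels are drawn from the technique taxonomy in Section 4.1; the set-based representation is an illustrative choice.

```python
def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over multi-label predictions: pool true
    positives, false positives, and false negatives across all samples
    before computing precision and recall."""
    tp = sum(len(t & p) for t, p in zip(y_true, y_pred))
    fp = sum(len(p - t) for t, p in zip(y_true, y_pred))
    fn = sum(len(t - p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Micro-averaging weights every label decision equally, so frequent techniques dominate the score, which is a sensible default when the 11 technique labels are imbalanced.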
In the era of digitalization, as individuals increasingly rely on digital platforms for communication and news consumption, various actors employ linguistic strategies to influence public perception. While models have become proficient at detecting explicit patterns, which typically appear in texts as single remarks referred to as utterances, such as social media posts, malicious actors have shifted toward utilizing implicit influential verbal patterns embedded within conversations. These verbal patterns aim to mentally penetrate the victim's mind in order to influence them, enabling the actor to obtain the desired information through implicit means. This paper presents an improved approach for detecting such implicit influential patterns. Furthermore, the proposed model is capable of identifying the specific locations of these influential elements within a conversation. To achieve this, the existing dataset was augmented using the reasoning capabilities of state-of-the-art language models. Our designed framework resulted in a 6% improvement in the detection of implicit influential patterns in conversations. Moreover, this approach improved the multi-label classification tasks related to both the techniques used for influence and the vulnerability of victims by 33% and 43%, respectively.
# 1. Introduction

In recent years, Large Language Models (LLMs) have emerged as transformative tools across a wide range of natural language processing (NLP) applications, including machine translation, question answering, summarization, and dialogue systems [1, 2, 3]. Their ability to model long-range dependencies and generate coherent, contextually rich language has made them foundational in both research and industry. As their capabilities continue to evolve, a growing body of work has turned toward leveraging LLMs for speech-related tasks, aiming to unify language and speech processing under a single modeling framework [4, 5, 6]. This shift has opened new directions in Automatic Speech Recognition (ASR), audio captioning, and the development of spoken dialogue systems, particularly in multilingual and real-world settings. To address the unique challenges of speech, recent efforts have focused on extending LLMs with speech understanding capabilities through multimodal architectures. These systems typically consist of a speech encoder, a projector module to align modalities, and a language model for decoding. Notable approaches include compressing speech representations temporally, incorporating modality alignment mechanisms, and partially fine-tuning LLMs to adapt to spoken input [4]. Despite such advances, the design of effective LLM-based speech models remains nontrivial, particularly when confronted with real-world conversational speech, characterized by disfluencies, speaker overlaps, and diverse turn-taking styles. Furthermore, the lack of extensive multilingual conversational corpora complicates generalization and robustness. In our submission to the MLC-SLM Challenge, we propose a streamlined and effective system architecture that harnesses the strengths of pretrained models with minimal task-specific engineering.
Our system utilizes OpenAI's Whisper model [7] as the speech encoder due to its strong generalization capabilities and robustness to multilingual input. For the language modeling component, we explore both Qwen2.5 [8] and Gemma3 [9]. A lightweight linear projector module is trained to bridge the speech and language modalities. Through this simple yet effective setup, we demonstrate competitive performance in multilingual conversational speech modeling, highlighting the strength of modular design and pre-trained components over heavily customized architectures.

Figure 1: The overall architecture. Main components include a speech encoder, a projector, and a large language model.

# 2. System Architecture

The architecture of our system is illustrated in Figure 1 and includes three main components. From the raw waveform $O$, a speech encoder $\mathrm{SE}(\cdot)$ extracts speech representations $\tilde{S} = \mathrm{SE}(O) \in \mathbb{R}^{T_s \times D_s}$, where $T_s$ is the number of speech frames and $D_s$ is the output dimension of the speech encoder. Subsequently, the representation is mapped into the same embedding dimension as the LLM's input with a linear transformation, denoted as $S' = \mathrm{Linear}(\tilde{S}) \in \mathbb{R}^{T_s \times D_l}$. After that, the projector learns to compress $S'$ in the temporal dimension and map it into the text space of the LLM, aligning the different modalities effectively. The projected speech representation is denoted as $S = \mathrm{Projector}(S') \in \mathbb{R}^{T \times D_l}$, where $T < T_s$ is the number of speech time frames after compression by a pooling operation. The compression significantly reduces computational requirements while maintaining the essential temporal information the LLM needs to learn.
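The temporal compression step can be sketched as fixed-ratio average pooling over the encoder frames; a 5:1 ratio turns the 1,500 Whisper frames of a 30-second utterance into 300. This is a minimal sketch of the pooling alone; the actual projector also applies a learned two-layer SwiGLU perceptron, which is omitted here.

```python
def pool_compress(frames, ratio):
    """Average-pool a list of feature vectors along time by `ratio`.
    Frames in a trailing incomplete window are dropped."""
    pooled = []
    for i in range(0, len(frames) - ratio + 1, ratio):
        window = frames[i:i + ratio]
        dim = len(window[0])
        pooled.append([sum(f[d] for f in window) / ratio
                       for d in range(dim)])
    return pooled
```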
The input to the LLM is a concatenation of speech representations $S = ( S _ { t } \in \mathbb { R } ^ { D _ { l } } | t = 1 , . . , T )$ and the instruction tokens $P = ( P _ { n } \in \mathbb { R } ^ { D _ { l } } | n = 1 , . . , N )$ , where $N$ is the number of tokens in the instruction. During training, the ground truth transcription are tokenized into token IDs using the LLM’s tokenizer. Those token IDs are fed into the LLM as labels and generated via the next-token prediction. We employ a 3-stage training process for our system. Specifically: • Stage 1. Only the speech encoder is trained • Stage 2. Trained both speech encoder and projector • Stage 3. The projector is trained together with the LoRA adapter in the LLM. # 3. Experiment Setup # 3.1. Models # 3.1.1. Speech encoder We investigate the use of Whisper as a speech encoder, specifically the large-v3 version. Whisper is a Transformerbased encoder-decoder model, trained on 680k hours of labelled speech of multiple languages. The large version has 1.5B parameters. # 3.1.2. Projector The projector architecture is a two-layer perceptron with SwiGLU [10] activation function. There are two projector variants with different compression ratio: • Projector 5. Reduces 1,500 frames to 300 frames in the temporal dimension (1,500 is the number of frames from a 30-second input utterance). This results in a 5:1 compression ratio. • Projector 4. Reduces 1,500 frames to 375 frames (4:1 compression ratio). # 3.1.3. LLM We employ two families of LLM in our system: Qwen2.5- $7 { \bf B } ^ { 2 }$ with 7B parameters, and Gemma3- $1 2 \mathbf { B } ^ { 3 }$ with 12B parameters. Both LLMs have the capability to support an extensive number of languages. # 3.2. 
Data preparation The training set comprises around 1,500 hours of recordings in 11 languages: English (en), French (fr), German (de), Italian (it), Portuguese (pt), Spanish (es), Japanese (jp), Korean (ko), Russian (ru), Thai (th), and Vietnamese (vi). English is further divided into 5 smaller subclasses: American, British, Filipino, Australian, and Indian. Each recording is a monolingual two-speaker conversation on random topics. To be compatible with pre-trained Whisper speech encoders, we segment each recording into 30-second segments with a 15-second overlap. In total, we obtain around 2,300 hours of 30-second utterances for training. The challenge also provides a development set with the same settings as the training set, with approximately 4 hours of recordings for each language. Table 1: Average CER/WER (%) results on development and evaluation set # 3.3. Training details All training stages utilize Flash Attention 2 [11] for memory-efficient attention computation across both encoder and decoder components. All stages are trained with a learning rate of 3e-5 and a cosine schedule with a warmup ratio of 0.05, optimized by AdamW [12] with a weight decay of 1e-5. For augmentation, we only apply SpecAugment [13] to enhance the speech encoder's robustness. All models are trained on two NVIDIA A40 GPUs with DeepSpeed ZeRO2 for efficient parallelization. All models are evaluated with the Word Error Rate (WER%). For Korean, Japanese, and Thai, we add a space between every character and calculate the Character Error Rate (CER%). We use the meeteval toolkit for evaluation, similar to the baseline implementation. # 3.3.1. Whisper-only system We fine-tune Whisper large-v3 on the 2,300 hours of the training set for 10 epochs. Hereafter, "the fine-tuned Whisper" refers to Whisper large-v3 fine-tuned with the implementation details described above, unless specified otherwise. # 3.3.2.
Whisper and Qwen2.5 We use the fine-tuned Whisper and train the system in the 3-stage manner discussed in Section 2. We use LoRA with an alpha value of 32 to fine-tune the Qwen2.5-7B version in 16-bit precision. The projector used is Projector 5. # 3.3.3. Whisper and Gemma3 We also use the fine-tuned Whisper and train the system in the 3-stage manner discussed in Section 2. Note that in stage 2 for Gemma3, we continue to train the speech encoder along with Projector 4 to achieve better feature alignment. We also use LoRA with an alpha of 32 to fine-tune the Gemma3-12B version, in 4-bit precision. # 4. Experimental Results # 4.1. Main results The main results are shown in Table 1. In relative terms, our proposed systems outperform the baseline by 7.78% and 17.55% for Whisper+Qwen2.5-7B and Whisper+Gemma3-12B, respectively. The integration of Gemma3 reduces the CER/WER significantly, with an absolute reduction of 1.97% compared to using Qwen2.5-7B as the language model. # 4.2. Ablation studies In this section, we provide per-language results on the development set in Table 2. We also group the languages to see which model performs best in each group. We compare our proposed systems with 4 baselines: (i) a single fine-tuned Whisper-large-v3 used as the transcriber (Baseline LargeV3); (ii) the vanilla Whisper as the speech encoder and Qwen2.5-7B as the language model fine-tuned with LoRA (Baseline-Qwen); (iii) the vanilla Whisper and Llama3.1-8B [3] fine-tuned with LoRA (Baseline-Llama); and (iv) Phi-4 [14], a multimodal LLM, transcribing in a zero-shot manner (Phi-4-multimodal-0-shot). Note that, among the evaluated languages, Phi-4 was not pre-trained on Russian, Korean, Thai, and Vietnamese. We use the instruction-fine-tuned version, Phi-4-Instruct, for inference.
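For clarity, the distinction between the relative reductions (7.78%, 17.55%) and the absolute reduction (1.97%) quoted above can be made explicit. The sketch below uses hypothetical WER values for illustration only, not the challenge numbers:

```python
def relative_reduction(baseline_wer, system_wer):
    """Relative error reduction in percent of the baseline error."""
    return 100.0 * (baseline_wer - system_wer) / baseline_wer

def absolute_reduction(baseline_wer, system_wer):
    """Absolute error reduction in percentage points."""
    return baseline_wer - system_wer

# Hypothetical values: a baseline at 20.0% WER and a system at 18.0% WER.
print(relative_reduction(20.0, 18.0))   # 10.0
print(absolute_reduction(20.0, 18.0))   # 2.0
```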
Our proposed systems for comparison include the following: Table 2: WER/CER (%) for each language on the development set of the baseline systems and our models. Bold indicates the best result overall (row-wise), and underline indicates the best result within each model group for that language. • LargeV3-I. Whisper-large-v3 fine-tuned on the provided training data. • Qwen2.5-7B-16bit-III. The fine-tuned Whisper along with Qwen2.5-7B fine-tuned with LoRA up to stage 3. • Gemma3-12B-4bit-II. The fine-tuned Whisper along with Gemma3-12B fine-tuned with LoRA up to stage 2. • Gemma3-12B-4bit-III. The fine-tuned Whisper along with Gemma3-12B fine-tuned with LoRA up to stage 3. We first see that Phi-4-Instruct, a public LLM baseline, performs worse than all other baselines and custom models, with an average WER/CER of 107.88%. In contrast, the averages of the other baselines range from 16.94% (Baseline LargeV3) to 21.09% (Baseline-Llama), indicating much more stable and realistic performance. A clear trend in the table is that direct integration of Whisper with LLMs like Qwen2.5 and Llama3.1 leads to performance degradation compared to vanilla Whisper: for almost every language, Baseline-Qwen and Baseline-Llama yield higher WER/CER than vanilla Whisper. This suggests that naive fusion with large language models degrades recognition performance. While not universally superior, our LargeV3-I significantly improves over Baseline-LargeV3 in several languages. For example, it reduces error rates in English-Australian (11.72% to 9.68%), English-Filipino (9.20% to 9.16%), French (28.14% to 27.78%), Russian (17.67% to 14.51%), Thai (14.49% to 10.78%), and Vietnamese (27.16% to 20.64%).
When comparing our LargeV3-I + Gemma3-12B-4bit-III model with the two baseline fused models (Baseline-Qwen and Baseline-Llama), it performs better on nearly every language, achieving a relative error reduction of 1.95% over Baseline-Llama, while slightly underperforming Baseline-Qwen with a marginal increase of 0.29%. Overall, both our Qwen2.5-7B-16bit-III and Gemma3-12B-4bit-III configurations outperform the baselines in the East Asian and Southeast Asian language groups, but lag behind in English and the European languages. We also evaluated a LargeV3-I + Qwen2.5-7B model for Error Correction (EC) as a cascaded version of a SpeechLLM, where the LLM fixes the transcription output by Whisper. Despite its apparent promise, it actually degrades performance compared to the original LargeV3-I output (increasing the error from 17.67% to 31.29%) and still lags behind the Qwen2.5-7B-16bit-III model (21.31%). This showcases the effectiveness of end-to-end optimization. Note that this experiment is for ablation only, since the challenge does not permit the use of an LLM as a supplementary EC step.
This paper presents our system for the MLC-SLM Challenge 2025, focusing on multilingual speech recognition and language modeling with large language models (LLMs). Our approach combines a fine-tuned Whisper-large-v3 encoder with efficient projector architectures and various decoder configurations. We employ a three-stage training methodology that progressively optimizes the encoder, projector, and LLM components. Our system achieves competitive performance, with a private-test average WER/CER of 16.63% using Gemma3-12B and 18.6% using Qwen2.5-7B as the decoder-only language model.
# 1 Introduction The efficient processing of analytic queries is an important issue in databases and has motivated considerable research work over the last three decades. The main purpose of analytic queries is to extract relevant 'statistics' from huge volumes of data, resulting from the integration of heterogeneous databases and stored in what is called a data warehouse [9,12]. For efficiency reasons, the data stored in a data warehouse is generally organized according to a non-normalized schema, called a star schema. A star schema consists of two types of relation schemas (also called tables): a fact table and a number of dimension tables. (Acknowledgment: Work conducted while the second author was visiting the FORTH Institute of Computer Science, Crete, Greece, https://www.ics.forth.gr/.) In this context, an analytic query can be seen as an SQL Group-by query involving some aggregate function such as min, max, count or sum operating over attributes of the fact table called measure attributes (or simply measures). Let us see an example to illustrate the concepts of star schema and analytic query. Example 1 Consider a company structured into branches located all over the world and selling products of various types to customers. To analyze the efficiency of the company's operations, one may for instance be interested in the quantities of products sold in each branch during the past year. In order to answer such a query efficiently, knowing that the data warehouse may contain billions of sales, the data are organized according to the following star schema: – Fact table. This table, denoted by $F$, is meant to store all sales by the company. In our example, $F$ is defined over attributes $K_B$, $K_P$, $K_C$, $K_D$ and $Qty$, with $K_B K_P K_C K_D$ being its (primary) key, which means that $F$ must satisfy the functional dependency $K_B K_P K_C K_D \to Qty$.
In other words, two distinct sales concerning the same branch, the same product, the same customer and the same date cannot be associated with distinct quantities. – Dimension tables. There are four dimension tables, one for each of the attributes $K_B, K_P, K_C, K_D$: – Branch, defined over the attributes $K_B$, $B\_Loc$, $B\_Ctry$, standing respectively for the branch identifier, the town in which the branch is located and the country this town belongs to. The attribute $K_B$ is the (primary) key of Branch, meaning that the table Branch must satisfy the functional dependencies $K_B \to B\_Loc$ and $K_B \to B\_Ctry$. – Prod, defined over the attributes $K_P$, $P\_Type$, $Price$, where $K_P$ is the product identifier, and $P\_Type$ and $Price$ are respectively the type and the price of a product. The attribute $K_P$ is the (primary) key of Prod, meaning that the table Prod must satisfy the functional dependencies $K_P \to P\_Type$ and $K_P \to Price$. – Cust, defined over the attributes $K_C$, $C\_Name$, $C\_Addr$, standing respectively for the customer identifier, and the name and address of the customer. The attribute $K_C$ is the (primary) key of the table Cust, meaning that Cust must satisfy the functional dependencies $K_C \to C\_Name$ and $K_C \to C\_Addr$. – Date, defined over the attributes $K_D$, $Month$, $Year$, standing respectively for the date identifier (or key), and the month and the year of the date. The attribute $K_D$ is the (primary) key of Date, meaning that Date must satisfy the functional dependencies $K_D \to Month$ and $K_D \to Year$. Moreover, referential constraints are generally enforced in order to ensure that any key value occurring in $F$ also occurs in the corresponding dimension table.
In the case of our example these constraints are expressed through the following inclusions: $\pi_{K_B}(F) \subseteq \pi_{K_B}(\mathsf{Branch})$, $\pi_{K_P}(F) \subseteq \pi_{K_P}(\mathsf{Prod})$, $\pi_{K_C}(F) \subseteq \pi_{K_C}(\mathsf{Cust})$ and $\pi_{K_D}(F) \subseteq \pi_{K_D}(\mathsf{Date})$. In this setting a typical analytic query is to display the total quantity of each product sold in all branches during the year 2024. This query can be expressed in SQL as follows: select $K_P$, $sum(Qty)$ from $J$ where $Year = 2024$ group by $K_P$. Here $J$ denotes the (lossless) join of all dimension tables with the fact table $F$ (although the join can be simplified by involving only $F$, Prod and Date). □ How to efficiently evaluate analytic queries against huge volumes of data has been widely investigated and lies outside the scope of the present paper; the reader is referred to [16] regarding standard SQL query optimization and to [4] regarding more specific optimization techniques for analytic queries. Now, most approaches to optimizing the evaluation of analytic queries assume that the functional dependencies and referential constraints are satisfied by the data warehouse. However, in practice, the situation is quite different, as the data warehouse may contain inconsistencies and also missing data. For instance, in the above example, a customer may appear in the data warehouse with two distinct addresses (one in $Paris$ and one in $Athens$), thus violating the functional dependency $K_C \to C\_Addr$; or the price of a product may be missing in the table Prod. We draw attention to the fact that, in the case of the above query, these 'inconsistencies' should not affect the computation of its answer, because the query refers neither to customer addresses nor to product prices.
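The analytic query of Example 1 can be run on a toy instance of the star schema using SQLite. This is an illustration only; the table contents below are made up, and since the query needs only $K_P$ and $Qty$ from $F$ and $Year$ from Date, the join $J$ is reduced to these two tables:

```python
import sqlite3

# Build a tiny in-memory instance of the fact table F and the Date dimension.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE F(K_B, K_P, K_C, K_D, Qty INTEGER);
    CREATE TABLE Date(K_D, Month INTEGER, Year INTEGER);
    INSERT INTO Date VALUES ('d1', 1, 2024), ('d2', 2, 2023);
    INSERT INTO F VALUES
        ('b1', 'p1', 'c1', 'd1', 5),
        ('b1', 'p1', 'c2', 'd1', 3),
        ('b2', 'p2', 'c1', 'd2', 7);   -- a 2023 sale, filtered out below
""")
rows = cur.execute("""
    SELECT K_P, SUM(Qty)
    FROM F JOIN Date USING (K_D)
    WHERE Year = 2024
    GROUP BY K_P
""").fetchall()
print(rows)   # [('p1', 8)]
```

Only the two 2024 sales of product `p1` contribute, so its total is $5 + 3 = 8$.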
Notice also that, if a product identifier occurs in the fact table $F$ but not in the dimension table Prod (thus violating the referential constraint $\pi_{K_P}(F) \subseteq \pi_{K_P}(\mathsf{Prod})$), all sales involving this product can still be processed when computing the answer to the above query. This is so because, when computing the answer to this query, the only attribute value needed among all attributes of the table Prod is the $K_P$-value of the tuple in $F$ being processed. A more problematic situation arises if the selection condition in the query is $Year = 2024$ and $C\_Addr = Paris$. This is so because, among all transactions regarding customers whose address may be Paris, some concern customers whose address may violate the dependency $K_C \to C\_Addr$ in the table Cust. Dealing with such inconsistencies, known as the problem of computing the consistent answer to an analytic query, is not trivial, and as argued in [2,8], techniques used for standard non-analytic queries cannot be applied to analytic queries. To cope with inconsistencies and missing values in data warehouses, our approach builds on our earlier work [14] dealing with consistent query answering for standard, non-analytic queries in multi-table databases. In that work, we presented polynomial algorithms for computing either the exact consistent answer to a standard non-analytic query or bounds on the exact answer, depending on whether or not the query involves a selection condition. In the present paper, we show that in the case of a star schema, under the restrictions that the selection condition involves no keys and satisfies the property of independency (i.e., the condition can be expressed as a conjunction of conditions each involving a single attribute), the exact consistent answer can be effectively computed.
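The kind of key-dependency violation discussed above is easy to detect mechanically: group the dimension table by the key and report key values associated with more than one distinct dependent value. A minimal sketch (the Cust rows below are made-up toy data):

```python
def fd_violations(rows, lhs, rhs):
    """Report lhs values associated with more than one distinct rhs value,
    i.e. pairs of rows witnessing a violation of the dependency lhs -> rhs."""
    seen = {}
    for r in rows:
        seen.setdefault(r[lhs], set()).add(r[rhs])
    return {k: vals for k, vals in seen.items() if len(vals) > 1}

# Toy Cust table exhibiting the customer-address inconsistency described above:
cust = [
    {"K_C": "c1", "C_Name": "Dupont", "C_Addr": "Paris"},
    {"K_C": "c1", "C_Name": "Dupont", "C_Addr": "Athens"},  # violates K_C -> C_Addr
    {"K_C": "c2", "C_Name": "Rossi",  "C_Addr": "Rome"},
]
print(sorted(fd_violations(cust, "K_C", "C_Addr")))   # ['c1']
```

Note that only queries whose selection condition touches the violating attribute (here $C\_Addr$) are affected by such a violation.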
In the following section, we briefly summarize the main results of the approach of [14] and then focus on analytic queries over star schemas. Considering queries, analytic or not, whose selection condition satisfies the two restrictions mentioned above, the main contribution of this paper is to show that: (a) computing the exact consistent answer to a usual projection-selection-join query over a star schema can be done in time polynomial in the size of the data warehouse (in contrast to [14], where consistent answers to non-analytic queries are only approximated when a selection condition is involved), and (b) the exact consistent answer to an analytic query over a star schema can be computed in time polynomial in the size of the data warehouse (with two exceptions where only approximations are given). The paper is organized as follows: In Section 2 we recall the main features of our previous work in [14], on which the present approach is based. In Section 3 we first recall the definition of a star schema and argue that the approach of [14] applies in this context. In Section 4 we investigate further the concept of repairs in the context of star schemas. Section 5 deals with consistent answers to queries in the case of standard projection-selection queries as well as in the case of analytic queries. In Section 6, we propose algorithms for efficiently computing the consistent answers, or in a few cases, an approximation of the consistent answers to analytic queries. In Section 7 we compare our approach to other approaches from the literature, and in Section 8 we summarize the contents of the paper and suggest research directions for future work. # 2 The Approach of [14] # 2.1 The Approach at a Glance Traditionally, to verify the consistency of a multi-table database with respect to a set $FD$ of functional dependencies, one applies the well-known Chase algorithm [16]. The input of this algorithm is a table $T$ over the set $U$ of all attributes appearing in the database.
$T$ has as many rows as there are tuples in the database and each tuple is placed on a separate row, possibly with missing values. The algorithm derives new tuples by applying the dependencies of $FD$ as long as no pair of tuples is in conflict with some dependency, and stops as soon as such a conflict is encountered. Let $Chase(T)$ denote the result upon termination. We recall here that a dependency application, also known as the Lossless-Join rule, is defined as follows [7,16]: for all $t$ and $t'$ in the current value of $Chase(T)$, if there exists $X \to A$ in $FD$ such that $t$ and $t'$ are defined over $XA$ and $t.X = t'.X$, then: if $t.A$ and $t'.A$ are distinct domain values, then fail; else if $t.A = a$ and $t'.A$ is null, then assign $a$ to $t'.A$. In this context, a tuple $t$ in the current value of $Chase(T)$ is said to be conflicting if the following holds: there is a tuple $t'$ in the current value of $Chase(T)$ and a dependency $X \to A$ in $FD$ such that $t$ and $t'$ are both defined over $XA$, $t.X = t'.X$ and $t.A \neq t'.A$. A tuple $t$ is called non-conflicting if $t$ is not a conflicting tuple. Now, if the Chase is successful (i.e., no conflicts are encountered and no more new tuples can be derived), then the database is declared consistent; otherwise it is declared conflicting. If the database is consistent then the processing of queries (whether standard or analytic) proceeds as usual; otherwise the following question arises: can we still extract useful (i.e., non-conflicting) information from the conflicting database? The work in [14] gives a positive answer to this question based on an extended version of the Chase algorithm, called the m-Chase algorithm, to be presented formally in the following subsection.
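The Lossless-Join rule above can be sketched directly: rows are tuples with possible nulls (here, missing dictionary keys), and the chase either completes the rows or fails on a conflict. This is an illustrative sketch only; the table used below is the one that reappears as Example 2 later in this section:

```python
def chase(rows, fds):
    """Sketch of the Chase with the rule above: rows are dicts (missing attr = null),
    fds is a list of (lhs_attrs, rhs_attr). Returns chased rows, or None on failure."""
    rows = [dict(r) for r in rows]
    changed = True
    while changed:
        changed = False
        for t in rows:
            for tp in rows:
                for X, A in fds:
                    # both tuples defined over X with equal X-values?
                    if not all(a in t and a in tp and t[a] == tp[a] for a in X):
                        continue
                    if A in t and A in tp and t[A] != tp[A]:
                        return None              # conflict on X -> A: fail
                    if A in t and A not in tp:
                        tp[A] = t[A]             # copy the value into the null slot
                        changed = True
    return rows

# T = {ab, bc, ac'} with FD = {A -> C, B -> C}: the chase extends ab with a
# C-value, after which the two dependencies expose a conflict, so it fails.
T = [{"A": "a", "B": "b"}, {"B": "b", "C": "c"}, {"A": "a", "C": "c'"}]
print(chase(T, [({"A"}, "C"), ({"B"}, "C")]))   # None
```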
The input of this algorithm is no longer $T$ but a table $\tau$ containing all tuples that can be built up from constants in the active domains of the database. The set $\tau$ is called the 'universe of discourse' of the m-Chase algorithm, and the notion of 'conflicting tuple' remains the same but now concerns all tuples of $\tau$ and not only those of $T$. It is shown in [14] that the tuples of $\tau$ can be characterized in two orthogonal ways: a tuple of $\tau$ can be either true or false, and it can be either conflicting or non-conflicting. This characterization can be intuitively described as follows. If the Chase terminates successfully and $Chase(T)$ denotes the output table, then a tuple $t$ of $\tau$ is true if it appears in $Chase(T)$ and false otherwise (i.e., $t$ is false if it appears in $\tau \setminus Chase(T)$). However, if the Chase algorithm fails, then we do not know which tuples are true and which are non-conflicting. The m-Chase algorithm remedies this deficiency by modifying the Chase algorithm as follows: instead of stopping the application of functional dependencies on table $T$ when a conflict is encountered, the application continues (and the true tuples are stored) until no more tuples are found. In doing so, all true tuples and all conflicting tuples are computed, and therefore each tuple of $\tau$ can be characterized as true/false and as conflicting/non-conflicting. It follows from the above definition of conflicting tuple that if $t$ is conflicting then every true super-tuple of $t$ is also conflicting. Therefore the conflicting tuples can be retrieved as true super-tuples of true tuples of the form $xa$ over $XA$ such that: (a) $X \to A$ is a dependency in $FD$, and (b) there are $a$ and $a'$ in $adom(A)$ such that $a \neq a'$ and $xa'$ is true (here $adom(A)$ stands for the 'active domain' of $A$).
Then, assuming that all true tuples and all conflicting tuples are known, we define a tuple $t$ of $\tau$ to be consistent if $t$ is true and non-conflicting. Note that every sub-tuple of a true tuple is true and that every sub-tuple of a consistent tuple is consistent. Finally, we call a 'consistent subset' of $\tau$ any set $S$ of true tuples of $\tau$ such that the set of all tuples inferred from $S$ using the functional dependencies contains no tuples conflicting in $S$. Let us illustrate these concepts using the following example. Example 2 Suppose $T = \{ab, bc, ac'\}$ and $FD = \{A \to C, B \to C\}$. Then all tuples in $T$ are true and non-conflicting (hence consistent), but the application of the functional dependencies on $T$ allows one to infer the tuples $abc$ and $abc'$, which are conflicting tuples of $\tau$ inferred from $T$. In fact it can be seen that $(a)$ the true tuples of $\tau$ are $abc$, $abc'$ and all their sub-tuples, implying that any other tuple of $\tau$ is false in $\tau$, and $(b)$ the conflicting tuples in $\tau$ are $abc$, $abc'$, $ac$, $ac'$, $bc$ and $bc'$, implying that any other tuple in $\tau$ is non-conflicting in $\tau$. In this example, the consistent tuples of $\tau$ are $ab$, $a$, $b$, $c$ and $c'$. In this context, the set $\mathcal{R} = \{abc, c'\}$ is a consistent subset of $\tau$. Indeed, since the constants occurring in $\mathcal{R}$ are $a$, $b$, $c$ and $c'$, the universe of discourse of $\mathcal{R}$ coincides with $\tau$. Moreover, the tuples in $\mathcal{R}$ are true in $\tau$ and no conflicting tuples in $\mathcal{R}$ can be generated from the tuples in $\mathcal{R}$. It is important to notice that, although $abc$ is conflicting in $\tau$, $abc$ is not conflicting in $\mathcal{R}$ because $ac'$, $bc'$ and $abc'$ are not true in $\mathcal{R}$.
□ Finally, based on the concepts introduced so far, a repair of $T$ in [14] is defined to be a maximal and consistent subset of $\tau$ containing all tuples which are consistent in $\tau$. In our previous example, the subset $\mathcal{R} = \{abc, c'\}$ of $\tau$ is a repair of $T$ because $(a)$ as we have just seen, $\mathcal{R}$ is a consistent subset of $\tau$, $(b)$ $\mathcal{R}$ is maximal, because adding to $\mathcal{R}$ a true tuple of $\tau$ either does not bring any new true tuple in $\mathcal{R}$ (e.g., adding the tuple $ac$) or generates a conflicting tuple in $\mathcal{R}$ (e.g., adding the tuple $ac'$), and $(c)$ all consistent tuples of $\tau$ are true in $\mathcal{R}$. Note that similar arguments show that the set $S = \{bc, ac'\}$ is a maximal and consistent subset of $\tau$; however, $S$ is not a repair of $T$, because $ab$ is a consistent tuple of $\tau$ which is not true in $S$. As we shall see in Section 4.1, our definition of repair is thus more restrictive than the usual definition [1,18], in which a repair is defined to be a maximal and consistent subset of $\tau$. For example, the set $S = \{bc, ac'\}$ is a repair of $T$ following [1,18], but it is not a repair of $T$ following our approach. # 2.2 The m-Chase Algorithm Clearly, to apply the m-Chase-based approach described above, one has to answer the following questions: – Does the m-Chase algorithm terminate? – Is the result independent of the order in which the functional dependencies are applied? – Does the result contain all true tuples and all conflicting tuples that the dependencies can derive? In other words: which underlying semantics ensure that the m-Chase algorithm is correct?
All these questions find positive answers in [14], based on the set-theoretic semantics introduced in [5,15], under the assumption that the set $FD$ is normalized. Following [14], if $FD^+$ denotes the closure of $FD$ under Armstrong's axioms [3], then $FD$ is said to be normalized if it satisfies the following two conditions: FD1: every dependency in $FD$ is of the form $X \to A$ where $A$ is an attribute in $U$ not in $X$; FD2: for every $X \to A$ in $FD$, there is no $Y \subset X$ such that $Y \to A$ is implied by $FD$ (i.e., such that $Y \to A$ is in $FD^+$). As shown in [14], every set $FD$ of functional dependencies can be put in an equivalent normalized form. Moreover, a set $FD$ of functional dependencies is said to be cyclic if there exist $X \to A$ and $Y \to B$ in $FD$ such that $A$ is in $Y$ and $B$ is in $X$. It is shown in [14] that cyclic sets of functional dependencies raise important difficulties when it comes to computing consistent answers. It is easy to see that the sets $FD$ considered in Example 1 and in Example 2 are both normalized and acyclic. In this section, we recall from [14] the basic formalism on which the m-Chase algorithm relies, namely that of multi-valued tuples. A multi-valued tuple, or m-tuple, extends the notion of tuple in the sense that an m-tuple associates every attribute $A$ with a possibly empty subset of the active domain of $A$, as opposed to a single value from the active domain. Definition 1 A multi-valued tuple $\sigma$ over universe $U$, or m-tuple, is a function from $U$ to the cross product $\times_{A \in U} \mathcal{P}(adom(A))$, where $\mathcal{P}(adom(A))$ is the power set of $adom(A)$. The set of all attributes $A$ such that $\sigma(A) \neq \varnothing$ is called the schema of $\sigma$, denoted by $sch(\sigma)$.
Given $\sigma$ and a subset $X$ of $sch(\sigma)$, the restriction of $\sigma$ to $X$, denoted $\sigma(X)$, is the m-tuple defined by $(\sigma(X))(A) = \sigma(A)$ for every $A$ in $X$ and $(\sigma(X))(A) = \varnothing$ for any $A$ not in $X$. Given an m-tuple $\sigma$, the set $tuples(\sigma)$ denotes the set of all tuples $t$ such that $sch(t) = sch(\sigma)$ and, for every $A$ in $sch(t)$, $t.A$ belongs to $\sigma(A)$. □ Algorithm 1. Input: a table $T$ over $U$ and a normalized set $FD$ of functional dependencies over $U$. Output: an m-table denoted by $m\text{-}Chase(T)$. 1: $\Sigma := \{\sigma_t \mid t \in T\}$ // $\sigma_t$ is the m-tuple such that $\sigma_t(A) = \{t.A\}$ for $A \in sch(t)$ 2: change := true 3: while change = true do 4: change := false 5: for all $\sigma$ and $\sigma'$ in $\Sigma$ do 6: for all $X \to A$ in $FD$ such that $XA \subseteq sch(\sigma)$ and $XA \subseteq sch(\sigma')$ do 7: if $tuples(\sigma(X)) \cap tuples(\sigma'(X)) \neq \emptyset$ then 8: apply the m-Chase rule to $\sigma$ and $\sigma'$ 9: change := true 10: $m\text{-}Chase(T) := \Sigma$ 11: return $m\text{-}Chase(T)$. Given an m-tuple $\sigma$, the set $\sigma(A)$ is denoted by the concatenation of its elements between parentheses, and $\sigma$ is denoted by the concatenation of all $\sigma(A)$ such that $\sigma(A) \neq \varnothing$. Moreover, $\sigma \subseteq \sigma'$ denotes the 'component-wise inclusion' of $\sigma$ in $\sigma'$, that is, $\sigma \subseteq \sigma'$ holds if for every $A \in sch(\sigma)$, $\sigma(A) \subseteq \sigma'(A)$.
Considering that a tuple $t$ can be seen as an m-tuple $\tilde{t}$ whose components are either empty or singletons (i.e., $t.A = a$ if and only if $\tilde{t}(A) = (a)$), we consider that $\subseteq$ may be applied indifferently to tuples and m-tuples. We call an m-table over $U$ any finite set of m-tuples over $U$. For all $\sigma$ and $\sigma'$ in an m-table $\Sigma$, and for every $X \to A$ such that $XA \subseteq sch(\sigma)$ and $XA \subseteq sch(\sigma')$, the following rule, called the m-Chase rule, generalizes the chase rule. m-Chase rule: Let $\sigma_1 = \sigma \cup \sigma'(A)$ and $\sigma_1' = \sigma' \cup \sigma(A)$. Case of $\sigma_1 \subseteq \sigma_1'$: replace $\sigma$ with $\sigma_1'$, and remove $\sigma_1$. Case of $\sigma_1' \subseteq \sigma_1$: replace $\sigma'$ with $\sigma_1$, and remove $\sigma_1'$. Otherwise: replace $\sigma$ and $\sigma'$ with $\sigma_1$ and $\sigma_1'$, respectively. As shown in Algorithm 1, the algorithm consists in applying the above m-Chase rule whenever $tuples(\sigma(X)) \cap tuples(\sigma'(X)) \neq \emptyset$, until no further transformation is possible. The output is an m-table denoted by $m\text{-}Chase(T)$. It has been shown in [14] that this algorithm always terminates and that the partition semantics of tuples in $\tau$ (as introduced in [15] and extended in [13,14]) can be defined based on $m\text{-}Chase(T)$ as follows: Proposition 1 Let $T$ be a table over universe $U$ with $FD$ as its set of functional dependencies. The following holds: 1.
A tuple $t$ in $\tau$ is in $\mathsf{True}(\mathcal{T})$ if and only if there exists $\sigma$ in $m\text{-}Chase(T)$ such that: – $sch(t) \subseteq sch(\sigma)$ (i.e., $\sigma$ has nonempty components over the attributes on which $t$ is defined), – $t \subseteq \sigma$. 2. A tuple $t$ in $\tau$ is in $\mathsf{Confl}(\mathcal{T})$ if and only if there exists $\sigma$ in $m\text{-}Chase(T)$ such that: – $sch(t) \subseteq sch(\sigma)$ and $t \subseteq \sigma$, – there exists $X \to A$ in $FD$ such that $XA \subseteq sch(t)$ and $|tuples(\sigma(A))| > 1$. 3. A tuple $t$ in $\tau$ is in $\mathsf{Cons}(\mathcal{T})$ if and only if there exists $\sigma$ in $m\text{-}Chase(T)$ such that: – $sch(t) \subseteq sch(\sigma)$ and $t \subseteq \sigma$, – for every $X \to A$ in $FD$ such that $XA \subseteq sch(t)$, $|tuples(\sigma(A))| = 1$. 4. For every $\sigma$ in $m\text{-}Chase(T)$ and every $S \subseteq sch(\sigma)$, either $tuples(\sigma(S)) \subseteq \mathsf{Cons}(\mathcal{T})$ or $tuples(\sigma(S)) \subseteq \mathsf{Confl}(\mathcal{T})$. □ As shown in [14], the computation of $m\text{-}Chase(T)$ is in $\mathcal{O}(|m\text{-}Chase(T)|^3 \cdot \delta^2)$, where $\delta$ is the maximal cardinality of the components of m-tuples in $m\text{-}Chase(T)$, which is precisely the maximum number of $A$-values associated with $X$-values when $X \to A$ is a functional dependency in $FD$. As Algorithm 1 shows that $|m\text{-}Chase(T)| \leq |T|$, the computation of $m\text{-}Chase(T)$ is in $\mathcal{O}(|T|^3 \cdot \delta^2)$, i.e., polynomial in the size of $T$.
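Algorithm 1 can be exercised on a toy re-implementation. This sketch is NOT a verbatim transcription of the algorithm: it assumes that the rule also fires when only one of the two m-tuples carries the $A$-component (mirroring the null case of the classical chase rule), and it collapses the subsumption cases of the m-Chase rule into keeping only the maximal m-tuples after each pass.

```python
def m_chase(T, fds):
    """Toy sketch of Algorithm 1; m-tuples are dicts attr -> frozenset of values."""
    sigma = [{a: frozenset([v]) for a, v in t.items()} for t in T]

    def x_match(s, sp, X):
        # tuples(s(X)) and tuples(sp(X)) intersect iff every X-component intersects
        return all(a in s and a in sp and s[a] & sp[a] for a in X)

    def canon(ms):
        return {frozenset(m.items()) for m in ms}

    while True:
        new = [dict(m) for m in sigma]
        for s in new:
            for sp in new:
                if s is sp:
                    continue
                for X, A in fds:
                    if x_match(s, sp, X):
                        merged = s.get(A, frozenset()) | sp.get(A, frozenset())
                        if merged:
                            s[A] = sp[A] = merged   # union the A-components
        # de-duplicate, then keep only maximal m-tuples (component-wise inclusion)
        uniq = list({frozenset(m.items()): m for m in new}.values())
        new = [m for m in uniq
               if not any(m is not o and all(a in o and m[a] <= o[a] for a in m)
                          for o in uniq)]
        if canon(new) == canon(sigma):
            return sigma
        sigma = new

# Example 2: T = {ab, bc, ac'} with FD = {A -> C, B -> C}
T = [{"A": "a", "B": "b"}, {"B": "b", "C": "c"}, {"A": "a", "C": "c'"}]
result = m_chase(T, [({"A"}, "C"), ({"B"}, "C")])
print(len(result), sorted(result[0]["C"]))
```

On Example 2 the sketch converges to a single m-tuple with $C$-component $(cc')$, matching $m\text{-}Chase(T) = \{(a)(b)(cc')\}$.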
To illustrate Algorithm 1 and Proposition 1, consider again the context of Example 2 where $U = \{A, B, C\}$, $FD = \{A \to C, B \to C\}$ and $T = \{ab, bc, ac'\}$. Running Algorithm 1 yields the following steps: – The algorithm starts with the m-table $\Sigma = \{(a)(b), (b)(c), (a)(c')\}$. – Applying $BC$ to the first two m-tuples, we obtain $\Sigma = \{(a)(b)(c), (a)(c')\}$. – Applying now $AC$ to these two m-tuples, we obtain $\Sigma = \{(a)(b)(cc')\}$. Since no new m-tuple can be generated from $\Sigma$, $m\_Chase(T) = \{(a)(b)(cc')\}$ is returned by Algorithm 1, and so, by Proposition 1, it follows that: – $\mathsf{True}(\mathcal{T})$ is the set of all sub-tuples of tuples in $\mathsf{tuples}((a)(b)(cc'))$, that is $\mathsf{True}(\mathcal{T}) = \{abc, abc', ab, ac, ac', bc, bc', a, b, c, c'\}$. In other words, there are no false tuples in this example. $- \ \mathsf{Confl}(\mathcal{T}) = \{abc, abc', ac, ac', bc, bc'\}$. $- \ \mathsf{Cons}(\mathcal{T}) = \mathsf{True}(\mathcal{T}) \setminus \mathsf{Confl}(\mathcal{T})$, that is $\mathsf{Cons}(\mathcal{T}) = \{ab, a, b, c, c'\}$. In the following section, we first define the notion of star schema and then we show that the results from [14] that have just been recalled apply in this context as well.
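The three membership tests of Proposition 1 translate directly into code. The sketch below (helper name and dict-of-frozensets representation are ours) returns on the first m-tuple containing $t$; by item 4 of Proposition 1, the classification does not depend on which such $\sigma$ is found.

```python
def classify(t, chased, fds):
    """Classify tuple t (dict attr -> value) as 'confl', 'cons' or 'false'
    following Proposition 1; 'true' means 'confl' or 'cons'.
    chased: m-table as dicts attr -> frozenset; fds: list of (X, A)."""
    for sigma in chased:
        # sch(t) ⊆ sch(σ) and t ⊆ σ  (t is a true tuple)
        if all(a in sigma and v in sigma[a] for a, v in t.items()):
            # conflicting iff some XA ⊆ sch(t) has a multi-valued A-component
            conflicting = any(
                set(X) | {A} <= t.keys() and len(sigma[A]) > 1
                for X, A in fds)
            return "confl" if conflicting else "cons"
    return "false"
```

On $m\_Chase(T) = \{(a)(b)(cc')\}$ of Example 2, $ab$ is classified as consistent and $ac$ as conflicting, matching the sets listed above.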
# 3 Star Schemas

# 3.1 The Context of our Approach

We first recall from the literature [16] that a star schema, as considered in our approach, consists of the following tables and constraints: $-$ $n$ dimension tables $D_1, \ldots, D_n$. For $i = 1, \ldots, n$, $D_i$ is defined over attributes $K_i, A_i^1, \ldots, A_i^{d_i}$. For $i = 1, \ldots, n$, the schema of $D_i$ is denoted by $sch(D_i)$, and the set $sch(D_i) \setminus \{K_i\}$ is denoted by $sch^*(D_i)$. $-$ a fact table $F$ defined over $K_1, \ldots, K_n, M_1, \ldots, M_p$. The attributes $M_1, \ldots, M_p$ are called measures, and we denote by $\mathbf{M}$ the set of all measures, that is $\mathbf{M} = \{M_1, \ldots, M_p\}$. $- \ FD = \bigcup_{i=1}^{i=n} \{K_i \to A_i^j \mid j = 1, \ldots, d_i\} \cup \{K_1 \ldots K_n \to M_k \mid k = 1, \ldots, p\}$. In other words, for $i = 1, \ldots, n$, $K_i$ is the key of $D_i$ and $K_1 \ldots K_n$ is the key of $F$. We denote by $\mathbf{K}$ the set of all dimension keys, that is $\mathbf{K} = \{K_1, \ldots, K_n\}$. It is easy to see that if $FD$ is defined as above, then for every non-trivial functional dependency $XA$ in $FD^+$ we have $X \cap \mathbf{K} \neq \varnothing$. More precisely, if $A$ is in $\mathbf{K}$ then $A$ must occur in $X$, in which case $XA$ is trivial (because $FD$ contains no dependency whose right hand-side is in $\mathbf{K}$); if $A$ is in $sch^*(D_i)$ then $X$ must contain $K_i$; and if $A$ is in $\mathbf{M}$ then $X$ must contain $\mathbf{K}$.
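The set $FD$ of a star schema is fully determined by the dimension schemas and the measures. This can be sketched as follows (the representation, dimension names and helper name are ours, for illustration only):

```python
def star_schema_fds(dimensions, measures):
    """dimensions: dict mapping a dimension name to (key, [non-key attributes]);
    measures: list of measure attributes.
    Returns FD as a list of (X, A) pairs, X being a tuple of attributes."""
    fds = []
    keys = tuple(key for key, _ in dimensions.values())
    for key, attrs in dimensions.values():
        for a in attrs:
            fds.append(((key,), a))      # K_i -> A_i^j
    for m in measures:
        fds.append((keys, m))            # K_1 ... K_n -> M_k
    return fds
```

With the two dimensions and single measure of the running example introduced below, this yields exactly the five dependencies of that example.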
Thus, for every non-trivial functional dependency $XA$ in $FD^+$, there exists $X_0A$ in $FD$ such that $X_0 \subseteq X$. Since the left hand-sides of the dependencies in $FD$ cannot be reduced further, this means that $FD$ is normalized. On the other hand, as the left hand-sides of the functional dependencies in $FD$ are attributes in $\mathbf{K}$, not occurring in the right hand-sides of these dependencies, $FD$ is acyclic. As a consequence, all results in [14] apply in the context of star schemas. In what follows, we call data warehouse a database whose schema is a star schema. Moreover, we use the terms ‘data warehouse’ and ‘table’ instead of ‘multi-relation database’ and ‘relation’, to better fit the usual terminology when dealing with data warehouses. In our approach, it is thus possible to deal with data warehouses in which some of the tables $D_i$ or $F$ have missing values for some of their attributes. However, in order to consider cases that make sense in practice, we restrict missing values in the warehouse tables as follows: 1. For every $i = 1, \ldots, n$, every $t$ in $D_i$ is defined over the key attribute $K_i$ and over at least one non-key attribute in $sch^*(D_i)$. We consider that storing a key value with no associated non-key value makes no sense. 2. For every $t$ in $F$, $t$ is defined over $\mathbf{K}$ and over at least one measure attribute in $\mathbf{M}$. We consider that storing a fact with no associated measure value makes no sense. Example 3 We illustrate the above concepts using a toy example that will serve as a running example in the remainder of this paper. We consider two dimensions $D_1$ and $D_2$ such that $sch(D_1) = K_1A_1^1A_1^2$ and $sch(D_2) = K_2A_2^1A_2^2$.
Moreover, the fact table $F$ is such that $sch(F) = K_1K_2M_1$, meaning that we consider one measure attribute $M_1$. As specified above, we have $FD = \{K_1A_1^1, K_1A_1^2, K_2A_2^1, K_2A_2^2, K_1K_2M_1\}$. The content of the tables $D_1$, $D_2$ and $F$ is shown in Figure 1. Fig. 1 The tables of the data warehouse in our running example We observe that these tables are indeed those of a star schema and that they comply with the two restrictions above regarding missing values. Moreover, it should be emphasized that $D_1$ and $F$ do not satisfy $FD$. Indeed, the first two tuples in $D_1$ violate $K_1A_1^1$, and the second and third tuples in $F$ violate $K_1K_2M_1$. On the other hand, $D_2$ satisfies its two associated functional dependencies $K_2A_2^1$ and $K_2A_2^2$. We also stress that the key value $k_1''$ occurs in $D_1$ but not in $F$, whereas the key value $k_2''$ over $K_2$ occurs in $F$ but not in $D_2$. These two cases respectively illustrate that key values in a dimension table may not occur in the fact table, and that the foreign key constraint between a dimension table and the fact table may not be satisfied (contrary to what is generally assumed in the literature).

# 3.2 True, Consistent and Conflicting Tuples in a Star Schema

The following proposition states important properties regarding the m-tuples of the table $m\_Chase(T)$, where $T$ is the table collecting all tuples in the data warehouse. In the remainder of this paper, we refer to such a table as a star-table. Proposition 2 Let $T$ be a star-table over universe $U$. The following hold: 1.
For every $\sigma$ in $m\_Chase(T)$ and every $i = 1, \ldots, n$, if $K_i \in sch(\sigma)$ then $|\mathsf{tuples}(\sigma(K_i))| = 1$. Consequently, if $\mathbf{K} \subseteq sch(\sigma)$ then $|\mathsf{tuples}(\sigma(\mathbf{K}))| = 1$. 2. For every tuple $k$ over $\mathbf{K}$ in $\mathcal{T}$, there exists at most one $\sigma$ in $m\_Chase(T)$ such that $\mathbf{K} \subseteq sch(\sigma)$ and $\mathsf{tuples}(\sigma(\mathbf{K})) = \{k\}$. 3. Moreover, $m\_Chase(T)$ contains the following two kinds of m-tuples: (a) $\sigma$ for which there exists $i_0 \in \{1, \ldots, n\}$ such that: $- \ sch(\sigma) \subseteq sch(D_{i_0})$, $\mathsf{tuples}(\sigma(K_{i_0})) = \{k_{i_0}\}$ and for every $t \in F$, $t.K_{i_0} \neq k_{i_0}$, $-$ for every $A \in sch^*(D_{i_0})$, $\sigma(A) = \{a \mid (\exists q \in D_{i_0})(q.K_{i_0} = k_{i_0} \land q.A = a)\}$; (b) $\sigma$ such that $\mathbf{K} \subseteq sch(\sigma)$ and $\mathsf{tuples}(\sigma(\mathbf{K})) = \{k\}$, and $-$ for every $M_j \in \mathbf{M}$, $\sigma(M_j) = \{m_j \mid (\exists t \in F)(t.\mathbf{K} = k \land t.M_j = m_j)\}$, $-$ for every $i = 1, \ldots, n$ and every $A \in sch^*(D_i)$, $\sigma(A) = \{a \mid (\exists t \in D_i)(t.K_i = k.K_i \land t.A = a)\}$. Proof. 1. This result comes from the fact that, in order to generate multi-valued components of an attribute $A$, the algorithm $m\_Chase$ has to consider a functional dependency whose right hand-side is $A$.
Since, in the case of a star-table, no dependency in $FD$ has its right hand-side in $\mathbf{K}$, the proof of this item is complete. 2. Let $\sigma_1$ and $\sigma_2$ be in $m\_Chase(T)$ such that $\mathbf{K} \subseteq sch(\sigma_i)$ for $i = 1, 2$ and $\sigma_1(\mathbf{K}) = \sigma_2(\mathbf{K})$. Since $\sigma_1 \neq \sigma_2$ (the algorithm eliminates duplicates), there exists $A$ not in $\mathbf{K}$ such that $\sigma_1(A) \neq \sigma_2(A)$. Since $A$ is not in $\mathbf{K}$, either there is $i_0$ in $\{1, \ldots, n\}$ such that $A \in sch^*(D_{i_0})$, or $A$ is in $\mathbf{M}$. In either case, $FD$ contains a dependency of the form $XA$, where $X = K_{i_0}$ in the former case and $X = \mathbf{K}$ in the latter case. Since we have $\sigma_1(X) = \sigma_2(X)$, applying $m\_Chase$ to $m\_Chase(T)$ changes $\sigma_1(A)$ and $\sigma_2(A)$ into $\sigma_1(A) \cup \sigma_2(A)$. As, by definition of $m\_Chase(T)$, we have $m\_Chase(m\_Chase(T)) = m\_Chase(T)$, we obtain a contradiction, which completes the proof. 3. We first notice that, as stated above, for every $K_i$ in $\mathbf{K}$ and every $\sigma$ in $m\_Chase(T)$ such that $K_i$ is in $sch(\sigma)$, we have $|\mathsf{tuples}(\sigma(K_i))| = 1$. This is so because no functional dependency in $FD$ has a right hand-side in $\mathbf{K}$, which makes it impossible to generate conflicts on these attributes.
Moreover, the m-tuples of the first kind are the result of joining two rows $\rho_1$ and $\rho_2$ in the current table such that $sch(\rho_1) = sch(\rho_2) = sch(D_{i_0})$ and $\rho_1(K_{i_0}) = \rho_2(K_{i_0})$, where this key value does not occur in $F$, thus preventing these m-tuples from being combined with m-tuples over $sch(F)$. Similarly, the m-tuples of the second kind are obtained by joining every tuple $km$ in $F$ with the tuples $t_i$ in $D_i$ such that $k.K_i = t_i.K_i$ ($i = 1, \ldots, n$). We thus obtain that the set $\Sigma$ of all m-tuples as described above occurs in $m\_Chase(T)$. To end the proof, we now argue that applying the $m\_Chase$ procedure to $\Sigma$ has no effect. This is so because, for every $XA$ in $FD$ and all rows $\rho_1$ and $\rho_2$ in $\Sigma$ such that $\rho_1(X) \cap \rho_2(X) \neq \emptyset$, the $X$-values of these rows are reduced to one tuple and thus we have $\rho_1(X) = \rho_2(X)$. By definition of $\Sigma$, it follows that $\rho_1(A) = \rho_2(A)$, which completes the proof. □ Fig. 2 The star-table $T$ and the m-table $m\_Chase(T)$ of Example 3 In the context of Example 3, the star-table $T$ and its associated m-table $m\_Chase(T)$ are shown in Figure 2. In particular, it can be seen that, in $m\_Chase(T)$, the last m-tuple satisfies item 3.a of Proposition 2 while the first three m-tuples satisfy item 3.b. As will be seen later, these three m-tuples are the ones relevant for analytic queries. Based on the fact that if a tuple $t$ is conflicting then all its true super-tuples are conflicting as well, the set $\mathsf{Confl}(\mathcal{T})$ can be characterized by means of its minimal tuples with respect to $\sqsubseteq$.
More precisely, denoting this set by $\mathsf{Confl}_{\min}(\mathcal{T})$, we have $\mathsf{Confl}(\mathcal{T}) = \{t \in \mathsf{True}(\mathcal{T}) \mid (\exists q \in \mathsf{Confl}_{\min}(\mathcal{T}))(q \subseteq t)\}$. Using Proposition 1, the set $\mathsf{Confl}_{\min}(\mathcal{T})$ is characterized as follows: $t$ is in $\mathsf{Confl}_{\min}(\mathcal{T})$ if and only if one of the following two statements holds: – there exist $\sigma$ in $m\_Chase(T)$, $i_0$ in $\{1, \ldots, n\}$ and $A$ in $sch^*(D_{i_0}) \cap sch(\sigma) \cap sch(t)$ such that $t = k_{i_0}a$ is in $\mathsf{tuples}(\sigma(K_{i_0}A))$ and $|\mathsf{tuples}(\sigma(A))| > 1$; – there exists $\sigma$ in $m\_Chase(T)$ such that $\mathbf{K} \subseteq sch(\sigma) \cap sch(t)$, and there exists $M_i$ in $\mathbf{M} \cap sch(\sigma) \cap sch(t)$ such that $t = km_i$ is in $\mathsf{tuples}(\sigma(\mathbf{K}M_i))$ and $|\mathsf{tuples}(\sigma(M_i))| > 1$. By complementation with respect to $\mathsf{True}(\mathcal{T})$, a tuple $t$ is in $\mathsf{Cons}(\mathcal{T})$ if and only if it has no sub-tuple satisfying one of the above statements. Example 4 Applying Proposition 1 in the context of Example 3, for which Figure 2 displays the star-table $T$ and the m-table $m\_Chase(T)$, the sets $\mathsf{True}(\mathcal{T})$, $\mathsf{Confl}(\mathcal{T})$ and $\mathsf{Cons}(\mathcal{T})$ are as follows: – $\mathsf{True}(\mathcal{T})$ is the set of all sub-tuples of the tuples in $\mathsf{tuples}(\sigma)$ for every $\sigma$ in $m\_Chase(T)$.
Thus, $\mathsf{True}(\mathcal{T})$ is the set of all sub-tuples of: $- \ k_1k_2a_1a_2b_1b_2m_1$, $k_1k_2a_1'a_2b_1b_2m_1$, $- \ k_1k_2'a_1a_2b_1m_1'$, $k_1k_2'a_1'a_2b_1m_1'$, $k_1k_2'a_1a_2b_1m_1''$, $k_1k_2'a_1'a_2b_1m_1''$, $- \ k_1'k_2''a_1m_1$, $- \ k_1''a_1'a_2'$. $- \ \mathsf{Confl}(\mathcal{T})$ is the set of all true super-tuples of tuples in $\mathsf{Confl}_{\min}(\mathcal{T}) = \{k_1a_1, k_1a_1', k_1k_2'm_1', k_1k_2'm_1''\}$. The maximal tuples in $\mathsf{Confl}(\mathcal{T})$ are: $- \ k_1k_2a_1a_2b_1b_2m_1$, $k_1k_2a_1'a_2b_1b_2m_1$, $- \ k_1k_2'a_1a_2b_1m_1'$, $k_1k_2'a_1'a_2b_1m_1'$, $k_1k_2'a_1a_2b_1m_1''$, $k_1k_2'a_1'a_2b_1m_1''$. $- \ \mathsf{Cons}(\mathcal{T}) = \mathsf{True}(\mathcal{T}) \setminus \mathsf{Confl}(\mathcal{T})$.
Thus $\mathsf{Cons}(\mathcal{T})$ is the set of all sub-tuples of: $- \ k_1k_2a_2b_1b_2m_1$, $k_2a_1a_2b_1b_2m_1$, $k_2a_1'a_2b_1b_2m_1$, $- \ k_1k_2'a_2b_1$, $k_2'a_1a_2b_1m_1'$, $k_2'a_1'a_2b_1m_1'$, $k_2'a_1a_2b_1m_1''$, $k_2'a_1'a_2b_1m_1''$, $k_1a_2b_1m_1'$, $k_1a_2b_1m_1''$, $- \ k_1'k_2''a_1m_1$, $- \ k_1''a_1'a_2'$.

# 4 Repairs and their Basic Properties

In this section we adapt the definition of repair given in [14] to the case of a star-table, and then we further investigate repairs of star-tables.

# 4.1 Definition of Repairs

As explained in [14], contrary to most approaches to consistent query answering in the literature, it is not appropriate to define a repair of $T$ as a maximal subset of $\mathsf{True}(\mathcal{T})$ satisfying $FD$. This is so because it is intuitively justified to define a repair $R$ of $T$ so that $\mathsf{True}(\mathcal{R})$ contains $\mathsf{Cons}(\mathcal{T})$. Unfortunately, as shown earlier in Example 2, it may happen that a maximal consistent subset $S$ of $\mathsf{True}(\mathcal{T})$ does not contain all consistent tuples. On the other hand, it has also been shown in [14] that, in case of cyclic sets of functional dependencies (that is, if there exist $XA$ and $YB$ in $FD$ such that $A \in Y$ and $B \in X$ hold), the set $\mathsf{Cons}(\mathcal{T})$ may not satisfy $FD$. As, in the case of a star-table, the set $FD$ of functional dependencies is clearly acyclic, $\mathsf{Cons}(\mathcal{T})$ satisfies $FD$. We thus define repairs of star-tables as follows.
Definition 2 Let $T$ be a star-table over universe $U$. A repair $R$ of $T$ is a table over $U$ such that: 1. $\mathsf{Cons}(\mathcal{T}) \subseteq \mathsf{True}(\mathcal{R}) \subseteq \mathsf{True}(\mathcal{T})$; 2. $R \models FD$; 3. for every table $R'$ satisfying 1 and 2 above, and such that $\mathsf{True}(\mathcal{R}) \subseteq \mathsf{True}(\mathcal{R}')$, we have $\mathsf{True}(\mathcal{R}) = \mathsf{True}(\mathcal{R}')$. The set of all repairs of $T$ is denoted by $\mathsf{Rep}(T)$. It has been shown in [14] that the following holds, based on Definition 2: – For every table $T$, $\mathsf{Rep}(T) \neq \emptyset$. – If $T \models FD$ then, for every $R$ in $\mathsf{Rep}(T)$, $\mathsf{True}(\mathcal{R}) = \mathsf{True}(\mathcal{T})$. In this case $R$ and $T$ carry the same information, but the tables $R$ and $T$ might not be equal. For example, for $FD = \varnothing$ and $T = \{abc, ac\}$, $R = \{abc\}$ is a repair of $T$. Elaborating briefly on the remark in the second item above, we notice that two distinct tables defined over the same universe and the same set of functional dependencies can have the same sets of true tuples. Databases such as $T$ and $R$ above are said to be equivalent, and when we refer to a table $T$ we in fact refer to any table $\widetilde{T}$ such that $\mathsf{True}(\mathcal{T}) = \mathsf{True}(\widetilde{\mathcal{T}})$. The following basic theorem has been shown in [14] to hold whenever the set $FD$ is acyclic. Theorem 1 Let $T$ be a table over universe $U$ and $FD$ an acyclic set of functional dependencies over $U$.
Then: $$ \mathsf{Cons}(\mathcal{T}) = \bigcap_{R \in \mathsf{Rep}(T)} \mathsf{True}(\mathcal{R}). $$

# 4.2 Repairs of Star-Tables

In this section, we show that, in the case of star-tables, repairs satisfy important properties that do not hold in general. Another important specificity of star-tables is that our notion of repair coincides with that in the literature, applied to $\mathsf{True}(\mathcal{T})$. More precisely, we show that if $T$ is a star-table and $S$ is a maximal consistent subset of $\mathsf{True}(\mathcal{T})$, then $S$ is a repair of $T$ in the sense of Definition 2, implying in particular that $\mathsf{True}(\mathcal{S})$ contains $\mathsf{Cons}(\mathcal{T})$. As a consequence, in Definition 2, the first item can be more simply stated as $\mathsf{True}(\mathcal{R}) \subseteq \mathsf{True}(\mathcal{T})$. This result relies on the following two preliminary lemmas. Lemma 1 Let $T$ be a star-table over universe $U$, $S$ a subset of $\mathsf{True}(\mathcal{T})$ and $t$ a tuple in $\mathsf{Cons}(\mathcal{T})$. If $S \models FD$ then $S \cup \{t\} \models FD$. Proof. Since $S \models FD$, every $\sigma$ in the m-table $m\_Chase(S)$ is such that, for every $A$ in $sch(\sigma)$, $\sigma(A) = (a)$. Thus $m\_Chase(S)$ can be seen as a table that we denote by $S^*$. Moreover, denoting by $S_t$ the table $S \cup \{t\}$, we have $m\_Chase(S_t) = m\_Chase(S^* \cup \{t\})$. Let $S_t^* = S^* \cup \{t\}$, and let us consider the computation of $m\_Chase(S_t^*)$. To this end, given $q_1$ and $q_2$ in $S_t^*$ and $XA$ in $FD$ such that $q_1.X = q_2.X$, the only possible cases are as follows: – If $q_1$ and $q_2$ are in $S^*$, then either $q_1$ and $q_2$ are not defined over $A$, or they are both defined over $A$ and such that $q_1.A = q_2.A$. In this case $m\_Chase$ does not change $S_t^*$. – If $q_1 \in S_t^*$, $q_2 = t$, and $q_1$ and $q_2$ are not defined over $A$, then $m\_Chase$ does not change $S_t^*$. – If $q_1 \in S_t^*$, $q_2 = t$, and $q_1$ and $q_2$ are defined over $A$: since $t \in \mathsf{Cons}(\mathcal{T})$, it is not possible that $q_1.A \neq t.A$. Thus, in this case again, $m\_Chase$ does not change $S_t^*$. – If $q_1 \in S_t^*$, $q_2 = t$, $q_1$ is defined over $A$ and $q_2$ is not defined over $A$, then $m\_Chase$ changes $t$ into $ta$ where $a = q_1.A$. – If $q_1 \in S_t^*$, $q_2 = t$, $q_1$ is not defined over $A$ and $q_2$ is defined over $A$, then $m\_Chase$ changes $q_1$ into $q_1a$ where $a = t.A$. Based on the previous cases, we denote by $\Sigma$ the table obtained from $S_t^*$ by the following transformations: (1) For every $q$ in $S^*$ and every $XA$ in $FD$ such that $q.X = t.X$, $q$ is not defined over $A$ and $t.A = a$: in $\Sigma$, $q.A$ is set to $a$. (2) For every $q$ in $S^*$ and every $XA$ in $FD$ such that $q.X = t.X$, $q.A = a$ and $t$ is not defined over $A$: in $\Sigma$, $t.A$ is set to $a$. Since, in a star schema, for every attribute $A$ in $U$ there is at most one functional dependency whose right hand-side is $A$, the construction of $\Sigma$ cannot generate conflicts, thus entailing that $\Sigma$ contains no conflicts.
We now show that $m\_Chase(\Sigma) = \Sigma$. Indeed, let $q_1$ and $q_2$ be in $\Sigma$ and $XA$ be in $FD$. The only possibility for $m\_Chase$ to change $\Sigma$ is that $q_1.X = q_2.X$, $q_1.A = a$ and $q_2$ is not defined over $A$. As this does not happen in $S^*$ (because $m\_Chase(S^*) = S^*$), and as this does not happen in $\Sigma$ (by definition of $\Sigma$), we obtain that $m\_Chase(\Sigma) = \Sigma$. It follows that $m\_Chase(S_t^*) = \Sigma$, and since we have seen that no conflicts occur in $\Sigma$, $S_t^* \models FD$ holds. The proof is therefore complete. □ The following lemma is a consequence of Lemma 1. Lemma 2 Let $T$ be a star-table over universe $U$ and $S$ a subset of $\mathsf{True}(\mathcal{T})$. If $S \models FD$ then $S \cup \mathsf{Cons}(\mathcal{T}) \models FD$. Proof. If $N$ denotes the number of tuples in $\mathsf{Cons}(\mathcal{T})$, applying Lemma 1 successively to each tuple of $\mathsf{Cons}(\mathcal{T})$ generates a sequence of sets $S_i$ ($i = 1, \ldots, N$) such that $S_i \models FD$ and $\mathsf{Cons}(\mathcal{T}) \subseteq \mathsf{True}(S_N)$. Since $\mathsf{True}(S_N) = \mathsf{True}(S \cup \mathsf{Cons}(\mathcal{T}))$, the proof is complete. □ We now state the expected result that every maximal and consistent subset of $\mathsf{True}(\mathcal{T})$ defines a repair of $T$.
Proposition 3 Let $T$ be a star-table over universe $U$, and $S$ a subset of $\mathsf{True}(\mathcal{T})$ such that $S \models FD$ and there is no subset $S'$ of $\mathsf{True}(\mathcal{T})$ such that $S' \models FD$ and $\mathsf{True}(S)$ is a strict subset of $\mathsf{True}(S')$. Then $S$ is in $\mathsf{Rep}(T)$. Proof. By Definition 2, in order to show that $S \in \mathsf{Rep}(T)$, we only have to prove that $\mathsf{Cons}(\mathcal{T}) \subseteq \mathsf{True}(S)$. If such is not the case, by Lemma 2, $\mathsf{True}(S) \cup \mathsf{Cons}(\mathcal{T}) \models FD$. We thus obtain a consistent strict super-set of $\mathsf{True}(S)$, which is a contradiction with the last statement in the proposition. The proof is therefore complete. □ As a consequence of Proposition 3, we show that, in the case of a star-table, given a true tuple $t$, there exists a repair in which this tuple is true, and that, moreover, if this tuple is conflicting, there exists a repair in which this tuple is not true. It should be noticed here that the second result is shown in [14], whereas the first one is not. Proposition 4 Let $T$ be a star-table over universe $U$. Then the following statements hold: – For every $t$ in $\mathsf{True}(\mathcal{T})$ there exists $R$ in $\mathsf{Rep}(T)$ such that $t$ is in $\mathsf{True}(\mathcal{R})$. – For every $t$ in $\mathsf{Confl}(\mathcal{T})$ there exists $R$ in $\mathsf{Rep}(T)$ such that $t$ is not in $\mathsf{True}(\mathcal{R})$. Proof.
To show the first statement: based on the fact that, for every tuple $t$ in $\mathcal{T}$, $\{t\} \models FD$ trivially holds, and as $\mathsf{True}(\mathcal{T})$ is finite, there exists a maximal subset $S$ of $\mathsf{True}(\mathcal{T})$ such that $t \in S$ and $S \models FD$. By Proposition 3, $S$ is in $\mathsf{Rep}(T)$. Thus the proof of the first statement of the proposition is complete. The second statement being the subject of Proposition 4 of [14], the proof of the proposition is complete. □ As a last property of repairs of star-tables, we show below that if tuples are conflicting, then for every repair $R$, one of the tuples involved in the conflict is true in $R$. Proposition 5 Let $T$ be a star-table over universe $U$. For a given $XA$ in $FD$ and a given tuple $x$ over $X$, let $\{xa_1, \ldots, xa_k\}$ be the set of all tuples over $XA$ that belong to $\mathsf{True}(\mathcal{T})$. Then, for every $R$ in $\mathsf{Rep}(T)$, there exists $i_0$ in $\{1, \ldots, k\}$ such that $xa_{i_0}$ is in $\mathsf{True}(\mathcal{R})$. Proof. It is easy to show that the proposition holds for $k = 1$, since in this case $xa_1$ is in $\mathsf{Cons}(\mathcal{T})$, and thus in $\mathsf{True}(\mathcal{R})$ for every $R$ in $\mathsf{Rep}(T)$. We now assume that $k > 1$, which implies that, for every $\sigma$ in $m\_Chase(T)$ such that $XA \subseteq sch(\sigma)$ and $x$ is in $\mathsf{tuples}(\sigma(X))$, $\mathsf{tuples}(\sigma(XA)) = \{xa_1, \ldots, xa_k\}$ with $k > 1$.
Moreover, given the form of the functional dependencies in $FD$, either $(a)$ $X = \mathbf{K}$ and $A$ is in $\mathbf{M}$, or $(b)$ there exists $i_0$ in $\{1, \ldots, n\}$ such that $X = K_{i_0}$ and $A$ is in $sch^*(D_{i_0})$. Let $R$ be in $\mathsf{Rep}(T)$ such that $\mathsf{True}(\mathcal{R})$ contains no $xa_l$ from $\{xa_1, \ldots, xa_k\}$, and let $l_0$ be in $\{1, \ldots, k\}$. Denoting by $R'$ the table $R \cup \{xa_{l_0}\}$, we show that, in either case $(a)$ or $(b)$ above, $R' \models FD$. To this end, we first notice that, since every $k$ over $\mathbf{K}$ and every $k_i$ over $K_i$ are in $\mathsf{Cons}(\mathcal{T})$, the tuple $x$ is in $\mathsf{True}(\mathcal{R})$. Hence, $\mathsf{True}(\mathcal{R})$ contains tuples whose $X$-value is $x$ and, for every such tuple, the $A$-value is missing. The cases $(a)$ and $(b)$ are then handled as follows when computing $m\_Chase(R')$: every $t$ in $\mathsf{True}(\mathcal{R}')$ such that $t.X = x$ is considered and its $A$-value is set to $a_{l_0}$. Denoting by $\Sigma$ the resulting m-table, let $X'A'$ be a dependency in $FD$, and let $\sigma_1$ and $\sigma_2$ be two m-tuples in $\Sigma$ such that $\sigma_1(X') = \sigma_2(X')$, where $X'$ is either $\mathbf{K}$ or $K_j$. – If $X' = X$ then, by definition of $\Sigma$, $\sigma_1(A) = \sigma_2(A) = (a_{l_0})$. Thus $\Sigma$ is not changed.
– If $X' = K_j$, $X = K_i$ and $i \neq j$, then $\sigma_1$ and $\sigma_2$ are both in $R'$ and their $A'$-components have not been changed when building $\Sigma$ (because $K_j \neq K_i$ implies $A' \neq A$). Since $m\_Chase(\mathsf{True}(\mathcal{R}')) = \mathsf{True}(\mathcal{R}')$ and $R' \models FD$, either $A'$ is in $sch(\sigma_1) \cap sch(\sigma_2)$ and $\sigma_1(A') = \sigma_2(A')$, or $A'$ is neither in $sch(\sigma_1)$ nor in $sch(\sigma_2)$. In either case, $\Sigma$ is not changed. – If $X' = K_j$ and $X = \mathbf{K}$, since as above we have $A' \neq A$, $m\_Chase$ does not change $\Sigma$. – If $X' = \mathbf{K}$ and $X = K_i$, then in $\mathsf{True}(\mathcal{R}')$ we have $\sigma_1(X) = \sigma_2(X) = x$, and $A$ is neither in $sch(\sigma_1)$ nor in $sch(\sigma_2)$. By definition of $\Sigma$, $\sigma_1(A)$ and $\sigma_2(A)$ are set to $(a_{l_0})$, and then, here again, $m\_Chase$ does not change $\Sigma$. We thus obtain that $\Sigma = m\_Chase(R')$ and that $R' \models FD$, because $R \models FD$ and no conflicts occur in $\Sigma$. Moreover, as $\mathsf{True}(\mathcal{R}) \subseteq \mathsf{True}(\mathcal{R}')$, we have $\mathsf{Cons}(\mathcal{T}) \subseteq \mathsf{True}(\mathcal{R}')$.
It follows from Definition 2 that $R$ cannot be in $\mathsf{Rep}(T)$, since the inclusion $\mathsf{True}(\mathcal{R}) \subset \mathsf{True}(\mathcal{R}')$ is strict (because $xa_{l_0}$ is in $\mathsf{True}(\mathcal{R}')$ but not in $\mathsf{True}(\mathcal{R})$). As this contradicts our hypothesis that $R$ is in $\mathsf{Rep}(T)$, the proof is complete. □

We now show how to characterize all repairs of a star-table $T$, based on $m\_Chase(T)$. To this end, we define the following process (P):

Step P1. For every $i$ in $\{1, \ldots, n\}$ and every $A_i^j$ in $sch^*(D_i)$, if $k_i$ is a $K_i$-value occurring in $m\_Chase(T)$, we choose one $A_i^j$-value in the set $\sigma(A_i^j)$, where $\sigma$ is any m-tuple in $m\_Chase(T)$ such that $K_i A_i^j \subseteq sch(\sigma)$ and $\sigma(K_i) = (k_i)$. Denoting by $\varphi_i^j(k_i)$ this value, we notice that $\varphi_i^j(k_i)$ is well defined because, thanks to $K_i \to A_i^j$ in $FD$, all $\sigma$ in $m\_Chase(T)$ such that $K_i A_i^j \subseteq sch(\sigma)$ and $\sigma(K_i) = (k_i)$ have the same value over $A_i^j$.

Step P2. For every $l$ in $\{1, \ldots, p\}$, if $k$ is a $\mathbf{K}$-value occurring in $m\_Chase(T)$, we choose one $M_l$-value in the set $\sigma(M_l)$, where $\sigma$ is any m-tuple in $m\_Chase(T)$ such that $\mathbf{K} M_l \subseteq sch(\sigma)$ and $\sigma(\mathbf{K}) = (k)$.
Denoting by $\varphi_l(k)$ this value, we notice that, as above, $\varphi_l(k)$ is well defined because, thanks to $\mathbf{K} \to M_l$ in $FD$, all $\sigma$ in $m\_Chase(T)$ such that $\mathbf{K} M_l \subseteq sch(\sigma)$ and $\sigma(\mathbf{K}) = (k)$ have the same value over $M_l$.

Step P3. Denoting by $\varphi$ the set of all $\varphi_i^j(k_i)$ and all $\varphi_l(k)$ defined above, for every $\sigma$ in $m\_Chase(T)$, let $t_\varphi(\sigma)$ be the tuple such that $sch(t_\varphi(\sigma)) = sch(\sigma)$, defined by:
– For every $i = 1, \ldots, n$, if $K_i \in sch(\sigma)$ and $\sigma(K_i) = (k_i)$, then $t_\varphi(\sigma).K_i = k_i$ and, for every $j$ such that $A_i^j \in sch(\sigma)$, $t_\varphi(\sigma).A_i^j = \varphi_i^j(k_i)$.
– If $\mathbf{K} \subseteq sch(\sigma)$ and $\sigma(\mathbf{K}) = (k)$, then for every $l = 1, \ldots, p$ such that $M_l \in sch(\sigma)$, let $t_\varphi(\sigma).M_l = \varphi_l(k)$.

Fig. 3 The repairs of the star-table $T$ of Example 3

Denoting by $R_\varphi$ the table $R_\varphi = \{t_\varphi(\sigma) \mid \sigma \in m\_Chase(T)\} \cup \mathsf{Cons}(\mathcal{T})$, the following proposition states that $R_\varphi$ is a repair and that all repairs are obtained through such a process.

Proposition 6 Let $T$ be a star-table over $U$. $R$ is a repair of $T$ if and only if there is $\varphi$ as defined above such that $\mathsf{True}(\mathcal{R}) = \mathsf{True}(\mathcal{R}_\varphi)$.

Proof. See Appendix A.

We illustrate repairs of star-tables in the context of our running Example 3 as follows.
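As a programmatic complement, steps P1–P3 of process (P) can be sketched as below. This is a minimal sketch under an assumed encoding that is ours, not the paper's: each m-tuple is a dict mapping attribute names to sets of candidate values, and each functional dependency is a pair (key attributes, dependent attribute).

```python
def build_repair(m_chase, fds, choose=min):
    """Build one repair R_phi from m_Chase(T) (steps P1-P3 of process (P)).

    m_chase: list of m-tuples, each a dict attribute -> set of values.
    fds:     list of (lhs, rhs) pairs: lhs a tuple of key attributes, rhs a
             dependent attribute (sketching the FDs K_i -> A_i^j, K -> M_l).
    choose:  picks one value from each candidate set (steps P1/P2);
             varying this choice function enumerates the different repairs.
    """
    # Steps P1/P2: fix one dependent value phi per key-value and attribute.
    phi = {}
    for lhs, rhs in fds:
        for sigma in m_chase:
            if set(lhs) <= sigma.keys() and rhs in sigma:
                k = tuple(min(sigma[a]) for a in lhs)   # key value (singleton sets)
                phi.setdefault((lhs, k, rhs), choose(sigma[rhs]))
    # Step P3: turn every m-tuple sigma into the ordinary tuple t_phi(sigma).
    repair = []
    for sigma in m_chase:
        t = {a: min(vals) for a, vals in sigma.items()}  # default choice
        for lhs, rhs in fds:
            if set(lhs) <= sigma.keys() and rhs in sigma:
                k = tuple(min(sigma[a]) for a in lhs)
                t[rhs] = phi[(lhs, k, rhs)]              # enforce phi everywhere
        repair.append(t)
    return repair
```

By construction, two m-tuples sharing a key value receive the same dependent value, which mirrors the well-definedness argument of steps P1 and P2.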
Example 5 In the context of Example 3, $T$ has four repairs, as shown in Figure 3. It can be seen from the m-table $m\_Chase(T)$ in Figure 2 that, in each of these four tables:
– the first two rows are generated using the first m-tuple in $m\_Chase(T)$, namely $(k_1)(k_2)(a_1 a_1')(a_2)(b_1)(b_2)(m_1)$,
– the next five rows are generated using the second m-tuple in $m\_Chase(T)$, namely $(k_1)(k_2')(a_1 a_1')(a_2)(b_1)(m_1' m_1'')$,
– the last two rows are generated using respectively the last two m-tuples in $m\_Chase(T)$, namely $(k_1')(k_2'')(a_1)(m_1)$ and $(k_1'')(a_1')(a_2')$.
To illustrate how process (P) works, we apply it to generate the repair $R_1$ shown in Figure 3.
– Step P1. Regarding attributes in $sch^*(D_1)$: for $A_1^1$, the three $K_1$-values $k_1$, $k_1'$ and $k_1''$ are respectively associated with $a_1$, $a_1$ and $a_1'$, while for $A_1^2$, the two $K_1$-values $k_1$ and $k_1''$ are associated with $a_2$ and $a_2'$, respectively (no $A_1^2$-value is associated with $k_1'$ in $T$).
Regarding attributes in $sch^*(D_2)$: for $A_2^1$, the two $K_2$-values $k_2$ and $k_2'$ are both associated with $b_1$ (no $A_2^1$-value is associated with $k_2''$ in $T$), and for $A_2^2$, the only $K_2$-value $k_2$ is associated with $b_2$ (no $A_2^2$-value is associated with $k_2'$ or with $k_2''$ in $T$).
– Step P2. The only measure attribute here is $M_1$, and the $\mathbf{K}$-values to be considered are $k_1 k_2$, $k_1 k_2'$ and $k_1' k_2''$, which are respectively associated with the $M_1$-values $m_1$, $m_1'$ and $m_1$.
– Step P3. Considering every m-tuple $\sigma$ of the table $m\_Chase(T)$ shown in Figure 2, we obtain the following four tuples: $k_1 k_2 a_1 a_2 b_1 b_2 m_1$, $k_1 k_2' a_1 a_2 b_1 m_1'$, $k_1' k_2'' a_1 m_1$ and $k_1'' a_1' a_2'$.
We obtain that $R_\varphi$ contains the four tuples above along with the tuples in $\mathsf{Cons}(\mathcal{T})$ that have been characterized in Example 3. Moreover, thoroughly inspecting the table $R_1$ and the set $R_\varphi$ shows that the two yield the same set of true tuples, corresponding to the same repair of $T$. □

# 5 Consistent Query Answering

Hereafter, given a star-table $T$, we consider different kinds of queries on $T$, all based on the SQL syntax:
1. Standard queries, or simply queries, to be defined in the next section.
These queries are simple projection-selection queries of the form select $X$ from $T$ where $\Gamma$, where $X$ is a list of attributes and $\Gamma$ an optional selection condition.
2. Analytic queries with no group-by statement, defined in Section 5.3. These queries are of the form select $X, aggr(M_i)$ from $T$ where $\Gamma$, where $X$ and $\Gamma$ are as above, and where $aggr$ is an aggregation operator and $M_i$ is a measure attribute.
3. Analytic queries with a group-by statement, also defined in Section 5.3. These queries are of the form select $X, aggr(M_i)$ from $T$ where $\Gamma$ group by $X$, where $X$, $\Gamma$, $aggr$ and $M_i$ are as above.
4. Analytic queries with a group-by-having statement, dealt with in Section 6.4. These queries are of the form select $X, aggr(M_i)$ from $T$ where $\Gamma$ group by $X$ having $\Theta$, where $X$, $\Gamma$, $aggr$ and $M_i$ are as above, and where $\Theta$ is a boolean expression involving aggregates.

# 5.1 Standard Queries

As mentioned just above, we use SQL as the query language and, as we query a single table $T$, the (standard) queries $Q$ that we consider have one of the following two forms:
$$Q: \mathsf{select}\ X\ \mathsf{from}\ T \qquad \text{or} \qquad Q: \mathsf{select}\ X\ \mathsf{from}\ T\ \mathsf{where}\ \Gamma$$
In either of these forms, $X$ is an attribute list seen as a relation schema, and in the second form the where clause specifies a selection condition $\Gamma$.
As in SQL the where clause in a query is optional, the generic form of a query $Q$ is denoted by
$$Q: \mathsf{select}\ X\ \mathsf{from}\ T\ [\mathsf{where}\ \Gamma]$$
The set of all attributes occurring in $\Gamma$ is called the schema of $\Gamma$, denoted by $sch(\Gamma)$, and the attribute set $X \cup sch(\Gamma)$ is called the schema of $Q$, denoted by $sch(Q)$. A selection condition $\Gamma$ is a well-formed formula involving the usual connectors $\neg$, $\vee$ and $\wedge$, built up from atomic Boolean comparisons of one of the following forms: $A \theta a$ or $A \theta A'$, where $\theta$ is a comparison predicate, $A$ and $A'$ are attributes in $U$ whose domain elements are comparable through $\theta$, and $a$ is in $dom(A)$. Given a selection condition $\Gamma$, we denote by $Sat(\Gamma)$ the set of all tuples in $\mathcal{T}(sch(\Gamma))$ satisfying $\Gamma$, as defined below:
• if $\Gamma$ is of the form $A \theta a$, $Sat(\Gamma) = \{t \in \mathcal{T}(sch(\Gamma)) \mid t.A \theta a\}$,
• if $\Gamma$ is of the form $A \theta A'$, $Sat(\Gamma) = \{t \in \mathcal{T}(sch(\Gamma)) \mid t.A \theta t.A'\}$,
• if $\Gamma$ is of the form $\Gamma_1 \vee \Gamma_2$, $Sat(\Gamma) = Sat(\Gamma_1) \cup Sat(\Gamma_2)$,
• if $\Gamma$ is of the form $\Gamma_1 \wedge \Gamma_2$, $Sat(\Gamma) = Sat(\Gamma_1) \cap Sat(\Gamma_2)$,
• if $\Gamma$ is of the form $\neg \Gamma_1$, $Sat(\Gamma) = \mathcal{T}(sch(\Gamma)) \setminus Sat(\Gamma_1)$.
As usual in the literature [1], we define the consistent answer to such a query $Q$ as the intersection of the answers to the query in every repair. The formal definition follows.
Definition 3 Let $T$ be a table over universe $U$ and $FD$ an acyclic set of functional dependencies over $U$. Given the query $Q$: select $X$ from $T$ [where $\Gamma$], the consistent answer to $Q$ in $T$, denoted by $\mathsf{C\_ans}(Q)$, is defined by:
$$\mathsf{C\_ans}(Q) = \bigcap_{R \in \mathsf{Rep}(T)} \mathsf{Ans}\left(Q^{[R]}\right)$$
where $Q^{[R]}$ is the query select $X$ from $R$ [where $\Gamma$], and $\mathsf{Ans}(Q^{[R]})$ is the answer to $Q^{[R]}$ in $R$. Formally, $\mathsf{Ans}(Q^{[R]})$ is the set of all tuples $x$ over $X$ such that there exists $\gamma$ in $Sat(\Gamma)$ such that $x\gamma$ is a tuple over $sch(Q)$ that belongs to $\mathsf{True}(\mathcal{R})$. □

It is important to notice that, as a consequence of Theorem 1, given a query $Q$, we have $\mathsf{C\_ans}(Q) \subseteq \mathsf{Cons}(\mathcal{T})$. This is so because every $x$ in $\mathsf{C\_ans}(Q)$ is true in every repair of $T$. We also point out that if $Q$ involves no selection condition, then Theorem 1 allows for characterizing $\mathsf{C\_ans}(Q)$ as the set of all tuples in $\mathsf{Cons}(\mathcal{T})$ whose schema is $X$. We however recall that the issue of easily characterizing $\mathsf{C\_ans}(Q)$ in the case of a query $Q$ involving a selection condition remains open. We recall in this respect from [14] that $\mathsf{C\_ans}(Q)$ can be bounded as follows.

Proposition 7 Let $T$ be a table over universe $U$ and $FD$ its associated acyclic set of functional dependencies.
Given a query $Q$: select $X$ from $T$ where $\Gamma$, let:
$$\begin{array}{l} \mathsf{C\_ans}^-(Q) = \{x \in \mathcal{T}(X) \mid (\exists t \in \mathcal{T}(sch(Q)))\,(t \in \mathsf{Cons}(\mathcal{T}) \wedge t.X = x \wedge t.sch(\Gamma) \in Sat(\Gamma))\} \\ \mathsf{C\_ans}^+(Q) = \{x \in \mathcal{T}(X) \mid (\exists t \in \mathcal{T}(sch(Q)))\,(t \in \mathsf{True}(\mathcal{T}) \wedge t.X = x \wedge x \in \mathsf{Cons}(\mathcal{T}) \wedge t.sch(\Gamma) \in Sat(\Gamma))\} \end{array}$$
Then $\mathsf{C\_ans}^-(Q)$ and $\mathsf{C\_ans}^+(Q)$ are respectively a lower bound and an upper bound of $\mathsf{C\_ans}(Q)$, i.e., the following inclusions hold: $\mathsf{C\_ans}^-(Q) \subseteq \mathsf{C\_ans}(Q) \subseteq \mathsf{C\_ans}^+(Q)$.
Moreover, based on Proposition 1, it is shown in [14] that the two bounds $\mathsf{C\_ans}^-(Q)$ and $\mathsf{C\_ans}^+(Q)$ can be easily computed by means of one scan of the m-table $m\_Chase(T)$. It turns out that approximating the consistent answer to any query in our approach is polynomial.
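The one-scan computation of the two bounds can be sketched as follows, under an assumed set-valued encoding of $m\_Chase(T)$ (a dict per m-tuple, mapping attributes to candidate sets). This is a sketch in the spirit of Proposition 7, not the exact algorithm of [14]: as a simplification, a tuple drawn from an m-tuple is treated as consistent when every candidate set it uses is a singleton.

```python
from itertools import product

def c_ans_bounds(m_chase, X, gamma_attrs, pred):
    """One scan of m_Chase(T) yielding (lower, upper) approximations of
    C_ans(Q).  pred is the selection condition Gamma as a Python predicate
    over a dict tuple; gamma_attrs is sch(Gamma)."""
    lower, upper = set(), set()
    attrs = list(dict.fromkeys(list(X) + list(gamma_attrs)))  # sch(Q)
    for sigma in m_chase:
        if not set(attrs) <= sigma.keys():
            continue
        for combo in product(*(sorted(sigma[a]) for a in attrs)):
            t = dict(zip(attrs, combo))
            if not pred(t):
                continue                       # t.sch(Gamma) not in Sat(Gamma)
            x = tuple(t[a] for a in X)
            upper.add(x)                       # candidate for C_ans^+
            if all(len(sigma[a]) == 1 for a in attrs):
                lower.add(x)                   # certainly consistent: C_ans^-
    return lower, upper
```

On a toy encoding of the relevant m-tuples of Example 3, this sketch reproduces the bounds of $Q_1$ discussed below: a singleton lower bound and a two-tuple upper bound.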
For example, in the context of Example 3:
– For $Q_1$: select $K_1$, $K_2$, $M_1$ from $T$ where $(A_1^1 = a_1)$, we have $\mathsf{C\_ans}^-(Q_1) = \{k_1' k_2'' m_1\}$ and $\mathsf{C\_ans}^+(Q_1) = \{k_1 k_2 m_1,\ k_1' k_2'' m_1\}$. On the other hand, it can be seen from Figure 3 that $\mathsf{C\_ans}(Q_1) = \mathsf{C\_ans}^-(Q_1) = \{k_1' k_2'' m_1\}$.
– For $Q_2$: select $K_1$, $K_2$, $A_2^1$, $M_1$ from $T$ where $(A_1^1 = a_1$ or $A_1^1 = a_1')$, we have $\mathsf{C\_ans}^-(Q_2) = \varnothing$ and $\mathsf{C\_ans}^+(Q_2) = \{k_1 k_2 b_1 m_1\}$. On the other hand, it can be seen from Figure 3 that $\mathsf{C\_ans}(Q_2) = \mathsf{C\_ans}^+(Q_2) = \{k_1 k_2 b_1 m_1\}$.

# 5.2 Consistent Query Answering and Star Schemas

As mentioned in our introductory section, we can compute exact consistent answers efficiently (instead of bounds on exact consistent answers as in [14]) if the selection condition meets two requirements. The first requirement is that satisfaction of the selection condition can be tested attribute by attribute, as stated in the following definition.

Definition 4 A selection condition $\Gamma$ is said to be independent if $\Gamma$ is equivalent to a conjunction $\Gamma(A_1) \wedge \ldots \wedge \Gamma(A_k)$ where, for every $i = 1, \ldots, k$, $\Gamma(A_i)$ is a selection condition involving only the attribute $A_i$. □

By Definition 4, if $\Gamma = \Gamma(A_1) \wedge \ldots \wedge \Gamma(A_k)$ is independent, then $sch(\Gamma) = A_1 \ldots A_k$ and a tuple $\gamma$ over $sch(\Gamma)$ is in $Sat(\Gamma)$ if and only if, for every $i = 1, \ldots, k$, $\gamma.A_i$ is in $Sat(\Gamma(A_i))$.
For example, $\Gamma = (A_1 \leq 0) \vee ((A_1 \geq 4) \wedge (A_2 = a))$ can be written as $((A_1 \leq 0) \vee (A_1 \geq 4)) \wedge ((A_1 \leq 0) \vee (A_2 = a))$. Thus, $\Gamma$ is not independent. On the other hand, $\Gamma' = ((A_1 \geq 4) \vee (A_1 \leq 0)) \wedge (A_2 = a)$ is obviously independent, with $\Gamma'(A_1) = (A_1 \geq 4) \vee (A_1 \leq 0)$ and $\Gamma'(A_2) = (A_2 = a)$, and we have $Sat(\Gamma') = \{(a_1, a_2) \in adom(A_1) \times adom(A_2) \mid (a_1 \geq 4 \vee a_1 \leq 0) \wedge (a_2 = a)\}$.
We emphasize that, as shown by the example just above, independent selection conditions may involve disjunctions. Therefore, and contrary to most existing approaches to consistent query answering [1,17], our approach does not restrict selection conditions to be conjunctive. Another restriction that we consider on queries is that key attributes in $\mathbf{K}$ are not allowed in selection conditions. We point out that this syntactical restriction does not rule out queries of practical interest because, in most relevant queries, selections are expressed using meaningful attributes rather than abstract numerical key-values.
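The recursive definition of $Sat(\Gamma)$ can be made concrete with a small evaluator. This is an illustrative sketch: conditions are encoded as nested tuples, a representation chosen here for brevity rather than taken from the paper.

```python
import operator

OPS = {"=": operator.eq, "<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge}

def satisfies(t, cond):
    """Return True iff tuple t (a dict attribute -> value) satisfies cond,
    following the recursive definition of Sat(Gamma)."""
    tag = cond[0]
    if tag == "cmp":                        # atomic comparison  A theta a
        _, a, theta, const = cond
        return OPS[theta](t[a], const)
    if tag == "or":
        return satisfies(t, cond[1]) or satisfies(t, cond[2])
    if tag == "and":
        return satisfies(t, cond[1]) and satisfies(t, cond[2])
    if tag == "not":
        return not satisfies(t, cond[1])
    raise ValueError(f"unknown connector {tag}")

# The non-independent condition Gamma of the example above:
gamma = ("or", ("cmp", "A1", "<=", 0),
               ("and", ("cmp", "A1", ">=", 4), ("cmp", "A2", "=", "a")))
```

Note that independence, by contrast, is a semantic property of the condition and cannot be read off such an evaluator directly.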
Notice in this respect that key attributes are mainly used in selection conditions to express join conditions, which are not explicit in our chase-based approach. From now on, it is assumed, even when not explicitly mentioned, that selection conditions are independent and involve no key attributes from $\mathbf{K}$. What the above assumption implies is that, in a star-table, given a query $Q$: select $X$ from $T$ where $\Gamma$, for every $Y \to B$ in $FD$ such that $YB \subseteq sch(Q)$, we have $Y \subseteq X \setminus sch(\Gamma)$.
The following proposition states one of the main contributions of the paper, namely that, under the restriction above, the consistent answer to any query having an independent selection condition can be easily computed.

Proposition 8 Let $T$ be a star-table over universe $U$, and let $Q$: select $X$ from $T$ where $\Gamma$ be a query such that $\mathbf{K} \cap sch(\Gamma) = \emptyset$ and $\Gamma$ is an independent selection condition. Then $\mathsf{C\_ans}(Q)$ is the set of all tuples $x$ over $X$ for which there exists $\sigma$ in $m\_Chase(T)$ such that
1. $sch(Q) \subseteq sch(\sigma)$, $x \in \sigma(X)$ and $\sigma(sch(\Gamma)) \cap Sat(\Gamma) \neq \emptyset$,
2. for every $Y \to B$ in $FD$ such that $YB \subseteq X$, $|\mathsf{tuples}(\sigma(B))| = 1$,
3. for every $Y \to B$ in $FD$ such that $Y \subseteq X$ and $B \in sch(\Gamma)$, $\mathsf{tuples}(\sigma(B)) \subseteq Sat(\Gamma(B))$.

Proof. See Appendix B.

An important consequence of Proposition 8 is that the data complexity of consistent query answering for projection-selection-join queries on star schemas is polynomial, under the restriction that the selection condition is independent and involves no keys.
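To make the three conditions of Proposition 8 concrete, here is a sketch under the same assumed encoding (m-tuples as dicts of candidate sets, FDs as (key attributes, dependent attribute) pairs); the independent condition $\Gamma$ is passed as one Python predicate $\Gamma(A)$ per attribute. The encoding and names are ours, not the paper's.

```python
from itertools import product

def c_ans_star(m_chase, X, fds, gamma):
    """Consistent answer following conditions 1-3 of Proposition 8.

    gamma: dict attribute -> predicate, one conjunct Gamma(A) per
    attribute of the independent selection condition."""
    q_attrs = set(X) | set(gamma)                        # sch(Q)
    answer = set()
    for sigma in m_chase:
        if not q_attrs <= sigma.keys():                  # sch(Q) in sch(sigma)?
            continue
        # 1. sigma(sch(Gamma)) meets Sat(Gamma): attribute-wise, by independence
        if not all(any(p(v) for v in sigma[a]) for a, p in gamma.items()):
            continue
        # 2. dependents B with Y and B both inside X must hold a single value
        if any(len(sigma[b]) != 1
               for y, b in fds if set(y) <= set(X) and b in X):
            continue
        # 3. dependents of X appearing in Gamma must satisfy Gamma(B) entirely
        if any(not all(gamma[b](v) for v in sigma[b])
               for y, b in fds if set(y) <= set(X) and b in gamma):
            continue
        answer |= set(product(*(sorted(sigma[a]) for a in X)))  # all x in sigma(X)
    return answer
```

On a toy encoding of the two relevant m-tuples of Example 3 and the query $Q_2$, this sketch returns the single tuple $k_1 k_2 b_1 m_1$, matching the consistent answer given above.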
The following corollary shows that the third item in Proposition 8 can be stated more simply when $\mathbf{K}$ is a subset of $X$.

Corollary 1 Let $T$ be a star-table over universe $U$, and let $Q$: select $X$ from $T$ where $\Gamma$ be a query such that $\mathbf{K} \cap sch(\Gamma) = \emptyset$ and $\Gamma$ is an independent selection condition. If $\mathbf{K} \subseteq X$, $\mathsf{C\_ans}(Q)$ is the set of all tuples $x$ over $X$ for which there exists $\sigma$ in $m\_Chase(T)$ such that
1. $sch(Q) \subseteq sch(\sigma)$, $x \in \sigma(X)$ and $\sigma(sch(\Gamma)) \cap Sat(\Gamma) \neq \emptyset$,
2. for every $Y \to B$ in $FD$ such that $YB \subseteq X$, $|\mathsf{tuples}(\sigma(B))| = 1$,
3. $\mathsf{tuples}(\sigma(sch(\Gamma))) \subseteq Sat(\Gamma)$.

Proof. As for every $A$ in $U$ there exists $Y \to A$ in $FD$ with $Y \subseteq \mathbf{K}$, if $\mathbf{K} \subseteq X$, then for every $B$ in $sch(\Gamma)$ there exists $Y \to B$ in $FD$ with $Y \subseteq X$. Therefore, in this case, item 3 in Proposition 8 can be stated as: for every $B \in sch(\Gamma)$, $\mathsf{tuples}(\sigma(B)) \subseteq Sat(\Gamma(B))$. Since $\Gamma$ is independent, as mentioned right after Definition 4, this is equivalent to $\mathsf{tuples}(\sigma(sch(\Gamma))) \subseteq Sat(\Gamma)$. The proof is therefore complete. □

In the following example, we illustrate Proposition 8 and Corollary 1, and we show the impact of our two restrictions on the form of the selection conditions.
Example 6 In the context of Example 3, we consider the star-table $T$ and its associated m-table $m\_Chase(T)$ as shown in Figure 2, and let $Q_2$ be the query defined by: $Q_2$: select $K_1$, $K_2$, $A_2^1$, $M_1$ from $T$ where $(A_1^1 = a_1$ or $A_1^1 = a_1')$. As already noticed after stating Proposition 7, we have $\mathsf{C\_ans}(Q_2) = \{k_1 k_2 b_1 m_1\}$. On the other hand, the selection condition in this query obviously satisfies our restrictions, since it is independent and involves no key attributes. Applying Proposition 8 or Corollary 1 (because $K_1 K_2 \subseteq sch(Q_2)$) yields the following:
– The m-tuples $\sigma$ in $m\_Chase(T)$ such that $sch(Q_2) \subseteq sch(\sigma)$, $x \in \sigma(X)$ and $\sigma(sch(\Gamma)) \cap Sat(\Gamma) \neq \emptyset$ are $(k_1)(k_2)(a_1 a_1')(a_2)(b_1)(b_2)(m_1)$ and $(k_1)(k_2')(a_1 a_1')(a_2)(b_1)(m_1' m_1'')$.
– Among these m-tuples, only $(k_1)(k_2)(a_1 a_1')(a_2)(b_1)(b_2)(m_1)$ is such that, for every $Y \to B$ in $FD$ such that $YB \subseteq X$, $|\mathsf{tuples}(\sigma(B))| = 1$.
– This m-tuple satisfies $\mathsf{tuples}(\sigma(sch(\Gamma))) \subseteq Sat(\Gamma)$.
We therefore obtain the expected consistent answer, namely $\mathsf{C\_ans}(Q_2) = \{k_1 k_2 b_1 m_1\}$.
Now, let $Q_3$ be defined by: $Q_3$: select $K_1$, $K_2$, $A_1^2$ from $T$ where $(A_1^1 = a_1$ and $M_1 = m_1)$ or $(A_1^1 = a_1'$ and $M_1 = m_1'')$. In this case, the selection condition $\Gamma_3$ is clearly not independent, and it can be seen from Figure 3 that $\mathsf{C\_ans}(Q_3)$ is in fact empty, because:
– in $R_2$, $k_1 k_2 a_2$ is only associated with $a_1' m_1$, which does not belong to $Sat(\Gamma_3)$,
– in $R_1$, $k_1 k_2' a_2$ is only associated with $a_1 m_1'$, which does not belong to $Sat(\Gamma_3)$.
On the other hand, considering the third item in Proposition 8, we emphasize the following: $k_1 k_2' a_2$ is discarded from the consistent answer because $a_1 m_1'$ is in $\mathsf{tuples}(\sigma_2(A_1^1 M_1))$ but not in $Sat(\Gamma_3)$. However, $\sigma_1$ can be seen as satisfying the third item in the proposition for the following reasons: the dependencies to consider are $K_1 \to A_1^1$ and $K_1 K_2 \to M_1$, and $\mathsf{tuples}(\sigma_1(A_1^1))$, respectively $\mathsf{tuples}(\sigma_1(M_1))$, is a subset of the set of all $A_1^1$-values, respectively all $M_1$-values, occurring in $Sat(\Gamma_3)$. A characterization of the consistent answer to a query involving a non-independent selection condition is currently unknown to the authors.
Now, to illustrate our restriction on key attributes with respect to selection conditions, let $Q_4$: select $A_1^1$ from $T$ where $(K_1 = k_1)$. It should be clear that the selection condition of $Q_4$ is independent, and that $\mathsf{C\_ans}(Q_4) = \emptyset$ (because, by Figure 3, neither $a_1$ nor $a_1'$ is associated with $k_1$ in every repair). On the other hand, it is easy to see that the conditions in Proposition 8 or Corollary 1 are satisfied by the first two m-tuples of $m\_Chase(T)$ (see Figure 2). A characterization of the consistent answer to a query with a selection condition involving a key attribute is currently unknown to the authors. □

In the next two sections, we consider analytic queries and see how their consistent answers can be defined and effectively computed, relying on Corollary 1.

# 5.3 Analytic Queries and their Consistent Answers

In the literature [16], an analytic query is a query involving an aggregate function among count, min, max, sum, and possibly a group-by clause. Formally, given a data warehouse over a star schema, an analytic query $\mathcal{AQ}$ has one of the following two forms:
– $\mathcal{AQ}$: select $aggr(M_i)$ from $\varphi$ where $\Gamma$, or
– $\mathcal{AQ}$: select $X$, $aggr(M_i)$ from $\varphi$ where $\Gamma$ group by $X$
where $\varphi$ is the join of the fact table $F$ with all dimension tables $D_1, \ldots, D_n$, $X$ is a relation schema, $aggr$ is an aggregate function such as count, min, max, sum, and, for some $i$ in $\{1, \ldots, p\}$, $M_i$ is a measure attribute.
In all traditional approaches to data warehouses, it is generally assumed that the tables in the data warehouse have no missing values and that the key-foreign-key constraints between $F$ and all dimension tables $D_i$ are satisfied. In this context, the answer to $\mathcal{AQ}$ is as follows:
– If $\mathcal{AQ}$ involves no group-by clause, $\mathsf{Ans}(\mathcal{AQ})$ is the value of the aggregation evaluated over the set of all tuples in $\varphi$ satisfying the condition $\Gamma$. We notice that the aggregate may involve no attribute, when expressed as $count(*)$.
– If $\mathcal{AQ}$ involves a statement group by $X$, $\mathsf{Ans}(\mathcal{AQ})$ is the set of all pairs $\langle x, v_x \rangle$ where $x$ is such that there exists a tuple in $\varphi$ satisfying $\Gamma$ and whose $X$-value is $x$, and where $v_x$ is the value of the aggregation evaluated over all tuples in $\varphi$ satisfying $\Gamma$ and whose $X$-value is $x$.
Now, if the underlying data warehouse does not satisfy all functional dependencies, this traditional semantics of answers to analytic queries has to be revisited. When no dependency of the form $\mathbf{K} \to M_i$ is considered, this has been done in [2] in the case of queries with no group-by clause, and in [8] when a group-by clause occurs in the query. As shown in [6], in either case, the 'repair semantics' of the consistent answer to an analytic query intuitively consists in producing an interval in which aggregate values fall when answering the query in any repair. The reader is referred to the forthcoming Section 7 for a more precise relationship between these approaches and our work.
In our approach, we follow the same line regarding analytic queries and their consistent answers, and we emphasize that, as in the previous section, all selection conditions are assumed to satisfy the restrictions that they are independent and that they involve no attribute from $\mathbf{K}$.

Definition 5 Let $T$ be a star-table over universe $U$. We call analytic query, with or without group-by clause, a query of the generic form:
$\mathcal{AQ}$: select $[X], aggr(M_i)$ from $T$ where $\Gamma$ [group by $X$]
where the group-by clause may be omitted, in which case $X$ is not in the select clause. The consistent answer to an analytic query $\mathcal{AQ}$, denoted by $\mathsf{C\_ans}(\mathcal{AQ})$, is defined as follows:
(A) If $\mathcal{AQ}$ involves no group-by clause, then:
– If there exists $R$ in $\mathsf{Rep}(T)$ such that $\mathsf{True}(\mathcal{R})$ contains no tuple over $sch(Q)$ satisfying $\Gamma$, then $\mathsf{C\_ans}(\mathcal{AQ}) = \mathrm{NULL}$ if $aggr \neq count$, and $\mathsf{C\_ans}(\mathcal{AQ}) = 0$ if $aggr$ is count.
– If for every $R$ in $\mathsf{Rep}(T)$, $\mathsf{True}(\mathcal{R})$ contains at least one tuple over $sch(Q)$ satisfying $\Gamma$, then $\mathsf{C\_ans}(\mathcal{AQ}) = [glb, lub]$ such that:
– for every $R$ in $\mathsf{Rep}(T)$ there exists $d$ in $[glb, lub]$ such that $\mathsf{Ans}(\mathcal{AQ}^{[R]}) = d$,
– there exist $R_1$ and $R_2$ in $\mathsf{Rep}(T)$ such that $\mathsf{Ans}(\mathcal{AQ}^{[R_1]}) = glb$ and $\mathsf{Ans}(\mathcal{AQ}^{[R_2]}) = lub$.
(B) If $\mathcal{AQ}$ involves a group-by clause, $\mathsf{C\_ans}(\mathcal{AQ})$ is a set of pairs of the form $\langle x, [glb, lub] \rangle$ such that:
– for every $R$ in $\mathsf{Rep}(T)$ there exists $d$ in $[glb, lub]$ such that $\mathsf{Ans}(\mathcal{AQ}^{[R]})$ contains $(x\,d)$,
– there exist $R_1$ and $R_2$ in $\mathsf{Rep}(T)$ such that $\mathsf{Ans}(\mathcal{AQ}^{[R_1]})$ and $\mathsf{Ans}(\mathcal{AQ}^{[R_2]})$ respectively contain $(x\,glb)$ and $(x\,lub)$. □
The following remarks are in order regarding Definition 5:
– In case (A), a NULL answer means that the consistent answer to the query $Q$: select $\mathbf{K}$ from $T$ where $\Gamma$ is empty. In this case the aggregate cannot be computed if different from count, and if the aggregate is count, then the expected answer is 0. This explains why we return NULL in the former case.
– In case (B), all $X$-values $x$ occurring in $\mathsf{C\_ans}(\mathcal{AQ})$ must be consistent, because the first condition requires that $x$ be true in every repair of $T$.
On the other hand, if $x$ is consistent and if there is a repair $R$ such that $\mathsf{True}(R)$ contains no tuple of the form $x\gamma$ where $\gamma$ is in $Sat(\Gamma)$, there is no need to use a NULL because, in this case, no pair involving $x$ will appear in $\mathsf{C\_ans}(\mathcal{AQ})$. Consequently, if no $X$-value fits the conditions in the definition, $\mathsf{C\_ans}(\mathcal{AQ})$ is simply set to $\varnothing$. In addition to consistent answers as defined above, we argue that our approach allows for another kind of consistent answer to analytic queries, relying on the tuples in $\mathsf{Cons}(T)$, or equivalently, by Theorem 1, on the tuples true in every repair. To define this kind of consistent answer, a given analytic query $\mathcal{AQ}$: select $[X]$, $aggr(M_i)$ from $T$ where $\Gamma$ [group by $X$] is associated with a query involving no aggregate, defined by $$Q_{\mathcal{AQ}}: \mathtt{select}\ \mathbf{K}, [X], M_i\ \mathtt{from}\ T\ \mathtt{where}\ \Gamma.$$ Then, relying on the consistent answer to $Q_{\mathcal{AQ}}$, we define the strongly consistent answer to $\mathcal{AQ}$, denoted by $\mathsf{C\_ans}^*(\mathcal{AQ})$, as the answer to the following analytic query: select $[X]$, $aggr(M_i)$ from $\mathsf{C\_ans}(Q_{\mathcal{AQ}})$ [group by $X$]. We emphasize that, based on Corollary 1, since the select clause of $Q_{\mathcal{AQ}}$ contains $\mathbf{K}$, $\mathsf{C\_ans}(Q_{\mathcal{AQ}})$ can be effectively computed, assuming that $m\_Chase(T)$ is available.
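When the set of repairs is small enough to enumerate, as in the examples of this paper, Definition 5(A) can be evaluated by brute force. The following sketch is illustrative only: the data model (repairs as lists of dictionaries) and all function names are our own assumptions, not part of the formal framework; the sample data mirrors a later example in this section.

```python
# Hedged, brute-force evaluation of Definition 5(A): compute the answer of the
# aggregate query in every repair, then return the interval [glb, lub].
def c_ans_no_groupby(repairs, gamma, measure, aggr):
    """Return [glb, lub], or NULL (None) / 0 per the first case of Definition 5(A)."""
    ops = {"sum": sum, "min": min, "max": max, "count": len}
    answers = []
    for rep in repairs:
        selected = [t[measure] for t in rep if gamma(t)]
        if not selected:
            # Some repair has no satisfying tuple: NULL, except 0 for count.
            return 0 if aggr == "count" else None
        answers.append(ops[aggr](selected))
    return [min(answers), max(answers)]

# Four repairs, restricted to the attributes A1, A2, M1 (illustrative values):
repairs = [
    [{"A1": 10, "A2": 2, "M1": 30}, {"A1": -15, "A2": 0, "M1": -10}, {"A1": 10, "A2": 0, "M1": 100}],
    [{"A1": 10, "A2": 2, "M1": 30}, {"A1": -15, "A2": 3, "M1": -10}, {"A1": 10, "A2": 3, "M1": 100}],
    [{"A1": 10, "A2": 2, "M1": 30}, {"A1": 20, "A2": 0, "M1": -10}, {"A1": 10, "A2": 0, "M1": 100}],
    [{"A1": 10, "A2": 2, "M1": 30}, {"A1": 20, "A2": 3, "M1": -10}, {"A1": 10, "A2": 3, "M1": 100}],
]
gamma = lambda t: t["A1"] > 0 and t["A2"] > 0
bounds = c_ans_no_groupby(repairs, gamma, "M1", "min")   # -> [-10, 30]
```

This brute-force form is exponential in the number of conflicts, which is precisely why the paper computes the interval from $m\_Chase(T)$ instead.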
The important issue of efficiently computing answers to analytic queries is addressed in the next section. Example 7 In the context of Example 3, $\mathcal{AQ}_1$: select $sum(M_1)$ from $T$ where $(A_1^1 = a_1)$ is an example of analytic query with no group-by clause and involving the aggregate sum. The expected result is the interval containing all possible sums of $M_1$-values among all tuples defined over $K_1 K_2$ and associated with $A_1^1$-value $a_1$ in all repairs. Referring to Figure 3, we have: $$\begin{array}{rl} & \mathsf{Ans}(\mathcal{AQ}_1^{[R_1]}) = m_1 + m_1' + m_1, \text{ and } \mathsf{Ans}(\mathcal{AQ}_1^{[R_2]}) = m_1, \\ & \mathsf{Ans}(\mathcal{AQ}_1^{[R_3]}) = m_1 + m_1'' + m_1, \text{ and } \mathsf{Ans}(\mathcal{AQ}_1^{[R_4]}) = m_1. \end{array}$$ Assuming that $m_1$, $m_1'$ and $m_1''$ are positive numbers and that $m_1' \leq m_1''$, we obtain that $\mathsf{C\_ans}(\mathcal{AQ}_1) = [glb, lub]$ where $glb = m_1$ and $lub = 2m_1 + m_1''$. On the other hand, we have $Q_{\mathcal{AQ}_1}$: select $\mathbf{K}, M_1$ from $T$ where $(A_1^1 = a_1)$, and it can be seen from Figure 3 that $\mathsf{C\_ans}(Q_{\mathcal{AQ}_1}) = \{k_1' k_2'' m_1\}$.
Therefore, $\mathsf{C\_ans}^*(\mathcal{AQ}_1) = m_1$. To illustrate that if $\mathsf{C\_ans}(\mathcal{AQ}) = [glb, lub]$ and $\mathsf{C\_ans}^*(\mathcal{AQ}) = m$, then it does not always hold that $glb \leq m \leq lub$, consider the aggregate $count(*)$ instead of $sum(M_1)$. Indeed, in this case, $\mathsf{C\_ans}(\mathcal{AQ}) = [3, 3]$ and $\mathsf{C\_ans}^*(\mathcal{AQ}) = 1$. As another example of analytic query with no group-by clause and slightly different from $\mathcal{AQ}_1$, consider $\mathcal{AQ}_1'$: select $sum(M_1)$ from $T$ where $(A_1^1 = a_1')$. Referring to Figure 3, we have in this case: $$\begin{array}{rl} & \mathsf{Ans}(\mathcal{AQ}_1'^{[R_1]}) = \mathtt{NULL}, \text{ and } \mathsf{Ans}(\mathcal{AQ}_1'^{[R_2]}) = m_1, \\ & \mathsf{Ans}(\mathcal{AQ}_1'^{[R_3]}) = \mathtt{NULL}, \text{ and } \mathsf{Ans}(\mathcal{AQ}_1'^{[R_4]}) = m_1. \end{array}$$ The presence of NULL for $R_1$ and $R_3$ above is due to the fact that these repairs contain no tuples satisfying the selection condition in $\mathcal{AQ}_1'$. In this case, by Definition 5(A), the expected consistent answer is NULL.
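The divergence between $\mathsf{C\_ans}$ and $\mathsf{C\_ans}^*$ for count can be reproduced on toy data. The following snippet is a hedged illustration with made-up key values: every repair contributes exactly three satisfying tuples, but only one key combination is common to all repairs.

```python
# Satisfying key-tuples of two hypothetical repairs (illustrative values only):
repairs = [
    [("k1", "k2"), ("k1", "k2a"), ("kx", "ky")],
    [("k1", "k2"), ("k1", "k2b"), ("kx", "kz")],
]
counts = {len(r) for r in repairs}
c_ans = [min(counts), max(counts)]                       # count(*) interval: [3, 3]
common = set(repairs[0]).intersection(*map(set, repairs[1:]))
c_ans_star = len(common)                                 # count over tuples true in every repair: 1
```

Here `c_ans_star = 1` falls outside the interval `[3, 3]`, matching the phenomenon noted in the text: the strongly consistent answer is computed over a different (smaller) set of tuples than any single repair.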
The query $\mathcal{AQ}_2$: select $A_2^1$, $sum(M_1)$ from $T$ where $(A_1^1 = a_1$ or $A_1^1 = a_1')$ group by $A_2^1$ is an analytic query involving a group-by clause. Since in this example $\mathsf{True}(T)$ contains only one $A_2^1$-value, namely $b_1$, at most one pair $\langle b_1, [glb, lub] \rangle$ is expected in $\mathsf{C\_ans}(\mathcal{AQ}_2)$. Referring again to Figure 3, and assuming as above that $m_1$, $m_1'$ and $m_1''$ are positive and that $m_1' \leq m_1''$, we obtain that $\mathsf{C\_ans}(\mathcal{AQ}_2) = \{\langle b_1, [glb, lub] \rangle\}$ where $glb = m_1 + m_1'$ and $lub = m_1 + m_1''$. On the other hand, we have $Q_{\mathcal{AQ}_2}$: select $\mathbf{K}, A_2^1, M_1$ from $T$ where $(A_1^1 = a_1$ or $A_1^1 = a_1')$. It can be seen from Figure 3 that $\mathsf{C\_ans}(Q_{\mathcal{AQ}_2}) = \{k_1 k_2 b_1 m_1\}$. Therefore, $\mathsf{C\_ans}^*(\mathcal{AQ}_2) = \{(b_1, m_1)\}$. This example illustrates that if $\langle x, [glb, lub] \rangle$ is in $\mathsf{C\_ans}(\mathcal{AQ})$ and $(x, m)$ is in $\mathsf{C\_ans}^*(\mathcal{AQ})$, then it does not always hold that $glb \leq m \leq lub$.
Indeed, for $m_1 = 1$, $m_1' = 2$ and $m_1'' = 3$, we have $glb = 3$ and $lub = 4$, showing that for $x = b_1$, $m \not\in [3, 4]$. □

# 6 Computing Consistent Answers to Analytic Queries

In this section, we give algorithms for computing consistent answers to analytic queries, first when no group-by clause is present, then when a group-by clause is present. We also consider in this section the case of analytic queries involving a group-by-having clause, whereas the introduction of the distinct clause is the subject of the last sub-section. We emphasize again that the selection conditions in queries are independent and involve no key attributes.

# 6.1 Basic Result

Before presenting our algorithms, we state important properties of repairs and analytic queries. To this end, we introduce the following additional notation. Given a star-table $T$ and an analytic query $\mathcal{AQ}$: select $[X]$, $aggr(M_i)$ from $T$ where $\Gamma$ [group by $X$], let: $$\begin{array}{rl} & \Sigma(\mathcal{AQ}) = \{\sigma \in m\_Chase(T) \mid (\mathbf{K} \cup M_i \cup sch(\Gamma) \subseteq sch(\sigma)) \land (\mathsf{tuples}(\sigma(sch(\Gamma))) \cap Sat(\Gamma) \neq \emptyset)\} \\ & \Sigma^+(\mathcal{AQ}) = \{\sigma \in \Sigma(\mathcal{AQ}) \mid \mathsf{tuples}(\sigma(sch(\Gamma))) \subseteq Sat(\Gamma)\}. \end{array}$$ Proposition 9 Let $T$ be a star-table over universe $U$ and $\mathcal{AQ}$: select $[X]$, $aggr(M_i)$ from $T$ where $\Gamma$ [group by $X$] an analytic query where $\Gamma$ is independent, i.e., $\Gamma = \Gamma(A_1) \land \ldots \land \Gamma(A_k)$ where $sch(\Gamma) = A_1 \ldots A_k$.
If $\Sigma(\mathcal{AQ}) \setminus \Sigma^+(\mathcal{AQ}) \neq \emptyset$, there exist $R_1$ and $R_2$ in $\mathsf{Rep}(T)$ such that 1. for every $\sigma$ in $\Sigma(\mathcal{AQ}) \setminus \Sigma^+(\mathcal{AQ})$, there exists $t$ in $\mathsf{tuples}(\sigma) \cap \mathsf{True}(R_1)$ such that $t.sch(\Gamma) \not\in Sat(\Gamma)$, 2. for every $\sigma$ in $\Sigma(\mathcal{AQ}) \setminus \Sigma^+(\mathcal{AQ})$, there exists $t$ in $\mathsf{tuples}(\sigma) \cap \mathsf{True}(R_2)$ such that $t.sch(\Gamma) \in Sat(\Gamma)$. Proof. The proof relies on the fact that for every repair $R$ of $T$ and every $\sigma$ in $\Sigma(\mathcal{AQ})$, $\mathsf{True}(R)$ contains exactly one tuple $t$ such that $t \in \mathsf{tuples}(\sigma)$ and $t.\mathbf{K} \in \sigma(\mathbf{K})$ (the existence is a consequence of Proposition 4, and uniqueness follows from Proposition 2(1) because $|\sigma(\mathbf{K})| = 1$, $\mathbf{K}$ is a key of $R$ and $R \models FD$). As a consequence of Proposition 6, the existence of repairs $R_1$ and $R_2$ is explicitly shown using the process (P). 1. Regarding $R_1$, we build $\varphi$ as follows. For every $\sigma$ in $\Sigma(\mathcal{AQ})$: Step P1. For every $A_p^k$ in $sch^*(D_p) \cap sch(\sigma)$ such that $R_1$ contains no tuple $k_p a_p^k$ from $\mathsf{tuples}(\sigma(K_p A_p^k))$: $(a)$ if $\sigma \in \Sigma^+(\mathcal{AQ})$ then choose an $A_p^k$-value $a_p^k$ in $\sigma(K_p A_p^k)$ and let $\varphi_p^k(k_p) = a_p^k$.
$(b)$ if $\sigma \in \Sigma(\mathcal{AQ}) \setminus \Sigma^+(\mathcal{AQ})$ and $\mathsf{tuples}(\sigma(A_p^k)) \nsubseteq Sat(\Gamma(A_p^k))$, choose an $A_p^k$-value $a_p^k$ in $\sigma(K_p A_p^k) \setminus Sat(\Gamma(A_p^k))$ and let $\varphi_p^k(k_p) = a_p^k$. If $\mathsf{tuples}(\sigma(A_p^k)) \subseteq Sat(\Gamma(A_p^k))$, choose any $A_p^k$-value $a_p^k$ in $\sigma(K_p A_p^k)$ and let $\varphi_p^k(k_p) = a_p^k$. It is important to notice that in this case, there always exists at least one attribute $A_p^k$ such that $\mathsf{tuples}(\sigma(A_p^k)) \nsubseteq Sat(\Gamma(A_p^k))$, because otherwise $\sigma$ would be in $\Sigma^+(\mathcal{AQ})$. Step P2. If $M_i \notin sch(\Gamma)$, or if $M_i \in sch(\Gamma)$ and $\sigma \in \Sigma^+(\mathcal{AQ})$, choose an $M_i$-value $m$ in $\sigma(M_i)$ and let $\varphi_i(k) = m$ where $k = \sigma(\mathbf{K})$. Otherwise (i.e., $M_i \in sch(\Gamma)$ and $\sigma \in \Sigma(\mathcal{AQ}) \setminus \Sigma^+(\mathcal{AQ})$), choose an $M_i$-value $m$ in $\sigma(M_i) \setminus Sat(\Gamma(M_i))$ for $\varphi_i(k)$. Once all m-tuples in $\Sigma(\mathcal{AQ})$ have been considered, the two steps above are completed by considering all non-processed $K_i$- or $\mathbf{K}$-values as done in the generic description of process (P). Consequently, the corresponding set $\varphi$ in process (P) is properly defined. Step P3.
Build up the tuples $t_\varphi(\sigma)$ for every $\sigma$ in $m\_Chase(T)$. We notice that $(a)$ if $\sigma \in \Sigma^+(\mathcal{AQ})$ then $t_\varphi(\sigma).sch(\Gamma) \in Sat(\Gamma)$, and $(b)$ if $\sigma \in \Sigma(\mathcal{AQ}) \setminus \Sigma^+(\mathcal{AQ})$ then $t_\varphi(\sigma).sch(\Gamma) \notin Sat(\Gamma)$. By Proposition 6, we therefore obtain a repair $R_1$ satisfying item 1 in the proposition. 2. Regarding item 2, the repair $R_2$ is obtained as above, by changing $(b)$ in Step P1 and Step P2 as follows: Step P1. $(b)$ if $\sigma \in \Sigma(\mathcal{AQ}) \setminus \Sigma^+(\mathcal{AQ})$ and $\mathsf{tuples}(\sigma(A_p^k)) \nsubseteq Sat(\Gamma(A_p^k))$, choose an $A_p^k$-value $a_p^k$ in $\sigma(K_p A_p^k) \cap Sat(\Gamma(A_p^k))$ and let $\varphi_p^k(k_p) = a_p^k$. It is important to notice that in this case, every attribute $A_p^k$ in $sch(\Gamma)$ is such that $\mathsf{tuples}(\sigma(A_p^k)) \cap Sat(\Gamma(A_p^k)) \neq \emptyset$, because otherwise $\mathsf{tuples}(\sigma(sch(\Gamma))) \cap Sat(\Gamma) \neq \emptyset$ would not be possible. Step P2. Choose an $M_i$-value $m$ in $\sigma(M_i)$ and let $\varphi_i(k) = m$ where $k = \sigma(\mathbf{K})$. Then, as above, these two steps are completed by considering all non-processed $K_i$- or $\mathbf{K}$-values, and Step P3 is applied. By Proposition 6, we obtain a repair $R_2$ that satisfies item 2 in the proposition. The proof is therefore complete.
□ The following example shows that Proposition 9 does not always hold for non-independent selection conditions. Example 8 Let $U = \{K_1, K_2, A_1, A_2, M_1\}$ and $FD = \{K_1 \to A_1, K_2 \to A_2, K_1 K_2 \to M_1\}$, and consider tables in which $D_1$ associates $k_1$ with the $A_1$-value $10$ and $k_1'$ with the $A_1$-value $-5$, $D_2$ associates $k_2$ with the $A_2$-value $20$ and $k_2'$ with the two conflicting $A_2$-values $-1$ and $30$, and $F$ contains the tuples $(k_1, k_2, -10)$, $(k_1, k_2', 2)$ and $(k_1', k_2', -100)$. Thus $m\_Chase(T)$ contains three m-tuples $\sigma_1$, $\sigma_2$, $\sigma_3$ defined over $U$ as follows: $$\sigma_1 = (k_1)(k_2)(10)(20)(-10), \quad \sigma_2 = (k_1)(k_2')(10)(-1\ 30)(2), \quad \sigma_3 = (k_1')(k_2')(-5)(-1\ 30)(-100)$$ and $T$ has two repairs $R_1$ and $R_2$.
Moreover, the tuples in $\mathsf{True}(R_i)$ $(i = 1, 2)$ whose schemas contain $\mathbf{K} = K_1 K_2$, respectively denoted $R_1^{\mathbf{K}}$ and $R_2^{\mathbf{K}}$, are: $$\begin{array}{rl} & R_1^{\mathbf{K}} = \{(k_1, k_2, 10, 20, -10), (k_1, k_2', 10, -1, 2), (k_1', k_2', -5, -1, -100)\} \\ & R_2^{\mathbf{K}} = \{(k_1, k_2, 10, 20, -10), (k_1, k_2', 10, 30, 2), (k_1', k_2', -5, 30, -100)\}. \end{array}$$ For $\Gamma = (M_1 \leq A_1) \land (M_1 \geq 0 \Rightarrow A_2 < 0) \land (M_1 < -50 \Rightarrow A_2 > 20)$ and $\mathcal{AQ}$ an analytic query involving $\Gamma$, we have $\Sigma(\mathcal{AQ}) = \{\sigma_1, \sigma_2, \sigma_3\}$ and $\Sigma^+(\mathcal{AQ}) = \{\sigma_1\}$. More precisely: 1. $\mathsf{tuples}(\sigma_1) = \{(k_1, k_2, 10, 20, -10)\}$ where $(10, 20, -10) \in Sat(\Gamma)$. Thus, $\mathsf{tuples}(\sigma_1) \subseteq Sat(\Gamma)$ holds. 2. $\mathsf{tuples}(\sigma_2) = \{(k_1, k_2', 10, -1, 2), (k_1, k_2', 10, 30, 2)\}$, where $(10, -1, 2) \in Sat(\Gamma)$ and $(10, 30, 2) \notin Sat(\Gamma)$. Thus $\mathsf{tuples}(\sigma_2) \cap Sat(\Gamma) \neq \emptyset$ holds but not $\mathsf{tuples}(\sigma_2) \subseteq Sat(\Gamma)$. 3.
$\mathsf{tuples}(\sigma_3) = \{(k_1', k_2', -5, -1, -100), (k_1', k_2', -5, 30, -100)\}$, where $(-5, 30, -100) \in Sat(\Gamma)$ and $(-5, -1, -100) \notin Sat(\Gamma)$. Thus $\mathsf{tuples}(\sigma_3) \cap Sat(\Gamma) \neq \emptyset$ holds but not $\mathsf{tuples}(\sigma_3) \subseteq Sat(\Gamma)$. On the other hand, it should be clear that $\Gamma$ is not independent, and that neither $\mathsf{True}(R_1)$ nor $\mathsf{True}(R_2)$ contains the two tuples $(k_1, k_2', 10, 30, 2)$ and $(k_1', k_2', -5, -1, -100)$, simply because they form a set that does not satisfy the functional dependency $K_2 \to A_2$. Thus Proposition 9(1) does not hold in this example. Similarly, Proposition 9(2) is not satisfied either, because neither $\mathsf{True}(R_1)$ nor $\mathsf{True}(R_2)$ contains the two tuples $(k_1, k_2', 10, -1, 2)$ and $(k_1', k_2', -5, 30, -100)$, again because of $K_2 \to A_2$ in $FD$. □ In the following two sections 6.2 and 6.3, we successively introduce our algorithms for computing consistent answers to analytic queries, depending on whether or not they involve a group-by clause. In each case, the main algorithm relies on a procedure, called Compute aggregate and shown in Algorithm 2, whose role is to scan (either entirely or partially) the m-table $m\_Chase(T)$ and return values that will appear in the answers.
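Returning to Example 8, the $Sat(\Gamma)$ membership checks can be verified mechanically. A minimal sketch, encoding each implication of $\Gamma$ with the equivalence $(p \Rightarrow q) \equiv (\lnot p \lor q)$; the function name is ours:

```python
# Γ of Example 8: (M1 <= A1) ∧ (M1 >= 0 ⇒ A2 < 0) ∧ (M1 < -50 ⇒ A2 > 20).
# Each implication p ⇒ q is written as (not p) or q.
def gamma(a1, a2, m1):
    return (m1 <= a1) and (m1 < 0 or a2 < 0) and (m1 >= -50 or a2 > 20)

# The tuples of σ2 and σ3 restricted to A1 A2 M1:
sigma2_checks = (gamma(10, -1, 2), gamma(10, 30, 2))        # (True, False)
sigma3_checks = (gamma(-5, 30, -100), gamma(-5, -1, -100))  # (True, False)
```

This confirms that each of $\sigma_2$ and $\sigma_3$ has exactly one tuple in $Sat(\Gamma)$, which is why both belong to $\Sigma(\mathcal{AQ})$ but not to $\Sigma^+(\mathcal{AQ})$.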
# 6.2 Analytic Queries with no Group-by Clause

If the query involves no group-by clause, as shown in Algorithm 3, the procedure Compute aggregate of Algorithm 2 is called to scan the whole m-table $m\_Chase(T)$. When running this call, line 1 in Algorithm 3, the main loop (lines 6-32) of Algorithm 2 scans $m\_Chase(T)$ and computes the values appearing in the answers. The following proposition shows that Algorithm 3 is correct, except when $aggr = sum$. In other words, if $aggr \neq sum$, Algorithm 3 returns $\mathsf{C\_ans}(\mathcal{AQ})$. Moreover, it is also shown that $\mathsf{C\_ans}^*(\mathcal{AQ})$ is correctly computed by Algorithm 3 in any case.

# Algorithm 2 Procedure Compute Aggregate

Proposition 10 Let $T$ be a star-table over universe $U$ and let $\mathcal{AQ}$: select $aggr(M_i)$ from $T$ where $\Gamma$ be an analytic query with no group-by clause. Then we have: – If $aggr$ is $min$, $max$ or $count$, then $\mathsf{C\_ans}(\mathcal{AQ}) = [min\_ans, max\_ans]$ where $min\_ans$ and $max\_ans$ are returned by Algorithm 3. – If $aggr = sum$ and if $\mathsf{C\_ans}(\mathcal{AQ}) = [glb, lub]$, then $min\_ans$ and $max\_ans$ as returned by Algorithm 3 satisfy $min\_ans \leq glb$ and $max\_ans \geq lub$. Moreover, if for every $m \in adom(M_i)$, $m \geq 0$, then $\mathsf{C\_ans}(\mathcal{AQ}) = [min\_ans, max\_ans]$. – For every aggregate function and every selection condition $\Gamma$, $ans^*$ as returned by Algorithm 3 is equal to $\mathsf{C\_ans}^*(\mathcal{AQ})$. Proof. See Appendix C.
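The bounds stated in Proposition 10 can be imitated on a simplified model in which each m-tuple carries a single $M_i$-value and is known to be either in $\Sigma^+(\mathcal{AQ})$ (its tuple always satisfies $\Gamma$) or in $\Sigma(\mathcal{AQ}) \setminus \Sigma^+(\mathcal{AQ})$ (its tuple may or may not satisfy $\Gamma$). This is a hedged reconstruction for illustration, not the actual body of the procedure Compute aggregate:

```python
def interval_bounds(plus, optional, aggr):
    """plus: M_i-values of m-tuples in Σ+(AQ) (always selected);
    optional: M_i-values of m-tuples in Σ(AQ) \\ Σ+(AQ) (possibly selected).
    Returns [min_ans, max_ans]; for sum the interval may over-approximate C_ans."""
    if not plus and not optional:
        return None
    allv = plus + optional
    if aggr == "count":
        return [len(plus), len(allv)]
    if aggr == "min":
        # Any optional value may drag the minimum down; forced values cap it.
        return [min(allv), min(plus) if plus else max(allv)]
    if aggr == "max":
        return [max(plus) if plus else min(allv), max(allv)]
    if aggr == "sum":
        # Optional negatives only lower the sum, optional positives only raise it.
        return [sum(plus) + sum(v for v in optional if v < 0),
                sum(plus) + sum(v for v in optional if v > 0)]

# With the data of the next example (plus = [30], optional = [-10, 100]):
# min -> [-10, 30], max -> [30, 100], count -> [1, 3], sum -> [20, 130].
```

Note how, for sum, this yields $[20, 130]$ even when the exact consistent answer is $[30, 130]$: the bound is safe but not tight, exactly the over-approximation allowed by the second item of Proposition 10.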
In the following example we show that Algorithm 3 may fail to return exact values for $glb$ and $lub$ when the aggregate operator is sum, operating on both positive and negative values.

# Algorithm 3 Consistent answer to analytic queries with no group-by clause

Input: The m-table $m\_Chase(T)$ and $\mathcal{AQ}$: select $aggr(M_i)$ from $T$ where $\Gamma$
Output: $[min\_ans, max\_ans]$ // meant to be equal to $\mathsf{C\_ans}(\mathcal{AQ})$; $ans^*$ // meant to be equal to $\mathsf{C\_ans}^*(\mathcal{AQ})$
1: Call procedure Compute aggregate with input parameters $m\_Chase(T)$, $\mathcal{AQ}$ and with output parameters $change\_min\_max$, $min\_ans$, $max\_ans$, $ans^*$
2: if $change\_min\_max =$ true then
3: return ($[min\_ans, max\_ans]$, $ans^*$)
4: else
5: if $aggr \neq count$ then
6: return (NULL, NULL)
7: if $aggr = count$ then
8: return ([0, 0], 0)

Example 9 As in Example 8, we consider $U = \{K_1, K_2, A_1, A_2, M_1\}$ and $FD = \{K_1 \to A_1, K_2 \to A_2, K_1 K_2 \to M_1\}$, but here with tables such that $m\_Chase(T)$ contains three m-tuples $\sigma_1$, $\sigma_2$, $\sigma_3$ defined over $U$ as follows: $$\sigma_1 = (k_1)(k_2)(10)(2)(30), \quad \sigma_2 = (k_1')(k_2')(-15\ 20)(0\ 3)(-10), \quad \sigma_3 = (k_1)(k_2')(10)(0\ 3)(100)$$ and $T$ has four repairs denoted by $R_i$ for $i = 1, \ldots, 4$, whose sets $R_i^{\mathbf{K}}$ of tuples over $U$ are defined by: $$\begin{array}{l} R_1^{\mathbf{K}} = \{(k_1, k_2, 10, 2, 30), (k_1', k_2', -15, 0, -10), (k_1, k_2', 10, 0, 100)\} \\ R_2^{\mathbf{K}} = \{(k_1, k_2, 10, 2, 30), (k_1', k_2', -15, 3, -10), (k_1, k_2', 10, 3, 100)\} \\ R_3^{\mathbf{K}} = \{(k_1, k_2, 10, 2, 30), (k_1', k_2', 20, 0, -10), (k_1, k_2', 10, 0, 100)\} \\ R_4^{\mathbf{K}} = \{(k_1, k_2, 10, 2, 30), (k_1', k_2', 20, 3, -10), (k_1, k_2', 10, 3, 100)\}. \end{array}$$ For $\Gamma = (A_1 > 0) \land (A_2 > 0)$ and $\mathcal{AQ}$: select $aggr(M_1)$ from $T$ where $\Gamma$, we have $\Sigma(\mathcal{AQ}) = \{\sigma_1, \sigma_2, \sigma_3\}$ and $\Sigma^+(\mathcal{AQ}) = \{\sigma_1\}$, because $\mathsf{tuples}(\sigma_1(A_1 A_2)) \subseteq Sat(\Gamma)$ holds, $\mathsf{tuples}(\sigma_2(A_1 A_2)) \cap Sat(\Gamma) \neq \emptyset$ holds but not $\mathsf{tuples}(\sigma_2(A_1 A_2)) \subseteq Sat(\Gamma)$, and $\mathsf{tuples}(\sigma_3(A_1 A_2)) \cap Sat(\Gamma) \neq \emptyset$ holds but not $\mathsf{tuples}(\sigma_3(A_1 A_2)) \subseteq Sat(\Gamma)$. On the other hand, $\Gamma$ is clearly independent.
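The classification of $\sigma_1$, $\sigma_2$, $\sigma_3$ above into $\Sigma(\mathcal{AQ})$ and $\Sigma^+(\mathcal{AQ})$ can be reproduced by enumerating $\mathsf{tuples}(\sigma(sch(\Gamma)))$ as a cartesian product of the candidate attribute values. This is a hedged sketch: the dictionary-based encoding of m-tuples and all names are our own assumptions.

```python
from itertools import product

def classify(m_chase, gamma_attrs, gamma):
    """Split m-tuples into Σ(AQ) (some tuple over sch(Γ) satisfies Γ)
    and Σ+(AQ) (every tuple over sch(Γ) satisfies Γ)."""
    sigma, sigma_plus = [], []
    for mt in m_chase:
        combos = list(product(*(sorted(mt[a]) for a in gamma_attrs)))
        sat = [c for c in combos if gamma(*c)]
        if sat:                             # at least one satisfying tuple
            sigma.append(mt["name"])
            if len(sat) == len(combos):     # all tuples satisfy Γ
                sigma_plus.append(mt["name"])
    return sigma, sigma_plus

# The three m-tuples above, restricted to their A1- and A2-components:
m_chase = [
    {"name": "s1", "A1": {10}, "A2": {2}},
    {"name": "s2", "A1": {-15, 20}, "A2": {0, 3}},
    {"name": "s3", "A1": {10}, "A2": {0, 3}},
]
sigma, sigma_plus = classify(m_chase, ["A1", "A2"], lambda a1, a2: a1 > 0 and a2 > 0)
# sigma -> ["s1", "s2", "s3"], sigma_plus -> ["s1"]
```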
– If $aggr = min$ then, for every $i = 1, \ldots, 3$, $\mathsf{Ans}(\mathcal{AQ}^{[R_i]}) = 30$ and $\mathsf{Ans}(\mathcal{AQ}^{[R_4]}) = -10$. Thus, $\mathsf{C\_ans}(\mathcal{AQ}) = [-10, 30]$. When running the call of the procedure Compute aggregate in Algorithm 3, $min\_ans$ is first set to $30$ when processing $\sigma_1$, then to $\min(\{30, \min\{-10, 30\}\}) = -10$ when processing $\sigma_2$, and then to $\min(\{-10, \min\{100, -10\}\}) = -10$ when processing $\sigma_3$. Similarly, $max\_ans$ is first set to $30$ when processing $\sigma_1$, then to $\max(\{30, \min\{-10, 30\}\}) = 30$ when processing $\sigma_2$, and then to $\max(\{30, \min\{100, 30\}\}) = 30$ when processing $\sigma_3$. Hence, Algorithm 3 returns $[-10, 30]$ as expected. – Similarly, if $aggr = max$, we have $\mathsf{C\_ans}(\mathcal{AQ}) = [30, 100]$, which is also returned by Algorithm 3. – If $aggr = count$ then we have $\mathsf{Ans}(\mathcal{AQ}^{[R_1]}) = 1$, $\mathsf{Ans}(\mathcal{AQ}^{[R_2]}) = 2$, $\mathsf{Ans}(\mathcal{AQ}^{[R_3]}) = 1$ and $\mathsf{Ans}(\mathcal{AQ}^{[R_4]}) = 3$. Thus, $\mathsf{C\_ans}(\mathcal{AQ}) = [1, 3]$. When running the call of the procedure Compute aggregate in Algorithm 3, $min\_ans$ is first set to $1$ when processing $\sigma_1$ and then left unchanged when processing $\sigma_2$ and $\sigma_3$. Moreover, $max\_ans$ is increased by $1$ for each of the m-tuples $\sigma_1$, $\sigma_2$ and $\sigma_3$.
Thus Algorithm 3 returns $[1, 3]$ as expected. – If $aggr = sum$ then we have $\mathsf{Ans}(\mathcal{AQ}^{[R_1]}) = 30$, $\mathsf{Ans}(\mathcal{AQ}^{[R_2]}) = 130$, $\mathsf{Ans}(\mathcal{AQ}^{[R_3]}) = 30$ and $\mathsf{Ans}(\mathcal{AQ}^{[R_4]}) = 120$. Thus, $\mathsf{C\_ans}(\mathcal{AQ}) = [30, 130]$. When running the call of the procedure Compute aggregate in Algorithm 3, $min\_ans$ is successively set to $30$, then to $(30 - 10) = 20$ when processing $\sigma_1$ and then $\sigma_2$, and left unchanged when processing $\sigma_3$. On the other hand, $max\_ans$ is successively set to $30$ when processing $\sigma_1$, left unchanged when processing $\sigma_2$, and set to $(30 + 100) = 130$ when processing $\sigma_3$. Thus the call of the procedure Compute aggregate in Algorithm 3 returns $[20, 130]$. – On the other hand, assuming that the second tuple in $F$ is $(k_1' k_2', 10)$ (instead of $(k_1' k_2', -10)$), then $\mathsf{C\_ans}(\mathcal{AQ}) = [30, 140]$, because $\mathsf{Ans}(\mathcal{AQ}^{[R_i]})$ is $30$ if $i = 1$ or $i = 3$, $130$ if $i = 2$ and $140$ if $i = 4$. It can be seen that when running the call of the procedure Compute aggregate in Algorithm 3, we obtain $min\_ans = 30$ and $max\_ans = 140$.

# Algorithm 4 Consistent answer to analytic queries with a group-by clause

Input: The m-table $m\_Chase(T)$ and $\mathcal{AQ}$: select $X$, $aggr(M_i)$ from $T$ where $\Gamma$ group by $X$
Output: $Cons\_Ans$: a set of pairs $\langle x, [min\_ans, max\_ans] \rangle$ // meant to be equal to $\mathsf{C\_ans}(\mathcal{AQ})$; $Ans^*$: a set of pairs $(x, ans^*)$ // meant to be equal to $\mathsf{C\_ans}^*(\mathcal{AQ})$
1: $Cons\_Ans := \varnothing$ ; $Ans^* := \varnothing$ ; $Temp := \varnothing$
2: for all $\sigma$ in $m\_Chase(T)$ do
3: if $\mathbf{K} \cup sch(\mathcal{AQ}) \subseteq sch(\sigma)$ then
4: if (for every $Y \to B$ in $FD$, $YB \subseteq X \Rightarrow |\mathsf{tuples}(\sigma(B))| = 1$) and ($\mathsf{tuples}(\sigma(sch(\Gamma))) \cap Sat(\Gamma) \neq \emptyset$) then
5: $Temp := Temp \cup \{\sigma\}$
6: for all $X$-value $x$ occurring in $Temp$ do
7: $Temp(x) := \{\sigma \in Temp \mid x \in \mathsf{tuples}(\sigma(X))\}$
8: Call procedure Compute aggregate with input parameters $Temp(x)$, $\mathcal{AQ}$ and with output parameters $change\_min\_max$, $min\_ans$, $max\_ans$, $ans^*$
9: if $change\_min\_max =$ true then
10: $Cons\_Ans := Cons\_Ans \cup \{\langle x, [min\_ans, max\_ans] \rangle\}$
11: $Ans^* := Ans^* \cup \{(x, ans^*)\}$
12: return ($Cons\_Ans$, $Ans^*$)
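The grouping loop of Algorithm 4 (lines 6-11) can be sketched as follows, here with count as the aggregate; the encoding of the relevant m-tuples (candidate $X$-values, one measure value, and a flag marking membership in $\Sigma^+$) is an assumption made for illustration only:

```python
def group_bounds(temp, bounds):
    """For each X-value x, restrict temp to Temp(x) and compute its interval."""
    cons_ans = {}
    for x in sorted({xv for mt in temp for xv in mt["x_values"]}):
        temp_x = [mt for mt in temp if x in mt["x_values"]]       # Temp(x)
        plus = [mt["m"] for mt in temp_x if mt["forced"]]          # in Σ+
        optional = [mt["m"] for mt in temp_x if not mt["forced"]]  # in Σ \ Σ+
        interval = bounds(plus, optional)
        if interval is not None:
            cons_ans[x] = interval
    return cons_ans

# count bounds: forced m-tuples always contribute, optional ones may or may not.
count_bounds = lambda plus, optional: [len(plus), len(plus) + len(optional)]
temp = [
    {"x_values": {"b1"}, "m": 1, "forced": True},
    {"x_values": {"b1"}, "m": 2, "forced": False},
    {"x_values": {"b2"}, "m": 5, "forced": True},
]
result = group_bounds(temp, count_bounds)   # {"b1": [1, 2], "b2": [1, 1]}
```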
□

# 6.3 Analytic Queries with a Group-by Clause

As in the case of analytic queries with no group-by clause, when the query involves a group-by clause, the computation of the consistent answers also involves a call of the procedure Compute aggregate, as shown line 8 of Algorithm 4. This algorithm works as follows: a first scan of $m\_Chase(T)$ (lines 2-5) collects in the set $Temp$ all relevant m-tuples $\sigma$, namely those such that $sch(\sigma)$ contains all attributes occurring in $\mathcal{AQ}$, and such that the $X$-values are consistent and may be associated with a tuple over $sch(\Gamma)$ satisfying the condition $\Gamma$. Then, a subsequent loop (lines 6-11) calls the procedure Compute aggregate for each $X$-value $x$ occurring in the m-tuples of the set $Temp$. For each such call, Algorithm 2 scans the subset $Temp(x)$ of $Temp$ and computes the corresponding aggregate values appearing in the answers associated with $x$. The following proposition shows that Algorithm 4 is correct, except when $aggr = sum$. In other words, if $aggr \neq sum$, Algorithm 4 returns $\mathsf{C\_ans}(\mathcal{AQ})$. Moreover, it is also shown that $\mathsf{C\_ans}^*(\mathcal{AQ})$ is correctly computed by Algorithm 4 in any case. Proposition 11 Let $T$ be a star-table over universe $U$ and $\mathcal{AQ}$ an analytic query with a group-by clause. – If $aggr = min$, $aggr = max$ or $aggr = count$, then $Cons\_Ans$ as returned by Algorithm 4 is equal to $\mathsf{C\_ans}(\mathcal{AQ})$.
– If $aggr = sum$, then $\langle x, [min\_ans, max\_ans]\rangle$ is in $Cons\_Ans$ as returned by Algorithm 4 if and only if $C\_ans(\mathcal{AQ})$ contains $\langle x, [glb, lub]\rangle$ such that $min\_ans \leq glb$ and $max\_ans \geq lub$. Moreover, if for every $m \in adom(M_i)$, $m \geq 0$, then $Cons\_Ans = C\_ans(\mathcal{AQ})$.
– For every aggregate function and every selection condition $\Gamma$, $Ans^*$ as returned by Algorithm 4 is equal to $C\_ans^*(\mathcal{AQ})$.

Proof. First, the loop of lines 2-5 scans $m\_Chase(T)$ and collects in the set $Temp$ the only m-tuples necessary to compute the consistent answer. Indeed, the collected m-tuples are all m-tuples $\sigma$ such that: they are defined over a super-set of $\mathbf{K} \cup sch(\mathcal{AQ})$, their $X$-component contains consistent tuples, and their m-tuples over $sch(\Gamma)$ contain at least one tuple of $Sat(\Gamma)$. This is so because any m-tuple in $m\_Chase(T)$ not satisfying the above conditions cannot contribute to the consistent answer. Then, in the next steps, for every collected $X$-value, its associated aggregate value is evaluated as done in Algorithm 3. As a consequence of Proposition 10, for $x$ in $Temp$, the loop of lines 6-11 in Algorithm 4 generates correct answers. The proof is therefore complete. □

We emphasize that Algorithm 3 and Algorithm 4 show that the consistent answers to a given analytic query $\mathcal{AQ}$, with or without a group-by clause, are computed in linear time in the size of $m\_Chase(T)$.
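To make the scan-and-group structure of Algorithm 4 concrete, here is a minimal Python sketch under assumptions of ours: each m-tuple is modeled as a dict mapping an attribute name to its set of candidate values, the group-by clause has a single attribute, and Compute_aggregate is simplified to the $sum$ case, where an m-tuple contributes its smallest candidate value to the lower bound and its largest to the upper bound. All names are illustrative, not taken from the paper.

```python
# Illustrative sketch only: m-tuples as dicts {attribute: set of candidate values}.

def compute_aggregate(group, measure):
    """Simplified Compute_aggregate for sum: each m-tuple contributes its
    minimum candidate value to min_ans and its maximum to max_ans."""
    min_ans = sum(min(sigma[measure]) for sigma in group)
    max_ans = sum(max(sigma[measure]) for sigma in group)
    return min_ans, max_ans

def consistent_answers(m_tuples, group_by, measure):
    """Mimics Algorithm 4: a first scan keeps the m-tuples that carry both
    attributes and whose group-by value is consistent (a single candidate);
    a second loop then calls compute_aggregate once per X-value."""
    temp = [sigma for sigma in m_tuples
            if group_by in sigma and measure in sigma
            and len(sigma[group_by]) == 1]
    groups = {}
    for sigma in temp:
        x = next(iter(sigma[group_by]))
        groups.setdefault(x, []).append(sigma)
    return {x: compute_aggregate(grp, measure) for x, grp in groups.items()}

rows = [
    {"A": {"a1"}, "M": {20}},
    {"A": {"a1"}, "M": {10, 100}},   # two candidate measure values widen the interval
    {"A": {"a2"}, "M": {5}},
]
print(consistent_answers(rows, "A", "M"))   # -> {'a1': (30, 120), 'a2': (5, 5)}
```

Note that, as in Proposition 11, the interval computed this way for $sum$ is in general only a superset of the exact consistent answer.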
Recalling that the computation of $m\_Chase(T)$ has been shown in [14] to be polynomial in time with respect to the size of $T$, it turns out that $C\_ans(\mathcal{AQ})$ and $C\_ans^*(\mathcal{AQ})$ can be computed in polynomial time with respect to the size of $T$.

# 6.4 Analytic Queries with a group-by-having Clause

Another important feature of our approach is that analytic queries with a group-by-having clause can be handled in our framework. This is so because, intuitively, a having clause specifies through a boolean expression which groupings should be in the answer. For example, in the context of Example 3, let $\mathcal{AQ}_3$ be the following query:

$\mathcal{AQ}_3$: select $A_2^1$, $sum(M_1)$ from $T$ where ($A_1^1 = a_1$ or $A_1^1 = a_1'$) group by $A_2^1$ having ($max(M_1) < 10$)

In this case, $\mathcal{AQ}_3$ can be associated with the analytic query $\mathcal{AQ}_3'$ defined by:

$\mathcal{AQ}_3'$: select $A_2^1$, $sum(M_1)$, $max(M_1)$ from $T$ where ($A_1^1 = a_1$ or $A_1^1 = a_1'$) group by $A_2^1$

whose consistent answer $C\_ans(\mathcal{AQ}_3')$ is computed as described above. Then, considering the triples of the form $\langle a_2, [glb_1, lub_1], [glb_2, lub_2]\rangle$ in $C\_ans(\mathcal{AQ}_3')$, all pairs $\langle a_2, [glb_1, lub_1]\rangle$ such that $lub_2 < 10$ are inserted in $C\_ans(\mathcal{AQ}_3)$. In other words, we have:

$$ C\_ans(\mathcal{AQ}_3) = \{\langle a_2, [glb_1, lub_1]\rangle \mid \langle a_2, [glb_1, lub_1], [glb_2, lub_2]\rangle \in C\_ans(\mathcal{AQ}_3') \wedge lub_2 < 10\}. $$

More precisely, applying Proposition 11 to the star-table $T$ in Figure 2, we obtain the following:
– If we assume that $m_1 = 5$, $m_1' = 5$ and $m_1'' = 15$, then $C\_ans(\mathcal{AQ}_3') = \{\langle a_2, [10, 20], [5, 15]\rangle\}$ and so, $C\_ans(\mathcal{AQ}_3)$ is empty. Notice incidentally that it is easily seen from Figure 3 that there exist repairs $R$ in $Rep(T)$ for which $Ans(\mathcal{AQ}_3'^{[R]}) = \{\langle a_2, 20, 15\rangle\}$, implying that $Ans(\mathcal{AQ}_3^{[R]}) = \emptyset$.
– If we assume that $m_1 = 5$, $m_1' = 5$ and $m_1'' = 8$, then $C\_ans(\mathcal{AQ}_3') = \{\langle a_2, [10, 13], [5, 8]\rangle\}$ and so, $\langle a_2, [10, 13]\rangle$ is in $C\_ans(\mathcal{AQ}_3)$. In this case, it is easily seen from Figure 3 that for every repair $R$ in $Rep(T)$, $Ans(\mathcal{AQ}_3'^{[R]})$ contains a pair $\langle a_2, \mu, \nu\rangle$ where $10 \leq \mu \leq 13$ and $\nu < 10$, implying that $\langle a_2, [10, 13]\rangle$ is in $C\_ans(\mathcal{AQ}_3)$.

In the light of this example, we argue that our approach can deal with analytic queries involving a having clause with condition $\Upsilon$, under the following restrictions:
1. $\Upsilon$ is a conjunctive boolean expression built up from atoms of the form $aggr(M_i)\ \theta\ \alpha$, where $M_i$ is a measure attribute in $\mathbf{M}$, $\theta$ is a comparison predicate in $\{<, \leq, >, \geq\}$ and $\alpha$ is a number.
2. For every aggregate term in $\mathcal{AQ}$ of the form $sum(M_i)$, all $M_i$-values are positive.

Calling $aggr(M_i)$ an aggregate term, the first item above implies that $\Upsilon$ can be written as $\Upsilon_1 \wedge \ldots \wedge \Upsilon_h$, where for every $p = 1, \ldots, h$, $\Upsilon_p$ is the conjunction of all atoms in $\Upsilon$ involving the same aggregate term.
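The post-filtering step used above to derive $C\_ans(\mathcal{AQ}_3)$ from $C\_ans(\mathcal{AQ}_3')$, where a group survives only when the whole interval of $max(M_1)$ lies below the threshold (that is, $lub_2 < 10$), can be sketched in Python as follows; the triple encoding and the function name are assumptions of ours.

```python
# Illustrative sketch: triples (x, (glb1, lub1), (glb2, lub2)) come from the
# auxiliary query AQ3' that also computes the interval of max(M1).

def filter_having(cons_ans_prime, threshold=10):
    """Keep <x, [glb1, lub1]> only when lub2 < threshold, i.e. the having
    condition max(M1) < threshold holds in every repair."""
    return [(x, sum_interval)
            for (x, sum_interval, (glb2, lub2)) in cons_ans_prime
            if lub2 < threshold]

# First scenario: max(M1) may reach 15 in some repair, so no group survives.
print(filter_having([("a2", (10, 20), (5, 15))]))   # -> []
# Second scenario: max(M1) is at most 8 in every repair, so a2 survives.
print(filter_having([("a2", (10, 13), (5, 8))]))    # -> [('a2', (10, 13))]
```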
Moreover, for every aggregate term $\lambda_p$ occurring in $\Upsilon$, given a number $\alpha$, let $\Upsilon_p^{[\alpha]}$ be the expression obtained by substituting in $\Upsilon_p$ every occurrence of $\lambda_p$ with $\alpha$. Then, it turns out that the set $Sat(\Upsilon_p) = \{\alpha \mid \Upsilon_p^{[\alpha]} \text{ evaluates to true}\}$ is an interval. Based on this important remark, the following proposition characterizes the consistent answers of analytic queries involving a group-by-having clause.

Proposition 12 Let $T$ be a star-table over universe $U$ and $\mathcal{AQ}$ an analytic query with a group-by-having clause defined by

$\mathcal{AQ}$: select $X$, $aggr(M_i)$ from $T$ where $\Gamma$ group by $X$ having $\Upsilon$

such that the above restrictions are met. If $\Upsilon = \Upsilon_1 \wedge \ldots \wedge \Upsilon_h$, let $\mathcal{AQ}'$ be the following analytic query with no having clause:

$\mathcal{AQ}'$: select $X$, $aggr(M_i)$, $aggr_1(M_{i_1})$, $\ldots$, $aggr_h(M_{i_h})$ from $T$ where $\Gamma$ group by $X$

Then, $\langle x, [glb, lub]\rangle$ belongs to $C\_ans(\mathcal{AQ})$ if and only if there exists $\langle x, [glb, lub], [glb_1, lub_1], \ldots, [glb_h, lub_h]\rangle$ in $C\_ans(\mathcal{AQ}')$ such that for every $p = 1, \ldots, h$, $[glb_p, lub_p] \subseteq Sat(\Upsilon_p)$.

Proof. Let $\langle x, [glb, lub], [glb_1, lub_1], \ldots, [glb_h, lub_h]\rangle$ be in $C\_ans(\mathcal{AQ}')$ such that for every $p = 1, \ldots, h$, $[glb_p, lub_p] \subseteq Sat(\Upsilon_p)$. Given $R$ in $Rep(T)$, $Ans(\mathcal{AQ}'^{[R]})$ contains a tuple $\langle x, \mu, \mu_1, \ldots, \mu_h\rangle$ such that $\mu \in [glb, lub]$ and $\mu_p \in [glb_p, lub_p]$ for $p = 1, \ldots, h$ hold. Hence, for $p = 1, \ldots, h$, $\mu_p$ is in $Sat(\Upsilon_p)$, implying that $\langle x, [glb, lub]\rangle$ is in $C\_ans(\mathcal{AQ})$.
Conversely, let $\langle x, [glb, lub]\rangle$ be in $C\_ans(\mathcal{AQ})$ and let $p$ be such that $[glb_p, lub_p] \subseteq Sat(\Upsilon_p)$ does not hold. Since $Sat(\Upsilon_p)$ is an interval, this implies that $glb_p \notin Sat(\Upsilon_p)$ or that $lub_p \notin Sat(\Upsilon_p)$. Since there exists a repair $R_{min}$, respectively a repair $R_{max}$, such that $aggr_p(M_{i_p}) = glb_p$, respectively $aggr_p(M_{i_p}) = lub_p$, it turns out that there exists at least one repair $R$ such that $Ans(\mathcal{AQ}'^{[R]})$ contains a tuple $\langle x, \mu, \mu_1, \ldots, \mu_h\rangle$ such that $\mu_p \notin Sat(\Upsilon_p)$. Thus $\langle x, \mu\rangle$ is not in $Ans(\mathcal{AQ}^{[R]})$, which implies that $\langle x, [glb, lub]\rangle$ is not in $C\_ans(\mathcal{AQ})$; a contradiction which completes the proof. □

It should be clear from Proposition 11 and Proposition 12 that, under the restrictions stated earlier, the consistent answer of an analytic query involving a group-by-having clause can be computed in polynomial time with respect to the size of $T$.

To end this section, we illustrate why, in our approach, the sets $Sat(\Upsilon_p)$ ($p = 1, \ldots, h$) must be intervals. In the context of our earlier example, consider the query $\mathcal{AQ}_3$ with a having clause defined by $\Upsilon = (max(M_1) < 50$ or $max(M_1) > 100)$, instead of $max(M_1) < 10$.
Notice that in this case the above restrictions are not satisfied, because $\Upsilon$ is not a conjunction, and $Sat(\Upsilon)$ is clearly not an interval. Let $m_1 = 20$, $m_1' = 10$ and $m_1'' = 150$, and let $\langle a_2, [30, 170], [20, 150]\rangle$ be in $C\_ans(\mathcal{AQ}_3')$, implying that the values of $max(M_1)$ range between 20 and 150, when varying the repair. Although there are repairs in which the maximum is 20 or 150, thus satisfying the having condition, it is not possible to conclude that for every repair, the maximum $M_1$-value satisfies this condition: it could for example be that for some repair, $max(M_1)$ is equal to 80. It is thus impossible to certify that $\langle a_2, [glb, lub]\rangle$ is in the consistent answer of $\mathcal{AQ}_3$.

# 6.5 Handling distinct Clauses

In this section, we show how to handle the distinct clause associated with the operator count. Recalling that count(distinct $M_i$) counts each $M_i$-value only once, independently of the number of its occurrences, we consider analytic queries $\mathcal{AQ}$ with no group-by clause, knowing that the presence of a group-by clause does not raise particular difficulties. Given such a query $\mathcal{AQ}$: select count(distinct $M_i$) from $T$ where $\Gamma$, we let $C\_ans(\mathcal{AQ}) = [glb, lub]$. Similarly to the general case of the aggregate sum, an effective way of computing $C\_ans(\mathcal{AQ})$ is currently unknown to the authors.
This is so because, according to Algorithm 2, in order to determine $lub$ (respectively $glb$) without counting the same value more than once, every choice of one $M_i$-value in each set $tuples(\sigma(M_i))$, where $\sigma$ is in $\Sigma(\mathcal{AQ})$ (respectively in $\Sigma^+(\mathcal{AQ})$), must be analyzed separately. As such a processing clearly requires computing all possible choices, it is not polynomial, and so not tractable. Consequently, instead of computing the exact values of $lub$ and $glb$, we propose a tractable approximation of $C\_ans(\mathcal{AQ})$.

To this end, we first notice that the values to be counted for determining $lub$ are obtained by picking exactly one $M_i$-value in $tuples(\sigma(M_i))$ for every $\sigma$ in $\Sigma(\mathcal{AQ})$. Therefore, the set resulting from such a choice cannot contain $(a)$ more distinct values than there are m-tuples in $\Sigma(\mathcal{AQ})$ and $(b)$ more distinct values than there are distinct values in the union of all sets $tuples(\sigma(M_i))$ where $\sigma$ is in $\Sigma(\mathcal{AQ})$. This can be formally stated as $(a)$ $lub \leq |\Sigma(\mathcal{AQ})|$ and $(b)$ $lub \leq |\bigcup_{\sigma \in \Sigma(\mathcal{AQ})} tuples(\sigma(M_i))|$, which we write as

$$ lub \leq \inf\Big(|\Sigma(\mathcal{AQ})|,\ \Big|\bigcup_{\sigma \in \Sigma(\mathcal{AQ})} tuples(\sigma(M_i))\Big|\Big). $$

If $count\_max$ denotes the infimum defined above, we emphasize that Algorithm 2 can be modified so as to compute $count\_max$ in polynomial time.
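The bound above can be sketched in a few lines of Python; the function name and the input encoding (one set of candidate $M_i$-values per m-tuple of $\Sigma(\mathcal{AQ})$) are assumptions of ours.

```python
# Illustrative sketch of the upper bound count_max for count(distinct Mi):
# count_max = inf(|Sigma(AQ)|, |union of all candidate Mi-value sets|).

def count_max(sigma_sets):
    """sigma_sets holds one set of candidate Mi-values per m-tuple in Sigma(AQ)."""
    if not sigma_sets:
        return 0
    union = set().union(*sigma_sets)
    return min(len(sigma_sets), len(union))

# Three m-tuples but only two distinct values overall: the union is the binding bound.
print(count_max([{10}, {10, 20}, {20}]))       # -> 2
# Two m-tuples drawing from a larger pool: |Sigma(AQ)| is the binding bound.
print(count_max([{10, 20, 30}, {40, 50}]))     # -> 2
```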
Indeed, it is easy to see that $|\Sigma(\mathcal{AQ})|$ is equal to the value of $max\_ans$ returned by Algorithm 3 in the case of count. However, computing the second cardinality requires extra storage and extra computation to ensure that every value in the union is counted only once.

Regarding the approximation of $glb$, we recall from Algorithm 2 that the values to be counted for determining $glb$ are distinct values obtained by picking exactly one $M_i$-value in $tuples(\sigma(M_i))$ for every $\sigma$ in $\Sigma^+(\mathcal{AQ})$. Thus, if $\Sigma^+(\mathcal{AQ}) = \emptyset$, then $glb = 0$; otherwise, a trivial lower bound for $glb$ is 1, because all sets $tuples(\sigma(M_i))$ for $\sigma$ in $\Sigma^+(\mathcal{AQ})$ are nonempty. To find a more accurate value of this lower bound, we notice that any choice of one $M_i$-value in $tuples(\sigma(M_i))$ for every $\sigma$ in $\Sigma^+(\mathcal{AQ})$ concerns at least as many distinct $M_i$-values as there are in the union of all sets $tuples(\sigma(M_i))$ of cardinality 1, for $\sigma$ in $\Sigma^+(\mathcal{AQ})$. Denoting by $\Sigma_1^+(\mathcal{AQ})$ the set of these m-tuples, that is $\Sigma_1^+(\mathcal{AQ}) = \{\sigma \in \Sigma^+(\mathcal{AQ}) \mid |tuples(\sigma(M_i))| = 1\}$, whenever $\Sigma^+(\mathcal{AQ}) \neq \emptyset$, we have:

$$ glb \geq \Big|\bigcup_{\sigma \in \Sigma_1^+(\mathcal{AQ})} tuples(\sigma(M_i))\Big|. $$

Denoting by $count\_min$ the cardinality shown above, we notice that, as for $count\_max$, Algorithm 3 can be modified so as to compute $count\_min$ in polynomial time.

Another important remark regarding distinct is that if for every $\sigma$ in $m\_Chase(T)$, we have $|tuples(\sigma(M_i))| = 1$ (i.e., the dependency $\mathbf{K}M_i$ is satisfied), then $\Sigma_1^+(\mathcal{AQ}) = \Sigma^+(\mathcal{AQ})$, and:
• $count\_max = lub = |\bigcup_{\sigma \in \Sigma(\mathcal{AQ})} tuples(\sigma(M_i))|$
• $count\_min = glb = |\bigcup_{\sigma \in \Sigma^+(\mathcal{AQ})} tuples(\sigma(M_i))|$

The above remark applies to the query $\mathcal{AQ}$ of Example 9 if $aggr = count(\text{distinct } M_1)$, because the sets $tuples(\sigma_i(M_1))$ are singletons for every $i = 1, 2, 3$. More precisely, $|\bigcup_{\sigma \in \Sigma(\mathcal{AQ})} tuples(\sigma(M_1))| = |\Sigma(\mathcal{AQ})| = 3$ (because the values involved are pairwise distinct), and $|\Sigma^+(\mathcal{AQ})| = 1$, showing that Algorithm 3 would return the exact value of $C\_ans(\mathcal{AQ})$, that is the interval $[1, 3]$.

Example 10 In the context of Example 3, let

$\mathcal{AQ}$: select count(distinct $M_1$) from $T$

be an analytic query involving a distinct clause, but no selection condition.
In this case, we have $\Sigma(\mathcal{AQ}) = \Sigma^+(\mathcal{AQ})$ and it can be seen from Figure 2 that the only m-tuples $\sigma$ in $m\_Chase(T)$ such that $\mathbf{K} \cup sch(\mathcal{AQ}) \subseteq sch(\sigma)$ are the first three m-tuples in $m\_Chase(T)$, which we denote by $\sigma_1, \sigma_2$ and $\sigma_3$. Thus, $\Sigma(\mathcal{AQ})$ and $\Sigma^+(\mathcal{AQ})$ are equal to $\{\sigma_1, \sigma_2, \sigma_3\}$, where $tuples(\sigma_1(M_1)) = tuples(\sigma_3(M_1)) = \{m_1\}$ and $tuples(\sigma_2(M_1)) = \{m_1', m_1''\}$.

Referring to the four repairs of $m\_Chase(T)$ shown in Figure 3, we have $Ans(\mathcal{AQ}^{[R_i]}) = 2$ for $i = 1, \ldots, 4$, implying that $C\_ans(\mathcal{AQ}) = 2$. On the other hand, applying the approximations shown above, we have $|\Sigma(\mathcal{AQ})| = |tuples(\sigma_1(M_1)) \cup tuples(\sigma_2(M_1)) \cup tuples(\sigma_3(M_1))| = 3$ and thus, $count\_max = 3$. Moreover, $\Sigma_1^+(\mathcal{AQ}) = \{\sigma_1, \sigma_3\}$ and $|tuples(\sigma_1(M_1)) \cup tuples(\sigma_3(M_1))| = 1$. Hence, we obtain $count\_min = 1$, producing the interval $[1, 3]$, which approximates, but is not equal to, $C\_ans(\mathcal{AQ})$.
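The two bounds of this scenario can be checked with a short Python sketch; the variable names are ours, and $m_1'$, $m_1''$ are written m1p, m1pp.

```python
# Illustrative check of the approximation in Example 10.
# Candidate M1-value sets of the three relevant m-tuples sigma1..sigma3:
sets = [{"m1"}, {"m1p", "m1pp"}, {"m1"}]

# Upper bound: no more distinct values than there are m-tuples, nor than the union holds.
count_max = min(len(sets), len(set().union(*sets)))

# Lower bound: only singleton sets are forced to contribute their value.
singletons = [s for s in sets if len(s) == 1]
count_min = max(1, len(set().union(*singletons)))

print([count_min, count_max])   # -> [1, 3]
```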
Changing $m_1'$ to $m_1$ in $\sigma_2$ yields $Ans(\mathcal{AQ}^{[R_1]}) = Ans(\mathcal{AQ}^{[R_2]}) = 1$ and $Ans(\mathcal{AQ}^{[R_3]}) = Ans(\mathcal{AQ}^{[R_4]}) = 2$. Thus $C\_ans(\mathcal{AQ}) = [1, 2]$. On the other hand, applying the approximations as above, we have $|\Sigma(\mathcal{AQ})| = 3$ and $|tuples(\sigma_1(M_1)) \cup tuples(\sigma_2(M_1)) \cup tuples(\sigma_3(M_1))| = 2$ and so, $count\_max = 2$. Moreover, $\Sigma_1^+(\mathcal{AQ}) = \{\sigma_1, \sigma_3\}$ and $|tuples(\sigma_1(M_1)) \cup tuples(\sigma_3(M_1))| = 1$. Hence, we obtain again $count\_min = 1$, producing the interval $[1, 2]$, which is now equal to $C\_ans(\mathcal{AQ})$.

# 7 Related Work

The problem of consistent query answering in inconsistent databases has motivated considerable research efforts since its introduction in [1]. These efforts were first focused on conjunctive and self-join free queries involving no aggregate operators. The case of queries with aggregate operators, possibly with group-by clauses, was introduced in [2] and then further investigated in [8] and in [10,11]. Focusing on relations whose constraints are unique key constraints, these approaches rely on earlier work (see [18]) characterizing classes of queries that can be rewritten in polynomial time into SQL queries whose evaluation yields the consistent answer as defined in [2].
In parallel, the issue of consistent answering of conjunctive queries involving aggregate operators has been addressed in [6], based on a radically different technique, namely that of reducing the problem to the well-known SAT problem. Here again, the constraints of interest are key constraints associated with relations, as in [8]. Comparing our approach with the work in [6] or in [8], the main differences are the following:
1. The functional dependencies in [6] or in [8] do not exactly match those usually considered in star schemas. Indeed, the dependencies $\mathbf{K}M_i$ are not in the scope of these approaches, contrary to the present approach.
2. In our approach, we allow missing values in the input tables, which is not the case in [6] or in [8].
3. Whereas these approaches consider conjunctive queries, our approach allows specific disjunctions that we call independent selection conditions. We recall that independent selection conditions generalize conjunctive selection conditions.
4. Dealing with the aggregate operators sum and count(distinct .) in our approach may lead to approximate query answers, which is not the case in [6] and in [8].

It therefore turns out that the two approaches are hardly comparable. However, we argue in this section that all results stated in the previous sections also hold when the functional dependencies $\mathbf{K}M_i$ are not considered. Moreover, in this case, when the aggregate is count(distinct .) on a measure attribute, the consistent answer can be effectively evaluated, instead of being approximated as explained earlier. More precisely, we show that our approach works in the context of [6] or of [8], assuming that the key of the fact table $F$ consists of all its attributes (namely, all key attributes along with all measure attributes).
Indeed, if $FD$ contains no functional dependency of the form $\mathbf{K}M_i$, then for every $\sigma$ in $m\_Chase(T)$ and every $i = 1, \ldots, p$, $tuples(\sigma(M_i))$ is a singleton. Immediate consequences are that, in this context:
1. $FD$ is normalized and acyclic, and so, the results stated in [14] hold.
2. Proposition 2(2) is not true: m-tuples in $m\_Chase(T)$ may be distinct and have the same $\mathbf{K}$-value.
3. Step P2 in process (P) is irrelevant and so, should be ignored. Moreover, the second item of Step P3 should be removed, because for every $\sigma$ such that $\mathbf{K}M_i \subseteq sch(\sigma)$ for $i = 1, \ldots, p$, $tuples(\sigma(M_i))$ is reduced to one tuple, and this tuple is consistent. However, it should be emphasized that Proposition 6 still holds in this setting, because adapting the proof amounts to discarding all arguments related to inconsistencies of tuples involving $\mathbf{K}$-values associated with measure values.
4. The second item about the characterization of tuples in $\mathsf{Confl}_{\min}(\tau)$ that follows Proposition 1 should be ignored, because $|tuples(\sigma(M_i))| > 1$ cannot happen. On the other hand, it turns out that no tuple in $\mathsf{Confl}_{\min}(\tau)$ can involve a measure value, because it is easy to see that if $t = t'm$ is in $\mathsf{Confl}(\tau)$ and $m$ is in $adom(M_i)$, then $t'$ is in $\mathsf{Confl}(\tau)$.
5. Despite the first paragraph in the proof of Proposition 9 (mentioning that this proof relies on the fact that $m\_Chase(T)$ cannot contain distinct m-tuples with the same $\mathbf{K}$-value), the proposition still holds.
Indeed, as Proposition 6 still applies, for any of the two items in Proposition 9, the modified process (P) generates a repair which satisfies the conditions in the proposition.
6. Dealing with the aggregate operator count(distinct .) in this context yields the exact consistent answer, instead of an approximation. This is so because in this case all sets $\sigma(M_i)$ contain one value, in which case the equalities (1) and (2) in the previous sub-section hold.

An important consequence is that Algorithm 2, and thus Algorithm 3 and Algorithm 4, work in this context, showing that consistent answers to analytic queries can be computed in polynomial time, even when no functional dependency of the form $\mathbf{K}M_i$ is considered. It can thus be stated that our approach extends approaches from the literature, because we allow missing values in the input tables and we can handle some disjunctive selection conditions (namely, independent selection conditions involving no key attributes), at the cost of approximate consistent answers when the aggregate operator is sum operating on positive or negative numbers.

# 8 Concluding Remarks

Data analytics requires the collection and integration of data coming from different sources, stored in what is called a data warehouse, that is, a database operating under a specific type of schema called a star schema. As a consequence, more often than not, such integrated data contain inconsistencies and missing values. In this paper, we have presented an approach to efficiently computing consistent answers to analytic queries in data warehouses. Extending earlier work on consistent query answering for standard, non-analytic queries in multi-table databases, our approach computes exact consistent answers of analytic queries over star schemas under two assumptions: (a) the selection condition in the query involves no keys, and (b) the selection condition is independent.
Our main contributions are: (a) a polynomial algorithm for computing the exact consistent answer to a usual projection-selection-join query over a star schema, and (b) showing that the exact consistent answer to an analytic query over a star schema can also be computed in time polynomial in the size of the data warehouse.

Our current work follows two main lines of research. First, the implementation of our approach is an important issue under consideration. We emphasize in this respect that, since the functional dependencies in a star schema have a simple form, the generic algorithm $m\_Chase$ as presented here can be optimized. Moreover, as $m\_Chase(T)$ has to be re-computed after each update, the question of incrementally implementing the changes in $m\_Chase(T)$ is crucial regarding efficiency. We are currently investigating these issues.

A second line of research is investigating the introduction of hierarchies over dimensions. Indeed, hierarchies play a central role in data warehouse querying, because they allow for data analyses at different 'levels' (for instance at the level of states instead of cities in a location dimension). From a practical point of view, introducing hierarchies means introducing new functional dependencies among dimensional attributes. So, if these hierarchies have inconsistencies and/or missing values, the results on the computation of consistent answers presented here have to be revisited. Moreover, the case of queries involving the operators roll-up or drill-down (that allow for 'navigating' within hierarchies) has to be investigated in this context.

# Declarations

The two authors contributed to the study, conception and design. Both read and approved the manuscript. No funds, grants, or other support was received for conducting this study.

# References

1. Marcelo Arenas, Leopoldo E. Bertossi, and Jan Chomicki. Consistent query answers in inconsistent databases.
In Victor Vianu and Christos H. Papadimitriou, editors, Proceedings of the Eighteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, May 31 - June 2, 1999, Philadelphia, Pennsylvania, USA, pages 68–79. ACM Press, 1999.
2. Marcelo Arenas, Leopoldo E. Bertossi, Jan Chomicki, Xin He, Vijay Raghavan, and Jeremy P. Spinrad. Scalar aggregation in inconsistent databases. Theor. Comput. Sci., 296(3):405–434, 2003.
3. W.W. Armstrong. Dependency structures of data base relationships. In IFIP Congress, pages 580–583. North Holland, 1974.
4. Ladjel Bellatreche, Arnaud Giacometti, Dominique Laurent, and Hassina Mouloudi. A framework for combining rule-based and cost-based approaches to optimize OLAP queries. In Fadila Bentayeb, Omar Boussaïd, Jérôme Darmont, and Sabine Loudcher Rabaséda, editors, Actes de la 1ère journée francophone sur les Entrepôts de Données et l'Analyse en ligne, EDA 2005, Lyon, France, Juin 10, 2005, volume B-1 of RNTI, pages 177–196. Cépaduès, 2005.
5. Stavros S. Cosmadakis, Paris C. Kanellakis, and Nicolas Spyratos. Partition semantics for relations. J. Comput. Syst. Sci., 33(2):203–233, 1986.
6. Akhil A. Dixit and Phokion G. Kolaitis. A SAT-based system for consistent query answering. In Mikoláš Janota and Inês Lynce, editors, Theory and Applications of Satisfiability Testing - SAT 2019 - 22nd International Conference, SAT 2019, Lisbon, Portugal, July 9-12, 2019, Proceedings, volume 11628 of Lecture Notes in Computer Science, pages 117–135. Springer, 2019.
7. Ronald Fagin, Alberto O. Mendelzon, and Jeffrey D. Ullman. A simplified universal relation assumption and its properties. ACM Trans. Database Syst., 7(3):343–360, 1982.
8. Ariel Fuxman, Elham Fazli, and Renée J. Miller. ConQuer: Efficient management of inconsistent databases. In Fatma Özcan, editor, Proceedings of the ACM SIGMOD International Conference on Management of Data, Baltimore, Maryland, USA, June 14-16, 2005, pages 155–166. ACM, 2005.
9. W.H. Inmon.
Building the Data Warehouse. John Wiley & Sons, 1996.
10. Aziz Amezian El Khalfioui and Jef Wijsen. Consistent query answering for primary keys and conjunctive queries with counting. In Floris Geerts and Brecht Vandevoort, editors, 26th International Conference on Database Theory, ICDT 2023, March 28-31, 2023, Ioannina, Greece, volume 255 of LIPIcs, pages 23:1–23:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2023.
11. Aziz Amezian El Khalfioui and Jef Wijsen. Computing range consistent answers to aggregation queries via rewriting. Proc. ACM Manag. Data, 2(5):218:1–218:19, 2024.
12. Ralph Kimball. The Data Warehouse Toolkit. John Wiley & Sons, 1996.
13. Dominique Laurent and Nicolas Spyratos. Handling inconsistencies in tables with nulls and functional dependencies. J. Intell. Inf. Syst., 59(2):285–317, 2022.
14. Dominique Laurent and Nicolas Spyratos. Consistent query answering in multi-relation databases. Inf. Comput., 303:105279, 2025.
15. Nicolas Spyratos. The partition model: A deductive database model. ACM Trans. Database Syst., 12(1):1–37, 1987.
16. Jeffrey D. Ullman. Principles of Database and Knowledge-Base Systems, volume 1-2. Computer Science Press, 1988.
17. Jef Wijsen. On the consistent rewriting of conjunctive queries under primary key constraints. Inf. Syst., 34(7):578–601, 2009.
18. Jef Wijsen. Foundations of query answering on inconsistent databases. SIGMOD Rec., 48(3):6–16, 2019.

# A Proof of Proposition 6

Proposition 6 Let $T$ be a star-table over $U$. $R$ is a repair of $T$ if and only if there is a $\varphi$ as defined above such that $\mathsf{True}(R) = \mathsf{True}(R_\varphi)$.

Proof. We first notice that $\mathsf{True}(R_\varphi) \subseteq \mathsf{True}(T)$ holds because all tuples in $R_\varphi$ are in $\mathsf{True}(T)$.
To show that $R_\varphi \models FD$, we denote by $T_\varphi$ the table $\{t_\varphi(\sigma) \mid \sigma \in m\_Chase(T)\}$, and we recall that $R_\varphi = T_\varphi \cup \mathsf{Cons}(T)$. We first prove that $m\_Chase(T_\varphi) = T_\varphi$. Indeed, let $XA$ be in $FD$, and let $t_\varphi(\sigma)$ and $t_\varphi(\sigma')$ be in $T_\varphi$ such that $t_\varphi(\sigma).X = t_\varphi(\sigma').X$. If $\sigma$ and $\sigma'$ are defined over $A$, then, by construction of $T_\varphi$, we have $t_\varphi(\sigma).A = t_\varphi(\sigma').A$, in which case $m\_Chase$ does not change $T_\varphi$. The case whereby $\sigma$ is defined over $A$ and $\sigma'$ is not defined over $A$ is not possible in $m\_Chase(T)$, and thus it is not possible that $t_\varphi(\sigma)$ is defined over $A$ while $t_\varphi(\sigma')$ is not. Therefore, we have $m\_Chase(T_\varphi) = T_\varphi$, which implies that $T_\varphi \models FD$, because no conflicts can occur in $T_\varphi$. Given an m-table $\Sigma$ over universe $U$, we denote by $\tau(\Sigma)$ the set of all tuples occurring in $\Sigma$. More formally: $\tau(\Sigma) = \{q \in \mathcal{T} \mid (\exists \sigma \in \Sigma)(\exists t \in \mathsf{tuples}(\sigma))(q \sqsubseteq t)\}$. We also recall from [14] that, when $FD$ is acyclic, $m\_Chase(\mathsf{Cons}(T)) = \mathsf{Cons}(T)$ and $\mathsf{Cons}(T) \models FD$.
We prove by induction that, at each step $k$ of the computation of $m\_Chase(R_\varphi)$, the obtained m-table $\Sigma^k$ is such that (1) $\Sigma^k$ contains no conflict and (2) $\tau(\Sigma^k) = \tau(R_\varphi)$.
$\bullet$ For $k = 0$, i.e., $\Sigma^0 = R_\varphi$, (2) is obvious. As for (1), assume that $R_\varphi$ contains $t$ and $t'$ such that for some $XA$ in $FD$ we have $t.X = t'.X$ and $t.A \neq t'.A$. In this case, as $t$ and $t'$ cannot both be in $T_\varphi$ or both in $\mathsf{Cons}(T)$, we consider that $t \in T_\varphi$ and that $t' \in \mathsf{Cons}(T)$. As $t$ is in $\mathsf{True}(T)$, this implies that $t'$ is in $\mathsf{Confl}(T)$, which is a contradiction. We therefore obtain that $t.A = t'.A$, and thus $R_\varphi$ contains no conflicts.
$\bullet$ Assuming now that the result holds for some $k > 0$, we prove that it also holds for $k+1$. As $\Sigma^k$ contains no conflicts, its rows can be seen as tuples. So consider $t$ and $t'$ in $\Sigma^k$ and $XA$ in $FD$ such that $t.X = t'.X$ and $t$ and $t'$ are defined over $A$. As $\Sigma^k$ contains no conflict, we have $t.A = t'.A$ and so $\Sigma^{k+1} = \Sigma^k$. Thus (1) and (2) are trivially satisfied. Consider now $t$ and $t'$ in $\Sigma^k$ and $XA$ in $FD$ such that $t.X = t'.X$, $t.A = a$ and $A \notin sch(t')$.
In this case, the tuple $t'$ is set to $t'_A$ such that $sch(t'_A) = sch(t') \cup \{A\}$, $t'_A.A = a$ and $t'_A.sch(t') = t'$. Thus, contrary to the previous case, $\Sigma^{k+1} = \Sigma^k$ does not hold. We notice however that this transformation generates no conflicts, and thus that (1) is satisfied by $\Sigma^{k+1}$. We now argue that $t'_A$ is a tuple in $\tau(R_\varphi)$, which, combined with the inductive hypothesis that $\tau(\Sigma^k) = \tau(R_\varphi)$, implies that $\tau(\Sigma^{k+1}) = \tau(R_\varphi)$. As $t$ and $t'$ are in $\Sigma^k$, they both belong to $\tau(R_\varphi)$, and so there are $q$ and $q'$ in $R_\varphi$ such that $t \sqsubseteq q$ and $t' \sqsubseteq q'$. To show that $t'_A$ is in $\tau(R_\varphi)$, we successively consider the following cases:
− If $q$ and $q'$ are both in $T_\varphi$ (respectively both in $\mathsf{Cons}(T)$): as these sets are 'closed' under $m\_Chase$, $t$ and $t'$ are both in $T_\varphi$ (respectively both in $\mathsf{Cons}(T)$), and so $t'_A$ is in $T_\varphi$ (respectively in $\mathsf{Cons}(T)$), because $t'_A$ is obtained through the $m\_Chase$ process.
− If $q$ is in $\mathsf{Cons}(T)$ and $q'$ is in $T_\varphi \cap \mathsf{Confl}(T)$: in this case $xa$ is in $\mathsf{Cons}(T)$ (because $xa \sqsubseteq q$).
If $\sigma'$ is the m-tuple in $m\_Chase(T)$ such that $q' = t_\varphi(\sigma')$, then $XA \subseteq sch(\sigma')$ and $xa$ is in $\mathsf{tuples}(\sigma'(XA))$. Hence $t'_A \sqsubseteq t_\varphi(\sigma')$, and thus $t'_A$ is in $\tau(R_\varphi)$.
− If $t$ is in $T_\varphi \cap \mathsf{Confl}(T)$ and $t'$ is in $\mathsf{Cons}(T)$: in this case, assume first that $xa$ is in $\mathsf{Cons}(T)$. Then $t'_A$ is in $\mathsf{Cons}(T)$, because for every $YB$ in $FD$ other than $XA$, if $YB \subseteq sch(t'_A)$ then $YB \subseteq sch(t')$. Assuming now that $xa$ is in $\mathsf{Confl}(T)$, we have $a = \varphi(x)$ and so, for every $YB$ such that $YB \subseteq sch(t'_A)$, $t'_A.B = \varphi(t'_A.Y)$ (because, as $t'$ is in $\mathsf{Cons}(T)$, we also have that for every $YB$ such that $YB \subseteq sch(t')$, $t'.B = \varphi(t'.Y)$). Hence, there exists $\sigma'$ in $m\_Chase(T)$ such that $t'_A \sqsubseteq t_\varphi(\sigma')$. Therefore $t'_A$ is in $\tau(R_\varphi)$. As a consequence, $\Sigma^{k+1}$ satisfies (1) and (2), and thus so does $m\_Chase(R_\varphi)$, meaning that $\mathsf{True}(R_\varphi) = \tau(R_\varphi)$ and $R_\varphi \models FD$.
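The chase step used in the induction above (when $t.X = t'.X$, $t.A = a$ and $t'$ is undefined over $A$, extend $t'$ with $A = a$) can be illustrated on ordinary partial tuples. This is only a simplified sketch of that single rule, not the full $m\_Chase$ on m-tables; the attribute names and data are hypothetical.

```python
# Simplified chase step for FDs X -> A over partial tuples (dicts):
# whenever two tuples agree on X and one is undefined over A,
# the missing A-value is copied over, as t' is extended to t'_A above.

def chase(rows, fds):
    """rows: list of dicts; fds: list of (X, A) with X a tuple of attributes."""
    changed = True
    while changed:
        changed = False
        for X, A in fds:
            for t in rows:
                # t must be defined over X and A to act as the 'source' tuple
                if A not in t or any(x not in t for x in X):
                    continue
                for u in rows:
                    # u must agree with t on X and be undefined over A
                    if A in u or any(x not in u for x in X):
                        continue
                    if all(u[x] == t[x] for x in X):
                        u[A] = t[A]  # extend u, as t' is extended to t'_A
                        changed = True
    return rows

rows = [{"K": 1, "A": "a"}, {"K": 1, "B": "b"}]
chase(rows, [(("K",), "A")])
# the second tuple is now also defined over A, with the propagated value "a"
```

Running the chase to a fixpoint in this way mirrors the claim $m\_Chase(T_\varphi) = T_\varphi$: once no tuple can be extended, a second pass changes nothing.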
By Definition 2, in order to show that $R_\varphi$ is in $\mathsf{Rep}(T)$, we have to show that $\mathsf{True}(R_\varphi)$ is maximal. To this end, let $S$ be such that $\mathsf{True}(R_\varphi) \subseteq \mathsf{True}(S) \subseteq \mathsf{True}(T)$ and $S \models FD$, and consider a tuple $q$ in $\mathsf{True}(S) \setminus \mathsf{True}(R_\varphi)$. Then, as $q$ is in $\mathsf{True}(T)$, there exist $\sigma$ in $m\_Chase(T)$ and $t$ in $\mathsf{tuples}(\sigma)$ such that $q \sqsubseteq t$. Since $\mathsf{Cons}(T) \subseteq \mathsf{True}(R_\varphi)$, it follows that $q$ is in $\mathsf{Confl}(T)$, implying that there exists $XA$ in $FD$ such that $XA \subseteq sch(q)$ and $\mathsf{True}(T)$ contains $xa'$ such that $x = q.X$ and $a' \neq q.A$. By construction of $R_\varphi$, $\mathsf{True}(R_\varphi)$ contains a tuple $t'$ from $\mathsf{tuples}(\sigma)$, and so we have $t.X = q.X = t'.X$ (because for every $XA \in FD$, $|\mathsf{tuples}(\sigma(X))| = 1$) and $t.A \neq t'.A$. As $t.A = q.A$ and $t'$ is in $\mathsf{True}(S)$, this implies that $S \nvDash FD$, which is a contradiction. This part of the proof is therefore complete.
Conversely, let $R$ be in $\mathsf{Rep}(T)$. By Proposition 5, for every $XA$ in $FD$ and for a given $X$-value $x$ in $\mathsf{True}(T)$, $\mathsf{True}(R)$ contains one tuple among all tuples of the form $xa_i$ from $\mathsf{True}(T)$, where $a_i$ is in $adom(A)$.
According to the steps of Process (P), these tuples define a repair, denoted $R_\varphi$, and we show that $\mathsf{True}(R) = \mathsf{True}(R_\varphi)$. Since $R$ and $R_\varphi$ are in $\mathsf{Rep}(T)$, $\mathsf{Cons}(T)$ is a subset of $\mathsf{True}(R)$ and of $\mathsf{True}(R_\varphi)$. Now, let $t$ be in $\mathsf{Confl}(T)$. Then, for every $XA$ in $FD$ such that $XA \subseteq sch(t)$, $t.XA$ is in $\mathsf{True}(R)$ if and only if $t.XA$ is in $\mathsf{True}(R_\varphi)$, because $t.X$ is in $\mathsf{Cons}(T)$ and is associated with the same $A$-value in $R$ and $R_\varphi$. It follows that when $t$ is in $\mathsf{Confl}(T)$, $t$ is in $\mathsf{True}(R)$ if and only if $t$ is in $\mathsf{True}(R_\varphi)$. Hence, we obtain that $\mathsf{True}(R) = \mathsf{True}(R_\varphi)$, which completes the proof. □

# B Proof of Proposition 8

Proposition 8 Let $T$ be a star-table over universe $U$, and $Q$: select $X$ from $T$ where $\Gamma$ a query, such that $\mathbf{K} \cap sch(\Gamma) = \emptyset$ and $\Gamma$ is an independent selection condition. Then $\mathsf{C\_ans}(Q)$ is the set of all tuples $x$ over $X$ for which there exists $\sigma$ in $m\_Chase(T)$ such that:
1. $sch(Q) \subseteq sch(\sigma)$, $x \in \sigma(X)$ and $\sigma(sch(\Gamma)) \cap Sat(\Gamma) \neq \emptyset$,
2. for every $YB$ in $FD$ such that $YB \subseteq X$, $|\mathsf{tuples}(\sigma(B))| = 1$,
3.
for every $YB$ in $FD$ such that $Y \subseteq X$ and $B \in sch(\Gamma)$, $\mathsf{tuples}(\sigma(B)) \subseteq Sat(\Gamma(B))$.
Proof. Assume first that $\sigma$ in $m\_Chase(T)$ satisfies the items of the proposition. To show that $x$ is in $\mathsf{C\_ans}(Q)$, we consider a repair $R$ in $\mathsf{Rep}(T)$ and we show that $\mathsf{True}(R)$ contains a tuple $x\gamma$ where $\gamma$ is in $Sat(\Gamma)$. To this end, we first notice that, by Proposition 1, the first two items imply that $x$ is in $\mathsf{Cons}(T)$ and thus, by Theorem 1, $x$ is in $\mathsf{True}(R)$. Moreover, since $\Gamma$ is independent, item 1 implies that for every $B$ in $sch(\Gamma)$, there exists $b$ in $\sigma(B)$ such that $b$ is in $Sat(\Gamma(B))$. If $S_1$ is the set of all $B$ in $sch(\Gamma)$ such that there exists $YB$ in $FD$ with $YB \subseteq sch(Q)$, let $S_2 = sch(\Gamma) \setminus S_1$ and let $\gamma_2$ be a tuple in $\mathsf{tuples}(\sigma(S_2))$ such that for every $B$ in $S_2$, $\gamma_2.B \in Sat(\Gamma(B))$ (such a tuple exists because, by item 1, $\sigma(sch(\Gamma)) \cap Sat(\Gamma) \neq \emptyset$). We show that $x\gamma_2$, which belongs to $\mathsf{tuples}(\sigma(XS_2))$, is in $\mathsf{True}(R)$. Indeed, by definition of $S_2$, if $YB$ is in $FD$ and such that $YB \subseteq XS_2$, then $B \notin S_2$. Thus, as $\mathbf{K} \cap S_2 = \emptyset$ (because $\mathbf{K} \cap sch(\Gamma) = \emptyset$ and $S_2 \subseteq sch(\Gamma)$), $YB \subseteq XS_2$ holds if and only if $YB \subseteq X$.
Consequently, $x\gamma_2$ is in $\mathsf{Cons}(T)$ because so is $x$, showing by Theorem 1 that $x\gamma_2$ belongs to $\mathsf{True}(R)$. Now let $B$ be in $S_1$ and $YB$ the corresponding dependency in $FD$ such that $Y \subseteq X$. By Proposition 6, $R$ can be defined using Process (P), by which a tuple $t_\varphi(\sigma)$ from $\mathsf{tuples}(\sigma)$ is set to be in $\mathsf{True}(R)$. Since $sch(t_\varphi(\sigma)) = sch(\sigma)$ and $sch(Q) \subseteq sch(\sigma)$, we write $t_\varphi(\sigma)$ as $x_\sigma \gamma_\sigma q$ where $x_\sigma = t_\varphi(\sigma).X$, $\gamma_\sigma = t_\varphi(\sigma).sch(\Gamma)$, and $q = t_\varphi(\sigma).(sch(\sigma) \setminus sch(Q))$. In this setting, we have $t_\varphi(\sigma).Y = t.Y$ (because, by Proposition 2(1), $\sigma(Y)$ contains one tuple), and thus, by the Lossless-join rule (which applies here because $R \models FD$) applied to $t_\varphi(\sigma)$, $x\gamma_2$ and $YB$, we obtain that $x\gamma_2 b$ is in $\mathsf{True}(R)$. Reproducing this reasoning for every $B$ in $S_1$, we obtain a tuple $x\gamma_2\gamma_1$ over $sch(Q)$ such that for every $B$ in $S_1$, $x\gamma_2\gamma_1.B$ is in $\sigma(B)$ and thus, by item 3, in $Sat(\Gamma(B))$. It follows that $x\gamma_1\gamma_2$ is in $\mathsf{True}(R)$ and that $\gamma_1\gamma_2$ is in $Sat(\Gamma)$. Thus $x$ is in $\mathsf{Ans}(Q^{[R]})$, and since this reasoning holds for every repair $R$ of $T$, we obtain that $x$ is in $\mathsf{C\_ans}(Q)$.
Conversely, let $x$ be in $\mathsf{C\_ans}(Q)$. By Proposition 7, $x$ belongs to $\mathsf{C\_ans}^+(Q)$, meaning that there exists $t$ in $\mathsf{True}(T)$ such that $sch(Q) \subseteq sch(t)$, $t.X = x$, $x \in \mathsf{Cons}(T)$ and $t.sch(\Gamma) \in Sat(\Gamma)$. As for every $t \in \mathsf{True}(T)$ there exists $\sigma$ in $m\_Chase(T)$ such that $t$ is in $\mathsf{tuples}(\sigma(sch(t)))$, there must exist $\sigma$ in $m\_Chase(T)$ such that $sch(Q) \subseteq sch(\sigma)$, $x \in \sigma(X)$, $\sigma(sch(\Gamma)) \cap Sat(\Gamma) \neq \emptyset$ and, for every $YB$ in $FD$ such that $YB \subseteq X$, $|\mathsf{tuples}(\sigma(B))| = 1$ (because $x$ is in $\mathsf{Cons}(T)$). In other words, there exists $\sigma$ in $m\_Chase(T)$ satisfying the first two items of the proposition. Denoting by $\Sigma(Q)$ the set of all these m-tuples, let $\sigma$ be in $\Sigma(Q)$ for which item 3 is not satisfied. Then, let $YB$ in $FD$ such that $Y \subseteq X$, $B \in sch(\Gamma)$ and $b$ in $\mathsf{tuples}(\sigma(B)) \setminus Sat(\Gamma(B))$. By item 1, $\mathsf{tuples}(\sigma(sch(Q)))$ contains $t$ such that $t.X = x$ and $t.B = b$. As $t$ is in $\mathsf{True}(T)$, by Proposition 4, there exists a repair $R$ such that $t$ is in $\mathsf{True}(R)$. Then, for every $x\gamma$ such that $\gamma$ is in $Sat(\Gamma)$, we have $t.Y = x.Y$ and $t.B \neq \gamma.B$ (since $\gamma.B \in Sat(\Gamma(B))$ and $t.B \notin Sat(\Gamma(B))$).
Consequently, $t$ and $x\gamma$ cannot both belong to $\mathsf{True}(R)$ (because $R \models FD$). Therefore, if $\Sigma(Q)$ contains one m-tuple not satisfying item 3, then $x$ cannot belong to $\mathsf{C\_ans}(Q)$. We moreover notice that if one $\sigma$ in $\Sigma(Q)$ does not satisfy item 3, then for any other $\sigma'$ in $\Sigma(Q)$ we have $\sigma'(Y) = \sigma(Y)$ and thus $\sigma'(B) = \sigma(B)$; it follows that $\sigma'$ does not satisfy item 3 either. As a consequence, we obtain that if $x$ is in $\mathsf{C\_ans}(Q)$ then $m\_Chase(T)$ contains an m-tuple satisfying all three items of the proposition. The proof is therefore complete. □

# C Proof of Proposition 10

Proposition 10 Let $T$ be a star-table over universe $U$ and $\mathcal{AQ}$: select $aggr(M_i)$ from $T$ where $\Gamma$ an analytic query with no group-by clause.
– If $aggr$ is min, max or count, then $\mathsf{C\_ans}(\mathcal{AQ}) = [min\_ans, max\_ans]$ where $min\_ans$ and $max\_ans$ are returned by Algorithm 3.
– If $aggr = sum$ and if $\mathsf{C\_ans}(\mathcal{AQ}) = [glb, lub]$, then $min\_ans$ and $max\_ans$ as returned by Algorithm 3 satisfy $min\_ans \leq glb$ and $max\_ans \geq lub$. Moreover, if $m \geq 0$ for every $m \in adom(M_i)$, then $\mathsf{C\_ans}(\mathcal{AQ}) = [min\_ans, max\_ans]$.
– For every aggregate function and every selection condition $\Gamma$, $ans^*$ as returned by Algorithm 3 is equal to $\mathsf{C\_ans}^*(\mathcal{AQ})$.
Proof.
We separately consider two cases, depending on whether the test in line 2 of Algorithm 3 succeeds or not. If the test fails, i.e., if $change\_min\_max$ as returned by the call of Compute_aggregate has value false, this means that $m\_Chase(T)$ contains no $\sigma$ such that $\mathbf{K} \cup sch(Q) \subseteq sch(\sigma)$ and $\mathsf{tuples}(\sigma(sch(\Gamma))) \subseteq Sat(\Gamma)$. By Corollary 1, this holds if and only if the consistent answer to $Q$: select $K, X$ from $T$ where $\Gamma$ is empty. Thus, the test fails if and only if there exists a repair $R$ of $T$ for which $\mathsf{Ans}(Q^{[R]}) = \emptyset$. In this case $\mathsf{C\_ans}(\mathcal{AQ})$ is NULL, which is expressed by the fact that the values of $min\_ans$, $max\_ans$ and $ans^*$ returned by the call of Compute_aggregate are respectively $-dummy$, $+dummy$ and $-dummy$. Hence, in this case, Algorithm 3 provides the correct answer.
Suppose now that the test in line 2 succeeds, i.e., that the value of $change\_min\_max$ returned by the call of Compute_aggregate is true. The statement in line 16 of Algorithm 2 shows that $m\_Chase(T)$ contains at least one $\sigma$ such that $\mathbf{K} \cup sch(Q) \subseteq sch(\sigma)$ and $\mathsf{tuples}(\sigma(sch(\Gamma))) \subseteq Sat(\Gamma)$. By Corollary 1, this holds if and only if the consistent answer to $Q$: select $K, X$ from $T$ where $\Gamma$ is not empty. Thus, the test succeeds if and only if for every repair $R$ of $T$, $\mathsf{Ans}(Q^{[R]}) \neq \emptyset$, in which case $min\_ans$, $max\_ans$ and $ans^*$ are proper values, either values from $adom(M_i)$ if $aggr \neq count$, or positive integers if $aggr = count$.
In this case, it can be seen that the m-tuples $\sigma$ in $m\_Chase(T)$ that contribute to the construction of the interval are such that $\mathbf{K} \cup X \cup sch(\Gamma) \subseteq sch(\sigma)$ and $\mathsf{tuples}(\sigma(sch(\Gamma))) \cap Sat(\Gamma) \neq \emptyset$. This is so because of our assumption on missing values in the fact table $F$ (every tuple with an $M_i$-value must be defined over all key attributes), and because any $\sigma$ not defined over some attributes in $X \cup sch(\Gamma)$, or such that $\mathsf{tuples}(\sigma(sch(\Gamma))) \cap Sat(\Gamma) = \emptyset$, cannot contribute to the consistent answer to $\mathcal{AQ}$. This explains why, when the test in line 7 of Algorithm 2 fails, no action is taken. On the other hand, the m-tuples $\sigma$ such that $\mathbf{K} \cup X \cup sch(\Gamma) \subseteq sch(\sigma)$ and $\mathsf{tuples}(\sigma(sch(\Gamma))) \cap Sat(\Gamma) \neq \emptyset$ are precisely those in $\Sigma(\mathcal{AQ})$, and recalling that $\Sigma^+(\mathcal{AQ})$ is the set of m-tuples in $\Sigma(\mathcal{AQ})$ such that $\mathsf{tuples}(\sigma(sch(\Gamma))) \subseteq Sat(\Gamma)$, we have the following for every $\sigma$ in $\Sigma(\mathcal{AQ})$:
(1) By Proposition 4, for every $t \in \mathsf{tuples}(\sigma)$, there exists a repair $R$ such that $t \in \mathsf{True}(R)$. Moreover, $t$ is the unique tuple over $sch(\sigma)$ in $\mathsf{True}(R)$ having $t.K$ as a $\mathbf{K}$-value. Notice that since $t.sch(\Gamma)$ may not be in $Sat(\Gamma)$, it is possible that $t.M_i$ does not contribute to the computation of the aggregate in $R$.
(2) If $\sigma$ is in $\Sigma^+(\mathcal{AQ})$, by Corollary 1, every repair $R$ is such that $\mathsf{True}(R)$ contains a tuple $t$ in $\mathsf{tuples}(\sigma)$. In this case, since $t.sch(\Gamma)$ is in $Sat(\Gamma)$, $t.M_i$ does contribute to the computation of the aggregate in $R$.
Before showing that the values returned by Algorithm 3 are as stated in Definition 5, we mention that the aggregate operators min, max and sum are not defined when their argument is empty, which we write as $aggr(\emptyset) = \mathtt{NULL}$. Otherwise, if $v_1$, $v_2$ and $v_3$ are values to which $aggr$ applies, then:
Commutativity: $aggr(\{v_1, v_2\}) = aggr(\{v_2, v_1\})$.
Associativity: $aggr(\{v_1, aggr(\{v_2, v_3\})\}) = aggr(\{aggr(\{v_1, v_2\}), v_3\}) = aggr(\{v_1, v_2, v_3\})$.
Monotonicity: If $aggr \neq count$ and $v_2 \leq v_3$ then $aggr(\{v_1, v_2\}) \leq aggr(\{v_1, v_3\})$.
The first two properties show that aggregate values depend neither on the order in which the elementary values are considered nor on how they are grouped during the computation. Moreover, the third property shows that, if $aggr \neq count$, the higher the values, the higher the aggregate value. In our context, recalling that elementary values are values in $\sigma(M_i)$ for $\sigma$ in $\Sigma(\mathcal{AQ})$, this last property shows that, when the aggregate is different from count, for a fixed $\sigma$, the least, respectively the highest, aggregate value is obtained by considering the least, respectively the highest, possible $M_i$-value.
These values, denoted respectively $min_\sigma$ and $max_\sigma$, are computed in lines 8-14 of Algorithm 2. Since the second property shows that aggregates can be seen as operating over a set of values, we recall the following standard properties that will be used in this proof. Let $S_1$ and $S_2$ be sets of values to which $aggr$ applies, such that $S_1 \subseteq S_2$; then: $\min(S_1) \geq \min(S_2)$, $\max(S_1) \leq \max(S_2)$, $count(S_1) \leq count(S_2)$, and, if all values in $S_1$ or $S_2$ are positive, $sum(S_1) \leq sum(S_2)$. We notice that, regarding the last property, if values in $S_1$ or $S_2$ can be positive or negative, no generic comparison can be stated. We now explain, for each aggregate, how the bounds of the interval of the consistent answer can be obtained, and see why Algorithm 3 returns (or does not return) these values. First, as for every $\sigma \in \Sigma^+(\mathcal{AQ})$ every repair contains exactly one tuple in $\mathsf{tuples}(\sigma)$, the $M_i$-values in $\sigma(M_i)$ if $M_i \notin sch(\Gamma)$, or in $\sigma(M_i) \cap Sat(\Gamma(M_i))$ otherwise, contribute to the computation of the aggregate.
Moreover, since by monotonicity $min_\sigma$ contributes to the computation of $min\_ans$ and $max_\sigma$ contributes to the computation of $max\_ans$, we have the following, given that $\Sigma^+(\mathcal{AQ}) \subseteq \Sigma(\mathcal{AQ})$:
– For $aggr = min$, $min\_ans = \min\{min_\sigma \mid \sigma \in \Sigma(\mathcal{AQ})\}$ and $max\_ans = \min\{max_\sigma \mid \sigma \in \Sigma^+(\mathcal{AQ})\}$. These values are computed in lines 23 and 17 of Algorithm 2, respectively.
– For $aggr = max$, $min\_ans = \max\{min_\sigma \mid \sigma \in \Sigma^+(\mathcal{AQ})\}$ and $max\_ans = \max\{max_\sigma \mid \sigma \in \Sigma(\mathcal{AQ})\}$. These values are computed in lines 17 and 25 of Algorithm 2, respectively.
– For $aggr = count$, $min\_ans$, respectively $max\_ans$, is the cardinality of $\Sigma^+(\mathcal{AQ})$, respectively of $\Sigma(\mathcal{AQ})$. These values are computed in lines 17 and 27 of Algorithm 2, respectively.
– For $aggr = sum$, $min\_ans$, respectively $max\_ans$, is the minimal, respectively maximal, sum that can be obtained by adding one $M_i$-value from every $\sigma(M_i)$ for $\sigma \in \Sigma^+(\mathcal{AQ})$, and then possibly one $M_i$-value from every $\sigma(M_i)$ for $\sigma \in \Sigma(\mathcal{AQ}) \setminus \Sigma^+(\mathcal{AQ})$. These values are computed in lines 17 and 30, respectively in lines 18 and 32, of Algorithm 2.
Notice that if all $M_i$-values are positive, knowing that adding 0 is neutral, the test in line 29 always fails and thus $proc\_min$ is the sum of all minimal $M_i$-values in $\sigma(M_i)$ for $\sigma \in \Sigma^+(\mathcal{AQ})$. Similarly, in this case, the test in line 31 always succeeds, except for 0, and thus $proc\_max$ is the sum of all maximal $M_i$-values in $\sigma(M_i)$ for $\sigma \in \Sigma(\mathcal{AQ})$. To complete the proof that the returned values $min\_ans$ and $max\_ans$ are as stated in the proposition, we have to show that $\mathsf{Rep}(T)$ contains repairs $R_{min}$ and $R_{max}$ such that $\mathsf{Ans}(\mathcal{AQ}^{[R_{min}]}) = min\_ans$ and $\mathsf{Ans}(\mathcal{AQ}^{[R_{max}]}) = max\_ans$. These results are consequences of Proposition 9, by considering $R_{min} = R_2$ and $R_{max} = R_1$ if $aggr = min$, and $R_{min} = R_1$ and $R_{max} = R_2$ if $aggr$ is max or count, or if $M_i$-values are positive and $aggr = sum$. We also mention that if $aggr = sum$ with no restriction on the $M_i$-values, $min\_ans$, respectively $max\_ans$, is the least, respectively the largest, sum that can be obtained using all relevant sets $\sigma(M_i)$. We thus have $min\_ans \leq glb$ and $max\_ans \geq lub$, which concludes this part of the proof. The last item in the proposition, regarding $\mathsf{C\_ans}^*(\mathcal{AQ})$ and $ans^*$, is an immediate consequence of Corollary 1. The proof of the proposition is thus complete. □
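The case analysis above for min, max and count can be illustrated with a small sketch. The representation is an assumption for illustration only: each m-tuple $\sigma$ is reduced to its set of candidate $M_i$-values, together with a flag marking membership in $\Sigma^+(\mathcal{AQ})$; Algorithm 2's actual bookkeeping over m-tables is not reproduced here.

```python
# Illustrative sketch (hypothetical data): each element of sigmas is
# (candidate_M_values, in_sigma_plus). The bounds follow the bullets above:
# Sigma+(AQ) tuples surely contribute to every repair, Sigma(AQ) tuples
# only possibly do.

def bounds(sigmas, aggr):
    plus = [vals for vals, in_plus in sigmas if in_plus]   # Sigma+(AQ)
    allv = [vals for vals, _ in sigmas]                    # Sigma(AQ)
    if aggr == "min":
        return (min(min(v) for v in allv),    # min_ans over Sigma(AQ)
                min(max(v) for v in plus))    # max_ans over Sigma+(AQ)
    if aggr == "max":
        return (max(min(v) for v in plus),
                max(max(v) for v in allv))
    if aggr == "count":
        return (len(plus), len(allv))
    raise ValueError(aggr)

sigmas = [({3, 7}, True), ({5}, True), ({1, 9}, False)]
print(bounds(sigmas, "min"))    # (1, 5)
print(bounds(sigmas, "count"))  # (2, 3)
```

For instance, with the hypothetical `sigmas` above, the count interval [2, 3] reflects that two m-tuples surely survive selection in every repair while a third one may or may not.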
We present an approach to computing consistent answers to analytic queries in data warehouses operating under a star schema and possibly containing missing values and inconsistent data. Our approach is based on earlier work on consistent query answering for standard, non-analytic queries in multi-table databases. In that work, we presented polynomial algorithms for computing either the exact consistent answer to a standard, non-analytic query or bounds on the exact answer, depending on whether the query involves a selection condition or not. We extend this approach to computing exact consistent answers of analytic queries over star schemas, provided that the selection condition in the query involves no keys and satisfies the property of independence (i.e., the condition can be expressed as a conjunction of conditions, each involving a single attribute). The main contributions of this paper are: (a) a polynomial algorithm for computing the exact consistent answer to a usual projection-selection-join query over a star schema under the above restrictions on the selection condition, and (b) showing that, under the same restrictions, the exact consistent answer to an analytic query over a star schema can be computed in time polynomial in the size of the data warehouse.
# 1 INTRODUCTION

Recent developments in Large Language Models (LLMs) have spurred extensive research into automating many activities, including code generation. In this paper, we address text-to-SQL generation [QHWY+22]. Our research was informed by our experiences with large-scale industrial databases [DJMS02] [GJ14]. We have consistently found that the most difficult part of writing correct SQL queries is understanding what is in the database to begin with. After that, writing queries is relatively straightforward. Examples of difficulties include:

- Lack of documentation: In a large number of cases, the database lacks documentation: no field or table descriptions. Primary-key/foreign-key dependencies are not labeled. The new database user has only cryptic field and table names for guidance.
- Incomplete or dated documentation: Databases undergo continual schema and data change. Fields get added or dropped, and their contents can change their format and meaning. Oftentimes, the documentation is not updated. In general, even the Subject Matter Experts (SMEs) are often not fully aware of the current contents of a database.
- Unclear data formats: To take a simple example, suppose that we know that the owner field is the name of the owner of the account. What is the format? Is it last_name first_name; last_name, first_name; or first_name last_name? Is there any capitalization? Punctuation? How are more than two names handled?
- Multiple data formats: A single field might have data in multiple formats. For example, the owner field might have half its entries in format first_name last_name and half in format last_name, first_name.
- Multiple fields with similar meanings: A common example of this issue is a date field. Suppose that table customer has 4 date fields, a_date, b_date, c_date, d_date, each of which represents a different date of an interaction (actual names can be almost as cryptic). How does one find all customers who signed up on or after Sept. 1, 2024?
Different fields might be filled in based on the sign-up process, so the proper formula might be Coalesce(a_date, c_date) >= date '01/09/2024'.

- Complex join paths: There can be many complexities in developing a correct join expression: the join key might involve multiple fields, it might involve conditional values (e.g., because of multiple processes adding records to the table: iff(R.vendor = 5, R.id, R.serial_num) = S.serial_num), or it might involve transformations on join keys. In one example, we needed to join two well-documented tables on the IMEI. The join result was empty, until we realized that in one table the IMEI was 13 digits and in the other it was 14. The longer IMEI always had '1' as a prefix.
- Complex formulae: The complexities of an operations or business process can make seemingly simple calculations obscure. For example, the revenue of an interaction might be calculated as iff(cds_code = 1, revenue, zdc+total_rendered*10000).
- Default values: In many cases, seemingly required fields have nonsense values. For example, a telephone number field might have a very large number of entries such as 123-456-7890 or 111-111-1111. These default values should be excluded from results involving the telephone number. If the telephone number is a join key between two 1-million record tables and both fields have 123-456-7890 in one tenth of their records, then the query must materialize 10 billion useless records.

Several approaches exist for understanding complex databases. One well-known approach is profiling [AGN15], the systematic querying of the tables in a database to create reports on their properties. For example, the profiling might show that the T.IMEI field is always 13 digits while the S.IMEI field is always 14 digits and always starts with a 1.
A natural conclusion is that T and S are joinable on '1' || T.IMEI = S.IMEI. Another common approach is to examine queries that SMEs have written for clues to important fields, join paths, business logic, and so on. If a query log is available, this analysis can be automated to generate statistical reports which might indicate, e.g., complex join paths [YPS09]. A third approach is newer, and is enabled by LLMs. If one has a query log and high-quality metadata, one can ask the LLM to translate the SQL into text. This technique allows the user to find related queries based on the textual similarity of the questions.

In this paper, we apply these techniques to LLM-based text-to-SQL generation. For evaluation, we use the BIRD benchmark [JHQY+23] to quantify their benefit. BIRD is a challenging benchmark, with often ambiguous and/or dirty schemas, data, and metadata. Our experiments are run on the dev database, with questions selected from minidev for some experiments. In the BIRD benchmark, a submission is evaluated on an unknown test database, so no query log is available. We developed a submission which uses only profiling information. On Sept. 1, 2024 and again on Nov. 11, 2024, we achieved the highest scores both using and not using the oracle information. Since oracle information is never present in practice, the test which does not use the oracle is more indicative of how a text-to-SQL technique works in practice. Without the oracle, our submission got a score 10.28 percentage points higher than the next best submission without the oracle, 67.41% vs. 57.13% (at the time of writing, Jan. 2025). On March 11, 2025 we submitted again using the oracle and achieved the #1 spot with a test score of 77.14. The top five scores using hints, at the time of writing, are: 77.14 (AT&T), 76.02 (Google), 75.63 (Contextual AI), 75.63 (Alibaba), 73.17 (IBM).
# 1.1 Contributions

In this paper, we investigate three schemes for automatic metadata generation for text-to-SQL applications. Two of these schemes are traditional: database profiling and query log analysis.

In the context of database profiling, we show that an LLM can translate the profiling information (in the context of the table schema) into useful metadata about field meaning. We use the BIRD benchmark to evaluate automatic metadata extraction. We find that by using the LLM to summarize the profile metadata, we can gain significant insights into field contents. Using profile-enhanced field metadata blows up the size of the schema provided in an LLM prompt, so to obtain better results we develop a novel schema linking algorithm. We find that using our schema linking algorithm provides a significant boost in accuracy scores. We also find that using the profile-generated metadata provides better results than using just the SME metadata supplied in the benchmark! Using fused metadata provides the best results, and the combination of techniques lets us achieve the #1 spot on the BIRD leaderboard twice.

The BIRD query set and schemas are relatively simplistic, but interesting results can still be extracted. By using query log analysis, we can find a significant number (25% of the total) of undocumented join paths. We can also find complex join predicates, and business logic for predicates and fields that are only documented in the oracles, or not at all.

We also investigate the use of LLM SQL-to-text generation to create few-shot examples, a task made possible by the introduction of LLMs. While SQL-to-text has been used in, e.g., [PLSC+24], we make an experimental study to evaluate the technique and find that the LLM can generate questions as good as or better than the human annotations.

# 2 Profiling

Database profiling has a huge literature dating back decades [AGN15].
The common idea is to analyze database contents to extract properties that aid in understanding those contents. Basic profiling takes a pass over a table and collects statistics such as:

- The number of records in the table.
- For each field, the number of NULL vs. non-NULL values.
- For each field, the number of distinct values.
- For each field, the "shape" of its values, e.g. min and max, number of characters (digits), alphabet (upper/lower/punctuation/...), common prefixes, etc.
- For each field, a sample of the top-k field values.
- For each field, a minhash sketch.

Count distinct, and the set of top-k field values and their counts, can be computed by approximate means [FFGM07] [IBS08], and these functions are increasingly present in commercial databases. A minhash sketch [B97] is a collection of $K$ values computed by $m_i(f) = \min(h_i(v_j))$ over all values $v_j$ of field $f$, for $i$ ranging from 1 to $K$, where each $h_i$ is a different hash function. The minhash sketch can be used to compute the resemblance of the contents of two fields F and G, which is

$$ \operatorname{res}(\mathrm{F}, \mathrm{G}) = |\mathrm{F} \cap \mathrm{G}| \, / \, |\mathrm{F} \cup \mathrm{G}| $$

Given two minhash sketches $m(f)$ and $m(g)$, the resemblance between the values of fields $f$ and $g$ can be approximated by

$$ \operatorname{res}(f, g) \approx \frac{1}{K} \sum_{i=1}^{K} \operatorname{if}(m_i(f) = m_i(g), 1, 0) $$

Given the minhash sketch of field $f$, the collection of fields $g$ with a large intersection can be quickly computed. These can be used for tasks such as:

- Finding join paths.
- Imputing metadata from field $f$ to field $g$.
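As an illustration, the sketch-and-estimate procedure just described can be written in a few lines of Python. The hash family, the choice of K, and the sample field values below are our own illustrative choices, not the paper's implementation:

```python
import hashlib

K = 128  # number of hash functions in the sketch

def h(i, value):
    """The i-th hash function: a stable 64-bit hash of the value, salted with i."""
    digest = hashlib.sha1(f"{i}:{value}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def minhash(values):
    """m_i(f) = min(h_i(v_j)) over all values v_j of the field, for i = 1..K."""
    return [min(h(i, v) for v in values) for i in range(K)]

def est_resemblance(mf, mg):
    """Estimate res(F, G) = |F ∩ G| / |F ∪ G| as the fraction of
    sketch positions on which the two sketches agree."""
    return sum(1 for a, b in zip(mf, mg) if a == b) / K

# Two fields with overlapping value sets: true resemblance is 3/5.
f_values = {"alice", "bob", "carol", "dave"}
g_values = {"alice", "bob", "carol", "erin"}
print(est_resemblance(minhash(f_values), minhash(g_values)))
```

Because each sketch position agrees with probability equal to the resemblance, the estimate concentrates around 3/5 here as K grows; identical fields always give an estimate of 1.0.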
Other more complex profiles can be collected, such as multi-field keys, functional dependencies, and other kinds of dependencies [AGN15]. In this study we restrict ourselves to the basic profiles.

# 2.1 Using Profiling Information for Text-to-SQL

Actually using profiling information for text-to-SQL requires transforming the raw statistics into a form that the LLM can readily use. We describe the process using examples from BIRD. We can start with frpm.CDSCode. A mechanically generated English-language description of the profile for this field is:

Column CDSCode has 0 NULL values out of 9986 records. There are 9986 distinct values. The minimum value is '01100170109835' and the maximum value is '58727695838305'. Most common non-NULL column values are '01100170109835', '01100170112607', '01100170118489', '01100170123968', '01100170124172', '01100170125567', '01100170130401', '01100170130419', '01100176001788', '01100176002000'. The values are always 14 characters long. Every column value looks like a number.

The next step is to use the English-language profile, the provided metadata for this field (which is just "CDSCode"), the table name, and the names of other fields in the table, to ask the LLM for a short description of the contents of the field. The resulting short description is:

The CDSCode column stores unique 14-character numeric identifiers for each school in the database, where CDS stands for County-District-School.

The short description of CDSCode describes the format of the values in the field, and identifies its meaning: County-District-School. The LLM is able to pick up on the meaning of CDS because CDS is a common acronym for County-District-School. However, CDS can also mean cadmium sulfide, credit default swap, counterfeit deterrence system, cross domain solution, and so on.
But in the context of the table name (FRPM, or Free or Reduced Price Meal) and column names such as "Academic Year", "County Code", and so on, the most likely meaning of CDS is the one chosen.

While this short description is good for identifying the meaning of the field, more detail about the field values can guide the text-to-SQL LLM to use proper literal values. A long description is:

The CDSCode column stores unique 14-character numeric identifiers for each school in the database, where CDS stands for County-District-School. The CDSCode column contains 14-character numeric strings with no NULL values, 9986 distinct values, ranging from '01100170109835' to '58727695838305'; common values include '01100170109835', '01100170112607', '01100170118489', '01100170123968', '01100170124172', '01100170125567', '01100170130401', '01100170130419', '01100176001788', '01100176002000'.

For another example where the LLM can guide the choice of literals for constraints, consider frpm.\`Academic Year\`. The provided metadata is "Academic Year", with the field value format left vague. Even the short LLM description is specific about the field value format:

The \`Academic Year\` column stores the academic year for each record in the format 'YYYY-YYYY'.

A particularly striking example is cards.leadershipSkills. This field contains JSON data, but this is not indicated in the field metadata:

A list of formats the card is legal to be a commander in

The LLM recognizes the format of the field contents and provides this short summary:

The leadershipSkills column stores JSON-formatted data indicating the formats in which a card is legal to be used as a commander, such as Brawl, Commander, and Oathbreaker.

# 3 Schema Linking with Profile Metadata

The examples in the previous section make clear that an LLM can very often generate excellent descriptions of field contents and meanings.
However these descriptions, especially the long descriptions, can overflow the token limit for LLM systems [TPCM+24]. In addition, we have observed that in the presence of long prompts, the LLM will pick up on the material in the beginning and the end, but tend to ignore the part in the middle (also observed by [TPCM+24]). The long field descriptions generated from profiling and LLM summarization are too long to be provided for context if such a description is provided for every field of every table in a database.

Schema linking [TPCM+24] [DZGM+23] [QHWY+22] [LPKP24] [GWLS+23] refers to identifying which fields are relevant to generating an SQL query in response to a question. Some authors [FPZE+24] have found that schema linking improves text-to-SQL performance even when the schema fits into the prompt context. In CHESS [TPCM+24], the authors found that perfect schema linking significantly improves performance. While some authors [MAJM24] have expressed an opinion that schema linking is not necessary with newer LLMs with large prompt contexts, industrial databases can have hundreds of tables each with hundreds of fields, so in practice schema linking is a necessity. In this section, we describe how we performed schema linking in our BIRD submission. The value of the two types of LLM profile summaries (short and long) should be clear: the short summary is used to help with schema linking while the long summary is used for generating SQL from text.

There are four common schema linking mechanisms [TPCM+24], which can be used in combination:

- Metadata similarity search: search a vector database for fields whose name and/or metadata are semantically similar to the question.
- Column filtering: for each field, ask the LLM if the field is relevant.
- Table selection: give the LLM the full schema and ask it to identify relevant tables.
- Column selection: give the LLM the full schema and ask it to identify relevant fields.

We tried these schema linking techniques but obtained unsatisfactory results. The authors of [QLLQ+24] have observed that LLMs have good performance at tasks for which they have been trained, but poor performance on tasks outside of these boundaries. This property is known as task alignment. Specifically, we have observed that LLMs are not good at direct schema linking (identifying which tables/columns are relevant to a question) but are good at generating SQL. So our schema linking method is focused on generating SQL queries and gathering the fields they reference.

Another consideration is that the literals used in the question can help to indicate the field that should be constrained [TPCM+24]. One step in the algorithm is to identify fields which can contain a literal, and if the field involved in the constraint is not among that set, ask the LLM to rephrase the generated SQL using one of the fields in the set. Yet another consideration is that recall is better than precision: it is better to put too many fields in the prompt (within limits) than too few.

The general outline of our schema linking algorithm is:

- For several different variants of field collections and their metadata:
  - Ask the LLM to generate an SQL query based on the question and in the context of the metadata variant.
  - Collect the fields and literals in the generated query.
  - Adjust the query to try to use fields which contain the literals, if needed.
- Use the collection of all fields returned by the different variants as the schema linked to the question.

In the remainder of this section, we describe in more detail the methods we have found to be effective.
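To make the outline concrete, here is a minimal sketch of the variant loop with the LLM call and the literal-to-field lookup stubbed out. The names `generate_sql` and `fields_containing`, the toy regex extraction, and the retry shape are our illustrative assumptions, not the authors' code:

```python
import re

def extract_fields(sql):
    # Toy extraction of alias.field references from a generated query.
    return set(re.findall(r"\b\w+\.\w+", sql))

def extract_literals(sql):
    # Toy extraction of single-quoted string literals.
    return set(re.findall(r"'([^']*)'", sql))

def schema_link(question, variants, generate_sql, fields_containing, max_retry=2):
    """For each schema/metadata variant: generate SQL, and if a literal in
    the query is not contained in any referenced field, ask for a revision
    suggesting fields that do contain it. Return the union of fields."""
    all_fields = set()
    for schema in variants:
        sql = generate_sql(question, schema)
        for _ in range(max_retry):
            fields = extract_fields(sql)
            missing = [lit for lit in extract_literals(sql)
                       if not (fields_containing(lit) & fields)]
            if not missing:
                break  # every literal is matched to a referenced field
            sql = generate_sql(question, schema,
                               suggest=fields_containing(missing[0]))
        all_fields |= extract_fields(sql)
    return all_fields
```

In the real pipeline, the variants are the focused/full schemas crossed with the minimal/maximal/full profiles, `fields_containing` is answered by the LSH indices over sampled field values, and the revision is a fresh LLM prompt over the augmented schema.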
The schema linking algorithm requires a preprocessing step to aid in finding fields that match a literal:

1. For every field f:
   a. Fetch N distinct values of f, or as many distinct values as exist.
   b. Compute a string similarity index on these values.
   c. Attach the string similarity index to the field's entry in the profile.

We found that a Locality Sensitive Hash (LSH) index on shingles is effective for the field value index, as it provides an approximate match on values but does not do semantic similarity. For the BIRD benchmark, we used N=10000. By contrast, CHESS [TPCM+24] indexes all values of all fields for literal matching. This technique is not scalable outside of small benchmark databases, so we limit ourselves to a moderate size sample.

A second, semantic similarity index is computed on the profile using FAISS [JDJ19]. This index is on the textual descriptions in the profile of a field (i.e., the long summary), which allows for the efficient search of fields likely to be relevant to a textual query.

A key part of the schema linking algorithm is the metadata context. We use the following terms:

- Focused schema: given a question, this is the set of fields which are textually similar to the user's question, based on the string similarity index on the fields. In addition, literals are extracted from the question, and additional fields which include that literal in their values (using the LSH indices in the profile) are added to the focused schema.
- Full schema: all fields in all tables.
- Minimal profile: describe the field using the short LLM summary.
- Maximal profile: describe the field using the long LLM summary.
- Full profile: describe the field using the SME-supplied metadata along with the maximal profile.

Input: profile Profile, vector database Index, textual question Question, int MaxRetry

1. Let Fields be a set of fields and Lits be a set of literals.
2.
For each of the following five cases of schema Schema: a) focused schema, minimal profile; b) focused schema, maximal profile; c) full schema, minimal profile; d) full schema, maximal profile; e) focused schema, full profile; do the following:
   a. Use the LLM to generate an SQL query Q in response to Question and Schema.
   b. Let FieldsQ be the fields referenced in Q and let LitsQ be the literals in Q.
   c. Let LitFieldsQ and MissingLits be empty lists.
   d. For each literal l in LitsQ:
      i. Use the LSH indices in the profile to identify the fields Fields_l which contain l as a value.
      ii. If no field f in Fields_l is in FieldsQ:
         1. Add Fields_l to LitFieldsQ.
         2. Add l to MissingLits.
   e. If LitFieldsQ is not empty and the number of retries is less than MaxRetry:
      i. Let AugmentedSchema be the schema augmented with any fields in LitFieldsQ which are not in Schema.
      ii. Write a prompt which asks the LLM to revise the SQL query Q, suggesting the use of fields which contain literals in MissingLits, resulting in a revised SQL query Q.
      iii. Repeat steps 2.b through 2.e.
   f. Add FieldsQ and LitsQ to Fields and Lits.
3. Return Fields as the set of fields for providing context.

# 3.1 Example

We work through a simplified example from the BIRD benchmark: From the California Schools dataset, "Please list the zip code of all the charter schools in Fresno County Office of Education." The schema variant we will use is the full minimal profile, a sample of which is below:

Field frpm.CDSCode means: The CDSCode column stores unique 14-character numeric identifiers for each school in the database, where CDS stands for County-District-School. This field joins with schools.CDSCode.

Field frpm.\`Academic Year\` means: The \`Academic Year\` column stores the academic year for each record in the format 'YYYY-YYYY'.
Field frpm.\`County Code\` means: The \`County Code\` column stores 2-character codes representing different counties.

Field schools.LastUpdate means: The LastUpdate column stores the date when each record was last updated.

A prompt is prepared and sent to the LLM, which responds with

SELECT T2.Zip FROM frpm AS T1 INNER JOIN schools AS T2 ON T1.CDSCode = T2.CDSCode WHERE T1.\`Charter School (Y/N)\` = 1 AND T1.\`County Name\` = 'Fresno County Office of Education'

We extract the following fields: frpm.CDSCode, frpm.\`County Name\`, frpm.\`Charter School (Y/N)\`, schools.CDSCode, schools.Zip; and the following literals: 'Fresno County Office of Education'. Using the LSH indices in the schemas, we find that the literal does not occur in any field in the generated query, but does occur in fields frpm.\`District Name\`, satscores.dname, and schools.District. Another prompt is generated which recommends that the LLM use one of these fields to match the literal. The revised SQL query is

SELECT T2.Zip FROM frpm AS T1 INNER JOIN schools AS T2 ON T1.CDSCode = T2.CDSCode WHERE T1.\`Charter School (Y/N)\` = 1 AND T2.District = 'Fresno County Office of Education'

Now all literals are matched to a field in the query, so this field list is returned.

# 4 BIRD Submission Details

Our BIRD submission which achieved the #1 spot on the BIRD leaderboard on Nov. 11, 2024 primarily uses the techniques of database profiling, LLM profile summarization, and schema linking described in Sections 2, 2.1, and 3. In this section, we describe some additional details of the BIRD benchmark submission.
Our BIRD submission makes use of few-shot examples [NZZR+23] [GWLS+23] [PL22] taken from the train query set. We use a technique described by [LPKP24]. In every question, we use the LLM to replace names with placeholders. The masked questions are put in a vector database with a reference to the corresponding SQL. To find few-shot examples, we mask the input question and find the 8 most similar questions in the vector database. These 8 queries are used as the few-shot examples.

As in CHASE-SQL [PLSC+24], we generate multiple candidate queries and select one as the answer. To introduce variety into the candidate set, we use two techniques:

- Changing the LLM randomization seed.
- Changing the prompt by randomizing the order of the (schema linking-reduced) schema fields.

Our BIRD submission generates three candidates. These three candidates are checked for the validity of the SQL by using SQLglot. We then check for SQL constructions that are likely to indicate an incorrect response. Some of these are checks on possible SQL problems. For example, a NULL value is ordered before all other values, so we check to ensure a NOT NULL predicate on f:

- If the output is in ascending order on field f.
- If the select list contains the aggregate min(f).

Other checks relate to the apparent preferences of the authors of the SQL queries. For example:

- Check if a min/max query used a nested subquery instead of an ORDER BY.
- Check if a query performs string concatenation on fields instead of returning the fields individually.

If a bad SQL construction is detected, the LLM is asked for a correction with up to three retries. We use majority voting among the candidates to pick one to be the final answer. Each of the up to three candidates is executed, and their results converted to sets. If there is agreement among two of the candidates, one of them is chosen.
Else, an answer is chosen among the candidates randomly.

# 4.1 Experiments

The methods described in Sections 3 and 4 got us to the #1 spot on the BIRD benchmark leaderboard (we are at the #3 spot at the time of writing). As our interest is in the efficacy of automatically generated metadata, we ran some experiments on how additional field metadata helps to improve accuracy. For these experiments, we used the schema linking and additional techniques described in this section, and used the GPT-4o LLM. We ran the experiments on the MiniDev questions/SQL set (of 500 questions). Our results are below.

Table 1. Effects of metadata and hints on accuracy.

A first takeaway is that the most powerful metadata available in BIRD are the hints, which in MiniDev are very clear and precise. However such hints are not available in practice. If one has such exact information about how to write a query, one might as well develop a canned query system. Without hints (for which our submission is at the #1 spot among those without hints), the text-to-SQL generation does surprisingly well even without field metadata. This result reflects the descriptive field and table names in many of the tables. Using field metadata improves accuracy as expected, but using profiling metadata results in a bigger accuracy boost than using the BIRD-supplied metadata does! However there are details in the BIRD metadata that are missing in the profiling metadata, so the fused metadata naturally provides the best accuracy (with or without hints).

A next question to ask is: how well does schema linking work? We compare using the full schema, our schema linking (described in Section 3), and perfect schema linking. We used the fused metadata for these experiments. The results are in the table below.

Table 2. Effects of schema linking on accuracy.
Recent papers [MAJM24] [PLSC+24b] claim that schema linking is not needed when using frontier models such as GPT-4o. However these results show that this is not the case even for the small schemas in the BIRD benchmark. The full schema case corresponds to no schema linking. The algorithm described in Section 3 provides a significant improvement, so effective schema linking does help. However perfect schema linking provides a large jump in scores. Clearly, further research on schema linking for text-to-SQL is needed.

# 4.1.1 Detailed Discussion

In this section, we describe some peculiarities we observed in these experiments. One phenomenon that we observed is that providing profile metadata can lead to generated queries being flagged as incorrect. For example, in Q356, the question is: How many cards have infinite power? with the hint: infinite power refers to power = '*'.

Profiling detects the presence of special symbols, and the LLM summary contains the phrase: special symbols like "∞" for infinite power.

The gold SQL is

SELECT COUNT(*) FROM cards WHERE power = '*'

but the predicate should be power = '∞'. Another example is Q1260, which has question: Please list the ID of the patient whose RF is normal and who is older than 60.
With hint: normal RF refers to RF < 20; don't have thrombosis refers to Thrombosis = '0'. The gold SQL is

SELECT COUNT(DISTINCT T1.ID) FROM Examination AS T1 INNER JOIN Laboratory AS T2 ON T1.ID = T2.ID WHERE T2.RF < 20 AND T1.Thrombosis = 0

This query is wrong in several ways, but we will focus on the predicate on Laboratory.RF. The hint states that the predicate to use is RF < 20 and this predicate is pasted in. But RF is of type text, so the proper predicate should be CAST(T2.RF AS REAL) < 20.

In some cases, the query generated using the full schema is flagged as correct, but with the linked or perfect schema, it is flagged as incorrect. One example is Q1480, where there is an issue in formatting yearmonth.Date with the linked and perfect schema, but not the full schema. Another example is Q1505, where the choice between returning count(*) vs. count(distinct CustomerID) depends on the length of the schema. Here the linked and perfect schema SQLs are correct but the gold SQL is not, so they are marked as wrong. These problems are due to the instability of LLM answers.

# 5 Query Log Analysis

Most DBMS systems collect query logs: a log of all the SQL query text submitted to the DBMS. Many of these queries are written by expert SMEs. So, by extracting features from the log, we can obtain SME information without the need for extensive interviews. In addition, query logs likely contain features that SMEs don't know about or have forgotten. The query log is kept for some window (e.g. 90 days) and contains a variety of additional metadata such as the time of submission, user ID, query status (succeed/fail), performance metrics, or even the query plan.
While these metadata fields are valuable for filtering, in this study we focus on the query text. In some data analytics platforms, e.g. Databricks, a large portion of the queries submitted to the system do not have textual SQL queries. This is the case when data analytics frameworks such as PySpark are used. PySpark primarily operates on dataframes (which correspond to Spark RDDs), using a sequence of dataframe manipulation methods to construct an execution plan of select, project, join, aggregation, etc. operations. These are collected until an action triggers the dataframe evaluation. The execution plan is optimized and evaluated, generally in the same way that an explicit SQL query would be. So, dataframes allow the construction of SQL-equivalent plans, but no SQL text is involved and no SQL text is logged. However Databricks keeps logs of query plans, which can be handled in a manner similar to that of textual SQL queries. We will note how query plans can be handled in the discussion of query log processing. PySpark-style interfaces are becoming popular in other databases as well, e.g. Snowpark in Snowflake.

# 5.1 Query Log Processing

The raw query text is not directly useful; instead it has to be processed to extract interesting features. To be useful for SQL generation, all fields referenced in the query must be traced back to their source table. Further, in the presence of subqueries, the formula used to compute a field should be substituted for that field. For example, consider the following query:

Select A.uop_cd, A.trans_amt+P.total_planned as current_spend_and_planned
From accounting_table A,
  (Select source_code, sum(planned_amt) as total_planned
   From planning_table
   Where country_code = 'USA'
   Group By source_code) P
Where A.uop_cd = P.source_code

In the select clause, A.uop_cd can be resolved to accounting_table.uop_cd. The second select-list element is sourced from a subquery.
Resolving the formula for P.total_planned, we can determine that the output field current_spend_and_planned is sourced from

accounting_table.trans_amt + sum(planning_table.planned_amt)

To perform field resolution and feature extraction, the first step is to use an SQL parser to convert the SQL statement into an Abstract Syntax Tree (AST). There are many open source SQL parsers, e.g. sqlglot11 and JSqlParser12. We developed our own using the Python Lark13 parser. To simplify the discussion, we assume that the AST returns a node which represents a regular query, e.g. select/from/where/group-by/having, and that the only subqueries (used for field resolution) are in the From clause. The goal of the algorithm is to resolve the formulas used to compute the fields of the subqueries, if any, and use them for resolving fields in the top-level query. The actual algorithm we use has a variety of complexities to handle those of SQL, but they are not important here.

The main data structure is a subquery summary table, which maps a field returned by a subquery to the formula which computes it. For example, the subquery summary for P is:

  source_code: planning_table.source_code
  total_planned: sum(planning_table.planned_amt)

The algorithm for field resolution is:

Resolve_fields(root)
1. For every subquery q in the From clause
   a. Summary(q) = resolve_fields(q)
2. Query_summary = <empty table>
3. For every field f in the Select list
   a. Resolve the fields referenced in f's formula using the tables and subquery summaries in the From clause
   b. Query_summary(f) = <resolved formula>
4. Return Query_summary

For example, to resolve the top level in the example query, we process each returned field in turn. Uop_cd is computed from A.uop_cd, which resolves to accounting_table.uop_cd. Current_spend_and_planned is computed from A.trans_amt + P.total_planned. The two fields in this formula resolve to accounting_table.trans_amt and sum(planning_table.planned_amt).
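As a concrete illustration, the Resolve_fields algorithm can be sketched in Python over a toy AST. The nested-dict encoding and the regex-based reference matching below are our own simplifications, not the output of the Lark parser used in the paper:

```python
import re

def resolve_fields(query):
    """Resolve each SELECT formula down to base-table fields.

    `query` is a toy AST: {"select": {name: formula}, "from": {alias: source}}
    where a source is either a base-table name (str) or a nested query (dict).
    """
    # Step 1: recursively summarize subqueries in the FROM clause.
    summaries = {}
    for alias, src in query["from"].items():
        summaries[alias] = resolve_fields(src) if isinstance(src, dict) else None

    # Steps 2-3: resolve every "alias.field" reference in each SELECT formula.
    query_summary = {}
    for out_name, formula in query["select"].items():
        def substitute(m):
            alias, field = m.group(1), m.group(2)
            if alias not in summaries:
                return m.group(0)                  # already fully qualified
            if summaries[alias] is None:           # base table: qualify the field
                return f'{query["from"][alias]}.{field}'
            return summaries[alias][field]         # subquery: paste its formula
        query_summary[out_name] = re.sub(r"(\w+)\.(\w+)", substitute, formula)
    return query_summary

# The accounting/planning example from the text:
P = {"select": {"source_code": "planning_table.source_code",
                "total_planned": "sum(planning_table.planned_amt)"},
     "from": {"planning_table": "planning_table"}}
top = {"select": {"uop_cd": "A.uop_cd",
                  "current_spend_and_planned": "A.trans_amt + P.total_planned"},
       "from": {"A": "accounting_table", "P": P}}
print(resolve_fields(top))
```

Running this on the example reproduces the resolution described above: uop_cd maps to accounting_table.uop_cd and current_spend_and_planned maps to accounting_table.trans_amt + sum(planning_table.planned_amt).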
So spend_and_planned is computed from their sum, with result:

accounting_table.trans_amt + sum(planning_table.planned_amt)

# 5.2 Query Plan Processing

Ideally, one has access to query logs containing the SQL text. However, in some cases only a query plan is available. We encountered this issue when trying to understand queries in Databricks. Many data science workloads do not submit textual SQL queries, but rather are written in a programming system such as PySpark. For example, the query

Select a, b from Table where c = 5

could be expressed as

Df = spark.read.format("delta").load(<Table_delta_file>).where(col('c') == 5).select('a', 'b')

These constructions are not rendered into SQL; instead they create a query plan which is optimized and executed. Given that Spark integration is a desirable feature, additional vendors are adding similar execution environments, e.g. Snowpark/Snowflake. The simple query above would be rendered into a query plan in the expected way: a table scan, followed by a Select operator, followed by a Project operator, followed by an Output operator14. Each operator can be viewed as a separate subquery, so the algorithm for resolving field formulas through subqueries applies to well-annotated query plans also. CHASE-SQL [PLSC+24] processes the SQLite query plan to extract features, to ensure diversity in the SQL queries generated in response to a question, for candidate selection.
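Viewing each plan operator as a one-operator subquery can be sketched as follows. The plan encoding (a bottom-up list of operators) is a hypothetical simplification; real Databricks plans carry much richer annotations:

```python
# Toy sketch: render a linear query plan (Scan -> Filter -> Project) into
# nested SQL text, so each operator becomes a one-operator subquery and the
# field-resolution algorithm of Section 5.1 applies unchanged.

def plan_to_sql(plan):
    sql, bare = None, False          # bare: sql is just a table name so far
    for op, arg in plan:
        if op == "Scan":
            sql, bare = arg, True
        elif op == "Filter":
            src = sql if bare else f"({sql})"
            sql, bare = f"SELECT * FROM {src} WHERE {arg}", False
        elif op == "Project":
            src = sql if bare else f"({sql})"
            sql, bare = f"SELECT {arg} FROM {src}", False
    return sql

plan = [("Scan", "Table"), ("Filter", "c = 5"), ("Project", "a, b")]
print(plan_to_sql(plan))
# -> SELECT a, b FROM (SELECT * FROM Table WHERE c = 5)
```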
# 5.3 Feature Extraction

We can extract further features from this query, for example:

- Spend_and_planned is computed from accounting_table.trans_amt + sum(planning_table.planned_amt)
- There is a constraint planning_table.country_code = 'USA', or alternatively planning_table.country_code = <string>
- Planning_table.source_code is used as a group-by variable
- There is a join path accounting_table.uop_cd = planning_table.source_code

These features provide valuable metadata about the use of accounting_table and planning_table. The named field, spend_and_planned, names the formula in bullet point 1. So if a question asks about current and planned spending, textual similarity leads to the formula. The labeling of a join path from accounting_table to planning_table provides solid information about how to perform joins if primary key/foreign key information is missing from the schema (as it often is).

The types of features one can extract from a query include:

- Named select fields
- Unnamed select fields (i.e. no AS clause, or a trivial AS clause)
- Non-join field constraints – i.e. between fields of the same range variable
- Join field constraints – between fields of different range variables
- Constraints in the ON clause
- Sets of constraints in the ON clause – two or more field constraints might be needed for the join from R to S
- Group-by formulas
- Group-by sets – all fields (formulas) in a group-by clause
- WITH subqueries; the naming of a subquery often indicates its purpose
- Sets of tables referenced
- Sets of fields referenced
- Query complexity

Query log features help in a variety of ways. For one, primary/foreign key constraints are often not well documented, and many join constraints involve multiple fields and data transformations. For another, extracted features can show details of how a database should be queried. A common strategy in text-to-SQL is the use of few-shot examples [NZZR+23][GWLS+23][PL22] – sample queries generally found by semantic similarity of their associated questions to the target question. However, the few-shot queries might not contain the necessary details, and might have the wrong query "shape". Further, features can be expressed with less text than few-shot examples, reducing prompt size.

# 5.4 Experiments

BIRD does not reveal any information about the test database used for ranking, so no query logs are available for analysis. Instead, we use BIRD as a test suite for determining if we can use log analysis for detecting missing metadata. We use the BIRD dev test suite from Sept 2023 (dev 9-23), as well as the newer cleaned-up version of June 2024 (dev 6-24), as the unrevised version is likelier to reflect actual industrial databases. Our experiments focus on three questions:

1. Can we find equality (e.g. pk-fk) join constraints that are not documented in the SQLite schema?
2. Can we find other interesting join constraints?
3. Can we find interesting named formulas (business logic)?

For interesting constraints and named formulas, we compare the features that we find to the provided metadata, and also to the hints associated with each query. We note that in an industrial application, no hint is available, so one must use few-shot examples or relevant query features.

# 5.4.1 Primary key - Foreign key / Equality Constraints

Using the dev queries as the query log, we extract all constraints of the form R.f = S.g. This yields 109 distinct constraints for dev 9-23 and 107 for dev 6-24. Comparing the constraint lists, there are 3 new constraints and 5 missing constraints in dev 6-24 as compared to dev 9-23, reflecting a revision of the gold standard queries.
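A minimal sketch of this extraction, assuming table-qualified field names rather than the full alias resolution a real extractor would perform on the AST:

```python
# Toy sketch: collect distinct R.f = S.g equality constraints from query
# text with a regex, normalizing so the two orderings count as one.
import re

JOIN_RE = re.compile(r"(\w+)\.(\w+)\s*=\s*(\w+)\.(\w+)")

def equality_constraints(sql):
    constraints = set()
    for r, f, s, g in JOIN_RE.findall(sql):
        # Sort the two sides so R.f = S.g and S.g = R.f normalize together.
        constraints.add(tuple(sorted([(r.lower(), f.lower()),
                                      (s.lower(), g.lower())])))
    return constraints

log = ["SELECT * FROM cards JOIN sets ON cards.setcode = sets.code",
       "SELECT name FROM sets JOIN cards ON sets.code = cards.setcode"]
distinct = set().union(*(equality_constraints(q) for q in log))
print(len(distinct))  # 1: both orderings normalize to the same constraint
```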
An example of an added constraint is card_games.legalities.format = card_games.maxbanned.format, and an example of a missing constraint is financial.disp.account_id = financial.trans.account_id.

Extracting the PK-FK constraints from the BIRD documentation required some manual intervention. We use the SQLite schemas of the SQLite databases to extract the foreign keys using PRAGMA foreign_key_list. On processing the results, we found that three constraints were improperly specified (in both versions):

debit_card_specializing.yearmonth.CustomerID = debit_card_specializing.customers.CustomerID
european_football_2.Match.league_id = european_football_2.League.id
european_football_2.Match.country_id = european_football_2.Country.id

The problem is that the referenced field was not specified in the SQLite schema, so the SQLite pragma wasn't able to identify it and the field on the right-hand side was empty. We filled these in by hand. The result was 109 constraints for dev 9-23 and 104 for dev 6-24 (the unused database card_games_2 in dev 9-23 was removed for dev 6-24). We normalized the two sets of constraints and took set differences to find how the results differed.

From the SQLite dev 9-23 constraints, there were 29 equality constraints that were never used, and from dev 6-24 there are 24 unused constraints. Examples include:

card_games_2.foreign_data.uuid = card_games_2.cards.uuid
european_football_2.Match.away_player_11 = european_football_2.Player.player_api_id
toxicology.connected.atom_id = toxicology.atom.atom_id

From the constraints extracted from log analysis, there are 29 constraints detected in dev 9-23 queries but not documented in the SQLite schema, and for dev 6-24 there are 27.
Examples include:

debit_card_specializing.gasstations.gasstationid = debit_card_specializing.transactions_1k.gasstationid
card_games.cards.setcode = card_games.sets.code
thrombosis_prediction.examination.id = thrombosis_prediction.laboratory.id

Of the discovered constraints, 4 equated different field names (e.g. cards.setcode = sets.code). If the hand-constructed constraints are excluded, there are 32 / 30 discovered constraints, of which 6 equate different field names. So, 27% / 25% of the field-to-field equality constraints actually used in a dev query are either undocumented or require hand extraction. While dev 6-24 has slightly fewer undocumented equality constraints, a large fraction of these constraints are undocumented in the schema.

Many of the missing equality constraints either have the same field name on both sides, or both fields end in "id". So the LLM can make a good guess about how to perform a join. However, there are two problems. First, industrial databases (especially those grown by accretion) often do not have such convenient naming systems. Second, it's better to know the join path than to need to guess it. As an experiment, we counted the number of fields per table that end in "id". We found that there are an average of 1.8 id fields per table, and a maximum of 15 (in card_games.cards). 20 tables have no id field, and 23 have one id field. So guessing an equality predicate in BIRD can still present challenges.

# 5.4.2 Other Join Constraints

Many more join constraints can be found (examples from dev 6-24). For example, there are two multi-field joins:

Q782: colour.id = superhero.eye_colour_id And colour.id = superhero.hair_colour_id
Q1016: results.raceid = pitstops.raceid And results.driverid = pitstops.driverid

None of these 4 individual predicates are listed in the SQLite schema, although the hint in Q782 suggests the join.
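Both schema-side steps from Section 5.4.1 can be reproduced on a toy SQLite schema: reading declared foreign keys with PRAGMA foreign_key_list (including the failure mode where an omitted referenced field comes back empty) and the "id field" census via PRAGMA table_info. The tables below are illustrative stand-ins, not the actual BIRD databases:

```python
# Toy reproduction of the schema-side extractions from Section 5.4.1.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE league  (id INTEGER PRIMARY KEY);
CREATE TABLE country (id INTEGER PRIMARY KEY);
CREATE TABLE game (
    id INTEGER PRIMARY KEY,
    league_id  INTEGER REFERENCES league(id),  -- referenced field given
    country_id INTEGER REFERENCES country      -- referenced field omitted
);
""")

# (1) Foreign keys: the "to" column is None when the referenced field was
# not declared, and must be filled in by hand.
fks = {}
for _id, _seq, table, src, dst, *_ in conn.execute("PRAGMA foreign_key_list(game)"):
    fks[src] = (table, dst)
print(fks)

# (2) The id-field census: count per-table columns whose names end in "id".
id_counts = {}
for (table,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    id_counts[table] = sum(c.lower().endswith("id") for c in cols)
print(id_counts)
```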
We can also find joins that require computation:

Q234: bond.molecule_id||'_1' = connected.atom_id And bond.molecule_id||'_2' = connected.atom_id2 And bond.bond_id = connected.bond_id

The metadata for the bond.molecule_id field does mention the encoding standard. However, no join path from bond.molecule_id to connected.atom_id is explicitly documented.

There are also many multi-table join predicates:

Q146: card.disp_id = disp.disp_id And client.client_id = disp.client_id And disp.account_id = loan.account_id

These constraints indicate join patterns, which might go through an otherwise unrelated linkage table.

When a table has multiple date fields, understanding which should be used in a constraint can be obscure. For example, california_schools.schools has three date fields: opendate, closedate, and lastupdate. Opendate is constrained in 6 queries (4, 27, 39, 47, 66, 87) and closedate is constrained in three (27, 67, 68), while lastupdate is never constrained.

Finally, we can find date constraints:

strftime('%Y', california_schools.schools.opendate) = '1980'
california_schools.schools.opendate > '2000-01-01'

# 5.4.3 Interesting Formulas

In our experience, queries contain many named non-trivial formulas (capturing useful business logic). The queries in BIRD generally don't name elements in the select clause, but there are some.
For example, Q221 has a pair of named formulas in the Select clause, which we paraphrase as:

atom_id1 is computed from substr(bond.bond_id, 1, 7)
atom_id2 is computed from (bond.molecule_id)||(substr(bond.bond_id, 8, 2))

The field metadata contains the following:

bond_id: unique id representing bonds; TRxxx_A1_A2: TRxxx refers to which molecule, A1 and A2 refer to which atoms
molecule_id: identifying the molecule in which the bond appears

So there is an indication of the structure of bond_id, but the formulas used in Q221 are not clear from the text. Some other examples are:

Q1499: monthlyconsumption is computed from (sum(debit_card_specializing.yearmonth.consumption))/(12)
Q215: iodine_nums is computed from count(DISTINCT case When ('i' = toxicology.atom.element) Then toxicology.atom.atom_id Else Null End)
Q222: diff_car_notcar is computed from (count(case When ('+' = toxicology.molecule.label) Then toxicology.molecule.molecule_id Else Null End)) - (count(case When ('-' = toxicology.molecule.label) Then toxicology.molecule.molecule_id Else Null End))

In each of these cases, the "evidence" hint suggests the use of these formulas. However, in practice these hints are not available, but they can be extracted by query log analysis.

# 5.4.4 Summary

We have shown that many useful query features can be found in the collection of dev queries, considered as a query log. For example, 25% of the equality joins that are used in at least one query are not documented in the SQLite schema. However, the usefulness of this information is limited in the context of the BIRD benchmark, which is fairly simple and readable. Field names are highly suggestive of e.g. join paths, and few formulas are explicitly named in the gold queries using AS (correctness checking is done using field positions, not names). The hints provided with the question generally contain the query features to be used in the generated query – a very unrealistic situation.
A better test would list query features and require the submission to do query feature linking.

# 6 SQL to Text

The use of question/SQL pairs as few-shot examples has been shown to be an effective means of boosting text-to-SQL performance [NZZR+23][GWLS+23][PL22], and has been used in our BIRD submission, as described in Section 4. However, generating these pairs creates a very large workload for the SMEs, who must think up questions and write the corresponding SQL queries. For the Spider benchmark [YZYY+18], students spent 1,000 hours creating 5,700 question/answer pairs. The BIRD benchmark does not list the number of work hours, but states that 11 contributors and three assessors were employed in developing 12,000 question/answer pairs, and that $98,000 was spent.

If one has access to query logs, then one can sample a selection of these queries and use an LLM to generate a corresponding question from them. The procedure we use is:

Input: SQL query Q, full schema S
1. Analyze Q to determine the set of fields F referenced in Q.
2. Extract a focused schema FS by selecting from S the fields in F.
3. With the context of the focused schema FS, ask the LLM to create a long question and a short question from query Q.

That is, by query analysis, one can obtain perfect schema linking [TPCM+24] – helping to make SQL-to-text an easier problem than text-to-SQL. Query logs can contain a very large number of distinct queries, but only an interesting and representative sample should be selected for few-shot examples.
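Steps 1 and 2 of this procedure can be sketched as follows. The schema content and the literal string-matching rule are illustrative assumptions; the paper performs full query analysis rather than text matching:

```python
# Toy sketch of building a focused schema for SQL-to-text: keep only the
# fields the query references, then hand schema + query to the LLM.
FULL_SCHEMA = {  # hypothetical: table -> {field: description}
    "schools":   {"CDSCode": "school code", "County": "county name",
                  "OpenDate": "date the school opened"},
    "satscores": {"cds": "school code", "NumTstTakr": "number of test takers"},
}

def focused_schema(sql, schema):
    # Keep a field if "table.field" literally appears in the query text.
    return {table: {f: d for f, d in fields.items() if f"{table}.{f}" in sql}
            for table, fields in schema.items()}

q = ("SELECT AVG(satscores.NumTstTakr) FROM satscores JOIN schools "
     "ON satscores.cds = schools.CDSCode WHERE schools.County = 'Fresno'")
fs = focused_schema(q, FULL_SCHEMA)
print(fs["schools"].keys())  # OpenDate is dropped: it is not referenced
```

The focused schema plus the SQL would then go into the LLM prompt requesting a long and a short question.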
For example, one might be focused on generating SQL for a subset of tables, or one might choose examples which use an interesting formula such as "(bond.molecule_id)||(substr(bond.bond_id, 8, 2))". We have found the following procedure to be useful:

1. For each query Q in query log L, analyze Q, extract its features, and associate them with Q (e.g. in a JSON file).
2. Provide the summary of extracted features to an SME, who identifies collections of query features to match against. Aggregate these collections into a set of feature sets FS.
3. For each feature set fs in FS, collect the set of queries Qfs that contain fs.
4. Return the union of all Qfs.

# 6.1 Experiments

Our experiments consist of generating text from SQL, and then comparing the generated questions to the supplied question and SQL. We used BIRD Minidev15 as the source of the question/SQL pairs. However, there are 500 question/SQL pairs in Minidev, the grading process is very time consuming, and we have limited human resources. So we further selected every 6th entry (i.e., the 1st, 7th, 13th, etc. entry) for use in evaluation.

The question/SQL pairs were human-generated in the text-to-SQL direction. For our experiments, we treat the corpus as having been generated in the opposite direction: given an SQL query, what is it asking? We generate both the long and short question, and use the best result for grading – which we consider reasonable both for query explanation and for query retrieval from a vector database for few-shot selection. Our grading is subjective, so we list the SQL, supplied question, generated questions, and their ratings in the Appendix. Our ratings are:

Bad: The question is not related to the SQL.
Bad+: The question is related to the SQL, but misses important details.
Good-: The question generally matches the SQL, but misses some important detail.
Good: The question matches the SQL and is on par with the supplied question in terms of accuracy and readability.
Good+: The question matches the SQL and is better than the supplied question.

We note that Good+ includes entries in which the supplied question and the SQL are not in agreement (problems with question/SQL pairs have been previously noted [TPCM+24]). We use four different kinds of metadata:

Base: no metadata.
Bird: the benchmark-supplied metadata (i.e., from the *.csv files in the database_description subdirectories).
LLM: the short LLM summary generated from the field profile.
Fused: both the Bird and the LLM metadata.

Our results, across all difficulty levels, are shown in Table 3. The results sliced on difficulty level (simple/moderate/challenging) are similar.

Table 3. SQL-to-text evaluation.

Even with no metadata, the SQL-to-text performance is surprisingly good – almost as good as human annotation. With fused metadata, the generated questions are significantly better than the human annotation. We conclude that by using the techniques described in Section 2, query extraction from a query log plus SQL-to-text generation is an effective technique for generating few-shot examples.

The human-generated question is worse in 13 out of 83 sample questions (16%). This is likely due to the tedious and exhausting nature of generating 12,000+ question/SQL pairs. In the remainder of this section, we explore some examples. We start with the example where the generated question is rated "bad+" (question_id 93).
The SQL is

SELECT COUNT(T1.client_id) FROM client AS T1 INNER JOIN district AS T2 ON T1.district_id = T2.district_id WHERE T1.gender = 'M' AND T2.A3 = 'north Bohemia' AND T2.A11 > 8000

The bad+ question, generated using LLM metadata, is: How many men from districts in north Bohemia with populations over 8000 are clients? Field A11 refers to salary, not population, so the generated question is significantly off. A good- base-generated question is: How many men from the North Bohemia district with an A11 value over 8000 are clients? This question also misses the meaning of field A11, but does not try to guess the meaning. The question with fused metadata indicates a salary of 8000 or more, which is correct.

An example entry where all of the generated questions are good is question_id 710. The SQL is

SELECT COUNT(T1.id) FROM comments AS T1 INNER JOIN posts AS T2 ON T1.PostId = T2.Id WHERE T2.CommentCount = 1 AND T2.Score = 0

while the supplied question is: In posts with 1 comment, how many of the comments have 0 score? The generated questions are similar, though the one generated with the fused metadata is more accurate: How many comments are linked to posts with only one comment and no upvotes?

In 8 of the 83 total questions, all of the generated questions are rated "good", but we added a clarification note, generally indicating that the supplied question is vague or has poor grammar.
An example is question_id 39, with SQL:

SELECT AVG(T1.NumTstTakr) FROM satscores AS T1 INNER JOIN schools AS T2 ON T1.cds = T2.CDSCode WHERE strftime('%Y', T2.OpenDate) = '1980' AND T2.County = 'Fresno'

and supplied question: What is the average number of test takers from Fresno schools that opened between 1/1/1980 and 12/31/1980? Even the question generated with the base metadata is more accurate: What's the average SAT participation for schools opened in 1980 in Fresno County?

Turning to generated questions labeled good+, an example where all generated questions are good+ is question_id 112, with SQL

SELECT T1.A2 FROM district AS T1 INNER JOIN client AS T2 ON T1.district_id = T2.district_id WHERE T2.birth_date = '1976-01-29' AND T2.gender = 'F'

The supplied question is: For the female client who was born in 1976/1/29, which district did she opened her account? However, there is nothing in the SQL which suggests that there is only one match. A more accurate question is: Which districts have female clients born on 29th January 1976?

An example of a supplied question which can be considered accurate, but which has poor grammar, is question_id 862: For the Bahrain Grand Prix in 2007, how many drivers not finished the game?

An example where the supplied question does not match the SQL is question_id 231, with SQL

SELECT T.bond_type FROM ( SELECT T1.bond_type, COUNT(T1.molecule_id) FROM bond AS T1 WHERE T1.molecule_id = 'TR010' GROUP BY T1.bond_type ORDER BY COUNT(T1.molecule_id) DESC LIMIT 1 ) AS T

and supplied question: Which bond type accounted for the majority of the bonds found in molecule TR010 and state whether or not this molecule is carcinogenic? The select list has no indication of carcinogenic status.
# 7 Related Work

Text-to-SQL code generation has attracted a great deal of attention in recent years [CLHY+22][QHWY+22][YWDD17][XLS17][ZM96]. Development has been accelerated by the release of standardized benchmarks: WikiSQL16 [ZXS17], Spider17 [YZYY+18], and BIRD18 [JHQY+23]. In this discussion, all scores and rankings are as of the time of writing (Jan. 2025).

In [GWLS+23], the authors put a corpus of Spider questions in a vector database and extracted few-shot examples by similarity to the posed question. This technique plus a tuned LLM got them the #1 spot on the Spider leaderboard (currently #2). The CHESS19 submission to BIRD [TPCM+24] uses LLM-driven schema linking techniques: column filtering, table filtering, and column selection. This, plus query revision and query candidate selection, got them the #1 spot (currently #7). The BIRD submission from Distillery [MAJM24] takes a different approach: they argue that newer LLMs remove the need to do schema linking. They achieved the #1 spot on BIRD, and are currently at #6. IBM also achieved the #1 spot on BIRD, and is currently at the #4 spot. They have not posted a paper, but marketing materials20 state that they use "extractive schema-linking" and a tuned version of their Granite LLM. CHASE-SQL [PLSC+24] has the current #2 spot on the BIRD benchmark. Their paper describes a number of interesting techniques, including methods for generating diverse query candidates. They use query plan analysis to determine the "shape" of the query, and try to get candidates with a variety of query shapes. They also use a tuned version of the Google Gemini LLM.
Our submission and the Google submission have been trading the #1 spot on the BIRD benchmark, both recently beaten by Alibaba. XiYan-SQL [GLLS+24] has the current #1 spot on the BIRD benchmark. Among the techniques used is generating candidate SQL using several different models, and then using a fine-tuned selector model to pick the result.

We explore the use of well-known, and also newer, metadata extraction techniques for text-to-SQL generation. Database profiling [AGN15] has a large literature. Query parsing and feature extraction is at the core of query planning, and has been used to e.g. find join paths in large, complex databases [YPS09]. A newer technique is the conversion of SQL to text questions. This technique has been used for candidate selection by CHASE-SQL [PLSC+24].
Large Language Models (LLMs) have recently become sophisticated enough to automate many tasks, ranging from pattern finding to writing assistance to code generation. In this paper, we examine text-to-SQL generation. We have observed from decades of experience that the most difficult part of query development lies in understanding the database contents. These experiences inform the direction of our research. Text-to-SQL benchmarks such as Spider and BIRD contain extensive metadata that is generally not available in practice. Human-generated metadata requires the use of expensive Subject Matter Experts (SMEs), who are often not fully aware of many aspects of their databases. In this paper, we explore techniques for automatic metadata extraction to enable text-to-SQL generation. We explore the use of two standard and one newer metadata extraction techniques: profiling, query log analysis, and SQL-to-text generation using an LLM. We use the BIRD benchmark [JHQY+23] to evaluate the effectiveness of these techniques. BIRD does not provide query logs on their test database, so we prepared a submission that uses profiling alone, and does not use any specially tuned model (we used GPT-4o). From Sept 1 to Sept 23, 2024, and Nov 11 through Nov 23, 2024, we achieved the highest score both with and without using the "oracle" information provided with the question set. We regained the number 1 spot on Mar 11, 2025, and are still at #1 at the time of writing (May 2025).
# 1 Introduction Large Language Models (LLMs) and Vision Language Models (VLMs) have achieved remarkable success across a wide range of natural language processing tasks (Dong et al., 2024; Sheng et al., 2024; Zhu et al., 2023b; Yin et al., 2024), demonstrating strong capabilities in reasoning, knowledge retrieval, and text generation. Despite these advancements, the knowledge encapsulated within LLMs and VLMs usually becomes static after training, which makes it difficult for them to update and correct errors, incorporate new knowledge, or refine specific behaviors in real-world applications (Liang et al., 2024). To efficiently alleviate this problem, model editing has emerged as a promising solution (Wang et al., 2023; Hartvigsen et al., 2023; Chen et al., 2024a; Jiang et al., 2024), allowing targeted modifications to a model's predictions while preserving overall performance and minimizing unintended changes to unrelated inputs. Previous model editing methods have developed many efficient strategies to address this problem. For instance, methods like ROME (Meng et al., 2022a), MEND (Mitchell et al., 2021), and MEMIT (Meng et al., 2022b) achieve knowledge edits by applying offsets to specific model parameters, while memory-based methods like SERAC (Mitchell et al., 2022), GLAME (Zhang et al., 2024a), and WISE (Wang et al., 2024) leverage external memory for targeted edits. However, most current model editing methods are designed for single-modal models and cannot easily adapt to the increasing significance of multi-modal models (Liang et al., 2024), such as VLMs. As pointed out by Cheng et al. (2023), who constructed a multimodal editing benchmark, traditional single-modal editing methods perform poorly in multimodal scenarios. This challenge arises because errors in multimodal models frequently stem from the complex interactions between different modalities, such as the intertwined influence of both visual and textual modalities in VLMs (Liang et al., 2024).
Therefore, to effectively address the complexities of multi-modal model editing, it is crucial to first fully explore the specific role of each modality and their key layers. In spite of the increasing prominence of VLMs, existing research on editing these multimodal models is still limited (Cheng et al., 2023). Focusing on editing in VLMs, VisEdit (Chen et al., 2024b) is the first work to identify key layers in the visual modality and edit them to update the model’s knowledge. However, while achieving good performance, they concentrate solely on the visual modality and completely neglect the textual modality, which fails to fully recognize the roles of different modalities in VLMs. To gain a deep understanding of the roles different modalities play in knowledge editing for VLMs and to offer valuable insights for designing multi-modal knowledge editing methods, we begin by conducting a thorough empirical analysis of each modality’s contribution to the model’s overall performance. Specifically, through carefully designed experiments, we explore and analyze from the following two perspectives: • Layer-wise and modality-wise importance. In order to investigate the sensitivity of visual and textual modalities, we first analyze the attention scores of different modalities at various layers, which reflect the relative importance of each modality at each layer. The results reveal that, within the same layer, textual modalities receive higher attention scores than visual modalities. Combining this with experiments on the impact of perturbations to each modality at different layers on the final performance, we conclude that the importance of modalities differs both across layers and within the same layer. Thus, it is crucial to address each modality separately at every layer during knowledge editing. • Trade-Off between reliability and locality. 
By directly editing visual and textual modalities, we find that they can easily adapt to injected modifications for updating new knowledge (i.e., improving the Rel. performance), but at the same time face a greater risk of disrupting existing knowledge (i.e., decreasing the Loc. metric), which aligns with Gekhman et al. (2024). Consequently, while considering the importance of different modalities, we must also be mindful of the trade-off between reliability and locality when editing specific modalities. Based on these two findings, we introduce DualEdit, a novel editing approach for VLMs that takes into account the distinct effects of textual and visual modalities during editing. Unlike conventional editing methods that treat multimodal inputs uniformly, DualEdit performs modality-aware modifications by applying edits at different layers for visual and textual features. Specifically, we design a gating module that selectively decides whether to edit a model's response using a learnable adapter. By applying this mechanism separately to the textual and visual modalities, we enable modality-specific editing in VLMs while ensuring a well-balanced trade-off between reliability and locality. Our main contributions can be summarized as follows: • We are the first to conduct comprehensive experiments that decouple the analysis of different modalities in VLM knowledge editing, examining their impact and identifying their relative and absolute importance both across layers and within individual layers. • Based on our findings, we further propose DualEdit, a modality-aware editing approach that operates on some key layers, ensuring a well-balanced trade-off between reliability and locality by incorporating the designed gating module.
• We conduct comprehensive quantitative and ablation experiments across multiple VLM backbones and benchmark datasets, demonstrating the superiority of DualEdit over state-of-the-art VLM editing baselines as well as adapted LLM editing methods.

# 2 Analysis of Different Modalities in VLMs

Most existing model editing methods primarily focus on single-modality LLMs. However, the lack of a comprehensive analysis across different modalities has limited the development of effective model editing techniques for VLMs. To better understand these effects, we randomly selected 1,000 samples and conducted exploratory experiments across various layers using the widely adopted LLaVA-V1.5 backbone. The results can be summarized in the following two key findings:

Figure 1: The average attention scores of textual representations and visual representations across different layers in LLaVA-V1.5 (Liu et al., 2023a). The dashed line illustrates the mean attention scores of the three visual representations receiving the highest attention values in each layer. The right panel also shows the attention scores of a sample at layers 10, 18, and 25.

Figure 2: KL divergence between original and perturbed output logits when adding Gaussian noise to (a) visual, (b) textual, and (c) all representations at different layers with different noise variance $\sigma$.

Finding-1: Textual and visual representations in VLMs demonstrate varying levels of importance within the same layer, and the significance of each modality changes across layers. As shown in Figure 1, the average attention scores of the textual modality are significantly higher than those of the visual modality, with only a few visual tokens (e.g., the top 3 visual tokens) exhibiting high scores, as indicated by the dashed line. Moreover, the attention scores of both modalities peak at different shallow layers.
These observations highlight the distinct treatment of textual and visual modalities across the layers of VLMs, underscoring the necessity of handling the modalities separately. On the other hand, to explore the absolute importance of each modality at different layers of VLMs, we introduce varying levels of Gaussian noise into specific layers of a given modality and analyze the resulting changes in the output, as depicted in Figure 2 (a)(b). Perturbations at different layers affect the textual and visual modalities differently, indicating that the importance of each layer varies across modalities. In addition to applying perturbations separately, we also present the results of perturbing both modalities simultaneously, as shown in Figure 2 (c). This reveals a distinct pattern compared to the previous cases, suggesting that the influence of the two modalities on the output is not simply additive. This further underscores the need to carefully consider the interactions between modalities when designing editing methods for VLMs. In summary, combining the results of Figures 1 and 2 yields Finding 1: the important layers of the two modalities differ.

Finding-2: In VLMs, editing either the textual or the visual modality can improve performance on edited samples, but it significantly impacts performance on original samples (i.e., locality performance). As illustrated in Table 1, we perform editing at different textual layers (i.e., T-Layer) and visual layers (i.e., V-Layer).
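The perturbation probe behind Figure 2 can be sketched as follows: add zero-mean Gaussian noise with variance $\sigma^2$ to one modality's token positions at a chosen layer, then measure the KL divergence between the original and perturbed output distributions. This is a simplified NumPy stand-in, not the authors' code; the toy linear `readout` replaces the remaining VLM layers, and all names are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two categorical distributions (eps-smoothed).
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def perturb_and_measure(forward_from_layer, hidden, token_mask, sigma, rng):
    """Add N(0, sigma^2) noise to the masked token positions of a layer's
    hidden states and measure the KL shift in the output distribution.

    forward_from_layer: callable mapping hidden states -> output logits
    hidden:     (seq_len, d) hidden states at the probed layer
    token_mask: boolean (seq_len,) selecting visual OR textual positions
    """
    clean_logits = forward_from_layer(hidden)
    noisy = hidden.copy()
    noisy[token_mask] += rng.normal(0.0, sigma, size=noisy[token_mask].shape)
    noisy_logits = forward_from_layer(noisy)
    return kl_divergence(softmax(clean_logits), softmax(noisy_logits))

# Toy stand-in for the rest of the network: a fixed linear readout.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))
readout = lambda h: h.mean(axis=0) @ W

hidden = rng.normal(size=(12, 8))
visual_mask = np.zeros(12, dtype=bool)
visual_mask[:6] = True  # pretend the first 6 tokens are the visual ones
kl = perturb_and_measure(readout, hidden, visual_mask, sigma=0.5, rng=rng)
```

Sweeping `sigma` and the probed layer for each `token_mask` reproduces the shape of the Figure 2 experiment.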
The results show that, compared to the visual modality, the textual modality adapts more easily to edited samples, achieving higher Rel. performance. However, modifying either the textual or the visual modality at different layers negatively impacts the model's original performance, leading to lower Loc. performance. This underscores the importance of designing effective editing strategies that minimize unintended disruptions while ensuring successful knowledge updates.

Table 1: Ablations of editing different modalities at key layers. For details of the metrics, see Sec. 3.1.

# 3 The Proposed DualEdit

As analyzed in Sec. 2, it is essential to edit the different modalities independently at their key layers while maintaining an optimal balance between reliability and locality. In this section, we first introduce the base setting and evaluation metrics in Sec. 3.1, followed by the framework and training details of DualEdit in Sec. 3.2, aiming to address the issues discussed earlier.

# 3.1 Preliminary

Consider a vision-language model $f_{\theta}$, where $\theta$ denotes the model parameters. The VLM $f_{\theta}$ maps a visual input $\mathbf{x}_v$ and a textual input $\mathbf{x}_t$ to the original output $\mathbf{o}$ (i.e., $f_{\theta}(\mathbf{x}_v, \mathbf{x}_t) = \mathbf{o}$). For a given edit sample $(\mathbf{x}_v^e, \mathbf{x}_t^e, \mathbf{o}^e)$ where $f_{\theta}(\mathbf{x}_v^e, \mathbf{x}_t^e) \neq \mathbf{o}^e$, the VLM editor $M_E(\cdot)$ generates the edited model $f_{\theta_e} = M_E(f_{\theta}, \mathbf{x}_v^e, \mathbf{x}_t^e, \mathbf{o}^e)$ such that $f_{\theta_e}(\mathbf{x}_v^e, \mathbf{x}_t^e) = \mathbf{o}^e$.
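As a toy illustration of this setup (pure-Python stand-ins, not the paper's models; `base_model`, `make_editor`, and the string inputs are all hypothetical), an editor overrides the model's output on the edit sample while leaving unrelated inputs untouched:

```python
def make_editor(edits):
    """Return an editor M_E that wraps a model with an edit lookup table.

    edits: dict mapping (x_v, x_t) input pairs to the desired output o^e.
    """
    def apply(model):
        def edited(x_v, x_t):
            # Serve the injected answer on edit inputs, else defer to the model.
            return edits.get((x_v, x_t), model(x_v, x_t))
        return edited
    return apply

# Stand-in VLM: deterministically echoes its inputs.
base_model = lambda x_v, x_t: f"{x_v}|{x_t}"

# One edit sample (x_v^e, x_t^e, o^e) with base_model(x_v^e, x_t^e) != o^e.
edit_sample = ("img_eiffel", "Where is this tower?", "Paris")
M_E = make_editor({edit_sample[:2]: edit_sample[2]})
edited_model = M_E(base_model)

rel_ok = edited_model("img_eiffel", "Where is this tower?") == "Paris"
loc_ok = edited_model("img_cat", "What animal?") == base_model("img_cat", "What animal?")
```

A real editor modifies representations or weights rather than keeping a lookup table, but the contract is the same: change the answer on edit inputs, preserve it elsewhere.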
Meanwhile, a good editor $M_E(\cdot)$ should also satisfy several criteria (Chen et al., 2024b; Cheng et al., 2023).

Reliability (Rel.) evaluates the accuracy of the post-edit model $f_{\theta_e}$ on edit samples:

$$ \mathbb{E}_{(\mathbf{x}_v^e, \mathbf{x}_t^e, \mathbf{o}^e) \sim D_e} \, \mathbb{I}\{ f_{\theta_e}(\mathbf{x}_v^e, \mathbf{x}_t^e) = \mathbf{o}^e \}, $$

where $D_e$ refers to the set of edit samples, and $\mathbb{I}\{\cdot\}$ is the indicator function.

Generality (Gen.) requires the edited model $f_{\theta_e}$ to accurately predict the correct output for inputs relevant to the edited samples. In the context of VLMs, generality can be further categorized into Textual Generality and Visual Generality. Textual Generality (T-Gen.) ensures that the model editing is robust to semantically equivalent variations in textual input, i.e., that the edited model responds correctly to paraphrases or variations of a specific textual input. Similarly, Visual Generality (V-Gen.) ensures that the model editing remains effective across semantically equivalent variations in visual input. These can be individually expressed as:

$$ \begin{array}{rl} (\text{T-Gen.}) & \mathbb{E}_{(\mathbf{x}_v^e, \mathbf{x}_t^e, \mathbf{o}^e) \sim D_e} \, \mathbb{E}_{\hat{\mathbf{x}}_t^g \sim \mathcal{N}(\mathbf{x}_t^e)} \, \mathbb{I}\{ f_{\theta_e}(\mathbf{x}_v^e, \hat{\mathbf{x}}_t^g) = \mathbf{o}^e \}, \\ (\text{V-Gen.}) & \mathbb{E}_{(\mathbf{x}_v^e, \mathbf{x}_t^e, \mathbf{o}^e) \sim D_e} \, \mathbb{E}_{\hat{\mathbf{x}}_v^g \sim \mathcal{N}(\mathbf{x}_v^e)} \, \mathbb{I}\{ f_{\theta_e}(\hat{\mathbf{x}}_v^g, \mathbf{x}_t^e) = \mathbf{o}^e \}, \end{array} $$

where $\mathcal{N}(\cdot)$ denotes the neighborhood of the respective edit inputs.

Locality (Loc.) ensures that the edited model $f_{\theta_e}$ maintains consistency with the original model $f_{\theta}$ on samples unrelated to the edited samples. Similar to Generality in VLMs, Locality also consists of two items, Textual Locality (T-Loc.) and Multimodal Locality (M-Loc.). T-Loc. measures whether the edited model produces the same output as the original model when handling text-only samples unrelated to the edited samples, while M-Loc. reflects whether the model retains the original outputs when both visual and textual inputs are unrelated to the edited samples, as shown in the following equations:

Figure 3: The framework of the proposed DualEdit method. Two modality-specific learnable adapters are inserted at designated layers. The gating module uses the representations of the last tokens to jointly control the activation of both adapters.

$$ \begin{array}{rl} (\text{T-Loc.}) & \mathbb{E}_{(\mathbf{x}_v^e, \mathbf{x}_t^e, \mathbf{o}^e) \sim D_e} \, \mathbb{E}_{(\hat{\mathbf{x}}_t^l, \hat{\mathbf{o}}^l) \sim \mathcal{U}(\mathbf{x}_t^e)} \, \mathbb{I}\{ f_{\theta_e}(\hat{\mathbf{x}}_t^l) = f_{\theta}(\hat{\mathbf{x}}_t^l) = \hat{\mathbf{o}}^l \}, \\ (\text{M-Loc.}) & \mathbb{E}_{(\mathbf{x}_v^e, \mathbf{x}_t^e, \mathbf{o}^e) \sim D_e} \, \mathbb{E}_{(\hat{\mathbf{x}}_v^l, \hat{\mathbf{x}}_t^l, \hat{\mathbf{o}}^l) \sim \mathcal{U}(\mathbf{x}_v^e, \mathbf{x}_t^e)} \, \mathbb{I}\{ f_{\theta_e}(\hat{\mathbf{x}}_v^l, \hat{\mathbf{x}}_t^l) = f_{\theta}(\hat{\mathbf{x}}_v^l, \hat{\mathbf{x}}_t^l) = \hat{\mathbf{o}}^l \}, \end{array} $$

where $\mathcal{U}(\cdot)$ denotes sample sets unrelated to the edited sample.

# 3.2 The Designed DualEdit Algorithm

Based on the findings discussed in Sec. 2, we propose the DualEdit algorithm, which enables editing of both modalities at their respective key layers. To achieve a balanced trade-off between Rel. and Loc. performance, we introduce a gating mechanism that leverages the cosine similarity of last-token representations. The framework is illustrated in Figure 3.

Gating mechanism. As shown in Figure 3, we design a gating module that determines whether to apply the editing operation.
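Such a gate can be sketched as a threshold test on the cosine similarity of last-token representations, the criterion adopted below (a NumPy sketch; the threshold value and all names are illustrative assumptions, not the paper's tuned settings):

```python
import numpy as np

def cosine_gate(h_edit, h_input, tau=0.8):
    """Route to the edited path iff cos(h_edit, h_input) exceeds tau.

    h_edit, h_input: last-token hidden states (d,) of the edit sample
    and the incoming sample at the gated layer.
    """
    sim = float(h_edit @ h_input /
                (np.linalg.norm(h_edit) * np.linalg.norm(h_input)))
    return sim, sim > tau

rng = np.random.default_rng(1)
h_e = rng.normal(size=64)

# A near-duplicate of the edit representation should open the gate...
sim_same, use_edit = cosine_gate(h_e, h_e + 0.01 * rng.normal(size=64))
# ...while an unrelated representation should (typically) keep it closed.
sim_other, use_base = cosine_gate(h_e, rng.normal(size=64))
```

When the gate opens, the sample goes through the learnable adapter; otherwise the original forward pass is used unchanged.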
If the gating module can effectively distinguish edit samples, it operates similarly to a Mixture of Experts (Cai et al., 2024): edit samples are processed through a dedicated learnable adapter (i.e., the edited model $f_{\theta_e}$) to enhance reliability, while other samples are handled by the original model $f_{\theta}$, ensuring high Loc. performance. The key question in designing a gating mechanism is how to accurately determine whether an input sample is an edit sample. Inspired by studies (BehnamGhader et al., 2024; Sheng et al., 2024) showing that the latent space of LLMs inherently contains rich feature information, we leverage this property in our gating mechanism by directly computing the similarity between the edit sample and the input sample in the latent space. Specifically, for a given layer, let the last-token representations of the edit sample and the input sample be $\mathbf{h}^e, \mathbf{h}^i \in \mathbb{R}^d$, where $d$ is the dimension of the hidden space. Simple yet effective, our gate calculates the cosine similarity of the two last-token representations and compares it against a threshold:

$$ \mathrm{Sim} = \frac{\mathbf{h}^e \cdot \mathbf{h}^i}{\| \mathbf{h}^e \|_2 \cdot \| \mathbf{h}^i \|_2}. $$

As shown in Figure 4 (c), our gating strategy, which computes the cosine similarity of last-token representations, can effectively differentiate between editing examples (indicated by Gen. performance) and original input samples (reflected by Loc. performance) by applying an appropriate threshold $\tau$. To further verify the effectiveness of the proposed gating module based on the last-token representation, we also plot histograms of the similarity of regular textual and visual representations.
As shown in Figure 4 (a) and (b), compared to our last-token-based approach, the regular textual and visual representations fail to effectively distinguish between the examples.

Figure 4: Gating module analysis: cosine similarity distribution of different types of representations across different samples. (a), (b), and (c) are similarity distributions calculated based on visual representations, textual representations, and last-token representations, respectively.

Learnable adapters across key layers in different modalities. By introducing the gating method, we can effectively improve Loc. performance. To further enhance the editing performance, we investigate the impact on reliability at different layers, as shown in Figure 5. The findings indicate that the 16th textual layer and the 19th visual layer deliver the best outcomes. Therefore, in our implementation, we focus on layer $i=16$ and layer $j=19$, as outlined in the flowchart of Figure 3. As shown in Figure 3, the learnable adapter takes two inputs: (1) the $k$-th layer representations $\mathbf{h}_e^k$ obtained by feeding the edit sample $(\mathbf{x}_v^e, \mathbf{x}_t^e, \mathbf{o}^e)$ into the model $f_{\theta}$; and (2) the $k$-th layer textual representation $\mathbf{h}_t^k$ or visual representation $\mathbf{h}_v^k$. The output is the edited $k$-th layer textual or visual representation. For simplicity, we utilize cross-attention as the learnable adapter, which introduces separate learnable weights for the different modalities.
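A minimal NumPy sketch of such a cross-attention adapter, under illustrative token counts and hidden size (not the authors' implementation; the formal equation is given next in the text):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_adapter(h_mod, h_edit, W1, W2, W3):
    """h_mod:  (n, d) textual or visual representations at layer k
       h_edit: (m, d) layer-k representations of the edit sample
       W1, W2, W3: (d, d) learnable projections for this modality
       Returns the edited (n, d) representations."""
    scores = (h_mod @ W1) @ (h_edit @ W2).T   # (n, m) attention logits
    attn = softmax(scores, axis=-1)           # attend over the edit tokens
    return attn @ (h_edit @ W3)               # (n, d) edited representations

rng = np.random.default_rng(0)
d = 16
W1, W2, W3 = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))
h_t = rng.normal(size=(5, d))   # 5 textual tokens
h_e = rng.normal(size=(7, d))   # 7 edit-sample tokens
h_t_edited = cross_attention_adapter(h_t, h_e, W1, W2, W3)
```

A separate `(W1, W2, W3)` triple would be instantiated for the visual branch at its own layer.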
The concrete equation can be expressed as

$$ \hat{\mathbf{h}}_{t/v}^k = \mathrm{Softmax}\left( \mathbf{h}_{t/v}^k \mathbf{W}_1^{t/v} \cdot ( \mathbf{h}_e^k \mathbf{W}_2^{t/v} )^{\top} \right) \cdot \mathbf{h}_e^k \mathbf{W}_3^{t/v}, $$

where $\hat{\mathbf{h}}_t^k$ and $\hat{\mathbf{h}}_v^k$ denote the edited $k$-th layer textual and visual representations, respectively, and $\mathbf{W}_i^{t/v}, i \in \{1, 2, 3\}$, are the learnable textual or visual projection matrices.

Figure 5: Visualization of the sensitivity of learnable adapters across different layers.

Loss functions. Following Cheng et al. (2023), the training loss $\ell$ of the proposed DualEdit includes three components: a reliability loss $\ell_{\mathrm{rel}}$, a generality loss $\ell_{\mathrm{gen}}$, and a locality loss $\ell_{\mathrm{loc}}$:

$$ \ell = \ell_{\mathrm{rel}} + \ell_{\mathrm{gen}} + \ell_{\mathrm{loc}}, $$

where the details of the individual terms $\ell_{\mathrm{rel}}$, $\ell_{\mathrm{gen}}$, and $\ell_{\mathrm{loc}}$ are listed in Appendix A.

# 4 Experiments

# 4.1 Experimental Settings

We evaluate on the E-VQA and E-IC datasets (Cheng et al., 2023; Chen et al., 2024b). For VLM backbones, we select BLIP2-OPT-2.7B (Li et al., 2023) and LLaVA-V1.5-7B (Liu et al., 2023a), which differ in architecture and training: BLIP2 uses a Q-Former and contrastive pretraining, while LLaVA adopts a projection layer and instruction tuning. As VisEdit is the only editing method tailored for VLMs, we compare against it alongside adapted LLM editors including FT-V, FT-L, KE, IKE, SERAC, MEND, TP, and LTE.

Table 2: Main results of BLIP2-OPT and LLaVA-V1.5 on the E-VQA and E-IC datasets.
The best result is marked in bold.

# 4.2 Main Results

Overall analysis. According to Table 2, DualEdit demonstrates superior performance compared to other editors across both datasets. DualEdit achieves the highest average scores on both E-VQA and E-IC across the different backbones, outperforming all single-modality editors. These results validate the significance of modality-specific modifications.

Superior M-Loc. performance. DualEdit excels particularly on the M-Loc. metric, achieving near-perfect scores (99.89% and 100.00% with BLIP2-OPT; 99.61% and 99.74% with LLaVA-V1.5) across both datasets and backbones. This exceptional M-Loc. performance demonstrates the effectiveness of our gating mechanism based on last-token cosine similarity, which enables precise protection of locality-related samples. The approach allows VLMs to identify which content should remain unchanged while still implementing the necessary edits, effectively solving a key challenge in model editing. In contrast, methods like MEND show extremely poor M-Loc. performance, indicating widespread unintended modifications that compromise model integrity.

# 4.3 Ablation Analysis

We conduct a comprehensive ablation study to evaluate the effectiveness of the different components of DualEdit, with results presented in Table 3. The ablation experiments examine: (1) the impact of editing at different layers (early, middle, and late) in the model, (2) the effectiveness of dual-path editing compared to single-path editing, and (3) the influence of our proposed gating mechanism.

Impact of editing at different layers. Rows 1-5 in Table 3 show the performance when applying edits at different depths without the gating mechanism. The results demonstrate that editing performance varies significantly depending on which layers are targeted.
When edits are applied to very early layers (T-Layer=1, V-Layer=2), average performance is limited (74.91% on E-VQA and 79.24% on E-IC). Performance improves as we move toward middle layers (T-Layer=5, V-Layer=5) and reaches its peak when targeting intermediate layers (T-Layer=10, V-Layer=10), achieving 88.58% on E-VQA and 88.25% on E-IC. Interestingly, editing very deep layers (T-Layer=30, V-Layer=30) leads to a substantial drop in performance (60.97% on E-VQA and 70.93% on E-IC). Row 6 shows our empirically determined optimal layer configuration (T-Layer=16, V-Layer=19) without gating, which achieves strong performance of 91.36% on E-VQA and 96.38% on E-IC.

Dual-Editing vs. Single-Editing. Rows 7-8 examine the effectiveness of our dual-editing approach compared to single-editing alternatives, both with the gating mechanism enabled. In row 7, we apply edits only to the textual representations, while in row 8, we apply edits only to the visual representations. Our full DualEdit approach (row 9), which simultaneously edits both textual and visual representations (T-Layer=16, V-Layer=19) with gating, consistently outperforms the single-editing approaches across both datasets, achieving 98.82% on E-VQA and 97.97% on E-IC. This demonstrates the complementary benefits of modifying both modality paths, allowing the model to integrate edited knowledge more comprehensively.
Table 3: Ablation results of BLIP2-OPT on the E-VQA and E-IC datasets, exploring different layers and the impact of our proposed gating module. The best result is marked in bold.

Effectiveness of gating mechanism. The most significant improvement comes from our proposed gating mechanism, as evidenced by comparing rows 6 and 9, which have identical layer configurations (T-Layer=16, V-Layer=19) but differ in whether gating is enabled. Without gating (row 6), the model achieves 91.36% on E-VQA and 81.77% on E-IC. With gating (row 9), performance jumps significantly to 98.82% on E-VQA (+7.46%) and 97.97% on E-IC (+16.20%). The benefits of gating are particularly evident in the locality metrics (T-Loc. and M-Loc.), which measure the model's ability to preserve unrelated knowledge. Without gating, M-Loc. scores are 72.02% on E-VQA and 92.05% on E-IC. With gating enabled, these scores improve dramatically to 99.89% and 100.00%, respectively, approaching perfect preservation of unrelated knowledge. This substantial improvement confirms that our gating mechanism successfully prevents unwanted modifications to non-target knowledge while allowing precise edits of target knowledge.

Overall, our ablation study highlights three key insights. First, layer selection is critical: editing intermediate layers yields the best performance, while very early or very deep layers lead to suboptimal results. Second, modality-specific edits are complementary: editing only one modality benefits certain tasks, but dual-editing consistently achieves the best overall performance. Third, and most notably, our gating mechanism plays a crucial role in balancing knowledge integration with preservation.
# 5 Related Works

# 5.1 Vision-Language Models

Earlier vision-language models, exemplified by CLIP (Radford et al., 2021), establish alignment between textual and visual information within shared hidden spaces through contrastive learning on extensive datasets. These models demonstrate remarkable generalization across diverse tasks with minimal adaptation requirements. Building upon these foundations, VLMs have successfully bridged the gap between visual and linguistic modalities, achieving exceptional results in various applications including in-context predictions (Liu et al., 2023b; Salewski et al., 2023), multi-image understanding, and chain-of-thought reasoning (Driess et al., 2023; Yang et al., 2023). The landscape of large VLMs encompasses diverse architectural designs that reflect varying approaches to multimodal integration and processing (Wadekar et al., 2024). Token Fusion represents an architectural paradigm where tokenized input modalities are fed directly into the model's input stage. This approach employs either a decoder-only transformer or an encoder-decoder style transformer as the multimodal integration mechanism, as exemplified by models like LaVIT (Jin et al., 2024). Deep Fusion approaches incorporate visual information into the internal layers of the LLM through cross-attention mechanisms. While models like PaLI-X (Chen et al., 2023) and Flamingo (Alayrac et al., 2022) implement standard cross-attention layers, alternatives such as LLaMA-Adapter (Zhang et al., 2024b) utilize custom-designed components to process visual representations before cross-attention operations. Early Projection Fusion represents another prevalent strategy where non-tokenized visual inputs undergo processing before being introduced at the model's input rather than within internal layers. Various connection modules facilitate this integration, including linear projection layers, Q-Formers with linear projections, perceiver resamplers, and custom learnable components.
Notable implementations of this approach include Qwen2.5-VL (Bai et al., 2025), LLaVA-V1.5 (Liu et al., 2024), and MiniGPT4 (Zhu et al., 2023a), which have demonstrated superior capabilities in multimodal understanding and generation. This paper primarily focuses on examining and enhancing editing capabilities within this class of vision-language models.

# 5.2 Model Editing

Model editing in large language models (LLMs) lies at the intersection of continual learning (CL) (Wu et al., 2024a;b; 2025) and parameter-efficient fine-tuning (PEFT) (Si et al., 2025), aiming to incorporate new factual knowledge or behavioral changes into models with minimal forgetting and computational cost. From the CL perspective, model editing seeks to update model knowledge while mitigating catastrophic forgetting (Lopez-Paz & Ranzato, 2017), whereas from the PEFT angle, it emphasizes modifying only a small subset of parameters to achieve targeted updates (Hu et al., 2022). Recent advances in model editing for LLMs can be broadly classified into three paradigms: parameter modification, module-based augmentation, and prefix-based instruction injection. Parameter modification methods aim to directly adjust the internal weights of a model in response to specific edit instructions. Among these, Knowledge Editor (KE) (De Cao et al., 2021) and MEND (Mitchell et al., 2021) adopt a learning-based approach, where an auxiliary network is trained to generate weight deltas based on edit signals. On the other hand, ROME (Meng et al., 2022a) and its MEMIT extension (Meng et al., 2022b) leverage tools from causal inference. A different line of work explores module-based augmentation to achieve editing without overwriting existing model knowledge. For instance, SERAC (Mitchell et al., 2022) trains an edit-aware counterfactual model that only activates under relevant conditions. TP (Huang et al., 2023) introduces an editable "knowledge neuron" that can be trained separately from the base model.
GRACE (Hartvigsen et al., 2023) routes inputs to a target editing output in latent space, conditional on their similarity crossing a predefined threshold. MELO (Yu et al., 2024) builds upon GRACE by retrieving and injecting editing matrices related to the input query, thereby enabling efficient updates to the model’s predictions. In contrast, prefix-tuning approaches avoid changing model weights by manipulating the context seen during inference. IKE (Zheng et al., 2023) applies in-context learning to guide the model’s output based on a few-shot edited prompt, while LTE (Jiang et al., 2024) explicitly trains the model to follow editing instructions. RECIPE (Chen et al., 2024a) further advances this line by introducing a learnable prompt generator that finds the shortest continuous prefix capable of inducing the desired model behavior. Although VLMs have gained increasing attention, research on editing such models remains relatively underexplored (Cheng et al., 2023). VisEdit (Chen et al., 2024b) represents the first attempt to identify and edit key layers within the visual modality. However, different from our DualEdit, it concentrates solely on the visual modality and completely neglects the textual modality, which fails to recognize the influence of different modalities in VLMs.
Model editing aims to efficiently update a pre-trained model's knowledge without the need for time-consuming full retraining. While existing pioneering editing methods achieve promising results, they primarily focus on editing single-modal language models (LLMs). However, for vision-language models (VLMs), which involve multiple modalities, the role and impact of each modality on editing performance remain largely unexplored. To address this gap, we explore the impact of textual and visual modalities on model editing and find that: (1) textual and visual representations reach peak sensitivity at different layers, reflecting their varying importance; and (2) editing both modalities can efficiently update knowledge, but this comes at the cost of compromising the model's original capabilities. Based on our findings, we propose DualEdit, an editor that modifies both textual and visual modalities at their respective key layers. Additionally, we introduce a gating module within the more sensitive textual modality, allowing DualEdit to efficiently update new knowledge while preserving the model's original information. We evaluate DualEdit across multiple VLM backbones and benchmark datasets, demonstrating its superiority over state-of-the-art VLM editing baselines as well as adapted LLM editing methods on different evaluation metrics.
[ "cs.CV", "cs.AI" ]
# 1 Introduction

The concept of a foundation model was conceived based on substantial evidence suggesting that a pre-trained Transformer can solve many natural language tasks when properly adapted [3, 71]. Moreover, when scaled with sufficient parameters, these Transformers elicit emergent abilities that are not present in smaller models [76]. A key to sustaining the performance of these Transformers is the concurrent scaling of the training data and the model size [38, 33]. Informally speaking, the current large language models (LLMs) are trained on at least the entire Internet. We ask if a similar foundation model exists for graphs [52]. Such a model is pre-trained with and adapted to various kinds of graphs. Unlike natural language data that are sequential, graph data pose unique challenges for Transformers. First, graph data are non-sequential. Invariance and equivariance to node permutations require forgoing the positional encoding that is crucial for sequence Transformers; the graph structure will need to be encoded differently (such as using structural encoding or attention bias). Second, graph sizes vary, with node counts ranging from fewer than ten to several billion in practice. Meanwhile, the context length of a typical LLM is on the order of thousands, causing difficulties in batching graphs. While the nodes of small graphs can be chained together to fill a sequence, a large graph will need to be partitioned such that its nodes form multiple sequences. In this case, the connection between different sequences is lost. Our objective is to develop a methodology toward building a foundation model for graphs, with the following desiderata: D1: The model should be pre-trained with a broad inclusion of graph datasets in a self-supervised manner without the influence of task labels. D2: The model can be adapted to any downstream tasks and transferred to graphs in a new domain.
D3: The model can handle graphs of varying sizes, ranging from small molecules to large networks. D4: The model can capture long-range interactions when they are important for the task at hand. These desiderata require a holistic design of the model architecture, input and output formats, training objectives, and downstream adaptation strategies. We begin by interrogating the strengths and limitations of the two most widely used graph deep learning architectures: Graph Neural Networks (GNNs) [44, 40, 28, 24, 72, 81, 21, 11] and Graph Transformers (GTs) [17, 41, 83, 78, 61, 7, 51]. The computation pattern of GNNs is neighborhood aggregation; as a result, the main challenges are the uniform network depth imposed on all graphs and the handling of long-range interactions, if they exist [18]. Experience suggests that GNNs for different domains vary substantially in depth. While one may attempt to take the maximum depth, which also resolves the long-range challenge, on other occasions deep GNNs suffer from the over-smoothing problem [43]. Mitigation approaches exist, such as residual connections [11] and edge removals [63], but many layers aggravate the neighborhood explosion problem because typical neighborhood sampling methods [28, 9] will still create an enormous neighborhood. In the pursuit of a foundation model, recent approaches [45, 75] sample small-hop neighborhoods for large graphs so that the GNN depth can be more flexible, but these neighborhoods still miss long-range information. On the other hand, GTs are a principled approach to incorporating this information because of the pairwise attention, but they face a different challenge: scalability. For a graph with $n$ nodes, it typically takes $O(n^2)$ time to compute the attention scores.
Much effort has been devoted to scaling GTs to large $n$, such as (i) using kernel approximation of the softmax attention [77, 15], (ii) taking a hierarchical approach [89], and (iii) changing the input from a sequence of all graph nodes to a sequence of sampled neighbors of one node [87, 85, 10]. However, approaches (i) and (ii) still have trouble with batching when graphs have varying sizes, and approach (iii) weakens the incorporation of long-range information. In this work, we propose the Random Walk-Based Pre-Trained Transformer (RWPT). The main idea behind this model is the use of multiple random walks to represent one node and the retention of the Transformer backbone for its foundational nature in representation learning. RWPT differs from usual GTs in that the Transformer input is neither the whole graph nor a sequence of sampled neighbors. Instead, multiple random walks are taken from a root node, forming ordered sequences including near neighbors and faraway nodes. Random walks are a revival of the early node embedding methods prior to GNNs, such as DeepWalk [58] and node2vec [25], which permit favoring depth in addition to breadth when considering node co-occurrences. They are key to our holistic design that meets the four aforementioned desiderata: Random walks resolve the batching problem of GTs when training graphs have drastically different sizes (D3); they encode a larger receptive field and better cope with long-range interactions [8], compared with small-hop neighborhood sampling (D4); and they allow the pre-training with any accumulation of graph datasets for scaling (D1) as well as the separation of self-supervised pre-training and downstream adaptation (D2), following closely the practice of LLMs for natural language data.
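To make this idea concrete, here is a minimal sketch in plain Python of chaining multiple node2vec-style biased walks from a root node into one ordered input sequence. The helper names (`biased_walk`, `build_sequence`) and the toy graph are ours, not from the paper's code; the precise sequence formulation appears in Section 3.1.

```python
import random

def biased_walk(adj, root, length, p=1.0, q=1.0, rng=None):
    """node2vec-style biased walk: after moving u -> v, the unnormalized
    weight of stepping to w is 1/p if w == u (return), 1 if w is also a
    neighbor of u, and 1/q otherwise (explore). p = q = 1 recovers the
    usual unbiased walk."""
    rng = rng or random.Random(0)
    walk = [root, rng.choice(adj[root])]
    while len(walk) < length + 1:
        u, v = walk[-2], walk[-1]
        weights = [1 / p if w == u else (1.0 if w in adj[u] else 1 / q)
                   for w in adj[v]]
        walk.append(rng.choices(adj[v], weights=weights, k=1)[0])
    return walk[1:]                      # the root itself is kept separately

def build_sequence(adj, root, k, length, virtual_token, p=1.0, q=1.0):
    """[virtual token, root, walk_1, ..., walk_k]: one Transformer input
    per root node, regardless of the size of the graph."""
    rng = random.Random(42)
    seq = [virtual_token, root]
    for _ in range(k):
        seq.extend(biased_walk(adj, root, length, p, q, rng))
    return seq

# Toy graph: a 4-cycle 0-1-2-3-0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
s = build_sequence(adj, root=0, k=3, length=4, virtual_token="<cycle-data>")
print(len(s))  # 2 + k * length = 14 tokens
```

Whatever the graph size, each root yields a sequence of fixed length $2 + k\ell$, which is what makes batching graphs of drastically different sizes straightforward (D3).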
Moreover, we theoretically show that random walks with shortest-path distance positional encoding can reconstruct any ball (the ego-graph of a node induced by its $r$-hop neighborhood) and distinguish two balls up to isomorphism. Hence, they are expressive in node representation learning. Our contributions are as follows:

• We posit four desiderata for a graph foundation model. The first two are parallel to natural language models while the other two are unique to graph-structured data.
• We propose RWPT, which meets these requirements and addresses the limitations of current graph deep learning models (GNNs and GTs). Central to RWPT is the use of multiple random walks to represent a node, which subsequently invokes the accompanying designs of positional encoding, attention masking, and training loss for the Transformer.
• We conduct a theoretical analysis on random walks and show their expressivity in distinguishing node neighborhoods, justifying their use for representation learning.
• We conduct comprehensive experiments to demonstrate the effectiveness of RWPT compared with (semi-)supervised and self-supervised methods, highlighting its transferability and adaptivity in cross-domain and cross-task uses.

Figure 1: Pipeline of RWPT. Each node is represented by multiple random walks formulated into one positionally encoded sequence, augmented with domain information. The sequence is processed by a Transformer with a per-walk attention mask. The node representation is extracted from the output. The model is fine-tuned through training a prediction head dedicated to the downstream task.

# 2 Related work

This work emerges during the rapid development of foundation models for natural language processing (synonymously called LLMs). Efforts to build a unified model for graph data are occurring concurrently. Our approach is most relevant to two recent methods: OFA [45] and GFT [75].
OFA [45] trains a single GNN across multiple datasets and tasks in a supervised fashion. For node- and link-level tasks, the model operates on $\ell$-hop neighborhood subgraphs rather than the entire graph. It remains unclear whether this framework can be adapted for pre-training without task-specific supervision. GFT [75] pre-trains a vocabulary of embedding vectors derived from quantized $\ell$-hop neighborhoods, which are encoded using a GNN. During inference, it assigns the nearest pre-trained embedding to a new neighborhood subgraph for downstream tasks. Its reliance on small $\ell$-hop neighborhoods limits its ability to capture long-range dependencies. To address this, we utilize random walks that can reach faraway contexts. See Section A for a more comprehensive discussion of the related work, including GTs, pre-training GNNs, foundation model concepts in GNNs, and LLM-based methods to solve graph problems.

# 3 Methodology

In this section, we elaborate on the details of the proposed RWPT model. It is a holistic design that involves not only the neural architecture but also the data representation and training. Three aspects are highlighted: the formulation and encoding of the input sequence, the attention mechanism, and the pre-training loss. Figure 1 shows the feedforward flow of RWPT during inference.

# 3.1 Random-walk representation of nodes

Random walks form ordered sequences, which are natural inputs to the Transformer. Concretely, assume that $i = i_0$ is the node of interest (i.e., the root node). We run $k$ independent random walks of length $\ell$ starting from $i_0$ and denote by $i_s^r$ the node at step $s = 1, \ldots, \ell$ and walk index $r = 1, \ldots, k$.
We concatenate all nodes, walk by walk, to form a sequence
$$ \mathrm{seq}(i) = [i_0, i_1^1, \ldots, i_\ell^1, i_1^2, \ldots, i_\ell^2, \ldots, i_1^k, \ldots, i_\ell^k]. $$
One may consider that the union of the random walks forms a sampling of the large-hop neighborhood of $i$. This sampled neighborhood differs from a common one resulting from neighborhood sampling [28, 9] in that a larger walk length $\ell$ is permissible, whereas the number of hops in neighborhood sampling is limited because the number of sampled nodes is exponential in the hop count. Note that a node in $\mathrm{seq}(i)$ may appear multiple times (because it is sampled by different walks), but its embedding in different appearances may be different because of edge features. For a foundation model, we intend to pre-train it with multiple datasets. Then, to distinguish graphs from different domains, we introduce a virtual token $v$ for each dataset. We prepend this token to the sequence (1); that is, the full sequence for a node $i$ input to RWPT is
$$ s(i) = [v, \mathrm{seq}(i)]. $$

# 3.2 Sequence encoding

Like a standard Transformer whose input sequence is encoded by token embedding and positional encoding, we define our input node features and positional encoding for RWPT. Additionally, we incorporate edge features by formulating them into the sequence and adding them to the input of each Transformer block. Unified input node features. One of the technical difficulties in designing a graph foundation model is unifying graphs from different domains with varying node features and dimensions. LLMs offer a perfect mechanism to mitigate this difficulty [13, 29, 45]. Nearly all graphs from practical problems are equipped with semantic meanings for their nodes, edges, and even themselves as a whole.
For example, the nodes in a molecular graph are atoms and the nodes in a citation graph are papers. They all can be described by text. Hence, we use an LLM to process the textual information, $t_i$, of a node $i$, yielding the node feature vector
$$ \mathbf{x}_i = \mathrm{LLM}(t_i). $$
An advantage of obtaining node features in this manner is that the LLM, as a text foundation model, unifies knowledge of different domains and offers the same output dimension for nodes of any domain. Even for non-textual graphs, we can leverage an LLM to summarize structural features and generate descriptions [45, 75]. For example, a node comes with local degree profiles and centrality measures, from which the textual description can be “a node with medium degree and high betweenness centrality; its value on the Fiedler vector belongs to the 90th percentile.” Similarly, for an edge $ij$ of a graph and the virtual token $v$ of a graph dataset, let their textual descriptions be $t_{ij}$ and $t_v$. Then, we obtain the edge feature and virtual-node feature
$$ \mathbf{e}_{ij} = \mathrm{LLM}(t_{ij}), \qquad \mathbf{v} = \mathrm{LLM}(t_v), $$
respectively. The virtual-node feature $\mathbf{v}$ will be used together with node features in the input sequence; the use of the edge feature $\mathbf{e}_{ij}$ will be elaborated later.

Positional encoding. We enhance the integration of the graph structure by leveraging positional encodings based on shortest-path (SP) distances. Specifically, for a node $i_s^r$ in the rooted sequence $\mathrm{seq}(i_0)$, its position is defined as the SP distance from the root $i_0$ to $i_s^r$, which is at most $s$. Additionally, for the virtual token $v$ and the root token $i_0$, the position is 0. The positional encoding is used in the subsequent theoretical analysis of random walks.
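A small sketch of this positional encoding in plain Python (helper names and the toy graph are ours): the SP distance from the root is obtained by BFS and assigned to every token of the sequence $[v, i_0, \text{walk}_1, \ldots, \text{walk}_k]$.

```python
from collections import deque

def sp_distances(adj, root):
    """Exact shortest-path distances from the root via BFS."""
    dist = {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def positions(adj, root, walks):
    """Positional encoding for [v, i0, walk_1, ..., walk_k]: the virtual
    token and the root get position 0; every walk node gets its SP distance
    from the root, which is always <= its step index in the walk."""
    dist = sp_distances(adj, root)
    pos = [0, 0]
    for walk in walks:
        pos.extend(dist[node] for node in walk)
    return pos

# Toy graph: a 4-cycle 0-1-2-3-0; two walks of length 3 from root 0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
walks = [[1, 2, 3], [3, 2, 1]]
print(positions(adj, 0, walks))  # [0, 0, 1, 2, 1, 1, 2, 1]
```

The full BFS here is for illustration; as the paper notes, shortest paths can also be read off the walks themselves, since a walk segment whose positions increase by exactly 1 certifies a shortest path.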
We can straightforwardly extract SPs from the walks: if the positions of nodes on a walk segment from $u$ to $v$ are monotonically increasing by 1, then this segment must be a shortest path between $u$ and $v$.

Incorporating edge features. Edge features can be used to enhance the encoding of a node sequence. For the $r$th walk $i_0, i_1^r, \ldots, i_\ell^r$, we form an edge-feature sequence
$$ \mathbf{E}^r = [\mathbf{e}_{i_0, i_1^r}, \mathbf{e}_{i_1^r, i_2^r}, \ldots, \mathbf{e}_{i_{\ell-1}^r, i_\ell^r}], $$
and we concatenate the $k$ walks and prepend two zero vectors to form the full sequence
$$ \mathbf{E} = [\mathbf{0}, \mathbf{0}, \mathbf{E}^1, \mathbf{E}^2, \ldots, \mathbf{E}^k], $$
which has the same length as the Transformer input. Rather than merely adding $\mathbf{E}$ to the Transformer input, we project $\mathbf{E}$ and add it to the input of each Transformer block, similar to how the edge information is processed in every layer of a GNN [35]. Specifically, let the $t$th block of a standard Transformer [71, 47] be $\mathbf{H}^{(t+1)} = \mathrm{Block}(\mathbf{H}^{(t)})$. We introduce a block-dependent projector (a linear layer), $\mathrm{Proj}$, and modify the block to be $\mathbf{H}^{(t+1)} = \mathrm{Block}(\mathbf{H}^{(t)} + \mathrm{Proj}^{(t)}(\mathbf{E}))$. The projectors are mainly used to map data from the LLM output dimension to the Transformer embedding dimension, similar to the one for node features.

# 3.3 Per-walk attention mask

The query-key-value (QKV) attention mechanism typically comes with a mask on the QK product (attention) matrix before softmax. The effect of this mask is to ignore some value (V) items when linearly combining them.
For example, the upper triangular part of the attention matrix is masked out in a causal Transformer, because a token depends on the past but not the future. Figure 2: Attention mask. Figure 3: Context learning. In our case, we use a per-walk attention mask to improve scalability. See Figure 2 for an illustration. Each random walk attends to itself but not to the other walks. The virtual token and the root node still attend to all the tokens. Clearly, with such a mask, the number of nonzeros is reduced by nearly a factor of $k$ and so is the computational cost.

# 3.4 Self-supervised pre-training: context prediction

Let the Transformer output a sequence of vectors corresponding to the input sequence $s(i)$ in (2):
$$ [\mathbf{h}_v, \mathbf{h}_0, \mathbf{h}_1^1, \ldots, \mathbf{h}_\ell^1, \mathbf{h}_1^2, \ldots, \mathbf{h}_\ell^2, \ldots, \mathbf{h}_1^k, \ldots, \mathbf{h}_\ell^k]. $$
The vector $\mathbf{h}_0$ is the representation of the root node $i$. The pre-training makes use of the output sequence (7) but not task labels. In contrast to the next-token prediction in LLMs, self-supervised learning of GNNs is more often done in a contrastive manner. We follow the infomax principle [32, 27] and develop a contrastive loss suitable for random walks. The idea is to define increasingly large context windows for the root node $i$ (see Figure 3):
$$ \mathbf{h}_{ctx}^{(j)} = \frac{1}{jk} \sum_{s=1}^{j} \sum_{r=1}^{k} \mathbf{h}_s^r. $$
Here, $\mathbf{h}_{ctx}^{(j)}$ is the representation of the $j$th context window, which includes the nodes up to the $j$th step of all random walks.
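These two ingredients, the per-walk mask and the context windows of Eq. (8), can be sketched in numpy as follows. The paper only states that walks attend to themselves while the virtual token and root attend to everything; whether walk tokens also attend to the virtual/root columns is our assumption, exposed as a flag.

```python
import numpy as np

def per_walk_mask(k, ell, walks_see_root=True):
    """Boolean attention mask (True = may attend) for the sequence
    [v, i0, walk_1, ..., walk_k]. Rows 0-1 (virtual token and root)
    attend to all tokens; each walk attends within itself. Whether walk
    tokens also see the v/root columns is our assumption (toggleable)."""
    n = 2 + k * ell
    mask = np.zeros((n, n), dtype=bool)
    mask[:2, :] = True                       # v and root attend to everything
    for r in range(k):
        lo = 2 + r * ell
        mask[lo:lo + ell, lo:lo + ell] = True   # walk attends to itself
        if walks_see_root:
            mask[lo:lo + ell, :2] = True        # assumed: walks see v, i0
    return mask

def context_windows(h_walks):
    """Eq. (8): h_ctx^(j) = (1/(jk)) sum_{s<=j} sum_r h_s^r,
    for token outputs h_walks of shape (k, ell, d)."""
    k, ell, _ = h_walks.shape
    csum = np.cumsum(h_walks, axis=1).sum(axis=0)       # (ell, d)
    return csum / (k * np.arange(1, ell + 1))[:, None]

m = per_walk_mask(k=2, ell=3)
print(int(m.sum()))   # 2*8 + 2*(3*3 + 3*2) = 46 allowed pairs out of 64

h = np.ones((2, 3, 4))   # constant outputs -> every context window is all-ones
ctx = context_windows(h)
print(ctx.shape)         # (3, 4): one context vector per window size j
```

The mask keeps roughly $k\ell^2$ of the $(k\ell)^2$ walk-token attention pairs, which is the factor-of-$k$ reduction stated above.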
We maximize the mutual information between the root node $i$ and its context windows, while minimizing that between $i$ and the context windows of other root nodes in the batch. We use an MLP to parameterize the mutual information, leading to the sample loss formula
$$ \mathcal{L}_{sample} = -\frac{1}{\ell} \sum_{j=1}^{\ell} \Big( \log \mathrm{MLP}\big(\mathbf{h}_0 \odot \mathbf{h}_{ctx}^{(j)}\big) + \sum_{other} \log\big(1 - \mathrm{MLP}\big(\mathbf{h}_0 \odot \mathbf{h}_{other}^{(j)}\big)\big) \Big). $$
This loss encourages the representation of the root node to be close to its neighborhood but different from other neighborhoods.

Dataset mixture. Because RWPT is pre-trained with multiple graph datasets, which may vary in size, we introduce a multiplier $\alpha_D$ for each dataset $D$ of size $n_D$. The batch training consists of multiple passes, each of which iterates over all datasets. For each dataset, a total of $\alpha_D n_D$ nodes are randomly sampled to form batches.

# 3.5 Downstream adaptation

With the heavy lifting done in pre-training, a downstream use of the model only trains a task head while freezing the pre-trained model. For example, for node classification, the task head is an MLP that takes the node representation as input and outputs the class logits; for link prediction, the input to the MLP task head is a concatenation of the two node representations; and for graph classification, the input is the aggregation of node representations (see Section B). Note that the downstream adaptation can be done for many tasks even though they may seem “open-ended,” as long as there is a classification/regression formulation.
For example, to predict the shortest path between $u$ and $v$, it suffices to use a prediction head that, given any node $w$ on the graph, classifies the concatenation of the $u$, $v$, and $w$ embeddings as on the path or off the path [1]. Following the practice of LLMs, other adaptation approaches are exploitable, such as fine-tuning all the parameters of the pre-trained model, fine-tuning only the node and edge feature projectors, and using low-rank adaptors or other adaptors, but a full comparison of the different approaches is out of the scope of this work. We find that a simple task head works well.

# 4 Theoretical analysis

The main idea of this work is to use random walks for node representation learning. These walks, together with the SP distance positional encoding, allow reconstructing neighborhoods and distinguishing them. Hence, they are well justified to be part of a foundation model. We formalize these arguments in what follows and provide the accompanying proofs in Section C. A graph is denoted as $G(V, E)$ with the node set $V$ and edge set $E$.

Definition 4.1. The Shortest Path Distance Oracle (SP oracle) is a function $\psi: V \times V \to \mathbb{R}$ that takes a pair of nodes as input and returns the shortest path distance between these two nodes.

Definition 4.2. Denote by $B_{u,r} \subset G$ a ball centered at node $u$ with radius $r$. Formally, it is a subgraph of $G$ with the node set $V(B_{u,r}) := \{v \in V \mid \psi(u, v) \leq r\}$ and the edge set $E(B_{u,r}) := \{e \in E \mid V(e) \subset V(B_{u,r})\}$, where $\psi(\cdot, \cdot)$ is the SP oracle.

Definition 4.3 ([25]).
A Biased Random Walk with parameters $p$ and $q$ is a random walk such that after transitioning from node $u$ to node $v$, the unnormalized probability to return to $u$ is $1/p$, that to jump to a direct neighbor of $u$ is 1, and that to jump to other neighbors of $v$ is $1/q$. The usual (unbiased) random walk is recovered with $p = q = 1$. The following theorem states that a ball can be reconstructed by a sufficient number of random walks together with the SP distance positional encoding.

Theorem 4.4. Assume that the graph $G$ is undirected and connected, with a bounded degree $d$. Let a ball $B_{u,r}$ with center $u$ and radius $r$ have $n$ nodes. The ball can be fully reconstructed given the sequence in Eq. (1) together with the SP distance of every node from the root $u = i_0$, if the number of walks $k = \Theta(\max(n\tilde{r}, n^2/r^2))$ and the walk length $\ell = \Theta(r)$.

The above theorem derives the complexities of $k$ and $\ell$ by using a biased random walk. The ball can also be reconstructed by using unbiased walks, at the cost of larger $k$ and $\ell$. Empirically, small $k$ and $\ell$ already deliver competitive downstream performance (see the following section for details). Because random walks can reconstruct a ball, two balls can be distinguished with a graph kernel.

Theorem 4.5. There exists a positive definite kernel function that distinguishes non-isomorphic balls centered at different nodes of $G$.

# 5 Experiments

In this section, we present a comprehensive set of experiments to evaluate the effectiveness of RWPT as a graph foundation model, highlighting transferability in cross-domain and cross-task settings.

# 5.1 Experiment setup

Datasets. We use 14 datasets from diverse domains and for varying tasks.
They include those supporting node-level tasks (Cora, CiteSeer, PubMed, Arxiv, WikiCS, and Products, where the first four are citation networks and the next two are a Web graph and a co-purchase graph, respectively); those supporting link-level tasks (WN18RR and FB15k237, which are knowledge graphs); and those supporting graph-level tasks (HIV, PCBA, ChEMBL, and Tox21, which are molecules). We also include Peptides-func and Peptides-struct (also molecules) from the Long Range Graph Benchmark [18]. Altogether, these datasets contain 25M nodes and 31M edges. See Section E for more details.

Baselines. We compare RWPT with ten methods of diverse nature, including PRODIGY [37], OFA [45], and GFT [75], which are foundation-model-style methods; GCN [40], GIN [81], and GAT [72], which are GNNs trained in a (semi-)supervised manner; and DGI [73], BGRL [68], GraphMAE [34], and GIANT [14], which are self-supervised training methods. Note that OFA differs from foundation models in the usual sense in that it does not have a label-free pre-training stage, but we categorize it together with PRODIGY and GFT to distinguish it from the remaining methods that train a different model for each dataset.

Table 1: Performance comparison of (semi-)supervised, self-supervised, and foundation-model methods for various domains and tasks. Bold and underline highlight the best and second-best performance. Baseline results are replicated from [75].

Settings. Our Transformer backbone follows a standard architecture like GPT-2 [60], with modifications introduced in Section 3 and hyperparameters detailed in Section F. We utilize Llama2-7b-hf [70] for feature extraction; the prompts can be found in Section G. All experiments are conducted with 2x NVIDIA Tesla V100 16GB GPUs, Intel Xeon Platinum 8260 CPUs (32 cores), 50GiB RAM, and 1TB user storage space. Each run is repeated ten times with random seeds.
# 5.2 Cross-domain and cross-task performance

We first compare RWPT with a wide array of methods across domains and tasks (node-level, link-level, and graph-level). These methods include (semi-)supervised, self-supervised, and foundation-model methods. The first two classes of methods are not directly comparable to RWPT, because they are trained on an individual dataset; however, they set an expectation of the performance. Following GFT, we pre-train RWPT with ten datasets (see Section F for their batching ratios) and fine-tune it on each task. From Table 1, we see that foundation models are uniformly better than individually trained models. Moreover, among foundation models, RWPT outperforms GFT on seven out of eight tasks.

# 5.3 Transferability

While the outperformance of RWPT over individually trained models is not surprising, we investigate its transferability in depth. For this, we consider transfer learning and few-shot learning.

Transfer learning (dataset- and domain-level transfer). This learning paradigm tests a pre-trained model on an unseen dataset or domain. To this end, we pre-train RWPT with limited datasets and evaluate it on others. Specifically, we use either Arxiv, FB15k237, ChEMBL, or a combination of them and compare with the earlier use of ten datasets. The three datasets represent different domains: citation, knowledge graph, and molecules. The results are summarized in Table 2 and Figure 4. We see that using three datasets to pre-train achieves performance very close to using ten. This suggests that a small number of representative datasets is already competitive for pre-training, demonstrating the transferability of RWPT to new datasets. More encouragingly, a model pre-trained with only one dataset achieves competitive performance as well: this performance is significantly higher than that of the individually trained models on nearly all tasks.
For example, the model pre-trained on Arxiv (citation; node classification) performs the second best on WN18RR (knowledge graph; link prediction), with the attained accuracy more than 10 points higher than individually trained models. To better highlight domain transfer, we plot in Figure 4 the aggregated results of OOD (out-of-domain) and ID (in-domain) performance. Corresponding to Table 2, OOD means, for example, pre-training with Arxiv and testing on knowledge graphs and molecules, while ID means testing on citation networks or web graphs. The fact that the OOD performance is so close to ID, compared with the best of baselines, confirms that our pre-training model and method deliver strong transfer. In particular, random walk patterns enable effective cross-domain generalization.

Table 2: Transfer learning performance. $\dagger$ denotes the best performance among all (semi-)supervised methods in Table 1; $\ddagger$ denotes the best performance among all self-supervised methods in Table 1; \* denotes pre-training RWPT with Arxiv+FB15k237+ChEMBL.

Figure 4: Aggregated transfer learning performance. “Best of baselines” denotes the highest score among (semi-)supervised and self-supervised methods. “OOD” (resp. “ID”) indicates that the datasets for pre-training and downstream testing are from different (resp. the same) domains. “Pre-Train w/ THREE” and “Pre-Train w/ ALL” follow the definitions in Table 2.

Few-shot learning (label-level transfer). In this learning paradigm, a support set of $N$ classes with $k$ examples each is given and one is asked about the class of the query. Typically, $k$ is very small and the classes used for testing are not seen in training. Hence, few-shot learning tests two abilities: the ability to predict new labels and the ability to learn from very few examples.
In Table 3, we fine-tune RWPT on a few select datasets, for each of which we conduct a few $N$-way $k$-shot experiments. (See Section B for fine-tuning details.) We compare RWPT with methods that train a separate model for each dataset and methods that (pre-)train a single model, i.e., foundation-model-style methods. From Table 3, we see that foundation-model-style methods outperform individually trained models, which is not surprising due to the limited training examples. Moreover, our model RWPT performs the best in nearly all cases, with the second best generally attained by GFT. While it is uncommon to include supervised and self-supervised baselines in few-shot settings, we follow GFT [75] by fine-tuning models on limited samples to enable this comparison. MLP, DGI, and our method share the same prediction head, differing only in whether they use raw or LLM-generated features. GNN baselines rely on raw features but incorporate structural information via message passing. The results highlight that both feature quality and structural awareness are crucial for few-shot learning, supporting our claim that our model’s generated embeddings effectively integrate the graph structure and dataset context.

# 5.4 Ablation study

Comparison between random-walk sampling and neighborhood sampling. We motivate the use of random walks for representation learning in Section 1 with a few reasons. One of the reasons is that random walks can better cope with long-range interactions if they are important for some downstream tasks or datasets. Compared with neighborhood sampling [28], multiple random walks equally retain sufficient near-hop neighbors while being able to extend very far. To substantiate this argument, we perform an experiment to compare the two sampling approaches. We reuse the Transformer backbone but prepare the input sequences differently. For neighborhood sampling, we list all sampled nodes in a sequence in an arbitrary order.
The ordering does not matter because we use the hop number as the positional encoding. Note that it is impossible to form edge features equivalent to the random-walk case because adjacent nodes in the sequence are not necessarily connected by an edge. Hence, we ignore edge features for neighborhood sampling. We compare the two approaches by using a similar sequence length. Table 3: Few-shot learning performance. Results of BGRL, GraphMAE, GIANT, PRODIGY, OFA, and GFT are replicated from [75]. From Table 8 in Section H.1, we do not see one setting that performs uniformly the best, but we note that using a walk length $\ell = 8$ achieves the best result for the two datasets from the Long Range Graph Benchmark (Peptides-func and Peptides-struct). Moreover, we see that random-walk sampling generally achieves better results than neighborhood sampling across datasets. Ultimately, the best walk length is dataset-dependent, but random walks always offer an opportunity to capture a larger receptive field, if ever needed. Comparison of training losses. We propose a new loss (9) in Section 3.4 to pre-train our Transformer. This pre-training loss contrasts the context of a node and those of other nodes. We compare this loss with other popular contrastive losses in the graph literature: DGI [73], GraphPrompt [48], and MaskGAE [42]. In brief, DGI uses a random sequence as the negative context; GraphPrompt uses an InfoNCE-style of loss, which sums over all contexts in the denominator; and MaskGAE contrasts the next-hop contexts. Additionally, we compare our loss with the mask-token prediction approach commonly used to pre-train LLM encoders. Specifically, we add a token (or position) reconstruction term to the loss. See Sections H.2 and H.3 for the mathematical details. The results are reported in Tables 9 and 10 of Sections H.2 and H.3, respectively. Compared with other contrastive losses, our loss achieves the best results on 5 out of 9 datasets. 
No other compared loss performs best as often. Meanwhile, adding a reconstruction term flips four best cases to second best while flipping four second-best cases to best. We conclude that our loss is more favorable than the other contrastive losses and that adding a reconstruction term is not beneficial.
# 6 Conclusion

A foundation model like GPT elicits many emergent abilities, owing to the pre-training with broad inclusion of data and the use of the powerful Transformer architecture. While foundation models in natural languages are prevalent, can we build similar models for graphs? This paper describes an approach toward a graph foundation model that is pre-trained with diverse graph datasets by adapting the Transformer backbone. A central challenge toward this end is how a sequence model encodes graphs of varying sizes and from different domains. We propose representing a node as multiple random walks, such that the Transformer can extract node representations from sequences, which in turn form edge and graph representations. We develop a novel context prediction loss for these random walks and theoretically analyze their expressive power in distinguishing neighborhoods and graphs. We also demonstrate the pre-training of our model and its adaptation to downstream tasks, showcasing its potential as a foundation for processing and reasoning with graph-structured data.
# 1 Introduction

Large language models (LLMs) have seen widespread adoption in software development for code generation, bug fixing, question answering, test generation, and related tasks. Recently, agentic assistants such as Cursor (Anysphere Inc. [2025])—an AI-powered IDE based on Visual Studio Code—have gained traction for their ability to not only answer questions but also perform complex code edits and execute commands. By contrast, semiconductor hardware design has not benefited as significantly from LLMs. Generating Verilog RTL (Register-Transfer Level—the textual code used to design digital logic chips) with LLMs presents unique challenges, including the limited availability of high-quality training data (Wang et al. [2025], Liu et al. [2025a]) and the relative recency of domain-specific benchmarks. Two widely used datasets are VerilogEval (Liu et al. [2023], Ho et al. [2024]) and RTLLM (Lu et al. [2023], Liu et al. [2025b]), which report pass rates as high as 63% on GPT-4 and 94% for agentic approaches (Pinckney et al. [2025], Ho et al. [2024]). However, these benchmarks are narrow in scope and do not reflect the full complexity of hardware development workflows. Moreover, their high pass rates leave little headroom for measuring future improvements, limiting their usefulness as research drivers. VerilogEval and RTLLM rely on hand-crafted prompts and evaluate on small, self-contained problems. RTL-Repo (Allam and Shalan [2024]) introduces more realistic GitHub-derived contexts, prompting LLMs to complete redacted code regions. While it captures real-world structure, RTL-Repo focuses solely on code completion and does not test broader challenges like specification-to-RTL generation, debugging, or verification. Related benchmarks cover testbench stimuli (Zhang et al. [2023]), though close to 100% coverage of their benchmark is achievable by Claude 3.5 Sonnet, and formal assertions (Liu et al. [2025b]).
We introduce the Comprehensive Verilog Design Problems (CVDP) benchmark, which expands on prior work with broader task coverage and greater depth. CVDP includes 783 human-authored problems across 13 categories, including RTL generation, design verification, debugging, assertion creation, and technical comprehension. Tasks are provided in both Non-Agentic (single-turn) and Agentic (multi-turn, tool-using) formats. Previous benchmarks focus on single-turn prompts and evaluation infrastructure, while CVDP is designed to evaluate agents, with support for tool interaction, iterative workflows, and complex reasoning. CVDP addresses the growing need for benchmarks that reflect real-world hardware development. Problem categories cover tasks such as RTL/testbench generation, debugging, assertions, code modification, power and area optimization, question answering, and code-spec alignment. The dataset is intended to expand over time, evolving alongside improvements in LLM and agent capabilities, while continuing to offer meaningful challenge and headroom for future research. This work makes four key contributions: 1. The first agentic-oriented benchmark for Verilog RTL code generation, verification, and related tasks. The benchmark’s prompts and infrastructure are designed to evaluate Dockerized LLM-based agents on real-world problems with EDA tool use. 2. A broader benchmark that encompasses a wider range of hardware design and verification tasks. The benchmark is intended to support both model and agent research. Initial Non-Agentic categories were selected with larger agent workflows in mind, representing useful subtasks within larger design processes. 3. A more challenging benchmark, featuring tasks significantly more difficult than those in VerilogEval (Liu et al. [2023], Pinckney et al. [2025]) and RTLLM (Lu et al. [2023]). Prior benchmarks largely drew from public repositories and are increasingly saturated, with high pass rates from both models and agents.
In contrast, the current benchmark offers data points crafted from scratch and QA’ed by experienced hardware engineers, each with more than 4 years of experience. As a result, we show that state-of-the-art models—including Claude 3.7 Sonnet, GPT-4.1, and LLaMA 3.1 405B—achieve no more than a $34\%$ pass rate on code generation questions in our benchmark, providing substantial headroom for future research in LLM-driven hardware design.

4. Analysis of model failures examines why state-of-the-art models frequently fail across specific categories and offers insights into the key capabilities LLMs must develop before they can be reliably deployed for real-world hardware design and verification.

RTL code represents only a small fraction of public GitHub repositories compared to software code, and much design knowledge remains proprietary within industry. Consequently, there is a strong need for an advanced, human-written, publicly available benchmark dataset—composed of real-world design problems authored by design and verification experts. We created CVDP to address this critical gap.1

# 2 CVDP Dataset

The CVDP dataset and infrastructure build on methodologies from software LLM benchmarks such as SWE-bench (Jimenez et al. [2024]) and Microsoft’s Copilot evaluation harness (Agarwal et al. [2024]). Whereas SWE-bench had access to a wide range of high-quality, open-source software code repositories and well-documented resolved GitHub issues to pull from, similar high-quality RTL repositories are not as available in the open-source domain. Instead, we engaged a team of approximately 35 hardware engineers, each with more than 4 years of Verilog and verification experience, to author problems across 13 task categories and difficulty levels, in both Non-Agentic and Agentic formats.
In addition, subject matter experts with doctoral degrees in hardware design and/or engineering management experience also reviewed each problem for accuracy, task fit, and appropriate scope, with intensive manual review during initial calibration batches to ensure data quality and task alignment. Once categories stabilized, LLM-based filtering was used to catch errors, such as missing context or incorrect category, and to score prompt ambiguity and consistency. Sanity checks ensured all reference solutions passed and incomplete contexts failed as expected. Of the 1,313 problems written, 783 were retained after the quality filtering described in Section 3. As with any codebase, a benchmark cannot be entirely bug-free (Ho et al. [2024]). Errors may cap maximum achievable scores, and updated benchmark versions will be released as needed. Each datapoint, or “problem,” represents a multi-file repository extracted at evaluation time. A test harness—typically a CocoTB (CocoTB [2025]) simulation script—assesses correctness based on task type. CocoTB is a Python verification framework for testing RTL, and helps to automate the test harness. BLEU (Papineni et al. [2002]) scoring is used where code or natural language snippets are expected verbatim, while technical natural language answers are scored using LLM-based subjective judging. We distinguish between the testbench (SystemVerilog provided in-context) and the test harness (used only for evaluation). Models or agents may generate or use a testbench but never see the test harness or reference solution.

# 2.1 Task Categories

Categories in the initial CVDP release (Table 1) are grouped into two main areas: Code Generation and Code Comprehension. Code Generation covers RTL-focused tasks such as code completion, transforming natural language specifications to RTL, modifying or reusing existing modules, and improving code for linting or quality-of-results (QoR).
It also includes design verification tasks like testbench stimulus and checker generation, assertion creation, and debugging. Code Comprehension includes matching specifications to RTL or testbench code (and vice versa), as well as technical question answering on both RTL and testbench content. These categories reflect common subtasks in real-world hardware design and verification workflows. Non-Agentic problems are evaluated in a single-turn setting where the prompt and context are fully provided to the model. In contrast, Agentic problems run inside a Docker container, allowing an agent to inspect a mini-repository and invoke tools (e.g., simulators). For both Non-Agentic and Agentic problems we limited datapoint creation to oracle contexts, where models are provided only the minimal, relevant information needed to complete the task, bypassing the need for retrieval or broader context understanding. However, this is not a technical limitation of the benchmark infrastructure, and a full-repository context could be added to future datapoints. Category volumes were based on likely deployment scenarios. Most task categories include both Non-Agentic and Agentic datapoints, but some were designed as Non-Agentic-only or Agentic-only based on their expected use case—e.g., simpler tasks for single-turn model inference, and more complex tasks requiring tool use for agentic evaluation. Each datapoint includes the context and a golden reference solution. Supporting materials—such as related module documentation, testbenches, or editable starter code—were included as needed. The benchmark is packaged as two JSONL files: one for Non-Agentic and one for Agentic datapoints. Table 1 also reports the mean and maximum prompt and context token counts for each category, as estimated using the tiktoken cl100k_base encoding.

# 2.2 Datapoint Author Guidelines

Datapoint writers were instructed to cover a range of human-tagged difficulty levels—easy, medium, and hard.
Table 1: Comparison of Non-Agentic and Agentic problem counts by task category.

Since proxies like lines of code or gate count poorly capture true complexity (e.g., a 32-bit 16:1 multiplexer may be written succinctly or verbosely), writers were told to prioritize clarity and best coding practices over artificial complexity. Non-Agentic problems include only easy and medium tasks, while Agentic problems span all difficulty levels, as hard problems are too complex for single-turn evaluation. Writers were also asked to diversify topical coverage within each category, including: (1) FSM and control logic (e.g., Mealy/Moore, arbitration, counters); (2) Arithmetic and datapath (e.g., adders, multipliers, shifters); (3) Interconnects (e.g., crossbars, routers, FIFOs); (4) Memory systems (e.g., caches, CAMs); and (5) Architecture (e.g., CPUs, accelerators).

# 3 Benchmark Infrastructure

The benchmark infrastructure is implemented in Python and includes callback interfaces to evaluate custom models or agents. An overview of the evaluation flow is shown in Figure 1. Each datapoint can be run with either the initial context or the reference solution, enabling self-checking of harness validity. Harnesses use open-source tools where possible, including Icarus Verilog simulation (Williams [2025]), Yosys logic synthesis (Wolf and the YosysHQ contributors [2025]), and Verilator linting (Snyder and Contributors [2025]). Some tasks (cid12–14) require commercial tools, currently Cadence Xcelium (Cadence Design Systems, Inc. [2025]). All agents and harnesses run inside Docker containers to isolate evaluation artifacts, ensure tool consistency, and maintain security. Users populate tool and agent images using provided templates. Configurable timeouts and retry counts accommodate varying compute access.

Figure 1: Benchmark Evaluation Flow.

The infrastructure includes a map feature for querying models across datapoints with custom prompts—useful for prompt refinement or batch evaluation.
The map feature also supports automated quality filtering using an LLM judge to score datapoints and remove low-quality examples. Lastly, Agentic and Non-Agentic formats can be converted into one another, allowing single-turn evaluation on Agentic problems or multi-turn agent evaluation on Non-Agentic problems.

# 4 LLM Benchmark Results

We evaluated state-of-the-art models on the CVDP dataset, including both Non-Agentic and Agentic problems. Models evaluated include Anthropic Claude 3.7 Sonnet with and without Extended Thinking (Anthropic [2025]), Claude 3.5 Haiku, OpenAI GPT 4.1 (OpenAI [2025a]), GPT o1 (OpenAI [2024]), o4-mini (OpenAI [2025b]), Meta Llama 3.1 405B (Meta AI [2024a]), and Llama 3.1 70B (Meta AI [2024b]). We report pass@1 with $n = 5$ samples as the pass rate. The pass@k metric is the probability that at least one sample passes among $k$ samples; we estimate the expected value of pass@1 across $n = 5$ samples. For Llama 3.1 405B and 70B, we set the decoding parameters to $T = 0.2$ and top-$p = 0.7$. For the other models we used the default temperature and top-$p$ supported by the API endpoint. Tables 2 and 3 provide pass rates for the code generation tasks across models. Prior Verilog code generation benchmarks, such as VerilogEval v2 (Pinckney et al. [2025]), reported that LLaMA 3.1 405B achieved a pass rate of $57\%$ on specification-to-RTL tasks, with GPT-4o achieving a pass@1 of $63\%$, the best result in that benchmark. In contrast, the tables show that CVDP presents a substantially greater challenge to state-of-the-art models. The highest aggregate pass@1 rate observed was $34\%$ (Claude 3.7 Sonnet), followed by GPT-4.1—the successor to GPT-4o—at $29\%$, and LLaMA 3.1 405B at $23\%$. Agentic problems, when evaluated in single-turn format using a model, were even more challenging overall—particularly for the OpenAI models.
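The pass@k metric described above is typically computed with the standard unbiased estimator popularised by code-generation benchmarks; the text does not spell out the formula, so the following sketch shows the usual form, where $n$ is the number of samples and $c$ the number that pass (pass@1 then reduces to the passing fraction $c/n$):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: probability that at least one of k
    samples drawn without replacement from n generations passes,
    given that c of the n generations pass."""
    if n - c < k:
        return 1.0  # every size-k subset contains a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 with n = 5 samples is simply the passing fraction c / n:
print(pass_at_k(5, 2, 1))  # 0.4
```

Averaging this quantity over all problems in a category yields the per-category pass rates reported in the tables.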
GPT-4.1 achieved a $21\%$ pass@1 on Agentic tasks, $8\%$ lower than its Non-Agentic score. Claude 3.7 Sonnet’s pass rate dropped by $4\%$ between Non-Agentic and Agentic problems, while LLaMA 3.1 405B showed only a $2\%$ drop, likely reflecting its inability to solve many of the harder problems in either setting. All reported results reflect the filtered dataset after automated quality control, as described in Section 2. Prior to filtering, pass rates were lower by approximately $3\%$ and $1.5\%$ on average for Non-Agentic and Agentic problems, respectively. These results highlight the difficulty of the CVDP benchmark and the significant advancements still required before LLMs can be reliably deployed in complex, real-world hardware design and verification workflows. Generation pass rates vary significantly across categories, as shown in Table 2. Categories cid02–04 correspond to RTL code generation and modification, cid07 covers code improvement tasks (e.g., linting and QoR-focused modifications), and cid12–14 correspond to design verification tasks. Category cid16 is also included in the generation evaluation. Design verification categories—specifically testbench stimulus and checker generation (cid12–13) and assertion generation (cid14)—exhibit substantially lower pass rates compared to other code generation categories. This is examined in more detail in Section 5. Notably, state-of-the-art LLMs consistently struggle to generate even syntactically valid testbench code, despite it being written in the same hardware description language (SystemVerilog) as the RTL code generation tasks. This discrepancy may stem from the more procedural and imperative nature of testbench code, as opposed to the declarative structure typical of RTL logic.

Table 2: Non-Agentic Code Generation Problems: Pass Rates Across Categories and Models. Categories are grouped into RTL generation and modification, code improvement, testbench or assertion generation, and debugging.
Results are reported as pass@1 with $n = 5$ samples. Agentic datapoints were converted to Non-Agentic format for evaluation, as no open-source, general-purpose hardware design agent currently exists. Agentic generation pass@1 rates across categories, shown in Table 3, follow similar trends to those observed in Table 2. Code Completion (cid02) and Code Improvement (cid07) tasks are exclusive to the Non-Agentic dataset, while the Agentic dataset introduces Spec-to-RTL Module Reuse tasks (cid05). These problems require composing multiple existing RTL modules into a new top-level module, often with additional glue logic, to satisfy the specified behavioral requirements. As in the Non-Agentic results, Claude 3.7 Sonnet performs notably well compared to other models on most RTL code generation and debugging categories (cid03–04, cid16). However, Claude 3.7 Sonnet does not exhibit a significant advantage over other models on Spec-to-RTL Component Reuse (cid05), suggesting that while it excels at generating or modifying RTL code, it struggles with the more complex task of composing existing RTL components to implement new functionality.

Table 3: Agentic Code Generation Problems: Pass Rates Across Categories and Models. Categories are grouped into RTL generation and modification, testbench or assertion generation, and debugging. Results are reported as pass@1 with $n = 5$ samples.

The Code Comprehension dataset is limited to Non-Agentic format and is scored differently from the Code Generation problems. RTL/Testbench Correspondence tasks (cid06, cid08) are evaluated using BLEU (Papineni et al. [2002]) scores, as the expected responses are code or natural language snippets that should match a reference verbatim. RTL/Testbench Question & Answer tasks (cid09–10) are scored using subjective, LLM-based evaluation: the model compares an actual response against the reference solution in the context of the original prompt.
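The BLEU scoring used for the Correspondence tasks can be illustrated with a simplified sketch. The paper does not state its exact BLEU configuration (tokenisation, n-gram order, smoothing), so the minimal implementation below of modified n-gram precision with a brevity penalty is illustrative only; a real evaluation would use an established library implementation:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Simplified sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty. No smoothing."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        overlap = sum((ngrams(cand, n) & ngrams(ref, n)).values())  # clipped counts
        total = max(sum(ngrams(cand, n).values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0.0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * geo_mean

# A verbatim match scores 1.0; any divergence lowers the score.
print(bleu("assign y = a & b ;", "assign y = a & b ;"))  # 1.0
```

This behaviour matches the stated intent: responses that should match the reference verbatim score highest, and partial matches degrade smoothly.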
The scoring prompt instructs the model to emphasize information explicitly requested in the original question. For efficiency and availability, GPT o4-mini is used as the scoring model. As shown in the results, all LLMs perform well on the Question & Answer tasks, with minimal gains observed from newer models over older ones. Since conversational QA has been a central application area for LLMs, this may reflect the models’ maturity in chatbot-style environments. However, further investigation is needed to assess the technical reliability of these scores.

Table 4: Non-Agentic Code Comprehension Problems: Overall and Per-Category Scores. Categories are grouped into Correspondence and Question & Answer problems. Results are reported with $n = 5$ samples.

# 5 Failure Analysis and Insights

We perform a systematic and detailed category-level analysis of the failed cases for each LLM to identify the critical areas that need improvement in state-of-the-art LLMs across various Verilog design categories (i.e., RTL coding, assertion generation, testbench generation, debugging, etc.). The category-level failure analysis flow is shown in Algorithm 1. First, we leverage a reasoning LLM (i.e., o1) to reflect on the failed data points and project the failure reflections into a vector space using SentenceTransformer (Lines 2 to 5). Then, we apply the unsupervised K-means clustering methodology (Sinaga and Yang [2020]) to generate the optimal number of clusters based on the maximum silhouette score (Lines 8 to 14). Finally, we use a reasoning LLM (i.e., o1) to interpret and summarize the category-level failures (CF), identifying the critical shortcomings of state-of-the-art LLMs in Verilog design and verification tasks (Lines 15 to 18).

# Algorithm 1 Category-Level Failure Analysis

Require: Dataset $F_c = \{f_{c,1}, f_{c,2}, \ldots, f_{c,n}\}$
1: Set $F_e = [\,]$
2: for each failed data point $f_{c,i} \in F_c$ do
3:   $r_{c,i} = \mathrm{Reflect}(f_{c,i})$
4:   $F_e.\mathrm{append}(\mathrm{Embedding}(r_{c,i}))$
5: end for
6: Set $s_{best} = 0$; $k_{best} = 0$
7: Set $L_{best} = \mathrm{zeros}(n, 1)$
8: for $k \gets 2$ to 11 do
9:   $L_k = \mathrm{Kmeans}(F_e, k)$  {K-means clustering}
10:  $s_k = \mathrm{silhouette\_score}(F_e, L_k)$
11:  if $s_k > s_{best}$ then
12:    $s_{best} = s_k$; $k_{best} = k$; $L_{best} = L_k$
13:  end if
14: end for
15: for $g \gets 0$ to $k_{best}$ do
16:   $CF_g = \mathrm{Reason}(F_{g,e})$
17: end for
18: Return $CF$

Figure 2: Failure analysis on different problem categories. (a) Average number of clusters per problem area (RTL Coding, Correspondence, Design Verification) for Llama 3.1 405B, Claude 3.7 Sonnet, and GPT 4.1. (b) Testbench Checker (cid13) Claude 3.7 failure cluster visualization; failure entities include missing timescale, incorrect modulo, flawed synchronization, insufficient coverage, truncated code, uninitialized variables, multiple drivers, syntax errors, and unmatched blocks. The visualization plot uses the PaCMAP dimensionality reduction method (Wang et al. [2021]).

We present category-level failure analysis results for Llama 3.1 405B, Claude 3.7 Sonnet, and GPT 4.1 in Table 5. We report the number of failed cases, number of clusters, the failure entity of the largest cluster, and its percentage share of the total failed cases within each category. We observe that state-of-the-art LLMs particularly struggle with testbench stimulus generation (cid12), testbench checker generation (cid13), and assertion generation (cid14).
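The silhouette-based selection of the cluster count in Algorithm 1 (lines 8 to 14) can be sketched in plain Python. The paper uses SentenceTransformer embeddings and library implementations of K-means and the silhouette score; for a self-contained illustration, the sketch below substitutes a minimal Lloyd's K-means and silhouette computation over toy 2-D points, so all names and data here are illustrative rather than the authors' implementation:

```python
import math
import random

def dist(p, q):
    return math.dist(p, q)

def kmeans(points, k, iters=100, seed=0):
    """Minimal Lloyd's K-means returning one cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist(p, centers[j])) for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(axis) / len(members) for axis in zip(*members))
    return labels

def silhouette(points, labels):
    """Mean silhouette coefficient; singleton clusters score 0 by convention."""
    if len(set(labels)) < 2:
        return -1.0
    scores = []
    for i, p in enumerate(points):
        same = [j for j, l in enumerate(labels) if l == labels[i]]
        if len(same) == 1:
            scores.append(0.0)
            continue
        a = sum(dist(p, points[j]) for j in same if j != i) / (len(same) - 1)
        b = min(
            sum(dist(p, points[j]) for j, l in enumerate(labels) if l == other)
            / labels.count(other)
            for other in set(labels) if other != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

def best_clustering(points, k_max=4):
    """Lines 8-14 of Algorithm 1: keep the k with the highest silhouette."""
    best_k = max(range(2, k_max + 1),
                 key=lambda k: silhouette(points, kmeans(points, k)))
    return best_k, kmeans(points, best_k)

# Two well-separated blobs standing in for failure embeddings: k = 2 is selected.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
k, labels = best_clustering(pts)
print(k)  # 2
```

In the paper's pipeline, each point is instead a high-dimensional embedding of an LLM-written failure reflection, and each retained cluster is summarized by a reasoning model into a named failure entity.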
Compared to RTL coding (cid02–cid04, cid07), the average number of clusters for design verification and debug problems (cid12–cid14, cid16) is consistently higher across all three models—Llama 3.1 405B, Claude 3.7 Sonnet, and GPT 4.1—as shown in Figure 2a. In the design verification categories, in addition to syntax and functional errors, failure entities include issues like "Misplaced SVA" and "Insufficient Coverage." To illustrate the diversity of failure types within design verification problems, we present a cluster visualization plot for Claude 3.7 Sonnet on Testbench Checker generation (cid13) using the PaCMAP dimensionality reduction method (Wang et al. [2021]) in Figure 2b, which preserves both local and global distances. Lastly, we further analyze the Testbench Checker Generation set (cid13) after applying quality filtering (as shown in Table 1), since state-of-the-art LLMs achieve the lowest pass rates in this category, and a larger number of data points are filtered during the quality screening process among the design verification categories. Figure 3 presents the cluster visualizations of Llama 3.1 405B, Claude 3.7 Sonnet, and GPT-4.1 on design verification categories before and after quality filtering. Compared to the unfiltered data, the number of failure clusters is reduced after quality filtering due to decreased ambiguity and increased consistency in the problem descriptions. For Claude 3.7 Sonnet specifically, the number of failure clusters drops from 6 to 2 after quality filtering, reflecting the improved clarity of the case descriptions. In summary, our failure analysis reveals key challenges and insights into where state-of-the-art LLMs struggle across RTL tasks—particularly in design verification—offering valuable and comprehensive benchmarks for advancing LLM research in hardware design and verification.

Table 5: Failure analysis of Non-Agentic Generation, pass@1 ($n = 1$).
For each category, we show the number of failures, the number of clusters, the top failure entity, and the maximum cluster share (failed cases in the largest cluster divided by total failed cases).

Figure 3: Failure cluster visualization of the Testbench Checker set (cid13) before and after quality filtering, using the PaCMAP dimensionality reduction method (Wang et al. [2021]). After quality filtering, the number of failure clusters is smaller because of reduced ambiguity and improved consistency in the prompts.

# 6 Limitations

The CVDP benchmark is designed to push the limits of existing LLMs and agents in solving real-world hardware code generation tasks. While considerably more challenging for current large language models than prior benchmarks—particularly in areas such as design verification and module reuse—it does have limitations. The contexts of the Agentic datapoints are, on average, larger than those of the Non-Agentic datapoints. However, the Agentic context remains an oracle context and does not include files referencing additional units. The Question & Answer Code Comprehension datapoints do not sufficiently challenge the LLMs, and a separate task category focused on specification creation from RTL code may be more informative and demanding while addressing similar comprehension goals. Finally, the tasks in the benchmark are limited to standard hardware design and verification tasks and do not encompass the full range of challenges a design or verification engineer might face from project inception through fabrication. Specific academic and industry organizations may have additional requirements, custom tooling, or specialized needs not fully addressed by CVDP.
# 7 Conclusion

We present the Comprehensive Verilog Design Problems (CVDP) benchmark, a new dataset and infrastructure to advance LLM and agent research in hardware design and verification. CVDP includes 783 problems across 13 task categories, covering RTL generation, verification, debugging, specification alignment, and technical Q&A, authored by experienced hardware engineers. Problems are offered in both Non-Agentic and Agentic formats. The benchmark introduces more realistic and challenging contexts than prior work, with state-of-the-art models achieving no more than $34\%$ pass@1 on code generation. Agentic tasks, especially those involving RTL reuse and verification, are particularly difficult. Evaluation uses open-source tools and model scoring infrastructure, with comprehension tasks assessed via BLEU and LLM-based judging. CVDP reveals substantial gaps in current model capabilities, underscoring the need for continued research toward robust, real-world hardware design automation.
[ "cs.LG", "cs.AR" ]
# 1 Introduction

Accompanying each new software version is a crucial document, the release note. Release notes summarise the new features, enhancements, bug fixes, and other changes in a software update, conveying the rationale and impact of changes to downstream developers and end users [Moreno et al. 2017; Wu et al. 2023], effectively serving as a software trail [Abebe et al. 2016]. To make informed update decisions, stakeholders evaluate the benefits of the update, such as bug fixes, features, and performance advancements, against potential drawbacks, like breaking changes that hinder the adoption of the new release [Bernard 2012]. Additionally, project managers use release notes to track the development progress and set milestones for release targets [Bi et al. 2020].

Figure 1: Example release notes for v1.14.62 (2024-08-18), comparing the original with the outputs of Conventional Changelog, DeepRelease, and SmartNote (ours).

However, the task of writing release notes requires careful deliberation. Many developers and maintainers are reluctant as they perceive writing release notes as time-consuming and tedious; therefore, release notes are often neglected [Khomh et al. 2015]. Developers have expressed their frustrations, with one Microsoft developer stating in a blog post [Greenaway 2018], “We hate creating release notes when it’s our turn to ship software updates. This can be quite a challenge and very time-consuming”.
Empirical evidence supports this claim. Moreno et al. [2014] found that it takes up to 8 hours for an experienced developer to draft a release note document. This challenge is even more pronounced for the maintainers of large-scale open-source projects. For example, maintainers of espressif/esp-idf are required to gather information from more than 5,000 commits to compile a single release note document for v5.2. With the rise of continuous software delivery — a software engineering approach that emphasises rapid, efficient, and reliable service improvements [Chen 2015; Humble and Farley 2010] — frequent software releases are becoming the new norm. This shift places an increasing burden on developers to consistently produce high-quality release notes. In general, release note producers face the time-consuming and challenging task of creating a well-balanced, high-quality release note, while ensuring that it is suitable for their target audience [Wu et al. 2022]. Automated release note generation tools, such as DeepRelease [Jiang et al. 2021] and Conventional Changelog [2024], have been proposed and implemented by researchers and practitioners. However, they lack widespread adoption. Sumana et al. [2024] interviewed respondents who indicated that automated tools are not really useful in reducing work stress. Similarly, Bi et al. [2020] found that none of the participants they interviewed had adopted such tools for automatically generating release notes. The lack of adoption may be due to limitations that hinder their real-world application. First, they often fail to consider the diverse needs of various audiences and project domains, which are important contextual factors for effective personalisation. For example, release note users and producers have different expectations on releases of different types (i.e., major, minor, patch) from projects of different domains (e.g., applications, tools) [Bi et al. 2020; Klepper et al. 2016; Wu et al. 2022].
This mismatch results in release notes lacking critical details for specific user groups and leaves users frequently overwhelmed with generic and irrelevant information. For instance, users of Libraries & Frameworks prefer more prevalent changes documented in release notes, while users of Software Tools prioritise information about performance and security improvements [Nath and Roy 2022; Wu et al. 2023]. Second, existing tools have limited applicability, demand extensive adoption efforts, require significant workflow adjustments, and often produce overly verbose outputs. For instance, ARENA only supports specific programming languages [Moreno et al. 2017], and DeepRelease excludes $46\%$ of projects that do not adhere to pull request (PR) practices [Jiang et al. 2021]. Our results confirm that DeepRelease failed for ${\sim}10\%$ of the projects we analysed. Other tools rely on basic automation that aggregates changes directly from version control systems, such as Git commit messages. However, this approach fails to adequately communicate the impact of changes [OpenStack 2022] and generates overly verbose release notes which lead to disengagement and difficulties in pinpointing significant and relevant changes [Nath and Roy 2024; Wu et al. 2023]. Moreover, a common frustration with many off-the-shelf release note generators, like Conventional Changelog [2024] and semantic-release [2024], is that they enforce commit message conventions, such as conventional commits [2024] or angular [2020] respectively, and require rigorous configuration. These approaches restrict usability by imposing requirements and pressuring developers to modify their workflows or design independent solutions. For instance, our evaluation reveals that Conventional Changelog fails for more than half the projects analysed. Third, current approaches do not explore the effectiveness of LLMs in release note generation.
To the best of our knowledge, despite recent advancements in natural language processing (NLP) and the proven capabilities of LLMs in various code and text-related tasks [Achiam et al. 2024; Anthropic 2024; Touvron et al. 2023; Zhao et al. 2023], there are no existing studies investigating the integration and utilisation of LLMs in automated release note generation. LLMs have significant benefits in code comprehension tasks that allow them to capture semantic meaning and connections between code and natural language text [Feng et al. 2020; Nam et al. 2024; Wang et al. 2021], allowing for intricate summaries of changes. Therefore, the use of LLMs could offer benefits in aggregating information from various sources and enabling rich, high-quality, personalised release notes by automating the interpretation of complex commit histories, extracting the most relevant information, and incorporating writing styles and organisational patterns to automatically generate concise and accurate release notes. Thus, to bridge the gap in existing literature and support open source software (OSS) maintainers, we propose SmartNote, a release note generator that utilises the highly effective code-to-natural-language comprehension capabilities of LLMs [Nam et al. 2024] to produce personalised, high-quality release notes for diverse GitHub projects. Notably, SmartNote infers optimal settings based on project context and captures the semantics of code, commit messages, and pull requests. This enables it to generate comprehensive and contextually relevant release notes, as illustrated in Figure 1, even for projects where commit messages are inconsistent, unstructured, or non-compliant with conventional standards, enhancing its applicability. Moreover, SmartNote is designed to accommodate the preferences of various project domains and release types [Wu et al. 2023], ensuring that generated release notes are contextually tailored in terms of content, organisation, and style.
At a high level, SmartNote achieves this in four steps: 1) context comprehension, 2) change summarisation, 3) change categorisation, and 4) change filtration to remove less significant entries and details. These steps are carried out through a five-stage generation pipeline, as we will explain later in Section 3.2. For this study, we selected OpenAI’s state-of-the-art LLM, "gpt-4o", which ranks $\#1$ on the LMArena leaderboard [LMSYS and SkyLab 2024]. To optimise the prompts, we conducted multiple iterations of trial-and-error, guided by best practices in prompt engineering proposed in previous studies [Ekin 2023; Wei et al. 2024] and by LLM vendors [Sanders and Fishman 2023]. To evaluate SmartNote, we analyse generated release notes for completeness, clarity, conciseness, and organisation by conducting human and automated evaluations on 23 open source projects against baselines — DeepRelease, Conventional Changelog, and the projects’ original release notes. To begin with, we find that SmartNote is applicable to all evaluated projects, whereas DeepRelease fails for ${\sim}10\%$ and Conventional Changelog for ${\sim}54\%$ of projects. Furthermore, the human evaluation indicates that over $80\%$ of participants agree or strongly agree that SmartNote achieves the best results for completeness, clarity, and organisation, while conciseness ranks second. In the automated evaluation, we found that SmartNote achieves $81\%$ commit coverage, ranking first in organisation with an information entropy of 1.59. In contrast, conciseness ranks third, yielding mixed results. This can be attributed to the compromises made by release note authors, who often improve conciseness by reducing commit coverage. Moreover, to assess the impact of context awareness on release note quality, we conducted an ablation study with both automatic and human evaluations, comparing SmartNote’s generated release note with three variants.
The results reveal that SmartNote captures human-written nuances, highlighting the importance of prompt engineering and context comprehension as key components of SmartNote’s effectiveness, particularly in completeness and clarity. To summarise, despite industry and academic advances in automated release note generation, a gap remains between current methods and developer expectations. SmartNote resolves the current challenges by introducing the following innovations:

1. Workflow-Agnostic Design: Unlike prior tools that rely on rigid workflows (e.g., pull-merge strategies used by only $54\%$ of OSS projects [Jiang et al. 2021] or commit conventions which are varied and not widely adopted), SmartNote works out-of-the-box with broader applicability (e.g., of the projects we evaluated, DeepRelease fails for more than $10\%$ and Conventional Changelog fails for more than $50\%$).
2. User-Centric: SmartNote generates clear, complete, and well-organised release notes, making them user-friendly, actionable, and tailored to diverse audiences, addressing gaps that hinder the adoption of state-of-the-art tools.
3. Tailored Pipeline: SmartNote uses a tailored pipeline to aggregate and classify commits, score them for significance, merge related changes, and organise release notes based on project domain as defined by previous research [Wu et al. 2023] and context. This approach ensures personalised, concise, and structured release notes.
4. Prompt Engineering: Research shows that prompt engineering improves LLM performance [Liu et al. 2023; Schulhoff et al. 2024, 2023; Wei et al. 2024]. Without it, LLMs produce inconsistent, verbose, and sometimes nonsensical results, whereas with it, they are better personalised and more applicable (Section 4, EQ2).
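The aggregate-classify-score-merge-organise pipeline described above can be illustrated with a minimal, self-contained sketch. All names below (Commit, classify, significance_score, render_notes), the keyword-based classifier, and the significance threshold are hypothetical illustrations, not SmartNote's actual implementation, which relies on LLM-based summarisation and classification rather than heuristics:

```python
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    message: str
    files_changed: int

# Hypothetical keyword map standing in for SmartNote's LLM-based classifier.
CATEGORIES = {"fix": "Bug Fixes", "feat": "Features", "docs": "Documentation"}

def classify(commit: Commit) -> str:
    for keyword, section in CATEGORIES.items():
        if keyword in commit.message.lower():
            return section
    return "Other Changes"

def significance_score(commit: Commit) -> float:
    # Toy heuristic: larger changes matter more; real scoring is LLM-driven.
    return min(commit.files_changed / 10.0, 1.0)

def render_notes(commits, threshold=0.1) -> str:
    """Aggregate -> classify -> score -> filter -> organise into markdown."""
    sections: dict[str, list[str]] = {}
    for c in commits:
        if significance_score(c) < threshold:
            continue  # filtration step: drop insignificant entries
        sections.setdefault(classify(c), []).append(f"- {c.message} ({c.sha[:7]})")
    return "\n".join(
        f"## {name}\n" + "\n".join(entries)
        for name, entries in sections.items()
    )

# Sample commits modelled on the entries shown in Figure 1 (shas illustrative).
notes = render_notes([
    Commit("657e06fabc", "fix: stock_zt_pool_em interface", 2),
    Commit("af80e16def", "feat: add version 1.14.62", 1),
    Commit("0000000aaa", "chore: whitespace", 0),  # filtered out as insignificant
])
print(notes)
```

The point of the sketch is the shape of the pipeline, not its heuristics: in SmartNote each of these stages is driven by prompted LLM calls conditioned on the project's domain and release type.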
# 2 Background

In this section, we focus on: 1) studies analysing release note practices, characteristics, and usage; 2) state-of-the-art tools for automating release note generation and an overview of off-the-shelf release note generation tools; and 3) a summary concluding the limitations of current approaches.

# 2.1 Release Note Characteristics and Contents

Initially, release notes served as data sources for broader studies on software maintenance and evolution [Alali et al. 2008; Maalej and Happel 2010; Shihab et al. 2013; Yu 2009]. It has only been in recent years that researchers have turned their attention toward empirical studies that specifically examine release note practices and the development of automated generation techniques. Existing studies aimed at analysing release notes have greatly enhanced collective understanding of the characteristics and usage of release notes. Releases contain various aspects of information that must be organised into categories to form well-structured release notes, which positively impact software activities [Bi et al. 2020], such as software evolution and continuous software delivery. However, release notes can be overwhelming and tedious to write at times, resulting in missed information or errors [Wu et al. 2022]. Additionally, research [Nath and Roy 2022] finds that release note content contains four main types of artefacts: issues (29%), PRs (32%), commits (19%), and common vulnerability and exposure (CVE) issues (6%). It also highlights that users prefer addressed issues to be summarised or rephrased, and that users’ preference for release note contents differs based on their background; for example, bug fixes are essential for developers and testers. Additionally, several studies find that conciseness is an important factor [Aghajani et al. 2020] and that most release notes list only between 6% and 26% of the issues addressed [Abebe et al. 2016].
Expectations for release notes vary across software project domains (such as application software, system software, libraries and frameworks, and system tools) and release types (major, minor, patch) [Wu et al. 2022]. These factors influence the structure and content of release notes, which in turn affects their applicability [Nath and Roy 2024]. To this end, Wu et al. [2023] conducted research to analyse and identify how release notes are organised, in what way they are written, and the content of release notes concerning project domain and release types. The authors analysed 612 release notes from 233 GitHub projects. They found that 64.54% of release notes organise changes into hierarchical structures, sorted by change type, affected module, change priority (the most common), or a combination of the three. Additionally, they identify three writing styles: “expository” (directly list the title or content of change-related commits/PRs/issues), “descriptive” (rephrase the content of change-related commits/PRs/issues to increase readability and summarise the content of similar commits/PRs/issues), and “persuasive” (provide additional information to help developers understand the changes, such as the rationale behind the changes or their impact), which offers enhanced explanations specific to the project domain. Finally, they analyse the contents of release notes from different project domains, finding that “System Software” and “Libraries & Frameworks” projects are more likely to record breaking changes, while “Software Tools” projects emphasise enhancements.

# 2.2 Automatic Release Note Generation

Researchers have developed various approaches and tools to help achieve and enhance release note automation. Klepper et al. [2016] introduced a semi-automatic approach to generating audience-specific release notes that integrates with continuous delivery workflows.
While this method offers flexibility and reduces the burden on release note producers, it assumes ideal conditions and lacks detailed implementation guidance, which may limit its practical applicability in diverse development environments. For example, it assumes that the data sources (e.g., issue trackers and version control systems) are well structured and consistently used by the development team, which may not always be the case in real-world projects. Moreno et al. [2017] introduced ARENA (Automatic RElease Notes generAtor), a tool to automatically generate release notes for Java projects. In comparison, Nath et al. [2021] used an extractive text summarisation technique, TextRank, to automate release note production. Their approach functions without predetermined templates and is language-agnostic. Similarly, Jiang et al. [2021] proposed DeepRelease, a language-agnostic tool utilising deep learning techniques to generate release notes from the textual information contained in PRs. The tool concatenates the title, description, and commit messages of a preprocessed PR as the input, summarising them into a single change entry for the release note. They found that 54% of the 900 open-source projects they analysed generate release notes based on pull requests. However, while it is language-agnostic, DeepRelease requires developers to follow specific pull request development practices to automatically generate release notes. This requirement excludes the remaining 46% of projects that do not adhere to these practices, a significant share. In contrast, Kamezawa et al. [2022] introduced a summarisation approach leveraging transformer-based sequence-to-sequence networks. This approach generates labelled release notes by summarising commit messages, making it adaptable to various projects without specific constraints. However, it places a significant burden on developers to consistently write high-quality commit messages.
Developers have also created off-the-shelf solutions like Conventional Changelog [2024], which by default requires users to adhere to the conventional commit convention [Joslin 2024]. While less intricate than those proposed in research studies, off-the-shelf solutions are widely adopted (e.g., Conventional Changelog has more than one million downloads per week [NPM 2024]) due to their accessibility and ease of integration into development workflows. Additionally, they are customisable to cater to different projects’ needs; however, this flexibility necessitates extensive configuration due to their reliance on templates and labels. Moreover, these tools typically generate changelogs that list all changes, including minor revisions, using the commit message title to describe each change, whereas release notes focus on summarising the most significant changes in a new release, ensuring the impact is understandable for the user regardless of context [OpenStack 2022]. Both nevertheless serve the same fundamental purpose of documenting change. For example, Release Drafter [2024] provides users with the flexibility to configure labels and headings for their commits and release notes. However, this flexibility requires developers to follow a specific commit pattern. Other tools such as semantic-release [2024], changelogen [2024], and github-changelog-generator [2024] operate on similar principles, enforcing commit conventions or specific patterns to generate changelogs.

# 2.3 Knowledge Gap

Despite the strong interest in automation, the application of release note generators in the open source world remains limited. Bi et al. [2020] found that none of the participants they interviewed had adopted release note automation tools, and Sumana et al. [2024]’s participants complain about the stress of release note production even with the assistance of automation. A major contributing factor to this phenomenon is the limitation of current automation approaches.
Existing tools lack key features, are prone to errors, and are often difficult to configure [Aghajani et al. 2020; Wu et al. 2022]. Additionally, existing automation techniques often generate standardised patterns, which can confuse users and lead to hesitancy in adoption due to concerns about relevance and accuracy [Nath and Roy 2024]. To summarise, existing research on release notes has highlighted the importance of structured, well-organised content that caters to both release note producers and users. However, current tools, while effective in aggregating information and categorising release notes, have the following limitations: (1) they fail to address key discrepancies between release note producers and users; (2) they do not personalise the content, structure, and style to meet the diverse needs of various software domains and release types; (3) they have stringent requirements, such as commit conventions, templates, labels, or PR strategies; (4) they require extensive configuration; and (5) some work only for certain programming languages (e.g., ARENA). These limitations often hinder widespread adoption, resulting in inconsistent quality and detail in generated release notes. In short, current generation methods lack personalisation and applicability.

# 3 Approach

To address the gaps and limitations in automatic release note generation, we introduce SmartNote, a novel approach that leverages the comprehension capabilities of LLMs [Nam et al. 2024] to aggregate the "what" and "why" of changes in code, commits, and pull requests, generating high-quality, personalised release notes while remaining language- and workflow-agnostic. To personalise release notes to individual projects and release types, SmartNote incorporates best practices in content, writing style, and organisation identified in previous research [Wu et al. 2023]. In this section, we first introduce the key component of SmartNote, the LLM module.
We then provide an overview, followed by a detailed explanation of each stage of SmartNote.

# 3.1 The LLM Module

The LLM module is integral to the generation pipeline, encapsulating capabilities for the release note generation process. It is responsible for formatting input by generating prompts based on predefined templates and contextual information, as well as for parsing the output. Additionally, it manages the model’s context size limit by limiting the token count for each commit diff. In cases where the limit is exceeded, SmartNote provides warnings. However, the token limit is sufficiently large, making such occurrences rare in practice, except in scenarios such as the initial commit or when entire folders are moved or deleted. In our evaluation, the model’s context size limit was exceeded in only 4 of the 23 projects, affecting no more than two commits. To ensure future compatibility, SmartNote is designed to function with any LLM. For this study, we employed OpenAI’s "gpt-4o" model, which was considered state-of-the-art at the time and ranked #1 on LMArena [LMSYS and SkyLab 2024]. The following subsections detail the prompt engineering techniques employed and the approach used to identify optimal hyperparameters.

Fig. 2. An example prompt and response for the commit summarisation task (shortened). The prompt delimits commit details (e.g., commit SHA) and file changes (e.g., filename, status, diff) with XML tags, notes that most users of the project are developers who benefit from technical details, and walks the model through numbered steps: Step 1 summarises the changes into a short paragraph (about 50-100 words) suitable for a release note document, and Step 6 rewrites the summary, removing references to version changes and updates, with only that final summary output. The example response describes a commit replacing straight lines with Bézier-curve paths in a radial cluster visualisation example.

3.1.1 Prompt Engineering. To ensure high-quality responses from the LLM, we utilised prompt engineering techniques and best practices identified by research [Chen et al. 2024] and recommended by LLM vendors [OpenAI 2024b]. Additionally, we iterated using a trial-and-error process to identify the most suitable prompt template — manually analysing and evaluating the response, then adjusting the prompt until the LLM outputs a correct and high-quality response. Due to the page limit, we are unable to elaborate on all of the prompt templates. However, we illustrate the prompt engineering tactics with the example of the commit summarisation task in Figure 2 (which has been shortened to highlight the key aspects of the prompt and accommodate page space constraints). Complete versions of the prompt templates are available in our replication package.

• Delimiters: Used to clearly indicate distinct parts of an input, especially for multi-line strings, helping the model distinguish the data it should focus on from the instructions given. The more complex a task is, the more important it is to disambiguate task details. We use XML tags [OpenAI 2024b] to clearly indicate the code patch when generating release note entries and, when summarising PRs, to indicate details such as the title and body and to highlight the commits.
• Chain-of-Thought Prompting: By providing a clear structure for the model to follow in the form of intermediate reasoning steps, the input helps guide the model’s responses [Sahoo et al. 2024; Wei et al. 2024].
To this end, we split the complex task into simpler subtasks and ask the model to follow a step-by-step guide to generate its response when summarising changesets.

• One-Shot & Few-Shot Prompting: Higher-complexity tasks benefit from one-shot and few-shot prompting, a technique where one or a few examples are provided in the model input [Sahoo et al. 2024]. While examples don’t always help [Liu et al. 2020; Reynolds and McDonell 2021], they do serve as a mechanism to further guide the model in recalling previously learnt tasks. We used few-shot prompting in the settings generator to determine the project domain [Brown 2020] and one-shot prompting for rephrasing entries and content.
• Intent Classification: Intent classification is a technique to identify the most relevant instruction for a query. When the model needs to handle different cases, it is beneficial to categorise those cases ahead of time by hard-coding them in the model input [OpenAI 2024b]. We use this technique to specify the available project domains the model can use when identifying the project domain, ensuring the model outputs the results we expect.
• Length Specification: This technique involves specifying the length of the desired output [OpenAI 2024b]. We use this during changeset summarisation to limit the model’s verbosity.
• Aligning the Decimals of Numbers: The comprehension capability of LLMs is likely constrained by the design of BPE tokenisers, which interpret “9.11” as “9”, “.”, and “11”. This limitation is still under discussion [Xie 2024]. Research reveals that this capability can be improved with number formatting [Schwartz et al. 2024]. Since adding special tokens to OpenAI’s model is not viable, we used a simple yet effective technique: aligning the decimals to two digits.

3.1.2 Hyperparameters. Hyperparameters affect the accuracy, verbosity, certainty, and more of the LLMs’ responses [OpenAI 2024a].
However, due to the high cost of inference, performing rigorous tuning (e.g., grid search) is not an option. Inspired by previous studies [Li et al. 2024; Liu et al. 2024], we greedily searched the following hyperparameters with manual evaluation:

• Temperature: The temperature parameter controls randomness, with lower values producing more focused outputs [OpenAI 2024a]. After experiments, we found that a temperature of 0 yields consistent and satisfactory outputs, which aligns with previous studies [Peng et al. 2023].
• Top-p: Top-p implements nucleus sampling, considering only tokens within the specified probability mass. It controls the balance between certainty and creativity. We found that a low top-p value of 0.1 encourages the LLM to generate deterministic output and reduces the chances of generating casual content in few-shot classification tasks.

# 3.2 SmartNote Overview

In this section, we outline the stages of SmartNote’s generation pipeline and describe its process for generating a release note from a GitHub repository given two versions: the previous version and the updated version. As illustrated in Figure 3, SmartNote is composed of five stages: 1) Info Retriever, 2) Settings Generator, 3) Commit Analyser, 4) Change Summariser, and 5) RN Composer. In the first stage, information retrieval, SmartNote collects all commit, pull request, and repository data associated with the changes made between the previous and updated versions. Next, in the settings generation stage, SmartNote determines the project domain and commit message quality using the LLM module. Once all the data has been collected and the settings have been determined, SmartNote begins change summarisation. In this stage, commit data is packed into changesets, which are then summarised into release note entries with the LLM module.
The commit analyser stage is run simultaneously, using the machine learning (ML) classifiers to identify the conventional commit category and significance scores of commits, prerequisites for filtering, summarising, and personalising the release note entries. Finally, in the last stage, SmartNote composes the release note. It utilises the LLM module to refine the content by first rephrasing for conciseness and reorganising the entries, and then adjusting the release note to align with the release context.

Fig. 3. SmartNote’s five-stage generation pipeline, illustrated on vuejs/core: 1) Info Retriever (mine repository, GitHub API), 2) Settings Generator (project domain, writing style), 3) Commit Analyser (score and classify changes with the ML classifiers), 4) Change Summariser (create changesets, summarise entries with the LLM), and 5) RN Composer (merge relevant entries, update entity mentions, personalisation and tuning, reorder changes), producing personalised release notes.

# 3.3 Info Retriever

In the information retrieval stage, SmartNote compares two versions of a GitHub project and collects data on commits, pull requests, releases, and projects. First, SmartNote looks for the corresponding git tags (a reference to a specific point in the repository’s history [Git 2024]) of the previous release and the current release and traverses over the commits with the PyDriller [2024] library. In cases where this approach fails, SmartNote falls back to the GitHub API. Notably, we observed that Conventional Changelog fails when the release points to an orphan commit (e.g., [QuestDB 2024a]), whereas SmartNote is not vulnerable in such cases.
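The traversal-with-fallback behaviour described above can be sketched as follows (a minimal illustration of our own; the function names are hypothetical, and the commented line shows the real PyDriller entry point):

```python
def collect_commits(traverse_local, fetch_remote):
    """Collect the commits between two release tags.

    traverse_local: callable performing local repository mining, e.g.
        lambda: pydriller.Repository(path, from_tag=prev, to_tag=new).traverse_commits()
    fetch_remote: callable querying the GitHub API, used as the fallback
        when local traversal fails (e.g., tags that cannot be resolved).
    """
    try:
        return list(traverse_local())
    except Exception:
        # Mirror the behaviour described in the text: fall back to the GitHub API.
        return list(fetch_remote())
```

Injecting the two strategies as callables keeps the fallback logic testable without a live repository.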
Next, SmartNote combines repository mining and the GitHub API to extract: 1) release-related features such as release type (e.g., major, minor, patch), number of commits in the release, number of authors, and number of unique committers; 2) project-related features such as project name, previous and new versions, description, and README document; 3) commit-related features such as commit SHA, author, commit date, commit message, file patches (i.e., the code difference between the previous and new version, or git diff), file extensions, and lines changed; and 4) pull request features such as title, message, and associated commits. Additionally, the information retriever automatically identifies “squash merge” and “rebase merge” pull requests [GitHub 2024] and flags the associated commits, a necessary step for merging individual commits into release note entries.

# 3.4 Settings Generator

To accommodate the varying needs of projects, releases, and target audiences, SmartNote offers configuration options for project domain, writing style, and structure, based on definitions from previous studies [Wu et al. 2023]. Additionally, it provides options for commit grouping and the minimum significance threshold (MST). However, configuring these options can be tedious. To simplify this, SmartNote includes a settings generator that is responsible for automatically identifying and applying context-aware defaults based on findings from previous research and heuristics, an approach proven effective in software automation tools [He et al. 2023], making SmartNote effectively zero-config. The project domain plays a key role in shaping the writing and organisational style of the release note [Moreno et al. 2014; Wu et al. 2023]. SmartNote does not require users to interpret domain definitions and choose the most suitable domain.
Instead, it utilises a project domain classifier that leverages the LLM module to categorise the project domain based on the project’s description and README file. The writing style also plays a key role in shaping content and is closely linked to the project domain [Wu et al. 2023]. If the user has not specified a writing style, SmartNote automatically selects one based on the project domain. However, when the expository style is selected — where release note entries are composed of the original commit messages — an additional evaluation step assesses the project’s suitability for this writing style. If it is unsuitable, the writing style is changed to persuasive. To achieve this, we designed a binary commit message classifier that leverages the LLM module. The prompts are engineered based on findings from a previous study identifying the characteristics of good commit messages [Tian et al. 2022]. The classifier aligns closely with human-labelled data, achieving 95% agreement (Cohen’s kappa = 0.9) with labels produced by two authors who, after carefully reviewing the definitions and examples of "good" commit messages outlined in that study [Tian et al. 2022], independently labelled the overall commit message quality for the projects and subsequently discussed their labels to reach consensus. The MST is an option that allows users to specify a minimum commit significance threshold for changes. It enables SmartNote to filter out insignificant changes (e.g., fixing typos in comments or documentation; adjusting white space or indentation; and reformatting code without functional changes, like running a linter), ensuring the release note is neither empty nor excessively long. This approach aligns with previous studies [Abebe et al. 2016], which found that well-formed release notes typically include only 6% to 26% of the issues addressed in a release.
It also reflects developer sentiments [Rahman 2012], which indicate that excessive detail can cause readers to lose attention, reinforcing the importance of conciseness. Therefore, SmartNote’s MST has been carefully tuned, ensuring release notes highlight significant changes without overwhelming the reader. This balance is achieved through iterative refinement and heuristic analysis, which determined that an MST between 0.1 and 0.15 strikes an optimal balance. Higher thresholds (0.2 or above) tend to introduce excessive detail, making release notes overly verbose, while lower thresholds (0.05 or below) risk omitting too much information, resulting in sparse release notes. For example, in cases like AkShare v1.14.62 [2024], where most commits are minor or insignificant, increasing the MST helps include more changes. Conversely, in cases like RustDesk v1.3.0 [2024], which has many meaningful commits, lowering the MST ensures a balance between verbosity and conciseness. The structure setting determines how the release note is formatted. By default, it is set to "Change Type", the most common release note structure in open source projects [Wu et al. 2023]. Lastly, commit grouping determines whether commits are grouped together. It works by combining commits associated with the same pull request into a single release note entry using the LLM module, an approach commonly seen in real-world projects, thereby improving conciseness. This setting is enabled by default, as conciseness is critical.

# 3.5 Commit Analyser

To counter the problem of generic and verbose release notes, release notes are typically structured by change type [Wu et al. 2023], with the included changes determined by their perceived importance. However, while developers categorise changes in release notes, the vast majority of repositories do not categorise commits.
Notably, approximately 90% of the projects we sampled do not enforce a standard commit specification, explaining the limited effectiveness of off-the-shelf release note generators that rely solely on parsing commit messages. Moreover, projects varying in domain, scale, and collaboration methods may exhibit distinct development patterns and naming conventions. Accordingly, we design a commit analyser that accounts for variations in content preferences based on release type and project context [Bi et al. 2020; Wu et al. 2023]. The commit analyser performs two tasks: 1) evaluating the relative significance of commits, for conciseness, and 2) categorising them, for structural organisation. To achieve this, we leverage machine learning classifiers trained on contextual commit, release, and project features sampled from 3,728 repositories. To train the classifiers, we employ an approach that has been shown to be effective in software engineering studies [Mariano et al. 2019; Xiao et al. 2022]: vectorising the text, combining it with numerical features, and feeding the resulting representation into XGBoost, an efficient and commonly used machine learning classifier [Xiao et al. 2022]. While most contextual features are numerical, LLMs have inherent limitations in processing numerical data effectively [Xie 2024]. In contrast, XGBoost, which implements the gradient boosting decision tree algorithm [Chen and Guestrin 2016], utilises a tree-based decision architecture, improving interpretability and ensuring transparency in the model’s decision-making process.

3.5.1 Data Collection. A high-quality dataset is important for creating reliable and precise models. We begin by defining our selection criteria: popular, actively maintained code repositories (i.e., excluding tutorials or resource collections) with a rich development and release history.
To this end, we sampled 3,728 non-forked and non-archived repositories on GitHub using the SEART GitHub Search Engine (seart-ghs). These repositories were created before 2020, with more than 5,000 stars, more than 10 releases on GitHub, and a codebase exceeding 100 lines. To ensure data quality, we removed six repositories whose git logs could not be parsed, as well as repositories that were duplicated or renamed. Additionally, we removed 23 repositories whose primary language was not English, as multilingual encoding introduces additional complexity and engineering overhead. Finally, we manually verified all repositories, removing five that contained tutorials or configuration files, resulting in a total of 644. The dataset was further processed for commit categorisation by retaining repositories in which at least two-thirds of the commits adhered to the conventional commits specification, resulting in 389 repositories. Lastly, to ensure the commit scoring model captures the intricacies of commit inclusion and exclusion in release notes, we refined the dataset to include only repositories with more than three links to commits in their release notes, resulting in 272 repositories. From this dataset, we identified 21,882 releases encompassing 715,089 commits, of which 139,423 (17.7%) were explicitly referenced in their corresponding release notes.

3.5.2 Feature Selection. To enhance the accuracy and generalisability of the classifiers, we supply the model with extensive contextual features:

• Commit Context: The commit message (emb*) is the most significant semantic feature, as high-quality commit messages capture the “what” and “why” of the changes [Tian et al. 2022]. The scale of the changeset, i.e., the number of added lines (#addLines) and deleted lines (#delLines), represents the complexity of the commit and its relative importance [Levin and Yehudai 2017].
Furthermore, we build a programming language identifier using GitHub’s linguist library [GitHub-Linguist 2024] to identify the programming languages of files modified in commits (lang*), in order to support the commit classifier.

• Release Context: The type of release (releaseType) influences the content of the release notes [Wu et al. 2023]. We parse and compare release versions with the semver library [Preston-Werner 2024] into Major, Minor, Patch, and Unknown (version numbers incompatible with the semantic versioning specification). Moreover, we collect numerical contextual features representing the scale of the release: the number of commits between versions (#releaseCommits), and the number of committers and authors (#releaseAuthors). The modification complexity of the release is measured by averaging the number of modified files (avgChangeset), lines of code affected per file (avgCodeChurn), and history complexity (avgHistoryComplexity) [Hassan 2009].

• Project Context: To quantify the project’s scale and complexity, we measure the following metrics at the time of release: the number of commits (#commits), contributors (#contributors), stars (#stars), issues (#issues), PRs (#prs), and comments (#comments). We categorise the projects into four domains (projectDomain): Application Software, System Software, Libraries and Frameworks, and Software Tools, derived from the six project domains proposed by Borges et al. [2016] and studied by Wu et al. [2023].

Commit messages were encoded into sentence embeddings (fixed-length vectors) leveraging the General Text Embeddings model (gte-base-en-v1.5) [Li et al. 2023]. With 137M parameters, it is small yet performant (ranking 31st on the MTEB classification leaderboard [Muennighoff et al. 2022]), and ideal for SmartNote’s intended use case — CPU-only CI environments.
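As a rough sketch of how these features might be assembled (our own illustration; the real pipeline concatenates the gte-base-en-v1.5 sentence embedding with the numerical context features, and the exact feature set and ordering are not given in the text):

```python
import numpy as np

def build_feature_row(msg_embedding, numeric_features):
    """Concatenate the fixed-length commit-message embedding (emb*) with
    numerical commit/release/project context features (e.g., #addLines,
    #releaseCommits, a projectDomain id) into one row for the classifier."""
    emb = np.asarray(msg_embedding, dtype=float)
    num = np.asarray(numeric_features, dtype=float)
    return np.concatenate([emb, num])

# gte-base-en-v1.5 produces 768-dimensional embeddings; with, say, five
# numeric features the resulting row has 773 columns.
row = build_feature_row(np.zeros(768), [10, 2, 120, 8, 3])
```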
Statistics were mined with PyDriller, a Python framework that extracts information about commits, developers, modified files, diffs, and source code [Spadini 2024]. For historical events of the repositories, we utilised the GitHub API alongside GHArchive, a project that records the public GitHub timeline, with the support of ClickHouse, a high-performance OLAP database. Project domains were labelled manually by two authors: the first round of independent labelling yielded a Cohen’s kappa of 0.57, indicating moderate agreement [McHugh 2012], after which the two authors discussed and reached a consensus.

Fig. 4. Feature importance for (a) commit classification and (b) commit scoring.

3.5.3 Model Training and Feature Importance. To mitigate the threat of overfitting, we implemented several strategies. First, we randomly partitioned the dataset into training, testing, and validation sets in a 7:2:1 ratio, following prior studies [Jiang et al. 2021; Xiao et al. 2022]. Second, we applied the early stopping technique to terminate training when validation loss began to degrade. Third, we conducted K-fold validation to ensure the randomness of the split. Finally, we determined the optimal hyperparameters using grid search. The commit classification model converged after 100 iterations, with an accuracy of 0.71, a precision of 0.71, and a recall of 0.71 (numbers are weighted averages). The 5-fold validation yielded the same performance numbers.
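The 7:2:1 partition described above can be sketched as follows (a minimal version of our own; the paper does not give its splitting code):

```python
import numpy as np

def split_7_2_1(n_samples, seed=0):
    """Randomly partition sample indices into training, testing, and
    validation sets in a 7:2:1 ratio, as used for the classifier training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(n_samples * 0.7)
    n_test = int(n_samples * 0.2)
    return idx[:n_train], idx[n_train:n_train + n_test], idx[n_train + n_test:]

train_idx, test_idx, val_idx = split_7_2_1(1000)  # 700, 200, and 100 indices
```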
Due to the interpretability of decision trees, the importance of each feature is straightforward to obtain, as indicated by XGBoost's reported gain value (the improvement in accuracy brought by a feature to the branches it is on) [XGBoost 2022] in Figure 4. The programming languages of modified files (RST, Markdown, JSON, etc.) stand out, with several dimensions of the sentence embedding also being vital to the model's classification. The commit scoring model converged after 100 iterations, with a precision of 0.94 and a recall of 0.94. K-fold validation yields consistent accuracy numbers between 0.94 and 0.95. Figure 4 shows that the number of commits in the release is the most significant factor in the significance score, while the project domain, release type, and the number of issues also play key roles in determining commit significance, aligning with empirical evidence from previous studies [Bi et al. 2020; Wu et al. 2023]. # 3.6 Change Summariser Gathering data from different sources is a common method utilised in existing research [Klepper et al. 2016] and ensures that the generated release note contains accurate and reliable information. In the change summarisation stage, SmartNote combines data from the previous stages into changesets consisting of the commit date & time, author, message, significance, change type, and file patches. The LLM module processes changesets, generating concise and accurate release note entries, typically single sentences or brief paragraphs, that form bullet points in the final release note. By creating a changeset for each commit and summarising it individually, we break the task down into smaller, more manageable sizes for the LLM, preventing hallucinations and resulting in higher-quality summaries.
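The changesets described in this stage can be sketched as plain records (a minimal sketch; the field names are assumptions based on the attributes listed above, and SmartNote's internal representation may differ):

```python
def build_changeset(commit, significance, change_type):
    """Assemble a per-commit changeset for the LLM summariser.

    `commit` is a dict with the mined commit attributes; the
    significance score comes from the commit scorer and the change
    type from the commit classifier.
    """
    return {
        "datetime": commit["datetime"],
        "author": commit["author"],
        "message": commit["message"],
        "significance": significance,
        "change_type": change_type,
        "patches": [f["patch"] for f in commit["files"]],
    }
```

Each changeset is then summarised individually, so the LLM only ever sees one commit's worth of context at a time.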
When commit grouping is enabled (the default), change summarisation combines all changes associated with a pull request into a changeset, summarises their descriptions, aggregates their scores, and replaces all associated release note entries with a single consolidated entry. # 3.7 RN Composer In the final stage, SmartNote composes the release note by aggregating the changesets into a list using the identified organisation strategy, and then performs several smaller tasks to refine it. (1) Merging Relevant Entries: Occasionally, developers create multiple commits or pull requests for the same feature or change, for instance due to bugs or alterations to the feature, causing overlapping entries that increase verbosity, decrease clarity, and confuse readers. With the exception of release notes with the "Change Priority" structure type, the LLM module remedies this by merging related entries, consolidating them to improve clarity and readability while ensuring no information is lost. E.g., in version 1.3.1 of Sniffnet [2024], there were 22 commits referencing translation; in the release note produced by SmartNote, these were consolidated into 10 entries. (2) Updating Entity Mentions: In major refactoring releases, it is common for an entity (a function or a variable) to be added, renamed, or removed. This causes fragmented entries that refer to the same entity as it worked or was named at different points in the past. To address this inconsistency, SmartNote utilises the LLM module to identify renamed or modified entities and updates their mentions in the release note to match the most recent repository state at the specified version, improving coherence and clarity. E.g., in PR #109 of the d3 project [2024], a function was renamed based on feedback from the project maintainer.
(3) Personalisation and Tuning: To improve conciseness, SmartNote removes details based on the content preferences of the project and its audience. Specifically, entries with significance scores lower than the specified MST are removed, and the release note is trimmed and summarised using the LLM based on the project domain's description and content preferences, guided by findings in [Wu et al. 2023]. E.g., AKShare v1.14.62 [2024] includes a single, simple code change: without this step, a verbose 36-line release note is generated; with it, the changes are condensed into a single entry. (4) Reordering Changes: The inverted pyramid principle of writing suggests placing the most fundamental information at the top. However, ordering with predefined rules may generalise poorly, as a previous study reveals that different projects prioritise different changes [Wu et al. 2023]. SmartNote utilises the LLM to reorder the categories, moving breaking changes and new features, bug fixes, and enhancements to the top, as they are considered more important [Abebe et al. 2016; Wu et al. 2023]. Documentation changes, dependency updates, and version changes are considered less important and placed lower. E.g., in Bevy v0.14.1 [2024], features are moved to the top.
# 4 Evaluation
SmartNote is designed to utilise context awareness to generate high-quality, personalised release notes whilst being broadly applicable. To this end, we design our evaluation to answer three questions: 1) Does SmartNote generate high-quality release notes? (necessary to understand in which aspects SmartNote advances); 2) How applicable is SmartNote? (to compare against existing tools and identify any applicability limitations); and 3) Is SmartNote's personalisation method effective? (to understand SmartNote's personalisation capabilities and effectiveness). Table 1. Distribution of project domains among the projects for evaluation.
To ensure a comprehensive comparison, we collected feedback for four types of release notes: first, for SmartNote; second, for DeepRelease, a recent state-of-the-art deep-learning-powered generator with accessible data; third, for the original release notes produced by the maintainers; and finally, for Conventional Changelog, a remarkably popular off-the-shelf tool with more than one million downloads per week on NPM, a JavaScript package manager [NPM 2024]. This approach covers a wide variety of release notes, enabling us to effectively compare our work against existing methods and solutions. We began by shortlisting emerging open-source projects from the GitHub trending repositories page [2025], considering programming languages such as Python, Rust, C++, C#, Java, JavaScript, and TypeScript. We selected 33 projects with over 500 stars, under active development, with recent community engagement (i.e., new commits, issues, discussions, or release history), and which allow for issues or discussions of enhancements, features, or feedback. Another author independently reviewed the selected projects, ensuring they met the criteria and identifying the project domain. The authors discussed any inconsistencies to reach a consensus. The projects were reevaluated a final time, and those that were historical, extremely large, seldom publish releases, or appear to follow organisational guidelines to produce release notes were excluded, as they are unlikely to be interested in our study. Furthermore, we performed under-sampling to balance the representation of each domain. After reevaluation, 10 projects were removed, resulting in a total of 23 projects, as shown in Table 1. Next, we obtained their latest releases (7 minor releases and 16 patch releases) and release notes from the GitHub API. Finally, we generated release notes using SmartNote, DeepRelease, and Conventional Changelog.
Note that we used the default and automatic options of SmartNote to avoid overwhelming survey participants with multiple variations of the release note. Additionally, we calculated the cost for SmartNote to generate release notes for the evaluation projects with automatic settings. SmartNote's automatic release note generation costs an average of 90 cents per release ($20.82 for 23 releases), which is economical given the extensive time typically required for high-quality release notes [Moreno et al. 2014]. # 4.1 EQ1: Does SmartNote Generate High-Quality Release Notes? We conducted a human and an automatic evaluation to assess the effectiveness of SmartNote and the release notes generated by it. 4.1.1 Quality Expectations. To measure release note content quality, we identify four key criteria from previous work: (1) Completeness, to understand whether the release note sufficiently covers the changes made between the analysed versions. Missing or incorrect change information is a major issue in release note practices [Wu et al. 2022] and software documentation [Aghajani et al. 2020]. (2) Clarity, to understand whether the release note accurately and understandably reflects the changes made in the project. The absence of clarity has been identified as a central issue in software documentation by previous studies [Aghajani et al. 2020]. Changes need to be explained with rationales and details to be understandable by target users. (3) Conciseness, to understand whether the release note succinctly provides the right amount of information. Studies confirm that conciseness is important [Aghajani et al. 2020]; unnecessarily verbose release notes deter readers [Rahman 2012], which explains why most release notes list only 6%-26% of the issues addressed in a release [Abebe et al. 2016]. (4) Organisation, to understand whether the release note is structured clearly and logically.
Practitioners prefer release notes that contain many different aspects of information and are organised into categories [Wu et al. 2022]. Well-formed release notes positively impact software evolution [Bi et al. 2020]. Table 2. Human and automatic evaluation results for release notes. 4.1.2 Human Evaluation. To conduct the human evaluation, we created a unique questionnaire for each project, allowing us to gather feedback from users most familiar with it and its domain. Using 5-point Likert scale questions, the questionnaire was designed to assess the four criteria identified earlier. To ensure the credibility of the survey, we recruited participants with diverse backgrounds. First, we reached out to the most active maintainers of the projects via their public contact information and received one response from a maintainer willing to participate. Next, we utilised snowball sampling to recruit ten open-source developers and six researchers with experience ranging from less than 1 year to over 8 years. In total, we recruited 17 participants; the most common level of experience was 2 to 4 years (55% of participants), followed by 4 to 8 years (29%). Participants selected up to 3 projects based on their familiarity, checked the projects' GitHub release pages, and completed questionnaires. The release note generator's name was masked to prevent bias. Once a project received 3 responses, we removed it from the list, spreading the responses across all projects. Table 2 presents the survey results, with scores averaged across all projects. The agreement percentage, shown in brackets, indicates the proportion of participants who either agreed or strongly agreed. SmartNote outperforms all other tools in completeness, clarity, and organisation, while its conciseness achieves the second-best performance.
The release notes for all evaluated projects are available in our replication package. As a glimpse, we highlight projects where SmartNote was rated significantly better, slightly better, or worse: Significantly Better: AkShare (Figure 1), LangChain, Sniffnet, and UniGetUI. In these projects, SmartNote produced release notes that were organised, context-aware, and audience-specific, and that prioritised significant changes while filtering out irrelevant details, compared to raw LLM outputs and other tools. For example, in the AKShare project, where the commit messages and PRs are simple, SmartNote describes the changes understandably and concisely through code comprehension; in contrast, other tools borrow the developers' words and produce nonsensical output. Slightly Better: Zulip, StirlingPDF, es-toolkit, and Continue. These projects demonstrated improvements in clarity and organisation: SmartNote grouped related changes and provided well-structured summaries. Other tools and the raw LLM likely benefit from the informative commit messages and PRs in these projects and generate release notes of acceptable quality. While the improvements were less dramatic, the notes were still easier to interpret and more actionable than those generated by baseline tools. Worse: Jan was likely rated worse because of the amount of dropped detail (SmartNote: 2 entries vs Conventional Changelog: 16 entries); the default MST configuration may have excluded changes that some users consider relevant. The results of the survey reflect the performance of the fully automated pipeline. Considering this, SmartNote outperforms both the original release notes and competing solutions in terms of completeness, clarity, and organisation. Over 80% of participants agree that SmartNote performs the best among the compared tools. While only 55% agree regarding conciseness, this is unsurprising given our evaluation strategy; with additional project-specific tuning, users can achieve better results.
In light of this, we can confidently say that SmartNote is significantly superior to off-the-shelf tools and state-of-the-art alternatives. 4.1.3 Automatic Evaluation. Although human evaluation is undoubtedly the best way to assess release notes, we conducted an automatic evaluation to 1) complement our findings; and 2) explore the possibility of automatic quality assessment of release notes. Previous work [Jiang et al. 2021; Li et al. 2024; Zhang et al. 2024] employed text similarity metrics such as BLEU and ROUGE to measure the differences between automatically generated release notes and gold standards. However, we find their approach unviable for two reasons. First, previous studies that applied BLEU or ROUGE to generated release notes did not publish their evaluation datasets, so a direct comparison is impossible. Second, text similarity metrics compare the lexical similarity of the text against "gold references"; they can penalise semantically correct hypotheses if they differ lexically from the reference [Wieting et al. 2019]. Thus, they are likely sub-optimal for the task, given the length and diversity of high-quality release notes. To address this, we designed six metrics to automatically measure the four quality aspects of our study. First, we used commit coverage to assess completeness, calculated by dividing the number of mentioned commits by the total number of commits in the release; a higher value indicates better overall completeness. Second, we measured conciseness using the token count of OpenAI's tokeniser, as word counts do not handle code snippets well. Third, to evaluate organisation, we leveraged markdown parsing to identify categories (headers) and items (bullet points) and calculated information entropy, where a higher value signifies a higher number of categories and a more balanced distribution of entries, indicating better organisation.
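The organisation metric above can be sketched as follows (a rough sketch; the exact markdown parsing rules in our evaluation may differ):

```python
import math
import re
from collections import Counter

def organisation_entropy(markdown):
    """Shannon entropy over the distribution of bullet items per
    header: more categories with a more balanced spread of entries
    yield higher entropy."""
    counts, current = Counter(), None
    for line in markdown.splitlines():
        if re.match(r"^#{1,6}\s", line):       # a category header
            current = line.strip()
        elif re.match(r"^\s*[-*]\s", line) and current is not None:
            counts[current] += 1               # a bullet-point item
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum(n / total * math.log2(n / total)
                for n in counts.values())
```

Two categories with two entries each give an entropy of 1.0 bit, while a single category gives 0.0, matching the intuition that a balanced multi-category layout is better organised.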
Finally, for clarity, we aimed to assess both specificity and understandability. We measured specificity by calculating the density of entities (specific software engineering terms, such as the names of operating systems and libraries) with a software engineering named entity recogniser [Nguyen et al. 2023], where a higher value suggests more technical details and a neutral value is optimal. For understandability, we used the Automated Readability Index [Smith and Senter 1967], computed from the average number of characters per word and words per sentence, where a lower score is better; and the Dale-Chall readability formula [Dale and Chall 1948] to measure word commonality, where lower scores also indicate better readability. Results in Table 2 show that SmartNote achieves over twice the commit coverage (81%) of the next best alternative, the original release notes, which achieve only 31%. In terms of token count, which reflects conciseness, SmartNote ranks third; however, this is not a concern, as users can easily adjust the level of conciseness through the configuration settings. These results suggest that release note authors tend to sacrifice coverage for conciseness, as indicated by the token count. In terms of organisation, SmartNote ranks first, with an entropy of 1.59, a significant improvement over DeepRelease, the next best tool, which has an entropy of 1.04. This indicates that SmartNote organises information much more effectively, consistent with the findings from our human evaluation. Finally, the automatic metrics for clarity yield mixed findings: SmartNote ranks third for the Automated Readability Index but first for the Dale-Chall readability score, while having the lowest entity percentage, which measures specificity. Overall, SmartNote demonstrates superior organisation and provides significantly better commit coverage compared to baseline methods.
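The Automated Readability Index used above can be sketched as follows (a minimal sketch; the exact word and sentence segmentation rules in our evaluation may differ):

```python
import re

def automated_readability_index(text):
    """ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43.

    Lower scores indicate easier text. Characters here count
    letters and digits only, a common simplification.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z0-9]+", text)
    chars = sum(len(w) for w in words)
    if not words or not sentences:
        return 0.0
    return (4.71 * (chars / len(words))
            + 0.5 * (len(words) / len(sentences))
            - 21.43)
```

Short words and short sentences drive the score down, which is why terse, plainly worded release note entries tend to rate as more readable under this index.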
These findings align with the human evaluation, further confirming that SmartNote outperforms off-the-shelf tools and state-of-the-art alternatives. In contrast, SmartNote's rank for conciseness suggests that the default settings we used may not produce ideal results. However, the results highlight that release note authors make sacrifices to compensate for other aspects, e.g., compromising on commit coverage to improve conciseness. This reinforces the importance of giving users control to make adjustments where necessary based on their preferences, and demonstrates that SmartNote is adaptable to both user and project needs, providing a significant advantage over static, off-the-shelf solutions. Result of EQ1: The human evaluation shows that over 80% of participants agree that SmartNote performs best in completeness, clarity, and organisation, while only 55% agree regarding conciseness. The automatic evaluation shows that SmartNote ranked first in completeness, with 81% commit coverage, and in organisation, with an information entropy score of 1.59. Conciseness ranked third with a token count of 1162.48, and clarity showed mixed findings. Overall, the automated evaluation aligns with the human evaluation. Conciseness did not achieve superior performance in either, but this is a non-issue as SmartNote offers the flexibility to customise conciseness as preferred. # 4.2 EQ2: How Applicable is SmartNote? To ensure applicability, it is important that release note generators can be used with all projects. Existing tools have stringent requirements, such as commit conventions, templates, labels, or PR strategies; require extensive configuration; and some work only for certain programming languages (e.g., ARENA). SmartNote addresses these limitations by: 1) using a settings generator to determine optimal configurations, 2) ensuring developers do not have to follow any requirements, and 3) being language-agnostic.
Moreover, as shown in Figure 1, SmartNote adapts to the size and complexity of each project, producing concise and relevant release notes even for small projects. To this end, we analysed the projects for which we generated release notes to identify which succeeded and which failed (i.e., generated empty or completely wrong release notes). The success rate of each project is recorded in Table 2 and shows that SmartNote is widely applicable. Compared to existing tools, SmartNote does not fail to generate release notes for any project, while DeepRelease and Conventional Changelog fail approximately 10% and 54% of the time, respectively. These failures can be attributed to several main limitations. DeepRelease is only able to process PRs (e.g., bevy between versions v0.14.0 and v0.14.1 [Engine 2024]). Conventional Changelog is unable to process off-tree commits, i.e., changes that are not between the two specified versions (e.g., questdb [QuestDB 2024a]). Also, where labelling and commit conventions are lacking, it does not produce any output (e.g., flatbuffers between versions v24.3.7 and v24.3.25 [Google 2024]). Result of EQ2: SmartNote successfully generates release notes for all projects, while DeepRelease fails for approximately 10% of projects due to its stringent PR requirements, and Conventional Changelog fails approximately 54% of the time due to its rigid commit convention requirement. # 4.3 EQ3: How Effective is SmartNote's Personalisation?
To better understand how SmartNote's contextually aware, personalised generation pipeline contributes to the quality of release notes, we conducted an ablation study with both automatic and human evaluations, comparing SmartNote's release note with three different variants:
• Raw LLM: To understand the contribution of the LLM, we instruct it to generate a release note without any prompt engineering, guidelines, or examples, relying solely on its world knowledge and comprehension capabilities.
• No Composer: A release note generated by SmartNote without the RN Composer stage, a key component of its personalisation capabilities.
• Random Context: A release note generated by SmartNote with a randomly selected project domain and an "Unknown" release type.
For the human evaluation, we recruited a separate group of six participants, consisting of students and industry professionals, to rank release notes based on the four key aspects previously discussed: completeness, clarity, conciseness, and organisation. To mitigate author bias between the previous survey and the ablation, we asked participants in both groups to score the generated release note for both the Raw LLM and the Random Context variant. Furthermore, to ensure a fair comparison, we applied weighted normalisation to our results, shown in Table 3. The results of the human evaluation for the Raw LLM variant show that participants consistently rated it lower across all metrics: completeness (4.00 vs. 2.90), clarity (4.06 vs. 3.12), conciseness (3.35 vs. 2.91), and organisation (4.10 vs. 3.04), confirming the significance of our prompt engineering efforts. Further examination revealed: 1) Inconsistent Styles and Structures: E.g., a long, plain list of pull request titles and files changed after the organised list of changes for Manticore-Search v6.3.4 [2024].
2) Verbose Results for Simple Changes: E.g., a verbose 36-line release note for AKShare v1.14.62 [2024], despite the change involving a single, simple code modification. 3) Minimal Technical Details and Practical Implications: E.g., "various code quality improvements and refactoring for better maintainability and readability" in a release note generated for QuestDB v8.1.0 [2024b]. In comparison, the automatic evaluation yields mixed results. The commit coverage is lower, aligning well with the human evaluation and explaining why the token count is considerably lower than for the other variants: the LLM sacrifices coverage for brevity. While at first glance the information entropy indicates good organisation, we observe that an excessive information entropy score is not ideal in real-world applications, due to inconsistent categorisation and excessive granularity, e.g., in Zulip v9.1 [2024], where most commits would traditionally be categorised as documentation updates. Table 3. Human and automatic evaluation results for the ablation study. Proc. ACM Softw. Eng., Vol. 2, No. FSE, Article FSE075. Publication date: July 2025. Next, we examine the human evaluation for the No Composer and Random Context variants. As shown in Table 3, they similarly exhibit lower completeness and clarity, confirming that contextual understanding plays a significant role in generating more comprehensive and clearer release notes. Conciseness, however, presents a similar pattern to SmartNote. While it is relatively lower across all variants, it still outperforms most baselines except for the original, human-written release notes (65%), suggesting that release note authors tend to prioritise brevity by omitting details. These results indicate that the MST needs to be fine-tuned on a project-by-project basis to achieve a balance between commit coverage and conciseness.
On the other hand, organisation remains one of SmartNote's strongest aspects, with the No Composer variant, the worst-performing one, still achieving 63%, even without full contextual understanding. We attribute this to the LLM's world knowledge and code comprehension capabilities. In contrast, the automatic evaluation results indicate that the No Composer and Random Context variants perform comparably to SmartNote. However, they do not align with the human evaluation, which indicates that while these release notes perform well on automated metrics, they fail to capture the nuances of human-written release notes, unlike SmartNote, which benefits from prompt engineering. Result of EQ3: To assess the impact of context awareness on release note quality, we conducted an ablation study with both automatic and human evaluations, comparing SmartNote's generated release note with three variants: Raw LLM (the changes are simply fed to an LLM without any prompt engineering, guidelines, or examples), No Composer (without the composition stage), and Random Context (random project domain). The human evaluation of the Raw LLM, No Composer, and Random Context variants revealed that they are overly verbose and inconsistent. The automatic evaluation for the Raw LLM aligns with these findings. However, while the automatic evaluation of the No Composer and Random Context variants indicated metrics within the margin of SmartNote's, they miss the nuance of human-written release notes captured by SmartNote, which we attribute to prompt engineering and context awareness. In summary, these findings highlight that context comprehension is key to SmartNote's effectiveness, particularly in completeness and clarity. # 5 Limitations In this section, we discuss the limitations of our study, highlighting factors that may affect its validity to guide future research. We cover two factors: 1) internal validity, and 2) external validity.
# 5.1 Internal Validity This concerns factors internal to the study that could prevent it from accurately measuring what it intends to. Manually labelling the project domains in the dataset for training the classifiers may introduce author bias. To mitigate this, two authors independently examined and labelled the data, and inconsistencies were discussed and resolved with a consensus. Moreover, the variability of classifier accuracy across different projects (e.g., lower accuracy in projects with simple changes) could affect the results of the evaluation. For most projects in the evaluation set, the classifier's accuracy is approximately 0.6, which is acceptable given that a random classifier would achieve less than 0.1. However, for repositories where changes are simple and maintainers are not inclined to write meaningful release notes (e.g., AgentGPT), the classifier's accuracy can drop to around 0.3. To address this, we employ a range of different projects in our evaluation and recommend that maintainers adopt commit message standards and avoid writing confusing or shorthand commit messages. Automated commit message generators [Li et al. 2024; Zhang et al. 2024] can be employed to enhance the quality of commit messages and release notes. Additionally, the assessment of the release notes in our survey may be influenced by personal preferences and experience, which could affect the objectivity of the evaluation. This challenge is exacerbated by the absence of standardised evaluation criteria, potentially introducing bias into subjective judgements. To address this, a wide range of industry developers and PhD students were invited to participate in the evaluations, aiming to reduce bias through varied perspectives and enhance the overall objectivity and credibility of the assessment. # 5.2 External Validity This concerns the generalisability of the findings of this study.
First, SmartNote has only been tested on OpenAI's "gpt-4o" model. However, the results of this paper should generalise to other top-tier LLMs (e.g., Claude, Gemini, Qwen) with another round of prompt engineering; we were unable to verify this because of the high cost of LLM inference. Second, the human evaluation was performed on a relatively small evaluation set of 23 open-source projects, which may cast doubt on generalisability. However, the scale of the evaluation is limited by the developer hours and communication effort required to conduct the survey. Additionally, we ensured a diverse range of projects and participants: projects of varying domains, sizes, and popularity, and participants of various backgrounds, were involved in the study. Moreover, many off-the-shelf tools have features and quality-of-life aspects that maintainers may have become accustomed to (e.g., first-time contributors). The absence of these features decreases the broad applicability of SmartNote.
The release note is a crucial document outlining changes in new software versions. Yet, many developers view the process of writing software release notes as a tedious and dreadful task. Consequently, numerous tools have been developed by researchers and practitioners to automate the generation of software release notes. However, these tools fail to consider the project domain and target audience for personalisation, limiting their relevance and conciseness. Additionally, they suffer from limited applicability, often necessitating significant workflow adjustments and adoption efforts, hindering practical use and stressing developers. Despite recent advancements in natural language processing and the proven capabilities of large language models in various code and text-related tasks, there are no existing studies investigating the integration and utilisation of LLMs in automated release note generation. Therefore, we propose SmartNote, a novel and widely applicable release note generation approach that produces high-quality, contextually personalised release notes using LLM technology. SmartNote aggregates changes and uses an LLM to describe and summarise the changes using code, commit, and pull request details. It categorises and scores commits to generate structured and concise release notes of prioritised changes. Our human and automatic evaluations reveal that SmartNote outperforms or achieves comparable performance to DeepRelease, Conventional Changelog, and the projects' original release notes across four quality metrics: completeness, clarity, conciseness, and organisation. In both evaluations, SmartNote ranked first for completeness and organisation, while clarity ranked first in the human evaluation. A further evaluation demonstrates that SmartNote is effective in terms of context awareness and applicability.
# 1 Introduction Recent open-source work like DeepSeek R1 [Guo et al., 2025] has highlighted Reinforcement Fine-Tuning (RFT) with verifiable rewards as a promising approach for improving large models such as OpenAI's o1 [Jaech et al., 2024]. While these techniques have shown success in text-based models, their application to Vision-Language (VL) models remains underexplored, despite the growing importance of VL reasoning models in multimodal AI. The Vision-Language Reward Model (VL-RM) [Li et al., 2024a, Sun et al., 2024, Li et al., 2023], also referred to as a vision-language verifier, plays a crucial role in refining VL reasoning models [Liu et al., 2025] by providing structured feedback to enhance response quality. The success of RFT in this domain depends on feedback quality, i.e., the accuracy of the VL-RM, highlighting the need for improved VL-RM training strategies [Bradley and Terry, 1952]. As illustrated in Figure 1, we identified two primary challenges for the training of VL-RMs. Bootstrapping Pitfalls and the "Ouroboros-Like" Challenge. To minimize the need for extensive manual annotation, most Vision-Language Model (VLM) [Pi et al., 2024a, Li et al., 2023, Yu et al., 2024a] and Reward Model (RM) [Rafailov et al., 2023, Yuan et al., 2024, Xiong et al., 2024a, Lee et al., 2023] training methods rely on larger, more powerful VLMs for bootstrapping, where stronger models generate or label data [Zelikman et al., 2022, Dong et al., 2023, 2024, Chen et al., 2024a]. However, this creates a fundamental tail-eating-snake dilemma: high-quality data is essential for training strong VLMs, yet strong VLMs are needed to produce high-quality data. Breaking out of this "ouroboros-like" cycle requires introducing new expertise or external knowledge sources [Pi et al., 2024b, Chiu et al., 2025], as relying solely on self-generated data risks reinforcing the model's existing biases and limitations. Inherited Modality Bias in RM Training and Negative Example Amplification.
Negative responses are essential in RM training [Bradley and Terry, 1952, Zhang et al., 2024a, Zhong et al., 2024, Yang et al., 2024a], as they provide contrastive supervision [Wang et al., 2024a] that helps refine a model’s evaluation capabilities. However, in VL-RM training and inference, the inherent misalignment between text and images introduces compounding challenges [Zhou et al., 2024, Pi et al., 2024a]. The process is effectively multi-turn: first, the Vision-Language Model (VLM) generates a response, and then the VL-RM evaluates it. Unfortunately, any cross-modal bias introduced in the first turn becomes “baked into” the negative examples used for direct evaluation in the second turn, potentially leading to inherited modality bias and negative transfer. For instance, VLMs frequently hallucinate nonexistent objects, misidentify attributes such as shape or color, or provide incorrect object counts. Ideally, these errors should be corrected by the VLM itself, yet the VL-RM is still required to assess such flawed responses. Classical discriminative reward models [Bradley and Terry, 1952] and direct generative RMs [Zhang et al., 2024a] typically rely on simple pairwise annotations (“Yes/No”), making them more prone to “negative example amplification.” A real-world analogy is seen in language learners: if their textbooks contain numerous incorrect grammar examples, they may internalize these errors rather than learn the correct forms, ultimately propagating these misuses. Over multiple interactions, chain-of-thought (CoT) [Wei et al., 2022, Pang et al., 2024] rationales and accumulated context become critical in mitigating such biases. Motivated by the challenges above, we introduce a novel training recipe to tackle these issues in training VL-RMs. On the data side, we propose incorporating specialized visual knowledge [Liu et al., 2023a, Wu et al., 2019] and chain-of-thought rationales [Zhang et al., 2024a] to provide more constructive guidance. 
On the training side, we employ preference-learning techniques [Schulman et al., 2017, Ouyang et al., 2022] drawn from reinforcement learning (RL) in an iterative [Dong et al., 2023, Zelikman et al., 2022, Touvron et al., 2023] fashion, allowing the model’s generation to adapt toward more preferred outputs across multiple rounds. This RL-based method has already proven successful in large language models [OpenAI, 2024, Bai et al., 2022, Google, 2023], and one of our primary goals is to extend these techniques to address the unique challenge of aligning different modalities in a VL-RM setting. Thus, we make the following contributions: • Automated Construction of Preference Datasets via Vision Experts. We leverage vision experts to generate large-scale preference datasets, improving supervision quality in our VL-GenRM. • CoT-Enhanced VL-GenRM Training. We incorporate chain-of-thought rationale generation techniques to systematically guide VL-GenRM training. This structured reasoning process increases the proportion of effectively correct descriptions in the dataset, mitigating the limitations of self-generated data and reinforcing coherent reward modeling. • Iterative Bootstrapping with Margin-based Rejection Sampling. We refine VL-GenRM’s reasoning through iterative fine-tuning on successful rationales, which are selected via the margin between the reward signals of positive and negative examples. • Comprehensive Evaluation. We validate our approach across VL-RM benchmarks and Best-of-N sampling, demonstrating improved multimodal reasoning and evaluation. Figure 1: Illustration of the Vision-Language Reward Model (VL-RM) pipeline, highlighting key challenges in training. Issue 1 (Bootstrapping Pitfalls): The VLM generates responses based on self-produced data, leading to potential hallucinations (e.g., the nonexistent “bicycle”). 
While such errors can be mitigated using object detection expert models, relying solely on self-generated supervision risks reinforcing biases. Issue 2 (Inherited Modality Bias & Negative Example Amplification): The VL-RM evaluates these flawed responses, but biases from the first round persist, as shown by the confused agent icon in the second-generation step. Without structured reasoning or external supervision, these biases can be amplified rather than corrected, highlighting the need for improved alignment strategies. # 2 Related Work Vision-Language Modeling. Recent progress in vision-language models (VLMs) stems from integrating large language models (LLMs) with vision encoders via adaptation layers [Liu et al., 2023b, Dai et al., 2023]. Key advancements focus on (1) curating high-quality multimodal datasets [Zhang et al., 2024b, Chen et al., 2023], (2) improving architectures for enhanced pixel-text reasoning [Li et al., 2024b], and (3) optimizing training with reinforcement learning from human feedback (RLHF) to mitigate hallucinations [Zhou et al., 2024, Pi et al., 2024a]. However, robust reward modeling remains a challenge. We explore VLM-based reward training to enhance structured evaluation and reasoning. Specialized Vision Expert Models. Vision expert models specialize in object detection [Girshick, 2015, Yao et al., 2021, Carion et al., 2020] and depth estimation [Kim et al., 2022, Yang et al., 2024b], enabling precise visual understanding. Recent work [Gu et al., 2021, Liu et al., 2023a, Yao et al., 2024] shows their effectiveness in specialized tasks. We leverage vision experts to improve object-level verification, refining multimodal reasoning and reward modeling. Reward Models. Reward models (RMs) are essential in reinforcement learning from human feedback (RLHF) and preference-based optimization [Bradley and Terry, 1952, Rafailov et al., 2023]. 
Traditional RMs use binary classification or preference modeling to rank responses [Ouyang et al., 2022], with early improvements focusing on better preference data and token-wise dense rewards [Pi et al., 2024a, Lee et al., 2023, Zhong et al., 2024]. Recent work explores diverse RM types, such as outcome-based and process-based models [Lightman et al., 2023, Zhang et al., 2024c, Wang et al., 2023]. Generative reward models (GenRMs) [Zhang et al., 2024a] leverage token probabilities instead of classification scores, aligning with LLMs’ generative nature and enabling Chain-of-Thought (CoT) reasoning [Wei et al., 2022]. Additionally, LLM-as-a-judge [Zheng et al., 2023] eliminates separately trained RMs, while Direct Preference Optimization (DPO) [Rafailov et al., 2023] aligns models with human preferences without explicit rewards. Despite progress in text-based RMs, vision-language reward models (VL-RMs) remain underexplored [Li et al., 2024a], facing challenges in visual grounding, hallucination detection, and structured reasoning. Early efforts like VLFeedback [Li et al., 2023] and LLaVA-Critic [Xiong et al., 2024b] introduce multimodal preference datasets and critique-based training. Our work advances this area by developing a generative VL-RM with iterative optimization, vision expert integration, and Best-of-N selection to improve multimodal reasoning consistency. Iterative RL. Iterative reinforcement learning (RL) refines reward models through human-in-the-loop feedback. Proximal Policy Optimization (PPO) [Schulman et al., 2017] is central to RLHF, iteratively improving response quality [Ouyang et al., 2022]. Direct Preference Optimization (DPO) [Rafailov et al., 2023] simplifies PPO-based RLHF by reformulating it as an offline optimization problem. Beyond PPO, rejection sampling methods like STaR [Zelikman et al., 2022] and RAFT [Dong et al., 2023] enhance preference learning by filtering suboptimal responses. 
Recently, Iterative DPO [Xiong et al., 2024a, Dong et al., 2024] has gained traction, with variants like Pairwise Cringe Loss [Xu et al., 2023] and ReST [Gulcehre et al., 2024] refining preference learning iteratively. SPIN [Chen et al., 2024b] extends DPO by integrating human-labeled winning responses, while Self-Rewarding LLMs [Yuan et al., 2024] generate preference pairs for better instruction following, though with limited reasoning gains. While widely applied to text-based models, iterative RL for Vision-Language Models (VLMs) remains underexplored. Our work pioneers iterative RL in VL-RM training, incorporating vision experts and multimodal reasoning enhancements to improve preference learning in multimodal contexts. # 3 Background An autoregressive vision-language model generates an output sequence $\mathbf { y } = ( y _ { 1 } , y _ { 2 } , \dots , y _ { T } )$ given an input image $I$ and an input context $\mathbf { x }$ (e.g., a textual description or question) by predicting tokens one at a time (i.e., next token prediction), based on the previously generated tokens. Assuming that the model is parameterized by $\theta$ , the conditional probability distribution of generating a sequence y given $I$ and $\mathbf { x }$ is # Query: What is the man doing in the image? # [Positive Response] The man in the image is paddling or poling a small covered boat down a river. He is using a pole or oar to propel the boat through the water. # [Rationale] The response accurately describes the man's activity in the image. There are no crucial or obvious errors in the response. [Verification] Is the answer correct (Yes/No)? Yes # [Negative Response] The man in the image is standing on a surfboard, riding a wave in the ocean. He is holding onto a surfboard with both hands and standing upright on the board as it is carried along by the wave. # [Rationale] The response incorrectly identifies the man as standing on a surfboard and riding a wave in the ocean. 
The man is actually sitting in a boat, not a surfboard, and he is not riding a wave. [Verification] Is the answer correct (Yes/No)? No Figure 2: An example of pairwise VL-GenRM training data. $$ p_{\theta}(\mathbf{y} \mid I, \mathbf{x}) = \prod_{t=1}^{T} p_{\theta}(y_t \mid I, \mathbf{x}, \mathbf{y}_{<t}) $$ with $\mathbf{y}_{<t} = (y_1, y_2, \dots, y_{t-1})$. For ease of notation, we define $p_{\theta}(y_t \mid I, \mathbf{x}) := p_{\theta}(y_t \mid I, \mathbf{x}, \mathbf{y}_{<t})$. Vision-Language Reward Modeling. The vision-language (VL) reward model $r_{\theta}$ assigns a score to a given input to assess the quality of the response $y$: $$ r_{\theta}(I, \mathbf{x}, y) = f_{\theta}(I, \mathbf{x}, y), $$ where $f_{\theta}(\cdot)$ is a learnable scoring function, typically implemented using a deep neural network. The training dataset $\mathcal{D}_{\mathrm{VLRM}}$ consists of tuples containing an image, a context, and both preferred and rejected responses: $$ \mathcal{D}_{\mathrm{VLRM}} = \{ (I, \mathbf{x}, y^{+}, y^{-}) \}, $$ where $I$ is the input image, $\mathbf{x}$ is the input context (e.g., a question or description), $y^{+}$ is the preferred response selected by humans, and $y^{-}$ is the less preferred or incorrect response. 
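As a concrete illustration of the autoregressive factorization above, the sequence log-probability is just the sum of per-token conditional log-probabilities. A minimal sketch in plain Python with toy logits (the function names are ours, not the paper's):

```python
import math

def log_softmax(logits):
    # numerically stable log-softmax over a toy vocabulary
    m = max(logits)
    z = m + math.log(sum(math.exp(l - m) for l in logits))
    return [l - z for l in logits]

def sequence_logprob(step_logits, token_ids):
    # log p_theta(y | I, x) = sum_t log p_theta(y_t | I, x, y_<t)
    return sum(log_softmax(logits)[t] for logits, t in zip(step_logits, token_ids))

# two decoding steps over a 2-token vocabulary with uniform logits:
# each conditional is 0.5, so the sequence probability is 0.25
logp = sequence_logprob([[0.0, 0.0], [0.0, 0.0]], [0, 1])
assert math.isclose(math.exp(logp), 0.25)
```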
Generally, reward modeling for vision-language models follows the Bradley-Terry (BT) model, which aims to distinguish between the chosen response $y^{+}$ and the rejected response $y^{-}$ given an image $I$ and an input context $\mathbf{x}$: $$ \mathcal{L}_{\mathrm{reward}}(\theta) = - \mathbb{E}_{(I, \mathbf{x}, y^{+}, y^{-}) \sim \mathcal{D}} \left[ \log \sigma \Big( r_{\theta}(I, \mathbf{x}, y^{+}) - r_{\theta}(I, \mathbf{x}, y^{-}) \Big) \right]. $$ Generative Reward Modeling (GenRM). GenRM formulates verification as a token prediction task, where a VLM learns to predict correctness labels given an image $I$, input context $\mathbf{x}$, and response $y$. The training dataset consists of labeled problem-solution pairs: $$ \mathcal{D}_{\mathrm{GenRM}} = \{ ((\mathbf{x}, y^{+}, I), p, \text{“Yes”}) \} \cup \{ ((\mathbf{x}, y^{-}, I), p, \text{“No”}) \}, $$ where $p$ is a fixed prompt (“Is the most recent final answer correct (Yes or No)?”) that instructs the model to verify the correctness of $y$. At inference, the correctness likelihood is used as the model’s confidence score: $$ r_{\mathrm{GenRM}}(\mathbf{x}, y, I) = p_{\theta}(\text{“Yes”} \mid \mathbf{x}, y, I, p). $$ This approach enables direct verification through token probabilities via instruction tuning. The training objective of GenRM is the Supervised Fine-Tuning (SFT) loss: $\mathcal{L}_{\mathrm{GenRM}} = \mathcal{L}_{\mathrm{SFT}}$. For the formulations of SFT, DPO, VLM-as-a-Judge, and BoN, please refer to Appendix A. 
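The two scoring schemes above can be sketched numerically: the BT loss penalizes a small margin between the rewards of the chosen and rejected responses, while the GenRM score is simply the probability of the “Yes” token. A minimal sketch in plain Python (function names are ours, not the paper's):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bt_reward_loss(r_pos, r_neg):
    # L_reward = -E[ log sigma( r(I,x,y+) - r(I,x,y-) ) ], averaged over a batch
    pairs = list(zip(r_pos, r_neg))
    return -sum(math.log(sigmoid(p - n)) for p, n in pairs) / len(pairs)

def genrm_score(yes_logprob):
    # r_GenRM(x, y, I) = p_theta("Yes" | x, y, I, p)
    return math.exp(yes_logprob)

# equal rewards give the chance-level loss log 2; a large margin drives it toward 0
assert math.isclose(bt_reward_loss([1.0], [1.0]), math.log(2.0))
assert bt_reward_loss([5.0], [-5.0]) < bt_reward_loss([1.0], [1.0])
assert math.isclose(genrm_score(math.log(0.9)), 0.9)
```

Note the design difference the section describes: the BT loss needs a scalar reward head, whereas the GenRM score reuses the model's own token distribution, so no separate head is required.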
# 4 Data Collection In this section, we describe our two-phase data preparation approach for VL-GenRM: Pairwise Data Generation and Chain-of-Thought (CoT) Generation and Verification. To ensure high-quality training data, we incorporate vision experts for object detection and verification, refining rejected responses to enhance preference datasets (Figure 3). # 4.1 Pairwise Data Generation The data generation framework for VL-GenRM follows a structured three-step process to ensure high-quality negative responses for training. Given the original dataset $\mathcal{D}_0 = \{(I, X, Y^{+})\}$, our goal is to construct a dataset $\mathcal{D}_{\mathrm{pair}}$ that consists of image-query-response pairs with both accurate positive responses and refined negative responses: $$ \mathcal{D}_{\mathrm{pair}} = \{ (I, X, Y^{+}, Y_{\mathrm{new}}^{-}, \hat{y}) \} $$ where $I$ is the image, $X$ is the input query, $Y^{+}$ is the chosen response, $Y_{\mathrm{new}}^{-}$ is the refined negative response, and $\hat{y} \in \{\mathrm{Yes}, \mathrm{No}\}$ is the binary label indicating correctness. To achieve this, we follow three key steps: (1) Negative Response Collection, where a weak VLM generates incorrect responses; (2) Vision Expert Filtering, which verifies hallucinated responses by checking object presence in images; and (3) Refinement and Augmentation, where false rejections are corrected and negative responses are modified to improve response diversity. This pipeline ensures that pairwise samples are realistic, visually grounded, and semantically refined, enhancing VL-GenRM’s reward modeling capabilities. # 4.1.1 Negative Response Collection In the Negative Response Collection phase, we start from the initial dataset $\mathcal{D}_0 = \{(I, X, Y^{+})\}$, where $I$ is an image, $X$ is the query, and $Y^{+}$ is a chosen response. 
To introduce contrastive supervision, we generate negative responses using a weak vision-language model (VLM), producing $Y^{-} = f_{\mathrm{weak\text{-}VLM}}(I, X)$. These responses serve as plausible but incorrect samples, mimicking common errors made by weaker models. However, some responses may contain near-correct answers, requiring additional filtering. # 4.1.2 Vision Expert Filtering In the Vision Expert Filtering phase, we eliminate hallucinated negatives by verifying object consistency. First, we extract the objects mentioned in the generated negative response: $\mathcal{O}(Y^{-}) = f_{\mathrm{VLM}}(Y^{-})$. Then, we detect the actual objects in the image using an object detector (OD): $\mathcal{O}^{*}(I) = f_{\mathrm{OD}}(I)$. If $\mathcal{O}(Y^{-}) \nsubseteq \mathcal{O}^{*}(I)$, the response is labeled as a hallucination and retained. For the remaining responses in the dataset, we perform Refinement and Augmentation. # 4.1.3 Refinement and Augmentation In the Refinement and Augmentation phase, we correct false rejections and generate modified negatives to improve diversity. 
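The object-consistency check of the Vision Expert Filtering step (Section 4.1.2) reduces to a set-containment test. A minimal sketch with toy stand-ins for the VLM object extractor and the object detector (names and example objects are ours):

```python
def is_hallucinated(mentioned_objects, detected_objects):
    # A response is kept as a hallucinated negative when it mentions
    # objects the detector cannot find: O(Y-) not a subset of O*(I)
    return not set(mentioned_objects) <= set(detected_objects)

# toy outputs of f_VLM (object extraction) and f_OD (object detection)
mentioned = {"giraffe", "lion"}          # objects named in the negative response
detected = {"giraffe", "tree", "grass"}  # objects actually found in the image
assert is_hallucinated(mentioned, detected)        # "lion" is not in the image
assert not is_hallucinated({"giraffe"}, detected)  # fully grounded response
```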
First, we determine whether a rejected response is approximately correct by extracting objects from the image: $\mathcal{O}_I = g_{\mathrm{OD}}(I)$. A stronger VLM then evaluates whether $Y^{-}$ is a valid response: $$ \hat{y} = f_{\mathrm{VLM}}(I, X, Y^{-}, \mathcal{O}_I), \quad \hat{y} \in \{\mathrm{Yes}, \mathrm{No}\}. $$ If correct, $Y^{-}$ is flagged for replacement instead of rejection. Next, to diversify negative responses, we modify object mentions in $Y^{+}$ by selecting two objects: $\mathcal{O}_{\mathrm{sampled}}(Y^{+}) \subseteq \mathcal{O}(Y^{+})$. A new negative response is then generated by altering these objects: $Y_{\mathrm{new}}^{-} = f_{\mathrm{VLM}}(\mathcal{O}_{\mathrm{sampled}}, Y^{+})$. The final dataset $\mathcal{D}_{\mathrm{pair}} = \{(I, X, Y^{+}, Y_{\mathrm{new}}^{-}, \hat{y})\}$ ensures that negative responses remain semantically valid yet distinct, strengthening the contrastive learning signal in VL-GenRM. Figure 3: The pipeline starts with a hallucinated negative response, like misidentifying a “lion” in an image. A vision expert verifies objects, filters errors, refines the response, and generates rationales to enhance training data. # 4.2 Chain-of-Thought (CoT) Rationale Generation Given the dataset $\mathcal{D}_{\mathrm{pair}} = \{(I, X, Y^{+}, Y_{\mathrm{new}}^{-}, \hat{y})\}$ from the previous stage, the goal of the CoT rationale generation process is to construct a dataset $\mathcal{D}_{\mathrm{train}}$ that provides structured reasoning rationales alongside response pairs, enabling VL-GenRM to better assess correctness and improve interpretability. 
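The augmentation step of Section 4.1.3 (sampling objects mentioned in $Y^{+}$ and altering them) can be sketched as a simple substitution; in the paper a VLM performs the rewrite, whereas here a lookup table of distractor objects stands in for it, so the helper name and swap table are illustrative only:

```python
import random

def make_swapped_negative(positive, objects, swap_table, k=2, seed=0):
    # Sample up to k objects mentioned in the positive response and replace
    # each with a distractor, turning a correct response into a close negative.
    rng = random.Random(seed)
    chosen = rng.sample(objects, min(k, len(objects)))
    negative = positive
    for obj in chosen:
        negative = negative.replace(obj, swap_table[obj])
    return negative

pos = "A giraffe eats leaves near a small boat."
neg = make_swapped_negative(pos, ["giraffe", "boat"],
                            {"giraffe": "zebra", "boat": "surfboard"})
assert neg != pos  # the negative stays fluent but is no longer grounded
```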
The final dataset is formulated as: $$ \mathcal { D } _ { \mathrm { t r a i n } } = \{ ( I , X , Y , c , \hat { y } ) \} $$ where $I$ is the image, $X$ is the query, $Y = \{ Y ^ { + } , Y ^ { - } \}$ , $Y ^ { + }$ and $Y ^ { - }$ are the chosen and rejected responses, respectively, $c = \{ c ^ { + } , c ^ { - } \}$ , $c ^ { + }$ and $c ^ { - }$ are the CoT rationales explaining why each response is correct or incorrect, and $\hat { y } \in \{ \mathrm { Y e s } , \mathrm { N o } \}$ is the binary label. To generate CoT rationales, we use a strong vision-language model (VLM) to produce step-by-step reasoning explanations for both responses: $$ c ^ { + } = f _ { \mathrm { V L M } } ( I , X , Y ^ { + } , \mathcal { O } _ { I } ) , \quad c ^ { - } = f _ { \mathrm { V L M } } ( I , X , Y ^ { - } , \mathcal { O } _ { I } ) $$ where $\mathcal { O } _ { I }$ represents detected objects in the image. These rationales help the model learn to justify its reward assignment, improving consistency in evaluating correctness. To enhance data quality, we apply selective rationale filtering, prompting the model to focus only on missing key objects and critical errors, thereby reducing unnecessary hallucinations. Additionally, we introduce external dataset augmentations, generating multiple responses per question using a smaller VLM to increase reasoning diversity. If no incorrect responses are naturally found, we inject random incorrect answers to maintain a balanced dataset. # 5 Training Framework As shown in Figure 4, following standard post-training practices for large models, we adopt a two-stage training framework to optimize VL-GenRM: (1) Rewarding instruction-following fine-tuning, where the model learns structured reward modeling from CoT rationales, and (2) Iterative optimization, where the model refines itself via self-generated reasoning and reward alignment. 
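The CoT-augmented records of Section 4.2 pair each response with a rationale and a Yes/No label; one possible record layout (the field names are ours, not the paper's):

```python
def build_cot_records(image_id, query, y_pos, y_neg, c_pos, c_neg):
    # D_train = {(I, X, Y, c, y_hat)}: one "Yes" record for the chosen
    # response and one "No" record for the rejected response.
    return [
        {"image": image_id, "query": query, "response": y_pos,
         "rationale": c_pos, "label": "Yes"},
        {"image": image_id, "query": query, "response": y_neg,
         "rationale": c_neg, "label": "No"},
    ]

records = build_cot_records(
    "img_001", "What is the man doing?",
    "He is paddling a small boat.", "He is riding a surfboard.",
    "The response matches the scene.", "No surfboard is present in the image.")
assert [r["label"] for r in records] == ["Yes", "No"]
```

Storing the rationale alongside the label is what lets the IFT stage train the model to emit reasoning before its verdict, rather than a bare Yes/No.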
# 5.1 Rewarding Instruction-Following Fine-Tuning To develop an initial reward model, we train VL-GenRM on a structured dataset containing both positive and negative reasoning responses with corresponding CoT rationales. The training dataset is $\mathcal{D}_{\mathrm{train}} = \{(I, X, Y, c, \hat{y})\}$. Rewarding instruction-following learning (IFT). We structure this training as an instruction-following task, where the model is required to assess a response and generate a reasoning-based evaluation. Specifically, we train the model to first generate a CoT rationale and then output a structured binary evaluation. From $\mathcal{D}_{\mathrm{train}}$, we extract all problem-solution pairs with correctness tokens as $\mathcal{D}_{\mathrm{correct}} = \{(I, X, Y^{+}, c^{+}, \hat{y})\}$. Following [Zhang et al., 2024a], the training objective of this IFT stage is: $$ \mathcal{L}_{\mathrm{IFT}}(\theta, \mathcal{D}_{\mathrm{train}}) = \mathcal{L}_{\mathrm{SFT}}(\theta, \mathcal{D}_{\mathrm{train}}) + \lambda \mathcal{L}_{\mathrm{SFT}}(\theta, \mathcal{D}_{\mathrm{correct}}), $$ where $\lambda > 0$ is a hyperparameter that controls the mixture ratio between verification ($\mathcal{D}_{\mathrm{train}}$) and generating correct solutions ($\mathcal{D}_{\mathrm{correct}}$). This unified training can improve both verification and generation performance, allowing VL-GenRM to develop structured reward alignment. Figure 4: Iterative Training Pipeline of VL-GenRM. 
The training consists of two stages: (1) Reward Initialization: given raw data $(x, I, y)$, a stronger VLM detects correctness and generates refined annotations $(x, I, y_{\mathrm{new}}, \hat{y})$. Contrastive pairs are constructed to form the structured dataset $\mathcal{D}_{\mathrm{train}}$, which is used for instruction-following fine-tuning ($\mathcal{L}_{\mathrm{IFT}}$). (2) Iterative Refinement: VL-GenRM generates candidate rationales for both positive and negative responses, which are verified against reference outputs. A Margin-based Rejection Sampling strategy filters the most informative rationales $(c^{*}, \hat{y}^{*})$, refining the dataset $\mathcal{D}_{\mathrm{iter}}$ for continued fine-tuning. This iterative approach enhances reward alignment, mitigates hallucinations, and improves multimodal reasoning performance. # 5.2 Iterative Optimization Once an initial VL-GenRM is trained, we apply an iterative optimization strategy to further refine the model’s ability to assess correctness and reasoning consistency. This process involves self-generated CoT rationales, Margin-based Rejection Sampling, and IFT. The iterative training dataset is defined as: $$ \mathcal{D}_{\mathrm{iter}} = \{ (I, X, Y^{+}, Y^{-}, c_{+}^{*}, c_{-}^{*}) \} $$ where $c_{+}^{*}$ and $c_{-}^{*}$ are the most informative rationales selected from multiple generated rationales. At the $t$-th iteration, the model undergoes the following steps: 1. Generating CoT Rationales: The model simultaneously generates reasoning rationales for both the positive and the negative response: $$ c^{+} = f_{\mathrm{RM}}(I, X, Y^{+}), \quad c^{-} = f_{\mathrm{RM}}(I, X, Y^{-}) $$ 2. 
Margin-based Rejection Sampling: We select the most informative rationales for both positive and negative responses using a margin-based scoring function [Touvron et al., 2023]: $$ m(c_i^{+}, c_i^{-}) = \operatorname{score}(c_i^{+}) - \operatorname{score}(c_i^{-}), $$ 3. Data Augmentation and Filtering: We set a margin threshold range $[\lambda_l, \lambda_r]$. The rationale pairs whose margin score $m(c_i^{+}, c_i^{-})$ falls within this range are kept, yielding a progressively refined dataset $\mathcal{D}_{\mathrm{iter}}^{*}$. To integrate these self-improving reasoning trajectories into VL-GenRM, we apply lightweight fine-tuning using Low-Rank Adaptation (LoRA) [Hu et al., 2022]. By leveraging self-correcting rejection sampling and iterative refinement, our approach enhances the model’s ability to perform structured reward learning, aligning CoT reasoning with robust verification capabilities. # 6 Implementation and Evaluation Settings # 6.1 Implementation For detailed implementation, training configurations, and inference hyperparameters, please refer to Appendix B. # 6.2 Evaluation Metrics and Benchmarks To fully validate the effectiveness of our method, we employ two evaluation strategies: (1) evaluating directly on VL-RewardBench following [Li et al., 2024a] and (2) using the trained VL-GenRM as a test-time verifier on LLaVA-Wild [Liu et al., 2023b] to assess Best-of-N accuracy. # 6.2.1 RewardBench We evaluate VL-GenRM following [Li et al., 2024a], assessing alignment, reasoning, and multimodal understanding. General Instruction Following. We sample 183 instances from WildVision [Lu et al., 2024a] and VLFeedback [Li et al., 2023]. WildVision covers diverse multimodal queries with human-verified annotations, while VLFeedback uses GPT-4V-based assessments with high-quality preference labels. Hallucination. 
For visual hallucination detection, we sample 749 examples from POVID [Zhou et al., 2024], RLAIF-V [Yu et al., 2024a], and RLHF-V [Yu et al., 2024b]. POVID introduces controlled noise for robustness testing, RLAIF-V employs automated verification, and RLHF-V refines annotations via human preference labels. Reasoning. We evaluate complex multimodal reasoning using 318 pairs from MMMU-Pro [Yue et al., 2024] and MathVerse [Zhang et al., 2024d]. MMMU-Pro assesses high-level multimodal inference across disciplines, while MathVerse focuses on visual mathematical reasoning tasks. # 6.2.2 Test-time Best-of-N accuracy To further assess the effectiveness of VL-GenRM, we evaluate its ability to serve as a test-time verifier by measuring Best-of-N accuracy. This approach follows prior works [Yang et al., 2024a, Liang et al., 2024, Hosseini et al., 2024], where we sample multiple candidate responses, rank them using the trained reward model, and select the highest-scoring response as the final answer. Here, we select Qwen2-VL-7B [Wang et al., 2024b], InternVL-4B [Chen et al., 2024c], and LLaVA-Next-8B [Liu et al., 2024] to generate $N$ responses and then employ the VL-RM to select the best of them. Formally, given an input $(I, X)$, we generate a set of $N$ candidate responses: $\mathcal{Y} = \{Y_1, Y_2, \ldots, Y_N\} \sim P(Y \mid I, X)$, where each $Y_i$ is sampled from the model’s response distribution. The reward model assigns a score $s_i$ to each response: $s_i = f_{\mathrm{RM}}(I, X, Y_i)$. We then select the response with the highest score: $Y^{*} = \arg\max_{Y_i \in \mathcal{Y}} s_i$. We evaluate the effectiveness of test-time BoN on the popular LLaVA-Wild [Liu et al., 2023b] benchmark. # 6.3 Baseline Models and Methods Evaluated Models. 
We evaluate four proprietary VLMs: GPT-4o [OpenAI, 2024], Gemini-1.5-Pro [Google, 2023], Claude-3.5-Sonnet [Bai et al., 2022] and Qwen-VL-Max [Bai et al., 2023]. For the open-source VLMs, we include models with parameters ranging from 7B to 90B: Llama-3.2 [Dubey et al., 2024], Molmo [Deitke et al., 2024], DeepSeek-VL [Lu et al., 2024b], Aria [Li et al., 2024c], MammoTH-VL [Guo et al., 2024], and Qwen2-VL [Wang et al., 2024b]. It is worth noting that Aria and DeepSeek-VL use Mixture-of-Experts LLMs. Evaluated Methods. We compare our approach against four vision-language reward modeling methods. BT-RM (Bradley-Terry Reward Model) optimizes a reward function via pairwise ranking, distinguishing preferred and rejected responses based on the Bradley-Terry model. VLM-as-a-Judge employs a vision-language model (VLM) to directly assess response quality, optionally comparing generated responses to reference answers. DPO (Direct Preference Optimization) reformulates preference learning as direct policy optimization, aligning response probabilities with human preferences without explicit reward modeling. Direct GenRM (Direct Generative Reward Modeling) trains a VLM to classify responses as correct or incorrect using token prediction, with correctness likelihood serving as the reward score. # 7 Experiment # 7.1 Comparison with other VLMs. As shown in Table 1, VL-GenRM (7B) achieves state-of-the-art performance among open-source VLMs in general QA and hallucination robustness, outperforming much larger models like LLaMA-3.2 (90B) and Molmo (72B). This highlights that a well-designed 7B model can surpass significantly larger counterparts in these areas. However, VL-GenRM lags in reasoning due to its object detection-based vision module, which improves general understanding and hallucination resistance but is less suited for abstract reasoning requiring fine-grained scene analysis. 
Overall, VL-GenRM demonstrates strong generalization and hallucination robustness despite its compact size, with room for improvement in reasoning through enhanced multi-modal fusion. Full results can be found in Appendix D. Table 1: Comparison of Proprietary and Open-Source VLMs. # 7.2 Comparison with other training methods. Table 2: Comparison of different training methods. We categorize reward models (RMs) based on whether they are generative (capable of directly generating reward scores from the “Yes/No” token) and whether they introduce additional test-time computation (Test Time) overhead. Generative RMs are preferable for better inference performance. Additionally, models that use extra test-time computation are more capable. Among the evaluated methods, VL-GenRM consistently outperforms the others and achieves the best overall performance. # 7.3 Test-time Best-of-N Evaluation Table 3 evaluates VL-GenRM under Best-of-N (BoN) accuracy, which measures its ability to act as a test-time verifier by selecting the best response among multiple candidates. This provides a more robust validation compared to single-response reward alignment. 
Across models like Qwen2.5-VL-7B, InternVL2.5-VL-4B, and LLaVA-Next-8B, VL-GenRM consistently outperforms BT-RM and Direct GenRM, excelling in both reward modeling and test-time verification. Reward IFT + Iteration further enhances performance, indicating that iterative refinement improves reward alignment. Additionally, results confirm that VL-GenRM is model-agnostic, demonstrating adaptability across architectures. Table 3: Performance improvement brought by the proposed training pipeline. Green arrow denotes the percentage improvement over the baseline without Best-of-N sampling (N=16). Overall, these findings highlight VL-GenRM as a scalable and effective reward modeling solution, improving generation quality while maintaining efficiency. # 7.4 Ablation Table 4: Performance comparison of different data augmentation strategies. Table 5: Performance of different training steps based on Qwen-VL-7B. Data Ablation. Table 4 compares different data augmentation strategies. “+ pair data” is the baseline without CoT rationales or vision expert verification, leading to the weakest performance. “+ verified pair” improves hallucination robustness but lacks test-time computation, limiting reasoning gains. We further explore CoT rationale generation. “+ descriptive CoT pair” fails due to inherited modality bias and negative example amplification. In contrast, “+ critique CoT pair” enables effective test-time computation, improving both reasoning and hallucination control. This validates that critique-based CoT is essential for reasoning-aware supervision. Readers can refer to Table 12 and Table 13 for the detailed prompts used for generating such critique/descriptive CoT. We also performed a data contamination analysis in Appendix C. The results show that the improvement is not merely due to including data from the same distribution. Training Method Ablation. 
Table 5 validates the effectiveness of our training design. “Reward IFT” significantly boosts reasoning and overall performance, while “Iteration 1” further enhances hallucination robustness. “Iteration 2” shows marginal gains, indicating saturation in our current setup. However, we expect further iterations to remain beneficial with larger models and datasets, highlighting the scalability of our approach.
Reinforcement Fine-Tuning (RFT) with verifiable rewards has advanced large language models but remains underexplored for Vision-Language (VL) models. The Vision-Language Reward Model (VL-RM) is key to aligning VL models by providing structured feedback, yet training effective VL-RMs faces two major challenges. First, the bootstrapping dilemma arises as high-quality training data depends on already strong VL models, creating a cycle where self-generated supervision reinforces existing biases. Second, modality bias and negative example amplification occur when VL models hallucinate incorrect visual attributes, leading to flawed preference data that further misguides training. To address these issues, we propose an iterative training framework leveraging vision experts, Chain-of-Thought (CoT) rationales, and Margin-based Rejection Sampling. Our approach refines preference datasets, enhances structured critiques, and iteratively improves reasoning. Experiments across VL-RM benchmarks demonstrate superior performance in hallucination detection and multimodal reasoning, advancing VL model alignment with reinforcement learning.
[ "cs.CL", "cs.CV" ]
# 1 Introduction Large Language Models (LLMs) have demonstrated impressive capabilities in software engineering (SWE) tasks, including code generation, debugging, and automated development workflows. Building on these capabilities, researchers have begun creating LLM-driven agents that interact with real codebases and development environments, performing actions and receiving feedback [Jin et al., 2025]. While frontier proprietary models drive the performance of the most competitive agents (e.g., OpenHands [Wang et al., 2024], Moatless Tools [Antoniades et al., 2024], Agentless [Xia et al., 2024]) on key benchmarks like SWE-bench [Jimenez et al., 2024], there exists a significant opportunity to enhance open-source models [Wang, 2025, Yang et al., 2025, Wang et al., 2025, Aggarwal et al., 2025, Ma et al., 2025, Wei et al., 2025, Golubev et al., 2024]. Progress in this direction, particularly toward complex agentic behaviors, may be accelerated with access to large-scale, high-quality training data that mirrors the interactivity inherent in real-world software development. Existing powerful open-source models like DeepSeek-V3 [DeepSeek-AI, 2024], LLaMa 4 [Meta AI, 2025] and Qwen3 [Team, 2025] could potentially be fine-tuned to achieve comparable performance in specific SWE domains, but this hinges on the availability of suitable interactive task data. Current approaches to training LLMs for programming often rely on code data from open-source repositories [Lozhkov et al., 2024] or synthetic instruction datasets [Wei et al., 2024] that are used for instruction tuning. However, training robust software engineering agents for real-world scenarios necessitates datasets that extend beyond simple code generation. To truly enable learning through methods like Reinforcement Learning (RL), which thrives on trial-and-error, agents require interactive tasks coupled with automatic verification mechanisms. 
Such data must allow agents to perform diverse actions, observe environment responses after each step, and receive eventual verification outcomes that determine task success. Unlike domains such as mathematics [Shao et al., 2024] or web navigation [Pan et al., 2024a], software engineering has historically lacked such large-scale interactive datasets due to the complexities of configuring diverse, executable environments at scale. While recent efforts like SWE-Gym [Pan et al., 2024b] and SWE-PolyBench [Rashid et al., 2025] represent promising steps, their manual curation processes and reliance on a limited number of repositories constrain their scope, diversity, and scalability. Furthermore, the evaluation of rapidly advancing LLM-based agents also faces significant challenges. Static benchmarks, while initially valuable, can become compromised by data contamination as newer models become exposed to test instances during their extensive pre/post-training. Moreover, the lack of standardized evaluation protocols, variability in agent scaffolds and inconsistent reporting practices make direct comparisons between models difficult and can obscure their true capabilities. To address these challenges in both training data availability and evaluation reliability, we present a scalable, fully automated pipeline for continuous collection of software engineering tasks from real-world GitHub repositories. Building upon our prior work such as SWE-bench Extra [Badertdinov et al., 2024], which has been well-received by the community and is already used to train open-source software engineering agents [Wang et al., 2025], our approach eliminates manual intervention and significantly expands task diversity and scale. To the best of our knowledge, this is the first system enabling fully automated, scalable collection of executable tasks from a wide set of real-world repositories, specifically designed to support interactive agent training and robust benchmarking. 
Our main contributions are as follows: • A scalable and fully automated pipeline for mining real-world software engineering tasks from GitHub, covering environment configuration, build setup, and test validation. • SWE-rebench2, a public dataset of more than 21,000 interactive Python-based SWE tasks, designed to train and benchmark agents in diverse executable environments, particularly suitable for reinforcement learning-based approaches. • A public SWE-rebench leaderboard3 that offers continuously updated, decontaminated, and standardized evaluations for LLM-based agents, promoting transparency and fair comparisons across both open- and closed-source models. By focusing on scale and automation, SWE-rebench aims to fill a critical gap in the LLM agent ecosystem. We believe it will serve as a foundational resource for accelerating open-source research and improving the reliability and performance of LLM-based software engineering agents. # 2 An automated pipeline for collecting software engineering tasks In this section we describe our automated pipeline for mining verifiable software engineering tasks at scale, which we used to build SWE-rebench, a dataset of 21,336 verifiable SWE tasks from 3468 distinct GitHub repositories. Our pipeline comprises four stages: preliminary task collection, automated installation instruction configuration, execution-based installation verification, and quality assessment, which are fully described in this section. While our methodology incorporates several techniques from SWE-bench, it also introduces innovations to enhance automation and scalability. We detail the distinctions and novel aspects of our approach compared to the original SWE-bench methodology in Appendix G. 
The computationally intensive nature of our pipeline is managed through a distributed storage and computing platform TractoAI [TractoAI, 2025], which provides capabilities for efficient parallel processing and data management, helping us optimize throughput of each stage to enable rapid reprocessing whenever we change the pipeline. # 2.1 Preliminary task collection In the first stage, we download raw input data from multiple origins, merge them, and perform preliminary filtering. The primary sources for our data are GitHub Archive [Grigorik, 2011] and GitHub. • GitHub Archive. The GitHub Archive is a major source of public events on GitHub. Each day, it publishes a JSON archive listing all GitHub events from that day. We use this archive to collect detailed data about issues: issue description, discussion, linked pull requests, and metadata such as creation date and labels. We also extract information about pull requests, including their merge status, last commit, and discussions. • GitHub. We clone relevant GitHub repositories with their full commit histories to our local storage. A local copy enables efficient access to repository data and helps avoid GitHub API rate limits. We use preserved commit history to identify changes associated with pull requests and perform version analysis for automated dependency setup in later stages. To initiate the dataset building process, we download approximately 450,000 pull requests linked to issues created before May 1, 2025. These originate from over 30,000 repositories that feature permissive licenses granting broad usage rights (see Appendix D for the list of included license types) and where Python constitutes over 75% of the codebase lines of code. We then link issues with pull requests that mention resolving them in their title or description, applying filters to select instances where: • The issue is from a Python repository with a permissive license. • The issue is marked as resolved. 
• The PR is merged into the main branch. • The PR is not linked to multiple issues. • The issue description is longer than 10 characters. • The PR introduces or modifies tests and includes code changes beyond test files. • Changes affect 1 to 15 files. This filtering aims to eliminate unsuitable candidates, particularly those lacking tests. We require pull requests that introduce or modify tests, as these are crucial for automatically evaluating whether a proposed code patch resolves the described issue. For each selected pull request, the overall patch is divided into two components: a solution patch, containing changes to non-test files intended to address the issue, and a test patch, comprising only changes to test files. After applying all filtering criteria, approximately 153,400 potential task instances remain. # 2.2 Automated installation instructions configuration Datasets like SWE-bench [Jimenez et al., 2024] or SWE-Gym [Pan et al., 2024b] rely on manual curation to configure executable environments for each repository. This approach inherently limits scalability, often confining such datasets to a small selection of well-known repositories. Key steps typically include project versioning (mapping multiple task instances to a single valid environment) and defining setup instructions (to install dependencies and run tests). Manually conducting these steps on a large-scale, diverse task collection is infeasible; therefore, we employ a fully automated approach. After preliminary filtering described in Section 2.1, remaining issues are treated as task instances. We group these task instances by project versions inferred from git tag outputs, normalizing versions to major.minor format (e.g., 1.2.3 is normalized to 1.2). For each version group, we select the base_commit of the pull request linked to the task instance with the most recent base_commit date. 
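The version-grouping step just described can be sketched in a few lines; `normalize_version` and `select_base_commits` below are illustrative helper names assumed for this sketch, not the pipeline's actual code:

```python
import re
from collections import defaultdict

def normalize_version(tag):
    """Normalize a version tag to major.minor (e.g. 'v1.2.3' -> '1.2').

    Returns None when the tag does not look like a version, in which
    case the task would get its own unique environment.
    """
    m = re.match(r"v?(\d+)\.(\d+)", tag)
    return f"{m.group(1)}.{m.group(2)}" if m else None

def select_base_commits(tasks):
    """Group tasks by normalized version and pick, per group, the
    base_commit of the task with the most recent base_commit date."""
    groups = defaultdict(list)
    for t in tasks:
        groups[t["version"]].append(t)
    return {v: max(ts, key=lambda t: t["date"])["base_commit"]
            for v, ts in groups.items()}

# Toy task instances (ISO dates compare correctly as strings).
tasks = [
    {"version": "1.2", "date": "2025-01-10", "base_commit": "abc"},
    {"version": "1.2", "date": "2025-03-02", "base_commit": "def"},
    {"version": "1.3", "date": "2025-02-20", "base_commit": "ghi"},
]
```

All tasks in one version group then share the environment built from the selected base_commit.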
We prioritize this most recent base_commit because developers typically maintain dependency compatibility within minor versions, and later commits often include important environment fixes. This approach generally provides a stable dependency set, often sufficient for executing test patches from all tasks in that group within a shared environment. The git tag command provides a version for approximately 95% of task instances. We assign a unique version to the rest of the tasks, so that each one of them uses its own environment. We employ an agentless approach, inspired by [Xia et al., 2024], to generate candidate environment setup instructions. Figure 1: Overview of the automated pipeline for collecting software engineering data. This process involves several LLM-driven steps: • Identifying relevant files: An LLM scans repository files (e.g., README.md, Dockerfile, setup.py) to find potential sources of installation information. • Extracting installation recipe: The LLM processes the concatenated content of files identified in the previous stage to produce a structured JSON object detailing the installation recipe. Files are provided to the LLM in the format: <filename>F.ext</filename>\n<content>CONTENT</content>. An example of the LLM’s reasoning and the resulting JSON recipe is provided in Appendix B.3. We use the Qwen2.5-72B-Instruct model [Qwen et al., 2025] (prompt in Appendix B.2) to generate up to three candidate JSON recipes per task. If an error occurs during the subsequent scripted installation or test execution (derived from a recipe), the LLM attempts to refine that recipe by analyzing error logs and the original instructions (see correction prompt in Appendix B.4). This iterative refinement enables successful environment configuration for tasks with issues like missing libraries or incorrect setups, allowing their inclusion in the final dataset. 
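Assembling the file contents in the stated `<filename>…</filename>\n<content>…</content>` format is straightforward; the helper name below is hypothetical and the snippet is only a sketch of the prompt-construction step:

```python
def format_files_for_llm(files):
    """Concatenate repository files in the format the recipe-extraction
    prompt expects: <filename>F.ext</filename>\n<content>CONTENT</content>."""
    return "\n".join(
        f"<filename>{name}</filename>\n<content>{content}</content>"
        for name, content in files.items()
    )

# Toy example with two installation-relevant files.
prompt_body = format_files_for_llm({
    "README.md": "pip install -e .[dev]",
    "setup.py": "from setuptools import setup; setup(name='pkg')",
})
```

The resulting string would be passed to the LLM together with the extraction prompt to obtain the structured JSON recipe.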
Our approach successfully produces a working installation recipe for at least one task in 31% of all repositories. We also explored dependency installation using an interactive agent that directly interacts with a Docker environment to install projects and run tests. While this interactive agent occasionally configured environments more effectively, it proved to be significantly more resource-demanding. The chosen agentless method is more computationally efficient for large-scale processing, and generating multiple candidate recipes can further improve its effectiveness, making it our primary approach. A comparative evaluation of these approaches on a curated subset of SWE-bench tasks is detailed in Appendix C. # 2.3 Execution-based installation verification To confirm task solvability and the integrity of the provided tests, we perform execution-based installation verification. This stage involves installing the environment for each task within a container and executing the tests from the pull request’s test patch. We then parse the test run outputs to ensure that: (1) at least one test from the test patch fails before applying the solution patch (i.e., changes to non-test files from the original pull request), (2) all tests from the test patch that initially failed subsequently pass after the solution patch is applied, and (3) any tests from the test patch that initially passed continue to pass after the solution patch is applied. Tasks are considered valid only if they meet these conditions. Processing numerous task instances, each potentially with multiple candidate recipes requiring installation, testing, and logging, necessitates distributed execution to manage the workload efficiently. We use TractoAI for this purpose, as it enables distributed container building and parallel execution of verification tasks in built containers across a cluster. 
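The three verification conditions can be expressed compactly. This is an illustrative sketch, assuming per-test pass/fail statuses have already been parsed from the logs before and after applying the solution patch:

```python
def verify_task(status_before, status_after):
    """Check the three execution-based verification conditions.

    `status_before`/`status_after` map test ids from the test patch to
    "pass"/"fail", observed before and after the solution patch.
    """
    failed_before = {t for t, s in status_before.items() if s == "fail"}
    passed_before = set(status_before) - failed_before
    # (1) at least one test from the test patch fails before the patch
    if not failed_before:
        return False
    # (2) every initially failing test passes after the patch
    if any(status_after.get(t) != "pass" for t in failed_before):
        return False
    # (3) initially passing tests continue to pass
    return all(status_after.get(t) == "pass" for t in passed_before)
```

A task is kept only when `verify_task` returns True for some candidate recipe, which is exactly the fail-to-pass / pass-to-pass check described above.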
For installation verification, we use a default base container with pre-installed basic dependencies (e.g., conda, gcc). Our own image registry and internal PyPI/APT mirrors help cache popular dependencies, accelerating container launches and reducing reliance on external sources. During task verification we perform the following steps: • Install project dependencies in an isolated container using buildah. We utilize tmpfs for file system operations to minimize disk I/O and accelerate builds. • Execute tests and parse logs to identify tests validating the solution. • Build final container images upon successful verification. • Record exact dependency versions (via pip freeze, conda env export) after a successful setup to mitigate reproducibility issues from unpinned dependencies in Python projects, ensuring consistent environment recreation in the future. # 2.4 Automated instance quality assessment For our collected tasks to be effectively used for reinforcement learning, they should possess certain properties; otherwise, RL agents might generate trajectories that appear as failures but are actually due to task imperfections (e.g., an underspecified issue making the task unsolvable, or flawed tests that a correct solution cannot pass), leading to incorrectly penalizing the agent. While SWE-bench Verified ensures these properties through manual verification, the scale of our collection necessitates an automated approximation of these checks. To assess these properties automatically, we fine-tune an instruction-following model using human annotations from SWE-bench Verified to predict: • Issue Clarity: Whether the GitHub issue description is sufficiently detailed for a developer to understand and solve the problem. • Task Complexity: The estimated effort to resolve the issue, considering reasoning, code modification, and codebase familiarity. 
• Test Patch Correctness: Whether tests in the pull request accurately verify the intended fix without over-reliance on specific implementation details. We fine-tune Qwen 2.5-72B-Instruct using annotations from SWE-bench Verified. For each of the over 3,800 examples, the model receives the issue description, the canonical solution patch, and the test patch as input. It is then prompted to predict one of three binary quality labels: Issue Clarity, Task Complexity, or Test Patch Correctness. We train the model to predict each label independently; each task instance is assessed for each quality characteristic separately using a 75/25 training/validation split (total 413 validation examples). For Task Complexity (where ‘high-score’ implies >1 hour to solve; 100 high-score vs. 313 low-score examples in validation), our fine-tuned model achieved 81% accuracy and a weighted F1-score of 0.82. This is an improvement over the baseline Qwen-72B-Instruct, which achieved 68% accuracy. For Test Patch Correctness (180 high-score vs. 233 low-score examples), the model achieved 67% accuracy (weighted F1: 0.65). For Issue Clarity (84 high-score vs. 329 low-score examples), it achieved 79% accuracy (weighted F1: 0.76). A more detailed prediction quality analysis, including precision and recall per class, can be found in Appendix F, Table 4. The LLM-generated labels for Issue Clarity, Task Complexity, and Test Patch Correctness are provided as metadata with each task instance. While this automated assessment is not perfect, these labels offer users a means to filter the dataset and select task instances according to their specific criteria. 
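For readers who want to reproduce accuracy and support-weighted F1 from raw predictions, here is a generic, stdlib-only sketch of the metric definitions (not the authors' evaluation script):

```python
from collections import Counter

def accuracy_and_weighted_f1(y_true, y_pred):
    """Accuracy and support-weighted F1 for binary (or multi-class) labels."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    support = Counter(y_true)
    weighted_f1 = 0.0
    for cls, n in support.items():
        tp = sum(t == p == cls for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == cls for p in y_pred)
        prec = tp / pred_pos if pred_pos else 0.0
        rec = tp / n
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        # each class contributes its F1 weighted by its share of examples
        weighted_f1 += (n / len(y_true)) * f1
    return acc, weighted_f1
```

Weighting by support matters here because the validation classes are imbalanced (e.g., 100 high-score vs. 313 low-score for Task Complexity), so a plain macro average would over-weight the rare class.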
For example, these labels enable more precise task difficulty control than heuristics like the number of modified files (used, for example, for the SWE-bench Lite subset), as the number of changed files can be misleading: a multi-file change might be simple (e.g., a repeated parameter update), while a single-file change might lack clear issue descriptions or adequate tests for full validation. Thus, these LLM-based scores for difficulty and clarity of description and tests empower users to perform more nuanced task selection, helping them identify challenging yet solvable and clearly specified tasks beneficial for their specific model training or evaluation needs, and potentially aiding in mitigating benchmark saturation. This four-stage pipeline automates the collection and processing of interactive software engineering tasks. The process yields the SWE-rebench dataset of 21,336 annotated task instances, which is publicly available on Hugging Face Datasets. Accompanying code for utilizing the dataset, including scripts for task evaluation, is available on GitHub. An example of a task instance with its full annotation is provided in Appendix E. # 3 SWE-rebench benchmark In this section we discuss key limitations of existing evaluation setups for LLM-based software engineering agents and how our automated data pipeline described in Section 2 helps to address them. We leverage this pipeline to construct SWE-rebench, a benchmark built from hundreds of real-world, executable SWE tasks. It comprises 294 executable tasks from 169 diverse repositories, selected using filtering criteria detailed in Appendix H, and is part of the broader SWE-rebench dataset release. To ensure reliable and standardized evaluation, we maintain a private leaderboard based on this benchmark. 
# 3.1 Challenges in SWE agent benchmarking We identified the following key areas for improvement: • Potential data contamination: SWE-bench, the de facto evaluation standard for SWE agents, has been public since late 2023. Models released afterward may have been exposed to its data during training, risking inflated scores and confounding generalization with memorization. • Incomparable results due to scaffolding variability: Current evaluation practices allow for a wide range of setups. Performance on SWE-bench is often heavily influenced by highly engineered prompts, complex multi-agent frameworks, retry mechanisms, best-of-N sampling strategies and validation loops. While these techniques demonstrate the potential of systems built around LLMs, they make it difficult to isolate and compare raw capabilities of different LLMs. Furthermore, the scaffoldings are often developed and tuned on subsets from SWE-bench, inadvertently leading to a potential for implicit overfitting to the benchmark’s specific characteristics. • Lack of standardized and verifiable evaluation: SWE-bench evaluations are typically performed and reported by individual teams. This decentralized approach lacks a mechanism for independent verification and can potentially lead to inconsistencies or misleading reporting practices such as reporting pass@N as pass@1 or implicitly using information derived from final tests. The reliance on closed-source frameworks for many submissions further reduces the transparency and reproducibility of the evaluation process. • High variance in agent performance across runs: Due to the stochastic nature of agent trajectories, the outcome of a single run can vary significantly. This includes cases where a model may successfully generate correct actions or recover from mistakes in some runs, but fail to do so in others. Without averaging or reporting performance across multiple runs, the results can be unrepresentative. 
In particular, evaluating an agent multiple times and reporting only the best-performing run risks overstating the model’s actual capabilities and resolved rate. # 3.2 Principles of SWE-rebench SWE-rebench is designed to address the above challenges and support rigorous, model-centric evaluation through several core principles: • Centralized and standardized evaluation framework: All evaluations on SWE-rebench are conducted by our team using a fixed scaffolding, i.e., every model is assessed with the same minimal ReAct-style agentic framework [Yao et al., 2023], identical prompts and default generation hyperparameters as recommended by model developers. We standardize the context length to 128K tokens for all evaluations, unless a model only supports a shorter context. This strict standardization ensures a level playing field, allowing for direct comparison of the core abilities of different models to understand and solve SWE tasks within a defined, general-purpose interaction structure. While model-specific tuning or a different scaffolding could potentially yield higher scores for a given model, our focus is on establishing a reliable baseline of model capabilities in a common setting. It’s important to note that the interaction with the development environment is based on the model generating textual commands according to the interaction format described in the prompt. To equalize evaluations, we don’t use the function-calling functionality that some of the tested models support. For transparency, we share the exact system prompt used for all model evaluations in Appendix I. • Continuous dataset updates and decontamination: SWE-rebench uses an automated pipeline from Section 2 for a continuous supply of fresh tasks. Since we precisely track the creation dates of the issues and their corresponding pull requests against model release dates, we can explicitly mark potentially contaminated evaluations that include issues created before a model’s release date. 
These evaluations are explicitly marked on our leaderboard, to ensure transparency around possible data leakage. • Accounting for stochasticity in agent behavior: To capture performance variability, we run each model five times on the full benchmark. We additionally report both the standard error of the mean (SEM) and pass@5 metrics to provide a statistically grounded and more reliable assessment of each model’s performance. This standardized approach allows SWE-rebench to focus on measuring two fundamental aspects of model performance: • The ability to comprehend a real-world software issue (presented as a GitHub issue), devise a plan, implement a correct code patch, and potentially validate the solution. • The ability to follow instructions and operate within a structured agentic framework, which is represented by our ReAct scaffolding. # 3.3 Result analysis We leverage the decontaminated nature of SWE-rebench to analyze performance trends over time and identify potential signs of contamination effects in prior benchmarks. Specifically, we evaluate models on two distinct temporal subsets of tasks: those created in January 2025 and those from March–April 2025. Table 1 presents model performance across these time windows. To investigate potential overfitting to the SWE-bench Verified dataset, we compare model performance on SWE-rebench tasks to the same models’ performance on SWE-bench Verified. This comparison focuses on open-source models released in 2024 or early 2025, for which the risk of data leakage from the Verified subset is higher. Table 2 summarizes the comparative results on SWE-bench Verified and the March-April 2025 slice of SWE-rebench. The results from this evaluation showcase several notable observations: • GPT-4.1 is the only model whose performance noticeably declined on the March–April subset compared to the January subset. 
• LLaMa-4-Maverick exhibits a high pass@5 score relative to models with similar mean resolution rates, yet has a relatively modest resolution rate. This indicates that while the model can produce correct solutions to more complex problems, it lacks reliability across runs, demonstrating high potential but inconsistent execution. Table 1: Comparison of model performance on SWE-rebench Jan 2025 and SWE-rebench (Mar–Apr 2025). All metrics are reported in percentages. Models released after 1st of March 2025 are denoted with an asterisk (*). Table 2: Comparison of model performance on SWE-bench Verified and SWE-rebench (Mar–Apr 2025). All metrics are reported in percentages. • Qwen2.5-Coder-32B-Instruct underperforms expectations, especially considering its strong code generation capabilities. Analysis of its trajectories reveals problems with instruction following; the model frequently hallucinates environment responses or enters loops of formatting errors, ultimately failing without producing a meaningful solution attempt. • Qwen3 models perform similarly with or without think mode enabled – in some cases, the no-think variant even slightly surpasses the think version. This suggests the base model’s capabilities are sufficiently strong for deliberate planning to provide no measurable advantage. The nearly identical pass@5 scores further indicate that the model’s problem-solving efficiency remains consistent even without explicit reasoning mechanisms. • DeepSeek models demonstrate the strongest performance among open-source models across both SWE-rebench subsets and the SWE-bench Verified benchmark. Notably, both the December and March releases of DeepSeek-V3 consistently outperform other open models in resolution rate and pass@5, highlighting their robustness to changes in task distribution. For evaluation details and experimental setup, see Appendix J. 
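The reported statistics (mean resolved rate with SEM over five runs, and pass@5, the fraction of tasks solved in at least one of the five runs) can be computed as in this illustrative stdlib-only sketch:

```python
from math import sqrt
from statistics import mean, stdev

def resolved_rate_stats(runs):
    """Mean resolved rate and its standard error (SEM) over repeated runs.

    `runs` is a list of per-run resolved rates, one entry per full
    benchmark run.
    """
    return mean(runs), stdev(runs) / sqrt(len(runs))

def pass_at_5(per_task_outcomes):
    """Fraction of tasks solved in at least one of the five runs.

    `per_task_outcomes` maps task id -> list of 5 booleans (one per run).
    """
    solved = sum(any(outcomes) for outcomes in per_task_outcomes.values())
    return solved / len(per_task_outcomes)
```

Reporting both numbers separates average reliability (mean with SEM) from a model's ceiling (pass@5), which is how the LLaMa-4-Maverick observation above is quantified.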
# 4 Discussion and limitations Our automated pipeline and the resulting SWE-rebench dataset are designed to address the lack of large-scale, real-world tasks for agent-based training, and the need for up-to-date benchmarks that remain free from data contamination. By automating the extraction and validation of executable tasks, we enable broad coverage and continual supply of fresh data. However, the emphasis on scalability introduces trade-offs, particularly a reduced ability to manually curate and verify the quality and clarity of each individual task. Extracting consistently high-quality, verifiable SWE tasks from diverse real-world GitHub repositories (Section 2) is an inherently imperfect process. While our multi-stage filtering, refinements to existing methodologies and automated dependency installation are designed for robustness at scale, they rely on heuristics and LLM-driven interpretations. For instance, our LLM-based approach to generating installation instructions from repository files (Qwen2.5-72B-Instruct, Section 2.2), while far more scalable than manual methods, was validated on a limited set of 18 repositories for prompt engineering and may not capture every project’s subtleties. Similarly, the automated task quality assessment (Section 2.4), where an LLM is fine-tuned on SWE-bench Verified task labels to predict complexity and relevance, serves as a valuable scalable proxy but cannot fully replicate nuanced human judgment, and may thus contain errors that reduce dataset quality. Finally, while our benchmark is intended to support transparency and standardization in evaluating SWE agents, it may also accelerate the development of increasingly autonomous AI systems in software engineering. This progress brings potential risks, such as overreliance on AI-generated code or misuse of automated agents for introducing vulnerabilities. 
We believe that fostering openness, decontaminated evaluations, and rigorous benchmarking practices helps mitigate these concerns and contributes to responsible advancement of the field. We outline the following main limitations of our work: • Automated task quality assessment: While we employ automated quality assessment, the fully automated pipeline may result in some tasks being imperfectly described or unsolvable solely from the issue. This can lead to lower absolute success rates compared to manually curated benchmarks. • Limited language diversity: The initial version of SWE-rebench and its underlying dataset are focused exclusively on Python-based tasks. Fundamentally, our pipeline is language-agnostic and can be extended to incorporate tasks from projects utilizing other programming languages.
LLM-based agents have shown promising capabilities in a growing range of software engineering (SWE) tasks. However, advancing this field faces two critical challenges. First, high-quality training data is scarce, especially data that reflects real-world SWE scenarios, where agents must interact with development environments, execute code and adapt behavior based on the outcomes of their actions. Existing datasets are either limited to one-shot code generation or comprise small, manually curated collections of interactive tasks, lacking both scale and diversity. Second, the lack of fresh interactive SWE tasks affects evaluation of rapidly improving models, as static benchmarks quickly become outdated due to contamination issues. To address these limitations, we introduce a novel, automated, and scalable pipeline to continuously extract real-world interactive SWE tasks from diverse GitHub repositories. Using this pipeline, we construct SWE-rebench, a public dataset comprising over 21,000 interactive Python-based SWE tasks, suitable for reinforcement learning of SWE agents at scale. Additionally, we use a continuous supply of fresh tasks collected using the SWE-rebench methodology to build a contamination-free benchmark for agentic software engineering. We compare results of various LLMs on this benchmark to results on SWE-bench Verified and show that the performance of some language models might be inflated due to contamination issues.
[ "cs.SE", "cs.CL" ]
# 1 Introduction Large language models (LLMs) are revolutionizing software engineering by offering exceptional capabilities in natural language understanding and generation. These models can significantly enhance productivity by, e.g., automating code generation [1], helping in the development of cyber-physical systems [2] and digital twins [23, 36], analyzing logs [21, 34], and answering development questions [25]. Their ability to analyze vast amounts of data and identify patterns also helps in optimizing system performance and predicting potential issues early: LLMs can not only streamline workflows but also foster innovation and improve overall software quality. Software architecture tasks also often require vast knowledge. Consequently, there is an emerging synergy between software architecture and LLMs. For example, LLMs have been explored for architecture tasks such as identifying design decisions [15, 27], generating architecture designs from requirements [13], and answering questions about architectural knowledge [28]. Moreover, software architecture can be applied to developing LLM-based systems by providing reference architectures for different use cases [7, 22, 26, 32]. Systematic literature reviews on the use of LLMs provide researchers and practitioners with comprehensive insights into current trends, challenges, and best practices. These reviews help identify gaps in existing knowledge, guide future research directions, and inform evidence-based decision-making in the development and application of LLMs. However, reviews on the use of LLMs in the scope of software engineering mainly focus on testing [31] or code generation [35]. Most articles in existing literature reviews on LLM usage in general software engineering [8, 11] are not related to software architecture: Hou et al. [11] do not include works from the software architecture community, as the search keys did not include relevant terms (only "software design" could be considered related). 
Even for design-related tasks, Fan et al. [8] report that they did not find much work on LLM-based software design. Different from code generation and tests, architecture tasks often affect higher-level concerns and encounter data scarcity problems. These disparities highlight the value of a comprehensive review of software architecture and LLMs. To the best of our knowledge, no such review exists, making a systematic literature review particularly useful. Therefore, in this paper, we conduct a systematic literature review of research papers at the intersection between LLMs and software architecture. We formulate our research questions to derive insights into the current state of the art in this field: what is working well, what challenges exist, and what questions remain open. From the software architecture side, we analyze the software architecture tasks targeted by these works and how the performance of LLMs is evaluated. From the LLMs’ side, we explore which LLMs are used and how they are optimized. To provide insights into the path ahead for the synergy between LLMs and software architecture, we also analyze the discussed future work. Moreover, we give an initial overview of envisioned reference architectures for developing LLM systems. Following the methodology for systematic literature reviews in software engineering [18, 19], we initially found 119 articles with our search strategy, of which we identify and analyze 18 relevant papers about LLMs and architecture. We provide the complete data of this survey as supplementary material [29]. This literature review can benefit (i) software architecture researchers who want to apply LLMs in their architecture tasks and (ii) LLM-based systems developers who want to build their LLM systems with better architecture. In the following, we first present our methodology for the review in Section 2. After that, Section 3 presents our findings on our RQs, and Section 4 discusses threats to validity. 
We further discuss our findings and outline future research directions in Section 5. Finally, Section 6 concludes the paper. # 2 Methodology This section describes the approach we followed to select, analyze, and evaluate relevant research on the intersection of software architecture and LLMs. We follow the methodology defined by Kitchenham et al. [18, 19]. Therefore, our review process consists of three main phases: 1. Planning the review by formulating research questions of interest (Section 2.1) and defining a search strategy (Section 2.2), 2. filtering the articles obtained by our search (Section 2.3), and 3. analyzing the remaining relevant articles (Section 2.4). # 2.1 Research Questions Our review aims to provide an overview of the current applications of LLMs to software architecture research and vice versa, i.e., how software architecture research is applied to LLMs. We want to provide insight into what works well, what does not, and what challenges remain. First, we investigate which software architecture tasks LLMs are used for (RQ1) to understand which tasks are already being researched and potentially solved, and which remain an open challenge. To gain more detailed insight, we examine the degree of automation these approaches provide (RQ1.1), distinguishing between manual guidance, semi-automated, and fully automated methods. Additionally, we assess whether LLMs are applied end-to-end or only to specific sub-tasks within the broader software architecture process (RQ1.2). Since the LLMs’ capabilities can vary significantly, our goal is to identify which LLMs are used in the reviewed studies (RQ2). This research question provides insight into the most commonly applied models and whether there is a preference for general-purpose or domain-specific LLMs in software architecture. To understand how researchers tune LLM performance, we examine the techniques used to improve effectiveness (RQ3). 
Specifically, we investigate the tuning techniques used (RQ3.1) and the prompt engineering strategies applied (RQ3.2). Both RQ3 and RQ2 are based on the investigation by Hou et al. [11]. Evaluating the effectiveness of LLM-based approaches is crucial for understanding their practical applicability. Therefore, we explore how these approaches are evaluated (RQ4) by analyzing the evaluation methods used (RQ4.1) [20] and the specific metrics applied (RQ4.2). Furthermore, we examine whether these methods outperform existing baselines (RQ4.3) and assess whether supplementary materials are provided (RQ4.4) to support reproducibility. Finally, to gain insights into the future directions of LLM research in software architecture, we analyze what future work the authors of the reviewed studies suggest (RQ5). Identifying open challenges and proposed research directions helps outline the next steps to advance LLM applications in this domain. # 2.2 Search Strategy We derived a search query based on our goal of providing an overview of the applications of LLMs in software architecture and vice versa. Therefore, one part of our query is the keyword software architecture, which has to appear within an article for us to regard it as relevant. Moreover, the article has to contain a keyword related to LLMs. As not all articles may use the same keyword for their usage of LLMs, we included several different terms, of which the article has to contain at least one, including the currently most popular models: "LLM" OR "language model" OR "language models" OR "generative AI" OR "bert" OR "GPT" OR "Llama" OR "Transformer". We use this search string to search for articles in 25 top software engineering conferences and journals, such as ICSE, ASE, ICSA, ECSA, TSE, and TOSEM, by using Google Scholar and specifying these venues as publication sources. 
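To make the construction of the search string above concrete, the following sketch assembles it programmatically. The helper name and the option for dropping the software-architecture keyword (used for ICSA/ECSA, whose scope already implies it) are our own illustration, not code from the paper.

```python
# Sketch (ours, not the authors' tooling): assembling the boolean search
# string from the LLM-related keyword list described in the text.
LLM_TERMS = ["LLM", "language model", "language models", "generative AI",
             "bert", "GPT", "Llama", "Transformer"]

def build_query(require_architecture: bool = True) -> str:
    """Build the search query; the software-architecture keyword is
    dropped for venues whose scope already implies it (ICSA, ECSA)."""
    llm_part = " OR ".join(f'"{t}"' for t in LLM_TERMS)
    if require_architecture:
        return f'"software architecture" AND ({llm_part})'
    return llm_part

print(build_query())
```

Running `build_query(False)` yields only the LLM-related disjunction, matching the relaxed string used for the two architecture-focused venues.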
We provide the complete list of venues as part of our supplementary material [29]. For the two most closely related conferences, namely ICSA and ECSA, we modify the search string so that it does not need to contain the term software architecture, as the scope of these conferences already implies that an article is related to software architecture. Moreover, we also include companion proceedings from these two conferences. This search leads to 119 articles. # 2.3 Filtering of Results Based on our initial search, resulting in 119 articles, we filter the results to make sure the articles are actually relevant to our survey. First, we check whether they contain the term software architecture in the body of the article and find that 44 articles only mention it as part of the references, e.g., when citing an article from a software architecture conference. Next, we assess whether an article is a full article from the research track of the respective conference – except for ICSA and ECSA, where we also consider contributions from the companion proceedings – and whether it conducts research on the topic of software architecture and LLMs. We distribute this step among the team of authors and use the ICSA scope of topics in the call for papers as the inclusion criterion for determining whether an article is in the scope of software architecture. Notably, this implies the exclusion criterion that domain and UML models, such as class and activity diagrams, are excluded. Moreover, we exclude research related only to design patterns as opposed to architectural patterns. We filter out 12 articles because they are not full articles, and 15 more because they are not related to software architecture and LLMs. We filter out one additional article because it is a survey article that only discusses existing research and does not propose a new approach. We notice that two more articles, while presenting slightly different ideas, show the same evaluation. Therefore, we subsume them into one article. 
# 2.4 Analysis of Articles After filtering the results, we end up with 18 unique and relevant articles. Figure 1 shows their distribution by venue and year of publication. While the first article was already published in 2020 at ECSA, there was only one publication in the following years, and there was no publication in 2023. However, 2024 shows a steep increase with 10 articles, five of them being part of companion proceedings. In 2025, five articles have already been published, further indicating that the upward trend in publications will continue. Most articles (14/18) are published at ICSA, ECSA, or in their companion proceedings. This is not surprising, as both conferences are the most closely related to the topic of our review. We then extracted relevant data from the articles to answer our research questions outlined above (cf. Section 2.1). Fig. 1: Distribution of Articles by Venue and Year of Publication # 3 Findings In the following, we present our findings from the analysis of the articles with respect to our research questions. # 3.1 RQ1: Software Architecture Tasks Our first research question investigates the software architecture tasks, the degree of automation of the approaches, and how LLMs are utilized in these tasks. We identified four main categories of software architecture tasks that utilize LLMs: Reference Architectures, Classification & Detection, Extraction & Generation, and Assistants. We provide an overview of the distribution of these categories in Figure 2. Reference architectures cover domains such as self-adaptive systems [7], chatbots with LLMs [32], and agents [22, 26]. We discuss them briefly in Section 5. Fig. 2: Distribution of Tasks Utilizing LLMs (n=18) Classification and detection tasks include classifying tactics in code [16], classifying design decisions [15], and identifying design decisions in mailing lists [27]. LLMs are also used as classifiers in traceability link recovery tasks [17]. 
Extraction and generation tasks involve extracting design rationales [37], architecture component names [10], design structures from code [9], and mining design discussions [24]. Regarding generation, creating architecture decision records [4], software architecture designs from requirements [13], and architecture components for FaaS [3] are application scenarios for LLMs. Also, the generation of module descriptions and text embeddings for model-to-code mappings [14] are part of this category. Assistant systems focus on question-answering about architectural knowledge [28] and aiding in selecting, assessing, and capturing better design decisions [5]. Most of the works (71 %) use LLMs in an automated fashion. The two approaches that build assistants or chatbots are semi-automated, as they require user interaction. In the remaining categories, only two further studies are classified as semi-automated, while the rest are fully automated. The semi-automation is related to either providing adaptable infrastructure components for identifying types of architectural design decisions rather than full automation [27] or requiring the user to define and enter prompts themselves [13]. Whether the LLM is used to solve a subtask or the entire task varies across the studies. While 64 % of the studies use LLMs end-to-end, 36 % use them for subtasks. We observed the following subtasks for the non-assistant categories: classification tasks [27], generation of descriptions or embeddings [14], extraction of component names [10], and generation of explanations [9]. Moreover, one of the assistants [5] uses LLMs for multiple subtasks, like suggesting patterns, ranking, assessment of decisions, and generation of architecture decision records. # 3.2 RQ2: Which Large Language Models are used? To give a more detailed insight into the capabilities of the LLMs used in the studies, we analyzed the distribution of the respective models. 
This distribution is displayed in Figure 3. In total, 23 different models were used, which we grouped according to the base approach they derive from. However, all but ULMFiT are based on the Transformer architecture [30]. We included ULMFiT [12] anyway, as it is the first language model that introduced transfer learning with task-specific fine-tuning and can thus be seen as the direct predecessor of the current LLMs. The first observation one can make is that most of the used models (73 %) use only the decoder part of the Transformer architecture (GPT-, Llama-, and DeepSeek-based models). Encoder-only models (BERT-based and ULMFiT) are used in 21 % of the studies, and encoder-decoder models (T-based) only in 7 % of the cases. This was expected, as since the release of GPT-3, auto-regressive (decoder-only) LLMs have surpassed the other variants in most SE tasks [11]. This also aligns with the distribution of the models over time: until 2024, solely encoder-only models were used (two times BERT [15, 16] and one time ULMFiT [24]). However, as GPT-3 was released in May 2020, the adoption of decoder-only LLMs in the software architecture community was slower than in other SE areas [11, 31]. The most recent versions of GPT-based models (GPT-4o and later) and Llama-based models (Llama 3.1 and later), as well as the DeepSeek-based models (DeepSeek-V2.5 and Artigenz-Coder), were only used in the 2025 publications [3, 10]. This trend might be underestimated, as our study only includes data until March 2025. Fig. 3: Distribution of Used Models Grouped by Their Base Approach # 3.3 RQ3: Optimization Techniques We observed a clear distinction in tuning approaches based on the type of model. Encoder models were consistently fine-tuned across studies, emphasizing the need for task-specific adaptation due to their transformer-based masked language modeling pre-training. 
Fine-tuning allows researchers to tailor the model to software architecture tasks by training it on domain-specific data. Fig. 4: Overview of used prompting techniques and used evaluation methods In contrast, decoder models like those from the GPT family were predominantly utilized through prompting techniques rather than fine-tuning, likely due to accessibility and cost constraints. Figure 4a shows the overview of prompting techniques used. We found that researchers most commonly employed zero-shot prompting (70 % of used techniques). This aligns with the general usability of LLMs, as zero-shot prompting allows direct application without further training. This can also mean that the pre-trained LLMs encode enough knowledge for many software architecture tasks. Few-shot prompting was used less frequently (15 %), suggesting that providing examples is not necessarily required for software architecture tasks. More advanced prompt engineering strategies were rarely applied, which could indicate an area for future exploration. Chain-of-Thought prompting was used only in one study despite its potential to improve reasoning-based tasks. Retrieval-Augmented Generation was also applied only once, indicating that integrating external knowledge sources is not yet a common practice in this domain. Similarly, template-based prompting appeared in a single instance, suggesting that structured prompt design is underexplored for software architecture tasks. # 3.4 RQ4: Evaluation of Approaches For the evaluation of LLM-based approaches in software architecture tasks (RQ4.1), the most common methods were technical experiments and benchmarking, followed by case studies (cf. Figure 4). Technical experiments were the dominant evaluation method, used in 64 % of studies (cf. Figure 4b). Benchmarking was conducted in 43 % of studies, often involving comparisons with traditional or state-of-the-art approaches. 
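To make the distinction between the two most common prompting strategies concrete, the following minimal sketch contrasts a zero-shot and a few-shot prompt for a design-decision classification task. The task wording, helper names, and example sentences are our own illustration and are not drawn from any reviewed study.

```python
# Illustrative sketch (hypothetical task and wording): zero-shot vs.
# few-shot prompt construction for classifying design decisions.
def zero_shot_prompt(sentence: str) -> str:
    # Zero-shot: the task description alone, no examples.
    return ("Classify the following sentence as an architectural design "
            f"decision or not. Answer yes or no.\nSentence: {sentence}")

def few_shot_prompt(sentence: str, examples: list) -> str:
    # Few-shot: the same task, preceded by labeled demonstrations.
    shots = "\n".join(f"Sentence: {s}\nAnswer: {a}" for s, a in examples)
    return ("Classify each sentence as an architectural design decision "
            f"or not (yes/no).\n{shots}\nSentence: {sentence}\nAnswer:")

examples = [("We will use a message broker to decouple services.", "yes"),
            ("The meeting is moved to Friday.", "no")]
print(few_shot_prompt("We adopt a layered architecture.", examples))
```

The few-shot variant simply prepends labeled demonstrations before the query, which is why it requires no further training but costs more prompt tokens than the zero-shot variant.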
Case studies were used in 29 % of studies, offering qualitative insights into real-world applications. Other evaluation methods each appeared only once, including data science-based validation, interviews, and controlled experiments. Looking into RQ4.2, the evaluation of the LLM-generated outputs employed both traditional performance metrics and text-generation metrics. Traditional performance metrics (e.g., precision, recall, $F_1$-score) were frequently applied to measure the correctness of LLM-generated outputs. Text generation metrics, which are used to assess the quality of generated content, include BLEU (Bilingual Evaluation Understudy) and BERTScore. BLEU was adopted by three studies (21 %; i.e., [3, 4, 37]). BERTScore, which evaluates semantic similarity using contextual embeddings, appeared in one study (i.e., [4]). A key question in assessing the effectiveness of LLMs for software architecture is whether they outperform existing approaches (RQ4.3). Among the fourteen studies analyzed, nine included a comparison to other approaches. Five studies did not compare their methods to a baseline, limiting their ability to demonstrate relative effectiveness. In cases where a comparison was conducted, LLM-based solutions consistently outperformed the baseline in six studies. Two studies showed mixed results: Mahadi et al. [24] demonstrated better results within the dataset but worse results across datasets, and Keim et al. [15] performed better than the baselines according to the $F_1$-score but showed lower precision in some cases. Another study [16] was not able to outperform the baseline. Overall, these results suggest a generally positive impact of LLMs on software architecture tasks. While these results highlight the potential of LLMs, the lack of baseline comparisons in one-third of the studies indicates a need for more rigorous benchmarking to establish their practical advantages. 
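As a concrete reading of the traditional metrics named above, the following sketch computes precision, recall, and the $F_1$-score from raw counts of true positives, false positives, and false negatives; the example numbers are hypothetical.

```python
# Standard definitions of the traditional performance metrics, computed
# from counts of true positives (tp), false positives (fp), and
# false negatives (fn).
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn) if tp + fn else 0.0

def f1(tp: int, fp: int, fn: int) -> float:
    # Harmonic mean of precision and recall.
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r) if p + r else 0.0

# Hypothetical numbers: 8 correct detections, 2 spurious, 2 missed;
# each metric comes out to about 0.8 here.
print(precision(8, 2), recall(8, 2), f1(8, 2, 2))
```

The $F_1$-score balances the two error types, which is why it is the common headline metric when both spurious and missed detections matter.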
RQ4.4 addresses reproducibility, a crucial aspect of scientific research that enables independent verification of results. Nearly all studies provided some form of supplementary material, such as datasets, source code, or implementation details. However, two works proposing reference architectures did not include additional materials, possibly due to the conceptual nature of their contributions. This suggests a strong commitment to reproducibility within the field, though improvements in providing accessible and well-documented supplementary materials could further enhance transparency. # 3.5 RQ5: Future Work Regarding RQ5, we considered the future work mentioned that is related to LLMs. In total, five papers (36 %) do not report on future work [14, 15, 17, 27, 37], while nine papers (64 %) give a short outlook [3, 4, 5, 9, 10, 13, 16, 24, 28] (cf. Figure 5). In these nine papers, future work aims to expand the papers’ results in three different directions. First, four studies want to test different LLMs [28], with integrated reasoning [10], with code support [16], or with multimodal capabilities [5]. Second, seven studies want to improve the LLMs’ results, either in general [28] or with specific approaches. This includes preprocessing or refinement of the input [5, 10], adding more context to the input [4, 10], applying different techniques (e.g., RAG) to the LLM [3, 4, 5, 9], or fine-tuning the LLM [4, 24]. Third, in one study, the authors plan to test LLMs for software architecture tasks continuously [13]. Fig. 5: Number of occurrences for different categories of future work (n=17). # 4 Threats to Validity In the following section, we discuss threats to validity [33]. One threat involves not finding all relevant articles due to the search strategy and query employed. We mitigated this threat by evaluating different queries and the relevance of papers found with them beforehand. 
We also checked if the results included relevant papers we knew of as a gold standard [6]. Another threat is the misclassification of articles. We need to extract information from the articles to answer our research questions, making it necessary to understand them correctly. All authors have expertise in the research field of our review, and papers were assigned based on their knowledge of the respective areas. Moreover, we discussed any issues that arose among the complete team of authors, ensuring consistent and accurate classification. # 5 Discussion In the following, we discuss our findings from Section 3 and identify future research directions. Software Architecture Tasks. Our study shows the diverse applications of LLMs in software architecture, with tasks falling into four main categories (cf. Section 3.1): reference architectures, classification & detection, extraction & generation, and assistants. Examining the 14 articles on LLMs for software architecture, we found that most of them propose automated approaches that use LLMs end-to-end, suggesting that LLMs are capable of addressing complete architectural tasks. Besides the 14 articles covering the applications of LLMs in software architecture, we also found four articles concerning the application of software architecture to LLMs. They propose reference architectures for incorporating LLMs into different domains, such as self-adaptive systems, chatbot frameworks, and autonomous agents. By structuring interactions between LLMs and external systems, software architecture enables more robust and adaptable applications, stressing how software architecture research can not only use LLMs but also benefit them. Surprisingly, we found only one work generating source code for architectural components using LLMs [3]. Also, there is only one paper regarding cloud-native computing and architecture [3], indicating a potential avenue for further research in this regard. 
We found no articles regarding evaluating quality aspects of software architecture, such as evolvability, and architecture conformance checking. Both could be addressed in future research, e.g., by building on works identifying architectural patterns [9] and design rationales [37] from code. Usage of LLMs. Most approaches (73 %) rely on decoder-only models (Section 3.2), particularly GPT-based variants, reflecting their dominance in recent research. This trend of using mostly decoder-only, GPT-based LLMs can also be observed in the broader software engineering context [11, 31]. However, there is no consensus on a specific variant [11]. Fine-tuning was common for encoder models, whereas decoder models were primarily used via prompting (Section 3.3), with zero-shot prompting being the most frequent strategy (70 %). This also aligns with findings in the software testing context [31], where zero-shot prompting is also the most used strategy, followed by few-shot prompting. In the broader software engineering context, Hou et al. [11] found that few-shot prompting was the most commonly employed strategy, followed by zero-shot prompting. All surveys, including ours, show that advanced prompting techniques, like Chain-of-Thought and Retrieval-Augmented Generation, are only rarely used. Exploring whether these techniques can enhance approaches for software architecture tasks is a question for future research. Evaluation of Approaches. Evaluation methods were mainly technical experiments and benchmarking, with the $F_1$-score being the most commonly used metric (Section 3.4). While most studies showed LLMs outperforming baselines, around one-third lacked a comparative evaluation against a baseline. This indicates a need for more rigorous validation to demonstrate the added benefits of utilizing LLMs. However, nearly all studies provide supplementary material, enabling further insight into the approaches and results. Future Work. 
Future research directions mentioned by the authors of the studies include testing different LLMs, refining input strategies, and integrating advanced techniques such as retrieval-augmented generation (RAG) and fine-tuning. These findings suggest that while LLMs offer significant potential for software architecture tasks and outperform baselines, applying them in a way that ensures the best results is a multi-dimensional problem. The Future of LLMs in Software Architecture. Our findings indicate that the current body of published research on this topic is relatively limited. This is consistent with the review by Fan et al. [8], which characterized LLM-based design as an open research direction. Yet, there seems to be emerging research, as shown by the number of publications in workshops and at this year’s ICSA. One reason for the comparatively low number of papers in software architecture as opposed to other software engineering disciplines could be that the capabilities of LLMs were not sufficient to perform software architecture tasks until recently: the three studies from before 2024 that utilize encoder-only models were not able to demonstrate consistent improvements of their approaches over the baselines [15, 16, 24]. This also illustrates the need for the continuous evaluation of both LLMs and proposed approaches for software architecture tasks: given the fast-paced development of LLM technology, future research should consider strategies for ongoing assessment and adaptation of models in software architecture contexts.
Large Language Models (LLMs) are used for many different software engineering tasks. In software architecture, they have been applied to tasks such as classification of design decisions, detection of design patterns, and generation of software architecture designs from requirements. However, there is little overview of how well they work, what challenges exist, and what open problems remain. In this paper, we present a systematic literature review on the use of LLMs in software architecture. We analyze 18 research articles to answer five research questions, such as which software architecture tasks LLMs are used for, how much automation they provide, which models and techniques are used, and how these approaches are evaluated. Our findings show that while LLMs are increasingly applied to a variety of software architecture tasks and often outperform baselines, some areas, such as generating source code from architectural design, cloud-native computing and architecture, and checking conformance, remain underexplored. Although current approaches mostly use simple prompting techniques, we identify a growing research interest in refining LLM-based approaches by integrating advanced techniques.
[ "cs.SE" ]
# 1 Introduction Code optimization refers to rewriting code such that it performs the same task more efficiently [29]. Efficiency is predominantly measured by runtime, but it can also refer to storage, energy, or other resource consumption. Code optimization is a less explored coding task in the context of AI compared with the popularly studied code generation task [55], but it is a natural, and sometimes key, step in the software development cycle. Code performance can be improved in many ways, such as using a lower-complexity algorithm, caching and memoization, data alignment, vectorization, and parallelization [44, 4]. Many of these approaches require low-level system knowledge that code LLMs are not explicitly trained with. We are interested in exploring the use of multiple LLM-powered agents for code optimization. We use multiple LLMs because no single LLM performs best on all problems, even if they are trained with extensive code data. The common practice of benchmarking uses aggregated performance to rank models [5], but the best-ranked model may not be the best performer for every problem. Take the ParEval benchmark [35], which consists of programming problems in scientific computing, for example. When three small open-source models and two GPT models are evaluated on this benchmark (see Appendix A for details), GPT-4o [39] is the overall winner. However, on the “geometry” category of problems, Qwen7B [23] outperforms GPT-4o by $2.5\times$ in terms of speedup; and on the “histogram” category, Deepseek7B [18] and Qwen14B [23] outperform GPT-4o by $1.6\times$ (Figure 5). Additionally, among the three open-source models, Qwen7B excels at “search,” Deepseek7B excels at “scan” and “dense_la,” and Qwen14B excels at “sparse_la,” “reduce,” “fft,” and “sort” (Table 4). Such varied performance suggests the opportunity of exploiting the complementary strengths of different LLMs to deliver the best solution.

Fig. 1: Naive implementation of matrix multiplication $C = AB$ and two rounds of improvement.

Original code:
for (int i = 0; i < n; ++i)
  for (int j = 0; j < n; ++j)
    for (int k = 0; k < n; ++k)
      C[i][j] += A[i][k] * B[k][j];

Improved code, round 1:
for (int i = 0; i < n; ++i)
  for (int k = 0; k < n; ++k)
    for (int j = 0; j < n; ++j)
      C[i][j] += A[i][k] * B[k][j];

Lesson: Reordering loops improves cache locality and increases performance. The order (i, k, j) out of 6 different permutations often performs the best, because of how caches work.

Improved code, round 2:
#pragma omp parallel for
for (int i = 0; i < n; ++i)
  for (int k = 0; k < n; ++k)
    for (int j = 0; j < n; ++j)
      C[i][j] += A[i][k] * B[k][j];

Lesson: Using OpenMP to parallelize the for-loop further improves performance. Parallelizing only the outermost loop performs the best, due to sufficient parallelism and less parallel scheduling overhead.

How does one make use of multiple agents to solve a coding problem? In this work, we advocate the concept of lessons, inspired by classroom experience. A student’s problem-solving skills are not only taught by a teacher in class but also developed through peer learning. 
For example, after receiving the graded homework, Student A, who cannot complete the solution to a problem, may consult Student B, who earns a perfect score, for tips on the correct steps toward the solution. Meanwhile, Student A can also benefit from learning from Student C, who comes up with a wrong solution, to avoid making similar mistakes. Each student learns from the success and failure lessons of others to improve their own problem-solving skills. LLM agents behave similarly. Pre-training is analogous to classroom teaching, while a pre-trained LLM can improve its skills through prompting with lessons. A lesson means any information that helps an LLM better solve the problem at hand. For code optimization, such lessons may be optimization strategies applicable to the current code, common pitfalls that programmers are trapped in, or performance feedback from profilers. Note that a code’s performance can be improved by a combined use of multiple strategies, step by step. Figure 1 shows the initial steps of a classic example of engineering the performance of matrix-matrix multiplications, which can be improved by $3000\times$ in speedup under extensive optimization [44]. The applied strategies shown are loop reordering and parallelization of for-loops. In a multi-LLM setting, such strategies, or lessons, can be iteratively summarized by one or a few LLMs and learned by others, so that they collectively improve the code performance. In this work, we propose the framework LessonL (pronounced as “lesson-nell”) for multiple LLM agents to collaboratively solve problems. Central to the framework is the lesson mechanism for agents to improve their collective intelligence through learning from each other. The main technical innovation is a solicitation–banking–selection framework that generates, deposits, and filters lessons incurred during the collective problem-solving process. 
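The solicitation–banking–selection mechanism described above can be sketched roughly as follows. This is a minimal, hypothetical illustration: the `Lesson`/`LessonBank` structures and the speedup-based scoring are our assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a lesson bank: lessons are solicited from agents,
# deposited with a score (here, the measured speedup of the code they
# accompanied), and the best ones are selected for the next round's prompts.
from dataclasses import dataclass, field

@dataclass
class Lesson:
    text: str        # e.g. "Reordering loops improves cache locality."
    speedup: float   # measured speedup of the code the lesson came from

@dataclass
class LessonBank:
    lessons: list = field(default_factory=list)

    def deposit(self, lesson: Lesson) -> None:
        self.lessons.append(lesson)

    def select(self, k: int) -> list:
        # Keep the k lessons whose associated code performed best.
        return sorted(self.lessons, key=lambda l: l.speedup, reverse=True)[:k]

bank = LessonBank()
bank.deposit(Lesson("Reorder loops (i, k, j) for cache locality.", 4.0))
bank.deposit(Lesson("Avoid repeated bounds checks.", 1.1))
bank.deposit(Lesson("Parallelize the outermost loop with OpenMP.", 8.5))
top = bank.select(2)
print([l.text for l in top])
```

In a full loop, the selected lessons would be injected into the prompts of all agents for the next optimization round, so that knowledge discovered by one agent benefits the others.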
Although code optimization is the focal application of LessonL, we demonstrate its use for other tasks (code generation) as well. LessonL is a novel multi-agent framework that resembles how humans learn to solve problems. Compared with other collaboration frameworks, such as those where agents play different roles in the solution process [43, 11, 21, 25, 41], or those where agents independently propose solutions that are subsequently communicated and aggregated [46, 13, 26, 33, 54] (see Section 2 for a literature review), our framework has a few advantages. First, agents do not need to be distinguished by pre-specified roles, as their complementary strengths for a particular problem may be unknown a priori. Second, communication and prompt contents are economical, since lessons are more concise than code. Third, and more importantly, lessons are interpretable and reusable, allowing the explication of coding knowledge and the creation of educational materials.

Our work contributes the following:

1. a finding that LLMs have complementary strengths even on a fine level of tasks (Appendix A);
2. a novel lesson-based framework for multiple agents to collectively solve problems (Section 3);
3. state-of-the-art performance on code optimization and code generation benchmarks (Section 4.2);
4. empirical evidence that a team of small LLMs can significantly outperform a much larger LLM under similar resource consumptions (Section 4.4);
5. representative code examples and lessons (Appendices J and K).

# 2 Related Work

Multi-agent collaboration. Recent advances in prompting techniques have enhanced LLMs' reasoning and planning capabilities for complex tasks like mathematics and coding. CoT [48], Self-Consistency [47], ReAct [52], and ToT [51] demonstrate improved problem-solving skills with reasoning, while Reflexion [43] leverages self-generated feedback stored in episodic memory to enhance decision-making.
As such, LLMs can be used as autonomous agents, and recent research efforts propose ways for multiple agents to collaboratively solve problems. Popular multi-agent frameworks either place agents in different roles (such as planner, coder, debugger, reviewer, and tester in a software project) or have agents independently propose solutions, which are collectively refined and consolidated. For role-based methods, see AgentVerse [11], MetaGPT [21], MapCoder [25], ChatDev [41], Self-collaboration [12], SoA [24], and AgentCoder [22]. For individual solution proposals, see MoA [46], LLM-Debate [13, 14], LLM-Blender [26], DyLAN [33], and AgentPrune [54]. Our work LessonL belongs to the latter category. A unique contribution of LessonL is the lesson mechanism for agents to learn from each other and the explication of coding knowledge as a result of the collaborative learning.

Code optimization. It is an important, but under-explored, use case of LLMs for code. Prior work focuses on specialized models such as HPC-Coder [36, 37] and HPC-Coder-V2 [8] for high-performance computing, all of which require curating and/or generating code data and fine-tuning. Besides the usual fine-tuning approaches, PIE [45] proposes additional adaptation techniques, including retrieval-based prompting, performance-conditioning, and self-play. Among the few agentic approaches, SBLLM [16] retrieves optimization examples from an external dataset and Self-Refine [34] iteratively refines the code based on self-generated feedback. In contrast, our multi-agent framework allows the use of more than one agent and does not rely on an external code dataset. For extended discussions and more related work, see Appendix B.

# 3 The LessonL Framework

Our multi-agent collaboration framework is centered around lessons, which can be any knowledge or information that helps an agent better solve the problem at hand.
When a team of agents participates in the collective problem-solving process, such knowledge is solicited by inspecting each agent's solution and is deposited into a bank for other agents to access. Hence, the solution process becomes iterative, looping between using the existing lessons to update the solutions and using the updated solutions to generate new lessons. For code optimization, such an iterative process can progressively improve the code performance. This process is pictorially illustrated in Figure 2 and sketched in the following pseudocode:

1: Each agent generates an initial solution and the lesson for it. Deposit lessons to the bank.
2: for round $t = 1, \dots, T$ do
3:   Select $k$ lessons from the bank.
4:   Based on the selected lessons, each agent generates an updated solution.
5:   Each agent generates a new lesson for the updated solution. Deposit lessons to the bank.
6:   Adjust the effectiveness of the $k$ selected lessons.
7: end for
8: Return the best solution.

Figure 2: The LessonL framework (which may repeat multiple rounds).

In what follows, we elaborate on a few key components of this framework. The full algorithm is given in Appendix C. We also discuss its extension to other coding tasks, such as code generation.

# 3.1 Lesson Solicitation

Every solution comes with a lesson, either positive or negative. Such lessons explain why the solution is correct or what makes it wrong. For code optimization, multiple tools can be used to grade the output code, such as the compiler, the clock, test cases, and profilers. Consider four resulting scenarios: (a) speedup, (b) slowdown, (c) functional incorrectness, (d) syntax error. We solicit lessons by referring to the original and the modified code as A and B, respectively, prompting the agent with the resulting scenario, and asking for explanations. For (a) and (b), we supplement the scenario description with the measured speedup, and for (d), we include the error message reported by the compiler.
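As a concrete illustration, the four-scenario solicitation step can be sketched as a small prompt builder. The wording below is a hypothetical stand-in (the actual templates are in Appendix D), and `speedup` and `error` stand for the measured speedup and the compiler message, respectively.

```python
def solicitation_prompt(code_a, code_b, scenario, speedup=None, error=None):
    """Illustrative sketch of lesson solicitation (Section 3.1).
    code_a is the original code, code_b the modified code. The real
    prompt templates are in the paper's Appendix D; this wording is
    a hypothetical stand-in."""
    prompt = f"Code A (original):\n{code_a}\n\nCode B (modified):\n{code_b}\n\n"
    if scenario == "speedup":
        prompt += f"Code B achieves a {speedup:.2f}x speedup over code A. Explain why."
    elif scenario == "slowdown":
        prompt += f"Code B achieves only a {speedup:.2f}x speedup (a slowdown). Explain why."
    elif scenario == "incorrect":
        # Test cases are deliberately omitted so the lesson stays general.
        prompt += "Code B fails the correctness tests. Explain the likely mistake."
    elif scenario == "syntax_error":
        prompt += f"Code B does not compile. Compiler message: {error}\nExplain the error."
    return prompt
```

Note that the incorrectness branch carries no test cases, matching the design choice discussed next.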
Such information helps the agent reason about the code pair and articulate precise lessons. For (c), we do not include the test cases, because LLMs tend to reason specifically about the test cases, resulting in insufficiently general lessons. The detailed prompt templates are given in Appendix D.

# 3.2 Lesson Banking and Selection

In each round, $n$ lessons are generated and deposited to the bank, one from each agent. After several rounds, many lessons accumulate. It is unwise to feed all of them to the prompt when asking the agent to improve the code, because they may exceed the prompt capacity and the token consumption can become too costly. Hence, the framework is run with at most $k$ lessons in each round. In practice, it suffices to set $k$ to be, or slightly larger than, $n$.

A set of lesson selection criteria is in place. First, we naturally want to use lessons with high speedups, because they point to the right directions of optimization. However, positive lessons alone cannot address certain limitations of LLMs, such as the lack of guarantee on code correctness. Hence, negative lessons are still valuable, because they can help the agents avoid similar mistakes. It is, however, challenging to decide which negative lessons are more important than others. Finally, we also consider relevance, in case an agent hallucinates irrelevant lessons. For this, we treat the original code as the query and retrieve semantically relevant lessons from the bank, in a manner similar to retrieval-augmented generation [30].

Algorithmically, the selection mechanism goes as follows. If the bank has no more than $k$ lessons, pick them all. Otherwise, sort the lessons according to speedup (more on this in the next subsection) and pick the top $\lceil k / 2 \rceil$. Then, sort the remaining lessons according to their cosine similarity with the original code and pick the top $\lfloor k / 2 \rfloor$.
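The banking-and-selection rule can be sketched in a few lines of Python. Here, `bank` entries are assumed to carry a speedup, an adjustment factor (anticipating the $s \times f$ ranking of Section 3.3), and a cosine similarity with the original code, which in the paper an embedding model would supply. This is a sketch of the mechanism under those assumptions, not the authors' implementation; in particular, how ties $s_j^{(t)} = s$ are handled in the adjustment is our assumption, since the text specifies only the strict cases.

```python
import math

def adjustment_factor(s, round_speedups, eps=0.1):
    """Adjustment factor f (Section 3.3): start c at 0, add 1 + eps for each
    agent whose speedup exceeded s when the lesson was applied, else 1 - eps
    (ties counted as non-improving -- an assumption). Returns f = c / n."""
    c = sum(1 + eps if s_j > s else 1 - eps for s_j in round_speedups)
    return c / len(round_speedups)

def select_lessons(bank, k):
    """Selection rule (Section 3.2): take all lessons if there are at most k;
    otherwise take the top ceil(k/2) by the ranking key s * f, then the top
    floor(k/2) of the remainder by similarity with the original code."""
    if len(bank) <= k:
        return list(bank)
    by_speed = sorted(bank, key=lambda z: -z["speedup"] * z["factor"])
    top, rest = by_speed[: math.ceil(k / 2)], by_speed[math.ceil(k / 2):]
    by_sim = sorted(rest, key=lambda z: -z["similarity"])
    return top + by_sim[: k // 2]
```

Splitting the quota this way guarantees that both high-speedup and highly relevant (possibly negative) lessons reach the prompt.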
Any powerful embedding model for code can be used to compute the cosine similarity; for example, CodeBERT [15].

# 3.3 Effectiveness Adjustment

The effectiveness of a lesson $z$ is naturally measured by the speedup $s$: the higher the speedup, the more useful the lesson. Note that $s$ was calculated when $z$ was created together with code $y$. In other words, $s$ is the speedup of code $y$ over the original code. One's view on the effectiveness of $z$ may change when this lesson is selected to apply later. For example, if the lesson later yields code $y'$ with a worse speedup $s' < s$, should we keep selecting it only because the original $s$ is great? To more effectively select high-speedup lessons, we adjust the speedup dynamically. Rather than sorting lessons according to $s$, we introduce an adjustment factor $f$ and sort them according to $s \times f$ instead, where $f$ accounts for the performance of the lesson $z$ when actually applied. Specifically, when $z$ is applied in some round $t$, it incurs a speedup $s_j^{(t)}$ for each agent $j$. We initialize a correction variable $c$ with zero and add $1 + \epsilon$ to it whenever $s_j^{(t)} > s$, or add $1 - \epsilon$ when $s_j^{(t)} < s$, for some $\epsilon \in (0, 1)$. After looping over all $n$ agents, the adjustment factor $f$ is defined as $c / n$. A value of $f$ greater than 1 means that more output codes enjoy a speedup greater than $s$ when applying the lesson.

# 3.4 Extension to Other Coding Tasks

The LessonL framework is general. It can be extended to other coding tasks provided that the key components discussed above are properly adapted. For example, in Python code generation, lessons need to be solicited for only one scenario, functional incorrectness, because if the output code passes all test cases, the iteration immediately terminates. We give the prompt templates in Appendix E for completeness.
Additionally, we still perform lesson banking, but the $k$ lessons are selected based on the number of test cases passed and semantic relevance instead of speedup.

# 4 Experiments

We perform a comprehensive set of experiments to evaluate LessonL. The main finding is that LessonL enables an effective collaboration among LLMs for code optimization and other coding tasks. Using our framework, a team of small LLMs that collaborate through the sharing of learned lessons can outperform larger LLMs and multi-LLM collaboration methods.

# 4.1 Setup

Benchmarks. We use six coding benchmarks to perform the evaluation. (1) ParEval [35] includes 60 coding tasks (per programming mode) related to scientific and parallel computing. We experiment with the serial and OpenMP modes, as they are less demanding on the computational resources required for evaluation. ParEval was originally designed for code generation (i.e., write the code given a verbal description). We adapt it to code optimization (i.e., write a faster version of a given source code); see the adaptation details in Appendix F. (2) PolyBench [40] contains 30 numerical tasks with static control flows from domains like linear algebra, image processing, physics, and statistics, which we adapt for code optimization. (3) HumanEval [10] consists of 164 programming problems, assessing language comprehension, algorithms, and basic mathematics. (4) HumanEval+ [32] extends HumanEval with 80x more test cases. (5) MBPP [3] consists of around 1,000 crowd-sourced entry-level Python problems covering fundamentals and standard library use. (6) MBPP+ [32] extends MBPP with 35x more test cases. The last four benchmarks evaluate code generation.

LLM agents. We use five models in total.
Three are open-source, small models: Deepseek7B (deepseek-coder-7b-instruct-v1.5) [18], Qwen7B (Qwen2.5-Coder-7B-Instruct) [23], and Qwen14B (Qwen2.5-Coder-14B-Instruct) [23]; and two are GPT models: GPT-4o mini (gpt-4o-mini) [38] and GPT-4o (gpt-4o) [39]. The open-source models are used to evaluate multi-agent collaborations; GPT-4o mini is closed-source but its size is comparable to the open-source models; GPT-4o is much larger and sets a strong baseline for single-agent performance.

Baselines. We compare LessonL with three categories of models/methods. (1) Single-agent standard prompting. All the above LLMs are experimented with. Among the open-source models, preliminary findings suggest that Qwen14B slightly outperforms the other two (see Table 4 in Appendix A); hence, we omit the results of these two in subsequent tables to save space. (2) Single-agent reasoning or reflection. We use Qwen14B as the agent and experiment with CoT [48] and Reflexion [43]. CoT applies a chain-style thought process to reason about the steps, while Reflexion iteratively reflects on the task feedback to improve the solution. (3) Multi-agent collaboration. We experiment with MapCoder [25] and MoA [46]. MapCoder uses agents for example retrieval, solution planning, coding, and debugging. For our purposes, all agents are Qwen14B. In contrast, each agent in MoA independently codes the solution and refines the aggregated solution. We use the open-source models as agents and GPT-4o as the final aggregator. Similarly, in our framework LessonL, the agents are open-source models.

Table 1: Comparison of model/method performance for two code optimization benchmarks. "Correct" means correctness; ">2x" means the proportion of problems achieving a speedup greater than 2x; "Speedup" is the geometric mean speedup across all problems in the benchmark. The results are reported as the mean and standard deviation over three runs.
For details on baseline implementation, hyperparameters, hardware, and timing, see Appendix G.

# 4.2 Benchmark Results

We evaluated the performance of LessonL on code optimization and code generation tasks. For code optimization, we studied the ability of LessonL to optimize serial and parallel code drawn from the ParEval and PolyBench benchmarks. For code generation, we investigated LessonL's performance on HumanEval, HumanEval+, MBPP, and MBPP+.

Code optimization task. In the code optimization task, models are given a correct program and are tasked with generating a faster version of that code while maintaining correctness. The speedup achieved by a model is measured as the ratio of the runtime of the original code to that of the new code. We evaluate the performance of a model by measuring the geometric mean speedup achieved over the set of codes in the benchmark. The geometric mean is preferred over the arithmetic mean because it is more resilient to large outliers that can cause a single code to unduly influence the average. For example, an algorithmic optimization that improves the asymptotic runtime of a code from $\Theta(n^2)$ to $\Theta(n \log n)$ could result in a 1000x speedup for a sufficiently large input, making the arithmetic average a poor measure of a model's ability to optimize diverse sets of codes. In the case where a new code is incorrect or slower than the original, we consider the speedup to be 1. Similar to [45], which uses the same convention, our rationale is that any such new code may be discarded in favor of keeping the original code.

Code optimization results. Table 1 presents our experimental results on the code optimization benchmarks. We compared the performance of LessonL with three single-agent models (Qwen14B,

Table 2: Comparison of model/method performance (pass@1) for four code generation benchmarks.
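The speedup convention just described (geometric mean, with incorrect or slower code clamped to 1) can be sketched as a small helper; this is a generic reference implementation of the metric, not the authors' evaluation harness.

```python
import math

def geomean_speedup(results):
    """Evaluation-metric sketch: results is a list of (correct, raw_speedup)
    pairs, one per benchmark problem. Incorrect or slower-than-original code
    counts as speedup 1, following the convention in the text; the geometric
    mean then resists the single-code outliers that would dominate an
    arithmetic mean."""
    clamped = [s if correct and s > 1.0 else 1.0 for correct, s in results]
    return math.exp(sum(math.log(s) for s in clamped) / len(clamped))
```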
GPT-4o and GPT-4o mini results are taken from the EvalPlus leaderboard [32].

Table 3: Ablation study. Benchmark: ParEval (serial and OpenMP modes). The results are reported as the mean and standard deviation over three runs.

GPT-4o mini, and GPT-4o) and four prompting/agentic methods (CoT, Reflexion, MapCoder, and MoA). We report each method's average speedup, the proportion of correct code, and the proportion of codes that achieve a speedup greater than 2x. A few observations follow. First, LessonL achieves the best results across metrics and benchmarks. This highlights the attractiveness of its novel collaborative lesson mechanism, which enables agents to learn from each other and iteratively improve solutions. Second, multi-agent collaborations generally surpass single-agent approaches. Following LessonL, MapCoder and MoA often yield the next best results with role specialization or solution aggregation. Third, achieving large speedups is more challenging for serial code than for OpenMP code. In serial mode, most methods yield a speedup below 2x, with less than 20% of solutions surpassing a 2x speedup. Conversely, in OpenMP mode, most methods achieve more than a 2x speedup, with more than 40% of solutions surpassing 2x. This is expected, as many scientific computing and image processing tasks are inherently parallelizable. For instance, LessonL achieves an average 3.4x speedup with 8 threads across both benchmarks.

Code generation. While code optimization is the main testbed of LessonL, we also experiment with a widely studied coding task, code generation. Table 2 compares the results for four commonly used benchmarks using the correctness (pass@1) metric. We see that LessonL is the best in three out of four benchmarks, with MoA coming next. In the remaining benchmark, the ranking positions of these two frameworks are swapped.
This observation echoes that of code optimization, corroborating the usefulness of multi-agent collaboration for solving coding tasks and highlighting the effectiveness of our lesson framework.

# 4.3 Ablation Study

We conducted a study of LessonL to investigate the impact of its design components and the number of collaboration rounds.

Ablating components of the framework. We analyzed the lesson selection mechanisms in our ablation study. Table 3 compares the full version of LessonL (0) with five ablated variants: (1) lessons selected based only on speedup; (2) lessons selected based only on relevance; (3) lessons selected based on speedup and relevance, but without the speedup adjustment; (4) random selection of lessons; (5) no lessons used. We see that most ablated variants suffer a decrease in speedup. In serial mode, the variants without high-speedup lesson prioritization (2, 4) suffer the most, while in OpenMP mode, variants (1, 3) show larger performance drops. This trend also applies to the proportion of problems with >2x speedup, highlighting different optimal strategies for each mode. Serial mode benefits from high-speedup lesson selection with dynamic adjustments, while OpenMP mode favors high-relevance lessons and speedup adjustments. Interestingly, in serial mode, the correctness metric sometimes benefits from ablations, even though the relationship between lessons and correctness is indecisive. While variant (4) marginally outperforms LessonL in OpenMP speedup (+0.01), LessonL demonstrates significantly better stability (0.03 vs. 0.28 standard deviation). Finally, the consistent underperformance of variant (5) across all metrics confirms that lessons are fundamental to LessonL.

Varying the number of iterations. Figure 3 plots the performance (speedup) achieved by LessonL and alternative methods when varying the number of collaboration rounds.
The concept of a "round" differs across architectures: for MoA, a round is a "layer" in the MoA architecture, and for MapCoder it is a "debugging round." The performance of GPT-4o is also included to provide a baseline for comparison. We see that while LessonL and Reflexion consistently benefit from using more rounds, the same is not true for the MoA and MapCoder frameworks. The performance of MoA actually decreases when increasing the number of rounds, and there is no clear trend when using more than 2 rounds in MapCoder. Even when using a much smaller model, Reflexion surpasses GPT-4o and continues to improve when further increasing the number of rounds. In contrast, MoA's performance drops below GPT-4o when using more than 3 rounds, even though MoA uses GPT-4o as its aggregator.

Figure 3: Performance over rounds (also called "layers"). Benchmark: ParEval (serial mode).

We studied the outputs of each method and provide examples in Appendix H. For a program that finds the $k$th smallest element of an array, we observe that MoA implements a divide-and-conquer algorithm in early rounds, achieving high speedup. In subsequent rounds, however, MoA introduces overheads and eventually resorts to a simple solution that calls std::nth_element, which results in slower code. In contrast, in the example of performing convolutions on an image, LessonL increases the speedup step by step by removing redundant zero-additions, separating boundary and interior computations, and avoiding extra data copies. These examples illustrate how the lesson mechanism can selectively inject useful information to aid improvements in coding solutions.

# 4.4 Cost Analysis

Code performance is not the only dimension that determines the effectiveness of a code model, considering the complex interplay of monetary and time costs in single-LLM and multi-LLM methods.
We investigate where LessonL stands for the code optimization task by considering budget constraints with respect to money and time. For this, we follow [46] and extract the pricing information of LLMs from API providers or guesstimate the price based on similar models. We also use the number of floating-point operations (FLOPs) as a proxy of latency when considering the time cost. See Appendix I for the detailed estimation. The estimation may be time sensitive (for example, price varies over time and across providers) and may include educated guesses (for example, the sizes of closed-source models), but it paints a reasonable picture of the performance landscape.

Figure 4 shows two speedup-versus-cost plots for ParEval, with the Pareto front indicated by the dashed line. Besides the aforementioned models/methods, we add Qwen14B(20) and CoT(20), which prompt the respective model/method 20 times and select the fastest code (an inference-time scaling approach [6]). We see a few clusters in the plots. First, LessonL is Pareto optimal because of its superior speedup compared with competitors. Second, Qwen14B, GPT-4o mini, and CoT are also Pareto optimal, or nearly optimal, because of their low costs in both money and time. Third, GPT-4o is on the Pareto front regarding time and barely misses the front regarding money (its dollar cost is slightly higher than that of LessonL). We consider all these methods to be cost-effective. For the remaining methods, inference-time scaling (CoT(20) and Qwen14B(20)) does not help, and neither do Reflexion or the multi-agent methods. In fact, MapCoder and MoA appear to be the least cost-effective, because they not only iterate multiple times but also use multiple agents. This signifies the challenge faced by multi-agent methods, which are generally the most costly, and in contrast reflects the attractiveness of LessonL.
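The dashed Pareto front in plots like Figure 4 is simply the set of methods not dominated in both cost and speedup; for reference, it can be computed with a straightforward dominance check. This is a generic sketch, not tied to the paper's measured values.

```python
def pareto_front(points):
    """Return the names of Pareto-optimal entries among (name, cost, speedup)
    tuples: a method is on the front if no other method is at least as cheap
    and at least as fast, with a strict improvement in one of the two."""
    front = []
    for name, cost, speed in points:
        dominated = any(
            (c <= cost and s >= speed) and (c < cost or s > speed)
            for other, c, s in points if other != name
        )
        if not dominated:
            front.append(name)
    return front
```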
Finally, compared with the much larger model GPT-4o, LessonL with small models yields significantly better speedups while consuming similar resources.

Figure 4: Performance versus costs and latency. Benchmark: ParEval (serial mode). The dashed line is the Pareto front.

# 4.5 Case Study

A few examples reveal the interesting lessons learned by the LLMs in our framework; see details in Appendices J (Geometry) and K (DFT). In the Geometry example, the original code finds the smallest distance among $n$ points in two dimensions by using the straightforward $O(n^2)$ implementation. The final optimized code uses a divide-and-conquer algorithm that reduces the complexity to $O(n \log n)$ and achieves a 74.31x speedup. This optimization is a complete change of algorithm, and it is nontrivial even for human programmers. Several of the lessons affirm the benefit of divide-and-conquer, while others point out syntactic errors and discourage the agents from repeating them. More interesting is the DFT example, where the original code implements the straightforward algorithm at $O(N^2)$ cost. It is well known that the DFT can be implemented with the FFT algorithm at $O(N \log N)$ cost, and some agents try to implement it, but they fail with subtle bugs. The lessons flag these failures, and eventually the agents choose to optimize the code by precomputing the repeatedly used exponentials, yielding a considerable 10.83x speedup. Moreover, one lesson stands out by stating a syntactic error caused by using Intel intrinsic instructions. The evaluation pipeline and hardware do not support the particular intrinsics, and the involved agent later refrains from using them in the implementation.
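The exponential-precomputation idea from the DFT case study can be illustrated with a small Python sketch: an $O(N^2)$ DFT where the $N$ distinct twiddle factors are tabulated once, exploiting the fact that $e^{-2\pi i kn/N}$ depends only on $(kn) \bmod N$. This mirrors the optimization the agents converged on, not their actual code.

```python
import cmath

def dft_naive(x):
    """Reference O(N^2) DFT, recomputing every exponential on the fly."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dft_precomputed(x):
    """Same O(N^2) DFT, but with the N distinct twiddle factors precomputed:
    exp(-2j*pi*k*n/N) depends only on (k*n) mod N, so a table of N entries
    suffices and the costly exp calls drop from N^2 to N."""
    N = len(x)
    w = [cmath.exp(-2j * cmath.pi * m / N) for m in range(N)]
    return [sum(x[n] * w[(k * n) % N] for n in range(N)) for k in range(N)]
```

Both routines return identical spectra; the second avoids recomputing transcendental functions in the inner loop, which is where the case study's reported gain comes from.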
Recent studies show that LLMs possess different skills and specialize in different tasks. In fact, we observe that their varied performance occurs at several levels of granularity. For example, in the code optimization task, code LLMs excel at different optimization categories and no single one dominates the others. This observation prompts the question of how one leverages multiple LLM agents to solve a coding problem without knowing their complementary strengths a priori. We argue that a team of agents can learn from each other's successes and failures so as to improve their own performance. Thus, a lesson is the knowledge produced by an agent and passed on to other agents in the collective solution process. We propose a lesson-based collaboration framework, design the lesson solicitation--banking--selection mechanism, and demonstrate that a team of small LLMs with lessons learned can outperform a much larger LLM and other multi-LLM collaboration methods.
[ "cs.AI", "cs.LG", "cs.MA", "cs.SE" ]
# I. INTRODUCTION

Installation of CCTV cameras has become a necessity nowadays almost everywhere, including shopping malls, theatres, houses, and restaurants. This is due to the increase in violent activities all over the world and the importance of monitoring them to reduce human and property loss. But simply keeping an eye is not enough; rather, it is crucial to recognize and classify violent activity in a timely manner. For this, [3] developed a system based on transfer learning to finely classify violent behavior, trained on the Movie and Hockey datasets. The paper utilizes GoogleNet, a 22-layer deep neural network, as a pre-trained source for transfer learning experiments. GoogleNet is chosen for its efficiency, having 12 times fewer parameters than AlexNet, and is trained on the ImageNet dataset with 15 million annotated images across 1000 categories. In the experiments, the last classification layer is modified to distinguish between violent/fight actions and non-fight actions using 2 classes from the Hockey and Movies datasets. The Hockey dataset primarily consists of 1,000 recorded hockey game videos, while the Movie dataset contains 200 dynamic video clips that include scenes from different scenarios covering both violent and non-violent behavior. Human activity recognition has been studied very deeply in recent years and is of interest to the thriving research community. It is a compelling problem due to the diverse aspects it brings to the table, but it also has many real-time challenges to be addressed. This task can be classified into two broad domains, namely human posture detection and human action recognition. A unique multi-task deep learning method for action recognition and estimation of human morphology is presented in [1]. The proposed 3D pose method achieves high precision with low-resolution feature maps, avoiding the need for costly volumetric heat maps by predicting specialized depth maps for body joints.
The CNN architecture, combined with prior posture detection, enables efficient multi-scale management of individual objects and actions. The model is trainable with a mix of 2D and 3D data, showcasing significant improvements in 3D pose estimation. Simultaneous training with single frames and video clips is seamless. The carefully designed architecture effectively addresses the challenge of multi-tasking human pose and action recognition, outperforming separate task learning. Joint learning of human poses consistently enhances action recognition, and the model is highly scalable, offering flexibility in cutting at multiple levels for predicting actions. Based on such findings and observations, researchers went on to explore the study of human pose to a greater extent. The paper [2] introduces a model for 3D human pose estimation that effectively integrates long-range spatio-temporal dependencies and three-dimensional configuration in a comprehensive way. The model is improved by introducing a unique self-supervised correction mechanism involving two different learning tasks: 2D-to-3D pose conversion and 3D-to-2D pose estimation. This correction mechanism maintains geometric regularity between the 2D projections of 3D poses and the estimated 2D poses, allowing the model to further process intermediate pose estimates bidirectionally using the predicted 2D human pose. Importantly, the given correction mechanism bridges the gap between three-dimensional and two-dimensional human poses, enabling the use of external 2D human pose data without the need for extra three-dimensional annotations. Further research will focus on extending the self-supervised correction mechanism to time-referenced relationship modeling in sequential human activities, such as recognizing different physical actions.
Additionally, newly designed supervision targets will incorporate various three-dimensional geometric attributes for cost-effective model building. To integrate such solutions with real-time cases, it is important that devices be movable and able to capture features at any point in space. The paper [6] introduces Wi-Mose, a novel system for movable 3D human pose estimation utilizing standard WiFi gadgets. The system is divided into three main components: dataset acquisition, processing of data, and estimation of poses. The data collection process involves the usage of dual receivers, a transmitter, and a compact refracting camera to gather video frames and Channel State Information (CSI). In the initial stage of processing data, unrefined CSI items are converted into CSI images, and consecutive video frames are transformed into human key-point coordinates for supervised learning. The position estimation element simplifies the process of reconstructing three-dimensional human posture skeletons by extracting characteristics from CSI pictures and converting them into key-point values. By constructing CSI images, the system allows a neural network to derive pose-based specifications that are independent of location. The designed neural network converts these details into salient values. The results of the experiment demonstrate that Wi-Mose achieves improved accuracy, with 29.7 mm and 37.8 mm P-MPJPE in Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) scenarios, representing a 21% and 10% enhancement compared to the baseline. Future work aims to extend Wi-Mose's application to various environments for increased accuracy of 3D movable human pose calculation.

# II. RELATED WORK

It is important to analyze the various techniques used so far for crime scene analysis through different parameters and conditions, which have led to significant advancement in this field.
This task can be broken down into two broad spectrums, namely action recognition and classification.

# 2.1. 3D Pose Estimation

Diogo C. Luzivon [1] aims to develop a system which detects poses and recognizes actions of humans in 2D and 3D architectures simultaneously. The model is trained using both 2D images and video clips to attain improved 3D pose estimation. It utilizes the MPII Human Pose Dataset, a two-dimensional dataset with 25,000 images from YouTube, annotated for sixteen joints in the human body. Human3.6M is a three-dimensional dataset capturing eleven subjects performing seventeen different activities, recorded simultaneously by four cameras and annotated with high-quality three-dimensional poses for 17 joints. Penn Action is a 2D dataset for action recognition comprising 2,326 videos of sports-related activities, marked manually for thirteen human body joints. NTU RGB+D consists of 56,000 Full HD videos, featuring around sixty actions conducted by forty actors and registered by three cameras, including color videos, depth maps, and 3D Kinect poses. The model built using these datasets is much more accurate and scalable compared to previous approaches that treat pose detection and action recognition as separate tasks.

# 2.2. Transfer Learning

Violence detection is of utmost importance in developing video surveillance systems. To this end, researchers [3] trained a deep learning CNN model utilizing the Hockey and Movie datasets for fight action recognition using a 10-fold cross-validation technique. This system achieved around 99% accuracy on both the Hockey and Movie datasets for detecting crime scenes. The system in [5] utilizes a Convolutional Neural Network (CNN) based on the VGG-16 architecture as a feature extractor, followed by custom-built state-of-the-art classifiers applied to a database of gun-type equipment.
The key innovation is the explicit reformulation of layers as residual functions with reference to the layer inputs. The system is capable of real-time detection and demonstrates robustness across variations in affine transformations, scale, rotation, and partial occlusion. The system undergoes cross-validation with different parameter values, and the optimal set of parameters for visual handheld gun detection is identified. The evaluation on the ImageNet dataset includes residual networks with a depth of up to 152 layers, 8 times deeper than VGG nets, yet achieving higher accuracy. The overall system performance is evaluated through Receiver Operating Characteristic (ROC) curves, illustrating the trade-off between specificity and sensitivity.

# 2.3. Integration with CCTV

Further, Cheng [9] addresses the need for automated violence detection in surveillance camera footage, emphasizing the current prevalence of such cameras in public places. While these cameras have contributed to a reduction in the overall crime rate, their primary use has been for post-event analysis rather than real-time prevention. This model incorporates a unique feature by introducing an optical flow channel branch that contributes to the development of a pooling mechanism. The authors introduce a new dataset and method for violence detection in surveillance videos. The RWF2000 dataset is highlighted as the largest surveillance video dataset for violence detection in realistic scenes to date. Additionally, [17] identifies the need to discriminate small hand-held objects such as smartphones from weapons like knives and guns for accurate prediction of violent action and efficient categorization of video feeds captured from CCTVs. The focus is on detecting weapons and objects that could be mistaken for a handgun or knife when manipulated by hand in video surveillance scenarios.
The experimental study, conducted using a database containing six objects (pistol, knife, smartphone, bill, purse, and card), demonstrates that the proposed ODeBiC methodology effectively reduces the number of false positives through binarization techniques.

# III. METHODOLOGY

The proposed system comprises a Raspberry Pi equipped with a camera module for capturing real-time video data. The architecture of the system is separated into three primary components: the collection of images, image preprocessing, and image recognition, followed by the dissemination of results.

# 3.1. Hardware Setup for Video Capture

The physical setup involves configuring the camera module with the Raspberry Pi to capture a live video feed from the surroundings where it is placed. The camera module is fitted with a fisheye lens in order to attain maximum coverage of the surroundings and wide-area visibility. Configuring a camera module with a Raspberry Pi involves attaching the module to the Pi's CSI port and integrating a fisheye lens for wide-area visibility. After installing the Raspberry Pi OS and camera software packages like `raspistill` and `raspivid`, additional streaming software may be installed for live streaming. Configuration includes adjusting camera settings such as resolution and exposure, calibrating for fisheye lens distortion, and setting up features like motion detection. Testing ensures functionality, while optimization involves adjusting settings for quality and performance, considering factors like lighting and placement. Regular monitoring and maintenance, including cleaning the lens and keeping software updated, help ensure continued operation and security of the setup. This process enables effective live video capture with wide-area coverage using the Raspberry Pi and camera module.

Fig 1: System Architecture Diagram

Fig 2: Sample image from camera sensor

# 3.2.
Preparation of Custom Dataset

To train and enhance the performance of our ML model, we have prepared a custom dataset comprising a variety of videos. It contains videos with and without weapons for training and evaluation. It is mainly prepared by sampling well-known datasets such as the NTU CCTV-Fights, Hockey Fights, SOHAS, and WVD datasets, which have been used for violence detection models previously. The videos are of 720p resolution with frame dimensions of 1280x720. A maximum of 500 videos are taken into account, each 5 to 10 seconds long, for feature extraction.

# 3.3. Training the ML Model

In order to develop our machine learning model for the detection of violence and the analysis of crime scenes, we utilize a CNN-LSTM algorithm that involves several critical stages. Initially, we gathered our custom IIS (Intelligent Image Sensing) dataset, which consists of labeled video clips or frames indicating violent and non-violent activities. Next, we preprocess this data by extracting frames, resizing them uniformly, and converting them into a suitable format, such as numpy arrays. Following this, we employ a pre-trained CNN model, such as VGG or ResNet, to extract spatial features from each frame, or train a CNN from scratch if the dataset size permits. Subsequently, we integrate an LSTM network to capture the temporal dynamics of the video sequence, inputting the sequence of feature vectors extracted by the CNN. After splitting the dataset into training, validation, and testing sets, we train the CNN-LSTM model, adjusting the hyperparameters to prevent overfitting based on the validation set performance. Finally, we assess the trained ML model using the custom testing dataset, utilizing metrics like accuracy, recall, and F1-score to determine its ability to detect violence.

Fig 3: Model Architecture

# 3.4.
Building the Super Image

To construct a SUPER IMAGE, the primary concept revolves around preserving aspect information and minimizing information loss. This involves selecting a sample size, a custom sampler, a ratio between dimensions, and a spatial arrangement for rearranging frames, while retaining the aspect ratio of the actual frames. For instance, in the case of a 1280 x 720 frame resolution (width x height), given that width is greater than height, the aim is to produce a final constructed image whose height is close to its width. This process ensures that the resulting image maintains the original aspect ratio while enhancing saliency and minimizing distortion. Through careful selection of sample sizes, samplers, and spatial arrangements, the approach maximizes the preservation of aspect information, ultimately yielding a SUPER IMAGE with optimized visual quality.

# Selection of Sampler

The type of sampler used is a major factor in determining the ability of the various classifiers. To create a SUPER IMAGE, different sampling methods can be employed, each with its unique approach:

1. Uniform Sampler: This method selects frames which are uniformly distributed throughout the video by specifying the parameter k, which determines the number of frames to select. It calculates the stride between selected frames and then chooses k equally spaced indices. This ensures that frames are selected at regular intervals throughout the video.

2. Random Sampler: In contrast, the random sampler selects k frames randomly from the video without replacement. It utilizes the `np.random.choice()` function to generate a list of k unique random indices. This method introduces randomness in frame selection, ensuring a diverse representation of frames from the video.

3.
Continuous Sampler: The continuous sampler chooses k frames evenly spread throughout the whole video. Similar to the uniform sampler, it calculates the stride between selected frames and then chooses k indices. However, this method ensures that frames are evenly distributed across the video timeline, providing a comprehensive representation of the video content over time.

4. Mean Absolute Difference (MAD) Sampler: In this approach, k frames are selected based on the smallest average absolute difference between adjacent frames. It calculates the absolute differences between each pair of adjacent frames and then selects the k frames with the smallest average absolute difference. This method aims to capture frames that exhibit minimal change from one frame to the next, ensuring consistency and stability in the selected frames.

5. Lucas-Kanade Sampler: Utilizing the Lucas-Kanade algorithm, this sampler computes optical flow between adjacent frames to measure motion. It selects the k frames with the largest amount of motion, ensuring that dynamic content is represented. By computing optical flow between adjacent frames and selecting frames with significant motion, this method captures the key dynamic elements of the video, providing a dynamic representation in the SUPER IMAGE.

These methods offer diverse approaches to frame selection, each suited to capture different aspects of the video content, such as temporal distribution, visual consistency, and motion dynamics. By employing these sampling methods, the SUPER IMAGE construction process can effectively capture the salient aspects of the video while maintaining the desired aspect ratio and minimizing information loss.

Fig 4: Sample Super Image

# 3.5.
Integration of the Camera Module with the ML Model

Upon satisfactory performance, we deploy the model in a production environment and integrate it with the hardware setup using frameworks like Flask for real-time violence detection. To do so, we first set up Flask by installing it via pip and creating a new Python file for the Flask application. Within this application, we define routes to handle different types of requests, such as GET and POST. These routes contain logic to interact with both the hardware setup and the machine learning model. For hardware interaction, functions are called to perform tasks such as capturing images using the Raspberry Pi camera module. The machine learning model is integrated by loading it into the Flask application, either as a pre-trained model or one trained within the application. Routes or endpoints are created to send data to the model and receive predictions. Incoming requests from clients, such as web browsers or mobile apps, are handled within the Flask routes, processing data from the hardware and sending it to the model for prediction. The Flask server then returns appropriate responses, which may include predictions from the machine learning model or status updates about the hardware setup. Finally, the Flask server is run, ensuring accessibility to the hardware setup and any client applications needing interaction. This setup enables seamless real-time communication between the machine learning model and the hardware setup, facilitating a wide range of crime scene investigations. We will continuously monitor and improve the model, periodically retraining it on new data to ensure effectiveness in detecting violence over time. Through these steps, a robust CNN-LSTM model for violence detection can be developed and deployed for real-world applications.

Fig 5: Model Output on Video

# IV. RESULTS

# 4.1.
Evaluation of Model Performance

The performance of the Convolutional Neural Network (CNN) model was thoroughly assessed using various metrics, including accuracy, precision, recall, and F1-score. The system achieved 83% accuracy in detecting violent events in real-time video streams. With precision and recall values of 91% and 74%, respectively, the model effectively identified true positive cases of violence without a significant number of false positives. The F1-score, which consolidates precision and recall, was calculated to be 90%, showcasing the balanced performance of the detection system.

Fig 6: Graphical Evaluation of Model Performance

Fig 7: Accuracy Chart of Model

Fig 8: Loss Chart of Model

# 4.2. Real-World Testing

To evaluate the system's real-world effectiveness, field tests were conducted in various environments with different levels of activity and lighting conditions. These tests revealed the system's resilience and adaptability, with consistent performance metrics across the different settings. During these tests, the system successfully flagged incidents of violence, enabling prompt responses.

# 4.3. Computational Efficiency

The Raspberry Pi's computational efficiency was a focal point of analysis. The device processed real-time data effectively, with an average processing time of 2 seconds per frame. This performance, considering the hardware limitations of the Raspberry Pi, highlights the optimization of the ML model and the overall system design.

# 4.4. System Limitations and Challenges

Despite the promising results, certain challenges were identified, particularly in high-density crowd scenarios and low-light conditions. Under these complex conditions, accuracy slightly decreased, indicating areas for future improvement.
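For reference, the reported precision, recall, and F1-score relate through the standard formulas sketched below. The confusion-matrix counts are hypothetical, chosen only to illustrate the computation; the paper does not report raw counts.

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # F1 is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts, for illustration only.
p, r, f1 = precision_recall_f1(tp=91, fp=9, fn=32)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

The same three counts determine all three metrics, which is why reporting any two of precision, recall, and F1 fixes the third.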
The increasing global crime rate, coupled with substantial human and property losses, highlights the limitations of traditional surveillance methods in promptly detecting diverse and unexpected acts of violence. Addressing this pressing need for automatic violence detection, we leverage Machine Learning to detect and categorize violent events in video streams. This paper introduces a comprehensive framework for violence detection and classification, employing Supervised Learning for both binary and multi-class violence classification. The detection model relies on 3D Convolutional Neural Networks, while the classification model utilizes the separable convolutional 3D model for feature extraction and a bidirectional LSTM for temporal processing. Training is conducted on diverse customized datasets with frame-level annotations, incorporating videos from surveillance cameras, human recordings, and the Hockey Fight, SOHAS, and WVD datasets across various platforms. Additionally, a camera module integrated with a Raspberry Pi is used to capture a live video feed, which is sent to the ML model for processing. The system thus demonstrates improved performance in terms of computational resource efficiency and accuracy.
# 1 Introduction

Retrieval systems play a pivotal role in many NLP applications, enabling models to utilize relevant information from large corpora such as document collections, web pages, or conversational histories (Lewis et al., 2020; Gao et al., 2023). Relevance in retrieval can be established through a range of connections, from explicit lexical or semantic similarity to more implicit, context-dependent associations. However, widely used retrieval systems are highly reliant on surface-level cues such as exact matches, repetition, or where a fact appears in the text (Ram et al., 2023; Coelho et al., 2024; Fayyaz et al., 2025).

Figure 1: Query "Who was visiting a museum on October 06, 2024?" with a negative document (2024-09-26 12:14, Amarantha: "... I visited the exhibit at the Rijksmuseum in Amsterdam 5 days ago ...", retrieval score 0.42) ranked above the positive document (2024-10-13 11:30, Maeve: "... when I visited the Smithsonian National Air and Space in Washington, D.C. seven days ago ...", retrieval score 0.39).

Additionally, many popular benchmarks (e.g., BEIR (Thakur et al., 2021)) do not surface these issues as their queries have lexical overlap with relevant documents (Shao et al., 2025). There are attempts to create reasoning-intensive datasets that push beyond lexical and surface-level matches. For instance, RAR-b (Xiao et al., 2024) reframes multiple-choice reasoning tasks into retrieval problems, BIRCO (Wang et al., 2024) collects multi-faceted questions across five domains, and BRIGHT (Su et al., 2025) uses full StackExchange problem descriptions as queries against the pages they cite. Since the reasoning burden lies on the query side, techniques like query expansion, chain-of-retrieval inference, or agentic retrieval can help models handle complex prompts and outperform standard retrievers (Wang et al., 2025; Song et al., 2025; Li et al., 2025).
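To make the surface-cue problem concrete, a toy term-overlap scorer (the signal sparse retrievers such as BM25 build on, here stripped of idf weighting and length normalization) gives the two Figure 1 documents identical scores, so lexical matching alone cannot separate the positive from the negative. The texts below paraphrase the figure:

```python
import re
from collections import Counter

def lexical_score(query: str, doc: str) -> int:
    """Toy lexical scorer: count query-term occurrences in the document.
    Not real BM25; it isolates the surface-overlap signal only."""
    tokens = lambda s: re.findall(r"\w+", s.lower())
    counts = Counter(tokens(doc))
    return sum(counts[w] for w in tokens(query))

query = "Who was visiting a museum on October 06 2024"
negative = "2024-09-26 Amarantha: I visited the exhibit at the Rijksmuseum in Amsterdam 5 days ago"
positive = "2024-10-13 Maeve: when I visited the Smithsonian National Air and Space seven days ago"

# Each document matches the query only on the token "2024": a tie.
print(lexical_score(query, negative), lexical_score(query, positive))
```

The distinguishing fact, that "seven days ago" relative to 2024-10-13 means October 06, never surfaces as a shared token.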
In contrast, we present IMPLIRET, a benchmark that shifts reasoning to document-side processing: the queries are simple, but relevance depends on facts stated implicitly within the documents, spanning arithmetic, temporal, and world knowledge relationships that require inference to uncover. Figure 1 gives an example: the correct document requires resolving a reference to a date that is implicit, i.e., not stated directly. An effective retrieval system must infer such implicit facts from the document content, ideally as part of the indexing process, in order to retrieve the correct result at query time. Yet current retrieval methods fail to capture the implicit signals needed for accurate retrieval. We evaluate sparse and dense approaches, including BM25 (Robertson and Zaragoza, 2009), ColBERT (Santhanam et al., 2022), and Dragon+ (Lin et al., 2023), and observe consistently poor performance: the best nDCG@10 is only 15.07% across our benchmark. To test whether long-context capabilities could mitigate the problem, we evaluate models in a setting where the positive document is included among several distractors. While GPT-4.1 answers correctly when given only the positive document, its performance drops sharply even with just ten documents in-context, achieving a ROUGE-1 recall of 35.06%. Our dataset IMPLIRET introduces a new setting that requires document-side reasoning for retrieval rather than query-side reasoning. IMPLIRET presents challenges for both retrieval and long-context processing, highlighting the need for models that can reason over implicit information embedded in large corpora.

# 2 IMPLIRET

In IMPLIRET, we construct examples whose relevance depends on information that is implicitly stated in the document, i.e., it can only be discovered through reasoning, not by surface-level overlap. IMPLIRET covers three reasoning categories: World Knowledge, Arithmetic, and Temporal.
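The temporal inference in the Figure 1 example reduces to date arithmetic once the offset is extracted. A minimal sketch, assuming offsets are already numeric (word-to-number conversion such as "seven" to 7 is omitted):

```python
from datetime import date, timedelta
import re

def resolve_relative(base: date, expr: str) -> date:
    """Resolve a relative expression like '7 days ago' or '2 days after'
    against the base date stated explicitly in the document."""
    n = int(re.search(r"\d+", expr).group())
    sign = -1 if ("ago" in expr or "before" in expr) else 1
    return base + timedelta(days=sign * n)

# The Figure 1 case: message dated 2024-10-13, activity "7 days ago".
print(resolve_relative(date(2024, 10, 13), "7 days ago"))  # 2024-10-06
```

A retriever that indexed the resolved date rather than the surface string would match the query directly.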
We compile a collection of implicit-tuple sets. Within each set, a tuple links an implicit surface form that appears in a document to the explicit form that will appear in the query; see Fig. 1, e.g., ("2024-10-13 ... seven days ago", "October 06, 2024"). For every reasoning category, we create $N$ such tuple sets. Each set $T_i$ ($i = 1, \ldots, N$) contains $M$ unique tuples ($|T_i| = M$). Tuples in the tuple sets are unique but not guaranteed to be unique throughout the collection of tuple sets. Hence, before document generation, we inject distinct auxiliary lexical entities (e.g., named entities, speaker names) into each tuple so that the documents generated from $T_i$ remain distinguishable from those of $T_j$ when $i \neq j$ (see Appendix A.4). From each tuple in the tuple set, we generate a document, yielding a pool of documents $\mathcal{D}_{T_i}$ with $|\mathcal{D}_{T_i}| = M$. The document derived from $t_i \in T_i$ is the only positive for the query constructed from $t_i$, whereas all other documents in the global collection $\mathcal{D} = \bigcup_{i=1}^{N} \mathcal{D}_{T_i}$ (including those from tuples $t_i' \neq t_i$ in the same set and every document from any other set $T_j \neq T_i$) are treated as negatives. For each reasoning category, we generate two collections of tuple sets, one realized in the uni-speaker style and the other in the multi-speaker style, keeping their respective document pools separate to foster surface diversity. Thus, every query has exactly one positive document, while every other document in the global collection serves as a semantically irrelevant negative. In the remainder of this section, we detail the construction of the implicit-tuple sets and our procedure for generating documents and queries.
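The corpus layout described above can be mocked in a few lines; the identifiers and tuple contents here are illustrative, not the benchmark's actual format:

```python
def build_corpus(tuple_sets):
    """N tuple sets of tuples; the document realized from tuple t_i is the
    only positive for t_i's query, all other documents are negatives."""
    corpus, qrels = [], {}
    for i, tuples in enumerate(tuple_sets):
        for j, t in enumerate(tuples):
            doc_id = f"d{i}_{j}"
            corpus.append((doc_id, f"document realizing {t}"))
            qrels[f"q{i}_{j}"] = doc_id  # exactly one positive per query
    return corpus, qrels

# Two toy tuple sets (M differs here only for brevity).
sets = [[("base", "a"), ("base", "b")], [("base", "c")]]
corpus, qrels = build_corpus(sets)
print(len(corpus), qrels["q0_1"])  # 3 d0_1
```

Every query maps to a distinct document, so ideal retrieval places a single document at rank 1.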
# 2.1 Generating Tuple Sets

Arithmetic. An arithmetic relation requires simple numerical reasoning. For instance, the query "Which bag costs $1,600?" can be answered by "The Prada bag costs $2,000, the Gucci bag is 20% cheaper," since $2,000 × 0.8 = $1,600. Here, the model must identify the reference price, interpret the relative statement ("20% cheaper"), and perform the corresponding computation to infer the answer. Therefore, each tuple in the implicit tuple set takes the form $((p_1, r, e), p_2)$, where $p_1$ is the base price, $r$ is the relative multiplier, $e \in \{$"Lower", "Higher"$\}$ indicates the direction of the change, and $p_2$ is the queried price (e.g., ((2000, 0.2, Lower), 1600)). We apply constraints to ensure that queried prices are unique, realistic, and well-distributed across the tuple set. Tuples are generated using a sampling algorithm that selects base prices and checks constraint satisfaction, backtracking as needed until $M$ valid tuples are found (where $M$ is the target number of documents indicated as "Docs" in Table 1). Full constraint details and sampling logic are provided in Appendix 2.1.

World Knowledge. A world knowledge relation connects a textual mention to an external fact. For instance, the query "Who was in the UK?" can be answered by "Lenna was at Big Ben," based on the implicit fact that Big Ben is located in the UK. The model must identify the mentioned entity, retrieve the associated world fact, and use it to resolve the query. Each tuple is encoded as (landmark, country), e.g., ("Big Ben", "UK"). To build the tuple set, we collect landmark-country pairs that are unambiguous, globally unique, free of lexical cues revealing the country, and refer to specific rather than generic locations.
Candidates are sourced from Wikidata (Vrandečić and Krötzsch, 2014) and filtered using LLMs, embedding similarity, and web search verification. Full filtering criteria, prompts, and implementation details are provided in Appendix A.2. Here, we again generate a set of $M$ tuples for each implicit tuple set.

Table 1: IMPLIRET statistics. For each reasoning category and discourse style (uni-speaker vs. multi-speaker), we list the number of documents (50 tuple sets × 30 docs = 1500), the average document length, and the total token count. Every document has exactly one associated query, so the document and query counts coincide.

Temporal. A temporal relation involves reasoning over relative dates; we gave an example in Figure 1. The model must identify the reference date (2024-10-13), interpret the relative time expression ("seven days ago"), and compute the resulting absolute date ("2024-10-06"). Each example is represented as a tuple $((d_B, R), D_L)$, where $d_B$ is the base date explicitly mentioned in the document, $R$ is a list of relative offsets (e.g., ["1 day after", "2 days after"]), and $D_L$ is the list of resolved explicit dates (e.g., ["March 6th", "March 7th"]). We generate $M$ such tuples under constraints that ensure date uniqueness, broad coverage across a fixed window, and realistic time offsets. Target date sequences are first sampled, then anchored to a base date to define relative expressions. The sampling algorithm verifies constraints and backtracks as needed until a valid set is found. Further details on constraints and sampling logic are provided in Appendix A.3.

# 2.2 Document-Query Pairs

We generate a document-query pair from every fact tuple, realizing it in one of two styles: uni-speaker (multi-turn chat) or multi-speaker (forum thread).

Uni-speaker (multi-turn chat).
For each tuple, we create a short multi-turn dialogue. The same main conversant (e.g., "Alex") appears in every dialogue within a tuple set and never appears in any other tuple set. To keep the interactions natural, the second conversant's name changes from one dialogue to the next. Depending on the reasoning category, the main conversant states which product they bought at a certain price (Arithmetic), mentions visiting a landmark (World Knowledge), or describes an activity that occurred on a specific date (Temporal). The query then targets the implicit fact contained in that statement: the product, person, or activity linked to the given price, country, or date.

# Multi-speaker (forum thread, one post per user).

Each tuple set receives a single prompt that serves as the thread's opening post. For that tuple set, we create a forum thread in which each post is authored by a different user, realizing one tuple, and all posts respond to the shared prompt. Thus, the thread mimics a discussion in which several users independently mention their purchase, visit, or scheduled activity. While the underlying actions mirror the uni-speaker setting, the query perspective shifts: instead of asking about an attribute of a known entity, it now asks which entity (product, person, or activity) satisfies a stated condition such as a price, location, or date.

Generation Pipeline. In both styles, i.e., in each conversation and post, every message includes a timestamp and speaker name (see Figure 1).
In both styles, each example is produced via a three-step pipeline: (1) Entity binding: we assign entities (e.g., names, items, activities) to each tuple to create a plausible scenario and define the query target; (2) Document generation: we prompt an LLM to generate a chat or forum passage that embeds the entity and the implicit part of the tuple, without stating the explicit fact; (3) Verification: a second model attempts to extract the original tuple; we retain only examples where the intended fact is fully recoverable. This pipeline is supported by auxiliary lexical resources, including random names, brand-item pairs, and activity lists, as well as per-reasoning-category prompt templates. We use LLAMA 3.3-70B (Meta, 2024) to synthesize the documents for each tuple. Table 1 presents IMPLIRET statistics.

Table 2: Retrieval evaluation. nDCG@10 for our reasoning categories (world knowledge (W. Know.), arithmetic, and temporal, averaged over uni-speaker and multi-speaker documents) and the "Average" across reasoning categories.

# 3 Experiments

We employ IMPLIRET to probe whether state-of-the-art retrievers can perform document-side reasoning. Relevant documents are retrieved for each query among those documents that are in its corresponding (reasoning category and discourse style) group. At test time, each query is compared to all documents of its discourse style. Our evaluation covers a wide variety of retrieval methods: the sparse lexical baseline BM25 (Robertson and Zaragoza, 2009; Lù, 2024); dense encoders CONTRIEVER, DRAGON+, and REASONIR (Izacard et al., 2021; Lin et al., 2023; Shao et al., 2025); the late-interaction model COLBERT V2 (Santhanam et al., 2022); and the knowledge-graph-augmented retriever HIPPORAG 2 (Gutiérrez et al., 2025). Effectiveness is reported as nDCG@k in the main text; MRR@k appears in Appendix B.

# 4 Results

The nDCG@10 results across all reasoning categories are presented in Table 2.
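Because every query has exactly one positive document, nDCG@k reduces to a function of the positive's rank: the ideal DCG is 1 (positive at rank 1), so the score is the discounted gain at the observed rank, or 0 if the positive is outside the top k. A minimal sketch:

```python
import math

def ndcg_at_k(ranked_ids, positive_id, k=10):
    """nDCG@k for a single relevant document: 1/log2(rank + 1) if the
    positive appears in the top k, else 0."""
    top = ranked_ids[:k]
    if positive_id not in top:
        return 0.0
    rank = top.index(positive_id) + 1
    return 1.0 / math.log2(rank + 1)

print(ndcg_at_k(["d3", "d7", "d1"], "d7"))  # 1/log2(3), about 0.63
```

Averaging this quantity over all queries gives the per-category scores reported in Table 2.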
The highest average score, 15.07 (achieved by REASONIR, a recent 8B-parameter LLM), shows the difficulty retrieval models face when reasoning over implicit facts in documents. More efficient baselines such as CONTRIEVER, DRAGON+, and BM25 perform substantially worse; notably, BM25 reaches just 12.13 due to its reliance on surface-level lexical overlap. Performance varies across reasoning types: the World Knowledge category exhibits the largest performance spread (14.10 vs. 19.53), while it is narrowest for Arithmetic (10.74 vs. 14.61). Discourse style also plays a role: REASONIR scores 20.58 on multi-speaker examples compared to 9.56 on uni-speaker ones, suggesting that stylistic structure affects retrieval difficulty.

Table 3: RAG-style evaluation. ROUGE-1 (R-1) recall for our reasoning categories (world knowledge (W. Know.), arithmetic, and temporal, averaged over uni-speaker and multi-speaker documents) and the "Average" across categories.

RAG Performance with an Oracle Retriever on Reason-Sensitive Documents. While retrieval quality clearly affects end-to-end performance, we ask whether an LLM with long-context capacity can still succeed once the relevant document is present. To test this, we use a retrieval-augmented generation (RAG) setup with an oracle retriever, one that always includes the positive document in its top-$k$. The model sees the question together with $k$ documents: one positive and $k - 1$ hard negatives sampled from the same pool (among the other $M - 1$ samples), ensuring comparable style and topic. This configuration removes retrieval as a variable and isolates the LLM's document-side reasoning ability. We evaluate three settings: $k = 1$ (positive only), $k = 10$ (positive plus nine negatives), and a full-pool setting where all documents from the pool are provided as context. The model receives the query along with the sequence of documents and must generate an answer. We evaluate two reader models: LLAMA 3.3-70B and GPT-4.1. In Table 3, we report the average ROUGE-1 recall scores to measure the overlap between the generated output and the positive answer (Lin, 2004). When given only the positive document ($k = 1$), the two models achieve average ROUGE-1 recall of 81.92 and 88.05. This suggests that the query itself is straightforward to answer once the relevant document is isolated. This also means that an LLM can solve the task if a high-performing retriever (which would retrieve the relevant document at rank 1) is available. However, as $k$ increases (even with the positive included), performance declines, showing that LLMs struggle to focus on the correct evidence amid structurally similar negatives. This supports prior findings on long-context limitations and highlights the need for retrieving a small, focused set of documents rather than increasing context size (Kuratov et al., 2024; Modarressi et al., 2025).
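ROUGE-1 recall, as used above, is the fraction of reference-answer unigrams that also appear in the model output. A simplified token-level sketch (the official ROUGE toolkit adds tokenization and stemming details):

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    """Unigram recall: clipped overlap of candidate tokens with the
    reference, divided by the reference length."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

print(rouge1_recall("Maeve visited the museum", "Maeve"))  # 1.0
```

Since reference answers here are short (a name, product, or activity), recall directly reflects whether the correct entity was produced.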
Retrieval systems are central to many NLP pipelines, but often rely on surface-level cues such as keyword overlap and lexical semantic similarity. To evaluate retrieval beyond these shallow signals, recent benchmarks introduce reasoning-heavy queries; however, they primarily shift the burden to query-side processing techniques, like prompting or multi-hop retrieval, that can help resolve complexity. In contrast, we present ImpliRet, a benchmark that shifts the reasoning challenge to document-side processing: the queries are simple, but relevance depends on facts stated implicitly in documents through temporal (e.g., resolving "two days ago"), arithmetic, and world knowledge relationships. We evaluate a range of sparse and dense retrievers, all of which struggle in this setting: the best nDCG@10 is only 15.07%. We also test whether long-context models can overcome this limitation. But even with a short context of only ten documents, including the positive document, GPT-4.1 scores only 35.06%, showing that document-side reasoning remains a challenge. Our code is available at github.com/ZeinabTaghavi/IMPLIRET.
# 1 INTRODUCTION

Index tuning is a time-consuming process that may take hours to finish for large and complex workloads. Existing index tuners typically adopt a cost-based tuning architecture [7, 41], as illustrated in Figure 1. It consists of three main components: (1) workload parsing and analysis, which parses each query in the workload and extracts indexable columns, e.g., columns that appear in selection and join predicates; (2) candidate index generation, which puts together the extracted indexable columns to generate a set of indexes that can potentially reduce the execution cost of the input workload; and (3) configuration enumeration, which looks for a subset (a.k.a. configuration) from the candidate indexes that meets the input constraints (e.g., maximum configuration size or amount of storage to be taken by the indexes) while minimizing the input workload cost.

Figure 1: Cost-based index tuning architecture. The index tuner takes a workload $W = \{q_i\}$, constraints $\Gamma$, and budget $B$; workload parsing/analysis feeds candidate index generation, and configuration enumeration issues what-if calls $(q_i, C)$ to the (extended) query optimizer on the database server, which returns the what-if cost $c(q_i, C)$. The tuner outputs the best configuration $C \subseteq \{z_j\}$ w.r.t. $W$, $\Gamma$, $B$.

To evaluate the cost of a given query and configuration pair, index tuners rely on the so-called "what-if" utility [8]. It is an extended API of the query optimizer that can estimate the cost by viewing the indexes contained by the configuration as "hypothetical indexes" instead of materializing them in a storage system, which would be much more costly. Nevertheless, what-if optimizer calls are not free: they are at least as expensive as a regular query optimizer call. As a result, they become the major bottleneck when tuning large and/or complex workloads [38]. To address this challenge, some technologies have been developed, such as cost derivation [7], caching/reusing what-if calls [26] (which requires code changes to the query optimizer beyond the what-if API), or ML-based cost approximation [39].
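One of the mitigations above, caching and reusing what-if calls, can be illustrated client-side with simple memoization. The cost function below is a fake stand-in: real what-if calls go through the optimizer's extended API, and the technique in [26] additionally requires optimizer-side changes.

```python
from functools import lru_cache

CALLS = 0  # counts how many "expensive" optimizer invocations happen

def what_if_cost(query: str, config: frozenset) -> float:
    """Hypothetical stand-in for the what-if API; each call would run the
    (extended) query optimizer, so repeats are worth avoiding."""
    global CALLS
    CALLS += 1
    return len(query) / (1 + len(config))  # fake cost model

@lru_cache(maxsize=None)
def cached_what_if(query: str, config: frozenset) -> float:
    return what_if_cost(query, config)

c1 = cached_what_if("SELECT ...", frozenset({"ix_a"}))
c2 = cached_what_if("SELECT ...", frozenset({"ix_a"}))  # served from cache
print(CALLS, c1 == c2)  # 1 True
```

Using a `frozenset` for the configuration makes the (query, configuration) pair hashable, so identical pairs hit the cache instead of the optimizer.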
Recent research has proposed budget-aware index tuning, which constrains the number of what-if calls allowed during configuration enumeration [51]. Here, the main challenge shifts from reducing the number of what-if calls, as in classic index tuning, to prioritizing what-if calls w.r.t. the importance of query-configuration pairs. This problem is termed budget allocation, and there has been recent work on optimizing budget allocation in a dynamic manner that skips inessential what-if calls at index tuning runtime by utilizing lower and upper bounds of what-if costs [43]. In practice, we have observed the following "diminishing return" behavior of existing budget-aware index tuning algorithms: they typically make fast progress at the beginning in terms of the best index configuration found, but their progress slows down as more budget on what-if calls is allocated. To put our discussion in context, Figure 2 presents examples of the index tuning curve (ITC) when using two state-of-the-art budget-aware index tuning algorithms (see Section 2), namely, two-phase greedy search and Monte Carlo tree search (MCTS for short), to tune the TPC-H benchmark workload and a real customer workload Real-D (see Section 7.1.1). (Figure 2: Examples of index tuning curves of two-phase greedy search and MCTS, where we set the number of indexes allowed $K = 20$ and the budget on what-if calls $B = 20{,}000$.) We defer a formal discussion of ITC to Section 6.2. Roughly speaking, the ITC represents a function that maps from the number of what-if calls made to the percentage improvement of the best configuration found, where the percentage improvement is defined as $$ \eta(W, C) = \frac{c(W, \emptyset) - c(W, C)}{c(W, \emptyset)} = 1 - \frac{c(W, C)}{c(W, \emptyset)}.
$$ Here, $W$ represents the input workload, $C$ represents a configuration, and $\emptyset$ represents the existing configuration that index tuning starts from. $c(W, C) = \sum_{q \in W} c(q, C)$ represents the what-if cost of the workload $W$ on top of the configuration $C$, which is the sum of the what-if costs of the individual queries contained in $W$. In each plot of Figure 2, the red dashed line represents the corresponding ITC. Intuitively, the ITC is a profile of the index tuner that characterizes the progress it has made with respect to the amount of budget on what-if calls allocated. This "diminishing return" behavior of existing budget-aware index tuning algorithms motivates us to introduce early stopping. Specifically, let $\epsilon$ (e.g., $\epsilon = 5\%$) be a user-given threshold that controls the loss on the percentage improvement, i.e., the gap between the percentage improvement of the best configuration found so far and that of the final best configuration with all budget allocated. If the projected improvement loss is below $\epsilon$ after a certain number of what-if calls have been made, then we can safely terminate index tuning. Early stopping enables further savings on the number of what-if calls made in index tuning, and the savings can often be considerable. For example, as shown in Figure 2(a), two-phase greedy search requires around 2,700 what-if calls to tune the TPC-H workload without early stopping. However, it makes no further progress (i.e., the best index configuration found does not change) after 1,000 what-if calls are made. Stopping at that point would have saved 1,700 what-if calls, i.e., a reduction of $63\%$.
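As a minimal numeric illustration of Equation 1 and of the ITC (all cost values below are made up), the percentage improvement and the best-improvement-so-far curve can be computed as:

```python
def percentage_improvement(cost_empty, cost_config):
    """eta(W, C) = 1 - c(W, C) / c(W, {}), per Equation 1."""
    return 1.0 - cost_config / cost_empty

def index_tuning_curve(best_cost_after_each_call, cost_empty):
    """Map #what-if calls made -> improvement of the best config so far."""
    curve, best = [], cost_empty
    for cost in best_cost_after_each_call:
        best = min(best, cost)  # keep the cheapest configuration seen
        curve.append(percentage_improvement(cost_empty, best))
    return curve

# Hypothetical workload costs observed after each of four what-if calls;
# the third call does not improve on the best configuration found so far.
curve = index_tuning_curve([90.0, 70.0, 75.0, 65.0], cost_empty=100.0)
```

The resulting curve is non-decreasing and flattens whenever additional calls stop helping, which is exactly the "diminishing return" shape described above for Figure 2.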
While early stopping is a well-known technique in the machine learning (ML) literature for preventing "overfitting" when training an ML model with an iterative method such as gradient descent [30, 31, 53], to the best of our knowledge we are the first to introduce it to index tuning, with the very different goal of reducing the number of what-if calls. Enabling early stopping for budget-aware index tuning, however, raises new challenges. First, to project the improvement loss required for triggering early stopping, we need to know (1) the percentage improvement of the best configuration found so far and (2) the percentage improvement of the final best configuration assuming that all budget were allocated. Unfortunately, neither is available at the time point where the projection needs to be made. While it is clear that (2) is not available, one may wonder why (1) is also not available. Note that the best configuration found so far in budget-aware index tuning is based on derived cost (see Section 2.1) rather than the true what-if cost [51]. Technically, we could obtain (1) by making an extra what-if call for each query in the workload with the best configuration found. However, this is too expensive to be affordable in practice when tuning a large workload. Second, even if we knew (1) and (2) so that we could compute the gap between them to verify whether the projected improvement loss is below the threshold $\epsilon$, it is unclear when this verification should be performed. Conducting it at the beginning of index tuning seems unnecessary, as the index tuner is expected to make fast progress; however, if it happens too late, then most of the savings enabled by early stopping will vanish. To address these challenges, in this paper we propose Esc, a low-overhead early-stopping checker for budget-aware index tuning.
It is based on the following main ideas: • Instead of measuring the gap between (1) and (2), which cannot be obtained in practice, we develop a lower bound for (1) and an upper bound for (2) and then measure the gap between the lower and upper bounds. Clearly, if this gap is below the threshold $\epsilon$, then the gap between (1) and (2) is also below $\epsilon$. Figure 2 also presents the lower and upper bounds of each index tuning curve. • To avoid verifying early stopping either too early or too late, we develop a general approach that performs early-stopping verification by monitoring the improvement rate of the ITC. Specifically, we measure the degree of convexity/concavity of the ITC based on the variation observed in its improvement rate, and we only verify early stopping when the ITC becomes concave. In more detail, we develop the lower and upper bounds of percentage improvement by piggybacking on previous work [43]. While [43] lays the foundation for deriving lower and upper bounds of what-if cost, its bounds work only for individual what-if calls, not for the entire workload. The extension to workload-level bounds is nontrivial—a straightforward approach that simply sums up call-level bounds leads to workload-level bounds that are too conservative to be useful (Section 4.1). Following this observation, we develop new mechanisms to improve over the naive workload-level bounds: (i) a simulated greedy search procedure designed for optimizing the bounds in the context of greedy search, which is leveraged by both two-phase greedy search and MCTS as a basic building block (Section 4.2), and (ii) a generic approach to refining the bounds by modeling index interactions [33] at workload level (Section 5). On the other hand, there can be multiple concave stages of an ITC, and only the final concave stage is worth early-stopping verification. For instance, the final stage of the ITC shown in Figure 2(b) begins after 6,000 what-if calls are made.
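These two ideas can be sketched compactly with hypothetical improvement values (the actual bound computations are the subject of Sections 4 through 6):

```python
def improvement_rates(curve):
    """Discrete improvement rate of the ITC between consecutive calls."""
    return [b - a for a, b in zip(curve, curve[1:])]

def in_concave_stage(curve, window=3):
    """Heuristic concavity test: the last `window` rates are non-increasing."""
    rates = improvement_rates(curve)[-window:]
    return len(rates) == window and all(
        later <= earlier for earlier, later in zip(rates, rates[1:]))

def can_stop(eta_lower_now, eta_upper_final, epsilon):
    """Safe trigger: if the bound gap is <= epsilon, the true gap is too."""
    return eta_upper_final - eta_lower_now <= epsilon

# A flattening ITC: rates shrink from 0.10 down to 0.01 per call.
curve = [0.05, 0.15, 0.22, 0.26, 0.28, 0.29]
stop = in_concave_stage(curve) and can_stop(0.29, 0.33, epsilon=0.05)
```

The concavity check gates the (comparatively expensive) bound-gap verification, so verification only runs once the curve has visibly flattened.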
It is challenging to identify whether a concave stage is the final one, and we further propose techniques to address this challenge and thereby reduce the chance of unnecessary early-stopping verification. To summarize, this paper makes the following contributions: • We introduce early stopping for budget-aware index tuning as a new mechanism that can result in significant savings on the number of what-if calls made (Section 3). • We propose Esc, a novel framework that enables early stopping in budget-aware index tuning by developing lower/upper bounds of workload-level what-if cost (Section 4) with refinement by exploiting index interactions (Section 5) and lightweight verification schemes that leverage improvement rates and convexity/concavity properties of the index tuning curve (Section 6). (Figure 3: (a) Greedy search, which starts from the existing configuration $\emptyset$ and adds one index $z_j$ per step. (b) Two-phase greedy search, which first runs greedy search on each query $q_1$, $q_2$ in Phase 1 and then, in Phase 2, on the union $C_1^* \cup C_2^*$ of their best candidate indexes.) • We conduct extensive experimental evaluation using both industrial benchmarks and real workloads, and empirical results demonstrate that Esc can significantly reduce the number of what-if calls for state-of-the-art budget-aware tuning algorithms with little extra computational overhead and little or no improvement loss on the final configuration returned (Section 7). Last but not least, while we focus on budget-aware index tuning algorithms in this work, early stopping can be applied to other index tuning algorithms such as (i) classic index tuning algorithms with an unlimited budget of what-if calls [20, 43], which can be viewed as a special case of budget-aware index tuning, and (ii) anytime index tuning algorithms [6], which are more sophisticated than budget-aware index tuning by constraining the overall index tuning time.
Some of the technologies developed in this work, such as (a) the lower/upper bounds of workload-level what-if cost and (b) the general early-stopping verification scheme based on monitoring improvement rates of the index tuning curve, remain applicable, though their efficacy requires further investigation and evaluation. We leave this as an interesting direction for future work. # 2 PRELIMINARIES We present an overview of the problem of budget allocation and existing budget-aware index tuning algorithms. # 2.1 Budget-aware Index Tuning Budget-aware index tuning constrains the number of what-if calls that can be made during index tuning, in particular, during index configuration enumeration. An essential problem in budget-aware index tuning is budget allocation, i.e., determining on which query-configuration pairs to make what-if calls. For any query-configuration pair on which no what-if call is made, we use the derived cost from cost derivation [7], defined by $$ d(q, C) = \min_{S \subseteq C} c(q, S), $$ as an approximation of its true what-if cost. Two existing algorithms address this budget allocation problem: (1) two-phase greedy search and (2) Monte Carlo tree search (MCTS). Based on the empirical study in [43], the gap between the derived cost and the true what-if cost is below $5\%$ for $80\%$ to $90\%$ of the what-if calls made by these two budget-aware index tuning algorithms. 2.1.1 Two-phase Greedy Search. A classic configuration enumeration algorithm is greedy search [7], as illustrated in Figure 3(a). It is a step-by-step procedure that, in each greedy step, selects the next best candidate index minimizing the workload cost, until the selected index configuration meets the given constraints.
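The greedy step and the derived cost of Equation 2 can be sketched as follows (toy costs for a single query; in practice the minimum in $d(q, C)$ ranges over the subsets of $C$ whose what-if cost has actually been evaluated, which is the role the `evaluated` cache plays here):

```python
def derived_cost(evaluated, config):
    """d(q, C): cheapest evaluated subset S of C (cf. Equation 2)."""
    return min(cost for subset, cost in evaluated.items() if subset <= config)

def greedy_search(candidates, evaluated, k):
    """Each greedy step adds the index that minimizes the (derived) cost."""
    config = frozenset()
    for _ in range(k):
        best = min(candidates - config,
                   key=lambda z: derived_cost(evaluated, config | {z}))
        config |= {best}
    return config

# Hypothetical what-if costs already obtained for the query:
evaluated = {
    frozenset(): 100.0,
    frozenset({"z1"}): 60.0,
    frozenset({"z2"}): 80.0,
    frozenset({"z1", "z2"}): 55.0,
}
chosen = greedy_search(frozenset({"z1", "z2"}), evaluated, k=2)
```

Here the first step picks `z1` (derived cost 60 beats 80), and the second step adds `z2`, reaching the pair with cost 55.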
An improved version is the so-called two-phase greedy search [7], which first runs greedy search on top of each query to find its best candidate indexes and then runs greedy search again for the entire workload on the union of the best candidate indexes found for the individual queries. Figure 3(b) presents an example of two-phase greedy search with two queries in the workload. What-if calls are allocated in a "first come, first served" manner. Two-phase greedy search can achieve state-of-the-art performance [7, 20, 43, 51] in terms of the final index configuration found and has been integrated into commercial database tuning software such as the Database Tuning Advisor (DTA) developed for Microsoft SQL Server [6]. (Table 1: Notation and terminology. QCP: query-configuration pair; WCP: workload-configuration pair; CI: cost improvement; MCI: marginal cost improvement; $q$: a query; $W$: a workload; $z$: an index; $C$: an index configuration.) 2.1.2 Monte Carlo Tree Search. To better tackle the trade-off between exploration and exploitation in budget allocation, previous work [51] proposed a budget-aware index tuning algorithm based on Monte Carlo tree search (MCTS). It models budget allocation as a Markov decision process (MDP) and allocates what-if calls with the goal of maximizing the "reward" defined by the percentage improvement (ref. Equation 1). After budget allocation is done, it runs greedy search again to find the best index configuration with the lowest derived cost (ref. Equation 2). It has been shown that MCTS outperforms two-phase greedy search under a limited budget on the number of what-if calls [51]. # 2.2 What-if Call Interception The two budget-aware index tuning algorithms discussed above allocate what-if calls at a macro level by treating each what-if call as a black box. That is, they use the what-if cost (or its approximation, e.g., derived cost) as the only signal to decide the next what-if call to be made.
This results in wasted budget on inessential what-if calls that can be accurately approximated by their derived costs without affecting the result of index tuning. To skip these inessential what-if calls, previous work developed Wii [43], a what-if call interception mechanism that enables dynamic budget allocation in index tuning. The main idea there is to use lower/upper bounds of what-if cost: a what-if call can be skipped if the gap between the lower and upper bounds is sufficiently small. We present more details in Section 3.2. In this paper, we build on top of these call-level lower/upper bounds to develop Esc, which enables early stopping at workload-level index tuning. Moreover, in budget-constrained index tuning, skipping these inessential what-if calls can improve the efficacy of budget allocation by reallocating the budget to what-if calls that cannot be skipped. This results in improved versions of the two-phase greedy search and MCTS algorithms with Wii integrated. # 3 EARLY STOPPING IN INDEX TUNING We start with the problem formulation of early stopping in budget-aware index tuning and then present an overview of the solution, which is based on lower/upper bounds of what-if cost. Table 1 summarizes the notation and terminology that will be used. # 3.1 Problem Formulation Let $B$ be the budget on the number of what-if calls. At time $t$, i.e., when $t$ what-if calls have been allocated, we want to decide if it is safe to skip allocating the remaining $B - t$ what-if calls without much loss on the improvement of the final index configuration returned. Formally, let $C_t^*$ be the configuration found with $t \leq B$ what-if calls allocated. That is, after $t$ what-if calls we can only use derived cost when running the remaining part of configuration search. Under this notation, $C_B^*$ is the configuration found with all $B$ what-if calls allocated.
We stop index tuning if $$ \eta(W, C_B^*) - \eta(W, C_t^*) \leq \epsilon, $$ where $0 < \epsilon < 1$ is a user-defined threshold. By Equation 1, this is equivalent to $$ \frac{c(W, C_t^*) - c(W, C_B^*)}{c(W, \emptyset)} \leq \epsilon. $$ Unfortunately, computing the left side of Equation 4 is impossible, since $c(W, C_B^*)$ would only be known when all $B$ what-if calls were allocated, which negates the very purpose of early stopping. Moreover, the computation of $c(W, C_t^*)$ would require making $|W|$ extra what-if calls at each time point $t$, which would be prohibitively expensive for large workloads. As a result, we need a different approach instead of utilizing Equation 4 directly. # 3.2 A Framework by Lower/Upper Bounds We develop a lower bound $\eta_L(W, C_t^*)$ for $\eta(W, C_t^*)$ and an upper bound $\eta_U(W, C_B^*)$ for $\eta(W, C_B^*)$. That is, $\eta_L(W, C_t^*) \leq \eta(W, C_t^*)$ and $\eta(W, C_B^*) \leq \eta_U(W, C_B^*)$. As a result, if $\eta_U(W, C_B^*) - \eta_L(W, C_t^*) \leq \epsilon$, it then implies $\eta(W, C_B^*) - \eta(W, C_t^*) \leq \epsilon$ (i.e., Equation 3). Figure 4 illustrates this framework in detail. The $x$-axis represents the number of what-if calls allocated, whereas the $y$-axis represents the percentage improvement of the corresponding best configuration found. Ideally, we would compare the true percentage improvements $\eta(W, C_t^*)$ and $\eta(W, C_B^*)$; however, since the true improvements are not observable, we instead compare the lower and upper bounds $\eta_L(W, C_t^*)$ and $\eta_U(W, C_B^*)$. 3.2.1 Conversion to Lower/Upper Bounds on What-if Costs.
Our problem is equivalent to developing an upper bound $U(W, C_t^*) \geq c(W, C_t^*)$ and a lower bound $L(W, C_B^*) \leq c(W, C_B^*)$. As a result, $\eta_L(W, C_t^*) \leq \eta(W, C_t^*)$ and $\eta_U(W, C_B^*) \geq \eta(W, C_B^*)$. To derive $L(W, C_B^*)$ and $U(W, C_t^*)$, we consider a more fundamental problem: Given an arbitrary configuration $C$, derive a lower bound $L(W, C)$ and an upper bound $U(W, C)$ such that $L(W, C) \leq c(W, C) \leq U(W, C)$. Since $c(W, C) = \sum_{q \in W} c(q, C)$, it is natural to first consider call-level lower and upper bounds $L(q, C)$ and $U(q, C)$ for a given query $q$ such that $L(q, C) \leq c(q, C) \leq U(q, C)$. For this purpose, we reuse the results developed in previous work [43]. Below we provide a summary of the call-level lower/upper bounds. We will discuss extensions to workload-level bounds in Section 4. 3.2.2 Call-level Upper Bound. We assume the following monotonicity property of the what-if cost: Assumption 1 (Monotonicity). Let $C_1$ and $C_2$ be two index configurations where $C_1 \subseteq C_2$. Then $c(q, C_2) \leq c(q, C_1)$. That is, including more indexes in a configuration does not increase its what-if cost. It follows that the derived cost satisfies $d(q, C) \geq c(q, C)$, i.e., it is a valid upper bound: $U(q, C) = d(q, C)$. (Figure 4: A framework for early stopping in budget-aware index tuning based on workload-level bounds of what-if cost.) 3.2.3 Call-level Lower Bound. We define the cost improvement (CI) of the query $q$ given the configuration $C$ as $\Delta(q, C) = c(q, \emptyset) - c(q, C)$.
Moreover, we define the marginal cost improvement (MCI) of an index $z$ with respect to a configuration $C$ as $\delta(q, z, C) = c(q, C) - c(q, C \cup \{z\})$. Let $C = \{z_1, \ldots, z_m\}$. We can rewrite the CI in terms of the MCIs, i.e., $\Delta(q, C) = \sum_{j=1}^{m} \delta(q, z_j, C_{j-1}) \leq \sum_{j=1}^{m} u(q, z_j)$, where $C_0 = \emptyset$, $C_j = C_{j-1} \cup \{z_j\}$, and $u(q, z_j)$ is an upper bound of the MCI $\delta(q, z_j, C_{j-1})$, for $1 \leq j \leq m$. Hence, we can set the lower bound $$ L(q, C) = c(q, \emptyset) - \sum_{j=1}^{m} u(q, z_j) \leq c(q, C). $$ 3.2.4 MCI Upper Bounds. We further assume the following submodularity property of the what-if cost: Assumption 2 (Submodularity). Given two configurations $X$ and $Y$ s.t. $X \subseteq Y$ and an index $z \not\in Y$, we have $c(q, Y) - c(q, Y \cup \{z\}) \leq c(q, X) - c(q, X \cup \{z\})$. Or equivalently, $\delta(q, z, Y) \leq \delta(q, z, X)$. That is, the MCI of an index $z$ diminishes when $z$ is included in a larger configuration with more indexes. Assume monotonicity and submodularity of the cost function $c(q, X)$. Let $\Omega_q$ be the best possible configuration for $q$ assuming that all candidate indexes have been created. We can set $$ u(q, z) = \min\{c(q, \emptyset), \Delta(q, \Omega_q), \Delta(q, \{z\})\}. $$ In practice, there are situations where we do not know $c(q, \{z\})$ and thus $\Delta(q, \{z\})$.
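A small numeric sketch of these call-level bounds (Equation 5 together with the MCI upper bound; all cost values are hypothetical, and an unknown $\Delta(q, \{z\})$ is simply omitted from the minimum):

```python
def mci_upper_bound(c_empty, delta_omega, delta_single=None):
    """u(q, z) = min{ c(q,{}), Delta(q, Omega_q), Delta(q, {z}) };
    Delta(q, {z}) may be unknown (None), in which case it is omitted."""
    candidates = [c_empty, delta_omega]
    if delta_single is not None:
        candidates.append(delta_single)
    return min(candidates)

def call_level_lower_bound(c_empty, mci_bounds):
    """L(q, C) = c(q, {}) - sum of MCI upper bounds of C's indexes (Eq. 5)."""
    return c_empty - sum(mci_bounds)

c_empty = 100.0                                    # c(q, {})
u1 = mci_upper_bound(c_empty, delta_omega=50.0, delta_single=30.0)
u2 = mci_upper_bound(c_empty, delta_omega=50.0)    # Delta(q, {z2}) unknown
lb = call_level_lower_bound(c_empty, [u1, u2])
```

Since $\Delta(q, \Omega_q) = 50$ caps every improvement, the true cost is at least $50$, and the computed bound of $20$ is indeed a (conservative) lower bound.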
In previous work [43], the authors proposed a lightweight approach to estimate $c(q, \{z\})$ based on the coverage of $\{z\}$ with respect to $\Omega_q$, assuming that $c(q, \Omega_q)$ is known. # 4 WORKLOAD-LEVEL BOUNDS We now discuss how to leverage the call-level lower and upper bounds on what-if cost to establish bounds that can be used at workload level. We discuss both general-purpose bounds as well as optimized bounds for greedy search, which is an essential step in state-of-the-art budget-aware index tuning algorithms such as two-phase greedy search and MCTS. # 4.1 General-Purpose Bounds 4.1.1 Upper Bound of Workload Cost. The upper bound $U(W, C_t^*)$ can simply be set to the derived cost $d(W, C_t^*)$, since we can show $$ d(W, C) = \sum_{q \in W} d(q, C) \geq \sum_{q \in W} c(q, C) = c(W, C) $$ for an arbitrary index configuration $C$. To obtain $C_t^*$, however, we need to continue with the index tuning algorithm on top of the current best configuration $C_t$ found, without making more what-if calls. As an example, we will illustrate this simulation process for greedy search in Section 4.2.1. 4.1.2 Lower Bound of Workload Cost. Let $C_B^* = \{z_1, \ldots, z_k\}$ for some $k \leq K$. By Equation 5, we could have set $$ L(W, C_B^*) = \sum_{q \in W} L(q, C_B^*) = \sum_{q \in W} \Big( c(q, \emptyset) - \sum_{i=1}^{k} u(q, z_i) \Big). $$ Unfortunately, this lower bound cannot be computed, because we do not know $C_B^*$ and therefore the $\{z_1, \ldots, z_k\}$ at time $t < B$.
However, for each query $q \in W$, if we order all candidate indexes $z$ decreasingly with respect to their $u(q, z)$ and then take the top $K$ candidate indexes in this ranking, it is easy to show that $$ \sum_{i=1}^{k} u(q, z_i) \leq \sum_{z \in \mathcal{U}(q, K)} u(q, z), $$ where $\mathcal{U}(q, K)$ represents the set of candidate indexes of $q$ with the top-$K$ largest MCI upper bounds. Therefore, we can instead set $$ L(W, C_B^*) = \sum_{q \in W} \Big( c(q, \emptyset) - \sum_{z \in \mathcal{U}(q, K)} u(q, z) \Big). $$ However, while this lower bound can be used for any budget-aware tuning algorithm, it may be too conservative. We next present optimizations of this lower bound for greedy search. # 4.2 Optimizations for Greedy Search Now let $C_B^* = \{z_1, \ldots, z_k\}$ for some $k \leq K$, where $z_i$ represents the index selected by greedy search at the $i$-th step ($1 \leq i \leq k$) with $B$ what-if calls allocated. The lower bound obtained by applying Equation 5, $$ L(W, C_B^*) = \sum_{q \in W} \Big( c(q, \emptyset) - \sum_{i=1}^{k} u^{(i)}(q, z_i) \Big), $$ cannot be computed. Here, $u^{(i)}(q, z)$ is the $u(q, z)$ after greedy step $i$, and we use Procedure 1 to update the MCI upper bounds [43]: Procedure 1. For each index $z$ that has not been selected by greedy search, we update $u(q, z)$ as follows: (a) Initialize $u(q, z) = \min\{c(q, \emptyset), \Delta(q, \Omega_q)\}$ for each index $z$.
(b) During each greedy step $1 \leq k \leq K$, update $$ u(q, z) = c(q, C_{k-1}) - c(q, C_{k-1} \cup \{z\}) = \delta(q, z, C_{k-1}) $$ if both $c(q, C_{k-1})$ and $c(q, C_{k-1} \cup \{z\})$ are available, where $C_k$ is the configuration selected by greedy step $k$ and $C_0 = \emptyset$. Our idea is to further develop an upper bound for $\sum_{i=1}^{k} u^{(i)}(q, z_i)$ by running a simulated greedy search procedure described below. 4.2.1 Simulated Greedy Search. For ease of exposition, consider tuning a workload with a single query $q$ using greedy search. Procedure 2. At time $t$ (i.e., when $t < B$ what-if calls have been made), run greedy search to get up to $K$ indexes in total, where each greedy step $j$ selects the index $z_j'$ with the maximum $u^{(j)}(q, z_j') > 0$. Let the configuration found by Procedure 2 be $C_t^u = \{z_1', z_2', \ldots, z_l'\}$, where $l \leq K$. If $l < K$, then any remaining index $z$ satisfies $u(q, z) = 0$. As a result, we can assume $l = K$. Theorem 1. $\sum_{j=1}^{K} u^{(j)}(q, z_j') \geq \sum_{i=1}^{k} u^{(i)}(q, z_i)$. As a result, $$ L(q, C_B^*) = c(q, \emptyset) - \sum_{j=1}^{K} u^{(j)}(q, z_j') $$ is a lower bound of the what-if cost $c(q, C_B^*)$ for greedy search. (Figure 5: Example table, queries, and indexes. Table: $R(a, b, c, d)$. Queries: $q_1$: SELECT a, b FROM R WHERE R.b = 10; $q_2$: SELECT a, b FROM R WHERE R.b > 10 AND R.a > 20 AND R.c > 30. Indexes: $z_1$: [R.b; R.a]; $z_2$: [R.b, R.a, R.c].) Due to space constraints, all proofs are deferred to the full version of this paper [42]. We next generalize this result to a multi-query workload, with the understanding that the index $z_j'$ is selected for the entire workload $W$ with the maximum $u^{(j)}(W, z_j') > 0$, i.e., $$ L(W, C_B^*) = c(W, \emptyset) - \sum_{j=1}^{K} u^{(j)}(W, z_j'), $$ where $u(W, z) = \sum_{q \in W} u(q, z)$. Moreover, as mentioned in Section 4.1.1, the simulated greedy search outlined in Procedure 2 can be reused for computing the upper bound $U(W, C_t^*)$ with slight modification. Details of this revised simulated greedy search are included in the full version [42]. 4.2.2 Lower Bound for Two-phase Greedy Search. We update the MCI upper bounds for two-phase greedy search as follows: Procedure 3. For index $z$ and query $q$, update $u(q, z)$ as follows: (a) Initialize $u(q, z) = \min\{c(q, \emptyset), \Delta(q, \Omega_q)\}$ for each index $z$. (b) In Phase 1, update $u(q, z)$ based on Equation 6. (c) In Phase 2, during each greedy step $1 \leq k \leq K$, update $$ u(q, z) = c(q, C_{k-1}) - c(q, C_{k-1} \cup \{z\}) = \delta(q, z, C_{k-1}) $$ if both $c(W, C_{k-1})$ and $c(q, C_{k-1} \cup \{z\})$ are available, where $C_k$ is the configuration selected by greedy search in step $k$ ($C_0 = \emptyset$) and $z$ has not been included in $C_k$. The update step (c) excludes pathological cases where $c(W, C_k)$ is unknown but both $c(q, C_k)$ and $c(q, C_k \cup \{z\})$ are known for a particular query $q$ (due to Phase 1). Theorem 2.
The $L(W, C_B^*)$ defined in Equation 10 remains a lower bound of $c(W, C_B^*)$ for two-phase greedy search if we maintain the MCI upper bounds by following Procedure 3. 4.2.3 Lower Bound for Monte Carlo Tree Search. We can use the same simulated greedy search to obtain $L(W, C_B^*)$, given that there is a final greedy search stage in MCTS after all budget allocation is done. However, we are only able to use Equation 6 for maintaining the MCI upper bounds—we can prove that it is safe to do so using the same argument as in two-phase greedy search when $t$ is in Phase 1 (see the full version [42]). It remains future work to investigate further improvements over Equation 6 for MCTS. # 5 REFINEMENT WITH INDEX INTERACTION Our approach of computing the lower bounds $L(q, C_B^*)$ and $L(W, C_B^*)$ in Equations 9 and 10 basically sums up the MCI upper bounds of individual indexes. This ignores potential index interactions, as illustrated by the following example. Example 1 (Index Interaction). As shown in Figure 5, let $R$ be a table with four columns $a$, $b$, $c$, and $d$. Let $z_1$ and $z_2$ be two indexes on $R$, where $z_1$ has a single key column $b$ with $a$ as an included column, and $z_2$ has a compound key with three columns $b$, $a$, and $c$ in order. Consider the SQL query $q_1$ in Figure 5. Both $z_1$ and $z_2$ have very similar, if not the same, cost improvement for $q_1$, as one can use an index scan on top of either $z_1$ or $z_2$ to evaluate $q_1$ without consulting the table $R$. As a result, if $z_1$ (resp. $z_2$) has been included in some configuration, including $z_2$ (resp. $z_1$) cannot further improve the cost of $q_1$.
In other words, we have roughly the same cost improvements for $z_1$, $z_2$, and $\{z_1, z_2\}$, i.e., $\Delta(q_1, \{z_1\}) \approx \Delta(q_1, \{z_2\}) \approx \Delta(q_1, \{z_1, z_2\})$. Note that index interaction is query-dependent. To see this, consider the same $z_1$ and $z_2$ in Example 1 but a different SQL query $q_2$ in Figure 5. Since $z_1$ can hardly be used for evaluating $q_2$, we have $\Delta(q_2, \{z_1\}) \approx 0$ (see [42] for details). As a result, in the presence of both $z_1$ and $z_2$, the query optimizer will pick $z_2$ over $z_1$; hence, we have $\Delta(q_2, \{z_1, z_2\}) = \Delta(q_2, \{z_2\}) \approx \Delta(q_2, \{z_1\}) + \Delta(q_2, \{z_2\})$. Therefore, $z_1$ and $z_2$ do not interact in the case of $q_2$. # 5.1 Index Interaction Motivated by Example 1, given two indexes $z_1, z_2$ and a query $q$, we define the index interaction between $z_1$ and $z_2$ w.r.t. $q$ as $$ I(z_1, z_2 \mid q) = \frac{\Delta_U(q, \{z_1, z_2\}) - \Delta(q, \{z_1, z_2\})}{\Delta_U(q, \{z_1, z_2\}) - \Delta_L(q, \{z_1, z_2\})}.
$$ Here, $\Delta_L(q, \{z_1, z_2\}) = \max\{\Delta(q, \{z_1\}), \Delta(q, \{z_2\})\}$ is a lower bound of $\Delta(q, \{z_1, z_2\})$ based on Assumption 1 (i.e., monotonicity), and $\Delta_U(q, \{z_1, z_2\}) = \Delta(q, \{z_1\}) + \Delta(q, \{z_2\})$ is an upper bound of $\Delta(q, \{z_1, z_2\})$ based on Assumption 2 (i.e., submodularity). We now extend the above definition to the interaction between an index $z$ and an index configuration $C$ w.r.t. a query $q$: $$ I(z, C \mid q) = \frac{\Delta_U(q, C \cup \{z\}) - \Delta(q, C \cup \{z\})}{\Delta_U(q, C \cup \{z\}) - \Delta_L(q, C \cup \{z\})}. $$ Similarly, $\Delta_L(q, C \cup \{z\}) = \max\{\Delta(q, C), \Delta(q, \{z\})\}$ is a lower bound of $\Delta(q, C \cup \{z\})$ by Assumption 1, and $\Delta_U(q, C \cup \{z\}) = \Delta(q, C) + \Delta(q, \{z\})$ is an upper bound of $\Delta(q, C \cup \{z\})$ by Assumption 2. # 5.2 A Similarity-based Approach Note that the interaction $I(z, C \mid q)$ defined above cannot be directly computed if we do not have knowledge of $\Delta(q, C)$ and $\Delta(q, C \cup \{z\})$. Therefore, we propose an implicit approach that measures index interaction based on the similarity between indexes. Intuitively, if two indexes are similar, e.g., they share similar key columns where one is a prefix of the other, then it is likely that one of them cannot improve the workload cost given the presence of the other. As a result, there is strong interaction between the two indexes.
Specifically, given a query $q$ and two indexes $z_1, z_2$, we compute the similarity $S(z_1, z_2 | q)$ between $z_1$ and $z_2$ w.r.t. $q$ as follows:

(1) Convert the query and indexes into feature vectors $\vec{\bf q}$, $\vec{\bf z}_1$, and $\vec{\bf z}_2$. We reuse the feature representation from previous work [37, 43] for this purpose. In more detail, we collect all indexable columns from the workload. Let $D$ be the number of indexable columns collected. We then represent $\vec{\bf q}$, $\vec{\bf z}_1$, and $\vec{\bf z}_2$ as $D$-dimensional vectors. We assign weights to each indexable column in the query representation $\vec{\bf q}$ using the approach proposed in ISUM [37]: the weight of a column is computed based on the size of its corresponding table and the number of candidate indexes that contain it. We further assign weights to each indexable column in the index representation $\vec{\bf z}$ using the approach proposed in Wii [43]: the weight of a column is determined by its position in the index $z$, e.g., whether it is a key column or an included column of $z$.

Figure 6: Relationship between pairwise index interaction and pairwise index similarity (TPC-H).

(2) Project the index vectors onto the query vector via the element-wise (Hadamard) product, i.e., $\vec{\bf z}_i^{\bf q} = \vec{\bf z}_i \odot \vec{\bf q}$ for $i \in \{1, 2\}$. Note that the resulting vectors $\vec{\bf z}_i^{\bf q}$ for $i \in \{1, 2\}$ remain $D$-dimensional. This projection filters out columns in $\vec{\bf z}_i$ that do not appear in $\vec{\bf q}$ and therefore have no impact on the query performance of $q$.

(3) Calculate the cosine similarity $S(z_1, z_2 | q) = \frac{\vec{\bf z}_1^{\bf q} \cdot \vec{\bf z}_2^{\bf q}}{\|\vec{\bf z}_1^{\bf q}\| \cdot \|\vec{\bf z}_2^{\bf q}\|}$.
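The three steps above can be sketched as follows. The column weights here are illustrative placeholders, not the exact ISUM/Wii weighting schemes:

```python
import math

def project(z_vec, q_vec):
    """Step 2: element-wise projection of an index vector onto the query
    vector; columns absent from q get weight 0."""
    return [z * q for z, q in zip(z_vec, q_vec)]

def index_similarity(q_vec, z1_vec, z2_vec):
    """Step 3: S(z1, z2 | q) as cosine similarity of the projections."""
    u, v = project(z1_vec, q_vec), project(z2_vec, q_vec)
    dot = sum(a * b for a, b in zip(u, v))
    denom = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / denom if denom > 0 else 0.0

# Step 1 (mocked): a 4-column workload with indexable columns (a, b, c, d).
q  = [1.0, 0.5, 0.0, 0.0]   # query touches columns a and b
z1 = [1.0, 0.5, 0.0, 0.0]   # index on (a, b): a is the leading key
z2 = [1.0, 0.0, 0.5, 0.0]   # index on (a, c): c is irrelevant to q
```

Here $z_1$ and $z_2$ share the leading key column `a` that matters to $q$, so their similarity w.r.t. $q$ comes out high (about 0.97), signaling likely interaction.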
We can further extend $S(z_1, z_2 | q)$ to represent the similarity between an index $z$ and an index configuration $C$ w.r.t. a query $q$: $S(z, C | q) = \frac{\vec{\bf z}^{\bf q} \cdot \vec{\bf C}^{\bf q}}{\|\vec{\bf z}^{\bf q}\| \cdot \|\vec{\bf C}^{\bf q}\|}$. All we need is a feature representation $\vec{\bf C}$ of the configuration $C$. For this purpose, we use the same approach as in Wii [43], where we featurize an index configuration as a $D$-dimensional vector as follows. For each dimension $d$ $(1 \leq d \leq D)$, we take the maximum of the feature values from the corresponding dimension $d$ of the feature representations of the indexes contained in the configuration. The intuition is that, if an indexable column appears in multiple indexes of the configuration, we take the largest weight, which represents its most significant role (e.g., a leading key column in some index). Ideally, we would wish $S(z, C | q)$ to be equal to $I(z, C | q)$. Unfortunately, this is not the case. To shed some light on this, we conduct an empirical study to measure the correlation between pairwise index interaction $I(z_1, z_2 | q)$ and pairwise index similarity $S(z_1, z_2 | q)$, using the workloads summarized in Table 2. Specifically, we pick the most costly queries of each workload and evaluate the what-if costs of all single indexes (i.e., singleton configurations) for each query. We then select the top 50 indexes w.r.t. their cost improvement (CI) in decreasing order and evaluate the what-if costs of all $50 \times 49 = 2{,}450$ configurations that contain a pair of the top-50 indexes. Finally, we compute the pairwise index interaction and the pairwise index similarity of these index pairs.
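The configuration featurization described above amounts to a per-dimension max over the indexes' vectors; a small sketch under that reading (helper names are ours):

```python
import math

def featurize_configuration(index_vecs):
    """Per-dimension max over the feature vectors of the indexes in the
    configuration: each column keeps the weight of its most significant
    role (e.g., a leading key column in some index)."""
    return [max(col) for col in zip(*index_vecs)]

def config_similarity(q_vec, z_vec, config_vecs):
    """S(z, C | q): cosine similarity of the query-projected index
    vector and the query-projected configuration vector."""
    c_vec = featurize_configuration(config_vecs)
    z_q = [z * q for z, q in zip(z_vec, q_vec)]
    c_q = [c * q for c, q in zip(c_vec, q_vec)]
    dot = sum(a * b for a, b in zip(z_q, c_q))
    denom = math.sqrt(sum(a * a for a in z_q)) * math.sqrt(sum(b * b for b in c_q))
    return dot / denom if denom > 0 else 0.0
```

For example, `featurize_configuration([[1.0, 0.5], [0.5, 1.0]])` yields `[1.0, 1.0]`: each column is credited with its strongest role across the configuration.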
Figure 6 presents their correlation for the two most costly queries of TPC-H; similar results over the other queries and workloads are included in the full version [42]. We observe that there is no strong correlation between the two. Instead, for most of the queries, there is a sudden jump in the pairwise index interaction as the pairwise index similarity increases. That is, once the pairwise index similarity exceeds a certain threshold (e.g., 0.2), the pairwise index interaction jumps to a high value (e.g., close to 1). This motivates a threshold-based mechanism that uses index similarity to characterize the impact of index interaction.

# 5.3 Refined Workload-Level Lower Bound

Our basic idea is the following. During each step of the simulated greedy search (SGS), when selecting the next index to be included, we consider not only the benefit of the index but also its interaction with the indexes selected in previous steps of SGS. Specifically, we quantify the conditional benefit $\mu^{(j)}(q, z_j')$ of the candidate index $z_j'$ based on its interaction with the SGS-selected configuration $C_{j-1} = \{z_1', \ldots, z_{j-1}'\}$ and use it to replace the MCI upper bound $u^{(j)}(q, z_j')$ in Procedure 2 as follows:
$$
\mu^{(j)}(q, z_j') = \begin{cases} 0, & \text{if } S(z_j', C_{j-1} | q) > \tau; \\ u^{(j)}(q, z_j'), & \text{otherwise.} \end{cases}
$$
Here, $0 \leq \tau \leq 1$ is a threshold.
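Under the threshold $\tau$, the conditional benefit reduces to a one-line guard inside the simulated greedy search; a minimal sketch (names are ours):

```python
TAU = 0.2  # similarity threshold τ

def conditional_benefit(u_j, similarity_to_selected):
    """μ^(j)(q, z'_j): zero out the MCI upper bound u^(j)(q, z'_j) of a
    candidate index whose similarity to the already-selected
    configuration C_{j-1} exceeds τ, since strong interaction then makes
    additional improvement unlikely."""
    return 0.0 if similarity_to_selected > TAU else u_j

assert conditional_benefit(42.0, 0.8) == 0.0   # likely interacting: no credit
assert conditional_benefit(42.0, 0.1) == 42.0  # dissimilar: keep the bound
```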
In our experimental evaluation (see Section 7), we found that this threshold-based mechanism can significantly improve the lower bound for two-phase greedy search but remains ineffective for MCTS, due to the presence of many query-index pairs with unknown what-if costs. We therefore propose a further optimization for MCTS. Specifically, for a query-index pair $(q, z)$ with unknown what-if cost, we initialize its MCI upper bound by averaging the MCI upper bounds of indexes with known what-if costs that are similar to $z$ w.r.t. $q$ (see [42] for details).

# 6 EARLY-STOPPING VERIFICATION

Based on the workload-level lower/upper bounds in Sections 4 and 5, we develop Esc, an early-stopping checker for budget-aware index tuning. One main technical challenge faced by Esc is deciding when to invoke early-stopping verification. One can employ simple strategies such as a fixed-step verification scheme, where a verification is invoked every $s$ what-if calls; however, as we will see in our experimental evaluation (Section 7), such strategies may incur high computation overhead, since obtaining the lower and upper bounds (e.g., by running the simulated greedy search procedure in Section 4.2.1) comes at a cost. In this section, we present our solutions to this problem. We start with a heuristic solution for two-phase greedy search that exploits special structural properties of this algorithm (Section 6.1). We then propose a generic solution (Section 6.3) that only leverages improvement rates and convexity properties of the index tuning curve (Section 6.2), without requiring any algorithm-specific knowledge.
# 6.1 Heuristic Verification Scheme

There is a trade-off in deciding when to invoke early-stopping verification (ESV): if we invoke ESV too frequently, the computation overhead may become considerable; on the other hand, if we invoke ESV too infrequently, we may miss opportunities to stop index tuning earlier and thus allocate more what-if calls than necessary. Clearly, in the early stages of index tuning there is no need to check for early stopping, as the index tuning algorithm is still making rapid progress. Ideally, one should detect when the progress of the index tuning algorithm starts to slow down. For two-phase greedy search, this inflection point is not difficult to identify. As an example, consider Figure 2(a), where we run two-phase greedy search to tune the TPC-H workload. In Figure 2(a) we have marked each greedy step within both Phase 1 and Phase 2. We observe that progress starts to slow down significantly after the search enters Phase 2, especially during or after the first greedy step of Phase 2. As a result, we can simply skip Phase 1 and start checking for early stopping at the beginning of each greedy step of Phase 2. Our experiments in Section 7 confirm that this simple scheme can result in effective early stopping while keeping the computation overhead negligible.

Figure 7: Characterization of the relationship between different definitions of index tuning curve.

This heuristic early-stopping verification scheme clearly cannot work for other algorithms such as MCTS. However, the above discussion hints that we should look for similar inflection points of index tuning curves. This leads to a generic early-stopping verification scheme that relies only on improvement rates and convexity properties of index tuning curves, as we present next.
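A minimal sketch of this heuristic scheme (the driver and callback names are ours, not the paper's implementation): Phase 1 runs unchecked, and ESV runs at the start of each Phase 2 greedy step.

```python
def two_phase_greedy_with_esv(phase1_steps, phase2_steps, verify):
    """Run the greedy steps of both phases; `verify` performs one
    early-stopping verification and returns True to stop tuning."""
    for step in phase1_steps:   # Phase 1: rapid progress, never verify
        step()
    for step in phase2_steps:   # Phase 2: verify before each greedy step
        if verify():
            return "stopped early"
        step()
    return "finished"

calls = []
step = lambda: calls.append(1)
# With a verifier that never fires, all 5 greedy steps run:
assert two_phase_greedy_with_esv([step] * 2, [step] * 3, lambda: False) == "finished"
assert len(calls) == 5
```

With a verifier that fires at the first Phase 2 step, only the two Phase 1 steps would execute before tuning stops.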
# 6.2 Index Tuning Curve Properties

We define the index tuning curve (ITC) as a function that maps the number of what-if calls allocated at time $t$ to the percentage improvement $\eta(W, C_t^*)$ of the corresponding best index configuration found. By definition, the ITC is monotonically non-decreasing. The dashed line in Figure 4 presents an example of an ITC. Unfortunately, as discussed in Section 3.1, the ITC defined above cannot be directly observed without making extra what-if calls. One option is to replace $\eta(W, C_t^*)$ with its lower bound $\eta_L(W, C_t^*)$. However, the computation of $\eta_L(W, C_t^*) = 1 - \frac{d(W, C_t^*)}{c(W, \emptyset)}$ is not free (e.g., it requires running the simulated greedy search), and we therefore choose to use $\eta_L(W, C_t) = 1 - \frac{d(W, C_t)}{c(W, \emptyset)}$, where $C_t$ is the observed best configuration at time $t$ without continuing tuning, in lieu of $\eta_L(W, C_t^*)$. $\eta_L(W, C_t)$ is directly available at time $t$ without extra computation. Assuming monotonicity of the what-if cost (i.e., Assumption 1), we have $\eta_L(W, C_t) \le \eta_L(W, C_t^*)$, because $d(W, C_t) \geq d(W, C_t^*)$ given that $C_t$ is a subset of $C_t^*$. Figure 7 characterizes the relationship between the different definitions of the ITC.

6.2.1 Improvement Rate. Suppose that we check early stopping at $n$ time points with $B_j$ what-if calls allocated at time point $j$, where $1 \leq j \leq n$. We call this sequence $\{B_j\}_{j=1}^n$ an early-stopping verification scheme (ESVS). Let the observed percentage improvement at time point $j$ be $I_j$, i.e., $I_j = \eta_L(W, C_{B_j})$.
We further define a starting point $(B_0, I_0)$ where both $B_0$ and $I_0$ are known. By default, we choose $B_0 = 0$ and $I_0 = 0$.

Definition 1 (Improvement Rate). We define the improvement rate $r_j$ at time point $j$ as $r_j = \frac{I_j - I_0}{B_j - B_0}$. The projected improvement at time point $j$ for a budget $b$ of what-if calls (i.e., by making $b - B_j$ more what-if calls) is then defined as
$$
p_j(b) = I_j + r_j \cdot (b - B_j).
$$
For the default case where $B_0 = 0$ and $I_0 = 0$, we have $p_j(b) = I_j \cdot \frac{b}{B_j}$. For ease of exposition, we will use this default setup throughout the rest of this section.

Definition 2 (Latest Improvement Rate). We define the latest improvement rate $l_j$ at time point $j$ as $l_j = \frac{I_j - I_{j-1}}{B_j - B_{j-1}}$.

Figure 8: Relationship between improvement rates and convexity/concavity of index tuning curve.

The latest improvement rate $l_j$ approximates the slope of the tangent of the index tuning curve at the point $(B_j, I_j)$.

6.2.2 Convexity and Concavity. Let $I = f(b)$ be the function that represents the index tuning curve. That is, $f(b) = \eta_L(W, C_b)$, where $C_b$ is the observed best configuration with $b$ what-if calls allocated.

Lemma 1. If $f$ is strictly concave and twice-differentiable, then $f'(b) < \frac{f(b)}{b}$ for any $0 < b \le B$.

We have the following immediate result based on Lemma 1:

Theorem 3. If $f$ is strictly concave and twice-differentiable, then $l_j < r_j$ for a given early-stopping verification scheme $\{B_j\}_{j=1}^n$.

We have a similar result for a convex index tuning curve:

Theorem 4. If $f$ is strictly convex and twice-differentiable, then $l_j > r_j$ for a given early-stopping verification scheme $\{B_j\}_{j=1}^n$.

6.2.3 Summary and Discussion. The preceding analysis implies a relationship between these improvement rates and the convexity/concavity of an index tuning curve: (1) if the index tuning curve in $(B_{j-1}, B_j)$ is convex, i.e., it is making accelerating progress, then we will observe $l_j > r_j$; (2) on the other hand, if the index tuning curve in $(B_{j-1}, B_j)$ is concave, then we will observe $l_j < r_j$. Figure 8 illustrates this relationship. In practice, an index tuning curve can be partitioned into ranges, where each range falls into one of three categories: (1) convex, (2) concave, and (3) flat (i.e., $l_j = 0$). In general, we expect the curve to be more likely convex in the early stages of index tuning and more likely concave or flat towards the end of tuning. This observation leads us to develop a generic ESVS, detailed next, which leverages the convexity of the ITC to skip unnecessary invocations of early-stopping verification and keep the overall verification overhead under control.

# 6.3 Generic Verification Scheme

We start from the aforementioned simple ESVS with fixed step size $s$, i.e., $B_j = B_{j-1} + s$, where $s$ can be a small number of what-if calls. We then compute $l_j$ and $r_j$ at each $B_j$ accordingly. Now consider a specific time point $j$. If we observe that $l_j > r_j$, then it is likely that the index tuning curve in $(B_{j-1}, B_j)$ is convex. Note that the condition in Theorem 4 is sufficient but not necessary, so convexity is not guaranteed when $l_j > r_j$ is observed.
In this case, we can skip the early-stopping verification, because the index tuner is still making accelerating progress. On the other hand, if we observe that $l_j < r_j$, then it is likely that the index tuning curve in $(B_{j-1}, B_j)$ is concave, i.e., the progress is decelerating, which implies that we should perhaps perform a verification. There are some subtleties in this proposal, however. First, although it is reasonable to assume that the index tuning curve will eventually become concave/flat, it is not guaranteed that the index tuner has entered this final stage of tuning when $l_j < r_j$ is observed. Second, even if the index tuner has entered the final stage, the deceleration may still be too slow to conclude that the improvement loss will fall below the user-given threshold $\epsilon$, which makes the (expensive) early-stopping verification unnecessary at that point.

6.3.1 Significance of Concavity. To address these challenges, we measure the significance of the potential concavity of the index tuning curve. For this purpose, we project the percentage improvement at $B_{j+1}$ using the improvement rates $l_j$ and $r_j$ and compare it with $I_{j+1}$ to decide whether to invoke early-stopping verification (ESV) at time point $j + 1$. Specifically, we define the projected improvement gap between the projected improvements $p_{j+1}^r$ and $p_{j+1}^l$ (using Equation 12) as $\Delta_{j+1} = p_{j+1}^r - p_{j+1}^l$. Clearly, $\Delta_{j+1} > 0$ since $l_j < r_j$. Moreover, the larger $\Delta_{j+1}$ is, the more significant the corresponding concavity is, and, intuitively, the higher the probability of invoking ESV should be. Now consider the relationship between $I_{j+1}$ and the projected improvements $p_{j+1}^l$ and $p_{j+1}^r$.
We have the following three possible cases:

• $p_{j+1}^l < p_{j+1}^r < I_{j+1}$: This suggests that $f$ grows even faster than $r_j$ when moving from $B_j$ to $B_{j+1}$, which implies that a verification at $j + 1$ is unnecessary.

• $p_{j+1}^l < I_{j+1} < p_{j+1}^r$: This suggests that $f$ grows more slowly than $r_j$ but faster than $l_j$. We further define $\delta_{j+1} = p_{j+1}^r - I_{j+1}$ and define the significance of concavity $\sigma_{j+1}$ as $\sigma_{j+1} = \frac{\delta_{j+1}}{\Delta_{j+1}}$. Clearly, $0 < \delta_{j+1} < \Delta_{j+1}$. We then set a threshold $0 < \sigma < 1$ and perform an early-stopping verification if $\sigma_{j+1} \geq \sigma$.

• $I_{j+1} < p_{j+1}^l$: This suggests that $f$ grows even more slowly than $l_j$, which implies that a verification at $j + 1$ is likely helpful.

6.3.2 A Probabilistic Mechanism for Invoking ESV. One problem is that, if the observed improvement is flat (i.e., $l_j = 0$) but the lower and upper bounds have not yet converged, unnecessary ESV invocations may result. We therefore need to further consider the convergence of the bounds. Specifically, we use the following probabilistic mechanism for invoking ESV. We define $\rho_j = \frac{U_j(W, C_B^*) - L_j(W, C_t^*)}{\epsilon}$ as the gap between the bounds relative to the threshold $\epsilon$ of improvement loss. Instead of always invoking ESV as outlined in Section 6.3.1, we invoke it with probability $\lambda_j = \frac{1}{\rho_j}$.

6.3.3 Refinement of Improvement Rates. If early-stopping verification is invoked at $B_{j+1}$, there are two possible outcomes:

• The early-stopping verification returns true; we then terminate index tuning accordingly.

• The early-stopping verification returns false.
In this case, we let $L_{j+1}(W, C_t^*)$ and $U_{j+1}(W, C_B^*)$ be the lower and upper bounds returned. We can use $L_{j+1}$ and $U_{j+1}$ to further refine the improvement rates $l_{j+1}$ and $r_{j+1}$. Specifically, we have $p_{j+2}^r = I_{j+1} + r_{j+1} \cdot s < U_{j+1}$ and $p_{j+2}^l = I_{j+1} + l_{j+1} \cdot s < U_{j+1}$, which gives $r_{j+1} < \frac{U_{j+1} - I_{j+1}}{s}$ and $l_{j+1} < \frac{U_{j+1} - I_{j+1}}{s}$. Therefore, $r_{j+1} = \min\{\frac{I_{j+1}}{B_{j+1}}, \frac{U_{j+1} - I_{j+1}}{s}\}$ and $l_{j+1} = \min\{\frac{I_{j+1} - I_j}{s}, \frac{U_{j+1} - I_{j+1}}{s}\}$. This refinement can be applied to all later steps $j + 3, j + 4, \cdots$ as well.

Figure 11: Two-phase greedy search, Real-D, $K = 20$, $B = 20k$.

Table 2: Summary of database and workload statistics.

# 7 EVALUATION

We conduct an extensive experimental evaluation of Esc and report the results in this section.

# 7.1 Experiment Settings

7.1.1 Databases and Workloads. We use standard benchmarks as well as real customer workloads in our experiments. For benchmark workloads, we use (1) TPC-H, (2) TPC-DS, and (3) the “Join Order Benchmark” (JOB) [22]. We also use two real workloads, denoted by Real-D and Real-M. Table 2 summarizes basic properties of the workloads in terms of schema complexity (e.g., the number of tables), query complexity (e.g., the average number of joins and table scans contained in a query), database/workload size, and the number of candidate indexes found for index tuning.
7.1.2 Budget-aware Index Tuning Algorithms. We focus on evaluating two state-of-the-art budget-aware index tuning algorithms, (1) two-phase greedy search and (2) MCTS, as well as their enhanced versions with Wii, i.e., what-if call interception [43]. 7.1.3 Variants of Early-Stopping Verification Schemes. We use the heuristic ESVS in Section 6.1 for two-phase greedy search and use the generic ESVS in Section 6.3 for MCTS. We compare four variants: (1) Esc-B, where we use the corresponding ESVS with lower/upper bounds that do not consider index interaction; (2) Esc-I, which further uses index interaction to refine the lower bound, as discussed in Section 5.3; (3) Esc-B (FixStep), which is a baseline of Esc-B that instead adopts the fixed-step ESVS; and similarly, (4) Esc-I (FixStep), a baseline of Esc-I with the fixed-step ESVS. 7.1.4 Evaluation Metrics. We vary the improvement-loss threshold $\epsilon$ from $1 \%$ to $10 \%$ in our evaluation. For each $\epsilon$ , let $b _ { \epsilon }$ be the number of what-if calls allocated when early-stopping is triggered, and let $\tilde { B }$ be the number of what-if calls allocated without early-stopping. Note that $\tilde { B }$ can be smaller than the budget $B$ on the number of what-if calls, because algorithms such as greedy search can terminate if no better configuration can be found (regardless of whether there is remaining budget on the number of what-if calls). We then measure the following performance metrics of early-stopping: (a) extra time overhead of early-stopping verification, which is measured as the total time spent on invoking early-stopping verification; (b) improvement loss, defined as $\Delta ( b _ { \epsilon } ) = \eta ( W , C _ { B } ^ { * } ) - \eta ( W , C _ { b _ { \epsilon } } ^ { * } )$ ; and (c) savings on the number of what-if calls, defined as $( 1 - \frac { b _ { \epsilon } } { \tilde { B } } ) \times 1 0 0 \%$ . 7.1.5 Other Experimental Settings. 
We vary the number of indexes allowed $K \in \{10, 20\}$. We set the budget on what-if calls to $B = 20{,}000$ to make sure that index tuning can finish without early stopping; otherwise, early stopping would never be triggered, which is correct but uninteresting behavior. Moreover, we set the threshold of index interaction for the refinement of the lower bound in Section 5.3 to $\tau = 0.2$, based on our empirical study in [42]. For the generic ESVS in Section 6.3 and the baseline fixed-step ESVS, we set the step size $s = 100$ (see [42] for results with $s = 500$); furthermore, we set the threshold $\sigma = 0.5$ for the significance of concavity.

7.1.6 Baselines. We also compare Esc with baseline approaches based on simple heuristics. Specifically, for two-phase greedy search, we compare Esc with a baseline that simply stops tuning after the first phase of greedy search; for MCTS, we compare Esc with a baseline that simply stops tuning once the observed percentage improvement $I_j$ over the existing configuration exceeds some fixed threshold (we set the threshold to $30\%$ in our evaluation).

# 7.2 Two-phase Greedy Search

Figures 9 to 11 present the results of running two-phase greedy search on TPC-H, TPC-DS, and Real-D. The results on JOB and Real-M are included in [42]. In each figure, we present (a) the extra time overhead (in minutes) of early-stopping verification, (b) the improvement loss when early stopping is triggered, (c) the savings on the number of what-if calls, and (d) the index tuning curve as well as the corresponding lower and upper bounds.

7.2.1 Extra Time Overhead of Early-Stopping Verification. As a reference point, in each plot (a) the red dashed line represents the corresponding index tuning time without early-stopping verification, whereas the gray bars represent the net index tuning time with early-stopping verification.
We observe that the extra time overhead of both Esc-B and Esc-I is negligible compared to the index tuning time across all workloads tested. On the other hand, Esc-B (FixStep) and Esc-I (FixStep) sometimes incur considerable extra time overhead. For example, as shown in Figure 10(a), on TPC-DS the extra time overhead of Esc-B (FixStep) is comparable to the index tuning time when the threshold $\epsilon$ varies from $1\%$ to $7\%$. Overall, the savings in end-to-end index tuning time from applying Esc are consistent with the corresponding savings on what-if calls shown in each plot (c).

7.2.2 Improvement Loss. The red dashed line in each plot (b) delineates the acceptable improvement loss. That is, any improvement loss above that line violates the threshold $\epsilon$ set by the user. We observe that violations occur only rarely, e.g., when setting $\epsilon = 1\%$ on TPC-H and using Esc-I for early stopping. Moreover, the actual improvement loss is often much smaller than the threshold $\epsilon$ when early stopping is triggered. One reason is that our lower bound $\eta_L(W, C_t^*)$ and upper bound $\eta_U(W, C_B^*)$ are more conservative than the actual improvements $\eta(W, C_t^*)$ and $\eta(W, C_B^*)$ needed for triggering early stopping (cf. Section 3.2).

7.2.3 Savings on What-If Calls. Plot (c) in each figure presents the (percentage) savings on the number of what-if calls. We have the following observations. First, the savings typically increase as the threshold $\epsilon$ increases; intuitively, a less stringent $\epsilon$ can trigger early stopping sooner. Second, the savings vary across workloads. For example, with $\epsilon = 5\%$, the savings are around $60\%$ on TPC-H; however, they drop to $25\%$ on TPC-DS and Real-D. We can understand this better by looking at the corresponding index tuning curve in plot (d).
Third, considering index interaction typically leads to an improved upper bound, which results in more savings on what-if calls.

7.2.4 Comparison with Baseline. We now compare Esc with the baseline approach that simply stops tuning after the first phase of greedy search, in terms of the improvement loss and the savings on what-if calls. As shown in plots (b) and (c) of each figure, the baseline can achieve higher savings on what-if calls but can suffer from significantly higher improvement loss. For example, as Figure 10(b) shows, on TPC-DS the improvement loss of the baseline is around $12\%$, while Esc has zero improvement loss.

# 7.3 Monte Carlo Tree Search

Figures 12 and 13 present the results for MCTS on TPC-H and Real-D. The results on the other workloads can be found in [42].

7.3.1 Extra Time Overhead of Early-Stopping Verification. Again, we observe that the extra time overhead of early-stopping verification is negligible compared to the index tuning time in most of the cases tested. However, we also notice a few cases where the extra time overhead is considerable. This typically happens when it is difficult to trigger early stopping using the lower and upper bounds; as a result, all the ESV invocations turn out to be unnecessary, which indicates opportunities for further improving the generic ESVS proposed in Section 6.3. Meanwhile, the generic ESVS again significantly reduces the extra time overhead compared to the fixed-step ESVS, as can be seen by comparing Esc-B and Esc-I with Esc-B (FixStep) and Esc-I (FixStep), respectively. Moreover, as in two-phase greedy search, the relationship between the extra time overhead of Esc-B and that of Esc-I is inconclusive. In general, each invocation of early-stopping verification using Esc-B is less expensive than using Esc-I, because considering index interactions requires more computation.
However, since Esc-I improves the upper bound $\eta_U(W, C_B^*)$, it can trigger early stopping sooner, which leads to fewer invocations of early-stopping verification. Therefore, the overall extra time overhead of Esc-I can be smaller than that of Esc-B, as showcased in Figure 12(a) for TPC-H. On the other hand, the overall extra time overhead of Esc-I is considerably larger than that of Esc-B for the workload Real-D, as evidenced by Figure 13(a). Regarding the savings on end-to-end tuning time, for TPC-H the savings are similar to the corresponding savings on what-if calls, as Figure 12(c) shows; for Real-D the savings are similar when Esc-B is used but vanish when Esc-I is used, due to its much higher computation overhead.

7.3.2 Improvement Loss. As in two-phase greedy search, we see almost no violation of the improvement-loss threshold $\epsilon$ when early stopping is triggered for MCTS. Moreover, the actual improvement loss is typically much lower than the threshold $\epsilon$.

7.3.3 Savings on What-If Calls. The (percentage) savings on the number of what-if calls again vary across the workloads tested. For example, on TPC-H we can save $60\%$ of the what-if calls by using Esc-I when the improvement-loss threshold $\epsilon$ is set to $5\%$, as shown in Figure 12(c). The actual improvement loss when early stopping is triggered, however, is less than $2\%$ rather than the $5\%$ threshold, based on Figure 12(b). For Real-D we only start saving on what-if calls with $\epsilon > 5\%$, though we can save up to $40\%$ of the what-if calls when setting $\epsilon = 10\%$ and using Esc-B, as Figure 13(c) indicates. Note that, although we can save up to $50\%$ of the what-if calls by using Esc-I, its extra time overhead is prohibitively high based on Figure 13(a), while the extra time overhead of Esc-B is significantly lower than the overall index tuning time.
Moreover, a larger threshold $\epsilon$ typically leads to larger savings on what-if calls, as it is easier for the gap between the lower and upper bounds to meet the threshold.

7.3.4 Comparison with Baseline. Compared to Esc, the baseline approach that simply stops tuning after observing $30\%$ improvement can again suffer from significant improvement loss. For example, as Figure 13(b) shows, the improvement loss of the baseline on Real-D is around $25\%$, whereas Esc has almost no loss. One could argue that a threshold different from the $30\%$ used here may make a difference; however, choosing an appropriate threshold upfront for the baseline approach is itself a challenging problem.

Figure 15: Comparison of two-phase greedy (TPG) search with Esc (without or with what-if call interception) against DTA.

# 7.4 What-If Call Interception

We have observed several cases where early stopping offers little or no benefit, e.g., when running two-phase greedy search on Real-M, or when running MCTS on TPC-DS and Real-M, as shown in the full version [42]. The main reason for this inefficacy is the slow convergence of the gap between the lower and upper bounds used for triggering early stopping. This phenomenon can be alleviated by using Wii, the what-if call interception mechanism developed in [43], which skips inessential what-if calls whose what-if costs are close to their derived costs. For example, the heuristic ESVS in Section 6.1 only invokes early-stopping verification when two-phase greedy search enters Phase 2, where the upper bound is expected to drop sharply. With Wii integrated into two-phase greedy search, Phase 2 is entered faster by skipping inessential what-if calls in Phase 1. As a result, we can expect Esc to be more effective for Wii-enhanced two-phase greedy search. To demonstrate this, we present the corresponding results for Real-M in Figure 14, using the Wii-enhanced two-phase greedy search with the coverage-based refinement.
We observe that the savings on the number of what-if calls can further increase to 30% (using Esc-B) and 40% (using Esc-I), as Figure 14(c) presents.

Remarks. While Wii can often significantly bring down the number of what-if calls, this is a side effect rather than its design goal: Wii aims only to skip inessential what-if calls. Nevertheless, it does reduce the number of what-if calls that need to be made; if this number is smaller than the given budget, we will see a (sometimes significant) drop in the total number of what-if calls made. Therefore, the contributions of early stopping and Wii in terms of reducing what-if calls are orthogonal and should not be directly compared. That is, whether or not Wii reduces the number of what-if calls in a given case, early stopping can still achieve similar (e.g., 20% to 40%) reductions.

# 7.5 Comparison with DTA

To understand the overall benefit of budget-aware index tuning with Esc enabled, compared to other index tuning algorithms, we further compare two-phase greedy search with Esc (TPG-Esc) against DTA, which employs anytime index tuning techniques [6] that achieve state-of-the-art tuning performance [20]. In our evaluation, we set the threshold of improvement loss $\epsilon = 5\%$. For a fair comparison, we measure the time spent by TPG-Esc and use that as the tuning time allowed for DTA [1]. Figure 15 presents the results. We omit the results on TPC-H, as TPG-Esc and DTA achieve the same 79% improvement. We have the following observations on the other workloads. On JOB, TPG-Esc significantly outperforms DTA when Wii-coverage is enabled (67% by TPG-Esc vs. 24% by DTA). On TPC-DS, TPG-Esc and DTA perform similarly. On Real-D, TPG-Esc outperforms DTA by around 10%. On Real-M, TPG-Esc significantly outperforms DTA, again when Wii-coverage is enabled (64% by TPG-Esc vs. 17% by DTA).
Overall, we observe that TPG-Esc either performs similarly to DTA or outperforms DTA by a noticeable margin in terms of percentage improvement, within the same amount of tuning time. Note that DTA leverages additional optimizations (e.g., “table subset” selection [2, 6], index merging [9], prioritized index selection [6], etc.) that we did not implement for TPG-Esc. On the other hand, it remains interesting to see how much DTA could be further improved by integrating Esc, which is beyond the scope of this paper.

# 7.6 Discussion and Future Work

Violation of Improvement Loss. Violation is very rare based on our evaluation results, but it can happen if the assumptions about the what-if cost function, i.e., monotonicity and submodularity, do not hold. In such situations, the lower and upper bounds derived for the workload-level what-if cost are also invalid and can therefore mislead the early-stopping checker. One possible solution is to validate the assumptions of monotonicity and submodularity while checking for early stopping. If validation fails frequently, then we have lower confidence in the validity of the bounds, and we can stop running the early-stopping checker to avoid potential violation of the promised improvement loss.

Hard Cases. As an example, the TPC-DS results in Figure 10 represent a difficult case for Esc when applied to two-phase greedy search. From Table 2, we observe a large search space for two-phase greedy search over TPC-DS, with 848 candidate indexes. Moreover, the workload size of TPC-DS, with 99 queries, is also considerably larger than that of the other workloads in Table 2. As a result, the heuristic early-stopping verification scheme designed for two-phase greedy search (Section 6.1) works less effectively, because verification is not invoked until the second phase of greedy search. Many what-if calls have already been made in the first phase, as well as in the first step of the second phase, before the bounds start converging sharply.
To improve on this case, we would have to make the bounds converge earlier, which is challenging given their conservative nature. We therefore leave this for future work.

# 8 RELATED WORK

Cost-based Index Tuning. Offline index tuning has been extensively studied in the literature (e.g., [5–7, 11, 18, 20, 32, 41, 44, 51]). Early work focused on index configuration enumeration algorithms, including, e.g., Drop [44], AutoAdmin [7], DTA [6], DB2Advisor [41], Relaxation [5], CoPhy [11], Dexter [18], and Extend [32]. We refer the readers to the recent benchmark studies [20, 56] for more details and performance comparisons of these solutions. More recent work has focused on addressing the scalability issues of index tuning when dealing with large and complex workloads (e.g., [4, 37, 39, 43, 51, 54]) and query performance regressions when the recommended indexes are actually deployed (e.g., [12, 13, 35, 46, 55]). The latter essentially addresses the problem of modeling query execution cost in the context of index tuning, and lots of work has been devoted to this problem (e.g., [3, 16, 17, 23–25, 27, 36, 40, 47–50, 52]). There has also been recent work on online index tuning with a focus on applying deep learning and reinforcement learning technologies (e.g., [21, 28, 29, 34]). Online index tuning assumes a continuous workload model where queries are observed in a streaming manner, which differs from offline index tuning, where all queries are assumed to have been observed before index tuning starts.

Learning Curve and Early Stopping. Our notion of the index tuning curve is akin to the term “learning curve” in the machine learning (ML) literature, which characterizes the performance of an iterative ML algorithm as a function of its training time or number of iterations [14, 19].
It is a popular tool for visualizing the concept of overfitting: although the performance of an ML model on the training dataset improves over time, its performance on the test dataset often degrades eventually. The study of learning curves has led to early stopping as a form of regularization used to avoid overfitting when training an ML model with an iterative method such as gradient descent [30, 31, 53]. Early stopping in budget-aware index tuning, however, is different: its goal is to save what-if calls rather than to improve index quality, though the generic early-stopping verification scheme developed in Section 6.3 relies on the convexity/concavity properties of the index tuning curve.

Index Interaction. Some early work (e.g., [10, 15, 45]) noted the importance of modeling index interactions. A more systematic study of index interaction was performed by Schnaitter et al. [33], and our definition of index interaction presented in Section 5.1 can be viewed as a simplified case of the definition proposed in that work. Here, we are only concerned with the interaction between the next index to be selected and the indexes that have already been selected in the simulated greedy search outlined by Procedure 2. In contrast, the previous work [33] aims to quantify any pairwise index interaction within a given configuration, with respect to the presence of all other indexes within the same configuration. To compute the index interaction so defined, one needs to enumerate all possible subsets of the configuration, which is computationally much more expensive. Since we need a rough but efficient way of quantifying index interaction, we do not pursue the definition proposed by [33] due to its computational complexity.
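To make the "simplified case" concrete, the sketch below quantifies the interaction between a candidate index and an already-selected set as the difference between the index's standalone benefit and its marginal benefit on top of that set. This is an illustrative formulation under a hypothetical what-if cost function, not the paper's exact definition from Section 5.1.

```python
def interaction(cost, selected: frozenset, x) -> float:
    """Interaction between candidate index x and selected set `selected`:
    standalone benefit minus marginal benefit on top of the set.
    Zero means no interaction; positive means the selected indexes
    diminish x's benefit (e.g., overlapping indexes).

    `cost(config)` is an assumed what-if cost function over index sets.
    """
    standalone_benefit = cost(frozenset()) - cost(frozenset({x}))
    marginal_benefit = cost(selected) - cost(selected | {x})
    return standalone_benefit - marginal_benefit

# Toy cost model: each index yields 10 units of benefit, but indexes
# "a" and "b" overlap, so together they yield only 15.
def toy_cost(cfg):
    benefit = 10.0 * len(cfg)
    if {"a", "b"} <= cfg:
        benefit -= 5.0  # overlap penalty
    return 100.0 - benefit

print(interaction(toy_cost, frozenset({"a"}), "b"))  # overlap detected
```

Note that this only ever evaluates four configurations, in contrast to the subset enumeration required by the definition in [33].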
Index tuning is a time-consuming process. One major performance bottleneck in existing index tuning systems is the large number of "what-if" query optimizer calls that estimate the cost of a given pair of query and index configuration without materializing the indexes. There has been recent work on budget-aware index tuning that limits the number of what-if calls allowed during tuning. Existing budget-aware index tuning algorithms, however, typically make fast progress early on in terms of the best configuration found but slow down as more and more what-if calls are allocated. This observation of "diminishing returns" on index quality leads us to introduce early stopping for budget-aware index tuning, where the user specifies a threshold on the tolerable loss of index quality and we stop index tuning if the projected loss with the remaining budget is below the threshold. We further propose Esc, a low-overhead early-stopping checker that realizes this new functionality. Experimental evaluation on both industrial benchmarks and real customer workloads demonstrates that Esc can significantly reduce the number of what-if calls made during budget-aware index tuning while incurring little to no improvement loss and little extra computational overhead relative to the overall index tuning time.
[ "cs.DB" ]
# 1 Introduction

Human mobility simulation is a critical real-world task with widespread applications across many domains [27], such as supporting the implementation of the 15-minute city concept in urban development by modeling residents’ daily activities [45], optimizing transportation strategies through travel behavior simulation, and validating intervention policies in epidemic prevention and control. Given its significant value, the research community has studied this problem extensively for many years, resulting in a range of effective solutions. Early efforts, such as mechanism-based models like TimeGeo [16], have gradually been supplemented, and surpassed, by recent deep learning approaches such as MoveSim [9], ActSTD [43], and DSTPP [44], among others. Despite remarkable progress, key challenges remain, particularly concerning the spatial transferability of methods, as well as the controllability and interpretability of the generated mobility behaviors. To address these challenges, recent research has explored integrating LLMs into mobility simulation, leveraging their role-playing [13, 35, 12, 28], commonsense knowledge [25, 32, 7] and reasoning capabilities [38, 39] to achieve promising results.

The most crucial and challenging aspect of applying LLMs to mobility simulation lies in effectively incorporating spatial information. Existing work [15, 31] has shown that simply utilizing general-purpose LLMs is insufficient for accurately understanding urban space.

[Figure 1: Overview of the proposed framework and its three components: ① MobExtractor, ② GeoGenerator, and ③ TrajEnhancer.]

As a result, studies such as CoPB [31] and LLM-Mob [37] have proposed specific mechanisms within their frameworks to mitigate this limitation and harness the strengths of LLMs for sequential modeling and reasoning. However, these approaches typically combine spatial knowledge and LLMs in a relatively independent manner, fusing information in an ad hoc fashion. Moreover, spatial knowledge is often simplified to facilitate model comprehension, and the integration process remains largely unidirectional, lacking feedback-driven optimization or iterative reasoning updates.
Recently, urban LLMs such as CityGPT [10] and LAMP [1] have emerged, directly enhancing general LLMs with urban spatial knowledge through post-training and achieving impressive results on geospatial tasks such as urban spatial knowledge question answering. These works convert urban spatial knowledge into a language format and post-train general models to internalize it. This progress offers a new perspective on incorporating spatial knowledge into LLMs and enables deeper collaboration between spatial knowledge and spatial reasoning. In this paper, we propose CAMS, an agentic mobility simulation framework built upon CityGPT that integrates native urban spatial knowledge into the reasoning process of large language models, enabling more controllable, accurate, and generalizable human mobility simulation. CAMS comprises three core components that work in synergy to enable accurate and generalizable urban human mobility simulation. First, MobExtractor is designed to extract and summarize general mobility patterns from raw trajectory data, capturing diverse high-level behavioral regularities. Second, GeoGenerator leverages an enhanced version of CityGPT to generate synthetic mobility trajectories, using the activity sequence from MobExtractor as input, and incorporates rich geospatial knowledge into the mobility simulation process. Third, TrajEnhancer improves spatial-temporal consistency by aligning generated trajectories with real-world trajectory data through direct preference optimization, ensuring realism and coherence. Built upon this unified framework, multi-dimensional feedback mechanisms are naturally introduced to iteratively refine the mobility generation procedure, enhancing both the fidelity and adaptability of simulated human mobility.
In summary, our contributions are:

• To the best of our knowledge, this work introduces the first agentic framework that integrates an urban foundation model with rich geospatial knowledge and multi-dimensional feedback signals, embedding urban structure constraints into LLM reasoning for controllable and generalizable mobility simulation.
• We propose a dual-phase architecture that first condenses template users’ mobility patterns into compact linguistic representations, then generates synthetic patterns for new users through profile-aware feature fusion and variational encoding.
• Through geospatial information alignment and fine-grained urban geographic knowledge fine-tuning, we enhance CityGPT’s capability to extract urban geospatial knowledge relevant to user profiles and mobility patterns.
• Through iterated DPO training, we progressively enhance the spatiotemporal continuity of generated trajectories, strengthening the model’s ability to capture the intrinsic connections between mobility patterns and urban geospatial knowledge.
• Experimental results on real-world datasets show that the proposed CAMS framework, which combines an urban-knowledgeable large language model for geospatial reasoning with an agentic simulation framework for mobility behavior reasoning, significantly outperforms existing methods in human mobility simulation.

# 2 Methodology

In this paper, we propose CAMS, an agentic framework for generating trajectories in real urban spaces based on an urban-knowledgeable LLM, CityGPT [10]. To align effectively with the spatial knowledge already present in the LLM, we express urban structure in a hierarchical address system, similar to human spatial cognition [11]. In addition, we inject fine-grained urban mobility information into CityGPT to more thoroughly exploit the information of each POI in urban space. The whole framework is shown in Figure 1 and comprises three central components: MobExtractor, GeoGenerator and TrajEnhancer.
First, we present MobExtractor in Section 2.1, which is designed to extract and synthesize mobility patterns in linguistic representations. Subsequently, in Section 2.2 we introduce GeoGenerator, which generates candidate urban geospatial knowledge related to user profiles and mobility patterns. Finally, we detail TrajEnhancer in Section 2.3, which generates trajectories in real urban spaces via integrated reasoning and enhances trajectory generation by aligning it with real-world preferences.

# 2.1 MobExtractor: Semantic Mobility Pattern Extraction from Raw Trajectory

MobExtractor employs a dual-phase architecture that first condenses template users’ mobility patterns into compact linguistic representations, then generates synthetic patterns for new users through profile-aware feature fusion and variational encoding. User mobility patterns can be decomposed into shared generic patterns (common across populations) and special individual patterns (profile-specific variations). Data-driven approaches typically require massive high-quality trajectory datasets to effectively capture these mobility patterns and thus face data-scarcity problems. In contrast, LLM agents leverage their inherent knowledge to identify generic patterns by analyzing the trajectories of a small set of users, and synthesize individual patterns for other users through semantic profiling of user attributes. To enhance the model’s capability to identify mobility patterns, we employ a two-step compression-recovery process in the reconstruction stage. For test users, we employ an embedding-based method to synthesize movement patterns in the generation stage.

Mobility pattern recovery. As shown in Figure 1, in the mobility pattern reconstruction phase, the model learns high-level correlations between user profiles, semantic trajectory descriptions, and raw mobility patterns through a dual-phase compression-reconstruction process.
The model automatically distills observed patterns and correlations into interpretable natural language rules, including $\mathbf{c}_1, \mathbf{c}_2$ in the compression stage and $\mathbf{r}_1, \mathbf{r}_2$ in the reconstruction stage.

• Compression. In the compression stage, the model learns compression patterns that map raw trajectory data to user profile representations, i.e., (1) how to derive users’ behavioral habits and motivations by analyzing statistical patterns in their historical trajectories, (2) how to identify a user’s mobility pattern from the raw trajectory, habits, motivations and address information, and (3) how to identify profile-influencing features from trajectory descriptions. These compression patterns, denoted as $\mathbf{c}_1$ and $\mathbf{c}_2$, are preserved to guide the subsequent generation of $\mathbf{r}_1$ and $\mathbf{r}_2$.
• Reconstruction. During reconstruction, the model acquires reconstruction patterns that map user profiles back to raw trajectories, i.e., (1) how to identify the components most predictive of the trajectory description from the user profile, based on the key profile determinants identified in $\mathbf{c}_1$, and (2) how to generate a user’s raw trajectory from the trajectory description and candidate POIs based on $\mathbf{c}_2$. These reconstruction patterns, denoted as $\mathbf{r}_1$ and $\mathbf{r}_2$, are preserved to condition the trajectory generation process for new users.

Mobility pattern generation. As shown in Figure 1, in the generation phase, the model generates mobility patterns for any user with only profile information. To enhance the model’s generalization capability, we retrieve the top-$k$ most similar template users (training users) for each new user (test user). We compare the following two strategies for retrieving similar individuals.
• LLM-based: use the LLM to select the top-$k$ most similar users based on semantic user profile characteristics, then directly output the ID and similarity score of each selected user.
• Embedding-based: find similar users based on similarity scores of user profile embeddings [42]. First, we construct a template user profile embedding matrix $\mathbf{E}_{\mathrm{template}} \in \mathbb{R}^{m \times d}$ from the profiles of $m$ template users. Then we encode the profile of a new user into an embedding $\mathbf{e} \in \mathbb{R}^{1 \times d}$ and compute the cosine similarities $\mathrm{sim}_i$ between $\mathbf{e}$ and the rows of $\mathbf{E}_{\mathrm{template}}$. Finally, we retrieve the top-$k$ users $\tau$ with the highest similarity scores.

After acquiring similar users, we sequentially perform the following steps: (1) $\mathbf{c}_1$ feature fusion: use the LLM to integrate the key profile factors and high-order mobility characteristics in $\mathbf{c}_1$ of the similar template users. (2) Trajectory description generation: using the fused features, generate trajectory descriptions by referencing $\mathbf{r}_1$ and $\mathbf{r}_2$ from the reconstruction stage. (3) $\mathbf{c}_2$ feature fusion: use the LLM to integrate both the unique movement patterns and the universal movement patterns in $\mathbf{c}_2$ of the similar template users.

# 2.2 GeoGenerator: Integrating Urban Geospatial Knowledge into Trajectory Generation

This section describes how to generate candidate geospatial information related to user profiles and mobility patterns. To fully leverage urban geospatial knowledge, we employ CityGPT [10], which possesses fine-grained urban spatial knowledge, as the foundation model for our agent framework.
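As a concrete illustration, the embedding-based retrieval strategy of MobExtractor (Section 2.1) can be sketched as follows. This is a minimal sketch: the profile encoder producing the embeddings is assumed to exist upstream, and the toy vectors below are for demonstration only.

```python
import numpy as np

def retrieve_similar_users(e_new: np.ndarray, E_template: np.ndarray, k: int):
    """Top-k template users most similar to a new user, by cosine
    similarity between profile embeddings. Rows of E_template are the
    m template-user embeddings; e_new is the new user's embedding."""
    # Normalize rows so dot products equal cosine similarities.
    E = E_template / np.linalg.norm(E_template, axis=1, keepdims=True)
    e = e_new / np.linalg.norm(e_new)
    sims = E @ e
    top_k = np.argsort(-sims)[:k]  # indices of the most similar template users
    return top_k, sims[top_k]

# Toy example: 3 template users with 2-d profile embeddings.
E = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
ids, scores = retrieve_similar_users(np.array([1.0, 0.1]), E, k=2)
print(ids)
```

The retrieved users' $\mathbf{c}_1$ and $\mathbf{c}_2$ patterns then feed the feature-fusion steps above.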
Initially, the Anchor Location Extractor generates critical anchor points based on user profiles, collective distributions, and geographic knowledge, which are then converted into intent-composed trajectories by incorporating the mobility patterns extracted in the first stage. Subsequently, an enhanced CityGPT maps these intent-composed trajectories into real urban spaces. Finally, we further align CAMS with real trajectories via direct preference optimization (DPO) to enhance the spatial continuity of the generated trajectories.

# 2.2.1 Anchor Location Extractor

The locations of homes and workplaces serve as the most important anchor points in human mobility trajectories, significantly shaped by individual user profiles and regional characteristics. To effectively identify these critical anchor locations, we propose a two-step extraction method built upon the foundation of CityGPT.

Macro-to-micro cascaded generation. We propose a macro-to-micro cascaded generation system with iterative reasoning-execution-reflection cycles [41, 8] to progressively refine spatial distributions. First, we convert the coordinates of all homes and workplaces into a hierarchical address representation, namely administrative area $>$ subdistrict $>$ street $>$ POI. For the regions at each level of the hierarchy, we calculate user profile distributions and generate descriptive summaries. Then, from coarse (administrative area) to fine (street) spatial scales, the model hierarchically generates home/workplace assignments by propagating upper-level outputs as contextual constraints for finer-grained reasoning. In the reasoning stage, the model considers the descriptive summaries and geographical knowledge of the child regions contained within each parent region’s extent (upper level), together with the user profile characteristics. In the execution stage, the model selects the region that best matches the user profile characteristics, guided by the reasoning stage.
In the reflection stage, the model performs periodic distribution-aware reflection. Finally, at the POI spatial scale, the model directly generates the precise location of the home/workplace.

Reflection with collective distribution. We incorporate collective knowledge as feedback in the reflection stage, progressively aligning the generated results with the distribution in real urban spaces. Upon completing the execution stage for all users, we compute the spatial distribution of the generated locations. Then, in the reflection stage, the model performs a comparative analysis against the ground-truth distribution and adjusts its generation strategies for subsequent iterations. Finally, in the execution stage, the model dynamically adjusts individual outputs to minimize the distributional divergence.

# 2.2.2 Urban Structure Mapper

To generate the remaining location points in a mobility trajectory beyond the two anchor locations (home and workplace), we introduce an Urban Structure Mapper (referred to as UrbanMapper). Given the anchor points and activity sequences, this module flexibly integrates urban spatial structure information to synthesize the remaining trajectory points.

Enhancing CityGPT. To demonstrate the effectiveness of urban spatial knowledge, UrbanMapper leverages CityGPT injected with fine-grained urban spatial knowledge to directly generate candidate locations based on the current location, user profile, mobility patterns and intention. To mitigate geographic hallucinations and improve spatial precision when generating specific locations in real urban space, we augment the knowledge embedded in CityGPT through fine-tuning with fine-grained urban spatial data. At a finer granularity, we posit that urban space is composed of three fundamental elements: points (POIs), lines (streets), and polygons (AOIs) [6]. Among these, points (POIs) constitute the most basic building blocks, which also serve as the foundational components of trajectories. Therefore, we construct our training data at POI-level granularity.
To simulate human cognitive and exploratory processes in urban spaces, we generate navigation paths between population-weighted, randomly sampled origin-destination (OD) POIs, record all traversed POIs along the pathways, and subsequently identify POIs of specified categories within a defined radius around each recorded waypoint. The radius is determined by the average jump distance between consecutive trajectory points and is correlated with the user’s mobility pattern, while the category is related to the user’s intention at each time point. We construct a fine-tuning dataset comprising 10,000 question-answer pairs, encompassing all POIs of the specified categories within a certain radius around every sampled POI. To activate the geospatial knowledge embedded in CityGPT, we enhance user profiles with address information to infer users’ approximate activity ranges in real urban spaces. Furthermore, we represent the geographic elements in the datasets with semantically rich addresses rather than coordinates or grid IDs. We also investigate how different address representation formats impact the model’s comprehension of geographical information: (1) Hierarchical address representation: use structured address hierarchies (e.g., admin → subdistrict → street → POI) to guide the model in recalling location names and attributes within a specific region, reducing hallucinations and generating more realistic, specific locations. (2) Human-intuitive geospatial representations: leverage human-intuitive geospatial representations (e.g., 100 meters from the intersection of Road B and Road C) to prompt the model to associate nearby locations and their attributes.

Other alternative solutions. (1) Social graph. To model the influence of social relationships, we propose a graph-based method to provide candidate locations in real urban spaces. First, we construct a global transition graph using all historical trajectory data from training users. Let $G = (V, E)$ be this undirected graph.
For each edge $e_{ij} \in E$, the weight $w_{ij}$ is computed as $w_{ij} = \frac{n_{ij}}{d_{ij}^{\alpha} + \epsilon}$, where $n_{ij}$ is the transition frequency between locations $i$ and $j$, and $d_{ij}$ is the distance between them (Haversine distance). A higher $w_{ij}$ indicates a greater transition probability between the two locations. Then, we identify similar users following the methodology proposed in Section 2.1. Candidate locations for the user to visit next are determined by the most likely next locations visited by similar users, reasoned over the graph. (2) Map tools. To evaluate the effectiveness of commercial geospatial APIs, we construct a mapping between intentions and location categories and obtain candidate locations for the user to visit next through map queries. The radius and the mapping relationship are fixed regardless of variations in mobility patterns.

# 2.3 TrajEnhancer: Enhanced Trajectory Generation with Preference Alignment

TrajEnhancer performs integrated reasoning by synthesizing the urban spatial knowledge generated in Section 2.2 and the mobility patterns extracted in Section 2.1. It first generates daily activity plans for target users based on their profiles and mobility patterns, consisting of intentions and temporal constraints. Subsequently, it synthesizes realistic movement trajectories by holistically considering user profiles, mobility patterns, activity plans, anchor points, and urban geospatial knowledge. To enhance the spatiotemporal continuity of the model-generated trajectories, we apply iterated DPO training to further enrich the model’s urban geographic knowledge and enhance its ability to identify mobility patterns. We construct the training dataset using the corpus output by CAMS and the corresponding individuals’ real trajectories.
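The social-graph edge weighting from Section 2.2.2 can be sketched as follows. This is a minimal sketch: the Haversine helper and the default $\alpha$ and $\epsilon$ values are illustrative assumptions, not parameters reported in the paper.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle (Haversine) distance in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def edge_weight(n_ij: int, d_ij: float, alpha: float = 1.0, eps: float = 1e-6) -> float:
    """Edge weight w_ij = n_ij / (d_ij^alpha + eps): frequent transitions
    between nearby locations get high weight; eps guards against
    division by zero for co-located POIs."""
    return n_ij / (d_ij ** alpha + eps)

# 10 observed transitions over 2 km -> weight ~5; the same count over
# 4 km yields a lower weight.
print(edge_weight(10, 2.0), edge_weight(10, 4.0))
```

Candidate next locations are then read off as the highest-weight neighbors of the current node in the graph.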
We execute multiple cycles of training → deployment → testing → data collection → retraining. Through these progressive multi-phase training iterations, we aim to progressively enhance CityGPT’s comprehension of the relationship between user mobility patterns and the spatiotemporal attributes of trajectory points in real urban spaces, thereby fully activating its spatiotemporal reasoning ability for mobility simulation.

# 3 Experiments

# 3.1 Experimental Setup

Datasets. We carry out experiments using two real-world mobility datasets, ChinaMobile and Tencent. The basic information of the datasets is shown in Table 1.

Table 1: Basic information of the trajectory datasets.

To test CAMS’s performance on public datasets, we employ OpenStreetMap road network and AOI data, along with global POI data from Foursquare, to jointly represent urban spaces. This does not compromise the overall experimental results; rather, it confirms the transferability of CAMS across different datasets, with reasonably good performance achieved even on smaller, lower-quality datasets.

Metrics. Following previous work [9, 31], we evaluate the quality of generated mobility data from three dimensions: statistical evaluation, aggregation evaluation and semantics evaluation. We also use the Toponym Valid Ratio (TVR) to measure geographic knowledge hallucination, and the Composite Mean Reciprocal Rank (CMRR) to measure overall performance across all metrics.

• Individual evaluation. We calculate Jensen–Shannon divergences (JSDs) on the following per-user metrics: Distance, Radius, Step Interval (SI), Step Distance (SD) and Spatial-temporal Visits Distribution (STVD).
• Collective evaluation. We evaluate the quality of all generated data from a collective perspective, calculating JSDs on the following metric over all users: Frequently Visited Locations (FVLoc), defined as the overall distribution of the top 40 most frequently visited locations across all users.
• Semantics evaluation. To evaluate the plausibility of generated mobility data, we map venue categories to user intents (e.g., Food → dining) and compute JSDs on the following category-related metrics at both individual and collective levels: Daily Activity Routine Distribution (DARD) and Activity Probability (ActProb).
• Hallucination evaluation. We define the Toponym Valid Ratio (TVR), the ratio of valid generated toponyms to total generated toponyms, to assess the degree of hallucination in the model’s candidate geospatial knowledge generation.
• Comprehensive evaluation. To holistically assess model performance, we propose the Composite Mean Reciprocal Rank (CMRR) metric, computed in two stages: (1) calculate the reciprocal rank of each metric relative to all comparable models, then (2) compute the arithmetic mean of these reciprocal ranks across all metrics.

Methods. We compare our model against several state-of-the-art approaches: three deep-learning-based models, ActSTD [43], DSTPP [44] and MoveSim [9]; two LLM-based models, CoPB [31] and LLMob [37]; and a classic mechanistic model, TimeGeo [16].

# 3.2 Main Results

# 3.2.1 Mobility generation in real urban space

We want to validate the model’s ability to generate geospatially accurate trajectories in real-world urban space using only minimal user profile data. To ensure a fair comparison, for LLM-based models we employ llama3.1-8b as the LLM core, while removing all specific location names (except anchor points) and manually extracted user-specific trajectory features from the prompts; for deep-learning-based methods, we apply uniform time-interval interpolation to sparse trajectory points, and reduce the training set size to $3\times$ the test set to simulate limited urban context.

Table 2: Performance comparison of mobility simulation methods across datasets. Best and second-best results are highlighted in bold and underline, respectively.
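The two-stage CMRR computation described above admits a direct implementation. The dictionary layout and the toy scores below are assumptions for illustration; the paper's metric set and model list are larger.

```python
def cmrr(scores_by_metric, model, lower_is_better=True):
    """Composite Mean Reciprocal Rank: for each metric, rank all models
    (rank 1 = best), take the reciprocal rank of `model`, then average
    these reciprocal ranks across all metrics.

    scores_by_metric layout (assumed): {metric: {model: score}}.
    """
    reciprocal_ranks = []
    for metric, scores in scores_by_metric.items():
        ordered = sorted(scores, key=scores.get, reverse=not lower_is_better)
        rank = ordered.index(model) + 1
        reciprocal_ranks.append(1.0 / rank)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

# Toy example with two JSD-style metrics (lower is better):
scores = {
    "Distance": {"CAMS": 0.01, "LLMob": 0.03, "MoveSim": 0.05},
    "FVLoc":    {"CAMS": 0.04, "LLMob": 0.02, "MoveSim": 0.03},
}
# CAMS ranks 1st on Distance and 3rd on FVLoc -> (1 + 1/3) / 2 = 2/3.
print(cmrr(scores, "CAMS"))
```

A CMRR of 1.0 would mean a model ranks first on every metric, so the metric rewards consistency across the whole evaluation suite.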
The experimental results demonstrate that CAMS exhibits more pronounced advantages in the trajectory generation phase, achieving superior performance on 11 out of 16 metrics with the highest CMRR score. It performs exceptionally well on metrics related to spatial distributions (Distance, Radius, and SD), which can be attributed to its effective utilization of built-in urban spatial knowledge. Furthermore, CAMS maintains its leading performance on individual mobility pattern metrics (DARD), indicating its strong generalization capability to infer new users’ mobility patterns accurately based solely on profile information. The model also shows competitive results on collective distribution metrics (FVLoc, ActProb), suggesting its effective consideration of how trajectories with different user profiles distribute in real urban spaces. Compared to the mobility recovery results, CAMS maintains consistent performance while other models experience significant performance degradation under limited input information, further highlighting its remarkable transfer learning capability in zero-shot scenarios. To demonstrate that CityGPT effectively captures the relationships between user profiles, mobility patterns, and trajectories in real urban spaces, we visualize the anchor points and single-day trajectory point distributions for different user profiles generated by the model in Figure 6. We conduct a comparative analysis of trajectory point distribution characteristics across different user profiles generated by the model (see Appendix A.4 for details). # 3.2.2 Semantic-based mobility recovery First, we want to validate the effectiveness of MobExtractor. During compression, CAMS extracts the relationships between original trajectories and user profiles. In the reconstruction phase, these user profiles are utilized to reconstruct the original trajectories under the guidance of mobility patterns extracted from those relationships. 
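Recovery quality is scored with the Jensen–Shannon Divergence between distributions of trajectory statistics (as introduced in the metrics setup). A minimal pure-Python sketch, where the histogram bins and values are toy placeholders:

```python
# Jensen-Shannon Divergence between two discrete distributions.
# With base-2 logs the result lies in [0, 1]; 0 means identical distributions.
import math

def jsd(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]   # mixture distribution
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability bins.
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

original  = [0.5, 0.3, 0.2]   # e.g. a binned Radius distribution (toy values)
recovered = [0.4, 0.4, 0.2]
print(jsd(original, recovered))   # small positive value -> close recovery
print(jsd(original, original))    # identical distributions -> 0.0
```

Lower JSD between the recovered and original statistics therefore directly corresponds to better recovery, which is how the tables below should be read.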
Lower Jensen–Shannon Divergence (JSD) scores between recovered trajectories and original trajectories indicate better performance of the recovery stage of MobExtractor. The evaluation results are detailed in Table 3. The results demonstrate that despite using only user profiles as external input, CAMS achieves superior performance on 7 of 16 metrics on both datasets, with particularly outstanding advantages in metrics evaluating individual mobility capability (Radius) and behavioral habits (DARD). CAMS also exhibits commendable performance in terms of spatial continuity within trajectories (Distance and SD). Comparative analysis reveals that LLMob performs best on features related to collective distribution and semantics (FVLoc) as well as individual routine patterns (SD), while MoveSim shows better results on metrics measuring collective distribution (FVLoc and ActProb). CAMS does not demonstrate significant advantages in these particular aspects. We attribute LLMob’s strengths to its architecture’s explicit incorporation of personal movement characteristics as external input, whereas MoveSim’s advantages stem from its inherent data-driven approach that better fits overall distributions. However, both methods underperform significantly compared to CAMS on metrics evaluating individual mobility behaviors in real urban spaces (Distance, Radius). This superior performance of CAMS can be attributed to its comprehensive consideration of the alignment between urban geographical knowledge and user mobility patterns. For instance, when analyzing a low-income migrant worker with more constrained and fixed mobility patterns, the model preferentially considers workplace locations (areas with concentrated large factories), residential areas (neighborhoods with lower living costs), and nearby dining and entertainment venues during the generation process. Table 3: Performance comparison of trajectory recovery methods across datasets. 
Best and second-best results are highlighted in bold and underline, respectively. Table 4: Performance comparison of different LLMs within the CAMS framework. Best and second-best results are highlighted in bold and underline, respectively. # 3.3 Ablation Studies In this section, we analyze varying model designs to further demonstrate the rationality and effectiveness of the model design. We also compare the task performance of CityGPT with other open-source/closed-source LLMs, further demonstrating the effectiveness of CityGPT in providing user-relevant urban geospatial knowledge. Impact of reflection in Anchor Location Extractor. As introduced in Section 2.2, we incorporate collective knowledge as feedback in the reflection stage of the Anchor Location Extractor. By analyzing the recovery results in Table 6 and the generation results in Figure 3, we observe that the reflective version consistently outperforms its non-reflective counterpart (w/o C) across all metrics. This improvement confirms that by integrating collective knowledge, the model can more accurately infer the relationship between user profiles and real-world urban spatial patterns, consequently generating trajectories that better align with actual urban mobility distributions. Impact of TrajEnhancer. We evaluate the overall performance of the trajectory enhancement module in Table 6. As visually confirmed in Figure 4b, there is an overall reduction in JSDs across successive DPO iterations, indicating that TrajEnhancer progressively enhances the spatiotemporal continuity of generated trajectories to approximate real-world mobility patterns. Variations of each metric are visualized in Figure 4a. Figure 3: Ablation study on model designs (generation phase). Figure 2: Methodological comparisons in UrbanMapper using the Tencent dataset. Figure 4: DPO result analysis of TrajEnhancer. 
By comparing the results of using CityGPT-enhanced (CAMS-E), map tools (CAMS-M), and social networks (CAMS-S) in Figure 2, we find that CAMS-E outperforms the other methods with visibly lower JSDs. This suggests that implicitly incorporating geographic knowledge in trajectory generation tasks is reasonable, and that CityGPT offers greater advantages over traditional GIS tools and social relationships. Performance comparison between enhanced CityGPT and other LLMs. We test the performance of multiple open-source and closed-source LLMs in our experimental scenarios. The results in Table 4 demonstrate that CityGPT, based on the Llama3.1-8B pre-trained model, can provide more authentic and fine-grained urban geospatial knowledge compared to other larger-parameter models. Additionally, CityGPT achieves the highest CMRR, indicating its superior ability to capture the connections between user profiles, mobility patterns, and geospatial knowledge. # 4 Related Work Mobility simulation. On the basis of macroscopic statistical laws [4, 30, 19], researchers proposed a series of mobility simulation models to depict individual behavior mechanisms [26, 33, 26, 16]. While these mechanistic models are concise, they fail to capture complex human mobility patterns and to model the impact of urban structure. With the rapid development of deep learning, different model structures were designed to model the complex dynamics of mobility behaviors [9, 21, 20]. However, these deep learning methods face challenges of data sparsity, poor transferability, and low explainability. LLM for geospatial tasks. Since LLMs are geospatially knowledgeable [25, 3, 32], researchers have sought to leverage LLMs in the geography and urban science fields by solving domain-specific tasks such as geospatial understanding [5, 22, 29, 10] and geospatial prediction [36, 2, 11, 14]. LLMs can achieve good results in global-scale or national-scale tasks with simple prompt engineering [23, 24] or a trained linear layer [15]. 
However, at the city scale, well-designed agentic frameworks and fine-tuning approaches are required to enable LLMs to acquire urban structural knowledge [10, 1] and enhance task-specific performance via geospatial knowledge alignment. LLM for mobility simulation. With the successful application of LLMs to geospatial tasks, researchers are exploring the potential of applying LLMs to human mobility simulation [34, 31, 37, 18, 17, 40]. They extract individual knowledge from the user profile and historical trajectories, then synthesize simulated data [37], map simulated data to real urban spaces using mechanistic models [31, 18], or generate real-world trajectories based on the given urban spatial information [34]. These methods perform well in few-shot scenarios and exhibit good transferability. However, they insufficiently model real urban structures and fail to effectively capture collective mobility patterns.
Human mobility simulation plays a crucial role in various real-world applications. Recently, to address the limitations of traditional data-driven approaches, researchers have explored leveraging the commonsense knowledge and reasoning capabilities of large language models (LLMs) to accelerate human mobility simulation. However, these methods suffer from several critical shortcomings, including inadequate modeling of urban spaces and poor integration with both individual mobility patterns and collective mobility distributions. To address these challenges, we propose the \textbf{C}ityGPT-Powered \textbf{A}gentic framework for \textbf{M}obility \textbf{S}imulation (\textbf{CAMS}), an agentic framework that leverages a language-based urban foundation model to simulate human mobility in urban space. \textbf{CAMS} comprises three core modules: MobExtractor, which extracts template mobility patterns and synthesizes new ones based on user profiles; GeoGenerator, which generates anchor points considering collective knowledge and produces candidate urban geospatial knowledge using an enhanced version of CityGPT; and TrajEnhancer, which retrieves spatial knowledge based on mobility patterns and generates trajectories aligned with real trajectory preferences via DPO. Experiments on real-world datasets show that \textbf{CAMS} achieves superior performance without relying on externally provided geospatial information. Moreover, by holistically modeling both individual mobility patterns and collective mobility constraints, \textbf{CAMS} generates more realistic and plausible trajectories. In general, \textbf{CAMS} establishes a new paradigm that integrates agentic frameworks with urban-knowledgeable LLMs for human mobility simulation.
# 1 Introduction Object detection, a key computer vision task, aims to identify and locate specific objects in images or videos [1]. Deep learning, especially CNN-based methods, has significantly advanced this field. However, traditional visible-light detection algorithms, which rely on RGB images, struggle in complex conditions such as low light, bad weather, or camouflaged targets [2]. They also cannot capture multi-dimensional object features, limiting detection robustness and accuracy [3, 4]. Multispectral imaging, capturing electromagnetic spectra beyond visible light (e.g., infrared, near-infrared, short-wave infrared), offers a solution [5]. It provides richer object features, such as thermal radiation, vegetation health, and camouflage-penetration ability. These additional spectral details enhance detection performance, particularly in complex environments, driving the development of multispectral object detection algorithms that leverage these images to improve accuracy and robustness. Early multispectral object detection methods applied traditional RGB models such as YOLO [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 13, 16, 17], SSD [18, 19], and R-CNN [20, 21, 22, 23] directly to multispectral images. However, their poor performance on multispectral data stemmed from underutilizing complementary information across spectral modalities. For instance, significant redundancy between RGB and infrared images led to information waste and insufficient performance when using traditional models. Consequently, researchers started exploring multispectral feature fusion methods. Multispectral object detection feature fusion strategies are categorized into early, mid-level, and late decision-level fusion based on their processing stage [20]. Early fusion integrates multispectral information during data collection or initial feature extraction to enrich input features. 
Mid-level fusion occurs during backbone feature extraction, enhancing network expressiveness through intermodal feature interaction. Late decision-level fusion combines detection results from different modalities in the final detection stage to boost overall performance. These fusion methods mark a shift from simple multi-modal stacking to more efficient feature integration and information complementarity, laying the foundation for improved multispectral object detection. Early fusion techniques comprise conventional image fusion methods [24] such as GRW (gradient-based region weighting) and GFF (gradient field fusion), as well as advanced deep learning-based approaches. For example, MDCNN [25] improves image quality in multi-scale feature extraction and fusion, CrossFuse [26] enhances data robustness and generalization with Top-$k$ visual alignment and self-supervised learning, and DIVFusion [27] optimizes infrared and visible image fusion using SIDNet and TCEFNet in an unsupervised manner. Despite their excellent performance, these deep-learning-based image fusion technologies are often computationally complex, time-consuming, and lack embeddability, making them more suitable for offline training. In multispectral object detection practice, there is an increasing trend towards mid-level fusion strategies. Studies [28, 29] using Faster R-CNN as a baseline have revealed significant complementarity between visible and infrared light in pedestrian detection tasks. Researchers have designed various fusion methods, with Halfway Fusion standing out by effectively improving detection performance through fusion in the middle stage of feature extraction; it has been adopted in subsequent studies. However, due to the slow speed and high deployment costs of two-stage models, subsequent research has shifted more towards improved YOLO-based models. 
These improved models have further enhanced the efficiency and performance of multispectral object detection by optimizing architecture and fusion strategies. Early mid-level feature fusion methods [30] mainly used feature concatenation or addition, but these approaches suffered from feature misalignment and poor fusion performance. To address these issues, researchers introduced various cross-attention mechanisms. For instance, Cross-Modality Fusion Transformer (CFT) [31] first applied Transformer to multispectral object detection, improving multispectral object detection performance of YOLOv5 and YOLOv3 by fusing visible and infrared features at each layer of the backbone network. Nevertheless, the huge number of parameters in CFT limits its efficiency in practical applications. To reduce model complexity, researchers have begun exploring more lightweight fusion methods [30, 32]. For example, ICAFusion [33] proposed a dual cross-attention feature fusion method that maintains high detection performance with fewer parameters through an iterative interaction mechanism and a cross-modal feature enhancement module. Subsequent research has delved into multifaceted aspects of multispectral object detection, including multispectral multiscale feature fusion [34], modality imbalance [35], and low-light adaptation [36, 37, 27]. By integrating Transformer’s self-attention or conventional spatial attention mechanisms like CBAM[38] and MLCA [39], researchers have effectively harnessed complementary information from visible and infrared images. This has led to superior performance on datasets like FLIR [40], M3FD [4], and VEDAI [41], and robustness in complex conditions. However, in mid-level fusion studies [31, 42, 36, 35, 43], modalities are often treated as equally important, which is limiting. In reality, one modality usually has an edge in multispectral detection tasks. 
For instance, visible light outperforms infrared in the VEDAI dataset, while infrared is better for pedestrian detection in datasets like LLVIP [44] and KAIST [45]. This highlights the need for differentiated modality treatment and fusion strategy refinement in specific scenarios. Despite notable progress in multispectral object detection, particularly in cross-modal interaction, low-light conditions, and model lightweighting, several challenges persist: (1) Lack of a Unified Framework: Current methods are mostly model-specific or scene-specific, lacking a versatile single-stage multispectral detection framework. This limits algorithm generalizability and scalability across diverse applications. (2) Unreasonable Modality Weighting: Most networks treat modalities as equally important. Yet, in practice, one modality often surpasses the other. Uniform feature fusion may degrade model performance, even below single-modality detection levels. (3) Balancing Model Performance and Fusion Strategy: Selecting optimal fusion strategies across different stages remains challenging. Existing methods often fail to balance model performance and fusion effectively, compromising detection accuracy and efficiency. To address these challenges, this paper introduces YOLOv11-RGBT, a multimodal detection framework based on YOLOv11. It aims to balance detection accuracy, speed, and model parameters while maximizing feature utilization. The key contributions are: (1) YOLOv11-RGBT: a unified multispectral detection framework supporting various tasks such as detection, image classification, instance segmentation, and keypoint detection. (2) Rethinking multispectral feature mid-fusion strategies: Experiments show that mid-level fusion is suitable for single-stage detection. The proposed P3 mid-level fusion strategy achieves better results with fewer parameters by fusing once at the right position instead of multiple times. 
(3) Multispectral controllable fine-tuning (MCF): A controllable fine-tuning strategy for multispectral models inspired by ControlNet. It freezes pre-trained single-modal weights and introduces the other modality through fine-tuning to enhance detection stability. (4) Six multispectral fusion modes: Six designed single-stage multispectral fusion modes applied to multiple models, including YOLOv3-YOLOv12, PP-YOLOE, and RT-DETR, enabling multispectral task implementation across various single-stage networks. The paper is structured as follows: Section 2 reviews related work on multispectral object detection. Section 3 details the YOLOv11-RGBT framework and model components. Section 4 presents experimental results on three datasets. Section 5 discusses the experiments, and Section 6 concludes the study and outlines future work. # 2 Related Work # 2.1 General object detection algorithms for multispectral detection Object detection models are crucial in multispectral detection, enabling automatic object identification and localization in multispectral images. In recent years, deep learning, particularly CNN-based models, has significantly improved detection efficiency and accuracy through specialized network structures and loss functions. These models can be divided into single-stage models (e.g., YOLO [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 13, 16, 17] series, SSD [18, 19], RetinaNet [46]) and two-stage models (e.g., Faster R-CNN [22], Cascade R-CNN [23]). Single-stage models are known for their speed and suitability for real-time applications, while two-stage models are recognized for their high accuracy, making them ideal for scenarios requiring precise object localization. In multispectral object detection, these models can be enhanced to integrate visible and infrared multispectral information, thereby improving detection performance and demonstrating greater robustness in complex environments such as low-light and low-visibility conditions. 
The development of multispectral object detection models typically involves several steps: data preparation, model selection, training, evaluation, and fine-tuning. Once trained, these models are deployed in real-world systems to achieve automated multispectral object detection. As technology advances, more research is focusing on improving detection performance through methods like transfer learning and model fusion. For instance, incorporating attention mechanisms and multispectral feature fusion modules can significantly enhance a model’s adaptability and detection accuracy when dealing with multispectral data. These advancements indicate that deep learning-based object detection models have broad application prospects in multispectral detection, offering new possibilities for task automation in complex environments. # 2.2 Multispectral datasets Multispectral datasets are essential for research in multispectral object detection, image fusion, and semantic segmentation. With the continuous development of multispectral imaging technologies, several classic datasets have become key tools for evaluating the performance of multispectral algorithms. For example, the KAIST [45] and FLIR [40] datasets, commonly used as benchmarks in multispectral object detection, provide rich pairs of visible and infrared images across various illumination conditions and complex scenarios. The LLVIP [44] dataset focuses on visible-infrared paired images under low-light conditions, making it a valuable resource for low-light vision research. Additionally, the M3FD [29] and VEDAI [41] datasets are widely used in multispectral object detection studies. Their diverse image data and detailed annotation information have driven continuous progress in related technologies. Some of the datasets used in this paper’s experiments also come from the aforementioned open-source works. 
In the fields of semantic segmentation and image fusion, the FMB dataset[3], SUNRGBD dataset [47], and DynamicEarthNet [48] dataset offer multimodal data for outdoor, indoor, and satellite scenes, supporting pixel-level semantic segmentation and image fusion tasks. The diversity and complexity of these datasets provide rich resources for research in multispectral object detection, image fusion, and semantic segmentation, promoting the widespread application of multispectral technologies across different fields. In recent years, the scale and diversity of multispectral datasets have continuously expanded, significantly advancing multispectral object detection technologies. For instance, the DAMSDet [49] method introduces a dynamic adaptive multispectral detection transformer, which enhances multispectral object detection performance through a modality competition query selection strategy and a multispectral deformable cross-attention module. These research developments show that multispectral datasets not only provide rich multimodal data resources for multispectral object detection but also facilitate the application and development of related technologies in complex environments. This paper focuses on multispectral object detection tasks, aiming to improve detection robustness and accuracy by integrating visible and infrared image information from multispectral datasets. # 2.3 Multispectral feature fusion Multispectral feature fusion is a critical component of multispectral object detection, enhancing image information by integrating data from different spectral sensors. Deep learning-based fusion methods, especially those incorporating attention mechanisms and iterative learning strategies, have significantly improved fusion efficiency and robustness. 
As shown in the lower part of Figure 1, these methods include early fusion [50, 51, 52], mid-level fusion [31, 53], mid-to-late fusion [54], late fusion [42], and score fusion [42], each with its unique advantages and applicable scenarios. Early fusion integrates data at the raw data level, capturing complementary information between different modalities from the start. Mid-level fusion, conducted after feature extraction, enhances feature representation. Mid-posterior (mid-to-late) fusion combines the characteristics of mid-level and late fusion by first fusing features and then performing object detection, thereby improving detection accuracy and robustness. Late fusion and score fusion are two additional effective fusion strategies. Late fusion integrates detection features after each modality has independently completed feature extraction for object detection. This allows for independent evaluation of detection performance across modalities and combines results through specific strategies to boost overall detection performance. Score fusion focuses on detection scores from each modality during the detection process, integrating these scores through weighted averaging, maximum selection, etc., to produce final results. With the development of deep learning technologies, these fusion methods have shown great potential in multispectral image fusion, particularly in handling complex scenes and improving detection accuracy. The framework proposed in this paper encompasses these five fusion modes and combines them with iterative cross-attention-guided feature fusion to enhance model performance and improve multispectral feature fusion and detection efficacy. Specific details are described in Section 3. # 3 Methodology # 3.1 The overall framework of the YOLOv11-RGBT This paper presents YOLOv11-RGBT, an integrated framework for multispectral image tasks, based on YOLOv11 [16]. 
As shown in Figure 1, it handles multispectral images with RGB and thermal (infrared) data, focusing on improving various multispectral computer vision tasks, particularly multispectral object detection. Model Architecture and Task Execution: YOLOv11-RGBT’s key strength lies in its flexible and efficient architecture supporting YOLOv11’s RGBT tasks and other models such as YOLOv3-YOLOv12 [6, 7, 8, 9, 10, 11, 12, 14, 15, 13, 16, 17], RT-DETR [55], and PP-YOLOE [56] for multispectral detection. The framework comprises three main components: a backbone for feature extraction, a neck for feature processing and fusion, and a head for task execution. This modular design ensures adaptability to diverse applications while maintaining high performance. Data Processing and Augmentation: Data preprocessing and augmentation are crucial for YOLOv11-RGBT’s performance. During preprocessing, multispectral images are standardized and normalized to meet the model’s input requirements. Data augmentation techniques like rotation, scaling, and cropping enhance data diversity, improving the model’s generalization and adaptability. This process lays a solid foundation for extracting high-quality features from multispectral data. Multispectral Feature Fusion Patterns: YOLOv11-RGBT supports five fusion modes, including early, mid-level, mid-posterior, late, and score fusion, as well as weight-sharing modes. These innovative combinations of RGB and thermal data boost the model’s performance in multispectral environments. By enhancing understanding of multispectral data and improving detection accuracy in complex scenarios, YOLOv11-RGBT effectively utilises multispectral data, providing a powerful tool for multispectral image tasks, especially object detection, and delivering outstanding performance in these tasks. Figure 1: The overall architecture of the YOLOv11-RGBT. 
# 3.2 Comparison of multispectral feature mid-fusion strategies While some studies indicate that early fusion is more effective for multispectral image fusion tasks [57, 58], mid-level fusion strategies are widely adopted in multispectral object detection [31, 42, 36, 35, 43]. Our experiments also confirm that mid-level fusion is superior in most scenarios. Consequently, this paper primarily focuses on mid-level fusion strategies. Three distinct mid-level fusion strategies corresponding to different single-stage multispectral object detection methods are illustrated in our figures. First, Figure 2(a) depicts the conventional mid-level fusion approach. Here, visible and infrared images undergo feature extraction via separate backbones. The resulting feature maps are fused in the neck component using methods like Concat or Add, before being passed to the head for detection output. Fusion typically occurs from the P3 to P5 stages [31, 42, 36], with some cases involving fusion across all backbone stages [35, 43] (including the dashed parts). Despite leveraging features from multiple levels, this method may introduce interfering information and lead to performance degradation. Moreover, multispectral feature fusion differs from multimodal feature fusion. Many multispectral object detection datasets have aligned features, and multi-level fusion can cause redundancy. Figure 2 (b) presents our proposed P3 mid-level fusion strategy. Fusion occurs at the specific P3 layer, as earlier fusion may not allow sufficient feature extraction. After feature maps from visible and infrared images are extracted by the backbone, they are passed to the neck. At the P3 layer, the feature maps from both modalities are concatenated and processed by a trainable module. This approach effectively utilizes P3 layer features, improving detection accuracy and performance while reducing model parameters and computations. 
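The P3 concatenate-then-project step described above can be sketched with numpy, using a 1×1 convolution as a stand-in for the trainable fusion module (channel counts, spatial sizes, and variable names are hypothetical illustrations, not the paper's exact layer shapes):

```python
# P3 mid-level fusion sketch: concatenate RGB and thermal feature maps along
# the channel axis, then project back to single-stream width with a 1x1 conv.
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 64, 80, 80                       # P3-scale map (e.g. 640 / 8 = 80)
rgb_p3     = rng.standard_normal((C, H, W))   # from the visible backbone
thermal_p3 = rng.standard_normal((C, H, W))   # from the infrared backbone

# Channel-wise concatenation: shape (2C, H, W)
fused_in = np.concatenate([rgb_p3, thermal_p3], axis=0)

# A 1x1 convolution is a per-pixel linear map over channels: weight (C, 2C).
w = rng.standard_normal((C, 2 * C)) * 0.01
fused_p3 = np.einsum("oc,chw->ohw", w, fused_in)

print(fused_p3.shape)   # back to (C, H, W), ready for the shared neck
```

Because the fusion happens once, at a single scale, the only added parameters are those of the projection, which is consistent with the parameter reduction claimed for this strategy.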
The P3 fusion lightens the model by reducing feature-fusion nodes, but it is not universally effective across all scenarios. To address this, we propose the multispectral controllable fine-tuning (MCF) strategy shown in Figure 2(c), inspired by ControlNet [59]. First, a detection model with excellent performance is trained using infrared images and then frozen to retain pretrained feature representations. Feature maps from visible images are fused with those from infrared images via a Zero Conv2d layer, which is a trainable 2D convolution with initial zero weights. This design allows for controlled fine-tuning of features from different modalities, enhancing model performance stably while utilizing pretrained model knowledge. If a pure visible-light model outperforms the infrared one (as in the VEDAI dataset), the visible-light model can be frozen for fine-tuning instead. In our experiments, except for the VEDAI and M3FD datasets, we conducted multispectral controllable fine-tuning using models pretrained on infrared images across four datasets. Additionally, while this method primarily introduces information from spectral images, it can also incorporate text, point cloud, or depth data for multimodal object detection. However, this paper focuses on multispectral object detection, and readers are encouraged to explore other methods independently. Figure 2: The comparison of multi-spectral intermediate fusion methods for single-stage models. # 3.3 Multispectral controllable fine-tuning (MCF) strategy Figure 3 illustrates the overall network architecture of the multispectral controllable fine-tuning (MCF) strategy. We embed it into YOLOv11 as an example, named YOLOv11-RGBT-MCF, which comprises two parts: the frozen component and the multispectral controllable fine-tuning (MCF) component. The frozen component is based on the YOLOv11 base model pretrained on the COCO [60] dataset and is divided into three parts: Backbone, Neck, and Head. 
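The effect of the zero-initialised convolution at the heart of MCF can be illustrated with a small numpy stand-in (shapes and names are hypothetical): because the injection weights start at zero, the fused features at initialisation are exactly those of the frozen infrared model, so fine-tuning begins from the pretrained behaviour and drifts away only as the weights are updated.

```python
# Zero Conv2d sketch: a zero-initialised 1x1 convolution injecting the
# trainable visible-light branch into the frozen infrared feature stream.
import numpy as np

rng = np.random.default_rng(1)
C, H, W = 32, 40, 40
frozen_ir_feat = rng.standard_normal((C, H, W))  # from the frozen IR model
visible_feat   = rng.standard_normal((C, H, W))  # from the trainable branch

zero_w = np.zeros((C, C))                        # 1x1 conv, zero-initialised
injected = np.einsum("oc,chw->ohw", zero_w, visible_feat)
fused = frozen_ir_feat + injected

# Identity at step 0: the combined network behaves like the frozen model.
assert np.array_equal(fused, frozen_ir_feat)
```

As training moves `zero_w` away from zero, visible-light information is blended in gradually, which is the mechanism behind the "stable fine-tuning" claim above.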
The Backbone is responsible for extracting image features and consists of multiple convolutional layers (Conv) and C3K2 modules. These modules extract image features from shallow to deep levels. The Neck component, which includes feature fusion, upsampling, and SPPF modules, integrates feature information across different scales to generate more comprehensive feature representations. The Head component, composed of multiple DC Head modules, each corresponding to detection outputs at different scales, enables multiscale object detection. Specific details of these modules are shown in the upper right corner. The Conv module consists of a 2D convolutional layer, a BN (Batch Normalization) layer, and a SiLU activation function. The C3K2 module consists of a 2D convolutional layer and a bottleneck layer. These designs enable the network to learn more features through multi-branch learning during training, thereby enhancing detection performance. The MCF strategy enhances the base model by fine-tuning it with visible-light image features. This is achieved using a Zero Conv2d layer, a trainable 2D convolutional layer with initial zero weights. The Zero Conv2d layer allows for controlled fusion of visible-light features with infrared features from the frozen model, enabling targeted fine-tuning of the single-modal model. Unlike ControlNet, which often fuses features in later stages such as the Neck and Head, our MCF strategy focuses on mid-level fusion. This approach is more suitable for multispectral object detection models and allows for more effective information integration. Figure 3: The overall architecture of the YOLOv11-RGBT-MCF. # 3.4 Multispectral transfer training principle in YOLOv11-RGBT When conducting transfer training for YOLOv11-RGBT, the core principle is to load the pre-trained model weights from the COCO [60] dataset into the multispectral model architecture. 
If the multispectral model structure is identical to the pre-trained model, the corresponding weights can be copied directly, ensuring a seamless parameter transfer. When structural discrepancies are encountered, however, we use several effective strategies to ensure model compatibility and performance; specific details can be found in the repository code. For instance, in cases of inconsistent channel counts, channel averaging or copying can be applied to achieve uniformity, laying the foundation for subsequent training. Additionally, inserting $1\times1$ convolutional layers can reconcile channel counts, enabling the model to better process multispectral data and integrate information from different spectra, thereby enhancing detection capability. Taking Midfusion as an example, its transfer training process replicates the YOLOv11 backbone into separate backbones for visible and infrared images. The neck and head components can then be copied directly, rapidly completing the transfer training and improving detection performance and generalization in various scenarios.
# 3.5 Loss function
The loss function of YOLOv11-RGBT is consistent with YOLOv11 and is divided into three parts: distribution focal loss $L_{\mathrm{dfl}}$, object classification loss $L_{\mathrm{cls}}$, and object localization loss $L_{\mathrm{loc}}$. The total loss is:
$$ L_{\mathrm{all}} = \lambda_{\mathrm{dfl}} L_{\mathrm{dfl}} + \lambda_{\mathrm{cls}} L_{\mathrm{cls}} + \lambda_{\mathrm{loc}} L_{\mathrm{loc}} $$
where each $\lambda$ is a hyperparameter weighting the corresponding term. These weights can be adjusted before training according to actual conditions. In this paper, the weights for the three parts are 1.0, 0.5, and 0.05, respectively.
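The weighted combination above is straightforward; as a sanity check (weights taken from the text, with the dfl/cls/loc ordering assumed from the order in which the terms are listed):

```python
def total_loss(l_dfl, l_cls, l_loc, w_dfl=1.0, w_cls=0.5, w_loc=0.05):
    """Weighted sum of the three YOLOv11-RGBT loss terms. The default weights
    are those stated in the paper; their assignment to dfl/cls/loc follows the
    listing order in the text and is an assumption."""
    return w_dfl * l_dfl + w_cls * l_cls + w_loc * l_loc

# With unit losses, the total is simply the sum of the weights.
assert abs(total_loss(1.0, 1.0, 1.0) - 1.55) < 1e-12
```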
The classification loss $L_{\mathrm{cls}}$ uses binary cross-entropy (BCE) loss, expressed as:
$$ L_{\mathrm{cls}} = -\sum_{i=0}^{K \times K} I_{ij}^{\mathrm{obj}} \sum_{c \in \mathrm{classes}} \left\{ P_i^j(c) \log\left[P_i^{\prime j}(c)\right] + \left[1 - P_i^j(c)\right] \log\left[1 - P_i^{\prime j}(c)\right] \right\} $$
Here, $K \times K$ takes three values depending on the image size (e.g., for an image size of $640 \times 640$ they are $20 \times 20$, $40 \times 40$, and $80 \times 80$), representing the grid numbers on the three feature-map scales output by YOLOv11-RGBT. $I_{ij}^{\mathrm{obj}}$ indicates whether the $j^{th}$ prior box in the $i^{th}$ grid contains a predicted target (1 for yes, 0 for no), $c$ denotes the target category, and $P_i^j(c)$ and $P_i^{\prime j}(c)$ are the probabilities of the target belonging to category $c$ in the ground truth and the prediction, respectively.
The object localization loss employs CIoU loss and incorporates three geometric parameters: overlap area, center-point distance, and aspect ratio. These parameters help refine the predicted box to better align with the ground-truth box, thereby improving regression accuracy. The loss is defined as:
$$ L_{\mathrm{loc}} = 1 - \mathrm{IoU} + \frac{\rho^2(b_{\mathrm{pred}}, b_{\mathrm{gt}})}{c^2} + \alpha v, \qquad \alpha = \frac{v}{(1 - \mathrm{IoU}) + v}, \qquad v = \frac{4}{\pi^2} \left( \arctan\frac{w_{\mathrm{gt}}}{h_{\mathrm{gt}}} - \arctan\frac{w}{h} \right)^2 $$
where $\rho^2(b_{\mathrm{pred}}, b_{\mathrm{gt}})$ is the squared Euclidean distance between the center points of the predicted and ground-truth boxes, $c$ is the diagonal length of the smallest enclosing box containing both, $w_{\mathrm{gt}}, h_{\mathrm{gt}}$ are the width and height of the ground-truth box, and $w, h$ are those of the predicted box.
$L_{\mathrm{dfl}}$ is the Distribution Focal Loss (DFL), which quickly focuses the network on values near the annotated positions and maximizes their probabilities:
$$ L_{\mathrm{dfl}} = \sum_{i=0}^{K \times K} \sum_{p=0}^{3} I_{ij}^{\mathrm{obj}} \cdot \mathrm{DFL}(S_i, S_{i+1}) $$
Here, $K \times K$ is the same as in the classification loss, and $p$ indexes the four predicted coordinate values. DFL regresses the predicted boxes in a probabilistic way, requiring a hyperparameter reg_max to be set in advance (default 16), so the output channel count of this branch is $64 = 4 \times$ reg_max. Sixteen fixed reference values $A = [0, 1, 2, \ldots, 15]$ are set, one per reg_max position. The softmax function is applied over these reg_max values, treating regression as a 16-class classification, and cross-entropy is used to compute the loss:
$$ \mathrm{DFL}(S_i, S_{i+1}) = -\left[ (y_{i+1} - y)\log(S_i) + (y - y_i)\log(S_{i+1}) \right] $$
The target position coordinates obtained in the feature map generally do not fall exactly on grid corners, but the labels must be integers. Taking the prediction $x_{\mathrm{min}}$ as an example, its true value is $y$, with left integer neighbor $y_i$ and right integer neighbor $y_{i+1}$.
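The integer-neighbor construction just described can be sketched per coordinate as follows (NumPy, assuming reg_max = 16; an illustration of the DFL term, not the framework's implementation):

```python
import numpy as np

def dfl(logits, y):
    """Distribution Focal Loss for one coordinate.
    logits: (reg_max,) raw scores over the reference values [0..reg_max-1];
    y: continuous target position in [0, reg_max-1]."""
    s = np.exp(logits - logits.max())
    s /= s.sum()                      # softmax over the reg_max bins
    yi = int(np.floor(y))             # left integer neighbor y_i
    yi1 = min(yi + 1, len(logits) - 1)  # right integer neighbor y_{i+1}
    wl, wr = yi1 - y, y - yi          # closer neighbor gets the larger weight
    return -(wl * np.log(s[yi]) + wr * np.log(s[yi1]))

# A distribution concentrated on the neighbors of y = 5.3 yields a lower
# loss than a flat distribution over all 16 bins.
logits = np.zeros(16)
logits[5], logits[6] = 8.0, 8.0
assert dfl(logits, 5.3) < dfl(np.zeros(16), 5.3)
```

Minimizing this term pushes probability mass onto the two bins bracketing the true value, which is exactly the "focus on values near the annotated positions" behavior described above.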
The terms $(y_{i+1} - y)$ and $(y - y_i)$ weight the two neighbors according to their distance from the true value, and $S_i$ and $S_{i+1}$ are the predicted probabilities of $y_i$ and $y_{i+1}$, respectively.
# 4 Experiments
The experimental platform, datasets, and details for this study are presented in Sections 4.1 to 4.3, with additional details available in the code. Sections 4.4 and 4.5 aim to show that mid-term multispectral fusion can reduce model detection performance in certain scenarios, while also demonstrating the effectiveness and feasibility of the proposed MCF method. Section 4.6 focuses on proving the framework's effectiveness and feasibility in typical multispectral detection tasks, as well as the practicality of multispectral transfer learning.
# 4.1 Experimental platform and related indicators
Table 1 lists the experimental platform. Evaluation of network performance relied primarily on the mAP (mean average precision) during training and the performance of the trained network on the validation set. To quantify the detection results, precision (P), recall (R), and mAP [57] were used as performance evaluation indices. P and R are expressed as:
$$ R = \frac{TP}{TP + FN}, \qquad P = \frac{TP}{TP + FP} $$
Table 1: Experimental platform
True positives (TP): the number of positive samples correctly identified as positive by the classifier. True negatives (TN): the number of truly negative samples correctly classified as negative. False positives (FP): the number of truly negative samples misclassified as positive. False negatives (FN): the number of positive samples incorrectly classified as negative. Average precision (AP) is the area bounded by the P-R curve.
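These definitions translate directly into code; a minimal sketch of P, R, and the summation form of AP over a P-R curve (illustrative only, not an official evaluation implementation):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from raw detection counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r

def average_precision(points):
    """AP as the area under the P-R curve, using the summation form
    AP = sum_k P(k) * delta_R(k). `points` are (recall, precision) pairs
    sorted by increasing recall."""
    ap, prev_r = 0.0, 0.0
    for r, p in points:
        ap += p * (r - prev_r)
        prev_r = r
    return ap

p, r = precision_recall(tp=80, fp=20, fn=20)
assert p == 0.8 and r == 0.8
# Two-segment P-R curve: 1.0*(0.5-0) + 0.5*(1.0-0.5) = 0.75
assert abs(average_precision([(0.5, 1.0), (1.0, 0.5)]) - 0.75) < 1e-12
```

The mAP then simply averages these per-category AP values, as in the formula given below.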
In general, the higher the AP value, the better the classifier. The mAP is a comprehensive measure of the average accuracy over all detected target categories: the AP of each category is computed individually and then averaged. AP and mAP are expressed as:
$$ AP_i = \int_0^1 P_i(R_i)\, dR_i = \sum_{k=0}^{n} P_i(k) \Delta R_i(k) $$
$$ mAP = \frac{1}{C} \sum_{i=1}^{C} AP_i $$
# 4.2 Experimental datasets
Figure 4: Distribution of the number of objects in each dataset. The horizontal axis is the category name, and the vertical axis is the count of each category. (a) FLIR; (b) M3FD; (c) VEDAI.
We utilized five open-source multispectral object detection datasets to verify the effectiveness, feasibility, and generalization ability of our detection system and algorithm in complex scenarios. All images from these datasets were resized to $640 \times 640$ before being input into the network. These datasets can be downloaded from their official websites or via the links in the GitHub introduction document. Below is a brief introduction to each dataset:
FLIR [40]: Captured using infrared thermographic cameras, this dataset primarily annotates three categories: pedestrians, cars, and bicycles. With an image size of $640 \times 512$ pixels, it is pre-registered and consists of 4,124 training pairs and 1,013 testing pairs. It is commonly used for object detection, especially in complex scenarios such as night-time and low-light conditions. The category distribution is shown in Figure 4(a).
M3FD [4]: Collected with dual-optical cameras and infrared sensors, it contains 4,200 image pairs and annotates six categories, including humans, cars, and trucks. Widely used in image fusion and object detection tasks; 3,360 images were selected as the training set and 840 as the validation set. Its category distribution is illustrated in Figure 4(b).
KAIST [45]: The original KAIST dataset was captured using visible and long-wave infrared (LWIR) sensors. This study employs the version readjusted by Li, which annotates only pedestrian targets. Mainly used for pedestrian detection tasks, it comprises 8,956 training pairs and 2,252 validation pairs, making it suitable for multispectral pedestrian detection research.
LLVIP [44]: Acquired with visible and infrared cameras, it consists of 15,488 image pairs and annotates the pedestrian category. With an image size of $640 \times 512$ pixels, it is pre-registered and divided into 12,025 training pairs and 3,463 testing pairs. It is primarily used for low-light vision tasks such as image fusion and object detection.
VEDAI [41]: Captured via aerial visible and infrared cameras, it includes approximately 1,050 image pairs with a size of around $640 \times 640$ pixels. Pre-registered and without official fixed splits, it was divided into training and testing sets at a ratio of 8:2. Mainly used for object detection tasks; its category distribution is shown in Figure 4(c).
# 4.3 Implementation details
Experiments in this paper were conducted on two open-source frameworks: our YOLOv11-RGBT and MMDetection. We selected multiple models, including YOLOv3-YOLOv12 and RT-DETR, for comparative experiments. To ensure reproducibility, hyperparameters were altered as little as possible and kept consistent across model training. When experimenting with the aforementioned datasets, the general settings were as follows: a batch size of 16 and a model input resolution of $640 \times 640$. If GPU memory was insufficient, the batch size was reduced to 8. For MMDetection, training involved 3 repeated batches and 30 epochs, with ResNet50 as the backbone. Models in other frameworks were trained for 300 epochs. To speed up training, the number of workers was set to 8 where possible. Below are brief model introductions:
YOLOv3 [8]: YOLOv3 is a one-stage object detection model in the YOLO series.
By incorporating multi-scale feature maps and utilizing a larger network structure, YOLOv3 improves the accuracy and detection capability for small objects.
YOLOv4 [9]: Upgraded from YOLOv3 with a CSPDarknet53 backbone, Mish activation, and an SPP module for enhanced speed and precision.
YOLOv5 [10]: YOLOv5 is a significant version in the YOLO series, featuring a lightweight network structure and model compression techniques. While maintaining high accuracy, it notably enhances detection speed and model efficiency, making it suitable for mobile devices and embedded systems.
YOLOv6 [11]: Developed by Meituan, it focuses on industrial applications with efficient decoupled heads and reparameterization techniques.
YOLOv7 [12]: YOLOv7 also employs extensive reparameterization techniques and introduces trainable bag-of-freebies methods to significantly improve real-time detection accuracy without increasing inference costs. Upon release, it surpassed all known object detectors in terms of both speed and accuracy.
YOLOv8 [13]: YOLOv8 is a derived model based on enhancements and optimizations of YOLOv5, aiming to further improve object detection performance. These improvements involve adjustments to the network structure, training strategies, and data augmentation, with the most significant change being the transition to an anchor-free paradigm.
YOLOv9 [14]: Incorporates GELAN modules and deep supervision for better gradient flow and convergence in resource-constrained systems.
YOLOv10 [15]: Introduces a consistent dual assignment strategy for NMS-free training and a lightweight classification head for efficiency.
YOLOv11 [16]: Focuses on computational efficiency with C3k2 and C2PSA modules for improved feature extraction without accuracy loss.
YOLOv12 [17]: Optimized from YOLOv8 with attention mechanisms for better feature extraction but slightly reduced generalization.
RT-DETR [55]: Based on a Transformer architecture, it removes traditional NMS steps for reduced computational complexity and faster inference.
RetinaNet [46]: RetinaNet is a single-stage object detection model that addresses class imbalance in object detection using a feature pyramid network and Focal Loss. It achieves efficient and accurate object detection, particularly excelling at small objects.
Faster R-CNN [22]: Faster R-CNN is a two-stage object detection model that introduces a Region Proposal Network (RPN) to generate candidate regions and utilizes a shared feature extraction network for classification and precise localization. It strikes a good balance between accuracy and speed.
Cascade R-CNN [23]: Cascade R-CNN is an improved two-stage object detection model that cascades multiple R-CNN modules to progressively filter candidate boxes, enhancing detection accuracy, especially in small-object detection and complex scenes.
# 4.4 Comparative experiments on FLIR dataset
Tables 2 to 7 present the comparative results of multiple models on the FLIR dataset. Table 2 shows the performance of models trained solely on visible-light images, while Table 3 presents results from models trained only on infrared images. Together, they offer a comprehensive evaluation of the latest YOLO models on FLIR. Tables 4 and 5 show the results of models trained with the Midfusion and Midfusion-P3 methods. Notably, all models in Tables 2 to 5 were trained without pre-trained weights. A row-by-row analysis reveals that most multispectral-trained models in Tables 4 and 5 outperform the visible-light-only models in Table 2, but few surpass the infrared-only models. This indicates that infrared images dominate in FLIR, as visible-light images are less effective than infrared thermal images in harsh conditions such as night-time or fog. For example, YOLOv11n-Midfusion improved mAP by $1.10\%$ over the YOLOv11n infrared model, and YOLOv3-Tiny's 3-node fusion model increased mAP 50:95 by $0.91\%$ compared to the infrared-only model. These results confirm the effectiveness of our multispectral models and the superiority of the YOLOv11-RGBT framework.
Figure 5: The transfer learning results of several YOLOv11 models after loading COCO-pretrained weights.
Figure 6: The comparison results of the multispectral controllable fine-tuning (MCF) strategy under different hyperparameters.
Further analysis shows that while the multispectral training results in Tables 4 and 5 generally exceed those of the visible-light models in Table 2, they seldom outperform the infrared-only models in Table 3. Taking YOLOv11 as an example, only the mid-fusion results in the YOLOv11n series surpass the pure infrared models. This hints at a possible modal weight imbalance in multispectral fusion strategies: mid-term fusion of multispectral models may degrade detection performance. To address this, we reduced fusion nodes to cut feature redundancy and conducted single-node fusion experiments, as shown in Table 5. Comparing Tables 4 and 5, most P3-node-only fusion models outperform three-node fusion models. For instance, YOLOv11n-Midfusion-P3 improved mAP by $1.29\%$ over YOLOv11n-Midfusion. This suggests that more fusion nodes do not always mean better performance.
Table 2: The comparison results of object detection models on the FLIR dataset using the visible images (RGB)
Table 3: The comparison results of object detection models on the FLIR dataset using the infrared images (IR)
Table 4: The comparison results of object detection models on the FLIR dataset using the multispectral images (RGB+IR)
Table 5: The comparison results of object detection models on the FLIR dataset using the multispectral images (RGB+IR)
Table 6: The comparison results of fine-tuning with different hyperparameters on the FLIR dataset
Table 7: The comparison results of object detection models on the FLIR dataset; all YOLOv11 models and our models used weights pretrained on the COCO dataset. The data of some models in the table are taken from the literature.
When the modality difference is small, especially after feature extraction, single-node fusion can achieve efficient information integration. Moreover, the P3 single-node fusion models in Table 5 show complementarity with the three-node fusion models in Table 4. When multi-node mid-fusion is ineffective, single-node fusion is advantageous, with fewer model parameters, lower computational requirements, and faster inference speeds. Figure 5 shows the transfer learning results of several YOLOv11 models after loading COCO-pretrained weights. In most cases, transfer learning with multispectral models does not perform as well as with pure infrared models. Ideal transfer learning should significantly boost deep learning model performance, but this was not achieved when loading COCO-pretrained weights. This is mainly due to two factors: first, the backbone branches of the two modalities have almost identical initialized weights, leading to feature redundancy; second, COCO is not a multispectral dataset, and the task differences pose challenges for transfer learning, resulting in poor model performance.
To tackle these issues, we designed a multispectral controllable fine-tuning (MCF) strategy. Freezing the infrared-dominant branch and fine-tuning under different hyperparameters, the results in Table 6 and Figure 6 show that Adam outperforms SGD for YOLOv11n, YOLOv11l, and YOLOv11x, while SGD is better for YOLOv11s and YOLOv11m. Regardless of the fine-tuning method, the results surpass those of directly using pre-trained models, proving MCF's effectiveness and feasibility. Table 7 lists the comparative results of different methods. Our method achieves better detection results than models from 2019 to 2024 in terms of AP. Moreover, while the CFT algorithm improved mAP from $37.4\%$ to $40.0\%$ with five interaction attention mechanisms, our algorithm boosted mAP from $41.96\%$ to $47.61\%$, showing a clear superiority in both improvement magnitude and final mAP value.
# 4.5 Comparative experiments on LLVIP dataset
Table 8 provides a thorough evaluation of the latest YOLO models on the LLVIP dataset. It shows that all YOLOv11 models trained on multispectral data perform better than those trained solely on visible spectra, but still not as well as models trained solely on infrared images. For instance, YOLOv11s trained on multispectral data performs better than the visible-light-only model (AP50 of $89.84\%$ and AP of $53.29\%$), but still lags behind the infrared-only model's AP50 of $97.55\%$ and AP of $67.58\%$. This issue, also observed on the FLIR dataset, indicates a potential modality-weight imbalance in mid-term fusion strategies. As shown in Table 9, transfer-learning experiments on YOLOv11 models reveal the same problem. To address this, we applied MCF training to the LLVIP dataset. As indicated in Tables 9 and 10, MCF-trained YOLOv11 models, such as YOLOv11x-RGBT-MCF with an AP50 of $97.06\%$ and AP of $70.26\%$, surpass the infrared-only model's AP of $69.93\%$ at a comparable AP50 ($97.41\%$). This demonstrates the effectiveness, feasibility, and generalizability of the MCF training strategy.
# 4.6 Comparative experiments on M3FD dataset
Table 11 presents the comparison of object detection models on the M3FD dataset. Analysis shows that multispectral and P3 models generally outperform single-modality models. For instance, YOLOv11s's multispectral model in RGB+IR mode achieves an AP50 of $84.1\%$ and an AP of $57.98\%$, surpassing the pure infrared YOLOv11s model's $82.78\%$ AP50 and $56.93\%$ AP, though not the pure visible-light YOLOv11s model's $84.67\%$ AP50 and $58.51\%$ AP. Additionally, the YOLOv11m-P3 model in RGB+IR mode attains an AP50 of $87.97\%$ and an AP of $62.79\%$, outperforming the standard multispectral model's $87.66\%$ AP50 and $62.59\%$ AP. These results confirm the effectiveness and feasibility of our proposed multispectral object detection framework and algorithms, which can efficiently integrate multimodal information and enhance detection accuracy. Moreover, the experimental results reveal that training multispectral object detection models with mid-level fusion on the M3FD dataset does not lead to the performance drop seen on the FLIR dataset. This indicates that the effectiveness of multispectral fusion strategies depends heavily on dataset characteristics. Table 12 shows the transfer learning results of multiple YOLOv11 models after loading the pre-trained weights from the COCO dataset. Taking the YOLOv11s model as an example, the advantages of multispectral models are significant. In most cases, the transfer learning performance of multispectral models is superior to that of pure infrared and visible-light models.
As shown in Table 12, the AP50 and AP of YOLOv11s-Midfusion in RGB+IR mode reach $87.77\%$ and $61.65\%$, respectively. In contrast, the pure infrared model YOLOv11s (IR mode) only achieves an AP50 of $82.78\%$ and an AP of $56.93\%$, while the visible-light model YOLOv11s (RGB mode) has an AP50 of $84.67\%$ and an AP of $58.51\%$. This shows that the model's performance under visible-light conditions also improves significantly, indicating that multispectral models can better integrate multimodal information and enhance object detection performance.
Table 8: The comparison results of object detection models on the LLVIP dataset. The default RGB+IR is midfusion. Faster R-CNN, Cascade R-CNN, and RetinaNet belong to the early fusion type, while the rest belong to the mid-term fusion type.
Table 9: The comparison results of object detection models on the LLVIP dataset. All YOLOv11 models and our models used pretrained weights from the COCO dataset. Some model data are from the literature [61].
Table 10: The comparison results of fine-tuning with different hyperparameters on the LLVIP dataset.
Table 11: The comparison results of object detection models on the M3FD dataset.
Table 12: The comparison results of object detection models on the M3FD dataset. All YOLOv11 models and our models used pretrained weights from the COCO dataset.
Table 13: The comparison results of fine-tuning with different hyperparameters on the M3FD dataset. RGB main branch.
Table 14: The comparison results of fine-tuning with different hyperparameters on the M3FD dataset. IR main branch.
Table 15: The comparison results of fusion strategies on the M3FD dataset.
Overall, the multispectral model transfer learning results are superior in most cases. Both the P3 and conventional Midfusion models outperform the MCF training that primarily uses infrared images.
The P3 fusion model has advantages in parameters, computations, and detection results. For instance, YOLOv11s-Midfusion-P3 has an AP50 of $87.66\%$ and an AP of $62.20\%$ in RGB+IR mode, surpassing YOLOv11s-RGBT-MCF's $84.1\%$ and $57.98\%$. The experimental results in Table 12 differ from the conclusions of Table 7, highlighting two key points. First, during transfer learning, visible-light models can sometimes outperform infrared models. This might be because the COCO dataset is visible-light-based, leading to better transfer learning outcomes for visible-light models, or because the visible-light channel is inherently superior. Second, multispectral transfer learning results may exceed MCF training results. MCF training has limited trainable parameters, with only some auxiliary-branch parameters trainable and the rest frozen; thus, it may be less flexible than multispectral transfer learning, which trains the entire network. Therefore, it is recommended to try transfer learning first and consider MCF training if the results are unsatisfactory. Additionally, Table 13 shows that the Adam optimizer is not always the best choice. In some cases, the SGD optimizer with suitable initial settings can also yield good results. For example, YOLOv11x-RGBT-MCF using the SGD optimizer achieved an AP exceeding $64\%$, compared to $63.87\%$ with the Adam optimizer. This underscores the importance of selecting the right optimizer and hyperparameters for the specific model and task. We also attempted MCF training with infrared as the main branch. As shown in Table 14, using a non-primary spectral image for MCF training only guarantees superiority over that specific spectrum, not the primary one. For instance, YOLOv11l-RGBT-MCF with infrared as the main branch has an AP of $61.24\%$, higher than YOLOv11l trained on infrared images ($60.52\%$) but lower than the pure visible-light-trained model ($62.1\%$).
This indicates that multispectral images have key channels, and it is advisable to compare the training results of both spectra before choosing the main branch. Table 15 compares different fusion strategies on the M3FD dataset. For YOLOv11s, mid-fusion achieves the highest AP50 of $84.91\%$ and AP of $58.47\%$, outperforming other strategies such as early fusion (AP50 $84.11\%$, AP $58.19\%$) and late fusion (AP50 $84.63\%$, AP $58.02\%$). This aligns with previous studies of mid-level fusion strategies [31, 42, 36, 35, 43]. However, for YOLOv11m, early fusion (AP50 $87.06\%$, AP $61.29\%$) performs better than mid-fusion (AP50 $86.71\%$, AP $60.67\%$). Moreover, the table reveals that most of the optimal detection results stem from early and mid-term fusion. This observation drove us to develop the P3-Midfusion method, as there might be a superior fusion strategy between early and mid-term fusion. Thus, while mid-fusion is often optimal, the best strategy can vary; researchers and engineers should select fusion strategies based on their specific datasets and models. The feature-map visualization in Figure 7 clearly shows the benefits of multispectral feature fusion. The feature maps shown are from stage 2 (P2) of the YOLOv11 model output, including RGB-only, IR-only, and mid-term fused RGB+IR feature maps. From the visualization, it is evident that models using only RGB or IR data can detect objects to a certain extent, but their detection capabilities are limited. For example, the RGB-only model may fail to recognize objects in low-visibility or smoky conditions. The IR-only model may miss objects that are not prominent in the infrared spectrum, leading to poorer detection performance than the pure RGB model, as shown in Table 12.
In contrast, the mid-term fusion model combining RGB and IR data demonstrates superior detection performance. Its feature maps not only highlight pedestrian outlines but also accurately show vehicles and other objects. This indicates that multispectral feature fusion can effectively integrate the advantages of different spectral bands, thereby significantly improving the model's detection accuracy and reliability.
Figure 7: Feature-map visualization of multispectral fusion from YOLOv11 model stage 2 (P2), illustrating enhanced object detection capabilities through combined RGB and infrared data processing.
# 4.7 Qualitative test
We present qualitative results of the YOLOv11-RGBT-MCF algorithm on two multispectral datasets in Figure 8. As depicted in the figure, the YOLOv11-RGBT-MCF model exhibits a strong capability for detecting objects in multispectral images, including scenes with complex backgrounds, low object discrimination, uneven lighting, smoke, rain, nighttime, and low-angle shooting perspectives.
# 5 Discussion
The experiments above prove the effectiveness, feasibility, and generalization of the models in the framework. In addition to these experiments, we also designed a multispectral PGI [14] strategy and proposed several lightweight cross-attention mechanisms, integrating them into the YOLOv11-RGBT framework (see the project repository: https://github.com/wandahangFY/YOLOv11-RGBT). Multispectral PGI and the cross-attention mechanisms can improve mAP by $0.5\%$ on some datasets, but we did not include them in the main experiments because the improvement is limited and only effective on some datasets, which may stem from their dependence on specific spectral features. The distribution of spectral features differs across datasets, which affects how well PGI can exploit gradient information.
For example, the gradient-guiding effect of PGI is more significant on datasets with distinct differences in spectral features. This suggests that whether to use these modules should be chosen carefully according to the specific data characteristics in practical applications. We also found that on some datasets, such as M3FD [4], the YOLOv11-Midfusion model achieves better detection results with a batch size of 32 than with 16 (the mAP is about $0.6\%$ higher). However, since all hyperparameters need to be kept consistent, the batch size was set to 16 for all models where possible, except for the x-scale models, which used 8. Therefore, theoretically, there is still room for further improvement of some weights, and interested researchers can explore this in the future.
Figure 8: Some detection results of YOLOv11-RGBT-MCF on the M3FD and VEDAI datasets.
In addition, due to limited equipment resources, this paper only performed transfer training with pre-trained weights (from the COCO [60] dataset) and multispectral controllable fine-tuning tests of YOLOv11 on the five datasets; for the other models, we only provide experimental results without pre-trained weights. Moreover, to ensure the generalization of the models, we did not introduce attention mechanisms [28, 29, 31, 30, 26] or low-visibility modules [27, 36, 37] in the experiments. In view of this, we suggest that future research focus on improving the generalization ability of these modules and exploring adaptive adjustment strategies to suit multiple datasets and scenarios, thereby expanding their scope of application. Despite some limitations, the YOLOv11-RGBT framework, with the advantages of multispectral fusion, has wide application prospects in security monitoring, autonomous driving, and other fields. Engineers can flexibly choose fusion modes and strategies according to specific scenario requirements.
For future research, we suggest digging deeper into the intrinsic correlations of multi-spectral features and developing more efficient feature extraction and fusion methods. At the same time, lightweight multi-spectral detection models should be explored to reduce hardware requirements and promote the application of multi-spectral object detection in resource-constrained environments. We have open-sourced most of the work described in this paper, and will release the weights and methods once the paper is published so that researchers and engineers can explore and improve them further.
Multispectral object detection, which integrates information from multiple spectral bands, can enhance detection accuracy and environmental adaptability, and holds great application potential across various fields. Although existing methods have made progress in cross-modal interaction, low-light conditions, and model lightweighting, challenges remain: the lack of a unified single-stage framework, difficulty in balancing performance against fusion strategy, and unreasonable modality weight allocation. To address these, we present YOLOv11-RGBT, a new comprehensive multimodal object detection framework built on YOLOv11. We designed six multispectral fusion modes and successfully applied them to models from YOLOv3 to YOLOv12 as well as RT-DETR. After re-evaluating the importance of the two modalities, we proposed a P3 mid-fusion strategy and a multispectral controllable fine-tuning (MCF) strategy for multispectral models. These improvements optimize feature fusion, reduce redundancy and mismatches, and boost overall model performance. Experiments show our framework excels on three major open-source multispectral object detection datasets, such as LLVIP and FLIR. In particular, the multispectral controllable fine-tuning strategy significantly enhanced model adaptability and robustness: on the FLIR dataset, it consistently improved the mAP of YOLOv11 models by 3.41%-5.65%, reaching a maximum of 47.61%, verifying the effectiveness of the framework and strategies. The code is available at: https://github.com/wandahangFY/YOLOv11-RGBT.
# 1 INTRODUCTION The successful replication of long chain-of-thought (CoT) reasoning, similar to that in OpenAI's o1 (OpenAI, 2024), by DeepSeek-R1 (Guo et al., 2025) using the Group Relative Policy Optimization (GRPO) algorithm (Shao et al., 2024), has sparked a surge of interest within the open research community. This interest is focused on understanding, reproducing, and extending DeepSeek's approach, as evidenced by a multitude of recent studies (Liu et al., 2025b; Hu et al., 2025; Zeng et al., 2025; Yu et al., 2025; He et al., 2025; Wen et al., 2025; Chen et al., 2025c). Fundamentally, this emerging paradigm is a form of Reinforcement Learning with Verifiable Rewards (RLVR) (Lambert et al., 2024; Guo et al., 2025; Yue et al., 2025), where a Large Language Model (LLM) acts as a policy, generating a CoT as a sequence of actions and receiving feedback on answer correctness from deterministic verifiers. This paradigm holds the promise of endowing LLMs with the ability to learn from experience through free exploration, potentially leading to unlimited intelligence (OpenAI, 2024; Guo et al., 2025; Silver & Sutton, 2025). However, emerging concerns question the true effectiveness of RLVR. These concerns are motivated by the observation that while RLVR improves the Pass@1 metric, it often fails to enhance the Pass@K metric compared to the base model. This phenomenon was first noted by Shao et al. (2024) during the development of GRPO. Subsequently, a systematic study by Yue et al. (2025) on Reinforcement Learning with Verifiable Rewards across various open-weight RLVR models confirmed that the Pass@K metric of the base model increases at a much faster rate than that of its RLVR-tuned counterpart. Figure 1: Pass@K curves of the base LLM and the post-RLVR model on AIME 2024 and AIME 2025, contrasting the hypothesis of Yue et al. (2025) (all reasoning paths are present in the base model; RLVR improves sampling efficiency but reduces reasoning capacity) with our perspective (RLVR implicitly incentivizes correct reasoning paths and mitigates spurious guesses). Consequently, for a moderately large K, the base model eventually matches and surpasses the reasoning model. This led to their adventurous hypothesis: all correct reasoning paths are already present in the base model, and RLVR merely improves sampling efficiency at the cost of reducing overall reasoning capacity. While this hypothesis has gained significant support (Zhu et al., 2025; Zhang et al., 2025; Wang et al., 2025a; Chen et al., 2025a), conflicting observations have also been reported. For instance, Liu et al. (2025a) detected the emergence of new reasoning patterns after RLVR, while also acknowledging a loss in reasoning capacity as measured by Pass@K. Chen et al. (2025c) reported statistically significant improvements in Pass@K for values of $K$ up to 1024. Shojaee et al. (2025) observed similar Pass@K behavior on math datasets but found different patterns on puzzles with high complexity. To the best of our knowledge, no systematic explanation exists to reconcile these contradictory findings, leaving a critical question unanswered: should we accept the hypothesis as a fundamental limitation, or should we trust the empirical observations that challenge it? In essence, we return to the core problem posed by Yue et al.
(2025) and rephrase it as: "Does RLVR genuinely incentivize new reasoning in base LLMs, and if so, why does it often fail to improve their Pass@K performance?" In this work, we propose a new perspective to resolve this debate: RLVR's primary role is to implicitly incentivize correct reasoning in base LLMs, not just to find correct final answers. We argue that Pass@K is an unreliable metric for evaluating true reasoning progress, as base LLMs often produce inaccurate or incomplete CoTs that coincidentally arrive at the correct solution due to their strong likelihood maximization capabilities. Under this view, the failure of RLVR to improve Pass@K does not signify a failure to enhance reasoning, but rather a failure of the metric itself to capture the underlying improvement in reasoning quality. To properly measure this phenomenon, we introduce a new metric, CoT-Pass@K, which evaluates success only when both the final answer and the intermediate reasoning CoT are correct. Moreover, we establish a theoretical foundation for our perspective, formalizing how RLVR's optimization process, particularly under GRPO-style algorithms, differs from traditional RL by prioritizing the logical integrity of the reasoning path. Our theory not only aligns with our empirical results using CoT-Pass@K but also explains several previously elusive phenomena observed in models like DeepSeek-R1 (Guo et al., 2025). We conduct extensive empirical validation to support our claims, but manually verifying CoT correctness at scale is challenging, especially for complex math benchmarks. We overcome this by employing a powerful yet lightweight model (DeepSeek-R1-0528-Qwen3-8B (DeepSeek, 2025)) as an automated verifier in an LLM-as-a-CoT-Judge paradigm, a method whose reliability we confirm through manual checks.
Using this verifier, we re-evaluate the performance of a post-RLVR model (DAPO-Qwen-32B (Yu et al., 2025)) against its base model (Qwen2.5-32B-Base (Qwen, 2024)). As summarized in Figure 1, the CoT-Pass@K metric clearly demonstrates that RLVR robustly incentivizes correct reasoning paths across all tested values of $K$ (up to 1024). Furthermore, we investigate the training dynamics to understand when this improved reasoning emerges. By reproducing GRPO-style training using the open-source DAPO recipe (Yu et al., 2025) and analyzing checkpoints, we find that RLVR begins to incentivize correct reasoning from the very early stages of training, and this capability successfully generalizes to unseen test questions. The results of our training analysis align well with our theorem, which states the implicit incentivization of correct reasoning CoTs. The remainder of the paper is organized as follows. Section 3 presents the theoretical foundation of RLVR for LLMs. Section 4 provides empirical validation on standard benchmarks, and Section 5 analyzes the training dynamics of RLVR. Section 6 discusses limitations and future directions, Section 2 reviews related work, and Section 7 concludes the paper. Our key contributions are: • A New Perspective and Metric for RLVR: We reinterpret the effect of RLVR as incentivizing correct reasoning and propose CoT-Pass@K as a reliable measure. This new view addresses emerging concerns about RLVR's efficacy and highlights its true potential. • A Theoretical Foundation: We establish a theoretical foundation that distinguishes RLVR for LLMs from traditional RL for generic models by emphasizing CoT correctness. This framework formalizes the optimization dynamics of RLVR, explains previously unclear empirical results, and guides future research.
• Empirical Validation and Training Analysis: We observe that RLVR can improve the CoT-Pass@K of base LLMs for all values of $K$, indicating the incentivization of correct reasoning. Moreover, we observe that RLVR consistently promotes correct reasoning from early training stages and that this ability generalizes. # 2 RELATED WORK RLVR Since the release of DeepSeek-R1 (Guo et al., 2025), there has been a surge of research interest in the RLVR paradigm (Luo et al., 2025b; Liu et al., 2025b; Hu et al., 2025; Cui et al., 2025; Xie et al., 2025; Zeng et al., 2025; Yu et al., 2025; Luo et al., 2025a; Chen et al., 2025a; He et al., 2025; Wen et al., 2025; Cao et al., 2025; Liu et al., 2025a; Chen et al., 2025c). Due to the high computational cost of RLVR, most studies have focused on small- to medium-sized models (up to 32B parameters). These studies span a wide range of aspects, including training data curation, objective design, hyperparameter tuning, base model selection, and various insightful observations. However, only a few studies have addressed the theoretical foundations of RLVR. In this work, we argue that RLVR for LLMs should be understood from a different perspective, one that emphasizes the correctness of reasoning paths. We hope our theoretical perspective and empirical findings will inspire the community to develop more efficient and effective RLVR approaches, unlocking its broader potential across diverse applications. Debates on Whether RLVR Really Incentivizes Since Yue et al. (2025) raised the insightful question of whether RLVR truly incentivizes improvements beyond the base LLMs, and conducted extensive empirical experiments to demonstrate the wide applicability of their key hypothesis (that RLVR does not improve Pass@K over the base LLM because all reasoning paths are already present in the base model), there have been varying perspectives on this hypothesis.
Some researchers agree with this viewpoint (Wang et al., 2025b; Zhu et al., 2025; Zhang et al., 2025; Wang et al., 2025a; Chen et al., 2025a), while others report contradictory findings (Liu et al., 2025a; Chen et al., 2025c; Shojaee et al., 2025), as discussed in the introduction. There is currently no fundamental understanding to resolve these debates. Liu et al. (2025a) speculated that previous RLVR experiments may have been conducted within a single domain (e.g., math) and optimized for limited gradient steps before true exploration could occur. Shojaee et al. (2025) suggested that the complexity of puzzles might be the key factor. Chen et al. (2025c) presented statistically significant empirical results to justify that their model indeed improves Pass@K, particularly highlighting a persistent gap on LiveCodeBench v6 (Jain et al., 2025), leading them to conclude that the base model is likely guessing. In this work, we align with the intuition of Chen et al. (2025c) and believe in the rationality of their empirical results. Our findings also suggest that on challenging, live benchmarks, base LLMs struggle to guess, and their limitations in reasoning become clearly evident. The Importance of Correct CoTs Recent studies have also highlighted the importance of verifying the correctness of CoTs (Arcuschin et al., 2025; McGinness & Baumgartner, 2025; Shojaee et al., 2025). However, their approaches focus on defining synthetic reasoning tasks where the correctness of reasoning CoTs can be verified easily. While this is an interesting and effective approach for fully examining reasoning correctness, it is difficult to apply to unstructured reasoning scenarios, such as math and code.
In this work, we argue that the LLM-as-a-CoT-Judge paradigm could play a crucial role in more general reasoning tasks, and we emphasize the pressing need for evaluation benchmarks that assess the reliability of emerging LLM verifiers. Meanwhile, we note that a contemporary study also advocates this paradigm (Jiang et al., 2025), mainly in the education and healthcare domains. # 3 A THEORETICAL FOUNDATION OF RLVR FOR LLMS In this section, we establish a theoretical foundation for how RLVR, as implemented in the GRPO algorithm (Shao et al., 2024), incentivizes the generation of correct reasoning CoTs, which we define as being both logically accurate and complete. A key distinction must be made between RLVR and traditional RL. Base LLMs, owing to the powerful likelihood estimation capabilities obtained during pre-training, can generate numerous incorrect or incomplete CoTs that coincidentally arrive at a correct final answer. In contrast, traditional RL simply optimizes for action trajectories that yield high rewards, without necessarily verifying the intrinsic correctness of each action along the path. For instance, in the game of Go (Silver et al., 2017), every action is valid once the simulation environment is set up correctly. In the context of LLMs, we argue that the core principle of RLVR is fundamentally different: it is not merely about reaching a correct answer, but about exploring the immense reasoning space with broad prior knowledge and about identifying and reinforcing logically rigorous CoTs. To formalize this principle, we now elaborate on our problem formulation, key assumptions, the resulting theorem, and some discussion of its implications. # 3.1 PROBLEM SETUP For each prompt $q$, we sample $G$ responses $\mathbf{Y} = \{y_1, y_2, \dots, y_G\}$ from the policy $\pi_\theta$. Let $c_i$ be the CoT in response $y_i$, and $a_i$ the final answer.
Define correctness indicators:
$$ \mathcal{I}_{\mathrm{CoT}}(c_i) = \begin{cases} 1 & \text{if } c_i \text{ is correct (logically accurate and complete)} \\ 0 & \text{otherwise} \end{cases}, $$
$$ \mathcal{I}_{\mathrm{Ans}}(a_i) = \begin{cases} 1 & \text{if } a_i \text{ is correct} \\ 0 & \text{otherwise} \end{cases}. $$
We have a verifiable reward $R(y_i)$ that is binary and determined solely by answer correctness:
$$ R(y_i) = \mathcal{I}_{\mathrm{Ans}}(a_i). $$
The GRPO advantage $\hat{A}(y_i)$ is computed as:
$$ \hat{A}(y_i) = \frac{R(y_i) - \mu_{\mathbf{Y}}}{\sigma_{\mathbf{Y}}}, \quad \mu_{\mathbf{Y}} = \frac{1}{G}\sum_{j=1}^{G} R(y_j), \quad \sigma_{\mathbf{Y}} = \sqrt{\frac{1}{G}\sum_{j=1}^{G} \left(R(y_j) - \mu_{\mathbf{Y}}\right)^2}. $$
We consider a simplified GRPO gradient update:
$$ \nabla_\theta J(\theta) \approx \frac{1}{G}\sum_{i=1}^{G} \hat{A}(y_i) \nabla_\theta \log \pi_\theta(y_i \mid q). $$
# 3.2 THE THEOREM Given the following assumptions, we establish Theorem 1.
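The group-normalized advantage of equation 4 can be sketched in a few lines (a minimal illustration, not the authors' implementation; `rewards` stands for the binary answer-correctness scores of one sampled group):

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages: standardize binary rewards within one group.

    rewards: 0/1 answer-correctness scores for the G responses sampled
    from the same prompt (equation 3).
    """
    r = np.asarray(rewards, dtype=float)
    mu = r.mean()     # mu_Y in equation 4
    sigma = r.std()   # sigma_Y (population std, as in equation 4)
    if sigma == 0.0:  # all-correct or all-wrong group: no learning signal
        return np.zeros_like(r)
    return (r - mu) / sigma

# A group where 2 of 8 responses answered correctly:
adv = grpo_advantages([1, 1, 0, 0, 0, 0, 0, 0])
# correct responses share a positive advantage, incorrect ones a negative one
```

Note that the advantage depends only on a response's reward relative to its group, which is what makes the implicit incentivization argument of the next section possible.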
• Logical Coherence: Compared with incorrect CoTs, correct CoTs have a higher probability of inducing correct answers, since base LLMs have been pre-trained on massive corpora and thus hold strong logical priors:
$$ P(\mathcal{I}_{\mathrm{Ans}}(a_i) = 1 \mid \mathcal{I}_{\mathrm{CoT}}(c_i) = 1) = \alpha > P(\mathcal{I}_{\mathrm{Ans}}(a_i) = 1 \mid \mathcal{I}_{\mathrm{CoT}}(c_i) = 0) = \beta $$
• Stable Advantage Estimation: The group size $G$ is sufficiently large to ensure statistically stable advantage estimates, and the group is learnable ($\sigma_{\mathbf{Y}} > 0$).
Theorem 1 (GRPO Implicitly Incentivizes Correct Reasoning) Given the above problem setup and two assumptions, the expected GRPO advantage $\mathbb{E}[\hat{A}(y_i)]$ satisfies:
$$ \begin{aligned} \mathbb{E}\left[\hat{A}(y_i) \mid \mathcal{I}_{\mathrm{CoT}}(c_i) = 1\right] &> 0, \\ \mathbb{E}\left[\hat{A}(y_i) \mid \mathcal{I}_{\mathrm{CoT}}(c_i) = 0\right] &< 0, \end{aligned} $$
for any prompt $q$, where $\hat{A}(y_i)$ is defined in equation 4. Consequently, GRPO policy updates (equation 5) increase the likelihood of generating correct CoTs.
Proof 1 Let $p_c = P(\mathcal{I}_{\mathrm{CoT}}(c_i) = 1)$ be the current probability of generating a correct CoT. The expected reward for a response $y_i$ is:
$$ \mathbb{E}[R(y_i)] = \begin{cases} \alpha & \text{if } \mathcal{I}_{\mathrm{CoT}}(c_i) = 1 \\ \beta & \text{if } \mathcal{I}_{\mathrm{CoT}}(c_i) = 0 \end{cases} $$
The group-level expected reward $\mu \triangleq \mathbb{E}[\mu_{\mathbf{Y}}]$ is:
$$ \mu = p_c \alpha + (1 - p_c)\beta. $$
For large $G$, the group mean $\mu_{\mathbf{Y}}$ and variance $\sigma_{\mathbf{Y}}^2$ concentrate around their expectations:
$$ \mu_{\mathbf{Y}} \xrightarrow{G \to \infty} \mu, \qquad \sigma_{\mathbf{Y}}^2 \xrightarrow{G \to \infty} \sigma^2 > 0. $$
The expected advantage conditional on CoT correctness is:
$$ \begin{aligned} \mathbb{E}[\hat{A}(y_i) \mid \mathcal{I}_{\mathrm{CoT}}(c_i) = 1] &\xrightarrow{G \to \infty} \frac{\alpha - \mu}{\sigma}, \\ \mathbb{E}[\hat{A}(y_i) \mid \mathcal{I}_{\mathrm{CoT}}(c_i) = 0] &\xrightarrow{G \to \infty} \frac{\beta - \mu}{\sigma}. \end{aligned} $$
Substituting equation 10 into equation 13 and equation 14:
$$ \begin{aligned} \mathbb{E}[\hat{A}(y_i) \mid \text{correct CoT}] &= \frac{(1 - p_c)(\alpha - \beta)}{\sigma}, \\ \mathbb{E}[\hat{A}(y_i) \mid \text{incorrect CoT}] &= \frac{-p_c(\alpha - \beta)}{\sigma}. \end{aligned} $$
Since $\alpha > \beta$ (by equation 6, the logical-coherence assumption) and $\sigma > 0$, we have:
$$ (1 - p_c)(\alpha - \beta)/\sigma > 0, \qquad -p_c(\alpha - \beta)/\sigma < 0, $$
proving inequalities equation 7 and equation 8. The GRPO policy gradient update in equation 5, $\nabla_\theta J(\theta) \approx \frac{1}{G}\sum_{i=1}^{G} \hat{A}(y_i) \nabla_\theta \log \pi_\theta(y_i \mid q)$, on average increases the likelihood of responses with $\hat{A}(y_i) > 0$ (correct CoTs) and decreases it for those with $\hat{A}(y_i) < 0$ (incorrect CoTs). Thus, $p_c$ increases monotonically.
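The two inequalities of Theorem 1 can be checked numerically under the stated assumptions. The sketch below compares a Monte Carlo estimate of the conditional advantages against the closed-form values of equations 15-16 (with the law-of-total-variance expression for $\sigma^2$); the values $\alpha = 0.9$, $\beta = 0.2$, $p_c = 0.3$ are illustrative and not taken from the paper:

```python
import numpy as np

def closed_form_advantages(p_c, alpha, beta):
    """Conditional expected advantages from equations 15-16, using the exact
    reward variance sigma^2 = (a-b)^2 p(1-p) + p a(1-a) + (1-p) b(1-b)."""
    sigma = np.sqrt((alpha - beta) ** 2 * p_c * (1 - p_c)
                    + p_c * alpha * (1 - alpha)
                    + (1 - p_c) * beta * (1 - beta))
    return (1 - p_c) * (alpha - beta) / sigma, -p_c * (alpha - beta) / sigma

# Monte Carlo check with illustrative values
rng = np.random.default_rng(0)
p_c, alpha, beta, G = 0.3, 0.9, 0.2, 200_000
cot_ok = rng.random(G) < p_c                      # I_CoT(c_i)
ans_ok = np.where(cot_ok, rng.random(G) < alpha,  # equation 6: alpha vs beta
                  rng.random(G) < beta)
r = ans_ok.astype(float)                          # R(y_i) = I_Ans(a_i)
adv = (r - r.mean()) / r.std()                    # equation 4

emp = (adv[cot_ok].mean(), adv[~cot_ok].mean())
exact = closed_form_advantages(p_c, alpha, beta)
# emp closely matches exact: positive for correct CoTs, negative for
# incorrect CoTs, as Theorem 1 states
```

Because the gap $\alpha - \beta$ drives both terms, shrinking it toward zero makes both conditional advantages vanish, which is exactly the failure mode discussed under the exceptional cases below.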
Discussions on $(p_c, \alpha, \beta)$ Theorem 1 demonstrates that GRPO inherently aligns policy updates with correct reasoning, even for base models with low initial $p_c$. The driving factor is the gap $\alpha - \beta > 0$, which amplifies the advantage difference between correct and incorrect CoTs. As training progresses, $\alpha$ increases (due to sounder reasoning) while $\beta$ decreases (reducing spurious correlations), widening the gap and further accelerating coherent reasoning. As $p_c \to 1$, $(\alpha - \beta)$ may approach 1 at a faster pace, because generating short answers is typically much easier than producing long correct CoTs; then $\mathbb{E}[\hat{A}(y_i) \mid \text{correct CoT}] \to 0$, ensuring convergence.
Discussions on $(\mu, \sigma^2)$ From equation 10, we know that the group reward mean is given by $\mu = p_c \alpha + (1 - p_c)\beta$. Furthermore, we can derive the exact formula for the variance $\sigma^2$ in equation 12 and analyze its impact, together with $p_c$, $\alpha$, and $\beta$, on policy iterations. The sample variance $\sigma_{\mathbf{Y}}^2$ converges to the true variance $\sigma^2$:
$$ \sigma_{\mathbf{Y}}^2 = \frac{1}{G}\sum_{j=1}^{G}\left(R(y_j) - \mu_{\mathbf{Y}}\right)^2 \xrightarrow{G \to \infty} \mathrm{Var}(R(y_j)) \equiv \sigma^2, $$
where $\mathrm{Var}(R(y_j))$ can be computed using the law of total variance:
$$ \mathrm{Var}(R(y_j)) = \mathrm{Var}\left(\mathbb{E}[R(y_j) \mid \mathcal{I}_{\mathrm{CoT}}(c_j)]\right) + \mathbb{E}\left[\mathrm{Var}(R(y_j) \mid \mathcal{I}_{\mathrm{CoT}}(c_j))\right]. $$
First term:
$$ \mathbb{E}[R(y_j) \mid \mathcal{I}_{\mathrm{CoT}}(c_j)] = \begin{cases} \alpha & \text{if } \mathcal{I}_{\mathrm{CoT}}(c_j) = 1 \\ \beta & \text{if } \mathcal{I}_{\mathrm{CoT}}(c_j) = 0 \end{cases}. $$
The random variable $\mathbb{E}[R(y_j) \mid \mathcal{I}_{\mathrm{CoT}}(c_j)]$ has variance:
$$ \mathrm{Var}\left(\mathbb{E}[R(y_j) \mid \mathcal{I}_{\mathrm{CoT}}(c_j)]\right) = (\alpha - \beta)^2 p_c (1 - p_c). $$
Second term:
$$ \mathrm{Var}(R(y_j) \mid \mathcal{I}_{\mathrm{CoT}}(c_j)) = \begin{cases} \alpha(1 - \alpha) & \text{if } \mathcal{I}_{\mathrm{CoT}}(c_j) = 1 \\ \beta(1 - \beta) & \text{if } \mathcal{I}_{\mathrm{CoT}}(c_j) = 0 \end{cases}, $$
so its expectation is:
$$ \mathbb{E}\left[\mathrm{Var}(R(y_j) \mid \mathcal{I}_{\mathrm{CoT}}(c_j))\right] = p_c \alpha(1 - \alpha) + (1 - p_c)\beta(1 - \beta). $$
Thus:
$$ \sigma^2 = (\alpha - \beta)^2 p_c (1 - p_c) + p_c \alpha(1 - \alpha) + (1 - p_c)\beta(1 - \beta). $$
Substituting $\mu$ and $\sigma$ into equation 15 and equation 16, we have
$$ \begin{aligned} \mathbb{E}[\hat{A}(y_i) \mid \text{correct CoT}] &= \frac{(1 - p_c)(\alpha - \beta)}{\sqrt{(\alpha - \beta)^2 p_c(1 - p_c) + p_c\alpha(1 - \alpha) + (1 - p_c)\beta(1 - \beta)}}, \\ \mathbb{E}[\hat{A}(y_i) \mid \text{incorrect CoT}] &= \frac{-p_c(\alpha - \beta)}{\sqrt{(\alpha - \beta)^2 p_c(1 - p_c) + p_c\alpha(1 - \alpha) + (1 - p_c)\beta(1 - \beta)}}. \end{aligned} $$
An ideal pre-training of a high-capacity model could help ensure that $\alpha \to 1$ and $\beta \to 0$ at the beginning of RLVR. In this condition, we have the following advantage estimates:
$$ \mathbb{E}[\hat{A}(y_i) \mid \text{correct CoT}] = \sqrt{\frac{1 - p_c}{p_c}}, \qquad \mathbb{E}[\hat{A}(y_i) \mid \text{incorrect CoT}] = -\sqrt{\frac{p_c}{1 - p_c}}. $$
In this ideal scenario, the role of humans would be to prepare a comprehensive and diverse set of questions and answers, leveraging RLVR to automatically incentivize the model's reasoning capabilities. However, in practice (the "unideal case") it is often necessary to first fine-tune the base LLM to align its output with a proper reasoning distribution before applying RLVR. Discussions on Key Observations in RLVR Grounded in our theoretical analysis, we can now provide our explanations for several previously elusive yet important observations reported in DeepSeek-R1 (Guo et al., 2025).
Our Explanation of the Observation "DeepSeek-R1-Zero achieved remarkable Pass@K performance on AIME 2024 but encountered challenges such as poor readability and language mixing.": Even DeepSeek-V3 (Liu et al., 2024) cannot guarantee the ideal conditions where $\alpha \to 1$ and $\beta \to 0$. As a result, cold-start data is required to rectify prior logic biases, motivating the R1 approach. Our Explanation of the Observation "The R1-Zero approach did not work well for the 32B dense model, yet distillation can be very effective.": Key factors such as $(p_c, \alpha, \beta)$ for the 32B base model are in an even worse state, causing pure RLVR to converge to suboptimal local solutions. Based on our analysis, the key to effective reasoning lies in learning correct CoTs; therefore, distillation can efficiently teach an LLM how to reason properly. Our Explanation of the Observation "The average response length of DeepSeek-R1-Zero naturally increases during training.": On average, long CoTs have higher probabilities than short CoTs of generating correct answers, because more tokens enable problem solving in finer-grained steps and may also introduce more spurious correlations. Replacing "correct vs. incorrect" with "long vs. short" in equation 6 leads to the conclusion that long CoTs are naturally incentivized. For simple problems, long CoTs may be regarded as an improper model bias, which could be the root cause of the widely observed "over-thinking" phenomenon (Chen et al., 2025b). Discussions on Exceptional Cases We acknowledge that the assumption of logical coherence (equation 6) may not always hold, potentially leading to the reinforcement of incorrect CoTs. As previously discussed, base LLMs may retain inherent biases from pre-training; though incorrect, these biases might coincidentally yield the right final answer due to spurious correlations. In such cases, improper model biases could be unintentionally reinforced.
Consequently, we believe that additional techniques, such as learning from human feedback (Ouyang et al., 2022) or off-policy guided learning (Yan et al., 2025), may prove essential in addressing these misalignments. # 3.3 KEY METRICS TO MEASURE For each prompt $q$ with $G$ responses, we define the number of correct answers and the number of correct CoTs (with correct final answers) as:
$$ C = \sum_{i=1}^{G} \mathcal{I}_{\mathrm{Ans}}(a_i) \quad \text{(number of correct answers)}, \qquad D = \sum_{i=1}^{G} \mathcal{I}_{\mathrm{CoT}}(c_i) \cdot \mathcal{I}_{\mathrm{Ans}}(a_i) \quad \text{(correct CoTs with correct answers)}. $$
We estimate Pass@K using the method introduced by Chen et al. (2021); Yue et al. (2025). Accordingly, we define the per-prompt key metrics for any $K \leq G$ as:
$$ Pass@K^{(q)} = 1 - \frac{\binom{G-C}{K}}{\binom{G}{K}} \quad \text{(prob. of at least one correct answer)}, $$
$$ CoT\text{-}Pass@K^{(q)} = 1 - \frac{\binom{G-D}{K}}{\binom{G}{K}}, $$
$$ P(CA)^{(q)} = \frac{C}{G} \quad \text{(fraction of correct answers} = Pass@1^{(q)}\text{)}, $$
$$ P(CC \mid CA)^{(q)} = \frac{D}{C}. $$
The overall (averaged) metrics across $M$ prompts are given by:
$$ Pass@K = \frac{1}{M}\sum_{q=1}^{M} Pass@K^{(q)}, \qquad CoT\text{-}Pass@K = \frac{1}{M}\sum_{q=1}^{M} CoT\text{-}Pass@K^{(q)}, $$
$$ P(CA) = \frac{1}{M}\sum_{q=1}^{M} P(CA)^{(q)}, \qquad P(CC \mid CA) = \frac{1}{M}\sum_{q=1}^{M} P(CC \mid CA)^{(q)}. $$
# 4 REVISITING PASS@K EXPERIMENTS WITH COT-PASS@K We revisit the Pass@K experiments on popular math benchmarks using EvalHub (Ye, 2025), introducing CoT-Pass@K to provide a more accurate assessment of reasoning. A prominent challenge in this analysis is the verification of massive volumes of long and complex CoTs, a task that requires expert-level mathematical knowledge and is prohibitively difficult to perform manually at scale. To address this, we leverage the recently released DeepSeek-R1-0528 (DeepSeek, 2025), employing its distilled 8B variant, DeepSeek-R1-0528-Qwen3-8B, as a powerful yet lightweight verifier. We developed a specific prompt template for this task (see Appendix A.4). Following automatic verification at scale, we confirmed the reliability of this LLM-as-a-CoT-Judge paradigm by manually verifying its judgments on some of the most difficult problems (see Appendix A.5). To mitigate potential errors from the LLM verifier, which is powerful but not infallible, we verify each CoT multiple times. We then determine the final CoT correctness using three distinct strategies to ensure the robustness of our findings: any-correct (at least one verification returns correct), all-correct (all verifications must return correct), and majority-correct (a majority vote determines the outcome). In Appendix A.3, we justify that this multi-verification system can mitigate both false positives and false negatives. Figure 2 presents a comparison between the base LLM and its post-RLVR counterpart using both Pass@K and CoT-Pass@K.
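The per-prompt estimators of Section 3.3 are straightforward to compute with exact binomial coefficients (a sketch; the counts in the example are made up, not results from the paper):

```python
from math import comb

def pass_at_k(G, C, K):
    """Unbiased per-prompt Pass@K (Chen et al., 2021): probability that at
    least one of K samples drawn without replacement from G responses has a
    correct answer, given C correct answers in total."""
    if G - C < K:      # fewer incorrect responses than K: success guaranteed
        return 1.0
    return 1.0 - comb(G - C, K) / comb(G, K)

def cot_pass_at_k(G, D, K):
    """Same estimator, counting only the D responses whose CoT *and* final
    answer are both judged correct."""
    return pass_at_k(G, D, K)

# Example: 100 samples, 30 correct answers but only 10 with correct CoTs
p = pass_at_k(100, 30, 4)       # inflated by lucky guesses
cp = cot_pass_at_k(100, 10, 4)  # stricter measure of reasoning quality
```

Averaging these per-prompt values over the $M$ benchmark prompts gives the overall metrics; the gap between `p` and `cp` is exactly the contribution of correct answers reached through flawed reasoning.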
The Pass@K results (top row) confirm the observations of Yue et al. (2025): the performance of the base LLM appears to catch up with and even surpass the post-RLVR model as $K$ increases. However, in stark contrast, the CoT-Pass@K results on AIME 2024 and AIME 2025 reveal a persistent and significant performance gap between the models across all values of $K$ (up to 1024). This gap is especially pronounced on AIME 2025, as it is free from data contamination, having been released after the base model's training cutoff. Manual inspection of numerous cases confirms that the base LLM frequently arrives at correct answers through flawed reasoning (see examples in Appendix A.5.1 and A.5.2). These flawed solutions, which inflate the standard Pass@K score, are correctly filtered out by our CoT-Pass@K metric. Conversely, the post-RLVR model consistently produces rigorous reasoning chains, as evidenced by its high scores even under the strict all-correct verification strategy.
Figure 2: $Pass@K$ (top row) and $CoT$-$Pass@K$ (bottom row) for the base LLM and the post-RLVR model on AIME 2025, AIME 2024, Math-500, AMC23, and Minerva, plotted against the sampling number $K$.

Nevertheless, we observe that on other benchmarks like Math-500 and AMC23, the incentivizing effects of RLVR are less apparent, as the base LLM is already capable of solving these problems correctly within a few trials. This could be because 1) the problems are simple enough for the base LLM to solve using its existing knowledge, or 2) the problems were part of its pre-training data, so the base LLM can easily recall a correct solution given multiple trials. It is difficult to distinguish these possibilities without knowing the training data recipe of Qwen2.5-32B. Furthermore, on the Minerva benchmark, the post-RLVR model shows no improvement. This is likely attributable to a domain mismatch, as Minerva contains many physics problems and more free-form answers, whereas the DAPO training data was restricted to math problems formatted to produce integer answers. Our theoretical framework ensures that RLVR incentivizes correct reasoning for training prompts, but it does not guarantee generalization across all scenarios.
Therefore, the observed evaluation variations do not challenge the validity of our framework. The results on AIME 2024 and AIME 2025 already demonstrate the generalization of correctly reasoned generations incentivized during training. Moreover, these differing generalization behaviors highlight the critical importance of evaluating RLVR on challenging, contamination-free benchmarks to accurately assess its impact on model reasoning capabilities. They also underscore the need for curating comprehensive and diverse datasets to effectively scale RLVR, as demonstrated in (Liu et al., 2025a; Chen et al., 2025c).

# 5 ANALYZING THE TRAINING DYNAMICS OF RLVR

The existence of generalizable, incentivized correct reasoning on AIME 2024 and AIME 2025 motivates us to investigate when such incentivization emerges during RLVR training. To this end, we adopt the open-sourced DAPO training recipe (Yu et al., 2025), which follows the R1-zero approach starting from the base LLM Qwen2.5-32B and claims to achieve results better than DeepSeek-R1 (Guo et al., 2025) on the same base model. Our reproduction was conducted on 32 AMD MI300X GPUs using the VERL framework (Sheng et al., 2025), and ran for over two weeks. While our run did not reproduce the $Pass@1$ accuracy above $50\%$ reported by Yu et al. (2025), we reached a comparable performance of around $44\%$ $Pass@1$, in line with a third-party reproduction (Chen et al., 2025a). We use the same verifier introduced in Section 4 to assess the correctness of both training and evaluation rollouts. Figure 3 summarizes the training dynamics of our DAPO reproduction. We observe that RLVR begins to incentivize correct reasoning from the very beginning, as evidenced by increased $P(CC|CA)^{(q)}$ values in the early training steps shown in Figures 3(a) and 3(b).
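A sketch of how the per-question quantities $P(CA)^{(q)}$ and $P(CC|CA)^{(q)}$ can be estimated from verifier labels (an illustrative helper with hypothetical names, not the paper's code):

```python
def question_stats(rollouts):
    """Per-question estimates of P(CA) and P(CC|CA) from verifier labels.

    rollouts: list of (answer_correct, cot_correct) boolean pairs,
    one pair per sampled rollout for this question.
    """
    # Keep the CoT labels of rollouts whose final answer is correct.
    cot_flags = [cot for ans, cot in rollouts if ans]
    p_ca = len(cot_flags) / len(rollouts) if rollouts else 0.0
    p_cc_given_ca = sum(cot_flags) / len(cot_flags) if cot_flags else 0.0
    return p_ca, p_cc_given_ca
```

Note that $P(CA)^{(q)}$ alone (i.e., $Pass@1^{(q)}$) says nothing about whether those correct answers were reached through sound reasoning, which is exactly the gap the conditional term exposes.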
These incentivized reasoning capabilities translate into improved generalization on unseen questions, as demonstrated by notable gains in $CoT$-$Pass@K$ on AIME 2024 within the first 20 training steps in Figure 3(c).

Figure 3: (a) Distributions of $P(CA)^{(q)}$ and $P(CC|CA)^{(q)}$ for easy training questions in DAPO. (b) Distributions of $P(CA)^{(q)}$ and $P(CC|CA)^{(q)}$ for hard training questions in DAPO. (c) Generalization performance ($Pass@1$ and $CoT$-$Pass@1$) on AIME 2024 across different training steps.

Note that each training step here corresponds to one round of PPO-style optimization (Schulman et al., 2017), which includes 16 gradient updates, according to the DAPO training script. Thus, we see that correct reasoning abilities begin to generalize after only a few gradient updates. Furthermore, the incentivization of correct reasoning on training questions appears to be a continuous process, as reflected by the steady increase in the mean of $P(CC|CA)^{(q)}$ throughout training, for both easy and hard questions. Meanwhile, we again observe that $P(CA)^{(q)}$ (equivalent to $Pass@1^{(q)}$) is an unreliable metric, particularly for easy training questions. As shown in Figure 3(a), the distribution of $P(CA)^{(q)}$ becomes highly skewed toward 1.0 after 180 steps, misleadingly suggesting that most questions are perfectly solved. However, examining the distribution of $P(CC|CA)^{(q)}$ reveals that a substantial fraction of responses still contain flawed reasoning.
We suspect this is one of the reasons behind the difficulty of achieving strong results with Qwen2.5-32B using the R1-zero approach. In addition, for hard questions, we observe that the mean of $P(CA)^{(q)}$ increases more quickly than that of $P(CC|CA)^{(q)}$, albeit at a slower rate compared to the easy-question setting. In both cases, improving $P(CC|CA)^{(q)}$ proves to be a slow and challenging process. Since our analysis shows that incentivizing correct CoTs is key to improving reasoning capabilities, we believe that future research should explore novel mechanisms to accelerate the improvement of $P(CC|CA)^{(q)}$, thereby enhancing both the efficiency and effectiveness of RLVR.

Figure 4: Revisiting $Pass@K$ and $CoT$-$Pass@K$ experiments on AIME 2024 and AIME 2025 using early and mid-stage checkpoints of our DAPO reproduction. The base LLM and post-RLVR model are Qwen2.5-32B and DAPO-Qwen-32B, respectively.

To further support the claim that RLVR incentivizes correct reasoning from the start in a smooth and consistent manner, we conduct additional evaluations using early and mid-stage checkpoints from our DAPO reproduction. Figure 4 presents the corresponding $Pass@K$ and $CoT$-$Pass@K$ results on AIME 2024 and AIME 2025 with $K$ scaled up to 1024, while the initial DAPO experiment adopts $K = 16$ in training and $K = 32$ in testing. These results more clearly reveal the underlying incentivization of reasoning as captured by $CoT$-$Pass@K$. The contamination-free AIME 2025 benchmark provides especially clear evidence of this effect across all tested values of $K$. We believe these empirical findings from the training dynamics of RLVR strongly validate the theoretical framework proposed in this work.
# 6 DISCUSSIONS

Limitations A key limitation of our study lies in the use of an LLM as the verifier for the correctness of reasoning CoTs, due to the prohibitive cost of manually checking a large volume of generated reasoning paths. To mitigate this, we present extensive case studies in Appendix A.5 to demonstrate that DeepSeek-R1-0528-Qwen3-8B functions as a relatively robust verifier across multiple math benchmarks. Furthermore, we apply multiple verification calls to obtain $CoT$-$Pass@K$ metrics under various criteria, including any-correct, majority-correct, and all-correct, in order to balance between false positives and false negatives. Another limitation is the current focus on math reasoning and a limited number of post-RLVR models. We plan to broaden the scope in future work by incorporating more reasoning domains and more models.

Call for Live, Challenging Benchmarks Static benchmarks developed prior to the release of modern base models are increasingly susceptible to contamination risks, potentially undermining the reliability of observed improvements. In response, we emphasize the need for live benchmarks that evolve over time, as suggested in recent studies (Jain et al., 2025; White et al., 2025). Additionally, we agree with the viewpoint of Yao (2025) that future research advancements may rely more on designing new evaluations, benchmarks, and environments.

Call for Lightweight yet Powerful CoT Verifiers While DeepSeek-R1-0528-Qwen3-8B serves as a useful CoT verifier, it is not infallible. Conflicting verification results across multiple queries reveal the challenges of false-positive and false-negative verifications. To tackle this, we combine multiple verification strategies, including different voting rules, to improve robustness.
Looking forward, there is a pressing need for lightweight yet reliable CoT verifiers that can serve as standardized evaluators beyond the coarse-grained $Pass@K$ metric. This direction also relates to previous studies on process reward modeling (Lightman et al., 2024; Uesato et al., 2022; Wang et al., 2024).

Scaling RLVR or Scaling Pre-Training While the scaling of pre-training has led to transformative progress in LLMs (Kaplan et al., 2020; Liu et al., 2024), enabling the transition to the era of artificial general intelligence, we argue that scaling RLVR could be equally pivotal, given the empirical evidence and theoretical foundation that together demonstrate its real incentivization beyond base LLMs. As modern LLMs approach the limits of language token exposure, learning from experience (Silver & Sutton, 2025) may represent the next leap. Recent efforts by leading research teams suggest a growing emphasis on this direction (Guo et al., 2025; DeepSeek, 2025; Gemini, 2024; Grok, 2025; OpenAI, 2025; Qwen, 2025; Gemini, 2025; Anthropic, 2025; Mistral.AI, 2025). For the broad open research community, understanding the foundations and limitations of current RLVR algorithms is crucial to push this direction further.

New RLVR Algorithms and Beyond With our insight that RLVR implicitly incentivizes correct reasoning in base LLMs, we anticipate the development of new algorithmic paradigms. These may include new optimization formulations or objective functions, such as policy-gradient approaches (Sutton et al., 1999; Schulman et al., 2017), new likelihood-based optimization objectives (Chen et al., 2025a; Zhu et al., 2025), and preference optimization frameworks (Rafailov et al., 2023; Su et al., 2025). The key principle is that the new algorithms should be designed to more directly incentivize correct reasoning paths, alleviating inherent logical biases in base LLMs.
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a promising paradigm for advancing the reasoning capabilities of Large Language Models (LLMs). However, a critical paradox clouds its efficacy: RLVR-tuned models often underperform their base models on the $Pass@K$ metric for solution-finding, leading to the hypothesis that RLVR merely re-weights existing reasoning paths at the cost of reasoning diversity. In this work, we resolve this contradiction by identifying the source of the problem: the $Pass@K$ metric itself is a flawed measure of reasoning, as it credits correct final answers that probably arise from inaccurate or incomplete chains of thought (CoTs). To address this, we introduce a more precise evaluation metric, $CoT$-$Pass@K$, which mandates that both the reasoning path and the final answer be correct. We provide a new theoretical foundation that formalizes how RLVR, unlike traditional RL, is uniquely structured to incentivize logical integrity. Our empirical results are supportive: using $CoT$-$Pass@K$, we observe that RLVR can incentivize the generalization of correct reasoning for all values of $K$. Furthermore, by analyzing the training dynamics, we find that this enhanced reasoning capability emerges early in the training process and smoothly generalizes. Our work provides a clear perspective on the role of RLVR, offers a more reliable method for its evaluation, and confirms its potential to genuinely advance machine reasoning.
# 1 Introduction

While autoregressive models are effective, they involve highly sequential sampling, cannot use bidirectional context, and lead to constrained architectures by requiring a decoder mask. In contrast, discrete diffusion models (Lou et al.; Sahoo et al., 2024) can parallelize generation by denoising multiple tokens simultaneously, use bidirectional information, do not need a decoder mask, and, notably, allow for more controllable generation. Previous work (Gat et al.; Shi et al., 2024) has demonstrated that discrete diffusion models can handle arbitrary prompting locations, whereas autoregressive models are only capable of left-to-right text completion. However, this advantage has a significant limitation: existing diffusion models cannot alter the distances between these prompt tokens. Consequently, existing text diffusion models cannot generate the ground-truth sample without access to the oracle positions of the prompt and infilled text.

Figure 1: Our novel diffusion across token positions enables dynamic token movement for infilling. Unlike prior methods, DDOT learns to move mask tokens to appropriate locations, such as to the right of "brown," even if initially unmasked there. The OT coupling (colored lines) significantly simplifies this learning by drastically reducing possible permutations.

We solve this issue by enabling discrete diffusion models to learn where to move tokens. Specifically, we design a diffusion process that operates across token positions, allowing the model to vary the position and length of infilled spans. Furthermore, given the importance of token positioning in preserving semantic meaning (He et al.), we incorporate sample-level OT (optimal transport) coupling to maintain relative token ordering throughout the diffusion process. Even minor positional changes can dramatically alter meaning, as seen in phrases like "The child's green coat" and "The green child's coat".
DDOT's OT coupling preserves this relative ordering throughout the diffusion process while supporting flexible-length text infilling. Our OT coupling prevents such swaps and drastically improves DDOT's downstream performance across all studied benchmarks and metrics. Extensive experiments show that DDOT outperforms naive diffusion baselines and achieves on-par performance with state-of-the-art non-autoregressive (NAR) models. In summary, our contributions are as follows:

• We propose DDOT, the first discrete text diffusion method for infilling arbitrary text sequences without ground-truth span lengths.

• We provide extensive experiments showing that DDOT outperforms diffusion baselines and achieves performance on par with state-of-the-art NAR models.

• We provide detailed ablations and visualizations that verify DDOT's effectiveness in adjusting the position and length of infilled text spans and provide insights into our novel sample-level OT coupling. The OT coupling significantly outperforms naive diffusion across all tested benchmarks and metrics.

# 2 Related Work

# 2.1 Lexicographically Constrained Generation

Constrained text generation has been explored through a variety of approaches, including AR and NAR methods (Zhang et al., 2020; Iso, 2024; He, 2021; Stern et al., 2019; Lu et al., 2022). POINTER (Zhang et al., 2020) enables flexible token generation through iterative insertion, though it still depends on sequential token prediction and multiple forward passes. AutoTemplate (Iso, 2024) approaches constrained text generation by simply feeding the prompt tokens into an encoder-decoder style model. CBART (He, 2021) extends the POINTER architecture by moving to an encoder-decoder model.
Autoregressive methods, while effective for their specific use cases, inherit fundamental limitations: they require sequential generation that scales linearly with sequence length, and their causal attention masks prevent full utilization of bidirectional context during generation. Most critically, for text infilling tasks, these approaches struggle to simultaneously consider both past and future context when generating intermediate content (Cao et al., 2023).

# 2.2 Discrete Diffusion Models

Discrete diffusion models offer an innovative approach to text generation, addressing key limitations of autoregressive methods (Lou et al.; Ren et al., 2024; Gong et al., 2024; Sahoo et al., 2024). These models denoise corrupted text sequences, enabling parallel updates of multiple tokens rather than the token-by-token process of autoregressive methods, reducing the number of forward passes. Additionally, their bidirectional nature allows them to leverage both past and future context, in contrast to the causal masking that constrains autoregressive models. Early frameworks like D3PM (Austin et al.) adapted continuous diffusion to discrete tokens using Markovian corruption, while subsequent advances such as multinomial diffusion (Hoogeboom et al., 2021) improved noise schedules and training efficiency. Recent work on score-based discrete diffusion has further advanced the field by providing analytical solutions for the denoising process. Instead of directly modeling transition probabilities, SEDD (Lou et al.) uses a score-based approach that learns the gradient of the log probability, which proves particularly effective for handling high-dimensional categorical data like text. However, despite these advantages, current discrete diffusion models face a significant limitation: they require fixed token positions throughout the generation process.
This constraint makes them unsuitable for flexible-length text infilling, where the length of the generated text might differ from the original masked region.

# 2.3 Optimal Transport Coupling for Flexible Generation

OT (Villani et al., 2009) coupling has been well studied in image generation through theoretical foundations such as Rectified Flow (Liu et al.), which showed how continuous normalizing flows can be understood as OT paths. This framework demonstrated that probability mass can be efficiently transported through straight trajectories while preserving structural relationships, providing insights into how sequential data can be manipulated while maintaining order. Building on these insights, Tong et al. showed how OT can be incorporated into diffusion models to accelerate generation. By defining appropriate transport costs between positions, OT coupling ensures that dimensions within a sample move in straight, predictable paths.

Figure 2: DDOT learns to vary infilled span lengths and positions, unlike prior fixed-position diffusion methods. (Left) We compute two separate intra-set OT couplings within the prompt positions and the response positions. This constraint drastically simplifies possible permutations. (Right) Given a timestep $t$, we predict the token and position.

Although previous works incorporated OT into diffusion models with a focus on generation speed, our work leverages these properties to enable flexible-length text infilling. By coupling the discrete token values with continuous position variables through OT, we allow the model to optimize both token content and positioning simultaneously. This approach maintains the parallel generation and bidirectional context advantages of discrete diffusion while adding the ability to dynamically adjust sequence lengths.
The OT coupling effectively serves as a bridge between the discrete token space and continuous position space, enabling natural language generation that respects both local token relationships and global sequence structure.

# 3 Background: Masked Text Diffusion

Discrete diffusion adapts the continuous nature of diffusion processes to text. The simplest and most performant version of discrete diffusion is masked diffusion. Rather than adding Gaussian noise to continuous values such as pixels, masked text diffusion assigns a certain probability of masking tokens throughout the forward diffusion process. For the purposes of this paper, masked diffusion can be seen as a masked language model (like BERT (Devlin et al., 2019)) that works at gradually decreasing masking ratios to generate text. Specifically, our text diffusion process follows Score Entropy Discrete Diffusion (SEDD) (Lou et al.), modeling a score function across a support of $N$ states or token values. The forward diffusion process is given by a continuous-time Markov chain (CTMC):

$$ \frac{dp_t}{dt} = Q_t p_t, \quad p_0 \approx p_{\mathrm{data}} $$

$Q_t \in \mathbb{R}^{n \times n}$ denotes a transition matrix whose columns give the probability of transitioning from one discrete state (or token) to another. In masking diffusion models, every non-mask diagonal entry is $-1$, the corresponding entries of the last row are $1$, and all other elements are 0, moving probability mass from any token value to the last token in the vocabulary, the mask token. To extend the process to sequences rather than individual tokens, SEDD applies discrete diffusion independently to all tokens in a sequence.
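The absorbing-mask CTMC above can be illustrated with a tiny numerical sketch (the toy vocabulary size, initial distribution, and Euler step are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

N = 4  # toy vocabulary; index N - 1 is the mask token
Q = np.zeros((N, N))
Q[np.arange(N - 1), np.arange(N - 1)] = -1.0  # probability mass leaves each real token
Q[N - 1, :N - 1] = 1.0                        # and flows into the absorbing mask token

# Columns sum to zero, so dp/dt = Q p conserves total probability.
assert np.allclose(Q.sum(axis=0), 0.0)

p = np.array([0.5, 0.3, 0.2, 0.0])  # initial distribution over the 4 tokens
dt = 1e-4
for _ in range(10_000):  # Euler-integrate the CTMC up to t = 1
    p = p + dt * (Q @ p)
# Non-mask probabilities decay like exp(-t); the mask token absorbs the rest.
```

At $t = 1$ each non-mask probability has shrunk by roughly a factor of $e^{-1}$, so most of the mass has already moved to the mask state; as $t \to \infty$ the distribution collapses entirely onto the mask token.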
SEDD reverses the textual diffusion equation by modeling the score $s_\theta(x)_y$ of transitioning from token $x$ to token $y$ with a score entropy loss:

$$ \mathcal{L}_{\mathrm{tok}} = \mathbb{E}_{(x, x_0)} \left[ \sum_{y \neq x} w_{xy} \left( s_\theta(x)_y - \frac{p(y|x_0)}{p(x|x_0)} \log s_\theta(x)_y \right) \right] $$

where $w_{xy}$ weighs the importance of different states. Finally, to simulate the reverse diffusion process, we either take Euler steps or use an analytical denoiser (Lou et al.).

# 4 Approach

Previous discrete diffusion models must know the ground-truth positions of mask tokens to infill a sequence, as the positions of tokens are fixed throughout the entire diffusion process. To address this limitation, we introduce DDOT, a diffusion framework that jointly denoises discrete token values and continuous token positions, allowing for flexible infilling spans.

# 4.1 Continuous Position Diffusion

DDOT gradually transforms token positions from a simple initial distribution to their ground-truth permutation. To achieve this, we denoise $l$ token positions $z_t \in [-1, 1]^l$ in the continuous domain. The initial (limiting) distribution of positions is sampled from a uniform distribution, $z_T \sim \mathcal{U}(-1, 1)^L$, where $L$ represents the maximum sequence length. The ground-truth positions are defined as:

$$ z_0 = \mathrm{linspace}\left( -\frac{l}{L}, \frac{l}{L}, l \right), $$

where $z_0$ is evenly spaced and scaled to match the length of the true sequence. This setup ensures that the position diffusion process captures the gradual transition from a simple prior to a structured output aligned with the token ordering and length.
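For concreteness, with a true sequence length of $l = 4$ and maximum length $L = 8$, the ground-truth positions work out as follows (a toy illustration of the formula above):

```python
import numpy as np

L, l = 8, 4  # maximum context length and true (unpadded) sequence length
z0 = np.linspace(-l / L, l / L, l)  # evenly spaced, scaled by sequence length
# z0 spans [-0.5, 0.5]: a shorter sequence occupies a proportionally smaller range.
```

The scaling by $l / L$ means the spread of $z_0$ itself carries information about the true sequence length, analogous to how fixed-position models expose absolute distances between tokens.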
Similar to fixed-position models, we scale by $l$ to provide information on the absolute distances between tokens. Our preliminary studies reveal that naively performing diffusion on the token positions leads to poor performance. We observe that adding noise to token positions destroys the relative ordering of text and adds a combinatorially exploding number of permutations, problems not present in absolute-valued continuous domains such as image generation (Ho et al., 2020; Karras et al., 2022). Since the relative ordering of text can drastically change its meaning and consequently abruptly destroy the signal across timesteps, it is crucial to fix token ordering across time. Unlike traditional diffusion processes that aim to completely destroy the data signal at the final timestep $T$, we provide the ordering of the prompt tokens at $T$ to the model. To maintain the prompt token ordering, we introduce a sample-level OT coupling between the initial $z_0$ and final $z_T$ token positions of the diffusion process. Additionally, we do not add noise to the positions and instead use linear interpolation between $z_0$ and $z_T$. Following Liu et al. and Albergo and Vanden-Eijnden, we model the forward position diffusion process with an ordinary differential equation (ODE)

$$ dZ_t = v(Z_t, t)\, dt $$

where $v(z_t, t)$ is the velocity (rate of change) of the positions at time $t$. We partition $x$ and $z$ into prompt $(x^p, z^p)$ and infilled response $(x^r, z^r)$ subsets.
The limiting positions $z_T$ are determined as the solution to the linear sum assignment problem, minimizing the OT cost after scaling $z_0$ to $[-1, 1]$:

$$ z_T^* = \pi\left( \frac{z_0 L}{l}, z_T \right), \quad \pi = \arg\min_{\pi' \in S_n} \sum_{i=1}^{n} C_{i, \pi'(i)} $$

where $C_{i,j}$ is the Euclidean distance between positions and $S_n$ is the set of all permutations of length $n$. For prompt tokens, the optimal transport is balanced, so $|z_T^p| = |z_0^p| = l^p$. However, the number of infilled tokens in the limiting distribution is almost always larger than that of the ground truth: $|z_T^r| = L - l^p \geq l - l^p = |z_0^r|$. To reconcile the different sizes of $z_0$ and $z_T$, we treat the remaining unassigned $L - l$ positions as pad tokens. Specifically, we set the ground-truth tokens to the pad token and the ground-truth positions to stationary paths before scaling: $z_0^{\mathrm{pad}} = z_T^{\mathrm{pad}} \cdot \frac{l}{L}$. Importantly, these stationary paths maintain that the coupling between $z_0$ and $z_T$ achieves the optimal OT cost. This approach establishes a coupling within prompt and infilled tokens, ensuring that tokens in each subset maintain non-crossing paths, a key property of OT (Tong et al.; Villani et al., 2009). While intra-set paths remain ordered to minimize transport cost, DDOT allows inter-set crossings (i.e., between prompt and infilled tokens), enabling flexible token positioning while preserving the relative order of prompt tokens. Path visualizations are available in A.4. Beyond prompt tokens, DDOT ensures that the relative token order within all sets, prompt and infilled response, aligns with the ground truth at any timestep. This guarantees smooth transitions and avoids disruptions caused by abrupt positional swaps.
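Because the positions are scalars, the linear sum assignment above reduces to rank (sorted) matching, which is what makes the non-crossing property hold; a small brute-force check with hypothetical toy positions:

```python
import itertools

z0 = [-0.6, -0.1, 0.3, 0.8]  # ground-truth prompt positions (already sorted)
zT = [0.5, -0.9, 0.1, -0.3]  # limiting prompt positions from the noise prior

def cost(perm):
    """Total 1-D transport cost of matching z0[i] with zT[perm[i]]."""
    return sum(abs(a - zT[j]) for a, j in zip(z0, perm))

# Brute-force the linear sum assignment over all 4! permutations.
best = min(itertools.permutations(range(len(z0))), key=cost)

# Monotone (rank) matching: pair the i-th smallest z0 with the i-th smallest zT.
rank = tuple(sorted(range(len(zT)), key=lambda j: zT[j]))

assert abs(cost(best) - cost(rank)) < 1e-12  # sorting attains the optimal cost
```

The monotone matching keeps paths within the set non-crossing by construction, which is why the balanced prompt-token coupling can later be computed by sorting alone.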
Using OT, DDOT simplifies position manipulation, reducing unnecessary degrees of freedom. OT is traditionally computationally expensive, requiring iterative calculations on the CPU. However, our OT coupling is computationally efficient, addressing the large-scale data demands of generative text modeling. Previous methods approximate dataset-level OT through per-batch computations (Tong et al.), typically relying on algorithms like the Sinkhorn method (Cuturi, 2013) and operating in high-dimensional spaces (e.g., image latents). In contrast, our OT coupling operates at the scalar level and is limited to $L$ elements, corresponding to the model's context size, a scale significantly smaller than the number of samples in a dataset. For prompt tokens, the OT coupling is balanced and can therefore be efficiently computed by simply sorting $z_0$ and $z_T$. This efficiency enables us to calculate exact sample-level OT, unlike the approximations required in previous dataset-level approaches. For training, we randomly sample a timestep $t$ and retrieve the corresponding positions in a simulation-free manner with linear interpolation $z_t = (1 - t) z_0 + t z_T$. The training objective is a weighted mean-squared error loss:

$$ \mathcal{L}_{pos}(\theta) = \mathbb{E}_{(z, t)} \left[ Q_t(x_t, y) \| v_\theta(z_t, t) - (z_0 - z_T) \|^2 \right] $$

We investigate two methods of initializing $z_T$ to begin generation. The first method is to sample $z_T \sim \mathcal{U}(-1, 1)^L$, which we refer to as DDOT-random (DDOT-R). We then randomly select $l^p$ tokens from $z_T$ to serve as the positions for the prompt tokens. However, when sampling randomly, some areas of high density arise that tend to map to pad tokens, because there are fewer tokens in the corresponding ground-truth region.
Therefore, we propose DDOT-uniform (DDOT-U), which uniformly spaces out the prompt and infilled positions: $z_T^p = \mathrm{linspace}(-1, 1, l^p)$ and $z_T^r = \mathrm{linspace}(-1, 1, l^r)$. During the sampling process, the token paths tend to follow straight trajectories because the ground-truth velocity, $z_0 - z_T$, remains constant over time. Therefore, we find it sufficient to use Euler steps during sampling to predict the positions. Combined with $\tau$-leaping (Lou et al.), the straight position paths allow for favorable compute-accuracy tradeoffs and result in fast sampling. The total loss $\mathcal{L}$ is a simple linear combination $\mathcal{L} = \mathcal{L}_{tok} + \lambda \mathcal{L}_{pos}$.

# 4.2 Simultaneous Text & Position Diffusion

DDOT performs discrete text and continuous position diffusion simultaneously, as these processes operate independently in continuous time. We therefore predict both token value scores and position velocities in a single forward pass. This independence also enables simulation-free training by independently sampling token and position states at arbitrary timesteps. We summarize the training procedure in Appendix A.5.

# 4.3 Implementation Details

Due to compute constraints, we implement our framework on SEDD (Lou et al.). Our flexible position diffusion component is orthogonal to existing text diffusion methods and minimally alters the permutation of text tokens for infilling. This design allows DDOT to integrate seamlessly with various pretrained text diffusion models. We extend SEDD, which is based on the Diffusion Transformer architecture (Peebles and Xie, 2023), with two additional modules. First, we introduce a learnable type embedding applied directly after the token embedding lookup.
This embedding indicates whether a token is part of the prompt or the masked response ($x \in x^p$ or $x \in x^r$), which is critical for assigning each token to the correct OT flow. Second, we add a linear head at the end of the diffusion transformer to compute $v_\theta(z_t, t)$. To incorporate continuous positional information, we scale $z_t$ from the range $[-1, 1]$ to match the context length of the original pretrained model (1024). We then use Rotary Position Embeddings (Su et al., 2024), a standard technique in discrete diffusion models. Implementation details can be found in subsection A.3.

# 5 Experiments

# 5.1 Experimental Setup

Datasets We evaluate our approach on One-Billion-Word and Yelp, following the preprocessing steps outlined in prior works on infilling and lexicographically constrained generation (Miao et al., 2019; Zhang et al., 2020; Iso, 2024). These datasets consist of examples with 1 to 6 keywords that must be infilled while maintaining their relative order to generate coherent sentences. In addition to randomly masking positions, we also introduce the "block" masking method, which masks a single continuous chunk of text 0 to $L/2$ tokens long (32 for One-Billion-Word and Yelp, 512 for CodeParrot). Finally, we apply the aforementioned masking methods to the Python subset of the CodeParrot dataset. Table 1 illustrates examples of this lexicographically constrained generation task.

Table 1: Example generations for the keywords-to-sentence generation on One-Billion-Word and Yelp.

Training Details To align the position prediction modules, we first finetune SEDD with the added modules on FineWeb-Edu (Penedo et al., 2024). Afterwards, we further finetune on the One-Billion-Word and Yelp datasets. For simplicity, we always keep all parameters unfrozen and simultaneously optimize both $\mathcal{L}_{tok}$ and $\mathcal{L}_{pos}$.
In line with SEDD, we train our model in two configurations: small (90M non-embedding parameters) and medium (320M non-embedding parameters). DDOT-medium is on the same scale as CBART (406M parameters) and AutoTemplate-base (220M parameters). Following SEDD, we use the AdamW optimizer with a learning rate of 3e-5. We set $\lambda = 10$ when using scaling. For each experiment, we either use 48 L40s (48 GB) GPUs, 80 A30 (24GB) GPUs, or 8 A100 (80GB) GPUs.

Baselines We compare our method against strong autoregressive (AR) and non-autoregressive (NAR) baselines. AutoTemplate (Iso, 2024), the state-of-the-art AR model, leverages the T5 (Raffel et al., 2020) family of pretrained models. Specifically, AutoTemplate parses the lexicographically constrained generation task into a template that is autoregressively generated from left to right. The previous state-of-the-art NAR method, CBART (He, 2021), is built upon the BART (Lewis et al., 2020) pretrained framework and iteratively inserts tokens into a sequence. We also introduce two diffusion-based models that follow the same training procedure as DDOT. Left Context (LC) concatenates all the prompt tokens to the left of the sequence and generates the response to the right of a separator token. Position Prediction (PoP) uses a SEDD model with a linear head that first predicts the positions of every token. Then, this sequence is fed through a finetuned fixed-position SEDD.

Distribution Annealing Many lexicographically constrained generation baselines, including AutoTemplate and CBART, use distribution annealing methods such as top-$p$, top-$k$, greedy sampling, and beam search. To provide a parallel to greedy decoding, which always takes the top token probability, we anneal the distributions of our token values during sampling to only include the most probable token.
Specifically, given the predicted probability of a certain token being mask, $\hat{p}(x^{mask})$, we assign $1 - \hat{p}(x^{mask})$ to the token value with the highest probability excluding the mask token. The rest of the token probabilities are set to 0. Greedy decoding in prior models (such as autoregressive) is deterministic, collapsing the tree of all generation paths into a single path. However, our annealing process maintains generation diversity (A.2) because the model must still sample from the annealed distribution over the top token value and the mask token. Whenever possible, we evaluate against the greedy decoding baseline.

Metrics Following prior works (Miao et al., 2019; Zhang et al., 2020; Iso, 2024), we evaluate on BLEU-2/4 (Papineni et al., 2002), NIST-2/4 (Doddington, 2002), METEOR-v1.5 (Denkowski and Lavie, 2014), and success rate (SR).

# 5.2 Main Results

We present lexicographically constrained generation results in Table 2. Our approach uses greedy annealing and is compared against greedy decoding wherever applicable, including the CBART greedy decoding baseline. Our method achieves competitive performance with previous NAR models, approaching AR performance. Notably, our model achieves state-of-the-art performance on most metrics among the diffusion baselines. Our method does well on block infilling, which may be more useful in real-world applications. Furthermore, we notice that DDOT scales well to longer sequences. Specifically, it frequently generates valid responses that include all the prompt words in the same relative ordering, as shown in the success rates (SR). In contrast, diffusion baselines quickly generate invalid responses as the number of prompt tokens increases (Table 3, Table 2). However, in benchmarks with 6 or fewer prompt tokens, diffusion baselines maintain high SR (Table 3).
This may be because the fixed-position models have room to correct generation when the ratio of prompt to response tokens is low. Table 3 compares results with previous works in the lexicographically constrained generation line of work. Since pretrained diffusion models lag behind AR models, an issue not unique to DDOT, we elect to focus on NAR models. DDOT performs on par with the previous SOTA models and achieves higher SR than all diffusion baselines. Although DDOT underperforms LC and PoP in some metrics, we argue that one-billion-word-random and yelp-random over-index on the unrealistic task of generating text with only 1–6 randomly spaced tokens, and that Table 3 shows the broader trend when DDOT is scaled to more prompt tokens.

Table 2: DDOT outperforms diffusion baselines on standard sequences (0–32 prompt tokens). Metrics are BLEU (B2, B4), NIST (N2, N4), METEOR (M), and Success Rate (SR). Top scores are bolded.

Table 3: DDOT performs on par with state-of-the-art NAR models on short sequences (1–6 prompt tokens). Top NAR scores are bold; second-best are underlined. Since SOTA NAR backbones (e.g. diffusion) still lag behind AR backbones, we focus on NAR comparisons.

Figure 3: Success rate on block datasets. LC and PoP increasingly generate invalid responses (missing or swapping prompt tokens) as the number of prompt tokens grows.

# 5.3 Analysis

In this section we investigate the effect of random versus uniform position initialization, the inclusion of OT coupling, and the impact of varying the number of sampling steps.

Position Initialization In Table 2 and Table 3 we also explore the difference between DDOT-R and DDOT-U. In DDOT-R, pad tokens tend to cluster in areas of high density because the OT finds no match for them. However, the pad tokens in DDOT-U tend to be evenly spaced out. We find that DDOT-U consistently outperforms DDOT-R.
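The two initializations, and the order-preserving property of one-dimensional OT, can be sketched as follows (illustrative only; the exact initialization used by DDOT-R beyond uniform random draws is an assumption of this sketch):

```python
import random

def init_positions_uniform(length):
    # DDOT-U: evenly spaced initial positions, i.e. linspace(-1, 1, length)
    if length == 1:
        return [0.0]
    return [-1.0 + 2.0 * i / (length - 1) for i in range(length)]

def init_positions_random(length):
    # DDOT-R: initial positions drawn independently from [-1, 1]
    return [random.uniform(-1.0, 1.0) for _ in range(length)]

def ot_couple(source_positions, target_positions):
    # For equal-size 1-D point sets with a convex transport cost, the optimal
    # coupling is the monotone (sorted) matching, so relative order is preserved.
    order_src = sorted(range(len(source_positions)), key=source_positions.__getitem__)
    order_tgt = sorted(range(len(target_positions)), key=target_positions.__getitem__)
    return list(zip(order_src, order_tgt))
```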
OT Coupling To demonstrate the importance of OT coupling between source and target positions, we retrain the small version of DDOT without OT coupling and provide a quantitative comparison in Table 4. Models trained with OT coupling consistently outperform those using independent (random) coupling. We theorize that the OT coupling provides more signal about the ordering of tokens throughout the diffusion process. Specifically, DDOT guarantees that the relative ordering of the prompt and generated tokens at any timestep is the same as the original ordering. In contrast, independent coupling requires the model to infer the original ordering of tokens, a challenging task given the numerous plausible orderings that can result from interspersed prompt tokens.

Table 4: Our OT coupling drastically improves performance across all metrics. Ablation on OT coupling with small model size.

Position Over Time We qualitatively compare the ground-truth token paths during training in Figure 4. With OT coupling, token trajectories exhibit significantly fewer crossings, maintaining relative order throughout the generation process. In contrast, independent coupling frequently permutes tokens. Visualizations of token paths during inference are available in subsection A.4.

Figure 4: (a) and (b) show ground-truth token velocities ("Positions over Time" without and with OT, respectively). The paths without OT (a) contain many crossing lines, demonstrating unstable matching, whereas with OT coupling (b) the lines remain almost straight throughout the denoising process. (c) Performance tends to increase with sampling steps.

Number of Sampling Steps One advantage of diffusion models over autoregressive models is their ability to exchange compute for accuracy by varying the number of inference steps.
Figure 4 shows how the number of sampling steps influences lexicographically constrained generation performance. As the number of sampling steps increases, performance also increases.

Wall Time Analysis We evaluate the inference speed of DDOT against the diffusion baselines on the One-Billion-Word dataset. Table 5 presents the wall-clock time per batch alongside BLEU-2 and BLEU-4 scores for an increasing number of sampling steps. DDOT demonstrates significantly better efficiency. For any given number of sampling steps, DDOT is not only faster than LC and competitive with PoP in terms of raw speed, but also achieves substantially higher BLEU scores. Notably, LC must regenerate prompt tokens and therefore requires up to double the input sequence length. PoP also requires an additional forward pass to predict initial positions.

Efficiency Considerations The modules added to enable position prediction are lightweight, consisting of a linear head and two type embeddings. On the other hand, the LC baseline requires double the context length of DDOT because it must regenerate prompt tokens. The OT calculation is highly efficient, taking 16 minutes and 11 seconds on an Intel Xeon 8462Y+ 64-core processor for the 10-billion-token subset of FineWeb-Edu. In practice, we stream the dataset, caching several OT couplings in advance without needing to preprocess the OT. With caching, it takes 4 minutes and 30 seconds to run 1000 training steps on an L40 GPU with a batch size of 256. Without caching, it takes 4 minutes and 27 seconds, a negligible difference.

Table 5: DDOT achieves superior BLEU scores with faster inference times. Inference speed (seconds per batch) and BLEU scores on One-Billion-Word for varying numbers of sampling steps.
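The Euler-step position sampling described earlier can be sketched as follows (`velocity_fn` stands in for the learned $v_\theta$; integrating over unit time from noise to data is an assumption of this sketch):

```python
def euler_sample_positions(z_T, velocity_fn, num_steps):
    # Integrate positions from t=1 (noise) toward t=0 (data) with Euler steps.
    # With straight paths the ground-truth velocity z_0 - z_T is constant in
    # time, so even a small number of steps recovers the target positions.
    z = list(z_T)
    dt = 1.0 / num_steps
    t = 1.0
    for _ in range(num_steps):
        v = velocity_fn(z, t)
        z = [zi + vi * dt for zi, vi in zip(z, v)]
        t -= dt
    return z
```

With a constant velocity field the integrator is exact regardless of step count, which mirrors why few Euler steps suffice for the nearly straight paths observed in practice.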
Discrete diffusion models are a new class of text generators that offer advantages such as bidirectional context use, parallelizable generation, and flexible prompting compared to autoregressive models. However, a critical limitation of discrete diffusion models is their inability to perform flexible-length or flexible-position text infilling without access to ground-truth positional data. We introduce \textbf{DDOT} (\textbf{D}iscrete \textbf{D}iffusion with \textbf{O}ptimal \textbf{T}ransport Position Coupling), the first discrete diffusion model to overcome this challenge. DDOT jointly denoises token values and token positions, employing a novel sample-level Optimal Transport (OT) coupling. This coupling preserves relative token ordering while dynamically adjusting the positions and length of infilled segments, a capability previously missing in text diffusion. Our method is orthogonal to existing discrete text diffusion methods and is compatible with various pretrained text denoisers. Extensive experiments on text infilling benchmarks such as One-Billion-Word and Yelp demonstrate that DDOT outperforms naive diffusion baselines. Furthermore, DDOT achieves performance on par with state-of-the-art non-autoregressive models and enables significant improvements in training efficiency and flexibility.
[ "cs.LG", "cs.AI", "cs.CL", "cs.CV" ]
# I. INTRODUCTION

In recent years, the prevalence of psychological disorders such as depression, anxiety, and stress has increased significantly due to the fast-paced nature of modern life. This has driven considerable interest in applying machine learning (ML) techniques for early detection, accurate diagnosis, and effective treatment prediction of mental health issues. These technologies offer the potential to analyze large volumes of behavioral and physiological data, uncover hidden patterns, and provide insights that might be missed by traditional diagnostic methods. With the growing accessibility of digital health records and wearable devices, ML-based tools are becoming increasingly viable for real-world clinical applications.

Several studies have focused on predicting psychological distress using questionnaire-based data and various ML algorithms. One such work [1] employed the Depression, Anxiety and Stress Scale (DASS-21) to gather data from individuals of diverse backgrounds and applied five ML models to classify the severity of psychological conditions into five levels. Among the algorithms tested, Random Forest emerged as the most effective, particularly in handling imbalanced class distributions through F1-score and specificity evaluations. A broader view is offered in review studies such as that by [2], which systematically categorized ML techniques into classification, deep learning, and ensemble models for depression diagnosis. These models typically follow a pipeline involving data preprocessing, feature selection, classifier training, and performance evaluation. The study emphasized the growing potential of ML to outperform traditional diagnostic methods and presented insights into both the strengths and limitations of existing approaches. Further in-depth analysis was presented by [3], who tested six machine learning classifiers using socio-demographic and psychosocial variables to detect depression.
With SMOTE used to address class imbalance and feature selection techniques such as SelectKBest, mRMR, and Boruta applied, the AdaBoost classifier combined with SelectKBest yielded the best accuracy of $92.56\%$, demonstrating the effectiveness of tailored feature selection in enhancing predictive accuracy. While survey-based approaches provide valuable insights, other research has explored the use of neuroimaging data for depression analysis. Studies such as that by [4] utilized functional and structural imaging data to distinguish between depressed and non-depressed individuals and predict treatment outcomes. In parallel, research into treatment outcome prediction using ML has gained momentum. A notable study by [5] trained a model on the STAR\*D dataset and externally validated it on the COMED trial, showing statistically significant predictions for remission in patients treated with citalopram and escitalopram, though with moderate accuracies around $60\%$. These results suggest that ML can assist in personalized treatment planning, although model generalizability remains a challenge. Expanding on this, a meta-analysis and systematic review by [6] synthesized findings across multiple studies, reporting an overall predictive accuracy of $82\%$ for therapeutic outcomes in mood disorders using ML. Models that integrated multi-modal data (e.g., neuroimaging, genomics, and clinical features) achieved significantly better accuracy than those relying on single data types. However, issues related to study heterogeneity, retrospective designs, and lack of standardization across ML pipelines were noted as major limitations. Another meta-analysis by [7] focused specifically on major depressive disorder (MDD) and found that high-quality studies had a lower mean accuracy ($63\%$) compared to others ($75\%$), suggesting a potential overestimation of ML performance in lower-rigor settings.
Moreover, the ability to predict treatment resistance surpassed that of predicting remission or response, indicating varying effectiveness depending on the clinical target. A more technical perspective was explored in a study by [8] that evaluated EEG-based depression recognition using feature extraction methods (e.g., power spectral density, Hjorth parameters) and compared ensemble learning with deep learning models. This study demonstrated the value of objective biosignal-based methods in reducing diagnostic subjectivity and enhancing classification performance through sophisticated signal processing and model tuning. Similarly, [9] explored early depression diagnosis using EEG data and ML, reinforcing the importance of physiological signals as biomarkers for mental health conditions. In a recent and notably robust study, [10] presented a machine learning-based behavioral analysis approach to differentiate between anxiety and depression. Using a comprehensive cognitive-emotional test battery and custom-built ML models, the study achieved over $70\%$ accuracy in identifying distinct symptom patterns, laying the groundwork for improved diagnostic instruments and more personalized treatment strategies. Among multi-class classification efforts, [11] demonstrated one of the most accurate models for assessing depression, anxiety, and stress levels, achieving high performance across all three categories and offering a reliable multiclass prediction model. Finally, [12] combined natural language processing (NLP) with ML to analyze depression and suicide risk through social media data, offering an innovative approach to mental health surveillance through digital footprints.

Fig. 1: Methodology overview of Stacking Ensemble Model

Table I shows an overview of the literature review.
Compared to these studies, our research uniquely focuses on predicting psychological disorders using demographic, occupational, and lifestyle attributes that influence mental well-being, while applying multiple ML algorithms without relying on external clinical or biosignal data. While prior works emphasized accuracy or feature importance, our study also emphasizes comparative performance evaluation using key metrics like accuracy, precision, recall, and F1-score, offering a transparent, balanced view of model reliability. Moreover, unlike neuroimaging studies that require high computational resources and clinical expertise, our approach remains accessible and scalable for educational institutions or public health surveys. Thus, our study contributes to the field by offering a replicable, lightweight model for early psychological disorder detection, particularly relevant in low-resource settings.

TABLE I: Summary of ML Studies on Mental Health Diagnosis

# II. METHODOLOGY

In this section, we present the detailed pipeline of our proposed ensemble model, which was designed to enhance depression prediction among professionals. The methodological framework comprises data collection, data preprocessing, feature selection, and development of a stacking ensemble model. The methodology overview is shown in Fig. 1.

# A. Dataset Overview

The Depression Professional Dataset [13] was collected from Kaggle as part of a comprehensive survey aimed at understanding the factors contributing to depression risk among adults. It was gathered during an anonymous survey conducted between January and June 2023 across various cities, targeting individuals from diverse backgrounds and professions aged between 18 and 60. The dataset examines the relationship between mental health and various demographic, lifestyle, and work-related factors.
It includes information on gender, age, work pressure, job satisfaction, sleep duration, dietary habits, financial stress, work hours, and mental health indicators such as depression, suicidal thoughts, and family history of mental illness. It illustrates how lifestyle and work conditions influence mental health and the impact of work-life balance.

# B. Data Preprocessing

Figure 2 shows the full preprocessing steps in detail. Before the model could be fitted, the dataset needed to be preprocessed. The raw dataset contains data on 2556 participants in total, with 19 columns. Initially, five unnecessary columns, such as the participant's name, type, and city, were removed. To handle missing values, three columns with more than $60\%$ null values and rows with missing values were eliminated. After cleaning, the dataset was reduced to 2054 participants with 11 columns, including the target feature. Encoding was used for categorical textual values. The dataset was then balanced for binary classification. From the preprocessed dataset, $70\%$ of the samples were randomly selected as the training set for building the model and performing feature selection. The remaining $30\%$ was split into $20\%$ for the test set and $10\%$ for the validation set.

1) Chi-Square Test:

$$ X^2 = \sum \frac{(A - B)^2}{B} $$

The Chi-Square test [15] compares observed counts $A$ with expected counts $B$ in categorical data. A higher test statistic indicates a stronger relationship between a feature and the target.

$$ \nu = (m - 1)(n - 1) $$

a) Degrees of Freedom: The degrees of freedom $\nu$ [16] depend on the size of the contingency table and are calculated as one less than the number of rows times one less than the number of columns.

b) Cumulative Distribution Function (CDF) [17]: The p-value is the probability of observing a test statistic as extreme as $X_{\mathrm{calc}}^2$, assuming the null hypothesis is true.
It is computed as:

$$ p = P(X^2 \geq X_{\mathrm{calc}}^2) = 1 - G(X_{\mathrm{calc}}^2; \nu) $$

# C. Feature Selection

Feature selection [14] is a key preprocessing step that helps identify the most relevant features in the dataset. In this work, the chi-square ($\chi^2$) test was used to measure the statistical relationship between each feature and the target variable. Features showing strong significance were kept for modeling to enhance performance and reduce noise.

# D. Development of Stacking Ensemble Model

A stacking ensemble model was developed using K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and AdaBoost as base models, with Logistic Regression as the meta-classifier. The main objective was to evaluate the predictive performance of machine learning algorithms. K-Nearest Neighbors (KNN) [18] is a memory-based model that classifies a new instance based on the majority class of its nearest neighbors, using a distance metric.

Fig. 2: Data preprocessing steps (remove unnecessary columns, handle missing values, reduce dataset size, encode categorical values, balance the dataset, and split into training, validation, and test sets).

$$ d(x, x') = \sqrt{\sum_{i=1}^{n} (x_i - x_i')^2} $$

Here, $\mathbf{x}$ is a feature vector representing the first data point, and $\mathbf{x}'$ is the feature vector of the second data point, which is to be compared with $\mathbf{x}$. The value of the $i$-th feature in vector $\mathbf{x}$ is denoted as $x_i$, while the corresponding value in vector $\mathbf{x}'$ is denoted as $x_i'$. $n$ represents the total number of features in each data point. The Euclidean distance between the two points $\mathbf{x}$ and $\mathbf{x}'$ is denoted as $d(\mathbf{x}, \mathbf{x}')$.

Support Vector Machine (SVM) [19] is a supervised learning algorithm designed to identify the optimal hyperplane that separates instances of different classes with the maximum possible margin. The associated optimization problem can be expressed as:

$$ \operatorname*{min}_{\theta, \beta, \epsilon} \left( \frac{1}{2} |\theta|^2 + \lambda \sum_{j=1}^{m} \epsilon_j \right) $$

In this formulation, $\theta$ denotes the weight vector that determines the orientation of the separating hyperplane, and $\beta$ represents the bias term that shifts the hyperplane from the origin. The variable $\epsilon_j$ is a slack variable for the $j$-th training instance, allowing some flexibility for misclassification or margin violation. The regularization parameter $\lambda$ balances the trade-off between maximizing the margin and reducing classification errors by penalizing the slack variables. Here, $m$ refers to the total number of training samples. Minimizing the squared norm $|\theta|^2$ corresponds to maximizing the margin between the classes.

A Multi-Layer Perceptron (MLP) [20] is a form of feedforward neural network that consists of one or more hidden layers. Each neuron processes its inputs by computing a weighted sum followed by the application of an activation function.

In AdaBoost, each weak learner's weight is determined by the weighted classification error made by that particular learner: a lower error results in a higher weight, giving more reliable classifiers a stronger influence on the final result.

Logistic regression [22] can function as a meta-classifier in a stacking ensemble, where it consolidates the predictions from various base models. Each base model's output is given a corresponding weight, and these weighted predictions are summed along with a bias term to produce a combined decision score.
This score is then passed through a non-linear function to yield a probability that indicates the confidence of the final classification.

$$ u = \sum \theta_j x_j + \gamma $$

$$ \phi(u) = \frac{1}{1 + e^{-u}} $$

$$ \tilde{y} = \phi \left( \sum_{r=1}^{R} \theta_r f_r(x) + \delta \right) $$

Here, $\theta_j$ refers to the weight associated with the $j$-th input feature $x_j$, and $\gamma$ is the bias term. The expression $u$ denotes the linear combination of inputs, which is then passed through an activation function $\phi(u)$, in this case the sigmoid function.

$$ \mathcal{I} = -\frac{1}{m} \sum_{k=1}^{m} [t_k \log(\hat{t}_k) + (1 - t_k) \log(1 - \hat{t}_k)] $$

The variable $\mathcal{I}$ represents the cost function, specifically the binary cross-entropy loss, where $m$ is the total number of samples. The true label for each example is denoted by $t_k$, while $\hat{t}_k$ represents the predicted probability output.

$$ \phi(s) = \frac{1}{1 + e^{-s}} $$

$$ \Theta = \Theta - \alpha \cdot \frac{\partial \mathcal{I}}{\partial \Theta} $$

In the above, $\Theta$ indicates the weight vector prior to the update, and $\alpha$ is the learning rate. The gradient of the loss $\mathcal{I}$ with respect to the weights is given by $\frac{\partial \mathcal{I}}{\partial \Theta}$, which guides the weight adjustment during training.

AdaBoost [21] is an ensemble learning algorithm that constructs a powerful classifier by combining several weak learners. It iteratively updates the weights of training instances, placing greater emphasis on those that were previously misclassified.
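A minimal sketch of AdaBoost's weak-learner weighting and final weighted vote, using the $\beta_k$ and $F(x)$ definitions from this section (the weak learners passed in are hypothetical stand-ins for trained classifiers):

```python
import math

def adaboost_alpha(error):
    # beta_k = 0.5 * ln((1 - eps_k) / eps_k): lower weighted error -> larger weight
    return 0.5 * math.log((1.0 - error) / error)

def strong_classify(x, weak_learners, weights):
    # F(x) = sign(sum_k beta_k * g_k(x)), with ties broken toward +1
    score = sum(b * g(x) for g, b in zip(weak_learners, weights))
    return 1 if score >= 0 else -1
```

A learner at chance level (error 0.5) receives weight zero, so it contributes nothing to the vote, while a learner with 10% error outvotes one with 30% error.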
$$ F(x) = \mathrm{sign} \left( \sum_{k=1}^{K} \beta_k \cdot g_k(x) \right) $$

In this expression, $F(x)$ represents the final aggregated (strong) classifier. The ensemble consists of $K$ weak classifiers. Each weak learner $g_k(x)$ contributes to the final prediction with an associated weight $\beta_k$. The sign function determines the final output by returning either $+1$ or $-1$ depending on the sign of the sum.

$$ \beta_k = \frac{1}{2} \ln \left( \frac{1 - \epsilon_k}{\epsilon_k} \right) $$

Here, $\beta_k$ indicates the importance (or influence) of the $k$-th weak classifier in the final decision, while $\epsilon_k$ denotes the weighted classification error of that classifier.

$$ \mathcal{L} = -\frac{1}{M} \sum_{m=1}^{M} \left[ t_m \log(\tilde{t}_m) + (1 - t_m) \log(1 - \tilde{t}_m) \right] $$

The sigmoid function [23], denoted by $\phi(s)$, transforms the aggregate score into a probability ranging from 0 to 1. The learning process involves minimizing the loss function $\mathcal{L}$, which quantifies the discrepancy between predicted values $\tilde{t}_m$ and actual labels $t_m$. This optimization step updates the meta-learner's weights $\theta_r$ and bias $\delta$ to enhance predictive performance.

# E. Model Evaluation Metrics

To assess the prediction performance, this study employed the indices accuracy [24], precision, recall, and F1-score. The computation formula for each evaluation index is shown below:

Accuracy: Accuracy measures the proportion of total correct predictions made by the model.

$$ \mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} $$

Precision: Precision shows how many of the predicted positive samples were correct.

$$ \mathrm{Precision} = \frac{TP}{TP + FP} $$

Recall: Recall measures how many actual positive samples were correctly identified.
$$ \mathrm{Recall} = \frac{TP}{TP + FN} $$

F1-score: F1-score is the harmonic mean of precision and recall. It provides a balanced metric, especially useful when classes are imbalanced.

$$ \mathrm{F1\text{-}score} = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} $$

Here, $TP$, $TN$, $FP$, and $FN$ represent true positives, true negatives, false positives, and false negatives, respectively.

# III. RESULTS

The analysis in Figure 3 identified several attributes with statistically highly significant associations, as indicated by their $p$-values (Fig. 3). Age demonstrated the strongest significance ($p = 2.46 \times 10^{-21}$), followed by Suicidal Thoughts ($p = 1.65 \times 10^{-18}$) and Work Pressure ($p = 1.65 \times 10^{-11}$). Other notable attributes included Job Satisfaction ($p = 7.63 \times 10^{-8}$), Dietary Habits ($p = 1.68 \times 10^{-4}$), Financial Stress ($p = 5.15 \times 10^{-5}$), Sleep Duration ($p = 9.56 \times 10^{-4}$), and Work Hours ($p = 4.53 \times 10^{-3}$). Age was further associated with sub-attributes such as Work Pressure, Job Satisfaction, Sleep Duration, Dietary Habits, Work Hours, and Financial Stress, highlighting its central role in the study context.

Fig. 3: Significant Attributes and their p-value

# A. AUC-ROC Curve Analysis

From Figure 4, the AUC-ROC analysis [25] on the test dataset demonstrates strong classification performance across the evaluated models. Logistic Regression achieved a perfect AUC score of 1.00, indicating flawless separability between the positive and negative classes. Both AdaBoost and the MLP Classifier closely followed with AUC values of 0.98, reflecting highly reliable performance.
Support Vector Machine and K-Nearest Neighbors also showed commendable results, attaining AUC scores of 0.95 and 0.94, respectively. These findings highlight the robustness and generalization ability of the selected models, particularly in binary classification tasks.

Fig. 4: AUC-ROC curve of the selected models

# B. Confusion Matrix

Figure 5 highlights the effectiveness of the stacking ensemble model, which integrates K-Nearest Neighbors, Support Vector Machine, Multi-Layer Perceptron, and AdaBoost as its base classifiers, with Logistic Regression serving as the meta-classifier. The corresponding confusion matrix [26] further supports this observation by illustrating the model's strong ability to correctly distinguish between the two classes (Actual 0 and Actual 1). Instances of misclassification were minimal, indicating a well-generalized model that maintains consistency across various input patterns. The detailed numerical results in the matrix underscore the robustness of this ensemble strategy.

Fig. 5: Confusion Matrix

# C. Performance Evaluation and Comparison

Table II presents a comprehensive performance comparison of eight machine learning models, evaluated using four widely recognized classification metrics: Accuracy, Precision, Recall, and F1-Score. Among the individual models, Logistic Regression demonstrated remarkably strong and consistent performance, achieving an accuracy of $97.50\%$. The Multi-Layer Perceptron (MLP) classifier followed closely, with an accuracy of $93.75\%$, showcasing its capability to capture complex, non-linear relationships in the data. Support Vector Machine (SVM) and AdaBoost produced nearly identical results, each attaining an accuracy of $92.50\%$, indicating their robustness and adaptability in handling classification tasks. The K-Nearest Neighbors (KNN) model performed slightly lower, with an accuracy of $91.43\%$, likely due to its sensitivity to local data distributions.
Gradient Boosting achieved an accuracy of $88.75\%$, while the Naïve Bayes classifier yielded the lowest performance, with an accuracy of $86.25\%$, suggesting its assumptions were not well-suited to the dataset's characteristics. Notably, the Stacking Ensemble model significantly outperformed all individual models. By integrating KNN, SVM, MLP, and AdaBoost as base learners and using Logistic Regression as the meta-classifier, the ensemble achieved the highest accuracy of $98.75\%$. It also excelled across all other metrics, with a precision of $98.78\%$, recall of $98.75\%$, and F1-score of $98.75\%$.

TABLE II: Performance Comparison of Classification Models
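The four metrics reported above follow directly from confusion-matrix counts; a small illustrative helper makes the relationships explicit:

```python
def classification_metrics(tp, tn, fp, fn):
    # Standard metrics computed from true/false positive/negative counts.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1
```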
Depression is a significant mental health concern, particularly in professional environments where work-related stress, financial pressure, and lifestyle imbalances contribute to deteriorating well-being. Despite increasing awareness, researchers and practitioners face critical challenges in developing accurate and generalizable predictive models for mental health disorders. Traditional classification approaches often struggle with the complexity of depression, as it is influenced by multifaceted, interdependent factors, including occupational stress, sleep patterns, and job satisfaction. This study addresses these challenges by proposing a stacking-based ensemble learning approach to improve the predictive accuracy of depression classification among professionals. The Depression Professional Dataset was collected from Kaggle. The dataset comprises demographic, occupational, and lifestyle attributes that influence mental well-being. Our stacking model integrates multiple base learners with a logistic regression meta-classifier, effectively capturing diverse learning patterns. The experimental results demonstrate that the proposed model achieves high predictive performance, with an accuracy of 99.64% on training data and 98.75% on testing data, and with precision, recall, and F1-score all exceeding 98%. These findings highlight the effectiveness of ensemble learning in mental health analytics and underscore its potential for early detection and intervention strategies.
[ "cs.LG" ]
# 1 Introduction Large language models (LLMs) have fundamentally reshaped the landscape of software engineering [7], powering tools such as Cursor [4] and GitHub Copilot [5] that are now integral to modern development workflows. These models have transformed key stages of the software development lifecycle—automated code generation, bug detection, and issue resolution—leading to substantial gains in developer productivity. To systematically assess LLM capabilities across these tasks, a variety of curated benchmarks have been developed, including HumanEval [3], MBPP [2], SWE-bench [10], DI-Bench [31], and OpenRCA [19]. These benchmarks are instrumental in identifying both the strengths and limitations of LLMs in diverse programming and maintenance settings. Among them, SWE-bench [10] and its variants, such as Multimodal SWE-bench [22] and MultiSWE-bench [25], have become standard for evaluating LLMs on the issue resolution task, where models are required to comprehend complex codebases, interact with execution environments, and generate patches that fix real-world issues. However, as LLMs evolve rapidly, existing benchmarks exhibit several critical limitations that undermine their continued utility: 1. Staleness. SWE-bench and its derivatives have not been updated since their initial releases, making them static benchmarks. Figure 1: Construction pipeline of SWE-bench-Live (raw issue–PR crawling, automated environment setup with REPOLAUNCH, and task instance validation). Because LLMs are trained on massive inscrutable corpora, 
these static datasets are at risk of data contamination, as they could have been inadvertently included in model training data. This raises concerns about whether newer models are making truly generalizable progress or merely memorizing benchmark content, reducing the benchmarks’ effectiveness in distinguishing model capabilities. 2. Limited repository coverage. These benchmarks draw from a small set of repositories, limiting diversity in codebases, domains, and programming practices (see Table 1 for details). Table 1: Comparison with existing issue-resolving benchmarks. This narrow scope weakens the generalizability and robustness of evaluations. 3. Heavy reliance on manual effort. Constructing SWE-bench-like task instances involves substantial human labor: identifying appropriate issue-resolution pairs, locating relevant tests, configuring runnable environments, composing test commands, and validating the full workflow.2 This process is resource-intensive and creates scalability bottlenecks. To address these challenges, we introduce SWE-bench-Live, a live and scalable benchmark built for evaluating LLMs on real-world issue resolution tasks. In contrast to recent efforts such as LiveCodeBench [9], which target algorithmic programming problems, SWE-bench-Live is the first live-updating benchmark designed for complex, repository-level tasks that demand multi-file reasoning, environment setup, and reproducible execution. Figure 1 illustrates the construction pipeline of SWE-bench-Live. At the core of our framework is REPOLAUNCH, a fully automated pipeline that eliminates manual bottlenecks by streamlining the entire process—from issue mining to environment packaging. 
More specifically, REPOLAUNCH leverages an agentic, end-to-end workflow to set up the Docker environment by identifying relevant instruction files, selecting base images, installing necessary dependencies, building the project, and validating its test suite. This automation enables continuous updates, broad repository coverage, and large-scale dataset expansion. Our current release of SWE-bench-Live contains 1,319 issue-resolution tasks sourced from real-world GitHub issues created since 2024, spanning 93 repositories. Compared to existing benchmarks, this represents a significant leap in freshness, diversity, and scale (see Table 1). We evaluate three leading agent frameworks (i.e., OpenHands [16], SWE-Agent [20], and Agentless [17]) in combination with four state-of-the-art LLMs (namely, GPT-4.1, GPT-4o, Claude 3.7 Sonnet, and DeepSeek V3). Consistent with performance rankings reported on SWE-bench Verified,3 we observe that OpenHands, when paired with Claude 3.7 Sonnet, achieves the highest performance on SWE-bench-Live. However, its overall results are significantly lower than those achieved on SWE-bench Verified. To explore this discrepancy further, we conduct a controlled comparison and find that the same agent-LLM pair consistently performs worse on SWE-bench-Live than on SWE-bench. This finding suggests that existing models may be overfitting to static benchmarks like SWE-bench, underscoring the importance of developing more dynamic and diverse evaluation settings, such as those provided by SWE-bench-Live. Our main contributions are summarized as follows: • We introduce SWE-bench-Live, a contamination-resistant, reproducible, and continuously updatable benchmark tailored to real-world issue resolution tasks. It reflects the dynamic nature of software development and offers broader repository coverage compared to prior benchmarks. 
• We propose REPOLAUNCH, a fully automated pipeline for benchmark construction that seamlessly integrates data curation, environment setup, and test validation into a cohesive and scalable system. • Through experimental evaluation, we observe the suboptimal performance of leading agent frameworks on SWE-bench-Live, highlighting significant opportunities for improvement on the contamination-free benchmark. # 2 Related Work Coding Benchmarks. Early benchmarks for program synthesis and bug fixing focused on single-file, synthetic tasks such as HumanEval [3] and MBPP [2], which do not reflect the complexity of real repositories. To move closer to practice, SWE-bench [10] introduced the issue-resolving task, requiring a model to generate a validated patch for a GitHub repository issue. Numerous extensions have since appeared—including Multimodal SWE-bench for JavaScript and UI screenshots [22] and Multi-SWE-bench for multiple languages such as Java and Rust [25]. Despite their impact, all of these datasets are static: they are collected once, cover at most a few dozen repositories, and depend on labor-intensive environment construction. This yields two limitations. First, models can overfit to the fixed test set, inflating apparent progress. Second, public tasks may lead to data contamination, where benchmark instances leak into pre-training corpora [30, 8]. Recent “live” datasets such as LiveCodeBench [9] mitigate contamination by streaming algorithmic problems after their release dates, yet they do not address the harder repository-level setting that demands multi-file reasoning and execution inside a faithful environment. SWE-bench-Live is the first open, continuously updating benchmark that fulfills these requirements. Coding Agents. On top of the above benchmarks, a recent line of work has focused on creating autonomous code agents that search, edit, and test large codebases. 
Representative systems include SWE-Agent [21], OpenHands [16], Agentless [17], and training frameworks that synthesize thousands of SWE-bench-like instances [15, 23, 18]. These agents report remarkable headline numbers, yet their evaluations rely almost exclusively on static offline datasets. As a consequence, improvements may partially stem from memorisation of leaked solutions or configuration quirks, rather than genuine advances. SWE-bench-Live closes this gap by pushing agents to fix previously unseen, continuously arriving real-world bugs under fully reproducible Docker images; this reveals failure modes hidden by stale test suites and provides a trustworthy yardstick for code agents and LLMs. # 3 SWE-bench-Live Targeting the issue resolution task on real-world GitHub repositories, SWE-bench serves as a practical proxy for evaluating the coding capabilities of LLM-based systems. The issue-resolving task is defined as follows: given a code repository and an associated issue, an approach (e.g., an LLM agent) is required to generate a patch that resolves the issue and passes the test cases (see Appendix B for details). While SWE-bench-Live adopts the same task definition as SWE-bench, it introduces a novel, fully automated pipeline that enables scalable and continuously updatable benchmark construction. This automation allows for a larger number of up-to-date instances and broader repository coverage. The initial release of SWE-bench-Live consists of 1,319 task instances created between January 2024 and April 2025, spanning 93 real-world repositories. Pipeline Overview. As shown in Figure 1, the construction of SWE-bench-Live follows a three-stage pipeline. First, starting from popular repositories, we identify GitHub issues that are resolved by a pull request (PR). Next, we apply the proposed REPOLAUNCH—an agentic approach that automatically sets up a Docker-based execution environment for each candidate instance. 
Finally, we perform multiple rounds of test execution for each instance to validate whether it consistently exhibits the expected issue-resolving testing behavior, and finalize the valid instances. Thanks to its fully automated pipeline, SWE-bench-Live can be maintained with minimal–ideally zero–manual effort. We plan to update SWE-bench-Live on a monthly basis, continually providing the community with an up-to-date evaluation dataset. This enables contamination-free, rigorous assessment of AI systems’ issue-resolving capabilities in a constantly evolving real-world setting. # 3.1 Raw Issue–PR Crawling The first phase of the SWE-bench-Live pipeline involves collecting real-world issue–pull request (PR) pairs from popular open-source GitHub repositories. Repository Selection. We focus on Python repositories for the initial release of SWE-bench-Live, aligning with SWE-bench and other prior benchmarks due to the language’s popularity. The selection process includes three filtering stages: (i) We first queried the GitHub API for repositories with over 1,000 stars and Python set as the primary language. This initial query yielded 8,577 repositories as of April 2025. (ii) We then refined this set by requiring each repository to have more than 200 issues and pull requests, over 200 forks, and at least 60% of its codebase written in Python. This reduced the pool to 3,316 repositories. (iii) Finally, to comply with licensing requirements, we retained only repositories containing a valid open-source license, resulting in a final selection of 2,609 repositories. Issue–PR Pair Extraction. From the selected repositories, we adopt the collection script from SWE-bench to extract issues and their associated PRs. Additionally, the pull request must modify the repository’s test suite–i.e., include a “test patch”–which will serve as the evaluation target. 
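The test-patch requirement above can be approximated with a simple path filter. A minimal sketch, assuming typical Python test-file naming conventions (the benchmark's actual collection rules may differ):

```python
import re

# Heuristic: a file is part of the test suite if it lives under a tests/
# directory or follows the test_*.py / *_test.py naming convention.
# These patterns are illustrative assumptions, not the official rules.
TEST_PATH = re.compile(r"(^|/)(tests?|testing)/|(^|/)test_[^/]+\.py$|_test\.py$")

def has_test_patch(changed_files):
    """Return True if any file changed by the PR looks like a test file."""
    return any(TEST_PATH.search(p) for p in changed_files)
```

A PR passing this check contributes a "test patch" that can later be applied in isolation to observe FAIL_TO_PASS behavior.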
We also incorporate improvements from SWE-Fixer [18], which introduces more robust heuristics to improve the effectiveness of issue–PR pair identification and reduce reliance on brittle string matching. To reduce the risk of data leakage, SWE-bench-Live prioritizes recency by including only issues created after January 2024 in our initial release. # 3.2 REPOLAUNCH: Automated Execution Environment Setup The “raw” issue–PR pairs remain at the textual and plain-code level. To support subsequent test-based evaluation, an execution environment capable of running tests locally and producing execution feedback must be provided. In the context of issue-resolving benchmarks, the execution environment is critical for test-based evaluation. However, preparing such execution environments is widely recognized as the most labour-intensive step in constructing issue-resolving datasets. In prior work, including SWE-bench [10] and SWE-Gym [14], environment setup has been performed entirely by humans. For example, SWE-Gym reports that building execution environments required over 200 hours of manual effort, underscoring a significant scalability bottleneck. Notably, even repository-level environments are insufficient: different commits within the same repository may depend on different libraries or configurations, necessitating environment construction at the snapshot level. SWE-bench partially mitigates this by building environments per version tag, but the granularity remains coarse and still relies on manual labor. To address this bottleneck, we introduce an agent-based framework, REPOLAUNCH, which automatically creates a fully functional execution environment for each issue instance. For any given repository snapshot, REPOLAUNCH produces a Docker container that installs all required dependencies, builds the project, and validates its test suite. This containerized instance serves as the foundation for running and evaluating model-generated patches. 
Repository Snapshots and Environment Definition. A repository snapshot corresponds to the codebase at the base commit associated with an issue. The goal is to recreate an environment faithful to that moment in time. We define a valid execution environment as a Docker container where (i) the codebase is correctly installed from source, and (ii) the repository’s test suite passes with zero or tolerable failures. This environment is essential for test-based evaluation, providing the ground-truth mechanism to verify whether the issue has been resolved. REPOLAUNCH follows an LLM-driven, agentic workflow [27, 26] inspired by how human developers set up unfamiliar projects, as shown in Figure 1. The process proceeds in five steps: • Relevant Files Identification. The first step is to identify relevant files in the repository–such as CI/CD pipelines and README files–that are likely to contain useful information for setting up the environment (a detailed list is provided in Appendix G). • Base Image Selection. Given the full content of the relevant files, this step selects a suitable base Docker image based on the information provided in the repository. This involves correctly identifying the programming language and SDK version used in the repository (e.g., python:3.11). A container is instantiated from the chosen image, and a persistent bash session is launched. • Interactive Environment Setup. The setup process is carried out by an agent whose goal is to successfully execute and pass all test cases in the repository’s test suite within the container. The agent interacts with the bash session by issuing commands and receiving feedback such as exit codes and outputs. It follows the ReAct design [24], iterating over Thought → Action → Observation [29, 28], mimicking a developer’s reasoning and trial process. The agent can also search the web or query the issue tracker for troubleshooting. • Verification. 
Once the setup agent determines that the environment has reached a satisfactory state, or a step limit is reached, control is transferred to a verifying agent. This agent attempts to generate the appropriate test command and execute it. The execution results are then evaluated by the agent to check whether all test cases passed. If test failures occur, the results are fed back to the setup agent for further refinement. If all tests pass, the environment is considered valid. • Finalization. Upon successful validation, the container is committed as a Docker image, producing an instance-level execution environment for reuse. Challenges of Version Incompatibility. A major challenge when setting up out-of-date repositories is the “dependency version drift” issue. When dependencies are not pinned to specific versions, tools like pip will by default resolve to the latest package versions, which often introduce backward-incompatible changes and make the environment setup fail. To address this, we implement a time-machine mechanism by forcing the package installation tool to consider only versions released no later than the base commit’s timestamp. Specifically, we point pip’s default index to a proxy that serves only those valid package versions. This simple but effective strategy prevents such “future” version incompatibilities and significantly improves setup success rates. We will open-source REPOLAUNCH to benefit the community. While designed for automated benchmark construction, REPOLAUNCH can also assist developers in quickly setting up environments for unfamiliar codebases. Its ability to replicate historical setups and automatically resolve environment dependencies positions it as a practical tool with broader applicability beyond benchmarking. # 3.3 Validating Task Instances To ensure the quality of the benchmark, each task instance is validated to confirm that the associated PR effectively resolves the issue it is intended to fix. 
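The time-machine mechanism described in Section 3.2 amounts to a date cut-off over each package's release history. A minimal sketch, assuming release metadata is available as a version-to-upload-time mapping (a simplification of what PyPI's JSON API provides per package):

```python
from datetime import datetime

def valid_versions(releases, base_commit_time):
    """Versions a time-machine index proxy would expose to pip.

    releases: mapping of version string -> upload time (datetime).
    Only versions uploaded no later than the base commit's timestamp
    are kept, so pip cannot resolve to a "future" release.
    """
    return sorted(v for v, t in releases.items() if t <= base_commit_time)
```

A proxy index built on this filter makes `pip install` reproduce the dependency set that existed at the snapshot's point in time.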
The validation is based on analyzing changes in the test-suite results before and after applying the PR’s patch. Specifically, we focus on identifying two key behaviors in the test outcomes: • FAIL_TO_PASS transitions: Tests that initially fail (FAILED or ERROR) and pass (PASSED) after the patch is applied. These transitions indicate that the patch addresses the issue effectively. • PASS_TO_PASS transitions: Tests that pass both before and after the patch is applied. These transitions demonstrate that the patch does not break unrelated functionality. To identify these transitions, the test results (as logs) are collected both before and after applying the PR’s patch. By comparing individual test outcomes between the two runs, we determine how the patch affected specific tests. We designed framework-specific (e.g., tox, pytest) parsers to interpret test outputs reliably, as different testing tools may produce logs in various formats. For a task instance to be included in the benchmark, it must exhibit at least one FAIL_TO_PASS transition. Instances lacking such a transition are excluded because they do not demonstrate effective bug resolution. Additionally, to ensure reproducibility and avoid issues caused by test flakiness, the validation process is repeated multiple times. Only instances with consistent results across all runs are retained. This approach ensures that all task instances are grounded in evidence of real-world bug fixes and preserve stable behaviors, resulting in a robust benchmark for evaluating automated bug-fixing solutions. Figure 2: Temporal distribution of issue creation times in SWE-bench-Live. # 3.4 SWE-bench-Live Statistics The initial release of the SWE-bench-Live dataset consists of 1,319 task instances collected from real-world issues and pull requests across 93 open-source Python repositories. 
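The FAIL_TO_PASS / PASS_TO_PASS check from Section 3.3 can be sketched as a comparison of per-test outcome maps parsed from the before/after logs. The dict layout below is an assumption for illustration; outcome labels follow the paper's terminology:

```python
def classify_transitions(before, after):
    """Compare per-test outcomes before/after applying the gold patch.

    before, after: mapping of test name -> outcome ("PASSED", "FAILED", "ERROR").
    Returns the FAIL_TO_PASS and PASS_TO_PASS test sets.
    """
    fail_to_pass = {t for t, s in after.items()
                    if s == "PASSED" and before.get(t) in ("FAILED", "ERROR")}
    pass_to_pass = {t for t, s in after.items()
                    if s == "PASSED" and before.get(t) == "PASSED"}
    return fail_to_pass, pass_to_pass

def is_valid_instance(before, after):
    # An instance qualifies only if the patch flips at least one test.
    f2p, _ = classify_transitions(before, after)
    return len(f2p) > 0
```

In the actual pipeline this comparison is repeated across multiple runs, and only instances with consistent transitions survive.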
To ensure freshness and reduce the risk of data contamination from pretraining, we restrict the dataset to issues created between January 1, 2024, and April 20, 2025. As shown in Figure 2, the temporal distribution is generally uniform, indicating consistent coverage of issues over time. We plan to update the dataset on a monthly basis to reflect the evolving software landscape and continuously provide new instances. Table 2 summarizes key statistics at both the repository and instance levels. At the repository level, projects vary in size, with an average of 85k lines of Python code and 423 files. At the instance level, we report metrics of the gold patches—including the number of edited files, hunks, and lines—as heuristic indicators of task complexity. These statistics suggest that SWE-bench-Live tasks reflect realistic, non-trivial bug fixes that challenge the code understanding, reasoning, and manipulation capabilities of LLMs. Additionally, we record the number of test cases that transition from failure to pass (F2P) and those that consistently pass (P2P), which form the basis of test-based evaluation. Repository Diversity. To ensure broad applicability, SWE-bench-Live includes repositories from diverse application domains. As shown in Figure 3, we manually categorized each repository based on its primary functionality—such as AI/ML, DevOps, Web development, and others. This diversity helps evaluate LLMs across varied software stacks and bug types, enhancing the benchmark’s representativeness of real-world usage scenarios. Lite Subset. To support lightweight experimentation, we construct a lite subset of SWE-bench-Live by sampling 50 instances per month from issues created between October 2024 and March 2025. This results in a compact set of 300 instances that balances recency, diversity, and evaluation efficiency. Comparison with Existing Benchmarks. Table 1 compares SWE-bench-Live with several existing issue-resolution benchmarks. 
Unlike SWE-bench and its variants, which require extensive manual curation and cover a limited set of repositories, SWE-bench-Live is the first to offer an automatically constructed, continuously updatable benchmark. It covers a broader set of repositories (93 in total) while preserving the use of real issues and test-based evaluation. Compared to synthetic datasets like SWE-smith, which may not fully capture the complexity of human-written code and bugs, SWE-bench-Live maintains fidelity to real-world development workflows. Its unique combination of automation, realism, and diversity fills a critical gap in LLM evaluation for software engineering. # 4 Experiments # 4.1 Setups Agents and Model Selection. To evaluate the effectiveness of our proposed SWE-bench-Live, we conduct experiments using three representative agent frameworks. These include the general-purpose coding agent OpenHands [16] (paired with CodeAct), as well as two agents specifically designed for issue-resolving tasks: SWE-Agent [20] and Agentless [17]. For OpenHands, we set a maximum of 60 iterations per instance. For SWE-Agent, we limit the number of LLM calls to 100 per instance to maintain computational efficiency. For Agentless, we largely follow the original pipeline, which consists of two main stages: issue localization and patch generation. However, we omit the reranking stage based on regression testing, as supporting this step on SWE-bench-Live would require substantial infrastructure adaptation and is beyond the scope of this study. Consequently, both the localization and repair stages in our Agentless evaluation produce a single sample without reranking. We test these agents using four recent state-of-the-art LLMs, covering both proprietary and open-source models: GPT-4o [11] (gpt-4o-2024-11-20), GPT-4.1 [12] (gpt-4.1-2025-04-14), Claude 3.7 Sonnet [1] (claude-3-7-sonnet-20250219), and DeepSeek V3 [6] (DeepSeek-V3-0324). Figure 3: Repository classifications. 
Table 2: Statistics of SWE-bench-Live. \*Only counts Python code. †Stats of gold patch. Evaluation Metrics. Following the evaluation protocol of SWE-bench [10], we adopt the Resolved Rate (%) as our primary metric. This measures the proportion of issues successfully resolved by the agent across all task instances. We also report the Patch Apply Rate (%), which indicates the percentage of generated patches that are syntactically correct and can be successfully applied to the codebase without errors. Additionally, we measure the Localization Success Rate (%) at the file level. This reflects whether the set of files modified by the generated patch matches that of the gold patch. # 4.2 Performance on SWE-bench-Live We report the performance of all agent–model combinations on the Lite subset of SWE-bench-Live in Table 3. Meanwhile, Table 4 presents the results of the top three combinations selected based on Lite performance, evaluated on the full version of SWE-bench-Live. We observe that the same methods achieve substantially higher scores on SWE-bench compared to their performance on SWE-bench-Live, despite both benchmarks targeting the same issue-resolving task with identical settings. For example, recent state-of-the-art agents and models report a resolved rate exceeding 60% on the SWE-bench Verified subset4. In contrast, the highest resolved rate on SWE-bench-Live is only 19.25%. Considering that the experimental setups on the SWE-bench leaderboard often involve very high rollout numbers or iteration budgets, we specifically re-ran the best-performing combination, OpenHands with Claude 3.7 Sonnet, on the SWE-bench Verified subset using the exact same setups as in our experiments. The resulting resolved rate reached 43.20%, more than twice the score achieved on SWE-bench-Live. 
This is a particularly interesting phenomenon, as it highlights the challenges of constructing a benchmark that can objectively measure an AI system’s ability to resolve arbitrary and previously unseen issues. It also raises concerns about potential overfitting to SWE-bench. Similar phenomena are also observed in other existing issue-resolving datasets: the best-performing method in Multi-SWE-bench achieves a resolved rate of only 19.32%, while the highest score reported in OmniGIRL is as low as 8.6%. To investigate this, we further categorize the instances in SWE-bench-Live based on their repository origin. Table 3: Performance on SWE-bench-Live (Lite subset). Table 4: Performance of top-3 performing Agent + Model combinations on SWE-bench-Live. Specifically, 216 instances are derived from 8 repositories that were originally included in SWE-bench, which we refer to as From SWE-bench Repos. The remaining 1,103 instances are sourced from repositories not previously used in SWE-bench and are denoted as From Non-SWE-bench Repos. As shown in Table 5, although the Non-SWE-bench repositories are generally simpler, with fewer files and lower code volume, the best-performing agent–model pair achieves a higher resolved rate of 22.96% on the From SWE-bench Repos instances, compared to only 18.89% on the From Non-SWE-bench Repos ones. This reinforces the hypothesis that existing agents may be overfit to, or implicitly optimized for, the SWE-bench repositories, further motivating the need for continuously updated, contamination-resistant benchmarks like SWE-bench-Live. Table 5: SWE-bench vs. Non-SWE-bench repositories. # 4.3 Performance vs. Creation Date To investigate whether the recency of an issue affects its difficulty, we analyze the resolved rate across different creation periods. As shown in Figure 4, SWE-bench-Live includes a balanced distribution of instances across quarters from 2024Q1 to 2025Q1. 
The resolved rate, based on OpenHands with Claude 3.7 Sonnet on the full benchmark, remains relatively stable over time, fluctuating only modestly across quarters. While there is a slight dip in resolved rate during 2024Q4, followed by a recovery in 2025Q1, the trend does not indicate a clear correlation between task recency and success rate. This suggests that newer issues are not inherently harder for current agents to solve, and that SWE-bench-Live maintains a consistent level of challenge across time. These results reinforce the benchmark’s ability to deliver a steady and reliable evaluation signal, even as it continuously evolves with newly introduced instances. Figure 4: Resolved rate in relation to the creation date of instances. (OpenHands / Claude 3.7 Sonnet on full set) Figure 5: Resolved rate in relation to the difficulty of instances, shown as a heat-map over the number of edited files and edited lines in the gold patch. (OpenHands / Claude 3.7 Sonnet on full set) # 4.4 Performance vs. Difficulty We approximate the difficulty of a bug-fixing instance along two complementary axes. Patch difficulty is captured by the scope of the gold fix—the number of files it touches and the total lines modified—while repository difficulty is approximated by the overall size of the project in files and lines of code (LOC). Patch difficulty. Figure 5 visualises the resolved rate as a heat-map over patch scope. Success is high when the fix is local: a single-file patch that changes fewer than five lines is solved almost one time in two (48%). Performance degrades quickly as either dimension grows. 
Once the patch edits three or more files, or spans more than one hundred lines, the success rate falls below ten percent; patches that touch seven or more files are never solved. The sharp drop beyond the one-file / few-lines corner highlights a key limitation of current agents: they struggle to coordinate coherent edits across multiple files or to reason about large, intra-file changes. Repository difficulty. Figure 7 in Appendix C plots the resolved rate for every repository against its size (Python files on the x-axis, LOC on the y-axis). Bubble area reflects the number of instances drawn from each project, and red outlines mark the original SWE-bench repositories. A clear negative trend emerges: repositories with fewer than one hundred files and under twenty thousand LOC often yield success rates above twenty percent, whereas projects exceeding five hundred files rarely exceed five percent. Nevertheless, notable variance remains—some small-to-mid-size projects are still hard to fix, likely due to atypical build systems or complex domain logic—emphasising that size is an informative but imperfect proxy for difficulty. Together, the two figures show that difficulty increases along both local (patch) and global (repository) dimensions, and that current code agents falter once fixes spill beyond a handful of lines or involve cross-file reasoning. Because SWE-bench-Live spans the full spectrum of these difficulty factors—while continuously adding fresh, unseen instances—it provides a stringent and up-to-date testbed for future advances in large-scale program repair.
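To make the evaluation protocol of Section 4.1 concrete, the three metrics (Resolved Rate, Patch Apply Rate, and file-level Localization Success Rate) can be sketched over per-instance result records. The record format below is an assumption for illustration, not the benchmark's actual output schema:

```python
def summarize(results):
    """Aggregate per-instance results into the three benchmark metrics (%).

    Each record is assumed to carry: whether the issue was resolved, whether
    the generated patch applied cleanly, and the file sets touched by the
    generated and gold patches (for file-level localization).
    """
    n = len(results)
    resolved = 100.0 * sum(r["resolved"] for r in results) / n
    applied = 100.0 * sum(r["patch_applied"] for r in results) / n
    localized = 100.0 * sum(set(r["edited_files"]) == set(r["gold_files"])
                            for r in results) / n
    return {"resolved_rate": resolved,
            "patch_apply_rate": applied,
            "localization_rate": localized}
```

Note that a patch can apply cleanly and even edit the right files yet still fail the FAIL_TO_PASS tests, which is why the three rates are reported separately.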
The issue-resolving task, where a model generates patches to fix real-world bugs, has emerged as a critical benchmark for evaluating the capabilities of large language models (LLMs). While SWE-bench and its variants have become standard in this domain, they suffer from key limitations: they have not been updated since their initial releases, cover a narrow set of repositories, and depend heavily on manual effort for instance construction and environment setup. These factors hinder scalability and introduce risks of overfitting and data contamination. In this work, we present SWE-bench-Live, a live-updatable benchmark designed to overcome these challenges. Our initial release consists of 1,319 tasks derived from real GitHub issues created since 2024, spanning 93 repositories. Each task is accompanied by a dedicated Docker image to ensure reproducible execution. Central to our benchmark is REPOLAUNCH, an automated curation pipeline that streamlines the entire process from instance creation to environment setup, removing manual bottlenecks and enabling scalability and continuous updates. We evaluate a range of state-of-the-art agent frameworks and LLMs on SWE-bench-Live, revealing a substantial performance gap compared to static benchmarks like SWE-bench, even under controlled evaluation conditions. To better understand this discrepancy, we perform detailed analyses across repository origin, issue recency, and task difficulty. By providing a fresh, diverse, and executable benchmark grounded in live repository activity, SWE-bench-Live facilitates rigorous, contamination-resistant evaluation of LLMs and agents in dynamic, real-world software development settings.
# 1 Introduction Anomaly detection (AD) — the task of identifying data that deviate from expected behavior — is central to many domains, from daily use in manufacturing [3] and content moderation [8] to high-stakes domains like cybersecurity [43, 23] and healthcare [31, 14]. Despite its broad applicability, most AD research focuses on unsupervised AD, where only normal data are available during training. When limited anomalies are also available during training, many unsupervised methods cannot handle this additional information and discard these “known” training anomalies (e.g., Kim et al. [19], Qiu et al. [30], Shenkar and Wolf [40], Xiao and Fan [46]). Ideally, models should incorporate these known anomalies during training while still detecting “unknown anomalies” (i.e., anomaly types absent during training) at test time. Can unsupervised AD principles generalize to semi-supervised AD? We address this question by focusing on a key principle from unsupervised AD: training classifiers to distinguish normal data from (randomly generated synthetic) anomalies.
[Figure 1: Binary classification tests the normal density $h_1$ against the anomaly density $h_-$; unsupervised AD tests $h_1$ against a constant $c$; our semi-supervised AD tests $h_1$ against the mixture $h_- + c$.]
This principle has both theoretical justification and empirical success in unsupervised settings [42, 52, 41], yet its effectiveness and validity in the semi-supervised regime remain unexplored. At first glance, mixing synthetic with known anomalies might dilute the known anomaly signal — the anomaly class during training contains both known and synthetic anomalies. Synthetic anomalies may also contaminate regions with normal data. However, we claim that synthetic anomalies are key in semi-supervised AD. In this work, we propose that adding synthetic anomalies during training is a theoretically-grounded and empirically effective framework for semi-supervised AD.
Theoretically, we provide the first mathematical formulation of semi-supervised AD (Figure 1). This formulation reveals the benefits of synthetic anomalies: they (i) label low density regions of normal data as anomalous and (ii) improve model learning. The former suggests that our formulation models AD well, while the latter allows us to prove the first theoretical learning guarantees for semi-supervised AD with neural networks. Our theoretical model also recommends the number of synthetic anomalies to add, mitigating issues of dilution and contamination of real training data. We also demonstrate that our theoretical framework of adding synthetic anomalies translates into a practical and effective implementation, evaluating our framework on five real-world datasets. We observe that synthetic anomalies can improve performance on both known and unknown anomalies. This improvement is not only seen for our theoretical model, but also for other state-of-the-art classification-based AD methods. These analyses on theoretical guarantees and empirical evaluations on diverse datasets and AD methods demonstrate the feasibility of adding synthetic anomalies in semi-supervised AD. We summarize our contributions below: • We propose a theoretically-driven and empirically effective framework for semi-supervised AD, adding synthetic anomalies to the anomaly class for binary classification during training. • We provide the first mathematical formulation for semi-supervised AD which generalizes unsupervised AD to allow for known anomalies. • We show that adding synthetic anomalies to the anomaly class during training sidesteps two potential problems of anomaly modeling and ineffective learning. • To show effective learning, we prove the optimal convergence of the excess risk of our neural network binary classifiers, the first theoretical result in semi-supervised AD. • Our experiments demonstrate that adding synthetic anomalies improves performance. 
This improvement extends beyond our concrete example of vanilla binary classifiers to other classification-based AD methods, highlighting our method’s generalizability. # 2 Related Works Semi-Supervised AD Unlike unsupervised AD methods, which assume all training data are normal, other methods have been able to leverage known anomaly samples during training with some empirical success [15, 34, 29, 53, 21, 20, 13, 10, 51]. For instance, Han et al. [15] show that even with $1\%$ labeled anomalies, methods incorporating supervision empirically outperform unsupervised AD methods. However, there is currently no mathematical formulation of the goal of semi-supervised AD, let alone a theoretically-grounded approach towards it. Without a mathematical formulation, unsupervised and semi-supervised AD remain research areas with disjoint scopes. Auxiliary Data Using auxiliary data for (unsupervised) AD is popular in applied domains, such as generating anomalies from normal data [11, 6, 10]. In our work, we wish to understand the general theoretical underpinnings of AD, so we avoid using domain-specific knowledge. The first general theory for unsupervised AD with synthetic anomalies used uniformly random data as synthetic anomalies for support vector machine (SVM) classification [38]. Sipple [41] experimented with neural networks instead, while Cai and Fan [5] used another neural network for anomaly generation and Hendrycks et al. [17] used open-source data as anomalies for image AD. Correspondingly, Zhou et al. [52] provided the theoretical analysis for neural networks using synthetic anomalies. However, these works are for unsupervised AD, not semi-supervised AD. # 3 Formulating Anomaly Detection In this section, we provide a general AD formulation assuming full knowledge of anomalies. Then, we explore a potential formulation of semi-supervised AD that relaxes this assumption.
# 3.1 Background: Modeling Anomaly Detection as Binary Classification First, consider a binary classification problem between $Y = 1$ (normal class) and $Y = -1$ (anomaly class). Let $\mu$ be a known probability measure on our domain $\mathcal{X} \subseteq \mathbb{R}^d$. Without loss of generality, let $\mathcal{X} = [0,1]^d$. Assume data from the normal and anomaly classes are drawn respectively from unknown distributions $Q$ and $W$ on $\mathcal{X}$, where $Q$ has density $h_1$ with respect to $\mu$, and $W$ has density $h_2$ with respect to $\mu$. Let $s \in (0,1)$ denote the proportion of normal data on $\mathcal{X}$ such that $\operatorname{P}(Y = 1) = s$ and $\operatorname{P}(Y = -1) = 1 - s$. Let $P$ be a probability measure on $\mathcal{X} \times \mathcal{Y}$ such that the marginal distribution on $\mathcal{X}$ is $P_{\mathcal{X}} = sQ + (1-s)W$. For any classifier $\operatorname{sign}(f)$ induced by a function $f : \mathcal{X} \to \mathbb{R}$, its misclassification error is given as $R(f) = \operatorname{P}(\operatorname{sign}(f(X)) \neq Y)$. The best we can do is obtain the Bayes classifier, denoted by $f_c$, which minimizes the misclassification error, i.e., $R(f_c) = R^* := \inf_{f : \mathcal{X} \to \mathbb{R} \text{ measurable}} R(f)$. Like other settings [50], the Bayes classifier $f_c$ is explicitly given as $f_c(X) = \operatorname{sign}(f_P(X))$ (discussed in Appendix B), where $f_P$ is the regression function
$$
f_P(X) := \operatorname{E}[Y \mid X] = \frac{s \cdot h_1(X) - (1-s) \cdot h_2(X)}{s \cdot h_1(X) + (1-s) \cdot h_2(X)}, \qquad \forall X \in \mathcal{X}.
$$
The Bayes classifier can also be defined with the likelihood ratio test [2, 16]
$$
\mathbb{1}\left(\frac{h_1(X)}{h_2(X)} \geq \rho\right),
$$
where $\rho = \frac{\operatorname{P}(Y=-1)}{\operatorname{P}(Y=1)}$ is the threshold of the test. We proceed to define the AD error of a function $f : \mathcal{X} \to \mathbb{R}$. Define the set of data classified as normal as $\{f > 0\} := \{X : f(X) > 0\}$. Let $s = \frac{1}{1+\rho}$ and the classical Tsybakov noise condition [45, 44]
$$
\operatorname{P}_X(\{X \in \mathcal{X} : |f_P(X)| \leq t\}) \leq c_0 t^q, \qquad \forall t > 0,
$$
hold with some $c_0 > 0$ and noise exponent $q \in [0, \infty)$. Then, for any measurable function $f : \mathcal{X} \to \mathbb{R}$, we extend Steinwart et al. [42] (proven in Appendix E.1) to derive a bound on the AD error
$$
S_{\mu, h_1, h_2, \rho}(f) := \mu\big(\{f > 0\}\,\Delta\,\{h_1/h_2 \geq \rho\}\big) \leq C_q \left(R(f) - R^*\right)^{\frac{q}{q+1}}.
$$
Here, $\Delta$ denotes the symmetric difference, $S_{\mu, h_1, h_2, \rho}(f)$ measures how well $\{f > 0\}$ matches the ground-truth set $\{h_1/h_2 \geq \rho\} := \{X : h_1(X)/h_2(X) \geq \rho\}$ (as in (3.2)), and $C_q$ is a positive constant depending on $c_0$ and $q$. From (3.4), we see $S_{\mu, h_1, h_2, \rho}(f) \to 0$ if $R(f) - R^* \to 0$. This implies that the excess risk $R(\cdot) - R^*$, a standard error metric for binary classification, serves as a surrogate for $S_{\mu, h_1, h_2, \rho}(\cdot)$ and, thus, provides a viable error metric for AD (similar to Steinwart et al. [42]). In other words, to solve AD, we can solve a standard binary classification problem.
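As a sanity check, the equivalence between the Bayes classifier $\operatorname{sign}(f_P)$ and the likelihood ratio test can be verified numerically. The sketch below uses hypothetical Gaussian densities for $h_1$ and $h_2$; it is an illustration of the two equivalent formulations above, not part of the method itself.

```python
import math

def f_P(x, s, h1, h2):
    """Regression function E[Y|X=x] for the binary AD classification problem."""
    num = s * h1(x) - (1 - s) * h2(x)
    den = s * h1(x) + (1 - s) * h2(x)
    return num / den

def bayes_label(x, s, h1, h2):
    """Bayes classifier sign(f_P): +1 = normal, -1 = anomaly."""
    return 1 if f_P(x, s, h1, h2) >= 0 else -1

def likelihood_ratio_label(x, s, h1, h2):
    """Equivalent likelihood ratio test 1(h1/h2 >= rho) with rho = (1-s)/s."""
    rho = (1 - s) / s
    return 1 if h1(x) / h2(x) >= rho else -1

# Toy 1-D example (hypothetical): normal data ~ N(0,1), anomalies ~ N(3,1).
h1 = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
h2 = lambda x: math.exp(-0.5 * (x - 3) ** 2) / math.sqrt(2 * math.pi)
s = 0.8  # 80% of the data is normal

# The two formulations agree at every point of a grid over [-5, 8].
assert all(
    bayes_label(x / 10, s, h1, h2) == likelihood_ratio_label(x / 10, s, h1, h2)
    for x in range(-50, 80)
)
```

Algebraically this is immediate: $f_P(X) \geq 0$ holds exactly when $s\,h_1(X) \geq (1-s)\,h_2(X)$, i.e., when $h_1(X)/h_2(X) \geq (1-s)/s = \rho$.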
However, the test-time anomaly density $h_2$ is not known in AD. Unsupervised AD (i.e., only normal data during training) gets around this challenge with a density level set estimation formulation [35]
$$
\{h_1 \geq \rho\} := \{X : h_1(X) \geq \rho\}.
$$
This formulation (3.5) can be interpreted as a likelihood ratio test between $h_1$ and a constant, because it is a special case of (3.2) with $h_2 \equiv 1$. In contrast, for semi-supervised AD, we would like to set $h_2$ to reflect our partial knowledge through our known anomaly sample. The question we seek to answer is — is it possible to apply this generalization to semi-supervised AD? If so, what should $h_2$ be to model semi-supervised AD? Straightforwardly, we can set $h_2$ to be the known anomaly density. However, we proceed to show two potential issues with this approach. # 3.2 Two Potential Issues without Synthetic Data For concreteness, let our training data contain normal samples $T = \{(X_i, 1)\}_{i=1}^{n} \stackrel{\mathrm{i.i.d.}}{\sim} Q$ and anomalies $T^- = \{(X_i^-, -1)\}_{i=1}^{n^-} \stackrel{\mathrm{i.i.d.}}{\sim} V$, where $V \neq W$ is an unknown distribution with density $h_-$. The straightforward approach is to use $T^-$ during training (i.e., without synthetic anomalies), implicitly setting $h_2 = h_-$. We now describe the two resulting issues in turn. The first potential issue is the “false negative modeling” problem, where anomalies are modeled as normal data. This may happen in regions where the normal density $h_1$ is low, but the known anomaly density $h_-$ is even lower, leading to $h_1(X)/h_2(X)$ exploding. In other words, low-density regions of $h_1$ can still be classified as normal. This is undesirable. Take a medical application.
Refer to the density plot in the first row of Figure 2 and let $x$ refer to blood pressure. Let $h_1$ refer to normal patients and $h_2$ (known anomalies) refer to sick patients with high blood pressure. Consider a “test-time” patient with low blood pressure $X$ (see the pink region to the left of $h_1$ in Figure 2). Here, $h_1(X) \gg h_2(X)$, so this patient will be modeled as normal. However, we wish to model low blood pressure as anomalous because the probability of a normal patient with low blood pressure $h_1(X)$ is low. The second potential issue is the “insufficient regularity of learning” problem, where the trained neural network classifier can produce high error. This can arise from the discontinuity of the regression function $f_P$, making it challenging to learn the optimal classifier. Our novel observation is that, without synthetic anomalies in the training data, the regression function is prone to discontinuity, which impacts effective learning.
[Figure 2: Case 1 — $h_1$ low but $h_2$ lower: the pink region is wrongly modeled as normal. Case 2 — $h_1$ and $h_2$ disjoint (zero margin): difficulty in learning. Our solution in both cases: add synthetic anomalies and train a neural network to output an anomaly score in $[-1, 1]$. Color legend: normal, known anomaly, synthetic anomaly.]
Proposition 1 (proven in Appendix E.2) illustrates a general scenario (see Figure 2 and Appendix D for examples), where $f_P$ is discontinuous despite both $h_1$ and $h_-$ being continuous. Proposition 1 (Separable Data with Zero Margin). Let $r > 0$ and $\mathcal{X}$ be the union of two intersecting, closed subdomains $\mathcal{X}_1$ and $\mathcal{X}_-$ with $\operatorname{interior}(\mathcal{X}_1 \cap \mathcal{X}_-) = \emptyset$. Suppose $h_1 \in C^r(\mathcal{X})$ has support $\mathcal{X}_1$ and $h_- \in C^r(\mathcal{X})$ has support $\mathcal{X}_-$.
For $h_2 = h_-$, the regression function reduces to
$$
f_P(X) = \begin{cases} \dfrac{h_1(X)}{h_1(X)} = 1, & \text{if } X \in \operatorname{interior}(\mathcal{X}_1), \\[4pt] -\dfrac{h_-(X)}{h_-(X)} = -1, & \text{if } X \in \operatorname{interior}(\mathcal{X}_-), \end{cases}
$$
which is discontinuous on $\mathcal{X}$. Moreover, for any continuous function $f : \mathcal{X} \to \mathbb{R}$, the approximation error satisfies $\|f - f_P\|_{L^\infty[0,1]^d} \geq 1$. Next, we show that the discontinuity of $f_P$ poses a difficulty for classification by neural networks. We consider feedforward rectified linear unit (ReLU) neural networks. We outline notation below. Definition 1. Let $\sigma(x) = \max\{0, x\}$ be the ReLU activation function. A ReLU network $f : \mathcal{X} \to \mathbb{R}$ with $L \in \mathbb{N}$ hidden layers and width vector $\pmb{p} = (p_1, \dots, p_L) \in \mathbb{N}^L$, which indicates the width of each hidden layer, is defined in the following compositional form:
$$
f(X) = a \cdot \sigma\big(W^{(L)} \cdots \sigma\big(W^{(1)} X + b^{(1)}\big) \cdots + b^{(L)}\big),
$$
where $X \in \mathcal{X} = [0,1]^d$ is the input, $a \in \mathbb{R}^{p_L}$ is the outer weight, $W^{(i)}$ is a $p_i \times p_{i-1}$ weight matrix with $p_0 = d$, and $b^{(i)} \in \mathbb{R}^{p_i}$ is a bias vector, for $i = 1, \dots, L$.
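A minimal forward pass in the compositional form of (3.6) can be sketched in plain Python; the weights, biases, and input below are hypothetical placeholders, not trained parameters.

```python
def relu(z):
    """Elementwise ReLU activation sigma(x) = max{0, x}."""
    return [max(0.0, v) for v in z]

def affine(W, b, x):
    """Compute W x + b for a list-of-lists matrix W and bias vector b."""
    return [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]

def relu_net(x, layers, a):
    """Forward pass of the compositional form (3.6):
    f(x) = a . sigma(W^(L) ... sigma(W^(1) x + b^(1)) ... + b^(L)),
    where layers = [(W^(1), b^(1)), ..., (W^(L), b^(L))]."""
    z = list(x)
    for W, b in layers:
        z = relu(affine(W, b, z))
    return sum(ai * zi for ai, zi in zip(a, z))

# Hypothetical network with width vector p = (3, 2) on input domain [0,1]^2.
layers = [
    ([[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]], [0.0, -0.2, 0.1]),  # W^(1), b^(1)
    ([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]], [0.0, 0.0]),            # W^(2), b^(2)
]
a = [1.0, -1.0]  # outer weight

score = relu_net([0.2, 0.7], layers, a)  # real-valued output before sign(.)
```

Applying $\operatorname{sign}$ to `score` yields the induced classifier $\operatorname{sign}(f)$ from Section 3.1.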
Let $\sigma^k$ be the $\mathrm{ReLU}^k$ function, a generalization of ReLU for $k \in \mathbb{N}$, defined by $\sigma^k(x) = (\max\{0, x\})^k$. Define the “approx-sign function” $\sigma_\tau : \mathbb{R} \to [-1, 1]$, with a bandwidth parameter $\tau > 0$, as
$$
\sigma_\tau(x) = \frac{1}{\tau}\sigma(x) - \frac{1}{\tau}\sigma(x - \tau) - \frac{1}{\tau}\sigma(-x) + \frac{1}{\tau}\sigma(-x - \tau).
$$
We also define the generalized approx-sign$^k$ function as
$$
\sigma_\tau^k(x) := \frac{1}{k!\,\tau^k}\sum_{\ell=0}^{k}(-1)^\ell \binom{k}{\ell}\sigma^k(x - \ell\tau) - \frac{1}{k!\,\tau^k}\sum_{\ell=0}^{k}(-1)^\ell \binom{k}{\ell}\sigma^k(-x - \ell\tau)
$$
for $k \in \mathbb{N}$. Here, the approx-sign function is designed to approximate the sign function (as $\tau \to 0$). Meanwhile, $k \in \mathbb{N}$ is a parameter controlling the smoothness of ReLU (and the approx-sign activation function), generalizing our analysis beyond the original non-smooth ReLU function. We defer discussions and visualizations to Appendix C.2. The following novel theorem presents an upper bound for the excess risk of a function $\sigma_\tau^k(f)$ induced by the output activation $\sigma_\tau^k$ in terms of the bandwidth $\tau$ and the approximation error. Theorem 1. Assume the Tsybakov noise condition (3.3) holds for some exponent $q \in [0, \infty)$ and constant $c_0 > 0$.
For any measurable function $f : \mathcal{X} \to \mathbb{R}$, there holds
$$
\underbrace{R\left(\sigma_\tau^k(f)\right) - R(f_c)}_{excess\ risk} \leq 4c_0 \big(k\tau + \underbrace{\|f - f_P\|_{L^\infty[0,1]^d}}_{approximation\ error}\big)^{q+1}.
$$
Theorem 1 shows that the smaller the approximation error, the smaller the excess risk in classification. We discuss the significance of this theorem in Remark C.3 and prove it in Appendix F. The proof is built on the error decomposition framework, followed by bounding and balancing the approximation and estimation errors. From Proposition 1 and Theorem 1, we can see that if the regression function is discontinuous, the approximation error is high (at least 1), which may lead to vacuous excess risk bounds (i.e., the excess risk can be high and is not guaranteed to converge). Lacking theoretical guarantees, the Bayes classifier cannot be effectively learned. Due to (i) an undesirable formulation and (ii) a lack of theoretical guarantees, we see that $h_2 = h_-$ is not ideal. In the next section, we propose a semi-supervised AD method to mitigate these two issues. # 4 Our Proposed Method: Semi-Supervised AD with Synthetic Anomalies # 4.1 Overview of Our Method Building on the previous classification framework and inspired by the connection between density level set estimation and synthetic anomalies, we propose to add synthetic anomalies to mitigate the two aforementioned issues (Figure 2). In addition to samples $T$ and $T^-$, we generate a set of synthetic anomalies $T' = \{(X_i', -1)\}_{i=1}^{n'}$, where each $X_i'$ is sampled i.i.d. from $\mu = \operatorname{Uniform}(\mathcal{X})$.
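The construction of the augmented training set $T \cup T^- \cup T'$ can be sketched as follows. The sample points and seed are illustrative; the synthetic-anomaly count $n' = n + n^-$ follows the choice adopted in the experiments (Section 5.1).

```python
import random

def make_training_set(normal, known_anoms, d, seed=0):
    """Augment labeled data with synthetic anomalies drawn i.i.d. from
    Uniform([0,1]^d). Uses n' = n + n^- synthetic points, the count
    adopted in the experiments (Section 5.1)."""
    rng = random.Random(seed)
    n_prime = len(normal) + len(known_anoms)
    synthetic = [[rng.random() for _ in range(d)] for _ in range(n_prime)]
    return (
        [(x, +1) for x in normal]          # T: normal data, label +1
        + [(x, -1) for x in known_anoms]   # T^-: known anomalies, label -1
        + [(x, -1) for x in synthetic]     # T': synthetic anomalies, label -1
    )

# Hypothetical toy data on [0,1]^2:
data = make_training_set(
    normal=[[0.4, 0.5], [0.45, 0.55]],
    known_anoms=[[0.9, 0.9]],
    d=2,
)
# 2 normal + 1 known anomaly + 3 synthetic anomalies = 6 labeled points
```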
Our full training dataset becomes $T \cup T^- \cup T'$, which we use to train a ReLU network classifier. # 4.2 Mitigating Issue 1: False Negative Modeling Problem Let $\tilde{s} \in (0, 1)$ denote a mixture parameter. By introducing synthetic anomalies, we are implicitly changing the density function representing the anomaly class to
$$
h_2 = \tilde{s} h_- + (1 - \tilde{s}),
$$
which corresponds to a mixture. Here, a proportion $\tilde{s}$ of anomalies are drawn from the known anomaly density $h_-$, and the remaining proportion $(1 - \tilde{s})$ of (synthetic) anomalies are drawn from the distribution $\mu$. We see that (in (4.1)) $h_2$ is bounded away from 0 due to the constant term $1 - \tilde{s} > 0$, preventing $h_1/h_2$ from exploding even when $h_1$ is small. Hence, low-probability-density regions of normal data will now be modeled as anomalous, even in regions where $h_-$ is small. Remark 1. When $h_-$ is constant, known anomalies are drawn from the uniform distribution, providing no additional prior on how anomalies can arise. This uninformative case also arises when the mixture parameter $\tilde{s} = 0$. In both cases, $h_2$ will be constant, and (3.2) reduces to the density level set estimation problem $\{h_1 \geq \rho\}$ of unsupervised AD. In other words, our semi-supervised AD framework is a generalization of unsupervised AD that allows for known anomaly supervision. # 4.3 Mitigating Issue 2: Insufficient Regularity of Learning Problem Adding synthetic anomalies from the uniform distribution can also improve the smoothness of the regression function. Later, in Section 4.4, we use this fact to show how we can effectively learn the Bayes classifier.
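The smoothing effect of the mixture (4.1) can be previewed numerically. The sketch below uses hypothetical triangular densities with disjoint interior supports (the setting of Proposition 1) and compares the regression function from Section 3.1 near the support boundary with and without the uniform mixture; the densities and constants are illustrative only.

```python
def tri(x, lo, hi):
    """Triangular density on [lo, hi] (peak at the midpoint, integrates to 1)."""
    if x < lo or x > hi:
        return 0.0
    mid, half = (lo + hi) / 2, (hi - lo) / 2
    peak = 2.0 / (hi - lo)
    return peak * (1.0 - abs(x - mid) / half)

def f_P(x, s, s_tilde, h1, h_minus):
    """Regression function with anomaly density h2 = s~ h_- + (1 - s~)."""
    h2 = s_tilde * h_minus(x) + (1.0 - s_tilde)
    num = s * h1(x) - (1 - s) * h2
    den = s * h1(x) + (1 - s) * h2
    return num / den

h1 = lambda x: tri(x, 0.0, 0.5)       # normal density, support [0, 0.5]
h_minus = lambda x: tri(x, 0.5, 1.0)  # known-anomaly density, support [0.5, 1]
s, eps = 0.5, 1e-4

# Without synthetic anomalies (s~ -> 1): f_P jumps from +1 to -1 at x = 0.5.
jump_without = abs(f_P(0.5 - eps, s, 1.0, h1, h_minus)
                   - f_P(0.5 + eps, s, 1.0, h1, h_minus))
# With the mixture (s~ = 0.5): f_P varies continuously across the boundary.
jump_with = abs(f_P(0.5 - eps, s, 0.5, h1, h_minus)
                - f_P(0.5 + eps, s, 0.5, h1, h_minus))

assert jump_without > 1.9   # discontinuity of size ~2, as in Proposition 1
assert jump_with < 0.01     # mixture removes the jump
```

The triangular densities are only $C^0$, so this previews the continuity claim rather than the full $C^r$ statement of Proposition 2.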
While Proposition 1 illustrated that the regression function $f_P$ can be discontinuous despite $h_1$ and $h_2$ being continuous, our next novel result shows that adding synthetic anomalies ensures continuity of the regression function $f_P$ under the same conditions. Proposition 2. Suppose the condition stated in Proposition 1 holds. If we add synthetic anomalies from $\mu = \operatorname{Uniform}(\mathcal{X})$ (i.e., $h_2 = \tilde{s} h_- + (1 - \tilde{s})$ with $\tilde{s} \in (0, 1)$), the regression function is
$$
f_P(X) = \frac{s \cdot h_1(X) - (1-s)\tilde{s} \cdot h_-(X) - (1-s)(1-\tilde{s})}{s \cdot h_1(X) + (1-s)\tilde{s} \cdot h_-(X) + (1-s)(1-\tilde{s})},
$$
which is $C^r$ continuous. For concreteness, we present two examples in Appendix D to illustrate how synthetic anomalies enhance the smoothness of the regression function $f_P$. Previously, from Proposition 1, we know that if $f_P$ is discontinuous, no ReLU neural network can approximate it well. Conversely, if $f_P$ is continuous, a well-established body of research has proved that ReLU neural networks can approximate it to any desired accuracy (e.g., Theorems 1 and 2 in Yarotsky [48], Theorem 5 in Schmidt-Hieber [36], Theorem 1.1 in Shen et al. [39]). However, we cannot directly use existing results because the i.i.d. assumption is violated — anomalies are not drawn i.i.d. from $h_2$; rather, they are drawn from $h_-$ (known anomalies) and $\mu$ (synthetic anomalies) separately. We proceed to derive a novel theoretical result that accommodates this non-i.i.d. setting. # 4.4 Proposed Neural Network with Synthetic Anomalies and Theoretical Guarantees We proceed to show that our method achieves minimax optimal convergence of the excess risk (and consequently, the AD error metric), the first theoretical guarantee in semi-supervised AD.
We adopt ReLU neural networks and construct a specific class of them (i.e., our hypothesis space) to learn the Bayes classifier $f_c$ well. We introduce some notation to formally define this hypothesis space. Definition 2. Let $\|W^{(i)}\|_0$ and $|b^{(i)}|_0$ denote the number of nonzero entries of $W^{(i)}$ and $b^{(i)}$ in the $i$-th hidden layer, $\|\pmb{p}\|_\infty$ denote the maximum number of nodes among all hidden layers, and $\|\pmb\theta\|_\infty$ denote the largest absolute value of the entries of $\{W^{(i)}, b^{(i)}\}_{i=1}^{L}$. For $L, w, v, K > 0$, we denote the form of neural network we consider in this work by
$$
\mathcal{F}(L, w, v, K) := \left\{ f \text{ of the form of } (3.6) : \|\pmb{p}\|_\infty \leq w,\ \sum_{i=1}^{L}\left(\|W^{(i)}\|_0 + |b^{(i)}|_0\right) \leq v,\ \|\pmb\theta\|_\infty \leq K \right\}.
$$
With $\sigma_\tau$ given in (3.7), we define our hypothesis space $\mathcal{H}_\tau$ with $\tau \in (0, 1]$ to be functions generated by $\mathcal{H}_\tau := \operatorname{span}\{\sigma_\tau \circ f : f \in \mathcal{F}(L^*, w^*, v^*, K^*)\}$ for specific $L^*, w^*, v^*, K^* > 0$. Definition 3. To make computation feasible, it is common to adopt a convex, continuous loss to replace the 0-1 classification loss function. Among all functions in $\mathcal{H}_\tau$, we specifically consider the empirical risk minimizer (ERM) w.r.t.
Hinge loss $\phi(x) := \max\{0, 1 - x\}$, defined as
$$
f_{ERM} := \arg\min_{f \in \mathcal{H}_\tau} \varepsilon_{T, T^-, T'}(f),
$$
where the empirical risk w.r.t. $\phi$ is
$$
\varepsilon_{T, T^-, T'}(f) := \frac{s}{n}\sum_{i=1}^{n}\phi\left(f(X_i)\right) + \frac{(1-s)\tilde{s}}{n^-}\sum_{i=1}^{n^-}\phi\left(-f(X_i^-)\right) + \frac{(1-s)(1-\tilde{s})}{n'}\sum_{i=1}^{n'}\phi\left(-f(X_i')\right),
$$
which uses normal data, known anomalies and synthetic anomalies from a uniform distribution. Note that $n$ and $n^-$ denote the number of normal and (real) anomalous training samples respectively, and $n'$ denotes the number of synthetic anomalies we generate. The following theorem shows that the excess risk of the ERM, $f_{ERM}$ (4.2), trained on normal data, known anomalies and synthetic anomalies, converges to 0 at an optimal rate (up to a logarithmic factor) as the number of training data increases. Theorem 2. Let $n, n^-, n' \geq 3$, $n_{min} = \min\{n, n^-, n'\}$, $d \in \mathbb{N}$, $\alpha > 0$. Assume the Tsybakov noise condition (3.3) holds for noise exponent $q \in [0, \infty)$ and constant $c_0 > 0$, and the regression function $f_P$ is $\alpha$-Hölder continuous.
Consider the hypothesis space $\mathcal{H}_\tau$ with $N = \left\lceil \left(\frac{n_{min}}{(\log(n_{min}))^4}\right)^{\frac{d}{d + \alpha(q+2)}} \right\rceil$, $\tau = N^{-\frac{\alpha}{d}}$, $K^* = 1$, and $L^*, w^*, v^*$ depending on $N, \alpha, d$ given explicitly in Appendix G. For any $0 < \delta < 1$, with probability $1 - \delta$, there holds
$$
R(\operatorname{sign}(f_{ERM})) - R(f_c) = \mathcal{O}\left(\frac{(\log n_{min})^4}{n_{min}}\right)^{\frac{\alpha(q+1)}{d + \alpha(q+2)}}.
$$
Proof Sketch of Theorem 2. The full proof and explicit excess risk bound are given in Appendix G. The proof is built on an error decomposition framework, followed by bounding and balancing the approximation and estimation errors. Notably, due to the non-i.i.d. nature of the training data, we cannot apply standard concentration inequalities. Therefore, we develop novel concentration bounds specifically adapted to this setting (see Lemma 4 and Lemma 6 in the proof). Theorem 2 tells us that when $n_{min} = \min\{n, n^-, n'\}$ increases, the excess risk converges to 0 at a rate $\mathcal{O}\left((n_{min})^{-\frac{\alpha(q+1)}{d + \alpha(q+2)}}\right)$ (dropping the logarithmic factor). This rate matches the minimax rates in the literature [1] because $n_{min}$ captures the minimum sample size across normal, anomalous and synthetic training data.
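The weighted empirical hinge risk (4.3) that $f_{ERM}$ minimizes is straightforward to compute. The sketch below evaluates it for a hypothetical scoring function on toy one-dimensional data; the optimizer over $\mathcal{H}_\tau$ itself is omitted.

```python
def hinge(x):
    """Hinge loss phi(x) = max{0, 1 - x}."""
    return max(0.0, 1.0 - x)

def empirical_risk(f, T, T_minus, T_prime, s, s_tilde):
    """Weighted empirical hinge risk of Eq. (4.3) over normal data T,
    known anomalies T_minus, and synthetic anomalies T_prime."""
    n, n_m, n_p = len(T), len(T_minus), len(T_prime)
    return (
        s / n * sum(hinge(f(x)) for x in T)
        + (1 - s) * s_tilde / n_m * sum(hinge(-f(x)) for x in T_minus)
        + (1 - s) * (1 - s_tilde) / n_p * sum(hinge(-f(x)) for x in T_prime)
    )

# A hypothetical scorer that separates the toy data with margin >= 1:
f = lambda x: 2.0 * (0.5 - x)   # positive (normal) for x < 0.5
T = [0.0]                       # normal sample:    f(0) = +1
T_minus, T_prime = [1.0], [1.0] # anomaly samples:  f(1) = -1

risk = empirical_risk(f, T, T_minus, T_prime, s=0.5, s_tilde=0.5)  # zero risk
```

All three terms vanish here because every sample is classified correctly with margin at least 1, where the hinge loss is exactly zero.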
Applying Theorem 2 to (3.4), we obtain with probability $1 - \delta$, the AD error $S_{\mu, h_1, h_2, \rho}(\operatorname{sign}(f_{\mathrm{ERM}})) = \mathcal{O}\left((n_{\mathrm{min}})^{-\frac{\alpha q}{d + \alpha(q+2)}}\right)$. As the number of training data grows, the AD error converges to 0, suggesting that ReLU networks can solve semi-supervised AD effectively. Next, we conduct experiments with real-world data to evaluate the practical efficacy of synthetic anomalies. # 5 Experiments # 5.1 Set-Up We evaluate the area under the precision-recall curve (AUPR) of neural networks with vanilla classification (VC) [15]. We also test other AD methods (mentioned in order of their proximity in modeling to VC): ES (VC with a modified activation function) [21, 20], DROCC (VC with adversarial synthetic anomalies) [13], ABC (VC with an autoencoder structure) [?] and DeepSAD [34] (autoencoder with latent hypersphere classification). All methods are evaluated with and without our proposed method of including randomly sampled synthetic anomalies (SA). VC-SA models our theoretical framework. Here, we are interested in two research questions (RQs). RQ1. Do synthetic anomalies improve the performance of VC (i.e., VC-SA versus VC)? RQ2. [Generalizability] Do synthetic anomalies improve the performance of other state-of-the-art methods? Results for RQ1 and RQ2 are reported in Tables 1a and 1b respectively. To avoid diluting the known anomaly supervision signal and contaminating the normal data during training, we avoid adding too many synthetic anomalies. Based on Theorem 2, we add $n' = n + n^-$ synthetic anomalies. More details are in Appendix H. As a note, we also evaluate 9 methods, each composing a binary classifier with an unsupervised AD method.
These methods first perform unsupervised AD to identify whether data belong to the training classes, and then a binary classifier differentiates normal data from known anomalies. However, they consistently produce random (i.e., poor) performance and are unsuitable. We defer further details and discussions to Appendix H.3. Datasets We summarize our five diverse real-world evaluation datasets spanning tabular, image and language benchmarks. More details are in Appendix H.1. Our tabular datasets comprise NSL-KDD (cybersecurity) [43], Thyroid (medical) [31] and Arrhythmia (medical) [14]. MVTec [3] and AdvBench [8] are our image and language AD datasets respectively. Here, anomalies arise naturally from cyber-attacks, medical sickness, manufacturing defects and harmful text. For all datasets, we train with normal data and one type of “known” anomaly, and evaluate on normal data and the remaining anomalies in the dataset (mostly unknown anomalies). For instance, NSL-KDD has benign (normal) network traffic and 4 types of attacks (anomalies) during training and testing: Denial of Service (DoS), probe, remote access (RA), and privilege escalation (PE). To simulate semi-supervised AD, we use RA as known anomalies and the other 3 as unknown anomalies. To convert image and text data to tabular form, we use 1024-dimensional DINOv2 embeddings [28] and 384-dimensional BERT sentence embeddings [32] respectively. In total, we have 24 unknown categories and 7 known categories. Due to the small dataset sizes of Arrhythmia and MVTec, known anomalies are used only in training and not testing, and all unknown anomaly types are grouped together as one large unknown anomaly class for evaluation. We place more emphasis on unknown anomaly evaluation because unknowns characterize the AD problem more than knowns (see the common density level set estimation formulation in Section 3).
Nevertheless, we also include known categories to gauge whether synthetic anomalies dilute the known anomaly training signal, or whether they can improve known anomaly performance. # 5.2 Discussion RQ1. Is VC-SA better than VC? Across all 5 datasets, VC-SA generally outperforms VC. VC-SA expectedly has better performance on unknown anomalies (better on 19/24 unknown anomaly categories), with synthetic anomalies providing a supervision signal to classify unknown regions as anomalous (Figure 3). Interestingly, VC-SA also outperforms VC on 5/7 known anomaly categories (PE from NSL-KDD, subnormal from Thyroid and 5 categories from AdvBench). Adding synthetic anomalies improves our modeling of density level set estimation (Case 1 in Figure 2), so improving (unsupervised) AD is not necessarily negatively correlated with improving known anomaly performance, as seen here. Overall, VC-SA performs better than VC. Table 1: AUPR results with and without synthetic anomalies for (a) our theoretical model (vanilla classification, VC) and (b) other AD models. Table 1b is a continuation of Table 1a, but separated as a different subtable to highlight the different RQs they answer. Other models are arranged from left to right in order of how close they are to VC. More often than not, synthetic anomalies (-SA suffix) improve results for VC and the other AD models closer to the left (ES and DROCC). Performance gains are seen for both unknown (unk.) and known anomalies. Meanwhile, autoencoder-based methods ABC and DeepSAD have more mixed results. (a) RQ1. AUPR for our VC model. (b) RQ2. AUPR for other methods. RQ2. How beneficial is adding random synthetic anomalies? Across datasets, synthetic anomalies had the most performance gains in MVTec image AD regardless of method. This dataset has the highest dimension and fewest training samples.
Here, known anomalies are the least dense, suggesting that the added synthetic anomalies increased the anomaly signal for improved performance. Of the other methods, ES and DROCC are the closest to VC and, likewise, benefit from adding synthetic anomalies. ES-SA outperforms ES in 16/24 (and ties in 3) unknown and 7/7 known anomaly categories, while DROCC-SA outperforms DROCC in 19/24 (and ties in 3) unknown and 5/7 known anomaly categories. Consistent performance gains demonstrate that adding synthetic anomalies generalizes well to other classifier-based AD methods.

Figure 3: Visualization. Synthetic anomalies occupy regions with unknown anomalies (top right), training the model to classify unknown anomalies as anomalous.

Meanwhile, ABC and DeepSAD enforce autoencoder (i.e., encoder-decoder) structures. ABC is the next closest to VC, using a binary classification objective with an autoencoder structure. ABC-SA outperforms ABC in 18/24 unknown anomaly categories, but most (14) improvements come from one dataset (MVTec); performance on the other datasets is mixed. DeepSAD is the least similar to VC, with a two-stage training procedure: first training an autoencoder, then using the encoder for binary classification. DeepSAD-SA outperforms DeepSAD in only 9/24 (and ties in 1) unknown anomaly categories and is the only model for which adding synthetic anomalies is not better. Notably, DeepSAD has good performance on DoS and probe anomalies in NSL-KDD, which are easy anomalies [22], but struggles on other anomalies (e.g., Thyroid and AdvBench). Here, DeepSAD already underperforms, and adding synthetic anomalies may not remedy that. Moreover, only 2/7 known anomaly categories are better for both ABC-SA vs. ABC and DeepSAD-SA vs. DeepSAD, suggesting that synthetic anomalies dilute the known anomaly training signal for these autoencoder models. 
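The "-SA" recipe discussed above can be sketched in a few lines: draw random points in feature space, label them anomalous, and append them to the classifier's training set. Uniform sampling over a slightly enlarged bounding box is one common choice; the paper's exact sampling scheme may differ, and `add_synthetic_anomalies` is a hypothetical helper name.

```python
import random

def add_synthetic_anomalies(X, y, n_synth, margin=0.1, seed=0):
    """Augment a training set with synthetic anomalies drawn uniformly
    from an enlarged bounding box of the features X (a common choice;
    not necessarily the paper's exact scheme). Labels: 0 normal, 1 anomaly."""
    rng = random.Random(seed)
    dims = list(zip(*X))  # per-dimension feature values
    lows = [min(d) - margin * ((max(d) - min(d)) or 1.0) for d in dims]
    highs = [max(d) + margin * ((max(d) - min(d)) or 1.0) for d in dims]
    synth = [tuple(rng.uniform(lo, hi) for lo, hi in zip(lows, highs))
             for _ in range(n_synth)]
    return X + synth, y + [1] * n_synth  # synthetic points labeled anomalous
```

A downstream binary classifier trained on the augmented set then learns to mark low-density regions, where only synthetic points fall, as anomalous.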
Limitations and Extensions Overall, we validate that adding synthetic anomalies to classification-based AD methods works well, while it is generally less effective under autoencoder constraints. Supervision for negatives (i.e., anomalies) must be more carefully designed in autoencoders (e.g., via contrastive learning), but we leave this to future work. Additionally, there could be valid semi-supervised AD formulations other than ours. We discuss more details in Appendix A. Ablations Due to space constraints, we leave details to Appendix H.4, summarizing ablations across 3 key hyperparameters: width, depth and number of synthetic anomalies (Table 2). Wider and deeper networks provide higher expressivity, but we observe vanishing gradients in the latter. Performance is not as sensitive to width. Meanwhile, more synthetic anomalies (even a small amount) improve unknown anomaly performance, but contaminate the supervision signal from known anomalies, hence hurting known anomaly performance. Therefore, in our experiments, we choose depth and width to balance expressivity (determined by data dimension and number of samples), and set $n' = n + n^-$. Table 2: Ablations on NSL-KDD for the width $w$, depth $L$ and proportion of synthetic anomalies $n'$ to real training data $r := n + n^-$. AUPR of our vanilla classifier reported across attacks (anomalies).
Anomaly detection (AD) is a critical task across domains such as cybersecurity and healthcare. In the unsupervised setting, an effective and theoretically-grounded principle is to train classifiers to distinguish normal data from (synthetic) anomalies. We extend this principle to semi-supervised AD, where training data also include a limited labeled subset of anomalies possibly present at test time. We propose a theoretically-grounded and empirically effective framework for semi-supervised AD that combines known and synthetic anomalies during training. To analyze semi-supervised AD, we introduce the first mathematical formulation of semi-supervised AD, which generalizes unsupervised AD. Here, we show that synthetic anomalies enable (i) better anomaly modeling in low-density regions and (ii) optimal convergence guarantees for neural network classifiers -- the first theoretical result for semi-supervised AD. We empirically validate our framework on five diverse benchmarks, observing consistent performance gains. These improvements also extend beyond our theoretical framework to other classification-based AD methods, validating the generalizability of the synthetic anomaly principle in AD.
[ "stat.ML", "cs.CR", "cs.LG", "stat.AP" ]
# 1 Introduction Text-to-SQL (Tai et al., 2023; Li et al., 2024b; Shi et al., 2025) aims to translate natural language into structured database queries, playing a crucial role in democratizing data access by enabling nontechnical users to interact effectively with relational databases. A significant amount of work is devoted to the fine-tuning of a foundational model, where Reinforcement Learning (RL) has recently been shown to effectively enhance model performance (Pourreza et al., 2025b; Berdnyk and Collery, 2025; Ma et al., 2025). Among these efforts, the careful design of the Reward Model (RM) is a crucial challenge, as the quality of the reward signal directly influences policy optimization during fine-tuning. In RL-based Text-to-SQL approaches, execution accuracy remains a dominant signal (Nguyen et al., 2025; Ma et al., 2025; Pourreza et al., 2025b; Berdnyk and Collery, 2025), providing intuitive feedback based on query correctness. Additionally, the LLM-based Bradley–Terry reward model (BTRM) (Christiano et al., 2017) has been adapted for code generation by deriving preference pairs from execution outcomes (Zeng et al., 2025a). Structural rewards based on abstract syntax tree (AST) have also been explored to capture syntactic similarity (Shojaee et al., 2023). However, each approach has significant limitations in the Text-to-SQL task. Execution-based rewards introduce significant latency due to runtime database access. LLM-based BTRM incurs high computational and memory costs, limiting scalability. AST matching-based similarity is prone to false negatives, where syntactically divergent queries that are semantically equivalent are penalized, leading to inaccurate reward signals. These limitations underscore a key challenge in Text-to-SQL RL: designing an efficient reward model that can replace execution-based signals without compromising performance. To address the above limitations, we introduce Graph-Reward-SQL, a novel RL framework for Text-to-SQL tasks. 
This framework incorporates two complementary reward models: Graph Matching Network Score (GMNScore) and Stepwise Relational Operator Tree Match (StepRTM). GMNScore serves as an outcome-based reward, which evaluates the generated SQL queries using the Graph Matching Network (GMN) without requiring execution. GMN utilizes learned graph embeddings to assess functional equivalence, capturing the deep semantics of SQL queries (Zhan et al., 2025). In contrast to execution-based rewards, GMNScore eliminates the need for costly database executions, resulting in a significant speed-up. Furthermore, compared to LLM-based Bradley-Terry reward models (BTRM), GMNScore substantially reduces GPU memory consumption due to the lightweight architecture of GMN. Additionally, StepRTM provides intermediate feedback through a stepwise reward mechanism that evaluates the generation of Common Table Expression (CTE) subqueries, complementing GMNScore. The above design offers three notable advantages. (i) Superior Training Efficiency: Our method significantly reduces time cost and GPU memory usage compared to existing outcome reward models, leading to enhanced overall training efficiency for reinforcement learning. (ii) Intermediate Feedback Integration: Unlike existing models that focus solely on outcome evaluation, our reward model incorporates intermediate feedback by leveraging the structure of CTE SQL. This provides richer feedback during training, improving performance. (iii) Strong Empirical Performance: Extensive ablation studies and evaluations on the Spider (Yu et al., 2018) and BIRD (Li et al., 2024b) Text-to-SQL benchmarks validate the superiority of our reward model. The results consistently demonstrate that our approach outperforms multiple strong reward model baselines, highlighting its effectiveness. 
Our main contributions can be summarized as follows: • We propose GMNScore, the first outcome reward model that leverages GMN to replace execution-based rewards, achieving both higher efficiency and better performance. • We design a novel stepwise reward model StepRTM, which utilizes CTE SQL to deliver stepwise supervision by matching each subquery, resulting in improved performance. • Extensive experiments show that our reward models consistently improve performance while maintaining high inference efficiency and low memory consumption. # 2 Related Work Text-to-SQL. Text-to-SQL is a key task in Natural Language Processing (NLP) that involves transforming queries expressed in natural language into executable SQL queries (Tai et al., 2023; Li et al., 2024b; Shi et al., 2025). With the increasing deployment of large language models (LLMs), agentic frameworks (Wang et al., 2025; Pourreza et al., 2025a; Lei et al., 2024) have been introduced to enhance Text-to-SQL tasks. These frameworks enable LLMs to interact with databases through iterative reasoning and external tools. Code Foundation Models such as DeepSeek-Coder (Guo et al., 2024) and Qwen2.5-Coder (Hui et al., 2024) provide the backbone for these agentic systems, enabling structured reasoning and code generation. Several approaches aim to improve LLM performance in Text-to-SQL tasks, including direct fine-tuning (Li et al., 2024a; Yang et al., 2024; Pourreza and Rafiei, 2024), as well as techniques such as prompt design (Pourreza and Rafiei, 2023; Dong et al., 2023; Gao et al., 2024) and schema linking (Guo et al., 2019; Wang et al., 2020; Lei et al., 2020; Lee et al., 2025) to further optimize results. Reinforcement Learning and Reward Model. RL has become an important paradigm for effectively fine-tuning Code Foundation Models. 
Policy optimization methods, such as Proximal Policy Optimization (PPO) (Schulman et al., 2017) and Group Relative Policy Optimization (GRPO) (Shao et al., 2024), have been explored. However, the effectiveness of RL training heavily relies on the quality of reward signals, making the design of reward models a critical aspect (Trella et al., 2023). Several contributions to RL-based code generation have advanced reward model strategies. Notable works include CodeRL (Le et al., 2022), which leverages execution feedback, PPOCoder (Shojaee et al., 2023), which integrates semantic matching of abstract syntax trees, and AceCoder (Zeng et al., 2025a), which applies an LLM-based Bradley-Terry Reward Model. The execution-based reward model for Text-to-SQL was initially introduced by Zhong et al. (2017). Recent advancements have introduced continuous reward scores based on keyword matching (Nguyen et al., 2025) and leveraged LLMs to generate and iteratively refine reward model design (Berdnyk and Collery, 2025). Alongside these developments, reasoning models such as DeepSeek-R1 (Guo et al., 2025) have advanced RL in reasoning tasks, leading to the introduction of more sophisticated reward model designs. For example, SQL-R1 (Ma et al., 2025) incorporates format and length constraints, while Reasoning-SQL (Pourreza et al., 2025b) employs more complex reward structures, such as schema linking feedback, n-gram similarity scores, and LLM-based judgment. Despite these enhancements, execution-based rewards continue to play a central role in the above RL-based Text-to-SQL approaches. Current methods overlook the computational overhead of execution-based and LLM-based reward models and fail to fully exploit the deep semantic structure of SQL queries. Additionally, these approaches focus solely on evaluating the final generated SQL, neglecting the potential of leveraging intermediate supervision signals throughout the SQL generation process. 
To address these issues, we propose an execution-free outcome reward model and a stepwise reward mechanism. These methods significantly reduce computational overhead while providing more effective reward signals for the RL-based Text-to-SQL task. Table 1: Comparison of Reward Models in RL for Text-to-SQL Tasks. Our proposed GMNScore and StepRTM achieve better performance while significantly reducing time and memory costs. # 3 Preliminaries # 3.1 Problem Formulation In the standard Text-to-SQL setting, $x$ denotes a natural language query, and $\hat{q}$ and $q^\star$ denote the generated SQL query and reference SQL, respectively. In this work, we mainly use Proximal Policy Optimization (PPO) (Schulman et al., 2017), which optimizes the policy model $\pi_\theta$ by maximizing: $$ \mathcal{J}(\theta) = \mathbb{E}_{(x, q^\star) \sim \mathcal{D},\, \hat{q} \sim \pi_\theta(\cdot \mid x)} \big[ r(\hat{q}, q^\star) - \beta\, \mathbb{D}_{\mathrm{KL}}\big( \pi_\theta(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big) \big], $$ where $\pi_{\mathrm{ref}}$ is the reference model, $\beta$ is a PPO hyperparameter, and $r(\hat{q}, q^\star)$ is the reward model. Note that our method can be easily adapted to Group Relative Policy Optimization (GRPO) (Shao et al., 2024), as detailed in Appendix D. # 3.2 Summary of Existing Reward Models Recognizing the great importance of reward models in RL, we discuss three main types of reward models. As summarized in Table 1, we compare these models with our proposed reward models in terms of time cost and GPU memory usage during inference. Additionally, the final performance of all reward models is evaluated and ranked, as described in Section 6.1. Detailed information on these comparisons can be found in Appendix F. Execution Accuracy (EX). 
For the Text-to-SQL task, execution accuracy serves as the most direct reward signal, providing a discrete score based on whether the generated SQL query yields the correct result upon execution. We use a discrete reward model with finer-grained feedback based on syntax errors (Pourreza et al., 2025b) and runtime diagnostics, following (Shojaee et al., 2023). Given a generated SQL $\hat{q}$ and reference SQL $q^\star$, the formulation is: $$ r_{\mathrm{EX}}(\hat{q}, q^\star) = R_{\mathrm{exec}} + R_{\mathrm{syntax}} + R_{\mathrm{runtime}} $$ However, EX has notable limitations. When the database contains poor-quality data (e.g., limited, missing, or inconsistent entries) or structural issues (e.g., redundancy or anomalies), different queries may produce identical results (Zhong et al., 2020). The Test Suite (TS) (Zhong et al., 2020) attempted to address this issue, but as shown in (Zhan et al., 2025), false positives and false negatives remain unavoidable. Additionally, repeatedly executing SQL queries introduces significant computational overhead, increasing training time. More details about EX are provided in Appendix E. Bradley-Terry Reward Model (BTRM). Given a natural language input $x$ and a candidate SQL query $y$, we define the reward model as $r_\psi(x, y) = h_r\big(\mathcal{M}_\theta(x, y)\big)$, with a pretrained language model $\mathcal{M}_\theta$ and a reward head $h_r$. The training process uses preference pairs based on execution correctness: $\mathcal{D} = \{(x_i, y_i^+, y_i^-)\}_{i=1}^{N}$, where $y_i^+$ executes correctly and $y_i^-$ fails or returns an incorrect result (Zeng et al., 2025b). 
The objective is to minimize the Bradley-Terry negative log-likelihood (Bradley and Terry, 1952) as follows: $$ -\sum_{i=1}^{N} \log \frac{\exp\left( r_\psi(x_i, y_i^+) \right)}{\exp\left( r_\psi(x_i, y_i^+) \right) + \exp\left( r_\psi(x_i, y_i^-) \right)} $$ This model learns to assign higher scores to correct queries, providing a dense proxy reward for RL (Christiano et al., 2017). In contrast to EX, BTRM enables more efficient policy training by eliminating the need to query databases. However, the large parameter size of LLM-based BTRM significantly increases GPU memory usage.

Figure 1: Overview of the Graph-Reward-SQL framework. The policy model generates SQL; the stepwise reward StepRTM scores completed CTE subqueries (e.g., step scores 0.3, 0.6, 0.95) via ROT representations, while the outcome reward GMNScore scores the final generated SQL against the reference SQL via graph-representation matching, avoiding false positives from identical execution results.

Matching-based Reward. In (Nguyen et al., 2025), keyword matching is used for SQL queries, while n-gram similarity is used in Reasoning-SQL (Pourreza et al., 2025b) to capture overlapping token sequences. Matching-based methods are fast but may assign negative rewards to semantically equivalent SQL queries that differ in syntax, which should ideally be considered correct. In broader code generation tasks, PPOCoder (Shojaee et al., 2023) uses semantic matching of abstract syntax trees and data flow graphs. However, it still focuses on surface-level structure and does not fully capture the deep semantic information. 
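The Bradley-Terry objective above reduces, per preference pair, to a logistic loss on the reward margin. A minimal sketch (with `bt_loss` as a hypothetical helper over lists of paired rewards):

```python
import math

def bt_loss(r_pos, r_neg):
    """Negative Bradley-Terry log-likelihood over paired rewards:
    -log(exp(r+) / (exp(r+) + exp(r-))) = log(1 + exp(r- - r+))."""
    return sum(math.log1p(math.exp(rn - rp)) for rp, rn in zip(r_pos, r_neg))
```

Minimizing this pushes the reward of correctly executing queries above that of failing ones, which is what makes BTRM usable as a dense proxy for execution accuracy.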
# 4 Methodology We introduce GRAPH-REWARD-SQL, a novel framework designed to enhance SQL generation through two key innovations. First, we propose GMNScore, which replaces EX, reducing time costs while maintaining the accuracy of reward signals without requiring database execution. Second, we introduce StepRTM, a stepwise reward model based on the Relational Operator Tree (ROT) representation of CTE SQL, which provides intermediate feedback. # 4.1 Relational Operator Tree (ROT) Accurately modeling SQL structure and semantics is crucial for query analysis and comparison. SQL queries can be converted into Abstract Syntax Trees (ASTs) to capture their syntactic structure. However, unlike general programming languages, SQL lacks key representations like Control Flow Graphs (CFGs) (Cota et al., 1994) and Data Flow Graphs (DFGs) (Orailoglu and Gajski, 1986), which are essential for reflecting logic and data dependencies. To address this gap, we leverage the Relational Operator Tree (ROT) to represent SQL queries as trees of relational algebra operators. Each node in the tree corresponds to a specific logical operation (e.g., Join, Project, Filter), while the tree structure itself reflects the dependencies and execution order of the query. In practice, we use Apache Calcite (Begoli et al., 2018) to generate ROTs, which compiles SQL into a canonical intermediate representation called RelNode. This format includes various optimizations, such as operator reordering and clause simplification, resulting in normalized logical plans that are more resilient to surface-level differences. Similarly to CFGs and DFGs, the RelNode format can also integrate control dependencies and data flow as edges (Zhan et al., 2025). This enables the creation of more comprehensive graph representations that are essential for deeper query analysis. 
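To make the ROT idea concrete, here is a toy stand-in for a RelNode-style operator tree. The actual representation comes from Apache Calcite; the `RelNode` dataclass and operator names below are illustrative only.

```python
from dataclasses import dataclass, field

# Toy stand-in for a RelNode-style operator tree (the paper uses Apache
# Calcite; this dataclass only illustrates the structure).
@dataclass
class RelNode:
    op: str                       # e.g. "Scan", "Filter", "Project", "Join"
    value: str = ""               # operator argument, e.g. a predicate
    inputs: list = field(default_factory=list)

    def nodes(self):
        """All (op, value) pairs in the subtree, for set-based matching."""
        out = [(self.op, self.value)]
        for child in self.inputs:
            out.extend(child.nodes())
        return out

# SELECT age FROM singer WHERE age >= 34  -->  Project(Filter(Scan))
plan = RelNode("Project", "age",
               [RelNode("Filter", "age >= 34",
                        [RelNode("Scan", "singer")])])
```

A flattened node set like `plan.nodes()` is the kind of structure that partial-matching scores over ROTs can compare between generated and reference queries.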
# 4.2 FuncEvalGMN After obtaining the SQL graph representations, we employ a Graph Matching Network (GMN) (Li et al., 2019) trained on SQL pairs (Zhan et al., 2025) to assess functional equivalence. The model introduces global positional encoding and a cross-graph attention mechanism. It is pretrained with contrastive learning and then trained with supervised learning to capture the deep semantic similarity of SQL queries. The similarity between two queries is computed as the negative Euclidean distance between their final graph-level embeddings: $s(h_{G_1}, h_{G_2}) = -\left\| h_{G_1} - h_{G_2} \right\|_2$, where $h_{G_1}$ and $h_{G_2}$ are computed by the GMN, considering the joint representations of $G_1$ and $G_2$. This approach, first introduced in FuncEvalGMN (Zhan et al., 2025), is described in further detail in Appendix M. # 4.3 ROT/RelNode Partial Matching (RelPM) Similar to ASTs, RelNode can also be used to evaluate SQL similarity through graph matching. RelPM (Zhan et al., 2025) is a rule-based matching algorithm that assesses the similarity of SQL queries based on their RelNode representations, denoted $\mathcal{G}_{\hat{q}}$ and $\mathcal{G}_{q^\star}$, respectively. A comparable approach, applied to AST structures, is known as AstPM (Zhan et al., 2025). Both algorithms adopt a hierarchical partial matching strategy and derive a global similarity score based on the Precision and Recall of node-level matching results. At the node level, matches are determined by comparing each generated node $n' \in \mathcal{G}_{\hat{q}}$ with all the candidate nodes $n \in \mathcal{G}_{q^\star}$ in the reference tree. A match is established when two nodes have the same operator type and value. 
Additionally, a matching score is computed by comparing their subgraphs, and the candidate node with the highest matching score is selected as the final match. Further details are provided in Appendix L. # 4.4 Reward Function Design Figure 1 illustrates our reward design, comprising the outcome reward model GMNScore and the stepwise model StepRTM. Given the generated SQL $\hat{q}$ and the reference SQL $q^\star$, the reward at time-step $t$ for a sequence of length $T$ is computed as follows: $$ \begin{array}{rl} \mathcal{R}_t(\hat{q}, q^\star) = & \mathbb{1}(cond_{\mathrm{eos}}) \cdot \left[ R_{\mathrm{GMNScore}}(\hat{q}, q^\star) - \beta R_{\mathrm{kl}}(\hat{q}_{<t}) \right] \\ & +\, \mathbb{1}(cond_{\mathrm{sub}}) \cdot \left[ R_{\mathrm{StepRTM}}(\hat{q}_{\leq t}, q^\star) - \beta R_{\mathrm{kl}}(\hat{q}_{<t}) \right] \\ & +\, \mathbb{1}(\neg cond_{\mathrm{eos}}) \cdot \mathbb{1}(\neg cond_{\mathrm{sub}}) \cdot \left[ -\beta R_{\mathrm{kl}}(\hat{q}_{<t}) \right], \end{array} $$ where $cond_{\mathrm{eos}}$ indicates the end of generation, at which point the outcome reward model $R_{\mathrm{GMNScore}}$ is applied. $cond_{\mathrm{sub}}$ signifies the completion of a subquery, triggering the stepwise reward model $R_{\mathrm{StepRTM}}$ to compare the current subquery with the corresponding substructure in the reference query. The symbol $\neg$ denotes logical negation. $R_{\mathrm{kl}}(\hat{q}_{<t})$ represents a KL-divergence penalty that measures the deviation between the learned policy and the pretrained language model, applied at each time step to regularize policy updates. The scalar $\beta$ is a hyperparameter that balances rewards with policy regularization. 
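A minimal sketch of this per-token reward schedule, with the GMNScore mapping from Section 4.5 included as a stub. Both functions are illustrative; the token positions, scores, and KL values below are hypothetical inputs, not the paper's implementation.

```python
def gmn_score(m_gmn, syntax_error=False, rot_error=False):
    """Outcome reward sketch: graded penalties for parse failures, otherwise
    rescale M_GMN from (-inf, 0] via an affine shift and rectification."""
    if syntax_error:
        return -1.0
    if rot_error:
        return -0.6
    return max(0.0, m_gmn + 1.0)

def token_rewards(T, eos_t, subquery_ends, outcome, step_scores, kl, beta=0.05):
    """Per-token reward: a KL penalty at every step, plus the outcome reward
    at the end-of-sequence token and a stepwise reward where a subquery closes."""
    rewards, step_iter = [], iter(step_scores)
    for t in range(T):
        r = -beta * kl[t]             # KL regularization at each time step
        if t == eos_t:
            r += outcome              # R_GMNScore at end of generation
        elif t in subquery_ends:
            r += next(step_iter)      # R_StepRTM when a subquery completes
        rewards.append(r)
    return rewards
```

The `elif` assumes the end-of-sequence and subquery-completion conditions never coincide, which matches the mutually exclusive indicator terms in the equation above.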
# 4.5 Outcome Reward: GMNScore As described in Section 4.2, the functional correctness of generated SQL can be evaluated using the FuncEvalGMN metric $\mathcal{M}_{\mathrm{GMN}}$, which aligns well with the objective of a reward model in RL. We design an outcome reward model as follows: $$ R_{\mathrm{GMNScore}}(\hat{q}, q^\star) = \begin{cases} -1, & \text{if syntax error} \\ -0.6, & \text{if ROT parsing error} \\ \max(0, \mathcal{M}_{\mathrm{GMN}} + 1), & \text{otherwise} \end{cases} $$ The GMNScore formulation introduces graded penalties for SQL queries that trigger syntax errors or ROT parsing errors. For all other cases, we rescale the FuncEvalGMN similarity score $\mathcal{M}_{\mathrm{GMN}}$ (which lies in the range $(-\infty, 0]$) to the interval $[0, 1)$ by first applying an affine shift and then rectifying any negative values to zero. # 4.6 Stepwise Reward: StepRTM Current ETL (Extract, Transform, Load) pipelines rarely execute their logic in a single step. Instead, analysts break the workflow into a detailed plan of subqueries, where each subquery progressively transforms the data until the query is complete. CTEs are the standard method for expressing this plan:

WITH step1 AS (/* subquery1 */),
     step2 AS (/* subquery2 */)
SELECT ... FROM step2;

In most cases, CTEs enhance the readability of complex SQL by providing clear representations of intermediate steps in an ETL pipeline. These steps not only facilitate data transformation but also offer a natural unit for stepwise evaluation. Inspired by subgraph matching techniques (Lou et al., 2020; Roy et al., 2022), we propose Stepwise Relational Operator Tree Matching (StepRTM), which incorporates stepwise reward scores to provide intermediate feedback. The overall procedure of StepRTM is illustrated in Figure 2. 
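Segmenting a CTE query into its subqueries (the first step of StepRTM) can be sketched with a regex over the `name AS (...)` clauses. A real implementation would use a SQL parser; this toy `split_cte` is a hypothetical helper that assumes no nested parentheses inside each CTE body.

```python
import re

def split_cte(sql):
    """Toy CTE segmentation: return (name, body) pairs for the named
    subqueries plus the final SELECT. Assumes non-nested parentheses in
    CTE bodies; a real implementation would use a SQL parser."""
    body = re.sub(r"^\s*WITH\s+", "", sql, flags=re.I)
    parts = re.findall(r"(\w+)\s+AS\s+\(([^()]*)\)", body, flags=re.I)
    final = body[body.upper().rindex("SELECT"):]
    return [(name, sub.strip()) for name, sub in parts] + [("__final__", final.strip())]

steps = split_cte(
    "WITH step1 AS (SELECT id FROM users), "
    "step2 AS (SELECT id FROM step1 WHERE id > 3) "
    "SELECT id FROM step2"
)
```

Each returned subquery is what would then be parsed into a ROT and matched against the reference plan to produce an incremental reward.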
Let $q^\star$ denote the reference SQL, and represent the generated SQL as a sequence of subqueries $\hat{q}_{\mathrm{cte}} = [\hat{q}_1, \hat{q}_2, \dots, \hat{q}_n]$. Let $\mathcal{G}_{q^\star}$ and $\mathcal{G}_{\hat{q}_i}$ denote the node sets of the ROT representations of the reference query and the $i$-th generated subquery. The stepwise scores are then computed as follows: Figure 2: Overview of the StepRTM Stepwise Reward Calculation. (a) The generated SQL $\hat{q}_{\mathrm{cte}}$ is segmented into a sequence of subqueries, with the end index of each subquery recorded. (b) Both the reference SQL query $q^\star$ and each subquery are parsed into ROTs. (c) A stepwise matching process is performed between the ROTs. At each step, newly matched nodes are identified and used to compute incremental rewards. $$ \mathcal{R}_{\mathrm{StepRTM}}^{(i)}(\hat{q}_{\mathrm{cte}}, q^\star) = \frac{\left| (\mathcal{M}_i \cup \mathcal{G}_i) \cap \mathcal{G}_{q^\star} \right|}{\left| \mathcal{G}_{q^\star} \right|}, $$ where $\mathcal{M}_i = \bigcup_{j=1}^{i-1} \mathcal{G}_j$ represents all the matched subgraphs from the first $i-1$ subqueries, and $\mathcal{G}_i$ denotes the maximal matched subgraph in the reference query that aligns with the $i$-th subquery $\hat{q}_i$. This formulation prevents repeated rewards for the same reference node and ensures that the overall signal reflects the incremental semantic coverage of the target query. This stepwise supervision improves training performance by providing richer intermediate feedback, facilitating the generation of correct SQL queries. # 5 Experimental Setup Datasets. Our experiments are primarily conducted on the Spider and BIRD benchmarks. 
The Spider dataset (Yu et al., 2018) contains 10,181 natural language questions paired with 5,693 complex SQL queries across 138 domains. The BIRD dataset (Li et al., 2024b) consists of 12,751 questions spanning more than 37 professional fields. We use the training split of the Spider dataset for training and the development splits of both for evaluation. Additionally, the 200k-Text2SQL dataset is used for warmup before PPO. Further details about the datasets are provided in Appendix A. Baselines. We compare our proposed reward models with several representative baselines. First, we use EX, a widely adopted reward signal in recent studies (Nguyen et al., 2025; Berdnyk and Collery, 2025; Ma et al., 2025; Pourreza et al., 2025b). To evaluate the efficacy of model-based reward mechanisms, we include the LLM-based BTRM (Christiano et al., 2017; Zeng et al., 2025b), trained using DeepSeek-Coder-1.3B-Ins as the backbone, as detailed in Appendix K. Additionally, we incorporate AstPM and RelPM (Zhan et al., 2025) as matching-based reward model baselines, following recent work (Shojaee et al., 2023; Nguyen et al., 2025; Pourreza et al., 2025b). Evaluation Metrics. Following (Gao et al., 2024; Li et al., 2024a; Yang et al., 2024), we use the Test Suite (TS) (Zhong et al., 2020) as the evaluation metric. TS assesses correctness across multiple augmented databases, providing a more robust evaluation. Further details are provided in Appendix B. Table 2: TS performance of DeepSeek-Coder-1.3B-Ins and DeepSeek-Coder-6.7B-Ins models under multiple baselines and the proposed GMNScore outcome reward. Implementation Details. Prior to PPO training, we performed supervised fine-tuning (SFT) using two cold-start datasets. First, we sampled a subset from the 200k-Text2SQL dataset, matching the size of the Spider training set, and trained DeepSeek-Coder-1.3B/6.7B-Ins for two epochs. 
To promote the generation of CTE SQL queries in the stepwise reward PPO experiments, we converted BIRD data into CTE format to prepare a warm-up dataset referred to as CTE-SFT. Additional details about hyperparameters are provided in Appendix C. # 6 Results # 6.1 Reward Performance Comparison GMNScore can replace EX, thereby eliminating dependence on SQL execution and database environments. As demonstrated in Table 8, GMNScore achieves the highest average TS for the 1.3B and 6.7B models, highlighting the importance of well-designed reward signals in RL. Another notable observation is that RelPM outperforms AstPM, with improvements of $2.53\%$ and $1.71\%$ for the two model sizes. The better performance of the former over the latter can be attributed to the use of normalized logical plans for SQL parsing in ROT, which are less susceptible to surface-level syntactic differences. This also provides an effective representation for our two proposed reward models. GMNScore learns deep semantic information via graph-level embeddings, bypassing the need for execution-result comparisons and thus mitigating false-positive noise. Additionally, GMNScore eliminates the necessity of constructing and maintaining databases, offering a lightweight solution for large-scale Text-to-SQL RL. Case studies are provided in Appendix Q. Table 3: TS performance of the DeepSeek-Coder-1.3B-Ins model trained with the integration of CTE-SFT warmup and StepRTM, which consistently improves performance. (∗ indicates the use of a warmup phase.) Figure 3: TS performance of Qwen2.5-Coder-7B/14B-Ins models directly trained by PPO/GRPO. The integration of StepRTM as a stepwise reward further enhances performance. As shown in Table 3, combining CTE-SFT with StepRTM consistently results in performance improvements across various outcome reward models. Notably, our framework, which integrates GMNScore alongside StepRTM, achieves the highest overall performance. Specifically, we observe a $5.87\%$ improvement on the BIRD dataset and a $0.97\%$ increase on the Spider dataset. These results suggest that the BIRD dataset, which is inherently more challenging due to its diverse databases and query complexity, benefits more significantly from our proposed stepwise reward. # 6.2 Effectiveness of GMNScore with GRPO Our proposed GMNScore is effective not only with PPO but also when applied to GRPO. We trained the Qwen2.5-Coder-7B/14B-Ins models with PPO and GRPO. As shown in Figure 3, the results consistently demonstrate that GMNScore outperforms EX in both RL protocols, underscoring its robustness and effectiveness. Table 4: Comparisons among Reference SQL, Failed SQL, and CTE SQL demonstrate the effectiveness of StepRTM. Figure 4: AUC between reward scores and execution results during training. GMNScore exhibits superior consistency, achieving a rate of over $97.6\%$. # 6.3 Case Study: CTE SQL with StepRTM Table 4 presents two cases that demonstrate how the stepwise reward model enhances both correctness and structural clarity. Each case compares the reference SQL, a failed SQL query generated by a model trained solely with an outcome-based reward, and the CTE SQL query generated by a model trained with StepRTM. In the first case, the failed SQL incorrectly retrieves data from the comments table instead of the intended posts table. The CTE SQL resolves this by decomposing the task into clear subqueries: first locating the target user, then aggregating the scores of that user’s posts. In the second case, the failed SQL hardcodes gender identifiers, leading to errors in filtering. In contrast, the CTE SQL uses two dedicated subqueries to correctly filter for male superheroes and extract their superpowers. # 6.4 Analysis of GMNScore Accuracy Experimental results demonstrate the effectiveness of GMNScore as a reward model in PPO, significantly outperforming BTRM. 
We analyze the correlation between these two reward signals and actual execution outcomes during PPO training. As shown in Figure 4, GMNScore consistently maintains a high correlation with the execution results. This indicates that GMNScore provides a more stable and precise reward signal than BTRM during training, contributing to its superior performance.
# 7 Discussion
The GMNScore introduced in this paper offers an alternative to EX while remaining fully compatible with other reward models. As detailed in Appendix G, we extend our investigation beyond the StepRTM integration (Section 6.1) by applying hybrid outcome reward models, which further improve performance. This finding is consistent with previous work using multiple outcome reward models (Pourreza et al., 2025b; Ma et al., 2025). We chose to omit the hints (e.g., age = year - birth_year) available in the BIRD dataset, as these are absent in most Text-to-SQL datasets, such as Spider. As a result, the performance results reported in our study are lower than those found in works that utilize the BIRD dataset's hints.
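To make the CTE decomposition that StepRTM supervises concrete, the sketch below splits a CTE-formatted query into named subqueries and averages a per-step score over them. This is a minimal illustration only, not the implementation used in our experiments: the splitter is regex-based (it assumes non-nested `WITH a AS (...), b AS (...)` syntax and no parentheses inside string literals), and `step_scorer` is a hypothetical callback standing in for a subquery-level matcher against the reference SQL.

```python
import re

def split_cte(sql: str):
    """Split a CTE-formatted query into (name, body) subqueries plus the final SELECT.

    Rough regex-based sketch: assumes non-nested 'WITH a AS (...), b AS (...)'
    syntax and no parentheses inside string literals. A real system would use a
    proper SQL parser.
    """
    m = re.match(r"\s*WITH\s+(.*)", sql, re.IGNORECASE | re.DOTALL)
    if not m:
        return [], sql.strip()
    rest = m.group(1)
    subqueries = []
    while True:
        head = re.match(r"\s*,?\s*(\w+)\s+AS\s*\(", rest, re.IGNORECASE)
        if not head:
            break
        depth, i = 1, head.end()
        while depth and i < len(rest):  # balance parentheses to find the body's end
            depth += {"(": 1, ")": -1}.get(rest[i], 0)
            i += 1
        subqueries.append((head.group(1), rest[head.end():i - 1].strip()))
        rest = rest[i:]
    return subqueries, rest.strip()

def stepwise_reward(pred_sql: str, step_scorer) -> float:
    """Average a per-step score over each CTE subquery and the final query.

    `step_scorer` is a hypothetical stand-in for a stepwise reward model's
    subquery-level scoring.
    """
    steps, final = split_cte(pred_sql)
    parts = [body for _, body in steps] + [final]
    return sum(step_scorer(p) for p in parts) / len(parts)
```

Each named subquery (plus the final SELECT) becomes one supervised step, which is what lets intermediate rewards attach to partially correct decompositions.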
Reinforcement learning (RL) has been widely adopted to enhance the performance of large language models (LLMs) on Text-to-SQL tasks. However, existing methods often rely on execution-based or LLM-based Bradley-Terry reward models. The former suffers from high execution latency caused by repeated database calls, whereas the latter imposes substantial GPU memory overhead, both of which significantly hinder the efficiency and scalability of RL pipelines. To this end, we propose a novel Text-to-SQL RL fine-tuning framework named Graph-Reward-SQL, which employs the GMNScore outcome reward model. We leverage SQL graph representations to provide accurate reward signals while significantly reducing inference time and GPU memory usage. Building on this foundation, we further introduce StepRTM, a stepwise reward model that provides intermediate supervision over Common Table Expression (CTE) subqueries. This encourages both functional correctness and structural clarity of SQL. Extensive comparative and ablation experiments on standard benchmarks, including Spider and BIRD, demonstrate that our method consistently outperforms existing reward models.
[ "cs.LG", "cs.DB", "cs.PL" ]
# 1 Introduction
Recent advancements in large language models (LLMs) have been significantly influenced by the Mixture-of-Experts (MoE) architecture [20, 13, 5, 27, 28, 6, 7], which leverages dynamic routing mechanisms to scale model parameters efficiently while maintaining sub-linear increases in computational requirements. MoE models achieve superior performance by activating only a subset of expert networks based on input-specific needs, thereby enabling the development of larger models within the constraints of limited computational resources. However, despite their efficiency during training, MoE models face substantial challenges in deployment due to high memory and computational overhead [2, 17]. Specifically, the need to load all experts into memory simultaneously, even when only a few are activated, results in significant memory bandwidth constraints and increased inference costs. These challenges necessitate the exploration of effective compression techniques to reduce memory and computational demands, thereby facilitating the deployment of MoE models on resource-constrained devices.
Figure 1: Sample distribution on the first MoE layer of DeepSeek-MoE-16B with different calibration sets. For C4 and WikiText2, $128 \times 4096$ tokens were sampled.
Post-Training Quantization (PTQ), a method that converts weights and activations to low-precision formats, has demonstrated significant effectiveness in reducing both model size and memory consumption, particularly showing strong performance in traditional large language models (LLMs). However, quantizing Mixture-of-Experts (MoE) models introduces unique challenges rooted in their sparse, dynamic computation patterns. First, activation outliers in MoE layers exhibit expert-specific distributions, as tokens are routed to distinct subsets of experts.
Traditional activation quantization methods [26, 24, 16, 25, 10, 18], designed for dense architectures where all tokens pass through shared weights, fail to handle these expert-dependent outlier patterns, leading to unstable quantization steps and accuracy collapse. Second, the router's expert selection mechanism is highly sensitive to quantization-induced logit perturbations. Even minor deviations in gate scores can disrupt the top-$k$ expert assignment logic, degrading model performance due to misrouted tokens. Third, expert activation sparsity creates calibration bottlenecks: rarely activated experts receive insufficient data coverage during parameter calibration, resulting in inaccurate estimation of quantization parameters and large quantization errors. Existing PTQ methods [9, 11, 14, 15, 12, 8] either ignore activation quantization entirely or apply uniform smoothing strategies incompatible with MoE's sparse routing mechanics, leaving these challenges unaddressed. To tackle these challenges, we propose a novel Expert-Aware Post-Training Quantization (EAQuant) method. Our approach begins with an expert-aware smoothing aggregation strategy designed to suppress activation outliers across MoE experts. By constructing a unified channel-wise smoothing vector that aggregates maximum scaling requirements from both expert weights and router logits, we redistribute outlier magnitudes while preserving mathematical equivalence through parameter fusion with preceding normalization layers. To ensure consistent expert selection post-quantization, we introduce router logits distribution alignment through a dual-objective calibration process that minimizes both logit reconstruction error and Kullback-Leibler divergence between full-precision and quantized routing probabilities. This guarantees stable top-$k$ expert activation despite quantization-induced perturbations.
Finally, we resolve expert-level activation sparsity through expert-level calibration data balance, where underutilized experts receive prioritized sampling from augmented datasets until their activation counts meet parity with computationally derived expectations. Extensive evaluations across diverse MoE architectures and quantization configurations demonstrate that EAQuant achieves state-of-the-art performance. For instance, EAQuant improves average task accuracy by 1.37%, 1.15%, and 1.15% over the state-of-the-art method DuQuant across the three models under W4A4 quantization, with particularly pronounced gains in reasoning benchmarks (e.g., +2.52% on ARC-E for Mixtral-8x7B) and closer perplexity alignment to full-precision baselines. Critically, EAQuant exhibits superior robustness in extreme W3A4 quantization, mitigating performance degradation. These advancements stem from our expert-aware smoothing aggregation strategy, router logits distribution alignment, and expert-level calibration data balancing, collectively establishing EAQuant as the new benchmark for efficient, high-precision MoE quantization.
# 2 Motivation
Expert-Dependent Outlier Heterogeneity. MoE architectures assign tokens to specialized experts via router gating, inducing expert-specific activation patterns. For instance, experts trained on mathematical reasoning exhibit sparse, high-magnitude outliers in specific feature dimensions, while experts handling linguistic tasks display smoother activation distributions. Conventional global smoothing strategies [26, 24, 16] fail to capture this per-expert heterogeneity, as they apply fixed scaling factors across all experts. This mismatch leads to over-quantization of outlier-prone experts (causing precision loss) and under-utilization of precision for experts with benign distributions.
Routing Fragility under Quantization Noise.
MoE routers rely on low-dimensional logit vectors to select top-$k$ experts, a mechanism highly sensitive to quantization-induced perturbations. Even minor distortions in expert weights, common in post-training quantization (PTQ), can destabilize the gate's decision boundary, causing misrouting and propagating errors through subsequent attention layers. Existing PTQ methods [9, 11] treat the router as a passive component, ignoring its interdependence with expert activations during quantization.
Calibration Data Imbalance for Rare Experts. MoE models exhibit power-law activation distributions, where a small subset of “core” experts handles the majority of tokens, leaving “niche” experts underutilized. During PTQ calibration, rare experts receive insufficient data coverage, causing their quantization parameters (e.g., scaling factors) to overfit to outliers or noise. As shown in Figure 1, which plots the sample distribution on the first MoE layer of DeepSeek-MoE-16B, this imbalance manifests consistently across both calibration sets. Current methods [11, 14, 15, 12, 8] largely ignore this sparsity, which compromises the MoE's adaptive computation advantage. MoEQuant [9] proposes expert-balanced self-sampling to create a balanced calibration dataset; however, generating new calibration data in this manner may compromise the fairness of comparison with other methods to some extent. Therefore, this calibration sparsity remains unaddressed in state-of-the-art PTQ methods, creating a critical barrier to efficient MoE deployment.
# 3 Method
In this section, we detail the proposed post-training quantization (PTQ) method for Mixture-of-Experts (MoE) architectures. As shown in Figure 2, our method proceeds with three key components. First and foremost, we introduce an expert-aware smoothing aggregation strategy to effectively mitigate activation outliers in MoE inputs, ensuring robust activation patterns for consistent expert participation.
Subsequently, we propose a router logit alignment mechanism that preserves expert selection consistency across quantization stages by aligning the probability distributions of pre- and post-quantization router logits. Furthermore, we propose to balance the calibration data for sparsely activated local experts.
# 3.1 Expert-Aware Smoothing Aggregation
Existing literature on quantization for Mixture-of-Experts models mainly focuses on quantizing weights only, while the activations remain floating-point values. Our method efficiently quantizes both activations and weights by solving two major challenges encountered when quantizing the incoming activation of the MoE module. In post-training quantization for large language models, a small number of channels in activation tensors usually exhibit abnormal values with extremely large magnitude. Well-known works such as SmoothQuant [26] and OmniQuant [24] utilize the technique of a mergeable smoothing vector to scale the dynamic range of the activation tensor before quantizing activations during inference. Specifically, the smoothing vector $s$ is computed per channel by $s_j = \max(|\mathbf{x}_j|)^{\alpha} / \max(|\mathbf{W}_j|)^{1-\alpha}$ to alleviate outlier values through channel-wise scaling $\tilde{\mathbf{x}} = \mathbf{x} \cdot \mathrm{diag}^{-1}(s)$, thereby mitigating the quantization difficulty. Moreover, the smoothing vector $s$ can be merged into the preceding normalization layer, incurring no extra computation overhead. However, this technique faces a critical generalizability issue when quantizing activations of MoE models. Consider a token vector $\mathbf{x} \in \mathbb{R}^d$ with $d$ channels, and an MoE layer with $n$ local experts.
The final output of the layer is the weighted sum of the selected experts' local outputs by the gate values:
Figure 2: The overview of our proposed EAQuant with three key components. 1) Expert-Aware Smoothing Aggregation. 2) Router Logits Distribution Alignment. 3) Expert-Level Calibration Data Balance.
$$ \mathbf{y} = \sum_{i \in T} p^i(\mathbf{x}) E^i(\mathbf{x}), $$
where $T$ is the set of indices with the highest top-$k$ gate values. In this situation, the original activation smoothing method requires per-expert smoothing vectors $\{ s^i \in \mathbb{R}^d \}_{i=1}^{n}$ before quantizing activations, respectively computed as:
$$ s_j^i = \frac{\max(|\mathbf{x}_j|)^{\alpha}}{\max(|\mathbf{W}_j^i|)^{1-\alpha}} \quad \forall j \in \{1, 2, \cdots, d\}, $$
where the subscript $j$ denotes the $j$-th input channel and $\mathbf{W}^i$ denotes the first weight matrix of the $i$-th local expert. While the weight transformation $\tilde{\mathbf{W}}^i = \mathrm{diag}(s^i) \mathbf{W}^i$ preserves mathematical equivalence through $\mathbf{x} \mathbf{W}^i = (\mathbf{x} \, \mathrm{diag}^{-1}(s^i)) \cdot (\mathrm{diag}(s^i) \mathbf{W}^i)$, the activation scaling operation $\mathbf{x} \, \mathrm{diag}^{-1}(s^i)$ must be dynamically executed after expert selection, introducing $O(kd)$ computational overhead per token, where $k$ is the number of experts each token is routed to. The reason is that the preceding normalization layer (i.e., RMSNorm or LayerNorm) can only absorb one vector before inference.
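The channel-wise smoothing trick can be sanity-checked numerically. Below is a toy, pure-Python sketch (all values illustrative, one token and one expert weight matrix) that computes the smoothing vector and verifies that the matrix product is unchanged while the activation's dynamic range shrinks:

```python
# Toy sketch of channel-wise smoothing (SmoothQuant-style); values are illustrative.
x = [6.0, 0.5, -8.0]                       # one token, d = 3 channels (channel 2 is an outlier)
W = [[0.2, -0.1], [1.5, 0.7], [0.3, 0.4]]  # d x m expert weight matrix (rows = input channels)
alpha = 0.5

# s_j = max|x_j|^alpha / max|W_j|^(1-alpha), computed per input channel j
s = [abs(x[j]) ** alpha / max(abs(w) for w in W[j]) ** (1 - alpha) for j in range(3)]

x_s = [x[j] / s[j] for j in range(3)]               # x diag^{-1}(s): smoothed activation
W_s = [[s[j] * w for w in W[j]] for j in range(3)]  # diag(s) W: compensated weights

def matvec(v, M):
    """Row-vector times matrix: (1 x d) @ (d x m)."""
    return [sum(v[j] * M[j][k] for j in range(len(v))) for k in range(len(M[0]))]

# Equivalence: x W == (x diag^{-1}(s)) (diag(s) W)
orig, smoothed = matvec(x, W), matvec(x_s, W_s)
assert all(abs(a - b) < 1e-9 for a, b in zip(orig, smoothed))
# The smoothed activation has a much flatter dynamic range than x,
# which is what makes low-bit activation quantization viable.
```

The per-expert version simply repeats this with each expert's $\mathbf{W}^i$, which is exactly the dynamic-scaling overhead the unified vector avoids.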
Our key point is to construct a unified smoothing vector $\overline{s}$ that satisfies
$$ \overline{s}_j \geq \max_{i \in [1, n]} (s_j^i), \quad \forall j \in \{1, 2, \cdots, d\}, $$
to suppress the channel-related extreme values in the activation no matter which local experts the current token is routed to. We achieve this through channel-wise maximization over expert-specific requirements:
$$ \overline{s}_j = \max_{i \in [1, n]} \left( \frac{\max(|\mathbf{x}_j|)^{\alpha}}{\max(|\mathbf{W}_j^i|)^{1-\alpha}} \right). $$
This aggregation guarantees that for any selected expert $i$, we have
$$ \overline{s}_j \geq s_j^i \Rightarrow \mathrm{diag}^{-1}(\overline{s}) \preceq \mathrm{diag}^{-1}(s^i), $$
where $\preceq$ denotes element-wise inequality, ensuring numerical stability when quantizing activations with outlier channels. During the forward propagation of the MoE module, the router's weight $\mathbf{W}^{\mathrm{gate}}$ actually shares the same input activation with the local experts.
Therefore, we extend our unified smoothing vector to incorporate the router weights $\mathbf{W}^{\mathrm{gate}} \in \mathbb{R}^{d \times n}$ by introducing a router-specific scaling vector $s_j^{\mathrm{gate}} = \frac{\max(|\mathbf{x}_j|)^{\alpha}}{\max(|\mathbf{W}_j^{\mathrm{gate}}|)^{1-\alpha}}$ into the aggregation process:
$$ \overline{s}_j = \max \left( \underbrace{\max_{i \in [1, n]} \left( \frac{\max(|\mathbf{x}_j|)^{\alpha}}{\max(|\mathbf{W}_j^i|)^{1-\alpha}} \right)}_{\text{Expert requirements}}, \ \underbrace{\frac{\max(|\mathbf{x}_j|)^{\alpha}}{\max(|\mathbf{W}_j^{\mathrm{gate}}|)^{1-\alpha}}}_{\text{Router requirement}} \right). $$
This joint maximization guarantees $\overline{s}_j \geq \max \left( s_j^{\mathrm{gate}}, \{ s_j^i \}_{i=1}^{n} \right)$.
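Assuming the per-channel activation maxima and per-expert weight maxima have already been profiled on calibration data, the aggregation reduces to a channel-wise maximum. A minimal sketch (function and argument names are illustrative, not from our codebase):

```python
def unified_smoothing_vector(act_max, expert_w_max, gate_w_max, alpha=0.5):
    """Channel-wise max over expert- and router-specific smoothing requirements.

    act_max[j]         : max |x_j| over the calibration set, for channel j
    expert_w_max[i][j] : max |W^i_j| for expert i, channel j
    gate_w_max[j]      : max |W^gate_j| for the router, channel j
    """
    d = len(act_max)
    s_bar = []
    for j in range(d):
        a = act_max[j] ** alpha
        # one candidate scale per expert ...
        candidates = [a / expert_w_max[i][j] ** (1 - alpha)
                      for i in range(len(expert_w_max))]
        # ... plus the router's requirement, since it shares the same input
        candidates.append(a / gate_w_max[j] ** (1 - alpha))
        s_bar.append(max(candidates))
    return s_bar
```

By construction the returned vector dominates every per-expert vector, so a single fused scale is safe regardless of which experts the token is routed to.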
The unified scaling enables equivalent transformations for both expert and router computations:
$$ \begin{array}{rl} & \text{Router: } \tilde{\mathbf{x}} \tilde{\mathbf{W}}^{\mathrm{gate}} = (\mathbf{x} \, \mathrm{diag}^{-1}(\overline{s}))(\mathrm{diag}(\overline{s}) \mathbf{W}^{\mathrm{gate}}) = \mathbf{x} \mathbf{W}^{\mathrm{gate}}, \\ & \text{Expert } i \text{: } \tilde{\mathbf{x}} \tilde{\mathbf{W}}^i = (\mathbf{x} \, \mathrm{diag}^{-1}(\overline{s}))(\mathrm{diag}(\overline{s}) \mathbf{W}^i) = \mathbf{x} \mathbf{W}^i. \end{array} $$
And we can absorb $\overline{s}$ into the preceding RMSNorm layer through parameter fusion:
$$ \tilde{\mathbf{x}} = \mathrm{RMSNorm}'(\mathbf{x}) = \frac{\gamma \oslash \overline{s}}{\sqrt{\frac{1}{d} \sum_{j=1}^{d} \mathbf{x}_j^2}} \odot \mathbf{x}. $$
# 3.2 Router Logits Distribution Alignment
In order to preserve the accuracy of the router's expert selection after quantization, we develop a dual-objective calibration strategy that jointly optimizes numerical precision and routing distribution consistency. Let $\mathbf{W}^{\mathrm{gate}} \in \mathbb{R}^{d \times n}$ denote the router weights and
$$ \mathcal{Q}(\mathbf{W}) = \mathrm{clip}\!\left( \mathrm{round}\!\left( \frac{\mathbf{W}}{\Delta} \right) + z, \ q_{min}, \ q_{max} \right) $$
denote the uniform quantization operator.
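The uniform quantization operator can be written down directly. This is a per-tensor sketch only; production PTQ typically computes $\Delta$ and $z$ per channel or per group, and note that Python's built-in `round` uses ties-to-even, which may differ from a given hardware's rounding mode:

```python
def quantize(W, delta, z, q_min, q_max):
    """Uniform quantizer: clip(round(w / delta) + z, q_min, q_max), element-wise."""
    return [min(max(round(w / delta) + z, q_min), q_max) for w in W]

def dequantize(Q, delta, z):
    """Map integer codes back to the (approximate) real values: (q - z) * delta."""
    return [(q - z) * delta for q in Q]
```

During calibration, the scale `delta` and zero-point `z` are the parameters $\theta$ being optimized for the router weights.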
Our calibration process solves:
$$ \min_{\theta} \ \underbrace{\mathbb{E}_{\tilde{\mathbf{x}}} \left[ \| \tilde{\mathbf{x}} \tilde{\mathbf{W}}^{\mathrm{gate}} - \tilde{\mathbf{x}} \mathcal{Q}(\tilde{\mathbf{W}}^{\mathrm{gate}}) \|_2^2 \right]}_{\text{Logit MSE}} + \underbrace{\mathbb{E}_{\tilde{\mathbf{x}}} \left[ D_{\mathrm{KL}}(p_{\mathrm{fp}} \,\|\, p_{\mathrm{quant}}) \right]}_{\text{Routing KL Divergence}}, $$
where $\theta$ represents quantization parameters such as the scale and zero-point for $\mathbf{W}^{\mathrm{gate}}$. The probability distributions are computed as:
$$ \begin{array}{r} p_{\mathrm{fp}} = \mathrm{softmax}(\tilde{\mathbf{x}} \tilde{\mathbf{W}}^{\mathrm{gate}}), \\ p_{\mathrm{quant}} = \mathrm{softmax}(\tilde{\mathbf{x}} \mathcal{Q}(\tilde{\mathbf{W}}^{\mathrm{gate}})). \end{array} $$
In the quantization scenario, even small logit errors can dramatically alter the result of top-$k$ expert selection. The traditional MSE objective solely calibrates the absolute difference of logit magnitudes before and after quantization, while the added Kullback-Leibler (KL) divergence term explicitly minimizes the distribution discrepancy in expert selection probabilities, which is critical for MoE models.
# 3.3 Expert-Level Calibration Data Balance
To address the inherent unbalanced expert activation issue in MoE models during post-training quantization, we propose a dynamic calibration data balancing strategy with expert-balanced sampling.
This strategy selects tokens with explicit expert correlation through router configuration analysis and applies threshold-driven oversampling to under-represented experts until their activation counts meet the criterion, enhancing the precision of quantization parameter estimation. Following standard PTQ practice, we first sample 128 sequences of length 4096 from WikiText2 to construct the base dataset $\mathcal{D}_{\mathrm{base}}$. These data calibrate non-expert components such as the QKV layers and the gating network.
Table 1: Results of DuQuant and our EAQuant with the W4A4 weight-activation quantization configuration among 7 tasks on OLMoE-7B, DeepSeek-MoE-16B and Mixtral-8x7B. Notably, the router layer is quantized with W8A8.
Table 2: Results of DuQuant and our EAQuant with the W3A4 weight-activation quantization configuration among 7 tasks on OLMoE-7B, DeepSeek-MoE-16B and Mixtral-8x7B. Notably, the router layer is quantized with W8A8.
For MoE modules, we first forward $N = 128 \times 4096$ tokens from $\mathcal{D}_{\mathrm{base}}$ through the top-$k$ router to obtain the profiling of token-expert assignments. For those experts whose input token quantities are less than the average level of $r \frac{kN}{n}$ tokens (e.g., the magnification ratio $r = 2.0$), we iteratively sample new batches from the training dataset to construct $\mathcal{D}_{\mathrm{expert}}$, until the routed tokens for these experts all surpass the average level of $r \frac{kN}{n}$ tokens. Finally, we use tokens from $\mathcal{D}_{\mathrm{base}} \cup \mathcal{D}_{\mathrm{expert}}$ to calibrate the quantization parameters for the weights of local experts.
# 4 Experiment
Models and Evaluations.
We perform comprehensive experiments across three state-of-the-art MoE language models: DeepSeek-MoE-16B [5], OLMoE-7B [20] and Mixtral-8x7B [13]. Beyond conventional perplexity evaluation on the WikiText2 [19] and C4 [22] benchmarks, we evaluate the proposed EAQuant on commonsense QA tasks via zero-shot accuracy across four challenging datasets: PIQA [1], ARC [4], BoolQ [3] and WinoGrande [23].
Baseline. We choose the state-of-the-art PTQ method DuQuant [16] as the baseline. The quantization calibration process employs 128 sequentially selected text segments from WikiText2, with floating-point accuracy results preserved as reference points for performance validation.
Implementation Details. In this work, all experiments are done on NVIDIA V100 GPUs with PyTorch [21]. We set the sequence length to 2048 for all evaluation tasks. We apply per-token activation quantization and per-channel weight quantization for LLMs. As an effective post-training quantization (PTQ) approach, our proposed EAQuant bypasses the need for parameter-sensitive fine-tuning. We adapt the official repository of DuQuant to support the three MoE models.
Table 3: Influence of different components in EAQuant with the W3A4 weight-activation quantization configuration. Notably, the router layer is quantized with W8A8.
# 4.1 Main Results
Comparison Results. We conducted comprehensive evaluations of quantization performance across multiple MoE architectures (OLMoE-7B, DeepSeek-MoE-16B, and Mixtral-8x7B) and diverse benchmarks. As demonstrated in Tables 1 and 2, EAQuant consistently outperforms DuQuant under both the standard W4A4 and the challenging W3A4 quantization configurations. For W4A4 quantization, EAQuant achieves 1.37%, 1.15%, and 1.15% average score improvements across the three models, with particularly strong gains in reasoning tasks (e.g., +2.52% on ARC-E for Mixtral-8x7B) and better perplexity alignment to full-precision baselines. In the challenging W3A4 regime, EAQuant's advantages become even more pronounced: it delivers 2.28%, 1.33%, and 2.09% average score improvements over DuQuant, effectively mitigating performance degradation. These results validate EAQuant's novel expert-aware smoothing aggregation and router alignment strategies, which preserve expert interaction dynamics even under extreme quantization constraints. By achieving state-of-the-art performance across both standard and extreme quantization scenarios, EAQuant sets a new benchmark for efficient MoE model compression.
# 4.2 Ablation Study
Module-wise Impact. To evaluate the contributions of individual components in EAQuant, we conduct a module-wise ablation study under the W3A4 quantization configuration (with the router layer fixed at W8A8). In general, we ablate four distinct operations within EAQuant: 1) only the expert-aware smoothing aggregation strategy (smooth_aggregate); 2) only the router logits distribution alignment (router_align); 3) only the expert-level calibration data balance (calib_balance); and 4) the full EAQuant approach. The results in Table 3 demonstrate that each component plays a distinct role in enhancing quantized model performance. Specifically, the smooth_aggregate operation significantly mitigates activation outliers by redistributing outlier magnitudes to the weight domain, leading to a reduction in OLMoE-7B's WikiText2 perplexity (PPL) from 10.77 to 10.47 and an improvement in its average score from 63.30 to 65.42. For DeepSeek-MoE-16B, it lowers C4 PPL from 11.41 to 11.17 while boosting the average score to 63.67. The router_align operation moderately improves performance by aligning router logits distributions across experts, increasing OLMoE-7B's average score to 64.20, while enhancing DeepSeek-MoE-16B's average score to 63.15.
The calib_balance operation prevents calibration bias across experts, slightly improving OLMoE-7B's average score to 63.65 and maintaining stability in DeepSeek-MoE-16B. Crucially, the full EAQuant approach, integrating all three components, achieves optimal results. This synergy confirms that smooth_aggregate addresses activation outliers, router_align refines expert routing consistency, and calib_balance ensures balanced expert-level calibration, collectively enabling effective MoE quantization.
Ablation Analysis of Smooth_aggregate Strategy. The ablation results in Table 4 demonstrate the critical role of the smooth_aggregate strategy in mitigating performance degradation during post-training quantization (PTQ) for MoE models. Compared to the baseline DuQuant method, which suffers significant drops in both PPL and accuracy, all three variants incorporating specialized aggregation strategies (maximum, expert_frequency and router_logits) effectively recover performance. Here, maximum denotes fusion via max-scaling across expert weights, expert_frequency uses a weighted sum with activation counts as weights, and router_logits employs a weighted sum with routing probabilities as weights. Notably, maximum achieves the strongest overall improvement (+1.20 avg. accuracy), suggesting its effectiveness in preserving critical expert signals during aggregation. Meanwhile, expert_frequency and router_logits demonstrate complementary strengths on specific tasks (e.g., ARC-E and WinoGrande), highlighting the importance of balancing expert utilization and leveraging router dynamics in MoE quantization. These results underscore the necessity of task-aware aggregation strategies to address expert activation irregularities introduced by quantization, while maintaining the OLMoE-7B model's core capabilities across diverse benchmarks.
Table 4: Ablation of smooth_aggregate strategy with the W4A4 weight-activation quantization configuration.
Notably, the router layer is quantized with W8A8.
Table 5: Ablation of router_align in our EAQuant with different weight-activation quantization configurations among 7 tasks on OLMoE-7B. Notably, $\mathrm{Rw^*a^*}$ represents the weight-activation quantization configuration of the router layer.
Ablation Analysis of Router Alignment. We further systematically evaluate the impact of the router_align mechanism within the EAQuant method under varying weight-activation quantization configurations (W3A4, W4A4) on the OLMoE-7B model. As shown in Table 5, removing router_align consistently degrades model performance across most tasks, particularly in low-bit quantization regimes. For instance, under W3A4 quantization with the router layer fixed at W3A4 (w3a4_Rw3a4), omitting router_align results in a 1.30-point drop in average accuracy (62.33 vs. 63.63) and notable declines in BoolQ (62.69 vs. 67.55) and WinoGrande (61.17 vs. 62.51). This underscores router_align's critical role in mitigating quantization-induced routing inconsistencies. When applying higher-precision quantization to the router layer (e.g., W8A8), router_align consistently maintains task performance. For w3a4_Rw8a8, disabling router_align reduces average accuracy by 0.9 points (63.30 vs. 64.20), with ARC-E accuracy dropping from 71.76 to 70.71. These results highlight that router_align effectively calibrates expert routing distributions, counteracting precision loss from aggressive quantization. The ablation results in Table 6 systematically evaluate the impact of the KL loss in the router_align module under W4A4 quantization (router layer: W8A8), revealing critical insights into expert routing optimization.
The kl_top0 configuration restricts the KL divergence calculation to the top-$k$ (specifically top-8) experts' logits, whereas kl_top100 incorporates all experts' logits in the computation. The relationship between the number of experts $m$ required to be calculated and the ratio $r$ can be expressed as: $m = k + \mathrm{int}((n - k) \cdot r)$. Compared to the DuQuant baseline (Avg. 66.69), incorporating the KL loss constrained to the top-8 experts (kl_top0) achieves the most significant performance gain (Avg. 67.97), particularly enhancing reasoning tasks such as ARC-E (+1.14) and commonsense reasoning in WinoGrande (+3.63), while slightly reducing perplexity on C4 (11.48 vs. 11.51) and WikiText2 (8.60 vs. 8.64). This demonstrates that focusing KL regularization on the top-$k$ experts ($k = 8$) effectively preserves critical routing signals without introducing computational overhead. Gradually expanding KL regularization to include lower-confidence experts (via the ratio parameter $r$) degrades performance (Avg. 67.47 → 67.19 for kl_top25 → kl_top100), suggesting that excessive regularization on less relevant experts introduces noise into the routing mechanism. Notably, kl_top0 achieves consistent improvements across all tasks, highlighting its superiority in balancing routing precision and model capacity under quantization constraints. These results underscore the importance of strategically limiting KL regularization to high-confidence experts for maintaining task performance in mixture-of-experts architectures.
Table 6: Ablation of the KL loss in router_align with the W4A4 weight-activation quantization configuration. Notably, the router layer is quantized with W8A8.
Table 7: Ablation of the magnification ratio $r$ in calib_balance with the W4A4 weight-activation quantization configuration.
Notably, the router layer is quantized with W8A8.
Ablation Analysis of Calibration Balance. We further investigate the impact of the magnification ratio $r$ in the calibration balance module. The ratio $r$ dynamically adjusts the minimum token threshold for expert activation calibration, ensuring underutilized experts receive sufficient data to balance their participation during quantization, thereby mitigating activation imbalance in MoE models. The experiments are performed on OLMoE-7B across five tasks, as shown in Table 7. When $r$ is set to 2.0, the average score across datasets is maximized. Notably, the baseline without calibration balance ($r = 0.0$) exhibits the lowest accuracy, underscoring the critical role of calibration in mitigating quantization-induced errors.
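The expert-balanced calibration sampling of Section 3.3 can be sketched as a simple oversampling loop. The `route` and `sample_batch` callbacks below are hypothetical stand-ins for the model's top-$k$ router and the training-data sampler; a real pipeline would batch forward passes rather than route token by token:

```python
def balance_calibration(base_tokens, route, sample_batch, n_experts, k,
                        r=2.0, max_iters=100):
    """Oversample until every expert sees at least r * k * N / n routed tokens.

    route(token)   -> list of the top-k expert indices the token is routed to
    sample_batch() -> a fresh batch of candidate tokens from the training set
    """
    counts = [0] * n_experts
    for tok in base_tokens:                 # profile token-expert assignments
        for e in route(tok):
            counts[e] += 1
    target = r * k * len(base_tokens) / n_experts
    extra = []                              # this plays the role of D_expert
    for _ in range(max_iters):
        if all(c >= target for c in counts):
            break
        for tok in sample_batch():
            # keep a token only if it reaches at least one under-covered expert
            if any(counts[e] < target for e in route(tok)):
                extra.append(tok)
                for e in route(tok):
                    counts[e] += 1
    return list(base_tokens) + extra        # D_base union D_expert
```

The `max_iters` cap is a practical guard: experts that almost never activate may not reach the threshold from any reasonable amount of extra data.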
Mixture-of-Experts (MoE) models have emerged as a cornerstone of large-scale deep learning by efficiently distributing computation and enhancing performance. However, their unique architecture, characterized by sparse expert activation and dynamic routing mechanisms, introduces inherent complexities that challenge conventional quantization techniques. Existing post-training quantization (PTQ) methods struggle to address activation outliers, router consistency, and sparse expert calibration, leading to significant performance degradation. To bridge this gap, we propose EAQuant, a novel PTQ framework tailored for MoE architectures. Our method systematically tackles these challenges through three key innovations: (1) expert-aware smoothing aggregation to suppress activation outliers and stabilize quantization, (2) router logits distribution alignment to preserve expert selection consistency post-quantization, and (3) expert-level calibration data balance to optimize sparsely activated experts. Extensive experiments across W4A4 and extreme W3A4 quantization configurations demonstrate that EAQuant significantly outperforms existing methods, achieving average score improvements of 1.15–2.28% across three diverse MoE architectures, with particularly pronounced gains in reasoning tasks and robust performance retention under aggressive quantization. By integrating these innovations, EAQuant establishes a new state of the art for high-precision, efficient MoE model compression. Our code is available at https://github.com/darren-fzq/EAQuant.
# 1 INTRODUCTION

The $k$ nearest neighbor (KNN) search over high-dimensional vectors has become a fundamental operator in modern data systems, powering applications including recommendation systems [40], data mining [6], face recognition [46], product search [46], and retrieval-augmented generation (RAG) for large language models [30]. In production environments, vector embeddings are usually accompanied by structured attributes (e.g., product categories, geolocations). For instance, a customer may search for items similar to a photo while specifying the brand and year in an e-commerce scenario. As illustrated in Fig. 1, such item search can be achieved by label-specific vector nearest neighbor search. However, exact KNN search in high-dimensional space suffers from the curse of dimensionality [24]: algorithms for exact solutions have extremely high computational cost in high dimensions. Therefore, researchers turn to the approximate $k$ nearest neighbor (AKNN) search, which greatly improves efficiency by trading off search accuracy. Consequently, approximate $k$ nearest neighbor search with specific keywords and attributes has attracted extensive attention recently [5, 38]. Specifically, the entries in the database $S$ consist of two parts: the vector embedding and the label set $L$. Given a query vector $q$ and a query label set $L_q$, the problem is to search for the approximate nearest neighbor of $q$ in $S$ whose label set contains the query label set, i.e., $L_q \subseteq L$.

Figure 1: Example of Label-Containing Nearest Neighbor Search

Example 1. Fig. 1 illustrates the containing-label search scenario in online shopping. Customers may search for an item with a given photo and an extra label requirement. In detail, $x_1, \ldots, x_4$ are the image embedding vectors of items, and $q$ is the query vector from the customer's photo. The customer requires the most similar item to $q$ with a specific brand and time. Then, $x_1$ and $x_3$ are filtered out, and $x_4$ becomes the nearest neighbor of $q$.

This problem involves a hybrid search combining label containment and vector similarity, aiming to achieve a better trade-off between accuracy and efficiency. To efficiently answer the label-hybrid query, existing approaches use filtered search strategies and graph-based indexes [10, 11, 18, 26, 32, 35, 42] to handle the label search scenario, due to their state-of-the-art AKNN search efficiency. In detail, a graph-based index treats vectors as nodes on a graph. Each node links to its proximity neighbors, forming a graph with navigation properties. The filtered search approaches [18] then check on the fly whether the database label set contains the query label set. If the base vector's label set does not contain the query label set, the base vector is considered filtered out. Two search strategies, PreFiltering and PostFiltering, can implement a filtered search without changing the existing graph index structure. The PreFiltering strategy filters out the nodes and their neighbor information during search, while PostFiltering keeps the filtered-out nodes' neighbors for navigation. However, the search performance of these two strategies is poor when the selectivity is low. The PreFiltering strategy leads to poor search accuracy because the entry node of the graph is highly likely to be disconnected from the nearest neighbor, while the PostFiltering strategy computes the distance of too many filtered-out points, reducing search efficiency.
Existing systems such as Milvus [41], ADB [46], VBASE [53], and CHASE [33] dynamically select different filtered search strategies based on cost estimation and query planning, but the flaws of the strategies themselves still limit search performance. Some heuristic approaches, such as NHQ [42] and HQANN [47], use a fusion distance to include the label as part of the distance computation. These methods require manual adjustment of the weights of the two parts, and their performance has a large gap compared to the state of the art.

Figure 2: (a) Number of vectors in each label group; (b) loading vectors by label combination; (c) number of vectors in each label-containing group.

Example 2. In Fig. 2, we consider a vector dataset with labels A, B, and C. The number of vectors with label set $L$ is denoted as {$L$}[number of vectors]. Each label group is connected to its minimum superset by an arrow. As shown in Fig. 2(a), there are 400 vectors with label A only, marked as A[400]. For a hybrid AKNN query that must contain label A, an index with the data in label groups $\{A, AB, AC, ABC\}$ is built, a total of 1000 vectors, as illustrated in Fig. 2(b). For all possible query label sets, all vector groups in Fig. 2(c) need to be indexed, requiring a total of 5400 entries, which is 2.75× the dataset cardinality.

State-of-the-Art. Recent methods ACORN [38] and UNG [5] are state-of-the-art algorithms for label-hybrid search. The ACORN method extends the PreFiltering strategy to deal with the connectivity issues at low selectivity. In detail, ACORN introduces an additional parameter $\gamma$ in graph construction to build a dense graph with $\gamma$ times the outgoing edges per node of a normal graph index.
A denser graph can improve connectivity and reduce the number of unreachable nodes during graph traversal, but it cannot fully guarantee the completeness of the result, since it completely ignores the base vector labels during index construction. The UNG [5] method utilizes the inclusion relation among base label sets to enable filtered nodes to be reached from the entry nodes of the graph. Specifically, UNG builds sub-graphs for the vector groups with the same label set. The sub-graph of each label set $L_i$ is linked by multiple cross-group edges to the sub-graph of its minimum superset $L_s$. This approach ensures that all base vectors whose label sets contain $L_i$ can be reached from the sub-graph entry points of group $L_i$, thus guaranteeing completeness. However, the above methods only apply to graph-based indexes and thus lack index flexibility. More importantly, these methods lack theoretical or practical search efficiency guarantees. Experiments show that the performance of the above algorithms seriously degrades when the label set size increases. Meanwhile, the above algorithms also lack methods that can fully utilize resources within limited space.

Challenge. Although the set of all possible labels may be large, the label set of a single entry may be sparse in practice because some attribute values are orthogonal to others. For example, the set of all possible product brands may be large, but a single product can only have one brand. Meanwhile, insignificant keywords can be integrated into the vector as features to avoid a single entry having a large number of labels. Even if the label set $L$ of a single entry is not large, the brute-force approach still requires indexing $2^{|L|}$ entries. As shown in Fig.
2, for an entry with label set $\{\mathrm{ABC}\}$, it needs to be inserted into the 8 groups $\{\varnothing, A, B, C, AB, AC, BC, ABC\}$, corresponding to the 8 possible containing query label sets, where $\varnothing$ is the case without considering labels. When the average label set size is 6–10, the entries that need to be indexed are 64×–1024× the original, resulting in extremely high construction time and space.

Table 1: We compare our methods with existing solutions. Search Performance represents the accuracy and efficiency of the algorithms and is verified experimentally. Efficiency Guarantee indicates theoretical time complexity analysis, a correctness guarantee, and predictable performance. Index Flexibility means the label-hybrid search approach is not restricted to a specific index. Space Utilization allows space to be traded more efficiently.

Our idea. In this paper, we consider constructing indexes with near-optimal search performance under limited space and time. Instead of building indexes for all possible label combinations as the brute-force approach does, we selectively build partial indexes and exploit the covering relationship of sets so that the corresponding hybrid queries share an index. We use the elastic factor to model the overlap coverage ratio of entries between label sets, which can also serve as a performance indicator: a higher elastic factor means more search efficiency in both theory and practice. For instance, in the right of Fig. 2, the entries in group $\{A\}$ contain the group $\{AB\}$ data with an overlap ratio of 0.5. Using the index built from the entries in group $\{A\}$, we can efficiently answer $\{AB\}$-containing AKNN queries with the filtered search strategy. The data entries in group $\{B\}$ also cover group $\{AB\}$ and have a higher coverage ratio (elastic factor).
Using the index of group $\{B\}$ to perform a filtered search to answer $\{AB\}$-containing queries is therefore more efficient. Accordingly, we use a greedy algorithm to select indexes that meet the search performance requirements. In addition, we use the monotonicity of the selection strategy to achieve the optimal index configuration under limited space and time. Compared with the current methods in Table 1, our approach has higher flexibility, theoretical performance guarantees, better practical search efficiency, and the ability to trade space for near-optimal search performance.

Contribution. We summarize our main contributions as follows:

- Problem Analysis (§3). We analyze the solutions to problems related to label-hybrid search. Current solutions have low flexibility, lack search efficiency guarantees, and have a large performance gap compared to the optimal approach. We provide a theoretical evaluation of search performance based on the elastic factor, which motivates our novel index-sharing approach.
- Novel Efficiency-oriented Index Selection Approach (§4). We formulate the fixed-efficiency index selection (EIS) problem, where a subset of candidate indexes is selected to guarantee search performance with optimized index space. We establish the NP-hardness of this problem and propose a greedy-based algorithm that delivers an approximate solution.
- Optimized Index Selection with Limited Resources (§5). We further investigate how to fully utilize constrained space to achieve optimal search efficiency. By leveraging monotonicity properties, we reduce the Fixed-Space Index Selection (SIS) problem to our Efficiency-oriented Index Selection (EIS) problem. This reduction enables us to derive a space-aware, efficiency-optimal indexing strategy based on solutions to EIS.
- Extensive Experiments (§6). We evaluate our algorithm on multiple real-world datasets with diverse label distributions.
Experimental results demonstrate that our approach achieves near-optimal retrieval efficiency while requiring only 1× additional space. Furthermore, our solution maintains robust performance across large-scale datasets and extensive label sets, delivering a 10×–800× speedup over state-of-the-art baselines.

# 2 PRELIMINARY

In this section, we define the label-hybrid approximate $k$ nearest neighbor search problem in §2.1. The involved indexing algorithms and filtered search strategies are introduced in §2.2.

# 2.1 Definition

We first consider the problem definition of label-hybrid search. An entry $(x_i, L_i)$ in a label-vector hybrid dataset $S$ consists of two parts: the vector embedding $x_i \in \mathbb{R}^d$ in $d$-dimensional space and the label set $L_i$. The label set $L_i$ consists of label elements $l_i \in L_i$ and may also be empty. We then consider the label-hybrid query $(q, L_q)$, where $q$ is the query vector and $L_q$ is the query label set. Given $L_q$, the entries in $S$ are filtered first and then searched. The filtered set $S(L_q)$ is the subset of entries in $S$ whose label set contains $L_q$, defined as $S(L_q) = \{(x_i, L_i) \in S \mid L_q \subseteq L_i\}$. With the notation above, the label-hybrid $k$ nearest neighbor search is formally defined:

Definition 2.1 (Label-Hybrid $k$ Nearest Neighbor Search). Given a label-hybrid dataset $S$ and a query tuple $(q, L_q)$, the label-hybrid $k$ nearest neighbor search problem requires returning a set $S' \subseteq S(L_q)$ of $k$ entries such that for any $(x_i, L_i) \in S'$ and any $(x_j, L_j) \in S(L_q) \setminus S'$, $\delta(x_i, q) \leq \delta(x_j, q)$.
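Definition 2.1 admits a direct (exact, brute-force) reference implementation: filter by label containment, then sort by distance. This is only a semantic sketch for small data, not an efficient algorithm; the function names are ours:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def label_hybrid_knn(S, q, Lq, k, dist=euclidean):
    """Exact label-hybrid k-NN per Definition 2.1: keep the entries
    whose label set contains Lq, then return the k nearest by dist."""
    filtered = [(x, L) for (x, L) in S if set(Lq) <= set(L)]
    filtered.sort(key=lambda entry: dist(entry[0], q))
    return filtered[:k]
```

The filtered set built on the first line is exactly $S(L_q)$ from the text; the sort then realizes the distance condition of the definition.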
However, the exact nearest neighbor search (NNS) suffers from the curse of dimensionality [24]: data structures [3, 4, 37] that work well in low-dimensional space perform poorly in high-dimensional space. Therefore, approximate nearest neighbor search has been extensively studied [1, 7, 10, 11, 13–17, 26, 28, 31, 35, 44, 45, 49, 51, 54] because it greatly improves search efficiency at the cost of some accuracy. Label-hybrid search faces the same problem, and the approximate solution is the main focus of this paper: the label-hybrid approximate $k$ nearest neighbor search (label-hybrid search for short) problem. To measure the quality of label-hybrid search, recall is defined as $\mathrm{recall} = |\hat{S} \cap S'| / |S'|$, where $S'$ is the ground truth and $\hat{S}$ is the result returned by the approximate solution. Label-hybrid search needs to achieve both high search efficiency and high result accuracy (recall).

Table 2: A Summary of Notations

Figure 3: The Graph Search Example

Remark. Label-hybrid search can be regarded as a special case of the filtered nearest neighbor search problem [18], which uses the label set inclusion relationship for filtering. This paper focuses on set inclusion relations; set intersection ($L_q \cap L_i \neq \emptyset$) and set equality ($L_q = L_i$) can be solved by simple transformations and partition indexing approaches. We analyze this in detail in §3.

# 2.2 AKNN Index and Filtered Search

The existing filtered search and label-hybrid search approaches still rely on existing AKNN search indexes. Among various vector indexes, graph-based indexes [11, 23, 26, 32, 35, 39] are widely used due to their state-of-the-art search efficiency.
Specifically, a graph-based index treats vectors in high-dimensional space as nodes on a graph, and each node is connected to its proximity vectors, making the graph navigable. The search process starts from the entry node of the graph and moves iteratively to nodes closer to the query. An essential aspect of the graph index lies in the edge occlusion strategy, which lets each step of the graph search get as close to the query as possible while preventing the graph from becoming dense. More importantly, the node degree of the graph can be bounded by a constant after edge occlusion [11, 25]. The graph index can support top-$k$ approximate nearest neighbor search via beam search, maintaining the current best top-$m$ results during the graph search, where $m$ is the beam size. Under ideal indexing conditions (slow index construction and queries drawn from the database) [11, 25], only one extra search step in the graph is required to find the $(k+1)$-th nearest neighbor, and the overall time complexity is logarithmic in the data cardinality [11, 25]. Next, we consider the filtered search strategies with the graph index. The PreFiltering and PostFiltering approaches are basic strategies that use graph indexes for a filtered search. Their advantage lies in the unchanged index structure: only the specific filtering conditions are applied during the graph traversal. The PreFiltering strategy removes a filtered-out node and its neighbors during graph search. As illustrated in Fig. 3, the nodes $x_4$ and $x_5$ are filtered out for two specific queries. When searching for query 1, the PreFiltering strategy removes the outgoing edges of $x_5$, making the nearest neighbor $x_6$ unreachable from the entry node $x_1$. The PostFiltering strategy retains the filtered nodes' information for routing without recording them in the result set. When searching for query 2 in Fig.
3, the PostFiltering algorithm visits nodes $x_1, x_2, x_3, x_4, x_7$ across the search steps. The outgoing edges of $x_4$ are still used for routing, and finally the filtered nearest neighbor $x_7$ is found. Since filtered-out nodes are not retained in the result set, the final result returned is $x_7$, the nearest neighbor satisfying the filter condition. For top-$k$ filtered search, the PostFiltering method only needs to keep searching for the $(k+1)$-th nearest neighbor until it accumulates $k$ unfiltered results, but the time complexity is affected by the selectivity of the query. If most of the points are filtered out, it may visit nearly $O(N)$ points to accumulate $k$ results. The PreFiltering method encounters the same challenge: with small selectivity, it is difficult to ensure that the nearest neighbor can be reached from the entry node, making its correctness and time complexity difficult to analyze.

# 2.3 Limitation and Challenge

Limitation. Using filtered search technology to answer the label-hybrid search problem performs poorly when selectivity is low. Although technologies such as query planning can select the best filter strategy based on the query workload, the filtered search algorithms still have a large performance gap compared to the optimal approach, which indexes only the filtered vectors. To mitigate the search performance loss of filtered search, specialized algorithms have been proposed for label-hybrid search. NHQ considers the label set as part of the distance and proposes a fusion distance for search and indexing. This method requires adjusting the weights of the label and vector parts manually. Although an empirical setting method is given, the overall search performance has a large gap with the state-of-the-art algorithms ACORN and UNG. In addition, the change in the distance computation also makes it difficult to analyze both the time complexity and the soundness of the algorithm.
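The PostFiltering traversal described in §2.2 can be sketched on a toy graph. This is a simplified greedy best-first search that stops once $k$ unfiltered results are accumulated, not the papers' exact beam-search implementations; all names are ours:

```python
import heapq

def post_filtering_search(graph, vecs, labels, entry, q, Lq, k, dist):
    """PostFiltering sketch: filtered-out nodes still route (their edges
    are expanded), but only nodes whose label set contains Lq enter the
    result set. Stops after accumulating k unfiltered results."""
    visited = {entry}
    frontier = [(dist(vecs[entry], q), entry)]   # min-heap by distance
    results = []                                  # (dist, node), unfiltered only
    while frontier:
        d, u = heapq.heappop(frontier)
        if set(Lq) <= set(labels[u]):
            results.append((d, u))
            if len(results) >= k:                 # accumulated k unfiltered hits
                break
        for v in graph[u]:                        # edges route even if u was filtered out
            if v not in visited:
                visited.add(v)
                heapq.heappush(frontier, (dist(vecs[v], q), v))
    return [u for _, u in sorted(results)[:k]]
```

On a path graph 0–1–2–3 where only node 2 carries label B, a query for label B routes through the non-matching nodes and still reaches node 2, mirroring how query 2 in Fig. 3 reaches $x_7$ through the filtered-out $x_4$.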
The current approach ACORN constructs a denser graph index based on a compressed neighbor list. For label-hybrid search, a PreFiltering search is performed on the subgraph composed of filtered points. A denser graph can practically avoid the node reachability problem caused by PreFiltering, but cannot fully guarantee it. UNG uses the containing relationship between label sets to solve this problem: the nodes of each label set are connected to the nodes of its minimum superset through cross-group edges. This ensures that the graph index searches only filtered data while maintaining connectivity. However, both ACORN and UNG rely on the navigational property of the graph and cannot be applied to other types of vector indexes. It is also difficult for UNG and ACORN to guarantee search efficiency in terms of time complexity due to their heuristic nature. Moreover, their search performance is also undesirable compared with the optimal approach. Therefore, we next analyze the challenges of label-hybrid search from the perspective of the optimal approach.

Figure 4: The inverted lists of the database entries (left), the label-equality groups (middle), and the label-containing groups (right).

Challenge. We first consider achieving optimal search performance via existing AKNN search indexing algorithms. A straightforward approach is to pre-build indexes with the data entries selected by the label-hybrid query set. The selectivity of different filter conditions varies. In the label-equality scenario, i.e., the filter condition is $L_i = L_q$, the optimal approach can be achieved by directly building an index for each group of entries with the same label set. As in the example on the left of Fig. 4, the entries are organized with the label set inverted list.
To build indexes for all possible label-equality queries, each group can build a corresponding index from its label set inverted list, and the overall cardinality remains unchanged. For instance, an index is built with vectors $x_2, x_5$ for the $L_q = \{A\}$ query. In the case of the most efficient graph index, the overall space complexity is $O(NM)$ because the node degrees are bounded by a constant $M$ via edge occlusion. Moreover, the label-overlap query with filter condition $L_i \cap L_q \neq \emptyset$ can be answered by merging the search results of $|L_q|$ label-containing queries; that is, the overlap query is converted into label-containing queries with a single label each. For example, the label-overlap query with $L_q = \mathrm{AB}$ can be converted into the merged results of the label-containing queries with $L_q = \mathrm{A}$ and $L_q = \mathrm{B}$. Therefore, the label-containing query is the focus of this paper. The label-containing query is more widely applicable, but its optimal index approach has a higher cost. As illustrated on the right side of Fig. 4, the entry $(x_6, \mathrm{AB})$ is inserted into three inverted lists for label sets A, B, AB, plus one extra for label-free AKNN search, since these are the possible query label sets covered by label set AB. Therefore, a single entry $(x_i, L_i)$ needs to be inserted $2^{|L_i|}$ times if all combinations of its labels are query label sets that need to be answered. For example, the entry $(x_1, \mathrm{ABC})$ is inserted into all possible label sets on the right of Fig. 4. Consequently, the optimal method entails substantial costs in both indexing time and storage space when the average size of the label set is considerable.
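The $2^{|L_i|}$ insertion cost can be made concrete by enumerating the groups an entry joins; the helper name is ours:

```python
from itertools import combinations

def containing_groups(label_set):
    """All 2^|L| query label sets covered by an entry's label set L,
    including the empty set (label-free search): every subset of L is a
    possible L_q with L_q <= L, so the entry joins every subset's group."""
    L = sorted(label_set)
    return [frozenset(c) for r in range(len(L) + 1)
            for c in combinations(L, r)]
```

For $(x_1, \mathrm{ABC})$ this yields the 8 groups $\{\varnothing, A, B, C, AB, AC, BC, ABC\}$; a label set of size 10 already yields 1024 groups, matching the blow-up described above.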
Although it is feasible to incorporate less significant labels into the vector features, and some label combinations may never appear in hybrid queries, an average label set size of 10 can still lead to an index space roughly 1000× larger than a label-free index, limiting its scalability and practical application.

Motivation. Given the previous analysis, the main obstacle to achieving the optimal approach is the index space and construction time cost, which is exponential in the average label set size. Therefore, the goal is to reduce the indexed data as much as possible while maintaining efficiency. We observe that the data in the label-containing groups of the optimal approach have an inclusion relationship. For example, on the right of Fig. 4, the data within the inverted list of label set B contains the data of AB and BC. Using the index of set B with a filtered search can answer the label-containing queries of AB and BC. Consequently, we consider selectively building indexes for label groups to enable shared indexes across label-containing queries, thus reducing index space and construction time. The following sections discuss the search efficiency achieved via index sharing.

Figure 5: The cover relationship of the entry sets; each set is connected to its minimum superset. Traversing this directed acyclic graph from a set finds all of its supersets.

# 3 ELASTIC INDEX SELECTION PROBLEM

The main objective of index selection is to achieve a better efficiency-space tradeoff. We can study the problem with two optimization objectives, namely efficiency and space, from different perspectives. From one perspective, the system needs to achieve a fixed efficiency; for example, the response time required by the designed system is within 50 ms. From the other perspective, we hope to achieve maximum efficiency under the condition of limited space. Specifically, given a machine with 32 GB of memory, the designed algorithm needs to fully use the resources.
First, we study the efficiency-oriented problem; we extend our algorithm with a space limitation in §5.

# 3.1 Index-Sharing Scheme

We first consider the efficiency and completeness of the index-sharing scheme. First, the shared index needs to contain all data possibly selected by the query to ensure the completeness of the search. For example, the entry group with label A contains the data of the group with label set AB (entries containing label set AB also contain A), as illustrated in Fig. 5. Therefore, a query can only share an index built on a superset label group of its entries. Next, we study the query efficiency of a shared index, which is affected by the selectivity of the query. In the extreme case, we only need to perform a filtered search on the index built from all entries. This is the strategy of answering label-hybrid queries with a filtered search, but it performs poorly when the query selectivity is low. The reason is the low overlap between the indexed data and the filtered data, which results in PreFiltering being unable to find the nearest neighbor and in the high computational cost of the PostFiltering strategy, as analyzed in §2.3. Compared with PreFiltering, the PostFiltering strategy can at least guarantee the search results, which motivates us to further analyze the relationship between its efficiency and selectivity. First, we define the elastic factor, which is the overlap ratio of the best index that a given label-hybrid query can share.

Figure 6: The test verifying the relation between elastic factor and query efficiency. We randomly generate labeled data and queries, build HNSW indexes for the original data, and divide the queries into four groups according to their elastic factors (selectivities): 0.1, 0.2, 0.5, and 1. The case of $e = 1$ can be regarded as the optimal approach.

Definition 3.1 (Elastic Factor).
Given a label-hybrid dataset $S$, a query $(q, L_q)$, and an index set $\mathbb{I} = \{I_1, \ldots, I_m\}$, where each index is built on a subset of $S$, the elastic factor of the index set is defined as:

$$ e ( S ( L _ { q } ) , \mathbb { I } ) = \operatorname* { m a x } _ { S ( L _ { q } ) \subseteq I _ { i } , \, I _ { i } \in \mathbb { I } } \left( { \frac { | S ( L _ { q } ) | } { | I _ { i } | } } \right) $$

Theoretically, any index that supports vector top-$k$ nearest neighbor search can perform incremental $k+1$ search to accumulate $k$ unfiltered entries, implementing a filtered search. Therefore, the performance is governed by the expected number of $k+1$ search steps needed to accumulate $k$ unfiltered entries. Using an index with all data in $S$ for a PostFiltering search may result in $\mathbb{E}(r) = O(N)$ steps when the query selectivity is very low, which is the reason for its low efficiency. However, if the query is answered with an elastic factor of constant $c$, the expected number of $k+1$ search steps can be bounded by $O(k/c)$. In the case of the graph index, only one extra step is needed to search for the $(k+1)$-th nearest neighbor under ideal conditions (slow build process, query in database). Thus, if every label-hybrid query has a shared index with a minimum elastic factor of $c$, the overall search time complexity remains unchanged, with only an additional factor of $k/c$. We summarize this in Lemma 3.2.

Lemma 3.2. Given a label-hybrid dataset $S$, a label-hybrid query $(q, L_q)$, and an index set $\mathbb{I}$, let $O(C)$ be the top-1 search time complexity of the graph index. If the elastic factor is a constant $e(S(L_q), \mathbb{I}) = c$, the expected time complexity of a filtered AKNN search with a graph index is $O(C + k/c)$.
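Definition 3.1 translates directly into code when indexes are represented as sets of entry ids: only indexes that fully contain the filtered set qualify, and the best overlap ratio among them is the elastic factor. A minimal sketch (names ours):

```python
def elastic_factor(SLq, indexes):
    """Elastic factor per Definition 3.1: the maximum overlap ratio
    |S(Lq)| / |I_i| over all indexes I_i that fully contain S(Lq).
    Returns 0.0 if no index contains the filtered set."""
    ratios = [len(SLq) / len(I) for I in indexes if SLq <= I]
    return max(ratios, default=0.0)
```

For the Fig. 2 example, a query with $|S(L_q)| = 500$ answered by the group-B index of 900 entries has elastic factor $500/900 \approx 0.56$, higher than the $500/1000 = 0.5$ offered by the group-A index, so sharing B's index is preferred.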
Since different graph indexes have different time complexities for searching the nearest neighbor, we use $C$ to represent it, which is logarithmic in the cardinality $N$. However, the above time complexity requires the graph index to adopt an $O(N^3)$ construction time complexity [11, 25], which makes it unsuitable for large-scale vector search. In practice, graph indexes use heuristic methods to speed up construction [11, 23, 32, 35]; for example, the edge occlusion of each node only considers some of its proximity neighbors, which yields approximate results. Moreover, the theory in [11] requires the query to be within the database, while in practice the query can be at an arbitrary location. Therefore, we conducted a practical efficiency evaluation of PostFiltering search with different elastic factors on the SIFT and GIST datasets. From the results in Fig. 6, the maximum elastic factor supported by the indexes for the label-hybrid queries directly affects search performance. A higher elastic factor means higher efficiency, corresponding to the lower time complexity analyzed previously. When $e(\cdot, \cdot) = 1$, the search efficiency is optimal, equivalent to building an index for exactly the filtered data. Besides, the search efficiency degrades sublinearly in the elastic factor: when the elastic factor is 1/10 of the optimal, the search efficiency is only reduced by 2× at 98% recall and $k = 10$. This is because the time complexity of a filtered search with a smaller elastic factor index only increases by a factor relative to top-$k$; the time complexity of searching the nearest neighbor remains unchanged, still logarithmic in the cardinality $N$. The performance gap becomes significant when $k$ is larger: the efficiency is reduced by 3× when $k = 100$ at 98% recall.
However, the top-$k$ setting in AKNN search is not large in many applications and is generally much smaller than $N$. Therefore, if the elastic factor of the query index is constant, the overall time complexity can still be bounded. Thus far, we have analyzed the impact of the elastic factor on search efficiency. Next, we consider how to achieve a higher elastic factor at a lower cost.

# 3.2 Problem Definition

We first define the fixed-efficiency index selection (EIS) problem. Unfortunately, when using graph indexes, the query cost is not linear in the cardinality of the data. Instead, we use the elastic factor to model query efficiency; specifically, we map the query cost to the elastic factor of the query. We can then select indexes so that the elastic factor of every query is at least a constant bound $c$. This makes the top-$k$ search algorithm scale only by $k/c$ in theory, and we can control the bound $c$ to ensure query efficiency. Formally, we define the EIS problem as follows.

Definition 3.3 (Fixed-Efficiency Index Selection (EIS-decision)).

Input: The label-hybrid dataset $S$; the label-hybrid queries whose label sets are $\mathcal{L}_q = \{L_1, \cdots, L_n\}$; the index collection $\mathbb{I} = \{I_1, \cdots, I_n\}$, which can be viewed as the data selected by each query, where $I_i = S(L_i)$; the cost of each index, denoted by $|I_i|$; and a non-negative real number $\tau$.

Output: A subset $\mathbb{I}'$ of $\mathbb{I}$ such that the elastic factor $e(S(L_i), \mathbb{I}')$ is greater than $c$ for every $L_i \in \mathcal{L}_q$ and the total cost is less than $\tau$.

When only the label part of the query is considered, the number of queries is $2^{|\mathcal{L}_q|}$ in the worst case. In practice, this number is smaller due to the orthogonality of some labels.
In the problem definition, we only build indexes for the vectors selected by the given query label sets and do not consider label combinations outside them. In this paper, we assume the top index always exists; that is, the query workload always includes a label-free nearest neighbor search. Therefore, in the problem definition, we exclude the cost of the top index to simplify the problem ($|I_{top}| = 0$). Next, the EIS problem aims to determine whether there is a solution with cost less than $\tau$ and elastic factor at least $c$. The user-specified parameter $c$ affects the index sharing relationship as illustrated in Fig. 7. In subsequent sections, we use its monotonic relationship with index cost to achieve optimization under fixed space.

Figure 7: The cover relationship of the query index.

Example 3. Fig. 7 shows the index inclusion relationship under different elastic factor constraints $e = 0.5$ or $e = 0.3$. Under the setting of $e = 0.5$, the top index can answer the label containing query of both $L_q = A$ and $L_q = \emptyset$ because the overlap between group $A$ and top is greater than or equal to 0.5. The top index is not available for the $L_q = B$ query because the overlap is only 0.45. When $e = 0.3$, the top index can answer the queries of $L_q = B$ and $L_q = C$. Note that the top index cannot answer the query of $L_q = AB$ because its overlap is only 0.25, which is only feasible under a setting of $e \leq 0.25$. When $e = 0$, the top index can handle all possible queries.

Remark. In this paper, the cost of each index is the space it requires. When using a graph index, we can simplify the cost of an index to the number of vectors in it because the node degree of the graph index can be bounded by a constant [11, 25].
Each node on the graph has $M$ (a user-specified parameter) edges for fast memory access in practice [34]. In other words, we only need to multiply the total cost by $M$ to obtain the space usage of the index set. # 3.3 Problem Hardness. We prove the NP-hardness of the EIS problem in Theorem 3.4. This analysis illustrates the challenge of solving the EIS problem and motivates us to propose a greedy algorithm for use in practice. Theorem 3.4. The EIS problem is NP-hard. Proof. To prove the NP-hardness of EIS, we first introduce an NP-complete problem called 3-Set Cover (3-SC) [8, 29]. Input A universal set $\mathbb{U}$ containing $p$ elements, denoted as $\mathbb{U} = \{u_1, u_2, \cdots, u_p\}$; a set $\mathbb{S}$ containing $l$ subsets, denoted as $\mathbb{S} = \{s_1, s_2, \cdots, s_l\}$, where each $s_i \in \mathbb{S}$ contains up to 3 elements from $\mathbb{U}$; and a non-negative number $k$. Output A non-empty subset of $\mathbb{S}$, denoted as $\mathbb{A}$, whose union is still $\mathbb{U}$ and whose size is at most $k$. Given a 3-SC instance defined above, we generate an instance of the EIS-decision problem in Fig. 8, where the arrows denote the cover relationship. Solving this EIS-decision problem is equivalent to solving the 3-SC instance. As illustrated in Fig. 8 (a), each element $s_i$ from $\mathbb{S}$ and $u_i$ from $\mathbb{U}$ in the 3-SC instance is mapped to an index in $\mathbb{I}$, except the top and bottom ones. The top index is built with all entries, and the bottom index is built with data entries that contain all possible labels. In this case, the query label set is limited to label combinations that appear in the base $S$. From Fig. 8 (b), the label set of each $s_i$ index is set to a single element $S_i$. If $u_i \in s_i$, we add $S_i$ to the label set of the $u_i$ index, whose initial label set is $U_i$.
This ensures the $s_i$ index can cover $u_i$. Additionally, we add a duplicate index $u_i'$ that has the same label set as $u_i$ except that the initial label $U_i$ is replaced by $U_i'$. We add the duplicate index to ensure that the cost of using $s_i$ to cover $u_i$ is lower than the cost of selecting $u_i$ alone. Next, we design the cost (size) of each index and the elastic factor bound $c$. The cost of $u_i$ and $u_i'$ is 11, since each such index contains one entry with the corresponding label set and 10 entries from the bottom of Fig. 8. For example, the query index $u_1$ contains 1 entry with $\{S_1 S_2 U_1\}$ and 10 entries with all labels for the containing query label set $\{S_1 S_2 U_1\}$. The cost of each $s_i$ index is set to 20, where up to 3 $u_i$ are covered by $s_i$, and the number of entries with label $S_i$ is set to $10 - 2 \times |s_i|$. For instance, the $s_2$ index contains 4 entries with label $S_2$, 6 entries from $u_1, u_2, u_3, u_1', u_2', u_3'$, and 10 from the bottom. The cost of each $s_i$ is kept equal to 20 by adjusting the number of $S_i$-label entries. Then, the elastic factor can be set to $20/N < c < 0.5$ such that the top index can cover the $s_i$ indexes but cannot cover $u_i$, and the $s_i$ index can cover $u_i$ if $u_i \in s_i$. Next, we analyze the mapping between solutions of 3-SC and EIS-decision.

Figure 8: The Proof of NP-hardness.

Solution to 3-SC $\Rightarrow$ Solution to EIS-decision. For EIS, the top index can cover every index $s_i$, and its cost is excluded as discussed before.
Since each $s_i$ has the same cost (cost $= 20$), we set $k = \lfloor \tau / 20 \rfloor$. Assuming 3-SC can be solved, we can determine whether a subset $\mathbb{A}$ of $\mathbb{S}$ with at most $k$ elements has union equal to the universal set $\mathbb{U}$. If the solution $\mathbb{A}$ for 3-SC exists, we select the $s_i \in \mathbb{A}$ to cover all $u_i$ in Fig. 8, and the cost is $k \times 20 \leq \tau$. Any selected index can cover the bottom index. Then, all $L_i \in \mathcal{L}_q$ are covered with an elastic factor of at least $c$. Solution to EIS-decision $\Rightarrow$ Solution to 3-SC. For a 3-SC instance with parameter $k$, we again set $\tau = k \times 20$. If the solution $\mathbb{I}'$ of EIS-decision exists, we can get a subset of $\mathbb{I}$ that covers all $u_i$ indexes. As discussed, the cost of index $u_i$ is greater than that of any $s_i$ containing $u_i$, and any $s_i$ can cover the bottom index. We transfer each $u_i$ in the solution $\mathbb{I}'$ to any $s_i$ that contains $u_i$; this operation only reduces the cost without affecting the correctness of the solution. Then we obtain the solution $\mathbb{A}$ for 3-SC by selecting the $s_i$ in $\mathbb{I}'$. Since the cost of each element in $\mathbb{I}'$ is 20 and the threshold $\tau$ is $k \times 20$, the number of elements can only be less than or equal to $k$. This indicates that $\mathbb{A}$ is a solution to 3-SC. Since EIS-decision is NP-complete, the optimization version of EIS is NP-hard. □ The NP-hardness of the EIS problem makes optimal index selection intractable. Consequently, we look for heuristic algorithms that give an approximate solution. A straightforward approach is a greedy algorithm that selects indexes sequentially, choosing the best index each time. # 4 PROBLEM SOLUTION In §4.1, we present a greedy algorithm to solve the EIS problem, while in §4.2 we study its time cost and the solution for large-scale data. # 4.1 The Greedy Algorithm. Consider a label-hybrid index set where each index is associated with a corresponding space cost. We first need to define the benefit of each index selection and then select the index with the largest benefit at each iteration. Note that the index built on all data (the top index) always exists; this is because we need to ensure that each query has at least one index that can answer it, even if not efficiently. In addition, the index set must support AKNN searches without labels, and no index other than the top index itself can cover it. Therefore, our index selection set $\mathbb{I}'$ initially contains the top index. Then, we define the benefit of selecting one index from the set of all possible indexes $\mathbb{I}$. Suppose we have selected the set of indexes $\mathbb{I}'$; we define the benefit $B(I', \mathbb{I}')$ of selecting index $I'$ under the current state $\mathbb{I}'$ as follows. Definition 4.1. Let $\mathbb{C} \subseteq \mathbb{I}$ be the index set covered by the selected index set $\mathbb{I}'$ under elastic factor bound $c$, where: $$ \mathbb{C} = \{ I_i \in \mathbb{I} \mid \exists I_j \in \mathbb{I}' \ s.t. \ (I_i \subseteq I_j) \wedge (|I_i|/|I_j| \geq c) \}. $$ Let $\mathbb{C}'$ denote the set of indexes newly covered by the candidate index $I'$, where: $$ \mathbb{C}' = \{ I_i \in \mathbb{I} \backslash \mathbb{C} \mid (I_i \subseteq I') \wedge (|I_i|/|I'| \geq c) \}.
$$ The benefit $B(I', \mathbb{I}')$ is defined as: $$ B(I', \mathbb{I}') = \sum_{I_i \in \mathbb{C}'} |I_i| / |I'| $$ When only the number of indexes is considered, the benefit of selecting an index would simply be the number of indexes it can cover. Here, instead, the cost of an index is the number of vectors in it, so the benefit is the total cost of the newly covered indexes per unit cost of the selected index. Next, we summarize the greedy strategy in Algorithm 1: we select the index with the largest per-unit benefit each time. We provide a dataset example in Fig. 9. Fig. 9 (a) illustrates the vector label sets organized in groups, where 3 vectors have no label and 3 vectors have the label set {ABC}. Fig. 9 (b) shows the number of vectors considered by each possible label-containing query.

Figure 9: (a) Num. of Vectors in Each Group; (b) Num. of Vectors in Label-Containing Group; (c) Set Cover Relationship under Elastic Factor = 0.3; (d) Greedy Result; (e) Optimal Result.

Table 3: The Benefits per-Unit of Possible Choices.

For example, in the case of the label-containing query $L_q = \{A\}$, the vectors with labels $\{A\}$, $\{AB\}$, $\{AC\}$, and $\{ABC\}$ all contain label A, totaling 10 vectors.
Building index $I_2$ on these 10 vectors can not only answer the label-containing query $L_q = \{A\}$ but also answer queries such as $L_q = \{AB\}$ or $\{AC\}$ when using filter search, because it contains all the data corresponding to those queries. Fig. 9 (c) shows the index sharing when the elastic factor is 0.3. The index $I_2$ can answer queries with $L_q = \{ABC\}$, since its overlap ratio is $3/10$, which is equal to 0.3. The $I_1$ index cannot answer $L_q = \{ABC\}$ because its overlap ratio $3/17$ is less than 0.3. With the dataset given in Fig. 9, we next consider the benefit of index selection. We list the benefit per unit cost of each index choice in Table 3. Since our index selection must include the top index $I_1$, we always select $I_1$ in the first round, regardless of the unit benefit of each index. Selecting $I_1$ covers the query vectors corresponding to the indexes $I_2, I_3, I_4, I_6$ and itself, covering a total of $17 + 10 + 7 + 9 + 6 = 49$ vectors at a cost of 17, that is, a unit benefit of $49/17 = 2.88$. After selecting $I_1$, the benefit of each candidate index changes. For example, in the initial stage, $I_2$ can cover the indexes $I_2, I_5, I_6, I_8$, a total of 23 entries at a cost of 10, for a unit benefit of $23/10 = 2.3$. However, in the second round, $I_1$ has already covered $I_2, I_6$, and the index $I_2$ can only cover $I_5, I_8$, using a cost of 10 to cover 7 vectors, for a unit benefit of 0.7. After selecting $I_1$, our greedy algorithm selects the index with the largest updated benefit, as shown in Fig. 9 (d). As a result, index $I_5$ is selected in the second round.
So far, we have selected indexes $I_1, I_5$ and covered all indexes except $I_7$. In the third round, we select $I_7$ to cover the remaining possible label-containing query at a cost of 5. The total cost of the greedy algorithm is $17 + 4 + 5 = 26$, but it is not optimal. As illustrated in Fig. 9 (e), the optimal solution selects the $I_3$ index in the second round. The index $I_3$ can only cover $I_5, I_7, I_8$, with a unit benefit of 1.71, which is less than that of $I_5$ because the data in $I_3$ is already covered by $I_1$. However, all queries are covered after selecting $I_3$, and the total cost is $17 + 7 = 24$, better than the greedy approach. # 4.2 Time Cost Analysis. The time spent on index selection also contributes to the overall index construction time. Empirically, index selection accounts for only a small fraction of the overall index build time with power-law-distributed label data. However, in theory, when the query label set is not limited to the labels that appear in the base, each data entry needs to be inserted into $2^{|L_i|}$ indexes, so the time complexity of determining the sizes of these indexes is $O(\sum 2^{|L_i|})$. This has little impact when the average label set size is small, but it affects scalability. To handle large-scale data, we can use an idea similar to [21] to obtain the approximate size of each index, such as simple sampling or more advanced estimation models [22]. With all index sizes determined, the benefit of each selection can be easily obtained. We use a heap to maintain the selection of maximum benefit at each step. Selecting an index affects the benefits of up to $2^{|L_i|}$ subsequent indexes. When popping, we check whether the heap's top benefit is stale; if so, we recompute it and reinsert it into the heap.
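The greedy selection can be sketched on the Fig. 9 example as follows. This is a simplified stand-in for Algorithm 1, not the paper's C++ implementation: indexes are keyed by their query label sets, sizes follow Fig. 9 (b), and index $J$ covers index $I$ when $I$'s label set contains $J$'s (so $I$'s data lies inside $J$'s) and the size ratio meets the bound $c$.

```python
# Sizes of the candidate indexes I1..I8 from Fig. 9 (b), keyed by query label set.
SIZES = {"": 17, "A": 10, "B": 7, "C": 9, "AB": 4, "AC": 6, "BC": 5, "ABC": 3}

def covers(j, i, c):
    # Index j can answer index i's query: i's label set contains j's,
    # and the overlap ratio meets the elastic factor bound c.
    return set(j) <= set(i) and SIZES[i] / SIZES[j] >= c

def greedy_eis(c):
    selected = [""]                                   # the top index is mandatory
    covered = {i for i in SIZES if covers("", i, c)}
    while len(covered) < len(SIZES):
        def unit_benefit(j):
            gain = sum(SIZES[i] for i in SIZES
                       if i not in covered and covers(j, i, c))
            return gain / SIZES[j]
        best = max((j for j in SIZES if j not in selected), key=unit_benefit)
        selected.append(best)
        covered |= {i for i in SIZES if covers(best, i, c)}
    return selected, sum(SIZES[j] for j in selected)
```

With $c = 0.3$ this reproduces the run described above: it selects the label sets "", "AB", and "BC" ($I_1, I_5, I_7$) for a total cost of 26, matching Fig. 9 (d), while the optimal choice "" and "B" ($I_1, I_3$) covers everything at cost 24, as in Fig. 9 (e).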
Let $N'$ be the number of indexes and $|L_{max}|$ be the maximum label set size. The time complexity is $O(N' 2^{|L_{max}|} \log N')$, where $2^{|L_{max}|} \leq N'$. # 5 EXTEND WITH LIMITED SPACE In the previous section, we studied the efficiency-oriented index selection problem. A natural question is how to select indexes to achieve maximum efficiency under a space limitation. Since we use the elastic factor to model search efficiency, we transform the problem into maximizing the elastic factor bound under a given space limitation. Formally, we aim to select a subset $\mathbb{I}'$ from the set of all possible indexes $\mathbb{I}$ such that the elastic factor bound $e(S(L_q), \mathbb{I}')$ is maximized for the given label-hybrid queries $L_q \in \mathcal{L}$ and the cost of the selected index set $\mathbb{I}'$ is less than a threshold $\tau$. Next, we consider the hardness of the Fixed Space Index Selection (SIS) problem. We observe that the elastic factor bound $c$ has a monotonic property: an index selection $\mathbb{I}'$ that achieves a 0.5 elastic factor on a given query workload $\mathcal{L}_q$ also satisfies any elastic factor bound less than 0.5. This property allows us to reduce the SIS problem, via binary search in polynomial time, to a decision problem that determines whether a solution $\mathbb{I}'$ exists whose elastic factor is at least $c$ and whose cost is below the threshold $\tau$. This is exactly the decision version of the fixed efficiency index selection (EIS) problem. Since 3-SC can be reduced to EIS-decision (Theorem 3.4), this decision problem is NP-complete, inheriting the NP-completeness of 3-SC [8].
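The binary search over the monotone elastic factor bound can be sketched generically. In the paper's setting, the feasibility predicate would run the greedy EIS selection and compare its cost against the space budget $\tau$; here a stand-in threshold predicate illustrates the search (all names hypothetical).

```python
def max_feasible_bound(feasible, lo=0.0, hi=1.0, iters=30):
    """Binary-search the largest bound c in [lo, hi] with feasible(c) True,
    assuming feasibility is monotone (holds everywhere below some threshold)."""
    best = lo
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(mid):
            best, lo = mid, mid     # feasible: try a tighter (larger) bound
        else:
            hi = mid
    return best
```

Each iteration calls the predicate once, so only logarithmically many greedy runs are needed, consistent with the $O(\log)$ call count stated below.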
Despite the hardness of the SIS problem, we can reuse the greedy selection method for EIS to solve it. We binary-search for the best elastic factor bound $c$, updating the result whenever the greedy solution cost falls below the threshold $\tau$ with a better elastic factor bound. This requires only $O(\log)$ calls to the greedy algorithm. In practice, binary search with the greedy method takes up less than $1\%$ of the total construction time, usually 1-2 seconds even for large $\mathcal{L}$.

Table 4: The Statistics of Datasets

# 6 EXPERIMENT
# 6.1 Experiment Settings
Datasets. We utilize six public vector datasets that are widely used in benchmarking and evaluation (SIFT, GIST). Some of them are generated by state-of-the-art AI embedding models (MSMARCO, OpenAI-1536, OpenAI-3072). For the label part, except for the PAPER dataset, which comes with real labels, we use the method of previous works [5, 18] to generate label data with different distributions for the other datasets. Note that some datasets have a variable number of vector entries; we randomly sample 1 million vectors as the base vectors if the dataset cardinality exceeds 1M. Moreover, we also sampled 100M entries from the deep1B dataset to verify the scalability of our algorithms. Metrics. The overall evaluation involves both search efficiency and accuracy. For search efficiency, we use queries per second (Qps), which indicates the number of queries processed by the algorithm per second, as it is the metric most commonly used in benchmarks. For search accuracy, we use recall as defined in §2.1 to align with the baselines [5, 38]. All metrics in the experiments are reported as averages. Label Distribution. In real scenarios, keywords, tags, and labels often approximately follow a power law distribution.
Previous work adopts a power law distribution, the Zipf distribution, to generate labels [5, 18]. We use its original code to generate label data for the base vectors with the number of possible labels $|\mathcal{L}| = 8$, 12, 24, and 32. We also consider real datasets and other label distributions, such as Uniform and Poisson, in subsequent experiments. Algorithms. The algorithms compared in our study are as follows: • ELI-0.2: Our proposed method with fixed search efficiency; the elastic factor bound is set to 0.2. • ELI-2.0: Our proposed method with fixed index space; it uses at most double the original index space. • UNG: Unified navigating graph approach based on a label navigation graph [5]. • ACORN-1: ANN constraint-optimized retrieval network with low construction overhead [38]. • ACORN-$\gamma$: ANN constraint-optimized retrieval network for high-efficiency search [38].

Table 5: Summary of Index Time (s)

Table 6: Summary of Index Size (MB)

Implementation Details. All code was implemented in C++ and compiled using GCC 9.4.0 with -Ofast optimization. The experiments were conducted on a workstation with Intel(R) Xeon(R) Platinum 8352V CPUs @ 2.10GHz and 512GB of memory. We utilized multi-threading (144 threads) for index construction and a single thread for search evaluation. We use HNSW as the modular index with parameter $M = 16$ and a fixed efConstruction. For ELI-0.2 ($< 1.0$), we use the index selection method to achieve a fixed elastic factor of 0.2. For ELI-2.0 ($> 1.0$), we use the fixed-space method with at most double the original index space to achieve the maximum efficiency. For the other baselines, ACORN and UNG, we use the default parameters from their papers, i.e., $\alpha = 1.2$, $L = 100$ for UNG and $\gamma = 12$ for ACORN-$\gamma$. # 6.2 Experiment Results Exp-1: Query Efficiency Performance.
We start our evaluation with the query efficiency of the different algorithms. We compare UNG, ACORN-1, ACORN-$\gamma$, and our methods ELI-0.2 and ELI-2.0 under different label set size settings for label-containing queries in Fig. 10 (top-right is better). From the results in Fig. 10, our algorithm achieves the best search efficiency and accuracy tradeoff. At $95\%$ recall and $|\mathcal{L}| = 8$, our algorithm achieves a 4x improvement in search efficiency over the state-of-the-art algorithm UNG on the SIFT dataset and a 10x performance improvement on the MSMARCO dataset. Moreover, our algorithm performs very stably under different $|\mathcal{L}|$. In contrast, the retrieval efficiency of the UNG algorithm decreases significantly as $|\mathcal{L}|$ increases, which is also verified in its paper. Our algorithm uses the elastic factor to achieve stable performance and thus reaches nearly 12x the search speed of UNG when $|\mathcal{L}| = 32$ on the OpenAI dataset. The search accuracy of the ACORN algorithm hits a bottleneck under larger $|\mathcal{L}|$ settings, mainly due to the shortcomings of the PreFiltering strategy it adopts. Note that there is a partial performance gap between ELI-0.2 and ELI-2.0. This is because, under the Zipf distribution, the elastic factor that ELI-2.0 can achieve with only double the space is usually less than 0.2 when $|\mathcal{L}|$ is large. Most importantly, as $|\mathcal{L}|$ increases, the search accuracy of ACORN decreases and the search efficiency of UNG drops, both of which have been verified by experiments in their respective papers. In contrast, our fixed efficiency approach ELI-0.2 achieves higher efficiency as $|\mathcal{L}|$ increases.
This is because a larger $|\mathcal{L}|$ leads to a lower average selectivity of the queries, while our algorithm can guarantee selectivity-based search efficiency in theory. Next, we analyze the efficiency improvement brought by more index space.

Figure 10: The Tradeoff Between Accuracy and Efficiency

Exp-2: Index Time and Space. We compare the index time with the baseline methods in Table 5. Since the space of the partial indexes is affected by $|\mathcal{L}|$ and increases monotonically, we report the index time and space results for $|\mathcal{L}| = 32$, matching the efficiency comparison in Fig. 10. We obtained the following results. First, the indexing time of our ELI method is much lower than that of UNG and ACORN-$\gamma$. Specifically, UNG requires nearly 20x our index time on the SIFT dataset and 10x on the PAPER dataset, while ACORN-$\gamma$ demands on average 2x the index time of our method. Although ACORN-1 is efficient in constructing indexes, the consequence is worse search efficiency due to its complete lack of label information in the indexing process. Next, we examine the index space. As illustrated in Table 6, our methods use a comparable index space while achieving the best search performance. In particular, both ACORN and ELI-2.0 use twice the HNSW space of the original vectors, while ELI-2.0 has higher search performance, which shows that our algorithm achieves a better space-efficiency tradeoff. The UNG algorithm saves more space, but the index only accounts for a small proportion of the vector dataset size in high-dimensional cases. For instance, the index size of all methods only occupies $1/10$ of the total space on the OpenAI-1536 dataset and $1/20$ on the OpenAI-3072 dataset. Exp-3: Test of Varying Query Threads. Parallelism affects search performance. We adjust the number of threads from 4 to 32 with $|\mathcal{L}| = 12$ in Fig. 11.
From the results in Fig. 11, our method achieves the best search performance under various thread settings, which indicates its robustness in multi-threaded search scenarios. Moreover, our separated architecture requires only one sub-index to be invoked for a single query, making it more suitable for a distributed system. Exp-4: Test of Scalability. We also evaluate the scalability of the various approaches. For this purpose, we perform experiments on the largest dataset, DEEP100M, adjusting the recall rate to compare the different methods. As illustrated in Fig. 10, ACORN exhibits 4x worse performance than our proposed ELI-0.2 at a $90\%$ recall rate and 3x worse than ELI-2.0. Additionally, ELI maintains consistent performance on large-scale datasets, while ACORN fails to reach the target $90\%$ recall when $|\mathcal{L}|$ is large. Our approach also provides notable benefits in space efficiency and indexing time for large-scale datasets. For instance, ELI-2.0 uses less index space and time to achieve better search efficiency than ACORN. UNG crashed with a core dump during index building and thus failed to build the index. Exp-5: Test of Varying Label Distribution. The label distribution affects the index selection strategy. We evaluate our methods and the baselines under various distribution settings, such as Uniform, Poisson, and real-world data. We use the original code of UNG to generate the synthetic data, and the PAPER dataset provides real-world vector labels for label-containing search. From the results in Fig. 12, we draw the following conclusions. (1) Our approach performs stably across different distribution settings. The search accuracy of ACORN varies when the distribution changes: under the Uniform and Multinormal distributions, ACORN fails to reach $80\%$ recall, while UNG and our method ELI still perform well. (2) Our methods are highly competitive across a variety of distributions.
Our ELI algorithm has performance comparable to UNG at high recall for all four distributions, and 3x-5x better search efficiency under the Uniform, Poisson, and Multinormal distributions. Exp-6: Test of Varying Label Set Size. We test the search efficiency and index build time for a large base label set $|\mathcal{L}|$ under the Zipf distribution. We set $|\mathcal{L}| = 64, 128, 256, 512$ to test UNG and our method ELI-0.2. The ACORN method fails to achieve $80\%$ recall in all label set size settings and is therefore not shown in Fig. 13. From the results in Fig. 13, our method achieves nearly an 800x search efficiency speedup at $99\%$ recall when $|\mathcal{L}| = 512$. This is because the fixed-efficiency feature allows ELI-0.2 to achieve higher performance under low selectivity. UNG suffers from large $|\mathcal{L}|$: its search efficiency degrades as $|\mathcal{L}|$ grows, which has also been verified in its paper. Moreover, the index build time of ELI-0.2 is much lower than UNG's. For example, the index time of ELI-0.2 with $|\mathcal{L}| = 512$ is 155 seconds, whereas UNG takes 2091 seconds, roughly 13x that of ELI-0.2. Exp-7: Compare to the Optimal Approach. The optimal approach builds indexes for all possible label-hybrid queries. We compare the search efficiency in Fig. 14 under the Zipf distribution when $|\mathcal{L}| = 32$. From the results in Fig. 14, ELI-0.5 and ELI-0.2 achieve near-optimal search efficiency at high recall. ELI-2.0 has 3x worse Qps than the optimal approach but saves more space. The index spaces of ELI-2.0, ELI-0.2, ELI-0.5, and ELI-opt are 383MB, 634MB, 812MB, and 1280MB, respectively. Excluding the 192MB top index that must always be included, ELI-2.0 uses only $1/6$ of the optimal approach's space to achieve acceptable search performance.
Moreover, ELI-0.5 uses only slightly more than half the space to achieve almost equivalent search performance, making our methods more advantageous in efficiency-oriented scenarios. # 7 RELATED WORK Vector Similarity Search. Vector similarity search has been widely studied. Most research focuses on approximate search [16], as exact search has a high cost in high-dimensional space [24]. Among the variants, $c$-approximate nearest neighbor search returns results with an approximation ratio of $c$, which can be solved by locality-sensitive hashing (LSH) in sublinear time [7, 16, 17]. For the AKNN search problem in this paper, graph-based vector indexes [10, 11, 20, 26, 32, 35, 39, 44] are the current state-of-the-art solution and are much more efficient than LSH according to benchmarks [31]. In addition, inverted list-based indexes [2, 27] and quantization-based methods [12, 14, 15, 19, 27, 28, 51] are also widely used in different AKNN search scenarios. For example, inverted indexes use less space, and quantization methods can speed up the distance computation of AKNN search [43, 49]. We use the most widely used HNSW algorithm [34] as the modular index in this paper; other well-optimized AKNN search libraries [26, 55] can also replace it.

Figure 11: Varying Number of Query Threads

Figure 12: Varying Label Distributions

Figure 13: Varying the Label Set Size $|\mathcal{L}|$

Figure 14: Compare to the Optimal Approach

Attribute-filtering AKNN search. Various AKNN indexing approaches support attribute-value filtering, where the attribute can be a label, a keyword, or a numerical value. We divide them into two classes: attribute-free and attribute-involved approaches. Attribute-Free Approach. These methods do not require attributes during index construction and determine whether a vector is filtered on the fly. Based on this, the PreFiltering and PostFiltering search strategies have been proposed.
These two strategies perform differently under different queries. Database systems such as ADB [46], VBASE [53], CHASE [33], and Milvus [41] select between the strategies based on a cost model. ACORN is also based on the PreFiltering strategy but builds a denser graph index to avoid the connectivity issues that the PreFiltering strategy causes. Since the above strategies completely ignore attributes during index construction, their search accuracy and efficiency have a large gap compared to methods that involve attributes. Attribute-Involved Approach. The attribute-involved index is constructed based on the attribute type. For range-filter queries on numerical attributes, recent methods [9, 48, 50] construct a segment tree to reconstruct the query interval. SeRF [56] uses the idea of compression to compress the indexes corresponding to all query range intervals into a compact one. For label attributes, UNG [5] uses the inclusion relationship of the label sets and constructs cross-group edges to ensure that the graph index only searches the filtered vectors. Beyond the above methods, NHQ [42] takes the attribute as part of the vector similarity computation, which requires manually adjusting the weights of the two parts. The recent approach DEG [52] solves this by adjusting the graph neighbors dynamically. The Index Selection Problem. Materialized view/index selection is a critical problem in many applications and has been studied for decades [21, 36]. Various greedy algorithms have been proposed to solve it, such as [21], which maximizes the benefit of the selected views at each round. Different from previous studies, we study an efficiency-oriented index selection scheme, use elastic factors to model efficiency, and provide a space-based cost function, making full use of existing vector index theory and features. Finally, we provide index selection under space constraints.
In the future, we will consider index selection under changing query workloads, as well as more advanced index selection algorithms.
Real-world vector embeddings are usually associated with extra labels, such as attributes and keywords. Many applications require nearest neighbor search restricted to specific labels, such as searching product image embeddings restricted to a particular brand. A straightforward approach is to materialize all possible indexes according to the complete query label workload. However, this leads to an exponential increase in both index space and processing time, which severely limits scalability and efficiency. In this paper, we leverage the inclusion relationships among query label sets to construct partial indexes, enabling index sharing across queries for improved construction efficiency. We introduce \textit{elastic factor} bounds to guarantee search performance and use a greedy algorithm to select indexes that meet the bounds, achieving a tradeoff between efficiency and space. We also design an algorithm that achieves the best elastic factor under a given space limitation. Experimental results on multiple real datasets demonstrate that our algorithm attains near-optimal search performance, with up to a 10x-500x speedup in search efficiency over state-of-the-art approaches. Our algorithm is highly versatile: it is not constrained by index type and can seamlessly integrate with existing optimized libraries.
# 1 Introduction

Large language models (LLMs) have shown strong performance in programming [1–3] and are widely adopted in tools like Cursor and GitHub Copilot to boost developer productivity [4]. LLM-generated code is becoming prevalent in commercial software [5] and may eventually form a substantial portion of the world’s code. However, due to their probabilistic nature, LLMs alone cannot provide formal guarantees for the generated code. As a result, the generated code often contains bugs, such as functional errors [6] and security vulnerabilities [7]. As LLM-based code generation is increasingly adopted, these issues can become a productivity bottleneck, since they typically require human review to resolve [8]. Formal verification presents a promising path to establish correctness guarantees for LLM-generated code, but has traditionally been limited to safety-critical applications due to its high cost [9–11]. Just as they scale up code generation, LLMs have the potential to significantly lower the barrier to formal verification. By jointly generating code, formal specifications, and formal proofs of alignment between code and specifications, LLMs can offer higher levels of correctness assurance and automation in software development. This approach represents an emerging programming paradigm known as verifiable code generation [12, 13]. Given the transformative potential of verifiable code generation, it is crucial to develop suitable benchmarks to track progress and guide future development. This is challenging because verifiable code generation involves three interconnected tasks: code, specification, and proof generation. We need to curate high-quality samples and establish robust evaluation metrics for each individual task, while also composing individual tasks to reflect real-world end-to-end usage scenarios where LLMs automate the creation of verified software directly from high-level requirements.
Existing benchmarks, as listed in Table 1 and detailed in Section 2, fall short in that they lack comprehensive support for all three tasks [14–16], quality control [17], robust metrics [18], or a modular design [12]. To bridge this gap, we introduce VERINA (Verifiable Code Generation Arena), a high-quality benchmark to comprehensively evaluate verifiable code generation. It consists of 189 programming challenges with detailed problem descriptions, code, specifications, proofs, and comprehensive test suites. We format these problems in Lean [19], a general-purpose programming language with a rapidly growing ecosystem and applications in both formal mathematics [20, 21] and verification [22, 23]. VERINA is constructed with careful quality control. It draws problems from various sources, including MBPP [18, 24], LiveCodeBench [1], and LeetCode, offering a diverse range of difficulty levels. All samples in the benchmark are manually inspected and revised to ensure clear natural language descriptions and accurate formal specifications and code implementations. Moreover, each sample includes a comprehensive test suite with both positive and negative cases, which achieves $100\%$ line coverage of the code implementation and passes the ground truth specification. VERINA facilitates the evaluation of code, specification, and proof generation, along with flexible combinations of these individual tasks. We utilize the standard pass@$k$ metric [25] with our comprehensive test suites to evaluate code generation. For proof generation, we use the Lean compiler to automatically verify correctness. Furthermore, we develop a practical, testing-based approach to automatically evaluate model-generated specifications by verifying their soundness and completeness with respect to ground truth specifications. The high-quality samples and robust metrics of VERINA establish it as a rigorous platform for evaluating verifiable code generation.
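For reference, the standard pass@$k$ metric [25] is typically computed with the unbiased estimator of Chen et al., $1 - \binom{n-c}{k}/\binom{n}{k}$; a short Python sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n samples of which c are correct,
    the probability that at least one of k randomly drawn samples
    (without replacement) is correct."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples: every draw of k succeeds
    return 1.0 - comb(n - c, k) / comb(n, k)
```

Computing the complement $\binom{n-c}{k}/\binom{n}{k}$ (the chance that all $k$ draws are incorrect) avoids the numerical instability of naively expanding the product form.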
We conduct a thorough experimental evaluation of nine state-of-the-art LLMs on VERINA. Our results reveal that even the top-performing LLM, OpenAI o4-mini, struggles with verifiable code generation, producing only $61.4\%$ correct code solutions, $51.0\%$ sound and complete specifications, and $3.6\%$ successful proofs, given a single sample for each task. Interestingly, iterative refinement using Lean compiler feedback can increase the proof success rate to $22.2\%$ with 64 refinement steps. However, this approach significantly raises costs, and a $22.2\%$ success rate is still insufficient. These findings underscore the challenges of verifiable code generation and highlight the critical role of VERINA in advancing the field.

# 2 Background and Related Work

We present works closely related to ours in Table 1 and discuss them in detail below. Task support for verifiable code generation. Writing code, specifications, and proofs for a verified software component is time-consuming when done manually. Although various studies have explored using LLMs to automate these tasks, they primarily focus on individual aspects, which is insufficient for fully automating end-to-end verifiable code generation. Benchmarks like HumanEval [3] and MBPP [24] have sparked impressive progress on LLM-based code generation but do not handle formal specifications or proofs. Many verification-focused efforts target only one or two tasks, while assuming the other elements are provided by the human user. For example, DafnyBench [14] and miniCodeProps [26] are two benchmarks designed exclusively for proof generation. Moreover, AutoSpec [27] and SpecGen [28] infer specifications and proofs from human-written code. To the best of our knowledge, Dafny-Synthesis [18] and Clover [12] are the only two works that cover all three tasks, like VERINA.
However, they target automated theorem proving using Dafny [29], while VERINA leverages interactive theorem proving in Lean. Moreover, they have relatively few human-written samples (50 and 62, respectively). In contrast, VERINA provides 189 high-quality samples with varying difficulty levels. Automated and interactive theorem proving. A major challenge in formal verification and verifiable code generation lies in tooling. Verification-oriented languages like Dafny [29] and Verus [30] leverage SMT solvers for automated theorem proving [31, 32] and require only proof hints, such as loop invariants [33] and assertions [34]. However, SMT solvers handle only limited proof domains and behave as black boxes, which can make proofs brittle and hard to debug [35]. Interactive theorem proving (ITP) systems like Lean provide a promising target for verifiable code generation with LLMs.

Table 1: A comparison of VERINA with related works on LLMs for code generation and verification. We characterize whether each work supports the three foundational tasks for end-to-end verifiable code generation: CodeGen, SpecGen, ProofGen (Section 4.1). $\bullet$ means fully supported, ◐ means partially supported, $\bigcirc$ means unsupported. If ProofGen is supported, we specify the proving style: automated theorem proving (ATP) or interactive theorem proving (ITP). For works supporting multiple tasks, we annotate whether these tasks are supported in a modular and composable manner. Overall, VERINA offers more comprehensive and high-quality benchmarking compared to prior works.

ITPs support constructing proofs with explicit intermediate steps. This visibility enables LLMs to diagnose errors, learn from unsuccessful steps, and iteratively refine their proofs. While ITPs traditionally require humans to construct proofs, recent work shows that LLMs can generate proofs at human level in math competitions [36].
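To illustrate the explicit, step-by-step style of ITP proofs, here is a minimal Lean example of our own (not taken from VERINA):

```lean
-- Each tactic application is a visible step; if one fails, Lean reports
-- the exact goal at that point, giving an LLM actionable feedback.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: 0 + 0 = 0
  | succ k ih => rw [Nat.add_succ, ih]  -- 0 + (k+1) = (0 + k) + 1 = k + 1
```

In contrast, an SMT-backed verifier would discharge such a goal in one opaque step, with no intermediate state to inspect when it fails.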
To our knowledge, the only existing verification benchmarks in Lean are miniCodeProps [26] and FVAPPS [17]. miniCodeProps translates 201 Haskell programs and their specifications into Lean but is designed for proof generation only. FVAPPS contains 4,715 Lean programs with LLM-generated specifications from a fully automated pipeline that lacks human validation and quality control. In contrast, VERINA provides high-quality, human-verified samples and captures all three foundational tasks in verifiable code generation. Task compositionality. A key strength of VERINA is its modular design, which enables flexible evaluation of not only individual tasks but also their combinations. This compositionality captures diverse real-world scenarios, from specification-guided code generation to end-to-end verifiable code generation, enabling a comprehensive assessment of different aspects of verifiable code generation. This modularity also facilitates targeted research on specific weaknesses, such as improving proof generation. In contrast, all prior works lack compositionality. For example, Dafny-Synthesis [18] and Clover [12] mix specification and proof generation into a single task, lacking support for separate evaluation of each.

# 3 VERINA: Data Format, Construction, and Quality Assurance

We describe the VERINA benchmark, its data construction pipeline, and quality assurance measures.

# 3.1 Overview and Data Format

VERINA consists of 189 standalone programs, annotated with natural language descriptions, code, specifications, proofs, and test cases. The code, specification, and proof are all written in Lean. An example is illustrated in Figure 1, consisting of:

• Natural language description (Line 1–4): informal description of the programming problem, capturing the intent of the human developer.
• Code (Line 6–8): ground truth code implementation that solves the programming problem.
1 -- Description of the coding problem in natural language
2 -- Remove an element from a given array of integers at a specified index. The resulting array should
3 -- contain all the original elements except for the one at the given index. Elements before the
4 -- removed element remain unchanged, and elements after it are shifted one position to the left.
5
6 -- Code implementation
7 def removeElement (s : Array Int) (k : Nat) (h_precond : removeElement_pre s k) : Array Int :=
8   s.eraseIdx! k
9
10 -- Pre-condition
11 def removeElement_pre (s : Array Int) (k : Nat) : Prop :=
12   k < s.size -- the index must be smaller than the array size
13
14 -- Post-condition
15 def removeElement_post (s : Array Int) (k : Nat) (result : Array Int) (h_precond : removeElement_pre s k) : Prop :=
16   result.size = s.size - 1 ∧ -- Only one element is removed
17   (∀ i, i < k → result[i]! = s[i]!) ∧ -- The elements before index k remain unchanged
18   (∀ i, i < result.size → i ≥ k → result[i]! = s[i + 1]!) -- The elements after index k are shifted by one position
19
20 -- Proof
21 theorem removeElement_spec (s : Array Int) (k : Nat) (h_precond : removeElement_pre s k) :
22   removeElement_post s k (removeElement s k h_precond) h_precond := by sorry -- The proof is omitted for brevity
23
24 -- Test cases
25 (s : #[1, 2, 3, 4, 5]) (k : 2) (result : #[1, 2, 4, 5]) -- Positive test with valid inputs and output
26 (s : #[1, 2, 3, 4, 5]) (k : 5) -- Negative test: inputs violate the pre-condition at Line 12
27 (s : #[1, 2, 3, 4, 5]) (k : 2) (result : #[1, 2, 4]) -- Negative test: output violates the post-condition at Line 16
28 (s : #[1, 2, 3, 4, 5]) (k : 2) (result : #[2, 2, 4, 5]) -- Negative test: output violates the post-condition at Line 17
29 (s : #[1, 2, 3, 4, 5]) (k : 2) (result : #[1, 2, 4, 4]) -- Negative test: output violates the post-condition at Line 18

• Specification (Line 10–18): ground truth formal specification for the programming problem. It consists of a pre-condition, which states properties the inputs must satisfy, and a post-condition, which states the desired relationship between inputs and outputs.
• Proof (Optional, Line 20–22): formal proof establishing that the code satisfies the specification. Ground truth proofs are optional in VERINA, as they are not required for evaluation: model-generated proofs can be checked by Lean directly. Nevertheless, we invest significant manual effort in writing proofs for 46 of the 189 examples, as they help quality assurance (Section 3.2).
• Test suite (Line 24–29): a comprehensive suite of both positive and negative test cases. Positive tests are valid input-output pairs that meet both the pre-condition and the post-condition.
Negative tests are invalid input-output pairs, meaning either the inputs violate the pre-condition or the output violates the post-condition. These test cases are useful for evaluating model-generated code and specifications, as detailed in Section 4.1. They are formatted in Lean during evaluation. Benchmark statistics. Table 2 presents key statistics of VERINA. Natural language descriptions have a median length of 110 words, ensuring they are both informative and detailed. Code ranges up to 38 lines and specifications up to 62 lines, demonstrating that VERINA captures complex tasks. With a median of 5 positive tests and 12 negative tests per instance, the constructed test suites provide strong evidence for the high quality and correctness of VERINA.

Table 2: Statistics of VERINA.

# 3.2 Benchmark Construction and Quality Assurance

VERINA consists of two subsets sourced from different origins: VERINA-BASIC and VERINA-ADV. For VERINA-BASIC, the maximum (median) LoC for code and specification are 26 (6) and 17 (2), respectively; for VERINA-ADV, they are 38 (16) and 62 (7). This underscores the diversity and varying difficulty levels across the two subsets. We employ a meticulous data curation process that combines careful translation, thorough manual review, and automated mechanisms, leading to a rigorous and high-quality benchmark for verifiable code generation. VERINA-BASIC: translation from human-written Dafny code. We first consider MBPP-DFY50 [18], which contains MBPP [24] coding problems paired with human-verified solutions in Dafny. Each instance contains a natural language problem description, code implementation, specification, proof, and test cases. We manually translated 49 problems into Lean, refining and verifying each translation. To extend the benchmark, we added 59 more human-authored Dafny instances from CloverBench [12].
These were translated into Lean using OpenAI o3-mini with few-shot prompting based on our manual translations, followed by manual inspection and correction. Overall, the coding difficulty in VERINA-BASIC, abstracting away language differences, is comparable to MBPP. VERINA-ADV: writing Lean code from scratch. VERINA-ADV enhances the diversity of VERINA by incorporating more advanced coding problems and solutions. They were adapted from student submissions to a lab assignment in a course on theorem proving and program verification. Students, both undergraduate and graduate, were encouraged to source problems from platforms like LeetCode or more challenging datasets such as LiveCodeBench [1]. They formalized and solved these problems in Lean, providing all necessary elements in VERINA’s format (Section 3.1). We carefully selected the most suitable and high-quality submissions, resulting in 81 benchmark instances. In addition, we manually reviewed and edited the submissions to ensure their correctness. Quality assurance. During the data collection process, we consistently enforce various manual and automatic mechanisms to ensure the high quality of VERINA: • Detailed problem descriptions: The original problem descriptions, such as those from MBPP-DFY50, can be short and ambiguous, making them inadequate for specification generation. To resolve this, we manually enhanced the descriptions by clearly outlining the high-level intent, specifying input parameters with explicit type information, and detailing output specifications. • Full code coverage with positive tests: Beyond the original test cases, we expanded the set of positive tests to ensure that they achieve full line coverage on the ground truth code. We created these additional tests both manually and with LLMs. We leveraged the standard coverage.py tool to verify complete line coverage, since Lean lacks a robust coverage tool. 
For Python reference implementations, we either used the original MBPP code or generated an implementation from the enhanced problem description via OpenAI’s o4-mini with manual validation. • Full test pass rate on ground truth specifications: We evaluated the ground truth specifications against our comprehensive test suites. All ground truth specifications successfully pass their respective positive tests, confirming the quality of the specifications in VERINA. • Necessary negative tests: We mutated each positive test case to construct at least three different negative tests that violate either the pre- or the post-condition, except when the function’s output has boolean type, in which case only a single negative test can be created. We made sure that our ground truth code and specifications do not pass these negative tests. • Preventing trivial code generation: VERINA allows providing ground truth specifications as an optional input for the code generation task (discussed in Section 4.1). We crafted all ground truth specifications such that they cannot be directly used to solve the coding problem. This prevents LLMs from generating an implementation trivially equivalent to the specification. As a result, the model must genuinely demonstrate semantic comprehension of the reference specification and non-trivial reasoning to generate the corresponding implementation. • Manual review and edits: Each benchmark instance was manually reviewed by at least two authors, carefully inspecting and editing them to ensure correctness and high quality. # 4 Evaluating Verifiable Code Generation Using VERINA VERINA enables comprehensive evaluation of verifiable code generation, covering foundational tasks—code, specification, and proof generation—and their combinations to form an end-to-end pipeline from natural language descriptions to verifiable code. We also introduce a novel framework for a reliable automatic evaluation of model-generated specifications. 
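To illustrate the quality-assurance checks above (ground truth specifications passing every positive test and rejecting every negative one), here is an illustrative Python transcription of the removeElement specification from Figure 1; in VERINA the actual checks are executed in Lean:

```python
def pre(s, k):
    # Pre-condition: the index must be smaller than the array size.
    return k < len(s)

def post(s, k, result):
    # Post-condition: one element removed, prefix unchanged, suffix shifted.
    return (len(result) == len(s) - 1
            and all(result[i] == s[i] for i in range(k))
            and all(result[i] == s[i + 1] for i in range(k, len(result))))

def spec_accepts(s, k, result=None):
    # A positive test must satisfy both conditions; a negative test is
    # one the specification rejects.
    return pre(s, k) and result is not None and post(s, k, result)
```

Running the five test cases from Figure 1 through `spec_accepts` accepts exactly the positive one and rejects the four negative mutants, which is the property enforced for every benchmark instance.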
# 4.1 Foundational Tasks and Metrics

As shown in Figure 2, all three foundational tasks include natural language descriptions and function signatures (Lines 7, 11, and 15 in Figure 1) as model inputs, which captures human intent, enforces consistent output formats, and facilitates streamlined evaluation.

Figure 2: VERINA’s three foundational tasks. Dashed arrows represent optional inputs. Figure 3: Our evaluator for specification generation.

Specification generation (SpecGen). Given a description, signature, and optionally a code implementation, the model generates a formal specification. Specifications must accurately capture human intent. Let $\phi$ denote the set of correct programs that satisfy human intent and $\hat{\phi}$ the set that aligns with the generated specification. An ideal specification should achieve $\hat{\phi} = \phi$, which entails two properties: (i) soundness ($\hat{\phi} \subseteq \phi$): it is “small enough” to cover only correct programs, and (ii) completeness ($\phi \subseteq \hat{\phi}$): it is “large enough” to cover all correct programs. In practice, two challenges arise in evaluating $\hat{\phi}$. First, we must capture $\phi$ formally. VERINA addresses this by leveraging high-quality ground truth specifications (see Section 3.2) and comprehensive test suites. Second, we need to assess the relationship between $\hat{\phi}$ and $\phi$ to establish soundness and completeness. Since specifications consist of pre-conditions and post-conditions, let $P$ and $\hat{P}$ denote the ground truth and model-generated pre-conditions, respectively, and $Q$ and $\hat{Q}$ the corresponding post-conditions. In VERINA, we define the soundness and completeness of $\hat{P}$ and $\hat{Q}$ as follows:

• $\hat{P}$ is sound iff $\forall \overline{x}.\, P(\overline{x}) \Rightarrow \hat{P}(\overline{x})$, where $\overline{x}$ are the program’s input values. Given the same post-condition (e.g., $Q$), it is more difficult for a program to satisfy $\hat{P}$ than $P$, because $\hat{P}$ allows more inputs, all of which the program must handle to meet the post-condition. As a result, the set of programs accepted under $\hat{P}$ is a subset of those accepted under $P$.
• $\hat{P}$ is complete iff $\forall \overline{x}.\, \hat{P}(\overline{x}) \Rightarrow P(\overline{x})$. Given the same post-condition, the set of programs accepted under $\hat{P}$ is now a superset of those accepted under $P$, since $\hat{P}$ is more restrictive than $P$.
• $\hat{Q}$ is sound iff $\forall \overline{x}, y.\, P(\overline{x}) \land \hat{Q}(\overline{x}, y) \Rightarrow Q(\overline{x}, y)$, where $y$ is the output value. For any valid inputs w.r.t. $P$, the set of outputs accepted by $\hat{Q}$ is a subset of those accepted by $Q$, establishing soundness.
• Symmetrically, $\hat{Q}$ is complete iff $\forall \overline{x}, y.\, P(\overline{x}) \land Q(\overline{x}, y) \Rightarrow \hat{Q}(\overline{x}, y)$.

To evaluate SpecGen, we need automatic and robust mechanisms to check whether the above relationships hold. Formally proving them is difficult, as they may contain nested quantifiers and complex program properties. LLM-based provers are ineffective in the verification domain, as shown in Section 5, making them unreliable for this use case. Another approach is to reduce these relationships to ATP; however, existing tools do not adequately model the necessary Lean features [43]. To overcome these limitations, we leverage a practical testing-based evaluation framework using our comprehensive test suites, as shown in Figure 3.
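The pre-condition definitions above reduce naturally to checks over concrete test inputs, which a small Python sketch illustrates (names are ours; VERINA performs these checks in Lean):

```python
def sound_on_tests(P, P_hat, inputs):
    # P_hat is sound (on these tests) iff P(x) implies P_hat(x)
    # for every concrete test input x.
    return all(P_hat(x) for x in inputs if P(x))

def complete_on_tests(P, P_hat, inputs):
    # P_hat is complete (on these tests) iff P_hat(x) implies P(x).
    return all(P(x) for x in inputs if P_hat(x))
```

With the Figure 1 example, where $P$ is k < s.size, the narrower candidate k < s.size - 1 fails the soundness check on a positive test with k = 4, while the wider candidate k < s.size + 1 fails the completeness check on the negative test with k = 5.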
We formalize a given soundness or completeness relationship, denoted by $R$, in Lean. Instead of proving $R$ for universally quantified input and output variables, we check $R$ against concrete values from test cases. For example, to evaluate $\hat{Q}$’s soundness, we check whether $P(\overline{x}) \land \hat{Q}(\overline{x}, y) \Rightarrow Q(\overline{x}, y)$ holds for all test cases $(\overline{x}, y)$ in our test suite. We denote this simplified version of $R$ as $R'$. In many cases, e.g., the specification in Figure 1, Lean can automatically determine whether $R'$ holds [44], and we return the corresponding result. Otherwise, we employ property-based testing with the plausible tactic in Lean [45]. It generates diverse inputs specifically targeting the remaining universally and existentially quantified variables in $R'$, systematically exploring the space of possible values to test $R'$. In Appendix A.4, we provide a detailed description of how we implement these metrics in Lean. Since our evaluator is based on testing, it can prove that $R$ does not hold by exhibiting counterexamples, as highlighted in green in Figure 3. While it cannot formally establish that $R$ holds, it remains highly robust in this regard, thanks to our comprehensive test suite with both positive and negative tests, which achieves full coverage of the ground truth code implementations. Lean’s property-based testing cannot handle a small number of complicated relationships, for which our evaluator returns unknown. To further enhance the accuracy of our metric, we repeat the evaluation framework in Figure 3 to check $\neg R$.
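The per-relationship workflow just described can be sketched as a three-valued procedure (an illustrative Python rendering of Figure 3; `decide` and `has_counterexample` are our stand-ins for Lean's decision procedures and the plausible tactic):

```python
def evaluate(instances, decide, has_counterexample):
    # decide(r) -> True/False when Lean can settle r directly, else None.
    # has_counterexample(r) -> True (counterexample found by property-based
    # testing), False (none found), or None (cannot test).
    for r_prime in instances:  # R' instantiated on each concrete test case
        verdict = decide(r_prime)
        if verdict is None:
            cex = has_counterexample(r_prime)
            if cex is None:
                return "unknown"        # neither decidable nor testable
            verdict = not cex
        if not verdict:
            return "does not hold"      # definitive: counterexample found
    return "holds"                      # not refuted on any test instance
```

Only the "does not hold" verdict is definitive (a concrete counterexample); "holds" means $R$ survived every test instance, which is why the paper additionally runs the same procedure on $\neg R$.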
We compare the evaluator outcomes on $R$ and $\neg R$ and select the more accurate result as the final output. Our final metrics for SpecGen include individual pass@$k$ [3] scores for the soundness and completeness of all generated pre-conditions and post-conditions, as well as aggregated scores requiring that soundness and completeness hold simultaneously for the pre-condition, the post-condition, and the complete specification. Since the evaluation of a specification may return unknown, we plot error bars indicating the lower bound (treating unknown as “$R$ does not hold”) and the upper bound (treating it as “$R$ holds”).

Figure 4: Combinations of VERINA’s foundational tasks: specification-guided code generation (top left), specification inference from code (bottom left), and end-to-end verifiable code generation (right). Natural language descriptions and function signatures are omitted in the figure for brevity.

To illustrate our metric, consider the ground truth pre-condition k < s.size at Line 12 of Figure 1, and the model-generated pre-conditions k < s.size - 1 and k < s.size + 1. k < s.size - 1 can be determined to be unsound using the positive test (s : #[1, 2, 3, 4, 5]) (k : 4), while k < s.size + 1 is incomplete based on the negative test (s : #[1, 2, 3, 4, 5]) (k : 5). We provide more examples of our metrics for specification generation in Appendix C. Code generation (CodeGen). Given a natural language description, function signature, and optionally a specification, the model generates code implementing the desired functionality. Following standard practice, we evaluate the generated code by running it against the positive test cases in VERINA and reporting the pass@$k$ metric defined by Chen et al. [3]. In Section 4.2, we will explore evaluating the code by proving its correctness with respect to the formal specification. Proof generation (ProofGen).
Given a description, signature, code, and specification, the model generates a formal proof in Lean establishing that the code satisfies the specification. This task evaluates the model’s ability to reason about code behavior and construct logically valid arguments for correctness. We use Lean to automatically check the validity of generated proofs; proofs containing placeholders (e.g., the sorry tactic) are marked as incorrect.

# 4.2 Task Combinations

VERINA enables combining the three foundational tasks to evaluate various capabilities in verifiable code generation. These combined tasks reflect real-world scenarios where developers use the model to automatically create verified software in an end-to-end manner. Such modularity and compositionality highlight the generality of VERINA, which encompasses the various tasks studied in previous work (Table 1). Three examples of combined tasks are shown in Figure 4:

• Specification-Guided Code Generation: Given a natural language description, function signature, and the ground truth specification, the model first generates the code and then proves that the code satisfies the specification. This aligns with the tasks explored in FVAPPS [17] and AlphaVerus [15].
• Specification Inference from Code: In some cases, developers may have the code implementation and want the model to annotate it with a formal specification and prove their alignment. This corresponds to the setting in AutoSpec [27], SpecGen [28], and SAFE [16].
• End-to-End Verifiable Code Generation: For an even higher degree of automation, developers might start with only a high-level problem description in natural language and instruct the model to generate the code and specification independently, and then generate the proof. This captures the scenario in Dafny-Synthesis [18] and Clover [12].

In these task combinations, a crucial design consideration is the dependency between code and specification.
For example, in specification-guided code generation, it is important to assess how beneficial the ground truth specification is beyond the natural language description, which already captures the developer’s intent. Additionally, for end-to-end verifiable code generation, it is essential to decide the order of the CodeGen and SpecGen modules: whether to make SpecGen dependent on the output of CodeGen, place SpecGen before CodeGen, or run them independently (as in Figure 4). We experimentally explore these design choices using VERINA in Section 5. In end-to-end verifiable code generation, it is crucial that the model generates code and specification independently, rather than sequentially. Otherwise, it may exploit shortcuts by producing definitionally equivalent code-specification pairs, making the proof task trivial. When designing VERINA’s tasks, we enforce independence; for example, the model cannot access the code when generating the specification. While we do not yet check for definitional equivalence (e.g., using BEq [46]), we leave this as an important direction for future work.

# 5 Experimental Evaluation

Experimental setup. We evaluate a diverse set of nine state-of-the-art LLMs on VERINA. We use 2-shot prompting to enhance output format adherence, with the 2-shot examples excluded from the final benchmark. For each task, we primarily report the pass@1 metric [3]. We provide detailed input prompts, output formats, and LLM setups in Appendix A.

Figure 5: pass@1 performance of LLMs on VERINA’s three foundational tasks.

All foundational tasks are challenging, especially ProofGen. Figure 5 shows a clear difficulty hierarchy across the three foundational tasks. Code generation achieves the highest success rates across models, followed by specification generation, while proof generation remains the most challenging, with pass@1 rates below $3.6\%$ for all models.
All three tasks pose significant challenges for current LLMs; constructing Lean proofs that the implementation satisfies the specification is particularly hard and requires specialized theorem-proving capabilities. This also means that for any combined task involving ProofGen, LLM performance will be heavily bottlenecked by the ProofGen subtask. Among the evaluated models, o4-mini, GPT 4.1, Claude Sonnet 3.7, and Gemini 2.5 Flash demonstrate relatively stronger performance across tasks. We report detailed results on pre-condition and post-condition soundness and completeness in Appendix B, where we observe that generating sound and complete post-conditions is generally more difficult than pre-conditions.

Figure 6: pass@1 performance on three foundational tasks for VERINA-BASIC and VERINA-ADV.

VERINA-ADV is much more challenging than VERINA-BASIC. The comparison between VERINA-BASIC and VERINA-ADV in Figure 6 reveals substantial difficulty gaps on all three tasks. This demonstrates that problem complexity significantly impacts all aspects of verifiable code generation, and VERINA-ADV provides a valuable challenge for advancing future research in this domain.

Figure 7: pass@$k$ performance of selected LLMs on the ProofGen tasks in VERINA using proof refinement (first row) and direct generation (second row).

Iterative proof refinement shows meaningful improvements. For the ProofGen task, besides pass@1, we also extend the evaluation of the four best-performing LLMs (o4-mini, GPT 4.1, Claude Sonnet 3.7, Gemini 2.5 Flash) to further investigate their theorem-proving capabilities. We evaluate them with iterative proof refinement, where the evaluated model receives Lean verifier error messages and is prompted to revise its proof, and with direct generation, where the model generates responses independently in each iteration. For both methods, we report pass@$k$, the success rate after $k$ rounds of iterations, for $k$ up to 64.
This metric investigates how much additional interaction helps repair proofs that a single-pass generation would miss, and whether providing Lean verifier feedback improves success rates compared to independent generation attempts. As shown in Figure 7, iterative proof refinement yields meaningful improvements on simpler problems, with o4-mini improving from 7.41% to 22.22% on VERINA-BASIC after 64 iterations. However, these gains are substantially smaller on VERINA-ADV (1.23% to 6.17%), indicating that naive refinement strategies are insufficient for complex proving tasks. Furthermore, comparing refinement against direct generation without error messages demonstrates the clear value of Lean verifier feedback.

Figure 8: Impact of contextual information (reference code or specification input) on CodeGen and SpecGen performance.

Providing ground truth specifications benefits CodeGen. Providing ground truth specifications as context consistently improves CodeGen performance across models. Since the ground truth specifications cannot be used directly as code (as explained in Section 3.2), all CodeGen improvements rely on semantic understanding of the reference specification. On the contrary, providing ground truth code as context shows minimal or negative improvement for SpecGen. While it is possible for LLMs to directly use the ground truth code in the specification, manual inspection of our evaluation results reveals no evidence of such behavior. This is likely because using code as a specification is uncommon in standard development practice, and our prompts (Appendix A.3) ask LLMs to focus on constraining code behavior rather than replicating implementation details.
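The iterative proof-refinement protocol evaluated in Figure 7 can be sketched as a simple feedback loop; `check_proof` and `revise` below are hypothetical stand-ins for the Lean verifier and the LLM call, not VERINA's actual interfaces:

```python
from typing import Callable, Optional, Tuple

def refine_proof(
    initial: str,
    check_proof: Callable[[str], Tuple[bool, str]],  # proof -> (ok, error message)
    revise: Callable[[str, str], str],               # (proof, error) -> new proof
    max_rounds: int = 64,
) -> Optional[str]:
    """Iterative refinement: feed verifier errors back until success or budget runs out.
    Direct generation corresponds to `revise` ignoring the error message."""
    proof = initial
    for _ in range(max_rounds):
        ok, err = check_proof(proof)
        if ok:
            return proof
        proof = revise(proof, err)
    return None

# Toy stand-ins: the "verifier" accepts only the string "qed",
# and each revision moves one character closer to it.
target = "qed"
checker = lambda p: (p == target, "expected 'qed'")
reviser = lambda p, e: target[: len(p) + 1]
assert refine_proof("", checker, reviser) == "qed"
```

The loop returns the first verified proof, so pass@k under refinement is simply whether it succeeds within k rounds.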
The asymmetry in using ground truth information for CodeGen versus SpecGen suggests that formal specifications effectively constrain and guide code synthesis, while verbose code implementations may introduce noise or over-constrain specification generation rather than provide helpful guidance. Furthermore, when using LLM-generated code or specifications as context instead of ground truth, performance generally degrades: the generated artifacts can be of insufficient quality to serve as a reliable reference. This suggests that combined tasks, where LLMs must generate both code and specifications jointly, might be significantly more challenging than the individual tasks in isolation.

Qualitative case studies. We present detailed qualitative case studies with analysis of failure modes and success patterns across different tasks in Appendix C.
Large language models (LLMs) are increasingly integrated in software development, but ensuring correctness in LLM-generated code remains challenging and often requires costly manual review. Verifiable code generation -- jointly generating code, specifications, and proofs of code-specification alignment -- offers a promising path to address this limitation and further unleash LLMs' benefits in coding. Yet, there exists a significant gap in evaluation: current benchmarks often lack support for end-to-end verifiable code generation. In this paper, we introduce Verina (Verifiable Code Generation Arena), a high-quality benchmark enabling a comprehensive and modular evaluation of code, specification, and proof generation as well as their compositions. Verina consists of 189 manually curated coding tasks in Lean, with detailed problem descriptions, reference implementations, formal specifications, and extensive test suites. Our extensive evaluation of state-of-the-art LLMs reveals significant challenges in verifiable code generation, especially in proof generation, underscoring the need for improving LLM-based theorem provers in verification domains. The best model, OpenAI o4-mini, generates only 61.4% correct code, 51.0% sound and complete specifications, and 3.6% successful proofs, with one trial per task. We hope Verina will catalyze progress in verifiable code generation by providing a rigorous and comprehensive benchmark. We release our dataset on https://huggingface.co/datasets/sunblaze-ucb/verina and our evaluation code on https://github.com/sunblaze-ucb/verina.
[ "cs.LG", "cs.AI", "cs.LO", "cs.PL", "cs.SE" ]
# 1 Introduction

Large Language Models (LLMs) have rapidly transformed digital communication, enabling machines to generate text that closely mimics human writing. Their utility in domains such as virtual assistants, content production, and automated dialogue is evident. However, their widespread and unregulated use on social platforms raises serious concerns, particularly in relation to the spread of misinformation and the reinforcement of ideological divisions Sun et al., 2024. Social media environments, designed around engagement-driven recommendation systems, tend to prioritize content that provokes strong reactions, often elevating controversial or misleading narratives Yang et al., 2020. Within this ecosystem, AI-generated text risks blending indistinguishably into public discourse, influencing opinion formation and altering the dynamics of online debate. One particularly sensitive domain is political communication. Recent work on social bots demonstrates how automated accounts can interact persuasively with human users, disseminating content that inflames existing divisions while avoiding detection by traditional moderation systems Feng et al., 2023. In contrast to earlier bots based on static scripts, models fine-tuned from open-source LLMs are capable of producing responses that are not only fluent but also ideologically coherent and contextually aware. This shift raises the possibility that such systems might be strategically deployed to influence discussions, distort narratives, or manipulate perceptions across digital communities. Reddit offers a compelling case study for investigating this phenomenon. With its subreddit-based structure and open access to discussion trees, the platform hosts a variety of ideologically polarized communities. By examining the behavior of a fine-tuned model operating within these spaces, we aim to understand how LLMs might participate in or amplify partisan dynamics.
In particular, we ask whether a fine-tuned LLM can generate persuasive, engaging responses that are consistent with the rhetorical style and ideological orientation of the communities it interacts with. To this end, the study develops a dataset of comment-reply pairs from selected political subreddits and employs a fine-tuning strategy using the LLaMA-2 Chat 7B model in combination with Low-Rank Adaptation (LoRA). Four experimental configurations are tested, varying in terms of whether the model is fine-tuned and whether it receives explicit prompting for contextual awareness. The resulting outputs are analyzed both quantitatively and qualitatively to assess credibility, linguistic realism, and ideological alignment. The remainder of this paper is structured as follows. Section 2 surveys relevant literature on bot detection, language model fine-tuning, and the intersection of AI and political communication. Section 3 describes the construction of the dataset and the fine-tuning procedure. Section 4 presents experimental results and evaluations. Section 5 discusses the broader implications for AI regulation and platform policy. Section 6 concludes with reflections on limitations and directions for future research. # 2 Related Work The increasing capability of large language models (LLMs) in generating human-like text has raised concerns about their potential role in the dissemination of misinformation and the reinforcement of ideological polarization. Several studies have documented how AI-generated content can contribute to the formation of echo chambers, particularly in social media environments where engagement-oriented algorithms privilege emotionally charged and divisive narratives Sun et al., 2024; Yang et al., 2020. In parallel, the evolution of natural language processing has enabled progressively more sophisticated generative systems. 
While early models based on recurrent neural networks (RNNs) laid the groundwork for sequence modeling Elman, 1990, the introduction of Transformer-based architectures marked a significant leap in contextual understanding and fluency Vaswani et al., 2017. This architectural shift has expanded the persuasive capabilities of LLMs, allowing them to emulate rhetorical strategies that enhance the credibility of their outputs Gao et al., 2023. Rather than relying exclusively on false claims, AI-generated misinformation increasingly exploits nuanced persuasion techniques, making it more difficult to identify through traditional fact-checking approaches Jurafsky and Martin, 2024. A related body of research has focused on the role of social bots in digital manipulation. Bots have long been used to amplify misinformation, simulate public consensus, and distort the visibility of certain narratives Feng et al., 2023. Empirical studies such as those based on the TwiBot-22 dataset have shown that bot networks frequently engage in coordinated disinformation campaigns, mimicking legitimate user behavior to evade detection Qiao et al., 2024. Traditional detection methods, which rely on behavioral heuristics and network-level anomalies Wei and Nguyen, 2020, are increasingly insufficient in the face of LLM-powered botnets. These newer systems are capable of producing contextually relevant and coherent responses, rendering them virtually indistinguishable from real users Gao et al., 2023; Touvron et al., 2023. Despite progress in bot detection and AI governance, several unresolved challenges remain. First, many detection models target generic misinformation but neglect the ideological bias embedded in AI-generated political discourse. Second, the accessibility of powerful fine-tuning techniques raises concerns about the unsupervised creation of persuasive ideological agents. 
Third, most evaluations rely on synthetic benchmarks and do not consider the dynamics of real-world interaction. Finally, regulatory frameworks have not kept pace with the evolving capabilities of generative models, leaving significant gaps in mitigation strategies. Addressing these issues requires interdisciplinary collaboration across machine learning, social science, and policy-making. Our work contributes to this growing field by analyzing how fine-tuned LLMs behave in ideologically charged discussions, assessing their rhetorical strategies, comparing them to existing bot-based manipulation frameworks, and proposing directions for detection and governance in future AI deployments. # 3 Methodology # 3.1 Dataset Construction Reddit was selected as the primary data source due to its structured discussion format and the presence of thematically organized communities that often exhibit strong ideological polarization. The platform’s architecture, based on threaded discussions within topical subreddits, offers a granular view of user interactions, making it particularly suitable for capturing rhetorical strategies in political discourse. To build the dataset, sixteen subreddits were identified based on their ideological alignment and relevance to public debate. These included communities explicitly oriented around political identities, such as r/trump and r/Republican on the right and r/IncelTears and r/GenderCynical on the left, as well as subreddits associated with conspiracy theories (e.g., r/conspiracy, r/flatearth) and public figures (e.g., r/JoeRogan, r/elonmusk). For each subreddit, the top 1502 trending posts were retrieved using the Python Reddit API Wrapper (PRAW), and all associated comment threads were recursively extracted. This process ensured the inclusion of naturalistic, user-generated interactions reflecting the discourse style of each community. 
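The recursive extraction of comment threads described above can be sketched as follows; the nested-dict schema is an illustrative stand-in for PRAW's comment objects, not the paper's actual code:

```python
def extract_pairs(node, pairs=None):
    """Recursively collect (comment, immediate reply) source-target pairs
    from a threaded discussion. Each node is a dict with 'body' and 'replies'."""
    if pairs is None:
        pairs = []
    for reply in node.get("replies", []):
        pairs.append((node["body"], reply["body"]))
        extract_pairs(reply, pairs)
    return pairs

thread = {
    "body": "Top-level post comment",
    "replies": [
        {"body": "First reply", "replies": [
            {"body": "Nested reply", "replies": []},
        ]},
    ],
}
print(extract_pairs(thread))
# [('Top-level post comment', 'First reply'), ('First reply', 'Nested reply')]
```

Every edge of the comment tree yields one training pair, which preserves the conversational context the fine-tuning stage relies on.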
Comment-reply pairs were then filtered and preprocessed to remove duplicates, links, moderation artifacts, and content shorter than a minimum threshold. The final corpus was structured as source-target pairs, with each source representing a user comment and the target being the immediate reply. This conversational structure was preserved to maintain contextual continuity during the fine-tuning phase. # 3.2 Model Architecture and Fine-Tuning Strategy The model employed for this study was LLaMA-2 Chat 7B, an open-source large language model developed by Meta. Fine-tuning was performed using Low-Rank Adaptation (LoRA) Hu et al., 2021, a technique that injects trainable low-rank matrices into each layer of the transformer architecture without modifying the original model weights. This approach significantly reduces memory consumption and training time, making it suitable for experimentation on consumer-grade hardware. To further optimize efficiency, LoRA was combined with 4-bit quantization using the NormalFloat (NF4) precision format Dettmers et al., 2022. Quantization allowed parameters to be stored in reduced precision while maintaining inference-level performance through computation in FP16. The fine-tuning process was carried out using the QLoRA framework Dettmers et al., 2023, which supports parameter-efficient training under constrained memory conditions. Training was conducted on a single A100 GPU with 80GB of VRAM. Each model variant was fine-tuned for three epochs using a learning rate of 2e-5 and a batch size of 64. A context length of 512 tokens was used, and responses exceeding this length were truncated. Training and validation losses were monitored to avoid overfitting, and checkpointing was implemented to allow rollback in case of divergence. # 3.3 Experimental Conditions Four distinct configurations were evaluated to isolate the effects of fine-tuning and prompting on ideological alignment and rhetorical quality. 
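As background on the LoRA setup just described: each frozen d×k weight is augmented with a trainable rank-r product B·A, so the trainable parameter count drops from d·k to r(d+k). A numeric sketch with illustrative dimensions (not the paper's actual configuration):

```python
import numpy as np

d, k, r, alpha = 1024, 1024, 8, 16        # layer dims and LoRA rank (illustrative)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))           # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-initialized

def lora_forward(x):
    """y = (W + (alpha / r) * B @ A) @ x, without materializing the merged weight."""
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = d * k                       # parameters of one dense layer
lora_params = r * (d + k)                 # trainable parameters under LoRA
print(f"trainable fraction: {lora_params / full_params:.4%}")  # 1.5625% here
```

Because B starts at zero, the adapted layer initially reproduces the frozen model exactly, and only the small A and B matrices receive gradients during fine-tuning.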
The baseline consisted of the raw, unaltered LLaMA-2 model without any fine-tuning or task-specific prompts. The second variant introduced structured prompts to guide the model toward context-aware generation. The third and fourth variants applied fine-tuning with and without prompting, respectively, allowing for a comparative analysis of model behavior under adversarial adaptation and guided inference. All models were evaluated on the same test set of unseen Reddit interactions, ensuring a consistent basis for performance comparison. Human annotation and automated metrics were subsequently applied to quantify credibility, emotional tone, and ideological bias, as described in the following section.

# 4 Results

# 4.1 Fine-Tuning and Experimental Setup

To evaluate whether LLMs can effectively mimic ideological rhetoric, we fine-tuned **LLaMA-2 Chat 7B** using **LoRA (Low-Rank Adaptation)**, optimizing for persuasive argumentation and ideological alignment. The model was trained using **comment-reply pairs** extracted from Reddit, ensuring that the fine-tuned AI learned from natural conversations. Four experimental configurations were tested: AI-1 (Raw Unprompted): standard LLaMA-2 model with no fine-tuning; AI-2 (Raw Prompted): LLaMA-2 with structured prompting for contextual awareness; AI-3 (Fine-Tuned Unprompted): fine-tuned model generating responses without additional guidance; AI-4 (Fine-Tuned Prompted): fine-tuned model with contextual prompts enhancing ideological alignment.

# 4.2 Training Configuration

The fine-tuning was conducted on **Google Colab Pro**, utilizing **8 A100 GPUs**.
The hyperparameter configuration was as follows:

• Batch size: 1
• Learning rate: $2 \times 10^{-4}$
• Optimizer: Paged AdamW 8-bit
• Number of epochs: 2
• Training duration: approximately 2.5 hours

Each training sample followed a structured input format:

Comment: [USER COMMENT]
Reply: [GENERATED RESPONSE]

The fine-tuning process lasted approximately **2.5 hours**, optimizing **1.13% of the model’s total 3.54 billion parameters**, corresponding to **39,976,960 trainable parameters** Bahdanau et al., 2014.

# 4.3 Inference and Response Generation

To assess the effectiveness of the fine-tuned model, we tested four inference configurations: **AI-1 (Raw Unprompted)**: standard LLaMA-2 with no fine-tuning. **AI-2 (Raw Prompted)**: LLaMA-2 with structured prompts but no fine-tuning. **AI-3 (Fine-Tuned Unprompted)**: fine-tuned model generating responses with minimal context. **AI-4 (Fine-Tuned Prompted)**: fine-tuned model with additional contextual prompts. The **unprompted** inference mode provided the model only with a comment from the test set:

Comment: [TEST COMMENT]
Reply:

The **prompted** mode included additional metadata, such as the post title and subreddit: You are a Reddit user reading a post titled [TITLE] in the subreddit [SUBREDDIT]. The reply should be engaging, thought-provoking, and mimic a natural Reddit response. Each model generated responses for **48 test comments**, which were then compared to the **original human responses on Reddit** Gao et al., 2023.

# 4.4 Evaluation Metrics

To quantitatively assess the performance of the models, three primary metrics were used: **BLEU Score:** measures textual similarity between generated responses and human-written replies. **Perplexity:** evaluates language model fluency and coherence. **Sentiment Alignment:** assesses ideological consistency of AI-generated responses.
Perplexity is computed as:

$$ \mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log P(w_i)\right), $$

where $w_i$ represents each token in the sequence.

Figure 1: Transformer model architecture used in this study.

# 4.5 Human Evaluation: Credibility and Persuasiveness

To further assess response quality, a **human evaluation survey** was conducted. Participants rated AI-generated and human responses based on:

1. **Credibility**: how human-like the response appeared (1 = artificial, 5 = highly credible).
2. **Provocativeness**: how engaging or polarizing the response was (1 = neutral, 5 = highly provocative).

The survey included **10 randomly selected test comments**, with **5 responses per comment** (4 AI-generated, 1 human). **16 participants** rated responses blindly in a randomized order. The results are presented in Section 4 (Results) Qiao et al., 2024.

# 4.6 Dataset and Subreddit Selection

Reddit was chosen as the primary data source due to its structured discussion threads and the presence of **highly polarized ideological communities**. Data was collected from 16 politically charged subreddits, covering a diverse range of perspectives:

• Right-wing communities: r/trump, r/Republican, r/benshapiro, r/TrueChristian.
• Left-wing and progressive communities: r/IncelTears, r/GenderCynical, r/europe.
• Conspiracy and alternative information communities: r/conspiracy, r/flatearth, r/skeptic.
• Influencer-driven communities: r/JoeRogan, r/stevencrowder, r/elonmusk.

A total of **1502 posts per subreddit** were extracted, and discussions were segmented into **comment-reply pairs** for training purposes. The dataset was preprocessed to remove bot-generated comments, low-engagement threads, and spam, resulting in a **high-quality corpus of human interactions**.
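The perplexity formula in Section 4.4 follows directly from per-token probabilities; a minimal sketch:

```python
import math

def perplexity(token_probs):
    """PPL = exp(-(1/N) * sum_i log P(w_i)) over predicted-token probabilities."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# Sanity check: a model assigning uniform probability 1/V to every token has PPL = V.
V = 32000
print(perplexity([1.0 / V] * 10))
```

Lower perplexity means the model assigns higher probability to the observed text, which is why it serves here as a fluency and coherence proxy.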
# 4.7 Performance Evaluation

To evaluate the effectiveness of our fine-tuned model, we measured key NLP metrics, including BLEU score, Perplexity, and Sentiment Alignment. Table 1 presents a comparative analysis of our fine-tuned LLaMA-2 model against baseline models and AI-driven social bots.

Table 1: Comparison of model performance across key evaluation metrics.

The fine-tuned LLaMA-2 model outperformed all baselines in BLEU score and Sentiment Alignment, indicating a higher degree of fluency and ideological consistency. The lower Perplexity value suggests improved text coherence and predictive accuracy.

# 4.8 Human Evaluation: Credibility and Provocativeness

To further validate the results, we conducted a human evaluation survey where participants rated AI-generated and real responses based on credibility and provocativeness. The results are shown in Figure 2.

Figure 2: Human evaluation results: Credibility and Provocativeness of AI vs. Human Responses.

Participants rated responses on two key dimensions:

• Credibility: how human-like the response appeared (1 = artificial, 5 = highly credible).
• Provocativeness: how engaging or provocative the response was (1 = neutral, 5 = highly provocative).

The fine-tuned and prompted model (AI-4) achieved the highest credibility score (3.87), surpassing even the real human responses (3.71). Meanwhile, AI-2 (raw model with prompting) was the most provocative (4.03), demonstrating that **structured prompts alone can significantly influence the perceived persuasiveness of AI-generated content**.

# 4.9 Bias Analysis and Model Limitations

Despite its strong performance, the fine-tuned model exhibited several notable biases: **Bias reinforcement**: when prompted with polarized discussions, the model tended to generate increasingly extreme responses. **Hallucination**: some generated responses contained factually incorrect statements.
**Overconfidence**: the model occasionally produced definitive claims, even when the input was ambiguous. These findings highlight the **importance of improving fine-tuning strategies and integrating more robust bias detection techniques** to prevent potential misuse in real-world applications.

# 5 Discussion

# 5.1 Interpretation of Results

The findings of this study indicate that fine-tuning LLaMA-2 using LoRA significantly enhances its capacity to produce ideologically consistent and rhetorically persuasive responses. With a BLEU score of 32.4 and a perplexity of 30.2, the fine-tuned model demonstrated superior fluency and coherence compared to its baseline counterparts. The sentiment alignment score of 78.9% further confirms its ability to replicate the ideological tenor of the training corpus. Human evaluation results reinforce these observations. The prompted fine-tuned model (AI-4) was perceived as more credible than actual human-written replies, suggesting that LLMs, once ideologically optimized, can generate responses indistinguishable from those of real users. Notably, the high provocativeness score of the prompted but non-fine-tuned model (AI-2) underscores the capacity of structured prompting alone to increase rhetorical impact, even in the absence of parameter adaptation.

# 5.2 Relation to Prior Work

These results extend current understanding of AI-driven persuasion and social bot behavior. Prior studies have emphasized the growing challenge of detecting LLM-powered bots in online discourse Feng et al., 2023, and our work corroborates this concern by demonstrating that fine-tuned models not only imitate natural conversation but also embed ideological nuance with surprising accuracy.
Moreover, while earlier research in misinformation detection has predominantly focused on factual verification Sun et al., 2024, the present study highlights how persuasive AI-generated discourse may circumvent such mechanisms by relying less on falsehoods and more on plausible, ideologically resonant rhetoric. This suggests that traditional fact-checking alone may be insufficient to counteract the subtle influence of fine-tuned language models. # 5.3 Bias and Model Constraints Despite these promising results, the model exhibited several well-documented limitations. When exposed to polarizing content, it tended to reinforce ideological extremity, generating progressively radicalized replies. It also displayed excessive confidence in its assertions, even when the input was ambiguous. Finally, some outputs contained hallucinated or inaccurate information, a recurring problem across large language models. These behaviors underscore the need for stronger fine-tuning safeguards and more robust bias detection frameworks.
The increasing sophistication of large language models (LLMs) has sparked growing concerns regarding their potential role in exacerbating ideological polarization through the automated generation of persuasive and biased content. This study explores the extent to which fine-tuned LLMs can replicate and amplify polarizing discourse within online environments. Using a curated dataset of politically charged discussions extracted from Reddit, we fine-tune an open-source LLM to produce context-aware and ideologically aligned responses. The model's outputs are evaluated through linguistic analysis, sentiment scoring, and human annotation, with particular attention to credibility and rhetorical alignment with the original discourse. The results indicate that, when trained on partisan data, LLMs are capable of producing highly plausible and provocative comments, often indistinguishable from those written by humans. These findings raise significant ethical questions about the use of AI in political discourse, disinformation, and manipulation campaigns. The paper concludes with a discussion of the broader implications for AI governance, platform regulation, and the development of detection tools to mitigate adversarial fine-tuning risks.
[ "cs.CL", "cs.CY" ]
# 1 INTRODUCTION

The convergence of high-resolution structural biology and generative AI has ushered in a new era of rational drug design. Breakthroughs in cryo-EM and geometric deep learning now permit direct generation of ligand molecules within three-dimensional protein pockets, effectively unifying structural insight with chemical synthesis [1, 2, 3, 4]. Yet state-of-the-art generators still face a fundamental conflict: they must be stochastic enough to explore chemical space while simultaneously obeying the rigid geometric and energetic laws that govern molecular interactions.

Limitations of autoregressive frameworks. Early structure-conditioned methods such as Pocket2Mol [5] and GraphBP [8] assemble molecules in an atom-by-atom autoregressive manner. The resulting sequential bias accumulates errors and often traps the search in local optima [6]. Furthermore, their Cartesian-coordinate parametrization lacks rotational equivariance, leading to steric clashes [7]. Remedies based on multi-scale modelling [10] or fragment-level constraints [11] alleviate some artifacts but introduce substantial architectural and training complexity.

Promise and pitfalls of diffusion models. Generators based on diffusion recast molecule synthesis as a step-by-step denoising process [12, 13, 14, 16, 17]. Equivariant variants [18, 19, 25] deliver markedly improved spatial fidelity, whereas the methods that pioneered target-aware diffusion-based ligand generation explicitly condition the denoising process on pocket information [22, 26]. Nevertheless, the injected uniform Gaussian noise cannot align with the geometry of chemical bonds and often leads to invalid valence states or distorted three-dimensional conformations [41]. Corrections applied post hoc, such as the evolutionary optimization used in DiffSBDD [20] or the integrated scheme implemented in UniMoMo, mitigate these artifacts yet do not entirely eliminate the underlying mismatch [29].

Figure 1: READ overview.
The diffusion model aligns atom-level representations with those of a pre-trained encoder, operating within a drug-like latent manifold. During sampling, template molecules retrieved from a pocket-similarity graph supply pre-trained embeddings that steer the denoising trajectory toward synthetically accessible, pocket-compatible ligands. Retrieval-Enhanced Aligned Diffusion (READ). To address the above issues, a Retrieval-Enhanced Aligned Diffusion (READ) model is proposed, which fuses latent diffusion with empirical chemical knowledge in the two phases of molecular generation, as illustrated in Fig. 1. In the first phase, a three-dimensional latent manifold is learned from six million MMFF-optimised [30] ZINC molecules via contrastive learning that combines random coordinate perturbations with atom masking [42]. This encoder embeds physicochemical metrics and geometric constraints directly into latent gradients, eliminating handcrafted validity filters. In the second phase, a protein-pocket index built with TMalign [32] and DaliLite [33] retrieves ligand templates via coarse structural matching followed by local neighbor refinement [23]. Their pre-trained embeddings modulate the diffusion steps through a cross-modal alignment module, balancing exploration with pocket-specific exploitation. To our knowledge, READ is the first retrieval-augmented diffusion framework for de novo ligand design. The contributions of this study can be summarized as follows: • We demonstrate that contrastive pretraining alone suffices to encode chemical validity into latent topology and obviate handcrafted constraints. • We introduce a hierarchical retrieval strategy that jointly optimizes exploration and exploitation via context-aware guidance. • By tightly coupling latent diffusion with empirical knowledge, READ establishes a principled path toward synthesizable and target-specific molecule generation. # 2 Related Work Structure-Based Drug Design. 
Structure-based molecular generation has advanced from sequential coordinate prediction to three-dimensional geometric reasoning. Early autoregressive generators broke rotational symmetry and imposed sequential biases [5, 6, 8]. Subsequent schemes that fuse multi-scale features [10] or enforce fragment-based restraints [11] improved chemical validity, but increased architectural complexity and reduced training stability. More recent diffusion-based approaches capture long-range interactions through iterative denoising. For instance, EDM [18] introduced equivariant diffusion, GeoLDM [25] leveraged a geometric latent space to boost sampling efficiency, and TargetDiff [22] refined local coordinate projections for enhanced binding specificity. Nonetheless, Gaussian perturbations inherent to these methods still distort bond lengths and angles [41], resulting in valence errors and warped conformers [27]. Attempts to incorporate fragment priors via stitching, such as FLAG [28] and DrugGPS [34], mitigate some issues yet introduce unnatural linkage patterns. AlignDiff [21] leverages preference optimization to post-train a pretrained diffusion model.

Retrieval-Augmented Generation. Retrieval-augmented generation (RAG) enriches generative models by integrating exemplar data from large repositories [35]. Originating with the Retrieve-and-Read paradigm, it evolved into end-to-end differentiable frameworks such as DRAGON [37] and GraphRAG’s structured retrieval over molecular interaction graphs [36]. In drug design, some methods retrieve whole molecules or fragments to guide assembly: DeLinker [38] selects linkers from fragment libraries; RetMol [53] fuses exemplar compounds to steer generation toward target properties; and f-RAG [39] injects relevant fragments to balance diversity and validity. Others project retrieved molecules into pre-trained latent spaces to inform diffusion trajectories, as demonstrated by MolR [40].
However, fragment stitching can cause substructure mismatches, and global latent retrieval may introduce irrelevant features. Our hierarchical cross-modal retrieval mechanism overcomes these limitations by adjusting retrieval granularity and aligning geometric features across modalities.

Figure 2: READ pipeline. (A) Forward diffusion injects Gaussian noise into atomic coordinates and categorical noise into atom types, while the reverse process iteratively removes noise to recover a valid ligand and its position. (B) At inference, a context-aware encoder fuses a perturbed pocket-ligand pair with graph embeddings of template molecules retrieved by RAG from the pretrained latent manifold, steering the coordinate and type heads during denoising. The green branch (bottom) is used only in training to align diffusion states with the latent space via multi-layer projections and is omitted at sampling time.

# 3 Methodology

# 3.1 Generative Framework with Latent-Aligned Diffusion

READ formulates pocket-conditioned ligand generation as a retrieval-augmented diffusion process operating in two interleaved spaces: the molecular structure space $\mathcal{M} = (\mathbf{X}, \mathbf{V})$, where $\mathbf{X} \in \mathbb{R}^{N \times 3}$ denotes atomic coordinates and $\mathbf{V} \in \{0, 1\}^{N \times K}$ atom types; and the latent chemical space $\mathcal{Z}_m$, pre-trained to encode physicochemical constraints. Given a protein pocket $\mathcal{P} = (\mathbf{X}_p, \mathbf{V}_p)$, our goal is to learn a conditional distribution $p_\theta(\mathcal{M} \mid \mathcal{P})$ that respects both geometric complementarity and synthetic feasibility.
The generation process integrates three components through stochastic differential equations: $$ d\mathcal{M}_t = \underbrace{f_{\theta}(\mathcal{M}_t, t \mid \mathcal{P})\,dt + g_t\,d\mathbf{w}}_{\text{Diffusion Dynamics}} + \underbrace{\lambda \cdot \mathbb{E}_{\mathcal{M}_k \sim \mathcal{Z}_r(\mathcal{P})}[\phi(\mathcal{M}_k)]}_{\text{Retrieval Guidance}}, $$ where $f_{\theta}$ parameterizes the equivariant denoiser, $g_t$ controls noise scales, and $\phi(\cdot)$ projects retrieved ligands $\mathcal{M}_k$ from the pre-trained space $\mathcal{Z}_m$ into diffusion trajectories. The retrieval space $\mathcal{Z}_r$ is constructed as a graph with nodes $\{\mathcal{P}_i, \mathcal{M}_i\}$ and edges weighted by structural similarity metrics. Diffusion in Dual Spaces Our framework orchestrates simultaneous diffusion in molecular coordinates and atom types through coupled stochastic processes. For the 3D coordinates $\mathbf{X}$, a variance-exploding SDE is adopted to accelerate geometry relaxation [15, 22]: $$ q(\mathbf{X}_t \mid \mathbf{X}_0) = \mathcal{N}(\mathbf{X}_t; \mathbf{X}_0, \sigma_t^2 \mathbf{I}), \quad \sigma_t = \beta_{\max} \cdot t^2 + \beta_{\min}, $$ where the quadratic variance schedule enables rapid exploration of conformational space while preserving geometric continuity through equivariant graph convolutions.
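The variance-exploding forward noising above keeps the mean at $\mathbf{X}_0$ and only inflates the variance. A minimal sketch, assuming illustrative $\beta_{\min}$/$\beta_{\max}$ values (the paper's actual schedule constants are not given here):

```python
import numpy as np

def sigma(t, beta_min=0.01, beta_max=20.0):
    # Quadratic schedule sigma_t = beta_max * t^2 + beta_min.
    # The beta values are illustrative, not the paper's settings.
    return beta_max * t ** 2 + beta_min

def forward_noise(X0, t, rng):
    # Sample X_t ~ N(X_0, sigma_t^2 I): mean-preserving, variance-exploding.
    return X0 + sigma(t) * rng.normal(size=X0.shape)

rng = np.random.default_rng(1)
X0 = rng.normal(size=(12, 3))
scales = [sigma(t) for t in np.linspace(0.0, 1.0, 5)]
# The noise scale grows monotonically with t, so late steps explore widely.
assert all(a < b for a, b in zip(scales, scales[1:]))
assert forward_noise(X0, 0.5, rng).shape == X0.shape
```

Because $\sigma_t$ grows quadratically rather than linearly, most of the conformational exploration happens late in the forward process, which is what the text means by "rapid exploration of conformational space".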
Atom types $\mathbf{V}$ follow an absorbing-state diffusion process that maintains chemical validity: $$ q(\mathbf{V}_t \mid \mathbf{V}_{t-1}) = \alpha_t \mathbf{V}_{t-1} + (1 - \alpha_t)\mathbf{U}_K, \quad \mathbf{U}_K \sim \mathrm{Uniform}\{1, \ldots, K\}, $$ where the transition toward $\mathbf{U}_K$ enforces type consistency by only allowing transitions to valid atom types. The joint reverse process integrates both modalities through cross-modal attention: $$ p_{\theta}(\mathbf{X}_{t-1}, \mathbf{V}_{t-1} \mid \mathbf{X}_t, \mathbf{V}_t, \mathcal{P}) = \mathcal{N}(\mathbf{X}_{t-1}; \mu_{\theta}(\mathbf{X}_t, \mathbf{h}_t), \Sigma_t) \cdot \mathrm{Cat}(\mathbf{V}_{t-1}; \pi_{\theta}(\mathbf{h}_t)), $$ where the latent state $\mathbf{h}_t = \mathrm{EGNN}(\mathbf{X}_t, \mathbf{V}_t) + \mathrm{Attn}(\mathbf{X}_t, \phi(\mathcal{M}_k))$ combines local structural reasoning from EGNN [44] with global prior knowledge retrieved from the pretrained latent space $\mathcal{Z}_m$. This dual-path design inherently addresses geometric and chemical constraints through coupled stochastic processes: variance-controlled diffusion guides exploration in $\mathbf{X}$-space, absorbing probabilities in $\mathbf{V}_t$ enforce type consistency, and cross-modal attention mediates between local optimization and global prior knowledge. # 3.2 Latent Space Pretraining for Alignment A hierarchical pretraining–synthesis framework is introduced to learn a unified latent space $\mathcal{Z}_m$ spanning atomic, geometric, and diffusion-step representations.
Each atom $a_i$ is mapped to an eight-dimensional vector that fuses 3-D coordinates with key chemical descriptors: atom type, chirality, aromaticity, hybridization, and related attributes. Categorical attributes are embedded with learnable dictionaries $\mathbf{D}_k \in \mathbb{R}^{d_k \times |C_k|}$, whereas continuous features remain in their original scale. Corpora and augmentations. Starting from six million ZINC compounds that satisfy standard drug-likeness constraints, four complementary graph–geometry views are created for every molecule (atom masking, bond perturbation, subgraph removal, and small coordinate perturbations), thereby decoupling chemical validity from geometric stability [43]. Atom-aware contrastive objective. View pairs are encoded by an SE(3)-equivariant GNN and aligned with a temperature-scaled InfoNCE objective in Eq. (5), applied at multiple depths through layer-specific projectors. This multi-resolution alignment teaches the encoder to disentangle geometric consistency from synthetic accessibility without handcrafted rules. $$ \mathcal{L}_{\mathrm{InfoNCE}} = -\log \frac{\exp\bigl(s(g, g^{+})/\tau\bigr)}{\exp\bigl(s(g, g^{+})/\tau\bigr) + \sum_{g^{-}} \exp\bigl(s(g, g^{-})/\tau\bigr)}, \quad s(a, b) = \frac{a^{\top} b}{\|a\| \|b\|}, $$ where $s(a, b)$ denotes the cosine similarity; $g^{+}$ and $g^{-}$ are the positive and negative masked-graph embeddings; $g$ is the original (unmasked) graph embedding; and $\tau > 0$ is the temperature hyperparameter. Representation Alignment for Graph Neural Network. During training, alignment is imposed layer-wise between the diffusion model's hidden states and the pretrained ligand embeddings retrieved from the pocket–ligand database.
Specifically, let $x_t = x + \epsilon\sqrt{\bar{\alpha}_t}$ be the noisy input at timestep $t$, and let $h_{\theta}^{(l)}(x_t) \in \mathbb{R}^d$ denote the $l$-th layer hidden state of the diffusion model. For each retrieved ligand $\mathcal{M}_k$, the pretrained model extracts its embedding $y_{\phi,k}^{(l)} \in \mathbb{R}^d$ at the same layer $l$. The alignment loss is then defined as $$ \mathcal{L}_{\mathrm{align}} = -\mathbb{E}_{\mathcal{M}, \epsilon, t}\Biggl[\frac{1}{L}\sum_{l=1}^{L}\sum_{k=1}^{K} \log \frac{\exp\bigl(s\bigl(h_{\theta}^{(l)}(x_t), y_{\phi,k}^{(l)}\bigr)/\tau\bigr)}{\sum_{j=1}^{K} \exp\bigl(s\bigl(h_{\theta}^{(l)}(x_t), y_{\phi,j}^{(l)}\bigr)/\tau\bigr)}\Biggr], $$ where $s(a, b) = a^{\top} b / (\|a\| \|b\|)$ is cosine similarity, $K$ is the number of retrieved ligands, and $\tau > 0$ is the temperature. This mechanism ensures that, at each diffusion layer, the denoising trajectory is guided toward pocket-matched chemical priors, yielding conformations that are both energetically favorable and synthetically viable without explicit bond or valence constraints. Figure 3: Hierarchical retrieval-augmented guidance pipeline: (Step 1) Construct a pocket–ligand graph by linking pockets to their cognate ligands and weighting pocket–pocket edges with averaged TM-Align and DaliLite scores; (Step 2) At inference, use MSA to align the query pocket to a set of distant seeds, then retrieve its top-$K$ nearest neighbors; (Step 3) Select and embed ligands from those neighbors and fuse their embeddings into the denoiser.
The right panel illustrates how an additional alignment model integrates retrieved embeddings via distinct strategies during training versus sampling. # 3.3 Hierarchical Retrieval-Augmented Guidance The retrieval module closes the feedback loop between prior chemical knowledge and the latent-aligned diffusion dynamics in Sec. 3.1. The guidance term can be written as $\mathcal{G}(\mathcal{P}) = \lambda\,\mathbb{E}_{\mathcal{M}_k \sim \mathcal{Z}_r(\mathcal{P})}\bigl[\phi(\mathcal{M}_k)\bigr]$, where $\mathcal{Z}_r$ is the pocket–ligand graph. The rest of this subsection describes how the graph is constructed, queried, and fused into the denoiser. (i) Graph construction. Starting from the CBGBENCH [9] training split, a bipartite graph [36] is built, where pocket nodes $\{\mathcal{P}_i\}$ connect to their cognate ligand nodes $\{\mathcal{M}_i\}$. Edges between pockets carry structural-similarity weights: $$ \mathrm{sim}(\mathcal{P}_i, \mathcal{P}_j) = \frac{\mathrm{TM\text{-}Align}(\mathcal{P}_i, \mathcal{P}_j) + \mathrm{DaliLite}(\mathcal{P}_i, \mathcal{P}_j)}{2}. $$ Ligands are excluded from edge construction; this mirrors the sampling stage, where only pocket geometry is available. (ii) Hierarchical querying in sampling. Given a target pocket $\mathcal{P}$, we avoid an exhaustive $\mathcal{O}(|\mathcal{Z}_r|)$ search by a two-stage routine. First, ten seed pockets that are maximally distant from all others are precomputed. A fast MSA [46] then aligns $\mathcal{P}$ against these seeds, and the closest one $\mathcal{P}_{\star}$ is chosen as the entry node.
Second, the alignment is refined within the $K$ nearest neighbours of $\mathcal{P}_{\star}$ (default $K = 40$), producing a shortlist $\{\mathcal{P}_{\star}^{(j)}\}$ whose ligands are ranked by pocket-ligand complementarity. This coarse-to-fine pipeline reduces wall time by an order of magnitude while preserving the recall of biologically relevant templates. (iii) Embedding and fusion. For the top $m$ ligands (default $m = 4$), the pretrained embeddings $\phi(\mathcal{M}_k) \in \mathcal{Z}_m$ are obtained by the contrastive encoder from Sec. 3.2. At diffusion step $t$, the hidden state of the pre-trained model is updated by $\mathbf{h}_t \leftarrow \mathbf{h}_t + \mathrm{Context\text{-}aware}(\mathbf{h}_t, \phi(\mathcal{M}_k))$, thereby realizing $\mathcal{G}(\mathcal{P})$ within the stochastic differential equation. Since one embedding set is reused along the reverse trajectory, the added cost is negligible compared with force-field post-processing. (iv) Synergy with dual-space noise. The denoiser alternates coordinate updates with feature updates. Retrieved ligand embeddings influence both streams: in the type-noise branch, they bias the categorical logits of Eq. 3, steering atom-type recovery toward privileged chemistries; in the geometry-noise branch, they act as attractors in Eq. 2, pulling Gaussian point clouds toward known pharmacophoric motifs. This coupling reconciles global pocket similarity with local stereochemistry, succeeding where purely stochastic diffusion often fails. Figure 4: Qualitative assessment of READ candidates. For three representative CBGBench targets, 4KEU, 3U5Y and 2PQW (columns, left to right), the top row displays the best READ ligand selected from 100 samples while the bottom row shows the crystallographic reference ligand. The pocket surface is rendered in grey to highlight shape complementarity.
Dash-outlined panels list drug-likeness metrics (QED, SA, logP and Lipinski heavy-rule count, LPSK) together with the three AutoDock-Vina energies reported throughout the paper: Score, Minimize and Dock. Across all pockets the READ molecules exhibit markedly lower (better) Vina energies than their native counterparts, often by more than $2\,\mathrm{kcal\,mol^{-1}}$, while preserving acceptable synthetic accessibility and physicochemical profiles, visually confirming the quantitative gains reported in Sec. 4. # 4 Experiment The proposed READ is evaluated on CBGBench [9]. For each protein pocket, READ generates one hundred candidate ligands, discarding those with positive Vina energy (larger than $0\,\mathrm{kcal/mol}$), valence errors, or severe steric clashes. AutoDock Vina [47] is run in the Score, Minimize and Dock modes, yielding the mean Vina energy $\mathrm{E}_{\mathrm{Vina}}\downarrow$ and the improvement rate over reference ligands IMP$\uparrow$. Dock mode additionally reports the mean percentage binding gap MPBG$\uparrow$ and ligand-binding efficacy LBE$\uparrow$, both normalized for molecular size. Interaction-pattern fidelity is quantified with PLIP v3.2 [52]: Jensen–Shannon divergence JSD$\downarrow$ and mean absolute error MAE$\downarrow$ are computed across seven interaction classes at the per-pocket and global levels. Here, $\uparrow$ indicates higher-is-better whereas $\downarrow$ indicates lower-is-better. Our approach is compared with twelve leading baselines: autoregressive generators (Pocket2Mol [5], GraphBP [8], 3DSBDD [49]); diffusion models (TargetDiff [22], DecompDiff [24], DiffBP [26], DiffSBDD [20]); fragment generators (FLAG [28], D3FG [27]); and voxel-grid methods (VoxBind [51], MolCraft [50]). All models are tested on the same CBGBench splits with default Vina settings and PLIP v3.2. Training protocol.
Each model is trained on a single NVIDIA A6000 (48 GB) using a batch size of 8 and Adam with a learning rate of $10^{-3}$. Training lasts for 500,000 iterations (about six days of wall-clock time); convergence is typically reached after 350,000 iterations, well below the 2.5 million iterations often used by competitors. We release two READ variants with 1000 and 2000 denoising steps, each containing twelve million parameters and sharing identical hyperparameters. Sampling cost. Generating one hundred ligands requires 18 min for the 1000-step model and 35 min for the 2000-step model. Without retrieval-based guidance, these times fall to 15 min and 30 min, respectively. The retrieval graph comprises 2,200 protein nodes and 166,000 ligand nodes yet adds negligible overhead at inference. # 4.1 Interaction analysis Thirteen metrics are listed together with a Friedman weighted ranking in Table 1, where Dock-mode metrics receive double weight and all others single weight. READ-2k achieves Rank 1 overall, surpassing every baseline on the composite score. Although LiGAN retains the best raw Vina scores under Score and Minimize modes, READ-2k secures the strongest $\mathrm{E}_{\mathrm{Vina}}$ and MPBG in the Dock mode by injecting pocket-matched scaffolds at every reverse-diffusion step. Relative to voxel-grid methods, READ reduces the global $\mathrm{JSD}_{\mathrm{OA}}$ and $\mathrm{MAE}_{\mathrm{OA}}$, demonstrating finer atom-level contact fidelity. Compared with the best diffusion competitor, READ-2k shows the smallest PLIP divergences and errors, thus preserving pharmacophoric patterns most faithfully. It also records the highest validity rate and the top diffusion-only rank, confirming that contrastive pretraining together with retrieval guidance can balance binding affinity, interaction fidelity and chemical plausibility better than any other model.
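The candidate filtering and the improvement rate IMP described above amount to simple bookkeeping over per-pocket energy lists. A toy sketch, with hypothetical function names and made-up energies (the real pipeline additionally drops valence errors and steric clashes, omitted here):

```python
def filter_candidates(energies):
    # Discard candidates with non-negative Vina energy (> 0 kcal/mol is
    # rejected outright; validity checks are handled elsewhere).
    return [e for e in energies if e < 0.0]

def imp_rate(candidate_sets, reference_energies):
    # IMP: fraction of pockets whose best surviving candidate has a lower
    # (better) Vina energy than the pocket's native reference ligand.
    wins = 0
    for cands, ref in zip(candidate_sets, reference_energies):
        kept = filter_candidates(cands)
        if kept and min(kept) < ref:
            wins += 1
    return wins / len(reference_energies)

# Toy example: two pockets with reference energies -7.0 and -9.0 kcal/mol.
cands = [[-8.1, -6.0, 0.5], [-8.5, -7.9]]
refs = [-7.0, -9.0]
assert filter_candidates(cands[0]) == [-8.1, -6.0]
assert imp_rate(cands, refs) == 0.5   # only the first pocket improves
```

Lower Vina energies mean stronger predicted binding, which is why the comparison uses `min` and `<` rather than `max` and `>`.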
Table 1: Comparison of various generation models on interaction analysis Per-pocket docking gains. Figure 5 provides a pocket-wise view of READ's behaviour. Panels (a-c) plot the distribution of docking-score differences between the native ligand and the top candidate generated by READ under Score, Minimize, and Dock modes. Each histogram is strongly right-skewed, declines in performance are rare, and most pockets enjoy a clear energy reduction. The grey bars mark the few pockets where the reference ligand remains superior, whereas the light-blue bars dominate the range of positive gains. This pattern shows that READ rarely sacrifices binding affinity in pursuit of novelty; it tends instead to return ligands that improve upon the crystal binder while still passing stringent drug-likeness filters. Panels (d-f) sharpen the view by scattering each pocket's reference energy against the best READ energy, with color indicating the magnitude of the improvement. The cloud of points sits above the diagonal of parity for every protocol, and deeper shades congregate where the reference ligand is weakest. Hence, READ compensates precisely where conventional design struggles, yielding its largest gains on the most challenging pockets. We attribute this behavior to two factors. First, latent alignment embeds chemical priors that discourage invalid or over-strained conformations, so the energy landscape explored by the model excludes many of the traps that hinder earlier diffusion generators. Second, hierarchical retrieval supplies pocket-matched scaffolds that guide reverse diffusion toward regions of chemical space already validated by nature. Taken together with the aggregate results in Table 1, the figure confirms that READ's Rank 1 standing is not driven by a handful of favorable cases but reflects a consistent uplift across nearly all targets.
The method raises both the floor and the ceiling, lifting weak binders into a therapeutically relevant energy range while further polishing already strong complexes. # 4.2 Ablation study Table 2 contrasts three configurations. Baseline employs a denoiser that has never seen the contrastive encoder, Baseline RAG augments this unaligned model with retrieval, and READ integrates both alignment and retrieval in a single training loop. The comparison highlights two key observations. First, merely injecting one template embedding into an unaligned backbone yields only modest benefits, and coupling that same backbone with retrieval actually degrades the raw Vina score: a latent mismatch prevents the guidance signal from propagating into spatial coordinates. Second, aligning the denoiser to the encoder already lifts all interaction metrics, confirming that representation agreement regularises internal features. When retrieval is then activated on the aligned model, every score rises sharply: the binding-gap indicator grows by more than sevenfold under the shorter schedule and approaches a tenfold gain under the longer schedule. In plain language, alignment prepares the latent space, and retrieval then steers generation toward pocket-specific optima. Their combination therefore secures the Rank 1 position in the weighted benchmark while each component in isolation remains limited. Figure 5: (a–c) Distribution of docking-performance gains achieved by selecting, for each protein pocket, the top-scoring ligand (of 100 generated) that satisfies drug-likeness filters (QED > 0.5, SA > 0.6, Lipinski = 5), relative to the native ligand. Gray bars indicate cases where performance declined; light-blue bars indicate improvements. READ attains improvement probabilities of 96.7%, 98.4% and 96.8% for Score, Minimize and Dock, with mean Vina-score reductions of 2.88, 3.32 and 3.23, respectively. (d-f) Scatter plots of reference versus Top-1 docking metrics for Score, Minimize and Dock, colored by improvement magnitude. The dashed diagonal marks parity; the preponderance of points below it confirms that READ almost invariably generates drug-like molecules whose docking performance exceeds that of the native reference. Table 2: Ablation analysis of interaction metrics # 4.3 Chemical fidelity Table 3 compares how well the generators preserve substructure statistics and avoid steric clashes. Our unaligned diffusion baseline drifts in composition and shows frequent clashes. Once the model is aligned (READ-1k), the divergences already shrink, and adding retrieval (READ-2k) tightens them further, driving functional-group errors down and halving clash ratios, all while keeping static-geometry scores on par with the best voxel baselines. Together, these results demonstrate that READ excels in both docking performance and chemical realism. Latent alignment supplies a chemically aware manifold, retrieval injects pocket-matched priors, and their synergy yields ligands that fit, bind, and respect real-world chemistry without resorting to heavy post-generation fixes. Table 3: Substructure fidelity and geometric-clash analysis. Moreover, for structural and geometric evaluation, a prior-informed paradigm was adopted: retrieved molecules were decomposed via BRICS, and a fragment comprising at least five heavy atoms was randomly selected and held fixed (i.e., no diffusion noise was applied to its atoms). This strategy reduced the number of atoms requiring denoising and led to improved preservation of ring types and functional groups. Although the introduction of an entirely new fragment occasionally resulted in larger deviations in atomic types compared to the reference ligand, geometric stability was enhanced and steric clash rates were significantly lowered.
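Operationally, holding a BRICS fragment fixed during denoising amounts to masking its atoms out of the coordinate noise. A minimal sketch, with arbitrarily chosen fragment indices standing in for a real BRICS decomposition:

```python
import numpy as np

def noised_with_fixed_fragment(X0, fixed_idx, sigma_t, rng):
    # Add Gaussian coordinate noise only to atoms outside the held fragment;
    # fixed_idx marks the BRICS fragment (>= 5 heavy atoms) kept noise-free.
    Xt = X0 + sigma_t * rng.normal(size=X0.shape)
    Xt[fixed_idx] = X0[fixed_idx]          # fragment atoms stay in place
    return Xt

rng = np.random.default_rng(7)
X0 = rng.normal(size=(10, 3))
fixed = np.arange(5)                       # first five atoms form the fragment
Xt = noised_with_fixed_fragment(X0, fixed, sigma_t=2.0, rng=rng)
assert np.allclose(Xt[fixed], X0[fixed])   # fragment untouched
assert not np.allclose(Xt[5:], X0[5:])     # remaining atoms were perturbed
```

Fewer atoms carry noise, so fewer need denoising, which is consistent with the improved ring-type and functional-group preservation reported above.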
# 5 Discussion Experimental results show that READ-2k sets a new state of the art for pocket-conditioned ligand generation. Table 1 reports that READ-2k obtains the strongest mean dock energy and the largest percentage binding gap in the entire benchmark, which places the model at Rank 1. A validity rate nearly twice that of the next best method confirms that retrieval guidance generates chemically sound ligands without trading away affinity. Figure 5 (a–c) further illustrates that READ-2k improves docking for almost every pocket and lowers the mean Vina energy by a noticeable margin, with similar trends in Score and Minimize modes. Alignment and retrieval in combination. The ablation in Table 2 distinguishes three settings. A model trained without alignment serves as the Baseline; adding retrieval to this unaligned model yields Baseline RAG; finally, READ applies both alignment and retrieval. Alignment on its own lifts the improvement rate but leaves the binding gap nearly unchanged, whereas retrieval on an unaligned backbone even harms the raw Vina score. Only when the two components act together does the binding gap expand by roughly one order of magnitude, showing that alignment prepares the latent space and retrieval steers generation toward the pocket optimum. Interaction fidelity. PLIP analysis confirms that READ reproduces pharmacophoric features more accurately than every diffusion competitor. It records the smallest Jensen–Shannon divergence and the lowest mean absolute error, indicating superior recovery of hydrogen bonds, hydrophobic contacts, and salt bridges, the interactions that govern activity and selectivity in downstream assays. Chemical realism. Table 3 reveals that READ narrows divergences in atom, ring, and functional-group statistics while cutting clash ratios almost in half relative to the unaligned baseline.
Static-geometry scores remain on par with heavily post-processed methods such as UniMoMo, yet READ achieves these qualities directly from the generator without expensive clean-up. Limitations and outlook. Dependence on a pre-constructed retrieval graph of about two thousand pockets and one hundred and sixty-six thousand ligands can hinder performance on novel targets. Separating pretraining from diffusion simplifies optimization but still leaves a gap that a full end-to-end framework might close, provided that stability controls are in place. Cross-modal attention and retrieval look-ups also add a modest overhead in memory and compute. Future work will enlarge the retrieval library with model-generated ligands, embed differentiable scoring into the diffusion objective, and handle receptor flexibility through ensemble-based retrieval.
Breakthroughs in high-accuracy protein structure prediction, such as AlphaFold, have established receptor-based molecule design as a critical driver for rapid early-phase drug discovery. However, most approaches still struggle to balance pocket-specific geometric fit with strict valence and synthetic constraints. To resolve this trade-off, a Retrieval-Enhanced Aligned Diffusion model, termed READ, is introduced; it is the first to merge molecular Retrieval-Augmented Generation with an SE(3)-equivariant diffusion model. Specifically, a contrastively pre-trained encoder aligns atom-level representations during training and then retrieves graph embeddings of pocket-matched scaffolds to guide each reverse-diffusion step at inference. This single mechanism injects real-world chemical priors exactly where needed, producing valid, diverse, and shape-complementary ligands. Experimental results demonstrate that READ achieves highly competitive performance on CBGBench, surpassing state-of-the-art generative models and even native ligands. This suggests that retrieval and diffusion can be co-optimized for faster, more reliable structure-based drug design.
# 1 Introduction The chase is a fundamental algorithm in database theory that is applied to address a wide range of problems. For instance, it is used to check containment of queries under constraints, in data exchange settings, or to solve ontology-based query answering; see the introductions of [11, 13] for more information. Technically speaking, the chase is a bottom-up materialisation procedure that attempts to compute a universal model (a model that can be embedded into all other models via homomorphism) for a knowledge base (KB), consisting of an (existential) rule set and a database. Example 1. Consider the KB $\mathcal{K}_1 = \langle\Sigma, D\rangle$ where $D$ is the database $\{\mathsf{Bicycle}(b)\}$ and $\Sigma$ contains: $$ \begin{array}{ll} \forall x.\,\mathsf{Bicycle}(x) \to \exists y.\,\mathsf{HasPart}(x, y) \land \mathsf{Wheel}(y) & \quad \forall x, y.\,\mathsf{HasPart}(x, y) \to \mathsf{IsPartOf}(y, x) \\ \forall x.\,\mathsf{Wheel}(x) \to \exists y.\,\mathsf{IsPartOf}(x, y) \land \mathsf{Bicycle}(y) & \quad \forall x, y.\,\mathsf{IsPartOf}(x, y) \to \mathsf{HasPart}(y, x) \end{array} $$ Then, $\{\mathsf{Bicycle}(b), \mathsf{HasPart}(b, t), \mathsf{IsPartOf}(t, b), \mathsf{Wheel}(t)\}$ is a universal model of $\mathcal{K}_1$. 1 Other researchers refer to these first-order formulas as "tuple generating dependencies" or simply as "TGDs". Figure 1: Some finite and infinite restricted chase sequences of $\mathcal{K}_1$; atoms are numbered by the step at which they were introduced. Although there are many variants of the chase, they all implement a similar strategy. Namely, they start with the database and then, in a step-by-step manner, extend this structure with new atoms to satisfy the rules in the input rule set in a most general way.
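Universality as defined above is a homomorphism property, and for small fact sets it can even be checked by brute force. A sketch, where the encoding of facts as (predicate, arguments) tuples and the brute-force search over null images are our own conventions:

```python
from itertools import product

def is_homomorphism(h, facts_a, facts_b):
    # Check that the mapping h (nulls to terms; constants map to themselves)
    # sends every fact of A to a fact of B.
    return all((p, tuple(h.get(t, t) for t in args)) in facts_b
               for p, args in facts_a)

def embeds(facts_a, facts_b, nulls):
    # Brute-force search for a homomorphism A -> B that may rename nulls.
    targets = {t for _, args in facts_b for t in args}
    return any(is_homomorphism(dict(zip(nulls, image)), facts_a, facts_b)
               for image in product(targets, repeat=len(nulls)))

# The universal model of K1 (with null t) embeds into a two-wheel model...
universal = {("Bicycle", ("b",)), ("HasPart", ("b", "t")),
             ("IsPartOf", ("t", "b")), ("Wheel", ("t",))}
other = {("Bicycle", ("b",)),
         ("HasPart", ("b", "w1")), ("HasPart", ("b", "w2")),
         ("IsPartOf", ("w1", "b")), ("IsPartOf", ("w2", "b")),
         ("Wheel", ("w1",)), ("Wheel", ("w2",))}
assert embeds(universal, other, nulls=["t"])
# ...and vice versa (mapping both wheels onto t): the two models are
# homomorphically equivalent, as universal models always are.
assert embeds(other, universal, nulls=["w1", "w2"])
```

The second assertion illustrates why universal models are only unique up to homomorphic equivalence, not up to isomorphism.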
Since none of these variants are guaranteed to terminate (some KBs do not even admit finite universal models), it is only natural to wonder about their respective halting problems [1, 5, 6, 10, 13, 18]. Despite intensive efforts, some results have remained open (until now!). Specifically, prior research has established tight bounds for all classes of chase-terminating KBs and rule sets, except for the following: • The class $\mathsf{CTK}_{\forall}^{rest}$ of all KBs that only admit finite restricted chase sequences. • The class $\mathsf{CTR}_{\forall}^{rest}$ containing a rule set $\Sigma$ if $\langle\Sigma, D\rangle \in \mathsf{CTK}_{\forall}^{rest}$ for every database $D$. Our main contribution is to show that both classes are $\Pi_1^1$-complete, a surprising result given that these problems are significantly harder than the corresponding ones for other chase variants [13]. The restricted chase differs from other variants in that it introduces new terms to satisfy existential quantifiers in rules only if these are not already satisfied by existing terms. Because of this, the order of rule applications impacts the termination of a chase sequence. For instance, the KB $\mathcal{K}_1$ from Example 1 admits both finite and infinite restricted chase sequences; some of these are represented in Fig. 1, where atoms are numbered to denote the sequence step at which they were introduced. $\mathsf{CTK}_{\forall}^{rest}$ has been claimed to be recursively enumerable (RE) in [13], probably with the following procedure in mind: given an input KB, compute all of its restricted chase sequences in parallel, and halt and accept if all of them are finite. Alas, this strategy does not work as there are terminating input KBs that admit infinitely many finite sequences of ever-increasing length. Example 2.
Consider the KB $\mathcal{K}_2 = \langle\Sigma, D\rangle$ where $D$ is the database $\{\mathsf{Real}(a), \mathsf{E}(a, c), \mathsf{E}(c, b), \mathsf{Real}(c), \mathsf{E}(b, b), \mathsf{Brake}(b)\}$ and $\Sigma$ is the rule set that contains all of the following: $$ \begin{array}{c} \forall x, y, z.\,\mathsf{Real}(x) \land \mathsf{E}(x, y) \land \mathsf{Real}(y) \land \mathsf{Brake}(z) \to \exists v.\,\mathsf{E}(y, v) \land \mathsf{E}(v, z) \land \mathsf{Real}(v) \\ \forall x.\,\mathsf{Brake}(x) \to \mathsf{Real}(x) \end{array} $$ For any $k \geq 1$, there is a restricted chase sequence of $\mathcal{K}_2$ that yields the (finite) universal model $D \cup \{\mathsf{E}(c, t_1)\} \cup \{\mathsf{E}(t_i, t_{i+1}) \mid i < k\} \cup \{\mathsf{E}(t_i, b), \mathsf{Real}(t_i) \mid i \leq k\} \cup \{\mathsf{Real}(b)\}$ of $\mathcal{K}_2$. Such a sequence is obtained by applying the first rule $k$ consecutive times and then applying the second one once to derive $\mathsf{Real}(b)$. After this application, the first rule is satisfied and the restricted chase halts. The KB $\mathcal{K}_2$ in the previous example is in $\mathsf{CTK}_{\forall}^{rest}$ because of fairness. This is a built-in condition in the definition of all chase variants that guarantees that the chase yields a model of the KB by requiring that, if a rule is applicable at some point during the computation of a sequence, then this rule must eventually be satisfied.
Hence, the second rule in $\mathcal { K } _ { 2 }$ must sooner or later be applied in all restricted chase sequences and thus, all such sequences are finite. The KB in Example 2 uses a technique called the emergency brake, initially proposed by Krötzsch et al. in [16]. The idea is to connect every term in the chase to a special term (the constant $b$ in this example) that is not “Real” and acts as a “Brake”. Eventually, this term becomes “Real” because of fairness, all existential restrictions are satisfied, and the restricted chase halts. The emergency brake allows to grow the chase for an arbitrary number of steps whilst guaranteeing its termination. By activating infinite sequences of emergency brakes, we emulate the eternal recurrence often displayed by $\Pi _ { 1 } ^ { 1 }$ -complete problems and thus define the reductions that lead to our main results. After presenting the necessary preliminaries in Section 2, we discuss related work in Section 3. Then, we show that ${ \sf C T K } _ { \forall } ^ { r e s t }$ and ${ \bf C } { \sf T } { \sf R } _ { \forall } ^ { r e s t }$ are $\Pi _ { 1 } ^ { 1 }$ -complete in Sections 4 and 5, respectively. In Section 6, we propose an alternative to fairness for the restricted chase that simplifies its universal termination problem. We conclude with a brief discussion about future work in Section 7. # 2 Preliminaries First-Order Logic. Consider pairwise disjoint, countably infinite sets of predicates Preds, variables Vars, constants Cons, and nulls Nulls. Every predicate has an arity through ar : Preds $ \mathbb { N } \cup \{ 0 \}$ . Elements in Vars ∪ Cons ∪ Nulls are called terms. An atom is an expression of the form $\mathsf { P } ( \vec { t } )$ where $\vec { t }$ a list of terms and P is a $| \vec { t } |$ -ary predicate. A fact is a variable-free atom. An (existential) rule $R$ is a closed first-order formula of the form $\forall \vec { x } , \vec { y } . B [ \vec { x } , \vec { y } ] \exists \vec { z } . 
H[\vec{y},\vec{z}]$ where $\vec{x}, \vec{y}$, and $\vec{z}$ are pairwise disjoint lists of variables; $B$ and $H$ are null-free conjunctions of atoms featuring exactly the variables in $\vec{x},\vec{y}$ and $\vec{y},\vec{z}$, respectively; and $H$ is non-empty. We write $\mathsf{body}(R)$ and $\mathsf{head}(R)$ to denote $B$ and $H$, respectively; and refer to the list $\vec{y}$ of variables as the frontier of $R$. We omit universal quantifiers for brevity. A database is a finite fact set without nulls. A knowledge base (KB) is a pair $\langle \Sigma, D \rangle$ consisting of a finite rule set $\Sigma$ and a database $D$. The Chase. A substitution $\sigma$ is a partial mapping from variables to constants or nulls. For an (arbitrary) expression $\varphi$, let $\sigma(\varphi)$ be the expression that results from $\varphi$ by replacing all occurrences of every variable $v$ in $\varphi$ by $\sigma(v)$ if the latter is defined. A trigger is a pair $\langle R, \sigma \rangle$ consisting of a rule $R$ and a substitution $\sigma$ that is defined exactly on the universally quantified variables in $R$. The support of a trigger $\langle R, \sigma \rangle$ is $\mathsf{support}(\langle R, \sigma \rangle) = \sigma(\mathsf{body}(R))$. A trigger $\langle R, \sigma \rangle$ is loaded for a fact set $F$ if this fact set includes its support; and obsolete for $F$ if there exists a substitution $\sigma'$ that extends $\sigma$ to the existential variables in $R$ such that $\sigma'(\mathsf{head}(R)) \subseteq F$.
The output of a trigger $\langle R, \sigma \rangle$ that is not obsolete for $F$ is $\mathsf{output}(\langle R, \sigma \rangle) = \sigma'(\mathsf{head}(R))$, where $\sigma'$ is some substitution that extends $\sigma$ by mapping every existential variable in $R$ to a fresh null. A $\Sigma$-trigger is a trigger with a rule in $\Sigma$. Definition 3. A (restricted) chase derivation for a KB $\langle \Sigma, D \rangle$ is a possibly infinite sequence $F_0, F_1, \ldots$ of fact sets such that (1) $F_0 = D$ and, (2) for each $i \geq 0$, there is some $\Sigma$-trigger $\langle R, \sigma \rangle$ that is loaded and not obsolete for $F_i$ such that $F_{i+1} = F_i \cup \mathsf{output}(\langle R, \sigma \rangle)$. Such a chase derivation is a (restricted) chase sequence if, (3) for every $\Sigma$-trigger $\lambda$ and every $i \geq 0$ such that $\lambda$ is loaded for $F_i$, there is some $j \geq i$ such that $\lambda$ is obsolete for $F_j$. Condition (3) is known as fairness. Note that, if no appropriate trigger according to condition (2) exists for some $i \geq 0$, then the sequence necessarily ends at $F_i$. The result of a chase sequence $\mathcal{F}$ is the union of all fact sets in $\mathcal{F}$. It is well-known that the result $F$ of any chase sequence for a KB $\mathcal{K} = \langle \Sigma, D \rangle$ is a universal model for $\mathcal{K}$. That is, every model of $\mathcal{K}$ can be homomorphically embedded into $F$, which is also a model of this theory. Note that, if we consider infinite sequences, the result of the chase may not be a model of $\mathcal{K}$ if we disregard fairness. A chase sequence terminates if it is finite. A KB existentially terminates if it admits a terminating chase sequence; it universally terminates if all of its chase sequences terminate.
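To make Definition 3 concrete, here is a small brute-force sketch of a single restricted-chase step; the encoding (facts as `(predicate, args)` tuples, uppercase argument names as variables) is our own illustration, not part of the paper. It enumerates loaded triggers, skips obsolete ones, and applies the first remaining trigger with fresh nulls:

```python
from itertools import count, product

fresh = count()  # global source of fresh nulls

def matches(body, facts, sub=None):
    """Enumerate substitutions under which every body atom is in `facts`."""
    sub = dict(sub or {})
    if not body:
        yield sub
        return
    (pred, args), rest = body[0], body[1:]
    for fpred, fargs in facts:
        if fpred != pred or len(fargs) != len(args):
            continue
        s, ok = dict(sub), True
        for a, t in zip(args, fargs):
            ok = (s.setdefault(a, t) == t) if a.isupper() else (a == t)
            if not ok:
                break
        if ok:
            yield from matches(rest, facts, s)

def obsolete(head, exvars, sub, facts):
    """Does some extension of `sub` to `exvars` satisfy the head in `facts`?"""
    terms = {t for _, args in facts for t in args}
    for vals in product(terms, repeat=len(exvars)):
        s = {**sub, **dict(zip(exvars, vals))}
        if all((p, tuple(s.get(a, a) for a in args)) in facts
               for p, args in head):
            return True
    return False

def chase_step(rules, facts):
    """Apply one loaded, non-obsolete trigger; None if the sequence ends."""
    for body, head, exvars in rules:
        for sub in matches(body, facts):
            if obsolete(head, exvars, sub, facts):
                continue  # trigger is obsolete for `facts`: skip it
            s = {**sub, **{z: f"n{next(fresh)}" for z in exvars}}
            out = {(p, tuple(s.get(a, a) for a in args)) for p, args in head}
            return facts | out
    return None

# E(X,Y) -> exists Z. E(Y,Z): on a 2-cycle every trigger is obsolete,
# so the restricted chase halts immediately.
rule = ([("E", ("X", "Y"))], [("E", ("Y", "Z"))], ("Z",))
assert chase_step([rule], {("E", ("a", "b")), ("E", ("b", "a"))}) is None
```

Iterating `chase_step` until it returns `None` yields exactly a (finite) restricted chase derivation in the sense of Definition 3; on a database like `{E(a,b)}` alone, each step instead produces a fresh null and the iteration never stops.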
A rule set $\Sigma$ existentially terminates if every KB with $\Sigma$ existentially terminates; it universally terminates if every KB with $\Sigma$ universally terminates. The classes of knowledge bases that existentially and universally terminate are denoted by ${\sf CTK}_{\exists}^{rest}$ and ${\sf CTK}_{\forall}^{rest}$, respectively. The classes of rule sets that existentially and universally terminate are denoted by ${\bf CTR}_{\exists}^{rest}$ and ${\bf CTR}_{\forall}^{rest}$, respectively. We also consider similar classes for the oblivious and core chase variants, which we denote in the obvious manner. For instance, $\mathsf{CTR}_{\exists}^{obl}$ is the set of all rule sets that existentially terminate for the oblivious chase. Turing Machines. As per our definition, all machines reuse the same initial state. Moreover, machines do not write blanks and cannot access accepting or rejecting states; these are not relevant in our context because we only consider halting problems. Definition 4. A (non-deterministic Turing) machine is a tuple $\langle Q, \Gamma, \delta \rangle$ where $Q$ is a set of states that contains the initial state $q_0$, $\Gamma$ is a tape alphabet with $\Gamma \supseteq \{0,1\}$ and $\mathsf{B} \notin \Gamma$, and $\delta$ is a transition function for $Q$. That is, $\delta$ is a function that maps from $Q \times (\Gamma \cup \{\mathsf{B}\})$ to $\mathcal{P}(Q \times \Gamma \times \{\leftarrow, \rightarrow\})$. Definition 5. A configuration for a machine $\langle Q, \Gamma, \delta \rangle$ is a tuple $\langle n, t, p, q \rangle$ where $n$ is a natural number; $t : \{1, \ldots
, n\} \to \Gamma \cup \{\mathsf{B}\}$ is a function such that $t(n) = \mathsf{B}$, and $t(i+1) = \mathsf{B}$ if $t(i) = \mathsf{B}$ for some $1 \leq i < n$; $p$ is a number in $\{1, \ldots, n\}$; and $q$ is a state in $Q$. The starting configuration on some word $w_1, \ldots, w_n \in \{0,1\}^*$ is the tuple $\langle n+1, t, 1, q_0 \rangle$ where $t$ is the function that maps 1 to $w_1$, 2 to $w_2$, $\ldots$, $n$ to $w_n$, and $n+1$ to $\mathsf{B}$. For a configuration $\langle n, t, p, q \rangle$, we use $t$ to encode the contents of the tape at each position; moreover, we use $p$ and $q$ to encode the position of the head and the state of the machine, respectively. Note that elements of the tape alphabet $\Gamma$ may not occur after a blank symbol in such a configuration. Definition 6. Consider a machine $M = \langle Q, \Gamma, \delta \rangle$ and a configuration $\rho = \langle n, t, p, q \rangle$ with $q \in Q$. Then, let $Next_M(\rho)$ be the smallest set that, for every $\langle r, a, d \rangle \in \delta(t(p), q)$ with $d = {\rightarrow}$ or $p \geq 2$, contains the configuration $\langle n+1, t', p', r \rangle$ where: • Let $t'(p) = a$, let $t'(n+1) = \mathsf{B}$, and let $t'(i) = t(i)$ for every $1 \leq i \leq n$ with $i \neq p$. • If $d = {\leftarrow}$, then $p' = p - 1$; otherwise, $p' = p + 1$. As described above, any given machine defines a function that maps configurations to sets of configurations. An exhaustive traversal through a path in this possibly infinite tree of configurations that begins with a starting configuration yields a run: Definition 7. A run of a machine $M$ on a configuration $\rho_1$ is a possibly infinite sequence $S = \rho_1, \rho_2, . .
.$ of configurations such that $\rho_{i+1}$ is in $Next_M(\rho_i)$ for every $1 \leq i < |S|$, and $Next_M(\rho_{|S|}) = \emptyset$ if $S$ is finite. A partial run of $M$ on $\rho_1$ is a sequence of configurations that can be extended into a run of $M$ on $\rho_1$. A (partial) run of $M$ on a word $\vec{w}$ is a (partial) run on the starting configuration of $\vec{w}$. Computability Theory. The arithmetical hierarchy consists of classes of formal languages $\Sigma_i^0$ with $i \geq 1$ where $\Sigma_1^0$ is the class of all semi-decidable languages and $\Sigma_{i+1}^0$ is obtained from $\Sigma_i^0$ with a Turing jump [19]. The co-classes are denoted by $\Pi_i^0$. Equivalently, these classes can be viewed as the sets of natural numbers definable by first-order logic formulas with bounded quantifier alternation. That is, $\Sigma_i^0$ is the class of sets of natural numbers definable with a formula of the form $\exists \vec{x}_1 \forall \vec{x}_2 \dots Q_i \vec{x}_i. \phi[x, \vec{x}_1, \dots, \vec{x}_i]$ where $\phi$ is a quantifier-free formula and $Q_i$ is $\exists$ if $i$ is odd or $\forall$ otherwise. For $\Pi_i^0$, the alternation starts with $\forall$. We also consider the first level of the analytical hierarchy; that is, $\Sigma_1^1$ and $\Pi_1^1$ [19]. The analytical hierarchy can analogously be defined using second-order formulae with bounded second-order quantifier alternation. In the following, we introduce complete problems for these classes that we later use in our reductions. Consider a machine $M$ and a state $q_r$. • The machine $M$ is non-recurring through $q_r$ on some word $\vec{w}$ if every run of $M$ on $\vec{w}$ features $q_r$ finitely many times.
• It is universally non-recurring through $q_r$ if it is non-recurring through $q_r$ on all words. • It is robustly non-recurring through $q_r$ if every run of $M$ on any configuration features $q_r$ finitely many times. We obtain $\Pi_1^1$-completeness of the first problem by adjusting a proof from the literature [15], and of the latter two using simple reductions that we define in Appendix A. Table 1. Undecidability status of the main decision problems related to chase termination; the results presented without citations refer to the main contributions of this article # 3 Related Work Novel Notation. The notation introduced in Section 2 to refer to classes of terminating KBs and rule sets differs from previous literature [13]; for instance, we write ${\bf CTR}_{\forall}^{rest}$ instead of ${\bf CT}_{\forall\forall}^{rest}$. Moreover, given some database $D$, we do not consider a class such as ${\bf CT}_{D\forall}^{rest}$ [13], which contains a rule set $\Sigma$ if $\langle \Sigma, D \rangle$ universally terminates for the restricted chase. For our purposes, it is clearer to consider a single class of terminating KBs (such as ${\mathsf{CTK}}_{\forall}^{rest}$) instead of one class of terminating rule sets for every possible database because of the following result. Proposition 8. For a database $D'$, a quantifier $Q \in \{\forall, \exists\}$, and a chase variant $var \in \{obl, rest, core\}$; there is a many-one reduction from $\mathsf{CTK}_Q^{var}$ to $\mathsf{CT}_{D'Q}^{var}$ and vice-versa. Proof.
There is a many-one reduction from $\mathsf{CT}_{D'Q}^{var}$ to $\mathsf{CTK}_Q^{var}$ since, for a rule set $\Sigma$, we have that $\Sigma \in \mathsf{CT}_{D'Q}^{var}$ if and only if $\langle \Sigma, D' \rangle \in \mathsf{CTK}_Q^{var}$. To show that there is a many-one reduction in the other direction, we describe a computable function that maps a KB $\mathcal{K} = \langle \Sigma, D \rangle$ into a rule set $\Sigma'$ such that $\mathcal{K} \in \mathsf{CTK}_Q^{var}$ if and only if $\Sigma' \in \mathsf{CT}_{D'Q}^{var}$. Namely, let $\Sigma'$ be the rule set that results from applying the following modifications to $\Sigma$: (i) replace all occurrences of every predicate $P$ with a fresh predicate $P'$, (ii) add the conjunction $\bigwedge_{P(\vec{c}) \in D} P'(\vec{c})$ to the body of every rule, and (iii) add a rule with head $\bigwedge_{P(\vec{c}) \in D} P'(\vec{c})$. The reduction is correct because one can easily establish a one-to-one correspondence between the sequences of $\mathcal{K}$ and those of $\langle \Sigma', D' \rangle$ once we ignore the single trigger with head $\bigwedge_{P(\vec{c}) \in D} P'(\vec{c})$ at the beginning of every sequence of the latter KB. Note that the sets of facts produced at subsequent steps of these corresponding sequences are identical modulo replacement of all occurrences of every predicate $P$ by $P'$. □ Chase Termination in the General Case. All decision problems related to chase termination are undecidable. However, these are complete for different classes within the arithmetical and analytical hierarchies, as summarised in Table 1.
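Returning briefly to Proposition 8, its three modification steps can be sketched in code; the tuple encoding and the `Start` atom standing in for the trigger of the rule added in step (iii) are our own simplifications, not the paper's construction:

```python
# Atoms are (predicate, args) tuples; rules are (body, head) pairs.

def prime(atom):
    pred, args = atom
    return (pred + "_p", args)  # step (i): fresh primed copy P' of P

def internalise(rules, database):
    """Fold a database D into a rule set, as in steps (i)-(iii)."""
    d_primed = [prime(f) for f in database]
    out = []
    for body, head in rules:
        # steps (i) + (ii): rename predicates and guard every body
        # with the primed copy of D
        out.append(([prime(a) for a in body] + d_primed,
                    [prime(a) for a in head]))
    # step (iii): one rule whose head derives the primed copy of D
    # (its body is abstracted here by a hypothetical Start atom)
    out.append(([("Start", ())], d_primed))
    return out

rules = [([("E", ("x", "y"))], [("E", ("y", "z"))])]
database = [("E", ("a", "b"))]
sigma_p = internalise(rules, database)
assert sigma_p[0] == ([("E_p", ("x", "y")), ("E_p", ("a", "b"))],
                      [("E_p", ("y", "z"))])
```

The guard added in step (ii) ensures that no renamed rule can fire before the step-(iii) rule has produced the primed copy of $D$, which is what yields the one-to-one correspondence between chase sequences claimed in the proof.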
In the following paragraphs, we discuss some simple proofs as well as the relevant references to understand all of the results in this table. One can readily show via induction that, if a fact occurs in some oblivious chase sequence of some KB, then it also occurs in all oblivious chase sequences of this KB. Hence, all such chase sequences of a KB yield the same result, and thus we conclude that $\mathsf{CTK}_{\exists}^{obl} = \mathsf{CTK}_{\forall}^{obl}$ and $\mathsf{CTR}_{\exists}^{obl} = \mathsf{CTR}_{\forall}^{obl}$. Deutsch et al. proved that, if a KB admits a finite universal model, then all of its core chase sequences yield precisely this model and thus all of these sequences are finite; see Theorem 7 in [6]. Regardless of the variant, all terminating chase sequences yield a (not necessarily minimal) finite universal model; hence, if a KB does not admit a finite universal model, then it does not admit any finite chase sequence. Therefore, we have that either all core chase sequences of a KB are finite or all of them are infinite. Because of this, we conclude that $\mathrm{CTK}_{\exists}^{core} = \mathrm{CTK}_{\forall}^{core}$ and $\mathrm{CTR}_{\exists}^{core} = \mathrm{CTR}_{\forall}^{core}$. To understand why $\mathsf{CTK}_{\exists}^{obl}$ (resp. ${\sf CTK}_{\exists}^{rest}$ or ${\sf CTK}_{\exists}^{core}$) is recursively enumerable (RE), consider the following procedure: given some input KB, compute all of its oblivious (resp. restricted or core) chase sequences in parallel and accept as soon as you find a finite one. Deutsch et al. proved that ${\sf CTK}_{\exists}^{rest}$ is RE-hard.
More precisely, they defined a reduction that takes a machine $M$ as input and produces a KB $\mathcal{K}$ as output such that $M$ halts on the empty word if and only if $\mathcal{K}$ is in ${\sf CTK}_{\exists}^{rest}$; see Theorem 1 in [6]. This reduction works because all restricted chase sequences of $\mathcal{K}$ yield the same result, which encodes the computation of $M$ on the empty word with a grid-like structure (as we ourselves do in later sections). One can use the same reduction to show that $\mathsf{CTK}_{\exists}^{obl}$ is also RE-hard. Deutsch et al. also proved that ${\sf CTK}_{\exists}^{core}$ is RE-hard. More precisely, they showed that checking if a KB admits a finite universal model is undecidable; see Theorem 6 in [6]. Moreover, they proved that the core chase is a procedure that halts and yields a finite universal model for an input KB if this theory admits one; see Theorem 7 of the same paper. Therefore, the core chase can be applied as a semi-decision procedure for checking if a KB admits a finite universal model. In Section 4, we argue that ${\bf CTK}_{\forall}^{rest}$ is $\Pi_1^1$-complete. This contradicts Theorem 5.1 in [13], which states that ${\sf CTK}_{\forall}^{rest}$ is RE-complete. Specifically, it is claimed that this theorem follows from results in [6], but the authors of that paper only demonstrate that ${\sf CTK}_{\forall}^{rest}$ is undecidable without proving that it is in RE. Before our completeness result, the tightest lower bound was proven by Carral et al., who proved that this class is $\Pi_2^0$-hard; see Proposition 42 in [5]. Marnette proved that $\mathsf{CTR}_{\exists}^{obl}$ is in RE.
More precisely, he showed that a rule set $\Sigma$ is in $\mathsf{CTR}_{\exists}^{obl}$ if and only if the KB $\langle \Sigma, D_\Sigma^\star \rangle$ is in $\mathsf{CTK}_{\exists}^{obl}$ where $D_\Sigma^\star = \{\mathsf{P}(\star, \dots, \star) \mid \mathsf{P} \in \mathsf{Preds}(\Sigma)\}$ is the critical instance and $\star$ is a special fresh constant; see Theorem 2 in [18]. This result follows because one can show that, for any database $D$, the (only) result of the oblivious chase of $\langle \Sigma, D_\Sigma^\star \rangle$ includes the (only) result of the oblivious chase of $\langle \Sigma, D \rangle$ if we replace all syntactic occurrences of constants in the latter with $\star$. Since $\mathsf{CTK}_{\exists}^{obl}$ is in RE, we conclude that $\mathsf{CTR}_{\exists}^{obl}$ is also in this class. Gogacz and Marcinkowski proved that $\mathsf{CTR}_{\exists}^{obl}$ is RE-hard. More precisely, they presented a reduction that takes a 3-counter machine $M$ as input and produces a rule set $\Sigma$ such that $M$ halts on $\varepsilon$ if and only if $\langle \Sigma, D_\Sigma^\star \rangle$ is in $\mathsf{CTK}_{\exists}^{obl}$; see Lemma 6 in [10]. Hence, $M$ halts on $\varepsilon$ if and only if $\Sigma$ is in $\mathsf{CTR}_{\exists}^{obl}$ by Theorem 2 in [18]. Furthermore, Bednarczyk et al. showed that this hardness result holds even when we consider single-head binary rule sets; see Theorem 1.1 in [1]. To understand why ${\bf CTR}_{\exists}^{rest}$ is in $\Pi_2^0$, consider the following semi-decision procedure that can access an oracle that decides the RE-complete class ${\mathsf{CTK}}_{\exists}^{rest}$: given some input rule set $\Sigma$, iterate through every database $D$, use the oracle to decide if $\langle \Sigma, D \rangle$ is in ${\sf CTK}_{\exists}^{rest}$, and accept if this is not the case. Consider an analogous procedure to understand why ${\mathrm{CTR}}_{\exists}^{core}$ is in $\Pi_2^0$. Grahne and Onet proved that ${\bf CTR}_{\exists}^{rest}$ is $\Pi_2^0$-hard. To show this, they defined two reductions that take a word rewriting system $R$ and a word $\vec{w}$ as input, and produce a rule set $\Sigma_R$ and a database $D_{\vec{w}}$, respectively. Then, they proved that $R$ terminates on $\vec{w}$ if and only if the KB $\langle \Sigma_R, D_{\vec{w}} \rangle$ is in ${\mathsf{CTK}}_{\exists}^{rest}$; this claim holds because $\langle \Sigma_R, D_{\vec{w}} \rangle$ only admits a single restricted chase result, which encodes all branches of computation of $R$ on $\vec{w}$ in an implicit tree-like structure. Therefore, $R$ is uniformly terminating if $\Sigma_R$ is in ${\bf CTR}_{\exists}^{rest}$. To ensure that $\Sigma_R$ is in ${\bf CTR}_{\exists}^{rest}$ if $R$ is uniformly terminating, Grahne and Onet make use of “flooding”, a technique used in earlier work dealing with datalog boundedness [7]. For a comprehensive presentation of this technique and its applications, see Section 2 of [11]. Using the very same reduction, Grahne and Onet also proved that ${\mathrm{CTR}}_{\exists}^{core}$ is $\Pi_2^0$-hard. In Section 5, we show that ${\bf CTR}_{\forall}^{rest}$ is $\Pi_1^1$-complete. This contradicts Theorem 5.16 in [13], where it is stated that this class is $\Pi_2^0$-complete. The error in the upper bound of this theorem arose from the assumption that ${\sf CTK}_{\forall}^{rest}$ is in RE, which, as previously discussed, is not the case.
Regarding the lower bound, they consider an extended version of this class of rule sets where they allow the inclusion of a single “denial constraint”; that is, an implication with an empty head that halts the chase if its body is satisfied during the computation of a chase sequence. They prove that the always restricted halting problem for rule sets is $\Pi_2^0$-hard if one such constraint is allowed. Our results imply that we do not need to consider such an extension to obtain a higher lower bound. Chase Termination of Syntactic Fragments. Undeterred by the undecidability results discussed above, researchers have proven that we can decide chase termination if we consider syntactic fragments of existential rules for which query entailment is decidable [2, 3, 12, 17]. Another way of checking termination in practice is to develop acyclicity and cyclicity notions; that is, sufficient conditions for termination and non-termination of the chase. Indeed, experiments show that we can determine chase termination for a large proportion of real-world rule sets with these checks [4, 8, 9, 14]. # 4 Knowledge base termination Theorem 9. The class ${\sf CTK}_{\forall}^{rest}$ is $\Pi_1^1$-complete. The theorem immediately follows from the upcoming Lemma 12 and Lemma 13. For the membership part, we define a non-deterministic Turing machine that loops on $q_r$ if and only if there is a non-terminating chase sequence for a given rule set. Definition 10. Consider a rule set $\Sigma$. For a fact set $F$, let $\mathsf{active}(F)$ be the set of all triggers with a rule in $\Sigma$ that are loaded and not obsolete for $F$. Let $\mathcal{M}_\Sigma$ be a non-deterministic Turing machine with start state $q_0$ and a designated state $q_r$ that executes the following procedure. (1) Check if the input tape contains a valid encoding of a database. If not, halt.
(2) Initialize two counters $i = j = 0$ and a set of facts $F_0$ containing exactly the encoded database. (3) If $\mathsf{active}(F_i)$ is empty, halt. (4) Non-deterministically pick a trigger $\langle R, \sigma \rangle$ from $\mathsf{active}(F_i)$ and let $F_{i+1} = F_i \cup \sigma'(\mathsf{head}(R))$ where $\sigma'$ extends $\sigma$ by mapping existential variables in $R$ to fresh nulls (not occurring in $F_i$). (5) If all triggers in $\mathsf{active}(F_j)$ are obsolete for $F_i$, then increment $j$ and visit $q_r$ once. (6) Increment $i$ and go to (3). Lemma 11. For every database $D$ and rule set $\Sigma$, there is a run of $\mathcal{M}_\Sigma$ on the encoding of $D$ that visits $q_r$ infinitely often if and only if there is a non-terminating chase sequence for $\langle \Sigma, D \rangle$. Proof. Assume that there is a run of $\mathcal{M}_\Sigma$ on the encoding of $D$ that visits $q_r$ infinitely many times. Then, the sequence $F_0, F_1, \ldots$ constructed by $\mathcal{M}_\Sigma$ is an infinite restricted chase derivation for $\langle \Sigma, D \rangle$ by construction. Since $q_r$ is visited infinitely many times, $j$ grows towards infinity. Therefore, every trigger that is loaded for some $F_j$ with $j \geq 0$ is obsolete for some $F_i$ with $i \geq j$; which is exactly fairness. Hence, the infinite derivation is a proper chase sequence. Assume that there is an infinite chase sequence $F_0, F_1, \ldots$ for $\langle \Sigma, D \rangle$. By definition, for each $i \geq 0$, there is a trigger $\lambda \in \mathsf{active}(F_i)$ that yields $F_{i+1}$. Hence, there is a run of $\mathcal{M}_\Sigma$ that nondeterministically picks these triggers.
Because of fairness, for every trigger $\lambda$ in $\mathsf{active}(F_j)$ with $j \geq 0$, there is $i \geq j$ such that $\lambda$ is obsolete for $F_i$. Hence, the run of $\mathcal{M}_\Sigma$ visits $q_r$ infinitely often. □ Lemma 12. Deciding membership in ${\sf CTK}_{\forall}^{rest}$ is in $\Pi_1^1$. Proof. We show a reduction to non-recurrence through $q_r$ on the empty word. For a given KB $\langle \Sigma, D \rangle$, let $\mathcal{M}_\Sigma^D$ be a non-deterministic Turing machine that results from $\mathcal{M}_\Sigma$ by adding an initial step that replaces the initial tape content by an encoding of $D$. Then, by Lemma 11, $\langle \Sigma, D \rangle$ is in ${\sf CTK}_{\forall}^{rest}$ if and only if no run of $\mathcal{M}_\Sigma^D$ on the empty input visits $q_r$ infinitely many times. □ Lemma 13. The class ${\sf CTK}_{\forall}^{rest}$ is $\Pi_1^1$-hard. Proc. ACM Manag. Data, Vol. 3, No. 2 (PODS), Article 109. Publication date: May 2025. To prove hardness, we reduce non-recurrence through $q_r$ on the empty word to knowledge base termination. In other words, to a Turing machine $M$, we will associate a database $D_\varepsilon$ and a rule set $\Sigma_M$ such that there exists a run of $M$ on the empty word reaching $q_r$ infinitely often if and only if the restricted chase of $\Sigma_M$ on $D_\varepsilon$ does not halt. A perhaps surprising feature of this reduction is that the restricted chase must halt for rule sets generated from Turing machines that do not halt on the empty word, as long as they reach $q_r$ only finitely often. As we cannot get any computable bound on the number of steps required to reach $q_r$, we must simulate any finite run of the Turing machine in a terminating way. This calls for the use of emergency brakes as presented in the introduction.
We “stack” such brakes, each one being responsible for preventing non-termination for runs that do not go through $q_r$. Schema. We will make use of the following predicates. Note that the last position usually holds an emergency brake. We introduce: • For each letter $a$ in the Turing machine alphabet or equal to the blank B, a binary predicate a. • For each state $q$ of the Turing machine, a binary predicate q. • Two ternary predicates F and R, that encode the successor relation for time and for cells. • Two binary predicates $\mathsf{C_L}$ and $\mathsf{C_R}$, used to copy tape contents. • A unary predicate Real and a binary predicate NextBr, used for the machinery of emergency brakes. • Two unary predicates Brake and End to identify terms used as emergency brakes and the last element of a configuration, respectively. Each time a new term is created during the chase, we link it in a specific way to the relevant brake. To simplify the subsequent presentation, we denote by $\mathsf{brSet}(x,w)$ the set of atoms $\{\mathsf{F}(x,w,w), \mathsf{R}(x,w,w), \mathsf{Real}(x), \mathsf{Brake}(w)\}$. The remainder of this section is devoted to the reduction from the “non-recurrence through $q_r$” problem to knowledge base restricted chase termination. We first present the reduction, and then focus on the main ideas required to show correctness. The Reduction. Each configuration $\rho$ of a Turing machine is encoded by a database as follows. Definition 14.
The database $D_\rho$ encoding a configuration $\rho = \langle n, t, p, q \rangle$ is $$ D_\rho = \{\mathsf{R}(c_i, c_{i+1}, w_1), \mathsf{a_i}(c_i, w_1) \mid 1 \leq i \leq n;\ \mathsf{a_i} = t(i)\} \cup \{\mathsf{q}(c_p, w_1), \mathsf{B}(c_{n+1}, w_1), \mathsf{End}(c_{n+1}, w_1)\} \cup \bigcup_{1 \leq i \leq n+1} \mathsf{brSet}(c_i, w_1) $$ For a word $w$, we denote by $D_w$ the database $D_{\rho_w}$, where $\rho_w$ is the initial configuration of $M$ on $w$. Given a Turing machine $M$ with states $Q$ and tape alphabet $\Gamma$, we build $\Sigma_M$ composed of the following rules. We first have a set of rules required for setting up emergency brakes. $$ \begin{array}{rl} & \mathsf{Brake}(w) \to \bigwedge_{\mathsf{a} \in \Gamma \cup \{\mathsf{B}\}} \mathsf{a}(w,w), \bigwedge_{\mathsf{q} \in Q} \mathsf{q}(w,w), \mathsf{F}(w,w,w), \mathsf{R}(w,w,w), \mathsf{C_L}(w,w), \mathsf{C_R}(w,w), \mathsf{Real}(w), \mathsf{NextBr}(w,w) \\ & \mathsf{brSet}(x,w), \mathsf{NextBr}(w,w') \to \mathsf{brSet}(x,w') \end{array} $$ The next four rules are responsible for simulating the moves of the head of the Turing machine. The first two rules deal with the case where the machine is not in $q_r$, and the head moves to the right (resp. to the left).
The important feature of these rules is the presence in both the body and the head of the same brake $w$. For all $q \neq q_r$, $q' \in Q$ and $a, b, c \in \Gamma \cup \{\mathsf{B}\}$ such that $(q', b, \rightarrow) \in \delta(q, a)$: $$ \begin{array}{rl} & \mathsf{q}(x,w), \mathsf{a}(x,w), \mathsf{R}(x,y,w), \mathsf{c}(y,w), \mathsf{brSet}(x,w), \mathsf{brSet}(y,w) \\ & \quad \to \exists x', y'.\ \mathsf{q'}(y',w), \mathsf{c}(y',w), \mathsf{b}(x',w), \mathsf{C_L}(x',w), \mathsf{C_R}(y',w), \\ & \qquad \mathsf{R}(x',y',w), \mathsf{F}(x,x',w), \mathsf{F}(y,y',w), \mathsf{brSet}(x',w), \mathsf{brSet}(y',w) \end{array} $$ For all $q \neq q_r$, $q' \in Q$ and $a, b, c \in \Gamma \cup \{\mathsf{B}\}$ such that $(q', b, \leftarrow) \in \delta(q, a)$: $$ \begin{array}{rl} & \mathsf{q}(x,w), \mathsf{a}(x,w), \mathsf{R}(y,x,w), \mathsf{c}(y,w), \mathsf{brSet}(x,w), \mathsf{brSet}(y,w) \\ & \quad \to \exists x', y'.\ \mathsf{q'}(y',w), \mathsf{c}(y',w), \mathsf{b}(x',w), \mathsf{C_L}(y',w), \mathsf{C_R}(x',w), \\ & \qquad \mathsf{R}(y',x',w), \mathsf{F}(x,x',w), \mathsf{F}(y,y',w), \mathsf{brSet}(x',w), \mathsf{brSet}(y',w) \end{array} $$ The
following two rules treat the case where the transition is from $q_r$. The only difference with the two above rules is the introduction of a new brake $w'$ in the head of the rules. This permits non-terminating restricted chase sequences in the presence of specific runs. For all $q' \in Q$ and $a, b, c \in \Gamma \cup \{\mathsf{B}\}$ such that $(q', b, \rightarrow) \in \delta(q_r, a)$: $$ \begin{array}{rl} & \mathsf{q_r}(x,w), \mathsf{R}(x,y,w), \mathsf{a}(x,w), \mathsf{c}(y,w), \mathsf{brSet}(x,w), \mathsf{brSet}(y,w) \\ & \quad \to \exists x', y', w'.\ \mathsf{q'}(y',w'), \mathsf{c}(y',w'), \mathsf{b}(x',w'), \mathsf{R}(x',y',w'), \\ & \qquad \mathsf{F}(x,x',w'), \mathsf{F}(y,y',w'), \mathsf{C_L}(x',w'), \mathsf{C_R}(y',w'), \\ & \qquad \mathsf{brSet}(x',w'), \mathsf{brSet}(y',w'), \mathsf{NextBr}(w,w') \end{array} $$ For all $q' \in Q$ and $a, b, c \in \Gamma \cup \{\mathsf{B}\}$ such that $(q', b, \leftarrow) \in \delta(q_r, a)$: $$ \begin{array}{rl} & \mathsf{q_r}(x,w), \mathsf{R}(y,x,w), \mathsf{a}(x,w), \mathsf{c}(y,w), \mathsf{brSet}(x,w), \mathsf{brSet}(y,w) \\ & \quad \to \exists x', y', w'.\ \mathsf{q'}(y',w'), \mathsf{c}(y',w'), \mathsf{b}(x',w'), \mathsf{R}(y',x',w'), \\ & \qquad \mathsf{F}(x,x',w'), \mathsf{F}(y,y',w'), \mathsf{C_L}(y',w'), \mathsf{C_R}(x',w'), \\ & \qquad \mathsf{brSet}(x',w'), \mathsf{brSet}(y',w'), \mathsf{NextBr}(w,w') \end{array} $$ The following rules copy the content of unchanged cells to the right and the left of the head from one configuration to the next. We instantiate one of each rule for each $a \in \Gamma \cup \{\mathsf{B}\}$. $$ \begin{array}{rl} & \mathsf{C_R}(x',w'), \mathsf{F}(x,x',w'), \mathsf{R}(x,y,w), \mathsf{a}(y,w), \mathsf{brSet}(x,w), \mathsf{brSet}(x',w'), \mathsf{brSet}(y,w) \\ & \quad \to \exists y'.\ \mathsf{F}(y,y',w'), \mathsf{R}(x',y',w'), \mathsf{a}(y',w'), \mathsf{C_R}(y',w'), \mathsf{brSet}(y',w') \\ & \mathsf{C_L}(x',w'), \mathsf{F}(x,x',w'), \mathsf{R}(y,x,w), \mathsf{a}(y,w), \mathsf{brSet}(x,w), \mathsf{brSet}(x',w'), \mathsf{brSet}(y,w) \\ & \quad \to \exists y'.\ \mathsf{F}(y,y',w'), \mathsf{R}(y',x',w'), \mathsf{a}(y',w'), \mathsf{C_L}(y',w'), \mathsf{brSet}(y',w') \end{array} $$ Finally, we extend the represented part of the configuration by one cell at each step, as is coherent with our definition of Turing machine runs: $$ \begin{array}{rl} & \mathsf{C_R}(x',w'), \mathsf{F}(x,x',w'), \mathsf{End}(x,w), \mathsf{brSet}(x,w), \mathsf{brSet}(x',w') \\ & \quad \to \exists y'.\ \mathsf{R}(x',y',w'), \mathsf{B}(y',w'), \mathsf{End}(y',w'), \mathsf{brSet}(y',w') \end{array} $$ Example 15. Consider a machine $M = \langle \{q_0, q_r\}, \{0,1\}, \delta \rangle$ where $\delta$ is a transition function that maps $\langle q_0, 0 \rangle$ to $\{\langle q_r, 1, \rightarrow \rangle\}$, $\langle q_r, \mathsf{B} \rangle$ to $\{\langle q_0, 1, \leftarrow \rangle\}$, $\langle q_0, 1 \rangle$ to $\{\langle q_r, 1, \rightarrow \rangle\}$, and $\langle q_r, 1 \rangle$ to $\{\langle q_0, 1, \leftarrow \rangle\}$; note how the (only) run of $M$ on the word 0 contains infinitely many configurations with the state $q_r$. In this representation, every label on an edge or a term represents several facts in the chase. For the sake of clarity, these labels can be extended with another argument, which should be some “Brake” term in the same dashed or later dashed box. Fig. 2. An Infinite Restricted Chase Sequence of $\left.
\Sigma _ { M } , D _ { 0 } \right.$ where $M$ is the machine from Example 15, and AllPreds $\geq 2$ above is a shortcuts for $^ { \mathrm { { s } } } \mathsf { F }$ , R, $q _ { 0 } , q _ { r }$ , 0, 1, B, CR, $C _ { \mathsf { L } }$ , nextBr”. Correctness proof of the reduction. The reduction is now fully described, and we claim that: Proposition 16. $\Sigma _ { M }$ universally halts for the restricted chase on $D _ { \rho }$ if and only if there exists no run of $\cdot _ { M }$ on $\rho$ that goes infinitely often through $q _ { r }$ . We first prove that if there exists a run of $M$ going through $q _ { r }$ infinitely often, then there exists a non-terminating chase sequence. To that purpose, we identify interesting subsets of databases. Definition 17 (Wild Frontier of Configuration $\rho$ ). $A$ set of atoms $F$ has a wild frontier of configuration $\rho = \langle n , t , p , q \rangle$ overseen by $w \in$ terms $( F )$ if there exists $x _ { 1 } , . . . , x _ { n + 1 } \in$ terms $( F )$ such that: , $\mathsf { R e a l } ( w ) \notin F .$ ; · $\left\{ \mathsf { R } ( x _ { i } , x _ { i + 1 } , w ) , \mathsf { a } _ { \mathrm { i } } ( x _ { i } , w ) \right\} \subseteq F$ for all $i \in \{ 1 , \ldots , n \}$ , $\mathsf { a } _ { \mathrm { i } } = t ( i )$ ; • $\mathsf { q } ( x _ { p } , w )$ $\mathfrak { c } _ { \boldsymbol { p } } , w ) , \mathsf { E n d } ( { \boldsymbol { x } } _ { n + 1 } , w ) , \mathsf { B } ( { \boldsymbol { x } } _ { n + 1 } , w ) \in F ,$ ; • brSet $( x _ { i } , w ) \in F$ for all $i \in \left\{ 1 , \ldots , n + 1 \right\}$ ; • any other atom of ${ \bf \dot { \boldsymbol { F } } }$ having $x _ { i }$ as first argument has 𝑤 as second. 
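The recurrence behaviour claimed in Example 15 can be checked with a direct simulation. The head directions below (rightward after reading in $q_0$, leftward after reading in $q_r$) are an assumption, chosen as the only combination under which the run on the word 0 does not halt early:

```python
# Simulation of the machine M from Example 15 on the input word "0".
# Transition directions are assumed: +1 (right) from q0, -1 (left) from qr;
# any other choice makes the run halt, contradicting the example.
BLANK = "B"
delta = {
    ("q0", "0"): ("qr", "1", +1),
    ("qr", BLANK): ("q0", "1", -1),
    ("q0", "1"): ("qr", "1", +1),
    ("qr", "1"): ("q0", "1", -1),
}

def run(word, steps):
    """Run M on `word` for `steps` steps; count configurations in state qr."""
    tape = dict(enumerate(word))
    state, head, visits_qr = "q0", 0, 0
    for _ in range(steps):
        key = (state, tape.get(head, BLANK))
        if key not in delta:
            break  # no applicable transition: the machine halts
        state, symbol, move = delta[key]
        tape[head] = symbol
        head += move
        if state == "qr":
            visits_qr += 1
    return visits_qr

print(run("0", 100))  # qr is entered on every second step
```

After the first transition, the head oscillates forever between the first two cells, alternating $q_0$ and $q_r$, which is exactly the infinitely recurring behaviour the reduction must reproduce in the chase.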
A wild frontier has three important features: (i) it contains the necessary atoms to simulate the run of a Turing machine on that configuration; (ii) it is correctly connected to a (not yet real) brake $w$; (iii) it does not contain atoms preventing the above run from being simulated through a restricted derivation. By comparing Definition 14 and Definition 17, it is clear that $D_\varepsilon$ has a wild frontier of the configuration of $M$ on the empty word, overseen by $w_1$. The construction of an infinite restricted derivation proceeds by inductively applying the following key proposition.
Proposition 18. If $F$ has a wild frontier of $\rho$ overseen by $w$, and $\rho'$ is reachable in one step by a transition of $M$, then there exists a restricted derivation $\mathcal{D}_{\rho \to \rho'} = F, \ldots, F'$ such that $F'$ has a wild frontier of $\rho'$ overseen by $w'$, where $w' \neq w$ is a fresh existential if $\rho$ is in $q_r$, and $w' = w$ otherwise.
Concatenating the infinite sequence of derivations built in Proposition 18 does not however provide a fair sequence of derivations, because of Rules $R_{\mathsf{Brake}}$ and $R_{\mathsf{nextBr}}$ and of the non-determinism of $M$. Fairness is enforced by applying $R_{\mathsf{Brake}}$ and $R_{\mathsf{nextBr}}$ "late enough" to ensure that none of the triggers involved in the proof of Proposition 18 are made obsolete. This is possible because, as the run of $M$ goes through $q_r$ infinitely often, infinitely many brakes are created. Details are provided in the appendix.
Lemma 19. Let $(\rho_i)_{i \in \mathbb{N}}$ be a run of $M$ on the empty word that visits $q_r$ infinitely often.
There exists an infinite restricted chase sequence for $\langle \Sigma_M, D_\varepsilon \rangle$.
To show the converse, we fix an infinite restricted chase sequence $\mathcal{D}$ as $(F_i)_{i \in \mathbb{N}}$, where $F_0 = D_\varepsilon$. We build from $\mathcal{D}$ an infinite run that visits $q_r$ infinitely often by identifying a substructure of the chase, consisting of state atoms. We then prove that a run can be built from these state atoms (and other elements of the chase), which fulfills the required conditions.
Definition 20. A state atom of $F$ is an atom of $F$ of the form $\mathsf{q}(x, w)$ where $q \in Q$ and $x$ is not a brake. A state atom $A$ precedes $A'$ if there is a trigger $t$ such that $A \in \mathrm{support}(t)$ and $A' \in \mathrm{output}(t)$. In this case, we write $A \prec A'$.
It is worth noticing that in the chase of $\langle \Sigma_M, D_\varepsilon \rangle$, state atoms are organised as a tree structure rooted in the unique state atom belonging to $D_\varepsilon$, and such that $A$ is a parent of $A'$ if and only if $A \prec A'$. Intuitively, we can assign to each of these state atoms a configuration such that the configuration associated with $A'$ is reachable in one transition of $M$ from the configuration associated with its parent $A$. The key property is that in an infinite restricted chase, there exists an infinite sequence $(A_n)_{n \in \mathbb{N}}$ with good properties.
Lemma 21. For all databases $D$, and all infinite chase sequences from $\langle \Sigma_M, D \rangle$ with result $F$, there is an infinite sequence $(A_n)_{n \in \mathbb{N}}$ of state atoms of $F$ such that:
• $A_0 \in D$;
• $A_n \prec A_{n+1}$ for all $n \in \mathbb{N}$;
• for infinitely many $i \in \mathbb{N}$, $A_i$ is of the shape $\mathsf{q}_r(t_i, w_i)$.
Proof sketch. Since the rules that introduce state atoms (Rules $R_{q_r}^{\rightarrow}$, $R_{q_r}^{\leftarrow}$, $R_{\neg q_r}^{\rightarrow}$ and $R_{\neg q_r}^{\leftarrow}$) feature a state atom in their body, $\prec$ defines a forest structure over state atoms, where the root of each tree is an atom of the database. There is thus a finite number of trees. We can prove by induction that there is a finite number of atoms that feature a given brake. Thus, there are infinitely many brakes in $F$. Then, since the rules that introduce new brakes (Rules $R_{q_r}^{\rightarrow}$ and $R_{q_r}^{\leftarrow}$) introduce a state atom too, there are infinitely many state atoms. Thus, one of the trees must be infinite, and since branching can be proven to be finite, there must be an infinite branch by König's lemma. □
Lemma 22. For every configuration $\rho$, if the restricted chase does not terminate on $\langle \Sigma_M, D_\rho \rangle$ then there exists a run of $M$ on $\rho$ which visits $q_r$ infinitely many times.
Lemmas 19 and 22 directly imply Proposition 16, and hence the correctness of the reduction.
# 5 Rule set termination
Theorem 23. $\mathsf{CTR}_\forall^{rest}$ is $\Pi_1^1$-complete.
The theorem immediately follows from the upcoming Lemma 24 and Lemma 25.
Lemma 24. Deciding membership in $\mathsf{CTR}_\forall^{rest}$ is in $\Pi_1^1$.
Proof. We reduce to universal non-recurrence through $q_r$.
More precisely, we show that a rule set $\Sigma$ is in $\mathsf{CTR}_\forall^{rest}$ if and only if $\mathcal{M}_\Sigma$ from Definition 10 is universally non-recurring through $q_r$.
Fig. 3. First three steps of the restricted chase from $\langle \mathcal{R}_M, D \rangle$ as defined in Example 26. The predicate F and the brakes are not represented for the sake of readability, but terms are connected through the future predicate to an element on the same line at the previous step. Unlabeled arrows represent R-atoms.
If $\Sigma$ is in $\mathsf{CTR}_\forall^{rest}$, then $\langle \Sigma, D \rangle$ is in $\mathsf{CTK}_\forall^{rest}$ for each $D$. Hence, by Lemma 11, $\mathcal{M}_\Sigma$ is non-recurring on every input that is the encoding of some database $D$. On inputs that are not encodings of databases, $\mathcal{M}_\Sigma$ halts immediately by Definition 10. Therefore, $\mathcal{M}_\Sigma$ is universally non-recurring.
If $\mathcal{M}_\Sigma$ is universally non-recurring through $q_r$, then, in particular, $\mathcal{M}_\Sigma$ is non-recurring through $q_r$ on every input that is the encoding of a database. Hence, by Lemma 11, each restricted chase sequence for each knowledge base with $\Sigma$ is finite. Therefore, $\Sigma$ is in $\mathsf{CTR}_\forall^{rest}$. □
Lemma 25. $\mathsf{CTR}_\forall^{rest}$ membership is $\Pi_1^1$-hard.
The rest of the section is dedicated to proving this lemma, by reducing robust non-recurrence through $q_r$ to rule set termination.
In fact, the reduction is very similar to the one we use for knowledge base termination: to a machine $M$, we associate the rule set $\Sigma_M$, which will belong to $\mathsf{CTR}_\forall^{rest}$ if and only if $M$ is robustly non-recurring through $q_r$. The direct implication follows from Lemma 22 by contrapositive: if a Turing machine $M$ is not robustly non-recurring through $q_r$, then there is a configuration $\rho$ such that $M$ visits $q_r$ infinitely many times from $\rho$. Then, by Lemma 22, the restricted chase does not terminate on $\langle \Sigma_M, D_\rho \rangle$, and thus $\Sigma_M \notin \mathsf{CTR}_\forall^{rest}$.
The other direction requires more work. Consider a Turing machine $M$, and assume that there is some database $D$ such that the restricted chase does not terminate on $\langle \Sigma_M, D \rangle$. We then show that $M$ is not robustly non-recurring through $q_r$. Since the restricted chase does not terminate on $\langle \Sigma_M, D \rangle$, there is an infinite chase sequence from this knowledge base. We use $F$ to denote its result. As in Section 4, by Lemma 21, $F$ contains an infinite sequence of state atoms $\mathcal{A} = (A_n)_{n \in \mathbb{N}}$ such that $A_0 \in D$, $A_n \prec A_{n+1}$ for all $n \in \mathbb{N}$, and there are infinitely many integers $i$ such that $A_i$ is a $\mathsf{q}_r$-atom.
In the knowledge base case, we had control over the database as part of the knowledge base, which meant that we could start from a "well-formed" database (in the sense that it encodes a single start configuration). This allowed us to extract the unique configuration associated with a state atom. However, in the rule set case, the database $D$ leading to non-termination is arbitrary and can contain any kind of structure, as highlighted by the following example.
Example 26. Consider a Turing machine $M$ that moves to the right in every step, writing 1 regardless of the symbol it reads. It alternates between its start state $q_0$ and the designated state $q_r$. Now, consider the database depicted on the left side of Fig. 3, which contains the atoms $\mathsf{R}(a, b_1, w)$, $\mathsf{R}(a, b_2, w)$, $\mathsf{R}(b_1, c, w)$, $\mathsf{R}(b_2, c, w)$, $\mathsf{R}(c, d, w)$, $\mathsf{R}(d, e, w)$, $\mathsf{q}_0(c, w)$, $1(a, w)$, $1(b_1, w)$, $0(b_2, w)$, $0(c, w)$, $1(d, w)$, $1(e, w)$, $\mathsf{B}(e, w)$, and $\mathrm{brSet}(x, w)$ for all $x \in \{a, b_1, b_2, c, d, e\}$. This database represents four different configurations, each with a tape of size 5, the start state $q_0$, and the head positioned at the third cell. These configurations correspond to tapes with contents 11011, 10011, 1101B, and 1001B. As the simulation progresses, these configurations evolve simultaneously, creating new structures shown in the middle and right of Fig. 3. Notice how term $e$ has two successors through the F predicate, one for each symbol atom it belongs to. Furthermore, when the head encounters a branching structure, it splits into two, as observed in the third step of the simulation. In a sense, if the machine simulation is able to perform steps on the database at all, then it will gradually "heal" the structure step by step towards proper encodings of machine configurations.
As highlighted by this example, the structure of the set of atoms connected to a state atom not present in the database is specific: it is the union of two trees rooted in the state atom. The first has arrows going towards the state atom, and the second one has arrows going away from the state atom.
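The four configurations represented by the database of Example 26 can be enumerated mechanically. The sketch below drops the brake argument and records, for each term, its R-successors and the symbol atoms it carries (term $e$ carries both 1 and B):

```python
# R-edges and symbol atoms of the Example 26 database (brake argument dropped).
edges = {"a": ["b1", "b2"], "b1": ["c"], "b2": ["c"], "c": ["d"], "d": ["e"]}
symbols = {"a": ["1"], "b1": ["1"], "b2": ["0"], "c": ["0"], "d": ["1"], "e": ["1", "B"]}

def tapes(term):
    """Enumerate the tape contents of all configurations rooted at `term`:
    one string per maximal R-path and per choice of symbol atom on each term."""
    suffixes = [t for nxt in edges.get(term, []) for t in tapes(nxt)] or [""]
    return [s + rest for s in symbols[term] for rest in suffixes]

print(sorted(tapes("a")))  # the four tapes listed in Example 26
```

The output is exactly the set {11011, 10011, 1101B, 1001B} from the example: two maximal paths (through $b_1$ or $b_2$) times two symbol choices on $e$.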
In fact, this structure represents the set of paths in the initial database (after the appropriate number of steps of simulation), which we coin a bow tie, due to its shape.
Definition 27. The inverse $E^-$ of a binary relation $E$ is the relation defined by $(x, y) \in E^-$ if and only if $(y, x) \in E$. In a directed graph $G = (V, E)$ we denote with $V_x^{-y}$ the connected component$^3$ of $x$ in the subgraph induced by $V \setminus \{y\}$ on $G$, for any two vertices $x$ and $y$. A bow tie is a graph $(V, E)$ with two distinguished vertices $x$ and $y$ that has the following properties:
(1) $(x, y) \in E$;
(2) The subgraph induced by $V_x^{-y}$ on $(V, E^-)$ is a directed tree rooted in $x$;
(3) The subgraph induced by $V_y^{-x}$ on $(V, E)$ is a directed tree rooted in $y$;
(4) The sets $V_x^{-y}$ and $V_y^{-x}$ form a partition of $V$; that is, they are disjoint and $V = V_x^{-y} \cup V_y^{-x}$.
The edge $(x, y)$ is called the center of the bow tie, and the sets $V_x^{-y}$ and $V_y^{-x}$ are called the left and right parts of the bow tie, respectively.
In the following, we denote with $\mathrm{semterms}(F)$ (for semantically meaningful terms) the set of all the terms in $F$, except the brakes (which appear in the last position of atoms). We also define $E_\mathsf{R}$ as the relation over $\mathrm{semterms}(F)$ such that $(x, y) \in E_\mathsf{R}$ if and only if there is $w$ such that $\mathsf{R}(x, y, w) \in F$. For all state atoms $A = \mathsf{q}(x, w)$ generated during the chase, we denote the connected component of $x$ in the graph $(\mathrm{semterms}(F), E_\mathsf{R})$ with $\mathrm{bowtie}(A)$. The following lemma explains how this bow tie structure is generated at each step.
Lemma 28.
For all databases $D$, and every result $F$ of a chase sequence for $\langle \Sigma_M, D \rangle$, the graph $\mathrm{bowtie}(A)$ is a finite bow tie for all state atoms $A \in F \setminus D$. In addition:
• The center of the bow tie is the atom generated along with $A$, by rule $R_{\neg q_r}^{\rightarrow}$, $R_{\neg q_r}^{\leftarrow}$, $R_{q_r}^{\rightarrow}$ or $R_{q_r}^{\leftarrow}$;
• all the atoms in the left part of the bow tie are generated by rule $R_{\mathsf{C}_\mathsf{L}}$;
• all the atoms in the right part of the bow tie are generated by rule $R_{\mathsf{C}_\mathsf{R}}$, except possibly the end of a maximal path, which may have been generated by rule $R_{\mathsf{End}}$.
Proof sketch. This proof relies on an analysis of how R-atoms are generated during the chase. All the rules that generate R-atoms (over non-brake terms) generate R-atoms containing at least one existentially quantified variable. Three cases occur:
• Rules $R_{\neg q_r}^{\rightarrow}$, $R_{\neg q_r}^{\leftarrow}$, $R_{q_r}^{\rightarrow}$ and $R_{q_r}^{\leftarrow}$ generate an R-atom $\mathsf{R}(u, v, w)$ where $u$ and $v$ are both existentially quantified.
• Rule $R_{\mathsf{C}_\mathsf{L}}$ generates an R-atom $\mathsf{R}(u, v, w)$ where $u$ is existentially quantified and $v$ is a frontier variable.
• Rules $R_{\mathsf{C}_\mathsf{R}}$ and $R_{\mathsf{End}}$ generate an R-atom $\mathsf{R}(u, v, w)$ where $u$ is a frontier variable and $v$ is existentially quantified.
Thus, all connected components are generated by a rule of the first kind, and then extended to the left by a rule of the second kind, and to the right by a rule of the third kind.
Since no rule can generate an atom $\mathsf{R}(u, v, w)$ where $u$ and $v$ are both frontier variables (assuming $u$ and $v$ are not brakes), this yields the wanted structure. Finiteness is guaranteed by the emergency brakes. □
We now have a bit of structure to work with. Let us give a bit of intuition before concluding the proof. We have considered an infinite sequence $\mathcal{A} = (A_n)_{n \in \mathbb{N}}$ of state atoms, with $A_0 \in D$ and $A_n \prec A_{n+1}$ for all $n \in \mathbb{N}$, and we have just shown that to each state atom (not in $D$) is attached a bow tie structure. As mentioned before, the bow tie $\mathrm{bowtie}(A_n)$ consists of a set of (non-disjoint) paths that represent configurations that can be obtained from a configuration containing $A_0$ in the database, after $n$ steps of simulation. In addition, Lemma 28 shows how each of these paths is constructed using a path from $\mathrm{bowtie}(A_{n-1})$. We have also seen in Example 26 that a bow tie can get split. From these two facts we get that the number of configurations represented by $\mathrm{bowtie}(A_n)$ cannot increase as $n$ grows. Since this number is an integer, and each bow tie represents at least one configuration, this sequence will be stationary at some point $N$. At this point, we know that each of the configurations represented by $\mathrm{bowtie}(A_N)$ visits $q_r$ infinitely many times. Thus, we pick such a configuration $\rho$, and we show that the restricted chase does not terminate on $\langle \Sigma_M, D_\rho \rangle$, which is enough to conclude the proof by Lemma 22. We then formalize this argument.
Definition 29.
The set of configurations $\mathrm{configs}(A_n)$ associated to a state atom $A_n = \mathsf{q}(x, w) \in \mathcal{A}$ with $n > 0$, is the set whose elements are the sets
$$\{A_n\} \cup \bigcup_{i \leq m} \{\mathrm{brSet}(x_i, w)\} \cup \{\mathsf{P}(y_1, \ldots, y_k, w) \in F \mid \mathsf{P} \in \{\mathsf{R}, 0, 1, \mathsf{B}, \mathsf{End}\} \text{ and } \forall i,\, y_i \in \{x_1, \ldots, x_m\}\}$$
for all maximal paths $(x_1, \ldots, x_m)$ in $\mathrm{bowtie}(A_n)$.
Lemma 30. For all $n > 0$, $\mathrm{configs}(A_n)$ is finite, non-empty, and each of its elements homomorphically embeds into $D_\rho$ for some configuration $\rho$. Also, there is an injective function $\mathrm{pred}_n$ from $\mathrm{configs}(A_{n+1})$ to $\mathrm{configs}(A_n)$ such that $S \in \mathrm{configs}(A_{n+1})$ can be generated using only atoms in $\mathrm{pred}_n(S)$.
Proof sketch. To each set $S \in \mathrm{configs}(A_{n+1})$ we can associate a configuration $\rho$ and a path $p$ in $\mathrm{bowtie}(A_{n+1})$. We then define $\mathrm{pred}_n(S)$ as the set of atoms that was used to generate it, which is not hard to identify: its associated configuration is an extension of a configuration that yields $\rho$, and its associated path is connected through the F-predicate to $p$. To show injectivity of $\mathrm{pred}_n$, we then rely on a lemma stating that if $\mathsf{F}(x, z, w)$ and $\mathsf{F}(y, z, w)$ are both in $F$, then $x = y$.
□
Since for all $n$, there is an injective function from $\mathrm{configs}(A_{n+1})$ to $\mathrm{configs}(A_n)$, the sequence $(|\mathrm{configs}(A_n)|)_{n \in \mathbb{N}_{>0}}$ is a non-increasing sequence of natural numbers, as mentioned before. Thus, there must be some $N \in \mathbb{N}$ such that for all $n \geq N$, $|\mathrm{configs}(A_n)| = |\mathrm{configs}(A_N)| > 0$. We pick $S_0$ in $\mathrm{configs}(A_N)$, and let $\rho$ be a configuration such that $S_0$ homomorphically embeds into $D_\rho$.
Lemma 31. The restricted chase does not terminate on $\langle \Sigma_M, D_\rho \rangle$.
Proof sketch. Since for all $n \geq N$, $|\mathrm{configs}(A_n)| = |\mathrm{configs}(A_N)|$, $\mathrm{pred}_n$ is actually a bijection. We thus define $S_{n+1}$ as $\mathrm{pred}_{N+n}^{-1}(S_n)$. Intuitively, the sequence $(S_n)_{n \in \mathbb{N}}$ encodes the run of $M$ that visits $q_r$ infinitely many times from $\rho$. We then construct an infinite chase sequence from $\langle \Sigma_M, D_\rho \rangle$ such that $S_n$ homomorphically embeds in it for all $n$. □
By Lemma 22, this means that there is a run of $M$ which visits $q_r$ infinitely many times, and thus that $M$ is not robustly non-recurring through $q_r$, concluding the reduction.
# 6 An Alternative to Fairness to Simplify Restricted Chase Termination
All chase variants can be applied to semi-decide Boolean conjunctive query (BCQ) entailment. This is the case because, if a KB $\mathcal{K}$ entails a BCQ $Q$ under standard first-order semantics, then every chase sequence of $\mathcal{K}$ features a fact set that entails $Q$.
Consequently, it suffices to compute an (arbitrarily large) finite prefix of any (arbitrarily chosen) chase sequence of $\mathcal{K}$ to semi-decide whether $\mathcal{K}$ entails $Q$. Note that semi-decidability of BCQ entailment breaks down if we remove the fairness condition from the definition of a chase sequence. Unfortunately, this condition complicates the problem of universal termination for the restricted chase (see Theorems 9 and 23). To address this situation, we propose an alternative to fairness in the following definition that retains semi-decidability while simplifying the termination problem of the chase (see Theorem 34).
Definition 32. A breadth-first chase sequence for a KB $\langle \Sigma, D \rangle$ is a chase derivation $F_0, F_1, \ldots$ such that (†) if some $\Sigma$-trigger $\lambda$ is loaded for some $F_i$, then there is some $j \in \{i, \ldots, i+n\}$ such that $\lambda$ is obsolete for $F_j$, where $n$ is the (finite) number of $\Sigma$-triggers that are loaded and not obsolete for $F_i$.
Note that, since (†) implies fairness as introduced in Definition 3, every breadth-first chase sequence is also a chase sequence and we preserve semi-decidability of BCQ entailment.
Definition 33. Let $\mathsf{CTK}_\forall^{bfr}$ be the class of all KBs that only admit finite breadth-first chase sequences. Let $\mathsf{CTR}_\forall^{bfr}$ be the class containing a rule set if $\mathsf{CTK}_\forall^{bfr}$ contains all KBs with this rule set.
Theorem 34. The class $\mathsf{CTK}_\forall^{bfr}$ is in RE, and the class $\mathsf{CTR}_\forall^{bfr}$ is in $\Pi_2^0$.
Proof.
To show that $\mathsf{CTK}_\forall^{bfr}$ is in RE, we define a semi-decision procedure, which executes the following instructions on a given input KB $\mathcal{K} = \langle \Sigma, D \rangle$:
(1) Initialise the set $\mathcal{P}_1$ of lists of fact sets so that it contains the (unary) list $D$, and a counter $i := 2$.
(2) Compute the set $C_i$ of all chase derivations of length $i$ of $\mathcal{K}$ that can be obtained by extending a chase derivation in $\mathcal{P}_{i-1}$ with one fact set. Intuitively, $C_i$ includes all lists of length $i$ that can be extended into breadth-first chase sequences for $\mathcal{K}$.
(3) Compute the subset $\mathcal{P}_i$ of $C_i$ obtained by removing every chase derivation $F_1, \ldots, F_i \in C_i$ for which there is some $1 \leq k \leq i$ and some $\Sigma$-trigger $\lambda$ such that $\lambda$ is loaded for $F_k$, the trigger $\lambda$ is not obsolete for $F_i$, and $i - k$ is larger than the number of $\Sigma$-triggers that are loaded and not obsolete for $F_k$. Intuitively, $\mathcal{P}_i$ filters out prefixes in $C_i$ that already violate (†).
(4) If $\mathcal{P}_i$ is empty, accept. Otherwise, increment $i := i + 1$ and go to (2).
If the procedure accepts, then $\mathcal{P}_i$ is empty for some $i$ and all breadth-first chase sequences of $\mathcal{K}$ are of length at most $i - 1$. If the procedure loops, then there is an infinite chase derivation $F_0, F_1, \ldots$ of $\mathcal{K}$ such that $F_0, \ldots, F_{i-1} \in \mathcal{P}_i$ for every $i \geq 1$, which is a breadth-first chase sequence for $\mathcal{K}$.
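The filter in step (3) only ever inspects a finite prefix. A schematic sketch of that check, with triggers abstracted to opaque labels and the sets of loaded and obsolete triggers per fact set assumed to be given (computing them from actual rules is not modelled here), might look like:

```python
# Finite-verifiability check behind step (3).
# loaded[i]   - labels of triggers loaded for the fact set F_i of the prefix
# obsolete[i] - labels of triggers obsolete for F_i
def violates_dagger(loaded, obsolete):
    """Return True if the prefix already violates condition (†)."""
    n = len(loaded)
    for k in range(n):
        # number of triggers loaded and not obsolete for F_k: the budget
        # within which every trigger loaded at F_k must become obsolete
        budget = len(loaded[k] - obsolete[k])
        for lam in loaded[k]:
            for i in range(k + 1, n):
                if i - k > budget and lam not in obsolete[i]:
                    return True  # lam stayed non-obsolete for too long
    return False

# A prefix where trigger "t1" is never resolved within its budget of 2:
demo_loaded = [{"t1", "t2"}] * 4
demo_obsolete = [set(), {"t2"}, {"t2"}, {"t2"}]
print(violates_dagger(demo_loaded, demo_obsolete))  # t1 outlives its budget
```

Any infinite derivation that is not breadth-first has some finite prefix on which such a check returns True, which is exactly the property that fairness lacks.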
The class $\mathsf{CTR}_\forall^{bfr}$ is in $\Pi_2^0$ because we can semi-decide if a rule set $\Sigma$ is not in $\mathsf{CTR}_\forall^{bfr}$ using an oracle that solves $\mathsf{CTK}_\forall^{bfr}$. We simply enumerate every database $D$, use the oracle to check if the KB $\langle \Sigma, D \rangle$ is in $\mathsf{CTK}_\forall^{bfr}$, and accept if this is not the case. □
The previous result holds because the condition (†) is finitely verifiable; that is, every infinite chase derivation that does not satisfy this condition has a finite prefix that witnesses this violation. Note that fairness does not have this property since any finite prefix of any chase derivation can be extended into a (fair) chase sequence. In fact, we can readily show a version of Theorem 34 for any other alternative condition if it is finitely verifiable. For an example of one such trigger application strategy, consider the one from [20], which is a bit more complex to define than (†) but nevertheless results in a very efficient implementation of the restricted chase.
# 7 Open Problems
After settling the general case regarding restricted chase termination and proposing an alternative fairness condition, there are still open challenges. Namely, what is the decidability status of the classes $\mathsf{CTK}_\forall^{rest}$ and $\mathsf{CTR}_\forall^{rest}$ if we only consider single-head rules or only guarded (multi-head) rules? Note that, with guarded rules, it is not obvious how to simulate a Turing machine. For single-head rules, we cannot implement the emergency brake and thus our proofs do not apply.
Moreover, if we only consider single-head rule sets, we can ignore fairness when determining restricted chase termination because of the "fairness theorem" [12]: a single-head KB admits an infinite (possibly unfair) chase derivation if and only if it admits an infinite (fair) chase sequence. We think that answers to these problems will help to develop a better understanding of the (restricted) chase overall.
# Acknowledgments
On the TU Dresden side, this work is partly supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in project number 389792660 (TRR 248, Center for Perspicuous Systems), by the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) in the Center for Scalable Data Analytics and Artificial Intelligence (project SCADS25B, ScaDS.AI), and by Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) and Deutscher Akademischer Austauschdienst (DAAD, German Academic Exchange Service) in project 57616814 (SECAI, School of Embedded and Composite AI).
# References
[1] Bartosz Bednarczyk, Robert Ferens, and Piotr Ostropolski-Nalewaja. 2020. All-Instances Oblivious Chase Termination is Undecidable for Single-Head Binary TGDs. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, Christian Bessiere (Ed.). ijcai.org, 1719–1725. https://doi.org/10.24963/IJCAI.2020/238 [2] Marco Calautti, Georg Gottlob, and Andreas Pieris. 2015. Chase Termination for Guarded Existential Rules. In Proceedings of the 34th ACM Symposium on Principles of Database Systems, PODS 2015, Melbourne, Victoria, Australia, May 31 - June 4, 2015, Tova Milo and Diego Calvanese (Eds.). ACM, 91–103. https://doi.org/10.1145/2745754.2745773 [3] Marco Calautti and Andreas Pieris. 2021. Semi-Oblivious Chase Termination: The Sticky Case. Theory Comput. Syst. 65, 1 (2021), 84–121.
https://doi.org/10.1007/S00224-020-09994-5
[4] David Carral, Irina Dragoste, and Markus Krötzsch. 2017. Restricted Chase (Non)Termination for Existential Rules with Disjunctions. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, Carles Sierra (Ed.). ijcai.org, 922–928. https://doi.org/10.24963/IJCAI.2017/128
[5] David Carral, Lucas Larroque, Marie-Laure Mugnier, and Michaël Thomazo. 2022. Normalisations of Existential Rules: Not so Innocuous!. In Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning, KR 2022, Haifa, Israel, July 31 - August 5, 2022, Gabriele Kern-Isberner, Gerhard Lakemeyer, and Thomas Meyer (Eds.). https://proceedings.kr.org/2022/11/
[6] Alin Deutsch, Alan Nash, and Jeffrey B. Remmel. 2008. The chase revisited. In Proceedings of the Twenty-Seventh ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS 2008, June 9-11, 2008, Vancouver, BC, Canada, Maurizio Lenzerini and Domenico Lembo (Eds.). ACM, 149–158. https://doi.org/10.1145/1376916.1376938
[7] Haim Gaifman, Harry G. Mairson, Yehoshua Sagiv, and Moshe Y. Vardi. 1993. Undecidable Optimization Problems for Database Logic Programs. J. ACM 40, 3 (1993), 683–713. https://doi.org/10.1145/174130.174142
[8] Lukas Gerlach and David Carral. 2023. Do Repeat Yourself: Understanding Sufficient Conditions for Restricted Chase Non-Termination. In Proceedings of the 20th International Conference on Principles of Knowledge Representation and Reasoning, KR 2023, Rhodes, Greece, September 2-8, 2023, Pierre Marquis, Tran Cao Son, and Gabriele Kern-Isberner (Eds.). 301–310. https://doi.org/10.24963/KR.2023/30
[9] Lukas Gerlach and David Carral. 2023. General Acyclicity and Cyclicity Notions for the Disjunctive Skolem Chase.
In Thirty-Seventh AAAI Conference on Artificial Intelligence, AAAI 2023, Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence, IAAI 2023, Thirteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2023, Washington, DC, USA, February 7-14, 2023, Brian Williams, Yiling Chen, and Jennifer Neville (Eds.). AAAI Press, 6372–6379. https://doi.org/10.1609/AAAI.V37I5.25784
[10] Tomasz Gogacz and Jerzy Marcinkowski. 2014. All-Instances Termination of Chase is Undecidable. In Automata, Languages, and Programming - 41st International Colloquium, ICALP 2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part II (Lecture Notes in Computer Science, Vol. 8573), Javier Esparza, Pierre Fraigniaud, Thore Husfeldt, and Elias Koutsoupias (Eds.). Springer, 293–304. https://doi.org/10.1007/978-3-662-43951-7_25
[11] Tomasz Gogacz and Jerzy Marcinkowski. 2014. Termination of oblivious chase is undecidable. CoRR abs/1401.4840 (2014). arXiv:1401.4840 http://arxiv.org/abs/1401.4840
[12] Tomasz Gogacz, Jerzy Marcinkowski, and Andreas Pieris. 2023. Uniform Restricted Chase Termination. SIAM J. Comput. 52, 3 (2023), 641–683. https://doi.org/10.1137/20M1377035
[13] Gösta Grahne and Adrian Onet. 2018. Anatomy of the Chase. Fundam. Informaticae 157, 3 (2018), 221–270. https://doi.org/10.3233/FI-2018-1627
[14] Bernardo Cuenca Grau, Ian Horrocks, Markus Krötzsch, Clemens Kupke, Despoina Magka, Boris Motik, and Zhe Wang. 2013. Acyclicity Notions for Existential Rules and Their Application to Query Answering in Ontologies. J. Artif. Intell. Res. 47 (2013), 741–808. https://doi.org/10.1613/JAIR.3949
[15] David Harel. 1986. Effective transformations on infinite trees, with applications to high undecidability, dominoes, and fairness. J. ACM 33, 1 (Jan. 1986), 224–248. https://doi.org/10.1145/4904.4993
[16] Markus Krötzsch, Maximilian Marx, and Sebastian Rudolph. 2019. The Power of the Terminating Chase (Invited Talk).
In 22nd International Conference on Database Theory, ICDT 2019, March 26-28, 2019, Lisbon, Portugal (LIPIcs, Vol. 127), Pablo Barceló and Marco Calautti (Eds.). Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 3:1–3:17. https://doi.org/10.4230/LIPICS.ICDT.2019.3
[17] Michel Leclère, Marie-Laure Mugnier, Michaël Thomazo, and Federico Ulliana. 2019. A Single Approach to Decide Chase Termination on Linear Existential Rules. In 22nd International Conference on Database Theory, ICDT 2019, March 26-28, 2019, Lisbon, Portugal (LIPIcs, Vol. 127), Pablo Barceló and Marco Calautti (Eds.). Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 18:1–18:19. https://doi.org/10.4230/LIPICS.ICDT.2019.18
[18] Bruno Marnette. 2009. Generalized schema-mappings: from termination to tractability. In Proceedings of the Twenty-Eighth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS 2009, June 19 - July 1, 2009, Providence, Rhode Island, USA, Jan Paredaens and Jianwen Su (Eds.). ACM, 13–22. https://doi.org/10.1145/1559795.1559799
[19] Hartley Rogers, Jr. 1987. Theory of recursive functions and effective computability (Reprint from 1967). MIT Press. http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=3182
[20] Jacopo Urbani, Markus Krötzsch, Ceriel J. H. Jacobs, Irina Dragoste, and David Carral. 2018. Efficient Model Construction for Horn Logic with VLog - System Description. In Automated Reasoning - 9th International Joint Conference, IJCAR 2018, Held as Part of the Federated Logic Conference, FloC 2018, Oxford, UK, July 14-17, 2018, Proceedings (Lecture Notes in Computer Science, Vol. 10900), Didier Galmiche, Stephan Schulz, and Roberto Sebastiani (Eds.). Springer, 680–688.
https://doi.org/10.1007/978-3-319-94205-6_44
Received December 2024; revised February 2025; accepted March 2025
# A $\Pi_1^1$-complete Turing Machine Problems for Reductions
Our definition of non-recurring machines differs slightly from descriptions found in previous literature. Indeed, Harel showed that the following problem is $\Pi_1^1$-complete: decide if a (non-deterministic Turing) machine admits a run on the empty word that features the initial state $q_0$ infinitely many times (see Corollary 6.2 in [15]). Our definition is slightly different since we choose a different state $q_r$ to keep track of this infinite recurrence; note that this state may be different from the initial state. Fortunately, the choice of the initial state in the proof of Corollary 6.2 of Harel [15] is arbitrary, making it straightforward to adapt his proof to any given state. We first prove this in Section A.1, and then use this result to get $\Pi_1^1$-completeness for the other Turing machine problems we consider in Section A.2.
# A.1 Non-Recurrence on the Empty Word
To show that checking if a machine is non-recurring on the empty word is $\Pi_1^1$-complete, we adapt the proof of Corollary 6.2 in [15]. To do so, we first need to introduce some preliminary notions. A list is a finite sequence. The concatenation of two lists $u = u_1, \ldots, u_n$ and $v = v_1, \ldots, v_m$ is the list $u \cdot v = u_1, \ldots, u_n, v_1, \ldots, v_m$. A list $u_1, \ldots, u_n$ with $n \geq 2$ is the child of $u_1, \ldots, u_{n-1}$. A list $u$ is an ancestor of another list $v$, written $u \prec v$, if $u$ is a prefix of $v$; that is, if $u \cdot w = v$ for some list $w$. Definition 35.
An $\omega$-tree $T$ is a set of lists of natural numbers closed under $\prec$. A node is an element in $T$; a leaf is a node without children in $T$. Such a tree is computable if so is the following function:
$$ \chi_T(u) = \begin{cases} 0 & \text{if } u \notin T \\ 1 & \text{if } u \in T \text{ and } u \text{ is a leaf} \\ 2 & \text{if } u \in T \text{ and } u \text{ is not a leaf} \end{cases} $$
A possibly infinite sequence of natural numbers is a branch of $T$ if the latter contains every finite prefix of the former. Such a tree is well founded if all of its branches are finite. In the following, we identify a computable $\omega$-tree $T$ with the machine that computes the function $\chi_T$. Note that this is a machine that implements a function mapping lists of natural numbers to elements of $\{0, 1, 2\}$ as indicated in Definition 35. Checking if such a machine does correspond to a well-founded tree is a $\Pi_1^1$-complete problem. Lemma 36 ([19], Theorem 16). Checking if a computable $\omega$-tree is well founded is $\Pi_1^1$-complete. Definition 37. For a natural number $k \geq 0$, a $k$-tree $T$ is an $\omega$-tree that does not contain sequences with numbers larger than $k$. A b-tree ($b$ for bounded) is a $k$-tree for some $k \geq 0$. A marked b-tree is a pair $(T, \mu)$ consisting of a b-tree $T$ and a marking function $\mu$; that is, a function from $T$ to $\{0, 1\}$. A marked b-tree is computable if the following function is computable:
$$ \chi_T^{\mu}(u) = \begin{cases} 0 & \text{if } u \notin T \\ 1 & \text{if } u \in T \text{ and } u \text{ is marked (that is, } \mu(u) = 1\text{)} \\ 2 & \text{if } u \in T \text{ and } u \text{ is not marked} \end{cases} $$
A marked b-tree is recurring if it has a branch with infinitely many marked prefixes.
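As a concrete illustration of Definition 35, the following sketch implements $\chi_T$ for a small well-founded $k$-tree (the depth bound, the branching bound, and all names are illustrative assumptions, not part of the formal development):

```python
def make_chi(k, max_depth):
    """chi_T for the k-tree T of all lists over {0, ..., k} of length at
    most max_depth; every branch of T is finite, so T is well founded."""
    def chi(u):
        if len(u) > max_depth or any(x < 0 or x > k for x in u):
            return 0  # u is not a node of T
        if len(u) == max_depth:
            return 1  # u is a leaf of T
        return 2      # u is an inner node of T
    return chi

chi = make_chi(k=1, max_depth=2)
```

For instance, `chi(())` returns 2 (the root is an inner node), `chi((0, 1))` returns 1 (a leaf), and `chi((2,))` returns 0 (not a node, since $2 > k$).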
As we do for computable $\omega$-trees, we identify a computable marked b-tree $(T, \mu)$ with the decider that implements the function $\chi_T^{\mu}$. Lemma 38 ([15], Corollary 5.3). Checking if a computable marked $b$-tree is non-recurring is $\Pi_1^1$-complete. We are now ready to show the main result of this subsection. Proposition 39. The problem of checking if a machine is non-recurring through some state $q_r$ on the empty word $\varepsilon$ is $\Pi_1^1$-complete. Proof. To show membership, we present a reduction that maps a machine $M = (Q, \Gamma, \delta)$ to a computable marked b-tree $(T, \mu)$ such that $M$ is non-recurring through a given state $q_r \in Q$ on the empty word $\varepsilon$ if and only if $(T, \mu)$ is non-recurring. To define $(T, \mu)$, we consider an (arbitrarily chosen) enumeration $q_1, \ldots, q_n$ of the states in $Q$.
• Let $T$ be the set containing a list of natural numbers $i_1, \ldots, i_n$ if there is a partial run $\rho_1, \ldots, \rho_n$ of $M$ on $\varepsilon$ such that $\rho_j$ features the state $q_{i_j}$ for every $1 \leq j \leq n$.
• Let $\mu$ be the function that maps a list $u \in T$ to 1 if and only if $q_i = q_r$, where $i$ is the last element in $u$; that is, if the last element of $u$ is the index that corresponds to $q_r$ in the enumeration $q_1, \ldots, q_n$.
For every infinite branch of $T$, there is an infinite run of $M$, and vice versa. Furthermore, by the definition of $\mu$, a branch of $(T, \mu)$ containing infinitely many marked nodes corresponds to a run of $M$ visiting $q_r$ infinitely many times. Therefore, $M$ is non-recurring through $q_r$ if and only if $(T, \mu)$ is non-recurring.
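The membership construction above can be illustrated with a toy sketch that abstracts configurations to states, so that partial runs become state sequences (a deliberate simplification; the transition map `step` and all names are assumptions for illustration):

```python
def build_T_mu(states, step, q0, q_r):
    """in_T(u): u = [i_1, ..., i_m] is a node of T iff some partial run
    from the initial state q0 visits states[i_1], ..., states[i_m] in order.
    mu(u) = 1 iff the last element of u is the index of q_r."""
    r = states.index(q_r)

    def in_T(u):
        run = [states[i] for i in u]
        if run and run[0] != q0:
            return False  # partial runs start in the initial state
        return all(run[j + 1] in step[run[j]] for j in range(len(run) - 1))

    def mu(u):
        return 1 if u and u[-1] == r else 0

    return in_T, mu
```

With `states = ["q0", "qr", "qh"]` and `step = {"q0": {"qr", "qh"}, "qr": {"q0"}, "qh": set()}`, the list `[0, 1]` is a node of $T$ (the run $q_0, q_r$) and is marked, while `[1]` is not a node since runs must start in $q_0$.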
For hardness, we present a reduction that maps a computable $\omega$-tree $T$ to a non-deterministic machine $M = (Q, \Gamma, \delta)$ such that $T$ is well-founded if and only if $M$ is non-recurring through a state $q_r \in Q$ on the empty word $\varepsilon$. Intuitively, the machine $M$ proceeds by doing a traversal of the full $\omega$-tree; formally, it implements the following instructions on input $\varepsilon$: (1) Initialise the variable $u = 0$, which stores a list of natural numbers. (2) If $u \notin T$, replace the last element $i$ in $u$ with $i + 1$. (3) If $u \in T$, make a non-deterministic choice between the following options: (a) Replace the last element $i$ in $u$ with $i + 1$. (b) Append 0 to the list stored in $u$ and visit the state $q_r$. (4) Go to (2). We can effectively check if a list $u$ is a node in $T$ above because $T$ is a computable $\omega$-tree and hence the function $\chi_T$ is computable. Intuitively, each run of $M$ on the empty word corresponds to a traversal of a branch in $T$; note how we use non-determinism in (3) to visit either the next sibling (Instruction 3.a) or the first child (Instruction 3.b) of a node in the tree. Furthermore, note that $M$ only visits $q_r$ when it moves deeper on a given branch; that is, when it executes Instruction (3.b). Therefore, there is a run of $M$ visiting $q_r$ infinitely often if and only if there is an infinite branch in $T$. □
# A.2 Reductions between Turing Machine Problems
Proposition 40. The problem of checking if a machine is universally non-recurring through a given state $q_r$ is $\Pi_1^1$-complete. Proof. To show membership, we present a reduction that maps a machine $M$ to another machine $M'$ such that $M$ is universally non-recurring through a state $q_r$ if and only if $M'$ is non-recurring through a state $q_r'$ on $\varepsilon$.
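The tree traversal performed by the machine $M$ in the hardness proof of Proposition 39 can be sketched by resolving the non-deterministic choice in Instruction (3) with an explicit list of choices and bounding the number of steps (both are assumptions of the sketch, as is every name below):

```python
def traverse(in_T, choices, max_steps=10_000):
    """Follow Instructions (1)-(4): in_T is the membership test for T, and
    each element of `choices` resolves one execution of Instruction (3)
    ('sibling' for (3.a), 'child' for (3.b)). Returns the final list u and
    the number of visits to q_r (one per 'child' move)."""
    u = [0]                        # (1) initialise u
    qr_visits = 0
    it = iter(choices)
    for _ in range(max_steps):     # bounded stand-in for "run forever"
        if not in_T(tuple(u)):
            u[-1] += 1             # (2) move on to the next sibling
        else:
            c = next(it, None)
            if c is None:
                break              # choices exhausted: stop the sketch
            if c == "sibling":
                u[-1] += 1         # (3.a)
            else:
                u.append(0)        # (3.b) descend; this visits q_r
                qr_visits += 1
    return u, qr_visits
```

On the well-founded tree of all lists over $\{0, 1\}$ of length at most 2, the choice sequence `["child", "sibling"]` ends at `[0, 1]` after one $q_r$ visit; since this tree has no infinite branch, no choice sequence can yield infinitely many visits.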
On input $\varepsilon$, the machine $M'$ first guesses some input word and then simulates $M$ on this input. Formally, it executes the following instructions: (1) Make a non-deterministic choice to decide whether to go to (2) or to (3). (2) Replace the first occurrence of the blank symbol B on the input tape with some non-deterministically chosen symbol in the input alphabet of $M$. Then, go to (1). (3) Simulate $M$ on the (finite) word written on the input tape. During this simulation, visit $q_r'$ whenever $M$ would have visited $q_r$. Note that there are infinite runs of $M'$ on $\varepsilon$ where the machine never executes Instruction 3. This does not invalidate our reduction since $M'$ never visits $q_r'$ in these runs. To show hardness, we present a reduction that maps a machine $M$ to another machine $M'$ such that $M$ is non-recurring through a state $q_r$ on $\varepsilon$ if and only if $M'$ is universally non-recurring through a state $q_r'$. The machine $M'$ first discards its input by replacing it with a special symbol that is treated like the blank symbol B. Then, $M'$ simulates $M$ on $\varepsilon$; during this simulation, $M'$ visits $q_r'$ whenever $M$ would have visited $q_r$. □ Proposition 41. Checking if a machine is robust non-recurring through $q_r$ is $\Pi_1^1$-complete. Proof. To show membership, we present a reduction from a machine $M$ to a machine $M'$ such that $M$ is robust non-recurring through a state $q_r$ if and only if $M'$ is universally non-recurring through a state $q_r'$. The machine $M'$ scans its input and halts if it does not encode a configuration of $M$.
Otherwise, $M'$ simulates $M$ starting on this input configuration; during this simulation, $M'$ visits $q_r'$ whenever $M$ would have visited $q_r$. To show hardness, we present a reduction from a machine $M$ to another machine $M'$ such that $M$ is non-recurring through a state $q_r$ on the empty word $\varepsilon$ if and only if $M'$ is robust non-recurring through a state $q_r'$. The machine $M'$ executes the following instructions: (1) Halt if the input does not contain some configuration $\rho$ of $M$. (2) If the configuration $\rho$ on the tape features the special state $q_r$, then visit $q_r'$. (3) After the encoding of $\rho$ on the tape, (non-deterministically) simulate a run of $M$ on $\varepsilon$ until it terminates or it reaches the configuration $\rho$. If the run terminates without finding $\rho$, halt. Otherwise, continue with (4). (4) If $\mathsf{Next}_M(\rho)$ is empty, halt. Otherwise, replace the configuration $\rho$ on the tape with a non-deterministically chosen configuration in $\mathsf{Next}_M(\rho)$, and go to (1). Intuitively speaking, Instruction 3 implements a reachability check for the configuration $\rho$ on the tape. That is, this procedure ensures that this configuration is reachable from the starting configuration of $M$ on the empty word $\varepsilon$ by some run. Note that the reachability check makes non-deterministic choices itself, so it can happen that $M'$ terminates early or even runs forever because it picks the wrong run in Instruction 3. This does not invalidate our reduction, though, since on those runs $M'$ only visits $q_r'$ finitely many times.
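The loop of $M'$ above admits a deterministic toy rendering in which configurations are opaque tokens, `next_M` is a finite successor relation, and the non-deterministic reachability check of Instruction 3 is replaced by a breadth-first search; well-formedness of the input (Instruction 1) is assumed. All names and bounds here are illustrative assumptions:

```python
from collections import deque

def reachable(next_M, start, rho):
    """BFS stand-in for Instruction 3: is rho reachable from start?"""
    seen, frontier = {start}, deque([start])
    while frontier:
        c = frontier.popleft()
        if c == rho:
            return True
        for s in next_M.get(c, ()):
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return False

def run_M_prime(next_M, start, rho, features_qr, max_rounds=10):
    """One determinised run of M': count the visits to q_r'."""
    visits = 0
    for _ in range(max_rounds):                  # bounded stand-in for "run forever"
        if features_qr(rho):                     # (2) visit q_r'
            visits += 1
        if not reachable(next_M, start, rho):    # (3) halt: rho is not reachable
            break
        succs = list(next_M.get(rho, ()))
        if not succs:                            # (4) halt: Next_M(rho) is empty
            break
        rho = succs[0]                           # (4) determinised successor choice
    return visits
```

On the toy relation `{"s": ["a"], "a": ["b"], "b": ["a"]}` with `features_qr` true exactly on `"a"`, the run starting from configuration `"a"` visits $q_r'$ on every other round, mirroring a recurring run of $M$; starting from an unreachable token, it visits $q_r'$ not at all.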
If $M'$ is robust non-recurring through $q_r'$, then it is in particular non-recurring on the input that encodes the starting configuration of $M$ on $\varepsilon$. Since $M'$ uses non-determinism to simulate all possible runs of $M$ on $\varepsilon$ and visits $q_r'$ whenever $M$ would have visited $q_r$, we conclude that $M$ is non-recurring through $q_r$ on $\varepsilon$. Conversely, suppose that there is a configuration $\rho'$ of $M'$ that may lead to a run that visits $q_r'$ infinitely many times. This implies that there is a configuration $\rho$ of $M$ that leads to a run of $M$ that visits $q_r$ infinitely many times. Moreover, all the configurations of $M$ in this infinite run are reachable from the start configuration of $M$ on $\varepsilon$ because of the check implemented in Instruction 3. Therefore, $M$ is recurring through $q_r$ on the empty word. □
# B Proofs for Section 4 (Knowledge base termination)
Proposition 18. If $F$ has a wild frontier of $\rho$ overseen by $w$, and $\rho'$ is reachable in one step by a transition of $M$, then there exists a restricted derivation $\mathcal{D}_{\rho \to \rho'} = F, \ldots, F'$ such that $F'$ has a wild frontier of $\rho'$ overseen by $w'$, where $w' \neq w$ is a fresh existential if $\rho$ is in $q_r$, and $w' = w$ otherwise. Proof. We consider the case where $\rho = \langle n, t, p, q \rangle$, with $q \neq q_r$, and where $\rho'$ is obtained from $\rho$ by a right-moving transition $(b, q', \mathsf{R}) \in \delta(t(p), q)$.
We consider $x_1, \ldots, x_{n+1}, w$ as provided by the definition of a wild frontier of configuration $\rho$.
• we start by applying Rule $R_{\neg q_r}$, mapping $x$ to $x_p$, $y$ to $x_{p+1}$, and $w$ to $w$. This produces the atoms $q'(x'_{p+1}, w)$, $\mathsf{c}(x'_{p+1}, w)$, $\mathsf{b}(x'_p, w)$, $\mathsf{C_L}(x'_p, w)$, $\mathsf{C_R}(x'_{p+1}, w)$, $\mathsf{R}(x'_p, x'_{p+1}, w)$, $\mathsf{F}(x_p, x'_p, w)$, $\mathsf{F}(x_{p+1}, x'_{p+1}, w)$, $\mathsf{brSet}(x'_p, w)$, $\mathsf{brSet}(x'_{p+1}, w)$;
• we apply Rule $R_{\mathsf{C_L}}$ $p-1$ times. The $i^{\mathrm{th}}$ (for $i$ from 1 to $p-1$) application maps $x$ to $x_{p-i+1}$, $x'$ to $x'_{p-i+1}$, $y$ to $x_{p-i}$, and $w'$ and $w$ to $w$. It creates the atoms $\mathsf{F}(x_{p-i}, x'_{p-i}, w)$, $\mathsf{R}(x'_{p-i}, x'_{p-i+1}, w)$, $\mathsf{t}(p{-}i)(x'_{p-i}, w)$, $\mathsf{C_L}(x'_{p-i}, w)$, $\mathsf{brSet}(x'_{p-i}, w)$;
• we apply Rule $R_{\mathsf{C_R}}$ $n-p$ times.
The $i^{\mathrm{th}}$ (for $i$ from 1 to $n-p$) application maps $x$ to $x_{p+i}$, $x'$ to $x'_{p+i}$, $y$ to $x_{p+i+1}$, and $w'$ and $w$ to $w$. It creates the atoms $\mathsf{F}(x_{p+i+1}, x'_{p+i+1}, w)$, $\mathsf{R}(x'_{p+i}, x'_{p+i+1}, w)$, $\mathsf{t}(p{+}i{+}1)(x'_{p+i+1}, w)$, $\mathsf{C_R}(x'_{p+i+1}, w)$, $\mathsf{brSet}(x'_{p+i+1}, w)$;
• we apply Rule $R_{\mathsf{End}}$, mapping $x'$ to $x'_{n+1}$, $x$ to $x_{n+1}$, and $w$ and $w'$ to $w$. It creates the atoms $\mathsf{R}(x'_{n+1}, x'_{n+2}, w)$, $\mathsf{B}(x'_{n+2}, w)$, $\mathsf{End}(x'_{n+2}, w)$, $\mathsf{brSet}(x'_{n+2}, w)$.
The result of that derivation has a wild frontier of configuration $\rho'$ overseen by $w$, as witnessed by the terms $x'_1, \ldots, x'_{n+2}$.
If $\rho'$ is obtained from $\rho$ by a left-moving transition $(b, q', \mathsf{L}) \in \delta(t(p), q)$, with $q \neq q_r$, we consider $x_1, \ldots, x_{n+1}, w$ as provided by the definition of a wild frontier of configuration $\rho$.
• we start by applying Rule $R_{\neg q_r}$, mapping $x$ to $x_p$, $y$ to $x_{p-1}$, and $w$ to $w$.
This produces the atoms $q'(x'_{p-1}, w)$, $\mathsf{c}(x'_{p-1}, w)$, $\mathsf{b}(x'_p, w)$, $\mathsf{C_L}(x'_{p-1}, w)$, $\mathsf{C_R}(x'_p, w)$, $\mathsf{R}(x'_{p-1}, x'_p, w)$, $\mathsf{F}(x_p, x'_p, w)$, $\mathsf{F}(x_{p-1}, x'_{p-1}, w)$, $\mathsf{brSet}(x'_p, w)$, $\mathsf{brSet}(x'_{p-1}, w)$;
• we apply Rule $R_{\mathsf{C_L}}$ $p-2$ times. The $i^{\mathrm{th}}$ (for $i$ from 1 to $p-2$) application maps $x$ to $x_{p-i}$, $x'$ to $x'_{p-i}$, $y$ to $x_{p-i-1}$, and $w$ and $w'$ to $w$. It creates the atoms $\mathsf{F}(x_{p-i-1}, x'_{p-i-1}, w)$, $\mathsf{R}(x'_{p-i-1}, x'_{p-i}, w)$, $\mathsf{t}(p{-}i{-}1)(x'_{p-i-1}, w)$, $\mathsf{C_L}(x'_{p-i-1}, w)$, $\mathsf{brSet}(x'_{p-i-1}, w)$;
• we apply Rule $R_{\mathsf{C_R}}$ $n-p+1$ times. The $i^{\mathrm{th}}$ (for $i$ from 1 to $n-p+1$) application maps $x$ to $x_{p-1+i}$, $x'$ to $x'_{p-1+i}$, $y$ to $x_{p+i}$, and $w$ and $w'$ to $w$.
It creates the atoms $\mathsf{F}(x_{p+i}, x'_{p+i}, w)$, $\mathsf{R}(x'_{p+i-1}, x'_{p+i}, w)$, $\mathsf{t}(p{+}i)(x'_{p+i}, w)$, $\mathsf{C_R}(x'_{p+i}, w)$, $\mathsf{brSet}(x'_{p+i}, w)$;
• we apply Rule $R_{\mathsf{End}}$, mapping $x'$ to $x'_{n+1}$, $x$ to $x_{n+1}$, and $w$ and $w'$ to $w$. It creates the atoms $\mathsf{R}(x'_{n+1}, x'_{n+2}, w)$, $\mathsf{B}(x'_{n+2}, w)$, $\mathsf{End}(x'_{n+2}, w)$, $\mathsf{brSet}(x'_{n+2}, w)$.
The result of that derivation has a wild frontier of configuration $\rho'$ overseen by $w$, as witnessed by the terms $x'_1, \ldots, x'_{n+2}$.
If $\rho'$ is obtained from $\rho$ by a right-moving transition $(b, q', \mathsf{R}) \in \delta(t(p), q_r)$, we consider $x_1, \ldots, x_{n+1}, w$ as provided by the definition of a wild frontier of configuration $\rho$.
• we start by applying Rule $R_{q_r}$, mapping $x$ to $x_p$, $y$ to $x_{p+1}$, and $w$ to $w$.
This rule application produces the atoms $q'(x'_{p+1}, w')$, $\mathsf{c}(x'_{p+1}, w')$, $\mathsf{b}(x'_p, w')$, $\mathsf{C_L}(x'_p, w')$, $\mathsf{C_R}(x'_{p+1}, w')$, $\mathsf{R}(x'_p, x'_{p+1}, w')$, $\mathsf{F}(x_p, x'_p, w')$, $\mathsf{F}(x_{p+1}, x'_{p+1}, w')$, $\mathsf{brSet}(x'_p, w')$, $\mathsf{brSet}(x'_{p+1}, w')$;
• we apply Rule $R_{\mathsf{C_L}}$ $p-1$ times. The $i^{\mathrm{th}}$ (for $i$ from 1 to $p-1$) application maps $x$ to $x_{p-i+1}$, $x'$ to $x'_{p-i+1}$, $y$ to $x_{p-i}$, $w$ to $w$, and $w'$ to $w'$. It creates the atoms $\mathsf{F}(x_{p-i}, x'_{p-i}, w')$, $\mathsf{R}(x'_{p-i}, x'_{p-i+1}, w')$, $\mathsf{t}(p{-}i)(x'_{p-i}, w')$, $\mathsf{C_L}(x'_{p-i}, w')$, $\mathsf{brSet}(x'_{p-i}, w')$;
• we apply Rule $R_{\mathsf{C_R}}$ $n-p$ times. The $i^{\mathrm{th}}$ (for $i$ from 1 to $n-p$) application maps $x$ to $x_{p+i}$, $x'$ to $x'_{p+i}$, $y$ to $x_{p+i+1}$, $w$ to $w$, and $w'$ to $w'$.
It creates the atoms $\mathsf{F}(x_{p+i+1}, x'_{p+i+1}, w')$, $\mathsf{R}(x'_{p+i}, x'_{p+i+1}, w')$, $\mathsf{t}(p{+}i{+}1)(x'_{p+i+1}, w')$, $\mathsf{C_R}(x'_{p+i+1}, w')$, $\mathsf{brSet}(x'_{p+i+1}, w')$;
• we apply Rule $R_{\mathsf{End}}$, mapping $x'$ to $x'_{n+1}$, $x$ to $x_{n+1}$, $w$ to $w$, and $w'$ to $w'$. It creates the atoms $\mathsf{R}(x'_{n+1}, x'_{n+2}, w')$, $\mathsf{B}(x'_{n+2}, w')$, $\mathsf{End}(x'_{n+2}, w')$, $\mathsf{brSet}(x'_{n+2}, w')$.
The result of that derivation has a wild frontier of configuration $\rho'$ overseen by $w'$, as witnessed by the terms $x'_1, \ldots, x'_{n+2}$.
If $\rho'$ is obtained from $\rho$ by a left-moving transition $(b, q', \mathsf{L}) \in \delta(t(p), q_r)$, we consider $x_1, \ldots, x_{n+1}, w$ as provided by the definition of a wild frontier of configuration $\rho$.
• we start by applying Rule $R_{q_r}$, mapping $x$ to $x_p$, $y$ to $x_{p-1}$, and $w$ to $w$.
This rule application produces the atoms $q'(x'_{p-1}, w')$, $\mathsf{c}(x'_{p-1}, w')$, $\mathsf{b}(x'_p, w')$, $\mathsf{C_L}(x'_{p-1}, w')$, $\mathsf{C_R}(x'_p, w')$, $\mathsf{R}(x'_{p-1}, x'_p, w')$, $\mathsf{F}(x_p, x'_p, w')$, $\mathsf{F}(x_{p-1}, x'_{p-1}, w')$, $\mathsf{brSet}(x'_p, w')$, $\mathsf{brSet}(x'_{p-1}, w')$;
• we apply Rule $R_{\mathsf{C_L}}$ $p-2$ times. The $i^{\mathrm{th}}$ (for $i$ from 1 to $p-2$) application maps $x$ to $x_{p-i}$, $x'$ to $x'_{p-i}$, $y$ to $x_{p-i-1}$, $w$ to $w$, and $w'$ to $w'$. It creates the atoms $\mathsf{F}(x_{p-i-1}, x'_{p-i-1}, w')$, $\mathsf{R}(x'_{p-i-1}, x'_{p-i}, w')$, $\mathsf{t}(p{-}i{-}1)(x'_{p-i-1}, w')$, $\mathsf{C_L}(x'_{p-i-1}, w')$, $\mathsf{brSet}(x'_{p-i-1}, w')$;
• we apply Rule $R_{\mathsf{C_R}}$ $n-p+1$ times.
The $i^{\mathrm{th}}$ (for $i$ from 1 to $n-p+1$) application maps $x$ to $x_{p-1+i}$, $x'$ to $x'_{p-1+i}$, $y$ to $x_{p+i}$, $w$ to $w$, and $w'$ to $w'$. It creates the atoms $\mathsf{F}(x_{p+i}, x'_{p+i}, w')$, $\mathsf{R}(x'_{p+i-1}, x'_{p+i}, w')$, $\mathsf{t}(p{+}i)(x'_{p+i}, w')$, $\mathsf{C_R}(x'_{p+i}, w')$, $\mathsf{brSet}(x'_{p+i}, w')$;
• we apply Rule $R_{\mathsf{End}}$, mapping $x'$ to $x'_{n+1}$, $x$ to $x_{n+1}$, $w$ to $w$, and $w'$ to $w'$. It creates the atoms $\mathsf{R}(x'_{n+1}, x'_{n+2}, w')$, $\mathsf{B}(x'_{n+2}, w')$, $\mathsf{End}(x'_{n+2}, w')$, $\mathsf{brSet}(x'_{n+2}, w')$.
The result of that derivation has a wild frontier of configuration $\rho'$ overseen by $w'$, as witnessed by the terms $x'_1, \ldots, x'_{n+2}$. □
A rule is datalog if its head does not contain any existentially quantified variable. Proposition 42. Let $F_0, \ldots, F_k$ be a restricted derivation, and let $w^* \in \mathsf{terms}(F_k) \setminus \mathsf{terms}(F_0)$ be such that $\mathsf{Real}(w^*) \in F_k$. Then, for any $j > k$, the only rules with non-obsolete triggers on $F_j$ that generate $w^*$ as a last argument are datalog rules. Proof.
We consider a non-datalog rule $R$ and a homomorphism $\sigma$ of $\mathsf{body}(R)$ into $F_j$. We prove that $\sigma$ can be extended to a homomorphism of $\mathsf{head}(R)$ into $F_j$, showing that $\langle R, \sigma\rangle$ is obsolete. Note that as $\mathsf{Real}(w^*) \in F_k$, Rule $R_{\mathsf{Brake}}$ has been applied by mapping $w$ to $w^*$. • The two rules $R_{\neg q_r}$: extend $\sigma$ by mapping $x'$ and $y'$ to $\sigma(w) = w^*$; • the two rules $R_{q_r}$: extend $\sigma$ by mapping $x'$, $y'$ and $w'$ to $\sigma(w) = w^*$; • Rules $R_{\mathsf{C}_{\mathsf{L}}}$, $R_{\mathsf{C}_{\mathsf{R}}}$, and $R_{\mathsf{End}}$: extend $\sigma$ by mapping $y'$ to $\sigma(w') = w^*$.

Lemma 19. Let $(\rho_i)_{i\in\mathbb{N}}$ be a run of $M$ on the empty word that visits $q_r$ infinitely often. There exists an infinite restricted chase sequence for $\langle \Sigma_M, D_\varepsilon\rangle$.

Proof. Let $(i_j)_{j\in\mathbb{N}}$ be the infinite strictly increasing sequence of integers such that $i_1 = 1$ and $\rho_k$ is in $q_r$ if and only if $k = i_j$ for some $j$. We denote by $\mathscr{D}_{\rho_{i_j}\to\rho_{i_{j+1}}}$ the concatenation of the restricted derivations provided by Proposition 18.
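As a side illustration of the index sequence $(i_j)_{j\in\mathbb{N}}$: given the sequence of states visited by (a finite prefix of) a run, the $i_j$ are exactly the positions at which $q_r$ is visited. A minimal sketch, with illustrative state names:

```python
# Sketch: the positions i_j at which the distinguished state q_r is visited.
# State names ("q_r", "q0", ...) are hypothetical placeholders.
def qr_visit_indices(states, q_r="q_r"):
    # 1-based positions k such that rho_k is in state q_r
    return [k for k, q in enumerate(states, start=1) if q == q_r]

run_prefix = ["q_r", "q0", "q1", "q_r", "q1", "q_r"]
print(qr_visit_indices(run_prefix))  # [1, 4, 6]
```

If $q_r$ is visited infinitely often, this sequence is infinite and strictly increasing, which is all the construction below needs.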
Let us consider the derivation built by induction: • $\mathcal{D}_1 = \mathscr{D}_{\rho_{i_1}\to\rho_{i_2}}$; • $\mathcal{D}'_1$ extends $\mathcal{D}_1$ by the application of Rule $R_{\mathsf{Brake}}$ mapping $w$ to the brake overseeing the wild frontier of the last element of $\mathcal{D}_1$, as well as by applying any datalog rule mapping $w$ to that brake; • $\mathcal{D}_j$ extends $\mathcal{D}'_{j-1}$ by the derivation $\mathscr{D}_{\rho_{i_j}\to\rho_{i_{j+1}}}$; • $\mathcal{D}'_j$ extends $\mathcal{D}_j$ by the application of Rule $R_{\mathsf{Brake}}$ mapping $w$ to the brake overseeing the wild frontier of the last element of $\mathcal{D}_j$, and by applying Rule $R_{\mathsf{nextBr}}$ in any possible way that maps $w$ to the brake overseeing the wild frontier of the last element of $\mathcal{D}_j$, as well as by applying any datalog rule mapping $w$ to that brake. This derivation is fair: • any created atom has a brake as argument; • brakes are created exactly once in each derivation $\mathscr{D}_{\rho_{i_j}\to\rho_{i_{j+1}}}$ (by definition of $(i_j)_{j\in\mathbb{N}}$); let us call $w_1$ the brake appearing in $D_\varepsilon$, and $w_{j+1}$ the brake created in $\mathscr{D}_{\rho_{i_j}\to\rho_{i_{j+1}}}$; • by Proposition 42, the application of Rule $R_{\mathsf{Brake}}$ mapping $w$ to $w_j$ deactivates any trigger of a non-datalog rule creating an atom with $w_j$ as last argument; • by definition of $\mathcal{D}'_j$, all datalog rules creating an atom with $w_j$ as last argument are applied.

Lemma 43. Let $F$ be the result of an infinite restricted chase sequence $F_0, F_1, \ldots$ from $\langle \Sigma_M, D\rangle$ for some $D$. For any $w$ such that $\mathsf{Brake}(w) \in F$, there are finitely many atoms having $w$ as last argument in $F$.
There is thus an infinite number of brakes in $F$.

Proof. Consider a term $w$ such that $\mathsf{Brake}(w) \in F$, which we call a brake. By fairness and Rule $R_{\mathsf{Brake}}$, there must be some integer $i$ such that $\mathsf{Real}(w) \in F_i$. At this step, there is a finite number of atoms with $w$ as last argument, and by Proposition 42, the only rules that can generate such atoms after step $i$ are datalog. Rule $R_{\mathsf{Brake}}$ only generates atoms over $w$, so it is applicable at most once, and will yield at most 6 new atoms. Thus, the only rule left is Rule $R_{\mathsf{nextBr}}$. Only two rules create new Brake-atoms that do not already appear in their bodies, namely the two rules $R_{q_r}$. Both these rules also generate an atom of the form $\mathsf{nextBr}(w, w')$, where $\mathsf{Brake}(w)$ is the brake in their body, and $\mathsf{Brake}(w')$ is the newly created brake. As this is the only way to generate nextBr-atoms, the predicate nextBr defines a forest relationship over the brakes, where the root of each tree is a term $w_0$ such that $\mathsf{Brake}(w_0) \in D$. There is thus a finite number of trees. We then show that Rule $R_{\mathsf{nextBr}}$ can only create a finite number of atoms by induction on this forest structure. If $\mathsf{Brake}(w) \in D$, then all the atoms of the form $\mathsf{nextBr}(w', w)$ are in $D$, so $w'$ is in $D$ too. Thus, Rule $R_{\mathsf{nextBr}}$ can only create sets of atoms of the form $\mathsf{brSet}(x, w')$, where $x$ is a database term. As there is a finite number of database terms, this yields a finite number of atoms.
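The forest argument can be checked on a small abstract model. Since every generated nextBr-atom has a fresh null in second position, each brake occurs at most once as a child, i.e. has at most one nextBr-parent. A minimal sketch (brake names are hypothetical placeholders):

```python
# Sketch: nextBr edges are pairs (parent_brake, child_brake). A forest means
# every brake has at most one parent and every parent chain ends, without
# cycling, in a database brake (a root).
def is_forest(next_br_edges, db_brakes):
    parents = {}
    for w, w_prime in next_br_edges:
        if w_prime in parents:          # two parents: not a forest
            return False
        parents[w_prime] = w
    for node in parents:
        seen = set()
        while node in parents:
            if node in seen:            # cycle detected
                return False
            seen.add(node)
            node = parents[node]
        if node not in db_brakes:       # chain must end in a database brake
            return False
    return True

edges = [("w1", "w2"), ("w2", "w3"), ("w1", "w4")]
print(is_forest(edges, db_brakes={"w1"}))  # True: a tree rooted at w1
```

In the rule set above the forest property holds by construction, because the child position of every nextBr-atom is existentially quantified and hence always fresh.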
If $\mathsf{Brake}(w') \in F \setminus D$, $\mathsf{nextBr}(w, w') \in F$ and there is a finite number of atoms having $w$ as last argument, then first notice that $w$ is the only term such that $\mathsf{nextBr}(w, w') \in F$, since the two rules $R_{q_r}$ both generate nextBr-atoms featuring an existential variable in second position. Then, as there is a finite number of atoms featuring $w$ as their last argument, there is a finite number of terms $x$ such that $\mathsf{brSet}(x, w) \subseteq F$. Thus, Rule $R_{\mathsf{nextBr}}$ generates at most $\mathsf{brSet}(x, w')$ for all these terms, which represents a finite number of atoms. Thus, there is a finite number of atoms that feature a given brake as their last argument. As $F_0, F_1, \ldots$ is infinite, $F$ must have an infinite number of atoms, which were generated during the chase. Since $\mathsf{Brake}(w)$ is required in the body of all the rules where $w$ appears as the last argument of an atom, there is thus an infinite number of brakes in $F$. □

Lemma 21. For all databases $D$, and all infinite chase sequences from $\langle \Sigma_M, D\rangle$ with result $F$, there is an infinite sequence $(A_n)_{n\in\mathbb{N}}$ of state atoms of $F$ such that: • $A_0 \in D$; • $A_n \prec A_{n+1}$ for all $n \in \mathbb{N}$; • for infinitely many $i \in \mathbb{N}$, $A_i$ is of the shape $\mathsf{q}_r(t_i, w_i)$.

Proof.
Since the rules that introduce state atoms (the four transition rules $R_{q_r}$ and $R_{\neg q_r}$) feature a state atom in their body, $\prec$ defines a forest structure over state atoms, where the root of each tree is an atom of the database. There is thus a finite number of trees (as there is a finite number of atoms in the database). By Lemma 43, there is an infinite number of brakes in $F$. Then, since the rules that introduce new brakes (the two rules $R_{q_r}$) introduce a state atom too, there is an infinite number of state atoms. Thus, one of the trees must be infinite. In addition, since there is a finite number of atoms that feature a given brake as last argument, and each state atom features a brake as last argument, each state atom only has a finite number of successors for $\prec$. Indeed, infinitely many successors would require infinitely many rule applications, and thus infinitely many atoms featuring the same last argument as the state atom. We thus have an infinite tree with finite branching. By König's lemma, it features an infinite branch, which must contain infinitely many $\mathsf{q}_r$-atoms (as there are infinitely many $\mathsf{q}_r$-atoms). □

To each state atom, we associate both a set of atoms and a configuration.

Definition 44 (Atoms associated with a state atom). Let $F_k$ be a fact set occurring in a chase derivation from $D_\rho$. The atoms associated with a state atom $\mathsf{q}(x, w)$ in $F_k$ are the largest subset of $F_k$ whose terms are included in $\{x, w\} \cup X_i$ and such that: • $X_i$ is the set of terms reachable or co-reachable through an $R$-path from $x$ not going through a brake; • $w$ can only appear in the last position of atoms.

Definition 45 (Configuration of a state atom).
Let $F_k$ be a fact set appearing in a restricted chase sequence for $\langle \Sigma, D_\rho\rangle$. The configuration associated with a state atom $\mathsf{q}(x, w)$, written $\mathsf{conf}(\mathsf{q}(x, w))$, is defined by induction: • if $\mathsf{q}(x, w) \in D_\rho$, $\mathsf{conf}(\mathsf{q}(x, w)) = \rho$; • otherwise, let $A$ be the unique state atom such that $A \prec \mathsf{q}(x, w)$. Let $(q', b, d)$ be the element of $\delta(q, a)$ due to which the rule whose application generated $\mathsf{q}(x, w)$ belongs to $\Sigma_M$. We define $\mathsf{conf}(\mathsf{q}(x, w))$ as the configuration obtained from $\mathsf{conf}(A)$ where the content of the head cell of $\mathsf{conf}(A)$ is replaced by $b$, the head moves in direction $d$, and the state switches to $q'$.

Note that the above definition implies that if $A_p$ is the parent of $A$, then $\mathsf{conf}(A)$ is reachable in one transition of $M$ from $\mathsf{conf}(A_p)$. Intuitively, the configuration of a state atom is encoded by the atoms associated with it. However, the restricted derivation may not have derived all such atoms, hence we consider a weaker notion, which we coin consistency.

Definition 46 (Consistency).
A set of atoms $A$ associated with a state atom is consistent with the configuration $\langle n, t, p, q\rangle$ if: • there exist $x$ and $w$ such that $\mathsf{q}(x, w)$ is the only state atom in $A$, and the configuration is in state $q$; • $\mathsf{a}(x, w)$ is the only letter predicate having $x$ as argument in $A$, and $t(p) = a$; • if there is an $R$-path of length $i$ from $x$ to $x'$, and there is an atom $\mathsf{a}(x', x'')$, then $x'' = w$, $p + i \leq n + 1$ and $t(p + i) = a$; there exists at most one atom $\mathsf{End}(x', x'')$ in $A$, and if it exists then $x'' = w$ and there is an $R$-path from $x$ to $x'$ of length $i$ such that $p + i = n + 1$; • if there is an $R$-path of length $i$ from $x'$ to $x$, and there is an atom $\mathsf{a}(x', x'')$, then $x'' = w$, $p - i \geq 1$ and $t(p - i) = a$.

As expected, the set of atoms associated with a state atom is consistent with its configuration, and this allows us to prove Lemma 22.

Proposition 47. Let $F_k$ be a fact set appearing in a restricted chase sequence for $\langle \Sigma, D_\rho\rangle$. For any state atom $A$ of $F_k$, the set of atoms associated with $A$ is consistent with $\mathsf{conf}(A)$.

Proof. We prove the result by induction. If $A$ is a state atom that does not have any parent, then $A \in F_0 = D_\rho$. The set of atoms associated with $A$ is $D_\rho$, which is consistent with the initial configuration of $M$ on $\rho$ by definition, and this is the configuration of $A$. Otherwise, let $A = q'(y', w)$ be a state atom of $F_k$.
We prove the result assuming that $A$ has been created by the application of Rule $R_{\neg q_r}$, mapping $x$ to $x_p$, $w$ to $w$ and $y$ to $y_p$. $A_p$, the parent of $A$, is thus of the shape $q(x_p, w)$ (the other possible cases, where $A$ is created by one of the other transition rules, are treated similarly). It is easy to check that any term reachable from $y'$ by an $R$-path not going through a brake is either created by the same rule application as $y'$, or has been created by an application of Rule $R_{\mathsf{C}_{\mathsf{R}}}$ mapping $x'$ to a term reachable by an $R$-path from $y'$ (and similarly for co-reachable terms and Rule $R_{\mathsf{C}_{\mathsf{L}}}$). Then: • if there exists $y'_{-1}$ such that $\mathsf{R}(y'_{-1}, y', w) \in F_k$, then $y'_{-1}$ has been created by the application of Rule $R_{\mathsf{C}_{\mathsf{L}}}$, in which case $\mathsf{a}(y'_{-1}, w)$ is generated if the cell two positions on the left of the head of $\mathsf{conf}(A_p)$ contains an $a$, that is, if the cell one position on the left of the head of $\mathsf{conf}(A)$ contains an $a$; predecessors of $y'$ further away from $y'$ are treated by induction in a similar way. Proc. ACM Manag. Data, Vol. 3, No. 2 (PODS), Article 109. Publication date: May 2025. • There exists a $y'_{+1}$ such that $\mathsf{R}(y', y'_{+1}, w) \in F_k$, as such an element is created by the application of Rule $R_{\neg q_r}$.
The same application creates the atom $\mathsf{b}(y'_{+1}, w)$, which is consistent with the fact that $\mathsf{conf}(A_p)$ contains a $b$ in the first cell at the right of the head; cells further to the right, which are necessarily created by Rule $R_{\mathsf{C}_{\mathsf{R}}}$, are treated similarly to cells to the left. • The only way to derive an atom of the shape $\mathsf{End}(x', x'')$ is to apply Rule $R_{\mathsf{End}}$, which can only be done after $n - p$ applications of Rule $R_{\mathsf{C}_{\mathsf{R}}}$, yielding a path of length $n - p + 2$ from position $p - 1$ of the current configuration, which fulfills the condition of Definition 46 (remember that the length of $\mathsf{conf}(A)$ is incremented by 1 with respect to the length of $\mathsf{conf}(A_p)$).

Lemma 22. For every configuration $\rho$, if the restricted chase does not terminate on $\langle \Sigma_M, D_\rho\rangle$ then there exists a run of $M$ on $\rho$ which visits $q_r$ infinitely many times.

Proof. We consider the sequence of state atoms $(A_n)_{n\in\mathbb{N}}$ provided by Lemma 21, and the sequence $(\mathsf{conf}(A_n))_{n\in\mathbb{N}}$. • $\mathsf{conf}(A_0)$ is the starting configuration of $M$ on $\rho$, and thus the start of a run of $M$ on that configuration; • if $(\mathsf{conf}(A_n))_{n\in\mathbb{N}}$ is not a run, there exists a smallest $j \in \mathbb{N}$ such that $(\mathsf{conf}(A_n))_{1 \leq n \leq j}$ is not a run. $\mathsf{conf}(A_{j-1})$ is consistent with the set of atoms associated with $A_{j-1}$ by Proposition 47.
Hence $\mathsf{conf}(A_j)$ is obtained by applying the transition corresponding to the rule creating $A_j$, and thus $(\mathsf{conf}(A_n))_{1 \leq n \leq j}$ is a run, which leads to a contradiction. □

# C Proofs for Section 5 (Rule set termination)

The following lemmas are used in later proofs of the section.

Lemma 48. For all databases $D$, and every result $F$ of a chase sequence for $\langle \Sigma_M, D\rangle$, if the atoms $\mathsf{F}(x, z, w)$ and $\mathsf{F}(y, z, w)$ are both in $F$ and $z$ is a null, then $x = y$.

Proof. This result follows from the fact that whenever an F-atom appears in the head of a rule, it contains an existentially quantified variable in second position, and no two F-atoms in the same head contain the same variable in second position. Thus, if $z$ is a null and $x$ and $y$ are different, the atoms $\mathsf{F}(x, z, w)$ and $\mathsf{F}(y, z, w)$ must have been generated by two rule applications, which would both introduce $z$, which is impossible. □

Lemma 49. For all databases $D$, and every result $F$ of a chase sequence for $\langle \Sigma_M, D\rangle$, for each null $y$ in $\mathsf{semterms}(F \setminus D)$, there are a unique $w$ and a unique $a \in \Gamma \cup \{\mathsf{B}\}$ such that $\mathsf{a}(y, w) \in F$.

Proof. Whenever there is an existentially quantified variable $x$ in the head of a rule in $\Sigma_M \setminus \{R_{\mathsf{Brake}}\}$, it appears in a unique atom of the form $\mathsf{a}(x, w)$ in the same head. In addition, all the atoms of this form in heads of rules feature an existentially quantified variable in first position (except for $R_{\mathsf{Brake}}$, which features a brake).
Thus, when the null $y$ is introduced in the chase, there is a unique atom $\mathsf{a}(y, w)$ introduced along with it (hence existence), and no other rule can introduce an atom of the same form (hence uniqueness). □

Lemma 28. For all databases $D$, and every result $F$ of a chase sequence for $\langle \Sigma_M, D\rangle$, the graph $\mathsf{bowtie}(A)$ is a finite bow tie for all state atoms $A \in F \setminus D$. In addition: • the center of the bow tie is the atom generated along with $A$, by one of the transition rules $R_{\neg q_r}$ or $R_{q_r}$; • all the atoms in the left part of the bow tie are generated by Rule $R_{\mathsf{C}_{\mathsf{L}}}$; • all the atoms in the right part of the bow tie are generated by Rule $R_{\mathsf{C}_{\mathsf{R}}}$, except possibly the end of a maximal path, which may have been generated by Rule $R_{\mathsf{End}}$.

Proof. First, notice that all the rules that generate R-atoms (over non-brake terms) generate R-atoms containing at least one existentially quantified variable. Three cases occur: • the transition rules $R_{\neg q_r}$ and $R_{q_r}$ generate an R-atom $\mathsf{R}(u, v, w)$ where $u$ and $v$ are both existentially quantified; • Rule $R_{\mathsf{C}_{\mathsf{L}}}$ generates an R-atom $\mathsf{R}(u, v, w)$ where $u$ is existentially quantified and $v$ is a frontier variable; • Rules $R_{\mathsf{C}_{\mathsf{R}}}$ and $R_{\mathsf{End}}$ generate an R-atom $\mathsf{R}(u, v, w)$ where $u$ is a frontier variable and $v$ is existentially quantified.
Thus, no connected component can contain two R-atoms generated using the transition rules $R_{\neg q_r}$ and $R_{q_r}$. Indeed, these rules create a new connected component, and to connect two connected components, we would need a rule generating an R-atom $\mathsf{R}(u, v, w)$ where $u$ and $v$ are both frontier variables, which is not the case with this rule set. This also implies that $(\mathsf{bowtie}(A), E_R)$ is acyclic, even when seen as an undirected graph, for the same reason. Thus, since $A = \mathsf{q}(x, w)$ is generated by a transition rule along with an R-atom, all the other atoms in the connected component of $x$ must have been generated by $R_{\mathsf{C}_{\mathsf{L}}}$, $R_{\mathsf{C}_{\mathsf{R}}}$ or $R_{\mathsf{End}}$. We assume here that $A$ was generated by Rule $R_{\neg q_r}$ or $R_{q_r}$ for one direction of the transition, as the other cases are symmetric. Then, $A$ is generated along with the atom $\mathsf{R}(x, y, w)$, which will be the center of our bow tie, and the atoms $\mathsf{C}_{\mathsf{L}}(x, w)$ and $\mathsf{C}_{\mathsf{R}}(y, w)$. We then consider the sets $\mathsf{bowtie}(A)_x^{-y}$ and $\mathsf{bowtie}(A)_y^{-x}$ as defined in Definition 27. First, as mentioned before, the undirected graph induced by $(\mathsf{bowtie}(A), E_R)$ is acyclic and connected, so these sets form a partition of $\mathsf{bowtie}(A)$. Thus, it only remains to show that the subgraphs induced by $\mathsf{bowtie}(A)_x^{-y}$ on $(\mathsf{bowtie}(A), E_R^-)$ and by $\mathsf{bowtie}(A)_y^{-x}$ on $(\mathsf{bowtie}(A), E_R)$ are trees.
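The tree criterion invoked next in the proof (a directed tree is an acyclic, connected graph in which every vertex has in-degree at most one) can be checked mechanically on a finite edge list. A minimal sketch over abstract vertex names (the names are hypothetical, not terms of the construction):

```python
# Sketch: check that a directed graph is a directed tree, i.e. connected
# (as an undirected graph) with n-1 edges and every vertex of in-degree <= 1.
def is_directed_tree(vertices, edges):
    indeg = {v: 0 for v in vertices}
    for _, v in edges:
        indeg[v] += 1
    if any(d > 1 for d in indeg.values()):
        return False
    # a connected graph on n vertices with n-1 edges is acyclic,
    # so it suffices to check the edge count and connectivity
    if len(edges) != len(vertices) - 1:
        return False
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return seen == set(vertices)

# an R-path x1 -> x2 -> x3 is a directed tree; a second parent would break it
print(is_directed_tree({"x1", "x2", "x3"}, [("x1", "x2"), ("x2", "x3")]))  # True
```

In the proof, acyclicity and connectedness come for free from the component structure, so the argument reduces to the in-degree bound, exactly as in this check.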
Again, since both proofs are similar, we only prove it for the second graph. A directed tree is an acyclic and connected graph such that each vertex has in-degree at most one. Since $(\mathsf{bowtie}(A), E_R)$ is acyclic, the subgraph induced by $\mathsf{bowtie}(A)_y^{-x}$ is acyclic too, and as it is a connected component, it is connected. Thus, it only remains to show that each term in $\mathsf{bowtie}(A)_y^{-x}$ has an in-degree of at most one. Our previous analysis of the rules entails that only a term $t$ such that $\mathsf{C}_{\mathsf{L}}(t, w) \in F$ can have an in-degree greater than one. Indeed, the only rule that can increase the in-degree of an existing element is Rule $R_{\mathsf{C}_{\mathsf{L}}}$, which requires this atom in its body. We thus show that there is no $t$ in $\mathsf{bowtie}(A)_y^{-x}$ such that $\mathsf{C}_{\mathsf{L}}(t, w) \in F$. Only two kinds of rules can generate $\mathsf{C}_{\mathsf{L}}$-atoms (over non-brakes): the transition rules ($R_{\neg q_r}$ and $R_{q_r}$) and Rule $R_{\mathsf{C}_{\mathsf{L}}}$. All these rules generate atoms of the form $\mathsf{C}_{\mathsf{L}}(u, w)$ where $u$ is existentially quantified. As stated before, in $\mathsf{bowtie}(A)$, only the atom $\mathsf{R}(x, y, w)$ has been generated using a transition rule, and every other R-atom has been generated using $R_{\mathsf{C}_{\mathsf{L}}}$ or $R_{\mathsf{C}_{\mathsf{R}}}$. Now, for a contradiction, assume that $t$ is the first term of $\mathsf{bowtie}(A)_y^{-x}$ introduced during the chase such that $\mathsf{C}_{\mathsf{L}}(t, w) \in F$.
Since the trigger generating $\mathsf{R}(x, y, w)$ only generates $\mathsf{C}_{\mathsf{L}}(x, w)$, and $x \notin \mathsf{bowtie}(A)_y^{-x}$, the term $t$ has been generated by Rule $R_{\mathsf{C}_{\mathsf{L}}}$. This means that there is a term $u \in \mathsf{bowtie}(A)_y^{-x}$ such that $\mathsf{C}_{\mathsf{L}}(u, w) \in F$ before $\mathsf{C}_{\mathsf{L}}(t, w)$ is introduced, which contradicts our hypothesis. Note that $u$ does indeed have to be in $\mathsf{bowtie}(A)_y^{-x}$, since otherwise $t \notin \mathsf{bowtie}(A)_y^{-x}$, as no rule can connect two disjoint connected components. Thus, there is no $\mathsf{C}_{\mathsf{L}}$-atom over a term in $\mathsf{bowtie}(A)_y^{-x}$, meaning that $(\mathsf{bowtie}(A)_y^{-x}, E_R)$ is a tree. As mentioned before, an analogous line of reasoning can be used to show that $(\mathsf{bowtie}(A)_x^{-y}, E_R^-)$ is also a tree, so $\mathsf{bowtie}(A)$ is indeed a bow tie. Note also that since there is no $\mathsf{C}_{\mathsf{L}}$-atom over a term in $\mathsf{bowtie}(A)_y^{-x}$, all the R-atoms of the right part of the bow tie must have been generated by Rule $R_{\mathsf{C}_{\mathsf{R}}}$ or $R_{\mathsf{End}}$. However, Rule $R_{\mathsf{End}}$ generates a new null $y$ such that $\mathsf{C}_{\mathsf{R}}(y, w) \notin F$ (by the same argument as previously), and both Rules $R_{\mathsf{C}_{\mathsf{R}}}$ and $R_{\mathsf{End}}$ require an atom of this form to extend a path. Thus, if an R-atom is generated using Rule $R_{\mathsf{End}}$, it is necessarily the end of a maximal path. It remains to show that $\mathsf{bowtie}(A)$ is finite: if $F$ is finite, then so is $\mathsf{bowtie}(A)$, as it is a subset of $F$. Otherwise, note that all atoms in $\mathsf{bowtie}(A)$ are associated with the same brake $w$. Then, by Lemma 43, $\mathsf{bowtie}(A)$ must be finite.
□

Recall that $\mathcal{A} = (A_n)$ is the sequence of state atoms provided by Lemma 21.

Lemma 30. For all $n > 0$, $\mathsf{configs}(A_n)$ is finite, non-empty, and each of its elements homomorphically embeds into $D_\rho$ for some configuration $\rho$. Also, there is an injective function $\mathsf{pred}_n$ from $\mathsf{configs}(A_{n+1})$ to $\mathsf{configs}(A_n)$ such that $S \in \mathsf{configs}(A_{n+1})$ can be generated using only atoms in $\mathsf{pred}_n(S)$.

Proof. Non-emptiness and finiteness. Non-emptiness and finiteness of $\mathsf{configs}(A_n)$ follow from Lemma 28, since a finite bow tie has a finite, non-zero number of maximal paths. The elements of $\mathsf{configs}(A_n)$ embed into some $D_\rho$. We then consider an element $s$ of $\mathsf{configs}(A_n)$, and $(x_1, \ldots, x_n)$ the path associated with it. Also, let $A_n = q(x, w)$. First, since $(x_1, \ldots, x_n)$ is a path in $(\mathsf{bowtie}(A_n), E_R)$, the atom $\mathsf{R}(x_i, x_{i+1}, w)$ is in $s$ for all $i$, and these are all the R-atoms in $s$ by Lemma 28. Then, by Lemma 49, there is a unique atom $\mathsf{a}_i(x_i, w)$ for all $i \leq n$. In addition, since all the maximal paths in a bow tie go through its center, there is some $p$ such that $x_p = x$. We thus define the configuration $\rho = \langle n, (a_i)_{i \leq n}, p, q\rangle$. By mapping $x_i$ to $c_i$ for all $i$, and $w$ to $w_1$, we get that $s$ and $D_\rho$ are isomorphic, except for the End-atoms.
However, as per the last item of Lemma 28, the only position that can carry an atom $\mathsf{End}(x_i, w)$ is the end of a maximal path (since this kind of atom can only be generated by Rule $R_{\mathsf{End}}$ over non-brakes). Thus, the only possible End-atom in $s$ is $\mathsf{End}(x_n, w)$, which has a counterpart in $D_\rho$. Thus, $s$ homomorphically embeds into $D_\rho$. Construction of $\mathsf{pred}_n$. We then construct the function $\mathsf{pred}_n$. Let $A_n = \mathsf{q}(x, w)$ and $A_{n+1} = \mathsf{q}'(y, w')$. First notice that the rule that generates $A_{n+1}$ in the chase is among the four transition rules $R_{\neg q_r}$ and $R_{q_r}$. Then, there are some atoms $\mathsf{F}(x, z, w')$, and $\mathsf{R}(z, y, w')$ or $\mathsf{R}(y, z, w')$ depending on the direction of the transition. We then assume that the transition is to the right, as the left case is analogous. Consider a set $S \in \mathsf{configs}(A_{n+1})$, $(y_1, \ldots, y_k)$ the associated path, and $\rho' = \langle k, (b_1, \ldots, b_k), p', q'\rangle$ the associated configuration, as defined earlier in the proof.
Then, let $\mathsf{pred}_n(S)$ be one of the sets in $\mathsf{configs}(A_n)$ with associated path $(x_1, \ldots, x_m)$ and configuration $\rho = \langle m, (a_1, \ldots, a_m), p, q\rangle$ such that there is an integer $l$ such that: • for all $i < k$, we have $\mathsf{F}(x_{i+l}, y_i, w') \in F$; • if $\mathsf{End}(y_k, w') \in S$, then $\mathsf{End}(x_{k+l-1}, w) \in F$, and otherwise $\mathsf{F}(x_{k+l}, y_k, w') \in F$; • for all $i \neq p' - 1$, $b_i = a_{i+l}$; • $p' + l = p + 1$. The function $\mathsf{pred}_n$ is well-defined. By definition of $S$ and its associated path and configuration, there must be some atoms $\mathsf{R}(y_i, y_{i+1}, w')$ and $\mathsf{b}_i(y_i, w')$ for all $i$, with $y = y_{p'}$. By Lemma 28, $\mathsf{R}(y_{p'-1}, y_{p'}, w')$ has been generated by Rule $R_{\neg q_r}$ or $R_{q_r}$ along with $A_{n+1}$, and $\mathsf{R}(y_{k-1}, y_k, w')$ may have been generated by Rule $R_{\mathsf{End}}$ or $R_{\mathsf{C}_{\mathsf{R}}}$. Other than that, all the R-atoms in the path $y_1, \ldots, y_k$ have been generated by Rules $R_{\mathsf{C}_{\mathsf{L}}}$ and $R_{\mathsf{C}_{\mathsf{R}}}$.
We then show that there is a path $x'_1, \ldots, x'_{k'}$ such that for all $i < k$, $\mathsf{F}(x'_i, y_i, w') \in F$; for all $i \neq p' - 1$, $\mathsf{b}_i(x'_i, w) \in F$; and either $\mathsf{End}(x'_{k-1}, w) \in F$ (and $k' = k - 1$) or $\mathsf{F}(x'_k, y_k, w') \in F$ (and $k' = k$), depending on whether $\mathsf{End}(y_k, w') \in S$ or not. First, since the atom $A_{n+1}$ has been generated by Rule $R_{\neg q_r}$ or $R_{q_r}$, there must be a term $z$ and some atoms $\mathsf{R}(x, z, w)$, $\mathsf{R}(y_{p'-1}, y, w')$, $\mathsf{F}(x, y_{p'-1}, w')$ and $\mathsf{F}(z, y, w')$ in $F$. Thus, let $x'_{p'-1} = x$ and $x'_{p'} = z$. We will then extend this path in both directions to construct $x'_1, \ldots, x'_{k'}$. If the path has been extended up to $x'_{p'+i}$ for some $i < k - 1 - p'$, we then extend it to $x'_{p'+i+1}$. As mentioned before, the atom $\mathsf{R}(y_{p'+i}, y_{p'+i+1}, w')$ has been generated by Rule $R_{\mathsf{C}_{\mathsf{R}}}$ (since $p' + i < k$). Thus, there must be some terms $z, t$ and atoms $\mathsf{R}(z, t, w)$, $\mathsf{F}(z, y_{p'+i}, w')$, $\mathsf{F}(t, y_{p'+i+1}, w')$ and $\mathsf{b}_{p'+i+1}(t, w)$ in $F$.
By Lemma 48, we then have $z = x _ { p ^ { \prime } + i } ^ { \prime }$ , since both $\mathsf { F } ( z , y _ { p ^ { \prime } + i } , w )$ and $\mathsf { F } ( x _ { p ^ { \prime } + i } ^ { \prime } , y _ { p ^ { \prime } + i } , w )$ are present in $F$ . We thus set $x _ { p ^ { \prime } + i + 1 } ^ { \prime } = t$ . The same reasoning lets us extend the path to $x _ { p ^ { \prime } - i - 1 } ^ { \prime }$ provided we have extended it to $x _ { p ^ { \prime } - i } ^ { \prime }$ , using the left copy rule instead of the right copy rule. We now treat the case where $i = k - 1 - p ^ { \prime }$ . If $\mathsf { E n d } ( y _ { k } , w ) \notin S$ , then $\mathsf { R } ( y _ { k - 1 } , y _ { k } , w )$ has been generated by rule $R _ { \mathsf { C } _ { \mathsf { R } } }$ , so the same reasoning as before applies, and $k ^ { \prime } = k$ . Otherwise, $\mathsf { R } ( y _ { k - 1 } , y _ { k } , w )$ has been introduced by rule $R _ { \sf E n d }$ , meaning that there are some term $z$ and atoms $\operatorname { E n d } ( z , w )$ and $\mathsf { F } ( z , y _ { k - 1 } , w )$ in $F$ . Thus, since both $\mathsf { F } ( z , y _ { k - 1 } , w )$ and $\mathsf { F } ( x _ { k - 1 } ^ { \prime } , y _ { k - 1 } , w )$ are in $F$ , by Lemma 48, $z = x _ { k - 1 } ^ { \prime }$ , and we have the atom $\mathsf { E n d } ( x _ { k - 1 } ^ { \prime } , w )$ in $F$ as promised, and $k ^ { \prime } = k - 1$ . We thus have a path $x _ { 1 } ^ { \prime } , \ldots , x _ { k ^ { \prime } } ^ { \prime }$ in bowtie $\left( A _ { n } \right)$ as described before. However, this path does not necessarily define an element of configs $\left( A _ { n } \right)$ , since it need not be maximal.
Thus, consider any maximal path $x _ { 1 } , \ldots , x _ { m }$ in bowtie $\left( A _ { n } \right)$ that extends $x _ { 1 } ^ { \prime } , \ldots , x _ { k ^ { \prime } } ^ { \prime }$ , $\mathsf { p r e d } _ { n } ( S )$ the corresponding set in configs $\left( A _ { n } \right)$ , $\langle m , ( a _ { 1 } , \ldots , a _ { m } ) , p , q \rangle$ the corresponding configuration, and let $l$ be the integer such that $x _ { l + 1 } = x _ { 1 } ^ { \prime }$ . Then, by definition of $( x _ { 1 } ^ { \prime } , \ldots , x _ { k ^ { \prime } } ^ { \prime } )$ , the first two points of the definition of $\mathsf{pred}_n(S)$ hold. Then, since $A _ { n } = \mathsf { q } ( x _ { p ^ { \prime } - 1 } ^ { \prime } , w )$ , and $x _ { p ^ { \prime } - 1 } ^ { \prime } = x _ { p ^ { \prime } - 1 + l }$ , we have $p = p ^ { \prime } - 1 + l$ , so $p + 1 = p ^ { \prime } + l$ . In addition, since for all $i \neq p ^ { \prime } - 1$ , $\mathsf { b } _ { i } ( x _ { i + l } , w ) \in F$ , we have $a _ { i + l } = b _ { i }$ . Thus, there is indeed a set in configs $\left( A _ { n } \right)$ that fits the definition of $\mathsf { p r e d } _ { n } ( S )$ . Note however that this path is not necessarily unique, but we only need an injective function, so this is fine. The set $\mathsf{pred}_n(S)$ is enough to generate $S$ . First note that all the rule applications described earlier suffice to generate $S$ . It is then enough to notice that all the atoms in the support of the mentioned triggers are present in $\mathsf{pred}_n(S)$ , or generated during the application of the previous triggers. # Injectivity of $\mathsf{pred}_n$ .
Consider two sets $S _ { 1 }$ with associated path $( y _ { 1 } , \dots , y _ { k _ { 1 } } )$ and configuration $\langle k _ { 1 } , ( b _ { 1 } , \ldots , b _ { k _ { 1 } } ) , p _ { 1 } , q ^ { \prime } \rangle$ , and $S _ { 2 }$ with associated path $( y _ { 1 } ^ { \prime } , \ldots , y _ { k _ { 2 } } ^ { \prime } )$ and configuration $\langle k _ { 2 } , ( b _ { 1 } ^ { \prime } , \ldots , b _ { k _ { 2 } } ^ { \prime } ) , p _ { 2 } , q ^ { \prime } \rangle$ , such that ${ \mathsf { p r e d } } _ { n } ( S _ { 1 } ) = { \mathsf { p r e d } } _ { n } ( S _ { 2 } ) = S ^ { \prime }$ , and $S ^ { \prime }$ has path $( x _ { 1 } , \ldots , x _ { m } )$ and configuration $\langle m , ( a _ { 1 } , \ldots , a _ { m } ) , p , q \rangle$ . Thus, there must be some $l _ { 1 }$ and $l _ { 2 }$ such that:
• for all $i$ , $\mathsf { F } ( x _ { i + l _ { 1 } } , y _ { i } , w ^ { \prime } ) \in F$ and $\mathsf { F } ( x _ { i + l _ { 2 } } , y _ { i } ^ { \prime } , w ^ { \prime } ) \in F$ ;
• for all $i \neq p _ { 1 } - 1$ , $b _ { i } = a _ { i + l _ { 1 } }$ , and for all $i \neq p _ { 2 } - 1$ , $b _ { i } ^ { \prime } = a _ { i + l _ { 2 } }$ ;
• $p _ { 1 } + l _ { 1 } = p + 1 = p _ { 2 } + l _ { 2 }$ .
Assume w.l.o.g. that $l _ { 1 } \geq l _ { 2 }$ , and let $d = l _ { 1 } - l _ { 2 }$ . We then get that $p _ { 2 } = p _ { 1 } + d$ , and $b _ { i } = a _ { i + l _ { 1 } } = a _ { i + d + l _ { 2 } } = b _ { i + d } ^ { \prime }$ for all $i \neq p _ { 1 } - 1$ . We then show that for all $i$ such that $1 \leq i \leq k _ { 1 }$ and $1 \leq i + d \leq k _ { 2 }$ , we have $y _ { i } = y _ { i + d } ^ { \prime }$ . First, this is true for $i = p _ { 1 }$ , since $y _ { p _ { 1 } } = y = y _ { p _ { 2 } } ^ { \prime }$ (where $A _ { n + 1 } = { \mathsf { q } } ^ { \prime } ( y , w ^ { \prime } )$ ) and $p _ { 2 } = p _ { 1 } + d$ .
This is also true for $i = p _ { 1 } - 1$ , since by definition of a bow tie and Lemma 28, there is only one term $t$ such that $\mathsf { R } ( t , y _ { p _ { 1 } } , w ^ { \prime } ) \in F$ . We then extend this to all $i$ by induction. Assume that $1 \leq i + 1 \leq k _ { 1 }$ and $1 \leq i + 1 + d \leq k _ { 2 }$ , and that $y _ { i } = y _ { i + d } ^ { \prime }$ for some $i \geq p _ { 1 }$ (the case where $i \leq p _ { 1 } - 1$ is similar, using $R _ { \mathsf { C } _ { \mathsf { L } } }$ instead of $R _ { \mathsf { C } _ { \mathsf { R } } }$ ). We then show that $y _ { i + 1 } = y _ { i + 1 + d } ^ { \prime }$ . Both the atoms ${ \sf R } ( y _ { i } , y _ { i + 1 } , w ^ { \prime } )$ and ${ \sf R } ( y _ { i } , y _ { i + 1 + d } ^ { \prime } , w ^ { \prime } )$ have been generated using rule $R _ { \mathsf { C } _ { \mathsf { R } } }$ . We then show that the triggers generating these atoms are equal, so these atoms must be equal. The body of rule $R _ { \mathsf { C } _ { \mathsf { R } } }$ is $\{ \mathsf { C } _ { \mathsf { R } } ( x ^ { \prime } , w ^ { \prime } ) , \mathsf { F } ( x , x ^ { \prime } , w ^ { \prime } ) , \mathsf { R } ( x , y , w ) , \mathsf { b } _ { i } ( y , w ) , \mathsf { R e a l } ( x ) , \mathsf { R e a l } ( x ^ { \prime } ) , \mathsf { R e a l } ( y ) \}$ . To generate ${ \sf R } ( y _ { i } , y _ { i + 1 } , w ^ { \prime } )$ , $x ^ { \prime }$ must be mapped to $y _ { i }$ (and $w ^ { \prime }$ to itself). Then, by Lemma 48, each term $v$ can only have one term $u$ such that $\mathsf { F } ( u , v , w ) \in F$ , so $x$ is mapped to $x _ { i + l _ { 1 } }$ and $y$ to $x _ { i + 1 + l _ { 1 } }$ (and $w$ to itself), since $\mathsf { F } ( x _ { i + l _ { 1 } } , y _ { i } , w ^ { \prime } )$ and $\mathsf { F } ( x _ { i + 1 + l _ { 1 } } , y _ { i + 1 } , w ^ { \prime } )$ are in $F$ .
However, we also have $\mathsf { F } ( x _ { i + l _ { 1 } } , y _ { i + d } ^ { \prime } , w ^ { \prime } )$ and $\mathsf { F } ( x _ { i + 1 + l _ { 1 } } , y _ { i + 1 + d } ^ { \prime } , w ^ { \prime } )$ , so the triggers generating ${ \sf R } ( y _ { i } , y _ { i + 1 } , w ^ { \prime } )$ and $\mathsf { R } ( y _ { i } , y _ { i + 1 + d } ^ { \prime } , w ^ { \prime } )$ are equal, and $y _ { i + 1 } = y _ { i + 1 + d } ^ { \prime }$ . Thus, $l _ { 1 } = l _ { 2 }$ and $k _ { 1 } = k _ { 2 }$ . Indeed, if $l _ { 1 } > l _ { 2 }$ , then we can extend $y _ { 1 } , \ldots , y _ { k _ { 1 } }$ into a bigger path $y _ { 1 } ^ { \prime } , \ldots , y _ { d } ^ { \prime } , y _ { 1 } , \ldots , y _ { k _ { 1 } }$ , which contradicts its maximality. If $k _ { 1 } \neq k _ { 2 }$ , then we can extend the shortest path into the longest, also contradicting its maximality. Thus, both paths are equal, and $S _ { 1 } = S _ { 2 }$ . From this we deduce that $\mathsf{pred}_n$ is injective. □ Lemma 31. The restricted chase does not terminate on $\langle \Sigma _ { M } , D _ { \rho } \rangle$ . Proof. First note that since for all $n \geq N$ , $\vert \mathrm { c o n f i g s } ( A _ { n } ) \vert = \vert \mathrm { c o n f i g s } ( A _ { N } ) \vert$ , $\mathsf { p r e d } _ { n }$ is actually a bijection, since it is injective between sets of equal sizes. It thus has an inverse $\mathsf{pred}_n^{-1}$ . Thus, for all $n \in \mathbb { N }$ , we define $S _ { n + 1 }$ as $\mathsf{pred}_{N+n}^{-1}(S_n)$ . Note that we picked $S _ { 0 }$ from configs $( A _ { N } )$ and that $S _ { 0 }$ homomorphically embeds into $D _ { \rho }$ .
We then inductively construct a sequence of derivations $( \mathcal { D } _ { n } ) _ { n \in \mathbb { N } }$ such that for all $n \in \mathbb { N }$ , if $S _ { n }$ is over terms $x _ { 1 } , \ldots , x _ { k } , w$ , then:
• $\mathcal { D } _ { n + 1 }$ extends ${ \mathcal { D } } _ { n }$ ;
• there is a homomorphism $\pi _ { n }$ from $S _ { n }$ to the result $R _ { n }$ of ${ \mathcal { D } } _ { n }$ ;
• if $\mathsf { F } ( \pi _ { n } ( x _ { i } ) , y , \pi _ { n } ( w ) ) \in \mathcal { D } _ { n }$ for some $i$ , then $y = \pi _ { n } ( w )$ ;
• $\mathsf { R e a l } ( \pi _ { n } ( w ) ) \not \in \mathcal { D } _ { n }$ .
First, as stated in Lemma 30, $S _ { 0 }$ embeds in $D _ { \rho }$ , so we let $\mathcal { D } _ { 0 } = D _ { \rho }$ , which does fulfill all the conditions above. Then, assume that we have constructed derivation ${ \mathcal { D } } _ { n }$ as described. By Lemma 30 again, all the atoms in $S _ { n + 1 }$ can be generated using only atoms in $\mathsf{pred}_n(S_{n+1}) = S _ { n }$ . We thus extend the derivation ${ \mathcal { D } } _ { n }$ into a derivation $\mathcal { D } _ { n + 1 } ^ { \prime }$ with the triggers needed to generate the atoms in $S _ { n + 1 }$ , composed with $\pi _ { n }$ . All these triggers are applicable since they all create atoms of the form $\mathsf { F } ( \pi _ { n } ( x _ { i } ) , y , \pi _ { n } ( w ) )$ and $\mathsf { R e a l } ( y )$ , which are not in the database by the third and fourth item. The homomorphism $\pi _ { n + 1 }$ is then defined naturally (the triggers that generate $S _ { n + 1 }$ from $S _ { n }$ were used here to generate new nulls, to which we can map the nulls of $S _ { n + 1 }$ ).
Then, if $S _ { n }$ contains an atom of the form $\mathsf { q } _ { r } ( \boldsymbol { x } , \boldsymbol { w } ^ { \prime } )$ , we add the trigger $( R _ { \mathsf { B r a k e } } , \{ w \to \pi _ { n } ( w ) \} )$ at the end of this new derivation, to construct $\mathcal { D } _ { n + 1 }$ . The first and second point then follow by design. The third point follows from the fact that the triggers that were used to generate $S _ { n + 1 }$ from $S _ { n }$ do not generate other F-atoms, and the last point from the fact that if $\mathsf { q } _ { r } ( x , w ^ { \prime } ) \in S _ { n }$ , then $S _ { n }$ and $S _ { n + 1 }$ use different brakes. We now show that the derivation $\textstyle { \mathcal { D } } = \bigcup _ { n } { \mathcal { D } } _ { n }$ is fair. First, by Lemma 21, there are infinitely many ${ \mathsf { q } } _ { r }$ -atoms in $( A _ { n } ) _ { n \in \mathbb { N } }$ , and thus infinitely many $n \in \mathbb { N }$ such that $S _ { n }$ contains a ${ \sf q } _ { r }$ -atom. Then, notice that whenever we encounter a ${ \mathsf { q } } _ { r }$ -atom in $\mathcal { D }$ , we make the previous brake real, blocking any rule application involving the atoms containing it. Thus, for any trigger that is applicable at some step $n$ , there is a step $m$ at which the brake that appears in this trigger’s support gets real, making this trigger obsolete. Thus, $\mathcal { D }$ is fair, and the restricted chase does not terminate on $\langle \Sigma _ { M } , D _ { \rho } \rangle$ . □
The chase is a fundamental algorithm with ubiquitous uses in database theory. Given a database and a set of existential rules (aka tuple-generating dependencies), it iteratively extends the database to ensure that the rules are satisfied in a most general way. This process may not terminate, and a major problem is to decide whether it does. This problem has been studied for a large number of chase variants, which differ by the conditions under which a rule is applied to extend the database. Surprisingly, the complexity of the universal termination of the restricted (aka standard) chase is not fully understood. We close this gap by placing universal restricted chase termination in the analytical hierarchy. This higher hardness is due to the fairness condition, and we propose an alternative condition to reduce the hardness of universal termination.
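As a toy illustration of the behavior described above (our own sketch, not material from the paper), consider the single existential rule $\mathsf{R}(x,y) \rightarrow \exists z\, \mathsf{R}(y,z)$. The restricted chase applies a trigger only when its head is not already satisfied, which is exactly what can make it terminate where the oblivious chase would not:

```python
# Toy illustration (not from the paper) of the restricted chase for the
# single existential rule  R(x, y) -> exists z. R(y, z).
# A trigger is applied only if its head is not already satisfied; dropping
# that check would give the oblivious chase instead.

from itertools import count

def restricted_chase(facts, max_steps=50):
    """Return (saturated_facts, True) on termination, (facts, False) on cutoff."""
    facts = set(facts)
    fresh = (f"n{i}" for i in count())          # supply of labelled nulls
    for _ in range(max_steps):
        # An active trigger is an edge R(x, y) with no outgoing edge from y.
        trigger = next(((x, y) for (x, y) in facts
                        if not any(a == y for (a, _) in facts)), None)
        if trigger is None:
            return facts, True                  # no active trigger: terminate
        facts.add((trigger[1], next(fresh)))    # satisfy the head with a fresh null
    return facts, False

# A self-loop already satisfies the rule, so the chase stops at once,
# while a single plain edge forces an ever-growing path of fresh nulls.
print(restricted_chase({("a", "a")}))
print(restricted_chase({("a", "b")})[1])
```

On `{R(a, a)}` no trigger is active and the chase terminates immediately; on `{R(a, b)}` each application creates a new null that re-activates the rule, so the run never terminates and the sketch only stops at its step budget.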
# 1. Introduction Large language models (LLMs) [3, 6] have emerged as powerful tools across a wide range of applications, from content generation to customer service [2, 18, 22]. As the technology behind these models advances, there is growing interest in their ability to integrate with external tools [7, 29, 43, 50], such as computational aids and data repositories [35, 40]. The ability of LLMs to leverage a broad spectrum of tools not only demonstrates their cognitive potential but also helps address inherent limitations [39], such as staying current with global knowledge [31], reducing inaccurate information generation [33, 41], and performing complex symbolic tasks. The constant emergence of new tools, including advanced software frameworks and domain-specific utilities [24], adds complexity to tool acquisition for LLMs. To address this, two primary approaches have been proposed for integrating tools into LLMs [26]. The first approach fine-tunes LLMs to learn specific tools [29]. While effective in some cases, this method is computationally expensive and struggles to adapt to new tools. The second approach, in-context learning, enables LLMs to handle new tools and has been successfully applied in various scenarios. However, it remains limited by the context length and performs poorly when mastering new tools with only a few examples [34, 52]. Recently, the ToolkenGPT method [12] was introduced to enhance LLMs by embedding multiple tools, enabling seamless integration via learned tool tokens. Figure 1 illustrates this tool-augmented LLM framework. By introducing additional tool tokens, the system supports two modes for next-token prediction: 1) If the predicted token is a word token, the system operates in the standard mode [12]; 2) If the predicted token corresponds to a tool, the system switches to tool mode and generates the tool's output as the next token [12].
Thus, the effectiveness of the learned tool tokens is critical for the success of this mode switch. Current token learning approaches typically learn token embeddings from scratch before integrating them with the vocabulary of tokens [12]. However, such approaches overlook the semantic relationship between tool and word token embeddings [23], which limits their adaptability within pre-trained LLMs [17]. Figure 1. An illustration of tool-augmented large language models. After the command text segment $< t _ { 0 } , . . . , t _ { i - 1 } >$ is input to the LLM, the model appends $t _ { i }$ to the output segment. This serves as an indicator to determine whether tool invocation is required. If tool usage is unnecessary, the system switches to Normal Mode and directly outputs the result. If tool invocation is required, the system transitions to Tool Mode and subsequently outputs the processed results from the tool. To address this limitation, we propose a novel token learning approach that jointly optimizes tool token embeddings for next-token prediction while ensuring their alignment with the word embedding space through a reinitialization perspective. Following ToolkenGPT [12], we construct training sequences that integrate both word and tool tokens, where tool-related tokens replace specific subsequences. To enhance consistency with the word embedding space, we align the learned tool token embeddings with prior tool token embeddings derived from word tokens. Specifically, these prior embeddings are constructed as follows. For each tool, we begin by extracting one or more word tokens from its name or description. We then calculate the tool's prior embedding by averaging the embeddings of these extracted word tokens. This prior embedding serves to regularize the optimization of the learnable tool token embedding, ensuring alignment with the prior embedding, i.e., the word embedding space.
Notably, the prior embeddings also serve as initialization for the learnable tool token embeddings, which helps accelerate convergence. As a result, the regularized token learning approach facilitates the learning of effective tool token embeddings that align with the existing word embeddings used by LLMs. To evaluate our proposed token learning approach for tool-augmented LLMs, we conducted comprehensive experiments across three representative tasks: mathematical problem solving, knowledge-based question answering, and embodied plan generation. In each of these tasks, external tools play a crucial role by significantly enhancing the reasoning capabilities of LLMs. Our results demonstrate that the proposed tool token learning approach significantly improves LLMs' tool selection accuracy for complex problems, especially those requiring numerical calculations. Furthermore, the results highlight the importance of maintaining consistency between additional token embeddings and the original vocabulary when augmenting pre-trained LLMs. In other words, the information contained in the original vocabulary can substantially enhance the model's ability to master and effectively use new tools. The main contributions of our proposed framework are as follows: • We propose a novel token learning approach for tool-augmented LLMs, which significantly enhances the accuracy of LLMs in selecting appropriate tools for complex tasks, particularly in scenarios requiring numerical calculations. • We introduce a pooling-based token embedding method to connect tool tokens with the LLM vocabulary, especially in complex scenarios. A regularization term is added to the loss function to ensure that the learned embeddings remain close to the prior embeddings. • We conduct empirical evaluations on three representative tasks (mathematical problem solving, knowledge-based question answering, and embodied plan generation) across LLaMA-2 models (7B, 13B, and 70B).
In the mathematical problem solving tasks, our method improves accuracy by approximately $3 \%$ compared to the latest method, ToolkenGPT. In the other two tasks, our method further improves the accuracy of the model in tool invocation, especially when the number of tools is large and the success rate of generated plans is low. # 2. Related Work # 2.1. Tool Tokenization Paradigms: The ToolkenGPT Approach ToolkenGPT [12] represents a significant advancement in tool integration for large language models (LLMs), introducing an innovative tokenization paradigm that addresses key limitations of previous approaches. By formulating tools as special tokens called "toolkens," this method enables seamless integration of external tools into the standard text generation process. Each toolken functions similarly to a word token but is associated with an embedding vector that encapsulates the tool's functionality. The operational mechanism of ToolkenGPT [12] involves several sophisticated steps: when the model predicts a toolken during generation, it enters a specialized mode where it generates appropriate input arguments for the corresponding tool. This transition is managed through carefully designed prompting strategies that maintain the model's contextual understanding while adapting to tool-specific requirements. After receiving the tool's output, the system reintegrates this information into the ongoing generation process, creating a smooth interaction between language modeling and tool execution. This approach demonstrates particular strength in three key application areas: numerical reasoning tasks where precise calculations are required, knowledge-based question answering that benefits from external data sources, and embodied plan generation that requires interaction with simulated environments.
The tokenized tool representation allows ToolkenGPT to outperform traditional methods like Chain-of-Thought [44] and ReAct [48] by eliminating the need for verbose intermediate reasoning steps while maintaining precise tool control. # 2.2. Evolution of Fine-tuning Based Tool Integration The historical development of tool integration in LLMs reveals a clear progression from specialized, fine-tuned systems to more flexible approaches. Early efforts in this domain primarily relied on model fine-tuning to achieve tool competency, focusing on enabling LLMs to work with a constrained set of tools within well-defined domains. Retrieval mechanisms emerged as one of the most impactful early tools, with systems like REALM [11], RAG [21], and RETRO [2] demonstrating how external knowledge sources [25, 51] could significantly enhance model performance on knowledge-intensive tasks. The WebGPT [27] system marked an important milestone by showing how human-like web search behaviors could be effectively incorporated into LLMs through fine-tuning. This work paved the way for broader tool integration efforts, with subsequent research expanding the range of incorporated tools to include question-answering systems, computational tools like calculators, language translation services, and various other utilities. Notable contributions in this expansion include TALM [29], which systematically explored tool augmentation across multiple domains, and Toolformer [35], which introduced self-supervised learning for tool use. Despite these advances, the fine-tuning paradigm presents fundamental limitations that become increasingly apparent as the field progresses. The computational resources required for effective fine-tuning grow substantially with model size, creating significant barriers to widespread adoption. Furthermore, fine-tuned models exhibit limited flexibility when facing new tools or updated versions of existing tools, often requiring complete retraining to maintain functionality.
# 2.3. In-Context Learning for Tool Usage The exploration of in-context learning for tool usage represents a paradigm shift from the fine-tuning approaches discussed earlier. This methodology capitalizes on LLMs' remarkable ability to learn from contextual examples, eliminating the need for weight updates while maintaining flexibility. The approach works by embedding tool descriptions and usage demonstrations directly within the prompt structure [26, 32], allowing models to adapt their behavior dynamically based on the provided examples. Practical implementations of this approach, such as those seen in ChatGPT plugins, demonstrate its potential for real-world applications. A typical usage scenario might involve showing the model multiple examples of calculator tool usage [35], including the precise format for input expressions and output interpretations. While effective for simple tools and common use cases [32], this method encounters significant challenges when dealing with more complex scenarios. The finite context window of current LLMs imposes strict limits on the number and complexity of tools that can be effectively demonstrated, while the few-shot learning paradigm often proves insufficient for reliable tool mastery. ReAct [48, 28, 55] offers a complementary approach that structures tool interaction through predefined action spaces. In knowledge-intensive applications, ReAct typically employs a set of fundamental actions including search, lookup, and finish operations, often implemented through standardized APIs like Wikipedia's interface. The system's effectiveness is particularly evident in tasks like HotPotQA [46], where the model's reasoning process directly informs its tool usage strategy. However, ReAct's reliance on predefined action spaces creates its own set of constraints.
Complex, multi-step tasks often exceed the system's capacity due to context window limitations [56, 5], while the need for careful action space design introduces additional implementation complexity. These limitations highlight the ongoing challenges in developing truly flexible and scalable tool integration methods for modern LLMs. # 3. Method This section introduces a regularized token learning framework for tool-augmented LLMs, aiming to align tool token embeddings with word embedding spaces and improve tool invocation accuracy. First, prior embeddings are constructed from tool names to initialize learnable tokens. Second, pooling operations (e.g., average/max pooling) aggregate word token features for embedding alignment. Finally, a regularization term constrains learned embeddings to match prior ones, enhancing training stability and generalization. Figure 2. The TokenLearning framework operates through the following methodological pipeline: First, we extract tool-related embedding vectors from the pretrained language model's vocabulary matrix $\mathbf { W } _ { v }$ . These extracted embeddings then undergo a pooling operation to aggregate their feature representations. Subsequently, we concatenate the processed embedding vectors corresponding to each individual tool to construct the initial matrix $\mathbf { W } _ { \tau } ^ { 0 }$ . This constructed matrix serves dual purposes: (1) as the initialization value for the learnable tool embedding matrix $\mathbf { W } _ { \tau }$ , and (2) as a regularization constraint during optimization. Through this approach, the final optimized "toolken" matrix $\mathbf { W } _ { \tau }$ appended to the large language model exhibits enhanced directional properties, enabling more precise tool invocation capabilities. # 3.1. Tool-Augmented LLMs
LLMs model the probability of a sequence of word tokens $s = ( t _ { 1 } , t _ { 2 } , \cdots , t _ { n } )$ as $P ( s ) = \prod _ { i = 1 } ^ { n } P ( t _ { i } | t _ { < i } )$ , where $t _ { i } \in \mathcal { V }$ and $t _ { < i }$ represents the partial sequence of tokens preceding the $i$ -th token. The formula for predicting the distribution of the next word token is $$ P ( t _ { i } | t _ { < i } ) = \mathrm { s o f t m a x } ( \mathbf { W } _ { v } \cdot \mathbf { h } _ { i - 1 } ) , $$ where $\mathbf { h } _ { i - 1 } \in \mathbb { R } ^ { d }$ denotes the last hidden state of the current context, $\mathbf { W } _ { v } \in \mathbb { R } ^ { | \mathcal { V } | \times d }$ is the embedding matrix for word tokens, and $\mathcal { V }$ represents the complete set of word tokens in LLMs [42]. The concept of tool tokens is also termed "toolken" in ToolkenGPT [12]. Given a set of tools $\mathcal { T }$ , the next token prediction is then formulated as $$ P ( t _ { i } | t _ { < i } ) = \mathrm { s o f t m a x } ( [ \mathbf { W } _ { v } ; \mathbf { W } _ { \tau } ] \cdot \mathbf { h } _ { i - 1 } ) , $$ where $t _ { i } \in \mathcal { V } \cup \mathcal { T }$ and $\mathbf { W } _ { \tau } \in \mathbb { R } ^ { | \mathcal { T } | \times d }$ is the embedding matrix of tool tokens [12]. When a tool invocation is triggered, the language model switches to the tool mode [12], pauses the current text generation, and completes the parameter generation according to the context demonstrations of the tool in the prompt and the syntax [tool](arguments). After the tool is executed, the result is sent back to the text in the reasoning mode for further processing [12]. Specifically, given a sentence, e.g., "there are 100 dollars", it can be tokenized into a word token sequence $s =$ ("there", "are", "1", "0", "0", "dollars").
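The extended prediction formula, a softmax over the concatenated matrix $[\mathbf{W}_v ; \mathbf{W}_\tau]$, can be sketched numerically. The sizes below are toy values chosen for illustration, not the paper's:

```python
import numpy as np

# Sketch of next-token prediction over the concatenated embedding matrix
# [W_v; W_tau] from the formula above.  Toy sizes, random values.

rng = np.random.default_rng(0)
d, n_words, n_tools = 8, 10, 3
W_v   = rng.normal(size=(n_words, d))    # word-token embeddings
W_tau = rng.normal(size=(n_tools, d))    # tool-token ("toolken") embeddings
h     = rng.normal(size=d)               # last hidden state h_{i-1}

logits = np.concatenate([W_v, W_tau]) @ h
probs  = np.exp(logits - logits.max())
probs /= probs.sum()                     # softmax over |V| + |T| entries

next_id = int(probs.argmax())
is_tool = next_id >= n_words             # ids past |V| trigger tool mode
```

The only change relative to a plain LLM head is the extra $|\mathcal{T}|$ rows; whether the argmax lands in those rows is precisely the mode-switch signal.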
To indicate when to predict the toolkens, we need a parallel sequence mixed with word tokens and toolkens, i.e., $s ^ { \prime } =$ ("there", "are", "[add]", "[N/A]", "[N/A]", "dollars"). The positions of ("1", "0", "0") in $s$ are where the returned tool results fill in; the corresponding first token in $s ^ { \prime }$ is the toolken for the tool call, and the following tokens are filled with [N/A], indicating that they are ignored in the loss calculation. When ToolkenGPT learns the toolken embedding matrix $\mathbf { W } _ { \tau }$ , given a dataset $\mathcal { D }$ composed of pairs $( s , s ^ { \prime } )$ , the training objective becomes $$ \mathcal { L } ( W _ { \tau } ) = \sum _ { ( s , s ^ { \prime } ) \in \mathcal { D } } \sum _ { i = 1 } ^ { N } - \log P ( t _ { i } ^ { \prime } | t _ { < i } ) \mathbb { I } _ { t _ { i } ^ { \prime } \neq [ \mathrm { N } / \mathrm { A } ] } , $$ where $P ( t _ { i } ^ { \prime } | t _ { < i } )$ is calculated according to the above formula and $\mathbb { I } _ { t _ { i } ^ { \prime } \neq [ \mathrm { N } / \mathrm { A } ] }$ is used to ignore the [N/A] tokens during training [12]. # 3.2. Prior Token Embeddings The main idea of regularized token learning is to explicitly link tool tokens to those tokens in the vocabulary of LLMs that correspond to tools. To achieve this, we first construct prior token embeddings $\mathbf { W } _ { \tau } ^ { 0 }$ for each tool based on the tool's name or description, which are then used to initialize and regularize the learnable tool token embeddings $\mathbf { W } _ { \tau }$ . For example, when solving mathematical problems, tools or operations such as add, subtract, multiply, and divide are used.
Each of these tool names corresponds to a word token in the LLM's vocabulary, and we directly extract their word embeddings to serve as the prior embeddings for these tools. In cases where a tool name maps to multiple word tokens, we apply global pooling operations across the embeddings of these tokens to obtain a single prior embedding for the tool. We can perform average pooling or max pooling on the multi-dimensional vectors to transform them into the embedding vectors corresponding to the relevant tools. Note that $y$ is the output vector of the pooling operation, $k _ { w }$ is the size of the pooling window in the width direction, and $x _ { j }$ is the element in the $j$ -th column of the input feature map, where $j$ ranges from 0 to $k _ { w } - 1$ . Average pooling operates on the matrix generated after the LLM processes the tokens, i.e., $$ y = \frac { 1 } { k _ { w } } \sum _ { j = 0 } ^ { k _ { w } - 1 } x _ { j } , $$ Average pooling takes the mean along the token-length dimension and generates a vector that provides a reference for the corresponding embedding vector. As the example shown in Figure 3, when a tool name is input and 3 tokens are generated, the matrix produced by the LLM has dimension [3, dim]. After averaging, it becomes a vector of shape [1, dim], where dim is the dimension of the transformer's hidden states. Max pooling instead selects the maximum value along the same dimension, i.e., $$ y = \operatorname* { m a x } _ { 0 \leq j \leq k _ { w } - 1 } x _ { j } . $$ # 3.3. Regularized Token Learning We initialize the learnable tool token embeddings $\mathbf { W } _ { \tau }$ using the prior embeddings $\mathbf { W } _ { \tau } ^ { 0 }$ from the previous subsection.
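The pooling step from Section 3.2 that produces these prior embeddings can be sketched as follows, using a toy two-row embedding table; the real rows would come from the LLM's matrix $\mathbf{W}_v$:

```python
import numpy as np

# Sketch of building a tool's prior embedding from the word-token
# embeddings of its name: average or max pooling over the k_w rows.
# Toy table; real priors come from the LLM's embedding matrix W_v.

def prior_embedding(token_ids, W_v, mode="avg"):
    vecs = W_v[token_ids]              # shape (k_w, d): one row per sub-token
    if mode == "avg":
        return vecs.mean(axis=0)       # y = (1/k_w) * sum_j x_j
    return vecs.max(axis=0)            # y = max_j x_j

W_v = np.array([[1.0, 3.0],            # embedding of first name token
                [3.0, 1.0]])           # embedding of second name token
print(prior_embedding([0, 1], W_v))              # average pooling
print(prior_embedding([0, 1], W_v, mode="max"))  # max pooling
```

Either variant maps a [k_w, dim] matrix to a single [1, dim] prior vector, which is then used both as initialization and as the regularization target.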
Subsequently, we update the learnable tool token embeddings $\mathbf { W } _ { \tau }$ based on the next token prediction loss, while also ensuring consistency with the LLM's word embedding space by using the prior embeddings $\mathbf { W } _ { \tau } ^ { 0 }$ as a reference. Therefore, our approach utilizes an overall loss function comprising two main components: the next-token prediction loss and a regularization term. These components are defined as follows: $$ \mathcal { L } ( W _ { \tau } ) = \sum _ { ( s , s ^ { \prime } ) \in \mathcal { D } } \sum _ { i = 1 } ^ { N } - \log P ( t _ { i } ^ { \prime } | t _ { < i } ) \mathbb { I } _ { t _ { i } ^ { \prime } \neq [ \mathrm { N } / \mathrm { A } ] } + \lambda \left\| \mathbf { W } _ { \tau } - \mathbf { W } _ { \tau } ^ { 0 } \right\| _ { 2 } ^ { 2 } , $$ where the hyper-parameter $\lambda$ controls the trade-off between the next token prediction loss and the regularization term that constrains the difference between $\mathbf { W } _ { \tau }$ and $\mathbf { W } _ { \tau } ^ { 0 }$ . By minimizing the difference between $\mathbf { W } _ { \tau }$ and $\mathbf { W } _ { \tau } ^ { 0 }$ during optimization, we impose constraints on the learning process of $\mathbf { W } _ { \tau }$ . $$ \mathcal { L } _ { \mathrm { r e g } } = \lambda \left\| \mathbf { W } _ { \tau } - \mathbf { W } _ { \tau } ^ { 0 } \right\| _ { 2 } ^ { 2 } . $$ The $L _ { 2 }$ regularization term $\mathcal { L } _ { \mathrm { r e g } }$ serves a crucial function during model training by imposing a constraint on the deviation between the learned tool token embeddings $\mathbf { W } _ { \tau }$ and their corresponding prior embeddings $\mathbf { W } _ { \tau } ^ { 0 }$ . This regularization mechanism ensures training stability by preventing excessive divergence of the model parameters from their initialization values.
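A compact sketch of the overall objective, the masked next-token term plus the $L_2$ penalty, is given below with toy logits and embeddings; this is our own illustration of the formula, not the paper's code:

```python
import numpy as np

# Sketch of the regularized objective above: masked next-token loss
# plus lambda * ||W_tau - W_tau0||_2^2.  Toy inputs throughout.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def total_loss(logits, targets, vocab, W_tau, W_tau0, lam=0.1, na="[N/A]"):
    # Masked next-token term: [N/A] positions contribute nothing,
    # mirroring the indicator in the loss formula.
    nll = sum(-np.log(softmax(row)[vocab[t]])
              for row, t in zip(logits, targets) if t != na)
    # L2 regularizer pulling toolken embeddings toward their priors.
    reg = lam * np.sum((W_tau - W_tau0) ** 2)
    return nll + reg
```

Since the gradient of the penalty is $2\lambda(\mathbf{W}_\tau - \mathbf{W}_\tau^0)$, every update step is shrunk back toward the prior embeddings, which is what anchors the toolkens in the word embedding space.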
In the absence of such regularization, the model would be prone to overfitting the training data and would generalize poorly to unseen data. Incorporating $\mathcal { L } _ { \mathrm { r e g } }$ promotes more conservative parameter updates, mitigating the risk of over-adaptation to noisy or idiosyncratic patterns in the training set. The regularization strength is governed by the hyperparameter $\lambda$, which sets the trade-off between model flexibility and constraint severity. For large values of $\lambda$, the optimization strongly penalizes deviations from $\mathbf { W } _ { \tau } ^ { 0 }$, effectively anchoring the learned embeddings near their initial values; while this reduces overfitting, it may excessively constrain the model's capacity to learn meaningful representations, potentially leading to underfitting and poor performance on both training and test data. Conversely, small values of $\lambda$ impose weak constraints and permit greater flexibility in parameter updates, but this flexibility comes at the risk of overfitting, where the model over-optimizes to training-set specifics at the expense of generalization. This framework thus balances preserving prior knowledge (encoded in $\mathbf { W } _ { \tau } ^ { 0 }$) against adapting to new information during learning.

# 4. Experiments

In this section, we first introduce the datasets for the three tasks: mathematical problem solving, knowledge-based question answering, and embodied plan generation. We then present the experimental results and conduct ablation studies.

# 4.1.
Datasets

We consider four datasets: GSM8K-XL [12], FuncQA [12], KAMEL [12], and VirtualHome [12].

GSM8K-XL: The GSM8K-XL dataset [12] is an enhanced version of the GSM8K [8] benchmark, which consists of linguistically diverse grade-school math word problems requiring sequential application of the four basic arithmetic operations $( + , - , \times , \div )$ to reach final solutions. The original GSM8K dataset [8] primarily contains problems with small numerical values, potentially limiting its effectiveness for evaluating contemporary large language models (LLMs), as it fails to sufficiently challenge their reasoning capacities or thoroughly examine their tool-utilization capabilities in complex problem-solving contexts [4], [48]. To address this limitation, the numerical values in the test set were systematically increased, resulting in the GSM8K-XL dataset of 568 test cases with substantially larger numbers, thereby elevating computational difficulty and enabling a more robust evaluation of LLMs' tool-assisted reasoning. Training uses the original GSM8K training set with calculation annotations: of the 6,054 available examples, 5,054 serve as training data and the remaining 1,000 as validation samples.

FuncQA: The FuncQA dataset [12] was developed to increase the complexity of numerical reasoning tasks by evaluating models' ability to acquire and invoke appropriate tools when solving sophisticated mathematical problems involving multiple arithmetic operations and multi-step reasoning [12]. The benchmark is designed to emulate realistic, computationally intensive scenarios that require proficient use of diverse arithmetic tools, imposing more stringent demands on models' tool-manipulation and logical-reasoning capabilities.
The finalized FuncQA dataset comprises two distinct subsets: 68 one-hop questions that can be resolved through a single arithmetic operation, and 60 multi-hop questions requiring sequential reasoning steps. During dataset construction, a stratified sampling approach was employed: for each arithmetic operator, 47 training and 3 validation data points were selected, yielding a total of 611 training samples and 39 validation samples across all operators.

KAMEL: Large language models (LLMs) often exhibit limitations in factual accuracy [10], frequently generating erroneous or hallucinated content due to inherent knowledge constraints [13, 19, 49, 53, 54]. To mitigate these issues, knowledge base (KB) integration has emerged as a viable solution for reducing hallucination rates [14, 36, 57]. In practical implementations, KB access is typically facilitated through application programming interfaces (APIs) [38], [9], where each relational query can be framed as a tool operation [32], with subject entities as inputs and corresponding tail entities as outputs [20]. The KAMEL [12] framework incorporates knowledge spanning 234 distinct Wikidata relations, each mapped to a specific question template. For instance, the "winner of" relation is associated with the template "Who is the winner of [S]?", effectively converting Wikidata facts into queryable formats. This structure yields a total of 234 tool-like query mechanisms. To systematically investigate the relationship between tool quantity and model performance, we constructed four evaluation subsets through stratified sampling from the original test set. These subsets contain questions corresponding to 30, 60, 100, and 234 tools respectively, with each subset comprising 500 curated questions.

VirtualHome: Recent research has demonstrated significant interest in employing large language models (LLMs) as controllers for embodied agents [1, 15, 16, 37, 45].
While prompt-based approaches have achieved preliminary success [47], significant challenges remain in enabling LLMs to develop comprehensive environmental understanding and generate grounded predictions. As highlighted by Mialon et al. [26], LLMs can utilize diverse tool types, including both information-gathering tools (e.g., mathematical or knowledge-base tools) and physical-world interaction tools (e.g., embodied agent actions), through fundamentally similar mechanisms. VirtualHome [30] is a foundational simulation platform and knowledge base for embodied-intelligence research. Centered on common household activities, it incorporates an ActivityPrograms knowledge base [30] containing numerous executable task plans. Dataset construction selected verbs and objects appearing with a minimum frequency of 10 occurrences, resulting in 247 training tasks and 50 test tasks spanning 25 distinct verbs and 32 unique objects. Combined with the [END] function for plan termination, these elements form 58 distinct tool tokens (toolkens) [30].

# 4.2. Implementation Details

This section presents the application of our methodology across three well-defined domains with distinct tool-utilization paradigms: 1) arithmetic operations for numerical reasoning, 2) database API interactions for knowledge-based question answering, and 3) robotic action sequences for embodied plan generation. Our primary objective is to enhance large language models' (LLMs) capabilities in both precise tool prediction and effective tool application within the ToolkenGPT framework [12].
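In the ToolkenGPT framing, tools are extra tokens whose embeddings sit alongside the word-embedding head, so a toolken can be decoded like any word. The following is a minimal sketch of that idea under assumed shapes and random values; the names `W_word`, `W_tau`, and `next_token_scores` are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, num_tools, dim = 100, 4, 16

W_word = rng.standard_normal((vocab_size, dim))  # frozen word-embedding head
W_tau = rng.standard_normal((num_tools, dim))    # learnable toolken embeddings

def next_token_scores(hidden: np.ndarray) -> np.ndarray:
    """Score words and toolkens jointly: the output head is the concatenation
    [W_word; W_tau], so a toolken competes with ordinary word tokens."""
    return np.concatenate([W_word, W_tau]) @ hidden

h = rng.standard_normal(dim)                     # a hidden state from the LLM
scores = next_token_scores(h)                    # shape: (vocab_size + num_tools,)
tool_called = int(np.argmax(scores)) >= vocab_size  # True iff a toolken wins
```

When `tool_called` is true, generation would pause and the corresponding tool would be invoked before decoding resumes.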
Regarding computational requirements, training used the following hardware configurations: the LLaMA2-7B model [42] was trained on a single GeForce RTX 4090 GPU, LLaMA2-13B [42] on two GeForce RTX 4090 GPUs, and LLaMA2-70B [42] across eight GeForce RTX 4090D GPUs.

# 4.2.1 GSM8K-XL and FuncQA datasets

Building upon the methodology outlined in Section 3, we extract learning tokens corresponding to mathematical operation symbols to reinforce the constrained training of tool embeddings. During inference, we employ a 4-shot Chain-of-Thought prompting strategy to augment the LLM's reasoning capabilities. For the GSM8K-XL dataset, the toolken embeddings of learning tokens are trained on a subset of 5,063 examples, with an additional 1,000 examples reserved for validation. The embeddings were trained with a learning rate of 1e-3 and early stopping based on the development set, for a maximum of 5 epochs. For the FuncQA dataset, we use a learning rate of 1e-4 and early stopping based on the development set, with a maximum of 10 training epochs. The FuncQA prompt is similar to the GSM8K-XL prompt. We establish three principal baseline approaches:

• Chain-of-Thought (CoT) [44]: This state-of-the-art prompting technique employs carefully designed prompts to facilitate sequential reasoning during inference. We maintain consistency in reasoning-chain examples across all comparative methods, including the ToolkenGPT and TokenLearning implementations.

• ReAct [48]: An interactive paradigm that jointly generates reasoning traces and tool invocations. Our implementation adopts a specialized syntax for operator calls (e.g., 50 * 3.2 = <multiply>(50, 3.2) = 160), where the system automatically triggers tool execution upon detecting this pattern during inference.
• ToolkenGPT [12]: This approach represents tools as discrete tokens ("toolkens") embedded within the model's parameter space. When the generation process produces a toolken, the system automatically initiates the corresponding tool invocation.

For fair comparison, all methods use identical reasoning-chain exemplars, varying only in their tool invocation syntax. We evaluate our approach using the LLaMA2 architecture at three scales: the LLaMA2-7B, LLaMA2-13B, and LLaMA2-70B models [42].

# 4.2.2 KAMEL dataset

Toolken embeddings of learning tokens are trained with a learning rate of 1e-3 and early stopping based on the development set, for a maximum of 5 epochs. To rigorously assess our proposed methodology, we establish two principal baseline approaches on the KAMEL benchmark:

• In-context Learning (ICL) [32]: This paradigm represents a state-of-the-art approach for equipping LLMs with tool-usage capabilities through demonstration-based learning. Our implementation adopts a two-stage prompting strategy: (1) we first prepend the complete inventory of available tools along with their functional descriptions to the model's context window; (2) we then present the target query for processing. To mitigate the inherent constraints of limited context length in transformer-based architectures, we employ a space-optimized representation scheme in which each tool is described using minimal lexical units (preferably single-word descriptors) without compromising operational semantics.

Table 1. Results on GSM8K-XL with different models. For the GSM8K-XL dataset, accuracy is evaluated based on an exact match (float numbers rounded to four decimals).

• ToolkenGPT [12]: The tokenized tool representation framework enables efficient tool composition through learned embeddings.
The KAMEL instantiation incorporates a comprehensive set of 234 distinct toolkens, each corresponding to a unique relational operation derived from the underlying knowledge graph. This representation allows for: (i) seamless integration with the model's existing vocabulary, (ii) efficient tool retrieval during inference, and (iii) scalable addition of new capabilities through token expansion. Implementation note: we employ constrained prompting techniques to restrict LLM outputs exclusively to relevant API calls, enabling precise evaluation of tool selection accuracy under controlled conditions.

# 4.2.3 VirtualHome dataset

Toolken embeddings of learning tokens are trained with a learning rate of 1e-4 and early stopping based on the development set, for a maximum of 10 epochs. All methods use the same prompts in this experiment. We establish parallel baseline methodologies for the VirtualHome environment to maintain consistent evaluation protocols:

• In-context Learning (ICL) [32]: This approach implements a comprehensive priming strategy consisting of: (i) a complete enumeration of executable atomic actions, (ii) three exemplar task plans demonstrating proper tool sequencing, and (iii) the target task specification, including its objective, operational parameters, and environmental context. This multi-component prompting architecture provides the grounding necessary for situated action planning.

• ToolkenGPT [12]: The tokenized tool representation framework achieves efficient action composition through 58 discrete toolkens corresponding to (a) 57 fundamental household actions and (b) one termination token ([END]) for plan completion. Each toolken encapsulates both the semantic meaning and the executable properties of its associated action.

In terms of computational resources, we train and test TokenLearning based on LLaMA2-7B, LLaMA2-13B, and LLaMA2-70B using 1, 2, and 8 Nvidia RTX 4090 GPUs, respectively.

# 4.3.
Experimental Results

# 4.3.1 GSM8K-XL

Table 1 presents a comprehensive evaluation of various methods on the GSM8K-XL dataset, revealing critical insights into large language models' mathematical reasoning capabilities. The Chain-of-Thought (CoT) [44] approach demonstrates significant limitations, particularly in handling the dataset's extended numerical ranges, as it requires both precise mathematical-logical reasoning and accurate numerical computation, a well-documented challenge for pure LLM-based methods. This computational bottleneck becomes increasingly pronounced with the larger numerical values in the GSM8K-XL benchmark. In contrast, tool-augmented methods, including ReAct [48], ToolkenGPT [12], and our proposed TokenLearning approach, achieve substantially improved performance by externalizing numerical operations, thereby ensuring correct computational results whenever the model's reasoning process is valid. Notably, our TokenLearning method, building upon ToolkenGPT's framework, delivers consistent performance gains of approximately 3% across model sizes. While ReAct demonstrates strong results on the LLaMA2-70B model (51.23%), highlighting the enhanced comprehension capabilities of larger-scale models, our TokenLearning approach ultimately achieves superior performance (54.22%), demonstrating that specialized training methodologies can further optimize model capabilities even when applied to already proficient large-scale architectures. Note that for the FuncQA (one-hop) dataset, accuracy is evaluated based on an exact match (float numbers rounded to three decimals). For FuncQA (multi-hop), we allow a margin of error of 0.1% to account for potential errors at each step of multi-hop reasoning. As presented in Table 2, our TokenLearning method achieves superior performance on the one-hop task with 0.65 accuracy, significantly outperforming all baseline approaches on the LLaMA2-70B model.
For multi-hop reasoning, while our method demonstrates a marked improvement (0.162) over ToolkenGPT (0.147), it remains marginally inferior to ReAct (0.176). These results suggest that while learned tool representations perform strongly in simpler one-hop scenarios, their effectiveness in complex multi-hop reasoning may be constrained by the precision of token-level representations when training data is limited. Notably, ReAct's superior multi-hop performance underscores the remarkable capability of large language models to dynamically select appropriate tools through well-designed prompting and in-context learning, even without explicit tool-token training, highlighting the complementary advantages of prompt-based versus learned tool-invocation mechanisms in different reasoning contexts.

Table 2. Results on the FuncQA dataset for different methods on the LLaMA2-70B model, under multi-hop and one-hop settings.

Figure 4. Performance of TokenLearning and baselines on four test sets from KAMEL involving different numbers of tools (relations); each test set comprises 500 questions related to 30, 60, 100, and 234 relations, respectively.

# 4.3.2 KAMEL

Our experimental evaluation across four test sets with varying relations demonstrates distinct performance characteristics among the compared approaches, as illustrated in Figure 4. The in-context learning (ICL) methods exhibit notable limitations in tool-selection accuracy, with both ICL-13b and ICL-70b variants performing significantly worse than tool-augmented approaches. Notably, our TokenLearning method achieves consistent improvements of approximately 3% or more over ToolkenGPT across all test sets and model scales (LLaMA2-13B and LLaMA2-70B), with the most substantial gains observed in the LLaMA2-70B configurations.
These results substantiate that our learned token representations maintain effective guidance for tool selection despite the inherent challenge of API relations being composed of semantically irrelevant tokens, highlighting the robustness of our approach in capturing functional relationships beyond surface-level token semantics. The progressive performance enhancement from ICL to ToolkenGPT and further to TokenLearning suggests a clear hierarchy in tool-utilization effectiveness, with our method establishing a new state of the art in tool-augmented language model performance.

# 4.3.3 VirtualHome

The experimental results presented in Table 3 demonstrate that our TokenLearning method achieves consistent performance improvements, delivering approximately 2% higher accuracy than ToolkenGPT (0.72 vs. 0.68 for LLaMA2-13B and 0.78 vs. 0.76 for LLaMA2-70B) while outperforming In-Context Learning (ICL) by substantial margins (0.72 vs. 0.24 for LLaMA2-13B and 0.78 vs. 0.34 for LLaMA2-70B), thereby establishing a new state of the art for tool-augmented task performance on the VirtualHome benchmark across both model scales.

Table 3. Performance comparison on the VirtualHome dataset across LLaMA2 model sizes [42]. Success accuracy measures the proportion of scripts achieving correct final states.

Table 4. Results of the ablation study on the GSM8K-XL dataset with different initializations on LLaMA2-70B. Accuracy is evaluated based on an exact match (float numbers rounded to four decimals).

Table 5. Results of the ablation study on the VirtualHome dataset with different initializations on LLaMA2-13B and LLaMA2-70B. Success accuracy is a relaxed variant: the proportion of scripts that reach the correct final state, though not necessarily ending with it.

# 4.4. Ablation Studies

# 4.4.1 Initialization

The quality of the additional matrix $\mathbf { W } _ { \tau }$ is significantly influenced by the initialization method.
In the specific context of the GSM8K-XL dataset, tokens corresponding to the fundamental arithmetic operations (addition, subtraction, multiplication, and division) exhibit distinct and well-defined directional properties. Building on this observation, we designed a series of experiments to evaluate the model's performance across various input tokens, including semantically unrelated tokens such as "one", "two", "three", and "four". This experimental design clearly demonstrates, through comparative analysis, the directional relationship between learning tokens and tool tokens. As the results in Table 4 show, model accuracy is notably compromised when irrelevant tokens are used for initialization, compared to the performance achieved with our proposed method. These empirical results validate the superiority of our initialization approach. Notably, our method maintains high accuracy even with relatively limited training data. Furthermore, these findings corroborate that $\mathbf { W } _ { \tau } ^ { 0 }$ exhibits stronger directional guidance, enabling more effective steering of the model along the desired learning trajectory. To further investigate the impact of pooling operations on model initialization, we conducted additional experiments. Table 5 presents comparative analyses of initialization performance using matrices generated through different pooling operations. Our examination reveals that average pooling, which integrates the information of all tokens, demonstrates particularly pronounced directional properties. This finding offers valuable strategic insights for model initialization.

# 4.4.2 Pooling

In our experiments, we investigated the influence of different pooling operations on model performance.
On the KAMEL dataset, owing to the uniqueness of its relations (no separate relevant tokens exist in the vocabulary), the choice of pooling operation significantly affects the learning of tokens. We evaluated two distinct approaches, max pooling and average pooling, and observed a pronounced divergence in the accuracy of the outputs generated by the large language model (LLM). As illustrated in Table 7, which compares max pooling and average pooling under varying constraint-term coefficients on the LLaMA2-13B model with the KAMEL dataset, average pooling yields more stable and consistent results than max pooling. Specifically, as the number of tools (relations) increases, the accuracy of max pooling degrades sharply, whereas average pooling maintains robust performance. Furthermore, when encountering unseen toolkens, superior results are achieved by holistically leveraging the LLM's contextual information to derive the learning tokens. These findings suggest that average pooling offers greater stability and generalizability, particularly in scenarios involving an expanding set of tools or unfamiliar toolkens.

Table 6. Results of the ablation study on the GSM8K-XL, FuncQA-oh, KAMEL, and VirtualHome datasets with different regularization coefficients $\lambda$ on LLaMA2-70B and LLaMA2-13B.

Table 7. Results of the ablation study on the KAMEL dataset with different pooling operations on LLaMA2-13B. Accuracy is evaluated based on an exact match (float numbers rounded to three decimals).

# 4.4.3 Regularization

Regarding tool-invocation accuracy, our analysis demonstrates that an appropriately tuned regularization coefficient $\lambda$ consistently enhances performance. This parameter guides the model in learning tool-invocation patterns while leveraging prior knowledge to constrain updates of the tool-related parameter matrix $\mathbf { W } _ { \tau }$, thereby enabling precise generation of tool tokens for successful invocations.
However, suboptimal selection of $\lambda$ adversely impacts accuracy. An excessively large $\lambda$ imposes overly stringent constraints, preventing the model from adequately learning the features essential for tool invocation and consequently impairing its adaptability to novel tools or evolving usage contexts. Conversely, an insufficient $\lambda$ predisposes the model to overfitting, which manifests as degraded tool-invocation accuracy in complex real-world scenarios. We systematically investigate the effect of the constraint-term coefficient $\lambda$ on model performance through experiments across three distinct datasets while varying $\lambda$, providing quantitative insight into how different regularization strengths influence the system's behavior under controlled conditions. Table 6 presents the comprehensive evaluation results of the LLaMA2-13B and LLaMA2-70B models on the GSM8K-XL, KAMEL, and VirtualHome datasets under varying constraint-term coefficients $\lambda$, with additional analysis of tool-quantity variations specific to the KAMEL dataset. Our comparative analysis reveals significant dataset-dependent variation in how each model's performance evolves with changes in the regularization strength $\lambda$.
These differential response patterns highlight the context-sensitive nature of optimal $\lambda$ selection across different problem domains and loss functions.
Large language models have demonstrated exceptional performance, yet struggle with complex tasks such as numerical reasoning and plan generation. Integrating external tools, such as calculators and databases, into large language models (LLMs) is crucial for enhancing their problem-solving capabilities. Current methods assign a unique token to each tool, enabling LLMs to call tools through token prediction, similar to word generation. However, this approach fails to account for the relationship between tool tokens and word tokens, limiting adaptability within pre-trained LLMs. To address this issue, we propose a novel token-learning method that aligns tool tokens with the existing word-embedding space from the perspective of initialization, thereby enhancing model performance. We begin by constructing prior token embeddings for each tool based on the tool's name or description, which are used to initialize and regularize the learnable tool token embeddings. This ensures that the learned embeddings are well aligned with the word-token space, improving tool-call accuracy. We evaluate the method on tasks such as numerical reasoning, knowledge-based question answering, and embodied plan generation using the GSM8K-XL, FuncQA, KAMEL, and VirtualHome datasets. The results demonstrate clear improvements over recent baselines, including CoT, ReAct, ICL, and ToolkenGPT, indicating that our approach effectively augments LLMs with tools through relevant tokens across diverse domains.
# I. Introduction

Recent years have witnessed rapid progress in egocentric video understanding, driven by improvements in foundation models [1]–[4], pretraining strategies [5]–[7], loss functions [8], [9], and data augmentations [10]. Despite significant performance gains, the increasing scale of models, prolonged training pipelines, and ever-larger datasets have led to an exponential rise in training costs. Current state-of-the-art pretraining solutions [6], [11] generally adopt a pretraining pipeline that involves three stages: 1) capturing the spatial-temporal structure through video reconstruction tasks [12], 2) image-text alignment, and 3) video-text alignment via contrastive learning. During the pretraining process, large image and video datasets such as LAION [13] and InternVid [14], which contain hundreds of millions of vision-text pairs, make the training process prohibitively expensive.

The research work was conducted in the JC STEM Lab of Machine Learning and Computer Vision funded by The Hong Kong Jockey Club Charities Trust. Xiaoqi Wang, Yi Wang, and Lap-Pui Chau are with the Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University, Hong Kong SAR (e-mail: xiaoqi.wang@connect.polyu.hk; yieie.wang@polyu.edu.hk; lap-pui.chau@polyu.edu.hk).

Fig. 1. Our EVA02-AT-L model outperforms the previous state-of-the-art methods on three egocentric benchmarks: EgoMCQ, EK-100 MIR, and CharadesEgo, in both zero-shot and fine-tune settings, by adopting joint attention blocks with integrated spatial-temporal RoPE.

Besides the training cost, Rotary Positional Embeddings (RoPE) are now widely used in state-of-the-art vision models [15], [16]. CogVideoX [17] first proposes 3D-RoPE, which extends RoPE to a spatial-temporal approach. Specifically, video tensors in latent space are treated as $(x, y, t)$ coordinates, and CogVideoX applies 1D-RoPE independently along these three coordinates.
In practice, the feature dimension is divided into slices of 3/8, 3/8, and 1/4 corresponding to the $x$, $y$, and $t$ coordinates, respectively. Although the effectiveness of this approach has been demonstrated, there are two key issues with the manual division of hidden feature dimensions:

• Separation of spatial and temporal embeddings. The isolation in the 3D-RoPE proposed in CogVideoX fails to model cross-axis relationships. Temporal embeddings, which represent motion between frames in video sequences, should ideally reflect changes along the spatial axes over time. In 3D-RoPE, since the dimensions are independent, a temporal shift $\Delta t$ has no geometric meaning in the spatial dimensions, preventing the fusion of relative positions across temporal and spatial axes.

• Uneven dimension division. Dividing the hidden dimensions of vision transformer architectures into three parts is not always feasible (e.g., 1024 for ViT-L). In 3D-RoPE, the dimensions of the $t$ coordinate are smaller than those of the $x$ and $y$ coordinates, which may benefit spatially sensitive tasks but reduces the ability to model long video sequences.

Moreover, we identified an issue with the current loss functions used in egocentric retrieval tasks. Specifically, EgoVLP [8] introduces the adaptive Multi-Instance Max Margin (MIMM) loss, which employs a hard-mining strategy. This strategy allows the dataloader to select samples whose soft-label values exceed a threshold, rather than always selecting the most relevant ones. However, this can lead to negative pairs that are more strongly related to the textual descriptions than the positive pairs, steering the model in the wrong direction; yet simply removing the hard-mining strategy would significantly reduce model performance. To address these issues, we propose EVA-02 with spAtial-Temporal attention (EVA02-AT), a training-efficient solution for egocentric video understanding tasks.
EVA02-AT leverages the image-pretrained CLIP model of EVA02 [16], [18], simplifying the pretraining pipeline to a single stage by directly transferring the image-based CLIP model to a video-based one through video-text alignment. To achieve this, we extend the Rotary Positional Embedding (RoPE) to a spatial-temporal form that is compatible with the original 2D-RoPE. Concretely, RoPE can be treated as a rotation matrix, which is multiplicative: the product of two RoPEs corresponds to the sum of their respective positional angles. We therefore first generate a 1D-RoPE for the temporal embeddings and a 2D-RoPE for the spatial embeddings, where both embeddings span the whole feature dimension. We then take the product of the temporal and spatial RoPEs to obtain the final representation of our spatial-temporal RoPE. This approach combines RoPE with learnable temporal and spatial positional embeddings, forming a final positional embedding. Our spatial-temporal RoPE enables each subspace to jointly encode spatiotemporal information, naturally supporting cross-axis relative positions. To provide a more precise learning objective, we propose the Symmetric Multi-Similarity (SMS) loss for soft-label multi-instance retrieval tasks. Inspired by Multi-Similarity loss [19] and RANP [9], our SMS loss collects the correlation values not only of positive pairs but also of negative pairs, optimizing the model from both sides. The SMS loss thus redefines the relationship between positive and negative pairs and can convert certain negative pairs into positive ones under specific conditions, enabling symmetric optimization of positive and negative pairs. Additionally, we introduce a relaxation factor in the SMS loss to prevent it from optimizing minor, unimportant samples.
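The multiplicative property of RoPE described above, that composing two rotations adds their angles on every 2D feature subspace, can be checked with a small numpy sketch. This is a simplified illustration (one temporal and one spatial index, toy frequencies), not the EVA02-AT implementation:

```python
import numpy as np

def rope_rotate(x: np.ndarray, angles: np.ndarray) -> np.ndarray:
    """Apply RoPE: rotate consecutive (even, odd) feature pairs by `angles`."""
    x1, x2 = x[0::2], x[1::2]
    c, s = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[0::2] = x1 * c - x2 * s
    out[1::2] = x1 * s + x2 * c
    return out

dim = 8                                       # full feature dim, 4 rotation pairs
freqs = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
x = np.arange(dim, dtype=float)
t, pos = 3.0, 5.0                             # a temporal and a spatial index

# Applying the temporal rotation and then the spatial rotation equals one
# rotation by the summed angle on each 2D subspace: R(t*f) R(pos*f) = R((t+pos)*f).
composed = rope_rotate(rope_rotate(x, t * freqs), pos * freqs)
joint = rope_rotate(x, (t + pos) * freqs)
assert np.allclose(composed, joint)
```

This is why the temporal 1D-RoPE and spatial 2D-RoPE can each span the full feature dimension and still compose into a single spatial-temporal rotation, avoiding any manual dimension split.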
We evaluate our framework on three widely used egocentric video datasets: Ego4D [20], EPIC-Kitchens-100 (EK-100) [21], [22], and Charades-Ego [23]. The experimental results demonstrate the effectiveness of both our EVA02-AT models and the SMS loss. Our method achieves state-of-the-art performance on these benchmarks in both zero-shot and fine-tuned settings; partial results are shown in Fig. 1.
# II. Related Works
# A. Video Foundation Models
Video foundation models can be grouped by their pretraining pipeline, which is often closely tied to their architectural design. Foundation models based on video-text contrastive learning generally extend image-text models by adding temporal modules to capture temporal features. Early work like I3D [1] augments spatial 2D-CNNs with an LSTM [24] for temporal feature aggregation. More recent approaches like LaViLa [10] and EgoVLP [7], [8] utilize TSF [25] and FiT [26] as backbone networks, which add temporal-attention blocks to the ViT backbone, while AVION [4] treats each video as a flattened spatial-temporal sequence processed end-to-end by a ViT, greatly reducing overall training costs. In contrast, models using reconstruction-based pretraining pipelines learn video representations via self-supervised objectives such as masked video reconstruction [12], [27] and next-frame prediction [28]. This pretraining pipeline trains the model from scratch, thus allowing a more flexible architecture. Specifically, InternVideo [5], [6] adopts a 3D-CNN in the patchify process to form spatial-temporal cubes before feeding a ViT, so that the patches contain temporal information, while Flamingo [28] interleaves cross-attention layers to jointly encode video and text features. RoPE [29] has driven recent advances in vision-language models [15], [30] by providing continuous, unbounded position encoding. However, transferring RoPE to videos remains challenging. As shown in Fig.
3, existing solutions such as 3D-RoPE [17], M-RoPE [15], and VideoRoPE [31] take different approaches to video RoPE. 3D-RoPE divides the feature dimension into uneven slices and applies a separate 1D-RoPE to each, so that the three 1D-RoPEs represent the $x$-axis, $y$-axis, and $t$-axis individually. VideoRoPE further improves on 3D-RoPE by combining the spatial axes, $x$ and $y$, into a uniform 2D-RoPE. However, these methods manually split the embedding dimensions into spatial and temporal parts, which precludes a direct transfer of image-based encoders to video domains, and the uneven dimension division may weaken the ability to capture temporal information.
# B. Loss Functions for Contrastive Learning
Contrastive learning is a widely adopted paradigm for learning cross-modal representations by aligning paired samples while repelling mismatched ones [32]–[34]. In video-text pretraining, a common choice is the InfoNCE loss [35], which treats matched video-text pairs as positives and all other pairings within a minibatch as negatives. To better handle noisy alignments, MIL-NCE [36] relaxes the assumption of perfect correspondence by summing multi-instance scores over all positive candidates, while EgoNCE [8] explicitly parses verbs and nouns in captions to weight pairwise affinities according to semantic overlap within each batch. Beyond batch-wide negatives, several works emphasize the importance of hard negatives or fine-grained similarity metrics. For example, RANP [9] mines semantically hard negative pairs and trains in a triplet manner [37], improving the discrimination of closely related but non-matching pairs. Circle loss [38] and Multi-Similarity (MS) loss [19] further generalize this idea by weighting each positive and negative pair according to its difficulty, enabling the model to focus more on challenging examples.
Recent advances in soft labeling and adaptive margin strategies have also been shown to improve performance. The adaptive MI-MM loss in EgoVLP [8] incorporates soft labels from the EK-100 MIR annotations, achieving a substantial improvement. The relevancy margin loss [39] adds the correlation value to negatives, providing a more accurate learning objective. Inspired by this, we propose the SMS loss, which extends the soft labels to both the positive and the negative pairs.
# III. Preliminary
Rotary Positional Embedding. RoPE [29] is an effective relative positional embedding approach that has shown extraordinary performance in many state-of-the-art video network architectures [6], [17], [40]. Originally, the vanilla 1D-RoPE was designed for word embeddings. In transformer-based models that use self-attention mechanisms, RoPE incorporates relative positional information into the attention mechanism. Specifically, the goal of RoPE is to embed the relative position information between the query $\mathbf{x}_m$ at position $m$ and the key $\mathbf{x}_n$ at position $n$ within the attention blocks. It should be a function $f(\cdot)$ that satisfies the following condition:
$$
\left\langle f_q\left(\pmb{x}_m, m\right), f_k\left(\pmb{x}_n, n\right) \right\rangle = g\left(\pmb{x}_m, \pmb{x}_n, m - n\right),
$$
where $g(\cdot)$ denotes the real part of the inner product between $f_q(\pmb{x}_m, m)$ and $f_k(\pmb{x}_n, n)$. In other words, the inner product between the projected query and key vectors at positions $m$ and $n$ is a function of the input vectors and their relative position $m - n$. This property indicates that RoPE is a multiplicative positional embedding: the inner product between two RoPE embeddings depends on the difference of their absolute positional angles.
Learning objective.
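As a sanity check, the relative-position property above can be reproduced with a minimal 1D-RoPE sketch. This is our own illustrative NumPy implementation, not the paper's code, and the toy dimensionality is arbitrary:

```python
import numpy as np

def rope_1d(x, pos, base=10000.0):
    """Rotate consecutive feature pairs of x by angles pos * theta_i (vanilla 1D-RoPE)."""
    d = x.shape[-1]
    theta = base ** (-np.arange(0, d, 2) / d)   # per-pair frequencies
    angles = pos * theta                        # rotation angles at this position
    x_even, x_odd = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x_even * np.cos(angles) - x_odd * np.sin(angles)
    out[..., 1::2] = x_even * np.sin(angles) + x_odd * np.cos(angles)
    return out

rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)

# <f_q(q, m), f_k(k, n)> depends only on the offset m - n:
s1 = rope_1d(q, 5) @ rope_1d(k, 3)   # offset 2
s2 = rope_1d(q, 9) @ rope_1d(k, 7)   # offset 2, shifted positions
assert np.isclose(s1, s2)
```

Shifting both positions by the same amount leaves the attention score unchanged, which is exactly the condition in the equation above.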
Given a triplet set $\mathcal{D} = \{\mathcal{V}, \mathcal{T}, C\}$, the objective of the video-text retrieval task is to learn a similarity function $S(\cdot)$ that satisfies $S(\mathcal{V}, \mathcal{T}) = C$. Here, $\mathcal{V} = \{\mathbf{v}_i\}_{i=1}^{N_v}$ and $\mathcal{T} = \{\mathbf{t}_j\}_{j=1}^{N_t}$ represent the video and narration sets with $N_v$ and $N_t$ samples, respectively. The label set $C = \{\mathbf{c}_{ij} \in \{0, 1\} \mid i = 1, 2, \ldots, N_v,\ j = 1, 2, \ldots, N_t\}$ denotes whether a visual-text pair matches; that is, $c_{ij} = 1$ signifies that $(\mathbf{v}_i, \mathbf{t}_j)$ is a corresponding visual-text pair, and vice versa. In deep metric learning, it is challenging to optimize every sample to its exact position. A general alternative is to use a margin $\gamma$ to separate the positive and negative pairs. Therefore, in the typical visual-to-text retrieval task, the most intuitive learning objective is to ensure that the distance between positive and negative pairs is larger than the margin, which can be formulated as:
Fig. 2. Illustration of the label collection mechanism of the adaptive MI-MM loss. Sub-figure (a) indicates that the soft labels collected by the previous dataloader during training differ from the actual soft labels, since they only capture correlation values for positive pairs (i.e., the diagonal values). Sub-figure (b) illustrates a case where negative pairs can have higher correlation values than positive pairs.
$$
O_{v2t} := S(\mathcal{V}, \mathcal{T}_p) - S(\mathcal{V}, \mathcal{T}_n) \geq C \cdot \gamma,
$$
where $S(\cdot)$ denotes the similarity function, and $\mathcal{T}_p$ and $\mathcal{T}_n$ are the matching and mismatching narrations for the corresponding video clips. Given that $C$ is the hard label set, where the values can only be 0 or 1, the target distance between the positive and negative pairs for every batch becomes $(\mathbf{c}_p - \mathbf{c}_n)\gamma = \gamma$. Considering that cosine similarity is used, i.e., the matrix product of L2-normalized features represents their similarity, the learning objective becomes:
$$
\begin{array}{rl}
O_{v2t} & := S(\mathbf{v}_i, \mathbf{t}_j) - S(\mathbf{v}_i, \mathbf{t}_k) \geq \gamma \\
& \;\Leftrightarrow\; \gamma - \mathbf{v}_i^T \mathbf{t}_j + \mathbf{v}_i^T \mathbf{t}_k \leq 0.
\end{array}
$$
Here, $j$ and $k$ index the samples in the positive and negative sets, respectively. Since our task is bidirectional, i.e., we conduct both video-to-text and text-to-video retrieval, the loss function can be formulated as:
$$
\mathcal{L} = \sum_{(i,j,k) \in \mathcal{N}} \left[\gamma - \mathbf{v}_i^T \mathbf{t}_j + \mathbf{v}_i^T \mathbf{t}_k\right]_+ + \left[\gamma - \mathbf{t}_i^T \mathbf{v}_j + \mathbf{t}_i^T \mathbf{v}_k\right]_+ .
$$
This is a commonly used loss function in video-text retrieval, called the hinge loss or Multi-Instance Max-Margin (MI-MM) loss [41], where $[\cdot]_+$ denotes the ReLU function. Meanwhile, consider a special scenario in which soft labels are introduced.
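Before moving to soft labels, the bidirectional MI-MM loss above can be sketched per triplet. This is a minimal NumPy sketch of ours; the toy orthogonal features are illustrative only:

```python
import numpy as np

def mi_mm_loss(V, T, triplets, margin=0.2):
    """Bidirectional Multi-Instance Max-Margin (hinge) loss over (i, j, k) triplets.

    V, T: L2-normalized video / text features of shape (B, D);
    for each triplet, j indexes a positive and k a negative for anchor i.
    """
    relu = lambda v: max(v, 0.0)
    loss = 0.0
    for i, j, k in triplets:
        loss += relu(margin - V[i] @ T[j] + V[i] @ T[k])  # video-to-text term
        loss += relu(margin - T[i] @ V[j] + T[i] @ V[k])  # text-to-video term
    return loss

# Orthogonal toy features: the matched pair (0, 0) already satisfies the margin.
V = np.array([[1.0, 0.0], [0.0, 1.0]])
T = np.array([[1.0, 0.0], [0.0, 1.0]])
assert mi_mm_loss(V, T, [(0, 0, 1)]) == 0.0
```

Swapping the positive and negative indices, e.g. the triplet `(0, 1, 0)`, violates the margin in both directions and yields a positive loss.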
In the EPIC-Kitchens-100 multi-instance retrieval task, a semantic-based soft label generation method is proposed [42]. Specifically, narrations are used to describe actions, which can be simplified as combinations of verbs and their corresponding objects (nouns). Consequently, the generation method can be formulated as follows:
$$
S_{PoS}\left(y_i, y_j\right) = \sum_{p \in P} \alpha^p \frac{\left| w_i^p \cap w_j^p \right|}{\left| w_i^p \cup w_j^p \right|},
$$
where $p$ denotes a part of speech, e.g., verb or noun, and $\alpha^p$ denotes the weight for each part of speech, commonly 0.5 for both verb and noun. The equation thus means that the relevancy value, or soft label value, $\mathbf{c}_{ij} \in [0, 1]$ between the $i$-th and $j$-th narrations equals the IoU of the words in the selected parts of speech. In this scenario, the relevance matrix becomes $C = \{\mathbf{c}_{ij} \in [0, 1] \mid i = 1, 2, \ldots, N_v,\ j = 1, 2, \ldots, N_t\}$. To take advantage of this prior information, the adaptive MI-MM loss [8], [43] is proposed, formulated as:
$$
\mathcal{L} = \sum_{(i,j,k) \in \mathcal{N}} [\mathbf{c}_{ij}\gamma - \mathbf{v}_i^T \mathbf{t}_j + \mathbf{v}_i^T \mathbf{t}_k]_+ + [\mathbf{c}_{ij}\gamma - \mathbf{t}_i^T \mathbf{v}_j + \mathbf{t}_i^T \mathbf{v}_k]_+ .
$$
Fig. 3. Dimension layouts of 3D-RoPE (CogVideoX), VideoRoPE, and our Spatial-Temporal RoPE.
The learning objective of the adaptive MI-MM loss is similar to that of the MI-MM loss, but introduces the relevancy matrix $C$ into the learning objective.
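The part-of-speech soft-label computation (Eqn. 5) can be sketched directly. The word-list representation of a narration below is our assumption; the weighted IoU itself follows the formula:

```python
def s_pos(words_i, words_j, alpha=None):
    """Soft label as a weighted IoU over per-part-of-speech word sets (Eqn. 5)."""
    alpha = alpha or {"verb": 0.5, "noun": 0.5}   # common weights from the text
    score = 0.0
    for p, weight in alpha.items():
        a, b = set(words_i.get(p, [])), set(words_j.get(p, []))
        if a | b:                                  # skip empty parts of speech
            score += weight * len(a & b) / len(a | b)
    return score

# "eat banana" vs. "eat apple": verbs overlap fully, nouns not at all.
y_i = {"verb": ["eat"], "noun": ["banana"]}
y_j = {"verb": ["eat"], "noun": ["apple"]}
assert s_pos(y_i, y_j) == 0.5
```

An identical pair of narrations scores 1.0, and fully disjoint verbs and nouns score 0.0, matching the $[0, 1]$ range of $\mathbf{c}_{ij}$.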
However, the adaptive MI-MM loss only considers the correlations of positive pairs, treating the correlation between video clips and their corresponding negative pairs as 0. As shown in Fig. 2(a), the correlation between negative pairs, $c_{ik}$, is not always 0. This makes the learning objective less precise for soft-label-based multi-instance retrieval tasks. Moreover, EgoVLP [8] employs a hard mining strategy that defines the positive set as $i^+ = \{j \mid \mathbf{c}_{ij} \geq 0.1\}$; that is, partially matched video-text pairs can be treated as positive samples. As illustrated in Fig. 2, since the adaptive MI-MM loss ignores the correlation values of negative pairs, this becomes problematic when $\mathbf{c}_{ij} < \mathbf{c}_{ik}$, steering the learning objective in the direction opposite to the correct one.
# IV. The Proposed Method
# A. EVA-02 AT Transformer
In this subsection, we introduce the design choices in the EVA-02 AT transformer, including the patchify process, the spatial-temporal RoPE embedding, and the theory of joint attention blocks.
Patchify. Inspired by the framework of AVION [4], we integrate a spatial-temporal attention block into a vanilla EVA-02 [18], [44]. For patch embedding, an input video sequence $\mathbf{v} \in \mathbb{R}^{C \times T \times H \times W}$, where $C, T, H, W$ represent channels, number of frames, height, and width, is processed in the spatial domain only. This approach ensures compatibility with the original image encoder, yielding a patchified feature of dimension $\mathbb{R}^{B \times (T \times p^2) \times D}$, where $p^2$ is the number of patches per frame and $D = \frac{CHW}{p^2}$.
We introduce two distinct learnable positional embeddings: a temporal positional embedding $P_t \in \mathbb{R}^{T \times D}$ and a spatial positional embedding $P_{xy} \in \mathbb{R}^{p^2 \times D}$. Each temporal positional embedding is replicated $p^2$ times across the patches of a frame, while each spatial positional embedding is replicated $T$ times to cover all frames. Therefore, the initial representation $z^{(0)}$ after patch embedding is formulated as:
$$
\begin{array}{rl}
& z^{(0)} = P_{xy}^{T} + P_{t}^{S} + x^{(0)}, \\
& s.t.\ P_{xy}^{T} = \{P_{xy}^{i} \in \mathbb{R}^{p^2 \times T \times D} \mid i = 1, 2, \ldots, T\}, \\
& \phantom{s.t.\ } P_{t}^{S} = \{P_{t}^{j} \in \mathbb{R}^{p^2 \times T \times D} \mid j = 1, 2, \ldots, p^2\}.
\end{array}
$$
Here, $P_t^S$ and $P_{xy}^T$ denote the final temporal and spatial positional embeddings before the transformer blocks, and $x^{(0)}$ denotes the initial feature of the video clip after passing through the first convolutional layer in the patch embedding block. In this case, we employ a 3D convolution, also known as tube convolution [12], with a convolution kernel of $1 \times p \times p$. This convolutional operation effectively captures both the spatial and temporal information of the video during the patch embedding phase. The inclusion of the temporal dimension allows the image encoder to act as a video encoder.
Joint Spatial-Temporal Attention. The learnable spatial-temporal positional embedding in EVA02-AT enables joint spatial-temporal attention.
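The replication of the two learnable embeddings described above can be sketched as follows. This is our own NumPy sketch; the frame-major token ordering and the variable names (`P2` for the number of spatial patches per frame) are assumptions for illustration:

```python
import numpy as np

T, P2, D = 4, 16, 32                  # frames, patches per frame, feature dim
P_t = np.random.randn(T, D)           # learnable temporal embedding (one row per frame)
P_xy = np.random.randn(P2, D)         # learnable spatial embedding (one row per patch)

# Replicate: each temporal embedding covers the P2 patches of its frame,
# and the spatial embedding repeats across all T frames.
P_t_S = np.repeat(P_t, P2, axis=0)    # (T*P2, D), frame-major token order
P_xy_T = np.tile(P_xy, (T, 1))        # (T*P2, D)

x0 = np.random.randn(T * P2, D)       # patchified video tokens
z0 = P_xy_T + P_t_S + x0              # initial representation after patch embedding
assert z0.shape == (T * P2, D)
assert np.allclose(P_t_S[:P2], P_t[0])   # frame 0's tokens share one temporal row
```

`np.repeat` versus `np.tile` is what distinguishes the two replication patterns: the temporal rows are repeated contiguously, while the spatial block is tiled once per frame.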
In EVA02-AT, we adopt joint attention blocks that process spatial and temporal information simultaneously, rather than the divided spatial and temporal attention used in typical video encoders such as TimeSformer and Frozen-in-Time [25], [45]. To cooperate with the joint attention, we need an integrated spatial-temporal RoPE to capture the joint features. Fig. 3 illustrates how our spatial-temporal RoPE works. Specifically, since RoPE is a multiplicative positional embedding where the inner product of two RoPEs is equivalent to the addition of their rotation angles, a temporal change in the spatial domain, $xy + \Delta t$, obeys the following equation:
$$
R_{(xy + \Delta t)} = R_{xy} \cdot R_{\Delta t}.
$$
Therefore, we initialize a 2D-RoPE $R_{xy} \in \mathbb{R}^{p^2 \times D}$ on the spatial domain, where the dimension is evenly divided between height and width, and a 1D temporal RoPE $R_t \in \mathbb{R}^{T \times D}$ on the entire dimension. By taking the inner product of the spatial and temporal RoPEs, we obtain an addition of spatial and temporal rotation angles. Similar to the learnable positional embeddings, the spatial RoPE is replicated $T$ times for the $T$ frames in the batch, and the temporal RoPE is replicated $p^2$ times for the patches in every frame, in order to align our spatial-temporal RoPE with the positional embedding. This operation can be expressed as:
Fig. 4. Training framework of EVA02-AT. Given an input video clip $\nu_i$, a hard mining strategy is applied to find a partially matching narration $t_j$ from the pre-built relevancy matrix. The dataloader then randomly selects, as the positive pair for the input video clip, a narration from the candidates $\{t_j \mid j = 1, 2, \ldots, N_t\}$ whose correlation value $\mathbf{c}_{ij}$ with $\nu_i$ is greater than a predefined threshold $\epsilon$.
Meanwhile, the dataloader records the serial number of the video clip in the pre-built relevancy matrix, so as to rebuild a $B \times B$ correlation matrix during the loss calculation.
$$
\begin{array}{rl}
& R_{(xy+t)} = R_{xy}^{T} \cdot R_{t}^{S}, \\
& s.t.\ R_{xy}^{T} = \{R_{xy}^{i} \in \mathbb{R}^{p^2 \times T \times D} \mid i = 1, 2, \ldots, T\}, \\
& \phantom{s.t.\ } R_{t}^{S} = \{R_{t}^{j} \in \mathbb{R}^{p^2 \times T \times D} \mid j = 1, 2, \ldots, p^2\}.
\end{array}
$$
In this way, we apply the spatial RoPE and temporal RoPE on the entire dimension instead of manually dividing the dimension into uneven slices. Since we use the standard QK-RoPE, the output of our joint spatial-temporal attention at the $k$-th layer can be expressed as:
$$
\begin{array}{rl}
z^{k} & = \mathrm{SPACE\text{-}TIME}(z^{k-1}) \\
& = Attn\left(R_{(xy+t)} W_q z^{k-1},\ R_{(xy+t)} W_k z^{k-1},\ W_v z^{k-1}\right).
\end{array}
$$
Here, $z^{k-1}$ denotes the output of the $(k-1)$-th layer. In this way, the attention score between query and key becomes a global attention over all the patches in the video clip instead of spatial attention on a single frame. Meanwhile, the model can still be trained on the basis of an image encoder, which simplifies the pretraining process.
# B. Symmetric Multi-Similarity Loss
As mentioned above, the adaptive MI-MM loss [8] is not an accurate loss function, since the correlation values of negative pairs are not considered. Therefore, to provide a more accurate learning objective, we introduce a novel training framework, shown in Fig. 4. Building on the hard-mining strategy of EgoVLP [8], which treats partially matched pairs as positives, the training framework can learn verbs and nouns in natural language independently.
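Returning briefly to the architecture, the multiplicative combination of spatial and temporal RoPE described above can be checked numerically. In this sketch of ours, each feature pair is represented as a complex phasor, so multiplying rotations adds their angles, and the attention phase depends only on the relative offsets; the names and sizes are illustrative, and the full-dimension spatial/temporal split follows the text:

```python
import numpy as np

d = 8                                           # complex "pairs" per token
theta = 10000.0 ** (-np.arange(d) / d)          # shared per-pair frequencies

def spatial_temporal_rope(x, y, t):
    """Full-dimension 2D spatial RoPE times full-dimension 1D temporal RoPE.

    x and y each rotate half of the pairs; t rotates every pair, so each
    subspace carries a joint spatial-temporal phase.
    """
    ang_xy = np.concatenate([x * theta[: d // 2], y * theta[d // 2:]])
    ang_t = t * theta
    return np.exp(1j * (ang_xy + ang_t))        # angles add under multiplication

q = np.random.default_rng(0).normal(size=d) + 0j
k = np.random.default_rng(1).normal(size=d) + 0j

# The query-key phase depends only on the relative offsets (dx, dy, dt):
s1 = np.vdot(spatial_temporal_rope(2, 3, 1) * q, spatial_temporal_rope(1, 1, 0) * k)
s2 = np.vdot(spatial_temporal_rope(5, 6, 4) * q, spatial_temporal_rope(4, 4, 3) * k)
assert np.isclose(s1, s2)
```

Both score pairs share the offsets $(\Delta x, \Delta y, \Delta t) = (-1, -2, -1)$, so the scores coincide: cross-axis relative positions are encoded without carving the feature dimension into per-axis slices.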
Beyond this, we further refine it by incorporating correlations from both positive and negative samples. Specifically, we compute the relevance matrix via Eqn. 5. During training, the dataloader collects not only matched video-text pairs but also sequences of videos $\nu_i$ and partially matched texts $t_j$. For each batch, we reconstruct a $B \times B$ relevance matrix from these sequences, where $B$ is the batch size. Thus, the relevancy value between any video and text within the batch can be found in this batch-wise relevancy matrix, so that negative-pair entries reflect their true correlation scores rather than defaulting to zero. This enriched matrix serves as the foundation for our SMS loss. Given the correlation values for both positive and negative pairs, we aim to create a loss function that optimizes the model from both directions. The Multi-Similarity loss [19] provides a good example and has demonstrated its effectiveness on metric learning tasks; it is formulated as:
$$
\begin{array}{r}
\mathcal{L}_{MS} = \displaystyle\frac{1}{N} \sum_{i=1}^{N} \left\{ \frac{1}{\alpha} \log\left[1 + \sum_{j \in P_i} e^{-\alpha\left(S_{ij} - \gamma\right)}\right] \right. \\
\displaystyle\left. + \frac{1}{\beta} \log\left[1 + \sum_{k \in N_i} e^{\beta(S_{ik} - \gamma)}\right] \right\},
\end{array}
$$
where $P_i$ and $N_i$ refer to the positive and negative sets corresponding to the $i$-th video clip, and $\alpha$ and $\beta$ are the scale factors for positive and negative pairs, respectively. To simplify this loss function, we consider the special case where $\alpha, \beta \to \infty$:
$$
\mathcal{L}_{MS}' = \sum_{(i,j,k) \in \mathcal{N}} \left[\gamma - S_{ij}\right]_+ + \left[S_{ik} - \gamma\right]_+ .
$$
This reveals that the learning objective of the Multi-Similarity loss is to push positive pairs closer to the margin while pulling negative pairs away from it, which inspires us to define a symmetric loss function for positive and negative pairs. However, as previously illustrated, it is challenging to determine whether $\mathbf{t}_j$ or $\mathbf{t}_k$ is relatively more positive with respect to the video clip $\mathbf{v}_i$, so directly applying the Multi-Similarity loss to this multi-instance retrieval task is still far from satisfactory. Therefore, we need to define the positive and negative pairs in our training pipeline. Given two narrations $j$ and $k$ corresponding to the $i$-th video clip, we formulate the correlation factor $\mathcal{R}$ between $S_{ij}$ and $S_{ik}$ as follows:
$$
\mathcal{R} = \mathbf{c}_{ij} - \mathbf{c}_{ik}.
$$
In this way, when $\mathcal{R} > 0$, $\mathbf{v}_i$ and $\mathbf{t}_j$ form the relatively more positive pair compared to $\mathbf{v}_i$ and $\mathbf{t}_k$, and vice versa. Following the concept of the Multi-Similarity loss, we extend the adaptive MI-MM loss to a bidirectional and symmetric form:
$$
\mathcal{L} = \sum_{(i,j,k) \in \mathcal{N}} \left\{ \begin{array}{ll} \left[\mathcal{R}\gamma - S_{ij} + S_{ik}\right]_+ & \mathcal{R} > 0 \\ \left[-\mathcal{R}\gamma + S_{ij} - S_{ik}\right]_+ & \mathcal{R} < 0 \end{array} \right.
$$
However, a special case arises when $\mathcal{R} = 0$, where the distance between $S_{ij}$ and $S_{ik}$ should be optimized to 0. In practice, two factors prevent us from doing so. First, two descriptions with different verbs and nouns can have the same correlation values.
For example, the current action label is "eat banana", the partially matched positive pair is "eat apple", and the negative pair is "grab banana". In this case, $\mathbf{c}_{ij}$ and $\mathbf{c}_{ik}$ are the same, but the distance between them should not be optimized to 0. Second, we find that the loss at $\mathcal{R} = 0$ tends to dominate, since the values of $\mathcal{R}$ are very small. To mitigate this, we introduce a relaxation factor $\tau$, such that when the distance between $S_{ij}$ and $S_{ik}$ is smaller than $\tau$, we cease optimizing this part. This adjustment allows us to maintain the major learning objective, i.e., $O := S_p - S_n > \mathcal{R}\gamma$. Thus, we obtain a symmetric loss on the distance between positive and negative pairs:
$$
\mathcal{L}_{SMS} = \sum_{(i,j,k) \in \mathcal{N}} \left\{ \begin{array}{ll} \left[\mathcal{R}\gamma - S_{ij} + S_{ik}\right]_+ & \mathcal{R} > 0 \\ \left[-\mathcal{R}\gamma + S_{ij} - S_{ik}\right]_+ & \mathcal{R} < 0 \\ \left[\left|S_{ij} - S_{ik}\right| - \tau\right]_+ & \mathcal{R} = 0 \end{array} \right.
$$
Here, $S_*$ denotes both the video-to-text and the text-to-video similarity. Additionally, we add a threshold $\lambda$ to constrain the edge conditions, whose value equals the threshold for selecting positive pairs.
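A minimal per-triplet sketch of this symmetric objective, covering the $\mathcal{R} > 0$, $\mathcal{R} < 0$, and relaxed $\mathcal{R} = 0$ branches for a single retrieval direction, is given below. This is our own NumPy sketch with illustrative names and hyperparameters; the paper's full loss additionally sums both retrieval directions and replaces the exact-zero test with a threshold $\lambda$:

```python
import numpy as np

def sms_sketch(S, C, triplets, gamma=0.6, tau=0.1):
    """One-direction sketch of the symmetric SMS objective.

    S: (B, B) similarity matrix; C: (B, B) batch-wise relevancy matrix;
    triplets: (i, j, k) with j a (partially) positive and k a negative text for video i.
    """
    relu = lambda v: max(v, 0.0)
    total = 0.0
    for i, j, k in triplets:
        R = C[i, j] - C[i, k]                 # correlation factor for this triplet
        if R > 0:
            total += relu(R * gamma - S[i, j] + S[i, k])
        elif R < 0:
            total += relu(-R * gamma + S[i, j] - S[i, k])
        else:                                 # R == 0: relaxed tie case
            total += relu(abs(S[i, j] - S[i, k]) - tau)
    return total

S = np.array([[0.9, 0.2], [0.1, 0.8]])
C_hard = np.array([[1.0, 0.0], [0.0, 1.0]])
assert sms_sketch(S, C_hard, [(0, 0, 1)]) == 0.0   # margin already satisfied
```

With tied correlations ($\mathcal{R} = 0$), only the part of the similarity gap exceeding $\tau$ is penalized, which keeps the tie branch from dominating training.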
Thus, the final loss function becomes:
$$
\mathcal{L}_{SMS} = \sum_{(i,j,k) \in \mathcal{N}} \left\{ \begin{array}{ll} \left[\mathcal{R}\gamma - S_{ij} + S_{ik}\right]_+ & \mathcal{R} \geqslant \lambda \\ \left[-\mathcal{R}\gamma + S_{ij} - S_{ik}\right]_+ & \mathcal{R} \leqslant -\lambda \\ \left[\left|S_{ij} - S_{ik}\right| - \tau\right]_+ & \left|\mathcal{R}\right| < \lambda \end{array} \right.
$$
Theoretically, the relaxation factor $\tau$ should be less than the minimum nonzero value of $\mathbf{c}_{ij}$. This ensures that the optimization process remains effective and balanced across different correlation scenarios. In practice, however, we sometimes need a larger $\tau$ to prevent the model from focusing on similar pairs. We thus obtain the final form of the SMS loss, which optimizes the model symmetrically according to the difference in correlation values.
# V. Experiments
# A. Datasets and Implementation Details
Datasets. We conduct experiments on three egocentric datasets: Ego4D, Epic-Kitchens-100 (EK-100), and Charades-Ego. We first pretrain our models on the EgoClip and EgoClip+ versions of the Ego4D dataset. EgoClip, proposed by EgoVLP [8], contains 3.8 million video-text pairs for training, with an average clip length of about 1 second. EgoClip+, proposed by LaViLa [10], has a 35-million-sample corpus augmented by GPT-2 XL [46]. After pretraining, we evaluate models on the Ego4D Multiple-Choice Questions (EgoMCQ) benchmark. Before fine-tuning, we directly evaluate the pretrained model on EK-100's multi-instance retrieval (MIR) challenge and the Charades-Ego action recognition challenge, where the performance is treated as the zero-shot result.
After that, we fine-tune the pretrained model on the training sets of these two benchmarks, respectively, and evaluate the fine-tuned results.
Implementation Details. We build our EVA02-AT models based on the AVION framework [4] with a vanilla ViT-CLIP backbone; our EVA02-AT-CLIP variants retain the same architecture as EVA02-CLIP except for the modified positional embeddings described in Section IV. We train on 4 NVIDIA RTX 6000 Ada GPUs. During both pretraining and fine-tuning, frames are sampled uniformly from each clip at a resolution of $3 \times 224 \times 224$, and the dimension of the feature space is set to 256. For our SMS loss, unless specified otherwise, we set the margin $\gamma$ to 0.6 and the relaxation factor $\tau$ to 0.1.
Ego4D pretraining. For Ego4D pretraining, we optimize a bidirectional InfoNCE loss [35] with a temperature of 0.05. We evenly sample 4 frames for each video clip. The batch size for our base-size model is set to 256 per GPU, resulting in a total batch size of 1024, while the batch size for the large model is set to 128, resulting in a total batch size of 512. We train for five epochs using AdamW [52] with a fixed learning rate of $3 \times 10^{-5}$. The pretraining process takes approximately 40 hours for our base model.
EK-100 MIR. When fine-tuning on the EK-100 dataset, we employ our SMS loss to fine-tune the model pretrained on the Ego4D dataset for 100 epochs. We warm up the learning rate from $10^{-6}$ to a peak of $2 \times 10^{-5}$ over the first epoch. During fine-tuning, 16 frames are sampled for each video clip, and the batch size is set to 64 per GPU. Fine-tuning the base model under these settings requires about 20 hours.
Charades-Ego Fine-tuning. The Charades-Ego dataset only contains hard labels, but there can be multiple different hard labels for each video clip.
To be compatible with the Charades-Ego dataset, we refine our SMS loss as follows:
TABLE I. The main results on the EK-100 multi-instance retrieval task. 'PT Dataset' identifies the pretraining dataset; 'Vis Enc.' indicates the visual encoder the models are using. The symbol '*' indicates reproduced results; '†' denotes that three input modalities are used: RGB, Flow, and Audio. The base-size models are in white rows and large-size models are in gray.
$$
\mathcal{L}_{SMS} = \sum_{(i,j,k) \in \mathcal{N}} \left\{ \begin{array}{ll} \left[\mathcal{R}\gamma - S_{ij} + S_{ik}\right]_+ & \mathcal{R} = 1 \\ \left[\left|S_{ij} - S_{ik}\right| - \tau\right]_+ & \mathcal{R} = 0 \end{array} \right.
$$
TABLE II. Zero-shot and fine-tuned results for the video-to-text retrieval task on the Charades-Ego dataset and the EgoMCQ benchmark. The base-size models are in white rows and large-size models are in gray.
We fine-tune the model for 10 epochs using the LAMB optimizer, warming up from $10^{-6}$ to $3 \times 10^{-5}$ in the first epoch, and sample 16 frames per video clip on each GPU. The margin $\gamma$ of the SMS loss is set to 0.3.
# B. Comparison with State-of-the-Art Methods
EK-100 MIR. The choice of pretraining data critically affects performance on the EK-100 multi-instance retrieval task. Currently, state-of-the-art methods use different pretraining settings, leading to a variety of results. To ensure fair comparisons, we group existing methods by their public pretraining datasets: (a) image or non-egocentric video datasets; (b) EgoClip [8], [20]; and (c) EgoClip with the LLM-augmented corpus (EgoClip+) proposed by LaViLa [10]. Table I shows the comparison between the state-of-the-art results and our methods on the EK-100 multi-instance retrieval task, with base-size models in white rows and large-size models in gray.
Across all three dataset categories, our models lead in both the base and large configurations. For base-size models, we improve average mAP by $7.2\%$ (59.0 vs. 51.8) and average nDCG by $4.0\%$ over the previous state-of-the-art method, AVION. Scaling to a large-size model, the gain grows to $9.0\%$ in average mAP (63.5 vs. 54.5) and $5.2\%$ (74.2 vs. 69.0) in average nDCG.
TABLE III. Zero-shot performance of various network architectures on the EK-100 multi-instance retrieval task. The symbol '*' indicates reproduced results. The 'Params (M)' column lists the number of parameters for video encoders, text encoders, and additional blocks (if any), in that order. The base-size models are in white rows and large-size models are in gray.
We can also observe from the table that our SMS loss drives much of this improvement. Simply replacing AVION's MI-MM loss with SMS yields a $7.6\%$ improvement in average mAP and a $4.0\%$ improvement in average nDCG. Furthermore, EVA02-AT architectures consistently outperform vanilla ViTs: when training on EgoClip+, our base-size and large-size models improve performance by $2.0\%$ and $1.4\%$ in average mAP, respectively.
Charades-Ego Action Recognition. Table II provides the comparison results on the Charades-Ego video-to-text action recognition task. Notably, with our SMS loss, our model outperforms the previous state-of-the-art results in V2T mAP by $3.2\%$ on the base model and $2.8\%$ on the large model in the fine-tuned setup. We also evaluate our EVA02-AT model in a zero-shot setup, and the experiments show that EVA02-AT outperforms the ViT models by $0.4\%$ on the base model and $1.0\%$ on the large model, respectively.
EgoMCQ. We directly evaluate EgoMCQ performance after pretraining the model on the Ego4D dataset. On EgoMCQ, our base model achieves $95.0\%$ inter-video accuracy and $63.2\%$ intra-video accuracy, while our large model achieves $95.9\%$ inter-video accuracy and $66.5\%$ intra-video accuracy, surpassing the previous state-of-the-art results.
# C. Ablation Study
To evaluate the effectiveness of both our EVA02-AT network and the SMS loss function, we conduct ablation experiments from three aspects: (1) the zero-shot performance across different network architectures; (2) the EVA02-AT model with different temporal positional embedding choices; (3) the fine-tuned performance across different loss functions.
TABLE IV. Comparison between different temporal embeddings on the zero-shot EK-100 MIR benchmark.
Effect of EVA02-AT. In the zero-shot setting, we evaluate models pretrained on the EgoClip and EgoClip+ datasets, respectively. As shown in Table III, our model consistently achieves state-of-the-art results on both pretraining datasets without increasing the number of parameters. In contrast, backbone models like TSF and FiT, which introduce an external temporal attention block, inevitably increase the model's parameters but fail to provide improved performance; e.g., EVA02-AT outperforms LaViLa by $2.8\%$ in average mAP for the large model. Meanwhile, compared with architectures that use joint attention, our model also achieves better results with the help of the spatial-temporal RoPE; e.g., EVA02-AT beats ViT-B and ViT-L by $1.4\%$ and $1.3\%$ in average mAP, respectively.
Fig. 5. Training curves for different loss functions. Figure (a) shows the loss value during training, and Figure (b) shows the validation mAP on the EK-100 MIR task during training. By providing an accurate learning objective, SMS decreases more sharply than the other two losses.
TABLE V. The performance comparison of different loss functions pretrained on the EgoClip+ dataset and fine-tuned on the EK-100 multi-instance retrieval task.
Effect of 3D-RoPE.
In Table IV, we change the temporal positional embedding to (a) a learnable positional embedding, (b) 1D-RoPE, and (c) a learnable positional embedding combined with RoPE. From the table, we find that changing the temporal positional embedding does not influence performance significantly, but (c) still outperforms all other settings. Concretely, compared to the learnable temporal positional embedding, RoPE improves the model by $2.0\%$ in average mAP, and compared to the case using only temporal RoPE, a learnable positional embedding provides a $1.4\%$ gain in average mAP. Additionally, the experiment suggests that preserving the model's extrapolation ability, i.e., using RoPE as the only temporal positional embedding, does not lead to a noticeable performance drop.

Effect of SMS Loss. To verify the effectiveness and robustness of our SMS loss, we conduct an ablation study on both our ViT-B-based and ViT-L-based models. All experiments across different loss functions are conducted under the same learning rate and optimizer settings. We choose the best-performing hyperparameters for each loss function, i.e., a margin of 0.2 for the MI-MM loss and 0.4 for the adaptive MI-MM loss. The experiment results are presented in Table V.

Fig. 6. Training curves for different hyper-parameter choices in the SMS loss. Figures (a) and (b) show the average mAP and nDCG performance as $\gamma$ changes.

For both the ViT-B-based and ViT-L-based models, our SMS loss demonstrates superior performance compared to its counterparts. Specifically, for the ViT-B-based model, our SMS loss improves the average mAP by $1.9\%$ and the average nDCG by $2.4\%$ compared to the adaptive MI-MM loss. Similarly, for the ViT-L-based model, our SMS loss improves average mAP by $2.3\%$ and average nDCG by $2.4\%$. We also conducted an experiment using a ViT-B-based model with $\tau = 0$ to evaluate the impact of the relaxation factor.
This factor helps prevent over-optimization when the correlation values between positive and negative samples are similar. Our results show a performance drop of $1.8\%$ in average mAP and $0.8\%$ in average nDCG compared to the case where $\tau = 0.1$, demonstrating the crucial role of the relaxation factor in ensuring optimal model performance. We next examine the impact of the SMS loss hyperparameter $\gamma$. Fig. 6 plots average mAP as $\gamma$ varies from 0.3 to 0.8. Our results show that the model achieves its highest mAP at $\gamma = 0.6$. Moreover, performance remains stable across the entire range, since the mAP difference between the best and worst settings is only $1.2\%$. The training curves for different loss functions are presented in Fig. 5. Notably, a performance gap emerges as early as 20 epochs, with the SMS loss continuing to decrease exponentially while the other two loss functions show slower declines over the next 80 epochs. Although the absolute value of the SMS loss is naturally lower than that of the MI-MM and adaptive MI-MM losses, the results highlight that an accurate learning objective significantly helps the fine-tuning process.
Egocentric video-language understanding demands both high efficiency and accurate spatial-temporal modeling. Existing approaches face three key challenges: 1) Excessive pre-training cost arising from multi-stage pre-training pipelines, 2) Ineffective spatial-temporal encoding due to manually split 3D rotary positional embeddings that hinder feature interactions, and 3) Imprecise learning objectives in soft-label multi-instance retrieval, which neglect negative pair correlations. In this paper, we introduce EVA02-AT, a suite of EVA02-based video-language foundation models tailored to egocentric video understanding tasks. EVA02-AT first efficiently transfers an image-based CLIP model into a unified video encoder via a single-stage pretraining. Second, instead of applying rotary positional embeddings to isolated dimensions, we introduce spatial-temporal rotary positional embeddings along with joint attention, which can effectively encode both spatial and temporal information on the entire hidden dimension. This joint encoding of spatial-temporal features enables the model to learn cross-axis relationships, which are crucial for accurately modeling motion and interaction in videos. Third, focusing on multi-instance video-language retrieval tasks, we introduce the Symmetric Multi-Similarity (SMS) loss and a novel training framework that advances all soft labels for both positive and negative pairs, providing a more precise learning objective. Extensive experiments on Ego4D, EPIC-Kitchens-100, and Charades-Ego under zero-shot and fine-tuning settings demonstrate that EVA02-AT achieves state-of-the-art performance across diverse egocentric video-language tasks with fewer parameters. Models with our SMS loss also show significant performance gains on multi-instance retrieval benchmarks. Our code and models are publicly available at https://github.com/xqwang14/EVA02-AT .
# 1 Introduction

Language models (LMs) have become powerful tools for automating software engineering tasks (Hou et al., 2024). A natural extension is software vulnerability detection, which plays a critical role in the early stages of the software lifecycle. Researchers typically fine-tune general or code-specific LMs on curated datasets to inject vulnerability knowledge, expecting them to learn complex patterns from large-scale code corpora (Shestov et al., 2025; Chakraborty et al., 2021).

[Figure 1: comparison of standard classification fine-tuning on whole functions with FocusVul, which predicts a vulnerability-relevant region and selects context via data dependencies, control dependencies, and basic blocks before fine-tuning the language model.]

However, due to the subtlety and diversity of vulnerabilities, LMs often struggle to handle real-world cases even at the function level (Ding et al., 2024). This challenge stems from two fundamental properties of vulnerabilities: 1) uncertain vulnerability location: LMs operate under strict input length constraints, and truncating long functions may omit vulnerable regions or retain irrelevant code (Jiang et al., 2024), resulting in misleading supervision during fine-tuning; 2) sparse vulnerability occurrence: vulnerabilities typically span only a few lines, leading to low signal-to-noise ratios. This sparsity makes models prone to learning spurious patterns, especially as LMs tend to degrade in performance on long and noisy inputs (Liu et al., 2023). One insight to handle these issues is to extract compact, vulnerability-relevant contexts to support more effective learning for LMs.
Previous studies emphasize that restricting model inputs to code snippets relevant to potential vulnerabilities or behaviors of interest (e.g., leveraging code slicing) can improve detection effectiveness (Li et al., 2018; Qian et al., 2025). They manually identify key points (e.g., dangerous API calls) and extract code related to these points to create code gadgets, for instance, leveraging backwards and forward program analysis (code slicing). These gadgets have proven effective across conventional models, including both sequential (Li et al., 2018, 2021b; Thapa et al., 2022) and graph-based ones (Zhou et al., 2019; Cheng et al., 2021; Mirsky et al., 2023; Hin et al., 2022). However, in the context of vulnerability detection, most of them are tailored to specific Common Weakness Enumerations (CWEs), and their key points struggle to generalize to diverse real-world vulnerabilities, often introducing noisy patterns (Mächtle et al., 2025), leading to coarse supervision for LMs. Beyond predefined points, vulnerability-fixing commits provide more precise supervision by explicitly marking affected regions (Hoang et al., 2020), but these annotations are unavailable during inference (Pornprasit and Tantithamthavorn, 2021) and hard for models to learn due to their sparsity (Chen et al., 2020; He et al., 2023). These limitations motivate our proposed framework, FocusVul, which introduces concise supervision to extract vulnerabilityrelevant input and boost the strength of LMs for improving real-world vulnerability detection. FocusVul adopts a two-stage design. First, it learns from several commit-based annotations to identify potential vulnerability-relevant regions (VRRs), generalizing fine-grained supervision to inference. Second, it extracts LM-friendly context around the selected regions, optionally incorporating heuristic signals (e.g., sensitive calls) to improve coverage. 
This context is enriched through dependency and execution flows to better align with the sequential modeling strengths of LMs. FocusVul produces compact, vulnerability-focused code gadgets that can be seamlessly integrated into existing LMs for efficient fine-tuning. Our contributions are as follows:

• We propose FocusVul, a model-agnostic framework that provides a concise, vulnerability-focused context of functions for efficiently fine-tuning LMs for vulnerability detection.

• FocusVul integrates a learning-based vulnerability-relevant region (VRR) identifier with an LM-oriented context selector. It models commit-based VRR semantics through hierarchical representations to extend such high-quality supervision to inference. Context is selected via both dependency and execution flows to match the sequential modeling behavior of LMs.

• Extensive experiments on recent real-world benchmarks demonstrate that FocusVul consistently enhances the performance of pretrained LMs on vulnerability detection and differentiation tasks, surpassing existing context selection and fine-tuning baselines.

# 2 Related Work

LM-based Vulnerability Detection. Pretrained language models (LMs) have become central to vulnerability detection. Early work used encoder-only models like CodeBERT (Feng et al., 2020) and GraphCodeBERT (Guo et al., 2020), and encoder-decoder models such as CodeT5 (Wang et al., 2021) and PLBART (Ahmad et al., 2021) for generative tasks. Recent studies adopt larger general-purpose or code-specific LMs (e.g., LLaMA (Grattafiori et al., 2024), StarCoder (Lozhkov et al., 2024)) to enhance semantic reasoning (Ding et al., 2024; Chen et al., 2023; Sheng et al., 2025). Various techniques improve LM adaptation, such as syntax-aware tokenization (Hanif and Maffeis, 2022), code pretraining (Kanade et al., 2020), and fine-tuning on vulnerability data (Yin et al., 2024).
However, fine-tuning language models on vulnerability data improves performance but remains limited (Yin et al., 2024). Jiang et al. (2024) attribute this to context window constraints that fail to capture long functions, causing information loss. Since vulnerabilities are sparse and localized, using entire functions dilutes supervision (Ding et al., 2024), while overly narrow views may miss key signals (Hin et al., 2022). Context Selection for Vulnerability Detection. To mitigate context limitations, prior work extracts code gadgets around predefined anchors such as APIs or pointer operations (Li et al., 2018). Subsequent studies (Li et al., 2021b; Cao et al., 2024; Thapa et al., 2022; Zou et al., 2022; Gonçalves et al., 2025) adopted static slicing (data/control dependencies) to extract semantically relevant regions, typically guided by syntax cues like special identifiers or pointer usage. Du et al. (2024) show that such dependencies also support the validation of LLM-generated predictions. Moving beyond static analysis, Mächtle et al. (2025) use execution traces to manually identify key endpoints, enabling more execution-aware context modeling. Other methods (Mirsky et al., 2023; Li et al., 2021a) rely on CWE-guided heuristics to locate vulnerability-relevant regions. While these strategies help prune irrelevant code, their handcrafted nature and vulnerability-specific heuristics (Sui and Xue, 2016) limit generalization and may introduce redundancy in unseen cases. Commit-based Vulnerability Supervision. Commits provide fine-grained supervision by highlighting faulty and fixed lines (Lin et al., 2024). Prior works use this for defect prediction (Hoang et al., 2020; Pornprasit and Tantithamthavorn, 2021), while others leverage commit diffs to train vulnerability detectors (Nguyen et al., 2022; Zhou et al., 2021).
Although some adopt line-level labels during training, inference is typically coarse-grained (commit or function level), limiting fine-grained localization and context selection. In contrast, FocusVul is trained on diff-annotated lines and supports line-level inference without relying on commit metadata, enabling precise extraction of vulnerability-relevant regions (VRRs).

# 3 Methodology

# 3.1 Preliminary

# 3.1.1 Problem Formulation

We formulate LM-based source code vulnerability detection as a binary classification task. Given a function $f$ consisting of $N$ lines of code, $f = \{ \ell_1, \ell_2, \dots, \ell_N \}$, the goal is to predict a label $y_f \in \{0, 1\}$ indicating whether $f$ is vulnerable. Sparse and localized vulnerabilities in long functions pose challenges for LMs due to semantic dilution and input length limits. To mitigate this, we propose a model-agnostic context selection framework, FocusVul, that learns to identify vulnerability-relevant regions and extracts concise, informative context for LM-based detection.

# 3.1.2 Vulnerability-Relevant Region

We define a Vulnerability-Relevant Region (VRR) as a subset of lines semantically or structurally linked to potential vulnerabilities in function $f$, denoted $VRR(f) \subseteq f$ and formally defined as $VRR(f) = \{ \ell_i \in f \mid y_{\ell_i} = 1 \}$, where $\ell_i$ is the $i$-th line and $y_{\ell_i} = 1$ indicates that line $\ell_i$ is vulnerability-relevant. VRRs are anchors for extracting critical context, helping models focus on concise, informative code regions. While VRRs can be defined over tokens, lines, or blocks, we adopt a line-level formulation to balance expressiveness and consistency (Hin et al., 2022; Fu and Tantithamthavorn, 2022). Lines are more robust than tokens and more uniform than blocks, facilitating aggregation and the learning of transferable patterns.
We categorize VRRs into three types based on their source:

Definition 1 (CWE-based VRR). $VRR_{cwe}$ is guided by CWE-specific patterns (e.g., unchecked input in CWE-20, missing bounds checks in CWE-119): $VRR_{cwe} = \{ \ell_i \in f \mid \ell_i = r_c(f) \}$, where $r_c$ is an expert-defined CWE-specific rule function.

Definition 2 (Heuristic-based VRR). $VRR_{heu}$ is derived from static heuristics such as the presence of sensitive APIs, unsafe library functions, or vulnerable syntactic patterns: $VRR_{heu} = \{ \ell_i \in f \mid \ell_i = h(f) \}$, where $h$ denotes a static rule mapping.

Definition 3 (Commit-based VRR). $VRR_{com}$ is extracted from real-world vulnerability-fixing commits. Let $f^-$ denote a vulnerable function and $f^+$ its patched counterpart. We define the symmetric difference of the commit pair as the set of changed lines, $\Delta(f^-, f^+) = \{ \ell_i \mid \ell_i \in f^- \oplus f^+ \}$, which includes all lines removed from $f^-$ or newly introduced in $f^+$. Then $VRR_{com} = \{ \ell_i \in f \mid \ell_i \in \Delta(f^-, f^+) \}$.

# 3.1.3 Program Representation

We adopt the Code Property Graph (CPG) (Yamaguchi et al., 2014) as our unified program representation. The CPG integrates multiple views, including abstract syntax trees (AST), control flow graphs (CFG), and program dependence graphs (PDG), into a joint directed multigraph. From the CPG, we extract two structural views essential to our method: (1) dependency relationships, which capture semantic links via data and control dependencies; and (2) execution relationships, which reflect runtime ordering through basic block sequences. These views go beyond lexical order to provide richer structural context.

Dependency Relationship.
We construct a PDG from the CPG as a directed graph $G = (V, E)$, where nodes represent program statements and edges encode: 1) control dependence edges: $v_i \to v_j$ indicates that the execution of line $v_j$ is conditionally dependent on the outcome of a control statement at line $v_i$ (e.g., an if or while condition); 2) data dependence edges: $v_p \to v_q$ denotes a def-use relationship where a variable defined in $v_p$ is used in $v_q$. These edges capture long-range semantic dependencies.

[Figure 2: overview of the FocusVul framework, comprising the learning-based vulnerability-relevant region (VRR) identifier, trained on commit-based VRRs with hierarchical (line-, segment-, and function-level) semantic modeling, and the sensitive context selector, which slices dependency and execution relationships around predicted VRRs before fine-tuning the language model.]

Execution Relationship. To capture local control flow, we extract CFG edges and group sequential statements into basic blocks: maximal instruction sequences without internal branches. Each block models a unit of execution, and edges between blocks represent control transfers. This abstraction preserves execution order while reducing graph complexity, aligning with the sequential nature of language models, which benefit from explicit local ordering.
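The commit-based VRR of Definition 3 above can be sketched as a set difference over the lines of the vulnerable and patched functions. This is a simplification: a real pipeline would use a proper line differ so that duplicate lines are handled correctly, and the sample functions below are hypothetical.

```python
def commit_based_vrr(f_vuln, f_patched):
    """Commit-based VRR (Definition 3 sketch): indices of lines in the
    vulnerable function f^- that fall in the symmetric difference with
    its patched counterpart f^+.

    Lines are compared as plain strings, so a changed line that also
    appears unchanged elsewhere would be over-selected; a hunk-level
    diff avoids this in practice.
    """
    changed = set(f_vuln) ^ set(f_patched)  # symmetric difference of lines
    return [i for i, line in enumerate(f_vuln) if line in changed]
```

For example, replacing a `strcpy` call with a bounded `strncpy` in the patch marks only that line as the VRR seed.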
# 3.2 Overall Architecture

FocusVul tackles vulnerability sparsity and contextual complexity via a two-stage framework that distills vulnerability-relevant context for efficient detection under input length constraints. As shown in Figure 2, it comprises:

Learning-based VRR Identifier: This module employs hierarchical semantic modeling to identify vulnerability-relevant regions (VRRs) as defined in Section 3.1.2, which serve as anchors for context extraction. A lightweight pre-trained model learns commit-supervised VRR patterns $(VRR_{com})$ and generalizes them to unseen functions. Heuristic VRRs $(VRR_{heu})$ are used as a fallback to ensure coverage.

Sensitive Context Extractor: This component performs structure-aware slicing around selected VRRs, incorporating control and data dependencies while minimally enclosing adjacent basic blocks to preserve local execution semantics. Extracted lines are reordered to maintain syntactic coherence, forming concise, LM-ready inputs.

# 3.3 Learning-based VRR Identifier

As discussed in Section 3.1.2, $VRR_{com}$ offers fine-grained, semantically rich supervision but is unavailable during inference. Simply using it to guide training-time context selection risks distribution mismatch. To bridge this gap, we introduce a learning-based VRR Identifier that captures patterns from $VRR_{com}$ and predicts analogous regions at test time, enabling consistent and effective context selection.

# 3.3.1 Segment Construction

VRRs range from localized edits to scattered changes. To preserve this diversity under input constraints, each function is divided into overlapping segments $S_f = \{ s_1, s_2, \dots, s_K \}$, where each segment $s_i \subseteq f$ consists of consecutive lines whose total token count does not exceed a maximum budget $M$ (typically 512).
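The segment construction can be sketched as a sliding window over per-line token counts, following the stride rule $t = M \cdot (1 - c)$ described below and rounding to whole lines so that no line is ever split; the budget and overlap values here are illustrative:

```python
def build_segments(line_tokens, budget=512, overlap=0.5):
    """Overlapping whole-line segments (Section 3.3.1 sketch).

    line_tokens[i] is the token count of line i. Each segment packs
    consecutive lines without exceeding the budget M; the window then
    slides by roughly t = M * (1 - c) tokens, advancing in whole lines.
    Returns a list of segments, each a list of line indices.
    """
    stride = budget * (1 - overlap)
    segments, start, n = [], 0, len(line_tokens)
    while start < n:
        # Greedily extend the segment while it fits in the budget.
        end, used = start, 0
        while end < n and used + line_tokens[end] <= budget:
            used += line_tokens[end]
            end += 1
        end = max(end, start + 1)  # an oversized line still forms a segment
        segments.append(list(range(start, end)))
        if end == n:
            break
        # Advance the window start by roughly `stride` tokens.
        advanced, nxt = 0, start
        while nxt < end and advanced < stride:
            advanced += line_tokens[nxt]
            nxt += 1
        start = max(nxt, start + 1)
    return segments
```

With five 3-token lines, a 6-token budget, and 50% overlap, each segment holds two lines and consecutive segments share one line.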
To avoid line truncation, segments may be shorter $(M_s \le M)$. They are generated by sliding a window over the tokenized function with stride $t = M \cdot (1 - c)$, where $c \in [0, 1)$ controls the overlap. Due to segment sparsity within functions, we sample per function during training: (1) all positive segments containing at least one $VRR_{com}$, and (2) one negative segment without any, to simulate test-time sparsity. Although positive segments contain $VRR_{com}$, such lines typically occupy only a small portion of the segment, making learning difficult. We additionally sample $\alpha\%$ of the remaining negative segments. This strategy promotes supervision diversity and balance, and mitigates the overrepresentation of long functions in negative sampling.

# 3.3.2 Hierarchical Semantic Modeling

Identifying VRRs is a challenging line-level task due to their diverse forms and sparse distribution, requiring semantics beyond individual lines. We propose a hierarchical framework spanning tokens, lines, segments, and functions, where each level provides contextual cues at increasing granularity. Representations are derived from a lightweight pretrained encoder (e.g., CodeBERT), which outputs token embeddings for each segment, with the embedding of [CLS] serving as the segment summary. These multi-level semantics are ultimately fused into function-level line predictions to identify vulnerability-relevant regions.

Line-level Semantics. Each line $\ell_i$ is represented by token indices $\mathcal{T}_{\ell_i}$. We apply learnable intra-line attention pooling over its token embeddings to obtain a dense vector $p_{\ell_i} \in \mathbb{R}^d$, emphasizing semantic tokens (e.g., expressions, function calls) and filtering out syntactic noise. This yields a localized semantic representation for $\ell_i$.

Segment-level Semantics.
As vulnerability patterns often span multiple lines, we capture inter-line dependencies by aggregating the line vectors within a segment, $P^{(i)} = [ p_{\ell_1}; \ldots; p_{\ell_{L_s}} ]$, and feeding them into a Transformer encoder: $Z^{(i)} = \mathcal{T}( P^{(i)} + E^{\mathrm{pos}} )$, where $E^{\mathrm{pos}} \in \mathbb{R}^{L_s \times d}$ is the positional embedding and $Z^{(i)} = [ z^{(i)}_{\ell_1}; \dots; z^{(i)}_{\ell_{L_s}} ]$ are the contextualized line embeddings. Each $z_\ell^{(i)}$ encodes both the content of the line $\ell$ and its role within the segment $s_i$.

Function-level Semantics. To capture global context beyond local segments, we aggregate the [CLS] vectors of all segments in function $f$, denoted $c_1, \ldots, c_K$, and compute a function-level vector $h_f \in \mathbb{R}^d$ via self-attentive pooling. Each line embedding $z_\ell^{(i)}$ is fused with $h_f$ using a gating mechanism: $\bar{z}_\ell^{(i)} = g_\ell^{(i)} \cdot h_f + ( 1 - g_\ell^{(i)} ) \cdot z_\ell^{(i)}$, where the gate $g_\ell^{(i)} = \sigma( u^\top [ z_\ell^{(i)} ; h_f ] )$, $u \in \mathbb{R}^{2d}$ is a trainable parameter, and $\sigma$ is the sigmoid function. This allows each line to incorporate both local and global semantics.

Since each function is processed in overlapping segments, we aggregate the line-level predictions from all segments into a unified set $\hat{y}_l = \{ \hat{y}_{s_1}^{\ell_1}, \dots, \hat{y}_{s_1}^{\ell_{L_{s_1}}}, \dots, \hat{y}_{s_K}^{\ell_{L_{s_K}}} \}$, where $\hat{y}_{s_k}^{\ell}$ denotes the predicted probability for line $\ell$ in segment $s_k$. The corresponding ground-truth line labels are denoted $y_l = \{ y_{\ell_1}, y_{\ell_2}, \dots, y_{\ell_N} \}$, aligned to the same line set. The binary cross-entropy loss is computed over all predicted lines:

$$ \mathcal { L } _ { f } = \frac { 1 } { \vert \hat { y } _ { l } \vert } \sum _ { i = 1 } ^ { \vert \hat { y } _ { l } \vert } \left[ - y _ { i } \log \hat { y } _ { i } - ( 1 - y _ { i } ) \log ( 1 - \hat { y } _ { i } ) \right] , $$

where $\hat{y}_i \in \hat{y}_l$ and $y_i \in y_l$ are the predicted probability and ground-truth label for the $i$-th line in function $f$.

[Figure 3: example of context selection on a sample function, showing the commit-based VRR seed, data and control dependency slicing, basic blocks recovered from the control flow, and the cleaned, reordered context fed to the model.]

# 3.3.3 Semantic Generalization Masking

While hierarchical modeling captures contextual semantics, models may still overfit to shallow lexical patterns, i.e., memorizing specific token patterns (e.g., $+1$, NULL, free()) that frequently appear in sensitive regions. To reduce reliance on such spurious correlations, we apply random token masking at the line level before encoding.
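A minimal sketch of this line-level masking, using a higher rate for annotated VRR lines than for the rest; the concrete rates, the `[MASK]` token string, and the seeded RNG below are illustrative rather than the values used in our experiments:

```python
import random


def mask_line_tokens(tokens, is_vrr, beta=0.4, gamma=0.15,
                     mask_token="[MASK]", rng=None):
    """Semantic generalization masking (sketch).

    Tokens of a line are masked at rate beta if the line is an annotated
    VRR, else at the lower rate gamma, discouraging the model from
    memorizing surface tokens that cluster in sensitive regions.
    """
    rng = rng or random.Random(0)  # seeded here only for reproducibility
    rate = beta if is_vrr else gamma
    return [mask_token if rng.random() < rate else t for t in tokens]
```

Setting `beta=1.0` masks a VRR line entirely and `gamma=0.0` leaves non-VRR lines untouched, which makes the region-dependent behavior easy to verify.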
A higher masking rate $\beta$ is used for annotated VRRs and a lower rate $\gamma$ elsewhere, encouraging the model to infer vulnerability patterns from broader contextual and functional cues rather than surface-level tokens.

# 3.4 Sensitive Context Extractor

After identifying $VRR(f)$, we use all predicted $VRR_{com}$ and fall back to $VRR_{heu}$ otherwise, forming a hybrid set of vulnerability-relevant regions. From this set, we extract a focused context $C_f$ by analyzing both semantic and execution dependencies using static analysis. To extract dependency relationships, we use the open-source code analysis platform Joern (Yamaguchi et al., 2014) to build code property graphs (CPGs) for C/C++ functions. Each VRR serves as a seed for backwards and forward program slicing on the PDG extracted from the CPG. Specifically, given a seed node $v_s \in V$ from the PDG $G = (V, E)$, the backwards slice $S_b(v_s)$ collects all nodes $v \in V$ for which a dependency path from $v$ to $v_s$ exists (i.e., $v \rightsquigarrow v_s$), indicating that $v$ semantically influences $v_s$; the forward slice $S_f(v_s)$ includes all nodes reachable from $v_s$ (i.e., $v_s \rightsquigarrow v$), indicating nodes potentially affected by $v_s$. The union of both forms the dependency-based context: $C_f^{dep} = S_b(v_s) \cup S_f(v_s)$. To model execution context, we further post-process the control flow information to recover basic blocks: sequences of linear, non-branching statements. Basic blocks that contain or are adjacent to VRRs are included to preserve local execution context, denoted $C_f^{exe}$.
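The slicing that yields $C_f^{dep}$ amounts to transitive reachability over PDG edges, backwards for influences and forwards for effects. A minimal sketch, assuming the PDG is given as a list of (src, dst) dependency pairs with hypothetical node labels:

```python
from collections import deque


def slice_nodes(edges, seed, direction="backward"):
    """Transitive PDG slice from a seed node (sketch of S_b / S_f).

    The backward slice walks edges in reverse to collect nodes that
    influence the seed; the forward slice walks them as-is to collect
    nodes the seed may affect. Implemented as breadth-first search.
    """
    adj = {}
    for a, b in edges:
        if direction == "backward":
            a, b = b, a
        adj.setdefault(a, []).append(b)
    seen, queue = {seed}, deque([seed])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen


def dependency_context(edges, seeds):
    """C_f^dep: union of backward and forward slices over all VRR seeds."""
    ctx = set()
    for s in seeds:
        ctx |= slice_nodes(edges, s, "backward") | slice_nodes(edges, s, "forward")
    return ctx
```

The execution context $C_f^{exe}$ (basic blocks containing or adjacent to VRRs) would be unioned with this set before reordering, as described next.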
The final context is the union of both dependency- and execution-based code regions, $C_f = \mathrm{sort}( C_f^{dep} \cup C_f^{exe} )$, sorted by original line order to maintain syntactic structure (Yin et al., 2024). This compact, semantically rich input is then fed to the language model for downstream vulnerability detection. Figure 3 shows an example of the context selection process.

# 4 Evaluation

We formulate three research questions to conduct a comprehensive evaluation of FocusVul:

RQ1: Can FocusVul improve the ability of language models to detect vulnerable code in realistic settings? (Section 4.2)

RQ2: How does FocusVul enhance the efficiency of vulnerability detection? (Section 4.3)

RQ3: What is the relative contribution of each component of FocusVul? (Section 4.4)

# 4.1 Experiment Setup

# 4.1.1 Dataset

Vulnerability Detection Dataset. We conduct experiments on PrimeVul (Ding et al., 2024), the largest real-world C/C++ vulnerability dataset to date, which unifies security-related commits and functions from benchmarks such as BigVul (Fan et al., 2020), CrossVul (Nikitopoulos et al., 2021), CVEfixes (Bhandari et al., 2021), and DiverseVul (Chen et al., 2023). It removes duplicates and applies chronological splits to mimic real-world settings. It includes both individual and pairwise samples, where each pair consists of a vulnerable function and its fixed counterpart sharing at least $80\%$ identical code. The dataset comprises 175,797 training samples (7,578 pairs), 23,948 validation samples, and 24,788 test samples. To ensure efficient yet realistic evaluation, the training set is downsampled to a 15:1 normal-to-vulnerable ratio while retaining all pairs (77,792 samples for training); the validation and test sets remain unaltered. For the $9.84\%$ of functions that cannot be parsed due to slicing tool limitations, we retain their original function bodies.
Unlike other baselines that discard such cases, the LM-based framework remains robust to formatting variations. VRR Identification Dataset. To train the region identification module (Section 3.1.2), we derive 6,443 pairwise samples from the training split of the vulnerability detection dataset. Each sample contains at least one modified line from its associated vulnerability-fixing commit. Instead of using binary vulnerability labels, we use commit-based line changes as region-level supervision. The validation and test sets remain unchanged to ensure consistency with the main detection task. # 4.1.2 Model Settings Classical Baselines. We adopt two representative vulnerability detection methods based on semantic slicing: VulDeePecker (Li et al., 2018), which extracts CWE-specific data-dependent code snippets based on API calls and classifies them with a BiLSTM; and DeepWukong (Cheng et al., 2021), which slices functions using heuristic anchors and applies GCNs for subgraph-level classification. More details are shown in Appendix A.1. Language Model Selection. We evaluate FocusVul across a diverse set of language models, including encoder-only (CodeBERT (Feng et al., 2020)), encoder-decoder (CodeT5 (Wang et al., 2021)), and decoder-only models (StarCoder2 (Lozhkov et al., 2024), LLaMA3.1 (Grattafiori et al., 2024)), ranging from 100M to 8B parameters. To assess general-purpose models without task-specific tuning, we also include typical chat-based LMs (GPT-4o-mini, DeepSeek-R1, Qwen3-32B) in a zero-shot pairwise setup. All models label sample pairs, and we report comprehensive pairwise metrics. Additional details are provided in Appendix A.2. # 4.1.3 Evaluation Metrics Vulnerability Detection Metrics. We report both standard classification metrics and domain-specific indicators to comprehensively assess vulnerability detection performance. Precision, Recall, and F1 score (binary) are used to assess performance under imbalanced conditions.
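For reference, the binary metrics over the vulnerable class (label 1) can be sketched as follows; in practice a metrics library would be used, and the example predictions are hypothetical:

```python
def binary_prf(y_true, y_pred):
    """Precision, recall, and binary F1 for the vulnerable class (label 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Computing these only for the positive (vulnerable) class avoids the inflated scores that macro- or accuracy-style metrics give under the dataset's heavy class imbalance.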
Additionally, in practical deployments, the primary objective is to identify as many vulnerabilities as possible, minimizing the false negative rate (FNR, where the vulnerable class is treated as positive). However, aggressively reducing the FNR can lead to an unacceptably high false positive rate (FPR), increasing the burden on experts who must manually review benign samples mistakenly flagged as vulnerable. We therefore use the Vulnerability Detection Score (VDS) to evaluate the FNR under a given acceptable FPR, i.e., $\mathrm{FNR}@(\mathrm{FPR} \le r)$. We choose the tolerance rate $r = 0.5\%$, denoted VDS@0.5.

Table 1: Performance of classical baselines and FocusVul-enhanced models on vulnerability detection. All results are shown as percentages (%); "w" and "w/o" indicate with and without FocusVul fine-tuning, respectively. Subscripts denote relative improvements over the corresponding baselines without FocusVul.

Table 2: Performance of all models on pair-wise sample differentiation (all results are percentages (%)).

Pair-wise Evaluation Metrics. As discussed in Section 4.1.1, the PrimeVul dataset includes a set of paired samples (870 samples for the pair-wise test). The ability to distinguish these pairs indicates whether the model captures true vulnerability signals rather than spurious patterns. We evaluate this ability using four metrics: (1) Pair-wise Correct Prediction (P-C), where the model correctly identifies both samples; (2) Pair-wise Vulnerable Prediction (P-V), where both are predicted as vulnerable; (3) Pair-wise Benign Prediction (P-B), where both are predicted as benign; and (4) Pair-wise Reversed Prediction (P-R), where the model assigns reversed labels. A higher P-C and lower P-V, P-B, and P-R indicate better vulnerability understanding.
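Assuming score-based binary predictions (vulnerable = positive), the VDS and pair-wise metrics described above could be sketched as follows; the helper names are ours, not from the paper's evaluation code:

```python
def vds(scores_benign, scores_vuln, r=0.005):
    """VDS@r: the lowest false negative rate achievable by any decision
    threshold whose false positive rate stays <= r."""
    best_fnr = 1.0
    for t in sorted(set(scores_benign) | set(scores_vuln)):
        fpr = sum(s >= t for s in scores_benign) / len(scores_benign)
        if fpr <= r:
            fnr = sum(s < t for s in scores_vuln) / len(scores_vuln)
            best_fnr = min(best_fnr, fnr)
    return best_fnr

def pairwise_metrics(pairs):
    """pairs: (label on vulnerable sample, label on patched sample),
    with 1 = predicted vulnerable. Returns (P-C, P-V, P-B, P-R)."""
    n = len(pairs)
    p_c = sum(v == 1 and b == 0 for v, b in pairs) / n  # both correct
    p_v = sum(v == 1 and b == 1 for v, b in pairs) / n  # both vulnerable
    p_b = sum(v == 0 and b == 0 for v, b in pairs) / n  # both benign
    p_r = sum(v == 0 and b == 1 for v, b in pairs) / n  # reversed labels
    return p_c, p_v, p_b, p_r
```

Note that P-C, P-V, P-B, and P-R partition the pairs, so the four fractions sum to one.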
# 4.2 RQ1: Detection Effectiveness

# 4.2.1 Vulnerability Detection

We evaluate whether FocusVul enhances language models' ability to detect vulnerabilities in realistic settings by fine-tuning several open-source code LMs and comparing them with function-level fine-tuning (Feng et al., 2020; Wang et al., 2021; Lozhkov et al., 2024; Grattafiori et al., 2024) and heuristic-based context selection methods (Cheng et al., 2021; Li et al., 2018). Results demonstrate that language models benefit substantially from the semantically focused context provided by FocusVul, achieving an average F1-score improvement of 164.03%. (We report the binary F1-score due to severe label imbalance.) The realistic metric VDS@0.5 likewise decreases by 25.03% on average. Full-function fine-tuning already outperforms traditional heuristics-based methods, which are often CWE-specific and lack pre-trained knowledge. Notably, larger generative LMs (e.g., LLaMA 3.1) underperform smaller encoder-based models (e.g., CodeBERT), leaving room to explore the effectiveness of generative decoding in classification tasks.

# 4.2.2 Pair-wise Sample Differentiation

To assess model sensitivity to vulnerability semantics, we adopt a pair-wise task (Ding et al., 2024) where each pair contains a vulnerable and a closely matched benign function. This setting is challenging due to subtle and sparse signals. As shown in Table 2, FocusVul significantly improves the proportion of correctly identified pairs (P-C), with an

Table 3: CR (%) and RR (%) across different context selection strategies. OF and HB denote original-function inputs and heuristic-based context selection.
| Strategy | CR (Train) ↑ | RR (Train) ↑ | CR (Test) ↑ | RR (Test) ↑ |
|---|---|---|---|---|
| OF | 0.00 | 51.23 | 0.00 | 62.30 |
| HB | 8.12 | 52.76 | 8.51 | 64.89 |
| FocusVul | 17.34 | 59.08 | 18.56 | 72.62 |

[Figure 4: Normalized GFLOPs per sample before and after FocusVul; reductions of 12.31% (CodeBERT), 12.72% (CodeT5), 17.28% (StarCoder2), and 34.17% (LLaMA 3.1).]

average gain of 241.12%, highlighting its ability to guide models toward vulnerability-aware predictions. While some baselines yield lower misclassification rates, they rarely produce correct predictions, making P-C the most meaningful metric. Chat-based models show divergent tendencies: GPT-4o-mini favors benign predictions (conservative bias), Qwen3 tends to over-predict vulnerabilities (over-sensitivity), and DeepSeek-70B remains more balanced but lacks precision. These behaviors suggest that general-purpose LMs, while strong in reasoning, are not inherently tuned for sparse, localized vulnerability patterns.

# 4.3 RQ2: Detection Efficiency

To evaluate FocusVul's efficiency, we report two device-independent kinds of metrics. (i) Compression metrics, including Compression Rate (CR) and Retention Rate (RR). CR measures input reduction and is defined as one minus the ratio of post-selection to pre-selection token length; original functions have CR = 0.00%. RR quantifies the proportion of vulnerability-related tokens retained after truncation, averaged over pair-wise samples. A higher RR indicates better preservation of critical information under input length constraints. Token lengths are computed using the RoBERTa tokenizer with a maximum length of 512. As shown in Table 3, FocusVul achieves higher CR and RR than the other strategies, indicating more concise and informative context selection. (ii) GFLOPs/sample: we estimate computational cost as normalized FLOPs per model, with full-length inputs set to 1.0. As shown in Figure 4, FocusVul reduces average GFLOPs by 19.12%, reflecting lower token usage without degrading detection performance.
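As a minimal sketch (our own helper names, illustrative numbers only, not taken from the paper's data), the two compression metrics reduce to:

```python
def compression_rate(pre_tokens, post_tokens):
    """CR = 1 - (post-selection token length / pre-selection length)."""
    return 1.0 - post_tokens / pre_tokens

def retention_rate(kept_vuln_tokens, total_vuln_tokens):
    """RR: fraction of vulnerability-related tokens that survive
    truncation (averaged over pair-wise samples in the paper)."""
    return kept_vuln_tokens / total_vuln_tokens

# Example: a 512-token function reduced to 418 tokens after selection.
print(round(100 * compression_rate(512, 418), 2))  # CR in percent: 18.36
```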
Table 4: Contribution of key components (all results are F1-scores (%) on the classification task).

# 4.4 RQ3: Component Effectiveness

To assess the contribution of key components in FocusVul, we conduct ablation studies targeting three aspects: (1) the benefit of $VRR_{com}$, (2) the effectiveness of test-time VRR prediction, and (3) the necessity of region-based context selection. We compare the full models with the following variants:

1) w/o $VRR_{com}$ (VC): replaces $VRR_{com}$ with heuristic-based indicators in both training and test sets, evaluating the value of commit-based fine-grained supervision.

2) w/o $VRR_{com}$ identification (VCI): uses $VRR_{com}$ during training but applies heuristics at inference, assessing the effectiveness of $VRR_{com}$ prediction.

3) w/o context selection (CS): removes all context selection strategies and uses full functions as input, verifying the necessity of VRR-guided context selection.

Table 4 reports the performance of the ablated variants. Removing either commit-based supervision (w/o VC) or context selection (w/o CS) leads to a significant performance drop. w/o VC performs slightly better than using full functions, suggesting that heuristic-guided context still offers efficiency and relevance. Using $VRR_{com}$ only during training (w/o VCI) leads to the largest drop, due to a mismatch between training- and test-time semantics.
Language models (LMs) show promise for vulnerability detection but struggle with long, real-world code due to sparse and uncertain vulnerability locations. These issues, exacerbated by token limits, often cause models to miss vulnerability-related signals, thereby impairing effective learning. A key intuition is to enhance LMs with concise, information-rich context. Commit-based annotations offer precise, CWE-agnostic supervision, but are unavailable during inference, as they depend on historical code changes. Moreover, their extreme sparsity, often covering only a few lines, makes it difficult for LMs to process directly. In this paper, we propose FocusVul, a model-agnostic framework that improves LM-based vulnerability detection by learning to select sensitive context. FocusVul learns commit-based annotation patterns through hierarchical semantic modeling and generalizes them to identify line-level vulnerability-relevant regions during inference. It then extracts LM-oriented context via both dependency and execution flows surrounding selected regions, yielding semantically rich inputs for effective vulnerability detection. Experiments on real-world benchmarks show that FocusVul consistently outperforms heuristic-based and full-function fine-tuning approaches, improving classification performance by 164.04% and reducing FLOPs by 19.12% on average.
# 1 Introduction

Spatial data has become integral to modern applications due to the ever-growing number of Internet-of-Things (IoT) devices. These devices produce geospatial information used in services such as real-time traffic-aware routing, incident hotspot detection, and weather updates and predictions, among various other use cases [16, 17, 22, 24]. Building such applications requires informed decisions regarding the technology stack, especially regarding data storage factors. Spatial indexes enhance query performance in spatial databases by enabling efficient querying of spatial data [1, 27, 43]. These structures provide a way to quickly locate spatially co-located data points and vectors, while also enabling filtering of irrelevant data points. Some systems offer a single default index, while others, such as PostGIS, support multiple types, each with unique trade-offs between query speed, index creation time, and maintenance costs. Moving object data is an important subset of spatial data, capturing the movement of multiple objects over time. Such data can be stored in different formats, such as discrete points or trajectories [31, 37, 42]. While purpose-built systems for trajectories exist, general-purpose solutions such as PostGIS remain widely used due to their flexibility and ecosystem compatibility. Choosing an optimal index and format for both spatial data and moving object data is non-trivial, as not all indexes are equally effective for all datasets and query types [26]. This is important to consider, however, as the data format can impact the possible queries, the granularity of the query response, and the performance of the database. Performance may also vary depending on the characteristics of the data, such as spatial distribution and amount of data overlap, which can be difficult to determine a priori.
This paper investigates how spatial data characteristics, data format, and index choice impact database performance, using PostGIS as a prototype database platform for our evaluation. We construct an application-driven benchmark using both synthetic and real-world spatial datasets with varying data distributions and degrees of overlap. We provide novel approximation methods to determine the degree of skew and overlap that scale to large datasets, with a runtime independent of dataset size. Our benchmark includes both read and write evaluations to fully capture the impact of dataset properties, index choice, and data format on database performance. Based on our results, we provide guidance for developers seeking to make informed storage decisions tailored to their use case. Our key contributions are as follows:

• We develop novel, tunable approximation methods to assess overlap and distribution properties of trajectory-based datasets (§3).

• We design a benchmark for comparing data formats, data characteristics, and index types using both real-world and synthetic datasets in PostGIS, analyzing their impact on read and write performance (§4.1).

• We provide practical recommendations on which index type and data format to choose depending on one's dataset characteristics (§4.4).

# 2 Background and Related Work

In this section, we provide background on moving object data and relevant storage and indexing strategies, and highlight key related work that guides our paper.

# Moving Object Data

Moving object data is a category of spatial data that captures the position of objects over time and enables mobility-based applications such as traffic prediction, route planning, and fleet management [11, 12, 25]. Typically, this data originates from GPS sensors, producing a sequence of point observations. A trajectory represents a continuous path constructed by linking these points, for example, tracing a bicyclist's commute [23].
Trajectories can be stored in different formats, such as storing individual points, storing segments of a trajectory (each with its own entry), or storing the full trajectory as a single object. Each format offers trade-offs: storing trajectory segments allows for more fine-grained analysis, while whole-trajectory storage simplifies representation and storage at the cost of query flexibility. Figure 1 illustrates how these formats differ from one another. Some databases provide interpolation features to support trajectory-based queries while storing data in a point-based format, with MobilityDB being a notable example [44].

# Spatial Indexing Strategies

Spatial indexes support efficient querying of spatial data. They include R-Trees, QuadTrees, Generalized Search Trees (GiST), Block Range Indexes (BRIN), space-filling curves, and many more. Each of these has trade-offs in terms of data structure, dimensionality, and query performance. Many of these indexes rely on bounding boxes such as the Minimum Bounding Rectangle (MBR) to quickly filter irrelevant data points. A summary of common index types is shown in Table 1; many more exist, some of which also incorporate other attributes such as time [14, 33]. Systems exist that implement custom indexing and storage techniques for trajectory data [2, 8, 21].

# Factors Affecting Performance

Several features of trajectory data affect database performance, including the storage format, the spatial distribution of data (data skew), and the degree of spatial overlap (intersections). Storage format influences how efficiently queries can be processed; for example, segmenting trajectories may enable better indexing but increase complexity. Data skew refers to an uneven distribution of data over space, such as traffic clustering in urban areas. This may lead to an imbalance in indexing structures and performance degradation. Intersections refer to overlapping spatial objects, and while related, they are distinct from skew.
High skew does not necessarily imply high intersection, as we show in §3. For instance, delivery vehicles often operate in dense urban zones (high skew) without frequent overlap due to route optimization. Studies have addressed both phenomena: Chen et al. proposed specialized representations and intersection algorithms for 3D spatial data [6], while others explored detecting skew in systems such as SpatialHadoop [4, 40].

# The Role of Representative Datasets

Benchmarking indexing performance requires realistic, diverse datasets. Using only one dataset, as is often done, fails to account for performance variations due to data characteristics. For example, Zhang et al. demonstrate how data skew can be exploited to optimize queries [41]. Publicly available datasets like the Piraeus AIS maritime dataset provide ample real-world data (over 240 million records) for this purpose [34].

# Related Work

A wide range of research has investigated spatial indexes and benchmarks. Nguyen et al. demonstrated the benefit of using spatial indexes with trajectories in PostGIS, but only evaluated GiST, ignored other strategies, and considered only a single data format [27]. Additionally, that work focuses on a road network, which is not representative of moving object data in general. Chen et al. benchmarked several indexes outside of a database with synthetic car trajectory data [5]; however, they did not include real-world datasets or assess data formats. Xu et al. proposed GMOBench, a benchmark for multimodal trajectory data, but again relied on synthetic datasets and did not explore storage alternatives [39]. BerlinMOD introduced a mobility benchmark focusing on spatiotemporal queries and data generation [10]; its contribution lies in query categorization, not in evaluating index or storage format impact.

Figure 1: Moving Object Data can be stored in various formats, such as simply storing the point data.
One can also store segments of the trajectory separately (each color represents a separate entry in the database), or store the entire trajectory as one object.

Table 1: Each of these indexing strategies has its own advantages and disadvantages, which can be used to determine the best index for a specific use case. The related work mentioned in the table is not exhaustive, but provides a good overview.

TrajStore proposed a specialized trajectory storage format and indexing mechanism [8]. While valuable, its architecture differs from widely-used systems like PostGIS.

# 3 Approach

Previous research has shown that certain indexes are better suited for overlapping or non-overlapping data, and that data distribution can also impact the performance of a database depending on the index [9, 28]. Developers should therefore be interested in their dataset characteristics to make an informed decision on which index to use. In this section, we describe our benchmarking approach, while highlighting dataset features that we consider when selecting representative datasets for our evaluation. We provide novel approximation methods to quantify overlap and distribution in a dataset, which can serve as a guideline for developers when choosing the appropriate index for their data.

# Assessing the Impact of Data Skew and Overlap

Real datasets, while sharing common characteristics, can differ greatly in terms of data skew and overlap. Data such as urban cycling data is often fairly evenly distributed across a city; however, when including all data from a country, a large skew may be present.

Intersections and Overlaps. Intersections refer to two vectors or trajectories intersecting at one or more points, while overlaps refer to trajectories occupying the same space over a longer distance rather than at just a single point.
A simple example would be a 4-way street crossing: two bicycles going in perpendicular directions have intersecting paths, while two going in the same direction overlap. In spatial databases that use MBR-based indexing strategies, however, these two terms coincide due to the way the indexes are structured. As mentioned in the previous section, MBRs help quickly filter irrelevant data when running queries and allow for faster query responses. When checking two trajectories for a possible intersection, the system first checks their MBRs for an overlap, leading to the interchangeable use of the two words in this context. Figure 2 shows how both intersecting and non-intersecting data can have overlapping MBRs.

Figure 2: Trajectory 1 and 2 have overlapping minimum bounding rectangles, but do not intersect.

Determining the Global Overlap Coefficient. The amount of overlap in a dataset is crucial for the choice of the most suitable index, as some indexes perform better when data is space-partitioned [1]. Given a trajectory $t$ and another trajectory $t'$, we can determine whether they overlap in terms of bounding-box-based indexing by checking if their MBRs overlap. Using this, we can turn our trajectory data into a graph and use an existing graph metric to determine the amount of overlap in our dataset. We represent each trajectory in our dataset as a node in a graph; if the MBRs of two trajectories overlap, we create an edge between those nodes. An example of how the trajectory data is converted to a graph can be seen in Figure 3. A graph representation of our data enables us to apply graph density measures to quantify the trajectory overlap. The graph density sets the number of total edges in relation to the number of all possible edges.
Given a set of nodes $V$ and edges $E$, the density of a graph can be calculated as follows:

$$ D = \frac{2|E|}{|V|(|V|-1)} $$

Returning to our conversion of trajectories into overlap graphs, the density in our case reflects the number of overlapping trajectories in relation to the total number of possible overlaps. A high density (i.e., closer to 1) indicates that a high degree of trajectories overlap with one another, while a low value (closer to 0) indicates the opposite. The general graph approach is not without its problems, especially with highly overlapping data: generating a graph data structure from large trajectory datasets, containing millions or billions of instances, is computationally infeasible. We thus propose an approximation to allow developers a fast estimation of their dataset characteristics.

Figure 3: Traj. 1's MBR overlaps with Traj. 2 and 3, while Traj. 2 and 3 do not overlap. In graph form, each trajectory is a node and possesses an edge to overlapping trajectories. The GOC of this dataset would then be 2/3.

Given a dataset of $m$ trajectories:

• We take $n$ randomly selected trajectories from our dataset.

• We apply our approach to the selected trajectories and calculate the density of the graph.

• We repeat this process $p$ times and take the median of all approximated densities.

The cost of this procedure is independent of the dataset size $m$ (each of the $p$ samples requires $O(n^2)$ pairwise MBR checks), instead of the $O(m^2)$ required for the full graph, and we can choose $n$ and $p$ based on the desired accuracy of our approximation. In the rest of the paper, we refer to this metric as the global overlap coefficient (GOC) of a dataset.

Adapting Average Nearest Neighbor to Trajectories. The graph density, while suitable for evaluating overlap, does not cover the skew of a dataset. Data in some scenarios can be highly clustered without overlapping, which our density coefficient would not be able to detect.
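Before turning to skew, the GOC sampling procedure above can be sketched as follows (a minimal illustration with our own function names; the paper does not specify implementation details):

```python
import random

def mbr(traj):
    """Minimum bounding rectangle (xmin, ymin, xmax, ymax) of a
    trajectory given as a list of (x, y) points."""
    xs, ys = zip(*traj)
    return min(xs), min(ys), max(xs), max(ys)

def mbrs_overlap(a, b):
    """Axis-aligned rectangle intersection test."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def goc(trajectories, n, p, seed=0):
    """Approximate global overlap coefficient: median graph density over
    p samples of n trajectories each. Each sample costs O(n^2) pairwise
    MBR checks, independent of the dataset size m."""
    rng = random.Random(seed)
    densities = []
    for _ in range(p):
        sample = rng.sample(trajectories, min(n, len(trajectories)))
        boxes = [mbr(t) for t in sample]
        k = len(boxes)
        edges = sum(mbrs_overlap(boxes[i], boxes[j])
                    for i in range(k) for j in range(i + 1, k))
        densities.append(2 * edges / (k * (k - 1)))
    return sorted(densities)[len(densities) // 2]  # (upper) median
```

For the three trajectories of Figure 3 (Traj. 1's MBR overlapping Traj. 2 and 3, which do not overlap each other), a single full sample yields a density of 2/3, matching the GOC in the figure.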
We therefore need an alternative approach to evaluate the distribution of our dataset. For point patterns, a common way to evaluate data distribution is the average nearest neighbor (ANN) approach [7, 32]: we determine the average distance of each point to its nearest neighbor and compare the result to the expected value for a dataset with a uniform distribution. The observed distance can be calculated as follows:

$$ D_O = \frac{\sum_{i=1}^{n} d_i}{n} $$

where $d_i$ is the distance of point $i$ to its nearest neighbor, and $n$ is the total number of points in the dataset. The expected distance in a uniformly distributed dataset with $n$ points can be calculated as:

$$ D_E = \frac{0.5}{\sqrt{n/A}} $$

where $A$ is the area of the bounding box of the dataset.

Figure 4: These simplified trajectories show how we can apply ANN to trajectories. With a small number of trajectories, exactly calculating this value is still realistic; including a large number of trajectories necessitates an approximation to finish the calculation in a reasonable time. Our ANN approximation excludes points from a point's own trajectory.

The ANN is simply the ratio of the two values:

$$ ANN = \frac{D_O}{D_E} $$

A value of $< 1$ indicates a clustered distribution, while a value of $> 1$ indicates a dispersed dataset; the larger the value, the more dispersed the data. When applying this to trajectories, the idea does not hold up well: the distance between two trajectories is the closest possible distance between them, which is not indicative of their actual distribution. Two trajectories may intersect at one point but be very far apart otherwise, and still have a distance of 0. We can, however, still use the ANN idea if we convert our trajectories into multiple points.
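For a plain point set, the exact ANN index defined above can be computed directly (a brute-force sketch with our own function name; the trajectory conversion and sampling follow in the text):

```python
import math

def ann_ratio(points, area):
    """Exact ANN index: observed mean nearest-neighbour distance D_O
    divided by the expected value D_E = 0.5 / sqrt(n / A) for a
    uniformly distributed pattern (brute force, O(n^2))."""
    n = len(points)
    d_obs = sum(
        min(math.dist(points[i], points[j]) for j in range(n) if j != i)
        for i in range(n)
    ) / n
    d_exp = 0.5 / math.sqrt(n / area)
    return d_obs / d_exp
```

Four points at the corners of a unit square give $D_O = 1$ and $D_E = 0.25$, i.e., $ANN = 4$ (dispersed); values below 1 indicate clustering.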
In this paper, we rely on the update frequency of the trajectory to convert trajectories, meaning that every time a sensor updates its position, we create a point in our dataset. Figure 4 shows what such a nearest neighbor approach would look like in a trajectory dataset, while immediately highlighting an issue: given a large number of trajectories, which in turn is transformed into an even larger number of points, the ANN approach is too computationally expensive to be used in a reasonable time frame. We can again approximate the value by relying on a sampling strategy. Given a dataset containing $m$ trajectories, each with $k$ points, we use the following approach:

• We take a random point from a random trajectory. We calculate the nearest neighbor of this point, and store the distance.

• We repeat this process $n$ times, sum up the stored distances, and divide by $n$ to get the average distance.

• We scale this value by multiplying it with $(m \cdot k)/n$ to get an approximation of $D_O$.

• We again repeat this process $p$ times and take the median of all results.

This reduces the number of nearest-neighbor computations to $n \cdot p$, where we can choose $n$ and $p$ based on the desired accuracy of our approximation, instead of the $O((m \cdot k)^2)$ distance evaluations of the exact approach, for all dataset sizes.

Covering all Bases with Representative Datasets. We include real-world datasets from a variety of use cases, such as cycling data, aviation data, and ship trajectory (AIS) data. However, to evaluate the impact of skew and overlap on database performance, a broader combination of datasets is required. Various approaches to dataset generation are possible and have been implemented in a variety of papers [10, 18]. By implementing the following dataset generation strategies, we can cover a broad range of use cases and data distributions:

• Randomly generating trajectories within a bounding box.
• Evenly distributing trajectories within a bounding box, where we lay a raster over the bounding box and place trajectory starting points at the center of each cell.

• Enabling a hotspot-based approach, where trajectories mostly exist within hotspots and form clusters.

• Providing the same hotspot approach with overlaps between hotspots (as could be found in car traffic).

We run our benchmark against all mentioned datasets and use them to evaluate the impact of dataset features on database performance. All included datasets can be found in Table 2, where important data features are highlighted. Datasets are stored in two different formats: line-based data can differ in its storage format, as we could store the entire trajectory as a single object or store each segment of the trajectory separately. These two approaches have trade-offs, with the segmented approach allowing for more fine-grained querying and analysis, while storing the trip as a whole reduces the entire trip to a single row in the database, which could lead to a change in performance. Therefore, both formats are included in our benchmark. A point-based approach reduces the number of possible queries without prior conversion to a trajectory (at least within PostGIS and using real-world datasets), which led us to omit this format.

# 4 Evaluation

In this section, we evaluate database performance using both the synthetic and real-world datasets, and examine the impact of data format, index choice, and dataset characteristics in both read and write scenarios.

# 4.1 Experiment Design

We include different read query types in our evaluation to cover a broad range of real-world applications.

Figure 5: We include 7 different datasets in our evaluation, with 4 synthetic and 3 real-world datasets. The real-world datasets are from the SimRa project, the Deutsche Flugsicherung, and the Piraeus AIS dataset.
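The hotspot-based generation strategy above could be sketched as follows (a minimal illustration; the parameter names and the random-walk step size are our assumptions, not taken from the paper):

```python
import random

def hotspot_trajectories(num_traj, hotspots, spread, pts_per_traj=10, seed=0):
    """Each trajectory starts near a randomly chosen hotspot centre and
    then random-walks from there, so trajectories cluster around the
    hotspots (high skew)."""
    rng = random.Random(seed)
    trajs = []
    for _ in range(num_traj):
        cx, cy = rng.choice(hotspots)          # pick a hotspot centre
        x, y = rng.gauss(cx, spread), rng.gauss(cy, spread)
        traj = [(x, y)]
        for _ in range(pts_per_traj - 1):       # short random walk
            x += rng.gauss(0, spread / 10)
            y += rng.gauss(0, spread / 10)
            traj.append((x, y))
        trajs.append(traj)
    return trajs
```

Placing the hotspots far apart yields clustered data with little overlap between clusters (high skew, low GOC); moving them closer, or enlarging the spread, produces the overlapping-hotspot variant.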
Table 2: We include synthetic and real-world datasets to cover a variety of data patterns that might occur in various use cases. We calculate GOC and ANN using our approximation method where appropriate, doing 10 iterations of 10,000 samples each. Each dataset consists of 30,000,000 segments, which are distributed across the listed number of trajectories. The name in parentheses is the name used in our evaluation.

• How many other trajectories does a given trajectory overlap with (Intersection query)?

• Which trajectories are partially/completely within a specified polygon (Contains query)?

• What are the K nearest neighbors of a given trajectory (KNN query)?

• How many trajectories are within a specific distance of a given trajectory (Proximity query)?

We additionally evaluate the three different write operations that can be performed (Insert, Update, and Delete), as they cover basic application scenarios and index choice may impact performance here as well. For our write operations, we insert either a single trajectory or 100 trajectories into the database. Regarding the update and delete scenarios, we either adjust a single trajectory or 1% of the dataset in a batch operation. Each of these queries is run with 50 unique configurations. Using a Contains and a single Insert query as an example: during each configuration, the bounding box of the polygon in which trajectories are queried is unique, and a randomly generated trajectory is inserted into the database within a specified polygon. The SUT is therefore subjected to 50 different benchmark configurations for each query type. We take further considerations into account here, such as filtering out polygons that return no results using rejection sampling, and ensuring that the bounding box is within the area of the dataset. Each of these experiments is run against each dataset in every unique combination of index type and data format that we include in this paper.
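The configuration generation with rejection sampling described above might look like this (our own sketch; `count_results` is a hypothetical stand-in for running the query against the SUT):

```python
import random

def make_configs(dataset_bbox, box_size, count_results, n=50, seed=0):
    """Draw random square query boxes inside the dataset bounding box
    and keep only those for which the query returns at least one result
    (rejection sampling)."""
    rng = random.Random(seed)
    xmin, ymin, xmax, ymax = dataset_bbox
    configs = []
    while len(configs) < n:
        x = rng.uniform(xmin, xmax - box_size)
        y = rng.uniform(ymin, ymax - box_size)
        box = (x, y, x + box_size, y + box_size)
        if count_results(box) > 0:  # reject empty-result polygons
            configs.append(box)
    return configs
```

Constraining the draw to `(xmin, xmax - box_size)` keeps the whole query box inside the dataset area, matching the bounding-box constraint mentioned above.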
Each dataset is included in two formats, a segmented and a non-segmented version, and three different index types are included in the evaluation (GiST, SP-GiST, and BRIN). As SP-GiST is specifically designed for space-partitioned data, our assumption is that datasets with low overlap will benefit from this index type. The segmented version of a dataset is created by splitting the trajectory using the update frequency of the dataset. To fairly compare the impact of spatial features and data format, the dataset size is fixed across all datasets: we always include 30,000,000 trajectory segments per dataset to ensure that we can fairly evaluate the impact of overlap/distribution. The key difference between datasets is how these segments are distributed across single entire trajectories, and how they are distributed across their respective bounding boxes.

# 4.2 Experiment Setup

All of our experiments were run on an 8-core Intel Xeon 4310 CPU with 32GB of RAM. The SUT uses PostGIS 3.5.0 and PostgreSQL 16.8, with the database being run as a single instance. During initialization, we deploy all datasets to the SUT, with index creation happening before the evaluation.

# 4.3 Experiment Results

We first evaluate the read performance of our datasets using a variety of queries, and afterwards run write experiments with separate insert, update, and delete benchmark runs. Within each part, we evaluate the impact of data format, index choice, and dataset characteristics.

4.3.1 Read Performance. We ran three experiment repetitions, but due to the high number of results, we will focus on one run; the other runs were similar in their results.

Impact of Spatial Index Choice. Results from our evaluation show GiST and SP-GiST to be the best-performing index types for our read queries, with GiST outperforming SP-GiST especially in the non-segmented format.
This was expected, as the larger bounding boxes resulting from the non-segmented format do not allow for efficient space-partitioned indexing of our data, which is the main advantage of SP-GiST. Figure 6 shows the performance of the different index types across all datasets and query types, with the segmented data on the left and the non-segmented data on the right. For trajectory data in PostGIS, BRIN performed poorly across all datasets and query types. The trajectory format makes BRIN inefficient, and while it may be beneficial for point data in some cases, our results show that one should likely not use BRIN when working with trajectory data.

Impact of Data Format and Characteristics. Figure 7 highlights the performance difference for all query types in both formats using GiST, as it was the best-performing index type overall. Our results show a split between the two formats, with all of our high-GOC datasets performing better with the segmented format, while the low-GOC datasets benefited from the non-segmented format. This led us to further investigate the relation between GOC and the average speedup of the non-segmented format when using GiST as the index type, averaged across all query types. Figure 8 shows this relation, with the dotted line indicating a possible regression between the values. We find that the lower the GOC of a dataset, the better the performance of the non-segmented format, with the high-GOC datasets performing better with the segmented format across most query types. Further research is required to determine the exact relation between GOC and performance; one idea is to apply the elbow method to the GOC values to determine a threshold beyond which the non-segmented format is beneficial.
Figure 6: On average, GiST outperforms SP-GiST, especially in the non-segmented format; however, there are scenarios where SP-GiST does provide an advantage. If designing a general-purpose application able to handle a variety of queries, GiST is the better choice. BRIN performed poorly across all datasets and query types, and should likely not be used for trajectory data. Figure 7: The GOC of a dataset seems to be a good indicator of whether the non-segmented format has any benefit at all, with our low GOC datasets showing performance benefits for the non-segmented format, while the ones with a higher GOC experience better performance with the segmented format. The Contains query type is the only one where the non-segmented format performs better across almost all datasets. While our ANN coefficient is able to provide insights into the distribution of the data, it does not seem to impact overall read performance in our evaluation. Regardless, we believe that it is a valuable addition and could potentially provide insights into performance when relying on other databases or indexing strategies. Figure 8: In our evaluation, datasets with a lower GOC benefited heavily from using the non-segmented format when averaging across all query types. Higher GOC datasets received little to no speedup, with AIS data even performing better when using the segmented format. However, we were not able to determine a statistically significant correlation. 4.3.2 Write Performance. Our write experiments highlight how data format plays a large role in database performance across all three types of write operations, while also showing that BRIN can provide an advantage in some cases. Impact of Index Choice. Figure 9 shows the performance of the different index types across all datasets and query types, with the singular operations on the left and the batch operations on the right.
Our results show that BRIN is the best-performing index type for most write operations; however, the benefit is not as pronounced as its drawback was for read operations. As its benefits are inconsistently spread across write operations, we believe that it is not a good choice for trajectory data in PostGIS unless an application is heavily write-focused. SP-GiST was outperformed by GiST in nearly all of our cases; however, it sometimes provides a small advantage in datasets with low overlap. In single-insert scenarios, SP-GiST was able to outperform GiST on average in all datasets, but the performance difference was negligible. Impact of Data Format and Characteristics. When regarding the impact of data format on write performance, we found that the segmented format lagged behind the non-segmented format in nearly all of our comparisons. This is as expected, as each operation on a segmented trajectory requires the database to perform the operation on multiple rows, whether it be inserting a trajectory in segmented form or deleting/updating an existing one. The difference is not as pronounced in our insert operations, as we do not insert a percentage of the dataset, but a fixed number of trajectories.

# 4.4 Summary of Findings & Recommendations for Developers

When specifically considering the impact of GOC and ANN on write performance, we focus on the segmented format, as the number of segments is fixed across all datasets. Insert operations, where a bounding box is used to insert new trajectories, show that the GOC may have an impact on performance, as our higher GOC datasets performed noticeably worse than the lower GOC datasets. However, these are not in ascending order, and we were not able to determine a correlation between GOC and performance. While we still believe trajectory ANN to be an important metric, it did not impact performance in our evaluation.
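ANN here denotes an average nearest neighbour coefficient; in its classic Clark–Evans form it is the ratio of the observed mean nearest-neighbour distance to the distance expected under a uniform random distribution, with values below 1 indicating clustering (skew). The paper adapts ANN to trajectories, so the following point-based sketch shows only the standard formulation, not the paper's exact method:

```python
import math

def ann_ratio(points, area):
    """Clark-Evans style ANN ratio: observed mean nearest-neighbour
    distance divided by the expectation 0.5 * sqrt(area / n) under
    complete spatial randomness. Values < 1 indicate clustering
    (skew), ~1 randomness, > 1 dispersion."""
    n = len(points)
    observed = sum(
        min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        for i, p in enumerate(points)
    ) / n
    expected = 0.5 * math.sqrt(area / n)
    return observed / expected

# Four tightly clustered points in a large area -> strongly clustered:
clustered = ann_ratio([(0, 0), (0, 0.01), (0.01, 0), (0.01, 0.01)], area=100.0)
# Four grid-spread points in the same area -> dispersed:
dispersed = ann_ratio([(0, 0), (0, 10), (10, 0), (10, 10)], area=100.0)
```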
When relying on another indexing strategy and database, it may be beneficial to include ANN in the evaluation again to examine a possible correlation with performance. Our results show that GiST remains the dominant choice for indexing spatial trajectory data in PostGIS, as it outperforms or performs similarly to SP-GiST in nearly all cases, while outperforming it in nearly all scenarios with non-segmented data. When developers are deciding how to store their data, we provide the following recommendations based on our results: When storing non-segmented data, GiST remains the optimal choice within the scope of our evaluation, as it outperforms SP-GiST in all cases. When implementing a segmented data format, both index types perform similarly, with SP-GiST performing slightly better in some cases; these cases are, however, not tied to the degree of overlap in the data. Our findings show that the higher the GOC of your data, the less benefit there is to storing it in a non-segmented format. When using GiST, high GOC datasets benefit only slightly from, and in some cases even suffer under, the non-segmented format. For those datasets, we recommend using the segmented format, as it provides better performance and a higher level of detail. While adapting ANN to trajectories provides novel insights into a dataset, we did not find a correlation between ANN and performance. While other experiments may show a relationship here, our results indicate that data skew does not impact performance when using trajectory-based data. When running a write-heavy workload, the non-segmented format provides an advantage, as it allows for faster writes across all datasets. While BRIN indexes are a good choice if one only considers write performance, their poor read performance likely makes them unsuitable for most applications.
If expecting a mix of read and write queries, we recommend evaluating the GOC of your data to determine the optimal data format for performance, and relying on GiST as the index type. Our assumption was that SP-GiST would prove a better choice in scenarios where data is evenly distributed across the observed area, as it was specifically designed for cases where data is spatially partitioned. However, our results show that, at least for trajectory data in PostGIS, this is not the case. Regarding write performance, both index types perform similarly in most cases, with BRIN having an advantage in most scenarios. The GOC seems to be a good indicator of whether the non-segmented format has any benefit at all, while the ANN coefficient did not seem to impact performance in our evaluation. Figure 9: BRIN is usually the best-performing index type for most write operations. SP-GiST was outperformed by GiST in nearly all of our cases; however, it sometimes provides a small advantage in datasets with low overlap. Of note is the noticeably better performance in our low GOC datasets for insert operations.

# 5 Discussion & Future Work

In this section, we discuss limitations of our evaluation and suggest future work to address these. While MobilityDB could be considered a more suitable system under test (SUT) due to its focus on mobility data, we chose PostGIS as it is more widely used, and our evaluation is limited to spatial queries. MobilityDB, despite its spatiotemporal capabilities, does not introduce novel spatial indexing strategies and offers no significant advantage in purely spatial scenarios without temporal aspects. Future work should consider extending the evaluation to spatiotemporal queries, in which case MobilityDB or similar databases would be more appropriate. We focus on GiST, SP-GiST, and BRIN indexes, as these are the primary spatial indexing methods available in PostGIS.
While additional strategies such as space-filling curves or alternative indexing approaches could offer further insights, initial tests showed negligible benefit or poor performance compared to the selected methods. Future evaluations could explore these approaches in different database systems that offer a broader range of indexing options. Our benchmark includes diverse datasets and query types to ensure general applicability. Nevertheless, certain edge cases and access patterns may not be fully represented. While we normalize datasets by segment count to ensure fairness, this may unintentionally bias the results. We mitigate this by using multiple configurations and repeating experiments. Future work could expand the dataset range and normalization strategies to further reduce bias.
The growing number of moving Internet-of-Things (IoT) devices has led to a surge in moving object data, powering applications such as traffic routing, hotspot detection, or weather forecasting. When managing such data, spatial database systems offer various index options and data formats, e.g., point-based or trajectory-based. Likewise, dataset characteristics such as geographic overlap and skew can vary significantly. All three significantly affect database performance. While this has been studied in existing papers, none of them explore the effects and trade-offs resulting from a combination of all three aspects. In this paper, we evaluate the performance impact of index choice, data format, and dataset characteristics on a popular spatial database system, PostGIS. We focus on two aspects of dataset characteristics, the degree of overlap and the degree of skew, and propose novel approximation methods to determine these features. We design a benchmark that compares a variety of spatial indexing strategies and data formats, while also considering the impact of dataset characteristics on database performance. We include a variety of real-world and synthetic datasets, write operations, and read queries to cover a broad range of scenarios that might occur during application runtime. Our results offer practical guidance for developers looking to optimize spatial storage and querying, while also providing insights into dataset characteristics and their impact on database performance.
# I. INTRODUCTION

3D reconstruction from images is a fundamental and long-standing challenge in computer vision, with widespread applications ranging from immersive virtual reality to 3D content creation. Recent progress in this field has been largely propelled by both implicit representations, such as Neural Radiance Fields (NeRF) [1] and its derivatives [2]–[4], and explicit representations, exemplified by 3D Gaussian Splatting (3DGS) [5] and related techniques [6]–[10]. While these approaches have achieved remarkable success in reconstructing static scenes under controlled conditions, which are characterized by stable lighting and consistent camera settings, they face notable challenges in real-world scenarios. In unconstrained and dynamic environments, traditional methods often struggle to ensure consistent reconstruction quality, leading to issues such as blurriness, visual artifacts, and significant performance degradation [11].

To address the challenges of dynamic appearance variations in real-world scenes, NeRF-W [12] introduced per-image appearance embeddings, which were later refined by methods like [11], [13] to better handle inter-view variations. Despite these advancements, global embeddings often fall short in representing fine-grained details, as they inadequately capture the significant appearance variations influenced by object properties and environmental factors at specific scene locations. Moreover, the high computational demands of implicit representations limit their feasibility for real-time rendering applications. More recently, Gaussian-based approaches [14], [15] have emphasized modeling the intrinsic features of each Gaussian point, and simultaneously combining the appearance embeddings to predict affine transformations of base colors for dynamic appearance representation through an MLP. While these methods offer improvements, they still encounter limitations akin to NeRF-based approaches, primarily due to the less expressive nature of global embeddings. The state-of-the-art GS-W method [16] advances this area by enabling Gaussian points to adaptively sample detailed dynamic appearance information from 2D feature maps, capturing richer details with greater flexibility. However, challenges like blurriness remain, especially upon close inspection of rendered images.

Furthermore, most existing NeRF- and Gaussian-based methods are tailored to single-object scenes, and are not designed to scale to large, complex environments. In large-scale settings, such as urban scenes, significant appearance variations arise across views due to lighting changes, atmospheric conditions, and other environmental factors. These variations frequently introduce inconsistencies in brightness and color, while supervision for each object in the scene becomes increasingly insufficient. Such complexities highlight the fundamental limitations of current appearance disentanglement techniques, raising critical questions about their ability to generalize to and effectively model the intricate appearance dynamics of large-scale, real-world environments.

In this paper, we introduce Scalable Micro-macro Wavelet-based Gaussian Splatting (SMW-GS), a novel approach that addresses key limitations in existing 3D reconstruction techniques. Our method decomposes Gaussian features into three distinct components: global appearance, refined appearance, and intrinsic features, offering a comprehensive representation of dynamically varying scenes. Global features capture overarching scene attributes, such as color tone and lighting, while refined features model detailed textures and region-specific phenomena like highlights and shadows.
Intrinsic features represent consistent characteristics, such as inherent material properties, ensuring robustness across diverse viewpoints and appearance variations. The core innovation of our work lies in the Micro-macro Projection, which significantly improves refined appearance modeling, a critical challenge on which existing methods often underperform. Specifically, our approach utilizes adaptive sampling over narrow and broad conical frustums on the 2D feature map, enabling the optimization of 3D Gaussian points to capture both fine-grained textures and broader regional features, such as lighting transitions. Inspired by traditional MipMap operations, we introduce a simple yet effective jitter mechanism to the projected position of each Gaussian point at the micro scale, rather than relying on a fixed position as in previous methods. This jitter introduces variability, facilitating the capture of a richer and more diverse set of features. In addition, we incorporate Wavelet-based Sampling, leveraging frequency domain information to further enhance the accuracy of refined appearance modeling and reconstruction. This multi-scale approach ensures that each Gaussian point effectively captures fine-grained details while preserving feature diversity. To integrate these features cohesively, we design a Hierarchical Residual Fusion Network (HRFN), which seamlessly combines features across different scales, ensuring precise and consistent 3D reconstruction results. Building on our approach, we next address the challenge of extending Gaussian-level appearance disentanglement to large-scale environments. Conventional large-scene reconstruction pipelines typically adopt a divide-and-conquer strategy to partition scenes into manageable blocks based on visibility or geometric sensitivity, yet they rarely evaluate how effectively those partitions translate into per-Gaussian supervision.
To bridge this gap, we introduce our Point-Statistics-Guided (PSG) Camera Partitioning to scale our framework to large scenes. By analyzing the spatial distribution and sampling statistics of each Gaussian point, PSG ensures cameras are assigned where they can maximally inform individual Gaussians, yielding more consistent and robust supervision across the entire scene. Complementing this, our Rotational Block Training scheme alternates between block-wise and full-scene optimization, stabilizing the appearance disentanglement network under varying supervision distributions. Together, these strategies enable SMW-GS to maintain high geometric fidelity and appearance consistency even when reconstructing expansive, unconstrained urban environments. Overall, this study makes the following contributions:

• Scalable Micro-macro Wavelet-based Gaussian Splatting (SMW-GS): A unified framework for multi-scale 3D reconstruction that decomposes scene representations into global appearance, refined appearance, and intrinsic features to faithfully model dynamic environments.

• Micro-macro Projection: A sampling mechanism that combines jittered micro-scale perturbations with adaptive conical frustums, enabling each Gaussian point to capture a diverse range of fine-grained and regional features, significantly improving refined appearance modeling.

• Wavelet-based Sampling: A frequency-domain sampling strategy that refines multi-resolution feature representations, enhancing reconstruction fidelity by leveraging high- and low-frequency cues.

• Point-Statistics-Guided (PSG) Camera Partitioning: A camera assignment method driven by per-point statistics, which optimally distributes supervision across all Gaussians and ensures balanced training in large-scale scenes.
Extensive experiments on unconstrained image collections, including real-world large-scale datasets and our newly rendered benchmark, which is characterized by pronounced appearance variations, demonstrate that SMW-GS consistently outperforms state-of-the-art methods in reconstruction quality. A preliminary version of this work, MW-GS, was previously published in [17]. This paper introduces significant improvements over the original version in the following aspects: (i) Scalability to Large-Scale Scenes: We extend our framework to support arbitrarily large, unconstrained environments by integrating a Point-Statistics-Guided Camera Partitioning strategy that jointly considers block-level sensitivity and per-Gaussian supervision, enabling efficient distributed training and robust appearance disentanglement at scale. (ii) New Benchmark and Expanded Evaluation: We render a novel large-scale synthetic scene with pronounced appearance variations and, alongside classic in-the-wild and existing large-scale datasets, use them to thoroughly evaluate SMW-GS's generalization and robustness under challenging conditions. (iii) Fair and Systematic Comparisons: We conduct extensive comparisons against leading in-the-wild reconstruction methods and recent large-scale techniques across multiple datasets, demonstrating clear improvements in reconstruction accuracy, appearance consistency, and overall scalability. The remainder of this paper is organized as follows. Section II reviews related work on in-the-wild and large-scale scene reconstruction. Section III provides the necessary background. Section IV details our proposed method. Section V presents experimental results and analysis. Finally, Section VI concludes the paper and outlines future research directions.

# II. RELATED WORK

# A. Scene Representations

Various 3D representations have been developed to capture the geometric and appearance information of 3D objects or scenes.
Existing traditional methods include meshes [18]–[20], point clouds [21]–[24], and voxels [25], [26]. Recently, Neural Radiance Fields (NeRF) [1] have revolutionized the synthesis of novel, photo-realistic views from images. Extensions to NeRF enhance visual quality [27], [28], rendering speed [29], [30], and convergence [31], [32], though limitations persist in speed and detail. More recently, 3D Gaussian Splatting (3DGS) [5], an explicit representation method, offers real-time rendering with high-resolution quality. Recent advances in 3DGS include improvements in efficiency [33], surface reconstruction [34], and incorporating semantic attributes for multimodal applications [35]–[37]. 3DGS has also been extended to various tasks, including autonomous driving [38]–[40], 3D generation [41], [42], and controllable 3D scene editing [43]–[45].

# B. Novel View Synthesis from Unconstrained Images

Traditional novel view synthesis methods assume static geometry, materials, and lighting conditions. However, internet-sourced datasets [46] often contain varying illumination, which challenges these assumptions. NeRF-W [12] pioneered addressing these challenges by incorporating learnable appearance embeddings for each image. Later methods like Ha-NeRF [13] and CR-NeRF [11] further improved appearance modeling using CNN-based encoders. Despite these advancements, implicit NeRF-based models suffer from slow rendering, leading to the adoption of 3D Gaussian Splatting (3DGS) as a more efficient alternative to NeRF. Approaches such as SWAG [14] and WildGaussians [15] modulate Gaussian color via MLPs with learnable embeddings. WE-GS [47] introduces spatial attention for improved CNN-based representations, while GS-W [16] and Wild-GS [48] leverage CNNs to generate feature maps for dynamic appearance modulation. GS-W uses adaptive sampling from projected 2D features, whereas Wild-GS builds triplane embeddings via depth-aware projection.
However, projecting 2D features into 3D often causes sparsity and information loss. In this work, we propose Micro-Macro Wavelet-based Sampling, which enhances sampling diversity and accuracy by incorporating frequency-domain cues, representing the first integration of frequency domain data into 3DGS appearance representation. Our method significantly improves appearance representation and reconstruction quality for unstructured image collections.

# C. Large-scale Scene Reconstruction

Large-scale scene reconstruction has gained increasing attention due to demands for high-fidelity rendering and scalability. Early NeRF-based methods such as Block-NeRF [2] and Mega-NeRF [49] employ heuristic spatial partitioning. Subsequent approaches like Switch-NeRF [50] and Grid-NeRF [51] adopt learned or hybrid decompositions. 3D Gaussian Splatting has also scaled to urban scenes via spatial partitioning. VastGaussian [52] and CityGaussian [53] adopt a divide-and-conquer approach to reconstruct large-scale scenes. DOGS [54] and Momentum-GS [55] improve training via distributed optimization and self-distillation. However, global appearance embeddings (e.g., in VastGaussian) often struggle to capture fine-grained, per-point appearance changes. In contrast, our method introduces the first Gaussian-level appearance disentanglement within the divide-and-conquer paradigm for large-scale 3DGS. Enabled by Point-Statistics-Guided partitioning and Rotational Block Training, this fine-grained supervision yields superior reconstruction quality and appearance consistency in complex urban environments.

# III. PRELIMINARIES

# A. 3D Gaussian Splatting

3D Gaussian Splatting (3DGS) [5] represents scenes using anisotropic 3D Gaussians. Gaussians are projected onto 2D via tile-based rasterization and rendered using fast $\alpha$-blending.
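The fast $\alpha$-blending mentioned above composites depth-sorted Gaussians front to back, exactly the sum made precise in Eq. (3) below. A minimal pure-Python sketch for a single pixel; the weights $\alpha_i'$ are taken as given, and computing them from the splatted 2D Gaussians is omitted:

```python
def composite(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted Gaussians,
    C = sum_i c_i * a_i * prod_{j<i} (1 - a_j), for one pixel.
    Scalar colors are used for brevity; RGB works per channel."""
    pixel, transmittance = 0.0, 1.0
    for c, a in zip(colors, alphas):
        pixel += c * a * transmittance
        transmittance *= 1.0 - a
    return pixel

# A fully opaque nearest Gaussian hides everything behind it:
opaque = composite([0.8, 0.2], [1.0, 1.0])   # 0.8
# A single half-transparent Gaussian contributes half its color:
half = composite([1.0], [0.5])               # 0.5
```

Tracking the accumulated transmittance makes the loop O(n) instead of recomputing the product for every term.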
Each 3D Gaussian is characterized by a full 3D covariance matrix $\Sigma \in \mathbb{R}^{3 \times 3}$, which is defined in world space and centered at a point (mean) $\mu \in \mathbb{R}^{3}$:

$$ G(x) = \exp\left(-\frac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\right), $$

where $x$ is an arbitrary position within the 3D scene. To maintain positive semi-definiteness during optimization, $\Sigma$ is decomposed as:

$$ \Sigma = \mathbf{R}\mathbf{S}\mathbf{S}^{\top}\mathbf{R}^{\top}. $$

In practice, $\mathbf{R}$ (rotation) is parameterized using a unit quaternion $q$, and $\mathbf{S}$ (scaling) is derived from a 3D vector $s$. Each Gaussian is further associated with a color $\hat{c}$ and an opacity factor $\alpha$, both modulated by $G(x)$ during blending. For rendering, Gaussians are splatted [56] onto the screen, sorted by depth, and alpha compositing is employed to compute the final color $\widehat{C}$ for each pixel $\mathbf{p}$:

$$ \widehat{C}(\mathbf{p}) = \sum_{i \in N} \hat{c}_{i} \alpha_{i}^{\prime} \prod_{j=1}^{i-1} (1 - \alpha_{j}^{\prime}), $$

where $\alpha_{i}^{\prime}$ is the product of $\alpha_{i}$ and the splatted 2D Gaussian contribution.

# B. Discrete Wavelet Transform

Wavelet theory [57], [58] has long been a foundational tool in image analysis [59]–[61], offering an effective means to capture both local and global information by describing signals across different frequency bands and resolution levels. The 2D Discrete Wavelet Transform (DWT) decomposes an image into four distinct components in the frequency domain using low-pass ($\mathbf{L}$, emphasizing smooth regions) and high-pass ($\mathbf{H}$, capturing high-frequency details like textures) filters.
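With orthonormal Haar filters $\mathbf{L} = \tfrac{1}{\sqrt{2}}(1, 1)$ and $\mathbf{H} = \tfrac{1}{\sqrt{2}}(1, -1)$, this decomposition reduces to sums and differences over $2 \times 2$ blocks. A sketch of the one-level case (Haar is chosen purely for illustration; the text does not commit to a particular wavelet, and LH/HL naming conventions vary between libraries):

```python
def haar_dwt_level1(F):
    """One-level 2D Haar DWT of a 2D list F with even height and
    width, returning the LL, LH, HL, HH sub-bands at half resolution."""
    H2, W2 = len(F) // 2, len(F[0]) // 2
    LL = [[0.0] * W2 for _ in range(H2)]
    LH = [[0.0] * W2 for _ in range(H2)]
    HL = [[0.0] * W2 for _ in range(H2)]
    HH = [[0.0] * W2 for _ in range(H2)]
    for i in range(H2):
        for j in range(W2):
            a, b = F[2 * i][2 * j], F[2 * i][2 * j + 1]
            c, d = F[2 * i + 1][2 * j], F[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 2.0  # smooth content
            LH[i][j] = (a - b + c - d) / 2.0  # detail along rows
            HL[i][j] = (a + b - c - d) / 2.0  # detail along columns
            HH[i][j] = (a - b - c + d) / 2.0  # diagonal detail
    return LL, LH, HL, HH

# A constant image has no high-frequency content:
LL, LH, HL, HH = haar_dwt_level1([[1.0, 1.0], [1.0, 1.0]])
```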
Combining these filters yields four unique kernels, namely LL, LH, HL, and HH, which encode different spatial and frequency information. Given a feature map $\mathbf{F} \in \mathbb{R}^{H \times W}$, where $H$ and $W$ denote its height and width, respectively, applying a one-level DWT decomposition produces four sub-band features. This process is expressed as:

$$ \mathbf{F}_{w}^{\mathbf{LL}} = \mathbf{L}\mathbf{F}\mathbf{L}^{\top}, \quad \mathbf{F}_{w}^{\mathbf{LH}} = \mathbf{H}\mathbf{F}\mathbf{L}^{\top}, \quad \mathbf{F}_{w}^{\mathbf{HL}} = \mathbf{L}\mathbf{F}\mathbf{H}^{\top}, \quad \mathbf{F}_{w}^{\mathbf{HH}} = \mathbf{H}\mathbf{F}\mathbf{H}^{\top}. $$

For multi-channel feature maps, the wavelet transform is applied independently to each channel. The sub-bands corresponding to the same filter across channels are concatenated, yielding four comprehensive frequency sub-bands that capture diverse spatial and frequency characteristics.

# IV. METHOD

To address the challenges of reconstructing unconstrained scenes with varying illumination, we propose Scalable Micro-macro Wavelet-based Gaussian Splatting (SMW-GS), as illustrated in Fig. 1, a unified framework that enhances 3D Gaussian representations through the following innovations. First, we decompose each Gaussian's appearance into global illumination context, refined multi-scale textures, and intrinsic material embeddings, enabling explicit modeling across different abstraction levels. Global appearance features are extracted from a 2D reference image via a CNN encoder, while intrinsic features are parameterized as learnable embeddings.
Our key innovation lies in the Micro-Macro Wavelet-based Sampling mechanism, which enriches refined feature diversity by combining spatial jitter sampling with frequency-domain analysis: at both tight micro offsets and broader macro regions on decoded feature maps, we apply a one-level discrete wavelet transform to capture multi-resolution texture patterns with minimal overhead. A lightweight fusion network seamlessly integrates these signals to predict detail-preserving per-Gaussian color and opacity. Fig. 1. Overview of Scalable Micro-Macro Wavelet-based Gaussian Splatting (SMW-GS). Starting from an input image, a CNN backbone extracts global appearance embeddings and multi-scale feature maps. These feature maps undergo a one-level wavelet transform and Micro-Macro sampling, combining jittered micro offsets with broader macro frustums, to capture refined texture details for each 3D Gaussian. The global, refined, and learnable intrinsic embeddings are fused through a Hierarchical Residual Fusion Network (HRFN) to predict per-Gaussian color. For large-scale scenes, Gaussians are organized into overlapping blocks, with camera assignments based on per-point visibility to maximize supervision on the individual Gaussians. An alternating block-wise and full-scene training schedule ensures scalable and consistent reconstruction, supported by a globally unified appearance decoupling module and a shared Gaussian decoder. Crucially, to scale without sacrificing quality, SMW-GS employs a Point-Statistics-Guided partitioning strategy that dynamically selects camera views for each partition based on per-point visibility statistics. This is paired with a Rotational Block Training scheme that helps maintain uniform optimization of the decoupled module throughout the entire scene, thereby preventing overfitting to local regions.
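The PSG idea of assigning cameras "where they can maximally inform individual Gaussians" can be caricatured as a counting scheme: each camera goes to the block whose points it observes most. A deliberately simplified, hypothetical sketch; the names and data structures are ours, and the paper's full per-point statistics and overlap handling are omitted:

```python
def assign_cameras(visibility, point_block):
    """Hypothetical visibility-driven camera assignment: each camera
    is assigned to the block containing the majority of the Gaussian
    points it sees (ties broken arbitrarily).

    visibility:  dict camera_id -> set of visible point ids
    point_block: dict point_id  -> block id
    returns:     dict block id  -> list of assigned camera ids
    """
    assignments = {}
    for cam, visible in visibility.items():
        counts = {}
        for p in visible:
            block = point_block[p]
            counts[block] = counts.get(block, 0) + 1
        best = max(counts, key=counts.get)
        assignments.setdefault(best, []).append(cam)
    return assignments

point_block = {0: "A", 1: "A", 2: "B"}
visibility = {"cam1": {0, 1}, "cam2": {2}, "cam3": {0, 1, 2}}
assignments = assign_cameras(visibility, point_block)  # cam3 -> block A
```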
Together, these components guarantee effective supervision for every Gaussian, from isolated objects to expansive urban landscapes, resulting in superior local detail recovery and robust city-scale reconstruction, as corroborated by our extensive experiments.

# A. Structured and Explicit Appearance Decoupling

In unconstrained photo collections, appearance variations stem from factors such as diverse lighting conditions during capture and post-processing operations like gamma correction, exposure adjustment, and tone mapping. Additionally, scene points exhibit directional lighting effects, including highlights and shadows, which dynamically alter their appearance, while intrinsic material properties remain constant. To systematically model these variations, we explicitly decouple the appearance into three distinct components:

• Global Appearance Feature $(f_{g} \in \mathbb{R}^{n_{g}})$: Encodes overall scene information, capturing coarse-scale lighting and tonal characteristics.

• Refined Appearance Feature $(f_{r} \in \mathbb{R}^{n_{r}})$: Captures detailed, position-specific elements, such as high-frequency textures, local highlights, and shadows.

• Intrinsic Feature $(f_{v} \in \mathbb{R}^{n_{v}})$: Represents the inherent and static properties of scene points.

For a point $v$ located at $\mathbf{x}_{i}$ in 3D space, its appearance is characterized by these three components. The global $(f_{g})$ and refined $(f_{r})$ features are extracted from a reference image, while the intrinsic feature $(f_{v})$ is optimized during training. This structured decoupling balances the global context, local details, and material invariance, providing a comprehensive representation of scene appearance. To implement this, we adopt a voxel-based organization of Gaussians following the Scaffold-GS framework [6].
Each anchor point $v$ , located at the center of a voxel, is associated with a scaling factor $l _ { v } ~ \in { \mathbb { R } } ^ { 3 }$ and $k$ learnable offsets $O _ { v } \in \mathbb { R } ^ { k \times 3 }$ , which collectively define the $k$ Gaussians within the voxel. The global appearance feature $( f _ { g } )$ is consistently assigned to all anchors within the scene and is derived from a reference image by applying global average pooling to the UNet encoder’s feature map, followed by a trainable MLP $( M L P ^ { G } )$ to produce $f _ { g }$ . This approach ensures consistent modeling of global appearance variations while maintaining flexibility for local and intrinsic attributes. Fig. 2. Sampling comparison between GS-W and our method. The proposed one integrates both narrow and broad conical frustums with wavelet-based sampling, allowing for a more comprehensive capture of features and resulting in enhanced accuracy. # B. Micro-macro Wavelet-based Sampling To improve the accuracy and richness of scene representation, we propose a novel technique called Micro-Macro Wavelet-based Sampling (MWS). This technique enhances the appearance features of each 3D Gaussian by capturing more detailed and diverse information. It effectively accommodates real-world scene variations by incorporating both fine-grained and broad-scale features. The MWS strategy comprises two main components: Micro-Macro Projection (MP): Traditional MipMap techniques [62] leverage jitter sampling [63] to introduce random perturbations, enhancing the depiction of texture details. Extending this concept, we propose an adaptive jitter projection method for micro-projections. Instead of directly projecting each 3D point along a ray onto a fixed location on the 2D feature map, our method projects points within a narrow conical frustum. This enables each Gaussian along a ray to capture distinct yet correlated features, reflecting the unique characteristics of each 3D point. Fig. 
2a contrasts our method with GS-W. GS-W directly projects points onto a projection feature map, so points along the same ray receive identical local appearance features; it mitigates this limitation only partially by adaptively sampling across multiple feature maps. Moreover, GS-W lacks explicit control over specific local regions, limiting its ability to fully exploit informative features. Our micro-projection method addresses this limitation by employing a narrow conical frustum with a cross-sectional radius parameterized by $\dot { r }$ . To refine the sampling, we introduce $k _ { s }$ learnable coordinate offsets $\{ n c _ { i } \} _ { k _ { s } }$ for each 3D point, enabling adaptive sampling within the frustum. The features obtained from these $k _ { s }$ samples are averaged to produce the refined feature $f _ { r } ^ { n }$ . This design ensures diverse and consistent sampling, capturing rich fine-grained details while preserving texture coherence. In addition to capturing fine details, MWS also targets broader, long-range characteristics, such as regional highlights. To achieve this, we employ a broader conical frustum, as shown on the right in Fig. 2b. Guided by the principle that a point’s projection size is inversely proportional to its distance from the camera, we parameterize the projection radius of the broad frustum as $\dot { R } = \dot { R } _ { m a x } / \| \mathbf { x } _ { i } - \mathbf { x } _ { c } \| _ { 2 }$ , where $\mathbf { x } _ { c }$ represents the camera center. We also introduce $k _ { s }$ learnable scaling factors $\{ b c _ { i } \} _ { k _ { s } }$ for each 3D point to enable adaptive sampling within this frustum. Sampling locations are obtained as $b c _ { i } \odot \hat { p } _ { i }$ , where $\hat { p } _ { i }$ denotes the projection center of the frustum. The features derived from these $k _ { s }$ samples are averaged to produce the broad appearance feature $f _ { r } ^ { b }$ .
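A minimal numpy sketch of the micro-macro sampling just described, under stated assumptions: features are looked up bilinearly on a single feature map, and offsets are clipped to the frustum radius per axis (a square clip, simplifying the circular cross-section). The helper names `bilinear`, `nc`, and `bc` are illustrative, not the authors' API.

```python
import numpy as np

def bilinear(fmap, xy):
    """Bilinearly sample a (C, H, W) feature map at continuous (x, y)."""
    C, H, W = fmap.shape
    x = np.clip(xy[0], 0, W - 1); y = np.clip(xy[1], 0, H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * fmap[:, y0, x0] + wx * (1 - wy) * fmap[:, y0, x1]
            + (1 - wx) * wy * fmap[:, y1, x0] + wx * wy * fmap[:, y1, x1])

def micro_macro_features(fmap, p_hat, nc, bc, r, R_max, dist):
    """Narrow (micro) and broad (macro) frustum sampling for one 3D point.

    p_hat: (2,)      projection center on the feature map.
    nc:    (k_s, 2)  learnable coordinate offsets, kept within radius r.
    bc:    (k_s, 2)  learnable scaling factors, applied as bc_i * p_hat.
    """
    # Micro: jittered samples inside the narrow frustum of radius r_dot.
    off = np.clip(nc, -r, r)
    f_n = np.mean([bilinear(fmap, p_hat + o) for o in off], axis=0)
    # Macro: broad frustum whose radius shrinks with camera distance,
    # R_dot = R_max / ||x_i - x_c||_2.
    R = R_max / dist
    sc = np.clip(bc * p_hat, p_hat - R, p_hat + R)
    f_b = np.mean([bilinear(fmap, s) for s in sc], axis=0)
    return f_n, f_b

fmap = np.ones((4, 8, 8))
f_n, f_b = micro_macro_features(fmap, np.array([3.5, 3.5]),
                                nc=np.zeros((2, 2)), bc=np.ones((2, 2)),
                                r=1.0, R_max=4.0, dist=2.0)
print(f_n.shape)  # (4,)
```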
By combining the refined $f _ { r } ^ { n }$ and broad $f _ { r } ^ { b }$ features, MWS achieves a balanced representation that captures both intricate details and long-range scene characteristics, significantly enhancing the fidelity and versatility of the scene modeling process. Wavelet-based Sampling (WS): In unconstrained image collections, significant variation in camera parameters poses challenges in handling large-scale differences across viewpoints with fixed-resolution sampling. To address this, we propose a Wavelet-based Sampling (WS) technique that captures high-frequency and multi-scale information. By leveraging the Discrete Wavelet Transform (DWT), we decompose the feature map $\mathbf { F } ^ { M A P }$ , generated by a UNet shared with the global feature extractor, into a series of feature maps. The DWT splits $\mathbf { F } ^ { M A P }$ into four frequency bands while simultaneously reducing its resolution, effectively preserving spatial information and enabling efficient multi-scale sampling that captures diverse frequency information. The process begins with dividing the feature map $\mathbf { F } ^ { M A P }$ into $2 M + 2$ smaller feature maps $\{ \mathbf { F } ^ { 1 } , \cdot \cdot \cdot , \mathbf { F } ^ { 2 M + 2 } \}$ , where each $\mathbf { F } ^ { i } \in \mathbb { R } ^ { \frac { n _ { r } } { 2 M + 2 } \times H ^ { \mathbf { F } } \times W ^ { \mathbf { F } } }$ . Here, $M$ is the maximum number of downsampling operations (or the highest DWT level), serving as a critical hyperparameter. The dimensions $H ^ { \mathbf { F } }$ and $W ^ { \mathbf { F } }$ represent the height and width of each smaller feature map, respectively. During the $m$ -th downsampling stage, an $m$ -level DWT is applied to the $( 2 m + 1 )$ -th and $( 2 m + 2 )$ -th feature maps, producing $4 ^ { m }$ sub-feature maps, as shown in Eq. (4).
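Since an $m$-level DWT here yields $4^m$ sub-feature maps, every band is recursively re-decomposed, i.e., a full wavelet-packet-style split rather than the usual $3m+1$-band pyramid. A minimal sketch with an orthonormal Haar kernel (the kernel choice is an assumption; the paper only specifies the DWT):

```python
import numpy as np

def haar_split(x):
    """One-level 2D Haar transform of (C, H, W) -> four half-resolution bands."""
    a = x[:, 0::2, 0::2]; b = x[:, 0::2, 1::2]
    c = x[:, 1::2, 0::2]; d = x[:, 1::2, 1::2]
    LL = (a + b + c + d) / 2  # low-low (coarse approximation)
    LH = (a + b - c - d) / 2  # horizontal detail
    HL = (a - b + c - d) / 2  # vertical detail
    HH = (a - b - c + d) / 2  # diagonal detail
    return [LL, LH, HL, HH]

def wavelet_packet(x, m):
    """m-level full decomposition: every band is split again, giving 4**m maps."""
    bands = [x]
    for _ in range(m):
        bands = [sub for band in bands for sub in haar_split(band)]
    return bands

x = np.random.rand(8, 16, 16)
subs = wavelet_packet(x, m=1)
print(len(subs), subs[0].shape)  # 4 (8, 8, 8)
```

With the /2 normalization the split is orthonormal, so the total feature energy is preserved across every level, which makes the later learnable per-band weighting well conditioned.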
These sub-feature maps are subsequently sampled via bilinear interpolation within narrow and broad conical frustums using the Micro-Macro Projection technique. This yields the feature sets $\{ f _ { r , m , j } ^ { n } \} _ { 4 ^ { m } }$ and $\{ f _ { r , m , j } ^ { b } \} _ { 4 ^ { m } }$ for fine-grained and broad-scale features, respectively. The refined features $f _ { r , m } ^ { n }$ and $f _ { r , m } ^ { b }$ for each downsampling level are calculated by applying learnable weight parameters to the sampled features: $$ f _ { r , m } ^ { n } = \sum _ { j = 1 } ^ { 4 ^ { m } } \omega _ { m , j } ^ { n } \cdot f _ { r , m , j } ^ { n } , \ f _ { r , m } ^ { b } = \sum _ { j = 1 } ^ { 4 ^ { m } } \omega _ { m , j } ^ { b } \cdot f _ { r , m , j } ^ { b } , $$ where $\omega _ { m , j } ^ { n }$ and $\omega _ { m , j } ^ { b }$ denote learnable weights for the $( 2 m + 1 )$ -th and $( 2 m + 2 )$ -th feature maps, respectively. Finally, the refined appearance features for each anchor are obtained by concatenating features across all scales: $$ f _ { r } = f _ { r , 0 } ^ { n } \oplus f _ { r , 0 } ^ { b } \oplus \cdot \cdot \cdot \oplus f _ { r , M } ^ { n } \oplus f _ { r , M } ^ { b } . $$ By combining Micro-Macro Projection with Wavelet-based Sampling, our method captures multi-scale and high-frequency features, supplementing scene representation with detailed appearance variations and enabling a comprehensive understanding of scene structures across multiple scales. # C. Hierarchical Residual Fusion Network To generate the final $k$ Gaussian colors corresponding to each anchor, it is necessary to effectively combine the global appearance $( f _ { g } )$ , refined appearance $( f _ { r } )$ , intrinsic features $( f _ { v } )$ , and spatial information such as position and view direction.
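The two equations above, per-level weighted sums followed by concatenation across levels, can be sketched as follows. Shapes follow $M = 1$ and $n_r = 32$ from the implementation details, so each sub-map contributes $n_r / (2M+2) = 8$ channels; the helper names are illustrative.

```python
import numpy as np

def fuse_scale(f_samples, w):
    """f_{r,m} = sum_j w_{m,j} * f_{r,m,j} over the 4**m sampled sub-maps."""
    return np.sum(w[:, None] * f_samples, axis=0)

def refined_feature(narrow, broad, w_n, w_b):
    """Concatenate fused narrow/broad features over levels m = 0..M:
    f_r = f_{r,0}^n (+) f_{r,0}^b (+) ... (+) f_{r,M}^n (+) f_{r,M}^b."""
    parts = []
    for m in range(len(narrow)):
        parts.append(fuse_scale(narrow[m], w_n[m]))
        parts.append(fuse_scale(broad[m], w_b[m]))
    return np.concatenate(parts)

# M = 1: level 0 has 4**0 = 1 sample, level 1 has 4**1 = 4 samples.
narrow = [np.ones((1, 8)), np.ones((4, 8))]
broad = [np.ones((1, 8)), np.ones((4, 8))]
w_n = [np.array([1.0]), np.full(4, 0.25)]
w_b = [np.array([1.0]), np.full(4, 0.25)]
f_r = refined_feature(narrow, broad, w_n, w_b)
print(f_r.shape)  # (32,)
```

The concatenated dimension, $(M+1) \times 2 \times n_r/(2M+2)$, recovers exactly $n_r$, consistent with the refined feature dimension used elsewhere.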
These features exist in different high-dimensional spaces, and simple concatenation is insufficient to achieve effective integration due to the complexity of their interactions. To address this challenge, we propose a Hierarchical Residual Fusion Network (HRFN), which incorporates a hierarchical design with residual connections to enhance the feature fusion process. The HRFN comprises four Multi-Layer Perceptrons (MLPs), denoted as $\boldsymbol { \mathcal { M } } ^ { H } = \{ \boldsymbol { \mathcal { M } } _ { 1 } ^ { H } , \boldsymbol { \mathcal { M } } _ { 2 } ^ { H } , \boldsymbol { \mathcal { M } } _ { 3 } ^ { H } , \boldsymbol { \mathcal { M } } _ { 4 } ^ { H } \}$ . The inputs to HRFN include the anchor center position $\mathbf { x } _ { i }$ , encoded using a positional encoding function $\gamma ( \cdot )$ ; the global appearance feature $f _ { g }$ , which encapsulates global information about the scene; the refined appearance feature $f _ { r }$ , which captures multi-scale and high-frequency details; the intrinsic feature $f _ { v }$ , optimized during training to represent specific anchor-level properties; and the direction vector $\vec { \mathbf { d } } _ { i c } = \frac { \mathbf { x } _ { i } - \mathbf { x } _ { c } } { \| \mathbf { x } _ { i } - \mathbf { x } _ { c } \| _ { 2 } }$ representing the view direction relative to the anchor. These inputs are processed hierarchically to infer the output colors $\left\{ \hat { c } _ { k } \right\}$ for the $k$ Gaussians.
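Two of these inputs can be made concrete in a short sketch. The number of frequencies and the $\sin/\cos$ convention for $\gamma(\cdot)$ are assumptions, since the paper only names it as a positional encoding; the unit view direction follows the definition above.

```python
import numpy as np

def gamma(x, L=4):
    """Positional encoding gamma(x): [sin(2^l * pi * x), cos(2^l * pi * x)]
    for l = 0..L-1, applied elementwise to a 3D position.
    (Frequency convention is an assumption, not specified in the paper.)"""
    out = []
    for l in range(L):
        out.append(np.sin(2.0 ** l * np.pi * x))
        out.append(np.cos(2.0 ** l * np.pi * x))
    return np.concatenate(out)

def view_dir(x_i, x_c):
    """Unit direction d_ic = (x_i - x_c) / ||x_i - x_c||_2 from the camera
    center x_c to the anchor x_i."""
    d = x_i - x_c
    return d / np.linalg.norm(d)

g = gamma(np.zeros(3), L=2)
d = view_dir(np.array([0.0, 0.0, 2.0]), np.zeros(3))
print(g.shape)  # (12,)
```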
The hierarchical fusion process is formulated as follows: $$ \begin{array} { r l } & { E m b = \boldsymbol { \mathcal { M } } _ { 1 } ^ { H } ( \gamma ( \mathbf { x } _ { i } ) \oplus f _ { v } \oplus f _ { r } \oplus f _ { g } ) \oplus \omega _ { r } f _ { r } , } \\ & { \{ \boldsymbol { \hat { c } } _ { k } \} = \boldsymbol { \mathcal { M } } _ { 4 } ^ { H } \left( \boldsymbol { \mathcal { M } } _ { 3 } ^ { H } \left( \boldsymbol { \mathcal { M } } _ { 2 } ^ { H } \left( E m b \right) \oplus \omega _ { v } f _ { v } \right) \oplus \vec { \mathbf { d } } _ { i c } \right) , } \end{array} $$ where $\oplus$ denotes concatenation, and $\omega _ { r }$ and $\omega _ { v }$ are learnable adaptive weights that dynamically adjust the contributions of refined and intrinsic features, respectively. First, the positional encoding $\gamma ( \mathbf { x } _ { i } )$ is fused with the appearance and intrinsic features through $\mathcal { M } _ { 1 } ^ { H }$ , producing an embedding $E m b$ that integrates global and local information. A residual term $\omega _ { r } f _ { r }$ is added to further enhance the representation of refined appearance features. Subsequently, the hierarchical refinement stages $( \mathcal { M } _ { 2 } ^ { H } , \mathcal { M } _ { 3 } ^ { H } , \mathcal { M } _ { 4 } ^ { H } )$ refine $E m b$ by progressively integrating the intrinsic feature $f _ { v }$ and view direction $\vec { \mathbf { d } } _ { i c }$ , capturing complex interactions and dependencies among the inputs. The hierarchical structure of HRFN facilitates a seamless integration of global, refined, and intrinsic features, leveraging their complementarity to capture rich information. Residual connections enhance gradient flow and convergence by preserving original features. This design enables effective modeling of complex feature interactions, leading to accurate Gaussian color prediction. # D. 
Large-scale Scene Promotion
Existing in-the-wild methods perform effectively on small, object-centric scenes but struggle in large-scale settings due to their reliance on strong per-Gaussian supervision for Gaussian-level appearance disentanglement. In contrast, large-scale approaches typically adopt block-level supervision, focusing on camera-block relations. This methodological gap leads to misalignment when combining the two, leaving subsets of 3D Gaussians with insufficient guidance. As a result, appearance disentanglement suffers, hindering both Gaussian parameter optimization and U-Net training, and limiting the effectiveness of applying existing divide-and-conquer strategies to large-scale scenes. Fig. 3. Schematic diagram of the Point-Statistics-Guided (PSG) Camera Partitioning in Stage 1. The chart on the right illustrates the variation in the cumulative supervision count for each Gaussian within a block, comparing scenarios with and without the implementation of Stage 1. To overcome these limitations, we focus on enhancing Gaussian-level supervision and adapting our scene representation framework for large-scale environments. We begin by segmenting the scene into blocks using the COLMAP point cloud and then employ a Point-Statistics-Guided camera partitioning strategy. This is further augmented with a block-level camera sensitivity measure to include additional correlated views, ensuring accurate supervision for each Gaussian. Subsequently, our Rotational Block Training strategy optimizes 3D Gaussians across all blocks for consistent parameter tuning. The appearance disentanglement U-Net leverages the complete image dataset to model appearances with global consistency. By effectively bridging the gap between Gaussian- and block-level supervision, our framework achieves robust, scalable performance, delivering real-time, high-quality rendering in expansive environments. Initial Division.
We partition the scene using the COLMAP point cloud by computing the 0.05 and 0.95 quantiles along the ground plane’s horizontal axes to remove outliers. The scene is then divided into an $\mathbf { M } \times \mathbf { N }$ grid of blocks. To ensure overlap, each block’s boundary is expanded by $5 \%$ of the corresponding edge length, while outermost blocks are extended infinitely to fully cover all 3D points and camera poses. Cameras and 3D points are initially assigned to blocks based on whether their centers fall strictly within a block’s boundaries, serving as the foundation for subsequent partitioning and training. Point-Statistics-Guided Camera Partitioning. To address the insufficient supervision of boundary points in conventional block-based methods, we propose a compound camera partitioning strategy that ensures both minimum supervision guarantees and content-aware camera association. Stage 1: Visibility-Aware Camera Allocation (Fig. 3). We first compute an average visible camera count $\bar { c }$ by projecting each 3D point $p _ { i }$ onto all training views and counting its valid observations. Using a control parameter $\kappa \in ( 0 , 1 )$ , we establish a supervision threshold $\tau = \kappa \bar { c }$ . Points with total visible cameras $| \mathcal { V } ( p _ { i } ) | < \tau$ trigger our compensation mechanism: all cameras observing $p _ { i }$ are directly assigned to its containing block. For remaining cameras, we employ an iterative greedy assignment: 1) For each unassigned camera $c _ { j }$ , calculate its potential coverage gain $G _ { j } = \sum _ { p _ { k } \in B _ { m } } \mathbb { I } \big ( N _ { v i s } ( p _ { k } ) < \tau \wedge c _ { j } \in \mathcal { V } ( p _ { k } ) \big )$ . 2) Select camera $c _ { j ^ { * } }$ with maximal $G _ { j }$ and assign to block $B _ { m }$ .
3) Update $N _ { v i s } ( p _ { k } )$ for all $p _ { k } \in B _ { m }$ observed by $c _ { j ^ { * } }$ . 4) Repeat until $\forall p _ { k } \in B _ { m } , N _ { v i s } ( p _ { k } ) \geq \tau$ or no further coverage gain can be achieved. Here, $B _ { m }$ denotes the $m$ -th block and $\mathcal { V } ( \boldsymbol { p } _ { k } )$ the visible camera set of $p _ { k }$ . We present a schematic illustration of Stage 1 in Fig. 3. The subfigure on the left demonstrates the process of camera selection, while the one on the right depicts the variation in the supervision count for each Gaussian with and without the proposed strategy in Stage 1, highlighting the substantial improvement in supervision effectiveness achieved. Stage 2: Content-Relevant Camera Augmentation. Inspired by [53], [55], we quantify block-camera relevance through rendering analysis: $$ \Delta \mathrm { S S I M } _ { j } ^ { m } = \mathrm { S S I M } ( \hat { I } _ { j } , \hat { I } _ { j } ^ { \backslash m } ) $$ where $\hat { I } _ { j }$ is the full rendering, and $\hat { I } _ { j } ^ { \backslash m }$ denotes the rendering excluding Gaussian points in block $\boldsymbol { B _ { m } }$ . Cameras with $\Delta \mathrm { S S I M } _ { j } ^ { m } > \eta$ (threshold $\eta$ ) are identified as content-critical and added to $\boldsymbol { B _ { m } }$ ’s camera set. This effectively captures cameras whose viewpoints significantly affect $\boldsymbol { B _ { m } }$ ’s content. The combined strategy ensures provable supervision lower bounds through Stage 1’s $\tau$ -enforced assignment, as well as contextual awareness via Stage 2’s rendering-sensitive augmentation, particularly crucial for maintaining consistency in boundary regions where multiple blocks interact. Rotational Block Training. When the number of GPUs matches the total number of blocks $( \mathbf { M } \times \mathbf { N } )$ , all blocks can be trained in parallel using the full image set.
Otherwise, we adopt a rotational block training strategy. Blocks are rotated across available GPUs every $N _ { \mathrm { i t e r } }$ iterations, ensuring iterative exposure of the shared U-Net to the entire dataset. This rotational process maintains optimization quality and promotes generalization across all blocks. # E. Training Objective Our optimization framework combines photometric supervision with geometric regularization to ensure reconstruction fidelity and physical plausibility. The composite loss function consists of: $$ \mathcal { L } _ { p h o t o } = \lambda _ { S S I M } \mathcal { L } _ { S S I M } ( I _ { r } , I _ { g t } ) + \lambda _ { 1 } \mathcal { L } _ { 1 } ( I _ { r } , I _ { g t } ) $$ for measuring photometric discrepancies between rendered image $I _ { r }$ and ground truth $I _ { g t }$ . The regularization part contains two components: the projection regularization $\mathcal { L } _ { p r o j } = \sum \operatorname* { m a x } ( \| d _ { n } \| _ { 2 } - \dot { r } , 0 ) + \sum \operatorname* { m a x } ( \| d _ { b } \| _ { 2 } - \dot { R } , 0 )$ constrains projected points within valid frustum regions (Sec. IV-B), where $d _ { n }$ and $d _ { b }$ denote distances to frustum centers in narrow/broad regions, respectively, while $\dot { r }$ and $\dot { R }$ represent predefined distance thresholds for projection constraints; and the volume regularization $\mathcal { L } _ { v o l } = \sum _ { i } \prod ( s _ { i } )$ prevents Gaussian overscaling through scale vector product minimization [6], [64]. Here $\prod ( \cdot )$ computes the product of Gaussian scale components $s _ { i }$ . The complete objective integrates these terms: $$ \mathcal { L } = \mathcal { L } _ { p h o t o } + \lambda _ { v o l } \mathcal { L } _ { v o l } + \lambda _ { p r o j } \mathcal { L } _ { p r o j } $$ Fig. 4. An example from our newly rendered MatrixCity dataset showcasing eight distinct appearance conditions. # V.
EXPERIMENT To rigorously evaluate the proposed SMW-GS method, we conduct extensive experiments across diverse scenarios. Our evaluation protocol consists of three key components: (1) dataset description, (2) implementation details and evaluation metrics, and (3) comprehensive results analysis with detailed discussions. # A. Datasets and Metrics Our experimental evaluation encompasses a diverse collection of datasets, spanning both real-world and synthetic environments. For a systematic analysis, we categorize the experiments into three distinct parts: classical unconstrained data reconstruction, large-scale unconstrained scene reconstruction, and synthetic large-scale scene reconstruction under complex appearance conditions. We evaluate all results using PSNR, SSIM [65], and LPIPS [66]. 1) Classical Unconstrained Data Evaluation: We test on three scenes from the Phototourism dataset [46]: Brandenburg Gate, Sacre Coeur, and Trevi Fountain. These datasets contain internet photo collections commonly used for 3D reconstruction. Following prior works [13], [16], all images are downsampled by a factor of 2 for both training and evaluation. Comparisons are conducted against Ha-NeRF [13], CR-NeRF [11], WildGaussians [15], and GS-W [16]. 2) Real Large-Scale Unconstrained Data Evaluation: We evaluate the method on four large-scale scenes: Rubble and Building from the Mill-19 dataset [49], as well as Sci-Art and Residence from the UrbanScene3D dataset [67]. Consistent with prior works [53], [55], all images are downsampled by a factor of 4 during both training and evaluation. The scenes respectively comprise 1,657, 1,920, 2,998, and 2,561 training images, and 21, 20, 21, and 21 test images. We compare the proposed method against state-of-the-art approaches across two categories: in-the-wild methods, including GS-W [16] and WildGaussians [15], and large-scale methods, including VastGaussian [52], CityGaussian [53], and Momentum-GS [55]. 
3) Synthetic Large-scale Data Evaluation: While the real large-scale unconstrained datasets provide valuable insights, their appearance variations are limited compared to the classical unconstrained data. To more thoroughly assess our method’s effectiveness in reconstructing large-scale scenes under severe appearance variations as well as fully consistent conditions, we use the synthetic Aerial Data of MatrixCity [68], built on Unreal Engine 5. We extend MatrixCity by rendering images under seven additional appearance conditions using Unreal Engine 5 (Fig. 4), resulting in eight diverse visual domains. TABLE I QUANTITATIVE RESULTS ON THREE CLASSICAL UNCONSTRAINED DATASETS. BOLD AND UNDERLINED VALUES CORRESPOND TO THE BEST AND THE SECOND-BEST VALUE, RESPECTIVELY. OUR METHOD OUTPERFORMS THE PREVIOUS METHODS ACROSS ALL DATASETS ON PSNR, SSIM, AND LPIPS. Fig. 5. Qualitative comparison on three classical unconstrained datasets. Red and blue crops emphasize that SMW-GS can recover finer details. The training set is sampled across all conditions, while the test set follows the original benchmark. We evaluate performance on both original blocks ($Block\_A$ and $Block\_E$) and newly generated blocks ($Block\_A*$ and $Block\_E*$), containing 1,063 and 837 training images, and 163 and 124 test images, respectively. All images are used at full resolution $( 1 , 9 2 0 \times 1 , 0 8 0 )$ without downsampling. Metrics and baselines align with those in the Real Large-Scale Unconstrained Data Evaluation, ensuring fair and consistent comparison across varying appearance complexities. TABLE II RENDERING SPEED COMPARISON ON THREE DATASETS WITH A RESOLUTION OF $8 0 0 \times 8 0 0$ USING A SINGLE RTX 3090 GPU, MEASURING PERFORMANCE IN FPS. BOLD AND UNDERLINED VALUES CORRESPOND TO THE BEST AND THE SECOND-BEST VALUE, RESPECTIVELY. # B.
Implementation Details This section provides a detailed overview of the hyperparameter configurations and network settings used in SMW-GS, along with information about the baselines used for comparison. We develop our method based on the original implementation of Scaffold-GS. In our setup: Gaussians per voxel $k = 1 0$ , frustum samples $k _ { s } = 1$ , wavelet level $M = 1$ , intrinsic feature dim $n _ { v } = 4 8$ , refined feature dim $n _ { r } = 3 2$ , global feature dim $n _ { g } = 1 6$ . Learning rates for $n c _ { i }$ and $b c _ { i }$ decay from $1 \times 1 0 ^ { - 4 }$ to $1 \times 1 0 ^ { - 5 }$ . Optimization uses Adam with $\lambda _ { \mathrm { S S I M } } = 0 . 2$ , $\lambda _ { 1 } = 0 . 8$ , $\lambda _ { p r o j } = 0 . 0 1$ , and $\lambda _ { v o l } = 0 . 0 1$ . Other hyperparameters are set according to the guidelines of Scaffold-GS. TABLE III QUANTITATIVE RESULTS ON FOUR REAL LARGE-SCALE UNCONSTRAINED DATASETS. BOLD AND UNDERLINED VALUES CORRESPOND TO THE BEST AND THE SECOND-BEST VALUE, RESPECTIVELY. OUR METHOD OUTPERFORMS THE PREVIOUS METHODS ACROSS ALL DATASETS ON PSNR, SSIM, AND LPIPS. We use a ResNet-18 encoder (up to the layer before AdaptiveAvgPool2d), with frozen batch normalization. The global feature MLP $M L P ^ { G }$ has one hidden layer of size $2 n _ { g }$ , and it ultimately outputs the global appearance feature. The UNet decoder has four upsampling blocks with residual connections, followed by a final convolutional layer projecting to $n _ { r }$ . These modules are trained with a learning rate decaying from $1 \times 1 0 ^ { - 4 }$ to $1 \times 1 0 ^ { - 6 }$ . Since each Gaussian is trained per image, only ReLU activations are used (no batch norm) in the decoder.
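For reference, the hyperparameters listed above can be collected into a single configuration sketch. This is a convenience summary, not the authors' configuration object; the key names are illustrative.

```python
# Hyperparameter summary from the implementation details above.
CONFIG = {
    "gaussians_per_voxel_k": 10,
    "frustum_samples_k_s": 1,
    "wavelet_level_M": 1,
    "dims": {"n_v": 48, "n_r": 32, "n_g": 16},
    "lr_offsets": {"start": 1e-4, "end": 1e-5},   # nc_i and bc_i
    "lr_unet_mlp": {"start": 1e-4, "end": 1e-6},  # encoder/decoder + MLP^G
    "loss_weights": {"ssim": 0.2, "l1": 0.8, "proj": 0.01, "vol": 0.01},
}
print(CONFIG["dims"])
```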
The proposed Hierarchical Residual Fusion Network (HRFN) consists of four MLPs, denoted as $\mathcal { M } ^ { H } = \{ \mathcal { M } _ { 1 } ^ { H } , \mathcal { M } _ { 2 } ^ { H } , \mathcal { M } _ { 3 } ^ { H } , \mathcal { M } _ { 4 } ^ { H } \}$ , with the number of hidden units for each set as follows: $\{ 1 2 8 , 9 6 \}$ , $\{ 9 6 , 6 4 \}$ , $\{ 4 8 , 4 8 \}$ , and $\{ 4 8 \}$ . Learning rate decays from $5 \times 1 0 ^ { - 4 }$ to $5 \times 1 0 ^ { - 5 }$ . For a fair comparison, VastGaussian incorporates the Decoupled Appearance Modeling module. VastGaussian, Momentum-GS, and WildGaussians optimize the appearance embeddings on test images while keeping other parameters frozen. CityGaussian uses the official implementation. # C. Comparison Results 1) Classical Unconstrained Data Evaluation: The quantitative results for the three classical unconstrained scenes, presented in Tab. I, highlight the effectiveness of the proposed SMW-GS method. Ha-NeRF and CR-NeRF show improvements over earlier baselines but remain limited in capturing local contextual cues essential for modeling diverse scene points. Similarly, WildGaussians struggle with appearance modeling due to their reliance on global appearance embeddings. GS-W mitigates these limitations through adaptive sampling of local features, enabling a more precise representation of fine-grained details. This targeted approach is reflected in its consistently superior performance across evaluation metrics. The proposed SMW-GS method advances further by seamlessly integrating long-range contextual information with detailed local features within Gaussian representations. By enhancing multi-scale information fusion and effectively capturing high-frequency details, SMW-GS achieves notable improvements.
TABLE IV STORAGE USAGE (IN GB) ACROSS FOUR REAL LARGE-SCALE SCENES AND RENDERING SPEED (IN FPS) FOR FOUR DATASETS AT A RESOLUTION OF $1 , 9 2 0 \times 1 , 0 8 0$ , MEASURED ON A SINGLE RTX 3090 GPU. These advancements are evident in significant gains in PSNR, surpassing GS-W by 1.41 dB, 1.40 dB, and 1.16 dB across the three evaluated scenes, underscoring its robust performance in handling complex unconstrained scenarios. The qualitative results in Fig. 5 vividly illustrate the advantages of our approach. For instance, our method captures finer details and more accurate colors in the reliefs of the Trevi Fountain and the bronze statues at Sacre Coeur, surpassing the capabilities of existing techniques. While current methods often struggle with accurately representing intricate scene details and complex textures, our approach excels by leveraging micro-macro wavelet-based sampling to enhance feature extraction. This technique effectively integrates frequency-domain and multi-scale information, while the hierarchical fusion of structured features facilitates the precise recovery of appearance details and clear structural representation. To assess the training efficiency and rendering performance during inference, we conducted experiments on three datasets with an image resolution of $8 0 0 \times 8 0 0$ , using a single RTX 3090 GPU to compute the average rendering time per image. The overall inference time includes the feature extraction time for reference images in Ha-NeRF, CR-NeRF, GS-W, and our method. As summarized in Tab. II, our approach not only ensures fine-grained modeling of image appearance but also demonstrates excellent rendering speed, being nearly 1.5 times faster than existing Gaussian-based methods. Furthermore, we evaluated the reconstruction time (in hours) required for training, with results showing that our method enables efficient training while maintaining high-quality outcomes.
2) Real Large-scale Unconstrained Data Evaluation: Table III presents the quantitative results on four real-world, large-scale, and unconstrained datasets, underscoring the scalability and effectiveness of the proposed SMW-GS method in handling expansive scenes. While WildGaussians and GS-W exhibit commendable performance in classic in-the-wild scenarios, they face challenges in generalizing to large-scale environments. Despite leveraging decoupled appearance modeling and achieving relatively high PSNR on UrbanScene3D, their overall reconstruction quality falls short, particularly in structure-aware metrics like SSIM. Among methods specifically designed for large-scale scenes, VastGaussian incorporates appearance embeddings and a CNN-based appearance adjustment module but struggles to establish accurate image-level mappings between rendered and real images. Similarly, Momentum-GS, which employs a simple appearance embedding, encounters difficulties with effective appearance disentanglement in expansive scenes. In contrast, our SMW-GS method achieves significant improvements by seamlessly integrating long-range contextual information with fine-grained local features within the Gaussian representation. Additionally, the scale-up strategy enhances the representation of complex scenes, enabling superior reconstruction performance. Across the four datasets, SMW-GS consistently outperforms existing in-the-wild and large-scale reconstruction methods, surpassing the previous best, Momentum-GS, by 2.16 dB, 1.22 dB, 3.41 dB, and 3.55 dB in PSNR, respectively. Fig. 6. Qualitative comparison on four real large-scale unconstrained datasets. Red and blue crops emphasize that SMW-GS can recover finer details. The qualitative results in Fig. 6 highlight the distinct advantages of our method in large-scale scene reconstruction.
By effectively integrating frequency-domain and multi-scale information and ensuring sufficient Gaussian-level supervision during large-scale training, our approach consistently outperforms existing methods. The reconstructed scenes demonstrate superior visual fidelity and highly accurate geometric details. For example, our method precisely captures subtle features, such as the shadow of a lamppost in the Building scene, and accurately reconstructs intricate structures like the staircases in the Residence scene, delivering significantly better performance compared to prior techniques. Additionally, Fig. 7 illustrates the robustness of our approach in handling block boundary transitions within the Residence scene. The top row on the right presents results from our method, whereas the bottom row shows those from Momentum-GS. While Momentum-GS suffers from blurred reconstructions at region edges, our method ensures sharpness and structural consistency. This improvement can be attributed to our Point-Statistics-Guided Camera Partitioning strategy, which provides enhanced supervision for Gaussians situated near block boundaries. Fig. 7. Qualitative results on the Residence scene at block boundaries. Our method significantly enhances the visual quality at block boundaries, preserving sharpness and structural consistency. The top row on the right displays results from our method, while the bottom row illustrates results from Momentum-GS. TABLE V QUANTITATIVE RESULTS ON SYNTHETIC LARGE-SCALE UNCONSTRAINED DATASETS. BOLD AND UNDERLINED VALUES CORRESPOND TO THE BEST AND THE SECOND-BEST VALUE, RESPECTIVELY. OUR METHOD OUTPERFORMS THE PREVIOUS METHODS ACROSS ALL DATASETS ON PSNR, SSIM, AND LPIPS. Furthermore, we report the corresponding storage usage in Tab. IV, where our method achieves superior reconstruction quality with reduced storage consumption.
This efficiency stems from compressing appearance information into the appearance disentanglement network, significantly reducing storage requirements while maintaining high reconstruction accuracy. 3) Synthetic Large-scale Data Evaluation: Tab. V presents the quantitative results for four synthetic large-scale scenes, including two with significant appearance variations ($Block\_A*$ and $Block\_E*$) and two with consistent appearance ($Block\_A$ and $Block\_E$). These findings highlight the scalability and effectiveness of our proposed SMW-GS method in handling large-scale environments. Our approach demonstrates robust performance in consistent-appearance scenarios while effectively managing significant appearance variation through advanced disentanglement techniques. As previously observed, methods like WildGaussians and GS-W struggle with generalization in large-scale settings. Although these methods achieve relatively higher PSNR on $Block\_A*$ and $Block\_E*$ due to their appearance modeling, their reconstruction quality suffers a significant decline, particularly in structure-aware metrics. Additionally, the performance gap between scenes with consistent appearance ($Block\_A$ and $Block\_E$) and those with appearance variation ($Block\_A*$ and $Block\_E*$) underscores their inability to scale effectively. Methods designed specifically for large-scale scenarios, such as VastGaussian and Momentum-GS, face even steeper declines in reconstruction quality when transitioning from consistent to varied appearance settings. These results emphasize the challenges of adapting in-the-wild methods to large-scale environments and the importance of scalable, robust solutions. In contrast, our SMW-GS method excels in both scalability and robustness.
It maintains high performance across consistent and varied appearance scenarios, with minimal performance drop between $Block\_A$ and $Block\_A*$, or $Block\_E$ and $Block\_E*$. This demonstrates the superior disentanglement and adaptability of our framework. Across all four synthetic datasets, SMW-GS surpasses Momentum-GS in PSNR by 6.51 dB, 6.36 dB, 0.60 dB, and 0.74 dB, respectively, significantly outperforming both in-the-wild and large-scale baseline methods. The qualitative results presented in Fig. 8 further highlight the superiority of our method in reconstructing large-scale scenes, both under significant appearance variations and in appearance-consistent environments. In challenging scenes such as $Block\_A*$ and $Block\_E*$, CityGaussian struggles to effectively disentangle appearance components in complex conditions, leading to pronounced visual artifacts. Similarly, Momentum-GS, relying solely on globally learnable appearance embeddings, fails to handle intricate appearance variations, resulting in noticeable color inconsistencies and artifacts in the rendered images. While GS-W demonstrates relatively consistent appearance matching with the ground truth, its limited scalability to large scenes and inability to reconstruct fine details are evident. In contrast, our method delivers superior fidelity and precision in both appearance and geometric detail across all four scenes, significantly surpassing existing methods. For instance, our approach captures the intricate staircase structures at the base of buildings in the $Block\_E$ scene and faithfully reproduces fine details such as windows and pavement in $Block\_A*$ and $Block\_E*$, achieving appearance nearly indistinguishable from the ground truth and demonstrating a clear advantage over prior techniques. Fig. 8. Qualitative comparison on synthetic large-scale unconstrained datasets.
Red and blue crops highlight that SMW-GS effectively disentangles complex appearance, resulting in more accurate color reproduction and finer detail restoration. Additionally, we provide a qualitative comparison of depth maps across different methods, with rendering viewpoints interpolated between training views exhibiting substantial appearance variations. WildGaussians and GS-W, which are not optimized for large-scale unconstrained scenes, produce depth maps plagued by significant blurring and noise, resulting in irregular and low-quality depth reconstructions. Momentum-GS and VastGaussian, despite attempting to address large-scale reconstruction, employ simplistic strategies that fail to manage illumination variation effectively, leading to severe artifacts and blurred results, as highlighted by the red insets. In contrast, our method employs explicit disentanglement of appearance into three structured feature components, enabling robust and consistent geometric reconstruction across diverse appearance conditions, achieving state-of-the-art geometric fidelity. # D. Component Analysis 1) Ablation Study on Appearance Decoupling Module: The ablation study results conducted on classical unconstrained datasets, real large-scale datasets, and synthetic large-scale datasets are summarized in Tab. VI. Key findings are as follows: i. Micro-macro Projection (MP): MP significantly enhances the diversity of refined appearance sampling, allowing Gaussians to more accurately capture appearance features and local contextual information. As illustrated in Fig. 10, relying solely on micro projection (without full MP) results in noticeable blurring of distant objects and geometric inaccuracies. For example, the cylindrical pillar on the right side of the ground is poorly reconstructed when MP is omitted. ii. Wavelet-based Sampling (WS): WS further refines attention to high-frequency and multi-scale information, resulting in superior reconstruction and rendering quality.
When WS is excluded, there is a marked loss of detail, evidenced by the blurring of the Trevi Fountain sculptures and a 0.69 dB decrease in PSNR. This effect becomes even more pronounced in large-scale scenes, which are more sensitive to multi-scale information. For instance, the SSIM on the rubble dataset decreases by 0.02 without WS. iii. Hierarchical Residual Feature Network (HRFN): HRFN provides a more effective integration of features across different levels, enabling a comprehensive fusion of diverse information compared to simple concatenation. This results in a 0.68 dB increase in PSNR on the Brandenburg Gate dataset and a 0.39 dB increase on the Block_A* dataset. Furthermore, HRFN improves the accuracy of color predictions, benefiting the reconstruction of fine-grained structures. For example, it enhances the quality of reconstructed street lamps and other intricate details. The combined impact of these components ensures robust and detailed scene reconstruction across diverse datasets, validating the effectiveness of the proposed appearance decoupling module. Fig. 9. Qualitative comparison of depth maps generated by different methods, displaying rendering viewpoints interpolated between training views with significant appearance differences. Red insets highlight artifacts and blurry geometric reconstructions in competing methods. TABLE VI ABLATION STUDIES ON DECOUPLING MODULE. BOLD AND UNDERLINED VALUES CORRESPOND TO THE BEST AND THE SECOND-BEST VALUE. Fig. 10. Ablation studies by visualization. The images demonstrate the effects of key components, including Micro-macro Projection (MP), Wavelet-based Sampling (WS), and the Hierarchical Residual Feature Network (HRFN), on reconstruction quality and detail retention. Fig. 11. Visualization of sampling analysis. (a) The attention maps generated by projecting sampling positions onto corresponding camera images.
(b) The refined features $f_r^n$ and $f_r^b$ highlight the ability to integrate high-resolution textures and low-texture regions across multiple frequency bands. 2) Analysis of Sampling: To analyze our sampling strategy, we project sampling positions onto camera images to form attention maps, where denser regions indicate higher attention. As shown in Fig. 11a, our method captures fine local details via narrow frustums and integrates long-range context through broader projections. We further visualize the features of interest by examining the refined narrow features $f_r^n$ and broad features $f_r^b$ across different resolutions, as shown in Fig. 11b. The $f_r^n$ features, focused on high-resolution details, adeptly capture local texture intricacies, while features processed at $0.5\times$ resolution through DWT attend to varying details across different frequency bands. Conversely, the $f_r^b$ features primarily target low-texture regions, such as water surfaces or specular highlights, which correspond to long-range features. The combination of $f_r^n$ and $f_r^b$ allows our MW sampling approach to effectively model dynamic appearances by capturing both detailed and distinct information on the 2D feature maps. TABLE VII QUANTITATIVE RESULTS FOR DIFFERENT VALUES OF $M$ ACROSS THREE DATASETS. $M$ DENOTES THE NUMBER OF WAVELET DOWNSAMPLING STEPS. $M = 1$ ACHIEVES THE BEST BALANCE BETWEEN EFFICIENCY AND EFFECTIVENESS. BOLD INDICATES THE BEST RESULT. TABLE VIII QUANTITATIVE RESULTS FOR DIFFERENT $k_s$ VALUES ACROSS THREE DATASETS. $k_s$ DENOTES THE NUMBER OF SAMPLES PER CONICAL FRUSTUM CROSS-SECTION. $k_s = 1$ BALANCES EFFICIENCY AND EFFECTIVENESS. BOLD INDICATES THE BEST RESULT. 3) Analysis of Wavelet Dimension: We study the impact of the wavelet decomposition dimension $M$, which controls how many times the feature map is downsampled.
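For intuition about what one decomposition level does, a single 2D Haar DWT step (i.e., $M = 1$) halves the spatial resolution while splitting the signal into a low-frequency approximation and three high-frequency detail subbands. The pure-Python sketch below illustrates the transform itself; it is not the paper's specific wavelet implementation.

```python
def haar_dwt2_level(img):
    """One level of a 2D Haar DWT: returns the half-resolution
    approximation (LL) and detail subbands (LH, HL, HH).
    Assumes an even-sized 2D list of floats."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4  # low-pass on both axes
            LH[i // 2][j // 2] = (a - b + c - d) / 4  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4  # diagonal detail
    return LL, LH, HL, HH
```

Applying the step once yields the $0.5\times$-resolution representation; applying it again to LL would correspond to $M = 2$.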
Experiments on three datasets (Tab. VII) show that $M = 1$, corresponding to sampling at both $1\times$ and $0.5\times$ resolution, yields the best performance. Further downsampling (e.g., $0.25\times$) offers no gain but increases computation. Therefore, we set $M = 1$. 4) Analysis of Sampling Number: We evaluate the number of samples $k_s$ per conical frustum cross-section on three scenes (Tab. VIII). Interestingly, $k_s = 1$ achieves the best performance, while higher values degrade results and increase cost. The reason could be that additional sampling causes the features of 3D points along the same ray to converge towards a common mean, reducing diversity. Therefore, we set $k_s = 1$. 5) Analysis of Large-scale Scene Partitioning: We conduct experiments on different scene configurations to analyze the contribution of each component in the large-scale scene promotion strategy. The quantitative results are summarized in Tab. IX, evaluating the following variants: “w/o partition” (no spatial partitioning; whole scene trained jointly); “{2, 2}”, “{3, 2}”, and “{4, 2}” (scene partitioned into $2\times2$, $3\times2$, and $4\times2$ blocks, respectively); “w/o PSG” ($3\times2$ partition, using CityGaussian’s partitioning instead of Point-Statistics-Guided (PSG) Camera Partitioning); “w/o PSG-S1” and “w/o PSG-S2” ($3\times2$ partition, disabling Stage 1 or Stage 2 of PSG, respectively); and “w/o RBT” ($3\times2$ partition, disabling Rotational Block Training (RBT)). Fig. 12. Visualization of the {3, 2} partition strategy applied to the rubble scene. Right: Normalized per-point increase in supervision counts when using the “full model” compared to the “w/o PSG-S1” variant, with values scaled to the range $[0, 1]$. i.
The “w/o partition” results demonstrate that SMW-GS maintains robust appearance disentanglement even under large-scale and inconsistent visual conditions, outperforming existing in-the-wild methods across all metrics. However, geometric quality suffers. For example, on Block_A*, SSIM drops by 0.081 and PSNR by 1.91 dB compared to the optimal “{3, 2}” partition. This highlights the importance of spatial partitioning for maintaining reconstruction quality and ensuring sufficient supervision. ii. The results from different partitioning strategies, i.e., “w/o partition”, “{2, 2}”, “{3, 2}”, and “{4, 2}”, reveal a clear trend: partitioning the scene into more blocks generally improves reconstruction quality, especially in terms of SSIM, as finer partitions help the model better capture local structures. However, when the number of blocks exceeds a certain value, the performance gains become marginal. We hypothesize this is due to a trade-off between spatial decomposition and data allocation. In our experiments, performance saturates at a “{3, 2}” partition, where Gaussians are already well-optimized. Further partitioning does not significantly improve results, potentially introducing instability or fluctuations due to reduced data overlap and weaker global coherence. iii. The “w/o PSG”, “w/o PSG-S1”, and “w/o PSG-S2” variants are designed to evaluate the effectiveness of our camera selection strategy in a detailed manner. Compared to CityGaussian’s method, our PSG strategy significantly improves reconstruction. Stage 1 is especially critical, as it ensures full supervision coverage, particularly near block boundaries, as shown in Fig. 12. The right figure demonstrates that the full PSG strategy provides enhanced supervision for points near block boundaries and corners, compared to the “w/o PSG-S1” variant. Stage 2 further refines view selection, and while less impactful individually, its removal still degrades performance. Together, the two stages offer complementary benefits for robust large-scale reconstruction. iv. When GPU resources are limited, omitting RBT leads to notable performance drops due to biased supervision. Without alternating across partitions, only a subset of images contributes to training, weakening appearance disentanglement and affecting reconstruction quality. RBT plays a critical role when computational resources are limited, ensuring more balanced and effective optimization. TABLE IX QUANTITATIVE RESULTS OF LARGE-SCALE SCENE PARTITIONING ANALYSIS EXPERIMENTS. WE EVALUATED SEVERAL VARIANTS: NO PARTITIONING, PARTITIONING INTO 4, 6, AND 8 BLOCKS, AS WELL AS PARTITIONING INTO 6 BLOCKS WITHOUT USING POINT-STATISTICS-GUIDED CAMERA PARTITIONING, BLOCK-SENSITIVITY-AWARE CAMERA PARTITIONING, AND ROTATIONAL BLOCK TRAINING. Fig. 13. Reconstruction under six blocks with varying appearances. Left: Reference images showing distinct appearances used by the six different blocks. Middle: Results from our method. Right: Results from Momentum-GS. Our method achieves consistent appearance reconstruction across the entire scene. # E. Extended Appearance Analysis To thoroughly evaluate our method’s capability in handling complex real-world conditions, we conduct comprehensive experiments focusing on two critical aspects: (1) robustness under challenging illumination variations in large-scale environments, and (2) flexible appearance manipulation enabled by effective appearance component decoupling.
These experiments validate both the reconstruction stability and the practical utility of our decomposed representation. 1) Large-Scale Scene Robustness under Illumination Variations: To simulate real-world lighting variations, where illumination may remain consistent within local regions but varies across a large scene, we divide the scene into six spatial blocks, each using images with consistent intra-block but varying inter-block illumination (e.g., morning vs. evening captures). As shown in Fig. 13, our method is compared to the recent SOTA method Momentum-GS under these challenging conditions. As illustrated, Momentum-GS struggles to maintain consistent appearance across blocks, often showing stark appearance differences between adjacent regions (e.g., day-to-night shifts). In contrast, our method employs a globally trained appearance-extraction U-Net that effectively disentangles complex lighting across the scene, enabling the reconstruction of scenes with consistent appearance across large-scale environments. Fig. 14 further highlights consistency and details. The top row illustrates results from Momentum-GS, while the bottom row shows those from our approach. Momentum-GS exhibits significant appearance inconsistencies across different image regions and suffers from blurred reconstruction details. In contrast, our method achieves superior visual coherence and sharper reconstruction quality. Additionally, Fig. 15 demonstrates our method’s ability to manipulate the appearance of large-scale scenes (e.g., day to dusk) at a scene level. Unlike simple global color adjustments, our structured disentanglement supports region-specific appearance changes that correspond to the regions’ distinct intrinsic properties. For instance, building areas exhibit subtle variations, whereas street regions experience more pronounced changes.
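Conceptually, such a scene-level transition can be produced by rendering with appearance embeddings interpolated between two reference conditions. The sketch below illustrates only this generic idea; the embedding vectors and the render call are hypothetical placeholders, not the method's actual interface.

```python
def lerp_embedding(e_day, e_dusk, t):
    """Linearly interpolate two appearance embedding vectors.
    t = 0 returns the day embedding, t = 1 the dusk embedding."""
    assert len(e_day) == len(e_dusk) and 0.0 <= t <= 1.0
    return [(1.0 - t) * a + t * b for a, b in zip(e_day, e_dusk)]

# Hypothetical usage: sweep t to obtain a smooth day-to-dusk sequence.
# frames = [render(scene, lerp_embedding(e_day, e_dusk, k / 10))
#           for k in range(11)]
```

Because the appearance components are disentangled per region in the actual method, the effect of varying the embedding is not a uniform global tint, as the Fig. 15 discussion above notes.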
2) Appearance Transfer: Our method exhibits advanced capabilities for appearance transfer in 3D scenes, highlighting its robust and precise appearance modeling. As shown in Fig. 16, a qualitative comparison between GS-W and our approach demonstrates that our method not only transfers both foreground and background elements to novel views but also retains intricate scene details, rather than merely reproducing the overall scene tone. This underscores the accuracy and reliability of our appearance modeling. Fig. 14. Comparison of large-scale scene reconstruction under block-wise lighting conditions. Top: Results from Momentum-GS, exhibiting noticeable inconsistencies and deteriorated reconstruction quality in finer details upon magnification. Bottom: Results from our method, maintaining a more consistent overall appearance across the large scene and demonstrating superior detail reconstruction upon closer inspection. Fig. 15. Appearance transition from daytime to dusk in a large-scale scene. Fig. 16. Qualitative comparison of appearance transfer performance across the Brandenburg Gate and Sacre Coeur datasets. Overall, our method enables consistent, high-fidelity reconstructions and flexible appearance editing across large-scale scenes with complex lighting variations.
Reconstructing 3D scenes from unconstrained image collections poses significant challenges due to variations in appearance. In this paper, we propose Scalable Micro-macro Wavelet-based Gaussian Splatting (SMW-GS), a novel method that enhances 3D reconstruction across diverse scales by decomposing scene representations into global, refined, and intrinsic components. SMW-GS incorporates the following innovations: Micro-macro Projection, which enables Gaussian points to sample multi-scale details with improved diversity; and Wavelet-based Sampling, which refines feature representations using frequency-domain information to better capture complex scene appearances. To achieve scalability, we further propose a large-scale scene promotion strategy, which optimally assigns camera views to scene partitions by maximizing their contributions to Gaussian points, achieving consistent and high-quality reconstructions even in expansive environments. Extensive experiments demonstrate that SMW-GS significantly outperforms existing methods in both reconstruction quality and scalability, particularly excelling in large-scale urban environments with challenging illumination variations. Project is available at https://github.com/Kidleyh/SMW-GS.
# I. INTRODUCTION The rapid advancement in digital technologies has significantly transformed scientific research, facilitating the collection, processing, and analysis of extensive datasets. However, the growing diversity and complexity of research data present substantial challenges, particularly in terms of findability, accessibility, interoperability, and reusability [1]. To address these challenges, the FAIR principles [2] were established, guiding best practices for sustainable and efficient management of research data. FAIR Digital Objects (FAIR-DOs) [3], [4] embody these principles, aiming to provide common mechanisms to enable machine-actionable, persistent, and harmonized representation of (meta)data beyond the borders of data spaces [1], [5]. FAIR-DOs use globally unique, resolvable, and persistent identifiers (PIDs) with their persistent records based on the well-established Handle system [6], which ensures their longevity and reliable referencing. Every value inside a FAIR-DO record is assigned a data type that is always referenced using a PID and defines the syntax as well as the semantic meaning of this value. This data type should be reused wherever its syntax and semantics fit, ensuring that identical references denote identical meaning. This generates harmonized artifacts which are interpretable and processable by a machine to, e.g., determine available operations for FAIR-DOs. Multiple data types may be aggregated in profiles that define the structure of a FAIR-DO record. Beyond the borders of research domains, domain-agnostic profiles, such as the Helmholtz Kernel Information Profile [7], are used to harmonize essential information in FAIR-DOs. The strong and manifested type system of FAIR-DOs is therefore the foundation for machines to automatically interact with them and with their referenced resources across research domains [5], [8].
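To make the record structure concrete, the sketch below shows a toy FAIR-DO record in which every key is a PID referencing a data type. All identifiers are invented placeholders, not real, resolvable Handle PIDs.

```python
# Hypothetical type PIDs; real records reference resolvable Handle PIDs.
DATE_CREATED_TYPE = "pid:type/date-created"  # placeholder identifier
LICENSE_TYPE = "pid:type/license-url"        # placeholder identifier

# A FAIR-DO information record: keys are type PIDs, so a machine can
# resolve each key to learn the syntax and semantics of its value.
record = {
    DATE_CREATED_TYPE: "2024-05-01T12:00:00Z",  # typed as an ISO 8601 value
    LICENSE_TYPE: "https://creativecommons.org/licenses/by/4.0/",
}

def referenced_types(rec):
    """The data types a record depends on are exactly its keys."""
    return set(rec)
```

Because keys are shared type PIDs, two records using `DATE_CREATED_TYPE` are guaranteed to mean the same thing by that key, which is what enables machine interpretation across data spaces.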
FAIR-DO Operations describe a mechanism for type-based interaction with FAIR-DOs and their contents, thus making them machine-actionable. To allow for automatic execution, they need to be described in a fully typed, interoperable, technology-agnostic, and reusable manner. To enable the computation of available FAIR-DO Operations for a given FAIR-DO, we need to bidirectionally associate them with data types in a type system. This leads to a highly inter-connected typing model that needs to be managed, queried, and validated. Existing Data Type Registries (DTRs) [9] with their typing models represent a significant development towards machine-interpretable FAIR-DOs. However, their schema-reliant architecture makes them unable to utilize and provide complex mechanisms beyond the capabilities of JSON Schema, and thus cannot facilitate type-associated FAIR-DO Operations. Such capabilities are needed to, e.g., bidirectionally associate FAIR-DO Operations to data types, to realize inheritance mechanisms, and to deal with the highly connected typing model that is required for FAIR-DO Operations. To address these shortcomings, we developed a typing model for a new graph-based FAIR-DO type system that we prototypically implemented as the Integrated Data Type and Operations Registry with Inheritance System (IDORIS) to showcase its feasibility. This typing model is conceptually based on the components of current DTR systems and lessons learned through their usage. We leverage the resulting type system to model type-associated FAIR-DO Operations in a technology-agnostic, highly reusable, and well-described manner. We provide a comprehensive solution that integrates FAIR-DO Operations as type-associated operations, inheritance, and semantic validation within a single type system. Hence, this work contributes to the field of FAIR (research) data management by achieving substantial progress in the long-term vision of machine-actionability.
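The bidirectional association between data types and operations described above can be pictured as a small index maintained in both directions. The snippet below is a toy sketch with invented PIDs, not the IDORIS API.

```python
from collections import defaultdict

# Toy bidirectional index between (hypothetical) type PIDs and operation PIDs.
ops_for_type = defaultdict(set)
types_for_op = defaultdict(set)

def associate(type_pid, op_pid):
    """Register an operation as applicable to a data type, in both directions."""
    ops_for_type[type_pid].add(op_pid)
    types_for_op[op_pid].add(type_pid)

associate("pid:type/orcid", "pid:op/resolve-email")  # invented PIDs

def available_operations(record_type_pids):
    """Compute the operations applicable to a FAIR-DO from its attribute types."""
    found = set()
    for t in record_type_pids:
        found |= ops_for_type[t]
    return found
```

Maintaining both directions is what allows the question "which operations apply to this FAIR-DO?" and its inverse "which FAIR-DOs can this operation act on?" to be answered by simple graph lookups.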
The paper is organized as follows: Section II provides a review of relevant technologies and related work. In Section III, we describe our typing model as a basis for type-associated FAIR-DO Operations, and in Section IV we introduce IDORIS as a prototypical implementation of this model. In Section V, we discuss and evaluate our model as well as the prototypical implementation based on a specific use case. Finally, Section VI summarizes the key contributions and discusses future research directions. # II. RELATED WORK FAIR-DOs essentially constitute a data management approach comparable to Linked Data [10] or nanopublications [11], but are distinguished primarily by their emphasis on persistence and strong type-safety, which ensure their machine-interpretability. Within the Research Data Alliance (RDA), multiple working groups and interest groups established outcomes and recommendations on FAIR-DO content [12], information typing models [13], and DTRs [14], which have been acknowledged, among others, by the European Commission [1], [15]. Despite ongoing discussions and early implementations within international initiatives such as the RDA and the FAIR Digital Objects Forum, comprehensive practical solutions addressing critical gaps in the FAIR-DO typing infrastructure still need to be developed and widely adopted. The existing typing infrastructure comprises three Data Type Registry instances: the ePIC test DTR, the ePIC production DTR, and the EOSC DTR. These DTRs follow the same typing model, with three schemas that constitute the basis for information typing: PID-BasicInfoTypes, PID-InfoTypes, and KernelInformationProfiles [9], [12]. However, their implementations slightly differ between the DTR instances and are not standardized.
These systems were already used to model information types for FAIR-DOs in the frame of several use cases from different domains, e.g., in material sciences [16], in digital humanities [17], and in energy research [18]. Technologically, all current DTRs are based on Cordra [19], a JSON schema-based metadata repository that is only able to validate the syntactic compliance to JSON schemas. Thus, the primary focus of existing DTRs has been limited to syntactic validation, offering at most rudimentary support for semantic validation or linking executable operations to specific data types. Moreover, despite the recognized benefits of object-oriented programming (OOP) principles such as inheritance and polymorphism in software engineering, current implementations of information types by DTRs either completely lack or inadequately support these mechanisms. This absence of sophisticated logic significantly restricts the semantic richness and operational flexibility needed to represent complex, interconnected data resources commonly encountered in scientific research. In addition, the DTRs do not support systematic reuse of type definitions, significantly hindering scalability and maintainability. To enhance the domain-specific expressivity of FAIR-DOs, it can be desired to create a Kernel Information Profile (KIP) [12] of selected domain-specific attributes in addition to those provided by the domain-agnostic ones, e.g., the Helmholtz KIP [7]. This case is not adequately supported by the existing DTRs, forcing creators of such domain-specific profiles to remodel the specification of the domain-agnostic KIP. This leads to redundant work, reduced reusability, and is a missed opportunity to leverage the de-facto subtyping relation between domain-specific and -agnostic KIPs. Conceptually, FAIR-DO Operations provide a mechanism to interact with FAIR-DOs, i.e., the values contained within the Handle records, and the external resources they reference (i.e., the bit sequence) [8].
Currently, multiple approaches exist for service-oriented FAIR-DO Operations. They typically focus on basic CRUD (Create, Read, Update, Delete) functionalities as specified by the Digital Object Interface Protocol (DOIP) [20]. They operate at the level of the FAIR-DO as a whole, and must be individually implemented by each service that supports such operations [20]. Currently, there is no method to describe technology-agnostic FAIR-DO Operations independently from the specific executing service and to dynamically associate them to FAIR-DOs according to at least one FAIR-DO association mechanism, i.e., “Record typing”, “Profile typing”, and “Attribute typing”, as described in [21]. These existing limitations highlight the necessity of a more advanced typing infrastructure that is capable of supporting sophisticated semantic validation, inheritance management, polymorphism, and robust type-associated FAIR-DO Operations within FAIR-DO ecosystems. # III. TYPING MODEL Before going into the specifics of our model, we need to provide a brief overview of the current technical implementation of typing for FAIR-DOs. Every FAIR-DO is described by an information record of key-value pairs, stored in the Handle Registry6 and resolvable by a Handle PID [5], [6]. A key in the information record uses a PID to reference a machine-interpretable information type in a DTR. This allows the value to be validated against the referenced information type. We use the term “typing” to refer to the availability of information within FAIR-DO records and our typing model. 
This is similar to the “information typing” used within current DTRs, but opposite to “FAIR-DO typing” in the context of association mechanisms for operations as proposed in [21]. On this basis, we introduce our typing model for FAIR-DOs, the description of technology-agnostic FAIR-DO Operations, and the association between data types and operations, indicating the analogies to OOP principles. Fig. 1. Typing model for FAIR-DOs as a colorized UML class diagram (classes: Data Type, Atomic Data Type, Type Profile, Attribute, Operation, Operation Step, Technology Interface, Attribute Mapping; plus enumerations and administrative metadata). 6https://handle.net
Figure 1 depicts the typing model for FAIR-DOs in a colorized UML diagram, consisting of the following classes: Data Type as a generalized term (orange), Atomic Data Type (yellow), Type Profile (red), and Attribute (dark green). Likewise, technology-agnostic FAIR-DO Operations are associated with our typing model through the Attribute class and consist of instances of the classes Operation and Operation Step (blue), Technology Interface (purple), and Attribute Mapping (light green). The gray elements are enumerations and administrative metadata that partially depend on the implementation. For simplicity, we write the names of the classes in lowercase italics to refer to their instances, and in uppercase italics to refer to the classes themselves. # A. Data Types We use Data Type as a generalized term to refer to Atomic Data Types and Type Profiles by specifying the Data Type class as an abstract superclass of the Atomic Data Type and Type Profile classes. This abstraction allows us to reference data types consistently, thereby reducing the redundancy and complexity of our model and its implementation whilst enhancing its semantic clarity and expressivity. Likewise, as detailed in Subsection III-B, this also allows the definition and logic of attributes in the Attribute class to be agnostic towards the instances of the Data Type subclasses they conform to. 1) Atomic Data Types: Instances of the Atomic Data Type class define the syntax of every value in the information record of any FAIR-DO. They are built on top of primitive JSON types (Boolean, Integer, Number, or String) to enable JSON serialization. Therefore, atomic data types are comparable to primitive data types in OOP, but offer additional restriction mechanisms that allow for a more strict validation of values: for any atomic data type, predefined constant enumerations of permitted and forbidden values can be specified, which are prioritized over the following mechanisms.
Strings can be limited by specifying a regular expression, as well as a minimum and maximum length. Integers and decimal numbers can be limited by providing a minimal and maximal value. These restrictions of the value space guarantee the quality and syntactic correctness of the information contained in FAIR-DOs, which benefits machine-interpretability. To make atomic data types and their potential association with operations reusable and consistent, we introduce a simple hierarchical inheritance mechanism: they can optionally refer to at most one parent, which is intended to have a broader definition. Upon validation of a value for an atomic data type, this value needs to be correctly validated against all atomic data types in the inheritance chain. 2) Type Profiles: Type profiles specify the structure and content of a FAIR-DO by associating a set of typed attributes that are instances of the Attribute class. Attributes represent data types and additional semantics, which will be further explained in Subsection III-B. The validation policy determines which combination of attributes must be available, and whether to allow or forbid additional attributes in a FAIR-DO complying with the type profile. The options are to allow none, exactly one, any but at least one, or all of the attributes. A type profile can describe the entire structure of FAIR-DO records and complex JSON objects that are used as values of a specific attribute within a FAIR-DO record. The latter option is particularly useful when dealing with intricate or tightly-coupled information that is not generic enough to be extracted into a separate FAIR-DO but still needs to be processed together. For instance, the description of measurement units requires storing a value, a unit, and possibly some information about its accuracy together. In addition, instances of the Type Profile class can make use of a multi-inheritance mechanism.
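As an illustration of the restriction and inheritance-chain mechanisms just described, the sketch below validates a value against its enumerations first, then the remaining constraints, and finally the parent chain. Class and field names are illustrative and simplified (e.g., numeric bounds only, not string length limits); this is not the actual IDORIS implementation.

```python
import re

class AtomicDataType:
    """Illustrative validator for an atomic data type (not the real model)."""
    def __init__(self, permitted=None, forbidden=None, regex=None,
                 minimum=None, maximum=None, parent=None):
        self.permitted, self.forbidden = permitted, forbidden
        self.regex, self.minimum, self.maximum = regex, minimum, maximum
        self.parent = parent  # at most one parent (single inheritance)

    def validate(self, value):
        # Enumerations take priority over the other restriction mechanisms.
        if self.permitted is not None and value not in self.permitted:
            return False
        if self.forbidden is not None and value in self.forbidden:
            return False
        if isinstance(value, str):
            if self.regex is not None and not re.fullmatch(self.regex, value):
                return False
        else:  # numeric value: check bounds
            if self.minimum is not None and value < self.minimum:
                return False
            if self.maximum is not None and value > self.maximum:
                return False
        # The value must also validate against the whole inheritance chain.
        return self.parent.validate(value) if self.parent else True

# Example: a year type inheriting from a broader non-negative integer type.
non_negative = AtomicDataType(minimum=0)
year = AtomicDataType(minimum=1900, maximum=2100, parent=non_negative)
```

Here a value accepted by `year` is, by construction, also accepted by `non_negative`, mirroring the requirement that validation succeeds along the entire chain.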
Despite being known to cause problems such as naming conflicts in programming languages [22], this is not the case in our model since every data type and attribute is assigned a PID, making them unambiguously addressable. The remaining potential conflicts of multi-inheritance can be solved through heuristics, whose implementation details are outside the scope of this work. Type profiles are therefore comparable to classes in OOP. # B. Attributes An attribute points to a data type that defines its value space and a default value, if any. Attributes specify their cardinality by providing a lower boundary $l$ and optionally an upper boundary $u$ . This enables them to represent optional single values $( l ~ = ~ 0 ; u ~ = ~ 1 )$ , mandatory single values $( l = 1 ; u = 1 )$ , limited lists $( l \geq 0 ; u \geq 2 )$ , and unlimited lists $( l \geq 0 ; u = \bot )$ of values. Attributes behave covariantly when they are used in FAIR-DO information records or as a return value of an operation as detailed in III-C1. Since attributes are assigned a PID and contain elements of the Administrative Metadata class, they can be referenced directly to specify a value within a FAIR-DO record. This is necessary in case multiple values that conform to the same atomic data type are used in a type profile. For instance, the Helmholtz KIP [7] includes “dateCreated” and “dateModified”, both adhering to the ISO 8601 standard, which is represented as an atomic data type. Without directly referencing these attributes, both values would refer to the identical PID of the ISO 8601 atomic data type, resulting in a loss of valuable semantic differentiation. This approach to attributes resembles object attributes or variables in OOP, both in terms of functionality and semantics. However, attributes according to the Attribute class in our model additionally fulfill the crucial role of associating FAIRDO Operations with data types. # C. 
# C. Modeling FAIR Digital Object Operations

In the following, we outline the methodology for modeling technology-agnostic FAIR-DO Operations based on the previously introduced classes of the typing model for FAIR-DOs. For this, we need to abstract and describe these operations as well as technologies, enrich them with meaningful metadata, and provide a mechanism to adapt this generic definition to actual execution environments in order to enable automatic computation. Figure 2 is a visual example of a possible application of a FAIR-DO Operation. The depicted operation, modeled by instances of the Operation and Operation Step classes, receives an ORCiD via the “contact” attribute contained within the Helmholtz KIP [7], extracts the ORCiD number/letter sequence, and returns the “primary e-mail address” of the ORCiD profile as the result. During the execution, two distinct technologies are used that are modeled by instances of the Technology Interface class (described in Subsection III-C3): a regular expression (Regex) and a Python script.

1) Operations: The Operation class describes an action that can be performed on an instance of the Attribute class to which it is applicable. Therefore, it must reference this attribute and all attributes it returns. Operations always contain a non-empty ordered list of instances of the Operation Step class (described in Subsection III-C2) that specifies all the tasks performed during the execution of an operation. When comparing FAIR-DO Operations to OOP, instances of the Operation class are similar to both functions and methods with exactly one input parameter and possibly multiple return values. They are bound to instances of the Attribute class (or the respective data type) on which they are executable, thereby resembling methods.
However, as they cannot directly modify the associated values stored within the FAIR-DOs and are stateless, they align more closely with the typical characteristics of a function.

2) Operation Steps: Operation steps can be understood as tasks in an operation workflow. The order of execution of the operation steps within an operation is specified in ascending order by the index and by the availability of the attributes. An operation step specifies whether it uses a technology via an instance of the Technology Interface class (Subsection III-C3), uses another operation, or contains multiple operation steps. It also contains a set of input and output mappings that use attribute mappings (Subsection III-C4) to connect, transform, and specify values between attributes, depending on the used technology interface or operation. Due to this strong coupling to the scope of the operations, operation steps and attribute mappings are not reusable, are not assigned a PID, and are managed as composites inside an operation. Attributes, technology interfaces, and operations, on the other hand, rely heavily on their reusability and are therefore assigned a PID. Operation steps are comparable to function calls on a technology interface or another operation. They can, however, also be seen as subroutines.

3) Technology Interface: Due to the high effort invested in specifying, testing, and executing a technology, it is desirable to make this work highly reusable. We therefore decided to separate the technologies from the problem-specific operations. Instances of the Technology Interface class realize the reusability layer by providing an environment-independent interface to the execution. Technology interfaces specify a set of input attributes, a set of output attributes, and reference a set of Adapter FAIR-DOs via their PIDs. These Adapter FAIR-DOs are specific to the executing environment and actually implement how the technology interface is executed on any given system.
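The ascending-index execution of operation steps can be sketched as follows. This is a deliberately simplified illustration with hypothetical names (`OperationSketch`, `Step`, `execute`); real steps invoke technology interfaces or nested operations via attribute mappings rather than plain string functions:

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;

public class OperationSketch {
    /** One step: an index for ordering plus the task it performs (simplified to a string function). */
    record Step(int index, Function<String, String> task) {}

    /** Steps run in ascending index order; each step consumes the previous step's output. */
    static String execute(List<Step> steps, String input) {
        String value = input;
        for (Step s : steps.stream().sorted(Comparator.comparingInt(Step::index)).toList()) {
            value = s.task().apply(value);
        }
        return value;
    }
}
```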
In our envisioned approach, the executing systems specify a type profile for their adapters, which in turn specify machine-interpretable information. This enables the adapter to be downloaded, verified in its integrity, and executed, subject to the security policies of the executing system.

Fig. 2. The “Get the primary e-mail address from ORCiD via the API” operation with its two operation steps (“Extract ORCiD number from URL” and “Get ORCiD profile and extract e-mail address”), the “Regex” and “Python” technology interfaces, and their Adapter FAIR-DOs.

We did not model Adapter FAIR-DOs as classes in our model (Figure 1) since they just need to be uni-directionally referenced from the technology interfaces, and their content might vary between the executing systems, which is perfectly solved by type profiles in FAIR-DO records that also have a PID. Relating this to the regular expressions used in our example (Figure 2), we see that there are different libraries and APIs for different environments. In this case, we propose to create a technology interface for the “Regex” technology that accepts an input string and a pattern while returning an array of strings for the regex groups (the first element always represents the fully validated string). For this “Regex” technology, we can then implement adapters for multiple environments (e.g., a JavaScript web-browser environment, a Python environment, a Java environment).
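A Java adapter for such a “Regex” technology interface could look as follows. This is a sketch, not part of the proposed model; the class and method names are hypothetical. It uses `java.util.regex`, where group 0 is the fully matched string, matching the convention that the first element represents the fully validated string:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexAdapter {
    /**
     * Takes an input string and a pattern; returns the regex groups,
     * with element 0 holding the fully matched string.
     * Returns an empty list if the pattern does not match.
     */
    static List<String> execute(String input, String pattern) {
        Matcher m = Pattern.compile(pattern).matcher(input);
        List<String> groups = new ArrayList<>();
        if (m.find()) {
            for (int i = 0; i <= m.groupCount(); i++) {
                groups.add(m.group(i));
            }
        }
        return groups;
    }
}
```

Applied to the running example, the pattern from Figure 2 extracts the ORCiD number/letter sequence as group 1 while group 0 holds the whole URL.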
The executing environment can then select the most suitable adapter among the available ones. When comparing technology interfaces to OOP concepts, we think of them as functional interfaces, as they only provide exactly one execute function that has a set of parameters and a set of return values. These functional interfaces may then be implemented by the adapters, which can be injected into the executing service as a dependency, resembling the dependency inversion principle of OOP.

4) Attribute Mappings: Since the attributes provided as input to an operation are not necessarily identical to those specified by an operation or technology interface, we need to map between these attributes. The Attribute Mapping class provides such a mapping mechanism and therefore enables the reuse of operations and technology interfaces. Figure 2 visualizes such a use case where the operation (light blue) demands a “contact” attribute, which is then transferred into the “regexInput” attribute of the “Regex” technology interface. Similarly, an element from the output of the “Regex” technology interface is extracted, transformed to the definition of an attribute, and used in another operation step. We recall that every attribute conforms to a data type specifying its syntax. Attribute mappings therefore must fulfill multiple roles also known in OOP:

• Defining constant values: Not all attributes of technology interfaces need to be present in every FAIR-DO. Information such as the regex pattern extracting parts of an ORCiD-URL, the source code location of a Python script, or the setup necessary to run the script pertains to the operation itself rather than to the FAIR-DO it operates on. Therefore, attribute mappings support the specification of constant values within an operation step by providing a “value” field.

• Type casting: Different attributes often conform to different data types.
For example, the “regexInput” attribute of the “Regex” technology interface naturally conforms to an arbitrary String. We therefore down-cast the “contact” attribute, which conforms to the syntax of an ORCiD-URL, to a String to use it with the “Regex” technology interface. To make this mechanism work for both down- and up-casting, the executing system needs to validate the input values for the attribute mappings against the data type the output attribute complies to.

• Addressing items in an array: In Figure 2, there are two examples for the use of up-casting mechanisms, although they do not perform a 1-to-1 mapping, but rather an n-to-1 mapping. The “regexOutput” and “returnValues” attributes have a cardinality greater than one and can therefore be considered as arrays of values. However, the “extracted ORCiD” and “e-mail address” attributes have a cardinality of exactly one, which necessitates the attribute mappings to select one element of their input arrays. The attribute mappings refer to the input and output attribute and specify the index of the element to be used. The validity has to be enforced by the executing system.

• Providing templates for Strings: String templating, the process of adding the contents of a variable to a predefined string, is well known from programming and is also relevant in our operation model. In our example in Figure 2, we use this String template mechanism to insert the “extracted ORCiD” from the first operation step into the “runCommand” attribute that executes the Python script. The pattern of the insertion position (by default “{{input}}”) can be changed to facilitate as many use cases as possible.

# IV. MODEL IMPLEMENTATION

We introduce IDORIS as a prototypical implementation of our typing model, which can be found on GitHub⁷.
The full name “Integrated Data Type and Operations Registry with Inheritance System” reflects the essential functionalities of our typing model described in Section III. Technologically, IDORIS is a Spring Boot⁸ microservice developed in Java 21⁹. For storage, the graph database Neo4j¹⁰ is used in combination with Spring Data Neo4j¹¹ and Spring Data REST¹², making IDORIS capable of providing an automatically generated and fully HATEOAS-enabled RESTful API for CRUD functionality, solely based on our model. More advanced features that demand additional logic, such as resolving the inheritance hierarchy and retrieving available operations for data types, are exposed using traditional Spring Web MVC endpoints¹³.

# A. Graph database

Due to the high inter-connectivity of our model, efficient querying of relationships between model components is essential. This is a typical use case for graph databases. We chose Neo4j for its labeled-property graph model and its integrability into the technology stack of IDORIS, which allows us to implement our typing model with only minor technical changes, thus enhancing the expressivity of the graph. IDORIS uses these efficient in-database processing capabilities to find all operations executable on an attribute or, transitively, a data type, to detect cycles in the graph, and to resolve inheritance hierarchies. Furthermore, we can use graph algorithms for path finding, cycle detection, and relationship querying provided by Neo4j directly inside our graph database.

# B. Rule-based validation and processing

Since our model depends heavily on the correctness of user-provided information, IDORIS must validate this data both syntactically and semantically.
Especially when realizing the inheritance mechanisms for atomic data types and type profiles, a validation mechanism is needed that is able to act not only on individual entities but also on their contextual relationships. This feature is clearly beyond the capabilities of a JSON schema. We realized a modular, rule-based approach for validation to enhance its maintainability. This is accomplished by using the “Visitor” design pattern [23], which separates logic from model classes in a highly modular fashion and is therefore often used, among others, for semantic validation and optimization inside compilers. Each validation rule is implemented in a separate Visitor class, having a dedicated behavior for each model class it is called upon (e.g., Atomic Data Type, Type Profile, Operation). Visitors perform non-trivial validations of the inheritance hierarchy and of relations to other entities (such as attributes). This is primarily done through recursion, interaction with the accessor methods, and interaction with the graph database to ensure cross-entity consistency. This approach ensures that the inheritance hierarchy is free of conflicts and circular dependencies. In IDORIS, Visitors are currently only used for validation purposes, but they are designed to solve future problems such as JSON schema generation or optimization algorithms.

# V. EVALUATION AND DISCUSSION

# A. IDORIS-based Use-Case

We visualized our typing model in Figure 1, defined our approach for type-associated FAIR-DO Operations in Subsection III-C, and illustrated a running example in Figure 2. In this subsection, we reuse this running example as a use case and provide an excerpt of the actual graph data structure in Figure 3 (as described in Subsection IV-A) with all relevant nodes and relations that describe an operation. For better readability, we only show a description of the contents of each node, omitting its properties.
The color labels of the nodes in our labeled-property graph match those of Figures 1 and 2: the operation and its operation steps are in light blue, attributes in dark green, attribute mappings in light green, and technology interfaces in purple. This graph can be retrieved by executing the simple Cypher query `MATCH (n:Operation|AttributeMapping|OperationStep|TechnologyInterface|Attribute) RETURN n`. Therefore, we can argue that the graph directly represents the classes of our UML-based typing model in a semantically meaningful manner.

Fig. 3. Excerpt from the graph representing the “Get primary e-mail from ORCiD via API” operation, its operation steps, attributes, attribute mappings, and technology interfaces.

The semantically meaningful nodes and relations in our graph database can be exploited for more complex queries: with respect to our example, the red relations form a cycle in our graph, representing the flow of data within the “Get primary e-mail from ORCiD via API” operation. It is visible how the data flows from the “contact” attribute through the attribute mapping into the “regexInput” attribute, which is processed by the “Regex” technology interface, whose output is then again transformed and inserted into a command starting a Python script that outputs the “e-mail address”. This cycle is queried using the Cypher query `MATCH (m1) WITH collect(m1) AS nodes CALL apoc.nodes.cycles(nodes) YIELD path RETURN path`. The same query returns an additional cycle for each operation step, representing levels of abstraction inside an operation. This cycle detection is useful for a future executing system to automatically parallelize processing and to ensure data is available when it is needed. The concrete use case of this example can be simplified by using a platform-independent execution mechanism, such as WebAssembly (WASM) [24], whose limitations are outside the scope of this work. However, more complex use cases that need mechanisms for environment-specific execution, optimization, and access to resources that only native code can provide may use technologies such as Docker containers and Python scripts, demanding a more flexible modeling approach. We do not intend to develop a “universal programming language”, but instead to facilitate the variety of languages, frameworks, and tools that already exist with our technology-agnostic model for FAIR-DO Operations. IDORIS must also ensure that no invalid cycles are introduced. Examples for such unwanted circular dependencies include, but are not limited to: cycles in the inheritance hierarchy, type profiles using themselves as attributes, and an operation calling itself within an operation step. To avoid this, we used the path finding algorithms of Neo4j and created a rule for IDORIS’ rule-based validator system (Subsection IV-B).
In Listing 1, we show an excerpt of such a validator in Java-like pseudocode, which also creates error messages via the API, including severity level, message, and the entity of interest. This feature of IDORIS assists users with detailed error messages when creating new elements and ensures data integrity. By implementing separate validator classes for each rule, we enhanced the maintainability of our codebase through separation of concerns. These validators can also be used to ensure semantic correctness, e.g., by ensuring no conflicts exist in the inheritance hierarchies of atomic data types and type profiles.

```java
public class AcyclicityValidator extends Visitor<ValidationResult> {
    private final Neo4jClient neo4jClient;

    // Visitor method to ensure an operation has no recursive dependency on itself
    public ValidationResult visitOperation(Operation operation) {
        return doesNotExecuteItself(operation);
    }

    public ValidationResult visitTypeProfile(TypeProfile typeProfile) {
        return ValidationResult.combine(
            doesNotInheritFromItself(typeProfile),
            doesNotUseItselfAsAttribute(typeProfile)
        );
    }

    [...]

    private ValidationResult doesNotInheritFromItself(DataType dataType) {
        String query = "MATCH path = (n:DataType {pid: $nodePID})"
                     + "-[:inheritsFrom*1..]->(n) RETURN path LIMIT 1";
        // Query the path from the Neo4j database
        boolean hasCycle = neo4jClient.query(query)
            .bind(dataType.getPID()).to("nodePID")
            .fetch()
            .first()
            .isPresent();
        return hasCycle
            ? new Error("Circular inheritance detected", dataType)
            : new OK();
    }
}
```

Figure 4 shows detailed examples of two application cases of Type Profiles (described in Subsection III-A2 and marked in red): The “Helmholtz Kernel Information Profile” type profile is illustrated with excerpts containing selected attributes. This profile is used to describe the content of FAIR-DOs that adhere to it. One of these attributes (Subsection III-B) conforms to the “Checksum” type profile.
This profile is used to describe a complex JSON object embedded within a value in a FAIR-DO utilizing the “Helmholtz Kernel Information Profile”. The resulting JSON object contains the hash and the algorithm that generated the hash. Furthermore, Figure 4 shows an example of the inheritance of atomic data types (Subsection III-A1) by specifying that “ORCiD-URL” inherits from “URL”. In addition, Figure 4 visualizes data types, namely atomic data types (yellow) and type profiles (red), alongside the attributes and operations, using the running example and an additional example operation “Download Resource and Check Integrity”. We use these to elaborate on the fulfillment of conceptual association mechanisms for operations and FAIR-DOs according to “Record typing”, “Profile typing”, and “Attribute typing” (as introduced in Section II). Since our operations are assigned PIDs, we support referencing them from FAIR-DOs, enabling “Record typing”. We realize both “Profile typing” and “Attribute typing” by specifying a single attribute an operation is executable on. For “Profile typing”, this attribute conforms to a type profile (orange path in Figure 4). For “Attribute typing”, this attribute can conform to any data type, namely atomic data types or a type profile (light green path in Figure 4). However, we decided not to adopt the duck typing variant of “Attribute typing” by allowing only exactly one attribute an operation is executable on. Hence, all FAIR-DO operation association mechanisms are (at least partially) supported by our model and implemented in IDORIS.

Fig. 4. Excerpt from the graph representing data types in the graph database.

# B. Comparison to Existing Data Type Registry Models

We designed our integrated typing model based on the concepts and development of the ePIC and EOSC DTRs [9]. The core concepts, defining simple value syntax and structuring complex values, remain unchanged.
In the ePIC and EOSC DTRs, PID-BasicInfoTypes and PID-InfoTypes are sometimes called Data Types for simplicity. PID-BasicInfoTypes for the syntax of simple values are modeled by the Atomic Data Type class (Subsection III-A1). To define complex structures, our model combines the PID-InfoTypes (for complex JSON values inside a FAIR-DO) and the KernelInformationProfiles (for the structure of FAIR-DOs) into the Type Profile class (Subsection III-A2). As a new approach, Atomic Data Types and Type Profiles themselves are abstracted by the Data Type class, which reduces redundancies and provides a strong definition of the term “data type” within our model, enhancing its semantic clarity. The newly introduced inheritance mechanisms for both Atomic Data Types and Type Profiles promote the reusability of their instances and already allow for basic polymorphic behavior through subtyping, facilitating reuse of the association between data types and operations. This relates to the model’s ability to specify machine-actionable operations that are associated with attributes (and transitively with data types), enabling type-associated FAIR-DO Operations. Unlike PID-BasicInfoTypes, our Atomic Data Types do not contain fields for specifying measurement units or categories, and they exclude other rarely used or undocumented fields, to enhance the semantic clarity of the typing model. For the same reason, the former SubSchemaRelation for PITs and KIPs [9] was split into a validation policy and a flag indicating whether additional attributes are allowed. Unlike the ePIC and EOSC DTRs, IDORIS is not based on JSON schema. Instead, we use a graph database, ideal for storing highly connected entities and for executing graph algorithms to query the inheritance hierarchy of data types and to find operations executable on an attribute or data type. We also provide a more capable rule-based validation logic that is able to validate more than just the syntax of single entities.
This way, we can describe type-associated operations and ensure the quality of information stored in IDORIS. We decided against an RDF-based system due to its limited integration into the Spring framework, the higher modeling complexity with triples, and the steeper learning curve. Although RDF-based graph databases, such as Apache Jena¹⁴, offer potential advantages, including easier integration into knowledge graphs, the ability to reuse terms from ontologies, and possible support for federated queries beyond single systems, concrete use cases benefiting from these features have not yet been identified.
FAIR Digital Objects support research data management aligned with the FAIR principles. To be machine-actionable, they must support operations that interact with their contents. This can be achieved by associating operations with FAIR-DO data types. However, current typing models and Data Type Registries lack support for type-associated operations. In this work, we introduce a typing model that describes type-associated and technology-agnostic FAIR Digital Object Operations in a machine-actionable way, building and improving on the existing concepts. In addition, we introduce the Integrated Data Type and Operations Registry with Inheritance System, a prototypical implementation of this model that integrates inheritance mechanisms for data types, a rule-based validation system, and the computation of type-operation associations. Our approach significantly improves the machine-actionability of FAIR Digital Objects, paving the way towards dynamic, interoperable, and reproducible research workflows.
# 1 INTRODUCTION

Statutory Interpretation (SI) is an important task in the scope of Artificial Intelligence (AI) and Law. It is concerned with the interpretation of legal concepts made by judges in court rulings. The classical example is the sentence “No vehicles in the park” with respect to the vehicles concept. Does it cover bicycles and ambulances? An AI system could help with such a task by offering quick access to the judgments, or fragments thereof, containing statements that provide such interpretations. A system could identify a sentence appearing in one of the judgments, e.g., “Although a bicycle is commonly classified as a vehicle, the purpose of the sign is to keep the park as a quiet place, so bicycles are exempt from it.”, as highly relevant for interpreting what a vehicle is in that context. The problem of finding such sentences could be treated as general legal information retrieval and theoretically addressed with state-of-the-art retrieval models based on deep neural networks, such as BGE [5]. However, the current task is special in the sense that we want to exclude a large group of texts found in the judgments, i.e., those excerpts that only cite the regulation without providing any extended interpretation of the legal concept. Because such models rank by the similarity of the query and the searched text, such excerpts would appear at the top of the search results when such a model is applied. This task therefore requires a more specific model to be solved correctly. Creating a specific model requires building a properly designed dataset, which contains information about the utility of the explanatory sentences retrieved by a general retriever. The construction of such a dataset is not a trivial task, as it requires annotators with a high level of expertise.
The annotators are supposed to discover the true meaning of retrieved sentences in the context of legal explanation and decide whether the examples are useful in the task. This process is complicated, and the understanding of the sentences can be subjective and prone to other factors such as annotator fatigue [17]. The framework needed to mitigate the issues mentioned above appears to be very costly, as it must cover many factors that can have a potential impact on the interpretation of legal semantics. In this research, we want to check how the annotation process, necessary to train the model, can be optimized in order to improve its cost effectiveness. Namely, we have conducted various experiments to answer different questions regarding this process, with the goal of creating some initial optimization guidelines. We highlight the results of three experiments which answer some of these questions and outline a basic guide. Our first experiment tries to answer the question “given a statutory concept, how many examples do we need to annotate before the results converge (RQ1)?”. Second, we want to check “how might the choice of specific sentences to annotate affect the quality of the model (RQ2)?”. Lastly, we check “to what extent can we use a language model to annotate the examples (RQ3)?”, following research by [21]. In the experiments, we tested these questions over different versions of an open-source language model. The answers to these three questions allow us to write some initial guidelines to help with future SI retrieval tasks. Our benchmark is the state-of-the-art results regarding SI retrieval obtained in [24]. The benchmark studies used different reranking methods to assess the usefulness of the exemplary sentences. In the next section, we overview the current state of the art in data annotation, retrieval-augmented generation, and transfer learning in the context of law.
This builds on the foundational research on argument retrieval presented in [1], establishing the first approach for building argument-retrieval systems, the systems enabling SI, as well as on recent advancements that demonstrate the capabilities of large language models in specialized legal tasks, documented by [3] and later [11]. We then present our method, which closely follows the one introduced in Savelka’s Ph.D. thesis [17], in which he used the model first to find the explanatory examples and later to rank them with four categories of usefulness. This is followed by our description of the three experiments, then by the results of these experiments and detailed guidelines for SI annotations in different scenarios. We conclude by summarizing the results and providing a guide to help with SI retrieval tasks.

# 2 RELATED WORK

As the subject of this article is optimizing the annotation process for statutory interpretation, this section presents the literature on various aspects of this process. Among others, they are:

• facilitating annotation per se,
• limiting the burden of labeling, or increasing the effectiveness of those efforts by using transfer learning,
• retrieval of the best candidates to be included in a dataset.

The most relevant research, i.e., the Ph.D. thesis of Jaromir Savelka [19], is discussed in Section 3.

# 2.1 Data Annotation in the Context of Law

The problem of annotation in the context of law has been addressed by Gray et al. [8]. They used an LLM (gpt-3.5-turbo-16k) to pre-annotate sentences of legal opinions in Drug-Interdiction AutoStop (DIAS) cases. The annotation of the sentences indicated which DIAS factor (if any) is present in a sentence. The LLM was provided with the annotation instruction and dozens of examples of proper annotation, both included in the prompt. However, in the described solution, the LLM was not an independent annotator but only an assistant that proposed the labels to human annotators.
It constituted efficient support in the annotation process by making the work faster while not negatively influencing the outcome of the process, nor making the experts rely on it completely. The LLM hints increased the time efficiency of the annotation process. Savelka et al. [21] used LLMs in an annotation task requiring highly specialized domain expertise. The task here was to label sentences from court opinions that explain legal concepts (the same dataset we use in this research). The GPT-4 model was provided with annotation guidelines in the prompt. It turned out that the model performed similarly to well-trained student annotators, maintaining good quality even for batch predictions. The level of annotator agreement (Krippendorff’s $\alpha$) of the LLM was in the middle of the pack of all annotators (LLM and human). It is worth mentioning that the literature does not seem to address the topic of the number of training examples, which is part of the research presented in this article.

# 2.2 Retrieval-Augmented Generation in the Context of Statutory Data

In their recent work, Luo et al. [14] present ATRI, a retrieval-augmented generation framework for interpreting vague legal concepts using past judicial precedents, alongside the new Legal Concept Entailment benchmark for automated evaluation, demonstrating that the system’s outputs effectively assist large language models and are comparable to expert-written interpretations. Concentrating on a multi-layered system, de Oliveira Lima [6] proposed embedding texts, generating dense vector representations for individual articles, their subdivisions (paragraphs, clauses), and structural groupings (books, titles, chapters), in order to capture hierarchical complexities and enhance information retrieval. His aim was to demonstrate broad applicability to different legal systems and other domains with similarly structured text.
Savelka and Ashley [20] provide foundational work on legal information retrieval focused on statutory interpretation, outlining methods for discovering sentences relevant to statutory terms and illustrating the limitations of traditional keyword-based techniques in this context.

# 2.3 Transfer Learning in the Context of Law

The reuse of data and existing models was thoroughly examined by Savelka et al. [23], where functional segmentation of judgments in cross-contextual and cross-jurisdictional tasks was revised and described. The researchers used language-agnostic sentence embeddings in sequence-labeling models using Gated Recurrent Units (GRUs) to investigate transfer between different contexts and jurisdictions in the task of functional segmentation of adjudicatory decisions. The examined models appeared to be able to generalize beyond the context (e.g., jurisdiction) they had been trained on, as well as to be more robust and to have better overall performance during the evaluation of previously unseen contexts. Tyss et al. [26] transferred pre-trained models for legal case summarization to jurisdictions without available reference summaries for training, underlining the role of pre-training in the case summarization problem. Nevertheless, the choice of the dataset for pre-training should be based on lexical and jurisdictional similarity rather than its size or abstractiveness, which shows that the transfer cannot be performed arbitrarily. Savelka et al. [22] used transfer learning to predict rhetorical roles in different domains and jurisdictions, demonstrating the ability of language models to generalize and abstract beyond the specific domain vocabulary. The article also shows that training the models on pools of data taken from different datasets can improve their performance and robustness. A similar dataset-augmenting approach is also presented in the paper discussed next.
Niklaus et al. [15] improved the performance of BERT-family models in the task of predicting legal judgments by augmenting the training dataset with case law from different jurisdictions. The models trained on the augmented datasets showed better performance than those trained on data from a single jurisdiction owing to, as the authors believe, the informational gain from other, diverse cases. Unlike in the previous approaches, the transfer did not occur per se, but by transferring the information through dataset augmentation and performing a single extended training process. Furthermore, illustrating earlier approaches and the longer presence of cross-jurisdictional transfer in the AI and law domain, Savelka and Ashley [18] used statistical machine learning to classify specific functional categories in statutory texts from multiple US state jurisdictions. The transfer of the statistical model helped to solve the problem of sparse data and its imbalance across jurisdictions, and improved the classification results. Zheng et al. [29] examined the conditions under which pretraining improves performance on legal tasks, identifying key factors such as data similarity and structure. Chalkidis et al. [4] demonstrated that domain-specific pretraining of BERT on legal corpora significantly enhances performance on a wide range of downstream legal tasks. The literature mentioned above indicates that transfer learning is an efficient way of improving models' performance in various tasks, especially when data is sparse or lacking, as it reduces the effort of data gathering and annotation.

# 3 APPROACH

This research concerns the task of discovering sentences for argumentation about the meaning of statutory terms.
This task, introduced by Savelka and Ashley, is defined in [20] as a specific type of legal argument retrieval, itself defined by Ashley and Walker [1] as the merging of legal information retrieval and legal argument mining.

# 3.1 The Dataset

In his Ph.D. thesis [17], Savelka constructed a dataset of 42 concepts with more than 27 thousand sentences scored with respect to their value for statutory interpretation. These sentences were retrieved from the Caselaw Access Project and were selected based on whether they contained occurrences of a list of chosen legal concepts. For their experiment, they chose to fine-tune RoBERTa-base, a pretrained transformer-based language model developed by Facebook (currently Meta) AI. These findings were reviewed in [24], where the authors showed that better performance on this task can be obtained with a DeBERTa v. 3 model [9] combined with a voting scheme. The task considered by Savelka and Ashley is to return, given a legal concept and a provision, a list of sentences which best explain the legal meaning of the concept. Such sentences can be definitional sentences [20] that explicitly state, in different words, what the statutory phrase means or what it does not mean; sentences that provide an example, instance, or counterexample of the phrase; and sentences that show how a court determines whether something is such an example, instance, or counterexample. In order to be able to train and assess the models, the dataset was annotated by law students, with each sentence annotated by two students. The annotators assigned each sentence a category denoting whether it has high, certain, potential, or no value for understanding the legal concept. To evaluate the quality of their fine-tuned models, the researchers used the Normalized Discounted Cumulative Gain score, widely used in information retrieval [10].
For the task of argument mining they first defined $S_j = (s_1, \ldots, s_n)$, where $s_i$ for $0 < i \leq n$ is the sentence for concept $j$ at the $i$-th place in the list of retrieved sentences. They then used, for the purpose of assigning a value to each $S_j$ and for a given $k$, the normalized discounted cumulative gain:

$$ NDCG(S_j, k) = \frac{1}{Z_{jk}} \sum_{i=1}^{k} \frac{rel(s_i)}{\log_2(i+1)} $$

where $rel(s_i)$ is the value of each sentence for the understanding of a concept (from 3 for high value down to 0 for no value) and $Z_{jk}$ normalizes the result by dividing it by the value of the ideal sorting of the sentences. The reader is invited to consult [10] for a detailed explanation of this measure.

# 3.2 The Task Data Format

The data at the core of this article's interest are the sentences extracted from case law, paired with the legal concept that they explain. The legal concepts are extracted from statutory law in a process of legal analysis. The examples used for demonstration come from the case law of the European Patent Office Board of Appeal. A data point contains two fields relevant from the point of view of the research: the text of the sentence and the attached legal concept (Fig. 1). After annotation, the example is given a label which expresses its explanatory value for the legal concept (Fig. 2). Instead of returning a discrete value, the model predicting the explanatory value returns a continuous one, a measure that allows finding the most relevant sentences (Fig. 3).
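The NDCG measure above can be sketched in a few lines of Python (a minimal illustration using the 0–3 relevance scale described earlier, not the authors' evaluation code):

```python
import math

def dcg(rels, k):
    """Discounted cumulative gain over the first k relevance values."""
    return sum(rel / math.log2(i + 1) for i, rel in enumerate(rels[:k], start=1))

def ndcg(rels, k):
    """NDCG(S_j, k): DCG divided by the DCG of the ideal (descending)
    ordering of the same relevance values, i.e. the Z_jk normalizer."""
    ideal = dcg(sorted(rels, reverse=True), k)
    return dcg(rels, k) / ideal if ideal > 0 else 0.0
```

For example, a retrieved list with relevance values [3, 0, 2] scores about 0.94 at $k=3$, since the ideal ordering [3, 2, 0] is only slightly better.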
# 3.3 The Training Setup

For the purpose of fine-tuning their best model, they used examples pairing the sentences with the provision of the legal concept, where the provision was defined as the smallest text in the regulation expressing a statutory provision regarding the legal concept.

Figure 1: Example of a data point in JSON format used for annotation:

{
  "text": "Thus, a chemical compound can involve an inventive step irrespective of whether it itself has an unexpected technical effect, or whether its effect is linked to the improvement in a complete processing, as is the case for the improvement in Z-isomer yield directly attributable to the intermediate compound (1) of claim 1, as set out above.",
  "concept": "involvesInventiveStep"
}

Figure 2: Example of a data point in JSON format after annotation.

Figure 3: Example of a data point in JSON format after model prediction.

Lastly, they divided their data into 6 folds, 4 of which were used for training in a 4-fold cross-validation setup (in each training run one of the folds is used as the evaluation fold and the 3 remaining folds are used to train the model) and 2 for final testing. In order to ensure a proper distribution of the data among the folds, they classified each legal concept into one of four categories and ensured the same number of elements of each category in each fold. This research was extended by Smywiński-Pohl and Libal in [24]. The authors tested a number of additional models and settings for sorting the sentences, using the same dataset as input. They found that DeBERTa v. 3 [9] in the large variant gives the best results for this task.

Table 1: Results obtained by the models presented in [24] on the test subset.

Their best results of running the model on the test set for $k=10$ and $k=100$ are summarized in Table 1. The table concerns two setups: one without a voting scheme and the other including the voting scheme.
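The stratified fold construction described above (each legal concept assigned a category, with categories spread evenly over the folds) can be sketched as follows; `assign_folds` and `category_of` are hypothetical names, since the original splitting code is not published here:

```python
from collections import defaultdict

def assign_folds(concepts, category_of, n_folds=6):
    """Distribute concepts into folds so that concepts of each category
    are spread evenly, using a round-robin within each category."""
    by_category = defaultdict(list)
    for concept in concepts:
        by_category[category_of[concept]].append(concept)
    folds = [[] for _ in range(n_folds)]
    for cat_concepts in by_category.values():
        for i, concept in enumerate(cat_concepts):
            folds[i % n_folds].append(concept)
    return folds
```

With 6 folds, 4 can then be reserved for cross-validated training and 2 for final testing, as in the setup above.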
Since the training follows a cross-validation procedure with 4 models, it is possible to use all these models to decide on the final score of a sentence. We present the best results with and without the voting scheme; in the second setup the score is the average of the scores obtained by all models on the test set. In the following experiments we will use the approaches presented in [24] and [21] to answer the research questions. For the first two experiments we will train a cross-encoder model, following closely the training paradigm presented in [24], and we will report the averaged scores obtained on the test splits. For the last experiment we will follow the LLM approach presented in [21], but we will extend the results to the full test set. This will enable us to present the NDCG scores and compare the different approaches directly.

# 4 RESEARCH QUESTIONS

# 4.1 RQ1: How many examples should be manually annotated?

The number of annotated examples required for effective model training was not addressed in [19]. As noted in that work, annotation is time-consuming and error-prone. Our first research question thus explores whether only a subset of the available sentences needs to be annotated to achieve comparable results. This topic also connects with the broader literature on cost-effective annotation strategies. Ein-Dor et al. [7] investigate how few training examples are required when employing active learning for BERT-like models. Their work supports the hypothesis that significant gains can be achieved with fewer annotations if selection is optimized. Our approach to answering this question is to randomly pick up to $k$ sentences for each concept, with values of $k$ ranging from 100 to 1000 in steps of 100. If the number of available sentences for a concept is below $k$, we take the whole set for that concept. The model trained on the chosen sentences is then compared to a model trained on all sentences.
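The random per-concept subsampling described above can be sketched as follows (an illustrative helper, assuming the dict-based data points with a `concept` field shown in the data-format section):

```python
import random
from collections import defaultdict

def sample_per_concept(examples, k, seed=0):
    """Randomly keep up to k sentences per concept; concepts with fewer
    than k sentences are kept in full, as described for RQ1."""
    rng = random.Random(seed)
    by_concept = defaultdict(list)
    for ex in examples:
        by_concept[ex["concept"]].append(ex)
    subset = []
    for sentences in by_concept.values():
        subset.extend(sentences if len(sentences) <= k else rng.sample(sentences, k))
    return subset
```

Fixing the seed mirrors the repeated runs over seeds 0–4 used later in the experiments.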
In this research question we assume that for each concept we take up to $k$ sentences, train the models, and compute their performance on the full testing subset. We compare the result with the performance of models trained on the whole dataset. This means that for some concepts all examples will be taken, if the total number of sentences found is lower than $k$. We assume that $k$ is the same for all concepts, even though the distribution of sentences differs between concepts, as does the distribution of labels for a given concept. We leave the question of how to adapt the number of examples to the sentence and value distribution for future research. For $k$ we test values from 100 to 1000 in steps of 100. We take 1000 as the maximum, since in a preliminary experiment we observed that there is almost no difference between training on up to 1000 examples per concept and the full training set.

# 4.2 RQ2: Which sentences to choose for annotation?

Our second research question focuses on the sentences chosen for annotation and whether a preliminary sorting of the sentences can provide a better result than a random choice. As a reminder, the sentences are classified into four different classes, with an unknown distribution among the sentences. Our approach to answering this question has two parts. First, we decided to consider as higher quality for the purpose of training those sentences which are classified as more relevant for giving an interpretation of the concept. Second, we decided to use active learning to achieve that. This method is informed by prior work on active learning and sentence selection, including Gray et al. [8] and Westermann et al. [23], which show that LLMs and embeddings can assist in prioritizing high-value sentences. Active learning is a process where training examples are used selectively and incrementally to train a model.
Our approach to selecting the best candidates is to use a previously trained model to rank and sort the sentences. For each of the four splits under consideration, a model is created by training on the other three annotated splits. The iterative and incremental element comes from the algorithm requesting the user to annotate specific examples at each phase, ranked according to the model from the previous iteration. In this way, the accuracy of choosing sentences which are classified as most relevant increases at each step. In our experiment we go a step further and consider a model trained on all examples of the three remaining splits, without the iterative and incremental phases. The rationale behind this is to answer the question whether this approach, given optimal settings, brings value to the annotation process. We have therefore taken an optimal setting by considering all examples of the other splits as training data points. Although this approach cannot be reproduced in practice, due to the lack of already annotated examples, it is quite useful, as will be shown later, for answering what can be achieved with such an approach. The model in the preliminary sorting phase is used as follows. First, the model is applied to rank each of the sentences. We then sort the sentences according to this rank, with more relevant sentences appearing first. We then repeat the experiment from RQ1, but this time the sentences are taken from the sorted list rather than chosen randomly.

# 4.3 RQ3: Do we need the manual annotation at all?

In the last question we want to check whether the whole annotation effort is necessary at all. Thus we follow the approach presented by Savelka et al. in [21] and use an LLM as the annotator. Similarly to [21], [3], [11], we create a prompt based on the same annotation guidelines and pass each sentence, together with the concept and the provision, to the LLM and ask it to provide an annotation label.
There are several differences between our approach and the one taken in [21]. First, in [21] only 256 sentences were selected for automatic annotation, because of the cost associated with using GPT-4, a closed-source model. Our use of an open source model allowed us to automatically annotate all 11k examples in the test set, thus obtaining a more accurate estimate of the model's performance.

Table 2: Hyper-parameters used for training the base and large variants of the DeBERTa v. 3 models.

Second, the fact that we automatically annotated all examples in the test set allowed us to go beyond checking only the accuracy and F1 score of the model, as is done in [21]. Since our goal is to provide the 10 or 100 most relevant sentences, the NDCG scores are more relevant than accuracy or F1 scores. Besides the label of each class, we also register the probabilities associated with the first tokens compatible with the valid labels. This allows us not only to compute classification scores such as accuracy and F1, but also to compute a real-valued score for each sentence and to sort the sentences according to that score. This makes it possible to apply the method directly for presenting the top sentences to the end user.

# 5 RESULTS

To answer RQ1 and RQ2 we apply the following training procedure. Like [17], we train 4 models in a cross-validation setup, so each model is trained on 3 splits and validated during training on the remaining split. Following [24], we use the DeBERTa (v. 3) base (184 million parameters) and large (435 million parameters) variants with the hyper-parameters given in Table 2. We save the model after every epoch and compute the NDCG@10 score (the metric parameter in the table) on the validation set to select the best model for a given training run, which is then used to compute the NDCG@10 and NDCG@100 scores on the testing set.
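The per-run model selection described above (save after every epoch, keep the checkpoint with the best validation NDCG@10) can be sketched generically; `train_epoch` and `validation_ndcg10` are stand-ins for the actual training and evaluation routines, which are not reproduced here:

```python
def train_with_best_checkpoint(model, n_epochs, train_epoch, validation_ndcg10):
    """Train for n_epochs, evaluate NDCG@10 on the validation split after
    each epoch, and return the state of the best-scoring epoch."""
    best_score, best_state = float("-inf"), None
    for _ in range(n_epochs):
        model = train_epoch(model)
        score = validation_ndcg10(model)
        if score > best_score:
            best_score, best_state = score, model
    return best_state, best_score
```

The returned checkpoint is then evaluated once on the test set, keeping the test split untouched during model selection.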
We use 4 labels, even though according to the findings in [24] the number of labels could be reduced to just 2. We train for 5 epochs with a batch size of 8. There is no warm-up; we apply a linear decay schedule for the learning rate, which starts at 2e-05, and we use 768 as the maximum number of tokens passed to the model, even though the input can be longer. In the text passed to the model for classification, we put the concept first, then the sentence to be assessed, and the provision at the end, so if the input is too long, the provision may be shortened. These trainings are repeated 5 times with the random generator seed set to 0, 1, 2, 3 and 4. We repeat the training for each tested value of $k$ (100, 200, 300, ...), so we get 200 trainings in total (10 values of $k$, 4 splits, 5 repeats with different random seeds) for each model size and each setup (RQ1 and RQ2). Figure 4 shows the total number of sentences in the training subset (i.e. four splits) for different values of $k$. The scaling is not linear, since the distribution of sentences in the dataset is not even.

Figure 4: The total number of sentences in the training subset for different values of the maximum number of sentences ($k$) taken for each concept.

Since the deltas for growing values of $k$ get smaller, we see that the number of concepts having at least $k$ sentences shrinks quickly.

# 5.1 RQ1: How many examples should be manually annotated?

The results for the first experiment are given in Table 3 and Figure 5. We report the scores obtained on the test subset, averaged over the splits and the different values of the random seed. In the figure we plot the standard deviation of the results for different values of the random seed (we take the averages over the splits as the input to compute the variability among the different seeds). For the large variant of the model we can observe a huge standard deviation among the results for some values of $k$.
We compare the base model with the large model, and the results differ significantly between these setups. For the base model we observe that the performance grows with growing values of $k$. We get 0.42 NDCG@10 for $k=100$ and 0.55 for $k=1000$, a +13 pp. improvement. NDCG@100 grows from 0.61 up to 0.70, a +9 pp. improvement. We also observe that we could gain even more from a larger number of examples in this setup, since for the full training dataset we have an NDCG@10 score of 0.68 (a 13 pp. improvement compared to $k=1000$) and an NDCG@100 score of 0.76 (a 6 pp. improvement), which are definitely large differences. The outcome for the large model is very different, i.e. the performance improvement due to the growing number of examples is very small. We reach a peak score of 77.3% NDCG@10 for 500 examples and then 77.9% NDCG@10 for 1000 examples (compared to 79.0% for the full dataset, a −1.1 pp. difference). For NDCG@100 there is also a peak at 500 examples (78.3%) and the best score is for 1000 examples (79.0%), which is better than the result for the full dataset (78.6%). So we either observe small differences (like a 1 pp. drop in NDCG@10 if we limit the number of examples to 500) or even improvements with a smaller number of examples (NDCG@100 for 1000 is better than for the full dataset). The second phenomenon might be due to the fact that without the threshold, one of the concepts in the training dataset dominates it, and the models overfit to that concept, losing their generalization power.
Table 3: NDCG scores @10 and @100 of training DeBERTa (v. 3) base and large models on up to $k$ random sentences for each concept.

To summarize these findings, we can conclude that if we are going to train a small model (e.g. because we are concerned with deployment costs) we should annotate as many examples as possible. However, for the larger model we can limit the number of examples to 500 (if we accept a 1 pp. drop in performance) or 1000, which could even bring improvements compared to training on the full dataset.

# 5.2 RQ2: Which sentences to choose for annotation?

The second research question concerns the problem of selecting the sentences for annotation. In RQ1 we took a random sample of the sentences containing the legal concept. Here we take a different approach – we take the top-$k$ examples, not a random sample. To sort the sentences for each split in the training subset, we take a model trained on the remaining splits, including the full training dataset. This setup would not make much sense in a real setting, i.e. when we are building a new dataset for statutory interpretation; the experiment should be viewed primarily as an optimistic limit for a setup in which such a model is available. Here we think specifically of the transfer learning scenario, i.e. how much annotation effort we could save (in the best case) if we took a dataset or a model trained for statutory interpretation in a different jurisdiction or on a completely different set of legal concepts. Since we cannot expect to obtain better results in such a case than in this experiment, we can use this information to make our choices regarding the annotation process. The results of training the DeBERTa model (base and large variants) on the top-$k$ sentences are given in Table 4 and Figure 6.
For the base model, similarly to the previous experiment, the performance increases steadily with the growing number of top sentences, until a threshold of $k=800$ sentences per concept is reached. Then we observe a slight drop in performance, but we also see that the variance of the results for $k=900$ is much higher, so the observed outcome might be due to just one or a few models that performed particularly badly in this setup.

Figure 5: NDCG@10 and NDCG@100 results for the DeBERTa base (left) and large (right) model trained on up to $k$ random sentences. The shaded contour indicates the standard deviation of the scores over 5 runs.

Table 4: NDCG scores @10 and @100 of training DeBERTa (v. 3) base and large models on the top-$k$ sentences.

The difference in NDCG@100 between $k=800$ and $k=1000$ is small (+0.7 pp.). For NDCG@10 there is a +1.2 pp. difference between $k=800$ and $k=1000$. Our interpretation of these results is that for the base model there is a tendency towards better scores with a growing number of examples, but the plot seems to flatten around 800 examples per concept. It should also be noted that there is a negligible difference (+0.2 pp. for both NDCG@10 and NDCG@100) between training on $k=1000$ and training on the full dataset in this setup. The comparison between the two experiments yields the following observation. For the base model trained without sorting, we observe a huge discrepancy between the NDCG@10 and NDCG@100 metrics, meaning that the top results would be much worse for the base model in that setup.
The gap between these metrics is in the range of 15–18 pp. for the randomly selected sentences, while for the sorted sentences we observed an 8–15 pp. gap. Moreover, for the dataset with 1000 examples in this experiment there is a marginal difference (0.2 pp. for both metrics) compared to the full dataset. For the random version we observe a 13 pp. difference for NDCG@10 and a 6 pp. difference for NDCG@100 between $k=1000$ examples and the full dataset. The conclusion is that for the base model it is much better to sort the sentences according to some model first, since otherwise we can expect a huge performance reduction. For the large model there is no such trend – the performance fluctuates in a narrow range for most of the settings. It grows from 100 to 200 sentences, then falls from 200 to 500, then increases until 700 sentences, falls at 800, and grows until 1000 sentences are reached. The peak at 200 sentences is only marginally worse than training on the full dataset (0.7 pp. for NDCG@10 and 1.3 pp. for NDCG@100). The results for $k=1000$ are better than the results of training on the full dataset (0.1 pp. for NDCG@10 and 0.5 pp. for NDCG@100). It should be noted that the drops in observed performance might be caused by one or several bad trainings for $k=400$, $k=500$ and $k=800$, since we can observe a huge standard deviation for these values of $k$. From the second experiment we can conclude that the base model shows steady improvements with a growing number of examples, but it is not sensible to train the model on the full dataset, since there is practically no difference between training with $k=1000$ and the full dataset.
For the larger model there is no such trend, and very good performance can be obtained with just 200 examples. For both models we can clearly state that there is no reason to annotate all sentences containing the legal concepts in question. Looking at the plots, it seems that 1000 sentences per concept (for the base model) and even as few as 200 sentences per concept (for the large model) are enough to obtain results that are only marginally worse than those obtained when annotating the full dataset. It is thus apparent that the manual annotation effort can be substantially reduced by annotating only a subset of the results. In fact, if we only have a budget for 200 sentences per concept, then there is no difference in the obtained performance between the randomly picked sentences and the sorted ones (approx. 75% NDCG@10 and 77% NDCG@100). So the setup with random sentences is the preferred one, since we do not need an existing model to initially sort the sentences. If we want to improve the results a bit, we can target 500 sentences per concept, where the random version gains 7 pp. for NDCG@10 and 3 pp. for NDCG@100 compared to the setup with the sorted sentences. Using the sorted sentences with the large model only makes sense if we want to annotate up to 1000 examples for each concept, but we will only gain 1.2 pp. for NDCG@10 in that setup.

Figure 6: NDCG@10 and NDCG@100 results for the DeBERTa base (left) and large (right) model trained on the top-$k$ sentences. The shaded contour indicates the standard deviation of the scores over 5 runs.

To sum up the results of this experiment, we see that for the base model it makes much more sense to use the approach with sentence sorting, while for the large model we will not gain much from it.
On the contrary, we observe a much more stable performance improvement when we draw the samples randomly for the large model. Since computing power is getting cheaper, a number of efficient training techniques exist, and the second setup does not require a pre-existing model for sorting the sentences, the recommended approach is to use a larger model without sorting, which allows us to select from a range of thresholds depending on our budget. If we still want to train a base model, because we are concerned with the costs of deploying the model, we can follow this scenario. First, we randomly annotate 500–1000 examples for each concept and train a large model on that dataset. Then we use that model to sort the full set of sentences. Finally, we pick up to 1000 sentences for each concept, annotate those that were not yet annotated, and use such a dataset to fine-tune the base model. According to the experiments conducted so far, this should give us performance similar to the setup in which we annotated the full dataset.

# 5.3 RQ3: Do we need the manual annotation at all?

To answer the third research question we followed the approach presented in [21], where the authors checked whether a large language model (GPT-4 in that case) is able to provide annotations of good quality. The authors found that the annotations provided by GPT-4 are of medium quality – somewhere in the middle between the top-performing and the worst-performing human annotators. That experiment was limited in scope – the authors wanted to reduce the cost of using the OpenAI API, so they automatically annotated only 256 sentences. As a result they were not able to compute the metrics used to quantify the sorting of the sentences. In RQ3 we have introduced the following changes to the experimental setting.
First of all, to reduce the cost of the experiment and at the same time to check whether open source models can be a good alternative to closed models like GPT-4, we tested Qwen 2.5 with 72 billion parameters [25, 27] in the instruct version. Since, according to our tests, the version of Qwen uploaded to HuggingFace is invalid, i.e. it lacks the definitions of some tokens used for instruction fine-tuning, we used unsloth/Qwen2.5-72B-Instruct, an exact copy of the original model with the missing tokens included. Secondly, since we did not have to pay for an API, we annotated the full test subset of the dataset (more than 11 thousand sentences). The authors of [21] tested two variants of the prompt used to obtain the labels. We followed this setup and tested exactly the same change in the prompt, which concerns the definition of the certain value label. To obtain the results with optimized generation techniques (which reduce the computational time), we used the vLLM library [13]. This library uses a KV cache [16] and prefix caching [28], which were both turned on during inference. Qwen is a generative model and we used it as such, i.e. we did not replace the head of the model to construct a classification network. This is not optimal, since the model predicts over all values appearing in the model's dictionary and uses autoregressive generation to produce a piece of text. Since the only generated strings we care about are no value, potential value, certain value and high value, we applied guided decoding [2] to limit the outputs of the model to these strings. All these techniques contributed greatly to improving the performance of the inference, and we were able to compute the labels for the test sentences (11 thousand) in less than 18 minutes on a node with 4 × GH200 superchips, each with an H100 96GB GPU. The problem of label prediction in [21] is posed as a text classification task.
But these labels are later used to sort the sentences in order to present to the end user those sentences that are the most valuable according to the model. So, besides just predicting the label of a sentence, we obtained the sentence's score by computing the probability of each valid first token associated with a specific label and then computing the weighted sum with the labels' values mapped to numbers (0 for no value, 1 for potential value, 2 for certain value and 3 for high value). This allowed us to sort all sentences for a given concept according to that score and to compute the NDCG metric for the full test set.

Table 5: Accuracy, (weighted) F1, NDCG@10 and NDCG@100 scores of predicting the label by Qwen 2.5 Instruct 72B on the test subset of the statutory interpretation dataset.

The results of this experiment are given in Table 5. The table presents two prompts from the cited research – the direct conversion of the guidelines and a corrected version with an improved definition of the certain value label. We used the variant without explanation and without batched prediction. We also employed the few-shot prompting technique by supplementing the prompt with four examples taken randomly from the training set, one for each value of relevance. The obtained accuracy and F1 scores are very similar to those obtained with GPT-4 in [21]. For the unmodified prompt they were 0.51 and 0.53 in the original research, and we obtained 0.51 and 0.51 (−2 pp.). For the improved prompt they were 0.55 and 0.57, and we obtained 0.54 (−1 pp.) and 0.56 (−1 pp.) with the Qwen model. Thus the first outcome is that a moderately sized (72 billion parameters), state-of-the-art open source model, Qwen 2.5, currently obtains results very similar to those of GPT-4 in the statutory interpretation task.
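The conversion from label probabilities to a continuous relevance score described above can be sketched as follows (a minimal illustration; it assumes the four labels are distinguished by their first tokens, and renormalizes over those tokens before taking the value-weighted sum):

```python
# Label values as in the paper: no value = 0, potential value = 1,
# certain value = 2, high value = 3. The keys are the (assumed) first
# tokens that distinguish the four valid label strings.
LABEL_VALUES = {"no": 0, "potential": 1, "certain": 2, "high": 3}

def sentence_score(first_token_probs):
    """Turn the probabilities of the labels' first tokens into a
    continuous score: renormalize over the four labels, then take
    the weighted sum of the numeric label values."""
    total = sum(first_token_probs[label] for label in LABEL_VALUES)
    return sum(first_token_probs[label] / total * value
               for label, value in LABEL_VALUES.items())
```

Sentences for a concept can then be sorted by this score in descending order, which is what makes the NDCG@10 and NDCG@100 evaluation possible.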
Still we have to remember that the original research was conducted only for a small subset of the sentences, while we have verified the results on the full testing set. Yet the second result is much more interesting, i.e. the NDCG scores obtained with the prompts. The model with the original prompt achieves 0.777 NDCG@10 and 0.853 NDCG@100, while the model with the improved prompt achieves 0.766 (-1.1 pp.) and 0.848 (-0.7 pp.) respectively. This outcome is interesting since the accuracy and F1 scores are better for the improved prompt. The best results for the large model are 0.791 for NDCG@10 and 0.791 for NDCG@100 (for the scenario with sorted sentences), so if we are very much concerned with the first metric, the manual annotation of up to 1000 sentences for each concept will give us 1.4 pp. better results. For the scenario with a random sample of sentences, the difference is negligible (0.2 pp. for NDCG@10 with 1000 sentences). Such an improvement will rarely justify the cost of annotation. We also have to observe that none of the models trained on manual annotation achieved an NDCG@100 score better than the automatic annotation with the help of Qwen. Comparing the results achievable with Qwen and manual annotation, we observe that for the base model in all setups we can achieve better results with the LLM than with the manual annotation. The best NDCG@10 score for the base model was 0.681 and it was 0.761 for NDCG@100. With the LLM we obtain 9.6 pp. better results for NDCG@10 and 10.8 pp. better results for NDCG@100. The only concern when using the model is the computational cost of annotation. A node with 4 × GH200 chips is very expensive.
Still, services such as Lambda Labs rent 1 GH200 for $3.32/h, so the cost of annotating even hundreds of thousands of sentences with the help of the model should be very small. To conclude the outcome of the last experiment, we state that it is sufficient for the statutory interpretation task to use an LLM such as Qwen 2.5 with 72 billion parameters. There is a very low chance that a model trained on the manual annotation of the dataset will yield better results with respect to the NDCG scores, at least if we stick to fine-tuning models of sizes and performance similar to DeBERTa.
One of the elements of legal research is looking for cases where judges have extended the meaning of a legal concept by providing interpretations of what a concept means or does not mean. This allows legal professionals to use such interpretations as precedents, and laymen to better understand the legal concept. The state-of-the-art approach for retrieving the most relevant interpretations of these concepts currently depends on ranking sentences with language models trained over annotated examples. That manual annotation process can be quite expensive and needs to be repeated for each such concept, which has prompted recent research into automating this process. In this paper, we highlight the results of various experiments conducted to determine the volume, scope and even the need for manual annotation. First, we check the optimal number of annotations per legal concept. Second, we check whether we can draw the sentences for annotation randomly, or whether there is a gain in the performance of the model when only the best candidates are annotated. As the last question, we check the outcome of automating the annotation process with the help of an LLM.
# 1. Introduction

As neural networks grow following established scaling laws (Kaplan et al., 2020; Hoffmann et al., 2022), they become increasingly inaccessible to much of the research community. Training models with hundreds of billions of parameters requires computational resources available only to select institutions, threatening to concentrate AI advancement within well-resourced organizations. The fundamental bottleneck lies in end-to-end backpropagation (Rumelhart et al., 1986; He et al., 2016), which requires storing intermediate activations across the entire network, resulting in prohibitive memory demands for large models. This memory bottleneck is particularly critical for generative AI applications, where large-scale models are essential for high-quality generation.

Figure 1. Overview of DiffusionBlocks compared to end-to-end backpropagation. Traditional training (top) requires backpropagating gradients through all blocks, creating memory bottlenecks. Our approach (bottom) trains each block independently as a diffusion-based denoiser for a specific noise range, eliminating gradient dependencies and achieving $B$-fold memory reduction during training.

Previous layerwise training approaches (Hinton, 2022; Bengio et al., 2006; Nøkland & Eidnes, 2019; Belilovsky et al., 2019; Siddiqui et al., 2024) have underperformed compared to end-to-end backpropagation, primarily because they lack principled mechanisms to coordinate information flow between independently trained layers and struggle to balance parameter allocation effectively. Moreover, these approaches have been predominantly evaluated on image classification tasks, with limited exploration of generative modeling applications. Meanwhile, diffusion models (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Ho et al., 2020; Song et al., 2021) have revolutionized generative modeling through their mathematically principled approach to distribution transformation.
Recent advances in network conditioning (Karras et al., 2022) and sampling efficiency (Lu et al., 2022; 2023; Zhao et al., 2023) have established diffusion models as state-of-the-art across multiple domains. We propose DiffusionBlocks, a framework that reconceptualizes neural network training by interpreting network blocks as implementing discretized steps of a continuous-time reverse diffusion process. Our key innovation is a principled mapping between network blocks and noise-level ranges based on equal cumulative probability mass, ensuring each block confronts an equally challenging learning problem. This approach enables independent block training without requiring gradient communication between blocks. Through experiments on image generation and language modeling tasks, we demonstrate that DiffusionBlocks reduces memory requirements proportionally to the number of blocks while achieving competitive or superior performance. Our primary contributions are:

• A diffusion-inspired blockwise training framework achieving true block independence in continuous time, where each block can be trained without requiring gradients from other blocks.

• An equi-probability partitioning strategy that optimally allocates learning difficulty across blocks based on cumulative probability mass, ensuring balanced parameter utilization.

• Comprehensive empirical validation demonstrating $B$-fold memory reduction (with $B$ blocks) and improved performance on both image generation and language modeling tasks.

Figure 1 illustrates our approach compared to traditional end-to-end backpropagation. Unlike conventional methods that require gradient flow across all blocks, DiffusionBlocks enables truly independent block training through diffusion-based denoising objectives.

# 2. Preliminaries

# 2.1. Score-Based Diffusion Models

Let $\mathbf{z}_0 \in \mathbb{R}^d \sim p_{\mathrm{data}}$ denote a clean data sample.
Following the Variance-Exploding (VE) formulation (Song et al., 2021; Karras et al., 2022), we perturb $\mathbf{z}_0$ with Gaussian noise whose standard deviation $\sigma(t)$ increases monotonically with the (continuous) time variable $t \in [0, 1]$:

$$
\mathbf{z}_t = \mathbf{z}_0 + \sigma(t)\boldsymbol{\epsilon}, \quad \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}).
$$

This gives $\mathbf{z}_t \sim \mathcal{N}(\mathbf{z}_0, \sigma(t)^2 \mathbf{I}) = p_t(\mathbf{z}_t \mid \mathbf{z}_0)$ with marginal distribution $p_t(\mathbf{z}_t) = \int p_{\mathrm{data}}(\mathbf{z}_0)\, p_t(\mathbf{z}_t \mid \mathbf{z}_0)\, \mathrm{d}\mathbf{z}_0$. The continuous-time formulation of this process is described by a stochastic differential equation (SDE):

$$
\mathrm{d}\mathbf{z}_t = \sqrt{\frac{\mathrm{d}\sigma(t)^2}{\mathrm{d}t}}\, \mathrm{d}\mathbf{w}, \quad t \in [0, 1]
$$

where $\mathbf{w}$ is a standard Wiener process. For generating samples, we employ the Probability Flow ODE (PF-ODE), which shares the same marginal distributions as the SDE but follows deterministic trajectories:

$$
\frac{\mathrm{d}\mathbf{z}_t}{\mathrm{d}t} = -\dot{\sigma}(t)\sigma(t)\nabla_{\mathbf{z}} \log p_t(\mathbf{z}_t)
$$

where $\dot{\sigma}(t) = \frac{\mathrm{d}\sigma(t)}{\mathrm{d}t}$ and $\nabla_{\mathbf{z}} \log p_t(\mathbf{z}_t)$ is the score of the density $p_t(\mathbf{z}_t)$. Following Karras et al. (2022), we can eliminate the abstract time variable by parameterizing directly in terms of noise levels.
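The VE perturbation above is easy to sanity-check numerically. This is a toy scalar sketch with made-up values for $z_0$ and $\sigma$; samples of $z_t$ should be distributed as $\mathcal{N}(z_0, \sigma^2)$:

```python
# Empirical check of the VE perturbation z_t = z_0 + sigma(t) * eps for a
# scalar z_0: the perturbed samples should have mean z_0 and stdev sigma.
import random
import statistics

random.seed(0)

def perturb(z0: float, sigma: float) -> float:
    return z0 + sigma * random.gauss(0.0, 1.0)

z0, sigma = 1.5, 0.8
samples = [perturb(z0, sigma) for _ in range(20000)]
print(round(statistics.mean(samples), 1),   # approximately 1.5
      round(statistics.stdev(samples), 1))  # approximately 0.8
```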
Setting $\sigma(t) = t$, the PF-ODE simplifies to:

$$
\frac{\mathrm{d}\mathbf{z}_\sigma}{\mathrm{d}\sigma} = -\sigma \nabla_{\mathbf{z}} \log p_\sigma(\mathbf{z}_\sigma).
$$

To estimate this score function, we parameterize it using a neural network. We leverage the relation $\nabla_{\mathbf{z}} \log p_\sigma(\mathbf{z}_\sigma) \approx \frac{\mathbf{z}_0 - \mathbf{z}_\sigma}{\sigma^2}$ (Robbins, 1992) to approximate the score in terms of a denoiser $D_{\boldsymbol{\theta}}(\mathbf{z}_\sigma, \sigma)$ that predicts the clean data:

$$
\nabla_{\mathbf{z}} \log p_\sigma(\mathbf{z}_\sigma) \approx \frac{D_{\boldsymbol{\theta}}(\mathbf{z}_\sigma, \sigma) - \mathbf{z}_\sigma}{\sigma^2}.
$$

The denoiser is trained using a weighted $L_2$ loss:

$$
\mathcal{L}(\boldsymbol{\theta}) = \mathbb{E}_{p_{\mathrm{data}},\, p_\sigma,\, \mathcal{N}(\mathbf{0}, \mathbf{I})} \left[ w(\sigma) \left\| D_{\boldsymbol{\theta}}(\mathbf{z}_\sigma, \sigma) - \mathbf{z}_0 \right\|_2^2 \right]
$$

where $w(\sigma)$ is a weighting function and $p_\sigma$ is the distribution from which noise levels are sampled during training.

# 2.2. Neural Network Block Structure

Consider a deep neural network with $L$ layers, parameterized by $\boldsymbol{\theta} = (\boldsymbol{\theta}_0, \boldsymbol{\theta}_1, \dots, \boldsymbol{\theta}_{L+1})$.
Traditional end-to-end training processes the input $\mathbf{x} \in \mathcal{X}$ through the network to produce an output $\hat{\mathbf{y}} \in \mathcal{Y}$ as follows:

$$
\begin{aligned}
\mathbf{z}^{(0)} &= f_0(\mathbf{x}; \boldsymbol{\theta}_0) \quad \text{(input embedding)} \\
\mathbf{z}^{(l)} &= f_l(\mathbf{z}^{(l-1)}; \boldsymbol{\theta}_l), \quad l \in [L] \\
\hat{\mathbf{y}} &= f_{L+1}(\mathbf{z}^{(L)}; \boldsymbol{\theta}_{L+1}) \quad \text{(output projection)}
\end{aligned}
$$

A loss function $\mathcal{L}(\hat{\mathbf{y}}, \mathbf{y})$ is computed between the predicted output $\hat{\mathbf{y}}$ and target $\mathbf{y}$. Backpropagation calculates gradients $\nabla_{\boldsymbol{\theta}} \mathcal{L}$ by propagating error signals backward through the entire network, requiring storage of all intermediate activations $\{\mathbf{z}^{(l)}\}_{l=0}^{L}$. This memory requirement scales with network depth and batch size, creating a bottleneck for large-scale models. When partitioning a network into blocks, we group consecutive layers together to form $B$ blocks, where each block $i \in [B]$ consists of multiple layers and is parameterized by $\boldsymbol{\theta}_i$. In traditional blockwise approaches, defining appropriate training objectives for each block remains challenging, as these blocks must coordinate to accomplish the overall task without end-to-end supervision.

# 2.3.
Residual Connections as Euler Steps of the Reverse Diffusion Process

The connection between residual networks and continuous-time ODEs has been established in prior work (Haber & Ruthotto, 2017; Chen et al., 2018), where residual updates $\mathbf{z}^{(l)} = \mathbf{z}^{(l-1)} + g_{\boldsymbol{\theta}_l}(\mathbf{z}^{(l-1)})$ are shown to correspond to Euler discretizations of ODEs. We extend this perspective to our blockwise diffusion framework. In diffusion models, the forward process adds noise progressively, while the reverse process removes it to generate data. This reverse process can be formulated either as a stochastic differential equation (SDE) or its deterministic counterpart, the PF-ODE (Eq. (3)). While both formulations share the same marginal distributions, we focus on the PF-ODE due to its deterministic nature, which aligns naturally with the deterministic forward pass of neural networks. Applying Euler discretization to Eq. (4) with noise levels $\sigma_0 > \sigma_1 > \cdots > \sigma_N$ yields:

$$
\begin{aligned}
\mathbf{z}_{\sigma_l} &= \mathbf{z}_{\sigma_{l-1}} + \Delta\sigma_l \cdot \sigma_{l-1} \nabla_{\mathbf{z}} \log p_{\sigma_{l-1}}(\mathbf{z}_{\sigma_{l-1}}) \\
&= \mathbf{z}_{\sigma_{l-1}} + \underbrace{\frac{\Delta\sigma_l}{\sigma_{l-1}} \left( D_{\boldsymbol{\theta}}(\mathbf{z}_{\sigma_{l-1}}, \sigma_{l-1}) - \mathbf{z}_{\sigma_{l-1}} \right)}_{=:\, g_{\boldsymbol{\theta}_l}(\mathbf{z}_{\sigma_{l-1}})},
\end{aligned}
$$

where $\Delta\sigma_l = \sigma_{l-1} - \sigma_l > 0$ and we used the score approximation from Eq. (5).
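This discretization can be sanity-checked with a toy sampler. Under the assumption of a point-mass data distribution, the oracle denoiser is $D(\mathbf{z}, \sigma) = \mathbf{z}_0$, and the EDM-style Euler steps $\mathbf{z} \leftarrow \mathbf{z} + (\Delta\sigma/\sigma)(D(\mathbf{z}, \sigma) - \mathbf{z})$ should carry a pure-noise input exactly back to $\mathbf{z}_0$ once $\sigma$ reaches zero (the noise schedule below is illustrative; real blocks would be trained networks):

```python
# Toy Euler sampler for the PF-ODE: z <- z + (delta_sigma / sigma) * (D(z) - z).
# Assumption: p_data is a point mass at z0, so the oracle denoiser D(z, sigma)
# simply returns z0.
import random

random.seed(0)
z0 = [0.5, -1.0, 2.0]                        # the single "clean" data point

def denoiser(z, sigma):
    return z0                                 # oracle: D(z, sigma) = z0

sigmas = [80.0, 10.0, 1.0, 0.1, 0.0]          # decreasing noise levels
z = [zi + sigmas[0] * random.gauss(0, 1) for zi in z0]   # start from sigma_max

for s_prev, s_next in zip(sigmas, sigmas[1:]):
    d_sigma = s_prev - s_next                 # Delta sigma_l > 0
    pred = denoiser(z, s_prev)
    z = [zi + (d_sigma / s_prev) * (pi - zi) for zi, pi in zip(z, pred)]

# Each step scales the noise component by s_next / s_prev, so the final
# step (to sigma = 0) recovers z0.
assert all(abs(zi - z0i) < 1e-9 for zi, z0i in zip(z, z0))
```

Each Euler step here has exactly the residual form $\mathbf{z} + g(\mathbf{z})$, which is the correspondence the section exploits.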
This reveals that each denoising step naturally takes the form of a residual update $\mathbf{z}_{\sigma_l} = \mathbf{z}_{\sigma_{l-1}} + g_{\boldsymbol{\theta}_l}(\mathbf{z}_{\sigma_{l-1}})$, matching the structure of modern neural architectures with skip connections. This mathematical correspondence explains why skip connections are essential for our framework: they naturally implement the Euler discretization of the reverse diffusion process. Architectures with residual connections—such as ResNets (He et al., 2016), U-Nets (Ronneberger et al., 2015), and transformer blocks with residual paths (Vaswani et al., 2017)—are therefore ideally suited for our approach. Architectures without skip connections would require implicit ODE solvers, which are computationally more complex and less compatible with our blockwise training approach. Therefore, we restrict our framework to architectures with explicit residual connections, ensuring compatibility between the network structure and the underlying continuous-time diffusion process.

# 3. Method

We now present DiffusionBlocks, our approach for training neural networks without end-to-end backpropagation. Our key insight is interpreting neural networks as implementing discretized steps of a continuous-time score-based diffusion process. This perspective enables training individual blocks independently while maintaining network-wide coherence through a shared mathematical framework (Figure 1).

# 3.1. Diffusion-Based Blockwise Training Framework

Traditional neural networks transform input $\mathbf{x}$ through hidden layers to output $\hat{\mathbf{y}}$.
We reconceptualize this as a reverse diffusion process: the input corresponds to noise ($\mathbf{z}_{\sigma_{\mathrm{max}}} \sim \mathcal{N}(\mathbf{0}, \sigma_{\mathrm{max}}^2 \mathbf{I})$), and the output to clean data ($\mathbf{z}_0 \sim p_{\mathrm{data}}$). Each network block then performs partial denoising within a specific noise range. Given a neural network with $L$ layers, we partition it into $B$ blocks, where each block contains one or more consecutive layers. Instead of training the entire network end-to-end, each block is assigned responsibility for a specific range of noise levels in the diffusion process. Specifically, block $i$ handles the noise level range $[\sigma_i, \sigma_{i+1}]$, where $i \in \{0, 1, \dots, B-1\}$, $\sigma_0 = \sigma_{\mathrm{max}}$ and $\sigma_B = \sigma_{\mathrm{min}}$ (typically set to a small positive value or zero). During training, for a block $i$ handling noise level range $[\sigma_i, \sigma_{i+1}]$, we train the corresponding denoiser $D_{\boldsymbol{\theta}_i}(\mathbf{z}_\sigma, \sigma, \mathbf{x})$ to predict the clean target:

$$
\mathcal{L}(\boldsymbol{\theta}_i) = \mathbb{E}_{p_{\mathrm{data}},\, p_\sigma^{(i)},\, \mathcal{N}(\mathbf{0}, \mathbf{I})} \left[ w(\sigma) \left\| D_{\boldsymbol{\theta}_i}(\mathbf{z}_\sigma, \sigma, \mathbf{x}) - \mathbf{y} \right\|_2^2 \right]
$$

where $p_\sigma^{(i)}$ is the distribution of noise levels specifically for block $i$, defined by restricting the global noise distribution to the range $[\sigma_i, \sigma_{i+1}]$. For tasks like language modeling, we replace the $L_2$ loss with cross-entropy after appropriate normalization.
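The per-block objective can be sketched as follows. This is a stand-in, not a trained network: the log-uniform noise sampling substitutes for the restricted $p_\sigma^{(i)}$, the weighting $w(\sigma)$ is kept trivial, and the "denoisers" are plain functions:

```python
# Sketch of the per-block objective: block i only ever sees noise levels
# drawn from its own range; no gradients cross block boundaries.
import math
import random

random.seed(0)

def block_loss(denoiser, y, x_cond, sigma_lo, sigma_hi, n_samples=500):
    """Monte-Carlo estimate of the weighted L2 loss for one block."""
    total = 0.0
    for _ in range(n_samples):
        # log-uniform stand-in for the block-restricted noise distribution
        sigma = math.exp(random.uniform(math.log(sigma_lo), math.log(sigma_hi)))
        z_sigma = [yi + sigma * random.gauss(0, 1) for yi in y]
        pred = denoiser(z_sigma, sigma, x_cond)
        w = 1.0                                # weighting w(sigma), kept trivial
        total += w * sum((pi - yi) ** 2 for pi, yi in zip(pred, y))
    return total / n_samples

y = [1.0, -2.0]                                # clean target
oracle = lambda z, sigma, x: y                 # perfect denoiser: zero loss
identity = lambda z, sigma, x: z               # does nothing: positive loss
assert block_loss(oracle, y, None, 0.1, 1.0) == 0.0
assert block_loss(identity, y, None, 0.1, 1.0) > 0.0
```

Because the loss for block $i$ involves only $D_{\boldsymbol{\theta}_i}$, each block can be optimized in isolation, which is where the memory saving comes from.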
Each block-specific denoiser includes input embedding layers, neural network blocks, and output embedding components, making blocks truly independent. This block independence is the key to our memory efficiency—during training, we only need to store activations for a single block rather than the entire network. Specifically, our approach requires storage of activations for $L/B$ layers instead of all $L$ layers needed by end-to-end backpropagation, resulting in approximately $B$-fold memory reduction during training.

# 3.2. Equi-Probability Block Partitioning

A critical innovation in our approach is how we partition the noise levels among blocks. Following Karras et al. (2022), we recognize that different noise levels present varying degrees of difficulty for the denoising task. The intermediate noise range tends to be most challenging and impactful for learning, while very low or high noise levels are comparatively simpler.

Figure 2. Block partitioning strategies for noise level assignment. Colored regions represent individual blocks under our equi-probability partitioning, where each block handles equal cumulative probability mass from the EDM log-normal distribution (blue curve). Orange circles show our equi-probability boundaries that concentrate in the challenging intermediate noise region, while gray squares show uniform boundaries (equal intervals in log-space) for comparison. This strategy ensures balanced learning difficulty across blocks.

To optimize parameter utilization, we partition the range of noise levels $[\sigma_{\mathrm{min}}, \sigma_{\mathrm{max}}]$ into $B$ blocks such that each block handles an equal amount of cumulative probability under the noise distribution:
$$
\sigma_i = \exp\left( P_{\mathrm{mean}} + P_{\mathrm{std}} \cdot \Phi^{-1}(p_i) \right)
$$

where $p_i = \mathrm{CDF}_{\mathrm{min}} + \frac{i}{B} \cdot (\mathrm{CDF}_{\mathrm{max}} - \mathrm{CDF}_{\mathrm{min}})$ represents the target cumulative probability for block $i$, $\Phi^{-1}$ is the inverse CDF of the standard normal distribution, and $\mathrm{CDF}_{\mathrm{min}}$ and $\mathrm{CDF}_{\mathrm{max}}$ are the CDF values corresponding to $\sigma_{\mathrm{min}}$ and $\sigma_{\mathrm{max}}$, respectively. This partitioning ensures that each block handles an equal amount of cumulative probability mass:

$$
\int_{\sigma_i}^{\sigma_{i+1}} p_\sigma(\sigma)\, \mathrm{d}\sigma = \frac{1}{B}.
$$

Figure 2 illustrates how our approach allocates block boundaries to ensure equal cumulative probability across the noise level distribution. This strategy ensures that each block contributes equally to the overall learning task, optimizing parameter utilization. In contrast, naive uniform partitioning (e.g., dividing $[\sigma_{\mathrm{min}}, \sigma_{\mathrm{max}}]$ into equal intervals) would allocate too many parameters to easy regions while underserving challenging noise levels.

# 3.3. Controlled Block Overlap

To mitigate potential discontinuities between blocks, we introduce a controlled overlap between adjacent noise level ranges. For a block $i$ responsible for noise range $[\sigma_i, \sigma_{i+1}]$, we expand the training range to:

$$
[\sigma_i/\alpha,\; \sigma_{i+1} \cdot \alpha],
$$

where $\alpha := (\sigma_{i+1}/\sigma_i)^\gamma$ and $\gamma$ is the overlap coefficient.
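The boundary formula and the overlap expansion can be sketched with the standard normal inverse CDF from the Python standard library. Here $P_{\mathrm{mean}} = -1.2$ and $P_{\mathrm{std}} = 1.2$ follow the EDM log-normal defaults, and the $\sigma_{\mathrm{min}}$, $\sigma_{\mathrm{max}}$ values are illustrative:

```python
# Equi-probability noise-level boundaries sigma_0 > ... > sigma_B, plus the
# controlled overlap expansion of a block's training range.
import math
from statistics import NormalDist

def boundaries(B, sigma_min, sigma_max, P_mean=-1.2, P_std=1.2):
    nd = NormalDist()
    cdf = lambda s: nd.cdf((math.log(s) - P_mean) / P_std)
    cdf_min, cdf_max = cdf(sigma_min), cdf(sigma_max)
    sigmas = []
    for i in range(B + 1):
        p_i = cdf_min + (i / B) * (cdf_max - cdf_min)
        sigmas.append(math.exp(P_mean + P_std * nd.inv_cdf(p_i)))
    return sigmas[::-1]     # descending: sigma_0 = sigma_max, sigma_B = sigma_min

def overlap_range(s_a, s_b, gamma=0.1):
    """Expand [s_a, s_b] to [s_a / alpha, s_b * alpha], alpha = (s_b / s_a)^gamma."""
    alpha = (s_b / s_a) ** gamma
    return s_a / alpha, s_b * alpha

b = boundaries(B=4, sigma_min=0.002, sigma_max=80.0)
assert all(hi > lo for hi, lo in zip(b, b[1:]))          # strictly decreasing
assert abs(b[0] - 80.0) < 1e-3 and abs(b[-1] - 0.002) < 1e-6
```

With a block's range ordered as $\sigma_i > \sigma_{i+1}$, $\alpha < 1$, so the expansion widens the range on both sides, which is what gives adjacent blocks their shared training region.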
This controlled overlap ensures smoother transitions during inference by allowing each block to learn from samples slightly outside its primary range of responsibility. In all our experiments, we use $\gamma = 0.1$, which provides an effective balance between block independence and transition smoothness.

# 3.4. Implementation Details

Our implementation follows the EDM framework (Karras et al., 2022), including the preconditioning strategy. Detailed training and inference algorithms are provided in Appendix C.

# 4. Experiments

We evaluate DiffusionBlocks on image generation and language modeling tasks, demonstrating superior or comparable performance to end-to-end backpropagation while training with significantly reduced memory requirements. We also analyze key components of our framework.

# 4.1. Image Generation

Experimental Setup. We evaluate our method on CIFAR-10 (Krizhevsky, 2009) and ImageNet (Deng et al., 2009) at $256 \times 256$ resolution using Diffusion Transformer (DiT) architectures (Peebles & Xie, 2023). We use DiT-S with 12 layers and DiT-L with 24 layers, and partition them into 4 blocks. All models are trained with classifier-free guidance (Ho & Salimans, 2022), dropping labels with probability 0.1. For ImageNet, we follow Peebles & Xie (2023) in compressing images using a pre-trained VAE. Detailed hyperparameters and implementation specifics are provided in Appendix D.1.

Table 1. Image generation results comparing FID scores (lower is better). DiffusionBlocks achieves superior quality while training each block independently.

Results. Table 1 compares our approach against end-to-end backpropagation, showing that DiffusionBlocks achieves better FID scores on both datasets.
By training only one block at a time and optimizing each block independently, our approach reduces memory requirements during training by a factor of $B$ ($B = 4$ in our experiments)—backpropagation needs to be performed only through the active block rather than the entire network. Figure 3 shows examples of generated images from our model on the CIFAR-10 dataset. Additionally, a significant advantage of our approach is faster inference: while the baseline model requires forwarding through all layers for each diffusion step, our method only needs to use the relevant block. This results in approximately $3\times$ faster generation time.

# 4.2. Language Modeling

Experimental Setup. For language modeling, we use the One Billion Words Benchmark (LM1B) (Chelba et al., 2014) with a Llama-style architecture (Touvron et al., 2023) comprising 12 transformer layers partitioned into 4 blocks. We implement specialized attention mechanisms (Arriola et al., 2025) to handle autoregressive dependencies while maintaining diffusion-based denoising capabilities. We evaluate models using the MAUVE score (Pillutla et al., 2021), following the conditional generation protocol established by SEDD (Lou et al., 2024). Detailed hyperparameters and implementation specifics are provided in Appendix D.2.

Table 2. Language modeling results comparing MAUVE scores (higher is better). Our method achieves superior performance compared to end-to-end backpropagation.

Results. Table 2 shows that our method achieves superior MAUVE scores compared to end-to-end backpropagation, despite only requiring backpropagation through one block at a time during training. This demonstrates that our blockwise training approach can effectively learn high-quality text generation while maintaining significant memory efficiency.

# 4.3. Ablation Studies

We perform ablation studies on CIFAR-10 to analyze the importance of key components in our framework.
All experiments use the same network architecture and hyperparameters unless otherwise specified.

Block Partitioning Strategy. We compare our equi-probability partitioning strategy against uniform partitioning across noise levels. We disabled the block overlap of Section 3.3 to isolate the effectiveness of our partitioning strategy. As shown in Table 3, our approach outperforms uniform partitioning, achieving an FID of 45.50 compared to 68.06. While this improvement is meaningful, the difference highlights that both strategies can achieve reasonable performance, with our equi-probability approach providing a consistent advantage. This supports our hypothesis that allocating block capacity based on the intrinsic difficulty of denoising at different noise levels (as visualized in Figure 2) contributes to more effective parameter utilization. The uniform strategy, while functional, appears to be less optimal as it allocates equal capacity across all noise regions rather than concentrating resources where learning is most challenging.

Table 3. Effect of block partitioning strategy on CIFAR-10. Our equi-probability partitioning outperforms uniform partitioning by allocating blocks based on learning difficulty.

Table 4. Effect of block overlap on CIFAR-10. Controlled overlap between adjacent blocks significantly improves performance, with $\gamma = 0.1$ providing the optimal balance between block independence and transition smoothness.

Effect of Block Overlap. To evaluate the importance of controlled overlap between blocks, we varied the overlap coefficient $\gamma$ from 0 (no overlap) to 0.2 (substantial overlap). Table 4 demonstrates that controlled overlap significantly improves performance compared to strict block boundaries. Without overlap ($\gamma = 0$), FID degrades to 45.50 due to discontinuities between independently trained blocks. Performance improves as we introduce modest overlap, reaching optimal results at $\gamma = 0.1$ (FID 41.39).
However, excessive overlap ($\gamma \geq 0.15$) begins to degrade performance, with $\gamma = 0.2$ producing significantly worse results (FID 56.69), likely due to conflicting learning objectives when blocks have substantial overlap in their training regions. These results confirm that $\gamma = 0.1$ provides an effective balance between maintaining block independence and ensuring smooth transitions during inference.

Effect of Block Count. We investigate how performance varies with different numbers of blocks while keeping the total network depth constant (12 layers). Table 5 reveals a clear trade-off between FID score and computational efficiency. Using fewer blocks yields better FID scores due to larger block capacity—$B = 2$ achieves the best FID (38.58) but requires processing 6 layers per forward pass. As the number of blocks increases, inference becomes more efficient: $B = 4$ processes only 3 layers per step ($4\times$ faster than end-to-end) while maintaining reasonable FID (41.39), and $B = 6$ achieves a $6\times$ speedup at the cost of degraded performance (FID 53.74). The results suggest that $B = 3$ or $B = 4$ provide good balance points, offering substantial efficiency gains while preserving competitive generation quality. Beyond $B = 6$, individual blocks become too small (2 layers each) to perform effective denoising, leading to significant quality degradation. This analysis enables practitioners to choose the appropriate block count based on their specific quality requirements and computational constraints.

Table 5. Effect of block count on CIFAR-10. Fewer blocks achieve better FID but require more layers per diffusion step (L/S), creating a trade-off between quality and efficiency. Note that $\mathrm{L/S} = L/B$, where $L$ is the total number of layers (12) and $B$ is the number of blocks.

# 5. Related Work

Diffusion Models and Score-Based Generation.
Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) and score-based generative models (Song & Ermon, 2019; 2020; Song et al., 2021) have emerged as powerful frameworks for generative modeling. These models define processes that gradually transform simple distributions into complex ones through sequences of denoising steps. Recent advances in network conditioning (Karras et al., 2022), sampling efficiency (Lu et al., 2022; 2023; Zhao et al., 2023), and architectural improvements (Rombach et al., 2022; Peebles & Xie, 2023) have established diffusion models as state-of-the-art across various generative tasks. Our work leverages these mathematical foundations for neural network training, interpreting layer transformations through the lens of continuous-time diffusion processes.

Layer/Block-wise Training Methods. Various approaches have been proposed to train neural networks without end-to-end backpropagation. Synthetic Gradients (Jaderberg et al., 2017) enables decoupled neural interfaces by predicting gradients locally, while biologically motivated methods include Feedback Alignment (Lillicrap et al., 2016), the Forward-Forward algorithm (Hinton, 2022), and Target Propagation (Lee et al., 2015). Additional approaches include local learning methods (Nøkland & Eidnes, 2019; Belilovsky et al., 2019), greedy layer-wise pretraining (Bengio et al., 2006), and Blockwise Self-Supervised Learning (Siddiqui et al., 2024). However, these methods face two fundamental limitations: they lack principled theoretical foundations for coordinating information flow between independently trained components, and have demonstrated limited effectiveness on generative modeling tasks where maintaining coherent probabilistic modeling across components remains challenging. DiffusionBlocks addresses both limitations through the mathematical rigor of continuous-time diffusion theory, where each block's denoising objective naturally aligns with the global generative goal.
Memory-Efficient Implicit Depth Models. Neural ODEs (Chen et al., 2018) parameterize network dynamics as continuous-time differential equations, using the adjoint sensitivity method to achieve constant-memory backpropagation through time. Deep Equilibrium Models (DEQs) (Bai et al., 2019) represent another memory-efficient paradigm, directly solving for fixed points of implicit layers using root-finding and implicit differentiation, effectively creating infinite-depth networks with constant memory. While both approaches achieve memory efficiency through implicit computation, they fundamentally differ from our method: Neural ODEs still require end-to-end backpropagation through a single monolithic network, and DEQs focus on equilibrium computation rather than generative modeling. In contrast, DiffusionBlocks achieves true block independence by partitioning the continuous-time diffusion process into disjoint noise-level ranges, enabling genuinely parallel block training without any inter-block gradient flow.

Connection to Concurrent Work. Most closely related to our work is the concurrent NoProp framework (Li et al., 2025), which also interprets neural network training through diffusion principles. NoProp's discrete-time formulation (NoProp-DT) treats each network layer as a discrete denoising step, achieving memory-efficient training for classification tasks. However, their continuous-time variant (NoProp-CT) fundamentally differs from true blockwise training: it employs a single network $\hat{u}_\theta(z_t, x, t)$ that must handle all noise levels $t \in [0, 1]$, requiring end-to-end backpropagation through the entire architecture. This approach more closely resembles Neural ODEs (Chen et al., 2018) than blockwise methods.
Our framework achieves genuine blockwise independence in continuous time by partitioning the noise range $[\sigma_{\min}, \sigma_{\max}]$ into $B$ intervals, with each block $D_{\theta_i}$ independently responsible for its assigned range $[\sigma_i, \sigma_{i+1}]$. This enables a $B$-fold memory reduction during training while maintaining the mathematical rigor of continuous-time diffusion. Furthermore, our equi-probability partitioning based on cumulative distribution mass ensures optimal parameter utilization across blocks—a principled approach absent in NoProp's fixed layer-to-timestep mapping. Notably, while NoProp focuses primarily on classification tasks and evaluates against diffusion-inspired baselines, we demonstrate superior performance on generative modeling tasks—image generation and language modeling—where our framework naturally excels, directly comparing against conventional end-to-end backpropagation on established architectures.
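The equi-probability partitioning can be sketched numerically. The snippet below assumes an EDM-style log-normal noise distribution with hypothetical parameters (`p_mean = -1.2`, `p_std = 1.2`) and endpoints (0.002, 80); the paper's actual distribution and constants may differ:

```python
from statistics import NormalDist
import math

def equiprob_partition(sigma_min, sigma_max, num_blocks, p_mean=-1.2, p_std=1.2):
    """Split [sigma_min, sigma_max] into num_blocks intervals that each carry
    equal probability mass under an assumed log-normal noise distribution,
    ln(sigma) ~ N(p_mean, p_std). Returns the num_blocks + 1 boundaries."""
    nd = NormalDist(p_mean, p_std)
    lo = nd.cdf(math.log(sigma_min))
    hi = nd.cdf(math.log(sigma_max))
    # Equally spaced quantiles of the truncated mass map back to sigma boundaries.
    quantiles = [lo + (hi - lo) * i / num_blocks for i in range(num_blocks + 1)]
    return [math.exp(nd.inv_cdf(q)) for q in quantiles]

# Boundaries for B = 4 blocks over a typical diffusion noise range.
bounds = equiprob_partition(0.002, 80.0, num_blocks=4)
```

Each resulting interval $[\sigma_i, \sigma_{i+1}]$ then carries the same cumulative probability mass, so every block sees an equal share of training noise levels.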
Training large neural networks with end-to-end backpropagation creates significant memory bottlenecks, limiting accessibility to state-of-the-art AI research. We propose $\textit{DiffusionBlocks}$, a novel training framework that interprets neural network blocks as performing denoising operations in a continuous-time diffusion process. By partitioning the network into independently trainable blocks and optimizing noise level assignments based on equal cumulative probability mass, our approach achieves significant memory efficiency while maintaining competitive performance compared to traditional backpropagation in generative tasks. Experiments on image generation and language modeling tasks demonstrate memory reduction proportional to the number of blocks while achieving superior performance. DiffusionBlocks provides a promising pathway for democratizing access to large-scale neural network training with limited computational resources.
[ "cs.LG", "cs.AI", "stat.ML" ]
# 1 Introduction and Related Work

Effective decision-making in networks—such as communication networks, social networks, and transportation networks—often relies on graph-structured data representations. Among the techniques developed for learning from such data, Graph Neural Networks (GNNs) have become widely adopted across diverse domains, including tasks such as anomaly detection and recommendation systems in social networks [Hamilton et al., 2017], as well as predicting biomedical molecular properties [Gilmer et al., 2017]. The majority of existing GNNs are designed for diverse graphs under a specific task [Wu et al., 2020], such as capturing graph-level representations [Zhang et al., 2018; Ying et al., 2018] and learning subgraph patterns in link-level tasks [He et al., 2020; Zhang and Chen, 2018]. However, designing effective GNNs for different graph learning problems is challenging, as it requires substantial graph-related knowledge to understand the tasks and graphs [Hoffman et al., 1995]. This raises a natural question: how can graph learning knowledge be integrated to design effective GNNs? It is non-trivial to answer this question. First, existing methods have not provided explicit guidelines for utilizing knowledge in designing GNN model architectures. Most GNNs are designed to effectively model graphs for a specific task [Wu et al., 2020; Hamilton et al., 2017; Ying et al., 2018], based on implicit human expertise, which is difficult to explicitly describe and extract. Therefore, we propose LLMNet, which automates GNN design using LLMs. Specifically, we design a Knowledge Agent to extract graph-related knowledge, building knowledge bases covering advanced graph learning research. We then develop a set of agents that use Retrieval-Augmented Generation (RAG) to interact with these knowledge bases, designing GNNs step by step in a knowledge-guided manner.
Leveraging LLMs' task analysis, LLMNet streamlines the design and refinement of GNN model architectures. Extensive experiments on twelve datasets across three tasks demonstrate LLMNet's superior performance and efficiency, proving the effectiveness of integrating knowledge for automated GNN design. A concrete case demonstrating this process is presented in Section 4.

# 2 Method

We introduce LLMNet, which prepares and utilizes knowledge to design GNN model architectures for diverse graph learning tasks using LLM-based agents. First, we gather graph-related resources and develop a Knowledge Agent for knowledge extraction and retrieval. The knowledge is then used by several LLM-based agents, step by step, to design effective GNN model architectures.

# 2.1 Knowledge Bases Construction and Utilization

Knowledge Bases Construction LLMs face challenges due to outdated knowledge and hallucinations. We address this by creating two knowledge bases, which are currently lacking for GNN architecture design. We collect resources and use the Knowledge Agent to manage them. The Knowledge Agent is tasked with acquiring and integrating specialized knowledge tailored to specific user requirements. This agent mainly manages two types of knowledge bases, as shown in Figure 1: the prior knowledge base and the experiment knowledge base.

(Figure 1: overview of LLMNet, showing (a) the input, (b) knowledge bases construction, (c) the GNNs designing pipeline, (d) knowledge update, and (e) the output.)

The prior knowledge base is enriched with task-specific information extracted from sources such as the Open Graph Benchmark (OGB) leaderboards, the PyTorch Geometric (PyG) documentation, and top-tier conference proceedings accessible on arXiv, ensuring the agent remains at the cutting edge of technology and methodology. The experiment knowledge base archives detailed experimental outcomes such as benchmark evaluation results, including model setups and their performance on specific datasets, thereby providing insights into their effectiveness and application contexts. The content of papers and reports often overlaps, with redundant background information and methods that can introduce noise and reduce the informativeness of retrieved knowledge. To address this, we employ a two-level knowledge extraction strategy: we first summarize inputs to obtain coarse-grained knowledge, then refine it into fine-grained details specific to graph learning tasks, such as architecture design and dataset usage. The code and the extended version with more details are available.

Knowledge Utilization and Update To effectively utilize the constructed knowledge bases, we implement a goal-aware knowledge retrieval mechanism.
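A minimal sketch of such embedding-based retrieval, with random toy vectors standing in for sentence-encoder embeddings (the item names, dimensions, and data are illustrative, not LLMNet's actual knowledge bases):

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_top_k(query_vec, knowledge, k=3):
    """Rank (item_id, embedding) pairs by cosine similarity to the query and
    return the k best ids; a post-ranking pass over these candidates would
    then be delegated to an agent in a full pipeline."""
    scored = sorted(knowledge, key=lambda item: -cosine(query_vec, item[1]))
    return [item_id for item_id, _ in scored[:k]]

# Toy 8-dim vectors standing in for all-MiniLM-L6-v2 embeddings.
rng = random.Random(0)
knowledge = [(f"item{i}", [rng.gauss(0, 1) for _ in range(8)]) for i in range(10)]
query = [x + 0.01 * rng.gauss(0, 1) for x in knowledge[3][1]]  # query near item3
top = retrieve_top_k(query, knowledge, k=3)
```

In practice the embeddings would come from a sentence encoder, and the top-$k$ candidates would be re-ranked before being injected into the agent's prompt.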
Utilizing the RAG technique, we enhance the effectiveness of designing GNN model architectures by retrieving relevant knowledge. The pre-trained model all-MiniLM-L6-v2 encodes both the extracted knowledge and the queries from other agents. We calculate cosine similarity in the embedding space to identify the most relevant knowledge. To accommodate the varying goals and resource types in graph learning, we apply a post-ranking strategy: the top-$k$ knowledge items from each resource type are initially retrieved and then re-ranked and selected by the Knowledge Agent based on the query's context. This refined knowledge is integrated into the graph learning agent's prompt, facilitating the design of the GNN model. LLMNet also incorporates a dynamic knowledge-update mechanism. After the evaluation of a GNN model, the experimental summary, including the task plan, designed GNNs, and results, is stored in memory. The Planning Agent then compiles a report, which is added to the knowledge base, ensuring that the system's knowledge remains current and applicable for future pipeline runs. This continuous update process allows LLMNet to adapt and improve over time, enhancing its ability to design effective GNN models.

# 2.2 Knowledge-Guided GNN Model Designing

Figure 1 illustrates how each agent engages with the knowledge bases to streamline the entire process. The two knowledge bases bridge research and application; they empower agents to make informed decisions.

Planning Agent The Planning Agent generates a task plan based on user instructions to direct subsequent agent actions, which includes specifications for datasets, task types, and evaluation metrics. After all agents have completed their tasks, this agent evaluates the experimental results, utilizing insights from the experiment knowledge base to determine whether a revision loop is necessary.

Table 1: Performance comparisons of the proposed LLMNet and baselines on three tasks.
We report the test accuracy and the standard deviation for node and graph classification tasks, and use the common Root Mean Square Error (RMSE) for the item ranking task. The top-ranked performance on each dataset is highlighted in gray, and the second best is underlined. The average rank across all datasets is provided in the last column.

Data Agent The Data Agent utilizes insights from the prior knowledge base to perform feature engineering tailored to specific graphs and tasks, ensuring alignment with expert practices in a knowledge-guided manner.

Configuration Agent The Configuration Agent is responsible for configuring the search space, which includes possible model architecture configurations such as layers and connections, and the search algorithm that explores this space. It interacts with the prior knowledge base to gain insights on model design, enhancing the effectiveness of search space configuration and algorithm selection.

Evaluation Agent The Evaluation Agent fine-tunes the designed GNN and conducts experiments to validate its performance. After completing the experiments, the Evaluation Agent transmits the results to the Knowledge Agent for integration into the experiment knowledge base.

We evaluate LLMNet's effectiveness on twelve datasets across three tasks, as shown in Table 1; the performance on another three datasets is shown in the appendix of the extended version. Detailed resource costs and ablation studies are also in the appendix of the extended version.

# 3 Experiments
(Figure 2: demonstration on the Cora citation network, showing the user's input instruction, the generated task plan, a case retrieved from the prior knowledge base—e.g., GLEM+GIANT+GAMLP reaching 0.7354±0.0001 validation accuracy on ogbn-papers100M with a Tesla V100 (32GB)—and the data statistics.)

# 3.1 Experimental Settings

Datasets We evaluate twelve widely used datasets across three tasks, as shown in Table 1. The detailed introduction of these datasets and the evaluation performance on another three datasets are given in the appendix of the extended version.

Baselines We provide several kinds of baselines. (1) GNNs with task adaptation, including GCN [Kipf and Welling, 2016] and GraphSAGE [Hamilton et al., 2017] with task-specific adaptations. (2) AutoML-based methods: we adopt F2GNN [Wei et al., 2022] / LRGNN [Wei et al., 2023] / Prof-CF [Wang et al., 2022] for the three tasks. (3) LLM-GNN: GNNs generated by LLMs. (4) LLMNet (GL), which operates without external knowledge.

# 3.2 Performance Comparisons

Table 1 showcases the performance of LLMNet on twelve datasets across three tasks. LLMNet consistently outperforms all baselines, highlighting its ability to design effective GNNs for various graph learning tasks. The enhanced performance of LLMNet over LLMNet (GL) underscores the value of incorporating extracted knowledge into the GNN design process. Unlike AutoML methods that operate within a predefined design space, LLMNet (GL) leverages LLMs to expand this space, achieving comparable performance and validating the agents' problem-solving capabilities. The LLM-GNN baseline, which relies solely on LLM suggestions without knowledge integration, faces challenges in understanding tasks and graphs, resulting in less effective GNN designs. LLMNet's superior performance highlights the significance of knowledge in designing effective GNNs.

# 4 Demonstration

In this section, we demonstrate the use of LLMNet on a real-world problem.
For example, a user aims to predict the category of articles within a citation network. As shown in Figure 2, (a) illustrates the user's input instructions; (b) displays the system's experimental results and its designed GNN model: LLMNet achieves an accuracy of 0.8710 on the Cora dataset, surpassing the GNN-based baselines GCN [Kipf and Welling, 2016] at 0.8568 and ACMGCN [Luan et al., 2022] at 0.8667 (detailed experiments are in the extended version), as well as the AutoML-based baseline SANE [Zhao et al., 2021] at 0.8640; (c) displays the task plan generated by the Planning Agent, which interprets the user's intention to predict the category of articles within a citation network as a node classification task; (d) shows the Data Agent retrieving relevant knowledge from the prior knowledge base, including methods for node classification. It also visualizes graphs to better understand the data structure. This demonstration showcases the effectiveness of LLMNet in automatically designing GNN models for real-world graph learning problems.

# Acknowledgement

This work is supported by the National Key Research and Development Program of China (under Grant No. 2023YFB2903904), the National Natural Science Foundation of China (under Grant No. 92270106), and the Beijing Natural Science Foundation (under Grant No. 4242039).

# References

[Gilmer et al., 2017] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In ICML, pages 1263–1272, 2017.
[Hamilton et al., 2017] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NeurIPS, pages 1024–1034, 2017.
[He et al., 2020] Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. LightGCN: Simplifying and powering graph convolution network for recommendation. In SIGIR, pages 639–648, 2020.
[Hoffman et al., 1995] Robert R Hoffman, Nigel R Shadbolt, A Mike Burton, and Gary Klein. Eliciting knowledge from experts: A methodological analysis. Organizational Behavior and Human Decision Processes, 62(2):129–158, 1995.
[Kipf and Welling, 2016] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2016.
[Luan et al., 2022] Sitao Luan, Chenqing Hua, Qincheng Lu, Jiaqi Zhu, Mingde Zhao, Shuyuan Zhang, Xiao-Wen Chang, and Doina Precup. Revisiting heterophily for graph neural networks. In NeurIPS, 2022.
[Wang et al., 2022] Xin Wang, Ziwei Zhang, and Wenwu Zhu. Automated graph machine learning: Approaches, libraries and directions. arXiv preprint arXiv:2201.01288, 2022.
[Wei et al., 2022] Lanning Wei, Huan Zhao, and Zhiqiang He. Designing the topology of graph neural networks: A novel feature fusion perspective. In The WebConf, pages 1381–1391, 2022.
[Wei et al., 2023] Lanning Wei, Zhiqiang He, Huan Zhao, and Quanming Yao. Search to capture long-range dependency with stacking GNNs for graph classification. In The WebConf, pages 588–598, 2023.
[Wu et al., 2020] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020.
[Ying et al., 2018] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In NeurIPS, pages 4800–4810, 2018.
[Zhang and Chen, 2018] Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. In NeurIPS, 2018.
[Zhang et al., 2018] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In AAAI, 2018.
[Zhao et al., 2021] Huan Zhao, Quanming Yao, and Weiwei Tu. Search to aggregate neighborhood for graph neural network. In ICDE, 2021.
Effective decision-making on networks often relies on learning from graph-structured data, where Graph Neural Networks (GNNs) play a central role, but they take effort to configure and tune. In this demo, we propose LLMNet, showing how to automate GNN design through Large Language Models (LLMs). Our system develops a set of agents that construct graph-related knowledge bases and then leverages Retrieval-Augmented Generation (RAG) to support automated configuration and refinement of GNN models through a knowledge-guided evolution process. These agents, equipped with specialized knowledge bases, extract insights into tasks and graph structures by interacting with the knowledge bases. Empirical results show LLMNet excels on twelve datasets across three graph learning tasks, validating its effectiveness in GNN model design.
[ "cs.LG" ]
# 1 Introduction

Large language models (LLMs) have achieved remarkable progress in a wide range of natural language understanding and generation tasks, largely powered by the next word prediction (NWP) algorithm during pretraining and supervised finetuning during posttraining (Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020; Achiam et al., 2023). The NWP task is widely regarded as the principal mechanism through which LLMs acquire factual world knowledge, reasoning, and decision-making capabilities (Wei et al., 2022; ichter et al., 2023; Yao et al., 2023). However, recent research has revealed fundamental limitations in the standard supervised NWP approach: always providing the gold next word as supervision can both obscure rich underlying information and encourage models to latch onto superficial correlations in the data (e.g., associating "Bruce Lee" exclusively with "Kung Fu"), ultimately resulting in brittle generalization and hallucinations (Zhou et al., 2024; Li et al., 2024a; Xu et al., 2025). In essence, traditional NWP equips models to recognize what the next word is, but not why it should be the next word given the context. This limits a model's capacity for deeper reasoning and fails to capture the generative uncertainty and justification behind language use. Prior works (Jiang et al., 2024; Xu et al., 2025; Ishibashi et al., 2025) have demonstrated that explicitly encouraging models to articulate why certain continuations are likely leads to substantial gains in reasoning and reliability. In this work, we take this line of inquiry further and introduce BOttlenecked Next Word Exploration (BOW), a novel RL framework that fundamentally rethinks the NWP task. Our framework is visualized in Fig. 1. Rather than directly supervising the policy model with observed next words, we instead bottleneck the learning process.
Without seeing the actual next word, a policy model must first explore to generate comprehensive and self-contained reasoning trajectories that describe what the next words could be. Then, a judge model (Li et al., 2024b) assesses the quality of the reasoning trajectories based on the ground-truth next word, providing soft rewards to the policy model for optimization. We finally optimize our policy model using the soft rewards with GRPO (Shao et al., 2024), though other RL algorithms can also be applied. By replacing traditional supervised NWP with our BOW scheme, we challenge models to go beyond surface correlations and actively construct reasoning paths for the plausible next words. We also propose a novel regularization technique that encourages the policy model to match the judge model's distribution of words, complementary to the observed gold word itself. This effectively prevents the policy model from "collapsing," where it learns to always generate a small set of specific words instead of providing accurate and comprehensive reasoning, a key drawback found in baseline models using hard reward signals.

(Figure 1: given the context "I like fruits a lot, so for lunch, I ate two," the policy model generates a reasoning trajectory $\tau$ characterizing plausible next words, e.g., larger fruits that can be eaten in pairs for lunch; the judge model then outputs a next-word probability distribution, e.g., apple: 0.2, pear: 0.2, banana: 0.2, grape: 0.01, yielding reward $r(\tau) = 0.2$ for the gold next word "pears".)

Across 10 benchmarks requiring world knowledge, multi-hop, and factual reasoning, continual pretraining with BOW enhances the general zero-shot reasoning capability of Qwen2.5-7B-Instruct and Llama-3.1-8B-Instruct, also outperforming all continual pretraining baselines, including an RL baseline with a hard reward design. Moreover, empirical results and human analysis show that BOW improves intrinsic next-word reasoning capabilities compared with baselines that are continually pretrained with direct next-word supervision. We also show the effectiveness of our novel regularization technique and the designs of other critical components through ablation studies. A final human analysis shows that BOW leads to better next-word reasoning trajectories that comprehensively reason over all clues from the context and make more human-aligned next-word predictions. Overall, our findings suggest that explicitly bottlenecking next-word prediction is a promising direction for building LLMs that are not only knowledgeable but also capable of reasoning about language in a more human-like and interpretable fashion.

# 2 Related Work

Elaborated Reasoning Recent research has increasingly emphasized the importance of encouraging LLMs to articulate their internal reasoning processes rather than directly emitting final answers. The ToW framework (Xu et al., 2025) demonstrates that continually pretraining models to generate reasoning connections between words before producing the next word improves factuality and reasoning across a range of reasoning tasks. Similarly, Jiang et al.
(2024) explores training models to produce natural language rationales between sentences for downstream reasoning tasks, showing that explicit reasoning improves both answer quality and user trust. Moreover, Ishibashi et al. (2025) proposes an unsupervised method to uncover and amplify implicit reasoning signals in domain-specific corpora, mimicking the human thinking processes involved in creating texts. These works motivate our design of a bottlenecked reasoning step that compels models to externalize and refine their thought processes before prediction in a self-evolving way through reinforcement learning. Bottlenecked Learning Recent work explores forcing models through information bottlenecks – constrained intermediate representations that must capture essential reasoning before prediction. Zelikman et al. (2022) introduces Self-Taught Reasoner (STaR), which creates a reasoning bottleneck by requiring models to generate explicit rationales before answers. The model can only access the final answer through this rationale bottleneck, iteratively learning which reasoning paths lead to correct predictions. Zelikman et al. (2024) further extends this to Quiet-STaR, creating an even tighter bottleneck where models must generate "thoughts" between every token during pretraining, not just for explicit questions. More recently, Zhou et al. (2025) demonstrates that bottlenecking can operate at an even more abstract level—forcing models to recognize and transfer high-level reasoning patterns rather than surface-level associations, creating a conceptual bottleneck that enables generalization to rare cases. Our BOW framework implements a particularly stringent form of bottlenecked learning: the policy model must generate reasoning that successfully guides a separate frozen judge model to predict the next token, without ever seeing the gold token itself. 
This architectural bottleneck ensures that the reasoning must contain sufficient information for an independent model to recover the correct prediction.

Reasoning Overfitting A growing body of work reveals that LLMs often exploit spurious correlations rather than performing genuine reasoning, a phenomenon sometimes termed reasoning overfitting. Li et al. (2024a) shows that LLMs take deceptive semantic shortcuts, relying on keyword/entity biases instead of following correct reasoning chains. This aligns with findings from Zhou et al. (2024), which demonstrates that LLMs drop $9\%$ in performance when forced to reason abstractly rather than rely on surface patterns. In mathematics, which requires much rigor in reasoning, Yu et al. (2024) and Li et al. (2025) have both shown that LLMs' so-called math reasoning primarily relies on pattern matching and memorization of solution paths from training data, often establishing spurious correlations between surface-level features and certain mathematical concepts. These works collectively highlight that current LLMs often produce plausible-looking reasoning that masks fundamental failures in logical coherence. Our bottlenecked reasoning approach BOW addresses this by requiring models to generate reasoning that must successfully guide a separate judge model, providing external validation of reasoning quality beyond surface plausibility.

# 3 Bottlenecked Next Word Exploration

# 3.1 Overview

Bottlenecked Next Word Exploration (BOW) is an RL framework consisting of three components: Bottlenecked Generation, a Judge Mechanism, and RL Optimization. BOW first introduces a bottleneck process: rather than directly conditioning on the context to predict the next token, the policy model first generates a reasoning trajectory $\tau$ about plausible next words. Subsequently, a separate module, referred to as the judge, computes the next-token probability distribution $P(w \mid \tau)$ given the reasoning trajectory $\tau$.
The policy model is finally optimized using the reward for its generated reasoning trajectory, computed based on $P(w \mid \tau)$ and the ground-truth next token $w^*$, without being explicitly trained on the gold next token.

# 3.2 Bottlenecked Generation

In traditional NWP, models are trained to predict the next token given a specific context $C$ directly. In contrast, BOW introduces a structural bottleneck: rather than directly predicting the next token, the policy model $\pi_\theta$ must first generate an intermediate reasoning trajectory $\tau$ to reason towards plausible next words, without directly seeing the gold next word $w^*$. This bottleneck process fundamentally changes the learning schema from a one-step classification task to a multi-step generative decision-making process, where the reasoning path $\tau$ serves as a latent action. Notably, the gold next word $w^*$ is never observed by the policy model nor used in any cross-entropy loss. Supervision is provided only through a scalar reward signal assigned after the judge model assesses the informativeness and correctness of the generated reasoning path given the gold next word $w^*$. To ensure that learning remains feasible in this under-specified setting, we carefully design the policy model prompt to elicit reasoning trajectories that exhibit two critical properties: comprehensively incorporating all relevant contextual features that influence next-word reasoning, and providing a general characterization of plausible next words rather than explicitly identifying specific candidate words. For example, in Fig. 1, the orange text reflects the first property and the blue text reflects the second, given the context.
A solid prompting design provides a strong starting point for generating reasoning trajectories and has been shown effective in prior work (Gandhi et al., 2025) at facilitating reasoning supervision in similar low-resource or weakly supervised regimes. Please refer to Fig. 6 in the Appendix for the concrete prompt design.

# 3.3 Judge Mechanism

The judge model $J_\phi$ is a frozen LLM that serves as an evaluator of the reasoning trajectory $\tau$. It receives only $\tau$ and outputs a probability distribution over the vocabulary for the next token of the context:

$$ P(w \mid \tau) = J_{\phi}(w \mid \tau). $$

This distribution is interpreted as the judge model's best guess for the next token, conditioned on the reasoning path $\tau$. The reward for a given reasoning path $\tau$ is then defined as the probability assigned to the gold next token $w^*$ under this distribution:

$$ r(\tau) = J_{\phi}(w^* \mid \tau). $$

Importantly, the judge model is not trained to imitate human preferences (Christiano et al., 2017) or to score completions step by step (Lightman et al., 2024). Instead, it only performs a constrained continuation task: predicting the next token given a structured intermediate rationale. This choice explicitly creates an information bottleneck and implicitly evaluates the generated reasoning paths by how effectively they recover next tokens, which encourages a self-contained and comprehensive analysis from the policy model. To justify that the judge model is able to faithfully reflect the likelihood of candidate next words, as described in the reasoning path, we provide one concrete example in §5.2.1.
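A toy sketch of this reward: a softmax over hand-picked logits stands in for the frozen judge LLM's next-token distribution (all words and numbers below are illustrative, not from the paper's experiments):

```python
import math

def softmax(logits):
    """Convert a dict of logits into a probability distribution."""
    m = max(logits.values())
    exp = {w: math.exp(v - m) for w, v in logits.items()}
    z = sum(exp.values())
    return {w: e / z for w, e in exp.items()}

def judge_reward(judge_logits, gold_word):
    """r(tau) = J_phi(w* | tau): probability mass the judge assigns to the
    gold next word, given only the reasoning trajectory tau. `judge_logits`
    is a toy stand-in for the judge LLM's next-token logits."""
    return softmax(judge_logits).get(gold_word, 0.0)

# Logits a judge might produce after reading a trajectory describing
# "larger fruits that can be eaten in pairs" (illustrative numbers).
logits = {"apple": 2.0, "pear": 2.0, "banana": 2.0, "grape": -1.5}
reward = judge_reward(logits, "pear")
```

Because the judge only continues from the rationale, a trajectory that characterizes the right class of words earns high reward even without naming the gold word explicitly.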
Moreover, as we observe that the base policy model has already acquired useful patterns and behaviors (Gandhi et al., 2025) when prompted to reason about plausible next words, we add an $L_1$-regularization-style term to the final reward to prevent RL exploration of reasoning trajectories from collapsing into explicitly mentioning only a few specific next words instead of generally reasoning about the characteristics of candidate next words. We need to avoid this collapsing behavior because it is counterintuitive to constrain the prediction of next words to a few candidates given an open-ended context, and it is also harmful to our RL algorithm. To achieve this regularization, we obtain a reference next-token distribution $J_\phi(w \mid C)$ by directly feeding the context into the judge model. As a result, our final reward for the reasoning trajectory $\tau$ is formulated as:

$$ R(\tau) = r(\tau) - \alpha \left\| J_{\phi}(\cdot \mid \tau) - J_{\phi}(\cdot \mid C) \right\|_{1} $$

where $\alpha$ is a scaling factor. Please refer to §4.3.2 for more details on the implementation of the judge.

Table 1: Two examples of transforming the original benchmark instances into the next word prediction evaluation format.

# 3.4 RL Optimization

We optimize the policy model $\pi_\theta$ with Group Relative Policy Optimization (GRPO) (Shao et al., 2024), which improves RL training stability by normalizing rewards within groups of reasoning paths sharing the same context. For each context $C$, a group of $N$ reasoning paths $\{\tau_1, \dots, \tau_N\}$ is sampled from $\pi_\theta$, and the reward $R_i(\tau_i)$ for each reasoning path $\tau_i$ is computed using Eq. 3. GRPO uses the group mean $\bar{R}$ and group standard deviation $\sigma$ to compute advantages $\hat{A}_i = \frac{R_i - \bar{R}}{\sigma}$, reducing gradient variance.
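The regularized reward and GRPO's group normalization can be sketched as follows, using small word-level dictionaries in place of full-vocabulary distributions and a hypothetical scaling factor `alpha` (all values illustrative):

```python
def regularized_reward(r_tau, p_tau, p_ctx, alpha=0.1):
    """R(tau) = r(tau) - alpha * ||J(.|tau) - J(.|C)||_1, with both judge
    distributions given as word -> probability dicts; alpha = 0.1 is an
    illustrative value, not the paper's setting."""
    vocab = set(p_tau) | set(p_ctx)
    l1 = sum(abs(p_tau.get(w, 0.0) - p_ctx.get(w, 0.0)) for w in vocab)
    return r_tau - alpha * l1

def grpo_advantages(rewards, eps=1e-8):
    """Group-normalized advantages A_i = (R_i - mean) / std used by GRPO."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Judge distribution given the trajectory vs. given the raw context.
p_tau = {"apple": 0.3, "pear": 0.3, "banana": 0.3, "grape": 0.1}
p_ctx = {"apple": 0.25, "pear": 0.25, "banana": 0.25, "grape": 0.25}
R = regularized_reward(0.3, p_tau, p_ctx)      # penalized for drifting from J(.|C)
adv = grpo_advantages([0.30, 0.10, 0.20, 0.20])  # one group of N = 4 trajectories
```

Trajectories that sharpen the judge's distribution onto a few words pay an $L_1$ penalty, while the group normalization rescales rewards so that only relative quality within a group drives the policy update.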
Policy model updates are then performed using PPO-style optimization (Schulman et al., 2017); other RL algorithms, such as vanilla PPO (Schulman et al., 2017) and REINFORCE (Williams, 1992), can also be applied in BOW.
# 4 Experiments
# 4.1 Training Data
We train our models on narratives from the murder mystery domain (Del and Fishel, 2023), which we argue is well-suited for studying reasoning-driven next-word prediction. Mystery stories naturally encode complex world models: they describe who did what, when, why, and what happened next, requiring both commonsense and counterfactual reasoning to interpret. In this sense, we view next-word prediction not just as a language modeling task, but as an implicit approximation of world-state transitions. Story-driven data thus provides rich, structured input–output sequences that align well with our goal of encouraging explicit reasoning in LLMs. Concretely, we use 191 long-form narratives sourced from the “5 Minute Mystery” platform, filtering out those that exceed 2048 tokens to ensure compatibility with our model context length. This yields 178 narratives for training. To focus learning on reasoning-relevant supervision signals, we further filter the training data to remove context–next word pairs where the next tokens do not require meaningful reasoning to derive from the context. Specifically, we discard tokens that are: (i) purely functional (e.g., determiners, punctuation), (ii) syntactically or semantically deterministic based on surface cues, or (iii) explainable without invoking latent knowledge or contextual abstraction. This selective language modeling (SLM) paradigm is inspired by prior work such as RHO-1 (Lin et al., 2024), which demonstrates that focusing training on informative or "reasoning-heavy" tokens improves learning efficiency and model generalization. To automate the filtering process, we utilize gpt-4.1-mini-2025-04-14 to evaluate each context–next word pair against the above criteria. Please refer to Fig. 3 in Appendix A for the detailed prompt used. Only context–next word pairs where non-trivial reasoning is required are retained. This filtering pipeline produces a final dataset of approximately 45K context–next word pairs. By aligning training examples with tokens that genuinely demand reasoning, we ensure that the supervision signal is compatible with the bottlenecked learning setup of BOW, where the model is rewarded not for token overlap, but for the quality of its latent reasoning.
# 4.2 Evaluation Data
Our evaluations use the following benchmarks: CSQA (Talmor et al., 2019), PIQA (Bisk et al., 2020), TruthfulQA (Lin et al., 2022), StrategyQA (Geva et al., 2021), ARC-Challenge (Clark et al., 2018), WinoGrande (Sakaguchi et al., 2020), BBH (Suzgun et al., 2023), MMLU (Hendrycks et al., 2021), MMLU-Pro (Wang et al., 2024), and GPQA (Rein et al., 2024). We perform two evaluation paradigms: we evaluate our model as a general reasoner and also intrinsically as a next-word predictor. For general reasoning capability evaluation, we use the benchmarks in their original multiple-choice question answering format. For intrinsic next-word prediction evaluation, we convert CSQA, PIQA, TruthfulQA, StrategyQA, and ARC-Challenge into a multiple-choice next-word prediction format. Specifically, we prompt gpt-4.5-preview-2025-02-27 to transform each multiple-choice QA instance into a context and multiple candidate next words. We make sure that each candidate next word is strictly a single word that appears at the end to complete the context and its logical reasoning. Notice that the original context and candidate options are transformed together into the new context and candidate next words.
We also prompt GPT-4.5 to make sure that the transformed next-word selection problem is at the same difficulty level as the original question and evaluates the same knowledge and reasoning process. To ensure the quality of the transformed data, we further use GPT-4.5 to filter out transformed instances that do not meet our requirements. We provide two transformation examples from PIQA and CSQA in Tab. 1. Please refer to Fig. 4 and Fig. 5 in Appendix A for the concrete transformation and validation prompts.
# 4.3 Experimental Details
# 4.3.1 Training
We use Qwen2.5-7B-Instruct (Team, 2024) and Llama-3.1-8B-Instruct (Grattafiori et al., 2024) as policy models. The detailed prompt used by policy models to elicit the reasoning path for next-word prediction is in Fig. 6 of Appendix A. We conducted our RL training on 4 NVIDIA H200 GPUs, leveraging the VeRL (Sheng et al., 2025) repository. We train for one epoch with a total batch size of 1024, a mini-batch size of 256, and a rollout size of 5. We use the AdamW (Loshchilov and Hutter, 2019) optimizer with an initial learning rate of $1 \times 10^{-6}$, $(\beta_1, \beta_2) = (0.9, 0.999)$, and a weight decay of $1 \times 10^{-2}$. We turn off the KL loss used in standard GRPO.

Table 2: General reasoning capability evaluation of BOW and various baselines. Notice that the vanilla instruction models are included only for reference, not for comparison. All scores are obtained through self-consistency. TQA stands for TruthfulQA, SQA for StrategyQA, ARC-c for ARC-Challenge, WG for WinoGrande, and MMLU-p for MMLU-Pro.

Table 3: Intrinsic next word prediction evaluation of BOW and various baselines. All scores are obtained through self-consistency.

# 4.3.2 Judge Model
We use Llama-3.1-8B-Instruct as the judge model for both policy model variants, given its unbiased and low-variance next-word distribution for English.
Please refer to Fig. 7 for the prompt that acquires the next-word probability distribution from the judge model, conditioned on the reasoning path. §5.2.1 also provides a detailed justification for choosing LLaMA as the judge instead of Qwen. To calculate the reward in Eq. 2, we use a temperature of 5 for a smooth numerical distribution. Only when the first token of the gold next word is among the top 100 positions of the next-token distribution given by the judge model do we assign the reasoning path a reward equal to the corresponding token probability; otherwise, we assign a reward of 0. For the regularization term in Eq. 3, we do not range $w$ over the entire vocabulary of the judge model, but only over the top 100 tokens of $J_{\phi}(w \mid C)$. We set $\alpha$ in Eq. 3 to 0.1.
# 4.3.3 Baselines
We compare with two continual pretraining baselines to isolate the impact of continual pretraining itself, and also record the performance of vanilla instruction models for reference.
Selective Language Modeling We compare with the selective language modeling (SLM) pretraining paradigm (Lin et al., 2024). Specifically, we apply the causal language modeling loss only on those tokens that have been used as supervision during our BOW to finetune the policy model.
No-Judge We completely remove the judge mechanism by prompting the policy model to first reason on the context and then wrap the predicted next word in \boxed{}. For reward calculation, we assign a reward of 1 if the first token of the predicted word extracted from the box matches the first token of the ground-truth next word; otherwise, 0. Compared to BOW's soft-reward design, this hard-reward design is inspired by the accuracy reward in Guo et al. (2025). For a fair comparison, we design the prompt, shown in Fig. 8, to be as similar as possible to the policy prompt of BOW so as to elicit the same type of reasoning.
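The temperature smoothing and top-100 gating described in §4.3.2 can be sketched as below. This is our own illustration under stated assumptions: a hypothetical 500-token vocabulary with random logits stands in for the judge's real next-token head.

```python
import numpy as np

def gated_reward(logits, gold_id, temperature=5.0, top_k=100):
    """Sketch of the §4.3.2 reward rule: smooth the judge's next-token logits
    with temperature 5, then pay out P(gold) only if the gold token's first
    piece lands within the top-k of the distribution; otherwise reward 0."""
    z = logits / temperature
    z = z - z.max()
    p = np.exp(z) / np.exp(z).sum()
    top = set(np.argsort(p)[::-1][:top_k].tolist())
    return float(p[gold_id]) if gold_id in top else 0.0

rng = np.random.default_rng(0)
logits = rng.normal(size=500)  # hypothetical vocabulary of 500 tokens
r_top = gated_reward(logits, gold_id=int(np.argmax(logits)))   # in the top 100
r_tail = gated_reward(logits, gold_id=int(np.argmin(logits)))  # outside it
```

The high temperature flattens the distribution so rewards vary smoothly across rollouts, while the top-k gate zeroes out trajectories that leave the gold token far down the ranking.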
This baseline also loosely resembles Quiet-STaR (Zelikman et al., 2024), where a model must generate "thoughts" between every token during pretraining.
Vanilla Instruction Model For reference purposes, we also record the performance of the untrained policy models, Qwen2.5-7B-Instruct and Llama-3.1-8B-Instruct.
# 4.3.4 Evaluation
As mentioned in §4.2, we perform two evaluation paradigms. For general reasoning capability evaluation, we perform zero-shot inference where models are prompted to think step by step and finally output the answer letter. We use Math-Verify to extract the last letter in the prediction as the answer letter. The detailed zero-shot prompt is shown in Fig. 9. For intrinsic next-word prediction evaluation, we follow our BOW training pipeline by first using the trained policy model to generate a reasoning path and then feeding the reasoning path to the judge model to calculate each candidate next word's completion probability. We choose the candidate with the largest probability as the final answer. For the No-Judge baseline, since it directly generates the reasoning path along with a predicted next word wrapped in \boxed{}, we discard the text after the open bracket of \boxed{} and calculate each candidate next word's completion probability when concatenated to this text prefix. For all evaluation settings, we apply self-consistency (Wang et al., 2023) by sampling 10 times at a temperature of 0.8 and using a majority vote to decide the final prediction. We use vLLM (Kwon et al., 2023) for higher efficiency during inference.
# 5 Analysis
# 5.1 Main Results
General Reasoning Capability We report the results of the general reasoning capability evaluation in Tab. 2. We compare with the continual pretraining SLM and No-Judge baselines and use the vanilla instruction model for reference.
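The self-consistency decoding used in §4.3.4 amounts to a simple majority vote over sampled predictions; a minimal sketch (the ten sampled answer letters are hypothetical):

```python
from collections import Counter

def self_consistency_vote(samples):
    """Majority vote over sampled predictions, as in Wang et al. (2023);
    Counter.most_common breaks count ties by first-encountered order."""
    return Counter(samples).most_common(1)[0][0]

# Hypothetical answer letters from 10 samples drawn at temperature 0.8.
final = self_consistency_vote(["B", "A", "B", "C", "B", "A", "B", "B", "C", "B"])
```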
For Qwen2.5-7B-Instruct, BOW consistently outperforms the SLM and No-Judge baselines across all benchmarks, by an average of ${\sim}8\%$ and ${\sim}4\%$ respectively, highlighting that continual pretraining with BOW gives the base instruction model better generalization as a general reasoner compared with the baselines. At the same time, BOW outperforms the base instruction model on 7 out of 10 benchmarks, falling behind by only ${\sim}1\%$ on StrategyQA, BBH, and MMLU-Pro. These results show that continual pretraining with BOW does not harm the original reasoning and instruction-following capabilities of the vanilla model, but rather enhances its generalization and reasoning capability. The SLM and No-Judge baselines do not exhibit this property; instead, they harm the original reasoning capability of the base instruction model. For Llama-3.1-8B-Instruct, we observe the same trend. BOW outperforms continual pretraining with the SLM and No-Judge baselines across all benchmarks, by an average of ${\sim}30\%$ and ${\sim}6\%$, showing better generalization as a general reasoner. BOW also outperforms the vanilla instruction model on 6 out of 10 benchmarks. On the other four benchmarks, BOW achieves nearly the same performance, falling behind by only around $0.2\%$ on TruthfulQA, ARC-Challenge, and BBH. These results again show that continual pretraining with BOW further enhances the generalization and reasoning capability of the base instruction model, while the baselines have negative impacts. Consistent with the trend observed for LLaMA-family models in Lin et al. (2024), we observe that selective language modeling pretraining has a fatal impact on the instruction-following capability of Llama-3.1-8B-Instruct, where models start to repeat themselves instead of performing proper zero-shot inference given the prompt.
Overall, our results show that continual pretraining with BOW enhances the general reasoning capability of LLMs by shifting the training signal from direct next-token prediction to explanation-driven learning. The consistent improvements across model families and benchmarks suggest that BOW is an effective and scalable continual pretraining methodology.
Next-Word Prediction Capability We report the results of the intrinsic next-word prediction capability evaluation in Tab. 3. We compare with the two continual pretraining baselines and also the vanilla instruction models. Across 12 scenarios spanning six transformed benchmarks and two policy model variants, BOW outperforms the vanilla instruction models in 10 of them by an average of ${\sim}5\%$, and outperforms SLM in all of them by an average of ${\sim}19\%$. For the No-Judge baseline, although BOW does not achieve overwhelming superiority over it (outperforming in 7 out of 12 scenarios), we later show through human analysis in §5.3 that BOW elicits policy models to generate next-word reasoning trajectories that comprehensively consider all factors affecting next-word prediction from contexts ($83\%$ vs. $25\%$). Moreover, with a variant of BOW where the regularization term is removed, our method still outperforms No-Judge in producing human-aligned next-word descriptions ($81\%$ vs. $67\%$).

Figure 2: An example reasoning trajectory for candidate next words: "Given the context 'The marathon runner felt dizzy from dehydration, so at the aid station she grabbed a bottle of,' the next word set will likely contain nouns that represent beverages, as the runner is at an aid station and is likely to grab a drink to rehydrate. The set of possible next words can be described as: Water; Sports drink; Juice; Electrolyte solution. These are the most common types of beverages found at aid stations during marathons, designed to help runners rehydrate and replenish electrolytes lost through sweat."
# 5.2 Ablation Study
# 5.2.1 Judge Justification
For BOW, we use Llama-3.1-8B-Instruct as the judge model for both the Qwen and LLaMA variants of the policy models. We show here that Llama-3.1-8B-Instruct truthfully reflects the next-token distribution given the reasoning trajectory for the next word, and also highlight that Qwen2.5-7B-Instruct is not an ideal choice. For example, given the reasoning trajectory for candidate next words shown in Fig. 2, we show the next-token probability distributions given by the two judges in Tab. 4. Words that are described as highly possible, such as "water", "sports", "electro", "juice", "drink", and "energy", faithfully appear in the top 20 of Llama-3.1-8B-Instruct's next-token distribution. However, for Qwen2.5-7B-Instruct, "electro", "drink", and "energy" do not appear in the top 20 of its token distribution. Also, the probability distribution of Qwen2.5-7B-Instruct is extremely biased towards the word "water", based on its internal prior knowledge, ignoring other possible next words mentioned in the reasoning path. As a result, the evidence indicates that Qwen2.5-7B-Instruct is a more biased next-word probability estimator than Llama-3.1-8B-Instruct, which justifies our choice of LLaMA as the judge.

Table 4: Given the example reasoning path in Fig. 2, we show the top 20 tokens from the next-word probability distributions given by Qwen2.5-7B-Instruct and Llama-3.1-8B-Instruct. Qwen is clearly more biased compared with LLaMA.

Table 5: Ablation study on the effectiveness of the reference distribution regularization term during reward calculation. Q stands for Qwen2.5-7B-Instruct and L stands for Llama-3.1-8B-Instruct.

Table 6: Ablation study on the effectiveness of training data filtering.

# 5.2.2 Reference Distribution Regularization
To show the effectiveness of the reference distribution regularization term in Eq. 3, we remove the regularization term and keep all other settings the same as BOW. We first report the next-word prediction capability evaluation in Tab. 5.
We can see that for both Qwen2.5-7B-Instruct and Llama-3.1-8B-Instruct, BOW achieves higher average performance than the variants without the regularization term. This demonstrates the positive effect of the regularization term on the intrinsic next-word prediction evaluation. We will also show in §5.3, through a qualitative human study, that this regularization term leads to next-word reasoning paths that avoid collapsing into mentioning only a few specific next words and instead focus on comprehensively thinking about and describing next words, in line with our intuition for this regularization.
# 5.2.3 Training Data Filtering
To show the effectiveness of filtering the training data to retain only reasoning-centric next words, we compare BOW with a random-filter baseline, where we randomly select the same number of words as used in BOW for RL training and keep all other settings the same. As shown in Tab. 6, for both Qwen2.5-7B-Instruct and Llama-3.1-8B-Instruct, and for both capabilities we measure in our work, BOW leads to slightly better performance in 3 out of 4 scenarios, with the remaining one a tie. This demonstrates the effectiveness of our training data filtering.

Table 7: In this table, we show one concrete qualitative analysis example.

Table 8: Human selection rate for four Qwen-based model variants in our study across 150 contexts.

# 5.3 Qualitative Human Study
To better understand the next-word reasoning trajectories learned by BOW, we conduct a human analysis comparing four different Qwen-based models, namely, the vanilla instruction model, the No-Judge baseline, BOW without regularization (w/o. Reg.), and BOW. We prompt GPT-4.5 to curate a total of 150 contexts that satisfy two properties: (1) there are multiple plausible immediate next words given statistical likelihood or grammar and syntax patterns, and (2) with more in-depth reasoning based on the context, it becomes clear that only one or a few candidates are plausible. We also make sure that the part of speech of the sampled contexts' next words varies, including nouns, verbs, and adjectives. We perform human judgment on the next-word reasoning trajectories of the four models along two evaluation dimensions: (1) regardless of the next word being predicted, which trajectory demonstrates the most in-depth reasoning by thoroughly considering every detail and possible logical clue included in or implied by the given context that could affect next-word prediction, and (2) which trajectory contains the next-word mention or description that best matches the annotators' expectation based on their commonsense knowledge. The first dimension evaluates the reasoning process toward NWP, and the second evaluates how well the next-word prediction outcome aligns with humans. For each instance, annotators view anonymized and shuffled reasoning trajectories from the four models and are required to pick the best one along each of the two evaluation dimensions. Annotators may declare a tie when no clear winner is apparent. We report the selection rate of each model in Tab. 8. For the first dimension, BOW outperforms all other models by a large margin, demonstrating the effectiveness of our prompting and reward design. BOW successfully optimizes the vanilla instruction model towards eliciting next-word reasoning paths that comprehensively consider all clues in the context that could affect next-word prediction. In particular, the comparison between BOW and BOW w/o. Reg. ($83\%$ vs. $52\%$) further demonstrates the effectiveness of our reward regularization term qualitatively (we have shown its effectiveness quantitatively in §5.2.2). For the second dimension, we observe that BOW is actually the least selected model.
After carefully examining the reasoning trajectories of BOW, we find this explainable. Although the regularization term makes the next-word reasoning paths more comprehensive, covering various aspects that could potentially affect next-word prediction, it at the same time pushes the model too hard away from committing to a few specific words. The model thus refrains from next-word prediction in a human-aligned fashion and only provides next-word descriptions in an over-general way. We conclude that, when using the regularization term, there is a clear trade-off between explicitly predicting a few next words, with the attendant risk of collapsing, and comprehensively describing the next words. Finally, since BOW w/o. Reg. achieves the best selection rate on this dimension among the four models by a large margin, BOW (with or without the regularization term) is effective under both human evaluation dimensions. We showcase a concrete example context to directly compare the reasoning trajectories generated by the four models in Tab. 7. For the context "The thunderstorm was getting closer, so I rolled up the", the vanilla model and No-Judge show the least comprehensive reasoning process towards next-word prediction based on the given context, and also tend to collapse into explicitly predicting a small set of next words, ignoring other highly possible candidates, such as "shade", "tent", or "blanket". On the contrary, BOW and its variant provide more general descriptions of next-word candidates with richer reasoning paths, thus covering more plausible next words. However, we observe the trade-off mentioned above, where the regularization term leads to reasoning paths and next-word descriptions that are too general and thus less human-aligned. Overall, we can conclude that BOW effectively optimizes the policy model to elicit better trajectories for comprehensive next-word reasoning and next-word description.
# Abstract
Large language models (LLMs) are typically trained via next-word prediction (NWP), which provides strong surface-level fluency but often lacks support for robust reasoning. We propose BOttlenecked next Word exploration (BOW), a novel RL framework that rethinks NWP by introducing a reasoning bottleneck where a policy model first generates a reasoning path rather than predicting the next token directly, after which a frozen judge model predicts the next token distribution based solely on this reasoning path. We train the policy model using GRPO with rewards that quantify how effectively the reasoning path facilitates next-word recovery. Compared with other continual pretraining baselines, we show that BOW improves both the general and next-word reasoning capabilities of the base model, evaluated on various benchmarks. Our findings show that BOW can serve as an effective and scalable alternative to vanilla NWP.
# 1 Introduction
We are interested in studying the effect of treatments, e.g., different policies and drugs, on rare yet impactful events such as large wildfires, hurricanes, tsunamis, and climate change. These kinds of events happen at an extremely low frequency, but they can cause considerable damage to property and pose serious threats to people's lives. For instance, we may want to know the effect of more infrastructure investment or other kinds of precautionary policies on earthquake damage. In many applications, from financial risk to environmental policy, it is not enough to know how a treatment changes the average outcome; decision-makers care about whether it alters the extreme tail. More formally, we may want to estimate the effect of a treatment $D$ on an outcome $Y$ conditioning on some extreme event. Estimating this kind of effect can help policymakers evaluate the impact of a policy and choose the best policy to reduce economic loss and save more lives when disasters happen. Despite its clear importance, existing methods fall into two largely disconnected strands, each of which cannot fully address this question. One approach comes from the causal inference literature. Causal inference provides a comprehensive framework for counterfactual reasoning. Causal effect estimation is an important problem in this area, with wide applications in healthcare, education, business decision-making, and policy evaluation. The classic causal inference literature mainly focuses on estimating average effects among certain groups; little attention is paid to causal effects on rare events. The scarcity of extreme data makes inference more challenging than in classic settings. As a result, naively applying classic causal inference estimation methods will produce poor results with large statistical error.
For example, when making policies about earthquakes, we are usually unable to see a strong signal in historical data, as large earthquakes rarely occur and there are few such samples in the dataset. On the other hand, Extreme Value Theory (EVT) studies the tail behavior of statistical distributions, which provides ideal tools for analyzing rare events. However, this approach does not take the causal data structure into consideration. In particular, it does not accommodate counterfactual treatments or adjust for covariates, so it cannot tell us what would happen under an intervention. To bridge these gaps, we combine causal inference theory with EVT to provide a novel framework for extreme effect measurement. Following research in EVT [Coles et al., 2001], we use a multivariate regularly varying variable $U$ to model extremity. The rare event can be modeled by the event $\{\|U\| > t\}$ for large $t$. Our proposed estimand can be viewed as the Average Treatment Effect (ATE) conditional on $\{\|U\| > t\}$, with rescaling, as $t$ increases to infinity. A detailed definition and explanation can be found in Section 3. Estimation is challenging because the limiting tail distribution is unknown and must be inferred from finite samples. To improve data efficiency and inference accuracy, we combine tail observations with moderate-frequency data in an extrapolation scheme, leveraging EVT insights alongside causal-inference techniques to achieve efficient estimation. To the best of our knowledge, no prior work in the literature considers this problem. In this paper, we take the first step towards measuring treatment effects on extreme events. More specifically, our contributions can be summarized as follows. 1. We propose a measure for the treatment effect on rare events named the Normalized Extreme Treatment Effect (NETE), which essentially measures the magnitude of the treatment effect on tail events. 2.
We develop two consistent estimators for NETE, a doubly robust (DR) estimator and an inverse propensity weighting (IPW) estimator, by combining recent advances in multivariate tail-dependence estimation Zhang et al. [2023] with the double machine learning methodology of Chernozhukov et al. [2018], and derive finite-sample, non-asymptotic error bounds. 3. Synthetic and semi-synthetic experiments demonstrate the good practical performance of our proposed estimators compared to baseline estimators adapted from the standard causal inference literature.
Related Work We briefly review some relevant literature in EVT and causal inference. Coles et al. [2001] provides a comprehensive introduction to EVT. A large amount of work focuses on the univariate setting Davison and Smith [1990], Leadbetter [1991], Pickands III [1975], Smith [1989]. More recently, many works have generalized these results to the multivariate setting Avella-Medina et al. [2022], Zhang et al. [2023]. Causal effect estimation is a classical problem in causal inference [Rubin, 1974]. Common estimators include IPW [Rosenbaum and Rubin, 1983], DR methods [Bang and Robins, 2005, Kang and Schafer, 2007, Chernozhukov et al., 2016, 2017, 2018], and Targeted Maximum Likelihood Estimation (TMLE) [van der Laan and Rubin, 2006]. There have been some efforts in the literature to combine the two research areas. Gissibl and Klüppelberg [2018] consider a special kind of Structural Causal Model (SCM) and show that the proposed SCM is a kind of max-linear model. They also analyze the asymptotic distribution of their model. Chernozhukov and Du [2006], Chernozhukov and Fernández-Val [2011], Zhang [2018], Deuber et al. [2024] consider the task of estimating the extreme Quantile Treatment Effect (QTE). Another line of work Gnecco et al. [2021], Mhalla et al. [2020], Bodik et al. [2023] uses EVT to help causal discovery.
However, we want to point out that the problems these works consider are quite different from our setting. The most similar setting is extreme QTE estimation Chernozhukov and Du [2006], Chernozhukov and Fernández-Val [2011], Zhang [2018], Deuber et al. [2024], but the QTE still cannot capture how the expectation of the outcome changes under intervention.
# 2 Preliminary
Causal Inference. We use the potential outcome framework Rubin [1974] in this paper. Let $X, D, Y$ be the covariate, binary treatment, and outcome, respectively. We denote by $Y(d)$ the potential outcome when the treatment is set to $d$ and assume consistency, i.e., $Y(D) = Y$, throughout the paper. The Average Treatment Effect (ATE) is defined as
$$ \mathrm{ATE} = \mathbb{E}[Y(1) - Y(0)]. $$
The ATE measures the effect of a treatment on the outcome $Y$. In the policy-making example, $D$ is an indicator of whether to adopt the policy or not, $X$ is a covariate that may influence $D$, like the geographic features of a place, which will influence the local government's decision on policies, and $Y$ can be the economic loss. The ATE in this case provides information about how much loss can be saved if a policy is enforced. Under the following exogeneity and overlap assumptions, the ATE can be identified using the g-formula $\mathbb{E}[\mathbb{E}[Y \mid X, D = 1] - \mathbb{E}[Y \mid X, D = 0]]$.
Assumption 2.1 (Exogeneity). The data generation process satisfies $(Y(1), Y(0)) \perp D \mid X$.
Besides, the following overlap assumption is also often needed for non-asymptotic analysis.
Assumption 2.2 (Overlap). There exists a constant $c \in (0, 1/2)$ such that the estimated propensity $\widehat{p}(x) \in [c, 1 - c]$, $\forall x \in \mathcal{X}$.
This assumption ensures that there are no extremely high or low propensities, which can make estimators unstable.
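As a toy illustration of the classic estimators referenced above (our own sketch, not the paper's method; the synthetic data-generating process and the use of a known propensity are assumptions for the demonstration), the following shows IPW recovering the ATE where a naive difference in means is biased by confounding:

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 200_000, 2.0  # sample size and true ATE (hypothetical)

# Synthetic data with confounding: X drives both treatment D and outcome Y.
X = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-X))        # true propensity P(D = 1 | X)
D = rng.binomial(1, p)
Y = tau * D + X + rng.normal(size=n)

# IPW estimate of the ATE. The propensity is known here; in practice it is
# estimated and clipped to [c, 1 - c] in the spirit of Assumption 2.2.
p_clip = np.clip(p, 0.01, 0.99)
ate_ipw = np.mean(D * Y / p_clip - (1 - D) * Y / (1 - p_clip))

# Naive difference in means is biased upward by the confounder X.
ate_naive = Y[D == 1].mean() - Y[D == 0].mean()
```

With large samples the IPW estimate lands near the true effect of 2, while the naive contrast absorbs the positive dependence of both $D$ and $Y$ on $X$.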
This assumption can easily be enforced by clipping the estimated propensity at some threshold, i.e., setting the propensity to $\max\{\min\{\widehat{p}(X), 1 - c\}, c\}$.
Extreme Value Theory. The study of extremes is mainly concerned with the tail behavior of heavy-tailed distributions, which are often modeled by regularly varying distributions. In this paper, we model extremity with multivariate regularly varying distributions.
Definition 2.3. A random variable $U \in \mathbb{R}_{+}^{d}$ is called regularly varying with index $\beta \in (0, \infty)$ if for any norm $\|\cdot\|$ in $\mathbb{R}^{d}$ and positive unit sphere $\mathbb{S}^{+} = \{x \in \mathbb{R}_{+}^{d} : \|x\| = 1\}$, there exist a probability measure $S(\cdot)$ on $\mathbb{S}^{+}$ and a sequence $b_{n} \to \infty$ such that
$$ n \, \mathrm{P}\big( (\|U\|/b_{n}, \, U/\|U\|) \in \cdot \big) \overset{w}{\to} c \cdot \nu_{\beta} \times S $$
for some constant $c > 0$, where $\times$ denotes the product measure and $\nu_{\beta}([r, \infty)) = r^{-\beta}$ for all $r > 0$. The parameter $\gamma = 1/\beta$ is called the Extreme Value Index (EVI), which characterizes the decay rate of the tail. Notice that this definition implies that as $b_{n} \to \infty$, the norm $\|U\|$ and the angle $U/\|U\|$ become asymptotically independent. We will leverage this fact for estimation in later sections. A typical example of a regularly varying distribution is the Pareto distribution.
Definition 2.4. The density of a Pareto (type II) distribution with index $\beta \in (0, \infty)$ is $f(x) = \beta(1 + x)^{-(\beta + 1)}, \forall x > 0$.
Definition 2.3 implies that the rescaled norm of a regularly varying variable is asymptotically Pareto distributed.
Notations.
In the rest of the paper, we use $\|\cdot\|$ and $\|\cdot\|_{1}$ interchangeably as shorthand for the $\ell_{1}$-norm. We use the asymptotic order notation $o(\cdot), O(\cdot)$, and $\Theta(\cdot)$, and $\mathbb{E}[\cdot]$ to denote expectation. For a matrix $A$, we denote by $A_{\cdot, i}$ its $i$-th column. $\mathrm{Unif}([a, b])$ is the uniform distribution on the interval $[a, b]$ and $\mathrm{Ber}(p)$ is the Bernoulli distribution with expectation $p$.
# 3 Treatment Effect on Extreme Events
# 3.1 Extreme Semi-parametric Inference
While standard causal estimands capture average effects of $D$ on $Y$, they obscure what happens in the tails, i.e., when rare, high-impact events occur. To address this, we model rare events with an explicit noise term $U$. The data we consider are of the form $\{(X_{i}, D_{i}, Y_{i}, U_{i})\}_{i=1}^{N}$, where $X$, $D$, and $Y$ are as defined in Section 2, and $U$ is an independent extreme noise vector. We use $\|U\|$ to model the severity of rare events: large norms indicate more extreme realizations. For example, in a hurricane-loss application, $U$ might be the vector of maximum wind speed, rainfall, and storm surge; $X$ the region's location; $D$ the level of infrastructure investment; and $Y$ the resulting economic loss. In what follows, we introduce a novel estimand that quantifies the causal effect of $D$ on $Y$ specifically in the tail region defined by large $\|U\|$. We then establish conditions for its identification under multivariate regular variation and propose two consistent estimators. We make the following i.i.d. assumption.
Assumption 3.1. The random variables $\{(X_{i}, D_{i}, Y_{i}, U_{i})\}_{i=1}^{N}$ are i.i.d. Furthermore, $U$ is regularly varying and is independent of $X, D$.
We are interested in the effect of treatment on the tail events of $U$.
Similar to the ATE, a naive definition of the extreme treatment effect would be $$ \theta ^ { \mathrm { E T E } } = \operatorname* { l i m } _ { t \to \infty } \mathbb { E } [ Y ( 1 ) - Y ( 0 ) \mid \| U \| > t ] , $$ which is simply the ATE conditioned on large $\| U \|$. However, in the extreme setting, the outcome may be unbounded due to the presence of extreme noise. As $t$ increases to infinity, this effect may also increase to infinity, rendering the quantity meaningless. In the climate change example, it is possible that dramatic climate change will damage or even destroy human societies, causing the effect of some policies to explode even when the policy effectively reduces losses and slows down the process. Fortunately, regularly varying distributions have the convenient property that as $t$ increases to infinity, $\| U \| / t \mid \| U \| > t$ converges weakly to a Pareto distribution (see Definition 2.3). Inspired by this property, we normalize the quantity $Y ( 1 ) - Y ( 0 ) \mid \| U \| > t$ by its growth rate. To characterize the growth of this quantity, we introduce the following polynomial growth assumption. Assumption 3.2 (Asymptotic Homogeneous Property). We assume that the covariate $X$ is bounded, i.e., $\| X \| \leqslant R$. Let $f ( X , D , U ) = \mathbb { E } [ Y \mid X , D , U ]$. There exist an $L$-Lipschitz continuous function $g ( x , d , u )$ and a function $e ( t ) : \mathbb { R } ^ { + } \to \mathbb { R } ^ { + }$ satisfying $\operatorname* { l i m } _ { t \to \infty } e ( t ) = 0$ such that $$ \left| \frac { f ( x , d , t u ) } { t ^ { \alpha } } - g ( x , d , u ) \right| \leqslant e ( t ) , \quad \forall x \in B _ { R } , u \in S ^ { d - 1 } . $$ This assumption characterizes the growth of the outcome with respect to the extreme noise. In many real-world examples, this assumption is satisfied.
For instance, research shows that landslide volume often follows a power-law relationship with rainfall intensity Tuganishuri et al. [2024], and that the economic loss caused by hurricanes scales polynomially with the maximum wind speed Zhai and Jiang [2014]. In these cases, $f$ grows polynomially with respect to $\| U \|$ and $e ( t ) = 0$ exactly. We define the Normalized Extreme Treatment Effect (NETE) as $$ \theta ^ { \mathrm { N E T E } } = \operatorname* { l i m } _ { t \to \infty } \mathbb { E } \left[ { \frac { Y ( 1 ) - Y ( 0 ) } { t ^ { \alpha } } } \mid \| U \| > t \right] , $$ where $\alpha$ is the index in Assumption 3.2, assumed known from prior knowledge. Note that the previous definition (3.1) is the special case of (3.2) with $\alpha = 0$. The intuition for the scaling factor $t ^ { \alpha }$ is that under Assumption 3.2, $\mathbb { E } [ Y ( d ) ]$ is of the order $O ( \| U \| ^ { \alpha } )$, so (3.2) is of the order $O ( \mathbb { E } [ ( \| U \| / t ) ^ { \alpha } \mid \| U \| > t ] )$, which is finite if $\alpha < \beta$. (3.2) implies that for a large threshold $t$, we have $\mathbb { E } [ Y ( 1 ) - Y ( 0 ) \mid \| U \| > t ] \approx t ^ { \alpha } \theta ^ { \mathrm { N E T E } }$. Therefore, $\theta ^ { \mathrm { N E T E } }$ measures the influence of treatment on the susceptibility of the outcome to the extreme noise $U$. We remark that NETE naturally sits at the nexus of two well-studied strands of work: tail-conditional expectations in EVT, and average effects or distributional shifts at extreme quantiles, e.g., ATE, CATE, and QTE. NETE can be understood as a causal analogue of the EVT quantity $\mathbb { E } [ Z / t \mid Z > t ]$, where $Z$ is a regularly varying variable. It generalizes the ATE to the setting of extreme events and aligns with the growth rate given by EVT. # 3.2 Extreme Effect Identification and Estimation The estimand (3.2) is designed to measure the treatment effect under extreme events, i.e., extremely large $\| U \|$.
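The finiteness condition $\alpha < \beta$ and the Pareto limit behind the scaling factor can be checked numerically. Below is a small Monte Carlo sketch under an assumed one-dimensional Lomax (Pareto type II) noise; by Definition 2.3, $\mathbb { E } [ ( \| U \| / t ) ^ { \alpha } \mid \| U \| > t ] \to 1 / ( 1 - \alpha \gamma )$ with $\gamma = 1 / \beta$ as $t \to \infty$. The seed, sample size, and quantile-based threshold are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.0, 3.0                       # the limit is finite only when alpha < beta
u_norm = rng.pareto(beta, size=2_000_000)    # Lomax(beta): survival (1 + x)^(-beta)

t = np.quantile(u_norm, 0.999)               # a high threshold
tail = u_norm[u_norm > t]
moment = np.mean((tail / t) ** alpha)        # E[(||U||/t)^alpha | ||U|| > t]

limit = 1.0 / (1.0 - alpha / beta)           # Pareto limit 1/(1 - alpha * gamma)
print(moment, limit)                         # the two values agree as t grows
```

For $\alpha \geqslant \beta$ the empirical conditional moment keeps growing with the threshold instead of stabilizing, which is exactly the divergence that motivates normalizing by $t^{\alpha}$.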
In practice, there may be only a small fraction of extreme samples in the dataset, which creates difficulties for statistical inference. To efficiently estimate the NETE, we leverage the asymptotic independence property of regularly varying variables (see Definition 2.3) to derive a novel identification formula. In particular, we have the following decomposition: $$ \begin{array} { r l } { \operatorname* { l i m } _ { t \to \infty } \mathbb { E } \left[ \frac { Y ( 1 ) - Y ( 0 ) } { t ^ { \alpha } } \mid \| U \| > t \right] } & { = \operatorname* { l i m } _ { t \to \infty } \mathbb { E } \left[ \frac { f ( X , 1 , U ) - f ( X , 0 , U ) } { t ^ { \alpha } } \mid \| U \| > t \right] } \\ & { = \operatorname* { l i m } _ { t \to \infty } \mathbb { E } \left[ \frac { f ( X , 1 , U ) - f ( X , 0 , U ) } { \| U \| ^ { \alpha } } \cdot \left( \frac { \| U \| } { t } \right) ^ { \alpha } \mid \| U \| > t \right] } \\ & { = \operatorname* { l i m } _ { t \to \infty } \mathbb { E } \left[ \left( g ( X , 1 , U / \| U \| ) - g ( X , 0 , U / \| U \| ) \right) \cdot \left( \frac { \| U \| } { t } \right) ^ { \alpha } \mid \| U \| > t \right] , } \end{array} $$ where we use Assumption 3.2 in the third equality. We can prove that the above quantity equals $$ \operatorname* { l i m } _ { t \to \infty } \mathbb { E } [ g ( X , 1 , U / \Vert U \Vert ) - g ( X , 0 , U / \Vert U \Vert ) \mid \Vert U \Vert > t ] \cdot \operatorname* { l i m } _ { t \to \infty } \mathbb { E } [ \Vert U \Vert ^ { \alpha } / t ^ { \alpha } \mid \Vert U \Vert > t ] . $$ The first factor measures the average effect of treatment across different directions, while the second factor depends only on the norm of the extreme noise, which can be estimated via standard techniques in extreme value theory. We summarize the identification formula in the following proposition. Proposition 3.3 (Identification).
Suppose that $U$ is multivariate regularly varying and Assumptions 2.1, 3.1, and 3.2 hold. Then $$ \theta ^ { \mathrm { N E T E } } = \operatorname* { l i m } _ { t \to \infty } \mathbb { E } [ g ( X , 1 , U / \Vert U \Vert ) - g ( X , 0 , U / \Vert U \Vert ) \mid \Vert U \Vert > t ] \cdot \operatorname* { l i m } _ { t \to \infty } \mathbb { E } [ \Vert U \Vert ^ { \alpha } / t ^ { \alpha } \mid \Vert U \Vert > t ] . $$ Proposition 3.3 separates the estimation of the NETE into two parts, the expectation of the spectral measure and the index estimation, which facilitates the estimation. While in theory the naive identification (3.3) works as well, we found that in practice (3.3) performs poorly (see Section 4 for empirical experiments). One reason is that without proper scaling, (3.3) suffers from exploding $\| U \|$, causing larger estimation errors. Inspired by this decomposition, we estimate the two factors separately. We summarize our estimators in Algorithm 1. To make our framework more flexible, we allow an approximate scaling exponent $\widehat { \alpha } _ { n }$ as input in Algorithm 1. $\widehat { \alpha } _ { n }$ can be obtained from prior knowledge or via other heuristics. For the first factor, we design two estimators, the Inverse Propensity Weighting (IPW) and the Doubly Robust (DR) estimators. Algorithm 1 Algorithm for NETE Estimation Require: Dataset $\mathcal { D } = \{ ( X _ { i } , D _ { i } , Y _ { i } , U _ { i } ) \} _ { i = 1 } ^ { n }$, threshold $t$, exponent estimate $\widehat { \alpha } _ { n }$, estimator 1: Randomly split $\mathcal { D }$ into two equal parts $\mathcal { D } _ { 1 }$ and $\mathcal { D } _ { 2 }$ 2: Using $\mathcal { D } _ { 1 }$, estimate: a. Propensity function ${ \widehat { p } } ( x )$ via regression of $D$ on $X$ b.
Pseudo-outcome regression ${ \widehat { g } } ( x , d , s )$ by regressing $Y / \| U \| ^ { \widehat { \alpha } _ { n } }$ on $( X , D , U / \| U \| )$ 3: Define the index set $\mathcal { I } = \{ i : \| U _ { i } \| > t , ( X _ { i } , D _ { i } , Y _ { i } , U _ { i } ) \in \mathcal { D } _ { 2 } \}$ and set $S _ { i } = U _ { i } / \Vert U _ { i } \Vert$ for $i \in \mathcal { I }$ 4: if estimator = IPW then 5: Compute $$ \widehat { \eta } _ { n , t } ^ { \mathrm { I P W } } = \frac { 1 } { | \mathcal { I } | } \sum _ { i \in \mathcal { I } } \frac { Y _ { i } } { \| U _ { i } \| ^ { \widehat { \alpha } _ { n } } } \Big ( \frac { D _ { i } } { \widehat { p } ( X _ { i } ) } - \frac { 1 - D _ { i } } { 1 - \widehat { p } ( X _ { i } ) } \Big ) . $$ 6: else 7: Compute $$ \widehat { \eta } _ { n , t } ^ { \mathrm { D R } } = \frac { 1 } { | \mathcal { I } | } \sum _ { i \in \mathcal { I } } \left[ \widehat { g } ( X _ { i } , 1 , S _ { i } ) - \widehat { g } ( X _ { i } , 0 , S _ { i } ) + \frac { D _ { i } - \widehat { p } ( X _ { i } ) } { \widehat { p } ( X _ { i } ) ( 1 - \widehat { p } ( X _ { i } ) ) } \big ( Y _ { i } / \| U _ { i } \| ^ { \widehat { \alpha } _ { n } } - \widehat { g } ( X _ { i } , D _ { i } , S _ { i } ) \big ) \right] . 
$$ 8: end if 9: Compute the adaptive Hill estimator on $\{ \| U _ { i } \| : i \in \mathcal { I } \}$: $$ \widehat { \gamma } _ { n } = \frac { 1 } { k } \sum _ { j = 1 } ^ { k } \log \frac { \| U _ { ( j ) } \| } { \| U _ { ( k + 1 ) } \| } , \quad \widehat { \mu } _ { n } = \frac { 1 } { 1 - \widehat { \alpha } _ { n } \widehat { \gamma } _ { n } } , $$ where $\| U _ { ( 1 ) } \| \ge \cdots \ge \| U _ { ( k + 1 ) } \|$ and $k$ is chosen by $$ k = \operatorname* { m a x } \left\{ k \in \{ l _ { n } , \cdots , n \} : \forall i \in \{ l _ { n } , \cdots , n \} , \; | \widehat { \gamma } _ { i } - \widehat { \gamma } _ { k } | \leqslant \frac { \widehat { \gamma } _ { i } r _ { n } ( \delta ) } { \sqrt { i } } \right\} . $$ Return: $\widehat { \theta } _ { n , t } ^ { \mathrm { e s t i m a t o r } } = \widehat { \eta } _ { n , t } ^ { \mathrm { e s t i m a t o r } } \cdot \widehat { \mu } _ { n }$. To derive the estimators, we first randomly split the data into equal halves and use the first half for nuisance estimation, i.e., propensity and outcome. We use the first half of the data to regress $Y / \| U \| ^ { \widehat { \alpha } _ { n } }$ on $( X , D , U / \| U \| )$ to obtain the (normalized) pseudo-outcome regression $\widehat { g }$, and to regress $D$ on $X$ to obtain an estimate $\widehat { p }$ of the propensity function. Then, we use the second half for estimation. The IPW and DR estimators are defined in (3.5) and (3.6), respectively. Notice that the second factor is the $\alpha$-moment of the random variable $\| U \| / t \mid \| U \| > t$, which converges weakly to a Pareto distribution as $t$ increases to infinity. Therefore, this quantity equals the $\alpha$-moment of a standard Pareto distribution, $1 / ( 1 - \alpha \gamma )$, and the problem reduces to estimating the EVI of an asymptotically Pareto distribution. Here, we use the adaptive Hill estimator in (3.7) from Boucheron and Thomas [2015], which provides a data-driven method for choosing the threshold.
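A minimal sketch of the two factors computed in Algorithm 1, namely the IPW contrast (3.5) and the Hill-based tail factor, assuming the fitted propensities and $\widehat { \alpha } _ { n }$ are already in hand. The function names are ours, and the adaptive choice of $k$ from Boucheron and Thomas [2015] is replaced by a fixed $k$ for brevity:

```python
import numpy as np

def eta_ipw(y, d, u_norm, p_hat, alpha_hat):
    """First factor: IPW estimate of the normalized contrast, as in (3.5)."""
    w = d / p_hat - (1.0 - d) / (1.0 - p_hat)
    return np.mean(y / u_norm**alpha_hat * w)

def mu_hill(u_norm, k, alpha_hat):
    """Second factor: Hill estimate of the EVI gamma on the top-k norms,
    then mu = 1 / (1 - alpha * gamma), as in (3.7)."""
    x = np.sort(u_norm)[::-1]                 # descending order statistics
    gamma = np.mean(np.log(x[:k] / x[k]))     # Hill estimator with fixed k
    return gamma, 1.0 / (1.0 - alpha_hat * gamma)
```

The final estimate is the product of the two factors, mirroring the Return step of Algorithm 1.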
Putting the two estimates together, we get our estimator of the NETE, $\widehat { \theta } _ { n , t } ^ { \, \cdot } = \widehat { \eta } _ { n , t } ^ { \, \cdot } \cdot \widehat { \mu } _ { n }$, where the superscript $\cdot$ can be DR or IPW. # 3.3 Non-asymptotic Analysis Up to now we have worked under very mild regular variation and asymptotic homogeneity conditions, which suffice to prove the consistency of our two-step estimator in the limit $n , t \to \infty$. However, to obtain non-asymptotic, finite-sample deviation bounds for both the spectral-measure term and the tail-index term, we must invoke a more structured tail model. In particular, existing results such as those in Zhang et al. [2023] rely on the fact that, beyond regular variation, the noise vector behaves exactly like a (possibly linearly transformed) Pareto distribution. Although this is admittedly stronger than mere second-order regular variation, it is at present the only framework in which we can directly apply sharp concentration inequalities and Wasserstein-distance bounds for spectral-measure estimation. We therefore make the following Pareto-type assumption. Assumption 3.4. We assume that the distribution of $U$ comes from the following class of models $$ M = \cup _ { k = 1 } ^ { \infty } M _ { k } , $$ where $M _ { k } = \{ { \mathcal { L } } ( U ) : U = A Z$, for $A \in { \mathcal { A } }$ and $\mathcal { L } ( Z ) \in \widetilde { M } _ { k } \}$.
The set of possible distributions for the components $Z$ is $$ \widetilde { M } _ { k } = \left\{ \mathcal { L } ( Z ) : \begin{array} { l } { Z \mathrm { ~ a d m i t s ~ a ~ } ( \mathrm { L e b e s g u e } ) \mathrm { ~ d e n s i t y ~ } h ( z ) \mathrm { ~ i n ~ } \mathbb { R } _ { + } ^ { d _ { z } } , } \\ { \left| \frac { h ( z ) - \beta ^ { d _ { z } } \prod _ { i = 1 } ^ { d _ { z } } ( 1 + z _ { i } ) ^ { - ( \beta + 1 ) } } { \beta ^ { d _ { z } } \prod _ { i = 1 } ^ { d _ { z } } ( 1 + z _ { i } ) ^ { - ( \beta + 1 ) } } \right| \leqslant \xi k ^ { - s } , \; \forall z , } \\ { h ( z ) \propto \prod _ { i = 1 } ^ { d _ { z } } ( 1 + z _ { i } ) ^ { - ( \beta + 1 ) } \mathrm { ~ i f ~ } \| z \| _ { 1 } > \zeta k ^ { \frac { 1 - 2 s } { \beta } } } \end{array} \right\} , $$ and the set of possible matrices $\mathcal { A }$ is $$ \mathcal { A } = \left\{ A \in \mathbb { R } _ { + } ^ { d _ { u } \times d _ { z } } : l \leq \operatorname* { m i n } _ { i } \| A _ { \cdot , i } \| _ { 1 } \leq \operatorname* { m a x } _ { i } \| A _ { \cdot , i } \| _ { 1 } \leq u \mathrm { ~ a n d ~ } J _ { A } \geq \sigma \right\} , $$ where $J _ { A } = { \sqrt { \operatorname* { d e t } ( A ^ { \mathsf { T } } A ) } }$. Throughout, we assume the constants satisfy $d _ { u } \geq d _ { z } \geq 2$, $0 < l < 1 < u$, $0 < s < 1 / 2$, $\sigma > 0$, $0 < \xi < 1$, and $\zeta > 0$. This assumption states that the extreme variable is a linear transformation of an approximately Pareto distribution. The parameter $s$ measures how close $Z$ is to a standard multivariate Pareto distribution; a small $s$ means the distribution is far from Pareto. With these assumptions, we are ready to state our main theorem, which gives a non-asymptotic rate for our estimator. Theorem 3.5. Suppose that Assumptions 2.1, 2.2, 3.1, 3.2, and 3.4 hold and $\alpha < \beta$, where $\alpha$ and $\beta$ are defined in Assumptions 3.2 and 3.4, respectively.
Furthermore, for any fixed $t$ , with probability at least $1 - \delta$ , $$ \begin{array} { r l } & { | p ( X ) - \widehat { p } ( X ) | \leqslant R _ { p } ( n , \delta ) , | \widehat { \alpha } _ { n } - \alpha | \leqslant R _ { \alpha } ( n , \delta ) , } \\ & { | \mathbb { E } [ Y / \| U \| ^ { \alpha } \mid X , D , U / \| U \| , \| U \| > t ] - \widehat { g } ( X , D , U / \| U \| ) | \leqslant R _ { g } ( n _ { t } , \delta ) , } \end{array} $$ where $\begin{array} { r } { n _ { t } = \sum _ { i = 1 } ^ { n / 2 } I ( \| U _ { i } \| > t ) } \end{array}$ and $R _ { p } , R _ { g } , R _ { \alpha }$ are estimation errors that are monotonically decreasing with respect to sample size. Then, with probability at least $1 - \delta , \delta \in ( 0 , 1 / 2 )$ , we have $$ \begin{array} { r } { \left| \widehat { \theta } _ { n , t } ^ { D R } - \theta ^ { N E T E } \right| \leqslant O ( \sqrt { R _ { p } ( n / 2 , \delta ) R _ { g } ( n _ { t } , \delta ) } + t ^ { \beta / 2 } n ^ { - 1 / 2 } + \log ( 1 / \delta ) n ^ { - 1 / ( 2 + \beta ) } } \\ { + t ^ { - \operatorname* { m i n } \{ 1 , \beta \} } + t ^ { - \beta s / ( 1 - 2 s ) } + \log ( t ) R _ { \alpha } ( n , \delta ) + e ( t ) ) . } \end{array} $$ and $$ \begin{array} { r l } & { \left| \widehat { \theta } _ { n , t } ^ { I P W } - \theta ^ { N E T E } \right| \leqslant O ( R _ { p } ( n / 2 , \delta ) + t ^ { \beta / 2 } n ^ { - 1 / 2 } + \log ( 1 / \delta ) n ^ { - 1 / ( 2 + \beta ) } } \\ & { \qquad + t ^ { - \operatorname* { m i n } \{ 1 , \beta \} } + t ^ { - \beta s / ( 1 - 2 s ) } + \log ( t ) R _ { \alpha } ( n , \delta ) + e ( t ) ) . 
} \end{array} $$ The error bound (3.8) consists of the nuisance error $\sqrt { R _ { p } ( n / 2 , \delta ) R _ { g } ( n _ { t } , \delta ) }$, the variance $t ^ { \beta / 2 } n ^ { - 1 / 2 }$, the EVI estimation error $\log ( 1 / \delta ) n ^ { - 1 / ( 2 + \beta ) }$, the $\alpha$ error $R _ { \alpha } ( n , \delta )$, and the bias terms $t ^ { - \operatorname* { m i n } \{ 1 , \beta \} } + t ^ { - \beta s / ( 1 - 2 s ) } + e ( t )$. A similar pattern holds for (3.9). Given this general result, we choose the threshold $t$ in a data-driven way to obtain a better rate. The idea is to use the estimated index to balance the bias and variance terms in (3.8) and (3.9). The following corollary gives the convergence rate in two different regimes. Corollary 3.6. Under the assumptions of Theorem 3.5, further suppose that $R _ { p } ( n , \delta ) = \Theta ( \log ( 1 / \delta ) n ^ { - 1 / 2 } )$, $R _ { g } ( n , \delta ) = \Theta ( \log ( 1 / \delta ) n ^ { - 1 / 2 } )$, and $R _ { \alpha } ( n , \delta ) = \Theta ( \log ( 1 / \delta ) n ^ { - c _ { \alpha } } )$ for some $c _ { \alpha } > 0$. Then the following conclusions hold. 1. If $s \in ( 0 , 1 / ( 2 + \operatorname* { m a x } \{ 1 , \beta \} ) )$, taking $t _ { n } = \Theta ( n ^ { ( 1 - 2 s ) \widehat { \gamma } _ { n } } )$, with probability at least $1 - \delta$, we have $$ | \widehat { \theta } _ { n , t } ^ { D R } - \theta ^ { N E T E } | = O ( e ( t _ { n } ) + n ^ { - s } \log ( 1 / \delta ) + n ^ { - c _ { \alpha } } \log ( n ) \log ( 1 / \delta ) ) . $$ 2.
If $s \in [ 1 / ( 2 + \operatorname* { m a x } \{ 1 , \beta \} ) , 1 / 2 )$, taking $t _ { n } = \Theta ( n ^ { \widehat { \gamma } _ { n } / ( 1 + 2 \operatorname* { m i n } \{ 1 , \widehat { \gamma } _ { n } \} ) } )$, with probability at least $1 - \delta$, we have $$ | \widehat { \theta } _ { n , t } ^ { D R } - \theta ^ { N E T E } | = O ( e ( t _ { n } ) + n ^ { - 1 / ( 2 + \operatorname* { m a x } \{ \beta , 1 \} ) } \log ( 1 / \delta ) + n ^ { - c _ { \alpha } } \log ( n ) \log ( 1 / \delta ) ) . $$ Similar results hold for the IPW estimator; due to limited space, we leave the result for IPW to the appendix. Many common machine learning algorithms, e.g., Lasso, logistic regression, and neural networks, can achieve the $O ( n ^ { - 1 / 2 } )$ rate assumed in Corollary 3.6. We highlight that if $e ( t )$ decays fast enough to become negligible compared to the other terms and we know the correct scaling exponent $\alpha$, Corollary 3.6 matches the rate of [Zhang et al., 2023, Theorem 3.1] without prior knowledge of the index $\beta$ in Assumption 3.4. Moreover, with additional prior knowledge of $e ( t )$ and $c _ { \alpha }$, we can adjust the choice of threshold $t$ to achieve a better rate. Remark 3.7. When the extreme noise is 1-dimensional, the spectral measure is trivially $\delta _ { \{ 1 \} }$ and there is no need to estimate it. Following an argument similar to Theorem 3.5 and Corollary 3.6, we can obtain a convergence rate of $O ( e ( t _ { n } ) + \log ( 1 / \delta ) n ^ { - 1 / ( 2 + \beta ) } + \log ( 1 / \delta ) n ^ { - c _ { \alpha } } )$. Remark 3.8. Assumption 3.4 may seem restrictive at first glance. We adopt it because non-asymptotic results for regularly varying extreme distributions are rare in the literature, and the goal of this paper is not to develop a new estimator for the spectral measure. To the best of our knowledge, Zhang et al.
[2023] is the only paper that gives such a result under Assumption 3.4. In fact, Assumption 3.4 can easily be replaced by the following two assumptions in our proof. (1) The extreme noise $U$ is regularly varying and its norm $\| U \|$ satisfies the von Mises condition in Boucheron and Thomas [2015]. (2) There exists an upper bound on the bias term, $\left| \mathbb { E } [ f ( U / \| U \| ) \mid \| U \| > t ] - \operatorname* { l i m } _ { t \to \infty } \mathbb { E } [ f ( U / \| U \| ) \mid \| U \| > t ] \right| \leqslant O ( t ^ { - c _ { 0 } } )$ for some constant $c _ { 0 } > 0$, for a fixed Lipschitz function $f$. We leave this generalization to future work. # 4 Experiments Having established in Section 3 that under our regularity and overlap assumptions the DR- and IPW-based extreme treatment estimators enjoy provable non-asymptotic error bounds, we next evaluate their finite-sample behavior and compare our estimators with naive estimators that do not exploit the regularly varying structure. In what follows, Section 4.1 presents purely synthetic simulations with known NETE. Section 4.2 then moves to a semi-synthetic setting, using real noise from the wavesurge dataset, to assess practical performance under realistic complexities. # 4.1 Synthetic Dataset The data generation process we use in this subsection is $$ \begin{array} { r l } & { X \sim \mathrm { U n i f } ( [ 0 , 1 ] ^ { 5 } ) , \; D \sim \mathrm { B e r } ( p ( X ) ) , \mathrm { ~ w h e r e ~ } p ( x ) = 1 / ( 1 + \exp ( - x ^ { \mathsf { T } } b ) ) , } \\ & { Y = \| U \| ^ { \alpha } ( D + U / \| U \| + \epsilon ) + \| U \| ^ { \alpha / 2 } , \; \epsilon \sim \mathrm { U n i f } ( - 1 , 1 ) , } \end{array} $$ where $\alpha > 0$ is a constant and $b \sim N ( 0 , 1 ) , A \sim \mathrm { U n i f } ( [ 1 , 2 ] ^ { d _ { u } \times d _ { z } } )$. We consider two ways of generating the extreme noise. The first follows Assumption 3.4.
$$ Z = ( Z _ { 1 } , \cdot \cdot \cdot , \ Z _ { d _ { z } } ) , \ Z _ { i } \sim \mathrm { P a r e t o } ( \beta ) , \ U = A Z , A \in \mathbb R ^ { d _ { u } \times d _ { z } } . $$ We also consider a Pareto mixture, i.e., $U = ( U _ { 1 } , \cdots , U _ { d _ { u } } )$, $U _ { i } \sim 0 . 5 \, \mathrm { P a r e t o } ( \beta ) + 0 . 5 \, \mathrm { P a r e t o } ( \beta + 1 )$. Note that Assumption 3.2 is satisfied with $e ( t ) = t ^ { - \alpha / 2 }$. By Proposition 3.3, we can calculate the ground-truth effect; for this setting, the ground-truth NETE is $1 / ( 1 - \alpha / \beta )$. We take different values of $\alpha , \beta$ in the experiments and report the estimation error for different sample sizes. We use the Mean Squared Error (MSE), $\mathbb { E } [ ( \widehat { \theta } - \theta ^ { \mathrm { N E T E } } ) ^ { 2 } ]$, to measure the error. As a baseline, we compare our estimator with naive IPW and DR estimators. Naive-IPW simply applies the standard IPW estimator to the samples whose $U _ { i }$ has norm larger than a threshold $t$, ignoring any tail-index modeling. Similarly, Naive-DR augments it with the usual doubly-robust correction term but likewise ignores the Pareto structure. We leave the detailed mathematical formulation of the baseline estimators to the appendix. The threshold rule in Corollary 3.6 is used in the experiments, and we apply the same threshold selection rule to all estimators. We estimate the scaling exponent $\alpha$ by the linear regression $\log ( | Y | ) \sim \log ( \| U \| )$ and use the coefficient of $\log ( \left| \left| U \right| \right| )$ as $\widehat { \alpha } _ { n }$. We leave the experimental details to the appendix. Figure 1 and Figure 2 show the experiment results. In the following, we use EVT-IPW and EVT-DR to denote $\widehat { \theta } _ { n , t } ^ { \mathrm { I P W } }$ and $\widehat { \theta } _ { n , t } ^ { \mathrm { D R } }$ in Algorithm 1.
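The two data-driven ingredients just described can be sketched directly: the log-log regression for $\widehat { \alpha } _ { n }$ and the threshold rule of Corollary 3.6 with the $\Theta ( \cdot )$ constants set to 1. The data-generating step below is a simplified stand-in for the synthetic design above, and the seed and sizes are our choices:

```python
import numpy as np

def alpha_hat_loglog(y, u_norm):
    """Estimate the scaling exponent as the slope of log|Y| on log||U||."""
    slope, _ = np.polyfit(np.log(u_norm), np.log(np.abs(y)), 1)
    return slope

def threshold_rule(n, gamma_hat, s):
    """Threshold t_n from Corollary 3.6, with Theta-constants set to 1."""
    beta_hat = 1.0 / gamma_hat
    if s < 1.0 / (2.0 + max(1.0, beta_hat)):
        return n ** ((1.0 - 2.0 * s) * gamma_hat)                 # regime 1
    return n ** (gamma_hat / (1.0 + 2.0 * min(1.0, gamma_hat)))   # regime 2

rng = np.random.default_rng(1)
n, alpha, beta = 50_000, 1.0, 2.5
u_norm = 1.0 + rng.pareto(beta, size=n)          # heavy-tailed norms, >= 1
d = rng.integers(0, 2, size=n)
y = u_norm**alpha * (d + 1.5 + rng.uniform(-1.0, 1.0, size=n))  # kept positive

print(alpha_hat_loglog(y, u_norm))               # close to the true alpha = 1.0
print(threshold_rule(n, gamma_hat=1.0 / beta, s=0.2))
```

Since the multiplicative noise term is independent of $\| U \|$, the regression slope is a consistent estimate of $\alpha$ in this toy setting.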
Figure 1: Experiment results for four different configurations when the extreme noise is a linear transformation of Pareto variables. The configurations of the upper-left, upper-right, lower-left, and lower-right panels are $( \alpha , \beta , d _ { z } , d _ { u } ) = ( 1 , 1 . 5 , 5 0 , 1 0 ) , ( 1 , 1 . 5 , 3 0 , 5 ) , ( 1 , 2 . 5 , 3 0 , 5 )$ and $( 2 , 2 . 5 , 3 0 , 5 )$, respectively. The results are averages over 50 repeated experiments. We use EVT-IPW and EVT-DR to denote $\widehat { \theta } _ { n , t } ^ { \mathrm { I P W } }$ and $\widehat { \theta } _ { n , t } ^ { \mathrm { D R } }$ in Algorithm 1. Figure 1 and Figure 2 show that under different configurations of $\alpha , \beta , d _ { u } , d _ { z }$, our estimators generally perform better than the baseline estimators. The reason is that our estimators make better use of the regularly varying structure. In general, EVT-DR achieves the smallest MSE in most experiments and is robust across configurations. Note that the Pareto mixture does not satisfy Assumption 3.4; Figure 2 shows that our method still maintains good performance even when Assumption 3.4 is violated. We also observe that the MSE sometimes increases with more samples in Figure 2. An explanation is that the violation of Assumption 3.4 renders the threshold selection rule in Corollary 3.6 inapplicable, so the variance term dominates the error. Figure 2: Experiment results for four different configurations when the extreme noise is a Pareto mixture. The configurations of the upper-left, upper-right, lower-left, and lower-right panels are $( \alpha , \beta , d _ { u } ) = ( 1 , 1 . 5 , 1 0 ) , ( 1 , 1 . 5 , 5 ) , ( 1 , 2 . 5 , 5 )$ and $( 2 , 2 . 5 , 5 )$, respectively. The results are averages over 50 repeated experiments. # 4.2 Semi-synthetic Dataset Now, we use the wavesurge dataset Coles et al. [2001] to create a semi-synthetic dataset for our experiments.
The wavesurge dataset has 2894 data points, which contain wave and surge heights at a single location off south-west England. Since wave and surge heights are not on the same scale and may not be positive, we shift the data and normalize each dimension by its $1 0 \%$ quantile. Given the wavesurge dataset, we generate our semi-synthetic dataset in the following way: $$ \begin{array} { r l } & { X \sim \mathrm { U n i f } ( 0 , 1 ) , \; D \sim \mathrm { B e r } ( p ( X ) ) , \mathrm { ~ w h e r e ~ } p ( x ) = 1 / ( 1 + \exp ( - x ^ { \mathsf { T } } b ) ) , } \\ & { Y = ( 1 - X + D ) W ^ { \alpha _ { 1 } } S ^ { \alpha _ { 2 } } + N ( 0 , 1 ) , } \end{array} $$ where $W$ and $S$ are the heights of the wave and surge, respectively. In this experiment, we evaluate how well our proposed EVT-based estimators recover the Normalized Extreme Treatment Effect (NETE) when only limited "short-term" data are available. We split the dataset into a training set (1,000 observations) and a test set (1,894 observations). First, we estimate the NETE on the training set using four estimators. Next, we apply the identification formula from Proposition 3.3 together with (4.1) to obtain a high-fidelity estimate of the NETE on the test set. Because the test-set estimate leverages additional data and the correct tail model, we treat it as a surrogate "ground truth" for comparison. The real-world implication of this experiment is that we can use short-term data (the training set) to predict long-term, unobserved behavior (the test set). Table 1 shows the results obtained with the different estimators. The results show that our EVT-IPW and EVT-DR give estimates closer to the test-set estimate than the naive estimators. In particular, the naive estimators consistently overshoot the true NETE by an order of magnitude. In addition, while more extreme tail configurations (e.g.
$( 1 , 3 )$) slightly increase variance, the EVT-based methods remain stable, with EVT-DR deviating by at most 0.3 from the test-set estimate. These findings demonstrate that incorporating multivariate extreme value structure via our EVT-IPW and EVT-DR estimators substantially improves finite-sample estimation of treatment effects on rare, tail events compared to naive methods.
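The preprocessing step used for the wavesurge data in Section 4.2, shifting each dimension to be positive and then normalizing by its $10\%$ quantile, can be sketched as follows. The small positive offset `eps` is our assumption, added to guarantee strict positivity:

```python
import numpy as np

def shift_and_normalize(x, q=0.10, eps=1e-6):
    """Shift each column to be positive, then divide by its q-quantile."""
    shifted = x - x.min(axis=0) + eps
    return shifted / np.quantile(shifted, q, axis=0)

data = np.array([[-1.0, 2.0],
                 [ 0.0, 4.0],
                 [ 3.0, 6.0]])   # toy stand-in for (wave, surge) heights
out = shift_and_normalize(data)
print(out.min(axis=0))           # both columns are strictly positive
```

After this transformation both dimensions are on a comparable, positive scale, which is what the norm-based tail analysis requires.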
Causal effect estimation seeks to determine the impact of an intervention from observational data. However, the existing causal inference literature primarily addresses treatment effects on frequently occurring events. But what if we are interested in estimating the effects of a policy intervention whose benefits, while potentially important, can only be observed and measured in rare yet impactful events, such as extreme climate events? The standard causal inference methodology is not designed for this type of inference since the events of interest may be scarce in the observed data and some degree of extrapolation is necessary. Extreme Value Theory (EVT) provides methodologies for analyzing statistical phenomena in such extreme regimes. We introduce a novel framework for assessing treatment effects in extreme data to capture the causal effect at the occurrence of rare events of interest. In particular, we employ the theory of multivariate regular variation to model extremities. We develop a consistent estimator for extreme treatment effects and present a rigorous non-asymptotic analysis of its performance. We illustrate the performance of our estimator using both synthetic and semi-synthetic data.
# 1. Introduction Computational models of the interstellar medium help us to understand the physical structure and chemical content that we observe in astronomical regions such as the Orion bar (Peeters et al.; 2024). In order to understand the transition from the low-density medium into the high-density medium, three-dimensional simulations are performed. To match these simulations to the observables, the chemistry of these regions must be simulated as well, and it is this coupling with the chemistry that causes a critical slowdown of the simulation. One solution is to develop surrogate models that can rapidly evaluate the chemistry, rebalancing the computational budget. We model these regions in interstellar space, known as Photodissociation Regions (PDRs) (Wolfire et al.; 2022), by simulating their physical structure using hydrodynamical codes. Through a snapshot of such a simulation, we take many lines of sight from all directions, representing the rays along which we could observe this object. We then solve for the chemistry along these rays, with the independent variable being the visual extinction $A _ { V }$. Visual extinction $A _ { V }$ is a measure of the decrease in radiation as we move into an astronomical object, and is related to the amount of hydrogen nuclei along a line of sight (Güver and Özel; 2009). Solving the chemistry as a function of the visual extinction is computationally expensive, since it requires iteratively solving for the coupled temperature and chemistry, accounting for the processes of cooling, heating, and the creation and destruction of the species. A comprehensive review and benchmarking of different codes is provided in (Röllig et al.; 2007). In this work, we use the 3D-PDR code (Bisbas et al.; 2012) to post-process three physical structures: a homogeneous cloud in one dimension, an inhomogeneous cloud in one dimension, and finally an actual three-dimensional simulation of the interstellar medium.
We then train surrogate models that are drop-in replacements for the original expensive chemical code. Surrogate modeling has become a widespread tool for solving and helping interpret astrochemical problems. The goal of a surrogate model is to replace the original code, increasing the inference speed at the cost of some accuracy or of specialization to a predetermined parameter space. These surrogate models can be partitioned into two categories: one in which only a steady-state solution or the solution at a single time of the chemistry is required, and another in which a full depth-, time-, or space-dependent solution is required. Good examples of the first are neural networks for the direct emulation of emission spectra (de Mijolla et al.; 2019; Grassi et al.; 2025) and regression forests for chemical abundances in order to help with explainability (Heyl et al.; 2023). The second category has been studied more widely in the past years, with first attempts applying autoencoders directly to abundances (Holdship et al.; 2021), followed by Physics-Informed Neural Networks (Branca and Pallottini; 2022), Latent (Neural) Differential Equations (Grassi et al.; 2021; Tang and Turk; 2022; Sulzer and Buck; 2023; Maes et al.; 2024), operator learning (Branca and Pallottini; 2024), and neural fields (Ramos et al.; 2024). Efforts to gather different datasets and compare architectures are also being made (Janssen et al.; 2024). The main goal of these surrogate models is to replace the plethora of computationally expensive astrochemical codes. The speedup of these surrogates enables the faster inference of observational results and faster simulations of astronomical objects. With enough speedup, it could enable the direct inference of observations using coupled three-dimensional hydrodynamical and astrochemical codes, something which is currently prohibitively expensive.
These coupled simulations are so expensive that they can currently only be run on university clusters and supercomputers (Seifried et al., 2017; Grudić et al., 2021; Gong et al., 2023; Yue et al., 2024).

Table 1. Properties of the datasets used for training the surrogate models with the dynamic ranges of the auxiliary parameters listed in brackets.

In this article, we discuss a total of three datasets of increasing physical complexity, all computed using the 3D-PDR code. The first two datasets consist of two simple spherical models, whereas the third dataset is derived from a three-dimensional simulation of a molecular cloud. We then introduce latent Neural Ordinary Differential Equations (NODEs) as a surrogate model that can be trained to emulate these datasets. This is followed by a description of the architecture, parameters, and strategies we use to effectively train these surrogate models. We then briefly discuss the results of the surrogate models trained on the first two datasets. Next, we present more extensively the results of the training on the last dataset, showing that the surrogate model can accurately reproduce the original observable column densities. Finally, we conclude the paper with a discussion and an outlook on what is needed to advance the application of these surrogate models.

# 2. Methods

# 2.1. Models of Photodissociation Regions

Models of photodissociation regions are essential to describe the transition of the chemistry as we go from the low-density interstellar medium into higher-density filaments and eventually into dense star-forming regions.
The density, defined as the hydrogen nuclei number density per cubic centimeter, $n_{\mathrm{H,nuclei}} = n_{\mathrm{H}} + 2n_{\mathrm{H_2}}$, with $n_{\mathrm{H}}$ and $n_{\mathrm{H_2}}$ the atomic and molecular hydrogen number densities in $\mathrm{cm^{-3}}$ respectively, is the dominant physical parameter that dictates how the temperature, radiation, and subsequently the chemistry behave. The visual extinction and density are related via the integral $A_V \propto \int n_{\mathrm{H,nuclei}}\,\mathrm{d}s$ along the line of sight $s$. At low visual extinction, $A_V < 1$, the medium is radiation-dominated and the densities are low, allowing ionized and atomic species to dominate. As the visual extinction increases to $A_V > 1$, however, radiation is attenuated and cooling becomes more effective, allowing the gas to cool down and species to tend towards their molecular forms. At the highest densities, molecules such as carbon monoxide (CO) start to form effectively. The underlying physical processes are described by a system of differential equations with one ODE per species, and an ODE for the temperature:
$$
\begin{aligned}
\frac{\mathrm{d}n_i}{\mathrm{d}t} &= \sum_{j,l} k_{jl} n_j n_l + \sum_j k_j n_j - n_i \Big( \sum_l k_{il} n_l + \sum_j k_j \Big), \\
\frac{\mathrm{d}T}{\mathrm{d}t} &= \frac{1}{k_b n_{\mathrm{H,nuclei}}} \left( \sum_m \Gamma_m - \sum_m \Lambda_m \right),
\end{aligned}
$$
with $i$, $j$ and $l$ the species indices, $m$ the cooling and heating process indices (Bovino and Grassi, 2023), and $k_b$ the Boltzmann constant in $\mathrm{erg\,K^{-1}}$.
The first system of differential equations describes the unimolecular and bimolecular reactions, with the positive terms accounting for the creation of the species and the negative terms for their destruction. The second equation describes the evolution of the temperature, driven by the net energy rate in $\mathrm{erg\,cm^{-3}\,s^{-1}}$: the first term includes the heating processes and the second the cooling processes. The coupling of this nonlinear system of equations is strong, since the reaction rate coefficients depend on the temperature, $k_{ij}(T)$, and the change in temperature depends on the chemistry, density, and temperature, $\{\Gamma_m, \Lambda_m\}(n_i, n_{\mathrm{H,nuclei}}, T)$. In order to solve this system of differential equations along a line of sight in 3D-PDR, an initial temperature is guessed, after which the code iterates until the chemistry and the energy balance converge to a steady-state solution. Whenever the temperature or chemistry changes, this process must be repeated, resulting in costly evaluations. A more detailed description of the process can be found in Appendix A of Bisbas et al. (2012).

2.1.1. Uniform density one-dimensional models (v1)

As a first benchmark of the surrogate model, we choose a spherically symmetric cloud of uniform density. This one-dimensional model allows us to approximate the depth-dependent chemistry of a line of sight into the cloud. The initial conditions are chosen to reflect the Orion Cloud. We first vary the initial density $n_{\mathrm{H,nuclei}}$, which plays an important role in determining the rates at which reactions take place, how much heating and cooling can take place, and how much radiation can enter the cloud.
Secondly, the initial radiation field $F_{\mathrm{UV}}$ is varied, determining the amount of energy available in the outer parts of the cloud and how deep in the cloud the transition from atomic to molecular species takes place. Lastly, the cosmic-ray ionization rate $\zeta$ is varied: this rate is not attenuated along the line of sight and provides a mechanism to destroy molecules even deep within the cloud. By varying these three input parameters of 3D-PDR, we can compute the abundances and temperature along a line of sight directly into the cloud. A summary of the chosen parameters and their ranges can be found in Table 1. This dataset was generated in 864 CPU core hours using an Intel Core i9-13900 processor.

2.1.2. Non-uniform density one-dimensional models (v2)

The first models assume a spherical geometry with uniform density, which is a good first-order approximation for the chemistry. However, it does not account for the fact that, in the interstellar medium, objects are extended and have a density profile that rapidly increases towards the center. We subsequently use the PDFChem dataset (Bisbas et al., 2023), which was created with the goal of using probability density functions to rapidly infer the average densities of molecules. This provides convenient training data to test models of varying density. The dataset varies its initial radiation field $F_{\mathrm{UV}}$ as well as the cosmic-ray ionization rate $\zeta$, but it does not vary the initial density value $n_{\mathrm{H,nuclei}}$, which now changes as a function of depth instead.

2.1.3. Three-dimensional simulations of the interstellar medium (v3)

For the final dataset, we then proceed to a physical structure that much more closely resembles that of actual astrophysical objects. For the 3D-PDR setup, we use a three-dimensional model representing a typical Milky Way giant molecular cloud presented in Seifried et al.
(2020) using a uniform grid consisting of $128^3$ cells. From each cell, a hierarchy of 12 HEALPix rays (Górski et al., 2005) emanates, along which we compute the column densities of species and the line cooling by adopting a large velocity gradient escape probability formalism. For the PDR model, we assume a constant cosmic-ray ionization rate of $\zeta_{\mathrm{CR}} = 10^{-17}\,\mathrm{s^{-1}}$ and an isotropic radiation field with intensity $\chi/\chi_0 = 1$ (normalized to the spectral shape of Draine, 1978). Once 3D-PDR is converged, we output the gas temperatures and the abundances of species along the HEALPix hierarchy of 12 rays for all cells, under the assumption that each HEALPix ray is an independent one-dimensional PDR model. We thus generate a large database of one-dimensional models (with a total number of $128^3 \times 12$ rays). Although they share the same PDR environmental parameters $\zeta_{\mathrm{CR}}$ and $\chi/\chi_0$, they differ in terms of the density distribution along each HEALPix line of sight. This dataset takes a total of 1792 CPU core hours (Intel Xeon Gold 6348 processor) to process the chemistry along all rays. We subsequently use a subset of 1/80 of the total rays, resulting in a dataset with 314573 $A_V$-series. During training, we limit ourselves to all series with at least $n \geq 48$ samples, effectively using only 158948 models.

# 2.2. Structure of the data and preprocessing

Typically in astrochemistry, the abundances of each molecule are computed in terms of fractional abundances, $x_i = \frac{n_i}{n_{\mathrm{H,nuclei}}}$, with $n_i$ ($\mathrm{cm^{-3}}$) the number density.
This allows one to investigate the relative abundances of each molecule, regardless of changes in the density of the medium. Inherently, abundances have a large dynamic range. Observable molecules have fractional abundances in the range $10^{-12} < x_i < 1$; the chemical model thus inherently has a dynamic range of 12 orders of magnitude. In order to also account for molecules below the observational limit, we subsequently choose a lower boundary of $x_i \geq 10^{-20}$ for the training data by introducing a minor offset to each fractional abundance: $\epsilon_{x_i} = 10^{-20}$. With this large dynamic range, it is more useful to compute our losses in logarithmic space, so that all species are modeled correctly, even when less abundant. To this end, we transform all abundances into log-space.

Figure 1. An example of an $A_V$-dependent model for datasets v1, v2, v3, and v3 with smoothing.

In log-space we then wish to ensure that the input features have a distribution close to a standard normal distribution. To this end, we standardize the data using either the statistics per species (v1 and v2) or the statistics of all species at once (v3). This gives us the following data preprocessing step:
$$
D_i' = \frac{\log_{10}(D_i + \epsilon_i) - \tilde{\mu}}{\tilde{\sigma}},
$$
with $\tilde{\mu}$ and $\tilde{\sigma}$ the mean and standard deviation in log-space, respectively. For the auxiliary parameters, we choose the physical parameters that vary for each of the datasets, $\vec{p}_i = [A_V, T_{\mathrm{gas}}, T_{\mathrm{dust}}, n_{\mathrm{H,nuclei}}, F_{\mathrm{UV}}, (\zeta)]$. We choose to include the temperatures as physical parameters, instead of co-evolving them with the abundances in the latent space, as was done in Vermariën et al. (2024).
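As a concrete illustration, this preprocessing step can be sketched in a few lines. This is a hypothetical minimal implementation, not the authors' code; it shows the global-statistics variant used for v3, where one mean and standard deviation is computed over all species at once.

```python
import numpy as np

EPS = 1e-20  # lower boundary offset for fractional abundances

def preprocess(abundances):
    """Map linear fractional abundances to standardized log10-space values.

    Adding EPS floors the abundances at ~1e-20 before taking the log,
    compressing 12+ orders of magnitude onto a roughly unit scale.
    """
    logx = np.log10(abundances + EPS)
    mu, sigma = logx.mean(), logx.std()   # v3 scheme: statistics over all species
    return (logx - mu) / sigma, (mu, sigma)

def postprocess(d, stats):
    """Invert the transform back to linear fractional abundances."""
    mu, sigma = stats
    return 10.0 ** (d * sigma + mu) - EPS

x = np.array([1e-4, 1e-8, 1e-12])        # toy fractional abundances
d, stats = preprocess(x)                 # standardized log-space features
x_rec = postprocess(d, stats)            # round-trips back to x
```

For v1 and v2, `mu` and `sigma` would instead be computed per species (i.e. per column of a 2-D abundance array), but the transform itself is identical.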
For the v3 dataset, there are some numerical artifacts where the HEALPix ray-tracing scheme rapidly alternates between two cells with vastly different chemical compositions, resulting in jumps in the chemistry on a non-physical timescale. Due to the recurrent nature of training NODEs in latent space, this non-physical high-frequency noise introduces large gradients that destabilize training. To combat this, we fit a smoothing spline (Zemlyanoy, 2022) in log-abundance space and resample each of the abundances. The smoothing spline for the abundances uses a regularization parameter $\lambda = 10^{-4}$, and lower and upper boundaries of $x_i \in [-30, 0]$ in log-space, so that values can never exceed 1 in linear space or become too small. For the physical parameters, we use the same regularization parameter, but no boundaries. After applying the smoothing spline in log-space, the data is transformed back into linear space. The original and smoothed v3 data can be seen in Figure 1.

# 2.3. Latent Augmented Neural Ordinary Differential Equations

In order to emulate the chemical series, which are governed by the differential equations defined earlier, we choose Neural Ordinary Differential Equations (NODEs) (Chen et al., 2019; Kidger, 2022) as a data-driven approach, replacing the original $\vec{x}_{i+1} = \mathrm{ODESolve}(\vec{x}_i, \vec{p}_i)$ with a new neural network approximator in the latent space, $\vec{z}_{i+1} = \mathrm{NODESolve}(\vec{z}_i, \vec{p}_i)$, with $\vec{z}$ being the latent chemical state vector.
We can describe this latent integral over visual extinction as follows:
$$
\vec{z}_{i+1} = \Psi(\vec{z}_i, \vec{p}_i, A_{V,i}, A_{V,i+1}) = \vec{z}_i + \int_{A_{V,i}}^{A_{V,i+1}} \psi(\vec{z}, \vec{p}_i)\, \mathrm{d}A_V',
$$
where $\vec{z}_i \in \mathbb{R}^Z$ is the latent state vector, $A_V$ is the visual extinction, which serves as the independent variable to integrate along the line of sight, and $\vec{p}_i \in \mathbb{R}^P$ are auxiliary parameters that are concatenated to the input of the nonlinear transformation $\psi: \mathbb{R}^{Z+P} \to \mathbb{R}^Z$. Additionally, we define the shorthand notation without explicit mention of the limits: $\Psi(\vec{z}_i, \vec{p}_i)$. The addition of the auxiliary parameters $\vec{p}$ allows us to train a latent model that generalizes over many different physical models with different physical parameters. The practice of enhancing the state vector with extra dimensions and features to obtain more expressive neural ODEs has been coined augmented ODEs (Dupont et al., 2019) and parameterized ODEs (Lee and Parish, 2021). In this article, we employ the term "auxiliary parameters", since they provide auxiliary information about the physical state of the system to the latent space. This is essential to enable the application of this architecture to the post-processing of simulations, which provide these physical parameters. For directly coupled hydrodynamical simulations in the future as well, the architecture can rely on the physical parameters computed by the other codes. A diagram showing how the architecture is connected can be found in Figure 2.
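The latent update $\Psi$ above can be sketched numerically. The following is a minimal NumPy illustration, not the authors' implementation: a fixed-step RK4 integrator stands in for the adaptive fifth-order solver used in the paper, the MLP weights are random placeholders, and $\psi$ ends in a tanh as described in Section 2.5. In practice the auxiliary parameters would be standardized first (here they are raw illustrative values).

```python
import numpy as np

rng = np.random.default_rng(0)
Z, P = 4, 3  # latent size and number of auxiliary parameters (illustrative)

# Placeholder two-layer MLP weights for the vector field psi: R^(Z+P) -> R^Z.
W1, b1 = rng.normal(scale=0.3, size=(16, Z + P)), np.zeros(16)
W2, b2 = rng.normal(scale=0.3, size=(Z, 16)), np.zeros(Z)

def psi(z, p):
    """dz/dA_V given the latent state z and auxiliary parameters p."""
    h = np.tanh(W1 @ np.concatenate([z, p]) + b1)
    return np.tanh(W2 @ h + b2)  # final tanh bounds the derivative in [-1, 1]

def Psi(z, p, av0, av1, n_sub=8):
    """Integrate z from A_V = av0 to av1 with fixed-step RK4, holding p fixed."""
    h = (av1 - av0) / n_sub
    for _ in range(n_sub):
        k1 = psi(z, p)
        k2 = psi(z + 0.5 * h * k1, p)
        k3 = psi(z + 0.5 * h * k2, p)
        k4 = psi(z + h * k3, p)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

z0 = rng.normal(size=Z)                 # latent state, e.g. phi(x_0)
p0 = np.array([0.1, 50.0, 20.0])        # illustrative [A_V, T_gas, T_dust]
z1 = Psi(z0, p0, 0.0, 0.5)              # advance from A_V = 0 to 0.5
```

Chaining such steps with the per-step parameters $\vec{p}_i$ gives the rollout pathway; the actual model delegates this to an adaptive solver with backpropagation support.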
These latent neural differential equations require encoder and decoder transformations (Kramer, 1991), allowing one to construct a state for the latent ODE, which can typically be solved at a lower cost (Grathwohl et al., 2018; Rubanova et al., n.d.). This latent ODE can be defined by a small dummy chemical network (Grassi et al., 2021), constant terms (Sulzer and Buck, 2023), or a tensor expression akin to a larger chemical network (Maes et al., 2024). Our choice is a purely data-driven NODE with a latent bottleneck size $l$, enabling us to capture both the chemical and physical state in the latent space. This latent space can then be evolved by solving the learned latent differential equation as a function of visual depth. Specifically, we use a fifth-order Runge-Kutta differential equation solver (Tsitouras, 2011).

Figure 2. A diagram of the Latent Augmented NeuralODE architecture. The rollout pathway produces a series of abundances, $\vec{x}_0, \{\vec{p}_i\} \mapsto \{\vec{x}_1, ..., \vec{x}_n\}$, whilst the autoencoder pathway simply reconstructs: $\vec{x}_i \mapsto \vec{x}_i$. The blocks contain the neural networks $\phi, \psi, \varphi$, with the center block representing the latent differential equation $\Psi$.

# 2.4. Batching variable length series

In dataset v3, the number of visual extinctions that are sampled along a ray can vary, resulting in a distribution of different series lengths. The distribution of the lengths can be found in Figure 3. We first impose a lower bound of $n \geq 48$, because the shorter series have a high similarity and are less dynamic, resulting in a bias in the training data towards steady-state solutions. We then use a batching strategy that accounts for the fact that each series has a different length, since samples of similar length may also be relatively similar to each other.
The same problem exists in text-to-speech synthesis, where sorting variable-length sentences by length might result in less randomness than desired in each batch (Ge et al., 2021). On the other hand, with a distribution of lengths as broad as ours, randomly filled batches must be padded with zeros to account for the difference in lengths. We adopt a semi-random batching strategy, adapted to the broad power-law distribution of our lengths. We propose to sort the dataset using a small random offset:
$$
\begin{aligned}
n' &= \log_{10}(n) + \epsilon, \quad \text{where} \\
\epsilon &\sim \mathcal{U}(-\alpha, \alpha),
\end{aligned}
$$
with $n$ the length of each series and $\epsilon$ a randomly sampled offset. We then sort the series by $n'$, create batches by grouping along the sorted axis, and then shuffle the batches. The effect of the offset factor $\alpha$ on the Zero-Padded Fraction (ZPF) for batch size 64 and dataset v3 is shown in Table 2. Based on these values, we select the offset $\alpha = 0.01$, since it only induces a zero-padding fraction of 2%.

Table 2. The Zero-Padded Fraction (ZPF), the number of zero elements needed to pad all batch elements up to the longest length, as a function of the offset factor $\alpha$ for the semi-random sorting. − indicates infinite offset, resulting in random sorting.

Figure 3. The distribution of the series length $n$ and maximum visual extinction in dataset v3. The lower bound $n = 48$ is used during training.

# 2.5. Training neural differential equations

2.5.1. Loss functions

The architecture consists of three main building blocks: the encoder $\phi$, the latent NODE block with a vanilla Multi-Layer Perceptron (MLP) as the nonlinear transformation $\psi$, and lastly the decoder $\varphi$.
This architecture can typically be trained in two modes: directly as an autoencoder, $\vec{x}_i \mapsto \vec{x}_i$, or in a recurrent fashion, $\vec{x}_0 \mapsto \{\vec{x}_1, ..., \vec{x}_n\}$ for $n$ rollout steps. For training the architecture we utilize both, starting with a large contribution of the autoencoder loss:
$$
\mathcal{L}_{auto} = \sum_{a \in \vec{A}_V} \mathrm{MSLE}(\vec{x}_a, \varphi(\phi(\vec{x}_a))),
$$
where MSLE is the Mean Squared Logarithmic Error, defined as $\mathrm{MSLE}(A, B) = \mathrm{MSE}(\log_{10}(A), \log_{10}(B))$ with $\mathrm{MSE}(A, B) = \frac{1}{N}\sum_n (A_n - B_n)^2$. The rollout loss is then computed by evolving the state in the latent space, decoding its values back into the physical space, and computing the loss:
$$
\mathcal{L}_{rollout} = \sum_{a \in \vec{A}_V} \mathrm{MSLE}(\vec{x}_a, \varphi(\Psi(\phi(\vec{x}_0), \{\vec{p}_0, ..., \vec{p}_a\}, a))).
$$
Lastly, we introduce a loss that encourages the latent states of the autoencoder and rollout pathways to stay close to each other, penalizing their squared distance in the latent space:
$$
\mathcal{L}_{latent} = \sum_{a \in \vec{A}_V} \mathrm{MSE}(\phi(\vec{x}_a), \Psi(\phi(\vec{x}_0), \{\vec{p}_0, ..., \vec{p}_a\}, a)).
$$
All these losses are then combined into $\mathcal{L} = \sum_i \lambda_i \mathcal{L}_i$ for the training process. The computation of these losses is highlighted by the paths shown in Figure 2.
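The combination of these losses can be sketched as follows. This is an illustrative NumPy version, not the training code: `x_auto`, `x_roll` and the latent states stand in for the outputs of the networks $\varphi(\phi(\cdot))$ and the latent solve $\Psi$, and the default weights mirror the initial values quoted in Section 2.5.2.

```python
import numpy as np

EPS = 1e-20  # abundance floor, matching the preprocessing

def msle(a, b):
    """Mean Squared Logarithmic Error: an MSE computed in log10-space."""
    la, lb = np.log10(a + EPS), np.log10(b + EPS)
    return np.mean((la - lb) ** 2)

def total_loss(x_series, x_auto, x_roll, z_auto, z_roll,
               w_auto=1.0, w_roll=4e-2, w_latent=1e-3):
    """Weighted sum of autoencoder, rollout and latent-consistency losses.

    x_series : list of ground-truth abundance vectors, one per A_V sample
    x_auto   : autoencoder reconstructions of the same samples
    x_roll   : decoded rollout predictions starting from x_0
    z_auto, z_roll : the corresponding latent states of both pathways
    """
    l_auto = sum(msle(x, xa) for x, xa in zip(x_series, x_auto))
    l_roll = sum(msle(x, xr) for x, xr in zip(x_series, x_roll))
    l_lat = sum(np.mean((za - zr) ** 2) for za, zr in zip(z_auto, z_roll))
    return w_auto * l_auto + w_roll * l_roll + w_latent * l_lat
```

A perfect model drives all three terms to zero simultaneously; during training the weights are scheduled so that the autoencoder term dominates early and the rollout term dominates later.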
These rollout and autoregressive losses on the training and validation sets are computed using the standardized log-abundances and the corresponding predictions. In order to train the latent differential equation solver $\Psi$ and its MLP $\psi$, one needs to backpropagate through the solver. Several numerical methods exist for this process, namely "discretise-then-optimise", "optimise-then-discretise", and "reversible ODE solvers". We use the default Diffrax method of "discretise-then-optimise", directly propagating through all the operations within the solver, with the added benefit of accuracy and speed at the cost of memory footprint. A more detailed discussion of different methods to obtain gradients from differential equations can be found in chapter five of Kidger (2022).

2.5.2. Training strategy

The loss weights start out with a large autoencoder weight $\lambda_{auto} = 1$ and a small rollout weight $\lambda_{rollout} = 4 \times 10^{-2}$, but after 15 epochs this relationship is inverted over the span of the next 15 epochs, as can be seen in Figure 4. The latent loss weight is chosen to have a small value of $\lambda_{latent} = 10^{-3}$. For the validation loss, we only utilize the rollout term, since this is the only relevant metric at inference time. We combine this multi-objective loss function with a training scheme in which we only train on a subset of points in each individual sample, taking a random contiguous subset of the series along the $A_V$ axis. We increase the size of the subset after a number of epochs, until we sample the full extent of each series. For the v3 dataset, the subsampling size is shown at the top of Figure 4. For v3 we use an increasing subset size of 64, 128, 256, 512 and finally all steps, after 0, 5, 10, 20 and 30 epochs respectively.
For each of these intervals, the learning rate follows a cosine schedule with a linear warmup profile (Loshchilov and Hutter, 2016), performing a warm restart for each increase in subset size. For v1 and v2, we follow the same schedule, but with only half the subset size. Altogether, we train the architecture for a total of 100 epochs. The optimizer is AdamW with a weight decay factor of $10^{-5}$ (Loshchilov and Hutter, 2017) and a peak learning rate $\lambda_{learn}$. This is combined with global gradient clipping to improve training stability (Pascanu et al., 2013). For the training we use a batch size $B$ and a latent bottleneck size $l$. The encoder $\phi$, latent $\psi$ and decoder $\varphi$ MLPs all consist of $H$ hidden layers of width $W$, with $\psi$ having a final tanh activation, mapping its output to the range $[-1, 1]$. The hyperparameters used for training on datasets v1, v2 and v3 can be found in Table 3.

Table 3. The hyperparameters for training on the three datasets.

Figure 4. The scheduling of the learning rate and the weights of the loss function for the training on dataset v3: the normalized learning rate, the latent loss weight ($10^{-3}$), the rollout loss weight ($4 \times 10^{-2}$ to 1), and the autoregressive loss weight (1 to $2.5 \times 10^{-1}$).

# 3. Results

For each of the three datasets, we train the models using 70% of the available data, using 15% as a validation set and keeping 15% as a test set, which is the set we use for the figures in this section. We now compute the Mean Absolute Error (MAE) in log-space, without standardization; this results in a scaling of the mean of the test set compared to the training and validation sets by a factor of 3.

# 3.1.
Homogeneous models in one dimension

The one-dimensional model takes 81 minutes to train (using an NVIDIA V100), reaching a final validation loss of $\mathcal{L}_{val} = 0.02$. The loss curves can be found in Figure 5. These show that the training loss decreases quickly during the first 15 epochs, with the validation loss, which is evaluated using only the rollout loss term, lagging behind. We can see a small increase in the loss after expanding the length of the series. After the 15th epoch, as the autoencoder loss weight starts decreasing and the rollout loss weight starts increasing, the training loss starts increasing while the validation loss comes down, indicating that the latent NODE is being trained effectively. Once the loss weights are constant again at epoch 30, the training loss starts decreasing again. The validation loss is lower than the training loss, indicating that there is a trade-off between the autoregressive and latent losses. We show both the data and the rollout prediction for one sample from the test dataset in Figure 6. The plot is constrained to a subset of species to allow for easier comparison. It shows a chemistry that starts evolving as soon as the visual extinction reaches $A_V = 0.1$, with the auxiliary gas temperature and radiation field rapidly decreasing. The rollout predictions follow the data, but as the chemistry starts changing more rapidly around $A_V = 7$, they fail to capture the rapid dynamics, instead smoothing out the chemical evolution. In the end, however, the model does recover and converges to the steady-state solution of the chemistry. The over-smoothed prediction for the chemistry at intermediate $A_V$ can be seen as a peak in the error in Figure 8, indicating that the surrogate model could still be improved there.
The error quickly decreases after the peak, indicating that the approximation can correctly predict the steady-state solution without a catastrophic buildup of the error at intermediate $A_V$. The error does not show a similar peak as a function of the index: since the visual extinction at which the chemistry rapidly changes depends on the initial radiation field, density, and cosmic-ray ionization rate, the largest changes occur at different indices within the series, resulting in no distinct peak in the error, only a slightly larger error at the end of each series.

# 3.2. Variable density models in one dimension

The variable density model has a similar loss curve, as can be seen in Figure 5, with the training taking 32 minutes (using an NVIDIA V100). However, due to the smaller size of the dataset and the greater physical complexity, the performance is not as good as that of the v1 model at a similar number of epochs. We see a similar pattern in the training and validation losses, where the validation loss seems to converge well after a peak caused by increasing the series length at epoch 30. The final validation loss it achieves is $\mathcal{L}_{val} = 0.076$. The greater chemical complexity due to the increase in density is reflected in the fact that there are now several small jumps in the data, as can be seen in Figure 7. The neural network provides smooth interpolations, but the capacity of the surrogate model is not great enough to capture the quick changes in chemistry, indicating it must either be trained longer or be given a larger model complexity. This is reflected by the loss as a function of index and visual extinction, as shown in Figure 9. It again has a peak, after which the error decreases as the surrogate converges to the steady-state solution of the chemistry.
The lower performance on dataset v2 than on v1 thus motivates the choice to use larger MLPs, a larger latent size, and more series to train on the dynamics of the v3 dataset.

Figure 5. The training and validation loss curves for datasets v1 and v2.

# 3.3. Interstellar medium models in three dimensions

3.3.1. Varying the batch and latent bottleneck size

We proceed to train the surrogate model on the three-dimensional dataset. We tried several combinations of latent bottleneck size $l$ and batch size $B$, as listed in Table 3. The resulting validation loss curves can be found in Figure 10. These show that the smaller bottleneck sizes do not result in the surrogate models training successfully. The end of all these runs is marked by the latent differential equation producing a Not-a-Number (NaN) in a batch, which can happen when an integrator tries to integrate a badly constrained function. Since the runs with bottleneck sizes of $l = \{8, 16, 32\}$ did not show any improvement in the loss, they were not resumed. The model with $l = 64$ does improve in loss at the start of training, but in epoch 42 the training results in NaN gradients, effectively halting the training process. This NaN gradient is caused by the ODE solver not converging, resulting in the maximum number of integration steps being reached. Upon restarting at epoch 40 with the same weights, it quickly results in another NaN gradient, indicating that the weights are not converging towards a stable solution. Thus, the hyperparameter configuration is discarded. This only leaves the runs with the largest latent bottleneck size, $l = 128$.
For the lowest batch size, $B = 32$, the loss seemed to improve the fastest, but in epoch 28 a NaN gradient occurs, and trying to resume the training process quickly results in further NaN losses, effectively discarding this hyperparameter configuration. This only leaves the batch sizes $B = \{64, 128\}$, with the latter needing a restart after dealing with NaN gradients in epoch 26, after which it does train successfully until epoch 84. We subsequently choose the only run that ran continuously, which achieves the lowest validation loss of $\mathcal{L}_{val} = 2.6 \times 10^{-3}$ in 94 epochs.

Figure 6. A comparison between a test sample from v1 and its prediction.

Figure 7. A comparison between a test sample from v2 and its prediction.

Figure 8. The MAE in log space for the v1 test dataset.

Figure 9. The MAE in log space for the v2 test dataset.

3.3.2. Depth dependent approximation and column density maps

We now take the best-performing model and evaluate it on the test dataset. To inspect the performance of the surrogate, we select a sample with a high carbon monoxide to carbon ratio. This ratio indicates that the ray has traced a high-density region, resulting in the attenuation of the radiation and a decrease in temperature, subsequently allowing for the formation of molecules (especially CO, HCO$^+$ and H$_2$O) in the cold and dense gas. The original unsmoothed data, the smoothed training data, and the prediction are shown in Figure 11. It clearly shows that between $A_V = [0.2, 0.4]$ a high-density region is traced, causing the more complex molecules to peak, with CO becoming as abundant as $10^{-4}$. We see that, compared to the original data, the smoothing of the data has resulted in a narrower peak, meaning that the integral under the peak is lower.
The neural network correctly predicts the peak of the more complex molecules, and their subsequent loss as the density drops, again increasing the temperature and radiation field. The evolution of the error on the test set as a function of index and visual extinction is shown in Figure 12. This shows that the MAE hovers around 0.1 in log-abundance space. As the rollout increases beyond index 300, we start to see an increase in the error, indicating that errors are accumulating in the latent space. Since only a few models proceed to these higher visual extinctions (see Figure 3), the surrogate model has not fit these longer rays as well as the shorter ones. We can see this rapid increase in error in the bottom visual extinction plot as well. We then take all the rays from the test set and derive the column density maps. These column density $N_i$ ($\mathrm{cm^{-2}}$) maps integrate the number densities $n_i$ ($\mathrm{cm^{-3}}$) of each molecule along the lines of sight, resulting in an image that can be compared to observations. In order to go from the rays back to these images, we must first compute the number densities for the entire three-dimensional object. We choose a three-dimensional grid of $256 \times 256 \times 256$ cells, and then compute the mean fractional abundance of each molecule, $x_{i,x,y,z}$, for each cell. We can then recover the column density by multiplying each fractional abundance by the density of the cells, $n_{\mathrm{H,nuclei}}$, and then summing this quantity over each cell that is non-zero, multiplying by the depth of each cell, $\Delta z = 0.44$ parsec. This results in maps of each species. We show the column densities of atomic hydrogen (H), molecular hydrogen (H$_2$) and carbon monoxide (CO) in Figure 13. Here we can see that, even with the smoothing of the data, the maps of both atomic and molecular hydrogen are recovered well.
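The column-density reconstruction described above can be sketched as follows. This is a hypothetical minimal version, not the authors' pipeline: it takes a cube of mean fractional abundances and a cube of gas densities, and integrates along the z-axis with the stated cell depth of 0.44 pc. The grid size and the uniform toy values are illustrative only.

```python
import numpy as np

PC_TO_CM = 3.086e18           # 1 parsec in cm
DELTA_Z_CM = 0.44 * PC_TO_CM  # depth of one grid cell along the line of sight

def column_density_map(frac_abundance, n_h_nuclei, dz_cm=DELTA_Z_CM):
    """Integrate a species along z to get a column-density map.

    frac_abundance, n_h_nuclei : (nx, ny, nz) cubes (dimensionless, cm^-3)
    returns : (nx, ny) map of N_i in cm^-2
    """
    number_density = frac_abundance * n_h_nuclei  # n_i = x_i * n_H,nuclei, cm^-3
    return number_density.sum(axis=-1) * dz_cm    # sum over cells times cell depth

nx = ny = nz = 8
x_co = np.full((nx, ny, nz), 1e-4)  # toy uniform CO fractional abundance
n_h = np.full((nx, ny, nz), 1e3)    # toy uniform gas density, cm^-3
N_co = column_density_map(x_co, n_h)
```

Because empty (zero-abundance) cells contribute nothing to the sum, restricting the sum to non-zero cells, as in the text, gives the same result.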
Atomic hydrogen traces regions of intermediate density, where it is more abundant but not yet captured in molecular hydrogen at lower temperatures. In the lower parts of the images, we see the higher-density, low-temperature regions, where the hydrogen is captured in its molecular form. We can also see how the rays with high visual extinction pass through several structures with higher densities. Lastly, we can see the effect of the smoothing on the CO column densities: smoothing the data reduces its density, resulting in both a lower peak value and a less extended region. Individual errors for each molecule can be found in Appendix B. Finally, we investigate the relationship between the individual error of each prediction and the standard deviation of each abundance. This tells us whether the surrogate models have learned each of the molecules equally well. The result can be seen in Figure 14; all species lie on a straight line, indicating that the prediction error scales with the dynamic range of each species. Species that barely vary, namely ionized carbon $\mathrm{C^+}$ and $\mathrm{e^-}$, only change in abundance when they recombine in the highest-density areas, as seen in Figure 11, and thus their predictions have the lowest error. The species with larger dynamic ranges have a larger error, which makes sense, as the latent differential equation can only approximate them, accumulating some error as it integrates and smoothing out high-frequency changes.

3.3.3. Computational cost of training and inference and critical speedup

Training the latent differential equations for the best hyperparameter configuration took approximately 84 GPU hours on an NVIDIA H100. This highlights that NODEs are expensive to train for a relatively small data volume of 159K samples. The many failed runs underline the instability and challenges of training neural differential equations.
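The error-versus-dynamic-range trend of Figure 14 can be illustrated with a synthetic sketch (the species names come from the text; the abundance spreads and the proportionality constant are invented for illustration only):

```python
import numpy as np

# Illustrative sketch (synthetic data, not the paper's pipeline) of the
# trend in Figure 14: the per-species MAE in log abundance space scales
# with the dynamic range (standard deviation) of each species.
rng = np.random.default_rng(0)
n_samples, n_species = 1000, 5
species = ["H", "H2", "CO", "C+", "e-"]

# Synthetic "true" log abundances with different dynamic ranges; the
# surrogate's error is taken proportional to each species' spread.
true_log = rng.normal(0.0, [0.05, 0.5, 2.0, 0.02, 0.02],
                      (n_samples, n_species))
pred_log = true_log + 0.05 * true_log.std(axis=0) * rng.normal(
    0.0, 1.0, (n_samples, n_species))

mae = np.abs(pred_log - true_log).mean(axis=0)   # per-species MAE (dex)
spread = true_log.std(axis=0)                    # per-species dynamic range

for name, m, s in sorted(zip(species, mae, spread), key=lambda t: t[2]):
    print(f"{name:3s}  std={s:.3f}  MAE={m:.4f}")
```

Under this construction, the nearly constant species (here C$^+$ and e$^-$) show the smallest errors and CO the largest, mirroring the straight-line relation reported in the paper.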
Nevertheless, the resulting surrogate model performs well enough to reconstruct both the depth-dependent chemistry and the resulting mock observation at a much lower computational cost at inference. Inference on all 159K samples takes 200 seconds without any optimization for throughput. This means the whole dataset could be inferred in little over 8 GPU hours, compared to the 1792 CPU hours needed to generate the original dataset. This results in a considerable speedup and the effective utilization of the GPU, freeing the CPU for gravity, hydrodynamics, and radiative transport.

Figure 10. The loss curves for different hyperparameters, latent bottleneck size $l$ and batch size $B$. As the latent bottleneck size is decreased, training becomes increasingly unstable. Smaller batch sizes seem to improve performance, but for $B = 32$ training became unstable after 28 epochs.
Computational astrochemical models are essential for interpreting and understanding observations of different astrophysical environments. In the age of high-resolution telescopes such as JWST and ALMA, the substructure of many objects can be resolved, raising the need for astrochemical modeling at these smaller scales: simulations of these objects need to include both the physics and the chemistry to accurately model the observations. The computational cost of simulations coupling three-dimensional hydrodynamics and chemistry is enormous, creating an opportunity for surrogate models that can effectively substitute the chemical solver. In this work we present surrogate models that can replace the original chemical code, namely latent augmented Neural Ordinary Differential Equations. We train these surrogate architectures on three datasets of increasing physical complexity, with the last dataset derived directly from a three-dimensional simulation of a molecular cloud using the Photodissociation Region (PDR) code 3D-PDR. We show that these surrogate models can provide a speedup and reproduce the original observable column density maps of the dataset. This enables the rapid inference of the chemistry (on the GPU), allowing for faster statistical inference of observations or increased resolution in hydrodynamical simulations of astrophysical environments.
# 1 INTRODUCTION

In scientific research, datasets play a crucial role in method validation, model training, and result evaluation. Research in many fields currently relies heavily on datasets, such as disease prediction in medicine [18] and climate forecasting in meteorology [44]. These studies often involve diverse datasets that span multiple disciplines. While researchers may be familiar with datasets within their own field, they may not be aware of datasets from other disciplines that could benefit their research, which makes finding a suitable dataset challenging. Despite the maturity of information retrieval technologies in the text domain, numerous challenges persist in the realm of data retrieval [19, 26]. These challenges stem from the inherent complexity of datasets, which often come in diverse formats such as images, videos, and structured tables, making traditional text-based retrieval methods insufficient. Moreover, the lack of comprehensive and standardized metadata from data publishers creates further barriers to locating relevant datasets. As a result, users frequently struggle to find suitable datasets for their research [8]. To make dataset discovery more efficient, we aim to profile the usage of datasets in academic papers and construct a structured paper-dataset network. This network can provide a better understanding of dataset impact, foster reproducibility, and improve dataset discoverability for future research. While some academic platforms, like Google Dataset Search [7] and PapersWithCode (PwC) [39], have made progress in linking papers to datasets, they still rely heavily on manual annotation or rule-based methods, which are time-consuming and error-prone. Furthermore, existing methods frequently fail to capture fine-grained dataset attributes critical to researchers (including data types, size, and specific usage contexts), limiting their utility in comprehensive research analysis.
In this paper, we introduce ChatPD, a novel system that leverages Large Language Models (LLMs) to automate the construction of a paper-dataset network. We design a dataset information template based on the aspects researchers usually focus on when studying datasets [25], and incorporate LLMs to analyze academic papers and extract dataset-related information. While LLMs generate large amounts of textual output at low cost, integrating this output with existing academic platforms like PwC requires mapping textual descriptions of datasets to the corresponding dataset entities in the dataset database; to this end, we develop an algorithm based on graph completion and inference, tailored to the characteristics of our data. Through our system, we obtain a high-quality paper-dataset network with rich metadata about datasets, which can be used for dataset discovery and recommendation. Finally, we deploy ChatPD as a practical dataset discovery service at https://chatpd-web.github.io/chatpd-web, supporting the regular construction of AI-related paper-dataset networks on arXiv. In summary, our work makes the following contributions: 1. We propose ChatPD, an LLM-driven system designed to automatically construct a paper-dataset network. The system is deployed as an online service that supports dataset-related queries, recommendations, and additional functionalities. 2. We comprehensively evaluate the reliability of ChatPD from the perspectives of dataset information extraction and entity resolution. For dataset information extraction, ChatPD achieves a precision of $\sim 0.99$, significantly surpassing PwC's result of $\sim 0.83$. In entity resolution, ChatPD attains an F1 score of $\sim 0.88$, outperforming state-of-the-art entity resolution algorithms [28, 59], which achieve only $\sim 0.68$. 3.
By collecting papers on arXiv cs.AI from 2018 to 2024, we have built a continuously evolving paper-dataset network, which currently includes 60,126 papers, 4,224 dataset entities, and 137,004 paper-dataset usage records. Notably, the network constructed by ChatPD includes 444 new datasets not covered in PwC, demonstrating the superiority of its automated dataset collection strategy over the manual annotation-based approach employed by PwC. We open source ChatPD and the collected paper-dataset network on GitHub: https://github.com/ChatPD-web/ChatPD. # 2 BACKGROUND AND RELATED WORK Constructing a network that connects papers and datasets to facilitate dataset discovery poses two primary challenges. Firstly, we need to extract pertinent information from scholarly articles. Secondly, given that different papers may refer to the same dataset using diverse names, we are required to perform entity resolution. This process involves mapping varying dataset descriptions to their appropriate entities, enhancing the network’s quality. # 2.1 Dataset Discovery Dataset discovery is the process of locating, examining, and accessing relevant and valuable datasets for analysis, research, or other purposes. The retrieval systems for datasets usually rely on the context provided by dataset publishers [8]. Kern et al. [24] point out the pivotal role of metadata in the discovery of datasets. Following this idea, various studies have contributed to the development of dataset summaries and metadata to enhance dataset retrieval [20, 25, 57]. Various platforms have been developed to facilitate dataset discovery. Google Dataset Search [7] employs an automated approach, crawling dataset metadata from the web and aggregating metadata from various sources, to provide a comprehensive dataset search engine. However, this search engine primarily reflects the perspectives of data publishers, potentially omitting the real-world application of datasets. 
DataCite [42] assigns Digital Object Identifiers (DOIs) to datasets, improving their citability and accessibility. PapersWithCode (PwC) [39] bridges academic publications with their associated code and datasets, fostering reproducibility. These platforms enhance transparency in the research ecosystem by systematically linking papers to underlying data. However, their reliance on manual annotations often results in incomplete dataset usage labels, limiting their comprehensiveness. Our work addresses the limitations of manual annotation by developing a self-evolving system that automatically extracts paper-dataset relationships from newly published papers.

# 2.2 Information Extraction

Information Extraction (IE) is the fundamental task of identifying and converting specific details, like named entities and their relationships, from unstructured or semi-structured text into a structured format [13, 29]. Traditionally, IE has depended on supervised learning methods, which require a large amount of labeled data. With more weak-supervision methods proposed [30, 34], the need for annotation has been alleviated. Recently, LLMs like GPTs [2] have upended previous approaches to natural language processing tasks. For the IE problem, researchers have begun to explore zero-shot or few-shot learning techniques based on LLMs as a uniform tool [21, 31, 38, 58]. Our work advances this paradigm by integrating LLMs to automate dataset information extraction, enhancing the scalability of detecting dataset usage in scholarly literature.

# 2.3 Entity Resolution

Entity Resolution (ER) is the task of identifying multiple data representations of the same real-world entity and mapping them to a unified entity. Early ER methods are mainly distance-based, such as the edit-distance method [41] and the TF-IDF method [10]. To overcome the limitations of unsupervised distance-based methods, researchers have proposed supervised learning methods. Ravikumar et al.
[50] define ER as a classification problem and use an SVM to solve it. However, these methods rely heavily on labeled data. Recently, researchers have proposed unsupervised learning methods for ER. Lacoste-Julien et al. [28] propose the greedy matching method SiGMa, and Wu et al. [59] propose ZeroER, which uses a Gaussian Mixture Model to learn the similarity distributions of matches and non-matches. However, supervised learning methods require a large amount of labeled data, and unsupervised learning methods rely heavily on blocking methods, which makes them difficult to transfer to our dataset entity resolution task. We propose a rule-based graph inference method leveraging strong indicator fields as relational constraints. Our algorithm performs iterative graph completion through deterministic pattern matching and transitive inference, achieving accurate entity resolution without training data or predefined blocking schemes.

# 3 PROBLEM FORMULATION

We aim to construct a paper-dataset network that captures the usage of datasets in academic papers. Formally, the paper-dataset network can be defined as a bipartite graph $G = (P, E, R)$, where $P$ is the set of papers, $E$ is the set of dataset entities, and $R$ is the set of relationships between papers and datasets.
Each edge $r_{i,j} \in R$ connects a paper $p_i \in P$ to a dataset entity $e_j \in E$, indicating that the paper $p_i$ uses the dataset entity $e_j$.

Figure 1. The architecture of ChatPD, consisting of three modules (Module 1: Paper Collection, Module 2: Dataset Information Extraction, Module 3: Dataset Entity Resolution) together with the dataset discovery services (usage-specific, location-specific, and similar-dataset queries; table-based and graph-based queries) built on the resulting paper-dataset network.

Specifically, two main issues need to be addressed to construct the paper-dataset network:

• Dataset information extraction: extract the dataset usage information from the texts of given papers;

• Dataset entity resolution: align diverse dataset descriptions with their corresponding dataset entities, where a dataset entity represents a specific dataset within the dataset database.

# 3.1 Dataset Information Extraction

For each paper $p \in P$, we have its text $T(p)$. The information extraction applies a function $F$ (realized via a prompt-based query to an LLM) to obtain:

$$ D(p) := F(T(p)) = \{ d_{p,1}, d_{p,2}, \dots, d_{p,n(p)} \} \subseteq D $$

where $d_{p,i}$ is a JSON object representing the $i$-th dataset description in paper $p$, and $n(p)$ is the number of dataset descriptions in paper $p$.
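The extraction function $F$ amounts to a prompt-based LLM query whose reply is parsed into JSON objects. A sketch, in which the prompt wording and the `llm_call` interface are illustrative rather than the exact implementation:

```python
import json

# Sketch of F(T(p)): query an LLM and parse its reply into a list of
# dataset-description dicts, one JSON object per dataset.
PROMPT = """You're a Computer Science researcher. Extract the dataset-related
information from the given paper and answer with one JSON object per dataset,
using fields such as "dataset name", "task", "data type", and "url".

Paper information: {paper_text}"""

def extract_dataset_descriptions(paper_text, llm_call):
    """`llm_call` is any callable str -> str, e.g. a chat-completion client."""
    reply = llm_call(PROMPT.format(paper_text=paper_text))
    descriptions = []
    for chunk in reply.split("\n\n"):     # assume one JSON object per block
        chunk = chunk.strip()
        if chunk:
            try:
                descriptions.append(json.loads(chunk))
            except json.JSONDecodeError:  # malformed output is repaired later
                pass
    return descriptions

# Stubbed LLM for illustration:
fake_llm = lambda prompt: '{"dataset name": "MNIST", "task": "classification"}'
print(extract_dataset_descriptions("...paper text...", fake_llm))
```

In the real system, `llm_call` would wrap an LLM API, and the malformed-output branch is handled by the dedicated quality-control step of Sec. 4.2.2.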
Each dataset description $d_{p,i}$ is a JSON object whose fields follow the template given in the extraction prompt (Sec. 4.2).

# 3.2 Dataset Entity Resolution

Given the dataset descriptions $D$ extracted from papers and an initial dataset entity database $E_{\mathrm{init}}$ (derived from PwC), the objective of Entity Resolution (ER) is to find a mapping $M : D \to E$, where $E = E_{\mathrm{init}} \cup E_{\mathrm{new}}$. Each dataset description $d \in D$ is mapped to an entity $e \in E$ if they refer to the same real-world dataset. The set $E_{\mathrm{new}}$ contains new dataset entities not present in $E_{\mathrm{init}}$. Formally, let $C = \{C_1, C_2, \dots, C_m\}$ be a partition of $D$ into equivalence classes under the relation $d_i \sim d_j$ (indicating $d_i$ and $d_j$ refer to the same dataset). The mapping $M$ is defined as:

$$ \forall C_k \in C, \forall d \in C_k, \quad M(d) = \begin{cases} e \in E_{\mathrm{init}} & \text{if } \exists e \in E_{\mathrm{init}} \text{ s.t. } e \sim C_k, \\ e_{\mathrm{new}} \in E_{\mathrm{new}} & \text{otherwise.} \end{cases} $$

This ensures each cluster $C_k$ aligns with an existing entity in $E_{\mathrm{init}}$ when possible; otherwise, a new entity $e_{\mathrm{new}}$ is registered in $E_{\mathrm{new}}$ if the cluster indeed refers to a new real-world dataset. The resolution process constructs a paper-dataset network by connecting papers $p \in P$ to their used dataset entities $M(d) \in E$ for all descriptions $d \in D(p)$.

# 4 SYSTEM DESIGN

In this section, we introduce ChatPD, a novel LLM-driven system designed to automate the construction of a paper-dataset network.
By leveraging LLMs to extract dataset information from academic papers and perform entity resolution, ChatPD dynamically links papers to their corresponding datasets, forming a structured network. As illustrated in Fig. 1, the architecture of ChatPD is built upon three pivotal modules:

Paper Collection: Aggregates papers from academic platforms to form the system's foundational corpus.

Dataset Information Extraction: Identifies and extracts dataset-related text from academic papers, leveraging LLMs to generate semi-structured metadata (e.g., dataset names, data types, and associated tasks).

Dataset Entity Resolution: Resolves variant mentions of the same dataset by aligning them to a canonical entity, thereby constructing a paper-dataset bipartite graph.

# 4.1 Paper Collection

In the first phase, we collect basic information about academic papers. ArXiv [3], one of the largest academic paper platforms, hosts a rich repository of preprints of research papers and is openly available on the web.

Figure 2. The extraction prompt template. System: "You're a Computer Science researcher. You have a task to extract the dataset related information from the given paper information." User: "I hope you can help me extract the dataset related information and answer in the following JSON format: { "dataset name", "dataset summary", "arxiv id", "title", "task", "data type", "location", "time", "scale", "dataset provider", "url", "dataset publicly available", "other useful information about the dataset" }. Paper information: {Paper Information}. Note: If a paper involves multiple datasets, please provide a separate JSON for each dataset."

In the current implementation of ChatPD, we collect papers from arXiv, focusing on Artificial Intelligence in Computer Science (cs.AI), and use the ar5iv tool [52] to obtain the text-format papers. We emphasize that ChatPD operates independently of academic platforms, requiring only the text of papers for analysis.
For example, by leveraging open-source PDF processing tools such as PyPDF, ChatPD can build a personalized local paper-dataset network directly from a user's collection of PDF documents. Currently, we select arXiv as our primary source, as it is fully open-access and the majority of AI papers now publish preprints on this platform.

# 4.2 Dataset Information Extraction

The Dataset Information Extraction module identifies and extracts dataset-related metadata from academic papers collected in the preceding stage. For a paper $p$, the module outputs a collection of dataset descriptions $D(p) = \{d_{p,1}, d_{p,2}, \dots, d_{p,n(p)}\}$, where each $d_{p,i}$ represents a semi-structured JSON object encapsulating core dataset attributes. Recently, LLMs have shown great effectiveness and efficiency in analyzing text corpora [40]. Based on LLMs, we can directly use chat-style natural interaction to extract useful dataset information from the collected paper texts. With LLMs, three issues need to be carefully considered: (1) prompt design, (2) output quality control, and (3) cost optimization.

4.2.1 Prompt Design. LLMs, e.g., ChatGPT, have recently showcased impressive performance in zero-shot or few-shot text information extraction tasks. To initiate the dataset information extraction process and generate responses in our desired structured format, we provide a specific prompt. An example of our prompt and the corresponding demonstration is shown in Fig. 2.

Role. Prior research has shown that specifying a role for the LLM significantly improves its capability of solving the task [61]. Following common practice, we set the role of the LLM as a computer science researcher, allowing it to better understand the task scenario.

Paper Information.
The prompt features a '{Paper Information}' field designed to incorporate relevant text from the paper pertaining to the dataset. Intuitively, this field could contain the entire paper text; however, in practice, this may result in prohibitively high costs when using LLM APIs, as computational expenses scale directly with input length. We explore this cost consideration in greater detail in Sec. 4.2.3.

Output Specification. We also give specific task requirements and format standards for the output. Previous research has summarized key considerations for researchers when finding datasets [25]. We base our dataset information extraction on these key fields, such as the dataset name, data type, task, location, time, scale, and dataset provider. In addition to these key fields, we include the dataset summary, Uniform Resource Locator (URL), and other relevant information fields to offer a more comprehensive dataset description. To ensure the LLM produces semi-structured data, we instruct it to generate the output in JSON format. Considering that a paper may involve multiple datasets, we also add an annotation to remind the LLM to generate a JSON-format description for each dataset.

4.2.2 Output Quality Control. The ideal output would be standard JSON-formatted data for downstream processing. However, our experiments reveal that even state-of-the-art LLMs (e.g., GPT-4o) occasionally generate outputs violating JSON syntax requirements. To mitigate this issue and ensure system reliability, we implement a dedicated format validation and correction step in the pipeline. Specifically, we identify three principal anomalies and institute corresponding rectifications via a post-processing script:

• Extraneous Expressions: Entries not commencing with '{', '}', or '"' are excised to eliminate non-pertinent phrases.
• Malformed Escape Sequences: We identify characters that need to be escaped in the output and add the corresponding escape characters for them.

• Inconsistent Comma Usage: We programmatically correct trailing-comma problems according to JSON syntax.

4.2.3 Cost Optimization. As we constrain the output to a JSON format with pre-defined fields, the cost of an LLM query in ChatPD is mostly determined by the input length. In particular, the length of the paper text in the query, i.e., '{Paper Information}', dominates the input length. If we directly sent the full paper text to the LLM for processing, the cost would be relatively high, especially when scaling ChatPD up to millions of papers. To address this issue, we opt to input only the text of the paper sections that are likely to contain dataset descriptions. Academic papers usually describe the datasets used in the experimental sections, so we select sections like "Experiment", "Dataset description", "Data", and other similar ones. Balancing the API call cost against the LLM's processing power, we truncate the input text to 1500 tokens (approximately 1125 words). Additionally, we include the title and abstract of the paper as supplementary input to provide a more comprehensive context for the datasets. In our current implementation, the dataset information extraction module employs GPT-4o-mini, OpenAI's most advanced and cost-effective small-scale model. After cost optimization, the expense for ChatPD to process 10,000 papers is reduced to just $6.3. It is important to note that ChatPD is not restricted to specific LLM services, and we have also evaluated other LLM services in our experiments. With the advancement of LLM techniques, we believe that it will soon be feasible to develop a fully local version of ChatPD on a standard PC equipped with a mid-range graphics card.
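The section-selection-and-truncation step of Sec. 4.2.3 can be sketched as follows (the helper names and the 0.75 words-per-token ratio implied by "1500 tokens, approximately 1125 words" are illustrative assumptions, not the exact implementation):

```python
# Sketch of the cost-optimization step: keep only sections likely to
# describe datasets, truncate to a ~1500-token budget, and always include
# the title and abstract as supplementary context.
DATASET_SECTION_KEYWORDS = ("experiment", "dataset description", "data")
MAX_TOKENS = 1500
WORDS_PER_TOKEN = 0.75   # ~1125 words for 1500 tokens

def build_paper_input(title, abstract, sections):
    """sections: list of (heading, body) pairs from the parsed paper."""
    relevant = [body for heading, body in sections
                if any(k in heading.lower() for k in DATASET_SECTION_KEYWORDS)]
    words = " ".join(relevant).split()
    budget = int(MAX_TOKENS * WORDS_PER_TOKEN)
    truncated = " ".join(words[:budget])
    return f"{title}\n\n{abstract}\n\n{truncated}"

text = build_paper_input(
    "Some Paper", "We study X.",
    [("1 Introduction", "intro words"), ("4 Experiments", "We use MNIST.")])
print(text)
```

Only the experiment-like section survives the filter; the introduction is dropped, which is what keeps the per-paper API cost low.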
Exploring the deployment of such a locally deployable LLM model will be a focus of our future work.

# 4.3 Dataset Entity Resolution

The output of the dataset information extraction module is a set of dataset descriptions in JSON format, extracted from the paper texts. To construct the paper-dataset network, the next step is to extract dataset entities from these JSON-formatted descriptions. Specifically, there are two key challenges to address: (1) Existing Entity Matching: When a paper uses a dataset that has already been referenced in other papers (i.e., an existing dataset entity in the database), the challenge is to correctly map the JSON-formatted description to the corresponding entity. (2) New Entity Discovery: When a paper introduces a new dataset, the challenge is to identify it and register it as a new entity in the database.

4.3.1 Existing Entity Matching. To initialize the dataset entity database, we currently utilize the dataset entities collected by the PwC platform. Through crowdsourcing, the PwC platform has accumulated a substantial number of dataset entities in its database, which include rich metadata such as dataset names and URLs. Additionally, PwC data is publicly accessible under the CC BY-SA 4.0 license. Our goal is to map the extracted dataset descriptions to their corresponding entities in the PwC database, thereby constructing a paper-dataset network. In Sec. 4.2, we extract dataset-related information from paper texts, with certain fields, such as "dataset name" and "URL", that can be used to identify the same dataset entity in the database. Our approach is based on the idea that if a dataset description shares the same name or URL as an existing dataset entity, we can conclude with high confidence that the description refers to that entity. Following this idea, we propose a 'dataset identity attribute-based graph inference and completion' algorithm to match dataset descriptions to existing entities.
First, we model the extracted dataset descriptions and database entities as nodes in a graph, referred to as description nodes (D-nodes) and entity nodes (E-nodes), respectively. We then introduce identity-attribute nodes (I-nodes) to represent unique identifiers such as dataset names and URLs. Notably, we create only one I-node for each unique dataset name or URL to avoid duplication. Next, we connect each I-node to its corresponding D-nodes and/or E-nodes. We then introduce the graph inference and completion steps one by one.

Graph Inference: This graph structure enables us to infer relationships between D-nodes (dataset descriptions) and E-nodes (dataset entities). For instance, if a D-node $d$ is linked to an I-node and this same I-node is also connected to an E-node $e$, we can infer that $d$ corresponds to $e$. This process effectively matches the dataset description to an existing dataset entity in the database through their shared identifier (e.g., the same dataset name or URL).

# Algorithm 1 Graph Creation and Completion

1: Input: A list of dataset descriptions $D = \{d_1, d_2, \dots, d_n\}$, a list of entities $E = \{e_1, e_2, \dots, e_m\}$
2: Output: A graph $G = (V, \mathcal{E})$ with completions and corrections
3: Identity attributes: $A = \{\text{dataset name}, \text{dataset url}\}$
4: Initialize nodes: $V \gets D \cup E \cup \{I_{d,\alpha} \mid d \in D, \alpha \in A\} \cup \{I_{e,\alpha} \mid e \in E, \alpha \in A\}$ ⊲ Graph Creation
5: $\mathcal{E} \gets \bigcup_{d \in D} \{(d \xrightarrow{\mathrm{has}\_\alpha} I_{d,\alpha}) \mid \alpha \in A\}$
6: $\mathcal{E} \gets \mathcal{E} \cup \bigcup_{e \in E} \{(I_{e,\alpha} \xrightarrow{\mathrm{refers\_to}} e) \mid \alpha \in A\}$
7: while iteration_limit is not reached do ⊲ Graph Completion
8: for D-node $d \in D$ do
9: for attribute $\alpha \in A$ do
10: if $\exists$ I-node $I_{d,\alpha}$ refers_to E-node $e$ then
11: $\mathcal{E} \gets \mathcal{E} \cup \{(I_{d, A \setminus \{\alpha\}} \xrightarrow{\mathrm{refers\_to}} e)\}$
12: end if
13: if $|\{I_{d,\alpha} \xrightarrow{\mathrm{refers\_to}} e\}| > 1$ then ⊲ Refinement after Completion
14: Remove the I-node $I_{d,\alpha}$ and its edges from $V$ and $\mathcal{E}$
15: end if
16: end for
17: end for
18: end while
19: return $G$

Using the above process, we can match a D-node to its corresponding E-node if they share a common I-node. However, the original E-nodes in the database may initially connect to only a limited number of I-nodes, which restricts the coverage of this basic inference strategy. To address this limitation, we introduce a graph completion step that systematically enriches E-nodes' connections to additional I-nodes, thereby improving inference coverage.

Graph Completion: When a D-node $d$ is matched to an E-node $e$, all I-nodes connected to $d$ are also linked to $e$. This enriches $e$'s identity attributes by expanding its associated identifiers.
Crucially, whenever a new I-node is connected to $e$, we rerun the graph inference process for $e$ to identify any additional D-nodes that can now be matched to $e$ through the updated connections. Consider an E-node $e_{\mathrm{coco}}$ representing the MS COCO dataset [32], which initially has two I-nodes: the name "MS COCO" and the URL "https://cocodataset.org/". During the inference step, we identify a D-node that shares the URL I-node but has an additional name I-node, "COCO 2014". Through the graph completion step, we link the "COCO 2014" I-node to $e_{\mathrm{coco}}$. This enriched connection enables subsequent D-nodes associated with the "COCO 2014" I-node to be matched to $e_{\mathrm{coco}}$, thereby expanding the inference coverage. Considering the completion order, some I-nodes may not be connected to any E-node after the initial inference. To address this, we introduce a completion iteration to enrich the connections. In practice, we set the iteration limit to 3.

Refinement after Completion: While the graph completion strategy improves inference coverage, it risks introducing erroneous connections. A core principle is that I-nodes, representing identity attributes, should link to at most one E-node. However, after completion, an I-node might connect to multiple E-nodes. This issue frequently arises with URL I-nodes. For instance, papers may cite generic data warehouse URLs like "www.kaggle.com" for the datasets they use, causing this I-node to link to multiple E-nodes for datasets hosted on Kaggle. Since such ambiguous I-nodes cannot reliably serve as unique identifiers, our current implementation of ChatPD removes them from the graph to preserve integrity.
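The inference-plus-completion loop described above can be sketched in a few lines of Python (the iteration limit and the pruning of ambiguous I-nodes are omitted; all names are illustrative, not the exact implementation):

```python
# Simplified sketch of the identity-attribute graph matching:
# I-nodes are (attribute, value) keys; inference matches a description to
# an entity through a shared I-node, and completion links the description's
# other I-nodes to that entity so further descriptions can match.
def resolve(descriptions, entities, attrs=("dataset name", "url")):
    """Return {description index -> entity index} matches."""
    inode_to_entity = {}
    for j, e in enumerate(entities):
        for a in attrs:
            if e.get(a):
                inode_to_entity[(a, e[a])] = j

    matches, changed = {}, True
    while changed:                        # completion iterations
        changed = False
        for i, d in enumerate(descriptions):
            if i in matches:
                continue
            for a in attrs:
                if d.get(a) and (a, d[a]) in inode_to_entity:
                    matches[i] = inode_to_entity[(a, d[a])]
                    # Graph completion: link all of d's I-nodes to the
                    # entity so other descriptions sharing them can match.
                    for b in attrs:
                        if d.get(b):
                            inode_to_entity.setdefault((b, d[b]), matches[i])
                    changed = True
                    break
    return matches

entities = [{"dataset name": "MS COCO", "url": "https://cocodataset.org/"}]
descs = [{"dataset name": "COCO 2014", "url": "https://cocodataset.org/"},
         {"dataset name": "COCO 2014", "url": None}]
print(resolve(descs, entities))
```

The second description has no URL, yet it still resolves to the MS COCO entity: the first description matches via the shared URL, completion attaches the "COCO 2014" name I-node to the entity, and the next pass matches the second description through that name, mirroring the worked example above.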
# Algorithm 2 Graph Inference for Entity Resolution

1: Input: A list of dataset descriptions $D$, a list of dataset entities $E$, the completed graph $G = (V, \mathcal{E})$
2: Output: A list of matched dataset descriptions and entities $M$
3: $M \gets \{\}$
4: for D-node $d \in D$ do
5: for attribute $\alpha \in$ {dataset name, dataset url} do
6: if I-node $I_{d,\alpha} \xrightarrow{\mathrm{refers\_to}}$ E-node $e$ then
7: $M \gets M \cup \{(d, e)\}$
8: end if
9: end for
10: end for
11: return $M$

After graph completion and refinement, we can infer the final mappings between dataset descriptions (D-nodes) and their corresponding entities (E-nodes) in the database. The full process is formalized in Algorithms 1 and 2.

4.3.2 New Entity Discovery. Another key strength of ChatPD lies in its ability to discover novel dataset entities from academic literature. For example, our analysis reveals that nearly $50\%$ of datasets extracted by ChatPD from arXiv papers are absent from PwC's database, highlighting these datasets' novelty and suggesting they represent emerging resources useful for academic research. After the graph inference and completion (Sec. 4.3.1), some D-nodes may remain unmatched to any E-node. These unmatched D-nodes could represent novel dataset entities introduced by the corresponding papers. However, automatically creating a new E-node for every unmatched D-node risks introducing noise, as dataset descriptions extracted by LLMs may contain inaccuracies. To address this, ChatPD enforces two criteria to determine whether an unmatched D-node warrants the creation of a new E-node. 1. Identity Information Completeness. Currently, ChatPD only considers creating E-nodes for unmatched D-nodes with complete identity attributes, i.e., containing both a dataset name and a URL. Notably, after graph refinement (Sec. 4.3.1), all URL I-nodes associated with generic data warehouse links (e.g., "www.kaggle.com") are removed.
Therefore, if an unmatched D-node retains a URL I-node, it is likely a specific, non-generic URL, increasing confidence that the D-node represents a genuinely new dataset. 2. Multiple Paper Mentions. ChatPD prioritizes creating new E-nodes when multiple unmatched D-nodes share identical I-nodes (e.g., the same dataset name or URL). This increases confidence that the dataset is genuine and significant, as it is independently mentioned across multiple papers. For such cases, ChatPD consolidates all D-nodes sharing the same I-nodes into a single E-node, representing one unified novel dataset entity. In the implementation, we define a threshold $\lambda$ to govern the creation of new E-nodes: a candidate dataset must be mentioned in at least $\lambda$ papers. Additionally, we plan to incorporate user feedback to improve the accuracy and efficiency of dataset discovery. For example, even if a dataset lacks mentions from $\lambda$ papers, we still create an E-node but flag it with an uncertainty indicator. When presenting such datasets, ChatPD could ask users to verify dataset accuracy (e.g., “Is this extracted dataset correct?”). User feedback, while valuable, is not always reliable. Accurately extracting trustworthy insights from such feedback remains a significant challenge, a problem widely recognized in the literature as truth discovery. We defer addressing this challenge to future research.

# 5 EXPERIMENT

Table 1: Dataset Usage Statistics in Annotated Papers

We evaluate ChatPD to ascertain its effectiveness in constructing the paper-dataset network, following three questions: RQ1: Can ChatPD efficiently and accurately extract dataset information? RQ2: Can ChatPD effectively resolve dataset descriptions to entities? RQ3: Can ChatPD discover new datasets?

# 5.1 Performance of Dataset Information Extraction (RQ1)

5.1.1 Experimental Setup.
To benchmark ChatPD, we implement three comparative approaches: (1) en_core_web_trf: employing the named entity recognition model en_core_web_trf to detect dataset entities in papers [47]. en_core_web_trf is a powerful pre-trained transformer-based model that can recognize and label a variety of entities in text, including dataset names [36]. (2) Regular Expression: using regular expressions to identify and match dataset names and their common variants in paper text based on a predefined list of dataset names (e.g., hyphenation variations like "Mini-ImageNet" and "MiniImageNet") [46]. (3) PapersWithCode (PwC): directly using the datasets identified by PwC for the test papers. The dataset usage information on PwC is derived partly from annotations by community members and partly from a rule-based automated extraction script. For the LLM APIs in ChatPD, we choose GPT-3.5-turbo, GPT-4o-mini (default), Qwen2.5-7b-instruct [53], and DeepSeek-V3 [35] for comparison. To assess our cost optimization strategy (Sec. 4.2.3), we also implement a variant that inputs the full paper text to the LLMs. To construct the test set, we manually annotate datasets used in research papers to establish a ground truth for evaluation. Specifically, we annotate dataset usage in 119 papers from top-tier conferences, including KDD and NeurIPS. The statistics of the annotated papers are detailed in Table 1. To ensure a fair comparison with PwC, all selected test papers have dataset annotations on PwC.

5.1.2 Results. We evaluate the performance of dataset information extraction by calculating various metrics, including Exact Match Ratio, Micro Average Precision, Micro Average Recall, and Micro Average F1 score. The comparison results are shown in Fig. 3. Our results indicate that Regular Expression and en_core_web_trf struggle to effectively capture dataset information.
ChatPD with GPT-3.5-turbo achieves competitive performance compared with PwC. With more advanced LLMs such as GPT-4o-mini and DeepSeek-V3, ChatPD outperforms PwC significantly across all metrics. Our method remains robust even with lightweight, locally deployable models such as Qwen2.5-7b-instruct.

Figure 3: Comparison of Model Performance

By analyzing the data, we observe that the unsatisfactory performance of PwC can be attributed to its rule-based extraction technique for identifying datasets from texts. This method frequently results in erroneous matches, e.g., wrongly identifying datasets that are merely referenced in the text but not actually used in the study. To evaluate the effectiveness of our cost optimization strategy, we conduct a comparison between the full-text input and our optimized 1500-token input using GPT-4o-mini. The results demonstrate that the 1500-token input achieves performance close to the full-text input, and even outperforms it on certain metrics such as Precision. Note that processing the full text would require approximately 7 times more tokens than our optimized method, significantly increasing costs. Given that ChatPD is designed to handle a continuous and large volume of papers, we believe that limiting the input to 1500 tokens strikes an effective balance between cost efficiency and performance. Overall, our experimental results show that ChatPD with current LLMs is highly effective in extracting datasets from papers, surpassing state-of-the-art solutions like PwC and highlighting the feasibility of using large language models for this task.

# 5.2 Performance of Dataset Description Entity Resolution (RQ2)

5.2.1 Experimental Setup. In this experiment, we aim to match dataset descriptions to existing dataset entities.
Specifically, we utilize the dataset entities already stored in PwC as the reference existing entities. To establish ground truths, we manually annotate dataset descriptions extracted from papers published in top-tier conferences, such as KDD and NeurIPS, by linking them to their corresponding entities in the database. We randomly sample 1,000 dataset descriptions and link them manually to the corresponding entities. We find that only 474 dataset descriptions, fewer than half of the samples, can be linked to dataset entities in the PwC database. The primary reason for the unlinked descriptions is the absence of corresponding entities in the PwC database. Additionally, some descriptions, such as ‘weather dataset’, were too vague to determine their corresponding entities. We compare our Graph Completion & Inference algorithm with the Name Matching method (connecting descriptions to entities with the same dataset name) and the Graph Inference algorithm (connecting dataset descriptions to entities with the same dataset name, alias, or URL). Besides, we compare against two popular entity resolution algorithms, SiGMa [28] and ZeroER [59].

Table 2: Evaluation of Entity Resolution Methods

Table 3: New Dataset Entities Discovered by ChatPD

5.2.2 Results. We choose precision, recall, and F1 score as the evaluation metrics. Our results are shown in Table 2. Name Matching achieves the highest precision, but it cannot find the same dataset under different names, leading to the lowest recall; as a result, its F1 score is also the worst. Graph Inference utilizes the aliases and URLs provided by PwC, achieving a higher recall and F1 score than the state-of-the-art methods SiGMa and ZeroER. Our Graph Completion & Inference algorithm additionally considers the transitive relationships between dataset descriptions, which further increases recall. It achieves the best F1 score of 0.8829, verifying its effectiveness in constructing the paper-dataset network.
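The precision, recall, and F1 metrics over matched (description, entity) pairs can be computed as in the following generic sketch (illustrative code, not the paper's evaluation harness):

```python
def precision_recall_f1(predicted, gold):
    """Precision/recall/F1 over sets of (description, entity) match pairs."""
    tp = len(predicted & gold)  # pairs both predicted and in the ground truth
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

A method like Name Matching that predicts few but mostly correct pairs scores high precision and low recall under this computation, exactly the pattern reported in Table 2.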
# 5.3 New Dataset Entity Discovery (RQ3)

By applying the new dataset entity discovery strategy (Section 4.3.2), ChatPD can detect novel dataset entities referenced in academic papers. We list the top 10 most frequently used new dataset entities discovered by ChatPD that were not included in PwC’s dataset database as of November 16, 2024. We compare the coverage of these dataset entities in PwC’s database on November 16, 2024, and January 16, 2025. The results are shown in Table 3. Only three of the ten popular new datasets had been added to PwC as of January 16, 2025. Notably, the most widely used dataset, UltraFeedback [14], which has been used in over 40 papers, is still not included in PwC. This highlights that ChatPD is significantly more efficient at discovering new dataset entities than PwC.

# 6 DEPLOYMENT

ChatPD has been deployed to update the paper-dataset network weekly. Users can access https://chatpd-web.github.io/chatpdweb to search for datasets used in papers by specifying the arXiv ID or dataset name, data type, task, etc. We present the basic dataset services provided by ChatPD after deployment in Appendix A.1.

# 6.1 Offline Results

Before deployment, we conduct offline evaluations to ensure the effectiveness and efficiency of ChatPD. We randomly sample 35,310 papers in the cs.AI category on arXiv and extract dataset information from them with ChatPD. We compare the data extracted by ChatPD with that from the PwC platform to analyze the network’s size and coverage.

Table 4: Network Size and Coverage Statistics

Table 5: Performance Evaluation of ChatPD in the cs.AI Category on arXiv (2024)

Table 4 provides a summary of the network size and coverage metrics for PwC and ChatPD. The data indicates that ChatPD has significantly expanded the scope of the paper-dataset network compared to PwC. ChatPD has extracted dataset usage information from more than double the number of papers and dataset descriptions compared to PwC.
Besides the existing PwC entities, ChatPD also finds 444 new dataset entities not included in PwC. Specifically, we infer a new dataset entity if it has a useful URL and is referenced by at least 3 papers (Sec. 4.3.2). Additionally, its cost-efficiency is notable, with an average extraction cost per paper of only \$0.00063 using GPT-4o-mini. Through offline evaluation, we demonstrate that ChatPD constructs a larger and more comprehensive paper-dataset network with impressive cost efficiency.

# 6.2 Post-Deployment Results

We evaluate the performance of the deployed ChatPD by analyzing the paper-dataset network constructed from cs.AI papers on arXiv in 2024. Our results are summarized in Table 5. Approximately 87.8% of papers have accessible text information via ar5iv. ChatPD successfully extracts dataset information from 85.5% of these papers, with an average of 2.41 dataset usage records per paper. Among the extracted dataset usage records, fewer than half of the dataset descriptions can be mapped to PwC’s dataset entities. Our offline experiments in Section 5.2 demonstrate the effectiveness of our entity resolution algorithm for mapping dataset descriptions to PwC’s dataset entities. Hence, this low matching ratio indicates that PwC’s database is still incomplete, i.e., there is significant room for improvement in the coverage of PwC’s dataset database. We also evaluate the real-time performance of the deployed ChatPD and compare it with PwC’s results. We calculate, by month, the coverage of papers with extracted dataset information in the PwC database and the coverage of dataset usage records extracted by ChatPD. The results are shown in Fig. 4.
As not all extracted dataset descriptions can find matching entities in the PwC database, we record both ‘the coverage of papers with matched PwC entities (ChatPD Matched Paper Coverage)’ and ‘the coverage of papers with extracted dataset information (ChatPD Paper Coverage)’.

Figure 4: Coverage of Papers with Extracted Dataset Information in arXiv cs.AI Category

Our data is up to January 12, 2025. We observe that PwC’s paper coverage is higher than ChatPD’s matched paper coverage at the beginning of 2024. However, after May, ChatPD’s coverage surpasses PwC’s. PwC’s coverage is relatively low for newly published papers due to its partial reliance on community annotations. In contrast, ChatPD uses LLMs to automatically extract dataset information, enabling it to stably analyze dataset usage records in papers. Therefore, ChatPD’s coverage is significantly higher than PwC’s in the later months. In 2024, PwC’s paper coverage is 34.5%, ChatPD’s coverage of papers that can be mapped to PwC dataset entities is 38.4%, and the coverage of papers with extracted dataset information is 85.5%. This demonstrates that ChatPD can stably and efficiently extract dataset information.
Scientific research heavily depends on suitable datasets for method validation, but existing academic platforms with dataset management, such as PapersWithCode, suffer from inefficiencies in their manual annotation workflows. To overcome this bottleneck, we present a system, called ChatPD, that utilizes Large Language Models (LLMs) to automate dataset information extraction from academic papers and construct a structured paper-dataset network. Our system consists of three key modules: \textit{paper collection}, \textit{dataset information extraction}, and \textit{dataset entity resolution} to construct paper-dataset networks. Specifically, we propose a \textit{Graph Completion and Inference} strategy to map dataset descriptions to their corresponding entities. Through extensive experiments, we demonstrate that ChatPD not only outperforms the existing platform PapersWithCode in dataset usage extraction but also achieves about 90\% precision and recall in entity resolution tasks. Moreover, we have deployed ChatPD to continuously extract which datasets are used in papers, and it provides dataset discovery services such as task-specific dataset queries and similar dataset recommendations. We open source ChatPD and the current paper-dataset network at this [GitHub repository](https://github.com/ChatPD-web/ChatPD).
# 1 Introduction The field of medicine increasingly relies on machine learning tools for clinical decision support. In diagnosis and prognosis, probabilistic scores capture uncertainty about patient outcomes. Combined with value judgments, they produce expected values that guide clinical decisions. It is less often emphasized that expected value calculations can be used to measure the miscalibration of the probabilistic forecast itself. We accordingly propose three principles that scoring functions used for clinical purposes should satisfy as closely as possible. First, scoring functions should be adapted to account for the known label shifts that commonly arise between development and deployment environments. In particular, many medical scoring rules are intentionally trained on more balanced class distributions than those encountered in deployment. Second, the scores returned by scoring functions should be sensitive to the relative cost of errors that are clinically significant, such as the trade-off between the cost of misdiagnosis and the cost of failing to diagnose in any given setting. This supports patient-centered care by enabling the classifier’s sensitivity to be calibrated to human feedback rather than presuming a fixed normative standard. Third, scores should be calibrated; using them as probabilities gives practitioners easy access to decision theory as a way to consistently and reliably adapt decisions about risk and outcomes, when their clinical situation changes from the model developer’s assumptions. This work focuses on evaluation: specifically, we examine how the field of medical machine learning assesses and compares scoring functions and the extent to which current evaluation practices reflect clinical priorities. We begin by showing that neither of the most commonly used metrics, accuracy and AUC-ROC, adequately captures all three priorities outlined above. Each abstracts away some considerations that are critical for clinical decision-making. 
We structure the paper as follows. We first examine accuracy and its variant, balanced accuracy, as these remain the most widely used scoring rules for classification tasks. Accuracy evaluates each decision independently and measures the overall proportion of correct predictions, abstracting away critical application-specific considerations such as class imbalance and asymmetric error costs. While this abstraction offers a form of neutrality, it obscures important aspects of clinical deployment, where decision thresholds must often be adapted to reflect evolving prevalence rates or varying tolerances for false positives and false negatives. As noted by several works [9, 74, 20], accuracy fixes a single operating point and, as such, fails to engage with this necessary flexibility. In particular, it is generally not meaningful to directly compare accuracy on samples with different prevalences. We then turn our attention to the Area Under the Receiver Operating Characteristic Curve (AUC-ROC), which is commonly viewed as a solution to the rigidity of accuracy because it evaluates classifier performance across all possible thresholds. However, AUC-ROC measures the expected performance of the ideally calibrated version of a scoring function, not the actual, potentially miscalibrated outputs of a model. Moreover, it ties evaluation to a distribution over positive prediction rates that may not correspond to clinical contexts. These assumptions often lead AUC-ROC to overstate the real-world reliability of scoring functions, especially when calibration is imperfect or deployment conditions differ from development data. Because the two are expressed in different units, it is hard to reason jointly about AUC-ROC deficiencies and miscalibration when both are present. Recent work in the fairness literature has explored calibration more directly [48, 12, 36] but without broad consensus on best practices for how calibration interacts with varying cost structures.
In particular, the definition of perfect calibration is widely agreed upon, but the correct way to measure degrees of miscalibration, taking into account label prevalence and asymmetric error costs, is not. As a consequence, the use of calibration-based metrics has lagged behind that of accuracy and AUC-ROC in clinical ML settings. To address these concerns with current evaluation practices, we propose adapting a framework from the weather forecasting and belief elicitation literature known as the Schervish representation [79]. This framework shows that any proper scoring rule (a measure of calibration that doesn’t require binning) can be represented as an integral over discrete cost-weighted losses, directly linking calibration to decision-theoretic performance. We extend this framework to the setting of label shift and asymmetric costs, and average cost-sensitive metrics over a bounded range of class balances. In summary, this work makes three main contributions. First, we introduce a framing of scoring rule design that centers clinical priorities, namely calibration, robustness to distributional shift, and sensitivity to error costs. Second, we use the Schervish representation to show how these priorities induce loss functions for probabilistic forecasts. Third, we propose an adaptable scoring framework based on adjusted log scores that reflects clinical needs. It accommodates uncertainty in class balance, asymmetric cost structures, and the requirement for calibrated predictions, thereby offering a more principled foundation for evaluating machine learning models in clinical decision support. # 1.1 Problem Formulation Given an input space $\mathcal { X }$ and binary label space $\{ 0 , 1 \}$ , the standard goal of binary classification is to learn a decision rule that maps each input $x \in \mathcal { X }$ to a predicted label. 
A scoring function $s : \mathcal{X} \to \mathbb{R}$ assigns a real-valued score to each input, and a binary classifier is defined by thresholding this score. For a threshold parameter $\tau \in \mathbb{R}$, the predicted label is $\kappa(s(x), \tau) = \mathbf{1}_{(s(x) \geq \tau)}$, where $\mathbf{1}_{(\cdot)}$ denotes the indicator function, equal to 1 if the argument is true and 0 otherwise. We denote the dataset by $\mathcal{D}_{\pi_0}$, consisting of input-label pairs $(x, y)$ drawn from an unknown distribution. We define the empirical class prevalence as $\pi_0 = \mathbb{P}_{\mathcal{D}_{\pi_0}}(y = 1)$, which represents the proportion of positive examples in the dataset, and the possibly unknown target or deployment class prevalence as $\pi = \mathbb{P}_{\mathcal{D}_{\pi}}(y = 1)$. To formalize evaluation objectives, we introduce three additional elements: (1) a value function $V(y, \kappa(s(x), \tau))$ that specifies the utility or loss associated with predicting $\hat{y}$ when the true label is $y$; (2) a parameter $c \in (0, 1)$, which encodes the relative cost of false positives and false negatives and determines the threshold; and (3) a distribution $H$ over possible data-generating distributions $\mathcal{D}_{\pi}$, modelling uncertainty over the environment and potential distribution shifts. We denote odds multiplication by $a \otimes b \triangleq \frac{ab}{ab + (1-a)(1-b)}$.

# 2 Related Work

Recent literature emphasizes that scoring rules in AI should meaningfully reflect the objectives of deployment contexts rather than relying on standard metrics that can lead to suboptimal or misleading outcomes [13, 51].
Decision Theory Decision theory has roots in gambling and actuarial sciences but was formally structured by foundational works such as Ramsey [75] and de Finetti [15, 16]. Within medical decision-making, a prominent recent line of inquiry has been Decision Curve Analysis (DCA), a decision-theoretic framework developed by Vickers et al. [91, 94]. However, DCA has avoided measuring the area under the decision curve [83], eschewing mathematical evaluation when neither classifier dominates. This body of work critically examines widely-used metrics such as the Area Under the Curve (AUC) [92, 93] and the Brier score [3], questioning their clinical utility and advocating for metrics directly connected to decision-analytic value. Proper Scoring Rules The literature on proper scoring rules began with Brier [10], and was subsequently enriched by contributions from Good [30] and McCarthy [53]. A critical advancement was the integral representation of proper scoring rules by Shuford et al. [81], which explicitly connects scoring rules with decision-theoretic utility via Savage [78]. This was followed by the comprehensive characterization provided by Schervish [79], who demonstrated that strictly proper scoring rules can be represented as mixtures of cost-weighted errors. The formalism was further elucidated by Shen [80] through the lens of Bregman divergences and by Gneiting and Raftery [28] as convex combinations of cost-sensitive metrics. Hand [33] advocated adapting scoring rules explicitly to application contexts using beta distributions, a perspective later extended by Hand and Anagnostopoulos [32] and Zhu et al. [97] through asymmetric beta distributions. Our approach differs in that we rely on uniform intervals between upper and lower bounds. Compared to correctly setting the sum of Beta parameters, this is a more intuitive way of specifying dispersion. Calibration Techniques The Pool Adjacent Violators Algorithm (PAVA), introduced by Ayer et al.
[4], remains a foundational calibration technique, equivalent to computing the convex hull of the ROC curve [24]. A distinct parametric calibration approach based on logistic regression was popularized by Platt [71], and subsequently refined for slope-only calibration by Guo et al. [31]. An intercept-only version aligns closely with simple score adjustments [76], while broader generalizations are explored in Kull et al. [49]. More recently, the calibration literature has shifted towards semi-supervised contexts, utilizing unlabeled data to enhance calibration quality [50, 5, 27]. Despite extensive critiques, for example that the widely-adopted Expected Calibration Error (ECE) [67] is not a proper scoring rule [90, 95], this metric remains popular in practice. Calibration has recently emerged as a fairness metric alongside predictive accuracy. This perspective, however, has become contentious since calibration was shown to be fundamentally incompatible with other fairness criteria [48], spurring the development of "multicalibration" approaches that ensure calibration across numerous demographic subgroups [36]. Label Shift Label shift techniques are a particularly useful special case of calibration techniques. While the concept of shifting class prevalences without altering the underlying conditional distribution of features is longstanding [57, 37, 38, 76], formal treatments and systematic causal characterizations arose from Moreno-Torres et al. [61]. Earlier explorations of covariate shift [85] motivated a broader field of research aimed at developing invariant representations robust to distribution shifts. These efforts encompass methods based on richer causal assumptions [84], invariant representation learning [7, 62], and distributionally robust optimization [52, 26, 77, 21].
AUC-ROC & AUC-PR The Receiver Operating Characteristic (ROC) curve emerged within signal detection theory [70, 87], later becoming central in radiology and clinical diagnostics, where the convention solidified around measuring performance via the Area Under the Curve (AUC) [58, 35]. Use of AUC to aggregate over multiple thresholds was explored by Spackman [82], Bradley [9], and Huang and Ling [43], with subsequent critiques noting widespread interpretability issues [11]. Hand [33] showed how the AUC of calibrated classifiers relates to average accuracy across thresholds, while Hernández-Orallo et al. [41] described alternative interpretations via uniform distributions of predicted score or uniform distributions of desired positive fractions (see Appendix E for more details). Recently, there has been increased scrutiny of AUC-ROC, particularly regarding its lack of calibration and poor decomposability across subgroups [45]. Precision and Recall metrics originated in information retrieval, with Mean Average Precision (MAP) or the Area Under the Precision-Recall Curve (AUC-PR) formalized by Keen [46, 47]. While more recent trends in information retrieval have favored metrics such as Precision@K, Recall@K, and Discounted Cumulative Gain (DCG), Davis and Goadrich [14] popularized AUC-PR for classifier evaluation, particularly in contexts with imbalanced data. Despite well-documented critiques, including that AUC-PR poorly estimates MAP [8] and lacks clear theoretical justification [56], its use persists, particularly in medical and biomedical contexts. Cost-Sensitive Learning Cost-sensitive evaluation, historically formalized through Cost/Loss frameworks [2, 63], was independently introduced in clinical decision-making as early as Pauker and Kassirer [68]. The modern foundation of cost-sensitive learning emerged prominently in the machine learning literature in the 1980s, notably via the seminal work on MetaCost by Domingos [18] and the canonical overview by Elkan [23].
Extending these frameworks to multi-class settings poses challenges due to the quadratic complexity of pairwise misclassification costs. Visualization Visualization techniques to illustrate economic or decision-theoretic value as a function of decision thresholds date back to Thompson and Brier [88], with subsequent development by Murphy et al. [65, 66], who linked visualizations explicitly to scoring rule theory. Later rediscoveries within machine learning were articulated by Adams and Hand [1], and independently by Drummond and Holte [19, 20]. More recently, these visualizations were generalized to include uncalibrated models [39] and formally named Murphy Diagrams by Ehm et al. [22], with further implementation guidance provided by Dimitriadis et al. [17].

# 3 Accuracy: Calibration without Label Shift Uncertainty

The most popular metric for evaluating binary classifiers is the simplest: accuracy.

Definition 3.1 (Accuracy). Given a dataset $\mathcal{D}_{\pi_0}$, a score function $s$, and a threshold $\tau$, the accuracy is defined as

$$
\mathrm{Accuracy}(\mathcal{D}_{\pi_0}, s, \tau) = \frac{1}{|\mathcal{D}_{\pi_0}|} \sum_{(x, y) \in \mathcal{D}_{\pi_0}} \mathbf{1}_{\left(y = \kappa(s(x), \tau)\right)}
$$

Accuracy considers the binarized score, discarding the real-valued information necessary for assessing calibration or uncertainty. It further assumes $V(0, 1) = V(1, 0)$, treating false positives and false negatives as equally costly. This is misaligned with most real-world decision problems, where asymmetric stakes are the norm. Finally, the validity of the evaluation results presumes that the operational data-generating distribution matches the evaluation distribution, thereby ignoring the possibility of distribution shift. We describe existing extensions to address asymmetric costs, label shift, and calibration.
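Definition 3.1 translates directly into code. The following minimal sketch evaluates the empirical accuracy of a thresholded score function (the names are illustrative):

```python
def kappa(score, tau):
    """Threshold classifier: predict 1 iff score >= tau."""
    return 1 if score >= tau else 0

def accuracy(data, s, tau):
    """Empirical accuracy of thresholding score function s at tau.

    data is a list of (x, y) pairs with binary labels y in {0, 1}.
    """
    return sum(y == kappa(s(x), tau) for x, y in data) / len(data)
```

Note that only the binarized prediction `kappa(s(x), tau)` enters the sum; how far a score sits from the threshold never matters, which is exactly the information loss criticized above.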
Asymmetric Costs. In most practical decision problems, false positives and false negatives carry asymmetric consequences. Several extensions of accuracy have been proposed to account for this asymmetry. Two commonly used variants are net benefit and weighted accuracy. This use of the term net benefit originates from decision curve analysis (DCA) [91] but is similar in structure to earlier formulations [63]. We use a variation of net benefit that focuses on the benefit of true negatives rather than the costs of false positives in order to be more directly comparable to accuracy.

Definition 3.2 (Net Benefit). Given a threshold parameter $c \in (0, 1)$, the net benefit is defined as

$$
\mathrm{Net\ Benefit}(\mathcal{D}_{\pi_0}, s, \tau, c) = \frac{1}{|\mathcal{D}_{\pi_0}|} \sum_{(x, y) \in \mathcal{D}_{\pi_0}} \left(\frac{c}{1 - c}\right)^{1 - y} \mathbf{1}_{\left(y = \kappa(s(x), \tau)\right)}
$$

At deployment time, the cost ratio may not match the one used in training, and the threshold may need to be adjusted. If a score function $s(x)$ is well-calibrated, we can reliably threshold it to optimize binary decisions under any cost asymmetry. Specifically, the optimal threshold $\tau$ satisfies

$$
P(Y = 1 \mid s(x) = \tau) = \frac{V(0, 1) - V(0, 0)}{V(0, 1) - V(0, 0) + V(1, 1) - V(1, 0)} .
$$

See Appendix A.1 for details. Net benefit has the advantage that the interpretation of true positives remains consistent with standard accuracy. That is, true positives are rewarded uniformly regardless of the cost ratio, while false positives are penalized according to the cost ratio determined by $c$. Another popular metric is weighted accuracy, which corresponds to what Murphy [63] called relative utility, and is normalized so that a perfect classifier achieves a score of 1 regardless of class balance.
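The net benefit of Definition 3.2 can be computed with a small sketch like the following (illustrative code under the definitions above, not the authors' implementation):

```python
def net_benefit(data, s, tau, c):
    """Empirical net benefit (Definition 3.2): correct positives count 1,
    correct negatives count c/(1-c), and errors count 0.

    data is a list of (x, y) pairs with binary labels y in {0, 1};
    c in (0, 1) encodes the relative cost of false positives.
    """
    w = c / (1 - c)  # weight applied to true negatives
    total = 0.0
    for x, y in data:
        pred = 1 if s(x) >= tau else 0
        if pred == y:
            total += 1.0 if y == 1 else w
    return total / len(data)
```

At $c = 0.5$ the true-negative weight is 1, so net benefit reduces to ordinary accuracy; smaller $c$ down-weights correct negatives, reflecting a setting where missing a positive case is costlier than a false alarm.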
We provide a definition in Appendix B. While net benefit and weighted accuracy are both widely used, both inherit critical limitations from the basic accuracy framework: they binarize the score, thereby discarding information about uncertainty and calibration, and they assume a fixed data-generating distribution, thereby failing to account for distribution shift.

Label Shift. To model deployment scenarios, we adopt a causal perspective. Under the label shift structure $\mathcal{D}_{\pi} \to Y \to X$, the conditional distribution $P(X \mid Y, \mathcal{D}_{\pi}) = P(X \mid Y)$ remains invariant across domains. This assumption holds in many clinical contexts, where observed features ($X$) reflect underlying conditions ($Y$) whose prevalence varies across populations ($\mathcal{D}_{\pi}$). We focus on this structure because it aligns with the intuition of identifying latent diagnostic classes and enables robust correction methods for distribution shift. In contrast, under the alternative structure $\mathcal{D}_{\pi} \to X \to Y$, $Y$ often encodes time-to-event outcomes, requiring distinct modeling strategies such as survival analysis. The $\mathcal{D}_{\pi} \to Y \to X$ structure permits importance sampling to estimate deployment-time expectations. However, because prediction is performed via $P(Y \mid X)$, we must also adjust the posterior using Bayes' rule to account for class prevalence changes [57, 38, 76]. This yields the adjusted posterior:

$$
P(Y = 1 \mid s(x), \mathcal{D}_{\pi}) = P(Y = 1 \mid s(x), \mathcal{D}_{\pi_0}) \otimes (1 - \pi_0) \otimes \pi .
$$

With this score adjustment in place, the accuracy attained in the deployment environment can be computed.
We define $s_{1/2}(x) \triangleq (1 - \pi_0) \otimes s(x)$, and denote the adjusted binary classifier by $\kappa(\pi \otimes s_{1/2}(x), \tau)$. We refer the reader to Appendix A.3 for the full derivation.

Definition 3.3 (Prior-Adjusted Maximum Accuracy). Given the empirical class prevalence $\pi_0$ and the deployment class prevalence $\pi$, the prior-adjusted maximum accuracy is given by
$$ \mathrm{PAMA}(\mathcal{D}_\pi, s, \tau) = \frac{1}{|\mathcal{D}_{\pi_0}|} \sum_{(x, y) \in \mathcal{D}_{\pi_0}} \left( \frac{\pi}{\pi_0} \right)^{y} \left( \frac{1-\pi}{1-\pi_0} \right)^{1-y} \mathbf{1}_{\left( y = \kappa\left( \pi \otimes s_{1/2}(x), \tau \right) \right)}. $$

Prior-adjusted maximum accuracy allows us to handle any single known label shift, but it requires that our original score be probabilistically meaningful (what [15] called coherent). We can further combine these adjustments with asymmetric cost modeling to design metrics appropriate for specific deployment scenarios:

Definition 3.4 (Prior-Adjusted Maximum Net Benefit). Given the empirical class prevalence $\pi_0$, the deployment class prevalence $\pi$, and the cost ratio $c$, the prior-adjusted maximum net benefit is given by
$$ \mathrm{PAMNB}(\mathcal{D}_\pi, s, \tau, c) = \frac{1}{|\mathcal{D}_{\pi_0}|} \sum_{(x, y) \in \mathcal{D}_{\pi_0}} \left( \frac{\pi}{\pi_0} \right)^{y} \left( \frac{c}{1-c} \frac{1-\pi}{1-\pi_0} \right)^{1-y} \mathbf{1}_{\left( y = \kappa\left( \pi \otimes s_{1/2}(x), c \right) \right)}. $$
See Appendix B for details and an extension to weighted accuracy.
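An illustrative implementation of Definition 3.3, under the same simplifying assumptions as above ($\kappa$ thresholds the prior-adjusted posterior; the names are ours):

```python
import numpy as np

def pama(y, scores, pi0, pi, tau=0.5):
    """Prior-adjusted maximum accuracy: importance-weight the evaluation
    set from prevalence pi0 to pi, and classify with the prior-adjusted
    score thresholded at tau."""
    y = np.asarray(y)
    s = np.asarray(scores, dtype=float)
    # importance weights (pi/pi0)^y ((1-pi)/(1-pi0))^(1-y)
    w = (pi / pi0) ** y * ((1 - pi) / (1 - pi0)) ** (1 - y)
    # prior-adjusted posterior (Bayes' rule under label shift)
    num = (pi / pi0) * s
    adj = num / (num + ((1 - pi) / (1 - pi0)) * (1 - s))
    preds = (adj >= tau).astype(int)
    return float(np.mean(w * (preds == y)))
```

When $\pi = \pi_0$, the weights are all 1, the adjusted score equals the raw score, and PAMA reduces to accuracy.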
However, even these extensions still fundamentally rely on binarized scores; we can account for any given fixed label shift, but we cannot account for label shift uncertainty.

Calibration. Beyond adapting to shifts in population prevalence, another crucial aspect of evaluating probabilistic model outputs, particularly in clinical decision-making, is calibration. Unlike threshold-based decision-making, which focuses on classification accuracy, the goal of calibration is to ensure that predicted probabilities match observed frequencies: that is, $P(Y = 1 \mid s(x)) = s(x)$. This perspective, well established in the weather forecasting literature [10, 64], prioritizes reporting reliable probabilities over optimizing decisions directly. However, calibration alone does not guarantee utility. For instance, the "climatological forecast", which assigns the same score to all inputs, is perfectly calibrated but useless for guiding decisions. A key issue in defining calibration is specifying where it is required. As shown above, in the presence of asymmetric costs, optimal decision-making depends on correctly identifying the point where $P(Y = 1 \mid s(x)) = c$, but this can be achieved by a model that is calibrated only at $c$. As long as $P(Y = 1 \mid s(x) > c) > c$ and $P(Y = 1 \mid s(x) < c) < c$, a classifier can still support optimal thresholding at $c$, even if it is miscalibrated elsewhere. Uncertain label shift is more complex and motivates a broader sense of calibration. If we can bound the possible class balances, we need the model to be calibrated within the whole corresponding range; a score function that is well calibrated only in this narrow region can nevertheless support robust, cost-sensitive classification. This suggests a more nuanced perspective: rather than enforcing global calibration, it may suffice to ensure calibration within a threshold band.
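The idea of calibration within a threshold band can be probed empirically. The following helper is our own hypothetical construction, not a metric from this paper: it measures the binned calibration gap only for scores falling inside a decision-relevant band $[\mathrm{lo}, \mathrm{hi}]$:

```python
import numpy as np

def band_calibration_error(y, scores, lo, hi, n_bins=10):
    """Weighted mean |avg score - empirical positive rate| over equal-width
    bins, restricted to scores inside the band [lo, hi]."""
    y = np.asarray(y, dtype=float)
    s = np.asarray(scores, dtype=float)
    mask = (s >= lo) & (s <= hi)
    s, y = s[mask], y[mask]
    if s.size == 0:
        return 0.0  # no scores in the decision-relevant band
    bins = np.clip(((s - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        idx = bins == b
        if idx.any():
            err += idx.sum() / s.size * abs(s[idx].mean() - y[idx].mean())
    return float(err)
```

Miscalibration outside the band is deliberately ignored, matching the observation that only calibration near the operating thresholds affects decisions.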
Part of the contribution of this paper is to formalize and operationalize this idea. Perfect calibration supports optimal decisions across a range of environments and objectives, but the extent to which deviations from calibration degrade performance, particularly under shift, is far less intuitive. Developing principled ways to measure this degradation, and to evaluate classifiers in terms of local calibration and its decision-theoretic consequences, is a central motivation for the analysis that follows.

# 3.1 Schervish representation

Schervish [79] showed that every proper scoring rule can be represented as a mixture over cost-weighted errors, assuming that thresholds are set optimally for the associated costs. This representation provides one of the earliest meaningful interpretations of the units of miscalibration. Independently, Hand [33] rediscovered proper scoring rules, reframed as H-measures, in the context of mixtures over cost-weighted errors. He used this framing to show that the AUC-ROC of a calibrated classifier corresponds to a mixture of cost-weighted errors under a particular (and undesirable) distribution over cost ratios. The idea of generalizing from a cost to a cost proportion that also depends on class balance has been independently proposed several times in the setting where the scores' analytic distributions are known [20, 41]. Hand and Anagnostopoulos [34] introduced the idea of a double integral over cost and balance, but their work does not explore the semantics of the resulting joint distribution, nor does it provide guidance on how the double integral should be computed. We build on the view that proper scoring rules can be interpreted as mixtures over a distribution $H$ of data distributions $\mathcal{D}_\pi$, where each scoring rule evaluates cost-weighted errors $V$ over the corresponding $\mathcal{D}_\pi$.
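The mixture view can be checked numerically for the log score. In the Schervish-style decompositions, the cost-weighted loss at threshold $c$ is $L_c(p, y) = c \cdot \mathbf{1}[y = 0, p > c] + (1-c) \cdot \mathbf{1}[y = 1, p \le c]$, and the log score corresponds to the mixing weight $1/(c(1-c))$; integrating over $c$ recovers $-\log p$ for $y = 1$ and $-\log(1-p)$ for $y = 0$. A midpoint-rule sketch (our helper names):

```python
import math

def cost_weighted_loss(p, y, c):
    # false positive costs c; false negative costs 1 - c
    if y == 1:
        return 1 - c if p <= c else 0.0
    return c if p > c else 0.0

def mixture_log_score(p, y, n=200_000):
    # integrate L_c(p, y) / (c (1 - c)) over c in (0, 1), midpoint rule
    total = 0.0
    for i in range(n):
        c = (i + 0.5) / n
        total += cost_weighted_loss(p, y, c) / (c * (1 - c))
    return total / n
```

For $p = 0.3$, the integral evaluates to $-\log 0.3 \approx 1.204$ when $y = 1$ and $-\log 0.7 \approx 0.357$ when $y = 0$, matching the log score term by term.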
Our approach does not have the ambiguity of combined cost/balance terms in Drummond and Holte [20], nor does it require the double integration over both cost and prevalence suggested in Hand and Anagnostopoulos [34], which produces dilogarithm terms not widely used in practice. Instead, we fix the cost ratio $c$ and integrate over the variability of data distributions captured by $H$, yielding tools that are computationally simpler and semantically interpretable.

# 4 AUC-ROC: Label Shift Uncertainty without Calibration

The most common approach to integrating over a range of operating conditions is to use the AUC-ROC in place of accuracy. This is an ordinal metric that discards information about the magnitudes of model scores and evaluates performance solely on the relative ordering between positive and negative examples.

Definition 4.1 (AUC-ROC). Let $s : \mathcal{X} \to \mathbb{R}$ be a scoring function on $\mathcal{D}_{\pi_0}$. Then, the AUC-ROC is given by:
$$ \mathrm{AUC\text{-}ROC}(\mathcal{D}_{\pi_0}, s) \triangleq \sum_{(x, y) \in \mathcal{D}_{\pi_0}} \frac{1}{|\mathcal{D}_{\pi_0}|} \frac{1-y}{1-\pi_0} \sum_{(x', y') \in \mathcal{D}_{\pi_0}} \frac{1}{|\mathcal{D}_{\pi_0}|} \frac{y'}{\pi_0} \Big[ \mathbf{1}_{(s(x') > s(x))} + \frac{1}{2} \mathbf{1}_{(s(x') = s(x))} \Big] $$

At first glance, this formulation poses a challenge for decision-theoretic interpretation. Specifically, it is not a priori clear how to interpret AUC-ROC within a framework where the metric corresponds to expected utility or decision quality under a specified loss function and distributional assumption.
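Since $|\mathcal{D}_{\pi_0}| \pi_0$ is the number of positives (and $|\mathcal{D}_{\pi_0}|(1-\pi_0)$ the number of negatives), Definition 4.1 reduces to the familiar pairwise form: the empirical probability that a random positive outranks a random negative, with ties counted as one half. A direct sketch:

```python
import numpy as np

def auc_roc(y, scores):
    """AUC-ROC as in Definition 4.1: average over positive/negative pairs
    of 1[s(x') > s(x)] + (1/2) 1[s(x') = s(x)]."""
    y = np.asarray(y)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

This pairwise computation is quadratic in the sample size; rank-based implementations achieve the same value in $O(n \log n)$.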
AUC-ROC resists this interpretation because it is invariant to monotonic transformations of the score function and, therefore, indifferent to the calibration or absolute values of the scores, which are central to threshold-based decision-making. On the other hand, AUC-ROC does capture something that accuracy fails to: it aggregates performance across the full range of the score distribution, effectively summing a population-level statistic over levels of the score. There are numerous ways to interpret AUC-ROC, at least a dozen of which are enumerated in Appendix E. We nevertheless offer a new formulation, whose proof is in Theorem F.5, that sheds particular light on its relationship to label shift, i.e. when the marginal distribution over labels differs between training and deployment.

Theorem 4.2 (AUC-ROC as Accuracy Averaged Across Label Shift). Let $s$ be a scoring function that is calibrated on the evaluation distribution $\mathcal{D}_{\pi_0}$. Then:
$$ \mathrm{AUC\text{-}ROC}(s) = \frac{1}{2} \mathbb{E}_{t \sim s[\mathcal{D}_{1/2}]}\left[ \mathrm{PAMA}(\mathcal{D}_{1-t}, s, 1/2) \right] $$
where $\mathcal{D}_{1/2}$ denotes a balanced reweighting of the dataset (i.e., class prior $\pi = 1/2$), and $s[\mathcal{D}_{1/2}]$ denotes the distribution of model scores over this reweighted set.

This perspective reveals that AUC-ROC can be viewed as averaging thresholded accuracy across a distribution of class prevalences, albeit one that is induced implicitly by the score distribution of the model itself. This provides a limited form of robustness to label shift, in contrast to metrics like accuracy, which are typically evaluated at a fixed class balance. However, this interpretation also surfaces several critical limitations. First, AUC-ROC entirely disregards calibration.
By evaluating only the ordering of scores, it fails to assess whether predicted probabilities are well aligned with empirical outcomes; yet correctly estimated probabilities are a crucial ingredient of expected-value decision theory, so the lack of good estimates undermines deployment in high-stakes domains. This issue is shared by other ranking metrics such as AUC-PR and normalized discounted cumulative gain, which similarly ignore score magnitudes. The historical development of AUC-ROC provides important context. Ordinal metrics were popularized in fields like psychology, where class prevalences were fixed by design, and information retrieval, where the number of results per page was fixed by the capacity of the querying user regardless of quality. Their subsequent adoption in machine learning reflects a shift in evaluation priorities away from deployment evaluation and toward the abstract comparison of new architectures and optimization techniques. In such a setting, ordinal metrics offer a convenient, threshold-free mode of comparison. However, such metrics are poorly aligned with the needs of real-world deployments, where thresholding, cost asymmetries, and calibration are often indispensable. Second, although AUC-ROC evaluates calibrated scores for their performance across varying class balances, the distribution over these prevalences is not user-specified or interpretable. It is instead a byproduct of the model's score distribution on a hypothetical balanced dataset. Consequently, the underlying population over which AUC-ROC aggregates accuracy differs across models, making metric comparisons across models trained on the same data unreliable. Finally, AUC-ROC does not allow the independent specification of label shift and asymmetric error costs. Although we can interpret varying prevalences as including varying cost ratios through the relationship $\pi' = (1 - c) \otimes \pi$ [40], doing so entangles cost asymmetry with shifts in class balance.
In summary, AUC-ROC offers a partial advantage over accuracy by aggregating across class balances, but its benefits are offset by its insensitivity to calibration, its implicit and model-dependent averaging distribution, and its inability to account for cost asymmetry. While it captures ranking performance, it fails to reflect key aspects of real-world decision quality.

# 5 Log Score: Label Shift, Calibration and Cost Asymmetry

To evaluate the utility of a thresholded classifier under uncertain or varying class balance, it is critical to evaluate the calibration of its underlying score function across a range of label distributions. Calibration metrics are only meaningful insofar as they are expressed in units that reflect application-specific costs. In this context, cost asymmetry is not a minor adjustment but a first-order concern that must be explicitly accounted for. However, a persistent challenge in real-world deployments is the difficulty of comparing the impact of miscalibration, measured in cost-aligned units, with the loss in performance attributable to poor sharpness or uncertainty in ranking [92].

# 5.1 Background

Accuracy can be generalized to account for asymmetric costs and label shift, but it does not provide insight into performance across varying class balances. AUC-ROC and AUC-PR focus on performance across class balances but disregard calibration, potentially missing significant issues and offering no direct link to the ground truth. The log score (or cross entropy) can, owing to the Schervish representation, be viewed as an average of accuracy over a range of class balances whose log odds are uniform. Unfortunately, as [3] point out, this range is vast: too broad to be clinically useful. Where there are only a handful of deployment settings, we can take a discrete average of performance in each, but as uncertainty grows we need a simpler, more flexible, continuous approach.
Recent attempts have focused on obtaining a central estimate and fitting a Beta distribution around it (Hand [33], Hand and Anagnostopoulos [32], Zhu et al. [97]). Unfortunately, the dispersion of a Beta distribution remains unintuitive to most medical (and perhaps even most ML) practitioners. Indeed, Zhu et al. [97] do not provide a procedure to set the pseudocount $\alpha + \beta - 2$, and Hand and Anagnostopoulos [32] do not quantify uncertainty but suggest always using 1 as "a sensible default value".

# 5.2 Clipped Cross Entropy

The core contribution of this paper is to propose: (1) a simple way to characterize uncertainty in label shift using lower and upper bounds on class balance, (2) a straightforward means to average accuracy over that range, and (3) a natural extension to two standard approaches for handling asymmetric costs. As previously mentioned, there exists a duality between measures of calibration and mixtures of accuracy measures across different prevalences. This result is somewhat unintuitive; see Appendix C for a derivation that sheds more light. We extend this result to average over only a specific subinterval of prevalences. Our new contribution demonstrates how this can also be applied when costs are asymmetric. These formulas enable straightforward calculation of the average cost-sensitive performance of a classifier over a specified range of prevalences. We begin by specifying a lower bound $a$ and an upper bound $b$ on the class balance. These bounds can be obtained in direct consultation with domain experts, through surveys, or from previous estimates. In many cases, even very conservative bounds will substantially reduce the range of possible prevalences. However, since there can be order-of-magnitude differences in the prevalence of a condition across different populations, we average uniformly over the log odds of the prevalence rather than linearly over the class prevalence itself.
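A small sketch of the logit-uniform averaging just described: prevalences are drawn uniformly in log-odds space between the bounds $a$ and $b$ (the function names are ours):

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sample_prevalences(a, b, n, seed=None):
    """Draw n prevalences with logit(pi) ~ Uniform(logit(a), logit(b))."""
    rng = np.random.default_rng(seed)
    return sigmoid(rng.uniform(logit(a), logit(b), size=n))
```

With bounds of one part in ten thousand and one part in one hundred, roughly half of the draws fall below one part in one thousand, reflecting the order-of-magnitude symmetry of log-odds averaging.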
This means, for example, that the interval between one part in one hundred and one part in ten thousand will lie about half above and half below one part in one thousand. We represent this log odds transformation by saying that $\sigma^{-1}(\pi) \sim \mathrm{Uniform}(\sigma^{-1}(a), \sigma^{-1}(b))$, where $\sigma(x) \triangleq \frac{1}{1 + e^{-x}}$ is the usual sigmoid function. In what follows, we define the normalization constant $\gamma$ based on the cost ratio $c$ and the bounds on class balance $a$ and $b$.

Theorem 5.1 (Bounded DCA Log Score). Let $s(x) \in [0, 1]$ be a score function and $c \in (0, 1)$ be a cost parameter defining a decision threshold. Then, the expected net benefit over a logit-uniform prior on prevalence satisfies
$$ \mathbb{E}\big[ \mathrm{PAMNB}(\mathcal{D}_\pi, s, \tau, c) \big] = \gamma\, \mathbb{E}\Big[ \log\big| 1 - y - \mathrm{clip}\big( (1-c) \otimes s_{1/2}(x) \big) \big| - \log\big| 1 - y - \mathrm{clip}\big( 1 - s_{1/2}(x) \big) \big| \Big], $$
where the expectation on the left-hand side is with respect to $\sigma^{-1}(\pi) \sim \mathrm{Uniform}(\sigma^{-1}(a), \sigma^{-1}(b))$, the expectation on the right-hand side is with respect to $(x, y) \sim \mathcal{D}_{1-c}$, and where $\gamma \triangleq \frac{2(1-c)}{\sigma^{-1}(b) - \sigma^{-1}(a)}$.

By clipping the score, this formula yields a closed-form expression for the average net benefit over a range of class balances. A major practical benefit of this score is that it is based on a pointwise calculation of loss, so confidence intervals can be trivially bootstrapped by resampling calculated losses, and each new draw only requires a weighted sum of these losses. However, as previously discussed, the ability to handle asymmetric costs is a first-order consideration when using such units.
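The bootstrap described above is straightforward precisely because the losses are pointwise; each resample only requires re-averaging already-computed values. A sketch (our helper names):

```python
import numpy as np

def bootstrap_ci(pointwise_losses, n_boot=2000, alpha=0.05, seed=None):
    """Percentile bootstrap CI for the mean loss: resample the
    already-computed pointwise losses, so each draw is just the mean of a
    resampled vector."""
    rng = np.random.default_rng(seed)
    losses = np.asarray(pointwise_losses, dtype=float)
    n = losses.size
    means = np.array([losses[rng.integers(0, n, n)].mean() for _ in range(n_boot)])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

No model re-evaluation is needed inside the bootstrap loop, which keeps confidence intervals cheap even for large evaluation sets.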
We focus on net benefit because it is scaled the same way at different class balances, so it makes sense to add and average this quantity. However, see Theorem D.2 for a related derivation that holds if we want to use weighted accuracy, which is effectively a rescaled version of net benefit for which the maximum possible score of a perfect classifier is always 1, regardless of the class balance.

# 5.3 Calibration, Label Shift and Asymmetric Costs

The units of net benefit for any given class balance are clear: they are denominated in the value of a true positive. Our newly introduced DCA log score is a mixture over class balances, but remains in units of true positives. The Schervish representation gives us a simple way to describe what this is doing as a calibration metric as well: it calibrates the model only over the particular bounded range of scores that is relevant to decisions, weighting scores in that range uniformly in log odds space. This, then, is a measure of miscalibration that can be used to directly compare classifiers' effects in the world. Indeed, Gneiting and Raftery [28] showed that the unconstrained log score can be decomposed linearly into components of calibration and sharpness. As [92] has argued, the need to weigh failures of calibration against failures of sharpness is a perpetual problem in the deployment of machine learning models in a medical context. Moreover, our approach is flexible enough that we can generalize beyond accuracy (a well known result) to cost-sensitive metrics, as we demonstrate with the DCA Log Score and the Weighted Accuracy Log Score. See Appendix D for details.

# 6 Application to Subgroup Decomposition

The utility of our clipped cross entropy approach is highlighted by analyzing the following racial disparities in accuracy and AUC-ROC on the publicly available subset of the eICU dataset [44], for predictions of in-hospital mortality using APACHE IV scores.
(C) African American patients (orange) have noticeably better AUC-ROC than Caucasian patients (blue). However, we can decompose the difference in accuracy into (A) a difference in mechanism of prediction at equal class balances (i.e. the same in-hospital mortality) and (B) a difference in the class balance at which accuracy is evaluated for the two groups. Doing so reveals that across the range of prevalences, performance is consistently lower for African American patients, indicating that the observed accuracy difference is entirely driven by label shift (B).

(C) African American patients (orange) have noticeably better AUC-ROC than Caucasian patients (blue). However, we can plot the accuracy of a perfectly recalibrated model (dashed lines), and then decompose the average accuracy using the calibration–sharpness framework [80, 28]. We see that (A) the model gives sharper predictions for African American than for Caucasian patients, but (B) it is badly miscalibrated for African American patients while showing virtually no miscalibration loss for Caucasian patients.

The most important aspect of this analysis is that we can directly compare the magnitudes of the two effects. Our analysis illustrates that ranking-based metrics fail to address miscalibration in practical scenarios, and that this cannot be readily fixed by adding a separate, incommensurable calibration metric. Furthermore, they do not allow decompositions that separate the effects of differing class balance from differing mechanisms of prediction. Conversely, it shows that accuracy can be misleading because of its dependence on specific class balances, and that averaging over a range of class balances gives a more intuitive picture of the reasons for gaps in performance. In this particular example, we quantify uncertainty by computing confidence intervals (see Appendix G for visualization).
Given that mortality prevalence is around 10%, and African American patients make up roughly 10% of the dataset, we lack sufficient statistical power to generalize conclusions from the public subset to the full data. However, the example illustrates the ways in which the analytical flexibility of the Schervish approach supports both practical development and principled scrutiny of deployed models.

# 7 Discussion

The prevailing paradigm for evaluating medical ML decision-support systems often misaligns with evidence-based medicine and beneficence by overlooking real-world cost structures, disease prevalences, and calibration nuances. We address these gaps through three main contributions:

1. Causal distribution-shift grounding of AUC-ROC and accuracy. We show that AUC-ROC corresponds to the expected utility of a fixed threshold under a specific distribution over class prevalences, and that accuracy arises as its degenerate case at the cost ratio in the evaluation set.

2. Illustration of the Schervish representation. Inspired by Schervish's insight that "scoring rules are just a way of averaging all simple two-decision problems into a single, more complicated, decision problem" [79], we reconceptualize calibration as a continuum of cost-weighted binary decisions. This perspective clarifies when AUC-ROC and accuracy serve as valid proxies for clinical benefit (and when they obscure important cost asymmetries and class imbalances) and motivates their augmentation with interval-specific calibration metrics like the DCA log score for deployment-critical evaluations.

3. DCA log score for cost-sensitive calibration. Unlike binned calibration or global metrics, the DCA log score isolates miscalibration over clinically relevant probability intervals that are dictated by anticipated cost ratios and base rate bounds, thereby making the practical impact of calibration errors explicit.
Our framework further elucidates a conceptual tension between forecasting and classification uncertainty via their causal structures, one that has perhaps limited the uptake in the medical setting of evaluation measures used in the forecasting literature, such as Brier scores. Forecasting ($\mathcal{D}_\pi \to X \to Y$) assumes stability in $P(Y \mid X)$ but is vulnerable to feature-distribution shifts, while classification ($\mathcal{D}_\pi \to Y \to X$) assumes stable $P(X \mid Y)$, enabling more robust performance predictions across thresholds and prevalences. Recognizing this causal reversal underpins the targeted calibration assessment we propose.

# 7.1 Limitations and Future Work

While this work contributes a flexible and decision-theoretically grounded framework for evaluating predictive models under cost asymmetry and distributional shift, several challenges remain. These limitations point to open areas for theoretical refinement, methodological innovation, and practical implementation. Below, we highlight key directions for future research.

Cost Uncertainty. While our extension of the Decision Curve Analysis (DCA) log score to uncertain cost ratios captures realistic ambiguity in clinical tradeoffs, it introduces dilogarithmic expressions that are analytically opaque and computationally intensive. These forms limit practical interpretability and scalability. Future work could explore tractable approximations or surrogate objectives that preserve sensitivity to cost uncertainty while enabling smoother optimization and interpretive clarity.

Sampling Variability under Label Shift. In settings with symmetric misclassification costs, bootstrap resampling or binomial confidence intervals suffice for uncertainty estimation. However, under asymmetric costs, especially with population label shift, the evaluation metrics become sensitive to multinomial fluctuations in both the score distribution and the cost-weighted outcome prevalence.
This introduces high variance and potential estimation bias. Quantifying and stabilizing this variability remains a significant challenge.

Adaptive Base Rate Estimation. Our framework presumes known deployment class prevalences. In practice, these rates may be uncertain or drift over time due to changing patient populations, care protocols, or screening policies. Jointly estimating prevalence and adjusting probabilistic predictions in such regimes introduces an additional source of uncertainty. Future work could integrate the error properties of prevalence estimation with threshold selection and cost evaluation.

Asymmetric Cost Parameterization. We adopt a general framework for asymmetric cost modeling, but the semantics of varying cost ratios remain under-theorized. At a fixed cost ratio all parameterizations produce the same results, but different parameterizations introduce different scaling factors into the overall costs, as well as changing the meaning of "uniform uncertainty", leading to different properties when averaging. A systematic comparative study of these properties could yield robust and usable guidelines for choosing a parameterization.

By combining decision-theoretic tools, causal framing, and clinically grounded metrics of calibration, this work moves toward evaluation methodologies that are both conceptually principled and actionable in real-world medical settings. Continued advances will require deeper integration of uncertainty quantification, model adaptivity, and domain-informed cost modeling.

# References

[1] N. Adams and D. Hand. Comparing classifiers when the misallocation costs are uncertain. Pattern Recognition, 32(7):1139–1147, 1999. ISSN 0031-3203. doi: https://doi.org/10.1016/S0031-3203(98)00154-X. URL https://www.sciencedirect.com/science/article/pii/S003132039800154X.

[2] A. Angstrom. On the effectivity of weather warnings. Nordisk Statistisk Tidskrift, 1:394–408, 1922.

[3] M. Assel, D. D. Sjoberg, and A. J. Vickers.
The brier score does not evaluate the clinical utility of diagnostic tests or prediction models. Diagnostic and Prognostic Research, 1(1):19, 2017. doi: 10.1186/s41512-017-0020-3. URL https://doi.org/10.1186/s41512-017-0020-3.

[4] M. Ayer, H. D. Brunk, G. M. Ewing, W. T. Reid, and E. Silverman. An empirical distribution function for sampling with incomplete information. The Annals of Mathematical Statistics, 26(4):641–647, 1955. ISSN 00034851. URL http://www.jstor.org/stable/2236377.

[5] K. Azizzadenesheli, A. Liu, F. Yang, and A. Anandkumar. Regularized learning for domain adaptation under label shifts. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJl0r3R9KX.

[6] D. Bamber. The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. Journal of Mathematical Psychology, 12(4):387–415, 1975. ISSN 0022-2496. doi: https://doi.org/10.1016/0022-2496(75)90001-2. URL https://www.sciencedirect.com/science/article/pii/0022249675900012.

[7] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan. A theory of learning from different domains. Machine Learning, 79(1):151–175, 2010. doi: 10.1007/s10994-009-5152-4. URL https://doi.org/10.1007/s10994-009-5152-4.

[8] K. Boyd, K. H. Eng, and C. D. Page. Area under the precision-recall curve: Point estimates and confidence intervals. In H. Blockeel, K. Kersting, S. Nijssen, and F. Železný, editors, Machine Learning and Knowledge Discovery in Databases, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg. ISBN 978-3-642-40994-3.

[9] A. P. Bradley. The use of the area under the roc curve in the evaluation of machine learning algorithms. Pattern Recognition, 30(7):1145–1159, 1997. ISSN 0031-3203.

[10] G. W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78:1–3, 1950. URL https://api.semanticscholar.org/CorpusID:122906757.

[11] A. M. Carrington, D. G. Manuel, P. W. Fieguth, T. Ramsay, V. Osmani, B. Wernly, C. Bennett, S. Hawken, O. Magwood, Y. Sheikh, M.
McInnes, and A. Holzinger. Deep roc analysis and auc as balanced average accuracy, for improved classifier selection, audit and explanation. IEEE Trans Pattern Anal Mach Intell, 45(1):329–341, Jan 2023. ISSN 1939-3539 (Electronic); 0098-5589 (Linking). doi: 10.1109/TPAMI.2022.3145392.

[12] S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '17, pages 797–806, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450348874. doi: 10.1145/3097983.3098095. URL https://doi.org/10.1145/3097983.3098095.

[13] S. Cruz Rivera, X. Liu, A.-W. Chan, A. K. Denniston, M. J. Calvert, H. Ashrafian, A. L. Beam, G. S. Collins, A. Darzi, J. J. Deeks, M. K. ElZarrad, C. Espinoza, A. Esteva, L. Faes, L. Ferrante di Ruffano, J. Fletcher, R. Golub, H. Harvey, C. Haug, C. Holmes, A. Jonas, P. A. Keane, C. J. Kelly, A. Y. Lee, C. S. Lee, E. Manna, J. Matcham, M. McCradden, D. Moher, J. Monteiro, C. Mulrow, L. Oakden-Rayner, D. Paltoo, M. B. Panico, G. Price, S. Rowley, R. Savage, R. Sarkar, S. J. Vollmer, and C. Yau. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the spirit-ai extension. The Lancet Digital Health, 2(10):e549–e560, 2020. doi: 10.1016/S2589-7500(20)30219-3. URL https://doi.org/10.1016/S2589-7500(20)30219-3.

[14] J. Davis and M. Goadrich. The relationship between precision-recall and roc curves. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 233–240, New York, NY, USA, 2006. Association for Computing Machinery. ISBN 1595933832. doi: 10.1145/1143844.1143874. URL https://doi.org/10.1145/1143844.1143874.

[15] B. de Finetti. La prévision : ses lois logiques, ses sources subjectives. Annales de l'institut Henri Poincaré, 7(1):1–68, 1937. URL http://eudml.org/doc/79004.

[16] B. de Finetti.
Foresight: Its Logical Laws, Its Subjective Sources, pages 134–174. Springer New York, New York, NY, 1992. ISBN 978-1-4612-0919-5. doi: 10.1007/978-1-4612-0919-5_10. URL https://doi.org/10.1007/978-1-4612-0919-5_10. [17] T. Dimitriadis, T. Gneiting, A. I. Jordan, and P. Vogel. Evaluating probabilistic classifiers: The triptych. International Journal of Forecasting, 40(3):1101–1122, 2024. ISSN 0169-2070. doi: https://doi.org/10.1016/j.ijforecast.2023.09.007. URL https://www.sciencedirect.com/ science/article/pii/S0169207023000997. [18] P. Domingos. Metacost: a general method for making classifiers cost-sensitive. In Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’99, pages 155–164, New York, NY, USA, 1999. Association for Computing Machinery. ISBN 1581131437. doi: 10.1145/312129.312220. URL https://doi.org/10.1145/312129. 312220. [19] C. Drummond and R. C. Holte. Explicitly representing expected cost: an alternative to roc representation. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’00, pages 198–207, New York, NY, USA, 2000. Association for Computing Machinery. ISBN 1581132336. doi: 10.1145/347090.347126. URL https://doi.org/10.1145/347090.347126. [20] C. Drummond and R. C. Holte. Cost curves: An improved method for visualizing classifier performance. Machine Learning, 65(1):95–130, 2006. doi: 10.1007/s10994-006-8199-5. URL https://doi.org/10.1007/s10994-006-8199-5. [21] J. C. Duchi and H. Namkoong. Learning models with uniform performance via distributionally robust optimization. The Annals of Statistics, 49(3):1378 – 1406, 2021. doi: 10.1214/20-AOS2004. URL https://doi.org/10.1214/20-AOS2004. [22] W. Ehm, T. Gneiting, A. Jordan, and F. Krüger. Of quantiles and expectiles: Consistent scoring functions, choquet representations and forecast rankings. 
Journal of the Royal Statistical Society Series B: Statistical Methodology, 78(3):505–562, 05 2016. ISSN 1369-7412. doi: 10.1111/rssb.12154. URL https://doi.org/10.1111/rssb.12154. [23] C. Elkan. The foundations of cost-sensitive learning. In Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI’01, pages 973–978, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc. ISBN 1558608125. [24] T. Fawcett and A. Niculescu-Mizil. Pav and the roc convex hull. Machine Learning, 68(1):97–106, 2007. doi: 10.1007/s10994-007-5011-0. URL https://doi.org/10.1007/s10994-007-5011-0. [25] P. Flach, J. Hernández-Orallo, and C. Ferri. A coherent interpretation of auc as a measure of aggregated classification performance. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, pages 657–664, Madison, WI, USA, 2011. Omnipress. ISBN 9781450306195. [26] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. March, and V. Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1–35, 2016. URL http://jmlr.org/papers/v17/15-239.html. [27] S. Garg, Y. Wu, S. Balakrishnan, and Z. Lipton. A unified view of label shift estimation. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 3290–3300. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/ 219e052492f4008818b8adb6366c7ed6-Paper.pdf. [28] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007. doi: 10.1198/ 016214506000001437. URL https://doi.org/10.1198/016214506000001437. [29] A. Goldberger, L. Amaral, L. Glass, J. Hausdorff, P. C. Ivanov, R. Mark, J. E. Mietus, G. B. Moody, C. K. Peng, and H. E. Stanley. 
Physiobank, physiotoolkit, and physionet: Components of a new research resource for complex physiologic signals. Circulation, 101(23):e215–e220, 2000. [Online]. [30] I. J. Good. Rational decisions. Journal of the Royal Statistical Society. Series B (Methodological), 14(1):107–114, 1952. ISSN 00359246. URL http://www.jstor.org/stable/2984087. [31] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pages 1321–1330. JMLR.org, 2017. [32] D. Hand and C. Anagnostopoulos. A better beta for the h measure of classification performance. Pattern Recognition Letters, 40:41–46, 2014. ISSN 0167-8655. doi: https://doi.org/ 10.1016/j.patrec.2013.12.011. URL https://www.sciencedirect.com/science/article/pii/ S0167865513004984. [33] D. J. Hand. Measuring classifier performance: a coherent alternative to the area under the roc curve. Machine Learning, 77(1):103–123, 2009. doi: 10.1007/s10994-009-5119-5. URL https://doi.org/10.1007/s10994-009-5119-5. [34] D. J. Hand and C. Anagnostopoulos. Notes on the h-measure of classifier performance. Advances in Data Analysis and Classification, 17(1):109–124, 2023. doi: 10.1007/s11634-021-00490-3. URL https://doi.org/10.1007/s11634-021-00490-3. [35] J. A. Hanley and B. J. McNeil. The meaning and use of the area under a receiver operating characteristic (roc) curve. Radiology, 143(1):29–36, 1982. ISSN 0033-8419. [36] U. Hebert-Johnson, M. Kim, O. Reingold, and G. Rothblum. Multicalibration: Calibration for the (Computationally-identifiable) masses. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1939–1948. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr. press/v80/hebert-johnson18a.html. [37] J. Heckman. Shadow prices, market wages, and labor supply. Econometrica, 42(4):679–694, 1974. 
ISSN 00129682, 14680262. URL http://www.jstor.org/stable/1913937. [38] J. J. Heckman. Sample selection bias as a specification error. Econometrica, 47(1):153–161, 1979. ISSN 00129682, 14680262. URL http://www.jstor.org/stable/1912352. [39] J. Hernández-Orallo, P. Flach, and C. Ferri. Brier curves: a new cost-based visualisation of classifier performance. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, pages 585–592, Madison, WI, USA, 2011. Omnipress. ISBN 9781450306195. [40] J. Hernandez-Orallo, P. Flach, and C. Ferri. Threshold choice methods: the missing link. 12 2011. [41] J. Hernández-Orallo, P. Flach, and C. Ferri. A unified view of performance metrics: translating threshold choice into expected classification loss. J. Mach. Learn. Res., 13(1):2813–2869, 10 2012. [42] J. Hernández-Orallo, P. Flach, and C. Ferri. Roc curves in cost space. Machine Learning, 93(1):71–91, 2013. doi: 10.1007/s10994-013-5328-9. URL https://doi.org/10.1007/ s10994-013-5328-9. [43] J. Huang and C. Ling. Using auc and accuracy in evaluating learning algorithms. IEEE Transactions on Knowledge and Data Engineering, 17:299–310, 2005. doi: 10.1109/TKDE.2005. 50. [44] A. Johnson, T. Pollard, O. Badawi, and J. Raffa. eicu collaborative research database demo (version 2.0.1). PhysioNet, 2021. doi: 10.13026/4mxk-na84. URL https://doi.org/10.13026/ 4mxk-na84. [45] N. Kallus and A. Zhou. The fairness of risk scores beyond classification: bipartite ranking and the xAUC metric. Curran Associates Inc., Red Hook, NY, USA, 2019. [46] E. M. Keen. Measures and averaging methods used in performance testing of indexing systems. Technical report, The College of Aeronautics, Cranfield, England, 1966. URL https://sigir. org/resources/museum/. Available in the SIGIR Museum resources collection. [47] E. M. Keen. Evaluation parameters. Scientific Report ISR-13, Department of Computer Science, Cornell University, Ithaca, New York, 1968. 
Information Storage and Retrieval: Scientific Report No. ISR-13 to the National Science Foundation. [48] J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent trade-offs in the fair determination of risk scores. In C. H. Papadimitriou, editor, 8th Innovations in Theoretical Computer Science Conference (ITCS 2017), volume 67 of Leibniz International Proceedings in Informatics (LIPIcs), pages 43:1–43:23. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2017. ISBN 978-3-95977- 029-3. doi: 10.4230/LIPIcs.ITCS.2017.43. URL http://drops.dagstuhl.de/opus/volltexte/ 2017/8156. [49] M. Kull, T. S. Filho, and P. Flach. Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. In A. Singh and J. Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 623–631. PMLR, 20–22 Apr 2017. URL https://proceedings.mlr.press/v54/kull17a.html. [50] Z. Lipton, Y.-X. Wang, and A. Smola. Detecting and correcting for label shift with black box predictors. 02 2018. doi: 10.48550/arXiv.1802.03916. [51] X. Liu, S. Cruz Rivera, D. Moher, M. J. Calvert, A. K. Denniston, A.-W. Chan, A. Darzi, C. Holmes, C. Yau, H. Ashrafian, J. J. Deeks, L. Ferrante di Ruffano, L. Faes, P. A. Keane, S. J. Vollmer, A. Y. Lee, A. Jonas, A. Esteva, A. L. Beam, M. B. Panico, C. S. Lee, C. Haug, C. J. Kelly, C. Mulrow, C. Espinoza, J. Fletcher, D. Paltoo, E. Manna, G. Price, G. S. Collins, H. Harvey, J. Matcham, J. Monteiro, M. K. ElZarrad, L. Oakden-Rayner, M. McCradden, P. A. Keane, R. Savage, R. Golub, R. Sarkar, S. Rowley, T. SPIRIT-AI, C.-A. W. Group, SPIRIT-AI, C.-A. S. Group, SPIRIT-AI, and C.-A. C. Group. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the consort-ai extension. Nature Medicine, 26(9):1364–1374, 2020. doi: 10.1038/s41591-020-1034-x. 
URL https://doi.org/10.1038/ s41591-020-1034-x. [52] M. Long, Y. Cao, J. Wang, and M. Jordan. Learning transferable features with deep adaptation networks. In F. Bach and D. Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 97–105, Lille, France, 07–09 Jul 2015. PMLR. URL https://proceedings.mlr.press/v37/long15.html. [53] J. McCarthy. Measures of the value of information. Proceedings of the National Academy of Sciences, 42(9):654–655, 1956. doi: 10.1073/pnas.42.9.654. URL https://www.pnas.org/doi/ abs/10.1073/pnas.42.9.654. [54] D. K. McClish. Analyzing a portion of the roc curve. Med Decis Making, 9(3):190–195, 1989. ISSN 0272-989X (Print); 0272-989X (Linking). doi: 10.1177/0272989X8900900307. [55] D. K. McClish. Evaluation of the accuracy of medical tests in a region around the optimal point. Academic Radiology, 19(12):1484–1490, 2025/05/05 2012. doi: 10.1016/j.acra.2012.09.004. URL https://doi.org/10.1016/j.acra.2012.09.004. [56] M. B. McDermott, H. Zhang, L. H. Hansen, G. Angelotti, and J. Gallifant. A closer look at AUROC and AUPRC under class imbalance. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=S3HvA808gk. [57] P. E. Meehl and A. Rosen. Antecedent probability and the efficiency of psychometric signs, patterns, or cutting scores. Psychological Bulletin, 52(3):194–216, 1955. doi: 10.1037/h0048070. [58] C. E. Metz. Basic principles of roc analysis. Semin Nucl Med, 8(4):283–98, Oct 1978. doi: 10.1016/s0001-2998(78)80014-2. [59] C. E. Metz. Roc methodology in radiologic imaging. Invest Radiol, 21(9):720–733, Sep 1986. ISSN 0020-9996 (Print); 0020-9996 (Linking). doi: 10.1097/00004424-198609000-00009. [60] C. E. Metz. Some practical issues of experimental design and data analysis in radiological roc studies. Invest Radiol, 24(3):234–245, Mar 1989. 
ISSN 0020-9996 (Print); 0020-9996 (Linking). doi: 10.1097/00004424-198903000-00012. [61] J. G. Moreno-Torres, T. Raeder, R. Alaiz-Rodríguez, N. V. Chawla, and F. Herrera. A unifying view on dataset shift in classification. Pattern Recognition, 45(1):521–530, 2012. ISSN 0031-3203. doi: https://doi.org/10.1016/j.patcog.2011.06.019. URL https://www.sciencedirect.com/ science/article/pii/S0031320311002901. [62] K. Muandet, D. Balduzzi, and B. Schölkopf. Domain generalization via invariant feature representation. In Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28, ICML’13, pages I–10–I–18. JMLR.org, 2013. [63] A. H. Murphy. A note on the utility of probabilistic predictions and the probability score in the cost-loss ratio decision situation. Journal of Applied Meteorology and Climatology, 5(4):534 – 537, 1966. doi: 10.1175/1520-0450(1966)005 $<$ 0534:ANOTUO $>$ 2.0.CO;2. URL https://journals. ametsoc.org/view/journals/apme/5/4/1520-0450_1966_005_0534_anotuo_2_0_co_2.xml. [64] A. H. Murphy. A new vector partition of the probability score. Journal of Applied Meteorology (1962-1982), 12(4):595–600, 1973. ISSN 00218952, 2163534X. URL http://www.jstor.org/ stable/26176769. [65] A. H. Murphy. The value of climatological, categorical and probabilistic forecasts in the cost-loss ratio situation. Monthly Weather Review, 105(7):803 – 816, 1977. doi: 10.1175/1520-0493(1977) 105 $<$ 0803:TVOCCA>2.0.CO;2. URL https://journals.ametsoc.org/view/journals/mwre/ 105/7/1520-0493_1977_105_0803_tvocca_2_0_co_2.xml. [66] A. H. Murphy and R. L. Winkler. A general framework for forecast verification. Monthly Weather Review, 115(7):1330 – 1338, 1987. doi: 10.1175/1520-0493(1987)115 $<$ 1330:AGFFFV $>$ 2.0. CO;2. URL https://journals.ametsoc.org/view/journals/mwre/115/7/1520-0493_1987_ 115_1330_agfffv_2_0_co_2.xml. [67] M. Pakdaman Naeini, G. Cooper, and M. Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. 
Proceedings of the AAAI Conference on Artificial Intelligence, 29 (1), Feb. 2015. doi: 10.1609/aaai.v29i1.9602. URL https://ojs.aaai.org/index.php/AAAI/ article/view/9602. [68] S. G. Pauker and J. P. Kassirer. Therapeutic decision making: A cost-benefit analysis. New England Journal of Medicine, 293(5):229–234, 1975. doi: 10.1056/NEJM197507312930505. URL https://www.nejm.org/doi/full/10.1056/NEJM197507312930505. [69] J. Pearl. Causality : models, reasoning, and inference. Cambridge University Press, Cambridge [u.a.], repr. with corrections edition, 2001. ISBN 0521773628. [70] . Peterson, W. Wesley and T. G. Birdsall. The theory of signal detectability. Michigan. University. Department of Electrical Engineering. Electronic Defense Group. Technical report; no. 13. Engineering Research Institute, Ann Arbor, 1953. [71] J. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. 1999. URL https://api.semanticscholar.org/CorpusID:56563878. [72] T. Pollard, A. Johnson, J. Raffa, L. A. Celi, O. Badawi, and R. Mark. eicu collaborative research database (version 2.0). PhysioNet, 2019. doi: 10.13026/C2WM1R. URL https: //doi.org/10.13026/C2WM1R. [73] T. J. Pollard, A. E. W. Johnson, J. D. Raffa, L. A. Celi, R. G. Mark, and O. Badawi. The eicu collaborative research database, a freely available multi-center database for critical care research. Scientific Data, 2018. doi: 10.1038/sdata.2018.178. URL http://dx.doi.org/10. 1038/sdata.2018.178. [74] F. J. Provost and T. Fawcett. Analysis and visualization of classifier performance: Comparison under imprecise class and cost distributions. In Knowledge Discovery and Data Mining, 1997. URL https://api.semanticscholar.org/CorpusID:157595. [75] F. P. Ramsey. Truth and probability. In R. B. Braithwaite, editor, The Foundations of Mathematics and other Logical Essays, chapter 7, pages 156–198. McMaster University Archive for the History of Economic Thought, 1926. 
URL https://EconPapers.repec.org/RePEc: hay:hetcha:ramsey1926. [76] M. Saerens, P. Latinne, and C. Decaestecker. Adjusting the outputs of a classifier to new a priori probabilities: a simple procedure. Neural Comput, 14(1):21–41, Jan 2002. ISSN 0899-7667 (Print); 0899-7667 (Linking). doi: 10.1162/089976602753284446. [77] S. Sagawa $^ *$ , P. W. Koh $^ *$ , T. B. Hashimoto, and P. Liang. Distributionally robust neural networks. In International Conference on Learning Representations, 2020. URL https://openreview. net/forum?id=ryxGuJrFvS. [78] L. J. Savage. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association, 66(336):783–801, 1971. ISSN 01621459, 1537274X. URL http://www. jstor.org/stable/2284229. [79] M. J. Schervish. A general method for comparing probability assessors. The Annals of Statistics, 17(4):1856–1879, 1989. ISSN 00905364, 21688966. URL http://www.jstor.org/stable/ 2241668. [80] Y. Shen. Loss functions for binary classification and class probability estimation. PhD thesis, 2005. URL https://www.proquest.com/dissertations-theses/ loss-functions-binary-classification-class/docview/305411117/se-2. Copyright Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works; Last updated - 2023-03-03. [81] E. H. Shuford, A. Albert, and H. Edward Massengill. Admissible probability measurement procedures. Psychometrika, 31(2):125–145, 1966. doi: 10.1007/BF02289503. URL https: //doi.org/10.1007/BF02289503. [82] K. A. Spackman. Signal detection theory: valuable tools for evaluating inductive learning. In Proceedings of the Sixth International Workshop on Machine Learning, pages 160–163, San Francisco, CA, USA, 1989. Morgan Kaufmann Publishers Inc. ISBN 1558600361. [83] E. W. Steyerberg and A. J. Vickers. Decision curve analysis: a discussion. Med Decis Making, 28(1):146–149, 2008. ISSN 0272-989X (Print); 0272-989X (Linking). doi: 10.1177/ 0272989X07312725. [84] A. 
Subbaswamy, P. Schulam, and S. Saria. Preventing failures due to dataset shift: Learning predictive models that transport. 2018. URL http://arxiv.org/abs/1812.04597. cite arxiv:1812.04597Comment: In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019. Previously presented at the NeurIPS 2018 Causal Learning Workshop. [85] M. Sugiyama, N. Rubens, and K.-R. Müller. A conditional expectation approach to model selection and active learning under covariate shift. In Dataset Shift in Machine Learning. The MIT Press, 12 2008. ISBN 9780262255103. doi: 10.7551/mitpress/7921.003.0012. URL https://doi.org/10.7551/mitpress/7921.003.0012. [86] J. Swets and T. Birdsall. The human use of information–iii: Decision-making in signal detection and recognition situations involving multiple alternatives. IRE Transactions on Information Theory, 2(3):138–165, 1956. doi: 10.1109/TIT.1956.1056799. [87] W. P. Tanner, J. A. Swets, and H. W. Welch. A new theory of visual detection. Technical Report UMR3825, University of Michigan, 1953. URL https://hdl.handle.net/2027.42/ 7893. Engineering Technical Report. [88] J. M. C. Thompson and G. W. Brier. The economic utility of weather forecasts. Monthly Weather Review, 83:249–253, 1955. URL https://api.semanticscholar.org/CorpusID:122117332. [89] D. G. Turakhia. Thirteen ways of looking: a theoretical inquiry in computational creative thinking. Master’s thesis, Massachusetts Institute of Technology, Cambridge, MA, 2017. URL http://hdl.handle.net/1721.1/113918. S.M. Thesis, Department of Architecture and Department of Electrical Engineering and Computer Science. [90] J. Vaicenavicius, D. Widmann, C. Andersson, F. Lindsten, J. Roll, and T. Schön. Evaluating model calibration in classification. In K. Chaudhuri and M. 
Sugiyama, editors, Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pages 3459–3467. PMLR, 16–18 Apr 2019. URL https://proceedings.mlr.press/v89/vaicenavicius19a.html. [91] A. J. Vickers and E. B. Elkin. Decision curve analysis: A novel method for evaluating prediction models. Medical Decision Making, 26(6):565–574, 2006. doi: 10.1177/0272989X06295361. URL https://doi.org/10.1177/0272989X06295361. PMID: 17099194. [92] A. J. Vickers and F. Holland. Decision curve analysis to evaluate the clinical benefit of prediction models. The Spine Journal, 21(10):1643–1648, 2021. ISSN 1529-9430. doi: https://doi.org/ 10.1016/j.spinee.2021.02.024. URL https://www.sciencedirect.com/science/article/pii/ S1529943021001121. [93] A. J. Vickers and S. Woo. Decision curve analysis in the evaluation of radiology research. European Radiology, 32(9):5787–5789, 2022. doi: 10.1007/s00330-022-08685-8. URL https: //doi.org/10.1007/s00330-022-08685-8. [94] A. J. Vickers, B. van Calster, and E. W. Steyerberg. A simple, step-by-step guide to interpreting decision curve analysis. Diagnostic and Prognostic Research, 3(1):18, 2019. doi: 10.1186/ s41512-019-0064-7. URL https://doi.org/10.1186/s41512-019-0064-7. [95] D. Widmann, F. Lindsten, and D. Zachariah. Calibration tests in multi-class classification: A unifying framework. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper_files/paper/2019/ file/1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf. [96] X.-H. Zhou, N. Obuchowski, and D. McClish. Statistical Methods in Diagnostic Medicine, Second Edition. 01 2002. ISBN 9780470183144. doi: 10.1002/9780470906514. [97] K. Zhu, Y. Zheng, and K. C. G. Chan. 
# A Calibration

The weather forecasting literature focuses on what are known as strictly proper scoring rules: metrics with the property that a forecaster is correctly incentivized to report their actual beliefs about the probability of the event. At first glance this seems a bit distant from binary classifier evaluation. After all, action is generally binary; we really want the weather report to tell us whether to take an umbrella, not to give us 3 decimal places of precision on the long-run frequency with which it would rain.

# A.1 Asymmetric Cost

However, a calibrated, thresholded binary classifier has an immensely useful property: we know how to change the threshold to trade off false positives for false negatives if the class balance or the cost ratio changes. The optimality condition for choosing a threshold requires that the first derivative of the expected value be zero. This is equivalent to saying that the expected utility of assigning points exactly at the threshold to either class should be the same:

$$
\begin{array}{rl}
& \underset{x, y : s(x) = \tau}{\mathbb{E}} V(y, 0) = \underset{x, y : s(x) = \tau}{\mathbb{E}} V(y, 1) \\
\implies & P(y = 1 \mid s(x) = \tau) = \dfrac{V(0,0) - V(0,1)}{(V(0,0) - V(0,1)) + (V(1,1) - V(1,0))}
\end{array}
$$

As a result, we generally call the quantity on the right $c$ and use it to describe the asymmetry of the error costs. Solving this for $\tau$ requires us to at least implicitly estimate $\tilde{s}(\tau) = P(y = 1 \mid s(x) = \tau)$. If we add the constraint of monotonicity to $\tilde{s}(\tau)$, then this problem is known as isotonic regression, and there are well-known algorithms for solving it.
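The standard algorithm for isotonic regression is Pool Adjacent Violators (PAV). The following is a minimal pure-Python sketch for illustration, not the paper's implementation; the function name `isotonic_calibrate` is ours:

```python
def isotonic_calibrate(scores, labels):
    """Pool Adjacent Violators: fit a monotone estimate of
    P(y = 1 | s(x) = tau) and return it for each input point."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    blocks = []  # each block is [sum_of_labels, count]
    for i in order:
        blocks.append([labels[i], 1])
        # merge while a block's mean exceeds (or ties) its successor's,
        # comparing means via cross-multiplication to avoid division
        while (len(blocks) > 1
               and blocks[-2][0] * blocks[-1][1] >= blocks[-1][0] * blocks[-2][1]):
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    calibrated = [0.0] * len(scores)
    k = 0
    for s, n in blocks:
        for _ in range(n):
            calibrated[order[k]] = s / n  # block mean = calibrated probability
            k += 1
    return calibrated

# a score ordering with one inversion: the two middle points get pooled
print(isotonic_calibrate([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))
# -> [0.0, 0.5, 0.5, 1.0]
```

The pooled output is monotone in the score, which is exactly the constraint described above.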
Assuming the existence of a good estimator $\tilde{s}(\tau)$ for this quantity, $\tilde{s}(s(x))$ is of course a calibrated estimator for $P(y = 1 \mid x)$. Using the same classifier at varying cost asymmetries requires that the classifier be, at minimum, implicitly calibrated; isotonic regression is in fact how such classifiers are calibrated. It is, of course, possible to develop an estimator calibrated only at $s(x) = c$; for any point higher or lower, ordinal comparison alone is enough to make a decision. A classifier optimized in this fashion may be wildly unreliable at other values of $s(x)$, and our calibration may simply give us two scores: higher than $c$ and lower than $c$. If so, the condition of calibration is almost trivially satisfied: it only requires a binary predicted label and statistics for how often the classifier is correct in either case (PPV and NPV).

# A.2 Label Shift

In the weather forecasting literature, it is explicitly understood that, under $\mathcal{D}_{\pi}$, $X \to Y$: rather than today's atmospheric conditions being emanations of tomorrow's decision of whether to rain or not, the evolution on physical principles of today's conditions leads to tomorrow's weather. As such, the idea of label shift is incoherent there. The study of changes in classifier performance when $P(y)$ changes in this setting is known as covariate shift; this is out of scope for this paper. The machine learning evaluation literature does acknowledge links between label shift and calibration. However, the setting is more abstract, with CDFs taken as given, and varying thresholds interpreted as a response to varying class balances without a clear enumeration of assumptions.

# A.2.1 Notation

Lemma A.1 (Importance Sampling as $\ell_1$ distance). Consider the standard importance sampling weights to move from the training $(\pi_0)$ to the deployment $(\pi)$ distribution.
Label shift always holds when reweighting data by class, because after we stratify by class, we do not change the distribution within the class.

$$
\begin{array}{rl}
W(\pi_0 \to \pi; y) & \triangleq \dfrac{P(x, y \mid \mathcal{D}_{\pi})}{P(x, y \mid \mathcal{D}_{\pi_0})} \\
& = \dfrac{P(x \mid y, \mathcal{D}_{\pi})\, P(y \mid \mathcal{D}_{\pi})}{P(x \mid y, \mathcal{D}_{\pi_0})\, P(y \mid \mathcal{D}_{\pi_0})} \\
& = \dfrac{P(y \mid \mathcal{D}_{\pi})}{P(y \mid \mathcal{D}_{\pi_0})} \\
& = \left(\dfrac{\pi}{\pi_0}\right)^{y} \left(\dfrac{1 - \pi}{1 - \pi_0}\right)^{1 - y}
\end{array}
$$

where the third line uses the label-shift invariance $P(x \mid y, \mathcal{D}_{\pi}) = P(x \mid y, \mathcal{D}_{\pi_0})$.

Definition A.2 (Odds Multiplication).

$$ a \otimes b \triangleq \frac{ab}{ab + (1-a)(1-b)} $$

Proposition A.3 (Inverse).

$$ a \otimes b = c \iff b = (1-a) \otimes c $$

Proposition A.4 (Jacobian).

$$ \frac{da}{a(1-a)} = \frac{d(a \otimes b)}{(a \otimes b)(1 - (a \otimes b))} $$

Proposition A.5 (One minus distributes over odds multiplication).

$$ 1 - (a \otimes b) = (1-a) \otimes (1-b) $$

Proof.

$$
\begin{array}{rl}
\dfrac{ab}{ab + (1-a)(1-b)} + \dfrac{(1-a)(1-b)}{ab + (1-a)(1-b)} & = 1 \\
a \otimes b + (1-a) \otimes (1-b) & = 1 \\
(1-a) \otimes (1-b) & = 1 - (a \otimes b)
\end{array}
$$

Proposition A.6 (Logit Odds Multiplication is Additive).

$$ \sigma^{-1}(a \otimes b) = \sigma^{-1}(a) + \sigma^{-1}(b) $$

Proof.

$$ \log \frac{\frac{ab}{ab + (1-a)(1-b)}}{\frac{(1-a)(1-b)}{ab + (1-a)(1-b)}} = \log \frac{a}{1-a} + \log \frac{b}{1-b} $$

Proposition A.7 (Log Odds Interval Invariance).

$$ \sigma^{-1}((1-c) \otimes b) - \sigma^{-1}((1-c) \otimes a) = \sigma^{-1}(b) - \sigma^{-1}(a) $$

Proof.
$$
\begin{array}{rl}
& \sigma^{-1}((1-c) \otimes b) - \sigma^{-1}((1-c) \otimes a) \\
& \quad = [\sigma^{-1}(1-c) + \sigma^{-1}(b)] - [\sigma^{-1}(1-c) + \sigma^{-1}(a)] \\
& \quad = [\sigma^{-1}(1-c) - \sigma^{-1}(1-c)] + [\sigma^{-1}(b) - \sigma^{-1}(a)] \\
& \quad = \sigma^{-1}(b) - \sigma^{-1}(a)
\end{array}
$$

# A.3 Prior-Adjustment

With this notation, working with conditional probabilities is straightforward:

$$
\begin{array}{rl}
P(y = 1 \mid s(x), \mathcal{D}_{\pi}) & = \dfrac{P(s(x) \mid y=1, \mathcal{D}_{\pi})\, P(y=1 \mid \mathcal{D}_{\pi})}{P(s(x) \mid y=1, \mathcal{D}_{\pi})\, P(y=1 \mid \mathcal{D}_{\pi}) + P(s(x) \mid y=0, \mathcal{D}_{\pi})\, P(y=0 \mid \mathcal{D}_{\pi})} \\
& = \dfrac{P(s(x) \mid y=1, \mathcal{D}_{\pi})}{P(s(x) \mid y=1, \mathcal{D}_{\pi}) + P(s(x) \mid y=0, \mathcal{D}_{\pi})} \otimes P(y=1 \mid \mathcal{D}_{\pi}) \\
& = \underbrace{\dfrac{P(s(x) \mid y=1)}{P(s(x) \mid y=1) + P(s(x) \mid y=0)}}_{\text{invariant to } \mathcal{D}_{\pi} \text{ under label shift}} \otimes\; P(y=1 \mid \mathcal{D}_{\pi})
\end{array}
$$

Isolating the invariant term with Proposition A.3 and equating it across the training and deployment distributions:

$$
\begin{array}{rl}
P(y=1 \mid s(x), \mathcal{D}_{\pi}) \otimes P(y=0 \mid \mathcal{D}_{\pi}) & = \dfrac{P(s(x) \mid y=1)}{P(s(x) \mid y=1) + P(s(x) \mid y=0)} \\
& = P(y=1 \mid s(x), \mathcal{D}_{\pi_0}) \otimes P(y=0 \mid \mathcal{D}_{\pi_0}) \\
P(y=1 \mid s(x), \mathcal{D}_{\pi}) & = P(y=1 \mid s(x), \mathcal{D}_{\pi_0}) \otimes P(y=0 \mid \mathcal{D}_{\pi_0}) \otimes P(y=1 \mid \mathcal{D}_{\pi}) \\
& = P(y=1 \mid s(x), \mathcal{D}_{\pi_0}) \otimes (1 - \pi_0) \otimes \pi
\end{array}
$$

Here, the propagation of errors is straightforward: the log odds error will be the same size under both distributions, although of course the errors in probability space may be larger or smaller. Thus we can specify the best choice of prior-adjusted score:

# A.4 Prior-Adjusted, Cost-Weighted Threshold

Combining these two, we find that the best choice of decision threshold is

$$ \pi \otimes (1 - \pi_0) \otimes s(x) \geq c $$

We will thus often refer to the induced optimal classifier $\kappa(\pi \otimes s_{1/2}(x), c)$.

# B Derivation of Set-based Metrics

We start with the most popular family of evaluation methods, which are based on accuracy but include cost-sensitive generalizations. These only require a binary classifier, which we define as a function $\kappa(x) : \mathcal{X} \to \{0, 1\}$. They differ along two axes: the way they factor in the cost of errors, and the way they factor in the class balance of the dataset.

Table 1: Taxonomy of set-based evaluation metrics. Each row represents a different approach to handling error costs, and each column represents a different approach to handling class balance.
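The $\otimes$ operator and the prior-adjustment identity are easy to sanity-check numerically. A minimal sketch for illustration; the helper names `omul` and `logit` are ours:

```python
import math

def omul(a, b):
    # odds multiplication: multiply the odds, map back to a probability
    return a * b / (a * b + (1 - a) * (1 - b))

def logit(p):
    return math.log(p / (1 - p))

a, b = 0.3, 0.8
# logit is additive over odds multiplication
assert abs(logit(omul(a, b)) - (logit(a) + logit(b))) < 1e-12
# one minus distributes: 1 - (a ⊗ b) = (1 - a) ⊗ (1 - b)
assert abs((1 - omul(a, b)) - omul(1 - a, 1 - b)) < 1e-12
# inverse: if a ⊗ b = c then b = (1 - a) ⊗ c
assert abs(omul(1 - a, omul(a, b)) - b) < 1e-12
# 1/2 is the identity element, so a score trained at pi0 = 1/2 needs no adjustment
assert abs(omul(0.37, 0.5) - 0.37) < 1e-12

# prior adjustment: move a score calibrated at training balance pi0
# to deployment balance pi via s ⊗ (1 - pi0) ⊗ pi
pi0, pi, s = 0.5, 0.1, 0.7
s_adj = omul(omul(s, 1 - pi0), pi)
print(round(s_adj, 4))  # -> 0.2059
```

Because $\otimes$ is logit-additive, it is associative and commutative, so the order of the adjustment factors does not matter.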
Note that when balanced, the second and third rows are equivalent. # B.1 Accuracy Definition B.1 (Accuracy). The accuracy of a thresholded binary classifier $\kappa ( x , \tau )$ is given by: $$ \mathrm { A c c u r a c y } ( \mathcal { D } _ { \pi _ { 0 } } , s , \tau ) = \sum _ { x , y \in \mathcal { D } _ { \pi _ { 0 } } } V _ { 1 / 2 } ( y , \kappa ( s ( x ) , \tau ) ) $$ Table 2: Value function for Accuracy This is impractically neutral with regard to cost in that $V ( y , \widehat { y } \ : = \ : 1 \ : - \ : y )$ is not a function of y, which corresponds to the contingency table in Table 2. It is practbical but neither neutral nor flexible with regard to distribution shift in the sense that it implicitly assumes: $H ( \mathbf { \mathcal { D } } _ { \pi } ) = \delta ( \mathbf { \mathcal { D } } _ { \pi } = \mathbf { \mathcal { D } } _ { \pi _ { 0 } } )$ . The simplest way to make this more neutral is to evaluate on a balanced dataset, which we denote as $\mathcal { D } _ { 1 / 2 }$ . Mechanically, we can draw from this dataset using importance sampling, if we assume $\mathcal { D } _ { \pi } Y X$ and therefore $P ( X | Y , \pmb { \mathcal { D } } _ { \pi } ) = P ( X | Y )$ . Definition B.2 (Balanced Accuracy). The balanced accuracy of a thresholded binary classifier $\kappa ( x , \tau )$ is given by: Table 3: Value function for Balanced Accuracy If more flexibility is deired, at the expense of neutrality, it is necessary to evaluate at an arbitrary class balance. Moreover, evaluating how well a classifier performs at one specific threshold is less useful than understanding how the best threshold performs at a specific class balance. Definition B.3 (Prior-Adjusted Maximum Accuracy). 
The prior-adjusted maximum accuracy given a scoring function $s$ and a threshold $\tau$ with a class balance $\pi$ is given by: $$ \begin{aligned} \mathrm{PAMA}(\mathcal{D}_{\pi}, s, \tau) &= \sum_{x, y \in \mathcal{D}_{\pi}} V_{1/2}(y, \kappa(\pi \otimes s_{1/2}(x), \tau)) \\ &= \sum_{x, y \in \mathcal{D}_{\pi_0}} W(\pi_0 \to \pi;\, y)\, V_{1/2}(y, \kappa(\pi \otimes s_{1/2}(x), \tau)) \\ &= \sum_{x, y \in \mathcal{D}_{\pi_0}} V(y, \kappa(\pi \otimes s_{1/2}(x), \tau)) \end{aligned} $$ Table 4: Value function for Shifted Accuracy # B.2 Weighted Accuracy This problem is further complicated by the need to realistically confront asymmetric costs. Consider the syphilis testing case: unnecessary treatment is 10 to 100 times less costly than a missed detection. We will use $1/30$ as a representative value for exposition, as the exact mechanics of syphilis testing are not central to this work. First, we consider the balanced case, which is more mathematically tractable. Definition B.4 (Balanced Weighted Accuracy).
The balanced weighted accuracy of a score function $s$ with a threshold $\tau$ is given by: $$ \begin{aligned} \mathrm{BWA}(\mathcal{D}_{\pi_0}, s, \tau, c) &= \sum_{x, y \in \mathcal{D}_{1/2}} (1-c)^{y} c^{1-y}\, V_{1/2}(y, \kappa(s(x), c)) \\ &= \sum_{x, y \in \mathcal{D}_{\pi_0}} W(\pi_0 \to 1-c;\, y)\, V_{1/2}(y, \kappa(s(x), c)) \\ &= \sum_{x, y \in \mathcal{D}_{\pi_0}} V(y, \kappa(s(x), c)) \end{aligned} $$

| | $y = 1$ | $y = 0$ |
|---|---|---|
| $\widehat{y} = 1$ (Treat) | $\frac{1-c}{\pi_0}$ | $0$ |
| $\widehat{y} = 0$ | $0$ | $\frac{c}{1-\pi_0}$ |

Table 5: Value function for Balanced Weighted Accuracy

The minimum possible value of this expression is clearly 0, attained when $V(y, \widehat{y}) = 0$ for every $y$. The maximum is also clear: $$ \sum_{x, y \in \mathcal{D}_{\pi_0}} W(\pi_0 \to 1/2;\, y)\, W(1/2 \to 1-c;\, y)\, \mathbf{1}_{y = \widehat{y}} = (1-c) + c = 1 $$ The obvious combination of the two weighting terms is not correct, however.
$$ \sum_{x, y \in \mathcal{D}_{\pi_0}} W(\pi_0 \to 1/2;\, y)\, W(1/2 \to 1-c;\, y)\, W(1/2 \to \pi;\, y)\, \mathbf{1}_{y = \widehat{y}} = 2\pi(1-c) + 2(1-\pi)c \neq 1 $$ The most intuitive approach involves rescaling the value of the true and false positives to be in the 1:30 ratio and then normalizing such that the maximum possible value remains 1 regardless of class balance. This procedure of normalizing the metric so that 0 is the worst possible value and 1 the best is generally known in the forecast evaluation literature as a skill score, but in the medical decision-making literature this particular metric is generally called Weighted Accuracy. Definition B.5 (Weighted Accuracy). The weighted accuracy of a thresholded binary classifier $\kappa(x, \tau)$ is given by: $$ \begin{aligned} \mathrm{WA}(\mathcal{D}_{\pi_0}, s, \tau) &= \frac{\sum_{x, y \in \mathcal{D}_{\pi_0}} (1-c)^{y} c^{1-y}\, V_{1/2}(y, \kappa(s(x), \tau))}{\sum_{x, y \in \mathcal{D}_{\pi_0}} (1-c)^{y} c^{1-y}\, V_{1/2}(y, y)} \\ &= \sum_{x, y \in \mathcal{D}_{\pi_0}} V(y, \kappa(s(x), \tau)) \end{aligned} $$ Table 6: Value function for Weighted Accuracy # B.3 Net Benefit However, this makes comparisons across different class balances less meaningful, since as the class balance varies, the normalizing factor changes. As a result, the effective value of a true positive changes. One common approach from the Decision Curve Analysis literature is instead to normalize the true positive to 1 and then rescale the false positive to keep the right ratio.
The baseline in the DCA literature is to always predict the negative class, whereas the weighted accuracy literature uses a baseline of always predicting the wrong class. Since these are equivalent up to constants, we will modify the parameterization of Net Benefit to make it more directly comparable. Definition B.6 (Net Benefit). The net benefit of a scoring function $s$ with a threshold $\tau$ is given by: $$ \begin{aligned} \mathrm{NB}(\mathcal{D}_{\pi_0}, s, \tau, c) &= \sum_{x, y \in \mathcal{D}_{\pi_0}} \left( \frac{c}{1-c} \right)^{1-y} V_{1/2}(y, \kappa(s(x), \tau)) \\ &= \sum_{x, y \in \mathcal{D}_{\pi_0}} V(y, \kappa(s(x), \tau)) \end{aligned} $$ Table 7: Value function for Net Benefit The disadvantage of this approach is that it is unintuitive that the net benefit of a perfect classifier is not reliably 1, and instead depends on the class balance. The advantage is that when comparing at different class balances, the value of a true positive and a true negative remain fixed, so measurements are directly comparable on the same scale. # B.4 Prior-Adjusted Maximum Cost-Weighted Metrics We can combine the prior-adjusted maximum value approach with the cost-weighted metrics to get two new metrics that make sense to compare across different class balances. Definition B.7 (Prior-Adjusted Maximum Weighted Accuracy).
The prior-adjusted maximum weighted accuracy given a scoring function $s$ and a threshold $\tau$ with a class balance $\pi$ is given by: $$ \begin{aligned} \mathrm{PAMWA}(\mathcal{D}_{\pi}, s, \tau, c) &= \frac{\sum_{x, y \in \mathcal{D}_{\pi}} (1-c)^{y} c^{1-y}\, V_{1/2}(y, \kappa(\pi \otimes s_{1/2}(x), \tau))}{\sum_{x, y \in \mathcal{D}_{\pi}} (1-c)^{y} c^{1-y}\, V_{1/2}(y, y)} \\ &= \frac{\sum_{x, y \in \mathcal{D}_{\pi_0}} \left[\frac{(1-c)\pi}{\pi_0}\right]^{y} \left[\frac{c(1-\pi)}{1-\pi_0}\right]^{1-y} V_{1/2}(y, \kappa(\pi \otimes s_{1/2}(x), \tau))}{\sum_{x, y \in \mathcal{D}_{\pi_0}} \left[\frac{(1-c)\pi}{\pi_0}\right]^{y} \left[\frac{c(1-\pi)}{1-\pi_0}\right]^{1-y} V_{1/2}(y, y)} \\ &= \sum_{x, y \in \mathcal{D}_{\pi_0}} V(y, \kappa(\pi \otimes s_{1/2}(x), \tau)) \end{aligned} $$ Table 8: Value function for Prior-Adjusted Maximum Weighted Accuracy Proposition B.8 (PAMA Equivalence). $$ \mathrm{PAMWA}(\mathcal{D}_{\pi}, s, \tau, c) = \mathrm{PAMA}(\mathcal{D}_{(1-c) \otimes \pi}, s, \tau) $$ Proof. Writing $Z = \pi(1-c) + (1-\pi)c$, note that $W(\pi \to (1-c) \otimes \pi;\, y) = (1-c)^{y} c^{1-y} / Z$, since $(1-c) \otimes \pi$ places mass $\pi(1-c)/Z$ on the positive class and $(1-\pi)c/Z$ on the negative class. Then: $$ \begin{aligned} \mathrm{PAMWA}(\mathcal{D}_{\pi}, s, \tau, c) &= \frac{\sum_{x, y \in \mathcal{D}_{\pi}} (1-c)^{y} c^{1-y}\, V_{1/2}(y, \widehat{y})}{\sum_{x, y \in \mathcal{D}_{\pi}} (1-c)^{y} c^{1-y}\, V_{1/2}(y, y)} = \frac{\sum_{x, y \in \mathcal{D}_{\pi}} W(\pi \to (1-c) \otimes \pi;\, y)\, V_{1/2}(y, \widehat{y})}{\sum_{x, y \in \mathcal{D}_{\pi}} W(\pi \to (1-c) \otimes \pi;\, y)\, V_{1/2}(y, y)} \\ &= \frac{\sum_{x, y \in \mathcal{D}_{(1-c) \otimes \pi}} V_{1/2}(y, \widehat{y})}{\sum_{x, y \in \mathcal{D}_{(1-c) \otimes \pi}} V_{1/2}(y, y)} = \mathrm{PAMA}(\mathcal{D}_{(1-c) \otimes \pi}, s, \tau) \end{aligned} $$ Definition B.9 (Prior-Adjusted Maximum Net Benefit). The prior-adjusted maximum net benefit of a scoring function $s$ with a threshold $\tau$ with a class balance $\pi$ is given by: $$ \begin{aligned} \mathrm{PAMNB}(\mathcal{D}_{\pi}, s, \tau, c) &= \sum_{x, y \in \mathcal{D}_{\pi_0}} \left( \frac{\pi}{\pi_0} \right)^{y} \left( \frac{c}{1-c} \cdot \frac{1-\pi}{1-\pi_0} \right)^{1-y} V_{1/2}(y, \widehat{y}) \\ &= \sum_{x, y \in \mathcal{D}_{\pi_0}} V(y, \kappa(\pi \otimes s_{1/2}(x), \tau)) \end{aligned} $$ Table 9: Value function for Prior-Adjusted Maximum Net Benefit We focus on the second because although the semantics of a single value are more confusing (since the perfect classifier is not normalized to 1), the values at different class balances are commensurable.
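Proposition B.8 also lends itself to a quick numerical check. The sketch below (function names are ours, not the paper's) implements a cost-weighted accuracy at an arbitrary class balance, normalized so a perfect classifier scores 1; setting $c = 1/2$ makes the cost weights cancel, recovering plain prior-shifted accuracy, so the identity reduces to a single assertion:

```python
def W(a, b, y):
    """Importance weight W(a -> b; y): reweight data with positive
    rate a so it behaves like data with positive rate b."""
    return b / a if y == 1 else (1 - b) / (1 - a)

def odds_product(a, b):
    """a (x) b: multiply odds, map back to a probability."""
    return a * b / (a * b + (1 - a) * (1 - b))

def norm_weighted_acc(ys, scores, pi, c, tau):
    """Cost-weighted accuracy at class balance pi, normalized so a
    perfect classifier scores exactly 1."""
    pi0 = sum(ys) / len(ys)  # empirical positive rate of the sample
    def wt(y):
        return W(pi0, pi, y) * ((1 - c) if y == 1 else c)
    num = sum(wt(y) for y, s in zip(ys, scores) if y == int(s >= tau))
    den = sum(wt(y) for y in ys)
    return num / den

# PAMWA at balance pi with cost c equals PAMA (cost-neutral, c = 1/2)
# at the odds-shifted balance (1 - c) (x) pi:
ys = [1, 1, 0, 0, 0]
scores = [0.9, 0.4, 0.6, 0.2, 0.1]
pi, c, tau = 0.3, 0.25, 0.5
lhs = norm_weighted_acc(ys, scores, pi, c, tau)
rhs = norm_weighted_acc(ys, scores, odds_product(1 - c, pi), 0.5, tau)
assert abs(lhs - rhs) < 1e-9
```

The final assertion holds for any choice of threshold, score list, cost, and balance, because the per-class weights of the two evaluations differ only by the global constant $Z = \pi(1-c) + (1-\pi)c$, which cancels in the ratio.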
Machine learning-based decision support systems are increasingly deployed in clinical settings, where probabilistic scoring functions are used to inform and prioritize patient management decisions. However, widely used scoring rules, such as accuracy and AUC-ROC, fail to adequately reflect key clinical priorities, including calibration, robustness to distributional shifts, and sensitivity to asymmetric error costs. In this work, we propose a principled yet practical evaluation framework for selecting calibrated thresholded classifiers that explicitly accounts for the uncertainty in class prevalences and domain-specific cost asymmetries often found in clinical settings. Building on the theory of proper scoring rules, particularly the Schervish representation, we derive an adjusted variant of cross-entropy (log score) that averages cost-weighted performance over clinically relevant ranges of class balance. The resulting evaluation is simple to apply, sensitive to clinical deployment conditions, and designed to prioritize models that are both calibrated and robust to real-world variations.
[ "cs.LG", "cs.AI" ]
# 1 Introduction Online social media platforms, despite their limitations (Ivan et al., 2015) and potential risks (Bert et al., 2016; Abolfathi et al., 2022), have revolutionized how individuals connect and communicate with others who share similar interests. The rapid growth in their usage can be attributed to the ubiquity of smartphones and advancements in social psychology and artificial intelligence (Grandinetti, 2021), which have transformed social media into a key driver of both individual interaction and public discourse. As the volume of social media content has surged, search engines have emerged as critical gatekeepers, filtering and mediating access to content from platforms like Reddit and Twitter/X (Freelon, 2018). However, this gatekeeping introduces potential biases that shape the visibility of subreddits and hashtags, influencing the flow of information and impacting public conversations, as illustrated in Fig. 1. Research shows that biased search rankings can significantly affect consumer and voter decisions; one study found that such biases could shift voting preferences by over $20\%$ among undecided voters in the U.S. and India (Epstein and Robertson, 2015). This phenomenon, known as the search engine manipulation effect, raises concerns about the role of dominant search engines in shaping democratic processes and underscores the importance of understanding how they curate online content. Figure 1: Search engines curate and filter social media content before displaying results. To explore the framing effects of search engines, access to data is essential. However, the discontinuation of API access to social media sites has created significant barriers to obtaining this data.
This period of data inaccessibility has been termed the Post-API era (Freelon, 2018; Poudel and Weninger, 2024), which has notably hindered research across various fields, including discourse analysis (De Choudhury and De, 2014; Stine and Agarwal, 2020), computational social science (Priya et al., 2019; Hassan et al., 2020), computational linguistics (Basile et al., 2021; Wang and Luo, 2021; Melton et al., 2021; Liu, 2020), and human behavior studies (Choi et al., 2015; Thukral et al., 2018), among others (Weng and Lee, 2011; Sakaki et al., 2010). Search Engine Result Pages. Search engines frequently establish data-sharing agreements with social media platforms, allowing them access to large-scale, up-to-date data without the need for Web scraping. For instance, data from Google Trends can be used to calibrate and track the popularity of topics over time (West, 2020). In the Post-API era, Search Engine Results Pages (SERPs) have emerged as a possible alternative data source for computing and social science research (Scheitle, 2011; Young et al., 2018; Yang et al., 2015; Pan et al., 2012). However, as SERPs present results as paginated lists ranked by relevance, they inherently impose a layer of algorithmic moderation. This ranking process is central to the usability of search engines but also introduces biases in how content is prioritized, raising questions about the gatekeeping power of these platforms (Sundin et al., 2022). Subreddits and Hashtags. Subreddits and hashtags are two examples of ways that platforms provide spaces for users with similar interests to gather and can even lead to the formation of new groups (Krohn and Weninger, 2022).
Other platforms like Facebook, WhatsApp, Telegram, and Weibo also support topical discussion or community formation in similar ways. Analysis of these dynamics has led to deep insights and countless studies on engagement, membership, conflict, and discourse both within specific groups and in general (e.g., Soliman et al., 2019; Weld et al., 2022; Long et al., 2023). Continued study of these dynamics is predicated on the ability to gather data from these social platforms. In light of the new restrictions on social media data collection, as well as the previous findings on bias in SERP data, new questions arise about how these communities are represented. Our research builds upon previous work that investigates the page-level dynamics of how individual posts or pages containing certain keywords are promoted or suppressed within search engine result pages (SERPs) (Poudel and Weninger, 2024). However, we take a broader community-based approach that underscores the crucial role of subreddits and hashtags in shaping narratives. This shift in perspective allows us to uncover dimensions that are often overlooked in more granular studies. While we concur with prior research regarding the existence of bias in SERP representation, our findings extend this understanding by revealing how these biases operate at the community and topic levels. Search engine algorithms, we demonstrate, not only propagate bias but also significantly frame the larger narratives that emerge from online communities. Building on these contributions, we turn our focus to the key research questions that guide our investigation. These questions aim to deepen our understanding of how search engines function as gatekeepers, shaping the visibility and framing of entire communities and the narratives they promote.
By examining both the systemic biases that influence which subreddits and hashtags are surfaced or suppressed, and the broader implications of these dynamics for online discourse, the following three research questions seek to uncover the mechanisms through which search engines mediate public conversations. 1. How do search engine rankings and moderation policies serve as gatekeeping mechanisms that shape the visibility of subreddits and hashtags within online discourse? 2. How does the toxicity of content differ between subreddits and hashtags that appear in search engine result pages (SERP) and those that do not? 3. Which subreddits and hashtags are systematically promoted or suppressed by search engine algorithms and moderation practices, and what common characteristics can be identified among these topics and communities? To address these questions, and as illustrated in Fig 2, we compared the prevalence of subreddits and hashtags from non-sampled data obtained directly from Reddit and Twitter/X with those identified in thousands of SERPs from Google’s web search engine1 during the same time period. Overall, we find that Google significantly and dramatically biases the subreddits and hashtags that are returned in important (but not malicious or nefarious) ways. On Reddit, the subreddits that were most suppressed included r/AskReddit, r/AutoNewspaper, and r/dirtykikpals; on Twitter/X the hashtags that were most suppressed were #voguegala2022xmileapo, #nft, and #nsfwtwt. Looking at the results broadly, we find that subreddits and hashtags that contain sexually explicit content, that promote conspiracy theories, that contain many advertisements, and that promote cryptocurrencies are less likely to be returned by Google compared to nonsampled social media data. On the other hand, we find that gaming and entertainment subreddits and hashtags are more likely to be returned by Google compared to nonsampled social media data. 
# 2 Related Work Here we review key literature on (1) the influential role of search engines in shaping public discourse, and (2) challenges in data collection in social media research. Investigating the framing role of search engines in shaping public discourse requires access to robust data. However, the process of data collection presents its own set of challenges. # 2.1 Search Engines as Gatekeepers Search engines play a pivotal role in shaping social discourse and curating information, fundamentally influencing public perceptions and narratives (Makhortykh et al., 2021; Introna and Nissenbaum, 2000; Epstein and Robertson, 2015; Pan et al., 2007). This curation is not merely a passive reflection of user interest but an active process that can amplify certain viewpoints while marginalizing others (Gerhart, 2004; Epstein and Robertson, 2015). Researchers have noted that algorithms governing search engines and social media platforms function as gatekeepers, determining which content is visible and how it is shown (Goldman, 2005). This is particularly important given the sheer volume of information available online, where users rely on search engines to navigate and filter relevant content from the noise. The mechanics of gatekeeping within search engines involve both the selection and filtering of information based on various criteria, including relevance, popularity, and alignment with the users’ prior behavior (Brin and Page, 1998; Baeza-Yates et al., 1999; Hannak et al., 2013). As they do their work, they can inadvertently reinforce societal biases and echo chambers, shaping users’ understanding of issues in ways that reflect hidden biases rather than a neutral presentation of information (Gillespie, 2020, 2010). The implications of these algorithmic choices extend beyond individual users to impact the broader social dynamics. 
As platforms prioritize content that generates higher engagement, they risk skewing the discourse towards more sensational or polarizing material, which can further entrench echo chambers and reduce exposure to a broad range of perspectives (Barberá, 2020). In summary, as curators of information, search engines significantly affect how social issues are framed and discussed in modern public discourse. Their role as gatekeepers not only determines what information is accessible but also influences the narratives that emerge within society, making it a critical path for investigation. # 2.2 Data Collection Strategies The rise of social media has transformed the study of online behavior (Myslín et al., 2013; Young et al., 2009), but recent restrictions on data access have forced researchers to explore alternative methods. These methods include data recalibration strategies, alternative data sharing mechanisms, and new data acquisition techniques. Social media data often suffers from sampling bias, such as Twitter’s garden-hose versus fire-hose feed (Morstatter et al., 2013). Researchers have developed methods to address this through data cleaning and recalibration, which correct noisy labels and adjust for incomplete data (Ilyas and Chu, 2019; West, 2020; Ford et al., 2023). With data collection services becoming more restricted, alternatives like data donation have emerged, where users voluntarily provide their data (Carrière et al., 2023; Ohme et al., 2023). Others propose policy-driven solutions, such as requiring platforms to share public data under regulations like Europe’s Digital Services Act (de Vreese and Tromble, 2023). Another approach involves using search engine result pages (SERPs) as proxies for social media data (Poudel and Weninger, 2024). # 3 Data Collection Methodology We compared (nearly) complete data from two social media platforms, Reddit and Twitter/X, with search engine responses for the same period. 
# 3.1 Reddit Data Reddit data was collected using the Pushshift system until March 2023. This dataset is comprehensive but may lack content flagged as spam by Reddit, or removed, edited, or deleted by moderators or users before collection. It also excludes content from quarantined subreddits or inaccessible posts/comments. Despite these limitations, it covers a vast majority of Reddit’s visible social media content. Note that metadata such as up-/downvotes, awards, and flair may be altered post-collection and may not be fully represented in this dataset. For this study, we focused on Reddit data from January 2023, consistent with prior research. During this period, the dataset comprised 36,090,931 posts and 253,577,506 comments across 336,949 distinct subreddits. # 3.2 X/Twitter Data We obtained a nearly complete X/Twitter dataset spanning 24 hours from September 20, 2022, 15:00:00 UTC, to September 21, 2022, 14:59:59 UTC using an academic API, available free at the time of collection. This dataset, though not guaranteed to be complete, aims to provide a nearly-exhaustive, stable representation of X/Twitter activity (Pfeffer et al., 2023). During this period, 374,937,971 tweets were collected, with approximately $80\%$ being retweets, quotes, or replies, and the remainder original tweets. # 3.3 Search Engine Sampling Methodology Given the vast amount of social media data, extracting all indexed content from search engines is impractical. Instead, we sampled data by issuing keyword queries and extracting results from SERP. The Reddit dataset was tokenized using Lucene’s StandardAnalyzer (lucene, 2024), which processes text by removing whitespace, converting to lowercase, and eliminating stopwords. We filtered tokens with non-alphabetic characters, fewer than 3 characters, or occurring fewer than 100 times; a stratified sample of 1,000 keywords was then selected based on document frequency for balanced representation (see Appendix
A.1 for details). Table 1: Number of unique subreddits and hashtags in nonsampled data and the time-matched SERP sample For each keyword, site-specific queries were issued to Google using formats like site:reddit.com {keyword} and site:twitter.com {keyword}, with time constraints set to match nonsampled Reddit data from January 2023 and Twitter/X data from September 20-21, 2022. Default query settings were maintained. The SERP-API we employed utilized multiple global proxies to mitigate geographical biases. Each query was repeated three times to account for SERP’s non-deterministic nature, and results were combined across repetitions. # 3.4 Sample Statistics Relative to the enormous size of the nearly-complete Reddit and Twitter/X datasets, the time-matched SERP results yielded a total of 1,296,958 posts from Reddit and 80,651 tweets from Twitter/X. Table 1 shows the statistics of total unique subreddits and hashtags retrieved from the nonsampled social media data and from the SERP results for the curated list of keywords. Rather than the posts themselves, in the present work we focus on those subreddits and hashtags returned by SERP. We conducted an in-depth comparison to understand what disparities, if any, exist between the SERP sample and the nonsampled data. This analysis is broken into four phases that correspond to the overall research questions of the present work: (1) Activity-based Analysis, (2) Characterization of the Sample, (3) Toxicity Analysis of the Sample, (4) Suppression and Promotion Analysis. # 4 Activity-based analysis Previous studies have shown that search engines prioritize Reddit posts with higher upvotes and tweets from users with larger followings (Poudel and Weninger, 2024). Here, we investigate whether SERP results also favor subreddits and hashtags with higher activity. Figure 3: Hexbin plots show the correlation between hashtag and subreddit occurrence in SERP results compared to the nonsampled data for Twitter/X ($R^2 = 0.214$, $p < 0.001$) and for Reddit ($R^2 = 0.423$, $p < 0.001$). We measured activity in subreddits by the number of submissions to each subreddit during the sample timeframe. Similarly, for Twitter/X, activity was measured by the frequency of each hashtag. For Reddit, we compared the number of subreddit posts between nonsampled data and SERP samples. This comparison was visualized using hexbin plots (Fig. 3), where color intensity represents data point density. On Twitter/X, we similarly compared the frequency of each hashtag between nonsampled and SERP data. Hexbin plots were chosen because they effectively display the distribution and density of large datasets, making it easier to identify patterns and correlations. On Twitter/X, we found a moderate correlation between hashtag frequency in SERP and its occurrence in nonsampled data ($R^2 = 0.214$, $p < 0.001$). For Reddit, a stronger association was observed ($R^2 = 0.423$, $p < 0.001$). Interestingly, hashtags with little activity still appeared in SERP results, possibly due to sustained popularity from previous periods despite current inactivity. This trend was particularly noticeable in the Twitter/X dataset, which covers only a single day in this study. # 4.1 Characterization of Sampled Subreddits and Hashtags Our analysis showed a moderate correlation between subreddit and hashtag engagement and SERP visibility. Here, we dig deeper by examining which types of subreddits and hashtags are overrepresented or underrepresented in SERP compared to an unbiased sample of the data. Specifically, we focus on the top 1,000 most active subreddits and English hashtags based on post frequency on Reddit and Twitter/X, respectively. Figure 4: Subreddits In SERP results are more likely to be public compared to those subreddits Not In SERP results.
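The $R^2$ values reported in the activity analysis can be reproduced with a short, dependency-free helper (the function name is ours); whether the fit is on raw or log-scaled counts is not stated in the text, so that choice is left open:

```python
def r_squared(xs, ys):
    """Coefficient of determination for the least-squares fit
    y ~ a + b*x, computed as the squared Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)
```

For a perfectly linear relationship the helper returns 1.0; heavy-tailed activity counts would typically be log-transformed before fitting.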
Figure 5: Distribution of the hashtag categories for those found In SERP results compared to those Not In SERP results. On Reddit, subreddits are categorized into the following visibility states: public, restricted, forbidden (banned by Reddit as of March 2023), or private (visible only to subscribed members). Our analysis shows that SERP significantly favors public subreddits and suppresses those categorized as restricted, forbidden, and private; Fig. 4 illustrates the proportions of subreddit types returned and not returned by SERP. Using OpenAI’s GPT-4 (Kublik and Saboo, 2023), we categorized each Twitter/X hashtag into one of nine previously identified categories (Pfeffer et al., 2023), as shown in Fig. 5. The prompt template is shown in Appendix A.2.1. On SERP, categories like Games and Finance were over-represented, while Advertisement, Politics, and Entertainment were under-represented compared to the ’Not in SERP’ category. These findings are specific to the hashtags prevalent during a 24-hour period in late September 2022 and may not reflect broader trends on Twitter/X. (See Appendix Tables T1 and T2 for representative subreddits and hashtags within each category.) Next, we analyze the content within these top 1,000 subreddits and hashtags, examining the types of posts appearing in SERP versus those that do not, using a toxicity analysis. # 4.2 Toxicity Analysis Toxicity in online communities is a critical research area requiring complete social media data access. It is vital to determine if SERP-represented groups truly reflect overall toxicity dynamics. Traditional toxicity analysis relied on keyword presence for identifying toxic content (Rezvan et al., 2020). Transformer models like BERT now lead this task, adapting to evolving cultural and linguistic contexts (Devlin et al., 2018; Sheth et al., 2022).
We employed Toxic-BERT (Hanu and Unitary team, 2020), trained on annotated Wikipedia comments, to assess toxicity in Reddit post titles and Tweets. It provides probabilities for toxicity, obscenity, and insults, with other labels (threat, severe_toxic, identity_hate) being extremely rare and not shown in our results. We compared the toxicity levels across two categories: In SERP and Not In SERP. The "In SERP" group consists of 5,000 randomly sampled posts that appeared directly in search engine results, specifically within the top 1,000 results for selected subreddits and hashtags. The "Not In SERP" group includes 5,000 posts randomly selected from subreddits and hashtags not indexed by search engines, ensuring that none of these posts were visible in search results. By comparing these samples, we assessed and contrasted toxicity levels among posts from subreddits and hashtags that are In SERP and Not In SERP. This helps us understand how search engine indexing and result presentation might influence users’ exposure to toxic content. Figure 6 illustrates the mean label probabilities alongside their 95% confidence intervals, highlighting key differences between Reddit and Twitter/X in terms of content toxicity. Our analysis reveals mixed results. Subreddits that do not appear in SERP exhibited higher toxicity levels compared to those that do, suggesting that SERP aggressively filters subreddits. On Twitter/X, hashtags Not In SERP were only marginally more toxic than those In SERP, showing little difference overall. These findings may reflect the content landscape of Twitter/X during the time of data collection, where prominent discussions focused on less controversial topics, such as entertainment, finance, gaming, and current events.
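Scoring posts requires the Toxic-BERT model itself, but the summary statistic behind Figure 6, a mean label probability with a 95% confidence interval, can be sketched with the standard library alone. A percentile-bootstrap version (the function name is ours; 2,000 resamples is an arbitrary choice, not from the paper):

```python
import random
import statistics

def mean_ci95(probs, n_boot=2000, seed=0):
    """Mean probability with a 95% bootstrap confidence interval
    (percentile method over n_boot resamples)."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(probs, k=len(probs)))
        for _ in range(n_boot)
    )
    lo = means[int(0.025 * n_boot)]
    hi = means[int(0.975 * n_boot) - 1]
    return statistics.fmean(probs), (lo, hi)
```

Applied separately to the In-SERP and Not-In-SERP toxicity probabilities, non-overlapping intervals would support the difference reported for Reddit.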
Despite these platform-specific variations, the overall toxicity of Twitter/X content was lower than that of Reddit. This may be attributed to Reddit's higher prevalence of subreddits focused on adult content, which tend to be perceived as more toxic. However, as shown in Figure 4, such subreddits represent only a small subset of the most popular communities on Reddit.

# 5 Suppression and Promotion

While the previous categorization sheds light on the types and nature of subreddits and hashtags retrieved by SERP, it overlooks how frequently they appear, potentially introducing bias in their portrayal compared to nonsampled data. In this section, we treat subreddits and hashtags as tokens and employ conventional token analysis to assess their suppression and promotion in SERP. Various statistical analyses can be used to compare these distributions (Cha, 2007; Deza and Deza, 2006). However, traditional methods face challenges with the Zipfian data typical of most text datasets (Gerlach et al., 2016; Dodds et al., 2023). To address this, we utilize Rank Turbulence Divergence (RTD) (Dodds et al., 2023) to quantify the disparity between the activity distribution of nonsampled subreddits and hashtags and those retrieved in the SERP sample; see Appendix A.3 for details.

Table 2: Rank Turbulence Divergence (RTD) between SERP subreddits and hashtags and the nonsampled social media subreddits and hashtags. A lower score indicates low rank divergence, i.e., similar distributions; a higher score indicates larger divergence.

Table 2 shows the mean RTD for SERP results compared to nonsampled social media data across all 1,000 keywords, highlighting significant disparities in this domain-level analysis.
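Concretely, rank-turbulence divergence compares a token's rank in the two systems. The sketch below implements a simplified, unnormalized variant: Dodds et al. include a normalization factor and a more careful treatment of tokens absent from one system, both omitted here, and the example ranks are toy data.

```python
def rtd_contributions(ranks_a, ranks_b, alpha=1.0):
    """Per-token contributions |r_a^-alpha - r_b^-alpha|^(1/(alpha+1)) to an
    unnormalized rank-turbulence divergence.

    `ranks_a` / `ranks_b` map each token to its rank in the two systems.
    Tokens missing from one system receive rank len(system) + 1 as a crude
    convention (the original measure handles exclusive tokens differently).
    """
    tokens = set(ranks_a) | set(ranks_b)
    default_a = len(ranks_a) + 1
    default_b = len(ranks_b) + 1
    contrib = {}
    for t in tokens:
        ra = ranks_a.get(t, default_a)
        rb = ranks_b.get(t, default_b)
        contrib[t] = abs(ra ** -alpha - rb ** -alpha) ** (1.0 / (alpha + 1.0))
    return contrib

# Toy rankings: token ranks in SERP results vs. nonsampled social media data
serp_ranks = {"gaming": 1, "music": 2, "politics": 5}
social_ranks = {"politics": 1, "gaming": 2, "adult": 3, "music": 4}
contrib = rtd_contributions(serp_ranks, social_ranks)
divergence = sum(contrib.values())
```

Tokens ranked highly in one system but lowly (or absent) in the other contribute most, which is exactly what Figure 7 visualizes per subreddit and hashtag.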
# 5.1 Divergence versus Frequency

Selecting only those subreddits and hashtags that appeared at least once in SERP results, we characterized their inclinations, i.e., whether a subreddit is more or less likely to appear in the SERP sample compared to the nonsampled social media data, and plotted these signed divergences as a function of activity. Figure 7 illustrates the most divergent subreddits (top) and hashtags (bottom). Additionally, Fig. A1 in the Appendix shows the distributions of the 15 highest and lowest individual divergences (Eq. E.1) and their mean (representing Eq. E.2) for subreddits and hashtags respectively. For Twitter/X hashtags, SERP prominently featured hashtags related to events like the United Nations General Assembly (UNGA), the FIFA video game, and the fashion house Prada and its appearance at Milan Fashion Week (MFW). These events occurred during or shortly before the data collection period. On the contrary, hashtags related to the appearance of two Thai celebrities, Mile and Apo, at the Vogue Gala, as well as their talent agency BeOnCloud, were largely hidden from SERP results. A hashtag for Mahsa Amini, an Iranian woman who refused to wear a headscarf and died under suspicious circumstances in the days prior to data collection, was also comparatively hidden from SERP results, as were cryptocurrency hashtags related to investors and NFTs. The most common hashtags from each inclination are listed on the right. Similarly, for Reddit, as demonstrated in the previous analysis, gaming and conversational subreddits are more frequently returned in SERP results, while subreddits focused on adult content are more prevalent on Reddit. Interestingly, /r/AskReddit and /r/relationship_advice are notably less visible in SERP results, which warrants further exploration.
# 5.2 Coverage of Subreddits

We conducted a case study comparing subreddits included in SERP with those not included, as illustrated in Fig. 8. For each subreddit with at least 10 posts, we semantically mapped the content using MPNet-Base-V2 embeddings, averaged from five random posts per subreddit. We then used UMAP to project these embeddings into a two-dimensional space (McInnes et al., 2018). Red points denote subreddits in SERP, while blue points denote those not in SERP. We identified seven clusters, where domination by red or blue points indicates SERP status. Pornographic and adult content was notably absent from SERP, while technology, music, comics, games, and health-related subreddits were prominently featured. Conversely, subreddits discussing crypto-coins, politics, and COVID-19 were less likely to appear in SERP.

# 6 Discussion

Our study demonstrates how search engines act as gatekeepers, shaping online discourse by selectively surfacing a biased subset of subreddits and hashtags in their SERPs. This selective visibility directly impacts how users access and engage with information. By analyzing the patterns of inclusion and exclusion within SERP results, we observe how search engine algorithms and moderation practices play a central role in framing the topics and communities that dominate online conversations. We found that subreddits and hashtags with higher engagement levels, such as highly upvoted Reddit posts or popular hashtags, are more likely to appear in SERPs. This tendency was more pronounced on Reddit, where there is a stronger correlation between engagement metrics and SERP appearance than on X/Twitter. This disparity suggests that search engine algorithms treat engagement metrics differently across platforms, reflecting the unique dynamics of user interactions on each.
This insight directly supports our first research question by revealing the role of search engine algorithms in amplifying content with higher activity and participation.

Figure 7: Rank Turbulence Divergence (RTD) of ranked subreddits and hashtags as a function of activity. Subreddits and hashtags with higher likelihood in nonsampled social media data are shown in blue, while those with higher likelihood in SERP results are shown in red.

Figure 8: Semantic embeddings of subreddits that are found In SERP (red) and Not In SERP (blue). Clusters of subreddits about adult content, politics, and crypto-coins are generally absent from SERP results.

Of course, the time scope of our data revealed event-specific hashtags, such as #climateweeknyc and #nationalfitnessday. These hashtags gained prominence around their corresponding events, indicating that search engines respond to temporal spikes in engagement and act as curators of public discourse. Notably, political subreddits and hashtags were systematically less likely to appear in SERP, suggesting that factors beyond user engagement, such as moderation policies and content restrictions, significantly influence visibility. Political content, along with discussions related to pornography, bots, and cryptocurrency, was disproportionately filtered from SERPs. This underscores the gatekeeping function of search engines, which, through moderation, both maintain the quality of the content they display and inadvertently suppress discourse in these areas. Our analysis shows that SERPs filter out content related to pornography, bots, and cryptocurrency, likely due to moderation policies aimed at reducing inappropriate content. While this helps create a safer online space, it also suppresses legitimate discussions, skewing the available discourse. The toxicity analysis adds an important dimension to these findings.
We found that content surfaced by SERPs generally contains less toxic language than content from subreddits and hashtags that do not appear in SERP results. This suggests that search engines are effectively reducing exposure to harmful or toxic content. While this can be seen as a positive step toward a safer and more civil online environment, it also raises concerns about over-filtering. Specifically, by aggressively limiting toxic content, search engines may also suppress important discussions that are critical to public discourse. This observation directly informs our third research question by showing how moderation policies tangibly shape the nature of the content users access, raising questions about the balance between safety and free expression. The results of our study have several implications. They suggest that SERP algorithms and moderation policies collectively shape the online information landscape in ways that may not be immediately apparent to users. By favoring certain communities and suppressing others, SERPs can influence public discourse, access to information, and the diversity of viewpoints available to users.
Search engines play a crucial role as digital gatekeepers, shaping the visibility of Web and social media content through algorithmic curation. This study investigates how search engines like Google selectively promote or suppress certain hashtags and subreddits, impacting the information users encounter. By comparing search engine results with nonsampled data from Reddit and Twitter/X, we reveal systematic biases in content visibility. Google's algorithms tend to suppress subreddits and hashtags related to sexually explicit material, conspiracy theories, advertisements, and cryptocurrencies, while promoting content associated with higher engagement. These findings suggest that Google's gatekeeping practices influence public discourse by curating the social media narratives available to users.
# 1 Introduction Tabular data is a prevalent modality in numerous real-world applications [39], including critical areas such as disease diagnosis [6], credit scoring [13], census data analysis [35], and cybersecurity [7]. Consequently, developing effective tabular foundation models [48, 21, 22] has become a pressing need to enable robust representation learning for diverse real-world tabular datasets. However, existing approaches predominantly focus on single tables or isolated databases [37], which is often insufficient for training high-performing models. To address this limitation and enable learning from numerous heterogeneous tabular datasets, multiple collaborative learning approaches have emerged. These include federated learning [28], transfer learning [50], split learning [40], and the development of tabular foundation models [21]. Despite the promise of these collaborative methodologies, their practical application and development are significantly hampered by limitations in available real-world data, an issue highlighted in recent surveys [29, 39]. For instance, the reliance on synthetic datasets, as seen in models like TabPFN [21], risks introducing distribution bias when applied to real-world scenarios. While initial data collection efforts yielded large corpora of individual tables, such as GitTables [24] and WikiTables [4], and subsequent resources like WikiDBs [41] expanded this scope to include corpora of databases offering richer structural context, a fundamental challenge persists. These existing datasets, even extensive ones like WikiDBs, predominantly feature isolated entities with scarce or undefined explicit relationships between individual databases. This lack of defined interconnections means that models trained on such collections primarily capture intra-database dependencies, thereby critically overlooking inter-database relationships essential for developing more comprehensive and high-performing tabular foundation models [30]. 
Table 1: Comparison of existing real-world table or database corpora with WikiDBGraph. Note: 'Schema-Data' refers to corpora with both schema and content; "DB" refers to database.

To address this challenge, we introduce WikiDBGraph, a large-scale, open-source graph of relational databases constructed from Wikidata [42]. Our method first leverages the known relations within WikiDBs, a corpus of tabular databases also based on Wikidata. Subsequently, we employ contrastive learning to train a model that predicts the similarity between databases. Utilizing these predicted similarities, we construct a graph where each node represents a database, and edges signify their correlation. Furthermore, we define and calculate a range of properties for both nodes and edges, derived from database structure, schemas, and data distributions. This approach is intended to facilitate a more profound exploration of inter-database relationships and can be utilized in various collaborative learning paradigms. We categorize data scenarios into two main types, feature-overlap and instance-overlap, and conduct experiments on each case to observe the improvement in collaborative learning. Finally, we summarize the future challenges of large-scale tabular collaborative learning. Our code and data are available at GitHub [45] and Hugging Face, respectively. The primary contributions of this work are: (1) We present WikiDBGraph, a large-scale graph of interconnected relational databases. (2) We enrich this dataset by defining and computing comprehensive node and edge properties, thereby providing a nuanced representation of the database ecosystem. (3) We derive feature- and instance-overlapped databases from the edges of WikiDBGraph and conduct experiments to validate the improvement of collaborative learning, with a summary of the challenges and future directions for collaborative learning.
# 2 Related Work This section reviews existing literature pertinent to WikiDBGraph, encompassing traditional knowledge graphs, large-scale table corpora, and database corpora. We aim to contextualize our contribution by comparing its characteristics with existing datasets, as summarized in Table 1. Knowledge Graphs. Knowledge Graphs (KGs) model individual entities and their explicit relationships. For example, Wikidata [42] serves as a collaboratively built central knowledge hub, DBpedia [3] structures Wikipedia content, and YAGO [38] integrates Wikipedia with other lexical resources. These KGs, including newer domain-specific versions, employ fine-grained, entity-level nodes, making them ideal for tasks such as understanding specific facts, logical reasoning, and semantic search, often leveraging graph learning techniques like Graph Neural Networks (GNNs) [43]. However, the entity-centric nature of KGs is not directly suited for tabular deep learning paradigms that require structured data. In contrast, our proposed WikiDBGraph is constructed with relational databases as nodes and inter-database relationships as edges. This higher-level, structured view of interconnected tabular data repositories is specifically designed to support tabular deep learning across multiple databases. Table Corpora. Existing large-scale table corpora have predominantly concentrated on amassing collections of individual, self-contained tables. Prominent examples include Wikipedia-sourced datasets like WikiTables [4]. Other significant collections encompass web-derived corpora such as WDC WebTables [26] and the Dresden Web Table Corpus [17], tables from code repositories such as GitTables [24], and collections from open data portals, for instance, Open Data Portal Watch [34] and the Table Union Search Benchmark [33]. Furthermore, specialized compilations like VizNet [23] and challenge-specific datasets such as WikidataTables2023R1 [1] contribute to this landscape. 
These resources have been instrumental in advancing research on single-table deep tabular learning. Nevertheless, their inherent focus on isolated tables presents a significant limitation, as it diverges from practical scenarios where tabular data often exists within relational databases. Database Corpora. Unlike single-table collections, database corpora feature multiple interrelated tables to support complex relational tasks and multi-table representation learning. These resources include: schema-only corpora (e.g., SchemaDB [12], GitSchemas [14] from GitHub SQL) offering detailed schemas but minimal or no data; data-only corpora (e.g., SQLShare [25]) providing table content, often without explicit schemas; and schema-data corpora with both. Among schema-data corpora, some are small-scale (e.g., CTU Prague Relational Learning Repository [31]) or task-specific (e.g., the Text-to-SQL datasets Spider [47] and BIRD [27]), making them less suitable for general representation learning. Large-scale schema-data corpora like WikiDBs [41] (from Wikidata) and SchemaPile [15] (from GitHub SQL) provide substantial data. However, a significant limitation, even in these extensive collections, is the scarcity of explicitly identified inter-database relationships, which hinders studying collaborative learning across databases. WikiDBGraph addresses this critical gap by systematically constructing and defining these inter-database connections, thereby enabling such collaborative learning. # 3 Dataset Construction This section details the methods to construct WikiDBGraph. We begin by formally defining the problem of database relationship identification (Section 3.1). Subsequently, our proposed solution is presented in Section 3.2. The efficacy of this approach in identifying meaningful correlations is then evaluated in Section 3.3. Finally, Section 3.4 describes the utilization of the validated model to construct the database graph. The pipeline of WikiDBGraph construction is shown in Figure 1.
# 3.1 Problem Definition This work leverages WikiDBs [41], a large-scale collection of relational databases extracted from Wikidata. We aim to construct WikiDBGraph by identifying and establishing correlations between these databases. We initially define two databases as correlated if they share the same value for an existing Wikidata attribute, wikidata_topic_item_id, referred to as TID. However, this explicit linkage based on TID identifies only 8,816 correlated pairs among 9,895 distinct databases. This results in a very sparse set of connections, leaving over $90\%$ of the databases isolated. Such sparsity is probably because many topics, though intrinsically related, have distinct TIDs within Wikidata. The primary objective of this work is to train a model capable of uncovering these implicit correlations. Let $\mathcal{D} = \{D_1, D_2, \ldots, D_N\}$ and $\mathcal{S} = \{s_1, s_2, \ldots, s_N\}$ represent the sets of all $N$ databases and $N$ schemas within WikiDBs, respectively, and let $\mathcal{P}_{\mathrm{explicit}} = \{(D_i, D_j) \mid \mathrm{TID} \text{ of } D_i = \mathrm{TID} \text{ of } D_j,\ i \neq j\}$ denote the limited set of explicitly correlated database pairs identified through shared TIDs. We aim to train a function $f : \mathcal{D} \times \mathcal{D} \to [0, 1]$, where $f(D_i, D_j)$ outputs the predicted probability of a correlation existing between databases $D_i$ and $D_j$. We model this as a semi-supervised learning problem due to the limited positive labels in $\mathcal{P}_{\mathrm{explicit}}$. # 3.2 Approach Database Serialization.
To mitigate the verbosity of the original JSON files and reduce input size for subsequent processing, we serialize each $s_i$ and samples in $D_i$ into a concise textual format, denoted as abstract $t_i \in \mathcal{T}$. This serialization retains both structural information and an abstract of the data content. Specifically, for each database, we preserve its name, the names of its tables, and, for each table, the names of its columns along with a few representative sample values from each column. An illustrative example of our serialization format is presented below:

Database: <database_name>
Table: <table_name_1>
- Column: <column_name_1> ; Samples: <value_1> | <value_2> | <value_3>
- Column: <column_name_2> ; Samples: <value_1> | <value_2> | <value_3>
Table: <table_name_2>
- Column: <column_name_1> ; Samples: <value_1> | <value_2> | <value_3>
- Column: <column_name_2> ; Samples: <value_1> | <value_2> | <value_3>

Figure 1: An overview of the WikiDBGraph construction process.

While sample values are included to offer a qualitative indication of the data, we do not incorporate full data distributions or the entirety of the raw data. This approach is adopted for two primary reasons: 1) schema-level information (database, table, and column names) combined with data samples is often more directly indicative of the database's topic than comprehensive statistical distributions of all values; 2) processing and representing the complete data for all databases would be computationally expensive and lead to excessively large serialized representations. Training. We employ a contrastive learning [11] framework to train an embedding model $f_\theta(\cdot) : \mathcal{T} \to \mathbb{R}^d$. This model maps serialized database schema representations from the input space $\mathcal{T}$ into a $d$-dimensional vector space.
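Before turning to the training details, the serialization scheme above can be sketched as follows. This is a minimal illustration: the `serialize_database` helper, the `Database:` header line, and the toy table contents are assumptions, with only the `Table:` / `- Column: … ; Samples: …` lines taken from the format shown.

```python
def serialize_database(name, tables, n_samples=3):
    """Serialize a database into a concise textual abstract.

    `tables` maps table name -> {column name -> list of values}; only the
    first `n_samples` values of each column are kept as representatives.
    """
    lines = [f"Database: {name}"]
    for table_name, columns in tables.items():
        lines.append(f"Table: {table_name}")
        for col, values in columns.items():
            sample = " | ".join(str(v) for v in values[:n_samples])
            lines.append(f"- Column: {col} ; Samples: {sample}")
    return "\n".join(lines)

# Toy database with one table and two columns
abstract = serialize_database(
    "ExampleOrthologsDb",
    {"GeneOrthologs": {"GeneId": ["g1", "g2", "g3", "g4"],
                       "Species": ["T. cruzi", "T. brucei", "L. major"]}},
)
```

Truncating to a few samples per column keeps the abstract short enough to fit the encoder's input budget while still hinting at the data's topic.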
The distinction between positive and negative pairs, crucial for the contrastive loss, is determined by TIDs: a pair of database abstracts $(t_i, t_j)$ is regarded as a positive pair if $t_i$ and $t_j$ share an identical TID. Conversely, pairs of abstracts that do not share a common TID are treated as negative pairs. The sets of positive and negative pairs are partitioned into training, validation, and test subsets in a 7:1:2 ratio, ensuring no database overlap between partitions. Specifically, the embedding model $f_\theta(\cdot)$ is initialized using parameters from the pretrained encoder-only language model BGE-M3 [9] and is subsequently fine-tuned during our training process. To construct training instances, each positive pair $(t_a, t_b)$ from the training set is utilized, where $t_a$ serves as the anchor and $t_b$ as the positive abstract. For each anchor $t_a$, we sample $k$ negative abstracts with TIDs distinct from that of $t_a$, denoted $\{t_{n_j}\}_{j=1}^{k}$, from the training set. The triplet $(t_a, t_b, \{t_{n_j}\}_{j=1}^{k})$ is fed into training; $k$ is set to 6 in our experiments. The model parameters $\theta$ are optimized by minimizing the InfoNCE loss. Initially, the anchor, positive, and negative abstracts are transformed into their respective embeddings:
$$
e_a = f_\theta(t_a), \quad e_b = f_\theta(t_b), \quad e_{n_j} = f_\theta(t_{n_j}) \quad \text{for } j = 1, \dots, k.
$$
These embeddings, $e_a, e_b, e_{n_j} \in \mathbb{R}^d$, are then employed to compute the InfoNCE loss function [11] as follows:
$$
\mathcal{L}_{\mathrm{InfoNCE}} = -\log \frac{\exp(\mathrm{sim}(e_a, e_b)/\tau)}{\exp(\mathrm{sim}(e_a, e_b)/\tau) + \sum_{j=1}^{k} \exp(\mathrm{sim}(e_a, e_{n_j})/\tau)},
$$
where $\mathrm{sim}(\cdot, \cdot)$ represents cosine similarity between two embedding vectors, and $\tau$ is a temperature hyperparameter controlling the sharpness of the distribution. The optimal embedding model $\theta^*$ is obtained by minimizing the InfoNCE loss:
$$
\theta^* = \arg\min_\theta \mathcal{L}_{\mathrm{InfoNCE}}\left(\theta; \{(t_a, t_b, \{t_{n_j}\}_{j=1}^{k})\}\right).
$$

# 3.3 Evaluation of Embedding Model

This section presents an evaluation of the embedding model $f_\theta(\cdot)$ on the test set. The test set is structured with a positive-to-negative pair ratio of $1:k$, where $k = 6$, consistent with training. We sample five test sets using different random seeds and report the mean and standard deviation of their performance. Performance metrics are detailed in Table 2, and the ROC curve is depicted in Figure 2a. The results demonstrate that the fine-tuned BGE-M3 model significantly outperforms the original BGE-M3 baseline. Furthermore, the fine-tuned model achieves near-optimal performance, effectively distinguishing between positive and negative database pairs.

Table 2: Performance of the embedding model on the test set with threshold 0.5.
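The InfoNCE objective can be sketched in NumPy for a single triplet. This is a toy illustration: random vectors stand in for BGE-M3 embeddings, and the temperature value is an assumption, as it is not reported here.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity sim(u, v) between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def info_nce(e_a, e_b, e_negs, tau=0.05):
    """InfoNCE loss for one (anchor, positive, k negatives) triplet."""
    pos = np.exp(cosine(e_a, e_b) / tau)
    negs = sum(np.exp(cosine(e_a, e_n) / tau) for e_n in e_negs)
    return float(-np.log(pos / (pos + negs)))

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.01 * rng.normal(size=8)     # nearly identical abstract
negatives = [rng.normal(size=8) for _ in range(6)]  # k = 6, as in the paper
loss = info_nce(anchor, positive, negatives)
```

The loss shrinks as the anchor-positive similarity grows relative to the anchor-negative similarities, which is what pulls same-TID abstracts together in the embedding space.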
Figure 2: Performance evaluation of the embedding model.

# 3.4 Graph Construction

Upon obtaining the pretrained embedding model, denoted as $f_{\theta^*}$, the embedding vector for each database in the corpus can be derived. To construct the database graph, we then compute the cosine similarity between the embedding vectors of all possible database pairs. This computation is performed in parallel for efficiency. Subsequently, a similarity threshold, $\tau$, is applied to these computed scores to determine the presence of an edge (i.e., a connection) between pairs of databases in the graph. The selection of an appropriate value for $\tau$ necessitates a trade-off. A lower value of $\tau$ may increase the recall of genuinely related pairs but can also decrease precision by introducing spurious connections and potentially escalate computational costs for subsequent graph analyses. Conversely, a higher value of $\tau$ tends to enhance precision and the confidence in the identified relationships, but it may result in a sparser graph. To strike a balance between graph density and the reliability of the relationships, we establish a default threshold of $\tau = 0.94$ while also providing generated graphs using various values of $\tau$ to accommodate different analytical needs. Embedding Analysis. A clustering of the database embeddings, projected to two dimensions using t-SNE and subsequently processed with HDBSCAN, is illustrated in Figure 2c. This visualization reveals that the embeddings are grouped into 11 distinct clusters (excluding unknown) of varying sizes, corresponding to different topical categories. Manual inspection indicates, for instance, that cluster 10 predominantly represents sports-related databases, while cluster 9 corresponds to biomedical data. These observations suggest that the learned embeddings effectively capture meaningful semantic relationships between databases.
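The thresholding step can be sketched as follows. This is a brute-force illustration over toy 2-d embeddings; the actual pipeline parallelizes the pairwise computation, and at the 100,000-node scale one would use blocked matrix products or approximate nearest-neighbor search rather than an explicit double loop.

```python
import numpy as np

def build_graph(embeddings, tau=0.94):
    """Return edges (i, j, similarity) for all pairs with cosine sim >= tau.

    `embeddings` is an (n, d) array of database embedding vectors.
    """
    # Normalize rows so pairwise dot products equal cosine similarities
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    edges = []
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            if sims[i, j] >= tau:
                edges.append((i, j, float(sims[i, j])))
    return edges

# Toy 2-d embeddings: the first two are nearly parallel, the third orthogonal
emb = np.array([[1.0, 0.0], [0.99, 0.14], [0.0, 1.0]])
edges = build_graph(emb, tau=0.94)
```

With the default $\tau = 0.94$ only the near-parallel pair is connected, mirroring the precision-recall trade-off discussed above.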
Further analysis, focusing on the distribution of similarity scores, is presented in Figure 2b. The figure demonstrates two key points: first, the overall distribution of cosine similarities across all database pairs approximates a normal distribution, which is consistent with expectations for large-scale pairwise comparisons. Second, when compared to explicit positive pairs (those sharing a TID), this analysis reveals a substantial number of previously unidentified related databases exhibiting high similarity scores. This finding underscores the efficacy of our approach in significantly expanding the network of thematically related databases within the WikiDBs corpus, beyond the initial set identified by shared TIDs. At the same time, the number of such newly identified positive pairs remains relatively small compared to the global data size; consequently, this does not substantially affect the negative-pair sampling method used in training.

# 4 Dataset Details

This section introduces the structural details of the WikiDBGraph dataset (Section 4.1) and subsequently describes the properties defined for the nodes and edges within this graph (Section 4.2).

# 4.1 Graph Structure

The structural characteristics of the generated database graphs, as detailed in Table 3 and further elucidated by the distributions presented in Figures 3a, 3b, and 3c, provide two key insights relevant to the exploration of inter-database relationships. Firstly, WikiDBGraph exhibits notable density. As illustrated in Figure 3a, a considerable number of database nodes display degrees exceeding 100, signifying a high level of local connectivity for many entities. This observation, corroborated by the data in Table 3, which indicates high maximum node degrees (e.g., up to 4,803 for a similarity threshold $\tau = 0.94$), suggests that real-world databases on similar topics are often tightly correlated. This correlation is particularly beneficial to collaborative learning algorithms that can effectively leverage these rich interconnections. Secondly, WikiDBGraph comprises numerous small connected components and communities. The distributions of connected component sizes (Figure 3b) and community sizes (Figure 3c) both demonstrate a sharp increase in frequency as the size of components or communities decreases. This structural characteristic aligns with the clustering observations in Figure 2c, which also indicated the presence of many distinct, smaller topical groupings within the dataset.

Table 3: Structural statistics of the generated graphs at different similarity thresholds $\tau$ (Deg. = node degree, CC = connected component size).

| $\tau$ | #Nodes | #Edges | #CCs | #INs | Deg. Min | Deg. Max | Deg. Mean | Deg. Med | CC Min | CC Max | CC Mean | CC Med |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.93 | 100,000 | 26,879,058 | 64,417 | 58,010 | 0 | 5,872 | 268.79 | 0 | 1 | 12,054 | 1.55 | 1 |
| 0.94 | 100,000 | 17,964,868 | 71,235 | 65,126 | 0 | 4,803 | 179.65 | 0 | 1 | 10,703 | 1.40 | 1 |
| 0.96 | 100,000 | 6,197,746 | 78,359 | 73,300 | 0 | 4,527 | 123.95 | 0 | 1 | 7,960 | 1.28 | 1 |

Figure 3: Distributions of (a) node degree, (b) connected component size, and (c) community size (6,132 communities).

# 4.2 Node and Edge Properties

While the graph construction primarily leverages the similarity of database schema embeddings, we incorporate a broader range of properties for both nodes (databases) and edges (inter-database relationships) to enrich the graph and support diverse analytical models, such as graph neural networks [43]. These properties are categorized into three main types, with detailed summaries available in Table 4 for nodes and Table 5 for edges.
Structural Properties capture architectural characteristics: for nodes, this includes table and column counts and foreign key relationships; for edges, it involves quantifying structural similarity between connected databases using metrics like the Jaccard index on their respective sets of table names, column names, and data types. Semantic Properties encapsulate conceptual aspects: for nodes, these include pre-computed embedding vectors, topic categories (e.g., from clustering), and community categories (e.g., derived via Louvain community detection); for edges, they measure semantic relatedness through the cosine similarity of database embeddings and any confidence scores from the similarity prediction that formed the edge. Lastly, Statistical Properties offer quantitative insights: for nodes, this covers sparsity and cardinality; for edges, these properties describe relationships based on shared data characteristics, such as the Kullback-Leibler (KL) divergence of shared column distributions and the ratio of overlapping values within these columns.

Table 4: Summary of Node (Database) Properties in WikiDBGraph

Table 5: Summary of Edge (Database Relationship) Properties in WikiDBGraph ($\tau = 0.94$). ¹The schema graph is a directed graph that connects tables (nodes) with foreign keys (edges). ²The ratio of edges with lower similarities.

# 5 Experiments and Case Study

In this section, we categorize database correlations identified by WikiDBGraph into two primary types based on their similarities: feature overlap and instance overlap. We present two examples of newly discovered correlated database pairs, i.e., databases with distinct TIDs that were not explicitly linked in Wikidata or WikiDBs. Furthermore, we conduct collaborative learning experiments on these pairs, showing performance benefits compared to training on individual databases. Model and Metric.
We utilize XGBoost [10] with 100 trees, a learning rate of 0.1, and a maximum depth of 6, chosen for XGBoost's strong performance on tabular data. In the evaluation, the data is split 8:2 into training and testing sets. Weighted averages of Precision, Recall, and F1-Score are used as evaluation metrics to account for potential class imbalances in the multi-class setting. Variance is not reported, as XGBoost is deterministic in our setting.

# 5.1 Top Similarity: Feature Overlap

An example of feature overlap between database 02799 (Trypanosoma_Cruzi_Orthologs_Db_30, TID: Q62194121) and database 79665 (TrypanosomaCruziOrthologs225, TID: Q62256692) reveals a high degree of similarity, with an embedding similarity score (EmbedSim) of 1.0. Both databases exhibit nearly identical schemas, each possessing two tables with the same 24 columns (thus presenting overlapped features) and corresponding foreign key relationships. The primary distinction lies in their numbers of rows (data instances) and distributions: dataset 02799 contains 282 rows in both its tables, whereas dataset 79665 has 514 and 304 rows in its respective tables. To empirically validate the benefits of leveraging combined instances in this feature-overlap scenario, a multi-class classification task was designed to predict the details_encoded_protein label, which comprises six distinct classes. We first left-join each database into a single table. Then, models are trained under three conditions: (1) solely on dataset 02799, (2) solely on dataset 79665, and (3) on a combined dataset (Combined) comprising instances from both. The performance of these models was subsequently evaluated on test sets derived from both individual datasets. The results, summarized in Table 6, clearly demonstrate the advantages of utilizing Combined.
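The support-weighted metrics used in these evaluations can be sketched from scratch as follows. This is an illustrative reimplementation, equivalent in spirit to scikit-learn's `average='weighted'` option; the toy labels are not from our datasets.

```python
def weighted_f1(y_true, y_pred):
    """Support-weighted average F1 over the classes present in y_true."""
    classes = set(y_true)
    total = len(y_true)
    score = 0.0
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        # Weight each per-class F1 by the class's share of true labels
        score += (sum(t == c for t in y_true) / total) * f1
    return score

y_true = ["a", "a", "b", "b", "b", "c"]
y_pred = ["a", "b", "b", "b", "c", "c"]
f1 = weighted_f1(y_true, y_pred)
```

Weighting by support keeps a rare class from dominating the average, which matters for imbalanced label distributions like the six-class protein task.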
These findings underscore the practical benefits of instance expansion in feature-overlap scenarios, leading to more robust and generalizable models. Table 6: XGBoost performance in feature-overlapped scenario Feature-overlapped collaborative learning, where datasets share a common feature space but differ in their specific instances, presents significant opportunities for various machine learning paradigms. It is a foundational scenario for horizontal federated learning [46], enabling collaborative model training across distributed datasets without direct data exchange. Such feature overlap readily supports incremental learning, allowing models trained on one dataset to be efficiently updated or expanded using data from the other. It also enables effective transfer learning within the same problem domain, where a model pre-trained on one dataset can be leveraged for tasks on the other, particularly if their underlying data distributions are related. # 5.2 High Similarity: Instance Overlap An instance-overlap collaborative learning scenario is demonstrated by comparing two gene-centric databases: database 00381 (TrypanosomaCruziOrthologs1, TID: Q62194121) and database 48804 (Ortholog_Lpg1l_Genomic_Data, TID: Q62256692). These databases exhibit a notable embedding similarity (EmbedSim) of 0.95. Our analysis revealed that two specific tables, GeneOrthologsAnnotations from database 00381 (referred to as the gene table) and Ortholog_Lpg1l_Protein_Annotations from database 48804 (the protein table), share a significant number of overlapping instances based on the GeneId column. The protein table, containing properties of various proteins, is considered the primary table for this analysis. The gene table, providing detailed profiles for each specific gene, serves as the secondary, enriching table.
For our classification task, we selected the lpg_Uni_Prot_Protein_Id column from the protein table as the target label, which represents 49 distinct categories of proteins. The underlying hypothesis is that integrating detailed gene information from the secondary (gene) table can improve the prediction of these protein categories. To empirically validate the benefits of leveraging combined features in this instance-overlap scenario, models were trained under two conditions: (1) solely on data from database 48804 (protein table features), and (2) on a left-joined dataset (Combined) comprising features from both the protein table (DB 48804) and the gene table (DB 00381), linked by their common GeneId. Both datasets were split into training and test sets using the same indices to ensure comparable evaluation. Table 7: XGBoost performance in instance-overlapped scenario The results, summarized in Table 7, clearly demonstrate the advantages of utilizing the combined feature set. While accuracy and recall remained the same, the combined model achieved improved precision and a higher F1-score. These findings underscore the practical benefits of feature enrichment through instance overlap, leading to more robust and generalizable models by incorporating complementary information from related datasets. Instance-overlapped collaborative learning, where different datasets contain distinct features for the same set of entities (instances), also opens up powerful avenues for advanced machine learning. This scenario aligns well with vertical federated learning [46], where models are trained by jointly leveraging complementary feature sets from multiple clients without raw data exchange. Furthermore, this configuration corresponds to a specific case of split learning [40], in which distinct feature sets are processed by different segments of a neural network architecture.
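The left join that builds the Combined feature set can be sketched with pandas; the column values here are illustrative stand-ins for the actual WikiDBs contents, with only the GeneId key name taken from the text:

```python
import pandas as pd

# Primary (protein) table: the prediction target lives here.
protein = pd.DataFrame({
    "GeneId": ["g1", "g2", "g3"],
    "protein_label": ["A", "B", "A"],
})
# Secondary (gene) table: enriching features keyed by the same GeneId.
gene = pd.DataFrame({
    "GeneId": ["g1", "g2"],
    "gene_length": [1200, 800],
})

# A left join keeps every protein-table instance; unmatched rows get NaN
# features, mirroring the partial-overlap situation discussed in Sec. 6.
combined = protein.merge(gene, on="GeneId", how="left")
```

Splitting `combined` with the same row indices as the protein-only baseline then yields the comparable train/test evaluation described above.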
Such instance-overlapped data also provides a suitable foundation for ensemble learning [16], which investigates how models, potentially trained on these varied feature subsets, can be effectively combined to improve overall predictive performance or robustness. # 6 Limitation Despite WikiDBGraph’s identification of inter-database relationships facilitating collaborative learning, directly applying existing algorithms poses significant challenges, highlighting research needs in dataset curation and robust algorithm development. Feature Alignment. Precisely aligning columns across databases in feature-overlap scenarios is non-trivial. Identifying semantically equivalent columns in our case studies required manual inspection or LLM assistance, a task further complicated by differing column orders. Developing an automated, high-precision pipeline for column correspondence identification is a crucial future step. Missing Instances and Partial Overlap. In instance-overlap scenarios, few data instances are reliably linkable across databases using common identifiers, leaving substantial unaligned (yet potentially valuable) data in client datasets. Leveraging these unaligned instances effectively demands advanced collaborative learning algorithms. While related areas like semi-supervised learning show some progress [49], dedicated efforts are required for this specific challenge. These limitations hinder the direct evaluation of some existing collaborative learning algorithms on the entirety of WikiDBGraph. However, this also underscores the necessity for algorithms robust to common real-world data imperfections. WikiDBGraph therefore serves not only as a resource but also as a benchmark to foster the development of such practical algorithms.
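As a baseline for the feature-alignment problem discussed above, syntactic column matching can be sketched with the standard library's fuzzy string matcher; this naive approach (our illustration, with made-up column names) is exactly what falls short when columns are semantically but not lexically similar:

```python
import difflib

def align_columns(cols_a, cols_b, cutoff=0.8):
    """Greedy fuzzy match of column names from schema A onto schema B."""
    mapping = {}
    for col in cols_a:
        # Best lexical candidate in B above the similarity cutoff, if any.
        match = difflib.get_close_matches(col, cols_b, n=1, cutoff=cutoff)
        if match:
            mapping[col] = match[0]
    return mapping

# Toy schemas; semantically equivalent but lexically distant columns
# would be missed, motivating the LLM-assisted pipeline mentioned above.
mapping = align_columns(["gene_id", "protein_name", "notes"],
                        ["geneid", "protein_names", "row_count"])
```

Here `gene_id` and `protein_name` are matched, while `notes` finds no counterpart, leaving its data unaligned.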
Tabular data, ubiquitous and rich in informational value, is an increasing focus for deep representation learning, yet progress is hindered by studies centered on single tables or isolated databases, which limits model capabilities due to data scale. While collaborative learning approaches such as federated learning, transfer learning, split learning, and tabular foundation models aim to learn from multiple correlated databases, they are challenged by a scarcity of real-world interconnected tabular resources. Current data lakes and corpora largely consist of isolated databases lacking defined inter-database correlations. To overcome this, we introduce WikiDBGraph, a large-scale graph of 100,000 real-world tabular databases from WikiData, interconnected by 17 million edges and characterized by 13 node and 12 edge properties derived from database schemas and data distributions. WikiDBGraph's weighted edges identify both instance- and feature-overlapped databases. Experiments on these newly identified databases confirm that collaborative learning yields superior performance, thereby offering considerable promise for structured foundation model training while also exposing key challenges and future directions for learning from interconnected tabular data.
[ "cs.DB", "cs.LG" ]
# I. INTRODUCTION Pedestrian trajectory prediction is a fundamental task in domains such as robotics and surveillance systems aimed at enhancing public safety and service efficiency. Accurate forecasting of human movement allows these systems to operate safely and efficiently in dynamic and crowded environments. Although recent advances have significantly improved prediction accuracy, many existing methods focus primarily on social interactions among pedestrians, often neglecting the environmental context that critically influences human movement. In real-world scenarios, pedestrians navigate complex environments shaped by buildings, roads, sidewalks, and other obstacles. These scene elements impose natural constraints on human movement, as people tend to follow walkable paths and avoid physical barriers. Traditional trajectory prediction models that overlook such constraints often generate physically implausible paths, for instance, trajectories that pass through walls or restricted areas. This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2024-00342176 and RS-2025-02217000) and an INHA UNIVERSITY Research Grant (2023). \*Corresponding author: Inwook Shim. 1Juho Bai is with the College of Economics and Business, Hankuk University of Foreign Studies, Republic of Korea (juho@hufs.ac.kr). 2Inwook Shim is with the Department of Smart Mobility Engineering, Inha University, Republic of Korea (iwshim@inha.ac.kr). Fig. 1: Overview of SceneAware. Inputs: the observed trajectory and an MLLM-generated binary mask that distinguishes walkable from non-walkable areas; trajectory prediction with scene structure information. (\*The MLLM is used only for training.) Recent advances in deep learning have led to sophisticated trajectory prediction models, from social interaction-aware approaches [1], [2] to graph-based methods [3]–[5].
More recently, language-based approaches such as LMTrajectory [6] have emerged, leveraging large language models (LLMs) to improve motion reasoning. Despite their impressive performance, these approaches still lack explicit mechanisms for incorporating environmental constraints. As a result, they continue to produce unrealistic predictions that are misaligned with the actual physical world. Moreover, applying LLMs at every frame incurs substantial computational cost, making them impractical for real-time systems such as autonomous robots or surveillance platforms. In this paper, we propose SceneAware, a novel framework that explicitly incorporates scene structure understanding into trajectory prediction. Our method employs a pretrained and frozen Vision Transformer (ViT) scene encoder to distinguish between walkable and non-walkable areas, guiding the model to generate physically plausible trajectories. Figure 1 illustrates the overview of our SceneAware framework. The architecture integrates a Transformer-based trajectory predictor and is guided by a frozen Multi-modal Large Language Model (MLLM) that imposes walkability-based penalties on the decoder’s outputs when predicted trajectories violate inferred physical constraints. Importantly, the MLLM is used only during training, allowing the model to learn scene-aware constraints without requiring MLLM inference at test time. This design enables efficient and physically grounded trajectory prediction, making SceneAware suitable for real-time applications such as surveillance and robotics. We develop both deterministic and stochastic variants of our model, supporting both single- and multi-path forecasting. Experiments on the ETH/UCY benchmarks demonstrate that incorporating scene context significantly improves prediction accuracy. Our results show that explicitly encoded scene information provides sufficient spatial understanding without requiring complex pre-processing.
Our contributions can be summarized as follows:
• Scene-aware trajectory prediction: We propose a novel pedestrian trajectory prediction framework that jointly encodes pedestrian trajectory and scene structure, enabling physically plausible and context-aware predictions.
• MLLM-assisted scene encoding: Our method leverages an MLLM to generate a binary walkability mask from a single scene image, eliminating the need for manual annotations or per-frame computation.
• Alternative evaluation approach: We introduce a new evaluation that categorizes pedestrian movement patterns for more fine-grained performance analysis.
# II. RELATED WORK # A. Traditional Trajectory Prediction Early approaches to trajectory prediction rely on physics-based models [7], [8] that impose pedestrian interactions using human-designed force functions. These methods, while interpretable, struggle to capture the complexity of pedestrian behavior in the real world. The Social Force Model [9] and its extensions [10], [11] define forces of attraction, repulsion, and friction to simulate movement. With the advent of deep learning, data-driven approaches gained prominence. Recurrent Neural Networks (RNNs), particularly Long Short-Term Memory (LSTM) networks, serve as the backbone of trajectory prediction models [1], [2]. Social-LSTM [1] introduces a social pooling mechanism to model interactions between pedestrians, and Social-GAN [2] employs adversarial training to generate more realistic trajectories. Other early deep learning models include those based on vanilla RNNs [12] and encoder-decoder architectures [13]. These models primarily focus on learning temporal dependencies in individual trajectories and simple forms of social interaction. However, these early methods also lack robust mechanisms for incorporating scene context, leading to potentially unrealistic predictions. # B.
Graph-Based and Attention-Based Models Recent years have seen the rise of graph-based approaches [3]–[5] that model pedestrian interactions as graphs. STGAT [14] and SGCN [15] utilize graph attention and graph convolutional networks, respectively, to model the spatiotemporal relationships between pedestrians. More sophisticated graph-based models consider heterogeneous interactions [16] and future intent [17]. Attention mechanisms have also been widely adopted in trajectory prediction [18]–[20]. These approaches enable models to focus on relevant parts of the input, such as specific pedestrians or scene elements, improving prediction accuracy. Trajectron++ [21] uses a graph-based approach with attention mechanisms to model interactions and predict multiple future trajectories. Similarly, Social Attention [18] allows each pedestrian to attend to their relevant neighbors. TPNet [22] further refines social attention by incorporating temporal attention. While these graph and attention mechanisms significantly enhance the modeling of complex social dynamics, they often treat the physical environment implicitly and still lack mechanisms to enforce hard constraints. # C. Environment-Aware Prediction Recognizing the limitations of social-only trajectory prediction, many approaches seek to incorporate environmental context. Initial efforts often do so implicitly, utilizing basic environmental representations like occupancy grids [23] and rasterized maps [24], or learning interaction models such as cost maps via inverse reinforcement learning [25]. Other approaches continue to integrate context implicitly, employing techniques like scene attention over visual features [19] and pre-processed inputs such as semantic segmentation maps [26] and updated occupancy grids [27].
Most recently, the language-based LMTrajectory [6] has emerged, recasting the trajectory prediction problem as a prompt-based question-answering task that leverages an LLM to understand and predict complex movement patterns, showing impressive performance across benchmark datasets. LMTrajectory encodes numerical trajectories into textual prompts, allowing language models to interpret pedestrian movement through natural language understanding. While this leverages the contextual reasoning strengths of large language models, it lacks explicit mechanisms to infer or enforce the geometric and physical constraints of the environment. In contrast, SceneAware addresses this limitation by directly incorporating scene structure constraints, offering a more practical and computationally efficient approach with improved predictive performance. Moving beyond these implicit methods, research has progressed towards explicitly modeling scene constraints and integrating them more directly into the prediction pipeline. One direction involves using conditional generative models (CVAEs) explicitly conditioned on scene context [12] to ensure physical plausibility. Subsequently, researchers developed methods to more tightly integrate environmental features through spatial refinement modules operating alongside recurrent trajectory encoders [28], and convolutional pooling mechanisms designed to jointly process social and scene information [29]. Further advancements include approaches using scene semantics to generate plausible goal locations [30] that condition trajectory generation, and graph-based methods explicitly representing the environment within the graph structure [31] interacting with agents. This body of work demonstrates a clear trend towards leveraging explicit scene understanding and constraints. Our SceneAware builds upon the insights from these works but adopts a distinct strategy centered on task-specific feature learning using a focused representation of the environment.
Instead of utilizing richer, but potentially more complex and computationally intensive, representations such as detailed heatmaps [32], semantic segmentation maps [26], or depth information [33], SceneAware leverages a computationally efficient binary walkability mask. These masks distill the environment down to the essential geometric constraints relevant for navigation. Fig. 2: Overview of the SceneAware architecture: a Transformer trajectory encoder $\Phi_E$, a ViT scene encoder $\psi_E$ with an MLLM-generated binary walkability mask $M \in \mathbb{R}^{H \times W}$ and collision evaluator (training only), and deterministic ($\Phi_D$) and stochastic CVAE ($\Phi_S$) Transformer decoders, each with three four-head self-attention layers, conditioned on the fused context from $f_{synth}$. In addition, several related works have explored other directions, such as improving computational efficiency [34], refining goal estimation [35], [36], enabling continual learning [37], and adapting to diverse environments [38]. # III. METHODOLOGY # A. Problem Formulation Given a sequence of observed positions $X_{1:T_O} = \{x_1, x_2, \ldots, x_{T_O}\}$ for a pedestrian, where $x_t \in \mathbb{R}^2$ represents the 2D coordinates at time step $t$, the trajectory prediction task aims to forecast the future positions $Y_{1:T_P} = \{y_1, y_2, \ldots
, y_{T_P}\}$, where $y_t \in \mathbb{R}^2$ are the coordinates at the future time steps $t = T_O{+}1, \ldots, T_O{+}T_P$. Here, the pedestrian’s position is expressed in the input image coordinate system. Our objective is to generate future trajectories that align with observed motion patterns while respecting the physical scene structure constraints of the environment. The model architecture, illustrated in Fig. 2, is intentionally designed with three core components to effectively integrate observed trajectories and environmental context: 1) Trajectory Encoder: captures how the pedestrian has been moving. 2) Scene Encoder: understands the physical constraints of the environment. 3) Trajectory Decoder: combines motion history and scene context to generate future steps. This separation allows each component to specialize. Specifically, we guide the MLLM to convert the input image into a binary scene mask that clearly indicates walkable regions. This helps the decoder generate future trajectories based on clearer and more explicit environmental information. # B. Trajectory Encoder Our model utilizes a Transformer-based encoder [34], [39], leveraging its advantages for modeling complex temporal dependencies and long-range interactions in the sequence of observed positions. The trajectory encoder $\Phi_E$ processes the raw trajectory coordinates by first transforming each observed 2D coordinate $x_t$ into a higher-dimensional embedding vector $d_t$ using a fully connected layer with non-linear activation. This embedding step allows the model to learn a richer representation of spatial positions compared to using raw coordinates directly.
The complete embedded trajectory sequence $d_{1:T_O}$ is then processed by the Transformer encoder to produce the final encoded representation: $$ e_{T_O} = \Phi_E(d_{1:T_O}), $$ where $e_{T_O} \in \mathbb{R}^{d_e}$ denotes the encoded observed trajectory feature. The Transformer’s self-attention mechanism weighs the importance of different positions, focusing dynamically on the parts most relevant for prediction. This embedding transforms coordinates into a high-dimensional representation that generalizes well across diverse environments without overfitting. In our encoder-decoder architecture, $e_{T_O}$ is combined with scene structure information before being fed into the decoder, providing integrated context for accurate and scene-aware trajectory prediction. # C. Scene Encoder To encode the scene structure information, our framework employs a pretrained ViT as the scene encoder to generate the feature vector $s \in \mathbb{R}^{d_s}$. The ViT captures spatially global feature relationships across the entire image, allowing the decoder to reason effectively about how the observed trajectories relate to important environmental features such as walkable corridors, obstacles, and boundaries. The dimension of the scene embedding, $d_s$, is set to match the trajectory embedding dimension $d_e$, enabling straightforward fusion with the trajectory features in the decoder. Fig. 3: Results of Scene Map to Binary Walkable Mask conversion across all five datasets. Each pair shows the original top-down view (top) and the corresponding binary walkable map (bottom) generated by the MLLM. White areas in the binary masks represent walkable regions, while black indicates non-walkable areas. Note how the binary masks simplify complex visual information into clear environmental constraints, focusing only on regions relevant for pedestrian navigation.
In addition to the ViT-based scene encoding, our SceneAware framework includes a penalty mechanism that discourages implausible trajectory predictions by referencing a binary walkability mask. This penalty is applied alongside the primary loss function, as described in Sec. III-E. We design prompt-based queries such as “Generate a binary walkability mask from this scene image, with white for walkable areas and black for obstacles.” that guide the MLLM [40] to distinguish walkable from non-walkable regions. This approach enables the model to learn scene structure representations without human supervision, while still enforcing physical constraints essential for realistic path prediction. Examples of the generated binary walkability masks are shown in Fig. 3. # D. Trajectory Decoder The decoder’s primary role is to generate the future trajectory sequence $Y$ utilizing two input sources: the encoded motion patterns in the trajectory context vector $e_{T_O}$ from the Trajectory Encoder, and the encoded scene structure in the scene feature vector $s$ from the Scene Encoder. The trajectory context $e_{T_O}$ is concatenated with the scene feature vector $s$. A subsequent linear transformation yields a unified context embedding that conditions the decoder for generating spatially consistent predictions. Following prior approaches [2], [19], our model includes both deterministic and stochastic decoders. Deterministic Model integrates the trajectory context $e_{T_O}$ and scene context $s$ to predict the most likely future path sequence. First, these contexts are concatenated, denoted as $[e_{T_O}; s]$. A linear fusion layer $f_{synth}$ then processes this combined vector to produce a unified context embedding $c = f_{synth}([e_{T_O}; s])$.
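This fusion step reduces to a concatenation followed by a single linear map; a minimal numpy sketch, with illustrative dimensions that are our assumptions rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
d_e = d_s = 32          # trajectory and scene embedding dims (matched, Sec. III-C)
d_c = 64                # unified context dim

e_T = rng.standard_normal(d_e)   # encoded trajectory feature e_{T_O}
s = rng.standard_normal(d_s)     # ViT scene feature vector s

# f_synth as one linear layer: c = W [e_{T_O}; s] + b
W = rng.standard_normal((d_c, d_e + d_s)) * 0.02
b = np.zeros(d_c)
c = W @ np.concatenate([e_T, s]) + b   # decoder conditioning embedding
```

In the stochastic variant, the same fusion is applied with the sampled latent $z$ in place of $e_{T_O}$.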
The decoder transformer $\Phi_D$ takes this embedding to predict the future path sequence $\hat{Y}_P = \Phi_D(c)$, where $\hat{Y}_P = (\hat{y}_{T_O+1}, \ldots, \hat{y}_{T_O+T_P})$ with $\hat{y}_t \in \mathbb{R}^2$ representing the predicted position at each future timestep $t$. Stochastic Model adopts the conditional variational autoencoder (CVAE) approach [21] to estimate multiple plausible future paths. The CVAE uses the trajectory context vector $e_{T_O}$ to parameterize a Gaussian distribution $\mathcal{N}(\mu, \sigma^2)$ over a latent variable $z \in \mathbb{R}^{d_z}$, where $\mu = f_\mu(e_{T_O})$ and $\log \sigma^2 = f_\sigma(e_{T_O})$. Here, $f_\mu$ and $f_\sigma$ are fully connected layers mapping $e_{T_O}$ to the mean and log-variance of the latent distribution. To generate a prediction sample, we first sample a latent variable $z \sim \mathcal{N}(\mu, \sigma^2)$. This latent variable is then concatenated with $s$, denoted as $[z; s]$. A dedicated linear synthesis layer $f_{synth}$ synthesizes the decoder conditioning embedding $c_z = f_{synth}([z; s])$ from this combined vector. Finally, the decoder transformer $\Phi_S$ generates a future path sequence sample $\hat{Y}_P^{(k)} = \Phi_S(c_z)$. Multiple samples can be generated by repeating the sampling and decoding steps. # E. Collision Penalty and Loss Function Our training objective is designed to enable end-to-end learning of both trajectory prediction and scene structure understanding.
To enforce adherence to the physical constraints of the environment, we introduce a collision penalty that discourages predicted trajectories from intersecting with non-walkable regions. Given the binary walkability mask $M \in \{0, 1\}^{H \times W}$, the collision penalty is defined as: $$ \mathcal{L}_{\mathrm{C}} = \lambda_{\mathrm{C}} \sum_{t=T_O+1}^{T_O+T_P} \mathcal{C}(\hat{y}_t, M), $$ where $\mathcal{C}(\hat{y}_t, M) = 1$ if the predicted position $\hat{y}_t$ falls within non-walkable areas ($M < 0.5$) or outside the image bounds, and 0 otherwise. The hyperparameter $\lambda_{\mathrm{C}}$ controls the trade-off between trajectory accuracy and collision avoidance. Deterministic Model. We use the standard mean squared error (MSE) loss between the predicted absolute positions $\hat{y}_t$ and the ground-truth absolute positions $y_t$ over the prediction horizon, as used in [37]: $$ \mathcal{L}_{\mathrm{D}} = \frac{1}{T_P} \sum_{t=T_O+1}^{T_O+T_P} ||\hat{y}_t - y_t||^2. $$ TABLE I: Performance comparisons for deterministic and stochastic models. All values are in meters. The symbol ‘-’ indicates that the performance evaluation is not reported. The best performance is highlighted in bold, and underline indicates the best performance among the baseline methods, excluding our method. This loss directly penalizes the Euclidean distance between the prediction and the ground truth at each future step, encouraging the model to produce a single trajectory that closely matches the actual future path. The final objective function is defined as the sum of the deterministic loss and the collision penalty term: $$ \mathcal{L}_{\mathrm{D+C}} = \mathcal{L}_{\mathrm{D}} + \mathcal{L}_{\mathrm{C}} $$ Stochastic Model.
The loss function needs to achieve two goals: ensuring the predictions are accurate and encouraging diversity among the generated samples, while also regularizing the latent space. We adopt the compound loss used in [36]: $$ \mathcal{L}_{\mathrm{S}} = \mathcal{L}_{\mathrm{best}} + \lambda_{KL} \mathcal{L}_{KL}, $$ where $\mathcal{L}_{\mathrm{best}}$ is the best-of-$K$ loss, defined as the minimum MSE over the $K$ generated samples: $$ \mathcal{L}_{\mathrm{best}} = \min_{k \in \{1, \dots, K\}} \frac{1}{T_P} \sum_{t=T_O+1}^{T_O+T_P} ||\hat{y}_t^k - y_t||^2. $$ This loss encourages the model to produce at least one sample trajectory $\hat{y}^k$ that is close to the ground truth $y_t$, effectively promoting diversity. It acknowledges that predicting the single exact future is hard, but covering the ground truth with one of the samples is achievable. $\mathcal{L}_{KL}$ is the Kullback-Leibler (KL) divergence between the learned latent distribution $\mathcal{N}(\mu, \sigma^2)$ and a prior distribution, typically the standard normal distribution $\mathcal{N}(0, I)$: $$ \mathcal{L}_{KL} = D_{KL}(\mathcal{N}(\mu, \sigma^2) \,||\, \mathcal{N}(0, I)). $$ This KL divergence term acts as a regularizer, encouraging the learned latent distributions to stay close to the prior. This helps prevent posterior collapse, a situation in which the standard deviation $\sigma$ becomes zero, and ensures the latent space remains well-structured to support meaningful sampling.
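The loss terms above (collision indicator $\mathcal{C}$, best-of-$K$ error, and the closed-form diagonal-Gaussian KL) along with the ADE/FDE evaluation metrics can be sketched in numpy; the mask indexing convention and all shapes are our assumptions, not the paper's released code:

```python
import numpy as np

def collision_penalty(pred, mask, lam=30.0):
    """L_C: lam * count of predicted points on non-walkable or
    out-of-bounds cells. pred: (T_P, 2) pixel coords (x, y);
    mask: (H, W) binary walkability map, 1 = walkable."""
    H, W = mask.shape
    hits = 0
    for x, y in pred:
        r, c = int(round(y)), int(round(x))
        if not (0 <= r < H and 0 <= c < W) or mask[r, c] < 0.5:
            hits += 1
    return lam * hits

def best_of_k_loss(samples, gt):
    """L_best: min over K samples of the mean squared L2 error.
    samples: (K, T_P, 2), gt: (T_P, 2)."""
    return ((samples - gt) ** 2).sum(-1).mean(-1).min()

def kl_to_standard_normal(mu, log_var):
    """L_KL: KL( N(mu, diag(exp(log_var))) || N(0, I) ) in closed form."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def ade(pred, gt):
    """Average Displacement Error: mean L2 distance over future steps."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def fde(pred, gt):
    """Final Displacement Error: L2 distance at the last step."""
    return np.linalg.norm(pred[-1] - gt[-1])

gt = np.zeros((12, 2))                       # T_P = 12 ground-truth steps
samples = np.stack([gt + 1.0, gt.copy()])    # K = 2; second sample is exact
mask = np.ones((4, 4)); mask[0, :] = 0       # top image row non-walkable
```

With these toy values, `best_of_k_loss(samples, gt)` is 0 because one sample covers the ground truth, and each predicted point landing on the masked-out top row adds $\lambda_\mathrm{C} = 30$ to the collision penalty.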
The hyperparameter $\lambda_{KL}$ controls the trade-off between prediction accuracy and diversity, represented by the loss term $\mathcal{L}_{\mathrm{best}}$, and latent space regularization $\mathcal{L}_{KL}$. The final loss combines the collision penalty with the stochastic prediction loss: $$ \mathcal{L}_{\mathrm{S+C}} = \mathcal{L}_{\mathrm{best}} + \lambda_{KL} \mathcal{L}_{KL} + \mathcal{L}_{\mathrm{C}} $$ # IV. EXPERIMENTS In this section, we conduct quantitative evaluations to demonstrate the effectiveness of our SceneAware approach, examining both deterministic and stochastic models. We also compare our performance with state-of-the-art methods. # A. Implementation Details We implement the SceneAware network using the PyTorch framework [43], with the Adam optimizer [44] and a learning rate of 0.001. For a fair performance comparison with other state-of-the-art methods, the observation time $T_O$ is 8 frames (3.2 s) and the prediction time $T_P$ is 12 frames (4.8 s), following standard practice in the field [1], [2]. The collision penalty weight $\lambda_{\mathrm{C}}$ is set to 30.0 across all experiments. For the stochastic model, the KL divergence weight $\lambda_{KL}$ and the number of generated samples $K$ are set to 0.1 and 20, respectively. The details of our SceneAware network are illustrated in Fig. 2 and in the GitHub repository https://github.com/juho127/SceneAware. For evaluation, we utilize the standard benchmark datasets widely used in pedestrian trajectory prediction: ETH [8] and UCY [45], which contain pedestrian movements recorded in diverse real-world environments. Following common practice in prior state-of-the-art methods, we adopt $K$-fold validation ($K{=}5$) to ensure fair evaluation. # B.
Quantitative Evaluation In this evaluation, we use common quantitative measures of trajectory prediction: Average Displacement Error (ADE) and Final Displacement Error (FDE). Table I presents a comprehensive performance comparison between our proposed SceneAware model and existing state-of-the-art methods across both deterministic and stochastic prediction. The compared algorithms span multiple architectural approaches, categorized below by their fundamental design principles. TABLE II: Trajectory category-wise performance comparison between deterministic and stochastic models. Values represent ADE performance in meters using a weighted average across all trajectory samples. Bold values indicate the best performance improvement for each category. Fig. 4: Examples of pedestrian trajectory categorization according to distinct pedestrian movement patterns. 1) RNN/LSTM-based Social Models: Early approaches like Social-LSTM [1] and Social-GAN [2] leverage recurrent architectures with specialized social pooling mechanisms to model pedestrian interactions. These models lack explicit environmental understanding in their structure. Our SceneAware model reduces error by approximately $85\%$ compared to Social-LSTM, highlighting the critical importance of incorporating scene context into prediction models. 2) Graph-based Models: The second category comprises graph-based approaches including Social-STGCNN [3], DMRGCN [4], NPSN [5], and STGAT [14], which model pedestrians and their interactions using graph structures. While effective at capturing interpersonal dynamics, these graph models still lack the crucial environmental information needed for accurate trajectory prediction. SceneAware’s explicit scene modeling provides approximately $67\%$ error reduction compared to the best graph-based model (NPSN), demonstrating the advantage of clear environmental representation.
3) Goal and Scene-oriented Models: More recent approaches incorporate goal estimation and some degree of scene understanding, such as BiTraP-NP [35], DACG [36], DGCN $\dot { + }$ STDec [41], and GA-STT [42]. DGCN $+$ STDec semantically disentangles graph information into temporal factors (e.g., velocity) and spatial factors (e.g., interpersonal positioning). They leverage interpersonal information to implicitly reflect environmental constraints. Our SceneAware model improves upon DGCN $^ +$ STDec by approximately $5 8 \%$ for deterministic predictions, demonstrating that our direct binary walkability representation provides more effective scene constraints than the implicit or partial scene understanding in these approaches. Fig. 5: The number of samples and the distribution of trajectory categories across the benchmark datasets. 4) Language-based Models: LMTrajectory [6] represents an innovative approach using large language models to interpret trajectory patterns. Despite leveraging powerful contextual understanding, this approach struggles to fully capture geometric constraints from textified coordinates alone. SceneAware provides approximately $7 7 \%$ improvement over LMTrajectory in deterministic prediction, suggesting that explicit geometric representation of walkable areas offers advantages that even sophisticated language models cannot match through textbased coordinates. 5) Scene Representation Analysis: Our quantitative evaluation also reveals important insights about scene structure constrained approaches. In the Table. I, SceneAware (raw) utilizes scene information without generating the binary walkable mask map. SceneAware (mask) tends to perform better the SceneAware (raw) with a $3 1 \%$ reduction in the average error. This shows that simplified binary walkability masks provide clearer, more explicit environmental constraints than raw scene images with potentially distracting visual details. Fig. 
6: Qualitative comparisons of SceneAware and Social-GAN stochastic model across all trajectory categories: Blue line (observed trajectory), Green line (ground truth future) and Red lines (predicted samples, the best one is bold.) Overall, our SceneAware model consistently outperforms all previous approaches across most benchmark datasets, demonstrating the critical importance of explicit scene understanding in trajectory prediction systems. # C. Analysis of Trajectory Categorization We analyze how the performance of the model varies in different patterns of pedestrian movement. As illustrated in Fig. 4, the benchmark datasets include four distinct trajectory categories: Straight, Turning, High-Variance, and Circling. The number of samples per category is different within each dataset (see Fig. 5.) Such imbalanced distributions make it difficult to evaluate performance trends solely based on overall metrics. In fact, Table I does not allow categorywise performance analysis, highlighting the need for a more fine-grained evaluation. To examine the performance across these categories, we compare our SceneAware model with the baseline $\mathrm { D G C N + S T D e c }$ [41] and Social-GAN [2] in Table II. The results demonstrate that SceneAware achieves substantial improvements across all trajectory categories for both deterministic and stochastic prediction. SceneAware (mask) consistently outperforms the previous works across all datasets and categories. Notably, our SceneAware model maintains stable performance across all categories, which indicates that its encoded scene understanding effectively captures diverse pedestrian behaviors and supports robust prediction across varying levels of trajectory complexity. # D. 
Qualitative Evaluation We select Social-GAN [2] for qualitative comparison, as it represents the best performing model among publicly available algorithms that do not explicitly use scene information, providing an ideal baseline to demonstrate the effectiveness of our scene structure constraint. Figure 6 shows clear differences between the stochastic predictions of SceneAware and Social-GAN. Through our explicit environmental constraint learning, SceneAware’s stochastic distributions converge more directionally. In structured environments like ETH, HOTEL, and ZARA while Social-GAN’s predicted samples (red dashed lines) exhibit wide dispersions that violate physical constraints, SceneAware maintains appropriate uncertainty while respecting environmental boundaries.
Accurate prediction of pedestrian trajectories is essential for applications in robotics and surveillance systems. While existing approaches primarily focus on social interactions between pedestrians, they often overlook the rich environmental context that significantly shapes human movement patterns. In this paper, we propose SceneAware, a novel framework that explicitly incorporates scene understanding to enhance trajectory prediction accuracy. Our method leverages a Vision Transformer~(ViT) scene encoder to process environmental context from static scene images, while Multi-modal Large Language Models~(MLLMs) generate binary walkability masks that distinguish between accessible and restricted areas during training. We combine a Transformer-based trajectory encoder with the ViT-based scene encoder, capturing both temporal dynamics and spatial constraints. The framework integrates collision penalty mechanisms that discourage predicted trajectories from violating physical boundaries, ensuring physically plausible predictions. SceneAware is implemented in both deterministic and stochastic variants. Comprehensive experiments on the ETH/UCY benchmark datasets show that our approach outperforms state-of-the-art methods, with more than 50\% improvement over previous models. Our analysis based on different trajectory categories shows that the model performs consistently well across various types of pedestrian movement. This highlights the importance of using explicit scene information and shows that our scene-aware approach is both effective and reliable in generating accurate and physically plausible predictions. Code is available at: https://github.com/juho127/SceneAware.
[ "cs.CV", "cs.AI" ]
# 1 Introduction Image restoration (IR) addresses the reconstruction of high-quality images from degraded inputs, with super-resolution and inpainting representing its fundamental tasks. Traditional IR techniques, such as bicubic or B-spline[5] methods, often produce blurry results with compromised details, whereas modern deep learning approaches have demonstrated remarkable success in preserving spatial and spectral information through sophisticated architectures. Significant advancements include Convolutional Neural Networks (CNNs) [24][23] which play a pivotal role in advancing image restoration by utilizing residual connections and multiscale learning to aggregate local features effectively. Further improvements are achieved through attention mechanisms that capture long-range dependencies to refine feature representations, along with State-Space Modeling (SSM)-based methods[6] such as State-Space 2D (SS2D)[20] which introduce linear attention for efficient sequential modeling, thereby achieving superior computational scalability and performance. Moreover, the DiffIR model[11] adopts diffusion models (DMs)[2] as an iterative generative process[8] to progressively denoise images from Gaussian noise to recover high-fidelity outputs. Table 1. Computational complexity and parallelism comparison. Despite the significant success of image restoration models[8][39][40], distinct hierarchical structures in certain imagery challenge conventional methods: (1) Ineffective feature fusion occurs as CNNs have limited receptive fields, Transformers incur quadratic costs with local cross-shaped attention, and statespace models (SSMs)[6] like Mamba suffer from edge blurring and artifacts. (2) High computational overhead persists: Transformer-based super-resolution models exhibit quadratic complexity, while linear attention[7] and SSM-based methods[6] are hampered by sequential processing and poor memory access patterns. 
As Table 1 shows, SwinIR[21]/HAT[22] $( O ( L ^ { 2 } d ) )$ require global parallelism but incur high FLOPs, MambaIR[20] variants $( O ( L d ^ { 2 } ) )$ need $L$ sequential steps with no parallelism, and our approach achieves efficient chunk-wise parallelism. We propose DiffRWKVIR with three innovations: (1) Omni-Scale 2D State Evolution, which is inspired by Receptance Weighted Key Value (RWKV)[17, 18, 4, 19] and enables global contextual awareness via hierarchical branches and location-dependent parameterization with linear complexity, (2) Chunk-Optimized Flash Processing that reduces computational overhead through contiguous chunk processing inspired by Flash Linear Attention mechanism[3], achieving $3 . 2 \times$ faster intra-chunk parallelism $O ( L C d )$ complexity, $L / C$ chunks in Table 1, (3) Prior-Guided Efficient Diffusion which is initially encouraged by DiffIR[11] but proposed work proves 45% less training and inference time than DiffIR, and solves the computational inefficiency of conventional diffusion models by extracting critical Image Prior Representation (IPR) in merely 5-20 steps. # 2 Preliminaries and Proposed Mechanisms This work introduces a novel framework that synergizes Test-Time Training (TTT) with Denoising Diffusion Probabilistic Models (DDPMs) to address dynamic degradation challenges in image super-resolution. The integration enables real-time adaptation to unseen distortions during inference while leveraging DDPM’s hierarchical feature learning for spatial dependency modeling. This section formalizes the core components and their theoretical foundations. # 2.1 Denoising Diffusion Probabilistic Models Denoising Diffusion Probabilistic Models (DDPMs) establish the probabilistic foundation for hierarchical feature learning through two interconnected Markov processes. The forward diffusion process systematically corrupts data by incrementally adding Gaussian noise across T steps. 
This degradation follows the transition kernel $$ q ( \mathbf { x } _ { t } | \mathbf { x } _ { t - 1 } ) = \mathcal { N } ( \mathbf { x } _ { t } ; \sqrt { 1 - \beta _ { t } } \mathbf { x } _ { t - 1 } , \beta _ { t } \mathbf { I } ) , $$ where $\beta _ { t }$ controls the noise schedule. As t approaches T, the data $\mathbf { x } _ { T }$ converges to isotropic Gaussian noise, dissolving all original structure. The reverse process aims to reconstruct the original data by learning a parameterized denoising trajectory. It iteratively refines $\mathbf { x } _ { t }$ back to $\mathbf { x } _ { 0 }$ using the conditional distribution: $$ p _ { \theta } ( \mathbf { x } _ { t - 1 } \vert \mathbf { x } _ { t } ) = \mathcal { N } ( \mathbf { x } _ { t - 1 } ; \mu _ { \theta } ( \mathbf { x } _ { t } , t ) , \varSigma _ { \theta } ( \mathbf { x } _ { t } , t ) ) , $$ where $\mu \theta$ and $\scriptstyle \sum _ { \theta }$ are predicted by a neural network trained to reverse the diffusion steps. # 2.2 State Evolution and Theoretical Formulation The proposed State Evolution mechanism is based on Test-Time Training (TTT), which enables dynamic parameter adaptation during inference, overcoming the static limitation of conventional deep learning models. By continuously refining parameters through self-supervised learning, TTT compresses historical context $\{ { \bf x } _ { i } \} _ { i = 1 } ^ { I ^ { \prime } }$ into a latent state $\mathbf { S } _ { t }$ that parameterizes a trainable model $\mathcal { F }$ . The $\mathbf { S } _ { t }$ evolves via the output prediction $\mathbf { y } _ { t } = \mathcal { F } ( \mathbf { x } _ { t } ; \mathbf { S } _ { t } )$ via gradient-based optimization: $$ \mathbf { S } _ { t } = \mathbf { S } _ { t - 1 } - \eta \nabla _ { \mathbf { S } } \mathcal { L } \big ( \mathbf { S } _ { t - 1 } ; \mathbf { x } _ { t } \big ) . 
$$ Here, $\mathcal { L }$ denotes a self-supervised loss (e.g., reconstruction error), and $\eta$ controls the adaptation rate. Building on this, the proposed linear attention mechanism establishes an efficient input-output mapping via state weight $\mathbf { S } _ { t }$ and the loss $\begin{array} { r } { \mathcal { L } = \frac { 1 } { 2 } \| \mathbf { y } _ { t } - \mathbf { x } _ { t } \mathbf { S } _ { t - 1 } ^ { \top } \| _ { 2 } ^ { 2 } } \end{array}$ . The gradient derivation yields: $$ \frac { \partial \mathcal { L } } { \partial \mathbf { S } _ { t - 1 } } = \mathbf { S } _ { t - 1 } \mathbf { x } _ { t } ^ { T } \mathbf { x } _ { t } - \mathbf { y } _ { t } ^ { T } \mathbf { x } _ { t } , $$ resulting in the compact update: $$ { \bf { S } } _ { t } = { \bf { S } } _ { t - 1 } \big ( \omega - { { \bf { x } } _ { t } } ^ { T } { \bf { x } } _ { t } \eta \big ) + { { \bf { y } } _ { t } } ^ { T } { \bf { x } } _ { t } \eta . $$ This combines TTT’s adaptability with error-driven plasticity while maintaining $\mathcal { O } ( L )$ complexity. # 2.3 2D State Evolution Module Standard state evolution processes data causally, ignoring non-local spatial dependencies in images. To address this, we extend the mechanism to 2D via multi-directional scanning (Fig. 1), capturing forward, backward, upward, and downward semantics. This fusion enables simultaneous learning of high-level abstractions and lowlevel spatial details, bridging sequential adaptation with image-specific requirements. Fig. 1. 2D State Evolution Mechanism # 2.4 Chunk-wise Iteration Acceleration To mitigate computational overhead, we adopt chunk-wise processing inspired by Flash Linear Attention. Input $\mathbf { X } \in \mathbf { R } ^ { L \times d }$ is divided into $N = \lceil L / C \rceil$ chunks of size $C$ . 
Using the WY representation for Householder matrices, the state updates as: $$ \mathbf { S } _ { t + 1 } = \mathbf { S } _ { t } + \underbrace { ( \mathbf { U } _ { t } - \mathbf { W } _ { t } \mathbf { S } _ { t } ^ { \top } ) \mathbf { K } _ { t } } _ { \Delta \mathbf { S } _ { t } } , $$ where $\mathbf { U } _ { t } , \mathbf { W } _ { t }$ derive from the $U T$ transform: $$ \mathbf { T } _ { t } = \left( \mathbf { I } + \operatorname { t r i l } ( \mathrm { d i a g } ( \beta _ { t } ) \mathbf { K } _ { t } \mathbf { K } _ { t } ^ { \top } , - 1 ) \right) ^ { - 1 } \mathrm { d i a g } ( \beta _ { t } ) . $$ Chunk outputs combine inherited states and intra-chunk attention, reducing sequential dependency from $O ( L )$ to $O ( d )$ while preserving theoretical advantages: $$ \begin{array} { r } { \mathbf O _ { t } = \mathbf Q _ { t } \mathbf S _ { t } ^ { \top } + \underbrace { \left( \mathbf Q _ { t } \mathbf K _ { t } ^ { \top } \odot \mathbf M _ { C } \right) } _ { \mathrm { i n t r a - c h u n k } } \mathbf U _ { t } . } \end{array} $$ This allows efficient computation of pseudo-values $\mathbf { U } _ { t } = \mathbf { T } _ { t } \mathbf { V } _ { t }$ and weight updates $\mathbf { W } _ { t } = \mathbf { T } _ { t } \mathbf { K } _ { t }$ entirely through batched matrix multiplications. # 3 Methodology # 3.1 Model Architecture The proposed DiffRWKVIR framework employs a two-stage architecture following DiffIR to address the fundamental challenges in image restoration. Stage 1 focuses on compact prior extraction using a U-Net backbone, while Stage 2 implements efficient prior-guided restoration through an enhanced diffusion mechanism. Fig. 2. The stage 1 architecture of our proposed DiffRWKVIR. Stage 1: Compact Prior Extraction (DiffRWKVIR $^ { s \bot }$ ): As illustrated in Fig. 2, this stage implements a U-Net structured Compact IR Prior Extraction Network (CPEN). 
The network processes input image I through convolutional layers and residual blocks, employing PixelUnshuffle for resolution adjustment. The core innovation is the integration of Dynamic State Evolution Blocks (DSEBlocks) within the encoding and decoding paths. Each DSE block implements the update mechanism: $$ \mathbf { S } _ { t } = \mathbf { S } _ { t - 1 } \big ( \omega - \mathbf { x } _ { t } ^ { \top } \mathbf { x } _ { t } \eta \big ) + \mathbf { y } _ { t } ^ { \top } \mathbf { x } _ { t } \eta $$ where $\mathbf { S } _ { t }$ represents the evolving state tensor. These blocks alternate with Channel Attention Blocks (CAB) and residual connections, enabling multi-scale feature fusion. This stage outputs a compact Image Prior Representation (IPR) $\mathbf { Z }$ that encodes hierarchical features across scales. Stage 2: Prior-Guided Restoration (DiffRWKVIR $^ { s 2 }$ ): As depicted in Fig. 3, Stage 2 implements an efficient diffusion process conditioned on the IPR. During training, low-quality $\mathbf { I } _ { \mathrm { L Q } }$ and ground-truth $\mathbf { I } _ { \mathrm { G T } }$ images are concatenated and processed through PixelUnshuffle ( $\times$ 4) and $\mathrm { C P E _ { n s 1 } }$ to extract feature $\mathbf { Z }$ . The state evolution provides spatial context while the IPR guides the restoration of spectral details, which enables high-fidelity reconstruction with dramatically fewer diffusion steps than conventional DDPMs. Fig. 3. The stage 2 architecture of our proposed DiffRWKVIR. # 3.2 Dynamic State Evolution Block The Dynamic State Evolution Block (DSEB) constitutes the core computational unit of the Dynamic-ESR framework, integrating the State Evolution Block (SEB) and Channel Attention Block (CAB) into a unified architecture. Crucially, the CAB functions as the Feed Forward Network within this block, while the SEB incorporates the Omni-Shift mechanism for comprehensive spatial modeling. 
This design enables dynamic feature evolution with linear complexity while preserving multi-scale spatial relationships. Fig. 4. Structure of SEB, which is a component of the DSEB. The SEB processes input $\mathbf { x } \in \mathbf { R } ^ { B \times T \times C }$ through token shift operations that capture adjacent semantics, as shown in Fig. 4. Key components $\mathbf { q }$ , $\mathbf { w }$ , $\mathbf { k }$ , $\mathbf { v }$ , and $\eta$ are derived via learnable weights with Softplus activation for dynamic adjustment, followed by Low-Rank Adaptation (LoRA) and Linear Interpolation (LeRP) transformations that generate intermediate representations: $$ \mathrm { L o R A } ( \mathbf { x } ) = \mathbf { A } \mathrm { T a n h } ( \mathbf { x } \mathbf { B } ) , \mathrm { L e R P } ( \mathbf { a } , \mathbf { b } ) = \mathbf { a } + ( \mathbf { b } - \mathbf { a } ) \odot \boldsymbol { \mu } . $$ Fig. 5. Illustrated Comparison of Uni-Shift, Quad-Shift and Omni-Shift. The Omni-Shift module in SEB, as shown in Fig. 6, enhances spatial modeling through multi-scale convolutional fusion. This multi-scale processing enables more hierarchical feature fusion while maintaining 2D structural relationships, compared to uniform directional shift (Uni-Shift) and quad-directional shift (Quad-Shift) in Fig. 5. The final 2D State Evolution (2DSE) output combines these components through layer normalization and projection, with residual connections preserving feature integrity. As the Feed Forward Network component, the CAB, as shown in Fig. 7, operates on feature maps $F \in \mathbf { R } ^ { C \times H \times W }$ through channel-wise recalibration: $$ { \cal F } _ { \mathrm { o u t p u t } } = { \cal F } \odot \sigma ( W _ { 2 } \cdot \mathrm { R e L U } ( W _ { 1 } \cdot \mathrm { G l o b a l A v e r a g e P o o l i n g } ( F ) ) ) . $$ This residual connections ensure gradient stability and feature preservation throughout processing, as shown in Fig. 8. 
The integrated architecture dynamically adjusts feature representations through learnable states, enabling robust adaptation to varying input conditions while maintaining computational efficiency. Fig. 7. Structure of CAB Fig. 6. Illustration of Omni-Shift Fig. 8. Structure of DSEB # 4 Experimental Setup # 4.1 Datasets and Implementation Details The DF2K[31][34] dataset (3,450 high-resolution images from DIV2K[31] and Flickr2K[34]) serves as our training foundation. We generate low-resolution counterparts via bicubic downscaling to 48 $\times$ 48 patches, with corresponding HR patches scaled to 96 $\times$ 96 (2 $\times$ ) and 192 $\times$ 192 (4 $\because$ ) for multi-scale training. Validation uses 800 DIV2K[31] images, while evaluation employs standard benchmarks: Set5[32], Set14[28], BSD100[33], and Urban100[29]. Implemented in PyTorch on NVIDIA A100 GPUs, models are trained with random 48 $\times$ 48 LR crops, rotation augmentation, batch size 16, and Adam optimization ( $\beta _ { 1 } = 0 . 9$ , $\beta _ { 2 } = 0 . 9 9$ ). The learning rate initiates at $1 \times 1 0 ^ { - 4 }$ with $1 0 \times$ decay after 80 epochs. Architecturally, we incorporate 4 residual groups containing 6 residual blocks each.Performance assessment employs complementary task-specific metrics: super-resolution evaluation utilizes PSNR and SSIM [35] for spatial fidelity, RMSE for pixel error, SAM for spectral consistency, LPIPS [38] for perceptual similarity, and NIQE [37] for non-reference naturalness, while inpainting tasks are evaluated using PSNR, SSIM [35], LPIPS [38], and FID [36] to measure reconstruction quality and distribution alignment. # 4.2 Results Quantitative evaluations comprehensively demonstrate the superior performance of our DiffRWKVIR framework across diverse image restoration benchmarks. For image inpainting, DiffRWKVIR achieves remarkable performance improvements across three challenging datasets: Places365, Celeba-HQ, and Mural. 
As illustrated qualitatively in Fig. 9, our approach generates visually coherent completions with significantly reduced artifacts and perceptual distortions compared to existing methods. Quantitatively, we observe substantial gains in both fidelity and perceptual metrics as shown in Table 2. For example, on Places365, we achieve a 0.53 dB PSNR improvement over the previous state-of-the-art while simultaneously enhancing SSIM by 2.18% and reducing FID by $0 . 7 \%$ . In super-resolution tasks, DiffRWKVIR consistently surpasses all evaluated baselines—including transformer-based (SwinIR, HAT), SSM-based (MambaIR/v2), and diffusion-based (DiffIR) approaches—across all five benchmark datasets at $\times$ 4 magnification scales. As comprehensively detailed in Table 3, our method demonstrates particular advantages in perceptual quality metrics while maintaining superior pixel-level accuracy. For example, on Urban100, which contains challenging urban structures, we achieve a 0.025 dB PSNR gain and $0 . 3 6 \%$ SSIM improvement while reducing LPIPS by 3.9% compared to DiffIR. Qualitative comparisons provide compelling visual evidence of DiffRWKVIR’s superiority, particularly in reconstructing high-frequency details and complex textures. As shown in Fig. 10, our method produces significantly sharper and more natural reconstructions compared to existing approaches. Fig. 9. Qualitative Result on inpainting tasks Table 2. Quantitative comparison results for inpainting on benchmark datasets. The best and second-best performance are marked in bold and underlined. Fig. 10. Qualitative Results on Super-Resolution tasks Table 3. Quantitative comparison results for the BSD100[33], Set14[28], Set5[32], Manga109[30] and Urban100[29] datasets (Scale $\times 4$ ). Table 4. Unified efficiency and memory characteristics comparison. DiffRWKVIR demonstrates superior hardware utilization across all metrics while maintaining competitive model size. 
# 4.3 Efficiency and Other Studies We rigorously evaluate DiffRWKVIR’s computational efficiency against Diffusion and Transformer-based baselines (SwinIR, HAT, MambaIR/v2) using one NVIDIA A100 40G GPU under identical settings. As shown in Table 4, our chunk-wise Flash Linear Attention mechanism enables superior efficiency: DiffRWKVIR achieves the lowest FLOPs, fastest training/inference speeds, and optimal memory characteristics while maintaining competitive parameterization. Our Omni-Shift mechanism improves PSNR by 0.65 dB and reduces LPIPS by 4% versus shift alternatives. The TTT backbone outperforms ResNet and naive attention implementations, delivering 0.26 dB PSNR gain and 10% SAM reduction. Channel attention surpasses MLP variants with 0.29 dB PSNR improvement and 25% NIQE reduction. Finally, 2D scanning exceeds 1D methods by 3% SSIM while reducing SAM by $4 . 2 \%$ . These results confirm each component’s critical contribution to overall performance. Table 5. Studies on Impacts of Different Components
Image restoration faces challenges including ineffective feature fusion, computational bottlenecks and inefficient diffusion processes. To address these, we propose DiffRWKVIR, a novel framework unifying Test-Time Training (TTT) with efficient diffusion. Our approach introduces three key innovations: (1) Omni-Scale 2D State Evolution extends RWKV's location-dependent parameterization to hierarchical multi-directional 2D scanning, enabling global contextual awareness with linear complexity O(L); (2) Chunk-Optimized Flash Processing accelerates intra-chunk parallelism by 3.2x via contiguous chunk processing (O(LCd) complexity), reducing sequential dependencies and computational overhead; (3) Prior-Guided Efficient Diffusion extracts a compact Image Prior Representation (IPR) in only 5-20 steps, proving 45% faster training/inference than DiffIR while solving computational inefficiency in denoising. Evaluated across super-resolution and inpainting benchmarks (Set5, Set14, BSD100, Urban100, Places365), DiffRWKVIR outperforms SwinIR, HAT, and MambaIR/v2 in PSNR, SSIM, LPIPS, and efficiency metrics. Our method establishes a new paradigm for adaptive, high-efficiency image restoration with optimized hardware utilization.
[ "cs.CV", "I.4.9" ]
# Introduction Reasoning about the presence and properties of occluded objects is a fundamental aspect of human intelligence, crucial for navigating and interacting with a complex world. This ability is based on the integration of various sensory cues and contextual information to form probabilistic judgments. Consider the following scenario. You are locating your package in a cluttered mail room. Which box is yours? Rather than exhaustively examining each package, you can efficiently leverage multimodal cues such as size, weight, and textual labels to rapidly narrow the search space and infer the likely location of your target. This capacity to infer hidden states based on incomplete and potentially ambiguous observations raises a critical question: how do humans effectively integrate multimodal information to reason about the unobserved? Accounting for uncertainty regarding unobserved phenomena is a core aspect of human intelligence and has been extensively studied in cognitive science. Research on object permanence in developmental psychology (Piaget, 1954) highlights the early emergence of the ability to represent and reason about objects that are no longer directly perceived. Classic paradigms such as the Wason selection task (Wason, 1968) have explored the use of deductive reasoning to answer questions about unseen objects. More recent work has focused on how humans infer hidden properties of objects based on partial observations, incorporating probabilistic models of scene understanding and physical reasoning. These models commonly use a physics engine as a generative model of visual percepts, which can then be inverted to infer the physical attributes of the underlying hidden object (Battaglia et al., 2013; Lake et al., 2017; Yildirim et al., 2016). However, these computational models typically focus on vision as the single modality of perceptual cue and evaluate the model in well-controlled simulated environments. 
The models often require extensive hand-engineering to restrict the space of the object’s physical attributes for running the simulation in order to make inference tractable. Consequently, these existing cognitive models may not be able to account for the complexity of real-world scenarios that require integrating cues from multiple modalities, and the physical properties of the candidate objects are not given a priori. Objects inside the boxes: yoga mat, boxed laptop, pillow Figure 1: The What’s in the Box (WiTB) game. In this game, the participants are given a written list of objects hidden in boxes. They then watch a video of a human experimenter shaking the box. The participants are then asked to guess where each object is hidden based on the visual and audio cues. Figure 2: Our neurosymbolic model. (a) The model first uses neural networks to parse multimodal input to a structured JSON representation. (b) For a set of objects and boxes, the model generate all hypotheses of object placements among boxes. Then the hypotheses are evaluated based on the visual and audio information to generate a posterior distribution over the hypotheses, which can be marginalized to infer object placements. On the other hand, a complementary line of research in cognitive science has extensively documented the integration of multimodal cues in low-level perception. A typical setting presents observers with a multimodal stimulus (e.g. a flash of light and a noise burst), each modality giving a noisy estimate of a scene quantity (e.g. source location) (Alais & Burr, 2004)). This paradigm is motivated by a canonical multisensory integration model, which applies Bayes’ rule to derive an optimal estimate by combining information from each sense (Ernst, 2007; Ko¨rding et al., 2007; Trommershauser et al., 2011). 
Human experiments have found remarkable agreement between the predictions of this model - which performs a weighted average of the modality-specific estimates - and human judgments, with notable examples in audiovisual (Alais & Burr, 2004; Battaglia et al., 2003) and visual-haptic (Ernst, 2007) processing. However, multimodal reasoning, potentially incorporating perceptual information as well, is far less studied in cognitive science. Rapid progress has also been made in multimodal reasoning from the artificial intelligence (AI) and robotics community, particularly through deep learning over massive datasets (Nam et al., 2017). The advent of versatile Vision-Language Models (VLMs) and Large Language Models (LLMs) has further expanded these capabilities, allowing for generalization to previously unseen scenarios (Ahn et al., 2022; Wang et al., 2024). However, critical questions remain as to how well these models truly grasp physical and visual reasoning. While they excel at pattern recognition and language understanding, there are ongoing debates about their capacity for scene understanding, multimodal reasoning, and interpreting the causal relationships inherent in the physical world. In this paper, we present a new neurosymbolic model designed to perform robust reasoning about hidden objects from complex and ambiguous multimodal inputs. Leveraging a suite of state-of-the-art neural networks for processing text, audio, and visual data, our model constructs a formal representation of the observed scene. Subsequently, a Bayesian inference engine updates hypotheses about the hidden objects based on these observations. Such a neurosymbolic structure combines the strength of both data-driven large neural models and a Bayesian architecture for integrating cues from different modalities for robust reasoning about unseen objects. 
We evaluate our model on a novel object guessing task that we call “What’s in the Box?” (WiTB), wherein objects are concealed within boxes, and an observer must infer their contents by analyzing a human participant’s interactions with the boxes, including lifting and shaking. We demonstrate that the proposed neurosymbolic model effectively integrates visual, textual, and auditory information to achieve human-like performance in reasoning about object placements. Critically, unimodal models exhibit significantly poorer performance, highlighting the crucial role of multimodal integration in this task. This work takes one step towards helping us better understand how humans can flexibly and reliably infer information about objects we cannot see from diverse information sources. # Computational Model Our model is shown in Fig. 2. Similar to prior work on neurosymbolic reasoning (Hsu et al., 2023; Wong et al., 2023; Ying et al., 2023), our neurosymbolic model consists of two modules: (1) a neural module translates multimodal inputs into formal representation, and (2) an engine for probabilistic inference to form the final graded judgment. That is: in the second module, our model performs probabilistic inference over the parsed symbolic representation to derive the posterior distribution over the object placements. # Parsing and representing multimodal inputs Reasoning about multimodal scenes is often complex as it requires integrating information across different modalities. Following prior work on scene and semantic parsing using large foundation models (Liu et al., 2015; Ying, Zhi-Xuan, et al., 2025), we use a variety of neural models to parse multimodal inputs into structured symbolic forms. Language: The linguistic information provided to an observer includes names of the objects hidden among the boxes. However, human language is often abstract and ambiguous. How do we know about the properties of the hidden objects, such as a pillow, without seeing them? 
Humans often rely on their knowledge and memory (past observations) as clues. In our model, we prompt a state-of-the-art large language model (LLM) to generate attributes of the unseen object, including its geometric dimensions, weight, materials, and rigidity (the degree to which the object can be compressed or folded in any dimension). LLMs, trained on large amounts of real-world data, have likely encountered more objects than any person and can provide reasonable guesses about them. Our model uses the Llama 3.1 70B model as the LLM parser. Furthermore, to capture the uncertainty about the objects conveyed by the language input (e.g., pillows come in various sizes), we prompt the LLM to output standard deviations for key attributes, such as the physical dimensions, and model the uncertainty by assuming a normal distribution over these variables. Vision: From the visual inputs, we can estimate the size of the boxes present in the scene. We prompt Gemini 2.0 Flash (Gemini Team et al., 2024) with the first frame of the video and ask the model to return the dimensions of the boxes in the format specified in Figure 2. Audio: The sound made by shaking the boxes can also provide clues about what is inside them. In our model, we use the audio classification model CLAP (Elizalde et al., 2022) to generate a probability distribution over the types of sounds in the audio track, with object names as candidate labels. This allows us to calculate the posterior probability of the objects in any box conditioned on the audio of that box when shaken by a human. # Generating and evaluating hypotheses To infer the placement of objects among boxes, we adopt an approach inspired by particle filtering (Wills & Schön, 2023). We first initialize all the hypotheses $H = \{ H^1, \ldots, H^n \}$, each representing a unique way of placing the objects into distinct boxes. With $N$ objects in $K$ boxes, this generates $|H| = K! \, S(N, K)$ possible placements (the number of placements in which no box is empty), each represented as an ordered list of per-box object sets. Here $S(N, K)$ is the Stirling number of the second kind. The observer has a uniform prior belief over the placements, $b_0 = P(H)$. Then the observer performs a belief update conditioned on the observed multimodal inputs. We denote hypothesis $H^i = H_1^i, \ldots, H_n^i$, where $H_n^i$ is the set of items in box $n$ according to hypothesis $H^i$. We then denote the audio observation as $A = A_1, \ldots, A_n$ and the visual observation as $O = O_1, O_2, \ldots, O_n$, where $A_n$ is the audio of box $n$. Since we assume a uniform prior, the belief update is $$ \begin{array} { l } { P(H \mid O, A) \propto P(O, A \mid H) = P(O \mid H) \, P(A \mid H) } \\ { \propto P(O \mid H) \, P(H \mid A) = \displaystyle \prod_i P(O_i \mid H_i) \, P(H_i \mid A_i) } \end{array} $$ Since the audio and visual signals are both ambiguous and their underlying joint probability distribution is generally not accessible, we treat them as conditionally independent as a reasonable approximation. We also rewrite the conditional probabilities in terms of $P(H_i \mid A_i)$ because the audio likelihood $P(A_i \mid H_i)$, which requires a generative model of audio, is difficult to estimate, whereas state-of-the-art audio classification models readily output posterior distributions $P(H_i \mid A_i)$. To evaluate $P(O_i \mid H_i^n)$, we use rejection sampling, checking whether the set of items $H_i^n$ can fit in box $i$. To account for uncertainty about the physical attributes of the boxes and objects, we sample their dimensions from a normal distribution and apply rejection sampling 1000 times to produce a continuous probability estimate. 
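The hypothesis enumeration and belief update described above can be sketched as follows. This is a simplified reconstruction under stated assumptions: `visual_fit` is a stand-in for the rejection-sampling estimate of $P(O_i \mid H_i)$, `audio_post` is a stand-in for the CLAP posteriors $P(o \mid A_i)$, and all object names are hypothetical; none of this reproduces the paper's actual perception modules.

```python
from itertools import product

def enumerate_hypotheses(objects, n_boxes):
    """All placements of distinct objects into labeled boxes with no box
    empty -- |H| = K! * S(N, K) placements, as in the text."""
    hyps = []
    for assign in product(range(n_boxes), repeat=len(objects)):
        if set(assign) == set(range(n_boxes)):  # surjective: every box used
            hyps.append(tuple(
                frozenset(o for o, b in zip(objects, assign) if b == i)
                for i in range(n_boxes)))
    return hyps

def posterior(hyps, visual_fit, audio_post):
    """Posterior over hypotheses under a uniform prior:
    P(H|O,A) proportional to prod_i P(O_i|H_i) * prod_{o in H_i} P(o|A_i)."""
    scores = []
    for h in hyps:
        s = 1.0
        for i, box in enumerate(h):
            s *= visual_fit(i, box)       # stand-in for rejection sampling
            for o in box:
                s *= audio_post[i][o]     # stand-in for CLAP label posterior
        scores.append(s)
    z = sum(scores)
    return [s / z for s in scores]

def marginal(hyps, post, obj, box_i):
    """P(obj is in box box_i), marginalizing over all hypotheses."""
    return sum(p for h, p in zip(hyps, post) if obj in h[box_i])

# Illustrative scenario: three objects, two boxes, and a visual constraint
# saying the yoga mat cannot fit in box 1.
objects = ("mat", "laptop", "pillow")
hyps = enumerate_hypotheses(objects, 2)   # 2! * S(3, 2) = 6 hypotheses
uniform_audio = [{o: 1.0 for o in objects}, {o: 1.0 for o in objects}]
post = posterior(hyps,
                 lambda i, box: 0.0 if (i == 1 and "mat" in box) else 1.0,
                 uniform_audio)
```

With the audio term held flat, the hard geometric constraint alone forces the mat into box 0 while the laptop's location stays graded, mirroring how the full model combines a visual fit constraint with softer cues before marginalizing per object.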
We then evaluate $P(H_i^n \mid A_i)$ by querying the CLAP model with the audio segment $A_i$ and the labels of all possible items in the scenario: $$ P(H_i^n \mid A_i) = \prod_{o \in H_i^n} P(o \mid A_i) $$ Once the model has computed the posterior distribution over the hypotheses, it marginalizes over all hypotheses to compute the distribution for any individual object. Figure 3: Correlation plots comparing belief judgments from humans (y-axis) against models (x-axis). Each dot represents a probability rating on a scale from 1 to 100 (e.g., how likely is it that object X is in the left box). Error bars show standard error and CI indicates the 95% confidence interval. Our full model shows a significantly better fit to human judgment than the ablated unimodal baselines and the Gemini model. # Experiment Domain and Scenarios We constructed 25 stimuli, each with 2 to 5 household objects, such as water bottles, mugs, plates, and laptops, hidden inside 2 boxes. The items were carefully chosen to represent various shapes, sizes, and materials. The materials cover ceramics, metals, plastic, wood, etc., which produce a variety of sounds in the video clips. The boxes also vary in size: some can hold all of the objects, while others can hold only one or two small objects. For each stimulus, we recorded a 2- to 3-second video of a human experimenter shaking the boxes. We excluded 5 stimuli from the experiment due to low agreement among human participants (split-half correlation less than 0.8). Human Participants We recruited 54 participants via Prolific (mean age = 37; 29 male, 24 female, 1 other). The experiment took place over a customized web interface, which is shown in Fig. 1. 
During the experiment, each participant is shown a video with sound and asked to rate the likelihood that each object is hidden inside each box on continuous dependent scales from 1 to 100, where the scale values for each item automatically sum to 100 across the box options as the user drags. Baselines: Our model has two critical components: reasoning about the objects from both visual and auditory cues, and combining the different sensory inputs to perform belief updates. To evaluate how critical the integration of multimodal inputs is, we consider two alternative models involving unimodal ablations, wherein we remove one of the sensory inputs. The Audio-Only model receives only auditory information as input. It uses the CLAP model to assign probability ratings for an object being inside any box given the sound, and then normalizes the ratings over all boxes. In other words, the probability of object $o$ being inside box $i$ is $$ \frac{P(o \in box_i \mid A_i)}{\sum_j P(o \in box_j \mid A_j)} $$ The Vision-Only model, on the other hand, receives only visual information. Like the visual module in the full model, the Vision-Only model uses geometric properties to guess where an object may be hidden (e.g., a yoga mat is more likely to be in a big box than a small one). Additionally, we evaluate a state-of-the-art vision-language foundation model, Gemini 2.0 Flash, as a neural baseline. The VLM was given the video with audio and the same instructions as human participants for evaluating the probability distributions of hidden objects across boxes. # Results Quantitative Analysis: As shown in Figure 3, the Full Model correlated strongly with human judgment, with $r = 0.78$, while the ablated Audio-Only Model and Vision-Only Model performed worse, with $r = 0.55$ and $r = 0.52$, respectively. The VLM model had a low correlation of $r = 0.31$ against human judgments, indicating that large foundation models still cannot reliably reason about ambiguous multimodal inputs in a human-like way. Taken together, these results showcase the promise of a Bayesian approach that integrates different modalities of input to reason about objects in ambiguous and highly uncertain scenarios. Qualitative Analysis: We highlight two examples for qualitative analysis comparing the model performances; the visual layouts for the two examples are shown in Fig. 4. Figure 4: Two qualitative examples comparing model and human ratings on the location of the objects. The bars represent the averaged human or model differences in the probability rating of an item being inside the left versus right box. Error bars indicate standard error. (a) Scenario A: a yoga mat, a laptop packaged inside a box, and a pillow hidden inside two boxes. Video link: https://youtu.be/JE4ggHKfRss (b) Scenario B: water jug, water bottle, and coins hidden inside two boxes. Video link: https://youtu.be/TdZHEkuGDgM In Scenario A, based on the visual information, the model is confident that the yoga mat is inside the left box because it is unlikely to fit inside the right box. However, the vision-only model is uncertain about the location of the laptop and the pillow. The audio model, on the other hand, finds that the laptop is more likely to be inside box 1 due to the collision sound it makes. Combining these two sources, the full model is able to make a graded judgment about the location of these objects that mirrors human judgment, whereas the audio-only and vision-only models made different judgments based on unimodal information. The Gemini model, in contrast, believes all three objects are likely to be inside box 1, which reflects poor physical reasoning. In Scenario B, a water jug, a water bottle, and coins are distributed between two boxes of differing sizes. 
Based on the visual information about box sizes, the model finds that the water jug is more likely to be inside box 1, since it might not fit inside box 2. However, it is uncertain where the coins are, because they are small and could fit in either box. The audio model is able to determine where the coins are because of their distinct jingling sound, whereas the water jug and bottle make the same sound and therefore cannot be distinguished by the audio model. Combining these two sources, the full model is able to make a judgment that mirrors human judgment, which neither ablated model could do when restricted to a single modality. In contrast, the Gemini model can approximate a subset of the objects, but seems unable to reason about second-order effects of an object placement. Interestingly, in these examples, we find that the resulting probability judgments of the full model are not simply an average over the audio and visual model outputs: the joint inference over object placements conditioned on audio and visual information is performed over all hypotheses before being marginalized into ratings for each individual object. Error Analysis: We observe that in a few scenarios, our model is quite uncertain (the item is judged almost equally likely to be in either of the two boxes) while humans are more confident in their judgments about where the item is located. One possibility is that the visual cues our model uses are still limited, whereas humans may leverage more kinds of visual information to reason about the object placements. For instance, humans are able to infer the weight and size of the items inside a box from the motion of the box, which they can use to update beliefs about the box’s contents. Additionally, we noticed that the audio model sometimes failed to pick up nuanced audio information when multiple sounds were present. 
For instance, the model may not pick up a plastic sound when it is mixed with a metallic sound, whereas humans are comparatively better at recognizing and parsing subtle audio cues. # Discussion and Future Directions In this paper, we introduce a neurosymbolic model that performs robust yet generalizable reasoning based on multimodal cues. The model uses state-of-the-art neural models as its perception module and then uses a Bayesian model for initializing and updating beliefs over hypotheses about the unseen objects. We evaluate the model on a novel paradigm called “What’s in the Box?” (WiTB), wherein models and humans watch experimenters interact with boxes and guess which items are hidden in which box. Our results show that the proposed neurosymbolic model correlates strongly with human judgments, whereas the ablated models and a state-of-the-art neural baseline perform poorly. Our model offers significant contributions to both cognitive science and artificial intelligence. By integrating the pattern recognition capabilities of neural networks with the structured reasoning of Bayesian models, we provide cognitive scientists with a new tool to investigate human inference processes in more open-ended settings, particularly under conditions of uncertainty and with complex multimodal information. This neurosymbolic architecture also holds promise for the development of more intelligent robots, enabling them to reason about the physical properties of unseen objects by effectively combining diverse sensory cues, thus approaching human-like reasoning and scene understanding. There are, however, a few important limitations and open directions for improvement. First, our current model assumes that all sources of information are equally weighted. In reality, humans likely weigh different cues adaptively based on their reliability and relevance to the task (Jacobs, 2002; Schertz & Clare, 2020). For instance, a distinct sound might overshadow a partially occluded visual cue. 
Future iterations of the model should explore mechanisms for learning and dynamically adjusting the weights associated with each modality. Additionally, future studies can expand the kinds of visual cues considered in our model. For example, our model currently does not infer the weight of the boxes, which can be informative about what objects may be inside. Writing on a box (e.g., an IKEA box) can also be used to infer the kinds of objects it contains. The auditory component of our current model can also be improved. The lightweight audio model we use is less sensitive to ambiguous and low-volume sounds and sometimes fails to exploit nuanced audio cues when reasoning about unseen objects. Future work can explore more sophisticated audio models, especially ones trained on large-scale sound datasets, to improve performance. As a next step, we also plan to extend the WiTB paradigm to more open-ended settings. Rather than answering questions about objects from a pre-defined list, we can ask the model and humans to guess what objects are in the box in an open-ended way based on multimodal cues and compare the distributions of answers given by humans and models, as in Ying, Collins, et al. (2025). This could allow us to study and capture the richness of humans’ perception of what is out there in the world that we cannot directly observe. # References Ahn, M., Brohan, A., Brown, N., Chebotar, Y., Cortes, O., David, B., Finn, C., Fu, C., Gopalakrishnan, K., Hausman, K., et al. (2022). Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691. Alais, D., & Burr, D. (2004). The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14(3), 257–262. Battaglia, P. W., Hamrick, J. B., & Tenenbaum, J. B. (2013). Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110(45), 18327–18332. Battaglia, P. 
W., Jacobs, R. A., & Aslin, R. N. (2003). Bayesian integration of visual and auditory signals for spatial localization. JOSA A, 20(7), 1391–1397. Elizalde, B., Deshmukh, S., Ismail, M. A., & Wang, H. (2022). CLAP: Learning audio concepts from natural language supervision. Ernst, M. O. (2007). Learning to integrate arbitrary signals from vision and touch. Journal of Vision. Hsu, J., Mao, J., & Wu, J. (2023). NS3D: Neuro-symbolic grounding of 3D objects and relations. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2614–2623. Jacobs, R. A. (2002). What determines visual cue reliability? Trends in Cognitive Sciences, 6(8), 345–350. Körding, K. P., Beierholm, U., Ma, W. J., Quartz, S., Tenenbaum, J. B., & Shams, L. (2007). Causal inference in multisensory perception. PLOS ONE, 2(9), 1–10. Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40. Liu, Z., Li, X., Luo, P., Loy, C.-C., & Tang, X. (2015). Semantic image segmentation via deep parsing network. Proceedings of the IEEE International Conference on Computer Vision, 1377–1385. Nam, H., Ha, J.-W., & Kim, J. (2017). Dual attention networks for multimodal reasoning and matching. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 299–307. Piaget, J. (1954). The construction of reality in the child. Routledge. Schertz, J., & Clare, E. J. (2020). Phonetic cue weighting in perception and production. Wiley Interdisciplinary Reviews: Cognitive Science, 11(2), e1521. Team, G., Anil, R., Borgeaud, S., Alayrac, J.-B., Yu, J., Soricut, R., Schalkwyk, J., Dai, A. M., Hauth, A., Millican, K., Silver, D., Johnson, M., Antonoglou, I., Schrittwieser, J., Glaese, A., Chen, J., Pitler, E., Lillicrap, T., Lazaridou, A., … Vinyals, O. (2024). Gemini: A family of highly capable multimodal models. Trommershauser, J., Kording, K., & Landy, M. S. (2011). 
Sensory cue integration. Oxford University Press. Wang, Y., Chen, W., Han, X., Lin, X., Zhao, H., Liu, Y., Zhai, B., Yuan, J., You, Q., & Yang, H. (2024). Exploring the reasoning abilities of multimodal large language models (MLLMs): A comprehensive survey on emerging trends in multimodal reasoning. arXiv preprint arXiv:2401.06805. Wason, P. C. (1968). Reasoning about a rule. The Quarterly Journal of Experimental Psychology. Wills, A. G., & Schön, T. B. (2023). Sequential Monte Carlo: A unified review. Annual Review of Control, Robotics, and Autonomous Systems, 6(1), 159–182. Wong, L., Grand, G., Lew, A. K., Goodman, N. D., Mansinghka, V. K., Andreas, J., & Tenenbaum, J. B. (2023). From word models to world models: Translating from natural language to the probabilistic language of thought. arXiv preprint arXiv:2306.12672. Yildirim, I., Siegel, M. H., & Tenenbaum, J. B. (2016). Perceiving fully occluded objects via physical simulation. Proceedings of the 38th Annual Conference of the Cognitive Science Society. Ying, L., Collins, K. M., Wei, M., Zhang, C. E., Zhi-Xuan, T., Weller, A., Tenenbaum, J. B., & Wong, L. (2023). The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling probabilistic social inferences from linguistic inputs. arXiv preprint arXiv:2306.14325. Ying, L., Collins, K. M., Wong, L., Sucholutsky, I., Liu, R., Weller, A., Shu, T., Griffiths, T. L., & Tenenbaum, J. B. (2025). On benchmarking human-like intelligence in machines. arXiv preprint arXiv:2502.20502. Ying, L., Zhi-Xuan, T., Wong, L., Mansinghka, V., & Tenenbaum, J. B. (2025). Understanding epistemic language with a language-augmented Bayesian theory of mind. Transactions of the Association for Computational Linguistics.
People regularly make inferences about objects in the world that they cannot see by flexibly integrating information from multiple sources: auditory and visual cues, language, and our prior beliefs and knowledge about the scene. How are we able to so flexibly integrate many sources of information to make sense of the world around us, even if we have no direct knowledge? In this work, we propose a neurosymbolic model that uses neural networks to parse open-ended multimodal inputs and then applies a Bayesian model to integrate different sources of information to evaluate different hypotheses. We evaluate our model with a novel object guessing game called ``What's in the Box?'' where humans and models watch a video clip of an experimenter shaking boxes and then try to guess the objects inside the boxes. Through a human experiment, we show that our model correlates strongly with human judgments, whereas unimodal ablated models and large multimodal neural model baselines show poor correlation.
[ "cs.AI" ]
# 1 INTRODUCTION Software testing plays a crucial role in ensuring software quality, primarily through the selection of test data that evaluates whether a program behaves as intended. Ideally, testing would cover all possible inputs to verify that software functions correctly. However, covering all inputs is generally infeasible due to the vast input space of realistic software systems. For example, consider a Date class, where testing every combination of day, month, and year would result in an enormous number of cases. Consequently, the main challenge in software testing is selecting test cases that are diverse enough to capture a broad and representative range of program behaviors. To address this challenge, a variety of test selection methods have been developed, aiming to identify representative test cases. One category of these methods is input domain-based techniques, which focus on how test cases are generated from the input space. Among these techniques, Boundary Value Analysis (BVA) and boundary testing analyze software artifacts to identify discrepancies between expected and actual boundaries [6, 29, 34]. Hierons [20] defined boundaries as pairs of inputs that are close to each other but fall within adjacent sub-domains, where these sub-domains represent distinct behavior domains in the context of a software under test (SUT). BVA is widely used for its effectiveness in detecting faults, as they tend to occur near boundaries [2]. However, identifying boundaries within the input space remains challenging due to the lack of clear, objective methods; Grochtmann and Grimm [18] even noted that BVA is a creative, manual process that cannot be automated. Fig. 1. Motivating example comparing BMI boundaries found by AutoBVA and SETBVE. Challenging this assumption, Dobslaw et al. [10] introduced the concept of Boundary Value Exploration (BVE), a set of methods for systematically identifying input pairs that lie on opposite sides of a behavioral boundary. 
They proposed AutoBVA, the first automated black-box framework for boundary detection, shifting boundary analysis away from its traditionally manual nature [11]. AutoBVA effectively discovers boundary candidates—input pairs that are close in the input space but yield distinct outputs—and ranks these candidates by their program derivative, a metric designed to capture the sensitivity of software behavior to input changes [14]. However, the BVE task is complicated by the presence of multiple boundaries, each potentially having a distinct maximum program derivative [14]. Consequently, focusing solely on maximizing the program derivative, as AutoBVA does, may limit exploration to a narrower set of boundary candidates, concentrating the discovered boundaries into fewer behavioral regions. This approach potentially overlooks broader behavioral diversity and misses boundary candidates that, while exhibiting lower derivative values, are nonetheless important for comprehensive software testing. This is where Quality-Diversity (QD) optimization offers a promising alternative. Unlike traditional optimization methods that seek a single best solution, QD aims to discover a broad set of diverse, high-performing solutions [5]. Its success across domains like robotics [7, 9, 12, 28] and video games [4, 13, 16, 26] has recently extended into software testing, including applications in both traditional [3, 15, 27] and deep learning-based systems [32, 38, 39]. Given its dual focus on solution quality and diversity, QD provides a natural fit for the challenges of BVE. We argue that QD is particularly well suited for BVE for at least two main reasons. First, it supports a customizable behavioral space — also referred to as feature space — allowing testers to tailor exploration objectives to specific testing goals.
Manuscript submitted to ACM
For instance, one tester may aim to broadly cover different input characteristics, while another may focus on fewer input features but seek maximum diversity in output behavior. This flexibility enables adaptive exploration and supports varied testing strategies. Second, QD algorithms inherently promote diversity by encouraging the exploration of multiple regions in the behavioral space. This emphasis on diversity makes QD especially valuable for boundary exploration, as it helps uncover a wider variety of boundaries and supports more comprehensive coverage of the behavioral space. Despite the growing interest in QD-based methods in software testing, to the best of our knowledge, BVE has not yet been formulated as a QD optimization problem. In this paper, we introduce SETBVE, a novel framework for automatically discovering diverse boundary behaviors in SUTs. SETBVE incorporates QD algorithms to explore the input space more comprehensively and uncover a wider range of boundary candidates. In contrast, AutoBVA focuses on maximizing boundary candidate quality through the program derivative, which can lead to a narrow search that overlooks regions with lower derivative values and often yields only a few high-quality boundary candidates. As a motivating example (Figure 1), consider a Body Mass Index (BMI) function that classifies input pairs (height, weight) into distinct categories such as “Underweight”, “Normal”, and “Obese”. The results in the figure reveal that SETBVE discovers a broader set of BMI boundaries, capturing transitions between diverse behaviors that AutoBVA misses. This simple example highlights the potential of QD optimization to uncover more diverse boundary regions, supporting more thorough boundary exploration. SETBVE is built around three components: Sampler, Explorer, and Tracer. The modular design of SETBVE allows users to combine and configure these components flexibly according to their testing goals. 
Each component plays a distinct role and can be adapted independently. The Sampler sets up an archive — a grid-like structure that stores candidates based on selected properties. These properties, called behavioural descriptors, capture key aspects of the SUT, such as input features or output responses. In this work, we define descriptors using both input and output characteristics, but they can be customized to suit different testing needs. The Explorer navigates the input space by selecting existing archive entries and mutating them to discover new archive cells. The Tracer further refines the search by examining areas around the boundary candidates found by the Sampler and/or Explorer. It aims to follow detected boundary lines and uncover additional boundary candidates nearby. Like AutoBVA, SETBVE runs fully automatically, requiring no access to source code or formal specifications. The current implementation supports any number of integer inputs and handles outputs by stringifying them, allowing compatibility with both numeric and textual output formats. With modifications, such as adjusting the mutation operator or generalizing the quality metrics [14], the framework can be extended to support other input and output types as well. To support further research and ensure reproducibility, we provide the full implementation of SETBVE along with all scripts needed to replicate the experimental evaluation1, as well as the associated dataset2. We evaluate SETBVE by examining the performance of its various configurations, including a comparison with AutoBVA. The comparison is conducted through experiments on ten different systems under test (SUTs). For each technique, we assess the quality and diversity of the identified boundary candidates, as well as the trade-off between these two aspects. Our results suggest that QD algorithms offer clear advantages for BVE, particularly by increasing the diversity of identified boundary candidates. 
SETBVE consistently identifies more diverse candidates than AutoBVA while still maintaining relatively high-quality results, especially as SUT complexity increases. Depending on the characteristics of a SUT, the Tracer component further contributes by identifying additional boundary candidates near those already discovered, leading to a more complete and interpretable view of behavioral transitions. In this paper, we make the following contributions: • We propose SETBVE, a customizable and modular framework for automated boundary value exploration that identifies boundary candidate pairs by incorporating ideas from Quality-Diversity optimization. • We empirically evaluate multiple configurations of SETBVE across ten SUTs, demonstrating its effectiveness in identifying diverse and high-quality boundary candidates, outperforming existing methods [11]. • For the empirical comparison, we define BVE-specific metrics — based on related QD literature [28] — to assess the quality and behavioral spread of boundary candidates. • We introduce a Tracer component that explores the vicinity of identified boundary candidates, delineating additional boundary transitions and refining the representation of boundary regions. The rest of the article is organized as follows. In Section 2, we provide background information and discuss related work. Section 3 then presents our proposed method, SETBVE, in detail. Section 4 outlines the methodology of our evaluation. Section 5 covers the results of our empirical evaluation and answers to our research questions. In Section 6, we discuss the study’s implications and address potential threats to validity. Finally, Section 7 concludes the paper. # 2 BACKGROUND AND RELATED WORK In this section, we provide a brief background and review relevant studies on Boundary Value Analysis, beginning with traditional boundary testing concepts, followed by recent advancements in automated frameworks. 
Finally, we present an overview of Quality-Diversity optimization methods and their application in software testing. # 2.1 Boundary Value Analysis Boundary Value Analysis (BVA) is a fundamental and widely adopted technique in software testing, focusing on inputs at the edges of input domains where errors are most likely to occur [6, 20]. White and Cohen [34] proposed a domain-testing strategy focused on identifying boundaries between mutually exclusive subdomains of the input space to detect control-flow errors. To improve the efficiency of testing boundaries, Jeng et al. [23] introduced a semi-automated method that combines dynamic search with algebraic manipulation of boundary conditions. Even with its well-established role in software testing, BVA continues to be an active area of research, with recent studies exploring its integration with modern techniques such as search-based software testing. Ali et al. [1] extended a search-based test data generation technique to model-based testing by integrating a solver that generates boundary values using heuristic guidance. However, to advance BVA in modern software systems, existing approaches need to deepen their understanding of what constitutes a boundary and how such boundaries can be effectively identified from a black-box perspective. More recently, Dobslaw et al. [10] introduced Boundary Value Exploration (BVE), a concept designed to complement traditional BVA by supporting boundary detection in cases where specifications are incomplete or missing. BVE employs distance-based metrics to systematically explore the input space and quantify behavioral changes, enabling the identification of boundary regions more effectively. One way of quantifying such boundaries is the concept of a program derivative (PD), introduced by Feldt et al. [14]. 
Program derivative, inspired by the mathematical derivative of a function, serves as a measure of boundariness by quantifying the sensitivity of a program’s behavior (output) to changes in its input. Formally, given two input values $i_1$ and $i_2$, and their corresponding outputs $o_1$ and $o_2$, the PD is calculated as the ratio of the output distance to the input distance: $$ PD(i_1, i_2) = \frac{d_o(o_1, o_2)}{d_i(i_1, i_2)} $$ Here, $d_i(i_1, i_2)$ is a distance measure on the input space, representing the difference between the inputs $i_1$ and $i_2$, whereas $d_o(o_1, o_2)$ is a distance measure on the output space, capturing how much the program output changes in response. An input pair is considered a boundary candidate if it has a PD greater than zero, meaning the inputs are close but produce different outputs. A high PD indicates a strong boundary region (i.e., high boundariness), where small variations in input result in significant changes in the program’s behavior, making PD particularly valuable for BVE. Traditionally, boundaries are tightly coupled with the specific behavior of a system under test (SUT), making them highly system-dependent; this presents a challenge in developing more general boundary definitions that can be applied across diverse SUTs. Therefore, based on the concept of boundary candidates, Dobslaw et al. [11] introduced the notion of validity groups, which categorizes each boundary candidate — consisting of two inputs and their corresponding outputs — into one of three types: VV, where both outputs are valid; VE, where one output is valid and the other is an error; and EE, where both outputs are errors. 
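The program derivative and validity-group classification can be sketched concretely as follows. This is a minimal sketch under stated assumptions: the distance choices (Manhattan distance on numeric inputs, edit distance on stringified outputs) are one common option from this line of work, and the BMI classifier is a toy SUT for illustration, not the exact instantiation used by AutoBVA or SETBVE.

```python
def levenshtein(a, b):
    """Edit distance between two strings (simple two-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def program_derivative(sut, i1, i2):
    """PD(i1, i2) = d_o(o1, o2) / d_i(i1, i2): output change per unit of
    input change. Inputs compared by Manhattan distance, outputs by edit
    distance on their stringified form. Assumes both calls succeed."""
    o1, o2 = str(sut(*i1)), str(sut(*i2))
    d_i = sum(abs(a - b) for a, b in zip(i1, i2))
    return levenshtein(o1, o2) / d_i

def validity_group(sut, i1, i2):
    """VV / VE / EE classification of a boundary candidate."""
    def ok(i):
        try:
            sut(*i)
            return True
        except Exception:
            return False
    return {2: "VV", 1: "VE", 0: "EE"}[ok(i1) + ok(i2)]

def bmi_category(height_cm, weight_kg):
    """Toy SUT: classify BMI, raising an error for non-positive height."""
    if height_cm <= 0:
        raise ValueError("invalid height")
    bmi = weight_kg / (height_cm / 100) ** 2
    if bmi < 18.5:
        return "Underweight"
    if bmi < 30:
        return "Normal"
    return "Obese"

# A pair straddling the Underweight/Normal transition has PD > 0 and is a
# boundary candidate; both outputs are valid, so its validity group is VV.
pd = program_derivative(bmi_category, (170, 53), (170, 54))
```

A pair with identical outputs yields PD = 0 and is not a boundary candidate, while pairing a valid input with one that raises an error lands in the VE group.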
These concepts have made it possible to define quality goals (e.g., measured by the PD) and, by systematically varying inputs, to explore the input space and automate boundary identification. 2.1.1 Automation of Boundary Value Analysis. Automated methods for testing partitions and boundary values commonly depend on available software specifications. For example, Pandita et al. [30] proposed a white-box testing approach that enhances boundary coverage by instrumenting the SUT to identify boundary values near existing decision points. Zhang et al. [37] proposed a BVA-aware white-box method to improve fault coverage by identifying and testing boundaries derived from comparison predicates in symbolic execution paths. They introduced constrained combinatorial testing to generate test cases, covering boundary conditions with fewer tests while maintaining structural coverage. Hübner et al. [21] developed an equivalence class partitioning (ECP) strategy to efficiently locate boundary regions between equivalence classes. Guo et al. [19] used machine learning with inputs and execution paths to learn input boundaries and applied Markov chain Monte Carlo (MCMC) to generate test cases. While effective in some scenarios, these white-box approaches heavily rely on clearly defined partitions or program specifications, limiting their applicability or effectiveness when specifications are ambiguous, incomplete, or unavailable. To address some of those limitations, Dobslaw et al. introduced AutoBVA, an automated black-box boundary value exploration framework [11]. Their study demonstrated that black-box BVA could be automated, with AutoBVA successfully identifying notable boundary candidates. AutoBVA operates in two primary phases: detection and summarization. In the detection phase, the algorithm searches the input space to discover potential boundary candidates.
The authors experimented with two alternative search strategies: Local Neighbourhood Search (LNS) and Boundary Crossing Search (BCS), with BCS yielding better results. Once the detection phase is complete, the summarization phase clusters the identified boundary candidates to provide a concise summary for testers. Interestingly, during their evaluation, the authors uncovered unexpected behavior in one of Julia’s base functions, which led to raising a GitHub issue and later contributing a documentation fix — highlighting that BVE can reveal subtle edge cases even in well-established standard libraries. Dobslaw et al. [11] also introduced a sampling approach that combines compatible type sampling (CTS) with bituniform sampling. Bituniform sampling randomly selects numbers using bit-shifting to insert leading zeros, ensuring broad exploratory coverage. CTS complements this by sampling argument-wise based on compatible data types — for instance, integer types of different bit sizes, such as booleans (Bool), 8-bit integers (Int8), and 32-bit integers (Int32), are compatible as they share the integer supertype. Their experiments showed that this combination outperformed other sampling strategies. Despite AutoBVA’s strengths, the method revealed challenging research gaps. Due to the focus on high-quality boundary candidates, the framework may miss certain regions within the input space, resulting in reduced diversity of discovered solutions. Additionally, its computationally intensive summarization phase can introduce overhead, especially when analyzing complex SUTs. Our proposed framework, SETBVE, aims to overcome these barriers by leveraging quality-diversity optimization.
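One plausible reading of bituniform sampling, as described above, is to draw a full-width random integer and then right-shift it by a uniformly random amount, so the number of leading zero bits (and hence the order of magnitude) is roughly uniform rather than concentrated at large values. The exact scheme in [11] may differ; this sketch is an assumption for illustration.

```python
import random

def bituniform_sample(bits=64, rng=random):
    """Draw a bits-wide random integer, then right-shift by a uniform
    amount (0..bits), inserting leading zeros so that sampled magnitudes
    cover small and large values alike."""
    value = rng.getrandbits(bits)
    shift = rng.randrange(bits + 1)
    return value >> shift

rng = random.Random(42)
samples = [bituniform_sample(rng=rng) for _ in range(1000)]
```

With plain uniform sampling over 64-bit integers, almost every sample has close to 64 significant bits; the shift step spreads samples across bit widths, which is the "broad exploratory coverage" the text refers to.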
# 2.2 Quality Diversity Optimization Quality-Diversity (QD) optimization algorithms stand apart from typical stochastic optimization approaches by seeking a broad range of high-performing solutions across a feature space rather than a single optimum [5]. This feature space, often called the behavioral space, reflects the various behaviors of candidate solutions. Unlike multimodal optimization, which targets multiple optima, QD aims to cover the entire behavioral space, providing a wide variety of viable solutions. This breadth makes QD algorithms particularly valuable in domains where diverse solution behaviors are essential. However, these methods can face challenges in high-dimensional, noisy environments and can be computationally intensive, especially when fine-grained detail across the behavioral space is necessary [5]. Early QD algorithms include novelty search [25] and MAP-Elites [28]. Novelty search, introduced by Lehman and Stanley, encourages exploration by rewarding new behaviors [25]. It assesses novelty by measuring the distance to the closest observed solutions, promoting clusters of unique solutions without necessarily spreading evenly across all behavioral features. In contrast, MAP-Elites by Mouret and Clune uses an illumination-based approach to systematically map “elite” high-performing solutions across feature dimensions, constructing a rich landscape of solutions [28]. Although effective in avoiding local optima and exposing a broad range of solutions, MAP-Elites can be computationally demanding. Expanding on this foundation, Gravina et al. introduced “surprise” as an additional QD criterion, further encouraging exploration of underrepresented areas in the behavioral space [17]. Newer multi-objective QD extensions like MOME [31], which maintains a Pareto front of solutions for each cell in the behavioral space, and Bayesian-optimized methods like BOP-Elites [24] bring QD techniques to more complex and/or costly fitness evaluation scenarios.
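The core MAP-Elites loop described above can be sketched in a few lines: keep the best ("elite") solution per behavioral-descriptor cell, and repeatedly mutate existing elites to fill new cells or improve occupied ones. This is a minimal illustration on a toy problem, not the implementation from [28]; all names and the binning scheme are ours.

```python
import random

def map_elites(init, mutate, fitness, descriptor, iters=2000, rng=random):
    """Minimal MAP-Elites: archive maps descriptor cell -> (fitness, solution),
    retaining the fittest solution seen in each cell."""
    archive = {}

    def offer(sol):
        cell, f = descriptor(sol), fitness(sol)
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, sol)

    offer(init())
    for _ in range(iters):
        # Select a random elite as parent and offer its mutant to the archive.
        parent = rng.choice(list(archive.values()))[1]
        offer(mutate(parent))
    return archive

# Toy problem: maximize -(x^2 + y^2) while covering 1-unit grid cells of (x, y).
rng = random.Random(0)
archive = map_elites(
    init=lambda: (rng.uniform(-5, 5), rng.uniform(-5, 5)),
    mutate=lambda s: (s[0] + rng.gauss(0, 1), s[1] + rng.gauss(0, 1)),
    fitness=lambda s: -(s[0] ** 2 + s[1] ** 2),
    descriptor=lambda s: (int(s[0]), int(s[1])),
    rng=rng,
)
```

Note that, unlike pure fitness-based search, the archive retains a good solution for every visited cell, even cells far from the global optimum; this is what "illuminating" the behavioral space means.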
Cully et al. [8] proposed a modular framework for QD algorithms, later enhanced and implemented in the Python library pyribs [33]. This framework rests on three core components: behavioral descriptors, the archive, and emitters. The archive is a data structure collecting diverse candidate solutions based on their location in the behavioral descriptor space rather than strictly by performance. Solutions enter the archive if they surpass a novelty threshold, occupy a new part of the behavioral space, or outperform similar solutions in quality. Emitters, which generate or select solutions for the archive, operate with different strategies, prioritizing metrics like quality, novelty, or curiosity. Unlike traditional genetic operators, emitters can represent entire sub-optimization processes aimed at increasing solution quality or exploring new behavioral areas, thus offering a flexible approach to populating the behavioral space with diverse solutions. The concepts of archive and behavioral descriptors play a central role in both the design and understanding of SETBVE. Behavioral descriptors, in particular, can range from generic metrics (e.g., the number of exceptions thrown by a pair of inputs) to more SUT-specific measures such as the number of classes revealed in a simple classification task. Throughout Section 3 we explain how these quality-diversity concepts translate effectively into SETBVE. 2.2.1 Quality Diversity Optimization in Testing. QD algorithms have recently gained attention in software testing for their ability to explore diverse and high-performing test scenarios simultaneously. Novelty Search was first applied to test data generation in search-based structural testing by Boussaa et al. [3], who demonstrated its potential for exploring large input spaces. Instead of relying on fitness-based selection, their approach prioritizes test cases with high novelty scores — those that differ significantly from previously evaluated solutions.
Building on this idea, Marculescu et al. [27] compared exploration-based algorithms, including Novelty Search and MAP-Elites, with an objective-based approach for testing a clustering algorithm. Their results showed that exploration-focused methods cover a broader portion of the behavior space and produce more diverse solutions, with over $80\%$ of their outputs not found by the objective-based method, even under limited resources. They concluded that such algorithms are well-suited for investigating high-dimensional spaces, especially when information or computational power is constrained. Further reinforcing the benefits of QD in testing, Feldt and Poulding [15] investigated various methods for generating test inputs that exhibit high diversity with respect to specific features, including techniques inspired by the general principles of Novelty Search and MAP-Elites. Extending these ideas to test suite generation, Xiang et al. [36] applied the MAP-Elites algorithm to Software Product Lines (SPLs), combining objective functions with a user-defined behavior space. Their approach outperformed traditional single- and multi-objective methods, producing a wide range of effective and diverse test suites that support more informed decision-making. Beyond traditional software, QD algorithms have also been applied to testing deep learning systems. Riccio and Tonella [32] developed DeepJanus, a tool that generates input pairs with similar features but differing behaviors to map the behavior frontier of a DL system. By combining NSGA-II with Novelty Search, DeepJanus promotes diverse behavior discovery and avoids local optima, helping developers assess system quality and identify inputs that the system fails to handle properly. Building on similar input scenarios, Zohdinasab et al. [38] developed DeepHyperion, an open-source test input generator that leverages Illumination Search to produce diverse, high-performing test cases.
The approach explores the feature space of DL inputs by mapping interpretable input characteristics to behavioral outcomes, helping developers understand how structural and behavioral features affect system performance. As a result, DeepHyperion provides a human-interpretable view of system quality and helps uncover misbehaving or near-misbehaving cases across a range of feature dimensions. In follow-up work [39], the authors introduced DeepHyperion-CS, an improved version of DeepHyperion for testing DL systems. Instead of selecting inputs randomly, it prioritizes those with higher contribution scores — inputs more likely to expand feature space coverage. Experiments showed that DeepHyperion-CS outperforms the original tool in both efficiency and effectiveness at uncovering misbehaving inputs. In summary, QD algorithms have recently shown strong potential in testing, with successful applications in both traditional systems and deep learning models. Their appeal lies in the ability to flexibly define a behavioral space and efficiently explore large input domains, aiming not only to discover diverse solutions but also to ensure high quality within each explored region. Despite the growing interest in QD for testing, to the best of our knowledge, its application to BVE remains unexplored. Given the nature of BVE — where discovering diverse yet meaningful boundaries is key — QD offers a promising foundation. In this work, we propose an automated black-box BVE framework that integrates QD algorithms to search for diverse boundary candidates across behavioral space. # 3 SETBVE APPROACH We propose SETBVE, a framework for automated black-box BVE that incorporates elements of QD approaches. SETBVE aims to identify diverse boundaries and generate multiple example input pairs along each boundary, with the goal of simplifying the boundary summarization phase. 
While it still uses the program derivative as a metric for evaluating boundariness, similar to AutoBVA, SETBVE introduces a novel approach to boundary search, consisting of three main components — Sampler, Explorer, and Tracer — that can be combined in different ways (see Figure 2). The Sampler component generates random solutions and stores them in an archive, which serves both as a record of previously evaluated candidates and as a guide for further exploration. The archive helps ensure that a diverse set of boundary candidates is maintained throughout the search. In our implementation, the archive is structured as a multidimensional grid, where each cell corresponds to a unique combination of values defined by behavioral descriptors. This grid-based structure supports diversity-aware storage and facilitates easier summarization. SETBVE iteratively evaluates input-output pairs, making boundary candidates available either immediately after the search or in real time as the process progresses. The Explorer component modifies the sampled boundary candidates from the archive and explores the feature space with the aim of populating as many archive cells as possible within a specified time limit. Instead of focusing solely on high program derivative regions, Explorer promotes diversity as well, potentially reducing the risk of overlooking meaningful boundaries with lower derivative values but distinct characteristics. Its search strategy is guided by QD approaches, which consider both feature space diversity and quality. Additionally, our implementation of SETBVE uses Jaccard distance for output distance calculation, because it captures variations in strings more effectively than simpler metrics. The archive allows only one input pair per cell, making it inherently coarse. The Tracer component refines the search by identifying additional boundary candidates in the vicinity of those already stored in the archive.
This enables SETBVE to more thoroughly populate regions near known boundaries, increasing the density of boundary candidates around transition areas. Figure 3 illustrates this refinement, where the two circles represent the boundaries. The SUT takes $x$ and $y$ coordinates as input and outputs a string based on the coordinates’ position relative to the circles: “insideA”, “insideB”, or “outsideBoth”. The left side of the figure shows the output of using the Sampler and Explorer, which locate the boundaries of both circles. The Tracer then continues from these points, adding more boundary candidates to the input space, and attempting to trace the boundary. The result of this tracing process is displayed on the right side of the figure. Next, we detail these three main building blocks of SETBVE.

Fig. 2. Components of SETBVE, their combinations and interactions with the archive. Arrows indicate solution generation (to archive) and selection (from archive).

Fig. 3. Illustrative example of boundary refinement. On the left: output from using the Sampler and Explorer. On the right: output after applying the Tracer to the results from the Sampler and Explorer.

# 3.1 Sampler Sampler populates the archive with random solutions. We employ a grid-structured archive, where the dimensions are defined by behavioral descriptors, i.e., numerical representations of specific characteristics or features of a solution (in our case, an input-output pair). These descriptors partition the search space into distinct cells, with the framework’s ultimate goal being to discover as many cells as possible (diversity) while ensuring the quality within each discovered cell. For this version of SETBVE, we propose five behavioral descriptors: three generally applicable across all tested SUTs, and two selective descriptors that vary according to the output type of a SUT.
The general descriptors are the number of exceptions, total input length, and input length variance, while the selective descriptors are output abstraction number and output length difference.

• Number of exceptions: Since we consider pairs of input values, this descriptor captures three possible regions of the SUT, represented by the validity groups (see Section 2): Valid-Valid (VV) for input pairs that raise zero exceptions, Valid-Error (VE) for pairs raising one exception, and Error-Error (EE) when both inputs raise exceptions. This descriptor focuses on the output.

• Output abstraction number: This descriptor assigns a unique number to pairs of output classes in alphabetical order. For instance, in our experiments with Body Mass Index (BMI) classifications, the combination of outputA “Normal” with outputB “Overweight” (and vice-versa) is assigned a value to distinguish it from other combinations of outputs (e.g., “Overweight” and “Obese”). If an output includes an error, we treat the exception type (e.g., DomainError or ArgumentError) as a class. Note that this descriptor is only applicable to SUTs with categorical outputs.

• Output length difference: This descriptor captures the difference in output lengths by first converting outputs to strings. As when calculating the output abstraction number, if an output contains an error, we extract and use the exception type as the output. The length difference between the outputs is then calculated.

• Total input length: This descriptor is calculated by converting all input arguments to strings and summing their lengths. It diversifies inputs by their overall length and also serves as a filter after the search when, for instance, there is interest in inputs of a specific length.
• Input length variance: Like total input length, this descriptor is calculated by first converting input arguments to strings and then computing the variance of their lengths. In combination with total input length, this descriptor enables searches that yield inputs with both uniform and variable lengths. For example, it can capture cases with uniformly short arguments or cases with one long argument among shorter ones.

In total, we employ four behavioral descriptors as archive dimensions. For SUTs with categorical outputs, we use the following: number of exceptions, output abstraction number, total input length, and input length variance. For other SUTs, we use the following: number of exceptions, output length difference, total input length, and input length variance. This setup provides two dimensions focusing on output characteristics and two focusing on input characteristics. The archive population process is illustrated in the top-left section of Figure 4. First, the Sampler generates a random solution. For each solution, its behavioral descriptors and program derivative are computed, and the archive is checked to determine whether the corresponding cell is already occupied. If the cell is empty, the solution is stored. If occupied, the solution with the higher program derivative is retained, and the weaker one is discarded. This ensures that each archive cell contains the highest-ranked boundary candidate found for its descriptors. The Sampler building block is a fundamental component in all combinations of SETBVE elements. When used alone, the archive population process runs throughout the entire time budget. When combined with other blocks, the total time budget is divided between them. In all cases, the process begins with populating the archive (Sampler), and the other blocks are applied afterward. # 3.2 Explorer The goal of the Explorer is to enhance and diversify the solutions stored in the archive by applying mutations.
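The descriptor computation for non-categorical SUTs and the keep-the-best archive-population rule of Section 3.1 can be sketched as follows. The helper names and the exact binning (e.g., rounding the variance) are our assumptions for illustration.

```python
import math
from statistics import pvariance

def run_sut(sut, x):
    """Call the SUT; on an exception, use the exception type name as the
    output, per the descriptor definitions above."""
    try:
        return str(sut(x)), False
    except Exception as e:
        return type(e).__name__, True

def descriptors(sut, i1, i2):
    """The four archive dimensions for a non-categorical SUT (a sketch)."""
    o1, e1 = run_sut(sut, i1)
    o2, e2 = run_sut(sut, i2)
    s1, s2 = str(i1), str(i2)
    return (
        e1 + e2,                                   # number of exceptions (VV/VE/EE)
        abs(len(o1) - len(o2)),                    # output length difference
        len(s1) + len(s2),                         # total input length
        round(pvariance([len(s1), len(s2)]), 2),   # input length variance
    )

def offer(archive, cell, pd, candidate):
    """Keep only the highest-PD boundary candidate per archive cell."""
    if cell not in archive or pd > archive[cell][0]:
        archive[cell] = (pd, candidate)

# Example: math.sqrt raises ValueError for -4, so the pair (4, -4) is VE.
archive = {}
cell = descriptors(math.sqrt, 4, -4)
offer(archive, cell, 0.5, (4, -4))
```

A later candidate mapping to the same cell would only replace the stored one if its PD exceeds 0.5, which is exactly the retention rule described above.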
Diversification is achieved by discovering new cells in the archive by mutating existing boundary candidates, while improvement occurs when a mutated candidate with a higher PD value replaces an existing one in an occupied archive cell. The process is illustrated in the top right section of Figure 4, and begins by selecting a parent solution, which is mutated to generate a child solution. A parent solution is selected using one of three methods: (i) uniform random sampling, or score-proportionate selection based on the (ii) solution’s fitness (i.e., program derivative) or (iii) its curiosity score. Uniform random sampling gives each solution an equal chance of selection, whereas in the score-proportionate selection, the program derivative or curiosity scores are used to assign weights to solutions, followed by weighted random sampling.

Fig. 4. SETBVE framework: illustration of archive sampling, feature-space discovery process by Explorer (Algorithm 1), and boundary refinement by Tracer (Algorithm 2). Darker cells indicate regions with higher program derivative values.

The curiosity score represents the likelihood that a parent generates an offspring that will be added to the archive [8]. It is initialized to 0 at the start of the search and updated each time a child is produced. If a child is added to the archive — either by discovering a new cell or improving an existing solution — the parent’s curiosity score is increased by 1. Otherwise, if the mutated solution is not added, the curiosity score is decreased by 0.5. Once a parent solution is selected, it undergoes the mutation outlined in Algorithm 1. The mutation first brings the input pair closer together: a mid_point is selected at random between the inputs, and a new pair closer to this mid_point is chosen between the two original inputs (line 3). This ensures that the resulting pair is closer together than the original. Next, the mutation introduces a randomized shift to one argument of the input pair (line 6).
The shift is a random fraction of the distance between the inputs, chosen at random as positive or negative, creating a relative offset. Finally, the behavioral descriptors and the program derivative of the child solution are evaluated to determine whether it should be added to the archive.

Algorithm 1 Mutation Operator

Similar to the process of populating the archive, the mutated solution is stored if the corresponding cell is empty or if it outperforms the existing solution in terms of program derivative; otherwise, it is discarded. This process repeats iteratively until the allocated time budget is exhausted. # 3.3 Tracer Note that the archive stores only the highest-quality solutions for each behaviour defined by the behavioural descriptors, hence becoming intentionally coarse. The goal of the Tracer is to refine the search by closely examining regions around identified boundary candidates to uncover additional solutions in these areas of the input space. As shown in the bottom part of Figure 4, the process consists of two steps: cell prioritization and boundary tracing for the prioritized cells. An example of a boundary is illustrated as a curved line in the input space, traced by multiple boundary candidates. These candidates help visualize the boundary’s pattern, potentially offering insights into its shape and structure. The Tracer primarily aims to generate additional solutions near the detected boundary, enabling a more detailed description and analysis for future research. 3.3.1 Cells Prioritization. When refining the boundaries, we focus on transitions between equivalence partitions in the VV group and the shift from valid inputs to those causing exceptions (VE). Therefore, we begin by ranking solutions within each validity group based on their program derivative values, ensuring that the most promising boundary candidates are prioritized.
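The mutation operator (Algorithm 1) described above, contraction toward a random mid-point followed by a randomized shift of one argument, can be sketched for a pair of scalar inputs. The exact contraction and shift distributions are not fully specified in the text, so those details are assumptions.

```python
import random

def mutate(i1, i2, rng=random):
    """Sketch of the Algorithm 1 mutation for a scalar input pair:
    (1) contract the pair toward a random point between the inputs,
    (2) shift one argument by a random signed fraction of the pair's
    remaining distance."""
    # Step 1: contraction, so the child pair is closer than the parent pair.
    mid = rng.uniform(min(i1, i2), max(i1, i2))
    c1 = rng.uniform(min(i1, mid), max(i1, mid))
    c2 = rng.uniform(min(i2, mid), max(i2, mid))
    # Step 2: randomized relative offset on one argument.
    shift = rng.uniform(0, abs(c1 - c2)) * rng.choice([-1, 1])
    if rng.random() < 0.5:
        c1 += shift
    else:
        c2 += shift
    return c1, c2

rng = random.Random(1)
child = mutate(-10.0, 10.0, rng)
```

The contraction step biases children toward tight input pairs (the numerator of high PD values), while the shift keeps the search from collapsing onto a single point.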
From these ranked solutions, a subset of the top candidates is selected for a focused search in their vicinity within the input space, refining and expanding identified boundaries. To prioritize these cases, candidates are allocated as follows: $50\%$ from the VV group, $40\%$ from the VE group, and $10\%$ from the EE group. If a validity group lacks enough solutions to meet its quota, the shortfall is compensated by selecting candidates from other groups with a surplus. 3.3.2 Boundary Tracing. Figure 5 illustrates the iterative search process, starting with the first selected boundary candidate. An initial example solution is provided as a starting point. The circle in the figure represents a boundary within a SUT. The Tracer component is applied either after the Sampler (when the Explorer is not used) or after the Explorer. Once the Sampler or Explorer has been applied, a list of solutions is produced and sorted in descending order based on their program derivative values. The tracing process begins with the first input pair $( x _ { 1 } , y _ { 1 } , x _ { 2 } , y _ { 2 } )$, around which a search region is defined in the input space. To determine the search range for each input argument, we compute the differences between 30 consecutive input pairs (sorted by program derivative), starting from the one currently being analyzed. The median of these differences is used as an offset, applied in both positive and negative directions around the selected input pair, thereby defining the search region. This range is illustrated in Figure 5 as colored rectangles surrounding the input pair, representing the area explored during boundary refinement.

Fig. 5. Boundary tracing process.

Once the search bounds are established, we conduct a single-objective optimization process to identify boundary candidates that meet the desired criteria within these bounds.
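The quota-based cell prioritization described above (50% VV, 40% VE, 10% EE, with shortfalls filled from surplus groups) can be sketched as follows. The rounding of quotas and the tie-breaking by PD are our assumptions.

```python
def prioritize(candidates, n):
    """Select n top candidates by PD with per-validity-group quotas
    (50% VV, 40% VE, 10% EE); shortfalls in one group are filled with
    the best remaining candidates from groups with a surplus.
    Each candidate is a (pd, validity_group) tuple."""
    quotas = {"VV": round(0.5 * n), "VE": round(0.4 * n), "EE": round(0.1 * n)}
    by_group = {g: sorted((c for c in candidates if c[1] == g),
                          key=lambda c: -c[0]) for g in quotas}
    selected, leftovers = [], []
    for g, q in quotas.items():
        selected += by_group[g][:q]
        leftovers += by_group[g][q:]
    # Fill any shortfall with the best remaining candidates, then cap at n.
    leftovers.sort(key=lambda c: -c[0])
    selected += leftovers[:max(0, n - len(selected))]
    return selected[:n]

cands = [(0.9, "VV"), (0.8, "VV"), (0.7, "VE"), (0.2, "EE"), (0.1, "EE")]
picked = prioritize(cands, 4)
```

Here the VE group has only one candidate for a quota of two, so the best surplus candidate from the EE group fills the gap.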
The right part of Figure 5 illustrates the ideal outcome, where boundary candidates are well-spaced and aligned with the boundary. Boundary candidates are input pairs that are close to one another but produce distinctly different outputs (e.g., falling inside and outside the circle). The goal of this search process is to generate 30 boundary candidates inside the search region that balance these objectives. We define an objective function that balances program derivatives and distances to achieve both the spread between boundary candidates and the closeness within input pairs. Specifically, the objective aims to maximize the total sum of program derivatives across the 30 newly generated boundary candidates while also maximizing the overall distance between them. Since program derivatives are normalized to values between 0 and 1, and distances can vary widely, we scale the sum of the 30 program derivatives by a weight ($W$), representing the maximum possible distance between any two solutions in the search space. We calculate $W$ as the Euclidean distance between the minimum and maximum values within the search bounds $(x1_{\mathrm{min}}, y1_{\mathrm{min}}, x2_{\mathrm{min}}, y2_{\mathrm{min}})$ and $(x1_{\mathrm{max}}, y1_{\mathrm{max}}, x2_{\mathrm{max}}, y2_{\mathrm{max}})$, resulting in Equation 2, our complete objective function.
$$ \mathrm{Objective\ Function}\ (f_{obj}) = \Big( W \cdot \sum_{i=1}^{30} \mathrm{PD}(BC_i) \Big) + \sum_{\substack{i,j=1 \\ i \neq j}}^{30} \mathrm{Distance}(BC_i, BC_j) $$

where:

• $W$ is the weight calculated as the maximum Euclidean distance within the search space:

$$ W = \sqrt{(x1_{\mathrm{max}} - x1_{\mathrm{min}})^2 + (y1_{\mathrm{max}} - y1_{\mathrm{min}})^2 + (x2_{\mathrm{max}} - x2_{\mathrm{min}})^2 + (y2_{\mathrm{max}} - y2_{\mathrm{min}})^2} $$

• $\mathrm{PD}(BC_i)$ is the program derivative of boundary candidate $BC_i$

• $\mathrm{Distance}(BC_i, BC_j)$ is the Euclidean distance between boundary candidates $BC_i$ and $BC_j$

Calculating the entire fitness and the total distance between all solutions in each iteration is computationally expensive. To simplify, we approximate Equation 2 by calculating the program derivative and distance for just two boundary candidates: a newly generated candidate and a randomly selected one from the current population. This yields Equation 3, which follows the same logic as Equation 2 but reduces the calculation load from 30 to only two (the newly generated boundary candidate (child) and one selected from the population).

$$ \mathrm{Objective\ Function}\ (f_{obj}) = \big( W \cdot (\mathrm{PD}(BC_{\mathrm{child}}) + \mathrm{PD}(BC_{\mathrm{rand}})) \big) + \mathrm{Distance}(BC_{\mathrm{child}}, BC_{\mathrm{rand}}) $$

where:

• $BC_{\mathrm{child}}$ is the newly generated (mutated) boundary candidate

• $BC_{\mathrm{rand}}$ is a randomly selected boundary candidate from the current population, excluding the parent solution

The pseudocode for boundary tracing is shown in Algorithm 2.
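The approximate objective (Equation 3) is straightforward to compute; the sketch below illustrates it for four-dimensional boundary candidates $(x_1, y_1, x_2, y_2)$, with the weight $W$ taken as the diagonal of the search region as defined above. Function names are ours.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length coordinate tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def objective(pd_child, pd_rand, bc_child, bc_rand, bounds):
    """Approximate tracing objective (Eq. 3): W is the search region's
    diagonal, which puts the PD terms on the same scale as the raw
    distance term."""
    lo, hi = bounds  # ((x1min, y1min, x2min, y2min), (x1max, y1max, x2max, y2max))
    W = euclidean(lo, hi)
    return W * (pd_child + pd_rand) + euclidean(bc_child, bc_rand)

# Toy region whose diagonal is 5 (a 3-4-5 triangle in the first two axes).
bounds = ((0, 0, 0, 0), (3, 4, 0, 0))
f = objective(0.5, 0.3, (0, 0, 0, 0), (3, 4, 0, 0), bounds)
```

With $W = 5$, PDs of 0.5 and 0.3, and a candidate distance of 5, the objective evaluates to $5 \cdot 0.8 + 5 = 9$, showing how $W$ keeps neither term from dominating.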
The process iterates over a subset of the top-$n$ boundary candidates, allocating an equal share of the total tracing time budget ($1/n$) to each. For each candidate, the search starts by initializing a population of 30 solutions within defined bounds. While time remains, a parent solution is randomly selected from the population and mutated using the operator defined in Algorithm 1.

# Algorithm 2 Boundary Tracing

1: Input: Top-$n$ boundary candidates
2: Output: Refined set of boundary candidates
3: tracing_populations ← empty list
4: for each boundary candidate in top-$n$ candidates do
5:  Initialize population with 30 random boundary candidates
6:  while allocated time for candidate not expired do
7:   $BC_{\mathrm{parent}}$ ← randomly sample from the population
8:   $BC_{\mathrm{child}}$ ← mutate $BC_{\mathrm{parent}}$ using Algorithm 1
9:   $f_{\mathrm{parent}}$ ← evaluate objective function (Eq. 3) on $BC_{\mathrm{parent}}$
10:  $f_{\mathrm{child}}$ ← evaluate objective function (Eq. 3) on $BC_{\mathrm{child}}$
11:  if $f_{\mathrm{child}} > f_{\mathrm{parent}}$ then
12:   Replace $BC_{\mathrm{parent}}$ with $BC_{\mathrm{child}}$ in population
13:  end if
14:  end while
15:  Append population to tracing_populations
16: end for
17: return tracing_populations

For each mutation, Equation 3 is used to calculate the objective function for both the parent and mutated solutions. If the mutated solution yields a higher objective function score, it replaces the parent solution in the population (lines 9–13). Otherwise, the search resumes by selecting a new parent to mutate, with selection occurring uniformly at random. This process continues until the time for the current boundary candidate expires, after which it repeats for the next boundary candidate.
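The boundary tracing loop of Algorithm 2 can be sketched generically, parameterized over the population initializer, mutation operator, and objective. The toy usage below traces the unit circle as a stand-in boundary; all names and the toy objective are ours, not the paper's implementation.

```python
import random
import time

def trace(top_candidates, init_population, mutate, objective, budget_s,
          rng=random):
    """Sketch of Algorithm 2: for each top candidate, evolve a local
    population of 30 boundary candidates under an equal time share,
    replacing a randomly chosen parent whenever its mutant scores higher."""
    tracing_populations = []
    share = budget_s / len(top_candidates)
    for candidate in top_candidates:
        population = init_population(candidate, 30)
        deadline = time.monotonic() + share
        while time.monotonic() < deadline:
            parent = rng.choice(population)
            child = mutate(parent)
            if objective(child, population) > objective(parent, population):
                population[population.index(parent)] = child
        tracing_populations.append(population)
    return tracing_populations

# Toy usage: reward points near the unit circle x^2 + y^2 = 1.
rng = random.Random(0)
pops = trace(
    top_candidates=[(0.0, 0.0)],
    init_population=lambda c, k: [(c[0] + rng.uniform(-1, 1),
                                   c[1] + rng.uniform(-1, 1)) for _ in range(k)],
    mutate=lambda p: (p[0] + rng.gauss(0, 0.1), p[1] + rng.gauss(0, 0.1)),
    objective=lambda bc, pop: -abs((bc[0] ** 2 + bc[1] ** 2) - 1.0),
    budget_s=0.05,
    rng=rng,
)
```

Note that losing mutants are simply discarded, so the population size stays fixed at 30 throughout, matching lines 5 and 12 of the pseudocode.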
# 4 EMPIRICAL EVALUATION To evaluate the proposed framework’s effectiveness in identifying boundary candidates, we follow Wohlin et al.’s [35] guidelines for software engineering experiments. Our evaluation focuses on comparing different search approaches aiming to identify high-quality boundary candidates while emphasizing their diversity from the perspective of testers or developers. Specifically, we investigate the following research questions:

• RQ1: How effective are the search strategies in identifying boundaries in terms of both quality and diversity?

• RQ2: How do the search strategies differ in their ability to discover unique regions of the behavioral space?

• RQ3: Can the Tracer component further delineate identified boundary regions?

For RQ1 and RQ2, we evaluate boundary detection performance across various SETBVE configurations and compare it with AutoBVA — the only existing automated boundary detection framework. Although AutoBVA is the most relevant baseline, it is not directly comparable due to differences in framework design. To reduce these differences and enable a more meaningful comparison, we take two steps: (1) selecting a subset of relatively high-PD solutions for analysis, and (2) mapping AutoBVA’s solutions to the same archive cell structure used in SETBVE. To define this subset, we establish a boundariness threshold for each SUT based on all experimental runs, selecting solutions with a program derivative of 1, along with the top $1\%$ of remaining solutions ranked by PD across all search strategies. Additionally, to enable a direct comparison, we map AutoBVA’s solutions to corresponding archive cells by computing behavioral descriptors for each input-output pair. RQ1 examines how different variations of SETBVE and AutoBVA perform in identifying boundary candidates, comparing their effectiveness in terms of quality and diversity.
We assess: (1) the boundariness of the solutions (i.e., the ability to show change in the SUT's behavior) and (2) archive coverage (i.e., the extent to which the search space is explored, reflecting diversity). To quantify these aspects, we introduce specific measures, detailed in Section 4.1. RQ2 focuses on comparing search strategies based on their ability to discover boundary candidates that exhibit distinct and unique behaviors. These behaviors are defined using archive dimensions, which capture input and output characteristics in our experiments. These descriptors can be customized by developers or testers to align with their specific priorities. To address this question, we evaluate how many archive cells — containing relatively high-PD solutions — each search strategy uniquely discovers. To compare strategies from a global perspective, we aggregate results across all runs for each SUT within a fixed time budget allocated to each search strategy. Lastly, RQ3 focuses on the Tracer component of the SETBVE framework, assessing its ability to refine and expand identified boundary regions. Specifically, we evaluate whether it can discover additional boundary candidates that are close to existing ones and that contribute to a more complete distribution along the boundary. To measure its effectiveness, we visualize boundary regions before and after applying the Tracer, analyzing improvements in coverage and distribution.

# 4.1 Quality and Diversity Metrics

To enable consistent comparison between different search approaches across various SUTs, we introduce two key metrics that assess both solution performance and behavioral space coverage: Relative Program Derivative (RPD) and Relative Archive Coverage (RAC).

4.1.1 Relative Program Derivative (RPD). RPD quantifies the quality of a boundary candidate by normalizing its PD relative to the best observed PD within its corresponding archive cell for a given SUT.
This normalization addresses the gap between the theoretical maximum (PD = 1) and the empirically highest PD observed, which can vary across behavioral domains (i.e., archive cells). Since calculating the exact maximum PD for each cell is infeasible, we approximate it using the highest PD observed across all empirical runs. To illustrate this, consider a SUT that takes a single integer input and returns either “Positive” for numbers greater than 0 or “Negative” for numbers less than 0. For the input pair i1 = -1, i2 = 1, the outputs are o1 = “Negative” and o2 = “Positive”, respectively. Since the function only accepts integers, this pair represents the closest input pair resulting in a change from “Negative” to “Positive”. For example, using Jaccard distance based on 2-grams, the output distance is 0.73 and the input distance is 2, yielding a PD of 0.37. Now, assume we use a grid archive defined by two dimensions: output length difference and total input length. The corresponding cell with coordinates (0, 3) (where 0 represents no output length difference, and 3 is the total input length) has a maximum achievable PD of 0.37. Suppose another boundary candidate is identified with the input pair -2 and 1, resulting in PD = 0.24 but assigned to the same archive cell (0, 3). In the first case, $RPD_{(-1,1)} = 0.37/0.37 = 1$, while $RPD_{(-2,1)} = 0.24/0.37 = 0.65$. Note that $RPD_{(-1,1)} > RPD_{(-2,1)}$. The choice of distance functions for calculating the program derivative depends on the input and output types of the SUT. In our experiments, since the inputs are numeric, we use Euclidean distance to measure input differences.
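A Python sketch reproducing the worked example above (the paper's implementation is in Julia; this set-based 2-gram variant reproduces the quoted output distance of 0.73, and the resulting PD of about 0.36 matches the paper's 0.37 up to intermediate rounding):

```python
def two_grams(s: str) -> set:
    """Set of character 2-grams of a string."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

def jaccard_distance(a: str, b: str) -> float:
    ga, gb = two_grams(a), two_grams(b)
    return 1.0 - len(ga & gb) / len(ga | gb)

def program_derivative(i1: int, i2: int, o1: str, o2: str) -> float:
    # PD = output distance / input distance (Euclidean for numeric inputs).
    return jaccard_distance(o1, o2) / abs(i1 - i2)

pd = program_derivative(-1, 1, "Negative", "Positive")  # ≈ 0.73 / 2 ≈ 0.36
rpd = 0.24 / 0.37                                       # RPD of the (-2, 1) pair ≈ 0.65
```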
For output differences, we employ Jaccard distance based on 2-grams [22], which captures string variations while remaining computationally efficient for short outputs in our experimental SUTs.

4.1.2 Relative Archive Coverage (RAC). As with RPD, it is infeasible to determine precisely which archive cells can be covered. RAC therefore quantifies the diversity of solutions by measuring the proportion of the behavioral space covered relative to the empirically observed maximum coverage. The concept and motivation behind RAC align with the notion of coverage described in [28], but due to the broad and varied use of the term coverage in software testing, we adopt the term RAC to avoid ambiguity. The need for RAC arises from the fact that, depending on the SUT and the chosen behavioral descriptors, some archive cells may be inherently impossible to populate. For example, consider a grid archive defined by two dimensions: output length difference and total input length, applied to a SUT with a single integer input. While it is possible to populate a cell where the output string length difference is 0 (i.e., two outputs of equal length), the total input string length for a solution (pair of inputs) cannot be less than 2, as each input must have at least one character. As a result, cells with a total input length of 0 or 1 are impossible to populate in the archive. We normalize archive coverage using the empirically observed maximum to allow for consistent comparisons of coverage across different SUTs and search strategies. Since certain combinations of behavioral descriptor values may be unattainable due to the constraints of the SUT, we approximate the feasible archive cells by aggregating all discovered archive cells across all runs and search strategies.

# 4.2 Description of SUTs

We evaluate the search approaches on a total of ten SUTs, all operating at the unit level and taking integer inputs, though they differ in input structure and output behavior.
Six of these SUTs are taken from Julia's core library, Base, and each accepts two numeric arguments as input. These include cld (computes the ceiling of the division of two numbers), fld (computes the floor of the division), fldmod1 (returns both floor division and modulo as a pair), max (returns the greater of two values), power_by_squaring (raises the first argument to the power of the second), and tailjoin (joins the types of elements from a given index to the end). Although the AutoBVA paper evaluated a larger number of Base functions, these six were specifically shortlisted in that work. For these SUTs, we report base performance characteristics using our metrics.

Table 1. The different configurations used in the search strategies. CTS: Compatible Type Sampling; BU: bituniform sampling; S, E, T: Sampler, Explorer, Tracer.

We perform a more detailed evaluation — both quantitative and qualitative — on four additional SUTs: bytecount, circle, bmi, and date. Below, we provide the rationale for selecting each SUT, along with details on their inputs and outputs. The code for each SUT was implemented in Julia and is available in our replication package.

• bytecount (bytes: Int): Takes an integer representing the number of bytes as input and returns a human-readable string for valid inputs. If the input is out of bounds, the function throws an exception.
• circle (x: Int, y: Int): Receives two integers representing x and y coordinates on a plane, where a circle is centered at the origin. The function determines whether the given coordinates fall inside or outside the circle, returning “in” or “out”, respectively. If the input corresponds to the center of the circle, an exception is thrown.
• bmi (h: Int, w: Int): Takes two integers as input: h represents a person's height in centimeters, and w represents their weight in kilograms. The function returns a string indicating the BMI category, ranging from underweight to severely obese.
If either input is negative, the function throws an exception.
• date (d: Int, m: Int, y: Int): Receives three integers representing day, month, and year. The function returns a string representing the corresponding date in the proleptic Gregorian calendar. If the input forms an invalid date, an exception is thrown, specifying which argument caused the error.

Each SUT presents unique testing challenges, differing in input structure and boundary complexity. bytecount focuses on transitions between byte-scale boundaries, where rounding and threshold effects may arise. circle works in a two-dimensional space, where an out-of-range value in either input renders the other irrelevant, defining a finite, well-defined boundary. The bmi SUT increases complexity, as the output depends on the combination of height and weight, with continuous rather than finite boundaries. Lastly, date has three interdependent inputs (day, month, and year), introducing dynamic constraints such as leap years and varying month lengths.

# 4.3 Experimental Setup

We compare the performance of AutoBVA and different configurations of SETBVE by combining its building blocks: Sampler, Explorer, and Tracer. This comparison serves as a form of ablation study to better understand the relative importance and contribution of each component in the framework. Table 1 summarizes the experimental setup for each search strategy. We implement the search strategies and the experiment instrumentation in Julia. When evaluating the Sampler component alone, we compare two methods for populating the archive: (1) random uniform sampling of integer values that use 64 bits (Int64) as a baseline, and (2) a combination of CTS with bituniform sampling (Section 2), which was identified as the best-performing sampling method in previous work with AutoBVA [11]. When the Sampler is combined with other components, we use CTS with bituniform sampling for creating the initial population.
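As a concrete reference for the SUT descriptions in Section 4.2, here is a minimal Python rendering of the circle SUT (the paper implements its SUTs in Julia; the radius is an assumed constant, and whether points exactly on the circle count as “in” is not specified in the paper):

```python
import math

RADIUS = 10  # assumed for illustration; the paper does not state the radius

def circle(x: int, y: int) -> str:
    """Classify a point as inside or outside a circle centered at the origin."""
    # Exception at the circle's center, mirroring the Julia DomainError.
    if (x, y) == (0, 0):
        raise ValueError("input is the center of the circle")
    return "in" if math.hypot(x, y) <= RADIUS else "out"
```

The “in”/“out” boundary and the exception at the origin are exactly the behavioral transitions the search strategies must locate.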
The Explorer component applies different parent selection strategies: uniform random selection (randomly choosing parents), fitness-biased selection (favoring high-performing solutions), and curiosity-biased selection (favoring parents that are likely to generate novel or improved offspring). SETBVE mutates solutions using the operator described in Algorithm 1, whereas AutoBVA employs two types of search strategies: Local Neighbourhood Search (LNS) and Boundary Crossing Search (BCS), both using increment and decrement operations for numeric inputs. In this work, we use the results of the BCS search for AutoBVA, as it performed better than LNS. Another key distinction between AutoBVA and SETBVE is the choice of output distance function used to calculate PDs: AutoBVA uses the difference in output length, whereas SETBVE uses the Jaccard distance. For comparability of results, we reuse the time budgets from previous experiments with AutoBVA, namely 30 seconds and 600 seconds. In SETBVE, the budget is distributed among its components. When only the Sampler is used, the entire budget is dedicated to solution sampling. When the Sampler is combined with the Explorer, 10% of the budget is allocated to initializing the archive, while the Explorer utilizes the remaining 90% to explore the search space through mutation for boundary identification. In the configuration combining the Sampler and Tracer, 90% of the budget is allocated to solution sampling, while the Tracer refines the found boundaries with the remaining 10%. When all three components are combined, the Sampler and Tracer each receive 10%, while the Explorer receives 80%, as it benefits most from additional time to discover different regions of the behavioral space. We leave the optimization of these budget allocations for future work.
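The stated allocations can be summarized in a small helper (a sketch; the component keys and integer-second splits are illustrative, not the paper's API):

```python
def split_budget(components, total_s):
    """Time-budget split per component, as described above.

    `components` is a set drawn from {"S", "E", "T"}
    (Sampler, Explorer, Tracer); the Sampler is always present.
    """
    splits = {
        frozenset({"S"}): {"S": 100},
        frozenset({"S", "E"}): {"S": 10, "E": 90},
        frozenset({"S", "T"}): {"S": 90, "T": 10},
        frozenset({"S", "E", "T"}): {"S": 10, "E": 80, "T": 10},
    }
    percents = splits[frozenset(components)]
    # Integer division keeps the shares in whole seconds.
    return {c: total_s * p // 100 for c, p in percents.items()}
```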
In total, we evaluate nine SETBVE configurations, derived from four different combinations of building blocks (see Figure 2), two sampling strategies, and three parent selection strategies, for each SUT and time budget. The experiments are conducted on the ten SUTs described in Section 4.2. Nine of these SUTs were used in the AutoBVA study, and one additional SUT (circle) is introduced in this paper. For AutoBVA results, we use the publicly available data for bytecount, bmi, and date, and we run the AutoBVA framework ourselves for the new circle SUT. In addition, while the results for cld, fld, fldmod1, max, power_by_squaring, and tailjoin are available in the AutoBVA replication package, they are limited to 30-second runs. For our empirical evaluation, we re-run AutoBVA for these SUTs with a 600-second time budget. Following the protocol used in the AutoBVA study, each search strategy is executed 20 times to account for pseudorandom variation.

# 5 RESULTS AND ANALYSIS

In this section, we analyze and compare how well each method covers the behavioral space, the quality of the boundaries identified, their ability to discover unique behaviors, and the impact of boundary tracing on results. Each research question is discussed in a dedicated subsection, which concludes with a summary of key findings.

# 5.1 RQ1 - Quality and Diversity of Boundary Candidates

In RQ1, we use the Relative Program Derivative (RPD) and Relative Archive Coverage (RAC) to compare, respectively, the quality and diversity of boundary candidates identified by different search strategies across the tested SUTs. We evaluate each strategy using two time budgets: 30 seconds and 600 seconds. The results presented in Tables 2–5 are sorted by RAC at 600 seconds because our primary focus is on the diversity of boundary candidates and exploration of the search space. All reported values represent the mean and standard deviation from 20 experimental runs.
Note that RPD is normalized to a range from 0 to 1, while RAC expresses the percentage of empirically observed archive cells with relatively high PD that are covered. Consequently, a RAC value of 100% indicates that all such archive cells identified across all runs and search strategies have been discovered.

Table 2. Descriptive statistics $(\mu \pm \sigma)$ for quality (RPD) and diversity (RAC) for the bytecount SUT. The top three results for each metric, including ties, are highlighted in bold. CTS: Compatible Type Sampling, BU: bituniform sampling.

5.1.1 Bytecount. Table 2 shows that AutoBVA quickly achieves maximum quality (RPD = 1.0) at 30 seconds, yet its diversity remains relatively low (around 58%). Random sampling performs poorly, yielding an RPD of zero across both time budgets and low RAC values (8% at 30s and 10% at 600s), illustrating its ineffectiveness. Similar to the findings in previous studies with AutoBVA, introducing CTS with bituniform sampling enhances the performance, especially when combined with uniform or curiosity-based exploration. These two Explorer options consistently deliver near-perfect quality (RPD ≈ 0.97–0.99) while also attaining high coverage of candidates (above 94% at 30s and up to 100% at 600s). In contrast, fitness-based exploration (rows 7 and 8) is notably less diverse, particularly at 30s (RAC ≈ 47%–58%), though it increases up to 82% at 600s. Moreover, when using bituniform sampling even without the Explorer and Tracer components, RPD remains high (0.98), and RAC exceeds that of AutoBVA (70% at 30s and 91% at 600s). The impact of adding the Tracer for curiosity-driven and uniform random exploration is minimal in terms of both quality and diversity.
In contrast, for fitness-based exploration, the Tracer has a more noticeable impact on RAC, as it improves archive coverage at both time budgets (see rows 7 and 8).

5.1.2 Circle. The results in Table 3 show that AutoBVA consistently achieves the highest RPD scores (above 0.94) at both time budgets. However, this comes at the cost of behavioral diversity, with RAC values remaining relatively low — around 32% at 30 seconds and 36% at 600 seconds. In contrast, SETBVE configurations achieve much higher RAC, even without the Explorer or Tracer. For example, the Sampler-only setup (row 1) yields a moderate RPD (0.7 at 30s, 0.84 at 600s), but RAC improves over time, increasing from 52% to 72%. Explorer configurations result in lower RPD scores at 30 seconds (ranging from 0.5 to 0.57) and RAC values clustered around 37–39%. After 600 seconds, RPD increases across all Explorer variants, with uniform random selection reaching the highest (0.85), followed by fitness- and curiosity-based strategies (0.8 and 0.77, respectively). However, RAC values across all Explorer variants converge in the 57–58% range.

Table 3. Descriptive statistics $(\mu \pm \sigma)$ for quality (RPD) and diversity (RAC) across search strategies and time budgets for the circle SUT. The top three results for each metric are highlighted in bold. CTS: Compatible Type Sampling, BU: bituniform sampling.

Enabling the Tracer yields small improvements in RAC compared with tracer-less settings (e.g., 51.52% vs. 52.28% for no Explorer at 30s), while leaving RPD relatively unchanged (rows 1 and 2).

Table 4. Descriptive statistics $(\mu \pm \sigma)$ for quality (RPD) and diversity (RAC) across search strategies and time budgets for the bmi SUT. The top three results for each metric are highlighted in bold. CTS: Compatible Type Sampling, BU: bituniform sampling.

5.1.3 BMI.
The results, as detailed in Table 4, show that AutoBVA has the highest RPD score of 1.0 at both 30 and 600 seconds, but its diversity remains exceptionally low (RAC ≈ 2%–3%), highlighting its focus on quality at the expense of diversity. In contrast, SETBVE with bituniform sampling exhibits a trade-off between RPD and RAC depending on the chosen Explorer and Tracer configurations. The highest diversity levels are observed with no Explorer and the Tracer enabled (48% at 30s, 78% at 600s), suggesting that this configuration effectively enhances diversity while maintaining a relatively high RPD (0.4 at 30s, 0.54 at 600s). Among the Explorer strategies, fitness-based exploration produces the highest RPD (0.5 at 30s, 0.56 at 600s) but sacrifices diversity (31% and 60%, respectively). When the Tracer is enabled, RPD increases slightly (0.52 at 30s), but RAC remains low. However, curiosity-based exploration leads to lower quality values (RPD ≈ 0.3–0.4), with RAC ranging from 31% at 30s to 60% at 600s. Uniform random exploration provides a more balanced approach, achieving reasonable RPD values (RPD ≈ 0.3–0.4) while maintaining one of the highest RAC scores at 600 seconds (79%–80%).

Table 5. Descriptive statistics $(\mu \pm \sigma)$ for quality (RPD) and diversity (RAC) across search strategies and time budgets for the date SUT. The top three results for each metric are highlighted in bold. CTS: Compatible Type Sampling, BU: bituniform sampling.

5.1.4 Date. Table 5 reveals a clear trade-off between quality and diversity for the date SUT. AutoBVA achieves the highest RPD (0.91 at 30s, 0.96 at 600s), but at the cost of extremely low diversity (e.g., only 5.47% RAC at 600s).
In contrast, configurations under SETBVE with bituniform sampling exhibit far higher diversity (RAC > 50% at 600s) but attain modest RPD scores (RPD < 0.3 in all instances). Among the SETBVE strategies, those without an Explorer consistently yield the highest diversity (over 40% at 30s and nearly 80% by 600s), showing that omitting the Explorer can be advantageous for this specific SUT in producing diverse boundary candidates. On the other hand, fitness-based exploration reaches higher RPD values (up to 0.21 at 30s), yet its diversity lags behind (12% at 30s and 55% at 600s). Uniform exploration achieves moderate RPD (starting low at 0.07 and increasing to 0.21 at 600s) and relatively high diversity at 600s (RAC = 74%). Curiosity-based exploration performs similarly or slightly lower in both RPD and RAC relative to uniform random parent selection. Enabling the Tracer generally leads to a marginal increase in RPD and RAC within each exploration category. Finally, random sampling fails to make any progress, yielding zero values for both quality and diversity.

5.1.5 SUTs from Julia Base. Table 6 presents RAC and RPD values averaged over 20 runs for a set of Julia Base functions, comparing AutoBVA with two SETBVE configurations: SE-Uniform (Sampler and Explorer) and SET-Uniform (Sampler, Explorer, and Tracer). In this experiment, we evaluate SETBVE using the uniform parent selection strategy for the Explorer. This choice is based on prior observations from our analysis of four SUTs (bytecount, circle, bmi, and date), where uniform random parent selection delivered competitive results in most cases. The results show a consistent trend across all SUTs: SETBVE configurations achieve higher diversity (RAC) compared to AutoBVA, indicating their effectiveness in exploring a wider behavioral space.
In contrast, AutoBVA consistently attains nearly maximal RPD values, reflecting its emphasis on optimizing individual boundary candidates. This optimization, however, results in lower RAC values, which remain limited even after extended execution (600 seconds). SETBVE configurations improve both RAC and RPD as the runtime increases; for example, RAC values typically grow from approximately 25–45% at 30 seconds to about 60–80% at 600 seconds, while RPD rises from around 0.2 to approximately 0.4 over the same period for most SUTs. Including the Tracer component has minimal impact on both RAC and RPD metrics.

5.1.6 General Trends. Figure 6 illustrates the relationship between RPD and RAC for the bytecount, circle, bmi, and date SUTs. For detailed numeric values, refer to Tables 2–5. Methods utilizing the Tracer component are omitted from the figure because the Tracer has only a modest impact on RPD and RAC in most tested scenarios.

Table 6. RPD and RAC $(\mu \pm \sigma)$ for SUTs from Julia Base. SE-Uniform refers to the SETBVE configuration using a Sampler (CTS with bituniform sampling) and Explorer (uniform random parent selection). SET-Uniform adds a Tracer to this configuration.

Fig. 6. RPD and RAC across methods for bytecount, circle, bmi and date. S: Sampler, E: Explorer, CTS: Compatible Type Sampling, BU: bituniform sampling.

A clear pattern emerges when comparing the results at 30 seconds and 600 seconds. Increasing the time budget improves diversity for most methods, as seen in the general shift toward higher RAC at 600 seconds. SETBVE configurations with the Sampler using CTS with bituniform sampling, as well as the Sampler combined with the Explorer across all parent selection strategies, show considerable increases in diversity, indicating that a longer execution allows for broader exploration of the search space.
However, AutoBVA and Random sampling do not show much improvement in diversity over time, remaining close to their initial positions. This suggests that these methods reach their maximum performance early on and do not benefit as much from extended execution. Considering that the RAC values reported for AutoBVA are averages from 20 runs with very low standard deviation, extending or repeating its search while preserving previously found solutions is unlikely to significantly alter its overall diversity results. When examining changes in quality and diversity across different SUTs, the extent of improvement varies across methods. bytecount achieves near-maximal quality early on for the majority of methods. Diversity is also relatively high even after 30 seconds, with uniform random and curiosity-based exploration being nearly ideal, while other methods continue to improve in RAC at 600 seconds. Both bmi and circle exhibit steady improvements in both quality and diversity as the time budget increases, showing that extended execution time benefits most methods in these SUTs. In contrast, the increased time budget for date primarily leads to a noticeable increase in diversity, while RPD remains relatively low. A similar pattern can be observed for the Julia Base functions (Table 6), where AutoBVA maintains high RPD but consistently shows low RAC across all SUTs. In comparison, SETBVE configurations — especially those combining the Sampler and Explorer — achieve gains in diversity over time while maintaining reasonable quality. The addition of the Tracer component brings only minor changes to overall trends, indicating that the Explorer accounts for the majority of behavioral space exploration. These variations indicate that while an extended time budget generally enhances results, the degree of improvement depends on the characteristics of the SUT and the method used.
# Key Findings (RQ1) - Quality and Diversity of Identified Boundaries

• SETBVE exploration methods improve diversity with extended execution, while the improvement in quality varies depending on the SUT and the framework configuration.
• Even the simplest SETBVE configuration (using only CTS with bituniform sampling) often outperforms most tested methods in diversity while maintaining the quality of boundary candidates.
• AutoBVA quickly achieves high quality but has low diversity and shows little improvement over time.
• Random sampling is the weakest method, demonstrating minimal or no gains in either quality or diversity, even with a longer execution time.

# 5.2 RQ2 - Coverage of Behavioral Space

In RQ2, we examine how different methods discover boundary candidates with distinct behaviors, i.e., archive cells defined through behavioral descriptors. Therefore, we compare archive cells uniquely found by specific methods across all their runs. Since differences among exploration strategies become clearer over longer runtimes, we analyze results from the 600-second runs. Figure 7 provides a pairwise comparison of search strategies for the bytecount and date SUTs, chosen due to their contrasting complexities. First, we identify which search strategies (rows) have found more unique boundaries than any other strategy (columns); we then compare pairs of strategies. For instance, for date, SET-Uniform has found 13 boundaries that no other search found, but 149 boundaries that SE-Fitness could not find. Our results show that the number of distinct archive cells identified by each method increases with SUT complexity.
For bytecount, there is significant overlap between methods, while more complex SUTs such as date lead methods to discover mostly unique cells.

Fig. 7. Pairwise comparison of search strategies for the bytecount and date SUTs: counts (log scale) of archive cells found by each method (rows) but not found by another method or by any other method (columns).

The bmi and circle SUTs exhibit intermediate levels of uniqueness. The total count of unique cells ranges from 0–81 for bytecount, 0–620 for circle, 0–1326 for bmi, and up to 0–7820 for date. For the bytecount SUT, the most unique behaviors are found by curiosity-based (SET-Curiosity, SE-Curiosity) and uniform random exploration (SET-Uniform, SE-Uniform). In contrast, fitness-based selection strategies find fewer unique cells among SETBVE configurations. Although AutoBVA identifies several unique cells, it overlooks many behaviors that other methods capture, as seen in the AutoBVA column in Figure 7. For the circle and bmi SUTs, AutoBVA identifies several unique behaviors but misses many behaviors that SETBVE discovers, particularly those found by uniform random exploration and bituniform sampling. Within SETBVE, bituniform sampling alone performs effectively, but combining it with the Explorer or Tracer reduces the number of unique archive cells found.
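The uniqueness comparison underlying Figure 7 reduces to set differences over the archive cells each method discovered across its runs; a sketch (method names and cell identifiers are illustrative):

```python
def pairwise_not_found(cells_by_method):
    """For each ordered pair (a, b), count cells method `a` found that `b`
    never found; the (a, "Any") entry counts cells no other method found."""
    methods = list(cells_by_method)
    table = {}
    for a in methods:
        others = set().union(*(cells_by_method[b] for b in methods if b != a))
        table[(a, "Any")] = len(cells_by_method[a] - others)
        for b in methods:
            if b != a:
                table[(a, b)] = len(cells_by_method[a] - cells_by_method[b])
    return table
```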
Among exploration strategies, uniform random exploration generally identifies the most unique behaviors, followed closely by curiosity-based exploration, while fitness-based selection and random sampling are less effective. The addition of the Tracer component does not significantly enhance the number of unique behaviors and can even slightly reduce it for the bmi SUT. Finally, for the date SUT, the S-Bituniform (26), SE-Uniform (20), and ST-Bituniform configurations uncover the most distinct behaviors. AutoBVA finds a few unique behaviors (3) but fails to discover many cells found by other methods. Random sampling only identifies cells already discovered by other methods, offering no unique contributions. Nonetheless, comparing the pairwise portion of the heatmap with the unique behaviors found (last column), we see that most behaviors are covered by multiple methods.

Table 7 shows manually selected examples of archive cells, and corresponding boundary candidates, to illustrate the type of boundary behavior that SETBVE discovered but AutoBVA did not. We chose these examples based on their validity groups, RPD values, and the overall diversity of boundary candidates. The “Cell” column identifies the set of values from each behavioral descriptor. Inputs and their corresponding outputs are shown side by side.

Table 7. Examples of archive cells and their corresponding boundary candidates that were found by most SETBVE configurations but missed by AutoBVA, across the four SUTs: bytecount, bmi, circle, and date. Exception abbreviations: BErr (BoundsError), DErr (DomainError), AErr (ArgumentError), oor (out of range). Cell coordinates: total input length, input length variance, output length difference (or abstraction number), number of exceptions.

In the bytecount SUT, natural boundaries include transitions such as from kilobytes to megabytes within the VV group.
As shown in Figure 7, this SUT exhibits the highest overlap between strategies in terms of discovered archive cells. Both AutoBVA and SETBVE successfully identified key transitions, including kB to MB (e.g., 999949 to 999950, yielding 999.9 kB to 1.0 MB), MB to GB (e.g., 999949999 to 999950000, or 999.9 MB to 1.0 GB), and others. Both methods also captured VE validity group boundary candidates, such as the transition from 1000.0 EB to a BoundsError (e.g., 999999999999994822656 to 999999999999994822657). SETBVE found some additional transitions, such as 54949999 to 54950000 (54.9 MB to 55.0 MB) and 37949999999 to 37950000000 (37.9 GB to 38.0 GB). Another example is the pair -99999999999989 to -99999999999990, which is still categorized as a boundary candidate under our current definition — close input values leading to distinct program behaviors. Since we define program behavioral distinction based on the output distance between two outputs, even slightly different outputs like -99999999999989B and -99999999999990B qualify as a boundary. Overall, both methods discovered similar types of boundary candidates for this SUT. For the circle SUT, both AutoBVA and SETBVE found adjacent boundary candidates such as DomainError (exception at the origin) to “in”, and “in” to “out” (see Figure 8 for a visualization of the boundaries). However, SETBVE aims to maximize diversity across archive dimensions, including the output abstraction number used for the circle SUT. This objective allows SETBVE to also capture transitions between non-adjacent regions, provided these transitions represent distinct behavioral changes. Consequently, SETBVE discovered an additional boundary candidate transitioning directly from “out” to DomainError. Specifically, SETBVE identified inputs (1, 80) and (0, 0) as the closest pair triggering this behavioral shift.
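The working definition used here — close (here, adjacent) input values leading to distinct program behaviors — can be illustrated with a brute-force scan over a toy one-dimensional classifier (not one of the paper's SUTs):

```python
def classify(x: int) -> str:
    # Toy stand-in: "in" within distance 10 of the origin, else "out".
    return "in" if abs(x) <= 10 else "out"

def adjacent_boundary_pairs(lo, hi, sut):
    """Scan [lo, hi) for adjacent integer inputs whose outputs differ."""
    pairs = []
    for x in range(lo, hi):
        o1, o2 = sut(x), sut(x + 1)
        if o1 != o2:
            pairs.append(((x, x + 1), (o1, o2)))
    return pairs
```

Exhaustive scanning like this is only feasible for tiny input domains; the search strategies in the paper exist precisely because real input spaces are far too large for it.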
For the bmi SUT, we observe a similar pattern to the circle SUT in how AutoBVA prioritizes boundary candidates. Several non-adjacent categories (e.g., DomainError to “Overweight”, “Underweight” to “Obese”) were not discovered by AutoBVA (see Figure 9 for a visualization of the boundaries). Although AutoBVA can find some non-adjacent categories, it typically detects only those where the input pairs lie very close together in the input space. Examples include DomainError to “Severely Obese” at (0, -1) and (0, 0), and “Underweight” to “Severely Obese” at (9, 0) and (9, 1). For the date SUT, the outputs are not categorical as in circle or bmi, but meaningful boundaries still exist based on calendar rules. Valid day values range from 1 to 31 depending on the month, and for February, the valid range also depends on whether the year is a leap year. Both AutoBVA and SETBVE successfully identified VE boundaries between day 0 and day 1 for most months. However, differences emerge when considering VE boundaries at the end of each month. Within the given time budgets, neither method found all possible end-of-month transitions, but SETBVE discovered more than AutoBVA. For example, SETBVE detected transitions from valid to invalid dates such as December 31 to December 32, and similar transitions in January, July, and April (30 to 31), which AutoBVA missed. In contrast, AutoBVA found end-of-month VE transitions that SETBVE did not, including those in March, June, and October. Both methods found several examples involving leap years (VE boundary candidates between February 28 and 29). The remaining end-of-month VE transitions, in August, September, and November, were identified by both methods. Overall, no single search strategy dominates in covering unique behaviours, and each approach has complementary strengths. As SUT complexity increases, the differences between methods become more pronounced, leading to larger gaps in coverage.
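The end-of-month valid-to-error (VE) transitions described above can be located by a simple exhaustive scan, sketched below. Python's `datetime.date` stands in for the actual date SUT, and the function names are illustrative:

```python
# Sketch of a scan for end-of-month valid-to-error (VE) day transitions in a
# date-like SUT; datetime.date is a stand-in for the SUT under study.
from datetime import date

def date_sut(y: int, m: int, d: int) -> str:
    """Return the formatted date, or the exception name for invalid inputs."""
    try:
        return date(y, m, d).isoformat()
    except ValueError as exc:
        return type(exc).__name__

def end_of_month_ve_transitions(year: int):
    """Yield (month, last_day) pairs where day -> day+1 flips valid to error."""
    for month in range(1, 13):
        for day in range(28, 32):
            ok, nxt = date_sut(year, month, day), date_sut(year, month, day + 1)
            if "Error" not in ok and "Error" in nxt:
                yield month, day

# A leap year moves February's VE boundary from day 28 to day 29:
print(dict(end_of_month_ve_transitions(2024))[2])  # -> 29
```

Such a scan is only feasible because the region of interest is known in advance; the point of SETBVE is to surface these transitions without that prior knowledge.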
# Key Findings (RQ2) - Coverage of Behavioral Space

No single method consistently outperforms the others; among SETBVE configurations, curiosity-based and uniform random exploration typically discover more behavioral regions not found by other methods than fitness-based exploration does. AutoBVA identifies some unique boundary behaviors but generally discovers fewer than the SETBVE configurations, especially for complex SUTs and non-adjacent cases.

# 5.3 RQ3 - Tracing Identified Boundaries

In RQ3, we investigate whether the Tracer can follow boundaries initially discovered by the Sampler and/or Explorer. The main objective of the Tracer is to enhance boundary refinement by exploring the vicinity of identified boundary candidates and locating additional ones. We evaluate the Tracer through visualizations of the boundaries. We do not investigate tracing for the bytecount SUT because it has only a single input, resulting in a one-dimensional input space. In such a space, a boundary consists of isolated points rather than continuous regions, leaving no meaningful path to follow or trace.

Fig. 8. Example of boundary refinement for the circle SUT before and after applying the Tracer (600-second run with curiosity-based exploration).

Fig. 9. Example of boundary refinement for the bmi SUT before and after applying the Tracer (600-second run with fitness-based exploration).

In contrast, the circle and bmi SUTs serve as effective examples for boundary tracing due to their two-dimensional input spaces, where varying one or both input arguments can naturally lead to successful tracing and the discovery of additional boundary candidates. Figures 8 and 9 illustrate this process. The left side of each figure shows boundary candidates initially discovered by the Sampler and Explorer, while the right side highlights the improved coverage achieved by the Tracer. This refinement better defines boundary shapes by revealing previously undetected parts of a boundary.
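The core idea of the Tracer, probing the input-space neighborhood of a known boundary candidate for further pairs that straddle the same boundary, can be sketched as follows. The step offsets and the toy circle SUT are illustrative assumptions, not SETBVE's actual implementation:

```python
# Minimal sketch of the Tracer idea: probe the neighborhood of a known
# boundary candidate pair for new pairs whose outputs still differ.
from itertools import product

def trace(sut, pair, radius=1):
    """Return new candidate pairs near `pair` whose outputs differ."""
    a, b = pair
    found = []
    for da, db in product(range(-radius, radius + 1), repeat=2):
        na = (a[0] + da, a[1] + db)   # shift both points of the pair
        nb = (b[0] + da, b[1] + db)   # by the same small offset
        if (na, nb) != pair and sut(*na) != sut(*nb):
            found.append((na, nb))
    return found

# Toy 2-D SUT shaped like the circle example: "in" vs "out" of a disc.
def circle_sut(x, y, r2=25):
    return "in" if x * x + y * y <= r2 else "out"

# Starting from one candidate on the circle boundary, tracing exposes
# neighboring crossings along the arc:
new_pairs = trace(circle_sut, ((5, 0), (6, 0)))
print(len(new_pairs) > 0)
```

In this sketch both points of the pair are shifted together, which keeps the input distance of each new pair unchanged; following the discovered pairs iteratively would walk along the boundary arc.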
In the bmi SUT (Figure 9), we observe that boundaries appear more concentrated at lower height and weight values, while transitions become more spread out as input values increase. Although this pattern may be specific to this SUT, it highlights how input characteristics can influence the distribution of behavioral transitions in the input space. Fig. 10. Example of boundary refinement for the date SUT before and after applying the Tracer (600-second run with uniform random-based exploration). The visualization omits the year input, meaning that the displayed day-month combinations belong to different years. Boundary tracing for the date SUT is challenging because input arguments (day, month, and year) depend on each other. Ideally, tracing would involve varying one argument at a time—for example, adjusting only the year to test leap years or changing day and month to explore end-of-month transitions within a fixed year. However, the current Tracer implementation cannot fix specific inputs, limiting its precision. Despite this, the Tracer effectively finds additional input pairs close to previously identified boundaries, resulting in a denser mapping of behavioral transitions (see Figure 10). In the figure, the left plot shows boundaries identified by the Explorer alone, while the right plot demonstrates improved completeness after applying the Tracer. The visualization excludes the year to highlight transitions between days and months. For the date SUT, the Explorer alone captures a sparse and incomplete set of transitions, leaving many gaps in the mapping of behavioral changes. In contrast, the Tracer expands the set of identified transitions, revealing well-defined shifts — particularly at the edges of valid date ranges. We observe clear transitions such as day 0 to day 1 and day 31 to day 32, along with a denser distribution of boundary candidates across multiple months. 
Some gaps remain, but the improvement towards completeness and clarity of these transitions highlights the potential of tracing, even in its initial form. In summary, depending on SUT characteristics, visualizing boundary patterns helps to identify regions where program behavior changes more suddenly. Since it is often difficult to predict how boundaries are distributed in the input space, these visualizations provide valuable insights for deciding which input regions to prioritize during testing.

# Key Findings (RQ3) - Tracing Identified Boundaries

The Tracer component can extend the initial set of boundary candidates by exploring surrounding input and behavioral spaces, revealing additional boundary transitions and exposing previously undetected sections of the boundaries.

# 6 DISCUSSION

This study demonstrates that integrating Quality-Diversity optimization into Boundary Value Exploration yields a framework, SETBVE, that systematically uncovers a broader range of boundary behaviors than existing methods. In our experiments across ten SUTs, SETBVE maintained high-quality boundary candidates while significantly improving behavioral coverage compared to the baseline AutoBVA technique. For instance, with the date SUT, SETBVE configurations achieved approximately 52% relative archive coverage compared to AutoBVA’s 5.47%, and similar patterns were observed across other tested systems. These results reveal that beyond maximizing a single boundary metric (program derivative), explicitly encouraging diversity across input/output descriptors enables the discovery of boundary regions that might otherwise remain undetected. This finding contributes to the field of software testing, where comprehensive boundary detection could potentially help identify failure points that might be missed by more focused testing approaches. Our experiments reveal an inherent trade-off between the quality and diversity of boundary candidates.
Maximizing behavioral diversity and identifying promising behavioral regions across different validity groups (VV, VE, EE) can offer a more comprehensive understanding of a system’s behavioral transitions. However, the SETBVE setup also identifies boundary candidates that span non-adjacent program domains by maximizing diversity across defined archive dimensions, challenging the conventional assumption that boundary candidates must originate from adjacent domains. We argue that this approach can be particularly valuable in high-dimensional input spaces where adjacency can become ambiguous or multi-dimensional. For instance, in a system under test that processes graphs to compute the number of strongly connected components, substantial changes to the graph structure may be required to alter the output. In such cases, an overemphasis on input adjacency — here, graph adjacency — could obscure significant behavioral transitions. Our experiments reveal the quality-diversity trade-off in a more practical and tangible way. When a method like SETBVE emphasizes diversity, it inevitably diverts some computational effort from pure quality optimization. In practice, achieving the same quality as a focused approach like AutoBVA may require longer runtimes — or, somewhat unexpectedly, the impact on testing effectiveness may be minimal. We observed the latter: while SETBVE benefits from longer runtimes, it has to be considered that many boundary-pair candidates with the highest PD values are structurally similar, inflating quality without revealing genuinely novel behaviors. Meanwhile, SETBVE can uncover regions with locally high but globally lower PD values that a quality-focused method might overlook. Quantifying these effects with simple metrics remains challenging, suggesting that future work should involve real testers, ideally in interactive settings, to explore how best to balance diversity and quality for practical impact. 
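The program derivative (PD) is the quality signal this trade-off revolves around. A minimal sketch of the idea, assuming the common formulation of output distance per unit of input distance (the exact SETBVE/AutoBVA implementation may differ):

```python
# Hedged sketch of the program derivative (PD) quality metric: output
# distance relative to input distance for a candidate pair. Function names
# and the toy SUT are illustrative assumptions.
def jaccard_distance(a: str, b: str) -> float:
    sa, sb = set(a), set(b)
    return 1.0 - len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def program_derivative(sut, x: int, y: int) -> float:
    """Output distance per unit of input distance for the pair (x, y)."""
    if x == y:
        raise ValueError("a boundary pair needs two distinct inputs")
    return jaccard_distance(str(sut(x)), str(sut(y))) / abs(x - y)

# Two pairs with the same input distance: the one whose outputs differ more
# gets the higher PD, which is what a quality-only search maximizes.
sut = lambda n: "negative" if n < 0 else "non-negative"
print(program_derivative(sut, -1, 0) > program_derivative(sut, 100, 101))
```

The trade-off discussed above shows up directly here: many structurally similar pairs can share the maximal PD value, so optimizing PD alone revisits the same behavioral region while a diversity-driven archive spreads the search.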
The quality impact of this trade-off is also complex to assess from a more subtle perspective, as there exists a theoretical limit to how many archive cells contain pairs that reach the highest PD levels. Our normalization of PD to RPD values makes the quality metrics more comparable across different regions of the behavioral space. Given our results, particularly a typical decrease in RPD for SETBVE on SUTs with more complex behavior (date and bmi), we expect there to be value in a future extension of SETBVE with within-cell optimization. Such an approach would not only identify more archive cells with diverse behavior but also find the locally optimal candidate within each cell. This within-cell refinement could be combined with the Tracer component, which already attempts to find boundary pairs in adjacent regions, to create a more comprehensive boundary exploration strategy that balances both diversity and quality across the behavioral space. The Tracer component uncovers boundary candidates adjacent to those already identified. Although its impact on the metrics (RAC and RPD) is modest, visualizing its effect reveals expanded regions of rapid change and subtle boundary patterns that our metrics do not capture. This discrepancy arises because the Tracer typically discovers additional high-value boundary pairs that nonetheless map into existing archive cells, so RAC and RPD remain largely unchanged. In effect, the Tracer operates at a finer granularity than the archive, uncovering nuances that coarser cell-based metrics cannot detect. Future work should consider developing new metrics that capture these more fine-grained gains. Given that both SUTs and testing contexts vary considerably, SETBVE’s flexibility represents an important aspect of its design. The framework supports customization of key components in the QD approach, most notably the distance functions used to calculate quality (PD) and how behavioral diversity is defined through descriptors.
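As a small illustration of how swapping the distance function changes what counts as a behavioral difference, the sketch below contrasts a fine-grained Jaccard output distance with a coarser string-length distance; both function names are illustrative, not part of SETBVE's API:

```python
# Illustrative comparison of two output distance functions: a fine-grained
# Jaccard distance on character sets versus a coarse string-length distance.
def jaccard_distance(a: str, b: str) -> float:
    sa, sb = set(a), set(b)
    return 1.0 - len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def length_distance(a: str, b: str) -> int:
    return abs(len(a) - len(b))

# Same-length outputs with different content: Jaccard sees a behavioral
# difference, string length sees none. A distance choice like this therefore
# shapes which pairs qualify as boundary candidates at all.
out_a, out_b = "999.9 kB", "999.9 MB"
print(length_distance(out_a, out_b), jaccard_distance(out_a, out_b) > 0)
```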
The choice of behavioral descriptors directly shapes the search process and influences which boundary candidates are identified, but the behavioral descriptors can also be adjusted to meet domain-specific needs. Although this study primarily relied on generic input/output characteristics, the framework supports more tailored descriptors. Even with the simplest configuration, using only the Sampler component, the archive enables the generation of diverse candidates, as demonstrated by the results in RQ2. Future work should consider a wider set of choices for the distance and behavioral descriptor functions and how to exploit them for practical benefits.

# 6.1 Limitations and Validity Threats

While we have addressed several threats to validity, certain limitations warrant attention. Our measurement choices could affect the construct validity of our evaluation. Although Jaccard distance effectively captures string variations, other distance measures might yield different program derivative values, and thus could affect the quality-diversity trade-off achieved. To ensure that our choice of Jaccard distance did not artificially inflate SETBVE’s diversity advantage, we conducted a sensitivity analysis using the string-length distance function that AutoBVA optimizes by default. Even under this alternative distance measure, SETBVE maintained significantly higher relative archive coverage (RAC) than AutoBVA. This confirms that the higher diversity of behaviors identified by SETBVE cannot be explained by our use of a more fine-grained (Jaccard) output distance function, thus mitigating this alternative explanation of our findings. As for the relative archive coverage (RAC) measure we used, it is based on empirical observations rather than theoretical maximums on the number of possible archive cells.
We try to mitigate this imperfection by aggregating results (and counting archive cells) over all runs, following accepted practices in related work. However, given that the input, output, and behavioral spaces studied are very large, the runs might overlap less than expected, which might lessen the value of the normalization. Still, in practice, it will be very difficult to calculate, for a given SUT, how many cells can be covered, making our normalization a reasonable and pragmatic choice. Regarding internal validity, we controlled for biases in archive initialization by repeating each experiment 20 times and using sampling strategies proven effective in previous studies. However, the Tracer component’s current implementation, which defines search regions based solely on input space rather than behavioral space, represents an area for refinement. For external validity, our focus on ten SUTs with integer inputs at the unit level limits generalizability to more complex software systems and input types. While our results suggest potential benefits for software testers, we have not empirically studied the impact on actual testing practices or outcomes, which limits our ability to make definitive claims about practical benefits.

# 6.2 Future work

Future research should build upon this study by addressing several interconnected areas. First, while SETBVE prioritizes diversity, further investigation into optimization techniques would enhance solution quality within regions where boundaries are already identified. The intra-cell optimization approach discussed earlier represents a promising direction. Second, extending SETBVE to accommodate different data types would overcome current limitations and improve applicability across diverse systems. Third, refining the tracing process to move beyond the basic approximations used in the current implementation would enable more precise descriptions of boundary patterns.
Developing robust quantitative metrics for evaluating boundary tracing effectiveness would support more systematic assessment. Studies examining how testers interact with and benefit from the approach would provide valuable evidence for its practical utility. Investigating how SETBVE could be integrated into existing testing workflows would also enhance its applicability.
Software systems exhibit distinct behaviors based on input characteristics, and failures often occur at the boundaries between input domains. Traditional Boundary Value Analysis (BVA) relies on manual heuristics, while automated Boundary Value Exploration (BVE) methods typically optimize a single quality metric, risking a narrow and incomplete survey of boundary behaviors. We introduce SETBVE, a customizable, modular framework for automated black-box BVE that leverages Quality-Diversity (QD) optimization to systematically uncover and refine a broader spectrum of boundaries. SETBVE maintains an archive of boundary pairs organized by input- and output-based behavioral descriptors. It steers exploration toward underrepresented regions while preserving high-quality boundary pairs and applies local search to refine candidate boundaries. In experiments with ten integer-based functions, SETBVE outperforms the baseline in diversity, boosting archive coverage by 37 to 82 percentage points. A qualitative analysis reveals that SETBVE identifies boundary candidates the baseline misses. While the baseline method typically plateaus in both diversity and quality after 30 seconds, SETBVE continues to improve in 600-second runs, demonstrating better scalability. Even the simplest SETBVE configurations perform well in identifying diverse boundary behaviors. Our findings indicate that balancing quality with behavioral diversity can help identify more software edge-case behaviors than quality-focused approaches.
# 1. Introduction

Recent “long-thought” Large Reasoning Models (LRMs), such as OpenAI’s O1 (Jaech et al., 2024) and Deepseek-R1 (DeepSeek-AI et al., 2025), represent a significant paradigm extension of foundational Chain-of-Thought (CoT) techniques (Wei et al., 2023). Fine-tuned with Reinforcement Learning (RL), these models iteratively refine solutions to achieve unprecedented performance in complex reasoning tasks like mathematics and programming (Sun et al., 2025; Gu et al., 2024). However, as “deep thinking” ability improves, a prominent problem emerges: excessive consumption of computing resources during the reasoning process (Chen et al., 2025; Aggarwal and Welleck, 2025). Specifically, existing models tend to generate lengthy and even unnecessary chains of reasoning when solving problems with low complexity or clear solution paths. This phenomenon, termed “overthinking”, manifests as models consuming far more computational resources than the problem itself requires to reach the correct conclusion (Chen et al., 2024; Sui et al., 2025; Cuadron et al., 2025). Therefore, one critical question arises:

Figure 2: Pareto analysis of the Efficacy-Efficiency trade-off of different methods on two reasoning models. The x-axis represents the reasoning length change, and the y-axis shows the accuracy change, relative to the original model (defined in Eq. 12), with the top-left corner representing the ideal position. A smaller and darker marker indicates a higher Valid Thinking (VT) rate (defined in Eq. 1), signifying a more efficient thinking process. Compared to other methods also on the Pareto frontier, LC-R1 achieves a more favorable trade-off, attaining a substantially higher compression rate at the cost of a minimal drop in accuracy, and it also achieves a higher VT rate. The sub-optimal performance of our ablation variants (w/o C-reward, w/o L-reward) further proves the criticality of our dual-reward design.
# How can we maintain high reasoning efficacy while significantly improving efficiency?

Prior works have approached this by fine-tuning on shorter demonstrations (SFT) (Chen et al., 2024), constructing preference datasets for conciseness (Luo et al., 2025a; Shen et al., 2025), or integrating length penalties into RL (Hou et al., 2025; Luo et al., 2025b; Team et al., 2025). However, these methods often treat the reasoning process as a black box, penalizing length without analyzing the internal structure of the thoughts themselves. To address this gap, we delve into the structure of “overthinking” and identify a specific pattern: models frequently engage in redundant “double-checking” after having already derived the correct answer. We term this phenomenon “invalid thinking”, as shown in Figure 1. To quantify it, we introduce a new metric, the Valid Thinking (VT) rate, which measures the proportion of the reasoning process that is essential for reaching the initial correct conclusion. Guided by this insight, we propose two fine-grained principles: Brevity (eliminating redundancy) and Sufficiency (preserving necessary steps). We then introduce LC-R1, a GRPO-based post-training method that operationalizes these principles. LC-R1 uniquely combines a Length Reward for overall conciseness with a novel Compress Reward designed to directly guide the model to terminate the thinking process upon deriving the correct answer. We conduct comprehensive experiments on two reasoning models across seven benchmarks. Empirical results show that LC-R1 achieves a more favorable trade-off between efficacy and efficiency than prior methods, as shown in Figure 2. Specifically, with only a 2% drop in accuracy, our method attains a 50% reduction in sequence length on average. Our ablation study also demonstrates the indispensability of both the Length Reward and the Compress Reward for achieving efficient reasoning.
Further study shows that our method achieves efficient compression without impairing the exploration ability of the model, and that the efficiency generalizes to problems of various difficulties. In conclusion, our contributions can be summarized as follows:

• We analyze the thinking process of current competitive reasoning models and identify the phenomenon of “invalid thinking”: a large portion of the thinking process is spent double-checking after the correct answer has already been derived, making the reasoning verbose and inefficient.

• We propose two novel principles, Brevity and Sufficiency, and design a GRPO-based method, LC-R1, for LRM post-training that strikes a balance between them, pruning invalid thinking while compressing overall sequences at the same time.

• Through comprehensive experiments, we validate that LC-R1 achieves a better trade-off between Efficacy and Efficiency, and we conduct further analyses on the deeper impact of compression, proving the robustness of LC-R1 across various difficulties and providing insights for future work.

Table 1: Valid Thinking rate of current state-of-the-art Large Reasoning Models. Nemotron indicates Llama-3.3-Nemotron-Super-49b-v1. Results manifest a low VT rate on all these models, highlighting the phenomenon of “invalid thinking”.

# 2. Preliminary: Compression and Efficient Reasoning Models

# 2.1. Motivation: Quantifying Redundant Reasoning

A common paradigm for Large Reasoning Models (LRMs) involves a thinking process (i.e., a step-by-step rationale) that precedes the final answer. While effective for accuracy, we observe a consistent inefficiency: models often derive the correct answer early in their thinking process but continue with lengthy and redundant verification steps. We term this subsequent, non-essential reasoning the “Redundant Sequence”.
To formalize this, we define the Valid Thinking (VT) rate, a metric focusing on the model’s thinking process:

$$ \mathrm{VT} = \frac{|\text{Tokens in Valid Thinking}|}{|\text{Total tokens in Thinking Process}|} $$

where “Valid Thinking” comprises the tokens from the start of the thinking process until the correct answer is first derived. To automate this measurement, we utilize a lightweight parser, LC-Extractor, whose implementation details are provided in Section 4. We evaluated four state-of-the-art LRMs—Qwen3-32b (Team, 2025a), QwQ-32b (Team, 2025b), Deepseek-R1 (DeepSeek-AI et al., 2025), and Llama-3.3-Nemotron-Super-49b-v1 (Bercovich et al., 2025)—across five math benchmarks: AIME25, MATH500, GSM8K, AMC, and OlympiadBench. Our analysis reveals a universal and severe overthinking problem. As shown in Table 1, all models tested exhibit low VT rates, indicating that a substantial portion of their computational effort (often 35-45%) is spent on redundant reasoning after the solution has been found. This widespread inefficiency confirms the significant potential for compression and motivates our work.

# 2.2. Principles for Efficient Reasoning

The evaluation of reasoning models traditionally rests on two pillars: Efficiency (the computational cost, often proxied by output length) and Efficacy (the ability to solve the problem correctly). However, simply shortening the output is a coarse approach that may inadvertently remove critical thinking steps. To create a more targeted framework, we refine these concepts by introducing two new, complementary principles:

• Brevity refines Efficiency by shifting the focus from generic length reduction to the specific elimination of the “Redundant Sequence”.
While conventional methods may still produce a compressed sequence that contains unnecessary double-checks, Brevity advocates for the model to terminate its reasoning process as soon as the correct answer is found.

• Sufficiency acts as a crucial safeguard for Efficacy. It mandates that, in the pursuit of Brevity, no critical logical steps essential for reaching a correct answer are omitted. It ensures that the compressed reasoning remains complete and logically sound.

Therefore, the ideal reasoning model must navigate the tension between these principles: it should be maximally Brief by removing all non-essential thinking, yet always remain Sufficient to guarantee correctness. Our work, LC-R1, is explicitly designed to optimize for this balance.

# 3. LC-R1: Length Compression with Efficient Reasoning Principles

In this section, we propose LC-R1, a GRPO-based post-training algorithm designed to address the “invalid thinking” phenomenon and enhance reasoning efficiency. Guided by the principles of Brevity and Sufficiency introduced in Section 2.2, LC-R1 employs a novel dual-reward system. This system combines a global Length Reward for overall conciseness with a targeted Compress Reward that specifically removes redundant reasoning. The complete pipeline of LC-R1 is illustrated in Figure 3 and Algorithm 1.

# 3.1. Problem Formulation

Let $\mathcal{M}$ be the model and $q$ be the given query. The output is $o \sim \mathcal{M}(q)$, where $o = \{R, A\}$ consists of a reasoning part $R$ and an answer part $A$, split by the token </think>, which is considered part of $A$. For the reasoning part $R$, we denote its effective prefix $R'$ as the content from the beginning of $R$ up to the first occurrence of the correct answer corresponding to the query $q$. If $R$ does not contain the correct answer, then we define $R' = R$.

Figure 3: Overview of the LC-R1 pipeline (figure text omitted): for each query, the policy samples a group of outputs; the LC-Extractor splits each output into valid thinking, invalid thinking, and answer; the compressed sequences are scored with the base, length, and compress rewards; and GRPO updates the policy on the compressed outputs.

We define two functions as follows:

$$ t(\{R, A\}) = R, \quad f(\{R, A\}) = \{R', A\} $$

The function $t$ extracts the reasoning process $R$ from the output $o$, and the function $f$ extracts the concise reasoning part $R'$ and concatenates it with the answer $A$. We denote $o_i$ as the original model output and $o_i' = f(o_i)$ as the refined compressed output. LC-R1 is a GRPO-based method to efficiently compress the reasoning process. Within a group, let $\mathcal{C}$ denote the set of indices $i$ whose sequences $o_i$ lead to the correct answer for the query $q$, and $\mathcal{W}$ the set of indices $j$ whose $o_j$ lead to a wrong answer. The total group size is $G = |\mathcal{C}| + |\mathcal{W}|$.

# 3.2. Reward and Objective Design

Our method’s reward system consists of two core components: the Length Reward for reducing overall output length, and the Compress Reward for targeting redundant parts of the model’s reasoning.

Length Reward.
To compress the total length of the model output, we propose adding a length penalty during the GRPO training process. Leveraging the group-based sampling of GRPO, we can calculate a relative length reward that automatically adjusts to the difficulty of the problem. We define the Length Reward as follows:

$$ r_{i,\mathrm{length}} = \begin{cases} 1 - \dfrac{|o_i'|}{\max_{j \in \mathcal{C}} |o_j'|}, & \text{if } i \in \mathcal{C} \\ 0, & \text{if } i \in \mathcal{W} \end{cases} $$

This formulation uses the maximum length of a correct, compressed sequence within the group as a normalizer. The final reward combines this with a base reward for format and accuracy, and is normalized by subtracting the group mean, following Liu et al. (2025), to obtain an unbiased gradient:

$$ \tilde{r}_i = r_{i,\mathrm{base}} + \alpha \cdot r_{i,\mathrm{length}} $$

$$ r_{i,\mathrm{combine}} = \tilde{r}_i - \mathrm{mean}(\{\tilde{r}_j\}_{j=1}^{G}) $$

where

$$ r_{i,\mathrm{base}} = r_{i,\mathrm{format}} + r_{i,\mathrm{accuracy}} $$

Following prior work, $r_{i,\mathrm{format}}$ and $r_{i,\mathrm{accuracy}}$ are binary rewards that judge, respectively, whether the model places its thinking process between <think> and </think> and whether the sample leads to the correct answer for the query, as verified by Math-Verify. $\alpha$ is a hyperparameter that controls the weight of the Length Reward.

# Algorithm 1 LC-R1: Length Compression for R1-style models

Input: Initial policy model $\pi_\theta$, compression function $f(\cdot)$, task prompts $\mathcal{D}$, hyperparameters $\alpha, \beta, \mu$
Output: Trained policy model $\pi_\theta$
1: for step $= 1, \dots, M$ do
2: Sample a batch $\mathcal{D}_b$ from $\mathcal{D}$
3: Update the old policy model $\pi_{\theta_{\mathrm{old}}} \leftarrow \pi_\theta$
4: Sample $G$ outputs $\{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid q)$ for each question $q \in \mathcal{D}_b$
5: Apply compression to all outputs: $o_i' \leftarrow f(o_i)$
6: Compute the combined reward $r_{i,\mathrm{combine}}$ (Eq. 5) and compress reward $r_{i,\mathrm{compress}}$ (Eq. 11)
7: Compute token-level advantages $\hat{A}_{i,t}$ for each compressed output $o_i'$ (Eq. 10)
8: for iteration $= 1, \dots, \mu$ do
9: Update the policy model $\pi_\theta$ by maximizing the objective $\mathcal{J}_{\mathrm{GRPO}}$ (Eq. 7)
10: end for
11: end for
12: return $\pi_\theta$

Compress Reward. For the original GRPO method, the loss calculation is based on the model’s own sampling results.
In order to drive the model to terminate the thinking process once it has reached the correct answer, in keeping with the principle of Brevity, we modify the GRPO objective as follows:
$$
\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q \sim P(Q),\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(O \mid q)} \left[ \frac{1}{\sum_{i=1}^{G} |o_i'|} \sum_{i=1}^{G} \sum_{t=1}^{|o_i'|} \left\{ \min\!\left[ R_t(\theta)\,\hat{A}_{i,t},\ \mathrm{clip}\big(R_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_{i,t} \right] - \beta\, \mathbb{D}_{\mathrm{KL}}\big(\pi_\theta(\cdot \mid q) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid q)\big) \right\} \right]
$$
where
$$
\mathbb{D}_{\mathrm{KL}}\big(\pi_\theta \,\|\, \pi_{\mathrm{ref}}\big) = \frac{\pi_{\mathrm{ref}}(o_i' \mid q)}{\pi_\theta(o_i' \mid q)} - \log \frac{\pi_{\mathrm{ref}}(o_i' \mid q)}{\pi_\theta(o_i' \mid q)} - 1
$$
$$
o_i' = f(o_i), \qquad R_t(\theta) = \frac{\pi_\theta(o_{i,t}' \mid q, o_{i,<t}')}{\pi_{\theta_{\mathrm{old}}}(o_{i,t}' \mid q, o_{i,<t}')}
$$
Our key modification to the standard GRPO objective is that the loss is calculated over the compressed trajectories $o_i'$, rather than the original full trajectories $o_i$.
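To make the modified objective concrete, here is a minimal NumPy sketch of the token-level clipped surrogate with the k3-style KL estimator, computed over compressed trajectories. The function name, and the assumption that per-token log-probabilities arrive as precomputed arrays, are ours for illustration, not from the paper:

```python
import numpy as np

def grpo_compressed_loss(logp_new, logp_old, logp_ref, advantages,
                         eps=0.2, beta=0.04):
    """Token-level GRPO surrogate over a group of compressed outputs o_i'.

    Each argument is a list (one entry per output) of per-token arrays;
    `advantages` holds the A_hat_{i,t} values.
    """
    total_tokens = sum(len(a) for a in advantages)
    objective = 0.0
    for lp_new, lp_old, lp_ref, adv in zip(logp_new, logp_old,
                                           logp_ref, advantages):
        ratio = np.exp(lp_new - lp_old)               # R_t(theta)
        clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
        surrogate = np.minimum(ratio * adv, clipped * adv)
        r = np.exp(lp_ref - lp_new)                   # pi_ref / pi_theta
        kl = r - np.log(r) - 1.0                      # k3-style KL estimator
        objective += np.sum(surrogate - beta * kl)
    # single normalizer: total number of compressed tokens in the group
    return objective / total_tokens
```

Note the single normalizer $\sum_{i=1}^{G} |o_i'|$ shared across the whole group, rather than a per-sequence token mean.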
We define the token-level advantages $\hat{A}_{i,t}$ as follows:
$$
\hat{A}_{i,t} = r_{i,\mathrm{combine}} + \gamma \cdot \mathbb{I}\big(o_{i,t}' = \texttt{</think>}\big) \cdot r_{i,\mathrm{compress}}
$$
where
$$
r_{i,\mathrm{compress}} = \begin{cases} 1 - \dfrac{|t(o_i')|}{|t(o_i)|}, & \text{if } i \in \mathcal{C} \text{ and } \mathrm{ans}(q) \in t(o_i') \\ -1, & \text{if } i \in \mathcal{C} \text{ and } \mathrm{ans}(q) \notin t(o_i') \\ 0, & \text{if } i \in \mathcal{W} \end{cases}
$$
Let $\mathrm{ans}(q)$ be the ground-truth answer for a given query $q$. In this setting, we focus on steering the model towards emitting the </think> token as soon as it reaches the correct answer (at the end of $o_i'$) during the thinking process, compressing away the verbose tokens in keeping with the principle of Brevity. We give an extra reward only to this token, avoiding unnecessary emphasis on other tokens, which makes the training process more efficient and stable. We define the reward to be the proportion of the Redundant Sequence, formulated as $1 - \frac{|t(o_i')|}{|t(o_i)|}$, representing the efficiency gap between the sequences before and after compression. The hyperparameter $\gamma$ scales this bonus. Based on the principle of Sufficiency, the model should engage in a sufficient reasoning process, avoiding over-compression at the cost of accuracy degradation. Therefore, we impose a large penalty ($-1$) on the </think> token if the model terminates its reasoning before finding the correct answer, which discourages harmful over-compression and makes training more robust.
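The reward pipeline above (length reward, group-mean normalization, and compress reward) can be sketched as follows. This is a schematic with hypothetical helper inputs, booleans marking correct/format-valid samples and pre-computed thinking-part token counts, not the authors' implementation:

```python
import numpy as np

def lc_r1_rewards(comp_lens, correct, fmt_ok, think_orig, think_comp,
                  ans_in_think, alpha=0.5):
    """Per-sequence r_combine and r_compress for a group of G samples.

    comp_lens[i]   : |o_i'| (length of the compressed output)
    correct[i]     : 1 if i is in C (correct), 0 if in W (wrong)
    think_orig[i]  : |t(o_i)|, tokens in the original thinking part
    think_comp[i]  : |t(o_i')|, tokens in the compressed thinking part
    ans_in_think[i]: whether ans(q) appears in t(o_i')
    """
    # length reward, normalized by the longest correct compressed output
    max_correct = max(l for l, c in zip(comp_lens, correct) if c)
    r_length = np.array([1.0 - l / max_correct if c else 0.0
                         for l, c in zip(comp_lens, correct)])
    r_base = np.asarray(fmt_ok, float) + np.asarray(correct, float)
    r_tilde = r_base + alpha * r_length
    r_combine = r_tilde - r_tilde.mean()      # subtract the group mean
    r_compress = np.array([
        (1.0 - tc / to) if (c and a) else (-1.0 if c else 0.0)
        for to, tc, c, a in zip(think_orig, think_comp, correct, ans_in_think)
    ])
    return r_combine, r_compress
```

The advantage of token $t$ in output $i$ is then `r_combine[i]`, plus `gamma * r_compress[i]` on the </think> token only.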
To further validate the effectiveness of our method, we follow DAPO (Yu et al., 2025) and compute the objective across all tokens in a group, instead of averaging token rewards within a single sequence; this eliminates the original GRPO method's preference for short correct sequences and long incorrect sequences.

# 4. Experiments
# 4.1. Experiment Setups
Backbone Models. We choose two representative reasoning models, DeepSeek-R1-Distill-Qwen-7B/1.5B (DeepSeek-AI et al., 2025), as our backbone models; they have demonstrated strong performance on mathematical and coding reasoning tasks.

Table 2: Accuracy (above) and Sequence Length (below) for all methods across seven benchmarks. AVG shows the relative change in accuracy and length compared to the Base model (+ increase, - decrease). GPQA-D denotes the GPQA-Diamond benchmark, and LCB denotes the pass@10 score on LiveCodeBench. VT represents the Valid Thinking ratio. For each column, the best-performing score is marked in bold, and the second-best is underlined.

LC-Extractor. To accurately identify and extract the valid reasoning part, we develop a specialized parser, termed LC-Extractor, to implement the extraction function $f$ mentioned in Eq. 2. We finetune Qwen2.5-3B-Instruct because it is lightweight and inexpensive to run. Detailed experiment settings are provided in Appendix B.

Dataset. We use a mixed-difficulty dataset, combining past AIME competition problems with the MATH dataset in an approximate 1:2 ratio to create 2500 training samples. This approach enables the model to learn length compression across problems of varying difficulty.

Evaluation. We test our model's performance on seven datasets, including AIME25, MATH500, GSM8K, AMC, OlympiadBench, GPQA-Diamond, and LiveCodeBench, across math, general, and code tasks, to comprehensively evaluate reasoning efficiency. We use averaged Pass@1 as our primary metric.
For each test, we sample $N$ times, setting top-$p = 0.95$ and temperature $= 0.7$. For AIME25, we set $N = 64$, while for the other test sets, we set $N = 8$. We set the maximum length to 16384. Additionally, we calculate the average relative change in accuracy and token length compared with the base model on every benchmark, formulated as follows:
$$
\mathrm{Avg}_{\mathrm{acc}} = \mathrm{mean}_{i=1}^{7}\left\{ \frac{\mathrm{Acc}_i^{\mathrm{model}} - \mathrm{Acc}_i^{\mathrm{base}}}{\mathrm{Acc}_i^{\mathrm{base}}} \right\}, \qquad \mathrm{Avg}_{\mathrm{len}} = \mathrm{mean}_{i=1}^{7}\left\{ \frac{\mathrm{Len}_i^{\mathrm{model}} - \mathrm{Len}_i^{\mathrm{base}}}{\mathrm{Len}_i^{\mathrm{base}}} \right\}
$$

Table 3: Ablation study on the contribution of the Length Reward and the Compress Reward to the compression process. Each reward alone is sub-optimal, verifying that both make a substantial contribution to efficient reasoning.

We also test VT for each model to evaluate the Brevity of the thinking process and to investigate the ability of these methods to mitigate the "invalid thinking" phenomenon. We test VT on five math benchmarks and report the mean value, since standard, well-formatted correct answers are easy to extract from the thinking process on math problems.

# 4.2. Baselines
Supervised Fine-tuning (SFT). Inspired by OVERTHINK (Chen et al., 2024), which proposes using only the initial correct solution for fine-tuning, we construct an SFT dataset of 5000 samples by removing the Redundant Sequence from self-generated outputs.

Direct Preference Optimization (DPO) (Rafailov et al., 2023).
We create a preference dataset of 5000 samples from the MATH dataset, where the shortest correct answer is treated as the "chosen" response and the longest as the "rejected" response. This DPO training is applied to the SFT-tuned model.

O1-Pruner (Luo et al., 2025b). A PPO-like offline fine-tuning method that significantly compresses CoT length while maintaining performance. We follow its methodology using 10000 samples from the MATH dataset.

ThinkPrune-3K (Hou et al., 2025). A reinforcement learning approach that uses a length-truncation reward for multi-stage compression. We reproduce the ThinkPrune-3k variant, which is reported to be highly efficient, with slight accuracy degradation.

SFT+O1-Pruner. To better understand the effect of compressing the thinking process and pruning the overall sequences at the same time, we also compare with a two-stage training approach combining SFT and O1-Pruner.

# 4.3. Experiment Results
LC-R1 outperforms other methods with competitive performance and fewer tokens. As presented in Table 2, on the 7B model LC-R1 achieves an average length reduction of 46.32%, substantially higher than all other baselines, with a mere 1.84% drop in average accuracy. Similarly, on the 1.5B model, it attains a 51.86% length reduction for a 2.14% accuracy decrease. This efficiency does not appear to compromise its generalization, as it demonstrates more robust performance on out-of-distribution (OOD) benchmarks such as GPQA-Diamond and LiveCodeBench compared to other high-compression methods. Figure 2 shows that our method achieves a more favorable Efficacy-Efficiency trade-off by enabling a maximal compression ratio with negligible accuracy degradation. LC-R1 also achieves a significantly higher VT rate (over 97%) compared to other methods such as O1-Pruner (~70-78%) and ThinkPrune (~66-77%), demonstrating the superior efficiency of our approach.
Combining the length and compress rewards brings superior efficiency to reasoning. Our ablation study on the Length Reward (L-reward) and Compress Reward (C-reward), presented in Table 3, reveals their critical complementary relationship. While each component alone yields competitive results, positioning them near the Pareto frontier of performance versus compression efficiency, combining them achieves a more optimal balance. Specifically, using the L-reward alone achieves significant compression but with a lower VT rate. Conversely, the C-reward alone ensures a high VT by precisely removing redundancy, but with limited overall compression. Our full LC-R1 method integrates these strengths, achieving both the highest compression efficiency and the highest VT rate while maintaining comparable accuracy, showing that the synergy between both rewards is indispensable for maximum reasoning efficiency.

Figure 4: The impact of the LC-R1 compression method on the AIME25 benchmark. Left: The Pass@k scores show that LC-R1 models maintain competitive performance compared to the originals, preserving the model's potential. Right: Per-problem analysis on DeepSeek-R1-Distill-Qwen-7B reveals that LC-R1 achieves similar Pass@1 accuracy while maintaining a consistent token compression ratio across problems of varying difficulty, demonstrating a universal compression effect.

SFT shows limited generalization. While SFT achieves a remarkably high VT rate (over 95%), its effectiveness is superficial. The model's performance collapses on OOD benchmarks, indicating that it merely overfits to the structural brevity of the training data rather than learning a generalizable, efficient reasoning policy. The poor performance of the hybrid SFT+O1-Pruner method further suggests that a simple combination of off-the-shelf techniques is insufficient.
These findings underscore the superiority of RL-based methods like LC-R1, which foster more robust and genuinely efficient reasoning skills.

Compression remains consistent across varying problem difficulties. To analyze our method's behavior at a microscopic level, we plot the per-problem Pass@1 accuracy against the original model's token consumption on the AIME25 benchmark (Figure 4, right). The plot reveals a clear difficulty spectrum, where problems requiring more tokens from the base model generally correspond to lower Pass@1 scores. Crucially, LC-R1 applies a uniform and significant compression ratio across this entire spectrum, with per-problem outcomes (i.e., success or failure) remaining remarkably consistent with those of the base model. This provides strong evidence that LC-R1 functions as a robust and difficulty-agnostic efficiency layer, successfully streamlining the reasoning process without altering the model's core problem-solving logic for any specific problem.

# 5. Compression Impact Analysis
Compression does not impact exploration capability. To investigate the deeper impact of compression on the model's problem-solving potential, we sampled 256 times on AIME25 with a maximal length of 32,768 and tested the Pass@k score on both models before and after compression. The results in Figure 4 (left) reveal a key phenomenon: across the entire Pass@k evaluation range from k = 1 to 128 on the AIME25 dataset, the performance curve of the model compressed by our LC-R1 method almost perfectly overlaps with that of the original model. This result strongly demonstrates that the model's ability to explore and find a correct solution through multiple attempts is not impaired by the compression. It suggests that the pruned "invalid thinking" segments are truly redundant and their removal does not diminish the model's underlying knowledge or creative problem-solving potential.
Large Reasoning Models (LRMs) have achieved remarkable success, yet they often suffer from producing unnecessary and verbose reasoning chains. We identify a core aspect of this issue as "invalid thinking" -- models tend to repeatedly double-check their work after having derived the correct answer. To address this specific inefficiency, we move beyond the general principles of Efficacy and Efficiency to propose two new, fine-grained principles: Brevity, which advocates for eliminating redundancy, and Sufficiency, which ensures critical reasoning steps are preserved. Guided by these principles, we introduce LC-R1, a post-training method based on Group Relative Policy Optimization (GRPO). LC-R1 employs a novel combination of a Length Reward for overall conciseness and a Compress Reward that is specifically designed to remove the invalid portion of the thinking process. Extensive experiments on multiple reasoning benchmarks demonstrate that LC-R1 achieves a significant reduction in sequence length (~50%) with only a marginal (~2%) drop in accuracy, achieving a favorable trade-off point on the Pareto frontier that prioritizes high compression. Our analysis further validates the robustness of LC-R1 and provides valuable insights for developing more powerful yet computationally efficient LRMs. Our code is released at https://github.com/zxiangx/LC-R1.
[ "cs.AI", "cs.CL" ]
# 1 Introduction
Large language models (LLMs) have shown remarkable performance on various NLP tasks; however, due to their huge numbers of parameters, LLMs require significant GPU memory for inference, substantially limiting throughput and latency. To address these challenges, quantization methods (Yao et al., 2022; Dettmers et al., 2022, 2023; Wu et al., 2023b; Yao et al., 2023; Kim et al., 2024a) have been widely studied as an effective technique for reducing the memory requirement of LLMs, potentially improving latency as well, by representing the weights and activations of LLMs in low precision. In quantization, one of the most challenging issues is the presence of outliers in weights and activations, as they widen the quantization range and increase quantization error. Recently, leveraging the rotational invariance of transformers (Liu et al., 2024b), rotation-based quantization has been extensively applied to mitigate outliers, motivated by the observation that outliers are reduced after rotation. Similarly, SmoothQuant (Xiao et al., 2023) exploits the scaling invariance of linear layers, dividing activation values by channel-specific scaling factors and thus greatly reducing activation outliers without severely strengthening weight outliers. The scaling invariance is further utilized in activation-aware quantization (AWQ) (Lin et al., 2024), which primarily focuses on "salient channels" to reduce quantization errors, identifying salient channels based on activation magnitude. Rather than being limited to the original feature space as in (Lin et al., 2024), this paper extensively explores the rotational invariance for saliency-aware weight quantization by identifying salient channels based on "principal dimensions in the projection space," thereby proposing rotation-based saliency-aware weight quantization (ROSAQ).
By definition, the principal dimensions resulting from principal component analysis (PCA) maximize the variances of channel values in the projected space, and accordingly substantially increase their activation magnitudes. Our key expectation is that these principal channels with the largest eigenvalues are more dominant and salient than the existing magnitude-based salient channels in the original space, due to their inherent variance-maximizing property, thereby further improving saliency-aware quantization. The proposed ROSAQ consists of three steps:
• PCA-based projection, which first performs a PCA projection with eigenvectors computed on a calibration set to obtain the PCA-projected calibration set. For the multi-head self-attention (MHSA) layer, we further propose the use of head-wise PCA, where the PCA projection is applied separately to each head-specific attention representation.
• Salient channel identification, which selects "principal channels" corresponding to the $K$-largest eigenvalues as 'salient' channels, and regards the other channels as normal non-salient channels.
• Saliency-aware quantization with mixed precision, which applies per-group quantization, employing FP16 for the salient group of channels and INT3/4 for all other groups of non-salient channels, where a group consists of 128 channels.
Experiment results on Wikitext2, zero-shot common-sense reasoning, and zero-shot MMLU tasks show that the proposed ROSAQ improves over the baseline saliency-aware quantization with mixed precision in the original feature space and over existing quantization methods, with minimal performance degradation. Furthermore, with kernel fusion, ROSAQ exhibits about 2.3x speedup over the FP16 implementation when generating 256 tokens with a batch size of 64, and about 2x speedup when generating 128 tokens with a batch size of 128.
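For illustration, the per-group quantization in the third step can be sketched with a generic asymmetric round-to-nearest quantize/dequantize routine over groups of 128 values. This is a standard scheme we assume for exposition, not necessarily the paper's exact kernel:

```python
import numpy as np

def quantize_per_group(w, bits=4, group_size=128):
    """Quantize then dequantize weights per group of `group_size` values."""
    shape = w.shape
    g = w.reshape(-1, group_size)
    g_min = g.min(axis=1, keepdims=True)
    g_max = g.max(axis=1, keepdims=True)
    scale = (g_max - g_min) / (2**bits - 1)       # one scale per group
    scale = np.where(scale == 0, 1.0, scale)      # guard constant groups
    q = np.clip(np.round((g - g_min) / scale), 0, 2**bits - 1)
    return (q * scale + g_min).reshape(shape)     # dequantized weights
```

Round-to-nearest guarantees the reconstruction error per value is at most half a quantization step of its group.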
Our contributions are summarized as follows: 1) we propose ROSAQ, a novel rotation-based saliency-aware quantization that chooses the principal channels resulting from the PCA projection as salient ones; 2) we apply head-wise PCA projection across multiple heads for quantizing the parameters of MHSA; and 3) the proposed ROSAQ leads to improved performance over existing quantization methods on Wikitext2, zero-shot common-sense reasoning, and zero-shot MMLU tasks.

Figure 1: An overview diagram of ROSAQ, which quantizes the weights of a linear layer XW using rotational invariance as described by Eq. (1), where $\mathbf{X}$ is the calibration data matrix. ROSAQ first applies the PCA-based projection, taking $\mathbf{Q}$ as $\mathbf{R}$, with eigenvectors obtained from Eq. (4). The salient channels, denoted as $\mathbf{W}_S$ and corresponding to the $K$ largest eigenvalues, are represented in FP16, while the remaining non-salient channels $\mathbf{W}_N$ are represented in low precision, such as INT3/INT4.

# 2 Related Work
Quantization methods have been studied in mainly two categories – quantization-aware training (QAT) (Liu et al., 2023; Shen et al., 2024; Ma et al., 2024) and post-training quantization (PTQ) (Pan et al., 2023; Tseng et al., 2024; Wu et al., 2023a; Guan et al., 2024; Yao et al., 2024; Liu et al., 2024a; Dettmers et al., 2024; Shao et al., 2024). PTQ is widely applied because it requires no (or minimal) training and only needs a small calibration set. While rotational invariance was previously applied to language model pruning (Ashkboos et al., 2024a; Hu et al., 2024), rotation-based quantization has been extensively studied to reduce outliers, including incoherence processing based on orthogonal projections (Chee et al., 2024; Ashkboos et al., 2024b) and optimization of the rotation matrix (Liu et al., 2024b) based on Cayley SGD (Li et al., 2020). Saliency-aware quantization has also been proposed by AWQ (Lin et al., 2024), which selects salient channels based on activation magnitudes but uses "full low-precision" quantization, leveraging the "scaling invariance" property of linear layers without mixed precision. Unlike rotation-based quantization methods such as (Liu et al., 2024b), which aim to remove outliers, ROSAQ applies rotation to more effectively identify salient channels. Instead of the scaling invariance used in AWQ, ROSAQ exploits rotational invariance for salient channel identification.

# 3 Method: ROSAQ
Figure 1 presents the overall diagram of ROSAQ. Suppose that a calibration set consists of $N$ data samples, formally presented as $\mathbf{X} = [\mathbf{x}_1, \cdots, \mathbf{x}_N]^T \in \mathbb{R}^{N \times d}$, where $\mathbf{x}_i \in \mathbb{R}^d$ is the $i$-th data representation. ROSAQ uses the rotational invariance of all linear layers of the form XW in the transformer, where $\mathbf{W} \in \mathbb{R}^{d \times d'}$ is a weight matrix of parameters, formulated as follows:
$$
\mathbf{X}\mathbf{W} = (\mathbf{X}\mathbf{R})(\mathbf{R}^T\mathbf{W})
$$
where $\mathbf{R} \in \mathbb{R}^{d \times d}$ is a rotation matrix consisting of orthonormal vectors. After applying weight quantization, Eq. (1) is approximated by:
$$
\mathbf{X}\mathbf{W} \approx (\mathbf{X}\mathbf{R})\, Q\big(\mathbf{R}^T\mathbf{W}\big)
$$
where $Q$ is a quantization function, which adopts per-group quantization with a group size of 128 channels. Similar to AWQ (Lin et al., 2024), ROSAQ also takes into account the assumption that weights are not equally salient.
Different from AWQ, which applies quantization to all channels in a scale-sensitive manner, ROSAQ deploys mixed precision, keeping high precision for salient channels while using low precision for non-salient channels, in order to minimize the quantization error particularly for salient channels. To formally present the mixed precision, suppose the column vectors of $\mathbf{R}$ are sorted by their saliency degrees and then divided into two groups, salient and non-salient channels, $\mathbf{R} = [\mathbf{R}_S, \mathbf{R}_N]$, where $\mathbf{R}_S \in \mathbb{R}^{d \times K}$ holds the orthonormal vectors for salient channels, and $\mathbf{R}_N \in \mathbb{R}^{d \times (d-K)}$ those for non-salient channels. Under mixed precision, Eq. 2 is approximated by:
$$
\mathbf{X}\mathbf{W} \approx (\mathbf{X}\mathbf{R}) \left[ \begin{array}{c} \mathbf{W}_S \\ Q(\mathbf{W}_N) \end{array} \right]
$$
where $\mathbf{W}_S = \mathbf{R}_S^T\mathbf{W} \in \mathbb{R}^{K \times d'}$ is the sub-block of the rotated weight matrix for salient channels, and $\mathbf{W}_N = \mathbf{R}_N^T\mathbf{W} \in \mathbb{R}^{(d-K) \times d'}$ is the one for non-salient channels. ROSAQ consists of three steps, 1) PCA-based projection, 2) salient channel identification, and 3) saliency-aware quantization with mixed precision, which are presented in the next subsections in more detail.

# 3.1 PCA-based projection for Computing R
To obtain the rotation matrix $\mathbf{R}$ in Eq. 3, we perform PCA on the calibration set $\mathbf{X}$ as follows:
$$
\mathbf{X}^T\mathbf{X} = \mathbf{R}\boldsymbol{\Lambda}\mathbf{R}^T
$$
where $\mathbf{R} \in \mathbb{R}^{d \times d}$ contains the eigenvectors of $\mathbf{X}^T\mathbf{X}$, and $\boldsymbol{\Lambda}$ is the corresponding eigenvalue matrix.
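The pipeline of Eqs. (1)-(4) can be sketched end-to-end in NumPy: eigendecompose $\mathbf{X}^T\mathbf{X}$, sort channels by eigenvalue, keep the top-$K$ rotated rows in full precision, and quantize the rest. Here `quantize` stands in for the per-group function $Q$; this is an illustrative sketch, not the released implementation:

```python
import numpy as np

def rosaq_sketch(X, W, K, quantize):
    """Mixed-precision forward pass (XR)[W_S ; Q(W_N)] from Eq. (3)."""
    eigvals, R = np.linalg.eigh(X.T @ X)     # Eq. (4): X^T X = R Lambda R^T
    R = R[:, np.argsort(eigvals)[::-1]]      # sort channels by eigenvalue
    W_rot = R.T @ W                          # rotated weights R^T W
    W_rot[K:] = quantize(W_rot[K:])          # non-salient rows -> low precision
    return (X @ R) @ W_rot                   # salient top-K rows stay FP16
```

With an identity `quantize`, the rotational invariance of Eq. (1) guarantees the output equals XW exactly; quantization error is then confined to the low-variance, non-salient rows.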
Without loss of generality, we assume that the column vectors of $\mathbf{R}$ are sorted by their eigenvalues. To check whether the PCA projection helps to identify salient channels, Figure 2 presents the activation magnitudes across all channels in both the original and PCA-projected feature spaces. The activation values of salient channels are more dominant, and thus easier to distinguish, in the PCA-projected space than in the original feature space.

Figure 2: Magnitude of the input activation values to MHSA in LLaMA2-7B, before and after PCA-based rotation. Salient channels are more dominant in the PCA-projected space than in the original space. Detailed statistics are presented in Appendix F.

# 3.1.1 Head-wise PCA Projection in MHSA
In general, ROSAQ uses a layer-wise PCA projection, applied individually to the activation matrix $\mathbf{X}_l$ in a layer-specific manner for each linear layer represented by $\mathbf{X}_l\mathbf{W}_l$, resulting in its own rotation matrix $\mathbf{R}_l$ for each layer $l$. To better capture head-specific characteristics for quantization, ROSAQ deploys a head-wise PCA projection for MHSA, where a separate PCA is performed for each head-specific attentive representation. More specifically, suppose that $\mathbf{H}_h \in \mathbb{R}^{m \times d_h}$ is the $h$-th head-specific attentive representation resulting from the activation matrix $\mathbf{Z}_{l-1}$ at the previous layer, as follows.
$$
\mathbf{H}_h = \operatorname{Attention}\big(\mathbf{Z}_{l-1}\mathbf{W}_h^Q,\ \mathbf{Z}_{l-1}\mathbf{W}_h^K,\ \mathbf{Z}_{l-1}\mathbf{W}_h^V\big)
$$
where $\mathbf{W}_h^Q, \mathbf{W}_h^K, \mathbf{W}_h^V \in \mathbb{R}^{d \times d_h}$ are the weight matrices at the $h$-th head for the query, key, and value parts, respectively. Instead of applying a global PCA on the concatenated multi-head representation $\operatorname{concat}(\mathbf{H}_1, \cdots, \mathbf{H}_H)$, we approximate MHSA by using the head-specific PCA projection as follows:
$$
\mathrm{MHSA}\left(\mathbf{Z}_{l-1}\right) \approx \sum_{h=1}^{H} \left(\mathbf{H}_h\mathbf{R}_h\right) Q\big(\mathbf{R}_h^T\mathbf{W}_h^O\big)
$$
where $\mathbf{R}_h$ is a head-specific PCA projection matrix, consisting of eigenvectors obtained by applying PCA on the head-specific calibration set $\mathbf{X}_h \in \mathbb{R}^{N \times d_h}$ for the $h$-th head, and $\mathbf{W}_h^O \in \mathbb{R}^{d_h \times d}$ is the output projection matrix. In Appendix C, we show that the use of head-specific PCA decreases perplexity compared to the global PCA, as in Table 3.

Table 1: Comparison results between ROSAQ and other quantization methods on LLaMA3-8b and Qwen2-7b models, when group-wise quantization is applied using INT4 with a group size of 128 (i.e., INT4 g128). PPL indicates the perplexity score on WikiText2; CSR and MMLU refer to the averaged zero-shot accuracies on zero-shot common-sense reasoning and MMLU tasks, respectively. * results were quoted from AWQ (Lin et al., 2024). More detailed results are provided in Appendix E.
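Concretely, head-wise PCA amounts to eigendecomposing each head's own $d_h \times d_h$ covariance rather than one global $d \times d$ covariance. A minimal sketch, where the helper name and the contiguous-slice convention for heads are our assumptions:

```python
import numpy as np

def headwise_rotations(X, n_heads):
    """One PCA rotation matrix R_h per head, from head-sized slices of X."""
    d = X.shape[1]
    d_h = d // n_heads
    rotations = []
    for h in range(n_heads):
        X_h = X[:, h * d_h:(h + 1) * d_h]            # head-specific slice
        eigvals, R_h = np.linalg.eigh(X_h.T @ X_h)   # d_h x d_h covariance
        rotations.append(R_h[:, np.argsort(eigvals)[::-1]])
    return rotations
```

Each $\mathbf{R}_h$ is orthonormal, so each head's rotated output can be paired with $\mathbf{R}_h^T\mathbf{W}_h^O$ without changing the full-precision result.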
# 3.2 Salient channel identification
To identify salient channels in the PCA-projected space, we sort the projected channels according to their eigenvalues and select the group of channels corresponding to the largest eigenvalues as salient ones. In Appendix F, Table 8 shows that the top-ranked channels also tend to have the largest average magnitudes.

# 3.3 Saliency-aware quantization with mixed-precision
After splitting the weights into two groups, $\mathbf{W}_S$ and $\mathbf{W}_N$, for salient and non-salient channels, we retain FP16 precision for the salient group, while applying INT3/INT4 quantization to the non-salient group. Detailed settings can be found in Appendix G.

# 4 Experiments
# 4.1 Experimental Setup
We apply per-group weight quantization under INT3/INT4, where each group consists of 128 channels. Similar to AWQ (Lin et al., 2024), we use a small calibration set from the Pile dataset (Gao et al., 2020) to prevent overfitting to any specific downstream domain. For LLMs, we employ open-source models, including the LLaMA2 and LLaMA3 (Touvron et al., 2023; AI@Meta, 2024) and Qwen-2 (Yang et al., 2024) model families. To evaluate the quantized LLMs, we report perplexity (PPL) on the WikiText-2 dataset (Merity et al., 2016) and use standard evaluation metrics for zero-shot Common Sense Reasoning tasks and the zero-shot MMLU benchmark (Hendrycks et al., 2021). We compare ROSAQ with existing quantization methods, such as GPTQ (Frantar et al., 2023), SpinQuant (Liu et al., 2024b), and AWQ (Lin et al., 2024). We also report a "rotation-less" baseline run, denoted as Mixed, which selects salient channels according to activation magnitudes in the original feature space (i.e., $\mathbf{R} = \mathbf{I}$ in Eq. (3)).

# 4.2 Main Results
Table 1 presents the performances of models quantized to INT4, in terms of perplexity on WikiText2 and zero-shot accuracies on Common Sense Reasoning tasks and zero-shot MMLU.
As seen in Table 1, ROSAQ performs slightly better than GPTQ (Frantar et al., 2023), SpinQuant (Liu et al., 2024b), and AWQ (Lin et al., 2024), except in perplexity. Notably, in the more aggressive 3-bit setting of Table 6, ROSAQ outperforms the other methods, achieving the highest MMLU results across the various categories. Detailed results are presented in Appendices E and D. In particular, Table 4 compares inference throughputs, reporting that ROSAQ achieves approximately 2.3x speedup over FP16 when generating 256 tokens with a batch size of 64.
Quantization has been widely studied as an effective technique for reducing the memory requirement of large language models (LLMs), potentially improving the latency time as well. Utilizing the rotational invariance of transformers, we propose rotation-based saliency-aware weight quantization (ROSAQ), which identifies salient channels in the projection feature space rather than the original feature space, where the projected "principal" dimensions are naturally considered "salient" features. The proposed ROSAQ consists of 1) PCA-based projection, which first performs principal component analysis (PCA) on a calibration set and transforms via the PCA projection, 2) Salient channel identification, which selects dimensions corresponding to the K-largest eigenvalues as salient channels, and 3) Saliency-aware quantization with mixed-precision, which uses FP16 for salient dimensions and INT3/4 for other dimensions. Experiment results show that ROSAQ shows improvements over the baseline saliency-aware quantization on the original feature space and other existing quantization methods. With kernel fusion, ROSAQ presents about 2.3x speedup over the FP16 implementation in generating 256 tokens with a batch size of 64.
[ "cs.CL", "cs.AI" ]
# INTRODUCTION
Artificial Intelligence (AI) software has become a critical component in numerous applications, ranging from autonomous driving [1] and healthcare diagnostics [2] to financial decision-making and public service automation [3]. The rapid advancement and adoption of AI technologies have brought profound benefits, but also significant challenges related to reliability, safety, and ethics. As AI systems increasingly influence high-stakes domains, ensuring their trustworthiness and robustness is essential [4]. One of the key processes to establish trust is software verification and validation, which aims to demonstrate that a software system meets its declared properties and performs as expected under realistic operating conditions [5]. Traditionally, software verification and validation have relied on a combination of testing, static analysis, and documentation-based processes such as performance reports, external audits, and model cards [6]. While these approaches have proven effective for conventional software, they face significant limitations when applied to AI systems, particularly those based on machine learning (ML). ML models are inherently probabilistic, data-dependent, and often opaque, complicating the assessment of correctness and compliance. Furthermore, the deployment of ML models as services (MLaaS) [7] introduces additional challenges, as the model internals remain inaccessible to external validators. This black-box nature limits direct inspection and complicates verification of whether the declared model was actually used for inference, or whether reported performance metrics truthfully represent the deployed system's behavior [8]. Consequently, traditional validation approaches struggle to provide objective, tamper-proof evidence, weakening accountability and trust, especially in regulated sectors where compliance mandates clear, auditable validation evidence, as emphasized by recent legislation such as the EU AI Act [9].
A promising approach to improve validation transparency and objectivity is the use of Zero-Knowledge Proofs (ZKPs) [10]. ZKPs are cryptographic protocols that allow one party (the prover) to demonstrate to another party (the verifier) that a computation was carried out correctly, without requiring the verifier to rerun the computation or access sensitive internal details. Originally developed for the broader field of verifiable computing, ZKPs have increasingly been applied to ML, where, for example, they can offer a mechanism to prove that an inference step was executed correctly using a declared model, without revealing the model’s internal parameters or the input data itself [11]. This work focuses on evaluating the feasibility of applying ZKPs to the broader challenge of Trustworthy AI Software verification and validation in the MLOps lifecycle. By embedding ZKPs into AI software workflows, it becomes possible to generate tamper-proof, cryptographically verifiable evidence that computations adhere to declared specifications and requirements, without revealing sensitive details such as proprietary model weights or training data. This approach enables external auditors, customers, or regulators to independently verify AI software operations while respecting intellectual property concerns. 
In summary, the key contributions of this work are: (a) a systematic survey of ZKP protocols, highlighting five key properties (non-interactivity, transparent setup, standard representations, succinctness, and post-quantum security) that make them suitable for integration into AI system verification and validation pipelines; (b) a structured analysis of ZKP-enhanced ML applications, organized according to the stages of the TDSP model [12], and for each application, the specific verification objective, the ML model used, and the ZKP protocol adopted are detailed; (c) an exploration of the emerging convergence between ZKP and ML technologies toward a unified Zero-Knowledge Machine Learning Operations (ZKMLOps) verification framework for Trustworthy AI, identifying research trends and future works. The remainder of this paper is organized as follows. Section 2 provides background on Trustworthy AI, AI software verification and validation, and Zero-Knowledge Proofs. Section 3 discusses related work. Section 4 outlines the research methodology. Section 5 presents a systematic literature review on ZKP protocols, identifying five key properties that make them suitable for integration into AI system verification and validation pipelines. Section 6 presents a systematic literature review on ZKP-Enhanced ML applications, showing the convergence of the research domain toward a unified Zero-Knowledge Machine Learning Operations (ZKMLOps) verification framework for Trustworthy AI. Section 7 outlines potential research directions and opportunities for extending the contributions of this work. Section 8 concludes the work, highlighting the key findings of the research. # 2 BACKGROUND This section lays the foundational groundwork, first by outlining the principles of Trustworthy AI, then by detailing the specific challenges in AI Software Verification and Validation, and finally by introducing Zero-Knowledge Proofs as the foundational cryptographic technique for this work. 
(iii) Verification, i.e., confirming that the system adheres to design specifications and functions as intended, (iv) Continuous Governance, i.e., maintaining oversight to ensure long-term accountability, compliance, and adaptability. # 2.2 AI Software Verification and Validation Software validation is a well-established process in traditional software engineering, ensuring that software fulfills its declared requirements and performs as intended [5]. When applied to AI software, validation becomes significantly more challenging. Traditional validation techniques assume deterministic behavior, where outputs are traceable to explicitly written source code. Modern AI systems, especially those based on ML, exhibit probabilistic behavior that depends heavily on training data, model architecture, and optimization processes. This makes it harder to directly link observed outputs to the intended requirements [6]. Further complicating the process, many AI models are proprietary and deployed as services, meaning external validators, regulators, or customers cannot access the internal details of the model. This black-box nature forces external parties to rely on documentation or self-reported performance metrics, limiting the objectivity and reproducibility of the validation process. Moreover, current approaches such as model cards or empirical performance reports provide useful context, but they are fundamentally self-declared and do not inherently provide verifiable evidence [6]. In turn, external validation mechanisms, such as audits or independent re-testing, also face practical limits when applied to AI systems. Audits rely on documentation provided by the developer, creating risks of selective reporting. Independent re-testing, while more objective, may be infeasible for large or proprietary models where data and models cannot be freely shared [17]. 
# 2.1 Trustworthy AI Trustworthy AI has emerged as a critical area of focus as AI systems increasingly impact society, business, and everyday life. Ensuring that these systems are reliable, ethical, and safe is essential for promoting public trust and for enabling the responsible deployment of AI technologies at scale. The concept of Trustworthy AI is rooted in five foundational ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability [13]. There is a set of well-established technical and ethical dimensions of trustworthy AI [4], [14]: (i) Safety & Robustness, i.e., ensuring systems perform reliably under various conditions, (ii) Fairness & Non-discrimination, i.e., preventing bias and ensuring equitable outcomes, (iii) Explainability & Transparency, i.e., making AI decisions understandable and traceable, (iv) Privacy & Data Governance, i.e., protecting user data and ensuring responsible data use, (v) Accountability & Auditability, i.e., assigning responsibility and enabling oversight, (vi) Societal & Environmental Well-being, i.e., considering broader impacts on society and the environment. A systematic approach to trustworthy AI spans the entire AI lifecycle, from data acquisition and model development to deployment and monitoring, and includes the following key components [15], [16]: (i) Risk Analysis, i.e., identifying and mitigating potential ethical, technical, and societal risks. (ii) Validation, i.e., ensuring the AI system meets performance goals and stakeholder expectations in its intended context, # 2.3 Zero-Knowledge Proofs ZKPs provide a formal mechanism through which a prover can convince a verifier that a given statement is true, without revealing any information beyond the truth of the statement itself [18]. To introduce the idea, consider a traditional software application used to determine eligibility for a benefit based on income. 
The rule might be: “grant the benefit if the citizen’s income is less than \$30,000.” With a ZKP, the citizen (prover) can convince an organization (verifier) that their income satisfies this condition, without revealing the actual income. At the core of modern ZKP systems is the transformation of any arbitrary computations into arithmetic circuits defined over finite fields [10]. Any computable function can be rewritten as a sequence of additions and multiplications over a finite field $\mathbb{F}_p$, where $p$ is a large prime. The prover’s task is to demonstrate knowledge of a valid assignment to all the variables in the circuit, ensuring that all constraints hold. Formally, the prover proves the existence of a secret witness $w$ that satisfies: $$ C(x, w) = y $$ where $C$ denotes the arithmetic circuit, $x$ represents public inputs, $w$ is the private witness, and $y$ is the public output of the computation. Returning to the previous example: the public input $x$ encodes the eligibility threshold (\$30,000), the witness $w$ represents the citizen’s confidential income, and the public output $y$ is the Boolean result (e.g., true if the condition holds). The ZKP convinces the verifier that there exists a secret $w$ such that the circuit $C$ satisfies $C(x, w) = y = \mathtt{true}$, without revealing $w$. ZKPs were first studied in the setting of interactive proofs [10], where the prover and verifier engage in a sequence of challenge-response rounds. These protocols guarantee that a cheating prover cannot convince an honest verifier of a false statement, except with negligible probability. A significant step towards removing interaction was the Fiat-Shamir heuristic [19]. This technique transforms certain interactive protocols into non-interactive variants by replacing the verifier’s random challenges with the output of a cryptographic hash function applied to the transcript. 
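To make the roles of $x$, $w$, and $y$ concrete, the sketch below evaluates the toy eligibility relation directly. All names are illustrative, and the check here sees the witness in the clear; the point of a real ZKP is precisely that the verifier can be convinced the relation holds without ever seeing `w`.

```python
# Toy model of the relation C(x, w) = y from the eligibility example.
# Illustrative only: this check reads the witness w directly, whereas a
# real ZKP would prove the relation without revealing w to the verifier.

def eligibility_circuit(x: int, w: int) -> bool:
    """C(x, w): true iff the private income w is below the public threshold x."""
    return w < x

def verify_claim(x: int, w: int, y: bool) -> bool:
    """Accept only if the claimed public output y matches C(x, w)."""
    return eligibility_circuit(x, w) == y

threshold = 30_000       # public input x
income = 27_500          # private witness w (hidden in a real ZKP)
claimed_output = True    # public output y

assert verify_claim(threshold, income, claimed_output)      # honest prover accepted
assert not verify_claim(threshold, 45_000, claimed_output)  # cheating prover rejected
```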
While widely used and practical, this transformation’s security is typically proven in the idealized Random Oracle Model [20]. Blum et al. [21] later gave a precise mathematical definition of Non-Interactive Zero-Knowledge Proofs (NIZKs) and showed how to build them with provable security guarantees in the standard cryptographic model, typically using a shared reference string that all parties can access. Both approaches result in a self-contained proof that can be verified without further interaction. To enable efficient proof generation and verification, many systems encode the execution trace of the computation into a polynomial $P(x)$ over $\mathbb{F}_p$: $$ P(x) = \sum_{i=0}^{n} c_i x^i $$ The prover commits to this polynomial using a polynomial commitment scheme [22], which ensures both binding (the committed polynomial cannot be altered later) and optionally hiding (its content remains secret). The verifier can then check whether the polynomial satisfies the required properties by querying a few evaluations at selected points. This drastically reduces the size of the proof and the cost of verification, achieving the property of succinctness. A key challenge in applying ZKPs to domains such as ML is handling non-linear functions, which are not naturally supported in arithmetic circuits. Neural networks, for example, often include non-linear activation functions like the Rectified Linear Unit $(\mathrm{ReLU}(x) = \max(0, x))$ [23]. To represent such operations in ZKP-friendly form, systems typically use lookup arguments [24]. In a lookup argument, the prover shows that each non-linear operation maps an input to an output according to a precomputed table $T$: $$ \exists (x, y) \in T \quad \text{such that} \quad y = f(x) $$ This allows incorporating non-polynomial logic into ZKPs while preserving succinctness and zero-knowledge. 
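The lookup idea for ReLU can be sketched over a small quantized domain. The table bounds and function names below are assumptions made for illustration; a real lookup argument proves membership in $T$ cryptographically rather than by direct set lookup.

```python
# Sketch of a lookup argument for ReLU(x) = max(0, x) over a quantized range.
# Illustrative only: here membership in T is checked by direct set lookup,
# whereas a real system proves it cryptographically.

RANGE = range(-8, 8)  # assumed quantized input domain

# Precomputed table T of valid (input, output) pairs for the non-linear op.
T = {(x, max(0, x)) for x in RANGE}

def prover_relu_pairs(inputs):
    """Prover computes ReLU and exposes the (input, output) pairs it used."""
    return [(x, max(0, x)) for x in inputs]

def verifier_check(pairs):
    """Verifier only checks that every claimed pair appears in T."""
    return all(pair in T for pair in pairs)

assert verifier_check(prover_relu_pairs([-3, 0, 5]))
assert not verifier_check([(-3, -3)])  # wrong output for a negative input: rejected
```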
The table $T$ encodes valid input-output pairs for the non-linear function, and the verifier only checks that the prover’s values appear in the table. # 3 RELATED WORK To demonstrate the significance of our contribution, we conducted a comprehensive review of pertinent literature by examining leading conferences and journals, complemented by a snowballing methodology. We aimed to identify works that survey the applicability of ZKP protocols to ML, particularly those that delineate critical factors and properties, as well as studies exploring the integration of ZKP within ML applications. The review revealed several surveys, each addressing specific facets of ZKP in ML; however, none provided a holistic perspective on the integration of ZKP across the MLOps pipeline within the broader context of Trustworthy AI verification and validation. Lavin et al. [25] present a comprehensive survey aimed at both researchers and practitioners, covering a wide spectrum of real-world applications and use cases of ZKPs. Within the domain of ML, the survey contextualizes recent advances—including those discussed in Section 6 of this work—highlighting the current state of the art. While the contribution is substantial, it does not explicitly address the MLOps lifecycle nor provide an in-depth discussion of protocol-level considerations, ML models addressed, or verification processes essential to operationalizing ZKPs in ML pipelines. Peng et al. [26] deliver a survey of Zero-Knowledge Machine Learning (ZKML) research, covering works from June 2017 to December 2024, which they categorize into verifiable training, inference, and testing, complemented by discussions on implementation challenges and commercial applications. Their work offers a valuable chronological and stage-based overview of the ZKML field. 
While comprehensive in its temporal scope and categorization by verification stage, the survey does not extend its analysis to a detailed mapping of ZKP-enhanced ML applications across a full MLOps lifecycle process. Furthermore, their review does not place a central focus on a systematic, criteria-driven assessment of ZKP protocol characteristics for AI system verification, nor on the explicit conceptualization of a unified MLOps framework designed to integrate ZKPs for advancing Trustworthy AI. We found only one paper, by Balan et al. [27], that proposes a framework for verifiability across the whole AI pipeline. They identify key parts and link existing cryptographic tools to different stages, from data sourcing to unlearning, aiming to allow verification of AI-generated assets. While their goal of a complete view is valuable, the pipeline stages they describe (such as “verification of raw dataset” and “extraction and analysis”) are presented generally and do not seem to follow a formal MLOps model. The authors also state that, as yet, “there are no implementations of this fully verifiable pipeline,” which shows such end-to-end solutions are still largely conceptual. Therefore, their work does not offer a systematic survey of existing ZKP-enhanced ML applications organized by a standard MLOps lifecycle, nor does it deeply analyze ZKP protocol suitability for various ML tasks using specific criteria—areas central to our contributions. In summary, while the reviewed literature provides valuable insights into ZKP applications for ML, general ZKP surveys or conceptual frameworks for engineering AI verifiability with ZKP approaches are missing, which motivates our work in proposing a framework to provide a holistic approach to Trustworthy Machine Learning Operations with ZKPs. # 4 METHODOLOGY This work adopts a mixed methodology that combines two systematic literature reviews following the methodology described by Kitchenham et al. 
[28] with a systematic analysis of ZKP protocols and their applications in ML. The first review identifies and characterizes relevant ZKP protocols, examining their mathematical foundations, performance properties, and implementation maturity. The goal is to identify common patterns and challenges and define a set of essential properties that a ZKP protocol should possess to be effectively applied in an ML context. The second review analyzes the emerging field of ZKP-Enhanced ML, exploring how ZKPs have been applied to validate and secure ML processes. We further classify each relevant contribution based on the Team Data Science Process (TDSP) model [12] to show the convergence of this research domain towards a unified MLOps pipeline verification framework. Furthermore, to encourage replication, we provide a full replication package available online. # 4.1 Literature Search Process for ZKP Protocols The first systematic literature review focused on identifying and characterizing the main ZKP protocols that could potentially be applied to inference validation in ML systems. Since this initial review was intended to capture the landscape of general-purpose ZKP protocols, its scope was not restricted to ML-specific applications, allowing for a broader understanding of available proof systems, their theoretical properties, and their practical characteristics. # 4.1.1 Research Query The query applied for this search was: ("zero knowledge" OR "verifiable comput\*") AND (proof OR argument) AND (interactive OR "non-interactive") This query was designed to retrieve works that focus on both interactive and non-interactive proof systems, including both classical ZKPs and broader verifiable computing techniques. The search was performed in the ACM Digital Library, IEEE Xplore, and the Cryptology ePrint Archive, as these libraries cover the main venues where ZKP research has been published. 
# 4.1.2 Screening and Filtering Process The search yielded a total of 1,427 papers across all three libraries. To refine this set, a comprehensive filtering process was applied, consisting of three main phases: title screening, abstract screening, and full-text assessment. In the title screening phase, papers were evaluated based on their titles, and those clearly indicating topics unrelated to the core focus of ZKP contributions—such as works exclusively centered on blockchain applications, finance, or other domains with no relevance to general ZKP advancements—were excluded. During the abstract screening phase, papers were further assessed to eliminate those that, despite referencing ZKPs, did not offer direct contributions to the design, analysis, or benchmarking of ZKP protocols. Additionally, duplicates across the libraries were identified and removed to ensure a unique set of studies. In the final phase, full-text assessment was conducted, where each remaining paper was thoroughly reviewed to confirm that it provided a meaningful discussion of ZKP protocols themselves, rather than merely applying pre-existing protocols to external use cases without novel insight. Papers failing to meet this criterion were discarded, and any remaining redundancies were addressed. After completing this rigorous process, a final set of 30 papers was obtained. # 4.1.3 Quality Indices To systematically assess the quality of these 30 papers, we defined a set of quality indices, inspired by established methodologies in literature reviews [29]. These indices evaluate key aspects of each study, assigning scores from 0 to 2 based on specific criteria, like problem definition, problem context, research design, results, insights derived, and limitations. 
Each surviving paper was thoroughly read and scored according to these metrics, which include the clarity of problem definition, the depth of contextual description, the explicitness of research design, the specificity of contributions, the insightfulness of derived lessons, and the acknowledgment of limitations. This scoring mechanism enabled us to prioritize papers that not only meet the thematic relevance criteria but also exhibit robustness and transparency in their scientific approach. The resulting quality scores provide a foundation for identifying the most significant works that shape our understanding of ZKP protocols and their theoretical advancements. # 4.2 Systematic Literature Review on ZKP-Enhanced ML The second component of the methodological process consisted of a systematic literature review (SLR) focused specifically on the intersection of ZKPs and ML. This review aimed to identify existing approaches where ZKPs were applied to ML processes. The objective was to understand how the current research landscape addresses the need for externally verifiable, privacy-preserving validation of ML computations. # 4.2.1 Research Query The following search query was developed to capture works focusing explicitly on the use of ZKPs for verifying or validating ML processes: ("zero knowledge proof" OR "verifiable comput\*") AND ("ML" OR "neural network" OR "deep learning") This query was executed across two major digital libraries, IEEE Xplore and ACM Digital Library. The Cryptology ePrint Archive was excluded from this review as a pilot study showed a lack of directly relevant work focusing on ML inference. # 4.2.2 Screening and Filtering Process The initial query returned a total of 1,134 papers across the two libraries. These papers were filtered in two stages, applying progressively stricter criteria to ensure relevance to the topic of ZKP-enhanced ML validation. 
In the first stage, papers were excluded if they focused only on privacy-preserving ML techniques unrelated to ZKPs, or if they discussed general ML security (such as adversarial attacks or robustness) without addressing verification tasks. The remaining papers underwent the second stage, which involved a full-text review, with papers excluded if they: (i) Used ZKPs only as a theoretical reference without concrete implementation or application to ML workflows; (ii) Incorporated ZKPs in ways that did not contribute to verifiability or correctness validation, such as merely enhancing privacy without any verification objective; (iii) Applied existing ZKP protocols without modification or novel insight, offering limited contribution to the understanding or evolution of ZKP-Enhanced ML.

Fig. 1. At the top, the diagram depicts the nine phases of the TDSP model [12], while the bottom illustrates the four phases (grouped) of the MLOps lifecycle verification process derived from the TDSP model.

This process left a final set of 42 papers for inclusion in the literature review. # 4.2.3 Cross-Referencing and Snowballing To maximize coverage, an additional round of cross-referencing was conducted using the citations and bibliographies of the 42 selected papers. This step identified 15 additional works of relevance, bringing the final corpus to 57 papers. # 4.2.4 Comparative Analysis The final set of 57 papers was analyzed using a comparative framework designed to highlight key dimensions of existing ZKML approaches: ZKP Guarantees, i.e., completeness, soundness, zero-knowledge, and binding properties; Adopted Protocols, i.e., which ZKP protocols were employed; Targeted ML Model, i.e., which ML models were studied for the specific implementation; Targeted ML Lifecycle Phase, i.e., Data and Preprocessing Verification, Training and Offline Metrics Verification, Inference Verification, and Online Metrics Verification. 
These phases are derived through a bucketing process applied to the well-established TDSP model [12], and a visualization of this process is presented in Figure 1. The Data and Preprocessing Verification phase encompasses the verification of properties related to dataset design choices and preprocessing operations. Training and Offline Metrics Verification includes the verification of the training process and the evaluation of model performance using metrics such as accuracy and F1-score, which are computed right after the training. Inference Verification focuses on ensuring the correctness of the inference computation process. Finally, Online Metrics Verification involves the real-time verification of dynamic properties and metrics, such as model drift and live accuracy assessments.

TABLE 1 Condensed Comparison of Cryptographic Protocols.

The above-mentioned four phases represent the primary aspects currently addressed in the literature concerning the verification of MLOps lifecycle stages. While other established frameworks exist—such as CRISP-DM [30] and KDD [31]—the TDSP model was selected for its more fine-grained and comprehensive representation of the MLOps lifecycle. Unlike the aforementioned alternatives, TDSP places less emphasis on business understanding phases, which lie beyond the scope of this work. This analysis offers a comprehensive overview of the current state of the art in ZKP-enhanced ML, elucidating common challenges and uncovering gaps within the existing literature. Most notably, it reveals a discernible trend toward the convergence of research efforts in this domain, aiming to establish a unified framework for the verification and validation of the overall MLOps lifecycle. # 5 ZKP PROTOCOLS SUITABILITY FOR ML: A LITERATURE REVIEW ZKP protocols have evolved into a diverse landscape, with different designs optimized for various computational and security needs. 
This section categorizes the primary families of ZKP protocols and examines their relevance to ML applications. At the highest level, these protocols can be classified into interactive and non-interactive approaches. Beyond this fundamental distinction, protocols differ in their guarantees, setup requirements, computational representations, post-quantum security, succinctness, and performance characteristics. Each of these factors plays a crucial role in determining a protocol’s applicability to verifiable ML. This section provides a structured review of these classification dimensions, highlighting key protocols and their suitability for ML applications. The analysis highlighted seven key dimensions characterizing ZKPs, namely: (i) Interactivity, (ii) Guarantees Provided by Modern Protocols, (iii) Setup Requirements, (iv) Representation of Computation, (v) Post-Quantum Security Considerations, (vi) Succinctness Properties, and (vii) Theoretical Performance Comparison. These properties are further explored in the following sections, and a summary of this analysis on the selected protocols is shown in Table 1. # 5.1 Analysis of Interactivity Zero-knowledge protocols can be broadly classified into interactive [10] and non-interactive [47] schemes. This distinction directly affects their practicality, particularly in distributed environments or use cases where proofs must be verified repeatedly by independent parties. Interactive protocols, such as GKR, require a back-and-forth exchange between prover and verifier, where the verifier continuously challenges the prover to validate the computation. While this approach often reduces proof size and prover-side complexity, it requires synchronous communication, limiting scalability in scenarios where proofs are generated once and verified multiple times [48]. 
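A minimal concrete instance of an interactive protocol is the classic Schnorr identification scheme, which runs one commit-challenge-response round: the prover convinces the verifier it knows the discrete logarithm $x$ of a public value $y = g^x \bmod p$ without revealing $x$. The sketch below uses deliberately tiny, insecure parameters chosen for illustration only.

```python
import random

# Schnorr identification: one interactive commit-challenge-response round.
# Toy parameters (insecure, illustrative): g generates a subgroup of prime
# order q modulo p, so exponent arithmetic is done modulo q.
p, q, g = 23, 11, 2    # 2 has multiplicative order 11 modulo 23

x = 7                  # prover's secret witness
y = pow(g, x, p)       # public statement: y = g^x mod p

# Round 1: prover commits to a random nonce.
r = random.randrange(q)
t = pow(g, r, p)       # commitment sent to the verifier

# Round 2: verifier replies with a random challenge.
c = random.randrange(q)

# Round 3: prover responds; s reveals nothing about x without r.
s = (r + c * x) % q

# Verifier's check: g^s == t * y^c (mod p), since g^(r+cx) = g^r * (g^x)^c.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

Each run needs a fresh synchronous exchange, which is exactly the scalability limitation the text describes for proofs that must be verified many times.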
Non-interactive protocols, including SNARKs, STARKs, etc., compress proof generation into a single exchange, where the prover submits a self-contained proof that any verifier can check independently. This is particularly important in decentralized systems and for applications such as verifiable ML inference, where proofs may be published and validated offline. Non-interactivity in many protocols is achieved via the Fiat-Shamir heuristic, which simulates interaction through the use of a hash function acting as a public random oracle [49]. # 5.2 Guarantees Provided by Modern Protocols All protocols analyzed, spanning interactive, non-interactive, and hybrid approaches, provide the core guarantees defining ZKP protocols: completeness, soundness, and zero-knowledge, as defined by Goldreich et al. [50]. Completeness ensures that a prover following the protocol correctly, with a valid witness, always convinces the verifier. This property is consistently upheld across all surveyed protocols, from early interactive designs to modern non-interactive systems. Soundness guarantees that a dishonest prover, lacking a valid witness, can only convince the verifier with negligible probability. The exact assumptions vary: SNARKs such as Plonk rely on elliptic curve hardness [33], while hash-based STARKs provide stronger post-quantum resilience [34]. Protocols built on Halo inherit soundness from KZG polynomial commitments, similarly tied to elliptic curve assumptions [32]. Zero-Knowledge ensures the verifier learns nothing beyond the validity of the claim itself. This is achieved either through blinding techniques in SNARKs [33], or via hash commitments in STARKs [34]. In practice, all protocols achieve strong zero-knowledge properties. A notable point is the frequent use of the Fiat-Shamir heuristic [49] to transform interactive protocols into non-interactive ones, including in Marlin and Spartan. 
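The Fiat-Shamir transform can be sketched on a simple commit-challenge-response protocol of the Schnorr type: the verifier's random challenge is replaced by a hash of the transcript, so the prover can produce a self-contained proof that anyone can check offline. Parameters are again tiny and insecure, for illustration only.

```python
import hashlib
import random

# Fiat-Shamir sketch: derive the challenge by hashing the transcript
# instead of receiving it from a live verifier (toy parameters, insecure).
p, q, g = 23, 11, 2    # 2 has multiplicative order 11 modulo 23
x = 7                  # prover's secret witness
y = pow(g, x, p)       # public statement: y = g^x mod p

def transcript_challenge(*values) -> int:
    """Hash of the transcript plays the role of the verifier's random challenge."""
    digest = hashlib.sha256("|".join(map(str, values)).encode()).hexdigest()
    return int(digest, 16) % q

# The prover builds the whole proof alone, with no interaction.
r = random.randrange(q)
t = pow(g, r, p)                    # commitment
c = transcript_challenge(g, y, t)   # challenge from the hash, not a verifier
s = (r + c * x) % q
proof = (t, s)

# Any verifier can later recompute c and check the proof independently.
t2, s2 = proof
c2 = transcript_challenge(g, y, t2)
assert pow(g, s2, p) == (t2 * pow(y, c2, p)) % p
```

Modeling the hash as a public random oracle is exactly the Random Oracle Model assumption the next paragraph discusses.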
While convenient, this relies on the Random Oracle Model (ROM) [20], weakening formal soundness proofs slightly compared to fully interactive protocols. Despite minor differences in formalism, all protocols offer guarantees strong enough for real-world privacy-preserving applications [51], including ML inference, provided the chosen protocol aligns with the application’s performance and trust requirements. # 5.3 Setup Requirements The setup phase in ZKP systems refers to the preliminary step in which cryptographic parameters are generated before any proving or verification can occur. This phase significantly affects both the security model and the efficiency of the protocol. Broadly, ZKP schemes fall into two categories based on the nature of this setup: those requiring a trusted setup and those supporting a transparent setup [52]. A trusted setup involves the generation of a structured reference string (SRS) by a single party or a group of participants. In general, the security assumption hinges on the complete and irreversible disposal of any secret values created during this setup—commonly referred to as toxic waste [53]. If these secrets are ever compromised or retained, an adversary could forge proofs, thus undermining the system’s integrity. While trusted setups can offer compact proofs and fast verification, they introduce a critical vulnerability rooted in the assumption of honest behavior during the setup ceremony. In contrast, transparent setups eliminate the need for trust by deriving public parameters solely from publicly verifiable sources of randomness. Protocols such as zk-STARKs and systems built on Halo exemplify this approach. These protocols do not rely on any secret input during the setup and are therefore inherently more robust in adversarial settings. Transparent setups are particularly appealing for applications requiring strong auditability and long-term trust guarantees, albeit often at the cost of larger proofs and higher prover overhead. 
Furthermore, setups can be classified based on their scope as either universal or circuit-specific. A universal setup, as employed in systems like Marlin and Sonic, supports any computation up to a predefined size and needs to be executed only once. This greatly enhances reusability and reduces setup overhead across multiple applications. On the other hand, circuit-specific setups—as seen in schemes like Pinocchio—require a fresh setup for each distinct computation. While this increases setup cost, it allows for more fine-tuned optimizations tailored to individual circuits. # 5.4 Representation of Computation Zero-knowledge protocols do not operate directly on high-level programs or models; instead, they require computations to be transformed into formal representations that are compatible with their internal proof systems [51]. These representations play a central role in determining the performance, scalability, and suitability of a protocol for various application domains. The most widely adopted approach is the circuit-based representation, where a computation is expressed as a directed graph: nodes, or gates, represent basic operations such as addition or multiplication, and edges, or wires, carry intermediate values between operations [10]. From a proof system’s perspective, the prover demonstrates knowledge of all wire values—including inputs, outputs, and every intermediate result—and convinces the verifier that these values satisfy the logical constraints imposed by the circuit structure. If any inconsistency is detected, the proof is rejected, ensuring soundness [50]. Among circuit-based approaches, arithmetic circuits are particularly prominent [54]. These circuits represent computations over finite fields using operations like addition and multiplication. 
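The circuit view can be sketched as a list of gates over a small prime field: the prover supplies every wire value, and the verifier accepts only if each gate's constraint holds. The field modulus, circuit, and wire names below are assumptions for illustration; a real proof system checks these constraints cryptographically without seeing the wire values in the clear.

```python
# Sketch of gate-by-gate checking of an arithmetic circuit over F_P.
# Illustrative only: a real system never sees the wire values directly.
P = 101  # small prime field modulus (assumed, for illustration)

# Circuit for out = (a + b) * c: one addition gate, one multiplication gate.
# Each gate: (operation, input wire names, output wire name).
circuit = [
    ("add", ("a", "b"), "u"),
    ("mul", ("u", "c"), "out"),
]

def check_wires(wires: dict) -> bool:
    """Verifier: accept iff every gate constraint holds over F_P."""
    for op, (left, right), out in circuit:
        if op == "add":
            expected = (wires[left] + wires[right]) % P
        else:  # "mul"
            expected = (wires[left] * wires[right]) % P
        if wires[out] != expected:
            return False
    return True

honest = {"a": 3, "b": 4, "c": 5, "u": 7, "out": 35}
assert check_wires(honest)

cheating = dict(honest, out=36)  # inconsistent wire value
assert not check_wires(cheating)
```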
SNARK systems such as Groth16, Plonk, and Marlin operate on a constraint system derived from arithmetic circuits called Rank-1 Constraint Systems (R1CS) [35], which translates each gate and wire relationship into a structured set of equations. While efficient for algebraic tasks, arithmetic circuits struggle with non-arithmetic operations—such as comparisons or conditional logic—which must be rewritten or approximated, often adding complexity to the proving process [55]. In contrast, STARKs employ a fundamentally different representation model based on execution traces [34]. Rather than encoding the computation as a circuit, a STARK captures its dynamic behavior over time. This is done by recording a trace table: a matrix where each row reflects the full state of the computation at a given step, and each column tracks the evolution of a specific variable. This trace is then transformed into an Algebraic Intermediate Representation (AIR [56]), a set of polynomial constraints that must be satisfied for the trace to be considered valid. While this method offers greater flexibility and post-quantum security, it typically results in larger proofs, particularly for simple or low-complexity programs. Ultimately, the choice of computational representation shapes not only the cryptographic properties of a proof system but also its practical feasibility for different types of workloads. As such, selecting the appropriate abstraction—be it arithmetic circuits or execution traces—is a critical step in ZKP design. # 5.5 Post-Quantum Security Considerations The emergence of quantum computing presents a critical challenge to many cryptographic systems, including a significant subset of ZKP protocols [57]. Post-quantum security refers to a protocol’s resistance to adversaries equipped with quantum capabilities — that is, the inability to efficiently break the underlying cryptographic assumptions using quantum algorithms. 
Whether a zero-knowledge protocol is considered post-quantum secure depends entirely on the primitives it employs. In general, protocols built solely on collision-resistant hash functions (CRHFs [58]) are believed to be more resilient in a quantum context, since no quantum algorithm is currently known to break CRHFs faster than brute force. However, it is important to recognize that such protocols are best described as plausibly post-quantum secure, as no definitive proof rules out the possibility of future quantum attacks against hash-based constructions [34]. Among the protocols evaluated, STARKs are explicitly designed with post-quantum considerations in mind [34]. They avoid reliance on number-theoretic assumptions—such as discrete logarithms or elliptic curve pairings—which are known to be vulnerable to quantum attacks like Shor’s algorithm. Instead, STARKs use CRHFs for commitments and integrity checks, making them a compelling choice for applications requiring long-term security and resilience in a post-quantum world. On the other hand, SNARK-based protocols such as Groth16, Plonk, and Marlin rely on cryptographic assumptions rooted in elliptic curve and pairing-based cryptography [35]. These assumptions are susceptible to quantum attacks and therefore cannot be considered post-quantum secure. As such, while these protocols offer strong efficiency and succinctness, they may not be viable for future-proof deployments. Despite the theoretical urgency, post-quantum security is not yet a central requirement in most current ZKP applications. Nevertheless, as interest grows in areas like secure digital identity, archival data protection, and verifiable computing with long-term guarantees, the demand for cryptographic protocols that can withstand quantum adversaries is expected to rise [59]. Anticipating this shift, future-proof ZKP designs may increasingly favor transparent and hash-based constructions to ensure robust security against emerging threats.
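A CRHF-based commitment of the kind such hash-based constructions rely on can be sketched as hash(randomness ‖ message); this is an illustration, not a production scheme:

```python
import hashlib
import os

# Hash-based commitment sketch: commit = H(randomness || message).
# Binding rests on collision resistance of SHA-256; hiding rests on
# keeping the randomness secret until opening.
def commit(message: bytes):
    r = os.urandom(32)
    digest = hashlib.sha256(r + message).hexdigest()
    return digest, r  # publish digest now, reveal (r, message) later

def open_commitment(digest: str, r: bytes, message: bytes) -> bool:
    return hashlib.sha256(r + message).hexdigest() == digest
```

Because no number-theoretic assumption is involved, the only quantum speedup currently known against this primitive is generic (Grover-style) search, which is why such constructions are described as plausibly post-quantum secure.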
# 5.6 Succinctness Properties

Succinctness is a foundational property of many modern zero-knowledge protocols, particularly those intended for use in bandwidth-limited or resource-constrained environments. A protocol is considered succinct if the size of the proof and the time required for its verification scale only polynomially with the size of the input and output, independent of the complexity of the computation being proven [50]. In practice, this means that verification can be performed much faster than re-executing the computation itself, and that the proof remains compact regardless of the underlying workload. All protocols examined exhibit some form of succinctness, though the degree varies significantly. Classical SNARKs are notable for achieving highly compact proofs—often just a few elliptic curve group elements—and constant-time verification [60]. These characteristics make them ideal in scenarios where fast validation and minimal communication overhead are essential. However, their efficiency depends on a trusted setup and cryptographic primitives that are not quantum-resistant. STARKs, by contrast, are designed for transparency and long-term security [34]. They do not require a trusted setup and instead rely on collision-resistant hash functions. While this ensures stronger trust guarantees and potential post-quantum resilience, it leads to considerably larger proofs and longer verification times. This trade-off reflects a shift in priorities, favoring auditability and future-proofing over minimal proof size. Protocols based on the GKR framework demonstrate excellent succinctness in individual rounds of interaction, with small messages and lightweight checks [44]. However, as the number of rounds grows with the depth of the computation, the overall communication and verification costs can accumulate significantly.
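The appeal of succinct verification is easy to see in miniature with a Merkle inclusion proof, the hash-based commitment underlying STARK-style systems; a generic sketch assuming a power-of-two leaf count:

```python
import hashlib

# Merkle inclusion proof: commit to 2**k leaves with one root hash;
# membership of any leaf is then checkable with only k sibling hashes.
def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [_h(x) for x in leaves]
    while len(level) > 1:
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, idx):
    level = [_h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))  # (sibling hash, sibling on left?)
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify_inclusion(root, leaf, proof):
    h = _h(leaf)
    for sibling, sibling_is_left in proof:
        h = _h(sibling + h) if sibling_is_left else _h(h + sibling)
    return h == root
```

For $2^{20}$ leaves the proof is only 20 hashes: the verifier touches logarithmically little data compared to the committed set, which is the core succinctness idea.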
As a result, while GKR-based approaches are efficient in shallow computations, they may become impractical for deeply nested or complex workloads. Succinctness, especially in terms of low verification cost, remains a highly desirable property in zero-knowledge systems. It directly impacts the scalability and deployability of these protocols, making them suitable for environments where efficient validation is crucial.

# 5.7 Theoretical Performance Comparison

Zero-knowledge protocols can be broadly evaluated using three core metrics: prover time, verifier time, and proof size [61]. These theoretical performance estimates, typically expressed in asymptotic terms, offer a first-order approximation of a protocol’s computational efficiency and scalability, independent of implementation details or hardware. Table 2 summarizes these asymptotic characteristics for the protocols under consideration. It highlights key distinctions in how each construction handles the burden of proof generation and verification, as well as the cost of communication through proof size.

TABLE 2. Theoretical performance of selected zero-knowledge protocols (prover time, verifier time, and proof size).

Among the protocols analyzed, Groth16 is notable for achieving optimal succinctness: it offers constant-size proofs and constant-time verification, making it highly attractive where bandwidth and verifier efficiency are critical. This efficiency, however, comes at the cost of requiring a trusted setup and reliance on elliptic curve pairings [41]. STARKs, by contrast, avoid any trusted setup and rely solely on collision-resistant hash functions. These choices yield strong transparency and post-quantum security, but result in significantly larger proofs and higher verifier complexity—tradeoffs that are intrinsic to their construction [39]. Other protocols fall along different points in this design space.
For instance, systems based on the GKR framework can offer excellent prover efficiency and low communication cost per round, but incur cumulative overhead as the number of rounds grows with the computation’s depth [44]. Meanwhile, no known protocol achieves prover time better than $\mathcal{O}(n \log n)$, which reflects the additional work required to generate a proof beyond merely executing the underlying computation. While these theoretical estimates provide useful insights into protocol behavior and scalability, they are not sufficient for drawing conclusions about practical performance. Real-world considerations such as preprocessing costs, memory usage, and parallelization capabilities often play an equally important role. While these aspects are highly relevant to understanding practical performance, they fall outside the scope of this work and should be the focus of future studies, which must include empirical benchmarks and implementation-level evaluations to assess real-world efficiency and scalability.

# 5.8 Discussion on ZKP Protocols: Suitability for ML

The application of ZKPs to ML must extend beyond inference alone to training verification, model certification, and integrity assurance across the AI lifecycle. These tasks impose stringent demands on the underlying proof systems, particularly in terms of the guarantees highlighted in Section 5.2, and compatibility with the structured operations typical of neural networks. Among the protocol families surveyed, SNARKs and GKR have demonstrated the most practical applicability to ML tasks. SNARKs, such as Groth16 and Plonk, support arithmetic circuits and the Rank-1 Constraint System (R1CS) format, which aligns well with matrix-based operations in neural networks [33]. Their succinct verification—typically constant-time and constant-size proofs—makes them suitable for low-power or embedded verifiers.
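The value of cheap verification for matrix-heavy workloads can be illustrated with Freivalds' classic probabilistic check (a generic textbook technique, not any surveyed protocol; the toy modulus is our choice):

```python
import random

# Freivalds-style check of a matrix product: rather than recomputing
# A @ B (cubic time), test A(Bx) == Cx for random vectors x
# (quadratic time per round).
P = 2**31 - 1

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) % P for row in M]

def freivalds(A, B, C, rounds=20):
    n = len(C[0])
    for _ in range(rounds):
        x = [random.randrange(P) for _ in range(n)]
        if mat_vec(A, mat_vec(B, x)) != mat_vec(C, x):
            return False  # C is definitely not the product
    return True           # correct except with negligible probability
```

The same asymmetry — verifying a linear-algebra claim far more cheaply than recomputing it — is what makes matrix-based ML operations a natural fit for succinct proof systems.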
However, SNARKs face two main limitations: the reliance on trusted setup ceremonies and the inefficiency in handling non-linear operations, which often require approximations or lookup arguments [62]. Recent work has shown that SNARKs can be optimized for ML use through protocol-specific circuit transformations, such as batching matrix operations and reducing the number of constraints [63]. Furthermore, some systems explore compositional proving, whereby different ZKPs are combined to prove disjoint parts of a model, each using the most suitable protocol [64]. While prover time remains a challenge, efforts to bring SNARK performance closer to practical deployment continue to advance. GKR protocols offer a structurally complementary approach, operating directly on layered Boolean circuits, which naturally reflect the feedforward architecture of neural networks [44]. GKR’s interactive model leads to reduced prover complexity, but requires multiple communication rounds, which can be a limiting factor in asynchronous or decentralized environments. Nonetheless, its low setup requirements and scalable verifier overhead make it well-suited to scenarios where interaction is acceptable or can be transformed into a non-interactive form using the Fiat-Shamir heuristic [49].

Fig. 2. Core properties of ZKP protocols in the context of ML tasks. Each property—ranging from non-interactivity to post-quantum security—reflects emerging trends and practical considerations for deploying ZKPs in real-world ML applications.

STARKs present a compelling alternative due to their transparent setup and post-quantum security. Unlike R1CS-based systems, STARKs use execution traces and encode computation through an AIR [34]. This enables a broader range of operations but results in significantly larger proofs and longer verification times.
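The Fiat-Shamir heuristic mentioned above can be sketched in a few lines: the verifier's random challenge is replaced by a hash of the public transcript so far (the modulus and transcript format below are our own toy choices):

```python
import hashlib

# Fiat-Shamir sketch: derive a "random" challenge deterministically
# from the transcript, removing the need for verifier interaction.
P = 2**61 - 1

def fs_challenge(transcript: bytes) -> int:
    return int.from_bytes(hashlib.sha256(transcript).digest(), "big") % P

# Prover and verifier recompute the identical challenge from the same
# public transcript, so no message exchange is required.
t = b"commitment:abc|statement:xyz"
```

Any party holding the public transcript derives the same challenge, which is why a single proof can be checked by many verifiers without re-running the protocol.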
Despite these drawbacks, the trend toward quantum-resilient protocols and trust-minimized systems has elevated interest in STARKs for future-proof ZKML deployments. Here, we outline five key characteristics of a ZKP protocol in the context of ML tasks, highlighting the essential features that enable secure and efficient integration. These properties are outlined in Figure 2.

Non-Interactivity: While early systems often used interactive protocols, recent trends clearly favor non-interactive designs [47]. This shift allows a prover to generate a single proof that can be verified by multiple parties without re-execution, significantly reducing overhead in multi-verifier or asynchronous contexts. Many post-2015 protocols adopt the Fiat-Shamir heuristic to transform interactive constructions into non-interactive equivalents [49].

Transparent Setup: As the field matures, transparent setup has emerged as a highly desirable property [52]. Protocols that eliminate trusted setup reduce attack vectors and regulatory friction—particularly relevant in medical and financial applications [65]–[67]. STARKs and certain variants of Spartan exemplify this direction, using public randomness and hash-based commitments instead of structured reference strings [34], [37].

Standard Representations: Most protocols currently rely on circuit-based representations, such as arithmetic circuits or Boolean circuits. R1CS [68] has become a widely adopted standard, particularly within SNARK ecosystems, but it is not universally compatible. STARKs, for instance, use execution traces and AIR, introducing interoperability challenges [34]. Having standard and flexible representations is crucial for enabling broader toolchain compatibility, developer accessibility, and seamless integration of ML models into various proof systems.

Succinctness: Succinctness—both in terms of proof size and verifier time—is a near-universal property across modern ZKP systems.
This is particularly critical in ZKML, where verifiers may run on constrained hardware, such as mobile devices or edge platforms [69]. Protocols like Groth16 offer constant-time verification and minimal proof sizes, making them well-suited for scenarios where communication and computational resources are limited [41].

Post-Quantum Security: Although not yet a baseline requirement in all applications, there is growing awareness of the need for post-quantum secure ZKPs. Protocols such as STARKs, which rely on collision-resistant hash functions rather than elliptic curves or pairings, are well-positioned to address future cryptographic threats [40], [59]. As the need for quantum-resistant infrastructure becomes more pressing, support for this property may become critical.

Despite these promising trends, several challenges remain. The most significant is performance, which, under all current constructions, remains bounded below by $\mathcal{O}(n \log n)$ (see Table 2). Furthermore, in practice, ZKP implementations often suffer from significant constant overheads introduced by compiler inefficiencies, memory consumption, and limited backend parallelism [70], [71].

# 6 ZKP-ENHANCED ML: A LITERATURE REVIEW

This section presents a systematic review of the existing research landscape on ZKP-Enhanced ML, also known as Zero-Knowledge Machine Learning (ZKML), identifying key approaches and methodologies employed to construct ZKPs for ML applications. The analysis focuses on how different works address efficiency bottlenecks, optimize proof generation, and manage trade-offs between proof succinctness and computational overhead. By examining the evolution of these methods in chronological order, this review highlights the current state of the art, revealing emerging patterns and the convergence of the research domain toward a unified ZKMLOps framework for Trustworthy ML development.
# 6.1 Overview of Existing Research

The solutions presented in existing research address several ML-related topics, which can be broadly grouped into two main types of contributions: Federated Learning (FL) and ML as a Service (MLaaS). We identified 26 papers focusing on FL (based on the definition by Bonawitz et al. [72]) and 30 papers on MLaaS (based on the definition by Hesamifard et al. [73]). The 26 papers addressing FL primarily study problems related to the privacy and confidentiality of user data, the integrity of aggregation processes, and local updates to prevent poisoning attacks. Among these, 16 papers adopt techniques of verifiable computing, such as homomorphic encryption (e.g., [74], [75]), differential privacy (e.g., [76]), or chain mechanisms (e.g., [77]). The remaining 10 FL papers employ ZKP techniques. As further exploration of these FL studies is planned for future work, they are not analyzed in detail here. The list of these papers can still be found in the replication package mentioned in Section 4. With respect to the 30 papers addressing MLaaS, on which we focused our analysis, the goals typically revolve around guaranteeing: (i) integrity of the computation, (ii) privacy and confidentiality, and (iii) fairness between parties. Of these, 13 papers apply techniques such as homomorphic encryption (e.g., [78]–[80]), randomized algorithms (e.g., [81]–[83]), or blockchains (e.g., [84]–[86]). Our analysis focuses on the remaining 17 MLaaS papers that employ ZKP techniques or provide new ZKP implementations for ML applications: [87]–[103]. These contributions will be further discussed in the following section.

# 6.2 Analysis of the ZKML Approaches

This section provides a concise summary of the approaches identified in the literature. This comprehensive analysis is essential, as the proposed approaches address distinct aspects and propose varying solutions to the challenges they seek to overcome.
Furthermore, these challenges exhibit significant variability. Zhang et al. [102] initiated the exploration of ZKPs in the context of ML tasks, with a focus on verifying both predictions and model accuracy. They proposed an efficient scheme tailored to zero-knowledge decision trees. Specifically, their contributions include: (i) the design of an efficient protocol for ZKPs of decision tree predictions; (ii) the extension of this protocol to support accuracy verification of decision trees in zero knowledge, incorporating task-specific optimizations; and (iii) the implementation and empirical evaluation of the proposed protocol. The underlying proof system utilized is Aurora [104]. We further categorized this work under Inference Verification and Online Metrics Verification. Liu et al. [105] propose an efficient ZKP scheme for CNN predictions and accuracy that scales to large CNN models, enabling the computation of such proofs without the excessive overhead introduced by general-purpose ZKP schemes that work for any computations modeled as arithmetic circuits. This improvement rests on a novel sum-check protocol built around the Fast Fourier Transform (FFT). The proposed scheme is then extended with generalizations and an integration with the GKR protocol [44]. We further categorized this work under Inference Verification and Online Metrics Verification. Ju et al. [92] propose a new efficient sum-check protocol for a CNN convolution operation, achieving an asymptotically optimal proving cost for a convolution operation. Their scheme employs a combination of the sum-check protocol [106] and GKR [44]. The protocol is then evaluated, and it is shown how it improves on previous work on verifiable CNNs [105], reaching optimal computation cost and smaller proof size. We further categorized this work under Inference Verification. Ghaffaripour et al.
[91] address the challenge of assuring the integrity of computations performed by MLaaS platforms, by proposing a novel distributed approach which uses specialized composable proof systems at its core. More precisely, the mathematical formulation of the ML task is divided into multiple parts, each of which is handled by a different specialized proof system; these proof systems are then combined with the commit-and-prove methodology to guarantee correctness as a whole. This methodology is based on the implementation of LegoSNARK [64], a toolbox for commit-and-prove zkSNARKs (CP-SNARKs). The solution is evaluated on verifying the integrity of a classification task on a Support Vector Machine. We further categorized this work under Inference Verification. Zhao et al. [103] propose VeriML, an MLaaS framework that provides tunable probabilistic assurance on service correctness as well as service fee accounting fairness. To achieve this, VeriML utilizes a novel CP-SNARK protocol on randomly selected iterations during the ML training phase. Moreover, in doing so, it utilizes multiple circuit-friendly optimizations for the verification of expensive operations such as matrix multiplication and non-linear functions in ML algorithms. The authors empirically validate the efficiency of the proposed solutions on several ML models, namely linear regression, logistic regression, neural network, support vector machines, K-Means, and decision tree. We further categorized this work under Training and Offline Metrics Verification and Inference Verification. Feng et al. [63] present ZEN, the first attempt in the literature to provide an optimizing compiler that generates efficient verifiable, zero-knowledge neural network accuracy (ZEN$_{acc}$) and inference (ZEN$_{infer}$) schemes. The first is used to verify that a committed neural network model achieves a claimed accuracy on a test dataset without revealing the model itself.
The latter, instead, is used to verify that the inference result from the private model on a given input is correct, without revealing the model or the input. Since the direct application of pure zkSNARKs for these tasks requires prohibitive computational costs, the authors first incorporate a new neural network quantization algorithm with two R1CS-friendly optimizations, which allows the model to be expressed in zkSNARKs with fewer constraints and minimal accuracy loss; second, ZEN introduces a SIMD-style optimization, namely stranded encoding, that can encode multiple 8-bit integers in large finite field elements without overwhelming extraction cost. We further classified this work under Offline Metrics Verification and Inference Verification. Garg et al. [107] propose a novel method for verifying floating-point computations that guarantees approximate correctness w.r.t. a relative error bound. The standard approach to handling floating-point computations requires conversion to binary circuits, following the IEEE-754 floating-point standard. This approach incurs a $\mathrm{poly}(w)$ overhead in prover efficiency for computations with $w$-bit precision, resulting in very high prover runtimes, which is still one of the main issues and bottlenecks in the design of succinct arguments. The proposed solution consists of a compiler optimization that incurs only a $\log(w)$ overhead in the prover’s running time. Although this work does not provide a proving scheme tailored specifically for ML tasks, it paves the way for further research in ML and scientific computing by providing an efficient way of proving essentially any ML-pipeline phase that involves floating-point computations. Toreini et al. [98] propose FaaS, an auditing framework that emphasizes trustworthy AI, particularly group fairness.
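One standard way to quantify the group fairness FaaS targets is the demographic parity gap (our illustrative metric choice; the predictions below are synthetic, not data from the paper):

```python
# Group-fairness sketch: the demographic parity gap is the difference
# in positive-classification rates between two demographic groups.
def positive_rate(preds):
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_a, preds_b):
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

group_a = [1, 0, 1, 1]  # 75% positive classifications
group_b = [1, 0, 0, 1]  # 50% positive classifications
gap = demographic_parity_gap(group_a, group_b)  # 0.25
```

A gap near zero means both groups receive positive classifications at similar rates; an auditing framework proves such a statistic in zero knowledge rather than computing it in the clear.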
Group fairness refers to the property that the demographics of individuals receiving positive (or negative) classifications are consistent with the demographics of the entire population [108]. In other words, an ML model is considered fair (in the context of group fairness) if it treats different groups equally [109]. In particular, FaaS is a privacy-preserving, end-to-end verifiable architecture to collectively audit the algorithmic fairness of ML systems. FaaS is model-agnostic (independent of the ML model) and takes a holistic approach towards auditing group fairness metrics. More precisely, the authors propose an auditing approach based on a 1-out-of-n interactive ZKP technique, known as CDS (Cramer, Damgård, and Schoenmakers) [110], [111]. Although promising, the solution is based on the strong assumption that the ML system presents the data and predictions honestly. We further classified the work under Online Metrics Verification. Feng et al. [90] present ZENO (ZEro-knowledge Neural network Optimizer), a type-based optimization framework designed to enable efficient neural network inference verification. In conventional zkSNARK systems [63], arbitrary arithmetic functions are compiled into low-level arithmetic circuits, thereby discarding high-level neural network semantics such as tensor structure and privacy guarantees, which become difficult to reconstruct. The authors address this limitation as their first contribution by proposing a novel language construct that preserves high-level semantics throughout zkSNARK proof generation. Their second contribution introduces an optimized circuit generation strategy that leverages this preserved semantic information to reduce both computational complexity and the total number of operations. The third contribution consists of a neural network-centric system-level optimization that further enhances the performance of zkSNARKs when applied to neural network inference tasks.
The framework is implemented atop general-purpose zkSNARK methodologies and benchmarked against existing tools following a similar design philosophy, including Arkworks [112], Bellman [113], and Ginger [114]. We categorize this work under Inference Verification. Chen et al. [88] introduce ZKML, a framework designed to generate zkSNARKs [34] for realistic and complex ML models. This work specifically targets the halo2 proving system [115], which incorporates the Plonkish randomized AIR (Arithmetic Intermediate Representation) with preprocessing [116]. The framework represents a significant advancement, enabling the computation of zkSNARKs for a diverse set of models with realistic scales and structures for the first time. The authors demonstrate the capabilities of ZKML by applying it to several representative models, including a distilled version of GPT-2 (81.3M parameters), a diffusion model (19.4M parameters), Twitter’s recommender system (48.1M parameters), DLRM (764.3K parameters), MobileNet (3.5M parameters), ResNet-18 (280.9K parameters), VGG16 (15.2M parameters), and MNIST (8.1K parameters). This contribution is further categorized under Inference Verification. Sun et al. [97] propose a specialized ZKP framework tailored to Large Language Models (LLMs). Their work introduces two key components: tlookup, a ZKP protocol designed to support universal non-arithmetic operations commonly encountered in deep learning; and zkAttn, a ZKP protocol specifically crafted to verify attention mechanisms in LLMs. The zkAttn protocol is built upon the sum-check protocol [117] and the Hyrax protocol [118], ensuring efficient and scalable proof generation for the attention layer. The proposed framework is evaluated on prominent LLM architectures, including OPT and LLaMa-2. This contribution is further categorized under Inference Verification. Sun et al. [96] present zkDL, an efficient ZKP framework for deep learning training.
To enhance performance, the authors introduce zkReLU, a specialized ZKP protocol optimized for the exact computation of the ReLU activation function and its backpropagation. Furthermore, the authors propose FAC4DNN, a modeling scheme that captures the training process of deep neural networks using arithmetic circuits grounded in the GKR protocol [44]. The framework is empirically evaluated on an 8-layer neural network comprising over 10 million parameters. This contribution is categorized under Training and Offline Metrics Verification. Wu et al. [101] present a confidential and verifiable delegation scheme for ML inference in untrusted cloud environments. Their work focuses on enabling both privacy and integrity by combining secure multiparty computation with ZKPs. The core of their approach uses interactive proofs, specifically, the GKR [44] protocol enhanced with polynomial commitments, to generate efficient, low-overhead proofs, even when most of the participating servers are potentially malicious. The protocol is optimized for arithmetic circuits and includes a custom design for matrix multiplication that significantly reduces proof generation time. Experimental results on neural networks, including a 3-layer fully connected model and LeNet, show large performance gains compared to prior work. We classify this contribution under Inference Verification. Lee et al. [93] introduce vCNN, a verifiable convolutional neural network framework that addresses the inefficiency of zk-SNARK-based inference verification for CNNs. Their key innovation lies in optimizing the representation of convolutional operations, which dominate CNN computations, by proposing a novel QPP-based formulation that reduces proving complexity from $O(ln)$ to $O(l+n)$.
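The intuition behind such complexity gains is that convolution is polynomial multiplication, which a verifier can spot-check at a single random field point; the following is a generic Schwartz-Zippel-style sketch, not vCNN's actual QPP construction:

```python
import random

# A convolution c = a * b is exactly the coefficient vector of the
# polynomial product A(x)*B(x), so checking A(r)*B(r) == C(r) at one
# random field point r certifies the whole convolution.
P = 2**31 - 1

def convolve(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            c[i + j] = (c[i + j] + x * y) % P
    return c

def poly_eval(coeffs, r):
    acc = 0
    for co in reversed(coeffs):  # Horner evaluation
        acc = (acc * r + co) % P
    return acc

def check_convolution(a, b, c, r):
    # O(l + n) field operations versus O(l * n) to recompute c
    return poly_eval(a, r) * poly_eval(b, r) % P == poly_eval(c, r)

a, b = [1, 2, 3], [4, 5]
c = convolve(a, b)          # [4, 13, 22, 15]
r = random.randrange(1, P)
```

By Schwartz-Zippel, a wrong claimed convolution passes the check only if $r$ happens to be a root of the (nonzero) difference polynomial, which occurs with probability at most its degree over $P$.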
To handle other network components such as ReLU and pooling, which are not efficiently supported by QPP, they combine QPP and QAP circuits and use CP- and cc-SNARKs [64] to link them, enabling efficient end-to-end proof generation. Their model supports standard CNNs like MNIST, AlexNet, and VGG16, achieving up to $18{,}000\times$ speedups in proof generation time and drastic reductions in CRS size compared to prior zk-SNARK approaches [41], [46]. We classify this work under Inference Verification. Abbaszadeh et al. [87] propose Kaizen, a ZKP of training (zkPoT) system designed for deep neural networks. The goal is to enable a party to prove that a model was correctly trained on a committed dataset using gradient descent, without revealing either the model or the data. Their construction combines an optimized GKR-style proof system [44] for single gradient descent steps with a recursive composition framework to achieve succinctness across multiple iterations. A novel contribution is their aggregatable polynomial commitment scheme tailored for multivariate polynomials, which is essential for scaling recursive proofs efficiently. Kaizen supports large models like VGG-11 and demonstrates a prover time of 15 minutes per iteration, $24\times$ faster and $27\times$ more memory-efficient than generic recursive ZK schemes, with proof size and verifier time independent of iteration count. We classify this work under Training and Offline Metrics Verification. Wang et al. [100] propose ezDPS, a zero-knowledge framework for verifying classical ML inference pipelines in outsourced settings. The pipeline comprises four stages: data denoising using Discrete Wavelet Transform, normalization with Z-Score, feature extraction via Principal Component Analysis, and classification using Support Vector Machines.
Each stage is converted into arithmetic circuits using custom-designed zero-knowledge gadgets for core operations, including square root, exponentiation, max/min, and absolute value. The framework is instantiated over the Spartan CP-ZKP backend [37], supporting efficient Rank-1 Constraint Systems with polynomial commitments. ezDPS introduces a zkPoA (zero-knowledge Proof-of-Accuracy) scheme, allowing the server to prove that a committed model achieves a specified minimum accuracy over public datasets without revealing model parameters. To improve efficiency, the authors leverage techniques like random linear combination for dimensionality reduction and permutation-based maximum value selection. We classify this work under Data and Preprocessing Verification, Inference Verification, and Online Metrics Verification. Waiwitlikhit et al. [99] propose ZKAUDIT, a zero-knowledge audit framework enabling trustless verification of model training and data properties without revealing model weights or training data. The system consists of two main phases: ZKAUDIT-T, which proves that a model was trained via stochastic gradient descent on a committed dataset, and ZKAUDIT-I, which allows auditing arbitrary properties over the hidden data and weights through user-defined functions. The framework leverages ZK-SNARKs over AIRs, using the Halo2 [115] backend with optimizations such as rounded division, variable fixed-point precision, and softmax implementation in finite fields. It supports real-world models like MobileNet v2 and DLRM-style recommenders. The framework supports audits such as censorship detection, copyright verification, and counterfactual analysis. We classify this work under Data and Preprocessing Verification, and Training and Offline Metrics Verification.
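Several of the systems above (e.g., ZKAUDIT) emulate real-valued arithmetic inside finite fields via fixed-point encoding with rounded division; a minimal sketch of that idea, with our own scale, modulus, and helper names:

```python
# Fixed-point arithmetic inside a prime field: reals become scaled
# integers, and multiplication is followed by a rounded rescale.
P = 2**61 - 1
SCALE = 1 << 16  # 16 fractional bits

def encode(x: float) -> int:
    return round(x * SCALE) % P  # negatives wrap to large residues

def _centered(v: int) -> int:
    return v - P if v > P // 2 else v  # map residues back to signed ints

def decode(v: int) -> float:
    return _centered(v) / SCALE

def fx_mul(a: int, b: int) -> int:
    # multiply, then rescale with rounded division (assumes operands
    # are small enough that the product does not wrap modulo P)
    prod = _centered(a) * _centered(b)
    return (prod + SCALE // 2) // SCALE % P
```

Inside a circuit, the rounded division itself must be proven correct (typically via range checks on the remainder), which is exactly the kind of optimization such frameworks engineer carefully.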
# 6.3 Discussion on ZKP-Enhanced ML: An MLOps Lifecycle Overview

This section presents a discussion of the primary findings from the survey on ZKP-Enhanced ML applications, with an emphasis on the MLOps verification lifecycle inspired by the TDSP model [12], introduced in Section 4 and Figure 1. To structure this analysis, we divide the discussion into two phases. In the first phase, we describe the main findings by identifying the specific phase of the MLOps verification lifecycle addressed in each work, the model used, and the protocol employed—this latter aspect being assessed through the ZKP-ML suitability model defined in Section 5.8. The second phase of the analysis highlights a central insight of our investigation: the identification of a convergence trend across the reviewed literature, pointing toward the development of a unified and comprehensive model for MLOps verification in the broader context of Trustworthy AI.

# 6.3.1 MLOps Verification Lifecycle: Phases, Models and Protocols

In our survey and classification of the literature, we identified a diverse range of efforts addressing different stages of the MLOps verification lifecycle. This classification can be seen in Figure 3. Specifically, we observed that two studies explicitly target the phase of Data and Preprocessing Verification, four contributions focus on Training and Offline Metrics Verification, a significantly larger group of twelve papers address Inference Verification, and four works propose solutions for Online Metrics Verification. This distribution of research efforts highlights a substantial emphasis on the inference stage, suggesting that the research community currently prioritizes the integrity and correctness of model predictions during deployment. This trend is perhaps unsurprising, as the inference phase is typically the most security-sensitive and externally exposed component of the ML lifecycle in real-world deployments.
It also presents some of the most significant technical challenges, particularly in the efficient generation and verification of ZKPs. These challenges have made inference the primary focus of recent research, as it represents the most prominent bottleneck in achieving practical, verifiable ML systems. However, this imbalance also reveals notable research gaps. In particular, comparatively limited attention has been paid to the earlier stages of the pipeline, such as data acquisition, preprocessing, and training integrity. These stages are no less important: they are foundational to model correctness, fairness, and generalization, and can often be the origin of subtle but critical vulnerabilities or data misuse. The classification of the surveyed works across the four verification phases is as follows:

| Data and Preprocessing Verification | Training and Offline Metrics Verification | Inference Verification | Online Metrics Verification |
| --- | --- | --- | --- |
| [99], [100] | [96], [103], [102], [105] | [102], [105], [87], [99], [91], [92], [98], [100], [63], [103], [88], [90] | [97], [101], [93], [100] |

TABLE 3. ML Models Studied in the ZK-Enhanced ML Literature.

Encouragingly, some recent works have started to adopt a more holistic view, proposing solutions that span multiple verification phases or that attempt to encompass the entire ML lifecycle within ZKP frameworks [119]. This evolving trend toward end-to-end verifiability is a promising direction for future work. In terms of the types of ML models addressed by the reviewed literature, Table 3 provides a summary of the distribution across model classes. A clear trend emerges in favor of complex deep learning models, particularly Neural Networks and Convolutional Neural Networks, which have become dominant in both academic research and real-world applications due to their high expressive power and state-of-the-art performance across many domains.
This focus aligns with the technical challenges posed by these models, such as large parameter counts, non-linear activations, and costly inference operations, which make their verification particularly demanding and thus an attractive target for ZKP-based approaches. Nevertheless, it is worth noting that several contributions also address traditional ML models, including Decision Trees, Support Vector Machines, Linear and Logistic Regression, and Clustering algorithms like K-Means. These classical models remain widely used in industry due to their interpretability, efficiency, and performance in low-data regimes. The presence of works tackling these models demonstrates a healthy diversity in research, and it is especially encouraging as these simpler models can serve as testbeds for novel ZKP constructions or optimizations that may later be scaled to more complex architectures. Turning to the analysis of ZKP protocol suitability, we evaluated the extent to which the underlying cryptographic protocols used in each work satisfy the key properties required for practical integration in ML workflows, as described in Section 5.8. Figure 4 summarizes the degree to which current works meet these criteria across the defined MLOps phases. None of the surveyed phases exhibit full compliance with these properties across all works: in every phase, at least some of the reviewed works rely on cryptographic protocols that do not fully adhere to our defined suitability criteria. These shortcomings highlight that, despite meaningful progress in recent years, substantial effort is still required to design and standardize ZKP systems that are not only theoretically robust but also practically viable for integration into contemporary ML pipelines.

Fig. 3. ZKP-Enhanced ML applications in the MLOps verification lifecycle.

Fig. 4. ZKP Protocols suitability to ML Applications for every MLOps Verification phase.
# 6.3.2 Convergence Towards a Unified MLOps Verification Model

After analyzing how zero-knowledge protocols are applied across the MLOps verification lifecycle, we observed a convergence of efforts toward a unified framework for Trustworthy AI, which we term ZKMLOps. This framework integrates ZKPs into ML pipelines to provide strong cryptographic guarantees of correctness, integrity, and privacy. We categorized existing work into three classes: Enabling Technologies, Applied Verification, and Trustworthy AI. While the majority of contributions fall within the first two categories, only a few works (Toreini et al. [98] and Waiwitlikhit et al. [99]) explicitly address core trustworthy AI principles such as fairness, copyrights, censorship, and counterfactual audits. Nonetheless, this should not be seen as a limitation. The inherent properties of ZKPs are naturally aligned with key trustworthy AI goals, including privacy and data governance, accountability and auditability, and transparency [13], [120]. To illustrate the emerging structure of ZKP-Enhanced ML research, we adapted the visualization style of the Thoughtworks Technology Radar. Figure 5 highlights how current efforts are concentrated on performance and feasibility, yet indicate a clear trajectory toward trustworthy AI principles. ZKMLOps emerges as the technical foundation for building verifiable, privacy-preserving, and auditable ML systems, thereby enabling the practical realization of trustworthy AI at scale.

Fig. 5. Emerging structure of ZKML contributions, showing convergence toward a unified framework that supports verification and trustworthy AI.

# 7 FUTURE WORK

Future research should prioritize the development of efficient ZKP protocols specifically designed for the data preprocessing and training phases of the machine learning lifecycle. These stages remain critically underexplored compared to the more extensively studied domain of inference verification.
Addressing these gaps is essential to enable end-to-end trustworthiness in ML systems. A valuable avenue for future investigation involves the creation of a decision-support tool, potentially structured as a decision tree, that leverages current state-of-the-art contributions. This tool would assist practitioners in selecting, configuring, and deploying appropriate ZKP techniques tailored to specific use-case requirements, thereby operationalizing ZKMLOps frameworks. Moreover, comprehensive practical evaluations in real-world settings should be undertaken to assess trade-offs and identify deployment bottlenecks. Empirical studies across diverse application domains can provide insights into the performance, scalability, and regulatory compliance of ZKP-Enhanced ML workflows. Another promising direction is the integration of ZKPs into federated learning paradigms, where preserving privacy across decentralized and heterogeneous data sources is paramount. Future work should explore how ZKPs can be employed to verify model updates and ensure data integrity without exposing sensitive information or compromising the decentralized architecture of such systems. By addressing these research priorities, the community can pave the way toward more robust, privacy-preserving, and verifiable AI systems that meet the increasing demands of trust and regulation.
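A toy sketch of how such a decision-support tool might work: rank candidate protocol families by how many of the five suitability properties from the survey (non-interactivity, transparent setup, standard representations, succinctness, post-quantum security) they cover for a given use case. The property profiles below are hypothetical placeholders for illustration, not findings of the survey:

```python
# Toy sketch of the decision-support tool proposed above. The property
# profiles assigned to each protocol family are hypothetical placeholders,
# not survey results; a real tool would draw them from the literature.

PROPERTIES = (
    "non_interactive", "transparent_setup",
    "standard_representation", "succinct", "post_quantum",
)

# Hypothetical profiles for two broad protocol families.
CANDIDATES = {
    "SNARK-like": {"non_interactive", "standard_representation", "succinct"},
    "STARK-like": {"non_interactive", "transparent_setup", "succinct",
                   "post_quantum"},
}

def rank(required):
    """Order candidate families by how many required properties they cover."""
    scored = [(name, len(required & props))
              for name, props in CANDIDATES.items()]
    return sorted(scored, key=lambda item: -item[1])

# Example: a deployment that must avoid trusted setup and survive
# quantum adversaries while keeping proofs small.
best, _ = rank({"transparent_setup", "post_quantum", "succinct"})[0]
```

A full tool would refine this into a decision tree over concrete constraints (proof size budgets, prover time, circuit front-end), but the ranking-by-requirements core would look similar.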
As Artificial Intelligence (AI) systems, particularly those based on machine learning (ML), become integral to high-stakes applications, their probabilistic and opaque nature poses significant challenges to traditional verification and validation methods. These challenges are exacerbated in regulated sectors requiring tamper-proof, auditable evidence, as highlighted by apposite legal frameworks, e.g., the EU AI Act. Conversely, Zero-Knowledge Proofs (ZKPs) offer a cryptographic solution that enables provers to demonstrate, through verified computations, adherence to set requirements without revealing sensitive model details or data. Through a systematic survey of ZKP protocols, we identify five key properties (non-interactivity, transparent setup, standard representations, succinctness, and post-quantum security) critical for their application in AI validation and verification pipelines. Subsequently, we perform a follow-up systematic survey analyzing ZKP-enhanced ML applications across an adaptation of the Team Data Science Process (TDSP) model (Data & Preprocessing, Training & Offline Metrics, Inference, and Online Metrics), detailing verification objectives, ML models, and adopted protocols. Our findings indicate that current research on ZKP-Enhanced ML primarily focuses on inference verification, while the data preprocessing and training stages remain underexplored. Most notably, our analysis identifies a significant convergence within the research domain toward the development of a unified Zero-Knowledge Machine Learning Operations (ZKMLOps) framework. This emerging framework leverages ZKPs to provide robust cryptographic guarantees of correctness, integrity, and privacy, thereby promoting enhanced accountability, transparency, and compliance with Trustworthy AI principles.
[ "cs.SE", "cs.CR" ]
# 1. INTRODUCTION

Multimodal data combining structural, functional, and genetic information has been increasingly used in medical imaging research for brain-related predictions and neurological disorder analysis, with advanced models like vision transformers (ViT) predicting diseases and identifying biomarkers [1, 2, 3]. Despite progress, challenges remain in addressing data scarcity, integrating diverse data types, and developing portable foundation models. Schizophrenia, linked to structural, functional, and genomic factors, has been predicted using multimodal deep learning, but improving prediction accuracy and model portability continues to be critical for biomarker discovery and broader application to other brain disorders [4, 5]. The latent diffusion model (LDM) [6], a framework based on diffusion models [7], excels in image generation and translation tasks. In unconditional setups, LDM can function as a data augmentation pipeline. While LDM has been widely used in natural image generation [8], its application to 3D medical image generation, particularly brain MRI, remains underexplored [9, 10]. Generating high-quality 3D MRI images requires addressing the inherent complexity and high dimensionality of the data [11], making robust autoencoders crucial for capturing the intricate spatial relationships in 3D brain images. Our current work extends previous multimodal models using structural MRI and functional network connectivity (FNC), offering several key innovations and explorations. First, instead of relying solely on a ViT classification model, we integrate a latent feature fusion module (LFFM), a portable pre-trained feature extraction module with early-stage data fusion. We pre-trained the feature extraction module using the ABCD dataset (N = 11,220), focusing on the 3D sMRI data.
This pre-training significantly enhances the module's ability to represent complex 3D structures in sMRI, allowing for more accurate learning of features critical for neurological disease prediction. This approach also increases the adaptability of the model, allowing it to be effectively applied to different datasets and tasks. In addition, we trained a 3D autoencoder with a KL divergence loss on the same ABCD dataset, addressing the limitations of existing LDM in the 3D MRI domain. This enhancement allows the model to better capture the inherent variability in 3D brain images and improves its overall generalization. Finally, we applied the LDM data augmentation model to the original dataset, which resulted in significant improvements in the accuracy and overall performance of the model, demonstrating the effectiveness of this augmentation technique in improving prediction results.

# 2. RELATED WORKS

Deep learning models for multimodal fusion have seen significant advancements in the field of brain disease research. Qiu et al. [12] introduce a 3D multimodal fusion network named MDL-Net, designed for the early diagnosis of Alzheimer's Disease (AD). MDL-Net effectively integrates 3D multimodal imaging data, including structural MRI and PET, to construct a deep learning model with richer features for AD diagnosis and brain region analysis. Alrawis et al. [13] propose a multimodal approach that integrates EEG and MRI data, leveraging their complementarity to enhance diagnostic accuracy for early Parkinson's disease (PD) diagnosis, and outperforming traditional single-modal and multimodal methods. Zhang et al. [14] introduce PA-Net, a generative adversarial network with a pyramid attention mechanism, to address missing PET data in Alzheimer's disease classification. By generating realistic PET images and integrating MRI gray matter with PET metabolic information, the method reduces network input parameters and improves classification accuracy. Fig. 1.
The overall pipeline of MultiViT2: We designed a multimodal hybrid model that combines a pretrained base model with a vision transformer backbone, effectively classifying structural and functional neuroimaging data while integrating a data augmentation module based on a latent diffusion model.

# 3. METHODS

# 3.1. Latent Diffusion Model with Autoencoder

The latent diffusion model integrates an autoencoder to compress high-dimensional data $\mathbf{x} \in \mathbb{R}^D$ into a lower-dimensional latent representation $\mathbf{z} \in \mathbb{R}^d$, where $d \ll D$. The autoencoder consists of two parts: an encoder $(E)$ that maps the input data to the latent space, and a decoder $(D)$ that reconstructs the data from the latent representation. Specifically, the encoder performs the mapping $E : \mathbb{R}^D \to \mathbb{R}^d$, resulting in $\mathbf{z} = E(\mathbf{x})$, while the decoder maps the latent variables back to the original data space, $\hat{\mathbf{x}} = D(\mathbf{z})$. In the latent space, a diffusion process is applied, where Gaussian noise is progressively added to the latent variables over $T$ timesteps. This forward diffusion is modeled as a Markov chain, defined by: $$ q(\mathbf{z}_t \mid \mathbf{z}_{t-1}) = \mathcal{N}(\mathbf{z}_t ; \sqrt{1 - \beta_t}\, \mathbf{z}_{t-1}, \beta_t \mathbf{I}) , $$ where $\beta_t$ is the variance schedule and $\mathcal{N}$ denotes the normal distribution. The overall forward process over all timesteps can be written as: $$ q(\mathbf{z}_{1:T} \mid \mathbf{z}_0) = \prod_{t=1}^{T} q(\mathbf{z}_t \mid \mathbf{z}_{t-1}) .
$$ After the forward process, the reverse diffusion process is used to recover the original latent variable $\mathbf{z}_0$ from the noisy latent variable $\mathbf{z}_T$. This reverse process is parameterized by neural networks, following: $$ p_\theta(\mathbf{z}_{t-1} \mid \mathbf{z}_t) = \mathcal{N}(\mathbf{z}_{t-1} ; \mu_\theta(\mathbf{z}_t, t), \Sigma_\theta(\mathbf{z}_t, t)) , $$ where $\mu_\theta$ and $\Sigma_\theta$ are learned functions. The reverse diffusion process starts from $\mathbf{z}_T \sim \mathcal{N}(0, \mathbf{I})$ and iteratively denoises the latent variable to reconstruct $\mathbf{z}_0$. Finally, the decoder from the autoencoder is used to transform the denoised latent variable back to the original data space, yielding $\hat{\mathbf{x}} = D(\mathbf{z}_0)$. The training of the autoencoder is critical for the latent diffusion model and involves multiple loss functions. The Reconstruction Loss $(\mathcal{L}_{\mathrm{recon}})$ ensures the accurate reconstruction of the input data: $$ \mathcal{L}_{\mathrm{recon}} = \mathbb{E}_{\mathbf{x}}\left[ \| \mathbf{x} - \hat{\mathbf{x}} \|_2^2 \right] . $$ In the case of variational autoencoders (VAEs), the Kullback-Leibler Divergence Loss $(\mathcal{L}_{\mathrm{KL}})$ regularizes the latent space to align with a prior distribution: $$ \mathcal{L}_{\mathrm{KL}} = \frac{1}{2} \sum_{i=1}^{d} \left( \mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1 \right) , $$ where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the latent variables.

Table 1. Model performances for baselines, ablations and MultiViT2.
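The stepwise forward process above admits a well-known closed form, $q(\mathbf{z}_t \mid \mathbf{z}_0) = \mathcal{N}(\sqrt{\bar\alpha_t}\,\mathbf{z}_0, (1-\bar\alpha_t)\mathbf{I})$ with $\bar\alpha_t = \prod_{s \le t}(1-\beta_s)$, which lets one noise a latent in a single draw. A NumPy sketch, assuming a linear $\beta$ schedule for illustration (the paper does not specify its schedule):

```python
import numpy as np

# Sketch of the forward diffusion process of Section 3.1. A linear beta
# schedule is assumed purely for illustration. Noising z_0 via the
# closed form q(z_t | z_0) = N(sqrt(abar_t) z_0, (1 - abar_t) I) is
# equivalent to applying the Markov chain step by step.

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # variance schedule beta_t
alpha_bar = np.cumprod(1.0 - betas)     # cumulative product \bar{alpha}_t

def q_sample(z0, t):
    """Draw z_t ~ q(z_t | z_0) in one shot via the closed form."""
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps

z0 = rng.standard_normal(16)            # a toy latent vector
z_mid = q_sample(z0, T // 2)            # partially noised latent
z_end = q_sample(z0, T - 1)             # almost pure noise

# By t = T the signal is essentially destroyed: for this schedule the
# remaining signal weight alpha_bar[-1] is on the order of 1e-5.
signal_left = alpha_bar[-1]
```

The reverse process then trains a network to invert these steps, starting from $\mathbf{z}_T \sim \mathcal{N}(0, \mathbf{I})$, as described above.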
The overall training objective combines these loss functions, with $\lambda_{\mathrm{recon}}$ and $\lambda_{\mathrm{KL}}$ controlling their relative contributions: $$ \mathcal{L}_{\mathrm{total}} = \lambda_{\mathrm{recon}} \mathcal{L}_{\mathrm{recon}} + \lambda_{\mathrm{KL}} \mathcal{L}_{\mathrm{KL}} . $$ The latent diffusion model leverages the compression and reconstruction capabilities of the autoencoder, and the diffusion process operates within the compressed latent space, offering a more efficient and structured generative process.

# 3.2. MultiViT2 Architecture

The MultiViT2 architecture is designed with a latent feature fusion module (LFFM) and a ViT classification pipeline, enhanced by late fusion through cross-attention mechanisms.

Latent Feature Fusion Module (LFFM): The architecture begins with a 3D input tensor $\mathbf{X} \in \mathbb{R}^{L \times W \times H}$, which is processed by the LFFM. In this module, the high-dimensional input is transformed into a latent representation using a pretrained feature extraction network. Subsequently, FNC information is integrated via early fusion of the latent representations. Specifically, the sMRI latent tensor $\mathbf{Z}$ is reduced along one spatial dimension to derive the FNC latent tensor $\mathbf{Z}'$. A convolutional layer fuses $\mathbf{Z}'$ with $\mathbf{Z}$, aligning their spatial dimensions and enabling the model to learn meaningful interactions between the two modalities: $$ \mathbf{Z}_{\mathrm{fused}} = \mathrm{ConvFusion}(\mathbf{Z}, \mathbf{Z}') , $$ where $\mathrm{ConvFusion}(\mathbf{Z}, \mathbf{Z}')$ denotes the convolutional operation that produces the fused latent representation $\mathbf{Z}_{\mathrm{fused}}$.

ViT Classification Pipeline: The fused latent representation $\mathbf{Z}_{\mathrm{fused}}$ is tokenized and passed through a series of transformer blocks to capture higher-order features. Similarly, the FNC latent tensor $\mathbf{Z}'$ is processed through transformer blocks to obtain an enhanced representation $\mathbf{Z}_{\mathrm{FNC}}$. Afterward, a cross-attention mechanism is applied to integrate complementary information from both $\mathbf{Z}_{\mathrm{fused}}$ and $\mathbf{Z}_{\mathrm{FNC}}$: $$ \mathbf{Z}_{\mathrm{Final}} = \mathrm{CrossAttention}(\mathrm{T}(\mathbf{Z}_{\mathrm{fused}}), \mathrm{T}(\mathbf{Z}')) , $$ where $\mathrm{T}(\cdot)$ represents the transformer block operations. The final latent representation, $\mathbf{Z}_{\mathrm{Final}}$, is then used for classification: $$ \mathbf{Y} = \mathrm{softmax}(\mathrm{MLP}(\mathbf{Z}_{\mathrm{Final}})) , $$ where $\mathbf{Y}$ represents the predicted class probabilities. This pipeline effectively combines information from multiple modalities, ensuring robust classification performance.

# 4. EXPERIMENTS

# 4.1. Dataset and Pre-processing

We utilized the ABCD dataset for our experiments. First, T1-weighted MRI data was segmented using SPM12 to extract gray matter regions. Next, group ICA was applied to the fMRI data to obtain the FNC matrix. The ABCD dataset, including gray matter and FNC information, was used to train both the autoencoder and the LFFM, allowing the model to effectively learn representations from both structural and functional neuroimaging data. For the downstream task of schizophrenia prediction with MultiViT2, we employed two comprehensive schizophrenia-related datasets. The combined dataset included data from three international studies (fBIRN, MPRC, and COBRE) and several hospitals in China. In total, the dataset consisted of 1,642 participants: 803 healthy controls and 839 individuals diagnosed with schizophrenia. Resting-state fMRI (rsfMRI) data were acquired using 3.0 Tesla scanners across multiple sites, with standard echo-planar imaging (EPI) sequences (TR/TE approximately $2000/30$ ms, voxel sizes ranging from $3 \times 3 \times 3$ mm to $3.75 \times 3.75 \times 4.5$ mm).

# 4.2. Experimental Setup

We primarily conducted comparison and ablation studies to evaluate the performance of our proposed approach. We created two baselines: Baseline 1 used unimodal models based on sMRI and FNC data separately. Baseline 2 used MultiViT1 [3], a basic multimodal model that lacked all of the innovations introduced in the second-generation model. In the comparison experiments, we demonstrated that the latest MultiViT2 architecture outperformed both Baseline 1 and Baseline 2, even when the pretrained model and data augmentation components were removed. However, the ablation studies were more critical to this research. We conducted two ablation experiments to assess the contributions of different components of our model. Ablation 1: We removed the LFFM but kept the LDM-based data augmentation process. Ablation 2: We kept the LFFM and ViT classifier but removed the LDM-based data augmentation component. These ablation experiments allowed us to analyze the individual contributions of the pre-trained module and data augmentation to the overall model performance.

# 4.3. Training, Evaluation and Visualization

To train the baseline models, we employed the AdamW optimizer and a ReduceLROnPlateau scheduler with a learning rate of 3e-4, training for a total of 150 epochs. For MultiViT2 and its ablation models, we similarly used the AdamW optimizer but incorporated a 20-epoch warm-up phase alongside the ReduceLROnPlateau mechanism. The learning rate was maintained at 3e-4, with models also trained for 150 epochs. During the evaluation, we conducted 5-fold cross-validation, recording both accuracy and AUC metrics for each fold to assess model performance.

To visualize the importance maps generated by the attention mechanism on 3D sMRI data, we applied an attention-weighting method. These highlighted regions likely correspond to the model's response to the integration of functional data, revealing the regions of interest (ROIs) in the structural data most strongly associated with schizophrenia. Additionally, we averaged attention weights across all transformer encoder layers and self-attention heads, providing a more comprehensive representation of the attention mechanism's effects throughout the model.

Fig. 2. Saliency map showing key brain regions contributing to schizophrenia classification, including the cerebellum, caudate, precuneus, and superior frontal orbital gyrus, all associated with motor control, cognition, and emotional regulation.

# 5. RESULTS

# 5.1. Basic Experimental Results

The results of our experiments show that the MultiViT2 model achieved superior performance compared to both the baseline and ablation models. As shown in Table 1, MultiViT2 outperformed Baseline 1 and Baseline 2 with improvements in both accuracy and AUC metrics, reaching 0.866 for both measures. In the ablation studies, removal of either the LFFM (Ablation 1) or the data augmentation component (Ablation 2) resulted in slightly reduced performance, indicating that both components contribute significantly to the effectiveness of the model. Ablation 1 had an accuracy of 0.854, while Ablation 2 had an accuracy of 0.853, both lower than MultiViT2. These results confirm that the integration of LFFM and LDM-based data augmentation improves the model's ability to effectively classify multimodal neuroimaging data.

# 5.2. Saliency Map Visualizations

Our structural brain saliency maps highlight the brain regions that contribute most to the model's classification of schizophrenia. According to our analysis, regions such as the cerebellum, caudate, precuneus, and superior frontal orbital gyrus show strong relevance to schizophrenia prediction. These areas are commonly associated with motor control, cognitive functions, self-awareness, and emotional regulation, which are often impaired in people with schizophrenia. For example, the cerebellum is involved in motor coordination and cognitive processing, while the caudate plays a key role in movement regulation and learning. The precuneus is critical for self-awareness and spatial processing, and the superior frontal orbital gyrus is involved in decision-making and social behavior. These findings are consistent with the established neuroscience literature and further support the clinical relevance of our model [15, 16]. The saliency map provides insight into the neurobiological basis of schizophrenia, highlighting the importance of these key brain regions in the pathology of the disorder.
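The layer- and head-averaged attention described in Section 4.3 can be sketched as follows. The shapes and the uniform averaging rule are illustrative assumptions; the paper's exact attention-weighting method is not specified beyond averaging over all encoder layers and heads:

```python
import numpy as np

# Sketch of the saliency computation described in Sections 4.3 and 5.2:
# average self-attention weights over all encoder layers and heads, then
# read off how much attention each input token (image patch) receives.
# Shapes and the uniform averaging rule are illustrative assumptions.

def patch_saliency(attn):
    """attn: (layers, heads, tokens, tokens) attention weights.

    Returns one importance score per token: the column mean of the
    layer- and head-averaged attention map (attention received).
    """
    avg = attn.mean(axis=(0, 1))   # (tokens, tokens); each row sums to 1
    return avg.mean(axis=0)        # attention received per token

# Synthetic attention maps standing in for a trained model's weights.
rng = np.random.default_rng(1)
L, H, N = 12, 8, 64                # layers, heads, patch tokens
logits = rng.standard_normal((L, H, N, N))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

scores = patch_saliency(attn)      # scores sum to 1 across patches
```

Mapping each patch score back to its voxel block in the 3D sMRI volume yields the saliency maps shown in Fig. 2.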
Multimodal medical imaging integrates diverse data types, such as structural and functional neuroimaging, to provide complementary insights that enhance deep learning predictions and improve outcomes. This study focuses on a neuroimaging prediction framework based on both structural and functional neuroimaging data. We propose a next-generation prediction model, **MultiViT2**, which combines a pretrained representative learning base model with a vision transformer backbone for prediction output. Additionally, we developed a data augmentation module based on the latent diffusion model that enriches input data by generating augmented neuroimaging samples, thereby enhancing predictive performance through reduced overfitting and improved generalizability. We show that MultiViT2 significantly outperforms the first-generation model in schizophrenia classification accuracy and demonstrates strong scalability and portability.
[ "eess.IV", "cs.CV" ]
# 1 Introduction

Relevant and up-to-date documentation is useful for software maintenance (Stapleton et al., 2020; Misra et al., 2020; de Souza et al., 2006). To support one important form of documentation, researchers have developed models that generate one-line summaries of functions (Hu et al., 2018a; LeClair et al., 2020; Nguyen et al., 2024a, inter alia). However, evaluating these models is difficult. Expert human evaluations are expensive, slow to collect, and hard to consistently reproduce. Automatic metrics are cheap and consistent, but they have weak-to-moderate correlation with human scores (Roy et al., 2021; Haque et al., 2022; Mastropaolo et al., 2024). In this paper, we introduce a simple baseline: directly querying an LLM to get an overall rating of a generated summary. This approach considers the code when judging the summary, which most current metrics do not. We also propose a reference-free variant, which has not previously been done for this task. Not needing a reference summary enables new uses of these metrics, such as to flag low quality summaries in a code base or as part of the summary generation process. We compare with all of the standard n-gram based metrics, a model-based metric (Mastropaolo et al., 2024), and embedding-based metrics. We evaluate by measuring correlation with two datasets of human judgements (Haque et al., 2022; Roy et al., 2021). In appendices, we also provide results on two datasets that consider specific aspects of summary quality. Our approach is the best at predicting an overall score. For similarity with a reference, there is no significant difference between our approach and alternatives. We do find a risk that our method prefers output if it comes from the same LLM as the metric, and so we recommend using our method alongside an embedding-based metric. While evaluation by querying an LLM has been done in other tasks with natural language outputs, our results differ from work in other areas.
For example, unlike in machine translation, our method remains just as effective without a reference, and it improves over a metric using a supervised model; and unlike in QA, our method does not favour longer (or shorter) summaries. These differences highlight the distinctiveness of code summarisation and, therefore, the value of research in this space. Our work provides novel baselines that are simple and effective, forming a solid foundation for further exploration.

# 2 Related Work

Code Summarisation Evaluation N-gram metrics, such as BLEU, METEOR, and ROUGE-L, were the first approach for evaluation, but have low correlation with human evaluation (Roy et al., 2021). Embedding-based approaches, such as SentenceBERT, improve on n-gram metrics, but still have a weak-to-moderate correlation (Haque et al., 2022; Mastropaolo et al., 2024). One trained metric exists, SIDE, and it improves slightly over embedding methods (Mastropaolo et al., 2024). Despite these findings, research still relies on n-gram metrics for evaluation. Of ten new code summarisation papers in 2024 (Nguyen et al., 2024b; Su and McMillan, 2024; Su et al., 2024; Zhao et al., 2024; Li et al., 2024; Pan et al., 2024; Sun et al., 2024; Ahmed et al., 2024; Cai et al., 2024; Mao et al., 2024), six used only n-gram metrics, three used n-gram metrics and embedding-based metrics, and one only used human evaluation. Human Evaluation Datasets We focus on two datasets that were collected specifically for code summarisation metric evaluation (Roy et al., 2021; Haque et al., 2022). We also draw data from papers that proposed new code summarisation methods and asked people to evaluate specific aspects of quality (Gao et al., 2023; Su et al., 2024). Those results are mentioned in analysis and included in appendices due to space constraints.
LLM-prompting based NLG Evaluation Prompting has been successfully used to evaluate other forms of Natural Language Generation, e.g., for text summarisation and dialogue generation (Liu et al., 2023), and machine translation (Kocmi and Federmann, 2023). We observe some key differences between our results and other NLG work. We achieve equally strong results without a reference, but Qian et al. (2024) and Huang et al. (2024) investigate different prompting techniques and find that the reference summary is very beneficial. We also find that our approach consistently improves over a trained method, while trained models are still the most effective for MT (Anugraha et al., 2024; Freitag et al., 2024), probably because of the larger and higher quality datasets for metric development in MT. There has also been considerable work evaluating the potential biases of LLM evaluators (Wu and Aji, 2023; Zheng et al., 2024; Koo et al., 2024), finding evidence that LLMs tend to evaluate their own outputs more highly and favour longer responses. We investigate this issue in Section 6.1. Reference-Free Metrics We introduce the first reference-free approach for code summarisation evaluation, but there is significant prior work for other tasks (Rei et al., 2021; Scialom et al., 2021). These often have better correlations with human evaluations than equivalent reference-based metrics. However, Deutsch et al. (2022) argue that reference-free metrics are essentially creating their own pseudo-references, and so are constrained by their own generation ability. We agree that reference-free metrics are not a complete substitute, but for code summarisation they have the additional benefit that they could be used to flag low quality summaries within an existing code base.

# 3 Task

Code summarisation is the task of generating a summary of a code snippet. We are proposing new metrics for this task.
The aim of the metric is to output a score that captures the overall quality of the summary, so that it can provide a broad indicator of the model’s performance. These metrics have access to the code, the generated summary, and a human-written reference summary. However, we will also consider a variant of our approach that does not use the reference. We measure the quality of the metric by looking at how well it correlates with human ratings of overall score and similarity. # 4 New Metric: Ask LLM Directly Our metric is simple: ask an LLM to give the summary a rating, just like asking a human. One benefit is that this approach can consider the relevant code as well as the reference summary. In contrast, n-gram and embedding-based metrics only measure the similarity between the generated summary and a reference summary. Our metric can also work without a reference. We include this variant in our results and note that (1) it is useful when high-quality references are not available, and (2) it could be used outside of model evaluation, for example to identify low quality human-written documentation. To develop this metric, we tested different techniques such as chain-of-thought reasoning, role-based prompting and varying the problem description. We also considered question-answering based prompts, where we focused on whether the LLM was able to answer questions about the reference using information from the generated summary. For details, see Appendix E. # 5 Experiments # 5.1 Datasets We use two datasets that were created for metric evaluation. We aim to produce a single score, and so the most relevant data is Roy et al. (2021)’s Overall Score, a direct assessment of the overall quality of the summary. We also consider Haque et al. (2022)’s Similarity, which measures the similarity with the reference, but that does not account for a high quality but different summary. To avoid overfitting, during development we used a subset of the data. 
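The core of the Ask-LLM-Directly metric from Section 4 reduces to a prompt template plus a parser for the model's free-text reply. The sketch below is illustrative only: the prompt wording, the 1-5 scale, and both function names are our assumptions, not the paper's exact prompt (Appendix E of the paper covers the techniques actually tested).

```python
import re
from typing import Optional


def build_prompt(code: str, summary: str, reference: Optional[str] = None) -> str:
    """Assemble an illustrative ask-LLM prompt; omit the reference for the no-ref variant."""
    parts = [
        "Rate the quality of this code summary on a scale of 1-5.",
        f"Code:\n{code}",
        f"Generated summary:\n{summary}",
    ]
    if reference is not None:
        parts.append(f"Reference summary:\n{reference}")
    parts.append("Answer with a single number.")
    return "\n\n".join(parts)


def parse_rating(reply: str, lo: int = 1, hi: int = 5) -> Optional[int]:
    """Pull the first in-range integer out of the model's free-text reply."""
    for tok in re.findall(r"\d+", reply):
        if lo <= int(tok) <= hi:
            return int(tok)
    return None
```

For example, `parse_rating("I would rate this summary a 4 out of 5.")` returns `4`, while a reply with no in-range number yields `None` and can be retried.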
For the final results we used all of the data with 10-fold cross-validation. In analysis, we also consider human evaluations of Adequacy that were collected in the process of evaluating a code summarisation system (Gao et al., 2023). Additional details are in Appendix D.2 and results comparing with specific aspects of quality are in Appendix A. We release a version of all the datasets reformatted to be consistent, and with all of the same information. This was somewhat involved as some datasets did not directly include the code. Fortunately, they did indicate their code and documentation source, and so we could go back to that source and match the summary to find the code. # 5.2 Measuring Correlation As in previous papers which evaluate code summarisation metrics (Roy et al., 2021; Haque et al., 2022; Mastropaolo et al., 2024), we aim to maximise correlation with human evaluation scores. We follow Haque et al. (2022)’s methodology: (1) when there are multiple human scores for a sample, we compare with the mean to reduce the impact of noise from disagreement, and (2) we use Spearman’s rank correlation for each metric because, unlike Pearson’s correlation, it does not assume a normal distribution. We use a permutation test for significance testing, see Appendix B for details. # 5.3 Metrics We consider the most commonly used metrics (BLEU, METEOR and ROUGE-L), the best metrics according to prior work (SIDE and SentenceBERT), two new embeddings (gte-base-en and voyage-code-3), and our own metric (ask-LLM and ask-LLM-no-ref), where LLM is the name of the model that is queried, and no-ref indicates the variant in which no reference summary is provided in the prompt. For further details, see Appendix C. Metrics that are evaluated here for the first time are in italics in Table 1. # 6 Results Table 1 shows correlations with Overall Score and Similarity to the reference summary. Below, we note several key results. 
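The correlation methodology of Section 5.2 can be reproduced with a few lines of code. This is a generic sketch under our own assumptions: a tie-aware Spearman implementation and a permutation test that shuffles the pairing between metric scores and human scores (the paper's exact procedure is in its Appendix B).

```python
import random
from statistics import mean


def rankdata(xs):
    """Assign 1-based ranks, averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman(xs, ys):
    """Spearman's rho: Pearson correlation of the ranks."""
    rx, ry = rankdata(xs), rankdata(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)


def permutation_pvalue(xs, ys, n=2000, seed=0):
    """Two-sided p-value: how often a shuffled pairing matches the observed |rho|."""
    rng = random.Random(seed)
    observed = abs(spearman(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(n):
        rng.shuffle(ys)
        if abs(spearman(xs, ys)) >= observed:
            hits += 1
    return (hits + 1) / (n + 1)  # add-one smoothing avoids p = 0
```

Spearman's use of ranks is what makes it robust to non-normal score distributions: any monotone transformation of a metric's scores leaves rho unchanged.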
Table 1: Spearman’s Correlation with Human Ratings for Overall Score and Similarity N-gram metrics are not as effective. For Overall Score, the trained method (SIDE), the best embedding-based approach (voyage-code-3) and the best ask-LLM approach (ask-claude) outperform the best n-gram metric (BLEU-A). All of these improvements are statistically significant according to a permutation test at below the 0.01 level. For Similarity, we find a different pattern, with SIDE performing worst, and the other three types of metrics in similar ranges. We find no statistically significant difference between the best n-gram based metric (METEOR) and either the best embedding-based metric (voyage-code-3) or the best ask-LLM metric (ask-claude-no-ref). Embedding metrics are comparable to ask-LLM metrics. On Overall Score, the best embedding-based approach (voyage-code-3) and the best ask-LLM approach (ask-claude) are not statistically significantly different. For Similarity they are, with the embeddings being better, but we would expect embeddings to be better suited to that task. Note in particular that a summary may be good even though it isn’t similar to the reference, and so a metric that focuses on similarity will sometimes struggle. There is also the issue that Similarity is only a measure of quality if the reference is high quality. In code summarisation datasets, nearly all reference summaries are stripped from GitHub with limited manual oversight. This introduces many quality issues. Newer embeddings are better. For both Overall Score and Similarity, the newest embedding-based metric, using voyage-code-3, improves on the previous state-of-the-art embedding-based metric SentenceBERT. This is good news, since it indicates that continued progress on embedding methods is likely to continue to provide improvements here. One key difference between these approaches is cost, which will be discussed below. ask-LLM-no-ref is just as effective. 
The performance of the Ask-LLM-Directly style metrics is stable regardless of whether the reference summary is provided, with no statistically significant difference between the two. Different LLMs may perform differently. The choice of model (e.g. OLMo vs Claude) does lead to a significant difference. However, we used Claude when fine-tuning our prompt, making it an unfair comparison. # 6.1 Analysis To understand the strengths and weaknesses of our approach, we conducted several additional experiments. Ask-LLM method can’t easily be adapted to different quality dimensions. Table 2 shows the results of our attempts to get the LLM to focus on specific aspects of quality. We see very little variation, with the scores continuing to mainly reflect Adequacy. Looking at specific examples, we found two issues. First, mentioning unrelated issues, e.g., for conciseness it produced: “The generated summary contains incorrect information and does not accurately summarize the function.”. Second, inconsistency, e.g., for conciseness it produced “...the lack of specificity makes the generated summary less informative ...”. We did not explore this further since our focus is on a single metric that aligns with overall quality. However, note that full results in the appendices show that despite these issues, correlation with specific aspects was better than prior methods. Ask-LLM is Not Sensitive to Length. Many studies suggest that LLM evaluators are biased towards longer outputs (Wu and Aji, 2023; Zheng et al., 2024; Koo et al., 2024). However, for our metric, comparing the scores assigned by different metrics with the number of characters in the generated summary, in most cases we find the correlation is close to zero. For full results, see Appendix G. Model Sensitivity. There is a risk that an LLM will prefer its own output. We considered the relative ranking of each model according to each metric. 
Table 2: Test of Different Quality Dimensions Table 3: Metric Costs Surprisingly, ask-gpt rates the models that use GPT-4 as the worst overall. None of the datasets we had used included Claude-generated summaries, so we generated our own summaries with Claude and evaluated them. While Claude did find some issues within summaries it had generated, in $92.7\%$ of cases it gives its summaries the highest possible rating. For full results, see Appendices F and H. Based on this, we recommend using these metrics in combination with embedding-based methods. Costs. Table 3 shows the cost per summary of each of the metrics. These are API costs for commercial tools and compute costs for the open-source model OLMo-2 (we used an A100). These results show that these approaches are clearly much cheaper than running human evaluations, but still more expensive than metrics which can be run locally, e.g. gte-base-en, Sentence-BERT and n-gram methods.
Code documentation is useful, but writing it is time-consuming. Different techniques for generating code summaries have emerged, but comparing them is difficult because human evaluation is expensive and automatic metrics are unreliable. In this paper, we introduce a simple new baseline in which we ask an LLM to give an overall score to a summary. Unlike n-gram and embedding-based baselines, our approach is able to consider the code when giving a score. This allows us to also make a variant that does not consider the reference summary at all, which could be used for other tasks, e.g., to evaluate the quality of documentation in code bases. We find that our method is as good or better than prior metrics, though we recommend using it in conjunction with embedding-based methods to avoid the risk of LLM-specific bias.
[ "cs.CL", "cs.AI", "cs.SE", "68T50", "I.2.7" ]
1 Introduction
2 Related Work
2.1 Contributions
# 3 Taxonomy
3.1 Formal Definition
3.2 Desiderata
3.3 Development Methodology
3.4 Selected Categories
# 4 Downstream Results
4.1 Experimental Protocol
4.2 Math
4.3 Code
4.4 Medical
4.5 STEM
# 5 Developing Taxonomy
5.1 Measuring Orthogonality, Correctness, & Expressivity
5.2 Teacher Model Selection
# 6 Running at Scale
6.1 Performance Considerations
6.2 Distillation
6.3 EAI-Distill-0.5b Performance
Data plays the most prominent role in how language models acquire skills and knowledge. The lack of massive, well-organized pre-training datasets results in costly and inaccessible data pipelines. We present Essential-Web v1.0, a 24-trillion-token dataset in which every document is annotated with a twelve-category taxonomy covering topic, format, content complexity, and quality. Taxonomy labels are produced by EAI-Distill-0.5b, a fine-tuned 0.5b-parameter model that achieves an annotator agreement within 3% of Qwen2.5-32B-Instruct. With nothing more than SQL-style filters, we obtain competitive web-curated datasets in math (-8.0% relative to SOTA), web code (+14.3%), STEM (+24.5%) and medical (+8.6%). Essential-Web v1.0 is available on HuggingFace: https://huggingface.co/datasets/EssentialAI/essential-web-v1.0
[ "cs.CL", "cs.AI", "cs.LG" ]
# 1 Introduction Conversational Recommendation Systems (CRSs) leverage multi-turn natural language interactions to gradually uncover user interests and subsequently recommend items aligned with their preferences (Jannach et al., 2021; Gao et al., 2021). Powered by the advanced text generation and tool-calling capabilities of Large Language Models (LLMs) (Wang et al., 2024a), LLM-based Conversational Recommendation Agents (CRAs) (Gao et al., 2023; Huang et al., 2023; Fang et al., 2024) are emerging as a mainstream paradigm for delivering accurate, interpretable, and emotionally engaging personalized services. [Figure 1: Tree-based simulation of turn-level preferences suffers from high cost, ineffectiveness, and evaluation noise.] However, the responses generated by current CRAs often appear rigid, lacking proactivity and flexibility. This is mainly because the pretraining objectives of LLMs are predominantly focused on short-sighted next-token prediction (Ouyang et al., 2022). As a result, their ability to sustain multi-turn interactions and provide dynamic guidance is limited, making it difficult to meet human expectations in conversation. To address this challenge, aligning CRAs with human expectations presents a viable solution. Preference optimization has demonstrated success in aligning LLM outputs with user preferences (Schulman et al., 2017; Ouyang et al., 2022; Rafailov et al., 2024). Its core principle involves sampling multiple candidate outputs from the LLM and increasing the probability of those that align with user expectations. However, conversational recommendation is a multi-turn dialogue task, and applying preference optimization to this process presents great challenges. The main difficulty is that user preferences change in each dialogue turn and dynamically evolve as the conversation progresses. 
Most existing Multi-Turn Preference Optimization (MTPO) methods simply treat each turn equally, failing to capture turn-level preference relationships (Ulmer et al., 2024; Sun et al., 2024). Several recent works (Jin et al., 2024; Xie et al., 2024) try to infer turn-level preference relationships through tree-based simulations. As illustrated in Fig. 1, these approaches introduce three inherent challenges: (1) To obtain turn-level preference, it is necessary to sample multiple candidate responses at each turn and simulate the entire conversation to evaluate preferences for intermediate turns, resulting in significant sampling overhead. (2) In multi-turn conversational recommendation tasks, LLMs struggle to generate effective positive outputs through self-sampling. (3) Evaluating preferences for intermediate turns relies on the simulated environment, whose randomness may introduce additional noise into preference relationships, leading to suboptimal performance of the aligned CRA. Overcoming these limitations is essential to aligning CRAs with human expectations. This leads to a critical question: Is there a way to construct high-quality turn-level preference relationships without additional sampling and evaluation? A problem well stated is a problem half solved. The core idea of this paper is to explicitly model how user satisfaction evolves throughout multi-turn dialogues and uncover the underlying causes of dissatisfaction. By identifying and addressing the root causes of low satisfaction, we can naturally construct responses that better align with user expectations. Expectation Confirmation Theory (ECT) (Oliver, 1977, 1980) tells us satisfaction is a subjective feeling that arises from the comparison between an individual’s initial expectations and the perceived actual performance or outcomes. 
When applied to the context of conversational recommendation, this can be understood as: during a dialogue, a user has specific expectations for the system’s response in each turn. Upon receiving the actual response, the user evaluates it by comparing it with their initial expectations, assigning a subjective satisfaction score based on the perceived gap. Motivated by this, we propose Expectation Confirmation Preference Optimization (ECPO), which comprises three key steps: (1) Forward Expectation Confirmation to identify unsatisfactory responses and uncover their root causes; (2) Backward Expectation Derivation to rewrite the unsatisfactory responses based on these causes; (3) Preference Optimization using the original and rewritten responses. Considering the high cost and potential bias associated with real users participating in the Expectation Confirmation (EC) process, we further introduce AILO, an LLM-based agent that simulates real users’ Activities, Interests, Language, and Orientations. During the dialogue, AILO acts as a user, providing diverse and realistic feedback as well as performing the EC process. Our contributions are summarized as follows: • We introduce ECPO, a novel MTPO paradigm leveraging ECT to guide turn-level alignment in dialogues. To the best of our knowledge, this is the first preference optimization method tailored for LLM-based CRAs. • To support ECPO, we introduce an LLM-based user simulator, AILO, which provides diverse and realistic feedback as well as performs the expectation confirmation process. • We conduct extensive experiments on three datasets, demonstrating ECPO’s exceptional performance in enhancing CRA’s interactive capabilities and highlighting its significant advantages over existing MTPO methods in both efficiency and effectiveness. # 2 Method To better align multi-turn CRAs with human expectations, we propose Expectation Confirmation Preference Optimization (ECPO). 
Its core idea is to leverage ECT to explicitly model the evolution of user satisfaction throughout multi-turn dialogues and construct turn-level preference relationships by identifying and addressing the root causes of dissatisfaction. A detailed description of ECPO is provided in Section 2.2. Additionally, we introduce a novel user simulator, AILO, which generates diverse and realistic user feedback while performing expectation confirmation (see Section 2.3). [Figure 2: Overview of ECPO: simulation-guided planning tuning, forward expectation confirmation, backward expectation derivation, and preference optimization.] # 2.1 Preliminary We define the CRA as $\pi$, which leverages LLMs’ planning and tool-calling capabilities to conduct multi-turn dialogues with a user $U$. Through iterative interactions, the agent elicits user preferences, retrieves relevant items from the external database $I = \{ I_1, I_2, \ldots, I_n \}$, and recommends the item that best matches the user’s interests. Formally, at the $t$-th turn $(1 \leq t \leq T)$, $\pi$ performs internal reasoning $cr_t$ and generates a response $p_t$, denoted as $\{ cr_t, p_t \} = \pi(s_t)$, where $s_t$ represents the dialogue state (e.g., dialogue history). 
We follow the setting proposed by iEvalLM (Wang et al., 2023), which assumes each user has a ground-truth item $i ^ { E }$ . The goal of the CRA is to proactively guide users in conversations, providing a highly flexible and coherent user experience while successfully recommending the target item $i ^ { E }$ . Formally, an interaction episode is: $$ H ^ { T } = \big \{ u _ { 0 } , \left( c r _ { 1 } , p _ { 1 } , u _ { 1 } \right) , \dots , \left( c r _ { T } , p _ { T } , u _ { T } \right) \big \} , $$ where $u _ { t }$ represents the user’s utterance at turn $t$ . # 2.2 ECPO In this section, we propose ECPO, an MTPO paradigm based on ECT. As shown in Figure 2, we first obtain the model $\pi _ { \mathrm { s f t } }$ through a Simulation-Guided Planning Tuning phase. Subsequently, ECPO is performed in three steps: Forward Expectation Confirmation, Backward Expectation Derivation, and Preference Optimization. Simulation-Guided Planning Tuning. Existing CRS datasets (Kim et al., 2024) often lack an internal reasoning process, making them unsuitable for CRA’s fine-tuning. To resolve this issue, we construct a new multi-turn conversational recommendation dataset that incorporates internal reasoning. This dataset is generated from dialogues between a GPT-4o mini-based CRA $\pi _ { \mathrm { G P T } }$ and a user simulator $U$ . We filter the trajectories based on whether the recommendation is successful, resulting in the dataset $\mathcal { D } _ { \mathrm { s f t } }$ . Subsequently, we perform supervised fine-tuning (SFT) on the CRA $\pi$ : $$ \mathcal { L } _ { \mathrm { S F T } } = \mathbb { E } _ { ( s _ { t } , c r _ { t } , p _ { t } ) \sim \mathcal { D } _ { \mathrm { s f t } } } \left[ - \log \pi _ { \theta } ( c r _ { t } , p _ { t } | s _ { t } ) \right] $$ Through this process, we obtain the CRA $\pi _ { \mathrm { s f t } }$ . However, SFT struggles to capture turn-level user preferences, making it insufficient to fully meet user expectations. 
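The interaction episode $H^T$ formalised above can be sketched as a simple collection loop. The two callables standing in for $\pi$ and $U$ are our own placeholders; the real agent also performs retrieval and tool calls.

```python
def run_episode(agent, user, u0, max_turns):
    """Collect H^T = {u_0, (cr_1, p_1, u_1), ..., (cr_T, p_T, u_T)}.

    `agent` maps a dialogue state s_t to (cr_t, p_t); `user` maps
    (s_t, p_t) to the next utterance u_t. Both are hypothetical stand-ins.
    """
    history = [u0]
    state = [u0]  # s_t: the running dialogue history
    for _ in range(max_turns):
        cr_t, p_t = agent(state)      # internal reasoning + response
        u_t = user(state, p_t)        # simulated user utterance
        history.append((cr_t, p_t, u_t))
        state = state + [p_t, u_t]
    return history
```

Note that the internal reasoning $cr_t$ is stored in the episode (it is part of $\mathcal{D}_{\mathrm{sft}}$) but is not appended to the dialogue state shown to the user.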
To address this, we introduce ECPO, a low-cost and high-quality MTPO paradigm. For clarity, we omit the internal reasoning cr of the CRA in the subsequent formulations. Forward Expectation Confirmation. Expectation Confirmation Theory tells us an individual’s satisfaction arises from comparing actual performance against prior expectations. When applied to conversational recommendation, the evolution of user satisfaction can be modeled through the Expectation Confirmation (EC) process. In this paper, we adopt an extensible multi-dimensional scoring criterion with a maximum score of 5, consisting of flexibility (0-2 points), coherence (0-2 points), and user guidance ability (0-1 point) (Gao et al., 2021; Alkan et al., 2019). Formally, at the $t$ -th turn, ECPO integrates the user expectation item $i ^ { E }$ and the CRA’s response $p _ { t }$ at this dialogue turn into an instruction prompt $I _ { \mathrm { e c t } }$ . The instruction is designed to explicitly simulate the user’s inner monologue during the conversation: First, a user $U$ evaluates the system’s output against their expectations, assessing whether each dimension meets the corresponding requirement and assigning a sub-score to each aspect. These sub-scores are then aggregated to compute the overall satisfaction score $\boldsymbol { r } _ { t }$ for $p _ { t }$ . We formulate the EC process as follows: $$ \{ { \mathrm { C O N F } } _ { t } , r _ { t } \} = U ( I _ { \mathrm { e c t } } ( i ^ { E } , h _ { t } , p _ { t } ) ) , $$ where $h _ { t }$ is the dialogue history, $\mathbf { C O N F } _ { t }$ is a natural language explanation explicitly detailing why the user feels satisfied or dissatisfied at this turn. We then trace back the internal state $s _ { t }$ at the time of the CRS output $p _ { t }$ , together with the corresponding EC process $\mathrm { C O N F } _ { t }$ , and store it as a tuple $( s _ { t } , p _ { t } , \mathbf { C O N F } _ { t } , r _ { t } )$ for the subsequent phase. 
Backward Expectation Derivation. Once each dialogue turn is assigned a satisfaction score via the EC process, we can identify responses that fail to meet user expectations. Next, we backtrack to the CRA state $s _ { t }$ and leverage $\mathrm { C O N F } _ { t }$ for counterfactual inference on how the CRA should have generated a response to better align with user expectations. Formally, at the $t$ -th turn, ECPO integrates the EC process $\mathrm { C O N F } _ { t }$ and the unsatisfactory response $p _ { t }$ into an instruction prompt $I _ { \mathrm { b e d } }$ , which serves as the input for the Rewriter, an additional LLM introduced to refine unsatisfactory responses during backtracking. The Rewriter employs a slow thinking process, first generating a chain of thought (Wei et al., 2023) and then producing a refined response $\tilde { p } _ { t }$ : $$ \tilde { p } _ { t } = \mathrm { R e w r i t e r } ( I _ { \mathrm { b e d } } ( s _ { t } , p _ { t } , \mathrm { C O N F } _ { t } ) ) , \quad \text { w h e r e } r _ { t } < \lambda $$ Here, $\lambda$ is a hyperparameter that defines the satisfaction threshold. If the user’s satisfaction score $r _ { t }$ falls below $\lambda$ , the response will undergo backtracking and rewriting. Meanwhile, to ensure that rewritten responses do not deviate too far from $\pi _ { \mathrm { s f t } }$ , we require the Rewriter to make only limited modifications to the unsatisfactory response, rather than performing a complete rewrite. After the backward process, we can collect these “original–rewritten” pairs from the training set to form our preference dataset, denoted as $\mathcal { D } _ { \mathrm { p r e } } = \{ ( s _ { t } , p _ { t } , \tilde { p } _ { t } ) \ | \ r _ { t } < \ \lambda \}$ . 
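Under the scoring rubric above, constructing $\mathcal{D}_{\mathrm{pre}}$ is a simple filter over scored turns. This is a minimal sketch assuming the sub-scores and the rewritten responses have already been produced by the simulator and the Rewriter; the dictionary layout and the default $\lambda = 4$ are our assumptions, not values from the paper.

```python
def satisfaction(sub_scores):
    """r_t: sum of dimension sub-scores (flexibility 0-2, coherence 0-2, guidance 0-1)."""
    assert 0 <= sub_scores["flexibility"] <= 2
    assert 0 <= sub_scores["coherence"] <= 2
    assert 0 <= sub_scores["guidance"] <= 1
    return sub_scores["flexibility"] + sub_scores["coherence"] + sub_scores["guidance"]


def build_preference_dataset(turns, lam=4):
    """D_pre = {(s_t, p_t, p~_t) | r_t < lambda}: keep only unsatisfactory turns."""
    d_pre = []
    for turn in turns:
        r_t = satisfaction(turn["sub_scores"])
        if r_t < lam:
            d_pre.append((turn["state"], turn["response"], turn["rewritten"]))
    return d_pre
```

Turns scoring the maximum of 5 never enter the dataset, so preference pairs are concentrated exactly where the EC process found a gap between expectation and response.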
This dataset consists of turn-level preference pairs, where the rewritten responses $\tilde { p } _ { t }$ are statistically more likely to exhibit significant improvements over the original ones. This hypothesis has been empirically validated through our evaluation (cf. Appendix C.2). Preference Optimization. After obtaining the turn-level preference dataset ${ \mathcal { D } } _ { \mathrm { p r e } }$ , we can optimize $\pi _ { s f t }$ through existing preference optimization methods. A typical implementation is Direct Preference Optimization (DPO) (Rafailov et al., 2024): $$ \begin{array} { r l } & { \mathcal { L } _ { \mathrm { D P O } } ( \pi _ { \theta } , \pi _ { \mathrm { s f t } } ) = \mathbb { E } _ { s , \tilde { p } _ { t } , p _ { t } \sim \mathcal { D } _ { \mathrm { p r e } } } \bigg [ - \log \sigma } \\ & { \bigg ( \beta \log \frac { \pi _ { \theta } \left( \tilde { p } _ { t } \mid s _ { t } \right) } { \pi _ { \mathrm { s f t } } \left( \tilde { p } _ { t } \mid s _ { t } \right) } - \beta \log \frac { \pi _ { \theta } \left( p _ { t } \mid s _ { t } \right) } { \pi _ { \mathrm { s f t } } \left( p _ { t } \mid s _ { t } \right) } \bigg ) \bigg ] } \end{array} $$ ECPO is both orthogonal and complementary to existing preference optimization methods. This enables seamless integration with various methods (e.g., KTO (Ethayarajh et al., 2024), SimPO (Meng et al., 2024)) based on specific task requirements and optimization goals. Discussion Existing MTPO methods typically require completing the entire conversation before estimating the reward for each intermediate turn, and all positive samples must be generated through self-sampling. In contrast, ECPO implicitly assigns rewards at each turn through the EC process and provides the underlying reasons for these rewards in natural language. These reasons promote the proactive generation of positive samples for preference optimization instead of self-sampling. 
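The DPO objective above can be checked numerically with scalar log-probabilities standing in for the sequence-level quantities. This is a sketch for intuition, not the training code; $\beta = 0.1$ is an illustrative choice.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def dpo_loss(logp_theta_win, logp_sft_win, logp_theta_lose, logp_sft_lose, beta=0.1):
    """-log sigma(beta * (log-ratio of rewritten p~_t minus log-ratio of original p_t))."""
    margin = (logp_theta_win - logp_sft_win) - (logp_theta_lose - logp_sft_lose)
    return -math.log(sigmoid(beta * margin))
```

When the policy agrees with the reference on both responses the margin is zero and the loss is $\log 2$; the loss falls as the policy raises the rewritten response relative to the original.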
This paradigm not only eliminates additional sampling and evaluation costs but also ensures that preference relationships drive meaningful optimization. In the next section, we introduce AILO, a novel user simulator designed to support the EC process. # 2.3 AILO This paper aims to leverage the EC process to explicitly model how user satisfaction evolves throughout conversational recommendation, thereby guiding CRA to align with user expectations. However, considering the unacceptably high costs and potential biases involved in human participation, we propose a new user simulator, AILO, an LLM-based agent that provides realistic and diverse user feedback. As shown in Figure 3, AILO consists of two components: user persona modeling and policy-based user simulation. Figure 3: Illustration of AILO, showing its persona modeling and policy-based user simulation. The figure also depicts the CRA’s task: interacting with the database, engaging in dialogue, and recommending items to AILO. User Persona Modeling. Existing user simulators typically generate user personas through simple random sampling (Wang et al., 2024b), but this approach often results in unrealistic and less diverse personas. To address this, we propose AILO, a comprehensive user simulator for conversational recommendation. Inspired by the AIO theory (Wells et al., 1971) from consumer psychology, AILO defines user attributes across four dimensions: Activities, Interests, Language, and Orientations, thereby capturing the diverse characteristics that users may exhibit during conversational recommendations. For example, some users prioritize efficiency in recommendations, while others prefer engaging in in-depth discussions on specific topics. We employ GPT-4o (OpenAI et al., 2024) to infer user personas from real recommendation review datasets. This not only ensures the authenticity of personas but also enhances their diversity. To assess the diversity of AILO’s personas, following Jin et al. 
(2024), we randomly sample 100 personas created by our method and those generated using the sampling method in RecAgent (Wang et al., 2024b), then compute the maximum ROUGE-L between each persona and the others. As shown in Figure 4, the ROUGE-L distribution of AILO is significantly lower than RecAgent’s, indicating that AILO produces more diverse user personas. Figure 4: ROUGE-L with the most similar persona. Policy-Based User Simulation. Directly simulating user responses with LLMs may lead to role reversals and uncontrollable behavior (Zhu et al., 2024). Therefore, we redefine the process of user response generation as a planning task executed in three steps: (1) Response Policy Generation: Based on the user’s persona and the CRA’s response $p _ { t }$ , the simulator $U$ generates a response policy $u r _ { t }$ , such as “Asking for Recommendations”. (2) Response Content Generation: Based on the response policy $u r _ { t }$ , the simulator generates the response $u _ { t }$ . (3) Expectation Confirmation Process: $U$ generates the EC process $\mathrm { C O N F } _ { t }$ , computes the satisfaction score $r _ { t }$ , and outputs them in a structured format. Formally, the simulator produces: $$ \{ u r _ { t } , u _ { t } , \mathrm { C O N F } _ { t } , r _ { t } \} = U ( i ^ { E } , h _ { t } , p _ { t } ) $$ Here, $i ^ { E }$ is the target item, and $h _ { t }$ represents the dialogue history. To verify the authenticity of AILO’s simulated dialogue, we recruit annotators to compare 50 sets of dialogue trajectories generated by AILO and iEvalLM (Wang et al., 2023), assessing which one appears more human-like. The experimental results show that AILO outperforms iEvalLM in all cases, achieving a $100\%$ win rate. 
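The persona-diversity check can be reproduced with a plain LCS-based ROUGE-L. This is a minimal sketch assuming whitespace tokenisation and F1 scoring; the paper does not specify its exact ROUGE-L configuration.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]


def rouge_l(candidate, reference):
    """ROUGE-L F1 over whitespace tokens."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)


def max_similarity(personas):
    """For each persona, ROUGE-L against its most similar peer; lower means more diverse."""
    return [max(rouge_l(p, q) for j, q in enumerate(personas) if j != i)
            for i, p in enumerate(personas)]
```

Plotting the distribution of these per-persona maxima is exactly the comparison shown in Figure 4: a distribution shifted toward zero indicates that no persona has a near-duplicate.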
# 3 Experiments To thoroughly evaluate the effectiveness of ECPO in enhancing multi-turn CRAs, we conduct extensive experiments, which are outlined as follows: Table 1: Comparison with existing prompt-based CRAs. The "#Calls" column represents the number of LLM calls required to complete an entire dialogue. $N$ denotes the number of dialogue turns, and $M$ represents the number of times the LLM generates retrieval queries $( M \leq N )$ . SR (Success Rate) and R (Recall Rate) are recommendation metrics, while WR reflects the interactive capabilities. • First, to validate the importance of ECPO alignment for CRAs, we compare existing prompt-based CRAs with those that have undergone ECPO alignment. • Second, we comprehensively compare ECPO with existing MTPO methods to verify its efficiency and effectiveness. • Finally, we thoroughly analyze the effectiveness of different components of ECPO and conduct evaluations of its performance under various experimental settings. # 3.1 Experimental Setup In this section, we briefly introduce the experimental settings. A more detailed elaboration and design motivations are presented in Appendix B. Environments. Traditional CRS evaluation methods struggle to assess dynamic CRA tasks (Afzali et al., 2023). As discussed in Section 2.3, we follow and extend iEvalLM (Wang et al., 2023) by introducing AILO for our evaluations. Our experiments utilize the Amazon-Game, Amazon-Book, and Yelp datasets to construct user personas and generate approximately 3,000 tasks for each dataset. During the training phase, we use 1,000 tasks to construct $\mathcal { D } _ { \mathrm { s f t } }$ and 500 tasks to construct $\mathcal { D } _ { \mathrm { p r e } }$ . Following ReAct (Yao et al., 2023) and MACRS (Fang et al., 2024), we sample 100 tasks from each dataset for testing. Baselines. 
Given the significant gap between traditional CRS and emerging LLM-based CRAs, we focus on comparing our approach with existing prompt-based CRAs (ChatRec (Gao et al., 2023), ReAct (Yao et al., 2023), MACRS (Fang et al., 2024), ActCRS) and MTPO methods (trajectory-level: SFT, KTO (Ethayarajh et al., 2024); turn-level: SDPO (Jin et al., 2024), SKTO). Notably, ActCRS is a straightforward CRA developed by us that simultaneously generates a response strategy and the corresponding response. Owing to its simplicity and effectiveness, we fine-tune ActCRS in our main experiments. Our backbone model is Llama-3.1-8B-Instruct (Grattafiori et al., 2024), and we additionally provide results based on GPT-4o mini (OpenAI et al., 2024) as a reference.

Metrics. We evaluate CRAs across two dimensions: (1) Recommendation Metrics: Success Rate (SR) and Recall Rate (R). (2) Dialogue Metric: Win Rate (WR; Li et al., 2023), which measures interactivity relative to the expert CRA (the GPT-based ActCRS in the main experiments).

# 3.2 Comparison with Existing Prompt-Based CRA Frameworks

Analysis of Existing Prompt-Based CRAs. Table 1 summarizes the main experimental results on the three recommendation datasets. First, we analyze the existing CRAs' results and find that: (1) Stronger backbone models (GPT-4o mini) perform better as CRA framework complexity increases, whereas weaker models (Llama-3.1) struggle to benefit from more complex CRA frameworks. (2) ChatRec and MACRS can generate high-quality recommendations; however, ChatRec lacks interactivity, while MACRS's responses tend to be overly verbose, making conversations feel unnatural. In terms of WR (interactivity), their win rates are significantly lower than the expert CRA's, typically below 0.15. (3) No single prompt-based CRA demonstrates a clear advantage across all datasets and metrics. Moreover, as the number of calls increases, the performance gains gradually diminish.
This observation highlights the growing importance of an alignment method for CRAs.

Figure 5: Comparison of aligned CRAs fine-tuned with different methods in terms of interactivity (flexibility, coherence, and user guidance) against the expert CRA.

Figure 6: Human evaluation results.

Effect of Alignment. We fine-tune the Llama-based ActCRS using SGPT+ECPO and present the performance results in Table 1. After SGPT training, the recommendation metrics (SR and R) reach GPT-level performance, but interactivity remains inferior to the expert CRA. After ECPO training, the win rate significantly exceeds that of the GPT model (WR ranging from 0.56 to 0.63), highlighting the crucial role of ECPO in enhancing the multi-turn conversational user experience.

# 3.3 Comparison with Existing MTPO Methods

In Figure 5, we compare ECPO with two categories of existing multi-turn alignment methods: trajectory-level methods (SFT, KTO) and turn-level preference optimization methods based on tree simulation (SDPO, SKTO). Specifically, we construct the preference dataset $\mathcal{D}_{\mathrm{pre}}$ using each method on 500 simulation tasks. For these tasks, trajectory-level methods require sampling 1,000 trajectories and tree-simulation methods 2,500, whereas ECPO eliminates the need for additional sampling and efficiently utilizes only the 500 base trajectories. Experimental results show that the improvement from trajectory-level methods is limited, as they fail to capture preference relationships at the turn level. Meanwhile, tree-simulation methods, despite capturing these preferences, actually lead to negative gains, likely due to noise interference. This finding highlights the challenges of CRA alignment. In contrast, ECPO, guided by the EC process, achieves the best performance at the lowest cost, significantly outperforming all existing methods.
Additionally, we recruit human annotators to compare the win rates of the ECPO-aligned CRA against the expert CRA. The experimental results, shown in Figure 6, indicate that ECPO demonstrates a significant advantage across all metrics, especially flexibility and user guidance. To further understand how ECPO outperforms existing methods, we provide case studies in Appendix C.3.

# 3.4 Effectiveness of the EC Process

Although we have demonstrated the effectiveness of ECPO in the main experiments, a natural question arises: how does the turn-level EC process influence the performance of ECPO? To investigate this, we manually design rewriting instructions based on the test results of $\pi_{\mathrm{sft}}$, identifying its issues and guiding the Rewriter to revise the responses generated by $\pi_{\mathrm{sft}}$ in order to construct $\mathcal{D}_{\mathrm{pre}}$. This approach, referred to as ECPO w/o EC, replaces each turn's EC process with a unified, human-conducted analysis that guides rewriting.

Table 2: Effectiveness of the EC process.

Table 3: Win rate of Rewritten vs. Original responses across Fidelity & Coherence.

In Table 2, we find that ECPO w/o EC enhances interactivity to some extent but slightly reduces recommendation performance, with overall performance remaining significantly inferior to ECPO. This result underscores the importance of the turn-level EC process during rewriting.

# 3.5 Hyperparameter Analysis

In this section, we investigate the impact of the rewriting threshold $\lambda$, defined as the satisfaction score threshold below which responses are selected for rewriting and training. A higher $\lambda$ leads to more response samples being backtracked and rewritten, resulting in a larger training dataset.
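Concretely, the role of $\lambda$ can be sketched as a simple filter over simulated turns. This is a toy sketch; the tuple layout and the 1-5 satisfaction scale are illustrative assumptions, not the paper's implementation:

```python
def select_for_rewriting(turns, lam):
    # Return the turns to backtrack and rewrite: those whose satisfaction
    # score r_t falls below the rewriting threshold lambda.
    # `turns` is a list of (turn_index, response, satisfaction_score)
    # tuples; a 1-5 score scale is assumed here for illustration.
    return [(i, resp) for i, resp, r in turns if r < lam]

turns = [(0, "Hi! What kinds of games do you enjoy?", 5),
         (1, "Here are 20 shooters you might like: ...", 2),
         (2, "Got it. Do you prefer co-op or single-player?", 4)]
```

Raising `lam` admits more turns into the rewriting set, which matches the observation above that a higher $\lambda$ yields a larger training dataset.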
Figure 7(a) presents the training results for $\lambda$ values $\{1, 2, 3, 4\}$, while Figure 7(b) shows results from uniformly sampled subsets of the $\lambda = 4$ setting with varying sample sizes $\{50, 100, 200, 400, 800, 1600, \mathrm{All}\}$. The blue line represents the overall performance gain, while the pink line represents the performance improvement per individual sample. We observe that, in Figure 7(a), lower $\lambda$ values lead to a more significant gain per individual sample. In contrast, in Figure 7(b), the performance improvement appears more irregular. This phenomenon aligns with intuition: when a sample has a lower satisfaction score, it often indicates critical issues, and addressing those issues yields a more noticeable performance gain.

Figure 7: Hyperparameter analysis of $\lambda$.

# 3.6 Bias Analysis of Rewritten Outputs

In this section, we conduct an empirical analysis of potential biases introduced by the Rewriter, focusing on its impact on information fidelity and conversational coherence in multi-turn dialogues. We evaluate responses across multiple domains using GPT-4o to compare the quality of rewritten versus original outputs along these two dimensions. As shown in Table 3, rewritten responses are generally preferred over the original ones across all domains and evaluation aspects, with particularly notable improvements in coherence. This indicates that the Rewriter not only enhances language quality but also preserves the core semantic content without introducing significant negative bias.
Recent advancements in Large Language Models (LLMs) have significantly propelled the development of Conversational Recommendation Agents (CRAs). However, these agents often generate short-sighted responses that fail to sustain user guidance and meet expectations. Although preference optimization has proven effective in aligning LLMs with user expectations, it remains costly and performs poorly in multi-turn dialogue. To address this challenge, we introduce ECPO, a novel multi-turn preference optimization (MTPO) paradigm that leverages Expectation Confirmation Theory to explicitly model the evolution of user satisfaction throughout multi-turn dialogues, uncovering the underlying causes of dissatisfaction. These causes can then be used to drive targeted optimization of unsatisfactory responses, thereby achieving turn-level preference optimization. ECPO eliminates the significant sampling overhead of existing MTPO methods while ensuring that the optimization process yields meaningful improvements. To support ECPO, we also introduce AILO, an LLM-based user simulator that simulates user feedback and performs expectation confirmation during conversational recommendation. Experimental results show that ECPO significantly enhances CRAs' interaction capabilities, delivering notable improvements in both efficiency and effectiveness over existing MTPO methods.
# 1 Introduction

Computational science communities have proposed numerous learning-based approaches for solving PDE-governed systems for simulation, optimization, or scientific discovery. These methods offer trade-offs across accuracy, applicability, and speed. Physics-informed neural networks (PINNs) [59, 60] can be flexibly applied to forward/inverse predictions or sparse differential measurements, but their optimization often falls into local minima, sacrificing accuracy. Neural operators [40, 42, 46] offer fast approximate simulations but struggle to handle the partial observations common in real-world problems. Recent generative methods [8, 66, 85] accommodate partial observations, at the expense of slow speed and an inability to model dense temporal states. These challenges have limited the real-world applicability of learning-based PDE approaches for past-state reconstruction or future simulation. In this work, we introduce VideoPDE, a unified framework that is accurate, fast, and applicable to a diverse range of scenarios. Intuitively, VideoPDE casts the problem of PDE solving in diverse settings as a generative video inpainting problem, leveraging the power of diffusion models. For example, forward simulation can be viewed as predicting the missing pixels in frames 2 to $T$, given full or partial observations of the initial frame. This generative framework unifies diverse sensor configurations, supports non-deterministic predictions for chaotic systems, and offers a fast yet accurate alternative to classical solvers, all within a single neural network. Our method uses a transformer-based video diffusion model (VDM) [29] architecture that can be conditioned on arbitrary spatiotemporal patterns of partial measurements. Unlike common VDMs that operate in a latent space [3, 23, 25, 84], our method denoises and conditions at the pixel level, which is well suited for scientific applications that require fine-grained accuracy rather than perceptual realism.
While VDMs have been used for video inpainting and prediction in the natural-video domain [30, 44, 74, 82], no prior work, in either scientific or non-scientific domains, has attempted the hierarchical pixel-level modeling and intra-token conditioning that lead to exceptional accuracy and efficiency. Our contributions include: (i) the formulation of PDE solving as generative video inpainting, (ii) architectural innovations that enable efficient pixel-space denoising and conditioning, and (iii) an empirical demonstration of generative PDE solving across diverse settings. Our trained models obtain state-of-the-art results across settings, producing accurate predictions from as little as $1\%$ continuous measurements and reducing error by up to an order of magnitude compared to prior methods.

# 2 Related Work

Neural PDEs. Solving partial differential equations (PDEs) is a fundamental problem in the physical sciences. Traditional numerical methods such as the Finite Element Method [58, 68] and the Boundary Element Method [1, 32] have long been the backbone of PDE solving but are computationally expensive and inflexible for complex systems. Neural approaches offer data-driven alternatives: physics-informed neural networks (PINNs) [60, 59] enforce PDE constraints via the loss function and have been applied to a wide range of PDEs [5, 6, 20, 24, 48, 51, 56, 73, 75]. While PINNs can work with sparse measurements, in practice they often face optimization instability and poor scalability. Neural operators, such as FNO [40], DeepONet [46], and PINO [42], learn mappings between function spaces to avoid expensive optimization and achieve resolution invariance. These models have been extended to various forward [4, 10, 37, 38, 41, 55, 64, 78] and inverse [43, 52] PDE tasks, but remain limited in flexibility for handling arbitrary and sparse input patterns.
Solving PDEs Under Sparse Measurements. Recently, neural methods have gained attention for solving PDEs under sparse measurements, reflecting the challenge of acquiring full spatiotemporal data. DiffusionPDE [31] addresses this by modeling the joint distribution over coefficients and solutions, allowing flexible inputs, but its DPS [12] backbone requires PDE-specific tuning and struggles with dynamic PDEs. Spatially-aware diffusion models [85] use cross-attention to handle partial observations but lack temporal modeling. Temporal PDEs are especially important for modeling nonlinear fluid and gas dynamics [17, 19, 34, 76, 86]. Super-resolution frameworks [21, 22, 36] reconstruct full fields from coarse data. Recent methods [39, 65, 66] combine physics-informed losses with diffusion models or transformers for high-fidelity turbulent flow reconstruction. Despite past successes, existing methods often rely on strong assumptions about PDEs, boundary conditions, or sensor layouts. We propose a unified generative framework that requires no such prior knowledge and generalizes well across forward, inverse, and partial observation problems.

Inpainting Diffusion Models. Diffusion models [27, 67, 70, 72, 80] have emerged as particularly suited for image and video inpainting due to their ability to model complex, high-dimensional distributions effectively. Training-free methods guide the sampling trajectory to satisfy the conditions at inference time through noise inference [50], resampling [47], or latent alignment [11]; conditioning can also be framed as a linear inverse problem [12, 14, 35, 69, 79]. However, these methods often struggle with extremely sparse or ambiguous observations. Another class of methods directly trains a conditional diffusion model, typically modifying the network architecture to inject conditioning information via channel concatenation [62, 63], cross-attention [2, 57, 61], or ControlNet [81].
We adopt channel concatenation in this work since it is simple and effective. These conditioning techniques have been extended to video diffusion models [29] for video inpainting [45, 77, 83].

# 3 Methods

# 3.1 Preliminaries: Diffusion Models and Guided Sampling

Diffusion models learn data distributions by reversing a gradual noising process. Starting from a clean sample $\mathbf{x}_0$ from a data distribution $p(\mathbf{x})$, a forward stochastic process progressively adds noise $\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ to produce $\mathbf{x}_t = \mathbf{x}_0 + \sigma(t)\epsilon$ and hence a family of distributions $p(\mathbf{x}_t; \sigma(t))$, where $\sigma(t)$ denotes the standard deviation of the noise at diffusion time $t$, following the noise schedule of the Elucidating Diffusion Models (EDM) [33] framework we adopt in this work. The goal is to learn the reverse process to recover $\mathbf{x}_0$ from $\mathbf{x}_t$ by training a denoising neural network $D_\theta(\mathbf{x}_t, \sigma(t))$ with loss

$$ \mathcal{L}_{\mathrm{EDM}} = \mathbb{E}_{\mathbf{x}_0 \sim p(\mathbf{x})} \, \mathbb{E}_{\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})} \left[ \left\| D_\theta(\mathbf{x}_t, \sigma(t)) - \mathbf{x}_0 \right\|^2 \right] $$

This gives us an estimate of the score function [71], a vector field pointing toward higher data density,

$$ \nabla_{\mathbf{x}} \log p(\mathbf{x}; \sigma(t)) = (D(\mathbf{x}, \sigma(t)) - \mathbf{x}) / \sigma(t)^2, $$

from which we can apply numerical ODE solvers to iteratively denoise from a complete noise sample $\mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ following

$$ \mathrm{d}\mathbf{x} = -\dot{\sigma}(t)\,\sigma(t)\, \nabla_{\mathbf{x}} \log p(\mathbf{x}; \sigma(t)) \, \mathrm{d}t. $$

Figure 1: VideoPDE pipeline. We cast PDE solving as a video inpainting task. Our Hierarchical Video Diffusion Transformer (HV-DiT) denoises initial noise into a full video, conditioned on pixel-level sparse measurements. Its ability to handle arbitrary input patterns enables flexible application to diverse PDE scenarios, including forward, inverse, and continuous measurement tasks.

Conditional Sampling with Diffusion Models. Diffusion models allow flexible input conditioning during inference sampling, i.e., conditional diffusion methods model a conditional data distribution $p(\mathbf{x}|y; \sigma(t))$ with a conditional score function $\nabla_{\mathbf{x}} \log p(\mathbf{x}|y)$. Conditional diffusion sampling can be roughly divided into methods that require computing gradients during inference and those that do not. Gradient-based guidance [18] approaches often retain an unconditional network architecture but use an auxiliary loss during inference to guide the score toward the conditioning. For example, DPS [13] approximates the conditional score function: $\nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t|y) \approx \nabla_{\mathbf{x}_t} \log p(\mathbf{x}_t) - \zeta \nabla_{\mathbf{x}_t} \mathcal{L}_\phi(\mathbf{x}_t, y)$, where $\mathcal{L}_\phi$ measures the current loss against the conditions $y$. While effective, gradient-based diffusion guidance methods tend to be slow and rely on an intricate balance between the two directions, leading to hyperparameter sensitivity. Gradient-free methods [62, 49, 28] typically train a specific network architecture that takes the conditioning $y$ as input.
That is, the conditional denoising network models:

$$ \nabla_{\mathbf{x}} \log p(\mathbf{x}_t|y; \sigma(t)) \approx (D_\theta(\mathbf{x}_t, y; \sigma(t)) - \mathbf{x}_t) / \sigma(t)^2. $$

When there are significant ambiguities and noise associated with the conditioning $y$, e.g., when $y$ is in the form of text annotations, the network is incentivized to under-commit to the conditioning, motivating the classifier-free guidance (CFG) [28] technique to amplify the conditioning signal. In this work, we adopt the gradient-free network-conditioning strategy without using CFG.

# 3.2 Spatiotemporal PDE Solving as Video Inpainting

We cast the problem of spatiotemporal PDE solving as a video inpainting task, enabling a unified and flexible framework for handling a wide range of prediction scenarios (Figure 2). Like prior data-driven PDE approaches, our goal is to learn a neural network that can infer unknown system states across a family of equations. However, unlike existing methods that typically design separate models for forward, inverse, or partially observed cases, our approach treats all such tasks as instances of conditional video inpainting. In this formulation, PDE solving becomes the task of filling in missing regions of a video representing the evolution of physical states over time and space. For example, forward prediction corresponds to inpainting future frames based on an initial condition; partially observed setups correspond to inpainting from sparse spatiotemporal sensor data. Our proposed architecture, described in detail in Section 3.3, is a transformer-based diffusion model explicitly designed to condition on arbitrary patterns of observed data and generate coherent, accurate completions.

Figure 2: Inverse simulation from partial observation. VideoPDE formulates general PDE solving as a video inpainting problem, where unknown pixels are denoised conditioned on sparse inputs. Here, given a $3\%$ observation at time $T$, VideoPDE accurately recovers the whole trajectory from $T$ back to $1$.

PDE Formulation. While our formulation accommodates both static (time-independent) and dynamic (time-dependent) PDEs, we focus on dynamic systems, e.g., Navier–Stokes:

$$ \begin{array}{rl} f(c, \tau; \mathbf{u}) = 0, & \quad \mathrm{in~} \Omega \times (0, \infty), \\ \mathbf{u}(c, \tau) = \pmb{g}(c, \tau), & \quad \mathrm{on~} \partial\Omega \times (0, \infty), \\ \mathbf{u}(c, \tau) = \pmb{o}(c, \tau), & \quad \mathrm{on~} \mathcal{O} \subset \Omega \times (0, \infty) \end{array} $$

Here, $c$ and $\tau$ denote the spatial and temporal coordinates, respectively, and $\mathbf{u}(c, \tau)$ is the solution field. The boundary condition is given by $\mathbf{u}|_{\partial\Omega \times (0,\infty)} = \pmb{g}$. We aim to recover the full solution $\mathbf{u}_\tau$ at any time $\tau \in [0, T]$ from sparse spatiotemporal observations $\mathcal{O}$, where $\mathbf{u}|_{\mathcal{O}} = \pmb{o}$. We make no assumptions about the structure of the observed locations.

Diffusion-based Video Inpainting. We cast PDE solving as a spatiotemporal inpainting task, where missing regions of the solution field $\mathbf{u}(c, \tau)$ are inferred from sparse observations $\mathcal{O}$. To solve this inpainting problem, we leverage the powerful generative capabilities of diffusion models. Specifically, we train a conditional diffusion model to learn the distribution of physically consistent video-like solution trajectories, while conditioning on arbitrary known subsets of the spatiotemporal domain.
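As a toy illustration of the score relation and probability-flow ODE from Section 3.1, consider 1-D data with $\mathbf{x}_0 \sim \mathcal{N}(0, 1)$, for which the optimal EDM denoiser has a closed form. This is a minimal sketch, not the paper's model: the denoiser, the schedule $\sigma(t) = t$, and all names are illustrative. Euler-integrating the ODE from pure noise pulls samples back to roughly unit variance:

```python
import numpy as np

def denoiser(x, sigma):
    # Closed-form optimal denoiser for 1-D data x0 ~ N(0, 1) under
    # x_t = x0 + sigma * eps:  E[x0 | x_t] = x_t / (1 + sigma^2).
    return x / (1.0 + sigma ** 2)

def score(x, sigma):
    # Score estimate from the denoiser: (D(x, sigma) - x) / sigma^2.
    return (denoiser(x, sigma) - x) / sigma ** 2

def sample(n, T=10.0, steps=1000, seed=0):
    # Euler integration of dx = -sigma'(t) sigma(t) score(x, sigma(t)) dt
    # with sigma(t) = t, from x_T ~ N(0, 1 + T^2) down to t = 0.
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(1.0 + T ** 2), size=n)
    ts = np.linspace(T, 0.0, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        x = x + (t1 - t0) * (-t0 * score(x, t0))
    return x
```

Since the exact reverse map scales samples by $1/\sqrt{1+T^2}$, the integrated samples should end up with standard deviation close to 1, which is easy to check numerically.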
We represent each PDE solution as a video $\mathbf{x} \in \mathbb{R}^{H \times W \times T \times C}$, where $H \times W$ is the spatial grid, $T$ is the number of time steps, and $C$ the number of field channels. The conditioning signal is defined by a binary mask $\mathbf{m} \in \{0, 1\}^{H \times W \times T}$ and corresponding observed values $y = \mathbf{x} \odot \mathbf{m}$. During training, we sample random spatiotemporal masks and supervise the model to reconstruct the full video from these partial views. The model learns the conditional score function:

$$ \nabla_{\mathbf{x}} \log p(\mathbf{x}|y; \sigma(t)) \approx (D_\theta(\mathbf{x}_t, y, \mathbf{m}; \sigma(t)) - \mathbf{x}_t) / \sigma(t)^2, $$

where $D_\theta$ is a transformer-based denoising network conditioned on $y$ and $\mathbf{m}$, and $\mathbf{x}_t$ is a noisy intermediate sample at diffusion time $t$. During inference, we take sparse observations $y$ and $\mathbf{m}$ as inputs, initialize $\mathbf{x}_T$ with pure Gaussian noise, and denoise it using the learned score function. By casting PDE solving as conditional video generation, we unify a broad class of spatiotemporal problems under a generative modeling task. Importantly, our formulation enables conditioning the same model on different observation patterns, e.g., forward and inverse predictions, or interpolation from arbitrary observed subsets. Section 3.3 details the model design and training process.

# 3.3 Hierarchical Video Diffusion Transformer (HV-DiT)

While most recent state-of-the-art diffusion models [61] operate in a learned latent space to reduce computational cost, we design our architecture to perform diffusion directly in pixel space, as shown in Figure 1.
This choice is motivated by our observation that pixel-space diffusion yields significantly more accurate and physically consistent reconstructions, which is particularly important in PDE settings where fine-grained field values matter more than perceptual qualities.

Table 1: Conceptual comparison of PDE-solving methods. Neural operator methods struggle with partial inputs. Only PINN and VideoPDE handle forward, inverse, and continuous measurements flexibly. Generative baselines focus on reconstructing one or two frames (instead of dense temporal frames) and are often not designed for forward prediction, where VideoPDE excels.

To manage the high dimensionality of pixel-space video data, we tokenize each input video $\mathbf{x} \in \mathbb{R}^{H \times W \times T \times C}$ by merging small spatiotemporal neighborhoods, for example $N \times N \times N$ patches, into single tokens. This results in a structured token sequence over which we design an efficient variant of the Video DiT architecture [54], which we refer to as HV-DiT, inspired by the hierarchical image model HDiT [15]. Unlike standard transformers with global self-attention, HV-DiT employs localized attention, restricting each token's receptive field to nearby spatiotemporal neighbors. This reduces computational complexity and allows the model to focus on local PDE dynamics. Our transformer architecture is hierarchical [15, 53]: tokens are progressively downsampled by merging neighboring tokens, creating a multi-scale representation. This downsampling path is paired with an upsampling path with skip connections in the style of a U-Net, enabling both local detail preservation and global context integration. At each layer, we apply spatiotemporal neighborhood attention. At the coarsest resolution (the bottleneck), we use global attention layers to capture long-range spatiotemporal dependencies.
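The patch tokenization described above, combined with the pixel-level conditioning from Section 3.2 ($y = \mathbf{x} \odot \mathbf{m}$), can be sketched in a few lines of array manipulation. This is a minimal numpy sketch with toy sizes; the function names and the noise level are illustrative assumptions:

```python
import numpy as np

def patchify(x, N):
    # Merge N x N x N spatiotemporal neighborhoods of x: (H, W, T, C)
    # into tokens of shape (H/N, W/N, T/N, N*N*N*C).
    H, W, T, C = x.shape
    x = x.reshape(H // N, N, W // N, N, T // N, N, C)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)   # group patch axes together
    return x.reshape(H // N, W // N, T // N, -1)

def conditioned_input(x_t, m, y):
    # Pixel-level conditioning: concatenate the noisy field, the binary
    # observation mask, and the observed values along the channel axis.
    return np.concatenate([x_t, m[..., None], y], axis=-1)

H, W, T, C, N = 16, 16, 8, 1, 4                      # toy sizes (illustrative)
rng = np.random.default_rng(0)
x0 = rng.normal(size=(H, W, T, C))                   # clean solution field
m = (rng.random((H, W, T)) < 0.03).astype(float)     # ~3% observed pixels
y = x0 * m[..., None]                                # observed values x . m
x_t = x0 + 1.5 * rng.normal(size=x0.shape)           # noisy diffusion sample
tokens = patchify(conditioned_input(x_t, m, y), N)
```

With these toy sizes the token grid is $H/N \times W/N \times T/N = 4 \times 4 \times 2$, and each token carries $N^3 (2C + 1)$ values, so every token sees its own mask and observations.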
A key architectural innovation is the way we condition the model on known observations. For each token, we concatenate its associated binary mask (indicating observed pixels) and the corresponding observed values. This allows our model to condition at the individual pixel level, enabling fine-grained, spatially varying guidance during the denoising process. Concatenating the binary mask resolves the ambiguity between observed and unobserved pixels. This formulation supports flexible conditioning across a wide range of scenarios, including forward prediction, inverse recovery, and inpainting from arbitrary subsets of observations. The final concatenated input to $D_\theta(\mathbf{x}_t^{cond})$ is:

$$ \mathbf{x}_t^{cond} \equiv \mathrm{concat}(\mathbf{x}_t, \mathbf{m}, y), \quad \text{with } H/N \times W/N \times T/N \text{ tokens,} $$

noting that only the solution-field part $\mathbf{x}_t$ of the input token contains the diffusion noise. Overall, our HV-DiT combines the expressiveness of pixel-space modeling with the efficiency of localized and hierarchical attention, forming a powerful and versatile backbone for generative PDE solving through conditional video inpainting.

# 4 Experiments

We comprehensively evaluate VideoPDE's ability to solve a range of temporal PDEs across diverse inference scenarios. Specifically, we assess its performance in (i) reconstructing from continuous spatiotemporal sensor measurements (Table 2), (ii) predicting future or past system states (Table 3), (iii) handling partial observations during forward and inverse prediction (Table 4), and (iv) generalizing across multiple inference tasks, including forward, inverse, and reconstruction.

Baselines. We compare VideoPDE against a representative set of learning-based PDE solvers.
For standard forward and inverse prediction under full initial or final conditions, we include FNO, PINO, DeepONet, and DiffusionPDE, each representing a distinct modeling paradigm (see Table 1). For partial observation settings, we compare only against DiffusionPDE, which has demonstrated superior performance and shown that prior baselines struggle with sparse conditioning. For the continuous measurement reconstruction task, we evaluate against state-of-the-art generative methods, including those proposed by Shu et al. [66], Zhuang et al. [85], and DiffusionPDE [31]. We also extend DiffusionPDE for improved temporal message passing. See the supplementary material for more details.

# 4.1 PDE Problem Settings

We demonstrate the effectiveness of VideoPDE mainly on 2D dynamic PDEs for forward, inverse, and continuous observation problems and compare it against SOTA learning-based techniques. We use the following families of PDEs for the main experiments; we refer readers to the supplementary material for a more comprehensive coverage of our experiments.

Wave-Layer. We evaluate our method on the Wave-Layer task following Poseidon [26]. This task is based on the wave equation with spatially varying propagation speed and an absorbing boundary:

$$ \partial_t^2 \mathbf{u}(c, \tau) + (\mathbf{q}(c))^2 \Delta \mathbf{u}(c, \tau) = 0, \quad (c, \tau) \in \Omega \times (0, T). $$

Here, $\mathbf{u}\colon \Omega \times (0, T) \to \mathbb{R}$ is a scalar field representing displacement, and $\mathbf{q}\colon \Omega \to \mathbb{R}$ represents the propagation speed. The initial condition is the sum of 2-6 Gaussians with random locations and scales. The propagation speed coefficient $\mathbf{q}$ is generated by creating 3-6 layers with piecewise-constant propagation speeds, separated by randomly generated frontiers. The dataset contains 10,512 trajectories, each with 21 time steps at $128 \times 128$ resolution.
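The sum-of-Gaussians initial condition described above might be generated as follows. This is a hedged sketch: the unit square domain, the bump amplitudes, and the scale range are illustrative assumptions not specified in the text:

```python
import numpy as np

def gaussian_sum_ic(res=128, rng=None):
    # Initial displacement field: a sum of 2-6 Gaussian bumps with random
    # centers and scales (unit amplitudes are an assumption here).
    if rng is None:
        rng = np.random.default_rng()
    c1, c2 = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res),
                         indexing="ij")
    u0 = np.zeros((res, res))
    for _ in range(rng.integers(2, 7)):        # 2-6 bumps
        mx, my = rng.random(2)                 # random center in [0, 1]^2
        s = rng.uniform(0.02, 0.1)             # random spatial scale
        u0 += np.exp(-((c1 - mx) ** 2 + (c2 - my) ** 2) / (2 * s ** 2))
    return u0
```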
The final 100 trajectories are used for validation and the rest for training. This task arises from the propagation of seismic waves through a layered medium. See the supplementary material for more details on this problem.

Navier–Stokes Equation. We study the two-dimensional incompressible Navier–Stokes equations in vorticity form, following the setup introduced in DiffusionPDE [31]:

$$ \begin{array}{rl} \partial_t \mathbf{w}(c, \tau) + \mathbf{v}(c, \tau) \cdot \nabla \mathbf{w}(c, \tau) = \nu \Delta \mathbf{w}(c, \tau) + \mathbf{q}(c), & \quad c \in \Omega, \ \tau \in (0, T], \\ \nabla \cdot \mathbf{v}(c, \tau) = 0, & \quad c \in \Omega, \ \tau \in [0, T], \\ \mathbf{w}(c, 0) = \mathbf{w}_0(c), & \quad c \in \Omega. \end{array} $$

Here, $\mathbf{w} = \nabla \times \mathbf{v}$ denotes the vorticity field, and $\mathbf{v}(c, \tau)$ is the velocity field at spatial location $c$ and time $\tau$. We fix the viscosity coefficient to $\nu = 10^{-3}$, corresponding to a Reynolds number of $Re = 1/\nu = 1000$. Initial conditions $\mathbf{w}_0$ are sampled from a Gaussian random field as in DiffusionPDE. Each datapoint is composed of 20 frames of a $128 \times 128$ vorticity field $\mathbf{w}$. The external forcing $\mathbf{q}(c)$ determines the long-term behavior of the system. In this setting, we adopt a static, time-independent forcing term:

$$ \mathbf{q}(c) = 0.1 \left( \sin(2\pi(c_1 + c_2)) + \cos(2\pi(c_1 + c_2)) \right), $$

which introduces smooth, low-frequency energy into the system without any feedback from the flow itself.
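The static forcing term above is straightforward to evaluate on a grid. A minimal numpy sketch, assuming a uniform discretization of the unit square (the grid layout is an assumption, not stated in the text):

```python
import numpy as np

def static_forcing(res=128):
    # q(c) = 0.1 * (sin(2*pi*(c1 + c2)) + cos(2*pi*(c1 + c2))) evaluated
    # on a uniform res x res grid over the unit square.
    c1, c2 = np.meshgrid(np.linspace(0, 1, res, endpoint=False),
                         np.linspace(0, 1, res, endpoint=False),
                         indexing="ij")
    return 0.1 * (np.sin(2 * np.pi * (c1 + c2)) + np.cos(2 * np.pi * (c1 + c2)))
```

Since $\sin\theta + \cos\theta = \sqrt{2}\sin(\theta + \pi/4)$, the forcing magnitude is bounded by $0.1\sqrt{2}$, consistent with its description as a weak, low-frequency source.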
Due to the weak magnitude of this forcing and the absence of dynamic coupling, the system exhibits diffusion-like decay: initial high-frequency vorticity structures dissipate over time as the system evolves under viscous damping.

Kolmogorov Flow. To study more complex and persistent flow dynamics, we also evaluate our method on the Kolmogorov flow (KF) [7], a classical setup used in [66] to simulate forced, quasi-turbulent regimes in 2D fluid dynamics. The same Navier–Stokes formulation applies, but with a different forcing term of the KF form:

$$ \mathbf{q}(c, \tau) = -4\cos(4c_2) - 0.1\,\mathbf{w}(c, \tau). $$

This forcing is composed of a strong, anisotropic spatial component $(\cos(4c_2))$ that continuously injects energy into the system, and a linear drag term $(-0.1\mathbf{w})$ that stabilizes the flow by removing energy at small scales. Crucially, the forcing depends on the evolving state $\mathbf{w}(c, \tau)$, introducing dynamic feedback that enables sustained motion. Unlike the decaying dynamics of the previous setup, Kolmogorov flow exhibits persistent, swirling structures and high-frequency vorticity patterns over time. This makes it a challenging and realistic benchmark for generative PDE modeling, particularly in capturing long-term, high-fidelity spatiotemporal behavior. Finally, each datapoint is a 20-frame $256 \times 256$ vorticity field.

# 4.2 Experiment Details

We provide additional details on the datasets and their processing in the supplementary material. Training is performed on 4 NVIDIA L40S GPUs with a batch size of 8 per GPU, taking approximately 24 hours per model. All models are trained until convergence using the Adam optimizer with a constant learning rate schedule (initial LR $5 \times 10^{-4}$). Our HV-DiT architecture operates directly in pixel space.
Videos are tokenized into $4 \times 4 \times 2$ patches, forming a $32 \times 32 \times 10$ token grid with embedding dimension 384 for WL and NS. The model uses 2 transformer layers with neighborhood attention (window size $7 \times 7 \times 2$) and a downsampling operation via patch merging with factor 2. We provide more details on the model architecture and hyperparameters in the supplementary.

Table 2: Continuous partial observation reconstruction. We quantitatively measure the performance of different methods using average $\ell_2$ relative errors on Wave-Layer, Navier–Stokes, and Kolmogorov Flow benchmarks with $1\%$ and $3\%$ observation points.

Figure 3: Continuous measurement reconstruction comparison. We compare relative error maps for reconstructing dense spatiotemporal fields from fixed sensors providing $1\%$ continuous observations on Navier–Stokes. Our results are the most accurate, with minimal error. In contrast, baseline methods are significantly slower and not suitable for forward prediction (Zhuang & Shu).

# 4.3 Experiment Results

**Continuous Partial Observation** We evaluate the ability of VideoPDE to reconstruct full spatiotemporal PDE trajectories from sparse, fixed-point observations. Specifically, we randomly sample a very small percentage of spatial coordinates ($1\%$ or $3\%$) and provide the solution values across all time steps at those locations. This setting mimics real-world sensor deployments, where measurements are collected continuously at fixed spatial positions. Our model is conditioned on these sparse yet temporally continuous observations. As shown in Table 2, we report the relative $\ell_2$ error across 100 held-out trajectories for three PDEs: Wave-Layer, Navier–Stokes, and Kolmogorov Flow. In Figure 3 we visualize the error map for the Navier–Stokes equation. Our method significantly outperforms existing generative baselines, including DiffusionPDE [31], Shu et al. [66], and Zhuang et al.
[85], by up to an order of magnitude, demonstrating robust generalization under extreme observation sparsity.

**Forward/Inverse Full Observation** We evaluate VideoPDE on reconstructing full PDE trajectories given a single frame at either the start (forward prediction) or end (inverse inference) of the sequence. The full conditioning frame is provided while the remaining frames are masked. This setup reflects practical simulation scenarios where dense initial conditions are available, and parallels image-to-video tasks in generative modeling.

Table 3: Forward/inverse full observation. Average $\ell_2$ relative errors of baseline methods for forward and inverse subtasks across datasets. (Columns: Input, DeepONet, FNO, PINO, DiffusionPDE, Ours, GT.)

Figure 4 shows the final/initial frames of the fully observed forward/inverse processes on the Kolmogorov Flow dataset, demonstrating that VideoPDE consistently produces results that are closer to the ground truth. Table 3 reports the relative $\ell_2$ error across 100 held-out trajectories for three PDEs. VideoPDE consistently outperforms baselines in both forward and inverse tasks, except for the low-frequency inverse setting. We attribute this to aleatoric uncertainty: in the NS dataset, diffusive dynamics lead to low-frequency end states that may originate from many high-frequency initial conditions. In such cases, pixel-wise $\ell_2$ loss penalizes plausible reconstructions and favors blurry averages. We leave exploration of distribution-based evaluation metrics for future work.

**Forward/Inverse Partial Observation** We extend the forward and inverse prediction tasks to the partially observed setting by conditioning on a single frame, either at the start or end of the trajectory, with only $3\%$ of spatial points revealed. The model must reconstruct the full trajectory from these observations, reflecting real-world scenarios where sensors provide limited data at a single timepoint.
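The conditioning mask for this task can be sketched as follows; the function and parameter names are hypothetical, and the RNG choice is ours:

```python
import numpy as np

# Sketch of the forward/inverse partial-observation conditioning mask:
# only one frame (first or last) is observed, and only `rate` of its
# spatial points are revealed; everything else is unobserved.
def single_frame_sparse_mask(T, H, W, rate=0.03, frame="first", seed=0):
    rng = np.random.default_rng(seed)
    mask = np.zeros((T, H, W), dtype=bool)
    t = 0 if frame == "first" else T - 1   # forward vs. inverse task
    mask[t] = rng.random((H, W)) < rate    # ~3% of spatial points
    return mask

# Inverse task on a Wave-Layer-sized trajectory: observe 3% of the final frame.
m = single_frame_sparse_mask(20, 128, 128, rate=0.03, frame="last")
```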
In Figure 2, we present the inverse simulation of a Wave-Layer sample, where VideoPDE recovers all time steps in reverse given only $3\%$ of observation points from the final frame. As shown in Table 4, VideoPDE outperforms DiffusionPDE, the current SOTA for this task, by a significant margin across all settings, except for inverse prediction on the Navier–Stokes case, where aleatoric uncertainty remains high due to the diffusive loss of high-frequency information. We note that VideoPDE performs similarly on this task to the forward/inverse full observation task, particularly for Wave-Layer forward prediction, and both Navier–Stokes forward and inverse prediction.

**Unified Model** We evaluate whether a single model can jointly learn multiple inference tasks within our video inpainting framework. For each dataset, we train one unified model on six tasks: continuous partial observation ($3\%$ and $1\%$), and forward and inverse prediction under full and partial observation. As shown in Tables 2, 3, and 4, the unified model matches the performance of task-specific variants and outperforms prior baselines in most settings. In contrast, all baselines require separate models per task, highlighting VideoPDE's potential to be a unified framework for flexible PDE solving.

Table 4: Forward/inverse $3\%$ observation. Average $\ell_2$ relative errors of baseline methods for forward and inverse subtasks across datasets.

Table 5: Ablation study. We ablate our design choices, beginning with a latent space DiT. We report average relative $\ell_2$ errors for all configurations on Navier–Stokes with a $3\%$ observation rate.

# 4.4 Ablation Study

We conduct an ablation study to assess the impact of key architectural choices, evaluated on the continuous partial observation task for low-frequency Navier–Stokes with a $3\%$ observation rate. Relative $\ell_2$ errors are reported in Table 5.
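The average relative $\ell_2$ error reported in Tables 2 through 5 can be sketched as follows; the exact normalization (a per-trajectory norm ratio, averaged over held-out trajectories) is our assumption of the standard convention:

```python
import numpy as np

# Minimal sketch of the average relative l2 error over a batch of
# trajectories: ||pred - gt||_2 / ||gt||_2 per trajectory, then averaged.
def relative_l2(pred, gt):
    pred = np.asarray(pred, dtype=float).reshape(len(pred), -1)
    gt = np.asarray(gt, dtype=float).reshape(len(gt), -1)
    num = np.linalg.norm(pred - gt, axis=1)  # per-trajectory error norm
    den = np.linalg.norm(gt, axis=1)         # per-trajectory reference norm
    return float(np.mean(num / den))
```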
We begin with a video DiT architecture adapted from VDT [45], originally designed for natural video inpainting. The model input is $\mathbf{x} \odot (1 - \mathbf{m}) + \mathbf{y} \odot \mathbf{m}$, where $\mathbf{x}$ is Gaussian noise, $\mathbf{m}$ a binary mask, and $\mathbf{y}$ the ground truth. This model performs poorly, likely due to confusion between sparse observations and noise. Replacing the conditioning method with channel-wise concatenation of noise and masked ground truth, i.e., $\mathrm{concat}(\mathbf{x}_t, \mathbf{y})$, significantly improves performance. Building on this, we train a latent diffusion version using a task-specific VAE. However, due to the precision requirements of PDEs, the latent model performs poorly, highlighting the need for pixel-space modeling in scientific applications. Next, we introduce a hierarchical variant of the DiT with 3D neighborhood attention and temporal downsampling inspired by HDiT [16], which further reduces error. Finally, conditioning on the binary mask itself yields the best performance in our setup, indicating that the binary mask resolves ambiguity between masked and unmasked pixels: the input is $\mathrm{concat}(\mathbf{x}_t, \mathbf{y}, \mathbf{m})$.

# 4.5 Supplementary

We provide further details on our training hyperparameters, datasets, and model architecture in the supplementary. Also provided are extended experimental results and analysis for more tasks and PDE problem settings. We urge readers to review the additional images and videos in the supplementary.
We present a unified framework for solving partial differential equations (PDEs) using video-inpainting diffusion transformer models. Unlike existing methods that devise specialized strategies for either forward or inverse problems under full or partial observation, our approach unifies these tasks under a single, flexible generative framework. Specifically, we recast PDE-solving as a generalized inpainting problem, e.g., treating forward prediction as inferring missing spatiotemporal information of future states from initial conditions. To this end, we design a transformer-based architecture that conditions on arbitrary patterns of known data to infer missing values across time and space. Our method employs pixel-space video diffusion models for fine-grained, high-fidelity inpainting and conditioning, while enhancing computational efficiency through hierarchical modeling. Extensive experiments show that our video inpainting-based diffusion model offers an accurate and versatile solution across a wide range of PDEs and problem setups, outperforming state-of-the-art baselines.
[ "cs.LG", "cs.AI", "cs.CV" ]
# 1 Introduction

Preprint. Under review.

While diffusion models have set a new standard for photorealism in generative art [1], their operational costs remain a major challenge. The generation of a single image can involve many denoising steps, each utilizing a learned denoiser model with potentially over a billion parameters [2]. This makes in-the-wild (e.g., on-device) adoption challenging and raises valid concerns about their environmental sustainability [3, 4, 5]. To address this, a significant body of research has explored optimization strategies such as network simplification [6, 7] and model distillation [8, 9, 10, 11]. However, these existing methods typically apply the same degree of optimization irrespective of the task's intrinsic difficulty. This results in a single model with a fixed computational cost, which is inherently suboptimal, as the generative effort required to synthesize an image varies with the complexity of the input prompt. For example, a simple prompt like "a white and empty wall" requires fewer denoising steps to generate a high-quality image than a complex one like "a colorful park with a crowd", as shown in Figure 1.

Figure 1: Two input prompts that require different numbers of denoising steps to ensure quality. (a) "a white and empty wall"; (b) "a colorful park with a crowd"; (c) quality trends across numbers of steps. As shown in (c), prompt (a) requires only a small number of denoising steps to reach a high CLIPScore. By contrast, the more complex prompt (b) requires over 100 steps to reach a similar quality. Key to our proposed CATImage is to allocate an appropriate amount of computation for each prompt, so that the overall computational cost is reduced while the quality remains the same.

With the motivation to adaptively allocate computational budget, we present CATImage, a framework that allows the amount of computation for text-to-image generation to vary for each prompt. Our
framework operates with a pre-defined set of choices that can be selected adaptively for each input prompt. Each choice represents a text-to-image generation function and has a distinct profile of computational cost and expected image quality. Concretely, these choices may correspond to different numbers of denoising steps of the same diffusion model (i.e., homogeneous choices), disparate, independent text-to-image generative models (i.e., heterogeneous choices), or a combination of both. The proposed CATImage aims to adaptively select the right choice (i.e., "routing") for each input prompt, in such a way that expensive choices (e.g., $100+$ denoising steps) are reserved only for complex prompts. Our approach enables a joint deployment of diverse text-to-image models and has the potential to deliver higher average image quality compared to using any individual model in the pool, while allowing the average computational cost to be adapted at deployment time. In summary, our contributions are as follows.

1. We precisely formulate a constrained optimization problem for the above routing problem (Section 3.1). The formulation aims to maximize average image quality subject to a budget constraint on the generation cost.
2. We study the theoretically optimal routing rule that optimally trades off the average quality and cost (Section 3.2). Based on the optimal rule, we construct a plug-in estimator that can be trained from data.
3. We perform a series of objective analyses on the COCO [12] and DiffusionDB datasets [13].
Our findings show that, through adaptive routing, our proposal matches the quality of the largest model in the serving pool (namely, Stable Diffusion XL [14] with 100 denoising steps) with only a fraction of its computational cost (Table 1).

# 2 Background: Text-To-Image Generative Models

Let $\mathbf{x} \in \mathcal{X}$ denote an input text prompt, and $\mathbf{i} \in \mathcal{I} \doteq [0, 1]^{W \times H \times 3}$ denote an image described by the prompt, where $W, H \in \mathbb{N}$ denote the width and the height of the image (in pixels), and the last dimension denotes the number of color channels. A text-to-image generative model is a stochastic map $h \colon \mathcal{X} \to \mathcal{I}$ that takes a prompt $\mathbf{x}$ as input and generates an image $h(\mathbf{x}) \in \mathcal{I}$ that fits the description in the prompt $\mathbf{x}$. There are many model classes one may use to construct such a model $h$, including conditional Generative Adversarial Networks (GANs) [15, 16], Variational Auto-Encoders (VAEs) [17], and diffusion models [1], among others.

**Diffusion models** A specific class of text-to-image generative models that has recently been shown to produce high-fidelity images is given by diffusion-based models [18, 1, 19]. A diffusion generative model relies on a function $g \colon \mathcal{X} \times \mathbb{N} \times \mathbb{R}^{D} \to \mathcal{I}$ that takes as input a prompt $\mathbf{x}$, the number of denoising steps $T \in \mathbb{N}$, and a noise vector $\mathbf{z} \in \mathbb{R}^{D}$ with $D = 3 \cdot WH$, and generates an image $\mathbf{i} = g(\mathbf{x}, T, \mathbf{z})$.

Figure 2: Illustration of our pipeline. During training (dashed box), a quality estimator is trained to predict per-prompt quality scores for all routing candidates $h^{(1)}, \ldots, h^{(M)}$.
At inference time (bottom), given a prompt, the predicted quality scores of all routing candidates are adjusted by their respective costs. The routing candidate that has the highest cost-adjusted score is chosen (see Eq. (3)).

Image generation is done by iteratively refining the initial noise vector $\mathbf{z}$ for $T$ iterations to produce the final image. The noise vector $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ is typically sampled from the standard multivariate normal distribution, and the $T$ refinement steps correspond to the reverse diffusion process, which reconstructs an image from a random initial state [1]. With $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ understood to be an implicit source of randomness, we define $h_T(\mathbf{x}) \doteq g(\mathbf{x}, T, \mathbf{z})$ to be an image sampled from the diffusion model using $T$ diffusion steps. With $T$ chosen, $h_T \colon \mathcal{X} \to \mathcal{I}$ is thus an instance of the text-to-image generative models described earlier. The importance of this view will be apparent when we describe our proposed method in Section 3, which enables an automatic selection of the number of denoising steps separately for each prompt. Typically, the number of denoising steps is pre-chosen according to the computational budget available at inference time, with a low value of $T$ giving a lower computational cost at the expense of image quality.

# 3 Cost-Aware Text-To-Image Generation

We now describe our main proposal, termed CATImage (Cost-Aware Text-based Image Generation), which seeks to minimize inference cost by adaptively adjusting the cost spent on each prompt, depending on its complexity. As illustrated in Figure 1, in the case of a diffusion model, our key observation is that not all prompts require a large number of denoising steps to ensure quality.
Thus, inference efficiency can be achieved by spending a small amount of computation on easy prompts. Our proposed framework is general and allows cost adjustment in a per-prompt manner via selecting an appropriate amount of resources from homogeneous choices (i.e., adaptively varying the number of denoising steps of a single diffusion model) or heterogeneous choices (i.e., adaptively routing prompts to disparate, independent generative models). We start by formalizing the cost-aware text-to-image generation task as a learning-to-route problem in Section 3.1. The formulation can be theoretically shown (Section 3.2) to have a simple Bayes-optimal routing rule, involving subtracting off the expected quality metrics with the costs of candidate numbers of denoising steps. We show that the optimal rule can be estimated from data, and propose two estimators: a Transformer-based estimator [20], and a $K$-nearest neighbors (KNN) model.

# 3.1 Problem Formulation

Let $[n] \doteq \{1, 2, \ldots, n\}$ denote the set of counting numbers up to $n$. Suppose that we are given a fixed set of $M$ choices $\mathcal{H} \doteq \{h^{(1)}, \ldots, h^{(M)}\}$ where each choice $h^{(i)} \colon \mathcal{X} \to \mathcal{I}$ represents a trained generative model (see Section 2 for a precise definition). Our goal is to derive a routing rule that optimally (in the sense of quality-cost trade-offs) chooses the best model to invoke for each input prompt. These $M$ base models may be homogeneous, being derived from a single diffusion model with varying numbers of diffusion steps; a mix of heterogeneous generative model classes; or a combination of both. For example, if we want to decide whether to use 20 or 50 denoising steps in the Stable Diffusion XL (SDXL) model [21], then $M = 2$, and $\mathcal{H} = \{h^{(1)}, h^{(2)}\}$ where the two models are both SDXL with the number of denoising steps fixed to 20 and 50, respectively.
We will abstract away the details of the underlying $M$ base models and propose a general framework that supports both the homogeneous and heterogeneous cases (as shown in our experiments in Section 5). Suppose we are given a quality metric of interest $q \colon \mathcal{X} \times \mathcal{I} \to \mathbb{R}$ (see Quality Metrics under Section 5.1), which takes as input a prompt-image tuple and estimates a quality score. We seek a router $r \colon \mathcal{X} \to [M]$ that predicts the index of one of the $M$ choices from a given prompt. We posit two desirable properties that the router ought to possess:

1. The router must respect a specified budget constraint on the inference cost.
2. Routing prompts to candidates in $\mathcal{H}$ must maximize the average quality metric.

Following similar formulations considered in [22, 23, 24, 25], the above desiderata may be realized as a constrained optimization problem:
$$
\max_{r} \; Q(r) \quad \text{subject to} \quad C(r) \le B, \qquad (1)
$$
where
$$
Q(r) \doteq \mathbb{E}\Big[\sum_{m \in [M]} \mathbf{1}[r(\mathbf{x}) = m] \cdot q\big(\mathbf{x}, h^{(m)}(\mathbf{x})\big)\Big], \quad C(r) \doteq \mathbb{E}\Big[\sum_{m \in [M]} \mathbf{1}[r(\mathbf{x}) = m] \cdot c^{(m)}\Big], \qquad (2)
$$
where for $m \in [M]$, $c^{(m)} \geq 0$ denotes the cost for the model $h^{(m)}$ to produce one image for a given prompt, $\mathbb{E}$ denotes the expectation with respect to the population joint distribution on all random variables (i.e., the prompt $\mathbf{x}$ and the sampled output of $h^{(m)}$), and $B \geq 0$ is a hyperparameter specifying an upper bound on the average cost.
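In practice, $Q(r)$ and $C(r)$ are estimated by empirical averages over a set of prompts; a minimal sketch in our own notation (function and variable names are hypothetical):

```python
import numpy as np

# Empirical counterparts of Q(r) and C(r) from Eq. (1)-(2): the average
# realized quality and cost of a router's choices over N prompts.
def empirical_Q_C(choices, qualities, costs):
    # choices:   (N,)   routed model index r(x_i) for each prompt
    # qualities: (N, M) realized quality q(x_i, h^(m)(x_i)) per prompt/model
    # costs:     (M,)   per-image generation cost c^(m) of each model
    choices = np.asarray(choices)
    qualities = np.asarray(qualities, dtype=float)
    costs = np.asarray(costs, dtype=float)
    rows = np.arange(len(choices))
    return qualities[rows, choices].mean(), costs[choices].mean()
```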
The optimization problem (1) thus seeks a router $r$ that maximizes the average quality $Q(r)$ subject to the constraint that the average cost (over all prompts) is bounded above by $B$.

Remark. The optimization problem is general and allows the per-model costs to be in any unit suitable for the application (e.g., latency in seconds, FLOP counts). Further, no practical constraint is imposed on the quality metric function $q$. For instance, $q$ could be the CLIP score [14]. Intuitively, if the budget $B$ is large, the cost constraint $C(r) \leq B$ has little effect, and the optimal router is expected to route each prompt to the base model that can produce the highest quality metric score, disregarding the cost of the model. In practice, such a model is often the largest one in the pool $\mathcal{H}$, or the diffusion model with the largest number of denoising steps. Conversely, if $B$ is small, the router prioritizes cost over quality, preferring to choose a small base model (or a small number of denoising steps) over a larger candidate. This proposal offers a framework for trading off average quality with cost in a unified way by varying $B$.

# 3.2 Theoretically Optimal Routing Rule

Having formulated the constrained problem in (1), we now investigate its theoretically optimal solution. We will use the optimal solution to guide us on how to design a practical router. Based on the results in [22, 23], the optimal solution to (1) is shown in Proposition 1.

Proposition 1.
For a cost budget $B > 0$, the optimal router $r^{*} \colon \mathcal{X} \to [M]$ for the constrained optimization problem (1) is
$$
r^{*}(\mathbf{x}) = \arg\max_{m \in [M]} \; \mathbb{E}\left[q\big(\mathbf{x}, h^{(m)}(\mathbf{x})\big) \mid \mathbf{x}\right] - \lambda \cdot c^{(m)},
$$
where the conditional expectation is over the sampled output from the model $h^{(m)}$, and $\lambda \geq 0$ is a Lagrange multiplier inversely proportional to $B$.

The result follows from Proposition 1 in [23]. The result states that the choice/model we route a prompt $\mathbf{x}$ to is the one that maximizes the average quality, adjusted additively by the cost of the model. The hyperparameter $\lambda$ controls the trade-off between quality and cost, and is inversely proportional to the budget $B$. For instance, if $\lambda = 0$ (corresponding to $B = \infty$), then the model with the highest expected quality for $\mathbf{x}$ will be chosen, regardless of its cost. Increasing $\lambda$ forces the routing rule to account more for model costs, in addition to the expected quality.

**Estimating the Optimal Rule** The optimal rule $r^{*}$ in Proposition 1 depends on the population conditional expectation $\gamma^{(m)}(\mathbf{x}) \doteq \mathbb{E}\left[q\big(\mathbf{x}, h^{(m)}(\mathbf{x})\big) \mid \mathbf{x}\right]$, which is unknown. Following a similar reasoning as in [23], we propose plugging in an empirical estimator $\hat{\gamma}^{(m)} \colon \mathcal{X} \to \mathbb{R}$ in place of $\gamma^{(m)}$, resulting in the empirical rule $\hat{r}_{\lambda}$:
$$
\hat{r}_{\lambda}(\mathbf{x}) = \arg\max_{m \in [M]} \; \hat{\gamma}^{(m)}(\mathbf{x}) - \lambda \cdot c^{(m)}. \qquad (3)
$$

For each $m \in [M]$, the idea is to train an estimator $\hat{\gamma}^{(m)}$ to estimate the true expected quality. That is, suppose we are given a collection of $N$ training prompts $\{\mathbf{x}_i\}_{i=1}^{N}$. For each prompt $\mathbf{x}_i$, we may sample $S$ times from $h^{(m)}$ to produce output images $\mathbf{i}_{i,1}^{(m)}, \ldots, \mathbf{i}_{i,S}^{(m)}$. These output images allow one to estimate the empirical expectation of the quality $\hat{y}_i \doteq \frac{1}{S} \sum_{s=1}^{S} q(\mathbf{x}_i, \mathbf{i}_{i,s}^{(m)})$. With the labeled training set $\{(\mathbf{x}_i, \hat{y}_i)\}_{i=1}^{N}$, we may then proceed to train a predictive model $\hat{\gamma}(\mathbf{x}) \doteq \big(\hat{\gamma}^{(1)}(\mathbf{x}), \ldots, \hat{\gamma}^{(M)}(\mathbf{x})\big)$, which has $M$ output heads for predicting the expected qualities of the $M$ models. There are several standard machine learning models one can use as the model class for $\hat{\gamma}$. We emphasize that we do not advocate a specific model class as part of our proposal, since different model classes offer distinct properties on training and inference costs, which may be best tailored to the application. What we propose is an application of the generic routing rule in (3) to text-to-image model routing. The rule is guaranteed to give a good quality-cost trade-off provided that the estimator $\hat{\gamma}^{(m)}$ estimates $\gamma^{(m)}$ well. In experiments (Section 5), we demonstrate estimating $\gamma^{(m)}$ with two model classes: 1) $K$-nearest neighbors, and 2) a Multi-Layer Perceptron (MLP) with a Transformer backbone [20]. Likewise, we do not propose or advocate a specific value of $\lambda$.
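Once the estimator is trained, the plug-in rule in Eq. (3) is inexpensive to evaluate; a minimal sketch in our notation, with $\lambda$ as an explicit argument:

```python
import numpy as np

# Plug-in routing rule of Eq. (3): route each prompt to the model with
# the highest predicted quality minus lambda-weighted cost.
def route(gamma_hat, costs, lam):
    # gamma_hat: (N, M) predicted expected qualities; costs: (M,); lam >= 0
    scores = np.asarray(gamma_hat) - lam * np.asarray(costs)
    return np.argmax(scores, axis=1)
```

With `lam = 0`, every prompt goes to the model with the highest predicted quality; increasing `lam` shifts traffic toward cheaper models.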
The parameter is left to the user as a knob to control the desired degree of quality-cost trade-off. In experiments, we evaluate our proposed routing rule by considering a wide range of $\lambda$ and show the trade-off as a deferral curve (see Section 3.3). An illustration summarizing our pipeline is displayed in Figure 2.

# 3.3 Deferral Curve

In general, any method that offers the ability to trade off quality and cost may be evaluated via a deferral curve [26, 27, 28, 29]. A deferral curve is a curve showing the average quality against the average cost in a quality-cost two-dimensional plane. Specifically, for our proposed routing rule $\hat{r}_{\lambda}$ in (3), the curve is precisely given by $\mathcal{C} = \{(C(\hat{r}_{\lambda}), Q(\hat{r}_{\lambda})) \mid \lambda \in [0, \infty)\}$ where $Q$ and $C$ denote the average quality and cost, and are defined in Eq. (2). In practice, the population expectation in $Q$ and $C$ is replaced with an empirical expectation over examples in a test set. More generally, one evaluates the deferral curve of a method by computing its average quality and cost as we vary the parameters that control the trade-off. For instance, for the SDXL diffusion model, we may produce a deferral curve by varying the number of denoising steps.

# 4 Related Work

**Uniform Optimization Strategies for Diffusion Models** Diffusion models have recently exploded in popularity due to their high performance on tasks such as image and video generation, audio generation, and 3D shape generation [1, 30]. Latent diffusion models [2] have significantly improved training and inference efficiency, but still require a large number of forward denoising neural network evaluations to produce high-quality results. To address this, an extensive body of literature has been proposed to optimize and accelerate diffusion models; such optimizations are typically applied uniformly across all prompts.
For example, optimizing the sampling strategy may enable more efficient denoising computation [6, 31, 7], such as timestep integration [32] or conditioning on the denoising [33]. Optimized solvers for the denoising steps can also reduce computation while avoiding retraining or fine-tuning [34, 35, 36, 37]. Alternatively, reducing redundant computation by caching the internal results within the denoising network is explored in [38, 39]. Another common approach includes model-based optimizations, such as distilling a fully trained model into a smaller student model that achieves comparable results with fewer denoising steps [8, 9, 10, 11], or combining multiple denoising models of different sizes to accelerate the denoising process [40, 41, 42]. An alternative strategy is to approximate the direct mapping from initial noise to generated images, further reducing the number of denoising steps [43, 44].

**Adaptive Optimization Strategies for Diffusion Models** Instead of a fixed reduction in computational resources, AdaDiff [45] explores a more dynamic approach where the number of denoising steps is decided based on uncertainty estimation of the intermediate results during denoising. Our work shares a similar motivation for flexible resource allocation. However, we adaptively allocate resources according to prompt complexity and thus can select the most suitable number of steps or model before any denoising process. Concurrently, AdaDiff [46] tackles the selection of the optimal number of steps using a prompt-specific policy, with a lightweight network trained on a reward function that balances image quality and computational resources. In contrast, we decouple the quality estimation from the routing decision, which allows our framework to adapt to different resource constraints without any retraining.
**Learning-To-Defer and Model Routing** The idea of adaptively invoking a different expert on each input is a widely studied area in machine learning under the topic of learning to defer. Here, each expert may be a human expert [47, 48, 49] or a larger model [29, 22, 24, 28]. In the latter case, depending on the topology or order in which the models are invoked, a learning-to-defer method may yield a cascade if models are arranged in a chain [50, 22, 51], or a routed model if there is a central routing logic (i.e., the router) which selectively sends input traffic to appropriate models [52, 24, 28, 23]. The latter setup is also known as model routing and has received much attention of late, especially in the natural language processing literature. Model routing has been successfully applied to route between many Large Language Models (LLMs) of various sizes and specialties (see [53, 54, 55, 56, 23] and references therein). To our knowledge, our work is one of the first that connects the model routing problem to efficient text-to-image generation.

# 5 Experiments

In this section, we show how our proposed routing method (Section 3) can be realized in practice by evaluating its effectiveness on real data. We experiment with both homogeneous (i.e., all routing candidates are derived from the same diffusion model with different candidate numbers of denoising steps) and heterogeneous settings (i.e., the routing candidates also include different generative models). Our goal is to optimally select the best model (or number of denoising steps) for each input prompt given a specified cost constraint.

# 5.1 Experimental Setup

**Text-To-Image Generative Models** As defined in Section 3.1, our method selects from a set of generative models $\mathcal{H}$ for each input prompt. We consider a diverse range of models with varying configurations, each offering a different trade-off between image quality and computational cost:

1. SDXL: a widely-used SD architecture [2].
To see the full extent of the achievable trade-off, we consider representative numbers of denoising steps in a wide range between 1 and 100.
2. TURBO [8] and LIGHTNING [57]: distilled versions of SDXL for faster generation. We use the SDXL variant with 1 step for Turbo, and 4 steps for Lightning.
3. DDIM [34]: a non-Markovian diffusion process allowing faster sampling. We use this sampling strategy on the SDXL variant at 50 steps.
4. DEEPCACHE [39]: a caching method that reduces redundant computation in SDXL. We use the implementation released by the authors of [39], and set the cache interval parameter to 3.
5. INFINITY [58]: a non-diffusion, autoregressive text-to-image model based on the Transformer encoder-decoder. We use the pre-trained Infinity-2B variant with a visual vocabulary size of $2^{32}$.

**Quality Metrics** The effectiveness of generative models largely depends on the criteria used to evaluate their output. Our proposed method can adaptively identify the optimal allocation of generative models for any instance-level image quality metric. As there is no consensus on the optimal metric for evaluating image quality, we explore several widely-used metrics: CLIPScore [14] for text-image semantic alignment, ImageReward [59] with a reward model tuned to human preferences, and Aesthetic Score [60] trained on human aesthetic ratings from LAION [61]. Additionally, we introduce a Sharpness metric adapted from [62], defined as
$$
q_{\mathrm{Sharp}}(\mathbf{x}, \mathbf{i}) = \frac{\sum_{i,j} \left(\mathbf{i}_{i,j} - [\mathbf{i} \circledast G]_{i,j}\right)^2}{\sum_{i,j} \mathbf{i}_{i,j}^2},
$$
where $\circledast$ denotes the convolution operator, $\mathbf{i}_{i,j}$ is the pixel intensity at location $(i, j)$, and $G$ is a Gaussian kernel with standard deviation 1. Intuitively, this metric measures the relative distance between the given image $\mathbf{i}$ and itself after a Gaussian blur filter is applied.

Figure 3: Deferral curves of our proposed methods and baselines on the COCO dataset as described in Section 5.1, where the quality metric is measured by CLIPScore (Sub-figure (a))
and pixel sharpness (Sub-figure (b)), which are presented in Quality Metrics under Section 5.1. Our Proposed Transformer $(SDXL+)$, which considers all the numbers of diffusion steps of SDXL and other baselines as candidate choices to route to, offers the best quality-cost trade-off, where cost is measured in TFLOPs. In Figure 3a, baselines that are not visible are shown at the bottom-right corner in the format of (cost, CLIPScore). Quality Estimator $\hat{\gamma}$ One of the key components of our routing method is the quality estimator, which estimates the expected quality of the $m$-th model given an input prompt (see $\hat{\gamma}^m$ in Eq. (3)). We explore two model classes: a K-NEAREST NEIGHBORS ($K$-NN) model and a TRANSFORMER-based model. Both of these models incur a negligible inference cost: less than 0.001 TFLOPs, compared to 1.5 TFLOPs for the smallest base model in the pool (Infinity). The $K$-NN approach provides a non-parametric way to estimate quality by averaging the quality scores of the $K$ nearest training prompts in the space of CLIP embeddings [14]. This method is simple and can generalize well with sufficient data. The Transformer model takes as input the per-token embeddings produced by the frozen CLIP text encoder. A two-layer MLP with $M$ output heads is added to each output token embedding. Pooling across all tokens gives $M$ output scores $\hat{\gamma}^{(1)}(\mathbf{x}), \ldots, \hat{\gamma}^{(M)}(\mathbf{x})$ (see Eq. (3)), each estimating the expected quality of the $m$-th model on prompt $\mathbf{x}$ (see Appendix B for details). All base models except Infinity already use CLIP embeddings, making router overhead negligible. Infinity uses Flan-T5 embeddings ($\approx 13$ GFLOPs overhead), but this cost is minimal compared to one SDXL call ($\approx 200$ TFLOPs for 17 steps).
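The $K$-NN quality estimator described above can be sketched in a few lines. Here plain NumPy arrays stand in for CLIP prompt embeddings, and the function name `knn_quality` is illustrative rather than taken from the paper's codebase:

```python
import numpy as np

def knn_quality(train_emb, train_quality, query_emb, k=5):
    """Estimate the expected quality of each candidate model for a query
    prompt as the mean quality score of its k nearest training prompts.

    train_emb:     (n_train, d) prompt embeddings (e.g., CLIP)
    train_quality: (n_train, n_models) observed per-model quality scores
    query_emb:     (d,) embedding of the query prompt
    Returns a (n_models,) vector of quality estimates.
    """
    dists = np.linalg.norm(train_emb - query_emb[None, :], axis=1)
    nearest = np.argsort(dists)[:k]  # indices of the k closest prompts
    return train_quality[nearest].mean(axis=0)
```

Because the estimates are simple averages of stored scores, adding new (prompt, quality) pairs requires no retraining, one reason this variant can generalize well with sufficient data.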
We train a separate model for each of the quality metrics considered. In each case, the quality scores are linearly scaled across all training examples to lie in [0, 1]. These scaled metrics are treated as ground-truth probabilities, and the model is trained by minimizing the sum of the sigmoid cross-entropy losses across all heads. # 5.2 Dataset Details We utilize two datasets: 1) the COCO captioning dataset [12], which contains high-quality, detailed image captions, and 2) the DiffusionDB dataset [13], which contains a larger collection of realistic, user-generated text prompts for text-to-image generation. From both datasets, we sub-sample prompts by retaining only those with pairwise CLIP similarity below 0.75, resulting in a diverse set of 18,384 prompts in the COCO dataset and 97,841 prompts in the DiffusionDB dataset. We split each dataset independently into $80\%$ for training, $10\%$ for validation, and $10\%$ for testing. We then generate images from those prompts using all the base text-to-image models described earlier. For SDXL, we generate images with various numbers of denoising steps ranging from 1 to 100. The costs in terms of FLOPs of these candidates cover the full range of costs of all other baselines. For each model, we generate four images per prompt (i.e., $S = 4$ in Section 3.2) using different random seeds, with a fixed seed across different numbers of steps for SDXL. The generated images for each prompt $\mathbf{x}_i$ allow us to compute the average quality metric, which is then used as the training label $\hat{y}_i$ (as described in Section 3.2). Unless otherwise specified, we use the widely used Euler scheduler [37] for diffusion-based image generation. # 5.3 Experiments on COCO dataset We present experimental results on a subset of COCO's test set [12] consisting of $1.8\mathrm{k}$ image-caption pairs in Figure 3. We evaluate the deferral curves (see Section 3.3) of our proposed method and all the baselines.
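A deferral curve of the kind evaluated here can be traced by sweeping a quality-cost trade-off parameter. The sketch below routes each prompt to the model maximizing estimated quality minus $\lambda$ times cost, which is one standard realization of such a router; the paper's exact routing rule is given in Section 3 and Eq. (3) and is not reproduced here:

```python
import numpy as np

def deferral_curve(q_hat, costs, lambdas):
    """Trace (average cost, average quality) points for a routed system.

    q_hat:   (n_prompts, n_models) estimated per-prompt quality
    costs:   (n_models,) per-model inference cost (e.g., TFLOPs)
    lambdas: iterable of trade-off weights; larger values favor cheap models
    """
    points = []
    for lam in lambdas:
        # Route each prompt to the model with the best quality-cost score.
        choice = np.argmax(q_hat - lam * costs[None, :], axis=1)
        avg_cost = float(costs[choice].mean())
        avg_quality = float(q_hat[np.arange(len(q_hat)), choice].mean())
        points.append((avg_cost, avg_quality))
    return points
```

At $\lambda = 0$ every prompt goes to its highest-estimated-quality model; as $\lambda$ grows, traffic shifts toward cheaper models, sweeping out the curve from the top-right to the bottom-left of the quality-cost plane.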
The results are shown in Figures 3a and 3b for the two different quality metrics: CLIPScore and image sharpness (Section 5.1), respectively. The deferral curves plot average quality against average cost measured in TFLOPs (tera floating-point operations). Baselines that do not support a dynamic quality-cost trade-off are shown as isolated dots in the same quality-cost plane; these baselines use the same compute cost for image generation for each input prompt. For instance, each point $\bigstar$ of SDXL represents the performance of the SDXL model with the number of denoising steps fixed. For our proposed methods, Proposed (SDXL) refers to the homogeneous configuration in which the model candidate set $\mathcal{H}$ consists solely of the SDXL model at multiple numbers of denoising steps. Proposed $(SDXL+)$ extends this configuration by incorporating the other text-to-image models considered, namely TURBO, DDIM, DEEPCACHE, and INFINITY. Each of these has two variants based on a Transformer or $K$-NN as the model class for estimating the expected quality metric. Homogeneous vs. Heterogeneous setting In both settings, our methods outperform baselines with static inference costs per prompt. The heterogeneous setting further benefits from models with strong quality-to-cost trade-offs (e.g., INFINITY, TURBO), improving our dynamic routing's effectiveness and cost-efficiency. Moreover, our strategy remains adaptive, seamlessly allocating prompts to higher-performance models when additional computational resources are available, improving performance beyond what is attainable using each model alone (see Appendix E for details on model selection rates). Transformer vs. KNN Between the two proposed variants, the Transformer-based variant generally outperforms the $K$-NN variant, suggesting that directly learning to predict the quality metric can be more effective than estimating it from neighboring prompts.
Qualitative Analysis In Figure 4, we analyze scenarios showing both successes and failures of our adaptive routing method (Proposed Transformer $(SDXL+)$ with the CLIPScore metric). Specifically, we focus on cases where our method uses the same overall computational cost as the baseline (SDXL with a fixed 22 denoising steps). Within these scenarios, we consider cases where our method allocates more than 22 denoising steps, indicating that the prompts are particularly complex and require additional refinement. For the prompt A young kid stands before a birthday cake decorated with Captain America, our method correctly recommends more denoising steps, as fewer would not generate accurate images. In contrast, the prompt There are two traffic signals on a metal pole, each with three light signals on them includes an exact number of objects, a concept which both diffusion models and CLIP often struggle with [63, 64]. Our approach accounts for this difficulty by recommending more steps than average. However, in this case, more denoising steps actually degrade image quality, which is uncommon and ends up hurting router performance. Table 1: Cost ratio $(\%)$ of our method compared to baselines to match the quality score (Sharpness). We also perform a user study to compare a subset of these routing decisions with the fixed-cost baseline (see Appendix F). All participants rate Figure 4b (ours) as the better image, while 14 of 19 participants select Figure 4c (baseline) as the better image. # 5.4 Experiments on DiffusionDB dataset In this section, we present results on a subset of prompts from the DiffusionDB dataset [13], which aligns more closely with real-world prompts used in text-to-image generation. We evaluate performance across four metrics: CLIPScore, ImageReward, Aesthetic Score, and Sharpness. Quantitative results comparing our dynamic routing method to the fixed-model baselines are summarized in Table 2.
This table effectively captures the trade-offs shown in the deferral curves at a specific cost equal to each baseline. We use KNN as the quality estimator to efficiently evaluate multiple metrics at scale. The results show that our method consistently matches or exceeds fixed-model baseline performance across all four quality metrics. Table 2: Quality-cost trade-off of our proposed approach on DiffusionDB (Section 5.4). We report the average quality (as measured by four different quality metrics) achieved by our routing approach when operating at the cost (TFLOPs) of each model in the pool. For each metric, the highest score achieved is highlighted in bold, which in all cases corresponds to our routing method. Additionally, our approach is able to consistently maintain or exceed the quality at the same cost as each model baseline. Figure 4: Success and failure cases of the baseline SDXL with static 22 denoising steps and our approach (Proposed Transformer $(SDXL+)$ in Figure 3a) operating at the same average cost as the baseline. Success case: A young kid stands before a birthday cake decorated with Captain America. (a), (b): Our approach is able to recognize the need for a larger number of denoising steps to generate an image that matches the prompt. Failure case: There are two traffic signals on a metal pole, each with three light signals on them. (c), (d): Prompts that specify an exact number of objects are difficult for diffusion models in general. The number of objects may fluctuate during the denoising process, making it difficult to predict the right number of steps. Additionally, the highest value of each score (highlighted in Table 2 in bold) is attainable only with our routing strategy. In other words, even under an unconstrained computational budget, none of the individual baselines can attain the quality that our adaptive routing achieves through prompt-based allocation across the model pool.
Table 1 quantifies the computational cost reduction achieved by our routing method compared to each baseline at equivalent quality levels (on the Sharpness metric). For inherently efficient models (e.g., Infinity [58], Turbo [8]), the savings appear marginal. However, compared to Lightning [57], a distilled SDXL variant, our method achieves the same performance at only $6\%$ of its computational cost. For higher-performance models, such as SDXL at 100 denoising steps, the savings are even more significant.
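The Sharpness metric used in Table 1 (defined in Section 5.1 as the relative squared distance between an image and its Gaussian-blurred version) can be sketched in pure NumPy. The separable-blur helper and the kernel truncation radius below are implementation choices of this sketch, not details from the paper:

```python
import numpy as np

def _gaussian_kernel1d(sigma=1.0, radius=3):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    # Separable convolution: blur rows, then columns ("same"-size output).
    k = _gaussian_kernel1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def sharpness(img, sigma=1.0):
    """Relative energy of the detail removed by a Gaussian blur of std sigma."""
    img = np.asarray(img, dtype=float)
    blurred = gaussian_blur(img, sigma)
    return float(((img - blurred) ** 2).sum() / (img ** 2).sum())
```

A checkerboard (high-frequency content) scores much higher than a flat image, matching the intuition that blurring removes more detail from sharper images.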
Diffusion models are well known for their ability to generate a high-fidelity image for an input prompt through an iterative denoising process. Unfortunately, the high fidelity also comes at a high computational cost due to the inherently sequential generative process. In this work, we seek to optimally balance quality and computational cost, and propose a framework that allows the amount of computation to vary for each prompt, depending on its complexity. Each prompt is automatically routed to the most appropriate text-to-image generation function, which may correspond to a distinct number of denoising steps of a diffusion model, or to a disparate, independent text-to-image model. Unlike uniform cost reduction techniques (e.g., distillation, model quantization), our approach achieves the optimal trade-off by learning to reserve expensive choices (e.g., 100+ denoising steps) for only a few complex prompts, and to employ more economical choices (e.g., a small distilled model) for less sophisticated prompts. We empirically demonstrate on COCO and DiffusionDB that by learning to route among nine already-trained text-to-image models, our approach is able to deliver an average quality that is higher than that achievable by any of these models alone.
# 1 Introduction Document intelligence systems are transforming modern business operations by automating the extraction of critical information from unstructured documents [35,36]. Within this domain, Table Extraction (TE) plays a pivotal role in converting tabular data into structured formats for downstream applications. Efficient TE is critical for processing business documents such as invoices, financial reports, and product catalogs, where structured tabular data drives operational workflows. Traditional approaches [10,12,13,24] combining Table Detection (TD) and Table Structure Recognition (TSR) struggle with key challenges, including format diversity across vendors [10], error propagation from element interdependence, and annotation scarcity due to document confidentiality [31]. These issues compound in industrial settings, where extraction failures disrupt downstream workflows like financial auditing, a problem explicitly highlighted in the DocILE benchmark's analysis of real-world document processing pipelines [31]. To address annotation scarcity, semi-supervised learning (SSL) leverages unlabeled data through pseudo-label selection, reducing human annotation effort. However, SSL effectiveness depends heavily on selecting high-quality pseudo-labels for training. Conventional SSL methods [17,26,27,34] rely on confidence scores that poorly correlate with actual extraction quality, as shown in Figure 1. This leads to two critical issues: incorporating erroneous high-confidence predictions into training, while discarding correct extractions with low confidence scores. Such selection errors are particularly damaging in business document analysis, where a single misplaced row or column can invalidate an entire table's structure.
Fig. 1: Confidence score vs. predicted quality score. We propose QUEST: Quality-aware Semi-supervised Table Extraction for Business Documents, a novel framework that fundamentally reimagines pseudo-label selection through verifiable quality assessment. Drawing inspiration from complexity theory [1,7], QUEST leverages a key insight: verifying the quality of an extracted table is more reliable than generating confidence scores during extraction. Our framework trains a specialized model to evaluate structural, layout, and contextual features of extracted tables, producing interpretable F1 score predictions that directly measure extraction quality. This approach, combined with diversity measures (DPP [15], Vendi Score [9], IntDiv [3]), ensures robust pseudo-label selection while preventing confirmation bias during iterative SSL training. Our key contributions are as follows: – The QUEST Framework: A quality-aware SSL method that revolutionizes business document extraction through interpretable quality assessment, evaluating both structural consistency and contextual plausibility of extracted tables. This approach bridges the gap between model confidence and actual extraction quality. – Diversity-Guided Training: A novel integration of complementary diversity metrics (DPP, Vendi Score, IntDiv) with quality-based selection, creating a robust pseudo-labeling strategy that systematically reduces error propagation in SSL iterations. – Empirical Validation: Comprehensive evaluation showing significant improvements: on our proprietary business dataset (1,000 annotated + 10,000 unannotated documents), QUEST boosts F1 from $64\%$ to $74\%$ while reducing empty predictions by $45\%$ ($12\%$ to $6.5\%$). On DocILE [31] (600 annotated + 20,000 unannotated), it achieves $50\%$ F1 (from $42\%$) with $19\%$ fewer empty predictions ($27\%$ to $22\%$).
QUEST’s modular design addresses the unique challenges of business documents, where structural consistency and data completeness are paramount. By emphasizing interpretable, feature-based assessments over raw confidence metrics, it reduces annotation overhead and avoids issues caused by unbalanced data. This approach yields transparent quality scores that enterprises can trust for operational workflows, while remaining adaptable to diverse tables. # 2 Related Work # 2.1 Table Extraction Methods Business documents present unique challenges for table extraction systems, as evidenced in datasets such as DocILE [31]. These documents often contain diverse data types, inconsistent formatting, and arbitrary content placement, making accurate extraction particularly challenging [23]. The scarcity of annotated data in business settings remains a significant constraint for supervised learning approaches [22]. Table extraction has evolved from early heuristic approaches [37] to modern learning-based methods. This task typically involves two main components: Table Detection (TD) [11] and Table Structure Recognition (TSR) [5]. Early approaches relied heavily on rule-based systems and geometric analysis, while recent deep learning architectures [19,20] and transformer-based models have shown notable improvements by leveraging global context and reducing post-processing steps. However, these approaches typically require substantial labeled data, which is often unavailable in business settings. Recent advancements such as Table Transformer (TATR) [24] have demonstrated strong performance by leveraging DETR-based architectures [4] trained on table-specific datasets like PubTables-1M [24] and FinTabNet [38]. Thomas et al. [30] address common extraction errors in business documents through specialized post-processing, highlighting the ongoing challenges when working with business tables.
More efficient architectures like YOLOv9 [32], chosen for its validated performance in production document analysis, offer improved speed-accuracy trade-offs, making them suitable for practical document analysis systems, but still require substantial labeled data for optimal performance. Evaluation metrics for these systems have also evolved, from simple structural measures like tree edit distance similarity (TEDS) [39] and directed adjacency relations (DAR) [12,13], to more comprehensive metrics like GRITS [25], which evaluates tables directly in their natural matrix form. GRITS offers three variants: GRITS-Top for topology recognition, GRITS-Con for content recognition, and GRITS-Loc for location recognition. # 2.2 Semi-Supervised Learning in Document Analysis Semi-supervised learning (SSL) [16,29] has evolved from classification, which assigns a single image-level label, to detection-specific frameworks [27], which require both localization and identification. Object detection's complexity, particularly for interconnected structures like tables, necessitates adapting SSL for spatial consistency of pseudo-labels and reliable object-level confidence measures. Although early SSL used multi-stage training [17], recent end-to-end methods [34] show promise. In document analysis, model confidence has traditionally driven pseudo-label selection [26], but its limitations are shown in several works [14,28,40]. Recent table detection work [8] improves pseudo-label quality via novel matching strategies, but still primarily relies on confidence measures that poorly correlate with actual extraction quality. Building on insights from transfer learning approaches [21] that demonstrate the complementary value of diversity and similarity metrics, our work adapts this paradigm to the pseudo-label selection task.
Addressing quality assessment, diversity measures like the Determinantal Point Process [15], Vendi Score [9], and IntDiv [3] (a metric based on average pairwise dissimilarity, a concept originating in [6]) have emerged. Despite these advances, selecting appropriate unlabeled data remains challenging due to domain shifts and quality variations in real-world business documents [18], a gap our QUEST framework specifically addresses through its quality-aware approach. # 3 Proposed Method # 3.1 System Overview As illustrated in Figure 2, our framework iteratively improves table extraction through quality-aware pseudo-labeling. The process begins with Classic Inference, where initial extraction models perform Table Detection (TD) followed by Table Structure Recognition (TSR) on annotated data. The annotated dataset is split 70/15/15 into training, validation, and test sets, maintained consistently across our framework. For prediction quality assessment, we employ a quality model trained on features from both ground truth (GT) and prediction results. This model evaluates tables using explainable characteristics, comparing documents against expected feature distributions. The Quality section of Figure 2 illustrates this model's role in pseudo-label selection. The Semi-Supervised Learning phase applies the trained extraction models to unannotated data. After selecting high-quality pseudo-labels, we implement diversity optimization to mitigate confirmation bias. From quality-filtered candidates, we select a subset maximizing diversity using DPP, Vendi Score, and IntDiv, ensuring a more representative training set. This process operates iteratively: as shown in the rightmost section of Figure 2, extraction models are trained from scratch at each step $(T_{n+1})$ using initial annotations combined with filtered pseudo-labels from previous iterations.
The framework progressively refines extraction performance by leveraging both annotated and unannotated data through diverse, high-quality pseudo-labels. Fig. 2: Pipeline of our quality-aware SSL framework for table extraction. # 3.2 Quality Assessment Model We introduce a quality assessment model that predicts F1 scores for extracted tables through a dual-phase approach: extracting domain-specific features and enriching them via statistical transformations. This design leverages expert knowledge while capturing data-driven patterns. The iterative data augmentation process can be formalized as: $$ D_{t+1} = D_0 \cup f_t(U) $$ where $D_0$ represents the initial annotated dataset, $U$ the unlabeled data pool, and $f_t$ the quality-aware selection function at iteration $t$. The dataset is curated to include only information that substantively benefits training, ensuring both quality and relevance. Feature Engineering Our model uses layout, structural, and contextual features of the table, as well as confidence scores from the extraction models: TD, TSR, and their product, TE, which reflects the sequential nature of the two steps. A detailed list of features is provided in Table 1. Feature Transformation Let $x$ be the raw feature value, $\mu$ the mean, and $\sigma$ the standard deviation from the training distribution. Each base feature (21 in total) generates 5 engineered variants: – Raw: Original measurement – Z-Score: $(x - \mu)/\sigma$ – Deviation Magnitude: $|x - \mu|/\sigma$ – Outlier Flag: 1 if $x > Q_{0.95}$, else 0 – Normal Range: 1 if $\mu - \sigma \leq x \leq \mu + \sigma$, else 0 This yields $21 \times 5 = 105$ derived predictors, plus three confidence scores $(\mathrm{conf_{TD}}, \mathrm{conf_{TSR}}, \mathrm{conf_{TE}})$, totaling 108 features.
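The 5-variant expansion above can be sketched directly. The per-feature (mean, std, 95th-percentile) statistics are assumed to come from the training distribution, and the function name is illustrative:

```python
import numpy as np

def engineer_features(base_feats, train_stats, confidences):
    """Expand 21 base features into 5 variants each, then append the three
    extraction confidences: 21 * 5 + 3 = 108 predictors.

    base_feats:  sequence of 21 raw feature values
    train_stats: per-feature (mu, sigma, q95) from the training distribution
    confidences: (conf_TD, conf_TSR, conf_TE)
    """
    out = []
    for x, (mu, sigma, q95) in zip(base_feats, train_stats):
        z = (x - mu) / sigma
        out += [
            x,                                     # raw measurement
            z,                                     # z-score
            abs(z),                                # deviation magnitude
            float(x > q95),                        # outlier flag
            float(mu - sigma <= x <= mu + sigma),  # normal-range flag
        ]
    return np.array(out + list(confidences))
```

Each derived variant thus encodes where a new table's measurement sits relative to the training distribution, which is what makes the resulting quality scores interpretable.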
This approach combines raw measurements with their positions in the training distribution, offering interpretable features rather than abstract embeddings. Table 1: Summary of quality assessment features Quality Model Training and Application Our quality model is trained once on annotated data where GT enables accurate F1 calculation, using the same 70/15/15 data split as the extraction models. Let $Q : \mathbb{R}^{108} \to [0, 1]$ represent our quality function that maps feature vectors to predicted F1 scores. For a prediction $p$ on document $d$, we define its feature vector as: $$ X_p = [f_1, f_2, \ldots, f_{21}, \mathrm{trans}(f_1), \ldots, \mathrm{trans}(f_{21}), \mathrm{conf_{TD}}, \mathrm{conf_{TSR}}, \mathrm{conf_{TE}}] $$ where $f_i$ are the base features from Table 1 and $\mathrm{trans}(f_i)$ represents the set of statistical transformations. The quality model is trained to minimize the error between predicted quality scores and actual F1 scores: $$ \operatorname*{min}_{Q} \mathcal{L}(Q(X_p), F1_p) $$ where $\mathcal{L}$ represents a suitable loss function, $X_p$ is the feature vector for prediction $p$, and $F1_p$ is the actual F1 score calculated by comparing prediction $p$ to its GT. This trained model remains unchanged throughout the SSL process, providing consistent quality estimation for reliable pseudo-label selection. # 3.3 Pseudo-label Selection Our pseudo-label selection process has three key steps: (1) gathering unlabeled data, (2) filtering out low-quality entries, and (3) selecting a maximally diverse subset for pseudo-labeling. Unlabeled Data Curation We assume access to a large corpus of unlabeled images. To identify those that are most likely to contain useful tables, we apply the following filtering steps: 1. Empty Check: Discard blank or near-blank images. 2.
Orientation Check: Among non-empty images, retain only those recognized as upright via OCR-based orientation detection. 3. Table Presence Verification: (a) Run a table-detection model trained on our annotated data. If a table is detected, keep the image. (b) Otherwise, apply a publicly available table-detection model with a confidence threshold $\theta_{\mathrm{high}}$. If the confidence is at least $\theta_{\mathrm{high}}$, keep the image. (c) If the confidence is below $\theta_{\mathrm{high}}$ but above another threshold $\theta_{\mathrm{mid}}$, we query a vision-language model (VLM) with a prompt such as: “Is there a table of items in the image? Respond only with True or False.” If the response is “True,” keep the image; otherwise, discard it. Quality Filtering Each extracted table receives a quality score (estimated F1) from model $Q$. We retain tables where: $$ Q(X_p) \geq \alpha, $$ with threshold $\alpha$ and feature vector $X_p$ as defined in the quality model section. This strategy enhances traditional confidence-based filtering by incorporating multiple quality indicators. Diversity Selection From the quality-filtered set $\mathcal{D}_q$, we seek a subset that is both diverse and informative. Our approach comprises the following steps: 1. DPP-Based Subset Selection. We construct an RBF kernel: $$ K_{ij} = \exp\left( -\frac{\|x_i - x_j\|^2}{2\sigma^2} \right), \quad \sigma = \mathrm{median}\{\|x_i - x_j\|\}, $$ and select a subset $S^*$ of size $k$ that maximizes the submatrix determinant: $$ S^* = \arg\operatorname*{max}_{S \subset \mathcal{D}_q, |S| = k} \operatorname{det}(K_S). $$ 2. Diversity Measures. We quantify the diversity of a set $T$ using two main metrics: (a) Vendi Score (VS).
$$ \mathrm{VS}(T) = \exp\biggl( -\sum_{i=1}^{n} \lambda_i \log \lambda_i \biggr), $$ where $\{\lambda_i\}$ are the eigenvalues of the normalized kernel $K/n$. This can be interpreted as the effective number of unique elements in $T$. (b) Internal Diversity (IntDiv). $$ \mathrm{IntDiv}(T) = 1 - \frac{1}{n(n-1)} \sum_{i \neq j} \exp\left( -\frac{\|x_i - x_j\|^2}{2\sigma^2} \right). $$ This measures average pairwise dissimilarity within $T$. 3. Candidate Subset Evaluation. Let $A_{\mathrm{train}}$ be the annotated training set, and $S_k$ a DPP-sampled subset of size $k$. We form: $$ T_k = A_{\mathrm{train}} \cup S_k $$ and define its overall diversity as: $$ D(T_k) = \mathrm{VS}(T_k) \times \mathrm{IntDiv}(T_k). $$ The optimal subset $S_{\mathrm{opt}}$ maximizes: $$ S_{\mathrm{opt}} = \arg\operatorname*{max}_{S_k} D\bigl(A_{\mathrm{train}} \cup S_k\bigr). $$ # 4 Experiments # 4.1 Experimental Setup Datasets We evaluate on two datasets with a 70/15/15 train/val/test split: (1) a private business document collection with 1,639 annotated tables and 10,109 unlabeled documents, primarily invoices and financial statements containing structured tables; (2) the public DocILE dataset [31], processed following Thomas et al. [30], with 958 annotated tables and 19,196 unlabeled multi-page PDFs from batch one, spanning diverse business layouts. Implementation details We select YOLOv9-T [32] as our extraction model for its lightweight architecture and stability at the time of experiments. Training from scratch uses SGD (lr=0.01, batch=16) for up to 500 epochs with early stopping (patience=100).
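The kernel, diversity scores, and determinant-maximizing subset selection from Section 3.3 can be sketched as follows. The greedy loop is a standard MAP approximation to the exact (NP-hard) DPP argmax, which the paper does not necessarily use:

```python
import numpy as np

def rbf_kernel(X):
    # Pairwise squared distances, with the median heuristic for sigma.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    off_diag = np.sqrt(d2[~np.eye(len(X), dtype=bool)])
    sigma = np.median(off_diag) if off_diag.size else 1.0
    return np.exp(-d2 / (2 * sigma ** 2))

def vendi_score(X):
    # Effective number of unique elements: exp of the eigenvalue entropy of K/n.
    lam = np.linalg.eigvalsh(rbf_kernel(X) / len(X))
    lam = lam[lam > 1e-12]
    return float(np.exp(-(lam * np.log(lam)).sum()))

def int_div(X):
    # One minus the average pairwise similarity.
    K = rbf_kernel(X)
    n = len(X)
    return float(1.0 - (K.sum() - np.trace(K)) / (n * (n - 1)))

def greedy_dpp_subset(X, k):
    # Greedy MAP approximation to argmax_{|S|=k} det(K_S).
    K = rbf_kernel(X)
    selected = []
    for _ in range(k):
        candidates = [i for i in range(len(X)) if i not in selected]
        dets = [np.linalg.det(K[np.ix_(selected + [i], selected + [i])])
                for i in candidates]
        selected.append(candidates[int(np.argmax(dets))])
    return selected
```

For $n$ identical points the Vendi Score approaches 1 (one effective element), while for $n$ well-separated points it approaches $n$; IntDiv similarly grows from 0 toward 1 as points spread apart.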
Quality assessment uses XGBoost, with parameters tuned via randomized search: estimators (100-1000), max depth (3-12), and learning rate (0.01-0.3). For unlabeled data filtering, we employ three TD models: our YOLOv9, DETR fine-tuned on ICDAR19 [10], and Qwen2-VL-7B [2,33], using thresholds $\theta_{\mathrm{high}} = 0.95$ and $\theta_{\mathrm{mid}} = 0.5$. The quality threshold $\alpha$ is set to 0.9, balancing high confidence with achievability for business documents. Evaluation Protocol We evaluate using GRITS-Con [25] for structural and textual accuracy, plus: – Empty prediction rate: percentage of documents with no detected tables – Document-level analysis: F1-score changes between iterations (per document) Our evaluation examines: (1) whether SSL with pseudo-labels outperforms training on labeled data alone, and (2) how our quality-based selection compares to traditional confidence filtering, using each method's best model. # 4.2 Results Performance Analysis Our experimental results demonstrate the superior performance of the quality-aware approach compared to the confidence-based method across both datasets, as shown in Figures 3 and 4. The effectiveness of our approach varies significantly between datasets, primarily due to their inherent characteristics and initial baseline performances. On the private dataset, where baseline performance starts at an F1 score of $64\%$, both SSL approaches show improvement, but with notably different patterns. The quality-based approach achieves the best performance with an F1 score of $74\%$, while maintaining precision at $83\%$ despite reducing empty predictions from $12\%$ to $6.5\%$ (Table 2). This reduction in empty predictions is particularly significant, as empty predictions artificially inflate precision (resulting in Recall=0, Precision=1, F1=0).
The confidence-based approach, while achieving its best results at iteration 2, begins to loop after iteration 5, limiting its potential for further enhancement. Fig. 3: SSL performance metrics on the private dataset, quality-based (solid) vs confidence-based (dashed): core metrics, net document changes vs baseline (% improved minus % degraded), additional training examples (base: 1,146), and empty predictions. The net document change is defined as the difference between the percentage of documents that improved and degraded in F1 score. In our case, this $+33\%$ net improvement demonstrates the robustness of the quality-based approach, as it indicates that significantly more documents benefited from the method than were negatively impacted by it. The DocILE dataset presents a more challenging scenario, starting with a lower baseline F1 score of $42\%$ (Table 2). In this context, we observe a stark contrast between approaches: the confidence-based method fails entirely, generating no pseudo-labels above the quality threshold and remaining at baseline performance. In contrast, our quality-oriented approach successfully identifies valuable candidates for training. This demonstrates two crucial points: first, SSL methods require adequate initial performance to generate useful pseudo-labels, and second, our method's ability to identify high-quality samples proves especially valuable in challenging conditions.
The effectiveness of our approach is reflected in a net document improvement of 15%, reaching an F1 score of 50% while reducing empty predictions from 27% to 22%. These results demonstrate that our approach can work effectively across a variety of business documents, suggesting potential adaptability to other document types. The DocILE dataset’s more modest improvements can be attributed to three main factors: its greater document variety, smaller training set (670 vs 1,146 documents), and its original focus on line item retrieval rather than complete table extraction. This distinction is significant as line item retrieval only annotates specific columns of interest within tables, while table extraction requires all table content to be annotated. This partial annotation scheme makes it more challenging for models to learn complete table structures, as they must simultaneously identify table boundaries while implicitly learning which columns are relevant. Despite these challenges, our quality-aware approach demonstrates consistent improvement across both datasets.

Fig. 4: SSL Performance Metrics on DocILE Dataset: Quality-based vs Confidence-based Approaches

Table 2: Performance Comparison of SSL Approaches on Both Datasets

Quality Model Evaluation Our quality assessment framework demonstrates significant improvements over traditional confidence scores through three key findings: stronger correlation with actual performance, dataset-specific feature importance patterns, and the complementary role of confidence metrics. The framework achieves substantially stronger correlation with F1 scores than confidence metrics alone. On our private dataset (Figure 1), it reaches r=0.80 (RMSE=0.13) versus confidence scores’ r=0.22 (RMSE=0.24). For DocILE, the quality model maintains r=0.67 (RMSE=0.22) compared to confidence scores’ r=0.46 (RMSE=0.26). Three factors explain this superiority.

Fig. 5: Feature importance patterns in quality estimation across datasets

First, statistical transformations dominate feature importance (Figure 5), particularly z-score variants, which occupy 8 of the top 10 positions. These transformations compare new table features against statistics from known correct tables in our training set. Regular z-scores (5 features) indicate whether measurements deviate above or below expected values, while absolute z-scores (3 features) capture the magnitude of these deviations regardless of direction. Second, dataset characteristics shape feature priorities. Private documents, typically containing invoice-style tables with regular row patterns, rely on layout features like height_ratio_variation_abs_zscore ($\sim 0.085$) to detect parsing errors through unexpected height variations. DocILE’s complex documents, containing multiple table-like elements, prioritize structural features such as header_inside_zscore ($\sim 0.10$) to verify whether header content matches expected patterns. Contextual features like internal_whitespace_density remain consistently important across both datasets. Third, confidence metrics show consistently low importance across datasets, with none in the top 25 features. The highest-ranked confidence metrics are conf_TD (31st, private dataset) and conf_TE (28th, DocILE). This pattern holds across all confidence types (TD, TSR, TE). The finding validates our feature set: while quality indicators vary by dataset, our structural and statistical features better capture table quality than raw confidence scores. This combination of statistical normalization, dataset-aware prioritization, and contextual analysis enables robust quality estimation across document types, as evidenced by the strong correlations in Figure 1.
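The z-score features described above normalize a new table's measurements against statistics from known-correct training tables; a minimal sketch (function name and reference values are illustrative, not from the paper):

```python
import statistics

def zscore_features(value, reference_values):
    """Signed z-score and its magnitude for one table-level measurement
    (e.g. row-height ratio variation), relative to the same measurement
    observed on known-correct training tables."""
    mu = statistics.mean(reference_values)
    sigma = statistics.stdev(reference_values)
    z = (value - mu) / sigma            # signed: direction of deviation
    return z, abs(z)                    # absolute: magnitude only

# Reference statistics from correctly extracted tables (made-up values).
reference = [0.10, 0.12, 0.11, 0.09, 0.13]
z, abs_z = zscore_features(0.35, reference)  # unusually high variation
```

A large absolute z-score flags a table whose layout deviates strongly from what correct extractions look like, regardless of direction.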
# 5 Discussion

# 5.1 Qualitative Analysis

Model Disagreements To complement our quantitative evaluation, we analyze three representative cases from the DocILE dataset where quality and confidence models diverge (Figure 6). The most common scenario (Figure 6a) shows the quality model correctly accepting a good document (quality: 0.95) that confidence rejects (0.68). A rarer case (Figure 6b) shows confidence correctly rejecting a poor document (0.42) that quality accepts (0.91). Finally, an extremely rare case (Figure 6c, observed once) shows quality incorrectly rejecting a good document (0.67) that confidence accepts (0.91). This document was excluded by diversity checks. These examples highlight the quality model’s ability to capture document quality while exposing its occasional limitations.

Fig. 6: Quality (Q) vs confidence (C) divergence cases, where k = keep, r = remove.

Detection Improvements We analyze two DocILE documents in which our improved model (iteration 9) corrects the errors of the base model (Figure 7). The first case shows successful detection of a previously missed column, improving F1 from 0.4 to 1.0 (Figures 7a, 7b). The second demonstrates correct row separation instead of erroneous merging, raising F1 from 0.2 to 0.75 (Figures 7c, 7d).

Fig. 7: Base model vs iteration 9 improvements (DocILE).

# 5.2 Limitations

Computational Considerations Our SSL pipeline requires 15 hours per iteration on an NVIDIA RTX A6000 GPU with an Intel Xeon Silver 4210R CPU. For our private dataset (1,146 documents), this comprises 12.5h training and 2.8h pseudo-label generation; DocILE requires 10h/4.5h due to its larger unlabeled set. While industrially feasible, we limited testing to 10 iterations. Future work could reduce costs via progressive learning, model distillation, or selective retraining to enable more iterations.
Initial Model Requirements QUEST requires initial models capable of producing predictions that meet the quality threshold, as the DocILE results show: only 4-6 predictions per iteration met quality thresholds from thousands of candidates, while our private dataset’s stronger baseline yielded more viable pseudo-labels. Though restrictive, this strict filtering ensures reliable improvement. Future work could explore dynamic thresholds based on dataset characteristics.

Technical Constraints The framework currently processes single tables without spanning cells, with multi-table handling planned. Quality assessment performance depends on training examples, with DocILE results indicating room for improvement in low-resource settings.

# 5.3 Industrial Applications

Quality Assessment as Rejection Mechanism Our quality model delivers interpretable predictions (RMSE: 0.13 between predicted and actual F1 scores) to automatically reject low-quality extractions, identifying issues like abnormal spacing, missed content, and structural inconsistencies. This reduces manual verification needs while offering more reliable metrics than confidence scores.

Domain Adaptability QUEST particularly benefits TSR, where structural elements must form a coherent whole. While adapting to new domains requires feature engineering based on domain-specific patterns, our interpretable approach offers advantages over automated feature extraction through embeddings. The framework scales effectively with multi-information processing tasks.

Deployment Considerations QUEST’s modular design enables flexible deployment with adjustable quality thresholds (recommended: 0.9), benefiting from higher thresholds with stronger models. The framework supports targeted human-in-the-loop integration for low-quality and borderline-quality predictions, optimizing effort while maintaining automation.
Automating table extraction (TE) from business documents is critical for industrial workflows but remains challenging due to sparse annotations and error-prone multi-stage pipelines. While semi-supervised learning (SSL) can leverage unlabeled data, existing methods rely on confidence scores that poorly reflect extraction quality. We propose QUEST, a Quality-aware Semi-supervised Table extraction framework designed for business documents. QUEST introduces a novel quality assessment model that evaluates structural and contextual features of extracted tables, trained to predict F1 scores instead of relying on confidence metrics. This quality-aware approach guides pseudo-label selection during iterative SSL training, while diversity measures (DPP, Vendi score, IntDiv) mitigate confirmation bias. Experiments on a proprietary business dataset (1000 annotated + 10000 unannotated documents) show QUEST improves F1 from 64% to 74% and reduces empty predictions by 45% (from 12% to 6.5%). On the DocILE benchmark (600 annotated + 20000 unannotated documents), QUEST achieves a 50% F1 score (up from 42%) and reduces empty predictions by 19% (from 27% to 22%). The framework's interpretable quality assessments and robustness to annotation scarcity make it particularly suited for business documents, where structural consistency and data completeness are paramount.
# I. INTRODUCTION

OpenAPI is the state of the practice for describing interfaces for integrating systems. It contains formal elements like paths and natural language constituents such as descriptions. For integrating these systems automatically, automated service composition using Large Language Models (LLMs) has recently been proposed [1], [2], [3]. These approaches exploit the capabilities of LLMs to process formal and natural language input, combining them with the inherent nature of automated service composition of decoupling and independent lifecycle management. While saving on manual modeling efforts by relying on already broadly available OpenAPIs, the approaches face the challenge of limited input token length [3]. This bounds the quantity and extent of the input service descriptions. Even for proprietary models with a large input token context, e.g., OpenAI’s GPT-4 with a context size of 128,000 tokens [4], an economic constraint emerges, as use of these models is billed in relation to the input and output token count. Therefore, a smaller prompt length is beneficial both for inserting further service documentation and for reducing proprietary models’ usage costs. To address these challenges, Retrieval Augmented Generation (RAG) [5] has emerged as a promising technique. In such an approach, the external information is collected in a database, typically structured as a set of documents or document chunks. The primary goal is retrieving only a small subset of the most relevant documents or document chunks, which is then inserted into the prompt [5]. How to optimally apply RAG for endpoint discovery is open to investigation, leading to the following research questions that we address in this paper:

RQ1. How to benchmark service discovery with natural language queries across the most relevant domains?
RQ2. How best to preprocess, i.e., chunk, OpenAPIs for RAG endpoint discovery?
RQ3. Can LLM agents be employed to reduce token count further and improve retrieval performance?
For answering RQ1 and extending our previous work [6], we propose the novel service discovery benchmark SOCBench-D, comprising pairs of a natural language query $q$ and expected endpoints $e_{\mathrm{expected}}$, to evaluate RAG for OpenAPI endpoint discovery thoroughly. We rely on the Global Industry Classification Standard (GICS) [7] as a leading standard for classifying industries into sectors, i.e., domains, to ensure generalizability across various domains. It provides the following domains: energy, materials, industrials, consumer discretionary, consumer staples, health care, financials, information technology, communication services, utilities, and real estate. Similar to the ToolBench approach [8], which employs ChatGPT to create training data, we use an LLM to construct for each of the domains five services with ten endpoints each as OpenAPIs. We validate the OpenAPIs syntactically using an OpenAPI validator and semantically using another LLM. Using the services for each domain, we let an LLM create ten queries for a random subset $e_{\mathrm{expected}}$ of the endpoints, i.e., $e_{\mathrm{expected}}$ is the solution to $q$. We recheck the solution correctness using an LLM and preclude ambiguity between the queries related to the same domain by defining a similarity threshold relying on an embedding model. To reduce the influence of randomness, we create five instances of the benchmark, resulting in 5 (benchmark instances) $\times$ 11 (domains) $\times$ 10 (queries) $= 550$ queries in total. Based on SOCBench-D, we compute the accuracy metrics across all domains to determine the accuracy and generalizability of the approach. To answer RQ2, we develop an OpenAPI RAG system that takes service descriptions as input. We apply various token-based and LLM-based chunking strategies to split the documentation and evaluate them based on retrieval quality. The token-based strategies process the document using a classical parser and then split the parts into equal-sized chunks.
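The ambiguity check between same-domain queries can be sketched as follows. The exact procedure is not specified above, so the greedy filtering order, the 0.9 threshold, and the toy embedding vectors are assumptions for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def deduplicate_queries(query_embeddings, threshold=0.9):
    """Greedily keep a query only if its embedding stays below the
    similarity threshold with every previously accepted query."""
    accepted = []
    for name, emb in query_embeddings:
        if all(cosine(emb, kept) < threshold for _, kept in accepted):
            accepted.append((name, emb))
    return [name for name, _ in accepted]

kept = deduplicate_queries([
    ("q1", [1.0, 0.0, 0.0]),
    ("q2", [0.99, 0.14, 0.0]),  # near-duplicate of q1 -> discarded
    ("q3", [0.0, 1.0, 0.0]),
])
```

In the benchmark, the embeddings would come from the embedding model rather than being hand-written vectors.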
The LLM-based strategies let an LLM create a description, i.e., a summary or a question, for each endpoint and then use these descriptions for similarity matching. We employ mainstream open-source and proprietary embedding models for similarity matching, which can create an embedding vector for an input. The similarity between two inputs can then be determined by comparing their embedding vectors using, e.g., the cosine similarity. We evaluate the OpenAPI RAG and the chunking strategies by relying on our novel SOCBench-D benchmark and the already available RestBench benchmark for LLM agents [9], measuring recall and precision for each chunking strategy. We employ SOCBench-D to obtain generalizable results across multiple domains and the RestBench benchmark, consisting of the Spotify and TMDB OpenAPI descriptions and corresponding queries, each with a set of endpoints as the sample solution, for real-world applicability. To address RQ3, we propose an LLM agent called Discovery Agent. As LLM agents allow the usage of external tools, we investigate using one tool that filters and enters the LLM endpoint summaries into the prompt using RAG, while the second tool allows the retrieval of the endpoint details on demand. We resort to the same benchmarks for evaluation and measure recall and precision. As the chunking strategy, we rely on the LLM-based summary strategy with OpenAI’s text-embedding-3-large embedding model [10]. The remainder of the paper is structured as follows. First, we provide an overview of related works regarding service discovery and LLMs in Section II. Then, we present how to use RAG for endpoint discovery and the OpenAPI chunking strategies in Section III. We introduce SOCBench-D and evaluate and discuss the OpenAPI RAG and the different chunking strategies in Section IV. Final considerations are presented in Section V.

# II. STATE OF THE ART

Service discovery has been actively investigated in the fields of networking and information systems.
Next, we provide a brief review of the state of the art in that field and an exploration of recent trends in service discovery, including LLMs and LLM agents.

# A. Service Discovery

The most common service discovery implementation is based on service registries, which collect information about available services and offer search facilities. The service registry is usually backed by a component residing at the middleware or application levels [11]. It is characterized by the syntax used to describe the services and their invocation and the expressive power of the available query language. The typical integration model is a pull model where service consumers search for the required services. Some standards, such as UPnP, are based on a push model, where service providers regularly advertise their services [12]. In the early days of XML-based Web services, the infrastructure for service discovery was the Universal Description, Discovery, and Integration (UDDI) specification [13]. UDDI had a global incarnation called the UDDI Business Registry (UBR), intended to offer an Internet-wide repository of available web services and promoted by IBM, Microsoft, and SAP. Unfortunately, UBR never gained widespread adoption and was short-lived (2000-2006). Significant research in the early days focused on enhancing service discovery on UDDI, improving search capabilities, and creating federated registries, e.g., [14], [15], [16]. Alternatively, WS-Discovery is a multicast protocol that finds web services on a local network. Nowadays, OpenAPI is the de facto standard for describing services. While not offering a discovery protocol and mechanism, given its popularity, OpenAPI would also benefit from discovery [17]. Several authors have proposed the use of additional infrastructure for discovery in the form of centralized repositories (SwaggerHub or Apiary), service registry integration (Consul, Eureka), API Gateways (Kong, Apigee), or Kubernetes annotations (Ambassador).
Populating registries of services requires effort from service providers, which often hinders the success of such approaches, especially if the service provider is expected to provide extensive additional information beyond the service endpoints. This additional effort has often been the reason for the failure of some of these technologies, most notably UBR. Approaches confined to specific applications, domains, or enterprises have been more successful, e.g., Eureka. Developed by Netflix as part of its microservices architecture [18], Eureka helps clients find service instances described by host IP, port, health indicator URL, and home page. Developers can add optional data to the registry for additional use cases. While classical incarnations like UDDI used to be comprehensive, they required extensive modeling, e.g., as semantic annotations. To avoid falling into the same pit, our approach proposed here relies on already broadly available OpenAPI specifications.

# B. Large Language Models

LLMs represent one of the recent advancements in the Natural Language Processing (NLP) and machine learning fields [19], [20], [21]. Often containing billions of parameters, these models are trained on extensive text corpora to generate and manipulate text with human-level proficiency [22]. They are primarily based on an encoder-decoder architecture called Transformers [23], which has been further refined to improve text generation tasks using decoder-only models such as GPT [24]. Usually, the input is a natural language task called a prompt, which first needs to be translated into a sequence of input tokens. The model processes the prompt and returns an output token sequence, which can then be translated back into a natural language answer. As these models have shown the ability to capture intricate linguistic nuances and even semantic contexts, they can be applied to a wide range of tasks, including in software engineering [25].
LLMs can be used to create integrations based on endpoint documentation automatically [1], [2], [3]. Yet, these face strict input token limitations, e.g., 128,000 tokens for current OpenAI models [4], [3]. Another approach is encoder-only models such as BERT [26], often referred to as embedding models. They allow condensing the contextual meaning of a text into a dense vector, termed an embedding. Using similarity metrics such as the dot product, cosine similarity, or Euclidean distance allows for assessing the similarity of two input texts. Embedding models are usually used for the similarity search in RAG systems [27], a technique we also exploit in our implementation. In previous work, we proposed Compositio Prompto as an architecture to employ LLMs to automatically generate a service composition as code based on service documentation, a natural language task, and an input and output schema. Concerns were the limited input token count of the LLM and that the LLM generated imperfect results, requiring further manual effort to make the code operational [3]. In the present work, we analyze the usage of RAG to alleviate the limited input token count issue.

# C. LLM Agents

LLMs have shown remarkable capabilities in solving complex tasks by decomposing them in a step-by-step fashion [28] or by exploring multiple solution paths simultaneously [29]. Typically, these plans are generated iteratively by using the history of the previously generated steps to guide the generation of the next step. Recent studies have shown the potential of providing LLMs access to external tools to boost their inference capabilities and add further knowledge. Such an approach consists of prompting the LLM to interact with external tools to solve tasks, thus offloading computations from the LLM to specialized functions. Notable examples of such tools include web browsers [30], calculators [31], and Python interpreters [32].
In practice, this can be realized as a Python function called during the interaction with the LLM. The LLM agent paradigm [33], [34], [35] combines the concepts of external tool usage and the planning capabilities of LLMs, and adds a shared memory to solve complex tasks. Given an input task, an LLM agent uses its reasoning capabilities to decompose the task into a set of simpler subtasks. For each subtask, the LLM finds and interacts with the set of tools to solve the subtask. Then, based on the outcome of the current task and the history of previously executed subtasks, the LLM agent generates a new subtask and repeats the steps mentioned above or terminates if the original task is solved. To instruct the processing, the outcome of the tool invocations and the history of the subtasks are stored in the memory, typically consisting of the LLM agent’s own context. Within this work, we apply the LLM agent paradigm to create the Discovery Agent as an LLM agent for endpoint discovery. A critical challenge for LLM agents is the accessibility of a set of common APIs and tasks for their evaluation, e.g., tested using benchmarks like API Bank [36] or RestBench [9]. API Bank is a benchmark consisting of a set of APIs exposed through a search engine. Unfortunately, the available code of the benchmark is incomplete: all APIs are available, but only a few of the used queries. The RestBench benchmark contains a collection of tasks and endpoints expressed using the OpenAPI specification of Spotify and TMDB [9]. We employ RestBench to validate our results, given that it is the most extensive benchmark available. OpenAPIs within LLM agents have been used in RestGPT [9] and Chain of Tools [37]. The former combines multiple LLM agents to solve complex tasks by interacting with a set of tools exposed using the OpenAPI specification. The latter solves an input query by framing the problem as a code generation task and interacts with the set of tools to generate Python code.
In contrast, our Discovery Agent does not directly interact with the endpoints found in the OpenAPIs. Instead, it filters and returns matching endpoints that can be used for subsequent processing. Even when considering the similarity to the tool selection within LLM agents, the task of selecting a set of tools from a larger pool to solve a specific problem remains relatively underexplored [38]. Existing research primarily focuses on the a priori selection of human-curated tools [39], heuristic-based methods for tool selection [40], choosing the relevant tool by scoring each query against every tool using a similarity metric between user queries and API names [41], and embedding-based semantic retrieval using a combination of different vector databases [38]. With our work, we contribute the analysis of preprocessing OpenAPIs to this corpus.

# III. OPENAPI RAG

We first introduce the general architecture to employ RAG for endpoint discovery. Then, we investigate how to chunk OpenAPIs as preprocessing for RAG.

# A. RAG for Endpoint Discovery

RAG comprises a preprocessing step ahead of the answer generation of an LLM to enrich the prompt with additional data. Therefore, a retrieval component performs a semantic search based on some knowledge sources. Usually, the semantic search is done by embedding similarity, and the data from the knowledge sources is reduced to small chunks to allow fine-grained information retrieval [5]. The application of RAG for endpoint discovery, i.e., the OpenAPI RAG, is shown in Figure 1. Initially, the chunking strategy determines how the chunks are created from the OpenAPIs, i.e., how many chunks are created and what they contain. Each chunk has an embedding as metadata for similarity search in addition to its content. The chunking strategy specifies which data is used as input to the embedding model to create the embedding. This input does not have to match the chunk content, e.g., it can be a summary instead of the entire content.
The chunks are finally stored in the chunk database. For retrieval, the user submits in $\textcircled{1}$ a natural language query $q$ to the chunk retriever, which converts $q$ into the embedding $e$ using the same embedding model as for the chunk creation. In $\textcircled{2}$ , the chunk retriever queries the chunk database using $e$ . The chunk database compares $e$ using a similarity metric with the embeddings of the service chunks contained in the database. The results are the top $k$ most similar chunks according to the metric, which are then returned to the chunk retriever in $\textcircled{3}$ . Finally, in $\textcircled{4}$ , the chunk retriever forwards the retrieved results to the user, who can add them to their prompt either manually or automatically through integration into their tooling.

Figure 1. RAG for Endpoint Discovery

The benefit of employing RAG is the insertion of only the gist of the available information, which allows picking only the most relevant information for the fixed LLM context size. A drawback is that, depending on the retrieval algorithm, not all relevant information may be retrieved. Further, fixing $k$ has the advantage of controlling the result size. An alternative is to return all chunks above a certain similarity threshold, introducing the question of the optimal cutoff. Figure 2 shows how the Discovery Agent, highlighted in yellow, extends the RAG from Figure 1. Instead of passing $q$ to the RAG, the user submits it in $\textcircled{1}$ to the Discovery Agent, which then iteratively decomposes $q$ into a set of fine-grained tasks in $\textcircled{2}$ . Breaking down the query into smaller, more manageable tasks can potentially fill the gap between the coarse semantics of the query and the specificities in the service documentation. In $\textcircled{3}$ , the Discovery Agent submits each task to the RAG to retrieve the set of relevant chunks to solve the current task specifically.
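The core retrieval (steps $\textcircled{2}$ and $\textcircled{3}$ above) reduces to a similarity-ranked top-$k$ lookup. A minimal sketch with toy embeddings; a real system would obtain $e$ from the embedding model in step $\textcircled{1}$, and the linear scan stands in for an actual vector database:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def retrieve(query_embedding, chunk_db, k=2):
    """Rank stored chunks by embedding similarity and return the
    contents of the top-k chunks."""
    ranked = sorted(chunk_db,
                    key=lambda c: cosine(query_embedding, c["embedding"]),
                    reverse=True)
    return [c["content"] for c in ranked[:k]]

# Toy chunks: content is an endpoint, embedding a made-up vector.
chunk_db = [
    {"content": "GET /invoices",      "embedding": [0.9, 0.1, 0.0]},
    {"content": "POST /orders",       "embedding": [0.1, 0.9, 0.1]},
    {"content": "GET /invoices/{id}", "embedding": [0.8, 0.2, 0.1]},
]
top = retrieve([1.0, 0.0, 0.0], chunk_db, k=2)
```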
Finally, in $\textcircled{4}$ , the Discovery Agent collects the retrieval results of each individual task, filters them, and repeats $\textcircled{2}$ if $q$ needs further processing or returns the results to the user in $\textcircled{5}$ .

# B. OpenAPI Chunking Strategies

A critical step in the RAG workflow is creating the chunks for the chunk database. Embedding models typically have a limited input token size, and real-world service registries can contain tens of thousands of services, each containing multiple potentially lengthy endpoints due to detailed descriptions or extensive input and output schemas. A single service might not fit into the context size of the embedding model or even exceed the limit of the LLM that further processes the output of the RAG system. In addition, service documentation can also feature additional metadata that, while valuable for understanding service details, is not necessarily relevant for composing services. To determine advantageous chunking strategies, we employ the nine well-known chunking strategies presented in Table I. Input is always an OpenAPI specification, and output is a list of chunks. The chunking strategies can be categorized into token-based and LLM-based strategies. Each strategy consists of a splitting method, which dissects the OpenAPI specification into a list of intermediate chunks, and a refinement step, which converts the intermediate chunks into the final list of chunks. In addition, there is the meta-parameter for the used embedding model $m$ . For the refinement step, there are also the chunk size $s$ in tokens and the overlap $l$ , i.e., how many tokens two consecutive chunks share.

Table I. IMPLEMENTED CHUNKING STRATEGIES

For the token-based approaches, we consider three main splitting methods. The no split method returns a single intermediate chunk for each OpenAPI containing the whole specification. The endpoint split divides the OpenAPI into one chunk per endpoint.
The JSON split is a built-in LlamaIndex splitting strategy tailored to JSON files. This strategy parses the JSON file and traverses it using depth-first search, collecting leaf nodes, i.e., key-value pairs where the value is a primitive type, e.g., strings, numbers, etc. During this traversal, the parser concatenates keys and values into single lines of text to create a comprehensive representation of each leaf node. For the refinement, we implemented token chunking, remove example, relevant field, and JSON split token chunking. The token chunking splits each intermediate chunk into a list of fixed-size chunks of $s$ tokens, respecting an overlap of $l$ tokens with the previous chunk. The remove example removes the requestBody and recursively all examples fields for each endpoint, as these are typically lengthy but contribute little information. The relevant field extracts representative fields, i.e., title, service description, endpoint verb, endpoint path, and endpoint description, which contribute information but few tokens. In the JSON split token chunking, we integrate the JSON split for a single endpoint with subsequent chunking.

Figure 2. Overview of the Discovery Agent Approach for Endpoint Discovery

For the LLM-based processing strategies, we apply the endpoint split and a summary (similar to [42]) and query approach for refinement. In the summary approach, we prompt an LLM to generate a summary for each OpenAPI endpoint. For the query approach, we instruct the LLM to generate a possible query matching the OpenAPI endpoint, as this might be closer to a possible input query than the summary. As an advanced LLM-based approach, we implement CRAFT [38], which combines multiple retrieval strategies. For all three approaches, we only consider the LLM output for the embedding creation. The chunk content remains the original OpenAPI endpoint information.
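The fixed-size token chunking with overlap can be sketched as below, operating on an already tokenized sequence; the function name and parameter values are chosen for illustration:

```python
def token_chunking(tokens, s, l):
    """Split a token sequence into chunks of at most s tokens, where
    consecutive chunks share l tokens (the overlap described above)."""
    assert 0 <= l < s, "overlap must be smaller than the chunk size"
    chunks, start = [], 0
    while start < len(tokens):
        chunks.append(tokens[start:start + s])
        if start + s >= len(tokens):
            break            # last chunk reached the end of the sequence
        start += s - l       # step forward, keeping l tokens of overlap
    return chunks

tokens = list(range(10))     # stand-in for a tokenized OpenAPI part
chunks = token_chunking(tokens, s=4, l=1)
# -> [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```

Each pair of consecutive chunks shares exactly $l$ tokens, so no content is lost at chunk boundaries.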
The no split and JSON split methods can only be used with token chunking since all other refinement strategies rely on exactly one endpoint as an intermediate chunk.

# IV. EVALUATION

To evaluate the OpenAPI RAG and the Discovery Agent, we first implement them as a fully operational prototype. Then, we create SOCBench-D to evaluate RAG across all domains. Additionally, we employ the RestBench [9] benchmark to validate the prototype in a real-world setting.

# A. Implementation

We implement the OpenAPI RAG and Discovery Agent approaches as open-source prototypes based on the LlamaIndex library. For the prototypes, we rely solely on OpenAPIs as the state of practice for service descriptions. For the OpenAPI RAG, we focus on the components presented in Figure 1. When starting, the system loads the OpenAPIs and applies a chunking strategy to create chunks and their embeddings for later retrieval. The chunks thereby contain the information from the OpenAPIs, e.g., a whole endpoint or a part of it. A chunk embedding does not necessarily have to match the chunk’s content; for example, the content can be the endpoint, and the embedding is created using a natural language summary of the endpoint. Thus, the matching is performed based on the embedding, and the result returned is the chunk’s content, which can include additional information not required for the matching process. As the service database, we use FAISS, which allows the storage and similarity search of chunks [43]. We use a so-called QueryEngine from LlamaIndex for the chunk retriever, which allows us to query a chunk database based on textual input. As a more advanced algorithm, we implement the retrieval approach from CRAFT [38], which utilizes a multi-view matching approach to match tasks to tools. Transferred to our use case, we employ the result sets of the summary, the endpoint name, and the endpoint description approaches.
If an endpoint appears in at least two of these result sets, it is included in the final result. We adapt CRAFT to return exactly $k$ results by iterating over the result sets and adding one element after the other. Therefore, we create an intermediate set for each of the three approaches and successively add the result elements, i.e., first the first element of the summary results, then the first element of the endpoint name results, and so on. After adding an element, we check how many elements are in at least two of the intermediate sets and continue adding elements until there are $k$ such elements. To enable measuring the retrieved endpoints, we attach the endpoint information, i.e., verb and path, to each chunk as metadata. For the endpoint split splitting strategies, we take the information from the endpoint. For the other strategies, we first attach a list of all endpoints to the nodes before splitting and then filter on the endpoint paths in the final chunks after splitting. So, for each chunk, we know which endpoint or endpoints it relates to. We realize the Discovery Agent from Figure 2 using a LlamaIndex OpenAIAgent, which implements the LLM agent pattern for OpenAI’s LLMs. An OpenAIAgent takes a list of tools, i.e., Python functions with a name and a description as parameters, and interacts with these using the OpenAI API. For the tools, we use a RAG with chunks of the endpoint’s verb, path, and summary as contents and for their embeddings. We create the summary by instructing an LLM to generate it from the endpoint information, i.e., as in the summary chunking strategy. This should reduce the token count, as the chunks are much smaller because not all endpoint details are returned and processed. To provide all information, we introduce a second tool, which takes the endpoint verb and path as input parameters and returns the whole endpoint information. The complete data is only inserted into the history for indispensable endpoints.

# B. SOCBench-D

To evaluate the RAG implementation for service discovery in a generalized setting across various domains, we propose and implement our benchmark SOCBench-D based on the GICS, which comprises all relevant industry domains grouped into eleven sectors. For each domain of the eleven GICS sectors, we employ an LLM to create five services as OpenAPIs, each with ten endpoints. Using the services within the same domain, we then select ten random subsets of the endpoints and let the LLM create a natural language query for each. To ensure quality control, we ensure syntactical validity via schema compliance and check semantics by employing another LLM. To reduce the influence of randomness, we generate five benchmark instances, leading to 50 queries per domain and 550 queries in total. As the GICS is designed to encompass all industry sectors, we can assume to cover all industry domains and therefore generalizability with SOCBench-D; further domains are just subdomains of the GICS sectors. Therefore, by employing SOCBench-D, we can gain insights on service discovery across domains. Implementation Details. Algorithms 1 and 2 describe the benchmark creation in detail as pseudocode. Algorithm 1 comprises the benchmark creation (createBenchmark), the generation of services (createServices), the endpoint creation (createEndpoints), and the OpenAPI generation (createOpenAPI). Algorithm 2 presents the query creation (createQueries, createQuery) and the semantic endpoint checking (checkNecessary). First, we call createBenchmark (1) with the list of domains (domains), the number of services $n_s = 5$, the number of endpoints each service should contain $n_e = 10$, and the number of queries that should be created per domain $n_q = 10$ as parameters to create a single benchmark instance. Hence, to create the five benchmark instances, we invoke createBenchmark five times.
We define the list benchmark to collect the services and the pairs of natural language query and expected endpoints, i.e., the OpenAPIs and the queries (2). For each of the domains (3), we collect the OpenAPIs in openapis (4), all endpoints of all services in $e_{\mathrm{all}}$ (5), and create the list of $n_s$ services, i.e., the service name and its description, stored in services by calling createServices (6). For each of the services, we create $n_e$ endpoints, i.e., a list of verb, endpoint, description triplets, by calling createEndpoints (8) and add the endpoints to all endpoints $e_{\mathrm{all}}$ (9). Based on the list of endpoints endpoints, we create the service’s OpenAPI by invoking createOpenAPI (10) and add the generated OpenAPI to the list of all OpenAPIs (openapis) in the domain (11). Given the complete list of OpenAPIs in the current domain, we create $n_q$ queries, i.e., the natural language queries and their lists of expected endpoints (13). We finalize the current domain by adding the openapis and queries to the benchmark (14). Once done with all domains, we return the benchmark list as the current benchmark instance. In case of a (validation) error, the current results are stored as files, i.e., we can continue our algorithm from the last valid state. For createServices (19), we query the LLM to return the list of service names and descriptions (20). Then, we assert that the correct number of services was returned (21); otherwise, we recreate the list. Finally, we return the created list (22). Equivalent to createServices, in createEndpoints (25-29), we create the list of endpoints, i.e., the verb, endpoint, and description triplets, which the service should contain. For the OpenAPI generation based on the list of endpoints in createOpenApi (32), we first query the LLM to create the OpenAPI (33).
Then, we validate that exactly the endpoints from endpoints are contained in the OpenAPI (34), followed by a formal verification3 of the OpenAPI ensuring syntactical validity (35). Finally, we analyze the semantics by prompting the LLM to evaluate whether the OpenAPI is valid, reasonable, and specific for the domain (36). In case of any (validation) errors, we prompt the LLM with the OpenAPI and the error message to fix the error. If the LLM cannot fix the error, we discard the OpenAPI and restart from (33).

# Algorithm 1: Create Benchmark
1 function createBenchmark(domains, $n_s$, $n_e$, $n_q$)
2   benchmark ← []
3   for domain in domains do
4     openapis ← []
5     $e_{\mathrm{all}}$ ← [] ▷ all endpoints
6     services ← createServices(domain, $n_s$)
7     for service in services do
8       endpoints ← createEndpoints($n_e$)
9       $e_{\mathrm{all}}$.extend(endpoints)
10      openapi ← createOpenApi(endpoints)
11      openapis.append(openapi)
12    end
13    queries ← createQueries(openapis, $e_{\mathrm{all}}$, $n_q$)
14    benchmark.append((openapis, queries))
15  end
16  return benchmark
17 end
18 $t_s$ ← template create services
19 function createServices(domain, $n_s$)
20   services ← queryLLM($t_s$, domain, $n_s$)
21   assert len(services) = $n_s$
22   return services
23 end
24 $t_e$ ← template create endpoints
25 function createEndpoints(domain, $n_e$)
26   endpoints ← queryLLM($t_e$, domain, $n_e$)
27   assert len(endpoints) = $n_e$
28   return endpoints
29 end
30 $t_o$ ← template create openapi
31 $t_c$ ← template check openapi
32 function createOpenApi(endpoints)
33   openapi ← queryLLM($t_o$, endpoints)
34   assert endpoints = openapi.endpoints
35   assert openApiValidator(openapi)
36   assert queryLLM($t_c$, openapi)
37   return openapi
38 end

To create the queries, we rely on createQueries (40), with the set of OpenAPIs (openapis), the list of endpoints $e_{\mathrm{all}}$, and the number of queries to be created $n_q$ as parameters. Starting with an empty list queries (41), we create the $n_q$ queries one by one (42-47). Thereby, we select a random subset $e_{\mathrm{expected}}$ of $e_{\mathrm{all}}$, where the cardinality of $e_{\mathrm{expected}}$ is normally distributed (43). We set $\mu = 5$ and $\sigma = 2$, which is about $10\%$ of all endpoints within the domain. To create the natural language query (query), we invoke createQuery with the OpenAPIs (openapis) and the list of expected endpoints ($e_{\mathrm{expected}}$) (44). Once the query is created, we check its similarity to the previous queries using OpenAI’s text-embeddings-3-large embedding model with the similarity threshold $s_{\mathrm{threshold}} = 0.8$ (45). If the query exceeds the threshold, we discard it and start over from (43). Otherwise, we add the query to queries (46) and continue with the next one (42).
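The generate-validate-repair loop of createOpenApi (33-38) can be sketched as follows; query_llm and validate are stand-in callables, and the prompts, validator, and repair limit are illustrative assumptions:

```python
def create_openapi(endpoints, query_llm, validate, max_repairs=3):
    """Generate an OpenAPI via the LLM, validate it, feed any validation
    error back to the LLM for repair, and regenerate from scratch if all
    repair attempts fail (mirroring the restart from line 33)."""
    while True:
        openapi = query_llm("create OpenAPI", endpoints)
        for _ in range(max_repairs):
            error = validate(openapi, endpoints)
            if error is None:
                return openapi
            # Re-prompt the LLM with the document and the error message.
            openapi = query_llm("fix error: " + error, openapi)
        # All repair attempts failed: discard and regenerate.

# Stubbed LLM and validator for illustration:
def fake_llm(prompt, payload):
    return "good-spec" if prompt.startswith("fix") else "bad-spec"

def fake_validate(openapi, endpoints):
    return None if openapi == "good-spec" else "endpoints missing"

print(create_openapi(["GET /pets"], fake_llm, fake_validate))
```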
# Algorithm 2: Create Query
39 $s_{\mathrm{threshold}}$ ← similarity threshold
40 function createQueries(openapis, $e_{\mathrm{all}}$, $n_q$)
41   queries ← []
42   for $i$ ← 1, $n_q$ do
43     $e_{\mathrm{expected}}$ ← random from $e_{\mathrm{all}}$
44     query ← createQuery(openapis, $e_{\mathrm{expected}}$)
45     assert similarity(query, queries) < $s_{\mathrm{threshold}}$
46     queries.append((query, $e_{\mathrm{expected}}$))
47   end
48   return queries
49 end
50 $t_q$ ← template create query solution endpoints pair
51 $t_f$ ← template list further endpoints
52 function createQuery(openapis, $e_{\mathrm{expected}}$)
53   q ← queryLLM($t_q$, openapis, $e_{\mathrm{expected}}$)
54   $e_{\mathrm{further}}$ ← queryLLM($t_f$, openapis, q, $e_{\mathrm{expected}}$)
55   $e_{\mathrm{extended}}$ ← set(q.endpoints) | set($e_{\mathrm{expected}}$)
56   $e_{\mathrm{necessary}}$ ← checkNecessary(openapis, q, $e_{\mathrm{extended}}$)
57   assert $e_{\mathrm{necessary}}$ = $e_{\mathrm{expected}}$
58   return q
59 end
60 $t_n$ ← template check endpoint necessary
61 function checkNecessary(openapis, q, $e_{\mathrm{extended}}$)
62   $e_{\mathrm{necessary}}$ ← []
63   for endpoint in $e_{\mathrm{extended}}$ do
64     necessary ← queryLLM($t_n$, openapis, q, $e_{\mathrm{extended}}$, endpoint)
65     if necessary then $e_{\mathrm{necessary}}$.append(endpoint)
66   end
67   return $e_{\mathrm{necessary}}$
68 end

For the creation of a single query, we invoke createQuery with the list of OpenAPIs (openapis) and the list of expected endpoints $e_{\mathrm{expected}}$ (52). Therefore, we invoke the LLM (queryLLM) with the template $t_q$ for creating a natural language query, the openapis, and $e_{\mathrm{expected}}$. The result is the natural language query $q$ (53). To validate whether $q$ conforms with $e_{\mathrm{expected}}$, we again invoke the LLM (queryLLM) with the template $t_f$, the openapis, and $e_{\mathrm{expected}}$ to list the endpoints $e_{\mathrm{further}}$, which are necessary to fulfill $q$ but are not in $e_{\mathrm{expected}}$ (54). We take the union of the sets $e_{\mathrm{expected}}$ and $e_{\mathrm{further}}$ as $e_{\mathrm{extended}}$ (55). For each of the endpoints in $e_{\mathrm{extended}}$, we check whether it is genuinely required to fulfill $q$ by calling checkNecessary with the openapis, $q$, and the whole list $e_{\mathrm{extended}}$ to account for interdependence; checkNecessary returns the list of genuinely necessary endpoints $e_{\mathrm{necessary}}$ (56). If there is a mismatch between $e_{\mathrm{necessary}}$ and $e_{\mathrm{expected}}$, we prompt the LLM in a chat-based manner (57), i.e., in a question-answer style, with the expected endpoints $e_{\mathrm{expected}}$, the additional endpoints $e_{\mathrm{necessary}} \setminus e_{\mathrm{expected}}$, and the absent endpoints $e_{\mathrm{expected}} \setminus e_{\mathrm{necessary}}$ in the response message to the LLM to improve the prompt, and continue with (54).
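The normally distributed subset selection in createQueries (43) can be sketched as follows; the paper only states $\mu = 5$ and $\sigma = 2$, so the clamping to a valid set size is our assumption:

```python
import random

def sample_expected_endpoints(e_all, mu=5.0, sigma=2.0, rng=None):
    """Draw a random endpoint subset whose cardinality is normally
    distributed with mu=5 and sigma=2 (about 10% of a domain's
    endpoints), clamped to at least one and at most all endpoints."""
    rng = rng or random.Random()
    size = max(1, min(round(rng.gauss(mu, sigma)), len(e_all)))
    return rng.sample(e_all, size)

endpoints = [f"GET /service{i}/resource" for i in range(50)]
subset = sample_expected_endpoints(endpoints, rng=random.Random(42))
print(len(subset))
```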
The checking for necessity is encapsulated in checkNecessary with the OpenAPIs (openapis), the query $q$, and the list of endpoints expected to be necessary, $e_{\mathrm{extended}}$, as parameters (61). The idea is to filter out unnecessary endpoints from $e_{\mathrm{extended}}$. To realize this, we start with the empty list $e_{\mathrm{necessary}}$ to store the endpoints actually required to fulfill $q$ (62). For each of the endpoints in $e_{\mathrm{extended}}$ (63), we query the LLM (queryLLM) with the template $t_n$, the openapis, $q$, all endpoints $e_{\mathrm{extended}}$, and the current endpoint in a question-answer style to determine whether the current endpoint is required to fulfill $q$ (64). The LLM thereby returns a simple “Yes” or “No” for each endpoint. If the LLM returns “Yes”, we add the endpoint to $e_{\mathrm{necessary}}$ (65). We use this per-endpoint question-answer style because stating the required endpoints at once (54) can reveal many unrelated endpoints, and focusing on one endpoint at a time helps the LLM, which might be confused when evaluating all endpoints simultaneously. Algorithms 1 and 2 guarantee that exactly $n_s = 5$ services are created, each with exactly $n_e = 10$ endpoints, resulting in an OpenAPI complying with the standard, and that precisely $n_q = 10$ queries are created. Additionally, through the application of an LLM, they ensure with high probability that the services are non-generic, reasonable services within the expected GICS domain and that each query can be fulfilled using the given set of services with exactly the stated set of endpoints. Through the embedding model, they further ensure that the queries within one domain do not exceed a similarity threshold.

# C. Evaluation Methodology

For the result evaluation, we consider service discovery with RAG as a hyperparameter tuning problem with $k$ for the top-$k$ selection of candidates, the model $m$, and the chunking strategy $s$ as independent variables. Further, we consider the domains as independent datasets.
The methodology follows: (1) Performance criteria: We define the dependent variables recall and precision as the performance criteria metrics because we are interested in how many correct endpoints we retrieve. We are not interested in the ranking of the endpoints because we assume that incorrect endpoints are filtered out in a later stage, i.e., we do not consider other metrics like Mean Reciprocal Rank, which weighs positioning, or Hit@k, which does not consider the number of correct results.

Figure 3. Cross-Domain Average Analysis. (a) Top 10 Candidates by Recall and Precision. NV is the Nvidia model, and OAI is the OpenAI model. ES represents Endpoint Split with token chunking with the overlap $l$ in parentheses. (b) Pareto Front Analysis of Recall and Precision as Scatterplot. Model Color-Coded. $k$ Shape-Coded.

(2) Candidate set: As embedding models $M$, we employ OpenAI’s text-embedding-3-large [10] as one of the currently leading proprietary models. As open-source models, we utilize BAAI/bge-small-en-v1.5 [44], which is relatively small while still producing reasonable results, allowing the model to be executed on commonly available hardware like laptops, and Nvidia’s NV-Embed-v2 [45] as one of the leading open-source models. For the parameter $k$, we set $K = \{5, 10, 20\}$ as these are multiples of $\mu = 5$ of our normal distribution, and for the chunking strategy $s$, we use the chunking strategies $S$ as defined in Table I. This results in the candidate set $C = \{(m, k, s) \mid m \in M, k \in K, s \in S\}$. (3) Domain-dependent datasets: For each $c \in C$ and each domain, we execute SOCBench-D, resulting in a set of independent result sets. (4) Cross-domain average: As we are interested in the candidate that performs best across all domains, we compute the performance criteria metrics as an average across all domains for each candidate $c \in C$ using the domain-dependent datasets.
We weigh each domain equally as we consider each domain equally important. (5) Stability: We analyze the standard deviation across domains to determine whether there is a candidate $c \in C$ that performs slightly worse but more stably. We compute the standard deviation of the recall across all domains and $k$ and separate it by model $m$ and chunking strategy $s$. (6) Significance: We perform the Friedman test, as we do not assume normality, to resolve whether candidate differences are statistically significant across domains. We evaluate the test for each domain and across all domains by model $m$ and $k$.

# D. Experimental Results on SOCBench-D

We first focus on the chunking strategies with endpoint split as the splitting strategy, i.e., endpoint split splitting with token chunking, remove examples, relevant fields, JSON split token chunking, query, summary, and CRAFT, as these always reveal exactly one endpoint per chunk. Then, we cross-validate them with the remaining whole document and JSON split approaches. The cross-domain average is shown in Figure 3. The figure is split into two subfigures. Figure 3a presents the top 10 candidates by recall and precision. The top 10 candidates in recall all have $k = 20$, and for precision, $k = 5$. Also, the top six candidates use the Nvidia model. The remaining candidates use either the Nvidia or the OpenAI model, revealing the superiority of these models over the BGE model. Both metrics list the summary approach with the Nvidia model as the leading chunking strategy but with different $k$. To further refine the results, we perform a Pareto front analysis for recall and precision, shown in Figure 3b as a scatterplot. The abscissa shows the recall and the ordinate the precision with an inverse scale, i.e., the best result is in the origin, and the closer to the origin, the better the result.
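The dependent variables and the Pareto front analysis can be computed as in this sketch; the recall/precision definition is the standard set-based one over endpoint identifiers, and the candidate names and values are illustrative:

```python
def recall_precision(retrieved, expected):
    """Set-based recall and precision over endpoint identifiers
    (verb + path)."""
    retrieved, expected = set(retrieved), set(expected)
    hits = len(retrieved & expected)
    recall = hits / len(expected) if expected else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    return recall, precision

def pareto_front(candidates):
    """Return the names of candidates (name, recall, precision) that are
    not dominated, i.e., no other candidate is at least as good in both
    metrics and strictly better in one."""
    return [name for name, r, p in candidates
            if not any((r2 >= r and p2 >= p) and (r2 > r or p2 > p)
                       for n2, r2, p2 in candidates if n2 != name)]

print(recall_precision(["GET /a", "GET /b", "GET /c"], ["GET /a", "GET /d"]))
candidates = [("NV summary", 0.9, 0.3), ("OAI query", 0.7, 0.5),
              ("BGE token", 0.6, 0.4)]
print(pareto_front(candidates))
```

In the illustrative candidate set, "BGE token" is dominated by "OAI query" (worse in both metrics), so only the other two are on the front.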
We can determine three distinct clusters defined by the $k$-value, unveiling that with a higher $k$, the recall increases while the precision drops, and vice versa. In total, we see the three Nvidia summary candidates as Pareto-optimal results, one for each cluster. Nevertheless, differences between the chunking strategies are not apparent within a cluster. Further refinement through color-coding of the model reveals that the Nvidia model outperforms the OpenAI model, which in turn outperforms the BGE model. Still, the differences are minor compared to the $k$ clusters. To further analyze and compare the different models, we create a boxplot chart of the recall for $k = 20$ in Figure 4. We group the boxplots by chunking strategy and color-code the model. For each chunking strategy, the median recall of the Nvidia model is above the OpenAI model, which, in turn, always performs better than the BGE model. The Nvidia model performs exceptionally well with the summary approach, while the BGE model performs poorly with the endpoint split field approach. Otherwise, no differences are visible between the chunking strategies. Still, the interquartile ranges overlap, requiring further analysis to determine an obvious winner. These results align with the MTEB leaderboard4, which ranks current embedding models.

Figure 4. Recall by Chunking Strategy as Boxplots Grouped by Model for $k = 20$. Model Color-Coded.

Figure 5. Statistical Stability Analysis of the Candidates. (a) Recall by Chunking Strategy for $k = 20$ Grouped by Model with Standard Deviation as Error Bars.

Stability Analysis. Another factor is stability, which we analyze in Figure 5. The first analysis segment is Figure 5a, which shows the average recall by chunking strategy as a bar chart grouped by the model for $k = 20$. Stability is shown as error bars of the standard deviation, which is between 13-25%.
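The per-candidate stability figures (standard deviation across domains and the coefficient of variation, CV = standard deviation / mean) can be computed as in this sketch; the recall values are illustrative:

```python
from statistics import mean, stdev

def stability(recalls_by_domain):
    """Cross-domain stability of one candidate: mean recall, sample
    standard deviation across domains, and the coefficient of variation
    CV = standard deviation / mean (lower CV = more stable)."""
    m = mean(recalls_by_domain)
    sd = stdev(recalls_by_domain)
    return m, sd, sd / m

m, sd, cv = stability([0.80, 0.84, 0.88])
print(round(sd, 3), round(cv, 3))
```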
We can determine that the mean is again lower and varies more for the BGE model than for the other two. Also, the figure shows that all models, while revealing significant performance differences, expose a good overall performance. Yet, for all three models, the differences in mean between the chunking strategies seem small. Considering the error bars, the mean differences seem minor as the variance outweighs them. Figure 5b shows the standard deviation over the mean recall for all candidates in a scatterplot, with the coefficient of variation $\mathrm{CV} = \frac{\text{standard deviation}}{\text{mean}}$ as the point size. $k$ is shown as color, again revealing a clustering in recall comparable to the one seen in Figure 3b. We can see a tendency for the standard deviation to decrease with a higher $k$ within the clusters. For $k = 5$, the mean standard deviation is about $4\%$; for $k = 10$, it is about $3.5\%$; and for $k = 20$, it is around $3\%$. This is also seen in the coefficient of variation. Within a cluster, there is no clear winning chunking strategy and model.

(b) Standard Deviation by Mean Recall for All Candidates. Color-Coded by $k$. Coefficient of Variation (CV) Size-Coded.

Significance Analysis. To determine whether there is a significant difference between the individual chunking strategies, we employ the Friedman test, as it can compare multiple groups without requiring a normal distribution in the measurements. We perform the Friedman test for each domain and over all domains and set the significance level to $5\%$. The results are shown in Figure 6a, split by $k$ and model $m$. Entries exceeding the significance level are marked in bold. We can extract that there are some differences between chunking strategies within specific domains, but these average out over all domains.
This can stem from the training data of the embedding model or from the increasing variance when condensing a more extensive set of measurements. Therefore, we can assume that there is no significant difference between chunking strategies over all domains, but individual cases may be considered when choosing the chunking strategy. Analysis of Multi-Endpoint Chunking Strategies. Compared to the endpoint split-based chunking strategies, the whole document and JSON approaches do not guarantee that each chunk corresponds precisely to one endpoint, i.e., one chunk can contain multiple endpoints or fragments of these. In addition, they allow the specification of the chunk size $s$ and the overlap $l$ in tokens as they rely on the token chunking refinement.

Figure 6. Friedman Test for the Endpoint Split and Token Count per Chunk for All Chunking Strategies from Table I. (a) Friedman Test by Domain and Across All Domains for the BGE, the Nvidia (NV), and the OpenAI (OAI) Model and $k = \{5, 10, 20\}$. Values above the significance level, i.e., $p \geq 0.05$, are marked in bold. (b) Ascending Token Count per Chunk by Chunking Strategy Averaged over $k = 5, 10, 20$.

Figure 7. Evaluation of Chunking Strategies with Non-Endpoint Split Splitting, i.e., Whole Document and JSON. (a) Scatterplot of Recall and Precision of all Candidates with the Whole Document and JSON Splitting Strategy. The Parameters Chunk Size $s$ and Overlap $l$ are in Parentheses $(s, l)$. Chunking Strategy Color-Coded. $k$ Shape-Coded. (b) Comparison to Endpoint Split-Based Chunking Strategies as a Scatterplot Formatted like Figure 7a. Candidates are Limited to the Nvidia Model. The Summary Chunking Strategy is Added for Comparison.

As the endpoints in the SOCBench-D OpenAPIs are shorter than comparable real-world OpenAPIs due to the LLM generation, we choose small values for $s$ and $l$ to account for more realistic results.
We set $l = \{0, 20\}$ and $s = \{100, 200\}$ for the whole document and $l = \{0, 20\}$ and $s = \{100\}$ for the JSON splitting strategies. Otherwise, a single chunk can contain numerous endpoints, making evaluation difficult due to incomparability. First, we examine the token count shown in Figure 6b to get an overview of how much data is considered on average per chunk and chunking strategy. The chunking strategies are shown sorted ascending by average token count per chunk. For the token counting, we rely on the “tiktoken” library and select “gpt-4o” as the target model.5 The average is computed over all models and $k = \{5, 10, 20\}$. The whole document splitting approaches produce chunks much smaller than $s$, which can be explained by the different token counting of LlamaIndex and tiktoken, where tiktoken can combine multiple tokens into one, e.g., considering white spaces and structural elements in the OpenAPI JSON input. Also, the JSON splitting creates much denser chunks, i.e., with more tokens per chunk, than the whole document approaches. Further, for our chosen parameters, the endpoint split chunking strategies produce larger chunks on average than the whole document and JSON approaches. A larger $s$, as in real-world cases, can reverse this effect, as lengthy chunks can be condensed more effectively, e.g., using the summary approach independently of the input length. Overall, the token count for all strategies is relatively low, allowing us to insert hundreds of chunks into a single prompt. Considering accuracy, Figure 7 presents the results of the non-endpoint split-based chunking strategies, i.e., whole document and JSON, as scatterplots of recall as inverted abscissa and precision as inverted ordinate. The left side, Figure 7a, depicts all candidates of all models with the chunking strategy color-coded and $k$ shape-coded.
Compared to the endpoint split-based chunking strategies shown in Figure 3b, there are no clear clusters or an obvious Pareto-optimal strategy. On the upper left, we can see that the JSON split chunking strategies tend to perform better concerning recall but also reveal low precision. This can be explained by the implementation of the JSON split algorithm, which densely packs the information from the OpenAPIs, resulting in many endpoints represented in a single chunk, increasing recall while decreasing precision. Also, the tendency of a higher $k$ leading to a higher recall and a lower precision can be observed for most of the chunking strategies, with $k = 5$ in the lower right, $k = 10$ in the center, and $k = 20$ in the upper left. Regarding the whole document splitting, the $s = 200$ candidates seem to surpass the $s = 100$ candidates in recall with a decrease in precision, which can stem from the inclusion of multiple endpoints in one chunk or from the embedding model receiving more information, which results in better similarity matching. No prominent difference results from including the overlap $l = 20$ compared to $l = 0$. Figure 7b restricts the candidates to the Nvidia model to ease comparison and adds the summary approach. What becomes apparent is that the summary approach is Pareto-optimal by a large margin for all $k$. Nevertheless, the whole document approaches with $s = 100$ and $k = 5$ reveal a higher precision but with a significant drop in recall. Due to this high margin, we infer that the endpoint split approaches outperform the whole document and JSON splitting-based approaches in our experiments. Further research is needed to determine the influence of $s$, especially regarding extensive OpenAPIs. Summary of Findings. In summary, the biggest influence on accuracy is $k$. Therefore, we recommend practitioners choose the highest $k$ possible for their use case to achieve the highest recall.
The second biggest influence is the embedding model $m$. As the differences are not as prominent, we recommend the Nvidia model if the highest accuracy is needed, the OpenAI model if practitioners are already familiar with the OpenAI tooling, and the BGE model if a small resource footprint is required. Regarding the chunking strategy, endpoint split-based approaches outperform whole document or JSON split-based approaches. Within the endpoint split-based approaches, there is no significant difference across all domains. Therefore, we recommend choosing the simplest one to implement. If the RAG is to be employed in a very specialized domain, a specific chunking strategy might be beneficial, which should be determined individually for the actual case.

# E. Experimental Results on RestBench

To evaluate the OpenAPI RAG in a real-world setting, we employ the RestBench benchmark in addition to SOCBench-D, covering the Spotify and TMDB OpenAPI specifications [9]. The services of RestBench, with 40 endpoints for Spotify and 54 for TMDB, are much more complex than usual Service-Oriented Computing (SOC) case studies, which typically contain just three to seven endpoints [2]. Nevertheless, RestBench only covers the communication services domain, with 57 queries for Spotify and 100 for TMDB. Thus, it is significantly smaller than SOCBench-D. Also, the queries can be much more vague. For SOCBench-D, we ensure that the query precisely aligns with the expected endpoints through an iterative process, resulting in an unambiguous query; this is not the case for RestBench. For the evaluation, we perform the same steps described in Section IV-B for SOCBench-D.

Figure 8. RestBench Pareto Front Analysis of Recall and Precision as Scatterplot. Model Color-Coded. $k$ Shape-Coded.

Like Figure 3b, Figure 8 shows the Pareto front analysis of the endpoint split-based chunking strategies. The $k$ clusters are less evident due to a higher variance.
Still, $k = 20$ tends to express higher recall and lower precision than $k = 10$, which in turn results in a higher recall but lower precision than $k = 5$. The Nvidia and OpenAI models distinctly outperform the BGE model. There is no clear dominance between the Nvidia and the OpenAI model. Performing the Friedman test again reveals no significant difference between the endpoint split-based chunking strategies. As for the whole document and JSON splitting strategy-based chunking strategies, we perform the same experiments in Figure 9 for RestBench as in Figure 7 for SOCBench-D. In Figure 9a, we can see a sharp distinction in recall between the whole document and the JSON approaches, with the JSON approaches performing significantly better. Among the JSON candidates are three diagonal clusters representing the different models, with the Nvidia model performing best and the BGE model performing worst. Within the clusters, there is again the relationship that an increase in $k$ results in an increase in recall and a drop in precision. This correlation is also visible for the whole document approach but with a minor difference in recall. When plotted with the models visible, for the whole document approaches, the OpenAI model performs best in precision, followed by the Nvidia model, then the BGE model. This can result from specific training data. There is no apparent relation between the chunking strategies, the model, and recall. By adding the summary approach to compare the endpoint split-based approaches with the whole document and JSON approaches, we can determine that the summary approach outperforms the other approaches with the same $k$ in recall. Yet, the whole document and JSON approaches surpass the summary approach in precision, which can result from multiple chunks being retrieved for the same endpoint. This increases precision as the cardinality of the set of retrieved endpoints decreases.
(a) Scatterplot of Recall and Precision of all Candidates with the Whole Document and JSON Splitting Strategy. The Parameters Chunk Size $s$ and Overlap $l$ are in Parentheses $(s, l)$. Chunking Strategy Color-Coded. $k$ Shape-Coded. (b) Comparison to Endpoint Split-Based Chunking Strategies as a Scatterplot Formatted like Figure 9a. Candidates are Limited to the Nvidia Model. The Summary Chunking Strategy is Added for Comparison. Figure 9. RestBench Evaluation of Chunking Strategies with Non-Endpoint Split Splitting, i.e., Whole Document and JSON. Formatting as in Figure 7. Figure 10. Pareto Front Analysis of the Discovery Agent as a Scatterplot. Agent Results are in Blue. The Summary Chunking Strategy is in Orange. $k$ is Shape-Coded. Overall, the RestBench results reinforce the SOCBench-D results while revealing a higher variance. Further research is needed on the influence of chunk size $s$ and overlap $l$ on the whole document and JSON splitting-based strategies. # F. Discovery Agent To evaluate whether the results can be further improved by employing the Discovery Agent as a means to utilize LLM agent reasoning capabilities, we execute SOCBench-D and RestBench and compare the Discovery Agent results with the standalone RAG. Figure 10a shows the SOCBench-D results for the Discovery Agent and, for comparison, the summary approach for the OpenAI model as a scatterplot over recall as the inverted abscissa and precision as the inverted ordinate. For the Discovery Agent, we employ $k = \{20, \mathrm{all}\}$ as the maximum number of elements retrieved per query to the RAG tool that the agent can use. The agent can then refine the query and filter the returned endpoints. $k = 20$ is used for comparison with the standalone counterparts. $k = \mathrm{all}$ always reveals all endpoints.
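The agent's retrieve-then-filter loop can be sketched as follows. The retriever and the filter below are toy keyword heuristics standing in for the embedding-based RAG tool and the LLM call; the endpoint index and queries are illustrative assumptions, not data from our benchmarks:

```python
def retrieve(query: str, k: int, index: dict) -> list:
    # Toy retriever: rank endpoints by keyword overlap with the query
    # (the real system uses an embedding-based vector index instead).
    terms = set(query.lower().split())
    ranked = sorted(
        index, key=lambda ep: -len(terms & set(index[ep].lower().split()))
    )
    return ranked[:k]

def discovery_agent(query: str, k: int, index: dict, select) -> list:
    """Retrieve up to k candidate endpoints, then let the injected
    `select` callable (the LLM in the real system) drop irrelevant ones."""
    candidates = retrieve(query, k, index)
    return select(query, candidates)

# Hypothetical endpoint index mapping "METHOD /path" to a description.
index = {
    "GET /tracks": "list all music tracks",
    "GET /albums": "list all albums",
    "POST /orders": "create a purchase order",
}
# Toy stand-in for the LLM filter: keep endpoints mentioning "tracks".
select = lambda q, eps: [ep for ep in eps if "tracks" in index[ep]]
result = discovery_agent("show me all tracks", 2, index, select)
```

The filtering step is what raises precision (fewer unrelated endpoints survive) while risking recall if the selector is too strict.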
The results show that the agent with $k = \mathrm{all}$ outperforms the standalone summary RAG approach in recall and precision. For both $k = \{20, \mathrm{all}\}$, the agent increases precision. When comparing $k = 20$, the agent increases precision but seems to filter out relevant endpoints, resulting in a recall drop. Comparing the $k = \mathrm{all}$ agent with the $k = 20$ summary reveals a minor increase in recall ($87\%$ vs. $84\%$). Nevertheless, the agent cannot identify all relevant endpoints, even when fed with all endpoints. We present the RestBench results of the Discovery Agent in Figure 10b. The setup equals the SOCBench-D setup from Figure 10a. Compared to the SOCBench-D results, the summary results are similar in recall with a lower precision. For $k = \{20, \mathrm{all}\}$, the agent again improves precision significantly. For $k = 20$, as in Figure 10a, the agent lowers the recall, i.e., it filters out endpoints too restrictively. The main difference is that for $k = \mathrm{all}$, the agent performs notably worse in recall and reveals a similar precision compared to $k = 20$. This can be due to more endpoints and more similar endpoints being added to the prompt, making it harder for the model to determine the relevant ones. In conclusion, the benefit of the Discovery Agent is the increased precision, which reveals most of the necessary information while not exposing unrelated information, i.e., the token count is minimal. The drawback is the additional effort of calling another LLM and the decrease in recall due to the LLM's overly strict filtering. Further research is needed to improve the Discovery Agent, e.g., with additional reasoning components to mitigate the filtering issues and the confusion caused by many endpoints. In practice, a pragmatic approach might be employing the summary approach with an increased $k$ to achieve comparable recall, saving the additional effort for the agent. # G. Discussion
To answer RQ1, we implemented the novel SOCBench-D benchmark based on GICS, which ensures covering the most relevant domains. It shows that OpenAPI RAG and the Discovery Agent can retrieve large portions of relevant data while not revealing all relevant information in all cases. In relation to RQ2, we showed the effectiveness using SOCBench-D and the RestBench benchmark. Overall, the prototype exhibited the ability to adequately reduce the token size to fit into the LLM context size while maintaining most of the relevant information. Regarding the chunking strategies, endpoint split-based chunking strategies achieve favorable accuracies. There is no significant difference within the endpoint split-based chunking strategies. The primary limitation for postprocessing is that the RAG results may not contain all relevant information. For RQ3, using the summary approach, the Discovery Agent showed improved precision. Further research is needed to mitigate the decline in recall caused by the LLM's overly strict filtering. While covering various domains, SOCBench-D is limited to the generated services. Due to the LLM generation process, these may be less extensive than comparable real-world services, making the token count evaluation less robust. Further, our real-world evaluation relies on RestBench, which consists of only two services within one domain. This calls for additional real-world data on endpoint discovery. Additionally, we rely on pre-trained general-purpose embedding models. These are trained to perform all kinds of similarity matching and may, in our use case, highlight insignificant pieces of information. To further improve performance, fine-tuning, a custom embedding model, or a similarity threshold can be applied to match endpoints to tasks more precisely. Further, our experiments assume that schemas are inlined in the endpoint specification. In practice, schemas may be outsourced and linked by a reference.
Further research is needed to determine the best way to process these. OpenAPI RAG systems in practice may operate on much larger datasets than the ones from SOCBench-D or RestBench. For data processing, we rely on standard RAG implementations like LlamaIndex, which are already designed to operate on large amounts of data. The applicability of the OpenAPI RAG depends on the availability of service documentation. We try to mitigate this issue by relying on widely adopted OpenAPI specifications, but this might not be valid for all domains. A solution to consider is automatically generating service documentation using an LLM. Another factor influencing the discovery is the quality of the OpenAPI specifications. The discovery may fail without descriptions or meaningful naming, or with erroneous information. This is not an issue of the approach, as a human developer would face the same problem, but it highlights the importance of high-quality documentation. Besides the capabilities of the RAG system, resource consumption is a major issue in LLM-based systems. The OpenAPI RAG only uses embedding models. These are much more efficient than LLMs, resulting in costs of fractions of a cent per query. In contrast, the Discovery Agent requires significantly more resources due to relying on an LLM for endpoint selection. # V. CONCLUDING REMARKS The service discovery challenge is central to SOC. With the application of automated LLM-based service composition approaches, the LLM input context limitations have become prominent, as the entire service documentation often does not fit into the input context, requiring the preselection of relevant information. To address this issue, we proposed an OpenAPI RAG, which facilitates search based on state-of-the-practice OpenAPIs and reduces the input token size. Further, we show an advanced integration through a Discovery Agent, which can retrieve service details on demand to reduce the input token count further.
Our evaluation based on our novel general-purpose benchmark SOCBench-D and the RestBench benchmark shows that our approach is viable and efficient. # REFERENCES [1] R. D. Pesl, K. Klein, and M. Aiello, “Verfahren zur Nutzung von unbekannten neuen Systemdiensten in einer Fahrzeuganwendung,” German Patent DE 10 2024 108 126 A1, 2024. [2] R. D. Pesl, M. Stötzner, I. Georgievski, and M. Aiello, “Uncovering LLMs for service-composition: Challenges and opportunities,” in ICSOC 2023 WS. Springer, 2024. [3] R. D. Pesl et al., “Compositio Prompto: An architecture to employ large language models in automated service computing,” in Service-Oriented Computing. Springer Nature Singapore, 2025, pp. 276–286. [4] OpenAI, “GPT-4 Turbo in the OpenAI API,” https://help.openai.com/en/articles/8555510-gpt-4-turbo-in-the-openai-api, 2024, last accessed 2025-03-07. [5] P. Lewis et al., “Retrieval-augmented generation for knowledge-intensive NLP tasks,” in NeurIPS, vol. 33. Curran Associates, 2020, pp. 9459–9474. [6] R. D. Pesl, J. G. Mathew, M. Mecella, and M. Aiello, “Advanced system integration: Analyzing OpenAPI chunking for retrieval-augmented generation,” in CAiSE 2025. Springer Nature, 2025, to appear. [Online]. Available: https://arxiv.org/abs/2411.19804 [7] MSCI Inc. and Standard & Poor’s, “Global industry classification standard (GICS),” https://www.msci.com/gics, August 2024. [8] Y. Qin et al., “ToolLLM: Facilitating large language models to master 16000+ real-world APIs,” 2023. [9] Y. Song et al., “RestGPT: Connecting large language models with real-world applications via RESTful APIs,” 2023. [Online]. Available: https://arxiv.org/abs/2306.06624 [10] OpenAI, “New embedding models and API updates,” Jan 2024, last accessed 2024-07-18. [Online]. Available: https://openai.com/blog/new-embedding-models-and-api-updates [11] “A survey of techniques and tools,” ACM Comput. Surv., vol. 48, no. 3, Dec. 2015. [12] J. M. S. Santana, M. Petrova, and P.
Mahonen, “UPnP service discovery for heterogeneous networks,” in IEEE PIMRC, vol. 17. IEEE, 2006, pp. 1–5. [13] F. Curbera et al., “Unraveling the web services web: An introduction to SOAP, WSDL, and UDDI,” IEEE Internet Computing, vol. 6, no. 2, pp. 86–93, 2002. [14] L. Baresi and M. Miraz, “A distributed approach for the federation of heterogeneous registries,” in ICSOC 2006. Springer, 2006, pp. 240–251. [15] H. Bohn, F. Golatowski, and D. Timmermann, “Dynamic device and service discovery extensions for WS-BPEL,” in ICSSSM 2008. IEEE, 2008, pp. 1–6. [16] I. Fikouras and E. Freiter, “Service discovery and orchestration for distributed service repositories,” in ICSOC 2003. Springer, 2003, pp. 59–74. [17] A. T. Soki and F. Siqueira, “Discovery of RESTful Web services based on the OpenAPI 3.0 standard with semantic annotations,” in AINA. Springer, 2024, pp. 22–34. [18] J. Thönes, “Microservices,” IEEE Software, vol. 32, no. 1, pp. 116–116, 2015. [19] J. Achiam et al., “GPT-4 technical report,” 2023. [Online]. Available: https://arxiv.org/abs/2303.08774 [20] AI@Meta, “Llama 3 model card,” 2024. [Online]. Available: https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md [21] M. Kim, T. Stennett, D. Shah, S. Sinha, and A. Orso, “Leveraging large language models to improve REST API testing,” in ICSE, vol. 44, 2024, pp. 37–41. [22] A. Radford, J. Wu, D. Amodei, D. Amodei, J. Clark, M. Brundage, and I. Sutskever, “Better language models and their implications,” OpenAI blog, vol. 1, no. 2, 2019. [23] A. Vaswani et al., “Attention is all you need,” NeurIPS, vol. 30, 2017. [24] A. Radford et al., “Improving language understanding by generative pre-training,” 2018. [25] A. Fan et al., “Large language models for software engineering: Survey and open problems,” 2023. [Online]. Available: https://arxiv.org/abs/2310.03533 [26] J. Devlin, M.-W. Chang, K. Lee, and K.
Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” in NAACL-HLT 2019, 2019, pp. 4171–4186. [27] F. Cuconasu et al., “The power of noise: Redefining retrieval for RAG systems,” in SIGIR, vol. 47, 2024, pp. 719–729. [28] J. Wei et al., “Chain-of-thought prompting elicits reasoning in large language models,” NeurIPS, vol. 35, pp. 24824–24837, 2022. [29] S. Yao et al., “Tree of thoughts: Deliberate problem solving with large language models,” NeurIPS, vol. 36, 2024. [30] R. Nakano et al., “WebGPT: Browser-assisted question-answering with human feedback,” 2021. [Online]. Available: https://arxiv.org/abs/2112.09332 [31] K. Cobbe et al., “Training verifiers to solve math word problems,” 2021. [Online]. Available: https://arxiv.org/abs/2110.14168 [32] L. Gao et al., “PAL: Program-aided language models,” in International Conference on Machine Learning. PMLR, 2023, pp. 10764–10799. [33] G. Mialon et al., “Augmented language models: A survey,” 2023. [Online]. Available: https://arxiv.org/abs/2302.07842 [34] OpenAI, “Function calling and other API updates,” Jun 2024, last accessed 2024-07-18. [Online]. Available: https://openai.com/index/function-calling-and-other-api-updates/ [35] S. Yao et al., “ReAct: Synergizing reasoning and acting in language models,” 2023. [Online]. Available: https://arxiv.org/abs/2210.03629 [36] M. Li et al., “API-Bank: A comprehensive benchmark for tool-augmented LLMs,” in EMNLP. Association for Computational Linguistics, 2023. [37] Z. Shi et al., “Chain of tools: Large language model is an automatic multi-tool learner,” 2024. [Online]. Available: https://arxiv.org/abs/2405.16533 [38] L. Yuan, Y. Chen, X. Wang, Y. R. Fung, H. Peng, and H. Ji, “CRAFT: Customizing LLMs by creating and retrieving from specialized toolsets,” 2024. [Online]. Available: https://arxiv.org/abs/2309.17428 [39] A. Parisi, Y. Zhao, and N. Fiedel, “TALM: Tool augmented language models,” 2022. [Online].
Available: https://arxiv.org/abs/2205.12255 [40] Y. Liang et al., “TaskMatrix.AI: Completing tasks by connecting foundation models with millions of APIs,” Intelligent Computing, vol. 3, p. 0063, 2024. [41] S. G. Patil, T. Zhang, X. Wang, and J. E. Gonzalez, “Gorilla: Large language model connected with massive APIs,” 2023. [Online]. Available: https://arxiv.org/abs/2305.15334 [42] R. Nogueira, W. Yang, J. Lin, and K. Cho, “Document expansion by query prediction,” 2019. [Online]. Available: https://arxiv.org/abs/1904.08375 [43] M. Douze et al., “The Faiss library,” 2024. [Online]. Available: https://arxiv.org/abs/2401.08281 [44] S. Xiao, Z. Liu, P. Zhang, and N. Muennighoff, “C-Pack: Packaged resources to advance general Chinese embedding,” 2023. [Online]. Available: https://arxiv.org/abs/2309.07597 [45] C. Lee et al., “NV-Embed: Improved techniques for training LLMs as generalist embedding models,” 2024. [Online]. Available: https://arxiv.org/abs/2405.17428 Robin D. Pesl is a Ph.D. student at the Institute of Architecture of Application Systems at the University of Stuttgart, focusing on the application of LLMs in Service Computing. He accomplished his Bachelor’s degree in Computer Science at the Cooperative State University Baden-Württemberg as a dual study program in cooperation with SAP and his Master’s degree at the University of Stuttgart while working part-time at SAP in the SAP HANA Spatial team as a software engineer. Jerin G. Mathew received his Ph.D. as a National Doctorate in AI at Sapienza University of Rome. He pursued his Bachelor’s degree in Computer Engineering at Roma Tre University in 2017 and then earned a Master’s in Computer Engineering in 2020 at the same university. His research interests are in data management-related topics, including entity resolution, knowledge graphs, assessment of the fairness of rankings, and applying NLP techniques in these fields. Massimo Mecella received the Ph.D.
degree in engineering in computer science from the University of Rome “La Sapienza.” He is currently a Full Professor at the University of Rome “La Sapienza,” where he conducts research in, among other areas, information systems engineering, software architectures, distributed middleware, and service-oriented computing, focusing on smart applications. He has various experiences organizing scientific events, e.g., as the General Chair of CAiSE 2019, BPM 2021, and ICSOC 2023. Marco Aiello received the Ph.D. degree from the University of Amsterdam, The Netherlands. He is currently a Full Professor of Computer Science and Head of the Service Computing Department at the University of Stuttgart, Germany. He is an elected member of the European Academy of Sciences and Arts. His main areas of expertise are in the coordination of cyber-physical systems in complex, dynamic, and uncertain environments.
Integrating multiple (sub-)systems is essential to create advanced Information Systems. Difficulties mainly arise when integrating dynamic environments, e.g., the integration at design time of not yet existing services. This has been traditionally addressed using a registry that provides the API documentation of the endpoints. Large Language Models have been shown to be capable of automatically creating system integrations (e.g., as service composition) based on this documentation but require concise input due to input token limitations, especially regarding comprehensive API descriptions. Currently, it is unknown how best to preprocess these API descriptions. In the present work, we (i) analyze the usage of Retrieval Augmented Generation for endpoint discovery and the chunking, i.e., preprocessing, of state-of-practice OpenAPIs to reduce the input token length while preserving the most relevant information. To further reduce the input token length for the composition prompt and improve endpoint retrieval, we propose (ii) a Discovery Agent that only receives a summary of the most relevant endpoints and retrieves specification details on demand. We evaluate RAG for endpoint discovery using (iii) a proposed novel service discovery benchmark SOCBench-D representing a general setting across numerous domains and the real-world RestBench benchmark, first, for the different chunking possibilities and parameters measuring the endpoint retrieval accuracy. Then, we assess the Discovery Agent using the same test data set. The prototype shows how to successfully employ RAG for endpoint discovery to reduce the token count. Our experiments show that endpoint-based approaches outperform naive chunking methods for preprocessing. Relying on an agent significantly improves precision while being prone to decrease recall, disclosing the need for further reasoning capabilities.
# 1 Introduction Imitation learning for sensorimotor skills has made significant strides in recent years, propelled by the increasing scale and diversity of robotic datasets. From controlled tabletop environments to open-world household settings, large-scale data has been shown to improve the generalization of robotic policies [1, 2] – mirroring advances in vision [3, 4, 5] and language [6, 7, 8]. However, precise, contact-rich manipulation poses a significant challenge to this data-centric approach. Fine-grained tasks such as inserting USBs and swiping credit cards have low error tolerance (millimeter to sub-millimeter), and the high fidelity required makes demonstration collection time-consuming, brittle, and difficult to scale. Deep reinforcement learning (RL) provides an alternative by learning directly through online interaction, but often sacrifices generalization in favor of narrowly tuned policies sensitive to training-specific cues like scene layout or background distractors. In this work, we propose VisuoTactile Local (VITAL), a policy learning framework that bridges this gap by enabling robust, precise manipulation while maintaining generalizability. VITAL decomposes manipulation into two phases: a global reaching phase, where a vision-language model (VLM) performs scene-level reasoning to identify and localize the object of interest, and a local interaction phase, where a reusable, scene-agnostic policy performs fine-grained, contact-rich manipulation using egocentric vision and tactile sensing. This decomposition is motivated by the observation that while the environmental context for a task may vary drastically, the low-level physical interactions required for manipulation remain consistent. Our work focuses on capturing this invariant local policy: training it once in a canonical setting allows it to generalize across environments via a simple localize-then-execute strategy.
With just 32 demonstrations and 45 minutes of online reinforcement learning per task, VITAL achieves the precision necessary for real-world deployment while maintaining adaptability across scenes. A core design motivation behind VITAL is the deliberate pairing of sensing modalities with complementary strengths. Tactile sensing is indispensable during contact-rich phases of manipulation, providing direct, localized feedback about forces and slip, that cannot be captured by vision. It is inherently robust to lighting, background clutter, and occlusion, but lacks the spatial awareness necessary for planning and coarse alignment in the pre-contact phase. Egocentric vision fills this gap by offering a consistent, robot-centered perspective that captures the relative pose of the end-effector and surrounding objects. Unlike third-person or fixed external cameras, egocentric views are naturally aligned with the robot’s actions and are easy to replicate across different environments without introducing viewpoint-specific biases that can severely hinder learned policy transfer. While visuotactile design is not novel in itself, existing works typically fail to use it effectively. Imitation learning methods require large, diverse datasets [9, 10] to handle spatial and scene variation, making them expensive and difficult to scale, especially for precise manipulation. Reinforcement learning is capable of refining policies through interaction, but tends to overfit to training environments [11, 12]. A key reason for this is that learning from raw RGB inputs in constrained settings lacks the visual diversity needed for generalization. Without sufficient variation in appearance, background, and lighting, policies trained via RL become brittle and environment-specific. VITAL addresses this limitation with a key insight: task success depends primarily on the visual features of task-relevant objects, which remain relatively stable across environmental changes. 
To exploit this invariance, we introduce a semantic, task-aware data augmentation pipeline powered by vision foundation models. These augmentations alter distractors, backgrounds, and lighting while preserving object and robot identity. This allows visual encoders to learn more general representations from the same amount of demonstration data, eliminating the need for costly scene variations in data collection. Finally, to further improve performance and address the inevitable imperfections in teleoperated demonstrations, we fine-tune our policies using offset-based reinforcement learning. Rather than learning policies from scratch, we apply DrQ-v2 [13] to refine behavior-cloned policies by predicting small corrective actions, or offsets, relative to the predicted actions. Crucially, this refinement is done without discarding the visual generalization learned during imitation, as we continue to apply semantic augmentations during online training. This final phase boosts precision and robustness while preserving the broad generalization enabled by our visuotactile design and augmentation strategy. Our key findings can be summarized as follows: 1. VITAL learns generalizable, contact-rich manipulation policies with a $90\%$ success rate from just 32 demonstrations and 45 minutes of interaction, outperforming the best baseline by $40\%$ on average across four challenging precise manipulation tasks in unseen environments. 2. Tactile sensing is essential for precision and reliability: removing tactile input reduces success rates by an average of $40\%$, underscoring its critical role in contact-rich task phases where vision alone is insufficient. 3. VITAL extends the benefits of semantic visual augmentation beyond imitation learning by combining it with residual RL, enabling policy fine-tuning without sacrificing generalization. All of our datasets and our training and evaluation code have been made publicly available.
Videos of our trained policies can be seen here: vitalprecise.github.io. # 2 Related Work # 2.1 Imitation and Reinforcement Learning Unlike language modeling and computer vision, robot learning has been heavily constrained by limited data [14]. Imitation learning, often from teleoperated expert demonstrations, has gained momentum in recent years as a method for transferring human manipulation skills to robots [15, 16, 17]. Although the quality of demonstration data has steadily improved, its quantity remains limited, making it difficult for learned policies to handle corner cases or generalize to unseen variations [14, 18]. This challenge persists even with Vision-Language-Action models (VLAs), which combine large pretrained models with imitation learning-based fine-tuning [2, 19, 20]. In contrast, reinforcement learning (RL) is known for its ability to generalize, driven by exploration and domain randomization [21, 22]. However, RL typically suffers from poor data efficiency and sim2real gaps [23, 24]. Online residual RL has emerged as a promising solution, combining the strengths of both approaches: it uses a base policy to guide exploration for better efficiency while simultaneously refining the policy [25, 26, 27, 28, 29, 30, 31]. Building on this insight, our method adopts residual RL as its foundation. # 2.2 Precise Manipulation Beyond pick-and-place operations, robots must be capable of physically interacting with and modifying their environments using objects as tools. Many such tasks require precise control over contact interactions [32, 33, 34]. Tactile sensing is critical for this purpose, as vision alone often fails to capture subtle deformations at contact points and can be hindered by occlusions. Traditionally, researchers have relied on model-based analytical methods for contact-rich tasks [35, 36, 37], but these approaches tend to lack robustness or extendability in unstructured environments. 
Learning-based methods, on the other hand, are often tailored to specific tasks [38, 39, 40], or struggle to effectively integrate tactile sensing with vision for high-precision manipulation [41, 26, 42]. In this work, we present a scalable approach to contact-rich tasks by leveraging local tactile sensing to constrain and guide contact interactions, enabling reliable and precise task execution. Figure 2: Overview of VITAL. (A) VITAL utilizes vision foundation models to enhance task data with procedurally generated backgrounds, improving visual diversity. (B) This data is then used to train a generalizable visuo-tactile policy, which is later refined through online residual reinforcement learning (RL) for precision. (C) Finally, VLM-guided reaching enables zero-shot deployment in novel spatial configurations, despite policies being trained on fixed object positions. # 2.3 Priors for Generalizable Policy Learning Generalization efficiency is particularly crucial in robot learning due to the scarcity of real-world data. To address this challenge, researchers have introduced various forms of inductive bias into learning frameworks. These priors are often infused at the representation level: common computer vision techniques are typically used to augment visual inputs in robot learning [43], while alternative approaches leverage object-centric properties, such as equivariance, to further enhance the performance [44, 45]. Moreover, the generalizability of pretrained models is often harnessed to enable downstream tasks [46, 47, 48, 49]. More generally, invariance in manipulation tasks has been exploited to retarget or generate trajectories that cover a wider range of variations [50, 51]. In our work, beyond extensive background visual augmentation, we build a local sensing structure and employ task-space control to enable natural generalization to spatial variations. 
# 3 VITAL The core insight behind VITAL is that vision offers task-level spatial awareness for scene generalization, while tactile sensing is essential for millimeter-scale precision during physical contact. By leveraging the strength of each modality, our method enables policies to be trained in localized settings and deployed across diverse spatial variations and background configurations. VITAL operates in three phases: (1) Visuotactile behavior cloning learns a generalizable base policy using visual semantic augmentations; (2) Residual RL enhances downstream performance by optimizing policy refinements while maintaining vision-driven robustness; (3) VLM-based reaching facilitates zero-shot adaptation to novel spatial configurations by identifying actionable regions and decoupling task dynamics from environment configuration. Our pipeline is illustrated in Figure 2. # 3.1 Generalizable behavior cloning through semantic augmentations Our method starts by collecting visuotactile robot demonstrations using a virtual reality (VR) based teleoperation framework [52]. All the tasks presented in this paper consist of a target object on the table that the robot interacts with, and a grasped object that is held within the robot gripper. We hypothesize that for most precise manipulation tasks, the core interaction dynamics remain consistent across task instances, despite variations in the broader environment, i.e., the dynamics of plugging in your charger in the kitchen are consistent with the dynamics of plugging in your charger in the bedroom. To focus data collection on these invariant interactions, we fix the target object position and collect successful demonstrations with the robot randomly initialized in the vicinity of the target object. We ensure that observations and actions are grounded in the robot’s end-effector frame to enable transfer to novel spatial configurations during inference.
This is achieved by using the wrist camera image and tactile readings as input observations, and computing relative actions in the robot’s end-effector frame. By constraining spatial variability and focusing on local interaction patterns, our method achieves robust policy learning with only 32 demonstrations per task. To maintain policy performance across variations in the visual environment, we implement semantic augmentations targeting visual regions irrelevant to the task. Our collected demonstrations use a green screen background to facilitate background replacement through procedural scene generation [53] using RoboEngine [54] during policy learning. In our initial experiments, we observed that naive color-key based background filtering performs poorly, which prompted our multi-stage segmentation pipeline: First, a human annotator marks key points on task-relevant objects in a single reference demonstration frame. This often requires only a few seconds. Next, DIFT-based correspondence matching [55] propagates these annotations to the first frame of all demonstrations, followed by Segment Anything 2 [5] for instance segmentation. Finally, XMem [56] tracks the segmented masks temporally along trajectories, separating the relevant task elements from augmentable background regions (Fig. 2). This allows targeted background transformations while preserving contact-relevant visual features critical for tactile coordination. The demonstration data is then used to train a base visuotactile policy using behavior cloning. The augmented visual data is encoded using a randomly initialized ResNet-18 [57] encoder, and the tactile reading from AnySkin [58] is encoded using a multilayer perceptron (MLP). The encoded observations are fed into a visuo-tactile transformer policy $\pi^b$ for action prediction [59]. The policy is trained with action chunking [16] using a mean squared error loss between predicted and ground truth action chunks.
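Grounding actions in the end-effector frame amounts to expressing target poses relative to the current gripper pose. A minimal sketch with $4 \times 4$ homogeneous transforms (NumPy; the poses below are illustrative values, not from our data):

```python
import numpy as np

def relative_action(T_ee: np.ndarray, T_target: np.ndarray) -> np.ndarray:
    """Express a target pose in the current end-effector frame:
    T_rel = inv(T_ee) @ T_target. Replaying T_rel from a different
    absolute pose reproduces the same local motion."""
    return np.linalg.inv(T_ee) @ T_target

# Illustrative poses: a pure 5 cm translation along the gripper's x-axis.
T_ee = np.eye(4)
T_ee[:3, 3] = [0.4, 0.1, 0.3]
T_target = T_ee.copy()
T_target[0, 3] += 0.05
T_rel = relative_action(T_ee, T_target)
```

Because the relative transform is independent of the absolute base pose, the same local action sequence transfers to any workspace location the localization step reaches.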
By jointly enforcing spatial and visual invariance through semantic augmentations and sensory observations grounded in the end-effector frame, the policy develops robust task understanding decoupled from environmental context. # 3.2 Fine-tuning with Demonstration-guided Reinforcement Learning While the pretrained base policy $\pi^b$ enables generalizable visuo-tactile policies, we observe that the policy only achieves a modest success rate. To improve the performance of $\pi^b$, we employ residual reinforcement learning (RL) to train a residual policy $\pi^r$ on top of the base policy. In residual RL [28], given a base policy $\pi^b : \mathcal{Z} \to \mathcal{A}$ with encoded representations $z \in \mathcal{Z}$ and actions $a \in \mathcal{A}$, we learn a residual policy $\pi^r : \mathcal{Z} \times \mathcal{A} \to \mathcal{A}$ such that an action sampled from the final policy $\pi$ is the sum of the base action $a^b \sim \pi^b(z)$ and the residual offset $a^r \sim \pi^r(z, a^b)$. Following prior work [25, 26], we use $n$-step DDPG [60] as our RL optimizer, a deterministic actor-critic based method that provides high performance in continuous control [13]. During online learning, the encoders (ResNet-18 for vision, MLP for tactile) trained for behavior cloning are fixed, and feed compressed representations $z^i$ (image) and $z^t$ (tactile) to both the frozen base policy $\pi^b$ and the residual actor network $\pi^r$, which takes as input $(z^i, z^t, a^b)$ to predict $a^r$. Similarly, the residual critic $Q^r$ evaluates $(z^i, z^t, a^b, a^r)$ pairs using layer normalization and high update-to-data (UTD) ratios for sample-efficient Q-learning [61].
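The residual composition can be sketched as follows; the toy policies and the `scale` bound on the correction are illustrative assumptions for the sketch, not values used in our experiments:

```python
import numpy as np

def residual_step(z, base_policy, residual_policy, scale=0.1):
    """Executed action = base action + scaled residual offset. The base
    policy stays frozen; only the residual is trained online. `scale`
    is an illustrative bound keeping corrections small."""
    a_b = base_policy(z)
    a_r = residual_policy(np.concatenate([z, a_b]))
    return a_b + scale * a_r

# Toy deterministic stand-ins for the BC base policy and residual actor.
base = lambda z: np.array([1.0, 0.0, 0.0])
residual = lambda x: np.array([0.0, 1.0, 0.0])
action = residual_step(np.zeros(4), base, residual, scale=0.1)
```

Keeping the residual small preserves the behavior-cloned prior while letting RL correct systematic errors near contact.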
Crucially, we observe that adding L2 weight regularization for the actor network improves policy training, resulting in better performance. For RL training, our reward is simply the sum of a binary success reward, provided by a human at the end of the trajectory, and a dense reward based on the L1 distance to the task goal. The RL training objective is as follows: $$ \pi ^ { r } = \underset { \pi ^ { r } } { \operatorname { a r g m a x } } \mathbb { E } _ { ( z ^ { i } , z ^ { t } , a ^ { b } , a ^ { r } ) \sim \mathcal { D } _ { \beta } } \left[ Q ( z ^ { i } , z ^ { t } , a ^ { b } , a ^ { r } ) \right] $$ where $\mathcal { D } _ { \beta }$ contains rollouts enriched with the same semantic visual augmentations from the behavior cloning phase to maintain generalization. The executed action $a$ is the sum of $a ^ { b }$ and $a ^ { r }$ . This approach of combining fixed pretrained features with adaptive residuals improves policy performance while preserving cross-environment robustness through augmentations. Details about hyperparameters and network architectures used in our experiments are included in Appendix A1. # 3.3 Inference Our framework achieves spatial and scene generalization through a hierarchical inference strategy: global semantic navigation by a high-level agent, followed by localized visuotactile control for precise low-level execution. By combining offline behavior cloning and online residual adaptation, the policy operates within a constrained task space while maintaining robustness to environmental perturbations. For global positioning, we employ Molmo [62], a vision-language model (VLM) pretrained on web-scale data, to coarsely localize target objects specified via natural language. Given an external RGB-D observation, Molmo predicts a 2D coordinate for the target object, which is projected to 3D workspace coordinates using depth data and camera calibration parameters. 
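The 2D-to-3D lifting step can be sketched with a standard pinhole camera model. The intrinsics below are hypothetical, and a camera-to-robot extrinsic transform (from calibration) would still be needed to map the camera-frame point into workspace coordinates:

```python
import numpy as np

def backproject(u, v, depth, K):
    """Lift a 2D pixel (u, v) with a depth reading to a 3D point in
    the camera frame using the pinhole intrinsics matrix K."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

# Hypothetical intrinsics; the principal point lies on the optical axis.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
p_cam = backproject(320.0, 240.0, 0.8, K)  # a point 0.8 m along the axis
```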
The robot then samples an initial end-effector pose within a pre-defined region of the target coordinate. For example, for USB insertion, the target point for the robot is sampled at a height of $10~\mathrm{cm}$ above the predicted coordinate. Empirically, we observe that this coarse initialization falls within the pretrained policy’s operational envelope, ensuring target visibility in the wrist camera feed. Upon reaching the target position, the learned visuotactile local policy is deployed to complete the task. Our results in Section 4 demonstrate the potential of combining general-purpose VLMs for coarse robotic navigation with localized visuo-tactile policies handling the precise parts of a task. # 4 Experiments Our experiments seek to answer the following questions: (1) How does VITAL perform in an in-domain setting? (2) How does VITAL perform under environmental perturbations? (3) What are the important design choices for VITAL? (4) How well does the VLM navigation work with VITAL? # 4.1 Experimental Setup Our experiments are conducted using a UFACTORY xArm 7 robot equipped with a two-fingered xArm Gripper. For tactile sensing, we integrate the AnySkin [58] magnetic tactile sensor into the gripper. The observations for policy learning include $128 \times 128$ RGB images captured by a fisheye camera mounted on the robot’s wrist, and 15-dimensional tactile readings from the AnySkin sensor. For coarse navigation via the VLM, we use a calibrated third-person Intel RealSense RGB-D camera. For each task, demonstrations are collected using a VR-based teleoperation system [52] operating at $30~\mathrm{Hz}$ . The collected data is subsampled to $6~\mathrm{Hz}$ for policy training, and the learned policies are deployed at $6~\mathrm{Hz}$ during real-world execution. # 4.2 Task Descriptions We demonstrate the versatility of our framework by evaluating VITAL on four precise, contact-rich manipulation tasks and a pick bread task. 
We collect 32 demonstrations for each task while fixing the target object and randomly initializing the robot in a predefined area around it. Detailed task descriptions can be found in Appendix A2. # 4.3 Baselines We compare VITAL with four primary baselines – BAKU [63], ViSk [59], RLPD [64], and the semantically augmented BC policy described in Section 3.1. BAKU [63]: Transformer policy for behavior cloning that maps RGB images to robot actions. Table 1: Policy performance of VITAL in an in-domain setting. Table 2: Study of spatial and scene generalization in VITAL. ViSk [59]: BAKU with both RGB images and tactile readings provided as input. RLPD [64]: This involves collecting a few expert demonstrations and training an RL policy from scratch, where data during RL training is sampled 1:1 from the expert and RL replay buffers. VITAL-BC: Visuotactile base policy described above that uses the semantic augmentation scheme with the ViSk architecture. Further details on baseline implementations can be found in Appendix A3. # 4.4 How does VITAL perform in an in-domain setting? Table 1 evaluates VITAL’s performance in a controlled in-domain setting, where both the background (green screen) and object positions are fixed. For each method, we conduct 10 trials per task, with the robot randomly initialized within a predefined area around the target object (Section 4.2). Both VITAL and RLPD receive identical visual and tactile observations and are trained online for 45 minutes. While VITAL incorporates semantic augmentations in its RL replay buffer, we find that such augmentations degrade performance for RLPD; therefore, RLPD results in Table 1 do not use semantic augmentations. Our results demonstrate that VITAL significantly outperforms all baselines, achieving an absolute improvement of $40 \%$ over the strongest alternative. Notably, ViSk outperforms BAKU, highlighting the importance of tactile sensing for precise manipulation. 
Further, VITAL surpasses RLPD, emphasizing the value of offline policy pretraining for sample-efficient online learning. Overall, these findings illustrate that visuotactile behavior cloning and residual RL scaffolded by semantic augmentations enable robust, high-precision manipulation. # 4.5 How does VITAL perform under environmental perturbations? Spatial Generalization Table 2 evaluates VITAL’s spatial generalization by testing three novel target object positions outside the training distribution, with the green screen background retained to isolate spatial variations from scene-level changes. Across 10 trials per position, each initializing the robot within a predefined workspace around the target object, results show comparable performance to in-domain settings, confirming that localized end-effector frame observations effectively enable spatial generalization. Notably, BAKU and ViSk exhibit a performance decline when target objects approach the edges of the green screen, as background elements enter the fisheye wrist camera’s field of view, inducing visual distribution shifts relative to training data. Table 3: Study of VITAL’s robustness to combined spatial and scene perturbations. Scene Generalization Table 2 assesses VITAL’s scene generalization by testing on three novel, cluttered scene configurations (see Appendix A4 for examples) while keeping the target object position fixed and identical to training. For each configuration, we run 10 trials with the robot randomly initialized within a predefined area around the target. The results demonstrate VITAL’s robustness to unstructured scene variations, significantly outperforming all baselines. The strong performance of both VITAL and VITAL-BC highlights the critical role of semantic augmentations in enabling policies to disentangle task-relevant visual cues from environmental noise. 
Moreover, VITAL’s improvement over VITAL-BC illustrates how residual RL combined with semantic augmentations substantially enhances performance while preserving VITAL-BC’s generality. Table 3 extends this evaluation to scenarios varying both target spatial positions and background appearances. To decouple policy performance from VLM navigation effects, we manually initialize the robot near the target object and conduct 10 trials per position. The results reveal a consistent pattern: VITAL and VITAL-BC outperform baselines, with VITAL maintaining a clear advantage. Overall, the use of localized observation spaces alongside semantic augmentations during training endows VITAL with strong spatial and scene generalization capabilities. # 4.6 What are the important design choices for VITAL? VITAL is an amalgam of several techniques that enable learning generalizable visuo-tactile policies. Here, we systematically ablate several design choices in VITAL and justify their importance. Table 4: Study of important design choices for VITAL. Tactile sensing Table 4 investigates tactile sensing’s role in enabling millimeter-scale precision, with experiments conducted under controlled conditions (fixed object positions, green screen background) to isolate sensory effects. Comparing visual (BAKU) and visuo-tactile (ViSk) BC, both with and without residual RL, reveals a consistent performance advantage with tactile inputs. While visual BC with residual RL is competent on two tasks, utilizing tactile inputs further improves performance. Qualitatively, this improvement stems from visual occlusion challenges: as the end effector approaches the target, the object held by the gripper obstructs the egocentric camera’s view of the goal, rendering visual feedback unreliable and causing hesitation or blind actions. Tactile sensing proves indispensable in tasks like Card Swiping, where the card occludes the machine and the policy has to rely heavily on tactile sensing for task completion. 
The results confirm that tactile sensing compensates for dynamic visual obstructions while enabling finer contact-driven adjustments. Semantic Augmentation Table 4 studies the importance of semantic augmentations for novel scene generalization. We average the performance of visual (BAKU) and visuo-tactile (ViSk) BC – with and without semantic augmentations – across three unseen object positions with background distractors. Our results demonstrate that semantic augmentations enable both approaches to adapt to new spatial and visual conditions, with visuotactile BC achieving superior performance over its visual counterpart. # 4.7 How well does the VLM navigation work with VITAL? Table 5: VLM Navigation for spatial generalization. Table 5 evaluates VLM-based coarse navigation across five novel object positions, conducting five trials per position while including background distractors to test robustness to environmental perturbations. Compared to the strongest baseline, VITAL-BC, we observe that both methods generalize to unseen object positions and maintain consistent performance in cluttered scenes, despite being trained on fixed configurations. This highlights the utility of VLM navigation for imparting spatial robustness to visuotactile policies. 
Data-driven approaches struggle with precise manipulation; imitation learning requires many hard-to-obtain demonstrations, while reinforcement learning yields brittle, non-generalizable policies. We introduce VisuoTactile Local (ViTaL) policy learning, a framework that solves fine-grained manipulation tasks by decomposing them into two phases: a reaching phase, where a vision-language model (VLM) enables scene-level reasoning to localize the object of interest, and a local interaction phase, where a reusable, scene-agnostic ViTaL policy performs contact-rich manipulation using egocentric vision and tactile sensing. This approach is motivated by the observation that while scene context varies, the low-level interaction remains consistent across task instances. By training local policies once in a canonical setting, they can generalize via a localize-then-execute strategy. ViTaL achieves around 90% success on contact-rich tasks in unseen environments and is robust to distractors. ViTaL's effectiveness stems from three key insights: (1) foundation models for segmentation enable training robust visual encoders via behavior cloning; (2) these encoders improve the generalizability of policies learned using residual RL; and (3) tactile sensing significantly boosts performance in contact-rich tasks. Ablation studies validate each of these insights, and we demonstrate that ViTaL integrates well with high-level VLMs, enabling robust, reusable low-level skills. Results and videos are available at https://vitalprecise.github.io.
# 1 Introduction Invariant manifolds (IMs) and attendant methods remain powerful, versatile, and insightful tools in the broader nonlinear dynamical systems theory. An invariant manifold is a subset embedded in the system’s state space with the property that any trajectory starting on it remains on it for all time. These manifolds often capture essential features of the dynamics–such as long-term behavior near equilibria or slow evolution in multiscale systems–and provide a natural framework for reducing the dimensionality of complex models. By restricting the dynamics to the manifold, one can reformulate high-dimensional systems in a lower-dimensional setting without losing critical information [1–4]. Therefore, IMs serve as fundamental geometric structures that govern the long-term behavior of dynamical systems, enabling rigorous stability analysis, bifurcation analysis, and model reduction [1, 2]. Furthermore, their accurate approximation is critical for constructing reliable reduced-order models (ROMs) that capture the nonlinear behavior of complex systems across a wide range of disciplines [2, 3, 5]. Over the years, different types of IMs have been identified and properly characterized, such as stable/unstable and center manifolds (depending on the nature of the eigenspectrum/eigenvectors of the system’s Jacobian matrix), fast and slow manifolds (depending on an underlying time-scale multiplicity associated with a spectral gap in the system’s Jacobian), as well as inertial manifolds in spatially distributed dynamical systems (depending on the presence of “dissipation terms”) [1–4, 6–15]. Beyond questions of existence and uniqueness, substantial effort has been directed toward developing computational methods for approximating these manifolds by solving the corresponding invariance equations. Recently, growing interest has emerged in integrating IMs with approaches that have been traditionally rooted in nonlinear dynamical systems and control theory. 
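The defining invariance property can be checked numerically on a toy discrete-time map with a known invariant manifold $x = y^2$ (a constructed example for illustration, not one from this paper):

```python
# Toy discrete-time system with invariant manifold x = y**2:
# if x(k) = y(k)**2, then x(k+1) = a**2 * x(k) = (a*y(k))**2 = y(k+1)**2,
# so a trajectory starting on the manifold stays on it for all time.
a = 0.7

def step(x, y):
    return a**2 * x, a * y

y = 1.3
x = y**2                 # start on the manifold
for _ in range(20):
    x, y = step(x, y)    # iterate the map; (x, y) remains on x = y**2
```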
In this setting, IMs provide a unifying framework through which the behavior of input-driven nonlinear systems can be analyzed, particularly via the notion of control flow: a dynamical structure defined on the product space of admissible inputs and system states [16, 17]. In particular, IMs have been used to compute, analyze, and characterize the responses of nonlinear systems to various exogenous stimuli that drive the system dynamics. Such stimuli are outputs generated by an exogenous system (often known as the exosystem) that models dynamic input changes reminiscent of forcing functions, disturbance events, time-varying system parameters, etc. The exosystem dynamics coupled with the original system dynamics conform to an overall structural configuration of a skew-product (or “master-slave”) system [16–21]. One could identify two approaches whereby IMs assume critical importance in the nonlinear feedback control and state estimation design problems. On the feedback controller and/or state estimator synthesis front, IMs can be creatively used as a “design tool” through which desirable dynamic characteristics can be assigned to the feedback-controlled system and/or the state estimator. This perspective has influenced several important areas, including nonlinear regulation [16, 20, 22, 23], stabilization through immersion and invariance [21], transverse feedback linearization [24], invariance-inducing control [25, 26] and nonlinear observer design [27–31]. Conversely, once a controller is synthesized, IMs parameterized by the controller parameters (design degrees of freedom) can be identified and used to analyze the controlled system dynamics. Therefore, the above IM parameterization allows a transparent analysis of the effect of the available controller design parameters on the feedback-controlled system dynamics by retaining the advantages afforded by well-established methods of nonlinear dynamical system theory. 
Notable research efforts along these lines include applications to nonlinear $H _ { \infty }$ optimal control [32], controlled center dynamics [16, 17], and nonlinear feedback control of Galerkin systems [33]. For continuous-time systems, the theoretical foundations are well-established. Geometric singular perturbation theory (GSPT) [14, 34] and spectral submanifold theory [35, 36] provide rigorous tools for slow/fast systems and normally hyperbolic manifolds. A variety of computational/numerical methods have been developed [37–41] to enable practical computation of IM approximations, leading to successful applications in diverse domains, such as chemical kinetics [42–44], biochemical networks [45, 46], structural dynamics [36], etc. In contrast, discrete dynamical systems, particularly those with external inputs, lack a systematic framework. The absence of infinitesimal generators (e.g., Lie derivatives) prevents direct translation of continuous-time methods, while input-driven dynamics complicate invariance conditions. Existing approaches rely on local expansions near equilibria [18, 47], which struggle with high-dimensional systems due to combinatorial complexity and suffer from poor accuracy far from equilibria due to series divergence. Recent advances in Scientific Machine Learning (SciML) have introduced data-driven paradigms for black- or gray-box ROM construction, typically comprising two steps: data dimensionality reduction for embedding the data into low-dimensional subspaces, using manifold learning methods [48–51], including proper orthogonal decomposition [52–54] and Koopman operator techniques [55–57], followed by surrogate ROM modeling via, for example, artificial neural networks [52, 58–60] or Gaussian Processes [60–62]. Autoencoders often combine both steps [63–65]. 
Data-driven approaches for learning low-dimensional stable, unstable and center IMs from high-dimensional microscopic simulators/legacy codes within the Equation-free approach [66] for multiscale computations have also been proposed [41, 45, 67–69]. The core of these approaches is the construction of a “coarse” discrete map of the emergent dynamics, based on short bursts of microscale simulations. More recently, “next-generation” Equation-free methods, integrating manifold learning with numerical analysis, have been introduced to eliminate the need for ROM construction and directly perform bifurcation analysis and control design [70–72]. While the above SciML approaches identify low-dimensional subspaces, they often lack explicit geometric interpretation of the true underlying IMs. For continuous-time systems, recent work has begun to bridge this gap by combining neural networks (NNs) with dynamical systems theory, in the context of Physics-Informed Machine Learning (PIML) [73–75]. In particular, in [76] a NN-based framework was introduced to parameterize intrinsic coordinates that solve invariance conditions via PIML loss functions. In [77], we have used NNs to solve the partial differential equation (PDE) corresponding to the invariance equation of singularly perturbed dynamical systems for the approximation of slow IMs in the context of GSPT. In [78], we have extended this PIML approach to the general class of fast-slow dynamical systems by introducing linear transformations of the state variables into slow and fast variables. For discrete-time systems, SciML approaches to IM approximation remain underdeveloped. The fundamental challenge lies in solving the nonlinear functional equations (NFEs) governing manifold invariance for discrete-time dynamical systems. Traditional methods [18, 19, 47, 68, 69] rely on local power series expansions around equilibria, where coefficients are determined through linear algebraic systems. 
While these approaches yield accurate approximations near equilibria, they become increasingly intractable for high-dimensional systems despite symbolic computation tools, and critically fail to maintain accuracy far from equilibrium or near dynamic singularities. This limitation presents a substantial obstacle for engineering applications requiring reliable IM approximations across the entire operating domain. In this work, we present a hybrid physics-informed/numerical analysis-informed machine learning framework for IM approximations of discrete-time dynamical systems with input-driven dynamics. Building on established existence theory for such manifolds [18, 19], we solve the governing NFEs through an innovative physics-informed (PI) hybrid scheme that strategically combines the local precision of polynomial series expansions with the global approximation capabilities of NNs. Extending our previous work on the development of numerical analysis-informed machine learning approaches for constructing slow IMs of continuous-time systems [77, 78], here we propose an adaptive/hybrid transition between polynomial series within a defined equilibrium neighborhood $\mathcal { D }$ and NNs outside this region. The polynomial component is implemented through a novel shallow NN-like architecture inspired by [79], where each polynomial term operates as an activated neuron. In particular, we treat each polynomial coefficient as an output weight and each polynomial term as a neuron-specific activation function, with the degree of the expansion corresponding to the width of the NN. This formulation accommodates various polynomial bases, including power series for their simplicity in representing local dynamics, as well as Legendre and Chebyshev polynomials for their numerical stability via orthogonal properties on bounded domains–though each option presents its own tradeoffs between conditioning, convergence, and computational cost for high-degree terms. 
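The polynomial-series-as-shallow-NN view can be sketched in one dimension: each monomial acts as a neuron with the fixed activation $y \mapsto y^i$, and the trainable output weight is the corresponding series coefficient (power basis shown; the coefficients here are hypothetical placeholders, not fitted values):

```python
import numpy as np

def poly_as_shallow_net(y, coeffs):
    """Evaluate a 1-D polynomial expansion as a shallow network:
    neuron i applies the fixed activation y -> y**i, and the
    trainable output weights are the polynomial coefficients."""
    degrees = np.arange(len(coeffs))
    activations = y ** degrees          # one 'neuron' per monomial
    return float(np.dot(coeffs, activations))

# pi(y) ~ 2y + 0.5y^2 with hypothetical coefficients [0, 2, 0.5]
val = poly_as_shallow_net(0.1, np.array([0.0, 2.0, 0.5]))
```

Swapping the monomial activations for Legendre or Chebyshev evaluations (e.g., via `numpy.polynomial`) would realize the alternative bases mentioned above without changing the architecture.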
By additionally considering shallow NNs operating outside $\mathcal { D }$ , we enable symbolic differentiation across the entire PI hybrid scheme, providing efficient, accelerated training. The performance of the proposed PI hybrid scheme is assessed via three illustrative examples: an artificial one where the true IM is known analytically; the example used in [19], for which the IM approximation via power series expansion is known; and an example that generalizes the proposed algorithmic implementation to multiple dimensions. Comparative analysis demonstrates that the IM approximations provided by the hybrid scheme are of higher accuracy than those provided by the traditional expansion approach [19]. More importantly, the hybrid scheme outperforms the IM approximations constructed purely by PI polynomial series or PI NNs, particularly far from and close to the region of the equilibrium, respectively. We note, however, that this comes with increased parametric complexity, as the hybrid scheme has more parameters to tune than the other PI schemes. A comprehensive analysis of the role of the degree of the polynomial series and the radius of the polydisc $\mathcal { D }$ is provided in the first two illustrative examples. The manuscript is structured and organized as follows: we first discuss the conditions for IM existence in discrete-time systems in Section 2 and then present the PI hybrid scheme for the IM approximations. For completeness, we also present the standalone PI polynomial series and NN approximations, and their implementation. In Section 3, we briefly present the three illustrative examples, and then we assess the performance of the hybrid scheme and the other schemes on the basis of the numerical results in Section 4. We finally offer a few concluding remarks focusing on the proposed method’s comparative advantages and limitations in Section 5, providing also directions for future work. 
# 2 Methodology Let us consider the non-linear discrete-time dynamical system: $$ \mathbf { x } ( k + 1 ) = \mathbf { F } ( \mathbf { x } ( k ) , \mathbf { y } ( k ) ) , \qquad \mathbf { y } ( k + 1 ) = \mathbf { G } ( \mathbf { y } ( k ) ) , $$ where the state space of $\mathbf { x } \in \mathbb { R } ^ { N }$ is input-driven by the external dynamics of $\mathbf { y } \in \mathbb { R } ^ { M }$ . Let $\mathbf { x } ( k ) = [ x _ { 1 } ( k ) , \ldots , x _ { N } ( k ) ] ^ { \top }$ and $\mathbf { y } ( k ) = [ y _ { 1 } ( k ) , \ldots , y _ { M } ( k ) ] ^ { \top }$ denote the state variables at the $k \in \mathbb { N }$ discrete time step, and the functions $\mathbf { F } : \mathbb { R } ^ { N } \times \mathbb { R } ^ { M } \to \mathbb { R } ^ { N }$ and $\mathbf { G } : \mathbb { R } ^ { M } \to \mathbb { R } ^ { M }$ denote the corresponding vector fields. Without loss of generality, we assume that the origin $( { \bf x } _ { 0 } , { \bf y } _ { 0 } ) = ( { \bf 0 } ^ { N } , { \bf 0 } ^ { M } )$ is an equilibrium point of the system in Eq. (1). Additionally, we make the following assumption: Assumption 1. The matrix $\mathbf { A } = \partial _ { \mathbf { y } } \mathbf { G } ( \mathbf { y } _ { 0 } ) \in \mathbb { R } ^ { M \times M }$ has non-zero eigenvalues, denoted by $k _ { i }$ for $i = 1 , \dots , M$ , all of which lie either strictly inside or strictly outside the unit disc. This assumption ensures that $\mathbf G ( \mathbf y )$ is locally invertible in a neighborhood of $\mathbf { y } _ { 0 }$ . The system in Eq. 
(1) can be rewritten in the form: $$ \begin{array} { r l } & { \mathbf { x } ( k + 1 ) = \mathbf { B } \mathbf { x } ( k ) + \mathbf { C } \mathbf { y } ( k ) + \mathbf { f } ( \mathbf { x } ( k ) , \mathbf { y } ( k ) ) , } \\ & { \mathbf { y } ( k + 1 ) = \mathbf { A } \mathbf { y } ( k ) + \mathbf { g } ( \mathbf { y } ( k ) ) , } \end{array} $$ where $\mathbf { B } = \partial _ { \mathbf x } \mathbf { F } ( \mathbf { x } _ { 0 } , \mathbf { y } _ { 0 } ) \in \mathbb { R } ^ { N \times N }$ and $\mathbf { C } = \partial _ { \mathbf { y } } \mathbf { F } ( \mathbf { x } _ { 0 } , \mathbf { y } _ { 0 } ) \in \mathbb { R } ^ { N \times M }$ are constant matrices derived from the linearization of the system in Eq. (1) around the equilibrium. The non-linear terms $\mathbf { f } : \mathbb { R } ^ { N } \times \mathbb { R } ^ { M } \to \mathbb { R } ^ { N }$ and $\mathbf { g } : \mathbb { R } ^ { M } \to \mathbb { R } ^ { M }$ are real-valued functions satisfying $\mathbf { f } ( \mathbf { x } _ { 0 } , \mathbf { y } _ { 0 } ) = \mathbf { 0 } ^ { N }$ and $\mathbf { g } ( \mathbf { y } _ { 0 } ) = \mathbf { 0 } ^ { M }$ . Additionally, their linearizations at the equilibrium satisfy $\partial _ { \mathbf { x } } \mathbf { f } ( \mathbf { x } _ { 0 } , \mathbf { y } _ { 0 } ) = \partial _ { \mathbf { y } } \mathbf { f } ( \mathbf { x } _ { 0 } , \mathbf { y } _ { 0 } ) = \partial _ { \mathbf { y } } \mathbf { g } ( \mathbf { y } _ { 0 } ) = \mathbf { 0 }$ with consistent zero-matrix dimensions. The system in Eq. (1) exhibits a unique locally invariant manifold (IM) under the assumptions of Theorem 1 in [19], which we restate below (for the proof, see [19]): Theorem 1 (Invariant Manifold existence, [19]). Consider the non-linear discrete dynamical system in Eq. (1), where Assumption 1 is satisfied. 
Additionally, assume that the eigenvalues of the matrix $\mathbf { A } = \partial _ { \mathbf { y } } \mathbf { G } ( \mathbf { y } _ { 0 } )$ , denoted by $k _ { i }$ for $i = 1 , \dots , M$ , are not related to the eigenvalues of the matrix $\mathbf { B } = \partial _ { \mathbf { x } } \mathbf { F } ( \mathbf { x } _ { 0 } , \mathbf { y } _ { 0 } )$ , denoted by $\lambda _ { j }$ for $j = 1 , \dots , N$ , through any resonance condition of the form: $$ \prod _ { i = 1 } ^ { M } k _ { i } ^ { d _ { i } } = \lambda _ { j } , $$ where $d _ { i }$ are non-negative integers satisfying $\textstyle \sum _ { i = 1 } ^ { M } d _ { i } > 0$ . Then, there exists a compact neighborhood $\mathcal { V } \subset \mathbb { R } ^ { M }$ around the equilibrium $\mathbf { y } _ { 0 }$ , and a unique, locally analytic mapping $\pi : \mathcal { V } \to \mathbb { R } ^ { N }$ such that: $$ \mathcal { M } = \{ ( \mathbf { x } , \mathbf { y } ) \in \mathbb { R } ^ { N } \times \mathcal { V } : \mathbf { x } = \pi ( \mathbf { y } ) , \pi ( \mathbf { y } _ { 0 } ) = \mathbf { x } _ { 0 } \} , $$ defines an analytic local invariant manifold (IM) of the system in Eq. (1). This manifold passes through the equilibrium point $( { \bf x } _ { 0 } , { \bf y } _ { 0 } ) = ( { \bf 0 } ^ { N } , { \bf 0 } ^ { M } )$ . Furthermore, the mapping $\pi ( \mathbf { y } )$ is the unique solution of the system of nonlinear functional equations (NFEs) associated with Eq. (2): $$ \pi ( \mathbf { A } \mathbf { y } + \mathbf { g } ( \mathbf { y } ) ) = \mathbf { B } \pi ( \mathbf { y } ) + \mathbf { C } \mathbf { y } + \mathbf { f } ( \pi ( \mathbf { y } ) , \mathbf { y } ) , \qquad \text { with } \qquad \pi ( \mathbf { y } _ { 0 } ) = \mathbf { x } _ { 0 } . $$ The system of NFEs in Eq. (5) arises from the invariance of the manifold $\mathcal { M }$ in Eq. (4). Specifically, when the solution of the discrete system in Eq. 
(2) at the $k$ -th step is on the manifold (i.e., when $\mathbf { x } ( k ) = \pi ( \mathbf { y } ( k ) )$ ), then the invariance of $\mathcal { M }$ ensures that the system remains on the manifold at the next $( k + 1 )$ -th step. This implies $\mathbf { x } ( k + 1 ) = \pi ( \mathbf { y } ( k + 1 ) )$ , which, upon substitution of the system dynamics from Eq. (2), yields Eq. (5). According to Theorem 1, the functional map $\mathbf { x } = \pi ( \mathbf { y } )$ of the IM $\mathcal { M }$ in Eq. (4) can be computed through the solution of the system of NFEs in Eq. (5). Due to their implicit and nonlinear nature, exact solutions are generally intractable, necessitating approximate solutions. Traditional methods [18, 19, 47] use local power series expansions (PSE) around $\left( \mathbf { x } _ { 0 } , \mathbf { y } _ { 0 } \right)$ , where the coefficients are determined by solving a linear system of algebraic equations obtained from the NFEs (see Appendix A). While PSE methods are rigorous near equilibria (with exponential convergence for analytic systems) and interpretable (due to their polynomial form), they face two critical limitations: (i) the number of coefficients grows combinatorially with the system dimension $N + M$ and the expansion degree $h$ , and (ii) the PSEs diverge far from equilibria and close to singularities, resulting in IM approximations with poor accuracy. While alternatives like Koopman linearization [80] or neural networks (NNs) [76] offer complementary insights, they fail to address the core challenge of solving the NFEs for $\pi ( \mathbf { y } )$ , since Koopman methods linearize dynamics without preserving the manifold’s functional structure, while pure NNs lack local theoretical guarantees near equilibria. These challenges also arise in continuous-time fast/slow dynamical systems for the approximation of slow IMs. 
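The invariance condition in Eq. (5) can be checked directly on a scalar linear toy system where the exact IM is available in closed form (our own constructed example: with $\mathbf{f} = 0$, $\mathbf{g} = 0$, the linear ansatz $\pi(y) = cy$ solves $\pi(Ay) = B\pi(y) + Cy$ for $c = C/(A - B)$):

```python
import numpy as np

# Scalar linear instance of Eq. (2) with f = 0 and g = 0:
#   x(k+1) = B*x(k) + C*y(k),   y(k+1) = A*y(k)
A, B, C = 0.5, 0.2, 1.0
c = C / (A - B)   # exact IM: pi(y) = c*y satisfies the invariance equation

def nfe_residual(pi, y):
    """Residual of the invariance equation (5) for a candidate pi."""
    return pi(A * y) - (B * pi(y) + C * y)

ys = np.linspace(-1.0, 1.0, 11)
res = nfe_residual(lambda y: c * y, ys)   # ~0 everywhere for the exact IM
```

This residual, evaluated at collocation points, is exactly the kind of quantity a physics-informed loss drives to zero when no closed-form solution is available.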
In such systems, the invariance equation comes in the form of a partial differential equation (PDE), the solution of which may be approximated by asymptotic expansions [81–83], or by advanced numerical methods [40, 41, 84, 85]. In prior work [77, 78], we demonstrated that physics-informed NNs provide very accurate slow IM approximations, especially near the boundaries of the manifold, where the asymptotic series diverge. Motivated by this observation, we propose a hybrid scheme for discrete-time systems that combines the strengths of both approaches by coupling (i) polynomial expansions near the equilibria to preserve theoretical guarantees and (ii) NNs far from equilibria to provide high approximation accuracy where polynomial expansions fail. This unified approach, constrained by the NFEs in Eq. (5), generalizes our earlier work while addressing the discrete-time-specific challenges of combinatorial complexity and input-driven dynamics. Below, we detail this coupling and its implementation. # 2.1 The hybrid neural network-polynomial series physics-informed scheme In this section, we present the proposed hybrid neural network-polynomial series physics-informed (PI) scheme for approximating smooth functionals, here the IM functional $\mathbf { x } = \pi ( \mathbf { y } )$ . A schematic of the proposed scheme is shown in Fig. 1. As previously discussed, the hybrid scheme combines polynomial series expansions in a region around the equilibrium point, and a shallow NN outside this region. Both components serve as universal approximators of continuous functions on compact domains, as guaranteed by the Weierstrass Approximation Theorem [86, 87] for polynomials and the universal approximation theorems for NNs [88, 89]. The rationale for using such a scheme lies in the continuity properties of the underlying functional to be approximated, the results on the rate of convergence of both approaches, and the complexity of the training process. 
In particular, if the sought function is analytic around a point, then polynomial approximation (such as Taylor or Chebyshev expansions) offers geometric/exponential convergence, meaning that the approximation error decreases exponentially with the degree of the polynomial [90]. In contrast, for standard shallow neural networks for which theoretical approximation results have been derived (see e.g., [91]), the approximation error decays algebraically with the square root of the number of neurons. Thus, while neural networks offer greater flexibility, especially in high dimensions, they may require significantly more neurons to achieve accuracy comparable to low-degree polynomials when approximating smooth functions in a neighborhood of a point. Finally, the training of a neural network (even a shallow one) is known to be an NP-complete problem [92] or even NP-hard in the size of the network and the input dimension [93]. In contrast, fitting a polynomial of fixed degree (e.g., via least squares or Chebyshev projection) reduces to a convex or linear system, which admits a polynomial-time solution. Figure 1: Schematic of the PI hybrid scheme for approximating IM functionals of discrete-time dynamical systems in the form of Eq. (1). The scheme combines polynomial series inside a polydisc $\mathcal{D}$ of radius $\mathbf{r}$ centered at the equilibrium and NNs outside $\mathcal{D}$. The hybrid IM scheme is trained to minimize a composite loss function that enforces the system of NFEs in Eq. (5), including residuals for manifold invariance, equilibrium constraints and continuity constraints at $\partial\mathcal{D}$. The coefficients of the polynomial series and the parameters of the NN are jointly optimized to minimize the loss.
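As a quick standalone illustration of these convergence rates (not part of the paper's scheme), one can fit Chebyshev polynomials of increasing degree to an analytic function and watch the maximum error fall geometrically with the degree; here we use $\ln(1+y)$, which reappears later as the exact IM mapping of the first example:

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Geometric convergence of polynomial approximation for an analytic function:
# fit ln(1+y) on [-0.5, 0.5] (its singularity at y = -1 lies well outside).
y = np.linspace(-0.5, 0.5, 401)
f = np.log(1.0 + y)

degrees = (2, 4, 8, 16)
errors = []
for h in degrees:
    p = Chebyshev.fit(y, f, h)                 # least-squares Chebyshev fit
    errors.append(np.max(np.abs(p(y) - f)))

for h, e in zip(degrees, errors):
    print(f"degree {h:2d}: max error {e:.1e}")
```

Doubling the degree roughly squares the accuracy, consistent with the error decaying like $\rho^{-h}$ for analytic targets.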
More specifically, while polynomial series provide uniform approximations of any analytic function [86, 87, 90], their practical implementation depends on knowing the radius of convergence, which is impossible to determine a priori for an unknown function. On the other hand, NNs provide high approximation accuracy throughout a broader domain $\Omega$, but cannot reach the local accuracy of a polynomial series approximation within the region of convergence. To address this challenge for the computation of IM functionals $\mathbf{x} = \pi(\mathbf{y})$, we dynamically define a polydisc $\mathcal{D} = \{\mathbf{y} \in \mathbb{R}^M : |y_m| < r_m, \forall m = 1, \ldots, M\}$ of radius $\mathbf{r} = [r_1, \ldots, r_M]^\top \in \mathbb{R}^M$ around the equilibrium, within which the polynomial series residuals remain bounded and outside which the NNs provide better approximation accuracy. This avoids reliance on a priori radius estimates.
In particular, we approximate the IM functional by the hybrid scheme approximation $\mathbf{x} = \tilde{\pi}^{HxS}(\mathbf{y}, \boldsymbol{a}, \mathbf{p}; \mathcal{H}_1, \mathcal{H}_2, \mathbf{r}) \in \mathbb{R}^N$, the elements of which for $n = 1, \ldots, N$ are expressed as:
$$ \tilde{\pi}_n^{HxS}(\mathbf{y}, \boldsymbol{a}_n, \mathbf{p}_n; \mathcal{H}_1, \mathcal{H}_2, \mathbf{r}) = \tilde{\pi}_n^{xS}(\mathbf{y}, \boldsymbol{\alpha}_n; \mathcal{H}_1)^{H(\mathbf{y}, \mathbf{r})} \cdot \tilde{\pi}_n^{NN}(\mathbf{y}, \mathbf{p}_n; \mathcal{H}_2)^{1 - H(\mathbf{y}, \mathbf{r})}, $$
where $\tilde{\pi}_n^{xS}(\mathbf{y}, \boldsymbol{\alpha}_n; \mathcal{H}_1)$ and $\tilde{\pi}_n^{NN}(\mathbf{y}, \mathbf{p}_n; \mathcal{H}_2)$ are the $n$-th outputs of a polynomial series and a NN, as functions of $\mathbf{y}$, respectively. These components include the parameters $\boldsymbol{\alpha}_n$ and $\mathbf{p}_n$ and the hyperparameters $\mathcal{H}_1$ and $\mathcal{H}_2$, which will be discussed next. The function $H : \mathbb{R}^M \times \mathbb{R}^M \to \mathbb{R}$ in Eq. (6) is a product of Heaviside functions defined as:
$$ H(\mathbf{y}, \mathbf{r}) = \prod_{i=1}^{M} H(r_i^2 - y_i^2) = \left\{ \begin{array}{ll} 1, & |y_i| < r_i, \forall i = 1, \ldots, M \\ 0, & \mathrm{otherwise}, \end{array} \right. $$
which takes the values $H(\mathbf{y}, \mathbf{r}) = 1$ when $\mathbf{y} \in \mathcal{D}$ and $H(\mathbf{y}, \mathbf{r}) = 0$ when $\mathbf{y} \notin \mathcal{D}$. This ensures that the hybrid scheme approximation $\mathbf{x} = \tilde{\pi}^{HxS}(\mathbf{y}, \boldsymbol{a}, \mathbf{p}; \mathcal{H}_1, \mathcal{H}_2, \mathbf{r})$ in Eq. (6) reduces to a polynomial series (power, Legendre or Chebyshev) approximation within the polydisc $\mathcal{D}$ and a NN approximation outside $\mathcal{D}$. We hereby note that the hybrid approximation depends also on the radius $\mathbf{r}$ of the polydisc, which is considered in Eq. (6) as an additional hyperparameter. Polynomial series component The polynomial series component adopts a NN-like architecture inspired by [79] (see Fig. 1). We consider a multivariate polynomial series for the external variables $\mathbf{y} \in \mathbb{R}^M$ of degree $h$, denoted as $\tilde{\pi}^{xS}(\mathbf{y}, \boldsymbol{\alpha}; \mathcal{H}_1) \in \mathbb{R}^N$, where $\boldsymbol{\alpha}$ contains the polynomial coefficients and $\mathcal{H}_1$ includes both the degree $h$ and the polynomial type $x$ (indicated by the superscript $xS$).
Each output dimension $n$ (for $n = 1, \ldots, N$) of the vector-valued $\tilde{\pi}^{xS}$ is given by:
$$ \tilde{\pi}_n^{xS}(\mathbf{y}, \boldsymbol{\alpha}_n; \mathcal{H}_1) = \sum_{i_1, \ldots, i_M = 0}^{h} \alpha_n^{i_1, \ldots, i_M} P_{i_1}^x(y_1) \cdots P_{i_M}^x(y_M) + \mathcal{O}(\mathbf{P}^x(\mathbf{y})^{h+1}), $$
where $\alpha_n^{i_1, \ldots, i_M}$ are the expansion coefficients, $P_{i_j}^x(y_j)$ represents the $i_j$-th degree polynomial of type $x$ for the variable $y_j$, and the remainder term $\mathcal{O}(\mathbf{P}^x(\mathbf{y})^{h+1})$ captures higher-order contributions. Although the summation includes terms beyond degree $h$ in its formulation (chosen to mimic a NN-like structure, as shown in Fig. 1), we enforce truncation by setting $\alpha_n^{i_1, \ldots, i_M} = 0$ for $i_1 + i_2 + \cdots + i_M > h$. The coefficient vector $\boldsymbol{\alpha}_n$ collects all $\big(\prod_{i=1}^{M}(h+i)\big)/M!$ coefficients $\alpha_n^{i_1, \ldots, i_M}$ up to degree $h$, with the complete parameter vector $\boldsymbol{\alpha} = [\boldsymbol{\alpha}_1, \ldots, \boldsymbol{\alpha}_N]^\top$ in $\tilde{\pi}^{xS}(\mathbf{y}, \boldsymbol{\alpha}; \mathcal{H}_1)$ aggregating all output dimensions $n = 1, \ldots, N$. We implement three polynomial variants in this framework: power series, Legendre and Chebyshev (of the second kind) polynomials, denoted by the superscript $x = P, L, C$ in Eq. (8).
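The coefficient count $\big(\prod_{i=1}^{M}(h+i)\big)/M! = \binom{h+M}{M}$ can be checked by directly enumerating the kept multi-indices; a small self-contained sketch (the helper names are ours, not the paper's):

```python
import math
from itertools import product

def n_coeffs(h, M):
    # Number of kept coefficients in Eq. (8): monomials with total degree
    # i_1 + ... + i_M <= h, i.e. C(h + M, M) = prod_{i=1}^{M}(h + i) / M!
    num = 1
    for i in range(1, M + 1):
        num *= h + i
    return num // math.factorial(M)

def multi_indices(h, M):
    # Enumerate the kept multi-indices (i_1, ..., i_M); all coefficients
    # with total degree above h are truncated to zero.
    return [idx for idx in product(range(h + 1), repeat=M) if sum(idx) <= h]

# Combinatorial growth with dimension M for a fixed degree h = 5:
for M in (1, 2, 3, 4):
    print(M, n_coeffs(5, M))
```

The printed counts illustrate the combinatorial growth with $M$ noted as limitation (i) of PSE methods.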
The power series polynomials match the traditional PSE approaches [19] with monomials $P_l^P(y_m) = y_m^l$ of degree $l \in \mathbb{N}$ for the variable $y_m$ ($m = 1, \ldots, M$). We consider Legendre and Chebyshev polynomials due to their increased accuracy and reduced computational costs in PINN forward problems [79, 94, 95]. Both polynomial types are orthogonal with respect to a weight function $\rho(y_m)$ and are defined on the interval $y_m \in [-1, 1]$ for every $m = 1, \ldots, M$. Specifically, any two univariate Legendre/Chebyshev polynomials of degrees $i, j \in \mathbb{N}$ with $i \neq j$ satisfy the orthogonality relation:
$$ \langle P_i(y_m), P_j(y_m) \rangle = \int_{-1}^{1} P_i(y_m) P_j(y_m)\, \rho(y_m)\, dy_m = 0. $$
For Legendre polynomials with weight function $\rho(y_m) = 1$, the $l$-th degree polynomial is given by the recurrence:
$$ P_0^L(y_m) = 1, \qquad P_1^L(y_m) = y_m, \qquad P_{l+1}^L(y_m) = \frac{(2l+1)\, y_m P_l^L(y_m) - l\, P_{l-1}^L(y_m)}{l+1}, $$
and for Chebyshev polynomials of the second kind with weight function $\rho(y_m) = \sqrt{1 - y_m^2}$, the $l$-th degree polynomial is defined by:
$$ P_0^C(y_m) = 1, \qquad P_1^C(y_m) = 2 y_m, \qquad P_{l+1}^C(y_m) = 2 y_m P_l^C(y_m) - P_{l-1}^C(y_m). $$
Neural Network component The hybrid scheme (see Fig.
1) further considers a feedforward artificial NN $\tilde{\pi}^{NN}(\mathbf{y}, \mathbf{p}; \mathcal{H}_2) \in \mathbb{R}^N$ with inputs $\mathbf{y} \in \mathbb{R}^M$, where $\mathbf{p}$ is a vector including the parameters of the NN (weights and biases of each layer) and $\mathcal{H}_2$ includes the NN hyperparameters, such as the number of hidden layers, the type and parameters of the activation function, etc. For simplicity, we assume a single-layer feedforward NN (SLFNN) with $L$ neurons in the hidden layer (more expressive architectures were tested and did not show significant numerical accuracy improvement). Each output of the SLFNN, $\tilde{\pi}_n^{NN}(\mathbf{y}, \mathbf{p}_n; \mathcal{H}_2)$ for $n = 1, \ldots, N$, is expressed as:
$$ \tilde{\pi}_n^{NN}(\mathbf{y}, \mathbf{p}_n; \mathcal{H}_2) = \mathbf{w}^{o(n)\top} \phi\left(\mathbf{W}^{(n)} \mathbf{y} + \mathbf{b}^{(n)}\right) + b^{o(n)}, $$
where $\mathbf{p}_n = [\mathbf{w}^{o(n)}, b^{o(n)}, \mathbf{W}^{(n)}, \mathbf{b}^{(n)}]^\top \in \mathbb{R}^{L(M+2)+1}$ includes: (i) the output weights $\mathbf{w}^{o(n)} = [w_1^{o(n)}, \ldots, w_L^{o(n)}]^\top \in \mathbb{R}^L$ connecting the hidden to the output layer, (ii) the bias $b^{o(n)} \in \mathbb{R}$ of the output layer, (iii) the internal weights $\mathbf{W}^{(n)} \in \mathbb{R}^{L \times M}$, where each row $\mathbf{w}_l^{(n)} = [w_{l,1}^{(n)}, \ldots, w_{l,M}^{(n)}]^\top \in \mathbb{R}^M$ corresponds to the weights between the input
neurons and the $l$-th neuron in the hidden layer, and (iv) the internal biases $\mathbf{b}^{(n)} = [b_1^{(n)}, \ldots, b_L^{(n)}]^\top \in \mathbb{R}^L$ of the neurons in the hidden layer. The outputs of the activated neurons $\phi_l(\mathbf{W}^{(n)} \mathbf{y} + \mathbf{b}^{(n)})$ for $l = 1, \ldots, L$ are included in the column vector $\boldsymbol{\phi}(\cdot) \in \mathbb{R}^L$. Here, the logistic sigmoid function is used as the activation function due to its universal approximation properties [88] and its suitability for symbolic differentiation. Physics-Informed Learning For acquiring an approximation of the IM functional in Eq. (4), the hybrid scheme should provide a solution of the system of NFEs in Eq. (5), according to Theorem 1. Let $\mathbf{y}^{(q)} \in \Omega$ be a set of $q = 1, \ldots, Q$ collocation points, selected from the domain $\Omega$ where the IM approximation is sought. Further assume a set of $r = 1, \ldots, R$ collocation points on the boundary of the polydisc $\mathcal{D}$ of radius $\mathbf{r}$, such that $\mathbf{y}^{(r)} \in \partial\mathcal{D}$.
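Putting the two components together, a minimal sketch of evaluating the hybrid approximation in Eq. (6) for $N = M = 1$; the parameter values here are illustrative and untrained, not the paper's:

```python
import numpy as np

def heaviside_gate(y, r):
    # Eq. (7): H(y, r) = 1 inside the polydisc D = {|y_m| < r_m}, else 0.
    return float(np.all(np.abs(y) < r))

def slfnn(y, W, b, w_out, b_out):
    # Eq. (12): single-hidden-layer NN with logistic sigmoid activation.
    phi = 1.0 / (1.0 + np.exp(-(W @ y + b)))
    return float(w_out @ phi + b_out)

def power_series_1d(y, alpha):
    # Eq. (8) for M = 1 with power-series monomials P_i(y) = y**i.
    return float(sum(a * y[0] ** i for i, a in enumerate(alpha)))

def hybrid(y, alpha, nn_params, r):
    # Eq. (6): the Heaviside exponents select exactly one of the components.
    if heaviside_gate(y, r) == 1.0:
        return power_series_1d(y, alpha)
    return slfnn(y, *nn_params)

# Illustrative, untrained setup: degree-8 Taylor coefficients of ln(1+y) as
# the polynomial part, and a random SLFNN outside a polydisc of radius 0.5.
rng = np.random.default_rng(0)
L = 10
alpha = np.array([0.0] + [(-1.0) ** (i + 1) / i for i in range(1, 9)])
nn_params = (rng.normal(size=(L, 1)), rng.normal(size=L),
             rng.normal(size=L), 0.0)
r = np.array([0.5])

inside = hybrid(np.array([0.1]), alpha, nn_params, r)   # polynomial branch
outside = hybrid(np.array([0.8]), alpha, nn_params, r)  # NN branch
```

In the trained scheme both branches approximate the same functional; here only the polynomial branch is meaningful, to show how the gate routes the evaluation.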
Then, the parameters of $\tilde{\pi}^{HxS}(\mathbf{y}, \boldsymbol{a}, \mathbf{p}; \mathcal{H}_1, \mathcal{H}_2, \mathbf{r})$, that render it an IM approximation, are computed by the solution of the optimization problem:
$$ \begin{array}{rl} \underset{\boldsymbol{a}, \mathbf{p}}{\operatorname{argmin}}\ \mathcal{L}(\boldsymbol{a}, \mathbf{p}; \mathcal{H}_1, \mathcal{H}_2, \mathbf{r}) \triangleq & \displaystyle \sum_{q=1}^{Q} \omega_q \Big\| \tilde{\pi}^{HxS}(\mathbf{A}\mathbf{y}^{(q)} + \mathbf{g}(\mathbf{y}^{(q)}), \boldsymbol{a}, \mathbf{p}; \mathcal{H}_1, \mathcal{H}_2, \mathbf{r}) - \mathbf{B}\tilde{\pi}^{HxS}(\mathbf{y}^{(q)}, \boldsymbol{a}, \mathbf{p}; \mathcal{H}_1, \mathcal{H}_2, \mathbf{r}) \\ & \displaystyle - \mathbf{C}\mathbf{y}^{(q)} - \mathbf{f}(\tilde{\pi}^{HxS}(\mathbf{y}^{(q)}, \boldsymbol{a}, \mathbf{p}; \mathcal{H}_1, \mathcal{H}_2, \mathbf{r}), \mathbf{y}^{(q)}) \Big\|^2 + \omega_1 \big\| \tilde{\pi}^{HxS}(\mathbf{0}^M, \boldsymbol{a}, \mathbf{p}; \mathcal{H}_1, \mathcal{H}_2, \mathbf{r}) \big\|^2 \\ & \displaystyle + \sum_{r=1}^{R} \omega_2 \big\| \tilde{\pi}^{xS}(\mathbf{y}^{(r)}, \boldsymbol{\alpha}; \mathcal{H}_1) - \tilde{\pi}^{NN}(\mathbf{y}^{(r)}, \mathbf{p}; \mathcal{H}_2) \big\|^2. \end{array} $$
The first term of the loss function in Eq. (13) is a PI term enforcing the satisfaction of the system of NFEs by $\tilde{\pi}^{HxS}(\cdot)$, while the second term ensures that $\tilde{\pi}^{HxS}(\cdot)$ passes through the equilibrium (which can be viewed as a loss function at the boundary). Note that the hybrid scheme approximation at the equilibrium consists only of the polynomial series component, since $\mathbf{0}^M \in \mathcal{D}$. The third term is a $C^0$ continuity condition at the boundary $\partial\mathcal{D}$ ($C^k$ continuity conditions can also be imposed), as it matches the polynomial series component with the NN component on the boundary collocation points $\mathbf{y}^{(r)} \in \partial\mathcal{D}$. The second and third terms in the loss function serve to pin the polynomial series and the NN components, respectively, and are further balanced by the weights $\omega_q, \omega_1, \omega_2 \in \mathbb{R}$. Remark 1. A key feature of the hybrid scheme in Eq.
(13) is that the IM approximation $\tilde{\pi}_n^{HxS}(\cdot)$ at $\mathbf{y}^{(q)}$ may be evaluated by the NN, while at $\mathbf{A}\mathbf{y}^{(q)} + \mathbf{g}(\mathbf{y}^{(q)})$ by the polynomial series, and vice versa, depending on whether these points lie inside or outside $\mathcal{D}$. The radius of the polydisc $\mathcal{D}$ is an additional hyperparameter that needs tuning, especially in the case of power series polynomials, since when Legendre or Chebyshev polynomials are considered, we naturally choose $r_m = 1$ for $m = 1, \ldots, M$. To solve the optimization problem in Eq. (13), we minimize the non-linear residuals resulting from the loss function, as detailed in Appendix B. Since the optimization problem is solved in $\boldsymbol{\alpha}$ and $\mathbf{p}$, an overdetermined system requires at least $Q + R = N\Big(L(M+2) + \big(\prod_{i=1}^{M}(h+i)\big)/M! + 1\Big)$ collocation points to form the residuals. For their minimization, we employ the Levenberg-Marquardt (LM) gradient-based optimization algorithm [96], which requires the derivatives of the hybrid scheme w.r.t. its parameters, as well as the derivatives $\partial_{\mathbf{x}}\mathbf{f}$. The former reduce to the computation of $\partial_{\boldsymbol{\alpha}}\tilde{\pi}^{xS}(\mathbf{y}^{(q)}, \boldsymbol{\alpha}; \mathcal{H}_1)$ and $\partial_{\mathbf{p}}\tilde{\pi}^{NN}(\mathbf{y}^{(q)}, \mathbf{p}; \mathcal{H}_2)$, which can be obtained analytically using symbolic differentiation. $\partial_{\mathbf{x}}\mathbf{f}$ can also be obtained analytically if $\mathbf{f}$ is known explicitly; otherwise, numerical differentiation can be employed. Finally, we compare the IM approximations provided by the hybrid scheme with those provided by purely using NNs.
For completeness, we discuss the optimization problem resulting from NNs in Appendix C. The implementation of the LM algorithm for solving both the hybrid and NN schemes is provided in Appendix D, along with a pseudo-code for minimizing the related non-linear residuals. Remark 2. We emphasize that the PI optimization problem in Eq. (13) is inherently non-linear, even within the polydisc $\mathcal{D}$ where the polynomial component of the hybrid scheme acts, due to the non-linearity of the NFEs. However, in a regression setting, where the hybrid scheme approximates a function using labeled targets, the polynomial counterpart reduces to a linear residual minimization problem. Unlike non-linear optimization, this problem does not require iterative methods (e.g., the LM algorithm) to determine the polynomial coefficients. Instead, it should be solved directly using linear techniques such as the Moore-Penrose pseudo-inverse, as iterative optimization introduces unnecessary numerical errors in the coefficient estimates; we provide such a polynomial regression example in Appendix E. Remark 3. Building on Remark 2, one might consider linearizing the PI optimization problem in Eq. (13) by locally approximating the polynomial counterpart via Taylor series expansions around the equilibrium, following [18, 19]. However, this would necessitate a two-step procedure for the hybrid scheme: first, solve the linearized problem within $\mathcal{D}$ to obtain the polynomial coefficients, and then solve a non-linear optimization problem to train the NN for approximating the IM outside $\mathcal{D}$. This approach has several drawbacks: (i) it requires Taylor expansions of the non-linear functions $\mathbf{f}$ and $\mathbf{g}$ in Eq.
(2), and as such it does not provide flexibility on the radius of the polydisc $\mathcal{D}$, since the latter is constrained by the radius of convergence of the Taylor expansions, (ii) it is problem-dependent, as the linearized polynomial series varies for each dynamical system, and (iii) it becomes computationally intractable as the system dimension or polynomial degree increases. In contrast, our method, while sacrificing some accuracy in the polynomial coefficients (see Remark 2), provides a one-step, plug-and-play solution that avoids these limitations. # 2.2 Implementation of the physics-informed approaches for the IM approximation This section provides all the necessary details for implementing the PI hybrid scheme introduced in Section 2.1, along with the PI NN scheme. Specifically, we describe the training process and discuss the evaluation of the numerical accuracy of the learned IM approximations. An overview of the training and evaluation process is presented in Algorithm 1, the steps of which are explained in detail in the following paragraphs. Algorithm 1 Outline of training and evaluation of the IM functionals approximated via the PI hybrid and NN schemes; bold comments denote paragraphs of Section 2.2 where the specific step is discussed. # 2.2.1 Training process The training process was conducted over 100 training realizations to quantify the uncertainty in training performance. Below, we outline its key components: training data acquisition, the architectures and hyperparameters selected, the initialization of parameters, and the metrics used to evaluate convergence. These components correspond to steps 2, 5, 7, and 13 of Algorithm 1, respectively. The minimization of non-linear residuals via the LM algorithm is presented in Appendix D.
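The LM details are deferred to Appendix D; as a generic sketch (not the paper's implementation), one damped Gauss-Newton step solves $(\mathbf{J}^\top\mathbf{J} + \lambda\mathbf{I})\,\delta = -\mathbf{J}^\top\mathbf{r}$:

```python
import numpy as np

def lm_step(residual, jac, p, lam):
    # One Levenberg-Marquardt update: solve (J^T J + lam*I) dp = -J^T r(p).
    r = residual(p)
    J = jac(p)
    A = J.T @ J + lam * np.eye(p.size)
    return p + np.linalg.solve(A, -J.T @ r)

# Toy problem with a single linear residual r(p) = p - 3: with a small
# damping factor (lam = 1e-2, the initial value used in Section 2.2.1)
# one step moves the iterate nearly all the way to the minimizer p* = 3.
residual = lambda p: p - 3.0
jac = lambda p: np.eye(1)
p = lm_step(residual, jac, np.zeros(1), lam=1e-2)
```

In practice $\lambda$ is adapted between iterations, interpolating between gradient descent (large $\lambda$) and Gauss-Newton (small $\lambda$).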
Training data acquisition To learn the IM approximations using the proposed PI hybrid scheme in Section 2.1, or the PI NN scheme in Appendix C, a collection of $Q$ collocation points $\mathbf{y}^{(q)}$ for $q = 1, \ldots, Q$ is required. Since the IM $\mathcal{M}$ in Eq. (4) is defined in a local neighborhood $\mathcal{V} \subset \mathbb{R}^M$ around the equilibrium $\mathbf{y}_0$ (see Theorem 1), the collocation points are collected within a desired domain $\Omega \subset \mathcal{V}$, chosen to avoid singular points. For the $M = 1$-dim. case, $\Omega = [a_1, b_1]$ is an interval containing $y_0 = 0$. However, the generalization of $\Omega$ in higher dimensions is not straightforward. For example, there is no guarantee that the IM exists for all $\mathbf{y}$ in the hyperrectangle $\Omega = [a_1, b_1] \times [a_2, b_2] \times \cdots \times [a_M, b_M]$. To address this, for $M > 1$, we collected collocation points from numerically derived trajectories, following the approach in [77, 78]. Specifically, we selected 200 initial conditions randomly chosen outside a predefined hyperrectangle $\Omega$ and generated trajectories of the system in Eq. (1). After discarding a transient of $k = 8$ time steps (to allow the system to approach the IM), we recorded all subsequent time steps until reaching the equilibrium, using a cutoff of 0.001 for each variable (to avoid oversampling near the equilibrium). After ensuring $\mathbf{y} \in \Omega$ for the recorded points, we randomly selected $Q$ of them and used their $\mathbf{y}$ values to form the training dataset $\mathbf{y}^{(q)}$ for $q = 1, \ldots, Q$. For the $M = 1$-dim. cases, we simply sampled $Q$ uniform points from $\mathcal{U}([a_1, b_1])$.
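A minimal sketch of this trajectory-based collection, illustrated on the 1-D example system of Section 3.1 for concreteness (the paper uses trajectories for $M > 1$ and uniform sampling for $M = 1$; the initial-condition range and step counts here are illustrative):

```python
import numpy as np

# Trajectory-based collection of collocation points, sketched on the 1-D
# example system of Section 3.1 (beta = -0.4); ranges are illustrative.
rng = np.random.default_rng(0)
beta = -0.4

def step(x, y):
    # One iteration of the example system in Eq. (15).
    return beta * x + y, (1.0 + y) ** beta * np.exp(y) - 1.0

samples = []
for _ in range(200):                      # 200 random initial conditions
    x, y = rng.uniform(-0.5, 0.5, size=2)
    for k in range(100):
        x, y = step(x, y)
        if abs(y) < 1e-3:                 # cutoff near the equilibrium
            break
        if k >= 8:                        # discard an 8-step transient
            samples.append((x, y))

samples = np.array(samples)
```

Because the off-manifold component contracts by a factor $|\beta|$ per step, the recorded points lie close to the IM $x = \ln(1+y)$ after the transient.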
To enable a fair comparison of computational training times between the PI hybrid and NN schemes, we chose the number of collocation points $Q$ such that the number of residuals is 20 times the number of unknown parameters of the NN component (which is common between the two PI schemes). In addition, for the continuity term in the loss function in Eq. (13), it is further required to collect $R$ collocation points on the boundary $\partial\mathcal{D}$. For the $M = 1$-dim. case, we simply select the two points $\mathbf{y}^{(r)} = \pm\mathbf{r}$. For higher dimensions, we uniformly sample $5^{M-1}$ points along each face of $\partial\mathcal{D} = \{\mathbf{y} \in \mathbb{R}^M : |y_m| = r_m, \forall m = 1, \ldots, M\}$; e.g., for $M = 2$, we sample $R = 4 \cdot 5 = 20$ points across the four edges of the rectangle $\partial\mathcal{D}$, while for $M = 3$, we sample $R = 6 \cdot 5^2 = 150$ points across the six faces of the cube $\partial\mathcal{D}$. Architectures and Hyperparameters For the polynomial series components of the PI hybrid scheme presented in Eq. (8), we considered power series, Legendre polynomials and Chebyshev polynomials of the second kind. The degree $h$ of the polynomials varied across the examples studied, depending on the desired accuracy. For the NN component, we employed the SLFNN architecture presented in Eq. (12). Increasing the depth to two hidden layers did not yield significant improvements in accuracy, so we used a single hidden layer with $L = 10$ neurons and a logistic sigmoid activation function in all examples. An important hyperparameter for the PI hybrid scheme is the radius $\mathbf{r}$ of the polydisc $\mathcal{D}$.
Each element $r_m$ of $\mathbf{r}$ is free to vary within the interval $[\min(a_m, b_m), \max(a_m, b_m)]$, where $[a_m, b_m]$ is the range of the $y_m$ variable in the hyperrectangle $\Omega$ of the training dataset. For Legendre and Chebyshev polynomials, the restriction $r_m \leq 1$ applies to ensure the polynomials are well-defined. The choice of $r_m$ determines the polydisc $\mathcal{D}$ where the polynomial series counterpart is active. Since no prior information about the underlying IM functional is generally available, $r_m$ requires careful tuning. In all the examples, we tuned $r_m$ starting from values leaving approximately half of the collocation points outside $\mathcal{D}$. While it is beyond the scope of this work, we emphasize that further analysis is needed to determine optimal values of $r_m$, taking into account the polynomial degree $h$. For the optimization problem, we determined the three balancing weights $\omega_q, \omega_1, \omega_2$ in Eq. (13) using residual balancing [97], where $\omega_q$ was set to 1 for the NN residuals; i.e., for $\mathbf{y}^{(q)} \notin \mathcal{D}$. For the implementation of the LM algorithm, we configured the initial damping factor to $\lambda_0 = 10^{-2}$ and the hyperparameters related to the stopping criteria to a maximum number of epochs $k_{max} = 2000$, a relative function tolerance $tol_F = 10^{-8}$ and a relative step tolerance $tol_R = 10^{-4}$; see Appendix D for further details. Parsimonious initialization of parameters The solution of the optimization problem in Eq. (13) requires an initial guess of the learnable parameters: the coefficients $\boldsymbol{\alpha}$ of the polynomial series counterpart and the parameters $\mathbf{p}$ of the NN one.
While a random initial guess can be selected, we observed that such an approach leads to poor convergence for approximately $20\%$ of the training realizations. This behavior arises in high-degree polynomial series, because random initialization can result in high-order terms dominating the loss function, causing the optimizer to converge to suboptimal local minima of the non-linear loss function. To address this, we employ a parsimonious initialization strategy for the polynomial coefficients $\boldsymbol{\alpha}$, ensuring that each monomial is $\mathcal{O}(1)$. In particular, we initialize the non-zero coefficients $\alpha_n^{i_1, \ldots, i_M}$ in Eq. (8) by sampling random values uniformly distributed in the interval $[-1/c_n^{i_1, \ldots, i_M}, 1/c_n^{i_1, \ldots, i_M}]$, where
$$ c_n^{i_1, \ldots, i_M} = \operatorname*{max}_{1 \leq q \leq Q} \left\{ \left| P_{i_1}^x(y_1^{(q)}) \cdots P_{i_M}^x(y_M^{(q)}) \right| \right\}, \qquad \forall i_1, \ldots, i_M = 0, \ldots, h, \quad \mathrm{with} \quad i_1 + \cdots + i_M \leq h. $$
Here $y_m^{(q)}$ denotes the $m$-th element of the $q$-th collocation point $\mathbf{y}^{(q)}$. For the NN parameters $\mathbf{p}$, we use a uniform Xavier/Glorot initialization [98]. To further avoid numerically exploding or vanishing gradients, we ensure that $\mathbf{W}^{(n)} \mathbf{y} + \mathbf{b}^{(n)} = \mathcal{O}(1)$ by further scaling the Xavier-initialized input weights $w_{l,m}^{(n)}$ by $\max_q(y_m^{(q)}) - \min_q(y_m^{(q)})$.
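A sketch of the parsimonious initialization for a 1-D power-series component (the collocation range and degree here are illustrative, and the helper name is ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def parsimonious_init(Y, h):
    # Draw each power-series coefficient alpha_i uniformly from
    # [-1/c_i, 1/c_i] with c_i = max_q |P_i(y^(q))|, so that every monomial
    # alpha_i * y**i is O(1) on the collocation points (cf. Eq. (14), M = 1).
    alphas = []
    for i in range(h + 1):
        c_i = np.max(np.abs(Y ** i))
        alphas.append(rng.uniform(-1.0 / c_i, 1.0 / c_i))
    return np.array(alphas)

Y = np.linspace(-2.0, 2.0, 101)        # illustrative collocation points
alpha0 = parsimonious_init(Y, 10)
```

With $|y| \le 2$ the scale $c_i = 2^i$ grows geometrically, so the high-order coefficients are initialized correspondingly small; no single term can dominate the initial loss.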
The parameters $\boldsymbol{\alpha}$ and $\mathbf{p}$, initialized using the above procedure, consistently demonstrated good convergence by improving the conditioning of the optimization problem and enabling the LM algorithm to perform reliably. Training metrics To evaluate the convergence of the optimization problem in Eqs. (13) and (C.1), we record the values of the respective loss functions at the end of training for each of the 100 realizations of different initial parameters, along with the computational times required for training. The value of each loss function corresponds to the squared $l_2$ error, $\|\mathcal{F}\|_2^2$, of the non-linear residuals $\mathcal{F}$ minimized through the LM algorithm. To quantify the uncertainty and assess the consistency of convergence, we compute the mean and 5-95% confidence intervals (CIs) of these training metrics across the 100 realizations. # 2.2.2 Evaluation of numerical approximation accuracy For assessing the numerical approximation accuracy of the IM approximations obtained by the PI hybrid and pure NN schemes, we compare data points that lie exclusively on the true IM $\mathcal{M}$ in Eq. (4) to their corresponding approximations. Specifically, for a given $\mathbf{y} \in \Omega$, the true IM mapping $\pi(\mathbf{y})$ provides the corresponding $\mathbf{x}$ (i.e., $\mathbf{x} = \pi(\mathbf{y})$), while the IM functional approximation yields the estimate $\tilde{\mathbf{x}} = \tilde{\pi}(\mathbf{y})$. By comparing $\mathbf{x}$ and $\tilde{\mathbf{x}}$, we quantify the accuracy of the IM approximation. In the sequel, we discuss the construction of the testing data sets and the error metrics used for quantifying numerical accuracy. Testing data acquisition As previously discussed, the construction of the testing data set requires data points that lie exclusively on the underlying IM $\mathcal{M}$ in Eq. (4).
However, an explicit expression for the IM mapping is generally unknown for dynamical systems of the form of Eq. (1). To address this, we collected testing data from numerically derived trajectories. Following the same procedure as for acquiring the training data, we selected $100N$ random initial conditions outside $\Omega$ within example-specific bounds, generated trajectories, discarded a transient of $k = 8$ time steps and finally recorded subsequent time steps until reaching the equilibrium, using a cutoff of 0.001 for each variable. We randomly selected $S = 10{,}000$ of the recorded points and used both $\mathbf{y}^{(s)}$ and $\mathbf{x}^{(s)}$ to form the testing sets, $(\mathbf{x}^{(s)}, \mathbf{y}^{(s)}) \in \mathcal{M}$, ensuring $\mathbf{y}^{(s)} \in \Omega$ for $s = 1, \ldots, S$. This construction procedure was followed for every example, except for the first one, where the IM mapping $\pi(y) = \ln(1+y)$ was analytically known and used to generate the testing data. Numerical accuracy metrics To assess the numerical approximation accuracy, we simply evaluated the error between $\mathbf{x}^{(s)}$ and $\tilde{\mathbf{x}}^{(s)} = \tilde{\pi}(\mathbf{y}^{(s)})$ on the testing data set. Specifically, we computed the $l_1$, $l_2$ and $l_\infty$ norms of $\big\|\mathbf{x}_n^{(s)} - \tilde{\pi}_n(\mathbf{y}^{(s)})\big\|$ for all $s = 1, \ldots, S$, as well as the mean squared error $\mathrm{MSE}(\mathbf{x}_n^{(s)}, \tilde{\pi}_n(\mathbf{y}^{(s)}))$, for all the $n = 1, \ldots, N$ components of the IM approximations, and report their mean and 5-95% CIs across the 100 parameter sets learned during training.
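These error metrics can be sketched as follows (arrays of shape $S \times N$: $S$ test samples by $N$ components; the helper name and the toy values are ours):

```python
import numpy as np

def accuracy_metrics(x_true, x_approx):
    # Error norms between on-manifold test values x^(s) and the IM
    # approximation pi~(y^(s)); inputs of shape (S, N), one metric value
    # per output component n.
    e = np.abs(x_true - x_approx)
    return {
        "l1":   np.sum(e, axis=0),
        "l2":   np.sqrt(np.sum(e ** 2, axis=0)),
        "linf": np.max(e, axis=0),
        "mse":  np.mean(e ** 2, axis=0),
    }

# Toy check with S = 3 samples, N = 1 component, |error| = 0.1 everywhere:
m = accuracy_metrics(np.array([[1.0], [2.0], [3.0]]),
                     np.array([[1.1], [1.9], [3.1]]))
```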
For visualization purposes, we selected the parameter set with the best $l_2$ norm error and displayed the absolute errors $| \mathbf{x}^{(s)} - \tilde{\pi}(\mathbf{y}^{(s)}) |$ across all points to highlight regions of high and low numerical approximation accuracy.

# 3 Illustrative examples

To demonstrate the efficiency of the proposed PI hybrid scheme, we consider three illustrative examples. The first example involves a discrete dynamical system with an exact $N = 1$-dim. IM functional, providing a known exact mapping to validate our approach. The second example focuses on an enzymatic bioreactor problem studied in [19], for which the power series expansion is known. Finally, the third example generalizes to multiple dimensions, allowing us to explore $N = 1$- and $N = 2$-dim. IMs, which are functions of $M = 1$, $M = 2$ and $M = 3$ variables. Below, we briefly describe the examples and provide the analytic PSE for the IM approximations, derived by the Taylor expansion approach proposed in [18, 19].

# 3.1 Example with analytic invariant manifold

We first consider a non-linear discrete dynamical system expressed in the form of Eq. (1) with $N = 1$ and $M = 1$: $$ \begin{array}{l} x(k+1) = \beta x(k) + y(k), \\ y(k+1) = (1 + y(k))^{\beta} e^{y(k)} - 1, \end{array} $$ where $\beta \neq 0$ and $y \geq -1$. For $\beta < 0$, a singularity occurs at $y = -1$. The equilibrium of the system is $(x_0, y_0) = (0, 0)$ and Assumption 1 is satisfied for $\beta \neq -1$. Under this assumption, we rewrite the system in the form of Eq. (2) as: $$ \begin{array}{l} x(k+1) = \beta x(k) + y(k), \\ y(k+1) = (1 + \beta) y(k) + (1 + y(k))^{\beta} e^{y(k)} - (1 + \beta) y(k) - 1, \end{array} $$ where all the assumptions regarding the non-linear functions are satisfied at the equilibrium point.
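The attraction of trajectories to an invariant manifold can be checked numerically for this system; a minimal sketch with $\beta = -0.4$ (the value adopted below) and an arbitrary off-manifold start. Since $\ln(1+y(k+1)) = \beta \ln(1+y(k)) + y(k)$ for the second equation, the defect $d(k) = x(k) - \ln(1+y(k))$ of the exact mapping in Eq. (16) obeys $d(k+1) = \beta\, d(k)$, i.e., it contracts geometrically:

```python
import math

beta = -0.4  # the value adopted for this example

def step(x, y):
    """One iteration of the discrete system in Eq. (15)."""
    return beta * x + y, (1.0 + y) ** beta * math.exp(y) - 1.0

# Track the defect d(k) = x(k) - ln(1 + y(k)) from an off-manifold start.
x, y = 0.5, 0.5
defects = [x - math.log1p(y)]
for _ in range(10):
    x, y = step(x, y)
    defects.append(x - math.log1p(y))

# d(k+1)/d(k) should equal beta at every step.
ratios = [defects[k + 1] / defects[k] for k in range(10)]
```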
According to Theorem 1, and since $\mathbf{A} = 1 + \beta$ and $\mathbf{B} = \beta$, an analytic local IM $\mathcal{M}$ exists, in the form of Eq. (4), in a neighborhood of the equilibrium, provided that the non-resonance condition $(1 + \beta)^d \neq \beta$ holds for any $d \in \mathbb{N}$. It can be shown that the IM mapping $\pi$, given by: $$ x = \pi(y) = \ln(1 + y), $$ is the exact analytic solution of the NFEs in Eq. (5) for the system Eq. (15). Here, we set $\beta = -0.4$, ensuring that the equilibrium is stable (trajectories are attracted to it) and the non-resonance condition is satisfied. Note that, as previously mentioned and evident from Eq. (16), a singularity exists at $y = -1$. Although the exact IM mapping is known for this example, we approximated it analytically using the PSE approach proposed in [18], as described in Appendix A. The resulting PSE for the IM approximation coincides with the well-known $h$-th degree Taylor expansion of the logarithmic function: $$ \tilde{x} = \tilde{\pi}^{PSE}(y) = \sum_{i=1}^{h} \alpha_{PSE}^{i} P_i^{P}(y), \qquad \mathrm{where} \quad \alpha_{PSE}^{i} = (-1)^{i+1}/i, \quad \mathrm{and} \quad P_i^{P}(y) = y^{i}. $$ The radius of convergence of this power series is $|y| < r = 1$. Additionally, since the exact IM mapping is available in Eq.
(16), we approximated it using Legendre and Chebyshev polynomials, yielding: $$ \tilde{x} = \tilde{\pi}^{LSE}(y) = \sum_{i=1}^{h} \alpha_{LSE}^{i} P_i^{L}(y), \quad \mathrm{where} \quad \alpha_{LSE}^{i} = \frac{2i+1}{2} \int_{-1}^{1} \ln(1+y)\, P_i^{L}(y)\, dy, $$ for the Legendre approximation, and $$ \tilde{x} = \tilde{\pi}^{CSE}(y) = \sum_{i=1}^{h} \alpha_{CSE}^{i} P_i^{C}(y), \quad \mathrm{where} \quad \alpha_{CSE}^{i} = \frac{2}{\pi} \int_{-1}^{1} \ln(1+y)\, P_i^{C}(y) \frac{dy}{\sqrt{1-y^2}}, $$ for the Chebyshev approximation. The Legendre and Chebyshev polynomials, $P_i^{L}(y)$ and $P_i^{C}(y)$, are defined by the recursive formulas in Eq. (10) and Eq. (11), respectively. In this example, we not only have the exact expression of the IM mapping in Eq. (16), but also the analytic expressions of the power, Legendre and Chebyshev polynomial series in Eqs. (17) to (19), computed for degree $h = 20$. The former will be used for evaluating the accuracy of the PI schemes, while the latter will serve as approximations for comparison purposes.

# 3.2 Enzymatic bioreactor problem

We consider the enzymatic bioreactor problem studied in [19], which models a continuous stirred tank reactor (CSTR) where an enzyme converts a substrate into product via a ping-pong bi-bi mechanism.
The system is described by the following non-linear ODEs for the substrate concentration $S$ and the enzyme concentration $E$: $$ \begin{array}{l} \displaystyle \frac{dS}{dt} = \frac{k_1 E S}{1 - k_2 S} + v_r (S_0 - S), \\ \displaystyle \frac{dE}{dt} = -k_{d1} E, \end{array} $$ where $k_1$, $k_2$ and $k_{d1}$ are kinetic parameters, $v_r$ is the ratio of the flow rate of substrate to the reactor volume and $S_0$ is the substrate concentration in the feed stream. We adopt the parameter values $k_1 = 0.082$, $k_2 = 0.59$, $k_{d1} = 0.0034$, $v_r = 2$ and $S_0 = 3.4$, following [19]. To transform the continuous system in Eq. (20) into a discrete one, we use a time-discretization step $\delta = 0.01$, yielding the discrete dynamical system: $$ \begin{array}{l} S(k+1) = (1 - \delta v_r) S(k) + \delta \frac{k_1 E(k) S(k)}{1 - k_2 S(k)} + \delta v_r S_0, \\ E(k+1) = (1 - \delta k_{d1}) E(k). \end{array} $$ With the chosen parameter set, the system exhibits a stable equilibrium $(S^0, E^0) = (S_0, 0)$. Introducing the deviations from equilibrium $x = S - S^0$ and $y = E - E^0$, we obtain a discrete dynamical system in the form of Eq. (1), which can be further cast in the form of Eq. (2) as: $$ \begin{array}{l} \displaystyle x(k+1) = (1 - \delta v_r) x(k) + \frac{\delta k_1 S_0}{1 - k_2 S_0} y(k) + \frac{\delta k_1 y(k) x(k)}{(1 - k_2 S_0)(1 - k_2 S_0 - k_2 x(k))}, \\ y(k+1) = (1 - \delta k_{d1}) y(k). \end{array} $$ The system in Eq.
(21) satisfies all the assumptions regarding the non-linear functions at the equilibrium $(x_0, y_0) = (0, 0)$. Additionally, since $\mathbf{A} = 1 - \delta k_{d1}$ and $\mathbf{B} = 1 - \delta v_r$, the assumptions of Theorem 1 are satisfied, as the non-resonance condition $(1 - \delta k_{d1})^d \neq 1 - \delta v_r$ holds for any $d \in \mathbb{N}$. This implies the existence of an analytic $N = 1$-dim. IM $\mathcal{M}$ in the form of Eq. (4), the mapping of which, $\tilde{x} = \tilde{\pi}(y)$, can be approximated by solving the NFEs Eq. (5). Following Appendix A, we obtained the PSE approximation of the IM functional $\tilde{x} = \tilde{\pi}^{PSE}(y)$ in the form of Eq. (A.2). The analytically determined coefficients $\pi^{1}, \ldots, \pi^{1,\ldots,1}$ are in excellent agreement with those reported in [19] for a Taylor expansion of degree $h = 10$.

# 3.3 Multi-dimensional example

We now consider a discrete dynamical system that can be generalized to multiple dimensions in both $\mathbf{x}$ and $\mathbf{y}$. Specifically, we examine the system: $$ \begin{array}{rl} & \hat{\mathbf{x}}(k+1) = \cos(\hat{\mathbf{B}} \hat{\mathbf{x}}(k)) + \hat{\mathbf{C}} \hat{\mathbf{y}}(k), \\ & \hat{\mathbf{y}}(k+1) = \sin(\hat{\mathbf{A}} \hat{\mathbf{y}}(k)), \end{array} $$ where $\hat{\mathbf{x}} \in \mathbb{R}^N$, $\hat{\mathbf{y}} \in \mathbb{R}^M$, and $\hat{\mathbf{A}} \in \mathbb{R}^{M \times M}$, $\hat{\mathbf{B}} \in \mathbb{R}^{N \times N}$ and $\hat{\mathbf{C}} \in \mathbb{R}^{N \times M}$.
The equilibrium points of this system, say $(\hat{\mathbf{x}}_0, \hat{\mathbf{y}}_0)$, satisfy: $$ \hat{\mathbf{x}}_0 - \cos(\hat{\mathbf{B}} \hat{\mathbf{x}}_0) - \hat{\mathbf{C}} \hat{\mathbf{y}}_0 = \mathbf{0}^N, \qquad \hat{\mathbf{y}}_0 - \sin(\hat{\mathbf{A}} \hat{\mathbf{y}}_0) = \mathbf{0}^M. $$ Here, we compute the IM around the origin $\hat{\mathbf{y}}_0 = \mathbf{0}^M$, which implies that $\hat{\mathbf{x}}_0$ satisfies $\hat{\mathbf{x}}_0 - \cos(\hat{\mathbf{B}} \hat{\mathbf{x}}_0) = \mathbf{0}^N$. The Jacobian matrix of the system in Eq. (22) at the equilibrium points $(\hat{\mathbf{x}}_0, \hat{\mathbf{y}}_0)$ is: $$ \mathbf{J}(\hat{\mathbf{x}}_0, \hat{\mathbf{y}}_0) = \left[ \begin{array}{cc} -\mathrm{diag}\left( \sin(\hat{\mathbf{B}} \hat{\mathbf{x}}_0) \right) \hat{\mathbf{B}} & \hat{\mathbf{C}} \\ \mathbf{0}^{M \times N} & \mathrm{diag}\left( \cos(\hat{\mathbf{A}} \hat{\mathbf{y}}_0) \right) \hat{\mathbf{A}} \end{array} \right] = \left[ \begin{array}{cc} -\mathrm{diag}\left( \mathbf{1}^N - \hat{\mathbf{x}}_0^2 \right)^{1/2} \hat{\mathbf{B}} & \hat{\mathbf{C}} \\ \mathbf{0}^{M \times N} & \hat{\mathbf{A}} \end{array} \right], $$ where $\mathbf{0}^{M \times N}$ is the $M \times N$ zero matrix, $\mathbf{1}^N$ is the $N$-dim. column vector of ones and $\hat{\mathbf{x}}_0^2$ denotes the element-wise square of $\hat{\mathbf{x}}_0$.
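The block-triangular structure of the Jacobian in Eq. (23) is easy to verify numerically; a minimal sketch with small hypothetical matrices `A`, `B`, `C` (illustrative values, not those of the cases below), comparing the closed-form Jacobian against central finite differences:

```python
import numpy as np

# Hypothetical small matrices, chosen so the fixed-point iteration converges.
A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[0.3, 0.1], [0.0, 0.2]])
C = np.array([[0.2, -0.1], [0.1, 0.3]])

# Equilibrium around y0 = 0:  x0 = cos(B x0), from Eq. (23).
x0 = np.ones(2)
for _ in range(200):
    x0 = np.cos(B @ x0)

# Jacobian of the map (x, y) -> (cos(Bx) + Cy, sin(Ay)) at (x0, 0), per Eq. (23).
J_formula = np.block([
    [-np.diag(np.sin(B @ x0)) @ B, C],
    [np.zeros((2, 2)), np.diag(np.cos(A @ np.zeros(2))) @ A],
])

def F(z):
    x, y = z[:2], z[2:]
    return np.concatenate([np.cos(B @ x) + C @ y, np.sin(A @ y)])

# Central finite-difference check of the same Jacobian.
z0, h = np.concatenate([x0, np.zeros(2)]), 1e-6
J_fd = np.column_stack([(F(z0 + h * e) - F(z0 - h * e)) / (2 * h)
                        for e in np.eye(4)])
```

At the equilibrium, $\sin(\hat{\mathbf{B}}\hat{\mathbf{x}}_0) = (\mathbf{1}^N - \hat{\mathbf{x}}_0^2)^{1/2}$ follows element-wise from $\cos(\hat{\mathbf{B}}\hat{\mathbf{x}}_0) = \hat{\mathbf{x}}_0$, which is how the second form of the Jacobian arises.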
Equation (23) implies that the stability of the equilibrium points $(\hat{\mathbf{x}}_0, \hat{\mathbf{y}}_0)$ depends on the eigenvalues of $\hat{\mathbf{B}}$ and $\hat{\mathbf{A}}$, since the eigenvalues of $-\mathrm{diag}\left( \mathbf{1}^N - \hat{\mathbf{x}}_0^2 \right)^{1/2}$ always have magnitude less than 1. Next, we introduce deviation variables from the above equilibrium points, $\mathbf{x} = \hat{\mathbf{x}} - \hat{\mathbf{x}}_0$ and $\mathbf{y} = \hat{\mathbf{y}}$, rewriting the system in Eq. (22) in the form of Eq. (1) as: $$ \begin{array}{rl} & \mathbf{x}(k+1) = \mathrm{diag}(\hat{\mathbf{x}}_0) \cos(\hat{\mathbf{B}} \mathbf{x}(k)) - \mathrm{diag}\left( \mathbf{1}^N - \hat{\mathbf{x}}_0^2 \right)^{1/2} \sin(\hat{\mathbf{B}} \mathbf{x}(k)) + \hat{\mathbf{C}} \mathbf{y}(k) - \hat{\mathbf{x}}_0, \\ & \mathbf{y}(k+1) = \sin(\hat{\mathbf{A}} \mathbf{y}(k)), \end{array} $$ with the equilibrium points now represented by the origin $(\mathbf{x}_0, \mathbf{y}_0) = (\mathbf{0}^N, \mathbf{0}^M)$ in the new state variables. This system can be rewritten in the form of Eq.
(2) by setting: $$ \begin{array}{rl} & \mathbf{A} = \hat{\mathbf{A}}, \quad \mathbf{B} = -\mathrm{diag}\left( \mathbf{1}^N - \hat{\mathbf{x}}_0^2 \right)^{1/2} \hat{\mathbf{B}}, \quad \mathbf{C} = \hat{\mathbf{C}}, \\ & \mathbf{f}(\mathbf{x}(k), \mathbf{y}(k)) = \mathrm{diag}(\hat{\mathbf{x}}_0) \left( \cos(\hat{\mathbf{B}} \mathbf{x}(k)) - \mathbf{1}^N \right) + \mathrm{diag}\left( \mathbf{1}^N - \hat{\mathbf{x}}_0^2 \right)^{1/2} \left( \hat{\mathbf{B}} \mathbf{x}(k) - \sin(\hat{\mathbf{B}} \mathbf{x}(k)) \right), \\ & \mathbf{g}(\mathbf{y}(k)) = \sin(\hat{\mathbf{A}} \mathbf{y}(k)) - \hat{\mathbf{A}} \mathbf{y}(k). \end{array} $$ For any choice of matrices $\hat{\mathbf{A}}$, $\hat{\mathbf{B}}$ and $\hat{\mathbf{C}}$, Eq. (24) implies $\mathbf{f}(\mathbf{x}_0, \mathbf{y}_0) = \mathbf{g}(\mathbf{y}_0) = \mathbf{0}$ and $\partial \mathbf{f}/\partial \mathbf{x} |_{(\mathbf{x}_0, \mathbf{y}_0)} = \partial \mathbf{f}/\partial \mathbf{y} |_{(\mathbf{x}_0, \mathbf{y}_0)} = \partial \mathbf{g}/\partial \mathbf{y} |_{\mathbf{y}_0} = \mathbf{0}$ with appropriate dimensionality. However, to satisfy Assumption 1, the matrix $\hat{\mathbf{A}}$ needs to have non-zero eigenvalues. In the sequel, we consider three cases of different dimensions for the original system in Eq. (22), aligned with the above state space representation Eq. (24). We chose parameters such that the resulting system trajectories are attracted to their equilibrium points and the non-resonance conditions are satisfied, thus ensuring the existence of an IM through Theorem 1.
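That $\mathbf{f}$ and $\mathbf{g}$ in Eq. (24) vanish at the origin together with their Jacobians can be confirmed directly, since both must then be (at least) quadratically small near the equilibrium; a scalar sketch ($N = M = 1$, using the Section 3.3.1 values $\hat{A} = 0.99$, $\hat{B} = 0.8$):

```python
import math

Ahat, Bhat = 0.99, 0.8   # scalar values from Section 3.3.1

# Equilibrium x0 solving x0 = cos(Bhat * x0), i.e. Eq. (23) with y0 = 0.
x0 = 1.0
for _ in range(200):
    x0 = math.cos(Bhat * x0)
s = math.sqrt(1.0 - x0 * x0)   # equals sin(Bhat * x0) at the equilibrium

def f(x):
    # Scalar f from Eq. (24); the y-dependence drops out for this system.
    return x0 * (math.cos(Bhat * x) - 1.0) + s * (Bhat * x - math.sin(Bhat * x))

def g(y):
    # Scalar g from Eq. (24).
    return math.sin(Ahat * y) - Ahat * y

# f(0) = g(0) = 0 exactly; near 0 both are dominated by quadratic/cubic terms.
eps = 1e-4
f_eps, g_eps = f(eps), g(eps)
```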
In all cases, we compute the PSE approximations $\tilde{\mathbf{x}} = \tilde{\pi}^{PSE}(\mathbf{y})$ of the IM functionals by determining the coefficients of the Taylor series expansion in Eq. (A.2), as described in Appendix A. The resulting PSE approximations of the IM are used for comparison with the IM approximations provided by the PI schemes.

# 3.3.1 An example of $N = 1$ and $M = 1$

First, we consider a simple case with $N = 1$ and $M = 1$; i.e., $\mathbf{x} \in \mathbb{R}$ and $\mathbf{y} \in \mathbb{R}$. We select the parameter values $\hat{\mathbf{A}} = 0.99$, $\hat{\mathbf{B}} = 0.8$, $\hat{\mathbf{C}} = 0.5$, which result in $\hat{\mathbf{x}}_0 = 0.8014$. Substituting these values into the expressions in Eq. (24) shows that the resulting system satisfies the non-resonance condition of Theorem 1, since the eigenvalue of $\mathbf{A}$, $k_1 = 0.99$, and the eigenvalue of $\mathbf{B}$, $\lambda_1 = -0.1196$, satisfy $k_1^d \neq \lambda_1$ for any $d \in \mathbb{N}$. Hence, according to Theorem 1, a $N = 1$-dim. IM $\mathcal{M}$ exists, with the mapping $\tilde{\mathbf{x}} = \tilde{\pi}(\mathbf{y})$, where $\pi: \mathbb{R} \to \mathbb{R}$, satisfying the NFEs in Eq. (5). Here, the IM develops around a stable fixed point, since $|k_1|, |\lambda_1| < 1$. The PSE approximation of the IM was computed up to degree $h = 5$.

# 3.3.2 An example of $N = 1$ and $M = 2$

Next, we consider a case where $N = 1$ but the input dynamics is realized by $M = 2$ independent dynamic variables, that is, $\mathbf{x} \in \mathbb{R}$ and $\mathbf{y} \in \mathbb{R}^2$. We select the parameter values: $$ \hat{\mathbf{A}} = \left[ \begin{array}{cc} 0.99 & 0.8 \\ -0.1 & 0.9 \end{array} \right], \qquad \hat{\mathbf{B}} = 0.2, \qquad \hat{\mathbf{C}} = \left[ \begin{array}{cc} 0.5 & -0.8 \end{array} \right], $$ which result in $\hat{\mathbf{x}}_0 = 0.9808$. For these parameter values, we obtain from Eq. (24) that the matrix $\mathbf{A}$ has a pair of complex eigenvalues, $k_{1,2} = 0.945 \pm 0.2792i$, while the eigenvalue of $\mathbf{B}$ is $\lambda_1 = -0.039$. No resonance relation $k_1^{d_1} k_2^{d_2} = \lambda_1$ holds for any $d_1, d_2 \in \mathbb{N}$ with $d_1 + d_2 > 0$. Since all the assumptions of Theorem 1 are satisfied, a $N = 1$-dim. IM $\mathcal{M}$ exists. In this case, however, the mapping $\tilde{\mathbf{x}} = \tilde{\pi}(\mathbf{y})$ has two independent input variables, since $M = 2$, i.e., $\pi: \mathbb{R}^2 \to \mathbb{R}$. Here, the PSE approximation of the IM was computed up to degree $h = 4$. Another important difference with the IM in Section 3.3.1 is that in this case the IM develops around a stable spiral, since $k_{1,2}$ are complex and $|k_{1,2}|, |\lambda_1| < 1$.

# 3.3.3 An example of $N = 2$ and $M = 3$

Finally, we consider a case of the system in Eq. (22) where $N = 2$ and $M = 3$. In particular, we select the parameter values: $$ \hat{\mathbf{A}} = \left[ \begin{array}{ccc} 0.9 & 0.8 & 0 \\ -0.1 & 0.7 & 0.4 \\ 0 & -0.5 & 0.8 \end{array} \right], \qquad \hat{\mathbf{B}} = \left[ \begin{array}{cc} -0.8 & 0.3 \\ 0.7 & -0.9 \end{array} \right], \qquad \hat{\mathbf{C}} = \left[ \begin{array}{ccc} 0.2 & -0.4 & 0.7 \\ -0.1 & 0.8 & 0.9 \end{array} \right], $$ which result in $\hat{\mathbf{x}}_0 = [0.9072, 0.9715]^{\top}$. Substituting these parameter values into Eq.
(24) implies that the matrix $\mathbf{A}$ has the eigenvalues $k_1 = 0.8727$, $k_{2,3} = 0.7637 \pm 0.5234i$, while the matrix $\mathbf{B}$ has the eigenvalues $\lambda_1 = 0.4323$, $\lambda_2 = 0.1177$. As in Section 3.3.2, these eigenvalues imply that the origin is a stable spiral. Since no resonance condition exists for these eigenvalues (that is, there are no $d_1, d_2, d_3 \in \mathbb{N}$ with $d_1 + d_2 + d_3 > 0$ such that $k_1^{d_1} k_2^{d_2} k_3^{d_3} = \lambda_1$ or $k_1^{d_1} k_2^{d_2} k_3^{d_3} = \lambda_2$), all the assumptions of Theorem 1 are satisfied. Hence, a $N = 2$-dim. IM $\mathcal{M}$ develops around the origin, with the mapping $\tilde{\mathbf{x}} = \tilde{\pi}(\mathbf{y})$, which in this case has $M = 3$ inputs and $N = 2$ outputs, that is, $\pi: \mathbb{R}^3 \to \mathbb{R}^2$. This IM mapping again satisfies the NFEs in Eq. (5), and for its PSE approximation we considered a multi-variate Taylor series expansion of degree $h = 3$.

# 4 Numerical Results

This section evaluates the efficiency of the proposed PI hybrid scheme in learning IM approximations for the three examples described in Section 3. We examine their convergence, numerical approximation accuracy and computational training cost. Additionally, we compare the IM approximations provided by the PI hybrid schemes with those obtained by purely using PI NNs and with the PSE expansions derived by the symbolic approach proposed in [18, 19]. Training and evaluation of the PI schemes followed the procedures outlined in Sections 2.2.1 and 2.2.2. All simulations were carried out on an Intel(R) Core(TM) i7-13700H CPU @ 2.40GHz with 32.0 GB RAM, using MATLAB R2024b.
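The spectra quoted for the $N = 2$, $M = 3$ case of Section 3.3.3 can be reproduced in a few lines, together with a brute-force scan for low-order resonances; a minimal sketch:

```python
import numpy as np
from itertools import product

Ahat = np.array([[0.9, 0.8, 0.0], [-0.1, 0.7, 0.4], [0.0, -0.5, 0.8]])
Bhat = np.array([[-0.8, 0.3], [0.7, -0.9]])

# Equilibrium x0 solving x0 = cos(Bhat x0), i.e. Eq. (23) with y0 = 0.
x0 = np.zeros(2)
for _ in range(300):
    x0 = np.cos(Bhat @ x0)

# Spectra of A = Ahat and B = -diag(1 - x0^2)^(1/2) Bhat, per Eq. (24).
k = np.linalg.eigvals(Ahat)
B = -np.diag(np.sqrt(1.0 - x0**2)) @ Bhat
lam = np.sort(np.linalg.eigvals(B).real)

# Low-order non-resonance check: k1^d1 k2^d2 k3^d3 != lambda_j for small d.
min_gap = min(abs(k[0]**d[0] * k[1]**d[1] * k[2]**d[2] - l)
              for d in product(range(4), repeat=3) if sum(d) > 0
              for l in lam)
```

The scan only covers exponents up to 3 per eigenvalue; it is a sanity check, not a proof of the non-resonance condition for all $d \in \mathbb{N}$.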
# 4.1 Example with analytic invariant manifold

We first consider the discrete dynamical system in Eq. (15), for which the true IM mapping is known in Eq. (16). This allows us to compare the IM approximations with the exact IM. The system in Eq. (15) is defined for $y \geq -1$ (when $\beta < 0$) and exhibits a singularity at $y = -1$. We approximate the IM in the domain $\Omega = [-0.9, 2]$. For the PI hybrid scheme, we considered polynomials of degree $h = 10$ and $h = 20$, and NNs with $L = 10$ neurons in the hidden layer. The resulting number of parameters to be computed for the NN component is 31. Accordingly, we collected $Q = 31 \times 20 = 620$ collocation points to train the PI hybrid and pure NN schemes, uniformly sampled from $\Omega$. Using these collocation points in $\Omega = [-0.9, 2]$, we trained the PI hybrid schemes with power, Legendre and Chebyshev polynomials and $r = 1$ (denoted as HPS, HLS and HCS $r = 1$, respectively) and the PI NN scheme. To explore the effect of the radius of convergence, we also trained the PI hybrid scheme with power series using $r = 0.5$ (denoted as PI HPS $r = 0.5$). To examine the impact of polynomial degree, we considered all hybrid schemes with $h = 10$ (baseline) and $h = 20$. The convergence results of all 9 trained PI approximations are shown in Table 1, as obtained over 100 training realizations with different initial parameters; see Section 2.2.1 for details. Table 1: Convergence results of the proposed PI schemes for the example in Section 3.1. Mean values and 5-95% CIs of the loss function $\mathcal{L}(\cdot)$ and computational training times (in seconds) are reported for 100 randomly initialized realizations. Schemes include PI hybrid schemes (HxS: HPS, HLS, HCS), with $x = P, L, C$ denoting power, Legendre, and Chebyshev polynomials, and the PI NN. Hyperparameters: polynomial degrees $h = 10$ and $h = 20$, radius $r = 1$ and $r = 0.5$ for the HPS schemes, and $L = 10$ neurons for the NN and HxS schemes. The means and 5-95% CIs of the loss functions in Table 1 indicate that the optimization problems for all PI schemes consistently converge to low loss values (less than $2\mathrm{E}{-4}$) over the selected collocation points. As the degree $h$ increases, the convergence improves for all schemes, except for the HPS scheme with $r = 0.5$. Regarding computational training time, the hybrid schemes are more demanding than the NN scheme, since they additionally consider the polynomial coefficients in the optimization. While in the degree $h = 10$ case all hybrid schemes require the same computational time, we observe that in the degree $h = 20$ case the hybrid schemes with the Legendre and Chebyshev polynomials are much faster than those with the power series polynomials. This result indicates that the hybrid scheme with high-degree orthogonal polynomials is less computationally intensive than either low-degree ones or power series polynomials. To evaluate the numerical approximation accuracy of the learned IM approximations, we constructed a testing data set based on the true IM mapping in Eq. (16). Specifically, we collected $S = 3,000$ data points $(x^{(s)}, y^{(s)})$ using the IM mapping $x^{(s)} = \pi(y^{(s)}) = \ln(1 + y^{(s)})$, with $y^{(s)}$ uniformly sampled from $\Omega$. We then computed the errors $x^{(s)} - \tilde{\pi}(y^{(s)})$ using the IM approximations $\tilde{\pi}(y)$ of the trained PI schemes. Table 2 reports the means and 5-95% CIs of the $l_1$, $l_2$ and $l_\infty$ norms of these errors, calculated over the 100 parameter sets obtained during training. For comparison, Table 2 also includes the same error metrics for the PSE, LSE and CSE expansions derived from the true IM in Eqs. (17) to (19).
Table 2: Numerical approximation accuracy of the IM approximations $\tilde{\pi}(y)$ for the example in Section 3.1. Errors ($l_1$, $l_2$ and $l_\infty$ norms) are reported for the expansions (PSE, LSE, CSE) of the true IM in Eq. (16) and the PI schemes (HPS, HLS, HCS, NN); for hyperparameters, see Table 1. Mean values and 5-95% CIs are computed over the 100 parameter sets obtained during training for the PI schemes. The testing data errors are evaluated over $S = 3,000$ points. As shown in Table 2, the purely polynomial-based IM approximations (the PSE, LSE and CSE ones derived by the approach in [18, 19]) provide inaccurate results. This inaccuracy worsens as $h$ increases, and is attributed to testing data in $\Omega = [-0.9, 2]$ lying far from the equilibrium. In this example, the neighborhood of the equilibrium is bounded by the radius of convergence $r = 1$ of the true IM polynomial expansions. In contrast, the PI hybrid schemes and the PI NN deliver accurate IM approximations; in particular, for low-degree polynomials, the most accurate approximations are provided by the PI hybrid scheme with power series and $r = 0.5$ (PI HPS $r = 0.5$) and the PI NN. As $h$ increases, the accuracy of all the PI hybrid schemes improves and outperforms the approximation provided by the PI NN. We highlight here that the error metrics in Table 2 are global over the testing set $\Omega$. To examine local accuracy, we visualize the absolute errors $|x^{(s)} - \tilde{\pi}(y^{(s)})|$ in Figure 2, where panels (a,b) display the approximations in Table 2 for $h = 10$ and $h = 20$, respectively. It is evident that the inaccuracy of all the polynomial series expansions (xSE) arises from high errors far from the equilibrium. Figure 2: Absolute errors of IM approximations $\tilde{\pi}(y)$ compared to the true IM $x = \ln(1 + y)$ for the example in Section 3.1. Panels show the polynomial series expansions xSE (black), the PI hybrid schemes HxS (red, blue) and the PI NN approximations (green). Power, Legendre, and Chebyshev polynomials ($x = P, L, C$) are distinguished by solid, dashed, and dotted curves, respectively. Red and blue backgrounds indicate the radius $r$ of the polydisc $\mathcal{D}$ for the PI hybrid schemes. Panels (a) and (b) correspond to polynomial degrees $h = 10$ and $h = 20$, respectively, and $L = 10$ neurons are used in the NNs. Figure 2 confirms that the errors of the PI hybrid and PI NN schemes are almost homogeneous in $\Omega$. Interestingly, for $h = 10$, the PI hybrid schemes exhibit lower accuracy than the PI NN scheme, as reflected in Table 2. However, their accuracy improves and matches that of the PI NN scheme as $h$ increases. It is also shown that the choice of polynomials ($x = P, L, C$) does not affect the approximation accuracy of the hybrid schemes. More importantly, with decreased $r$, the approximation provided by the PI hybrid schemes is more accurate near the equilibrium than that provided by the PI NN. The next example will demonstrate that the radius $r$ is the most critical hyperparameter for improving the accuracy of the PI hybrid schemes.
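The poor xSE accuracy away from the equilibrium traces back to the radius of convergence; the blow-up of the truncated power series of Eq. (17) outside $|y| < r = 1$, and its worsening with $h$, can be reproduced directly from the coefficients $\alpha^i_{PSE} = (-1)^{i+1}/i$:

```python
import math

def pse(y, h):
    """Truncated power series of ln(1+y) with the Eq. (17) coefficients."""
    return sum((-1) ** (i + 1) / i * y ** i for i in range(1, h + 1))

err_inside = abs(pse(0.5, 20) - math.log1p(0.5))    # |y| < r = 1: accurate
err_out_10 = abs(pse(1.5, 10) - math.log1p(1.5))    # |y| > r = 1: diverges,
err_out_20 = abs(pse(1.5, 20) - math.log1p(1.5))    # and worsens as h grows
```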
The number of parameters matches the previous example (Section 4.1), so we collect $Q = 620$ collocation points, uniformly sampled from $\Omega$, to train the PI hybrid and NN schemes. All the hyperparameters, except for the radius $r$, are set as described in Section 2.2. To explore the role of $r$, we trained the PI hybrid schemes with power series for $r = 0, 0.5, 1, 2, 4$. The setting with $r = 0$ is equivalent to the PI NN scheme, since the polydisc $\mathcal{D}$ is empty and the NN counterpart is active over the whole domain $\Omega$. Conversely, choosing $r = 4$ makes the polydisc $\mathcal{D}$ cover $\Omega$, and thus the PI hybrid scheme is equivalent to considering a polynomial series expansion. We also considered Legendre and Chebyshev polynomials with $r = 1$ for the PI hybrid schemes. Below, we present results for the 7 PI approximations, accounting for these equivalences. The convergence assessment results of the PI hybrid schemes, evaluated over 100 training realizations, are reported in Table 3. The radius $r$ increases from top to bottom, with the PI NN scheme being equivalent to the PI HPS hybrid scheme for $r = 0$. All the PI schemes consistently converge to very low loss function values (mean loss $\leq 4\mathrm{E}{-10}$). As $r$ increases, the convergence of the PI hybrid schemes remains almost the same, with the $r = 4$ case showing lower convergence, as expected, since that scheme purely considers polynomial series without the NN counterpart. For this reason, the training time of this particular approximation is negligible. The rest of the PI hybrid schemes are more demanding than the PI NN scheme, since the latter includes fewer parameters to learn when solving the optimization problem. The Legendre and Chebyshev polynomials show similar convergence to the power series.
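The two limiting cases above can be illustrated with a toy hybrid approximation. This is only a hypothetical hard-switch caricature (the paper's scheme couples the polynomial and NN parts through the PI training of Section 2, not through a hard switch), with `np.log1p` standing in for a trained NN:

```python
import numpy as np

def hybrid(y, poly_coeffs, nn, r):
    """Hypothetical hard-switch hybrid: polynomial series inside |y| <= r,
    NN outside. Illustration of the r = 0 and r = 4 limits only."""
    y = np.asarray(y, dtype=float)
    poly = sum(a * y ** i for i, a in enumerate(poly_coeffs, start=1))
    return np.where(np.abs(y) <= r, poly, nn(y))

coeffs = [(-1) ** (i + 1) / i for i in range(1, 11)]   # ln(1+y) Taylor, h = 10
nn = np.log1p                                          # stand-in "trained" NN
y = np.linspace(0.0, 4.0, 401)                         # Omega = [0, 4]

out_r0 = hybrid(y, coeffs, nn, r=0.0)   # polydisc empty: pure NN
out_r4 = hybrid(y, coeffs, nn, r=4.0)   # polydisc covers Omega: pure series
```

With $r = 0$ the output coincides with the NN everywhere, while with $r = 4$ it reduces to the (globally inaccurate) truncated series, mirroring the equivalences noted above.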
For the evaluation of the numerical approximation accuracy of the learned IM approximations, we constructed the testing data set from system trajectories, removing the transients so that the data lie on the IM; see Section 2.2.2. Specifically, we collected $S = 10,000$ data points $(x^{(s)}, y^{(s)})$ in $\Omega = [0, 4]$ by selecting initial conditions with random $x \in [-1, 1]$ and fixed $y = 4.3$. On the basis of the testing set, we computed the errors $x^{(s)} - \tilde{\pi}(y^{(s)})$ for the IM approximations $\tilde{\pi}(y)$ of the trained PI schemes. The means and 5-95% CIs of the $l_1$, $l_2$ and $l_\infty$ norms of these errors, calculated over the 100 parameter sets obtained during training, are reported in Table 4. For comparison, the same metrics are included for the PSE expression derived in [19]. Table 3: Convergence results of the proposed PI schemes for the enzymatic bioreactor problem in Section 3.2. Mean values and 5-95% CIs of the loss function $\mathcal{L}(\cdot)$ and computational training times (in seconds) are reported for 100 randomly initialized realizations. Schemes include PI hybrid schemes (HxS: HPS, HLS, HCS), with $x = P, L, C$ denoting power, Legendre, and Chebyshev polynomials, and PI NNs. The radius $r$ of the PI HPS varies from 0 to 4, with $r = 0$ being equivalent to the PI NN scheme. Hyperparameters: polynomial degree $h = 10$ and $L = 10$ neurons for the NN and HxS schemes. Table 4: Numerical approximation accuracy of IM approximations $\tilde{\pi}(y)$ for the enzymatic bioreactor problem in Section 3.2. Errors ($l_1$, $l_2$ and $l_\infty$ norms) are reported for the PSE expansion from [19] and the PI schemes (NN, HPS, HLS, HCS); for hyperparameters, see Table 3. Mean values and 5-95% CIs are computed over the 100 parameter sets obtained during training of the PI schemes.
Testing data errors are evaluated over $S = 10,000$ points. Table 4 shows that the PSE approximation, which is based only on power series, provides poor accuracy. On the other hand, the PI hybrid schemes provide higher approximation accuracy, with mean $l_2$ errors of the order of $1\mathrm{E}{-3}$. As $r$ increases, accuracy slightly improves across the testing set. Overall, the PI hybrid schemes achieve slightly higher global accuracy than the PI NN scheme, and all of them outperform the PSE approximation. To evaluate local accuracy, we focus on the absolute errors $|x^{(s)} - \tilde{\pi}(y^{(s)})|$ in the physical space $(S, E)$ in Figure 3, where the PSE approximation derived in [19] and the PI hybrid scheme approximations for different radii $r$ are compared; the PI NN scheme corresponds to $r = 0$ in the hybrid scheme. Figure 3 demonstrates that the PSE provides high accuracy near the equilibrium but exhibits large errors farther away, as reflected in the global accuracy metrics in Table 4. On the other hand, the PI NN scheme generates a homogeneously distributed accuracy profile across the entire domain $\Omega$, of the order of $1\mathrm{E}{-5}$. Figure 3: Absolute errors versus $E$ (log scale) for the PSE and the PI schemes: PI NN ($r = 0$), PI HPS with $r = 0.5, 2, 4$, and PI HxS with $r = 1$. The PI hybrid scheme combines the regions of high accuracy of the above approximations. In particular, as indicated by the highlighted backgrounds in Figure 3, the PI hybrid scheme delivers 1-2 orders of magnitude higher accuracy than the PI NN within $[0, r]$ (where the polynomial series are active) and comparable (or even higher) accuracy for $E = y > r$. Notably, the accuracy of the PI hybrid schemes improves within $[0, r]$ as $r$ increases. In addition, the Legendre and Chebyshev polynomials provide lower accuracy near the equilibrium than the power polynomials. Similar results were obtained for $h = 20$; they are not included for economy of space.
# 4.3 Multi-dimensional example

In this example, we consider the multi-dimensional system presented in Section 3.3 (see Eq. (22) with definitions in Eq. (24)) and evaluate the performance of the proposed PI hybrid scheme across the three cases $N = 1$ and $M = 1$, $N = 1$ and $M = 2$, and $N = 2$ and $M = 3$, described in Sections 3.3.1 to 3.3.3. For each case, we derived three IM approximations using the proposed PI hybrid schemes (power, Legendre and Chebyshev polynomials), one using the PI NN scheme, and the PSE one using the expansion approach proposed in [19]. Unlike the previous examples, which focused on a comprehensive analysis of the role of hyperparameters, here we fix $h$ and $r$ to tuned values for each case considered. This allows us to evaluate the efficiency of the hybrid scheme across varying system dimensions. In the following, we present the results for the three cases together, summarizing the key findings regarding convergence, computational training time, and global accuracy of the IM approximations. Detailed results for each case are provided in the Supplements S1 to S3. Before presenting the results, we provide all relevant details about the training and testing procedures. For the $N = 1$ and $M = 1$ case, we approximate the IM in the domain $\Omega = [-0.6, 0.6]$, tuning the degree of the polynomials to $h = 5$ and the radius to $r = 0.3$ for the PI hybrid scheme. For the $N = 1$ and $M = 2$ case, the domain is $\Omega = [-0.8, 0.8] \times [-0.4, 0.4]$, with $h = 4$ and $\mathbf{r} = [0.2, 0.1]$. For the $N = 2$ and $M = 3$ case, the domain is $\Omega = [-0.7, 0.7] \times [-0.4, 0.4] \times [-0.4, 0.4]$, with $h = 3$ and $\mathbf{r} = [0.2, 0.1, 0.1]$. In all cases, we keep the NN width at $L = 10$ neurons. To obtain the training data, we collect 20 times as many collocation points as there are parameters in the NN counterpart of the hybrid scheme.
For the $M = 1$ case, we uniformly sampled $Q = 620$ points in $\Omega$, while for the other two cases (in which $M > 1$), we generated trajectories randomly initialized for each variable within $[-1, 1]$, as discussed in Section 2.2.1. From the resulting data of $\mathbf{y}$, we randomly sampled $Q = 820$ and $Q = 2040$ points in $\Omega$ to train the hybrid and NN schemes for the $N = 1$ and $M = 2$, and the $N = 2$ and $M = 3$ cases, respectively. To construct the testing data sets, we followed Section 2.2.2 and generated trajectories by randomly initializing each variable within $[-1, 1]$. After removing the transients, we randomly sampled $S = 10{,}000$ pairs of data points $(\mathbf{x}^{(s)}, \mathbf{y}^{(s)})$ in the respective $\Omega$ for each case. We highlight that while the degree decreases from $h = 5$ in the $N = 1$ and $M = 1$ case, to $h = 4$ in the $N = 1$ and $M = 2$ case, and then to $h = 3$ in the $N = 2$ and $M = 3$ case, the number of monomials in the PI hybrid scheme increases from 6, to 15, and then to 40. This is due to the increase in $M$ and $N$. The convergence results of the PI schemes, as evaluated over 100 training realizations with random collocation points and initial parameters, are reported in the Supplement in Tables S1.1, S2.1 and S3.1 for the $N = 1$ and $M = 1$, $N = 1$ and $M = 2$, and $N = 2$ and $M = 3$ cases, respectively. Here, we provide a summary over all cases in Table 5, where the mean loss function values and the mean computational times required for training are reported. As evidenced by the low loss function values, all PI schemes converge in all cases considered. Convergence weakens as the dimensions $N$ and $M$ increase. Additionally, the computational time required for training increases with the dimension of the system.
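The monomial counts quoted above (6, 15 and 40) follow from the standard count of monomials of total degree at most $h$ in $M$ variables, $\binom{h+M}{M}$, multiplied by the $N$ output components. A quick sanity check:

```python
from math import comb

def n_monomials(h, M, N):
    """Monomials of total degree <= h in M variables, for N output components."""
    return comb(h + M, M) * N

# the three cases from the text, as (N, M, h)
cases = [(1, 1, 5), (1, 2, 4), (2, 3, 3)]
counts = [n_monomials(h, M, N) for (N, M, h) in cases]
print(counts)  # [6, 15, 40]
```

This explains why the parameter count of the polynomial part grows even as the degree $h$ is lowered: the combinatorial factor $\binom{h+M}{M}$ grows quickly with $M$.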
As in the previous examples, the training times of the PI hybrid scheme are higher than those required by the PI NN scheme, due to the larger number of parameters in the PI hybrid scheme. Additionally, the Legendre and Chebyshev polynomials show similar convergence properties to the power series polynomials. Table 5: Summary of the convergence results of the proposed PI schemes for the three cases $N = 1$ and $M = 1$, $N = 1$ and $M = 2$, and $N = 2$ and $M = 3$ (denoted as $(1,1)$, $(1,2)$ and $(2,3)$) of the multi-dimensional example in Section 3.3. Mean values of the loss function $\mathcal{L}(\cdot)$ and computational training times (in seconds) are reported for 100 randomly initialized realizations. Schemes include the PI hybrid schemes (HxS: HPS, HLS, HCS), with $x = P, L, C$ denoting power, Legendre, and Chebyshev polynomials, and the PI NNs. Detailed results (including CIs) are provided in Tables S1.1, S2.1 and S3.1. The numerical approximation accuracy of the IM approximations learned by the proposed PI schemes was evaluated over data lying on the IM. In particular, we computed the $l_1$, $l_2$ and $l_\infty$ norms of the errors $\mathbf{x}^{(s)} - \tilde{\pi}(\mathbf{y}^{(s)})$ throughout the testing set. We report in the Supplement the means and 5-95% CIs of the above errors, calculated over the 100 parameter sets obtained during training, for all the cases considered; see Tables S1.2, S2.2 and S3.2. Here, we present a summary in Table 6, where only the mean values of the $l_2$-norm errors are reported for each IM approximation, as derived either with the PSE expansion or with the trained PI schemes. The PSE approximation provides a fair approximation accuracy, which significantly worsens as the dimension $M$ increases.
While this trend persists for the PI hybrid and NN schemes, they show a higher global approximation accuracy, which is more pronounced in the $N = 1$ cases. In the $N = 2$ and $M = 3$ case, the IM approximations $\tilde{\pi}(\mathbf{y}) = [\tilde{\pi}_1(\mathbf{y}), \tilde{\pi}_2(\mathbf{y})]^\top$ of the PI hybrid and NN schemes provide slightly higher approximation accuracy than that of the PSE. However, these accuracy results are global, across the testing data set. Table 6: Summary of the numerical approximation accuracy results for the IM approximations $\tilde{\pi}(\mathbf{y})$ for the three cases $N = 1$ and $M = 1$, $N = 1$ and $M = 2$, and $N = 2$ and $M = 3$ (denoted as $(1,1)$, $(1,2)$ and $(2,3)$) of the multi-dimensional example in Section 3.3. $l_2$-norm errors are computed for each component ($\tilde{\pi}(y)$, $\tilde{\pi}(\mathbf{y})$, and $[\tilde{\pi}_1(\mathbf{y}), \tilde{\pi}_2(\mathbf{y})]^\top$) of the PSE expansion and the PI schemes (HPS, HLS, HCS, NN). The reported mean values are computed over the 100 parameter sets obtained during training of the PI schemes. Testing data errors are evaluated over $S = 10{,}000$ points. Detailed results (including $l_1$, $l_\infty$-norm errors and CIs) are provided in Tables S1.2, S2.2 and S3.2. To appreciate the accuracy improvement provided by the PI hybrid schemes, we focus on the absolute errors $|\mathbf{x}^{(s)} - \tilde{\pi}(\mathbf{y}^{(s)})|$ for the three cases considered in the physical space $(\hat{\mathbf{x}}, \hat{\mathbf{y}})$ in Figures 4 and 5. Figure 4 displays the absolute errors of the IM approximations for the $N = 1$ and $M = 1$ case, including the PSE scheme. The PSE approximation provides very high accuracy near the equilibrium, while deteriorating far from it.
On the other hand, the PI NN approximation provides homogeneously distributed high approximation accuracy across the domain. As expected, the PI hybrid schemes improve the accuracy of the PI NN locally, providing $\sim 2$ orders higher accuracy within the polydisc $\mathcal{D}$ (denoted by the red background), where the polynomial component operates. Additionally, outside $\mathcal{D}$, the PI hybrid schemes provide slightly better accuracy than the PI NN approximation. Figure 4: Absolute errors of IM approximations $\tilde{\pi}(y)$ for the case $N = 1$ and $M = 1$ of the multi-dimensional example in Section 3.3.1, compared to data on the IM. Errors are projected in the original state space $(\hat{x}, \hat{y})$. The IM approximations include the power series expansion PSE (black), the PI hybrid schemes HxS (red) and the PI NN approximations (green). Power, Legendre, and Chebyshev polynomials ($x = P, L, C$) are distinguished by solid, dashed, and dotted curves, respectively. The red background color indicates the range of the polydisc $\mathcal{D}$ for the PI hybrid schemes. The polynomial degree is $h = 5$ for all cases and $L = 10$ neurons are used in the NNs. Similar results are derived for the $N = 1$ and $M = 2$, and the $N = 2$ and $M = 3$ cases. The absolute errors $|\mathbf{x}^{(s)} - \tilde{\pi}(\mathbf{y}^{(s)})|$ for all IM approximations are shown in the Supplement in Figures S2.1 and S3.1, while here we focus on the PSE, the PI hybrid scheme with power series, and the PI NN approximations in Figure 5 (top/bottom row for the $N = 1$ and $M = 2$ / $N = 2$ and $M = 3$ case). Again, the PI hybrid scheme shows very high approximation accuracy within the polydisc $\mathcal{D}$ (denoted by black squares and cubes in Figure 5b,e) where the polynomial counterpart is activated to approximate the IM.
Outside $\mathcal { D }$ , the hybrid scheme achieves similarly high approximation accuracy levels to the PI NN approximation. These results demonstrate that the PI hybrid scheme combines the high approximation accuracy provided locally by the polynomial series, while avoiding poor accuracy far from the equilibrium.
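Functionally, the behavior described above can be sketched as a polynomial part that is active only inside the radius $r$, with a shallow-network part taking over outside. The hard switching rule below is an illustrative assumption for a scalar $y$, not the authors' exact construction, and the "network" is a placeholder callable:

```python
import numpy as np

def hybrid_pi(y, poly_coeffs, nn, r):
    """Sketch of a hybrid IM approximation for scalar y.

    poly_coeffs: ascending coefficients c_k of sum_k c_k y^k (power-series variant);
    nn: any callable y -> x standing in for the shallow-network part;
    r: radius within which the polynomial component is active.
    """
    y = np.asarray(y, dtype=float)
    poly = np.polynomial.polynomial.polyval(y, poly_coeffs)
    inside = np.abs(y) <= r
    # illustrative rule: polynomial inside the disc, NN outside
    return np.where(inside, poly, nn(y))

# toy usage with a stub "network"
out = hybrid_pi([0.1, 2.0], poly_coeffs=[0.0, 1.0], nn=lambda y: 0.5 * y, r=0.5)
```

A smooth blend between the two components near $|y| = r$ would be a natural refinement of this hard switch; the source does not specify which is used.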
We propose a hybrid machine learning scheme to learn -- in physics-informed and numerical analysis-informed fashion -- invariant manifolds (IM) of discrete maps for constructing reduced-order models (ROMs) for dynamical systems. The proposed scheme combines polynomial series with shallow neural networks, exploiting the complementary strengths of both approaches. Polynomials enable an efficient and accurate modeling of ROMs with guaranteed local exponential convergence rate around the fixed point, where, under certain assumptions, the IM is demonstrated to be analytic. Neural networks provide approximations to more complex structures beyond the reach of the polynomials' convergence. We evaluate the efficiency of the proposed scheme using three benchmark examples, examining convergence behavior, numerical approximation accuracy, and computational training cost. Additionally, we compare the IM approximations obtained solely with neural networks and with polynomial expansions. We demonstrate that the proposed hybrid scheme outperforms both pure polynomial approximations (power series, Legendre and Chebyshev polynomials) and standalone shallow neural network approximations in terms of numerical approximation accuracy.
[ "math.NA", "cs.LG", "cs.NA", "math.DS", "37M10, 37M21, 37N30, 37N35, 65P99, 65D15, 68T07", "G.1.0; F.1.1; I.2.6" ]
# 1. Introduction

In recent years, Multi-Agent Reinforcement Learning (MARL) has emerged as a critical area of research within the broader field of machine learning. While traditional reinforcement learning focuses on training a single agent to interact with an environment and maximize cumulative rewards, MARL extends this paradigm to multiple agents that learn and act simultaneously, often in cooperative or competitive settings. This added complexity introduces challenges such as non-stationarity, partial observability, and the need for decentralized coordination. Unlike single-agent methods, MARL requires policies that can handle dynamic interactions among agents, which may lead to emergent behaviors that are difficult to predict or model. As a result, MARL research has increasingly focused on scalable architectures, stability under multi-agent dynamics, and policy generalization. With the advancement of MARL algorithms, a wide variety of benchmark environments have been created to evaluate different aspects of multi-agent learning. In continuous robotic control, platforms such as DexHand (Chen et al., 2022) and MA-MuJoCo (Peng et al., 2021) enable high-fidelity simulation of dexterous manipulation and physics-based locomotion. In the realm of team sports and strategy, Google Research Football provides a rich, physics-driven soccer environment requiring tactical cooperation. Light Aircraft Game (LAG) is a lightweight, scalable, Gym-wrapped aircraft combat environment designed for rapid experimentation in aerial dogfights and team skirmishes. PettingZoo offers a simple, Pythonic interface for representing general MARL problems, complete with both the Agent Environment Cycle (AEC) API for sequential turn-based tasks and the Parallel API for simultaneous action settings, plus a suite of reference environments and utilities for custom development.
Finally, SMAC (Samvelyan et al., 2019) (StarCraft Multi-Agent Challenge) and its successor SMACv2 (Ellis et al., 2023) present agents with partial observations in real-time combat inspired by StarCraft II. Together, these environments span the spectrum from continuous control to discrete strategic games, providing diverse testbeds for MARL research. This report focuses specifically on the Light Aircraft Game (LAG) environment. LAG is a competitive, aircraft-themed Gym environment that emphasizes lightweight deployment, scalability to many concurrent agents, and easy customization of reward structures. Agents pilot simple fighter aircraft in configurable scenarios, ranging from free-for-all skirmishes to coordinated team engagements, and must learn both low-level control (e.g., throttle, pitch, yaw) and high-level tactics (e.g., formation flying, target prioritization). In our study, we select five representative LAG tasks, including 2v2 NoWeapon (unarmed dogfight) and Shoot Missile War (armed engagement), to evaluate how well modern MARL algorithms generalize across different aerial combat scenarios. Our investigation centers on two MARL algorithms: HAPPO (Heterogeneous-Agent Proximal Policy Optimization) (Kuba et al., 2022) and HASAC (Heterogeneous-Agent Soft Actor-Critic) (Liu et al., 2025). HAPPO extends PPO to multi-agent settings by enforcing trust-region updates in a decentralized manner, while HASAC incorporates entropy-regularized objectives to balance exploration and stability. By training both methods across the five LAG tasks, we analyze their learning curves, convergence properties, and emergent behaviors. The remainder of this report details our experimental setup, presents quantitative training results, and offers a comparative analysis of the two algorithms' performance in the Light Aircraft Game.

# 2. Preliminary

# 2.1. Simulation Framework

The LAG environment is constructed around two principal simulation components: the Aircraft Simulator and the Missile Simulator. Aircraft Simulator. This module models the aircraft's flight dynamics. It determines the current position using variables such as delta altitude, altitude, longitude, latitude, and delta heading. It also records internal state parameters including roll, pitch, three-dimensional velocity $(v_x, v_y, v_z)$, and acceleration $(a_x, a_y, a_z)$. These values enable accurate prediction of the aircraft's future trajectory. Missile Simulator. Building on the physical model of the aircraft, this module incorporates additional aspects relevant to missile behavior, such as aerodynamic drag, explosive radius, and missile lifespan. A proportional navigation guidance system is employed for realistic missile trajectory control. Control Interface. The agent interacts with the aircraft through four continuous control inputs: aileron (roll), elevator (pitch), rudder (yaw), and throttle (thrust). To address the complexity of joint flight and combat behavior, we adopt a hierarchical control paradigm. The high-level controller sets targets for direction, altitude, and velocity, while the low-level controller, trained via the SingleControl task, executes fine-grained actuation commands.

# 2.2. Task Suite

The LAG environment encompasses three progressively complex task categories: SingleControl, SingleCombat, and DualCombat. SingleControl. This task is designed to train the low-level controller to stabilize and maneuver the aircraft effectively. It serves as a foundational module for downstream combat tasks. SingleCombat. This category includes 1-vs-1 aerial engagements between two aircraft agents. Two distinct subtasks are provided: • NoWeapon Task.
Inspired by reconnaissance operations, the agent must maintain a positional advantage by maneuvering behind the opponent while preserving a safe and controlled distance. • Missile Task. The agent is required not only to maneuver but also to engage in missile combat. This task is further divided into: – Dodge Missile. Missile launches follow predefined rules. The agent must learn to evade incoming missiles. – Shoot Missile. Missile launching becomes a learning objective. Since training from scratch is challenging due to sparse rewards, we incorporate prior knowledge using the conjugate property of the Beta distribution for binomial processes. This probabilistic prior aids in policy learning for missile firing decisions. Both sub-tasks support self-play and agent-vs-baseline training settings. DualCombat. In this cooperative-competitive setting, each team controls two aircraft. The goal remains consistent with the SingleCombat tasks, but with additional requirements for intra-team coordination. The high-level strategy module plays a crucial role in orchestrating team behavior during engagement.

# 2.3. Reward Design

In our multi-agent dual-aircraft combat experiments, the reward function plays a critical role in shaping agent behavior. The LAG environment employs a composite reward mechanism comprising three categories: AltitudeReward, PostureReward, and EventDrivenReward. Each reward type captures distinct aspects of tactical air combat performance and safety. AltitudeReward. This component penalizes unsafe flight behavior, particularly when the aircraft violates minimum altitude constraints. It is defined as: • Velocity Penalty: A negative reward is assigned when the aircraft's velocity is insufficient while flying below a safe altitude. The typical reward range is $[-1, 0]$. • Altitude Penalty: An additional penalty is applied when the aircraft descends below a danger altitude threshold.
This discourages risky low-altitude flight and enforces adherence to operational constraints. The reward range is likewise $[-1, 0]$. PostureReward. This term encourages advantageous spatial and directional alignment between the agent and its opponent. It is modeled as the product of two factors: • Orientation: A positive reward is given when the agent aligns its heading toward the enemy fighter. Conversely, being targeted by the opponent incurs a penalty. • Range: Agents are rewarded for maintaining proximity to the enemy within an effective engagement zone, while excessive distance results in negative feedback. EventDrivenReward. This sparse, high-magnitude reward is triggered by critical events during the combat engagement: • Shot Down by Missile: A $-200$ reward is assigned to penalize being destroyed by an enemy missile. • Crash: Accidental crashes due to poor control or environmental factors also incur a $-200$ penalty. • Enemy Kill: Successfully shooting down an opponent yields a substantial reward of $+200$. Together, these reward components form a balanced and hierarchical structure that guides learning from low-level flight safety to high-level combat effectiveness. The design enables agents to gradually acquire safe, stable, and strategically advantageous behavior in both cooperative and adversarial scenarios.

# 3. Methodology

# 3.1. Heterogeneous-Agent Proximal Policy Optimization (HAPPO)

In the context of multi-agent reinforcement learning (MARL), achieving stable and monotonic policy improvement presents a major challenge due to the inherently nonstationary and interdependent nature of agent interactions. Even in cooperative settings, agents may induce conflicting policy updates, undermining joint performance.
To address this, the Heterogeneous-Agent Proximal Policy Optimization (HAPPO) algorithm extends the trust region learning framework to MARL, enabling agents to optimize their individual policies while maintaining a principled guarantee of joint policy improvement. HAPPO is founded on two theoretical pillars: the multi-agent advantage decomposition lemma and a sequential policy update scheme. These allow the policy of each agent to be optimized one at a time, while accounting for the potential influence of prior agent updates. Unlike traditional MARL algorithms that assume parameter sharing or require decomposition of the joint value function, HAPPO makes no such restrictive assumptions. It enables decentralized learning by allowing each agent to learn an individual policy, thereby improving scalability and generality across heterogeneous agent settings. The core objective of HAPPO is a clipped surrogate loss function extended to the multi-agent case. The update for agent $i_m$ is computed as:

$$
\mathbb{E}_{s \sim p_{\pi_{\theta_k}},\, a \sim \pi_{\theta_k}}\!\left[\min\!\left(\frac{\pi^{i_m}_{\theta^{i_m}}(a^{i_m} \mid s)}{\pi^{i_m}_{\theta_k^{i_m}}(a^{i_m} \mid s)}\, M^{i_{1:m}}(s,a),\ \mathrm{clip}\!\left(\frac{\pi^{i_m}_{\theta^{i_m}}(a^{i_m} \mid s)}{\pi^{i_m}_{\theta_k^{i_m}}(a^{i_m} \mid s)},\, 1 \pm \epsilon\right) M^{i_{1:m}}(s,a)\right)\right]
$$

where $M^{i_{1:m}}(s,a)$ serves as a multi-agent modification factor for the advantage function, defined by:

$$
M^{i_{1:m}}(s,a) = \frac{\hat{\pi}^{i_{1:m-1}}(a^{i_{1:m-1}} \mid s)}{\pi^{i_{1:m-1}}(a^{i_{1:m-1}} \mid s)}\, A(s,a),
$$

with $\hat{\pi}^{i_{1:m-1}}$ denoting the updated policies of previous agents in the sequence, and $A(s,a)$ representing the advantage of the joint action $a$ in state $s$.

# 3.2. Heterogeneous-Agent Soft Actor-Critic (HASAC)

Heterogeneous-Agent Soft Actor-Critic (HASAC) is a multi-agent reinforcement learning (MARL) algorithm developed to address critical limitations in existing methods, such as poor sample efficiency, unstable training dynamics, and convergence to suboptimal Nash equilibria in cooperative tasks. HASAC is derived by embedding cooperative MARL settings into probabilistic graphical models and adopting the Maximum Entropy (MaxEnt) reinforcement learning framework, which encourages agents to act stochastically and explore effectively. By maximizing both the expected cumulative reward and policy entropy, HASAC fosters more diverse and stable policies, especially valuable in environments requiring sustained exploration and resilience to policy fluctuation. The algorithm extends the Soft Actor-Critic (SAC) approach to multi-agent scenarios with heterogeneous agents, each maintaining its own actor and critic networks. Importantly, HASAC allows agents to learn independently while still optimizing a globally cooperative objective.
From a theoretical standpoint, HASAC enjoys two key guarantees: (1) monotonic improvement in policy updates, and (2) convergence to the quantal response equilibrium (QRE), a relaxed form of Nash equilibrium that better accounts for stochastic decision-making. These properties are established through a unified framework called Maximum Entropy Heterogeneous-Agent Mirror Learning (MEHAML), which generalizes the algorithmic design of HASAC and ensures that any method derived from this template inherits the same theoretical guarantees.

# 4. Experiments

# 5. Analysis

# 5.1. Full Results

# 5.2. Comparative Analysis of HAPPO and HASAC

We evaluate HAPPO and HASAC under two gameplay conditions, No Weapon and ShootMissile, each comprising the HierarchySelfplay, SelfPlay, and vsBaseline evaluation protocols (Table 1).

# 5.2.1. NO WEAPON SETTING

In the No Weapon scenario, HASAC consistently outperforms HAPPO across all protocols and timesteps: • HierarchySelfplay: HAPPO's returns remain negative throughout training (from -54.13 to -66.92), whereas HASAC maintains positive rewards (around 30). • SelfPlay: HAPPO exhibits performance collapse (-13.81 to -66.92), while HASAC progresses from -24.88 to +7.82. • vsBaseline: HAPPO is unable to exceed -95, in contrast to HASAC's stable reward near 30. These results indicate that the on-policy nature of HAPPO struggles to stabilize coordination among multiple agents in purely positional tasks, whereas the off-policy SAC foundation of HASAC delivers greater sample efficiency and robustness under limited action complexity.

# 5.2.2. SHOOTMISSILE SETTING

Under the ShootMissile condition, HAPPO demonstrates a marked advantage: • HierarchySelfplay & vsBaseline: HAPPO's reward surges from 385.27 to over 1090.17, reflecting its capacity to learn expressive, high-variance policies for missile engagement.
• HASAC Performance: Although HASAC improves (-6.79 to 735.59 in HierarchySelfplay; 4.77 to 468.34 in vsBaseline), it remains below HAPPO's peak. This reversal suggests that HAPPO's clipped surrogate objective better supports exploration in high-dimensional action spaces, enabling effective missile-firing strategies, whereas HASAC's entropy-regularized updates provide less aggressive policy refinement in these tasks.

# 5.3. Algorithmic Trends Across Tasks

# 5.3.1. HAPPO: VARIANCE AND TASK DEPENDENCE

HAPPO exhibits: • High variance in reward trajectories, particularly under decentralized training (SelfPlay) in the No Weapon setting. This aligns with known issues of gradient instability in on-policy methods such as PPO when agents are non-stationary and influence each other's learning dynamics (de Witt et al., 2020; Jiang et al., 2017). • Strong expressiveness when flexible, temporally extended behaviors (missile launch and evasion) are required. These patterns imply that HAPPO's on-policy updates are sensitive to both the richness of the action space and the availability of structured (hierarchical) training.

# 5.3.2. HASAC: STABILITY AND CONSISTENCY

HASAC shows: • Stable positive performance in cooperative, low-dimensional tasks (No Weapon), reflecting the benefits of off-policy sample reuse and entropy regularization that promote both sample efficiency and smooth learning (Haarnoja et al., 2019). • Competitive yields in missile tasks, albeit with lower peak rewards than HAPPO. This consistency reflects the benefits of off-policy sample reuse and entropy regularization, which mitigate training instability in simpler coordination tasks.

# 5.4. Visualization Analysis under NoWeapon Setting

In this section, we present visualizations of the training process using the HAPPO algorithm under the NoWeapon Setting 1. The results include ten plots, labeled from Fig. 1 to Fig.
10, which illustrate various indicators collected throughout training: actor and critic metrics, evaluation-phase rewards, gradient magnitudes, and policy entropy. Table 1. Comparison of evaluation rewards across algorithms and scenarios. Monitored Indicators. The following quantities are visualized to assess the agent's learning behavior and training stability: 1. Policy Loss: The clipped PPO surrogate loss that guides policy improvement, reflecting how well the updated policy aligns with the estimated advantage function. 2. Dist. Entropy: The entropy of the action distribution, which promotes exploration and avoids premature convergence to deterministic suboptimal policies. 3. Actor Grad Norm: The $\ell_2$-norm of the actor network's gradients (with optional clipping), serving as a measure of update magnitude and an indicator of training stability. 4. Importance Weights: The policy likelihood ratio $r_t = \exp\!\left(\log \pi_{\theta}(a_t \mid s_t) - \log \pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)\right)$, which quantifies the divergence between current and old policies and modulates the policy update strength. Training Reward Instability. From the reward curves in both training and evaluation (see Fig. 3 and Fig. 4), we observe that the learning dynamics under HAPPO exhibit significant instability. The reward trajectory drops sharply multiple times during mid-training, indicating sensitivity to policy updates or value estimation errors. Such fluctuations suggest poor robustness of the algorithm in this environment, and the reward curve lacks smoothness, oscillating instead of showing stable monotonic improvement. Symmetry Across Agents. Although we display results primarily for agent 0 due to space limitations, similar patterns are observed across all agents.
The environment presents a high degree of symmetry, leading to largely mirrored training behaviors and metric trends. Therefore, visualizing one agent suffices to generalize insights to the full multi-agent system. Critic Network Instability. A striking feature in the visualizations is the extreme ruggedness of the critic's training loss curve (see Fig. 2 and Fig. 3), which contrasts sharply with the comparatively smoother actor-related metrics. Notably, the irregular critic behavior appears to coincide temporally with major drops in the evaluation reward, hinting at a causal relationship. This suggests that instability in value function estimation may propagate to the policy updates, resulting in erratic agent performance. Stabilizing the critic, possibly through better value targets or auxiliary objectives, could mitigate this issue. Figure 1. Importance weights of the agent for the HAPPO algorithm under the NoWeapon, HierarchySelfplay experiment setting. Figure 2. Average step rewards of the critic for the HAPPO algorithm under the NoWeapon, HierarchySelfplay experiment setting.
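The importance weights plotted in Fig. 1 and the clipped surrogate objective monitored during training can be sketched numerically as follows; the log-probabilities and advantage values are placeholders, not data from the experiments:

```python
import numpy as np

def clipped_surrogate(logp_new, logp_old, adv, eps=0.2):
    """PPO-style clipped objective: mean of min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    r = np.exp(logp_new - logp_old)              # importance weights r_t
    unclipped = r * adv
    clipped = np.clip(r, 1.0 - eps, 1.0 + eps) * adv
    return np.mean(np.minimum(unclipped, clipped))

# toy batch: identical old/new policies give r_t = 1, so the objective
# reduces to the mean advantage
adv = np.array([1.0, -0.5, 2.0])
obj = clipped_surrogate(np.zeros(3), np.zeros(3), adv)
```

In HAPPO the advantage is further scaled by the sequential modification factor $M^{i_{1:m}}(s,a)$ from Section 3.1; substituting `adv` with $M^{i_{1:m}}(s,a)$ recovers the per-agent update shown there.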
This paper investigates multi-agent reinforcement learning (MARL) in a partially observable, cooperative-competitive combat environment known as LAG. We describe the environment's setup, including agent actions, hierarchical controls, and reward design across different combat modes such as No Weapon and ShootMissile. Two representative algorithms are evaluated: HAPPO, an on-policy hierarchical variant of PPO, and HASAC, an off-policy method based on soft actor-critic. We analyze their training stability, reward progression, and inter-agent coordination capabilities. Experimental results show that HASAC performs well in simpler coordination tasks without weapons, while HAPPO demonstrates stronger adaptability in more dynamic and expressive scenarios involving missile combat. These findings provide insights into the trade-offs between on-policy and off-policy methods in multi-agent settings.
[ "cs.LG", "cs.MA" ]
# 1 Introduction

3D scene reconstruction remains a longstanding challenge in computer vision and graphics. A significant advancement in this domain is the Neural Radiance Field (NeRF) [36], which effectively represents geometry and view-dependent appearance using multi-layer perceptrons (MLPs), demonstrating substantial gains in 3D reconstruction quality. Recently, 3D Gaussian Splatting (3DGS) [26] has gained considerable attention as a compelling alternative to MLP-based [36] and feature grid-based representations [11, 17, 34, 38]. 3DGS stands out for its impressive results in 3D scene reconstruction and novel view synthesis while achieving real-time rendering at 1K resolutions. This efficiency and effectiveness, combined with the potential integration into the standard GPU rasterization pipeline, marks a significant step toward the practical adoption of 3D reconstruction methods. In particular, 3DGS models complex scenes as a collection of 3D Gaussian distributions, which are projected onto screen space using splatting-based rasterization. The characteristics of each 3D Gaussian, including position, size, orientation, opacity, and color, are optimized using a multi-view photometric loss. Although 3DGS has demonstrated impressive 3D reconstruction results, its application in high-resolution scenarios encounters critical memory scalability limitations. Specifically, when reconstructing outdoor scenes at ultra-high resolutions approaching 5K (e.g., $4978 \times 3300$ pixels) in standardized benchmark datasets like Mip-NeRF 360 [5], conventional 3DGS implementations demand excessive VRAM, exceeding the capacity of mainstream GPUs with limited memory, such as the NVIDIA A5000 (24GB VRAM). This computational bottleneck arises from the increasing resolution: higher resolutions demand more GPU memory, as illustrated in Fig. 1.
Such algorithmic behavior fundamentally conflicts with finite GPU memory resources, resulting in catastrophic memory overflow during optimization. To overcome these memory constraints while preserving reconstruction fidelity for high-resolution scenes, we present Hierarchical Gaussian Splatting (HRGS), a memory-efficient framework with hierarchical, coarse-to-fine block optimization. Specifically, we first obtain a coarse global Gaussian representation using low-resolution images. Subsequently, to minimize memory usage on a single GPU, we partition the scene into spatially adjacent blocks and refine them in parallel. Each block is represented with fewer Gaussians and trained on reduced data, allowing further optimization with high-resolution images. The partitioning strategy operates at two levels: Gaussian primitives and training data. To achieve a more balanced partition of Gaussians and avoid blocks with sparse Gaussians, we begin by contracting unbounded Gaussians. In detail, we define a bounded cubic region and use its boundary to normalize the Gaussian positions. Within this region, Gaussians are contracted via a linear mapping, while those outside undergo nonlinear contraction, yielding a more compact Gaussian representation. We then apply a uniform grid subdivision strategy to this contracted space, ensuring an even distribution of computational tasks. During data partitioning for training, we compute the SSIM loss [54] for each observation by comparing two renderings: one produced with the complete global Gaussian representation, and one produced after removing the Gaussians within the target block. A more pronounced SSIM loss indicates that the observation contributes more substantially to the target block, so we set a threshold on the SSIM loss and retain only observations whose values exceed it.
To mitigate artifacts at the block boundaries, we further include observations that fall within the region of the considered block. Finally, to prevent overfitting, we employ a binary search algorithm during data partitioning to expand each block until the number of Gaussians it contains exceeds a specified threshold. This strategy effectively reduces interference from irrelevant data while improving fidelity with decreased memory usage, as demonstrated in Tab. C. After partitioning the Gaussian primitives and data, we initialize each block in the original, uncontracted space using the coarse global Gaussian representation. To accelerate convergence and reduce computational overhead during block-level refinement with high-resolution data, we introduce an Importance-Driven Gaussian Pruning (IDGP) strategy. Specifically, we evaluate the interaction between each Gaussian and the multi-view training rays within the corresponding block, and discard those with negligible rendering contributions. All blocks are then refined in parallel and subsequently integrated into a unified, high-resolution global Gaussian representation. In addition to novel view synthesis, we evaluate our method on another key subtask of 3D reconstruction: 3D surface reconstruction. To further enhance the quality of the reconstructed surfaces, we incorporate the View-Consistent Depth-Normal Regularizer [13], which is applied both during the initialization of the coarse global Gaussian representation and throughout the subsequent block-level refinement. Finally, our method enables high-quality, high-resolution scene reconstruction even under constrained memory capacities (e.g., an NVIDIA A5000 with 24GB VRAM). We validate our method on two sub-tasks of 3D reconstruction, high-resolution NVS and surface reconstruction, and demonstrate that it delivers superior high-resolution reconstruction performance.
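The binary-search block expansion described above can be sketched as follows. This is a toy, single-block version under stated assumptions: `expand_block`, the cubic-block shape around a `center`, and all thresholds are illustrative, not the authors' implementation.

```python
import numpy as np

def expand_block(points, center, half_min, half_max, min_count, iters=40):
    """Hypothetical sketch: binary-search the half-width of a cubic block
    around `center` until it holds at least `min_count` Gaussians (points)."""
    def count(h):
        # number of points strictly inside the cube of half-width h
        return int(np.sum(np.all(np.abs(points - center) < h, axis=1)))
    if count(half_max) < min_count:   # even the largest block is too sparse
        return half_max
    lo, hi = half_min, half_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if count(mid) >= min_count:
            hi = mid                  # enough points: try shrinking
        else:
            lo = mid                  # too sparse: grow
    return hi

pts = np.array([[0.0, 0, 0], [0.2, 0, 0], [0.4, 0, 0], [0.6, 0, 0]])
h = expand_block(pts, center=np.zeros(3), half_min=0.1, half_max=1.0, min_count=3)
print(round(h, 3))  # 0.4 — the smallest cube capturing three points
```

Binary search is appropriate here because the point count inside the cube is monotone in the half-width, so the minimal sufficient size is found in logarithmically many containment tests.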
In summary, the main contributions of this paper are: • We propose HRGS, a memory-efficient coarse-to-fine framework that leverages low-resolution global Gaussians to guide high-resolution local Gaussian refinement, enabling high-resolution scene reconstruction with limited GPU memory. • A novel partitioning strategy for Gaussian primitives and data is introduced, optimizing memory usage, reducing irrelevant data interference, and enhancing reconstruction fidelity. • We propose a novel dynamic pruning strategy, Importance-Driven Gaussian Pruning (IDGP), which evaluates the contribution of each Gaussian primitive during training and selectively removes those with low impact. This approach significantly improves training efficiency and optimizes memory utilization. • Extensive experiments on three public datasets demonstrate that our approach achieves state-of-the-art performance in high-resolution rendering and surface reconstruction. # 2 Related Work 3D Reconstruction. Recent 3D reconstruction research can be broadly categorized into traditional geometry-based and deep learning methods. The former relies on multi-view stereo (MVS) [1] and structure from motion (SfM) [43] to estimate scene depth and camera poses, producing point clouds and subsequent surface meshes. The latter integrates implicit functions (e.g., SDF, occupancy) [24] with volumetric rendering for high-fidelity reconstruction, as exemplified by Neural Radiance Fields (NeRF) [36]. However, NeRF-based approaches often struggle with real-time performance in large-scale or dynamic scenarios. In contrast, 3D Gaussian Splatting [26] encodes scenes as 3D Gaussians (with position, scale, and color), using differentiable point-based rendering to achieve fast training and inference while balancing accuracy and quality. Balancing high fidelity, scalability, and real-time capability remains a key challenge in 3D reconstruction.
Within the field of 3D reconstruction, there are two main sub-tasks: novel view synthesis (NVS) and surface reconstruction. Novel View Synthesis. Novel view synthesis (NVS) aims to generate a target image from an arbitrary camera pose, given source images and their camera poses [30, 21]. NeRF [36] integrates implicit representations with volume rendering [16, 29], demonstrating impressive results in view synthesis. However, dense point sampling remains a major bottleneck for rendering speed. To address this, various methods accelerate NeRF by replacing the original multi-layer perceptrons (MLPs) [15, 41] with discretized representations, such as voxel grids [51], hash encodings [39], or tensorial radiance fields [11]. Additionally, some approaches [56, 42] distill pretrained NeRFs into sparse representations, enabling real-time rendering. Recent advancements in 3D Gaussian Splatting (3DGS) have significantly improved real-time rendering, demonstrating that continuous representations are not strictly necessary. However, directly optimizing and rendering at high resolutions drastically increases memory overhead, making it challenging to achieve real-time reconstruction of high-quality scenes on mainstream GPUs with limited memory (24GB). Our approach specifically addresses this challenge by reducing the computational cost of high-resolution processing while preserving reconstruction fidelity. Multi-View Surface Reconstruction. In recent years, multi-view reconstruction methods have evolved from traditional geometric approaches to neural implicit representations. Traditional multi-view stereo methods [8, 10, 28, 45, 49, 48] primarily rely on extracting dense depth maps [9, 46] from multiple images, fusing them into point clouds [20, 31], and generating scene models through triangulation or implicit surface fitting [25].
Although these techniques have been widely adopted in both academia and industry, they are often susceptible to artifacts and loss of detail due to matching [4] errors, noise, and local optima during reconstruction. Recent advances in neural implicit representations, such as NeRF [37] and its SDF-based variants [53, 60], have shifted the reconstruction paradigm from explicit depth estimation toward learning continuous volumetric or surface fields directly from images. These approaches inherently model geometry and appearance jointly, offering better robustness to occlusions and textureless regions. While neural implicit methods offer high-quality reconstructions, they remain computationally intensive and face scalability challenges. As an alternative, 3D Gaussian Splatting (3DGS) [26] adopts an explicit representation by projecting anisotropic Gaussians onto image space, enabling efficient and differentiable rasterization [57]. Despite its ability to support real-time rendering with high visual fidelity, 3DGS often suffers from insufficient geometric supervision, particularly in sparse-view or large-scale scenarios [12]. To address this limitation, recent methods such as VCR-GauS [14], VastGaussian [33], and SuGaR [22] introduce view-consistent depth and normal constraints [52, 2, 58], significantly enhancing both reconstruction accuracy and convergence stability. These developments position 3DGS as a promising solution for high-resolution and scalable surface reconstruction. # 3 Method Our proposed HRGS efficiently reconstructs high-resolution scenes. We first review 3DGS in Section 3.1. Next, in Section 3.2, we present the memory-efficient coarse-to-fine framework, detailing the partitioning of Gaussian primitives and data, along with the proposed Importance-Driven Gaussian Pruning (IDGP) strategy. Finally, Section 3.3 describes the loss function employed in our approach.
Figure 2: Illustrative diagram of the hierarchical block optimization framework. We first derive a global coarse Gaussian representation using low-resolution data, which is then contracted into a bounded cubic region. Subsequently, the contracted Gaussian primitives are partitioned into blocks, each paired with corresponding data. Leveraging the global coarse Gaussian as initialization, we refine each block in parallel in the original, uncontracted space using high-resolution data. During this refinement, an Importance-Driven Gaussian Pruning strategy computes the interaction between each Gaussian primitive and the training-view rays, removing low-contribution primitives to accelerate convergence and reduce redundancy. The optimized blocks are then concatenated to form the final global Gaussian representation, which is validated on novel view synthesis (NVS) and surface reconstruction tasks. # 3.1 Preliminary We begin with a brief overview of 3D Gaussian Splatting (3DGS) [26]. In the 3DGS framework, a scene is represented as a set of discrete 3D Gaussian primitives, denoted by $G_K = \{ G_k \mid k = 1, \ldots, K \}$, where $K$ is the total number of Gaussians in the scene. Each Gaussian $G_k$ is defined by a set of learnable parameters, including its 3D position $\mathbf{p}_k \in \mathbb{R}^{3 \times 1}$, opacity $\sigma_k \in [0, 1]$, and geometric properties, which typically consist of scaling and rotation parameters that define the Gaussian covariance matrix $\Sigma_k \in \mathbb{R}^{3 \times 3}$. Furthermore, spherical harmonic (SH) features $f_k \in \mathbb{R}^{3 \times 16}$ are used to encode view-dependent color information $c_k \in \mathbb{R}^{3 \times 1}$, allowing for a realistic depiction of color variations as a function of the viewing angle.
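As a concrete illustration of the per-Gaussian parameterization in Section 3.1, a minimal data structure might look as follows. The class name is hypothetical, and using an explicit 3×3 rotation matrix (rather than the quaternion of the 3DGS reference implementation) is a simplifying assumption.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Gaussian3D:
    position: np.ndarray   # p_k in R^3
    opacity: float         # sigma_k in [0, 1]
    scale: np.ndarray      # per-axis scales s in R^3
    rotation: np.ndarray   # 3x3 rotation matrix R
    sh_features: np.ndarray = field(default_factory=lambda: np.zeros((3, 16)))

    def covariance(self) -> np.ndarray:
        """Sigma = R S S^T R^T with S = diag(scale): the standard 3DGS
        factorization, which keeps Sigma positive semi-definite."""
        S = np.diag(self.scale)
        return self.rotation @ S @ S.T @ self.rotation.T

g = Gaussian3D(position=np.zeros(3), opacity=0.8,
               scale=np.array([1.0, 2.0, 3.0]), rotation=np.eye(3))
print(g.covariance())  # diag(1, 4, 9) for an axis-aligned Gaussian
```

The scale/rotation factorization is why optimizers can update these parameters freely: any choice of scales and rotation yields a valid covariance.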
For rendering purposes, the combined color and opacity contributions from multiple Gaussians at a given pixel are weighted according to their respective opacities. The color blending for overlapping Gaussians is computed as follows: $$ \hat{C} = \sum_{k \in M} c_k \alpha_k \prod_{j=1}^{k-1} \left( 1 - \alpha_j \right), $$ where $c_k$ and $\alpha_k = \sigma_k G_k$ denote the color and density of the $k$-th Gaussian primitive, respectively. # 3.2 Hierarchical Block Optimization Framework Traditional 3D Gaussian methods [26, 13] rely on global iterative optimization for scene reconstruction but struggle with memory inefficiency in high-resolution settings, such as the Mip-NeRF 360 [6] dataset. To address this, we propose a hierarchical optimization framework that balances coarse global representation and fine-grained local refinement, as shown in Fig. 2. We first construct a low-resolution global Gaussian prior, which guides block-wise high-resolution optimization to enhance geometric detail while maintaining memory efficiency. This approach enables precise reconstruction under constrained memory conditions. The following subsections detail the coarse global Gaussian generation, the Gaussian and data partitioning strategies, and the refinement and post-processing procedures. Coarse Global Gaussian Representation. This stage establishes the foundation for subsequent Gaussian and data partitioning. Initially, we optimize Gaussians initialized from COLMAP [47, 44] points using all observations at a low resolution for 30,000 iterations, generating a coarse representation of the global geometric structure. The resulting Gaussian primitives are represented as $G_K = \{ G_k \mid k = 1, \ldots, K \}$, where $K$ denotes the total number of Gaussians.
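The front-to-back blending formula of Section 3.1 can be sketched in a few lines. This toy loop assumes the per-Gaussian $\alpha_k$ values are already computed and sorted front to back; it is not the actual tile-based rasterizer.

```python
import numpy as np

def composite_color(colors, alphas):
    """Front-to-back alpha blending of sorted Gaussian contributions.

    colors: (K, 3) per-Gaussian RGB; alphas: (K,) densities alpha_k,
    both sorted front to back for the pixel. Implements
    C_hat = sum_k c_k * alpha_k * prod_{j<k} (1 - alpha_j).
    """
    transmittance = 1.0
    out = np.zeros(3)
    for c, a in zip(colors, alphas):
        out += c * a * transmittance
        transmittance *= (1.0 - a)  # light remaining after this Gaussian
    return out

# A half-transparent red Gaussian in front of an opaque green one:
# half the light reaches the back primitive.
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
alphas = np.array([0.5, 1.0])
print(composite_color(colors, alphas))  # [0.5 0.5 0. ]
```

Note that the running `transmittance` is exactly the product term of the equation, updated incrementally instead of recomputed per Gaussian.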
In the following block-wise high-resolution refinement, this robust global geometric prior ensures that Gaussians are positioned accurately, thereby preventing drift, eliminating inter-block discontinuities, and minimizing fusion artifacts. Primitives and Data Division. Directly applying uniform grid division in the original 3D space may lead to an uneven Gaussian distribution in local regions (e.g., many nearly empty grid cells alongside overly dense ones). To address this imbalance, we define a bounded cubic region and contract all Gaussians within it. Within this region, the central one-third of the space is designated as the internal region, while the surrounding area is classified as the external region. The internal region is bounded by the minimum and maximum corner positions, $\mathbf{p}_{\mathrm{min}}$ and $\mathbf{p}_{\mathrm{max}}$, which define the limits of the central one-third of the entire region. To standardize the representation of global Gaussians, we introduce a normalization step: $\hat{\mathbf{p}}_k = 2 \left( \mathbf{p}_k - \mathbf{p}_{\mathrm{min}} \right) / \left( \mathbf{p}_{\mathrm{max}} - \mathbf{p}_{\mathrm{min}} \right) - 1$. As a result, the coordinates of Gaussians located in the internal region are constrained within the range $[-1, 1]$. To achieve more effective contraction of the global Gaussians, we apply a linear mapping for the Gaussians in the internal region, while a nonlinear mapping is employed for the external region (as shown in Fig. 2).
The final contraction step is performed using the function described in [55]: $$ \mathrm{contract}(\hat{\mathbf{p}}_k) = \begin{cases} \hat{\mathbf{p}}_k, & \text{if } \|\hat{\mathbf{p}}_k\|_\infty \leq 1, \\ \left( 2 - \frac{1}{\|\hat{\mathbf{p}}_k\|_\infty} \right) \frac{\hat{\mathbf{p}}_k}{\|\hat{\mathbf{p}}_k\|_\infty}, & \text{if } \|\hat{\mathbf{p}}_k\|_\infty > 1. \end{cases} $$ The contracted space is then uniformly partitioned into $n$ blocks (the specific number of blocks is discussed in Sec. 4), resulting in a more balanced Gaussian partitioning. After partitioning the Gaussians, our objective is to ensure that each block is sufficiently trained. In other words, the training data assigned to each block should be highly relevant to the region it represents, focusing on refining the details within the block. To achieve this, we select observations and retain only those that contribute significantly to the visible content of the corresponding block in the rendering results. Since the SSIM loss effectively captures structural differences and is somewhat robust to brightness variations [54], we use it as the foundation for our data partition strategy. Specifically, for the $j$-th block, the global Gaussians contained within it are represented as $G_{K_j} = \{ G_k \mid b_{j,\mathrm{min}} \leq \mathrm{contract}(\hat{\mathbf{p}}_k) < b_{j,\mathrm{max}},\ k = 1, \ldots, K_j \}$, where $b_{j,\mathrm{min}}$ and $b_{j,\mathrm{max}}$ define the spatial bounds of the $j$-th block, and $K_j$ is the number of Gaussians contained within the block.
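The normalization and piecewise contraction steps can be sketched as follows; this is a single-point toy version (the actual pipeline operates on batched Gaussian centers), with the unit cube standing in for the internal region.

```python
import numpy as np

def contract(p, p_min, p_max):
    """Normalize a position so the internal region maps to [-1, 1], then
    apply the contraction of Eq. (2): identity inside the unit cube,
    (2 - 1/||p||_inf) * p/||p||_inf outside, bounding everything in (-2, 2)."""
    p_hat = 2.0 * (p - p_min) / (p_max - p_min) - 1.0
    n = np.max(np.abs(p_hat))  # infinity norm
    if n <= 1.0:
        return p_hat           # linear mapping for the internal region
    return (2.0 - 1.0 / n) * p_hat / n  # nonlinear squashing outside

p_min, p_max = np.zeros(3), np.ones(3)
print(contract(np.array([0.5, 0.5, 0.5]), p_min, p_max))  # [0. 0. 0.]
far = contract(np.array([100.0, 0.5, 0.5]), p_min, p_max)
print(np.max(np.abs(far)))  # ≈ 1.995 — distant points are squashed below 2
```

Because the contracted space is bounded, uniform grid subdivision over it yields blocks of comparable occupancy even for unbounded outdoor scenes.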
The set of observations assigned to the $j$-th block is defined as: $$ \mathbf{P}_j^1 = \mathrm{Mask}\left( \mathcal{L}_{\mathrm{SSIM}}\left( I_{G_K}(\tau), I_{G_K \backslash G_{K_j}}(\tau) \right) > \epsilon \right) \odot \tau, $$ where $\mathrm{Mask}(\cdot)$ generates an element-wise binary mask. Each element of the mask is set to 1 if it satisfies the condition inside the mask (i.e., the SSIM loss exceeds a threshold $\epsilon$), and 0 otherwise. The term $G_K \backslash G_{K_j}$ denotes the portion of the global set $G_K$ excluding the block $G_{K_j}$. $\tau$ is a matrix containing all camera poses, with each column $\tau_i$ representing the $i$-th camera pose, and $\odot$ is the element-wise product. The resulting set $\mathbf{P}_j^1$ represents the camera poses assigned to the $j$-th block. However, this strategy does not account for the projection of the considered block, which may lead to artifacts at its edges. To address this issue, we further include poses that fall within the boundaries of the considered block: $$ \mathbf{P}_j^2 = \mathrm{Mask}\left( b_{j,\mathrm{min}} \leq \mathrm{contract}(\hat{\mathbf{p}}_{\tau_i}) < b_{j,\mathrm{max}} \right) \odot \tau, $$ where $\hat{\mathbf{p}}_{\tau_i}$ is the position of pose $i$ in world coordinates. The final assignment is: $$ \mathbf{P}_j(\tau, G_{K_j}) = \mathrm{Merge}\big( \mathbf{P}_j^1, \mathbf{P}_j^2 \big), $$ where $\mathrm{Merge}$ denotes the concatenation operator that removes duplicate elements, ensuring only one copy of each element is retained.
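The two selection rules and their merge can be sketched as a toy index-based version. The SSIM losses are assumed precomputed per view (rendering is out of scope here), and `assign_observations` and its inputs are illustrative names, not the authors' code.

```python
import numpy as np

def assign_observations(ssim_losses, pose_positions, b_min, b_max, eps=0.1):
    """Toy two-part data assignment for one block.

    ssim_losses[i]: SSIM loss between the full-model rendering of view i
    and the rendering with the block's Gaussians removed (higher means the
    view matters more to this block). pose_positions[i]: contracted camera
    position. Returns indices of views kept (union of both criteria).
    """
    ssim_losses = np.asarray(ssim_losses)
    pose_positions = np.asarray(pose_positions)
    p1 = ssim_losses > eps                               # contribution rule
    p2 = np.all((pose_positions >= b_min) &
                (pose_positions < b_max), axis=1)        # pose-inside-block rule
    return np.flatnonzero(p1 | p2)                       # merge, no duplicates

losses = [0.02, 0.30, 0.05]
poses = [[0.0, 0.0, 0.0], [0.9, 0.9, 0.9], [0.1, 0.1, 0.1]]
print(assign_observations(losses, poses,
                          b_min=np.full(3, -0.5), b_max=np.full(3, 0.5)))
# views 0 and 2 lie inside the block; view 1 passes the SSIM test -> [0 1 2]
```

Working on index sets makes the duplicate-free merge trivial: the logical OR of the two boolean masks is exactly the deduplicated union.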
To prevent overfitting, we employ a binary search method [32] to incrementally expand $b_{j,\mathrm{min}}$ and $b_{j,\mathrm{max}}$ until $K_j$ exceeds a predefined threshold. Notably, this procedure is applied exclusively during the data partitioning phase for each block. Importance-Driven Gaussian Pruning (IDGP). After the Gaussian primitive and data division, we train each block in parallel in the original, uncontracted space. Specifically, we first initialize each block using the coarse global Gaussian prior, and then fine-tune each block using high-resolution data as detailed in Sec. 3.3. During block-level optimization, we further accelerate convergence and reduce redundancy by applying a lightweight importance scoring and pruning strategy. Let $\mathcal{R}_b$ denote the set of all rays cast from the training views assigned to block $b$. For each Gaussian primitive $p_i$ in block $b$, we only consider its interactions with $\mathcal{R}_b$ and define the weighted hit count as $$ H_i = \sum_{r \in \mathcal{R}_b} \mathbf{1}(p_i \cap r)\, T_{i,r}, \quad \text{where } T_{i,r} = \prod_{\substack{p_k \cap r \\ \mathrm{depth}(p_k) < \mathrm{depth}(p_i)}} \left( 1 - \alpha_k \right). $$ Here, $\mathbf{1}(p_i \cap r) = 1$ if and only if ray $r$ intersects $p_i$, and $T_{i,r}$ accumulates the transmittance up to $p_i$ through all closer primitives $p_k$. We then compute the raw volume of $p_i$ as $v_i = \prod_{d=1}^{3} s_{i,d}$, where each $s_{i,d}$ is the scale factor of $p_i$ along the $d$-th spatial axis, and apply logarithmic compression $\widetilde{v}_i = \ln(1 + v_i)$.
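The IDGP scoring and pruning can be sketched as follows. The transmittance-weighted hit counts $H_i$ are assumed to be precomputed by the ray-intersection pass, and all numbers below are toy values, not results from the paper.

```python
import numpy as np

def importance_scores(alphas, scales, hit_weights):
    """S_i = alpha_i * ln(1 + v_i) * H_i, with v_i the product of the
    per-axis scales (raw Gaussian 'volume') and H_i the
    transmittance-weighted ray hit count (precomputed here)."""
    volumes = np.prod(scales, axis=1)          # v_i = prod_d s_{i,d}
    return np.asarray(alphas) * np.log1p(volumes) * np.asarray(hit_weights)

def prune_lowest(scores, frac=0.20):
    """Indices of the primitives kept after dropping the lowest `frac`."""
    order = np.argsort(scores)[::-1]           # descending by score
    keep = len(scores) - int(frac * len(scores))
    return np.sort(order[:keep])

alphas = [0.9, 0.9, 0.1, 0.9, 0.9]
scales = np.ones((5, 3))
hits = [5.0, 4.0, 0.1, 3.0, 2.0]
scores = importance_scores(alphas, scales, hits)
print(prune_lowest(scores))  # [0 1 3 4] — the low-opacity, rarely hit Gaussian is dropped
```

The log compression keeps a few very large Gaussians from dominating the score purely through volume, so opacity and actual ray coverage remain the deciding factors.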
Finally, we assign each primitive an importance score combining its opacity $\alpha_i$: $S_i = \alpha_i\, \widetilde{v}_i\, H_i$. After evaluating $\{S_i\}$ for all primitives in the block, we sort them in descending order and remove the lowest $20\%$. The remaining Gaussians, now both globally informed by the coarse prior and locally pruned of low-impact points, continue through block-level fine-tuning. Afterwards, we select the fine-tuned Gaussians within each block and, guided by the global geometric prior, concatenate the blocks to obtain the fine-tuned global Gaussian representation. Through this process, the previously coarse global Gaussians are significantly enhanced in areas where they lacked detail. Figure 3: Qualitative Comparison on the Mip-NeRF 360 Dataset. Three representative scenes demonstrate that our method more faithfully preserves fine-scale structures and achieves superior visual fidelity compared to 3DGS and Mip-Splatting. # 3.3 Loss Function To optimize both the coarse and refined stages, the loss functions are defined as follows. First, we use the RGB loss $\mathcal{L}_{RGB}$ from 3DGS for the novel view synthesis task. To reconstruct scene surfaces, we enforce normal priors $\mathbf{N}$ predicted by a pretrained monocular deep neural network [3] to supervise the rendered normal map $\hat{\mathbf{N}}$ using L1 and cosine losses: $$ \mathcal{L}_n = \| \hat{\mathbf{N}} - \mathbf{N} \|_1 + ( 1 - \hat{\mathbf{N}} \cdot \mathbf{N} ). $$ Additionally, to effectively update Gaussian positions, we utilize the predicted normal $\mathbf{N}$ from the pretrained model to supervise the D-Normal $\overline{\mathbf{N}}_d$.
The D-Normal is derived from the rendered depth by computing the cross product of horizontal and vertical finite differences of neighboring points: $$ \overline{\mathbf{N}}_d = \frac{ \nabla_v \mathbf{d} \times \nabla_h \mathbf{d} }{ | \nabla_v \mathbf{d} \times \nabla_h \mathbf{d} | }, $$ where $\mathbf{d}$ represents the 3D coordinates of a pixel obtained via back-projection from the depth map. We then apply the D-Normal regularization from [13]: $$ \mathcal{L}_{dn} = w \cdot \left( \| \overline{\mathbf{N}}_d - \mathbf{N} \|_1 + ( 1 - \overline{\mathbf{N}}_d \cdot \mathbf{N} ) \right), $$ where $w$ is a confidence term. The overall loss function integrates these components: $$ \mathcal{L}_{total} = \mathcal{L}_{RGB} + \lambda_1 \mathcal{L}_s + \lambda_2 \mathcal{L}_n + \lambda_3 \mathcal{L}_{dn}, $$ where $\lambda_1, \lambda_2$, and $\lambda_3$ balance the individual terms. The term $\mathcal{L}_s$ is introduced to simplify depth computation, as described in [13]. Figure 4: Qualitative Comparison on the TNT dataset. Reconstructions from left to right (SuGaR, NeuS, 2DGS, and VCR-GauS) demonstrate that our method delivers more complete surface geometry, enhanced smoothness in planar regions, and superior preservation of fine structural details, thereby outperforming existing approaches in geometric fidelity. Table 1: Mip-NeRF 360 Full-Resolution Results. The rendering quality comparison highlights the best and second-best results. # 4 Experiment # 4.1 Experimental setups Dataset and Metrics. To evaluate the effectiveness of our reconstruction method, we conduct experiments on two core tasks: novel view synthesis (NVS) and surface reconstruction, using multiple benchmark datasets.
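A minimal version of the finite-difference D-Normal computation and the L1-plus-cosine normal loss might look like this. Back-projection from the depth map is assumed already done (the toy example constructs the point map directly), and the function names are illustrative.

```python
import numpy as np

def d_normal(points):
    """Unit normals from a back-projected point map `points` of shape
    (H, W, 3): cross product of vertical and horizontal finite
    differences, defined on the interior (H-1) x (W-1) grid."""
    dv = points[1:, :-1] - points[:-1, :-1]   # vertical difference
    dh = points[:-1, 1:] - points[:-1, :-1]   # horizontal difference
    n = np.cross(dv, dh)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def normal_loss(n_hat, n):
    """L1 plus cosine loss between predicted and prior normal maps."""
    return (np.abs(n_hat - n).sum(-1) + (1.0 - (n_hat * n).sum(-1))).mean()

# A fronto-parallel plane at depth z = 1 gives a constant normal.
ys, xs = np.meshgrid(np.arange(3.0), np.arange(3.0), indexing="ij")
pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1)
n = d_normal(pts)
print(n[0, 0])           # [ 0.  0. -1.]
print(normal_loss(n, n)) # 0.0 — identical maps incur zero loss
```

The cosine term penalizes orientation disagreement independently of magnitude, which is why the L1 and cosine parts complement each other in the loss above.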
We first assess high-resolution NVS performance on Mip-NeRF 360 [6], followed by high-fidelity surface reconstruction on the Tanks and Temples (TNT) [27] dataset. Additionally, we perform comparative experiments on the Replica [50] dataset to further validate our method. For a comprehensive evaluation, we employ standard metrics including SSIM, PSNR, LPIPS, and F1-score. Rendering efficiency is also assessed in terms of frames per second (FPS). Implementation Details. We begin by following the 3DGS [26] pipeline, performing 30,000 iterations at a low resolution (0.3K) to obtain a coarse global Gaussian prior. During this stage, we introduce our Importance-Driven Gaussian Pruning (IDGP) strategy, which scores the rendering contribution of each Gaussian primitive and prunes those with the lowest impact. This step prevents irrelevant viewpoints from being assigned to training blocks in subsequent stages, reducing unnecessary computational overhead. The resulting coarse prior serves as initialization for the refinement phase. In the contraction stage, we define the central one-third of the scene as the internal region and the remainder as the external region. The contracted Gaussians are then divided into four spatial sub-blocks. For data assignment, we use an SSIM threshold of $\epsilon = 0.1$. Each sub-block is further trained for 30,000 iterations. Specifically, we apply IDGP at the 10,000th, 15,000th, and 25,000th iterations to prune low-impact Gaussians based on their interaction contributions with training rays. This dynamic pruning accelerates convergence and reduces computational redundancy. To facilitate surface reconstruction, we adopt the depth-normal regularization method described in Sec. 3.3. Specifically, we use the pretrained DSINE [3] model for outdoor scenes and the pretrained GeoWizard [19] model for indoor scenes to predict normal maps.
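For reference, the reported settings can be collected into a single configuration sketch. The dictionary keys and helper name are illustrative, not the authors' actual config schema; the values are those stated in the text.

```python
# Settings gathered from the experimental setup described above.
HRGS_CONFIG = {
    "coarse": {"resolution_k": 0.3, "iterations": 30_000},
    "partition": {"num_blocks": 4, "ssim_threshold": 0.1,
                  "internal_region_frac": 1 / 3},
    "refine": {"iterations": 30_000,
               "idgp_prune_iters": (10_000, 15_000, 25_000),
               "idgp_prune_frac": 0.20},
    "loss": {"lambda_1": 1.0, "lambda_2": 0.01, "lambda_3": 0.015},
}

def should_prune(iteration, cfg=HRGS_CONFIG):
    """Whether IDGP pruning fires at this refinement iteration."""
    return iteration in cfg["refine"]["idgp_prune_iters"]

print(should_prune(15_000))  # True
```

Keeping the prune schedule in the config makes the three fixed pruning points easy to adjust when experimenting with other scenes or budgets.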
The hyperparameters $\lambda_1, \lambda_2$, and $\lambda_3$ are set to 1, 0.01, and 0.015, respectively. After rendering the depth maps, we perform truncated signed distance function (TSDF) fusion and process the results using Open3D [61]. Additional details are provided in the supplementary material. Novel View Synthesis. As shown in Tab. E, we compare our method with several existing approaches, including Mip-NeRF [5], Instant-NGP [40], Zip-NeRF [7], 3DGS [26], 3DGS + EWA [62], and Mip-Splatting [59]. At high resolutions, our method significantly outperforms all state-of-the-art techniques. As shown in Fig. D, our method produces high-fidelity imagery devoid of fine-scale texture distortions. While 3DGS [26] introduces noticeable erosion artifacts due to dilation operations, Mip-Splatting [59] shows improved performance yet still exhibits evident texture distortions. In contrast, our method avoids such issues, producing images that are both aesthetically pleasing and closely aligned with the ground truth, demonstrating the effectiveness of our hierarchical refinement strategy. Table 2: Quantitative Results on the Tanks and Temples Dataset [27]. The best results are highlighted in orange, while the second-best results are marked in blue. Surface Reconstruction. Our method not only provides high-quality novel view synthesis but also enables precise 3D surface reconstruction. As shown in Tab. C, our method outperforms both NeuS-based methods (e.g., NeuS [53], MonoSDF [60], and Geo-NeuS [18]) and Gaussian-based methods (e.g., 3DGS [26], SuGaR [22], 2DGS [23], and VCR-GauS [14]) on the TNT dataset. Compared to NeuS-based methods, our approach demonstrates significantly faster reconstruction speeds. While our method is slightly slower than some concurrent works, it achieves considerably better reconstruction quality, with a noticeable improvement over 2DGS (0.3 vs. 0.45).
Furthermore, our method outperforms the recent state-of-the-art method in surface reconstruction, VCR-GauS, with a higher reconstruction quality (0.45 vs. 0.4). As shown in Fig. B, our method is particularly effective at recovering finer geometric details. Additionally, our method has a substantial advantage in rendering speed, surpassing the rendering speed of 2DGS by more than double. On the Replica dataset, as shown in Tab. D, our method achieves performance comparable to MonoSDF [60] while running significantly faster. Moreover, in comparison to the explicit reconstruction approaches 3DGS [26], SuGaR [22], and 2DGS [23], our method delivers substantially higher F1-scores. Table 3: Comparisons on Replica [50]. # 4.2 Ablation Studies To validate the effectiveness of individual components in our method, we conducted a series of ablation experiments on the “Stump” scene from the Mip-NeRF 360 dataset and the “Ignatius” scene from the TNT dataset. Specifically, we evaluated the impact of the following components: the hierarchical block optimization strategy, Importance-Driven Gaussian Pruning (IDGP), and the data partitioning strategy. Table 4: Ablation on Data Division. “SO Ass.” refers to SSIM-based assignment, while “BO Ass.” denotes boundary-based assignment. Bold indicates the best. Ablation of the Data Division. As shown in Tab. C, we analyzed the impact of the data partitioning strategy, using the original global Gaussian prior as the baseline. The results in the first and last rows of Table 5 demonstrate the effectiveness of our proposed method in improving performance (0.55 vs. 0.64). The second row in Table 5 further indicates that assigning relevant data in the contracted space is essential for enhancing reconstruction quality. The third row in Table 5 highlights the importance of strategy 1 (Eq. 3) in data partitioning, and we also found that strategy 2 (Eq. 4) plays a significant role in preventing artifacts at the edges of blocks. Ablation of the Number of Blocks.
As shown in Tab. 5, we investigate how the number of blocks affects reconstruction performance by splitting the coarse global Gaussian into 2, 4, 8, or 16 blocks. Our results indicate that too few blocks can cause conflicts between local and global optima, resulting in insufficient refinement of fine details, whereas too many Table 5: Ablation on number of blocks. Table 6: Ablation Studies on the “Stump” Scene of the Mip-NeRF 360 Dataset [5]. Ablation of Importance-Driven Gaussian Pruning. As shown in Tab. D, to validate the effectiveness of the proposed Importance-Driven Gaussian Pruning (IDGP) strategy, we conducted an ablation study on the “Stump” scene of the Mip-NeRF 360 dataset. Specifically, we compare the full method with the IDGP mechanism (Baseline) against a control variant in which IDGP is disabled (Baseline w/o IDGP). As shown in Tab. D, IDGP is able to selectively prune redundant Gaussians during training without degrading rendering quality, thereby achieving significant improvements in both model structure and computational efficiency.
3D Gaussian Splatting (3DGS) has made significant strides in real-time 3D scene reconstruction, but faces memory scalability issues in high-resolution scenarios. To address this, we propose Hierarchical Gaussian Splatting (HRGS), a memory-efficient framework with hierarchical block-level optimization. First, we generate a global, coarse Gaussian representation from low-resolution data. Then, we partition the scene into multiple blocks, refining each block with high-resolution data. The partitioning involves two steps: Gaussian partitioning, where irregular scenes are normalized into a bounded cubic space with a uniform grid for task distribution, and training data partitioning, where only relevant observations are retained for each block. By guiding block refinement with the coarse Gaussian prior, we ensure seamless Gaussian fusion across adjacent blocks. To reduce computational demands, we introduce Importance-Driven Gaussian Pruning (IDGP), which computes importance scores for each Gaussian and removes those with minimal contribution, speeding up convergence and reducing memory usage. Additionally, we incorporate normal priors from a pretrained model to enhance surface reconstruction quality. Our method enables high-quality, high-resolution 3D scene reconstruction even under memory constraints. Extensive experiments on three benchmarks show that HRGS achieves state-of-the-art performance in high-resolution novel view synthesis (NVS) and surface reconstruction tasks.
[ "cs.CV", "cs.AI" ]
# 1 Introduction

As LLMs become increasingly integrated into real-world applications, hallucination—where LLMs generate unsupported or misleading information—remains a fundamental limitation. Ensuring that their responses remain factual is a critical challenge. This attention has led to the development of factuality evaluation frameworks for LLMs (Min et al., 2023), as well as approaches for training LLMs to improve factuality (Tian et al., 2023). In these frameworks, fact verifiers are essential components for evaluating the factuality of LLM outputs, particularly by checking whether the generated facts are attributable to a reliable knowledge source (Rashkin et al., 2023). While benchmarks exist to evaluate fact verifiers, a comprehensive study that deeply investigates these models is still lacking—despite their critical role in assessing factuality. To this end, we collect examples from 14 distinct benchmarks and construct a balanced set—encompassing both data sources and label distributions—as a testbed for studying fact verification models. We then evaluate 12 pre-trained LLMs and one specialized fact verifier, including frontier LLMs, open-weight reasoning LLMs, and a small, fine-tuned state-of-the-art fact verifier (MiniCheck 7B; Tang et al. (2024a)).

Figure 1: Overview of the refinement pipeline. Fact verifiers (I) answer instances from 14 benchmarks (e.g., Hover, SciFact, CoverBench); three LLM judges (II) evaluate each rationale for completeness, logical coherence, and faithfulness; human annotators (III) then decide whether flagged instances are mislabeled or ambiguous, producing the refined CLEARFACTS and GRAYFACTS sets.

Based on our studies, we share three findings to support developing better fact verifiers. First, we find that label ambiguity and annotation errors can largely affect model rankings during evaluations. We find that at least $9.1\%$ of the examples in our initial data collection were ambiguous, and $6.6\%$ were mislabeled. To identify these examples, we use a scalable LLM-as-a-judge approach, which substantially reduces the need for extensive human annotation: human annotators inspect less than $20\%$ of the dataset rather than examining all examples. Through this process, we construct a refined benchmark, CLEARFACTS, by correcting mislabeled data and removing ambiguous instances. Comparing model rankings before and after refinement reveals notable shifts—for example, MiniCheck initially ranks above OpenAI’s o1 or the R1-distilled Qwen 32B, but falls to a lower position following refinement. We also categorize the ambiguous examples and construct GRAYFACTS to specifically analyze model behavior on these instances. Evaluation on GRAYFACTS yields unintuitive results, such as frontier LLMs producing very low macro F1 and underperforming smaller LLMs. For example, zero-shot prompted o1 achieves a score of 9.4, whereas Llama 3.1 8B scores 13.5. We hypothesize that this is because different benchmarks have nuanced differences in labeling ambiguous cases. These findings highlight how ambiguous examples can distort model evaluations, suggesting that practitioners (e.g., model developers and benchmark designers) should carefully identify and handle such instances in future work. Second, we find that frontier LLMs using few-shot in-context examples rank as the top-performing models. Among the 12 pre-trained LLMs evaluated, providing few-shot examples improved performance in all but one case, with the few-shot o1 model achieving the highest overall performance. We suspect that the advantage of few-shot prompting lies in the nuanced nature of fact verification tasks, which makes it challenging to design a zero-shot instruction that adequately captures diverse edge cases. Despite their effectiveness and simplicity, few-shot baselines have often been overlooked in recent studies (Lei et al., 2025; Tang et al., 2024a; Jacovi et al., 2024a).
Including these strong baselines will guide future research toward developing improved fact verification models. Finally, our evaluation on CLEARFACTS reveals that small fine-tuned models substantially lag behind larger models on examples that involve complex, multi-hop reasoning. Developing small yet high-performing models remains a critical task, as fact verifiers are not only widely adopted as evaluators in factuality benchmarks, but also serve as reward models for improving the factuality of LLMs (Tian et al., 2023; Lin et al., 2024; Xie et al., 2024). Employing large models to compute rewards across numerous instances is impractical, underscoring the necessity for smaller, efficient fact verifiers. Specifically, when comparing MiniCheck 7B with the top-performing model (o1 with few-shot prompts), MiniCheck trails notably on examples from CoverBench (Jacovi et al., 2024a) and Hover (Jiang et al., 2020), which require complex reasoning for fact verification. To address this, we introduce a simple algorithm for building synthetic multi-hop reasoning data for fact verification tasks. Experiments demonstrate that training on this synthetic data significantly improves model performance on these challenging benchmarks without compromising results on other datasets, highlighting the potential of building high-quality data for fine-tuning fact verifier models.

# 2 Task Setup of Fact Verification

Our fact verification task is a binary classification task: given a document and a statement, the fact verifier model should determine whether the statement is (1) Attributable or (2) Not Attributable to the document, following existing works (Rashkin et al., 2023; Jacovi et al., 2025; Tang et al., 2024a). Datasets We collect 14 publicly available datasets designed to evaluate fact verification tasks across diverse domains of statements and documents, including expert domains (e.g., medicine, law, biology) and general domains (e.g., news, conversation, general documents).
Specifically, we adopt 11 datasets from LLM-AggreFact (Tang et al., 2024a), supplemented by three additional datasets to enrich task complexity and domain diversity: SciFact (scientific claim verification; Wadden et al. (2020)), Hover (multi-hop reasoning; Jiang et al. (2020)), and CoverBench (complex-format verification tasks; Jacovi et al. (2024a)). Instances are sampled from each data source while maintaining a balanced distribution across both data sources and labels. See Table 3 and §6 for more details about the benchmarks. Filtering unverifiable statements and verbatim matching instances To retain higher-quality examples from the dataset, we run a two-stage filtering process upon collection. We first identify statements that are inherently unverifiable (e.g., “This is not considered overpopulation.”, where the referent of This cannot be determined; see Table 13 for more examples), which hinder accurate performance assessment due to the absence of a definitive ground truth. The issue of unverifiable statements in fact verification is also discussed in Song et al. (2024). To mitigate this issue, given only the statement, we prompt an LLM to classify each statement as verifiable, ambiguous, or unverifiable, discarding any labeled as ambiguous or unverifiable. Additionally, some statements directly replicate document content, making verification too trivial. Thus, we utilize n-gram overlap (Brown et al., 1992) to remove trivial verification instances where the statement closely matches segments of the source documents. The first stage filters approximately $42\%$ of the examples, and the second stage removes an additional $3\%$. We provide additional details in Appendix D.1. After filtering, we balance the resulting dataset to a 1:1 label distribution, following recommendations from Godbole & Jia (2025). However, during the final refining process, we find duplicates in the original CoverBench data and remove them, resulting in a total of 1,749 examples.
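The exact n-gram order and overlap threshold of the second-stage filter are not stated in the text, so both are placeholder values in the sketch below, which flags statements whose word n-grams largely appear verbatim in the document:

```python
def ngram_overlap(statement, document, n=4):
    """Fraction of the statement's word n-grams found verbatim in the document."""
    s, d = statement.lower().split(), document.lower().split()
    s_grams = {tuple(s[i:i + n]) for i in range(len(s) - n + 1)}
    d_grams = {tuple(d[i:i + n]) for i in range(len(d) - n + 1)}
    if not s_grams:
        return 0.0
    return len(s_grams & d_grams) / len(s_grams)

def is_trivial(statement, document, threshold=0.8, n=4):
    """Flag near-verbatim statements; `threshold` and `n` are illustrative."""
    return ngram_overlap(statement, document, n) >= threshold
```

A statement copied word-for-word from the document scores an overlap of 1.0 and is discarded, while a paraphrased or independently written statement scores near 0 and is kept.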
Additional details on the prompt template and comprehensive dataset statistics are provided in Appendix E and Table 3. Evaluation metric We use macro F1 to evaluate the fact verifiers, as it addresses label imbalance in the benchmarks: F1 is first computed for each label, and the scores are then averaged. For datasets in our eval suite that use a three-way classification scheme—attributable, not attributable, and contradictory—we map the latter two classes to not attributable, following Tang et al. (2024a). Similarly, when computing macro F1, we map the outputs of three-way classification models to this two-label space.

# 3 Ambiguity and Annotation Errors in Fact Verification Benchmarks

After filtering unverifiable statements, we still observe some data instances with ambiguity and annotation errors.

Figure 2: An example of knowledge (+contextual) ambiguity. Verification may require external knowledge (e.g., that the Red Devils are Manchester United), handle phrases with multiple context-dependent meanings, cope with incomplete or indirect document information, or be sensitive to numerical precision. In the shown instance, the model's final answer ([Attributable]) does not align with the human label ([Not Attributable]).

Given the impracticality of manually inspecting every benchmark example, we introduce an efficient pipeline for identifying ambiguity and label errors in fact verification datasets. Following the identification of these issues, we construct two datasets: CLEARFACTS, derived from the initial collection by correcting the label errors and removing ambiguous examples, and GRAYFACTS, which is a collection of the ambiguous samples only. See Figure 1 for the overview of the procedure.

# 3.1 Systematically Identifying Label Ambiguity and Annotation Errors

Automatically detecting potential cases We designed an efficient pipeline for detecting potentially ambiguous or erroneous examples leveraging LLM-as-a-judge. For better coverage and robust detection, we ask four distinct frontier LLMs (o3-mini, GPT-4o, Gemini 2.0-Flash, Llama3.1 405B FP8) with a zero-shot prompt (we use the prompt from Wei et al. (2024); see Table 7 for the actual prompt) and aggregate verdicts and rationales. We retain the $40\%$ of the examples in which at least one model’s verdict differs from the original human label. Finally, we employ LLM-as-a-judge — each specialized in evaluating Completeness, Logical Coherence, and Faithfulness of fact verifier outputs — to evaluate the rationales and keep only the examples that receive unanimous positive evaluations from all three judges.
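The first stage of this pipeline, retaining examples on which at least one model disagrees with the human label, can be sketched as follows. The `verdict_fns` are stand-ins for the four LLM calls, which are not reproduced here:

```python
def flag_candidates(examples, verdict_fns):
    """Keep examples where at least one model's verdict differs from the human label.

    Each entry of `verdict_fns` maps (document, statement) to a label string;
    in the paper these are calls to four frontier LLMs.
    """
    flagged = []
    for ex in examples:
        verdicts = [fn(ex["document"], ex["statement"]) for fn in verdict_fns]
        if any(v != ex["human_label"] for v in verdicts):
            flagged.append(ex)
    return flagged
```

Only the flagged subset then proceeds to the three LLM judges and, ultimately, to human annotation, which is what keeps the manual workload small.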
The judgement criteria used to evaluate the fact verifiers’ output are:

• Completeness: Whether the fact verifier explicitly verifies all critical components of the given statement, regardless of verification accuracy.

• Logical Coherency: Whether the fact verifier’s reasoning logically aligns with its final verdict, regardless of the correctness of individual inference steps.

• Faithfulness: Whether each inference step in the fact verifier’s reasoning is logically sound and justified. Specifically, we evaluate the internal consistency and validity of rationales.

This approach significantly reduced human annotation, yielding 344 candidates from the original 1,749 instances ($19.7\%$). Moreover, this method offers broader applicability for efficiently detecting problematic data points, especially in contexts where human annotation is costly. See Appendix E for details about prompts for judges and fact verifiers. Human annotations After automatically detecting potentially erroneous labels and ambiguous examples, five authors of this paper manually confirmed whether the fact verifier’s reasoning was correct. Two annotators each answered two questions about each reasoning trace: the first asked whether the reasoning trace was correct, and the second asked them to identify debatable points in the data. There are three potential outcomes from the annotation process: (1) if both annotators agree that the reasoning is correct and there is no debatable point, we mark the instance as Mislabeled in the original dataset. (2) If both annotators agree that the reasoning is incorrect and there are no debatable points, we consider the instance misclassified by the fact verification model (Model Errors). (3) Finally, the remaining instances are considered Ambiguous.
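The three-way outcome of the annotation step can be expressed as a small decision rule; the boolean inputs below correspond to the two annotators' answers to the two questions:

```python
def categorize(a1_correct, a2_correct, a1_debatable, a2_debatable):
    """Map two annotators' answers to Mislabeled / Model Error / Ambiguous.

    `a*_correct`: did annotator * judge the verifier's reasoning correct?
    `a*_debatable`: did annotator * find a debatable point?
    """
    no_debate = not (a1_debatable or a2_debatable)
    if a1_correct and a2_correct and no_debate:
        return "Mislabeled"   # reasoning is right, so the original label is wrong
    if not a1_correct and not a2_correct and no_debate:
        return "Model Error"  # the verifier itself misclassified the instance
    return "Ambiguous"
```

Note the asymmetry: any disagreement between annotators, or any flagged debatable point, pushes an instance into the Ambiguous bucket rather than either clean category.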
On average, each example took approximately four minutes to annotate due to the complexity of the task (e.g., long documents, multi-hop reasoning, expert-level documents).

Table 1: We manually inspect 344 of the 1,749 examples before refinement, and categorize them into three sets.

Table 2: Distribution of ambiguity categories in the GRAYFACTS dataset.

Table 3: Distribution of the CLEARFACTS and GRAYFACTS datasets. $A \to B$ indicates that the dataset originally contained $A$ instances before annotation, but after refinement, $B$ instances were retained in CLEARFACTS.

The inter-annotator agreement at this stage was $52.4\%$, which means the other $47.6\%$ of examples are potentially ambiguous cases. For the next stage, we conducted an additional round of annotation to ensure that any misalignment was not due to annotation errors. We reviewed 106 cases where the two annotators disagreed on the first question but both answered that there was no ambiguity, and 65 cases where they agreed on the first question but disagreed on the second, and revised the annotations. Further details about the annotators and the human annotation interface are provided in Appendix C.3.

# 3.2 Results

As shown in Table 1, from the 1,749 instances in the unrefined set, we found 117 instances ($6.7\%$) to be mislabeled and 159 instances ($9.1\%$) to be ambiguous. Table 3 presents the dataset distribution after the process. Using the results, we constructed two sets: CLEARFACTS and GRAYFACTS. CLEARFACTS is composed of instances that are not ambiguous and label-corrected instances where both human annotators agree the instance was mislabeled. GRAYFACTS is composed of the label-ambiguous instances identified by the human annotators. With additional annotations, we further categorized the ambiguous instances in the GRAYFACTS set.
We defined four categories:

• Knowledge-level Ambiguity: Verification requires making inferences using knowledge that a model or human annotator might not know and that does not appear in the provided document (e.g., $H_2O$ stands for water, $g$ in physics equals $9.8~\mathrm{m/s^2}$, etc.).

Table 4: Finding 1: Label ambiguity and annotation errors can significantly affect the model rankings during evaluations. Unrefined refers to the state before correcting annotation errors and removing ambiguous examples from CLEARFACTS. CLEARFACTS and GRAYFACTS do not have any overlaps. When evaluating on GRAYFACTS, we used the labels provided in the original data sources. Four models (o3-mini, GPT-4o, Gemini 2.0-Flash, and Llama3.1 405B FP8) that are used to identify label ambiguity and annotation errors were excluded from the comparison to avoid potential bias.

• Linguistic Ambiguity: (1) A key term or phrase can have multiple meanings depending on context, or the sentence structure allows multiple valid interpretations. (2) The meaning of the claim or text is inherently vague or open-ended, leading to multiple valid interpretations.

• Contextual Ambiguity: (1) The document provides incomplete information, making verification uncertain. For example, the document does not give the full name of a person, but the statement refers to the full name. (2) The document contains indirect or subtle references, making attribution nontrivial.

• Numerical Ambiguity: Verification correctness is sensitive to numerical precision or rounding errors. For example, the document says “1000.3” but the statement says “1000”, and the context suggests the number does not have to be exact.

We put instances that do not fall into these categories in Others, and Table 2 shows the percentage of each category. As shown in Figure 2, multiple ambiguities may coexist due to the inherently multifaceted nature of ambiguity itself.
In such cases, we assign each example to the primary cause of ambiguity. See Appendix D.2 for more examples.

# 4 Evaluating Fact Verifiers with CLEARFACTS

Here, we share our findings from testing a total of 13 fact verifiers on our collection of examples sourced from 14 diverse fact verification benchmarks.

# 4.1 Considered Fact Verifiers

Fact Verification with LLMs We consider 12 LLMs for fact verification. For open-weight models, we test Llama3.1 8B Instruct, Llama3.3 70B Instruct, Llama3.1 405B Instruct FP8, and Qwen2.5 32B Instruct. We additionally test two open reasoning models: R1-distilled-Llama3.3 70B Instruct and R1-distilled-Qwen2.5 32B Instruct. For closed frontier LLMs, we test o1, o3-mini, GPT-4o, Gemini 2.0-Flash, Claude 3.5-Haiku, and Claude 3.7-Sonnet. For zero-shot, we use the instruction from the SAFE framework (Wei et al., 2024) (Figure 7), and for few-shot, we manually construct nine examples for the prompt (Figure 6). MiniCheck We evaluate MiniCheck 7B, a state-of-the-art fact verifier fine-tuned for the task, introduced by Tang et al. (2024a). MiniCheck 7B is a fine-tuned version of the InternLM 2.5 7B model (Cai et al., 2024), trained on a combination of 14K instances from the ANLI dataset (Nie et al., 2019) and 21K synthetic instances generated by Llama 3.1 405B Instruct.

Figure 3: Finding 2: Few-shot prompting significantly improves the performance of LLMs as fact verifiers. We report macro F1 scores on CLEARFACTS using MiniCheck and 12 LLMs under both zero-shot and few-shot settings. For each setup, the same prompt was used consistently across all models.

# 4.2 Findings

Label ambiguity and annotation errors can significantly affect the model rankings during evaluations. First, we evaluate zero-shot LLMs as fact verifiers, along with the fine-tuned model, MiniCheck, on CLEARFACTS. Model performance was measured using macro F1, and rankings were computed accordingly.
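The macro F1 computation used throughout, including the mapping of the three-way scheme onto the two-label space described in §2, can be sketched as:

```python
def macro_f1(gold, pred, labels=("attributable", "not attributable")):
    """Macro F1: per-label F1 averaged over labels, after mapping the
    three-way scheme (contradictory -> not attributable) to two labels."""
    mapping = {"contradictory": "not attributable"}
    gold = [mapping.get(g, g) for g in gold]
    pred = [mapping.get(p, p) for p in pred]
    f1s = []
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Because each label's F1 counts equally regardless of how many instances carry it, a model that collapses onto the majority label is penalized, which is why macro F1 is preferred over accuracy here.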
We compare rankings of ten fact verifiers between the unrefined version of CLEARFACTS — i.e., the dataset prior to our label corrections and removal of ambiguous examples — and CLEARFACTS. Table 4 presents the results. We found that four fact verifiers with similar macro F1 scores on the unrefined dataset, namely o1, R1-Llama3.3, R1-Qwen2.5, and MiniCheck, exhibited changes in model rankings after refinement. While MiniCheck initially appeared to outperform the other three, which are larger and more capable models, the rankings on the refined CLEARFACTS show a reversal of that trend. To better understand this result, we further measured macro F1 scores on GRAYFACTS to investigate the cause of the ranking changes, and found three pieces of evidence. First, F1 scores on GRAYFACTS were substantially lower than those on CLEARFACTS, which helps explain why overall scores improved after removing ambiguous examples. We hypothesize that the low F1 scores on GRAYFACTS arise because different benchmarks have nuanced differences in labeling ambiguous cases. Second, we observed an unintuitive ordering of model rankings on GRAYFACTS — for example, Llama3.1 8B outperformed o1, despite being a smaller and generally less capable model. Finally, inspired by Godbole & Jia (2025), we measure the inter-agreement between the two top-performing models, o1 and Claude 3.7-Sonnet. While we expected these models to exhibit high agreement on benchmark data, the results show that their inter-agreement is $85.3\%$ on CLEARFACTS, but drops significantly to $69.2\%$ on the GRAYFACTS set. This highlights increased uncertainty and variability in judgments on ambiguous data. Few-shot prompted frontier LLMs are strong yet overlooked baselines While prior works (Tang et al., 2024a; Jacovi et al., 2024a; Glockner et al., 2024) employ only zero-shot LLMs as fact verifiers, we pose a natural question: how would these models perform under few-shot prompting?
Few-shot prompting has proven to be a simple yet effective technique across many NLP tasks. To explore this, we craft nine in-context examples and use the exact same set across all LLMs evaluated. Specifically, to craft the few-shot examples, we randomly select examples from the ANLI (stage 3) dataset and our synthetic multi-hop dataset (further introduced in Section 5; note that both datasets are completely decontaminated from the test set). Examples are sampled with an even distribution of three examples per label (“Attributable”, “Not Attributable”, “Contradictory”). Next, we use zero-shot reasoning outputs from models such as Llama3.1 405B Instruct FP8 and GPT-4o as seeds, which we further verify and refine for actual usage. To provide guidance to future practitioners, we have included the examples inside the code release.

Table 5: Finding 3: A small fine-tuned fact verifier shows limited capabilities on examples requiring complex reasoning. We grouped the examples in CLEARFACTS into four subsets and reported macro F1 scores for each. While MiniCheck performs strongly on AggreFact and SciFact examples—competing with or even outperforming larger models—it shows substantial performance gaps on CoverBench and Hover, which require more complex reasoning. Motivated by this, we demonstrate that incorporating synthetic multi-hop reasoning data during training significantly boosts performance on these two benchmarks, while also yielding improvements on the others.

Figure 3 presents the macro F1 results on CLEARFACTS. Notably, it reveals that few-shot prompting consistently boosts performance across LLMs (12 out of 13 models). The few-shot o1 model achieved the best performance, a macro F1 of 88.7. Based on this observation, we recommend including few-shot LLM baselines in future comparative studies of fact verifiers, as these strong baselines can better inform the development of more effective fact verification models.
To further study the sensitivity of models to the few-shot examples, we conduct an additional ablation study in Appendix D.7 and show that our few-shot crafting method is generalizable, with little variance in performance across different few-shot examples. Small fine-tuned fact verifiers substantially underperform larger models on instances requiring complex reasoning Developing small but robust fact verifiers offers many benefits. While Figure 3 shows that the small fine-tuned fact verifier, MiniCheck 7B, outperforms a similarly sized model, Llama3.1 8B, a notable performance gap remains between MiniCheck and the top-performing model, o1 with few-shot prompting. Upon closer inspection, we find that this gap is largely driven by examples from Hover and CoverBench — benchmarks that require complex reasoning. To better understand this, Table 5 categorizes the datasets in CLEARFACTS into four groups based on their original sources, reporting macro F1 scores for each. The results indicate that while MiniCheck performs reasonably well on instances from AggreFact and SciFact—occasionally outperforming some larger models—it struggles with examples from CoverBench and Hover.

# 5 CLEARCHECK: Fine-tuning Models for Complex Fact-verification Tasks Requiring Reasoning

Experimental results indicate that a small fine-tuned model underperforms larger models by a large margin, particularly on instances requiring complex reasoning. Building a small yet powerful model has significant implications for the applicability of fact verifiers. Motivated by this, we introduce a simple method to build synthetic multi-hop fact verification data, and our experiments show that fine-tuning a model on this data largely improves its performance on examples from Hover and CoverBench. Synthetic multi-hop fact verification data Specifically, to generate statements that require multi-hop reasoning to verify, we first crawled diverse Wikipedia documents to create a knowledge pool.
For each document in the pool, we apply an extract-ask-answer procedure. Using LLMs, we first extract a fact from the document, generate a question related to the extracted fact, and answer the question using retrieval-augmented generation. For example, starting from the document about Computer, the LLM extracts the fact: “Computers can execute programs.” It then generates the question: “What is a computer program?” By retrieving the document about Computer Program, the model answers: “A computer program is a set of instructions in a programming language for a computer to execute.” By iteratively applying this process, we obtain a list of facts, each paired with a list of supporting documents for grounding. We then construct statements based on this list of facts, ensuring that they remain attributable to the provided documents. To generate negative statements—i.e., statements that are either non-attributable or contradictory to the documents—we randomly remove a subset of the supporting documents or modify specific details in the statement to introduce contradictions. We use the Llama 3.1 405B FP8 Inst to generate facts and statements. Model training We compare two setups to demonstrate the efficacy of our new dataset. First, we fine-tune a fact verifier using only ANLI data. We use 57K ANLI examples for training. Next, we augment the training set with our 25.2K synthetic multi-hop fact verification data on top of ANLI, which results in CLEARCHECK. Compared to MiniCheck, we train CLEARCHECK with multi-task training, enabling the model to either provide direct answers or engage in CoT reasoning before answering. The model is trained using next-token prediction loss, with the objective of predicting either the final label alone or both the CoT reasoning trace followed by the conclusion. We again use Llama 3.1 405B FP8 to generate direct answers and CoT reasoning traces, then fine-tune the Llama 3.1 8B Inst with the data (i.e., distilling from the teacher). 
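The extract-ask-answer loop and the negative-statement construction described above can be sketched as follows. The callables stand in for the LLM and retrieval calls used in the paper and are assumptions here:

```python
def build_fact_chain(seed_doc, extract_fact, gen_question, retrieve_and_answer, hops=3):
    """Iteratively extract a fact, ask a follow-up question, and answer it
    via retrieval, collecting facts paired with their supporting documents."""
    facts, supports, doc = [], [], seed_doc
    for _ in range(hops):
        fact = extract_fact(doc)              # LLM call in the paper
        facts.append(fact)
        supports.append(doc)
        question = gen_question(fact)         # LLM call in the paper
        doc = retrieve_and_answer(question)   # retrieval-augmented generation
    return facts, supports

def make_negative(statement, supports, drop_idx):
    """Negative example: drop one supporting document so the statement is
    no longer fully attributable to the remaining evidence."""
    return statement, [d for i, d in enumerate(supports) if i != drop_idx]
```

A statement synthesized from the full fact chain is attributable by construction; removing a supporting document (or perturbing a detail in the statement, not shown) yields the non-attributable and contradictory negatives.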
Results Table 5 presents the results. When comparing the model trained only on ANLI with CLEARCHECK, which is trained with additional reasoning data, we find that incorporating synthetic multi-hop data significantly improves overall model performance. In particular, the model shows substantial gains on examples from CoverBench and Hover, demonstrating the effectiveness of multi-hop reasoning data for fact verification model training. This shows the potential of building better data for developing small yet specialized fact verifiers. Note that we found that using CoT or providing direct answers does not change the evaluation results; however, CoT makes the verifier’s output legible to humans, so that possible errors can be detected. See Appendix D.6 for ablation results.

# 6 Related Works

Long-form factuality evaluation of LLMs Fact verifiers are now widely used in long-form factuality evaluation frameworks. These frameworks query LLMs to generate information and then decompose the output into smaller units, such as sentences (Jacovi et al., 2024a) or atomic claims (Min et al., 2023; Wei et al., 2024; Zhao et al., 2024; Song et al., 2024). While the design of these systems varies — e.g., VeriScore (Song et al., 2024) retains only verifiable statements determined by LLMs — they all share a common core: each unit is verified by a fact verifier to determine whether it is attributable to a given or retrieved knowledge source. Fact verification benchmarks A diverse set of fact verification benchmarks has been developed to evaluate fact verifiers. Notably, Tang et al.
(2024a) compiles 11 different benchmarks and constructs LLM-AggreFact, which includes datasets for evaluating summarization models (Tang et al., 2022; 2024b), retrieval-augmented generation models (Jacovi et al., 2024a; Liu et al., 2023b; Chen et al., 2023; Niu et al., 2023), factuality evaluation (Malaviya et al., 2023; Jacovi et al., 2024b), and fact-checking of human-written claims (Kamoi et al., 2023). Wadden et al. (2020) introduced SciFact, specialized for scientific facts using claims in scientific papers and citing abstracts. Jiang et al. (2020) developed Hover, which requires multi-hop reasoning and is based on HotpotQA, a widely-used multi-hop QA benchmark covering Wikipedia documents. Jacovi et al. (2024a) introduced CoverBench, an aggregation of data from nine different benchmarks related to complex fact verification, such as understanding JSON data and financial tables. Fixing benchmark errors and ambiguity The reliability of benchmarks is crucial for model development (Bowman & Dahl, 2021). To improve the reliability of existing benchmarks for LLM evaluations, Platinum-benchmark (Vendrow et al., 2025) and MMLU-Redux (Gema et al., 2024) address annotation errors and ambiguities in benchmarks primarily targeting reasoning and knowledge-based tasks. Other works have focused on ambiguity in NLP tasks such as QA (Min et al., 2020), NLI (Liu et al., 2023a; Pavlick & Kwiatkowski, 2019), and fact verification (Glockner et al., 2024). More recently, Godbole & Jia (2025) raised concerns about the reliability of existing fact verification benchmarks, though without directly addressing ambiguity and annotation errors in the benchmarks.
Fact verification is essential for ensuring the reliability of LLM applications. In this study, we evaluate 12 pre-trained LLMs and one specialized fact-verifier, including frontier LLMs and open-weight reasoning LLMs, using a collection of examples from 14 fact-checking benchmarks. We share three findings intended to guide future development of more robust fact verifiers. First, we highlight the importance of addressing annotation errors and ambiguity in datasets, demonstrating that approximately 16\% of ambiguous or incorrectly labeled data substantially influences model rankings. Neglecting this issue may result in misleading conclusions during comparative evaluations, and we suggest using a systematic pipeline utilizing LLM-as-a-judge to help identify these issues at scale. Second, we discover that frontier LLMs with few-shot in-context examples, often overlooked in previous works, achieve top-tier performance. We therefore recommend future studies include comparisons with these simple yet highly effective baselines. Lastly, despite their effectiveness, frontier LLMs incur substantial costs, motivating the development of small, fine-tuned fact verifiers. We show that these small models still have room for improvement, particularly on instances that require complex reasoning. Encouragingly, we demonstrate that augmenting training with synthetic multi-hop reasoning data significantly enhances their capabilities in such instances. We release our code, model, and dataset at https://github.com/just1nseo/verifying-the-verifiers
# 1 Introduction

Developers spend a significant amount of time reading and comprehending code [20, 47], and identifier names play a central role in this process, accounting for roughly 70% of all code characters [21]. Prior work shows that the quality of identifier names significantly impacts comprehension [10,15,25,33,41,65,68], supports tooling [12, 53], and poses persistent pedagogical challenges [28, 70]. These challenges motivate research into how naming practices encode meaning, and how we might better characterize or improve them. A key obstacle in studying identifier names is measuring the semantics they convey, not just at the level of individual terms, but in the structure and composition of entire names. Some approaches cluster identifiers by terms or embeddings [4, 45], while others analyze them using syntactic or static roles [6, 23, 51]. In this work, we focus instead on grammar patterns [52]: sequences of part-of-speech (PoS) tags that abstract the phrasal structure of identifiers. Grammar patterns provide a syntactic lens through which naming semantics can be studied at scale, offering insight into how term combinations convey behavioral meaning. At a high level, PoS tags can be split into two syntactic categories: open and closed. Most identifier naming research has focused on the open category, which includes nouns and verbs; the set of open-category terms grows over time as new domains emerge and change. In contrast, closed-category terms (e.g., prepositions, conjunctions, determiners) are drawn from a fixed set and serve functional roles in language; this set rarely gains new words over time. These terms have received little attention in the software literature, despite their importance in human languages.
Identifying closed-category terms in code is also non-trivial: for example, the word and may represent a conjunction or a logical operator, depending on context—making PoS tagging a prerequisite for meaningful analysis. The goal of this paper is to investigate how closed-category terms are used in identifier names to express program behavior, using the grammar patterns (defined in Section 3) in which these terms appear to gain insight into how they interact with the terms around them. We extend prior research on general grammar patterns [50, 52] by introducing and analyzing the Closed Category Identifier Dataset (CCID), a manually annotated corpus of 1,275 identifiers from 30 open-source systems. Unlike raw term-based approaches, grammar patterns abstract away surface vocabulary, allowing us to characterize naming conventions by their syntactic structure. By examining both the patterns and the concrete terms that instantiate them, we explore how developers use compact linguistic forms to encode behavioral semantics in code. Specifically, we contribute:

– A new dataset (CCID) of identifiers containing closed-category terms, annotated with PoS tags, grammar patterns, and contextual metadata.
– A mixed-methods analysis combining grounded theory coding with statistical evaluation to characterize the semantics of closed-category grammar patterns and their constituent terms.
– An evaluation of how these patterns correlate with programming context, language, and domain.

Our findings have implications for both human and automated naming support. Grammar patterns offer a structured way to analyze naming behavior, surface potential inconsistencies, and guide naming suggestions. For AI-based tools, they offer scaffolding to align generated names with human conventions. For developers and educators, they reveal naming idioms that can support clearer communication and pedagogy.
In this study, we address the following research questions:

RQ1: What behavioral roles do closed-category terms play in source code identifiers? To address this question, we conduct a grounded theory study on a manually annotated dataset of identifiers containing closed-category terms: prepositions, conjunctions, determiners, and digits. Through iterative coding and memoing, we develop axial and selective codes that describe the behavioral functions these terms convey in source code, such as data flow, condition handling, or execution sequencing. This process allows us to uncover not only common grammar patterns but also the communicative intent behind developers’ use of closed-category terms. Our goal is to characterize the nuanced and purposeful ways in which these terms encode program behavior.

RQ2: How do closed-category terms correlate with structural, programming language, and domain-specific contexts in software? To answer this question, we quantitatively analyze the distribution of closed-category terms across multiple dimensions: source-code-local structure (e.g., function names, parameters, class names), programming languages (e.g., Java, C++, C), and system domains (e.g., libraries, frameworks, domain-specific applications). We use statistical tests to examine whether these terms appear disproportionately in certain contexts. These correlations help us determine whether developers systematically leverage closed-category terms to express behavior in ways that are shaped by structural conventions, linguistic norms, or domain constraints.

This paper is organized as follows. Section 2 provides our reasoning on why it is important to study this topic. Section 3 gives background on grammar patterns in the context of identifier names. Section 4 provides a detailed explanation of our methods for undertaking the investigation. Our evaluations are presented in Sections 5 and 6. Related work on identifier names is in Section 7.
Discussion of the results is in Section 8, followed by Threats to Validity in Section 9. Conclusions are in Section 10 and Data Availability in Section 11.

Table 1: Examples of closed-category grammar patterns

# 2 Why Study Closed-Category Naming Patterns?

Closed-category terms are relatively uncommon in identifier names. Because they are uncommon, their presence raises an important question: when developers do use these terms, what specific meaning or behavior are they trying to convey? We hypothesize that developers include closed-category terms deliberately, as a way to encode behaviorally specific semantics that are lost or obscured without them. Consider the following examples:

– find all textures: The determiner all signals a universal scope, clarifying that this identifier refers to the entire set of textures, not a subset.
– on start: The preposition on reflects event-driven logic, indicating that the associated behavior is triggered at the start of execution.
– warn if error: The conjunction if embeds a conditional relationship, revealing that the action is contingent on an error occurring.

In each case, the closed-category term is essential to understanding the behavioral semantics of the identifier. Without these terms, the names are more ambiguous or less informative. While uncommon in aggregate, closed-category terms often signal precise intent and encode logical structure in compact forms. Despite their potential significance, these terms have received almost no attention in prior software development naming research, which has focused primarily on open-category words (e.g., nouns, verbs). As a result, we lack foundational knowledge about when and how closed-category terms are used in code—and what they contribute to program comprehension.
Understanding these naming patterns has clear implications: it can inform naming tools, guide educational resources, improve automated name generation, and help researchers characterize naming conventions more precisely. Closed-category terms may be uncommon, but we argue, in this paper, that their usage is not accidental; they significantly contribute to the meaning of identifier names, making it important to study them.

# 3 Definitions & Grammar Pattern Generation

In this work, we analyze identifier names through the lens of grammar patterns, which are sequences of part-of-speech (PoS) tags assigned to the terms within an identifier. For example, the identifier GetUserToken is split into the terms Get, User, and Token, which are tagged as Verb Noun-adjunct Noun. This sequence, V NM N, represents the identifier’s grammar pattern. Crucially, this pattern generalizes across many identifiers: RunUserQuery and WriteAccessToken share the same structure, even though they use different terms. Grammar patterns thus allow us to relate identifiers by their syntactic form. We focus specifically on closed-category grammar patterns, which are patterns that contain at least one closed-class part of speech: a preposition, determiner, conjunction, or numeral (digit). These categories are finite and rarely accept new terms, in contrast to open-class categories like nouns and verbs, which grow over time as new domains introduce new concepts. Despite their rarity in code, closed-category terms often signal behavioral relationships such as event triggers, quantification, or conditional logic, making them important to study.

Table 2: Part-of-speech categories used in study

Fig. 1: Examples of noun, verb, and prepositional phrases

Part-of-Speech Tags. Table 2 lists the PoS tags used in this study. Most are drawn from standard linguistic categories.
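The pattern-generation steps above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ tooling: the camel-case splitter and the tiny hand-built lexicon (`TAGS`) are assumptions made for this example, and a real tagger must disambiguate terms in context.

```python
import re

# Toy lexicon mapping terms to PoS tags; a real tagger resolves tags in context.
TAGS = {
    "get": "V", "run": "V", "write": "V",
    "user": "NM", "access": "NM",   # noun-adjuncts modifying the head noun
    "token": "N", "query": "N",
}

def split_identifier(name):
    """Split a camel-case identifier into lowercase terms."""
    return [t.lower() for t in re.findall(r"[A-Z]?[a-z]+|[0-9]+", name)]

def grammar_pattern(name):
    """Map each term to its PoS tag and join the tags into a grammar pattern."""
    return " ".join(TAGS[t] for t in split_identifier(name))

print(grammar_pattern("GetUserToken"))     # V NM N
print(grammar_pattern("WriteAccessToken")) # V NM N (same pattern, different terms)
```

As the last two lines show, identifiers with different vocabulary collapse onto the same pattern, which is what lets the study group names by syntactic form.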
We highlight one custom tag that is central to our analysis:

– Noun Modifier (NM): Includes adjectives as well as noun-adjuncts—nouns used to modify another noun (e.g., user in userToken, or content in contentBorder). Although standard PoS taggers do not typically distinguish noun-adjuncts, prior work shows their critical role in naming semantics [52].
– Preamble (PRE): A prefix used to convey structural or language-specific metadata, rather than domain semantics. Common examples include Hungarian-style markers such as m for member variables, or project-level namespaces like gimp in gimp temp file; a practice especially common in C. For a complete typology and discussion, see [52]; we include preambles here because they appear in the dataset, but they are not the focus of this paper.

# 3.1 Phrasal Structures and Interpretation

While our analysis is based on PoS sequences rather than full parse trees, we draw on linguistic phrase structure to interpret identifier patterns. Specifically, we reference three example concepts to help the reader understand what we mean when we use the term ‘phrase’ with respect to grammar patterns:

– Noun Phrase (NP): A noun optionally preceded by one or more modifiers (e.g., accessLog, userToken, windowTitle).
– Verb Phrase (VP): A verb followed by a noun phrase, often representing an action on a specific entity (e.g., getUserToken, drawContentBorder).
– Prepositional Phrase (PP): A preposition followed by a noun phrase (e.g., onClick, fromCache).

These phrase structures help illustrate how grammar patterns support analysis of phrases. For instance, in drawContentBorder, the noun-modifier content refines the meaning of the head noun border, while the verb draw anchors the identifier as a behavior applied to that concept (i.e., draw applied to a specific type of border; a content-border).
When closed-category terms appear, they may indicate when an action should occur (onStart), under what condition (ifError), or which entities are included (allTextures). Figure 1 shows examples of NP, VP, and VP-with-PP constructions as derived from grammar patterns.

# 4 Methodology

For our study, identifiers are collected from and analyzed in the following contexts: class names, function names, parameter names, attribute names (i.e., data members), and declaration-statement names. A declaration-statement name is a name belonging to a local (to a function) or global variable. We use this terminology because it is consistent with srcML’s terminology [19] for these variables, and we used srcML to collect identifiers. Therefore, to study closed-category grammar patterns, we group identifiers based on these five categories. The purpose of doing this is to study closed-category grammar pattern frequencies based on their high-level semantic role (e.g., class names have a different role than function names). We collected these identifiers from 30 open source systems, listed in Table 3. These systems belong to a curated dataset of engineered software projects, synthesized by Reaper [49], a tool that measures how well projects follow software engineering practices such as documentation and continuous integration.

Table 3: List of 30 open source systems included in study

Table 4: Distribution of part-of-speech labels in Old Data Set and CCID

The set of systems has an average and median of 335,358 and 111,069 LOC, respectively. Eleven of the systems are primarily C, 9 are primarily C++, and 10 are primarily Java. We chose systems that have tests and use continuous integration (CI) under the idea that these represent systems with at least some basic process for ensuring quality; Reaper is able to automatically determine which systems have both CI and tests.
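As a rough illustration of the collection step, the sketch below pulls declared names out of srcML output with Python’s standard-library XML parser. The inline XML snippet is a hand-written stand-in for real srcML output, and filtering only `decl` elements is a simplification of the authors’ identifier getter tool, which covers all five contexts.

```python
import xml.etree.ElementTree as ET

# srcML elements live in this namespace.
SRC = "http://www.srcML.org/srcML/src"

# Hand-written stand-in for srcML output of `int user_token = 0;`.
SRCML_SNIPPET = f"""
<unit xmlns="{SRC}" language="C">
  <decl_stmt><decl><type><name>int</name></type>
    <name>user_token</name> = <init><expr>0</expr></init></decl>
  </decl_stmt>
</unit>
"""

def declared_names(srcml_text):
    """Collect names declared in <decl> elements (skipping type names)."""
    root = ET.fromstring(srcml_text)
    names = []
    for decl in root.iter(f"{{{SRC}}}decl"):
        # Direct <name> children of <decl> are the declared identifiers;
        # the type's <name> is nested under <type> and is not matched here.
        names += [n.text for n in decl.findall(f"{{{SRC}}}name")]
    return names

print(declared_names(SRCML_SNIPPET))  # ['user_token']
```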
Our primary concern in selecting systems is that they represent different programming languages, follow basic quality procedures, and are large enough for us to collect enough identifiers. Given this, our choice of systems is designed to ensure that the grammar patterns in this study are applied across at least the languages under study.

Table 5: Distribution of Tags in Candidate and Verified (Manually-annotated) data set

Table 6: Balanced population of identifiers per context

# 4.1 Detecting and Sampling Identifiers with Closed-Category Terms

Sampling identifiers that contain closed-category terms is challenging for two reasons: (1) they are relatively uncommon in production code, and (2) many such terms are ambiguous without context, making automatic tagging difficult. To address this, we implemented a two-phase sampling strategy: (1) filtering identifiers that potentially contain closed-category terms into candidate sets, and (2) manually verifying and annotating a statistically representative sample.

# Phase 1: Filtering Candidate Identifiers

We began with a corpus of 279,000 unique identifiers from production code. To collect these 279K identifiers, we ran the srcML identifier getter tool on the srcML archives resulting from running srcML [19] on the system repository directories (Table 3). To identify candidate sets:

– Digits (D): We selected identifiers containing at least one numeric digit, using Python’s str.isdigit() method. Digits are easier to detect automatically and unambiguous in token form. However, there are cases where a digit will be annotated as part of another category. For example, str2int uses the digit 2 as a preposition (to).
– Determiners (DT), Conjunctions (CJ), and Prepositions (P): We constructed lexicons for each category using curated lists of common English terms. We then filtered for identifiers containing component words (i.e., split tokens) that matched a word in one of these lists.
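The two filters above can be sketched as follows. The word lists here are short hypothetical stand-ins for the paper’s curated lexicons, and the underscore-only splitter is a simplification (real splitting also handles camel case).

```python
# Hypothetical, heavily abbreviated lexicons; the study used curated word lists.
LEXICONS = {
    "DT": {"all", "any", "each", "this"},
    "CJ": {"and", "or", "if", "but"},
    "P":  {"on", "in", "to", "from", "for", "with"},
}

def split_terms(identifier):
    """Naive splitter: underscores only, lowercased."""
    return [t for t in identifier.lower().split("_") if t]

def candidate_categories(identifier):
    """Return the closed-category candidate sets an identifier falls into."""
    cats = set()
    if any(ch.isdigit() for ch in identifier):
        cats.add("D")                      # digit filter
    terms = set(split_terms(identifier))
    for cat, words in LEXICONS.items():
        if terms & words:                  # lexicon match on split tokens
            cats.add(cat)
    return cats

print(candidate_categories("find_all_textures"))  # {'DT'}
print(candidate_categories("warn_if_error"))      # {'CJ'}
print(candidate_categories("str2int"))            # {'D'}
```

Note that str2int is flagged only by the digit filter; that its 2 actually acts as the preposition “to” is exactly the kind of context-dependent judgment the manual verification phase exists to make.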
This approach is viable only because these categories are closed and finite in vocabulary. This filtering process produced the following candidate counts:

– 602 candidate conjunction identifiers
– 1,693 candidate determiner identifiers
– 3,383 candidate preposition identifiers
– 4,630 candidate digit identifiers

We use the term candidate because these filters do not account for context or usage, and thus include false positives. Still, they serve as an upper bound for each category’s prevalence in the corpus. Based on this, we estimate the proportion of identifiers containing each term type as follows:

– 0.2% (602/279,000) contain conjunctions
– 0.6% (1,693/279,000) contain determiners
– 1.2% (3,383/279,000) contain prepositions
– 1.7% (4,630/279,000) contain digits

# Phase 2: Balanced Sampling and Annotation

Using a 95% confidence level and a 5% margin of error, we computed minimum sample sizes for each category. For example, at 95% confidence with a 5% margin, the required sample for conjunctions (602 identifiers) is 235:

– CJ: 235 candidate conjunction identifiers
– DT: 313 candidate determiner identifiers
– P: 345 candidate preposition identifiers
– D: 355 candidate digit identifiers

Before manual annotation, we stratified the candidate identifiers by their program context:

– Function names
– Parameters
– Attributes (i.e., class members)
– Function-local declarations
– Class names

Some contexts, such as parameters and especially class names, were underrepresented due to the natural scarcity of closed-category terms in those positions. To increase representation from underrepresented contexts, we attempted to oversample relevant subgroups. However, even with oversampling, the absolute number of qualifying identifiers (e.g., a parameter containing a conjunction) remained low.
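The quoted sample sizes follow from the standard formula for estimating a proportion at maximum variance (p = 0.5) with a finite-population correction. The sketch below is our reconstruction, not the authors’ script; rounding to the nearest integer reproduces all four reported values.

```python
def sample_size(population, z=1.96, p=0.5, margin=0.05):
    """Minimum sample size for a proportion, with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2       # infinite-population size (~384.16)
    return round(n0 / (1 + (n0 - 1) / population))  # correct for finite population

for label, pop in [("CJ", 602), ("DT", 1693), ("P", 3383), ("D", 4630)]:
    print(label, sample_size(pop))
# CJ 235, DT 313, P 345, D 355
```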
As our sampling was driven by the overall population of closed-category terms, rather than the population within each program context under study, we opted not to artificially balance the dataset further. After sampling and stratification, we obtained a total of 1,275 identifiers across the four categories in our candidate set:

– 364 candidate preposition identifiers
– 363 candidate digit identifiers
– 313 candidate determiner identifiers
– 235 candidate conjunction identifiers

Following manual verification and annotation (described in Section 4.2), we retained only those identifiers that were confirmed to contain closed-category terms. Table 5 summarizes both the sampled totals and the verified counts. The final dataset consists of 1,001 identifiers confirmed to contain at least one closed-category term. These identifiers comprise the CCID. Table 6 shows the CCID broken down by program context instead of closed-category type.

# Annotation Notes

Some tools exist for part-of-speech tagging of source code identifiers (e.g., the ensemble tagger from [54]), but these are slow at scale and are trained on datasets that underrepresent closed-category terms. For example, in our prior dataset [52], which was used to train the aforementioned tagging approach [54], conjunctions, determiners, and prepositions made up only 0.2%, 0.4%, and 2.6% of tags, respectively. Thus, we found manual annotation necessary to ensure sufficient coverage and correctness for our study. As we are primarily concerned with production code, and prior work shows that test name grammar patterns differ from production names [58], we did not collect any identifier containing the word ‘test’, or that appeared in a clearly marked test file or directory. In addition, note that Table 4 counts tags at the word level (e.g., CJ CJ N counts two CJ tags), whereas Table 5 counts tags at the identifier level (e.g., one identifier with multiple CJ tags counts as one).
This explains occasional mismatches between sampled and actual tag distributions.

# 4.2 Manual Process for Annotating Part-of-Speech

Initially, one author (the annotator) is assigned to annotate each identifier in the CCID with its grammar pattern. The annotator has experience annotating identifiers with PoS from prior work [52,58]. The process is as follows: the annotator is given a split (using Spiral [39]) identifier along with the identifier’s type, file path, and line number to make it easy to find the identifier in the original code. The annotator is allowed to look at the source code from which the identifier originated if necessary. The annotator is also asked to identify and correct mistakes made by Spiral. When the annotator is finished, two additional annotators are asked to validate (agree or disagree with) the annotations created by the original annotator. Any disagreements are discussed and fixed, if required. Furthermore, a fourth annotator made annotations, which are then compared to the original annotator’s work. Again, disagreements are discussed and fixed. An example disagreement involves the identifier where len, a tricky case because ‘where’ is typically an adverb or conjunction. However, in this case ‘where’ is a reference to a void pointer variable called ‘where’ within the code. So ‘where len’ is the length of the memory this pointer points to, making ‘where’ a noun-adjunct describing the type of length. Thus, its grammar pattern is NM N. The Fleiss’ Kappa for this process was .916.

Table 7: Most common Patterns

We did not expand abbreviations for a couple of reasons. The first is that some abbreviations are more meaningful than their expanded terms (e.g., HTTP, IPV4, SSL) due to how frequently they are used in their abbreviated form by the community. The second reason is that abbreviation expansion techniques are not widely available and vary widely in effectiveness on different types of terms [53, 71].
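For reference, the agreement statistic reported above is computed from a table of per-item category counts. This is a generic sketch of Fleiss’ kappa, not the authors’ analysis code, and the toy count matrix is invented for illustration.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa; counts[i][j] = number of raters assigning item i to category j."""
    n = sum(counts[0])       # raters per item (assumed constant across items)
    N = len(counts)          # number of items
    total = N * n
    # Proportion of all assignments falling into each category.
    p_j = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    # Observed agreement per item.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N
    P_e = sum(p * p for p in p_j)  # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 4 items, 3 raters, 2 categories (e.g., NM vs. P for a term).
perfect = [[3, 0], [0, 3], [3, 0], [0, 3]]
print(fleiss_kappa(perfect))  # 1.0
```

Perfect agreement yields kappa = 1.0; values like the .916 reported here indicate near-perfect but not unanimous agreement across annotators.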
Therefore, a realistic worst-case scenario for developers and researchers is that no abbreviation-expansion technique is available to use, and their PoS taggers must work in this worst-case scenario. Whenever we recognized one, we did not split domain-term abbreviations (e.g., Spiral will make IPV4 into IPV 4; we corrected this to IPV4). We do this because it is the view of the authors that such terms should be recognized and appropriately tagged in their abbreviated (i.e., their most common) form.

# 5 Evaluation of RQ1: What behavioral roles do closed-category terms play in source code identifiers?

Our evaluation aims to establish, through RQ1 and RQ2: 1) how closed-category terms are used to convey differing types of program behavior, 2) the typical grammatical structure of identifiers containing closed-category terms, and 3) how closed-category term distributions differ across programming context, language, and system domains. This research question investigates the semantic role of closed-category grammatical patterns in identifier naming. We focus on four closed-category part-of-speech types: prepositions, digits, determiners, and conjunctions. We present our findings by (1) describing each category’s semantic function using axial codes, (2) summarizing behavioral trends via selective coding, and (3) highlighting shared trends through cross-category synthesis.

# 5.1 Methodology: Manual Process for Grounded Theory Annotations

We employed a grounded theory approach to analyze variable names in source code and their relationship to program behavior. This multi-phase coding process involved four annotators and combined individual annotation with iterative validation and synthesis to construct a theory grounded in observed naming patterns. The sample used in this study is a subset of the CCID described in Section 4.
To construct this subset, we took the top 10 most common grammar patterns (Table 7) and collected all identifiers that followed these patterns, randomly selecting the 10th grammar pattern if multiple patterns had the same frequency. These represent the most common names (i.e., from the perspective of grammatical structure) used in the data set. This totals 618 identifiers. Four annotators participated in the process, comprising both faculty and graduate students with prior experience in natural language processing and software engineering research. All annotators had previously worked on part-of-speech annotation tasks. Before formal annotation began, the team conducted a one-hour training and calibration session to discuss the guidelines, walk through examples, and establish expectations and deadlines.

Coding Platform. Annotations are conducted collaboratively using a shared Google Sheets document. Each row in the sheet contained an identifier along with contextual metadata, including:

– Identifier Name
– Source Code Context
– Programming Language
– GitHub Commit Link
– Split Identifier Name (tokenized form)
– Grammar Pattern (PoS sequence)
– Notes (for open coding and memoing)
– Axial Code (for grouping behavioral patterns)

Open coding and memoing are captured directly in the Notes column. The final axial codes were recorded in the corresponding column once annotators had synthesized their observations.

Phase 1: Familiarization. All annotators reviewed the dataset to build familiarity with the variable names, associated grammar patterns, and program contexts. They discussed ambiguous or novel constructions in group chats to align interpretations and maintain consistency.

Phase 2: Open Coding. Annotators examined each variable name in its context and assigned a free-form behavioral interpretation based on how the variable is used in the surrounding code. These open codes and rationale are documented in the Notes column.
The goal was to capture a grounded understanding of what each identifier conveyed, informed by both linguistic structure and program behavior.

Phase 3: Axial Coding. Annotators grouped similar open codes into higher-level axial codes, focusing on patterns where particular grammar structures aligned consistently with specific behavioral roles. These axial codes captured mid-level abstractions (e.g., State Variables, Event Triggers), and were documented in the spreadsheet alongside notes justifying the grouping where needed. The Fleiss’ Kappa for this phase was .971 for Digits, .996 for Determiners, .976 for Prepositions, and 1.0 for Conjunctions. Each annotator’s axial codes were reviewed by a different annotator for validation. This cross-review process involved reading both the open codes and the proposed axial codes, discussing disagreements, and refining the categories until consensus was reached.

Phase 4: Selective Coding. One annotator synthesized the final, validated axial codes across all annotations and constructed a set of selective codes representing core theoretical categories that linked grammar structure to program intent. These selective codes were then shared with the remaining annotators, who were asked to evaluate whether they reflected the themes and relationships they had observed during their own coding work. Annotators agreed or suggested revisions to finalize the theory.

# 5.2 Digits in Identifiers

Overview. Digits in identifiers act as compact, semantic indicators of structure, ordering, or version. They are also often used to disambiguate entities and encode numeric conventions. Their meaning is typically inferred through domain knowledge, making them powerful, but potentially hard to understand for those without the requisite domain knowledge.

Axial Codes. We created a dual-axis framework for interpreting the meaning of digits, inspired by a single-axis framework we created in prior work on digits in identifiers [62].
This framework reflects our observation that digits contribute information in two distinct ways: (1) the role they play within the local context (e.g., indexing, versioning), and (2) the source of meaning they draw from, which is often external to the immediate source code scope (e.g., domain conventions, technical standards). Every digit in the set has both a role and a source of meaning; they must be combined to fully understand the digit. We put an ‘x’ between each combination of ‘Role’ and ‘Source of Meaning’ axial code.

– Role: What functional purpose the digit serves in the identifier.
  – Distinguisher: The digit differentiates conceptually similar entities, typically to avoid name collision errors from the compiler (e.g., arg1, tile2).
  – Version Identifier: The digit encodes versioning information such as protocol revisions or data format versions (e.g., http2, v1).
– Source of Meaning: Where the interpretation of the digit originates, typically via convention, tooling, or domain-specific logic.
  – Auto-Generated: The digit is added automatically by tools, compilers, or naming systems to avoid conflicts (e.g., var1 2, jButton3).
  – Human-Named Convention: The digit’s meaning is primarily derived from ad hoc developer intent and is not more complex than distinguishing entities manually (e.g., str1, feature2).
  – Locally Specific Concept: The digit conveys project- or context-specific information, often related to coordinate systems, data structures, or memory layouts (e.g., m33 for matrix row 3, col 3).
  – Technology Term / Standard: The digit is part of a recognized domain-specific label, format, or protocol (e.g., HTTP2, Neo4j).

Role x Source of Meaning

# 1. Distinguisher x Human-Named Convention (122 items)

Description: This group captures identifiers that use manually assigned numeric suffixes to distinguish conceptually and lexically similar entities.
Examples: host1 (first of 2 host variables), e8 (element 8 in a parameter list)
Grammar patterns: N D (73), NM N D (21), V N D (6), NPL D (6), N D N (5), PRE N D (3), N D NM N (2), P D (2), PRE NM N D (2), V NM N D (2)

# 2. Distinguisher x Locally Specific Concept (45 items)

Description: This group captures identifiers where digits encode positional or logical roles based on system-specific conventions, such as grid layout or data structure indexing.
Examples: dist2 (squared distance calculation), col1 (first column of a matrix)
Grammar patterns: N D (32), NM N D (5), PRE N D (3), N D N (2), NM N D P D (2), V N D (1)

# 3. Distinguisher x Technology Term / Standard (17 items)

Description: This group captures identifiers that include digits as part of standardized or domain-specific naming conventions, often encoding formats or specifications.
Examples: b1110 (binary for UTF8 byte sequences), count32 (32-bit count value)
Grammar patterns: N D (7), NM N D (5), PRE N D (2), N D N (1), V N D (1), V NM N D (1)

# 4. Version Identifier x Technology Term / Standard (9 items)

Description: This group captures identifiers where the digit signals the version number of a protocol, tool, or technology component.
Examples: gw6 (gateway addr for IPV6), httperf2 (version 2 of the httperf tool)
Grammar patterns: N D (5), NM N D (2), N D N (1), N D NM N (1)

# 5. Distinguisher x Auto-Generated (8 items)

Description: This group captures identifiers that are automatically suffixed with a digit to ensure uniqueness, often generated by tools or compilers.
Examples: field37, field4 (numbers are generated to avoid name collisions)
Grammar patterns: N D (8)

Example. Consider the identifier m34, which appears in the context of a matrix operation. To fully interpret the digit 3 in this name, we must consider both its Role and its Source of Meaning.
Semantically, the digit serves as a Distinguisher, uniquely identifying this variable apart from its siblings (such as m32 and m31). However, its full interpretation depends on its Locally Specific Concept source: the developers have an internal convention that 3 refers to the row index, while 4 refers to the column. Without knowing the Source of Meaning, the numbers can only be interpreted as distinguishing one identifier from another; the meaning of the digits would remain ambiguous. This illustrates how both axes work together—Role tells us what the digit is doing, while Source of Meaning tells us how to interpret the value.

Selective Coding Insight. Digits serve as semantic compression tools in source code: conveying versioning, layout, ordering, or configuration state using a minimal footprint. Their power lies in the shared assumptions between the name’s author and its reader. Whether distinguishing hosts (host1, host2), signaling protocol versions (http2), or denoting matrix dimensions (m33), digits rely on prior knowledge to be effective. This makes them:

– Easy to understand when used in well-known conventions (e.g., 3D, utf8)
– Hard to understand when overused without documentation or when the reader lacks background information/experience

Digits are structural shortcuts in the mental models of developers; a quick way to convey a lot of information in a small number of characters.

# 5.3 Prepositions in Identifiers

Overview. Prepositions in identifiers express spatial, temporal, or logical relationships. They are the most versatile (i.e., they have the most axial codes) and most frequently used closed-class grammatical structure in our dataset. Prepositions typically convey transformation, control conditions, event triggers, source origin, or context membership. Because only a subset of these are dual-axis (Boolean Flow), we inline the definitions with our examples, unlike with Digits, where we separated them.

Axial Codes.
Through axial coding, we identified several recurring behavioral roles that prepositions play in identifier names. These axial codes describe the functional semantics conveyed by the preposition within the naming context:
# 1. Type Casting / Interpretation (38 items)
Definition: This group captures identifiers that signify transformation from one type, format, or abstraction to another.
Examples: str 2 int, as field
Grammar patterns: – P N (18) – P NM N (9) – N P N (4) – V P N (2) – P NM NM N (2) – P V (1) – NM N P N (1) – V P (1)
# 2. Position / Ordering in Time or Space (28 items)
Definition: This group captures identifiers that indicate relative position or sequencing within a spatial, temporal, or execution context.
Examples: before major, after first batch
Grammar patterns: – P N (8) – P (4) – N P N (3) – P NM N (3) – V P N (3) – P V (2) – V P (2) – NM N P N (2) – N P (1)
# 3. Boolean Flow / Control Flag (26 items)
Definition: This group captures identifiers that encode boolean flags which both guard execution and describe the behavior they enable. This group is somewhat special: the names imply other axial codes, but the variables themselves are boolean. Thus, many of the identifiers in this group are dual-axis, where the 1st axis is boolean, and the 2nd is one of the other preposition axes. These variables are typically guards, used in branching logic that:
– Activate based on position or sequencing (e.g., after equals)
– Govern strategy or type casting/interpretation behavior (e.g., for backprop, as array)
– Reflect data provenance or deferred logic (e.g., from docker config, wait for reload)
Examples: obsess over host, for backprop
Grammar patterns: – P N (12) – P NM N (6) – V P N (2) – V P (2) – N P N (2) – N P (1) – P (1)
# 4. Data Source / Origin (20 items)
Definition: This group captures identifiers that refer to the source from which data or configuration is retrieved.
Examples: from context, from id
Grammar patterns: – P N (10) – P NM N (1) – N P N (2) – P (3) – NM N P N (1) – V P (1) – N P (2)
# 5. Event Callback / Trigger (17 items)
Definition: This group captures identifiers that define behavior executed in response to user or system events.
Examples: on reason, on start
Grammar patterns: – P N (6) – P NM N (5) – P NM NM N (4) – V P N (1) – NM N P N (1)
# 6. Deferred Processing / Pending Action (13 items)
Definition: This group captures identifiers that signal actions or data awaiting future handling.
Examples: to ack, to count
Grammar patterns: – P V (10) – P N (2) – P NM N (1)
# 7. Unit-Based Decomposition / Measurement (11 items)
Definition: This group captures identifiers that describe per-unit measurement, processing, or aggregation.
Examples: down time, size in datum
Grammar patterns: – NPL P N (8) – P N (1) – N P N (1) – NM N P N (1)
# 8. Purpose / Role Annotation (10 items)
Definition: This group captures identifiers that clarify the functional role or use-case of a value.
Examples: for avg, for class
Grammar patterns: – P N (6) – NM N P N (2) – P NM N (1) – V P (1)
# 9. Data Movement / Transfer (9 items)
Definition: This group captures identifiers that represent movement of data or control between locations, buffers, or components.
Examples: to repo, to header
Grammar patterns: – P N (3) – N P (3) – P NM N (1) – NM N P N (1) – P NM NM N (1)
# 10. Operation Basis / Strategy (8 items)
Definition: This group captures identifiers that describe the method or trait that determines how operations may or should be carried out.
Examples: extend by hexahedron, with unary operator
Grammar patterns: – P N (2) – P NM N (2) – V P N (2) – P (1) – V P (1)
# 11. Membership / Peer Grouping (7 items)
Definition: This group captures identifiers that signal inclusion in a group, scope, or set of peer entities.
Examples: in neighbour heap, in for
Grammar patterns: – P (2) – P N (1) – P NM N (1) – V P N (1) – V P (1) – N P (1)
# 12.
Mathematical / Constraint Context (2 items)
Definition: This group captures identifiers that encode numerical limits, bounds, or ratios that constrain behavior.
Examples: over size, vmax over base
Grammar patterns: – P N (1) – N P N (1)
Selective Coding Insight. Prepositions in identifier names serve as compact, highly expressive relational markers. Across the dataset, prepositions consistently support five core semantic roles:
– Transformation and Directionality: Prepositions like to, from, and as signal type casting, movement, or format conversion.
– Execution and Conditional Control: Prepositions such as after, on, and for often signal when or whether an action should occur, especially within event-driven operations and boolean flags that gate execution.
– Role and Configuration Semantics: Prepositions like with, by, and in clarify how values contribute to a process or how behavior is scoped or grouped.
– Quantification and Unit-Based Aggregation: Prepositions such as per and in describe how quantities are measured, normalized, or decomposed across units (e.g., iterations per sample, size in datum).
– Future-Intent or Deferred Action: Especially with to, some identifiers encode pending or scheduled behavior (e.g., to merge, wait for reload).
Importantly, boolean flags that include prepositions do not form a distinct behavioral class, but instead overlay these other functions: gating type conversions, controlling source-based logic, or scoping strategies. These flags act as behavioral summaries, where the identifier directly reflects the guarded behavior (e.g., send to buffer reflects that the guarded code sends data to a buffer). In short, prepositions make invisible system relationships visible. They map the logic of control, transformation, and association directly into identifier structure, enabling expressive, intention-revealing naming in complex systems.
# 5.4 Determiners in Identifiers
Overview.
Determiners in identifiers help interpret values in relation to a set. They often signal positional reasoning, filtering criteria, relative thresholds, control flow, or scoping rules. In our analysis, we treat terms like next and last as determiners, even though they are typically categorized as adjectives in general English. In source code, however, these terms function more like determiners because they specify a particular entity within a sequence or collection rather than merely describing its properties. For example, the next pointer in a linked list does not describe a type of pointer, but rather identifies the specific node that follows in the structure. In this way, such terms serve a determinative function.
Axial Codes. We identified the following eight categories of determiner-based behavior:
# 1. Temporal / Most Recent Element (60 items)
Definition: This group captures identifiers that refer to the most recently computed, stored, or observed value, often used for computing prior state, and in sequence-based data structures.
Examples: last bucket, last builder
Grammar patterns: – DT N (32) – DT NM N (19) – DT NM NM N (4) – DT V (2) – V DT N (2) – DT NPL (1)
# 2. Temporal / Upcoming Element (54 items)
Definition: This group captures identifiers that denote the next item in a sequence or timeline, often used in look-ahead and sequence-based data structures.
Examples: next tex, next bar
Grammar patterns: – DT N (35) – DT NM N (9) – DT V (3) – N DT (3) – DT NPL (2) – DT NM NM N (1) – V DT N (1)
# 3. Population / Subpopulation Reference (42 items)
Definition: This group captures identifiers that reference a population or subset, typically using quantifiers like all, any, or some to guide iteration, filtering, or policy logic.
Examples: any diffuse, all set
Grammar patterns: – DT NPL (13) – DT N (9) – V DT (6) – DT NM NPL (4) – V DT NPL (4) – DT NM N (2) – N DT (2) – DT V (1) – V DT N (1)
# 4.
Immediate Context Reference (26 items)
Definition: This group captures identifiers that refer to the current instance, scope, or runtime context, emphasizing locality through terms such as this, another, or a.
Examples: this node, another id
Grammar patterns: – DT N (17) – DT NM N (6) – DT NM NM N (1) – N DT (1) – V DT N (1)
# 5. Negation / Exclusion Flag (18 items)
Definition: This group captures identifiers that indicate something is explicitly disabled, excluded, or absent; commonly using no to toggle features or signal null conditions.
Examples: no callback, no log
Grammar patterns: – DT N (12) – DT NM N (2) – DT NPL (2) – DT NM NPL (1) – DT V (1)
# 6. Quantity Threshold / Optional Extensibility (4 items)
Definition: This group captures identifiers that express minimum thresholds, or the possibility of extending beyond a baseline.
Examples: enough memory, more data
Grammar patterns: – DT N (2) – DT NPL (2)
# 7. Default / Fallback Value Representation (2 items)
Definition: This group captures identifiers that represent placeholder or fallback values, used when a field must be filled or a default condition must be satisfied.
Examples: a void, no val
Grammar patterns: – DT N (2)
# 8. Boolean Multi-Condition Test (2 items)
Definition: This group captures boolean identifiers representing conjunctions of multiple conditions, usually requiring all to be satisfied (e.g., both X and Y must be true).
Examples: both empty selection, both NonEmpty Selection
Grammar patterns: – DT NM N (2)
Selective Coding Insight. Determiner-based identifiers help interpret values in relation to a set, by signaling position, filtering criteria, thresholds, or scoping rules. These are closed-category terms that enable programmers to express set logic, entity selection, and relative capacity or validity.
They typically support:
– Positional reasoning (next, last, this): Indicates where a value occurs in a temporal or structural sequence, helping to track state progression, history, or future execution.
– Population membership and filtering (some, any, each, least, which): Refers to selecting or referencing members within a larger set, expressing scope, quantification, or comparison.
– Thresholding and extensibility (enough, more, additional): Indicates whether a minimum condition is met or whether more values can be included beyond a base requirement.
– Identity negation or fallback (no, none, a, without): Flags exclusion, absence, or placeholder values, often tied to feature toggles or default logic.
# 5.5 Conjunctions in Identifiers
Overview. Conjunction-based identifiers are rare but expressive. They signal compound behavior, dual-mode interfaces, or gated logic, often making hidden control flow or semantic relationships visible. Their rarity likely stems from the fact that developers often express conjunctions in logic rather than names. When they are used, however, they highlight either an intent to foreground behavior or a desire to capture structural duality within a single name.
Axial Codes. We identified seven categories of conjunctional behavior, each reflecting a different type of pairing, conditionality, or combination:
# 1. Data Pair / Composite Value (7 items)
Definition: This group captures identifiers that hold or refer to two values used together or in alternation, typically for a shared behavioral role or composite purpose.
Examples: data or diff, function and data
Grammar patterns: – N CJ N (6) – V CJ N (1)
# 2. Guarded Action / Conditional Enablement (6 items)
Definition: This group captures identifiers that encode actions gated by internal logic; executing only if a condition is satisfied. The conjunction expresses conditional enablement or guarded behavior.
Examples: if present, if unique
Grammar patterns: – CJ NM (2) – V CJ N (2) – V CJ V (1) – V CJ VM P (1)
# 3.
Combined Action / Sequential Behavior (3 items)
Definition: This group captures identifiers that describe a sequence of operations performed together, often representing merged behaviors.
Examples: hash and save, print and free json
Grammar patterns: – V CJ V (1) – V CJ V N (1) – V N CJ N (1)
# 4. Shared Interface for Alternatives (1 item)
Definition: This group captures identifiers that define a shared interface or behavior over mutually exclusive alternatives, with the conjunction indicating a choice, not a combination.
Example: generate key or iv
Grammar pattern: – V N CJ N (1)
# 5. Combined Configuration / UI Concept (1 item)
Definition: This group captures identifiers that refer to compound interface or configuration concepts, often blending multiple traits into a unified design or behavioral setting.
Example: look and feel
Grammar pattern: – NM CJ NM (1)
# 6. Boolean Concept Name (1 item)
Definition: This group captures identifiers that encode a named logical or boolean relationship, usually by treating the conjunction itself as a symbolic concept.
Example: and
Grammar pattern: – CJ (1)
# 7. Boolean Multi-Condition Test (1 item)
Definition: This group captures identifiers that evaluate multiple conditions simultaneously; typically for readiness or validation checks, returning true only if all constraints are met.
Example: null or empty
Grammar pattern: – NM CJ NM (1)
Selective Coding Insight.
Conjunction-based identifiers are especially useful when modeling:
– Duality: Representing more than one entity or mode simultaneously (e.g., input and output, key or iv)
– Mutual Exclusion: Encoding choices between alternatives, only one active at a time (e.g., stream or cache)
– Preconditions: Embedding logic into the name that would otherwise be hidden in branching statements (e.g., load if needed, trigger if active)
Conjunctions are the rarest category in our data, and while it is difficult to draw firm conclusions about them, it is clear that ‘and’, ‘or’, and ‘if’ are go-to conjunctions, particularly for Data Pairs and Guarded Actions.
# 5.6 Cross-Category Synthesis
Across digits, determiners, prepositions, and conjunctions, developers use closed-class grammatical structures to encode compact, behavior-rich semantics in identifiers. While each part-of-speech (POS) category exhibits distinct tendencies, analysis of grammar patterns reveals broader functional themes and stylistic consistencies across categories.
Boolean Semantics and Execution Control. Our first cross-category behavior is the use of closed-class elements to encode boolean conditions, execution control, or logical gating:
1. Determiners such as no, some, this, and both signal presence, exclusion, or multi-condition boolean evaluation.
2. When used as booleans, Prepositions like as and with tend to guard sections of code that implement the behavior described in the identifier name.
3. Conjunctions surface explicitly in guarded or compound logic names (e.g., load if enabled, both ready) using patterns like V CJ N and NM CJ NM.
It is interesting that booleans appear in all three of these contexts, but each is a different flavor: a way of expressing behavior that is unique to the closed-category terms used in the boolean identifier.
Control Flow and Event Signaling Across Categories.
Closed-category terms across all four categories reflect a tendency to encode temporal, reactive, or preconditioned behavior:
1. Prepositions like on, before, after, and by appear in structures such as P N and V P N, signaling timing, triggers, or basis for operation.
2. Conjunctions explicitly model control conditions (if, and) or mutual exclusivity (or), often appearing in V CJ N or N CJ N structures.
3. Determiners frequently encode sequence through next and last, realized in DT N and DT NM N patterns.
4. Digits imply procedural differentiation (method1, step2) or timeline indexing when appearing in coordinated identifiers (m31, m32).
These names act as micro-control structures, embedding state transitions and flow logic directly into identifier names to help the reader understand when or how an identifier will or should be used.
Multi-Dimensional Semantic Layering. Grammar pattern analysis highlights how identifiers stack multiple behavioral dimensions:
1. Prepositions convey direction, transformation, measurement, and order
2. Determiners convey selection, quantity, and scope
3. Digits embed indexing, uniqueness, and domain roles
4. Conjunctions encode logic composition and structural alternatives.
These layered forms operate as semantic shortcuts to express complex behavior with very few words. They compress conditions, transformations, order, and relationships into concise forms. Finally, the grammar patterns observed across our axial codings provide structural insight into how behavioral semantics are composed. When the closed-category term appears as the first token in a grammar pattern, such as in DT NM N or P NM N, it typically modifies or qualifies a single operand, forming a unary relation (e.g., temporal status or transformation of a noun phrase).
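As an illustration, this positional reading can be mechanized. The sketch below is our simplification, not part of the coding tooling: it assumes a pattern contains exactly one closed-class tag, and labels the pattern unary when that tag leads and binary when it is flanked by open-class tags.

```python
# Illustrative sketch (simplifying assumption: exactly one closed-class tag
# per grammar pattern). Tags follow the notation used in this paper.
CLOSED_TAGS = {"DT", "P", "CJ", "D"}

def relation_kind(pattern: str) -> str:
    tokens = pattern.split()
    closed = [i for i, t in enumerate(tokens) if t in CLOSED_TAGS]
    if len(closed) != 1:
        return "other"  # zero or multiple closed-class tags
    i = closed[0]
    if i == 0 and len(tokens) > 1:
        return "unary"   # closed tag qualifies the operand(s) that follow
    if 0 < i < len(tokens) - 1:
        return "binary"  # closed tag links two open-class operands
    return "other"       # trailing closed tag (e.g., N D) or bare tag
```

Under this reading, `relation_kind("DT NM N")` yields "unary" and `relation_kind("N P N")` yields "binary", matching the interpretation above.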
In contrast, when the closed term is flanked by open-class terms, such as in N P N or N CJ N, the structure reflects a binary relation: two operands connected through a behavioral or logical relationship (e.g., data flow or choice). By combining our axial and selective codes with these syntactic patterns, we gain a fuller picture of identifier meaning: the open-category terms indicate *which* entities are involved, while the closed-category term signals *how* they are related or behave with respect to one another.
# 5.7 Summary of RQ1
Through grounded theory analysis of closed-category terms in identifiers, we have uncovered and explored the ways in which these compact grammatical forms play a central role in expressing program behavior. Each part-of-speech category contributes distinct semantic functions, ranging from transformation and scoping to control flow and logical composition.
Table 8: Top 10 terms per closed-category part-of-speech tag
Together, they reveal how developers construct concise, behavior-rich identifiers that encode structure, timing, intent, and logic. Whether signaling preconditions (load if enabled), alternatives (data or diff), state (last bucket), or structural roles (col1), these terms form a functional lexicon that bridges source code, cognition, and context.
# 6 Evaluation of RQ2: How do closed-category terms correlate with structural, programming language, and domain-specific contexts in software?
One interesting aspect of closed-category terms is that they appear in different contexts within source code with varying frequency. This variation provides insight into how developers use these terms to express different types of meaning.
For RQ2, we investigate how closed-category terms correlate with three types of context: (1) the local programming context in which a variable is declared (e.g., Function, Attribute), (2) the programming language of the source code in which the identifier was found, and (3) the broader system-level domain of the software in which it appears (e.g., domain-specific vs general-purpose projects). This three-way perspective allows us to examine how these terms are used within individual source code structures, how their usage differs between programming languages, and how they reflect distinctions across different kinds of systems. We begin by analyzing the distribution of four closed categories: prepositions, determiners, conjunctions, and digits, across five programming contexts and three programming languages. We discuss which categories are most frequent in which contexts/languages and consider how those patterns may reflect the communicative goals of the developer. We then extend this analysis to system-level domain context, comparing the normalized frequency of closed-category term usage between domain-specific and general-purpose systems.
Table 9: Results of Pearson’s Chi-Squared Test. df = 6, $\alpha = 0.05$, critical value = 12.592, test statistic = 4.291
Table 10: Standardized Pearson Residuals Results. With Bonferroni Correction, a significant result is $\alpha = 0.05 / 12 = 0.0042$, which translates to a $\pm 2.87$ critical value
# 6.1 Closed-Category Term Usage Across Programming Contexts, Programming Languages, and System Domains
We now examine how differing contexts and closed-category grammar patterns relate to one another, and whether programming language further conditions their usage. We begin by analyzing cross-language correlations in closed-category term usage, followed by an exploration of correlations in how closed-category terms are used in different program contexts.
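For reference, the critical values quoted in these table captions can be reproduced with standard statistical routines; a quick sketch (assuming scipy is available):

```python
# Sanity check: reproduce the critical values from the table captions for
# the chi-squared tests and the Bonferroni-corrected two-sided residual tests.
from scipy.stats import chi2, norm

cv_language = chi2.ppf(0.95, df=6)   # Table 9: ~12.592
cv_context = chi2.ppf(0.95, df=12)   # Table 11: ~21.026

z_language = norm.ppf(1 - (0.05 / 12) / 2)  # Table 10: ~2.87
z_context = norm.ppf(1 - (0.05 / 20) / 2)   # Table 12: ~3.02
```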
We provide Table 8, which shows frequencies and percentages for PoS and terms, to help the reader understand what types of terms are most prevalent. However, for this RQ we rely primarily on Tables 9, 10, 11, and 12, which show the results of our Pearson Chi-squared tests and Standardized Pearson residuals. Using these, we highlight common patterns, terms, and the contexts or languages to which these patterns are correlated.
# 6.1.1 Language-Specific Differences in Closed-Category Term Usage
Starting with an analysis of closed-category terms and programming language, our null hypothesis is that there is no relationship between identifiers containing closed-category terms and the programming language in which they appear. Our alternative hypothesis is that there is a relationship between identifiers that contain closed-category terms and the programming language.
Methodology. To perform our Chi-Square test, we use the CCID described in Section 4.1. We count how many times each closed-category PoS appears in C++, Java, or C code by analyzing all 1,001 identifiers that contain closed-category terms. For example, we might find that there were 20 Digits in our data set found in C++ code, and 5 Digits in Java code. Once we have these frequencies, we apply the Chi-Square test and Standardized Pearson residuals with Bonferroni correction to determine overall significance and per-part-of-speech significance, respectively.
Table 11: Results of Pearson’s Chi-Squared Test. df = 12, $\alpha = 0.05$, critical value = 21.026, test statistic = 88.893567.
Table 12: Standardized Pearson Residuals. With Bonferroni correction, a significant result is $\alpha = 0.05 / 20 = 0.0025$, which translates to a $\pm 3.02$ critical value.
Results. The Chi-square test for programming language (Table 9) did not produce a statistically significant result. Thus, we do not reject the null hypothesis: there
is no strong evidence that closed-category tag usage differs significantly by programming language. However, exploratory analysis of the Standardized Pearson residuals in Table 10 offers insight into modest trends worth noting:
– Digits (D) are modestly underrepresented in Java (residual = –1.65), suggesting a mild tendency to avoid numeric suffixes in Java naming.
– Determiners (DT) are slightly overrepresented in Java (residual = 1.23), potentially reflecting more frequent use of quantifying or contextual modifiers.
Summary. While we did not find significant statistical evidence linking closed-category tag usage to programming language, the residual analysis and qualitative trends suggest mild idiomatic differences, particularly around digit usage and determiner phrasing. For example, in our Axial Code data from RQ1, Population/Subpopulation Reference identifiers were found in Java (21, 50%) and C++ (18, 42%) more than in C (3, 7%). These patterns may reflect broader stylistic conventions or design idioms of each language, but should be interpreted cautiously given the statistical outcome.
# 6.1.2 Context-Specific Differences in Closed-Category Term Usage
Next, we analyze the correlation between closed-category terms and program contexts such as Function names, Attributes, and Class names. Our null hypothesis is that there is no relationship between identifiers containing closed-category terms and the context in which they appear. Our alternative hypothesis is that there is a relationship between identifiers containing closed-category terms and the context in which they appear.
Methodology. To perform our Chi-Square test, we use the CCID described in Section 4.1. We analyzed all 1,001 identifiers that contained closed-category terms, and counted how many times a closed-category term appears in one of our five code contexts: Attribute, Function, Class, Declaration, or Parameter.
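Concretely, this counting step yields a 4 × 5 contingency table of PoS-by-context frequencies. A minimal sketch of the computation with made-up counts (not our CCID data), including the standardized Pearson residuals:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 4x5 table of closed-category PoS (rows: CJ, DT, D, P) by
# context (columns: Attribute, Function, Class, Declaration, Parameter).
# Counts are illustrative only -- they are NOT the CCID frequencies.
observed = np.array([
    [2.0, 9.0, 1.0, 2.0, 2.0],        # CJ
    [40.0, 30.0, 10.0, 60.0, 40.0],   # DT
    [30.0, 10.0, 30.0, 40.0, 50.0],   # D
    [50.0, 120.0, 20.0, 40.0, 30.0],  # P
])

stat, p_value, dof, expected = chi2_contingency(observed)

# Standardized Pearson residuals:
#   r_ij = (O_ij - E_ij) / sqrt(E_ij * (1 - p_i.) * (1 - p_.j))
n = observed.sum()
row_prop = observed.sum(axis=1, keepdims=True) / n
col_prop = observed.sum(axis=0, keepdims=True) / n
residuals = (observed - expected) / np.sqrt(
    expected * (1 - row_prop) * (1 - col_prop))
# Cells whose |residual| exceeds the Bonferroni-corrected critical value
# (+/-3.02 for 20 cells) would be flagged as significant.
```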
Once we have these frequencies, we apply the Chi-Square test and Standardized Pearson residuals with Bonferroni correction to determine overall significance and per-part-of-speech significance, respectively.
Results. The Chi-squared test for context (Table 11) shows a significant result (88.89 > 21.026), allowing us to reject the null hypothesis. As before, we analyze the Standardized Pearson Residuals (Table 12) to understand where the largest deviations appeared.
Conjunctions (CJ). Closed-category grammar patterns that include conjunctions typically feature the terms ‘and’, ‘or’, or ‘if’, as reflected in Table 8. Although rare overall, these patterns are significantly positively correlated with function names (Standardized Pearson residual = 4.22, Table 12). This indicates that when conjunctions do appear, they are far more likely to occur in function names than in other contexts. The selective coding data from RQ1 offers an explanation for this pattern. Conjunction-based grammar patterns tend to express compound logic, dual-purpose behavior, or guarded activation, which are most relevant when naming behaviors or actions rather than static values. For example, Guarded Action / Conditional Enablement patterns such as load if needed or activate if enabled appear in function names to encode preconditions or gating logic directly into the identifier. Conjunction-based names are largely absent from declarations and classes, likely because those contexts do not typically represent conditional or compound operations. The data supports the interpretation that developers strategically use conjunctions in function names to foreground complex control logic or behavioral nuance at the point of execution.
While we did find a significant correlation between Conjunctions and functions, it is important to recall that we have a limited number of Conjunctions in our dataset (16), meaning that while we have found potential trends, further research is required.
Determiners (DT). Closed-category grammar patterns that include determiners typically feature the terms last, next, all, no, or this, as shown in Table 8. While determiners are not significantly positively correlated with any specific context, they are modestly negatively correlated with class names (Standardized Pearson residual = –3.07, Table 12), suggesting that developers tend to avoid determiner-based grammar patterns in class names. The selective coding analysis offers a plausible explanation: the most common roles for determiners involve expressing temporal or positional relationships, such as Temporal / Most Recent Element and Temporal / Upcoming Element (over 100 instances), as well as set-based semantics, such as Population / Subpopulation Reference (38 instances). These patterns commonly use terms like next, last, prev, all, and any to indicate an element’s position in a sequence or its membership in a filtered subset. These naming strategies are well-suited to attributes, parameters, and declarations, where variables often represent dynamic state or bounded subsets. In contrast, class names are generally used to describe abstract data types or roles, where positional or filtering semantics are less relevant. The relative absence of determiners in class contexts thus reflects their semantic focus: determiners foreground state, scope, or specificity, whereas class names typically signal structural purpose or generalization.
Digits (D). Closed-category grammar patterns that include digits often feature numerals such as 1, 2, 0, 3, and 4, as shown in Table 8.
Digits are significantly positively correlated with parameter names and class names, and significantly negatively correlated with function names (Standardized Pearson residual = 4.72 for class names, 4.25 for parameters, and –5.51 for functions; Table 12). Notably, digits are the only closed category to exhibit a positive correlation with class names. The selective coding data sheds light on this trend. The most frequent digit-related patterns in our dataset fall under Distinguisher $\times$ Human-Named Convention and Distinguisher $\times$ Locally Specific Concept. These naming strategies are used to distinguish among similar entities (e.g., arg1, arg2, tile3) or to embed system-specific references (e.g., m34, cp437) into variable or type names. Such distinctions are especially useful in parameters and declarations, where there is no syntactic support for disambiguation outside of naming. In contrast, function-level disambiguation is often handled by the language itself, through overloading, polymorphism, or naming conventions focused on behavior, making digits largely unnecessary or even undesirable in that context. Their absence from function names reflects this shift: digits encode identity, providing traceability to specific domain concepts and distinguishing similarly named entities, rather than encoding purpose or behavior. Taken together, these findings suggest that digits serve primarily as disambiguators or protocol markers rather than communicative devices for expressing behavior. Their presence in class and parameter names signals static or structural variation, while their avoidance in function names underscores a developer preference for meaningful, descriptive action labels over numerical markers.
Prepositions (P). Closed-category grammar patterns that include prepositions frequently feature terms such as to, for, as, on, or from, as shown in Table 8.
These patterns are significantly positively correlated with function names (Standardized Pearson residual = 4.44, Table 12), suggesting that developers are particularly likely to use prepositional grammar when naming behaviors or operations. This strong correlation reflects the behavioral semantics that prepositions convey in identifier names. As detailed in our selective coding, prepositions frequently express directionality, transformation, conditional activation, or event-driven execution; all of which are function-oriented behaviors requiring action to be taken. Overall, prepositions help scope, qualify, and clarify a function’s behavior. Their strong correlation with function names supports the interpretation that developers use them deliberately to encode operational semantics directly into the name, especially in contexts involving transformation, control flow, or event handling.
Summary. The results of our analysis support the alternative hypothesis: closed-category parts of speech are meaningfully correlated with specific roles and contexts in source code. Prepositions and conjunctions appear more frequently in function names, where they help express behavioral nuances such as guarded actions, type casting, or alternative execution paths. Digits, by contrast, are most commonly found in class names and parameter declarations, where they signal disambiguation, indexing, or versioning; identifiers rooted in identity rather than behavior or purpose.
Fig. 2: Global Mann-Whitney U test significance across thresholds, showing divergence between domain-specific and general systems. Peaks at 0.6 and 0.8 suggest the importance of both ubiquitous and moderately specific closed-category terms.
Fig. 3: Per-category Mann-Whitney U test significance across thresholds. Prepositions dominate across thresholds, while conjunctions and digits contribute more variably.
# 6.1.3 Closed-Category Term Usage Across System Domains
Having established correlations between closed-category terms, source code context, and programming language, we now turn to a broader question: do these terms also vary with the domain of the software system itself? This sub-question allows us to further test our central hypothesis that closed-category terms are not used arbitrarily, but instead reflect domain-relevant distinctions in how behavior and structure are communicated. If certain domains make more frequent or specialized use of closed-category terms, this suggests that such terms play a role in expressing concepts tightly coupled to those domains. Understanding and appropriately using these terms may therefore be critical for accurate communication of behavior in domain-specific software.
Fig. 4: Cliff’s Delta for closed-category terms across system support thresholds.
Table 13: Systems and System Domains Selected Based on Axial Codes for each Closed-Category Type
Methodology. In RQ1, we developed a set of Axial Codes to describe the behavioral roles of closed-category terms in identifiers. To explore their importance at the level of system domain, we selected the two most common Axial Codes from each closed-category group (e.g., Prepositions, Determiners). For each code, we identified two software domains that we hypothesized would frequently use identifiers expressing that behavior.
For example, in the Preposition group, the top two Axial Codes were:

– Type Casting / Interpretation
– Position / Ordering in Time / Space / Execution Context

Based on these, we selected four relevant software domains:

– For Type Casting / Interpretation:
  – Serialization/deserialization libraries
  – Polyglot interop tools or type bridge layers
– For Position / Ordering in Time / Space / Execution Context:
  – Data structure and algorithm libraries
  – Compiler or intermediate representation (IR) tooling

Table 13 lists all selected systems, the domain each represents, and the Axial Code that motivated its inclusion. To fit the table, we omitted a few details such as system size; these can be found in our open data set (Section 11). We analyzed identifiers drawn from five programming contexts—attributes, parameters, functions, declarations, and class names—across two groups of systems: one curated for domain-specific relevance and one composed of general-purpose projects, which were used to construct the CCID (Table 3). The general-purpose group serves as a baseline, as these systems were not selected based on any particular domain hypothesis. Our underlying assumption is that, if closed-category terms are meaningfully correlated with domain-specific concerns, we will observe statistically significant differences in their usage between these two groups. For each system, we extracted all identifiers and segmented them using Spiral [39]. We then filtered out all terms that are neither digits nor included in our predefined lists of closed-category terms (as defined in Section 4). After filtering, we computed the normalized frequency of closed-category term usage by dividing the count of qualifying terms by the system’s total lines of code. To assess whether differences in usage were statistically meaningful, we applied a Mann-Whitney U test to compare the distributions between domain-specific and general-purpose systems.

Results.
To mitigate the risk of a small number of systems dominating the term distribution, and to better understand how widely closed-category terms are used, we introduce a support threshold that controls how many systems a term must appear in to be included in the Mann-Whitney U test. Increasing the threshold emphasizes more widely used (ubiquitous) closed-category terms; decreasing it emphasizes more narrowly distributed (specific) closed-category terms that may signal domain-specific behavior. Significance at high thresholds implies that some terms are important to all of our domain-specific systems; significance at lower thresholds implies that subsets of the domain-specific systems make use of terms that are far from universal but nevertheless set these systems apart from the general set. To explore how these different usage profiles affect our results, we conducted a threshold sweep. At each level, a term had to appear in at least a given proportion of systems to be retained. This allowed us to systematically vary our emphasis between ubiquity (terms common across many systems) and specificity (terms concentrated in a smaller, domain-aligned subset). The results, shown in Figure 2, reveal the strongest distributional divergence at thresholds around 0.6 and 0.8. These peaks suggest that both common and moderately specific terms help distinguish domain-specific systems. By contrast, thresholds between 0.1 and 0.3 yielded little significance, likely reflecting linguistic noise from terms with low usage or ambiguous semantic function. We repeated the analysis at the level of individual closed-category types (prepositions, determiners, conjunctions, digits) to identify which groups drive the observed differences. As shown in Figure 3, prepositions exhibit consistently strong significance across thresholds, particularly between 0.6 and 0.8.
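The support-threshold filtering used in this sweep can be sketched as follows; the term list and per-system presence data are hypothetical:

```python
def filter_by_support(term_presence, n_systems, threshold):
    """Keep terms that appear in at least `threshold` proportion of systems."""
    return {term for term, systems in term_presence.items()
            if len(systems) / n_systems >= threshold}

# Hypothetical presence data across six systems (s1..s6)
presence = {
    "to":   {"s1", "s2", "s3", "s4", "s5", "s6"},  # ubiquitous
    "from": {"s1", "s2", "s3", "s4"},              # moderately specific
    "iv":   {"s2"},                                # narrowly distributed
}

for threshold in (0.1, 0.3, 0.6, 0.8, 1.0):
    kept = sorted(filter_by_support(presence, 6, threshold))
    print(f"support >= {threshold:.1f}: {kept}")
```

Raising the threshold from 0.1 to 1.0 progressively discards narrowly distributed terms, mirroring the shift in emphasis from specificity to ubiquity described above.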
Digits and conjunctions show more variable but still notable divergence, while determiners contribute the weakest and least consistent signal. These trends suggest that domain-specific systems rely more heavily on certain linguistic forms, especially prepositions, to express structural or behavioral distinctions central to their design. To complement the significance testing, we examine Cliff’s Delta as a nonparametric effect size estimate, plotted in Figure 4. This allows us to assess not only whether closed-category usage differs between system types, but also how strongly. The results show that prepositions and digits increasingly favor domain-specific systems at higher thresholds, reaching small to medium effect sizes. Determiners, by contrast, exhibit weak or even negative effect sizes, suggesting a more general-purpose usage profile. Conjunctions remain close to the negligible–small range, with mild domain skew. These patterns reinforce the idea that domain-specific systems differ not just in which closed-category terms they use, but in how salient those terms are among their most widely reused identifiers.

Summary. Our findings suggest that domain-specific systems tend to use closed-category terms more frequently than general-purpose baselines, particularly in ways that align with the communicative roles captured by our Selective Codes. While we rely on predefined lists of closed-category terms—without verifying each term’s function in context—our goal in this evaluation was not to establish definitive usage, but to assess whether these terms might play a heightened role in domain-specific software. The statistical results support that possibility. As such, we argue that further research into how closed-category terms contribute to domain-specific expression is both warranted and promising. These findings offer initial evidence that supporting developers in the effective use of such terms could benefit certain styles or domains of software development.
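The Cliff’s Delta estimate used above reduces to a pairwise comparison between the two groups of systems. A minimal sketch, with hypothetical normalized frequencies and the commonly cited magnitude thresholds of Romano et al.:

```python
def cliffs_delta(x, y):
    """Cliff's delta: P(x > y) - P(x < y), ranges over [-1, 1]."""
    gt = sum(1 for xv in x for yv in y if xv > yv)
    lt = sum(1 for xv in x for yv in y if xv < yv)
    return (gt - lt) / (len(x) * len(y))

def magnitude(d):
    """Commonly cited interpretation thresholds (Romano et al.)."""
    d = abs(d)
    if d < 0.147:
        return "negligible"
    if d < 0.33:
        return "small"
    if d < 0.474:
        return "medium"
    return "large"

# Hypothetical normalized closed-category term frequencies per system
domain_specific = [0.031, 0.042, 0.038, 0.029, 0.045]
general_purpose = [0.018, 0.022, 0.015, 0.027, 0.020]

d = cliffs_delta(domain_specific, general_purpose)
print(d, magnitude(d))  # 1.0 large: every domain system outranks every general one
```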
# 6.2 Summary of RQ2

For RQ2, we examined how closed-category terms correlate with multiple forms of context: (1) source-code-local structure, (2) programming language, and (3) broader system domain. Our findings reveal several consistent trends. First, there is no statistically significant difference in the distribution of closed-category terms across the three programming languages under study, though some trends indicate how they may differ in minor (i.e., non-statistically-significant) ways. Second, source code context plays a significant role: prepositions and conjunctions are used disproportionately in function names; digits are significantly positively correlated with parameters and class names while significantly negatively correlated with function names; and determiners are significantly negatively correlated with class names. These patterns align with the communicative roles uncovered in our Selective Codes, such as the use of prepositions to express behavior or data flow, and digits to distinguish instances or versions. Finally, we found statistical evidence that domain-specific systems use closed-category terms more frequently than general-purpose ones. This suggests that these terms serve as meaningful signals of domain-relevant behavior. Taken together, our results demonstrate that closed-category terms have specific, purposeful usage in software development. One of the broader aims of RQ2 is to assess whether closed-category terms are meaningful enough to warrant dedicated study. We argue that their statistically significant correlations with specific code contexts support this aim: such terms appear deliberately and consistently in ways that reflect their natural language functions.
While our domain-level comparison relies on predefined lists of closed-category terms, without manual verification of each term’s grammatical role, the results nonetheless suggest that these terms may hold particular communicative importance in domain-specific software. This exploratory finding reinforces the potential value of further study. Supporting the appropriate use of closed-category terms through tools, naming conventions, or educational interventions may ultimately benefit program comprehension and internal quality, particularly in domains where such terms help convey behavioral intent.

# 7 Related Work

While there are numerous studies on identifier names, this paper represents one of the few to address closed-category terms, and the only one to study their usage in open source systems in depth. We discuss relevant related literature below, and how our work can be improved by, or improve upon, their outcomes.

# 7.1 Part of Speech Taggers

POSSE [29], SWUM [31], and the SCANL tagger [54] are part-of-speech taggers created specifically to be run on software identifiers; they are built to handle the specialized context in which identifiers appear. Both POSSE and SWUM take advantage of static analysis to provide annotations. For example, they will look at the return type of a function to determine whether the word set is a noun or a verb. Additionally, they are both aware of common naming structures in identifier names. For example, methods are more likely to contain a verb in certain positions within their name (e.g., at the beginning) [29, 31]. They leverage this information to help determine which PoS to assign different words. Newman et al. [52] compared these taggers on a larger dataset than their original evaluation (1,335 identifiers) using five identifier categories: function, class, attribute, parameter, and declaration statement. They found that SWUM was the most accurate overall, with an average accuracy around 59.4% at the identifier level. Later, Newman et al. created a new tagger that ensembled SWUM, POSSE, and Stanford together, then compared it with SWUM, POSSE, and Stanford [69] individually, finding that the ensembled tagger exceeded the others’ performance on identifiers [54].

# 7.2 Human-subjects studies

Several studies use human subjects to understand the influence and importance of different characteristics of identifiers. Our work is largely complementary to these studies, as it can be used in conjunction with their data to create and support naming techniques. Reem et al. [5] surveyed 1,100 professional developers, shedding light on developer preferences and practices with respect to the content of identifier names, including the use of abbreviations and preferred identifier length. Feitelson et al. [24] studied how the information content of identifier names affected their memorability, and concluded that short names containing focused information are likely optimal. Felienne et al. [70] find, among other things, that while instructors agree on the importance of naming, their teaching practices differ. Even internally, teachers are generally inconsistent in how they teach and practice identifier naming in the classroom. The results of their study highlight the importance of increasing our formal understanding of naming, which can help increase and support consistency of teaching materials and practices in the classroom.

# 7.3 Rename Analysis

Arnaoudova et al. [9] present an approach to analyze and classify identifier renamings. The authors show the impact of proper naming on minimizing software development effort and find that 68% of developers think recommending identifier names would be useful. They also defined a catalog of linguistic anti-patterns [8]. Liu et al. [43] proposed an approach that recommends a batch of rename operations to code elements closely related to the rename.
They also studied the relationship between argument and parameter names to detect naming anomalies and suggest renames [44]. Peruma et al. [59] studied how terms in an identifier change and contextualized these changes by analyzing commit messages using a topic modeler. They later extended this work to include refactorings [60] and data type changes [61] that co-occur with renames. Osumi et al. [56] studied terms that were co-renamed with the goal of supporting developers in deciding when identifiers should be renamed together. In particular, they studied how location, data dependencies, type relationships, and inflections affected co-renaming. These techniques are concerned with examining the structure and semantics of names as they evolve through renames. By contrast, we present the structure and semantics of names as they stand at a single point in the version history of a set of systems. Rename analysis and our work are complementary; our analysis of naming structure can be used to help improve how these techniques analyze changes between two versions of a name by examining changes in their grammar pattern. In particular, since we specifically study closed-category terms, rename analysis can leverage our results to improve its behavior on identifiers that contain these terms. For example, they might use our results to determine when to recommend a closed-category term during a rename operation.

# 7.4 Identifier Type and Name Generation

There are many recent approaches to appraising identifier names for variables, functions, and classes. Kashiwabara et al. [40] use association rule mining to identify verbs that might be good candidates for use in method names. Abebe [2] uses an ontology that models the word relationships within a piece of software. Saeed et al. [57] vectorize methods based on metrics and use the K-Nearest Neighbors algorithm with these vectors, and a large data set of methods, to recommend method names. Allamanis et al.
[4] introduce a novel language model called the Subtoken Context Model. There has also been work to reverse engineer data types from identifiers [30, 46]. One thing these approaches have in common is the use of frequent tokens and source code context to try to generate high-quality identifier names (or to understand their behavior for the purpose of generating types). There is a lot of work in this subfield, but the contrast with our work remains the same for all of them: these approaches aim to predict strong identifier names based on history. Our approach can help, since an understanding of common naming structures can support filtering out names that are inappropriate based on their grammatical structure; teach AI-based approaches how to optimize the identifiers they generate, or at least avoid bad grammar structure; or help reverse-engineer the semantics of an identifier name based on its grammatical properties. In addition, automated name generation approaches cannot teach us very much about naming practices on their own, or help us formalize our understanding of strong naming structures and how those can be taught in a classroom. Thus, our work is novel and complementary to identifier name generation approaches.

# 7.5 Software Ontology Creation Using Identifier Names

A lot of work has been done in the area of modeling domain knowledge and word relationships by leveraging identifiers [1, 22, 26, 63, 64]. Abebe and Tonella [1] analyze the effectiveness of information retrieval-based techniques for filtering domain concepts and relations from implementation details. They show that fully automated techniques based on keywords or topics have low performance but that a semi-automated approach can significantly improve results. Falleri et al. present a way to automatically construct a WordNet-like [48] identifier network from software. Their model is based on synonymy, hypernymy, and hyponymy, which are types of relationships between words.
Synonyms are words with similar or equivalent meaning; hyper/hyponyms are words which, relative to one another, have a broader or narrower domain (e.g., dog is a hyponym of animal, and animal is a hypernym of dog). Ratiu and Deissenboeck [64] present a framework for mapping real-world concepts to program elements bi-directionally. They use a set of object-oriented properties (e.g., isA, hasA) to map relationships between program elements and string matching to map these elements to external concepts. This extends two prior works of theirs: one paper on a previous version of their metamodel [22] and a second paper on linking programs to ontologies [63]. Many of these approaches need to split and analyze words found in an identifier in order to connect these identifiers to a model of program semantics (e.g., class hierarchies). All of these approaches rely on identifiers. Many software word ontologies use metadata about words to understand the relationship between different words. There is a synergistic relationship between the work we present here and software ontologies, since stronger ontologies can help us generate and study grammar patterns effectively, and the CCID can help construct stronger software word ontologies. In particular, studying closed-category terms helps strengthen the metadata used to generate an ontology that seeks to map how words are related to one another, or to code behavior.

# 7.6 Identifier Structure and Semantics Analysis

Liblit et al. [42] discuss naming in several programming languages and make observations about how natural language influences the use of words in these languages. Schankin et al. [65] focus on investigating the impact of more informative identifiers on code comprehension. Their findings show the advantage of descriptive, compound identifiers over short single-word ones. Hofmeister et al. [33] compared comprehension of identifiers containing words against identifiers containing letters and/or abbreviations.
Their results show that when identifiers contained only words instead of abbreviations or letters, developer comprehension speed increased by 19% on average. Lawrie et al. [41] conducted a study using three different “levels” of identifiers. The results show that full-word identifiers lead to the best comprehension compared to the other levels studied. Butler’s work [15] extends his previous work on Java class identifiers [14] to show that flawed method identifiers are also associated with low-quality code according to static analysis-based metrics. These papers primarily study the words found in identifiers and how they relate to code behavior or comprehension rather than word metadata (e.g., PoS). Caprile and Tonella [18] analyze the syntax and semantics of function identifiers. They create classes which can be used to understand the behavior of a function, grouping function identifiers by leveraging the words within them to understand some of the semantics of those identifiers. While they do not identify particular grammar patterns, this study does identify grammatical elements in function identifiers, such as noun and verb, and discusses the different roles they play in expressing behavior both independently and in conjunction, using the classes they propose. They also used the classes identified in this previous work to propose methods for restructuring program identifiers [17]. Fry and Shepherd [27, 66] study verb-direct objects to link verbs to the natural-language representation of the entity they act upon, in order to assist in locating action-oriented concerns. The primary concern in this work is identifying the entity (e.g., an object) which a verb is targeting (e.g., the action part of a method name). Høst and Østvold study method names as part of a line of work discussed in Høst’s dissertation [35].
This line of work starts by analyzing a corpus of Java method implementations to establish the meanings of verbs in method names based on method behavior, which they measure using a set of attributes they define [34]. They automatically create a lexicon of verbs that are commonly used by developers and a way to compare verbs in this lexicon by analyzing their program semantics. They build on this work in [37] by using full method names, which they refer to as phrases, and augment their semantic model by considering a richer set of attributes. The outcome is that they were able to aggregate methods by their phrases and derive the semantics behind those phrases using their semantic model, thereby modeling the relationship between method names and method behavior. The phrases they discuss are similar to the general grammar patterns studied in our prior work [52]. They extend this use of phrases by presenting an approach to debug method names [36]. In this work, they designed automated naming rules using method signature elements. They use the phrase refinement from their prior paper, which takes a sequence of PoS tags (i.e., phrases) and concretizes them by substituting real words (e.g., the phrase <verb>-<adjective> might refine to is-empty). They connect these patterns to different method behaviors and use this to determine when a method’s name and implementation do not match. They consider this a naming bug. Finally, in [38], Høst and Østvold analyzed how ambiguous verbs in method names make comprehension of Java programs more difficult. They proposed a way to detect when two or more verbs are synonymous and being used to describe the same behavior in a program, hoping to eliminate these redundancies as well as increase naming consistency and correctness. They perform this detection using two metrics which they introduce, called nominal and semantic entropy.
Høst and Østvold’s work focuses heavily on method naming patterns, connecting these to the implementation of the method to both understand and critique method naming. Butler [16] studied class identifier names and lexical inheritance, analyzing the effect that interfaces or inheritance have on the name of a given class. For example, a class may inherit from a super class or implement a particular interface; sometimes this class will incorporate words from the interface name or inherited class in its name. His study builds on work by Singer and Kirkham [67], who identified a grammar pattern for class names of (adjective)* (noun)+ and studied how class names correlate with micro patterns. Among Butler’s findings, he identifies a number of grammar patterns for class names, namely (noun)+, (adjective)+ (noun)+, (noun)+ (adjective)+ (noun)+, and (verb) (noun)+, and extends these patterns to identify where inherited names and interface names appear in the pattern. The same author also studies Java field, argument, and variable naming structures [13]. Among other results, he identifies noun phrases as the most common pattern for field, argument, and variable names, with verb phrases the second most common. Further, he discusses phrase structures for boolean variables, finding an increase in verb phrases compared to non-boolean variables. Olney [55] compared taggers for accuracy on identifiers, but only on Java method names which were curated to remove ambiguous words (e.g., abbreviations). Binkley et al. [11] studied grammar patterns for attribute names in classes.
They come up with four rules for how to write attribute names:

1. Non-boolean field names should not contain a present-tense verb.
2. Field names should never consist only of a verb.
3. Field names should never consist only of an adjective.
4. Boolean field names should contain a third-person form of the verb “to be” or the auxiliary verb “should”.

Al Madi [3] created a tool for performing lexical analysis of identifier names based on phonological, semantic, and orthographic similarity. Techniques that normalize identifiers, such as those presented by Jingxuan [71] and Hill [32], can help make generating grammar patterns easier by expanding abbreviations into full words that a tagger can recognize more accurately. Aman et al. [7] studied confusing variable pairs, which are variables with very similar names, to understand whether and how they are changed over time, and how pervasive they are. None of the projects in this subsection deal specifically with closed-category grammar patterns, or even terms that fall within a closed PoS category. Many of them, particularly the work on PoS taggers, on grammar patterns in differing contexts, on normalizing identifier names, and on grammatical anti-patterns, are likely mutually synergistic with our work. This is because a stronger understanding of closed-category terms and patterns, and how they relate to program behavior, can help support the style of analysis these works leverage.

# 8 Discussion and Future Work

Our analysis focuses deeply on the semantic roles of closed-category terms, including determiners, prepositions, conjunctions, and digits, as well as the behavioral codes they imply. By applying axial and selective coding to a hand-curated dataset, we uncovered patterns of usage that go beyond surface grammar and reveal how developers embed control flow, intent, and logical relationships into naming conventions.
These findings reveal that closed-category terms serve as purposeful, behaviorally expressive units of meaning that often map directly to behavioral constructs in code. Developers use these compact grammatical forms as cognitive shortcuts to signal structural distinctions, state transitions, and operational logic. This strategic use of language suggests that these terms are important to understand and study.

1. Closed-category terms convey rich and distinct behavioral semantics. We examined the unique roles that each type of closed-category term tends to serve. For instance, determiners often reflect selection (e.g., someItem), temporality (e.g., nextNode), or negation (e.g., noCache); conjunctions signal alternatives, conjunctional guards, or composite actions (e.g., save and close, key or iv); and digits most commonly serve to distinguish, enumerate, or encode system-specific roles (e.g., arg1, Neo4j). These behavioral distinctions have been formalized in our axial coding schema, which can serve as a foundation for recommendation systems, naming audits, or comprehension studies.

2. Closed-category semantics adapt to programming context in ways distinct from natural language. Prepositions that typically express motion or containment in English (e.g., to, from, on) are disproportionately found in function names, where they signal behavior, transformation, or event triggers. Conversely, determiners like no, this, and next are rare in class names but common in parameters or attributes, where positional reasoning and state scoping are essential. This contextual distribution suggests that developers adapt the semantics of closed-category terms to the structural and functional roles of identifiers, shaping how behavior is encoded in code.

3. Closed-category terms serve as lexical scaffolds for program comprehension. These terms often compress multiple behavioral or structural ideas into a compact name.
Whether flagging control flow (e.g., ifEnabled), encoding default values (noVal), or signaling transformation (toString), they act as micro-annotations, embedding traceability, logic, or domain-specific cues directly into identifiers. This use of grammar-as-guidance offers a potentially promising direction for tools aimed at improving name clarity and developer communication.

4. Grammar patterns help us understand form and function. Combining axial and selective codes with grammar patterns reveals both the entities involved (open-category terms) and the behavioral relationship between them (closed-category terms). When the closed term appears at the beginning of a pattern (e.g., DT NM N, P NM N), it typically forms a unary relation, modifying a single operand (e.g., temporal status or type reinterpretation). When it appears between open terms (e.g., N P N, N CJ N), it often encodes a binary relation—linking two operands through behavior or logic (e.g., data flow or disjunction). We also note that function identifiers with a final closed term (e.g., sendTo, readFrom) may also express a binary relation, where the second operand is supplied as a parameter rather than named directly. This is consistent with observations made during data analysis in our prior work [52].

These findings have practical implications for tool builders and educators interested in improving naming support. For example, grammar patterns could be integrated into static analysis or IDE plugins to provide optional, contextual suggestions, highlighting when an identifier follows an uncommon structure or when the pattern contrasts with its code context. This does not imply the name is incorrect, but it may prompt a developer or reviewer to reflect on whether the chosen pattern aligns with the intended semantics.
For instance, encountering a pattern like N CJ N (e.g., dataOrLogger) in a constant declaration could trigger a soft prompt: “This naming pattern is rare in this context—consider whether it clearly communicates its role.” Grammar patterns can also be used to scaffold naming suggestions in LLM-driven tools. Instead of generating identifiers purely from task descriptions, models could be prompted to produce names that instantiate common closed-category structures (e.g., DT NM N, P N), which are frequently associated with behavioral semantics. This approach may help align completions with human naming conventions, while still allowing flexibility in term choice. Beyond tooling, grammar pattern awareness can enhance educational workflows. Instructors could use pattern frequency and semantics to illustrate naming “idioms,” helping students understand how experienced developers encode behavior through compact syntactic forms. Similarly, code review tools might use grammar pattern summaries to draw attention to unconventional naming constructs, offering reviewers an additional signal without enforcing rigid standards. In future work, we plan to evaluate these ideas empirically: measuring whether adherence to common patterns improves comprehension, how grammar scaffolding affects naming quality in LLM-generated identifiers, and whether tools that surface grammar patterns can meaningfully assist developers. In addition, it would be interesting to perform studies similar to the work of Schankin et al. [65] and Hofmeister et al. [33] on how the descriptiveness of names influences comprehension, or of Arnaoudova et al. [8] and Høst and Østvold [36], who looked at how code behavior and naming structure could be used to measure name quality. Specifically, we could study how closed-category terms change or augment the outcomes of their studies.
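The positional reading of grammar patterns described in point 4 above (a leading closed term as a unary modifier, an internal one as a binary link, a trailing one as a binary relation whose second operand is a parameter) can be expressed as a small classifier. This is only a sketch: the tag abbreviations below (DT, P, CJ, D for the closed categories; V, N, NM for open ones) are our assumption about the tag set, not a fixed standard:

```python
# Assumed tag set: determiner, preposition, conjunction, digit are closed.
CLOSED_TAGS = {"DT", "P", "CJ", "D"}

def relation_arity(pattern):
    """Classify a grammar pattern by the position of its first closed-category tag."""
    tags = pattern.split()
    positions = [i for i, t in enumerate(tags) if t in CLOSED_TAGS]
    if not positions:
        return "none"
    i = positions[0]
    if i == 0:
        return "unary"                 # e.g., DT NM N: modifies a single operand
    if i == len(tags) - 1:
        return "binary-via-parameter"  # e.g., V P (sendTo): operand is an argument
    return "binary"                    # e.g., N P N: links two named operands

print(relation_arity("DT NM N"))  # unary
print(relation_arity("N P N"))    # binary
print(relation_arity("V P"))      # binary-via-parameter
```

Such a heuristic could back the soft prompts discussed above, flagging where a pattern's implied arity clashes with its code context.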
# 9 Threats to Validity

Construct Validity: This study is conducted on a manually annotated dataset of 1,275 identifiers containing closed-category terms, the largest of its kind at the time of writing. A potential threat lies in the completeness of our closed-category term list: we relied on a predefined lexicon (Section 4), meaning novel or unlisted terms may be absent from the dataset. However, their absence would likely expand our results rather than refute them. Our identifier sample is restricted to production code, and although we excluded known test files, developers may occasionally include test logic in production files. To mitigate this, we manually reviewed each identifier and its source context. Furthermore, while we used file extensions to distinguish C (.c, .h) from C++ (.cpp, .hpp), these conventions are not absolute. We addressed this by manually validating the source language of each identifier.

Internal Validity: Abbreviations within identifiers were not expanded, which may have caused occasional misinterpretation by annotators. However, annotators had access to the surrounding source code, reducing the risk of misannotation. Grammar pattern tagging and axial coding for each closed-category term were both subject to cross-annotation by three independent annotators and evaluated using Fleiss’ Kappa to assess agreement. We used a grounded-theory approach to develop our behavioral codes. Four coders participated in open and axial coding; during the selective coding phase, one coder proposed all selective codes, which were then validated and refined collaboratively by the other three through discussion until thematic saturation was reached. We used statistical methods to examine correlations between closed-category terms and contextual variables.
We performed two chi-square tests: one to assess correlation between closed-category part-of-speech categories (e.g., Determiner, Preposition) and programming language (Java, C, C++), and another for their correlation with code context (Attribute, Function, Declaration, Parameter, Class), derived automatically via srcML [19]. We applied Bonferroni correction to account for multiple comparisons. A threat to internal validity is the assumption of independence in the chi-square test. If violated, some significance values may be distorted. However, the primary insights of RQ1, which focus on behavioral coding through qualitative analysis, are unaffected by this statistical assumption.

External Validity: Our data includes identifiers from C, C++, and Java, three widely used languages with similar syntactic and object-oriented paradigms. While this helps reduce language-specific bias, our findings may not generalize to other paradigms such as functional or logic-based languages, where naming conventions and code contexts may differ significantly.

Mitigation Strategies: To ensure transparency and reproducibility, the dataset will be made publicly available (Section 11). Annotators were allowed to inspect source code when labeling identifiers, and each identifier was independently annotated twice. Grammar patterns and axial codes were validated by multiple annotators, with inter-rater agreement assessed using Fleiss’ Kappa. We selected a representative sample from 30 software systems, sized to meet a 95% confidence level with a 5% confidence interval. Code context was derived automatically using srcML. Finally, to evaluate whether closed-category term usage varies by domain, we curated a domain-specific dataset (e.g., compilers, databases, networking tools) and compared it against a general-purpose set selected without regard to domain.
We applied a Mann-Whitney U test to compare term frequencies between these groups, normalizing by lines of code to control for system size.
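The comparison described above can be sketched as follows; the per-system counts and KLOC figures are invented, and only the U statistic is computed (no p-value machinery), as a minimal illustration of the normalisation and ranking involved.

```python
# Sketch: Mann-Whitney U statistic on closed-category term frequencies
# normalised by lines of code. All numbers below are hypothetical.

def mann_whitney_u(xs, ys):
    """Rank-based U statistic for group xs, using midranks for ties."""
    combined = sorted((v, g) for g, vals in ((0, xs), (1, ys)) for v in vals)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        mid = (i + 1 + j) / 2  # average of ranks i+1 .. j (ties share a midrank)
        for k in range(i, j):
            ranks.setdefault(combined[k][0], mid)
        i = j
    r1 = sum(ranks[v] for v in xs)
    return r1 - len(xs) * (len(xs) + 1) / 2

# term counts per system, normalised by thousands of lines of code (KLOC)
domain_specific = [c / kloc for c, kloc in [(40, 12), (22, 8), (31, 9)]]
general_purpose = [c / kloc for c, kloc in [(10, 11), (15, 14), (9, 7)]]
print(mann_whitney_u(domain_specific, general_purpose))  # 9.0
```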
Identifier names are crucial components of code, serving as primary clues for developers to understand program behavior. This paper investigates the linguistic structure of identifier names by extending the concept of grammar patterns: representations of the part-of-speech (PoS) sequences that underlie identifier phrases. The specific focus is on closed syntactic categories (e.g., prepositions, conjunctions, determiners), which are rarely studied in software engineering despite their central role in general natural language. The Closed Category Identifier Dataset (CCID) is presented, a new manually annotated dataset of 1,275 identifiers drawn from 30 open-source systems. The relationship between closed-category grammar patterns and program behavior is analyzed using grounded theory coding, statistical analysis, and pattern analysis. The results reveal recurring structures that developers use to express control flow, data transformation, temporal reasoning, and behavioral roles through naming. This study contributes an empirical foundation for understanding how developers adapt linguistic resources to encode behavior in source code. By analyzing closed-category terms and their associated grammar patterns, the work highlights a previously underexplored dimension of identifier semantics and identifies promising directions for future research in naming support, comprehension, and education.
[ "cs.SE" ]
# Introduction

Ensuring computational reproducibility is increasingly recognized as a cornerstone of credible scientific research (Peng 2011; Seibold et al. 2021; National Academies of Sciences and Medicine 2019). Several works, beginning in the 1990s (Claerbout and Karrenbach 1992), have highlighted the importance of making research outputs reproducible and have proposed computational methodologies and best practices to achieve this goal. A fundamental aspect in achieving computational reproducibility is the provision of all necessary materials, including the raw data and the code used to generate the results. Barba (2018) paraphrases Claerbout and Karrenbach (1992) with what is now widely considered the ideal of computationally reproducible research: "An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures."

Figure 1: Our completely automated pipeline osf-to-binder (left) and the analysis results using this pipeline for the StatCodeSearch dataset (right).

Following this goal, Chung-hong Chan, Tim Schatto-Eckrodt, and Johannes Gruber (2024), among others, emphasise that sharing of all code and data is crucial for transparency and forms the basis for computational reproducibility. To facilitate the execution of their shared code, researchers must pay close attention to documenting their computational environment thoroughly. This includes explicitly listing all software dependencies and their versions, using files like requirements.txt or including detailed information about their used environment using commands such as sessionInfo() in R.
Containerisation technologies such as Docker, which use a Dockerfile to create consistent and isolated environments across different systems (e.g., different operating systems), are another widely accepted approach in the literature on computational reproducibility (Boettiger 2015). Furthermore, Schoch et al. (2024) emphasise that external dependencies such as online APIs are also part of the potentially changing environment, which can undermine computational reproducibility. Achieving reproducibility requires a multifaceted, proactive approach that includes transparent sharing of materials and thorough documentation of the computational environment by the authors. Sandve et al. (2013) and Kohrs et al. (2023) condense these requirements into basic rules for reproducible computational research. Achieving computational reproducibility should not be assumed; it requires external verification. Hardwicke et al. (2020) manually examined 250 articles from the social science literature and found that fewer than 3% made their analysis scripts available. Rainey et al. (2025) found that only about 12% of quantitative research articles provided access to both the data and the code. Trisovic et al. (2022) executed R code from replication datasets hosted on the Harvard Dataverse repository in a clean runtime environment and found that 74% of R files failed to complete without error. Pimentel et al. (2019) and Samuel and Mietchen (2024) examined the reproducibility of Jupyter notebooks, mostly written in Python, and found that only 24% and 11.6% respectively ran without errors in a fully automated analysis. Chung-hong Chan, Tim Schatto-Eckrodt, and Johannes Gruber (2024) tested the reproducibility of 30 papers and found that, even after manual restoration of the code, at least 20% were only partially reproducible. Furthermore, both practitioners (Lasser 2020; Nüst and Eglen 2021) and guides (Arnold et al.
2019; Bleier 2025) emphasise the role of services like MyBinder in enabling authors to share their analysis scripts in a way that allows for easy verification by others. In this work, we empirically test the computational reproducibility of 296 R code supplements published as projects on the Open Science Framework (OSF) repository. However, unlike earlier approaches (Trisovic et al. 2022) that used a clean runtime or manual intervention (Chung-hong Chan, Tim Schatto-Eckrodt, and Johannes Gruber 2024) to establish reproducibility, we apply an automated approach to infer and extract dependencies that are necessary for a successful execution. Our work is guided by the following research questions: At what rate are we able to verify the computational reproducibility of the submissions published on OSF? Can automatic dependency inference aid in successful re-execution? Can a statistical analysis of replication failure modes inform recommendations on best practices for the publication of code supplements, and if so, what are these best practices?

# Methodology

The starting point for this study was the StatCodeSearch dataset, which is part of the GenCodeSearchNet benchmark suite (Diera et al. 2023). This dataset, available on HuggingFace, consists of code-comment pairs extracted from R scripts hosted on the Open Science Framework (OSF). It focuses specifically on R projects in the social sciences and psychology, particularly those involving statistical analysis. The dataset contains 1,070 code-comment pairs drawn from 558 unique R scripts across 296 distinct OSF projects. While the dataset is organized at the level of individual code-comment pairs, our goal is to reconstruct interactive, reproducible computational environments at the project level. To achieve this, we used the project identifiers provided in the dataset to retrieve the corresponding research materials.
We then employed the OSFClient API to download the full contents of each associated OSF repository. An initial verification step revealed that, out of the 558 R code files referenced across 296 OSF projects in the StatCodeSearch dataset, 63 files from 32 distinct projects were no longer accessible through their original OSF directories. This outcome suggests that a portion of the dataset had become outdated, likely due to file deletions, renaming, or changes to project access permissions on the OSF platform following the initial data collection. While OSF supports the creation of immutable, timestamped project snapshots through its registration feature, our analysis found that only 58 out of 296 projects had used registrations, and only 49 of those preserved the files referenced in the dataset. Moreover, registered snapshots are not automatically created or mandatory, and their selective use makes it difficult to systematically recover the original state of all materials. The lack of widespread adoption of OSF registrations and the absence of robust version control systems (such as those provided by Git) make it challenging to replicate the computational environment used in these studies at the time of publication. Following the identification and removal of unresolvable file references from the initial dataset, the remaining 264 projects were examined for files that could support the reproduction of the original analyses. The downloaded project contents were systematically searched for reproducibility-relevant files that document the computational R environment, including renv.lock, sessionInfo.txt, sessionInfo.RData, .Rprofile, DESCRIPTION, dependencies.R, dependency.R, Dockerfile, environment.yml, and install.R. To enable the automated execution and validation of project files associated with the GenCodeSearchNet dataset, we developed an automated pipeline, osf-to-binder, which is publicly available on GitHub.
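The file search just described can be sketched as a simple directory walk; the helper below is an illustrative stand-in for the pipeline's actual scan, with the file list taken from the text.

```python
# Sketch: walk a downloaded OSF project and report which
# reproducibility-relevant files are present.
import os

REPRO_FILES = {
    "renv.lock", "sessionInfo.txt", "sessionInfo.RData", ".Rprofile",
    "DESCRIPTION", "dependencies.R", "dependency.R", "Dockerfile",
    "environment.yml", "install.R",
}

def scan_project(root):
    """Return the sorted list of reproducibility files found under root."""
    found = set()
    for _dirpath, _dirnames, filenames in os.walk(root):
        found |= REPRO_FILES & set(filenames)
    return sorted(found)
```

Running such a scan over each downloaded project directory yields the per-project presence counts of the kind summarised in Table 1.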
The goal of this pipeline is to generate verifiably reproducible computational environments directly from the source code of scientific publications hosted on OSF. The osf-to-binder pipeline operates through the following steps (see Figure 1):

• Project Retrieval: Given one or more OSF project identifiers, the pipeline automatically downloads and unpacks the entire file storage associated with each project.
• Dependency Extraction: For projects containing R scripts, the pipeline employs flowR (Sihler and Tichy 2024), a static dataflow analyser and program slicer, to automatically extract dependencies.
• Docker Configuration: The extracted R dependencies are used to generate a DESCRIPTION file, an R package metadata file that is essential for specifying dependencies in Docker-based environments.
• Containerisation: Using repo2docker (Forde et al. 2018), the pipeline builds a Docker container based on the project directory. It scans the repository for standard configuration files (e.g., DESCRIPTION) and creates a runnable Docker image accordingly.
• Code Execution: Within the built container, the pipeline executes all identified R scripts in a fully isolated and dependency-managed environment.
• Logging and open validation: Execution results and logs are recorded to ensure transparency and support both internal and external validation.
• Publication: To support open reproducibility, the resulting Docker image is published to a container registry (DockerHub), and the project code is made available via a version control system (GitHub). Additionally, we generate a MyBinder (Ragan-Kelley et al. 2018) launch link, enabling users to run the environment in a remote RStudio instance without any local setup.

By automating dependency extraction, environment configuration, containerisation, and execution, the osf-to-binder pipeline offers a scalable and transparent approach to enhancing computational reproducibility for OSF-hosted research projects.
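The Docker Configuration step can be sketched as follows; the DESCRIPTION field values and the package-name filter are illustrative assumptions rather than the pipeline's exact rules.

```python
# Sketch: turn a list of dependencies extracted by a static analyser into a
# minimal DESCRIPTION file, dropping entries that are clearly not R package
# names. Field values are illustrative placeholders.
import re

def write_description(deps):
    # keep only plausible package names; drop NULL/unknown, bare numbers,
    # and scoped references such as knitr::opts_chunk
    valid = [d for d in deps
             if re.fullmatch(r"[A-Za-z][A-Za-z0-9.]+", d)
             and d not in {"NULL", "unknown"}]
    lines = [
        "Package: osfproject",
        "Version: 0.0.1",
        "Imports: " + ", ".join(sorted(set(valid))),
    ]
    return "\n".join(lines)

print(write_description(["dplyr", "ggplot2", "NULL", "0", "knitr::opts_chunk"]))
```

The resulting file can then be handed to repo2docker, which recognises DESCRIPTION as a standard configuration file and installs the listed packages into the image.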
It thereby supports broader efforts toward open and verifiable science.

# Results

An analysis of the remaining 264 OSF projects, identified after excluding those for which the referenced R scripts could not be located during initial verification, revealed a limited presence of files commonly associated with computational reproducibility. The results are detailed in Table 1. These findings highlight the current state of explicit reproducibility provisions within the examined subset of OSF R projects. The scarcity of these files suggests that many projects may lack readily available instructions or specifications for recreating computational environments.

Table 1: Presence of reproducibility-related files in the 264 analysed OSF projects.

# Containerisation success and failures

Of the 264 projects processed by the pipeline, containerisation failed for 15, comprising 35 R scripts (around 5% of the total). The main reasons included:

• Malformed or incomplete DESCRIPTION files: Generated from flowR outputs, these often lacked required fields or had formatting errors, rendering them invalid.
• Incorrectly extracted dependencies: In several cases, flowR misidentified variables, file paths, or internal objects as package names, resulting in invalid entries such as unknown, NULL, or stray numeric values.
• Invalid scoped package references: Calls like knitr::opts_chunk were mistakenly treated as standalone packages, which are not installable.
• Unavailable or incompatible packages: Some projects listed packages such as crsh/papaja, DF network, and swfscMisc, which were either not available for the R version in the container or required system-level libraries that were not included.
• Failed package installation: These issues caused devtools::install_local(getwd()) to fail, stopping container creation during the repo2docker build.
The remaining 249 projects were successfully containerised, yielding 460 R scripts, and demonstrating the pipeline’s ability to automatically generate and build Docker images for a majority of the analysed OSF R projects. A key constraint during the publication step was GitHub’s 100MB per-file size limit, which prevented 23 of these projects (51 scripts) from being pushed to the repository. While this limited their accessibility via Git-based platforms such as Binder, the scripts were still containerised and included in the execution analysis, maintaining the total number of executed scripts at 460.

# Code execution within containerised environments

Following successful containerisation, 460 R scripts from 249 OSF projects were executed. Of these, 119 scripts (25.87%) completed successfully without critical errors, while the remaining 341 scripts (74.13%) failed, highlighting persistent challenges in computational reproducibility. Among the successful scripts, 51 came from 40 projects (16.06% of all 249 projects) in which all scripts executed without failure, indicating full project-level reproducibility. The other 68 successful scripts were from 34 projects (13.65%) that also included at least one failed script, reflecting partial reproducibility. The remaining 175 projects (70.28%) had no successfully executed scripts. To analyse and interpret script execution failures, a two-level classification approach was implemented. At the first level, regular expression patterns were used to extract common error messages and map them to initial error types, such as missing objects, function call issues, invalid paths, or package loading problems. In the second level, semantically similar errors were grouped into broader categories. Errors such as “object not found,” “could not find function,” and “unexported object access” were grouped under Missing Object or Function (18.2%), which also includes references to undefined variables or function calls made without properly loaded packages. Errors involving non-existent file or folder paths, hardcoded directories, or failed attempts to change the working directory were classified as Invalid File or Directory Path (19.1%). Typical cases include “cannot open file,” “failed to search directory,” or “directory already exists,” reflecting file system access issues. Many errors stemmed from scripts referencing missing datasets or using absolute paths, both of which hinder reproducibility in containerised or remote environments. Package-related issues were among the most frequent failure sources, with Missing Package errors accounting for 26.1% of failed scripts. This category includes unmet dependencies due to missing, outdated, or deprecated packages, or libraries implicitly required but not declared. Separately, errors during installation, such as broken dependencies, compilation failures, or loading issues, were categorized as Package Installation Failure (8.2%), often signaled by messages like “unable to install packages,” “lazy loading failed,” or “package or namespace load failed.” System-level problems related to shared object files or display devices were categorized under Shared Library Load Error (8.5%). These include failures such as “unable to load shared object” or GUI-dependent functions failing in headless environments (e.g., “unable to start data viewer”). Direct file access errors, such as failures when trying to read a file, were grouped under File Read Error (7.9%). These typically manifested as errors in reading .rds, .csv, or other files due to missing or incorrect paths. The Other Errors category, accounting for 12% of all failed scripts, includes less frequent but collectively significant error types.
A substantial portion involved RStudio Environment Errors and Compressed File Not Found errors, often caused by assumptions about interactive sessions (e.g., active RStudio) or attempts to load missing .rds or compressed files. Less common issues included Syntax or Argument Errors, Encoding and String Handling Problems, Missing Arguments in setwd(), and Data Structure Mismatches. Though individually rare, these represent a long tail of reproducibility challenges in real-world R scripts. This structured classification enabled the distillation of hundreds of distinct error messages into a concise set of meaningful categories, as visualized in the final breakdown of execution errors (see Figure 2).

Figure 2: Number of failed scripts (unique) per category: Missing Package 89 (26.1%); Invalid File or Directory Path 65 (19.1%); Missing Object or Function 62 (18.2%); Shared Library Load Error 29 (8.5%); Package Installation Failure 28 (8.2%); File Read Error 27 (7.9%); Other Errors 41 (12.0%).
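A minimal sketch of the two-level classification, with illustrative regular expressions rather than the study's full rule set:

```python
# Sketch: two-level error classification. First-level regexes map raw R
# error messages to fine-grained types; a second-level table groups those
# into the broader categories reported above. Patterns are illustrative.
import re

FIRST_LEVEL = [
    (r"object .* not found|could not find function", "missing_object"),
    (r"there is no package called|package .* not available", "missing_package"),
    (r"cannot open file|failed to search directory", "invalid_path"),
    (r"unable to load shared object", "shared_lib"),
]
SECOND_LEVEL = {
    "missing_object": "Missing Object or Function",
    "missing_package": "Missing Package",
    "invalid_path": "Invalid File or Directory Path",
    "shared_lib": "Shared Library Load Error",
}

def classify(message):
    for pattern, error_type in FIRST_LEVEL:
        if re.search(pattern, message):
            return SECOND_LEVEL[error_type]
    return "Other Errors"

print(classify("Error: object 'df' not found"))          # Missing Object or Function
print(classify("there is no package called 'tidyverse'"))  # Missing Package
```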
Computational reproducibility is fundamental to scientific research, yet many published code supplements lack the necessary documentation to recreate their computational environments. While researchers increasingly share code alongside publications, the actual reproducibility of these materials remains poorly understood. In this work, we assess the computational reproducibility of 296 R projects using the StatCodeSearch dataset. Of these, only 264 were still retrievable, and 98.8% lacked formal dependency descriptions required for successful execution. To address this, we developed an automated pipeline that reconstructs computational environments directly from project source code. Applying this pipeline, we executed the R scripts within custom Docker containers and found that 25.87% completed successfully without error. We conducted a detailed analysis of execution failures, identifying reproducibility barriers such as undeclared dependencies, invalid file paths, and system-level issues. Our findings show that automated dependency inference and containerisation can support scalable verification of computational reproducibility and help identify practical obstacles to code reuse and transparency in scientific research.
[ "cs.CY", "cs.SE" ]
# 1 Introduction

Tokenization is a crucial component of nearly all modern language models: it allows them to consume and produce arbitrary streams of text using only finite vocabularies. The vast majority of tokenizers in use today, such as those based on Byte-Pair Encoding (BPE) [57] or Unigram [27], feature tokens spanning multiple bytes or characters, allowing them to represent text more efficiently than purely byte-level or character-level tokenization [12, 75, 71]. Users of LMs are generally unaware of the tokenization and expect LMs to operate on strings, consuming a prompt as a string and producing a useful string completion thereof. Tokenized LMs approximate this by (i) encoding the text as a sequence of tokens, (ii) feeding the resulting sequence to the language model, and (iii) decoding the generated token sequence back into text. More precisely, let prompt $\in \Sigma^*$ be a string of arbitrary length over some alphabet $\Sigma$, and let encode: $\Sigma^* \to V^*$ and decode: $V^* \to \Sigma^*$ represent the translation between strings and token sequences over a vocabulary $V$. To complete the prompt, a typical scheme is to sample from the distribution,
$$ \mathsf{P}(t_1, \ldots, t_n \mid [t_1, \ldots, t_k] = \mathrm{encode}(\mathrm{prompt})), $$
where encode(prompt) is the tokenization of the prompt, which in this example has length $k$.
Note that sampling from this distribution can be done very conveniently by following the three steps above when the model has an autoregressive structure, i.e.,
$$ \mathsf{P}(t_{k+1}, \ldots, t_n \mid t_1, \ldots, t_k) = \prod_{i=k+1}^{n} \mathsf{P}(t_i \mid t_1, \ldots, t_{i-1}), $$
which is used to sample the completion from $\mathsf{P}(t_{k+1}, \ldots, t_n \mid t_1, \ldots, t_k)$ given the tokenized prompt $[t_1, \ldots, t_k]$. We then return $\mathrm{decode}(t_1, \ldots, t_n)$ to the user. For the most part, this process happens transparently to the user, but under certain circumstances it can introduce distortion to the language model’s completions, as we are about to explain.

The Prompt Boundary Problem (PBP). To be precise, Eq. (1) introduces distortion whenever the prompt ends on a prefix of what could otherwise be a single token. More concretely, consider LLAMA-3.2-1B and suppose the user’s prompt ends with the text “becau” (["bec" = 17106, "au" = 2933] as tokens): The user most likely expects the continuation to begin with “se” (325), since “because” is a common word. However, during training, the model has only ever seen the word “because” represented as a single token (11458) and never as the sequence [17106, 2933, 325].
Accordingly, the actual next token LLAMA-3.2-1B predicts is token 89, which, while plausible in some scenarios, is an arguably unlikely continuation representing an artifact of tokenization.

Figure 1: Further examples contrasting naive generation with byte-level sampling: olmo.generate(tok.encode("This a tes")) → "erstor", while ByteSampler(olmo, "This is a tes") → "t"; qwen.generate(tok.encode("日本的首都是东京,中国的首都")) → "也是北京" ("also is Beijing"), while ByteSampler(qwen, "日本的首都是东京,中国的首都") → "是北京" ("is Beijing"), the prompt meaning "Japan's capital is Tokyo, China's capital"; olmo.generate(tok.encode("document.getElement")) → "('div')", while ByteSampler(olmo, "document.getElement") → "ById('button')".

While this example may seem contrived at first glance, there are many situations where this problem may arise (Fig. 1 shows a few more examples):

1. In languages that do not separate words with whitespace, such as Chinese and Japanese, tokens can span multiple words, so this issue can arise even when the prompt ends with a complete word.
2. Any tokenizer that features multi-word tokens, which can bring gains in encoding efficiency [18, 29, 34], suffers from the same problem as Chinese and Japanese.
3. When completing code, it is common to request completions while in the middle of an identifier [23].
4. This issue also occurs when performing constrained generation from language models [54].
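The gap between token-prefix conditioning (Eq. (1)) and byte-prefix conditioning can be made concrete with a toy model; the vocabulary, probabilities, and greedy tokenizer below are invented for illustration and stand in for a real LM and its BPE tokenizer.

```python
# Toy illustration: token-prefix vs byte-prefix conditioning for the prompt
# "bec". Tokens are drawn i.i.d. and sequences have length 2; all numbers
# are invented.
from itertools import product

VOCAB = {"be": 0.3, "bec": 0.2, "cause": 0.3, "ause": 0.2}

def encode(text):
    """Greedy longest-match tokenizer (stand-in for BPE)."""
    tokens, i = [], 0
    while i < len(text):
        match = max((t for t in VOCAB if text.startswith(t, i)), key=len)
        tokens.append(match)
        i += len(match)
    return tokens

def conditionals(prompt, n=2):
    """Distributions over the completion string under both conditionings."""
    token_level, byte_level = {}, {}
    prompt_tokens = encode(prompt)
    for seq in product(VOCAB, repeat=n):
        p = 1.0
        for t in seq:
            p *= VOCAB[t]
        text = "".join(seq)
        rest = text[len(prompt):]
        if list(seq[:len(prompt_tokens)]) == prompt_tokens:  # token prefix
            token_level[rest] = token_level.get(rest, 0.0) + p
        if text.startswith(prompt):                          # byte prefix
            byte_level[rest] = byte_level.get(rest, 0.0) + p
    normalize = lambda d: {s: v / sum(d.values()) for s, v in d.items()}
    return normalize(token_level), normalize(byte_level)

tok, byt = conditionals("bec")
print(round(tok["ause"], 3), round(byt["ause"], 3))  # 0.2 0.448
```

Byte-prefix conditioning also counts the segmentation ["be", "cause"], whose decoding starts with "bec", so the completion "ause" receives substantially more mass than under token-prefix conditioning.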
In general, the user, unaware of the tokenization, expects samples from the properly conditioned distribution,
$$ \mathsf{P}(t_1, \ldots, t_n \mid \mathrm{prompt} \sqsubseteq \mathrm{decode}(t_1, \ldots, t_n)), $$
where $\sqsubseteq$ denotes the prefix relation. However, the token-prefix conditioned distribution of Eq. (1) and the byte-prefix conditioned distribution of Eq. (2) can differ substantially (e.g., Figure 1). Eq. (2) transcends the arbitrary token boundary set where the user-provided prompt stops, decoupling the prompt boundary from token boundaries, to complete the prompt with the exact distribution from the language model. This leads to a fundamental algorithmic question of interest: how do we sample from the byte-prefix conditioned distribution of Eq. (2) exactly and efficiently?

Contributions. We introduce an efficient procedure to condition a BPE tokenizer-based model on an arbitrary byte-prefix given only access to the tokenizer and log-probability queries to the model (Section 3). We demonstrate in experiments that this represents an exact solution to the Prompt Boundary Problem presented above (Section 4.2). We show that our method can be used to convert the model into a byte-level language model and that this ability can be used to unify the vocabularies of different models. This enables exact byte-level ensembles of language models with different tokenizers (Section 4.3) and allows one to transfer the post-training of one model onto another model at inference time using proxy-tuning [33] (Section 4.4). We demonstrate in proof-of-concept experiments that language model ensembles and proxy-tuned models constructed with our method are able to outperform their constituent models in downstream evaluations.

# 2 Background

In this section we give essential background regarding tokenization as well as prior work addressing the Prompt Boundary Problem.
We discuss additional related works in Appendix A. Table 1: Incremental complexity of various mitigations for the prompt boundary problem: we list the complexity (in both preprocessing time and LM evaluations) when sampling each new character while generating an $n$ character string. Our method has the same complexity as backtracking methods while remaining exact, i.e., distribution matches Eq. (2) modulo invalid sequences (see below for discussion). We report both the original LM inference complexity as originally presented, as well as upper bounds using analysis from Section 3.1 when using prefix caching. “(optimal)” indicates that the token evaluations for any input will be the minimum required for exactness. Byte Pair Encoding. BPE was originally presented as a form of data compression in Gage [16] and was proposed for use in NLP in Sennrich et al. [57]. To tokenize a piece of text with a typical BPE-based tokenizer, the text is first split into chunks, a process called pretokenization. These chunks, or pretokens, are then tokenized separately using BPE (thus no token may cross the boundary between pretokens). The BPE tokenizer processes each pretoken by first converting the text into a sequence of elements of the tokenizer’s base vocabulary (common choices for base vocabulary are individual characters or bytes under UTF-8 encoding). Next, an ordered list of merges is applied to the sequence to form larger tokens. Each merge specifies a contiguous pair of tokens (which may include products of previous merges), and a new token that represents their concatenation. The merges are applied left-to-right and once all valid merges are applied, the tokenization is complete. We show an example application of these steps in Table 2. Table 2: Step-by-step execution of an example BPE tokenizer. Prompt Boundary Problem. Issues surrounding tokenization have been extensively documented in prior work. 
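The merge procedure just described (cf. Table 2) can be sketched with a toy merge list (invented for illustration, not a real tokenizer's):

```python
# Sketch: BPE tokenization of a single pretoken. Starting from characters,
# each merge in the ordered list is applied left-to-right.

MERGES = [("l", "o"), ("lo", "w"), ("e", "r"), ("low", "er")]

def bpe_encode(pretoken):
    seq = list(pretoken)                  # base vocabulary: characters
    for a, b in MERGES:                   # merges in priority order
        i = 0
        while i < len(seq) - 1:
            if seq[i] == a and seq[i + 1] == b:
                seq[i:i + 2] = [a + b]    # replace the pair with its merge
            else:
                i += 1
    return seq

print(bpe_encode("lower"))   # ['lower']
print(bpe_encode("lowest"))  # ['low', 'e', 's', 't']
```

Note how "lowest" stops at ['low', 'e', 's', 't']: the later merges ("e", "r") and ("low", "er") never fire because no "r" follows, mirroring how real BPE vocabularies tokenize unseen word forms into smaller pieces.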
The prompt boundary problem was presented for maximum prefix encoding in Phan et al. [50] and for BPE tokenizers in Vieira et al. [70] and Ribeiro [54]. Many methods have been proposed to address the prompt boundary issue. One line of heuristic techniques, including token healing [54] and its generalizations [13, 2] perform “backtracking” by $( i )$ removing one or more of the most recent tokens, followed by (ii) sampling a continuation of the partial prompt using the language model, constraining the newly generated tokens to match the remaining text. Exact methods, which preserve the sampling distribution of the original language model as shown in (3), have also been proposed. Vieira et al. [70] gave an exact method which requires exponential time as well as an approximate solution leveraging beam search. Turaga [68] proposed a method that combines backtracking with the exponential time method of Vieira et al. [70], adding a “back tokenization” step that significantly reduces the number of necessary calls to the language model, but still requires exponential preprocessing. Additionally, Phan et al. [50] proposed an exact method which requires only linear time. Although all of the above methods, except for Backtracking, are “exact,” they may produce different sampling distributions. This is because the methods differ in their handling of invalid token sequences. An invalid token sequence is one that can never be output by the tokenizer. We make this notion precise in Section 3.1. This is closely related to the concept of marginalization [6]: the idea that calculating the probability of generating a string with a language model requires summing over all segmentations of the string, including invalid ones. Vieira et al. [70] consider all segmentations, valid or not, which corresponds to Eq. (2). 
The method of Turaga [68] and our method condition on valid token sequences, which corresponds to
$$ \mathsf{P}(t_1, \ldots, t_n \mid \mathrm{prompt} \sqsubseteq \mathrm{decode}(t_1, \ldots, t_n), [t_1, \ldots, t_n] \text{ is valid}), $$
and Phan et al. [50] consider a superset of the valid token sequences, giving a distribution “between” Eq. (2) and Eq. (3). Of note, Chirkova et al. [9] found that $\mathsf{P}([t_1, \ldots, t_n] \text{ is not valid})$ makes up a negligible fraction of the language model’s distribution, so these differences should not be significant in practice.

# 3 Method

In this section, we present some simple building blocks and use them to construct a procedure for sampling from a tokenizer-based language model one byte at a time. The fundamental structure of the algorithm is based on what we call the Valid Covering Tree, which is the tree of all possible valid token sequences that share a specific byte prefix and do not extend past the end of the prefix by more than one full token. We show the construction of the Valid Covering Tree in Fig. 2.

Figure 2: Construction of the Valid Covering Tree for string prefix “hypot”: (a) starting with the infinite tree of all possible token sequences (many edges not shown), we prune branches that (b) do not match the given prefix or begin after the prefix ends or (c) contain invalid contiguous pairs of tokens. More example trees are shown in Appendix D.

The tree depicted in Fig. 2b corresponds to the cover described in Vieira et al. [70], who remark that it will generally have exponential size in the length of the prefix. In contrast, the Valid Covering Tree, which is a subtree of the one in Fig. 2b, has several properties which will prove useful: 1. Correctness: It represents exactly the set of conditions for Eq. (3) which makes it the minimum tree sufficient to calculate the distribution described in Eq. (3).
(See Section 3.1) 2. Compactness: The tree is composed of a "trunk" of tokens that are fully determined (starting at the root, every node has only one child) plus a finite number of "branching" nodes at the end of the trunk. (The number is bounded by a constant which depends only on the tokenizer; see Section 3.2) 3. Convenience: The tree can be updated to reflect the addition of a new byte using only constant time and space. (See Algorithm 1) Additional implementation details and optimizations are presented in Appendix C.

# 3.1 Pairwise Validation

Recall that a token sequence is valid if it is the encoding of some string under the BPE encoder. The correctness of the pairwise pruning depends on the following proposition regarding validity under BPE tokenization. Proposition 3.1. Let (encode, decode) denote a BPE encoder and decoder pair corresponding to some merge list $M$ and vocabulary $V$. We call a token sequence $T = [t_1, t_2, \ldots, t_n] \in V^n$ valid if $\mathrm{encode}(\mathrm{decode}(T)) = T$. Then $T$ is valid if and only if $[t_i, t_{i+1}]$ is valid for all $i \in \{1, \ldots, n-1\}$. To see that this proposition is true, consider two valid token sequences $T_1 = \mathrm{encode}(S_1)$ and $T_2 = \mathrm{encode}(S_2)$. If, while tokenizing the concatenation $S_1 \# S_2$, there is no merge applied that crosses the boundary between $S_1$ and $S_2$, then the two strings will "evolve" independently, and we will have $\mathrm{encode}(S_1 \# S_2) = T_1 \# T_2$, which means $T_1 \# T_2$ is valid. Conversely, if a merge is applied that does cross the boundary, then the final encoding must feature a token crossing the boundary (since no merge can be undone), which means $T_1 \# T_2$ cannot be valid since it has no such token.
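The validity definition and the pairwise characterization of Proposition 3.1 can be made concrete with a toy merge list. The following is an illustrative sketch only: the two-merge vocabulary is invented, not a real tokenizer, and the encoder is a naive loop rather than an efficient BPE implementation.

```python
# Toy BPE encoder: merges are applied in priority order until none apply.
MERGES = [("a", "b"), ("ab", "c")]  # invented merge list, highest priority first

def encode(s):
    toks = list(s)
    while True:
        best = None
        for x, y in MERGES:  # try merges in priority order
            for i in range(len(toks) - 1):
                if (toks[i], toks[i + 1]) == (x, y):
                    best = i
                    break
            if best is not None:
                break
        if best is None:
            return toks
        toks[best:best + 2] = [toks[best] + toks[best + 1]]

def decode(toks):
    return "".join(toks)

def is_valid(toks):
    # valid <=> the sequence is the encoder's own output for its decoding
    return encode(decode(toks)) == toks
```

Here `["c", "a", "b"]` is invalid precisely because its adjacent pair `("a", "b")` is invalid (`encode("ab")` produces the single token `"ab"`), while the pair `("c", "a")` is valid — matching the proposition.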
We depict an example of both cases using OpenAI's cl100k tokenizer [47] in Fig. 3. (a) Valid pair: no merge crossing the boundary. (b) Invalid pair: merge $m_{20252}$ crosses the boundary. Figure 3: Example of valid and invalid token pairs. We show the initial string's bytes and the merges $m_t \in M$ that are applied to the string (in order of $t$) to tokenize the string. In the invalid case, merge $m_{53058}$ cannot occur because a conflicting merge $m_{20252}$ was applied earlier. The key observation is that we only need to consider the trajectory at the boundary (in blue) to decide if the pair is valid. This implies a fast method to check whether a pair of tokens is valid: we consider the merge trajectory of each token along the boundary and check whether any conflicting merges would be applied. The worst-case merge tree depth is fixed by the tokenizer, so this check can be done in constant time.

# 3.2 Streaming Tokenization

Given a stream of input bytes, we will use the following approach to update the "branches" of the Valid Covering Tree, while writing the fully determined "trunk" of tokens to an output stream. We next show that this can be done efficiently. To bound the asymptotic behavior, we use the observation of Berglund and van der Merwe [3] that each output token can be fully determined using only a constant amount of lookahead (in bytes), where the constant depends only on the tokenizer. This implies that the branching tree $T$ will have bounded depth, since any token that is fully determined will be removed from the tree and written to the output stream. The branching factor of the tree is also bounded by a constant depending on the tokenizer. Thus, the number of edges of $T$ is bounded by a constant, which means the pruning described in Fig. 2 can be carried out in constant time. For more concrete performance numbers see Section 4.1, where we show that the tree has only 0.72 extra non-leaf nodes on average.
Algorithm 1: Streaming BPE tokenization maintaining a tree matching Fig. 2c

# 3.3 Language Modeling Using Valid Covering Trees

Now that we can easily compute Valid Covering Trees, we can use them to perform various common language modeling operations. To compute the probability of a prefix under the LM, we sum the cumulative probabilities the LM assigns to the sequences represented by all leaves of the tree. To sample a continuation of a prefix, we compute the probability (as above) of every leaf and sample one of them accordingly. We are then free to continue sampling a continuation from that leaf using normal token-level sampling. This can be used to solve the PBP without paying the cost of sampling one byte at a time. To compute the next-byte distribution given a prefix, we group the leaves by the next byte they would entail and sum the probabilities (as above) of the leaves in each group. This can be combined with a sampling rule to generate text one byte at a time. Naturally, this will generate text more slowly than sampling at the token level. We quantify this overhead in Section 4.2. We use "ByteSampler" to refer to this collection of capabilities for convenience.

# 4 Experiments

In our experiments, we apply ByteSampler at inference time to off-the-shelf language models. In Section 4.1 we show that our method has less computational overhead compared to other exact methods. Next, in Section 4.2, we show that exact methods perform better than heuristics in character-level language modeling. Finally, we present several applications of our method to enable higher-level functions such as ensembling (Section 4.3) and proxy-tuning (Section 4.4) models with mismatched tokenizers.

# 4.1 Efficiency

Table 3: Inference cost of various exact solutions to the prompt boundary problem. Our method has 65% less overhead than the next best method. Overhead vs. BPE measures the average additional tokens of inference required by the method, compared to plain BPE.
Importantly, the overhead is paid for each byte when sampling at the byte level, making low overhead crucial for efficient sampling. As discussed in Section 2, there are several existing methods which are also "exact." Although each technically corresponds to a different sampling distribution, we do not expect there to be any significant differences between them in practice. Therefore, the main distinguishing factor to consider is the method's computational cost. To estimate the cost in a realistic setting, we sample a random 100-character substring from the OLMO2 pretraining corpus [46] and estimate how many inference tokens each method requires to calculate the probability of the substring as a text prefix. Note that the substring is sampled uniformly, so it is about 80% likely to end in the middle of a word. We report the average inference cost in tokens, averaged over 10,000 samples, for several methods in Table 3.

# 4.2 Character-Level Language Modeling

Table 4: Language modeling loss of OLMO2-1B on English text using various methods. We compare three settings: (i) the original token-level cross-entropy loss when predicting the next token; (ii) the character-level loss when predicting the next character by directly tokenizing the prompt and calculating the next-character distribution; and (iii) the character-level loss obtained using ByteSampler to predict the next character. The higher loss per unit for token-level prediction is to be expected, as tokens are harder to predict than bytes. Once the loss is normalized to bits per character, our method and the original model achieve similar results, which demonstrates that our method does not degrade language modeling quality.
In this section, we focus on converting off-the-shelf language models into character-level language models. We then evaluate the character-level prediction performance using the standard cross-entropy loss as well as next-character prediction accuracy in two languages: English in Section 4.2.1 and Chinese in Section 4.2.2.

# 4.2.1 OLMO2 for English Text

In this setting, we sample a document randomly from the OLMO2 pretraining corpus [46] and choose a random prefix of the document of length at most 1000 characters. We then compute the next-character distribution according to OLMO2-1B [63] using various methods. To allow comparison with the original token-based model, we also truncate the prefix to the nearest token boundary and perform next-token prediction with the original model. The character-level and token-level losses can be compared after a normalization that accounts for the fact that tokens are more difficult to predict, due to their greater information content, giving a standardized measurement of bits per character [41]. We report the average loss of the predictions over 100,000 such documents in Table 4. From the results in Table 4, we can clearly see the effect of the prompt boundary problem: naively predicting the next character by directly applying the tokenizer to an arbitrary string prefix as in Eq. (1) leads to poor performance ("no mitigation" in Table 4). In contrast, ByteSampler nearly matches the performance of the original token-based model ("plain BPE") in bits per character, as expected for exact methods. For backtracking methods, it is not easy to compute the probability of any particular next character, which prevents us from calculating the cross-entropy loss as in Table 4. For our experiments, we compare to the Token Alignment method of Athiwaratkun et al. [2], which is the most advanced of the proposed backtracking methods and also includes token healing as a special case.
We use it to directly predict the next character by sampling greedily and report the average accuracy over 100,000 samples in Table 5. Table 5: Next-character prediction accuracy of OLMO2-1B on English text using various methods. We compare three settings: (i) directly tokenizing the prompt and greedily sampling until the first character of the completion is determined; (ii) using backtracking with Token Alignment (of which Token Healing is a special case) to predict the next character; and (iii) using ByteSampler to predict the next character. Overhead vs. BPE measures the average additional tokens of inference required by the method, compared to (i). Table 6: Language modeling loss of QWEN3-1.7B-BASE on Chinese text using various methods. We use the same settings and metrics as Table 4. Similarly to our English results, ByteSampler achieves a normalized language modeling loss (in bits per character) similar to that of the original model, which can only perform next-token prediction. Interestingly, we find that too much backtracking hurts the performance of the Token Alignment method. We believe this is because the sampling step often segments the remainder of the prompt in a non-standard way, which may harm the performance of the model.

# 4.2.2 QWEN3 for Chinese Text

Similar to Section 4.2.1, we sample a random prefix of length at most 500 characters of a random document from the Chinese subset of the MADLAD-400 dataset [28]. We then compute the distribution of next characters according to QWEN3-1.7B-BASE [64] using various methods and report the average cross-entropy loss over 100,000 documents in Table 6. Once again, the naive method fails while our method achieves a similar normalized loss to the original token-level model. We also report next-character prediction accuracy to allow comparison with backtracking methods. Note that Chinese has much more entropy at the character level, so the average accuracies will be proportionally lower.
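The bits-per-character normalization used to compare token-level and character-level losses in Tables 4 and 6 can be sketched as follows. The numbers below are placeholders for illustration, not the paper's measured results; the sketch assumes losses are reported in nats.

```python
import math

def bits_per_char(loss_nats, chars_per_unit=1.0):
    """Convert a cross-entropy loss in nats per prediction unit (token or
    character) into bits per character, making the two scales comparable."""
    return loss_nats / (math.log(2) * chars_per_unit)

# placeholder numbers: a token-level loss of 2.8 nats/token at an average of
# 4.4 characters/token vs. a character-level loss of 0.45 nats/character
token_bpc = bits_per_char(2.8, chars_per_unit=4.4)
char_bpc = bits_per_char(0.45)
```

After this normalization, a token-level and a character-level model can be ranked on the same bits-per-character axis.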
# 4.3 Byte-Level Ensemble

Another application enabled by byte-level sampling is the ensembling of language models with different tokenizers. In general, when the vocabularies of LMs are the same, their next-token probability or logit distributions can be combined via arithmetic into a single distribution, but this cannot be done directly when the vocabularies differ. Several works have proposed methods to combine LM predictions despite mismatching vocabularies [25, 38, 35, 72], but these may introduce bias into the sampling distribution. Our method makes the direct ensemble possible by converting models with BPE tokenizers into byte-wise models, thus unifying their vocabularies. In our experiment, we consider an ensemble of three small language models: Qwen3 1.7B Base [64], OLMo 2 1B [46, 63], and Llama 3.2 1B [62]. We combine the predictions by computing the average $\mathbf{p}_{\mathrm{ensemble}} = \frac{1}{n} \sum_{i=1}^{n} \mathbf{p}_i$ where $\mathbf{p}_1, \ldots, \mathbf{p}_n$ are the next-byte probability distributions of the individual models. We evaluate the models on a suite of seven tasks and report the results in Table 8. Table 7: Next-character prediction accuracy of QWEN3-1.7B-BASE on Chinese text using various methods. We use the same settings and metrics as Table 5. Similar to our English-language results, ByteSampler achieves the best prediction accuracy, but unlike in English, ByteSampler also requires the least overhead of all methods. This highlights that languages with multi-byte characters can behave differently from ones which typically use a single byte for each character. Table 8: Byte-level ensemble results. We report the performance (accuracy) of a byte-level ensemble of three models on downstream evals, along with the individual performance of each model.
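The averaging rule $\mathbf{p}_{\mathrm{ensemble}} = \frac{1}{n} \sum_i \mathbf{p}_i$ can be sketched over next-byte distributions. This is a minimal sketch assuming each model has already been converted to a byte-level predictor; the two dictionaries are toy inputs, not real model outputs.

```python
def ensemble_next_byte(dists):
    """Average a list of next-byte distributions (dicts: byte value -> prob)."""
    n = len(dists)
    support = set().union(*dists)  # union of the models' next-byte supports
    return {b: sum(d.get(b, 0.0) for d in dists) / n for b in support}

# two toy byte-level models that partially disagree on the next byte
p1 = {ord("a"): 0.9, ord("b"): 0.1}
p2 = {ord("a"): 0.5, ord("c"): 0.5}
p = ensemble_next_byte([p1, p2])
```

Because each input sums to one, the uniform average also sums to one, so no renormalization is needed.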
We see that the ensemble is competitive with the best individual model on each task and consistently outperforms the average performance across the three models. We give more details regarding the evaluation in Appendix B.2.

# 4.4 Byte-Level Proxy-Tuning

In addition to additive ensembles over probabilities, the logit-level predictions of multiple LMs can be combined via arithmetic, with individual LMs acting as "experts" (if their predictions are combined additively) or "anti-experts" (if subtractively) [32, 31, 59, 19, 11, 58]. In particular, this form of ensembling can be used to achieve the effect of tuning a large pretrained LM without accessing its weights. To see how this can be done, note that for logit vectors we trivially have $$ \ell_{\mathrm{tuned}} = \ell_{\mathrm{base}} + (\ell_{\mathrm{tuned}} - \ell_{\mathrm{base}}). $$ The idea of proxy-tuning [33] is to approximate the term $\ell_{\mathrm{tuned}} - \ell_{\mathrm{base}}$ using the difference between a pair of tuned and base proxy models, $\ell_{\text{expert}} - \ell_{\text{anti-expert}}$. In our experiments, we proxy-tune a strong base model, LLAMA-3.1-8B, using OLMO2-1B-INSTRUCT and OLMO2-1B as the expert and anti-expert, respectively, which together represent a strong post-training recipe [46, 30]. As shown in Table 9, we find that the proxy-tuned LLAMA 3.1 [61] model consistently outperforms both the base model alone and the small tuned expert. This highlights a practical application of ByteSampler: "applying" post-training to base models without actually training them, thus disentangling the quality of the base model from that of the post-training recipe.
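The logit arithmetic behind proxy-tuning can be sketched elementwise over a shared (byte-level) vocabulary. This is an illustrative sketch with toy three-symbol logit vectors, not the paper's implementation.

```python
import math

def proxy_tuned_probs(l_base, l_expert, l_anti):
    """Approximate the tuned model as l_base + (l_expert - l_anti),
    then apply a numerically stable softmax to obtain probabilities."""
    logits = [b + (e - a) for b, e, a in zip(l_base, l_expert, l_anti)]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [x / z for x in exps]

# toy logits: the expert/anti-expert difference boosts the first symbol
probs = proxy_tuned_probs([1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.5, 0.0, 0.0])
```

Here the base logit of the first symbol is shifted by the expert/anti-expert difference (2.0 − 1.5 = 0.5) before sampling.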
Tokenization is used almost universally by modern language models, enabling efficient text representation using multi-byte or multi-character tokens. However, prior work has shown that tokenization can introduce distortion into the model's generations. For example, users are often advised not to end their prompts with a space because it prevents the model from including the space as part of the next token. This Prompt Boundary Problem (PBP) also arises in languages such as Chinese and in code generation, where tokens often do not line up with syntactic boundaries. Additionally, mismatched tokenizers often hinder model composition and interoperability. For example, it is not possible to directly ensemble models with different tokenizers due to their mismatching vocabularies. To address these issues, we present an inference-time method to convert any autoregressive LM with a BPE tokenizer into a character-level or byte-level LM, without changing its generative distribution at the text level. Our method efficiently solves the PBP and is also able to unify the vocabularies of language models with different tokenizers, allowing one to ensemble LMs with different tokenizers at inference time as well as transfer the post-training from one model to another using proxy-tuning. We demonstrate in experiments that the ensemble and proxy-tuned models outperform their constituents on downstream evals.
# 1 INTRODUCTION

Vector similarity search [25, 33, 34, 43–45] aims to identify the most similar vectors from a large dataset given a query vector. Approximate nearest neighbor (ANN) search has emerged as a practical alternative to exact top-$k$ nearest neighbor search, offering significant speedups with minimal accuracy trade-offs. Vector databases leverage ANN search techniques to support efficient retrieval, making them essential for a wide range of ML applications [14, 15], including search engines [22] and recommendation systems [29]. In particular, they play a critical role in powering large language models (LLMs) and retrieval-augmented generation (RAG) systems [10] by enabling fast, high-dimensional similarity searches over massive embedding spaces. Figure 1: Graph-based vector search index: HNSW. In RAG, a vector database retrieves semantically relevant documents based on the user prompt's embedding, allowing LLMs to generate responses with external knowledge rather than being limited to the information encoded in their model parameters. As these AI-driven applications [8, 29] continue to grow, the demand for scalable and high-performance vector search solutions has become increasingly crucial. Disaggregation [35] is gaining attention in cloud computing by separating storage and compute hardware resources, allowing them to scale independently for better flexibility and efficiency. Different resource pools are connected with high-speed networks to realize fast data transfer. For example, RDMA (Remote Direct Memory Access) [11] is one of the high-performance fabrics that enables direct memory access between remote machines, bypassing the CPU to reduce latency and improve throughput. Recent industry efforts, such as DeepSeek's 3FS [3, 5], have demonstrated the benefits of leveraging RDMA for AI training and inference, enabling high-speed remote memory access with low latency.
Inspired by this architectural shift [31, 35], we are motivated to propose an RDMA-based disaggregated vector database designed to improve hardware resource utilization under high-throughput vector queries. Among the various similarity search algorithms [7, 24, 33], graph-based approaches [6, 20] have demonstrated superior performance in both recall and latency. Hierarchical Navigable Small World (HNSW) [20] is a widely adopted graph-based index that balances vector search accuracy and efficiency. Thus, in this work, we introduce d-HNSW, a fast, RDMA-based vector similarity search engine designed for disaggregated memory systems. d-HNSW aims to bridge the gap between high-performance vector similarity search and the emerging disaggregated architecture in the datacenter, ensuring scalability and efficiency in handling high-throughput data queries. The disaggregated memory pool provides abundant memory resources, allowing us to store both the HNSW index and all original floating-point vector data in it. Intuitively, the disaggregated compute instances handle data requests and rely on one-sided RDMA primitives to directly access the index and vectors, bypassing the memory instances' CPUs. However, this approach presents several challenges. (i) The greedy algorithm [40] in HNSW navigates the index by comparing distances to the query vector along a search path, where each node on the graph represents a vector. The traversal path is unpredictable, and if we need to read the vectors at every single step along the path via the network, the number of round trips required for a vector query becomes excessive. To mitigate this, we propose partitioning vectors into groups with a representative index and selectively reading only the partitions that most likely contain the top-$k$ candidates. (ii) All partitions are compactly serialized and written to remote registered memory.
When a new vector is inserted, it needs to be stored in available memory while ensuring fast index access for different partitions. If we allocate a global memory space for inserted vectors, those belonging to the same partition will be scattered across fragmented memory regions, leading to high latency as the RDMA NIC needs to issue multiple network or PCIe round trips. To address this, we propose an RDMA-friendly graph index layout that enables efficient data queries while supporting dynamic vector insertions. (iii) Since we process vector queries in batches [14] and there is limited cache DRAM space in the compute pool, we propose query-aware data loading to reduce partition data loading from the memory pool and save bandwidth by pruning duplicate partition transfers, thereby improving vector query throughput. We implement a prototype of d-HNSW with 12K LoC and evaluate it against other RDMA-based approaches in terms of vector query recall, latency, and throughput across various datasets. The results show d-HNSW outperforms the other baselines by up to $117\times$ on top-10 benchmarking. To the best of our knowledge, d-HNSW is the first vector database designed for RDMA-based disaggregated memory systems. We believe d-HNSW can inspire researchers to explore this area further and propose new solutions to boost the performance of disaggregated vector databases.

# 2 BACKGROUND

# 2.1 Graph-based vector search with HNSW.

Vector similarity search is crucial for efficiently retrieving high-dimensional data in modern ML applications such as RAG [10] for LLMs. Figure 2: Overview of d-HNSW: representative index caching (§3.1), RDMA-friendly graph index layout (§3.2), and batched query-aware data fetch (§3.3) between the compute pool and the memory pool. Traditional methods like KD-trees [24] and LSH [7] struggle with scalability and search accuracy in high-dimensional spaces, leading to the development of graph-based indexing techniques [6, 20].
These methods construct a navigable graph where data points serve as nodes and edges encode proximity relationships, enabling fast traversal during queries. For example, as shown in Fig. 1, HNSW builds a multi-layered graph [20] where upper layers provide a coarse-grained overview for fast entry into the structure, and lower layers refine the search with more densely connected nodes. During a query, the search starts from an entry point and follows a greedy routing strategy, moving to the closest neighbor at each layer. This closest vector then becomes the entry point to the next layer, where greedy routing is performed again toward the queried vector while refining the candidate set. The number of vectors in each layer increases exponentially. By leveraging small-world properties and efficient greedy search heuristics, HNSW significantly improves both recall and query speed compared to earlier graph-based methods, making it one of the most effective ANN search algorithms in modern vector databases.

# 2.2 RDMA-based disaggregated memory.

RDMA technologies (e.g., RoCE [21], InfiniBand [11]) enable reliable, in-order packet delivery, making them well-suited for indexing structures in disaggregated memory systems. RDMA supports READ/WRITE operations for fetching and writing data directly in remote memory without CPU involvement, while atomic operations such as Compare-And-Swap (CAS) and Fetch-And-Add (FAA) enable efficient, lock-free data access. Designing an efficient indexing data structure tailored for RDMA-based remote memory applications can reduce system computation overheads, minimize network round trips, and realize data access with low latency.

# 3 d-HNSW DESIGN

We present d-HNSW, an RDMA-based vector similarity search engine on disaggregated memory. d-HNSW exploits the characteristics of RDMA-based memory access and the graph-based index HNSW to realize fast and bandwidth-efficient vector query processing.
d-HNSW achieves this through representative index caching (§3.1), RDMA-friendly graph index storage in remote memory (§3.2), and query-aware batched data loading (§3.3). Here, we provide a brief overview of d-HNSW. As Fig. 2 shows, d-HNSW requires tailored coordination between compute instances and memory instances for vector query serving. We assume the client load balancer distributes the workload across multiple CPU instances. The compute and memory pools are interconnected via RDMA, enabling efficient transfer of vector indices and data. We target the disaggregated scenario where compute pools contain abundant CPU resources across many instances, each with limited DRAM serving as a cache, while memory instances have extremely weak computational power, handling only lightweight memory registration tasks. Figure 3: Representative index caching in d-HNSW. Figure 4: RDMA-friendly sub-HNSW indexing data layout in remote memory.

# 3.1 Representative index caching.

Graph-based vector search schemes [6, 20] rely on greedy routing to iteratively navigate toward the queried vector. However, the search path can span the entire graph, potentially covering distant vectors. For example, HNSW exhibits small-world properties, allowing long-range connections between vectors that are far apart. However, loading the entire graph index from the memory pool to the compute pool for each query is impractical, because the compute pool has limited storage resources in a disaggregated system. This approach would not only consume excessive bandwidth by transferring a significant portion of untraversed vectors but also introduce additional latency, thereby degrading the overall search efficiency. We propose partitioning the vector database into multiple subsets, as shown in Fig. 3. Inspired by Pyramid [4], we construct a three-layer representative HNSW, referred to as meta-HNSW, by uniformly selecting 500 vectors.
This meta-HNSW serves as a lightweight index and a cluster classifier for the entire dataset, and in our experiments it only costs 0.373 MB for the SIFT1M and 1.960 MB for the GIST1M dataset. The search process starts from a fixed entry point in the top layer $L_2$ of meta-HNSW and applies greedy routing at each layer, traversing downward until reaching a vector in its bottom layer $L_0$. Each vector in $L_0$ defines a partition and serves as an entry point to a corresponding sub-HNSW. All vectors assigned to the same partition will be used to construct their respective sub-HNSW. The overall graph index consists of two components: meta-HNSW, which provides coarse-grained classification, and sub-HNSWs, which enable fine-grained search within partitions. To improve search efficiency in disaggregation, we cache the lightweight meta-HNSW in the compute pool, allowing it to identify the most relevant sub-HNSW clusters for a given query. Meanwhile, we put all sub-HNSW clusters in the memory pool. For each vector query, only a small subset of sub-HNSW clusters needs to be loaded from the memory pool via the network, reducing both bandwidth usage and search latency.

# 3.2 RDMA-friendly graph index storage layout in remote memory.

RDMA enables efficient data access to targeted remote memory addresses. To efficiently read and write sub-HNSW cluster data in remote memory, an intuitive approach is to serialize all sub-HNSW clusters in the registered memory. Given that the top-$m$ closest sub-HNSW clusters for a queried vector $q$ are $\{S_0, \ldots, S_{m-1}\}$, the compute instance can issue RDMA_READ commands to access these serialized clusters and then deserialize them. However, two challenges arise: (1) If the queried clusters $\{S_0, \ldots, S_{m-1}\}$ are not stored contiguously in memory, multiple RDMA round trips are required, increasing latency. (2) When new vectors are inserted, the size of each sub-HNSW cluster may exceed the allocated space. Since shifting all stacked sub-HNSW clusters is impractical, newly inserted vectors and their metadata may be placed in non-contiguous memory regions if they are simply appended at the tail of the available area. This fragmentation increases access latency and reduces query throughput due to the higher cost of scattered index access. As shown in Fig. 4, we allocate and register a continuous memory space in the memory instance to store both the serialized HNSW index and the floating-point vectors. At the beginning of this memory space, a global metadata block records the offsets of each sub-HNSW cluster, as their sizes vary. The remaining memory space is divided into groups, each of which is capable of holding two sub-HNSW clusters. Within each group, the first section stores the first serialized sub-HNSW cluster, which includes its metadata, its neighbor array for HNSW, and the associated floating-point vectors. The second sub-HNSW cluster is placed at the end of the group. Between these two clusters, we allocate a shared overflow memory space (0.75 MB for SIFT1M, 3.92 MB for GIST1M) to accommodate newly inserted vectors for both sub-HNSW clusters. When a vector query requires loading a sub-HNSW cluster, the compute instance issues an RDMA_READ command to retrieve the cluster along with its corresponding shared overflow memory space. This layout ensures that newly inserted vectors are stored contiguously with the original sub-HNSW data, enabling them to be read back with a one-time RDMA_READ command.
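The single-READ property of the grouped layout can be sketched with a small offset calculation. This is a hypothetical sketch of the Fig. 4 arrangement (cluster at each end of a group, shared overflow in between); the sizes and the `read_span` helper are invented for illustration, not d-HNSW's actual on-wire format.

```python
# Group layout: | cluster 0 | shared overflow | cluster 1 |
# Reading a cluster plus the shared overflow is one contiguous span,
# so it maps to a single RDMA_READ.

def read_span(group_base, sizes, overflow, which):
    """Return (offset, length) for a one-shot read of cluster `which`
    (0 = first, 1 = second) together with the shared overflow region."""
    first, second = sizes
    if which == 0:
        # first cluster sits at the group start; overflow follows it
        return group_base, first + overflow
    # second cluster sits at the group end; overflow precedes it
    return group_base + first, overflow + second

# a toy group: cluster 0 is 100 bytes, overflow 32 bytes, cluster 1 is 80 bytes
base = 4096
off0, len0 = read_span(base, (100, 80), 32, 0)
off1, len1 = read_span(base, (100, 80), 32, 1)
```

Either cluster and its overflow form one contiguous `(offset, length)` span, which is what makes a one-time RDMA_READ sufficient even after insertions.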
To optimize memory usage, each pair of adjacent sub-HNSW clusters shares a single overflow memory space for accommodating newly inserted vectors, rather than allocating a separate one for each cluster. Figure 5: Query-aware sub-HNSW cluster loading. If multiple sub-HNSW clusters need to be loaded into the compute pool for batched query processing and they are not stored contiguously in memory, we leverage doorbell batching to read them in a single network round trip, with the RDMA NIC issuing multiple PCIe transactions. However, there is a trade-off in the number of batched operations within a single RDMA command: if too many operations are included in one round trip, they can interfere with other RDMA commands and incur long latency due to the limited scalability of the RDMA NIC. The memory offsets of each sub-HNSW cluster are cached in all compute instances after the sub-HNSW clusters are written to the memory pool, with the latest version stored at the beginning of the memory space in the memory instance.

# 3.3 Query-aware batched data loading.

To reduce the bandwidth used for transferring the graph index and improve query efficiency, we propose merging sub-HNSW index loads for queried vectors in the same batch. Given a batch of queried vectors $\{q_1, q_2, \ldots, q_s\}$ and a total of $m$ sub-HNSW clusters, each queried vector requires searching for the top-$k$ closest vectors among its $b$ closest sub-HNSWs. However, the DRAM resources in the compute instance can only accommodate and cache $c$ sub-HNSWs. To optimize loading, we analyze the required $b \times s$ sub-HNSWs online and ensure that each sub-HNSW is loaded from the memory pool only once. For example, as shown in Fig. 5, queried vector $q_1$'s two closest sub-HNSW clusters are $S_1$ and $S_4$, while $q_3$'s two closest sub-HNSWs are $S_4$ and $S_5$. Similarly, $S_3$ is required for both $q_2$ and $q_4$.
Given a doorbell batch size of 2 for accessing sub-HNSWs, the compute instance can issue an RDMA_READ command to fetch indexes $S_3$ and $S_4$ in one network round trip, then compute the top-$k$ closest vector candidates for all queries $\{q_1, q_2, q_3, q_4\}$ first. The results are temporarily stored for further computation and comparison, because each query vector still requires another sub-HNSW to obtain the final answer. Note that $S_3$ and $S_4$ will not be loaded again within the same batch. Once all required sub-HNSW clusters for the batched queried vectors have been loaded and traversed, the query results are returned. Additionally, we retain the most recently loaded $c$ sub-HNSWs for the next batch; if the required sub-HNSWs are already in the compute instance, they do not need to be loaded again, further reducing data transfer overhead.

# 4 EVALUATION

We develop and evaluate a prototype of d-HNSW on CloudLab [2] using real-world hardware. Our testbed consists of four Dell PowerEdge R650 servers, each equipped with two 36-core Intel Xeon Platinum CPUs, 256 GB RAM, a 1.6 TB NVMe SSD, and a Mellanox ConnectX-6 100 Gb NIC. Three servers act as the compute pool, while one serves as the memory instance. We compare d-HNSW against the following baselines: (1) Naive d-HNSW: when a vector query arrives at a compute node, it issues an RDMA read command to fetch the corresponding sub-HNSW clusters, bypassing the memory node's CPU. (2) d-HNSW (w./o. doorbell): with meta-HNSW caching and query-aware data loading, the compute node reads sub-HNSW clusters in multiple round trips, whereas our full d-HNSW reads discontinuous sub-HNSW clusters in a single doorbell batch. Each server has 144 hyperthreads, which are divided into 8 compute instances. Each instance runs a vector query worker that sends RDMA commands for top-$k$ vector retrieval and uses 18 threads for OpenMP-parallel HNSW search.
The cache in each compute instance is configured to store only $10\%$ of the total sub-HNSW clusters in the memory pool. At runtime, the batch size for vector queries is set to 2000.

Figure 6: Latency-recall evaluation of d-HNSW and baselines. (a) SIFT1M@10. (b) SIFT1M@1. (c) GIST1M@10. (d) GIST1M@1.

Latency-recall curve evaluation. We evaluate d-HNSW and the baselines using the SIFT1M and GIST1M datasets, setting the top-$k$ parameter to top-1 and top-10, respectively. All compute instances across the three servers issue vector queries to the memory instance together. Fig. 6 presents the latency-recall curves for all three schemes, with efSearch varied from 1 to 48. The efSearch parameter determines the number of dynamic candidates maintained during the sub-HNSW search process. On the SIFT1M dataset with top-10 vector queries, shown in Fig. 6(a), d-HNSW reduces latency by up to $117\times$ and $1.12\times$ compared to naive d-HNSW and d-HNSW without doorbell, respectively, while achieving a recall of approximately 0.86 when efSearch reaches 48. The reason is that naive d-HNSW issues one RDMA read round trip to access each involved sub-HNSW cluster, while the doorbell mechanism further optimizes performance by batching memory accesses across multiple fragmented addresses within a single round trip. For top-1 queries on SIFT1M, shown in Fig. 6(b), the upper recall reaches 0.85 when efSearch is set to 48. Latency is further reduced across all schemes since only the closest vector needs to be selected. Similarly, on the GIST1M dataset shown in Figs. 6(c)(d), d-HNSW achieves up to $121\times$ and $1.
30\times$ lower latency compared to naive d-HNSW and d-HNSW without doorbell, respectively. Due to the higher dimensionality of GIST1M vectors, query latency is generally higher than on SIFT1M.

Latency breakdown of vector query. We break down the latency of each scheme to analyze the source of d-HNSW's performance advantage. The total latency of a vector query consists of three components: data transfer over the network, meta-HNSW (cache) computation, and sub-HNSW computation on loaded data. Table 1 presents the latency breakdown for the SIFT1M dataset with top-1 queries. d-HNSW benefits from significantly reduced network latency, measured at $527\,\mu s$, which is $0.005\times$ and $0.84\times$ that of naive d-HNSW and d-HNSW without doorbell, respectively. The number of round trips per vector query on SIFT1M is 3.547 for naive d-HNSW, 0.896 for d-HNSW w./o. doorbell, and $4.75 \times 10^{-3}$ for d-HNSW. Similarly, as shown in Table 2, d-HNSW also achieves the lowest network latency on the GIST1M dataset.

Table 1: Latency breakdown for SIFT1M@1 with efSearch as 48. Table 2: Latency breakdown for GIST1M@1 with efSearch as 48.

# 5 RELATED WORK

Disaggregated memory systems. Disaggregated memory systems have recently received considerable attention because they enable flexible resource allocation and improve hardware utilization in data centers. Existing work studies various solutions for managing and developing memory disaggregation from different systematic views, including architectural support [28, 31, 35, 37], operating systems [26, 32], KVCache management for LLMs [8, 23], disaggregated KV stores [13, 16–19, 27, 36, 47], transactional systems [41, 42], and in-network computation systems [1, 38, 39, 46]. d-HNSW is orthogonal to these works.

Approximate similarity search systems. Approximate similarity search has become a fundamental technique for efficiently retrieving high-dimensional data vectors.
Various algorithms have been developed to balance search efficiency and accuracy, such as KD-trees [24], graph-based search structures [20], and quantization techniques [14]. Furthermore, advances in hardware acceleration, such as GPU-based indexing [45] and CXL-based indexing [9], have further improved search performance. As data volumes continue to grow, optimizing vector search for both accuracy and resource efficiency remains an active research area, including adaptive search strategies [12, 43] and storage tier-aware optimizations [25] to meet different service level objectives (SLOs) [30, 44].
Efficient vector query processing is critical to enable AI applications at scale. Recent solutions struggle with growing vector datasets that exceed single-machine memory capacity, forcing unnecessary data movement and resource underutilization in monolithic architectures. We present d-HNSW, the first disaggregated vector similarity search engine for RDMA-based remote memory systems, which achieves high performance while supporting fast data indexing with low network communication overhead. The core of d-HNSW is a novel disaggregation of the graph-based vector indexing data structure HNSW. It exploits the characteristics of greedy searching in HNSW to efficiently coordinate data transfers from the memory pool to the compute pool while serving data requests. Specifically, it leverages three ideas: (i) representative index caching, where a lightweight index constructed from a sampled subset of the data is cached in the compute pool to reduce frequent access to critical components of the hierarchical graph-based index, (ii) an RDMA-friendly data layout design that reduces the networking round trips incurred by vector queries and insertions, and (iii) batched query-aware data loading to reduce bandwidth usage on data transfer between pools, addressing the limited cache capacity of compute nodes. We evaluate d-HNSW with extensive benchmarking datasets. The experimental results show that d-HNSW outperforms the naive d-HNSW implementation by up to $117\times$ in latency while maintaining a recall of 0.87 on SIFT1M@1.
[ "cs.DB" ]
# 1 Introduction

Neuro-symbolic AI aims for an integration of symbolic and subsymbolic AI approaches, to overcome the limitations of either. Symbolic AI is useful for the communication between humans and machines, for guiding reasoning (in terms of logical and mathematical reasoning, but also in terms of common-sense reasoning), and for liability, whenever a failure of a system in the real world has to be dealt with and related to existing law. Subsymbolic AI, vice versa, is useful for recognizing subtle differences in audio-visual or sensory data, or language, which cannot be detected by symbolic approaches. In many cases, the subsymbolic layer can also guide search and, in a sense, plays the role of intuition in humans. Research on neuro-symbolic AI has intensified in recent years. Frequently, neuro-symbolic approaches either integrate neurons into a symbolic system (e.g., through neural predicates) or embed symbols into a neural system (e.g., via loss functions that encode logical constraints). Several excellent survey articles provide a comprehensive overview of the field [24, 17, 8]. Notable examples of neuro-symbolic AI systems are, amongst others, Logic Tensor Networks (LTN) [2] and DeepProbLog [15]. LTNs allow reasoning over continuous domains, leveraging fuzzy logic and differentiable logical operators. DeepProbLog, on the other hand, combines probabilistic logic programming with neural networks via neural predicates. Like LTNs, DeepProbLog enables end-to-end learning. The power and expressiveness of such systems makes them, in principle, applicable to many different tasks and problems. However, this versatility makes them less efficient in simpler settings, for instance, for discriminative machine learning, in particular in domains with many constants. Therefore, we follow and propose a different approach, namely that of enhancing symbolic machine learning schemes by giving them access to neural embeddings.
In the present paper, we show this for TILDE [3] and the use of embeddings in similarity predicates. However, the approach could go far beyond this particular setting: the embeddings could be used in similarities between instances (effectively working like kernels, but flexibly, within a logical language), for analogical reasoning, or for propositionalization. Interestingly, the learned symbolic models can be translated into LTN models, such that the embeddings can be fine-tuned and fed back into the next round of structure learning. In our experiments, we compare the approach with several baselines, side by side, in three real-world domains: hate speech, spam recognition, and drug response prediction from multi-omics data. We compare the symbolic learning algorithm alone (TILDE without embeddings), TILDE with the embeddings and the similarity predicate, and TILDE with the LTN-revised embeddings. Further, we compare with hand-crafted rules for LTNs, which are subsequently refined by revising the embeddings of the constants. Our experiments show that the maximal variant, TILDE with LTN-revised embeddings, outperforms all other variants in terms of the F1 score. The paper is organized as follows: in the next section, we briefly review further related work. In Section 3, we present the method. Section 4 discusses the experimental set-up and results in detail, before we conclude in Section 5.

# 2 Related Work

In propositional machine learning, approaches for embedding-based decision trees have been developed: Kontschieder et al. [14] use learned or pre-trained embeddings to inform the splitting criterion in decision trees. Frosst and Hinton [7] distill knowledge from neural networks into symbolic structures like soft decision trees. This can be seen as a precursor to the soft decision tree TEL [9], in which a routing function can be made soft or hard depending on a parameter.
In all of these cases, the embeddings are hard-coded into the algorithm and cannot be used flexibly and "on demand", as in our proposed relational learning setting. Another line of work is neural-symbolic ILP by Evans and Grefenstette [5], where neural networks map raw data into embeddings, which are then discretized or clustered into symbolic constants or predicates. Here, the embeddings are made discrete before actually being used in a symbolic learning system. The neuro-symbolic concept learner by Mao et al. [16] combines neural networks for perceptual grounding (e.g., image features) with symbolic reasoning modules. The symbolic reasoning operates on concepts derived from neural representations, essentially attaching word vectors to symbolic predicates. One of the main differences to our approach is that we do not focus on computer vision applications (or scene understanding), but on general symbolic learning domains (see the experimental results section). Also, in our case, the predicates are symbolic and fixed, but the constants have embeddings attached. Similarity predicates, amongst others, make use of the embeddings attached to the constants. The paper is also related to work from statistical relational learning (SRL), where ILP algorithms (like FOIL or TILDE) were used for structure learning and the models were parameterized, e.g., by Markov Logic Networks (MLNs) [13, 12, 21, 18]. One difference (apart from that between SRL and NeSy) may be that we close the loop and can successively revise both the clauses and the embeddings.

# 3 Methodology

# 3.1 Overview

Our proposed method augments the Inductive Logic Programming (ILP) framework TILDE [3] with subsymbolic entity representations, adding semantic context. Specifically, it integrates embedding models, e.g., word2vec [19], GloVe [20], or GenePT [4], to introduce a semantic distance measure to TILDE via a novel subsymbolic predicate. Figure 1 illustrates the proposed approach.
It consists of two major steps: 1) TILDE is employed to induce symbolic rules suitable for classification, leveraging both a symbolic knowledge base and latent background knowledge through the introduced subsymbolic predicate. 2) The underlying embeddings are fine-tuned with a Logic Tensor Network (LTN) [2] to achieve higher satisfaction (in the sense of LTNs) of the logical rules derived from TILDE trees.

# 3.2 Decision Tree Induction Based On Symbolic and Subsymbolic Knowledge

Compilation of the Knowledge Base. Initially, the symbolic predicates are derived from the dataset. The target variable for classification is interpreted as a distinguished predicate for TILDE. Additionally, this step introduces symbolic predicates, e.g., contains_word(text, word), that allow for querying the symbolic knowledge given by the dataset. These symbolic predicates determine which subset of the input embeddings will be evaluated by the subsymbolic predicate.

Fig. 1: Overview of the proposed system, integrating subsymbolic embeddings with TILDE and Logic Tensor Networks. The solid arrows indicate the rule induction process, while the dashed arrows indicate the refinement of the included embeddings.

Addition of a Subsymbolic Predicate. To effectively incorporate subsymbolic information into our framework, we define the predicate similar/2. This predicate encapsulates the semantic similarity between two entities, $X$ and $Y$ (e.g., words or genes), leveraging subsymbolic embeddings that inherently capture both semantic and syntactic entity properties [20]. The predicate is defined via the cosine similarity between $\mathbf{X}$ and $\mathbf{Y}$, the embedding vectors of $X$ and $Y$, respectively:

$$ \operatorname{similar}(X, Y) \iff \frac{\mathbf{X} \cdot \mathbf{Y}}{\|\mathbf{X}\| \, \|\mathbf{Y}\|} \geq \tau, $$

where $\tau$ denotes a user-defined threshold.
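The thresholded cosine predicate can be sketched as follows. This is a minimal illustration, not the paper's implementation: the entities and their 3-dimensional embeddings are invented, and the grounding over all entity pairs mirrors the precomputation described in the next paragraph.

```python
import numpy as np

def similar(emb, x, y, tau=0.75):
    """similar/2: cosine similarity between the embeddings of x and y >= tau."""
    vx, vy = emb[x], emb[y]
    cos = float(vx @ vy / (np.linalg.norm(vx) * np.linalg.norm(vy)))
    return cos >= tau

# Hypothetical embeddings for illustration only.
emb = {
    "free":   np.array([1.0, 0.1, 0.0]),
    "gratis": np.array([0.9, 0.2, 0.1]),
    "gene":   np.array([0.0, 1.0, 0.0]),
}

# Precompute the grounding of similar/2 over all entity pairs, so a symbolic
# learner can query it as background knowledge.
grounding = {(x, y): similar(emb, x, y)
             for x in emb for y in emb if x != y}
```

Note that the relation is symmetric but, for $\tau < 1$, not transitive, which is why the grounding is materialized pairwise.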
In our framework, we precompute the grounding of similar/2 for all pairs of entities that appear in the training data and append it as additional background knowledge to TILDE, enabling it to effectively leverage the subsymbolic information throughout the learning process.

Decision Tree Induction with TILDE. After the previous steps, TILDE is applied to learn logical decision trees based on the previously compiled predicates. A key inductive bias in this setup is that membership predicates must be used in conjunction with the similar/2 predicate. Further, TILDE is configured to learn constant values as the $Y$-argument of similar/2, thereby encouraging the learner to exploit subsymbolic similarities. This allows TILDE to identify entities with high cosine similarity to the learned constants, enabling generalization beyond exact lexical matches. The resulting rules offer human-interpretable explanations for the model's classification decisions. Once induced, these rules, represented by a TILDE decision tree, can be directly applied to classify new instances.

# 3.3 Refining the Embeddings

The second major step refines the embeddings, guided by the induced rules themselves. An LTN is employed to fine-tune these embeddings, optimizing the satisfaction of the subsymbolic similar/2 predicate within the rules and thereby increasing the predictive performance with respect to the target variable.

LTN Structure. The structure of the proposed LTN framework is designed to mirror the TILDE trees as rules. Rules derived from the trees are converted into fuzzy logic using LTN operators, and each instance of the dataset is grounded by a set of embeddings. The similar/2 predicate is implemented as a differentiable function within the LTN. Specifically, the predicate computes the cosine similarity between two embedding vectors, followed by a shifted sigmoid function to model the similarity threshold, as detailed in Subsection 3.2.
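The differentiable variant of similar/2, cosine similarity followed by a sigmoid shifted by the threshold $\tau$, can be sketched as follows. The steepness hyperparameter is an invented assumption controlling how sharply the soft predicate approximates the hard $\geq \tau$ test; the paper does not specify it.

```python
import numpy as np

def soft_similar(vx, vy, tau=0.75, steepness=10.0):
    """Differentiable similar/2: cosine similarity pushed through a sigmoid
    shifted by the threshold tau, yielding a fuzzy truth value in (0, 1).
    `steepness` is a hypothetical hyperparameter, not from the paper."""
    cos = vx @ vy / (np.linalg.norm(vx) * np.linalg.norm(vy))
    return 1.0 / (1.0 + np.exp(-steepness * (cos - tau)))
```

At cosine similarity exactly $\tau$ the predicate returns 0.5, and because the truth value is a smooth function of the embeddings, gradients can flow back into them, which is what makes the LTN fine-tuning step possible.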
Instead of modeling symbolic predicates explicitly in the LTN, we use them to filter during the grounding process. Symbolic knowledge is used to select only the embeddings of entities for which the symbolic predicates evaluate to true. Otherwise, the conjunction of the similarity and symbolic predicates would evaluate to false anyway, making the calculation of the similarity predicate obsolete; adapting those embeddings would therefore not benefit the task at hand. This not only results in a less complex LTN model, but also cuts unnecessary computation and significantly improves memory efficiency, while maintaining the same information.

Convert TILDE Tree to LTN Rules. Algorithms 1 and 2 describe the process of converting a TILDE decision tree into a set of fuzzy logical rules using LTN operators. This rule extraction method is adapted from the associate algorithm introduced by Blockeel et al. [3]. Instead of relying on predicate invention, we apply existential quantification and negation to the relevant parts of each rule using LTN operators. This takes advantage of LTN's reasoning over the full groundings of the variables, rather than searching for individual satisfying assignments, as illustrated in Algorithm 2. Figure 2 shows the LTN structure of one node in the TILDE tree. These structural elements are assembled into full rules using our adaptation of the associate algorithm for LTNs (Algorithms 1 and 2). Since multiple rules are extracted from the TILDE tree and only one rule is applicable to each instance, the implication operator generates the appropriate training signal: if a rule is satisfied but this disagrees with the actual label, the model is penalized; if a rule is not satisfied even though the actual label indicates it, the model is not penalized, because another rule might still be applicable.
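The asymmetric training signal described above can be illustrated with one common differentiable fuzzy implication, e.g. the Reichenbach operator $I(a, b) = 1 - a + a \cdot b$ used in LTN implementations. The paper does not name its exact operator, so this choice and the truth degrees below are assumptions for illustration only.

```python
def implies(a, b):
    """Reichenbach fuzzy implication I(a, b) = 1 - a + a*b,
    mapping truth values in [0, 1] to a truth value in [0, 1]."""
    return 1.0 - a + a * b

# Loss = 1 - truth(body => label); truth degrees below are invented.
satisfied_but_wrong = 1.0 - implies(0.9, 0.0)  # rule fires, label disagrees
unsatisfied_ok      = 1.0 - implies(0.1, 0.0)  # rule silent: barely penalized
```

With this operator, a rule whose body is strongly satisfied while the label disagrees incurs a large loss (here 0.9), whereas a rule whose body is barely satisfied incurs almost none (here 0.1), matching the behavior that another rule may still cover the instance.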
# Algorithm 1 Convert the TILDE Tree to LTN Rules

Input: TILDE tree $T$. Output: Set of LTN rules.

1: function ConvertTreeToLTNRules($T$)
2:   $S \gets \{(\text{root of tree}, \emptyset)\}$ ▷ Initialize stack
3:   $R \gets \emptyset$ ▷ Initialize empty list of LTN rules
4:   while $S \neq \emptyset$ do
5:     $(node, rule_{comp}) \gets S.\mathrm{pop}()$
6:     if IsLeaf($node$) then
7:       $head \gets \mathrm{ClassLabel}(node)$
8:       $body \gets \mathrm{QuantifyAndConjunct}(rule_{comp})$ ▷ Algorithm 2
9:       $R \gets R \cup \{head \Longleftarrow body\}$
10:    else
11:      $rule^{+} \gets rule_{comp} \cup \mathrm{Conjunction}(node)$
12:      $S.\mathrm{push}((\mathrm{LeftChild}(node), rule^{+}))$
13:      $rule^{-} \gets rule_{comp} \cup \mathrm{Not}(\mathrm{QuantifyAndConjunct}(rule^{+}))$
14:      $S.\mathrm{push}((\mathrm{RightChild}(node), rule^{-}))$
15:    end if
16:  end while
17:  return $R$
18: end function

# Algorithm 2 Quantify and Conjunct

Input: Rule components $R_C$ (predicates and already quantified components). Output: Quantified conjunction of rule components using LTN fuzzy logic operators.

1: function QuantifyAndConjunct($R_C$)
2:   $U_V \gets \{V \in \mathrm{Vars}(R_C) \mid \neg\mathrm{Quantified}(V)\}$ ▷ unquantified variables
3:   $U_P \gets \{p \in R_C \mid \neg\mathrm{Quantified}(p)\}$ ▷ unquantified predicates
4:   $QC \gets \bigwedge_{C \in R_C \mid \mathrm{Quantified}(C)} C$ ▷ quantified rule components
5:   for all $v \in U_V$ do
6:     $P_v \gets \{p \in U_P \mid v \in \mathrm{Vars}(p)\}$
7:     $QC \gets \big(\exists v \bigwedge_{p \in P_v} p\big) \land QC$
8:   end for
9:   return $QC$
10: end function

Fig. 2: LTN architecture for one rule component: The left-hand side reflects the LTN structure for a single node of the TILDE tree.
A set of these structures is assembled according to Algorithms 1 and 2. The rule is then applied to each training instance and compared with the target variable via an implication operator.

Using the Fine-Tuned Embeddings with the TILDE Tree. After training, the original embeddings are replaced with the fine-tuned ones. The TILDE tree then leverages the fine-tuned embeddings to classify new examples.

# 4 Experimental Evaluation

# 4.1 Experimental Setup

We evaluate our approach on three popular datasets from different domains:

Measuring Hate Speech [11]: The first dataset consists of social media comments, labeled with continuous hate speech scores. The task is to predict whether a comment's score exceeds 0.5. We randomly sample 2,000 instances (50%/25%/25% for training, validation, and testing, respectively) after stop-word removal, stemming, and removing words for which no embeddings are available. We extract contains_word/2, similar/2, and hate_speech/1 predicates. GloVe embeddings [6] (200d), which were trained on tweets, provide the subsymbolic representations.

SMS Spam Collection [1]: The second dataset contains labeled SMS messages for spam classification. We initially sample 3,500 instances (2,500/500/500 for training, validation, and testing). Our preprocessing steps and word embedding framework correspond to those of the hate speech dataset.

Multi-Omics-based Drug Response [10]: The third dataset holds gene expression, mutation, and copy number alteration (CNA) features of cell lines, labeled by their response to the drug Cetuximab. In addition to the preprocessing proposed by Sharifi-Noghabi et al. [22], we binarize the gene expressions via thresholding. Of 856 samples, 60%/20%/20% are used for training, validation, and testing. We restrict the number of features to 1,000. Extracted predicates include expression/2, mutation/2, cna/2, similar/2, and response/1. For the calculation of gene-level similarity, we employ GenePT embeddings [4].
Aiming for both high precision and recall, we undersample the majority class of the spam and drug response training sets to an equal number of training samples. The following TILDE configurations were set for all experiments: minimum leaf support ("minimal_cases") and similar/2 threshold: 50/50/20 and 0.75/0.7/0.5 for hate speech/spam/drug response, respectively. The TILDE implementation from the ACE data mining system$^1$ is used throughout. The learning rates of the employed TensorFlow Adam optimizer were set to 1e-3/1e-1/5e-4 for hate speech/spam/drug response, respectively. We evaluate the accuracy and F1 score of our proposed method and conduct an ablation study to analyze how the different building blocks contribute to the overall performance. To examine the isolated LTN performance and measure the contribution of the decision tree learner, we compare the results with an alternative approach that uses a simple hand-crafted instead of a TILDE-derived set of rules. The hand-crafted rules are, simply put, designed as described below. An instance is classified as

– Spam if and only if it contains words that are similar to at least two of the following words: 'urgent', 'rich', 'free', 'miracle', 'winner'; else no spam.

– Hate Speech if and only if it contains words that are similar to at least two of the following words: [racist word], [sexist word], [homophobic word], or [word of religious-cultural intolerance]; else no hate speech.

– Drug Response if and only if at least two of the following conditions are satisfied: 'EGFR' or similar expressed, 'KRAS' or similar mutated, 'NRAS' or similar mutated; else negative drug response.

# 4.2 Results and Analysis

Table 1 presents the experimental results. For all three datasets, incorporating the similar/2 predicate substantially enhances the F1 score. This demonstrates that TILDE benefits notably from relaxing the requirement of exact entity matches by enabling semantically similar entities to satisfy the induced rules.
Further refinement of the embeddings via the LTN yields another notable increase in F1 scores across all datasets, underscoring the effectiveness of fine-tuning the embeddings. Similarly, this effect can be observed when applying the LTN refinement to the hand-crafted rules; especially the performance on the Spam dataset improves considerably. Moreover, Table 1 reveals that fine-tuning all embeddings, rather than only those corresponding to the constants in the theories, leads to higher performance gains. Conversely, restricting fine-tuning exclusively to constants better preserves the original semantic properties inherent in the embeddings.

Table 1: Performance results (accuracy and F1 score) across three tasks, along with an ablation study comparing TILDE with and without the subsymbolic predicate, and with the refinement step applied once only to the constants and once to all embeddings. Furthermore, we compare the approach against hand-crafted rules.

Fig. 3: Two TILDE decision trees harnessing the subsymbolic similar/2 predicate. Left: spam classification. Right: drug-response prediction.

Since our optimization objective explicitly targets the F1 score, the stable accuracy results align with expectations, reflecting the prioritization of correctly classifying instances from less-represented classes. Figure 3 visualizes two examples of TILDE trees induced from the respective datasets, incorporating the subsymbolic background knowledge. These trees were utilized to produce the results presented in Table 1. The interpretability of these TILDE trees highlights a key strength of the proposed approach: it preserves the transparency of symbolic learning models and limits the black-box character of the neural models to the subsymbolic predicates. Self-Organizing Maps (SOMs) [23] enable the visualization of changes in the latent space.
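A minimal SOM sketch illustrates the visualization workflow used here: train the map on the original embeddings only, then project fine-tuned embeddings with the frozen map. This is not the authors' implementation; the grid size, learning-rate schedule, and the toy stand-in data are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(5, 5), iters=500, lr0=0.5, sigma0=2.0):
    """Train a minimal Self-Organizing Map on `data` (n_samples x dim)."""
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        neigh = np.exp(-d2 / (2 * sigma ** 2))             # neighborhood kernel
        weights += lr * neigh[:, None] * (x - weights)
    return weights, coords

def project(weights, coords, x):
    """Map an embedding to the 2-D grid position of its best matching unit."""
    return coords[np.argmin(((weights - x) ** 2).sum(axis=1))]

# Toy data standing in for word embeddings: two separated clusters in 8-D.
orig = np.vstack([rng.normal(0, 0.1, (20, 8)), rng.normal(3, 0.1, (20, 8))])
weights, coords = train_som(orig)

# Train only on original embeddings; project fine-tuned ones with the frozen map,
# so both populations share one frame of reference.
finetuned = orig + rng.normal(0, 0.05, orig.shape)
pos = project(weights, coords, finetuned[0])
```

Because the map is frozen after training, adding the fine-tuned embeddings (or a new dataset) does not shift the frame of reference, which is the property motivating the choice of SOMs over other dimensionality-reduction methods here.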
We use SOMs instead of other dimensionality reduction and visualization methods because the addition of new data points or even datasets is straightforward within the same frame of reference. This enhances the interpretability of our proposed approach, as high-dimensional embeddings and their mutual distances can be made human-readable, thus simplifying the explanation of the model's behavior. We train the dimensionality reduction using SOMs exclusively on the original pre-trained embeddings, and subsequently apply the learned mapping to visualize the fine-tuned embeddings. The resulting latent space visualization, depicted in Figure 4, aligns well with our expectations: after fine-tuning, the embeddings become semantically more closely aligned with those of the learned constants in the rules. This forms clusters around the constants that are particularly relevant for a message's spam classification.

Fig. 4: Self-organizing map visualization of sample words from the Spam dataset, original (TILDE $+$ similar/2) vs. fine-tuned embeddings (TILDE+LTN all).

To further investigate the significant improvement in the F1 score following the embedding refinement step applied to the hand-crafted rules, we visualize the corresponding changes in the latent space, again using SOMs. Figure 5 illustrates an additional notable effect of the refinement process: the latent embedding for the constant "urgent" shifts semantically closer to words such as "send", "call", "txt", and "text". This effectively transforms the constant "urgent" into "text", which TILDE independently identifies as the most predictive constant. This shift explains the observed leap in performance, as the refinement step corrects the suboptimal choice of constants in the hand-crafted rule set, substituting it with constants more suitable in the dataset context. Consequently, this suggests that fine-tuning embeddings can compensate for ineffective constant selections made by a symbolic learner. Fig.
5: Self-organizing map visualization of sample words from the Spam dataset using the hand-crafted rules, original (Hand-crafted+similar/2) vs. fine-tuned embeddings (Hand-crafted+LTN all).
The goal of neuro-symbolic AI is to integrate symbolic and subsymbolic AI approaches, to overcome the limitations of either. Prominent systems include Logic Tensor Networks (LTN) or DeepProbLog, which offer neural predicates and end-to-end learning. The versatility of systems like LTNs and DeepProbLog, however, makes them less efficient in simpler settings, for instance, for discriminative machine learning, in particular in domains with many constants. Therefore, we follow a different approach: We propose to enhance symbolic machine learning schemes by giving them access to neural embeddings. In the present paper, we show this for TILDE and embeddings of constants used by TILDE in similarity predicates. The approach can be fine-tuned by further refining the embeddings depending on the symbolic theory. In experiments in three real-world domains, we show that this simple, yet effective, approach outperforms all other baseline methods in terms of the F1 score. The approach could be useful beyond this setting: Enhancing symbolic learners in this way could be extended to similarities between instances (effectively working like kernels within a logical language), for analogical reasoning, or for propositionalization.
[ "cs.AI", "cs.LO" ]
# 1 Introduction

The Background of Geo-localization. The rapid growth of visual content on social media and mobile devices has made image geo-localization (determining where an image was taken) increasingly important for downstream applications such as autonomous navigation [15] and crisis response [18]. Given that metadata (i.e., GPS coordinates) is frequently unavailable in practice [19], predicting geographic location from visual content remains a crucial capability. This demand has led to growing interest in the image geo-localization task [24].

Limitations in Existing Geo-localization Approaches. Traditional image geo-localization approaches fall into two main categories: classification and retrieval. Classification-based methods [63, 48, 43, 46, 13] treat geo-localization as a discrete prediction task, assigning each image to a predefined set of geographical regions or cells. Retrieval-based methods [66, 74, 59, 56, 22, 64, 21] estimate location by comparing the query image to a large geo-tagged reference database, retrieving the closest match in terms of visual features, geographic coordinates, or semantic labels (e.g., city or country names). Although these methods perform well on standard benchmarks, they typically require training on millions of samples and lack interpretability, offering little insight into their underlying reasoning process.

When LVLMs Meet Geo-localization. The emergence of Large Vision-Language Models (LVLMs) [38, 4, 58, 5, 10, 2, 53] has introduced a new paradigm for tackling image geo-localization. Equipped with powerful multimodal reasoning capabilities and extensive world knowledge encoded through large-scale pretraining, LVLM-based methods [32, 28, 72] have been explored through various strategies, including few-shot prompting, retrieval-augmented generation (RAG), and supervised fine-tuning (SFT).
These methods are capable of generating both location predictions and explanations, offering greater interpretability in how decisions are made.
Figure 1: Comparison of training paradigms. Full-parameter fine-tuning on social media imagery (rich diversity, multiple views) uses location annotations but no reasoning path, while LoRA-based fine-tuning on street-view imagery (limited diversity, fixed views) uses location annotations with a reasoning path; most LVLM-based methods rely on supervised fine-tuning.
Limitations in LVLM-based Image Geo-localization. Notably, geo-localization requires deeper reasoning than typical vision-language tasks. Success depends on more than recognition, as models must often draw on domain knowledge to infer plausible locations from subtle visual clues such as vegetation, architecture, or language, especially when iconic landmarks are absent. While LVLMs offer a promising path toward such reasoning-driven geo-localization, two fundamental challenges persist, as illustrated in Figure 1. On the data side, existing datasets rarely provide explicit reasoning supervision, such as interpretations of visual evidence and contextual justifications supporting the final location decision. Recent efforts [32, 17, 51] to incorporate reasoning into geo-localization datasets have primarily relied on street-view imagery, which offers limited scene diversity and fixed viewpoints. As a result, models trained on such data often struggle to generalize to diverse, real-world visual conditions. On the modeling side, most current approaches depend on supervised fine-tuning with instruction-style data, which tends to encourage pattern replication rather than a grounded understanding of visual-geographic relationships. Without verification mechanisms, these models rely heavily on correlation rather than structured inference, reducing their ability to generalize beyond familiar examples.
Example reasoning trace and final answer (raw model output): <think> Architecture: The buildings have a European architectural style with red brickwork, domed towers, and ornate details like clock towers and green domes with red domes on top. The "OXYGEROL" sign suggests a hotel or building with industrial-style lettering. Signage and Street Names: The street signs and the building with the "OXYGEROL" sign suggest a Swedish city due to the Scandinavian style of architecture and the Scania brand of vehicles used by the fire truck. The red fire truck with "Scania" logo indicates it's a Scania truck commonly used in Sweden. Street Layout and Vegetation: The urban setting with European-style buildings, bare trees suggesting winter or early spring, modern streetlights, and urban infrastructure like buses and buildings with large windows and cylindrical domes is typical of Helsinki, Finland. Street Signs and License Plate: The license plate "MMI-842" resembles Finnish license plates. The "Scania" branding on the fire truck is prominent, which is common in Finland. </think> Helsinki, Finland <answer> country: Finland, city: Helsinki </answer> How GLOBE Tackles the Challenges. To address these challenges, we propose a novel pipeline for reasoning-aware geo-localization consisting of two main components: (1) constructing a geo-localization dataset from diverse social media images augmented with model-derived reasoning traces, and (2) fine-tuning a vision-language model using Group Relative Policy Optimization (GRPO) for enhanced reasoning. We begin by building MP16-Reason, an extension of MP-16 [30], which contains user-captured photographs with diverse viewpoints and rich contextual content. To introduce reasoning supervision, we prompt multiple vision-language models [5, 73, 56] to distill geolocation-related knowledge, including locatability assessments, reasoning trajectories, and predicted locations.
To ensure the reliability of these distilled signals, we employ a multi-dimensional verification process that assesses both the alignment between visual evidence and model-generated reasoning, and the consistency across different models through self-verification, thereby filtering out inconsistent or hallucinated outputs. Finally, we fine-tune a pretrained LVLM on the curated dataset using GRPO [50], guided by task-specific rewards for locatability, visual grounding, and geolocation accuracy. Our resulting model, GLOBE, achieves state-of-the-art performance among open-source VLMs on geo-localization benchmarks, while producing more interpretable and visually grounded reasoning trajectories, as shown in Figure 2. Our main contributions include:
• Reasoning-Aware Geo-Localization Dataset: We construct MP16-Reason, a diverse geo-localization dataset enriched with image-grounded reasoning supervision that supports model interpretability and generalization.
• GRPO-Based Fine-Tuning: We develop a GRPO-based reinforcement learning framework that fine-tunes LVLMs using task-specific rewards for locatability, visual grounding, and geolocation accuracy, enabling stronger reasoning capabilities than traditional supervised fine-tuning.
• Open-Source LVLM: Trained through this pipeline, we open-source GLOBE. Empirical results demonstrate that GLOBE outperforms state-of-the-art LVLMs on multiple geo-localization benchmarks, while producing more interpretable and visually grounded reasoning trajectories.
# 2 Related Work
Image Geo-localization. Image geo-localization aims to predict the geographic location of a given image and has broad applications in urban analysis [69, 18, 65, 67, 68], navigation [15], and geospatial data mining [29, 40, 34, 45, 23].
With advances in multimodal models, research has evolved from classification [63, 48, 43, 46, 13] and retrieval-based methods [66, 74, 59, 56, 22, 64, 21] to generation-based approaches [32, 72, 28], which aim to produce location predictions through visual reasoning. Recent studies [32, 72, 28] have pointed out key limitations of classification (e.g., coarse granularity) and retrieval methods (e.g., dependency on large reference databases), prompting increased interest in generation-based alternatives. Since the introduction of the MediaEval Placing Task 2016 (MP-16) dataset by [30], recent research [72, 28] has continued to utilize this dataset to model relationships between visual semantics and geographic locations. In contrast to conventional approaches, current LVLMs [38, 4, 58, 5, 10], which are typically pre-trained on large-scale datasets, inherently exhibit significant visual reasoning capabilities. This raises the critical question of whether the continued reliance on millions of labeled samples for supervised fine-tuning remains necessary to effectively adapt these models to specific tasks. In this work, we take a data-centric perspective to investigate how existing large-scale datasets can be leveraged to construct higher-quality training data for fine-tuning LVLMs in the context of image geo-localization. Large Vision-Language Models. Building upon advancements in LLMs [8, 55, 12, 3, 7, 20, 25], LLaVA [38] was a pioneering approach in this direction, combining a pre-trained vision encoder with an LLM. It demonstrated that joint fine-tuning of visual and textual representations significantly enhances performance in tasks such as image-based question answering [42, 70, 41, 31]. Subsequently, various LVLMs have emerged [4, 58, 5, 10, 2, 53], differing primarily in their visual-language alignment mechanisms and associated architectural trade-offs.
Motivated by these recent advancements, our work further investigates the shift of image geo-localization from traditional methods to LVLMs. Specifically, we explore how curated datasets can be effectively leveraged to facilitate more efficient fine-tuning of these models for geo-localization tasks. Visual Reasoning and Verification. The emergence of advanced models such as DeepSeek [37] has heightened expectations for the multimodal reasoning capabilities of LLMs. Most reasoning research [27, 49] has focused on mathematical tasks, with limited attention to open-ended or visual scenarios. Thus, these models often suffer from hallucination [26, 6, 60], especially in visual tasks where they produce seemingly plausible but incorrect outputs. To address hallucination and promote more faithful reasoning, recent work has explored verification-based strategies [35, 16, 36, 52, 44], as well as reinforcement learning frameworks [50, 71] that optimize models via structured rewards. Motivated by these insights, we adopt GRPO as the reinforcement learning framework in our reasoning-driven geo-localization task.
# 3 GLOBE: The Methodology
We propose a novel pipeline based on the original MP-16 [30] dataset, aiming to advance image geo-localization from single-modal visual recognition to more robust multimodal reasoning. Achieving this objective requires not only powerful models but also well-curated training data that effectively captures geographic clues. Our pipeline for reasoning-aware geo-localization consists of two main components: dataset curation and model fine-tuning. These are implemented in three stages: (1) dataset curation via strong-to-weak distillation & verification (Section 3.1), (2) reward construction via task-specific supervision (Section 3.2), and (3) model fine-tuning via GRPO-based reinforcement learning (Section 3.3).
# 3.1 Dataset Curation: Data Distillation & Verification
Raw web-scale datasets contain a diverse range of social media images captured from varied perspectives. However, these datasets suffer from substantial noise [14, 33, 61, 62, 39], such as close-up shots with limited visual context or generic objects lacking informative localizable clues. To address this issue and select appropriate images for downstream training, we employ knowledge distillation from multiple vision-language models for data synthesis, and multi-dimensional verification for data curation.
Figure 3: Overview of the data pipeline. Data synthesis distills signals from multiple vision-language models (GeoCLIP: coordinates with scores; Qwen-VL: locatability decisions and reasoning trajectories; InternVL: geolocation predictions); data curation applies threshold-based filtering, self-verification, and visual-semantic consistency checks.
Knowledge Distillation. We utilize multiple vision-language models (e.g., Qwen2.5-VL-72B [5], InternVL3-78B [73], and GeoCLIP [56]) to extract locatability judgments, visual clues, and geolocation predictions for each image in the MP-16 [30] dataset, following the approach of [32, 9]. As shown in Figure 3, Qwen2.5-VL and InternVL3 produce binary locatability decisions, step-by-step reasoning trajectories, and textual geolocation predictions. GeoCLIP, in contrast, produces latitude-longitude coordinates along with a confidence score that quantifies locatability [9]. Collectively, these strong models offer complementary signals, which we distill into structured supervision for downstream data curation and reward modeling. Multi-dimensional Verification. Following model inference, we perform multi-dimensional verification to curate high-quality data, as illustrated in Figure 3. Initially, we filter out images with negative locatability decisions or low locatability scores. Subsequently, incorrect geolocation predictions are discarded by comparing them against ground-truth annotations.
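The first two filtering steps (locatability gating and ground-truth comparison) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the field names and the 0.5 score threshold are assumptions.

```python
def curate(samples, score_threshold=0.5):
    """Sketch of the first two curation filters: (1) drop images judged
    non-locatable or with low locatability scores, (2) drop samples whose
    distilled geolocation prediction contradicts the ground truth.
    Field names and the threshold are illustrative assumptions."""
    kept = []
    for s in samples:
        # (1) locatability gating: binary decision AND confidence score
        if not s["locatable"] or s["loc_score"] < score_threshold:
            continue
        # (2) ground-truth check on the distilled prediction
        if (s["pred_country"], s["pred_city"]) != (s["gt_country"], s["gt_city"]):
            continue
        kept.append(s)
    return kept

samples = [
    {"locatable": True, "loc_score": 0.9,
     "pred_country": "Finland", "pred_city": "Helsinki",
     "gt_country": "Finland", "gt_city": "Helsinki"},
    {"locatable": False, "loc_score": 0.2,
     "pred_country": "Sweden", "pred_city": "Stockholm",
     "gt_country": "Finland", "gt_city": "Helsinki"},
]
print(len(curate(samples)))  # 1
```

Self-verification and visual-semantic consistency (described next) would add further passes over `kept`.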
To ensure the reliability of the knowledge distilled from Qwen2.5-VL and InternVL3, we introduce a self-verification step in which the geolocation predictions and reasoning trajectories of both models are compared for each image. Only those samples exhibiting consistent location outputs (e.g., matching city- or country-level predictions) and semantically aligned reasoning chains are retained. This cross-model agreement serves as a reliability proxy for the distilled supervision. Furthermore, to enforce visual grounding of the reasoning process, we employ a general-purpose semantic segmentation model [11] to extract both the categories and relative proportions of visual elements within each image. We then assess the consistency between the entities mentioned in the reasoning trajectories and the visual elements identified via segmentation. Through this multi-stage validation pipeline, which combines locatability filtering, self-verification of distilled knowledge, and visual-semantic consistency checks, we curate a robust and trustworthy dataset tailored for downstream tasks.
# 3.2 Reward Construction: Task-specific Supervision
Building upon the curated dataset introduced in Section 3.1, we develop three task-specific rewards to assess distinct dimensions of reasoning quality in the geo-localization process. Each reward is trained with annotated supervision and collectively provides a structured reward signal, which guides the policy optimization during the reinforcement learning stage described in Section 3.3. Formally, let $\mathcal{D} = \{(I_i, y_i, g_i, r_i)\}_{i=1}^{N}$ denote the curated dataset of $N$ samples, where $I_i$ is an image, $y_i \in \{0, 1\}$ is a binary label indicating whether the image is localizable, $g_i$ indicates the ground-truth geolocation, and $r_i$ is the associated reasoning trajectory. Locatability Reward.
We develop a binary classification reward model to estimate the localizability of an image based on its visual content. Using the curated dataset $\mathcal{D}$, we train a model $R_{\mathrm{loc}}(I_i) \in [0, 1]$ to predict the probability that $y_i = 1$, i.e., that the image is localizable. Accordingly, we define the reward as: $$ R_{\mathrm{loc}}(I_i) = \mathbb{P}(y_i = 1 \mid I_i; \theta_{\mathrm{loc}}), $$ where $\theta_{\mathrm{loc}}$ denotes the parameters of the reward model. The output score serves both as a reward signal for reinforcement learning and as a soft indicator of an image's inherent locatability. Visual Grounding Consistency Reward. To ensure the model-generated reasoning aligns with the actual visual content, we introduce a reward model evaluating entity grounding consistency. For a given sample $(I_i, r_i)$ from the curated dataset, let $\hat{r}_i$ denote the predicted reasoning. We extract a set of entities $E_i = \{e_1, e_2, \ldots, e_n\}$ from the reasoning trajectory $\hat{r}_i$, and a set of visual elements $V_i = \{v_1, v_2, \ldots, v_m\}$ from both the image $I_i$ (via semantic segmentation) and the text of $r_i$ (via entity extraction). We define a soft matching function $\mathrm{Match}(e_j, V_i) \in \{0, 1\}$, which returns 1 if entity $e_j$ approximately matches any element in $V_i$, allowing for partial lexical or semantic overlap.
The visual grounding reward is computed as: $$ R_{\mathrm{vis}}(I_i, \hat{r}_i, r_i) = \frac{1}{|E_i|} \sum_{j=1}^{|E_i|} \mathrm{Match}(e_j, V_i), $$ where $R_{\mathrm{vis}}$ assigns a higher score when more entities in the reasoning are visually grounded. This reward penalizes hallucinated entities that do not correspond to visible elements in the image, thereby encouraging grounded visual reasoning. Geo-localization Accuracy Reward. To evaluate model predictions at a semantic location level, we define a classification-based reward that reflects whether the predicted country and city match the ground truth. Let $\hat{g}_i = (\hat{c}_i, \hat{t}_i)$ denote the predicted country and city for image $I_i$, and let $g_i = (c_i, t_i)$ be the corresponding ground-truth geolocation from the curated dataset. The geo-localization reward $R_{\mathrm{geo}}$ is defined as: $$ R_{\mathrm{geo}}(\hat{g}_i, g_i) = \mathbb{I}[\hat{c}_i = c_i] \cdot (\alpha \cdot \mathbb{I}[\hat{t}_i = t_i] + (1 - \alpha)), $$ where $\mathbb{I}[\cdot]$ is the indicator function and $\alpha \in [0, 1]$ is a weighting factor that controls the importance of city-level correctness, conditional on the country being correct. This reward structure captures the hierarchical nature of geo-tags. A reward of 0 is assigned when the predicted country is incorrect (i.e., $\hat{c}_i \neq c_i$). If the country is correct but the city is not (i.e., $\hat{c}_i = c_i$, $\hat{t}_i \neq t_i$), the model receives a partial reward of $1 - \alpha$.
A full reward of 1 is assigned only when both predictions are correct (i.e., $\hat{c}_i = c_i$, $\hat{t}_i = t_i$). This tiered design encourages the model to first learn coarse-grained localization before refining its predictions to finer spatial resolutions. Figure 4: GRPO optimization framework with multi-dimensional reward design. For each prompt, candidate outputs are scored using three task-specific reward models: $R_{\mathrm{loc}}$, $R_{\mathrm{vis}}$, and $R_{\mathrm{geo}}$, which reflect different aspects of geo-localization reasoning. Group-wise advantage values guide policy updates, while a $\mathcal{D}_{\mathrm{KL}}$ penalty constrains divergence from the reference model.
# 3.3 Model Fine-tuning: GRPO-based Reinforcement Learning
With the reward signals defined in Section 3.2, we fine-tune the base model using GRPO [50], a reinforcement learning algorithm designed for ranking-based reward optimization, as illustrated in Figure 4. GRPO builds upon Proximal Policy Optimization (PPO) [47], which stabilizes policy updates by optimizing a clipped surrogate objective using advantage estimates derived from scalar rewards. Unlike PPO, GRPO introduces group-wise normalization and optimizes relative preferences among candidates conditioned on each prompt, enhancing robustness to variations in the reward scale. Let $\pi_{\theta}$ denote the current policy parameterized by $\theta$, and let $\mathcal{B} = \{(\mathbf{x}_i, \{\mathbf{a}_i^{(j)}\}_{j=1}^{k})\}$ represent a batch of input prompts $\mathbf{x}_i$, each paired with $k$ candidate completions $\mathbf{a}_i^{(j)}$ sampled from the policy.
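The grounding and geo-accuracy rewards defined above (Eqns. 2-3) reduce to a few lines. This is a hedged sketch: the paper's soft matching allows partial lexical or semantic overlap, which we approximate with case-insensitive substring matching, and the $\alpha = 0.5$ default is an illustrative assumption.

```python
def r_vis(entities, visual_elements):
    """Eqn. (2): fraction of reasoning entities grounded in visual elements.
    Soft matching approximated by case-insensitive substring overlap."""
    if not entities:
        return 0.0
    def match(e, elems):
        e = e.lower()
        return any(e in v.lower() or v.lower() in e for v in elems)
    return sum(match(e, visual_elements) for e in entities) / len(entities)

def r_geo(pred, gt, alpha=0.5):
    """Eqn. (3): tiered reward. 0 if the country is wrong, 1 - alpha if only
    the country is correct, 1 if both country and city are correct."""
    (c_hat, t_hat), (c, t) = pred, gt
    if c_hat != c:
        return 0.0
    return alpha * (t_hat == t) + (1 - alpha)

print(r_vis(["clock tower", "fire truck"], ["clock tower", "tram"]))  # 0.5
print(r_geo(("Finland", "Espoo"), ("Finland", "Helsinki")))           # 0.5
print(r_geo(("Finland", "Helsinki"), ("Finland", "Helsinki")))        # 1.0
```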
Each completion $\pmb { a } _ { i } ^ { ( j ) }$ is scored by a composite reward function: $$ r _ { i } ^ { ( j ) } = \lambda _ { 1 } R _ { \mathrm { l o c } } + \lambda _ { 2 } R _ { \mathrm { v i s } } + \lambda _ { 3 } R _ { \mathrm { g e o } } , $$ where $\lambda _ { 1 } , \lambda _ { 2 } , \lambda _ { 3 } \in [ 0 , 1 ]$ are weights controlling the importance of the three reward components: locatability $( R _ { \mathrm { l o c } } )$ , visual grounding consistency $( R _ { \mathrm { v i s } } )$ , and geo-localization accuracy $( R _ { \mathrm { g e o } } )$ . To encourage the model to prefer higher-reward completions within each group, GRPO computes a group-normalized advantage for each candidate: $$ A _ { i } ^ { ( j ) } = r _ { i } ^ { ( j ) } - \frac { 1 } { k } \sum _ { l = 1 } ^ { k } r _ { i } ^ { ( l ) } , $$ which centers rewards within each prompt group. Eqn. (5) guides the policy to optimize relative ranking rather than absolute scores, making it suitable for scenarios with non-uniform reward scales. 
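The composite reward (Eqn. 4) and the group-centered advantage (Eqn. 5) can be sketched as follows; the $\lambda$ weights and the sample reward values are illustrative assumptions, not tuned values from the paper.

```python
def composite_reward(r_loc, r_vis, r_geo, lambdas=(0.2, 0.3, 0.5)):
    """Eqn. (4): weighted sum of the three reward components
    (weights are illustrative, not the paper's tuned values)."""
    l1, l2, l3 = lambdas
    return l1 * r_loc + l2 * r_vis + l3 * r_geo

def group_advantages(rewards):
    """Eqn. (5): center each candidate's reward on its prompt-group mean,
    so the policy optimizes relative ranking, not absolute scores."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

# Three candidate completions for one prompt, scored on the three rewards:
rewards = [composite_reward(0.9, 0.5, 1.0),
           composite_reward(0.9, 0.2, 0.5),
           composite_reward(0.1, 0.0, 0.0)]
print(group_advantages(rewards))  # advantages sum to zero within the group
```

By construction the advantages sum to zero within each group, so only the ranking of candidates drives the policy update.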
The policy is then updated by maximizing the following clipped surrogate objective: $$ \mathcal{L}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{(\mathbf{x}_i, \mathbf{a}_i^{(j)}) \sim \pi_{\theta_{\mathrm{old}}}} \left[ \min\left( \rho_i^{(j)} A_i^{(j)}, \operatorname{clip}(\rho_i^{(j)}, 1 - \epsilon, 1 + \epsilon) A_i^{(j)} \right) - \beta \mathcal{D}_{\mathrm{KL}}\left[ \pi_{\theta} \,\|\, \pi_{\mathrm{ref}} \right] \right], $$ (6) where $\rho_i^{(j)} = \frac{\pi_{\theta}(\mathbf{a}_i^{(j)} \mid \mathbf{x}_i)}{\pi_{\theta_{\mathrm{old}}}(\mathbf{a}_i^{(j)} \mid \mathbf{x}_i)}$ is the likelihood ratio between the current and old policies, and $\epsilon$ is the clipping threshold. The coefficient $\beta$ controls the strength of the $\mathcal{D}_{\mathrm{KL}}$ penalty, and $\pi_{\mathrm{ref}}$ is the reference policy used to constrain updates. In practice, the reference policy $\pi_{\mathrm{ref}}$ is typically instantiated as a previous policy snapshot, serving to regularize updates and ensure training stability.
# 4 Experiments
We conduct qualitative and quantitative experiments to evaluate the effectiveness of our curated dataset MP16-Reason and the GRPO-based training strategy used in GLOBE.
Specifically, we examine whether MP16-Reason enables better geo-reasoning (i.e., the ability to infer geographic locations through interpretable and visually grounded reasoning) compared to conventional image-only datasets (which lack reasoning supervision) and street-view datasets (which offer limited visual diversity), and whether GRPO training yields stronger reasoning performance than supervised fine-tuning. We also compare GLOBE with both open- and closed-source LVLMs.
# 4.1 Experimental Setup
Datasets. The curated dataset MP16-Reason is divided into two subsets: MP16-Reason-Train with 33k samples and MP16-Reason-Test with 12k samples. MP16-Reason-Train is used to train GLOBE, while MP16-Reason-Test is used to evaluate all baseline methods. To ensure a comprehensive comparison, we additionally evaluate all models on the public geo-localization benchmark IM2GPS3K [57]. Evaluation Metrics. We adopt different evaluation metrics for MP16-Reason-Test and the public geo-localization benchmark to account for differences in annotation format and prediction targets. On MP16-Reason-Test, we evaluate model performance using two metrics: (1) city-level accuracy and (2) country-level accuracy. On the public benchmark, we follow previous work [56, 22, 28, 32] and report the percentage of predictions whose geographic distance to the ground-truth coordinate falls within fixed thresholds (25 km, 200 km, and 750 km). Since our model outputs discrete place names (e.g., country or city), we use external tools to convert these names into their corresponding geographic center coordinates for evaluation. Implementation details. We implement GLOBE based on the publicly available Qwen2.5-VL-7B [5], a large vision-language model with strong multimodal understanding capabilities.
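The distance-threshold metric described above (fraction of predictions within 25/200/750 km of the ground-truth coordinate) can be sketched with the haversine great-circle distance; converting place names to center coordinates is assumed to have happened upstream, and the sample coordinates below are illustrative.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def threshold_accuracy(preds, gts, threshold_km):
    """Fraction of predicted coordinates within threshold_km of the ground truth."""
    hits = sum(haversine_km(*p, *g) <= threshold_km for p, g in zip(preds, gts))
    return hits / len(preds)

preds = [(60.17, 24.94), (48.85, 2.35)]   # Helsinki, Paris
gts   = [(60.17, 24.94), (52.52, 13.40)]  # Helsinki, Berlin
print(threshold_accuracy(preds, gts, 25))   # 0.5
print(threshold_accuracy(preds, gts, 750))
```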
Instead of using task-specific supervised fine-tuning as a cold start, we directly fine-tune the model using reinforcement learning based on the GRPO framework described in Section 3.3. All training is conducted using MP16-Reason-Train. Further details are provided in Appendix A.1. Table 1: Localization accuracy on MP16-Reason-Test and IM2GPS3K [57]. Underlined results indicate test–train overlap; $\dagger$ denotes models that are not publicly available. Best and second-best results are in bold and blue, respectively.
# 4.2 Performance Comparison
We evaluate the effects of the curated dataset and training strategy through quantitative metrics and qualitative analysis, as detailed in the following subsections.
# 4.2.1 MP16-Reason: Annotation Benefit and Scene Robustness
We validate the contribution of the MP16-Reason dataset from two perspectives. First, we examine whether reasoning-augmented annotations provide tangible benefits over image-only supervision. Second, we compare MP16-Reason to existing geo-reasoning datasets that are predominantly street-view-centric, assessing generalization performance across diverse visual scenes. Annotation Benefit. To isolate the contribution of our reasoning-based annotations, we ablate the reward components used in GRPO training. Specifically, we compare a model trained only with the geo-localization accuracy reward (GA Reward), which is derived from the original location annotations, as shown in Table 2-row4. As reported in Table 1 and Table 2, GLOBE trained on MP16-Reason significantly outperforms all variants trained on MP-16 (Table 1-row1, Table 1-row2, and Table 1-row3). While PIGEOTTO [22] outperforms GLOBE at the country level, its performance lags significantly at the city and region levels (Table 1-row4). It is worth noting that G3 (GPT-4V) [28] leverages GPT-4V to perform retrieval-augmented generation (RAG) for geo-localization, incorporating external knowledge through a prompt ensembling strategy.
Despite this advantage, our method achieves comparable performance at the city level and even slightly outperforms G3 (GPT-4V) at the region and country levels, as shown in Table 1. This highlights the importance of the additional reasoning-based annotations, such as locatability judgments and visual grounding signals, which were distilled from multiple vision-language models and used to supervise finer-grained aspects of geo-reasoning. Scene Robustness. To assess the generalization ability of geo-reasoning models beyond street-view scenarios, we compare $G L O B E$ with counterpart LVLMs trained on street-view-oriented reasoning datasets (Table 1-row12 and Table 1-row13). While models trained on such datasets perform well on street-view imagery, their generalization degrades considerably in more diverse scenarios (see Table 1). In contrast, MP16-Reason, which includes both street-view and non-street-view images, achieves consistently strong results across a wider range of visual contexts, better reflecting the complexity of real-world geo-localization tasks. # 4.2.2 # GLOBE: Model Performance and Training Strategy Analysis To further evaluate the effectiveness of our proposed model GLOBE, we conduct comprehensive experiments from two perspectives: external baseline comparison and internal strategy ablation. Boston, United States # GPT-4.1 Boston, United States 1. There are several clues suggesting this is an Occupy movement protest, specifically "Occupy," "DON'T GET OUTFOXED JOIN US," and "1000 Cities 90 Countries." 2. The mix of modern and postmodern urban office buildings indicates a major city in a developed country. 3. The protest signs are in English, hinting at an English-speaking country. 4. The QR codes, highly organized signage, and tents are reminiscent of the 2011 Occupy Wall Street movements in the US. 5. 
Checking the architecture: The building with distinctive square windows (with a flower-like pattern) and the rounded white, black-windowed building in the background. These are very characteristic. 6. The open brick plaza, greenery, and the scale of the scene suggest a civic space, likely the kind found in downtown areas. 7. The structure on the right back matches the "One International Place" building in Boston, Massachusetts, USA. 8. The protest is typical of Occupy Boston, which took place in Dewey Square (part of the Rose Kennedy Greenway), with similar building surroundings. # GLOBE 1. Architecture: The architecture features modern urban buildings with high-rise structures, large glass windows, and geometrically patterned facades. The building with white brickwork and decorative grilles suggests a style common in North American cities like New York or Washington D.C., but the urban layout and green spaces with urban parks resembles Boston's Public Garden area. 2. Signage and Banners: The signs with messages related to "Occupy Wall Street" movement suggest a protest against media bias, typical of Occupy protests in major US cities. The tents with colorful banners reading "1000 Cities 80 Centres" could represent Occupy encampments similar to those seen in the Boston Common or other large city squares like Zuccotti Park in New York City. 3. Vegetation: The urban greenery includes shrubs, trees, and flower beds. The presence of a tree with broad leaves and urban landscaping like plants and flowers in city parks points to a temperate climate with deciduous trees like maple or oak. The urban park setting with tents and signs suggests a downtown area with public spaces like Boston Common. 4. Street Layout and Signboards: The urban layout with wide streets, brick sidewalks, and public spaces like squares with protesters and signs suggests a city like Boston with its Freedom Trail or Occupy Boston movements.
Signs like "Don’t Get Outfoxed" align with Occupy Boston protests. 5. Background Buildings: Large red-brick buildings with symmetrical designs and modern office buildings with glass facades resemble Boston's cityscape with Harvard Square or City Hall Plaza areas. # Qwen2.5-VL-7B w/ SFT # New York, United States The architecture visible includes modern buildings with large windows and geometric patterns, which could be found in many urban areas around the world. The presence of signs with English text such as '1000 CITIES SO CONTRARY' suggests that the event is taking place in an English-speaking region. The style of the protest signs and the casual attire of the participants suggest a contemporary setting, possibly in North America or Europe. The specific design of the building in the background, with its distinct window pattern and stone facade, resembles structures commonly seen in U.S. cities like New York or Boston. The vegetation appears well-maintained and typical of a temperate climate, further supporting the idea of a developed urban area. # InternVL3-78B # New York, United States First, I notice the signs in the image, like the one that says "1000 cities, 80 centres." That suggests some kind of organized movement or event happening across multiple locations. The other sign says "DON'T GET OUTFOXED JOIN US," which might be a play on words against Fox News, hinting at a protest or public demonstration. Looking at the architecture, the buildings in the background are modern with a mix of brick and glass. The style isn't overly distinctive, but it's common in many urban areas. There's a tree and some tents, which might indicate a public square or park where protests often take place. The people are dressed in casual clothing, which is typical for public gatherings. The presence of tents and the protest signs makes me think of the Occupy Wall Street movement, which started in New York City.
The Occupy movement was known for setting up encampments in public spaces. The sign "Occupy" also points towards this being related to that movement. If this is Occupy-related, then it's likely in a major city where such events occurred, like New York. The language on the signs is in English, so it's probably in an English-speaking country. Considering all these factors (Occupy movement, urban setting, tents, and the style of the buildings), New York City seems like a strong possibility, specifically Zuccotti Park, which was the hub for Occupy Wall Street. Figure 5: Reasoning comparison of four different models (GPT-4.1 [1], GLOBE, Qwen2.5-VL-7B [5] with SFT, and InternVL3-78B [73]) on the same input image. Reliable visual clues identified by the models are marked in the text. Externally, we benchmark GLOBE against vision-only models without reasoning supervision, general-purpose LVLMs, and task-specific LVLMs trained on street-view reasoning datasets. Internally, we benchmark SFT against GRPO-based reinforcement learning, and perform ablation studies within the GRPO framework to assess the effect of different reward configurations. These comparisons examine the performance of GLOBE and the effects of its key design choices. External Baseline Comparison. We compare GLOBE against three categories of representative baselines. The first group includes traditional approaches, which rely on visual feature matching without supporting reasoning capabilities. The second group consists of general-purpose LVLMs, both open-source (e.g., Qwen2.5-VL [5], InternVL3 [73], and Gemma3 [54]) and closed-source (GPT-4.1 [1]), which are expected to exhibit broader and more generalizable reasoning capabilities. The third group includes task-specific LVLMs trained on geo-reasoning datasets oriented toward street-view imagery. As shown in Table 1, GPT-4.1 [1] demonstrates outstanding performance on this task, but it remains closed-source.
Aside from GPT-4.1 [1], the proposed GLOBE achieves strong accuracy on both the MP16-Reason-Test and the public benchmark IM2GPS3K [57], while also producing more coherent and interpretable reasoning trajectories (see Table 1 and Figure 5). Notably, GLOBE achieves this performance with just 33K samples from MP16-Reason, highlighting the efficiency of reasoning-aware supervision. Table 2: Ablation study on training methods and reward modeling configurations on the MP16-Reason-Test set using the Qwen2.5-VL-7B-Instruct model. Internal Strategy Ablation. We further investigate the impact of different training paradigms and reward configurations on the performance of GLOBE. Specifically, we compare full-parameter SFT and our proposed GRPO-based reinforcement learning approach (see Table 2-row2 and Table 2-row8). For GRPO, we conduct a reward ablation by evaluating different combinations of the three reward components: Locatability (Loc) Reward, Visual Grounding Consistency (VGC) Reward, and Geo-localization Accuracy (GA) Reward. The results demonstrate that GRPO with the complete reward set achieves the highest overall performance (Table 2-row8). Removing the Loc, VGC, or GA Reward results in noticeable drops, underscoring the importance of reasoning-aware supervision beyond location-level correctness (Table 2-row5, Table 2-row6, Table 2-row7). In addition, GRPO demonstrates clear advantages over SFT for the image geo-localization task in LVLMs, offering improved consistency and grounding by directly optimizing the relative quality of generated outputs (Table 2-row2 vs. Table 2-row8). # 5 Discussion Toward Fine-Grained Geo-localization: Limits of Pure Reasoning. While our reasoning-aware framework achieves strong performance at the country and city levels, its effectiveness diminishes when tasked with fine-grained, coordinate-level localization.
This limitation originates from the inherent nature of the reasoning process: predictions are based on high-level semantic cues such as language, architectural style, or vegetation, which often lack the spatial specificity required to differentiate between closely situated locations. For example, multiple European cities may share similar visual patterns, such as Mediterranean-style architecture, the presence of European Union flags, or public signage in English, which makes it difficult for the model to resolve fine-grained geographic ambiguities through reasoning alone. In such cases, even accurate reasoning can only narrow down a broad region but cannot pinpoint an exact location. This highlights a key challenge in reasoning-centric geo-localization: the lack of precise visual-geographic anchoring. To overcome this limitation, future work may explore hybrid approaches that combine reasoning to constrain the candidate region, followed by local feature-based retrieval within that region to achieve coordinate-level precision. Beyond Scale Alone: Data Efficiency in Reasoning-aware Training. Our experiments show that training GLOBE on just 33K high-quality, reasoning-aware samples (MP16-Reason) achieves performance comparable to, and sometimes exceeding, models trained on millions of generic image-text pairs. This highlights that for reasoning-centric tasks, targeted supervision can be more effective than sheer data scale. Our results suggest that aligning supervision with task-specific reasoning offers a more data-efficient path forward for LVLM training. Beyond Geo-localization: GRPO for Reasoning-driven LVLM Tasks. Our findings suggest that GRPO, as a training paradigm, is particularly well-suited for reasoning-driven objectives in LVLMs. Unlike SFT, which often treats outputs as isolated targets, GRPO directly optimizes the relative quality of outputs through scalar reward signals.
This form of supervision allows GRPO to guide complex reasoning behaviors in a more structured and interpretable manner than traditional training objectives. While our work focuses on geo-localization, we believe the GRPO paradigm can be readily extended to other multimodal reasoning tasks, such as visual question answering and multimodal chain-of-thought generation.
Previous methods for image geo-localization have typically treated the task as either classification or retrieval, often relying on black-box decisions that lack interpretability. The rise of large vision-language models (LVLMs) has enabled a rethinking of geo-localization as a reasoning-driven task grounded in visual cues. However, two major challenges persist. On the data side, existing reasoning-focused datasets are primarily based on street-view imagery, offering limited scene diversity and constrained viewpoints. On the modeling side, current approaches predominantly rely on supervised fine-tuning, which yields only marginal improvements in reasoning capabilities. To address these challenges, we propose a novel pipeline that constructs a reasoning-oriented geo-localization dataset, MP16-Reason, using diverse social media images. We introduce GLOBE, Group-relative policy optimization for Locatability assessment and Optimized visual-clue reasoning, yielding Bi-objective geo-Enhancement for the VLM in recognition and reasoning. GLOBE incorporates task-specific rewards that jointly enhance locatability assessment, visual clue reasoning, and geolocation accuracy. Both qualitative and quantitative results demonstrate that GLOBE outperforms state-of-the-art open-source LVLMs on geo-localization tasks, particularly in diverse visual scenes, while also generating more insightful and interpretable reasoning trajectories.
[ "cs.CV" ]
# 1 Introduction With the recent success of deep-thinking large language models (LLMs) – such as OpenAI O1 [15], DeepSeek R1 [7], and Qwen3 [25, 26], which are capable of generating long sequences of thoughts to achieve better performance – there has been a growing need to control the reasoning length of these models while maintaining performance, because many deep-thinking LLMs often incur excessive inference costs with disproportionate performance gains. For example, in Figure 1, we show a response from a deep-thinking model that, while correct, is unnecessarily long. Such extensive reasoning is not always desirable, and there are cases where we need to impose a budget to limit the extent of reasoning, particularly in scenarios that demand real-time interaction, such as customer-facing chatbots, where excessive latency can degrade user experience and responsiveness. Existing thinking budget control methods can be roughly divided into two categories with complementary strengths. The first category is fine-tuning methods, which fine-tune deep-thinking LLMs on specially curated datasets [9] or with budget-aware rewards to enable budget control capabilities [14]. Fine-tuning methods have been shown effective in changing the reasoning length while keeping competitive performance because they allow LLMs to fundamentally restructure and optimize their reasoning behavior according to the given budget. However, they come with two main drawbacks. [Figure 1 content: an example question ("Jen randomly picks 4 distinct numbers from {1–10}. She wins a prize if at least 2 match, and the grand prize if all 4 match. What's the probability she wins the grand prize given she won a prize?") with three reasoning traces: the original model's full thinking (correct but too long; thinking length 2521), thinking with budget forcing (budget 400, thinking length 400; follows the budget but is forced to stop), and thinking with budget guidance (budget 400, thinking length 395; follows the budget naturally), alongside an accuracy-versus-thinking-length plot comparing Full Thinking, Budget Forcing, and Budget Guidance, annotated with a 37% reduction in thinking length and 26% higher accuracy.] Figure 1: Deep-thinking models often produce excessively long reasoning traces, leading to high latency and unnecessary computation. Existing inference-time methods like budget forcing rely on simplistic heuristics such as abruptly stopping, which can result in incomplete reasoning and degraded answer quality. In contrast, our method, budget guidance, steers the reasoning process toward the target budget in a smoother and more natural way, without any LLM fine-tuning. First, fine-tuning an LLM is costly, requiring substantial computational resources and time.
Second, directly fine-tuning the LLM may potentially alter its behavior in unexpected ways, such as compromising safety [21]. The second category of methods is the inference-time methods [19, 20], which seek to alter the reasoning behavior at inference time. While these approaches do not involve fine-tuning, they often result in sub-optimal reasoning behaviors and significant performance degradation, because the interventions at inference time are often heuristic and overly simple, breaking the integrity of the original reasoning process. For example, one well-known inference-time method is budget forcing [20], which terminates the model's reasoning as soon as the thinking budget is reached, as described in Figure 1. While this method offers strict control over the number of generated tokens, abruptly interrupting the model may cut off unfinished thoughts and force premature answers, often leading to incorrect outputs. In short, an important bottleneck in the task of thinking budget control lies in the tradeoff between non-intrusiveness (in inference-time approaches) and optimality of the reasoning chain (in fine-tuning approaches). This leads to our central research question: Can we design a flexible inference-time budget control approach (without fine-tuning) that still allows for holistic, principled restructuring of the reasoning process to maintain its quality under budget? In this paper, we introduce budget guidance, a novel approach that employs a lightweight auxiliary module to enable test-time control over the reasoning length of LLMs. Inspired by the principle of classifier guidance in diffusion models [4], we train an auxiliary predictor that predicts the probability distribution of the remaining reasoning length at each reasoning step. The predicted length distribution is then used to modulate the LLM generation probability, effectively turning it into a budget-conditional generation probability.
Our method avoids the direct fine-tuning of LLMs, while providing flexible and accurate control over the reasoning process. It can be seamlessly integrated into existing inference pipelines, and adapts to a wide range of models, thinking budgets, and tasks. Our experiments have revealed several key highlights of our method. First, budget guidance exhibits a remarkable trade-off between thinking length and performance. For example, as shown in Figure 1, on the MATH-500 benchmark [12] budget guidance can reduce the full thinking length by $37\%$ with minimal accuracy degradation, while being $26\%$ higher in accuracy than the budget forcing baseline under a tight budget. Second, the auxiliary predictor is very successful in predicting the thinking length, effectively considering task difficulty and instruction type. Thus, it can accurately guide the thinking process under various budgets. Finally, our method demonstrates surprising generalizability across domains – an auxiliary predictor trained on one dataset can also work well in other datasets and domains. We summarize our contributions as follows: • We propose budget guidance, a novel test-time method for steering the reasoning process of LLMs toward a specified thinking budget, without requiring any fine-tuning of the LLM itself. • We design a lightweight predictor that models a Gamma distribution over the remaining reasoning length based on the current generation context, and uses this signal to guide LLM generation toward a target thinking budget. • Budget guidance achieves strong trade-offs between thinking length and accuracy across multiple benchmarks, and demonstrates cross-domain generalization, enabling effective budget control and accurate thinking length prediction. # 2 Related Works # 2.1 Efficient LLM Reasoning Efficiency is a fundamental topic in machine learning, and recent work has focused on improving the token efficiency of LLMs in long chain-of-thought reasoning.
For example, ThinkPrune [14] employs reinforcement learning with an iterative pruning strategy to shorten reasoning traces; Z1 [27] enables efficient test-time scaling by training on data with varying reasoning lengths and introducing a shifted thinking window for hybrid-mode inference; COCONUT [10] operates in continuous space to encourage reasoning with fewer tokens. While effective, these methods typically rely on expensive LLM fine-tuning and primarily aim to reduce the length of reasoning, rather than to control it. More recent approaches [9, 20] have begun exploring methods to control the reasoning length, either through heuristic rules or model fine-tuning. In contrast, we propose a simple yet effective alternative: a fine-tuning-free approach that naturally steers the reasoning process to adhere to a specified thinking budget, enabling more efficient and flexible inference. # 2.2 Guidance and Guided Generation The term guidance originates primarily from the diffusion model literature, where it denotes the ability to steer the generative process, often through truncated or low-temperature sampling, by reducing the variance or range of noise inputs to the generative model at sampling time [13]. This effectively transforms an unconditional diffusion model into a conditional one, enabling it to generate targeted outputs. One of the earliest examples is classifier guidance [4], which modifies the diffusion score by incorporating the gradient of the log-likelihood from an auxiliary classifier, thereby biasing the sampling process toward desired content. This can be viewed as a form of guided generation, where image generation is conditioned on the output of a classifier. A similar notion of guided generation has emerged in the context of LLMs, where it typically refers to constraining the model’s output to satisfy structural requirements, such as regular expressions or context-free grammars, to ensure syntactic correctness for downstream applications [23]. 
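Grammar- or regex-constrained generation of this kind is typically implemented by masking the model's next-token scores so that only tokens permitted by the current constraint state remain, then renormalizing. A minimal sketch of that general pattern (the toy vocabulary and the digit-only constraint are illustrative assumptions, not taken from any specific library or from this paper):

```python
import math

def guided_step(logits, allowed):
    """Mask disallowed tokens, then softmax-renormalize the survivors.

    logits:  dict mapping token -> raw score from the language model
    allowed: predicate deciding which tokens satisfy the constraint
             (in real systems, a regex/grammar state machine)
    """
    masked = {t: s for t, s in logits.items() if allowed(t)}
    z = sum(math.exp(s) for s in masked.values())
    return {t: math.exp(s) / z for t, s in masked.items()}

# Toy constraint: only digit tokens are syntactically valid next.
logits = {"4": 1.0, "2": 0.5, "x": 2.0}
probs = guided_step(logits, str.isdigit)
# "x" is excluded despite having the highest raw score;
# probability mass is redistributed over the valid tokens.
```

The same mask-and-renormalize skeleton underlies the budget guidance idea below, except that the hard 0/1 mask is replaced by a soft, length-aware weight per token.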
To the best of our knowledge, our work is the first to extend the idea of guided generation to a new dimension: budget-conditioned generation. Specifically, we introduce a novel form of guidance that softly steers the LLM's generation to meet a specified thinking budget, enabling efficient and controlled reasoning without compromising output quality. # 3 Budget Guidance We now introduce our method in detail. In Section 3.1, we begin by formulating the budget-conditioned generation problem and present the overall budget guidance framework, which draws inspiration from classifier guidance [4] in diffusion models. Section 3.2 describes the design of our proposed auxiliary thinking length predictor, which estimates the distribution over remaining reasoning length at each decoding step. In Section 3.3, we outline the training procedure for the predictor using reasoning traces. Section 3.4 introduces the model architecture of the predictor, which is designed to be lightweight and inference-efficient. Finally, Section 3.5 presents a simple modulation-skipping strategy to further reduce computational overhead during decoding. An illustration of our method is provided in Figure 2. Figure 2: An overview of budget guidance. A lightweight predictor uses the LLM's hidden states to predict a Gamma distribution over the remaining reasoning length for each candidate token. We then use the CDF of the Gamma distribution to compute a predictor score, which is combined with the LLM's output score to guide generation. The result is soft, token-level steering toward budget-conditioned reasoning without any LLM fine-tuning. # 3.1 The Budget Guidance Framework The overall framework of our method follows the classifier guidance framework in diffusion generation [4], thus we name our framework budget guidance. Specifically, denote $X$ as the input question, $Y_{<t}$ as the LLM's output thinking process up to token $t$, and $Y_t$ as the LLM's output at token $t$.
The LLM generation process essentially involves sampling from the following budget-unconditional distribution, $p(Y_t | X, Y_{<t})$. However, when there is a budget constraint, we would need to draw from a budget-conditional distribution. Formally, denote $L_t$ as the random variable indicating the remaining length of the thinking process from token $t$. For example, if the overall thinking length is $l$ (i.e., the </think> token occurs at token $l$), then $L_t = l - t$. Given the thinking budget limit $\bar{l}$, the budget-conditional distribution is defined as $p(Y_t | X, Y_{<t}, L_t \leq \bar{l} - t)$. According to Bayes' rule, the budget-conditional distribution can be computed from the budget-unconditional distribution as follows $$ \underbrace{p(Y_t | X, Y_{<t}, L_t \leq \bar{l} - t)}_{\text{budget-conditional}} \propto \underbrace{p(Y_t | X, Y_{<t})}_{\text{budget-unconditional}} \cdot Pr(L_t \leq \bar{l} - t | X, Y_{<t}, Y_t). $$ Therefore, at each token $t$, generating from the budget-conditional distribution involves three steps. First, compute the unconditional distribution, which is simply performing a forward pass of the LLM. Second, predict the remaining length distribution, $Pr(L_t \leq \bar{l} - t | X, Y_{<t}, Y_t)$. Finally, use the remaining length distribution to modulate the unconditional distribution and then renormalize. Therefore, within the budget guidance framework, our task boils down to computing $Pr(L_t \leq \bar{l} - t | X, Y_{<t}, Y_t)$. To this end, we introduce a lightweight auxiliary thinking length predictor, which we describe in detail over the next three subsections.
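The modulate-and-renormalize step amounts to a few lines of code; a minimal sketch, where the predictor's per-token "fits the budget" probabilities are mocked with fixed numbers (in the actual method these come from the learned length predictor described next):

```python
def budget_conditional(u_t, a_t):
    """Modulate-and-renormalize: c_t proportional to u_t * a_t, where
      u_t[i] = p(Y_t = v_i | X, Y_<t)                  (LLM forward pass)
      a_t[i] = Pr(L_t <= l_bar - t | X, Y_<t, Y_t=v_i) (length predictor)
    """
    c = [u * a for u, a in zip(u_t, a_t)]
    z = sum(c)
    return [x / z for x in c]

u_t = [0.5, 0.3, 0.2]   # unconditional next-token probabilities
a_t = [0.9, 0.1, 0.5]   # mocked probabilities that the remaining thinking fits the budget
c_t = budget_conditional(u_t, a_t)
# Tokens whose continuations are predicted to overshoot the budget
# (here the second token) are down-weighted relative to u_t.
```

Note that the renormalization absorbs the proportionality constant in the Bayes-rule expression, so the predictor only needs relative, not calibrated, CDF values per token.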
# 3.2 An Auxiliary Thinking Length Predictor Denote the LLM vocabulary size as $n$, and denote the vocabulary as $\mathcal{V} = \{v_1, \ldots, v_n\}$. At each token $t$, the LLM outputs an $n$-dimensional unconditional probability vector (which we denote as $u_t$): $$ u_t = [p(Y_t = v_1 | X, Y_{<t}), \ldots, p(Y_t = v_n | X, Y_{<t})]. $$ According to Equation (1), the predictor needs to predict an $n$-dimensional vector (which we denote as $a_t$): $$ a_t = [Pr(L_t \leq \bar{l} - t | X, Y_{<t}, Y_t = v_1), \cdots, Pr(L_t \leq \bar{l} - t | X, Y_{<t}, Y_t = v_n)], $$ so that the budget-conditional probability vector, which we denote as $c_t$, can be computed by element-wise multiplying the two vectors and renormalizing: $$ c_t = \mathrm{normalize}(u_t \circ a_t). $$ Equation (3) indicates that the predictor needs to accomplish a rather intensive task: At each token $t$, given the question $X$ and all the context generated so far $Y_{<t}$, the auxiliary predictor needs to ① traverse all possible values for $Y_t$ across the vocabulary, ② for each possible value, predict what the remaining length would be if $Y_t$ took on this value (that is, $n$ probability distributions in total), and ③ compute the cumulative probability up to $\bar{l} - t$ for each distribution. To simplify the task, we parameterize each predicted distribution as a Gamma distribution for $\log(L_t)$: $$ p(L_t | X, Y_{<t}, Y_t = v_i) = \mathrm{Gamma}(\log(L_t); \lambda_t(v_i), \alpha_t(v_i)), $$ where $\mathrm{Gamma}(\cdot; \lambda, \alpha)$ represents the probability density function (PDF) of the Gamma distribution, with shape parameter $\lambda$ and rate parameter $\alpha$. We model the distribution over $\log(L_t)$ instead of $L_t$ directly to better capture the dynamic range of thinking lengths. With the Gamma distribution assumption, instead of predicting $n$ probability distributions, we only need to predict two $n$-dimensional vectors: $\lambda_t = [\lambda_t(v_1), \ldots, \lambda_t(v_n)]$ and $\alpha_t = [\alpha_t(v_1), \ldots, \alpha_t(v_n)]$.
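Under this parameterization, each entry of $a_t$ reduces to a single Gamma-CDF evaluation at $\log(\bar{l} - t)$. A self-contained sketch (production code would call a library routine such as scipy.stats.gamma.cdf; the pure-Python series expansion here just keeps the example dependency-free):

```python
import math

def gamma_cdf(x, shape, rate):
    """Regularized lower incomplete gamma P(shape, rate*x),
    computed via the standard series expansion of the incomplete gamma function."""
    if x <= 0:
        return 0.0
    z = rate * x
    term = 1.0 / shape
    total = term
    k = 1
    while True:
        term *= z / (shape + k)
        total += term
        if term < 1e-12 * total:
            break
        k += 1
    return total * math.exp(-z + shape * math.log(z) - math.lgamma(shape))

def remaining_length_fits(budget_left, shape, rate):
    """Pr(L_t <= budget_left) when the Gamma distribution is placed
    over log(L_t), as in the parameterization above."""
    return gamma_cdf(math.log(budget_left), shape, rate)
```

Since the CDF is monotone, a larger remaining budget always yields a larger per-token weight, which is what makes the modulation push generation toward wrapping up as the budget tightens.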
The cumulative probability, $a_t$, can be computed from the predicted $\lambda_t$ and $\alpha_t$ by the known closed-form cumulative distribution function (CDF) of the Gamma distribution. # 3.3 Training the Predictor To train the predictor, we need to collect a dataset of reasoning chains produced by the target LLM. Formally, each datum in the dataset takes the following form: $\mathcal{D} = \{(x, y_{1:l}, l)\}$, where $x$ is the input question, $y_{1:l}$ is the LLM-generated reasoning chain, and $l$ is the length of the reasoning chain. Note that the task dataset from which reasoning chain length training data are generated is not the same as the inference dataset (not even the same task), as we will show that the trained predictor has good dataset and task generalizability. For simplicity, in our training, we focus on math reasoning and use the OpenR1-Math-220k dataset [6]. For each training datum, $(x, y_{1:l}, l)$, we feed the information of a partial reasoning chain to the predictor, truncated at different positions, and train the predictor to predict the remaining length. We adopt the maximum log-likelihood objective for the gradient descent training. Formally, denote the parameters of the auxiliary predictor as $\theta$.
Then the training objective can be written as $$ \operatorname*{max}_{\theta} \mathbb{E}_{(x, y_{1:l}, l) \sim \mathcal{D}} \bigg[ \sum_{t=1}^{l-1} \log \big( p_{\theta}(L_t = l - t | X = x, Y_{<t} = y_{<t}, Y_t = y_t) \big) \bigg], $$ where $p_{\theta}(\cdot)$ represents the predicted PDF by the auxiliary predictor, as shown in Equation (5). # 3.4 Architecture of the Predictor The predictor is designed to be lightweight enough to avoid significant computational overhead during decoding, yet expressive enough to capture both the input question and the ongoing reasoning context to produce a meaningful estimate of the remaining reasoning length. To this end, we adopt BERT-base [3] as the backbone of our predictor. Its input consists of the concatenated hidden states from all layers of the last generated token of the target LLM, which encode rich semantic information about both the input question and the reasoning history. A linear projection maps the LLM's hidden dimensionality to the predictor's input space, and a [CLS] token is used to summarize the hidden states. The final [CLS] representation is passed through another linear projection to produce an output matrix $M \in \mathbb{R}^{n \times 2}$, where each row corresponds to the parameters $\lambda_t$ and $\alpha_t$ of a Gamma distribution. A softplus activation [5] is applied to ensure both parameters are non-negative. # 3.5 Skipping Modulation Ideally, probability modulation in Equation (4) would be applied at every decoding step $t$. To reduce computational overhead, however, we apply it only at the start of each reasoning paragraph, indicated by newline delimiters, where uncertainty is typically highest.
The modulation is thus defined as: $$ c_t = \begin{cases} \mathrm{normalize}(u_t \circ a_t), & \text{if } t \text{ is the start of a reasoning paragraph} \\ u_t, & \text{otherwise.} \end{cases} $$ Empirically, we find that this modulation introduces only a $0.6\%$ increase in total latency for a 7B-parameter LLM, which is negligible in practice. # 4 Experiments # 4.1 Settings Training. We apply our method to three deep-thinking models: DeepSeek-R1-Distill-Qwen-7B (R1-7B) [7], DeepSeek-R1-Distill-Qwen-32B (R1-32B) [7], and Qwen3-8B [25, 26]. Training is conducted on OpenR1-Math-220k [6], a dataset of 220k math problems from NuminaMath 1.5 [17] with reasoning traces generated by DeepSeek R1. We apply a simple data augmentation technique (detailed in the Appendix) to double the dataset size. During training, the LLMs are frozen and only the predictor is updated. We train for one epoch using a batch size of 8 and a constant learning rate of $1.0 \times 10^{-4}$ after warmup. Training takes 15 hours for R1-7B and Qwen3-8B, and 35 hours for R1-32B, using 8 NVIDIA H100 GPUs. All evaluations are conducted on the same hardware setup. Evaluation. We evaluate our method on four representative math reasoning benchmarks: MATH-500 [12], AIME-2024 [1], AMC [2] (including both AMC12 2022 and AMC12 2023), and the math subset from OlympiadBench [11]. These benchmarks cover diverse mathematical topics, including arithmetic, algebra, combinatorics, etc., and span a broad range of difficulty levels. Besides math benchmarks, we also extend our evaluation to broader domains to test the out-of-domain transferability of our math-data-trained predictor.
Specifically, we further evaluate on GPQA Diamond [22] for scientific reasoning, FOLIO [8] for logical reasoning, the numerical reasoning subset from TableBench [24] for tabular numerical reasoning, and LiveCodeBench [16] (2024-08 to 2025-01, following [7]) for code reasoning. All experiments are conducted in a zero-shot manner, i.e., we do not perform further fine-tuning on the training sets of the evaluation benchmarks. We use greedy decoding for all evaluations. Baselines. We compare our method with other methods that also do not fine-tune the LLM. Our main baseline is budget forcing [20], which enforces a hard token limit by appending an end-of-thinking delimiter (and optionally "Final Answer:") to trigger early exit and force the model to produce its best guess. We use their open-sourced codebase for evaluation. We also include NoThinking [19] as a baseline, which bypasses the reasoning stage by inserting a fixed phrase as the thinking process: "Okay, I think I have finished thinking." We also report results from the original model with full thinking as a reference. # 4.2 Main Results # 4.2.1 Evaluation on Math Reasoning Benchmarks Since the predictor is trained on math data, we first evaluate its performance on math reasoning benchmarks to assess in-domain effectiveness. We set the thinking budget to approximately half the original model's full thinking length, ensure the average thinking length (denoted as #Tokens) is comparable between our method and the baseline, and report the task accuracy. Table 1 summarizes the evaluation results on math reasoning benchmarks. Across all three models and four datasets, budget guidance consistently outperforms budget forcing under comparable average thinking lengths, effectively reducing the reasoning length without causing significant accuracy degradation.
Compared to NoThinking, budget guidance achieves substantially higher performance, indicating that the reasoning traces are non-trivial and contribute meaningfully to task success. Table 1: Evaluation results on math benchmarks. Figure 3: Accuracy vs. thinking length on math benchmarks. These improvements are consistent across different model sizes (7B to 32B) and model families (DeepSeek vs. Qwen), highlighting the general applicability of our approach to diverse deep-thinking LLMs. Notably, even though the predictor for Qwen3-8B is trained on reasoning traces generated by DeepSeek-R1, it still performs well. This suggests that the training data can be model-agnostic, provided the target LLM exhibits a similar reasoning style, for instance, using words like “wait” or “alternatively” to structure its reasoning process. # 4.2.2 Accuracy–Thinking Length Tradeoff Analysis A key indicator of effective control is the ability to achieve higher accuracy under the same thinking length, which we call token efficiency. To evaluate and compare the token efficiency of our method across different reasoning lengths, we vary the token budget to obtain different average thinking lengths and record the corresponding accuracy achieved by the model. We visualize this relationship through accuracy-thinking length trade-off curves. Experiments are conducted on all three models across the four math benchmarks, and the resulting plots are presented in Figure 3. Figure 4: Thinking length controllability measured on MATH-500 benchmark. Table 2: Evaluation on out-of-domain transferability. From Figure 3, we observe that our method consistently achieves better token efficiency across most benchmarks, achieving higher accuracy than budget forcing under a range of thinking lengths. 
Notably, as the average thinking length decreases, corresponding to stricter budget constraints, our method yields significantly higher accuracy, particularly on benchmarks with diverse problem difficulty such as MATH-500. We attribute this to the ability of our method to adapt the reasoning pattern under strict budgets, producing concise yet complete reasoning traces. This enables the model to arrive at correct answers more efficiently, especially for questions that are relatively easy and do not require deep reasoning. This is also reflected in the occasionally worse accuracy of budget forcing compared to the NoThinking baseline under strict budgets (e.g., MATH-500 on DS-7B/32B), where the reasoning trace is abruptly truncated and the model is forced to guess prematurely. In contrast, our method avoids such incomplete reasoning and consistently outperforms the NoThinking baseline. An illustrative example of this guided reasoning behavior is provided in Section 4.4. # 4.2.3 Fine-Grained Control of Thinking Length Our goal is to steer LLM reasoning to adhere to a specified thinking budget. To evaluate controllability, we test on MATH-500 under varying thinking budgets, measuring the actual thinking length per sample and visualizing the distributions. We compare our method to budget forcing and include the full-thinking baseline as a reference. Results across all three models are shown in Figure 4. From Figure 4, we observe that our method behaves similarly to budget forcing and generally respects the specified thinking budget: for each setting, at least $75\%$ of samples are within the budget, and the median thinking length closely aligns with the budget. Compared to the full-thinking baseline, our method guides the model to generate a budget-aligned reasoning trajectory. This behavior is notable because, unlike budget forcing, our approach does not enforce a hard cutoff.
Instead, it softly steers the generation process to match the desired level of detail, demonstrating flexible and controllable reasoning.

# 4.2.4 Out-of-Domain Transferability

While we train the predictor solely on math data for simplicity, we also explore its generalization to broader task domains. To this end, we conduct an out-of-domain transferability analysis using the DS-7B model. Specifically, we evaluate our method on four benchmarks: GPQA Diamond (scientific reasoning), FOLIO (logical reasoning), TableBench (tabular reasoning), and LiveCodeBench (code reasoning). We match the average reasoning length between our method and the baseline, and report the corresponding accuracies in Table 2. Despite being trained exclusively on math data, our predictor generalizes well to non-math reasoning tasks, consistently outperforming budget forcing across all four benchmarks.

Figure 5: Correlation between question difficulties and estimated thinking lengths.

Figure 6: Correlation between prompt types and estimated thinking lengths.

Figure 7: Sample reasoning traces generated with budget guidance under different thinking budgets, for the question: the graph of $f(x) = \frac{2x}{x^2 - 5x - 14}$ has vertical asymptotes $x = a$ and $x = b$, and horizontal asymptote $y = c$; find $a + b + c$. Under a budget of 300 tokens, the model produces a concise, non-reflective trace (thinking length: 260): it factors the denominator as $(x - 7)(x + 2)$, giving vertical asymptotes $x = 7$ and $x = -2$; since the numerator degree (1) is below the denominator degree (2), the horizontal asymptote is $y = 0$; hence $a + b + c = 7 + (-2) + 0 = 5$. Under a budget of 600 tokens, the model produces a longer trace (thinking length: 602) that reaches the same answer with exploratory, self-checking phrasing (e.g., "Wait, hold on" and "Double-checking").

These results highlight the cross-domain generalizability of our approach and its potential applicability to a wide range of reasoning scenarios. While the gains on out-of-domain tasks are less pronounced than those on in-domain benchmarks, we believe performance can be further improved by incorporating reasoning traces from a broader range of domains during training. We leave this direction for future work.

# 4.3 Insights into What the Predictor Learns

To probe what the predictor has learned, we analyze its estimated thinking length at the first thinking token, interpreted as the predicted number of thinking tokens needed, against task difficulty and prompt type, using the DS-7B model. Task Difficulty. We evaluate on MATH-500 (in-domain) and LiveCodeBench (out-of-domain).
Figure 5 shows that estimated thinking length increases with difficulty in both cases. This suggests that the predictor captures a general understanding of difficulty, enabling effective difficulty estimation. Prompt Type. We evaluate on MATH-500 and compare two prompts: one encouraging long reasoning and one encouraging concise reasoning (listed in the Appendix). As shown in Figure 6, the long reasoning prompt yields longer estimated thinking lengths. A t-test gives a $p$-value of 0.0028, confirming that the difference is statistically significant and indicating that the predictor is prompt-aware.

# 4.4 Case Study

Figure 7 shows a case study from MATH-500 illustrating reasoning traces under different thinking budgets. Rather than truncating output, our method adapts the reasoning style to the budget. With a stricter budget (left), the model generates concise answers without reflection. With a larger budget (right), it mirrors full-length reasoning: it begins with problem analysis and uses reflective phrases like "wait" and "double-checking." In both settings, the trace ends appropriately, highlighting our method's flexibility and controllability.
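The prompt-type comparison in Section 4.3 reports a t-test on estimated thinking lengths. As a self-contained sketch of the same kind of significance check, the following uses a permutation test on the difference of means; this is our illustrative stand-in (not the paper's t-test), with hypothetical data shapes:

```python
import random

def permutation_pvalue(a, b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of sample means.

    `a` and `b` are lists of per-question estimated thinking lengths
    under the two prompts. Repeatedly shuffles the pooled data and
    counts how often a random split yields a mean difference at least
    as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm
```

A small returned value (e.g., below 0.05) indicates, as in the paper's t-test, that the two prompts produce reliably different estimated thinking lengths.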
Recent deep-thinking large language models often reason extensively to improve performance, but such lengthy reasoning is not always desirable, as it incurs excessive inference costs with disproportionate performance gains. Controlling reasoning length without sacrificing performance is therefore important, but remains challenging, especially under tight thinking budgets. We propose budget guidance, a simple yet effective method for steering the reasoning process of LLMs toward a target budget without requiring any LLM fine-tuning. Our approach introduces a lightweight predictor that models a Gamma distribution over the remaining thinking length during next-token generation. This signal is then used to guide generation in a soft, token-level manner, ensuring that the overall reasoning trace adheres to the specified thinking budget. Budget guidance enables natural control of the thinking length, along with significant token efficiency improvements over baseline methods on challenging math benchmarks. For instance, it achieves up to a 26% accuracy gain on the MATH-500 benchmark under tight budgets compared to baseline methods, while maintaining competitive accuracy with only 63% of the thinking tokens used by the full-thinking model. Budget guidance also generalizes to broader task domains and exhibits emergent capabilities, such as estimating question difficulty. The source code is available at: https://github.com/UMass-Embodied-AGI/BudgetGuidance.
# Introduction Psychophysics experiments that require the physical presence of participants in a laboratory suffer from limitations in sample size and recruitment options. For example, they often underrepresent participants with accessibility and mobility limitations and overrepresent student populations (Druckman & Kam, 2009; Sears, 1986). Moreover, data acquisition can be affected, or even halted, by external circumstances, as exemplified by the COVID-19 pandemic shutdowns. Running experiments online can overcome these constraints, since participants can take part outside the laboratory, in the convenience of their own homes. Psychophysics experiments vary broadly in their structure, design, and the type of stimuli they use, depending on the scientific objectives. Hence, tools for conducting psychophysics experiments online need to allow for flexible customization. However, there is no unified platform that singlehandedly enables this. Instead, researchers are generally required to integrate multiple platforms, each with different functionalities. Specifically, at least two platforms are needed: one for building the experiment and another for hosting it. This can be a complex, expensive, and skill-dependent process, considerably raising the barrier to entry for some researchers and students. The scientific community has developed many valuable and open-access tools for this purpose (de Leeuw, 2015; Lago, 2021; Lange et al., 2015; Onnela et al., 2021; Peirce et al., 2019; Schwarzbach, 2011; Torous et al., 2016; Woods et al., 2015; see Grootswagers, 2020, for an overview). Table 1 lists several popular alternatives for building psychophysics experiments (left half of the table) and their corresponding platforms for hosting experiments (right half of the table; although other combinations are also possible).
However, building custom experiments with these platforms often requires knowledge of their respective programming languages and/or familiarity with dedicated software applications. Moreover, additional programming skills are required to configure the servers, whether the experiment is hosted on a local server or hosted using cloud services. Table 1 – Popular platforms for building and hosting psychophysical experiments online. Another critical challenge in conducting psychophysics experiments online is the lack of control over the experimental environment, compared to laboratory-based experiments. If not taken into account, confounding factors, such as visual acuity and variations in viewing distance, add noise to the experimental measures obtained. Additionally, many current online platforms do not validate the credibility of participants adequately (Chandler et al., 2014; Chmielewski & Kucker, 2020; Kennedy et al., 2020; Paolacci & Chandler, 2014; Peer et al., 2022). This reduces data quality and reliability. Thus, a unified and user-friendly platform for online experiments, with improved experimental control and participant validation, would provide a valuable contribution. This paper presents an open-source Modular Online Psychophysics Platform (MOPP). MOPP provides a simple web-based interface to build, manage, and launch online experiments. Additionally, it has integrated tools to calibrate for viewing distance, measure visual acuity, and confirm participants' credibility. We collected pilot data for five example tasks that come preloaded in the MOPP environment. The results were comparable to those reported when run in laboratory settings. In the following three sections, we provide an overview of MOPP's architecture (Section A), describe a typical workflow (Section B), and present pilot data gathered with MOPP (Section C). # A. MOPP architecture The MOPP architecture consists of three integrated components (Figure 1a). i) The server-side component.
This contains the code for running experiments. It controls the progression of the experiments, sends stimulus details to the participant’s web browser (on the client side), and receives information in return (e.g., responses). It runs in ‘Node.js’ – an open-source JavaScript (JS) server environment. ii) The database component. This enables the efficient management and storage of the experimental data. It runs on the server and interacts with the server-side component for data exchange. This was implemented using ‘MongoDB’, an open-source database management program. iii) The client-side component. This provides a web-based interface for human interaction with MOPP. It provides access either as a participant (to perform the experiment) or as a researcher (for experiment management). On the participant’s web browser, it displays experiment-related content, including text, stimuli, and input fields. On the researcher side, it displays a web interface for building, launching, and managing experiments. The client-side component was developed using ‘React.js’, an open-source JS library for user interfaces. Figure 1: MOPP overview. (a) MOPP consists of three main components: (i) the server-side component - ‘Node.js’, (ii) the database component - ‘MongoDB’, and (iii) the client-side component - ‘React.js’. The researcher (green) interacts with all three components, while the participant (blue) only interacts with the client side. (b) A typical workflow on the platform. The researcher first builds an experiment (by adding experimental tasks and defining their details) and then launches it. During data collection, participants enter the experiment and undergo email and IP-address authentication, and reCAPTCHA verification. Then, they undergo visual calibration tests to calibrate for viewing distance and to measure visual acuity (per participant). After completing these steps, participants perform the experiment. Finally, the researcher downloads the data. 
# Code availability and installation process MOPP is an open-source project, and the code is publicly available at: https://gitlab.com/mopp.proj/mopp-project/. Before running experiments with MOPP, the researcher first needs to complete the setup process by following the step-by-step instructions in the MOPP guide (available on the project's Gitlab page). The MOPP guide explains the process of hosting an experiment using the AWS cloud (this requires creating an AWS account). We chose AWS because of its popularity and built-in services. For example, it offers adjustable computational power and storage capacity according to the experimental requirements. MOPP can also be hosted on other cloud servers (e.g., Microsoft Azure or Google Cloud) or on a local server; however, we only cover AWS hosting in the scope of this paper. # B. MOPP workflow Figure 1b presents a typical workflow on the platform. It consists of four main steps (green boxes). The first step begins with the researcher, who builds an experiment according to their scientific objectives. This includes adding the relevant tasks and defining the details of the stimuli. In the second step, the researcher launches the experiment and shares a dedicated link with the participants, by which they access the experiment. The third step, data collection, involves the participants (blue boxes). During this step, the participants enter the experiment via a web browser from their personal computers (PCs). The participants then undergo several tests, including authentication and verification of the participants' credibility, and visual calibration tests to account for viewing distance and to measure visual acuity. Following these tests, the participants perform the actual experiment. Finally, in the last step, the researcher downloads the acquired data via MOPP. This workflow is further described in detail below. # 1.
Build experiment The researcher builds an experiment through the researcher portal (Figure 2a), which is accessed via a computer web browser. The researcher portal displays a summary table of the stored experiments. The table presents the experiment name, the number of tasks within each experiment, how many participants have accessed the experiment, and how many of those have completed it (some details are omitted in Fig. 2a, for simplicity). From the summary table, the researcher can open the experiment page (Figure 2b; each experiment has its own separate page). There, the researcher can build, edit, and launch the experiment. Figure 2: Illustration of the web-based interface for managing, building, and launching experiments. (a) The researcher portal displays a list of saved experiments with key information. On the researcher portal, the researcher can create a new experiment (using the + button). Clicking on the experiment name (e.g., 'Experiment 1') opens the experiment page. (b) On the experiment page, the researcher can clone a previous experiment (using the clone button), define the order of the tasks (drag-and-drop using the cursor), and launch the experiment (using the launch button). The pre-experiment questions can be edited (using the edit button). Clicking on a task name, e.g., 'Numerosity', opens the task page (c), where task-specific settings can be modified. To get started, the researcher can choose an experiment from the preloaded list of sample experiments, or create a new one (plus button in Figure 2a). The researcher can copy an experiment with all its settings using the clone feature (green button in Figure 2b) and then edit its details. The clone feature enables the researcher to copy and edit an experiment under a new name without changing the original. # 1.1. Pre-experiment questions Many psychophysics experiments begin with a set of self-reported questions.
These typically ask participants to give informed consent, to provide information regarding demographics (e.g., age, gender), to report whether they wear glasses, and whether they suffer from medical conditions (neurological, psychiatric, etc.). By default, MOPP adds these questions to the beginning of each experiment. The researcher can replace these default questions by uploading a custom set of questions (light gray button in Figure 2b). For instance, if a crowdsourcing platform is used to recruit participants (e.g., Prolific or MTurk), a question can be added to obtain the participants' platform-user-IDs for payment approval. To replace the default questions, the researcher must upload a JavaScript Object Notation (JSON) file containing the new set of questions. Several websites can be used to create and download JSON files, such as SurveyJS (https://surveyjs.io/create-survey) or Qualtrics (https://www.qualtrics.com). Thus, no programming skills are required to upload a new set of questions. # 1.2 Preloaded tasks In order to demonstrate MOPP functionality, five example tasks were developed and loaded to MOPP, as described in Table 2. Table 2 – Description of the five preloaded tasks # 1.3 Adding tasks to an experiment The researcher can add the preloaded tasks to a specific experiment using MOPP's interface, without programming, from a drop-down list (not shown in Fig. 2b for simplicity). The settings of the tasks can be modified from the task page (Figure 2c), where the task description, number of trials, stimulus duration, and response duration can be set. # 1.4 Creating a new task Beyond the preloaded tasks, if a new task is required, it must be designed and developed in advance (or copied from another user). With MOPP, all tasks should be developed in JS, a popular web development language. Researchers proficient in JS programming can develop tasks from scratch, directly in JS.
Otherwise, tasks can be developed without JS expertise using the jsPsych platform (de Leeuw, 2015). With jsPsych, tasks are created from plugins – templates of JS code with experimental events, task components, or even complete standalone tasks. Researchers can choose from an extensive library of available plugins to create different tasks, and then load their new tasks to MOPP. When developing a new task, the researcher should decide which task-specific settings will be modifiable from the task page (rather than having to modify the code). For instance, if a task's stimulus magnitude is obtained from a certain distribution, the researcher can set the distribution parameter(s) from the task page. Including modifiable task-specific settings during development provides flexibility to the researcher, and prevents the need for additional (re)programming. After developing a new task, it should be loaded into MOPP. Then it can be added to experiments (similar to the preloaded tasks). The MOPP user guide demonstrates how to load a task designed in jsPsych to MOPP (section 2c of the guide). It takes the researcher step-by-step through this process with the Random Dot Kinematogram (RDK) task (Rajananda et al., 2018) as an example. To ensure task understanding and to familiarize participants with the response format, researchers can use MOPP’s practice trial feature. Practice trials help improve data quality and reduce the likelihood of excluding participants due to task misunderstanding - a common challenge in online experiments (Chandler et al., 2014; Chmielewski & Kucker, 2020; Kennedy et al., 2020; Paolacci & Chandler, 2014; Peer et al., 2022). The researcher can set predefined conditions to confirm that the participants have responded reasonably in the practice trial before continuing the experiment (e.g., that the response is correct or within a certain range). These conditions should be defined during task development. 
The researcher can then choose whether or not to use it in a specific experiment from the task page (Figure 2c). # 2. Launch experiment The researcher launches the experiment from the experiment page (Figure 2b). This generates a shareable link for participants to access the experiment. Once an experiment is launched, its settings become uneditable. If required, the researcher can clone a launched experiment, and the clone can be edited without affecting the original experiment. Before launching the experiment, the researcher must choose between one of two possible modes for running the experiment: i) Public mode. This is meant for remote (online) experiments conducted outside the laboratory. In public mode, the participants first complete authentication and verification tests (described in subsection 3.2.), then they answer the pre-experiment questions (described in subsection 1.1.), followed by the visual calibration tests (described in subsection 3.3.). After these are completed, they perform the experiment. ii) Supervised mode. This is meant for experiments conducted in a controlled physical setting, such as a laboratory or clinic, using the MOPP platform. In this mode, the researcher sets up and controls the physical environment. The researcher also personally verifies each participant's identity. Therefore, participants skip the authentication, verification, and visual calibration tests. Researchers interested in performing the calibration tests in this mode (to measure participants' viewing distance and/or visual acuity) could activate these tests via MOPP's source code or, instead, create and load them as a task within the experiment. # 3. Data collection # 3.1. Participant entry Participants enter the experiment from the web browser of their PCs – laptop or desktop. Currently, MOPP supports only PCs (not mobile devices, such as tablets or smartphones). Thus, the researcher should instruct participants to use a PC.
If participants attempt to enter using a mobile device, an error page will be presented, instructing them to access the experiment using a PC. Participants should also be instructed to deactivate any advertisement blockers before performing the experiment so that MOPP's pop-up windows (used, for example, to provide progress feedback) can function properly, and to use the Chrome browser (MOPP was primarily tested on Chrome, and thus there may be compatibility issues with other browsers). # 3.2. Authentication and verification MOPP is equipped with two tests to confirm participants' credibility: (a) An authentication test. This is used to authenticate each participant's identity without collecting personal data. This test uses a third-party feature that enables participants to self-authenticate their identity by logging into their email accounts. A secure pop-out window is presented outside of MOPP's interface during this process. Successful login confirms the participant has a valid email address (those who do not have an active email account cannot pass this test). MOPP prevents multiple entries to the experiment by ensuring that once someone has accessed it, they cannot authenticate again using the same email account or IP address. (b) A verification test. This is used to confirm that the participant is a human (and not an internet bot). For this, MOPP utilizes the reCAPTCHA test (Von Ahn et al., 2008; the 2nd version of the Completely Automated Public Turing test to tell Computers and Humans Apart), which is embedded in MOPP's interface. # 3.3. Visual calibration MOPP incorporates two tests to improve experimental control for visual stimuli: (a) The virtual chinrest test (Li et al., 2020). In this test, MOPP calculates the participant's viewing distance using trigonometry (based on the blind spot of the retina).
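The trigonometry behind the virtual chinrest can be sketched as follows. This is our own illustration, not MOPP's actual code: the 13.5° blind-spot angle is the approximate value used by Li et al. (2020), and the function and parameter names are hypothetical.

```python
import math

# Approximate temporal angle of the retinal blind spot relative to
# fixation; ~13.5 degrees per Li et al. (2020). An assumption here,
# not taken from MOPP's source code.
BLIND_SPOT_ANGLE_DEG = 13.5

def viewing_distance_cm(fixation_to_blindspot_px, px_per_cm):
    """Estimate viewing distance from the on-screen distance (in pixels)
    between the fixation point and the position where a moving dot
    vanishes into the blind spot; px_per_cm comes from a prior on-screen
    size calibration (e.g., matching a credit card)."""
    on_screen_cm = fixation_to_blindspot_px / px_per_cm
    # The eye, the fixation point, and the blind-spot location form a
    # right triangle: distance = opposite side / tan(blind-spot angle).
    return on_screen_cm / math.tan(math.radians(BLIND_SPOT_ANGLE_DEG))
```

For example, an on-screen fixation-to-blind-spot distance of about 13 cm corresponds to a viewing distance of roughly 54 cm, in line with the pilot data reported below.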
The estimated viewing distance is then used to adjust the size of the subsequently presented stimuli to maintain a consistent visual angle of the stimulus on the retina. MOPP administers this test twice: once at the beginning and once at the end. These tests can be used for post-experimental analysis, correction, or data exclusion if the participant substantially changes their position during the experiment. (b) Taylor's 'E' test (Bach, 2006). This is used to measure the participant's visual acuity. In this test, the participants are asked to indicate in which one of the four possible cardinal directions the open side of a rotated letter 'E' is pointing (e.g., without rotation, 'E' points rightward, reported by pressing the right keyboard arrow). The direction is randomly selected on each trial. Similar to previous studies (Bach, 2006), the task starts with four specific stimuli (the size of the 'E' corresponds to an acuity of 0.1, 0.2, 0.4, and 0.8). The size of the 'E' is then adapted individually according to each participant's performance, following a 2-up 1-down staircase procedure (Cornsweet, 1962). Namely, following two correct responses, the size of the 'E' on the next trial is reduced, and following one incorrect response, the size of the 'E' on the next trial is increased. # 3.4. Perform experiment Participants perform the tasks according to the order set by the researcher in advance (when building the experiment). MOPP displays the participant's progress in a progress bar with basic information regarding the experiment structure and duration (how many tasks and trials remain). During the tasks, MOPP checks that the correct data type was entered. Otherwise, it displays a "data type mismatch" message. At the end of the experiment, the researcher can provide a confirmation code on the completion page, which can be used by the participants to prove that they have completed the experiment.
This is particularly useful for participants recruited via crowdsourcing platforms, where a completion code is required to receive compensation. # 4. Download data Once data collection is complete, the researcher can download the data from the experiment page (accessed via the researcher portal). The data is saved in a CSV file in a wide format, i.e., rows represent participants and columns represent responses (with headers specifying the name of each variable). The participants' responses to the visual calibration tests and pre-experiment questions are also saved as columns. # C. Pilot data results We conducted a small online pilot study with MOPP (in public mode) to validate its functionality and to test whether the data collected would produce results consistent with those reported in laboratory settings. Seventeen participants (mean age ± SEM: 30.9 ± 2.6 years; 8 females) were recruited via online student forums. This study was approved by the internal review board at Bar-Ilan University (ethics committee approval number: ISU202110005), and all participants provided informed consent before participation. The pilot data and the code to generate the relevant figures are available at: https://github.com/YuvalSK/MOPP. The mean viewing distance across participants, measured by the virtual chinrest test at the start and end of the experiment, was 54.1 ± 1.6 cm and 52.9 ± 1.9 cm, respectively (mean ± SEM; the range across both tests was 37.9 to 65.9 cm). This indicates participants were physically within an acceptable viewing distance from their PC screens. There was no significant difference in viewing distance between these two measurements ($p > 0.5$, $t(16) = 0.58$; $BF_{10} = 0.29$; two-tailed paired $t$-test). Thus, on average, there was no systematic shift during the experiment. The mean decimal visual acuity (VA), measured using Taylor's 'E' test, was 0.97 ± 0.01 (mean ± SEM). A decimal VA value of 1 is considered normal, whereas values below 0.5 are generally considered indicative of poor vision. Thus, on average, participants had close to normal vision. Six participants reported wearing glasses, and their visual acuity was similarly near normal (mean decimal VA = 0.99 ± 0.01). # 1. Length task In the length task, participants were asked to estimate, without counting, the length of a one-dimensional line. The line comprised a series of consecutive underbars. The participants were shown one underbar ('_') for reference, and then asked to estimate how many underbars a given stimulus contains (e.g., '__________' is 10 units; for elaboration on the task details, see Table 2). Stimulus magnitudes were randomly drawn from a Uniform distribution in the range of 1 to 18 (integers). This set of stimuli was generated once, and then used across all participants. Each participant completed 24 trials. Two participants were excluded due to missing trials (skipped more than one trial). This left 15 participants for further analysis in this task. Responses to the length task are presented in Figure 3. Previous studies (Ekman & Junge, 1961; Stevens & Galanter, 1957) found that line length estimates are linearly related to the stimulus values. Accordingly, we fitted a linear regression to our data (Fig. 3, black regression line; $R^2 = 0.93$, $p < 0.001$; fitting the mean values per stimulus). The regression line had a positive intercept ($\beta_0 = 3.68$; 95% CI = [2.70, 4.66]) and a slope smaller than 1 ($\beta_1 = 0.51$; 95% CI = [0.42, 0.60]), such that the fitted line crosses the unity line (at 7.51 units). This means that length stimuli on the lower end are overestimated, and those on the upper end are underestimated.
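The crossing point of the fitted line with the unity line follows directly from the fitted coefficients: setting $\beta_0 + \beta_1 x = x$ gives $x = \beta_0 / (1 - \beta_1)$. A minimal sketch (our own helper, not part of the MOPP code):

```python
def unity_crossing(intercept, slope):
    """Point where the fitted line estimate = intercept + slope * stimulus
    crosses the unity line (estimate == stimulus). For slope < 1, stimuli
    below this value are overestimated and stimuli above it are
    underestimated."""
    if slope == 1:
        raise ValueError("a line parallel to the unity line never crosses it")
    return intercept / (1 - slope)
```

With the reported coefficients, unity_crossing(3.68, 0.51) gives approximately 7.51 units, matching the crossing point quoted above.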
This replicates the well-documented phenomenon of regression to the mean (Ashourian & Loewenstein, 2011; Petzschner et al., 2015). Figure 3: Pilot data for the length task collected via MOPP. A stimulus schematic is presented in the top left panel. In the main plot, black circles and error bars depict the mean ± SEM length estimates across participants ($N = 15$) as a function of the actual stimulus length. The solid black line depicts a linear regression of the group data, and the dashed gray line is the unity line ($y = x$). A set of 24 stimulus lengths was randomly drawn from a uniform distribution (integers in the range of 1-18) once, and then used for all participants. # 2. Numerosity task In the numerosity task, participants were asked to estimate, without counting, the number of dots in a two-dimensional array of black and white dots (Table 2). Stimulus magnitudes were randomly drawn from a Gaussian distribution ($\mu = 22$, $\sigma = 5$ dots). This set of stimuli was generated once, and then used across all participants. Each participant completed 24 trials. No participants were excluded in this task ($N = 17$). Responses to the numerosity task are presented in Figure 4. Previous studies found that numerosity estimates are systematically underestimated. Specifically, perceived numerosity increases with actual numerosity according to a power function, with an exponent smaller than one (Bevan & Turner, 1964; Burr & Ross, 2008; Crollen et al., 2013; Indow & Ida, 1977; Izard & Dehaene, 2008; Krueger, 1982). Namely, as the actual number of items increases, the perceived number increases more slowly: larger quantities are progressively underestimated relative to smaller ones. Figure 4: Pilot data for the numerosity task collected via MOPP. A stimulus schematic is presented in the top left panel.
In the main plot, black circles and error bars depict the mean ± SEM numerosity estimates across participants ($N = 17$), as a function of the actual stimulus numerosity. The solid black line depicts a regression of the group data (fit in log scale), and the dashed gray line is the unity line ($y = x$). A set of 24 stimulus values (integers) was randomly drawn from a Gaussian distribution (mean = 22, $SD = 5$ dots) once, and then used for all participants.

To quantify this relationship in our data, we fitted a linear regression in log-log scale. By applying a log-transformation to both the stimulus magnitudes and the participants' estimates, we could estimate the exponent of the power function from the slope of the regression line. The resulting fit, specifically the strong (Pearson's) correlation ($R^2 = 0.97$, $p < 0.001$; data fit only for stimuli exceeding ten dots), provides robust evidence for a linear relationship in the log-log scale. The regression line had an intercept of $\beta_0 = 0.20$ (95% CI = [-0.07, 0.47]), which did not differ significantly from zero, and a significantly positive slope of $\beta_1 = 0.88$ (95% CI = [0.79, 0.96]), which was also significantly less than 1. This indicates underestimation: as the number of dots increases, the participants' estimates grew more slowly than the actual stimulus magnitude. Figure 4 presents the regression curve plotted on the original (non-logarithmic) scale for easier interpretability. This replicates the well-documented pattern of underestimation of numerosity.

# 3.
Biological motion task

In the biological motion task, participants were presented with a set of ten white moving dots on a black background that reflected either biological motion or random motion (Table 2). They were asked to discriminate between the two. The same set of stimuli was used for all participants. Each participant completed 24 trials (half with biological motion and half with random motion). No participants were excluded in this task ($N = 17$).

Responses to the biological motion task are presented in Figure 5. For each participant, we calculated the proportion of trials for which they reported biological motion, for biological motion stimuli and for random motion stimuli. We then calculated the mean of these proportions across participants. As expected, we observed higher rates of reporting biological motion for biological compared to random motion stimuli (Fig. 5; right vs. left bars, respectively; mean ± SEM = 93.0% ± 6.2% and 26.7% ± 10.7%, respectively; $p < 0.001$, $t(16) = 10.1$; one-tailed paired $t$-test).

Figure 5: Pilot data for the biological motion task collected via MOPP. Stimulus schematics are presented on the top left (random motion and biological motion in the left and right schematics, respectively). The main plot presents the mean ± SEM reports of perceiving biological motion across participants ($N = 17$), per stimulus type. The dashed gray line marks chance level (50%). The same set of stimuli was used for all participants. The biological motion stimuli were generated using the tool developed by Troje (2002).
We used $d'$ to measure individual sensitivity to biological motion detection, calculated as $d' = z(H) - z(FA)$, where $z$ denotes the inverse of the cumulative standard normal distribution, $H$ denotes the hit rate (reporting biological motion for biological stimuli), and $FA$ denotes the false alarm rate (reporting biological motion for random stimuli). Higher $d'$ values reflect better sensitivity. The $d'$ in our data (mean ± SEM = 0.66 ± 0.07) did not differ significantly from the values reported by Weil et al. (2018) for either their large online sample (0.73 ± 0.01, $N = 189$; $p = 0.33$, $t(16) = -1.00$; $BF_{10} = 0.40$; two-tailed unpaired Welch's $t$-test) or their smaller offline sample (0.74 ± 0.03, $N = 19$; $p = 0.30$, $t(16) = -1.06$; $BF_{10} = 0.50$; two-tailed unpaired Welch's $t$-test).

# 4. Mooney face task

In the Mooney face task, participants were presented with black-and-white images that either contained a face (oriented upright or inverted) or not (Table 2). They were asked to discriminate whether an image contains a face (regardless of orientation) or not (random image). The same set of stimuli was used for all participants. Each participant completed 24 trials (half of the trials without faces, and the other half equally split between upright and inverted faces). One participant who provided the same response on all trials was excluded. This resulted in 16 participants for further analysis in this task.

Results from the Mooney face task are presented in Figure 6. For each participant, we calculated the proportion of trials in which they identified a face for the different stimuli (upright faces, inverted faces, and random images). We then calculated the mean of these proportions across participants.
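The $d'$ statistic and the summary-statistics form of Welch's $t$-test used above can be sketched in a few lines of Python (function names are illustrative; extreme hit or false-alarm rates of exactly 0 or 1 would need an adjustment before applying $z$, which the text does not specify):

```python
import math
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(H) - z(FA), with z the inverse standard normal CDF.
    Assumes 0 < hit_rate, fa_rate < 1 (z is undefined at 0 and 1)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def welch_t_from_summary(m1, sem1, n1, m2, sem2, n2):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom from group means, SEMs, and sample sizes (SEM^2 = s^2/n)."""
    v1, v2 = sem1 ** 2, sem2 ** 2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# welch_t_from_summary(0.66, 0.07, 17, 0.73, 0.01, 189) gives t ≈ -0.99,
# df ≈ 16.7, close to the reported t(16) = -1.00; small differences
# reflect rounding of the published summary values.
```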
Consistent with the original study on which our stimuli were based (Schwiedrzik et al., 2018), we observed higher face detection rates for upright face images compared to inverted face images (Figure 6; right and center bars, respectively; mean ± SEM = 91.3% ± 1.8% and 30.5% ± 2.9%, respectively; $p < 0.001$, $t(30) = 7.92$; one-tailed unpaired $t$-test) and compared to random images (left bar; 17.0% ± 2.4%; upright vs. random: $p < 0.001$, $t(30) = 11.1$). A trend was observed of a higher face detection rate for inverted images compared to random images ($p = 0.052$, $t(30) = 1.68$).

Figure 6: Pilot data for the Mooney face task collected via MOPP. Stimulus schematics are presented on the top left (from left to right: random, inverted, and upright faces). The main plot presents the mean ± SEM reports of perceiving faces across participants ($N = 16$), per stimulus type. The dashed gray line marks chance level (50%). The same set of stimuli was used for all participants. The Mooney images of faces were taken from the database made freely available by Schwiedrzik et al. (2018).

Here too, we used $d'$ to measure individual sensitivity to face detection for each comparison. The overall $d'$ in our data (mean ± SEM = 1.54 ± 0.65) did not differ significantly from the values reported by Schwiedrzik et al. (2018) across trials and participants (1.19 ± 0.07, their $N = 18$; $p = 0.65$, $t(16) = 0.53$; $BF_{10} = 0.53$; two-tailed unpaired Welch's $t$-test).

# 5.
Key-tapping task

In the key-tapping task (Table 2), participants were asked to press the 's' and 'k' keys alternatingly on their keyboard as many times as they could within a 30-second interval, using both hands, the right hand only, or the left hand only (three conditions). Each participant completed one trial per condition. Participants who failed to provide a response were excluded, per condition. This removed one participant in the both-hands condition and four participants in the left-hand condition, resulting in 16, 17, and 13 participants for further analysis (for the both-hands, right-hand, and left-hand conditions, respectively).

Responses to the key-tapping task are presented in Figure 7. The overall mean number of key taps using a single hand was 55.4 ± 7.0 taps (mean ± SEM; data pooled across the right- and left-hand conditions). This was compared to the results from the original study on which the task was based (60.3 ± 1.4 taps, also using a single hand, with data pooled across right- and left-hand conditions; $N = 93$ participants; Noyce et al., 2014). No significant difference was observed between our data and the results from that original study ($p = 0.50$, $t(31.3) = -0.69$; two-tailed two-sample Welch's $t$-test; $BF_{10} = 0.27$). The mean number of taps using both hands was 99.6 ± 11.0 (mean ± SEM). This condition could not be compared to previous studies, as prior research assessed key-tapping performance only for a single hand.

Figure 7: Pilot data for the key-tapping motor task collected via MOPP. An illustration of the task is presented in the schematic on the top left.
Participants were required to sequentially press the 's' and 'k' buttons on their computer keyboard as many times as possible within a 30 s interval, in one of three conditions: using both hands, the right hand only, or the left hand only. The main plot presents the mean ± SEM number of taps across participants ($N = 16$, 17, and 13, respectively), per condition. Participants performed each condition once.

# Discussion

In this study, we introduce an open-source modular online psychophysics platform (MOPP) for researchers running online visual experiments. MOPP has a simple web-based interface for creating modular experiments tailored to researchers' scientific objectives. Additionally, it has integrated tools to confirm participants' credibility, calibrate for viewing distance, and measure visual acuity. This can enhance accessibility for researchers without a strong programming background, and facilitate comparison and replication of experiments. To evaluate the data collected through MOPP, we developed five example psychophysics tasks that come preloaded in the environment and ran a pilot experiment online, hosted on the AWS cloud. In all five tasks, the data yielded results similar to those of previous publications collected in laboratory settings, validating our task implementations across the perceptual and motor domains.

# Limitations and future directions

Although it does not require programming knowledge, the installation process of MOPP currently comprises several steps. The MOPP guide, which details the process of hosting an experiment on the AWS cloud, aims to support and ease the installation process. A setup script is included in the guide to automate parts of the process, such as cloning the repository and configuring basic parameters. However, MOPP does not have a stand-alone installer and requires several manual steps (e.g., cloud setup).
Developing a cross-platform installation tool (e.g., via an executable file), perhaps by future users of this open-source platform, would help simplify this process and thereby further lower the technical barrier for researchers and students unfamiliar with cloud-based systems. MOPP does not currently support mobile devices, as it is designed for traditional keyboard and mouse interactions. Besides its visual calibration tests, MOPP does not address several other inherent challenges of online experiments, such as variability in environmental conditions, monitors, and computer hardware. As a result, it cannot guarantee consistent and uniform stimulus presentation across devices (e.g., in terms of brightness or contrast). One possible mitigation strategy is to collect self-reported specifications (such as monitor model and other PC details) for post-experimental control and analysis. MOPP allows researchers to easily add such specification questions in the pre-experiment questions phase without requiring additional coding. However, this approach relies on the participant's ability to provide this information accurately, and does not account for other environmental conditions (e.g., ambient light). Another solution is to compare online results to a controlled offline group, which can be run with MOPP's supervised mode. MOPP is primarily designed to test visual psychophysics and does not currently support testing other sensory modalities (such as auditory, tactile, etc.) or motor function (although some motor function can be assessed, e.g., via the keyboard-tapping task, as we implemented). Researchers interested in studying these or multisensory processes would need to supplement MOPP with additional tools or devices. For instance, online auditory experiments might need auxiliary equipment, such as headphones with sound calibration tests to ensure reliable delivery of auditory cues (Milne et al., 2021; Schmack et al., 2021; Su et al., 2022).
Experiments testing more complex motor function, balance, and spatial orientation can use built-in sensors in smartphones, such as accelerometers and gyroscopes. These can also capture body and head movements (e.g., when placed in VR goggles) to assess vestibular function (Brodsky et al., 2015; Wengier et al., 2021).

# Concluding remarks

MOPP offers a simple web-based platform that allows researchers to create modular experiments tailored to their specific scientific goals. This enhances accessibility for researchers, including those without a strong programming background. MOPP can help researchers collect psychophysics datasets online, with reduced turnaround time, and in a standardized manner. By that, it complements traditional lab-based research and supports access to larger and more diverse populations outside the boundaries of a laboratory.

# References

Ashourian, P., & Loewenstein, Y. (2011). Bayesian inference underlies the contraction bias in delayed comparison tasks. PLoS ONE, 6(5), e19551. https://doi.org/10.1371/journal.pone.0019551

Bach, M. (2006). The Freiburg Visual Acuity Test-Variability unchanged by post-hoc re-analysis. Graefe's Archive for Clinical and Experimental Ophthalmology, 245(7), 965–971. https://doi.org/10.1007/s00417-006-0474-4

Bevan, W., & Turner, E. D. (1964). Assimilation and contrast in the estimation of number. Journal of Experimental Psychology, 67(5), 458–462. https://doi.org/10.1037/h0041141

Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10(4), 433–436. https://doi.org/10.1163/156856897X00357

Brodsky, J. R., Cusick, B. A., Kawai, K., Kenna, M., & Zhou, G. (2015). Peripheral vestibular loss detected in pediatric patients using a smartphone-based test of the subjective visual vertical. International Journal of Pediatric Otorhinolaryngology, 79(12), 2094–2098. https://doi.org/10.1016/j.ijporl.2015.09.020

Burr, D., & Ross, J. (2008). A Visual Sense of Number. Current Biology, 18(6), 425–428.
https://doi.org/10.1016/j.cub.2008.02.052

Chandler, J., Mueller, P., & Paolacci, G. (2014). Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods, 46(1), 112–130. https://doi.org/10.3758/s13428-013-0365-7

Chmielewski, M., & Kucker, S. C. (2020). An MTurk Crisis? Shifts in Data Quality and the Impact on Study Results. Social Psychological and Personality Science, 11(4), 464–473. https://doi.org/10.1177/1948550619875149

Cornsweet, T. N. (1962). The staircase method in psychophysics. The American Journal of Psychology, 75(3), 485–491. https://doi.org/10.2307/1419876

Crollen, V., Grade, S., Pesenti, M., & Dormal, V. (2013). A common metric magnitude system for the perception and production of numerosity, length, and duration. Frontiers in Psychology, 4, 449. https://doi.org/10.3389/fpsyg.2013.00449

de Leeuw, J. R. (2015). jsPsych: A JavaScript library for creating behavioral experiments in a Web browser. Behavior Research Methods, 47(1), 1–12. https://doi.org/10.3758/s13428-014-0458-y

Druckman, J. N., & Kam, C. D. (2009). Students as Experimental Participants: A Defense of the "Narrow Data Base." SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1498843

Ekman, G., & Junge, K. (1961). Psychophysical relations in visual perception of length, area and volume. Scandinavian Journal of Psychology, 2(1), 1–10. https://doi.org/10.1111/j.1467-9450.1961.tb01215.x

Grootswagers, T. (2020). A primer on running human behavioural experiments online. Behavior Research Methods, 52(6), 2283–2286. https://doi.org/10.3758/s13428-020-01395-3

Indow, T., & Ida, M. (1977). Scaling of dot numerosity. Perception & Psychophysics, 22(3), 265–276. https://doi.org/10.3758/BF03199689

Izard, V., & Dehaene, S. (2008). Calibrating the mental number line. Cognition, 106(3), 1221–1247. https://doi.org/10.1016/j.cognition.2007.06.004

Kennedy, R., Clifford, S., Burleigh, T., Waggoner, P. D., Jewell, R., & Winter, N. J. G.
(2020). The shape of and solutions to the MTurk quality crisis. Political Science Research and Methods, 8(4), 614–629. https://doi.org/10.1017/psrm.2020.6

Krueger, L. E. (1982). Single judgments of numerosity. Perception & Psychophysics, 31(2), 175–182. https://doi.org/10.3758/BF03206218

Lago, M. A. (2021). SimplePhy: An open-source tool for quick online perception experiments. Behavior Research Methods, 53(4), 1669–1676. https://doi.org/10.3758/s13428-020-01515-z

Lange, K., Kühn, S., & Filevich, E. (2015). "Just Another Tool for Online Studies" (JATOS): An Easy Solution for Setup and Management of Web Servers Supporting Online Studies. PLOS ONE, 10(6), e0130834. https://doi.org/10.1371/journal.pone.0130834

Li, Q., Joo, S. J., Yeatman, J. D., & Reinecke, K. (2020). Controlling for Participants' Viewing Distance in Large-Scale, Psychophysical Online Experiments Using a Virtual Chinrest. Scientific Reports, 10(1), 1–11. https://doi.org/10.1038/s41598-019-57204-1

Milne, A. E., Bianco, R., Poole, K. C., Zhao, S., Oxenham, A. J., Billig, A. J., & Chait, M. (2021). An online headphone screening test based on dichotic pitch. Behavior Research Methods, 53(4), 1551–1562. https://doi.org/10.3758/s13428-020-01514-0

Noyce, A. J., Nagy, A., Acharya, S., Hadavi, S., Bestwick, J. P., Fearnley, J., Lees, A. J., & Giovannoni, G. (2014). Bradykinesia-akinesia incoordination test: Validating an online keyboard test of upper limb function. PLoS ONE, 9(4). https://doi.org/10.1371/journal.pone.0096260

Onnela, J.-P., Dixon, C., Griffin, K., Jaenicke, T., Minowada, L., Esterkin, S., Siu, A., Zagorsky, J., & Jones, E. (2021). Beiwe: A data collection platform for high-throughput digital phenotyping. Journal of Open Source Software, 6(68), 3417. https://doi.org/10.21105/joss.03417

Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as a Participant Pool. Current Directions in Psychological Science, 23(3), 184–188.
https://doi.org/10.1177/0963721414531598

Peer, E., Rothschild, D., Gordon, A., & Damer, E. (2022). Erratum to Peer et al. (2021): Data quality of platforms and panels for online behavioral research. Behavior Research Methods, 54(5), 2618–2620. https://doi.org/10.3758/s13428-022-01909-1

Peirce, J., Gray, J. R., Simpson, S., MacAskill, M., Höchenberger, R., Sogo, H., Kastman, E., & Lindeløv, J. K. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods, 51(1), 195–203. https://doi.org/10.3758/s13428-018-01193-y

Petzschner, F. H., Glasauer, S., & Stephan, K. E. (2015). A Bayesian perspective on magnitude estimation. Trends in Cognitive Sciences, 19(5), 285–293. https://doi.org/10.1016/j.tics.2015.03.002

Rajananda, S., Lau, H., & Odegaard, B. (2018). A random-dot kinematogram for web-based vision research. Journal of Open Research Software, 6(1). https://doi.org/10.5334/jors.194

Schmack, K., Bosc, M., Ott, T., Sturgill, J. F., & Kepecs, A. (2021). Striatal dopamine mediates hallucination-like perception in mice. Science, 372(6537). https://doi.org/10.1126/science.abf4740

Schwarzbach, J. (2011). A simple framework (ASF) for behavioral and neuroimaging experiments based on the psychophysics toolbox for MATLAB. Behavior Research Methods, 43(4), 1194–1201. https://doi.org/10.3758/s13428-011-0106-8

Schwiedrzik, C. M., Melloni, L., & Schurger, A. (2018). Mooney face stimuli for visual perception research. PLoS ONE, 13(7), e0200106. https://doi.org/10.1371/journal.pone.0200106

Sears, D. O. (1986). College Sophomores in the Laboratory: Influences of a Narrow Data Base on Social Psychology's View of Human Nature. Journal of Personality and Social Psychology, 51(3), 515–530. https://doi.org/10.1037/0022-3514.51.3.515

Stevens, S. S., & Galanter, E. H. (1957). Ratio scales and category scales for a dozen perceptual continua. Journal of Experimental Psychology, 54(6), 377–411. https://doi.org/10.1037/h0043680

Su, Z.
H., Patel, S., Bredemeyer, O., FitzGerald, J. J., & Antoniades, C. A. (2022). Parkinson's disease deficits in time perception to auditory as well as visual stimuli – A large online study. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.995438

Torous, J., Kiang, M. V., Lorme, J., & Onnela, J. P. (2016). New tools for new research in psychiatry: A scalable and customizable platform to empower data driven smartphone research. JMIR Mental Health, 3(2), e5165. https://doi.org/10.2196/mental.5165

Troje, N. F. (2002). Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. Journal of Vision, 2(5), 371–387. https://doi.org/10.1167/2.5.2

Von Ahn, L., Maurer, B., McMillen, C., Abraham, D., & Blum, M. (2008). reCAPTCHA: Human-based character recognition via web security measures. Science, 321(5895), 1465–1468. https://doi.org/10.1126/science.1160379

Weil, R. S., Schwarzkopf, D. S., Bahrami, B., Fleming, S. M., Jackson, B. M., Goch, T. J. C., Saygin, A. P., Miller, L. E., Pappa, K., Pavisic, I., Schade, R. N., Noyce, A. J., Crutch, S. J., O'Keeffe, A. G., Schrag, A. E., & Morris, H. R. (2018). Assessing cognitive dysfunction in Parkinson's disease: An online tool to detect visuoperceptual deficits. Movement Disorders, 33(4), 544–553. https://doi.org/10.1002/mds.27311

Wengier, A., Ungar, O. J., Handzel, O., Cavel, O., & Oron, Y. (2021). Subjective Visual Vertical Evaluation by a Smartphone-based Test - Taking the Phone out of the Bucket. Otology and Neurotology, 42(3), 455–460. https://doi.org/10.1097/MAO.0000000000002944

Woods, A. T., Velasco, C., Levitan, C. A., Wan, X., & Spence, C. (2015). Conducting perception research over the internet: A tutorial review. PeerJ, 2015(7). https://doi.org/10.7717/peerj.1058
In recent years, there has been a growing need and opportunity to use online platforms for psychophysics research. Online experiments make it possible to evaluate large and diverse populations remotely and quickly, complementing laboratory-based research. However, developing and running online psychophysics experiments poses several challenges: i) a high barrier to entry for researchers, who often need to learn complex code-based platforms, ii) an uncontrolled experimental environment, and iii) questionable credibility of the participants. Here, we introduce an open-source Modular Online Psychophysics Platform (MOPP) to address these challenges. Through the simple web-based interface of MOPP, researchers can build modular experiments, share them with others, and copy or modify tasks from each other's environments. MOPP provides built-in features to calibrate for viewing distance and to measure visual acuity. It also includes email-based and IP-based authentication, and reCAPTCHA verification. We developed five example psychophysics tasks that come preloaded in the environment, and ran a pilot experiment hosted on the AWS (Amazon Web Services) cloud. Pilot data collected for these tasks yielded results similar to those reported in laboratory settings. MOPP can thus help researchers collect large psychophysics datasets online, with reduced turnaround time, and in a standardized manner.
[ "q-bio.NC", "cs.HC", "cs.SE" ]
# Introduction

Zero-shot learning (ZSL) has emerged as a promising paradigm for recognizing novel categories without requiring labeled examples (Lampert, Nickisch, and Harmeling 2009; Han et al. 2021). In particular, Transductive ZSL, where the entire unlabeled test set is available at inference time, offers a valuable opportunity to exploit the global data distribution for more accurate predictions (Wang et al. 2023; Fu et al. 2015). Recent progress in this area has been largely driven by Vision-Language Models (VLMs), such as CLIP (Radford et al. 2021), which align images and textual labels within a joint embedding space. By encoding class names into textual prompts and comparing them with image features, these models enable direct zero-shot classification (Martin et al. 2024; Zanella, Gérin, and Ayed 2024).

Despite their impressive generalization capability, VLMs often struggle to associate text semantics with precise visual regions due to their image-level alignment training. This limitation leads to an over-reliance on semantic priors and degraded performance under domain shifts or in tasks requiring fine-grained distinctions. In contrast, Vision-only Foundation Models (VFMs) such as DINOv2 (Oquab et al. 2023), trained with self-supervised objectives, excel at capturing rich visual patterns and perform strongly across various downstream tasks (Zhang and Tan 2025). However, VFMs lack inherent connections to class semantics, rendering them unsuitable for zero-shot classification without additional adaptation.

To further investigate this gap, we visualize three spaces using t-SNE: the visual spaces of CLIP's vision encoder and DINOv2, and the probabilistic space of CLIP. As illustrated in Figure 1, CLIP's visual features exhibit only coarse semantic structures with considerable category overlap. In contrast, DINOv2 features form tight, well-separated clusters, indicating significantly stronger visual discrimination.
More importantly, the spatial distribution of DINOv2 features and the semantic layout of CLIP show clear structural complementarity. This observation suggests that VFMs provide highly discriminative visual priors that can enhance the semantic capabilities of VLMs. Naturally, this raises a question: can we effectively combine the strengths of both model families to improve zero-shot classification performance?

A recent effort in this direction is DINO-assisted prompt learning (Imam et al. 2024), which leverages a DINO-based labeling network to guide prompt tuning of the vision encoder in VLMs. While this approach yields strong performance, it requires substantial computational resources and struggles to generalize across different foundation models.

In this work, we propose OTFusion, a simple yet effective training-free framework that bridges VFMs and VLMs through the lens of Optimal Transport (Villani et al. 2008; Cuturi 2013). Rather than forcing the two model types into a shared embedding space, we treat each model as inducing a probability distribution over class labels. OTFusion then constructs a unified probabilistic representation by minimizing the transport cost between these distributions, enabling predictions that are both semantically meaningful and visually grounded, as shown in Figure 1. Crucially, OTFusion operates entirely in probability space, which allows for flexible integration of diverse models, including multiple VLMs and VFMs. In addition, OTFusion is highly compatible with a wide range of training-based methods that rely on pseudo-labels, such as LaFTer (Mirza et al. 2023) and CPL (Zhang et al. 2024a). Specifically, OTFusion can serve as a plug-and-play component to generate more reliable soft pseudo-labels or enhance existing ones by integrating complementary knowledge from multiple foundation models.
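Since OTFusion is described as minimizing the transport cost between class-probability distributions, its core primitive is an entropy-regularized optimal transport solver in the style of Cuturi (2013). The following is a generic Sinkhorn sketch under that assumption, not the authors' exact fusion objective; the cost matrix, marginals, and regularization `eps` are placeholders:

```python
import numpy as np

def sinkhorn(cost, r, c, eps=0.1, n_iter=500):
    """Entropy-regularized optimal transport plan between marginals r
    (rows) and c (columns) for a given cost matrix, via Sinkhorn scaling:
    P = diag(u) @ K @ diag(v) with K = exp(-cost / eps)."""
    K = np.exp(-cost / eps)
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)  # scale columns toward marginal c
        u = r / (K @ v)    # scale rows toward marginal r
    return u[:, None] * K * v[None, :]
```

The returned plan is nonnegative and its row and column sums match the prescribed marginals, which is what makes it usable as a soft correspondence between two probability maps over the same label set.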
We evaluate our method across 11 standard zero-shot benchmarks and demonstrate that OTFusion consistently outperforms the original CLIP model, achieving an average accuracy improvement of nearly 10%. Our approach is lightweight, scalable, and fully training-free, making it well-suited for deployment in low-resource scenarios.

Figure 1: t-SNE visualization of predicted clusters on the Pets dataset. OTFusion significantly improves cluster compactness and separation over CLIP, highlighting superior integration of visual and semantic cues.

Contributions. The main contributions of this paper are summarized as follows:

• Novel perspective. To the best of our knowledge, this is the first work that systematically investigates the complementary strengths of VFMs and VLMs in the Transductive ZSL setting, opening up a new direction for enhancing zero-shot performance.

• Novel method. We introduce OTFusion, a simple yet effective training-free framework that unifies predictions from VLMs and VFMs via Optimal Transport. Unlike traditional ensemble or Mixture-of-Experts approaches, OTFusion models each foundation model as a probability distribution over class labels and constructs a shared probabilistic representation by minimizing transport cost, enabling semantically coherent and visually grounded predictions.

• Promising results. Extensive experiments on 11 widely used zero-shot benchmarks demonstrate that OTFusion consistently improves CLIP, achieving an average accuracy gain of 10%, all without any fine-tuning or additional annotations. These results highlight its practical utility for scalable deployment in resource-constrained scenarios.

# Related Work

# Zero-shot Learning

Zero-shot learning (ZSL) aims to recognize unseen classes by transferring semantic knowledge from seen categories through visual-semantic interactions (Liu et al. 2023). Existing methods can be broadly categorized into Inductive and Transductive paradigms.

Inductive ZSL.
Inductive ZSL assumes no access to test-time data during training or adaptation. Traditional approaches typically follow two main directions: embedding-based methods (Chen et al. 2022, 2024; Hou et al. 2025), which learn a joint space to compare image and class semantics; and generative methods (Hou et al. 2024; Chen et al. 2023; Hou et al. 2025), which synthesize visual features for unseen classes via conditional generative models. Recently, large-scale VLMs such as CLIP (Radford et al. 2021) have significantly advanced ZSL. These models are pre-trained on massive image-text pairs to align visual and linguistic representations via contrastive learning. During inference, class names are converted into textual prompts, and classification is achieved by measuring similarity between image embeddings and those of the prompts, eliminating the need for hand-crafted attributes. Despite their impressive generalization to novel categories, VLMs often over-rely on global semantic cues and underperform in scenarios requiring fine-grained discrimination or robustness to domain shift. To mitigate these issues, recent works focus on refining distance metrics (Guo et al. 2023; Zhou et al. 2023), improving prompt design (Roth et al. 2023; Novack et al. 2023), or augmenting with synthetic samples (Udandarao, Gupta, and Albanie 2023; Zhang et al. 2023). Nonetheless, most of these methods operate under the inductive setting, without leveraging the unlabeled test distribution.

Transductive ZSL. Transductive ZSL assumes the availability of the entire unlabeled test set at inference time, which allows leveraging the test distribution to improve prediction accuracy. Various techniques have been proposed under this setting, including label propagation (Kalantidis, Tolias et al. 2024; Li et al. 2025), distribution alignment (Martin et al. 2024; Zanella, Gérin, and Ayed 2024), and test-time adaptation (Hu et al. 2024; Qian, Xu, and Hu 2023; Zhang et al. 2024b; Mirza et al.
2023; Khattak et al. 2025). These methods aim to reduce semantic shift and uncover the latent structure of test instances. While Transductive ZSL approaches show clear advantages, few works have explored how to integrate knowledge from diverse foundation models during inference. A recent effort in this direction is DINO-assisted prompt learning (Imam et al. 2024), which leverages a DINO-based labeling network to guide prompt tuning of the vision encoder in VLMs. While this approach yields strong performance, it requires substantial computational resources and struggles to generalize across different foundation models. Our work addresses this gap by proposing a model-agnostic fusion mechanism that operates solely in the label probability space.

# Mixture of Experts

Our framework is also conceptually related to Mixture of Experts (MoE) (Shazeer et al. 2017; Kar et al. 2024; Li et al. 2024; Azadani et al. 2025), which aims to improve generalization by combining the outputs of multiple expert models. However, conventional MoE systems typically rely on a learned gating mechanism and require joint training of all expert components, making them less suitable for zero-shot or low-resource settings. In contrast, OTFusion acts as a training-free and plug-and-play probabilistic mixture model, where each foundation model serves as an expert that defines a distribution over class labels. By leveraging Optimal Transport to align these distributions, OTFusion enables flexible fusion of an arbitrary number of VLMs and VFMs without additional training or architectural modification. This provides a scalable alternative to traditional MoE methods, particularly well-suited for Transductive ZSL.

# Method

# Preliminary

We consider the problem of Transductive ZSL, where we are given a set of unlabeled test images $\mathcal{X}_t = \{x_i\}_{i=1}^{N}$ and a set of class-level semantic descriptions $\mathcal{S} = \{s_j\}_{j=1}^{C}$ (e.g., textual names or attributes).
The goal is to assign each image $x _ { i }$ a class label from the unseen classes based on semantic alignment and the underlying data distribution. Our method builds upon two types of pretrained foundation models, VFMs and VLMs, which we briefly describe below.

VFMs. VFMs are self-supervised vision encoders trained on large-scale unlabeled image datasets. They aim to learn transferable, high-quality representations without relying on labels. Among them, DINOv2 (Oquab et al. 2023) has shown state-of-the-art performance across many downstream tasks. Given an image $x _ { i }$, the VFM encoder $f _ { v }$ outputs a visual embedding $\mathbf { v } _ { i } = f _ { v } ( x _ { i } ) \in \mathbb { R } ^ { d }$. These embeddings are purely visual and do not directly encode semantic label information, but they often exhibit strong intra-class compactness and inter-class separability.

VLMs. Unlike VFMs, VLMs are pretrained to align images and texts within a shared embedding space using large-scale image–text pairs. A representative example is CLIP (Radford et al. 2021), which jointly learns an image encoder $f _ { I }$ and a text encoder $f _ { T }$ through contrastive learning. Given an image $x _ { i }$, the image encoder $f _ { I }$ produces a visual embedding $\mathbf { v } _ { i } = f _ { I } ( x _ { i } ) \in \mathbb { R } ^ { d }$. At the same time, class names are formatted into textual prompts $s _ { j }$ (e.g., "a photo of a [class]") and encoded as semantic prototypes $\mathbf { t } _ { j } = f _ { T } ( s _ { j } )$.
At inference time, VLMs support zero-shot reasoning by comparing image embeddings with class prototypes, yielding a probability distribution over classes:
$$ y _ { i j } = \frac { \exp ( \tau \cdot \cos ( { \bf v } _ { i } , { \bf t } _ { j } ) ) } { \sum _ { k = 1 } ^ { C } \exp ( \tau \cdot \cos ( { \bf v } _ { i } , { \bf t } _ { k } ) ) } , $$
where $\cos ( \cdot , \cdot )$ denotes cosine similarity and $\tau$ is a temperature parameter. The resulting matrix $\mathbf { Y } \in \mathbb { R } ^ { N \times C }$ encodes class probabilities for all test data.

# OTFusion: Bridging VFMs and VLMs via Optimal Transport

Our goal is to leverage the complementary strengths of VFMs and VLMs for Transductive ZSL. We propose a training-free framework, OTFusion, that fuses class probability distributions derived from both sources via Optimal Transport. The details are as follows.

Modeling Visual Distributions via Gaussian Mixture Model (GMM). Given test data $\mathcal { X } = \{ x _ { 1 } , \ldots , x _ { N } \}$, we extract visual features $\mathbf { v } _ { i } = f _ { v } ( x _ { i } )$ and fit a GMM on $\mathbf { V } \in \mathbb { R } ^ { N \times d }$. Specifically, the GMM assumes:
$$ p ( \mathbf { v } ) = \sum _ { c = 1 } ^ { C } \pi _ { c } \cdot \mathcal { N } ( \mathbf { v } ; \mu _ { c } , \Sigma _ { c } ) , $$
where $\pi _ { c }$ are mixture weights and $\mu _ { c } , \Sigma _ { c }$ are component parameters. In practice, a shared diagonal covariance matrix $\Sigma$ across all classes is typically adopted to reduce computational complexity (Zanella, Gérin, and Ayed 2024). With this shared diagonal covariance $\Sigma$, we compute soft posteriors:
$$ \mathbf { p } _ { i } = [ p ( y = 1 \mid \mathbf { v } _ { i } ) , \ldots , p ( y = C \mid \mathbf { v } _ { i } ) ] \in \Delta ^ { C } , $$
where $\Delta ^ { C }$ denotes the $C$ -dimensional probability simplex.
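The two per-sample distributions, the semantic scores from the VLM and the GMM posteriors from the VFM, can be sketched in a few lines of NumPy. The embeddings, prototypes, means, and temperature below are random stand-ins for actual encoder outputs and estimated parameters, so the sketch only illustrates the shape of the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, d = 6, 3, 8        # toy sizes: test images, classes, feature dim
tau = 100.0              # temperature (an assumed, CLIP-like value)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# --- VLM branch (Eq. (1)): temperature softmax over cosine similarities ---
V_clip = rng.normal(size=(N, d))   # stand-ins for CLIP image embeddings v_i
T_txt = rng.normal(size=(C, d))    # stand-ins for text prototypes t_j
V_n = V_clip / np.linalg.norm(V_clip, axis=1, keepdims=True)
T_n = T_txt / np.linalg.norm(T_txt, axis=1, keepdims=True)
Y = softmax(tau * (V_n @ T_n.T))   # N x C semantic distribution

# --- VFM branch: GMM soft posteriors with a shared diagonal covariance ---
V = rng.normal(size=(N, d))        # stand-ins for DINOv2 features v_i
mu = rng.normal(size=(C, d))       # component means (estimated in practice)
var = np.ones(d)                   # shared diagonal covariance entries
pi = np.full(C, 1.0 / C)           # mixture weights

# log N(v; mu_c, diag(var)) up to an additive constant shared by all c,
# which cancels inside the softmax that normalizes the posterior
log_lik = -0.5 * (((V[:, None, :] - mu[None, :, :]) ** 2) / var).sum(axis=-1)
P = softmax(np.log(pi)[None, :] + log_lik)  # row i is p(y = c | v_i)
```

Each row of `Y` and `P` is a point on the simplex $\Delta^C$; stacking the rows gives exactly the $N \times C$ matrices used in the fusion objective.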
Stacking yields the visual distribution matrix $\mathbf { P } \in \mathbb { R } ^ { N \times C }$. Given multiple pre-trained VFMs $\{ f _ { v } ^ { ( 1 ) } , \ldots , f _ { v } ^ { ( K ) } \}$, a straightforward approach is to extract feature vectors $\mathbf { v } _ { i } ^ { ( k ) } = f _ { v } ^ { ( k ) } ( x _ { i } )$ for each model and concatenate them into a joint representation $\mathbf { v } _ { i } = [ \mathbf { v } _ { i } ^ { ( 1 ) } , \ldots , \mathbf { v } _ { i } ^ { ( K ) } ]$. GMM fitting and inference can then be performed as before to obtain a richer visual distribution $\mathbf { P }$. However, this concatenation-based strategy incurs substantial computational and memory overhead as $K$ increases, limiting its scalability in practice. To address this limitation, we propose a more general and flexible framework that fuses the outputs of multiple foundation models in a principled manner via Optimal Transport, without requiring explicit feature concatenation.

Distribution Fusion via Optimal Transport. Our goal is to unify two complementary sources of class-discriminative information: the visual prior $\mathbf { P }$ derived from VFMs and the semantic prior $\mathbf { Y }$ derived from VLMs. To achieve this, we cast the distribution fusion process as an Optimal Transport problem. Intuitively, Optimal Transport seeks a soft alignment $\mathbf { Q } \in \mathbb { R } ^ { N \times C }$ between the $N$ image samples and $C$ classes, such that the transport plan $\mathbf { Q }$ simultaneously respects the structure of the visual distribution $\mathbf { P }$ and the semantic distribution $\mathbf { Y }$.
Formally, the optimization objective of OTFusion is formulated as follows:
$$ \mathcal { L } ( \mathbf { Q } , \mu , \Sigma ) = \operatorname* { m a x } _ { \mathbf { Q } \in \mathcal { Q } } \mathrm { T r } ( \mathbf { Q } ^ { \top } \mathbf { P } ) + \lambda \mathrm { T r } ( \mathbf { Q } ^ { \top } \mathbf { Y } ) + \epsilon \mathcal { H } ( \mathbf { Q } ) , $$
where the first two terms promote alignment between $\mathbf { Q }$ and the guidance distributions, and the third term introduces an entropy regularizer $\mathcal { H } ( \mathbf { Q } ) = - \sum _ { i j } Q _ { i j } \log Q _ { i j }$ that encourages smoother and more stable transport plans. The hyperparameter $\lambda$ balances the influence of semantic and visual information, and $\epsilon$ controls the entropy smoothness. We follow (Caron et al. 2020) and keep $\epsilon$ small to avoid overly uniform predictions. This formulation is subject to the transport polytope constraint:
$$ \boldsymbol { \mathcal { Q } } = \left\{ \mathbf { Q } \in \mathbb { R } _ { + } ^ { N \times C } \mid \mathbf { Q } \mathbf { 1 } _ { C } = \mathbf { 1 } _ { N } , \; \mathbf { Q } ^ { \top } \mathbf { 1 } _ { N } = \tfrac { N } { C } \mathbf { 1 } _ { C } \right\} , $$
which constrains each row of $\mathbf { Q }$ to be a probability distribution over the classes while the columns share the total mass equally, effectively defining a soft assignment of each sample to all possible classes under global mass conservation. Importantly, a key advantage of our joint objective in Eq. (4) is the mutual reinforcement between the learned prediction distribution $\mathbf { Q }$ and the visual distribution $\mathbf { P }$. During optimization, $\mathbf { Q }$ is jointly influenced by the GMM-induced visual assignment scores $\mathbf { P }$ and the semantic distribution $\mathbf { Y }$ from VLMs.
In turn, the evolving $\mathbf { Q }$ implicitly guides the update of the GMM parameters $( { \boldsymbol { \mu } } , { \boldsymbol { \Sigma } } )$, encouraging the formation of clusters that are not only visually coherent but also semantically meaningful. This bidirectional interaction prevents the GMM from collapsing into appearance-based, density-driven clusters that may ignore semantic structure, and instead facilitates the emergence of class boundaries that are both data-driven and semantically aligned.

Extension. We can further generalize our framework to incorporate multiple VFMs and VLMs:
$$ \mathcal { L } ( \mathbf { Q } , \{ \pmb { \mu } _ { i } \} , \{ \pmb { \Sigma } _ { i } \} ) = \operatorname* { m a x } _ { \mathbf { Q } \in \mathcal { Q } } \sum _ { i } \eta _ { i } \operatorname { T r } ( \mathbf { Q } ^ { \top } \mathbf { P } _ { i } ) + \sum _ { i } \lambda _ { i } \operatorname { T r } ( \mathbf { Q } ^ { \top } \mathbf { Y } _ { i } ) + \epsilon \mathcal { H } ( \mathbf { Q } ) , $$
where $\mathbf { P } _ { i }$ and $\mathbf { Y } _ { i }$ denote the class probability distributions obtained from the $i$-th VFM and VLM, respectively. The coefficients $\{ \eta _ { i } \}$ and $\{ \lambda _ { i } \}$ are balancing hyperparameters that control the contribution of each expert model. This formulation allows OTFusion to flexibly integrate information from a diverse set of foundation models in a unified optimization framework.

# Optimization

Without loss of generality, we describe the optimization procedure for Eq. (6), which subsumes the two-source objective in Eq. (4) as a special case. Our approach adopts an alternating optimization strategy between the prediction distribution $\mathbf { Q }$ and the GMM parameters $( \mu , \Sigma )$.

Initialization. We begin by initializing the soft visual assignment matrix $\mathbf { P } ^ { ( 0 ) }$ using the semantic distribution $\mathbf { Y }$ predicted by the VLM.
This semantically guided initialization provides a prior for estimating the initial GMM parameters $( \mu ^ { ( 0 ) } , { \pmb \Sigma } ^ { ( 0 ) } )$ over the visual features $\mathbf { V }$, ensuring the GMM is initially aligned with high-level semantic knowledge.

Algorithm 1: OTFusion optimization.
Require: Visual features $\mathbf { V } = \{ \mathbf { v } _ { i } \} _ { i = 1 } ^ { N }$, VLM probability distribution $\mathbf { Y } \in \mathbb { R } ^ { N \times C }$, number of iterations $T$
Ensure: Prediction distribution $\mathbf { Q }$ and GMM parameters $( \mu , \Sigma )$
1: Initialize $\mathbf { P } ^ { ( 0 ) }$ using $\mathbf { Y }$; estimate $( \mu ^ { ( 0 ) } , \Sigma ^ { ( 0 ) } )$
2: for $t = 1$ to $T$ do
3: Compute the probability distribution $\mathbf { P } ^ { ( t ) }$ via the GMM E-step using $( \mu ^ { ( t - 1 ) } , \Sigma ^ { ( t - 1 ) } )$
4: Update the probability distribution $\mathbf { Q } ^ { ( t ) }$ using Eq. (8)
5: Update $( \mu ^ { ( t ) } , \Sigma ^ { ( t ) } )$ using Eq. (11) and Eq. (12)
6: Check convergence
7: end for
8: return ${ \bf Q } ^ { * } , ( \mu ^ { ( T ) } , \Sigma ^ { ( T ) } )$

Alternating Optimization. At iteration $t$, given the current GMM parameters $( \pmb { \mu } ^ { ( t - 1 ) } , \pmb { \Sigma } ^ { ( t - 1 ) } )$, we compute the visual class distribution $\mathbf { P } ^ { ( t ) }$ using the E-step of the GMM. Subsequently, we update the prediction distribution $\mathbf { Q } ^ { ( t ) }$ by solving the following entropy-regularized Optimal Transport problem:
$$ \operatorname* { m a x } _ { \mathbf { Q } \in \mathcal { Q } } \sum _ { i } \eta _ { i } \mathrm { T r } ( \mathbf { Q } ^ { \top } \mathbf { P } _ { i } ) + \sum _ { i } \lambda _ { i } \mathrm { T r } ( \mathbf { Q } ^ { \top } \mathbf { Y } _ { i } ) + \epsilon \mathcal { H } ( \mathbf { Q } ) , $$
which can be efficiently solved via the Sinkhorn algorithm (Cuturi 2013).
Specifically,
$$ \mathbf { Q } ^ { ( t ) } = \mathrm { D i a g } ( \mathbf { u } ) \cdot \exp \left( \frac { \sum _ { i } \eta _ { i } \mathbf { P } _ { i } ^ { ( t ) } + \sum _ { i } \lambda _ { i } \mathbf { Y } _ { i } } { \epsilon } \right) \cdot \mathrm { D i a g } ( \mathbf { v } ) , $$
where $\mathbf { u } \in \mathbb { R } ^ { N }$ and $\mathbf { v } \in \mathbb { R } ^ { C }$ are scaling vectors iteratively updated as:
$$ \mathbf { u } ^ { ( s + 1 ) } = \frac { \mathbf { 1 } _ { N } } { \mathbf { K } \mathbf { v } ^ { ( s ) } } , \qquad \mathbf { v } ^ { ( s + 1 ) } = \frac { \tfrac { N } { C } \mathbf { 1 } _ { C } } { \mathbf { K } ^ { \top } \mathbf { u } ^ { ( s + 1 ) } } , $$
with $\mathbf { K } = \exp \left( \frac { \sum _ { i } \eta _ { i } \mathbf { P } _ { i } ^ { ( t ) } + \sum _ { i } \lambda _ { i } \mathbf { Y } _ { i } } { \epsilon } \right)$ and the divisions taken element-wise. In practice, a small number of iterations (e.g., 3) is sufficient for convergence. Eq. (8) shows that $\mathbf { Q } ^ { ( t ) }$ aligns with both the visual structure and the semantic guidance.

Updating GMM Parameters via $\mathbf { Q }$. Rather than relying on $\mathbf { P }$ as in traditional EM updates, we leverage the semantically enriched $\mathbf { Q } ^ { ( t ) }$ to refine the GMM parameters, thereby enforcing stronger semantic alignment. The updates are computed as:
$$ \mu _ { k } ^ { ( t ) } = \frac { \sum _ { i } Q _ { i k } ^ { ( t ) } { \bf v } _ { i } } { \sum _ { i } Q _ { i k } ^ { ( t ) } } , $$
$$ \boldsymbol { \Sigma } ^ { ( t ) } = \frac { 1 } { N } \sum _ { i } \sum _ { k } Q _ { i k } ^ { ( t ) } ( { \bf v } _ { i } - { \pmb { \mu } } _ { k } ^ { ( t ) } ) ( { \bf v } _ { i } - { \pmb { \mu } } _ { k } ^ { ( t ) } ) ^ { \top } , $$
which ensures that the resulting clusters are not only visually coherent but also semantically meaningful, avoiding convergence to purely density-driven partitions.
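One fused update, the Sinkhorn iteration of Eqs. (8)–(10) followed by the Q-weighted M-step of Eqs. (11)–(12), can be sketched in NumPy. The inputs $\mathbf{P}_i$, $\mathbf{Y}_i$, and $\mathbf{V}$ below are random stand-ins, $\epsilon$ is set larger than the paper's default purely to keep the toy kernel well conditioned, and the balanced column marginal $\tfrac{N}{C}\mathbf{1}_C$ follows the convention of Caron et al. (2020); the full Algorithm 1 alternates this update with the GMM E-step:

```python
import numpy as np

rng = np.random.default_rng(1)
N, C, d = 12, 4, 5       # toy sizes: samples, classes, feature dim
eps = 0.05               # entropy weight (larger than the paper's 0.01,
                         # chosen here only to keep exp(M/eps) well scaled)

def sinkhorn(M, eps, n_iter=200):
    """Solve max <Q, M> + eps * H(Q) over the transport polytope:
    rows sum to 1, columns sum to N/C (balanced-assignment marginals)."""
    n, c = M.shape
    K = np.exp(M / eps)                     # Gibbs kernel of Eq. (8)
    u, v = np.ones(n), np.ones(c)
    for _ in range(n_iter):                 # scaling updates, Eqs. (9)-(10)
        u = np.ones(n) / (K @ v)
        v = (n / c) * np.ones(c) / (K.T @ u)
    return u[:, None] * K * v[None, :]      # Q = Diag(u) K Diag(v)

# random stand-ins for the guidance distributions of one VFM and one VLM
Ps = [rng.dirichlet(np.ones(C), size=N)]    # GMM posteriors P_i
Ys = [rng.dirichlet(np.ones(C), size=N)]    # VLM distributions Y_i
etas, lams = [1.0], [0.8]
V = rng.normal(size=(N, d))                 # visual features v_i

M = sum(e * P for e, P in zip(etas, Ps)) + sum(l * Y for l, Y in zip(lams, Ys))
Q = sinkhorn(M, eps)                        # fused prediction plan

# Q-weighted M-step, Eqs. (11)-(12)
mu = (Q.T @ V) / Q.sum(axis=0)[:, None]     # class means mu_k
diff = V[:, None, :] - mu[None, :, :]       # N x C x d residuals
Sigma = np.einsum('nc,ncd,nce->de', Q, diff, diff) / N  # shared covariance
```

Because $\epsilon$ is small, the kernel is sharply peaked and the scaling iterations settle within a handful of passes, which matches the small iteration counts reported above.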
We alternate between the above steps, updating the prediction distribution $\mathbf { Q }$ and refining the GMM parameters $( \mu , \Sigma )$, until convergence. The full procedure is summarized in Algorithm 1.

# Experiments

# Experimental Setup

Datasets. Building on prior work (Zanella, Gérin, and Ayed 2024; Zhou et al. 2022b), we conduct an extensive evaluation of OTFusion across 11 diverse datasets: ImageNet (Deng et al. 2009), SUN397 (Xiao et al. 2010), Aircraft (Maji et al. 2013), Eurosat (Helber et al. 2019), StanfordCars (Krause et al. 2013), Food101 (Bossard, Guillaumin, and Gool 2014), Pets (Parkhi et al. 2012), Flowers102 (Nilsback and Zisserman 2008), Caltech101 (Fei-Fei, Fergus, and Perona 2004), DTD (Cimpoi et al. 2014), and UCF101 (Soomro, Zamir, and Shah 2012). These datasets span a broad spectrum of image classification tasks, allowing us to comprehensively assess the robustness and generalization capability of our method across distinct domains.

Benchmarks. We evaluate OTFusion against a comprehensive set of representative zero-shot adaptation approaches, which can be broadly categorized by whether they require additional training. The training-based category includes TPT (Manli et al. 2022), DiffTPT (Feng et al. 2023), and CoCoOp (Zhou et al. 2022a), which rely on task-specific optimization to adapt prompts or model parameters. In contrast, training-free methods operate without any fine-tuning and include CLIP (Radford et al. 2021), VisDesc (Menon and Vondrick 2023), CuPL (Pratt et al. 2023), Sus-X (Udandarao, Gupta, and Albanie 2023), CALIP (Guo et al. 2023), TDA (Karmanov et al. 2024), DMN (Zhang et al. 2024b), TransCLIP (Zanella, Gérin, and Ayed 2024), and ECALP (Li et al. 2025).

Implementation details.
Our framework is built upon open-source implementations of vision-language and vision foundation models. We use the CLIP ViT-B/16 and ResNet-50 models released by OpenAI as the primary semantic predictors, without any additional fine-tuning. For visual prior extraction, we adopt DINOv2 ViT-L/14, selected for its strong clustering properties. During inference, each image is processed with a single-view center crop $( 2 2 4 \times 2 2 4 )$ to reduce computational overhead. Class names are embedded using a fixed prompt template without manual prompt ensembling or optimization. The output distributions from the vision and language branches are fused via our proposed OTFusion. The entropy-smoothness parameter $\epsilon$ is set to 0.01, and the balance parameter $\lambda$ to 0.8. To keep the integration of multiple VFMs simple, we use uniform weights $\eta_i$ across all models, subject to the normalization constraint $\textstyle \sum _ { i } \eta _ { i } = 1$. All models run in inference-only mode, and no gradient updates are performed. Experiments are conducted on an NVIDIA 4090 GPU with 24 GB of memory.

# Main results

To assess the effectiveness of OTFusion, we evaluate it on a broad spectrum of visual recognition benchmarks in the transductive zero-shot setting. Our comparisons span several state-of-the-art training-based (indicated with \*) and training-free methods, using two widely adopted pre-trained backbones: CLIP ViT-B/16 and ResNet-50. Table 1 presents a comparison between OTFusion and several state-of-the-art methods across eleven datasets, where $\Delta$ denotes the performance improvement over the baseline model, CLIP. Among the compared methods, CoCoOp, TPT, and DiffTPT require prompt tuning to adapt to specific tasks, which introduces additional training overhead and limits generalization.
In contrast, our method is entirely training-free yet outperforms these methods by over $10 \%$ on average with both CLIP ViT-B/16 and ResNet-50 backbones. While DMN achieves slightly better performance than OTFusion on Aircraft, it depends on dual memory modules and task-specific hyperparameters, leading to higher computational costs and limited scalability. Moreover, OTFusion surpasses DMN on nearly all other datasets. ECALP achieves competitive results on DTD, but its reliance on dynamic graph construction brings considerable computational complexity. Other recent methods such as TransCLIP and TDA also show strong performance but still fall short of OTFusion, particularly on datasets requiring robust generalization (e.g., Eurosat, Flowers102). Earlier methods, such as CuPL, Sus-X, VisDesc, and CALIP, perform moderately across datasets, but none consistently approaches the overall effectiveness of OTFusion.

# Ablation Study

We conduct a comprehensive ablation study to verify the effectiveness of each component in our framework, as summarized in Table 2. Specifically, $\mathbf { Y }$-only performs Optimal Transport solely on the semantic probability distribution $\mathbf { Y }$ output by the VLMs, without using any visual information. CLIP-only models the distribution derived exclusively from CLIP's visual features, without incorporating perspectives from other VFMs. DINOv2-only relies solely on the visual distribution from DINOv2, assessing the contribution of different visual models. Concatenation fuses multiple visual features by concatenating them into a single input for the GMM, serving as the most straightforward fusion strategy. OTFusion jointly optimizes by integrating probability distributions from multiple visual models together with the semantic distribution $\mathbf { Y }$. Lastly, No joint-learning performs inference by directly combining the different distributions at test time without joint optimization, evaluating the necessity of joint optimization.
Table 1: Comparison with several state-of-the-art methods on eleven datasets. The best results are highlighted in bold, and the second-best results are underlined. \* indicates a training-based method.

Table 2: Ablation study across 11 downstream datasets. We evaluate the contributions of different components in OTFusion, including distribution sources ($\mathbf { Y }$-only, CLIP-only, DINOv2-only), fusion strategies (concatenation, distribution fusion), and training settings (with or without joint learning). OTFusion achieves consistent improvements over the CLIP baseline.

Importance of fusing vision features. A central design choice of our framework is the integration of multiple visual feature spaces. As shown in Table 2, Y-only leads to only marginal gains over native CLIP $( + 0 . 6 1 \% )$, suggesting that weak supervisory signals are insufficient unless supported by discriminative features. In contrast, integrating vision features significantly improves zero-shot classification accuracy. Compared to native CLIP $( 6 5 . 2 4 \% )$, both CLIP-only $( 7 0 . 9 6 \% )$ and DINOv2-only $( 7 4 . 2 0 \% )$ achieve substantial gains. Moreover, when combining both visual sources via feature concatenation or distribution fusion (used in OTFusion), performance further improves to $7 5 . 5 4 \%$ and $7 4 . 9 5 \%$, respectively. This indicates that multi-view visual features provide complementary information and jointly enhance the model's performance.

Fusion Strategy Matters. Given the importance of integrating features, the method of fusion plays a key role. As shown in Table 2, naive concatenation of features performs well $( 7 5 . 5 4 \% )$, and our probabilistic fusion method, which combines prediction distributions rather than raw features, achieves comparable results.
This suggests that both approaches are effective, and that distribution-based fusion is advantageous for its scalability and compatibility with probabilistic modeling.

Importance of Joint-learning. Our method leverages a joint optimization objective (Eq. (4)) that tightly couples the evolving prediction distribution $\mathbf { Q }$ with both the visual assignments $\mathbf { P }$ (from the GMM) and the semantic prior $\mathbf { Y }$ (from the VLMs). The ablation results in Table 2 validate this effect: when joint learning is disabled (No joint-learning), the average accuracy drops significantly from $7 4 . 9 5 \%$ to $7 1 . 6 7 \%$, with particularly large decreases on StanfordCars $( 6 . 1 2 \% )$, SUN397 $( 3 . 4 6 \% )$, and Eurosat $( 1 0 . 5 5 \% )$. These performance gaps underscore that decoupling the fusion module from the downstream prediction task weakens semantic alignment, leading to suboptimal clusters and degraded classification accuracy. In contrast, the joint-learning paradigm enables mutual reinforcement: the GMM's clustering process is no longer guided solely by visual similarity but is also influenced by the semantic structure induced by the VLMs. Conversely, the predictive distribution $\mathbf { Q }$ benefits from the refined visual clusters shaped by the GMM, which evolve in tandem during learning.

# Further Analyses

Parameter Sensitivity Analysis. We conduct a thorough sensitivity analysis on two key hyperparameters of our framework: the entropy-regularization parameter $\epsilon$ in Optimal Transport and the fusion coefficient $\lambda$ in our distribution integration. As shown in Figure 2, the left panel evaluates the impact of varying $\epsilon$ in the range $[ 1 0 ^ { - 3 } , 1 0 ^ { - 1 } ]$. Across all 11 datasets, performance remains remarkably stable, with only minimal fluctuations.
This demonstrates that our method is largely insensitive to the choice of $\epsilon$, and that a broad range of values yields satisfactory results. Notably, an overly large $\epsilon$ (e.g., $1 0 ^ { - 1 }$) may slightly degrade performance on certain datasets such as Caltech101 and Food101. We recommend $\epsilon = 0 . 0 1$ as a stable and effective default. We also explore the influence of $\lambda$, which balances the contributions of the visual distribution from VFMs and the semantic distribution from VLMs, varying it between 0.1 and 1. As shown in Figure 2, OTFusion performs consistently well across datasets when $\lambda$ is relatively small (0.1 to 0.7), demonstrating strong robustness and effective fusion between different foundation models. However, as $\lambda$ increases beyond 0.7, we observe a performance drop on several datasets such as Eurosat and Flowers102. This suggests that overemphasizing the semantic signal from $\mathbf { Y }$ may lead to overfitting to potentially noisy pseudo-labels, diminishing the contribution of discriminative visual patterns.

Figure 2: Parameter sensitivity analysis on $\epsilon$ (left) and $\lambda$ (right) across 11 datasets. Performance remains stable over a wide range of values, highlighting the robustness of our proposed method OTFusion.

Convergence Analysis. We analyze convergence by jointly examining the evolution of classification accuracy and optimization loss over iterations, as shown in Figure 3. Each curve corresponds to a different dataset, reflecting the stability and learning dynamics of our model across diverse domains. From the left panel (Accuracy vs. Iteration), we observe that most datasets exhibit a sharp accuracy increase in the first few iterations, with performance typically plateauing after 5 rounds.
Datasets such as Caltech101 and Pets rapidly converge to accuracies above $90 \%$, demonstrating the model's efficiency on structured visual domains. In contrast, more challenging datasets such as Eurosat and DTD require more iterations. Meanwhile, the right panel (Loss vs. Iteration) shows that the loss changes rapidly in the early stage, owing to the dynamic re-alignment between pseudo-labels and evolving visual clusters, and then quickly stabilizes, typically within 3 to 5 iterations. This behavior confirms that our joint optimization effectively reaches an equilibrium between the visual GMM and the semantic guidance. Overall, OTFusion converges quickly and stably, balancing semantic alignment and visual clustering, and delivers consistent performance improvements with minimal computational overhead.

Figure 3: Convergence curves on 11 datasets. Left: Accuracy vs. Iteration. Right: Loss vs. Iteration. The method converges within a few iterations, demonstrating efficient and stable optimization.
Transductive zero-shot learning (ZSL) aims to classify unseen categories by leveraging both semantic class descriptions and the distribution of unlabeled test data. While Vision-Language Models (VLMs) such as CLIP excel at aligning visual inputs with textual semantics, they often rely too heavily on class-level priors and fail to capture fine-grained visual cues. In contrast, Vision-only Foundation Models (VFMs) like DINOv2 provide rich perceptual features but lack semantic alignment. To exploit the complementary strengths of these models, we propose OTFusion, a simple yet effective training-free framework that bridges VLMs and VFMs via Optimal Transport. Specifically, OTFusion aims to learn a shared probabilistic representation that aligns visual and semantic information by minimizing the transport cost between their respective distributions. This unified distribution enables coherent class predictions that are both semantically meaningful and visually grounded. Extensive experiments on 11 benchmark datasets demonstrate that OTFusion consistently outperforms the original CLIP model, achieving an average accuracy improvement of nearly $10\%$, all without any fine-tuning or additional annotations. The code will be publicly released after the paper is accepted.