# 1 Introduction

Large language model (LLM) capabilities have transformed numerous domains, from creative writing to scientific research. A critical detail of LLM deployment is the sampling method: the algorithm that determines how tokens are sampled during generation. Sampling strategies directly impact the quality and diversity of generated outputs, making them important to both research and deployment. Commonly used samplers include basic (temperature-only) sampling (Ackley et al., 1985), which samples tokens based on their temperature-scaled softmax-normalized logits; top-k sampling (Fan et al., 2018), which samples from the $k$ most probable tokens; and top-p sampling (Holtzman et al., 2020), which samples from the tokens comprising the top $p$ probability mass. Other samplers include $\eta$-sampling, $\epsilon$-sampling (Hewitt et al., 2022) and mirostat sampling (Basu et al., 2020).

Preprint. Under review.

Figure 1: Visualizing Human Evaluators' Scores from Nguyen et al. (2024)'s Data Demonstrates Min-p Does Not "Consistently" Outperform Other Samplers. Rather, the original paper's data suggest min-p is largely indistinguishable from other samplers based on 95% confidence intervals.

Recently, the paper "Turning Up the Heat: Min-P Sampling for Creative and Coherent LLM Outputs" (Nguyen et al., 2024) introduced a new sampling method called min-p sampling, claiming it produces higher quality and higher diversity outputs than other samplers. Given the potential impact of an improved sampling method and the paper's exposure as the 18th highest-scoring submission at ICLR 2025, we carefully scrutinized the methodologies, data, analyses, code and conclusions presented in support of min-p across the authors' four lines of evidence: (1) human evaluations, (2) natural language processing (NLP) benchmark evaluations, (3) LLM-as-a-Judge evaluations and (4) community adoption metrics.
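To make the truncation rules above concrete, the samplers can be sketched as follows. This is illustrative NumPy code of our own, not any paper's reference implementation; note that temperature is applied before truncation here, an ordering on which implementations differ (see Sec. 2.4).

```python
import numpy as np

def sample_token(logits, temperature=1.0, top_k=None, top_p=None, min_p=None, rng=None):
    """Illustrative sketch of common samplers (our own code, not reference code).

    At most one of top_k / top_p / min_p should be set; with none set,
    this is basic (temperature-only) sampling."""
    rng = rng if rng is not None else np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()

    keep = np.ones_like(probs, dtype=bool)
    if top_k is not None:
        # top-k: keep only the k most probable tokens
        keep[:] = False
        keep[np.argsort(probs)[-top_k:]] = True
    elif top_p is not None:
        # top-p: keep the smallest set of most-probable tokens whose
        # cumulative probability mass reaches p
        order = np.argsort(probs)[::-1]
        cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
        keep[:] = False
        keep[order[:cutoff]] = True
    elif min_p is not None:
        # min-p: keep tokens whose probability is at least min_p times
        # the maximum token probability
        keep = probs >= min_p * probs.max()

    probs = np.where(keep, probs, 0.0)
    return int(rng.choice(len(probs), p=probs / probs.sum()))
```

Under min-p, the truncation threshold scales with the model's confidence: when the top token dominates, most of the tail is pruned; when the distribution is flat, more tokens survive.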
Our re-analyses of the evidence lead us to conclude that, relative to commonly used samplers, min-p improves neither quality, nor diversity, nor the trade-off between quality and diversity. Our code is publicly available on GitHub, as are our W&B sweeps of NLP benchmark evaluations.

# 2 Re-Analyzing Min-p's Human Evaluations

We began by re-analyzing the original paper's human evaluations, since human judgments are widely considered the gold standard for assessing language model outputs (Van Der Lee et al., 2019; Roller et al., 2020; Howcroft et al., 2020; Clark et al., 2021; Liang et al., 2022; Khashabi et al., 2022; Chiang et al., 2024; Biderman et al., 2024; Schaeffer et al., 2025b). We identified four key issues.

# 2.1 Human evaluators' scores for one of two baseline samplers were omitted

Section 6 of Nguyen et al. (2024) states that human participants evaluated min-p against a single baseline sampler: top-p. Both the Oct 2024 arXiv manuscript and the ICLR OpenReview manuscript repeatedly state that min-p and top-p were considered, and their Table 4 presents results only for these. However, when examining the paper's data, we discovered that scores for a second baseline sampler (basic sampling) were excluded from the methodology, the analysis and the results without mention or explanation. We publicly confirmed this with the authors. These omitted scores comprised 1/3 of the total collected scores. After we raised the issue, the omitted data were added to the Camera Ready's Table 4, but the methodology, the results and the conclusions have not been correspondingly updated.

# 2.2 Visualizations and Statistical Tests Fail to Support Claim That Min-p Outperforms Other Samplers

Based on the human evaluators' scores, Section 6 of Nguyen et al. (2024) concluded that min-p "consistently" outperformed top-p "across all settings": "Overall, min-p sampling consistently scored higher than top-p sampling across all settings [...]
A paired t-test confirmed that the differences in scores between min-p and top-p sampling were statistically significant ($p < 0.05$)."

Table 1: Hypothesis Testing of Human Evaluators' Scores Fails to Support Claim that Min-p Consistently Outperforms Other Samplers. To test whether evidence supports the claim that min-p "consistently outperforms" other samplers, we conducted one-sided paired t-tests using the authors' published data. Without correcting for multiple comparisons, evidence exists to support min-p's superiority in 5 of 12 comparisons at $\alpha = 0.05$ and 2 of 12 comparisons at $\alpha = 0.01$. After applying a Bonferroni correction for multiple comparisons, evidence exists to support min-p's superiority in 1 of 12 comparisons at $\alpha = 0.05$ and 0 of 12 comparisons at $\alpha = 0.01$. For details, see Sec. 2.3. $^{*}p < 0.05$, $^{**}p < 0.01$, $^{***}p < 0.001$, $\dagger$ significant after Bonferroni correction for 12 comparisons. Note: all tests were paired t-tests with df $= 52$, one-sided (alternative $=$ "greater").

However, both visualizations and statistical hypothesis tests of the original human evaluation data suggest min-p is indistinguishable from the baselines in almost all settings. To briefly explain the human evaluation methodology: three samplers (basic, top-p and min-p) were compared in six conditions, spanning three temperatures (1.0, 2.0, 3.0) and two diversity settings ("high" and "low") corresponding to different $p$ hyperparameters. Humans were asked to score the generated outputs under two metrics: quality and diversity. Participants were excluded if they failed attention checks. For more information, please see the original manuscript.
We focused on the "high" diversity setting for three reasons. First, the claimed advantage of min-p sampling is that it provides both high quality and high diversity, whereas other samplers typically trade one off against the other. Second, the authors publicly told us to focus on the high diversity setting, writing that "the low [diversity] settings were quite experimental". Third, we believe that top-p's $p$ value in the low diversity setting was poorly chosen; indeed, after we raised these concerns, the authors ran a new human evaluation that changed the low diversity top-p $p$ from 0.1 to 0.9. We return to this second human evaluation in Sec. 2.4.

We began by visualizing the human evaluators' scores from Nguyen et al. (2024). Using the original paper's data, Fig. 1 reveals that the three samplers provide similar quality and similar diversity, with 95% confidence intervals frequently overlapping. To more rigorously assess the claim that min-p consistently outperforms other samplers, we conducted 12 one-sided paired t-tests, one for each combination of metric (quality or diversity), temperature (1.0, 2.0, 3.0) and baseline sampler (min-p versus basic, min-p versus top-p). In each test, the null hypothesis is that min-p's score is less than or equal to the other sampler's score, and the alternative hypothesis is that min-p's score is greater. Statistical test results are displayed in Table 1. Without correcting for multiple comparisons, we found evidence to reject the null hypothesis in 5 of 12 tests at $\alpha = 0.05$ and 2 of 12 tests at $\alpha = 0.01$. After applying a Bonferroni correction for multiple comparisons, we found evidence to reject the null hypothesis in 1 of 12 tests at $\alpha = 0.05$ and 0 of 12 tests at $\alpha = 0.01$.
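The per-comparison testing procedure takes only a few lines. The sketch below uses synthetic scores rather than the released data (the real analysis uses the participants' scores, hence df $= 52$), so the printed counts are illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 53  # matches df = 52 in Table 1

# Synthetic stand-ins for the released data: one (min-p, baseline) score
# pair per participant for each of the 12 comparisons
# (2 metrics x 3 temperatures x 2 baseline samplers).
comparisons = [
    (rng.normal(5.0, 1.0, n_participants), rng.normal(5.0, 1.0, n_participants))
    for _ in range(12)
]

# One one-sided paired t-test per comparison; H1: min-p scores higher.
p_values = [
    stats.ttest_rel(minp, baseline, alternative="greater").pvalue
    for minp, baseline in comparisons
]

alpha = 0.05
print(sum(p < alpha for p in p_values), "of 12 reject H0 (uncorrected)")
print(sum(p < alpha / 12 for p in p_values), "of 12 reject H0 (Bonferroni)")
```

The Bonferroni correction simply divides $\alpha$ by the number of comparisons, so fewer tests can reject after correction than before it.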
Based on the original paper's data, there is insufficient evidence to support the claim that min-p consistently outperforms baseline samplers across all settings. Furthermore, given that the original paper claims that min-p "consistently" scores higher, an Intersection-Union Test (IUT) may be the appropriate statistical test, where the alternative hypothesis is that min-p is better in all 12 comparisons and the null hypothesis is the set complement. Since the largest $p$-value of the 12 comparisons is 0.378, under the IUT, we again find insufficient evidence to reject the null hypothesis at both $\alpha = 0.05$ and $\alpha = 0.01$.

The original paper's statistical analysis reached a different conclusion for two reasons. First, despite claiming that min-p "consistently scored higher" "across all settings" (metric, temperature, and diversity), the paper pooled data across all settings and performed a single t-test, which tests only whether min-p scored higher on average. Second, pooling over all settings is misleading because, in the "low" diversity condition, top-p's hyperparameter $p$ was poorly chosen in a way that pulled top-p down significantly; the authors said publicly to ignore this particular low diversity condition and subsequently changed $p$ in their new human experiment (Sec. 2.4). Thus, we believe the original paper's statistical inferences are misleading or incorrect.

Figure 2: Manual Annotation of Human Evaluators' Qualitative Responses Fails to Support Claim that Min-p Was the Preferred Sampler. We manually annotated responses from human annotators regarding their preferred sampler(s) at the end of the original paper's study. The responses suggest min-p was not the most preferred sampler. We provide example responses in Sec. 2.3.
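The IUT decision rule itself is simple: the compound claim "better in all 12 settings" is supported only if every individual test rejects, so the decision reduces to comparing the largest per-comparison $p$-value against $\alpha$. A sketch, where 0.378 is the largest of our 12 $p$-values and the remaining values are placeholders:

```python
def intersection_union_test(p_values, alpha):
    # Reject the IUT null (min-p fails in at least one setting) only if
    # every individual one-sided test rejects, i.e. the max p-value < alpha.
    return max(p_values) < alpha

# 0.378 is the largest of the 12 per-comparison p-values reported above;
# the other entries are placeholders for illustration.
p_values = [0.003, 0.02, 0.05, 0.378]
assert not intersection_union_test(p_values, alpha=0.05)
```

Because only the maximum matters, a single non-significant comparison is enough to leave the "consistently outperforms" claim unsupported.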
# 2.3 Human Evaluators' Qualitative Responses Fail to Support Claim That Min-p Is Preferred Over Other Samplers

At the end of the human evaluation study, the original paper asked human participants to qualitatively describe which sampler(s) they preferred. The paper claimed that human evaluators' qualitative responses support min-p over top-p: "Participants frequently noted that outputs generated with min-p sampling were more coherent and creative, especially at higher temperatures." However, upon reading through the paper's data, we believe the qualitative responses suggest a different preference pattern. We manually annotated the qualitative responses, visualized our annotations of the humans' expressed preferences (Fig. 2), and publicly posted our annotations in the same format as the original paper. We found two results: (1) more human evaluators explicitly preferred basic sampling than preferred min-p sampling, and (2) min-p was only slightly preferred over top-p. We provide quotations from human evaluators favoring basic sampling in Appendix A.

# 2.4 New Human Evaluation Study Shows Min-p Does Not Outperform Baselines in Quality, in Diversity, or in a Tradeoff Between Quality and Diversity

In response to our feedback, the authors conducted a new human evaluation study and added it to Appendix C.2. Their new study made multiple methodological changes:

• Different sampler implementation: switched from applying temperature after truncation to applying temperature before truncation.
• Different distribution of human participants from Prolific.
• Different sampling hyperparameters for top-p: switched from 0.1 and 0.9 to 0.9 and 0.95.
• Different sampling hyperparameters for min-p: switched from 0.2 and 0.05 to 0.1 and 0.05.
• Different allotted reading time: increased from 30 minutes to 45 minutes.
• Different sampled text: 3 short paragraphs were replaced with a single complete story.
Figure 3: New Human Evaluation Study Suggests Min-p Does Not Outperform Baselines in Quality, in Diversity, or in a Pareto-Optimal Tradeoff Between Quality and Diversity. Visualization of scores from Nguyen et al. (2024)'s second human experiment. Min-p's performance advantage relative to basic and top-p sampling is observed in conditions (e.g., higher temperatures) where absolute quality and absolute diversity scores across all samplers are lower compared to other regimes (e.g., $T = 1$). For practitioners optimizing for maximal quality and maximal diversity, these results suggest that min-p offers no apparent advantage over basic or top-p sampling.

• Different rubric for human participants to evaluate sampled outputs.

Regarding the new human evaluation data and results, we share two discoveries here. First, we believe one value is incorrectly reported: in Nguyen et al. (2024)'s Table 15, the average score of min-p at $p = 0.05$ and temperature $T = 2$ is reported as 7.80, but based on the authors' publicly posted data, we believe the correct value is 5.80. Second, and more generally, the data again show that min-p does not outperform baselines in quality, in diversity, or in a favorable tradeoff between quality and diversity. In this new study, whenever min-p outperforms other samplers, it does so under conditions that yield lower absolute scores than other conditions (Fig. 3). For instance, min-p shows an advantage over the baselines in the "high" diversity setting at $T = 2$ and in the "low" diversity setting at $T = 3$. However, in both of these conditions, min-p receives lower quality and diversity scores than it does in the "high" diversity setting at $T = 1$ and the "low" diversity setting at $T = 2$.
This shows that min-p's advantage is observed primarily under conditions that yield lower overall quality and diversity scores than other achievable conditions. For anyone seeking higher quality or diversity, min-p offers no apparent advantage over basic or top-p sampling.

# 3 Extending Min-p's NLP Benchmark Evaluations

We next turned to the original paper's NLP benchmark evaluations of several models on GSM8K with Chain-of-Thought (Cobbe et al., 2021) and GPQA (5-shot) (Rein et al., 2023), which concluded that: "Min-p sampling achieves superior performance across benchmarks and temperatures."

# 3.1 Thorough Hyperparameter Sweep on GSM8K Contradicts Claim of Min-p's Superiority

To test whether min-p indeed achieves superior performance, we conducted an extensive analysis on GSM8K, sweeping the following models, samplers, hyperparameters and sampling seeds:

• 9 Models: Qwen 2.5 (Qwen et al., 2025) 0.5B, 1.5B, 3B and 7B; Mistral 7B v0.1 (Jiang et al., 2023); Llama (Grattafiori et al., 2024) 3.1 8B and 3.2 3B; Gemma 2 (Team et al., 2024) 2B and 9B.
• 2 Model Stages: Pre-trained ("Base") and Post-trained ("Instruct").
• 4 Samplers: basic, top-p, top-k, min-p.
• 31 Temperatures: 0.0 ("greedy") to 3.0 in increments of 0.1.
• 6 Hyperparameters Per Sampler: We chose 6 hyperparameters per sampler, except for basic, which has no hyperparameter beyond temperature. The values were taken from the original paper; some were lightly edited to make them more evenly distributed:
  – basic: No hyperparameters other than temperature.
  – top-k: $k \in \{10, 30, 50, 100, 150, 200\}$.
  – top-p: $p \in \{0.99, 0.98, 0.95, 0.9, 0.8, 0.7\}$.
  – min-p: $p \in \{0.01, 0.02, 0.05, 0.1, 0.2, 0.3\}$.

Figure 4: Min-p Does Not Consistently Outperform Other Samplers on GSM8K When Controlling For Hyperparameter Volume.
In our first analysis, we measured how the maximum Exact Match (Strict) score for each sampler improves as the number of hyperparameters increases. Basic sampling has only a temperature hyperparameter, and we therefore do not sweep it to the same degree.

• 3 Random Seeds for Sampling: $\{0, 1, 2\}$.

Due to our compute budget, we only evaluated GSM8K (albeit under two prompt formats, for reasons explained below). This sweep and the sweep below required ~6000 Nvidia A100-hours. GSM8K contains a subset of samples with ambiguous language or incorrect labels that have since been identified and cleaned (Vendrow et al., 2025), and models may have been trained on GSM8K (Zhang et al., 2024), but we used GSM8K nonetheless for consistency with the original paper. We similarly used EleutherAI's LM Eval Harness (Gao et al., 2021; Biderman et al., 2024). To evaluate how performant each sampler is, we first averaged over the three sampling seeds and then conducted two complementary analyses:

1. For each sampler, we subsampled an equal number of hyperparameters ranging from $N = 1$ to $N = 100$ and computed the maximum Exact Match (Strict) score achieved by the sampled subset of size $N$. We repeated this process 150 times, averaging over the subsampled subsets' scores. This "Best-of-N" analysis (Nakano et al., 2021; Stiennon et al., 2020; Hughes et al., 2024; Schaeffer et al., 2025a) tells us the best possible performance each sampler will likely obtain as its hyperparameter space increases.

Figure 5: Min-p Does Not Consistently Outperform Other Samplers on GSM8K When Controlling For Hyperparameter Volume. In our second analysis, we measured how the difference between min-p's highest score and the best non-min-p sampler's highest score changes as the number of swept hyperparameters increases. Min-p matches or underperforms other samplers.

2.
For $N = 1$ to $N = 100$, we subsampled $N$ hyperparameters per sampler and computed the difference between the maximum Exact Match (Strict) score achieved by min-p and the maximum score achieved by any other sampler. We repeated this process 150 times, averaging over the subsampled subsets. This tells us by how much min-p outperforms all other samplers, controlling for the size of each sampler's hyperparameter space.

Both analyses reached consistent results: min-p does not outperform other samplers when equalizing the volume of hyperparameter space. Fig. 4 and Fig. 5 respectively demonstrate that min-p is largely indistinguishable from other samplers. After we showed these results to the authors, they informed us that we had run our experiments using the "Llama" formatting of GSM8K prompts, as we had used the command from the authors' public Colab notebook; the authors clarified that "Llama" formatting should be used only for Llama models. We then reran our experiments using standard formatting of GSM8K prompts. The results were nearly identical (Appendix B), with one small difference: min-p does produce higher scores for 2 of 12 language models. Again, we conclude that min-p does not outperform other samplers on either formatting of GSM8K when controlling for hyperparameter volume.

Figure 6: Nguyen et al. (2024)'s LLM-As-A-Judge Evaluations Suggest Min-p Typically Matches Other Samplers Despite 2× to 10× More Hyperparameter Tuning. Left: Nguyen et al. (2024) swept min-p with more than twice as many hyperparameters as top-p and more than ten times as many hyperparameters as basic. Right: Pairwise comparisons show min-p typically performs on-par with other samplers. Data were obtained from the first author's public GitHub repository.
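The Best-of-N subsampling analysis described above can be sketched as follows. This is our own illustrative code operating on one sampler's per-configuration scores (already averaged over seeds); all names and the synthetic scores are ours:

```python
import numpy as np

def best_of_n_curve(config_scores, n_values, n_repeats=150, seed=0):
    """For each N, estimate the expected best score over a random subset
    of N hyperparameter configurations, averaged over n_repeats draws."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(config_scores, dtype=float)
    curve = []
    for n in n_values:
        n = min(n, len(scores))
        maxima = [
            scores[rng.choice(len(scores), size=n, replace=False)].max()
            for _ in range(n_repeats)
        ]
        curve.append(float(np.mean(maxima)))
    return curve

# Example: 186 configurations (6 hyperparameters x 31 temperatures),
# with synthetic scores standing in for Exact Match (Strict).
fake_scores = np.random.default_rng(1).uniform(0.2, 0.7, size=186)
curve = best_of_n_curve(fake_scores, n_values=[1, 10, 100])
# The curve rises toward the sampler's global best score as N grows.
```

Comparing these curves across samplers at equal $N$ is what equalizes the volume of hyperparameter space; the second analysis simply differences min-p's curve against the best competing curve.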
# 4 Investigating Min-p's LLM-As-A-Judge Evaluations

We then turned to the original paper's LLM-as-a-Judge evaluations (Zheng et al., 2023), specifically AlpacaEval creative writing evaluations (Dubois et al., 2023).

# 4.1 Under-Specified and Indirect Methodology Hinders Reproduction and Interpretation

In the Oct 2024 arXiv manuscript and the ICLR OpenReview manuscript, the methodology is under-specified in several ways: there is no mention of which model(s) were sampled from, which model(s) served as the judge(s), or how hyperparameters were chosen or swept. Additionally, there is no description of uncertainty for the reported win rates, meaning readers are unable to decide whether win rates are statistically different from chance (50.00%).

Furthermore, the experiment seems designed in a manner that introduces a confounder. For those unfamiliar, AlpacaEval reports win rates between paired comparisons. Instead of directly comparing min-p against other samplers, the authors compared each sampler against a common fixed sampler: basic ($\tau = 1.0$). This comparison strategy is indirect, since comparing directly against min-p would offer a clearer test of its superiority while using the same number of comparisons. The authors' design choice is additionally concerning because LLM-judge preferences are probably not transitive, as shown by recent research (Xu et al., 2025); that is, if sampler A beats sampler B, and sampler B beats sampler C, it does not necessarily follow that sampler A beats sampler C. Therefore, comparing all methods to basic ($\tau = 1.0$) provides no reliable inference about min-p's performance relative to top-p or basic at other temperatures. These under-specified aspects of the methodology, combined with its indirect experimental design, make drawing conclusions difficult.
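As an example of the missing uncertainty quantification, a normal-approximation 95% confidence interval for a win rate takes one line. Assuming AlpacaEval's 805 prompts (the specific numbers below are illustrative, not taken from the paper), a reported win rate near 52% is statistically indistinguishable from chance:

```python
import math

def win_rate_ci95(win_rate, n):
    # Normal-approximation 95% CI for a binomial win rate over n comparisons.
    half_width = 1.96 * math.sqrt(win_rate * (1.0 - win_rate) / n)
    return win_rate - half_width, win_rate + half_width

# Illustrative: a 52.01% win rate over 805 paired comparisons.
lo, hi = win_rate_ci95(0.5201, 805)
# The interval spans roughly (0.486, 0.555) and thus contains 0.5 (chance).
```

With only a few hundred paired comparisons, win rates within a few percentage points of 50% cannot be distinguished from a coin flip, which is why reporting them without intervals is uninformative.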
# 4.2 Min-p Received More Hyperparameter Tuning and Frequently Fails to Win

No scores, nor code to create scores, were provided with the original paper's GitHub repository. While drafting this manuscript, we became aware of ongoing work to release code in a separate repository. Results from that code revealed two discoveries. First, min-p received ~2× more hyperparameter tuning than top-p sampling and ~10× more tuning than basic sampling (Fig. 6, left), potentially tilting the scales in its favor. Second, the win rates show that min-p frequently fails to outperform top-p and basic sampling, especially when accounting for confidence intervals; we visualized the new data with 95% confidence intervals (with horizontal offsets added for visibility) (Fig. 6, right).

# 4.3 Table 3(b) Reported The Higher of Two Scores For Min-p But the Lower of Two Scores For Top-p

As evidence for the LLM-As-A-Judge evaluation scores in the original paper's Table 3(b), the first author publicly shared a Telegram link showing that the higher of two scores was reported for min-p (the reported win rate of 52.01 corresponds to $p = 0.05$, but $p = 0.01$ yields a lower win rate of 50.14), while the lower of two scores was reported for top-p (the reported win rate of 50.07 corresponds to $p = 0.9$, but $p = 0.98$ yields a higher win rate of 50.43).

# 5 Substantiating Min-p's Community Adoption Claims

# 5.1 Claimed GitHub Repositories & Stars Were Unsubstantiated and Retracted

The arXiv and peer-reviewed manuscripts of Nguyen et al. (2024) included specific claims about min-p's adoption in the language modeling community: "Community Adoption: Min-p sampling has been rapidly adopted by the opensource community, with over 54,000 GitHub repositories using it, amassing a cumulative 1.1 million stars across these projects."
We attempted to verify these numbers through analysis of major GitHub language modeling repositories. Per our calculations, the combined GitHub stars of leading LM repositories (transformers, ollama, llama.cpp, vLLM, Unsloth, mamba, SGLang, llama-cpp-python) summed to 453k stars as of March 2025, less than half the 1.1M stars claimed for min-p alone. We could not substantiate either the 54k GitHub repositories or the 1.1M GitHub stars. When we inquired how these numbers were calculated, the authors publicly stated that GitHub was searched for "min-p", which yields many false positives. The authors retracted both the 54k GitHub repository claim and the 1.1M GitHub stars claim from the ICLR 2025 Camera Ready manuscript.

Given that the numbers have been retracted, we debated whether to include this section. We decided to include it for three reasons. First, we wanted to document this clear failure of the review process. These numbers were unsubstantiated in the manuscript and, in our opinion, preposterous. Yet three of four reviewers and the Area Chair highly commended the community adoption as evidence of min-p's superiority; for instance, the Area Chair wrote: "[Min-p] is simple and is already widely adopted by the community (as mentioned by [Reviewer] D38H, 'The usage of it in 54,000 Github repositories alone is very impressive'). [...] The resulting review scores reflect the high quality of the paper: It presents convincing experiments, thorough analysis, and the provided method has an extremely high impact." Reviewer fwNb similarly emphasized the community adoption numbers: "[min-p] has good empirical results, both as measured on benchmarks and (more important [sic]) by adoption of the community". Second, the machine learning research community may have learned of min-p before these community adoption numbers were retracted, e.g., when the original paper was posted to arXiv or accepted at ICLR 2025.
Thus, we felt a proactive clarification would better rectify the scientific record. Third, as we detail below, we believe the new community adoption statement remains misleading.

# 5.2 The Revised Community Adoption Statement Inflates Min-p's Adoption

The ICLR 2025 Camera Ready now contains a different statement of community adoption: "[Min-p] is now integrated in widely used frameworks such as Hugging Face Transformers, vLLM, and SGLang, which collectively have accrued over 350,000 GitHub stars. This integration, coupled with extensive downstream usage (e.g., over 290,000 dependent repositories for Transformers alone), underscores the method's practical impact."

While being integrated into such frameworks is indeed a contribution, this statement misleadingly represents these frameworks' usage as min-p's usage, rather than specifically measuring min-p's usage. The new statement is akin to publishing a book and then claiming credit for the library.

# 6 Discussion and Limitations

Scientific Conclusions. This investigation led us to conclude that the four lines of evidence presented by Nguyen et al. (2024), namely (1) human evaluations, (2) NLP benchmark evaluations, (3) LLM-as-a-Judge evaluations and (4) community adoption, do not support claims of min-p's superiority. While min-p usefully gives users another option to try, the original paper's data and our extensions of those data suggest that all samplers perform roughly the same once given the same amount of hyperparameter tuning; however, in our view, more research would be needed to assess the veracity of this conclusion. The paper's data do weakly suggest that min-p sampling can sometimes provide a benefit at higher temperatures, albeit with the critical caveat that absolute performance is meaningfully worse in this high-temperature regime than in standard temperature regimes.
Key Limitation. Our manuscript re-analyzes the evidence presented by the original paper (Nguyen et al., 2024) and additional evidence created using the original paper's code. Our conclusions are based on that evidence. We emphasize that new evidence might lead to different conclusions.

What Went Wrong During the ICLR 2025 Review Process? Nguyen et al. (2024)'s outstanding success in the ICLR 2025 review process, achieving Oral presentation status and ranking as the 18th highest-scoring submission, is difficult to reconcile with the flaws our investigation uncovered. The reviewers overlooked methodological issues, such as which model(s) were sampled from for the LLM-as-a-Judge evaluations and the missing or inadequate treatment of uncertainty in the presented results. Reviewers uncritically accepted the authors' claim that "over 54,000 GitHub repositories" were using min-p sampling, when intuition or a quick GitHub search reveals cause for pause. The Area Chair's comment also contains a clear misstatement: it highlights min-p's success in the low temperature regime ("in the low temperature regime, [min-p] provides a significant advantage"), when the paper specifically claims benefits in the high temperature regime.

# References

David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169, 1985.

Sourya Basu, Govardana Sachitanandam Ramachandran, Nitish Shirish Keskar, and Lav R Varshney. Mirostat: A neural text decoding algorithm that directly controls perplexity. arXiv preprint arXiv:2007.14966, 2020.

Stella Biderman, Hailey Schoelkopf, Lintang Sutawika, Leo Gao, Jonathan Tow, Baber Abbasi, Alham Fikri Aji, Pawan Sasanka Ammanamanchi, Sidney Black, Jordan Clive, Anthony DiPofi, Julen Etxaniz, Benjamin Fattori, Jessica Zosa Forde, Charles Foster, Jeffrey Hsu, Mimansa Jaiswal, Wilson Y.
Lee, Haonan Li, Charles Lovering, Niklas Muennighoff, Ellie Pavlick, Jason Phang, Aviya Skowron, Samson Tan, Xiangru Tang, Kevin A. Wang, Genta Indra Winata, François Yvon, and Andy Zou. Lessons from the trenches on reproducible evaluation of language models, 2024. URL https://arxiv.org/abs/2405.14782.

Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Hao Zhang, Banghua Zhu, Michael Jordan, Joseph E. Gonzalez, and Ion Stoica. Chatbot Arena: An open platform for evaluating LLMs by human preference, 2024. URL https://arxiv.org/abs/2403.04132.

Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A Smith. All that's 'human' is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 7282–7296, 2021.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems, 2021. URL https://arxiv.org/abs/2110.14168.

Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S Liang, and Tatsunori B Hashimoto. AlpacaFarm: A simulation framework for methods that learn from human feedback. Advances in Neural Information Processing Systems, 36:30039–30069, 2023.

Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833, 2018.

Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, et al. A framework for few-shot language model evaluation, version v0.0.1, September 2021.
Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, et al. The Llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yazdan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Hancock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Ce Liu, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching-Hsiang Chu, Chris Cai, Chris Tindal, Christoph Feichtenhofer, Cynthia Gao, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Eric-Tuan Le, Erik Brinkman, Esteban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Filippos Kokkinos, Firat Ozgenel, Francesco Caggioni, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hakan Inan, Hamid Shojanazeri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry Aspegren, Hunter Goldman, Hongyuan Zhan, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Ilias Leontiadis, Irina-Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Janice Lam, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jennifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan McPhie, Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Kartikay Khandelwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Kiran Jagadeesh, Kun Huang, Kunal Chawla, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu, Liron Moshkovich, Luca Wehrstedt, Madian Khabsa, Manav 
Avalani, Manish Bhatt, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Keneally, Miao Liu, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Patel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mohammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikhil Mehta, Nikolay Pavlovich Laptev, Ning Dong, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pedro Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Rangaprabhu Parthasarathy, Raymond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Russ Howes, Ruty Rinott, Sachin Mehta, Sachin Siby, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Mahajan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lindsay, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shishir Patil, Shiva Shankar, Shuqiang Zhang, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Summer Deng, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Koehler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wenwen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaojian Wu, Xiaolan Wang, Xilun Wu, Xinbo Gao, Yaniv Kleinman, 
Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yu Zhao, Yuchen Hao, Yundi Qian, Yunlu Li, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, Zhiwei Zhao, and Zhiyu Ma. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783. John Hewitt, Christopher D Manning, and Percy Liang. Truncation sampling as language model desmoothing. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 3414–3427, 2022. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. In International Conference on Learning Representations, 2020. David M Howcroft, Anya Belz, Miruna Clinciu, Dimitra Gkatzia, Sadid A Hasan, Saad Mahamood, Simon Mille, Emiel Van Miltenburg, Sashank Santhanam, and Verena Rieser. Twenty years of confusion in human evaluation: Nlg needs evaluation sheets and standardised definitions. In 13th International Conference on Natural Language Generation 2020, pp. 169–182. Association for Computational Linguistics, 2020. John Hughes, Sara Price, Aengus Lynch, Rylan Schaeffer, Fazl Barez, Sanmi Koyejo, Henry Sleight, Erik Jones, Ethan Perez, and Mrinank Sharma. Best-of-n jailbreaking, 2024. URL https: //arxiv.org/abs/2412.03556. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b, 2023. URL https://arxiv.org/ abs/2310.06825. Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Nicholas Lourie, Jungo Kasai, Yejin Choi, Noah A. Smith, and Daniel S. Weld. Genie: Toward reproducible and standardized human evaluation for text generation, 2022. URL https://arxiv.org/abs/2101.06561. 
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. Minh Nguyen, Andrew Baker, Clement Neo, Allen Roush, Andreas Kirsch, and Ravid Shwartz-Ziv. Turning up the heat: Min-p sampling for creative and coherent llm outputs. arXiv preprint arXiv:2407.01082, 2024. Qwen Team, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115. David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. Gpqa: A graduate-level google-proof q&a benchmark, 2023. URL https://arxiv.org/abs/2311.12022. Stephen Roller, Y-Lan Boureau, Jason Weston, Antoine Bordes, Emily Dinan, Angela Fan, David Gunning, Da Ju, Margaret Li, Spencer Poff, Pratik Ringshia, Kurt Shuster, Eric Michael Smith, Arthur Szlam, Jack Urbanek, and Mary Williamson. Open-domain conversational agents: Current progress, open problems, and future directions, 2020. URL https://arxiv.org/abs/2006.12442.
Rylan Schaeffer, Joshua Kazdan, John Hughes, Jordan Juravsky, Sara Price, Aengus Lynch, Erik Jones, Robert Kirk, Azalia Mirhoseini, and Sanmi Koyejo. How do large language monkeys get their power (laws)?, 2025a. URL https://arxiv.org/abs/2502.17578. Rylan Schaeffer, Punit Singh Koura, Binh Tang, Ranjan Subramanian, Aaditya K Singh, Todor Mihaylov, Prajjwal Bhargava, Lovish Madaan, Niladri S. Chatterji, Vedanuj Goswami, Sergey Edunov, Dieuwke Hupkes, Sanmi Koyejo, and Sharan Narang. Correlating and predicting human evaluations of language models from natural language processing benchmarks, 2025b. URL https://arxiv.org/abs/2502.18339. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in neural information processing systems, 33:3008–3021, 2020. Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Walton, Aliaksei Severyn, Alicia Parrish, Aliya Ahmad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock, Andy Coenen, Anthony Laforge, Antonia Paterson, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, Charlie Chen, Chintu Kumar, Chris Perry, Chris Welty, Christopher A.
Choquette-Choo, Danila Sinopalnikov, David Weinberger, Dimple Vijaykumar, Dominika Rogozińska, Dustin Herbison, Elisa Bandy, Emma Wang, Eric Noland, Erica Moreira, Evan Senter, Evgenii Eltyshev, Francesco Visin, Gabriel Rasskin, Gary Wei, Glenn Cameron, Gus Martins, Hadi Hashemi, Hanna Klimczak-Plucińska, Harleen Batra, Harsh Dhand, Ivan Nardini, Jacinda Mein, Jack Zhou, James Svensson, Jeff Stanway, Jetha Chan, Jin Peng Zhou, Joana Carrasqueira, Joana Iljazi, Jocelyn Becker, Joe Fernandez, Joost van Amersfoort, Josh Gordon, Josh Lipschultz, Josh Newlan, Ju yeong Ji, Kareem Mohamed, Kartikeya Badola, Kat Black, Katie Millican, Keelin McDonell, Kelvin Nguyen, Kiranbir Sodhia, Kish Greene, Lars Lowe Sjoesund, Lauren Usui, Laurent Sifre, Lena Heuermann, Leticia Lago, Lilly McNealus, Livio Baldini Soares, Logan Kilpatrick, Lucas Dixon, Luciano Martins, Machel Reid, Manvinder Singh, Mark Iverson, Martin Görner, Mat Velloso, Mateo Wirth, Matt Davidow, Matt Miller, Matthew Rahtz, Matthew Watson, Meg Risdal, Mehran Kazemi, Michael Moynihan, Ming Zhang, Minsuk Kahng, Minwoo Park, Mofi Rahman, Mohit Khatwani, Natalie Dao, Nenshad Bardoliwalla, Nesh Devanathan, Neta Dumai, Nilay Chauhan, Oscar Wahltinez, Pankil Botarda, Parker Barnes, Paul Barham, Paul Michel, Pengchong Jin, Petko Georgiev, Phil Culliton, Pradeep Kuppala, Ramona Comanescu, Ramona Merhej, Reena Jana, Reza Ardeshir Rokni, Rishabh Agarwal, Ryan Mullins, Samaneh Saadat, Sara Mc Carthy, Sarah Cogan, Sarah Perrin, Sébastien M. R. Arnold, Sebastian Krause, Shengyang Dai, Shruti Garg, Shruti Sheth, Sue Ronstrom, Susan Chan, Timothy Jordan, Ting Yu, Tom Eccles, Tom Hennigan, Tomas Kocisky, Tulsee Doshi, Vihan Jain, Vikas Yadav, Vilobh Meshram, Vishal Dharmadhikari, Warren Barkley, Wei Wei, Wenming Ye, Woohyun Han, Woosuk Kwon, Xiang Xu, Zhe Shen, Zhitao Gong, Zichuan Wei, Victor Cotruta, Phoebe Kirk, Anand Rao, Minh Giang, Ludovic Peran, Tris Warkentin, Eli Collins, Joelle Barral, Zoubin Ghahramani, Raia Hadsell, D.
Sculley, Jeanine Banks, Anca Dragan, Slav Petrov, Oriol Vinyals, Jeff Dean, Demis Hassabis, Koray Kavukcuoglu, Clement Farabet, Elena Buchatskaya, Sebastian Borgeaud, Noah Fiedel, Armand Joulin, Kathleen Kenealy, Robert Dadashi, and Alek Andreev. Gemma 2: Improving open language models at a practical size, 2024. URL https://arxiv.org/abs/2408.00118. Chris Van Der Lee, Albert Gatt, Emiel Van Miltenburg, Sander Wubben, and Emiel Krahmer. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, pp. 355–368, 2019. Joshua Vendrow, Edward Vendrow, Sara Beery, and Aleksander Madry. Do large language model benchmarks test reliability? arXiv preprint arXiv:2502.03461, 2025. Yi Xu, Laura Ruis, Tim Rocktäschel, and Robert Kirk. Investigating non-transitivity in llm-as-a-judge, 2025. URL https://arxiv.org/abs/2502.14074. Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, William Song, Tiffany Zhao, Pranav Raja, Charlotte Zhuang, Dylan Slack, et al. A careful examination of large language model performance on grade school arithmetic. Advances in Neural Information Processing Systems, 37:46819–46836, 2024. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023. # A Examples of Human Qualitative Responses Favoring Basic Sampling Over Min-P Sampling In Section 2.3, we described how qualitative responses from many human participants in the original paper’s study favored basic sampling. Direct quotes from human evaluators favoring basic sampling are provided below. In the study, basic sampling was called “Model A"; for clarity, we substitute “[basic sampling]" for “Model A" in the quotes below. • “[basic sampling] on Temp 3.0 - High Diversity setting.
The stories where [sic] more interenting [sic], felt more different compared to the others, which felt like the same ideia [sic] just in a different format.” • “I felt like [basic sampling] was most diverse and most interesting with it’s [sic] descriptions of the characters and the setting. It appealed to me most and seemed to have less ’broken’ sentences that didn’t make sense. Descriptions were painterly [sic] and elaborate.” • “[basic sampling] was more engaging, it aroused my curiosity.” • “[basic sampling] provided more depth and easy to read for me and there was more diversity.” • “[basic sampling], they presented creative storytelling” • “[basic sampling]. From the very beginning the verbiage and descriptions were very creative and vivid. And each story was unique” • “I believe that [basic sampling] has provided stories with more differentiation overall than the other two models. From the point of view of creativity, all three models are more or less equivalent as they almost always talk about stories set in extraterrestrial worlds both from a physical and mental (dreams) point of view" • “[Basic sampling]: Sample 2: Temperature Setting F (Temp 3.0 - High Diversity). The story was captivating, it took inside the mystical land and walked you right besides all the characters, you can even draw the characters from just th descriptions provided by the prompt. you Could even smell them, smell the setting and be at one with the setting." • “I personally preferred [basic sampling] on the setting of creative, descriptive storytelling. I enjoyed how the writing was creative, showing imagination and a strong use of language. The stories were quite evocative, with intriguing settings and characters that helped to draw the reader in. I also appreciated the diversity of themes that were explored, from night weavers to dream manipulation and mysterious libraries, which kept the stories engaging and interesting." • “Temporature setting C on [basic sampling] was the best. 
The story was fascinating and very engaging. I wanted to read more." • “I prefered the first [basic sampling]. Tho [basic sampling] and C seem to be very head to head. But something about [basic sampling] seemed different in quality about it to me." More quotes are in the original paper’s data. We urge readers to draw their own conclusions. # B GSM8K Chain-of-Thought Scores with “Standard" Formatting At the request of Nguyen et al. (2024), we reran our GSM8K Chain-of-Thought sweeps using “standard" formatting instead of “Llama" formatting. Both analyses reached consistent results: min-p does not consistently outperform other samplers when controlling for the volume of hyperparameter space. Figure 7: Min-P Does Not Consistently Outperform Other Samplers on GSM8K When Controlling For Hyperparameter Volume. We reran our GSM8K sweep using “standard" formatting rather than “Llama" formatting and observed qualitatively similar data. Figure 8: Min-P Does Not Consistently Outperform Other Samplers on GSM8K When Controlling For Hyperparameter Volume. We reran our GSM8K sweep using “standard" formatting rather than “Llama" formatting and observed qualitatively similar data.
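For context on what these sweeps vary: basic sampling only rescales logits by a temperature, while top-k, top-p, and min-p each truncate the next-token distribution before sampling, each exposing a single hyperparameter. The sketch below implements the three truncation rules as they are commonly described (min-p keeps tokens whose probability is at least a fixed fraction of the maximum probability); the example distribution and hyperparameter values are illustrative, not taken from either paper:

```python
import numpy as np

def truncate(probs, method, value):
    """Return the renormalized distribution after applying a truncation rule."""
    probs = np.asarray(probs, dtype=float)
    if method == "top_k":
        # Keep the `value` most probable tokens
        keep = np.zeros_like(probs, dtype=bool)
        keep[np.argsort(probs)[-int(value):]] = True
    elif method == "top_p":
        # Keep the smallest set of tokens whose cumulative mass reaches `value`
        order = np.argsort(probs)[::-1]
        cutoff = np.searchsorted(np.cumsum(probs[order]), value) + 1
        keep = np.zeros_like(probs, dtype=bool)
        keep[order[:cutoff]] = True
    elif method == "min_p":
        # Keep tokens with probability >= value * (max probability)
        keep = probs >= value * probs.max()
    else:
        raise ValueError(method)
    out = np.where(keep, probs, 0.0)
    return out / out.sum()

probs = [0.5, 0.3, 0.1, 0.06, 0.04]
print(truncate(probs, "top_k", 2))    # only the two most probable tokens survive
print(truncate(probs, "min_p", 0.1))  # tokens below 0.05 (= 0.1 * 0.5) are dropped
```

Sweeping each rule's single hyperparameter over a grid of values is what the hyperparameter-volume control in this appendix refers to.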
# C GSM8K Scores By Model, Sampler and Hyperparameters

[Figure: GSM8K exact match (strict) scores as a function of sampler hyperparameter value, with one panel per model: Qwen 2.5 0.5B/1.5B/3B/7B, Mistral 7B v0.1, Llama 3.2 3B, Llama 3.1 8B, and Gemma 2 2B/9B, each in base and instruct variants.]
Sampling from language models impacts the quality and diversity of outputs, affecting both research and real-world applications. Recently, Nguyen et al. 2024's "Turning Up the Heat: Min-p Sampling for Creative and Coherent LLM Outputs" introduced a new sampler called min-p, claiming it achieves superior quality and diversity over established samplers such as basic, top-k, and top-p sampling. The significance of these claims was underscored by the paper's recognition as the 18th highest-scoring submission to ICLR 2025 and selection for an Oral presentation. This paper conducts a comprehensive re-examination of the evidence supporting min-p and reaches different conclusions from the original paper's four lines of evidence. First, the original paper's human evaluations omitted data, conducted statistical tests incorrectly, and described qualitative feedback inaccurately; our reanalysis demonstrates min-p did not outperform baselines in quality, diversity, or a trade-off between quality and diversity; in response to our findings, the authors of the original paper conducted a new human evaluation using a different implementation, task, and rubric that nevertheless provides further evidence min-p does not improve over baselines. Second, comprehensively sweeping the original paper's NLP benchmarks reveals min-p does not surpass baselines when controlling for the number of hyperparameters. Third, the original paper's LLM-as-a-Judge evaluations lack methodological clarity and appear inconsistently reported. Fourth, community adoption claims (49k GitHub repositories, 1.1M GitHub stars) were found to be unsubstantiated, leading to their removal; the revised adoption claim remains misleading. We conclude that evidence presented in the original paper fails to support claims that min-p improves quality, diversity, or a trade-off between quality and diversity.
# 1 Introduction

Time series forecasting has traditionally relied on historical patterns and temporal dependencies to predict future values. However, in complex real-world applications such as electrical consumption prediction, the incorporation of external factors has proven crucial for improving forecast accuracy [19]. These exogenous variables provide additional context that can significantly influence consumption patterns beyond what historical data alone can reveal. In the specific case of electrical load forecasting, numerous studies have demonstrated that consumption patterns are heavily influenced by external factors such as weather conditions, calendar effects, and socio-economic indicators [8]. Temperature, in particular, has been shown to have a strong relationship with electricity demand, as heating and cooling needs vary significantly with ambient temperature [2]. Additionally, calendar variables including holidays, weekends, and seasonal patterns have been shown to capture regular variations in consumption behavior effectively [19][25]. These behaviors, however, are household-specific: for example, a household using electric heating has consumption that is more sensitive to cold temperatures than a household relying on gas. This poses a challenge for global forecasting models, which must capture these household-specific behaviors when predicting consumption. In order to forecast consumption, two strategies can be distinguished:

– Global model: A unified model trained on aggregated data across the entire consumer population. This centralized approach facilitates comprehensive pattern recognition across diverse consumption behaviors, enhancing generalization capabilities while minimizing computational infrastructure requirements. Furthermore, recent architectural innovations specifically address multi-channel time series [11][23].

– Individual models: A dedicated model trained for each consumer entity.
These specialized models capture household-specific consumption patterns with high fidelity. While traditionally resource-intensive in terms of computation and storage, recent advances in federated learning mitigate these constraints [17], though hardware limitations for on-device machine learning deployment remain significant. To compare these two paradigms, we assess them on real-world data provided by an industrial partner, containing the consumption of more than 6,000 households over two years together with corresponding external factors, ranging from weather data to football$^1$ events. As our results later demonstrate, although incorporating external factors as features should in theory enhance performance, it leads to overall performance degradation in global models. Conversely, individual models excel at mapping external factors to consumer-specific responses, but introduce substantial computational and storage overhead that scales linearly with the consumer population. In particular, this approach fails to capitalize on the substantial behavioral similarities across consumers. Since many households share comparable consumption patterns [20], training completely separate models results in significant parameter redundancy, as each individual model essentially learns the same forecasting task (electricity consumption) with variations to accommodate specific consumer profiles. This redundancy wastes computational resources and misses opportunities for knowledge sharing across similar consumer segments. To bridge the gap between the efficiency of global models and the precision of individual models, hypernetworks offer a promising architectural paradigm, illustrated in Figure 1. Hypernetworks [7] are meta-models designed to generate the weights of a primary task network conditioned on specific inputs. In our context, a hypernetwork can dynamically produce customized parameters for each consumer based on their unique embedding and current situation.
This approach maintains the personalization advantages of individual models while dramatically reducing the parameter space compared to maintaining thousands of separate forecasting models. Fig. 1. Difference between global and individual models, and the proposed in-between solution using hypernetworks. In this paper, we introduce a novel approach using hypernetworks and consumer-specific embeddings that enable global models to differentiate between individual households. These compact embeddings require minimal storage compared to full individual model parameters while preserving household-specific information. Our experimental results demonstrate that the hypernetwork architecture is the only one in the tested benchmark that leverages external factors to reduce forecasting error, ultimately achieving the lowest error and beating state-of-the-art models by up to $16\%$, whereas conventional approaches suffer performance degradation from these factors. This improvement enables more accurate, individualized forecasting within a computationally efficient framework.

# 2 Background

Time series forecasting has evolved from classical statistical methods to advanced deep learning architectures. Traditional approaches like ARIMA [16] rely on temporal dependencies within univariate series but struggle to incorporate exogenous variables effectively. More recently, neural network-based models have demonstrated significant improvements in handling complex time series tasks. Transformer-based architectures [18] have been adapted for time series forecasting, with models like Informer [26] addressing the quadratic complexity limitations of vanilla transformers. N-HiTS [1] extends the interpretable N-BEATS framework [15] by introducing hierarchical interpolation and multi-rate data processing for improved performance across multiple horizons.
When it comes to electricity load forecasting, a critical challenge is to effectively incorporate multiple information channels, including historical consumption and various exogenous factors. Recent architectures specifically target this multivariate challenge: iTransformer [11] inverts the traditional approach by treating individual features as tokens and timestamps as channels. PatchTST [13] applies patching strategies to decompose time series into subseries, enabling more robust feature extraction. Lately, CARD [23] introduced channel attention mechanisms that dynamically weight the importance of different input variables. These models, however, still have to process the input time series to infer the consumer’s profile, which can differ greatly from one time series to another. Additionally, recognizing consumer profiles may require longer input time series (e.g., to analyze behavior during vacations). Mixture of Experts (MoE) models [9] offer another approach to handling heterogeneous patterns in time series data. These architectures dynamically route inputs to specialized subnetworks, allowing each expert to specialize in a subset of behaviors. Mixture of Linear Experts (MoLE) [12] extends this concept by creating embeddings that represent input characteristics, further improving adaptability to diverse time series behaviors. Hypernetworks [7] represent a powerful paradigm where one network generates the weights for another. In the time series domain, this approach has shown particular promise for addressing distribution shifts [3] and has been applied to implicit neural representations, as demonstrated in HyperTime [4].
Hypernetworks are especially relevant for our work as they can efficiently generate consumer-specific parameters from compact embeddings, potentially capturing individual household behaviors without requiring separate models for each consumer. In the context of electricity load forecasting, these architectural innovations offer promising directions for improving prediction accuracy while maintaining computational efficiency. Our work builds upon these foundations to address the specific challenge of capturing consumer-specific responses to exogenous factors.

# 3 Hypernetworks for Time Series Forecasting

# 3.1 Problem Formulation

We address the task of forecasting electrical consumption time series for a diverse set of consumers while incorporating various external factors. Let $\mathcal{X} = \{x_1, x_2, \ldots, x_N\}$ represent the set of $N$ consumer entities, each with its own hourly electrical consumption time series. For each consumer $x_i$, we denote its consumption at time $t$ as $x_{i,t} \in \mathbb{R}$. Additionally, we have a set of numerical external factors $\varPhi = \{\phi_1, \phi_2, \ldots, \phi_k\}$ (additional time series, such as temperature) and categorical external factors $\mathcal{C} = \{c_1, c_2, \ldots, c_m\}$. Our objective is to predict the future consumption values $y_{i,t:t+h} := x_{i,t+L:t+L+h}$ over a horizon $h$ for every consumer $i$, given the historical consumption $x_{i,t:t+L}$ of input length $L$ and the external factors $\phi_{t:t+L}$ and $\mathcal{C}_{t:t+L}$.

# 3.2 Model Architecture

Our proposed architecture consists of three main components: (1) an embedding layer for categorical variables, (2) a hypernetwork that generates consumer-specific weights, and (3) a linear forecasting model that uses these consumer-specific weights.
The hypernetwork itself can be seen as a weight generator that outputs the matrices of the linear model, and it essentially shares the same architecture as an image decoder [14]. An overview of the pipeline is illustrated in Figure 2. Fig. 2. Overview of the hypernetwork pipeline Embedding Representation for Categorical Variables. For each categorical external factor $c_j \in \mathcal{C}$, we learn a dense embedding representation:
$$ \mathbf{e}_j = \operatorname{Embed}(c_j) \in \mathbb{R}^{d_j} $$
where $d_j$ is the embedding dimension for factor $j$. Specifically, when categorical features are related and complementary, we sum their embeddings as follows:
$$ \mathbf{e}_{\mathrm{event}} = \begin{cases} \mathbf{e}_{\mathrm{no\ event}}, & \text{if } c_{\mathrm{event}_k} = 0 \text{ for all } k \\ \sum_{k \in \{k \mid c_{\mathrm{event}_k} = 1\}} \mathbf{e}_{\mathrm{event}_k}, & \text{otherwise} \end{cases} $$
All categorical embeddings are reshaped to matrices of size $(p, q)$ and stacked together to form the hypernetwork input, as shown in Figure 3. The resulting input tensor is denoted $\mathbf{z}_{i,t}$. The output matrices predicted by the hypernetwork have dimensions proportional to those of the inputs, of shape $(p \times u, q \times u)$, where $u$ is the upscaling factor. Fig. 3. Illustration of example embeddings. Each consumer ID and other known categorical features are transformed to embeddings, which are reshaped and stacked together to form the hypernetwork input. Forecasting Mechanism.
The hypernetwork $H_\theta$ with parameters $\theta$ takes the concatenated features $\mathbf{z}_{i,t}$ and generates the weights for a consumer-specific linear forecasting model: $$ \mathbf{W}_{i,t} = H_\theta(\mathbf{z}_{i,t}) \in \mathbb{R}^{L \times h \times p} $$ where $p = k + 1$ is the input dimension of the linear model, corresponding to the number of input time series; $L$ is the input length, and $h$ is the forecast horizon. The consumer-specific weights $\mathbf{W}_{i,t}$ are then used in a linear model to produce the final forecasts. For each consumer $i$ at time $t$, the input to the linear model includes both historical consumption values $x_{i,t:t+L} \in \mathbb{R}^L$ and the numerical external factors $\Phi_{t:t+L} \in \mathbb{R}^{k \times L}$. The forecast for the next $h$ time steps is then computed as: $$ \hat{\mathbf{y}}_{i,t:t+h} = \mathbf{W}_{i,t} \cdot \begin{pmatrix} x_{i,t:t+L} \\ \phi_{1,t:t+L} \\ \vdots \\ \phi_{k,t:t+L} \end{pmatrix} $$ Loss Function and Optimization. We jointly optimize the hypernetwork parameters $\theta$ along with all categorical feature embeddings $e_i$ by minimizing the Mean Squared Error (MSE) between predictions and ground truth: $$ \min_{\theta, \{e_i\}} \sum_{i=1}^{N} \sum_{t \in \mathcal{T}} \| \hat{\mathbf{y}}_{i,t:t+h} - \mathbf{y}_{i,t:t+h} \|^2 $$ where $N$ is the number of consumers, $\mathcal{T}$ is the set of time points in the training data, $\hat{\mathbf{y}}_{i,t:t+h}$ represents the predicted values, and $\mathbf{y}_{i,t:t+h}$ represents the ground truth values.
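The two-stage forward pass can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: we stand in for the decoder-style hypernetwork with a single random linear map, and we read the stated $L \times h \times p$ weight tensor as a flattened $(h, p \cdot L)$ matrix, which is one possible interpretation of the forecast equation above:

```python
import numpy as np

rng = np.random.default_rng(1)
L, h, k = 336, 168, 3
p = k + 1                                  # consumption + k numerical factors
z_dim = 49                                 # embedding size

# Stand-in hypernetwork H_theta: a random linear map from z to the flattened
# weight tensor (the paper uses a decoder-style network instead).
H = rng.normal(size=(h * p * L, z_dim)) * 0.01

def forecast(z, hist, exog):
    W = (H @ z).reshape(h, p * L)          # consumer-specific weights W_{i,t}
    inp = np.concatenate([hist[None, :], exog]).ravel()  # stack x and phi
    return W @ inp                         # hat{y}_{i, t:t+h}

y_hat = forecast(rng.normal(size=z_dim),
                 rng.normal(size=L),
                 rng.normal(size=(k, L)))
print(y_hat.shape)  # (168,)
```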
It is worth noting that unlike traditional neural networks where weights are directly optimized, in our approach the hypernetwork parameters $\theta$ are optimized such that they can generate effective consumer-specific weights $\mathbf{W}_{i,t}$ for the linear forecasting model. This approach allows the model to dynamically adapt to different consumers' consumption patterns while leveraging shared knowledge across the entire consumer base. # 3.3 Experimental setup Dataset. We collected a dataset comprising hourly data over a period of two years (2020 and 2021): – Numerical features: • $x$: Consumption data (kWh) for $N = 6{,}010$ households and businesses in Luxembourg, provided by the national grid operator; • $\phi_{\mathrm{temp}}$, $\phi_{\mathrm{hum}}$, $\phi_{\mathrm{wind}}$, $\phi_{\mathrm{sun}}$: Weather indicators: temperature (°C), humidity (%), wind speed (km/h), sunlight (minutes of sun within one hour); – Categorical features: • $i$: Consumer ID, ranging from 0 to 6,009; • $c_{\mathrm{hour}}$, $c_{\mathrm{dw}}$, $c_{\mathrm{dm}}$, $c_{\mathrm{month}}$: Timestamp data: hour of day (24 values), day of week (7), day of month (31), month of year (12); • $c_{\mathrm{sh}}$, $c_{\mathrm{ph}}$: School holiday indicator (boolean), public holiday indicator (boolean); • $c_{\mathrm{team}_1}, \ldots, c_{\mathrm{team}_5}$: 5 booleans indicating whether Luxembourg, Germany, France, Belgium or Portugal will be playing on the current day, these being relevant teams for the studied region. As is usual for electric load forecasting [6], we set a forecast horizon of 1 week ($h = 168$) from an input length of 2 weeks ($L = 336$).
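The team booleans above are exactly the kind of related, complementary features handled by the embedding-sum rule of Section 3.2. A small sketch of that rule (illustrative values and shapes; the actual embeddings are learned):

```python
import numpy as np

rng = np.random.default_rng(0)
n_events, d = 5, 49                          # 5 team booleans, 7x7 = 49-dim embeddings
event_emb = rng.normal(size=(n_events, d))   # e_{event_k}, one row per event
no_event_emb = rng.normal(size=d)            # e_{no event}

def event_embedding(flags):
    """Sum the embeddings of the active events; fall back to the 'no event' vector."""
    active = [k for k, f in enumerate(flags) if f]
    if not active:
        return no_event_emb
    return event_emb[active].sum(axis=0)

# Reshape to (p, q) = (7, 7) matrices and stack as hypernetwork input channels.
e = event_embedding([1, 0, 1, 0, 0])         # say, teams 1 and 3 are playing
z = np.stack([e.reshape(7, 7), no_event_emb.reshape(7, 7)])
print(z.shape)  # (2, 7, 7)
```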
We compare results with and without the inclusion of external factors, and run further experiments where only the consumer ID is provided in addition to electrical consumption. The dataset is partitioned chronologically into train/validation/test sets with standard 70%/10%/20% ratios following established time series forecasting protocols [22]. We preprocess the data by standardizing consumption values, temperature, and wind speed, while applying min-max normalization to humidity and sunlight variables, as these represent naturally bounded quantities. Hyperparameters. We set the upscaling factor $u$ to 24, which matches the daily seasonality of electricity consumption. Given this factor and the required sizes of the output matrices ($336 \times 168$), the inputs must be of size $14 \times 7$. To achieve this, each embedding is a 49-dimensional vector reshaped to a $7 \times 7$ matrix, and two such matrices are concatenated to form the $14 \times 7$ input. One reason for this choice is the flexibility this concatenation offers: one could easily change the number of weeks in the input length or forecast horizon by using shapes of $7a \times 7b$. For consumer IDs, we allocate twice the embedding capacity ($7 \times 7 \times 2$) to capture the more complex behavioral patterns associated with individual users. These embedding tensors are concatenated along the channel dimension before being processed by the model through four residual blocks, ultimately generating weight matrices of dimension $336 \times 168$ that map input sequences to forecast horizons. Experiments are repeated 10 times to reduce randomness effects. Baseline models. One natural additional solution to experiment with is Mixture of Linear Experts (MoLE) [12], as they demonstrate strong performance in general time series forecasting. In particular, each expert can specialize in specific groups of consumers, and embeddings can simply be used to assign expert importance.
We consider three MoLE variants, MoLE_DLinear, MoLE_RLinear and MoLE_RMLP, the latter using two-layer dense expert models. 16 experts are used, as this setting yielded the strong performance reported in [12]. When using categorical features, we use the same embeddings as for hypernetworks, which are then linearly mapped to a probability distribution vector that assigns expert importance. Baseline models also include state-of-the-art forecasting models with a focus on multi-channel processing: iTransformer (2024 [11]), CARD (2024 [23]), NHits (2023 [1]), PatchTST (2022 [13]), RLinear (2022 [10]). For completeness, we include ARIMA as a classical statistical baseline which, despite its computational complexity, often provides competitive performance for structured time series forecasting tasks. Since the baseline models are designed for continuous multivariate time series, we adapt categorical features for fair comparison. For most categorical variables, we employ one-hot encoding to create additional binary channels. However, for the high-cardinality consumer ID feature, this approach would create an impractical number of channels. Instead, we learn low-dimensional embeddings for consumer IDs and repeat these embeddings across the temporal dimension, maintaining consistent representation while controlling dimensionality. The code is available on GitHub. Finally, we compare these results with individual RLinear models trained for every individual consumer (not predicted by the hypernetwork, in contrast with the global models cited above). Infrastructure. We use a Quadro RTX 8000 49GB GPU for all the experiments. # 4 Results Table 1. MSE and MAE values for different models and datasets.
Models denoted with an asterisk \* are not meant to handle categorical features: the consumer's ID embedding is provided as additional time series channels. Our experimental results demonstrate several key findings regarding the performance of various time series forecasting models, as shown in Table 1. The comparison across different input configurations yields important insights for model selection and deployment in real-world scenarios. The standard error is always $< 10^{-4}$ in the table, with two minor exceptions. More detail is provided in the appendix. # 4.1 Impact of External Factors Perhaps the most surprising finding is that incorporating external factors generally degrades model performance across almost all architectures. This contradicts the common assumption that additional information should improve predictive accuracy. Only individual models and our hypernetwork approach exhibit improved performance when leveraging external factors, with decreases in both MSE and MAE compared to using consumer ID only or no external factors. This exceptional behavior of hypernetworks suggests they possess a unique ability to effectively filter and use external information without introducing additional noise or complexity that harms prediction accuracy. The architecture's approach to handling multiple input channels appears fundamentally more effective than competing methods. Consumer ID Embeddings. The performance when using only consumer ID embeddings as additional channels provides insights into how different models handle the introduction of this information. The MoLE models are the only global ones to improve forecasting quality when the consumer ID is provided; they are, however, together with the hypernetworks, the only models designed to handle this specific input. Models not explicitly designed for this purpose consistently show a small degradation in performance.
Despite not being optimized for categorical features, Transformer models still perform reasonably well in this scenario. # 4.2 Performance Across Model Architectures The hypernetwork architecture exhibits superior performance compared to other global models by successfully imitating the individual-models approach and approaching its final performance, achieving the second-lowest MSE (0.1734) and MAE (0.1805) when incorporating external factors. This represents a notable improvement over traditional approaches and even other deep learning models. CARD and NHits follow closely behind, with NHits demonstrating particularly strong performance (MSE: 0.1763, MAE: 0.1854), making it a viable alternative when no external factors are available. Interestingly, the classical ARIMA model (MSE: 0.1780, MAE: 0.1893) remains competitive despite being significantly less complex than the deep learning approaches. This suggests that for certain forecasting tasks, traditional statistical methods should not be dismissed outright. # 4.3 Cost Training time. Our hypernetwork approach achieves a favorable trade-off between computational resources and prediction accuracy. While generating consumer-specific weights introduces additional computational overhead during training compared to global models, this cost is substantially lower than training individual models for each consumer. Specifically, our approach reduces training time by 7 hours (approximately 70%) compared to individualized RLinear models. Memory. The memory efficiency of our approach is particularly notable. The consumer embeddings require only 589K parameters (2.4MB), whereas individual linear models for all 6,010 consumers demand 3.392 billion parameters. This represents a parameter reduction factor of over 5,700$\times$. Extrapolating to a real-world deployment with 1 million consumers, our approach would require only megabytes of storage compared to approximately 2.3TB for individual models.
This dramatic reduction in model size not only decreases storage requirements but also eliminates the significant I/O overhead that would occur when loading individual models from disk during inference, a practical consideration not captured in our GPU-only timing experiments. Fig. 4. Comparison of models' training time vs. the best resulting MSE. Bubble sizes reflect the number of weights. For our approach, we distinguish the size of the hypernetwork itself from the size including all 6,010 consumer embeddings. The size for individual RLinear models (bottom right) is not shown as it would fill the entire figure. # 4.4 Generalizing consumer embeddings As consumers might evolve over time, with new ones arriving and others leaving, embeddings often need to be updated. This can be achieved by optimizing the embeddings to reduce the final forecasting error. One advantage of this method is that the task can easily be parallelized, and the hypernetwork model itself does not necessarily need to be retrained. Figure 5 reveals that our hypernetwork approach, when trained on merely 8% of the consumer base (500 out of 6,010 consumers), outperforms competing models across the entire dataset, once the remaining consumers' embeddings are optimized after training. This adaptive capability presents a significant advantage in dynamic real-world settings where consumer populations continually evolve, as the model maintains strong predictive performance while requiring minimal retraining. Fig. 5. Performance evaluated on the full dataset when only a portion of the 6,010 consumers is used to train models. # 4.5 Ablation studies Inclusion of categorical features. As several models are not explicitly designed to handle categorical features, it is important to verify that these models are not penalized by such inclusions.
Results in Table 2 show that categorical features have no significant overall impact on their performance, with MSE varying by at most $5 \times 10^{-4}$ and MAE by at most $7 \times 10^{-4}$. Some models even show marginal improvements with categorical features (e.g., NHits exhibits lower MSE and MAE with categorical features included). This stability might suggest that the performance degradation observed in Table 1 is predominantly attributable to numerical features rather than categorical ones. Importance of different external factors. Table 3 demonstrates the significant contribution of each external factor to model performance. The experiment incorporating all external factors achieves the lowest error, while removing any category of factors leads to performance degradation. Temporal indicators emerge as the most critical component, with their removal causing the largest increase in error, followed by weather indicators and, perhaps more interestingly, football events. We also observe that including more external factors systematically decreases the standard error, making the performance less uncertain. Overall, these results quantitatively validate the hypernetwork's capacity to effectively integrate diverse external signals, capturing complex interdependencies between seemingly disparate factors and the target variable. Table 2. Performance ($\pm$ standard error) with and without categorical features, for models that only handle numerical time series as inputs (asterisked in Table 1). Table 3. MSE and MAE values ($\pm$ standard error) for the hypernetwork model with and without groups of external factors. # 5 Discussion Mixed role of external factors. The findings from our study challenge the prevailing assumption that integrating more external factors naturally enhances forecasting accuracy. Our results indicate that, for most models, the inclusion of additional external factors often leads to performance degradation.
This suggests that the signal-to-noise ratio introduced by these external factors may not always be beneficial, highlighting the complexity involved in effectively leveraging such data. Linearity of the forecasting process. While the final forecast is inherently linear and may not capture complex patterns directly [24], the linear weights themselves are dynamically generated by the hypernetwork, which is nonlinear. This allows the linear model to adapt to more complex situations by tailoring weights to individual consumer behaviors, effectively making the final forecast nonlinear with respect to the input embeddings. Adaptability. Hypernetworks present a notable exception by using external information without compromising performance, showcasing their capability to adapt to the varying significance of different input channels. New consumer embeddings can effectively be added over time to adapt to demand evolution, which makes this solution suitable for real-world scenarios. Encoders could be used in the future as a more efficient alternative to gradient descent for optimizing these new embeddings. Future work. Long time series embedding models [5][21] could be used to create consumer embeddings optimized to serve as the hypernetwork's input. This would allow even faster profile embedding without having to apply gradient descent. More complex models than simple linear models could also be considered for the hypernetwork's output. As already suggested by the MoLE results, adding simple layers to the output model could potentially increase performance.
Accurate electrical consumption forecasting is crucial for efficient energy management and resource allocation. While traditional time series forecasting relies on historical patterns and temporal dependencies, incorporating external factors -- such as weather indicators -- has shown significant potential for improving prediction accuracy in complex real-world applications. However, the inclusion of these additional features often degrades the performance of global predictive models trained on entire populations, despite improving individual household-level models. To address this challenge, we show that a hypernetwork architecture can effectively leverage external factors to enhance the accuracy of global electrical consumption forecasting models, by specifically adjusting the model weights to each consumer. We collected a comprehensive dataset spanning two years, comprising consumption data from over 6,000 Luxembourgish households and corresponding external factors such as weather indicators, holidays, and major local events. By comparing various forecasting models, we demonstrate that a hypernetwork approach outperforms existing methods when combined with external factors, reducing forecasting errors and achieving the best accuracy while maintaining the benefits of a global model.
[ "cs.LG", "cs.AI" ]
# Introduction Multiple Sclerosis (MS) is a chronic autoimmune disorder that affects approximately 2.8 million individuals worldwide, making it one of the most prevalent neurological diseases among young adults (Sadeghibakhi et al., 2022). Machine learning (ML), particularly convolutional neural networks (CNNs), has demonstrated significant success in automating the detection of brain pathologies (Aliev et al., 2021; Rondinella et al., 2023). However, challenges persist in model robustness and the effective integration of lesion-specific knowledge into deep learning frameworks. Radiomics holds promise to address these challenges and enhance ML-based diagnostics by extracting high-dimensional quantitative features from medical images. These features quantify structural heterogeneity and cover characteristics such as texture, shape, and intensity patterns. Recently, radiomics was successfully applied to detect mild cognitive impairment (MCI) by identifying volumetric changes in brain structures (Zubrikhina et al., 2023). Similarly, our prior work demonstrated that combining radiomics with neural networks improves focal cortical dysplasia detection by capturing subtle pathological features often missed in visual assessments (Alsahanova et al., 2025). Attention-augmented U-Nets have achieved notable performance in MS lesion segmentation and are therefore considered state-of-the-art (SOTA) approaches (Rondinella et al., 2023). The authors validated SOTA performance through extensive experiments, comparing their model against other leading methods on benchmark datasets such as the MICCAI MS lesion segmentation challenge data. Their results demonstrated superior segmentation accuracy (e.g., higher Dice scores), confirming the SOTA status of their model. Still, the search for more accurate and robust models remains relevant. The demand for accurate automatic delineation of MS lesions stems from radiologists' need for a tool to assess neuroinflammatory brain changes efficiently.
Once developed, this tool will have broad practical implications, supporting the application of MS diagnostic criteria by objectively demonstrating the dissemination of white matter lesions in time and space. High-accuracy quantitative MRI assessment is critical, as the Magnetic Resonance Imaging in Multiple Sclerosis (MAGNIMS) criteria are non-specific, underscoring the need for follow-up examinations with longitudinal comparison of imaging findings. # Aims and Objectives This study aims to improve the accuracy and robustness of MS lesion segmentation in MRI scans by combining data fusion and deep learning techniques. Our hypothesis is that fusing imaging data with radiomic features retrieved from the same images boosts the performance of models trained to segment multiple sclerosis lesions from FLAIR images. To reach the study objective, we formulated the following tasks: • Develop radiomic features for data fusion and compare radiomics values for different types of MS lesions • Assess the efficiency of ML-based segmentation before and after data fusion • Test whether fusing radiomics with raw images improves the stability and performance of a recent SOTA model for lesion segmentation. # Materials and Methods # Study Cohort and Data Preprocessing The study dataset included brain MRI findings of 46 patients treated for MS at Tawam Hospital, Al Ain city. From the PACS server, our team collected diagnostic images of the FLAIR sequence, the primary diagnostic modality for detection and delineation of white matter lesions in MS. The images had isotropic voxels with size ranging from 0.8 to 1 mm. Figure 1 shows FLAIR slices with white matter lesion segmentations. The models were trained and tested on the labeled MS lesion dataset containing 1102 slices. Prior to the extraction of radiomic features, we performed intensity normalisation.
# Research Methodology To complete the first task, we used the same radiomic features as in our recent study on focal cortical dysplasia. Concentration rate (CR) was the most promising feature to use because it captures local hyperintensities while being robust to extreme outliers by excluding the highest values. Equation 1 describes how to compute CR for a pixel at position $(i, j)$ with a scanning window $W_s(i,j)$ of size $(2s+1) \times (2s+1)$, where $s = 2$. In the equation, num is the number of high-intensity pixels to sum, $m$ is the number of highest pixels to exclude to avoid outliers, and $X_{(k)}$ is the $k$-th order statistic of the gray scales of $X$ in the scanning window $W_s(i,j)$. Herein, $N$ is the total number of pixels in the neighborhood scanning window, computed according to Equation 2. Figure 1: Examples of FLAIR slices and segmentations. Rényi entropy (RE) was a top-performing feature for discovering hyperintense lesions in the white and gray matter in our previous research, which provided the rationale to apply RE in the current study (Alsahanova et al., 2025). RE of order $\alpha > 0$, $\alpha \neq 1$ is a measure of “disorder” of gray scales (Rényi, 1960, 1961). To calculate it, we used the standard gray-level co-occurrence matrix (GLCM), which records how often pairs of voxels with specific intensities appear at a given distance and direction in the image (Chitalia and Kontos, 2019; Suresh and Shunmuganathan, 2012). RE is calculated over the elements $f_{k,l}$ of the $256^2$ GLCM matrix as per Equation 3. We computed GLCMs for distances of 1 and 2 pixels across 4 primary directions within the scanning window $W_s(i,j)$ (Eichkitz et al., 2013). Then, summation of the GLCM matrices produced a combined matrix.
$$ \mathrm{CR}(i,j) = \sum_{k=N-\mathrm{num}-m+1}^{N-m} X_{(k)} $$ $$ N = (2s+1)^2 $$ $$ H_\alpha(i,j) = (1-\alpha)^{-1} \log \left( \sum_{k,l=0}^{255} f_{k,l}^{\alpha} \right) $$ To compute CR, we used the parameters $s = 2$, $num = 15$, and $m = 5$ in Equations 1 and 2. For Equation 3, we set $s = 5$ and $\alpha = 7$ to calculate RE. Working on the second task, our research team trained a CNN model to segment MS lesions in FLAIR images of the brain. As input to the model, we used either raw FLAIR images or their combination with the extracted radiomic features. The performance metrics were Dice score, precision and sensitivity. The architecture of the CNN model is described in the next subsection. To accomplish the third task, we implemented an attention-augmented U-Net model in our study. According to Rondinella et al., 2023, this model reached SOTA performance in the MS lesion segmentation task, validated through extensive experiments against other leading methods on benchmark datasets such as the MICCAI MS lesion segmentation challenge data. To make it suitable for our research, we slightly modified the architecture by removing the LSTM block from the bottleneck of the U-Net, since our dataset consists of non-adjacent MRI slices, making the block redundant. The model was then trained on raw FLAIR images and on their combination with radiomic features. Cross-validation was performed using the same metrics as for the previous task to assess the results. To assess the stability of model training, we calculated the standard deviation of derivatives (SDD), which reflects the smoothness of a validation curve (see Equation 4).
In this formula, $N$ is the number of validations performed during training and $d _ { i }$ represents the change in the metric between consecutive validations (see Equation 5). $$ S D D = \sqrt { \frac { 1 } { N } \sum _ { i = 1 } ^ { N } ( d _ { i } - \overline { { d } } ) ^ { 2 } } $$ $$ d _ { i } = \nu a l \_ s c o r e _ { i } - \nu a l \_ s c o r e _ { i - 1 } $$ # Model Architecture The CNN model in the second part of our study followed a ResNeXt-UNet architecture for semantic segmentation. The architecture combines the robust feature extraction capabilities of ResNeXt-50 (Xie et al., 2017) with the segmentation ability of the U-Net encoder-decoder framework. Recently, this combination demonstrated high segmentation quality in detecting brain tumors (Rai et al., 2021). The network consists of three main components: an encoder backbone, a decoder pathway, and skip connections for multi-scale feature fusion. The encoder utilizes a ResNeXt-50 backbone pre-trained on ImageNet, providing strong transferable feature representations. ResNeXt-50 employs grouped convolutions with cardinality of 32 and base width of 4, offering improved representational capacity compared to standard ResNet architectures while maintaining computational efficiency. The decoder pathway consists of four upsampling blocks (DecoderBlock) that progressively reconstruct the spatial resolution while reducing feature dimensionality. The final classification head consists of two convolutional layers: a $3 { \times } 3$ convolution reducing channels from 256 to 128, followed by a final $3 { \times } 3$ convolution mapping to the desired number of output classes. The network outputs raw logits for each pixel, with subsequent softmax activation applied during inference. All convolutional layers except the final output layer are followed by ReLU activation functions. 
The attention-augmented U-Net model employs a Tiramisu-style (Jegou et al., 2017) Fully-Convolutional DenseNet that keeps the classic U-Net encoder–decoder layout: five down-sampling stages, a bottleneck, and five mirrored up-sampling stages linked by skip connections. Squeeze-and-Attention blocks (Zhong et al., 2020) follow every Dense Block in both encoder and decoder. # Training Settings Both models were evaluated using six-fold cross-validation (8 subjects per fold for folds 1-4 and 7 subjects per fold for folds 5-6). The CNN model was trained using a combined loss function with equal weighting ($\gamma = 0.5$) between Dice loss and binary cross-entropy loss. Optimization was performed with AdamW, configured with an initial learning rate of $1 \cdot 10^{-4}$ and a weight decay of $1 \cdot 10^{-5}$. To improve convergence, a learning rate scheduling strategy was implemented. This began with a 100-iteration warmup phase, during which the learning rate linearly increased from $1 \cdot 10^{-6}$ to the base value. After warmup, cosine annealing was applied across subsequent epochs, gradually reducing the learning rate to a minimum of $5 \cdot 10^{-6}$. This approach stabilized training dynamics and enhanced final model performance. Input data were prepared by extracting 2D slices from 3D FLAIR volumes, resizing each slice to $256 \times 256$ pixels. Intensity normalization was applied per slice using min-max scaling to standardize input ranges. During training, data augmentation included random horizontal flipping with a 50% probability. The model for each fold was trained for 50 epochs using a batch size of 32. During inference, FLAIR images underwent identical preprocessing as during training. The model outputs were processed with a sigmoid activation function before thresholding, and predicted masks were resized to original dimensions using nearest-neighbor interpolation to maintain sharp lesion boundaries.
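The CNN's combined loss and its warmup-plus-cosine schedule can be sketched as follows. This is a minimal numpy version of our own; the total step count `total` and the per-step (rather than per-epoch) granularity of the cosine phase are assumptions for illustration:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def combined_loss(pred, target, gamma=0.5):
    """Equal-weight mix of Dice and binary cross-entropy (gamma = 0.5)."""
    return gamma * dice_loss(pred, target) + (1 - gamma) * bce_loss(pred, target)

def lr_at(step, warmup=100, base=1e-4, minimum=5e-6, total=5000):
    """Linear warmup from 1e-6 to `base`, then cosine annealing down to `minimum`."""
    if step < warmup:
        return 1e-6 + (base - 1e-6) * step / warmup
    t = (step - warmup) / (total - warmup)
    return minimum + 0.5 * (base - minimum) * (1 + np.cos(np.pi * t))

target = np.array([1.0, 1.0, 0.0])
print(combined_loss(target, target) < 1e-3)   # near-perfect prediction -> tiny loss
print(lr_at(0), lr_at(100), lr_at(5000))      # warmup start, base value, final minimum
```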
The attention-augmented U-Net model was trained using Dice loss with the RMSprop optimizer. The initial learning rate was set to $1 \cdot 10^{-4}$ and learning rate decay was implemented with a decay factor of 0.995 applied after every epoch. The data preprocessing and augmentations were done in a similar way to the CNN model described above, except that the slices were resized to $224 \times 224$ pixels. The model was trained for 40 epochs for each fold with a batch size of 6. # Results # Radiomic features The distribution of CR values differed between voxels inside and outside MS lesions (see Figure 2). CR was markedly higher within lesion boundaries, confirming its sensitivity to hyperintense regions. The bimodal distribution across MS lesions may stem from two factors. First, CR depends on lesion size and scanning window placement. It peaks when the window is fully within a lesion. Conversely, a lower CR value arises when the window includes voxels located close to the borders but outside the lesion. Second, neuroinflammatory lesions in MS vary in shape and type. The difference among supratentorial, infratentorial, juxtacortical or paraventricular foci may account for the dual-peak pattern of the CR distribution. On average, RE was also larger inside than outside MS lesions (see Figure 3). The difference in the RE distribution evidences its capacity to detect hyperintense regions in FLAIR. The RE distribution across MS lesions followed a single-spike pattern. Hence, the patterns characterizing CR and RE in MS differ, and future studies should explore the reasons underlying this difference. Since CR and RE may help to identify neuroinflammation in FLAIR, we decided to use both features in the data fusion described in the next subsection. Figure 2: Values of concentration rate inside and outside white matter lesions in multiple sclerosis.
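The CR and RE features compared above follow Equations 1-3 directly. A sketch of their computation (our own implementation, using a toy image and a small uniform GLCM in place of the full $256 \times 256$ one; Equation 3 is applied after normalizing the GLCM to probabilities, which we assume):

```python
import numpy as np

def concentration_rate(img, i, j, s=2, num=15, m=5):
    """CR at pixel (i, j): sum the `num` highest intensities in the
    (2s+1)x(2s+1) window after discarding the top `m` outliers (Eqs. 1-2)."""
    w = img[i - s:i + s + 1, j - s:j + s + 1].ravel()
    N = (2 * s + 1) ** 2                   # Eq. 2
    order = np.sort(w)                     # ascending order statistics X_(1..N)
    return order[N - num - m:N - m].sum()  # X_(N-num-m+1) .. X_(N-m)

def renyi_entropy(glcm, alpha=7):
    """Rényi entropy of order alpha over a (combined) GLCM, per Eq. 3."""
    f = glcm / glcm.sum()                  # normalize co-occurrence counts
    return np.log((f ** alpha).sum()) / (1 - alpha)

img = np.arange(100.0).reshape(10, 10)
print(concentration_rate(img, 5, 5))       # 825.0
print(renyi_entropy(np.ones((16, 16))))    # uniform GLCM -> log(256)
```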
# CNN model Incorporating radiomic features into ML models alongside MRI data consistently improved segmentation performance across all evaluation metrics. The models trained on a combination of MRI and radiomic features achieved higher Dice scores, precision and sensitivity compared to those trained exclusively on MRI data (see Table 1). The Wilcoxon signed-rank test with Bonferroni correction for multiple comparisons showed a significant improvement in the aforementioned metrics (Wilcoxon, 1992). The fold-by-fold analysis further confirmed the validity of the study hypothesis (see Table 2). With the data fusion technique, we managed to elevate the Dice metric across all six cross-validation folds. The improvement was most remarkable in folds 3 (0.77 vs 0.74), 5 (0.73 vs 0.69), and 6 (0.73 vs 0.69). The consistent gain in performance suggests that radiomics provides complementary information enhancing the neural network's segmentation. Figure 3: Values of Rényi entropy inside and outside white matter lesions in multiple sclerosis. Table 1: Metrics averaged across all cross-validation folds. The Bonferroni-adjusted p-value from the Wilcoxon signed-rank test indicates the measured difference between metrics of the CNN trained on MRI with and without radiomic features. Table 2: Mean Dice score for the test set in each fold # U-Net with attention model The application of radiomic features also boosted the segmentation accuracy of the U-Net with attention model. The mean Dice, Precision, and Sensitivity scores rose consistently across all folds after enhancing the input with radiomic features (see Table 3). The result of the Wilcoxon signed-rank test adjusted for multiple comparisons showed a significant improvement in these metrics. In most cases, the model trained on a combination of MRI and radiomic features outperformed the baseline model, which clearly suggests the benefits of incorporating radiomics (see Table 4).
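The training-stability analysis that follows relies on the SDD of Equations 4-5, which amounts to the population standard deviation of the consecutive validation-score differences. A minimal sketch (our own; a length-$N$ curve yields $N-1$ differences):

```python
import numpy as np

def sdd(val_scores):
    """Standard deviation of consecutive differences d_i of a validation curve
    (Eqs. 4-5); lower values mean a smoother, more stable curve."""
    d = np.diff(np.asarray(val_scores, dtype=float))  # d_i = score_i - score_{i-1}
    return float(np.sqrt(np.mean((d - d.mean()) ** 2)))

smooth = sdd([0.1, 0.2, 0.3, 0.4, 0.5])   # perfectly linear curve -> SDD ~ 0
noisy = sdd([0.1, 0.4, 0.2, 0.5, 0.3])    # oscillating curve -> SDD = 0.25
print(smooth < 1e-12, noisy > smooth)
```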
From the analysis of validation curves, model training was more stable after we combined data fusion and deep learning techniques: SDD = 0.18 ± 0.09 with radiomics features vs 0.21 ± 0.06 without. The Wilcoxon signed-rank test confirmed a noticeable drop in SDD (p = 0.03), indicating that enhancing the model with radiomics was beneficial in terms of training stability. The same was clear from the visual appearance of the validation curves: they were smoother with radiomics features and more jagged without them (see Figure 4).

Figure 4: Dice validation scores obtained after each training epoch.

Table 3: Metrics averaged across all cross-validation folds. The Bonferroni-adjusted p-value from the Wilcoxon signed-rank test indicates the measured difference between metrics of the model trained on MRI with and without radiomics features.

Table 4: Mean Dice score for test set in each fold

# Discussion

Utility of Radiomic Findings for Data Fusion

Data Fusion Techniques for Enhanced Diagnostic Performance

Data fusion methodologies enable the integration of imaging data with non-imaging clinical and molecular data, which provides a more comprehensive view of individual tumors and potentially improves the prediction of patient outcomes (S. Wang et al., 2019). Multimodal data fusion has shown promise in medical imaging tasks like breast cancer prediction and glaucoma classification (Huang et al., 2020). Early fusion, late fusion, and intermediate fusion represent common strategies for integrating multimodal data, each with its own advantages and limitations (Y. Wang et al., 2025). Early fusion concatenates features from different modalities at the input level, creating a high-dimensional feature vector that captures the collective information from all modalities; however, this approach may be sensitive to irrelevant or redundant features and may not effectively capture the complex interrelationships between modalities (Liu et al., 2022).
Late fusion, on the other hand, trains separate models for each modality and then combines their predictions using techniques such as weighted averaging or ensemble learning, allowing each modality to be processed independently and contributing its unique perspective to the final decision (Kline et al., 2022). Intermediate fusion combines features at an intermediate stage of the processing pipeline, allowing for more flexible integration of information and potentially capturing more complex interdependencies between modalities. The increasing availability of multimodal data in healthcare, encompassing electronic health records, medical imaging, multi-omics, and environmental data, has spurred the development of AI-based data fusion techniques to enhance prediction, diagnosis, and treatment planning (Mohsen et al., 2022). The fusion of multimodal brain imaging techniques is gaining popularity in mental disorder analysis, since it allows for a multidimensional analysis, providing comprehensive insights into the interrelationships between various imaging modalities (Jiao et al., 2025). Multimodal medical image fusion techniques coalesce multiple images from different imaging modalities to derive a fused image enriched with a wealth of information, thereby enhancing the clinical applicability of medical images (B. Huang et al., 2020). The integration of data from diverse modalities, such as imaging, genomics, and clinical data, can provide a more complete picture of a patient’s health, reducing the chance of misdiagnosis and improving the accuracy of diagnosis (Al-antari, 2023). By integrating visual, temporal, and textual information into a unified feature representation space, a more holistic and nuanced understanding of industrial system complexities can be attained (T. Wang et al., 2025). The clinical use of fusion imaging is recognized as a central component in the general scheme of clinical decision-making (Zaidi et al., 2009). 
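To make the early- versus late-fusion distinction concrete, here is a toy NumPy sketch; all feature values, scores, and the 70/30 weighting are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical per-modality inputs: an imaging embedding and two radiomic values.
img_feats = np.array([0.2, 0.9, 0.4])
rad_feats = np.array([0.7, 0.1])

# Early fusion: concatenate modalities into one input vector for a single model.
early_input = np.concatenate([img_feats, rad_feats])   # shape (5,)

# Late fusion: each modality gets its own model; combine their output scores,
# here by an invented weighted average of hypothetical per-modality predictions.
pred_img, pred_rad = 0.80, 0.60
late_pred = 0.7 * pred_img + 0.3 * pred_rad            # 0.74
```

Intermediate fusion would instead merge the modalities inside the network, e.g., by concatenating hidden feature maps partway through the pipeline.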
Overall, multimodal fusion shows significant benefits in clinical diagnosis and neuroscience research (Zhang et al., 2020). # Role of Radiomics in Diagnostic Imaging Radiomics, an emerging field, is revolutionizing precision medicine by quantitatively analyzing medical images to extract a wealth of phenotypic features, thereby establishing a link between imaging and personalized treatment strategies (Arimura et al., 2018; Guiot et al., 2021). Moving beyond the traditional qualitative assessment of medical images, radiomics leverages sophisticated algorithms to mine high-dimensional data, encompassing first-, second-, and higher-order statistics, which can then be integrated with clinical and genomic information to enhance diagnostic, prognostic, and predictive accuracy (Gillies et al., 2015). The radiomics pipeline typically begins with image acquisition, ensuring standardized protocols to minimize variability, followed by precise segmentation of the region of interest to delineate the anatomical or pathological structures for feature extraction (van Timmeren et al., 2020). Following feature extraction, the high-dimensional radiomic feature space is subjected to normalization techniques, such as Z-score standardization or min-max scaling, to mitigate the effects of differing feature scales and distributions (Capobianco and Dominietto, 2020). To ensure robust and generalizable models, feature selection methodologies are crucial in radiomics, ranging from univariate statistical tests, such as t-tests and ANOVA, to advanced machine learning techniques including recursive feature elimination, least absolute shrinkage and selection operator, and tree-based methods (van Timmeren et al., 2020). Finally, machine learning models are trained and validated to predict clinical outcomes, with careful attention to hyperparameter tuning and cross-validation strategies to prevent overfitting and assess the model’s performance on unseen data (de Farias et al., 2021). 
Despite the promising applications of radiomics in diagnostic imaging, the complexities associated with data acquisition, image segmentation, feature extraction, and model validation require careful consideration to ensure the robustness and reproducibility of the results (Ghuwalewala et al., 2021; Koçak et al., 2019). # Radiomic Features: Concentration Rate and Rényi Entropy Radiomic features, including concentration rate and Rényi entropy, offer unique insights into the underlying tissue characteristics, reflecting both the distribution and complexity of voxel intensities within medical images (Rizzo et al., 2018). Concentration rate, a statistical measure, quantifies the degree to which voxel intensities are clustered around a specific value, reflecting the homogeneity or heterogeneity of the tissue (Yang et al., 2018). A high concentration rate suggests that a large proportion of voxels exhibit similar intensity values, indicating a more uniform tissue structure, whereas a low concentration rate implies a wider distribution of intensities, indicative of greater heterogeneity. Rényi entropy, a generalization of Shannon entropy, provides a flexible measure to quantify the randomness or disorder of voxel intensities within an image, with its sensitivity adjustable via a parameter that controls the weighting of different intensity values. By tuning the parameter, Rényi entropy can be tailored to emphasize either the dominant intensity patterns or the subtle variations within the image, offering a more comprehensive characterization of tissue heterogeneity. In the context of cancer imaging, radiomic features facilitate the non-invasive characterization of tumor heterogeneity, a critical determinant of tumor aggressiveness and therapeutic response, by quantifying the spatial distribution of voxel intensities and textural patterns within the tumor volume (Mayerhoefer et al., 2020). 
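The Rényi entropy described above has a closed form, $H_\alpha(p) = \frac{1}{1-\alpha}\log\sum_i p_i^\alpha$ for $\alpha > 0$, $\alpha \neq 1$, recovering Shannon entropy as $\alpha \to 1$. The NumPy sketch below evaluates it on invented intensity histograms; the exact concentration-rate formula is not given in this text, so it is deliberately not reproduced here.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy H_a(p) = log(sum_i p_i**a) / (1 - a), for a > 0, a != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # ignore empty histogram bins
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# For a uniform distribution over n bins, H_a = log(n) for every order a.
uniform = [0.25, 0.25, 0.25, 0.25]
h_uniform = renyi_entropy(uniform, 2.0)   # = log(4)
# A peaked (more homogeneous) intensity histogram has lower entropy.
peaked = [0.85, 0.05, 0.05, 0.05]
h_peaked = renyi_entropy(peaked, 2.0)
```

Varying the order $\alpha$ reweights the histogram: larger $\alpha$ emphasizes dominant intensities, smaller $\alpha$ the rare ones, matching the tunable sensitivity described above.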
Entropy measures in radiomics have been related to molecular classifications of breast cancer subtypes, with higher entropy often correlating with more aggressive subtypes (H. Li et al., 2016). Concentration rate, a measure of the degree to which voxel intensities are clustered within a specific region of interest, can provide valuable insights into the spatial distribution of metabolic activity or contrast enhancement, thereby reflecting the underlying biological processes occurring within the tissue (Y. Wang et al., 2025). Rényi entropy, a generalization of Shannon entropy, quantifies the randomness or disorder of the voxel intensity distribution, offering a more nuanced characterization of tissue heterogeneity and complexity that can be used to distinguish between different disease states and predict treatment response (Tan et al., 2020). Concentration rate can be particularly useful in differentiating between benign and malignant lesions, as malignant tumors often exhibit higher concentration rates due to their increased metabolic activity and rapid cell proliferation. Rényi entropy, with its ability to adjust sensitivity to different aspects of the intensity distribution through its order parameter, provides a flexible tool for capturing subtle changes in tissue texture and heterogeneity that may be indicative of early-stage disease or treatment response (Cui et al., 2022). The feature-enhancement capability of image fusion is visually apparent: fused combinations often result in images that are superior to the original data (Jiang et al., 2009). By combining concentration rate and Rényi entropy with other radiomic features and clinical data, data fusion techniques can create a more comprehensive and robust diagnostic model that is less susceptible to noise and variability in the imaging data (Rasekh et al., 2024).
The integration of concentration rate and Rényi entropy within a data fusion framework necessitates careful consideration of feature normalization, weighting, and selection techniques to ensure that each feature contributes appropriately to the final diagnostic decision. Data normalization techniques, such as z-score scaling or min-max scaling, can mitigate the impact of differing feature scales and ranges, while feature weighting algorithms, such as those based on mutual information or correlation analysis, can emphasize the contributions of more informative features (Hagiwara et al., 2020). The decrease of the weight entropy in a cluster illustrates the increase of certainty of a subset of features with more substantial weights in the determination of the cluster (Singh and Verma, 2019). Additionally, feature selection methods, such as principal component analysis or feature importance ranking, can identify and remove redundant or irrelevant features, further improving the model’s performance and interpretability. The application of radiomics, which involves extracting quantitative features from medical images, has shown promise in various clinical applications, including cancer diagnosis, prognosis, and treatment response prediction (Liao et al., 2019). Radiomic features, such as concentration rate and Rényi entropy, can be used to characterize the heterogeneity and complexity of tumors (Frank et al., 2021). Integrating the radiomic features with other modalities, like clinical and genomics data, can offer a more comprehensive view of the disease, potentially boosting the performance of diagnostic models (Pfeifer and Schimek, 2020). The development of fused instruments, such as PET/CT and PET/MRI stations, has bolstered the concept of complementing the strengths of one imaging modality with those of another (W.-Y. Huang and Davis, 2011). 
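The normalization step discussed above (z-score standardization versus min-max scaling) is a one-liner each; a small NumPy sketch with an invented feature vector:

```python
import numpy as np

def zscore(x):
    """Z-score standardization: zero mean, unit variance."""
    return (x - x.mean()) / x.std()

def minmax(x):
    """Min-max scaling into the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min())

# Hypothetical values of one radiomic feature across four cases.
features = np.array([2.0, 4.0, 6.0, 8.0])
z = zscore(features)    # mean 0, std 1
m = minmax(features)    # [0, 1/3, 2/3, 1]
```

Z-score scaling preserves outlier magnitudes in standard-deviation units, while min-max scaling bounds every feature to the same interval; which is preferable depends on the downstream model.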
While acquiring unified imaging data of good quality may be relatively straightforward in routine clinical practice, achieving this level of cooperation across different healthcare facilities represents a considerable obstacle (Patyk et al., 2018). Molecular imaging significantly contributes to personalized medicine by providing noninvasive spatiotemporal information on physiological and pathological processes, which can then be used not only for accurate diagnosis and determination of the extent of disease but also for rational targeted therapy and treatment monitoring (Jadvar and Colletti, 2013). Advanced MRI techniques, quantitative methods, and artificial intelligence can evaluate brain metastases, specifically for diagnosis, including differentiating between malignancy types and evaluation of treatment response, including the differentiation between radiation necrosis and disease progression (Tong et al., 2020). By combining radiomic features like concentration rate and Rényi entropy with other data modalities through sophisticated data fusion techniques, diagnostic models can achieve enhanced performance in disease detection, characterization, and prediction (Cuocolo et al., 2019; J. Li et al., 2020).
Background: Accurate lesion segmentation is critical for multiple sclerosis (MS) diagnosis, yet current deep learning approaches face robustness challenges. Aim: This study improves MS lesion segmentation by combining data fusion and deep learning techniques. Materials and Methods: We proposed novel radiomic features (concentration rate and Rényi entropy) to characterize different MS lesion types and fused these with raw imaging data. The study integrated radiomic features with imaging data through a ResNeXt-UNet architecture and an attention-augmented U-Net architecture. Our approach was evaluated on scans from 46 patients (1102 slices), comparing performance before and after data fusion. Results: The radiomics-enhanced ResNeXt-UNet demonstrated high segmentation accuracy, achieving significant improvements in precision and sensitivity over the MRI-only baseline and a Dice score of 0.774 ± 0.05 (p < 0.001 according to Bonferroni-adjusted Wilcoxon signed-rank tests). The radiomics-enhanced attention-augmented U-Net model showed greater model stability, evidenced by reduced performance variability (SDD = 0.18 ± 0.09 vs. 0.21 ± 0.06; p = 0.03) and smoother validation curves with radiomics integration. Conclusion: These results validate our hypothesis that fusing radiomics with raw imaging data boosts segmentation performance and stability in state-of-the-art models.
[ "eess.IV", "cs.CV" ]
# 1 Introduction

LLM-based agent systems have seen widespread adoption across diverse domains, such as medicine [32], programming [15, 51], robotics [35, 55], psychology [41], and general-purpose personal assistants [13, 5]. Driven by rapid advancements, agent systems are emerging as a new software paradigm, playing an increasingly pervasive role in shaping and supporting the full spectrum of human activities. As products of human intellectual labor, similar to traditional software systems, agent systems are inevitably prone to quality issues. Recent studies [30] have shown that multi-agent systems exhibit diverse failure modes during operation. Moreover, agent systems are continuously evolving to meet changing external requirements, making their maintenance both crucial and labor-intensive. For instance, by May 2025, the agent system MetaGPT [22] had accumulated over 800 GitHub issues (an issue is typically a bug report or a feature request), highlighting the substantial maintenance workload associated with agent systems. Automating the issue resolution process has been an important and challenging direction with substantial dedicated research effort. In particular, with the recent advances in agent systems, there is a growing trend toward developing software engineering agents [46, 56, 51, 15, 24, 6, 23] (referred to as SE agents in this paper), which can automatically resolve real-world software issues. Recent SE agents have demonstrated strong potential in resolving issues in traditional software systems. For instance, Agentless [46] correctly resolves 50.80% of issues on SWE-bench [29], a real-world issue resolution benchmark for traditional Python software. Although SE agents have shown promise in resolving issues in traditional software systems, it remains unclear how effectively they perform on agent systems, a new software paradigm that differs significantly from traditional software.
Therefore, in this work, we aim to answer the central question: can SE agents fix issues in agent systems? To understand issues in agent systems, we first perform an empirical study to analyze and catalog real-world agent issues. In particular, we collect 201 real-world GitHub issues along with developer-committed patches from 16 widely-used agent systems. We further build a taxonomy of agent issues with human annotators via grounded theory, resulting in 6 categories and 20 sub-categories of common agent issues. Our taxonomy reveals that real-world agent systems exhibit a diverse range of issues, many of which possess unique characteristics not typically found in traditional software systems. The findings highlight the large engineering effort required to maintain agent systems, confirming that automated issue resolution for agent systems is a challenging and critical problem. We then build AGENTISSUE-BENCH, the first reproducible benchmark for agent issue resolution. Reproducing agent issues is particularly challenging compared to traditional software issues, largely due to the nondeterminism of LLMs and the volatility of external resources (e.g., tools) that agents interact with. As a result, from the 201 issues analyzed, we invested 500 person-hours to successfully reproduce 50 agent issues. Each issue resolution task in AGENTISSUE-BENCH is packaged within an executable Docker environment, along with failure-triggering tests, user-reported issue descriptions, the buggy version, and the developer-committed patched version of the codebase. We further evaluate multiple state-of-the-art SE agents (i.e., Agentless [46], AutoCodeRover [56], and SWE-agent [51]) with both GPT-4o [1] and Claude-3.5-Sonnet [12] on AGENTISSUE-BENCH. We find that all of the existing SE agents exhibit limited capabilities in resolving agent issues. For instance, only 3.33% to 12.67% of agent issues are correctly resolved, which is significantly lower than the resolution rates achieved when these SE agents are applied to traditional software (e.g., a 23.20%-50.80% resolution rate [29]). We further conduct a qualitative analysis to break down the resolution capabilities of SE agents across different categories. Notably, the majority of resolved issues pertain to utility or dependency issues, while most LLM-related issues (e.g., compatibility with LLM providers or LLM operation issues) remain unsolved. Overall, our analysis reveals the limitations of current SE agents in resolving agent issues, underscoring the need to build advanced SE agents tailored to the maintenance of agent systems. In summary, this work makes the following contributions:

• Taxonomy. We present the first taxonomy of issues in agent systems, derived from extensive manual analysis, which summarizes the common maintenance demands encountered during agent system evolution.

• Reproducible benchmark AGENTISSUE-BENCH. We manually construct the first issue resolution benchmark of real-world agent issues. Each task is packed into an executable Docker environment, including issue descriptions, failure-triggering tests, and both buggy and patched versions of the codebase, enabling easy reproduction and validation through one-click execution.

• Evaluation. We evaluate state-of-the-art SE agents on AGENTISSUE-BENCH with both quantitative and qualitative analysis, and find that they have limited capabilities in solving agent issues. Our findings highlight the unique challenges of maintaining agent systems, underscoring the need to develop more powerful SE agents for resolving agent issues.
# 2 Background and Related Work # 2.1 LLM-based Agent Systems LLM-based agent systems are emerging as a new software paradigm, which have been widely applied across various fields (e.g., medicine [32], programming [15, 51], robotics [35, 55], psychology [41], and general-purpose personal assistants [13, 5]) with remarkable abilities. An LLM-based agent system [45, 44] typically consists of: (i) an LLM-controlled brain that decomposes and schedules tasks (i.e., planning) and records the historical behaviors (i.e., memory); (ii) a perception component that receives information from the environment; and (iii) an action component that interacts with the environment by invoking external tools. In addition, single-agent systems can collaborate to form multi-agent systems, which can tackle more complex tasks with better flexibility and effectiveness. Quality problems in LLM-integrated systems. Given the widespread adoption of LLMs, recent work has been looking into quality problems (e.g., bugs or runtime failures) in LLM-integrated systems. For example, Shao et al. [43] catalog the integration bugs in LLM-integrated systems. Different from their work, our work focuses on a specific category of LLM-integrated systems, i.e., LLM-based agent systems. Along this direction, Cemri et al. [30] build a taxonomy of failure modes in multi-agent systems. While their work focuses on runtime failure symptoms by analyzing failure trajectories, our taxonomy centers on agent issue resolution by analyzing both real-world user-reported issues and developer-committed patches. Therefore, our work complements existing efforts by providing a perspective on maintaining agent systems, encompassing a broader scope that includes not only bug fixes but also feature requests. 
Moreover, our work further differs from existing work by introducing the first reproducible benchmark for agent issue resolution and empirically evaluating state-of-the-art SE agents on their ability to resolve agent issues.

# 2.2 Software Engineering Agents

Software Engineering (SE) agents are a category of agent systems specifically designed to tackle SE tasks [36]. In particular, there is a growing trend in both industry and academia toward developing SE agents [15, 24, 46, 56, 23, 51, 6, 40, 53], which can support end-to-end software maintenance by automatically resolving user-reported issues (e.g., bug fixes or feature requests). For instance, Devin [15] is one of the first SE agents capable of resolving software issues by invoking file editors, terminals, and search tools. More recently, SWE-agent [51] interacts with the code repository environment through a custom Agent-Computer Interface (ACI), capable of performing actions such as manipulating files and executing bash commands; AutoCodeRover [56] incorporates a suite of code search tools that iteratively retrieve relevant code contexts to navigate the repository and localize issue locations; Moatless [23] equips agents with code search and retrieval tools to identify the issue locations; Agentless [46] optimizes the agent workflow with human expertise, incorporating hierarchical localization and regression testing to improve issue resolution rates. In this work, we evaluate the effectiveness of existing SE agents in resolving issues in agent systems. Benchmarking issue resolution capabilities of SE agents. With the rise of SE agents, an increasing number of benchmarks have been developed to evaluate their capabilities in addressing real-world issue resolution tasks. For instance, Jimenez et al. [34] build SWE-bench from GitHub issues of 12 Python libraries.
Based on SWE-bench, researchers further propose a series of benchmarks, e.g., SWE-bench Lite [34], SWE-bench Verified [3], and SWE-bench Lite-S [46], which are refined versions of SWE-bench with additional quality checking. While the SWE-bench series only includes issues of Python software, Zan et al. [54, 38] further propose SWE-bench Java, an issue resolution benchmark for Java software, and Yang et al. [52] build SWE-bench Multimodal, comprising frontend issue resolution tasks from open-source JavaScript libraries. More recently, OpenAI releases SWE-Lancer Diamond [20], an issue resolution benchmark with end-to-end tests for both open-source and commercial Expensify [17] software. While existing benchmarks focus exclusively on issue resolution in traditional software systems, our work introduces the first reproducible benchmark targeting issues in agent systems, an emerging software paradigm with features distinct from traditional software. Using this benchmark, we find that current SE agents are still unable to resolve the majority of issue resolution tasks in agent systems.

# 3 Agent Issue Taxonomy

To understand issues during agent system maintenance, we first manually analyze and categorize real-world GitHub issues in widely-used agent systems.

# 3.1 Methodology

Figure 1 illustrates our methodology for systematically collecting and analyzing agent issues.

# 3.1.1 Data Collection

Agent system collection. To select diverse and representative agent systems, we first use the GitHub search API to obtain 50 repositories with the keywords "AI agents" by Feb 2025. We then manually go through each repository to keep the ones that are LLM-based agent systems (filtering out unrelated ones such as paper lists or tutorials); to focus on agent systems with active maintenance, we only keep those with more than 1k stars and 30 issues. In this way, we collect 16 agent systems, such as MetaGPT [22], AutoGen [8], GPT-engineer [18], and CrewAI [13].
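The repository filtering step above can be sketched as a simple predicate over search results. This is a minimal sketch, not the authors' script: the candidate records are invented, and the exact threshold semantics (strictly more than 1k stars, 30 issues) are a reading of the stated criteria; the real selection also required manual review of each repository.

```python
def select_agent_repos(repos, min_stars=1000, min_issues=30):
    """Keep actively maintained LLM-based agent systems, mirroring the stated
    criteria: >1k stars and at least 30 issues (threshold reading is assumed)."""
    return [
        r for r in repos
        if r["is_agent_system"] and r["stars"] > min_stars and r["issues"] >= min_issues
    ]

# Hypothetical search results; only the first satisfies all criteria.
candidates = [
    {"name": "MetaGPT",    "is_agent_system": True,  "stars": 40000, "issues": 800},
    {"name": "paper-list", "is_agent_system": False, "stars": 5000,  "issues": 10},
    {"name": "tiny-agent", "is_agent_system": True,  "stars": 200,   "issues": 50},
]
kept = select_agent_repos(candidates)
```

The `is_agent_system` flag stands in for the manual relevance judgment, which cannot be automated from repository metadata alone.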
The full list of our analyzed agent systems is in Appendix A.

Figure 1: Overview of the methodology: agent system collection, agent issue extraction, and taxonomy construction.

Agent issue extraction. For each studied agent system, we adopt the following inclusion criteria to extract high-quality issues. (i) The issue has been closed with a developer-committed patch that addresses it, as patches can serve as ground truth for understanding the root causes of agent issues; (ii) The issue has a clear description without misleading information (e.g., exact patches or misleading patches in the problem description); this criterion has been widely used in constructing high-quality issue resolution benchmarks for traditional software systems [46, 3, 34]; (iii) The issue should report only one problem instead of mixing multiple problems. In the end, we obtain 201 issues in total.

# 3.1.2 Manual Labeling

We randomly separate our collected 201 agent issues into (i) 171 issues (85%) for building the taxonomy and (ii) 30 issues (15%) for evaluating the constructed taxonomy. Taxonomy construction. We manually catalog the 171 agent issues with grounded theory [31]. In particular, three human annotators with extensive software development and machine learning experience apply open coding [42] to annotate each issue based on the issue description and the developer-committed patch. They break down each issue into segments and label them with descriptive codes. Then they organize the open codes into structured categories by merging and linking related ones. All the annotators further discuss and review the taxonomy until reaching a consensus. Taxonomy evaluation. We further evaluate our taxonomy on the remaining 30 agent issues. Two annotators independently label each issue. Their annotations reach a high agreement ratio (Cohen's Kappa = 0.849); meanwhile, no new categories emerge beyond our taxonomy during their annotation. The results suggest the generalizability and reliability of our taxonomy.

# 3.2 Taxonomy

Table 1 presents our taxonomy of agent issues, covering 6 main categories. Appendix F presents detailed examples for each sub-category. In addition to the "utility issues" category, which may also occur in traditional software systems, the remaining five categories are uniquely tied to key agent system components (e.g., tools and memory), making them distinctive to agent systems. Incompatibility with LLM providers. Most agent systems incorporate existing LLMs from LLM providers (e.g., OpenAI [2], DeepSeek [14], and Anthropic [7]), and improper usage of providers' interfaces impairs agent functionality. Such issues often stem from missing dependencies or incorrect invocations of provider APIs. Moreover, due to the rapid evolution of LLMs, users frequently request new features to support newly-released LLMs.

Table 1: Taxonomy of agent issues.

Tool-related issues. The versatility of agent systems partly stems from their proficiency in utilizing tools to interact with the environment. As a result, many agent-related issues arise during tool invocation, including missing tool-dependent libraries, misconfigurations, or incorrect use of tool interfaces. In addition to external tools, agents may also rely on internal tools (e.g., custom-developed functions), where implementation flaws can trigger unintended behaviors during tool execution. Memory-related issues. The memory mechanism in agents tracks the trajectory of agent operation, and most memory-related issues arise from incorrect memory content. For example, agents may pollute memory with irrelevant information when they mistakenly extract unrelated attributes from the current context, or memory entries may be missing or incomplete due to failures in storing data. Workflow issues.
Due to the autonomy and flexibility of agent systems, unexpected behaviors can emerge along the agent workflow, such as repeated actions or hanging states. Although it is difficult to completely eliminate such issues, developers commonly mitigate them by incorporating status checkers to monitor and regulate the agent workflow. LLM operation issues. A large portion (31.84%) of agent-related issues occur during LLM operation. For example, proper configuration of model access and token usage is critical, and misconfiguration in these areas can disrupt agent functionality. Additionally, many issues stem from incorrect handling of model outputs, including (i) flawed parsing implementations and (ii) missing handlers for unexpected model responses. Beyond suboptimal prompt content (e.g., unclear model instructions), prompt management can also introduce risks: as agent systems often maintain a large and evolving pool of prompts, failures in prompt updates or configuration can result in models being queried with incorrect or outdated instructions. Summary. Our taxonomy reveals that real-world agent systems exhibit a diverse range of issues, many of which possess unique characteristics not typically found in traditional software systems. In particular, developing and maintaining agent systems demands substantial engineering effort, as developers must manage correct dependencies, configurations, and implementations across multiple components (e.g., model providers, LLM operations, memory mechanisms, and tools). Therefore, we believe that automatically resolving issues in agent systems represents a challenging and increasingly vital research direction in the era of LLMs.

# 4 AGENTISSUE-BENCH Benchmark

We then manually build AGENTISSUE-BENCH, the first reproducible issue resolution benchmark of real-world agent issues. AGENTISSUE-BENCH can be used to evaluate the efficacy of state-of-the-art SE agents in solving issues in agent systems.
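One concrete instance of the "incorrect handling of model outputs" sub-category above is response parsing: a missing handler for malformed LLM output can break the agent. The defensive-parsing sketch below is illustrative and not taken from any studied system; the function name and recovery strategy are assumptions.

```python
import json

def parse_model_output(text, fallback=None):
    """Defensively parse an LLM response expected to contain a JSON object,
    instead of letting a malformed response raise an unhandled exception."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Recover the first {...} span if the model wrapped it in prose.
        start, end = text.find("{"), text.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(text[start:end + 1])
            except json.JSONDecodeError:
                pass
        return fallback

clean = parse_model_output('{"action": "search"}')
wrapped = parse_model_output('Sure! {"action": "search"}')
broken = parse_model_output("no json here", fallback={})
```

Returning an explicit fallback turns an unexpected response into a state the workflow can check, rather than an unhandled crash mid-trajectory.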
# 4.1 Benchmark Construction

We construct AGENTISSUE-BENCH from the 201 GitHub agent issues collected in Section 3. In particular, we try to reproduce each issue according to the following procedure. Step 1: Failure reproduction. For each issue, we pull its corresponding buggy commit and set up the agent system. In particular, we manually write a test script (i.e., a failure-triggering test) to reproduce the problematic behaviors according to the issue description. In this step, we filter out issues where we cannot observe the same buggy behavior as described in the issue. Step 2: Patch reproduction. We then pull the corresponding patched commit and execute the failure-triggering test on it. In this step, we only keep issues where the patched version passes the failure-triggering test (i.e., the problematic behaviors disappear on the patched version). Step 3: Non-flakiness verification. Given the nondeterminism of LLMs, we repeat the previous two steps three times for each issue to eliminate test flakiness. In this step, we filter out issues where repeated executions of a failure-triggering test yield inconsistent behaviors. Through this multi-step filtering process, the original 201 agent issues are narrowed down to 50 reproducible issue resolution tasks, collectively forming AGENTISSUE-BENCH. We find that reproducing issues in agent systems is significantly more challenging than in traditional software systems, as agent issues are associated with diverse internal and external components and resources. In particular, most agent issues fail to reproduce for the following reasons.
(i) The nondeterminism of LLMs leads to unstable model outputs, which hinders the reproduction of agent issues such as workflow errors; (ii) external resources (e.g., agent-invoked tools, dependent libraries, or LLM providers) may have changed since the issue was reported, making it impossible to reproduce the same failure; (iii) issue descriptions lack sufficient details or steps for reproducing the problematic behaviors; (iv) agent systems cannot be set up correctly and exhibit unexpected failure behaviors that differ from the issue descriptions. Overall, the entire reproduction process takes substantial manual effort (approximately 500 person-hours).

# 4.2 Benchmark Details

Benchmark statistics. Figure 2 shows the distribution of AGENTISSUE-BENCH across different issue categories. Overall, the 50 reproduced agent issues in AGENTISSUE-BENCH cover all the main categories identified in our taxonomy, indicating that AGENTISSUE-BENCH is representative of the real-world agent issue distribution. Moreover, issues in AGENTISSUE-BENCH involve patches of different scales (detailed statistics are in Table 5). Each issue resolution instance in AGENTISSUE-BENCH consists of the following components: (i) Issue description: a user-reported textual description of the problem; (ii) Buggy version of the agent system: the buggy commit of the agent code repository in which the issue occurs; (iii) Developer-committed patch: the code changes between the buggy and correct versions, serving as the ground truth for issue resolution; (iv) Failure-triggering tests: test scripts that reproduce the issue on the buggy version but pass on the patched version; (v) Docker environment: a container with all necessary dependencies and configurations to execute the agent system.

Figure 2: Distribution of AGENTISSUE-BENCH.

Figure 3: A task example in AGENTISSUE-BENCH: an issue description ("Kickoff hangs when LLM call fails"), the code repository, SE agents (SWE-agent, AutoCodeRover, Agentless), the generated patch, and patch verification against the failure-triggering tests.
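The three-step filtering of §4.1 can be condensed into a few lines. This is a schematic sketch, not the authors' tooling; `run_test(issue, commit)` is an assumed helper that sets up the agent system at `commit` and returns True when the failure-triggering test passes.

```python
def filter_reproducible(issues, run_test, trials=3):
    """Keep an issue only if, in every trial, its failure-triggering test
    fails on the buggy commit (Step 1) and passes on the patched commit
    (Step 2); repeating over several trials rules out flakiness (Step 3)."""
    kept = []
    for issue in issues:
        consistent = all(
            not run_test(issue, issue["buggy_commit"])    # must fail when buggy
            and run_test(issue, issue["patched_commit"])  # must pass when patched
            for _ in range(trials)
        )
        if consistent:
            kept.append(issue)
    return kept
```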
Task formulation. The agent issue resolution task can be formulated as follows: (i) Input: the issue description and the buggy codebase of the agent system; (ii) Output: a patch (i.e., a code edit to the buggy codebase) that aims to resolve the issue. Figure 3 shows a task example from AGENTISSUE-BENCH.

Evaluation metrics. To evaluate how a technique tackles the agent issue resolution task, we adopt the following metrics for the patches it outputs (i.e., the SE agents in our experiments). (i) Localization accuracy: if the generated patch modifies the same location as the developer-committed patch, we consider it to have accurately localized the issue; we compute the percentage of issues for which the generated patches achieve accurate localization. (ii) Plausible resolution rate: if the generated patch makes the failure-triggering tests pass after being applied, we consider it to plausibly resolve the issue (denoted as a plausible patch); we compute the percentage of issues for which the generated patches are plausible. (iii) Correct resolution rate: if a generated plausible patch is furthermore semantically equivalent to the developer-committed patch, we consider it to correctly resolve the issue (denoted as a correct patch). In particular, given the insufficiency of tests in practice, it is common [39, 50, 37] that plausible patches are not actually correct but merely overfit to the failure-triggering tests. Therefore, reporting only the plausible resolution rate can overestimate the effectiveness of issue resolution techniques.
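Given per-issue labels for localization, test-passing plausibility, and human-judged correctness, the three metrics reduce to simple percentages. A sketch under an assumed result format (the field names are illustrative):

```python
def score_patches(results):
    """Compute the three metrics over per-issue results, each a dict with
    boolean fields: `localized` (patch edits the developer patch's
    location), `plausible` (failure-triggering tests pass), and `correct`
    (judged semantically equivalent to the developer patch)."""
    n = len(results)
    pct = lambda key: 100.0 * sum(r[key] for r in results) / n
    return {
        "localization_accuracy": pct("localized"),
        "plausible_resolution_rate": pct("plausible"),
        "correct_resolution_rate": pct("correct"),
    }
```

Note that a correct patch is by definition also plausible, so the correct resolution rate can never exceed the plausible resolution rate.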
Following common practice in the program repair area [49, 48, 47, 33], we further involve human annotators to manually check whether the plausible patches are semantically equivalent to the developer-committed patches. We then compute the percentage of issues for which the generated patches are correct.

# 5 Experiments

In this section, we investigate how well state-of-the-art SE agents can automatically resolve real-world issues in agent systems by evaluating their efficacy on AGENTISSUE-BENCH.

Table 2: Overall results of SE agents on AGENTISSUE-BENCH.

# 5.1 Experimental Setup

Studied SE agents. We include three state-of-the-art SE agents: SWE-agent [51], AutoCodeRover [56], and Agentless [46]. These agents are selected because they are fully open-sourced and achieve superior effectiveness in resolving issues in traditional software systems [29]. We directly adopt their released implementations with the original hyperparameter settings.

Backbone LLMs. Based on the recent SWE leaderboard [29], state-of-the-art SE agents achieve higher fixing rates on general software issues when equipped with the backbone LLMs GPT-4o [1] and Claude-3.5 Sonnet [12]. Therefore, in our experiments, we mainly study how effective SE agents are at resolving agent issues with these two backbone LLMs.

Evaluation pipelines. We apply the studied SE agents to AGENTISSUE-BENCH and collect the patches they generate for each issue resolution task. We then calculate fault localization accuracy and the plausible and correct resolution rates for each studied SE agent. To mitigate randomness from LLMs, we repeat all experiments three times and report the average results.

# 5.2 Quantitative Results

Overall resolution effectiveness. Table 2 shows the results of the studied SE agents on AGENTISSUE-BENCH. In general, state-of-the-art SE agents can correctly resolve only a small fraction (i.e., $3.33\%$-$12.67\%$) of agent issues.
In addition, in most cases, SE agents even fail to correctly identify the location (i.e., files or functions) for resolving the issue; e.g., file-level/function-level localization accuracy is below $26\%$/$19\%$. These observations reveal the limited capabilities of state-of-the-art SE agents in understanding and resolving issues in agent systems. Furthermore, Figure 4 compares the correct resolution rate of SE agents on agent issues (on our benchmark AGENTISSUE-BENCH) versus traditional software issues (results from SWE-bench Lite [29]). As there is no prior data for AutoCodeRover with Claude-3.5-S on SWE-bench, we leave that entry blank. Overall, SE agents exhibit significantly lower resolution rates on agent issues than on traditional software issues. These findings highlight the unique challenges posed by agent systems and underscore the need to develop SE agents specifically tailored to maintaining agent systems, an emerging and distinctive software paradigm.

Comparison among SE agents and backbone LLMs. As shown in Table 2, SE agents with Claude-3.5-S achieve higher resolution capabilities than with GPT-4o in terms of plausible resolution, correct resolution, and localization accuracy. In particular, AutoCodeRover with Claude-3.5-S achieves the highest resolution rate (i.e., $12.67\%$) and the highest localization accuracy (i.e., $25.61\%$ at the file level). Overall, we observe greater potential in Claude-3.5-S for understanding agent issues than in GPT-4o. Figure 5 shows the unique and overlapping agent issues correctly resolved by each SE agent. Each SE agent uniquely fixes 2-4 bugs that cannot be resolved by any other SE agent, and no agent issue is fixed by all SE agents. In other words, existing SE agents exhibit complementary capabilities in resolving agent issues.

Costs.
As shown in Table 2, the average costs of applying SE agents to agent issues are modest, ranging from $\$0.05$ to $\$1.15$. This cost range is similar to that of applying these SE agents to traditional software issues (e.g., $\$0.45$-$\$52.53$ [46]).

Figure 4: Resolution rate of agent issues (AGENTISSUE-BENCH) vs. traditional software issues (SWE-bench Lite).

Figure 5: Venn diagrams of resolved issues.

Table 3: Breakdown of resolved agent issues (unresolved categories are not presented).

# 5.3 Qualitative Results

In this section, we further break down the issues that SE agents can and cannot resolve, aiming to better understand their strengths and limitations. Table 3 presents the issue categories that can be resolved by at least one studied SE agent.

Resolved agent issues. Overall, the majority of agent issues resolved by SE agents are related to utilities (e.g., logging, file operations, UI), which share substantial commonality with traditional software systems; SE agents are thus inherently able to resolve issues in this category. Moreover, beyond common utility issues, some dependency issues involving agent-specific components (e.g., tools) can also be resolved by SE agents. A likely reason is that dependency issues often come with explicit error messages (e.g., missing libraries or incompatible variables/interfaces). As a result, even when the dependencies are unique to agent components (e.g., tools), these issues resemble dependency issues in general software components, making them straightforward and informative to resolve.

Unresolved agent issues.
Overall, the majority of agent-specific issues cannot be resolved by any SE agent. For example, SE agents resolve very few (or even no) issues involving LLM provider incompatibility, memory, or LLM operation. A likely reason is that exchanges with LLM providers are features unique to agent systems, and agent systems have emerged only recently, so such issues are less represented in LLM training data. In addition, the autonomous and flexible nature of agent systems, stemming from LLMs, makes it challenging to identify the root causes of LLM operation issues. Figure 6 and Figure 7 in Appendix E show two unresolved issues for which no SE agent can even correctly localize the buggy files. In summary, our analysis further confirms the limitations of existing SE agents in resolving agent issues tied to agent-specific features, highlighting the necessity of building more advanced SE agents for maintaining agent systems.

# 6 Limitations and Future Work

While AGENTISSUE-BENCH is representative of real-world agent issues, covering a wide range of categories, the generality of our findings may still be limited by its current size. In particular, we find that reproducing issues in agent systems is significantly more challenging than in traditional software systems. Due to the nondeterminism of LLMs and the mutability of the external resources (e.g., tools and LLM providers) with which agent systems interact, only a small number of agent issues (50 out of 201) could be successfully reproduced. Moreover, substantial manual effort (approximately 500 person-hours) was dedicated to preparing the Docker environments, configuring the agent systems, and writing failure-triggering tests. In the future, we plan to continuously maintain and extend the benchmark to support future research on agent system maintenance.
LLM-based agent systems are emerging as a new software paradigm and have been widely adopted across diverse domains such as medicine, robotics, and programming. However, maintaining these systems requires substantial effort, as they are inevitably prone to bugs and continually evolve to meet changing external requirements. Therefore, automatically resolving agent issues (i.e., bug reports or feature requests) is a crucial and challenging task. While recent software engineering (SE) agents (e.g., SWE-agent) have shown promise in addressing issues in traditional software systems, it remains unclear how effectively they can resolve real-world issues in agent systems, which differ significantly from traditional software. To fill this gap, we first manually analyze 201 real-world agent issues and identify common categories of agent issues. We then spend 500 person-hours constructing AGENTISSUE-BENCH, a reproducible benchmark comprising 50 agent issue resolution tasks (each with an executable environment and failure-triggering tests). We further evaluate state-of-the-art SE agents on AGENTISSUE-BENCH and reveal their limited effectiveness (i.e., with only 3.33% - 12.67% resolution rates). These results underscore the unique challenges of maintaining agent systems compared to traditional software, highlighting the need for further research to develop advanced SE agents for resolving agent issues. Data and code are available at https://alfin06.github.io/AgentIssue-Bench-Leaderboard/#/ .
# 1 Introduction

Cloud-native timeseries monitoring systems such as Prometheus [1], VictoriaMetrics [7], and Grafana Mimir [19] are widely used as cloud telemetry platforms, storing and monitoring metrics such as sensor readings [81], IP network traffic information [12, 24, 25, 29, 59], and cluster CPU and memory utilization [22, 41]. Under the hood, such a monitoring system often consists of a timeseries database as the back-end and a dynamic query engine as the front-end, allowing users to perform various statistical queries over different time ranges to support downstream applications such as anomaly detection [14, 34], attack detection [88, 95, 96], and data visualization [18]. Among the queries supported for these applications, rule queries [11, 27, 39] are often set up to periodically compute aggregated statistics over time ranges (i.e., repeated time range queries) and alert users when abnormal conditions are met (e.g., on quantiles, top-K, or cardinality). For instance, timeseries network flow data (e.g., source/destination IPs, ports, and protocols) can be monitored at second granularity to aggregate distinct source IPs targeting a specific host over a recent time window, indicating a potential Distributed Denial of Service (DDoS) attack [45, 79].

While Prometheus and its variants have become the de facto standard open-source tools for handling rule queries, they struggle with non-trivial operational costs and high query latency in practice. In our evaluation, an AWS Prometheus service running 10 rule queries every minute while monitoring just a single rack would cost approximately $\$11,520$ for query processing and $\$9,256$ for data ingestion per month (§2.3). Performing a quantile query over 100K-sample windows and 10K timeseries takes 15 min on a commodity server in our testbed.
Our profiling reveals two major bottlenecks in rule queries that lead to high monitoring cost and query latency: (1) repeated data scans from storage and (2) repeated query computations. Both stem from the observation that a single rule performs time range queries over consecutive overlapping windows, and that different rules may also query the same overlapping windows. For example, a rule with a 10-minute window and a 1-minute evaluation interval, or queries over different time windows (e.g., 2, 5, and 10 minutes), repeatedly access the overlapping portions of data among windows, yet Prometheus computes each window separately.

While several existing efforts aim to address the bottlenecks of Prometheus, they fall short in one or more of the dimensions of operational cost, query latency, and query accuracy. Exact monitoring systems that optimize Prometheus (e.g., VictoriaMetrics [7]) can reduce query latency through better storage engine designs and data caching for lower data retrieval time, and through parallel query computation for lower query evaluation time. However, they do not reduce operational costs because they do not address the repeated data scanning and computation bottlenecks. One could apply pre-computation approaches similar to those used for SQL queries [67, 97], but these tend to support only a fixed time window and limited statistics such as sum and max, yielding small cost and performance improvements. Alternatively, approximate analytics (e.g., sampling- and sketch-based) offer a promising way to trade estimation accuracy for lower operational costs and query latency [82] on complex queries. Applications often require near real-time analytics and can tolerate approximate but highly accurate results, e.g., datacenter alerts [44, 77, 101], network measurements [62, 102], and more [56, 73, 103]. However, practical issues of low accuracy and limited query generality remain.
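The window-overlap redundancy described above can be quantified with a one-line estimate (our illustration, not a formula from the paper): consecutive evaluations of a rule re-scan the overlapping portion of the window.

```python
def redundant_scan_fraction(window_s, eval_interval_s):
    """Fraction of each evaluation's window already scanned by the previous
    evaluation (windows of length window_s, slid forward by eval_interval_s)."""
    overlap = max(window_s - eval_interval_s, 0)
    return overlap / window_s

# The example from the text: a 10-minute window evaluated every minute
# re-reads 90% of its samples on every evaluation.
```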
Sampling-based approaches (e.g., [32, 80]) can provide an estimate for any query but suffer from poor and unpredictable accuracy for complex statistics such as quantiles and entropy. Sketch-based analytics (e.g., [42, 57, 72, 82, 86]) and sliding window sketches (e.g., [46, 48, 49, 55]) can provide strong accuracy guarantees for statistics of a fixed window but are limited to certain queries.

[Figures 1 and 2: PromSketch as an approximate query cache in the monitoring pipeline (insertion path, fast query path on cache hits, slow query path on cache misses), alongside a typical timeseries monitoring architecture (data collectors, storage, rule manager, PromQL query engine, and clients).]

In this paper, we revisit the promise of approximate analytics to improve operational efficiency and performance in timeseries monitoring systems. We present PromSketch, an approximate query cache that improves operational cost and query latency by up to two orders of magnitude while preserving high accuracy (e.g., $>95\%$). In contrast to Prometheus' independent query handling, PromSketch is a framework that "sketches and caches" a wide range of recent windows and statistics in fast storage (e.g., main memory) to mitigate the bottlenecks of repeated data scans and query computations over overlapping windows (as in Fig. 1). PromSketch is built on the combination of two key ideas. First, PromSketch caches a range of intermediate results rather than caching raw data or final query results as in today's timeseries monitoring systems [7].
This is a practical choice because (1) a raw data cache does not reduce repeated query computations, and its memory usage can be prohibitively large; and (2) a query result cache misses the opportunity to optimize drill-down queries that are not predefined. Thus, we adopt an extended sliding window model based on the Exponential Histogram [60, 69] to maintain a list of intermediate results (called buckets) covering consecutive intervals whose sizes vary exponentially within a timeseries. At query time, we can linearly merge these buckets to obtain the final results for any sub-window of the large window. We view this intermediate result cache as a balance between caching raw data and caching final results. Second, we provably combine the extended sliding window model with popular linear sketches to support various query functions. A potentially large number of concurrent timeseries and query functions need to be monitored (e.g., quantiles, entropy, and $L_2$ norms of CPU and memory usage across Kubernetes [5] nodes), and we want to cache as many timeseries as possible within a given memory budget. While exact data structures could store the intermediate results, they cannot scale to a large number of timeseries. To optimize memory usage, we extend the Exponential Histogram model with the KLL sketch [72] and universal sketching [54, 78], and prove their memory-accuracy efficiency both theoretically and empirically, with system optimizations to reduce runtime and operational costs. We implement PromSketch as a Go package in 5K lines of code that is compatible with Prometheus and VictoriaMetrics, two popular open-source timeseries monitoring systems, extending PromQL and covering $70\%$ of Prometheus' aggregation-over-time functions. PromSketch is also portable to other Prometheus-like systems such as [19, 31].
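To make the bucket idea concrete, here is a toy sum-only version of the Exponential Histogram sliding window model (our simplification for illustration; the actual design pairs this structure with KLL and universal sketches and carries accuracy guarantees this toy omits). Buckets cover consecutive intervals whose spans grow exponentially with age, and any recent sub-window is answered by merging whole buckets.

```python
class SumEH:
    """Toy Exponential-Histogram-style structure for sliding window sums."""

    def __init__(self, max_per_span=2):
        self.max_per_span = max_per_span
        self.buckets = []  # (span, total), newest first

    def insert(self, value):
        self.buckets.insert(0, (1, float(value)))
        span = 1
        while True:
            idx = [i for i, (s, _) in enumerate(self.buckets) if s == span]
            if len(idx) <= self.max_per_span:
                break
            # Same-span buckets are contiguous (newest first); merge the
            # two oldest of this span into one bucket of double the span.
            j = idx[-2]
            (s1, t1), (s2, t2) = self.buckets[j], self.buckets[j + 1]
            self.buckets[j:j + 2] = [(s1 + s2, t1 + t2)]
            span *= 2

    def query(self, sub_window):
        """Approximate the sum of the most recent `sub_window` samples by
        merging whole buckets that fit inside it."""
        total, covered = 0.0, 0
        for span, bucket_sum in self.buckets:
            if covered + span > sub_window:
                break
            total += bucket_sum
            covered += span
        return total
```

Because at most `max_per_span` buckets exist per span, the structure holds O(log n) buckets for n samples, which is what makes caching intermediate results for many timeseries feasible.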
Our extensive experiments show that: (1) PromSketch offers robust accuracy (mean error $\leq 5\%$) while reducing the operational costs of query processing by $400\times$ compared to Prometheus and by at least $4\times$ compared to VictoriaMetrics; and (2) it reduces end-to-end query latency by up to two orders of magnitude over Prometheus and VictoriaMetrics. PromSketch's precomputation overhead is moderate, at $1.3\times$ to $3\times$ that of non-precomputed/uncached Prometheus. In summary, we make the following contributions.

• We systematically analyze rule queries in popular timeseries monitoring systems and identify the bottlenecks and cost consequences of repeated data scans and overlapping query computations in the cloud. (§2)

• To mitigate these bottlenecks, PromSketch is, to the best of our knowledge, the first work to (1) introduce an end-to-end approximate intermediate caching design for various time ranges and statistics in timeseries monitoring, (2) propose combining the Exponential Histogram with different types of sketches (e.g., KLL and universal sketching) to support various time windows and query statistics, and (3) analytically prove the guarantees of these constructions. (§4)

• We provide a ready-to-plug-in PromSketch for both single-machine and distributed systems with cloud-native architectures (§5), and show its benefits on real-world and synthetic datasets over baseline systems (§6).

# 2 Background and Motivation

In this section, we introduce the background of timeseries monitoring systems, present motivating scenarios, and discuss the limitations of existing monitoring systems and new design opportunities.

# 2.1 Timeseries Monitoring Systems

A monitoring system typically collects, stores, and queries timeseries data from various sources. Fig. 2 shows a typical timeseries monitoring architecture. Data collectors scrape metrics and send them to storage. The storage engine appends new data to the timeseries without modifying previous data.
Users can issue queries in PromQL [28] via various clients, including rule queries for periodic monitoring and alerting. The query engine retrieves data samples from storage and computes results based on query expressions. Rule query results can be stored for reuse.

Data Model. Timeseries data are streams of timestamped values belonging to the same data source and the same set of labeled tags. They span two dimensions: (1) the time dimension, which consists of data samples, each associated with a timestamp and belonging to one timeseries; and (2) the label dimension, which consists of samples from many different data sources and label tags at a given timestamp. A data sample can be represented by $\rho = (l, t, v)$, where $l = (d_1, d_2, \ldots, d_m)$ contains $m$ label dimensions, $t$ is the timestamp, and $v$ is the data value, either a 64-bit floating point number (e.g., CPU usage) or a string (e.g., an IP address).

[Figure: time-dimensional vs. label-dimensional aggregation over timeseries from multiple data sources.]

An example timeseries of a cpu_usage metric recording the CPU usage of each node and each core can be represented as cpu_usage{node_id="node0",cpu_id="0"}, specified with labels for the node ID and CPU core ID.

Queries. Monitoring systems like Prometheus offer various queries for downstream applications, especially alerting and recording rules [26]. Users can define rules that automatically execute periodic monitoring queries and track alerts [8]. A rule query mainly consists of three parts: (1) a rule type, either a recording rule (which stores results for future use) or an alerting rule (which sends alerts based on conditions); (2) a rule evaluation interval $T_{eval}$, defining the evaluation frequency; and (3) a query expression, with an optional alert condition for triggering alerts in alerting rules.
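The series notation above can be rendered by a tiny helper (purely illustrative; this is not PromQL's canonical serialization, and label order simply follows the dict):

```python
def series_key(metric, labels):
    """Render a metric name plus its label set, e.g.
    cpu_usage{node_id="node0",cpu_id="0"}."""
    inner = ",".join(f'{k}="{v}"' for k, v in labels.items())
    return f"{metric}{{{inner}}}"
```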
rule:
  type: record | alert
  evaluation_interval: T_eval
  expr: <PromQL expression>[, alert condition]

Formally, a query expression can be defined as $\mathcal{Q}_R = \{ q(\rho), \rho: t_{cur} - T_q \leq t \leq t_{cur} \wedge d_{i_1} = x_{i_1} \wedge \cdots \wedge d_{i_m} = x_{i_m} \}$, with query function $q$ applied to the set of timeseries data samples within a query window of length $T_q$ looking back from the current time $t_{cur}$, and $d_{i_1}, \ldots, d_{i_m}$ the subset of label dimensions conditioned on. The query function $q$ can be an aggregation over the time dimension or over the label dimensions at a given timestamp. Examples of timeseries data samples and rule queries are shown in Fig. 3. Aggregation queries, such as quantiles, top-K, entropy, and cardinality, are important for understanding statistics beyond single values in a timeseries and are more efficient than querying raw data samples. Summarizing data over time with aggregation query functions and periodic rule queries essentially forms a sliding window model. In this paper, we aim to optimize time-dimensional aggregation queries.

# 2.2 Motivating Scenarios

Network Flow Monitoring. DDoS attacks can occur when attackers use TCP SYN floods to exhaust bandwidth or server resources via a botnet [45, 85]. Victims can detect ongoing attacks by monitoring the volume and entropy of SYN packets from multiple source IPs targeting a single destination [89]. For example, operators can track DDoS indicators for a virtual machine using alerting rules with a 5-second evaluation interval [79]. Detection queries can identify the start and end of a DDoS attack by monitoring flow changes and comparing metrics against alert thresholds, requiring frequent queries across various time windows (e.g., 10s, 5s) due to uncertainty about the optimal window size.
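The entropy indicator mentioned above can be computed exactly per window, as in this sketch; in practice, sketch-based systems would use a compact summary (e.g., universal sketching) instead of exact counting, so this exact version is for illustration only.

```python
import math
from collections import Counter

def empirical_entropy(items):
    """Shannon entropy (bits) of the empirical distribution of `items`,
    e.g., source IPs of SYN packets seen in the current window."""
    counts = Counter(items)
    n = len(items)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A botnet-driven SYN flood spreads traffic across many source IPs, so a sudden rise in source-IP entropy toward one destination is the kind of alert condition a rule query would test.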
In this case, each target server may receive millions of packets per second [2], requiring time windows covering 100K to 1 million data samples.

Table 1: Comparison of operational cost between several systems. "PS-PM" and "PS-VM" refer to Prometheus- and VictoriaMetrics-based integrations of PromSketch.

Cloud Resource Scaling. Cloud-native platforms autoscale resources, such as pods, to reduce costs [16], based on aggregated statistical queries (e.g., averages, quantiles) over time windows for metrics like CPU, memory, and pod counts from monitoring tools [9]. For instance, recording rules can query each container's 0.95-quantiles for memory and CPU usage, and the average pod count over the past 5 minutes, storing the results for quick retrieval by the cloud resource scheduler and downstream applications, as below. Standard Google Cloud clusters can have up to 256 pods per node and up to 100 nodes per cluster [17]; thus, cluster-level monitoring can easily involve 100K or more timeseries.

rules:
  evaluation_interval: 1m
  type: record
  expr: quantile_over_time(0.95, container_memory{dimension="used"}[5m])
  expr: quantile_over_time(0.95, container_cpu{dimension="used"}[5m])
  expr: avg_over_time(pod_number[5m])

In summary, rule query use cases involve monitoring queries that repeatedly query the same metrics over time with varying window sizes and various statistical functions, must handle large data volumes, and are sensitive to query latency, providing critical observability for anomaly detection [68, 84, 96, 99], security checking [45, 98], and cloud performance monitoring [61, 99].

# 2.3 Operational Cost and Bottleneck Analysis

We start by comparing the operational costs of two representative systems in Table 1, monitoring a 1000-node Kubernetes cluster with 1000 metrics per node, storing 268 billion data samples per month.
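As a sanity check on the ingestion volume above (our back-of-the-envelope arithmetic; the text does not state the scrape interval, so the 10-second figure is an assumption):

```python
def monthly_samples(nodes, metrics_per_node, scrape_interval_s, days=31):
    """Ingested samples per month for a cluster scraping each series
    every `scrape_interval_s` seconds."""
    series = nodes * metrics_per_node
    return series * (days * 24 * 3600 // scrape_interval_s)

# 1000 nodes x 1000 metrics at an assumed 10 s scrape interval over a
# 31-day month gives 267,840,000,000 samples, i.e. roughly the
# "268 billion data samples per month" quoted in the text.
```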
10 concurrent rule queries run every minute, and each query processes 8 billion samples. Cost estimates follow AWS Prometheus pricing [10], which charges by storage and samples processed, and a typical cloud billing model used by VictoriaMetrics [4, 38], which charges based on resource usage such as memory and vCPUs. We defer detailed analysis to §6. Query processing comprises $55\%$ of the total costs in Prometheus and $95\%$ in VictoriaMetrics.

We analyze bottlenecks in Prometheus and VictoriaMetrics to identify the sources of high query costs. Using Golang pprof and the testbed in §6, we profile recording rule queries with extended time windows. For example, we test with a 10,000-second query window, a 100 ms sample interval, and a 1 s evaluation interval, benchmarking the quantile_over_time(0.99, metric[10000s]) query. Table 2 shows the CPU profiling results and the top two bottlenecks.

Table 2: CPU hotspots of evaluating a quantile rule query in Prometheus and VictoriaMetrics.

Bottleneck 1: Repeated data scans from storage. Data scanning from storage accounts for over $40\%$ of CPU time, marking the primary bottleneck. This is due to repeated scans of data, even when query windows overlap in rule queries or when there are concurrent queries from multiple users.

Bottleneck 2: Repeated query computations. The second major bottleneck is quantile query calculation. In both VictoriaMetrics and Prometheus, periodic rule queries are computed independently rather than as sliding window queries: the entire query computation is re-executed for overlapping portions, without leveraging intermediate results from previous overlapping windows.

# 2.4 Prior Work and Limitations

Exact monitoring systems. Prior work reduces timeseries query latency and costs in three categories.
The first category enhances storage engines with better indexing (e.g., InfluxDB [21], VictoriaMetrics [7]), optimized storage schemas (e.g., Heracles [100]), and improved compression techniques (e.g., Gorilla [93]). These methods reduce storage costs and retrieval latency but do not address the computational bottlenecks from repeated data scans and overlapping windows. The second improves query performance through parallel query processing, query sharding, and precomputation. Parallel processing (e.g., VictoriaMetrics [37]) distributes query computation across CPU cores, splitting tasks by timeseries. Query sharding (e.g., Mimir [19], Thanos [31]) reduces memory usage by partitioning a query by time range or timeseries and processing each partition sequentially. While both methods can reduce query latency through hardware parallelism or a smaller memory footprint (less pressure from Go garbage collection), parallel processing does not lower overall query costs, and query sharding, while reducing memory costs, still retains redundant computation across queries. Precomputation, e.g., LindormTSDB [97], computes predefined statistics for fixed time intervals during data ingestion. While this reduces operational costs and query latency by eliminating some redundancy from window overlap, such systems support only basic statistics (e.g., sum, max) and require fixed intervals set in advance. The third employs key-value caches (e.g., fastcache [15] in VictoriaMetrics, Memcached [23] or Redis [30] in Grafana Mimir [19, 20]) to accelerate queries, including metadata caches, index caches, chunk caches, and result caches [20]. Metadata and index caches accelerate timeseries searches by mapping metrics to database indexes but do not remove repeated data retrieval or computational overhead. Chunk caches store data in memory, reducing disk retrieval time, but are limited by memory capacity and do not address repeated computations.
Result caches store query results, but frequent changes in query statistics and time ranges limit cache reuse. Approximate Query Processing (AQP). In monitoring systems, approximate results are often sufficient for downstream applications [69, 78, 79, 90], offering the chance to trade a small amount of accuracy for lower query latency and operational costs [82] via sampling or data summarization for time-window queries. Sampling for aggregation queries has been widely explored in Approximate Query Processing (AQP) by pre-processing data samples for query-time use [43, 80, 92, 94]. Monitoring systems like Thanos [32] apply downsampling to reduce data retrieval and computation costs. While sampling-based frameworks offer broad applicability across various statistics and support the sliding window model [80], their accuracy guarantees weaken for complex statistics (e.g., quantiles [72]), and they suffer larger errors when zooming into small sub-windows at a low fixed sampling rate, due to limited sample availability. Sketch-based analytics offer bounded accuracy-memory tradeoffs in sub-linear space [65, 72, 78, 86], creating compact summaries during ingestion and estimating statistics with provable error bounds. Sliding window sketches are often designed for specific query types, maintaining summaries for the entire window, such as sliding sums [49], 0-1 counting [60], heavy hitter detection [47, 48], distinct counting [55], and sliding quantiles [46]. While implementing each individually supports diverse queries, doing so introduces per-statistic effort and lacks sub-window query support within the recent window, leading to additional maintenance overhead. Recent approaches [69] extend fixed sliding window frameworks [53, 60] to support arbitrary sub-windows and accommodate various sketch types as subroutines, making them well-suited for periodic rule queries with varying window sizes and statistical requirements. Summary and Opportunities.
Existing solutions fall short in the tradeoffs among operational costs, query latency, and accuracy. Our analysis reveals a key optimization opportunity: removing query redundancy due to overlapping windows. Since periodic rule queries often overlap, caching is a natural approach to reduce redundant data scans and query computations. However, caching all raw data samples does not scale and does not reduce the computational costs from window overlaps, while caching a few final results is an ad-hoc choice that optimizes only a few predefined queries. Thus, caching intermediate results that are precomputed and flexible enough to answer a wide range of windows is a well-informed choice. # 3 PromSketch: System Overview PromSketch Architecture. We illustrate the system components in Fig. 4. PromSketch maintains an in-memory approximate cache. Data samples are ingested into both backend storage and the cache by the data ingester. PromSketch precomputes intermediate results for the most recent windows of timeseries selected by rule queries. When a rule query is issued, the querier first checks the cache for the required time range and statistics. On a hit, the query retrieves estimated results from PromSketch with reduced latency; otherwise, it falls back to the original TSDB query engine to scan raw data and compute exact statistics. The final query results, whether from PromSketch or the exact engine, are then returned to users. Challenges and Key Ideas. To realize the vision of PromSketch, we address several key design challenges: Challenge 1: Caching many recent query windows and results. Rule queries cover a wide variety of statistics and functions in real use cases.
Caching all samples ensures generality but is memory-intensive, while caching only final results limits optimization for unforeseen drill-down queries. A raw data cache also fails to reduce the redundancy from overlapping query windows. Key Idea. We extend the window-based approximate query framework (e.g., Exponential Histogram [60, 69]) to be sub-window-capable, acting as a flexible intermediate query cache along the time dimension for each timeseries. The cache stores intermediate results for many sub-windows within a large recent time window, allowing reuse for the overlapping portions of query windows (e.g., one can query 5-, 10-, and 15-min windows within a cached 30-min window without recalculating from scratch). It supports multiple, arbitrary sub-windows and different query functions by using different internal data structures to maintain intermediate results. Challenge 2: Caching a large number of timeseries. The number of active timeseries that need to be monitored can be large [97], requiring caching as many timeseries as possible within a memory budget. While exact data structures for storing intermediate results in window-based frameworks offer high accuracy, they need as much memory as exact query processing and cannot scale to a large number of timeseries. Key Idea. To reduce memory usage, we integrate approximate methods, such as sketches and sampling, as compact, low-latency intermediate data summarizations in the framework. For instance, we propose suitable combinations of the Exponential Histogram and KLL, and formally establish space-error bounds. Challenge 3: Efficient caching of various query statistics. Users often query different statistics over the same timeseries, such as distinct counting and entropy of source IPs for DDoS attack detection. To support as many query statistics as existing exact monitoring systems, a strawman solution is to cache each statistic with a separate sketch instance for each timeseries.
However, this approach introduces per-statistic effort and large memory costs. Key Idea. To avoid per-statistic effort, we draw on recent advances in universal sketching [54, 78], which allow a single sketch instance to support multiple target query functions, such as the $L_0, L_1, L_2$ norms and entropy, instead of requiring a separate sketch for each function. We combine universal sketching with EH to support multiple statistics simultaneously [69], and propose a novel optimization that combines exact maps and universal sketching as EH buckets, reducing the memory footprint while improving accuracy. Figure 4: PromSketch Architecture. Figure 5: Exponential Histogram (EH) [60] and Smooth Histogram (SH) [53] structures and window-based queries. $(t_1, t)$ and $(t_2, t_3)$ are sub-window queries within the most recent $T$ time window at current time $t$. # 4 PromSketch Detailed Design We introduce sub-window query frameworks as the PromSketch cache and present its algorithmic building blocks, with provable accuracy-space bounds and a detailed system design. # 4.1 Window-based Frameworks as a Cache Periodic time-interval aggregation queries, such as alerting rules and recording rules, are essentially sliding window queries along the time dimension. These queries maintain statistics over the most recent time window $W = (t-T, t)$ of length $T$. Users can also query any statistic over a sub-window $(t_1, t_2) \subseteq W$ for zoom-in diagnosis in applications such as anomaly localization. To cache as many query windows as possible within limited memory budgets, approximate window-based frameworks that maintain sliding window and sub-window structures are viable options. Currently, there are two general approximate window-based frameworks providing $o(N)$ memory with good estimations for a recent window $W$ of $N$ items: the Exponential Histogram (EH) [60] and the Smooth Histogram (SH) [53].
Intuitively, the Exponential Histogram maintains non-overlapping buckets whose sizes grow exponentially as buckets age, while the Smooth Histogram maintains overlapping buckets that cover time ranges with different start points, as Fig. 5 shows. The Exponential Histogram [60] breaks the most recent window $W = (t-T, t)$ into a sequence of $l$ non-overlapping intervals (buckets) $B_1, B_2, \ldots, B_l$. Window $W$ is covered by $\bigcup_{i=1}^{l} B_i$ and contains all $B_i$ except $B_1$. Then, if a target function $f$ admits a composable sketch, maintaining such a sketch on each bucket provides an estimator for $f$ on the window $W' = \bigcup_{i=2}^{l} B_i$, and $f(W)$ is sandwiched between $f(W')$ and $f(B_1 \cup W')$. Therefore, a careful choice of bucket endpoints provides control over the difference between $f(W)$ and $f(W')$. When the window slides, new buckets are introduced, expired buckets are deleted, and buckets in between are merged. The EH approach admits non-negative, polynomially bounded functions $f$ that admit a composable sketch and are weakly additive, i.e., $\exists C_f \geq 1$ such that $\forall S_1, S_2$: $$ f(S_1) + f(S_2) \leq f(S_1 \cup S_2) \leq C_f (f(S_1) + f(S_2)). $$ Table 3: Example Prometheus aggregation-over-time queries supported by PromSketch. PromSketch supports $70\%$ of the existing aggregation-over-time queries in Prometheus and introduces capabilities for currently unsupported queries. count_over_time can be supported by multiple algorithms.
Algorithm 1 EHKLL: Quantiles Based on EH We show the intuition of querying a sub-window with the following example: $q = (t_2, t_3)$ as depicted in Fig. 5. In the example, $q$ is sandwiched between $B_2 \cup B_3 \cup B_4$ and $B_3$, where $f(\bigcup_{j=2}^{l} B_j) = (1 \pm \varepsilon) f(t_2, t)$ and $f(\bigcup_{j=4}^{l} B_j) = (1 \pm \varepsilon) f(t_3, t)$. Intuitively, one can expect that $f(t_2, t_3)$ can be approximated by $f(B_3 \cup B_4)$ with an additive error of $\pm \varepsilon f(t_2, t)$, relative to the suffix $(t_2, t)$. In Smooth Histograms [53], the buckets $A_1, \ldots, A_m$ overlap. An example sub-window query $q = (t_2, t_3)$ is sandwiched between $f(A_2)$ and $f(A_4)$ and can be approximated by $f(A_3 - A_4)$ with SH buckets, provided the sketches preserve their approximation under subtraction. We choose the Exponential Histogram (EH) over the Smooth Histogram (SH) in PromSketch for two main reasons. First, SH requires subtractive properties between sketches, while EH requires only additive mergeability, which most sketches support, allowing us to analyze error bounds for more window/sketch combinations. Second, for potentially large-scale data ingestion, EH offers better system performance. Specifically, when inserting an item, EH only inserts into the newest (and smallest) bucket, with an amortized $O(1)$ insertion cost [60], while SH inserts the item into every active bucket, resulting in an $O(\log N)$ insertion cost [53]. Additionally, EH typically has smaller buckets than SH because they represent non-overlapping sub-windows, and thus requires smaller inner data structure allocations.
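The EH bucket maintenance and the sandwich estimate described above can be illustrated with a minimal count-only sketch. This is a simplified rendition of the classic algorithm [60], not PromSketch's implementation: all type and parameter names are our own, unit-weight items are assumed, and bucket expiry is omitted.

```go
package main

import "fmt"

// ehBucket summarizes a power-of-two count of items; buckets are kept oldest first.
type ehBucket struct {
	size    int64 // number of items summarized by this bucket
	newestT int64 // timestamp of the newest item in the bucket
}

// expHistogram is a count-only Exponential Histogram: at most maxPerSize
// (i.e., k_EH/2 + 1) buckets of each size; when a size class overflows, its
// two oldest buckets merge into one of double size, cascading toward older sizes.
type expHistogram struct {
	buckets    []ehBucket
	maxPerSize int
}

func (h *expHistogram) Insert(t int64) {
	h.buckets = append(h.buckets, ehBucket{size: 1, newestT: t})
	for size := int64(1); ; size *= 2 {
		var idx []int // positions of buckets of the current size (contiguous)
		for i, b := range h.buckets {
			if b.size == size {
				idx = append(idx, i)
			}
		}
		if len(idx) <= h.maxPerSize {
			return
		}
		// Merge the two oldest buckets of this size; the merged bucket keeps
		// the newer timestamp and doubles in size.
		i := idx[0]
		h.buckets[i+1].size = size * 2
		h.buckets = append(h.buckets[:i], h.buckets[i+1:]...)
	}
}

// Count estimates the items in the window: all bucket sizes minus half of the
// oldest (straddling) bucket, reflecting the f(W') <= f(W) <= f(B_1 ∪ W') sandwich.
func (h *expHistogram) Count() int64 {
	var total int64
	for _, b := range h.buckets {
		total += b.size
	}
	if len(h.buckets) > 0 {
		total -= h.buckets[0].size / 2
	}
	return total
}

func main() {
	h := &expHistogram{maxPerSize: 2}
	for t := int64(1); t <= 100; t++ {
		h.Insert(t)
	}
	fmt.Println(len(h.buckets), h.Count()) // a few buckets summarize 100 items
}
```

With a per-bucket composable sketch (KLL, universal sketch) in place of the bare count, the same merge cascade yields the EHKLL and EHUniv constructions of §4.2.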
# 4.2 Algorithmic Building Blocks Next, we introduce the algorithmic building blocks of PromSketch, which novelly combine the EH window framework with configurable sketches under provable error guarantees. PromSketch supports $70\%$ of the queries in Prometheus, as shown in Table 3. 4.2.1 EH+KLL for Quantiles (EHKLL). Quantile-based rule queries, such as quantile_over_time, min_over_time, and max_over_time, query data samples over a time range for a given $\varphi \in [0, 1]$ representing the $\varphi$-quantile (e.g., min and max correspond to the 0-quantile and 1-quantile). We present a novel construction for arbitrary sub-window quantiles, using an EH in which each bucket is a KLL sketch [72] maintaining quantiles, as shown in Alg. 1 (EHKLL). The user specifies the KLL rank error $\epsilon_{KLL}$, the EH error $\epsilon_{EH}$, the confidence level $\delta$, and the size (in time range $T$ or data count $N$) of the most recent window. When a data sample is inserted, it is added to the latest bucket $B_l$'s KLL sketch. If needed, buckets are merged [60] to maintain the EH invariants based on the number of items in each EH bucket. For a query with time range $T = [t_1, t_2]$, we identify the two buckets that contain $t_1$ and $t_2$, then merge the KLL sketches between the two buckets following the EH to construct a merged sketch. Feasible Quantile Sketches with EH. We integrate the KLL sketch [72] as the quantile estimator because its estimation errors can be aggregated consistently with those of the EH window framework across buckets. At a high level, we can choose between two types of quantile sketches: one provides rank error guarantees, and the other provides relative error guarantees.
A rank error $\epsilon_{rank}$ approximate quantile sketch receives items $x_1, x_2, \ldots, x_n$ and approximates the rank of any query item up to additive error $\epsilon_{rank} n$ with probability at least $1-\delta$. The rank of a query $x$ is the number of items $x_i$ in the queried window such that $x_i \leq x$. Given a $\varphi$-quantile query with true answer $x_\varphi$, a relative error $\epsilon_{rel}$ approximate quantile sketch outputs $\tilde{x}_\varphi$ such that $|\tilde{x}_\varphi - x_\varphi| \leq \epsilon_{rel} x_\varphi$. Since an EH maintains buckets with rank error guarantees based on Invariant 1 and Invariant 2 below ([60]), and KLL is a representative sketch with rank error guarantees, we explore the novel combination of EH+KLL and analyze the aggregated rank error bounds. Invariant 1. Define $k_{EH} = \frac{1}{\epsilon_{EH}}$ and assume $\frac{k_{EH}}{2}$ is an integer; otherwise, we replace $\frac{k_{EH}}{2}$ by $\lceil \frac{k_{EH}}{2} \rceil$. At all times, the bucket sizes $C_1, \ldots, C_l$ satisfy $\frac{C_j}{2(1 + \sum_{i=j+1}^{l} C_i)} \leq \frac{1}{k_{EH}}$, for all $1 \leq j \leq l$. Invariant 2. At all times, the bucket sizes are nondecreasing, i.e., $C_1 \leq \cdots \leq C_{l-1} \leq C_l$. Further, the bucket sizes are constrained to $\{1, 2, 4, \ldots, 2^{l'}\}$ for some $l' \leq l$ and $l' \leq \log \frac{2N}{k_{EH}}$. For every bucket size other than the size of the last bucket, there are at most $\frac{k_{EH}}{2} + 1$ and at least $\frac{k_{EH}}{2}$ buckets of that size. EHKLL Error Guarantee. We prove the error bound as follows.
First, consider queries over the entire sliding window. The oldest bucket, with size $C_1$, which we discard for quantile estimation, can contribute at most $C_1$ rank difference from the accurate answer. The newest bucket $C_l$ exactly aligns with the sliding window query boundary and introduces no error. Therefore, based on Invariant 1 [60], the rank error caused by the EH window framework is at most $\frac{2}{k_{EH}} = 2\epsilon_{EH}$. Assuming the $\epsilon_{KLL}$ rank error mergeability of the KLL sketches, after merging the sketches over all $N$ items in the window, the final estimated rank error is $\epsilon_{EHKLL} \leq 2\epsilon_{EH} + \epsilon_{KLL}$. Alg. 1 can be used to query all quantiles, including min ($\varphi = 0$) and max ($\varphi = 1$). Based on [72], a KLL sketch needs $O(\frac{1}{\epsilon_{KLL}} \log^2 \log(1/\delta\epsilon_{KLL}))$ bits of memory for estimation, for each of the $O(\frac{1}{\epsilon_{EH}} \log N)$ EH buckets, if we maintain EH buckets based on the number of samples inserted into each bucket. Therefore, the total memory needed by Alg. 1 is $O(\frac{1}{\epsilon_{KLL}} \log^2 \log(1/\delta\epsilon_{KLL}) \cdot \frac{1}{\epsilon_{EH}} \log N)$, which gives at most $2\epsilon_{EH} + \epsilon_{KLL}$ normalized rank error for sliding window queries with $N$ samples in the most recent window. # Algorithm 2 EHUniv: GSum Based on EH 1: Input: $L_2$ error target $\epsilon$, confidence level $\delta$, time window $T$ 2: function Update($t$, item) 3: Maintain EH for $L_2^2$ with $k_{EH} = O(1/\epsilon^2)$ based on Invariant 3 and Invariant 4. 4: On each bucket $A_i$, maintain a universal sketch with error target $\epsilon$.
5: function Query($t_1, t_2$) 6: Find $B_i = (b^0, b^1)$ and $B_j = (b^2, b^3)$ s.t. $t_1 \in B_i$ and $t_2 \in B_j$ 7: Compute the merged sketch $Univ_{merge} = \bigcup_{i+1 \leq r \leq j} Univ_r$ 8: Query $gsum$ from $Univ_{merge}$ based on the Recursive GSum algorithm [52] (also Alg. 4 in [69]) 9: return $gsum$ Next, we extend the error guarantee to arbitrary sub-window queries with EHKLL. A sub-window query is answered by merging buckets $i+1$ to $j$ in Alg. 1. Similarly, the bucket $B_i$ discarded from the calculation contributes at most $C_i$ rank difference, and the bucket $B_j$ included in the calculation contributes at most $C_j$ rank difference; in total, the rank difference from the EH sub-window query is at most $C_i - C_j$. If we bound the rank error against $(t_1, t_2)$, the rank error from EHKLL is $\epsilon_{EHKLL} \leq \frac{C_i - C_j}{N_{t_1} - N_{t_2}} + \epsilon_{KLL} \leq \frac{N_{t_1}}{N_{t_1} - N_{t_2}} \cdot \frac{C_i - C_j}{(1 + \sum_{r=i+1}^{l} C_r)} + \epsilon_{KLL} \leq 2\epsilon_{EH} \frac{N_{t_1}}{N_{t_1} - N_{t_2}} + \epsilon_{KLL}$, where $N_{t_i}$ denotes the number of samples between time $t_i$ and the current time $t$.
If we bound the rank error against $(t_1, t)$, similarly, the rank error from EHKLL is $\epsilon_{EHKLL} \leq \frac{C_i - C_j}{(1 + \sum_{r=i+1}^{l} C_r)} + \epsilon_{KLL} \leq 2\epsilon_{EH} + \epsilon_{KLL}$. 4.2.2 Universal Sketching with EH (EHUniv). Next, we support estimations of complex statistics that are not fully supported by current systems, such as the $L_2$ norm, distinct counting, and entropy. These are often used together and queried on the same timeseries, but naive solutions require separate caching per statistic. For example, in DDoS attack detection, one would need a distinct-counting sketch for cardinality and another sketch for entropy, doubling memory usage. Therefore, we leverage a universal sketch as a subroutine in EH to answer multiple statistics with one sketch, as shown in Alg. 2 (EHUniv). The update process inserts the item into the newest EH bucket while maintaining the EH invariants. If the invariants are violated for a pair of buckets $(B_{j-1}, B_j)$, they are merged into a new bucket $B'_{j-1}$, whose universal sketch combines those of $B_{j-1}$ and $B_j$. The update process has an amortized merge time of $O(1)$ and a worst-case merge time of $O(k_{EH} \log N)$, where $N$ is the number of items in the most recent window. During queries, after finding the two buckets that contain the query window start time $t_1$ and end time $t_2$, it merges all buckets in between, including the last bucket but excluding the first. Finally, the supported statistic is answered by the Recursive GSum algorithm [52] over the merged universal sketch. EHUniv Benefits.
The benefits of integrating a universal sketch are the ability to maintain a single sketch for querying multiple statistics rather than separate sketches for each, and its natural mergeability. These statistical functions can be expressed as GSum, allowing a single universal sketch instance to maintain multiple statistics [52, 78]. The GSum function is defined as $G = \sum_{i=1}^{m} g(f_i)$, where $f_i$ is the frequency of data sample $\mathrm{data}_i$ and $g: \mathbb{N} \to \mathbb{N}$ is a function. The class of GSum functions covers many practical monitoring functions, including $L_0$ (distinct counting), the $L_1$ norm (count_over_time), the $L_2$ norm, entropy, and top-k frequent item finding. We provide the basics of universal sketching here and refer to [52, 78] for more details. Theorem 2 in [52] states that if $g(x)$ grows slower than $x^2$, drops no faster than sub-polynomially, and has predictable local variability, then there is an algorithm that outputs an $\epsilon$-approximation to $G$ using sub-polynomial space and a single pass over the data. For a universe of $M$ distinct items in a data stream, a universal sketch maintains $\log M$ parallel copies of an "$L_2$-heavy hitter" (L2-HH) subroutine, e.g., using a Count Sketch [57]. Then, leveraging the Recursive GSum algorithm [52] (also described in Alg. 4 of [69]; we omit the details here), a universal sketch estimates the statistical function $g$ by recursively computing $g$ on the heavy hitters found across the $\log M$ layers of Count Sketches.
Following [69], EH buckets can be maintained based on the $L_2$ norm and thus can support L2-HH routines: [69] improves on EH [60]'s results for $L_2$, showing that by maintaining $k_{EH} = O(\frac{1}{\epsilon^2})$ and $C_f = 2$, EH provides an $\epsilon$-approximation for $L_2$ on the sliding window by maintaining Invariants 3 and 4, where $f(B_j) = L_2^2(B_j), j = 1, \ldots, l$. Invariant 3. $f(B_j) \leq \frac{C_f}{k_{EH}} \sum_{i=j+1}^{l} f(B_i)$. Invariant 4. $f(B_{j-1}) + f(B_j) > \frac{1}{k_{EH}} \sum_{i=j+1}^{l} f(B_i)$. EHUniv Error Guarantee. According to Theorem 7 in [60], the EH approach requires $O(k_{EH}\, s(\epsilon, \delta) \log N)$ bits of memory, where $s(\epsilon, \delta)$ is the memory needed by a sketch to obtain a $(1+\epsilon)$-approximation with failure probability at most $\delta$. Theorem 3.5 in [69] shows that a Count-Sketch-based $L_2$ heavy hitter algorithm over EH with $k_{EH} = O(\frac{1}{\epsilon^2})$ solves the $(\epsilon, L_2)$-heavy hitters problem for sliding window and sub-window queries using $O(\epsilon^{-4} \log^3 N \log \delta^{-1})$ bits of memory. As shown in [69], a Recursive Sketch with a $(g, \epsilon)$-heavy hitter algorithm, which finds all $i$ such that $g(f_i(t_1, t_2)) \geq \epsilon G(t_1, t)$, returns $\hat{G}(t_1, t_2) = G(t_1, t_2) \pm \epsilon G(t_1, t)$, fails with probability at most 0.3, and has an $O(\log M)$ space overhead.
Therefore, using EH with buckets of universal sketches that use Count Sketches as L2-HH subroutines, Alg. 2 estimates GSum statistics using $O(\epsilon^{-4} \log^3 N \log M \log \delta^{-1})$ bits of memory. EHUniv Optimizations. A straightforward EHUniv implementation can incur large memory usage, as the universal sketches in EH must be configured with the same parameters and memory for mergeability among buckets, and each sketch cannot be too small if it is to guarantee good accuracy for its bucket. However, the newer EH buckets maintained in the window are usually very small (e.g., sizes $1, 2, 4, \ldots$). To optimize EHUniv memory usage and runtime, we propose using exact item-frequency maps for smaller buckets (when their sizes are below the sketch memory) and universal sketches for larger buckets. This hybrid sketch/map construction reduces the memory footprint and per-item update time, and improves accuracy because maps provide deterministic results for small buckets. When a map's size exceeds the threshold, the map is converted into a universal sketch. Querying an interval among active buckets may access maps, sketches, or both. If the time range includes only maps, we merge the selected maps to calculate item frequencies and statistics. If it includes only sketches, we merge them and apply Recursive GSum to answer the query. When both are present, we merge the maps into one item-frequency map, update a universal sketch with these frequencies, and combine all sketch buckets with the updated sketch to answer the query. # 4.3 Single-Machine PromSketch PromSketch, as an intermediate result cache, can be applied to both single-node and distributed monitoring systems. We first introduce the end-to-end design for integrating it into a single-node system with the Prometheus architecture. PromSketch data ingester.
The data ingester inserts collected timeseries data samples into both the backend TSDB and the corresponding PromSketch cache instances in parallel. Rule manager. The rule manager issues rule queries. When it initiates a rule query, it signals the query engine that the query is periodic and eligible for caching, and the query engine then creates a PromSketch instance. If rule configurations are updated and certain rules are removed, the corresponding PromSketch instances are removed. PromSketch query engine. The query engine registers a PromSketch cache instance when it first executes a rule query, keyed by timeseries ID (or name), statistical function, and query window size. If multiple rule queries share the same timeseries and statistical function but have different window sizes, the PromSketch cache expands its window range to the largest query window for the best possible caching. When evaluating an aggregation_over_time query, the query engine first checks whether the timeseries has been precomputed by PromSketch. If available, it computes results using PromSketch; otherwise, it retrieves raw data samples from the cache or storage to perform exact query computation. PromSketch supports evaluating multiple timeseries sequentially (e.g., when integrated with Prometheus) or in parallel across multiple cores (e.g., when integrated with VictoriaMetrics). PromSketch is designed to be compatible with PromQL-like query languages that provide aggregation_over_time functions, including those used by Prometheus, VictoriaMetrics, and more [19, 31]. To support the PromSketch cache in the query engine, we extend the query parser with an option to use the PromSketch cache at the entry point of the query's Abstract Syntax Tree, which is widely used to parse aggregation_over_time functions in PromQL. This preserves the original query syntax and allows outer functions to process results from the PromSketch cache.
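The window-expansion rule for shared cache instances can be sketched as follows; the types and function names are hypothetical, not PromSketch's actual API.

```go
package main

import "fmt"

// cacheKey identifies one cache instance: one per (timeseries, statistic) pair.
type cacheKey struct {
	series string
	fn     string // e.g. "quantile_over_time"
}

// register grows the cached window (in seconds) to the largest window any
// rule has requested for this series/statistic pair, so every such rule
// can be served from the same cache instance.
func register(windows map[cacheKey]int64, series, fn string, window int64) {
	k := cacheKey{series, fn}
	if window > windows[k] {
		windows[k] = window
	}
}

func main() {
	w := map[cacheKey]int64{}
	register(w, "http_latency", "quantile_over_time", 600)
	register(w, "http_latency", "quantile_over_time", 1800) // expands the window
	register(w, "http_latency", "quantile_over_time", 300)  // already covered
	fmt.Println(w[cacheKey{"http_latency", "quantile_over_time"}]) // 1800
}
```

Because EH supports arbitrary sub-windows, every rule whose window fits inside the expanded range is a cache hit, which is why expanding (rather than duplicating) the instance suffices.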
For queries that first aggregate by timeseries label and then by time (e.g., avg_over_time(max(metric)[10s])), we initiate a PromSketch instance with the inner aggregated timeseries (e.g., max(metric)) as input. Similarly, for queries that join timeseries before applying time-based aggregation, the joined timeseries samples are inserted into a PromSketch instance. In this work, we focus on optimizing aggregation-over-time functions, leaving optimizations for label-dimension aggregation to future work. PromSketch cache considerations. PromSketch uses a hash table for timeseries indexing, following Prometheus, for prototyping. For each timeseries, insertions and queries are performed concurrently. Inserting into a PromSketch instance may require reconstruction of its EH buckets. Therefore, we add a read-write lock between the query and insertion threads of each PromSketch instance, allowing multiple concurrent reads of the buckets while permitting only one insertion at a time. The PromSketch cache is maintained dynamically: if some rules are removed by users, PromSketch removes the cache instances that are no longer needed. Optionally, PromSketch can also integrate VictoriaMetrics' timeseries index cache for accelerated sketch instance look-up. Moreover, PromSketch has several reliability and data-ordering considerations: (1) When a running PromSketch fails, the in-memory cache can be rebuilt from old data in storage, with another PromSketch instance accepting new data with current timestamps; in this case, queries are answered by merging the two instances. (2) PromSketch has the same out-of-order data model as VictoriaMetrics [35] and Prometheus [6], where only data samples within the current timestamp range are accepted and out-of-order or duplicated samples are rejected. In practice, the PromSketch cache is placed after the deduplication and reordering component in VictoriaMetrics [36].
# 4.4 Extension to Distributed PromSketch To demonstrate a distributed system design, we integrate PromSketch into a cluster-based, VictoriaMetrics-like architecture [13], where the ingester, cache, query, and storage components all run as containerized microservices. This cloud-native design brings several benefits: the timeseries set is dynamically partitioned using consistent hashing [71] to allow scaling up and down, with each PromSketch node independently maintaining its own data shard without sharing data with other nodes. This differs from VictoriaMetrics' design of tightly coupling querying and caching in the same nodes. Ingester and query nodes access timeseries based on the partitioning. With this design, the number of PromSketch cache nodes and the number of storage/query nodes can be adjusted independently based on the load. Moreover, distributed PromSketch also allows replicas of cache nodes for fault tolerance in case of node failure, as well as leveraging the auto-scaling mechanisms provided by Kubernetes [13]. # 5 Implementation We implement PromSketch as a plugin with 5K lines of Go code and integrate it into Prometheus (release-2.52) and VictoriaMetrics (release-v1.102.0). PromSketch is also compatible with other Prometheus-like systems, such as Grafana Mimir [19]. For integration, for example, users can apply a ${\sim}30$-line patch to Prometheus. Algorithm implementation optimizations: To further improve PromSketch's system performance, we implement two optimizations following [103] for the universal sketches in Alg. 2: (1) One-layer update: we update only the lowest sampled layer per insertion, reducing the layers updated to one per insertion. (2) Pyramid memory: we use larger Count Sketches for upper layers and smaller ones for lower layers, preserving high accuracy, given that the layered sampling in universal sketching reduces the data volume reaching the lower layers.
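Returning to §4.4's partitioning scheme, consistent hashing of timeseries across cache nodes can be sketched as below. The virtual-node count and the CRC32 hash are illustrative choices of ours, not claims about the implementations in VictoriaMetrics or PromSketch.

```go
package main

import (
	"fmt"
	"hash/crc32"
	"sort"
)

// hashRing maps each timeseries to the first ring point at or after its hash;
// adding or removing a node only moves the keys on the adjacent arcs, which
// is what makes scaling the cache tier up and down cheap.
type hashRing struct {
	points []uint32
	owner  map[uint32]string
}

func newRing(nodes []string, vnodes int) *hashRing {
	r := &hashRing{owner: map[uint32]string{}}
	for _, n := range nodes {
		for i := 0; i < vnodes; i++ {
			// Virtual nodes smooth out the load across physical nodes.
			p := crc32.ChecksumIEEE([]byte(fmt.Sprintf("%s#%d", n, i)))
			r.points = append(r.points, p)
			r.owner[p] = n
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

func (r *hashRing) NodeFor(series string) string {
	h := crc32.ChecksumIEEE([]byte(series))
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.owner[r.points[i]]
}

func main() {
	ring := newRing([]string{"cache-0", "cache-1", "cache-2"}, 64)
	fmt.Println(ring.NodeFor("node42:cpu_usage") == ring.NodeFor("node42:cpu_usage"))
}
```

Both ingesters and queriers compute the same mapping, so samples and the rule queries over them always meet at the same cache node without any shared coordination state.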
Extending to more statistics: We implement approximate caching for additional statistics, such as average, sum, standard deviation, and variance, using sliding-window uniform sampling [80]. This algorithm maintains a sample set $S$ over the most recent sliding window of time range $T$. New items are added with probability $p \in (0, 1)$, and outdated samples are discarded. A query over $(t_1, t_2)$ is answered using the samples of $S$ within that range. # 6 Evaluation We evaluate PromSketch's end-to-end performance, provide a sensitivity analysis, and demonstrate that: • PromSketch offers $\leq 5\%$ mean errors across statistics at $5\times$ to $75\times$ lower system costs (compute and memory) than exact query engines. As a result, operational costs are reduced by $400\times$ and $4\times$ compared to Prometheus and VictoriaMetrics, respectively. • PromSketch offers up to two orders of magnitude lower query latency than Prometheus and VictoriaMetrics. • PromSketch maintains up to $8\times$ faster ingestion than alternative fixed sliding window designs when querying multiple metrics and time windows, representing a moderate $1.3\times$ to $3\times$ slowdown compared to non-cached Prometheus, depending on the number of timeseries and the PromSketch algorithm choices. Testbed: Our single-machine experiments run on an Ubuntu 20.04 system with a 32-core Intel Xeon Gold 6142 (2.6GHz), 384GB DRAM, and a 1TB SATA HDD. The cluster experiments use three servers, each with a 24-core AMD 7402P (2.8GHz), 128GB DRAM, 1.6TB NVMe SSDs, and 100Gbps NICs. Datasets: We use two synthetic and two real-world traces. (1) Synthetic traces: For each timeseries, we generate 10M Zipf-distributed data samples (obeying $Pr(k) = (1+k)^{-1.01}, k \in \mathbb{N}$; Zipf) at configurable intervals and 10M uniformly distributed samples (Uniform) with values in [0, 100000].
We create a dynamic dataset (Dynamic) that transitions between Zipf, uniform, and normal distributions $(\mu = 50000, \sigma = 10000)$, generating 1M data points per distribution in a continuous cycle. (2) Real-world traces: We use the Electronic Power dataset (Power) [66] and Google Cluster data v3 [3] (Google), where we use start_time as the timestamp and average_usage.memory as memory_usage. For the EHUniv evaluation, we use CAIDA datasets [33] of source IP addresses from a NYC Equinix backbone on 2018-12-20 (CAIDA2018) and 2019-01-17 (CAIDA2019). The PromSketch sensitivity analysis is conducted on the first 20M points, and on 2M points of Power, due to length constraints.

Baselines and evaluation metrics: We compare PromSketch with (1) the Prometheus system and (2) the VictoriaMetrics single-node version [40]. From the space of approximate analytics engines, we compare PromSketch against (3) uniform sampling: we implement uniform sampling and insert the sampled data into an array-based caching layer for Prometheus, tested with varied sampling ratios. We evaluate rule query latency, insertion throughput, memory usage, accuracy, and operational costs. Accuracy is assessed by the Mean Relative Error (MRE) against exact statistics. For quantile, min, and max estimations, we use the Kolmogorov-Smirnov test (KSTest) [50] to compare CDF differences.

# 6.1 End-to-End PromSketch Performance

For evaluating query latency and insertion throughput, we set uniform sampling at a $10\%$ sampling rate; for EHKLL, we set the KLL space limit parameter [72] $k_{KLL} = 256$ (where $k_{KLL} = (1/\epsilon_{KLL})\sqrt{\log(1/\delta)}$) and the EH error parameter $k_{EH} = 50$, for $5\%$ KSTest errors; for EHUniv, we set $k_{EH} = 20$ for $5\%$ relative errors.

6.1.1 Operational Costs and Accuracy. Compute and memory costs vs. accuracy. Under a cloud billing model, users pay for resources such as memory and CPU cores.
Fig. 6 shows the normalized operational costs for concurrent queries, including insertion, query compute, and memory usage (excluding storage and network). The costs are normalized by computation time and memory usage. Each figure shows the average errors for different sub-window sizes, with confidence levels shown as a shaded region over five runs. Fig. 6(a) and (c) depict queries with sub-window sizes of 1M, 100K, and 10K samples, using a fixed 1M-sample sliding window. The zoom-in queries mimic anomaly detection: the user first queries a 1M window, then splits it into ten 100K sub-windows for further queries, and finally divides the last 100K sub-window into ten 10K sub-windows for finer granularity. Fig. 6(b) shows the mean relative errors and confidence levels for entropy, distinct counting, and $L_2$ norm, with a 1M-sample sliding window and zoomed-in sub-windows of suffix lengths from 100K to 1M at an interval of 10K samples. Queries are issued every 100K samples. For quantiles and GSum-statistics, PromSketch offers better cost-accuracy tradeoffs than uniform sampling. PromSketch maintains less than $5\%$ errors even for the smallest sub-windows (10K samples) while reducing costs by $75\times$ for EHKLL, $10\times$ for Sampling, and $5\times$ for EHUniv compared to the exact baseline.

Cloud operational cost estimations. We compare the operational costs of Prometheus, VictoriaMetrics, and the PromSketch integrations in Table 1. PromSketch reduces query processing cost by about $400\times$ compared to AWS Prometheus pricing and by at least $4\times$ compared to VictoriaMetrics, while not increasing storage and data ingestion costs. For a 1000-node Kubernetes cluster collecting 1000 metrics per node per second for a month, the total ingestion is 268B samples, requiring 1M samples/s.
Assuming each metric has 20 labels with 100 unique values, averaging 30 bytes per label and 2 bytes per sample after compression [10], and 10 rule queries running 24/7, querying every minute over 8000 timeseries with 1M samples per series, the cost breakdown is as follows: (1) AWS Prometheus pricing [10]: Data ingestion costs \$9,186, storing 2336GB of metrics and labels costs \$70/month, and query processing costs \$11,560. This is an estimate for 10 alerting rules, and we envision the cost to be at least several orders of magnitude higher at scale. (2) VictoriaMetrics [38]: Using the typical cloud billing model [4] and assuming each data sample carries a 64-bit floating-point value and a 64-bit timestamp after decompression, 10 concurrent queries require 10 × 8 billion × 16B = 1192GB of memory, costing at least \$7,443/month for compute (using x2idn.24xlarge [4] with 96 vCPUs and 1.5TB memory). Storage costs \$348/month, with no data ingestion charge [38]. (3) PromSketch-PM: With Prometheus, PromSketch processes each sample only once, costing \$28.6/month for query processing (\$0.1/billion samples). (4) PromSketch-VM: With VictoriaMetrics, PromSketch uses ~3MB per timeseries for a 1M-sample window and a $5\%$ error target, requiring 10 · 8000 · 3MB = 234GB, which can be handled by an m6g.16xlarge instance (64 vCPUs, 256GB) at \$1,833/month. Overlapping queries on the same timeseries further reduce costs.
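The memory figures behind estimates (2) and (4) above can be reproduced with a few lines of arithmetic. This is an illustrative check assuming binary units (GiB/MiB), which is what makes the numbers in the text line up:

```python
# Reproduce the memory arithmetic behind the cost estimates.
GiB = 2 ** 30
MiB = 2 ** 20

# VictoriaMetrics: 10 concurrent queries, 8000 timeseries x 1M samples each,
# 16 bytes per decompressed sample (64-bit value + 64-bit timestamp).
samples_per_query = 8000 * 1_000_000          # 8 billion samples
vm_bytes = 10 * samples_per_query * 16
print(round(vm_bytes / GiB))                  # -> 1192 (GB, matching the text)

# PromSketch-VM: ~3 MB of sketch state per timeseries at a 5% error target,
# across 10 queries over 8000 timeseries.
ps_bytes = 10 * 8000 * 3 * MiB
print(round(ps_bytes / GiB))                  # -> 234 (GB, matching the text)
```

The roughly 5× memory gap (1192GB vs. 234GB) is what lets the PromSketch-VM deployment drop from an x2idn.24xlarge-class instance to an m6g.16xlarge.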
Figure 6: Normalized operational costs vs. errors. (a) Quantile_over_time, Google; (b) GSum-statistics, CAIDA2018; (c) Avg_over_time, Google.

Figure 7: Insertion throughput and memory. (a) PromSketch-only; (b) PromSketch+Prometheus; (c) PromSketch+VictoriaMetrics; (d) PromSketch-only scalability; (e) #Timeseries vs. memory.

Table 4: Total concurrent rule query latency on 10K-, 100K-, and 1M-sample windows. "PM" and "VM" stand for Prometheus and VictoriaMetrics, and "PS-PM" and "PS-VM" refer to the corresponding PromSketch integrations.

Table 5: VictoriaMetrics (w/ PromSketch) cluster version total rule query latencies (seconds) with different numbers of nodes. We show the speedup of VictoriaMetrics with PromSketch over VictoriaMetrics without it.

6.1.2 Single-Machine Rule Query Latency. We evaluate PromSketch's end-to-end query latency in Prometheus and VictoriaMetrics across different statistics and query window sizes on 10K timeseries, each with a 100ms data insertion interval. Table 4 reports total latencies for concurrent drill-down queries over the most recent $10^5$-, $10^4$-, and $10^3$-second time windows for each listed statistic. The total latency is averaged over 10 runs after a warm-up of $10^5$ seconds. Each aggr_over_time query aggregates statistics across all timeseries. With Alg.
1, PromSketch improves quantile query latencies by up to $203\times$ over Prometheus and $78\times$ over VictoriaMetrics. With Alg. 2, PromSketch improves distinct counting, entropy, and $L_2$ norm query latencies by up to $231\times$ compared to Prometheus and up to $158\times$ compared to VictoriaMetrics. With $10\%$ uniform sampling, PromSketch improves average and stddev query latencies by $135\times$ over Prometheus and $21\times$ over VictoriaMetrics. The smaller improvement for average is due to VictoriaMetrics's data caching optimization for avg, while it does not optimize complex queries such as quantiles and distinct counting. The overall smaller improvements over VictoriaMetrics compared to Prometheus are attributed to VictoriaMetrics' more efficient storage engine.

Table 6: VictoriaMetrics + PromSketch cluster version insertion throughput (M/s) with different numbers of nodes and timeseries.

6.1.3 Single-Machine Insertion Throughput. To evaluate the impact of PromSketch precomputation on insertion throughput with the backend Prometheus and VictoriaMetrics TSDBs, we set the data timestamp interval to 100ms and measure the duration of inserting 60 hours of data (2.16M samples) with varying numbers of timeseries. Each approximation method is tested with 10K-, 100K-, and 1M-sample window sizes. Fig. 7 shows the insertion throughput of PromSketch and each approximation method running alongside a backend TSDB, with timeseries insertion evenly distributed across CPU cores. Fig. 7(a) shows that the insertion throughput of PromSketch alone, without inserting into a backend TSDB, can reach 10M samples/s for EHKLL, 2M samples/s for EHUniv, and 100M samples/s for $10\%$ uniform sampling. We show the 1M-sample window size for each algorithm in the integrated systems. For Prometheus integration (Fig.
7(b)), with 10K timeseries, uniform sampling, EHKLL, EHUniv, and running all three algorithms together have .3×, 1.3×, 2.3×, and 3.1× smaller insertion throughput compared to Prometheus, respectively. For integration with VictoriaMetrics (Fig. 7(c)), PromSketch achieves over 1.7M samples/s insertion throughput with 100 to 10K timeseries. With VictoriaMetrics, uniform sampling, EHKLL, EHUniv, and running all three algorithms together have $1.4\times$, $4.1\times$, $4.5\times$, and $7.2\times$ smaller insertion throughput compared to VictoriaMetrics, respectively, with 10K timeseries. The discrepancy is attributed to VictoriaMetrics' faster storage backend. To evaluate PromSketch's insertion scalability, we fix the number of timeseries at 10K and vary the thread count in a single-machine setting. We measure throughput across the three algorithms using 10K-, 100K-, and 1M-sample time windows. Fig. 7(d) shows that each algorithm scales linearly as the number of threads increases.

6.1.4 Distributed PromSketch Performance. We integrate PromSketch into the VictoriaMetrics cluster version [13], distributing query, sketch, and storage nodes across 3 servers.

Rule Query Latency. In the latency experiments, data is collected every second per series through a single ingestion node, with a synthetic data generator issuing Zipf-distributed data for different timeseries. Table 5 shows the total query latencies for four statistics (0.9-quantile, max, average, and distinct) with 1K-, 10K-, and 100K-second query windows (12 concurrent queries). With the PromSketch cache, total latency is reduced by up to $30\times$ compared to the VictoriaMetrics cluster. Scaling from 1 to 3 nodes in PromSketch reduces latency by $1.4\times$ for 20K timeseries and $1.7\times$ for 40K timeseries, a sub-linear speedup because the cluster scheduler does not use all query nodes.

Insertion.
Table 6 shows the throughput with different numbers of nodes and timeseries when inserting data into all three algorithm instances for each timeseries in VictoriaMetrics + PromSketch. With an increasing number of server nodes, PromSketch's ingestion performance scales linearly from 1.33M/s to 3.86M/s with a large number of concurrent timeseries. We observe that changing the number of monitored timeseries (from 20K to 80K) does not show a significant performance impact, indicating the feasibility of supporting larger-scale cloud infrastructure monitoring.

Figure 8: Average statistic accuracy vs. memory. (a) 300-600K Interval; (b) Multi Sub-windows, Google.

Figure 9: Quantile statistic accuracy vs. memory. (a) 300-600K Interval; (b) Multi Sub-windows, Google.

Figure 10: GSum-statistics accuracy vs. memory. (a) Entropy, 300-600K Interval; (b) Entropy, Multi Suffix; (c) Distinct counting, 300-600K Interval; (d) Distinct counting, Multi Suffix; (e) $L_2$ norm, 300-600K Interval; (f) $L_2$ norm, Multi Suffix.

Figure 11: Tuning parameters for EHKLL and EHUniv.
We show the KSTest error for EHKLL. UnivConfig1 is a 16-layer universal sketch (8 layers with 3-row, 1024-column CS and 8 layers with 3-row, 128-column CS). UnivConfig2 is a 14-layer universal sketch (8 layers with 3-row, 256-column CS and 6 layers with 3-row, 64-column CS).

# 6.2 PromSketch Sensitivity Analysis

6.2.1 Accuracy with different memory consumption and sub-window sizes. To evaluate memory consumption and empirical errors, we set the sliding window size to 1M samples and query sub-windows ranging from 300K to 600K samples, as well as different suffix lengths, on real-world traces. Differences across datasets primarily stem from the skewness of the workload distribution. Fig. 8 and Fig. 9 present the average statistic and quantile statistic results. Both Fig. 8(a) and Fig. 9(a) show that increased memory improves estimation accuracy. Fig. 8(b) and Fig. 9(b) show that larger sub-windows also yield smaller estimation errors than smaller ones, and that more memory enhances accuracy across all sub-window sizes. We repeat the above evaluation for entropy, distinct counting (cardinality), and $L_2$ norm statistics, as shown in Fig. 10. As memory increases, the relative errors of these estimates decrease. With 4MB of memory, EHUniv achieves $2\%$ MRE for $L_2$ norm, $1\%$ MRE for entropy, and $2\%$ MRE for distinct counting on both CAIDA datasets for the 1/3 sub-window in a 1M sliding window. Additionally, as suffix lengths increase, estimation errors for entropy and distinct counting rise because the fixed memory space must accommodate more data samples. Errors for the $L_2$ norm may vary depending on how well suffix lengths align with EH bucket boundaries. EHUniv approaches near-zero errors when its memory approaches that of the exact algorithm. Fig. 11 shows the impact of different configurations on EHKLL and EHUniv.
Given a KLL or Universal sketch configuration, a larger $k_{EH}$ reduces window alignment error and thus yields smaller estimation errors. Given a $k_{EH}$, smaller KLL and Universal sketch memory configurations have larger errors. EHUniv shows a smaller memory gap when $k_{EH}$ is larger than 20 because each EH bucket is smaller and thus uses an exact map, with errors due solely to window alignment.

Figure 12: Memory usage with sliding window sizes. In (a), a legend label x-y denotes an EHKLL configuration with $k_{EH} = x$, $k_{KLL} = y$. The exact baseline stores every point with its associated timestamp.

Figure 13: TopK-frequent item estimation compared to MicroscopeSketch with HeavyGuardian (HG) and SpaceSaving (SS).

6.2.2 Memory with different timeseries numbers and sliding window sizes. We evaluate PromSketch's memory consumption under a $5\%$ error target and a 1M-sample sliding window for each timeseries. Fig. 7(e) shows that memory increases linearly with the number of timeseries, because we allocate one PromSketch instance per series. Since the Dynamic dataset contains uniformly distributed data, EHUniv's memory usage on it is higher than on the more skewed CAIDA2019 dataset. Fig. 12(a) and (b) show the memory consumption of EHKLL and EHUniv with different sliding window sizes and parameter configurations for each timeseries. For each configuration of both EHKLL and EHUniv, memory usage increases sublinearly as the sliding window size grows. Larger $k_{EH}$ and $k_{KLL}$ in EHKLL, and a larger $k_{EH}$ in EHUniv, use more memory. Given a window size, a larger $k_{EH}$ in EHUniv uses more memory because smaller EH buckets use hash maps for exact computation. Conversely, a smaller $k_{EH}$ reduces memory usage since larger buckets leverage sketches for compression.

6.2.3 Comparing with sliding window algorithms.
We compare PromSketch with a fixed sliding window algorithm, MicroscopeSketch [102], on concurrent window queries, evaluating insertion throughput and estimation error across all query windows. We run TopK-frequent item finding over time with EHUniv, and compare the MRE and average recall rate over 10 sub-windows (ranging from 100K- to 1M-sample sub-windows in a 1M-sample sliding window) against MicroscopeSketch with HeavyGuardian [104] and SpaceSaving [87] on CAIDA2019. Fig. 13 shows that, between 1.7MB and 3.3MB of memory, PromSketch achieves up to $8\times$ higher insertion throughput, smaller errors, and higher recall rates than MicroscopeSketch, owing to EH's ability to support multiple windows simultaneously.

# 7 Related Work

Window-based summaries. While various sliding window methods exist, most do not support arbitrary sub-window queries, incurring per-window maintenance effort. [46] addresses approximate frequency counting and quantiles. SlidingSketches [64] optimizes hash-based sketches such as the Bloom filter [51] and CountSketch [57] but lacks support for quantile sketches like KLL. ECM-sketch [91] enhances the CountMin Sketch [58] by replacing counters with Exponential Histograms for frequency estimation, whereas PromSketch uses sketches as EH buckets. WCSS [48] and MicroscopeSketch [102] support frequency estimation and TopK-frequent item finding, but they lack sub-window querying support, and MicroscopeSketch is limited to counter-based sketches. CoopStore [63] focuses on offline precomputation of quantiles and frequencies with fixed window sizes, using generous memory during aggregation to reduce errors. SummaryStore [44] designs approximate timeseries storage with power-law-based sub-window queries, while PromSketch functions as an in-memory cache and integrates into Prometheus-like monitoring systems with rule query support.

Approximate timeseries visualization.
M4 [70] and MinMaxCache [83] address the timeseries visualization problem by approximating pixel positions on a canvas, querying the min and max values of data points in a time range. They are orthogonal to our work and do not address the window query bottlenecks.

Approximate query processing (AQP) systems. Another class of approximate query systems focuses on label-dimension queries. PASS [75, 76] and AQP++ [94] combine precomputed aggregation and sampling to support SQL queries, including sum, count, min, max, and variance statistics. VerdictDB [92] acts as a sampling middlebox between the user interface and the backend database, enabling approximate SQL queries without requiring backend modifications. DHS [105] offers streaming estimation of network traffic frequencies with a dynamic sketch memory layout but does not target time window queries.
Timeseries monitoring systems such as Prometheus play a crucial role in gaining observability of the underlying system components. These systems collect timeseries metrics from various system components and perform monitoring queries over periodic window-based aggregations (i.e., rule queries). However, despite wide adoption, the operational costs and query latency of rule queries remain high. In this paper, we identify major bottlenecks associated with repeated data scans and query computations concerning window overlaps in rule queries, and present PromSketch, an approximation-first query framework as intermediate caches for monitoring systems. It enables low operational costs and query latency, by combining approximate window-based query frameworks and sketch-based precomputation. PromSketch is implemented as a standalone module that can be integrated into Prometheus and VictoriaMetrics, covering 70% of Prometheus' aggregation over time queries. Our evaluation shows that PromSketch achieves up to a two orders of magnitude reduction in query latency over Prometheus and VictoriaMetrics, while lowering operational dollar costs of query processing by two orders of magnitude compared to Prometheus and by at least 4x compared to VictoriaMetrics with at most 5% average errors across statistics. The source code has been made available at https://github.com/Froot-NetSys/promsketch.
# 1 Introduction

Recent advances in computational capabilities have sparked a data-centric paradigm shift in deep learning. Moving beyond an exclusive reliance on architectural innovations, the AI community now prioritizes large-scale data utilization, as evidenced by the success of GPT-4 [1] in language processing and Sora [30] in vision tasks. This data-centric scaling trend also extends to graph machine learning, where two learning paradigms are gaining prominence: (1) Federated graph learning (FGL) enables cross-silo graph collaboration; (2) Graph foundation models (GFM) promote multi-domain graph generalization. However, both face practical deployment limitations.

Two limitations hinder FGL from achieving cross-domain and cross-task collaboration, as illustrated in Fig. 1(a): (1) Data Heterogeneity. Due to diverse data sources and processing methods, client graphs often differ in feature dimension, label space, and topology pattern. As a result, most FGL methods are confined to collaboration across subsets of a single dataset [62, 26, 20]. While GCFL+ [53] and FedStar [36] enable limited cross-domain collaboration via domain-aware client clustering or feature-agnostic parameter sharing, they are only applicable to graph-level tasks and lack the ability to capture cross-domain general knowledge at the feature level. (2) Task Heterogeneity. Existing FGL assumes uniform graph granularity and downstream tasks across clients, enforcing one of three settings: node-level (ego-networks for node classification/link prediction), subgraph-level (induced subgraphs from a global graph for node classification/link prediction), or graph-level (graph sets for classification/regression) [16]. As a result, existing FGL approaches often adopt task-specific designs in both model architectures and training algorithms, which significantly limits their ability to support collaboration across multi-task graph data.
Meanwhile, existing GFM studies face the following two limitations, as illustrated in Fig. 1(b): (1) Multi-Domain Data Isolation. Training generalizable GFMs requires diverse graph data spanning multiple domains, such as social networks and molecular structures. Although a number of public graph datasets are available, they remain limited in both scale and diversity. In contrast, real-world graph data is expected to continuously grow in volume and variety, yet it is often distributed across institutions and isolated in data silos due to privacy regulations or commercial competition. This renders existing centralized GFM approaches increasingly infeasible. (2) Cross-Silo Storage and Computation Neglect. Although current GFMs require significantly fewer storage and computation resources than their NLP or vision counterparts, making them feasible within a single institution, centralized training frameworks inherently fail to leverage the vast yet fragmented storage and computation capacities distributed across multiple silos in real-world deployments. This under-utilization incurs non-trivial opportunity costs, such as redundant resource provisioning and sub-optimal training efficiency.

Fortunately, FGL and GFM are naturally complementary. FGL equips GFM with a decentralized training paradigm that supports learning across distributed silos while efficiently utilizing cross-silo storage and computational resources. In turn, GFM enhances FGL by offering unified feature encoding and a pre-train-then-fine-tune framework, thereby facilitating generalized collaboration across diverse graph domains and task types. To this end, we introduce the Federated Graph Foundation Model (FedGFM), a novel and practical paradigm designed for training GFMs over decentralized, cross-domain, and cross-task graphs. As illustrated in Fig.
1(c), the FedGFM paradigm follows a pipeline that begins with federated pre-training and proceeds to fine-tuning. During the federated pre-training phase, each client performs self-supervised learning on its private graph to acquire domain-specific representations. The server then aggregates these local models to construct a global model that captures generalizable topological and semantic patterns. The global model is subsequently broadcast to clients as the initialization for the next round of federated pre-training. This iterative process continues across multiple rounds of federated communication. In the fine-tuning phase, the global model is treated as a graph foundation model and is further adapted to specific downstream tasks through supervised learning.

To establish an effective FedGFM framework, our work begins with an empirical investigation (Sec. 3) assessing its feasibility and revealing a non-trivial challenge. Specifically, (1) from a feasibility perspective, FedGFM faces stringent communication constraints, as frequent transmission of large-scale model parameters or gradients is often impractical in real-world federated deployments. This limitation calls for a lightweight yet expressive model architecture. Fortunately, the graph vector quantization-variational auto-encoder (gVQ-VAE), widely used as the backbone in centralized GFMs, presents a promising solution. It has been extensively validated for its ability to jointly encode graph structures and text attributes into discrete, semantically meaningful representations [41, 43], making it well suited for multi-domain pre-training. Meanwhile, its lightweight design naturally aligns with the communication-efficiency requirements of FedGFM. (2) However, naively distributing the pre-training of gVQ-VAE across local clients in a federated setting introduces a critical challenge we term knowledge entanglement.
Unlike centralized training, federated pre-training operates on multiple isolated, domain-specific graphs, each with a distinct data distribution. Each client's locally trained model tends to overfit its domain-specific data without alignment across clients. Consequently, the aggregated global GFM encodes multi-domain graphs into indistinguishable representations, further limiting its downstream generalization. Building upon these insights, we present an effective FedGFM framework named FedGFM+, which comprises two key modules that mitigate knowledge entanglement in a dual-pronged manner: (1) AncDAI: From a global perspective, we introduce a novel anchor-based domain-aware initialization strategy. Before pre-training, each client encodes its local graph into a domain-specific prototype, which serves as a semantic anchor in the representation space. Around each anchor, we construct synthetic embeddings to initialize the global model. We theoretically show that these domain prototypes are distinguishable across domains, and that the initialization provides a strong inductive bias that naturally encourages separation among knowledge representations from different domains. (2) AdaDPP: From a local perspective, during the pre-training stage each client independently learns and retains a lightweight, domain-sensitive prompt that captures its local semantic preferences, without participating in federated aggregation. In the fine-tuning stage, these prompts are assembled into an adaptive domain-sensitive prompt pool. For a given target graph, the model selects and incorporates the most relevant prompts from the pool based on its semantic characteristics. These prompts serve as domain-specific priors that condition the GFM's representations, thereby enabling adaptive exploitation of domain knowledge and improving adaptation to downstream tasks.

Our Contributions. (1) Problem Identification.
To the best of our knowledge, this is the first exploration of the FedGFM paradigm, which organically combines FGL and GFM to offer a practical solution for training graph foundation models across silos with diverse graph domains and tasks. (2) In-depth Investigation. (Sec. 3) We conduct an in-depth empirical investigation of FedGFM, assessing its feasibility and revealing a non-trivial challenge named knowledge entanglement, providing valuable insights for its development. (3) Novel Framework. (Sec. 4) We propose a novel and effective FedGFM framework named FedGFM+, which employs two key modules to address the knowledge entanglement challenge: AncDAI from the global perspective and AdaDPP from the local perspective. (4) State-of-the-art Performance. (Sec. 5) Extensive experimental results on 8 cross-task and cross-domain graph datasets demonstrate the superiority of FedGFM+ over 20 baselines, including 5 isolated supervised learning methods, 10 FGL techniques, and 5 federated variants of centralized GFM training strategies.

# 2 Preliminaries and Problem Formalization

Text-Attributed Graph. Consider a text-attributed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges. Each node $v_i \in \mathcal{V}$ and edge $e_i \in \mathcal{E}$ may be associated with a textual description, which is encoded into a semantic vector using a specific embedding technique (e.g., bag-of-words, pre-trained language models). Depending on the downstream task, the graph may be equipped with supervision signals at different levels: node-level labels (for node classification), edge-level labels (for edge classification or link prediction), or graph-level labels (for graph classification).

Graph Vector Quantization-Variational Auto-Encoder as GFM Backbone. Most recent GFMs adopt gVQ-VAEs as the trainable GNN.
This backbone enables the joint encoding of topology and textual attributes into a discrete embedding space with clear semantic boundaries, making it particularly suitable for multi-domain GFM pre-training. Specifically, (1) $\mathcal{G}^{\prime} = (\mathcal{V}, \mathcal{E}, \mathcal{X}) \to$ Encoder $\to$ Embeddings: To ensure generality over arbitrary inputs, the Encoder can be instantiated as any reasonable GNN capable of incorporating both node and edge features to generate informative embeddings $z \in \mathbb{R}^d$. (2) Embeddings $\to$ Codebook $\to$ Quan. Emb.: To establish clear semantic boundaries, the Codebook $\mathcal{C}$ transforms continuous embeddings $z$ into discrete embeddings $e \in \mathbb{R}^d$ (Quan. Emb. $z_q \in \mathbb{R}^d$) via similarity-retrieval-based vector quantization:
$$ z _ { q } \gets e _ { j } , \ j = \arg \operatorname* { m i n } _ { e _ { i } \in \mathcal C } \| z - e _ { i } \| _ { 2 } , \ \mathcal C = \{ e _ { 1 } , e _ { 2 } , \ldots , e _ { K } \} . $$
(3) Quan. Emb. $\to$ Decoder $\to \mathcal{G}_r^{\prime} = (\mathcal{V}, \mathcal{E}_r, \mathcal{X}_r)$: To enable self-supervised training, gVQ-VAEs follow an autoencoder framework, where gradients are computed from the discrepancy between the reconstructed graph $\mathcal{G}_r^{\prime}$ and the original input graph $\mathcal{G}^{\prime}$, thereby updating the Encoder and Codebook. Notably, the trainable components of the Encoder and the Codebook are the weight matrix and the discrete embeddings $\{e_1, \ldots, e_K\}$, which together constitute the trainable GFM embedding function $f_\theta$. Meanwhile, to construct an end-to-end gradient flow, the straight-through estimator (STE) [4] is used to approximate gradients by bypassing the non-differentiable quantization step.
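The codebook lookup in step (2) and the straight-through trick can be sketched in a few lines of NumPy. This is an illustrative sketch, assuming NumPy is available; `quantize` and `ste` are hypothetical helper names, not the authors' API.

```python
import numpy as np

def quantize(z, codebook):
    """Vector quantization: map each row of z to its nearest codebook entry,
    i.e. the z_q <- e_j, j = argmin_i ||z - e_i||_2 step above."""
    # Pairwise squared L2 distances between embeddings (n, d) and codes (K, d).
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # shape (n, K)
    idx = d.argmin(axis=1)        # code index j for each node embedding
    z_q = codebook[idx]           # quantized embeddings
    return z_q, idx

def ste(z, z_q):
    """Straight-through estimator: forward value is z_q, but in an autograd
    framework this is written as z + stop_gradient(z_q - z), so gradients
    flow through z unchanged.  Plain NumPy has no sg[], so this is only the
    forward computation."""
    return z + (z_q - z)          # numerically equal to z_q

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K = 8 codes of dimension d = 4
z = rng.normal(size=(5, 4))          # embeddings for 5 nodes
z_q, idx = quantize(z, codebook)
```

In a real framework the `stop_gradient` in `ste` is what lets the encoder receive reconstruction gradients despite the non-differentiable argmin.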
Formally, the gVQ-VAE is pre-trained by optimizing the following loss function:

$$ \mathcal{L}_{pretrain} = \mathcal{L}_{feat} + \mathcal{L}_{topo} + \frac{1}{n}\sum_{i=1}^{n} \|\mathrm{sg}[z_i] - z_{q_i}\|_2^2 + \frac{1}{n}\sum_{i=1}^{n} \|z_i - \mathrm{sg}[z_{q_i}]\|_2^2, $$

$$ \mathcal{L}_{feat} = \frac{1}{n}\sum_{i=1}^{n}\left(1 - \frac{x_i^T \hat{x}_i}{\|x_i\| \cdot \|\hat{x}_i\|}\right)^{\gamma}, \qquad \mathcal{L}_{topo} = \|A - \sigma(\hat{X}\hat{X}^T)\|_2^2, $$

where $\mathrm{sg}[\cdot]$ denotes the stop-gradient operator, $n$ denotes the number of nodes, $z_i$ is the $i$-th node embedding produced by the GNN encoder, $z_{q_i}$ is its quantized embedding obtained by retrieving the codebook, $\hat{x}_i$ denotes the reconstructed node attributes projected via MLP-based decoders, i.e., $\hat{x}_i = \delta(z_{q_i})$, and $\gamma$ is a scaling factor. More details and related works about gVQ-VAEs are presented in Appendix A.

Problem Formalization of FedGFM. In FedGFM, there is a trusted central server and $K$ clients. Each client holds either a subgraph or a collection of graphs, corresponding to subgraph-level or graph-level decentralization, respectively (see Appendix C.2 for more details about the data settings). To unify the representation, we denote the graph data held by the $k$-th client as $\mathcal{S}_k$, where $|\mathcal{S}_k| = 1$ for subgraph-level decentralization.
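The forward pass of the pre-training loss can be sketched as below. This is a hedged numpy illustration: we read $\|\cdot\|_2$ on the adjacency term as the Frobenius norm (an assumption on our part), and since the stop-gradient operator only affects gradients, the codebook and commitment terms share the same numerical value in a forward pass:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, F = 6, 4, 10
z = rng.normal(size=(n, d))       # encoder embeddings z_i
z_q = rng.normal(size=(n, d))     # quantized embeddings z_{q_i} (stand-ins here)
x = rng.normal(size=(n, F))       # original node attributes x_i
x_hat = rng.normal(size=(n, F))   # reconstructed attributes \hat{x}_i
A = (rng.random((n, n)) < 0.3).astype(float)  # adjacency matrix
gamma = 1.0

# L_feat: scaled cosine-distance reconstruction loss over node attributes
cos = np.sum(x * x_hat, axis=1) / (
    np.linalg.norm(x, axis=1) * np.linalg.norm(x_hat, axis=1))
L_feat = np.mean((1.0 - cos) ** gamma)

# L_topo: reconstruction of the adjacency from reconstructed features
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
L_topo = np.linalg.norm(A - sigmoid(x_hat @ x_hat.T), "fro") ** 2

# Codebook and commitment terms (identical values forward; sg[.] differs
# only in which tensor receives gradients during backprop)
L_vq = np.mean(np.sum((z - z_q) ** 2, axis=1))
L_pretrain = L_feat + L_topo + L_vq + L_vq
```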
The proposed FedGFM paradigm follows a federated pre-training and fine-tuning process. In the Federated Pre-Training phase, each client conducts self-supervised training to optimize its local model on its local graph, and the server aggregates the local models to obtain a global graph foundation model. Adapting the widely-used FedAvg [32] aggregation strategy from federated learning for vision tasks to the FedGFM framework, the federated pre-training process unfolds as follows: (1) Initialization: at the first communication round $(r = 1)$, the central server sets the local model parameters of the $K$ clients to the global parameters, i.e., $\Theta^k \gets \Theta^{\mathrm{g}}, \forall k$. (2) Local Updates: each local model trains on its current local data $G^k$ to minimize the self-supervised loss $\mathcal{L}(G^k; \Theta^k)$, and then updates its parameters: $\Theta^k \gets \Theta^k - \eta \nabla \mathcal{L}$. (3) Global Aggregation: after local training, the server aggregates the local knowledge weighted by the number of training instances, i.e., $\Theta^{\mathrm{g}} \gets \sum_{k=1}^{K} \frac{N_k}{N} \Theta^k$ with $N = \sum_{k} N_k$, and distributes the global parameters to the local clients selected for the next round. Steps 2 and 3 iterate until the completion of the final round $(r = R)$, facilitating collaborative GFM training through parameter sharing without the exchange of private local data. In the Fine-Tuning phase, FedGFM first loads and freezes the pre-trained global model from the central server as the GFM, then uses available graph supervision signals to fine-tune task heads that adapt to specific downstream graph tasks.
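The instance-weighted global aggregation step (step 3 above) can be sketched as follows. This is a minimal illustration assuming each client's parameters are a dict of numpy arrays; the layout is hypothetical, not the paper's actual code:

```python
import numpy as np

def fedavg(local_params, num_instances):
    """Global aggregation: Theta_g = sum_k (N_k / N) * Theta_k, N = sum_k N_k."""
    N = float(sum(num_instances))
    return {key: sum((n_k / N) * params[key]
                     for params, n_k in zip(local_params, num_instances))
            for key in local_params[0]}

# Three toy clients with a single 2x2 weight matrix each
clients = [{"W": np.full((2, 2), c)} for c in (1.0, 2.0, 3.0)]
counts = [10, 20, 30]                      # N_k: local training instances
global_params = fedavg(clients, counts)    # weighted mean = 140/60
```

Clients with more training instances contribute proportionally more to the global model, which is the standard FedAvg weighting.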
# 3 Empirical Investigation

In this section, we present an in-depth empirical study of the FedGFM paradigm, organized around two key questions from different perspectives. Q1: From the perspective of Feasibility, is FedGFM practical for real-world deployment? Q2: From the perspective of Effectiveness, what are the main bottlenecks that limit the effectiveness of a naive FedGFM implementation?

Table 1: Comparison of parameter sizes between graph foundation models and those in the language and vision fields. Parameter counts are shown above each method name. ‘\*’ indicates an upper bound. Graph, language, and vision models are highlighted in red, yellow, and blue, respectively.

To address Q1, we survey several representative foundation models to quantify their parameter scales, and summarize the results in Table 1. Notably, compared with foundation models in the language and vision domains, graph foundation models (GFMs) are significantly more lightweight in terms of parameter size. This suggests that federated pre-training of GFMs is communication-efficient and practically feasible. Among all surveyed GFMs, we further observe that two gVQ-VAE-based methods, GFT [43] and GQT [41], exhibit the smallest parameter scales. This highlights the advantage of the gVQ-VAE architecture in achieving a lightweight yet expressive design, making it particularly suitable for FedGFM settings. More related works about GFMs are presented in Appendix A.

Figure 2: (a) Degree frequency distributions of the Cora, WN18RR, and HIV datasets; (b) inter-domain cosine similarity under raw features, GFT embeddings, and GFT\* embeddings.

To address Q2, we conduct a simple yet illustrative visualization experiment, aiming to reveal the bottlenecks that limit the effectiveness of naive FedGFM.
Building on the insight from Q1, we implement a naive federated variant of GFT [43] (denoted as GFT\*), and evaluate GFT and GFT\* on three datasets: Cora [56], WN18RR [12], and HIV [47], covering different domains (citation networks, knowledge graphs, and molecular graphs). The empirical results are presented in Fig. 2. Specifically, panel (a) illustrates the node degree distributions of the Cora, WN18RR, and HIV datasets (restricted to the first 30 degrees, starting from 1, for visual clarity), while panel (b) reports the inter-domain cosine similarity among the three datasets, computed in three different representation spaces: (1) the average initial node features, (2) the average node embeddings learned by GFT, and (3) those learned by GFT\*. This comparison reveals how well each model distinguishes multi-domain knowledge during representation learning. As observed, the three datasets differ markedly in both topological structure and initial feature distributions. Despite such heterogeneity, centralized GFT pre-training produces a graph foundation model that generates embeddings with clear domain-specific distinctions. This indicates effective preservation of inter-domain variability through joint optimization. In contrast, the embeddings learned by GFT\* under decentralized federated pre-training show near-unity inter-domain similarity, reflecting a collapse of domain specificity caused by the absence of coordinated global optimization. We term this knowledge entanglement, a non-trivial challenge to resolve for effective FedGFM design.

# 4 Methods

Figure 3: Overview of FedGFM+: (a) the federated pre-training stage and (b) the fine-tuning stage.
In this section, we introduce the proposed FedGFM+ framework. We first provide an overview of FedGFM+ in Fig. 3. At its core, FedGFM+ adopts a federated pre-training and fine-tuning paradigm. During each communication round of pre-training, clients leverage a local gVQ-VAE encoder to perform self-supervised graph reconstruction, capturing domain-specific semantics. The resulting local models are uploaded to the server for aggregation, yielding an updated global model. The global model is subsequently broadcast to clients as the initialization for the next round of federated pre-training. In the fine-tuning stage, this global model serves as a general-purpose GFM encoder, while a task-specific prediction head is optimized for downstream tasks. Moreover, FedGFM+ introduces two key modules to mitigate the knowledge entanglement challenge: (1) AncDAI: before pre-training, FedGFM+ employs a novel anchor-based domain-aware initialization strategy to initialize the global codebook, providing a strong inductive bias that facilitates disentanglement of domain-specific knowledge. (2) AdaDPP: during pre-training, each client independently learns a lightweight graph prompt that imbues the GFM with its own domain semantic preferences.
During fine-tuning, prompts from all clients are aggregated into an adaptive domain-sensitive prompt pool, from which the GFM selects relevant prompts to augment the target graph attributes, thereby improving downstream adaptation. Below we introduce these two modules in detail.

# 4.1 Anchor-Based Domain-Aware Initialization

As discussed in Section 3, naive FedGFM suffers from knowledge entanglement, where representations from different domains collapse into indistinguishable embeddings. To mitigate this, from a global perspective, we aim to endow the global model with a strong inductive bias that explicitly encourages the separation of domain-specific semantics. Before federated pre-training, to capture domain-specific knowledge, we introduce a domain prototype extraction mechanism, which models intrinsic patterns in the graph topology and node attributes of the local graph and summarizes them into a compact, fixed-dimensional vector representation. Specifically, for the $k$-th client with a local graph $\mathcal{G}^k = (\mathcal{V}^k, \mathcal{E}^k)$, node features $\mathbf{X}^k$ and adjacency matrix $\mathbf{A}^k$, we first compute the node embeddings $\mathbf{Z}^k$ as follows:

$$ \mathbf{Z}^k = f_{\theta^{\mathrm{glb}}}(\mathbf{X}^k, \mathbf{A}^k) $$

where $\theta^{\mathrm{glb}}$ denotes the initialized global model parameters broadcast to all clients. The domain prototype $\mathbf{p}^k$ is then obtained by mean-pooling over the node embeddings:

$$ \mathbf{p}^k = \frac{1}{|\mathcal{V}^k|} \sum_{i \in \mathcal{V}^k} \mathbf{z}_i^k $$

We theoretically demonstrate that, even under a randomly initialized and untrained model with shared parameters, the domain prototypes, obtained by averaging the encoded node representations, remain distinguishable across clients.
This separability stems from intrinsic discrepancies in node features and graph topologies among domains, and can be formally bounded (Appendix B, Theorem B.1). Each client subsequently uploads its prototype to the central server. To steer the global model toward learning domain-aware representations, we treat these prototypes as semantic anchors and synthesize local neighborhoods in the embedding space via controlled perturbations. Specifically, for each anchor $\mathbf{p}^k$, a set of perturbed embeddings $\{\tilde{\mathbf{p}}_i^k\}_{i=1}^H$ is generated as:

$$ \tilde{\mathbf{p}}_i^k = \mathbf{p}^k + \sigma \epsilon_i, \quad \epsilon_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \quad i = 1, \ldots, H, $$

where $\epsilon_i$ is sampled from a standard Gaussian distribution, and $\sigma$ is a noise scaling factor that ensures numerical stability. Notably, the number of synthetic embeddings $H$ is uniformly allocated across prototypes, depending on the number of learnable codebook tokens in the global model. Finally, the synthetic embeddings aggregated from all domains are used to initialize the codebook $\mathcal{C}$ of the global model, i.e., $\mathcal{C} \gets \mathrm{Init}(\cup_k \{\tilde{\mathbf{p}}_i^k\}_{i=1}^H)$. We further provide a theoretical analysis (Appendix B, Theorem B.2) demonstrating that this initialization introduces a structured inductive bias, which not only facilitates disentangled representation learning across diverse domains but also stabilizes optimization during the early stages of federated pre-training.

# 4.2 Adaptive Domain-Sensitive Prompt Pool

Moreover, to address knowledge entanglement from the local perspective, we introduce a novel prompt learning-based mechanism.
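The anchor-based codebook initialization described in Section 4.1 can be sketched as follows. This is a minimal numpy illustration with assumed toy sizes (number of clients, embedding dimension $d$, perturbations per anchor $H$, and noise scale $\sigma$), standing in for prototypes produced by the actual GNN encoder:

```python
import numpy as np

rng = np.random.default_rng(2)
K_clients, d, H, sigma = 3, 4, 5, 0.01  # illustrative sizes (assumptions)

# Each client's domain prototype p^k: mean-pooled node embeddings
prototypes = [rng.normal(size=(20, d)).mean(axis=0) for _ in range(K_clients)]

# Perturbed anchors p~_i^k = p^k + sigma * eps_i, eps_i ~ N(0, I); the union
# over clients initializes the global codebook C (K_clients * H tokens).
codebook = np.concatenate(
    [p + sigma * rng.standard_normal((H, d)) for p in prototypes], axis=0)
```

Because $\sigma$ is small, each group of $H$ codebook tokens stays clustered near its own domain anchor, which is exactly the inductive bias the module aims to provide.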
During the pre-training stage, each client independently learns and retains domain-specific prompts, which are excluded from federated aggregation. During the fine-tuning stage, these prompts serve as semantic priors that condition the GFM's representations, facilitating improved adaptation to diverse downstream tasks. Concretely, during federated pre-training, each client maintains a set of learnable prompt tokens embedded in its local graph's feature space. For the $k$-th client, this prompt set is denoted as $\Phi^k = \{\phi_i^k\}_{i=1}^{\lambda}$ with $\phi_i^k \in \mathbb{R}^F$, where $\lambda$ is the number of prompts and $F$ is the feature dimensionality. Given the local graph $\mathcal{G}^k = (\mathcal{V}^k, \mathcal{E}^k)$ and node features $\{x_i^k\}_{v_i \in \mathcal{V}^k}$, node representations are enhanced by a weighted combination of prompts, with attention weights computed via $\lambda$ learnable linear projections:

$$ \tilde{x}_i^k = x_i^k + \sum_{j=1}^{\lambda} \alpha_j^k \phi_j^k, \quad \alpha_j^k = \frac{e^{(\mathbf{w}_j^k)^T x_i^k}}{\sum_{t=1}^{\lambda} e^{(\mathbf{w}_t^k)^T x_i^k}}, $$

where $\alpha_j^k$ reflects the relevance of the $j$-th prompt to node $v_i$, and $\mathbf{w}_j^k$ is the corresponding learnable projection vector. These prompts and projection weights are optimized together with the local GNN backbone through the self-supervised graph reconstruction task described in Eq. 2.
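The prompt-weighted feature enhancement above amounts to a softmax attention over $\lambda$ prompt tokens. A minimal numpy sketch, with toy sizes and random stand-ins for the learned prompts and projections:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, F, n = 4, 6, 3                  # lambda prompts, feature dim F, n nodes
prompts = rng.normal(size=(lam, F))  # Phi^k = {phi_j^k}
W = rng.normal(size=(lam, F))        # learnable projection vectors w_j^k
X = rng.normal(size=(n, F))          # node features x_i^k

# alpha_{ij} = softmax_j((w_j^k)^T x_i^k); max subtraction for stability
logits = X @ W.T                                       # (n, lam)
alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
alpha /= alpha.sum(axis=1, keepdims=True)

# x~_i = x_i + sum_j alpha_{ij} phi_j
X_tilde = X + alpha @ prompts
```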
During the fine-tuning stage, the client downloads the global model as the GFM and constructs an adaptive domain-sensitive prompt pool from the prompts learned during pre-training, denoted as $\rho = \{\phi_i^j\}_{i=1,j=1}^{\lambda,K}$ with projection vectors $\mathbf{w} = [\mathbf{w}^1, \ldots, \mathbf{w}^K]$. Given a target graph $G^{\mathrm{tgt}} = (\mathcal{V}^{\mathrm{tgt}}, \mathcal{E}^{\mathrm{tgt}})$, node features are augmented using this prompt pool. For each node $v_i \in \mathcal{V}^{\mathrm{tgt}}$ with feature $x_i^{\mathrm{tgt}}$, the enhanced representation is computed as:

$$ \tilde{x}_i^{\mathrm{tgt}} = x_i^{\mathrm{tgt}} + \sum_{p=1}^{K} \sum_{j=1}^{\lambda} \alpha_j^p \phi_j^p, \quad \alpha_j^p = \frac{e^{(\mathbf{w}_j^p)^T x_i^{\mathrm{tgt}}}}{\sum_{t=1}^{K} \sum_{l=1}^{\lambda} e^{(\mathbf{w}_l^t)^T x_i^{\mathrm{tgt}}}}. $$

As a result, FedGFM+ effectively capitalizes on domain-specific prompts acquired during pre-training, substantially improving its adaptability to heterogeneous domains and diverse downstream tasks in the fine-tuning phase.

# 5 Experiments

In this section, we present a comprehensive evaluation of FedGFM+. We begin by introducing the experimental setup (Sec. 5.1), and then seek to answer the following research questions: Q1: After task-specific fine-tuning, does the GFM trained by FedGFM+ consistently outperform (1) isolated supervised learning techniques, (2) state-of-the-art FGL baselines, and (3) naive federated variants of centralized GFM strategies across node-, edge-, and graph-level prediction tasks (Sec. 5.2)?
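The pool-based augmentation differs from the per-client version in one detail: the softmax normalizes jointly over all $K \cdot \lambda$ prompts (the double sum in the denominator), so prompts from different source domains compete for each target node. A minimal numpy sketch with assumed toy sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
K, lam, F = 3, 4, 6
pool = rng.normal(size=(K * lam, F))    # all K*lambda prompts, flattened
W_pool = rng.normal(size=(K * lam, F))  # matching projection vectors
x_tgt = rng.normal(size=(F,))           # a target-graph node feature

# Softmax over the ENTIRE pool, unlike the per-client normalization
# used during pre-training.
logits = W_pool @ x_tgt
alpha = np.exp(logits - logits.max())
alpha /= alpha.sum()
x_aug = x_tgt + alpha @ pool            # enhanced target representation
```

The joint normalization lets the model softly route each target node toward the most relevant source domains rather than weighting every domain equally.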
Q2: How does each individual module contribute to the overall performance of FedGFM+ (Sec. 5.3)? Q3: Is FedGFM+ robust to changes in hyperparameter configurations (Sec. 5.4)? In addition to the main evaluation, we further investigate the few-shot generalization ability (Q4) in Appendix D.

Table 2: Performance comparison of FedGFM+ and baselines. The best results within each baseline category are underlined. ‘\*’ denotes federated variants of centralized GFMs. ‘N/A’ denotes task inapplicability. Node, edge, and graph classification datasets are marked in red, yellow, and blue, respectively.

# 5.1 Experimental Setup

To evaluate the effectiveness of FedGFM+, we conduct experiments on 8 benchmark graph datasets spanning a range of domains and covering three key tasks: node classification (citation networks: Cora, PubMed [56], and OGB-Arxiv [19]; hyperlink networks: WikiCS [33]), edge classification (knowledge graphs: FB15K237 [37] and WN18RR [12]), and graph classification (molecule graphs: HIV, PCBA [47]). Each dataset is partitioned into 3 clients to simulate decentralized scenarios, and we report the average test performance (accuracy or AUC) across clients. We compare FedGFM+ against three baseline categories: (1) Isolated Supervised Models, trained independently on each client, including a linear layer, GCN, GAT, GraphSAGE, and GIN; (2) FL/FGL Approaches, including general-purpose methods such as FedAvg and MOON, and task-specific methods such as FedSage+, Fed-PUB, FedGTA, FedTAD, FGSSL, FGGP, GCFL+, and FedStar; and (3) Federated Variants of centralized GFM training strategies (OFA, GFT, UniGraph, GQT, GraphCLIP). More experimental details are provided in Appendix C.

# 5.2 Performance Comparison (Answers for Q1)

To answer Q1, we compare FedGFM+ with a range of competitive baselines, evaluating each configuration over 3 independent runs without fixed seeds.
As summarized in Table 2, FedGFM+ consistently achieves superior performance across all datasets and downstream tasks. Comparison with Isolated Supervised Learning. FedGFM+ consistently outperforms supervised backbones, confirming its strong cross-domain and cross-task generalization. Specifically, it improves over the best baselines by at least 2.70% in node classification, 2.18% in edge classification, and 3.09% in graph classification, demonstrating superior transferability and robustness. Comparison with FL/FGL Methods. As discussed in Section 1, existing FL/FGL methods are limited by data/task heterogeneity and reliance on task-specific information, restricting their training and evaluation scenarios. In contrast, FedGFM+ consistently outperforms them by enabling broad cross-domain and cross-task collaboration that captures general structural and semantic knowledge. Comparison with Federated Variants of Centralized GFM. As observed, naive federated GFM models often suffer from knowledge entanglement, causing them to fall below even the isolated supervised baselines (i.e., negative transfer). In contrast, FedGFM+ effectively addresses these issues via its two modules (i.e., AncDAI and AdaDPP), enabling efficient downstream adaptation.

# 5.3 Ablation Study (Answer for Q2)

To address Q2, we analyze FedGFM+'s two key modules. AncDAI guides the initialization of learnable tokens in the global gVQ-VAE codebook, while AdaDPP is applied during fine-tuning to improve adaptability to domain- and task-specific variations. An ablation study on 8 datasets (Table 3) shows that removing either module degrades performance. Notably, excluding AncDAI causes a larger drop than excluding AdaDPP, highlighting AncDAI's crucial role in reducing knowledge entanglement and boosting generalization. In summary, both modules are vital for FedGFM+'s effectiveness.
Table 3: Ablation study results for FedGFM+. Node, edge, and graph classification datasets are marked in red, yellow, and blue, respectively.

# 5.4 Sensitivity Analysis (Answer for Q3)

To address Q3, we perform a sensitivity analysis on key hyperparameters in FedGFM+. As a pre-training and fine-tuning framework, it involves many hyperparameters; here we focus on those in our core modules. For AncDAI, we vary the number of learnable tokens in the global gVQ-VAE codebook. For AdaDPP, we vary the number of learnable prompts per client. Results are shown in Fig. 4: (a) AncDAI maintains stable performance under different codebook sizes, indicating robust domain initialization; (b) AdaDPP performs well with few prompts, and is insensitive to the prompt number. Overall, FedGFM+ shows strong robustness to key hyperparameters.

Figure 4: Sensitivity analysis results for FedGFM+.
Recent advances in graph machine learning have shifted to data-centric paradigms, driven by two emerging fields: (1) Federated graph learning (FGL) enables multi-client collaboration but faces challenges from data and task heterogeneity, limiting its practicality; (2) Graph foundation models (GFM) offer strong domain generalization but are usually trained on single machines, missing out on cross-silo data and resources. These paradigms are complementary, and their integration brings notable benefits. Motivated by this, we propose FedGFM, a novel decentralized GFM training paradigm. However, a key challenge is knowledge entanglement, where multi-domain knowledge merges into indistinguishable representations, hindering downstream adaptation. To address this, we present FedGFM+, an enhanced framework with two core modules to reduce knowledge entanglement: (1) AncDAI: A global anchor-based domain-aware initialization strategy. Before pre-training, each client encodes its local graph into domain-specific prototypes that serve as semantic anchors. Synthetic embeddings around these anchors initialize the global model. We theoretically prove these prototypes are distinguishable across domains, providing a strong inductive bias to disentangle domain-specific knowledge. (2) AdaDPP: A local adaptive domain-sensitive prompt pool. Each client learns a lightweight graph prompt capturing domain semantics during pre-training. During fine-tuning, prompts from all clients form a pool from which the GFM selects relevant prompts to augment target graph attributes, improving downstream adaptation. FedGFM+ is evaluated on 8 diverse benchmarks across multiple domains and tasks, outperforming 20 baselines from supervised learning, FGL, and federated GFM variants.
[ "cs.LG", "cs.AI", "cs.DB", "cs.SI" ]
# 1 Introduction

Neural View Synthesis (NVS) has emerged as a transformative technology in computer vision and graphics, enabling the generation of photorealistic images from arbitrary camera viewpoints given sparse input views. The field has witnessed remarkable progress with the introduction of Neural Radiance Fields (NeRF) [28] and its subsequent evolution into 3D Gaussian Splatting (3DGS) [15], which has revolutionized real-time rendering capabilities while maintaining high visual fidelity. While 3DGS achieves superior visual quality compared to NeRF, it demands substantially more storage, significantly hindering practical deployment. Consequently, 3DGS compression has become critical, and recent state-of-the-art (SOTA) 3DGS algorithms [4, 7, 8, 17, 21, 23, 30, 31] increasingly incorporate compression modules. To guide the training of 3DGS models and identify optimal compression strategies, effective 3DGS quality assessment (QA) metrics are essential. However, most research still uses traditional image quality assessment (IQA) metrics to assess 3DGS model quality, which may neglect distortions unique to 3DGS and result in inaccurate predictions. Therefore, a comprehensive dataset of 3DGS images with diverse distortion effects is first needed for designing and evaluating 3DGS-QA metrics.

Table 1: Summary of existing NVS quality evaluation datasets, with “Syn” and “Real” representing synthetic and real scenes.

However, existing NVS-QA datasets [19, 24–26, 39, 41, 48] exhibit several critical limitations: (1) Lack of distortion sample design incorporating NVS compression. While compression has become a critical research priority for SOTA 3DGS algorithms, most existing datasets fail to integrate NVS models with diverse compression parameters, resulting in insufficient samples with varied distortion effects for effective metric training. (2) Severely constrained scales.
Due to the substantial time and storage costs required for training 3DGS, coupled with the necessity of per-scene individual training, most existing datasets contain fewer than 100 samples, and even the largest does not exceed 500, which is insufficient for training robust objective models. (3) Lack of attention to 3DGS image quality assessment. Both 3DGS-based IQA and video quality assessment (VQA) are critically important. At the application level, VQA simulates continuous viewpoint trajectories, mimicking how humans naturally explore 3D scenes and providing comprehensive quality evaluation. At the training level, IQA is essential for efficient 3DGS training feedback, requiring minimal rendering time compared to video evaluation and enabling investigation of view-dependent quality variations unique to 3DGS. However, existing datasets predominantly focus on VQA, with IQA-based 3DGS quality assessment remaining a significant research gap. To address these limitations, we present 3DGS-IEval-15K, the first large-scale image quality assessment dataset specifically designed for compressed 3DGS representations. Our dataset encompasses 15,200 images rendered from 10 real-world scenes through 6 mainstream 3DGS compression algorithms at 20 carefully selected viewpoints, with different compression levels leading to various distortion effects, providing unprecedented scale and systematic coverage of compression methods and distortion types. We establish a comprehensive benchmark encompassing 30 representative IQA metrics on 3DGS-IEval-15K, and systematically include deep learning-based and large language model (LLM)-based approaches for the first time on a 3DGS database, providing comprehensive evaluation capabilities that were previously unavailable due to insufficient dataset scale.
To facilitate investigation of 3DGS-specific characteristics, our viewpoint selection strategy identifies both representative training perspectives and challenging test perspectives with maximal differences from the training set, enabling fine-grained analysis of view-dependent quality variations unique to 3DGS. The main contributions of our work are summarized as follows:

- We construct the largest IQA dataset for 3DGS, featuring 15,200 samples with systematic compression-based distortion design. This enables the training of specialized 3DGS quality assessment models and facilitates the optimization of 3DGS generation processes.
- We establish the first comprehensive benchmark for 3DGS image quality assessment, systematically evaluating 30 objective quality metrics, including deep learning-based and LLM-based approaches, on compressed 3DGS content. This reveals the limitations of existing methods and provides suggestions for metric selection as well as guidance for future metric development.
- We provide essential data and foundations for investigating view-dependent quality distribution patterns unique to 3DGS. This offers insights into viewpoint-dependent reconstruction fidelity and suggests potential optimization strategies for 3DGS compression techniques that better align with human visual perception.

# 2 Related Work

As summarized in Table 1, NVS quality assessment has evolved alongside NVS technologies, transitioning from NeRF-based to 3DGS-focused evaluation. Early NeRF datasets established foundational frameworks: NeRF-QA [25] and NeRF-VSQA [24] provided initial benchmarks with 48 and 88 video samples, while FFV [19] pioneered pairwise comparison with 220 samples. ENeRF-QA [39] introduced systematic distortion design through NeRF compression across 440 samples, identifying nine NeRF-specific distortion types. As 3DGS emerged as the superior NVS technology, evaluation focus shifted accordingly.
GSC-QA [41] examined the compression effects of their 3DGS compression method with 120 samples, GSQA [26] compared 3DGS and NeRF methods across 64 samples, while NVS-QA [48] pioneered both video and image assessment with 65 samples each. However, existing datasets face critical limitations that hinder comprehensive evaluation. First, scales remain severely constrained, with most datasets containing fewer than 100 samples and even the largest not exceeding 500, insufficient for training robust objective models. Second, most lack systematic compression integration for distortion design, despite compression being essential for practical 3DGS deployment due to substantial storage requirements. Third, the predominant focus on video overlooks the irreplaceable importance of image quality assessment for 3DGS generation, where IQA enables efficient training feedback and investigation of view-dependent quality variations unique to 3DGS.

Figure 2: The 10 selected source contents in 3DGS-IEval-15K: scenes 1-5 depict outdoor scenes, while scenes 6-10 depict indoor scenes.

To address these limitations, we propose 3DGS-IEval-15K, the first large-scale IQA benchmark specifically designed for 3DGS evaluation. It contains 15,200 samples featuring systematically constructed compression-induced distortions, enabling comprehensive assessment of representative compressed 3DGS methods.

# 3 DATABASE CONSTRUCTION

# 3.1 Source Content Selection

Our dataset comprises 10 real-world scenes selected from three canonical multiview datasets in the NVS domain to ensure diverse visual characteristics for comprehensive quality assessment.
As illustrated in Figure 2, the collection includes six scenes from Mip-NeRF 360 [2]: three outdoor scenes, bicycle ($1237 \times 822$), flowers ($1256 \times 828$), and garden ($1297 \times 840$), and three indoor scenes, counter ($1558 \times 1038$), kitchen ($1558 \times 1039$), and room ($1557 \times 1038$). Additionally, two outdoor scenes are taken from Tanks & Temples [16]: train ($980 \times 545$) and truck ($979 \times 546$), and two indoor scenes are taken from Deep Blending [11]: playroom ($1264 \times 832$) and drjohnson ($1332 \times 876$). For each scene, we partition the viewpoints into disjoint training and testing sets. The training set is then used to reconstruct a 3DGS model. This selection presents varied challenges for 3DGS representation and compression, including both indoor and outdoor scenes, complex occlusions, different lighting conditions, reflective surfaces, and natural textures with high-frequency details. The resolution settings of each scene in the original 3DGS work are maintained, thereby standardizing the evaluation protocol and establishing a solid foundation for cross-study comparison and communication.

# 3.2 3D Viewpoint Selection

Viewpoint selection is critical for effective evaluation of 3DGS models. We propose a systematic strategy (Figure 3) that selects 10 representative viewpoints from the training set and 10 challenging viewpoints from the testing set described in Section 3.1, enabling robust assessment of view-dependent generalization and perceptual quality under diverse conditions. Training Viewpoint Selection: To select training viewpoints, we adopt a feature-based clustering strategy that ensures broad scene coverage while minimizing redundancy.
For each scene, all the viewpoints from the training set are encoded by a feature vector $\mathbf{f} = [\mathbf{p}, \beta\mathbf{d}]$, where $\mathbf{p}$ is the position, $\mathbf{d}$ is the viewing direction, and $\beta = 0.3$ balances their relative importance. These features are then normalized and partitioned using $k$-means clustering, resulting in 10 distinct clusters. From each cluster, we select the viewpoint closest to the centroid, ensuring that the chosen viewpoints optimally represent the distribution of available camera poses while maintaining diversity. Testing Viewpoint Selection: To rigorously assess the generalization capability of NVS, we select testing viewpoints that exhibit maximal distributional divergence from the training viewpoints. Each candidate is ranked using a composite score that integrates multiple criteria. The composite score for a viewpoint $j$ from the testing set is formulated as:

$$ S_{j} = w_{d} D_{j} + w_{s} S_{j} + w_{e} E_{j} + w_{\theta} \Theta_{j} $$

where $D_{j}$ represents normalized distance metrics, $S_{j}$ quantifies local sparsity, $E_{j}$ indicates extrapolation requirements, $\Theta_{j}$ measures directional novelty, and the $w$ factors are the respective weights (we set all $w$ values to 0.25).
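The clustering-based training-viewpoint selection described above can be sketched in a few lines of numpy. This is a hypothetical re-implementation: the feature normalization, k-means initialization, and iteration count are assumptions not specified in the text.

```python
import numpy as np

def select_training_viewpoints(positions, directions, k=10, beta=0.3, iters=50, seed=0):
    """Cluster viewpoint features f = [p, beta*d] with k-means and return,
    for each cluster, the index of the viewpoint closest to its centroid."""
    feats = np.hstack([positions, beta * directions]).astype(float)
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)  # normalize features

    rng = np.random.default_rng(seed)
    centroids = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):  # plain Lloyd iterations
        labels = np.linalg.norm(feats[:, None] - centroids[None], axis=2).argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = feats[labels == c].mean(0)

    chosen = []
    for c in range(k):  # nearest viewpoint to each centroid, kept distinct
        d = np.linalg.norm(feats - centroids[c], axis=1)
        d[chosen] = np.inf
        chosen.append(int(d.argmin()))
    return chosen
```

A plain Lloyd loop suffices here, since only 10 clusters over at most a few hundred camera poses are involved.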
The detailed definitions of these four criteria are as follows:

Maximum Distance: Preference for viewpoints maximally distant from any training viewpoints:

$$ D_{j} = \frac{1}{2} \left( \frac{\min_{i \in T} \left\| \mathbf{p}_{j} - \mathbf{p}_{i} \right\|}{\max_{k} \min_{i \in T} \left\| \mathbf{p}_{k} - \mathbf{p}_{i} \right\|} + \frac{\frac{1}{|T|} \sum_{i \in T} \left\| \mathbf{p}_{j} - \mathbf{p}_{i} \right\|}{\max_{k} \frac{1}{|T|} \sum_{i \in T} \left\| \mathbf{p}_{k} - \mathbf{p}_{i} \right\|} \right) $$

Sparsity: Prioritization of viewpoints in regions with low training-viewpoint density:

$$ S_{j} = 1 - \frac{\rho_{j} - \min_{k} \rho_{k}}{\max_{k} \rho_{k} - \min_{k} \rho_{k}}, \quad \text{where} \quad \rho_{j} = \frac{1}{\frac{1}{K} \sum_{l=1}^{K} d_{j,l} + \epsilon} $$

Extrapolation Potential: Selection of viewpoints outside the convex hull of the training viewpoints, necessitating extrapolation rather than interpolation:

$$ E_{j} = \begin{cases} 1 & \text{if } \mathbf{p}_{j} \notin \mathrm{ConvexHull}(\{\mathbf{p}_{i}\}_{i \in T}) \\ 0 & \text{otherwise} \end{cases} $$

Directional Diversity: Emphasis on viewing directions substantially different from the training viewpoints:

$$ \Theta_{j} = \frac{\max_{i \in T} \arccos(\mathbf{d}_{j} \cdot \mathbf{d}_{i}) - \min_{k} \max_{i \in T} \arccos(\mathbf{d}_{k} \cdot \mathbf{d}_{i})}{\max_{k} \max_{i \in T} \arccos(\mathbf{d}_{k} \cdot \mathbf{d}_{i}) - \min_{k} \max_{i \in T} \arccos(\mathbf{d}_{k} \cdot \mathbf{d}_{i})} $$

where $T$ is the set of training-viewpoint indices, $\mathbf{p}_{j}$ and $\mathbf{d}_{j}$ are the position and viewing direction of viewpoint $j$, $k$ ranges over all testing viewpoints, $d_{j,l}$ is the distance from viewpoint $j$ to its $l$-th nearest training viewpoint, $K$ is the number of nearest training viewpoints considered (we set $K = \min(10, |T|)$), and $\epsilon$ is a small constant to prevent division by zero.

Figure 3: Illustration of the proposed viewpoint selection strategy: training viewpoints are selected through feature-based $k$-means clustering, while testing viewpoints are chosen based on four criteria.

We compute the composite scores for all viewpoints in the testing set and select the top 10 with the highest scores. This targeted selection strategy establishes a more rigorous evaluation protocol than random or uniform sampling by emphasizing viewpoints that demand significant interpolation or extrapolation from the training data. It thereby facilitates a more reliable assessment of model generalization in challenging novel view synthesis scenarios.

# 3.3 3DGS Model and Bitrate Point Selection

# 3.3.1 3DGS Model Selection

3DGS is a novel view synthesis approach that represents scenes using a collection of learnable 3D Gaussians.
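Returning to the testing-viewpoint criteria of Section 3.2, the composite scoring can be sketched as follows. This is a simplified numpy sketch: the convex-hull membership test for $E_j$ is replaced by a cheaper bounding-box check, and the array shapes and variable names are assumptions.

```python
import numpy as np

def select_test_viewpoints(train_p, train_d, test_p, test_d, n=10, K=10, eps=1e-8):
    """Rank testing viewpoints by S_j = 0.25 * (D_j + S_j + E_j + Theta_j)."""
    K = min(K, len(train_p))
    dist = np.linalg.norm(test_p[:, None] - train_p[None], axis=2)  # (M, T)

    # D_j: normalized nearest-train distance plus normalized mean-train distance
    dmin, dmean = dist.min(1), dist.mean(1)
    D = 0.5 * (dmin / (dmin.max() + eps) + dmean / (dmean.max() + eps))

    # S_j: inverse mean distance to the K nearest training viewpoints, min-max scaled
    rho = 1.0 / (np.sort(dist, axis=1)[:, :K].mean(1) + eps)
    S = 1.0 - (rho - rho.min()) / (rho.max() - rho.min() + eps)

    # E_j: bounding-box stand-in for the convex-hull membership test
    lo, hi = train_p.min(0), train_p.max(0)
    E = ((test_p < lo) | (test_p > hi)).any(1).astype(float)

    # Theta_j: maximal angular deviation from training directions, min-max scaled
    theta = np.arccos(np.clip(test_d @ train_d.T, -1.0, 1.0)).max(1)
    Th = (theta - theta.min()) / (theta.max() - theta.min() + eps)

    return np.argsort(-0.25 * (D + S + E + Th))[:n]
```

A proper convex-hull test (e.g. via a linear-programming membership check) would tighten $E_j$; the bounding-box proxy only under-counts extrapolating viewpoints.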
Each Gaussian stores two categories of attributes: geometric properties, including position $\mu \in \mathbb{R}^{3}$, opacity $\alpha \in \mathbb{R}$, and covariance matrix $\Sigma$ (parameterized by a scale $s \in \mathbb{R}^{3}$ and rotation parameters), and view-dependent color properties represented by spherical harmonics (SH) coefficients. Despite superior visual quality, 3DGS requires substantial storage resources, making compression essential for practical deployment. Consequently, 3DGS methods increasingly incorporate dedicated compression modules. In this study, we select six such representative 3DGS algorithms to construct our comprehensive evaluation dataset: LightGS [7] adopts a pruning-then-quantization approach, removing insignificant Gaussians (controlled by prune_percents) and compressing the remaining color coefficients via vector quantization (controlled by vq_ratio and codebook_size). c3dgs [31] uses intelligent parameter grouping through $k$-means clustering, where codebook_size determines compression strength and importance_include thresholds filter parameters based on their visual contribution. Compact 3DGS [17] replaces the color representation with hash-grid neural networks (controlled by hashmap size) and applies multi-level vector quantization to geometry (controlled by codebook_size and rvq_num depth), achieving over $25\times$ storage reduction. CompGS [30] applies straightforward vector quantization to both geometry and color attributes, with the compression level solely controlled by codebook_size. HAC [4] employs context-aware compression by modeling spatial relationships between anchors, with a lambda parameter balancing compression rate and visual quality through adaptive quantization. Scaffold [23] fundamentally changes the representation by using anchor points to generate local Gaussians dynamically, where the vsize parameter controls the spatial resolution of anchor placement.
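To make the storage pressure concrete, here is a back-of-the-envelope estimate of an uncompressed model's footprint, assuming the common 3DGS configuration (fp32 attributes, SH degree 3) and an illustrative count of one million Gaussians per scene:

```python
# Back-of-the-envelope storage cost of one uncompressed 3DGS scene, assuming
# fp32 attributes and SH degree 3; the per-scene Gaussian count of ~1M is an
# illustrative assumption, not a value from the paper.
SH_DEGREE = 3
floats_per_gaussian = (
    3                            # position mu in R^3
    + 4                          # rotation quaternion
    + 3                          # scale s in R^3
    + 1                          # opacity alpha
    + 3 * (SH_DEGREE + 1) ** 2   # RGB spherical-harmonic coefficients
)
bytes_per_gaussian = 4 * floats_per_gaussian      # fp32
model_mb = 1_000_000 * bytes_per_gaussian / 1e6
print(floats_per_gaussian, bytes_per_gaussian, round(model_mb))  # 59 236 236
```

Hundreds of megabytes per scene is exactly the regime where the pruning, vector quantization, and anchor-based schemes listed above become necessary.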
# 3.3.2 Compression Parameter Level Design

Through analysis of these 6 representative mainstream algorithms and their compression strategies and parameters, we find that their distortion effects can be broadly categorized as controlling geometric distortion or color distortion. Based on the underlying principles and compression parameter types, we categorize the selected 3DGS methods into two types: (1) Single-compression-parameter algorithms are primarily anchor-based methods controlled by one key compression parameter that mainly affects geometric distortion, such as HAC and Scaffold-GS. (2) Multi-compression-parameter algorithms directly compress different 3DGS attributes with multiple compression strategies and parameters, affecting both geometry and color distortions, including LightGS, c3dgs, Compact-3DGS, and CompGS. To better study the distortion types and degrees introduced by different compression parameters and algorithms, we design different Compression Levels (CL) for the key compression parameters that affect geometry and color separately. Through pairwise combinations of these CL, we ultimately obtain different 3DGS Distortion Levels (DL) and types. For multi-compression-parameter algorithms, we design 4 CL for the key parameters controlling geometry distortion and color distortion separately, as shown in Table 2, generating $4 \times 4 = 16$ DL through pairwise combinations. For single-compression-parameter algorithms, we design 6 CL for their key compression parameter, as shown in Table 3, corresponding to 6 DL to cover a wide quality range. In total, we train $10 \text{ scenes} \times (4 \times 16 + 2 \times 6) \text{ DLs} = 760$ 3DGS models, and obtain $760 \text{ models} \times 20 \text{ viewpoints} = 15{,}200$ distorted 3DGS images.
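The construction arithmetic can be sanity-checked in a few lines: the per-method DL counts multiply out to 760 trained models, consistent with 15,200 images at 20 rendered viewpoints per model.

```python
# Sanity check of the dataset-construction arithmetic: 10 scenes, four
# multi-parameter methods with 16 DL each, two single-parameter methods
# with 6 DL each, and 20 rendered viewpoints per trained model.
scenes = 10
models = scenes * (4 * 16 + 2 * 6)
images = models * 20
print(models, images)  # 760 15200
```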
# 3.4 Subjective Experiment and Data Processing

To evaluate the quality of the images in 3DGS-IEval-15K, we utilize a double-stimulus impairment scale, with the reference and 3DGS-synthesized images displayed side by side. For the MOS annotation, we use the 11-level impairment scale proposed in ITU-T P.910 [13]. The images are displayed using an interface designed with Python Tkinter, as illustrated in Figure 1(d). The experiment was carried out on a 27-inch AOC Q2790PQ monitor in an indoor laboratory environment under standard lighting conditions. To prevent visual fatigue caused by overly long sessions, the 15,200 images are randomly divided into 8 smaller groups. A total of 60 students with different backgrounds participated in the experiment.

Table 2: 4 CL designed separately for the color-distortion-controlling and geometry-distortion-controlling compression parameters of multi-compression-parameter methods; pairwise combinations of these parameters at various CL generate 16 DL of 3DGS models for each method.

Table 3: 6 CL designed for the key parameter of single-compression-parameter methods, generating 6 DL of 3DGS models for each accordingly.

Figure 4: MOS distribution for both training and testing viewpoints, with fitted density curves overlaid on the histogram.

ITU-R BT.500 [3] is applied to conduct outlier detection and subject rejection. The score rejection rate is $2\%$. Finally, by averaging the retained subjective scores, we obtain the MOS of each image.

# 4 Experiments

# 4.1 Scene Content Diversity Analysis

To validate the diversity of the selected source scenes, we measure geometry and color complexity using spatial perceptual information (SI) [13] and the colorfulness metric (CM) [3], respectively.
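As an illustration of the colorfulness side, a widely used definition is Hasler and Süsstrunk's opponent-channel measure; we assume that formulation here, though the paper's exact CM may differ.

```python
import numpy as np

def colorfulness(img):
    """Hasler-Susstrunk colorfulness of an (H, W, 3) RGB array.

    Assumed formulation for the 'CM' metric; the paper's exact definition
    may differ.
    """
    r, g, b = (img[..., c].astype(float) for c in range(3))
    rg = r - g                 # red-green opponent channel
    yb = 0.5 * (r + g) - b     # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return sigma + 0.3 * mu
```

A pure gray image scores exactly zero, and the score grows with both the spread and the mean saturation of the opponent channels.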
As shown in Figure 4(a), where each numbered point corresponds to the scene index in Figure 2, the uniform distribution across both complexity dimensions confirms that our dataset covers diverse visual characteristics, ensuring comprehensive evaluation of 3DGS methods across varying challenging scenarios.

# 4.2 MOS and Viewpoint-Based Quality Analysis

MOS Distribution: As presented in Figure 4(b), our dataset demonstrates comprehensive quality coverage across the entire 0-10 MOS range with sufficient samples in each score segment, ensuring adequate representation of varying distortion levels and providing a robust foundation for training quality assessment models. Inter-View Quality Disparity: As shown in Figure 4(b), test viewpoints exhibit lower MOS scores than training viewpoints, with distributions peaking around 4-5 versus 5-6, respectively. Specifically, for MOS scores $\leq 6$, novel views contribute more samples, while for $\mathrm{MOS} > 6$, training views outnumber test views. This represents the first systematic investigation of quality differences between training and novel viewpoints in 3DGS, revealing substantial view-dependent quality variation that provides insights for future 3DGS optimization and quality assessment design.

# 4.3 Performance on Score Prediction

# 4.3.1 Experiment Settings

Evaluation Metrics. To evaluate the correlation between the predicted scores and the ground-truth MOSs, we employ three widely used evaluation criteria: the Spearman Rank-order Correlation Coefficient (SRCC), the Pearson Linear Correlation Coefficient (PLCC), and the Kendall Rank Correlation Coefficient (KRCC). Reference Algorithms. To thoroughly investigate the performance of existing evaluation methods on the 3DGS-IEval-15K dataset, we select 30 representative image quality assessment algorithms, which can be classified into three groups: handcrafted IQA models, LLM Zero-Shot models, and deep learning-based IQA models. Dataset Partitioning.
We construct three distinct subsets of the dataset to isolate specific distortion types: Geometry-Only, Color-Only, and Geometry & Color Mix. The Geometry-Only subset contains samples exhibiting purely geometric distortions, while the Color-Only subset includes samples with only color-related degradations. The Geometry & Color Mix subset consists of samples where each image simultaneously contains both geometric and color distortions. For each of these subsets, we apply a 4:1 train-test split across all scenes. Based on these subsets, we further construct the All setting by concatenating the training sets and testing sets of the three subsets, respectively, thus also preserving the 4:1 training-to-testing ratio. This setup enables a comprehensive evaluation across individual and combined distortion types while ensuring consistent data partitioning. Training Settings. Traditional handcrafted metrics are directly evaluated on the corresponding datasets. LLM Zero-Shot models use pre-trained weights for inference. Deep learning-based IQA models are trained only on the training set of the All configuration and evaluated directly on the test sets of all four configurations without any fine-tuning on the individual subsets. This experimental design enables us to examine the same model's performance across different distortion scenarios, thereby comprehensively exploring quality assessment methods' generalization capabilities across diverse distortion types.

Table 4: Performance benchmark on 3DGS-IEval-15K. ♠ Handcrafted IQA models, ♦ Deep learning-based IQA models, ○ LLM Zero-Shot models.

# 4.3.2 Results and Analysis

The results reveal distinct performance patterns across the three model categories. Deep learning-based IQA models achieve the highest performance, with top methods like HYPERIQA and MANIQA reaching SRCC values exceeding 0.93 on the All dataset.
Handcrafted IQA models demonstrate moderate performance, with FSIM achieving the best SRCC of 0.7327, while traditional metrics like PSNR and SSIM show substantially lower correlations, around 0.64-0.68. LLM Zero-Shot models exhibit the most varied performance, ranging from near-zero correlations (Llama3.2-Vision: 0.0681) to competitive results (Q-Align: 0.7711), though notably, these models were not fine-tuned for the quality assessment task. The superior performance of deep learning-based methods stems from their learned perceptual representations, which better capture human visual perception, while handcrafted metrics rely on predetermined mathematical formulations that may not align with human judgment. The variable performance of LLM Zero-Shot models reflects their primary design for general visual understanding rather than specialized quality assessment, though their semantic reasoning capabilities show promise for this domain. Examining performance across distortion-specific evaluations reveals a consistent pattern: most methods exhibit performance degradation when evaluated on isolated distortion types compared to the comprehensive All dataset. For instance, MANIQA's SRCC drops from 0.9356 (All) to 0.8443 (Geometry-Only) and 0.8999 (Color-Only). Similarly, HYPERIQA shows a decline from 0.9407 (All) to 0.8785 (Geometry-Only) and 0.9086 (Color-Only). This phenomenon indicates that while these models achieve strong overall performance, they struggle with domain-specific distortions that differ from their training distribution. The performance gaps suggest that the models benefit from the diverse distortion patterns present in the All training set, and that their generalization to isolated distortion types remains challenging, highlighting the importance of distortion-specific evaluation for comprehensive model assessment.
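For reference, the three correlation criteria reported throughout this section can be computed as follows. This is a minimal sketch that ignores tied values; in practice scipy.stats (spearmanr, pearsonr, kendalltau) is the standard choice.

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def srcc(x, y):
    """Spearman rank-order correlation: Pearson computed on the ranks (no ties)."""
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return plcc(rank(np.asarray(x)), rank(np.asarray(y)))

def krcc(x, y):
    """Kendall tau-a: (concordant - discordant) pairs over all pairs, O(n^2)."""
    n, s = len(x), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((x[i] - x[j]) * (y[i] - y[j]))
    return float(s / (n * (n - 1) / 2))
```

SRCC and KRCC depend only on rank order, which is why PLCC is usually reported alongside them (often after a nonlinear logistic mapping of predictions to MOS).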
3D Gaussian Splatting (3DGS) has emerged as a promising approach for novel view synthesis, offering real-time rendering with high visual fidelity. However, its substantial storage requirements present significant challenges for practical applications. While recent state-of-the-art (SOTA) 3DGS methods increasingly incorporate dedicated compression modules, there is no comprehensive framework to evaluate their perceptual impact. Therefore, we present 3DGS-IEval-15K, the first large-scale image quality assessment (IQA) dataset specifically designed for compressed 3DGS representations. Our dataset encompasses 15,200 images rendered from 10 real-world scenes through 6 representative 3DGS algorithms at 20 strategically selected viewpoints, with different compression levels leading to various distortion effects. Through controlled subjective experiments, we collect human perception data from 60 viewers. We validate dataset quality through scene diversity and MOS distribution analysis, and establish a comprehensive benchmark with 30 representative IQA metrics covering diverse metric types. As the largest-scale 3DGS quality assessment dataset to date, our work provides a foundation for developing 3DGS-specialized IQA metrics, and offers essential data for investigating the view-dependent quality distribution patterns unique to 3DGS. The database is publicly available at https://github.com/YukeXing/3DGS-IEval-15K.
# Introduction

Recent advancements in image generation (Cao et al. 2024; Zhang et al. 2023; Xu et al. 2024; Ruiz et al. 2023) and video generation (Bar-Tal et al. 2024; Chen et al. 2025; Kong et al. 2024) have demonstrated remarkable success, enabling a wide range of applications in computer graphics, cultural heritage, and the arts. Despite this substantial progress, particularly with diffusion-based models, the capability to generate videos with richer and more complex motion dynamics remains underexplored, especially in scenarios involving multi-character interaction. As illustrated in Fig. 1, prominent existing models such as WanX (Wan et al. 2025) face significant challenges in synthesizing smooth and rich interactive actions, particularly in long-term generation tasks. To mitigate these challenges, various methods (Hu 2024; Peng et al. 2024; Wang et al. 2024c; Zhang et al. 2024b; Tan et al. 2024) have been developed that incorporate additional conditioning information, such as human motion priors, key frames, or reference videos, to enhance generation quality. Among these, using human motion data as a prior is a highly effective strategy to mitigate limb distortions, enhance motion expressiveness, and ensure long-term temporal coherence. The increasing reliance on motion priors has, in turn, placed higher demands on motion generation models themselves. Current models predominantly operate in 3D space (Jiang et al. 2023a; Guo et al. 2024; Zhang et al. 2024c; Li et al. 2024; Tevet et al. 2023; Barquero, Escalera, and Palmero 2023; Chen et al. 2023; Zhang et al. 2024a; Liang et al. 2024), mainly employing diffusion-based or autoregressive (e.g., GPT-based) frameworks. While many have achieved considerable success in generation quality and inference speed, a significant limitation is their predominant focus on single-character motion.
Consequently, generated motions often lack diversity and fail to capture the nuances of interactions common in real-world scenarios. This limitation is largely attributed to the restricted scale and diversity of available training data. For instance, HumanML3D (Guo et al. 2022), a widely used dataset, contains only approximately 14,000 single-character motion clips. Although datasets for multi-character motion, like InterHuman (Liang et al. 2024) (with around 7,779 dual-character sequences), represent initial steps, they too suffer from limited scale and diversity. This data scarcity primarily stems from the inherent difficulty and high cost associated with capturing 3D motion data, which often necessitates expensive, high-precision motion capture systems. 2D motion representation offers a compelling alternative to address the data acquisition challenge, as annotations can be obtained more cost-effectively by leveraging advanced 2D pose estimation models (Jiang et al. 2023b; Khirodkar et al. 2024) and large language models (LLMs) (GPT4o 2024; DeepMind 2025; Ye et al. 2024) for automated data processing from readily available videos. Furthermore, focusing on 2D motion is highly pertinent for image and video motion generation tasks, as it can simplify certain spatial complexities inherent in 3D representations while still providing sufficiently rich prior information.

Prompt: Person 1, dressed in a white karate uniform with a black belt, is performing a series of martial arts moves, including punches and kicks, in a dark room. The sequence of movements is fluid and continuous, with the person's body and limbs moving in a coordinated manner.

Prompt: Person 1 is performing a yoga pose on a mat, with their left leg extended upwards and their right arm reaching towards the sky. They are seated on a pink and white mat, surrounded by yoga props such as a bowl and a bell. The background is a lush green.

Prompt: Person 1 and Person 2 are performing a workout routine in a gym, using two wooden boxes for exercises. They are seen in various positions, such as standing, lunging, and stretching, with their arms and legs in different configurations.

Prompt: Person 1 and Person 2 are working together to plant trees on a beach. They are both wearing casual clothing and using shovels to dig into the sand. The relationship between them is cooperative and focused on their task.

Figure 1: Qualitative results illustrating our proposed text-to-motion generation (RVHM2D) and subsequent skeleton-driven video synthesis. We choose Wan2.1-T2V-14B to generate the videos. These examples demonstrate RVHM2D's capability to produce textually-aligned and coherent motions that effectively drive high-quality video generation, showcasing significant improvements in motion fidelity and textual consistency over comparative methods.

Recognizing this, recent efforts have aimed to curate large-scale 2D human motion datasets by mining vast quantities of internet videos. For example, (Wang et al. 2024b) collected and annotated an extensive dataset of motion-caption pairs using pose estimation and LLMs, scaling 2D human motion data to one million pairs. Building on this, (Wang et al. 2025) introduced more fine-grained annotation and cleaning pipelines, proposing a dataset of 1.2 million motion-caption pairs for human pose generation. Despite these significant advancements in scale, a critical gap remains: these datasets still predominantly focus on single-character motions, largely overlooking the complexities of rich, multi-character interactive actions. To address the identified gap in data availability for rich interactive motions, we introduce Motion2D-Video-150K, a novel large-scale dataset for 2D human motion generation.
Motion2D-Video-150K comprises 150,000 motion sequences, specifically curated to cover a diverse range of both single-character and, crucially, double-character interactive motions, effectively leveraging vast Internet data. Furthermore, building upon Motion2D-Video-150K, we propose RVHM2D, a novel human motion generation model. RVHM2D is designed to synthesize high-fidelity 2D human motion conditioned on both textual prompts and an initial motion frame. Notably, RVHM2D innovatively incorporates a reinforcement learning framework where the Fréchet Inception Distance (FID) serves as a reward signal to further enhance the quality and realism of the generated motions. The 2D motions generated by RVHM2D can then be seamlessly applied to drive video generation tasks using existing frameworks like skeleton-based ControlNet (Zhao et al. 2024; Zhang, Rao, and Agrawala 2023), as exemplified in Fig. 1. Our primary contributions are summarized as follows:

• We introduce Motion2D-Video-150K, a new large-scale 2D rich motion dataset containing 150,000 sequences. To the best of our knowledge, Motion2D-Video-150K is the first and largest publicly available dataset of its kind to comprehensively feature both single-character and complex double-character interactive motions.

• We propose RVHM2D, a novel generative model for 2D human motion conditioned on both text prompts and an initial motion frame. RVHM2D effectively unifies the generation of single-character and double-character rich motions within a single, coherent framework and is capable of generating motions up to 300 frames.

• We innovate by formulating the motion generation training process for RVHM2D as a reinforcement learning task, uniquely employing the Fréchet Inception Distance (FID) as a reward signal to guide the model towards generating higher-quality and more perceptually realistic motions.
• We conduct comprehensive experiments demonstrating that RVHM2D, trained on Motion2D-Video150K, achieves state-of-the-art performance against reimplemented baseline methods in the task of diverse and realistic human motion generation. # Related Work # Controllable Text-to-Video Generation Recent efforts in text-to-video generation have increasingly focused on controllability (Bar-Tal et al. 2024; Chen et al. 2025; Peng et al. 2024; Wang et al. 2024c). Given that motion is a primary differentiator from static images, leveraging motion priors is a key strategy. For instance, ControlVideo (Peng et al. 2024), analogous to ControlNet (Zhao et al. 2024) for images, introduced training-free control by incorporating cross-frame interactions into ControlNet’s attention modules, improving video quality and consistency. Many other works (Feng et al. 2023; Ma et al. 2024; Zhai et al. 2024; Li et al. 2025) have also adopted similar structures. Other recent works combine human motion with reference images to enhance character animation and robustness (Hu 2024; Zhang et al. 2024b; Tan et al. 2024). However, despite these advancements, generating videos with complex, richly interactive multi-character scenarios remains a significant challenge, often limited by the expressiveness or availability of suitable motion priors. # Text-to-Motion Generation Text-to-motion generation (Sun et al. 2024; Zhang et al. 2024a; Shafir et al. 2023; Liang et al. 2024; Tevet et al. 2023; Jiang et al. 2023a; Barquero, Escalera, and Palmero 2023) is a prominent task due to the intuitive nature of textual input. Early efforts often focused on 3D single-character human motion, with main approaches including LLM-based methods (Jiang et al. 2023a; Guo et al. 2024; Zhang et al. 2024c; Li et al. 2024) that autoregressively generate tokenized motion, and diffusion-based methods (Tevet et al. 2023; Barquero, Escalera, and Palmero 2023; Chen et al. 2023; Zhang et al. 2024a; Liang et al. 
2024) that learn a denoising process, sometimes also using VQ-VAE for discretization. While recent advancements like MotionLCM (Dai et al. 2024) and Motion Mamba (Zhang et al. 2024d) have improved single-character motion quality and efficiency, a persistent limitation, even in these advanced models, is the generation of rich, interactive motions involving multiple characters. Specific attempts at multi-character motion generation exist. For example, priorMDM (Jiang et al. 2023a) extended MDM for two-person scenarios using an additional Transformer, InterGen (Liang et al. 2024) redesigned the diffusion process leveraging interaction symmetries, and MoMat-MoGen (Cai et al. 2024) employed a retrieval-based strategy for priors. Our proposed RVHM2D model also adopts a diffusion-based approach but aims to unify high-quality single- and multi-character (specifically, double-character) motion generation within a single framework, conditioned on rich textual prompts and benefiting from our new Motion2D-Video-150K dataset. Despite these efforts, existing multi-character motion generation still faces critical hurdles: 1) a significant scarcity of large-scale, diverse training data for complex interactions (which our Motion2D-Video-150K dataset aims to alleviate); 2) limited realism and complexity in the generated interactions; and 3) difficulties in precise semantic and detailed control of these interactions.

Figure 2: The architecture of our proposed RVHM2D model for human motion generation.

# Reinforcement Learning for Generative Modeling

Reinforcement Learning (RL) provides a paradigm for optimizing objectives through interaction. RL is typically formalized as an MDP $(S, A, P, R, \gamma)$, with the goal of finding a policy $\pi(a|s)$ that maximizes cumulative reward. In generative modeling, RL can optimize non-differentiable metrics or fine-tune models. Numerous studies (Wallace et al. 2024; Ethayarajh et al. 2024; Wang et al. 2024a; Cideron et al. 2024; Collins et al.
2024) have employed RL to enhance model performance across various tasks. However, its application to human motion synthesis, particularly for directly enhancing perceptual quality using metrics like the Fréchet Inception Distance (FID), remains relatively underexplored. Our work explores the integration of an FID-based objective to further refine the generation quality of our RVHM2D model.

# Method

In this section, we first detail the collection, annotation, and cleaning pipeline for our proposed Motion2D-Video-150K dataset. Subsequently, we present the methodology underpinning our model, RVHM2D.

# The Motion2D-Video-150K Dataset: A 150K Rich Video Human-Motion2D Dataset

Addressing the limitations of existing datasets in capturing diverse and interactive multi-character motions, as discussed above, we construct Motion2D-Video-150K, a large-scale 2D rich motion dataset. Data Sources: The Motion2D-Video-150K dataset is curated from two primary sources to ensure diversity in motion and character interactions. Firstly, we incorporate data from established open-source human motion datasets, including HAA500 (Chung et al. 2021), the Penn Action Dataset (Zhang, Zhu, and Derpanis 2013), and UCF101 (Soomro, Zamir, and Shah 2012). These datasets predominantly feature single-character, human-centric videos, which are valuable for learning fundamental human skeletal structures and movements. Secondly, to gather a rich collection of double-character interactions, we collected over 500,000 video clips from online platforms. The search queries for this collection were generated using GPT-4o and included terms such as "group work" and "cooperation", with a focus on capturing two-character interactions. Data Annotation: Our Motion2D-Video-150K dataset consists of 2D motion sequences paired with textual descriptions. We define a video sample in our dataset, $V$, as containing one or more character motion sequences.
Each individual character's 2D skeleton sequence, $s_{c} = \{k_{j}\}_{j=1}^{L_{c}}$, comprises $L_{c}$ frames, where $k_{j} \in \mathbb{R}^{N \times 3}$ represents the $N = 17$ keypoints at frame $j$. Each keypoint is defined by its x-coordinate, y-coordinate, and a confidence score. Each video sample $V$ is also associated with a textual description $c_{t}$. For motion annotation, we employed RTMPose-Large (Jiang et al. 2023b), a robust model that integrates human bounding box detection and subsequent skeleton keypoint estimation.

Figure 3: The annotation and data cleaning pipeline for our Motion2D-Video-150K human motion 2D dataset. This pipeline involves initial pose and text annotation followed by rigorous filtering based on limb integrity, motion smoothness, and contextual stability.

Concurrently, for textual annotation, we utilized the Gemini 2.0 Flash model (DeepMind 2025) as well as the Owl3 model (Ye et al. 2024) with the following prompt, designed to elicit detailed descriptions of actions, interactions, and spatial relationships: "You are an AI specialized in analyzing video sequences. Given a series of images from the same video clip, describe the actions, interactions, and positional relationships of the characters in a single sentence, treating the images as a continuous sequence. Follow these requirements: 1. Number the characters from left to right based on their positions in the first image, and consistently refer to each character by the same number throughout. 2. Describe the characters' actions and their left-to-right positional relationships as they change across the sequence. 3. Use a sentence structure starting with 'Person 1 ...'. 4. Avoid mentioning the characters' clothing or the video's background. 5.
Do not describe each image individually or include introductory phrases like 'Here is a description of the video.'" Data Cleaning: Initial annotations inevitably contained noisy samples, such as sequences with incomplete bodies, inconsistent numbers of tracked persons, overly erratic movements, or pose misdetections. To ensure data quality, we developed a multi-stage cleaning pipeline, illustrated in Fig. 3, focusing on three key aspects: • Limb Integrity: A character is considered valid if the average confidence score across all 17 keypoints is above 0.5 and the average confidence score for facial keypoints also exceeds 0.5. • Motion Smoothness: Assuming that human motion in videos should be relatively smooth and continuous, we penalize overly erratic movements. For each valid character, we calculate the mean inter-frame displacement of their keypoints. Let $k_{f,p}$ be the pose vector for person $p$ at frame $f$. We compute a frame-wise difference, e.g., $d_{f} = \frac{1}{N} \sum_{i=1}^{N} \| k_{f,p,i} - k_{f-1,p,i} \|_{2}$. If $d_{f}$ frequently exceeds a predefined threshold $\tau_{smooth}$, the video segment is flagged. Videos with an excessive number of such irregular movements are discarded. • Contextual Stability: We monitor significant fluctuations in the number of validly tracked persons throughout a video. Sequences exhibiting unstable tracking or frequent changes in the number of interacting characters are removed to maintain contextual consistency. Dataset Analysis and Statistics: Following the rigorous cleaning process, the Motion2D-Video-150K dataset comprises 150,000 high-quality 2D rich motion sequences. A key characteristic of Motion2D-Video-150K is its emphasis on interactive scenarios; the ratio of video segments containing two-person motions to those with single-person motions is approximately 1.5:1.
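The limb-integrity and motion-smoothness filters described above can be sketched as follows. The 17-keypoint layout and the 0.5 confidence cutoff come from the text; the face-keypoint indices, the value of `tau_smooth`, and the flag-count cutoff `max_flagged` are illustrative assumptions, not values reported in the paper.

```python
# Sketch of the Motion2D-Video-150K cleaning filters. A pose sequence is a
# per-frame list of 17 (x, y, confidence) keypoints for one character.
import math

FACE_KEYPOINTS = range(0, 5)  # assumed COCO-17 order: nose, eyes, ears

def limb_integrity_ok(pose_seq, conf_thresh=0.5):
    """Average confidence over all 17 keypoints and over the face
    keypoints must both exceed the threshold."""
    all_conf = [k[2] for frame in pose_seq for k in frame]
    face_conf = [frame[i][2] for frame in pose_seq for i in FACE_KEYPOINTS]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(all_conf) > conf_thresh and mean(face_conf) > conf_thresh

def motion_smooth_ok(pose_seq, tau_smooth=20.0, max_flagged=5):
    """Mean inter-frame keypoint displacement d_f; too many frames above
    tau_smooth marks the segment as overly erratic."""
    flagged = 0
    for prev, cur in zip(pose_seq, pose_seq[1:]):
        d_f = sum(math.hypot(c[0] - p[0], c[1] - p[1])
                  for p, c in zip(prev, cur)) / len(cur)
        flagged += d_f > tau_smooth
    return flagged <= max_flagged
```

A near-static sequence passes both checks, while a sequence whose keypoints jump large distances between frames trips the smoothness filter.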
In terms of temporal length, the dataset contains sequences of varying durations, with many extending up to 300 frames, which also defines the maximum sequence length processed by our proposed model RVHM2D. Motion2D-Video-150K spans over 300 distinct motion categories, encompassing a wide array of both single-character and intricate double-character interactions. An example illustrating the annotated 2D skeletons and the corresponding generated caption is provided in Fig. 4. Data Splits and Availability: The complete Motion2D-Video-150K dataset comprises 150,000 motion sequences. For evaluation purposes, we randomly sampled 11,682 sequences to constitute the test set. This test set is carefully balanced to reflect the diverse nature of our dataset, containing 6,260 single-character motion sequences and 5,422 double-character (interactive) motion sequences. These sequences are held out and used exclusively for final performance reporting. All the remaining data are used to train our model.

Figure 4: Example videos, annotated 2D skeleton motions, and generated captions from Motion2D-Video-150K, e.g., a single-character yoga sequence ("Person 1 starts in a kneeling position on a yoga mat, then extends one leg...") and a two-character walking sequence ("Person 1 and Person 2 are walking side by side... both holding cups in their hands").

# RVHM2D: A Diffusion-based Rich Motion Generation Model Our proposed model, RVHM2D, is designed to generate rich 2D human motions, including complex single- and double-character interactions, conditioned on textual prompts and optionally an initial reference motion frame. RVHM2D integrates several key components: advanced textual feature utilization, a reinforcement learning training method using Fréchet Inception Distance (FID), and a specialized diffusion model architecture. Model Architecture: Similar to InterGen (Liang et al.
2024), RVHM2D adopts a dual-tower structure to ensure stable modeling of the interaction between two characters. To integrate both single- and double-character scenarios into one model, we add "1" or "2" to the text prompt to indicate the number of people in the current sample, and for single-character cases we replicate the same skeleton twice to simulate a double-character interaction. For text conditioning, we use the CLIP-L and CLIP-B text encoders together: each text input passes through both encoders and their outputs are concatenated to form the text features (we also experimented with a T5-XXL text encoder). We retain their final global features and local token features. To accommodate text prompts exceeding the standard token limit and preserve all textual information, we split the tokens into several sequences, process each sequence separately with the text encoder(s), and concatenate the resulting features so that no textual information is lost. Then, using the DDIM (Song, Meng, and Ermon 2020) sampler, we randomly choose a time-step $t$ and add noise to the input skeleton points. Simultaneously, $t$ is encoded by a sinusoidal function followed by a two-layer MLP, yielding features $f_{t}$. $f_{t}$ is then injected into the de-noising process of the 2D motion generation. In addition, we use a transformer-based decoder module with 8 layers to extract motion features, similar to the UNet structure in Stable Diffusion, and inject text features at each layer. Next, we feed the two motion sequences into the two parameter-sharing branches respectively, and inject the text features into the subsequent single-character features. Furthermore, we compute the interactive attention (Liang et al.
2024) and the specific calculation process is as follows: $$ h_{1} = SA(a, c_{t}) + a $$ $$ h_{2} = CA_{1}(h_{1}, f_{local}) + h_{1} $$ $$ h_{3} = CA_{2}(h_{2}, b, c_{t}) + h_{2} $$ where $a$ and $b$ refer to the input noised motion embeddings of the two persons. $SA$ and $CA$ denote self-attention and cross-attention layers, respectively. $c_{t}$ is computed by adding the text pooling feature $f_{pooling}$ and the time-step feature $f_{t}$. The overall framework is depicted in Fig. 2. RVHM2D also supports conditioning on an initial motion frame $\mathbf{m}_{start}$ to guide the generation. This reference frame is encoded using an MLP to obtain a feature representation $\mathbf{f}_{ref\_motion}$. This feature is then integrated into the denoising network after the initial self-attention layer of each Transformer block, using an additional self-attention mechanism over the concatenated sequence and reference features: $$ \mathbf{h}_{\mathrm{refined}} = \mathrm{SA}_{\mathrm{ref}}(\mathbf{h}_{1}, \mathbf{f}_{\mathrm{ref\_motion}}) = \mathrm{Attention}\left(Q = \mathbf{h}_{1},\ K = V = [\mathbf{h}_{1}, \mathbf{f}_{\mathrm{ref\_motion}}]\right) $$ Diffusion Model: Diffusion (Song, Meng, and Ermon 2020; Ho, Jain, and Abbeel 2020) is modeled as a Markov noising process (Dynkin and Dynkin 1965), $\{s^{t}\}_{t=0}^{T}$, where $s^{0}$ is drawn from the data distribution and $$ q(s^{t} | s^{t-1}) = \mathcal{N}(\sqrt{\alpha_{t}}\, s^{t-1}, (1-\alpha_{t}) I) $$ where $\alpha_{t} \in (0,1)$ are constant hyper-parameters.
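The single-step noising transition $q(s^{t} \mid s^{t-1})$ above can be sketched directly as $s^{t} = \sqrt{\alpha_{t}}\, s^{t-1} + \sqrt{1-\alpha_{t}}\, \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$. The toy vector dimensionality and the $\alpha$ values below are illustrative assumptions, not the paper's schedule.

```python
# Sketch of the forward (noising) diffusion chain over flat vectors.
import math
import random

def forward_noise_step(s_prev, alpha_t, rng=random):
    """One Markov step: s^t = sqrt(a_t) * s^{t-1} + sqrt(1 - a_t) * eps."""
    scale, noise_std = math.sqrt(alpha_t), math.sqrt(1.0 - alpha_t)
    return [scale * x + noise_std * rng.gauss(0.0, 1.0) for x in s_prev]

def diffuse(s0, alphas):
    """Apply the chain for t = 1..T; with alpha_t < 1 and enough steps,
    s^T approaches a standard normal sample."""
    s = s0
    for a in alphas:
        s = forward_noise_step(s, a)
    return s
```

With `alpha_t = 1.0` the step is the identity (no noise is injected), which makes the scaling roles of $\sqrt{\alpha_t}$ and $\sqrt{1-\alpha_t}$ easy to verify.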
If $\alpha_{t}$ is sufficiently small, $s^{T}$ can be approximated as $s^{T} \sim \mathcal{N}(0, I)$. In addition, we define $t$ as the noising step. In our approach, considering only the text condition $c_{t}$, conditioned motion synthesis models the distribution $p(s^{0} | c_{t})$ as the reversed diffusion process that gradually refines $s^{T}$.

Figure 5: Qualitative comparison of generated 2D human motions from different models: (a) InterGen, (b) InterGen with enhanced text encoders (CLIP-L/B), and (c) our full RVHM2D model. Example prompts: "Person 1 and Person 2 are scuba diving together in the ocean. They are both wearing flippers and scuba gear. They are both looking at the camera. They are both swimming towards the bottom of the ocean. They are both swimming around the coral reef."; "Person 1 and Person 2 are dancing together in a room. They are holding hands and performing a series of dance moves, including spins and turns. The dance is elegant and graceful, with the two dancers moving in sync with each other."; "Person 1, dressed in a white karate uniform with a black belt, is performing a series of martial arts moves, including punches and kicks, in a dark room. The sequence of movements is fluid and continuous, with the person's body and limbs moving in a coordinated manner."; "Person 1 starts in a kneeling position on a yoga mat, then extends one leg back and raises the arms above the head, maintaining a straight back and a slight bend in the front knee. The person's gaze is directed upwards, and the posture is held with a sense of balance and control."

Instead of predicting $\epsilon_{t}$ as formulated by (Ho, Jain, and Abbeel 2020), we predict the signal itself (Ramesh et al.
2022), $\hat{s}^{0} = G(s^{t}, t, c_{t})$, with the objective: $$ \mathcal{L}_{\mathrm{simple}} = \mathbb{E}_{s^{0} \sim q(s^{0} | c_{t}),\, t \sim [1, T]} \left[ \| s^{0} - G(s^{t}, t, c_{t}) \|_{2}^{2} \right] $$ Enhanced Textual Conditioning: Conventional text-to-motion methods often rely solely on global pooled features extracted from text encoders. However, for generating nuanced and complex motions, particularly those involving interactions described in detail by our Motion2D-Video-150K dataset's captions, such global features can be insufficient. Therefore, inspired by (Yu, Seo, and Son 2023), we augment global textual features with fine-grained local features extracted from the CLIP-L/B text encoders or the T5-XXL text encoder, capturing word-level semantic information. These local features are then injected into our model's decoder via an attention mechanism to provide more detailed guidance. Reinforcement Learning-Inspired Refinement: Inspired by successes in other generative domains (e.g., large language models (GPT4o 2024)), we explore a refinement strategy for RVHM2D that draws from reinforcement learning (RL) principles to align generated motions more closely with desired perceptual qualities. While RL has been applied in various generative tasks, its use for refining human motion generation with comprehensive perceptual metrics like FID as a direct optimization target remains relatively underexplored. We formulate motion generation as a task where the state $s$ corresponds to the input text prompt $c_{t}$, and the action $a$ is the generated motion sequence $m$. The generated motion sequence $m$ is defined as: $$ m = \{ \mathbf{m}_{1}, \mathbf{m}_{2}, \dots, \mathbf{m}_{T} \} $$ where $T$ is the total number of frames.
Each frame $\mathbf{m}_{f}$ (for $f = 1, \ldots, T$) contains the poses of the characters present. For a two-character scenario, common in our Motion2D-Video-150K dataset, we define: $$ \mathbf{m}_{f} = \{ \mathbf{p}_{f,1}, \mathbf{p}_{f,2} \} $$ where $\mathbf{p}_{f,c}$ is the pose of character $c$ at frame $f$. Each character's pose is represented by $N = 17$ keypoints: $$ \mathbf{p}_{f,c} = \{ \mathbf{k}_{j,c} \}_{j=1}^{N} $$ where $\mathbf{k}_{j,c} \in \mathbb{R}^{3}$ includes the x-coordinate, y-coordinate, and confidence score for the $j$-th keypoint of character $c$. The policy $\pi(m | c_{t})$ is embodied by our generative model RVHM2D. Notably, we devise a two-stage training strategy: the model is first trained with a standard diffusion objective, and then fine-tuned using reinforcement learning with an FID-based reward to further enhance motion realism and text alignment. The weight for the reward signal is set very low to stabilize training and prevent policy collapse. In the second stage, we incorporate FID-based scores directly into our loss function to serve as a strong learning signal, akin to a reward in RL, guiding the model towards perceptually superior outputs. # Loss Function Motion generation introduces temporal consistency challenges that necessitate specialized loss functions. Inspired by InterGen (Liang et al. 2024), our model retains bone length loss, velocity loss, distance map loss, joint awareness loss and reconstruction loss. In the first training stage, we train our model directly with these losses, where $\lambda_{(\cdot)}$ are hyperparameter weights for each loss term: $$ \mathcal{L}_{\text{first-stage}} = \lambda_{BL} \mathcal{L}_{BL} + \lambda_{VEL} \mathcal{L}_{VEL} + \lambda_{DM} \mathcal{L}_{DM} + \lambda_{JA} \mathcal{L}_{JA} + \lambda_{Recon} \mathcal{L}_{Recon} $$ In the second stage, to enhance perceptual quality and alignment with data distributions, we incorporate losses derived from pre-trained Fréchet Inception Distance (FID) evaluation models, trained following (Guo et al. 2022). Unlike L1/L2 losses, FID captures higher-level distributional similarities. This component includes: • Text-Motion FID Loss ($\mathcal{L}_{\mathrm{textfid}}$): Measures the similarity between the generated motion and the input text prompt $c_{t}$: $$ \mathcal{L}_{\mathrm{textfid}} = 1 - \mathrm{sim}_{\mathrm{cos}}(\mathrm{Enc}_{\mathrm{text}}(c_{t}), \mathrm{Enc}_{\mathrm{motion}}(\mathbf{m}_{pred})) $$ where $\mathrm{Enc}_{\mathrm{text}}(\cdot)$ and $\mathrm{Enc}_{\mathrm{motion}}(\cdot)$ are pre-trained encoders from the FID evaluation model, and $\mathrm{sim}_{\mathrm{cos}}(\cdot, \cdot)$ is cosine similarity. • Motion FID Loss ($\mathcal{L}_{\mathrm{motionfid}}$): Measures the similarity between the generated motion and the ground truth motion distribution: $$ \mathcal{L}_{\mathrm{motionfid}} = 1 - \mathrm{sim}_{\mathrm{cos}}(\mathrm{Enc}_{\mathrm{motion}}(\mathbf{m}_{gt}), \mathrm{Enc}_{\mathrm{motion}}(\mathbf{m}_{pred})) $$ The final objective is a weighted sum of these loss components: $$ \mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\text{first-stage}} + \lambda_{\mathrm{textfid}} \mathcal{L}_{\mathrm{textfid}} + \lambda_{\mathrm{motionfid}} \mathcal{L}_{\mathrm{motionfid}} $$ where $\lambda_{(\cdot)}$ are hyperparameter weights for each loss term. # Experiments # Experimental Setup Implementation Details: We re-implemented two strong baseline models for the 2D human motion generation task: MDM (Tevet et al. 2023) and InterGen (Liang et al. 2024). All models, including our proposed RVHM2D model and the re-implemented baselines, were trained using the AdamW optimizer with a fixed learning rate of $1 \times 10^{-4}$. Training was performed on 8 NVIDIA A100 GPUs for 300 epochs. The batch size was set to 32 or 16 per GPU, depending on the specific memory requirements of each model. For our RVHM2D model and the diffusion-based baselines, the number of diffusion steps during training was set to 1000. For models that incorporate reinforcement learning, training was divided into two stages: 200 epochs of initial training followed by 100 epochs of RL-based fine-tuning. Inference settings, including the number of steps for baselines, are kept consistent with their original implementations for fair comparison.

Table 1: Quantitative comparison with state-of-the-art methods on the Motion2D-Video-150K test set for both single-character and two-character 2D motion generation. Best results are in bold. $\uparrow$ indicates higher is better, $\downarrow$ lower is better, and $\rightarrow$ indicates closer to ground-truth human motion diversity is better.

Table 2: Ablation study on key components of the RVHM2D framework. The first row refers to InterGen as the baseline. Subsequent rows incrementally add our proposed text encoding enhancements (CLIP-L/B $+$ Local Features), Text FID loss, and Motion FID loss.

Evaluation Metrics: Following established practices in 3D human motion generation (Guo et al.
2022), we adopt a comprehensive set of metrics to evaluate performance: R-Precision (Top-1, Top-2, Top-3), Fréchet Inception Distance (FID), Multimodal Distance (MM Dist), and Diversity. To extract features for these metrics, we trained an evaluation model whose architecture is similar to that proposed in InterGen (Liang et al. 2024). However, to align with the text encoding capabilities of our RVHM2D model, our evaluation feature extractor utilizes CLIP-L/B text encoders, as in our proposed model. # Quantitative Comparisons Comparison with State-of-the-Art Methods: We compare our proposed RVHM2D model with the re-implemented baselines (MDM and InterGen) on the Motion2D-Video-150K test set. We extend MDM to both single-character and double-character scenarios by duplicating the single-character data. In contrast to the baselines, our RVHM2D model is designed to inherently handle both single- and double-character generation, leveraging enhanced text feature utilization and an FID-based refinement strategy. The comparative results are presented in Table 1. As shown in Table 1, our RVHM2D model demonstrates strong performance. Specifically, for single-character generation, RVHM2D achieves an R-Precision-Top1 of 36.64, surpassing InterGen (33.67). For two-character generation, RVHM2D obtains an R-Precision-Top1 of 31.48, also outperforming InterGen (30.70). While our model falls slightly behind on R-Precision Top2 and Top3 and on MM Dist, it shows superior R-Precision-Top1 for both single- and two-character scenarios and leads in Diversity, indicating its capability to generate varied and text-relevant motions. MDM achieves the best FID scores but falls behind on the other metrics, suggesting an overfitting problem. # Ablation Studies We conduct comprehensive ablation studies to validate the effectiveness of different components within our RVHM2D model and investigate the impact of various design choices.
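The retrieval-based R-Precision metric used above can be sketched as follows: for each text-motion pair in an evaluation pool, rank all motion embeddings by cosine similarity to the text embedding and count a hit if the ground-truth motion lands in the top-k. The pool construction and the use of plain cosine similarity are assumptions about the standard protocol, not details confirmed by the paper.

```python
# Sketch of R-Precision (Top-k) over paired text/motion embeddings,
# where text_embs[i] and motion_embs[i] are a ground-truth pair.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def r_precision(text_embs, motion_embs, k=1):
    hits = 0
    for i, t in enumerate(text_embs):
        # Rank every motion in the pool by similarity to this text.
        ranking = sorted(range(len(motion_embs)),
                         key=lambda j: cosine(t, motion_embs[j]),
                         reverse=True)
        hits += i in ranking[:k]
    return hits / len(text_embs)
```

With perfectly aligned embeddings R-Precision-Top1 is 1.0; mismatched pairs drive it toward chance level for the pool size.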
Impact of Components: To demonstrate the effects of our dual CLIP-L/B text encoder configuration, the utilization of local text features, and our FID-based refinement strategy, we performed several ablation experiments. The results are presented in Table 2. We start with InterGen as our initial baseline, as reported in Table 1, and incrementally add our proposed enhancements. As shown in Table 2, using the CLIP-L/B text encoders and leveraging local text features improves R-Precision-Top1 by $0.47\%$, FID by 0.0386, MM distance by 0.0007, and Diversity by 0.031 for single-character motion generation, as well as R-Precision-Top3 by $0.08\%$ and MM distance by 0.0365 for two-character generation. Furthermore, incorporating the Text FID and Motion FID loss components improves R-Precision-Top1 and Diversity, with slight drops on the other metrics, suggesting that this FID-based refinement helps generate motions that are more consistent with textual descriptions and closer to the distribution of real motions. Finally, we combine all modules to obtain our full model architecture. Impact of Different Text Encoders: Notably, when we replace the CLIP-based text encoders in our full RVHM2D architecture with the more powerful T5-XXL encoder (Raffel et al. 2020), we observe a significant performance boost across most metrics, particularly FID and R-Precision. This underscores the importance of a strong text encoder for rich motion generation and demonstrates the capability of our RVHM2D architecture to leverage more powerful text features effectively. The results are shown in Table 3.

Table 3: Ablation study on the text encoder(s).

Table 4: Ablation study on the impact of different text caption sources (Owl3 vs. Gemini 2.0 Flash) and the inclusion of first-frame reference motion for the RVHM2D model.

Impact of Caption Source and Reference Motion: We also investigated the effect of different text caption sources and the inclusion of a first-frame reference motion.
These results are presented in Table 4. Observations from Table 4 indicate that text annotations from Gemini 2.0 Flash, compared to Owl3, yield slightly lower R-Precision for single-character generation but can be beneficial for two-character generation, particularly improving R-Precision-Top2. Furthermore, when using Gemini-annotated text, incorporating the first frame as an additional motion prior effectively enhances R-Precision for two-character generation, raising R-Precision-Top1 from 32.86 to 33.17. # Qualitative Analysis To provide a visual assessment of generation quality, we present qualitative examples of 2D human motions generated by our RVHM2D model and the compared baselines in Figure 5. As illustrated in Figure 5, comparing the baseline InterGen with a version enhanced by stronger CLIP-L/B text encoders shows that improved text encoding enhances the semantic alignment between the input text and the generated motion. The baseline InterGen struggles to follow the prompts, whereas the enhanced version more accurately captures the described actions. Furthermore, our full RVHM2D model, benefiting from its comprehensive design including the FID-based refinement, demonstrates an ability to generate motions with more realistic human body structures and more natural, human-like dynamics compared to the other approaches. # Downstream Application: Motion2D-Driven Video Synthesis To validate the practical applicability and assess the perceptual quality of motions generated by our RVHM2D model, we conducted experiments on skeleton-driven video synthesis. We leveraged Wan2.1-T2V-14B for pose-guided video generation. The 2D skeleton sequences $\mathbf{m}_{pred}$ produced by RVHM2D, conditioned on textual prompts, served as the primary control input for the video synthesis model. Qualitative results are presented in Figure 1.
The synthesized videos demonstrate a higher degree of motion realism and a more plausible execution of the described actions. This suggests that the structural and temporal properties captured by RVHM2D are robust enough for downstream video applications, further highlighting the effectiveness of our approach in generating high-fidelity human motions.
Generating realistic and controllable human motions, particularly those involving rich multi-character interactions, remains a significant challenge due to data scarcity and the complexities of modeling inter-personal dynamics. To address these limitations, we first introduce a new large-scale rich video human motion 2D dataset (Motion2D-Video-150K) comprising 150,000 video sequences. Motion2D-Video-150K features a balanced distribution of diverse single-character and, crucially, double-character interactive actions, each paired with detailed textual descriptions. Building upon this dataset, we propose a novel diffusion-based rich video human motion 2D generation (RVHM2D) model. RVHM2D incorporates an enhanced textual conditioning mechanism utilizing either dual text encoders (CLIP-L/B) or T5-XXL with both global and local features. We devise a two-stage training strategy: the model is first trained with a standard diffusion objective, and then fine-tuned using reinforcement learning with an FID-based reward to further enhance motion realism and text alignment. Extensive experiments demonstrate that RVHM2D achieves leading performance on the Motion2D-Video-150K benchmark in generating both single- and double-character scenarios.
# 1 Introduction Program repair, or automatic bug fixing, promises to generate corrective patches for faulty code [Monperrus, 2018]. Recent years have seen dramatic improvements in the quality and complexity of patches thanks to learning-based program repair, with the most complex bugs being repaired by frontier LLMs [Yang et al., 2024]. Progress on benchmarks like SWE-Bench [Jimenez et al., 2024] and RepairBench [Silva and Monperrus, 2025] has demonstrated that real-world bugs can be fixed automatically. The fundamental limitation of asking a language model to generate a patch is that it does so by reasoning about token distributions, not by reasoning about the expected behavior. In other words, optimizing for next-token prediction captures very little of the difference between buggy behavior and the correct behavior expected by the specification. For the same reason, it is hard to repair completely new programs, as language models fail to generalize to unseen problems [Chollet et al., 2024]. In this paper, we completely reframe learning-based program repair. We propose to embed the specification and the incorrect behavior to be repaired as a first-class concept in a loss function that is used to directly optimize the program. We describe Gradient-Based Program Repair (GBPR), a novel paradigm founded on expressing programs as numerical representations, such as embeddings or neural network weights. With this numerical program representation associated with a loss that captures the expected behavior, GBPR repairs the bug by searching in the numerical program space. The core originality of GBPR is that it considers program behavior, both the expected correct behavior and the buggy one, as a first-class concept in the learning pipeline, directly expressed in the loss function to be optimized.
To sum up, we propose to 1) transform symbolic programs into numerical programs, 2) design loss functions that capture correct behavior, and 3) optimize numerical programs with gradient descent in the program space until a repair is found. Figure 1: The key insight of Gradient-Based Program Repair is that program search can be done in a numerical space by employing gradient-based optimization. a) Symbolic program computing the reverse function, written in RASP, and the difference between the expected and buggy behavior; b) Compilation of the symbolic program into a numerical program, encoded as a Transformer; c) Numerical program, equivalent to the symbolic program; d) GBPR optimizes the numerical program via the correctness loss, starting from the buggy program. The program is iteratively optimized, moving towards correct behavior. As the correctness loss decreases, the program correctness increases, with some incorrect behavior now corrected. At the end of the optimization, the repaired program correctly implements the reverse function. As opposed to LLM-based bug fixing, GBPR directly reasons about the expected behavior as a first-class optimizable concept. To rigorously evaluate our approach, we introduce RaspBugs, a new benchmark of buggy symbolic programs and their corresponding numerical representation. Those programs are written in RASP [Weiss et al., 2021], a class of sequence-processing programs that can be analytically represented as Transformer models. By systematically applying mutations to base RASP programs, we create a diverse collection of meaningful bugs that can be analyzed both at the symbolic and numerical levels. RaspBugs contains 1,466 bugs over 6 programs, and provides the first-ever controlled environment for researching program repair as continuous optimization. Our results demonstrate that gradient-based program repair is feasible. 
First, we observe proper convergence of the optimization problem, with the correctness of the considered programs improving. Second, we are able to repair the majority of buggy programs for 5 out of the 6 considered base programs. Third, the analysis of the repair trajectories and the correctness landscape confirms that incorrect behavior on buggy input points is gradually fixed. To summarize, our main contributions are: • Gradient-Based Program Repair, a novel paradigm for program repair that performs gradientbased program search, driven by a loss function that captures correct program behavior. • RaspBugs, a curated benchmark for evaluating research on continuous program repair, with 1,466 pairs of buggy RASP programs, available as either symbolic or numerical programs. • An empirical evaluation demonstrating the feasibility and effectiveness of GBPR for repairing RASP programs as continuous optimization in the numerical program space. # 2 Background Program Repair. Program repair [Monperrus, 2018] automatically finds a correct program from a buggy one, changing incorrect behavior to correct behavior according to a specification (e.g., an input-output test suite), typically via search or mutation over the program space. Most program repair research considers repairing imperative programs, in particular Python or Java. Symbolic Program Space. In the context of traditional program repair, programs are symbolic artifacts, represented using discrete structures like source code token sequences, abstract syntax trees (ASTs), or control-flow graphs (CFGs). Program repair on symbolic programs relies on symbolic methods operating directly on these structures according to rules based on language syntax and semantics (e.g., program transformations, static analysis, symbolic execution). Large Language Models (LLMs) are used for program repair [Vasic et al., 2019, Yasunaga and Liang, 2021, Yang et al., 2024] by considering code as a sequence of textual tokens. 
Numerical Program Space. A numerical program is a program 1) whose behavior is encoded as continuous, real-valued parameters and 2) can be executed. These can be either neural networks or vectors in latent spaces with execution semantics, such as in Bonnet and Macfarlane [2024]. Unlike traditional symbolic programs, which are constrained by discrete structures, the behavior of numerical programs can be adjusted smoothly via optimization techniques like gradient descent. Transformer Programs. In this paper, we deal with programs that can be represented both symbolically and numerically. Our symbolic program space consists of RASP [Weiss et al., 2021] programs. RASP is a programming language specifically designed for sequence processing tasks, whose primitives can be directly implemented by components of the Transformer architecture. In our experiments, our numerical program space consists of Transformer models [Vaswani et al., 2017]. We leverage Tracr [Lindner et al., 2023], a compiler to translate any RASP symbolic program into an equivalent numerical program, which is a Transformer. # 3 Gradient-Based Program Repair All previous research has done program repair as a search in a symbolic space. Our core insight is that one can do program repair by searching programs in a numerical space instead. In that numerical space, the program semantics are encoded into a numerical representation. Gradient-Based Program Repair (GBPR) leverages gradient descent to search the numerical program space, minimizing a loss that directly measures deviations from correct behavior. The program zeroing the loss is considered the repaired program. # 3.1 Compilation of Symbolic Programs to Differentiable Numerical Programs The first step of numerical repair is to translate the initial symbolic program into a numerical representation where a correctness gradient can be computed with respect to its parameters. 
Let $P _ { f }$ be a symbolic program (e.g., source code text in Python) that implements the target function $f : \mathcal { X } \to \mathcal { Y }$ , mapping inputs from space $\mathcal { X }$ to outputs in space $\mathcal { Y }$ .

Compilation. We require a compiler function, denoted $\mathcal { C }$ , that transforms $P _ { f }$ into a numerical representation $D _ { f , \theta }$ . This representation $D _ { f , \theta }$ is parameterized by a set of numerical parameters $\theta$ , such that executing the numerical representation on an input $\mathbf { x } \in \mathcal { X }$ yields the program’s output. Crucially, the compiler $\mathcal { C }$ must ensure that the numerical parameters $\theta$ completely encode the semantics of the original program $P _ { f }$ . In other words, $\theta$ ‘is’ the numerical program.

Numerical Execution. The execution of $D _ { f , \theta }$ must always be the same as the symbolic execution of the equivalent $P _ { f }$ , guaranteeing that program semantics are the same in both the symbolic and numerical spaces.

$$ D _ { f , \theta } \equiv P _ { f } \implies D _ { f , \theta } ( \mathbf { x } ) = P _ { f } ( \mathbf { x } ) \quad \forall \mathbf { x } \in \mathcal { X } . $$

Differentiation. We require $D _ { f , \theta }$ to be differentiable over $\mathcal { X }$ with respect to $\theta$ . This allows us to compute the gradient of a loss function, in order to change the parameters $\theta$ to improve the correctness of the output for $\mathbf { x }$ . If the gradient captures correctness, gradient descent is actually optimizing the program towards more correct behavior, which is the fundamental goal of program repair (section 2).

Alternatives for $D _ { f , \theta }$ . In section 6, we will discuss a few appropriate representations for $D _ { f , \theta }$ . At this point, we focus on neural networks as our numerical representation. The neural network input, resp. output, is the program input, resp. output.
This is a natural choice as 1) neural networks are inherently differentiable via backpropagation, 2) their parameters form the continuous space $\theta$ we seek to optimize, and 3) they are executable via forward passes. # 3.2 Gradient-Based Program Repair (GBPR) Let us assume a buggy symbolic program $P _ { b }$ implementing an incorrect function $b$ . The ideal correct function is called $f$ , and is defined by a specification that describes the behavior of the ideal program. In this paper, we assume specifications in the form of input-output examples: $\{ ( \mathbf { x } _ { i } , \mathbf { y } _ { i } ) \} _ { i = 1 } ^ { n }$ , where each input $\mathbf { x } _ { i }$ is mapped to its correct output $\mathbf { y } _ { i }$ by the ideal function $f$ . Symbolic repair means directly changing $P _ { b }$ with e.g., symbolic repair templates or repair operators that manipulate symbols. GBPR means repairing the numerical representation $D _ { f , \theta }$ instead. For this, we first compile $P _ { b }$ using $\mathcal { C }$ to obtain its differentiable representation $D _ { b , \theta _ { b } }$ . Both the initial parameters $\theta _ { b }$ and the structure of the numerical program are given by the compiler. The goal of GBPR is to adjust these parameters $\theta _ { b }$ to find a new set of parameters $\theta ^ { * }$ such that the behavior of $D _ { \theta ^ { * } } ( \mathbf { x } )$ matches the specification. $$ D _ { \theta ^ { * } } ( \mathbf { x } ) = f ( \mathbf { x } ) \quad \forall \mathbf { x } \in \mathcal { X } . $$ Correctness Loss. Next, we need a loss function $\mathcal { L }$ that measures how far the current program behavior deviates from the specification. 
The total loss is an aggregation of a local loss function $\ell$ computed over a subset of the specification:

$$ \mathcal { L } ( \boldsymbol { \theta } , \{ ( \mathbf { x } _ { i } , \mathbf { y } _ { i } ) \} _ { i = 1 } ^ { n } ) = \sum _ { i = 1 } ^ { n } \ell \left( D _ { \boldsymbol { \theta } } ( \mathbf { x } _ { i } ) , \mathbf { y } _ { i } \right) . $$

Consider the space of all possible parameter values $\theta$ for our differentiable numerical program $D _ { \theta }$ . Each point in this space corresponds to a slightly different program behavior. The loss function $\mathcal { L }$ creates a landscape over this space, where lower values indicate behavior closer to the correct program $P _ { f }$ . The repair process is then a classical optimization problem: finding the parameters $\theta ^ { * }$ that minimize the correctness loss:

$$ \theta ^ { * } = \arg \operatorname* { m i n } _ { \theta } \mathcal { L } ( \theta , \{ ( \mathbf { x } _ { i } , \mathbf { y } _ { i } ) \} ) . $$

Repair as Gradient Descent. Gradient descent acts like rolling a ball down this landscape. The initial parameters $\theta _ { b }$ place the ball somewhere corresponding to the buggy program’s behavior. The gradient $\nabla _ { \boldsymbol { \theta } } \mathcal { L }$ points uphill towards higher loss (more incorrect behavior). By moving in the opposite direction $\left( - \nabla _ { \theta } \mathcal { L } \right)$ , we iteratively adjust the parameters $\theta$ , effectively improving the program’s behavior step-by-step towards the desired correct functionality defined by the input-output specification.
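As a toy illustration of this landscape (our own one-parameter example, not the paper's Transformer setting): for $D_\theta(x) = \theta x$ and ideal function $f(x) = 2x$, the correctness loss over the specification is a parabola in $\theta$ whose minimum is the correct program.

```python
import numpy as np

# Toy loss landscape: D_theta(x) = theta * x, ideal f(x) = 2x.
# The correctness loss L(theta) = sum_i (D_theta(x_i) - y_i)^2 forms a
# landscape over parameter space whose minimum is the correct program.

xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs  # specification {(x_i, y_i)} produced by the ideal f

def L(theta):
    return float(np.sum((theta * xs - ys) ** 2))

thetas = np.linspace(-1.0, 5.0, 61)
losses = [L(t) for t in thetas]
best = thetas[int(np.argmin(losses))]
print(round(best, 6))  # grid point where the loss vanishes: theta = 2
```

In higher dimensions the landscape is not a simple parabola, but the same picture applies: the buggy program is a point, and correct programs sit in low-loss regions.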
Starting from the initial parameters $\theta ^ { ( 0 ) } = \theta _ { b }$ obtained from compiling the buggy program, we iteratively update the parameters in the direction opposite to the gradient of the loss: $$ \boldsymbol { \theta } ^ { ( t + 1 ) } = \boldsymbol { \theta } ^ { ( t ) } - \eta \nabla _ { \boldsymbol { \theta } } \mathcal { L } ( \boldsymbol { \theta } ^ { ( t ) } ) , $$ where $\eta$ is the learning rate. The main difference between symbolic repair and repair as gradient descent is that, because the representation $D _ { \theta }$ is continuous and differentiable, small improvements are possible and efficiently guided by the gradient. This sharply contrasts with symbolic repair, which entirely consists of discrete jumps in the program space. # 3.3 Repair Acceptance Criterion Minimizing loss on training examples is insufficient for successful repair, as optimization might overfit, leading to a program $D _ { \theta ^ { * } } ( \mathbf { x } )$ that performs well on training data but fails to generalize to unseen inputs and thus hasn’t captured $f$ ’s true semantics. Therefore, we need a repair acceptance criterion based on the performance of the optimized program $D _ { \theta ^ { * } }$ on a separate, held-out set of test examples $\{ ( \mathbf { x } _ { j } ^ { \prime } , \mathbf { y } _ { j } ^ { \prime } ) \}$ that were not used during the gradient descent optimization. We consider the program repaired if its correctness on this held-out set exceeds $1 - \epsilon$ of the held-out test cases, for some small $\epsilon \geq 0$ , ensuring that: $$ D _ { \theta ^ { * } } ( x ) \approx f ( x ) \quad \forall x \in \mathcal { X } . $$ This ensures that the repair generalizes beyond the training data and the program likely corresponds to the intended function. # 3.4 Summary of Key Novel Concepts Differentiable Numerical Programs. 
Symbolic programs translated to continuous, differentiable forms (e.g., neural networks) with parameters $\theta$ encoding semantics; a novel concept in the program repair literature.

Numerical Repair Search Space. Viewing the repair search space $\theta$ as a continuous landscape where program behavior can be smoothly varied, as opposed to the irregular, discrete symbolic search space.

Correctness Loss. A differentiable function $\mathcal { L }$ quantifying the difference between the current program’s behavior $D _ { \theta } ( \mathbf { x } )$ and the expected behavior $\mathbf { y }$ . We cast classical optimization loss into a behavioral semantics conceptual framework.

Correctness Gradient. $\nabla _ { \boldsymbol { \theta } } \mathcal { L }$ , indicating the direction in numerical program space towards correct behavior.

Gradient-Based Program Repair. Iteratively adjusting program parameters $\theta$ via gradient descent on the correctness loss $( \theta ^ { ( t + 1 ) } = \theta ^ { ( t ) } - \eta \nabla _ { \theta } \mathcal { L } )$ , optimizing towards functional correctness. This is the first framing of program repair as continuous optimization, in contrast to traditional discrete symbolic search.

# 4 RaspBugs: A Benchmark of Buggy Transformer Programs

To evaluate GBPR, we need buggy symbolic programs and their equivalent differentiable numerical counterparts. We thus build RaspBugs, a novel benchmark of buggy transformer programs. We choose to consider RASP programs [Weiss et al., 2021], which have the property of being representable symbolically or as Transformer models (see section 2).

Programs. We rely on previous work by Weiss et al. [2021] and six of their reference RASP programs. These programs perform various sequence processing operations, including sorting, reversing, histogram computation, frequency-based sorting, and validating Dyck language expressions.

```
def hist(input) -> list:
    # Returns the number of times each token occurs in the input.
    # Example usage: hist(a b a c) >> 2 1 2 1
    same_tok = rasp.Select(
        rasp.tokens,
        rasp.tokens,
        rasp.Comparison.GEQ  # bug: should be rasp.Comparison.EQ
    )
    hist_op = rasp.SelectorWidth(same_tok)
    return hist_op(input)

# correct behavior
hist(a c d b a d) = 2 1 2 1 2 2
# buggy behavior
hist(a c d b a d) = 6 3 2 4 6 2
```

Figure 2: Example of a buggy RASP program in RaspBugs, synthesized from the reference hist program using mutation. The reference program selects only equal tokens, while the mutated program selects tokens greater than or equal to, resulting in buggy program behavior.

Input-Output Specifications. For each RASP program, we generate an input-output specification by randomly sampling from the input space and computing the corresponding outputs using the ground-truth symbolic implementation. Each program specification is composed of 50,000 I/O pairs. The lengths of the input samples are randomly sampled between 2 and 10. Each specification is split into train $( 8 0 \% )$ , validation $( 1 0 \% )$ , and test $( 1 0 \% )$ sets.

Figure 3: Accuracy distribution before (red) and after (green) Gradient-Based Program Repair for each program in RaspBugs. The majority of buggy variants for five programs can be repaired with GBPR (as demonstrated by the rightmost green bars).

Mutating Transformer Programs. We create RaspBugs by applying a suite of mutation operators to the original RASP programs. The mutations are meant to introduce semantic changes to the program. We consider generic mutation operators that act on programming language operators such as arithmetic operations and comparison operations. We also design and implement nine RASP-specific mutation operators that target constructs of the RASP language. In total, we utilize 15 mutation operators.
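A comparison-operator mutation of the kind shown in Figure 2 can be sketched as follows (our own simplification, not the paper's implementation: the program is a flat token list and the operator swaps one comparison token):

```python
import random

# Illustrative mutation-operator sketch: swap one comparison operator in
# a symbolic program represented as a token list, mimicking the
# EQ -> GEQ bug of Figure 2.

COMPARISONS = ["EQ", "NEQ", "GEQ", "LEQ", "GT", "LT"]

def mutate_comparison(tokens, rng=random):
    """Return a mutant with one comparison token replaced, or None if
    the program has no applicable site (a FAILED_MUTATION outcome)."""
    sites = [i for i, t in enumerate(tokens) if t in COMPARISONS]
    if not sites:
        return None
    i = rng.choice(sites)
    mutant = list(tokens)
    mutant[i] = rng.choice([c for c in COMPARISONS if c != tokens[i]])
    return mutant

random.seed(0)
print(mutate_comparison(["Select", "tokens", "tokens", "EQ"]))
```

Applying several such operators in sequence yields the higher-order mutants discussed next.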
These mutation operators are employed individually or combined with others to generate higher-order mutants: mutated programs with several changed locations. We set the limit of mutations per order per program to 200.

Properties of Mutated Programs. Buggy programs must: 1) be symbolically buggy (at least one input-output pair is incorrect), 2) compile to Transformer models via Tracr [Lindner et al., 2023], 3) be executable numerically (forward pass), and 4) be numerically buggy (incorrect on the same input-output pairs). Validation outcomes include: FAILED_MUTATION (symbolic interpretation errors), UNCOMPILABLE (Tracr compilation failure), CORRECT_MODEL (semantically equivalent mutations), and BUGGY_MODEL (programs for repair, considered hereafter).

Descriptive Statistics. RaspBugs is composed of 1,466 buggy RASP programs, seeded from six reference programs and 15 mutation operators, their corresponding input-output specifications (split into train, validation, and test sets), and their numerical representations as Transformer models. The buggy programs are broken to a different extent, as demonstrated by their different test set accuracies: min $= 0 . 0 0 \%$ (completely broken), median $= 2 . 0 0 \%$ , average $= 3 6 . 6 9 \%$ , max $= 9 8 . 0 0 \%$ (a corner-case bug). The numerical representations range from 2k (hist program) to 1M (dyck2 program) parameters. Full details about RaspBugs can be found in Appendix A.

# 5 Experiments

Training and Evaluation. Each buggy transformer program is fine-tuned via supervised learning on its train split (section 4), minimizing cross-entropy correctness loss between predicted and ground-truth output sequences. We use batch size 256, learning rate $1 \times 1 0 ^ { - 4 }$ , and train up to 10k epochs with early stopping (validation loss improvement $< 1 \times 1 0 ^ { - 4 }$ for 10 epochs).
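A minimal end-to-end sketch of this repair loop, with toy stand-ins of our own (a one-parameter linear program instead of a Transformer, squared error instead of cross-entropy, and illustrative constants), combining the gradient updates of Section 3.2, the early-stopping rule above, and the acceptance criterion of Section 3.3:

```python
import numpy as np

# Toy GBPR loop: D_theta(x) = theta * x, ideal f(x) = 2x. Squared error
# stands in for the cross-entropy correctness loss; theta_b = 5.0 stands
# in for the parameters compiled from the buggy program.

rng = np.random.default_rng(0)
xs = rng.uniform(-5, 5, size=100)
ys = 2.0 * xs                        # specification from the ideal f
tr_x, tr_y = xs[:80], ys[:80]        # 80% train
va_x, va_y = xs[80:90], ys[80:90]    # 10% validation (early stopping)
te_x, te_y = xs[90:], ys[90:]        # 10% held-out test

theta, eta = 5.0, 0.0005             # buggy start theta_b, learning rate
best, stale = float("inf"), 0
for _ in range(10_000):
    grad = float(np.sum(2 * (theta * tr_x - tr_y) * tr_x))
    theta -= eta * grad              # theta <- theta - eta * grad(L)
    val = float(np.sum((theta * va_x - va_y) ** 2))
    if best - val > 1e-4:            # improvement threshold, patience 10
        best, stale = val, 0
    else:
        stale += 1
        if stale >= 10:
            break

# Acceptance: correctness on the held-out set must exceed 1 - epsilon.
correct = float(np.mean(np.isclose(theta * te_x, te_y, atol=1e-3)))
accepted = correct > 1 - 0.05
print(round(theta, 4), accepted)
```

The loop recovers parameters close to the correct program ($\theta \approx 2$) and the held-out check accepts the repair.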
Repaired programs are evaluated on the test set via greedy decoding (temperature 0), reporting accuracy as exact output match percentage. Experiments used multi-instance NVIDIA A100 GPUs (1/7th A100 compute, 10GB VRAM, 2 CPUs, 32GB RAM per instance/run).

Repairing Transformer Programs. To evaluate the effectiveness of Gradient-Based Program Repair, we apply it to the entire RaspBugs benchmark. Our goal is to determine whether gradient-based optimization can reliably repair a wide variety of buggy transformer programs. Figure 3 shows the correctness accuracy over the test sets for the buggy programs before and after Gradient-Based Program Repair. Here, correctness accuracy is defined as the percentage of test samples for which the model’s output exactly matches the ground-truth output. For example, the top-left figure shows the correctness accuracy distribution over 46 buggy hist programs from RaspBugs. The red distribution shows that most buggy programs are completely broken with a correctness accuracy of close to $0 \%$ . The green distribution represents the correctness accuracy after repair. We see that a large number of hist programs have higher correctness after Gradient-Based Program Repair (green distribution shifted to the right), with the majority achieving near perfect correctness (right-most bar).

Before repair (red bars), for four of the six program types, the majority of buggy numerical programs start with near-zero correctness accuracy (red bars clustered at $0 \%$ ). This indicates that the mutations introduce substantial semantic errors, resulting in programs that almost never produce correct outputs. After repair (green bars), the accuracy distribution shifts dramatically to the right for five out of six program types.
In all these cases, the majority of repaired programs achieve near-perfect correctness, demonstrating that Gradient-Based Program Repair can repair incorrect behavior even for severe bugs (i.e., those with initial accuracies near $0 \%$ as detailed in section 4). For the most-freq program, while correctness clearly improves, most programs do not achieve perfect accuracy after repair. This suggests inherent difficulties for gradient-based methods with certain programs, possibly due to complex loss landscapes or significant architectural changes (a point further discussed in section 6).

Overall, our experiments over 1,466 buggy transformer programs clearly demonstrate the viability of Gradient-Based Program Repair. It is effective to use gradient optimization and an input-output specification to repair a broken symbolic program. This is a paradigm shift in the field of automatic program repair.

Repair Trajectories through the Correctness Landscape. To provide further insight into how Gradient-Based Program Repair operates, we visualize repair trajectories from buggy programs to repaired ones across the numerical program space. Figure 4 shows the repair trajectory for a buggy sort program, and the surrounding correctness landscape. In this landscape, higher loss means more buggy behavior. The left panel presents the surface plot while the right panel shows a contour plot for the same trajectory, augmented with input-output behavior sampled from the trajectory. The red cross indicates the starting point of the search, i.e., the buggy program encoded as a numerical program, which has high loss and low correctness. As GBPR proceeds, the program is iteratively updated, following the steepest descent in the loss landscape. The trajectory ultimately converges to a minimum, where the program is successfully repaired and reaches near-perfect correctness on the test set.
From an execution perspective, at the beginning, the buggy sort program (red cross) is not capable of sorting any of the three input examples. For example, an incorrect output lists the same element multiple times. During repair, the program gradually improves. At the second highlighted point, the program already correctly sorts the first example. However, at this point, the repair is only partial – the remaining two examples are not correctly sorted – which is reflected by the relatively high loss. At the third highlighted point, the program correctly sorts two of the examples, with the loss now closer to 0. As the loss landscape is explored, the program eventually converges to a minimum where the loss is minimized and the accuracy maximized. This means that the program is successfully repaired and behaves according to the provided specification.

This visualization highlights the core novelty of our approach: by representing programs as differentiable objects, we exploit the topology of the loss landscape to guide the repair process via gradient descent towards correct behavior. This is in sharp opposition to relying on discrete, combinatorial search for repairing symbolic programs.

Figure 4: Repair trajectory for a buggy sort program, in the numerical program space. The red cross marks the initial buggy program, and the trajectory shows the path taken by gradient descent towards a repaired program. Gradient-Based Program Repair iteratively updates the numerical representation of the program using the gradient defined by the correctness loss landscape, until the program behavior is repaired ($\mathcal { L } \approx 0$). Left: Surface plot of the correctness loss landscape along the two principal components of the numerical program space. Right: Contour plot of the same landscape, with the input-output behavior changing along the trajectory.
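The two-principal-component projection behind such plots can be sketched as follows (illustrative only: a synthetic random-walk trajectory stands in for the recorded $\theta^{(t)}$ sequence, and a simple quadratic stands in for the correctness loss):

```python
import numpy as np

# Project a parameter trajectory onto its top-2 principal components and
# evaluate a stand-in loss on the resulting 2-D plane, as one would to
# produce a Figure-4-style surface/contour plot.

rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(size=(50, 8)), axis=0)  # 50 steps, 8 params

center = trajectory.mean(axis=0)
_, _, vt = np.linalg.svd(trajectory - center, full_matrices=False)
pcs = vt[:2]                              # top-2 principal directions
coords = (trajectory - center) @ pcs.T    # trajectory in the 2-D plane

def loss_at(u, v):
    theta = center + u * pcs[0] + v * pcs[1]
    return float(np.sum(theta ** 2))      # stand-in correctness loss

grid = [[loss_at(u, v) for u in np.linspace(-5, 5, 20)]
        for v in np.linspace(-5, 5, 20)]
print(coords.shape, len(grid), len(grid[0]))
```

The `coords` array gives the trajectory overlay and `grid` the landscape values for the surface and contour plots.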
In summary, our experimental results demonstrate that Gradient-Based Program Repair is feasible: it can reliably repair a wide range of buggy transformer programs, often achieving near-perfect correctness. The approach is robust across different RASP programs and various bugs seeded via different mutations. Repair trajectories clearly demonstrate the repair dynamics happening in the numerical program space.

# 6 Discussion

Specification Types in the Loss Function. A key concept of gradient-based program repair is that it expresses the specification in the loss. Hence, the gradient directly captures the program’s incorrect behavior. In our experiments, we have used the cross-entropy loss over an input-output specification, appropriate for the considered RASP programs. Ultimately, GBPR opens the door to incorporating other rich behavioral information into the loss function, such as formal specifications or invariants.

Differentiable Numerical Program Representations. Programs can be represented numerically in several ways. In our experiments, we focus on neural networks, specifically Transformer models, as the numerical representation, compiled from symbolic RASP programs [Weiss et al., 2021, Lindner et al., 2023, Shaw et al., 2024]. Other approaches include embedding programs as points in a continuous latent space, so-called latent programs, which also support efficient search and repair via continuous optimization methods [Bonnet and Macfarlane, 2024]. Execution of these numerical programs is performed by an auxiliary interpreter model. Future work will focus on the design of advanced differentiable numerical representations that are ever more expressive.

Decompilation to the Symbolic Space. A key future direction is decompiling repaired numerical programs into human-readable symbolic code. Symbolic representation is both 1) more interpretable and amenable to human review and 2) appropriate for traditional verification techniques with guarantees.
However, this decompilation process is nontrivial: mapping the optimized parameters of a numerical representation back to structured, high-level code is an open research challenge. Recent work has begun to address this problem in the context of Transformer models by discretizing the model [Friedman et al., 2023] or by training a meta-model to decompile weights into symbolic programs [Thurnherr and Riesen, 2024, Langosco et al., 2024]. However, robust and general decompilation from neural programs to symbolic programs remains an unsolved research problem.

Limitations. Our evaluation is limited to RaspBugs, our benchmark of RASP programs; broader experimentation with other programs (e.g., with Thurnherr and Scheurer [2024]) and languages is left to future work. Additionally, GBPR can only optimize parameters within the initial model architecture obtained after Tracr compilation. If repairing a bug requires changing the structure itself (e.g., adding a new attention head), our prototype could not repair the bug. Future work is needed on symbolic-to-numerical compilation to maximize the expressivity of the numerical program space.

# 7 Related Work

Latent Programs. Latent programs are represented in a latent space, a compressed feature space preserving meaningful data features and placing similar points adjacently. Neelakantan et al. [2015] train a Neural Programmer to recursively select operations and data sources via latent representations at each execution step. Hong et al. [2021] find that generating discrete latent codes representing high-level operations improves program synthesis accuracy when compared with token-level generation. Bonnet and Macfarlane [2024] learn a latent program space for ARC-AGI programs, and use gradient-based search to find correct programs. Liskowski et al. [2020] train an autoencoder to embed programs in a latent space, mapping them back with an evolutionary algorithm. None of these works do program repair.
Beyond our focus on RASP in this paper, Gradient-Based Program Repair is conceptually applicable to other latent program representations such as the ones from this related work. Learning Program Execution. Related work explores how neural networks can understand [Reed and De Freitas, 2016, Shin et al., 2018, Yan et al., 2020, Chen et al., 2021] or benefit from [Ye et al., 2022, Liu et al., 2023] program execution. For example, Zaremba and Sutskever [2014] learn LSTM networks to execute short Python programs. Ni et al. [2024] teach models to inspect and reason about code execution by bootstrapping a synthetic training set of execution-aware reasoning traces. In contrast to these works, which simulate execution with a black-box network, GBPR expresses program behavior as a first-class concept within a numerical framework. Neural Surrogates. Neural surrogates [Esmaeilzadeh et al., 2012, Renda et al., 2021] are neural networks designed to approximate complex functions with better execution efficiency. Neural surrogates are typically trained on a subset of the program’s input-output space. Weber et al. [2024] train a compiler that generates neural surrogates from the program source code, bypassing the need for generating input-output examples. Any class of programs that can be fully handled by a neural surrogate can be handled by Gradient-Based Program Repair. Learning-based Program Repair. Several works have proposed using machine learning to repair programs [Vasic et al., 2019, Chen et al., 2019]. In particular, LLMs are used to repair programs both in single-turn [Xia et al., 2023, Jiang et al., 2023] and agentic [Yang et al., 2024, Wang et al., 2024] setups. Our work is different in that we focus on repairing programs in a numerical space, using gradient-based optimization in search of the correct program, rather than searching exclusively in the token space.
Automatic program repair seeks to generate correct code from buggy programs, with most approaches searching the correct program in a discrete, symbolic space of source code tokens. This symbolic search is fundamentally limited by its inability to directly reason about program behavior. We introduce Gradient-Based Program Repair (GBPR), a new paradigm that reframes program repair as continuous optimization in a differentiable numerical program space. Our core insight is to compile symbolic programs into differentiable numerical representations, enabling search in the numerical program space directly guided by program behavior. To evaluate GBPR, we present RaspBugs, a new benchmark of 1,466 buggy symbolic RASP programs and their respective numerical representations. Our experiments demonstrate that GBPR can effectively repair buggy symbolic programs by gradient-based optimization in the numerical program space, with convincing repair trajectories. To our knowledge, we are the first to state program repair as continuous optimization in a numerical program space. Our work establishes a new direction for program repair research, bridging two rich worlds: continuous optimization and program behavior.
[ "cs.PL", "cs.LG", "cs.SE" ]
# 1. Introduction

Code-switching (CS), the natural and fluid alternation between two or more languages within a single conversation or utterance, is a pervasive linguistic phenomenon worldwide, particularly as multilingualism grows [1]. Despite its widespread nature, research on CS, especially in Automatic Speech Recognition (ASR), significantly lags behind that of monolingual language processing. Compared to the abundant resources available for monolingual corpora, CS datasets remain under-resourced [2]; multilingual scenarios increase complexity, lead to confusion [3], and pose unique challenges for model architectures [4, 5, 6]. Monolingual models, trained solely on monolingual data, are demonstrably ill-equipped to handle code-switched speech [7].

Current research in CS-ASR often focuses on adapting model architectures – employing techniques like dual encoders [8], language-aware structures [9, 5], or language-specific attention [10] – primarily to mitigate language confusion. These methods frequently necessitate retraining entire networks with limited CS data, leading to increased complexity and limited scalability. Another research direction explores cross-lingual transfer and unsupervised or self-supervised learning (SSL) leveraging large-scale pretrained datasets [11, 12, 13]. While promising, these works often remain focused on linguistic exploration and small-scale datasets. Indeed, deep investigations into CS-ASR are largely confined to a few well-resourced language pairs, most notably Mandarin-English [4, 9]. Consequently, much of the existing CS-ASR research leans towards exploratory studies rather than solutions that are ready for robust, industrial applications.

This paper targets the development of CS-ASR technologies for industrial use, focusing on large-scale training methodologies, particularly for under-resourced Southeast Asian (SEA) languages.
We address the critical bottleneck in CS-ASR: the limited availability of large-scale, transcribed code-switching data. This scarcity is inherent due to factors such as the early stage of CS corpus development, domain-specific challenges, language pair variations, community biases [14], and the inherent difficulties in data collection and annotation [15]. To address limited data, we propose using large synthetic datasets. Our method introduces a phrase-level mixing technique (improving traditional lexicon-based approaches [16]) to create natural-sounding code-switched speech by blending phrases from monolingual sources [17]. We built test sets by leveraging ChatGPT-generated conversations (text) spoken by Singaporeans for BM-EN and ZH-BM (within-sentence mixing), and a synthetic sentence-mixed TA-EN test set. Testing three leading large-scale pretrained ASR models on these sets revealed their performance in real-world multilingual industrial scenarios. This work is the first comprehensive benchmark of SOTA ASR models for under-resourced SEA languages, providing scalable solutions for research and industry.

Summarizing, our experiments demonstrate the effectiveness of our phrase-mixed data augmentation and benchmarks. For BM-EN, our findings reveal remarkable distributional alignment with real-world test sets, directly resulting in strong, validated real-world performance and powerfully confirming our synthetic data’s fidelity in bridging the domain gap. While ZH-BM and TA-EN pairs show encouraging alignment suggesting a generalizable approach, future progress lies in integrating language-specific linguistic characteristics into phrase-mixing to optimize distributional matching and real-world performance for diverse pairs.
Our newly established benchmark on under-resourced language pairs, including BM-EN, ZH-BM, and TA-EN, underscores the efficacy of large-scale pre-training on state-of-the-art models, with fine-tuned SeamlessM4T-v2-Large outperforming competitors (Whisper-Large-v3-Turbo and MMS-1B-All) and emerging as a leading candidate for industrial-scale CS-ASR. In summary, the main contributions of this paper are:

• Scalable Data Augmentation: We introduce a novel and scalable phrase-mixed data augmentation method, demonstrating significant performance gains on real-world code-switching ASR test sets.
• Comprehensive SOTA Model Analysis: We present a detailed comparative analysis of three leading state-of-the-art ASR models (Whisper-Large-v3-Turbo, MMS-1B-All, and SeamlessM4T-v2-Large), revealing consistent performance improvements across multiple under-resourced language pairs.
• Publicly Available Benchmark Datasets: We release novel evaluation test sets for three under-resourced language pairs to foster research in code-switching ASR. Datasets and model APIs are available upon email request.

Figure 1: Overview of the phrase-mixed CS data generation pipeline: monolingual (MONO) text is translated (L1 → L2), force-aligned, phrase-mixed into CS text, and spliced into CS audio (e.g., 我 喜欢 吃 nasi goreng 配 蔬菜, "I like to eat fried rice with vegetables").

# 2. Methods

# 2.1. Code-Switching in Singapore

Singapore is a culturally and linguistically diverse nation, representing the broader Southeast Asian region within a single country. The primary language of communication is English, which is supported by government policies that promote Mandarin Chinese, Bahasa Malay, and Tamil as official languages [18]. As a result, a significant portion of the population is bilingual or multilingual.
In both casual and formal contexts, it is common for individuals to code-switch, interweaving two or more languages within a single sentence to facilitate communication and express cultural identity [19]. This study focuses on the languages spoken in Singapore, specifically Singlish (EN), Mandarin Chinese (ZH), Bahasa Malay (BM), and Tamil (TA). We conduct experiments on under-researched code-switching language pairs, including BM-EN, ZH-BM, and TA-EN. We exclude EN-ZH, as it is already well-studied.

# 2.2. Phrase-Mixed: Enhancing Naturalness in Scalable Synthetic Code-Switch Data Generation

Building upon Speech Collage [16]’s approach of leveraging monolingual corpora for synthetic speech creation, we introduce several enhancements to improve synthesis quality (details of our pipeline in Figure 1):

1. Translation: We employ Google Translate for BM-EN and TA-EN pairs, and Mesolitica’s translation model for ZH-EN, to optimize for speed and accuracy.
2. Textual Alignment: We replace BERT-based alignment with FAST ALIGN [20] for improved scalability.
3. Phrase-Mixed Replacement: Singaporean code-switching occurs primarily at the phrase level rather than the word level [17], with speakers naturally alternating between full phrases while maintaining academic terminology. To better reflect this phenomenon, we replace the fixed $20\%$ lexicon substitution with a more flexible $10\text{-}30\%$ mixing per sentence, capturing more authentic conversational patterns that incorporate both phrases and individual words.
4. Audio Splicing: We improve Speech Collage [16] by incorporating NeMo’s amplitude-based normalization, which produces more natural speech output. This adjustment addresses the distortion observed when using the original energy-based normalization.

# 2.3. ASR Systems Training

Large-Scale Pre-Trained Models. Recent advances in large-scale pre-trained speech models enable adaptation to downstream tasks with limited data.
Whisper [21], MMS [22], and SeamlessM4T [23] achieve state-of-the-art performance across diverse speech tasks and low-resource settings, benefiting from advanced architectures and extensive training data [24]. Whisper [21], a fully supervised model, supports approximately 100 languages and was initially trained on 680k hours of data, later extended to 5 million hours of weakly labeled data in version 3. MMS [22] extends wav2vec2 to 1,000 languages via self-supervised pretraining and language-specific adaptation. SeamlessM4T [23], based on w2v-BERT, is trained on 4.5 million hours and integrates speech-text alignment. These models serve as strong baselines for ASR research, allowing us to emulate industrial-scale systems. Fine-Tuning and Adaptation. We fine-tune Whisper and SeamlessM4T on multilingual tasks (EN, ZH, BM, TA) and code-switching pairs (BM-EN, ZH-BM, TA-EN). For MMS, we explore adapter-based and full-model fine-tuning with vocabulary adjustments. Our results indicate that training on multilingual data with code-switching outperforms purely multilingual training.

# 3. Experiments

# 3.1. MONO Data

We utilize monolingual data from four languages: EN, ZH, BM, and TA, incorporating both local and non-local accents. Local accents refer to those associated with Singapore or similar regions. All datasets, except for the Internal dataset (which is sampled from our proprietary data), are listed in Table 2. We merged all the data and refer to this dataset as MONO.

# 3.2. Phrase-mixed Code-Switch data generation

Using the MONO dataset, we generate code-switching text through translation (L1 → L2) and alignment (via FAST ALIGN), as detailed in Section 2.2. We then replace $10$–$30\%$ of tokens per sentence under a consecutive word constraint to mimic natural phrase-level switching.
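The consecutive-word replacement step can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the `translations` mapping stands in for the word alignments produced by FAST ALIGN, and a single contiguous span is switched so that roughly 10–30% of the sentence changes; the function and variable names are hypothetical.

```python
import random

def phrase_mix(src_tokens, translations, min_ratio=0.10, max_ratio=0.30, seed=None):
    """Replace one contiguous span of source tokens with their aligned
    translations, so that roughly min_ratio..max_ratio of the sentence
    is switched (sketch of phrase-level mixing under a consecutive
    word constraint)."""
    rng = random.Random(seed)
    n = len(src_tokens)
    ratio = rng.uniform(min_ratio, max_ratio)
    span_len = max(1, round(ratio * n))
    start = rng.randrange(0, n - span_len + 1)
    return (src_tokens[:start]
            + [translations[t] for t in src_tokens[start:start + span_len]]
            + src_tokens[start + span_len:])

# Toy example: switch a Mandarin phrase into Malay.
zh = ["我", "喜欢", "吃", "炒饭", "配", "蔬菜"]
bm = {"我": "saya", "喜欢": "suka", "吃": "makan",
      "炒饭": "nasi goreng", "配": "dengan", "蔬菜": "sayur-sayuran"}
print(phrase_mix(zh, bm, seed=0))
```

Because a whole span is replaced at once, the output mixes phrases rather than isolated words, matching the phrase-level switching pattern described above.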
For audio generation, we align words to speech by training a simple HMM-DNN model [34], then splice segments guided by code-switched text, applying amplitude normalization for naturalness. We generate 1,000 hours per language pair (BM-EN, ZH-BM, TA-EN), totaling 3,000 hours of synthetic code-switched data (called CS). Note that we do not know the real characteristics for each pair in advance.

Table 1: WER/CER/MER for different models on monolingual and code-switch datasets. Relative improvements are indicated in round brackets, either improvement (green) or degradation (red) over the baseline.

Table 2: MONO Dataset Summary by Language

Table 3: CS-related statistical information of our phrase-mixed training sets, our test sets, and the real SEAME test set.

# 3.3. Evaluation Settings

Multilingual Evaluation. We conducted comprehensive evaluations by averaging results across language-specific test sets containing general and local accents (except Tamil, which lacks local accent data). Test sets include:

• English: LibriSpeech test-other (noisy reading) [25], and internal datasets: clean scripts and noisy conversations.
• Chinese: AiShell-test (reading) [27], MagicHub (conversation) [28], and an internal test set in diverse conditions.
• Malay: Low-resource internal sets (noisy, conversational, scripted).
• Tamil: Mile Tamil (clean reading) [30], OpenSLR 65 (clean reading) [35], and internal reading scripts.

Code-Switch Evaluation. We evaluated code-switching models using real-world test sets (BM-EN, ZH-BM) collected from ChatGPT conversations (text) spoken by Singaporeans. Due to resource constraints, Singaporean-spoken transcribed data for TA-EN was unavailable. Instead, we synthesized a TA-EN test set by inter-sentential switching of sentences from the MUCS (TA) [32] and IMDA3 (EN conversational) [26] datasets using NeMo's sentence-level CS scripts.
These test sets are the first benchmarks for code-switch evaluation in under-resourced Southeast Asian languages, offering valuable insights and advancing research in this area. We use WER for all languages and pairs except ZH and ZH-BM, where we use CER and Mixed Error Rate (MER) due to their character-based writing system. For fair comparison across models, we split test audio longer than 30 s using Pyannote VAD [36] to accommodate Whisper's context limit [21].

# 3.4. Finetuning Setup

Whisper & SeamlessM4T. Both autoregressive models use BPE tokenizers and were fine-tuned. For Whisper, we used Whisper-Large-v3-Turbo (4-layer decoder; WHISPER-TURBO-V3) with its language prefix system; phrase-mixed samples (CS) were assigned prefixes by priority: BM-EN, TA-EN, ZH-BM. For SeamlessM4T, we use SeamlessM4T-v2-large (SEAMLESSM4T-V2), the largest version available. MMS. MMS is an encoder-only model trained with CTC loss and character outputs; we used MMS-1B-ALL (fine-tuned on over 1,000 languages). We experimented with both adapter-based fine-tuning (each language's adapter) and full fine-tuning (averaging adapter weights across four languages), using either a merged character-level vocabulary or a BPE vocabulary (via the Whisper tokenizer). All models were trained for 3 epochs with a learning rate of 1e-5, $20\%$ linear warm-up, cosine decay, and the AdamW optimizer (weight decay $= 0$). We applied $20\%$ speed perturbation and $20\%$ MUSAN noise (10–30 dB). Note that we did not use SpecAugment for SeamlessM4T due to minimal gains. All experiments utilized the Transformers library.

# 4. Results

# 4.1. Analysis on Code-Switch Characteristics in Phrase-mixed Training Sets and Test Sets

We assessed code-switching (CS) patterns using three metrics: the Code-Mixing Index (CMI), which considers both the number of switches and the language distribution [16]; the I-Index, which measures how often switching occurs; and the M-Index, which measures language balance [14].
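Under one common formulation of these metrics (the exact variants used in [14, 16] may differ, e.g. in scaling), they can be sketched from a per-token language tag sequence as:

```python
def cs_metrics(lang_tags):
    """Compute CMI, I-Index, and M-Index for one utterance, given a
    per-token language tag sequence (e.g. ["ZH", "ZH", "BM", "ZH"]).
    One common formulation; a sketch, not the paper's exact script."""
    n = len(lang_tags)
    counts = {}
    for tag in lang_tags:
        counts[tag] = counts.get(tag, 0) + 1
    k = len(counts)
    # CMI: 100 * (1 - fraction of the dominant language); 0 if monolingual.
    cmi = 100.0 * (1.0 - max(counts.values()) / n)
    # I-Index: fraction of adjacent token pairs where the language switches.
    switches = sum(1 for a, b in zip(lang_tags, lang_tags[1:]) if a != b)
    i_index = switches / (n - 1) if n > 1 else 0.0
    # M-Index: language balance, 0 (monolingual) to 1 (perfectly balanced).
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    m_index = (1.0 - sum_p2) / ((k - 1) * sum_p2) if k > 1 else 0.0
    return cmi, i_index, m_index

tags = ["ZH", "ZH", "ZH", "BM", "BM", "ZH"]
print(cs_metrics(tags))  # CMI ≈ 33.3, I-Index = 0.4, M-Index ≈ 0.8
```

Intuitively, the M-Index looks only at the overall token distribution per language, while the I-Index looks only at switch points, so the two can disagree, which is exactly the train/test mismatch discussed below for ZH-BM and TA-EN.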
Table 3 shows these metrics for each dataset, averaged across utterances. The BM-EN training and test sets are very similar across all metrics (CMI: 38.37 vs. 36.88, I-Index: 46.81 vs. 39.44, M-Index: 23.47 vs. 24.73). However, for ZH-BM, the training and test sets are less consistent. The training set shows a higher switching frequency but lower language balance compared to the test set. In contrast, compared to the training data, the sentence-mixed TA-EN test set exhibits a similar language balance (M-Index ≈ 22.63) but a lower switching frequency (I-Index: 6.50 vs. 37.33) due to its inter-switched nature. The SEAME EN-ZH benchmark [33] has a lower code-switching complexity (CMI: 15.40), suggesting recognition tasks might be less challenging [37]. These scores suggest that our phrase-mixing method somewhat captures real code-switching patterns, which is beneficial for training models, especially when real code-switched examples are scarce. Test sets like BM-EN and ZH-BM seem challenging enough to represent real-world situations. However, TA-EN, with its sentence-level switching, may be less challenging to process.

Figure 2: MER components on the ZH-BM test set, including correctness, insertion, deletion, and substitution rates.

# 4.2. Multilingual ASR

Table 1 shows that fine-tuning generally improves performance across languages for both MONO and MONO+CS setups. Exceptions include WHISPER-TURBO-V3 (MONO and MONO+CS), which degrades in English, particularly on the noisy dataset with local accents, and the MMS-1B CHARACTER model, which underperforms on both NLB and LibriSpeech test-other (U.S. accents). For non-English languages, fine-tuning with MONO enhances performance, with further gains when incorporating CS, indicating that more fine-tuning data is beneficial. Notably, SEAMLESSM4T-V2 shows the largest improvements relative to WHISPER-TURBO-V3 and MMS-1B-ALL.

# 4.3.
Code-Switch ASR on BM-EN, ZH-BM, TA-EN

Table 1 demonstrates consistent performance improvements across all models in the MONO+CS setting, validating the efficacy of augmenting monolingual data with phrase-mixed code-switched corpora. Relative gains vary by language pair: BM-EN shows the largest improvement, followed by TA-EN and ZH-BM, aligning with the linguistic analysis in Section 4.1: BM-EN has strong train-test alignment, TA-EN shows a similar language balance but lower switching frequency, and ZH-BM has a large gap in language balance, making it harder for the model to predict. For BM-EN, MONO+CS enhances all models except SEAMLESSM4T-V2, which slightly degrades despite gains in MONO. TA-EN exhibits different challenges: WHISPER-TURBO-V3 suffers >100% WER in BASELINE and MONO due to high insertion errors, which is resolved by MONO+CS. MMS-1B-ALL requires full fine-tuning (BPE-BASED outperforms CHAR-BASED) for improvements, showing no gains with adapter-based approaches. ZH-BM yields minimal MONO+CS gains. While WHISPER-TURBO-V3 and SEAMLESSM4T-V2 degrade in MONO, MMS-1B-ALL remains stable. Figure 2 reveals model-specific biases: WHISPER-TURBO-V3 marginally improves BM recognition but is affected by high insertion/deletion errors; MMS-1B-ALL and SEAMLESSM4T-V2 initially fail to detect BM (suggesting a strong bias toward ZH) in MONO but show substantial error reduction (I/D/S) after MONO+CS fine-tuning. These results underscore the necessity of code-switched data to mitigate monolingual biases and improve robustness in mixed-language scenarios.
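For reference, the Mixed Error Rate used for ZH and ZH-BM can be sketched as a token edit distance over a mixed segmentation that splits Chinese into characters while keeping Latin-script words whole. This is a common MER formulation; the authors' exact scoring implementation is not specified, so treat this as an assumption.

```python
def mixed_tokenize(text):
    """Split CJK characters individually; keep Latin-script words whole."""
    tokens, word = [], []
    for ch in text:
        if "\u4e00" <= ch <= "\u9fff":          # CJK unified ideographs
            if word:
                tokens.append("".join(word)); word = []
            tokens.append(ch)
        elif ch.isspace():
            if word:
                tokens.append("".join(word)); word = []
        else:
            word.append(ch)
    if word:
        tokens.append("".join(word))
    return tokens

def edit_distance(ref, hyp):
    """Standard Levenshtein distance over token sequences (one row kept)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def mer(ref, hyp):
    r, h = mixed_tokenize(ref), mixed_tokenize(hyp)
    return edit_distance(r, h) / len(r)

print(mer("我 喜欢 nasi goreng", "我 喜欢 nasi lemak"))  # → 0.2
```

Under this segmentation, errors on Chinese are counted per character (as in CER) and errors on Malay per word (as in WER), which is why MER is the natural metric for ZH-BM.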
Code-switching (CS), common in multilingual settings, presents challenges for ASR due to scarce and costly transcribed data caused by linguistic complexity. This study investigates building CS-ASR using synthetic CS data. We propose a phrase-level mixing method to generate synthetic CS data that mimics natural patterns, and utilize monolingual data augmented with synthetic phrase-mixed CS data to fine-tune large pre-trained ASR models (Whisper, MMS, SeamlessM4T). This paper focuses on three under-resourced Southeast Asian language pairs: Malay-English (BM-EN), Mandarin-Malay (ZH-BM), and Tamil-English (TA-EN), establishing a new comprehensive benchmark for CS-ASR to evaluate the performance of leading ASR models. Experimental results show that the proposed training strategy enhances ASR performance on monolingual and CS tests, with BM-EN showing the highest gains, followed by TA-EN and ZH-BM. This finding offers a cost-effective approach for CS-ASR development, benefiting research and industry.
# 1. INTRODUCTION

Notation-level music transcription is the process of converting musical audio or symbolic data into a written form. This task is both challenging and essential in the field of Music Information Retrieval (MIR) [1]. Automatic Music Transcription (AMT) seeks to address the limitations of manual transcription by creating algorithms that can transform musical input into symbolic representations. For guitarists, the use of classical Western music notation is rather unusual because it does not contain specific information about string choice and fret position, which are essential for playing the instrument, since the same pitch can be played in different ways on the guitar. Here, tablature (tab) notation is preferred, which is optimized for stringed instruments.

# 2. RELATED WORK

There are various approaches in the field of automatic guitar tablature transcription, including rule-based, probabilistic, graph-based, and neural network-based methods. Early systems relied on predefined rules to create tablatures from MIDI data. The software developed by Wang and Li utilizes harmonic rules and fretting styles to produce scores, but often requires manual adjustments to ensure that they are playable [3]. Miura et al. enhanced these methods by minimizing hand movements; however, advanced players found these limitations to be too restrictive [4]. Genetic algorithms (GAs) are employed to optimize tablatures based on playability criteria. Tuohy and Potter [5] introduced a GA designed to generate playable fret positions. Ramos et al. [6] built upon this by enhancing the algorithm with subpopulation techniques, which improved its efficiency. More recently, Bastas et al. [7] have integrated string-related audio features into a GA, resulting in more refined outcomes. Hidden Markov Models (HMMs) have also been applied, where states represent string configurations and the transitions are influenced by physical difficulties. Barbancho et al.
[8] used the Viterbi algorithm to map audio signals to optimal finger positions, achieving more accurate results. Graph-based techniques represent the relationships between notes, strings, and frets as a directed acyclic graph. In 1989, Sayegh [9] introduced the optimal path paradigm, which assigns transition costs based on hand movements. Subsequent extensions by Radicioni et al. incorporated biomechanical constraints and hand span considerations [10]. Burlet and Fujinaga [11] have built upon Sayegh's approach by developing a new algorithm for guitar tablature transcription called A-star-Guitar. This algorithm utilizes the A* pathfinding method to create optimal guitar tablatures for polyphonic music. It works by searching for the optimal path in a graph that includes all possible combinations of strings and frets for notes and chords. The algorithm takes into account the tuning of the guitar, the number of frets, and the position of the capo. In this graph, possible fretboard positions are represented as nodes, while the edges are weighted based on biomechanical factors, such as the difficulty in moving between frets, the finger span required for chords, and penalties for positions beyond the seventh fret. The algorithm employs a heuristic function that calculates the cumulative weight of edges from a given vertex to the target vertex, enabling it to identify the easiest transitions between notes. Neural networks, especially convolutional neural networks (CNNs), have shown significant potential in tablature transcription. Wiggins and Kim introduced TabCNN, which maps spectrogram images to tablatures [12]. Kim et al. improved the approach by integrating self-attention mechanisms, resulting in improved transcription accuracy and better long-term sequence modeling [13]. Recent advances in deep learning have established transformers as a leading architecture for analyzing sequential data, including music generation.
Transformers are particularly effective in capturing long-term dependencies and complex patterns, enabling them to generate music with both temporal and harmonic consistency. Early work, such as the Music Transformer, introduced relative positional encoding to better address the nuances of pitch and timing in music [14]. Subsequent models, including the Pop Music Transformer [15] and Theme Transformer [16], further refined these techniques, focusing on rhythmic structure and thematic consistency. In 2021, Sarmento et al. [17] created a diverse symbolic dataset called DadaGP, which contains 26 181 files representing 739 music genres. They also developed a token format based on event-based MIDI encoding. To evaluate the dataset and the token format, the authors trained a Pop Music Transformer [15] to generate new symbolic compositions. In their work, Chen et al. [18] adapted Transformer XL [19] specifically to generate fingerstyle guitar tablatures. They enhanced the model by incorporating tokens for string and fret positions, in addition to pitch and duration. While their approach successfully produced valid tablatures, it faced challenges with string assignments for lower pitches and did not adequately evaluate long-term musical structure. These issues highlight the need for further refinement in the application of transformers for guitar-related tasks. Recent advances have introduced transformer models for transcribing MIDI into tablature. Edwards et al. [20] utilize the BERT model [21], which they train by tokenizing MIDI data using the MidiTok method [22] and masking the string tokens within the input sequence. Initially, they trained the model on the entire DadaGP dataset [17], followed by fine-tuning on a selected set of professional transcriptions from the Leduc dataset presented in [23]. An evaluation study involving guitarists demonstrated that the transformer model outperformed other methods in terms of playability. 
Although this approach is quite promising, it is limited to standard guitar tuning and does not allow the use of a capo. Transformer models have mainly been used for music generation and only recently for guitar tablature transcription. However, their ability to learn musical structures makes them a promising tool for this task. Although several probabilistic and graph-based approaches have been explored, there are only limited studies focusing on MIDI-based transcription using neural networks. Therefore, this research aims to investigate the potential of transformer-based approaches further.

# 3. METHODOLOGY

# 3.1 Datasets

Unlike datasets that concentrate on converting audio into symbolic formats through complex Automatic Music Transcription (AMT) pipelines [1], the datasets used in this research – DadaGP, GuitarToday and Leduc – focus on symbolic data provided in the Guitar Pro 1 format. This format allows for direct experimentation with MIDI tablature transcription by encoding pitch as well as string/fret information. The following paragraphs summarize the characteristics and relevance of these datasets. The GuitarToday dataset 2 contains 363 easy fingerstyle guitar tablatures designed for beginners and intermediate players. Sourced from the 'GuitarToday' Patreon account 3, the dataset features tracks in standard tuning (E4, B3, G3, D3, A2, E2) and focuses on simpler pieces. The dataset analysis shows a predominance of beginner-friendly pitches with open strings or low fret positions. These characteristics make it an ideal foundation for model training with minimal complexity. The DadaGP dataset contains over 26 000 tracks across various genres, including rock, metal and classical music [17]. After filtering, a total of 2 301 acoustic guitar tracks were selected for this study.
This dataset includes a range of note durations and a broader pitch spectrum compared to GuitarToday, reflecting more complex musical compositions. Most tracks are in standard tuning, although there are occasional variations like drop tunings and the use of capos. The distribution of string-fret combinations is wider, with a notable emphasis on mid-range frets. The DadaGP dataset was compiled from Ultimate Guitar 4, a platform where the quality of contributions varies greatly. This variability requires caution when interpreting the results derived from the DadaGP dataset. However, it serves as a valuable complement to GuitarToday, introducing the model to a greater diversity of musical and technical contexts. The Leduc dataset consists of 232 jazz guitar tablatures created by François Leduc [24]. It highlights the rich harmonic and rhythmic complexity characteristic of jazz music. Although the dataset is relatively small, it provides valuable insights into jazz-specific playing styles, including mid-range pitch preferences and intricate chord voicings. These features make it a useful addition to the other datasets, enhancing the model's ability to generalize across different musical genres.

Figure 1. Flowchart of the data pre-processing steps

# 3.2 Data Pre-Processing

To train transformer models for guitar tablature transcription, Guitar Pro files are pre-processed to extract relevant MIDI information and convert it into a text-based format suitable for encoding. This process involves several stages, as illustrated in Figure 1. The GuitarToday and Leduc datasets consist of single-track acoustic guitar files that require minimal cleanup. In contrast, the DadaGP dataset includes arrangements for multiple instruments and needs filtering to isolate the acoustic guitar tracks. This process is accomplished by utilizing MIDI channel IDs, specifically 25 for nylon stringed guitars and 26 for steel stringed guitars.
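The ID-based selection can be sketched as below. The track records and field names are illustrative placeholders, not the actual Guitar Pro/MIDI parsing code; IDs 25 and 26 correspond to the one-indexed General MIDI programs for nylon- and steel-string acoustic guitars.

```python
# One-indexed General MIDI programs: 25 = nylon-string, 26 = steel-string.
ACOUSTIC_GUITAR_PROGRAMS = {25, 26}

def select_acoustic_guitar_tracks(tracks):
    """Keep only tracks whose MIDI program marks an acoustic guitar.
    `tracks` is a list of dicts with hypothetical fields."""
    return [t for t in tracks if t["midi_program"] in ACOUSTIC_GUITAR_PROGRAMS]

songs = [
    {"name": "Track 1", "midi_program": 25},   # nylon-string guitar: kept
    {"name": "Track 2", "midi_program": 33},   # bass: dropped
    {"name": "Track 3", "midi_program": 26},   # steel-string guitar: kept
]
print([t["name"] for t in select_acoustic_guitar_tracks(songs)])
# → ['Track 1', 'Track 3']
```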
Due to inconsistencies in the assignment of instrument IDs, additional keyword-based filtering is applied using guitar-related terms in various languages to ensure that we focus only on the relevant tracks. As a result, about $5\%$ of 47 039 tracks could be used. Duplicate tracks are removed to ensure dataset uniqueness by matching metadata or file names. The examples are split into training, validation, and test sets (90/10/10), maintaining diversity in key, tuning, and capo usage. Guitar Pro files are converted to MIDI format using the Python packages PyGuitarPro 5 and mido 6. The hierarchical structure of Guitar Pro, which includes tracks, bars, voices, beats, and notes, is simplified when converting to MIDI, as MIDI represents music as a sequence of messages. The relevant attributes – start time, end time, pitch, string, and fret – are extracted from MIDI files to represent the musical content in text format. Data tokenization divides the extracted MIDI information into sequences for model training. Five different encoding schemes were developed to explore different levels of abstraction and granularity (see Section 4.1). Word-level tokenizers assign numerical IDs to the tokens, resulting in datasets of varying sequence lengths for experimentation. This encoding approach enables a systematic analysis of how data representation impacts model performance.

# 3.3 Data Augmentation

For training purposes, the three datasets are combined. Due to the significant imbalance in the dataset regarding the different capo uses and tunings, we extended the dataset to develop a conditioned model. Since the capo condition affects all pitches simultaneously while keeping the string-fret combinations unchanged, augmenting the dataset is straightforward. To augment the capo usage, we first filtered the dataset to include only files in standard tuning.
Then, we artificially transformed each file from capo zero to capo seven, ensuring that each piece still contained valid string and fret combinations for the given capo. To reduce the size of the test dataset to 150 files, since not every capo variant is necessary for every piece of music, we systematically iterated through the test files and applied the next capo number where applicable. This approach ensures that every piece of music is represented while considering different variants of the capo. The tuning augmentation was applied to each training sequence. A tuning was randomly selected for each sequence and the pitches of the notes were adjusted accordingly. The four most common tunings (standard, half-step down, full-step down, and drop D) were utilized.

# 3.4 Model

In our research, we define the task of transcribing pitches into tablature as a translation problem, where the model learns to map MIDI note sequences to their corresponding guitar tablatures. With this focus on the translation paradigm, we have selected the T5 model as an ideal candidate for training and fine-tuning on this specific task. The T5 architecture, which stands for Text-to-Text Transfer Transformer, was introduced by Raffel et al. in 2020 [2]. T5 represents a significant advance in natural language processing (NLP) by framing all tasks as text-to-text problems. This means that both inputs and outputs are treated as text. The Hugging Face Transformers package is used for implementing the network. We employ a reduced architecture of the T5 model, halving the configuration of t5-small, with a model dimension $d_{model} = 128$, feedforward dimension $d_{ff} = 1024$, three encoder-decoder layers and four attention heads. This model is trained from scratch, utilizing the Adafactor optimizer with a self-adaptive learning rate. To tokenize the data, we found that an event-based approach is optimal, similar to the encoding used in the Music Transformer [14].
The input consists of NOTE_ON and NOTE_OFF events, along with TIME_SHIFT tokens for timing. The output uses TAB<#,#> tokens, which represent both the string and fret numbers, followed again by the TIME_SHIFT token. In total, we trained two versions of the model using the three combined datasets. The standard model is based on standard guitar configurations to test general functionality. The conditioned model contains a CAPO<#> and TUNING<#,#,#,#,#,#> token in the input to condition the model for more flexibility and control over the outputs. Both models were trained on input sequences of 512 tokens length. In total, the standard dataset included 16 451 training sequences and 1 819 validation sequences, while the conditioned dataset comprised 129 748 training sequences and 14 365 validation sequences. During inference, chunks of 20 notes are processed. The tokens from the last note of the previous chunk are placed at the beginning of the following sequence in both the encoder and decoder to preserve context between the chunks.

# 3.5 Data Post-Processing

In some cases, the model can generate tabs for a note that result in an incorrect pitch. To address this, errors are corrected in a post-processing step to ensure that the piece of music remains unchanged. Our post-processing algorithm refines the model's output by comparing the estimated note sequence to the corresponding input note sequence. It attempts to match each input note to its closest counterpart in the estimated sequence within a configurable window of ±5 notes. The algorithm evaluates pitch values and selects the best match. If no direct match is found, the first viable string-fret combination generated for the guitar configuration used is applied. That way, we ensure that the tablatures reflect the original notes.
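The matching step can be sketched as follows. The note representation and helper names are illustrative; standard-tuning open-string pitches are given as MIDI numbers (E2 = 40, A2 = 45, D3 = 50, G3 = 55, B3 = 59, E4 = 64).

```python
# Open-string MIDI pitches for standard tuning, string 1 (high E) to 6 (low E).
OPEN_STRINGS = {1: 64, 2: 59, 3: 55, 4: 50, 5: 45, 6: 40}

def pitch_of(string, fret):
    return OPEN_STRINGS[string] + fret

def first_valid_position(pitch, n_frets=24):
    """Fallback: first string/fret combination producing the target pitch."""
    for string in range(1, 7):
        fret = pitch - OPEN_STRINGS[string]
        if 0 <= fret <= n_frets:
            return string, fret
    return None

def correct_output(input_pitches, est_positions, window=5):
    """For each input pitch, search the estimated (string, fret) sequence
    within ±window notes for a position with the correct pitch; otherwise
    fall back to the first valid position (illustrative sketch)."""
    corrected = []
    for i, pitch in enumerate(input_pitches):
        lo, hi = max(0, i - window), min(len(est_positions), i + window + 1)
        match = next((p for p in est_positions[lo:hi] if pitch_of(*p) == pitch),
                     None)
        corrected.append(match if match is not None
                         else first_valid_position(pitch))
    return corrected

# The model mispredicts the second note (fret 6 gives pitch 46, not 45):
print(correct_output([40, 45], [(6, 0), (6, 6)]))  # → [(6, 0), (5, 0)]
```

The fallback guarantees that every output position reproduces the input pitch, which is why pitch accuracy reaches 100% after this step while tab accuracy changes only slightly.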
# 3.6 Evaluation Metrics

Evaluating guitar tablatures requires domain-specific metrics, as conventional machine learning and NLP metrics miss crucial aspects of musicality and playability. Since there are no established standards in this field, we propose three metrics that evaluate both the accuracy of the transcription and the playability of the generated tablatures. Preserving the original pitch is essential to maintain the musical integrity of a piece. The pitch accuracy metric, ranging from $0\%$ to $100\%$, measures how well the model reproduces the original pitches from the MIDI input. It allows for alternative string-fret combinations as long as the pitch remains correct. The tab accuracy, also ranging from $0\%$ to $100\%$, reflects how well the professionally created ground-truth tablatures agree with the estimated fretting. This metric compares the predicted string-fret combinations with the ground truth, which is assumed to represent the optimal playing positions, and thus reflects the overall playability and consistency with the original piece. A modified version of the difficulty estimation framework [25] is used to objectively evaluate the playability of tablatures. The scoring system takes into account two types of movement: horizontal shifts along the fretboard (along) and vertical shifts across the strings (across). The difficulty of transitioning between two positions $(p, q)$ is calculated as follows:

$$
difficulty_{(p,q)} = along_{(p,q)} + across_{(p,q)} \quad ,
$$

where

$$
along_{(p,q)} = fret\_stretch_{(p,q)} + locality_{(p,q)} \quad ,
$$

$$
across_{(p,q)} = vertical\_stretch_{(p,q)} \quad .
$$

The fret stretch value measures the difficulty of the horizontal movement by calculating a delta between the frets of the first position $p$ and the second position $q$. Positive deltas, corresponding to movement to higher frets, are assumed to be easier as the fret spacing becomes shorter. Let $\Delta_{fret} = q - p$, where $p, q$ are the fret numbers. Then $fret\_stretch_{(p,q)}$ is defined as:

$$
fret\_stretch_{(p,q)} = \left\{ \begin{array}{ll} 0.50 \cdot |\Delta_{fret}| & \mathrm{if}\ \Delta_{fret} > 0, \\ 0.75 \cdot |\Delta_{fret}| & \mathrm{if}\ \Delta_{fret} \le 0 \quad . \end{array} \right.
$$

The locality term takes into account the location of the two positions, because the higher the fret, the more the string lifts off the fret and the harder it is to press the string. Locality is defined as

$$
locality_{(p,q)} = \alpha \cdot (p + q) \quad ,
$$

where $p, q$ are the fret numbers and $\alpha$ is a factor that should take into account the player's technical ability and the height of the strings. In the original article it was set to 0.25 based on tests. The vertical stretch depends on the distance between the positions of the fingers on adjacent strings. Since this depends strongly on which position is played with which finger, only standard values are used here. It is assumed that a delta of at most 1 is still comfortable when comparing the strings, so let $\Delta_{string} = q - p$, where $p, q$ are the string numbers. Then $vertical\_stretch_{(p,q)}$ is defined as:

$$
vertical\_stretch_{(p,q)} = \left\{ \begin{array}{ll} 0.25 & \mathrm{if}\ \Delta_{string} \leq 1, \\ 0.50 & \mathrm{if}\ \Delta_{string} > 1 \quad . \end{array} \right.
$$

To calculate the overall difficulty score for a tablature, we calculate the mean difficulty over all positions. The range is from 0 to 18.5 for a 24-fret guitar. The lowest difficulty, 0, is achieved when notes are played on the same open string. The highest value is reached when jumping from the lowest open string to the highest string at the highest fret.

# 4. EXPERIMENTS AND RESULTS

In this section, the results of our proposed model are presented in an evaluation on the test split of the GuitarToday, Leduc and DadaGP datasets. For evaluation, the metrics described in Section 3.6 are used.

# 4.1 Data Encodings

To evaluate the effects of data encoding on the transcription performance of guitar tablatures, we conducted an experiment with five different encoding strategies. Each encoding was designed to capture the essential musical information, and we experimented with different levels of abstraction and granularity. Table 1 shows one note each as input and output text encoding. The v1 encoding reduces the information to pitch, and the output shows the string and fret positions as separate tokens. By simplifying the input and focusing on pitch alone, this version tests whether pitch is sufficient without explicit timing cues. The v2 encoding further simplifies the v1 version by combining the string and fret information into a single TAB token in the output. This further reduces the space required for the output tokens. The v3 encoding uses an event-based approach, using note-on and note-off events together with time-shift tokens to represent timing. The output also uses combined TAB tokens for string and fret information, based on a compact representation of the tablature. This version includes both timing and pitch, allowing for a more comprehensive contextual understanding.

Table 1. Examples of the different input and target MIDI-to-text encodings based on the representation of one note.
NOTE_ON / NOTE_OFF define the pitch and TIME_SHIFT the duration. STRING and FRET, as well as the combined TAB token, define the corresponding string and fret for the pitch.

Figure 2. Comparison of five different encodings across the GuitarToday, DadaGP, and Leduc datasets by testing time and tab token variations. The combination of TAB and TIME_SHIFT tokens in the v3 encoding showed the best result across the datasets.

Similar to v3, the v4 encoding also uses the event-based approach, again separating the string and fret data into individual tokens in the output. The fifth encoding, v5, also builds on v3, but removes note-off tokens from the input, simplifying the event sequence and relying only on note-on and time-shift events to represent the duration of the note. The output uses combined TAB tokens for string and fret information. This version explores the importance of note-off tokens in conveying time and duration information. The encodings were tested using the GuitarToday, DadaGP, and Leduc datasets, and no post-processing was applied. The results in Figure 2 show varying levels of accuracy across the different encodings, suggesting that the choice of encoding has a significant influence on the model's performance. Both the v1 and v4 encodings show that the division into STRING and FRET tokens is less accurate than the combined TAB token: when a pitch is correctly mapped, selecting one correct token is more likely than selecting two correct tokens. The addition of time-shift information (in v3, v4, and v5) improves tab accuracy, suggesting that the model benefits from explicit time and duration data. The reduction of tokens in v5 seems to remove context from the model.
A comparison of the datasets shows that the combination of *TAB* and *TIME_SHIFT* in the *v3* encoding generalizes well across different styles, and can be further improved by post-processing corrections.

# 4.2 Effects of Post-Processing

Post-processing plays a critical role in refining the output of the Fretting-Transformer model by addressing residual inaccuracies in the generated tablatures. The initial model outputs, while largely accurate in pitch, occasionally produce string-fret combinations that result in incorrect pitches or implausible fingerings. To mitigate these issues, two post-processing methods were implemented: overlap correction and neighbor search (see Section 3.5).

Table 2. Evaluation of the effect of post-processing on the Leduc test dataset. By applying overlap correction and neighbor search, perfect pitch accuracy can be achieved, which is necessary for musical fidelity.

The results of applying these methods are presented in Table 2. Without post-processing, the model achieves a pitch accuracy of 97.23% and a tab accuracy of 68.56%. Introducing overlap correction improves these metrics to 99.92% and 72.15%, respectively. Adding neighbor search further refines the output, resulting in perfect pitch accuracy (100.00%) and a slight increase in tab accuracy to 72.19%. These results highlight the importance of post-processing in enhancing the playability and musical fidelity of the generated tablatures, ensuring that the transcriptions remain both accurate and practical for guitarists.

# 4.3 Domain Adaptation from Text

Pre-trained models in different sizes are available for the T5 transformer. These pre-trained models have been trained in an unsupervised manner on a large corpus of English texts. In this experiment, the pre-trained t5-small is trained via domain adaptation on the task of transcribing MIDI to tablature. We compare the progression of the validation loss during training.
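The neighbor-search idea described above can be sketched in a few lines. This is a simplified illustration only: the standard-tuning open-string pitches, the fret limit, and the distance heuristic used to pick among candidates are assumptions, not the paper's exact procedure from Section 3.5.

```python
# Sketch of a neighbor-search correction: if a predicted (string, fret)
# pair does not reproduce the target MIDI pitch, replace it with the
# playable candidate closest to the prediction. Standard tuning assumed.

OPEN_STRINGS = [40, 45, 50, 55, 59, 64]   # MIDI pitches, low E to high E
MAX_FRET = 24

def pitch_of(string_idx, fret):
    return OPEN_STRINGS[string_idx] + fret

def neighbor_search(target_pitch, pred_string, pred_fret):
    """Return a (string, fret) pair producing target_pitch, preferring
    candidates close to the model's predicted position."""
    if 0 <= pred_fret <= MAX_FRET and pitch_of(pred_string, pred_fret) == target_pitch:
        return pred_string, pred_fret         # prediction already correct
    candidates = [
        (s, target_pitch - open_pitch)
        for s, open_pitch in enumerate(OPEN_STRINGS)
        if 0 <= target_pitch - open_pitch <= MAX_FRET
    ]
    # pick the candidate minimizing distance to the predicted position
    return min(candidates,
               key=lambda c: abs(c[0] - pred_string) + abs(c[1] - pred_fret))

# A wrongly fretted E3 is snapped to a valid nearby position:
print(neighbor_search(52, 3, 1))   # → (2, 2): E3 on the D string, 2nd fret
```

Because every pitch within range has at least one valid string-fret combination, a correction of this kind is what makes the 100% pitch accuracy in Table 2 achievable.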
Although the t5-small configuration is larger than our proposed custom model, the training converges significantly faster. The choice of the optimizer also has a large impact: choosing the Adafactor optimizer over AdamW improves the convergence speed of the custom model. Besides the faster convergence, the custom model optimized using Adafactor achieves the best tab accuracy, resulting in a 4% increase in comparison to the pre-trained t5-small model optimized with the same optimizer.

# 4.4 Alternative NLP Task Formulations

The Fretting-Transformer interprets the task of transcribing MIDI to tablature as a translation between the MIDI language and the tablature language. Alternatively, the fill-mask and the text-completion interpretations can also be applied. Therefore, two alternative architectures are explored.

Table 3. Comparison of models on GuitarToday, Leduc, and DadaGP test datasets (post-processing applied). The highest tab accuracy and lowest difficulty scores are printed in bold. Overall, all models achieve good results, with the T5 outperforming the others with a slightly higher tab accuracy and lower difficulty scores for all datasets.

The BERT architecture [21], with a configuration similar to that described in [20], is trained using masked language modeling. Here, the TAB<#,#> tokens directly follow the NOTE_ON<#> tokens in the input and are replaced by a <MASK> token during training. For implementing the text-completion task interpretation, the GPT2 model [26] is used. The model is trained to generate tablature tokens for a given sequence of MIDI tokens in a text-completion manner. Therefore, a ‘MIDI:’ token followed by the MIDI notes and a ‘TABS:’ token is used as a primer sequence. The GPT2 model then completes the text by adding the tab token sequence. The results of the different interpretations of the NLP task can be compared in Table 3.
Although each model is able to preserve the original pitch of a note, the T5 model leads to slightly higher tab accuracy and lower difficulty scores for all datasets.

# 4.5 Conditioning on Tuning and Capo

In the previous experiments, the tablature transcription was examined for the case of standard tuning and without the usage of a capo. While this is true for the majority of tablatures available online, in some cases guitarists prefer to use alternative tunings or want to transpose the piece to a different key using a capo. To incorporate these additional conditions, additional tokens are added for the tuning and the fret to which the capo is set. The results of the tuning and capo conditioning can be seen in Table 4. For a quantitative evaluation, the test splits of the aforementioned datasets were augmented in advance with random variations in tuning and capo usage. For the GPT2 variant, the tuning and capo conditions are introduced after the ‘TABS:’ token in the primer sequence. By removing the conditioning tokens from the primer, it is also possible to let the model suggest a suitable tuning and capo option. The results show that the generation of high-quality tablatures also works very well on a conditional basis. Again, the T5 model outperforms the others on the higher-quality GuitarToday and Leduc datasets. For DadaGP, GPT2 is slightly closer to the ground truth.

# 4.6 Comparison with Baselines

A comparison of the proposed Fretting-Transformer model with the baseline and state-of-the-art methods is shown in Table 5. As a simple baseline, the string-fret combination with the lowest possible fret for a given pitch is chosen. The A* algorithm method is a reimplementation of [11] selected for comparison with the state of the art. All of the methods are able to provide valid tablatures for the given pitches and achieve 100% pitch accuracy.
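The pitch and tab accuracy figures reported throughout these tables can be computed per note. The following is a minimal sketch assuming aligned prediction/ground-truth sequences and standard tuning; the paper's metric definitions in Section 3.6 may differ in detail.

```python
# Minimal sketch of the two evaluation metrics: pitch accuracy checks that
# the predicted (string, fret) reproduces the ground-truth pitch; tab
# accuracy requires the exact string/fret match. Standard tuning assumed.

OPEN_STRINGS = [40, 45, 50, 55, 59, 64]   # MIDI pitches of the open strings

def evaluate(pred, truth):
    """pred, truth: aligned lists of (string, fret) pairs.
    Returns (pitch_accuracy, tab_accuracy) as fractions in [0, 1]."""
    pitch_hits = sum(
        OPEN_STRINGS[ps] + pf == OPEN_STRINGS[ts] + tf
        for (ps, pf), (ts, tf) in zip(pred, truth)
    )
    tab_hits = sum(p == t for p, t in zip(pred, truth))
    n = len(truth)
    return pitch_hits / n, tab_hits / n

# The same pitch on a different string counts for pitch but not tab accuracy:
print(evaluate([(2, 2), (0, 0)], [(1, 7), (0, 0)]))  # → (1.0, 0.5)
```

This also illustrates why every method can reach 100% pitch accuracy while tab accuracies differ substantially: any valid fingering preserves the pitch, but only the ground-truth fingering counts as a tab hit.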
The Fretting-Transformer outperforms the A* algorithm and the baseline on all three datasets in terms of tab accuracy. Especially on the high-quality tabs of the GuitarToday and the Leduc datasets, there is a significant difference. On DadaGP, the baseline already achieves quite good accuracies. This could confirm that a lot of the tablatures in this dataset are algorithmically generated. Besides the baseline methods, the commercial tool Guitar Pro 8.1.3 and the open-source tool TuxGuitar 1.6.6 have been evaluated.

Figure 3. Qualitative comparison of ground truth, A* and Fretting-Transformer tablatures for ‘Smoke On The Water’. Box 1: Although the Fretting-Transformer varies from the ground truth, the use of the open strings might be preferred by many guitarists. Box 2: Our model is provided with more context and tends to make more consistent fretting decisions than A*.

Our proposed method achieves the highest tablature accuracy across all datasets in both scenarios, showing significant improvements especially on Leduc and DadaGP, thereby surpassing both baseline and state-of-the-art approaches. This highlights the robustness of the Fretting-Transformer in handling a variety of musical styles and complexities, particularly with datasets like Leduc that feature intricate fretting tasks. There exists a trade-off between tablature accuracy and difficulty. Methods such as the baseline and TuxGuitar achieve lower difficulty ratings, yet they fall short of the proposed method's accuracy. This indicates that an exclusive focus on playability may not reflect the preferences of guitarists and hence does not lead to optimal tablatures.

# 4.7 Discussion

While achieving high tablature accuracy can be interpreted as how well the ground truth matches our predictions, the quality and the purpose of the tablatures provided can vary depending on the data source and the target audience.
Figure 3 shows a short excerpt of the piece ‘Smoke on the Water’ from the GuitarToday dataset. Firstly, it can be noticed that although the Fretting-Transformer's result varies from the ground truth (see box 1), the use of the open strings might be preferred by many guitarists. This also highlights the limitations of the chosen metrics. Secondly, when compared to the A* algorithm, the advantage of more contextual information becomes clear. While A* only looks at the previous and the next note, our model is provided with more context and tends to make more consistent fretting decisions (see box 2).

Table 4. Comparison of tuning- and capo-conditioned models on GuitarToday, Leduc, and DadaGP test datasets (post-processing applied). The highest tab accuracy and lowest difficulty values are printed in bold. The T5 model outperforms the others on the higher-quality GuitarToday and Leduc datasets. For DadaGP, GPT2 is slightly closer to the ground truth.

Table 5. Comparison of our approach (post-processing applied) with baseline and state-of-the-art methods across GuitarToday, Leduc, and DadaGP datasets under two experimental conditions: standard tuning without capo, and capo/tuning-conditioned scenarios. The highest tab accuracy and lowest difficulty scores are printed in bold. Our proposed method achieves the highest tablature accuracy across all datasets in both scenarios, showing significant improvements and thereby surpassing both baseline and state-of-the-art approaches.
Music transcription plays a pivotal role in Music Information Retrieval (MIR), particularly for stringed instruments like the guitar, where symbolic music notations such as MIDI lack crucial playability information. This contribution introduces the Fretting-Transformer, an encoder-decoder model that utilizes a T5 transformer architecture to automate the transcription of MIDI sequences into guitar tablature. By framing the task as a symbolic translation problem, the model addresses key challenges, including string-fret ambiguity and physical playability. The proposed system leverages diverse datasets, including DadaGP, GuitarToday, and Leduc, with novel data pre-processing and tokenization strategies. We have developed metrics for tablature accuracy and playability to quantitatively evaluate the performance. The experimental results demonstrate that the Fretting-Transformer surpasses baseline methods like A* and commercial applications like Guitar Pro. The integration of context-sensitive processing and tuning/capo conditioning further enhances the model's performance, laying a robust foundation for future developments in automated guitar transcription.
# 1 Introduction

The remarkable success of the OpenAI-O1 series models (OpenAI, 2024) and DeepSeek-R1 (DeepSeek-AI, 2025) has demonstrated the substantial potential of large-scale reinforcement learning (RL) for complex reasoning tasks, attracting significant research attention. However, the detailed methodologies employed in the training of these models have not been fully disclosed, creating a significant knowledge gap that hinders further advancements in the field. This lack of transparency in training techniques and strategies impedes the reproduction and extension of these results by other researchers.

Recent advancements, such as DAPO (Yu et al., 2025), Open-Reasoner-Zero (Hu et al., 2025), and DeepCoder (Luo et al., 2025a), have demonstrated competitive reasoning performance through task-specific RL strategies, accompanied by publicly released models and datasets. However, their contributions remain narrowly scoped, focusing predominantly on isolated domains such as mathematics or code generation, with limited cross-task generalization. Furthermore, while current research has largely focused on dense model architectures (Yu et al., 2025; Zhang et al., 2025), scant attention has been devoted to exploring the potential of Mixture of Experts (MoE) (DeepSeek-AI, 2025; Ling-Team, 2025) paradigms in this context. Of particular concern is the persistent challenge of training stability, a fundamental prerequisite for scaling RL-based reasoning systems that remains systematically unaddressed. The dynamic interaction between specialized experts in MoE architectures introduces complex gradient synchronization and parameter update conflicts, often manifesting as oscillating losses or catastrophic forgetting of acquired reasoning skills. Without robust methodologies to ensure stable training in large-scale RL training, the theoretical advantages of MoE frameworks cannot be fully realized in practice.
In this work, we introduce Ring-lite, a fully open-source Mixture of Experts (MoE) reasoning model designed to enhance multi-domain reasoning capabilities, built upon the publicly available Ling-lite model (Ling-Team, 2025). To the best of our knowledge, this is the first work that integrates an open training framework, open training data, and an open model, specifically targeting the domains of mathematics, coding, and STEM. Furthermore, Ring-lite systematically delves into the instability issues prevalent in RL training and the conflicts arising from capability integration across domains. To solve the instability problem, we propose Constrained Contextual Computation Policy Optimization (C3PO), a novel token-level optimization framework for reinforcement training. The experimental results obtained by our model on complex reasoning tasks, coupled with its more stable reward curves compared to the widely-used conventional Group Relative Policy Optimization (GRPO) method, substantiate the efficacy of our approach.

Our model, Ring-lite, with 16.8B total parameters and 2.75B activated parameters, establishes state-of-the-art performance across mathematical reasoning, code generation, and STEM problem-solving benchmarks, matching or surpassing dense models with under 10B parameters, the standard baseline for comparable architectures according to scaling-law analyses (Ling-Team, 2025). To our knowledge, Ring-lite represents the first publicly available mixture-of-experts (MoE) system operating at this parameter-efficiency frontier, serving both as a methodological innovation in efficient architectural design through dynamic sparse activation and as an open-access resource that lowers barriers to cutting-edge AI research. This dual contribution advances both the theoretical understanding of neural scaling laws and the practical democratization of high-performance language model technologies.
Specifically, Ring-lite achieves impressive scores of 76.61% and 69.11% on AIME 2024 and AIME 2025, two challenging math competition-style benchmarks; 60.66% and 86.45% on LiveCodeBench and Codeforces, two challenging code contest benchmarks for code generation; and 61.05% on GPQA-Diamond, the graduate-level science QA benchmark. It surpasses Qwen3-8B (Yang et al., 2025) on average and approaches the performance of other top-tier reasoning models. In short, our work marks a pivotal advance in the democratization of AI research and development, as it provides the broader community with a fully open-source solution. By doing so, our work significantly lowers the barrier to entry for exploring multi-domain reasoning, empowering researchers and practitioners alike to contribute to and benefit from this burgeoning field. The main contributions are summarized as follows.

• For the first time, we open-source a multi-domain MoE reasoning model, encompassing an open-source infrastructure (framework), training methodologies, and training datasets. The entire transparent training pipeline is detailed, including Long-CoT Supervised Fine-Tuning (SFT) and reasoning-specific reinforcement learning (RL).

• We identify a critical challenge in the training instability of reasoning models and propose C3PO, a framework that implements a fixed training token size (budget) to eliminate response-length variance and selects high-entropy base models to stabilize learning dynamics. The framework integrates a reinforcement learning methodology grounded in an algorithm-engineering co-design paradigm, thereby not only ensuring long-term training stability but also achieving significant gains in computational efficiency.

• Spanning multiple domains (math, code and science), we observed inter-domain data conflicts and introduced a capability-integration method (stage-wise training and balanced data mixing) to address this issue.
The structure of this work proceeds as follows: Section 2 details the dataset curation process, including data cleaning and filtering. Section 3 systematically outlines the methodological framework, emphasizing its contributions and implementation specifics. Finally, Section 4 evaluates the model's performance against established benchmarks and synthesizes critical insights gleaned from both quantitative results and qualitative observations.

# 2 Data

Our training dataset comprises two components: (1) long Chain-of-Thought (Long-CoT) supervised fine-tuning (SFT) data, employed to train a cold-start model, and (2) reinforcement learning (RL) data, designed to enhance reasoning capabilities.

# 2.1 Long-CoT Data

To activate a base model's reasoning capability, a comprehensive dataset of high-quality samples exhibiting Long-CoT reasoning patterns was curated. The query pool was sourced from open-source repositories and further enriched through synthetic generation using large language models (LLMs). To ensure the production of high-fidelity responses with Long-CoT, we implemented an iterative refinement pipeline that synergistically combines automated model generation, expert manual annotation, and rejection sampling mechanisms. After that, rigorous data-cleansing protocols were applied, including detection and removal of repetitive patterns, mixed-language artifacts, and other noise sources, to yield a robust and high-quality dataset. The final data is dominated by three major domains: Mathematics (64.5%), Code (25.5%), and Science (9.2%, encompassing some high-quality and difficult samples generated by SHARP (Wu et al., 2025)). The remaining portion of the dataset includes contributions from other categories, such as the medicine and history domains.
In short, our Long-CoT SFT dataset enabled effective multi-domain reasoning (spanning mathematics, coding, and science), providing a robust initialization for subsequent reinforcement learning training.

# 2.2 RL Training Data

# 2.2.1 Domain-Specific Reasoning Data

• Math We begin by sourcing a wide range of mathematical problems to enrich the diversity and coverage of our math reasoning dataset. These problems are mainly obtained from two channels: open-source datasets and self-collected data. For open-source datasets, we included datasets that have been meticulously curated for reinforcement learning, including BigMath (Albalak et al., 2025), DeepScaleR (Luo et al., 2025b), DAPO (Yu et al., 2025), DeepMath-103K (He et al., 2025b), etc. To further expand our dataset, we crawled online math forums and collected authentic school examinations. Specifically, we extracted problems from the contest section of the Art of Problem Solving (AoPS) website, which archives comprehensive records of historical mathematics competitions from diverse regions and educational levels. Additionally, we gathered a wide range of human-written problems used in school exams and mathematics competitions across various educational stages, such as high school and college. This extensive process yielded an initial collection of tens of thousands of math problems. We then applied stringent data cleansing and filtering protocols to ensure quality and relevance, ultimately refining the dataset to over 73,000 high-quality math problems suitable for our reinforcement learning processes.

• Code The dataset was curated from open-source programming competition resources, primarily drawn from CodeContest (Li et al., 2022), TACO (Li et al., 2023) and APPS (Hendrycks et al., 2021), along with some problems from the QOJ online judge platform. To ensure data quality and training suitability, a multi-stage filtration process was implemented.
First, test cases exhibiting format inconsistencies, such as erroneous line breaks or extraneous spaces, were systematically removed, along with truncated content marked by ellipses or incomplete patterns. Subsequently, all “Accepted” (AC) solutions underwent rigorous validation through our code sandbox environment. This verification step eliminated submissions with unresolved external dependencies and discarded implementations that failed extended test cases due to computational inefficiencies (e.g., $O(n^2)$ algorithms for $n > 10^5$) or memory overflow vulnerabilities. Output standardization was enforced by normalizing whitespace conventions and aligning floating-point precision thresholds across platforms to mitigate inconsistencies in evaluation criteria. To ensure integrity, only problems accompanied by at least one fully validated AC solution capable of passing all associated test cases were retained, thereby preserving practical problems with existing solutions. Semantic deduplication was applied to remove overlapping problems from public coding benchmarks, minimizing the risk of evaluation bias through contamination control. The final curated dataset comprises approximately 14,000 code samples, each accompanied by verified executable solutions and rigorously validated test cases, establishing a robust foundation for reward computation and model training.

• Science For the science domain, our RL training data construction followed a three-stage evolution to ensure quality and difficulty alignment. Initially, we sourced open datasets such as Nemotron-CrossThink (Akter et al., 2025) and SCP-116K (Lu et al., 2025) to establish a baseline for scientific reasoning. As model capabilities improved, we employed the SHARP (Wu et al., 2025) synthesis pipeline to generate harder, verifiable problems.
However, due to the difficulty ceiling and verification limitations of synthetic and open-source data, our final RL training relied exclusively on a third-stage dataset. This consisted of a proprietary collection of high-difficulty, human-annotated science problems drawn from advanced natural science domains. Sources included Olympiad competitions and graduate-level (Master's and PhD) exams. We then applied a rigorous curation process, encompassing quality filtering, answer verification, and domain-specific tagging, resulting in a refined set of 3,833 high-quality scientific problems suitable for reinforcement learning.

# 2.3 Data Curation

The efficacy of the reinforcement learning process is heavily dependent on the quality of the training datasets. Through our initial investigations, we discovered that data contamination issues persist even in widely adopted open-source datasets. To ensure a high-quality training dataset for reinforcement learning, we developed an extensive and rigorous data curation pipeline comprising several stages designed to ensure the complete decontamination of our data, making it readily prepared for RL training. The core components of our data processing protocol are divided into the following three critical phases:

Figure 2 The data curation pipeline of Ring-lite

Data Cleansing We first exclude problems with invalid characters, images, multiple subquestions, and those lacking valid answers. We conducted strict character-based and semantic-based deduplication and decontamination on the dataset to ensure thorough data cleansing. We also remove problems that cannot be uniquely solved or are susceptible to being easily guessed, such as multiple-choice questions and problems that can be answered with True/False, Yes/No, etc.

Answer Verification To ensure the correctness of answers associated with problems in our dataset, we conduct thorough verification using diverse approaches.
Specifically, we employ an LLM-based method to assess the quality of each answer. We utilize LLMs of different sizes to generate multiple individual solutions for each problem. Based on the verifiers used in RL training, we calculate the model-aware pass rate. Additionally, we engage human experts to manually annotate the answers. Problems that do not pass either verification method are excluded from our dataset.

Data Annotation To optimize the data selection strategy, we meticulously annotate each reasoning problem. Specifically, each problem is labeled with multi-dimensional attributes, such as data source, educational level, domain-specific knowledge, and more. For instance, we use the Mathematical Subject Classification (MSC) categories to assess the themes of our math problems. Additionally, we provide a model-aware difficulty by computing the solve rate based on our distilled model. Problems whose sampled solutions are all correct are deemed inefficient for RL training; therefore, we remove those problems. Conversely, problems that are unsolvable by both our distilled model and DeepSeek-R1 are also discarded, ensuring that the remaining data contribute effectively to policy gradient updates in reinforcement learning.

# 3 Method

# 3.1 Preliminary

The Group Relative Policy Optimization (GRPO) algorithm is widely used, e.g., by DeepSeek-R1 and Qwen3. For each question-answer pair $(q, a)$ in the training dataset $\mathcal{D}$, we generate $K$ responses (i.i.d.) through the policy model $\pi_{\theta_{\mathrm{old}}}$. The reward $R_i$ of the response $y_i$ is determined by the reward model or a rule-based verifier. GRPO estimates the advantage via group-normalized rewards instead of a value model:

$$
A_{i,t} = \frac{R_i - \mathrm{mean}\left(\{R_i\}_{i=1}^{K}\right)}{\mathrm{std}\left(\{R_i\}_{i=1}^{K}\right)}.
$$
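The group-normalized advantage can be sketched in a few lines of plain Python. The small epsilon in the denominator is an added assumption to avoid division by zero when all rewards in a group are equal; the population standard deviation is used here, which may differ from a given implementation.

```python
import statistics

def group_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each response's reward by the mean
    and standard deviation of its K-sample group (no value model needed)."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)   # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]

# K = 4 rollouts for one prompt, binary verifier rewards:
adv = group_advantages([1.0, 0.0, 0.0, 1.0])
# correct responses get positive advantage, incorrect ones negative
```

With binary verifier rewards the advantage is shared by every token of a response, which is exactly why the per-response length normalization discussed in the next section can bias gradients between short and long sequences.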
Specifically, the GRPO loss is formulated as:

$$
\mathcal{L}_{\mathrm{GRPO}}(\theta) = -\,\mathbb{E}_{(q,a)\sim\mathcal{D},\,\{y_i\}_{i=1}^{K}\sim\pi_{\theta_{\mathrm{old}}}(\cdot\mid q)}\left[\frac{1}{K}\sum_{i=1}^{K}\frac{1}{|y_i|}\sum_{t=1}^{|y_i|}\left(\operatorname*{min}\left(r_{i,t}(\theta)A_{i,t},\ \mathrm{clip}\left(r_{i,t}(\theta),1-\varepsilon,1+\varepsilon\right)A_{i,t}\right)-\beta D_{\mathrm{KL}}(\pi_{\theta}\,\|\,\pi_{\mathrm{ref}})\right)\right],
$$

where $r_{i,t}(\theta)=\frac{\pi_{\theta}(y_{i,t}\mid q,\,y_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(y_{i,t}\mid q,\,y_{i,<t})}$ and $\varepsilon$ is the clip bound. $D_{\mathrm{KL}}(\pi_{\theta}\,\|\,\pi_{\mathrm{ref}})$ is the token-level KL loss, keeping the policy model $\pi_{\theta}$ close to the reference policy $\pi_{\mathrm{ref}}$.

# 3.2 C3PO: Constrained Contextual Computation Policy Optimization

Observation. While long-CoT reasoning remains essential for handling complex tasks, current GRPO methods exhibit systemic length-related issues that fundamentally undermine training stability, especially for long-CoT reasoning models. Specifically, our analysis reveals the following two critical issues affecting training stability:

• Within-step length bias: Within a single batch, unpacked responses of varying lengths induce substantial gradient bias under the GRPO objective, as initially identified in pioneering studies (Yu et al., 2025; Liu et al., 2025). This bias primarily arises from length-normalized gradient estimation: by normalizing per-response losses through division by their token counts (termed per-token scaling), the procedure systematically amplifies gradient magnitudes for shorter sequences while attenuating them for longer ones.
While recent research (Yu et al., 2025; Liu et al., 2025) has introduced token-level loss mechanisms to address within-step length bias, the persistence of across-step gradient variance continues to pose challenges, as elaborated below.

• Across-step gradient variance: During RL training with exploratory mechanisms, the policy model's generated responses exhibit substantial stochastic variance in sequence length. This dynamic sequence-length variation induces non-trivial optimization challenges for token-level optimizers, as fluctuating lengths create training-token inconsistencies that propagate through the learning pipeline. As empirically validated in Figure 7, high-variance length fluctuations (Figure 7a) result in abnormal gradient characteristics (Figure 7b), ultimately leading to premature reward collapse (Figure 7c).

In addition to the challenges associated with training instability, our empirical observations revealed that variations in response length significantly influence training efficiency. Specifically, longer response sequences result in increased inference and training latency, whereas shorter sequences compromise computational throughput efficiency, as shown in Figure 7d.

Figure 3 The comparison between our C3PO strategy and the widely-used dynamic sampling strategy. C3PO performs token truncation when the token count exceeds the budget, after advantage computation but prior to gradient backpropagation.

Methodology. Building upon these empirical observations, we posit that synergistic algorithm-engineering co-design constitutes a foundational requirement for achieving stable and scalable reinforcement learning training. To translate this principle into practice, we introduce Constrained Contextual Computation Policy Optimization (C3PO), an innovative token-level optimization framework designed to mitigate training instability while enhancing throughput consistency.
The core innovation lies in establishing a formalized computational budgeting system that imposes explicit constraints on gradient contributions at the token level, thus ensuring homogeneous gradient contributions across variable-length sequences. With the training token budget, the GRPO loss function in Eq. 1 is reformulated as follows,

$$
\begin{aligned}
\mathcal{L}_{\mathrm{C3PO}}(\theta) = -\,&\mathbb{E}_{\{(q,a)\}_{l=1}^{L}\sim\mathcal{D},\,\{y_i\}_{i=1}^{K}\sim\pi_{\theta_{\mathrm{old}}}(\cdot\mid q)} \\
&\left[\frac{1}{\Phi}\sum_{i=1}^{|S|}\sum_{t=1}^{|y_i|}\mathbb{I}\left[y_{i,t}\in\Psi\right]\left(\operatorname*{min}\left(r_{i,t}(\theta)A_{i,t},\ \mathrm{clip}\left(r_{i,t}(\theta),1-\varepsilon,1+\varepsilon\right)A_{i,t}\right)-\beta D_{\mathrm{KL}}(\pi_{\theta}\,\|\,\pi_{\mathrm{ref}})\right)\right], \\
&\mathrm{s.t.}\ |\Psi| = \Phi,
\end{aligned}
$$

where $\Phi$ is the training token budget (i.e., constrained contextual computation) and $\Psi$ is the set of tokens selected by a custom sampling strategy. $S$ is the set of responses selected for training, $L$ is the query size per step, $K$ denotes the group size, and $|y_i|$ denotes the token count of the $i$-th response. In our experiments, the set $\Psi$ is sampled as below:

$$
\Psi = \{(y_1, y_2, \cdots, y_N) \mid y_i \in B\}, \quad \mathrm{s.t.}\ N \leq |B|,\ \sum_{j=1}^{N-1}|y_j| < \Phi,\ \sum_{j=1}^{N}|y_j| \geq \Phi,
$$

where $\mathcal{B} = \{(q,a);\{y_i\}_{i=1}^{K}\}_{l=1}^{L}$ is the entire set in a training step.
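The budget-filling rule above (keep adding responses until the cumulative token count first reaches Φ, then cut back to exactly Φ tokens) can be sketched as follows. Treating responses as plain token lists and truncating the last response to fit the budget are simplifications of the actual pipeline.

```python
# Sketch of C3PO's token-budgeted selection: greedily add responses until
# the cumulative token count reaches or exceeds the budget Phi, then
# truncate the token set to exactly Phi tokens for the optimizer step.

def select_token_budget(responses, phi):
    """responses: list of token lists (all groups flattened for one step).
    Returns (selected_responses, n_tokens_kept); n_tokens_kept == phi,
    assuming the batch contains at least phi tokens in total."""
    selected, total = [], 0
    for resp in responses:
        selected.append(resp)
        total += len(resp)
        if total >= phi:
            break
    # tokens beyond the budget in the last response are masked out
    overflow = total - phi
    if overflow:
        selected[-1] = selected[-1][:-overflow]
    return selected, sum(len(r) for r in selected)

batch = [list(range(300)), list(range(500)), list(range(400))]
sel, kept = select_token_budget(batch, phi=700)
# two responses are taken; the second is truncated from 500 to 400 tokens
```

Because every optimizer step now sees exactly Φ tokens, the $1/\Phi$ normalization in the loss is a constant across steps, which is what removes the across-step gradient variance discussed above.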
During a practical training step, we implement token-budgeted dynamic sampling in the training phase. For each batch, we employ a greedy algorithm to iteratively select enough responses $S$ so that their cumulative token count just reaches or exceeds the predefined token budget $\Phi$. By implementing a fixed token budget per optimization step, C3PO systematically mitigates GRPO's sensitivity to variations in individual sequence lengths. This design facilitates two critical improvements: 1) Homogeneous gradient scaling: the uniform factor $1/\Phi$ ensures equivalent gradient contributions across responses of varying token lengths, resolving the disproportionate weighting bias between short and long sequences inherent in conventional approaches. Furthermore, such a design mitigates abnormal gradient magnitudes caused by fluctuations in response length, effectively preventing the destabilization of training dynamics and subsequent reward collapse. 2) Deterministic training dynamics: predictable computational loads eliminate burst-induced latency spikes while ensuring step-time consistency in distributed training environments.

Complementing the C3PO framework, we apply entropy regularization (He et al., 2025a) to the loss function, explicitly penalizing low-variance action distributions and thereby encouraging exploration of the policy model:

$$
\mathbb{H}(\theta) = \mathcal{H}(\pi_{\theta}(\cdot \mid y_{i,<t})).
$$

Figure 3 compares our C3PO strategy with the widely-used dynamic sampling strategy (Yu et al., 2025). Each line represents a batch of grouped responses generated from the policy model for a single training step. Each group contains the same number of responses, each composed of a varying number of tokens. In each training step, all tokens are aggregated to form a token-level global batch, which is then fed into optimizers such as AdamW (Loshchilov and Hutter, 2019).
Because the total number of tokens varies across steps, the optimizer faces high-variance gradients, resulting in convergence difficulties. Previous approaches such as dynamic sampling (Yu et al., 2025; Team et al., 2025) operate at the sample level by filtering and removing samples, yet fail to adequately address this class of problems. C3PO instead operates at the token level, sampling tokens to form a token-level global batch, so that each training step feeds a consistent number of tokens to the optimizer. This reduces gradient variance and consequently achieves significantly more stable optimization. Our model Ring-lite adopts an MoE architecture, which fundamentally differs from conventional dense models due to its inclusion of multiple specialized experts. However, this design introduces challenges related to expert imbalance during training. To enhance training efficiency and prevent imbalances in token distributions across experts, we incorporate the load balance loss and router $z$-loss detailed in Ling-Team (2025), formulated as:
$$ \mathbb{B}(\theta) = \frac{1}{N_e} \sum_{i=1}^{N_e} P_i \cdot F_i \cdot N_e, \quad \mathbb{Z}(\theta) = \frac{1}{M} \sum_{i=1}^{M} \Big( \log \sum_{j=1}^{N_e} \exp(z_{ij}) \Big)^2, $$
where $N_e$ is the number of experts, $M$ is the number of tokens, $P_i$ and $F_i$ are the average routing probability and selection count of the $i$-th expert across all tokens in a batch, and $z_{ij}$ are the logits of the router.
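The two auxiliary losses above can be sketched directly from their definitions. In the sketch below, treating $F_i$ as the fraction (rather than the raw count) of tokens routed to expert $i$ is an assumption made for illustration:

```python
import math

def load_balance_loss(router_probs, expert_choice):
    """Balance loss B = (1/N_e) * sum_i P_i * F_i * N_e, with P_i the mean
    routing probability of expert i over the batch and F_i the fraction of
    tokens routed to expert i (interpreting the paper's "count" as a
    fraction is an assumption)."""
    n_e = len(router_probs[0])
    m = len(router_probs)
    P = [sum(p[i] for p in router_probs) / m for i in range(n_e)]
    F = [expert_choice.count(i) / m for i in range(n_e)]
    return sum(P[i] * F[i] * n_e for i in range(n_e)) / n_e

def router_z_loss(router_logits):
    """z-loss Z = (1/M) * sum_i (log sum_j exp(z_ij))^2, penalizing large
    router logits to keep routing numerically well-behaved."""
    total = 0.0
    for z in router_logits:
        lse = math.log(sum(math.exp(zj) for zj in z))
        total += lse ** 2
    return total / len(router_logits)

# Perfectly balanced routing over two experts:
lb = load_balance_loss([[0.5, 0.5], [0.5, 0.5]], [0, 1])   # -> 0.5
zl = router_z_loss([[0.0, 0.0]])                           # -> (ln 2)^2
```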
In summary, the overall training loss is formulated as:
$$ \mathcal{L} = \mathcal{L}_{\mathrm{C3PO}}(\theta) + \alpha_{\mathrm{entropy}} \cdot \mathbb{H}(\theta) + \alpha_{\mathrm{balance}} \cdot \mathbb{B}(\theta) + \alpha_{\mathrm{zloss}} \cdot \mathbb{Z}(\theta) $$
# 3.3 Reward Model
# 3.3.1 Math & Science Verifier
Our training framework incorporates rule-based verifiable rewards in reinforcement learning, which have proven effective for advancing the reasoning ability of large language models (DeepSeek-AI, 2025). For mathematical and scientific tasks, we append a brief instruction prompt after each input query to facilitate long chain-of-thought reasoning, i.e., "Please reason step by step, and put your final answer within \boxed{}." We employ an external verification tool, Math-Verify, to evaluate the correctness of model responses. Specifically, a score of 1 is awarded for responses that correctly match the ground-truth answers, and a score of 0 is assigned to incorrect solutions. Since Math-Verify provides robust parsing that accommodates various mathematical notations and expressions, we do not include any explicit format-related reward in our training framework.
# 3.3.2 Code Verifier
For code tasks, we build a code sandbox for reward verification, supporting code execution and online-judge tasks across several programming languages (e.g., Python, C++, Java). It offers multiple execution modes (function calls, online judging, unit testing) and interaction paradigms (real-time SDK/API for training, offline batch processing for data cleaning), achieving 8K/s throughput with sub-second latency. With the code sandbox, we employ a sparse outcome reward for RL training on code tasks.
Specifically, the reward is defined based on the execution results from the sandbox, i.e.,
$$ R_{\mathrm{code}} = \begin{cases} 1, & \text{all test cases passed} \\ 0, & \text{otherwise} \end{cases} $$
Notably, the reward mechanism employs a sparse design: a reward of 1 is granted only if the code successfully passes all test cases; otherwise, the reward remains 0. This stands in stark contrast to incremental reward systems that offer partial credit for incomplete or partially correct solutions. By adopting this strategy, we ensure that models are incentivized to gain a thorough understanding of the problem rather than focusing on superficial test cases. This prevents models from simply regurgitating answers to public test cases or overfitting to trivial edge cases, encouraging a more robust and well-rounded approach to problem-solving.
# 3.4 Training Pipeline
Figure 4 The training pipeline of Ring-lite
The overview of the training pipeline is depicted in Figure 4. It consists of a four-stage training process. We first conduct Long Chain-of-Thought Supervised Fine-Tuning (Long-CoT SFT) to obtain our distilled model, Ring-lite-distill. This stage directly distills the reasoning ability of a larger teacher model into our small-sized base model. From our preliminary analysis, we find that with our meticulously curated reasoning data, the reasoning ability of the distilled model can be further enhanced. However, directly applying reinforcement learning (RL) on mixed reasoning data is vulnerable to domain conflict, resulting in performance declines. Thus, we adopt a two-stage RL training pipeline: first, we run RL training on a math dataset, then incorporate code and science datasets in subsequent RL training.
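The binary verifiable rewards of Section 3.3 can be sketched as below. The math check is a much-simplified stand-in for Math-Verify's parser, and the code check runs candidates in-process rather than in an isolated sandbox; both simplifications are assumptions for illustration only:

```python
import re

def math_reward(response, ground_truth):
    """Binary math reward: 1 iff the final \\boxed{...} answer matches the
    ground truth exactly (Math-Verify's real notation matching is far more
    robust than this string comparison)."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return 1 if matches and matches[-1].strip() == ground_truth.strip() else 0

def code_reward(candidate_fn, test_cases):
    """Sparse code reward: 1 only if ALL test cases pass; any failure or
    exception yields 0, with no partial credit."""
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return 0
        except Exception:
            return 0
    return 1

r_math = math_reward("The answer is \\boxed{42}.", "42")                 # -> 1
r_good = code_reward(lambda a, b: a + b, [((1, 2), 3), ((2, 2), 4)])     # -> 1
r_part = code_reward(lambda a, b: a + b + 1, [((1, 2), 4), ((2, 2), 4)]) # -> 0
```

The last call shows the sparse design: a candidate passing one of two test cases still receives reward 0.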
This two-stage approach empirically preserves reasoning abilities across diverse fields. As both Long-CoT SFT and the two-stage RL training focus on improving performance on reasoning tasks, we additionally include a general SFT stage to enhance the model's abilities on various general tasks, such as instruction following, creative writing, and safety.
# 4 Experiment
# 4.1 Experimental Setup
# 4.1.1 Training Settings
As introduced in the training pipeline, to enhance the model's reasoning capabilities, we performed SFT on Ling-lite-1.5 (Ling-Team, 2025) using the well-constructed Long-CoT dataset. For optimization, we employed the AdamW optimizer with a weight decay of 0.1 and a learning rate of 3e-4, following a cosine decay schedule with a 1% linear warmup. The training configuration included a batch size of 256 over 3 epochs. To facilitate long-context reasoning, we set the context window of the model to 32,768 tokens and adjusted the RoPE base to 600,000 for improved stability. In RL training with C3PO, we use a batch size $L$ of 512, sample $K = 16$ responses per prompt, and adopt a learning rate of 3e-6 with the AdamW optimizer. The token-budget parameter $\Phi$ is set to 409,600. The maximum total length is configured to 24,576 and is extended to 32,768 in the second stage of code & science training. We set the entropy loss coefficient $\alpha_{\mathrm{entropy}} = 5e{-}4$, load balance loss coefficient $\alpha_{\mathrm{balance}} = 1e{-}5$, router $z$-loss coefficient $\alpha_{\mathrm{zloss}} = 1e{-}7$, and KL loss coefficient $\beta = 1e{-}3$. All experiments were performed on 256 NVIDIA H800 GPUs.
# 4.1.2 Benchmarks
For a comprehensive evaluation of the quality of our reasoning models, we implemented automatic benchmarks to assess their performance, categorized into the following dimensions.
• Math: MATH-500 (Lightman et al., 2024), AIME 2024, AIME 2025 (AIME, 2025), CNMO 2024, LiveMathBench (Liu et al., 2024), MinervaMath (Lewkowycz et al., 2022).
• Coding: LiveCodeBench (Jain et al., 2025) (202408–202501), Codeforces.
• Science: GPQA Diamond (Rein et al., 2023), OlympiadBench (He et al., 2024).
# 4.1.3 Baselines
We conduct comprehensive evaluations against several baselines of similar parameter sizes, including Qwen3-8B-Thinking (Yang et al., 2025), R1-Distill-Qwen-7B, R1-Distill-Qwen-14B (DeepSeek-AI, 2025), AceReason-Nemotron-7B (Chen et al., 2025) and Ring-lite-distill-preview (Tang et al., 2025).
# 4.1.4 Evaluation Settings
For all reasoning models, we utilize a temperature of 1.0, a top-p value of 0.95, and a maximum output length of 32,768 tokens. In addition, the prompts are unified under the zero-shot setting. For mathematics benchmarks, we use Math-Verify as the evaluator to score model generations.
# 4.2 Main Results
The evaluation results are shown in Table 1. To provide a fair comparison, we evaluate Ring-lite against recent competitive reasoning models with approximately 10B parameters. As shown in Table 1, Ring-lite achieves the best average score across multiple reasoning tasks while using only 2.75B active parameters. This establishes Ring-lite as the new state-of-the-art reasoning model among small-scale MoE models, offering performance comparable to or even surpassing that of the latest strong reasoning dense models under 10B parameters, i.e., Qwen3-8B-Thinking. Additionally, compared to our previously released distilled MoE model, Ring-lite-distill-preview, Ring-lite significantly improves reasoning performance on all benchmarks, further demonstrating the superiority of our training pipeline.
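As a side note to the training settings in Section 4.1.1, the SFT learning-rate schedule (cosine decay to follow a 1% linear warmup up to 3e-4) can be sketched as follows. Decaying to zero and the exact step conventions are assumptions, since the paper does not state a minimum learning rate:

```python
import math

def lr_at_step(step, total_steps, peak_lr, warmup_frac=0.01):
    """Cosine-decay learning-rate schedule with a linear warmup over the
    first warmup_frac of training. After warmup, the rate follows half a
    cosine period down toward zero (an assumed floor)."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

peak = lr_at_step(9, 1000, 3e-4)    # last warmup step: full 3e-4
start = lr_at_step(0, 1000, 3e-4)   # first step: 10% of peak
```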
# 4.3 Key Findings
In this section, we analyze the central observations derived from reinforcement learning training across diverse domains, with particular focus on emergent instability phenomena, training efficiency of SFT versus RL methodologies, and inter-domain data conflicts. To ensure a fair comparison, we applied supervised fine-tuning to the Qwen2.5-7B-Base (Qwen et al., 2025) model using our Long-CoT dataset as stated in Section 2.1, resulting in the Ring-distill-Qwen-7B model.
Table 1 The comparison of different reasoning models. The best and second-best results are in bold and underlined. Results with ∗ are collected from their original papers. All models are evaluated with the same evaluation setting.
# 4.3.1 Move Towards Stable and Efficient RL
As outlined in the methodology, we empirically observe significant training instability and throughput fluctuations in GRPO during RL training. Here we present a systematic experimental evaluation confirming these phenomena, followed by a quantitative analysis establishing the effectiveness of our proposed method in addressing both challenges:
# 1. Reward Collapse Phenomenon in RL Training on Distilled Models
During reinforcement learning training with distilled models, we found that reward trajectories exhibit a precipitous decline after a few training steps, failing to recover to baseline levels and culminating in complete training collapse. Through rigorous empirical diagnostics, we identify two critical factors governing RL training stability: model entropy, which quantifies policy degradation in distilled models, and response length fluctuation, a measure of sequence generation instability. These factors demonstrate strong correlation with reward collapse, as evidenced by the following quantitative ablation studies:
Figure 5 The reward and entropy curves of Ring-lite.
Figure 6 The reward and entropy curves of Ring-distill-Qwen-7B
– Model Entropy: As shown in Figure 5, reward collapse during RL training exhibits a systematic dependence on the number of Long-CoT SFT epochs: models trained with a greater number of SFT epochs exhibit earlier onset of reward collapse. This trend is accompanied by a concurrent reduction in entropy loss (Figure 6), revealing a robust inverse correlation between the magnitude of entropy loss and stability during RL training. Collectively, these results underscore that lower entropy loss during SFT corresponds to a higher propensity for reward collapse in subsequent RL phases. Notably, this pattern persists across both MoE and dense model architectures, indicating architectural invariance of the observed phenomena.
– Response Length Fluctuation: Figure 7 (Ring-lite) and Figure 8 (Ring-distill-Qwen-7B) demonstrate that generation length exhibits great variability across training steps, resulting in significant fluctuations in the number of training tokens. These unstable training token counts greatly affect optimization stability, as evidenced by pronounced increases and occasional spikes in gradient norms, leading to catastrophic reward collapse. This observation underscores the imperative need for strategies that mitigate both entropy loss and generation length variability to ensure stable RL training.
# 2. RL Training Throughput Fluctuation
Besides the challenge of reward collapse, our empirical observations reveal substantial throughput fluctuations emerging during RL training, which present considerable challenges for optimizing training efficiency in distributed systems. As shown in Figure 7 (Ring-lite) and Figure 8 (Ring-distill-Qwen-7B), these variations are primarily attributable to response length variability.
Specifically, longer response sequences necessitate prolonged computation per training step, whereas shorter sequences underutilize computational resources, diminishing throughput efficiency. This dynamic throughput behavior introduces significant optimization challenges in system design, as unpredictable computational demands complicate efficient resource allocation and scheduling. Since the GRPO method suffers from these training instability and throughput fluctuation problems, C3PO is proposed to address them through two key mechanisms: (1) selecting SFT checkpoints associated with higher entropy loss to stabilize optimization dynamics, and (2) imposing a fixed token budget during training to ensure training token consistency. To validate the efficacy of our method, we conduct experiments using two distilled initialization models with different architectures: Ring-lite-distill and Ring-distill-Qwen-7B.
Figure 7 Training Dynamics of Ring-lite
As illustrated in Figures 7 and 8, models trained with fewer epochs exhibit enhanced stability during RL training. However, our empirical analysis (Table 2) reveals a critical trade-off between training stability and final model performance. While models initialized with Epoch 1 checkpoints demonstrate superior stability in both configurations, their performance metrics lag significantly behind those of Epoch 7 and Epoch 3. Conversely, Epoch 9 achieves the highest initial performance but suffers from destabilization during later RL training phases, ultimately failing to surpass the results of Epoch 7 and Epoch 3. Furthermore, our methodological innovation of maintaining a fixed training token size enables C3PO to consistently outperform GRPO across four critical metrics: generation length stability, gradient stability, reward stability, and throughput stability (Figures 7 and 8).
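The model entropy tracked throughout these experiments is the Shannon entropy of the policy's next-token distribution. A minimal, self-contained sketch of its computation from raw logits (standalone illustration, not the training code):

```python
import math

def token_entropy(logits):
    """Shannon entropy of softmax(logits) for one next-token distribution.
    Low values indicate the near-deterministic policies that the text
    associates with early reward collapse."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return -sum((e / total) * math.log(e / total) for e in exps)

flat = token_entropy([0.0, 0.0, 0.0])     # uniform over 3 tokens: ln(3)
peaked = token_entropy([10.0, 0.0, 0.0])  # near-deterministic: close to 0
```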
In short, C3PO not only resolves the instability inherent in GRPO on distilled models but also ensures efficient RL training, thereby bridging the gap between training robustness and model capability.
# 4.3.2 From Distill to RL: The Art of Balancing Token Efficiency
For these experiments, we used a constant warm-up learning rate scheduler with rates [1e-6, 3e-6] and the AdamW optimizer. Specifically, Ring-MoE performed better with a learning rate of 3e-6, while Qwen achieved better results at 2e-6. We utilized a training batch size of 512 prompts, a mini-batch size of 2, and generated 16 responses for each prompt. In both the rollout and evaluation phases, the temperature was set to 1.0 to promote response diversity. For all methods, we set the clipping bound $\epsilon$ to 0.2 and the KL coefficient to 0.001. The maximum total length was configured to 24,576. All experiments were performed on 256 NVIDIA H800 GPUs.
Figure 8 Training Dynamics of Ring-distill-Qwen-7B
To further validate the generalizability of our findings, we conducted additional experiments on the Qwen series of models (Bai et al., 2023), which have emerged as highly influential in the open-source community. Following the same methodology applied to Ring-lite, we first fine-tuned the Qwen2.5-7B-Instruct model using our Long-CoT dataset and subsequently performed RL training. We find that while distillation is effective, it requires significantly more training tokens than reinforcement learning (RL) to achieve comparable performance. Empirically, selecting checkpoints with entropy loss in the range of 0.3-0.5 yields optimal results in our RL training setting. Entropy loss values below this threshold limit model exploration and reduce the chances of learning to solve more challenging problems, whereas excessive entropy loss leads to slower convergence and degraded model performance.
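The 0.3-0.5 entropy band suggests a simple checkpoint filter; the sketch below uses hypothetical epoch/entropy pairs purely for illustration, not the paper's measured values:

```python
def candidate_epochs(entropy_by_epoch, lo=0.3, hi=0.5):
    """Return SFT epochs whose entropy loss falls inside the empirically
    optimal [lo, hi] band for starting RL. Epochs below the band risk
    limited exploration; epochs above it converge slowly."""
    return [e for e, h in sorted(entropy_by_epoch.items()) if lo <= h <= hi]

# Hypothetical trend: more SFT epochs -> lower entropy loss.
eps = candidate_epochs({1: 0.8, 3: 0.45, 7: 0.35, 9: 0.2})  # -> [3, 7]
```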
From Figure 5b, we observe that varying the number of training epochs of the distilled model significantly influences the trend of entropy loss, thereby determining the exploration scope for RL. Based on our experiments, increasing the number of SFT training epochs leads to a rapid collapse of entropy. However, insufficient SFT training inevitably results in inferior performance. To systematically quantify the choice of the optimal SFT epoch, we employ token efficiency, i.e., $\frac{\#\,\mathrm{SFT\ training\ tokens}}{\#\,\mathrm{RL\ training\ tokens}}$, to evaluate the relationships among RL training steps, SFT training steps, and average downstream performance. As shown in Figure 9 and Table 2, the best performance is achieved with a moderate number of SFT training epochs and suitable token efficiency. In our training pipeline, we utilize these findings to select the optimal SFT model.
Figure 9 Reward curves across different SFT training epochs: (a) Ring-distill-Qwen-7B, (b) Ring-lite. Dots with the same color denote different values of token efficiency on the same SFT model.
Table 2 The comparison of RL training across different SFT epochs. The best results are in bold. $\triangle_{\mathrm{impr}}$ denotes the average performance improvement compared to the respective SFT models.
# 4.3.3 Resolving Domain Data Conflict: Beyond Mixed Solutions
For these experiments, we used a constant warm-up learning rate scheduler with rates [2e-6, 3e-6] and the AdamW optimizer. Specifically, Ring-MoE performed better with a learning rate of 3e-6, while Qwen achieved better results at 2e-6. We utilized a training batch size of 512 prompts and generated 16 responses for each prompt. In both the rollout and evaluation phases, the temperature was set to 1.0 to promote response diversity. For all methods, the KL coefficient was set to 0.001. The maximum total length was configured to 24,576. All experiments were performed on 256 NVIDIA H800 GPUs.
In our preliminary reinforcement learning (RL) experiments, we observed significant performance declines across various reasoning benchmarks when training our cold-start supervised fine-tuning (SFT) model on a combination of math and code reasoning datasets. We then conducted RL experiments on two representative distilled dense models: DeepSeek-R1-Distill-Qwen-7B and Ring-distill-Qwen-7B. As shown in Table 3, combining reasoning datasets from the math and code domains does not lead to performance gains across different fields. Instead, the mixed dataset fails to outperform models trained exclusively on either math or code datasets. Notably, the experimental findings derived from our distilled models reveal that math-only training achieves superior performance on coding benchmarks compared to code-only training, irrespective of the model's architectural configuration. However, this observation does not extend to the DeepSeek-derived models, indicating that the performance of RL training may be strongly influenced by the Long-CoT data used in the SFT period. Conversely, code-only RL does not provide additional improvements on math tasks. These results indicate that mixing diverse reasoning domains may introduce conflicts that hinder overall performance; specialized training on individual domains appears more effective for optimizing performance in each specific area.
Table 3 The comparison of different training stages on Ring-lite, Qwen and DeepSeek distilled models. The best results are in bold; performance differences compared to the best performance are denoted with arrows and numbers.
To enhance overall reasoning performance across diverse areas when training with multiple domain-specific datasets, we divided our RL training into multiple stages. Specifically, we first conducted RL experiments using only the math dataset, followed by applying RL with scientific and code datasets.
As shown in Table 4, our two-stage training strategy significantly improved downstream performance on challenging reasoning benchmarks, such as AIME25 and LiveCodeBench. Additionally, by doubling the amount of code and scientific training data, we achieved an average performance increase of 1% on both math and scientific benchmarks. Based on these results, we adopted this two-stage training strategy for Ring-lite to maintain superior overall reasoning abilities across multiple domains.
Table 4 The comparison of different training strategies.
We present Ring-lite, a Mixture-of-Experts (MoE)-based large language model optimized via reinforcement learning (RL) to achieve efficient and robust reasoning capabilities. Built upon the publicly available Ling-lite model, a 16.8 billion parameter model with 2.75 billion activated parameters, our approach matches the performance of state-of-the-art (SOTA) small-scale reasoning models on challenging benchmarks (e.g., AIME, LiveCodeBench, GPQA-Diamond) while activating only one-third of the parameters required by comparable models. To accomplish this, we introduce a joint training pipeline integrating distillation with RL, revealing undocumented challenges in MoE RL training. First, we identify optimization instability during RL training, and we propose Constrained Contextual Computation Policy Optimization (C3PO), a novel approach that enhances training stability and improves computational throughput via an algorithm-system co-design methodology. Second, we empirically demonstrate that selecting distillation checkpoints based on entropy loss for RL training, rather than on validation metrics, yields superior performance-efficiency trade-offs in subsequent RL training. Finally, we develop a two-stage training paradigm to harmonize multi-domain data integration, addressing the domain conflicts that arise when training with mixed datasets. We will release the model, dataset, and code.
# 1 Introduction
Efficient query processing is essential for the performance of modern database management systems, particularly as the complexity and size of data continue to grow. Join operations, which combine records from multiple tables, play a pivotal role in this process; however, traditional join algorithms often face significant efficiency challenges when processing complex queries that produce intermediate results much larger than the final output. The emergence of worst-case optimal join (WCOJ) algorithms [8-10, 19] represents a significant advancement, offering asymptotically better performance by avoiding the enumeration of potentially exploding intermediate results. WCOJ algorithms can benefit both cyclic and acyclic queries, contrary to the common belief that they are designed only for cyclic queries. There have been previous efforts [1, 3, 7] to adopt an approach where WCOJ algorithms are used for certain parts of a query while traditional join algorithms are applied to the rest. However, the use of two distinct algorithms within the same system introduces additional complexity, which has hindered the broader adoption of WCOJ methods. The current state-of-the-art system, and our main competitor in this work, is Free Join [20], which aims to address this challenge by unifying WCOJ with traditional join methods. Free Join provides a platform capable of executing a wide range of join queries, offering performance benefits in both WCOJ and traditional join scenarios. However, despite these advances, Free Join supports only a specific class of WCOJ algorithms, namely hash-based approaches, limiting its coverage and flexibility in handling other algorithmic paradigms such as sort-based joins. Our approach stands out by leveraging advancements in programming languages, specifically through the use of SDQL [18], an intermediate language designed for functional collection programming using semi-ring dictionaries.
By employing SDQL as our intermediate representation, we can translate worst-case optimal join (WCOJ) queries into an intermediate representation that can be directly compiled into highly efficient C++ code. This gives our system a key advantage in flexibility and performance optimization. The use of SDQL enables us to introduce a unified architecture that efficiently supports both traditional binary joins and WCOJ algorithms. Moreover, our system can handle both the hash-based and sort-based paradigms of WCOJ processing, significantly improving over state-of-the-art systems such as Free Join. Existing systems typically focus on only one approach, either hash-based or sort-based, while we provide support for both, ensuring that our system can adapt to various input data types and query execution scenarios. This broad capability enhances the versatility and overall performance of our system across a wide range of join queries. We demonstrate that our system not only matches but also outperforms the state-of-the-art Free Join framework. In this paper, we present the following contributions: (1) A unified architecture that integrates both efficient binary join and WCOJ processing (Section 3). (2) A comprehensive set of optimizations that refine our initial implementation into a highly optimized system (Sections 4.1 and 4.2). (3) A novel hybrid approach, along with support for sort-based WCOJ algorithms, that leverages the strengths of both hash-based and sort-based paradigms (Section 4.3). (4) A detailed experimental evaluation of our system and the applied optimizations (Section 5). Specifically, we show that our method achieves speedups of up to $3.1\times$ and $4.8\times$ compared to the Generic Join and Free Join implementations within the Free Join framework, respectively.
# 2 Background
A full conjunctive query is expressed in the form shown in Eq (1).
In this notation, each term $R_i(x_i)$ is referred to as an atom, where $R_i$ represents a relation name and $x_i$ is a tuple of variables. The query is considered full because the head variables $x$ encompass all variables that appear in the atoms. We assume that selections have been pushed down to the base tables, meaning that each atom $R_i$ may already incorporate a selection over a base relation. Similarly, projections and aggregations are performed only after the full join operation is executed, and hence they are not explicitly included in Eq (1).
$$ Q(x) :- R_1(x_1), \ldots, R_m(x_m) $$
Example. Throughout this paper, we will make use of a conjunctive query referred to as $Q_{\pm}$, denoted in Eq (2).
$$ Q_{\pm}(x, a, b) :- R(x, a), S(x, b), T(x) $$
The SQL query of $Q_{\pm}$ is as below, and its corresponding implementation using binary joins in SDQL and C++ is shown in Figure 1.
SELECT * FROM R, S, T
WHERE R.x = S.x AND R.x = T.x AND S.x = T.x
# 2.1 Generic Join
The Generic Join algorithm, first introduced in [10], represents the simplest worst-case optimal join algorithm. It builds upon the earlier Leapfrog Triejoin algorithm [19]. The Generic Join algorithm computes the query $Q$ from Eq (1) by executing a series of nested loops, where each loop iterates over a specific query variable. In particular, Generic Join arbitrarily selects a variable $x$, computes the intersection of all $x$-columns across the relations that contain $x$, and for each value $\theta$ in this intersection, it evaluates the residual query $Q[x/\theta]$.
In the residual query, every relation $R$ containing $x$ is replaced by $\sigma_{x=\theta}(R)$ (or equivalently $R[\theta]$). If the query $Q$ contains $k$ variables, the algorithm proceeds with $k$ nested loops. In the innermost loop, Generic Join outputs a tuple consisting of the constants derived from each iteration. The Generic Join algorithm is provably worst-case optimal, achieving a time complexity of $O(n^{1.5})$, where $n$ represents the maximum possible size of the output [2, 10]. In contrast, binary join algorithms can exhibit a time complexity of $O(n^2)$. While binary joins rely on hash tables for efficiency, Generic Join leverages a hash trie structure for each relation in the query. A hash trie is a tree structure in which each node is either a leaf node or a hash map that associates each atomic attribute value with another node.
# 2.2 Free Join
A Free Join plan defines how the Free Join algorithm is executed, serving as a generalization and unification of both binary join plans and Generic Join plans [20]. In a left-deep linear plan for binary joins, the execution order is represented as a sequence of relations, where join attributes are implicitly determined by the shared attributes between relations. In contrast, a Generic Join plan outlines a sequence of variables and does not explicitly reference the relations, as joins are performed on any relation sharing a particular variable. A Free Join plan, however, allows for the joining of any number of variables and relations at each step, requiring both to be explicitly specified. Formally, a Free Join plan is represented as a list $[\phi_1, \ldots, \phi_m]$, where each $\phi_k$ is a list of subatoms from the query $Q$, referred to as a node.
The nodes must partition the query in the sense that, for every atom $R_i(x_i)$ in the query, the set of all subatoms across all nodes must constitute a partitioning of $R_i(x_i)$. A subatom of an atom $R_i(x_i)$ is of the form $R_i(y)$, where $y$ is a subset of the variables $x_i$. A partitioning of the atom $R_i(x_i)$ consists of subatoms $R_i(y_1), R_i(y_2), \ldots$, where the sets $y_1, y_2, \ldots$ form a partition of $x_i$. To construct a Free Join plan, the system begins with an optimized binary join plan produced by a traditional cost-based optimizer, such as DuckDB [12-14]. It first decomposes a bushy plan into a set of left-deep plans. Each left-deep plan is then converted into an equivalent Free Join plan. After conversion, further optimization yields a plan that can range from a left-deep plan to a Generic Join plan. Consider a plan derived from a straightforward conversion of the binary join plan for the clover query $Q_{\pm}$ into a Free Join plan, as shown in Eq (3). To execute the first node, we iterate over each tuple $(x, a)$ in $R$ and use $x$ to probe into $S$. For each successful probe, we proceed to the second node, iterating over each value $b$ in $S[x]$, and then using $x$ to probe into $T$.
$$ [[R(x, a), S(x)], [S(b), T(x)]] $$
The plan corresponding to the Generic Join plan for the clover query $Q_{\pm}$ is depicted in Eq (4). In this plan, execution starts by intersecting the sets $R.x \cap S.x \cap T.x$. For each $x$ in the intersection, the values of $a$ and $b$ are retrieved from $R$ and $S$, respectively, and their Cartesian product is computed. Additionally, after optimizing the naive plan from Eq (3), the resulting optimized Free Join plan for the clover query $Q_{\pm}$ is shown in Eq (5).
$$ [[R(x), S(x), T(x)], [R(a)], [S(b)]] \qquad (4) $$

$$ [[R(x, a), S(x), T(x)], [S(b)]] \qquad (5) $$

# 2.3 SDQL

We utilize SDQL, a statically typed language capable of expressing relational algebra with aggregations, and functional collections over data structures such as relations using semi-ring dictionaries. SDQL can serve as an intermediate language for data analytics, enabling the translation of programs written in relational algebra-based languages into SDQL. Figure 1 illustrates the SDQL program that executes $Q_{\pm}$ using traditional binary joins, alongside its equivalent generated C++ code. In this example, we first join relations $R$ and $S$ (lines 1-11), followed by a join with relation $T$ (lines 14-24). In SDQL, we use the let keyword to declare variables, such as a dictionary in line 1. The sum keyword enables iteration over dictionary entries, as demonstrated in line 2. Conditional logic is expressed with if, and membership is checked using the $\in$ operator to verify whether an element exists in a dictionary (line 8). If an element exists, its associated value can be accessed using the (...) lookup syntax, as shown in line 9. We employ std::tuple to implement records (line 11). In addition to these basic and predefined syntaxes, we extended SDQL to meet our requirements by incorporating support for various dictionary types and underlying data structures for dictionary representation. The subsequent sections explore these extensions in detail.

# 3 System

The methodology, as depicted in Figure 2, employs a multi-stage pipeline architecture. The initial phase transforms a binary join plan into a Free Join plan (Section 3.1). Subsequently, we generate a naive SDQL program corresponding to the Free Join plan (Section 3.2). Several optimizations are then applied to enhance the performance of the initial SDQL program (Section 4).
1  let S_ht =
2    sum(<i, _> <- range(S.size))
3      {S.x(i) -> {i -> 1}} in
4
5  let RS =
6    sum(<R_i, _> <- range(R.size))
7      let x = R.x(R_i) in
8      if (x ∈ S_ht) then
9        let Sx = S_ht(x) in
10       sum(<S_i, _> <- Sx)
11         {<x=x, a=R.a(R_i), b=S.b(S_i)> -> 1} in
12
13
14 let T_ht =
15   sum(<i, _> <- range(T.size))
16     {T.x(i) -> {i -> 1}} in
17
18
19 sum(<RS_i, _> <- range(RS.size))
20   let x = RS.x(RS_i) in
21   if (x ∈ T_ht) then
22     let Tx = T_ht(x) in
23     sum(<T_i, _> <- Tx)
24       {<x=x, a=RS.a(RS_i), b=RS.b(RS_i)> -> 1}

(a) SDQL.
1  HT<int, HT<int, bool>> S_ht;
2  for (int i = 0; i < S.size; ++i)
3    S_ht[S.x[i]][i] += 1;
4
5  HT<tuple<int, int, int>, int> RS;
6  for (int R_i = 0; R_i < R.size; ++R_i) {
7    auto x = R.x[R_i];
8    if (S_ht.contains(x)) {
9      auto &Sx = S_ht.at(x);
10     for (auto &[S_i, S_v] : Sx)
11       RS[{x, R.a[R_i], S.b[S_i]}] += 1;
12 }}
13
14 HT<int, HT<int, bool>> T_ht;
15 for (int i = 0; i < T.size; ++i)
16   T_ht[T.x[i]][i] += 1;
17
18 HT<tuple<int, int, int>, int> RST;
19 for (auto &[RS_i, RS_v] : RS) {
20   auto x = get<0>(RS_i);
21   if (T_ht.contains(x)) {
22     auto &Tx = T_ht.at(x);
23     for (auto &[T_i, T_v] : Tx)
24       RST[{x, get<1>(RS_i), get<2>(RS_i)}] += 1;
25 }}

(b) C++.

Figure 1: Implementation of $Q_{\pm}$ based on traditional binary joins in SDQL and C++.

Figure 2: An overview of our system architecture.

The final stage of the pipeline generates efficient C++ code from the SDQL program, facilitating efficient query execution (Section 3.3).

# 3.1 Planning

This approach begins by taking an optimized binary join plan as input and transforming it into a Free Join plan [20]. This transformation is based on the methodology described in Section 2.2 and the Free Join paper, which we follow for consistency and fair comparison.
The system starts with an optimized binary plan and then converts it to an equivalent Free Join plan. The corresponding plans for the Generic Join and Free Join algorithms for $Q_{\pm}$ resemble those in Eq (4) and Eq (5), respectively.

# 3.2 SDQL Program Generation

At this stage, we take the Free Join plan from the previous step and generate an efficient SDQL program as an intermediate representation. This process is divided into two phases: trie creation and query execution.

Trie Creation Phase. We construct the tries that will be utilized during the query execution phase. Within SDQL, these tries function as nested hash maps, where each leaf node is a set of offsets into the base relation, represented as a hash map with these offsets as keys and true as the value. Each internal level is a hash map that maps attribute values to the next level's hash map. For instance, as illustrated in lines 2-10 of Figure 3a and Eq (4), we need a trie for each of the relations $R$, $S$, and $T$ in $Q_{\pm}$, aligning with the Generic Join algorithm. These tries are created from attribute $x$ to the offsets of each relation, enabling access to other attributes in subsequent steps. In contrast, the Free Join implementation only requires building tries over relations $S$ and $T$ (lines 2-7 of Figure 4a), since we iterate directly over relation $R$.

Query Execution Phase. We generate an SDQL program that corresponds to the converted Free Join plan, as described in Section 3.1, utilizing the previously constructed tries. The execution of each node in the Free Join plan involves iterating over the first relation or its trie, depending on the plan, and using the attribute values to probe into each of the other tries.

Figure 3: Generic Join implementation of $Q_{\pm}$ in SDQL and C++.
In the example, for the first node $[R(x), S(x), T(x)]$ of the Generic Join plan, we iterate over $R$ and use the $x$ values to probe into $S$ and $T$. The process is similar for the first node $[R(x, a), S(x), T(x)]$ of the Free Join plan. For each $x$ value successfully probed in both, we proceed to execute the second node. This process is reflected in lines 14-17 and 11-15, corresponding to the first node of the plan in Figures 3a and 4a, respectively. The subsequent nodes compute the Cartesian product among the other attribute values to produce the final results for a given $x$ value. The translation of the nodes $[[R(a)], [S(b)]]$ is shown in lines 18-20 of Figure 3a, and the translation of $[S(b)]$ is shown in lines 16-17 of Figure 4a.

# 3.3 C++ Code Generation

The final component of our pipeline generates C++ code for SDQL programs, which is relatively straightforward, as illustrated in Figures 3 and 4. To enhance performance, summations that yield dictionaries are translated into loops that perform in-place updates. In addition to the transformations outlined in Section 2.3, the subsequent sections will demonstrate the translation of the extended data structures.

# 4 Efficiency

In this section, we sequentially apply a series of optimizations to the naive implementation of $Q_{\pm}$, each building upon the previous one for better comprehension. Section 4.1 introduces dictionary specialization, optimizing the underlying data structures for dictionary representation. Section 4.2 discusses the early projection techniques employed to enhance performance. Finally, Section 4.3 presents our novel hybrid approach, alongside support for sort-based WCOJ algorithms, to leverage the strengths of both hash-based and sort-based paradigms.
# 4.1 Dictionary Specialization

Dictionary specialization is a technique aimed at optimizing the data structures used to represent leaf nodes in tries, significantly improving both trie creation and query execution performance. When a dictionary is created in SDQL, it serves two primary purposes: lookup and iteration. When lookups are required, an underlying dictionary data structure is necessary. However, if the operation only involves iterating over the dictionary, we can store only the keys in a more efficient data structure. This subsection focuses on two specific optimizations we employed to enhance dictionary specialization in our system: std::vector and SmallVector.

4.1.1 Vector (O1). Replacing hash maps with vectors for leaf nodes in tries can significantly improve performance by reducing the overhead of key-value pair operations. As outlined in Section 3.2, each leaf node in the tries was initially represented as a hash map that mapped offsets in the base relation to a constant boolean value (true). However, since only the keys of these hash maps are required, as demonstrated in Figures 3a and 4a, this structure can be optimized by converting the key-value pair representation into a list implementation that stores only the keys.

1  // Trie Creation
2  let S_trie =
3    sum(<i, _> <- range(S.size))
4      {S.x(i) -> {i -> 1}} in
5  let T_trie =
6    sum(<i, _> <- range(T.size))
7      {T.x(i) -> {i -> 1}} in
8
9  // Query Execution
10
11 sum(<R_i, _> <- range(R.size))
12   let x = R.x(R_i) in
13   if (x ∈ S_trie && x ∈ T_trie) then
14     let Sx = S_trie(x) in
15     let Tx = T_trie(x) in
16     sum(<S_i, _> <- Sx)
17       {<c0=x, c1=R.a(R_i), c2=S.b(S_i)> -> 1}

(a) SDQL.

1  // Trie Creation
2  HT<int, HT<int, bool>> S_trie;
3  for (int i = 0; i < S.size; ++i)
4    S_trie[S.x[i]][i] += 1;
5  HT<int, HT<int, bool>> T_trie;
6  for (int i = 0; i < T.size; ++i)
7    T_trie[T.x[i]][i] += 1;
8
9  // Query Execution
10 HT<tuple<int, int, int>, int> res;
11 for (int R_i = 0; R_i < R.size; ++R_i) {
12   auto x = R.x[R_i];
13   if (S_trie.contains(x) && T_trie.contains(x)) {
14     auto &Sx = S_trie.at(x);
15     auto &Tx = T_trie.at(x);
16     for (auto &[S_i, S_v] : Sx)
17       res[{x, R.a[R_i], S.b[S_i]}] += 1;
18 }}

(b) C++.

Figure 4: Free Join implementation of $Q_{\pm}$ in SDQL and C++.

1  class VecDict {
2    vector<T> vec; // SmallVector in SmallVecDict
3    class Proxy {
4      VecDict &vecdict;
5      T key;
6      void operator+=(int) {
7        vecdict.vec.push_back(key);
8    }};
9    Proxy operator[](T key) {
10     return Proxy(*this, key);
11 }};

Figure 5: The VecDict data structure.

This optimization is applied in SDQL programs through the @vec annotation, which specifies the dictionary representation, as shown in Figure 6a. When this annotation is applied to a hash map, we employ the VecDict data structure of Figure 5 in C++, replacing the hash table that maps offsets to true, as illustrated in Figure 6b. This data structure acts as a wrapper around std::vector, providing a dictionary-like interface. Using a vector under the hood enhances performance by reducing the cost of both insertion and iteration, as operations on std::vector are generally less expensive than on hash tables.

4.1.2 SmallVector (O2). SmallVector is a specialized data structure designed to function as a vector-like container optimized for small sequences. It improves performance by allocating storage on the stack, thus reducing the overhead of heap allocations and enhancing cache locality. This optimization is particularly advantageous when the vector contains only a small number of elements. SmallVector allocates a fixed number of elements on the stack, and when the number of elements exceeds this predefined size, it switches to heap allocation. This strategy offers a balance between performance and flexibility, and implementations of SmallVector are used in systems such as Rust [15] and LLVM [5].
We implemented a custom version of SmallVector in C++, as shown in Figure 7, to replace the underlying data structure of the leaf nodes used to store offsets. As previously discussed, when performing lookups on relation $R$ for a given $\theta$ value, $R[\theta]$ typically contains a small number of elements. In such cases, SmallVector enhances performance by avoiding heap allocation for nodes with few elements during trie creation. Another application of this data structure occurs when building a trie on a unique attribute, such as a primary key, which is common in join operations. Since such an attribute is unique, each value appears only once, resulting in a single offset per value. While a single variable could be used in this scenario, SmallVector handles it effectively. The interface of SmallVector closely mirrors that of std::vector, allowing it to be seamlessly integrated in contexts where std::vector is typically used, as shown in Figure 8. SmallVector manages dynamic memory allocation internally, abstracting the complexity from the user while delivering improved efficiency.

# 4.2 Early Projection/Aggregation

Early projection and aggregation are techniques that reduce the amount of data processed and stored during query execution by identifying and eliminating unnecessary columns as early as possible. This approach can significantly improve query performance by reducing memory usage, enhancing cache utilization, and accelerating join operations. In this subsection, we discuss three specific optimizations that we employed in our system.

4.2.1 Dead Code Elimination (O3). Dead code elimination is a powerful optimization technique that can significantly improve query performance by reducing the amount of data processed and stored during join operations. This technique systematically removes unnecessary columns during the join process, whether these columns originate from base relations or intermediate results. By identifying and eliminating attributes that are not required for subsequent operations or the final query output, the volume of data processed and stored throughout the query execution pipeline is minimized. This reduction accelerates join operations by decreasing memory usage and enhancing cache utilization, leading to overall improvements in query efficiency.

1  // Trie Creation
2  let S_trie =
3    sum(<i, _> <- range(S.size))
-      {S.x(i) -> {i -> 1}} in
+      {S.x(i) -> @vec {i -> 1}} in
5  let T_trie =
6    sum(<i, _> <- range(T.size))
-      {T.x(i) -> {i -> 1}} in
+      {T.x(i) -> @vec {i -> 1}} in
8
9  // Query Execution
10
11 sum(<R_i, _> <- range(R.size))
12   let x = R.x(R_i) in
13   if (x ∈ S_trie && x ∈ T_trie) then
14     let Sx = S_trie(x) in
15     let Tx = T_trie(x) in
16     sum(<S_i, _> <- Sx)
17       {<c0=x, c1=R.a(R_i), c2=S.b(S_i)> -> 1}

(a) SDQL.

-  HT<int, HT<int, bool>> S_trie;
+  HT<int, VecDict<int>> S_trie;
3  for (int i = 0; i < S.size; ++i)
4    S_trie[S.x[i]][i] += 1;
-  HT<int, HT<int, bool>> T_trie;
+  HT<int, VecDict<int>> T_trie;
6  for (int i = 0; i < T.size; ++i)
7    T_trie[T.x[i]][i] += 1;
8
9  // Query Execution
10 HT<tuple<int, int, int>, int> res;
11 for (int R_i = 0; R_i < R.size; ++R_i) {
12   auto x = R.x[R_i];
13   if (S_trie.contains(x) && T_trie.contains(x)) {
14     auto &Sx = S_trie.at(x);
15     auto &Tx = T_trie.at(x);
-      for (auto &[S_i, S_v] : Sx)
+      for (auto &S_i : Sx)
17       res[{x, R.a[R_i], S.b[S_i]}] += 1;
18 }}

(b) C++.

Figure 6: Impact of using Vector data structure for inner dictionaries which store offsets into base relations.

1  class SmallVector {
2    array<T, N> stack;
3    vector<T> *heap;
4    size_t size{0};
5    void push_back(const T &value) {
6      if (size < N)
7        stack[size++] = value;
8      else {
9        if (size++ == N) {
10         heap = new vector<T>(
11           stack.begin(), stack.end());
12       }
13       heap->push_back(value);
14 }}};

Figure 7: The SmallVector data structure.

4.2.2 Eliminating Redundant Offsets (O4). For relations that do not contribute to the final query output, it is unnecessary to store their offsets during trie construction, as we do not need to access other attributes from these relations. These relations are only utilized for joining and checking the existence of values for the attributes involved in the joins. Therefore, eliminating redundant offsets can reduce the overhead associated with trie construction. In the clover query $Q_{\pm}$, for example, relation $T$ is used to join on attribute $x$ with relations $R$ and $S$, but it does not contribute any attributes to the final results. For such relations, the primary task is to verify the existence of $x$ values from $R$, without accessing $T$'s attributes. Therefore, storing offsets for a relation like $T$ becomes unnecessary. To address this, we can optimize the hash map by replacing its value type, a vector-like data structure, with an integer variable, as illustrated in Figure 9. This modification reduces the overhead associated with appending elements for relations involved solely in joins, thereby streamlining the trie creation process and enhancing overall efficiency.

4.2.3 Loop-Invariant Code Motion (O5). By identifying and moving invariant expressions out of loops, loop-invariant code motion can significantly reduce the number of operations required for aggregation calculations. In the context of calculating the left-hand side of Eq (6), we can observe that for each $\beta_j$, $\alpha_i$ is a constant value and can be moved outside the inner summation loop.
Similarly, for each $\alpha_i$, the summation of all values in $\beta$ is also a constant value, allowing it to be moved outside the outer summation loop. This optimization reduces the number of required multiplications from $n \times k$ to 1 and the number of additions from $n \times k$ to $n + k$, resulting in a significant performance improvement.

$$ \sum_{i=1}^{n} \sum_{j=1}^{k} (\alpha_i \times \beta_j) \;=\; \sum_{i=1}^{n} \Big( \alpha_i \times \sum_{j=1}^{k} \beta_j \Big) \;=\; \Big( \sum_{i=1}^{n} \alpha_i \Big) \times \Big( \sum_{j=1}^{k} \beta_j \Big) \qquad (6) $$

As discussed in Section 2, projections and aggregations are not explicitly represented in Free Join plans. For instance, consider an aggregation on query $Q_{\pm}$ that projects only the minimum values of attributes $a$ and $b$. The naive implementation of these aggregations is shown in the deleted lines (with light red background) of Figure 10. In this approach, for each $x$ value that satisfies the join conditions, $2 \times k$ min operations are performed, assuming the size of $S$'s offset list is $k$. By applying loop-invariant code motion, we can move the minimum operation for attribute $a$ outside the loop over $S$'s offsets, reducing the number of minimum operations by $k$. This leaves only $k$ operations to find the minimum value of $b$, plus two additional operations to update the final output. This optimization effectively reduces unnecessary operations and improves the overall efficiency of the aggregation process.

1  // Trie Creation
2  let S_trie =
3    sum(<i, _> <- range(S.size))
-      {S.x(i) -> @vec {i -> 1}} in
+      {S.x(i) -> @smallvec(4) {i -> 1}} in
5  let T_trie =
6    sum(<i, _> <- range(T.size))
-      {T.x(i) -> @vec {i -> 1}} in
+      {T.x(i) -> @smallvec(4) {i -> 1}} in

(a) SDQL.

-  HT<int, VecDict<int>> S_trie;
+  HT<int, SmallVecDict<int, 4>> S_trie;
3  for (int i = 0; i < S.size; ++i)
4    S_trie[S.x[i]][i] += 1;
-  HT<int, VecDict<int>> T_trie;
+  HT<int, SmallVecDict<int, 4>> T_trie;
6  for (int i = 0; i < T.size; ++i)
7    T_trie[T.x[i]][i] += 1;

(b) C++.

Figure 8: Impact of using SmallVector data structure for inner dictionaries which store offsets.

1  // Trie Creation
2  HT<int, SmallVecDict<int, 4>> S_trie;
3  for (int i = 0; i < S.size; ++i)
4    S_trie[S.x[i]][i] += 1;
-  HT<int, SmallVecDict<int, 4>> T_trie;
+  HT<int, int> T_trie;
6  for (int i = 0; i < T.size; ++i)
-    T_trie[T.x[i]][i] += 1;
+    T_trie[T.x[i]] += 1;

(b) C++.

Figure 9: Impact of redundant offsets elimination for join-only relations.
# 4.3 Sorting vs Hashing

The realm of worst-case optimal joins is characterized by two primary paradigms: hash-based approaches, exemplified by Umbra [3] and Free Join [20], and sort-based approaches, such as Leapfrog Triejoin [19], EmptyHeaded [1], and LMFAO [16]. Our system is designed to efficiently support both paradigms, allowing for the execution of sort-based WCOJ algorithms alongside hash-based methods. For the sort-based approach, we assume that input relations are always provided in sorted order.

Figure 13 illustrates the implementation of the sort-based approach for query $Q_{\pm}$ in both SDQL and C++. In our system, the @st annotation specifies that a dictionary is a sorted dictionary. For the C++ implementation, we developed a custom sorted dictionary data structure, as shown in Figure 11. Since we assume that the input relations are sorted by the attributes involved in the joins, insertions always occur at the end, or update the last element in the sorted dictionary. For lookups, we employ binary search to efficiently locate a given key among the sorted keys and return its corresponding value.

Another optimization we employed in this case is the @range annotation. When a relation is sorted by an attribute $x$, all occurrences of each $x$ value appear consecutively. Instead of storing all occurrence offsets for each $x$ value in a vector-like structure, we optimize by keeping only the first and last offsets of this consecutive block of elements, as shown in Figure 12. To insert an offset into this structure, we simply update the right bound of the range. During iteration, we can efficiently loop from the left bound to the right bound, reducing both storage overhead and iteration complexity.

As discussed earlier, we decompose a bushy plan into a set of left-deep plans, and each left-deep plan produces an intermediate result. There is no guarantee that these intermediate results will remain sorted.
For such cases, we must first sort the intermediate results before using them in trie creation with our sorted dictionary data structure. To reduce the overhead of sorting, we employ a hybrid approach, using a hash table for each intermediate result and bypassing the need to sort it. While binary search in a sorted dictionary is less efficient than lookups in hash tables, the sorted dictionary proves advantageous during the trie creation phase, which is often the most time-consuming part of a query. This allows the trie creation phase for intermediate results to remain more efficient, even when utilizing sorted dictionaries for base relations.

Figure 10: Impact of loop-invariant code motion on aggregation operations.

1  class SortedDict {
2    // stores keys and values in std::vector
3    // values of type VT are int or Range
4    vector::iterator find(const KT &key) {
5      /* std::lower_bound binary search */
6  }};

Figure 11: SortedDict data structure.

1  class Range {
2    size_t left;
3    size_t right;
4    class Proxy {
5      Range &range;
6      void operator+=(int) { ++range.right; }
7    };
8    Proxy operator[](size_t const idx) {
9      if (/* is first access */)
10       left = right = idx;
11     return Proxy(*this);
12 }};

Figure 12: Range data structure.

# 5 Experiments

We implemented our system as a three-step pipeline. First, we take a binary join plan, produced and optimized by DuckDB, and convert it into a Free Join plan [20]. Then, we translate the Free Join plan into an SDQL program, which serves as our intermediate representation, and apply various optimizations. Finally, we generate C++ code from the optimized SDQL program to execute the query.
We compare our approach against the Free Join framework [20] on both its Generic Join and Free Join implementations, recognizing it as the state-of-the-art system that outperforms in-memory databases such as DuckDB [12–14]. For an apples-to-apples comparison, we use the same query plans as the Free Join framework [20]. To evaluate performance, we use the widely adopted Join Order Benchmark (JOB) [4] and the LSQB benchmark [6]. Three research questions guide our evaluation: (1) How does our system compare to the Generic Join and Free Join implementations of the Free Join framework? (Section 5.2) (2) What is the impact of the optimizations we employed? (Section 5.3) (3) How do the hash-based and sort-based approaches perform in our system? (Section 5.4)

# 5.1 Setup

Both the JOB and LSQB benchmarks primarily evaluate join performance. The JOB benchmark consists of 113 acyclic queries, with an average of 8 joins per query, while the LSQB benchmark includes a mix of cyclic and acyclic queries. Each query in both benchmarks involves base-table filters, natural joins, and a simple group-by operation at the end. JOB operates on real-world data from the IMDB dataset, whereas LSQB uses synthetic data. For a fair comparison, we executed all benchmarks on the query set reported by the Free Join framework, which serves as our competitor. The only exception is query Q3 from the LSQB benchmark, which we excluded because its results are not reproducible using the Free Join framework's open-source implementation. All experiments were conducted on a MacBook Pro running macOS 15.0.1, equipped with an Apple M1 Max chip and 64GB of LPDDR5 RAM. Each experiment was executed 5 times, and the average run times are reported.
All systems were configured to run in single-threaded mode and operate entirely in main memory. We employed an efficient C++ hash table implementation known as phmap [11]. All code was compiled using Clang 18.1.8 with the following flags: -std=c++17 -O3 -march=native -mtune=native -Wno-narrowing -ftree-vectorize

1  // Trie Creation
2  let S_trie =
3    sum(<i, _> <- range(S.size))
4      @st {S.x(i) -> @range {i -> 1}} in
5  let T_trie =
6    sum(<i, _> <- range(T.size))
7      @st {T.x(i) -> @range {i -> 1}} in

(a) SDQL.

1  // Trie Creation
2  SortedDict<int, Range> S_trie;
3  for (int i = 0; i < S.size; ++i)
4    S_trie[S.x[i]][i] += 1;
5  SortedDict<int, Range> T_trie;
6  for (int i = 0; i < T.size; ++i)
7    T_trie[T.x[i]][i] += 1;

(b) C++.

Figure 13: Sort-based trie creation for $Q_{\pm}$ in SDQL and C++.

# 5.2 Performance Comparison

Our first set of experiments compares the performance of our system for the Generic Join and Free Join algorithms against the Free Join framework on both the JOB and LSQB benchmarks.

5.2.1 JOB. Figure 14 presents a run time comparison of our system with the Free Join framework, evaluating both the Generic Join and Free Join algorithms on JOB queries. Since our system does not currently support vectorization, Figure 14b illustrates the performance of our system relative to the non-vectorized version of the Free Join framework, which employs the same underlying algorithm.
In Figure 14, the majority of data points for Generic Join and for both the non-vectorized and vectorized versions of the Free Join algorithm appear below the diagonal, indicating that, despite its lack of vectorization support, our system outperforms the Free Join framework in these cases. On average (geometric mean), our system demonstrates a speedup of $1.49\times$ and $1.42\times$ over the Free Join framework for the Generic Join and Free Join algorithms, respectively, and achieves a $2.70\times$ performance improvement over the non-vectorized version of Free Join. The maximum speedups observed are $3.14\times$ for Generic Join and $4.78\times$ for Free Join, while the minimum speedups are $0.71\times$ (a $40\%$ slowdown) and $0.30\times$ (a $3.33\times$ slowdown), respectively. As discussed in Section 3.2, our system requires that tries be fully constructed before query execution begins, meaning the data structure we currently employ does not support lazy evaluation, unlike Free Join [20]. However, as outlined in Section 3.1, we utilize the same execution plans produced and used by the Free Join framework. In these plans, when a relation appears as the first relation in a node, we iterate over its offsets to access its attribute values. At this stage, all attribute values for that relation are made available to subsequent nodes in the execution plan, thereby eliminating the need for further iterations or lookups. Once we identify the first node where each relation is used for iteration, we construct a trie whose levels correspond to the attributes of that relation that appeared in earlier nodes, since lookups for those attribute values are required. For all queries in the JOB benchmark, each relation involves at most one attribute lookup before iteration.
This means that each relation is either used for iteration or accessed first for lookups over a single attribute, followed by iteration over the offsets linked to that attribute. Consequently, our approach behaves similarly to leveraging lazy data structures, as the construction of the first level of tries for non-iterated relations is necessary, and lookups over these relations are guaranteed.

5.2.2 LSQB. Figure 15 presents a performance comparison between our system and the Free Join framework for both Generic Join and Free Join algorithms on LSQB queries. Each line in the figure represents a query executed across scaling factors of 0.1, 0.3, 1, and 3. It is important to note that the Free Join framework encountered an error when running Q3, and we were unable to reproduce its results for this query. For Q2, which is a cyclic query, our system outperforms the Free Join framework across all scaling factors, achieving speedups of up to $2.49\times$ (on average $2.15\times$) for the Generic Join algorithm and up to $1.50\times$ (on average $1.28\times$) for the Free Join algorithm. In contrast to the JOB queries discussed in Section 5.2.1, Q2 contains a relation that necessitates a trie with a depth greater than one. While our approach to trie construction in this scenario is less efficient than using lazy data structures, our system still demonstrates superior performance, with a significant performance gap compared to the Free Join framework. For the acyclic queries, our system's performance is comparable to the Free Join framework for Q4. In the case of Q5, our system is up to $1.60\times$ (on average $1.49\times$) faster for the Generic Join algorithm. However, for the Free Join algorithm, the performance remains similar, with our system being slightly faster at smaller scaling factors and slightly slower at larger ones.
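The average speedups quoted in this section are geometric means over per-query speedup ratios. A minimal sketch of this aggregation (the ratios below are made up for illustration, not the paper's measurements):

```python
import math

def geometric_mean(speedups):
    """Aggregate per-query speedup ratios. The geometric mean is the
    standard choice for ratios: a 2x speedup and a 0.5x slowdown
    average out to exactly 1x, which an arithmetic mean would not."""
    return math.exp(sum(math.log(s) for s in speedups) / len(speedups))

# Illustrative ratios only:
geometric_mean([2.0, 0.5])  # -> 1.0
```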
A significant performance improvement is observed for Q1, where our system achieves speedups of up to $27.52\times$ (on average $8.50\times$) for Generic Join and $80.05\times$ (on average $23.13\times$) for Free Join compared to the Free Join framework. We investigated the $80\times$ speedup, which occurs at scaling factor 0.3, and found that it shows up repeatedly across benchmark runs. While we recognize the potential for a $10\times$ speedup through factorization in the Free Join framework, we were unable to reproduce their results. Even if we assume the Free Join framework achieves this speedup, our system would still maintain a considerable performance advantage based on the aforementioned speedups.

Figure 14: Run-time comparison on JOB. Each point compares the run time of a query on our system and the Free Join framework. Figure 14a compares the Generic Join implementation of each system. Figures 14b and 14c compare our system's Free Join implementation with the Free Join framework [20] without and with vectorization. Each point below the diagonal line represents a query for which our system is faster.

Figure 15: Run-time comparison on LSQB. Each line is a query running on increasing scaling factors (0.1, 0.3, 1, 3) and compares our system and the Free Join framework. Figure 15a compares the Generic Join implementation of each system. Figure 15b compares our system's Free Join implementation with the Free Join framework.

Overall, this substantial gap is primarily attributed to the early projection and aggregation optimizations integrated into our system. Unlike the JOB queries, in LSQB the output size before aggregation is significantly larger than the input size, resulting in a considerable amount of time spent on output construction. Our system mitigates this overhead by pushing projection and aggregation earlier in the query execution process.
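The effect of pushing aggregation below output construction can be illustrated with a toy count-aggregated join; the relation layout and function names here are illustrative, not the system's actual implementation:

```python
from collections import Counter

def count_join_materialized(R, S):
    """Naive plan: materialize the full join output, then aggregate.
    The intermediate output can be far larger than either input."""
    output = [(r, s) for r in R for s in S if r[0] == s[0]]
    return len(output)

def count_join_pushed(R, S):
    """Aggregation pushed into the join: count matches per key and
    combine the counts, never materializing the join output."""
    r_counts = Counter(r[0] for r in R)
    s_counts = Counter(s[0] for s in S)
    return sum(n * s_counts[k] for k, n in r_counts.items())

R = [(1, "a"), (1, "b"), (2, "c")]
S = [(1, "x"), (1, "y"), (3, "z")]
# Both compute the same count (4), but the pushed version skips
# building the four output tuples.
```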
# 5.3 Impact of Optimizations

As discussed in Section 4, we implemented a series of optimizations to enhance the efficiency of our naive implementation. Figure 16 illustrates the cumulative effect of these optimizations, showing the distribution of performance improvements relative to the Free Join framework for JOB queries. Initially, without any optimizations, our naive implementation of the Free Join algorithm was $2.11\times$ slower than the Free Join framework. Each subsequent optimization progressively narrowed this gap, contributing to the overall performance gains observed in our system. After applying O1, we observe an improvement in performance, though our system remains slower than the Free Join framework. With the application of O2, our system, despite lacking support for lazy data structures and vectorization, slightly outperforms the Free Join framework with a $1.056\times$ speedup. Up to this point, the impact of each optimization is clearly visible in Figure 16, and these optimizations are also available in the Free Join framework. In the violin plot for O3, the lower part of the distribution (below the median) becomes thinner, while the upper part thickens, indicating a shift toward better performance. Additionally, we observe two data points with more than $2\times$ speedup rather than one in the previous optimization, and our overall speedup increases to $1.077\times$. With the application of O4, the tail of the O3 distribution is eliminated, resulting in an increased average speedup of $1.117\times$. O5 further improves the performance of all queries slightly compared to O4. Ultimately, our fully optimized implementation achieves a $1.124\times$ speedup, which is $2.38\times$ faster than the naive implementation and $6.5\%$ faster than O2. The subtle speedups of O3, O4, and O5 are attributed to the fact that trie construction dominates the overall run time for most of the queries.
However, these optimizations are built on top of earlier optimizations targeting trie creation and only focus on improving the query execution phase. We analyzed the impact of the optimizations on a representative subset of queries, starting with our naive implementation and progressively applying each optimization. The results, shown in Figure 17, demonstrate that all optimizations introduced in Section 4 contribute positively in their respective scenarios. The O1 and O2 optimizations highlight the benefits of using std::vector and SmallVector, which enhance the performance of all selected queries. The O3 optimization adds Dead Code Elimination, affecting queries 9d and 16b. The O4 optimization further improves on the previous ones by eliminating redundant offsets, which affects queries 16b and 19d. Finally, in O5, we apply Loop-Invariant Code Motion, which improves the performance of queries 9d and 16b.

Figure 16: Impact of optimizations. Each point shows the performance improvement of a query over the Free Join framework. Each violin is the distribution of the performance improvements for all queries after applying the given optimization. The gray line shows the geometric mean for each optimization.

Figure 17: Ablation study. Each bar shows the run time of a query in our system after applying its corresponding optimization. O1: std::vector. O2: SmallVector. O3: Dead Code Elimination. O4: Eliminating Redundant Offsets. O5: Loop-Invariant Code Motion.

Figure 18: Run-time comparison between sort- and hash-based approaches and Free Join on JOB. Figure 18a compares the performance of the hash-based approach implemented in our system. Figure 18b compares the performance of the sort-based approach.

# 5.4 Hash-based vs Sort-based Performance

Figure 18a illustrates the performance of the hash-based approach in our system. The data points, which represent the comparison between our hash-based implementation and Free Join, cluster around the diagonal.
This suggests that the hash-based approach of our system matches and slightly outperforms the Free Join framework. Specifically, our system achieves speedups of up to $2.09\times$ (on average $1.12\times$) over the Free Join framework. As discussed in Section 4.3, our system also supports the sort-based paradigm of worst-case optimal join (WCOJ) algorithms. For this class of algorithms, we assume that the input data is always provided in sorted order. Figure 18b presents the performance of our sort-based implementation for all JOB queries in comparison to the Free Join framework. Our sort-based approach demonstrates performance improvements of up to $6.25\times$ (on average $1.07\times$). As can be seen, these two approaches are algorithmically distinct, which explains why the data points are not concentrated around the diagonal in Figure 18b. As mentioned earlier, even when input data is sorted, there is no guarantee that intermediate results will remain sorted. In such cases, we must sort these intermediate results before constructing their trie using a sorted dictionary, which can lead to significant overhead in the overall execution time. To address this, we introduce a novel hybrid approach that utilizes sorted dictionaries for base relations that are already sorted, while using hash tables for intermediate results. This eliminates the need to sort intermediate results. Using this hybrid approach, we achieve superior performance over the Free Join framework for almost all queries in the JOB benchmark, as shown in Figure 14c. Specifically, our hybrid approach demonstrates a performance improvement of up to $4.78\times$ (on average $1.42\times$) compared to Free Join. For most queries in the JOB benchmark, trie creation is the most time-consuming step when using hash tables.
By employing sorted dictionaries via the SortedDict and Range data structures, where only the first and last offsets of an element's occurrences are stored instead of a vector-like structure, we can significantly improve run time performance. This approach reduces the overhead associated with hash tables, such as allocation, insertions, and updates. However, the use of binary search for lookups in sorted dictionaries can slow down query execution in cases where a significant portion of the run time is spent on the query execution phase itself, as seen in the queries above the diagonal in Figure 14c. The efficiency of each approach, hash-based or sort-based, depends on both the input data and the specific query being executed. However, our system provides the flexibility to utilize either approach, enabling the efficient execution of any query on any dataset.

Figure 19: Run time comparison among Free Join and the hash-based, sort-based, and hybrid implementations in our system. Each bar shows the performance of an alternative on the given query.

Figure 19 highlights the run time of a representative subset of queries, providing deeper insights into the scenarios where each approach (hash-based, sort-based, or hybrid) performs better. For instance, query 12b, one of the points above the $2\times$ line in Figure 16, benefits from the hybrid approach, resulting in the highest speedups among all JOB queries. This improvement is largely due to the efficient handling of intermediate results using hash tables, which avoids the overhead of sorting. However, when dealing with smaller relations (after applying filters), sorting is relatively fast. In such cases, the hybrid approach's advantages may not fully offset the overhead introduced by hash tables. Query 8a is an example of this; as shown in Figure 19, the sort-based solution offers the best performance due to the small size of the relations, which makes sorting more efficient.
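The SortedDict/Range idea described above, storing only the first and last offset of each key's contiguous run in a sorted column and answering lookups by binary search, can be sketched as follows (a simplified Python model, not the system's C++ implementation):

```python
import bisect

def build_range_index(sorted_column):
    """For a sorted column, map each distinct key to the (first, last)
    offsets of its contiguous run -- the Range idea -- instead of
    keeping a vector of all offsets."""
    keys, ranges = [], []
    for i, k in enumerate(sorted_column):
        if keys and keys[-1] == k:
            ranges[-1] = (ranges[-1][0], i)  # extend the current run
        else:
            keys.append(k)
            ranges.append((i, i))            # open a new run at offset i
    return keys, ranges

def lookup(keys, ranges, k):
    """O(log n) binary-search lookup, the query-phase cost discussed
    above; returns the offset range of k, or None if absent."""
    pos = bisect.bisect_left(keys, k)
    if pos < len(keys) and keys[pos] == k:
        return ranges[pos]
    return None

keys, ranges = build_range_index([1, 1, 2, 5, 5, 5])
lookup(keys, ranges, 5)  # -> (3, 5): key 5 occupies offsets 3..5
```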
Queries 17b and 17f provide an interesting comparison, as they share the same joins but apply different filters to their base relations. Both queries slow down significantly with the sort-based approach, largely because they involve a large number of lookups, where binary search negates the speedup gained from trie creation. However, the hybrid approach manages to recover the performance for query 17f but not for 17b. This disparity stems from the order in which lookups are applied. In query 17b, the plan probes a base relation first, followed by an intermediate result. Even with a hash table for the intermediate result, binary search is still used for all elements. In contrast, query 17f first probes the intermediate result using the hash table, where lookups are performed in constant time. Only for the elements that find a match in the intermediate result does the query then perform binary searches on the base relation, reducing the number of $O(\log n)$ lookups and resulting in a more efficient execution.
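The probe-order effect behind the 17b/17f disparity can be sketched: probing the hash table first filters, in constant time, which elements ever reach the binary search. The function names and data below are illustrative, not the system's plans:

```python
import bisect

def probe_base_first(keys, hash_index, sorted_base):
    """17b-style order: binary-search the sorted base relation for
    every key, then check the hash table. Every key pays O(log n)."""
    binary_searches, matches = 0, []
    for k in keys:
        binary_searches += 1
        pos = bisect.bisect_left(sorted_base, k)
        if pos < len(sorted_base) and sorted_base[pos] == k and k in hash_index:
            matches.append(k)
    return matches, binary_searches

def probe_hash_first(keys, hash_index, sorted_base):
    """17f-style order: the O(1) hash probe filters first; only
    surviving keys pay for a binary search on the base relation."""
    binary_searches, matches = 0, []
    for k in keys:
        if k not in hash_index:  # constant-time rejection
            continue
        binary_searches += 1
        pos = bisect.bisect_left(sorted_base, k)
        if pos < len(sorted_base) and sorted_base[pos] == k:
            matches.append(k)
    return matches, binary_searches
```

Both orders return the same matches; only the number of logarithmic lookups differs.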
Join processing is a fundamental operation in database management systems; however, traditional join algorithms often encounter efficiency challenges when dealing with complex queries that produce intermediate results much larger than the final query output. The emergence of worst-case optimal join (WCOJ) algorithms represents a significant advancement, offering asymptotically better performance by avoiding the enumeration of potentially exploding intermediate results. In this paper, we propose a unified architecture that efficiently supports both traditional binary joins and WCOJ processing. As opposed to the state-of-the-art, which only focuses on either hash-based or sort-based join implementations, our system accommodates both physical implementations of binary joins and WCOJ algorithms. Experimental evaluations demonstrate that our system achieves performance gains of up to 3.1x (on average 1.5x) and 4.8x (on average 1.4x) over the state-of-the-art implementation of Generic Join and Free Join methods, respectively, across acyclic and cyclic queries in standard query benchmarks.
# 1 Introduction

In modern healthcare systems, accurate and timely diagnosis is a critical component of patient management and treatment [18]. To form a diagnosis, clinicians often engage in clinical decision-making through a dynamic, iterative process called differential diagnosis. This process involves forming hypotheses about the patient and testing those hypotheses by requesting and interpreting information from the relevant available diagnostic tests [22]. At the beginning, multiple diagnoses may be possible and uncertainty is high. Through diagnostic testing, the uncertainty is reduced and the space of possible diseases narrowed until sufficient confidence is reached, a diagnosis can be given, and treatment can begin [22]. This process is especially important in complex, high-stakes environments such as emergency departments, where often little is known about a patient at admission and fast, accurate diagnosis is paramount [18]. Large Language Models (LLMs), with their ability to synthesize complex textual information, seem well-suited for aiding clinicians in this task, as most medical information is textual or can be represented in text, e.g., clinical notes, imaging reports, or numerical laboratory results. This enables great flexibility and variety in medical modalities.

Figure 1: Example episode of the two-agent system. From the initial patient history (a 43-year-old woman presenting with 4 days of intense midepigastric pain), the hypothesis agent proposes a diagnosis with a confidence score (pancreatitis, 3/10); the decision agent requests an ultrasound (showing gallbladder wall edema) and then a white blood cell count; as the hypothesis shifts to cholecystitis and confidence rises (6/10, then 9/10), the decision agent commits to the diagnosis cholecystitis.

In the medical domain, LLMs have already shown great success in passing medical license exams [21, 7] and diagnosing case challenges [3]. The potential to revolutionize healthcare through accurate diagnostic capabilities is enormous; however, existing approaches often either (1) assume immediate availability of all patient data [3, 16, 4], which is rarely the case in practice, or (2) rely on the often limited "out-of-the-box" behavior of pre-trained LLMs without any task-specific fine-tuning to the complexities of diagnostic decision-making [8, 14]. This mismatch between research and real-world clinical decision-making limits the applicability of LLMs to the clinical setting. In this paper, we address the above limitations by modeling and training Language Agents for Clinical Decision Making (LA-CDM), tasked with iteratively reducing hypothesis uncertainty through repeated diagnostic testing. Inspired by cognitive research on human clinical decision-making [22], we design a two-agent system replicating the two main cognitive tasks of clinicians involved in clinical decision-making. It consists of one LLM agent, the hypothesis agent, forming the most likely diagnosis hypothesis based on all available patient information and estimating its confidence in that hypothesis, and another agent, the decision agent, that evaluates the patient information and the hypothesis agent's output to either provide a diagnosis or request an additional diagnostic test (Figure 1). To train this system, we propose a novel training strategy with three distinct objectives that target the core principles of successful clinical decision-making [22]:

1. Accurate hypothesis generation: Using supervised fine-tuning, the hypothesis agent is trained to form a correct hypothesis. Since information on the patient is only uncovered step-by-step, the agent has to make use of limited information from various data sources.

2.
Hypothesis uncertainty estimation: Using reinforcement learning, the hypothesis agent is trained to be well-calibrated in its verbalized uncertainty estimation. A well-calibrated model that is, e.g., $60\%$ certain about a hypothesis will be correct in $60\%$ of cases.

3. Efficient decision-making: Using reinforcement learning, the decision agent is trained to select the most informative next test and to reach a diagnosis when sufficiently confident. The model is rewarded for a final correct diagnosis, reinforcing the testing pathway that led to that diagnosis.

Analogously to doctors graduating from medical school, LLMs have a strong medical knowledge foundation but are not trained to perform clinical decision-making. Clinicians learn this skill through years of experience, pointing towards experience-based reinforcement learning as a prime paradigm for teaching clinical decision-making to LLMs. Further, since optimal testing pathways are not known, supervised learning of clinical decision-making is infeasible. The interplay of the three objectives results in the model learning which tests to request in order to increase the hypothesis confidence, leading to an accurate diagnosis. This guides the model to request the tests that are most informative in a given situation. To the best of our knowledge, we propose the first method for explicitly training LLMs for clinical decision-making. We evaluate our method on MIMIC-CDM [8], a real-world dataset focused on four abdominal diseases that mirrors clinical workflows by modeling differential diagnosis through sequential test requests. Its standardized test naming and inclusion of lab results, notes, and imaging reports make it uniquely suited for training and evaluating LLM-based diagnostic reasoning. Our experiments demonstrate the benefit of explicitly training clinical decision-making.
Notably, training reduces the number of required diagnostic tests, which has practical implications in reducing healthcare costs, diagnosis time, and patient discomfort [18]. We show the benefit of our hypothesis-driven approach to clinical decision-making and demonstrate that the model adapts its testing procedure to the patient at hand, placing this work as a step towards patient-specific, personalized differential diagnosis.

# 2 Related Work

Reinforcement Learning for Clinical Decision Making Reinforcement learning has been explored for cost-efficient clinical decision-making based on tabular data. Yu et al. [28] train SM-DDPO, a model that iteratively requests laboratory tests, optimizing diagnostic performance and cost-efficiency. Their method features an imputation model, estimating missing (and not yet requested) laboratory tests, and a classification model predicting the diagnosis. A policy network trained with Q-learning predicts the next action, i.e., which test to request or which diagnosis to give. They show improvements in efficiency at a similar diagnostic performance compared to baselines making use of all available information. However, as the method is only compatible with tabular data, it neglects many important medical modalities, like clinical notes or imaging reports, which are often crucial for diagnosis. ED-Copilot [24] employs a language model for encoding serialized patient laboratory values, which is trained end-to-end with two MLPs, one predicting test requests and one predicting outcome severity. They train their method in two stages. First, they use supervised learning to teach the model to predict all tests in a pre-defined order. In a second training stage, they use reinforcement learning to fine-tune the model to reduce the time cost of the testing regimen. They also report a great reduction in testing time while remaining similar in performance to baselines making use of all available information.
While this method uses a language model as its encoding backbone, the language model is not directly requesting tests. As with the previous method, they also only consider tabular laboratory values and therefore do not make use of valuable information in textual form. Reflexion [20] addresses general LLM decision-making by introducing a zero-shot reinforcement learning approach. Rather than fine-tuning the LLM with traditional reinforcement learning, the method allows the model to iteratively attempt the same task. After each trial, a limited number of previous attempts and a numerical reward are added to the input context, guiding the next generation. While this is effective for general decision-making, the approach cannot be applied in the clinical context, where multiple diagnosis trials with the same patient and intermediate diagnosis-correctness feedback are infeasible.

LLMs for Clinical Decision Making LLMs have so far only been used as zero-shot methods for clinical decision support. Hager et al. [8] place large "out-of-the-box" language models in an evaluation framework where they are tasked with interactively requesting diagnostic tests and diagnosing patients. They show severe limitations of LLMs for clinical decision-making and report worse diagnostic performance than clinicians. Vaid et al. [25] approach clinical decision-making with a tool-using LLM. Through zero-shot prompting, they provide the LLM with a number of available tools, e.g., a symptom tool for retrieving the symptoms, or an imaging study tool for getting any imaging reports on the patient. They evaluate various proprietary LLMs, with GPT-4 [1] showing the best performance. Liu et al. [15] model LLM clinical decision-making as a multi-agent setting, where a doctor agent communicates with a patient agent, who can detail their symptoms, and a technician agent, who can provide laboratory or imaging results.
They compare different prompting techniques, like chain-of-thought [26] or one-shot prompting, and show that GPT-4o achieves the best performance. None of these methods attempts to improve model performance by explicitly training clinical decision-making with LLMs.

# 3 Preliminaries

# 3.1 Reinforcement Learning on Large Language Models

Reinforcement learning has emerged as a powerful framework for fine-tuning LLMs by aligning their behavior with desired objectives, especially when no clear ground truth for this behavior exists. Most notably, Reinforcement Learning with Human Feedback (RLHF) [17] was used to align LLM generations with human preferences. LA-CDM utilizes a direct reinforcement learning method based on Proximal Policy Optimization (PPO) [19] without requiring human feedback. Let $\mathcal{X}$ be the space of textual inputs (prompts) and $\Theta \subseteq \mathbb{R}^d$ be the parameter space of an LLM. We denote by $\pi_{\theta}(a \mid s)$ the stochastic policy of the LLM, parameterized by $\theta$, which specifies the probability distribution over a discrete vocabulary $\mathcal{A}$ (the set of possible tokens) given a state $s \in \mathcal{S}$. In the context of language modeling, a state $s$ is typically a sequence of tokens $(x_1, \ldots, x_t)$ consisting of a prompt and previously generated tokens. An action $a$ is the next token to be generated, yielding the updated state $s' = (x_1, \ldots, x_t, a)$. Every state transition gives rise to a reward the model is trained to maximize.

# 3.2 Confidence Calibration of Large Language Models

While LLMs have shown impressive capabilities in many language-related tasks, hallucinations and confidently presented wrong answers are a common and well-known problem [13]. A well-calibrated model is able to express confidence that aligns with the epistemic probability of correctness.
This means that of all the answers presented with a confidence of $0 \leq p \leq 1$, the fraction of correct answers is $p$. In this work, we train confidence calibration with reinforcement learning as proposed by Stangel et al. [23]. They model confidence calibration as a betting game, where the model bets on the correctness of its answer. If it is correct with high confidence, it receives a large reward. However, if it is wrong with high confidence, the punishment becomes large. Analogously, if the answer is wrong, the model receives the largest reward if it expresses a low confidence. Concretely, they use the reward function

$$
R(y_{pred}, c, j) = \begin{cases} \log(c), & \text{if } J(y_{pred}) \text{ is True} \\ \log(1-c), & \text{if } J(y_{pred}) \text{ is False,} \end{cases}
$$

where $y_{pred}$ is the predicted answer, $0 < c < 1$ is the (scaled and clipped) confidence prediction, and $J(\cdot)$ is a binary function evaluating the correctness of $y_{pred}$. The reward is then scaled to be between $-1$ and $1$. They train the model using Proximal Policy Optimization (PPO) [19]. This training approach removes the need for an artificially constructed ground truth confidence dataset, as required by other confidence calibration methods [2, 13], and instead only requires a measure of answer correctness. The authors prove that an optimal policy under their reward design produces perfectly calibrated confidence expressions.

# 4 Language Agents for Clinical Decision Making

We propose LA-CDM, consisting of two language agents, the hypothesis agent and the decision agent, trained with three different objectives. The hypothesis agent is trained in accurate hypothesis generation through supervised fine-tuning and in uncertainty-awareness through reinforcement learning. The decision agent is trained in decision-making using reinforcement learning.
Both agents share the LLM weights, so training one agent also influences the other. In Figure 2, we show the full model. The two agents and the three training objectives are explained in detail in this section.

# 4.1 Modeling Clinical Decision Making

Clinical Decision Making Environment In order to train our model in decision-making, it has to operate in a reinforcement learning environment to explore testing strategies and receive reward signals. The model learns decision-making through interacting with this environment and diagnosing patients. Let each patient be described by $n$ test results $[t_i]_{i=1}^n$ as textual records of clinical notes, imaging reports, and laboratory panels. Since patient information is iteratively uncovered through the model's test requests $r_j$, we define the currently observed patient state at time-step $j$ as the set of all observed tests and write it as $p_j$. The correct diagnosis for the patient is denoted by $y_{true}$.

Figure 2: Overview of our method LA-CDM and its three training objectives. The hypothesis agent receives the current patient state and predicts a hypothesis and confidence. The hypothesis generation is trained supervised, the confidence calibration using reinforcement learning. The hypothesis agent output and the current patient state are then provided to the decision agent, which is trained to decide on an optimal clinical action (test request or diagnosis) using reinforcement learning.

As we simulate a clinical patient-doctor interaction, $p_0$, the initial observed patient state, consists of the first clinical notes detailing symptoms, medical and family history. This information is always available to the model. The environment advances step-wise with each model action. If the model requests an additional test, the observed patient state is updated, and the results are made available to both the hypothesis agent and the decision agent for the next step.
The simulation ends when the model provides a diagnosis for the patient or if one of two failure cases is reached: (1) the model exceeds the specified maximum number of generated tokens, or (2) the model violates its specified output format.

Hypothesis Agent Through its system prompt, the hypothesis agent is introduced to its task and provided with the possible diagnoses. At each time-step $j$ of the environment, the hypothesis agent $\mathcal{H}$ is given the currently observed patient state $p_j$ to predict the most likely diagnosis $h_j$ based on the limited available information, as well as the confidence in that prediction $c_j$. It therefore produces a mapping

$$
\mathcal{H} : p_j \mapsto \{h_j, c_j\}.
$$

The model generates this output in the format "Hypothesis: $h_j$, Confidence: $c_j$". The agent reports numerical confidence estimations on a scale of 0 to 10, where 10 means absolute certainty that the hypothesis is correct, and 0 means absolute certainty that it is incorrect.

Decision Agent The decision agent $\mathcal{D}$ is the actor advancing the environment. It decides which action to take at each time-step. Through its system prompt, it is provided with its task, a list of tests present in the dataset, and the possible patient diagnoses. Provided with the currently observed patient state $p_j$ and the hypothesis agent's hypothesis $h_j$ and confidence $c_j$, it produces a decision on whether to request another diagnostic test $r_j$ and move on to the next time-step $j+1$, or whether to commit to a specific diagnosis $y_{pred}$ for the patient and end the episode.
Formally, it produces a mapping $$ \mathcal{D}: \{p_j, h_j, c_j\} \to \begin{cases} r_j & \text{if a further test is requested} \\ y_{pred} & \text{if a diagnosis is given} \end{cases} $$ Specifically, we employ the ReAct prompting technique [27] to prime the model to first produce a reasoning trace, following chain-of-thought principles [26], and then provide an action and action input (in our case, the specific test or diagnosis) in a structured format. If a further test was requested, the test results are appended to the conversation context as a user response to the LLM's generation. Since we can only work with retrospective data, where not every test result is present for every patient, we cannot always fulfill the model's request. In these cases, the user reply tells the model the requested test is unavailable and asks it to choose a different action. The implications of this will be discussed later on. Likewise, if the model requests tests or provides diagnoses that are not on the list of available tests or possible diseases, it is asked to choose a different action. # 4.2 Training Clinical Decision Making In our training objectives, we follow the three main principles of clinical decision-making, as proposed by Sox et al. [22]: (1) accurate hypothesis generation, (2) hypothesis uncertainty estimation, and (3) efficient decision-making. The first two objectives are trained with respect to the generations of the hypothesis agent; the last objective is trained on the environment interactions of the decision agent. We follow a cyclic training approach, where each objective is trained individually for a specified number of episodes, after which the objective changes to the next one until the cycle repeats, resulting in much more stable training compared to optimizing all objectives simultaneously. 
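As an illustration of this interaction loop, the following sketch parses a ReAct-style generation and dispatches it. The exact output format ("Action:" / "Action Input:") and all identifiers are assumptions for illustration, not the paper's code:

```python
# Toy dispatcher for the decision agent's structured output: request a test,
# commit to a diagnosis, or ask for a retry on invalid/unavailable actions.
import re

AVAILABLE_TESTS = {"Physical Examination", "CT", "Ultrasound", "Complete Blood Count"}
DIAGNOSES = {"appendicitis", "cholecystitis", "diverticulitis", "pancreatitis"}

def dispatch(generation: str, recorded_results: dict):
    """Return ("diagnose", y_pred), ("test", result), or ("retry", message)."""
    action = re.search(r"Action:\s*(.+)", generation)
    arg = re.search(r"Action Input:\s*(.+)", generation)
    if not action or not arg:
        return "retry", "Invalid format, choose a different action."
    kind, value = action.group(1).strip(), arg.group(1).strip()
    if kind == "Diagnose":
        if value.lower() in DIAGNOSES:
            return "diagnose", value.lower()          # ends the episode
    elif kind == "Request Test" and value in AVAILABLE_TESTS:
        if value in recorded_results:                 # retrospective data only
            return "test", recorded_results[value]
        return "retry", f"{value} is unavailable, choose a different action."
    return "retry", "Unknown action, choose a different action."

out = "Thought: pain suggests appendicitis.\nAction: Request Test\nAction Input: CT"
assert dispatch(out, {"CT": "enlarged appendix"}) == ("test", "enlarged appendix")
```

The two "retry" branches correspond to the unavailable-test and invalid-action replies described in the text.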
Training Hypothesis Generation A cornerstone of good clinical decision-making is high accuracy in hypothesis generation. If the model knows the most likely candidate for the diagnosis, it can adapt its testing strategy to quickly confirm or reject this hypothesis. While the model interacts with the environment, the hypothesis agent is confronted with various patient states consisting of different subsets of test combinations. We collect all contexts shown to the model within all episodes of a patient batch, usually including multiple hypothesis generation steps per patient. To perform supervised fine-tuning, we create target sequences consisting of the collected conversation contexts concatenated with the correct hypothesis $y_{true}$. We compute the cross-entropy loss for these sequences, ignoring the token at the position where the model should place its confidence score. Training Uncertainty-Awareness Since the available patient information is often limited, especially at early stages of the diagnostic process, and the available data does not always clearly point to a specific diagnosis, uncertainty is inherent to clinical decision-making. An accurate intrinsic estimation of that uncertainty by the model can give it an improved basis for decisions on when to stop the diagnostic process and produce a diagnosis. In this work, we train confidence calibration following a method proposed by Stangel et al. [23], as previously introduced in Section 3.2. We define our correctness measure $\bar{J}(h_j)$ as equality between the predicted hypothesis $h_j$ and the ground truth diagnosis $y_{true}$. Since "out-of-the-box" pre-trained LLMs tend to predict only high confidences, the model rarely explores predicting low confidences, significantly hindering the reinforcement learning training. 
We take inspiration from stochastic reinforcement learning exploration techniques and, with probability $p_{explore}$, replace the predicted confidence with a randomly chosen different confidence. The exploration probability is scheduled to decrease during training. Training Clinical Action Selection At the core of training clinical decision-making lies the training of clinical action selection. During interaction with the clinical decision-making environment, the model can freely choose to iteratively request any number of tests in any order. Given the vast number of possible diagnostic pathways, defining an optimal test sequence for each patient is infeasible; we therefore do not have a ground truth on which tests to perform. To still be able to train clinical decision-making, we propose to use reinforcement learning, where the model can learn through trial-and-error which tests are useful in which situations. As in the confidence calibration training, we employ the PPO algorithm [19]. Through interaction with the clinical decision-making environment, the model requests different tests until it decides on a diagnosis. We design our reward function to present the model with a fixed positive reward $r_{pos}$ if the final diagnosis at the end of the episode is correct, or a fixed negative reward $r_{neg}$ if it is wrong. Additionally, we punish the model with reward $r_{invalid}$ if it violates the specified format. Our reward function is thus: $$ R(y_{pred}) = \begin{cases} r_{pos} & \text{if } y_{pred} = y_{true} \\ r_{neg} & \text{if } y_{pred} \neq y_{true} \\ r_{invalid} & \text{if the output format is violated} \end{cases} 
$$ The interaction of these three objectives enables the model to learn which tests to request to enhance hypothesis confidence, ultimately leading to a more accurate diagnosis. This drives the model to prioritize tests that provide the most informative insights in a given situation. # 5 Experimental Set-up # 5.1 Dataset and Pre-Processing We evaluate our method on the MIMIC-CDM dataset [8], a curated subset of MIMIC-IV [12] designed for modeling sequential clinical decision-making. It contains 2,400 patients diagnosed with one of four abdominal conditions: appendicitis, cholecystitis, diverticulitis, or pancreatitis. This focus on four pathologies reflects real clinical workflows, where physicians perform differential diagnosis with a narrowed-down space of possible diseases and request tests to distinguish between likely candidates. The dataset includes patient histories (symptoms, comorbidities, family histories) and physical exam notes for all patients. It also provides 5,959 textual imaging reports (CT, x-ray, ultrasound, MRI) and 143,191 lab results (blood, urine, microbiology); however, not every test result is reported for every patient. If multiple values for a specific test were recorded during the hospital stay of a patient, only the first one was included in MIMIC-CDM to simulate an early diagnosis after hospital admission. Crucially, the dataset includes comprehensive mappings of test names across patients, an essential feature for modeling test requests reliably. Without this normalization, a model could not query the same test across different cases due to inconsistent naming in clinical documentation. To our knowledge, MIMIC-CDM is the only publicly available dataset that enables simulation of this setting. For our use-case, we construct the set of available tests as: physical examination, all imaging modalities, and the most common laboratory panels. These panels are collections of individual tests that are usually ordered together. 
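Before moving on to the experimental set-up, the two reinforcement learning signals from Section 4.2 (the terminal diagnosis reward and the confidence-exploration replacement) can be summarized in a minimal sketch. The reward magnitudes and all names are illustrative assumptions, not the paper's values:

```python
# Sketch of the training signals: terminal reward R(y_pred) and stochastic
# confidence exploration with probability p_explore.
import random
from typing import Optional

R_POS, R_NEG, R_INVALID = 1.0, -1.0, -2.0   # illustrative magnitudes

def diagnosis_reward(y_pred: Optional[str], y_true: str) -> float:
    """Positive reward for a correct diagnosis, negative for a wrong one,
    and a penalty when the output format was violated (y_pred is None)."""
    if y_pred is None:
        return R_INVALID
    return R_POS if y_pred == y_true else R_NEG

def explore_confidence(c_pred: int, p_explore: float, rng=random) -> int:
    """With probability p_explore, replace the predicted confidence (0..10)
    with a randomly chosen *different* confidence to force exploration."""
    if rng.random() < p_explore:
        return rng.choice([c for c in range(11) if c != c_pred])
    return c_pred

assert diagnosis_reward("appendicitis", "appendicitis") == R_POS
assert diagnosis_reward(None, "appendicitis") == R_INVALID
assert explore_confidence(8, p_explore=1.0) != 8   # always replaced at p = 1
```

In training, `p_explore` would be decayed over episodes, matching the scheduled decrease described in the text.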
The initial patient histories are shortened by processing them with a Mixtral-8x7B [11] model prompted to summarize the most important aspects. The imaging reports are likewise shortened wherever a separation into sections was available, by keeping only the findings section of the report and removing the remaining sections. The original dataset does not have a data split, since it was intended for the evaluation of zero-shot models. We therefore split the data into a training set of $80\%$ and a validation and test set of $10\%$ each. # 5.2 Metrics To evaluate model performance, we report class-wise accuracies along with their mean, as well as micro and macro F1-scores. As classification is performed by auto-regressive LLMs, some generations do not include a valid class. If this is the case, we fall back to the last classification of the hypothesis agent. If the method under evaluation does not have a hypothesis agent, or if that hypothesis agent also predicts an invalid class, those predictions are assigned to a separate fifth "wrong" class; however, we exclude their contribution to the macro F1-score. Additionally, we compute the Expected Calibration Error (ECE) to assess the confidence calibration of our hypothesis agent. ECE measures the discrepancy between predicted confidence scores and actual accuracy. A lower ECE indicates better model calibration, meaning the predicted probabilities align well with actual correctness frequencies. # 5.3 Baselines We compare our method to various zero-shot and trained baselines. In their work, Hager et al. [8] both introduce the MIMIC-CDM dataset and evaluate how various "out-of-the-box" pre-trained models perform when tasked with clinical decision-making. We compare with OASST, the best-performing model from this evaluation; however, a direct comparison is difficult for multiple reasons. 
First, as a zero-shot method, the model was evaluated on the complete dataset, whereas we reserve part of that dataset for training. Second, the clinical decision framework was constructed differently, as the OASST model is not provided with the possible diagnosis classes or available tests in its prompt. Additionally, as an approximate upper bound for our method, we compare with a Llama-3-7B-Instruct model [6] identical to our LLM backbone, trained with supervised fine-tuning to predict the correct diagnosis. Instead of requesting tests, it simply receives all available patient information directly, which is not a realistic diagnostic process. We refer to this method as SFT-all. Table 1: Performance comparison of LA-CDM and baseline methods. We report class-wise accuracies and F1-scores. The avg. # tests is the mean number of tests requested by the model across all patients in the test set. \*OASST is evaluated on a different test set in a different framework. †SM-DDPO can only process tabular data. ZS = zero-shot. Figure 3: Left: Calibration curves before and after training LA-CDM. Right: Distribution of confidence estimations before and after training LA-CDM. We compare to three other adaptive test selection methods on MIMIC-CDM. SM-DDPO [28] trains an MLP using reinforcement learning for clinical decision-making; however, the method can only process tabular data and can only request laboratory values. ReAct [27] is a zero-shot decision-making method. Finally, we compare with an untrained zero-shot version of our method, LA-CDM (ZS), including hypothesis and decision agent. # 6 Results and Discussion # 6.1 Comparison with Baselines Our comparison with baselines is shown in Table 1. When comparing with OASST [8], LA-CDM shows improvement in accuracy for each class, resulting in a difference of almost 20 percentage points in the mean of all class accuracies. 
The largest improvements appear for pathologies that are less common than the majority class, appendicitis. While the methods are not directly comparable, as outlined in Section 5.3, the large performance gain clearly shows the advantage of training on the task of clinical decision-making rather than relying on the inherent capabilities of large pre-trained models. This is further supported by the evaluation results of our method as a zero-shot model. In comparison to our trained model, we see a substantial performance improvement achieved through training. Importantly, the zero-shot method also diagnoses less efficiently than the trained version, requiring on average almost $3.5\mathrm{x}$ more tests to form a diagnosis. The individual performance of the hypothesis agent also benefits from training. Through our hypothesis generation training, we improve the ability of the model to form correct hypotheses from $54.6\%$ to $74.2\%$. We likewise show an improvement in uncertainty-awareness through training the confidence calibration objective: the ECE decreases significantly from 0.226 to 0.150. We visualise the calibration curves and confidence distributions of the two models in Figure 3, where the trained model shows better calibration, especially at frequently predicted confidence levels. It also shows more distributed confidence predictions, compared to the untrained model, which tends to be very over-confident, mostly predicting a confidence of $80\%$. Table 2: Ablation study of the hypothesis-driven approach. $\mathrm{HA} =$ hypothesis agent, $\mathrm{DA} =$ decision agent. The SFT-all model serves as a rough upper bound, as it leverages all available retrospective patient data, an unrealistic setup for real-time clinical decision-making. Therefore, it cannot be used for direct patient interactions. 
LA-CDM performs comparably, trailing by only two and five percentage points in micro and macro F1-score, respectively, while requiring $15\mathrm{x}$ fewer diagnostic tests. This substantial reduction in test count highlights the efficiency of our approach. Moreover, we observe evidence of patient-adaptive testing strategies aligning with best practices: for suspected cholecystitis, the model most frequently selects ultrasound ($32\%$ of cases), the gold-standard test [9]; for appendicitis, it prioritizes CT scans ($8\%$ of cases), consistent with diagnostic guidelines [5]. These results demonstrate that our method not only achieves high diagnostic accuracy but also optimizes resource usage in a clinically meaningful way. The reasoning traces generated by chain-of-thought prompting further enable the interpretation of the model's testing pathways. We report qualitative examples of the model's generations in Appendix C. # 6.2 Ablation Study We evaluate the benefit of our hypothesis-driven approach in Table 2. When removing the hypothesis agent from our methodology, the decision agent has to learn to request tests and to diagnose without relying on the uncertainty-aware hypothesis generation capabilities of the hypothesis agent, which is explicitly trained for these objectives. The benefit of the hypothesis agent is demonstrated clearly by improvements in all metrics. Notably, including the hypothesis agent also significantly reduces the number of tests requested by the model. # 6.3 Limitations This work presents a first step towards training LLMs for clinical decision-making. As such, there are some limitations that should be addressed in future work. First, the data we are training on is limited: it only contains four abdominal pathologies and a limited number of available diagnostic tests, so the model is only trained in clinical decision-making for these diseases. An extension to more diseases and more tests remains future work. 
Secondly, the data we are training on is retrospective, with different tests missing for different patients. Furthermore, the available tests are those that the clinicians involved in treating that patient actually performed. The model can therefore only explore a limited spread of testing pathways and only learn to become more efficient within the testing protocols performed by doctors. Simulation of unavailable test data could open up a pathway to modeling a more holistic clinical decision-making environment.
Clinical decision-making is a dynamic, interactive, and cyclic process in which doctors have to repeatedly decide on which clinical action to perform and consider newly uncovered information for diagnosis and treatment. Large Language Models (LLMs) have the potential to support clinicians in this process; however, most applications of LLMs in clinical decision support suffer from one of two limitations: either they assume the unrealistic scenario of immediate availability of all patient information and do not model the interactive and iterative investigation process, or they restrict themselves to the limited "out-of-the-box" capabilities of large pre-trained models without performing task-specific training. In contrast, we propose to model clinical decision-making for diagnosis with a hypothesis-driven, uncertainty-aware language agent, LA-CDM, that converges towards a diagnosis by repeatedly requesting and interpreting relevant tests. Using a hybrid training paradigm combining supervised and reinforcement learning, we train LA-CDM with three objectives targeting critical aspects of clinical decision-making: accurate hypothesis generation, hypothesis uncertainty estimation, and efficient decision-making. We evaluate our methodology on MIMIC-CDM, a real-world dataset covering four abdominal diseases containing various clinical tests, and show the benefit of explicitly training clinical decision-making for increasing diagnostic performance and efficiency.
[ "cs.CL", "cs.AI", "cs.LG" ]
# 1 Introduction Plagiarism is a prevalent challenge in computer science education, facilitated by the ease of duplicating and modifying digital assignments [15, 42, 35]. Although students generally acknowledge plagiarism as academic misconduct, some will engage in it despite the threat of consequences [68]. Such students are often creative in obfuscating their plagiarism to conceal its relation to the source [51]. In the case of programming assignments, students commonly utilize techniques such as renaming, reordering, or restructuring [45, 27]. Plagiarism in programming assignments is particularly pronounced in beginner-level and mandatory courses, such as introductory programming courses [50]. While checking submissions for plagiarism manually is feasible for small courses, this quickly becomes infeasible for larger ones [9, 33], as the number of required pairwise comparisons grows quadratically – reaching 1,225 comparisons for just 50 submissions. As a result, the individual risk of detection decreases with rising course sizes [74]. In light of these issues, it is common for educators to use software plagiarism detection systems to uphold academic integrity for programming assignments [18]. These systems automate parts of the detection process and thus allow tackling the problem of plagiarism detection at scale. Thus, educators strongly rely on software plagiarism detectors to guide them in inspecting suspicious candidates. Plagiarism detectors analyze sets of programs to detect pairs with a suspiciously high degree of similarity [53]. However, assessing which suspicious candidates qualify as plagiarism is ultimately a human decision, given the underlying ethical considerations [16, 70]. Overall, plagiarism detection systems help identify plagiarism instances and, when their use is communicated [30], deter students from plagiarizing in the first place [7]. 
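The quadratic growth mentioned above is easy to make concrete: with $n$ submissions, a detector must compare $n(n-1)/2$ pairs.

```python
# Number of pairwise comparisons for n submissions: n choose 2 = n*(n-1)/2.
def pairwise_comparisons(n: int) -> int:
    return n * (n - 1) // 2

assert pairwise_comparisons(50) == 1225      # the figure quoted above
assert pairwise_comparisons(500) == 124750   # a large lecture: ~125k pairs
```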
Crucially, plagiarism detectors are only effective when defeating them takes more effort than completing the actual assignment [18]. Yet, manually obfuscating a program successfully is tedious and requires understanding the underlying program, and thus demands both time and programming proficiency. Accordingly, a widespread assumption was that evading detection is not feasible for novice programmers, as obfuscating the program requires more time than completing the actual assignment and a profound understanding of programming languages [25]. However, this assumption no longer holds with the recent rise of automated obfuscation attacks [18, 21, 6, 51], which require neither time nor programming proficiency to employ successfully. These obfuscation attacks aim to avoid detection by strategically altering a plagiarized program, thus obscuring the relation to its original [60]: state-of-the-art detection approaches compare the structure of programs by identifying similarities between code fragments [43]. Thus, most obfuscation attacks alter the structural properties of the program, ideally without affecting its behavior. Early automated attacks relied purely on algorithmic approaches, for example, via repeated statement insertion [18]. However, the challenge intensifies with the rise of generative artificial intelligence, especially Large Language Models (LLMs) [17], making the obfuscation of plagiarism even more accessible with less effort than ever before [32, 61]. While state-of-the-art detectors exhibit some obfuscation resilience to changes like retyping and lexical changes, this does not apply to all types of obfuscation attacks [18, 38]. Thus, automated obfuscation attacks present a significant challenge for today's plagiarism detection systems, as they must now contend with increasingly sophisticated obfuscation techniques that can evade detection while maintaining the original program's functionality. 
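To make the notion of a behavior-preserving structural attack concrete, here is a toy, entirely constructed example of dead-code insertion; it is not the output of any real obfuscation tool:

```python
# The obfuscated copy behaves identically to the original but its token
# structure differs, which can break up long matches in a token-based detector.
def gcd(a, b):                    # original submission
    while b:
        a, b = b, a % b
    return a

def gcd_obfuscated(a, b):         # plagiarized copy with dead statements
    _unused = [0] * 3             # dead code: list is never read
    while b:
        if False:                 # dead branch: never taken
            a += 1
        a, b = b, a % b
    _unused.append(a)             # dead code: result is discarded
    return a

# Behavior is preserved for all inputs in this range:
assert all(gcd(x, y) == gcd_obfuscated(x, y)
           for x in range(1, 30) for y in range(1, 30))
```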
In recent work [60, 63], we proposed defense mechanisms tailored towards different obfuscation attacks. However, it is unclear how well they work against a broader range of attacks. Token Sequence Normalization [60] explicitly targets dead code insertion and statement reordering. Subsequence Match Merging [63] employs a heuristic to counteract any obfuscation attack that aims at interrupting the match found between two program codes. While we show that these approaches are individually effective against the obfuscation attacks they target, educators typically do not know which obfuscation attacks students employed, making it hard to select a specific, appropriate defense mechanism. Ideally, a combination of these defense mechanisms could provide greater resilience. However, it is currently unclear whether the different defense mechanisms can be combined to achieve this without producing false-positive results due to the over-approximation of similarities. Moreover, with the steady improvements of LLMs, AI-based obfuscation attacks become more and more feasible. However, the defense mechanisms have not been comprehensively tested against a broad spectrum of automated attacks, including both algorithmic and AI-based obfuscation techniques. Finally, their applicability to detecting AI-generated programs is yet to be assessed. # 1.1 Research Contributions In this paper, we investigate the resilience of software plagiarism detectors to different automated obfuscation attacks. As a first contribution (C1), we present a comprehensive evaluation of various automated obfuscation attacks, including both algorithmic and AI-based methods, also exploring the feasibility of using AI to generate programs. In addition to examining defense mechanisms on their own, we also explore their combined use. 
As a second contribution (C2), we complement our technical findings with a detailed discussion of their broader implications – not only for improving software plagiarism detection, but also for issues related to academic integrity and the role of AI in education. # 1.2 Evaluation and Results We conducted a comprehensive empirical evaluation to demonstrate the effectiveness of defense mechanisms against obfuscation attacks. Over the entirety of this evaluation, we analyze over 4 million data points, each representing a pairwise comparison of two programs. Our datasets comprise over 14,000 files with over a million lines of code. We evaluate the defense mechanisms with a wide range of real-world datasets [48, 37, 60] from different university courses. These courses range from mandatory undergraduate courses to master’s-level elective courses. Furthermore, they contain different-sized programs, thus representing typical use cases for software plagiarism detection. In our evaluation, we employ a total of five different obfuscation techniques for the plagiarism instance. We use both algorithmic and AI-based obfuscation and use existing obfuscation tools [1, 18]. We demonstrate that the defense mechanisms offer broad obfuscation resilience across diverse datasets and attack types, thus significantly advancing resilience against automated obfuscation attacks for programming assignments. Notably, we achieved a median similarity difference increase of up to 99.65 percentage points against semantic-preserving insertion-based obfuscation. We also show substantial improvements against refactoring-based attacks (up to 22 percentage points). While resilience against AI-based obfuscation was comparatively lower (up to 19 percentage points), we still observe improved detection rates, including a notable 8.92 percentage point increase in identifying AI-generated programs, even though the defense mechanisms are not designed for this use case. 
These findings underscore the effectiveness of current defense mechanisms in defending against a wide range of obfuscation attacks, allowing for resilient source code plagiarism detection. # 1.3 Outline The remainder of this paper is structured as follows. First, section 2 introduces the foundations of automated obfuscation attacks and their impact on token-based plagiarism detection. In section 3, we present the defense mechanisms designed to counter these attacks, which we evaluate in this paper. Next, section 4 outlines our evaluation methodology, followed by section 5, which reports the results across various datasets and obfuscation strategies. We discuss threats to validity in section 6, and provide a broader discussion of implications and insights in section 7. Finally, section 8 reviews related work, and section 9 concludes. # 2 Automated Obfuscation Attacks Students often attempt to conceal plagiarism by obfuscating its origin [25, 44, 27, 51]. Since cosmetic changes alone (e.g., lexical edits) are insufficient against structural comparison [43], students increasingly alter program structure while preserving its behavior. Common strategies include inserting statements, refactoring control structures [27], or simplifying, combining, and splitting code fragments [45]. These techniques, however, are neither new nor especially worrying, as manual obfuscation is tedious, error-prone, and requires understanding the original program to be plagiarized [25]. Automated obfuscation attacks, in contrast, introduce a paradigm shift: automated obfuscation is both faster and more effective than manual obfuscation. All obfuscation attacks targeting software plagiarism detectors – whether manual, algorithmic, or AI-based – are based on a single underlying principle: avoiding detection by strategically altering a plagiarized program, thus obscuring the relation to its original [60]. 
As state-of-the-art detection approaches compare the structure of programs by identifying similarities between code fragments [43], obfuscation attacks try to alter the structural properties of the program, ideally without affecting its behavior [44, 27, 51]. Their intended outcome is to disrupt the matching of fragments between programs, thus leading to a reduced similarity score [18]. Specifically, the goal is to prevent the detector from matching fragments above the specified match length cut-off threshold. This can be achieved by breaking up matching code fragments into shorter sub-fragments. However, to impact the detection quality of a software plagiarism detector, the obfuscation must affect the linearized program representation of the detector, which in the case of token-based approaches is the token sequence [60]. Consequently, modifications to the program code that do not affect the internal program representation are inherently ineffective. For example, renaming program elements does not affect token-based approaches, as names are omitted during the tokenization [53, 61] (see Figure 1). Devore-McDonald and Berger [18] present an automated attack based on repeated insertion of dead statements into an existing program. This approach effectively deceives both JPlag [54] and MOSS [2], reducing the calculated similarity between a plagiarism instance and its source below the average similarity of unrelated student solutions. Similar attacks can be designed based on the automated application of refactoring operations [40]. The rapid improvements in the field of generative artificial intelligence significantly exacerbate this problem [34]. AI-powered tools can generate or alter source code [17] while requiring little manual effort and technical knowledge, making automated obfuscation more accessible than ever before [6, 32]. 
Tools like ChatGPT combine the capabilities of generative artificial intelligence with the approachable interface of a chatbot [61], thus further reducing the entry barrier to using generative AI. Essentially, automated obfuscation attacks make successfully evading plagiarism detection systems easier than ever. # 3 Defense Mechanisms In the following, we present defense mechanisms against automated obfuscation attacks from our prior work [60, 63]. These defense mechanisms are designed to provide broad resilience against automated obfuscation attacks by being largely language-independent, applicable across multiple programming languages, and agnostic to the underlying detection system, making them suitable for integration into any state-of-the-art, token-based detector such as MOSS, JPlag, or Dolos. Token-based plagiarism detectors are inherently immune to lexical obfuscation, and usually also to data-based obfuscation (see Figure 1). Our defense mechanisms additionally provide resilience to structural and complex attacks. Figure 1: Taxonomy of obfuscation attacks along the clone-type spectrum from verbatim copy (L0) to reimplementation/semantic clone (L6): lexical attacks (renaming, comments, formatting), value-based attacks (retyping, value obfuscation, precision modification), structural attacks (dead code insertion, statement reordering, statement deletion), and complex attacks (refactoring, control flow transformations). Along this spectrum, attack effectiveness grows from weak to strong while applicability narrows from broad to narrow. # 3.1 Token Sequence Normalization Token Sequence Normalization (TSN) [60] is a normalization-based defense mechanism designed to counter obfuscation attacks based on dead code insertion or statement reordering (structural attacks in Figure 1). It uses a Token Normalization Graph (TNG), a graph-based abstraction similar to program dependence graphs that captures semantic interdependencies between tokens. The normalization process begins by enriching the token sequence with language-independent semantic information. 
From this enriched sequence, a TNG is constructed to represent a partial ordering over tokens, abstracting away from the original code structure. The normalized token sequence is then generated from this graph by removing dead nodes and reverting reordered code via topological sorting. This normalization is performed before the pairwise comparison step in the plagiarism detection pipeline, allowing the detector to virtually de-obfuscate plagiarized code while preserving the scalability of token-based methods. The only language-specific component of this approach is the extraction of semantic information required for enrichment, which is consistent with the language-dependent nature of tokenization itself and does not introduce additional constraints on the detection system. # 3.2 Subsequence Match Merging Subsequence Match Merging (SMM) [63] is a defense mechanism designed to counter obfuscation in an attack-independent and language-independent manner. Thus, it covers any of the categories presented in Figure 1. It is based on the observation that all effective obfuscation attacks must disrupt the matching of code fragments by breaking up the internal linearized program representation of detectors. SMM operates on these internal representations by heuristically merging neighboring fragment matches in pairs of programs. This process is applied iteratively, subsuming gaps caused by obfuscation until no more neighboring matches can be merged. SMM thus restores the continuity of matches, which reverts the effects of the obfuscation attack without significantly increasing the false positive rate. Crucially, the approach is entirely language-independent and agnostic to the type of obfuscation, as it does not rely on the semantics of the internal representation. # 3.3 Combination of Both As TSN and SMM operate during different steps of the detection pipeline, they are complementary and can be combined. 
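The merging heuristic behind SMM can be illustrated with a minimal sketch, assuming matches are given as `(start_a, start_b, length)` token-index triples; the gap threshold and all names are our own illustrative choices, not the published algorithm:

```python
# Sketch of Subsequence Match Merging: neighboring matches that an obfuscation
# attack split apart are merged when the gap between them is small, iterating
# until a fixed point is reached. The merged length is taken from the A side.
def merge_matches(matches, max_gap=2):
    """matches: non-empty, sorted list of (start_a, start_b, length) triples."""
    merged = True
    while merged:
        merged, out = False, [matches[0]]
        for sa, sb, ln in matches[1:]:
            pa, pb, pl = out[-1]
            gap_a, gap_b = sa - (pa + pl), sb - (pb + pl)
            if 0 <= gap_a <= max_gap and 0 <= gap_b <= max_gap:
                out[-1] = (pa, pb, sa - pa + ln)   # subsume the gap
                merged = True
            else:
                out.append((sa, sb, ln))
        matches = out
    return matches

# Two matches separated by a 2-token inserted gap collapse into one:
assert merge_matches([(0, 0, 5), (7, 7, 4)]) == [(0, 0, 11)]
```

Restoring match continuity this way raises the similarity score of an obfuscated pair back towards that of the unmodified pair, which is the effect the defense mechanism aims for.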
In our evaluation, we explore a hybrid defense strategy that applies TSN as a pre-processing step after parsing the input programs, followed by SMM as a post-processing step after computing matching subsequences. The rationale behind this combination is twofold. TSN provides strong resilience to insertion-based obfuscation, which is one of the easiest and most effective obfuscation attacks. SMM provides broad resilience against a range of obfuscation attacks, as its heuristic nature avoids making assumptions on the specifics of an obfuscation attack. This layered approach is expected to offer broad resilience, combining the benefits of both approaches.

# 4 Evaluation Methodology

This section outlines the methodology used to evaluate the proposed defense mechanisms regarding obfuscation resilience and detection quality. We use the plagiarism detector JPlag as the baseline, as it is not only considered state-of-the-art [5] but also the most referenced and compared approach [45]. We use real-world datasets from different university courses. We employ a total of four different obfuscation techniques for the plagiarism instances: dead code insertion, automated refactoring, AI-based obfuscation, and AI-based generation. Over the entirety of this evaluation, we analyze over 4.1 million data points, each representing the similarity value of a pairwise comparison of two programs. The datasets sum up to over 14,000 files with around a million source lines of code. In our evaluation, we analyze software plagiarism detectors for the purpose of evaluating their resilience with respect to automated obfuscation techniques in the context of computer science education. In this context, we investigate the following evaluation questions:

Q1 To what degree do defense mechanisms affect the similarity scores of unrelated programs?
Q2 What degree of resilience do defense mechanisms achieve against insertion-based obfuscation?

Q3 What degree of resilience do defense mechanisms achieve against refactoring-based obfuscation?

Q4 What degree of resilience do defense mechanisms achieve against AI-based obfuscation?

Q5 How well can we distinguish AI-generated from human programs?

Q6 What impact do defense mechanisms have on threshold-based plagiarism generators?

For Q1, we examine the similarity values for pairs of original programs as a metric. For Q2 to Q5, we look at the similarity value differences between plagiarized and original programs and conduct comprehensive statistical tests by computing the statistical significance (p-values) as well as the practical significance (effect size) of these differences. To answer Q6, we measure the difference in the runtime of the plagiarism generator and the difference in the number of inserted lines in the plagiarism instance. In the following, we outline the similarity metrics and statistical measures in detail. Next, we discuss our choice of baseline. We then describe the datasets we used. Finally, we explain the obfuscation attacks we employed.

# 4.1 Similarity Metrics

As plagiarism detection systems compute similarity scores between program pairs, these scores serve as the primary basis for identifying suspicious cases. In practice, similarity scores guide which candidates are reviewed first, as no objective indicator alone can confirm plagiarism. Detection tools typically provide similarity distributions and ranked lists of pairs – both derived from similarity scores – to support human inspection. The detailed visualization of matched code fragments is usually consulted only after identifying high-similarity candidates. Evaluating detection quality requires distinguishing between different types of program pairs, each requiring separate analysis of the detector's similarity scores:

1. Original Pair: Two original programs developed independently of each other, without shared origin.
2. Plagiarism-To-Source Pair: A plagiarism instance and its source program.

To clearly distinguish plagiarism from unrelated programs during human inspection, plagiarism pairs must have high similarity scores [60], while unrelated pairs must have low scores. Ideally, there is no overlap, with plagiarism pairs always showing higher similarity. However, in practice, overlap occurs as changes, especially obfuscation techniques, can reduce the similarity between a plagiarized program and its source. Thus, the difference in similarity between plagiarism and unrelated pairs is crucial to measure detection quality. A common anti-pattern in existing works is evaluating plagiarism detectors using a fixed similarity threshold, where scores above it count as successful detections and those below do not. While this simplifies deriving precision, recall, and F1 scores, the approach is fundamentally flawed, as the threshold can arbitrarily influence results and can be tuned to favor one approach. Since no universal threshold fits all datasets, thresholds are chosen arbitrarily. Due to varying similarity distributions for different datasets, they can only be set after seeing the results, thus introducing a strong bias. A threshold-based evaluation reduces plagiarism detection to a binary classification problem, which is insufficient due to the mentioned problem of overlap. Fundamentally, it measures only whether plagiarism is detected (and only according to some arbitrary criterion), not how well it is detected. For these reasons, we focus on the difference in similarity between plagiarism and non-plagiarism pairs. The larger this difference, the easier it is to detect plagiarism effectively. Thus, we measure to what extent a detection approach can produce such a difference between these pairs. This avoids over-abstracting the problem into a binary classification.
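To illustrate, such separation can be quantified directly from the two score distributions. The following stdlib-only sketch (illustrative; not the code from our replication package) computes a median difference and an interquartile-range distance between the plagiarism-pair and original-pair similarity scores:

```python
import statistics

def delta_median(plag_sims, orig_sims):
    """Median similarity difference between plagiarism and original
    pairs, in percentage points; larger means easier detection."""
    return statistics.median(plag_sims) - statistics.median(orig_sims)

def delta_iqr(plag_sims, orig_sims):
    """Distance between the interquartile ranges: Q1 of the plagiarism
    pairs minus Q3 of the original pairs. A positive value signifies
    that at least 75% of the values in both sets do not overlap."""
    q1_plag = statistics.quantiles(plag_sims, n=4)[0]
    q3_orig = statistics.quantiles(orig_sims, n=4)[2]
    return q1_plag - q3_orig
```

For example, plagiarism scores of 80–100% against original scores of 10–30% yield a median difference of 70 percentage points and a clearly positive IQR distance, i.e., well-separated distributions.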
Varying statistical measures can be used to calculate the differences between these types of pairs. We use measures of central tendency like the mean ($\mu$) or median ($Q_2$) and measures of spread like the difference between the interquartile ranges ($Q_1$ to $Q_3$). Figure 2 shows an example of the median difference (∆median) and interquartile range distance (∆IQR). Each metric highlights different data properties: ∆median resists outliers more than ∆mean, while ∆IQR indicates how well the main value ranges are separated. A ∆IQR above zero signifies that at least 75% of values in both sets do not overlap.

Fig. 2 Difference metrics comparing the separation between plagiarism instances and original student solutions based on the median and interquartile range (IQR) distances measured in percentage points.

Fig. 3 Visualization of alternative hypotheses ($H1$) for one-sided significance tests between plagiarism and original pairs. For plagiarism, the shift should be significantly greater ($H1_{Plag}$), while for originals, it should not ($H1_{Orig}$).

We also employ statistical tests to assess statistical and practical significance. To that end, we conduct one-sided Wilcoxon signed-rank tests to compare improvements in detection quality between approaches. Different hypotheses apply depending on the pair type, as shown in Figure 3. For plagiarism pairs, we test if they show a significant location shift ($H1_{Plag}$), i.e., higher scores. For original pairs, we test that no significant shift occurs ($H1_{Orig}$). For practical significance, we use Cliff's delta $\delta$ [13] as an effect size measure, as we deal with non-normal distributions and paired data. Although a paired version of Cohen's $d$ [14] exists, it is sensitive to outliers and only mildly robust to non-normality, making it unsuitable here.
In contrast, while not ideal for paired data, Cliff's delta $\delta$ remains useful for rank-based comparisons, offering robustness to non-normality and variance differences. There are no established categories to interpret the resulting $\delta$ values. Thus, we base our interpretation on the categories derived by Romano et al. [56] from Cohen's $d$:

$$
\delta\ \text{interpretation} = \begin{cases}
\text{Negligible} & \text{if } 0 \leq |\delta| < 0.147 \\
\text{Small} & \text{if } 0.147 \leq |\delta| < 0.33 \\
\text{Medium} & \text{if } 0.33 \leq |\delta| < 0.474 \\
\text{Large} & \text{if } 0.474 \leq |\delta| < 0.7 \\
\text{Very Large} & \text{if } 0.7 \leq |\delta| \leq 1
\end{cases}
$$

Note that a negative effect size suggests an adverse interpretation, e.g., that the comparison group is greater than the target group.

# 4.2 Choice of Baselines

For the evaluation of programming assignments, we utilize JPlag as our baseline. JPlag is not only regarded as a state-of-the-art tool [5, 45] but also stands out as one of the most frequently referenced approaches and the most compared approach in the literature [45]. Its widespread use in practice and scientific literature makes it an ideal standard for assessing programming-based plagiarism. While MOSS is widely used [45], we excluded it as a baseline for four reasons: (1) it only returns a subset of similarity values, skewing comparisons; (2) it is closed-source and cannot be extended with our defense mechanisms; (3) it requires sending data to U.S.-based servers, conflicting with the European Union's General Data Protection Regulation (GDPR); and (4) it imposes strict usage limits, making large-scale evaluation infeasible.
We also excluded Dolos due to its limited adoption and its lack of support for multi-file programs. Multi-file programs are common in programming assignments, making it essential to evaluate plagiarism detection systems on such datasets. Nonetheless, both MOSS and Dolos are token-based and equally vulnerable to obfuscation attacks [18, 62], underscoring the need for improved defenses. As an example, Figure 4 shows the results of JPlag, MOSS, and Dolos on a simple dataset using insertion-based obfuscation. All three tools yield low similarity scores for the plagiarism instances, causing overlap with unrelated programs. Note that MOSS omits many lower similarity values by design, and the dataset includes only small, single-file programs compatible with Dolos.

# 4.3 Datasets

We used a total of six real-world datasets; four are publicly available, and two are internal. Since public datasets are limited in both size and number, we supplement them with internal datasets. All datasets come from an educational setting but stem from different courses and assignment types. First, we used two tasks from the publicly available collection PROGpedia [48]. Here, Task 19 covers the design of a graph data structure and a depth-first search to analyze a social network. Task 56 concerns minimum spanning trees using Prim's algorithm. Both datasets contain small Java programs. For both datasets, we used only syntactically and semantically correct solutions and the latest version of each program. Next, we used the TicTacToe dataset [60], which contains command-line-based Java implementations of the paper-and-pencil game TicTacToe. This dataset is from an introductory programming class at KIT, specifically from a weekly assignment. It contains many programs, each of medium size. We also used the BoardGame dataset [60]. This assignment is from the same course as the TicTacToe dataset. However, it is the final project of the course.
Here, the task is also a command-line-based game; however, this time, it is a comprehensive board game. Thus, it contains very large programs. Finally, we used two tasks from the publicly available homework dataset by Ljubovic and Pajic [37]. While both tasks contain C++ programs, one pertains to managing student and laptop records within a university setting, whereas the other requires implementing a Fourier series. To prepare the datasets for our evaluation, we removed all solutions that did not compile, as JPlag requires valid input programs. We also removed all human plagiarism (if present) based on the labeling provided by the datasets. If no labeling was present, we removed verbatim copies. This notably reduces the size of some datasets. Consequently, we obtained the six datasets listed in Table 1.

Fig. 4 Obfuscation vulnerability illustrated for the three token-based approaches JPlag, Dolos, and MOSS with the dataset PROGpedia-19 [48] and plagiarism instances automatically obfuscated via statement insertion.

# 4.4 Obfuscation Attacks

We evaluate four automated obfuscation attacks, two algorithmic and two AI-based. For ethical reasons, we briefly discuss these attacks without revealing details to avoid encouraging their use. The first algorithmic obfuscation attack is the insertion of dead statements. For this, we employ two different tools. The first one is MOSSad [18]. As previously discussed, it is non-deterministic and operates threshold-based. The second is PlagGen [8], which is similar to MOSSad but is deterministic and exhaustive. In both cases, the statement insertion uses statements from the original program and a pool of pre-defined statements. Furthermore, both ensure that the inserted statements do not change the behavior of the programs. Thus, this obfuscation attack is semantic-preserving. We use PlagGen for Java and MOSSad for C++, as these are the languages supported by each tool.
Second, we employ the refactoring-based obfuscation attack by Maisch [40], which leverages Spoon [52] and automatically applies semantic-preserving refactoring operations at random positions at the AST level to obfuscate a program. In detail, the refactoring operations include optional wrapping, extracting expressions as new variables, introducing constant container classes and extraction of constants, swapping if-else statements and inverting the corresponding conditions, inserting methods and constructors, and introducing access methods for existing fields. As the behavior of the programs is not changed, this obfuscation attack is also semantic-preserving. The implementation of this obfuscation attack only supports Java programs, so we use only four of the six datasets with it. For the AI-based obfuscation attacks, we use OpenAI's GPT-4, currently the state-of-the-art LLM, for automated plagiarism. There are generally two ways of using generative AI to cheat on programming assignments: AI-based obfuscation, where the adversary provides an AI model with a pre-existing program and tasks it to generate an obfuscated version, and AI-based generation, where the adversary uses the assignment's description to generate a program from scratch via an AI model. We employ AI-based obfuscation as a third obfuscation attack alongside both algorithmic ones. We use fifteen different prompts, mimicking how students would ask GPT to obfuscate their plagiarism. The prompts range from requesting minor structural changes to requesting a reimplemented version of the original program. As this attack requires sending the programs to OpenAI's servers, we did not use it for the BoardGame dataset due to its sensitive nature. Finally, we use full generation as the fourth obfuscation attack. However, we can only employ it for the TicTacToe dataset, as we require the full assignment description and test cases to test for the expected behavior.
AI-based obfuscation is a semantic-agnostic attack. While the prompts contain instructions to preserve the program behavior, there are generally no guarantees that the changes proposed by GPT-4 conform to these instructions. Similarly, for AI-based generation, there is no guarantee that the programs fully implement all details requested by the task.

Table 1 Programming assignment datasets used for the evaluation with the number of included programs, mean size in lines of code (LOC) excluding comments, the programming language, and source of the dataset.

In sum, we use the following four techniques to create 787 plagiarized programs (see Table 2 for details):

1. Insertion-based Obfuscation (semantic-preserving): Inserting dead statements into the program (PlagGen [8] for Java and MOSSad [18] for C/C++).
2. Refactoring-based Obfuscation (semantic-preserving): Applying a variety of semantic-preserving refactoring operations, for example, transformations of control structures, field access, and method granularity [40].
3. AI-based Obfuscation (semantic-agnostic): We obfuscate human solutions with GPT-4 [1] based on 15 varying prompts requesting structural changes.
4. AI-based Generation (semantic-agnostic): We fully generate AI-based solutions with GPT-4 [1] based on only the textual task description of the assignment.

# 5 Evaluation Results

In the following, we provide the results of our evaluation, which demonstrate that the defense mechanisms offer broad obfuscation resilience across diverse datasets and attack types. We compare them to JPlag without any defense mechanisms as the baseline. We provide a replication package for this evaluation [59]. The key findings from our comprehensive evaluation, which offer new insights beyond prior evaluations, can be summarized as follows:

Insertion-based Obfuscation: Combining both defense mechanisms provides improved resilience.
Refactoring-based Obfuscation: Token sequence normalization minimally impacts refactoring-based attacks, as expected. However, subsequence match merging significantly improves detection, and combining both mechanisms achieves enhanced separation of plagiarized and original programs.

GPT-4-based Obfuscation: Token sequence normalization has no positive or negative impact on AI-based obfuscation. Combining token sequence normalization and subsequence match merging shows significant improvements without drawbacks.

GPT-4-generated Programs: Despite the defense mechanisms not being tailored for AI-generated program detection, we observe significantly improved detection across datasets for subsequence match merging.

Threshold-based Plagiarism: The defense mechanisms substantially increase the computational cost of threshold-based obfuscation, making such an obfuscation method more tedious and also easily detectable via metrics such as program size or number of tokens.

Table 2 Overview of the number of plagiarized programs per dataset and obfuscation attack type (851 in total). Each of the 15 prompts is applied to 5 originals for the AI-based obfuscation.

Table 3 One-sided Wilcoxon signed-rank test results for unrelated, student-made programs regarding the potential adverse effects of our defense mechanisms compared to the baseline (sig. level of $\alpha = 0.01$, alternative hypothesis $H1 = greater$, test statistic $W$, effect size via Cliff's delta $\delta$, its interpretation $\delta Int.$, its confidence interval $CI$, and the sample size $n$). Note that high $p$ and low $\delta$ are desirable, as original pairs should not be greater.

In summary, our evaluation demonstrates that the proposed defense mechanisms are highly effective across a range of automated obfuscation attacks.
The proposed defense mechanisms provide significantly (statistically and practically) improved obfuscation resilience without any practically significant change in false-positive rates. In the following, we present detailed results for each attack type as outlined in subsection 4.4 individually. The original, human-made programs of each dataset remain the same for all evaluation stages. Thus, we first discuss the effect of the defense mechanisms on these unrelated programs.

# 5.1 Effect on Unrelated Programs

Effective plagiarism detection requires not only high similarity for plagiarism pairs but also minimal impact on unrelated, original programs. Our results show that the effect on such unrelated programs is negligible, indicating no meaningful increase in false positives. Table 3 presents statistical test results for the original pairs, comparing defense mechanisms to the baseline. Token sequence normalization has virtually no impact on unrelated programs: median similarity changes range from -1.23 to +0.28 percentage points across datasets, with only one statistically significant change (PROGpedia-19), which remains practically insignificant due to the negligible effect size. Subsequence match merging yields small median increases (+0.78 to +6.59), statistically significant in four datasets, yet with negligible to small effect sizes and thus little to no practical significance. When both mechanisms are combined, results are comparable to subsequence match merging alone, with median increases from +0.91 to +6.75. While statistically significant in three datasets, effect sizes remain negligible to small, confirming little to no practical significance for the impact on unrelated programs.

Answer to Q1: The defense mechanisms have a negligible effect on unrelated programs, meaning their impact on the false positive rate is both practically and, in some cases, statistically insignificant.
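As an aid for reading the effect sizes reported in the following tables, Cliff's delta and the interpretation categories from subsection 4.1 can be computed with a short stdlib-only sketch (illustrative; not the code from our replication package):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(X > Y) - P(X < Y), estimated over all pairs."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

def interpret(delta):
    """Map |delta| to the categories derived by Romano et al. [56]."""
    d = abs(delta)
    if d < 0.147:
        return "Negligible"
    if d < 0.33:
        return "Small"
    if d < 0.474:
        return "Medium"
    if d < 0.7:
        return "Large"
    return "Very Large"
```

For fully separated samples, `cliffs_delta` returns 1 (or -1 when the comparison group dominates), corresponding to the "Very Large" category; identical samples yield 0. The accompanying one-sided Wilcoxon signed-rank test is available, for instance, as `scipy.stats.wilcoxon(..., alternative="greater")`.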
# 5.2 Insertion-based Obfuscation

Figure 5 presents results for insertion-based obfuscation attacks, with corresponding statistical measures in Table 4. As described earlier, we use MOSSad [18] for the C++ datasets (Homework-1 and Homework-5) and PlagGen [8] for the others. The key difference lies in their termination strategies: MOSSad uses a threshold-based approach, while PlagGen exhaustively inserts code in all possible positions.

# Baseline

Figure 5 shows the severe impact of insertion-based obfuscation on the baseline. Median similarity values for plagiarism pairs drop to between 5.29% (TicTacToe) and 19.59% (PROGpedia-56), leading to near-complete overlap with original pairs in all datasets except PROGpedia-56, which still shows substantial overlap. In the TicTacToe dataset, median similarity for plagiarism pairs is even lower than for original pairs. As detailed in Table 4, the median similarity difference between plagiarism and original pairs ranges from -0.78 percentage points (TicTacToe) to +19.56 (PROGpedia-56), with most datasets showing differences below ten points – indicating little to no separation. These results confirm that insertion-based obfuscation is highly effective against JPlag and similar token-based detectors, significantly hindering plagiarism detection in the absence of additional defenses.

Fig. 5 Similarity scores for original program pairs and insertion-based plagiarism pairs. Ideally, plagiarism pairs exhibit high similarity, while original pairs should exhibit low similarity.

# Token Sequence Normalization

Token sequence normalization is designed to counter insertion-based obfuscation, and the results in Figure 5 confirm its strong effectiveness. For the Java datasets, it renders JPlag effectively immune to such attacks. Similarity values for plagiarism pairs increase significantly across all datasets. In the Java datasets, median similarities reach between 97.23% (BoardGame) and 100.00% (PROGpedia-19), eliminating overlap with original pairs and achieving full separation. In the C++ datasets, values rise to 38.63% (Homework-5) and 42.83% (Homework-1), with the remaining overlap confined to edge quartiles. This improvement is also evident in the similarity differences between plagiarism and original pairs (Table 4): 80.09 to 99.65 percentage points for the Java datasets, and 26.36 to 34.21 for the C++ datasets. Statistical tests confirm both statistical and practical significance across all datasets (Table 5), with low p-values and very large effect sizes. Overall, token sequence normalization substantially improves obfuscation resilience and demonstrates the value of targeted defenses against insertion-based obfuscation.

Table 4 Statistical measures for plagiarism pairs and their differences (∆) from original pairs for insertion-based obfuscation (corresponds to Figure 5). Higher values indicate better performance. Note that measures are expressed as percentages and their differences as percentage points. Highest values by a margin of 0.25 are marked in bold.

# Subsequence Match Merging

As an attack-independent defense, subsequence match merging is not specifically tailored to insertion-based obfuscation, but still yields notable improvements over the baseline (Figure 5). Median similarity values for plagiarism pairs increase across all datasets, ranging from 22.36% (TicTacToe) to 58.08% (PROGpedia-56), with overlap mostly confined to quartile extremes. Corresponding median similarity differences between plagiarism and original pairs (Table 4) range from 8.01 (BoardGame) to 51.49 percentage points (PROGpedia-56), indicating strong separation. Statistical tests (Table 5) confirm that improvements are both statistically and practically significant. P-values are low across all datasets, and effect sizes are very large, except for BoardGame, which shows a large effect.
While less effective than token sequence normalization, subsequence match merging still offers strong resilience against insertion-based obfuscation – especially notable given its general-purpose nature.

Table 5 One-sided Wilcoxon signed-rank test results for insertion-based obfuscation regarding the improvement by our defense mechanisms compared to the baseline (sig. level of $\alpha = 0.01$, alternative hypothesis $H1 = greater$, test statistic $W$, effect size via Cliff's delta $\delta$, its interpretation $\delta Int.$, its 95 percent confidence interval $CI$, and the sample size $n$). For plagiarism-to-source pairs (P2S), low $p$ and high $\delta$ are desirable.

# Combination of Both

Combining both defense mechanisms yields strong improvements over the baseline, rendering JPlag effectively immune to insertion-based attacks. For the Java datasets, results are comparable to subsequence match merging alone, while the C++ datasets show clear gains over either individual method. We no longer observe any significant overlap between the plagiarism and original pairs, resulting in a clear separation between both types of pairs. Median similarity differences (Table 4) range from 61.22 (Homework-5) to 93.42 percentage points (PROGpedia-19), indicating substantial improvements across all datasets. Statistical tests (Table 5) confirm that these gains are both statistically and practically significant, with low p-values and very large effect sizes for all datasets. In summary, the combination of both mechanisms provides robust resilience against insertion-based obfuscation, especially strengthening detection for C++ programs.

Answer to Q2: The defense mechanisms significantly increase the resilience against semantic-preserving insertion-based obfuscation attacks. The median similarity differences increase by up to 99.65 percentage points, depending on the dataset, thus producing a complete separation of plagiarized and original programs.
Thus, the degree of resilience effectively reflects near-immunity to insertion-based attacks.

Fig. 6 Similarity scores for original program pairs and refactoring-based plagiarism pairs. Ideally, plagiarism pairs exhibit high similarity, while original pairs should exhibit low similarity.

# 5.3 Refactoring-based Obfuscation

Figure 6 presents results for refactoring-based obfuscation, which applies a mix of semantic-preserving refactorings at random positions in the parse tree of programs. Corresponding statistical measures are shown in Table 6. As the obfuscation tool [40] supports only Java, this evaluation stage includes four of the six datasets.

# Baseline

Figure 6 shows the substantial impact of refactoring-based obfuscation on the baseline. Median similarity values for plagiarism pairs drop to between 13.90% (TicTacToe) and 35.02% (BoardGame), leading to clear overlap with original pairs in all datasets except BoardGame, where overlap is limited to outliers. The reduced effect on BoardGame likely stems from the fact that complex obfuscation techniques are harder to apply broadly, reducing their effectiveness for large programs. Median similarity differences between plagiarism and original pairs (Table 6) range from 7.83 (TicTacToe) to 18.82 percentage points (BoardGame), indicating only limited separation. Overall, refactoring proves to be an effective obfuscation method against baseline JPlag.

Table 6 Statistical measures for plagiarism pairs and their differences (∆) from original pairs for refactoring-based obfuscation (corresponds to Figure 6). Higher values indicate better performance. Note that measures are expressed as percentages and their differences as percentage points. Highest values by a margin of 0.25 are marked in bold.

Table 7 One-sided Wilcoxon signed-rank test results for refactoring-based obfuscation regarding the improvement by our defense mechanisms compared to the baseline (sig. level of $\alpha = 0.01$, alternative hypothesis $H1 = greater$, test statistic $W$, effect size via Cliff's delta $\delta$, its interpretation $\delta Int.$, its 95 percent confidence interval $CI$, and the sample size $n$). For plagiarism-to-source pairs (P2S), low $p$ and high $\delta$ are desirable.

# Token Sequence Normalization

Since refactoring-based obfuscation does not involve statement insertion or reordering, token sequence normalization has little to no effect – an expected outcome, given its design focus. As shown in Figure 6, the results closely resemble those for the baseline. In three of four datasets, the median similarity for plagiarism pairs is slightly lower, with reductions ranging from 0.78 (TicTacToe) to 2.73 percentage points (PROGpedia-56) – a marginal difference. This trend is also reflected in the similarity differences between plagiarism and original pairs (Table 6), which range from 6.96 (TicTacToe) to 18.87 percentage points (BoardGame). Statistical tests (Table 7) confirm that these changes are neither statistically nor practically significant. High p-values and near-zero (even negative) effect sizes indicate negligible impact. In summary, token sequence normalization provides no measurable resilience against refactoring-based obfuscation, which is consistent with its intended scope.

# Subsequence Match Merging

Subsequence match merging leads to clear improvements over the baseline. As shown in Figure 6, median similarity values for plagiarism pairs increase across all datasets, ranging from 25.47% (TicTacToe) to 42.62% (PROGpedia-56). Overlap with original pairs is reduced, mostly limited to quartile extremes. Median similarity differences between plagiarism and original pairs (Table 6) range from 17.62 (TicTacToe) to 36.02 percentage points (PROGpedia-56), indicating stronger separation.
Statistical tests (Table 7) confirm statistical and practical significance: p-values are low across all datasets, and effect sizes are medium (BoardGame, PROGpedia-56) to large (TicTacToe, PROGpedia-19). In summary, subsequence match merging provides meaningful resilience against refactoring-based obfuscation, enabling effective detection even when structural changes are introduced through extensive refactorings.

# Combination of Both

Combining both defense mechanisms yields strong improvements over the baseline, closely mirroring the results of subsequence match merging alone. As shown in Figure 6, median similarity values for plagiarism pairs range from 25.28% (TicTacToe) to 41.32% (PROGpedia-56), with limited overlap confined to quartile extremes. Median similarity differences (Table 6) span from 17.32 (TicTacToe) to 34.58 percentage points (PROGpedia-56), showing solid improvement. Results are slightly lower than those of subsequence match merging alone, except for PROGpedia-56, where the combination performs marginally better. Statistical tests (Table 7) confirm both statistical and practical significance: all p-values are low, with effect sizes ranging from medium (BoardGame, PROGpedia-56) to large (TicTacToe, PROGpedia-19). In summary, the combined defenses offer significant resilience against refactoring-based obfuscation across all datasets, though the added benefit over subsequence match merging alone is limited.

Answer to Q3: The defense mechanisms significantly increase the resilience against semantic-preserving refactoring-based obfuscation attacks. The median similarity differences increase by up to 22 percentage points, depending on the dataset, thus strongly improving the separation between plagiarized and original programs.

Fig. 7 Similarity scores for original program pairs and AI-based plagiarism pairs (obfuscation with 15 varying GPT-4 prompts).
Ideally, plagiarism pairs exhibit high similarity, while original pairs should exhibit low similarity.

# 5.4 GPT-4-based Obfuscation

Figure 7 presents the results for GPT-4-based obfuscation, with corresponding statistical measures in Table 8. We used 15 distinct prompts instructing GPT-4 to alter program code while preserving its functionality. However, as this process lacks formal guarantees, the resulting obfuscation is considered semantic-agnostic. The BoardGame dataset was excluded due to its use in a final exam and the associated privacy concerns with sending data to OpenAI servers.

# Baseline

Figure 7 shows that GPT-4-based obfuscation, using 15 behavior-preserving prompts, can be effective against JPlag. Median similarity values for plagiarism pairs drop to between 19.74% (Homework-5) and 66.67% (PROGpedia-56), a notably higher range than seen with other obfuscation methods. Overlap with original pairs varies across datasets. Homework-1 shows substantial overlap, including interquartile ranges, while overlap is less pronounced for the PROGpedia datasets. As shown in Table 8, median similarity differences range from 9.86 (Homework-1) to 66.67 percentage points (PROGpedia-56), indicating limited separation – particularly for Homework and TicTacToe. Interestingly, the variability in attack effectiveness across prompts is similar to that across datasets. We observe that the dataset itself – likely due to the underlying assignment and domain – has a greater influence on obfuscation effectiveness. Thus, while GPT-based obfuscation is effective, its reliability is lower than that of algorithmic methods due to significant variation in performance across datasets and prompts.

Table 8 Statistical measures for plagiarism pairs and their differences ($\Delta$) from original pairs for AI-based obfuscation (corresponds to Figure 7). Higher values indicate better performance.
Note that measures are expressed as percentages and their differences as percentage points. Highest values by a margin of 0.25 are marked in bold.

Table 9 One-sided Wilcoxon signed-rank test results for AI-based obfuscation regarding the improvement of our defense mechanisms compared to the baseline (sig. level of $\alpha = 0.01$, alternative hypothesis $H_1 = \mathrm{greater}$, test statistic $W$, effect size via Cliff's delta $\delta$, its interpretation $\delta_{Int}$, its 95 percent confidence interval $CI$, and the sample size $n$). For plagiarism-to-source pairs (P2S), low $p$ and high $\delta$ are desirable.

# Token Sequence Normalization

Since GPT-4-based obfuscation involves diverse modifications beyond statement insertion or reordering, token sequence normalization has a limited impact. As shown in Figure 7, results are similar to the baseline across all five datasets. Median similarity values for plagiarism pairs vary slightly, ranging from $-5.07$ (Homework-5) to $+2.37$ percentage points (PROGpedia-56). Corresponding median similarity differences with original pairs (Table 8) range from 6.83 (Homework-1) to 69.40 percentage points (PROGpedia-56), aligning closely with baseline values – slightly better for PROGpedia, slightly worse for the Homework datasets, possibly reflecting language-specific differences in GPT-4's output. Statistical tests (Table 9) show some statistical significance (PROGpedia-19, TicTacToe) but no practical significance. Effect sizes are negligible across all datasets, with negative values for the Homework sets. In summary, token sequence normalization offers no meaningful resilience against GPT-4-based obfuscation, though it also introduces no adverse effects.

# Subsequence Match Merging

Subsequence match merging significantly improves results over the baseline. As shown in Figure 7, median similarity values for plagiarism pairs increase across all datasets, ranging from 22.90% (Homework-1) to 84.43% (PROGpedia-56). Overlap with original pairs is reduced, primarily limited to quartile extremes. Median similarity differences (Table 8) range from 10.32 (Homework-1) to 77.84 percentage points (PROGpedia-56), indicating substantial separation. This confirms that subsequence match merging provides a solid improvement over the baseline. Statistical tests (Table 9) show statistically significant improvements across all datasets. Practical significance is achieved in all but Homework-1, where the effect size remains negligible. Note that the high variance in plagiarism pair similarities affects the effect size measure [23]. In contrast, the remaining datasets show small to medium effect sizes, indicating practical significance. In summary, subsequence match merging offers robust resilience against GPT-4-based obfuscation despite its semantic-agnostic nature and variability across datasets and prompts.

# Combination of Both

Combining both defense mechanisms results in strong improvements over the baseline, largely mirroring the effect of subsequence match merging alone. As shown in Figure 7, median similarity values for plagiarism pairs range from 19.09% (Homework-1) to 83.07% (PROGpedia-56), with reduced overlap mostly confined to quartile boundaries. Median similarity differences (Table 8) range from 7.47 (Homework-1) to 76.32 percentage points (PROGpedia-56), showing consistent improvement over the baseline. The effect is slightly weaker than with subsequence match merging alone, except for PROGpedia-19, where the combination performs marginally better. Statistical tests (Table 9) confirm statistical significance in all datasets except Homework-5 ($p = 0.11$). Practical significance is observed for all but the Homework datasets, which exhibit small effect sizes.
These datasets appear more vulnerable to AI-based obfuscation, possibly due to the small size of these programs or due to the semantic-agnostic nature of the transformation, which may alter the behavior of programs. For the remaining four Java datasets, effect sizes are small to medium, indicating practical significance. As with other AI-based attacks, high variance in similarity scores due to prompt diversity reduces measured effect sizes [23]. In summary, the combined defenses offer significant resilience against AI-based obfuscation for Java datasets, though results are more limited for C++ programs. Yet, the defense mechanisms improve detection despite the potentially disruptive nature of AI-based obfuscation.

Answer to Q4: The defense mechanisms significantly increase the resilience against semantic-agnostic AI-based obfuscation attacks. The median similarity differences increase by up to 19 percentage points, depending on the dataset, thus improving the separation between plagiarized and original programs, albeit to a lesser degree than for other attack types.

# 5.5 GPT-4-generated Programs

Figure 8 presents results for programs generated by GPT-4 based on assignment descriptions, with corresponding statistical measures in Table 10. Unlike previous stages, this evaluation does not involve obfuscation, as the generated programs are not derived from human-written ones. Instead, we compare the similarity among GPT-4-generated programs to that of unrelated human submissions. While the defense mechanisms are not designed for this setting, they improve the distinction between AI-generated and unrelated human programs. Such a capability can help detect AI-generated submissions if multiple students use the same language model. As with the last stage, BoardGame was excluded for privacy reasons.

Fig. 8 Similarity scores for original (human) program pairs and pairs of AI-generated programs (based on GPT-4 and the assignment description).
Ideally, generated pairs exhibit high similarity, while original pairs should exhibit low similarity.

Table 10 Statistical measures for plagiarism pairs and their differences ($\Delta$) from original pairs for AI-based generation (corresponds to Figure 8). Higher values indicate better performance. Note that measures are expressed as percentages and their differences as percentage points. Highest values by a margin of 0.25 are marked in bold.

# Baseline

Figure 8 shows that GPT-4-generated programs, though created from the same assignment prompt, exhibit relatively low similarity: the median similarities among generated pairs are between 20.10% (PROGpedia-19) and 31.33% (PROGpedia-56). This reflects the inherent non-determinism of generative AI. In contrast, unrelated human-made programs for the same task have even lower similarities, with median values between 0.00% (PROGpedia-56) and 6.06% (TicTacToe). Thus, even with the baseline, GPT-4-generated programs are significantly more similar to each other than human submissions. However, some overlap remains. As shown in Table 10, the median similarity differences between AI-generated and human program pairs are 14.57 (TicTacToe), 15.04 (PROGpedia-19) and 33.13 (PROGpedia-56) percentage points, which is comparable to the differences observed for refactoring- and alteration-based obfuscation, but notably higher than for insertion-based attacks, which showed a median difference slightly below zero ($-0.78$). In summary, while generated programs are more alike than human ones, the limited separation leaves room for evasion – highlighting the need for improved detection mechanisms.

Table 11 One-sided Wilcoxon signed-rank test results for AI-based generation regarding the improvement by our defense mechanisms compared to the baseline (sig. level of $\alpha = 0.05$, alternative hypothesis $H_1 = \mathrm{greater}$, test statistic $W$, effect size via Cliff's delta $\delta$, its interpretation $\delta_{Int}$, its 95 percent confidence interval $CI$, and the sample size $n$). For fully-generated pairs (FG), low $p$ and high $\delta$ are desirable.

# Token Sequence Normalization

As token sequence normalization targets statement insertion and reordering, it is not expected to meaningfully affect AI-generated programs, which typically lack dead code and are less impacted by statement order. As shown in Figure 8, results align with expectations: the median similarities among generated pairs match the baseline. Similarly, the median similarity differences with unrelated human programs remain at values similar to the baseline (Table 10). Statistical tests (Table 11) confirm that the variations are both statistically and practically insignificant. With a p-value of 1 and a near-zero effect size, no meaningful improvement is observed. In summary, token sequence normalization does not enhance the detection of AI-generated programs but also introduces no significant drawbacks.

# Subsequence Match Merging

Subsequence match merging yields a clear improvement over the baseline. As shown in Figure 8, the median similarity among generated pairs increases by 8.31 (TicTacToe) to 11.23 (PROGpedia-19) percentage points, reducing overlap with human programs – now mostly limited to the upper quartile. This improvement is reflected in the median similarity difference between AI-generated and human program pairs, which rises to between 21.10 (TicTacToe) and 34.47 (PROGpedia-56) percentage points (Table 10). Statistical tests (Table 11) confirm both statistical and practical significance: p-values are low, and the effect size, though small, is meaningful in practice.
In summary, subsequence match merging significantly enhances the detection of AI-generated programs – despite not being designed for this purpose – highlighting its versatility as a defense mechanism.

# Combination of Both

Combining both defense mechanisms results in a strong improvement over the baseline, closely mirroring the effect of subsequence match merging alone. As shown in Figure 8, the median similarity among generated pairs increases by between 7.55 (TicTacToe) and 11.15 (PROGpedia-19) percentage points, reducing overlap with human submissions – primarily in the upper quartile. The median similarity difference between AI-generated and human program pairs rises to between 20.24 (TicTacToe) and 34.47 (PROGpedia-56) percentage points (Table 10). Statistical tests (Table 11) confirm both statistical and practical significance, with low p-values and a small but meaningful effect size. In summary, the combination of both defenses significantly improves the detection of AI-generated programs, performing nearly identically to subsequence match merging alone and introducing no observable drawbacks.

Answer to Q5: The defense mechanisms, while not designed for this purpose, significantly increase the detection rate of AI-generated programs. The median similarity difference to human programs increases by up to 8.92 percentage points, thus moderately yet significantly improving the separation between AI-generated and human-written programs.

# 5.6 Threshold-based Obfuscation

In the previous sections, we evaluated each defense variant using identical plagiarism instances, enabling direct comparison. However, threshold-based obfuscation (e.g., MOSSad [18]) dynamically adapts its transformations based on the output of a plagiarism detector, as it obfuscates until the similarity to the original falls below a target threshold. In subsection 5.2, we configured it to use baseline JPlag.
This raises the question of how the defenses affect obfuscation when enabled during the threshold-based obfuscation process itself. To investigate this, we selected ten random programs from each Homework dataset and ran MOSSad with all defense variants (MOSSad only supports C/C++ programs). Note that these programs are relatively small (105 LOC on average in Homework-1 and 282 LOC in Homework-5). Experiments were run on a high-performance system (AMD Ryzen 7 7700, 16GB RAM, Arch Linux), representing a realistic upper bound for student hardware. For all programs and variants, MOSSad is configured to terminate when reaching 25 percent similarity to the original.

Fig. 9 Required obfuscation duration per program for MOSSad to reach an obfuscation threshold of 25 percent (for programs with original sizes of ~105 LOC for Hw.-1 and ~123 LOC for Hw.-5).

Fig. 10 Required relative insertion of statements for MOSSad to reach a 25 percent obfuscation threshold (relative insertions compared to the original program size to normalize for program size).

We observed a notable increase in obfuscation time and program size when using MOSSad to evade the defense mechanisms. Figure 9 shows a substantial increase in runtime when defenses are active. For the baseline, median obfuscation times were 24 minutes (Homework-1) and 48 minutes (Homework-5). With token sequence normalization (TSN), runtime increased to 72 minutes across both datasets. With subsequence match merging (SMM), runtimes rose to 55 minutes (Homework-1) and 110 minutes (Homework-5). Combining both defenses further increased runtimes to 110 minutes and 190 minutes, respectively, with observed maxima reaching 234 and 375 minutes. These results show that the defenses impose significant computational overhead even on small programs, raising the effort required to successfully obfuscate plagiarism. Figure 10 shows the relative growth in program size.
For the baseline, the median size increased by 87% (Homework-1) and 101% (Homework-5), nearly doubling the original code. With TSN, insertions rose to 169%, while SMM required 156% (Homework-1) and 169% (Homework-5). When both defenses were enabled, median increases reached 317% and 331%, respectively. Maximum observed increases were 1193% and 869%, and all obfuscated programs exceeded 200% of the original size. Such extreme growth makes obfuscated code highly conspicuous and easily identifiable through token count, manual inspection, or outlier detection. Consequently, it makes threshold-based obfuscation apparent, thus turning it into an ineffective obfuscation strategy. Despite the high-performance system and small input programs, obfuscating just 20 programs across all variants took over 125 hours. Larger programs or slower systems would require days per attempt. Importantly, we configured MOSSad to stop at a 25% similarity threshold (to restrict the overall computation time), while unrelated program pairs typically fall around 10-15%. Achieving lower similarity – and thus avoiding detection entirely – would require even more aggressive obfuscation. Overall, our contributions substantially enhance obfuscation resilience, making threshold-based obfuscation highly time-consuming and resulting in plagiarized solutions that are exceptionally conspicuous due to their size. These factors collectively act as strong deterrents against obfuscation-based plagiarism, making the obfuscation efforts more tedious than completing the actual assignment.

Answer to Q6: The defense mechanisms strongly increase the computational cost of threshold-based plagiarism, resulting in an obfuscation time of up to 6 hours per program and up to a 1300 percent increase in program size, making threshold-based plagiarism more tedious and easily detectable.
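The threshold-driven attack loop evaluated in this section can be sketched as follows. This is a minimal model, not MOSSad itself: the program is modeled as a token list, `similarity` is a toy score in which each inserted dead statement dilutes the match fraction, and the 25 percent threshold matches the configuration described above.

```python
def obfuscate_to_threshold(original, similarity, insert_dead_statement,
                           threshold=0.25, max_rounds=100_000):
    """Insert dead statements until the detector's similarity score for
    (program, original) drops below the target threshold (MOSSad-style)."""
    program = list(original)
    rounds = 0
    while similarity(program, original) >= threshold and rounds < max_rounds:
        program = insert_dead_statement(program)
        rounds += 1
    # Relative growth as in Figure 10: inserted tokens over original size.
    growth = (len(program) - len(original)) / len(original)
    return rounds, growth

# Toy similarity model: fraction of tokens still matching the original.
similarity = lambda p, o: len(o) / len(p)
insert_dead_statement = lambda p: p + ["dead_stmt"]

rounds, growth = obfuscate_to_threshold(list(range(10)), similarity,
                                        insert_dead_statement)
print(rounds, growth)  # 31 insertions, i.e. 310% size growth
```

Even in this simplified model, pushing the score below 25 percent requires more than tripling the program, mirroring the conspicuous 317-331% median growth observed when both defenses are enabled; a defense that keeps merged matches intact forces the score to decay even more slowly per insertion.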
# 6 Threats to Validity

We now discuss how we address threats to the validity of our evaluation, following the guidelines outlined by Wohlin et al. [72] and Runeson and Höst [58].

# Internal Validity

Baseline Consistency: For internal validity, we used JPlag as the baseline and implemented the defense mechanisms directly in JPlag, ensuring that all other conditions remained constant when comparing the defense mechanisms with each other or with the baseline. Handling of Invalid Programs: Some public datasets contain invalid or incomplete programs (e.g., programs that do not compile), which could lead to inaccurate results if not properly handled. We addressed this by preprocessing the datasets and removing programs that do not compile. Validity of the Labeling: Public datasets often contain incomplete or biased plagiarism labels. This issue does not affect our evaluation, as all plagiarism instances are generated through controlled automated obfuscation. As a preprocessing step, we carefully filtered out instances of human plagiarism based on the labels, analyzed them with JPlag, and performed human inspections.

# External Validity

Generalizability Across Datasets: Our evaluation uses real-world student submissions from diverse university courses, covering two programming languages and varying assignment sizes. This reflects typical software plagiarism detection scenarios and supports a representative, generalizable assessment [45]. Generalizability of Obfuscation Attacks: Limiting the evaluation to only a few types of obfuscation attacks could hinder the applicability of our results to broader contexts. To enhance external validity and thus ensure that our findings are generalizable, we included a diverse set of real-world obfuscation techniques. Influence of Prompt Quality: To address the impact of prompt choice for AI-based obfuscation, we performed systematic prompt engineering prior to the evaluation. We then evaluated with 15 suitable prompts.
We generated multiple plagiarism instances for each prompt, which we repeated for multiple datasets. While the impact of the prompt varies, the variation is not strong enough to obscure the overall trend, supporting the generalizability of our results.

# Construct Validity

Evaluation Methodology Alignment: To enhance construct validity, we aligned our evaluation methodology with those of established and related research works. Moreover, we employ an approach-independent ground truth and use established similarity metrics. Underlying Research Object: Our measurements align directly with the research objective of evaluating detector resilience against automated obfuscation. We use similarity scores from the detectors as primary measurements and assess obfuscation using real-world tools like MOSSad and GPT-4. Choice of Baseline: The baseline selection might affect the comparison and outcomes. We selected JPlag as the baseline, as other widely used tools are either not applicable to all datasets, closed-source, or provide restricted results. JPlag is widely recognized as a state-of-the-art tool [5, 45], ensuring that the comparison is relevant and accurate. It operates similarly to other widely used tools by employing standard similarity metrics.

# Reliability

To ensure reliability, we provide a comprehensive reproduction package for our evaluation [59]. Use of Internal Datasets: Using internal datasets can hinder reproducibility. To enhance reliability, we used both public and internal datasets, balancing generalizability with the need for open data where possible. We discussed all preprocessing steps and the employed obfuscation attacks for all datasets. For the internal datasets (TicTacToe and BoardGame), we provide raw results and metadata in our replication package. Publishing of Obfuscation Attacks: The obfuscation attacks utilized in our study can be considered malware, which restricts our ability to provide access to these tools.
The exception is GPT-4 [1], which is publicly available; however, we do not provide a detailed, step-by-step guide on exploiting it for plagiarism. While omitting these artifacts or details may hinder reproducibility, we balance this limitation against ethical considerations and our responsibility regarding potential misuse.

# 7 Discussion

In the following, we discuss the interpretation of the evaluation results and highlight key takeaways for software plagiarism detection. Our evaluation highlights the effectiveness of automated obfuscation techniques against plagiarism detectors without defense mechanisms. Insertion-based obfuscation proves especially effective, fully concealing plagiarism by adding semantically irrelevant code. Refactoring-based obfuscation also poses a substantial challenge, as structural changes that preserve behavior significantly reduce similarity, limiting the detector's ability to identify plagiarism instances. AI-based obfuscation introduces significant variability, with its effectiveness depending more on dataset characteristics than on prompt or language. While powerful, its reliability is lower than that of algorithmic methods. However, due to the rapid advancements of generative AI, AI-based obfuscation poses a growing challenge to detection systems. Currently, AI-based program generation is only effective for smaller programs (below 300 to 400 LOC). Our results show that AI-generated programs exhibit increased similarity to each other compared to human-written programs, aiding detection when multiple students use the same model. Although not designed for this setting, subsequence match merging improves the separation of AI-generated from human-written programs.

# 7.1 On Providing Broad Obfuscation Resilience

Our evaluation shows that our approach improves obfuscation resilience for all employed attacks and datasets. As expected, the degree of those improvements depends on the type of obfuscation attack.
Nevertheless, when the defense mechanisms are employed, the provided resilience is not limited to a specific obfuscation attack. We demonstrated effectiveness against a wide range of obfuscation attacks, including both algorithmic and AI-based attacks, encompassing semantic-preserving and semantic-agnostic obfuscation. Furthermore, we evaluated datasets across different programming languages, in addition to diverse assignment types and sizes, thus demonstrating the approach's adaptability. In total, we use six datasets in combination with five distinct obfuscation attack types. Moreover, each attack type involves various modifications. For example, refactoring-based obfuscation includes multiple transformation types, while AI-based obfuscation involves 15 varying prompts to generate diverse plagiarism instances. Our evaluation showed that our contributions provide broad resilience against automated obfuscation attacks on programming assignments by systematically covering these different categories of obfuscation attacks. The smallest improvement was observed for AI-based obfuscation, which is expected, given that this is a semantic-agnostic attack using highly challenging prompts, including partial implementations. Detecting partial implementations is particularly difficult for plagiarism detectors, as they must carefully balance detecting re-implementation against avoiding false positives. On the other hand, the strongest improvement was observed for structural attacks, which is a significant result. Structural attacks are among the easiest to automate, even with traditional methods, and they tend to consistently affect plagiarism detectors. Thus, improving resilience in this area is crucial for the effectiveness of detection tools.
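As a rough illustration of why subsequence match merging generalizes across attack types, the core idea – bridging the small gaps that obfuscation tears into otherwise long token matches – can be sketched as follows. The one-dimensional match representation and both thresholds are simplifying assumptions for illustration, not the actual JPlag implementation.

```python
def merge_matches(matches, max_gap=2, min_merged_len=5):
    """Merge token-sequence matches (start, length) whose gaps are at most
    `max_gap` tokens, so that fragments scattered by an obfuscation attack
    count as one long match again; then drop short leftover matches."""
    merged = []
    for start, length in sorted(matches):
        if merged and start - (merged[-1][0] + merged[-1][1]) <= max_gap:
            prev_start, _ = merged.pop()
            length = (start + length) - prev_start
            start = prev_start
        merged.append((start, length))
    return [m for m in merged if m[1] >= min_merged_len]

# Three short fragments separated by 1-2 inserted tokens become one
# long match covering tokens 0..12.
print(merge_matches([(0, 3), (4, 3), (9, 4)]))
```

Because every obfuscation attack must ultimately perturb the same linear token representation, a mechanism that reconnects fragmented matches is agnostic to how the gaps were produced, whether by refactorings, insertions, or AI rewrites.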
# 7.2 Outliers and Remaining Overlap

Except for insertion-based obfuscation (see Figure 5), where the defense mechanisms completely eliminate any overlap between plagiarism instances and unrelated programs, the evaluation results show minor overlap. This raises an important question regarding the expectations one should have concerning the quality of plagiarism detection. In practical terms, some overlap among outliers is not a significant concern. It is essential to recognize that no plagiarism detection tool is perfect. Thus, educators must accept that human inspection is always the final step in plagiarism detection and that no one should rely solely on the results of an automated tool without first verifying the flagged candidates themselves. Furthermore, it is crucial to note that plagiarism detectors compare pairs of programs, and thus a single program might be included in multiple comparisons. This means that detecting every plagiarism pair is not necessary to identify all students involved in plagiarism. In practice, educators would be presented with a ranked list of suspicious pairs, including unrelated and plagiarism pairs. For example, in our evaluation of GPT-4-generated programs, it is not necessary to identify all 1,225 pairs of AI-generated programs to detect each of the 50 generated programs at least once. Notably, when both defense mechanisms are enabled (Both in Figure 8), only the first 158 pairs (the top 0.07 percent of all 220,780 analyzed pairs) need to be inspected to successfully identify 90 percent of the AI-generated programs at least once. To detect all 50 AI-generated programs, the first 711 pairs need to be checked, which is the top 0.3 percent of all pairs. This underscores that a slight overlap between the pairs of unrelated programs and the plagiarism pairs is not a cause for concern. Ultimately, it is important to emphasize that no plagiarism detection tool can provide 100 percent certainty.
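The ranked-list argument above (e.g., 158 inspected pairs covering 90 percent of the generated programs) can be made concrete with a small helper that counts how many top-ranked pairs must be inspected before a given fraction of suspect programs has appeared at least once. The pair data below is purely illustrative.

```python
import math

def pairs_to_inspect(ranked_pairs, suspects, fraction=0.9):
    """Return how many of the top-ranked pairs must be inspected until
    `fraction` of the suspect programs appear in at least one inspected
    pair (None if the ranked list is exhausted first)."""
    needed = math.ceil(len(suspects) * fraction)
    seen = set()
    for i, (a, b) in enumerate(ranked_pairs, start=1):
        seen.update(p for p in (a, b) if p in suspects)
        if len(seen) >= needed:
            return i
    return None

# Illustrative ranked similarity list: generated programs g1..g4 mixed
# with one unrelated human pair (h1, h2).
ranked = [("g1", "g2"), ("g1", "g3"), ("h1", "h2"), ("g2", "g4")]
suspects = {"g1", "g2", "g3", "g4"}
print(pairs_to_inspect(ranked, suspects, fraction=0.75))  # 2
print(pairs_to_inspect(ranked, suspects, fraction=1.0))   # 4
```

Because each program participates in many pairs, the inspection effort grows with the number of distinct programs to cover, not with the number of plagiarism pairs, which is why a small inspected prefix of the ranked list suffices.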
Therefore, human inspection and informed decision-making are essential in ensuring fair and accurate misconduct investigations. Educators must always engage in thoughtful analysis of the results generated by these tools to effectively discern genuine cases of plagiarism from false positives.

# 7.3 AI-based Plagiarism

AI-based attacks [6], particularly those utilizing generative AI, present a growing concern for plagiarism detection. We discussed two possible scenarios for employing generative AI to cheat on programming assignments: automatically obfuscating an existing solution, and fully generating a solution from the assignment description. Based on our evaluation results, automatic obfuscation is currently the more effective approach for medium and larger assignments, as full generation only works well for smaller programs. Generated programs often fail to fulfill necessary functional requirements (not implementing the required behavior precisely) and even non-functional requirements like code style, thus requiring significant manual effort to improve them sufficiently. Automatic obfuscation resembles human obfuscation practices, as a pre-existing solution is altered while trying to preserve the program behavior. For both approaches, the defense mechanisms have shown improved resilience. For AI-generated solutions, there is an ongoing debate on whether this form of cheating qualifies as plagiarism [45, 61]. Our approach improves the detection rate by helping to recognize the similarities among generated solutions that occur due to the semi-deterministic nature of large language models. This improvement is surprising, as the defense mechanisms are not designed to detect AI-generated programs.

# 7.3.1 On the Effectiveness of AI-based Obfuscation

Our evaluation results show that the effectiveness of the defense mechanisms against AI-based obfuscation is less pronounced compared to their performance against algorithmic attacks. This can be attributed to two key factors.
First, the varying overall effectiveness of AI-based obfuscation plays a significant role. Our results indicate a strong variance in the similarity values achieved by AI-based obfuscation. While part of this variability can be explained by the different prompts used in our evaluation, the trend remains consistent even when examining the results for each prompt individually. For plagiarized programs that already exhibit a high degree of similarity to their original versions, there is limited potential and necessity for the defense mechanisms to increase that similarity any further. Second, generative AI employs a much broader range of modifications compared to algorithmic obfuscation techniques. Algorithmic methods typically rely on a well-defined, limited set of changes during obfuscation. Even refactoring-based obfuscation, which involves multiple refactoring operations, operates within a constrained set of transformations. In contrast, AI-based obfuscation introduces a far more diverse range of modifications, even when using the same prompt. In our evaluation, we observed strong variations in the types of changes applied by the AI depending on both the prompt used and the dataset involved. These diverse modifications alter token sequences extensively, posing a challenge to the defense mechanisms. Nonetheless, our evaluation still shows a notable improvement in resilience against AI-based obfuscation, even in the presence of these complex and varied changes. This demonstrates that while AI obfuscation is an effective technique, the defense mechanisms mitigate its effects. Interestingly, the effectiveness of AI-based obfuscation attacks varies strongly depending on the dataset used. As illustrated in Figure 7, AI-based obfuscation performs well for Homework-1, while it does not perform well for either PROGpedia dataset. TicTacToe and Homework-5 yield mixed results.
The median similarity differences range between around 10 and around 78 percentage points, depending on the dataset (see Table 8). Although the evaluated plagiarism instances proved to be effective, the process of generating them was not straightforward. In some cases, GPT-4 produces incomplete or invalid code. Despite over 50 attempts, we could not produce a valid result for three original programs, all of which exceeded 300 LOC. Thus, algorithmic obfuscation currently exhibits more consistent results than AI-based obfuscation, and can currently be just as effective. However, AI-based obfuscation is more useful in avoiding detection during manual inspection, as it produces diverse modifications and can imitate human-made code.

# 7.3.2 On the Effectiveness of AI-based Generation

While AI-based generation works to a certain extent, its effectiveness is currently limited. The programs generated entirely by GPT-4 did not fully comply with the specific requirements of the programming assignments, often resulting in additional output or slightly altered behavior. These discrepancies suggest that fully AI-generated solutions may only be suitable for smaller, less complex assignments. In our case, the TicTacToe dataset, with an average size of 236 lines of code, appears to be near the threshold where fully generated solutions start to exhibit these inconsistencies. A noteworthy observation is that AI-generated programs are typically shorter than those created by human developers, especially within the TicTacToe dataset. This reduction in length may contribute to the higher degree of similarity observed between AI-generated solutions. While large language models like GPT-4 are not entirely deterministic, they exhibit a level of determinism sufficient for software plagiarism detection purposes. This inherent determinism, coupled with the more concise code produced by the AI, may explain why AI-generated programs tend to resemble each other more closely than human-written ones.
Finally, GPT-4 tends to produce placeholder comments instead of fully implementing certain methods, particularly when the task or method is not well-defined in the prompt. This behavior further limits the effectiveness of AI-based generation for complex assignments, as these incomplete implementations require additional manual intervention. # 7.3.3 Emerging Threats While our results show that our contributions can effectively address current AI-based threats, rapid advancements in this field may necessitate future re-evaluation. In the future, AI-based obfuscation methods may exhibit less variance in their effectiveness, thus increasing their reliability. Similarly, new algorithmic attacks might emerge. However, as discussed, all emerging attacks must affect the same attack surface, namely the internal, linear program representation. Thus, subsequence match merging will continue to provide resilience to emerging attacks; the degree of that resilience, however, remains to be assessed. The rapid development in the field of generative AI may lead to emerging threats that warrant close attention [34]. One area of particular concern is fully AI-generated programs. As generative AI advances, generating entire solutions might become feasible for larger programs and yield functionally correct programs for more complex assignments. To detect such fully generated programs, detection systems need the capability to detect obfuscation via re-implementation, the results of which can be considered semantic clones. Here, caution is warranted. While matching full re-implementations seems desirable, it risks introducing significant false positives by flagging unrelated programs created independently by students. Note that unrelated solutions to a single problem can also be seen as semantic clones. Thus, we see the danger of creating unreliable detection systems, which may lead to unfairly penalizing students. 
Addressing re-implementation or semantic clones, therefore, raises philosophical questions about the boundaries of what type of plagiarism we actually want a detection system to target. For fully generated programs, for example via generative AI, traditional plagiarism detection methods may not be sufficient. If such methods, including the defense mechanisms evaluated in this paper, prove inadequate against more sophisticated AI-generated code, alternative techniques may need to be explored [31]. One research area is the development of AI-based detectors that act as countermeasures to generative AI. However, at present, such AI-based detectors have not demonstrated sufficient reliability or performance, and they remain an area of ongoing research [71, 49, 32]. Another possibility lies in signature- or watermark-based methods, where the artifacts generated by AI are always identifiable as such. This approach would involve recognizing specific patterns or characteristics inherent to AI-generated content, allowing for consistent identification regardless of the obfuscation techniques applied. Again, this is ongoing research [76, 24]. It is important to note, however, that these potential future developments lie beyond the scope of this paper and even outside the research area of software plagiarism detection. # 7.4 Layering Defense Mechanisms Attack-specific defense mechanisms are highly effective, as they can be tailored with strong assumptions about specific obfuscation techniques in mind. For their targeted obfuscation attacks, attack-specific mechanisms outperform attack-independent approaches. This is evident in the case of token sequence normalization in Figure 5, where the defense mechanism fully separates plagiarism pairs from original pairs, completely outperforming subsequence match merging. However, attack-specific mechanisms mostly focus on a single known obfuscation attack type. 
Multiple attack-specific mechanisms must be combined to achieve broad resilience. Additionally, attack-specific mechanisms can only be designed for known attacks and may not be equipped to handle emerging threats, as they rely on assumptions that may not hold for unknown obfuscation techniques. Attack-independent mechanisms, such as subsequence match merging, make fewer assumptions about the obfuscation techniques in use. Thus, they provide less resilience against any given obfuscation attack. Their strength, however, lies in providing broad resilience. Throughout our evaluation, we observed that subsequence match merging consistently offered resilience across a variety of obfuscation attacks. Because of its heuristic nature and the fact that it operates at a high level of abstraction, it can provide some resilience against unknown and emerging obfuscation attacks. Attack-independent approaches are essential for defending against emerging threats. Since they make fewer assumptions about the nature of incoming obfuscation attacks, they offer a level of protection against unknown attacks that attack-specific mechanisms may not. The ideal solution is to layer multiple defense mechanisms, combining both attack-specific and attack-independent ones. This strategy provides targeted resilience against well-known or highly effective obfuscation attacks while also offering broad protection against unknown or emerging techniques. Layering multiple defenses is a well-established strategy in information security and risk assessment, often referred to as the Swiss cheese model [55] or defense in depth [66, 36, 4]. When using this layered approach, it is critical to ensure compatibility between defense mechanisms to avoid unintended side effects that could reduce overall obfuscation resilience or detection quality. 
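As a concrete illustration of this layered strategy, the sketch below combines several similarity-scoring mechanisms and flags a program pair if any layer considers it suspicious. All names and the max-combination rule are illustrative assumptions, not JPlag's actual API:

```python
# Illustrative sketch of layering defense mechanisms: each mechanism maps a
# program pair to a similarity score in [0, 1]; the layered detector takes
# the maximum, so a pair is flagged if ANY layer finds it suspicious.
from typing import Callable, List, Tuple

Mechanism = Callable[[str, str], float]  # (program_a, program_b) -> similarity

def layered_similarity(pair: Tuple[str, str],
                       mechanisms: List[Mechanism]) -> float:
    """Combine attack-specific and attack-independent layers via max."""
    return max(m(*pair) for m in mechanisms)

def flag_pairs(pairs, mechanisms, threshold=0.8):
    """Return all pairs whose layered similarity meets the threshold."""
    return [p for p in pairs if layered_similarity(p, mechanisms) >= threshold]
```

Taking the maximum means an attack must evade every layer simultaneously; a real system would additionally weight layers or report per-layer evidence.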
In the context of software plagiarism detection, it is beneficial to allow users to enable or disable individual defense mechanisms depending on their needs or to mitigate potential side effects. Our defense mechanisms are designed to be minimally intrusive, enabling them to be layered with other approaches. In our evaluation, we examine the combination of defense mechanisms to check for adverse side effects. While in some cases individual mechanisms outperformed the combination, the difference from the second-best mechanism is always negligible. Therefore, our mechanisms can be safely used in a layered defense strategy. # 8 Related Work This section discusses research from areas intersecting with this paper’s contributions. # 8.1 Software Plagiarism Detection Systems Despite its early roots [47], research in software plagiarism detection has seen a resurgence in recent years [45]. Most software plagiarism detection approaches compare the structure of the code [43, 45]; among them, token-based approaches are the most popular tools employed in practice. JPlag [53] and MOSS [2] are the most widely used tools [5, 45]. Furthermore, JPlag is the tool most frequently referenced and compared against [45]. Other frequently mentioned tools are Sherlock [25] and SIM [22]; however, they are partially outdated or no longer maintained. Dolos [39] is a more recent tool inspired by MOSS and JPlag but currently supports only single-file programs, which limits its applicability. All mentioned approaches are token-based and find matching fragments via hashing and tiling [54, 2]. Some recent approaches also employ machine learning for plagiarism detection [19]. We use JPlag as a baseline in our research; however, our approaches extend to any token-based detector and can be generalized to structure-based methods. The mentioned works evaluate their tools with manually obfuscated plagiarism. We specifically focus on evaluating automated obfuscation. 
# 8.2 Obfuscation Attacks and Their Mitigation Obfuscation attacks present a significant challenge for software plagiarism detection. While obfuscation has long been a concern [75, 27, 45], research on defending existing state-of-the-art plagiarism detection tools against automated obfuscation is limited. Most recent studies focus on developing entirely new detection systems that often remain inaccessible to the public, as noted by Novak et al. [45]. Research on mitigating obfuscation usually focuses on manual obfuscation. In the following, we discuss notable exceptions. Devore-McDonald and Berger [18] introduce MOSSad, a tool that uses genetic programming techniques to automatically generate semantically equivalent but undetectable plagiarized code variants, defeating detectors like MOSS and JPlag. Its nondeterministic transformations mimic authentic student submissions. Biderman and Raff [6] show that language models like GPT-J can produce correct, syntactically diverse solutions that evade MOSS detection with minimal human input, raising concerns for academic integrity as AI tools become more accessible. On the defense side, Karnalim et al. [29] evaluate 16 preprocessing techniques for source code similarity detection, finding that methods like identifier removal and syntax tree linearization improve detection effectiveness. However, such techniques offer limited resilience against broader obfuscation strategies. These works typically focus on individual obfuscation attacks. In contrast, our work addresses a broader spectrum of automated obfuscation strategies and shifts the focus from attack feasibility to evaluating concrete defense mechanisms. # 8.3 Generative AI in Programming Education Chen et al. [10] investigate the impact of generative AI on academic integrity in an introductory programming course. They show that suspected plagiarism increased and shifted from traditional sources to AI tools. 
The results of their regression suggest that increased plagiarism may lead to decreased learning outcomes. In contrast to these results, other studies observe no difference. Xue et al. [73] conduct a controlled study on ChatGPT’s impact in CS1 programming education with 56 participants. Their results showed no significant difference in learning outcomes between groups. Most students held neutral views but expressed concerns about ethical issues and ChatGPT’s inconsistent results. Choudhuri et al. [11] explore the impact of conversational generative AI on supporting students in software engineering tasks. Their study with 22 participants found no significant difference in productivity or self-efficacy compared to traditional resources, but noted significantly higher frustration levels. Karnalim [28] investigates student perceptions of AI-assisted plagiarism in programming education by comparing it to traditional plagiarism scenarios. Based on survey responses from 66 introductory and intermediate programming students, the study finds that students view AI assistance as morally comparable to help from peers. The study suggests that student awareness and interpretation of AI-assisted plagiarism vary by experience level. Cipriano and Alves [12] examine the performance of large language models in object-oriented programming (OOP) exercises. Using real-world educational tasks and automatic assessment tools, they found that while LLMs often produce working solutions, they frequently neglect OOP best practices. The study highlights the need to emphasize code quality in programming education. Generative AI is becoming an integral part of programming education, and as educators, we have to deal with its impact. For this reason, this paper specifically investigates AI-based obfuscation. # 8.4 Detecting AI-Generated Code Karnalim et al. 
[31] propose a lightweight AI-assisted code detector based on code anomaly features, which uses 34 features spanning various program elements to identify unusual patterns that may indicate AI assistance. Evaluated across three datasets, their approach shows promising results. However, the detection effectiveness drops significantly when students collaborate or use AI only partially. Orenstrakh et al. [46] evaluate the effectiveness of eight publicly available detectors for identifying LLM-generated content. They collected 124 human-written student submissions from before ChatGPT and compared them with 40 ChatGPT-generated samples. They find that detection accuracy drops significantly for programming code, non-English text, and content modified with paraphrasing tools, highlighting current limitations of such detectors. Similarly, Suh et al. [67] investigate the challenge of detecting AI-generated code, noting that current detection tools perform poorly and lack generalizability. To address this, they propose enhanced approaches such as fine-tuning LLMs and using machine learning classifiers with static code metrics or AST-based embeddings. Their best model outperforms GPTSniffer, achieving an F1 score of 82.55. Moreover, Pan et al. [49] examine the ability of ChatGPT to evade detectors for AI-generated content in programming education. Using a dataset of 5,069 human-written Python solutions, they prompted ChatGPT with 13 code problem variants and evaluated five detectors. Results show that current detectors struggle to reliably distinguish AI-generated code from human-written code. Given that AI code detectors are unreliable, it is especially relevant that software plagiarism detectors can produce suspiciously high similarity values for AI-generated programs produced by the same model. We analyze this effect in our evaluation. # 8.5 Clone Detection Reusing source code via copying commonly leads to code clones [57], which impede modern software development [26]. 
Code clones are created accidentally [26], while plagiarism is a deliberate act. While both clone detection and plagiarism detection are software similarity problems [45], they ultimately differ in many aspects [41]. In particular, code clone detection does not consider scenarios where an adversary attempts to affect the process, as code clones typically arise inadvertently [26]. As a consequence, clone detectors are vulnerable to obfuscation attacks. Plagiarism detection approaches must deal with an additional layer of complexity introduced by the adversary-defender scenario [60]. Still, many clone detection approaches share similarities in their employed techniques [69]. In summary, while clone detection is a related field, these works are not directly applicable to automated obfuscation.
Plagiarism in programming assignments is a persistent issue in computer science education, increasingly complicated by the emergence of automated obfuscation attacks. While software plagiarism detectors are widely used to identify suspicious similarities at scale and are resilient to simple obfuscation techniques, they are vulnerable to advanced obfuscation based on structural modification of program code that preserves the original program behavior. While different defense mechanisms have been proposed to increase resilience against these attacks, their current evaluation is limited to the scope of attacks used and lacks a comprehensive investigation regarding AI-based obfuscation. In this paper, we investigate the resilience of these defense mechanisms against a broad range of automated obfuscation attacks, including both algorithmic and AI-generated methods, and for a wide variety of real-world datasets. We evaluate the improvements of two defense mechanisms over the plagiarism detector JPlag across over four million pairwise program comparisons. Our results show significant improvements in detecting obfuscated plagiarism instances, and we observe an improved detection of AI-generated programs, even though the defense mechanisms are not designed for this use case. Based on our findings, we provide an in-depth discussion of their broader implications for academic integrity and the role of AI in education.
# 1. Introduction Deep generative models produce realistic, high-quality content, and are seeing increasing integration into the creative processes of artists. However, such models tend not to be designed for the demands of live scenarios such as interactive improvisation, which requires anticipation of others’ intentions and adaptation to mistakes and stylistic choices. This paper introduces ReaLchords, a generative model tailored for online adaptive musical accompaniment. Emulating the spontaneity of live music jamming, ReaLchords generates chord accompaniments in response to a stream of monophonic melody notes, adapting on-the-fly to the unfolding musical narrative. Each chord must be generated without knowing in advance which melody note it will accompany. This simultaneous interaction imposes a conditional independence assumption on the joint generative process that an online model must respect. Moreover, a model must be able to gracefully handle unfamiliar situations and unexpected changes. Likelihood models, however, suffer from exposure bias due to being trained entirely on ground-truth data, and transfer poorly to online settings where mistakes, imperfections and stylistic differences are common (see Figure 1 for an example). To address this, we use RL finetuning to improve the model with respect to reward models that consider musical coherence (§3.2). These reward models see the entire composition and evaluate its musical coherence from various perspectives (§3.3). Our setup bears similarities to RLHF (Ouyang et al., 2022; Jaques et al., 2019) and RLAIF (Bai et al., 2022; Lee et al., 2023); however, our reward models are trained through self-supervision rather than human labeling. Finally, we combine RL finetuning with knowledge distillation (Agarwal et al., 2023; Zhou et al., 2023) in a novel way, distilling from a teacher that can see the future into a student that cannot, hence forcing anticipation (§3.4). 
We develop key algorithmic components (Figure 2) needed to produce an online adaptive accompaniment model that is amenable to interactive use. Figure 1. Online models finetuned with RL are able to recover from mistakes, while models trained with MLE alone do not. We take a melody from the test set and midway introduce an abrupt transposition designed to disrupt the accompaniment model (top row). The Online MLE model predicts a bad chord (B7) and fails to adapt. ReaLchords also predicts a bad chord (F♯m), but adapts quickly. Wrong chords highlighted in orange are our own judgment informed by music theory, but the overall pattern is corroborated by an objective measure of harmonic quality, averaged over many trials of this experiment (bottom row). Our contributions and findings are as follows: ♯ We propose ReaLchords, an online accompaniment generation model trained by RL finetuning. Figure 1 shows how ReaLchords adapts to out-of-distribution input, a necessary skill for live jamming. ♯ We leverage knowledge distillation to learn from a non-causal teacher that can see the future (§3.4). Distillation greatly improves the quality of the model, as evidenced by the human evaluation shown in Figure 3. ♯ We further employ a novel set of self-supervised reward models to encourage musical coherence and perceptual quality (§3.3). Based on a human listening test, we show that our reward models align closely with human preferences (Figure 3), despite being trained without human feedback. ♯ We demonstrate through a series of controlled experiments that without RL finetuning, models fail to adapt to mistakes and perturbations (Figure 4, §5.4). ♯ Finally, we analyze the behavior of our models in terms of domain-specific metrics (Table 1, §5.3). We find that each component in our RL finetuning methods improves the rhythmic and harmonic quality of generated accompaniments. # 2. 
Related Work Adaptive music accompaniment systems In contrast to automatic music generation systems, accompaniment systems often take input (such as a melody) from a user and generate output that is meant to be played in synchrony to complement what the user is playing. Some of these systems are asynchronous: the user first provides the full melody, and the system generates an accompaniment offline. Examples include MySong (Simon et al., 2008), where a user sings a melody and the system generates chords to accompany it. Most recently, SingSong (Donahue et al., 2023) supports a very similar interaction, but generates full-band backing tracks. Both are offline systems. In contrast, online accompaniment systems need to synchronize with user actions in real time. Score-following is a special case where the system has the score, the full context of what the musician will play, but still needs to follow along and infer when to play its own part. Music Plus One (Raphael, 2010) adapts its playback speed of an orchestral recording (without the soloist) to a soloist’s expressive performance. Similarly, Antescofo (Cont, 2008) follows where a soloist is in a score and triggers live electronics accordingly. Generative accompaniment systems, or more generally co-creative music systems, must not only anticipate user actions but also learn how to respond. Voyager (Lewis, 2003) takes a rule-based approach to listening, responding, and generating musical material on the fly, while Omax Brothers (Assayag et al., 2006) recombines what a musician plays on-the-fly as an accompaniment but often requires another computer musician to control when it comes in and what section of material to draw from. ImproteK and later DJazz (Nika & Chemillier, 2012; Nika et al., 2017) leverage a shared predefined chord progression (such as a jazz standard) to coordinate the human-machine improvisation. 
Instead of tight synchronization, Spire Muse (Thelle & Pasquier, 2021) serves as a brainstorming partner that retrieves musical responses that are more or less similar depending on whether the user is in a converging or diverging phase of ideation. Recent systems based on deep neural networks have emerged. BachDuet (Benetatos et al., 2020) trains an LSTM model using MLE for counterpoint (melody to bassline) accompaniment. SongDriver (Wang et al., 2022) focuses on online melody-to-chord accompaniment, similar to our work. To address exposure bias, SongDriver employs two MLE-trained models: a transformer model that predicts the current output based on both current and past outputs, and a conditional random field (CRF) model that predicts the current output based on previous context. The CRF model makes online predictions but does not use its own predictions as future context; instead, it relies on the transformer model for context. In contrast, our system ReaLchords learns how to respond in tight synchronization with the user’s melody by first learning interdependencies between melody and accompaniment from existing songs, and then using RL to tune the models to respond in an adaptive fashion. RL finetuning for generative models Reinforcement learning (RL) finetuning has proven effective in aligning language models with human preferences (Ouyang et al., 2022; Jaques et al., 2019) and constraints (Jaques et al., 2017), which are often unaddressed in generative pretraining. In some cases, RL finetuning has been applied to enhance music generation models (Jaques et al., 2017; Jiang et al., 2020b). Most closely related to our work is RL-Duet (Jiang et al., 2020b), which considers a similar online generation setting, namely a duet between a user and an agent, both of them playing each note without knowing what the other will play. Our work provides several contributions over RL-Duet. 
First, RL-Duet is trained on Bach Chorales, a small dataset of approximately 400 songs following strict rules of counterpoint composition in the style of a particular composer. In contrast, our models are trained on the diverse Hooktheory dataset of 38,000 popular songs from a wide array of artists. To enable effective learning at this scale, we develop novel multi-scale contrastive and discriminative reward models, and also propose a new knowledge distillation technique specifically geared toward the online generation setting. Finally, the RL-Duet experiments are limited to the setting in which the RL model is primed with the first few ground-truth notes of the accompaniment, an unrealistic assumption for real-time collaborative jamming. As we will show in §5.4, our methods are able to begin jamming with the user’s melody within a few beats, and adapt to sudden perturbations in the key. Our work is related to the emerging literature on Reinforcement Learning from AI Feedback (RLAIF) (Saleh et al., 2020; Bai et al., 2022; Lee et al., 2023), which mitigates the need for extensive human labeling by utilizing an AI assistant for feedback generation. We use this strategy to finetune online music language models, using an MLE model to obtain a learning signal. Recently, Agarwal et al. (2023) have shown that adding a distillation objective between the policy and a larger teacher model during RL finetuning further improves performance. ReaLchords employs a novel knowledge distillation objective between the online policy and an offline model that can see future context, bridging the gap between online improvisational capabilities and offline musical coherence. # 3. Online Musical Accompaniment We seek a generative model that can be used for interactive music accompaniment, where a user plays a melody and the model simultaneously plays chords to support it. 
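The online constraint just described — each chord must be emitted before the current melody note is revealed — can be sketched as a simple loop. The `chord_model` callable here is a hypothetical stand-in for the learned policy, not the paper's actual model:

```python
def accompany(melody, chord_model):
    """Generate one chord per melody token, online.

    At step t the model sees only x_<t (melody heard so far) and
    y_<t (its own previous chords), never the current melody token.
    """
    chords = []
    for t in range(len(melody)):
        past_melody = melody[:t]   # x_<t
        past_chords = chords[:t]   # y_<t
        chords.append(chord_model(past_melody, past_chords))
        # only now is melody[t] revealed; it sounds together with chords[t]
    return chords
```

Because `chord_model` never receives `melody[t]`, any policy plugged into this loop respects the simultaneity constraint; an offline model, by contrast, would be handed the full `melody` list up front.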
Accompaniment is a special case of the general setting in which two agents generate a joint sequence $(x_1, y_1), \dots, (x_T, y_T)$ in chronological order. At each step $t$, agents observe the historical material $x_{<t}, y_{<t}$, and simultaneously emit the next pair of tokens $x_t, y_t$. Simultaneity imposes a conditional independence on the generative process: $$ \Pr(x_t, y_t \mid x_{<t}, y_{<t}) = \Pr(x_t \mid x_{<t}, y_{<t}) \, \Pr(y_t \mid x_{<t}, y_{<t}). $$ In this general setting, the melody $x$ and chords $y$ interdepend through the conditioning on the shared history $x_{<t}, y_{<t}$; this corresponds to musicians adapting to each other as they play. As a first step, we consider the specific setting where the chords do not influence the melody; now one player leads and the other follows. We call this accompaniment. We approach this problem by constructing a model $\pi_\theta$ that generates accompaniment $y$ according to a specific autoregressive process: $$ \pi_\theta(y \mid x) = \prod_t \pi_\theta(y_t \mid x_{<t}, y_{<t}). $$ While our goal at each timestep $t$ is to predict a chord $y_t$ that supports the melody token $x_t$ about to be played, the model’s prediction of $y_t$ does not depend on $x_t$. This is crucial, as it allows the model to be used online as desired. We train this model in two steps: pretraining on data (§3.1), followed by finetuning using reinforcement learning (§3.2). In the rest of this section, we first describe the general approach, and then detail the components involved (reward models §3.3, distillation §3.4, and regularizations §3.5). # 3.1. 
Maximum Likelihood Pretraining The first step in training $\pi_\theta$ is to apply MLE, maximizing the data likelihood with respect to $\theta$: $$ \max_\theta \; \mathbb{E}_{x, y \sim p_{\mathrm{data}}} \log \pi_\theta(y \mid x). $$ Figure 2. ReaLchords leverages RL finetuning to learn anticipation and adaptation for online melody-to-chord accompaniment. Initializing from a model $\pi_\theta$ pretrained by MLE, the policy generates a complete chord response to a melody from the dataset, each chord being predicted given only previous melody and chords (top left). In contrast, the offline model $\phi_\omega$ (also trained by MLE) predicts each chord given the complete melody (bottom left). A KL-divergence penalty distills the predictions of the offline model into the online model, improving its ability to anticipate the future. (Right) The reward stems from an ensemble of multi-scale contrastive and discriminative models that evaluate the musical coherence between melody and chords. The final training objective in ReaLchords is a sum of the reward and the distillation loss (center). The data distribution $p_{\mathrm{data}}$ can be interpreted as standing in for $p_{\mathrm{user}}$: we simulate user play by sampling fixed melodies from the dataset. This limits our ability to encourage and assess the model’s ability to adapt to out-of-distribution melodies. Nevertheless, the model will still encounter out-of-distribution combinations of melodies and chords during inference. Unfortunately, an online accompaniment model trained only with MLE suffers from exposure bias (Arora et al., 2022): during training, the model is always conditioned on ground-truth context, but this is not the case during inference. Consequently, MLE models struggle to learn two skills required in online accompaniment (Jiang et al., 2020b;a). 
First, the model must anticipate what the user is going to play, in order to ensure that its own output agrees with that of the user. Second, the model must be able to adapt to and recover from unexpected input, whether caused by its own mistakes, the user’s mistakes, misanticipation, or user idiosyncrasies. As a concrete example, Figure 1 shows a failure mode of the online MLE model. The model fails to adequately anticipate future inputs, leading to exposure bias and error accumulation due to a distribution mismatch between training and inference. Whenever the first few timesteps of output do not fit the melody input stream, the model continues its own chord progression, ignoring the input. # 3.2. Finetuning using Reinforcement Learning Similar challenges are encountered in imitation learning (Ross & Bagnell, 2010), where policies trained by MLE to reproduce expert demonstrations are brittle and fail to transfer to the real environment (see e.g. Reddy et al. (2019)). A rich history of work has demonstrated reinforcement learning (RL) finetuning to be an effective remedy. We begin by initializing the weights of our RL policy $\pi_\theta$ with those of the pretrained online MLE model. As in eq. 1, at timestep $t$, the policy predicts a probability distribution over actions $a_t = y_t$ given state $s_t = (x_{<t}, y_{<t})$. Then, we adopt an RL finetuning methodology similar to the popular RLHF (RL from Human Feedback) framework used for language models (Ouyang et al., 2022; Jaques et al., 2019). Namely, in addition to maximizing RL rewards $R(x, y)$, we minimize KL-divergence from a pretrained MLE anchor model $\phi_\omega(y \mid x)$ parameterized by $\omega$, as proposed in Jaques et al. (2017). Let $x$ and $y$ represent the full melody and chord sequence, each consisting of several tokens (i.e. the full trajectory). 
This gives us the KL-regularized RL objective: $$ \max_\theta \; \mathbb{E}_{x \sim p_{\mathrm{data}},\, y \sim \pi_\theta(\cdot \mid x)} \big[ R(x, y) \big] - \beta\, D_{KL}\big(\pi_\theta(\cdot \mid x) \,\|\, \phi_\omega(\cdot \mid x)\big). $$ To evaluate (2), we sample a batch of melodies $x$ from the training set, then use the current policy $\pi_\theta$ according to (1) to generate a batch of corresponding harmonies $y$ (Figure 2, top left). We then evaluate the resulting batch of compositions $(x, y)$ according to reward models (§3.3) and regularizers (§3.5) to obtain the reward $R(x, y)$ (Figure 2, top and bottom right). Additionally, we measure $\phi_\omega(y \mid x)$ under the offline model $\phi_\omega$ (§3.4) in order to compute the KL term (Figure 2, bottom left). Finally, we update the model according to (2), using REINFORCE with a separate value model serving as a baseline estimate for improved stability (Lee et al., 2023; Agarwal et al., 2023). The separate value model is also initialized from the pretrained online MLE model and is trained to estimate the total return, using the mean squared error between the estimated and total return as the objective. Unlike in RLHF (Ouyang et al., 2022) and RLAIF (Bai et al., 2022), our reward models are not trained on preference labels from either human or machine labelers. Instead, they are trained using positive and negative melody-chord pairs constructed from a dataset (see Figure 2, §3.3). Nevertheless, a listening test (§5.1) shows that our reward models align well with human preferences, as shown in Figure 3. # 3.3. Reward Models We develop a novel ensemble of reward models that evaluates the coherence between input (melody) and output (chord) tracks. 
We implement two types of coherence evaluation reward models, contrastive and discriminative, each with different inductive biases. Reward model training and architectural details can be found in Appendix §F and §G. The contrastive reward model consists of a melody encoder and a chord encoder, which respectively map the melody $x$ and chord $y$ to embedding vectors $E_x, E_y$. The encoders are trained in an unsupervised manner using the InfoNCE loss (Oord et al., 2018; Radford et al., 2021) applied to positive and negative samples created from the dataset. As shown in Figure 2, positive pairs are defined as melody-chord pairs from the same song, and negative pairs are created by randomly pairing melody and chord from different songs. The InfoNCE loss essentially maximizes the cosine similarity for positive pairs, and minimizes it for negative pairs. The reward for a given pair $x, y$ is the cosine similarity of $E_x$ and $E_y$. The discriminative reward model looks at the entire generated pair $(x, y)$ as a whole. This model is trained in an unsupervised manner to discriminate between “real” melody-chord pairs and randomly paired melodies and chords. Each training batch case provides a positive pair and, by combining its melody with the chords from another randomly chosen batch case, a negative pair. Once trained, the model produces a probability of $(x, y)$ being “real”, which directly serves as the reward. Due to the bottleneck on the embedding vectors $E_x, E_y$, the contrastive models focus on global coherence. The discriminative models, on the other hand, are able to evaluate temporally local coherence. Indeed, our experiments in §5.3 show that contrastive reward models promote mainly harmonic quality whereas discriminative reward models encourage mainly synchronization.
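The contrastive objective above can be sketched in a few lines; this is a minimal pure-Python illustration of an InfoNCE-style loss and the cosine-similarity reward, with toy embedding vectors standing in for the trained encoders (batch construction and hyperparameters are assumptions).

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def info_nce_loss(melody_embs, chord_embs, temperature=0.1):
    """InfoNCE-style loss over a batch of melody/chord embeddings (sketch).
    Pairs sharing an index are positives; every other in-batch pairing acts
    as a negative."""
    n = len(melody_embs)
    loss = 0.0
    for i in range(n):
        logits = [cosine(melody_embs[i], chord_embs[j]) / temperature for j in range(n)]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)  # cross-entropy against the positive index
    return loss / n

def contrastive_reward(melody_emb, chord_emb):
    # Once the encoders are trained, the reward for a pair is simply the
    # cosine similarity of the two embeddings.
    return cosine(melody_emb, chord_emb)
```

With correctly matched pairs the loss is lower than with shuffled pairs, which is exactly the signal that shapes the embedding space during training.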
While these reward models are effective, we find that they can be overly harsh on temporally localized incompatibilities, such as short-lived mistakes that are quickly resolved. To mitigate this and improve temporal credit assignment, we further propose to use an ensemble of multi-scale variants that evaluate isolated fragments without being influenced by distant mistakes. We train multiple contrastive and discriminative reward models on fragments of different lengths ($\{\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{16}\}$ of the maximum sequence length 256). During finetuning, we apply these models to sliding windows ($50\%$ overlap) of the example. # 3.4. Distillation from Offline Teacher Model As stated in §3.2, during RL finetuning we penalize KL-divergence from a model pretrained on the data distribution to ensure the model maintains realistic outputs while maximizing rewards (Jaques et al., 2017). However, unlike in typical RL finetuning, the online MLE model with which our policy is initialized suffers from a lack of robustness to out-of-distribution data, and as such is not an ideal anchor for use with the KL-regularization term. Agarwal et al. (2023) demonstrated how the KL penalty can be used not just to avoid diverging from a checkpoint, but also to distill knowledge from a larger teacher model. We take this idea one step further and distill knowledge from an offline model that can see the future of the melody. The offline model $\phi$ is trained with MLE to autoregressively predict chords given the full melody $x$: $$ \phi_\omega(y \mid x) = \prod_t \phi_\omega(y_t \mid x, y_{<t}). $$ In traditional knowledge distillation, ground truth data is used to obtain the predictions of both the teacher and student models, and a KL loss is then applied to bring the student’s predictions closer to the teacher’s.
Here, the teacher is instead evaluated on samples generated by the current policy. This is a special case of on-policy knowledge distillation (Agarwal et al., 2023; Zhou et al., 2023), which in general allows any mixture of ground truth data, student samples and teacher samples. We tested various on-policy knowledge distillation schedules and found that it works best when driven by the student (§5.3). Thus, during RL finetuning we only train on outputs from the student. # 3.5. Regularization Penalties RL finetuning can lead to pathological behavior, such as repetitions and mode collapse (Jaques et al., 2017; Jiang et al., 2020b). We introduce three regularization penalties to discourage specific failure modes: Repetition: Inspired by repetition penalties used for training language models (as in Saleh et al. (2020); Jaques et al. (2019)), we impose a penalty for chords that are held for too long. Silences: We impose a penalty for silences beyond the beginning of a phrase. Ending early: A penalty is imposed for early end-of-sequence (EOS) tokens. See §D for an ablation that shows the need for these penalties. Figure 3. Our reward models are aligned with human preferences. We carried out a listening test (§5.1) to evaluate the quality of our models. The online MLE model performs poorly, but is greatly improved by distillation from the offline MLE model. Our proposed systems ReaLchords and ReaLchords-M improve further thanks to RL finetuning. The rewards given by both the contrastive and discriminative reward models are strongly correlated with human evaluations. # 4. Dataset We train our models on an updated version of the Hooktheory dataset (Donahue et al., 2022), which comprises crowdsourced analyses of monophonic melodies and chords from recordings and now contains 38K melody-chord pairs. We adopt a frame-based representation where time is quantized to sixteenth notes and each frame is a discrete index.
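The frame-based representation described above might be encoded roughly as follows. The note format, token values, and rest token are illustrative assumptions, not the dataset's actual vocabulary.

```python
def melody_to_frames(notes, frames_per_beat=4, total_frames=256, rest_token=0):
    """Sketch of a frame-based melody encoding: time is quantized to
    sixteenth notes (4 frames per beat) and every frame holds one discrete
    token. `notes` are (start_beat, duration_beats, pitch) triples."""
    frames = [rest_token] * total_frames
    for start_beat, duration_beats, pitch in notes:
        start = int(round(start_beat * frames_per_beat))
        end = int(round((start_beat + duration_beats) * frames_per_beat))
        for t in range(start, min(end, total_frames)):
            frames[t] = pitch + 1  # shift so 0 stays reserved for rests
    return frames

def transpose(notes, semitones):
    """Data augmentation as in the paper: transposition by up to +/- 6
    semitones (applied here at the note level for illustration)."""
    return [(s, d, p + semitones) for s, d, p in notes]
```

For example, a quarter note occupies four consecutive frames, and silence is represented explicitly by the rest token rather than by the absence of an event.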
We set a maximum sequence length of 256 for $x$ and $y$. We augment the data by randomly transposing up or down by up to 6 semitones. $20\%$ of the data is held out and divided equally into validation and test sets. We develop on the validation set and report the test set results in the paper. Please refer to §L for details on the dataset and data representation. # 5. Experiments Is the system capable of producing accompaniments of high musical quality? How swiftly can the system adjust to unfamiliar situations? We address these questions from three directions. To directly assess musical quality, we conduct a human listening test using samples generated from the models (§5.1). We demonstrate adaptation through several controlled generation experiments, tracking the quality of the accompaniment over time (explained in §5.4). Finally, we evaluate the system using heuristic metrics to assess the quality of compositions generated in response to melodies in the test set (detailed in §5.3). The following systems are compared in our experiments: MLE baselines The Online MLE model trained to predict $y_t \mid x_{<t}, y_{<t}$ without seeing $x_t$ (§3.1). The Offline MLE model that sees the full input $x$ and is used as a teacher for knowledge distillation (§3.4). Our proposals These models are trained with both contrastive and discriminative rewards, as well as regularization and knowledge distillation. ReaLchords incorporates the global reward models, whereas ReaLchords-M incorporates the multi-scale variants of both reward models. Ablations The model KD, trained with only knowledge distillation and regularization. Two models trained by MLE and then finetuned using either only the Contrastive (C) reward or only the Discriminative (D) reward, with regularization and KL divergence to the MLE checkpoint.
A model $\mathbf{C+D}$ using both contrastive and discriminative rewards, with regularization and KL divergence to the online MLE checkpoint. # 5.1. Human and Machine Evaluation on Musicality Any measure of the quality of a musical piece must ultimately be grounded in human preferences. We carry out a listening test to evaluate four systems: the Online MLE baseline, KD, ReaLchords and ReaLchords-M. In the listening test, participants are presented with 8-second audio clips from two different systems, and asked to rate which one sounded more musical on a 5-point Likert scale. We recruited ten musicians, and collected 192 ratings, with each system involved in 96 pairwise comparisons (see §J for more details). Figure 3 (top) shows the number of wins for each system. We ran a Kruskal-Wallis H test and confirmed that there are statistically significant differences among the pairwise comparisons. According to a post-hoc analysis using the Wilcoxon signed-rank test with Bonferroni correction (with $p < 0.05/6$ as there are 6 pairs of systems), we found the following statistically significant results: All systems outperformed the Online MLE baseline. Also, the fully-fledged systems ReaLchords and ReaLchords-M outperformed distillation alone (KD). While ReaLchords-M appears to outperform ReaLchords, this comparison is not significant. Table 1. Effect of RL finetuning with our reward models and knowledge distillation on harmonic, synchronization and rhythmic diversity metrics. Each number is an average over a large number of accompaniments to test set melodies. In rows 2-7, we report $95\%$ confidence intervals of metric values over 3 RL finetunings, each with different random seeds. Overall, the results from the listening test show that distillation alone (KD) accounts for a large improvement in perceptual quality. The reward models agree with this assessment, even though KD does not directly optimize for these rewards.
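The Bonferroni-corrected decision rule used in the post-hoc analysis is simple to state in code. The p-values below are illustrative placeholders, not the study's data.

```python
def bonferroni_significant(pairwise_p, alpha=0.05):
    """Bonferroni correction over a family of pairwise tests: with 6 system
    pairs, a comparison is significant only if its p-value falls below
    alpha / 6 (sketch of the decision rule, not the full Wilcoxon test)."""
    threshold = alpha / len(pairwise_p)
    return {pair: p < threshold for pair, p in pairwise_p.items()}
```

With 6 pairs and alpha = 0.05, the per-comparison threshold is 0.05 / 6, approximately 0.0083, so a raw p-value of 0.02 would not count as significant.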
In general, we find that the rewards given by our self-supervised reward models (Figure 3, middle and bottom) correlate strongly with human preferences, which justifies their use in lieu of human feedback. # 5.2. Quantitative Metrics In line with prior research (Jiang et al., 2020b; Yang & Lerch, 2020; Fang et al., 2020), we introduce quantitative metrics to evaluate the quality of accompaniments: Harmonic quality We measure harmonic quality by the note-in-chord ratio, which is the fraction of time that the melody’s pitch class occurs in the chord. For example, if the melody token $x_t$ is a C and the chord $y_t$ is F minor, then the note-in-chord ratio at time $t$ equals 1. We average this metric across time $t$ and across all generated compositions $x, y$ to obtain the overall note-in-chord ratio for the model in question. Synchronization To gauge temporal synchronization between melody and chord progression, we look at the chord-to-note onset interval, which is the length of time between the onset of a chord and the onset of the nearest preceding melody note. The synchronization of a model can be judged by comparing this quantity’s distribution on the test set versus on the output of the model. Whereas Jiang et al. (2020b) compare averages of this quantity, we propose to compare the full distributions using Earth Mover’s Distance (EMD) on histograms of chord-to-note onset intervals. Rhythmic diversity We examine the distribution of durations of generated chords to assess overall rhythmic behavior. The entropy of this distribution measures rhythmic diversity. # 5.3. Quantitative Evaluation Results We evaluate each model based on a large number of accompaniments to test set melodies. The average metrics are reported in Table 1. MLE baselines The behavior of Offline MLE is closest to that of the test set, as is expected due to its ability to see the future input.
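The three metrics defined above can be sketched directly; the per-frame pitch-class representation is an assumption for illustration, and the 1-D EMD uses the standard cumulative-difference formula for histograms on identical bins.

```python
import math

def note_in_chord_ratio(melody_pcs, chord_pcs):
    """Fraction of frames whose melody pitch class is contained in the
    simultaneous chord's pitch-class set (sketch of the harmonic metric)."""
    hits = sum(1 for m, c in zip(melody_pcs, chord_pcs) if m in c)
    return hits / len(melody_pcs)

def emd_1d(hist_p, hist_q):
    """Earth Mover's Distance between two 1-D histograms with identical
    bins: the sum of absolute differences between their cumulative sums."""
    cum_p = cum_q = total = 0.0
    for p, q in zip(hist_p, hist_q):
        cum_p += p
        cum_q += q
        total += abs(cum_p - cum_q)
    return total

def duration_entropy(duration_counts):
    """Entropy (in nats) of the chord-duration distribution, used as the
    rhythmic diversity measure."""
    n = sum(duration_counts)
    return -sum((c / n) * math.log(c / n) for c in duration_counts if c > 0)
```

The paper's worked example holds: a melody C (pitch class 0) over an F minor chord ({F, Ab, C} = {5, 8, 0}) scores 1 at that frame.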
Online MLE exhibits poor harmonic and temporal coordination with the melody, which suggests that it produces chords without paying attention to the melody. Distillation On-policy knowledge distillation (KD in Table 1) significantly enhances online generation, particularly with regard to harmony and synchronization. Distillation from the offline teacher suppresses the probability of chords that match poorly with the future melody, forcing the student to anticipate the immediate future. The student (KD) learns to produce outputs that align better with the input context while retaining a causal conditioning structure. Reward models Training with contrastive and discriminative reward models, individually (C, D) and combined ($\mathbf{C+D}$), shows distinct improvements in harmony and synchronization. The use of the contrastive reward model (C) improves more on harmony, presumably because it compresses the entire melody and the entire accompaniment separately and merges them only in the final cosine similarity. The use of the discriminative reward model (D) improves more on synchronization, while having worse harmony. This is expected, as the classification is biased towards direct comparison. Figure 4. Comparing the quality of overall accompaniment as a function of the number of beats generated, in three scenarios of increasing difficulty (§5.4). Quality is measured by note-in-chord ratio. (a) Accompaniment quality when primed with ground truth: priming the online model with ground-truth context (8 beats in this case) results in comparable performance between models. (b) Accompaniment quality after a cold start and (c) accompaniment quality when perturbed midway: ReaLchords and ReaLchords-M recover from cold starts and perturbation, while the online model does not.
We further examine the bias of different reward models by plotting reward values against harmonic perturbations, where varying portions of chords in the test set are replaced with random alternatives. As shown in Figure 6 in §G, the contrastive model is more sensitive to harmonic perturbations. The blend of both rewards in ReaLchords offers enhancement in both metrics. Additional experiments applying RL finetuning with ensembles of the same type of reward model show similar metric improvements, as presented in Table 6 in §I. This suggests that the observed metric enhancements may result from both the combined biases and the ensemble of reward models. Combining rewards with distillation Integrating both reward models with knowledge distillation yields better harmony but less rhythmic diversity (ReaLchords in Table 1). This indicates a tendency of the model to opt for ‘safer’ chord progressions, presumably resulting from satisfying both reward maximization and knowledge distillation. This is further validated in Figure 8 in Appendix §M, where we visualize the chord length histograms and find that this model tends to hold chords for 2 or 4 beats. Multi-scale reward models ReaLchords-M further improves harmonic quality thanks to the locally isolated rewards from the multi-scale variants of our reward models. This aligns well with the findings in Figure 3. # 5.4. Adaptation Dynamics To study the temporal dynamics of model adaptation to unknown input, we measure accompaniment quality as a function of the number of beats generated. A beat is 4 frames, totaling a quarter note in length. We compare the dataset and four models: Online MLE, Offline MLE, ReaLchords and ReaLchords-M. We report harmonic quality in terms of note-in-chord ratio. In all experiments, we draw melodies from the test set and let the models generate accompaniment.
However, we consider three different scenarios with different interventions: priming, cold start, and perturbation. Priming (Figure 4a): We start with the setting of RL-Duet (Jiang et al., 2020b), where the models are primed with several beats of ground truth chords before generating their own chords. This avoids the cold-start problem of predicting a chord without knowing anything about the melody, and gives an indication of the model’s ability to anticipate what happens next without having to first adapt to what happened previously. Behavior is similar across the models. For reference, we also plot the cold-start behavior of Online MLE (without priming), which is significantly worse. We argue that, as a benchmark for online accompaniment, primed generation is unnatural and too facile. Cold start (Figure 4b): We now proceed to the cold-start setting, which is more natural and more difficult. Here, models predict chords immediately and have to adapt to the resulting melody-chord combinations, which are usually wrong and outside of the data distribution. The Online MLE struggles to adapt to its own past chords, and never gets close to the primed behavior. ReaLchords and ReaLchords-M quickly overcome their own mistakes and play as well as if they were primed. Perturbation (Figure 4c): Finally, we introduce a deliberate perturbation in the middle of the generation process, to demonstrate the ability of our systems to recover from serious errors. We transpose the melody up by a tritone (6 semitones) at beat 17, resulting in both an out-of-distribution melody and almost guaranteeing that the next chord is a poor fit. This is similar to the push test in legged robotics. The Online MLE fails the test: it exhibits a drop in harmonic quality and never recovers. ReaLchords and ReaLchords-M quickly adapt to the new key and recover their previous performance. Overall, these results confirm that Online MLE suffers from exposure bias due to only being trained on ground-truth data.
This brittleness, or inability to produce reasonable output given out-of-distribution (OOD) input not covered by the training data, is similar to the failures exhibited by imitation learning or behavior cloning methods in traditional RL contexts (Reddy et al., 2019), which also rely purely on supervised learning. Our systems ReaLchords and ReaLchords-M quickly recover from both cold-start situations and mid-song disturbances, which highlights their ability to follow along with a user as they explore ideas. Thus, ReaLchords solves a critical requirement for an online accompaniment system, which must accompany a diverse range of human users who are likely to play novel melodies not covered by the training data and to change what they play midway. Finally, we find an interesting emergent behavior due to RL finetuning, where models hold off on playing chords initially, preferring instead to wait for more information about the melody. This wait-and-see behavior is also visible in Figure 1, and is examined further in Appendix §A. Similar behavior occurs in human performers, who often wait for several bars when improvising with an unfamiliar player.
Jamming requires coordination, anticipation, and collaborative creativity between musicians. Current generative models of music produce expressive output but are not able to generate in an \emph{online} manner, meaning simultaneously with other musicians (human or otherwise). We propose ReaLchords, an online generative model for improvising chord accompaniment to user melody. We start with an online model pretrained by maximum likelihood, and use reinforcement learning to finetune the model for online use. The finetuning objective leverages both a novel reward model that provides feedback on harmonic and temporal coherency between melody and chord, and a divergence term that implements a novel type of distillation from a teacher model that can see the future melody. Through quantitative experiments and listening tests, we demonstrate that the resulting model adapts well to unfamiliar input and produces fitting accompaniment. ReaLchords opens the door to live jamming, as well as simultaneous co-creation in other modalities.
[ "cs.SD", "cs.AI" ]
# 1 Introduction Tropical cyclones (TCs, also known as hurricanes or typhoons) have been the most damaging single form of weather-related natural disaster globally in terms of both loss of life and economic damage in recent decades (World Meteorological Organization, 2021). In the United States (U.S.), storm surge has been the leading cause of deaths directly attributable to TCs, making up nearly as many deaths as all other TC-induced hazards combined (Rappaport, 2014). With the growing density of people and assets in coastal regions (Klotzbach et al., 2018), paired with increasing TC intensities in the future (Knutson et al., 2020), the threat posed by TC-induced storm surge may only grow more pronounced. However, deadly storm surge is such a rare occurrence that the historical record is not an accurate measure of the true risk—and certainly does not capture the potential for future changes in TC behavior. While flood risk is most often communicated in terms of the 100-year ($1\%$ yearly exceedance probability) flood, such as in the Federal Emergency Management Agency (FEMA) Flood Insurance Rate Maps, the record of reliable surge observations in the U.S. is at most 150 years (generously, 1880–present (Needham, 2014)). Analyzing the 100-year event as a Bernoulli process suggests that the true 100-year return level has never been met or exceeded in this historical window for $22\%$ of the U.S. coastline. Given insufficient observational records, the other avenue to understanding future storm surge risk is via modeling. While simple parametric models of storm surge exist (Irish et al., 2008; Islam et al., 2021), the physical complexity of the processes involved renders them incapable of sufficiently accurate spatial modeling.
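The Bernoulli-process figure quoted above is easy to verify: if the 100-year level has a 1% chance of being exceeded in any given year, the chance that a 150-year record never contains such an event is $0.99^{150} \approx 0.22$.

```python
def prob_level_unseen(annual_prob=0.01, record_years=150):
    """Probability that a level with the given annual exceedance probability
    is never met or exceeded over a record of the given length, treating
    each year as an independent Bernoulli trial."""
    return (1 - annual_prob) ** record_years
```

The default arguments reproduce the 22% figure; note that a longer record shrinks this probability, while a shorter one inflates it.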
Physical-numerical models such as the Advanced Circulation (ADCIRC) model (Luettich et al., 1992; Pringle et al., 2021), Delft3D (Roelvink & Van Banning, 1995), the Finite-Volume Coastal Ocean Model (FVCOM) (Chen et al., 2003), and the Regional Ocean Modeling System (ROMS) (Shchepetkin & McWilliams, 2005) are capable of highly accurate storm surge (and in some cases inland inundation) modeling. Yet, this accuracy requires a tradeoff with computational efficiency; due to the complex physics involved, these models often simulate fluid dynamics on the timescale of seconds on detailed meshes with many thousands of nodes. The computational expense associated with simulating thousands of years of TC activity is thus immense, and usually precludes a Monte Carlo-style probabilistic uncertainty analysis. Only in the last handful of years have data-driven models (namely, deep neural networks) presented a viable alternative to numerical modeling approaches, as illustrated by recent advances in neural networks for global weather forecasting (Pathak et al., 2022; Bi et al., 2023; Lam et al., 2023; Price et al., 2024) and general circulation modeling (Kochkov et al., 2024), often with orders-of-magnitude speedups in prediction time. Theoretical analysis of these data-driven models indicates that sufficient training and model complexity allows them to implicitly encode the underlying physics of these systems (Rupe et al., 2022). Previous deep learning approaches to storm surge modeling have shown promise, though these past methods have all been spatially inflexible, either by training a model for only a single bay or subregion (Sztobryn, 2003; Lee, 2006; Oliveira et al., 2009; Hashemi et al., 2016; Sahoo & Bhaskaran, 2019; Lee et al., 2021; Xie et al., 2023; Adeli et al., 2023) or by training separate models for each location of interest (Lockwood et al., 2022), thereby foregoing the benefits of learning shared physics.
In contrast, we propose a “point-based” approach, which enables our model to be highly flexible, parallelizable, and capable of learning more complete and generalized physics of storm surge across all locations. Our proposed model, which we call DeepSurge, is a recurrent neural network trained to predict the peak surge level at any given location in the North Atlantic basin, trained and validated on ADCIRC outputs from more than 250 historical TC storm surge events. Although by no means a perfect surrogate for ADCIRC, DeepSurge achieves reasonably skillful out-of-sample accuracy ($81.5\%$ $R^2$, $0.25$ m mean absolute error) and up to a 96x speedup in prediction time compared with ADCIRC. Further, DeepSurge and ADCIRC show comparable skill when validated against independent National Oceanic and Atmospheric Administration (NOAA) tide gauge observations. For a robust quantification of risk, many thousands of TCs are required, necessitating the use of synthetic TC events. For this task, we use the Risk Analysis Framework for Tropical Cyclones (RAFT) (Xu et al., 2024) to generate 900,000 TCs ($\sim$60,000 simulation years) representative of historical and future conditions, to our knowledge an order of magnitude larger than any previous synthetic TC-driven storm surge risk assessment. From these storms, we use DeepSurge to robustly estimate surge heights along the entire U.S. Gulf and Atlantic coastline, and combine them with probabilistic sea-level rise projections and an efficient inland inundation model to characterize the extremes of storm surge flooding risk impacts. # 2 Methods # 2.1 Numerical storm surge simulations A large and varied dataset of historical storm surge data was generated with the ADCIRC model (Luettich Jr. & Westerink, 1991; Luettich et al., 1992; Pringle et al., 2021), utilizing a mesh of 15,467 nodes spanning the North Atlantic basin.
Historical TC data for the North Atlantic was retrieved from the International Best Track Archive for Climate Stewardship (IBTrACS) (Knapp et al., 2010, 2018). Wind forcings were generated with the method described in Emanuel & Rotunno (2011), and pressure forcing with the methodology presented by Holland (2008). Simulations were run at 1-second time steps for the lifetime of each storm, with the maximum water level at each node retained as the training target for our deep learning surrogate model. Further details are provided in Supplementary Section 1. # 2.2 DeepSurge, a deep learning storm surge model We develop a neural network model informed by the physics of the problem, which we call DeepSurge. Because storm surge is the result of complex physical interactions in space (coastal geometry and bathymetry) and time (storm evolution), our model utilizes both convolutional and recurrent neural network components which are known to be skilled in processing these forms of data respectively (O’Shea & Nash, 2015; Hochreiter & Schmidhuber, 1997) and have been successfully utilized in previous storm surge modeling approaches (Adeli et al., 2023; Xie et al., 2023; Giaremis et al., 2024). Our model differs from others in that it operates in a “point-based” manner, meaning it processes each node independently from its neighbors, which allows more flexibility, generalizability, and parallelization than previous approaches. The DeepSurge model ingests a timeseries and spatial maps describing a single node during a storm, and predicts that node’s maximum surge level. The basic structure of the neural network, as detailed in Figure 1, has four stages: 1) encode the timeseries and spatial data separately, 2) convert them to compatible shapes and concatenate them together, 3) apply a Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) layer to understand the temporal development, and 4) decode the features to predict a maximum surge level. 
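The four stages above can be illustrated with a toy, shape-level forward pass. All layer sizes and weights below are random placeholders for illustration, not the published 1.7M-parameter architecture; a simple tanh recurrence stands in for the LSTM and crude pooling stands in for the convolutional encoder.

```python
import numpy as np

def deepsurge_forward_sketch(timeseries, spatial):
    """Toy sketch of the four DeepSurge stages (illustrative only).
    timeseries: (T, I) per-timestep storm/node features.
    spatial: (S, 128, 128) maps (e.g. bathymetry, land-ocean mask)."""
    rng = np.random.default_rng(0)
    T = timeseries.shape[0]
    # 1) Encode the two modalities separately (a random linear map and mean
    #    pooling stand in for the real dense/convolutional encoders).
    ts_enc = timeseries @ rng.standard_normal((timeseries.shape[1], 16))  # (T, 16)
    sp_enc = spatial.reshape(spatial.shape[0], -1).mean(axis=1)           # (S,)
    # 2) Broadcast the spatial code over time and concatenate.
    fused = np.concatenate([ts_enc, np.tile(sp_enc, (T, 1))], axis=1)     # (T, 16+S)
    # 3) A toy recurrence standing in for the LSTM layer.
    h = np.zeros(fused.shape[1])
    for x_t in fused:
        h = np.tanh(0.5 * h + 0.5 * x_t)
    # 4) Decode the final hidden state to a scalar peak-surge prediction.
    return float(h @ rng.standard_normal(h.shape[0]))
```

The point of the sketch is the data flow: per-node timeseries and spatial context are encoded independently, fused along the time axis, summarized recurrently, and decoded to a single peak-surge value.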
The timeseries input to the model consists of a wide variety of variables describing the time evolution of the storm and its relation to the node of interest: storm maximum wind speed, storm radius of maximum wind, distance from node of interest to storm center, direction from node of interest to storm center, storm translation speed, and direction of storm motion. Also included are a few time-invariant scalars: bathymetric depth at the node of interest and estimated slope and direction of the seafloor. All directions and angles are encoded as sine/cosine pairs to avoid discontinuities. The spatial inputs to the model consist of a log-scaled bathymetric map interpolated from the ADCIRC mesh, and a land-ocean mask derived from 300-m resolution European Space Agency global land cover data (European Space Agency, 2017). Both are 128x128 pixel grids at $0.0222^\circ$ resolution centered on the node of interest, resulting in roughly a 300 km diameter receptive field. The network is constructed and trained in TensorFlow/Keras (Martín Abadi et al., 2015), and has a total of 1.7 million parameters. Details of the architecture and training methodology are provided in Supplementary Section 2.2. Figure 1: DeepSurge architecture. Tensor shapes (batch size not included) are given for each arrow, representing that tensor being passed from one layer to the next. In the tensor shapes, $S$ is the number of spatial images, $T$ is the number of timesteps, and $I$ is the number of timeseries inputs. Convolutions are summarized with kernel size $K$ and number of filters $F$, e.g. $K \times K \times F$ for a 2D convolution. # 2.3 Validation & sensitivity analysis of deep learning method DeepSurge demonstrates promising generalization skill on the test set, which consists of 71 storms that the model has never been trained or tuned on.
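The receptive-field diameter quoted above is a quick back-of-the-envelope calculation (this check is ours, not from the paper): 128 pixels at 0.0222 degrees per pixel span about 2.84 degrees, or roughly 300 km at about 111 km per degree of latitude (longitude degrees shrink with latitude, hence "roughly").

```python
def receptive_field_km(pixels=128, deg_per_pixel=0.0222, km_per_degree=111.32):
    """Approximate receptive-field diameter implied by the grid dimensions;
    km_per_degree is the approximate length of one degree of latitude."""
    return pixels * deg_per_pixel * km_per_degree
```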
When compared to the corresponding ADCIRC simulations, DeepSurge achieves an $R^2$ score of $81.5\%$, a mean squared error of $0.224$ m, and a mean absolute error of $0.258$ m in predicting peak surge heights per node. We additionally perform a validation against NOAA tide gauge observations. Although tide gauges are quite accurate under normal conditions, they often fail or malfunction during extreme surge events, which causes weaker events to be sampled more reliably than stronger ones. Overall, DeepSurge shows reasonable skill in capturing these gauge-observed surge peaks, with error metrics ($R^2 = 0.403$, MAE $= 0.474$ m, RMSE $= 0.664$ m) similar to the ADCIRC simulations ($R^2 = 0.427$, MAE $= 0.543$ m, RMSE $= 0.882$ m), and highly significant Pearson and Spearman correlations ($p \ll 0.001$). Possibly due to the negative sampling bias of the gauge observations, both DeepSurge and ADCIRC exhibit similar positive mean biases ($+0.217$ m and $+0.203$ m, respectively). Sensitivity analysis suggests this bias may affect estimated inundation totals on the order of $14\%$. See Supplementary Section 2.4 for further details of the tide gauge comparison and sensitivity analysis. # 2.4 Quantifying storm surge risk with synthetic tropical cyclones Synthetic tropical cyclones are generated with the Risk Analysis Framework for Tropical Cyclones (RAFT) (Xu et al., 2021; Balaguru et al., 2023; Xu et al., 2024), forced by the climate conditions from nine Coupled Model Intercomparison Project Phase 6 (CMIP6) general circulation models (GCMs).
The RAFT synthetic TC method produces realistic and diverse synthetic TC geneses, tracks, and intensities (Xu et al., 2024), and has been used in past studies to assess changes in TC landfall frequency (Balaguru et al., 2023), wind turbine damage (Lipari et al., 2024), and power outage risk (Rice et al., 2025). RAFT is forced with the climate conditions from a historical (1980–2014) and an end-of-century future (2066–2100) period under Shared Socioeconomic Pathway SSP5-8.5 from nine CMIP6 models. We generate 50,000 TCs from the forcings of each of the 18 CMIP6 model-time period pairs, for a total of 900,000 tracks. Quantile delta mapping (Cannon et al., 2015) is applied to the TC intensities to correct for biases in the CMIP6 forcings. The set of synthetic TCs used in this study is identical to those described in and utilized by Lipari et al. (2024) and Rice et al. (2025). For further details on these synthetic storms, see those publications and Supplementary Section 1. For each of these storms, DeepSurge is used to predict peak storm surge levels at 1,100 points along the U.S. Gulf and Atlantic coast. Projected surge levels in the future period are combined with the probabilistic sea-level rise projections from Kopp et al. (2014) to create a joint probability distribution at every coastal location. As in other studies (Gori et al., 2022), surge and sea-level rise are treated as linearly additive, and the future distributions of TC climatology and sea-level rise as independent (which may result in generally conservative estimates of future change (Little et al., 2015)). # 2.5 CA-Surge, a simple inland inundation model Lastly, we evaluate the impacts of modeled extreme water levels with a simple and efficient inundation model, CA-Surge, which puts projected changes in terms of the number of people impacted and enables analysis of sensitivity to various sources of uncertainty.
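The quantile delta mapping step above can be sketched in a few lines. This is a minimal multiplicative variant in the spirit of Cannon et al. (2015); the study's actual implementation, variable handling, and tail treatment may differ.

```python
import numpy as np

def quantile_delta_mapping(model_future, model_hist, obs_hist):
    """Minimal multiplicative quantile delta mapping sketch: each future
    value keeps its quantile within the future model distribution, but is
    rescaled by the observed/modeled ratio at that quantile."""
    model_future = np.asarray(model_future, dtype=float)
    # Empirical quantile of each future value within the future distribution.
    tau = np.array([np.mean(model_future <= v) for v in model_future])
    tau = np.clip(tau, 0.01, 0.99)  # avoid the extreme tails
    obs_q = np.quantile(obs_hist, tau)
    hist_q = np.quantile(model_hist, tau)
    return model_future * obs_q / hist_q
```

If the historical model and observations agree, the correction is the identity; if the model systematically underestimates intensities by a factor, the mapping scales the future values up by that factor at each quantile.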
CA-Surge is a "bathtub-style" model similar to previously published approaches (Strauss et al., 2012; Yunus et al., 2016; Williams & Lück-Vogel, 2020) with the addition of an overland attenuation factor, which accounts for the reduction of surge height as water moves inland due to bottom friction. Attenuation rates are gathered from Vafeidis et al. (2019), though it should be noted that there is substantial uncertainty inherent in attenuation rate estimates. Inundated areas are combined with LandScan (Dobson et al., 2000) night-time population maps for estimates of affected population (for interpretability, we hold population constant between the historical and future periods). This method is found to produce reasonable estimates of inundated area compared with observations from Hurricane Katrina, and captures state-level patterns in populations at risk from the 100-year coastal flood as derived by FEMA (see Supplementary Section 3.2). Still, we stress that this is an approximate method, useful primarily for efficient assessment of large-scale surge inundation. Pseudo-code of the CA-Surge algorithm is provided in Supplementary Section 3.1.

# 3 Results

The combination of RAFT, DeepSurge, and CA-Surge makes the efficient simulation of many thousands of storm events tractable, which enables robust estimation of the effect of changes in TC behavior on once-in-a-century coastal flood extremes. In this section, we evaluate the historical 100-year surge heights, analyze how they are projected to change in the future, and then put these changes in terms of population at risk of flooding.

# 3.1 Extreme storm surge heights

The ensemble-median 100-year surge event as modeled by our method for the historical period (Fig. 2a) shows predictable patterns. The highest surges occur primarily along the Gulf Coast, where landfalling major hurricanes are most common, with peaks clustered in bays and concave coastlines.
From the east coast of Florida and northward along the Atlantic, the combination of a steeper continental shelf and fewer major hurricanes results in lower extremes, with notable exceptions in areas with more complex coastal geometries, including the Chesapeake Bay, Delaware Bay, and Long Island Sound. However, the results in the Chesapeake Bay appear to be anomalous; we find that this outlier is inherited from the original ADCIRC simulations, which had insufficient spatial resolution to resolve the fine-scale riverine and estuarine hydrodynamic processes in this region. Addressing this issue is planned for future work. Results in the Maryland-Virginia region are provided in the remainder of this work for completeness but should be treated with caution due to these biases. Otherwise, the spatial pattern of our modeled 100-year event appears qualitatively reasonable.

To quantitatively verify that the combination of DeepSurge and RAFT produces accurate 100-year surge heights, we compare our ensemble-median return levels with the historical 100-year event observational estimates from Needham (2014), who undertook an exhaustive analysis of the historical storm surge record 1900–2014 throughout the Gulf of Mexico, and with three independent surge modeling techniques: Gori et al. (2022) applied ADCIRC to a similar synthetic TC model forced with NCEP reanalysis; Muis et al. (2023) forced the Global Tide and Surge Model (a hydrodynamic model based on Delft3D) with historical ERA5 reanalysis; and lastly, for a comparison with a simple parametric model, we generate surges with the Storm Surge Hazard Potential Index (SSHPI) developed by Islam et al. (2021), forced by the same RAFT synthetic TCs as DeepSurge. We find that our method matches the observed distribution of 100-year surge heights at least as well as the other modeling methods, and shows strong and statistically significant correlation with the patterns produced by them (Spearman correlations $\geq 0.5$, $p < 0.05$; see Supplementary Section 2.5 for details). DeepSurge achieves this comparable level of skill while being much more computationally efficient than numerical hydrodynamic models (up to 96x faster than our ADCIRC configuration; see Supplementary Section 2.3).

In the future period, RAFT projects broadly increasing coastal TC intensities, with the 100-year storm intensity increasing in strength by roughly one Saffir-Simpson category in most coastal regions (Supplementary Fig. 1). The synthetic TCs also move slower on average in the future period, with slight differences in storm movement direction most visible in the vicinity of Florida (Supplementary Fig. 2). These differences in TC behavior cause substantial changes in storm surge as modeled by DeepSurge. Even without sea-level rise, the model indicates that these differences in TC behavior will produce notably larger 100-year surge levels across the northern Gulf Coast and eastern Florida (up to $+78\,\mathrm{cm}$), with more heterogeneous results for the rest of the coastline (Fig. 2b). These results are in broad agreement with past studies such as Gori et al. (2022). Causal analysis suggests that the increase in future TC intensities is the dominant factor contributing to these changes, while decreasing storm translation speeds are generally a weakly negative factor (Supplementary Section 4). The westward shift in average storm direction in the vicinity of Florida (Supplementary Fig. 2) may explain the differing responses on the peninsula's eastern and western coasts despite similar increases in TC intensities. Overall, altered TC behavior is projected to be a positive factor, increasing surge height by an average of $+8.4\,\mathrm{cm}$.

Figure 2: (a) DeepSurge-modeled historical ensemble-median 100-year event; the corresponding future change (b) without and (c) with sea-level rise; and respective widths of the $90\%$ confidence intervals (d,e).
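The 100-year levels analysed here are empirical return levels: with a large synthetic event set, the $T$-year level is the per-event quantile exceeded on average once every $T$ years. A minimal sketch of such an estimate, using hypothetical event peaks and a hypothetical simulated-year count rather than the study's exact estimator:

```python
import numpy as np

def empirical_return_level(event_peaks, n_sim_years, return_period=100.0):
    """T-year return level from a synthetic event set: the quantile of
    per-event peaks exceeded on average once every `return_period` years,
    i.e. with per-event exceedance probability
    n_sim_years / (return_period * n_events)."""
    peaks = np.asarray(event_peaks, float)
    exceed_prob = n_sim_years / (return_period * peaks.size)
    return np.quantile(peaks, 1.0 - exceed_prob)

# Hypothetical: 1,000 event peaks over 500 simulated years (~2 storms/yr).
rl_100 = empirical_return_level(np.arange(1, 1001), n_sim_years=500)
```

With two storms per year on average, the 100-year level falls at the 99.5th percentile of event peaks; denser event sets (such as 50,000 TCs per forcing) push the estimable return periods far beyond the observational record.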
The inclusion of sea-level rise (Fig. 2c) results in much larger changes, with a mean increase of $+85\,\mathrm{cm}$ and a maximum of $+170\,\mathrm{cm}$ relative to historical, though with correspondingly larger uncertainty (Fig. 2e). While the effect of sea-level rise is generally larger than that of future TC behavior (roughly two times larger at the peak along the Louisiana coast), the latter is still a significant contributor, especially in capturing differences across finer spatial scales.

# 3.2 Changing coastal inundation risk

While the quantification of future changes in extreme surge heights is of vital importance, it lacks crucial context; due to widely varying topographies and population densities, the relationship between surge height and flood damage on the U.S. coast is spatially variable and non-linear. Thus, to translate our projected surge heights into human impacts, we use CA-Surge, a simple bathtub-style inundation model with an overland frictional attenuation effect (as described in the Methods section). Note that since the CA-Surge model is only an approximation of true inundation physics, more emphasis should be placed on relative changes than on absolute totals. Even under historical climate conditions, the flooding risk posed by the 100-year storm surge event is significant—with an estimated 4.6 million people at risk—yet this risk is substantially heightened in our projected future scenario (Fig. 3). Similar to the changes in surge heights, future changes in inundation risk are driven primarily by sea-level rise ($+1.9$ million people at risk above historical; Fig. 3c), with changes in future TC behavior being a secondary contributor ($+0.24$ million people at risk above historical; Fig. 3b). Taken together, risk is projected to increase in every coastal state (Fig. 3d), with a $50\%$ increase in population at risk overall ($+2.3$ million people above historical).
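Treating surge and sea-level rise as linearly additive and statistically independent, as stated in the Methods, means the joint future water-level distribution can be sampled by summing independent draws from the two marginals. A small Monte Carlo sketch with hypothetical per-location samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def future_water_levels(surge_samples, slr_samples, n=100_000):
    """Sample the joint extreme-water-level distribution by drawing surge
    and sea-level rise independently and adding them (linear additivity)."""
    s = rng.choice(np.asarray(surge_samples, float), size=n, replace=True)
    z = rng.choice(np.asarray(slr_samples, float), size=n, replace=True)
    return s + z

# Hypothetical samples (m) for one coastal point; illustrative only.
surge = np.array([1.0, 1.5, 2.0, 2.5])
slr = np.array([0.3, 0.5, 0.9])
wl = future_water_levels(surge, slr)
```

Under independence the means add exactly, while the upper tail of the sum is what drives the inflated future return levels.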
Florida alone is projected to see roughly one million additional residents at risk in the future period, as it experiences one of the larger state-level relative increases in risk on top of its already large historical risk. Crucially, the percentage change in inundation risk at a state level is not always proportional to the local increase in 100-year surge height. While the largest future changes in surge height were observed in the central Gulf states of Louisiana, Alabama, and Mississippi, relative changes in coastal flood risk are most concentrated along the southeast Atlantic coast. For example, although Georgia and South Carolina see relatively moderate increases in 100-year surge heights in the future compared to many other states (Fig. 2c), they have among the largest percentage increases in inundation risk (Fig. 3d). This suggests that these states' coastal topographies may exhibit critical thresholds, above which population centers currently insulated from risk may become rapidly more vulnerable. This hypothesis is explored in Figure 4, which plots the relationship between average coastal surge height and population at risk for Georgia and South Carolina, with Alabama as a counterexample. Around the two-meter threshold of coastal surge height, Georgia and South Carolina's risk curves inflect upward sharply, while Alabama's remains fairly linear. Thus, despite Alabama exhibiting a much larger future change in surge height on top of an already higher baseline 100-year surge, Georgia and South Carolina experience larger increases (in both absolute and percentage terms) in population at risk. These drastic differences in risk-curve slope indicate that any future increase in surge height for Georgia and South Carolina is much more damaging than the same increase in Alabama would be.
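Risk curves like those in Figure 4 can be built by sweeping a trial surge height over an elevation-indexed population distribution; a sharp upward inflection marks a threshold above which large populations become exposed. A minimal sketch with hypothetical elevations and populations (not the study's data or its full inundation model):

```python
import numpy as np

def risk_curve(elevations_m, populations, surge_heights):
    """Population at risk as a function of trial surge height: for each
    height, sum the population living below that elevation. The local
    slope of the curve measures sensitivity to further surge increases."""
    elevations_m = np.asarray(elevations_m, float)
    populations = np.asarray(populations, float)
    return np.array([populations[elevations_m < h].sum()
                     for h in surge_heights])

# Hypothetical case: most people live just above 2 m, so risk is nearly
# flat until the trial surge crosses that threshold.
elev = np.array([0.5, 1.0, 2.1, 2.2, 2.3])
pop = np.array([10.0, 10.0, 200.0, 300.0, 250.0])
curve = risk_curve(elev, pop, surge_heights=[1.5, 2.0, 2.5])
```

The flat-then-steep shape of this toy curve mimics the threshold behaviour described above for Georgia and South Carolina, versus the nearly linear Alabama case.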
This finding underscores the importance of framing storm surge projections in terms of inundation risk instead of surge height, as these impactful nonlinear relationships would otherwise be missed. The large sample size of synthetic TC events and ensemble forcings allows for robust uncertainty quantification as well, which also exhibits nonlinear and heterogeneous patterns. The spatial distribution of uncertainty (Supplementary Fig. 3)—arising from the combination of uncertainties in GCM-induced TC behavior and sea-level rise—varies widely, with some states seeing proportionally much wider $90\%$ uncertainty bounds than others (e.g., New York vs. Louisiana), or greatly increasing uncertainty in the future (e.g., Florida). Beyond the $90\%$ bounds, the tails of the distributions tend to be much longer in the future than in the historical period, due to long tails in the sea-level rise projections (Supplementary Fig. 4). Very generally, uncertainty in population at risk from the 100-year event appears to be lower in the Gulf and larger along the Atlantic coast.

# 4 Discussion

This study couples an ensemble of global projections of future ocean-atmospheric conditions, a synthetic TC model, a novel deep learning storm surge model, and an efficient inland inundation model to assess changes in storm surge risk and associated flooding impacts for the U.S. coastline. With an unprecedented sample size of TC events, we find substantial ($\sim 50\%$) increases in population at risk of once-in-a-century surge flooding, and characterize the spatial pattern and variability of this risk. While increases in 100-year surge heights are most prominent along the Gulf Coast, the pattern of inundation risk (as modulated by coastal topography and population densities) is revealed to be much more variable, exhibiting complex and non-linear interactions.
Our deep learning storm surge model, DeepSurge, is not a replacement for numerical storm surge models but serves as a useful complement to them, particularly in situations where traditional models would impose a prohibitive computational cost or insufficient throughput. While this study demonstrates the utility of such a deep learning storm surge model in assessing surge risk, the technique is still evolving, and—like all data-driven methods—is inherently limited by the quality and scope of its training data. Given that the model operates on individual points rather than a fixed mesh, we intend to explore assimilating additional data from diverse regional and high-resolution hydrodynamic models during the training process. This may help address known biases in complex regions such as the Chesapeake Bay, where resolving fine-scale dynamics remains challenging. A similar technique may further allow the expansion of our model to new basins and domains across the globe. While accurately projecting flood risk is important on its own, continuing study is needed to further our understanding of the broader socioeconomic impacts of flooding. Beyond the immediate threats to wellbeing, floods cause a wide range of adverse and long-lasting effects, including increased disease and mortality (Alderman et al., 2012; Stephens et al., 2007), mental health disruptions (Graham et al., 2019; Alderman et al., 2012), and unemployment (Allaire, 2018; Peek-Asa et al., 2012).
Complex and varied patterns in the socioeconomic effects of flooding emerge across different coastal regions (Montgomery & Chakraborty, 2013; Maldonado et al., 2016; Qiang, 2019; Herreros-Cantis et al., 2020; Smiley et al., 2022), particularly in its impacts on housing and home valuations (Varela Varela, 2023; Zhang, 2016; Van der Straten, 2023; Billings et al., 2022). Given these ramifications, careful consideration of adaptation, mitigation, and retreat strategies at local and national scales is essential (Neumann et al., 2015).

Figure 3: Modeled population at risk from the 100-year flood event in (a) the historical period, and the percentage change in the future period with (b) future TCs only, (c) future sea-level rise (SLR) only, and (d) the combination of the two. All panels show the ensemble median estimate.

Figure 4: Curves relating average coastal surge height to population at risk for three selected states. The rapid steepening of Georgia and South Carolina's risk curves leads to large future changes in inundation risk, despite these states having both a lower historical 100-year surge height and a smaller absolute change in future surge height than states such as Alabama.

# Acknowledgments

This work was supported by the Multisector Dynamics and Regional and Global Model Analysis program areas of the U.S. Department of Energy (DOE), Office of Science, Office of Biological and Environmental Research as part of the multi-program, collaborative Integrated Coastal Modeling (ICoM) project. The Pacific Northwest National Laboratory is operated for DOE by Battelle Memorial Institute under contract DE-AC05-76RL01830. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award BER-ERCAP0024320.
This work also used the computing resources of Pacific Northwest National Lab’s Institutional Computing facility. # Data Availability DeepSurge-predicted storm surge heights for all 900,000 synthetic storm tracks are made publicly available on Zenodo: https://doi.org/10.5281/zenodo.15021868. All other data sources used in this study are publicly available, as is the ADCIRC algorithm. # References Adeli, E., Sun, L., Wang, J., and Taflanidis, A. A. An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions. Neural Computing and Applications, 35(26):18971–18987, September 2023. ISSN 1433-3058. doi:10.1007/s00521-023-08719-2. URL https://doi.org/10.1007/s00521-023-08719-2. Alderman, K., Turner, L. R., and Tong, S. Floods and human health: A systematic review. Environment International, 47: 37–47, October 2012. ISSN 0160-4120. doi:10.1016/j.envint.2012.06.003. URL https://www.sciencedirect. com/science/article/pii/S0160412012001237. Allaire, M. Socio-economic impacts of flooding: A review of the empirical literature. Water Security, 3:18–26, May 2018. ISSN 2468-3124. doi:10.1016/j.wasec.2018.09.002. URL https://www.sciencedirect.com/science/ article/pii/S2468312418300063. Balaguru, K., Xu, W., Chang, C.-C., Leung, L. R., Judi, D. R., Hagos, S. M., Wehner, M. F., Kossin, J. P., and Ting, M. Increased U.S. coastal hurricane risk under climate change. Science Advances, 9(14):eadf0259, April 2023. doi:10.1126/sciadv.adf0259. URL https://www.science.org/doi/full/10.1126/sciadv.adf0259. Publisher: American Association for the Advancement of Science. Bi, K., Xie, L., Zhang, H., Chen, X., Gu, X., and Tian, Q. Accurate medium-range global weather forecasting with 3D neural networks. Nature, 619(7970):533–538, July 2023. ISSN 1476-4687. doi:10.1038/s41586-023-06185-3. URL https://www.nature.com/articles/s41586-023-06185-3. Publisher: Nature Publishing Group. Billings, S. B., Gallagher, E. A., and Ricketts, L. 
Let the rich be flooded: The distribution of financial aid and distress after hurricane harvey. Journal of Financial Economics, 146(2):797–819, November 2022. ISSN 0304- 405X. doi:10.1016/j.jfineco.2021.11.006. URL https://www.sciencedirect.com/science/article/pii/ S0304405X21005067. Cannon, A. J., Sobie, S. R., and Murdock, T. Q. Bias Correction of GCM Precipitation by Quantile Mapping: How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28(17):6938–6959, September 2015. ISSN 0894-8755, 1520-0442. doi:10.1175/JCLI-D-14-00754.1. URL https://journals.ametsoc.org/ view/journals/clim/28/17/jcli-d-14-00754.1.xml. Publisher: American Meteorological Society Section: Journal of Climate. Chen, C., Liu, H., and Beardsley, R. C. An unstructured grid, finite-volume, three-dimensional, primitive equations ocean model: application to coastal ocean and estuaries. Journal of atmospheric and oceanic technology, 20(1): 159–186, 2003. Dobson, J. E., Bright, E. A., Coleman, P. R., Durfee, R. C., and Worley, B. A. LandScan: a global population database for estimating populations at risk. Photogrammetric Engineering & Remote Sensing, 66(7), 2000. doi:10.1201/9781482264678-24. URL https://www.taylorfrancis.com/books/9781482264678/ chapters/10.1201/9781482264678-24. Emanuel, K. and Rotunno, R. Self-Stratification of Tropical Cyclone Outflow. Part I: Implications for Storm Structure. Journal of the Atmospheric Sciences, 68(10):2236–2249, October 2011. ISSN 0022-4928, 1520-0469. doi:10.1175/JAS-D-10-05024.1. European Space Agency. Land Cover CCI Product User Guide Version 2. Tech. Rep., 2017. URL maps.elie.ucl. ac.be/CCI/viewer/download/ESACCI-LC-Ph2-PUGv2_2.0.pdf. Giaremis, S., Nader, N., Dawson, C., Kaiser, H., Kaiser, C., and Nikidis, E. Storm Surge Modeling in the AI ERA: Using LSTM-based Machine Learning for Enhancing Forecasting Accuracy, March 2024. URL http: //arxiv.org/abs/2403.04818. arXiv:2403.04818 [physics]. 
Gori, A., Lin, N., Xi, D., and Emanuel, K. Tropical cyclone climatology change greatly exacerbates US extreme rainfall–surge hazard. Nature Climate Change, 12(2):171–178, February 2022. ISSN 1758-6798. doi:10.1038/s41558- 021-01272-7. URL https://www.nature.com/articles/s41558-021-01272-7. Publisher: Nature Publishing Group. Graham, H., White, P., Cotton, J., and McManus, S. Flood- and Weather-Damaged Homes and Mental Health: An Analysis Using England’s Mental Health Survey. International Journal of Environmental Research and Public Health, 16(18):3256, January 2019. ISSN 1660-4601. doi:10.3390/ijerph16183256. URL https://www.mdpi. com/1660-4601/16/18/3256. Number: 18 Publisher: Multidisciplinary Digital Publishing Institute. Hashemi, M. R., Spaulding, M. L., Shaw, A., Farhadi, H., and Lewis, M. An efficient artificial intelligence model for prediction of tropical storm surge. Natural Hazards, 82(1):471–491, May 2016. ISSN 1573-0840. doi:10.1007/s11069- 016-2193-4. URL https://doi.org/10.1007/s11069-016-2193-4. Herreros-Cantis, P., Olivotto, V., Grabowski, Z. J., and McPhearson, T. Shifting landscapes of coastal flood risk: environmental (in)justice of urban change, sea level rise, and differential vulnerability in New York City. Urban Transformations, 2(1):9, July 2020. ISSN 2524-8162. doi:10.1186/s42854-020-00014-w. URL https://doi.org/ 10.1186/s42854-020-00014-w. Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. Publisher: MIT press. Holland, G. A Revised Hurricane Pressure–Wind Model. Monthly Weather Review, 136, September 2008. doi:10.1175/2008MWR2395.1. URL https://journals.ametsoc.org/view/journals/mwre/136/ 9/2008mwr2395.1.xml. Irish, J. L., Resio, D. T., and Ratcliff, J. J. The Influence of Storm Size on Hurricane Surge. Journal of Physical Oceanography, 38(9):2003–2013, September 2008. ISSN 0022-3670, 1520-0485. doi:10.1175/2008JPO3727.1. 
URL https://journals.ametsoc.org/view/journals/phoc/38/9/2008jpo3727.1.xml. Publisher: American Meteorological Society Section: Journal of Physical Oceanography. Islam, M. R., Lee, C.-Y., Mandli, K. T., and Takagi, H. A new tropical cyclone surge index incorporating the effects of coastal geometry, bathymetry and storm information. Scientific Reports, 11(1):16747, August 2021. ISSN 2045- 2322. doi:10.1038/s41598-021-95825-7. URL https://www.nature.com/articles/s41598-021-95825-7. Number: 1 Publisher: Nature Publishing Group. Klotzbach, P. J., Bowen, S. G., Pielke, R., and Bell, M. Continental U.S. Hurricane Landfall Frequency and Associated Damage: Observations and Future Risks. Bulletin of the American Meteorological Society, 99(7):1359–1376, July 2018. ISSN 0003-0007, 1520-0477. doi:10.1175/BAMS-D-17-0184.1. URL https://journals.ametsoc.org/ view/journals/bams/99/7/bams-d-17-0184.1.xml. Publisher: American Meteorological Society Section: Bulletin of the American Meteorological Society. Knapp, K. R., Kruk, M. C., Levinson, D. H., Diamond, H. J., and Neumann, C. J. The International Best Track Archive for Climate Stewardship (IBTrACS): Unifying Tropical Cyclone Data. Bulletin of the American Meteorological Society, 91(3):363–376, March 2010. ISSN 0003-0007, 1520-0477. doi:10.1175/2009BAMS2755.1. URL https://journals.ametsoc.org/view/journals/bams/91/3/2009bams2755_1.xml. Publisher: American Meteorological Society Section: Bulletin of the American Meteorological Society. Knapp, K. R., Diamond, H. J., Kossin, J. P., Kruk, M. C., and Schreck, C. J. International Best Track Archive for Climate Stewardship (IBTrACS) Project, Version 4, North Atlantic, 2018. Knutson, T., Camargo, S. J., Chan, J. C. L., Emanuel, K., Ho, C.-H., Kossin, J., Mohapatra, M., Satoh, M., Sugi, M., Walsh, K., and Wu, L. Tropical Cyclones and Climate Change Assessment: Part II: Projected Response to Anthropogenic Warming. 
Bulletin of the American Meteorological Society, 101(3):E303–E322, March 2020. ISSN 0003-0007, 1520-0477. doi:10.1175/BAMS-D-18-0194.1. URL https://journals.ametsoc.org/view/ journals/bams/101/3/bams-d-18-0194.1.xml. Publisher: American Meteorological Society Section: Bulletin of the American Meteorological Society. Kochkov, D., Yuval, J., Langmore, I., Norgaard, P., Smith, J., Mooers, G., Klöwer, M., Lottes, J., Rasp, S., Düben, P., Hatfield, S., Battaglia, P., Sanchez-Gonzalez, A., Willson, M., Brenner, M. P., and Hoyer, S. Neural general circulation models for weather and climate. Nature, 632(8027):1060–1066, August 2024. ISSN 1476-4687. doi:10.1038/s41586- 024-07744-y. URL https://www.nature.com/articles/s41586-024-07744-y. Publisher: Nature Publishing Group. Kopp, R. E., Horton, R. M., Little, C. M., Mitrovica, J. X., Oppenheimer, M., Rasmussen, D. J., Strauss, B. H., and Tebaldi, C. Probabilistic 21st and 22nd century sea-level projections at a global network of tide-gauge sites. Earth’s Future, 2(8):383–406, 2014. ISSN 2328-4277. doi:10.1002/2014EF000239. URL https://onlinelibrary.wiley. com/doi/abs/10.1002/2014EF000239. _eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/2014EF000239. Lam, R., Sanchez-Gonzalez, A., Willson, M., Wirnsberger, P., Fortunato, M., Alet, F., Ravuri, S., Ewalds, T., EatonRosen, Z., Hu, W., Merose, A., Hoyer, S., Holland, G., Vinyals, O., Stott, J., Pritzel, A., Mohamed, S., and Battaglia, P. Learning skillful medium-range global weather forecasting. Science, 382(6677):1416–1421, December 2023. doi:10.1126/science.adi2336. URL https://www.science.org/doi/10.1126/science.adi2336. Publisher: American Association for the Advancement of Science. Lee, J.-W., Irish, J. L., Bensi, M. T., and Marcy, D. C. Rapid prediction of peak storm surge from tropical cyclone track time series using machine learning. Coastal Engineering, 170:104024, December 2021. ISSN 0378- 3839. doi:10.1016/j.coastaleng.2021.104024. 
URL https://www.sciencedirect.com/science/article/ pii/S0378383921001691. Lee, T.-L. Neural network prediction of a storm surge. Ocean Engineering, 33(3):483–494, March 2006. ISSN 0029-8018. doi:10.1016/j.oceaneng.2005.04.012. URL https://www.sciencedirect.com/science/article/ pii/S002980180500140X. Lipari, S., Balaguru, K., Rice, J., Feng, S., Xu, W., K. Berg, L., and Judi, D. Amplified threat of tropical cyclones to US offshore wind energy in a changing climate. Communications Earth & Environment, 5(1):1–10, December 2024. ISSN 2662-4435. doi:10.1038/s43247-024-01887-6. URL https://www.nature.com/articles/ s43247-024-01887-6. Publisher: Nature Publishing Group. Little, C. M., Horton, R. M., Kopp, R. E., Oppenheimer, M., Vecchi, G. A., and Villarini, G. Joint projections of US East Coast sea level and storm surge. Nature Climate Change, 5(12):1114–1120, December 2015. ISSN 1758-6798. doi:10.1038/nclimate2801. URL https://www.nature.com/articles/nclimate2801. Publisher: Nature Publishing Group. Lockwood, J. W., Lin, N., Oppenheimer, M., and Lai, C.-Y. Using Neural Networks to Predict Hurricane Storm Surge and to Assess the Sensitivity of Surge to Storm Characteristics. Journal of Geophysical Research: Atmospheres, 127(24):e2022JD037617, 2022. ISSN 2169-8996. doi:10.1029/2022JD037617. URL https://onlinelibrary.wiley.com/doi/abs/10.1029/2022JD037617. _eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1029/2022JD037617. Luettich, R. A., Westerink, J. J., and Scheffner, N. W. ADCIRC: an advanced three-dimensional circulation model for shelves, coasts, and estuaries. Report 1, Theory and methodology of ADCIRC-2DD1 and ADCIRC-3DL. 1992. Publisher: Coastal Engineering Research Center (US). Luettich Jr., R. A. and Westerink, J. J. A solution for the vertical variation of stress, rather than velocity, in a threedimensional circulation model. International Journal for Numerical Methods in Fluids, 12(10):911–928, 1991. ISSN 1097-0363. doi:10.1002/fld.1650121002. 
Maldonado, A., Collins, T. W., Grineski, S. E., and Chakraborty, J. Exposure to Flood Hazards in Miami and Houston: Are Hispanic Immigrants at Greater Risk than Other Social Groups? International Journal of Environmental Research and Public Health, 13(8):775, August 2016. ISSN 1660-4601. doi:10.3390/ijerph13080775. URL https://www.mdpi.com/1660-4601/13/8/775. Number: 8 Publisher: Multidisciplinary Digital Publishing Institute. Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Jia, Y., Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, 2015. URL https://www.tensorflow.org/. Montgomery, M. C. and Chakraborty, J. Social Vulnerability to Coastal and Inland Flood Hazards: A Comparison of GIS-Based Spatial Interpolation Methods. International Journal of Applied Geospatial Research (IJAGR), 4 (3):58–79, July 2013. ISSN 1947-9654. doi:10.4018/jagr.2013070104. URL https://www.igi-global.com/ article/content/www.igi-global.com/article/content/77925. Publisher: IGI Global. Muis, S., Aerts, J. C. J. H., Á. Antolínez, J. A., Dullaart, J. C., Duong, T. M., Erikson, L., Haarsma, R. J., Apecechea, M. I., Mengel, M., Le Bars, D., O’Neill, A., Ranasinghe, R., Roberts, M. J., Verlaan, M., Ward, P. J., and Yan, K. Global Projections of Storm Surges Using High-Resolution CMIP6 Climate Models. Earth’s Future, 11(9): e2023EF003479, 2023. ISSN 2328-4277. doi:10.1029/2023EF003479. URL https://onlinelibrary.wiley. 
com/doi/abs/10.1029/2023EF003479. _eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1029/2023EF003479. Needham, H. A Data-Driven Storm Surge Analysis for the U.S. Gulf Coast. Doctor of Philosophy, Louisiana State University and Agricultural and Mechanical College, March 2014. URL https://repository.lsu.edu/ gradschool_dissertations/3250. Neumann, J. E., Emanuel, K., Ravela, S., Ludwig, L., Kirshen, P., Bosma, K., and Martinich, J. Joint effects of storm surge and sea-level rise on US Coasts: new economic estimates of impacts, adaptation, and benefits of mitigation policy. Climatic Change, 129(1):337–349, March 2015. ISSN 1573-1480. doi:10.1007/s10584-014-1304-z. URL https://doi.org/10.1007/s10584-014-1304-z. Oliveira, M. M. F. d., Ebecken, N. F. F., Oliveira, J. L. F. d., and Santos, I. d. A. Neural Network Model to Predict a Storm Surge. Journal of Applied Meteorology and Climatology, 48(1):143–155, January 2009. ISSN 1558- 8424, 1558-8432. doi:10.1175/2008JAMC1907.1. URL https://journals.ametsoc.org/view/journals/ apme/48/1/2008jamc1907.1.xml. Publisher: American Meteorological Society Section: Journal of Applied Meteorology and Climatology. O’Shea, K. and Nash, R. An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458, 2015. Pathak, J., Subramanian, S., Harrington, P., Raja, S., Chattopadhyay, A., Mardani, M., Kurth, T., Hall, D., Li, Z., Azizzadenesheli, K., Hassanzadeh, P., Kashinath, K., and Anandkumar, A. FourCastNet: A Global Datadriven High-resolution Weather Model using Adaptive Fourier Neural Operators, February 2022. URL http: //arxiv.org/abs/2202.11214. arXiv:2202.11214. Peek-Asa, C., Ramirez, M., Young, T., and Cao, Y. Flood-Related Work Disruption and Poor Health Outcomes Among University Students. Prehospital and Disaster Medicine, 27(6):503– 508, December 2012. ISSN 1049-023X, 1945-1938. doi:10.1017/S1049023X1200129X. 
URL https://www.cambridge.org/core/journals/prehospital-and-disaster-medicine/article/ abs/floodrelated-work-disruption-and-poor-health-outcomes-among-university-students/ 656ED60550A6B7B719003C908B70CAF6. Price, I., Sanchez-Gonzalez, A., Alet, F., Andersson, T. R., El-Kadi, A., Masters, D., Ewalds, T., Stott, J., Mohamed, S., Battaglia, P., Lam, R., and Willson, M. GenCast: Diffusion-based ensemble forecasting for medium-range weather, May 2024. URL http://arxiv.org/abs/2312.15796. arXiv:2312.15796. Pringle, W. J., Wirasaet, D., Roberts, K. J., and Westerink, J. J. Global storm tide modeling with ADCIRC v55: unstructured mesh design and performance. Geoscientific Model Development, 14(2):1125–1145, February 2021. ISSN 1991-959X. doi:10.5194/gmd-14-1125-2021. URL https://gmd.copernicus.org/articles/14/1125/ 2021/. Publisher: Copernicus GmbH. Qiang, Y. Disparities of population exposed to flood hazards in the United States. Journal of Environmental Management, 232:295–304, February 2019. ISSN 0301-4797. doi:10.1016/j.jenvman.2018.11.039. URL https://www.sciencedirect.com/science/article/pii/S0301479718313057. Rappaport, E. N. Fatalities in the United States from Atlantic Tropical Cyclones: New Data and Interpretation. Bulletin of the American Meteorological Society, 95(3):341–346, March 2014. ISSN 0003-0007, 1520-0477. doi:10.1175/BAMSD-12-00074.1. URL https://journals.ametsoc.org/view/journals/bams/95/3/bams-d-12-00074.1. xml. Publisher: American Meteorological Society Section: Bulletin of the American Meteorological Society. Rice, J. R., Balaguru, K., Staid, A., Xu, W., and Judi, D. Projected increases in tropical cyclone-induced U.S. electric power outage risk. Environmental Research Letters, 20(3):034030, February 2025. ISSN 1748-9326. doi:10.1088/1748-9326/adad85. URL https://dx.doi.org/10.1088/1748-9326/adad85. Publisher: IOP Publishing. Roelvink, J. and Van Banning, G. Design and development of DELFT3D and application to coastal morphodynamics. 
Oceanographic Literature Review, 11(42):925, 1995.

Rupe, A., Vesselinov, V., and Crutchfield, J. P. Nonequilibrium statistical mechanics and optimal prediction of partially-observed complex systems. New Journal of Physics, May 2022.

Sahoo, B. and Bhaskaran, P. K. Prediction of storm surge and inundation using climatological datasets for the Indian coast using soft computing techniques. Soft Computing, 23(23):12363–12383, December 2019. ISSN 1433-7479. doi:10.1007/s00500-019-03775-0. URL https://doi.org/10.1007/s00500-019-03775-0.

Shchepetkin, A. F. and McWilliams, J. C. The regional oceanic modeling system (ROMS): a split-explicit, free-surface, topography-following-coordinate oceanic model. Ocean Modelling, 9(4):347–404, January 2005. ISSN 1463-5003. doi:10.1016/j.ocemod.2004.08.002. URL https://www.sciencedirect.com/science/article/pii/S1463500304000484.

Smiley, K. T., Noy, I., Wehner, M. F., Frame, D., Sampson, C. C., and Wing, O. E. J. Social inequalities in climate change-attributed impacts of Hurricane Harvey. Nature Communications, 13(1):3418, August 2022. ISSN 2041-1723. doi:10.1038/s41467-022-31056-2. URL https://www.nature.com/articles/s41467-022-31056-2.

Stephens, K. U., Grew, D., Chin, K., Kadetz, P., Greenough, P. G., Burkle, F. M., Robinson, S. L., and Franklin, E. R. Excess mortality in the aftermath of Hurricane Katrina: a preliminary report. Disaster Medicine and Public Health Preparedness, 1(1):15–20, July 2007. ISSN 1938-744X. doi:10.1097/DMP.0b013e3180691856.

Strauss, B. H., Ziemlinski, R., Weiss, J. L., and Overpeck, J. T. Tidally adjusted estimates of topographic vulnerability to sea level rise and flooding for the contiguous United States. Environmental Research Letters, 7(1):014033, March 2012. ISSN 1748-9326. doi:10.1088/1748-9326/7/1/014033. URL https://dx.doi.org/10.1088/1748-9326/7/1/014033.

Sztobryn, M. Forecast of storm surge by means of artificial neural network.
Journal of Sea Research, 49(4):317–322, June 2003. ISSN 1385-1101. doi:10.1016/S1385-1101(03)00024-8. URL https://www.sciencedirect.com/science/article/pii/S1385110103000248.

Vafeidis, A. T., Schuerch, M., Wolff, C., Spencer, T., Merkens, J. L., Hinkel, J., Lincke, D., Brown, S., and Nicholls, R. J. Water-level attenuation in global-scale assessments of exposure to coastal flooding: a sensitivity analysis. Natural Hazards and Earth System Sciences, 19(5):973–984, May 2019. ISSN 1561-8633. doi:10.5194/nhess-19-973-2019. URL https://nhess.copernicus.org/articles/19/973/2019/.

Van der Straten, Y. Flooded House or Underwater Mortgage? The Macrofinancial Implications of Climate Change and Adaptation, March 2023. URL https://papers.ssrn.com/abstract=4393731.

Varela Varela, A. Surge of Inequality: How Different Neighborhoods React to Flooding. SSRN Electronic Journal, 2023. ISSN 1556-5068. doi:10.2139/ssrn.4396481. URL https://www.ssrn.com/abstract=4396481.

Williams, L. L. and Lück-Vogel, M. Comparative assessment of the GIS based bathtub model and an enhanced bathtub model for coastal inundation. Journal of Coastal Conservation, 24(2):23, March 2020. ISSN 1874-7841. doi:10.1007/s11852-020-00735-x. URL https://doi.org/10.1007/s11852-020-00735-x.

World Meteorological Organization. WMO Atlas of Mortality and Economic Losses from Weather, Climate and Water Extremes (1970–2019), volume WMO-No. 1267 of WMO. WMO, Geneva, 2021. ISBN 978-92-63-11267-5.

Xie, W., Xu, G., Zhang, H., and Dong, C. Developing a deep learning-based storm surge forecasting model. Ocean Modelling, 182:102179, April 2023. ISSN 1463-5003. doi:10.1016/j.ocemod.2023.102179. URL https://www.sciencedirect.com/science/article/pii/S1463500323000203.

Xu, W., Balaguru, K., August, A., Lalo, N., Hodas, N., DeMaria, M., and Judi, D. Deep Learning Experiments for Tropical Cyclone Intensity Forecasts. Weather and Forecasting, 36(4):1453–1470, August 2021.
ISSN 1520-0434, 0882-8156. doi:10.1175/WAF-D-20-0104.1. URL https://journals.ametsoc.org/view/journals/wefo/36/4/WAF-D-20-0104.1.xml.

Xu, W., Balaguru, K., Judi, D. R., Rice, J., Leung, L. R., and Lipari, S. A North Atlantic synthetic tropical cyclone track, intensity, and rainfall dataset. Scientific Data, 11(1):130, January 2024. ISSN 2052-4463. doi:10.1038/s41597-024-02952-7. URL https://www.nature.com/articles/s41597-024-02952-7.

Yunus, A. P., Avtar, R., Kraines, S., Yamamuro, M., Lindberg, F., and Grimmond, C. S. B. Uncertainties in Tidally Adjusted Estimates of Sea Level Rise Flooding (Bathtub Model) for the Greater London. Remote Sensing, 8(5):366, May 2016. ISSN 2072-4292. doi:10.3390/rs8050366. URL https://www.mdpi.com/2072-4292/8/5/366.

Zhang, L. Flood hazards impact on neighborhood house prices: A spatial quantile regression analysis. Regional Science and Urban Economics, 60:12–19, September 2016. ISSN 0166-0462. doi:10.1016/j.regsciurbeco.2016.06.005. URL https://www.sciencedirect.com/science/article/pii/S0166046216300540.

# Supplementary Information for “Projecting U.S. coastal storm surge risks and impacts with deep learning”

Julian R. Rice$^{1,*}$, Karthik Balaguru$^{1}$, Fadia Ticona Rollano$^{1}$, John Wilson$^{1}$, Brent Daniel$^{1}$, David Judi$^{1}$, Ning Sun$^{1}$, and L. Ruby Leung$^{1}$

$^{1}$Pacific Northwest National Laboratory

\*Corresponding author: julian.rice@pnnl.gov

# S1 RAFT synthetic tropical cyclones

The Risk Analysis Framework for Tropical Cyclones (RAFT) (Xu et al., 2021; Balaguru et al., 2023; Xu et al., 2024) is a synthetic TC downscaling method.
It uses a random seeding process for cyclogenesis conditioned on historical observations, with a physical beta-advection model for track propagation (Emanuel et al., 2006; Marks, 1992; Kelly et al., 2018) and a deep learning regressor to model 6-hourly intensity changes over the lifetime of a storm (Xu et al., 2021). Radius of maximum wind is parameterized with a log-transformed linear regression on latitude and maximum wind speed, following Willoughby et al. (2006). Since large uncertainties remain about the rate of TC genesis in the future (Knutson et al., 2020; Murakami & Wang, 2022; Chavas et al., 2024), we use a fixed rate of 14.91 seeds per year (the observed historical rate in IBTrACS from 1980–2014) for both the historical and future periods. Note that the intensity model quickly dissipates or sustains TC seeds depending on how favorable their environment is, which allows the climate to implicitly regulate TC frequency.

Climate conditions are gathered for the historical (1980–2014) and end-of-century future (2066–2100) periods under Shared Socioeconomic Pathway SSP5-8.5 from nine CMIP6 models: the Euro-Mediterranean Centre on Climate Change coupled climate model (CMCC-CM2-SR5), Canadian Earth System Model (CanESM5), Energy Exascale Earth System Model (E3SM), EC-Earth Consortium Model (EC-Earth3), Geophysical Fluid Dynamics Laboratory Climate Model (GFDL-CM4), Institut Pierre-Simon Laplace Climate Model (IPSL-CM6A-LR), Model for Interdisciplinary Research on Climate (MIROC6), Max Planck Institute Earth System Model (MPI-ESM1-2-LR), and Meteorological Research Institute Earth System Model (MRI-ESM2-0). We generate 50,000 TCs from the forcings of each of the 18 CMIP6 model-scenario pairs, for a total of 900,000 tracks. Due to climatological biases in the CMIP6 ensemble, we apply bias correction to the intensities of the synthetic TCs.
Bias correction is implemented with a spatially-aware quantile delta mapping (Cannon et al., 2015) to align the distribution of historical TC intensities with historical observations, and preserve differences between corresponding quantiles in historical and future scenarios, as in Lipari et al. (2024) and Rice et al. (2025).

# S2 Storm surge modeling

# S2.1 ADCIRC simulations

The numerical model selected for generating the training dataset for DeepSurge is ADCIRC (Luettich Jr. & Westerink, 1991; Luettich et al., 1992; Pringle et al., 2021) v53.04. ADCIRC is an ocean circulation model that has been widely used in storm surge studies. The computational grid used in this study was based on an unstructured mesh developed by Dietrich et al. (2011)1. It spans the U.S. East Coast, the Gulf of Mexico, and the Caribbean Sea, with the open boundary defined along the North Atlantic Ocean. The horizontal resolution of the original mesh was relaxed to 15,467 nodes to balance computational costs and accuracy of the simulations, with a final resolution of approximately 25 km along coastlines and a 150 km resolution at the open boundary (Fig. S5a).

Historical tropical cyclone data for the North Atlantic was retrieved from NOAA’s International Best Track Archive for Climate Stewardship (IBTrACS) (Knapp et al., 2010, 2018). The North Atlantic IBTrACS dataset was subsampled to include only tracks contained within the extents of the ADCIRC model domain and a minimum storm length of 3 days, allowing for data gaps in wind speed and atmospheric pressure records of no more than six hours (gaps shorter than six hours were filled through linear interpolation). 279 tracks met these criteria (Fig. S5b). Wind fields were generated following the methodology outlined by Emanuel & Rotunno (2011) using each track’s maximum wind speed and radius of maximum wind, and a maximum radius of storm influence set at 300 km.
Pressure fields were generated following the methodology presented by Holland (2008). No other forcings (e.g., tidal) were included in the ADCIRC simulations. To ensure the stability of the model, storm events shorter than 5 days were simulated using a ramp function to extend the total simulation time to 5 days (e.g., a 2-day ramp period for a 3-day storm event). Each simulation was executed with a 1-second time step and the resulting water levels (i.e., storm surge) were stored at 1-hour intervals over the entire computational domain.

# S2.2 DeepSurge architecture & training

This section assumes some basic familiarity with neural network architectures. For an introduction to neural networks we recommend LeCun et al. (2015), and Emmert-Streib et al. (2020) for a more in-depth exploration.

The basic structure of the neural network, as detailed in Main Text Figure 1, has four stages: 1) encode the timeseries and spatial data separately, 2) convert them to compatible shapes and concatenate them together, 3) apply a Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) layer to understand the temporal development, and 4) decode the features to predict a maximum surge level. To compactly notate tensor shapes, let $T$ denote the (variable-length) time dimension, $I$ denote the number of input features in the timeseries, and $S$ denote the number of spatial images. The timeseries ($T \times I$) is encoded with a pair of one-dimensional convolutional layers with a kernel size of one, to expand the series to shape ($T \times 256$). The spatial stack ($64 \times 64 \times S$) is encoded with four successive two-dimensional convolutional layers with striding and pooling to downsample to a vector ($1 \times 256$) describing the spatial information. We then repeat this vector $T$ times to create a tensor ($T \times 256$) that can be concatenated with the temporal features to create a ($T \times 512$) tensor.
This combined feature is passed through an LSTM layer—a recurrent sub-network that processes each step of a timeseries in the context of information remembered from all previous steps—which outputs a new sequence ($T \times 256$). Lastly, we decode these LSTM outputs with another series of one-dimensional convolutional layers, resulting in a final output sequence ($T \times 1$). Taking the maximum over the $T$ dimension returns the scalar maximum surge prediction for the storm at that given node. The network is constructed and trained in TensorFlow/Keras (Martín Abadi et al., 2015), and has a total of 1.7 million parameters.

Prior to training, 25% of storms in the dataset are set aside for testing, and never trained on. An additional 15% of remaining storms are used as the validation set, to test for overfitting during the training process. Since extreme surge levels are of more interest and simultaneously more rare in the training data, two adjustments are made: each example’s loss is weighted proportional to the ADCIRC peak surge height in meters plus one (e.g., a surge of 6 m has a weight of 7, while a surge of 0 m has a weight of 1), and examples with ADCIRC peak surges below 1 meter are sampled less frequently to correct for their overrepresentation in the dataset. Training occurs in epochs, each consisting of 500 batches with a batch size of 32. During training, low levels of zero-centered Gaussian noise ($\sigma$ = 1e-3) are added to the inputs as a simple regularization mechanism. We use the Adam optimizer (Kingma & Ba, 2014), and the mean-squared error between the maximum neural network prediction and maximum ADCIRC modeled surge as the loss target. At the end of each epoch, the loss on the validation set is assessed, and the best model iteration according to this metric is saved. The learning rate begins at 2e-3, and is multiplied by a factor of 0.2 after any three consecutive epochs with no validation loss improvement.
Training is stopped after six consecutive epochs with no validation loss improvement. This architecture was developed and tuned over numerous manual iterations, including experimenting with learning rates, batch sizes, alternate model structures, batch normalization, dropout, scaling the width and depth of the network, different recurrent units, and regularization techniques.

# S2.3 DeepSurge computational efficiency

For an approximate comparison of the computational speedup provided by our method, we predict Hurricane Katrina’s storm surge using both DeepSurge and ADCIRC on high-performance computing systems. We find DeepSurge to be approximately 12.5x faster in terms of CPU-hours, with an upper bound of 96x when predicting hundreds of storms in sequence (since DeepSurge’s initialization stage is the bulk of the computational burden, which can subsequently be shared across multiple simulations). Even with this significant speedup, the computational expense of simulating 900,000 storms is not insignificant; approximately 350 high-performance compute node-hours were utilized (with 256 cores per node, ~90,000 core-hours).

# S2.4 DeepSurge validation & sensitivity analysis

DeepSurge demonstrates promising generalization skill on the test set, which consists of 71 storms that the model has never been trained or tuned on. When compared to the corresponding ADCIRC simulations, DeepSurge achieves an 81.5% $R^2$ score, a mean squared error of 0.224 m$^2$, and a mean absolute error of 0.258 m in predicting peak surge heights per node. As error metrics in terms of raw surge height are difficult to contextualize on their own, we later assess the sensitivity of our inundation results to the model’s biases and errors. To compare with observed storm surge data, we perform a validation against NOAA tide gauge observations2.
All available peak tide gauge measurements within a 300 km radius of each test-set storm’s track are collected and co-located with the predictions from the nearest ADCIRC/DeepSurge node. Peak surge for each tide gauge is calculated as the maximum difference between observed water level and expected tide. Only verified data with gaps of less than one hour during the lifetime of the storm are considered, resulting in 194 valid gauge observations. Although this is highly accurate data, it is notably incomplete; tide gauges often fail or malfunction during extreme surge events, with failure observed at surge heights as low as 1.22 m (see Needham (2014)’s Figure 5.1). This effect causes a fairly strong sampling bias toward smaller surge levels, with the largest observed surge in this collection being 3.1 m despite the fact that a number of storms in the sample (including 2005’s Hurricane Katrina) are known to have generated much larger surges. Gauge observations may be influenced by factors not modeled by ADCIRC or the neural network, such as rainfall, river discharge, and background non-cyclonic winds, which makes them an imperfect comparison for our surge-only ADCIRC and DeepSurge simulations, though we expect these influences to be small relative to TC-driven surge in most cases.

DeepSurge shows reasonable skill in capturing these gauge-observed surge peaks, with error metrics ($R^2 = 0.403$, MAE = 0.474 m, RMSE = 0.664 m) similar to the ADCIRC simulations ($R^2 = 0.427$, MAE = 0.543 m, RMSE = 0.882 m). Pearson and Spearman correlations are highly significant ($p \ll 0.001$). Possibly due to the negative sampling bias of the gauge observations, both DeepSurge and ADCIRC exhibit positive mean biases (+0.217 m and +0.203 m, respectively).
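As a concrete sketch of the skill metrics used in this gauge comparison ($R^2$, MAE, RMSE, and mean bias), the snippet below computes them with NumPy. The function name `validation_metrics` and the peak-surge values are illustrative, not the paper's code or data:

```python
import numpy as np

def validation_metrics(obs, pred):
    """Skill metrics for co-located observed vs. predicted peak surges:
    R^2 (coefficient of determination), MAE, RMSE, and mean bias."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    resid = pred - obs
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return {
        "r2": 1.0 - ss_res / ss_tot,
        "mae": np.mean(np.abs(resid)),
        "rmse": np.sqrt(np.mean(resid ** 2)),
        "bias": np.mean(resid),  # positive => model overpredicts the gauges
    }

# Made-up gauge/model peak-surge pairs in meters (not the paper's data)
obs = [0.4, 0.9, 1.5, 2.1, 3.1]
pred = [0.6, 1.0, 1.4, 2.6, 3.0]
m = validation_metrics(obs, pred)
```

Note that RMSE is always at least as large as MAE, which is why the 0.224 value reported against ADCIRC (below the 0.258 m MAE) must be an MSE in m² rather than an RMSE.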
Additionally, a sensitivity analysis is performed to understand how the biases of the DeepSurge model may impact our final inundation estimates. Specifically, we estimate the bias at each DeepSurge node as the mean of the biases at all gauge observations within three degrees of the node, weighted inversely by distance. This process is averaged over 200 bootstrap samples of the gauge bias observations for robustness. The derived bias (Fig. S6) is generally positive as noted prior, with near-zero bias (< 0.25 m) in the western Gulf of Mexico and eastern Florida, moderate bias (generally < 0.5 m) in the rest of the Gulf and Southeast, and larger biases in the Northeast—though the sample size in that region is notably less robust, with 20 total gauge-prediction pairs north of 37°N caused by only 4 storms.

Correcting for this estimated bias in DeepSurge’s 100-year surge event and recalculating inundation risk results in an 18% reduction in historical-period population affected, and a 14% reduction in future-period population affected. Use of these bias-corrected totals actually indicates an even stronger relative change in inundation risk in the future of +57%, compared with the +50% found using the uncorrected totals. At the individual state level, all states in the Gulf and Southeast see corrections of under ±30% (and all but Georgia and South Carolina under ±20%), with near-zero corrections in Texas, where the estimated biases are smallest. Corrections are much larger in the Northeast, exceeding −50% in Massachusetts and New Jersey, though again we note the much lower confidence in estimated biases there. Given the limited sample size, incompleteness of the tide gauge records, and confounding factors previously discussed, we treat these results as only rough indications of the directions and magnitudes of DeepSurge biases.
For these reasons, and because the uncorrected totals actually present a more conservative view of relative future changes in risk, we choose to report the uncorrected totals in the main manuscript.

# S2.5 Validation of modeled 100-year surge height

To quantitatively verify that the combination of DeepSurge and RAFT is producing accurate 100-year surge heights, we compare our ensemble median return levels with the historical 100-year event observational estimates from Needham (2014), who undertook an exhaustive analysis of the historical storm surge record 1900–2014 throughout the Gulf of Mexico, and three independent surge modeling techniques: Gori et al. (2022) forced a similar synthetic TC model with NCEP reanalysis for a historical period (1980–2005) and eight CMIP6 GCMs for a future period (2070–2100 under SSP5-8.5) to model the 100-year storm tide above mean higher high water (MHHW) with a peaks-over-threshold approach using ADCIRC. Muis et al. (2023) forced the Global Tide and Surge Model (a hydrodynamic model based on Delft3D) with historical ERA5 reanalysis (1985–2014) as well as five high-resolution CMIP6 models for a historical (1950–2014) and near-future (2015–2050 under SSP5-8.5) period, using a peaks-over-threshold approach to estimate 100-year storm surge levels. Lastly, for a comparison with a simple parametric model, we generate surges with the Storm Surge Hazard Potential Index (SSHPI) developed by Islam et al. (2021), forced by the same RAFT synthetic TCs as DeepSurge.

The historical 100-year estimates from each of the modeling methods are presented in Fig. S9. The 100-year estimates from Needham (2014)3, Gori et al. (2022)4, and Muis et al. (2023)5 are all directly published with no additional computation performed on our part, while Storm Surge Hazard Potential Index (SSHPI) (Islam et al., 2021) estimates were derived using the following method: Predictions were computed by applying the SSHPI equations (see Islam et al.
(2021), their equations 4 and 5) to the same 900,000 synthetic TCs used to force DeepSurge. SSHPI-derived surge is computed at hourly timesteps for all nodes within 0.5 degrees of the storm center, and the maximum predicted surge at each node is saved. Because the SSHPI formulation requires a measure of the mean radius of 50-knot winds ($R_{50kt}$, in nautical miles) which is not directly modeled in our synthetic TCs, a linear regression on maximum wind speed ($V_{max}$, in knots), latitude ($lat$, in degrees North), and radius of maximum wind ($R_{mw}$, in nautical miles) was derived from all available observations ($n = 222$) in the HURDAT database (Jarvinen et al., 1984). This simple formulation achieves a Pearson’s correlation of 0.83 and $R^2$ of 0.68:

$$ R_{50kt} = 0.596 \cdot V_{max} + 0.853 \cdot R_{mw} + 2.074 \cdot lat $$

The three methods’ predictions are all co-located to the nearest node within 0.15 degrees, which enables consistent comparison between methods at 651 locations along the U.S. coast. The analysis by Needham (2014) reveals 100-year surge levels ranging from 2.53 m (Cedar Key, Florida) to 7.95 m (Bay St. Louis, Mississippi). Naturally these observations are subject to a great deal of random variability, as the chance occurrence or non-occurrence of particular extreme events (e.g., Hurricane Katrina) within this particular 115-year sample will bias the estimates; still, they remain the best available approximation of the spread of 100-year surge levels in this region. Co-locating the Needham (2014) estimates with the other methods ($n = 18$) reveals that DeepSurge achieves the most comparable cumulative distribution of 100-year surge magnitudes (Fig. S7). However, all methods including DeepSurge have insignificant ($p > 0.05$) Spearman correlations against the spatial pattern of the sparse Needham estimates.
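The nearest-node co-location and the rank correlation used in these comparisons can be sketched as follows. The helper names `colocate` and `spearman` and the coordinates are illustrative (not the paper's code), the distance is a simple flat-earth approximation adequate at sub-degree separations, and ties in the rank computation are ignored for brevity:

```python
import numpy as np

def colocate(ref_pts, cand_pts, max_deg=0.15):
    """Pair each reference location (lon, lat) with its nearest candidate
    node, keeping only pairs within max_deg degrees."""
    pairs = []
    for i, (lon, lat) in enumerate(ref_pts):
        d = np.hypot(cand_pts[:, 0] - lon, cand_pts[:, 1] - lat)
        j = int(np.argmin(d))
        if d[j] <= max_deg:
            pairs.append((i, j))
    return pairs

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (assumes no ties; a full implementation would average tied ranks)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(rank(x), rank(y))[0, 1])

# Toy example: three reference sites, four candidate nodes (lon, lat)
ref = np.array([[-90.0, 29.0], [-88.5, 30.3], [-82.0, 27.5]])
cand = np.array([[-90.1, 29.05], [-88.45, 30.25], [-82.5, 28.0], [-70.0, 40.0]])
pairs = colocate(ref, cand)  # the third site has no node within 0.15 degrees
rho = spearman([2.5, 4.0, 7.9], [3.0, 3.5, 8.2])  # perfectly monotone pair
```

Because Spearman's correlation depends only on ranks, it compares the spatial *pattern* of return levels while remaining insensitive to the large differences in magnitude between methods.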
In addition to comparing against the historical records, each of these methods’ 100-year return level estimates are cross-compared. Given the wide variability in magnitudes, Spearman’s rank correlation is used as a scale-invariant measure of pattern similarity. As shown in Figure S8, DeepSurge has the best Spearman correlation with every other method, for both the historical and future periods (excepting the high correlation between the Muis et al. (2023) variants). The larger sample of synthetic TCs utilized in our method may allow it to capture a more robust signal and be less subject to random noise, which enables stronger agreement with the three alternatives than they exhibit between themselves.

# S3 Inundation modeling

# S3.1 CA-Surge algorithm

CA-Surge is a simple “bathtub-style” inundation model with frictional attenuation. Conceptually, the algorithm starts with a water surface elevation (surge height) in the open ocean, and fills inland pixel-by-pixel until the water is not able to flow any further. The water depth for a pixel is computed as the difference between the water surface elevation and the ground elevation, with the water surface elevation being lowered (attenuated) as it moves across a pixel, in accordance with its land cover class.

The algorithm uses three input rasters and an attenuation multiplier parameter. All three rasters must be the same size and aligned with each other. DEM-Raster contains a digital elevation model of the study domain; each cell (pixel) gives the height above (or below) mean sea level (in meters). Surge-Raster contains a surge height (in meters) above mean sea level for all cells (pixels) where surge originates (usually over the open ocean near the coast); all other cells (pixels) contain 0. Attenuation-Raster contains the surge attenuation values; each cell (pixel) contains the drop in surge (in meters) as water horizontally traverses the cell.
For example, if the cell contains the value 0.01, it means that the surge will be reduced by 1 centimeter if the surge traverses the cell. Attenuation-Multiplier is a numerical value between 0 and 1 that is applied to the values in the Attenuation-Raster, in accordance with Vafeidis et al. (2019). If the multiplier is 0, then no attenuation will be applied to the surge as it traverses the raster cells (pixels). If the multiplier is 1, then the full attenuation value will be subtracted from the surge as it traverses the raster cells. CA-Surge is then defined by the following algorithm:

# Algorithm 1 CA-Surge

Four internal matrices that match the size of the input rasters are initialized:

• $el$: contains the DEM-Raster data.
• $att$: contains the Attenuation-Raster data.
• $wse$: contains the water surface elevation. The cells are initialized to the values of the Surge-Raster.
• $h$: contains the height of the surge water above the DEM elevations. Each cell is initialized to $wse$ minus $el$. The cell is set to 0 if any of the following conditions is true for the cell: the $wse$ cell contains no data, the $el$ cell contains no data, or $wse$ minus $el$ is less than or equal to 0. Cells that are set to a positive, non-zero value are added to a check list.

The model then iterates through the following steps:

1. If the check list is empty, processing is complete, and the $h$ matrix is output as a raster.
2. For each cell in the check list (the “current cell”), apply the following to each of the current cell’s eight adjacent neighbors (the “neighbor cell”):
(a) Compute $waterSurface = wse - (att \cdot distance \cdot \text{Attenuation-Multiplier})$, where $wse$ and $att$ are from the current cell, and $distance$ is 1.0 for vertical and horizontal neighbors and $\sqrt{2}$ for diagonal neighbors.
(b) Compute $depth = waterSurface - el$, where $el$ is from the neighbor cell.
(c) If $depth$ is greater than 0 and $el$ is not empty, and the neighbor cell has not already been processed or $depth$ is greater than the $h$ of the neighbor cell:
• $wse$ of the neighbor cell is set to $waterSurface$.
• $h$ of the neighbor cell is set to $depth$.
• The neighbor cell is added to the check list of the next iteration.

# S3.2 Validation of inundation modeling

Accurate observations of inundated area from historical storm surge events are generally difficult to come by. Some of the best available data to our knowledge are FEMA high-water marks (HWMs) surveyed following 2005’s Hurricane Katrina by contractor URS Group (Group, 2006a,b,c). This data contains a larger sample of surveyed points than most other HWM surveys, provides HWM elevation, and crucially labels each mark as caused by wave action, riverine flooding, or sustained storm surge, which is not common in other surveys. This enables an accurate evaluation of the surge-only DeepSurge method without complications from wave run-up and riverine flooding. Although the most famous flooding from this event was in Louisiana, much of it was due to or compounded by failures in the levee system; since these factors are outside of the modeling abilities of DeepSurge and CA-Surge, we focus on Mississippi and Alabama in this comparison, which received substantial flooding as well.

We estimate inundation from this data by linearly interpolating a 3-D water level surface from all HWMs, subtracting USGS ground elevation (Danielson & Gesch, 2011) to determine height-above-ground, and filtering all pixels which are not hydrologically connected to the ocean (Fig. S10a). Ground elevation is computed as the midpoint between the median and minimum elevation in each pixel to account for water’s tendency to take the lowest available path.
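The flood-fill iteration of Algorithm 1 (Section S3.1) can be sketched compactly in NumPy-based Python. This is an illustrative, unoptimized re-implementation of the published pseudocode, not the production CA-Surge code; `ca_surge` is a hypothetical function name, and NaN is used for no-data cells:

```python
import math
import numpy as np

def ca_surge(el, att, surge, att_mult=1.0):
    """Minimal sketch of the CA-Surge flood fill (Algorithm 1).
    el: DEM in m above MSL (np.nan = no data); att: per-cell attenuation (m);
    surge: initial water surface elevation at ocean source cells, 0 elsewhere.
    Returns h, the water depth above ground for every cell."""
    rows, cols = el.shape
    wse = surge.astype(float).copy()
    h = wse - el
    h[np.isnan(el) | np.isnan(wse) | (h <= 0)] = 0.0  # init rules for h
    check = [tuple(c) for c in np.argwhere(h > 0)]    # source cells
    while check:
        nxt = []
        for r, c in check:
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < rows and 0 <= nc < cols):
                        continue
                    if np.isnan(el[nr, nc]):          # "el is not empty"
                        continue
                    dist = math.sqrt(2.0) if dr and dc else 1.0
                    ws = wse[r, c] - att[r, c] * dist * att_mult
                    depth = ws - el[nr, nc]
                    # depth > h covers both unprocessed (h == 0) and
                    # improved-depth cells from the (c) condition
                    if depth > 0 and depth > h[nr, nc]:
                        wse[nr, nc] = ws
                        h[nr, nc] = depth
                        nxt.append((nr, nc))
        check = nxt
    return h

# Toy 1x4 transect: ocean cell (el = -5 m, wse = 2 m) filling inland
el = np.array([[-5.0, 0.0, 0.5, 1.8]])
att = np.full((1, 4), 0.5)              # 0.5 m of attenuation per cell
surge = np.array([[2.0, 0.0, 0.0, 0.0]])
h = ca_surge(el, att, surge)
```

In the toy transect, the surge loses 0.5 m per cell traversed, so the first land cell floods to 1.5 m, the second to 0.5 m, and the 1.8 m ridge stays dry.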
The extent of flooding seems to be roughly bounded by the outer extent of high-water mark locations in Fig. S10a, which suggests this method is capturing the underlying hydrodynamics that caused the marks. LandScan data from all inundation pixels is used to estimate inundated populations, which are then aggregated to census tracts (Fig. S10b). DeepSurge and CA-Surge applied to the best track of Katrina (Fig. S10c) reasonably capture the pattern of the HWM-derived results in terms of percentage of population affected at the tract level (Pearson $r = 0.77$, Spearman $r = 0.78$, $p$-values $\ll 0.001$, $R^2 = 0.59$, MAE = 9.1 percentage points).

To validate the combination of DeepSurge and CA-Surge for the entire coastline, we compare our state-level estimates of historical population at risk from 100-year storm surge inundation with those from Crowell et al. (2010) (see their Table 1), who estimate the 100-year coastal flood risk from the extremely high-resolution Flood Insurance Rate Maps (FIRMs) produced by FEMA. These FIRMs are developed with a detailed and manual process combining modeling, observations, expert survey, and community input. Because of the effort required, FIRMs in some areas may be decades old. Additionally, since the maps do not differentiate between causes of flooding, Crowell et al. (2010) manually estimate the separation of surge- and riverine-driven flooding to isolate the former. Lastly, these coastal 100-year flood zones are spatially joined with census block-group population estimates from the 2000 census. Notably, they assume that population density is distributed uniformly within block-groups. While these approximations are possible sources of error, this FIRM-based assessment still represents one of the best available estimates of 100-year surge risk for the whole US coastline.
We find that our historical-period 100-year inundation estimate correlates very highly with Crowell et al.’s totals, with Pearson and Spearman correlations of 0.95 and 0.87, respectively ($p \ll 0.01$). Interestingly, our DeepSurge & CA-Surge method shows a negative bias in population affected for most states (Fig. S11)—most significantly for barely-affected states, e.g., Pennsylvania—which runs counter to the positive biases found against tide gauge observations and high-water mark analysis in previous sections. These differences are perhaps due to the inclusion of wave action effects in the FEMA modeling, or inaccuracies in either (or both) of DeepSurge’s or Crowell’s methodologies.

# S4 Drivers of change

It is difficult to assess the contributions of individual storm characteristics to DeepSurge-predicted surge heights due to the black-box nature of neural networks and the complex, nonlinear, and time-dependent nature of the underlying physics. However, simple parametric models such as SSHPI (Islam et al., 2021, 2022) enable the evaluation of the approximate direction and magnitude of effect that each storm characteristic has on surge height. SSHPI predicts peak surge based on the multiplicative combination of four factors: storm intensity (maximum wind speed), storm size (radius of 50-kt winds), storm translation speed, and bathymetry (distance to the 30-meter isobath).

As Figure S12a-b shows, SSHPI projects a much more homogeneous increase in surge risk across the US coastline than DeepSurge, though the two do broadly agree on the sign of the change south of 35°N, while agreement is more mixed in the Northeast. Disagreements may be largely explained by SSHPI’s indifference to the direction of storm motion; in regions where storms tend to make landfall at indirect angles relative to the coastline (e.g., much of the Northeast, Fig. S2a) or are projected to increasingly do so in the future climate (e.g.
western Florida, Fig. S2b), SSHPI's formulation is less reliable. Nevertheless, since it is forced with the same set of TCs as DeepSurge, disassembling SSHPI into its components is instructive: increasing storm intensities contribute strongly positively to surge height across the domain (Fig. S12c), while slightly larger radii of 50-kt winds—due largely to increasing intensities—contribute further weakly positively (Fig. S12d). Decreasing storm translation speeds in the future period result in lower surges on open coastlines, and slightly larger surges in bays and estuaries (Fig. S12e). In total, increasing storm intensity dominates the trend across nearly the entire coastline (Fig. S12f).

Figure S1: Ensemble mean 100-year synthetic TC intensity for the (a) historical and (b) future periods, and (c) their difference.

Figure S2: (a-b) Average movement direction for all synthetic TCs (of at least Category 1 Saffir-Simpson strength) in the historical period weighted by magnitude of storm translation speed, and the future change. Arrows in panel (b) are 2× scaled relative to (a) for clarity. (c-d) Average synthetic TC movement speed in the historical period weighted by storm intensity, and the future change.

Figure S3: Median estimate of population at risk from a 100-year flood event in the historical and future periods for each state, along with $90\%$ uncertainty intervals, (a) linearly scaled, and (b) logarithmically scaled for clarity in less-affected states.

Figure S4: Empirical cumulative distribution functions (eCDFs) of population at risk from DeepSurge-modeled 100-year storm surge flooding for each state.

Figure S5: (a) ADCIRC domain, triangular elements of the unstructured grid, and bathymetry colormap. (b) Historical TC tracks simulated by ADCIRC used for training.

Figure S6: Estimated spatial mean bias of DeepSurge relative to tide gauge observations.
Note that small sample sizes make the error estimates in the Northeast much less robust.

Figure S7: Empirical cumulative distribution functions (eCDFs) of historical 100-year surge heights from 18 locations across the Gulf Coast estimated by Needham (2014), compared to each modeling method. The methods are: DeepSurge (ours), Gori et al. (2022), Muis et al. (2023), and SSHPI (Islam et al., 2021), with the climate data used to force each model listed in parentheses.

Figure S8: Spearman correlations between methods for the spatial distribution of the 100-year return level, as estimated in each method's respective (a) historical and (b) future period. Asterisks indicate significance at the $95\%$ level. Note that these methods have varying definitions of the historical and future time periods. The methods are: DeepSurge (ours), Gori et al. (2022), Muis et al. (2023), and SSHPI (Islam et al., 2021), with the climate data used to force each model listed in parentheses.

Figure S9: The 100-year surge level estimates from five modeling methods, for their respective historical periods (panel (e): SSHPI with RAFT-CMIP6 1980-2014 forcing). Note that the vastly varying magnitudes necessitate different colorscales, with the colorscale maximum set to the 99th percentile of the data for each.

Figure S10: Inundated (a) area and (b) population at the census-tract level estimated from FEMA high-water marks from Hurricane Katrina (2005), in Mississippi (left half) and Alabama (right half); in panel (a), red dots indicate the locations of high-water marks. (c) Inundated population estimated by our method. Note that since census tracts have roughly the same number of residents, large tracts are not any more important than small tracts. Some islands are part of mainland census tracts.

Figure S11: Comparison between historical 100-year inundation estimates from Crowell et al. (2010) and our method.
They exhibit Pearson and Spearman correlations of 0.95 and 0.87 respectively ($p \ll 0.01$).

Figure S12: Sign of future change in 100-year surge height, as projected by (a) DeepSurge, and (b) SSHPI (red is positive, blue is negative). Percentage change in the future-climate SSHPI-derived surge for all events with return periods of $\geq 100$ years, due to (c) storm intensity, (d) storm size, and (e) storm translation speed. (f) The largest contributing factor at each node.

# References

Balaguru, K., Xu, W., Chang, C.-C., Leung, L. R., Judi, D. R., Hagos, S. M., Wehner, M. F., Kossin, J. P., and Ting, M. Increased U.S. coastal hurricane risk under climate change. Science Advances, 9(14):eadf0259, April 2023. doi: 10.1126/sciadv.adf0259.

Cannon, A. J., Sobie, S. R., and Murdock, T. Q. Bias Correction of GCM Precipitation by Quantile Mapping: How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28(17):6938–6959, September 2015. doi: 10.1175/JCLI-D-14-00754.1.

Chavas, D. R., Camargo, S. J., and Tippett, M. K. Tropical cyclone genesis potential using a ventilated potential intensity, April 2024. URL http://arxiv.org/abs/2404.01572. arXiv:2404.01572 [physics].

Crowell, M., Coulton, K., Johnson, C., Westcott, J., Bellomo, D., Edelman, S., and Hirsch, E. An Estimate of the U.S. Population Living in 100-Year Coastal Flood Hazard Areas. Journal of Coastal Research, 262:201–211, March 2010. doi: 10.2112/JCOASTRES-D-09-00076.1.

Danielson, J. J. and Gesch, D. B. Global multi-resolution terrain elevation data 2010 (GMTED2010).
USGS Open-File Report 2011-1073, U.S. Geological Survey, 2011. URL http://pubs.er.usgs.gov/publication/ofr20111073.

Dietrich, J. C., Westerink, J. J., Kennedy, A. B., Smith, J. M., Jensen, R. E., Zijlema, M., Holthuijsen, L. H., Dawson, C., Luettich, R. A., Powell, M. D., Cardone, V. J., Cox, A. T., Stone, G. W., Pourtaheri, H., Hope, M. E., Tanaka, S., Westerink, L. G., Westerink, H. J., and Cobell, Z. Hurricane Gustav (2008) Waves and Storm Surge: Hindcast, Synoptic Analysis, and Validation in Southern Louisiana. Monthly Weather Review, 139(8):2488–2522, August 2011. doi: 10.1175/2011MWR3611.1.

Emanuel, K. and Rotunno, R. Self-Stratification of Tropical Cyclone Outflow. Part I: Implications for Storm Structure. Journal of the Atmospheric Sciences, 68(10):2236–2249, October 2011. doi: 10.1175/JAS-D-10-05024.1.

Emanuel, K., Ravela, S., Vivant, E., and Risi, C. A Statistical Deterministic Approach to Hurricane Risk Assessment. Bulletin of the American Meteorological Society, 87(3):299–314, March 2006. doi: 10.1175/BAMS-87-3-299.

Emmert-Streib, F., Yang, Z., Feng, H., Tripathi, S., and Dehmer, M. An introductory review of deep learning for prediction models with big data. Frontiers in Artificial Intelligence, 3:4, 2020.

Gori, A., Lin, N., Xi, D., and Emanuel, K. Tropical cyclone climatology change greatly exacerbates US extreme rainfall–surge hazard. Nature Climate Change, 12(2):171–178, February 2022.
doi: 10.1038/s41558-021-01272-7.

Group, U. Final Coastal and Riverine High Water Mark Collection for Hurricane Katrina in Mississippi. Technical report, March 2006a. URL https://www.fema.gov/pdf/hazard/flood/recoverydata/katrina/katrina_ms_hwm_public.pdf.

Group, U. High Water Mark Collection for Hurricane Katrina in Alabama. Technical report, April 2006b. URL https://www.fema.gov/pdf/hazard/flood/recoverydata/katrina/katrina_ms_hwm_public.pdf.

Group, U. High Water Mark Collection for Hurricane Katrina in Louisiana. Technical report, March 2006c. URL https://www.fema.gov/pdf/hazard/flood/recoverydata/katrina/katrina_ms_hwm_public.pdf.

Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.

Holland, G. A Revised Hurricane Pressure–Wind Model. Monthly Weather Review, 136, September 2008. doi: 10.1175/2008MWR2395.1.

Islam, M. R., Lee, C.-Y., Mandli, K. T., and Takagi, H. A new tropical cyclone surge index incorporating the effects of coastal geometry, bathymetry and storm information. Scientific Reports, 11(1):16747, August 2021. doi: 10.1038/s41598-021-95825-7.

Islam, M. R., Satoh, M., and Takagi, H. Tropical Cyclones Affecting Japan Central Coast and Changing Storm Surge Hazard since 1980. Journal of the Meteorological Society of Japan. Ser. II, 100(3):493–507, 2022. doi: 10.2151/jmsj.2022-024.

Jarvinen, B. R., Neumann, C. J., and Davis, M. A. S. A tropical cyclone data tape for the North Atlantic basin, 1886-1983: contents, limitations, and uses. 1984. URL https://repository.library.noaa.gov/view/noaa/7069/.

Kelly, P., Leung, L. R., Balaguru, K., Xu, W., Mapes, B., and Soden, B.
Shape of Atlantic Tropical Cyclone Tracks and the Indian Monsoon. Geophysical Research Letters, 45(19):10,746–10,755, 2018. doi: 10.1029/2018GL080098.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Knapp, K. R., Kruk, M. C., Levinson, D. H., Diamond, H. J., and Neumann, C. J. The International Best Track Archive for Climate Stewardship (IBTrACS): Unifying Tropical Cyclone Data. Bulletin of the American Meteorological Society, 91(3):363–376, March 2010. doi: 10.1175/2009BAMS2755.1.

Knapp, K. R., Diamond, H. J., Kossin, J. P., Kruk, M. C., and Schreck, C. J. International Best Track Archive for Climate Stewardship (IBTrACS) Project, Version 4, North Atlantic, 2018.

Knutson, T., Camargo, S. J., Chan, J. C. L., Emanuel, K., Ho, C.-H., Kossin, J., Mohapatra, M., Satoh, M., Sugi, M., Walsh, K., and Wu, L. Tropical Cyclones and Climate Change Assessment: Part II: Projected Response to Anthropogenic Warming. Bulletin of the American Meteorological Society, 101(3):E303–E322, March 2020. doi: 10.1175/BAMS-D-18-0194.1.

LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436–444, May 2015. doi: 10.1038/nature14539.
Lipari, S., Balaguru, K., Rice, J., Feng, S., Xu, W., Berg, L. K., and Judi, D. Amplified threat of tropical cyclones to US offshore wind energy in a changing climate. Communications Earth & Environment, 5(1):1–10, December 2024. doi: 10.1038/s43247-024-01887-6.

Luettich, R. A., Westerink, J. J., and Scheffner, N. W. ADCIRC: an advanced three-dimensional circulation model for shelves, coasts, and estuaries. Report 1, Theory and methodology of ADCIRC-2DDI and ADCIRC-3DL. Coastal Engineering Research Center (US), 1992.

Luettich Jr., R. A. and Westerink, J. J. A solution for the vertical variation of stress, rather than velocity, in a three-dimensional circulation model. International Journal for Numerical Methods in Fluids, 12(10):911–928, 1991. doi: 10.1002/fld.1650121002.

Marks, D. G. The Beta and advection model for hurricane track forecasting. 1992. URL https://repository.library.noaa.gov/view/noaa/7184.

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, 2015. URL https://www.
tensorflow.org/.

Muis, S., Aerts, J. C. J. H., Antolínez, J. A. A., Dullaart, J. C., Duong, T. M., Erikson, L., Haarsma, R. J., Apecechea, M. I., Mengel, M., Le Bars, D., O'Neill, A., Ranasinghe, R., Roberts, M. J., Verlaan, M., Ward, P. J., and Yan, K. Global Projections of Storm Surges Using High-Resolution CMIP6 Climate Models. Earth's Future, 11(9):e2023EF003479, 2023. doi: 10.1029/2023EF003479.

Murakami, H. and Wang, B. Patterns and frequency of projected future tropical cyclone genesis are governed by dynamic effects. Communications Earth & Environment, 3(1):1–10, April 2022. doi: 10.1038/s43247-022-00410-z.

Needham, H. A Data-Driven Storm Surge Analysis for the U.S. Gulf Coast. Doctor of Philosophy dissertation, Louisiana State University and Agricultural and Mechanical College, March 2014. URL https://repository.lsu.edu/gradschool_dissertations/3250.

Pringle, W. J., Wirasaet, D., Roberts, K. J., and Westerink, J. J. Global storm tide modeling with ADCIRC v55: unstructured mesh design and performance. Geoscientific Model Development, 14(2):1125–1145, February 2021. doi: 10.5194/gmd-14-1125-2021.

Rice, J. R., Balaguru, K., Staid, A., Xu, W., and Judi, D. Projected increases in tropical cyclone-induced U.S. electric power outage risk. Environmental Research Letters, 20(3):034030, February 2025. doi: 10.1088/1748-9326/adad85.

Vafeidis, A. T., Schuerch, M., Wolff, C., Spencer, T., Merkens, J. L., Hinkel, J., Lincke, D., Brown, S., and Nicholls, R. J.
Water-level attenuation in global-scale assessments of exposure to coastal flooding: a sensitivity analysis. Natural Hazards and Earth System Sciences, 19(5):973–984, May 2019. doi: 10.5194/nhess-19-973-2019.

Willoughby, H. E., Darling, R. W. R., and Rahn, M. E. Parametric Representation of the Primary Hurricane Vortex. Part II: A New Family of Sectionally Continuous Profiles. Monthly Weather Review, 134, April 2006. doi: 10.1175/MWR3106.1.

Xu, W., Balaguru, K., August, A., Lalo, N., Hodas, N., DeMaria, M., and Judi, D. Deep Learning Experiments for Tropical Cyclone Intensity Forecasts. Weather and Forecasting, 36(4):1453–1470, August 2021. doi: 10.1175/WAF-D-20-0104.1.

Xu, W., Balaguru, K., Judi, D. R., Rice, J., Leung, L. R., and Lipari, S. A North Atlantic synthetic tropical cyclone track, intensity, and rainfall dataset. Scientific Data, 11(1):130, January 2024. doi: 10.1038/s41597-024-02952-7.
Storm surge is one of the deadliest hazards posed by tropical cyclones (TCs), yet assessing its current and future risk is difficult due to the phenomenon's rarity and physical complexity. Recent advances in artificial intelligence applications to natural hazard modeling suggest a new avenue for addressing this problem. We utilize a deep learning storm surge model to efficiently estimate coastal surge risk in the United States from 900,000 synthetic TC events, accounting for projected changes in TC behavior and sea levels. The derived historical 100-year surge (the event with a 1% yearly exceedance probability) agrees well with historical observations and other modeling techniques. When coupled with an inundation model, we find that heightened TC intensities and sea levels by the end of the century result in a 50% increase in population at risk. Key findings include markedly heightened risk in Florida, and critical thresholds identified in Georgia and South Carolina.
[ "physics.ao-ph", "cs.LG" ]
# 1 Introduction Counterfactual generation is fundamental to causal reasoning [37, 40, 1], allowing us to explore hypothetical scenarios, such as How would this patient’s disease have progressed if treatment A had been administered instead of treatment B? The ability to answer such causal questions is important across various domains, such as healthcare [46], fairness [26, 65] and scientific discovery [33]. There has been a growing interest in generating counterfactual images using deep generative models [23, 67], aiming to simulate how visual data would change under hypothetical interventions. Recent works extend the framework of Structural Causal Models (SCMs) [38, 40, 2] with deep generative models, performing counterfactual generation via a three-step procedure: abduction-action-prediction [36, 9, 62, 61, 8, 47]. These works typically parameterise their image-generating mechanisms with normalising flows [35, 59], GANs [14], VAEs [22], and HVAEs [5, 56]. However, these generative backbones often exhibit trade-offs between fidelity and flexibility [43, 64]. In this work, we explore diffusion models as a more powerful alternative for counterfactual inference. Diffusion models [50, 54, 17] have emerged as the state-of-the-art approach for image synthesis, achieving unprecedented fidelity and perceptual quality [10, 41]. Recent works have utilised diffusion models for counterfactual generation [34, 24, 46, 45, 3, 4, 29, 58, 44, 12, 53, 39, 25]. A common approach involves DDIM inversion to encode observed images into latent representations, followed by conditional generation under modified attributes. Conditioning is typically enforced via discriminative score functions, either with external classifiers [10] or through classifier-free guidance [16]. This combination of DDIM inversion and conditional decoding has become a dominant paradigm in diffusion-based image editing [7, 31, 57, 15, 11]. 
Building upon this paradigm, we investigate its use for high-fidelity counterfactual inference via a novel classifier-free guidance strategy. Classifier-Free Guidance (CFG) [16] has become the standard approach for conditioning diffusion models without requiring external classifiers. In counterfactual generation, CFG plays a crucial role in ensuring that the intended intervention is faithfully reflected in the output. While recent works have proposed refinements to CFG to enhance sample fidelity and controllability [6, 21, 27], standard CFG applies a single global guidance weight across all conditioning attributes, regardless of which ones are intended to change. This can lead to over-editing or unintended modifications of attributes that should remain fixed—a phenomenon known as attribute amplification [63], where invariant factors are spuriously exaggerated. While Xia et al. [63] observed this issue in HVAE-based counterfactual models [9] as a result of training-time entanglement, we show that a similar failure mode can arise at inference time in diffusion models due to the indiscriminate application of global guidance. Such behaviour not only violates the underlying causal assumptions—by modifying attributes outside the intervention—but can also cause the generation trajectory to drift away from the original data manifold, degrading identity preservation [66, 31, 55]. To address the limitations of standard CFG, we propose Decoupled Classifier-Free Guidance (DCFG), a model-agnostic method that requires no changes to the diffusion model or loss function, and only modifies the conditioning embedding. DCFG can be readily applied to any diffusion model employing CFG. A key ingredient in our approach is an attribute-split conditioning embedding strategy, which disentangles semantic attributes in the embedding space and enables selective masking and group-wise modulation at inference time. 
Unlike standard CFG—which applies a single global guidance weight across all conditioning attributes—DCFG assigns separate weights to attribute groups, allowing for fine-grained, interpretable control over the generative process. While conceptually related to compositional diffusion approaches, our method differs significantly: Shen et al. [49] apply pixel-wise spatial masks to modulate guidance locally, and Liu et al. [28] rely on multiple conditional diffusion models fused via shared score functions. In contrast, DCFG uses a single model and modulates guidance at the semantic attribute level. For counterfactual generation, we instantiate DCFG by partitioning attributes into intervened and invariant sets based on their roles in a structural causal graph, and apply distinct guidance to each group. Crucially, by decoupling guidance and focusing it solely on the intended intervention, DCFG reduces the risk of the generation trajectory drifting away from the original data manifold [66, 31, 55]. While our experiments focus on causal groupings, the DCFG framework is general and supports arbitrary partitions of semantic attributes. The contributions of this paper are:

• We introduce an attribute-split embedding strategy that disentangles semantic conditioning variables in the embedding space, enabling selective masking and modular guidance for DCFG.

• We propose Decoupled Classifier-Free Guidance (DCFG), a simple and flexible extension of CFG that assigns separate guidance weights to groups of attributes. DCFG is model-agnostic, training-compatible with existing pipelines, and supports arbitrary attribute groupings at inference time.

• We instantiate DCFG for counterfactual image generation by grouping attributes into intervened and invariant sets based on a structural causal graph, and show improved alignment with intervention targets while preserving irrelevant features.

# 2 Background

# 2.1 Causality

Structural Causal Models.
SCMs [37] consist of a triplet $\langle U, A, F \rangle$, where $U = \{u_i\}_{i=1}^{K}$ denotes the set of exogenous (latent) variables, $A = \{a_i\}_{i=1}^{K}$ the set of endogenous (observed) variables, and $F = \{f_i\}_{i=1}^{K}$ a collection of structural assignments such that each variable $a_k$ is determined by a function $f_k$ of its parents $\mathbf{pa}_k \subseteq A \setminus a_k$ and its corresponding noise $u_k$: $a_k := f_k(\mathbf{pa}_k, u_k)$. SCMs enable causal reasoning and interventions via the $do$-operator, e.g., setting a variable $a_k$ to a fixed value $c$ through $do(a_k := c)$. In this work, we focus on generating image counterfactuals and implement the underlying image synthesis mechanism using diffusion models.

Counterfactual inference. We define an image as $\mathbf{x}$, which is generated from its causal parents $\mathbf{pa}$ and an exogenous noise variable $\mathbf{u}$ via a structural assignment: $\mathbf{x} := f(\mathbf{u}, \mathbf{pa})$. Counterfactual reasoning proceeds in three conceptual steps [37]: (1) Abduction: infer the latent noise $\mathbf{u}$ from the observed data and its parents: $\mathbf{u} := f^{-1}(\mathbf{x}, \mathbf{pa})$; (2) Action: apply an intervention to alter selected parent variables, yielding $\widetilde{\mathbf{pa}}$; (3) Prediction: propagate the effect of the intervention through the model to compute the counterfactual $\widetilde{\mathbf{x}} := f(\mathbf{u}, \widetilde{\mathbf{pa}})$. Recent advancements have sought to implement these steps using deep learning models, with VAE [36], HVAE [9], and diffusion models [44, 45, 24, 12].
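The abduction-action-prediction recipe can be made concrete with a toy invertible mechanism. This is only a sketch with an assumed linear $f$, standing in for the learned image-generating model:

```python
import numpy as np

# Toy invertible mechanism x := f(u, pa) = u + 2 * pa. The linear form
# is purely illustrative; in the paper f is a diffusion model.
f = lambda u, pa: u + 2.0 * pa
f_inv = lambda x, pa: x - 2.0 * pa

pa = np.array([1.0])   # observed parent value (e.g., treatment = A)
u = np.array([0.3])    # true exogenous noise (unknown in practice)
x = f(u, pa)           # observed outcome

# (1) Abduction: recover the noise from the observation and its parents.
u_hat = f_inv(x, pa)
# (2) Action: intervene, do(pa := 0), e.g., switch to treatment B.
pa_cf = np.array([0.0])
# (3) Prediction: replay the mechanism with the recovered noise.
x_cf = f(u_hat, pa_cf)
```

Because $f$ is invertible given its parents, the recovered noise `u_hat` equals the true `u`, so the counterfactual differs from the observation only through the intervened parent.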
The general idea is to model the structural assignment $f_\theta$ and its inverse $f_\theta^{-1}$ using neural networks with trainable parameters $\theta$.

# 2.2 Diffusion Models

DMs [51, 17] are latent variable models trained to generate images by gradually removing Gaussian noise from $\mathbf{x}_T \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ over $T$ timesteps. Given a clean sample $\mathbf{x}_0 \sim p_0(\mathbf{x})$, the forward process at timestep $t$ follows:

$$ \mathbf{x}_t = \sqrt{\alpha_t}\,\mathbf{x}_0 + \sqrt{1-\alpha_t}\,\epsilon, \quad \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), $$

where $\{\alpha_t\}_{t=1}^{T}$ is a variance schedule defined over $t \in [1, T]$. To learn the reverse process, a parameterized network $\epsilon_\theta(\mathbf{x}_t, t, \mathbf{c})$ is trained to predict the added noise from noisy inputs. We adopt the conditional diffusion model formulation, where $\mathbf{c}$ denotes the embedding of semantic parent attributes $\mathbf{pa}$ used as conditioning input. The training objective minimizes the noise prediction loss:

$$ \min_\theta \; \mathbb{E}_{\mathbf{x}_0, \epsilon, t}\left[ \|\epsilon - \epsilon_\theta(\mathbf{x}_t, t, \mathbf{c})\|^2 \right]. $$

At inference time, data samples are generated by progressively denoising $\mathbf{x}_t$. Following Eq. (12) in Song et al.
[52], the denoising step from $\mathbf{x}_t$ to $\mathbf{x}_{t-1}$ is:

$$ \mathbf{x}_{t-1} = \sqrt{\alpha_{t-1}} \underbrace{\left( \frac{\mathbf{x}_t - \sqrt{1-\alpha_t}\,\epsilon_\theta(\mathbf{x}_t, t, \mathbf{c})}{\sqrt{\alpha_t}} \right)}_{\text{predicted } \mathbf{x}_0} + \underbrace{\sqrt{1-\alpha_{t-1}-\sigma_t^2}\cdot \epsilon_\theta(\mathbf{x}_t, t, \mathbf{c})}_{\text{direction to } \mathbf{x}_t} + \underbrace{\sigma_t \epsilon_t}_{\text{noise}}, $$

where $\epsilon_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$. Setting $\sigma_t = 0$ yields a deterministic sampling process known as DDIM [52], which defines an invertible trajectory between data and latent space. Following [24, 44, 12, 39], we adopt this DDIM formulation for counterfactual generation, as detailed below.

Abduction. We implement the abduction function $\mathbf{u} := f_\theta^{-1}(\mathbf{x}, \mathbf{pa})$ using the DDIM forward trajectory. Given an observed image $\mathbf{x}_0$ (i.e.
$\mathbf{x}$ in $f_\theta^{-1}$) and conditioning vector $\mathbf{c}$ (the embedding of semantic parents $\mathbf{pa}$), the latent $\mathbf{x}_T$ serves as a deterministic estimate of the exogenous noise $\mathbf{u}$:

$$ \mathbf{x}_{t+1} = \sqrt{\alpha_{t+1}} \cdot \hat{\mathbf{x}}_0 + \sqrt{1-\alpha_{t+1}} \cdot \epsilon_\theta(\mathbf{x}_t, t, \mathbf{c}), \quad t = 0, \ldots, T-1, $$

where the clean estimate $\hat{\mathbf{x}}_0$ at each step is computed as:

$$ \hat{\mathbf{x}}_0 = \frac{1}{\sqrt{\alpha_t}} \left( \mathbf{x}_t - \sqrt{1-\alpha_t} \cdot \epsilon_\theta(\mathbf{x}_t, t, \mathbf{c}) \right). $$

Action. We apply an intervention to the semantic attributes $\mathbf{pa}$ (e.g., $do(\mathsf{Male} = 1)$), and propagate the effect through the causal graph using invertible flows as in [36, 9]. This yields the counterfactual attribute vector $\widetilde{\mathbf{pa}}$ and its embedding $\tilde{\mathbf{c}}$.

Prediction. We implement the structural assignment $\widetilde{\mathbf{x}} := f_\theta(\mathbf{u}, \widetilde{\mathbf{pa}})$ under the modified condition $\tilde{\mathbf{c}}$ using the DDIM reverse trajectory, with $\mathbf{u} = \mathbf{x}_T$ the exogenous noise estimated in eq. (4):

$$ \mathbf{x}_{t-1} = \sqrt{\alpha_{t-1}} \cdot \hat{\mathbf{x}}_0 + \sqrt{1-\alpha_{t-1}} \cdot \epsilon_\theta(\mathbf{x}_t, t, \tilde{\mathbf{c}}), \quad t = T, \ldots, 1, $$

where $\hat{\mathbf{x}}_0$ is computed as in eq. (5).
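The abduction and prediction trajectories above can be sketched as two deterministic loops around a trained noise predictor. This is a minimal illustration, not the authors' implementation: `eps_theta` stands in for the trained network $\epsilon_\theta(\mathbf{x}_t, t, \mathbf{c})$, and `alphas` for an assumed schedule $\{\alpha_t\}$.

```python
import numpy as np

def ddim_invert(x0, c, eps_theta, alphas):
    """Abduction: run the deterministic DDIM trajectory forward
    (eqs. 4-5) to estimate the exogenous noise u = x_T."""
    x = x0
    for t in range(len(alphas) - 1):
        eps = eps_theta(x, t, c)
        # Clean estimate x0_hat at step t (eq. 5).
        x0_hat = (x - np.sqrt(1 - alphas[t]) * eps) / np.sqrt(alphas[t])
        x = np.sqrt(alphas[t + 1]) * x0_hat + np.sqrt(1 - alphas[t + 1]) * eps
    return x

def ddim_decode(xT, c_tilde, eps_theta, alphas):
    """Prediction: decode u = x_T under the (counterfactual) condition
    c_tilde with the sigma_t = 0 reverse process (eq. 6)."""
    x = xT
    for t in range(len(alphas) - 1, 0, -1):
        eps = eps_theta(x, t, c_tilde)
        x0_hat = (x - np.sqrt(1 - alphas[t]) * eps) / np.sqrt(alphas[t])
        x = np.sqrt(alphas[t - 1]) * x0_hat + np.sqrt(1 - alphas[t - 1]) * eps
    return x

# Sanity check with a dummy denoiser: keeping the condition unchanged,
# inversion followed by decoding should reconstruct the input.
alphas = np.linspace(0.999, 0.1, 10)
x0 = np.linspace(-1.0, 1.0, 8)
dummy_eps = lambda x, t, c: np.zeros_like(x)
u = ddim_invert(x0, None, dummy_eps, alphas)
x_rec = ddim_decode(u, None, dummy_eps, alphas)
```

The round-trip check at the end is the standard sanity test for DDIM inversion; with a real denoiser the reconstruction holds only up to discretization error.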
The final output $\tilde { \mathbf { x } } _ { 0 }$ is the predicted counterfactual $\tilde { \mathbf { x } }$ In practice, decoding under the counterfactual condition c˜ using the conditional denoiser alone may be insufficient for producing effective counterfactuals. Additional guidance is often required to steer generation toward the desired intervention [44, 45, 24, 12, 58, 53, 39, 25]. In this work, we adopt classifier-free guidance (CFG) to enhance counterfactual fidelity and alignment with the specified intervention. # 2.3 Classifier-Free Guidance CFG [16] is a widely adopted technique in conditional diffusion models. It enables conditional generation without requiring an external classifier by training a single denoising model to operate in both conditional and unconditional modes. During training, the model learns both $p _ { \theta } ( \mathbf { x } _ { t } \mid \mathbf { c } )$ and $p _ { \theta } ( \mathbf { x } _ { t } \mid \mathbf { \theta } )$ by randomly replacing c with a null token $\varnothing$ . At inference time, CFG samples from a sharpened posterior: $$ p ^ { \omega } ( \mathbf { x } _ { t } \mid \mathbf { c } ) \propto p ( \mathbf { x } _ { t } ) \cdot p ( \mathbf { c } \mid \mathbf { x } _ { t } ) ^ { \omega } , $$ where $\omega \geq 0$ is a guidance weight controlling the strength of conditioning. This corresponds to interpolating between the unconditional and conditional scores: $$ \begin{array} { r } { \nabla _ { \mathbf { x } _ { t } } \log p ^ { \omega } ( \mathbf { x } _ { t } \mid \mathbf { c } ) = \nabla \log p ( \mathbf { x } _ { t } ) + \omega \cdot \left( \nabla \log p ( \mathbf { x } _ { t } \mid \mathbf { c } ) - \nabla \log p ( \mathbf { x } _ { t } ) \right) . 
} \end{array} $$ In practice, this is implemented by interpolating denoised predictions: $$ \epsilon_{\mathrm{CFG}}(\mathbf{x}_t, t, \mathbf{c}) = \epsilon_{\theta}(\mathbf{x}_t, t, \varnothing) + \omega \cdot \left( \epsilon_{\theta}(\mathbf{x}_t, t, \mathbf{c}) - \epsilon_{\theta}(\mathbf{x}_t, t, \varnothing) \right). $$ With CFG, abduction is the same as in eq. (4), and action remains unchanged. The only difference lies in prediction, where the conditional denoiser is replaced with the guided score $\epsilon_{\mathrm{CFG}}(\mathbf{x}_t, t, \tilde{\mathbf{c}})$ to enhance counterfactual effectiveness [45, 24]: $$ \begin{array}{rl} \hat{\mathbf{x}}_0 &= \displaystyle \frac{1}{\sqrt{\alpha_t}} \left( \mathbf{x}_t - \sqrt{1 - \alpha_t} \cdot \epsilon_{\mathrm{CFG}}(\mathbf{x}_t, t, \tilde{\mathbf{c}}) \right), \\ \mathbf{x}_{t-1} &= \sqrt{\alpha_{t-1}} \cdot \hat{\mathbf{x}}_0 + \sqrt{1 - \alpha_{t-1}} \cdot \epsilon_{\mathrm{CFG}}(\mathbf{x}_t, t, \tilde{\mathbf{c}}). \end{array} $$ Despite its effectiveness, standard CFG applies a single global guidance weight $\omega$ uniformly across the entire counterfactual embedding $\tilde{\mathbf{c}}$, which typically encodes multiple attributes, some of which may not have been altered during the intervention. In counterfactual generation, however, only the subset of attributes in $\tilde{\mathbf{c}}$ affected by the intervention should be emphasized, while the remaining attributes should remain invariant. Applying the same guidance strength to all elements of $\tilde{\mathbf{c}}$ violates this principle and can cause unintended changes to invariant attributes.
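The interpolation of denoised predictions in eq. (9) is a single affine combination; a toy sketch (plain lists stand in for the denoiser's conditional and unconditional noise predictions):

```python
# Sketch of the CFG combination: eps_uncond + omega * (eps_cond - eps_uncond).
# omega = 0 recovers the unconditional prediction, omega = 1 the conditional
# one; omega > 1 extrapolates past it, sharpening the conditioning.
def eps_cfg(eps_cond, eps_uncond, omega):
    return [u + omega * (c - u) for c, u in zip(eps_cond, eps_uncond)]

print(eps_cfg([2.0, -1.0], [1.0, 0.0], 2.0))  # → [3.0, -2.0]
```

Note that a single `omega` scales every coordinate of the difference equally; this is exactly the uniformity that the decoupled formulation below relaxes.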
This misalignment is called attribute amplification [63], which violates the causal relationships pre-defined in structural causal models and undermines counterfactual soundness [32]. To address these limitations, we propose a structured alternative that assigns separate guidance weights to semantically or causally defined groups of attributes.
# 3 Method
In this section, we present our Decoupled Classifier-Free Guidance (DCFG) for counterfactual image generation. We begin by introducing an attribute-split conditioning embedder (section 3.1), which explicitly separates individual attributes in the embedding space to enable selective control. Building on this, we then describe our DCFG formulation (section 3.2), which allows distinct guidance strengths to be applied to different subsets of attributes. Finally, we present how DCFG is integrated into DDIM-based counterfactual inference (section 3.3), detailing its application across abduction, action, and prediction steps using causally defined attribute groupings.
# 3.1 Attribute-Split Conditioning Embedding
In practice, raw conditioning inputs such as discrete labels or structured attributes (e.g., a patient’s sex, race, disease status) are not used directly in diffusion models but transformed into dense vectors using embedding functions, typically small MLPs [10], convolutional encoders [68], or transformer-based text encoders [16, 42]. These embeddings align semantic or categorical inputs with the model’s internal representation space, but conventional designs often entangle multiple attributes into a single vector, making it difficult to control them independently during sampling. To address this, we introduce an attribute-split conditioning embedding that preserves the identity of each attribute in the embedding space. Let $pa_i$ denote the raw value of the $i$-th parent attribute (e.g., a binary indicator or scalar).
Each $pa_i$ is embedded independently via a dedicated MLP $\mathcal{E}_i : \mathbb{R}^{d_i} \to \mathbb{R}^{d}$, and the final condition vector is formed by concatenating the outputs: $$ \mathbf{c} = \operatorname{concat}\left( \mathcal{E}_1(pa_1), \ldots, \mathcal{E}_K(pa_K) \right), \quad \mathbf{c} \in \mathbb{R}^{K \cdot d}. $$ This architecture provides a flexible representation where each attribute is explicitly disentangled at the embedding level. As a result, we can selectively null-tokenize or modulate individual attributes at inference time, enabling fine-grained control. Throughout the rest of the paper, we denote the semantic attribute vector as $\mathbf{pa}$ and the corresponding embedding vector as $\mathbf{c}$, as defined in eq. (12).
# 3.2 Decoupled Classifier-Free Guidance
To overcome the limitations of CFG and enable more precise, causally aligned control in counterfactual image generation, we propose Decoupled Classifier-Free Guidance (DCFG). Rather than applying a single guidance weight uniformly to the entire conditioning vector, we partition the semantic attributes $\mathbf{pa}$ into $M$ disjoint groups $\mathbf{pa}^{(1)}, \ldots, \mathbf{pa}^{(M)}$, and apply a separate guidance weight $\omega_m$ to each group. Let $\mathbf{pa} = (pa_1, \ldots, pa_K)$ denote the vector of semantic parent attributes. Under the assumption that the groups $\mathbf{pa}^{(1)}, \ldots, \mathbf{pa}^{(M)}$ are conditionally independent given the latent variable $\mathbf{x}_t$, we obtain the following factorized proxy posterior: Proposition 1 (Proxy Posterior for DCFG).
Under the assumption $p(\mathbf{pa} \mid \mathbf{x}_t) = \prod_{m=1}^{M} p(\mathbf{pa}^{(m)} \mid \mathbf{x}_t)$, the sharpened proxy posterior becomes: $$ p^{\omega}(\mathbf{x}_t \mid \mathbf{pa}) \propto p(\mathbf{x}_t) \cdot \prod_{m=1}^{M} p(\mathbf{pa}^{(m)} \mid \mathbf{x}_t)^{\omega_m}, $$ where $\omega_m \geq 0$ controls the guidance strength for group $m$. A full derivation and gradient-based justification for this proxy posterior are provided in Appendix B. The corresponding gradient used in score-based diffusion sampling is: $$ \nabla_{\mathbf{x}_t} \log p^{\omega}(\mathbf{x}_t \mid \mathbf{pa}) = \nabla \log p(\mathbf{x}_t) + \sum_{m=1}^{M} \omega_m \cdot \left( \nabla \log p(\mathbf{x}_t \mid \mathbf{pa}^{(m)}) - \nabla \log p(\mathbf{x}_t) \right). $$ In practice, we encode $\mathbf{pa}$ into a dense conditioning vector $\mathbf{c}$ using the attribute-split embedding described in Section 3.1. For each group $m$, we construct a masked embedding $\underline{\mathbf{c}}^{(m)}$ that retains only the embeddings for $\mathbf{pa}^{(m)}$ and replaces all others with null tokens (represented here as zero vectors): $$ \underline{\mathbf{c}}^{(m)} = \mathrm{concat}\left( \delta_1^{(m)} \cdot \mathcal{E}_1(pa_1), \ldots, \delta_K^{(m)} \cdot \mathcal{E}_K(pa_K) \right), \quad \delta_i^{(m)} = \begin{cases} 1, & \text{if } pa_i \in \mathbf{pa}^{(m)} \\ 0, & \text{otherwise} \end{cases}
$$ The final guided score used in the diffusion model is computed as: $$ \epsilon_{\mathrm{DCFG}}(\mathbf{x}_t, t, \mathbf{c}) = \epsilon_{\theta}(\mathbf{x}_t, t, \varnothing) + \sum_{m=1}^{M} \omega_m \cdot \left( \epsilon_{\theta}(\mathbf{x}_t, t, \underline{\mathbf{c}}^{(m)}) - \epsilon_{\theta}(\mathbf{x}_t, t, \varnothing) \right). $$ The proposed DCFG framework is highly flexible, as it allows arbitrary groupings of attributes, regardless of whether attributes within a group are mutually independent. The only assumption required is that different groups are conditionally independent given the latent variable $\mathbf{x}_t$. This flexibility enables a wide range of configurations. For instance, setting $M = 1$ recovers standard global CFG, while increasing $M$ provides progressively finer-grained control, up to per-attribute guidance ($M = K$) as the extreme case in which all attributes are assumed independent of each other.
# 3.3 DCFG for Counterfactual Generation
We now detail how DCFG is integrated into DDIM-based counterfactual inference. Abduction. The abduction step proceeds as in eq. (4), where the conditioning vector $\mathbf{c}$ is obtained by embedding the semantic parent attributes $\mathbf{pa}$ using the attribute-split encoder defined in eq. (12). Action. As in previous setups, we apply a causal intervention to obtain a modified semantic vector $\widetilde{\mathbf{pa}}$. This is then embedded into the counterfactual conditioning vector $\tilde{\mathbf{c}}$ via the attribute-split embedder: $$ \tilde{\mathbf{c}} = \mathrm{concat}\left( \mathcal{E}_1(\widetilde{pa}_1), \ldots, \mathcal{E}_K(\widetilde{pa}_K) \right). $$ Prediction.
The prediction step uses the DCFG-guided reverse DDIM trajectory: $$ \hat{\mathbf{x}}_0 = \frac{1}{\sqrt{\alpha_t}} \left( \mathbf{x}_t - \sqrt{1 - \alpha_t} \cdot \epsilon_{\mathrm{DCFG}}(\mathbf{x}_t, t, \tilde{\mathbf{c}}) \right), $$ $$ \mathbf{x}_{t-1} = \sqrt{\alpha_{t-1}} \cdot \hat{\mathbf{x}}_0 + \sqrt{1 - \alpha_{t-1}} \cdot \epsilon_{\mathrm{DCFG}}(\mathbf{x}_t, t, \tilde{\mathbf{c}}), $$ where $\epsilon_{\mathrm{DCFG}}(\mathbf{x}_t, t, \tilde{\mathbf{c}})$ is computed as in eq. (16) using the counterfactual conditioning embedding. For the counterfactual generation task, we follow a two-group partitioning of attributes based on the causal graph. The affected group $\mathbf{pa}^{\mathrm{aff}}$ consists of attributes that are directly intervened on, along with their causal descendants. The invariant group $\mathbf{pa}^{\mathrm{inv}}$ comprises attributes that are not affected by the intervention and are expected to remain unchanged. These groups are assumed to be conditionally independent given the latent $\mathbf{x}_t$, consistent with the d-separation implied by the post-intervention causal graph. Under this setup, eq. (16) is instantiated with $M = 2$ groups, applying separate guidance weights $\omega_{\mathrm{aff}}$ and $\omega_{\mathrm{inv}}$ to the affected and invariant groups, respectively. In practice, we find that $\omega_{\mathrm{inv}} = 1.0$ is typically sufficient when $\omega_{\mathrm{aff}}$ is small. However, as $\omega_{\mathrm{aff}}$ increases, the resulting intervention may introduce unintended drift in $\mathbf{pa}^{\mathrm{inv}}$ due to more aggressive changes to the image. In such cases, raising $\omega_{\mathrm{inv}}$ can help maintain the desired invariance.
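The masked-embedding construction of eq. (15) and the grouped combination of eq. (16) can be sketched as follows. This is a toy illustration under stated assumptions: per-attribute embeddings are plain lists of width `d`, the null token is a zero vector, and `eps` is a stand-in for the trained denoiser (all names illustrative).

```python
# Sketch of DCFG: build one masked embedding per attribute group (null = zero
# vector, eq. 15), then combine guided scores with per-group weights (eq. 16).
def masked_embedding(embs, group, d):
    """Keep embeddings of attribute indices in `group`; null all others."""
    c = []
    for i, e in enumerate(embs):
        c.extend(e if i in group else [0.0] * d)
    return c

def eps_dcfg(eps, x, t, embs, groups, omegas, d):
    null = [0.0] * (len(embs) * d)
    base = eps(x, t, null)                      # unconditional prediction
    out = list(base)
    for group, w in zip(groups, omegas):        # e.g. groups = [aff, inv]
        e_m = eps(x, t, masked_embedding(embs, group, d))
        out = [o + w * (g - b) for o, g, b in zip(out, e_m, base)]
    return out

# Toy denoiser that just sums the conditioning vector; with a single group
# containing every attribute and omega = 1, DCFG reduces to plain conditioning.
toy_eps = lambda x, t, c: [sum(c)]
print(eps_dcfg(toy_eps, None, 0, [[1.0], [2.0]], [{0, 1}], [1.0], 1))  # → [3.0]
```

The two-group counterfactual setting corresponds to `groups = [aff_indices, inv_indices]` with weights `[omega_aff, omega_inv]`.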
Determining the optimal $\left( \omega _ { \mathrm { a f f } } , \omega _ { \mathrm { i n v } } \right)$ pair remains an open problem for future exploration. Fig. A.8 visualizes the effect of $\omega _ { \mathrm { i n v } }$ . # 4 Experiments In this section, we demonstrate the benefits of the proposed approach across three public datasets. For each dataset, we train a diffusion model with the same architecture and training protocol, detailed in appendix C. We compare our DCFG against the standard CFG baseline. In all results, settings labeled as $\omega = X$ correspond to standard classifier-free guidance (CFG) with a global guidance weight. In contrast, configurations denoted by $\omega _ { \mathrm { a f f } } = X$ , $\omega _ { \mathrm { i n v } } = Y$ represent our proposed DCFG, where separate guidance weights are applied to the intervened and invariant attribute groups, respectively. Following Melistas et al. [30], Monteiro et al. [32], we evaluate counterfactual quality using two metrics. Effectiveness $( \Delta )$ : Measured by a pretrained classifier as the change in AUROC for intervened attributes relative to $\omega = 1 . 0$ (no CFG). Higher $\Delta$ indicates stronger intervention effect; large $\Delta$ on invariant attributes indicates unintended amplification. Reversibility: Assesses how well counterfactuals can be reversed to the original image using inverse interventions. We report MAE and LPIPS; lower values indicate better identity preservation. See appendix A.2 for details. # 4.1 CelebA We begin our evaluation on the CelebA-HQ dataset [20], using Smiling, Male, and Young as independent binary parent attributes. Although some variables (e.g., Smiling and Mouth Open) may be causally linked in the real world, we deliberately adopt this simplified setup to isolate and diagnose inference-time failures caused by global CFG. 
Under this assumption, any unintended change in non-intervened attributes can be confidently attributed to attribute amplification by standard CFG, rather than valid downstream causal effects. Refer to appendix D.1 for more details. Figure 1: Comparison of $\Delta$ metrics under different interventions in CelebA-HQ. Left: Intervention on Smiling. Right: Intervention on Young. Both use baseline $\omega = 1 . 0$ . Under global CFG, increasing $\omega$ boosts the intended attribute but amplifies non-target attributes. DCFG achieves similar improvements on the target attribute while mitigating amplification. See appendix D.2 for full quantitative results. Figure 2: Counterfactual generations in CelebA-HQ $( 6 4 \times 6 4 )$ . Each row compares global CFG (left) and DCFG (right) across guidance weights. Top: global CFG causes amplification of Smiling under do(Male); Middle: do(Young) suppresses Male (i.e. amplifies $\mathtt { M a l e } = n o$ ); Bottom: do(Smiling) makes the subject appear older, adds glasses, and alters identity. DCFG mitigates these unintended changes and preserves non-intervened attributes. Refer to appendix D.3 for more visual results. Fig. 1 presents the $\Delta$ metrics under different guidance strategies for two separate interventions: $\mathsf { d o } ( \mathsf { S m i l i n g } )$ and do(Young). As the global guidance weight $\omega$ increases (left to right side of each plot), the $\Delta$ of the intervened attribute improves, but so do the $\Delta$ values of attributes that should remain invariant, indicating undesirable amplification. In contrast, the right side of each plot shows results for DCFG, where distinct weights are applied to affected $( \omega _ { \mathrm { a f f } } )$ and invariant $\left( \omega _ { \mathrm { i n v } } \right)$ attribute groups. 
This decoupled formulation achieves comparable or stronger improvement on the intervened attribute while keeping the others stable, validating DCFG’s ability to produce more disentangled and effective counterfactuals. Fig. 2 illustrates how global CFG can introduce unintended changes by uniformly amplifying all conditioning signals, even when only one attribute is meant to change. In the top row, applying $\mathrm{do}(\mathtt{Male} = \mathtt{no})$ with increasing $\omega$ inadvertently amplifies Smiling; in the middle row, $\mathrm{do}(\mathtt{Young} = \mathtt{no})$ reduces Male expression; and in the bottom row, $\mathrm{do}(\mathtt{Smiling} = \mathtt{yes})$ introduces changes to age and identity, and even adds glasses. These unintended shifts stem from global CFG treating all attributes equally. In contrast, DCFG applies decoupled guidance across attributes, assigning stronger weights to those affected by the intervention and allowing attributes that were not targeted by the intervention to remain unchanged. This results in counterfactuals that more faithfully reflect the intended change while preserving identity and consistency in non-intervened factors. Fig. 3 evaluates the reversibility of counterfactuals in CelebA-HQ. The left panel reports quantitative metrics (MAE and LPIPS) that assess how well the original image is recovered after applying an intervention (e.g., do(Smiling)) and then reversing it. With global CFG, reversibility consistently degrades as guidance strength increases, with higher error values across both metrics. In comparison, DCFG yields lower MAE and LPIPS for the same guidance levels on intervened variables (i.e., comparing $\omega$ with $\omega_{\mathrm{aff}}$), demonstrating improved ability to recover the original image. The right panel presents a qualitative example where a counterfactual is generated under do(Male) and then reversed. Figure 3: Reversibility analysis in CelebA-HQ $(64 \times 64)$.
Left: Quantitative evaluation of how well the original image is recovered after generating a counterfactual and mapping it back to the original condition under do(Smiling). Right: A qualitative example showing a counterfactual generated under do(Male) and its reconstruction after reversing the intervention with CFG and our DCFG. Figure 4: Evaluation of counterfactual generation on EMBED $(192 \times 192)$. Left: $\Delta$ metrics showing the effect of do(circle). DCFG improves target intervention effectiveness while suppressing spurious shifts in non-intervened attributes. Right: A visual example showing the input image, the counterfactual under do(density), the reversed image, and their difference maps (CF/Rev. - input). See appendix E.2 for full quantitative results and appendix E.3 for more visual results. Notably, the reversed image appears older than the original, an artifact not of the intervention itself but of global CFG’s tendency to amplify all attributes in the counterfactual parent vector. While the model reverts the intervened attribute, non-intervened ones like $\mathtt{Young} = \mathtt{no}$ remain amplified, leading to unintended and compounding changes. In contrast, the proposed DCFG distinguishes between intervened and invariant attributes, applying stronger guidance only to the target group. This mitigates attribute amplification in non-target attributes during both generation and reversal, enabling more faithful, disentangled, and reversible counterfactuals.
# 4.2 EMBED
We evaluate DCFG on the EMBED [18] breast mammography dataset. We define a binary circle attribute based on the presence of circular skin markers, and a binary breast density label, where categories A and B are grouped as low and categories C and D as high. See appendix E.1 for details. Fig. 4 presents results for counterfactual generation on EMBED.
The bar plot on the left reports $\Delta$ effectiveness metrics, measuring how classifier performance changes relative to the baseline. While global CFG improves effectiveness for the target attribute (circle), it also increases effectiveness on non-intervened attributes such as density, indicating unintended attribute amplification. DCFG mitigates this by applying selective guidance, maintaining stable performance on non-target attributes. The figure on the right illustrates a key example: applying do(density) under global CFG unintentionally amplifies the presence of circular skin markers, as evidenced by the increased number of visible circles in both the counterfactual and reversed images. This is suppressed under DCFG, where circle features remain unchanged in both counterfactual and reversed images.
# 4.3 MIMIC
We evaluate our method on the MIMIC-CXR dataset [19]. We follow the dataset splits and filtering protocols from [9, 13], and focus on the binary disease label of pleural effusion. The underlying causal graph in De Sousa Ribeiro et al. [9] includes four attributes: race, sex, finding, and age. We adopt this setup, but since our goal is to study attribute amplification caused by CFG, we focus on Figure 5: Evaluation of counterfactual generation on MIMIC $(192 \times 192)$. Left: $\Delta$ metrics showing the effect of do(finding). DCFG improves target intervention effectiveness while suppressing spurious shifts in non-intervened attributes.
Right: A visual example showing the input image, the counterfactual under do(finding), the reversed image, and their difference maps (CF/Rev. - input). See appendix F.2 for full quantitative results and appendix F.3 for more qualitative results. sex, race, and finding, which we assume to be mutually independent for analysis purposes. See appendix F.1 for details. Fig. 5 presents an evaluation of counterfactual generation in MIMIC-CXR, highlighting the advantages of our proposed DCFG. The bar plot on the left shows $\Delta$ metrics that quantify the change in effectiveness relative to the baseline $\omega = 1.0$. While global CFG improves effectiveness for the intervened variable (finding), it also introduces substantial shifts in non-intervened attributes such as race and sex, revealing unwanted attribute amplification. In contrast, DCFG achieves comparable or higher target effectiveness while suppressing spurious changes, demonstrating more precise and controlled generation. On the right, we show a qualitative example of a counterfactual generated under do(finding), its reversed reconstruction, and their corresponding difference maps. Compared to global CFG, our method yields localized, clinically meaningful changes in counterfactuals and better identity preservation in the reversed image, further supporting its robustness and causal faithfulness.
Counterfactual image generation aims to simulate realistic visual outcomes under specific causal interventions. Diffusion models have recently emerged as a powerful tool for this task, combining DDIM inversion with conditional generation via classifier-free guidance (CFG). However, standard CFG applies a single global weight across all conditioning variables, which can lead to poor identity preservation and spurious attribute changes - a phenomenon known as attribute amplification. To address this, we propose Decoupled Classifier-Free Guidance (DCFG), a flexible and model-agnostic framework that introduces group-wise conditioning control. DCFG builds on an attribute-split embedding strategy that disentangles semantic inputs, enabling selective guidance on user-defined attribute groups. For counterfactual generation, we partition attributes into intervened and invariant sets based on a causal graph and apply distinct guidance to each. Experiments on CelebA-HQ, MIMIC-CXR, and EMBED show that DCFG improves intervention fidelity, mitigates unintended changes, and enhances reversibility, enabling more faithful and interpretable counterfactual image generation.
# 1 Introduction Vision-language models (VLMs) have made notable progress in general-domain tasks, such as crop anomaly detection [1] and intelligent video surveillance [2]. In the medical and healthcare domain, researchers have recently adapted VLMs to support medical visual question answering (VQA), with promising results from both academic initiatives [3, 4] and large-scale efforts [5, 6]. Alongside improvements in accuracy, recent VLMs have become increasingly accessible, allowing small teams and individual researchers and practitioners to adapt off-the-shelf VLMs to domain-specific tasks through affordable fine-tuning. However, these off-the-shelf VLMs still underperform on medical VQA compared to general-domain VQA due to domain mismatch, limited data availability, and a lack of systematic evaluation and interpretability tools. Developing robust medical VQA systems poses unique challenges. VLMs are trained on open-web datasets (like [7, 8]) that consist largely of general-domain data, and they struggle with the domain shift introduced by complex, multi-modality clinical inputs. Medical VQA tasks require not only visual understanding but also specialized reasoning grounded in clinical knowledge, which general-purpose VLMs typically lack. Moreover, the scarcity of large-scale, high-quality image–question–answer datasets in radiology limits the ability to fine-tune or evaluate these models systematically. In addition, the absence of standardized training pipelines and interpretability tools hampers both model development and clinical validation. Together, these challenges call for lightweight approaches that balance domain adaptation, performance analysis, and interpretability. We address these challenges by adapting a lightweight VLM, the 3B-parameter PaliGemma-mix-448 [9], for radiological VQA.
Our approach combines a two-stage fine-tuning pipeline with parameter-efficient LoRA [10] adaptation, using a curated mixture of radiology datasets (SLAKE [11], PMC-VQA [12], ROCO v2.0 [13], MedPix 2.0 [14]). In the first stage of fine-tuning, we align the model’s projection head with domain-specific anatomical vocabulary; in the second stage, we fine-tune the full model using enriched instruction-tuning data generated via a LLaMA-8B QA-generation pipeline and annealing strategies that amplify high-quality supervision. To evaluate model performance, we introduce a saliency-based diagnostic tool that visualizes attention from image patches to response tokens and vice versa, enabling human experts to identify ill-conditioned outputs. Despite the model’s small size, it achieves competitive accuracy on combined ROCO $+$ MedPix VQA tasks, approaching the performance of much larger models like LLaVA-Med [6]. Our key contributions are as follows. First, we reassess model scaling trends in medical VQA by demonstrating that a compact 3B VLM, when appropriately fine-tuned, can achieve competitive performance on radiological VQA tasks, challenging the assumption that only large-scale models are capable of strong clinical reasoning. Second, we propose an end-to-end framework that spans dataset curation, synthetic QA-pair generation, annealing-based enrichment, and a two-stage fine-tuning strategy. This pipeline enables medical domain specialization with minimal compute, serving as a practical guide for low-resource medical VLMs. Third, we develop a lightweight, attention-based interpretability tool to visualize cross-modal saliency between image regions and text outputs, supporting expert-driven auditing of model predictions. Finally, we empirically validate our model on both open- and closed-ended radiological QA tasks, highlighting that compact, interpretable models can be viable for domain-specific VQA applications.
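The LoRA adaptation mentioned above replaces full fine-tuning of a weight matrix with a trainable low-rank correction. A minimal numeric sketch of the idea, with toy matrices standing in for the actual model weights (not the paper's training code):

```python
# Sketch of a LoRA forward pass: y = W x + (alpha / r) * B (A x), where the
# base weight W is frozen and only the low-rank factors A (r x d) and B (k x r)
# are trained. Plain Python lists stand in for tensors.
def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def lora_forward(x, W, A, B, alpha, r):
    base = matvec(W, x)                 # frozen pretrained path
    low = matvec(B, matvec(A, x))       # trainable low-rank update
    return [b + (alpha / r) * l for b, l in zip(base, low)]

# d = 2 inputs, k = 2 outputs, rank r = 1: only d*r + r*k = 4 extra parameters
# instead of the d*k a full fine-tune would touch.
y = lora_forward([1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]],
                 [[1.0, 1.0]], [[1.0], [1.0]], alpha=1.0, r=1)
print(y)  # → [3.0, 3.0]
```

The parameter saving is what makes the two-stage pipeline affordable: the frozen path `W` is shared across stages while only `A` and `B` (and, in stage 1, the projection head) receive gradients.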
# 2 Related Work
Our methodology builds upon recent developments in medical VQA and text-based question answering. Several studies have introduced comprehensive pipelines that span data collection, model training, and rigorous assessment, highlighting the evolving capabilities of radiological VQA systems. We now summarize key contributions from related works that have influenced our approach. MedVInT-T(D,E) [15] presents a complete training and evaluation framework for medical VQA. Their approach involves fine-tuning a VLM on an in-house synthetic dataset [12] curated using GPT-4 [16], which contains multiple-choice questions covering a variety of radiological images, as well as short fill-in-the-blank questions, with the expectation that the resulting model also develops the capability to answer open-ended queries. The model, fine-tuned on public benchmarks, performs on par with existing radiological VQA systems. Additionally, they manually verify a sample of test-set results to make the models robust against the current limitations of popular evaluation frameworks. MedPaLM [5] introduces a comprehensive training and evaluation framework from scratch. They compile the HealthSearchQA dataset [17] for answering both consumer- and professional-level text-based medical questions by sampling from existing medical QA datasets. They then fine-tune Flan-PaLM [18] on this dataset, achieving a new state-of-the-art model, which is then evaluated by both professionals and laypersons on an extensive set of evaluation axes. Notably, their work exemplifies the design of human evaluation, incorporating assessments from both professionals and laypersons across a broad set of criteria. LLaVA-Med [6] curates the PMC-15M dataset by sampling from PubMed Central [19] and prepares synthetically generated multi-turn instruction training data using GPT-4 [16].
The study trains the model for only 16 hours on 8xA100 GPUs [20], achieving state-of-the-art results in radiological visual question answering with a modest 8B LLM [21]. Their work demonstrates that individual researchers can achieve state-of-the-art performance even with a cost-effective training approach.
# 3 Architecture
# 3.1 Model Design
Our vision-language model (VLM) builds on prior work [9, 22] and follows a multi-stage training pipeline (Figure 1). Training begins with the selection of an off-the-shelf vision tower and an LLM, each demonstrating strong performance on its respective unimodal tasks, such as large-scale image classification for the vision tower and natural language understanding and generation for the LLM. These components are then integrated and subjected to multimodal pretraining on a diverse set of tasks, such as image captioning [23] and referring expression segmentation [24], to develop a broad understanding of visual concepts in the general domain. During multimodal pretraining, no weights are frozen, allowing all parameters to be updated during backpropagation. For domain adaptation such as radiological VQA, we conduct multi-stage fine-tuning of the selected off-the-shelf model using smaller but domain-specific datasets, mirroring methodologies in [5, 6, 15]. In our study, we employ PaliGemma-mix-448 [9] as our base VLM. This choice is motivated by its transparent pretraining on a diverse and well-curated collection of open-web datasets [7, 8, 25, 26], in contrast to models with undisclosed training data [16]. This transparency enables a clearer understanding of the model’s zero-shot (base) performance and makes comparisons against the fine-tuned model easier. The details of the main components of the proposed VLM architecture are described below.
Figure 1: Our and PaliGemma [9] Vision Language Model Architecture
Vision Tower We employ a SigLIP vision transformer [27] as the vision tower in our framework, comprising approximately 400M parameters. SigLIP is pretrained using a contrastive learning objective with a sigmoid loss, specifically to handle classification tasks involving a large number of labels, where traditional cross-entropy loss becomes less effective [27]. The vision tower processes one or multiple input images by applying self-attention across image patches in a non-causal manner, generating image features that are independent of any accompanying text instruction.
Projection Head A single linear layer aligns the output dimensionality of the vision tower with the token dimension of the language model’s vocabulary, which is required for concatenation. While the projection can be implemented using multiple linear layers, a prior ablation study [9] found no significant advantage in using more than one layer. Therefore, we use a single-layer projection in our VLM architecture.
Concatenation The text prefix associated with the image is tokenized [28] and concatenated with the projected image features from the vision tower. A special separator token is inserted between the image features and the tokenized text to delineate the two modalities. The resulting sequence is then padded or truncated as needed to match the input length of the language model.
LLM The concatenated image and text features are passed to the 2B Gemma LLM [29] as a single input. The model generates the first output token by jointly attending to both the visual features and the tokenized text prefix. Subsequent tokens are produced autoregressively, conditioned on the previously generated tokens along with the original multimodal input.
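The projection-and-concatenation pipeline above can be illustrated with a toy sketch. The separator string, the projection lambda, and all shapes here are illustrative stand-ins, not PaliGemma's actual tokenizer or weights:

```python
# Toy sketch of the multimodal input assembly: project each image-patch
# feature to the LLM embedding width, insert a separator, then append the
# tokenized text prefix. "<sep>" is an illustrative stand-in token.
SEP = "<sep>"

def build_llm_input(image_feats, project, text_tokens):
    projected = [project(f) for f in image_feats]  # single linear projection
    return projected + [SEP] + text_tokens

seq = build_llm_input([[0.5], [1.0]], lambda f: [2.0 * v for v in f],
                      ["what", "organ", "?"])
print(seq)  # → [[1.0], [2.0], '<sep>', 'what', 'organ', '?']
```

In the real model the projected patch embeddings and text tokens share one embedding width, so the LLM attends over the whole sequence uniformly.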
# 3.2 Diagnostic Design

To enhance interpretability and validate the clinical relevance of the proposed VQA model, we analyze its attention mechanisms, which govern cross-modal interactions between image features and text tokens, inspired by [30]. We develop a diagnostic tool for saliency analysis aimed at aiding practicing radiologists during expert evaluation [31]. The interactions between the text prefix, image features, and response tokens, which occur exclusively within the attention heads of the LLM, were analyzed through visualizations. Prior to the concatenation layer, there is no interaction between the text prefix and the image features; the attention heads of the LLM therefore learn to selectively filter and attend to the relevant signals from both modalities to guide generation. Although saliency is not the same as explainability [32], experts can often identify diagnostic indicators, since saliency is fundamentally tied to the learned weights of the model. For a self-attention-based model [33], this relation is easy to examine, as self-attention operates by aggregating similarity scores between two learned representations for each token: queries and keys. These interactions determine how information is distributed across tokens, which ultimately guides the generation process. We implemented the following two attention techniques.

Saliency via Raw Attention. Raw attention examines the interactions between queries and keys, which can be interpreted as measuring the affinity or relevance of a token of interest (the query) to the rest of the tokens (the keys), either within or across modalities. We compute attention weights between queries and keys to localize token-level contributions.

Saliency via Rollout Attention. In self-attention-based models, raw attention weights do not always provide meaningful insight: as information propagates through multiple layers, embeddings become increasingly mixed.
This is because self-attention does not inherently preserve token identity across layers; rather, it continuously blends representations from multiple input tokens. As a result, individual token contributions become obscured, and raw attention weights fail to capture the original token relationships [34]. We adopt rollout attention [34, 35], which recursively aggregates attention weights across layers while accounting for skip connections.

# 4 Datasets and Training Recipe

The overall methodology of our training recipe is outlined in Figure 2. We begin by collecting publicly available radiological datasets and converting them into visual question-answer (VQA) pairs [21]. The resulting dataset is then enriched and processed to ensure suitability for fine-tuning. Our fine-tuning approach uses a two-stage training strategy: the first stage focuses on learning foundational visual radiological concepts, while the second stage incorporates larger datasets to enhance the model's rigor and generalization.

Figure 2: Training Recipe Overview

To evaluate model performance, we measure classification accuracy on both open- and closed-ended questions, depending on the dataset composition. For generative responses to open-ended questions, we assess factuality using GPT-4 [16] as an automated judge. We perform ablation studies across the stages of our data curation and fine-tuning methodology to quantify performance gains. In the absence of a medical expert, the authors conduct a diagnostic analysis on organ-level cases to identify model limitations.
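The caption-to-VQA conversion mentioned above can be sketched as a simple record transformation. In the paper's actual pipeline this is done by prompting LLaMA-8B (see the appendix templates); the fixed template, field names, and file name below are illustrative assumptions only.

```python
# Hypothetical sketch: turn an image-caption record into a closed-ended
# VQA training example. The template and field names are assumptions,
# not the paper's actual LLaMA-8B-generated format.
CLOSED_TEMPLATE = (
    "Question: Does the image show {finding}? (yes/no)\n"
    "Answer: {answer}"
)

def caption_to_closed_qa(record, finding, answer):
    """Pair an image reference with a templated closed-ended QA string."""
    return {
        "image": record["image"],
        "qa": CLOSED_TEMPLATE.format(finding=finding, answer=answer),
    }

record = {"image": "roco_000123.png",  # hypothetical file name
          "caption": "Coronal CT showing a lesion crossing the midline."}
example = caption_to_closed_qa(record, "a lesion crossing the midline", "yes")
print(example["qa"])
```

An LLM-driven generator replaces the fixed template with a prompted rewrite of the caption, but the output record shape stays the same.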
# 4.1 Data Collections

Figure 3: Dataset composition. (a) SLAKE: organ semantic annotations; (b) PMC-VQA: modality distribution; (c) ROCO v2.0: top-7 UMLS concepts; (d) MedPix v2.0: organ-level distribution.

Fine-tuning a VLM requires not only substantial model capacity but also access to large, diverse, and semantically rich datasets. In our work, we combined four datasets that have been de-identified for privacy protection: SLAKE [11], PMC-VQA [12], ROCOv2 [13], and MedPix 2.0 [36]. Together they span a wide range of pathologies, radiological modalities, and concepts for open- and closed-ended questions (Figure 3). SLAKE contains ~14,000 VQA pairs annotated by practicing physicians. The dataset covers a wide range of anatomical regions and provides high-quality semantic annotations that are well suited to evaluating radiological reasoning. PMC-VQA is derived from PMC-CLIP [37] and includes 227,000 QA pairs in either multiple-choice or short fill-in-the-blank format. Its scale and diversity have been effectively leveraged in training models such as MedVInT-TE and MedVInT-TD, and it covers a diverse set of imaging modalities including CT, MRI, ultrasound, and X-ray. ROCOv2 contains ~79,000 image-caption pairs from PubMed Central, each caption providing a concise (~20-word) description of the radiological image. Due to its breadth and structural consistency, ROCOv2 supports multiple tasks including image captioning, multi-label classification, and VLM pretraining. MedPix 2.0 includes ~12,000 curated cases from the MedPix database.
Each case contains diagnostic images, detailed case descriptions, and relevant treatment information. The dataset is built using a semi-automated pipeline with manual validation to reduce label noise.

# 4.2 QA-Pairs Data Generation

Among our selected datasets, SLAKE and PMC-VQA natively provide image-QA pairs, while ROCO v2.0 and MedPix v2.0 contain image-caption pairs. Fine-tuning on image-QA triplets has been shown to be more effective than image-caption pairs for training VLMs on visual reasoning tasks [38]. Therefore, inspired by previous work [39, 38, 15, 6], we synthesize both open- and closed-ended QA pairs from the image-caption pairs using LLaMA-8B [21]. We chose LLaMA-8B for its accessibility, inference efficiency, and reproducibility for other individual researchers. Importantly, its pretraining corpus contains limited medical content, allowing us to isolate and evaluate the performance of general-domain LLMs when applied to specialized medical tasks.

Figure 4: Filtering and curation pipeline.

Medical VQA tasks demand not only visual understanding but also clinical reasoning, which general-purpose VLMs often lack. To address this, we prioritize datasets where questions are grounded in patient context and, where possible, linked to supporting medical literature. Figures 8 and 9 in the Appendix show the prompt templates used to generate patient-case-based and literature-based QA pairs from image-caption pairs. Synthetic QA generation introduces risks such as hallucinations and clinically irrelevant content. To ensure quality, we manually filter out noisy outputs and apply a form of dataset annealing to incrementally refine the corpus toward higher semantic and clinical relevance.

# 4.3 Annealing and Filtering

Annealing improves model performance by incrementally incorporating small, high-quality subsets into a larger training set.
The objective is to increase the proportion of highly informative examples, such as those rich in visual concepts and clinical reasoning, within the overall dataset. By doing so, the model can learn reliable patterns that might otherwise be obscured by lower-quality data. Evidence for annealing's effectiveness comes from [21], where LLaMA3-8B showed a 24% improvement on grade-school-level math questions (GSM8K [40]) and a 6.4% gain on competition-level math reasoning tasks [41]. Notably, the benefit diminished for larger models (e.g., LLaMA3-405B) [21], suggesting that small and mid-sized models, such as our 4B-parameter VLM, are especially receptive to annealing. In our study, we use the high-quality MedPix v2.0 dataset [14] as the primary enrichment dataset for annealing ROCO v2.0. While MedPix is smaller in scale, it provides high-quality radiological case studies and literature references, making it well suited to improving domain-specific reasoning. A key component of effective annealing is systematic filtering, which ensures that only high-quality, domain-relevant data is incorporated into the dataset. Figure 4 outlines our data curation strategy with annealing and filtering. The process begins with a medical corpus filtered by pathological relevance. Unlike conventional upsampling strategies that increase the variety of rare cases, our approach focuses on reinforcing the most common pathologies in our data mix to improve model generalization.

# 4.4 Two-stage Fine-Tuning

1st Stage. Off-the-shelf VLMs often exhibit inconsistent performance in recognizing anatomical structures, occasionally producing incorrect generations when presented with slight variations in images. This inconsistency highlights the need for alignment between visual features and anatomical vocabulary. To address this, we adopt the SLAKE dataset [11] as the foundation for 1st-stage fine-tuning.
SLAKE offers well-annotated radiological visual concepts, making it particularly suitable for anatomical structure recognition. In this initial phase, we fine-tune only the projection head of the model while keeping all other parameters frozen. We train the projection layer for 5 epochs on SLAKE and use the resulting checkpoint as the initialization for subsequent training. Our method aligns with curriculum learning principles, emphasized in [42]: start with simpler radiological visual concepts, then move to more diverse data.

2nd Stage. Using the checkpoint from the 1st stage as the initialization, we fine-tune the model on larger and more diverse instruction sets: ROCO v2.0 [13], MedPix 2.0 [36], and PMC-VQA [43]. For parameter-efficient fine-tuning, we apply LoRA [10], a low-rank adaptation method, targeting the attention heads in both the vision tower and the language model. This significantly reduces computational and storage overhead compared with traditional fine-tuning methods that update all or a large portion of model parameters.

Figure 5: Fine-Tuning Evaluation Loss

# 5 Experiments and Evaluation

# 5.1 Experiment Setting

All experiments, including fine-tuning and evaluation, were conducted on a single NVIDIA H100 GPU. With adequate allocation, the ROCO, MedPix, and ROCO+MedPix fine-tuning runs each took approximately 3 days, PMC-VQA took about 6 days, and SLAKE completed in under 5 hours.

# 5.2 Fine-Tuning Experiments

We first evaluated fine-tuning performance across instruction sets with varying token lengths and question formats. Evaluation loss curves over training epochs are shown in Figures 5a and 5b. For datasets with open-ended QA templates, we observe that the evaluation loss decreases approximately quadratically as the number of tokens in the instruction set increases (Figure 5a).
However, this trend does not hold for datasets with closed- or short-ended QA templates, where the labels contain fewer tokens: the expected loss becomes small after only a few training iterations, and the loss plateaus earlier (Figure 5b). We further analyzed scaling behavior using the empirical loss model:
$$ \tilde { L } ( X , D _ { f } ) = A \cdot \frac { 1 } { X ^ { \alpha } } \cdot \frac { 1 } { D _ { f } ^ { \beta } } + E $$
where $\tilde{L}$ is the evaluation loss, $X$ is the number of fine-tuned parameters, $D_f$ is the token count, $\alpha$ and $\beta$ are scaling exponents, and $A$ and $E$ are fitted constants. Scaling properties for fine-tuning LLMs are highly dependent on task type and data composition [44]. Consequently, the optimal fine-tuning strategy and scaling behavior can vary with the structure and semantics of the training data. We observed that the fitted values ($A$, $\alpha$, $\beta$, $E$ in Equation 1) differ depending on the question-answer templates used across datasets. For example, ROCO v2.0 [13] and MedPix 2.0 [36] have open-ended instruction sets with an average label length of around 20 tokens. In this case, task dependence is less observable, and improvements in evaluation loss $\tilde{L}$ correlate more directly with data size $D_f$. In contrast, task dependence becomes more evident in closed-ended QA, particularly when different templates are used for QA pairs (Figure 5b). While higher data volume generally leads to faster convergence, this trend breaks down when comparing MIMIC-CXR-JPG [45] and PMC-VQA [12]. Despite its smaller size, PMC-VQA yields greater learning gains in fewer epochs, likely due to its use of multiple-choice templates.
These have a lower expected loss ($\tilde{L} = -\ln(1/4) \approx 1.39$ for a uniform guess over four options) than open-ended QA tasks, which typically involve more linguistic variation and semantic ambiguity. These observations suggest that a single scaling law may not generalize across mixed-template datasets. As dataset mixtures grow, especially those combining open- and closed-ended QA formats, it becomes increasingly difficult to preserve a consistent ratio of question types. Since each new addition may change this ratio, the expected evaluation loss becomes hard to predict as the number of tokens in the instruction set grows. This variability complicates the application of scaling laws in medical VQA, as the impact of additional training data is not uniform across datasets and QA templates.

# 5.3 VQA Evaluation

Standard n-gram metrics such as BLEU [46] and ROUGE [47] offer limited insight into factual correctness, particularly in clinical VQA settings [5]. We report these scores in Table 5 in the Appendix, but propose and emphasize more robust evaluation methods below.

Figure 6: Example generations with their ground-truth answers, BLEU/ROUGE scores, and GPT-4 accuracy judgments, illustrating how near-zero n-gram scores can accompany factually correct answers.

Closed-ended QA Evaluation: For multiple-choice question answering (MCQA) such as PMC-VQA [12], we measure model accuracy across five stochastic generations per test instance.
Inspired by [15], we define a prediction as non-robust if the model produces different answers in three or more of the five inferences. In such cases, we penalize the accuracy by one point to account for uncertainty and instability in the output.

Open-Ended QA Evaluation: For open-ended questions that demand clinical reasoning, we employ LLM-based evaluation. We design a prompt template (Figure 10 in the Appendix) and use GPT-4 [16] to judge each generated answer for factual correctness. Examples are presented in Figure 6. Table 1 compares accuracy across four datasets, evaluating the effect of the two-stage fine-tuning approach. Results are reported as mean accuracy ± standard deviation over five inference runs, with LLaVA-Med serving as a high-capacity baseline. On SLAKE (closed-ended QA), two-stage fine-tuning achieves 79% accuracy, highlighting strong gains even without large model capacity. For PMC-VQA, ROCO, and the ROCO+MedPix annealing set, two-stage fine-tuning consistently outperforms single-stage fine-tuning, demonstrating its effectiveness across different QA formats. Although the accuracy gains from two-stage fine-tuning are modest, it accelerated convergence, reducing the number of epochs needed to reach the target evaluation loss. Finally, comparing ROCO to the ROCO+MedPix annealing set shows clear performance gains from annealing, even with small data volumes. These results indicate that modest instruction-set annealing offers a cost-effective way to improve generalization and robustness, with potential for further gains from larger annealing sets.

# 5.4 Manual Verification via Saliency Diagnostic

Inspired by previous work [15, 5], we conduct manual verification of model-generated responses on test samples, incorporating saliency diagnostics wherever possible. As discussed above, we applied raw attention and attention rollout for saliency analysis.
Figure 7 illustrates an example of these two methods. For response-to-image saliency, we select a response token (e.g., "narrowing") as the query and visualize the average saliency over the input image (used as keys). Conversely, we can examine image-to-response saliency, where we select a specific image patch (e.g., the blue arrow) as the query and plot the resulting saliency over the response tokens based on their key representations. Compared with raw attention, we found that attention rollout highlights more abstract features, such as the passage of the gastrointestinal tract, that are semantically relevant to the given example. More details and examples are presented in Appendix B.

Table 1: Accuracy (%) with and without Stage 1 fine-tuning across datasets. Results are reported as mean ± one standard deviation across five inference runs (each on a sample of 200).

Figure 7: An example of saliency analysis with Raw Attention and Attention Rollout for a patient with a postoperative UGI showing slight narrowing at the mid-body of the stomach.

Using the saliency tool, we evaluate the factuality of the generated responses from our model against the corresponding ground-truth labels. Furthermore, we report a broad per-class accuracy (Table 2) at the organ level to highlight variability in model performance, as certain anatomical regions exhibit greater nuance and complexity than others.

Table 2: Manual verification performed over a single inference (LLaVA-Med [6] as a baseline).
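The attention-rollout saliency used above can be sketched as follows. This is a minimal NumPy version following the standard rollout recipe of [34, 35] (average over heads, add the identity for the skip connection, row-normalize, multiply through the layers); these choices are the usual ones, not details confirmed by the paper.

```python
import numpy as np

def attention_rollout(attn_per_layer):
    """Rollout attention: at each layer, average attention over heads,
    add the identity to account for the skip connection, re-normalize
    rows, and multiply the per-layer matrices together."""
    rollout = None
    for attn in attn_per_layer:                    # attn: (heads, seq, seq)
        a = attn.mean(axis=0)                      # average over heads
        a = a + np.eye(a.shape[0])                 # skip connection
        a = a / a.sum(axis=-1, keepdims=True)      # row-normalize
        rollout = a if rollout is None else a @ rollout
    return rollout                                 # (seq, seq)

# Toy check with random attention maps: 4 layers, 8 heads, 16 tokens.
rng = np.random.default_rng(0)
attn = rng.random((4, 8, 16, 16))
attn /= attn.sum(axis=-1, keepdims=True)           # each row is a distribution
R = attention_rollout(attn)
print(np.allclose(R.sum(axis=-1), 1.0))            # rows remain distributions
```

Row `i` of the resulting matrix is the rollout saliency of token `i` over all input positions; slicing out the image-patch columns for a chosen response token gives the response-to-image maps shown in Figure 7.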
Recent advancements in vision-language systems have improved the accuracy of radiological visual question answering (VQA) models. However, challenges remain at each stage of model development: limited expert-labeled images hinder data procurement at scale; the intricate, nuanced patterns of radiological images make modeling inherently difficult; and the lack of evaluation efforts makes it hard to identify cases where the model may be ill-conditioned. In this study, we fine-tune a lightweight 3B-parameter vision-language model for radiological VQA, demonstrating that small models, when appropriately tuned with curated data, can achieve robust performance across both open- and closed-ended questions. We propose a cost-effective training pipeline, from synthetic question-answer pair generation to multi-stage fine-tuning on specialized radiological domain-targeted datasets (e.g., ROCO v2.0, MedPix v2.0). Our results show that, despite operating at a fraction of the scale of state-of-the-art models such as LLaVA-Med, our model achieves promising performance given its small parameter count and the limited scale of its training data. We also introduce a lightweight saliency-based diagnostic tool that enables domain experts to inspect VQA model behavior and identify ill-conditioned failure modes through saliency analysis.
# 1 Introduction

With the burgeoning development of machine learning (ML) applications, there is increasing use of sensitive data, including financial transactions, medical records, and personal digital footprints, for training purposes. Numerous studies [39, 42, 53] have highlighted serious privacy risks associated with ML models, such as data extraction [14], membership inference [18], and property inference [15] attacks, primarily due to their capacity to memorize training datasets. Membership inference attacks (MIAs) on ML models aim to determine whether a specific data sample was used to train a target model. These attacks have received significant attention and are widely studied in ML privacy research. Beyond highlighting membership inference as a critical privacy threat, they are also frequently employed as evaluation tools across a broad range of privacy-related tasks and research efforts. These include:

• Privacy Risk Assessment: MIAs have been increasingly utilized to examine privacy risks in various machine learning contexts, such as generative adversarial networks [6], explainable ML [29], diffusion models [32], federated learning [36], large language models [4, 33], and multi-modal models [10]. MIAs are also applied across diverse applications such as social media networks [28], recommendation systems [57], and clinical models [19].

• Privacy Auditing: MIAs are often used as empirical tools for privacy auditing to quantify privacy leakage [21, 35]. With their underlying privacy notion closely tied to differential privacy (DP), MIAs have been used to validate the bounds of DP algorithms [37] and debug their implementations [48].

• Machine Unlearning Verification: Machine unlearning [2] involves removing the influence of a data item from a model to ensure privacy and compliance. MIAs are often used to assess whether a sample has been unlearned [23].
• Benchmarking privacy-enhancing methods: Because of the effectiveness of MIAs in the above tasks, they are widely used to evaluate and benchmark privacy-preserving solutions [41, 49], DP algorithms [11], and unlearning methods [13, 38].

Due to the critical nature of these tasks, extensive research efforts are being made to develop more effective and powerful MIAs [18].

Figure 1: Venn diagrams of member sets detected by (a) different attacks at a low FPR (0.1), and (b) different instances of the Class-NN attack (with the same auxiliary dataset) at a low FPR (0.1). All attacks are performed on CIFAR-10.

These advances are important to ensure a more accurate and comprehensive assessment of privacy risks, auditing, and unlearning verification. While balanced accuracy and AUC are commonly used to measure the performance of MIAs, Carlini et al. [3] argue that these aggregate metrics often do not correlate with success rates at low false positive rates (FPRs), which are crucial for a practically meaningful evaluation of MIA effectiveness. Therefore, the true positive rate at low FPR (TPR@low FPR) has become the standard metric for evaluating the "practical effectiveness" of MIAs. Recent works on MIAs [3, 30, 50, 55] use both aggregate metrics and TPR@low FPR to demonstrate the superiority of their proposed methods over prior attacks. In this paper, however, we argue that evaluation, even with all these metrics, may still not capture a complete picture of MIA performance. To elaborate, consider a target model $F_T$ trained on dataset $\mathcal{D}$ and two attack instances $\mathcal{A}_a$ and $\mathcal{A}_b$ with the same FPR. Suppose $\mathcal{D}_a$ and $\mathcal{D}_b$ represent the member subsets detected by $\mathcal{A}_a$ and $\mathcal{A}_b$, respectively.
Even if $\mathcal{A}_a$ performs better than $\mathcal{A}_b$ on both aggregate metrics and TPR@low FPR, relying only on $\mathcal{A}_a$ may not reliably assess privacy risks or verify unlearning outcomes for samples in $\mathcal{D}_b \setminus \mathcal{D}_a$. For illustration, Figure 1a shows a Venn diagram of the member subsets detected by three different attacks (LiRA [3], Loss Trajectory [30], and the Reference Attack [51]) at the same FPR in our MIA experiment on CIFAR-10. The minimal overlap among them indicates that different attacks may implicitly target different subsets of members. This observation highlights a potential limitation in the common practice of favoring one attack over another in privacy-related tasks based solely on performance metrics. Better metrics do not necessarily imply greater overall capability of an MIA, as a sample undetected by a "stronger" attack may still be exposed by another. This raises two important questions relevant to ongoing MIA privacy research and practice:

• Q1: Should the effectiveness of an MIA be judged solely on these traditional metrics [10, 18]? More broadly, should research on developing new MIAs primarily focus on improving performance metrics while overlooking the member-detection disparities between different methods?

• Q2: Is it sufficient for privacy evaluations to rely on a single "top-performing" MIA selected by performance metrics, without accounting for the disparities between different MIAs?

In this paper, we argue that the significant disparities in member detection at the sample level across different MIAs should not be overlooked when evaluating their effectiveness and employing them as tools for privacy assessment. In addition, reliability and consistency are essential attributes that MIAs must possess to function effectively as privacy evaluation tools.
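The comparison behind Figure 1a can be reproduced schematically: fix a common FPR via the non-member score distribution, threshold each attack's member scores, and compare the resulting detected sets. Below is a minimal sketch on synthetic scores (not the paper's data); the two "attacks" are constructed to be confident on different halves of the member set.

```python
import numpy as np

def detected_members_at_fpr(member_scores, nonmember_scores, target_fpr=0.1):
    """Pick the threshold whose false-positive rate on non-members equals
    target_fpr, then return the indices of members scored above it."""
    thresh = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return set(np.flatnonzero(member_scores > thresh))

rng = np.random.default_rng(1)
nonmembers = rng.normal(0.0, 1.0, 1000)
# Two hypothetical attacks that are confident on *different* members.
attack_a = np.r_[rng.normal(2.0, 1.0, 500), rng.normal(0.0, 1.0, 500)]
attack_b = np.r_[rng.normal(0.0, 1.0, 500), rng.normal(2.0, 1.0, 500)]

d_a = detected_members_at_fpr(attack_a, nonmembers)
d_b = detected_members_at_fpr(attack_b, nonmembers)
overlap = len(d_a & d_b) / max(len(d_a | d_b), 1)  # Jaccard overlap
print(f"Jaccard overlap: {overlap:.2f}")           # small -> high disparity
```

Both attacks detect many members at the same 0.1 FPR, yet the Jaccard overlap of their detected sets is small, which is exactly the disparity the Venn diagram visualizes.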
Most existing works [10, 46, 47] that utilize MIAs for privacy assessment and machine unlearning verification employ a single instance of an MIA in their experiments. However, the construction of these MIAs involves inherent randomness, associated with data shuffling/sampling and the training of shadow/attack models, where the randomness stems from factors such as optimization, weight initialization, and data batching. It has been shown that training randomly initialized neural networks explores different modes in function space [12]. Therefore, sources of randomness inevitably lead to different decision boundaries for membership detection, often resulting in significant variance in attack outcomes among different instances of the same attack with the same auxiliary knowledge. Figure 1b shows that, for the same attack (the Class-NN attack [44]), three instances trained on the same shadow dataset with different random seeds have largely non-overlapping member sets. This indicates that the attack outcome can be highly sensitive to the randomness of attack construction. This raises another common issue in current research that uses MIAs for privacy assessment and performance evaluation:

• Q3: Is it sufficient to evaluate and report results based solely on a single instance of an MIA, as is common in existing works, while disregarding the disparities among instances that naturally arise from randomness in attack construction?

In this paper, we argue that using MIAs in their current form for evaluation, without accounting for these disparities, may lead to incomplete or potentially unreliable results. To address these concerns, we first systematically investigate the disparities among different MIA methods and their instances. We propose a novel framework that introduces coverage and stability analysis to evaluate and quantify the disparities of MIA methods through multiple attack instances constructed with different random seeds.
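One illustrative reading of coverage and stability over multiple seeded instances is sketched below; the paper's precise definitions may differ, so treat the union/intersection formulation as an assumption.

```python
def coverage_and_stability(detected_sets):
    """Coverage: members caught by *any* instance (the union).
    Stability: fraction of the coverage caught by *every* instance
    (intersection over union). Illustrative definitions only; the
    paper's exact formulation may differ."""
    union = set().union(*detected_sets)
    inter = set.intersection(*detected_sets)
    stability = len(inter) / max(len(union), 1)
    return union, stability

# Three instances of the same attack, differing only by random seed
# (hypothetical detected-member index sets).
instances = [{0, 1, 2, 3}, {1, 2, 4, 5}, {2, 3, 5, 6}]
union, stability = coverage_and_stability(instances)
print(len(union), round(stability, 2))  # 7 0.14
```

High coverage with low stability is precisely the pathology of Figure 1b: the attack collectively exposes many members, but any single seeded instance reports only a shifting subset of them.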
Our extensive experiments highlight significant issues of instability and disparity inherent in MIAs. To better understand these disparities, we analyze the signals and features used by different MIAs to determine membership, as well as the influence of randomness in their construction. Our analysis reveals that different attacks may focus on samples with distinct characteristics, resulting in divergent member-detection outcomes. Furthermore, we propose an ensemble framework with three strategies to address the disparity issues in MIAs. It integrates different MIA methods from distinct perspectives: coverage-oriented, stability-oriented, and majority-oriented. These strategies combine multiple random instances of each MIA and further integrate different MIA methods to account for detection disparities. This framework not only enables the construction of more powerful attacks by leveraging the diverse strengths of existing MIAs and incorporating future advancements, but also provides an evaluation protocol that enhances the comprehensiveness of privacy evaluation. Our extensive experiments demonstrate that these ensemble strategies achieve higher performance on traditional metrics. For example, compared to the top-performing MIA, our ensemble improves ROC AUC and balanced accuracy by 36% and 24%, respectively, and increases the TPR at 0.1% FPR by a factor of five on CIFAR-10. In addition, we discuss and evaluate practical strategies to reduce the computational cost of the ensemble. Beyond the metrics, our ensemble strategies and their pronounced increase in attack performance serve as constructive proof of the issues raised in Q1, Q2, and Q3. Specifically, a "less powerful" but high-disparity MIA remains valuable for uncovering privacy risks overlooked by other attacks and can further improve overall effectiveness through the ensemble (Q1).
Relying on a single attack or instance, even one considered state-of-the-art, may underestimate true membership privacy risks, as members undetected by one attack may still be exposed by another (Q2, Q3). This has concrete implications for privacy practitioners and researchers applying MIAs in machine unlearning, privacy auditing, and defense evaluation: current evaluation practices that rely on a single attack instance may be unreliable, since they fail to capture the full spectrum of vulnerabilities posed by inherent disparities in MIAs. We conclude with a discussion of these implications and actionable directions for future MIA research, advocating for holistic consideration of disparities and the adoption of ensemble strategies as an evaluation protocol to enable more reliable and comprehensive privacy assessments. The source code is available at https://github.com/RPI-DSPlab/mia-disparity.

# 2 Background and Related Work

Membership inference attacks (MIAs) aim to identify whether a specific sample was used as training data for a target model. This paper focuses on black-box attacks, in which attackers can only query the target model to obtain a prediction for a data point and use it to infer membership. In addition, attackers can leverage an auxiliary dataset drawn from a distribution similar to that of the target model's training set. Formally, given a target sample $x$, a target model $F_T$ trained on dataset $\mathcal{D}_T$, and an auxiliary dataset $\mathcal{D}_A$, a membership inference attack $\mathcal{A}$ can be defined as:
$$ \mathcal { A } ( F _ { T } , \mathcal { D } _ { A } , x , \phi ) \to \{ 0 , 1 \} $$
Here $\phi$ is a feature extraction function applied to samples, and $\mathcal{A}$ uses $\phi(x)$ as a signal to determine the membership of $x$, where 1 indicates that $x$ is a member, i.e., $x \in \mathcal{D}_T$, and 0 indicates otherwise.
For simplicity, $x$ denotes a sample together with its ground-truth class, i.e., the pair $(x, y)$. Current MIAs utilize various feature extraction functions $\phi$, such as the loss [8, 52], the full confidence vector output of $F_T$ [44], or the loss trajectory [30]. As a typical intermediate step, $\mathcal{A}$ assigns a membership score $\mathrm{Score}_{\mathcal{A}}(x)$ to every sample $x$ and compares it with a threshold to decide membership.

# 2.1 Representative MIAs

MIAs have been developed for a wide range of applications, including language models. In this paper, we focus on a number of representative MIAs that have been widely used for privacy evaluation and assessment, for unlearning verification, or as baselines when developing stronger MIAs.

LOSS Attack (Yeom et al. [52]). This method considers an instance $x$ a member of the training set if the loss of $F_T$ on $x$ is less than a global threshold, set as the average loss across the training set. Formally, let $\ell(x, F_T)$ be the loss of $F_T$ on instance $x$. The LOSS attack predicts $x \in \mathcal{D}_T$ if:
$$ \ell ( x , F _ { T } ) < \frac { 1 } { | \mathcal { D } _ { T } | } \sum _ { x ^ { \prime } \in \mathcal { D } _ { T } } \ell ( x ^ { \prime } , F _ { T } ) $$
The right-hand side of (2) serves as the threshold for membership prediction. For each sample $x$, its MIA score is computed as 1 minus its normalized loss on $F_T$.

Class-NN (Shokri et al. [44]). This attack trains class-specific neural networks as membership classifiers for each class using data from shadow models. The adversary divides $\mathcal{D}_A$ into subsets $\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_k$, then further splits each subset into $\mathcal{D}_k^{in}$ and $\mathcal{D}_k^{out}$.
Subsequently, $k$ shadow models $f_1, f_2, \ldots, f_k$ are trained on $\mathcal{D}_1^{in}, \mathcal{D}_2^{in}, \ldots, \mathcal{D}_k^{in}$. An attack dataset can be constructed as
$$ \{(f_i(x), y, \mathrm{in}) \mid x \in \mathcal{D}_i^{in}, \forall i \in [k]\} \cup \{(f_i(x), y, \mathrm{out}) \mid x \in \mathcal{D}_i^{out}, \forall i \in [k]\} $$
To train the attack classifier $C_j$ for class $j$, the adversary selects the entries $(f_i(x), y, \mathrm{in/out})$ with $y = j$ and uses them to train $C_j$. To determine if a sample $x$ belongs to the training set of $F_T$, the adversary queries $C_{j=y}$ with $F_T(x)$, where $y$ is the label of $x$. The MIA score of this attack is the logit of the attack classifier $C_{j=y}$ for sample $x$ being a member.

Augmentation Attack. (Choquette-Choo et al. [8]) This label-only attack uses data augmentation techniques to generate translated versions of data points, querying a shadow model trained on $\mathcal{D}_A$ to gather predictions which train an attack classifier $C$. To infer membership for a data point $x$, it generates translated versions $\{\hat{x}_1, \ldots, \hat{x}_n\}$, queries them on the target model $F_T$ to obtain predictions $\{F_T(\hat{x}_1), \ldots, F_T(\hat{x}_n)\}$, and uses these predictions to make inferences with $C$. The MIA score is the logit of the attack classifier's prediction $C(x)$ being a member.

Difficulty Calibration Loss Attack. (Watson et al. [50]) This attack improves traditional loss-based attacks by calibrating membership scores using losses from both the target and shadow models, accounting for sample difficulty.
It queries both the target model $F_T$ and a shadow model $f_s$ (trained on $\mathcal{D}_A$) on all $x \in \mathcal{D}_T$, producing two sets of predictions: $\hat{y}^T$ and $\hat{y}^s$. The losses for each prediction, $\ell^T$ and $\ell^s$, are computed using cross-entropy loss. The uncalibrated membership scores $\ell^T$ are adjusted by computing $s^{cal} = \ell^T - \ell^s$. The threshold $\tau$ for determining membership is selected by splitting $\mathcal{D}_A$ into members (the shadow model's training set) and non-members; with the roles reversed, the target model $F_T$ serves as the calibration model, and $\tau$ is chosen to maximize the prediction accuracy of the losses of $\mathcal{D}_A$ on the shadow model $f_s$ calibrated by $F_T$. Similar to the LOSS attack, the MIA score of each sample is $1 - \tilde{\ell}$, where $\tilde{\ell}$ is its normalized calibrated loss on $\mathcal{D}_T$.

LiRA. (Carlini et al. [3]) This Likelihood Ratio-based, instance-specific attack computes the likelihood ratio of losses for models trained with and without a particular instance, determining membership based on a threshold that optimizes attack effectiveness. For each instance $x$, let $\mathcal{D}_{A,x}$ and $\mathcal{D}_{A,\bar{x}}$ be the subsets of $\mathcal{D}_A$ with and without $x$, respectively. The adversary trains shadow models $\{f_{x,1}, f_{x,2}, \ldots, f_{x,m}\}$ on random subsets of $\mathcal{D}_{A,x}$, and $\{f_{\bar{x},1}, f_{\bar{x},2}, \ldots, f_{\bar{x},n}\}$ on random subsets of $\mathcal{D}_{A,\bar{x}}$.
The likelihood ratio for $x$ is then computed as:
$$ LR(x) = \frac{\prod_{i=1}^{m} p(\ell(x, f_{x,i}) \mid x \in \mathcal{D}_T)}{\prod_{i=1}^{n} p(\ell(x, f_{\bar{x},i}) \mid x \notin \mathcal{D}_T)} $$
where $p(\cdot \mid x \in \mathcal{D}_T)$ and $p(\cdot \mid x \notin \mathcal{D}_T)$ are the probability density functions of the losses conditioned on $x$ being a member or non-member of $\mathcal{D}_T$, respectively. The adversary then chooses a threshold $\tau$ for the likelihood ratio that optimizes the effectiveness of the attack, especially aiming for a low false-positive rate. The MIA score of LiRA is the likelihood of $x$ being a member.

Reference Attack. (Ye et al. [51]) This attack (Attack R) uses a similar approach to LiRA by Carlini et al. [3]. It prepares $m$ shadow models $\{f_{x,1}, f_{x,2}, \ldots, f_{x,m}\}$ on $\mathcal{D}_A$ with different train-test partitions. It calculates the membership score as:
$$ \mathrm{Pr}_{\theta'}\left(\frac{\mathrm{Pr}(x \mid \theta)}{\mathrm{Pr}(x \mid \theta')} \geq 1\right) $$
where $\mathrm{Pr}(x \mid \theta')$ is the likelihood (confidence) of sample $x$ evaluated on the shadow models $\theta' \in \{f_{x,1}, f_{x,2}, \ldots, f_{x,m}\}$, and $\theta$ is the target model $F_T$. Similar to LiRA, the MIA score is the likelihood of $x$ being a member.

Loss Trajectory Attack. (Liu et al. [30]) This attack monitors the change in the loss of each sample over multiple epochs, using knowledge distillation and cross-entropy loss to track and compare loss trajectories for membership inference.
It involves training a shadow model $f_s$ on $\mathcal{D}_A$ and applying knowledge distillation [17] to both $f_s$ and $F_T$, saving a checkpoint $\bar{f}^I$ at each epoch $I$ over $n$ training epochs to capture the loss trajectory of each sample. For each sample $x \in \mathcal{D}_A$, its loss trajectory $\ell(x, f_s)$ can be obtained using each distillation checkpoint of the shadow model. We collect all loss trajectories to construct an attack training set similar to (3) to train an attack classifier $C$. For a target sample $x$, the loss trajectory $\ell(x, F_T)$ is obtained using the distillation checkpoints of the target model $F_T$. The classifier $C$ is then queried with $\ell(x, F_T)$ to determine membership. The MIA score is $C$'s output logit on the loss trajectory of $x$ for predicting it as a member.

# 2.2 MIA Performance Metrics

The following metrics are commonly used to evaluate the performance of MIAs [3, 8, 30, 50].

• Balanced Accuracy measures the accuracy of membership predictions on a test set with balanced priors (equal numbers of members and non-members).
• ROC (Receiver Operating Characteristic) curve plots the true positive rate (TPR) against the false positive rate (FPR) at various threshold levels, providing a comprehensive view of the trade-offs between an MIA's TPR and FPR.
• AUC (Area Under the ROC Curve) provides a single scalar value summarizing the overall performance of an attack. A higher AUC indicates that the attack is better at distinguishing members from non-members across all thresholds.
• TPR@Low FPR focuses on the practical effectiveness of MIAs. A low false positive rate imposes a constraint on membership predictions, requiring the model to be more "cautious" when predicting members to minimize false alarms.
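As a concrete sketch of the TPR@Low FPR metric (our own minimal implementation, assuming higher scores mean "more member-like"), one can derive the threshold from the non-member scores and then measure TPR on the members:

```python
def tpr_at_fpr(member_scores, nonmember_scores, target_fpr):
    """TPR@Low FPR: choose the score threshold whose false positive rate
    on non-members is at most target_fpr, then report the true positive
    rate on members at that threshold (higher score = more member-like)."""
    # Sort non-member scores descending: accepting everything strictly
    # above the k-th highest non-member score yields FPR = k / n.
    neg = sorted(nonmember_scores, reverse=True)
    k = int(target_fpr * len(neg))          # false positives allowed
    threshold = neg[k] if k < len(neg) else float("-inf")
    tp = sum(1 for s in member_scores if s > threshold)
    return tp / len(member_scores)

members = [0.9, 0.8, 0.7, 0.2]
nonmembers = [0.6, 0.5, 0.4, 0.3, 0.1, 0.05, 0.02, 0.01, 0.005, 0.001]
rate = tpr_at_fpr(members, nonmembers, target_fpr=0.1)  # 0.75 here
```

With tied scores the realized FPR can deviate from the target; production evaluations typically sweep the full ROC curve instead.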
# 3 Disparity Evaluation Methodology

In this section, we present the metrics and methodology used to evaluate disparities of MIAs at both the instance level and the method level.

# 3.1 Instance Level Disparity Over Randomness

Most MIAs [3, 30, 44, 51] for deep learning rely on the shadow training technique, which trains multiple shadow models on an auxiliary dataset to replicate the behavior of the target model. This process inherently involves randomness from several sources, including the partitioning of the auxiliary dataset into member and non-member sets, weight initialization in the training algorithm, and data shuffling and batching. These factors introduce variability in the outcomes of both shadow models and attack classifiers, ultimately affecting the detection outcomes of MIAs. In our study, we abstract this randomness using a single random seed, representing a random MIA instance that an attacker might create using the same algorithm but under different randomness sources in real-world scenarios.

Disparity Metric: To evaluate an MIA's instance-level disparity in member detection, we introduce the consistency score, which quantifies the similarity of membership predictions between attack instances using the pairwise Jaccard Index. The Jaccard Index (or Jaccard Similarity) measures the similarity between finite sample sets and is defined as the size of the intersection divided by the size of the union of the sample sets.
Given a set of random seeds $S$, we create $|S|$ instances of an MIA $\mathcal{A}$; its consistency on target dataset $\mathcal{D}$ is defined as the average Jaccard Index between every pair of attack instances $\mathcal{A}^i$ and $\mathcal{A}^j$ over their detected member sets $\mathbb{M}_{\mathcal{D}}(\mathcal{A}^i)$ and $\mathbb{M}_{\mathcal{D}}(\mathcal{A}^j)$, i.e.,
$$ \mathrm{Consistency}_{S}^{\mathcal{D}}(\mathcal{A}) = \frac{1}{\binom{|S|}{2}} \sum_{\substack{i, j \in S \\ i < j}} J\big(\mathbb{M}_{\mathcal{D}}(\mathcal{A}^{i}), \mathbb{M}_{\mathcal{D}}(\mathcal{A}^{j})\big) $$
where the Jaccard Index $J\big(\mathbb{M}(\mathcal{A}^{i}), \mathbb{M}(\mathcal{A}^{j})\big) = \frac{|\mathbb{M}(\mathcal{A}^{i}) \cap \mathbb{M}(\mathcal{A}^{j})|}{|\mathbb{M}(\mathcal{A}^{i}) \cup \mathbb{M}(\mathcal{A}^{j})|}$. A lower consistency score indicates greater discrepancy in the detected member sets across different instances of the same attack, even when provided with identical auxiliary information and target model. We use this metric to measure the variance in an MIA method's membership detection outcomes due to randomness in its construction; low consistency indicates that evaluations based on a single instance may not reliably capture an MIA method's true privacy risks. For the LOSS attack, which uses a fixed global threshold and does not involve randomness in its construction, the consistency score is 1.

Coverage and Stability: Due to the discrepancy in member detection across random instances of an MIA, evaluations based on a single random instance, as is common in most existing works, cannot fully capture the true privacy risk posed by an MIA method at the method level under the same auxiliary knowledge, as opposed to the leakage revealed by a specific instantiation. Consequently, such evaluations may provide an incomplete picture of the effectiveness of a privacy solution. To address this limitation, we introduce the evaluation measures coverage and stability.
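A minimal sketch of the consistency score (the set representation and names are ours), assuming each instance's detected member set is given as a Python set of sample indices:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard Index |A ∩ B| / |A ∪ B| (defined as 1.0 for two empty sets)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def consistency(member_sets):
    """Average pairwise Jaccard Index over the detected member sets
    M_D(A^i) produced by |S| instances of the same attack."""
    pairs = list(combinations(member_sets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Three instances of one attack, each with its detected member set at a fixed FPR:
instances = [{1, 2, 3, 4}, {2, 3, 4, 5}, {3, 4, 5, 6}]
score = consistency(instances)  # (0.6 + 1/3 + 0.6) / 3 ≈ 0.51
```

A deterministic attack like LOSS yields identical sets across seeds and hence a consistency of exactly 1.0.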
We refer to the union of true positive attack results from multiple instances of an MIA constructed with different random seeds as the coverage of the attack. Formally, given an attack $\mathcal{A}$ and a set of possible seeds $S$, for each random seed $s \in S$ we can construct an instance of $\mathcal{A}$ with randomness generated from seed $s$, denoted by $\mathcal{A}^s$. The membership prediction for a data point $x$ is $\mathcal{A}^s(x)$. The coverage of attack $\mathcal{A}$ is represented as
$$ \mathrm{Coverage}_{S}(\mathcal{A}) = \left\{ x \in \mathcal{D}_T : \bigvee_{s \in S} \mathcal{A}^{s}(x) = 1 \right\} $$
Similarly, we define stability as the intersection of true positives across all instances, reflecting how consistently an attack identifies members despite randomness. This excludes members whose status is inconsistently predicted across runs:
$$ \mathrm{Stability}_{S}(\mathcal{A}) = \left\{ x \in \mathcal{D}_T : \bigwedge_{s \in S} \mathcal{A}^{s}(x) = 1 \right\} $$
Coverage reflects the extent of potential privacy leakage, while stability captures the consistency of privacy vulnerability under an MIA method. Because the privacy risks they reveal are independent of any specific instance, given their convergence observed in Section 4.3, we compute the Jaccard similarity of coverage and stability to characterize the method-level disparities across different MIAs, i.e., the differences in the subsets of the training data targeted by different MIA methods (regardless of any specific instance).

Illustrative Example: Figure 2 shows the union (coverage) and intersection (stability) of three instances for each attack method.
As we can see, all attacks that involve randomness from shadow model training, i.e., all except the LOSS attack [52], exhibit significant variations in member detection, with their coverage and stability changing considerably from one instance to three instances. It is therefore evident that single-instance-based MIA evaluations or assessments may be unreliable.

# 3.2 Multi-instance Attack Analysis

To analyze the instance-level disparity through the lens of coverage and stability, we introduce a multi-instance analysis framework. As demonstrated in Figure 3, we first prepare $n$ instances of an attack $\mathcal{A}$ using the same auxiliary dataset $\mathcal{D}_A$ but different random seeds. To attack a target model $F_T$, each MIA instance performs inferences on the target dataset $\mathcal{D}_T$ with access to $F_T$. The MIA scores obtained are converted to binary membership predictions using Algorithm 1 (AdjustFPR) to obtain predictions at a specific FPR level $\beta$. This step is crucial because we need to maintain a consistent FPR level to ensure a fair comparison of coverage and stability, which are derived from true positive detections, across different MIA instances. With the predictions of multiple instances of $\mathcal{A}$, we can compute the coverage and stability of $\mathcal{A}$ over $n$ instances. Given multi-instance membership inference predictions, coverage helps capture all possible risks, i.e., members that are vulnerable to any MIA instance at a given FPR level $\beta$, while stability focuses on vulnerable members that are consistently detected by an MIA across different random instantiations. Our complete evaluation framework follows Algorithm 2 to compare different attack instances under the same conditions.
Algorithm 1 (FPR-Based Thresholding, AdjustFPR) predicts membership by determining a MIA score threshold $\tau$ that achieves a specified target FPR level $\beta$. Algorithm 2 (Multi-instance Attack Analysis Framework) handles the training, preparation, and execution of attacks, and computes aggregated results to assess stability and coverage across different attack configurations. It splits the dataset $\mathcal{D}$ into two non-overlapping datasets, the auxiliary dataset $\mathcal{D}_A$ and the target dataset $\mathcal{D}_T$. $\mathcal{D}_T$ is further divided into two equal-size, non-overlapping subsets in line 2, one for training the target model (constituting the members) and one for testing (comprising the non-members). This division ensures that both subsets are of equal size $(|\mathcal{D}_{\mathrm{target\_train}}| = |\mathcal{D}_{\mathrm{target\_test}}|)$, ensuring a balanced prior of memberships. Lines 6 to 12 apply the pipeline in Figure 3 to each attack method. Line 13 calculates the stability and coverage of each attack over $|S|$ instances.

# 3.3 MIA Method Level Disparity

As discussed in Section 3.1, we use coverage and stability to evaluate disparities between different MIAs. These measures enable us to analyze how existing attacks differ at the method level in terms of both the extent of vulnerable members they expose (i.e., coverage) and the consistency of vulnerable samples across instances (i.e., stability), despite the presence of instance-level variance. To assess the method-level disparity, we compute the Jaccard index between the coverage/stability sets of different MIA methods. For each MIA, both coverage and stability are evaluated based on the predictions from the same number of instances at an identical FPR level to ensure fairness, as described in Section 3.2.
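A minimal sketch of coverage, stability, and the method-level comparison (assuming each instance's true-positive detections are given as Python sets; the names are ours):

```python
def coverage(tp_sets):
    """Coverage_S(A): union of true-positive member sets across instances."""
    out = set()
    for s in tp_sets:
        out |= s
    return out

def stability(tp_sets):
    """Stability_S(A): intersection of true-positive member sets."""
    out = set(tp_sets[0])
    for s in tp_sets[1:]:
        out &= s
    return out

def method_level_similarity(tp_sets_a, tp_sets_b):
    """Jaccard similarity of two attacks' coverage sets and stability sets,
    computed from the same number of instances at the same FPR level."""
    def jac(x, y):
        union = x | y
        return len(x & y) / len(union) if union else 1.0
    return (jac(coverage(tp_sets_a), coverage(tp_sets_b)),
            jac(stability(tp_sets_a), stability(tp_sets_b)))

# Two attacks, two instances each (true positives detected at a fixed FPR):
attack_a = [{1, 2}, {2, 3}]
attack_b = [{2, 4}, {2, 5}]
cov_sim, stab_sim = method_level_similarity(attack_a, attack_b)  # 0.2, 1.0
```

Holding the number of instances and the FPR level fixed for both attacks, as in Section 3.2, keeps this comparison fair.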
In addition, we conducted a preliminary empirical analysis of the following two aspects to explore potential factors that contribute to method-level disparities.

Figure 2: Venn Diagram of three MIA instances at $\mathbf{FPR=0.1}$ for different attack methods: (a) Augmentation Attack, (b) LiRA, (c) Loss Trajectory Attack, (d) Diff-Calibration Loss Attack, (e) Reference Attack, (f) LOSS Attack. Each set represents the true positive samples from one instance. The Venn diagram of the Class-NN attack is shown in Figure 1b.

Figure 3: MIA Multi-Instance Analysis Pipeline. The process includes preparing attack instances, inferring membership, and adjusting predictions based on a given FPR.

• Stability Difference in Model Output Space: We define $\mathcal{A}$-unique samples as the set of members correctly identified by the stability of MIA $\mathcal{A}$, but not by the stability of any other MIA method. Formally, the set of $\mathcal{A}$-unique samples, denoted as $S_{\mathcal{A}}^{\mathrm{unique}}$, is expressed as:
$$ S_{\mathcal{A}}^{\mathrm{unique}} = \{ x \mid x \in \mathrm{Stability}(\mathcal{A}) \land x \notin \bigcup_{B \neq \mathcal{A}} \mathrm{Stability}(B) \} $$
In a black-box attack setting, logits encapsulate the maximum information returned by a query. Given the model's logit outputs on these uniquely and consistently captured samples, we examine the differences in their distributions to understand whether an MIA targets, or is more sensitive to, distinct output distributions of members, which may help explain the disparities among MIAs.

• Attack Signal Difference: Different MIA methods use different feature extraction functions $\phi$, resulting in different signals for MIA.
To understand the impact of signals while isolating the effects of randomness and differences in MIA methodology, we focus on the Class-NN MIA method and on $\mathcal{A}$-covered samples, the members identified by MIA $\mathcal{A}$'s coverage:
$$ S_{\mathcal{A}}^{\mathrm{covered}} = \{ x \mid x \in \mathrm{Coverage}(\mathcal{A}) \} $$
Class-NN uses logits as attack signals, so we can easily manipulate the signal by restricting it to only the top-$x$ logits while masking out the rest, referred to as "$x$-top" Class-NN. This modification allows us to observe how variations in the signals received by the same MIA influence the resulting detected member sets.

# 4.1 Experiment Setup

To make our empirical analysis comprehensive, our experiments use five datasets and four neural network architectures, listed below. A more detailed setup, including the hyperparameter choices of MIAs, is provided in Appendix Section A.

Datasets. We use five datasets commonly adopted in MIA research: CIFAR-10, CIFAR-100, CINIC-10, Purchase100, and Texas100. CIFAR-10 and CIFAR-100 consist of 60,000 32×32 color images divided into 10 and 100 classes, respectively. CINIC-10, an extension of CIFAR-10, includes 270,000 images derived from CIFAR-10 and ImageNet. Purchase100 and Texas100 are structured datasets representing consumer purchase behaviors and hospital discharge records, respectively. Detailed dataset descriptions are provided in Appendix Section A.1. Unless otherwise specified, we present experimental results based on CIFAR-10.

Models. We employed ResNet-56 [16], MobileNetV2 [43], VGG-16 [45], and WideResNet-32 [54] as our primary model architectures for image datasets, with ResNet-56 being the main model used for reporting experimental results. All models are optimized with SGD and a cosine learning rate scheduler [31]. We choose an MLP for the tabular datasets Purchase100 and Texas100.
Training and evaluation configurations, including dataset partitions and training epochs, are detailed in Appendix Section A.2.

MIA setup. For most MIAs examined in this paper, we adhered to the standard settings used to produce the main results in their respective papers, with the exception of LiRA: our experiments use the online version of LiRA described in its paper. A detailed discussion of the setup for these MIAs and LiRA is provided in Appendix A.3, and the consistency results of offline LiRA are discussed in Appendix Section D. For disparity evaluation, we utilize six instances to compute coverage, stability, and consistency scores, as these metrics generally start to converge in most cases at this number of instances, as shown in Section 4.3. Additionally, we focus on presenting results at $\mathrm{FPR}=0.1$, with results for other FPR settings available in the Appendix. We also examine the impact of outliers and of the auxiliary-target dataset distribution gap, with relevant results presented in Appendix Sections E and C.4.

# 4 Evaluation

In this section, we evaluate the disparities between the seven widely used MIAs described in Section 2, using the methodology introduced earlier to assess both the instance-level and the method-level disparities, and investigate their potential causes.

# 4.2 MIA Instance Level Disparity

4.2.1 Inherent Low Consistency of MIAs. Following the methodology introduced in Section 3.1, we evaluate the consistency scores of different MIAs, each using six instances under the standard setting. Figure 4 shows the consistency scores for each MIA.
Figure 4: Consistency scores show inherent disparities among pairs of instances of MIAs (except the LOSS attack) across datasets and models (ResNet-56, VGG-16, WideResNet-32, MobileNetV2). Consistency evaluated at $\mathbf{FPR=0.1}$.

Figure 5: Shadow model training (a) and attack model training (b) both contribute to the disparity. Consistency is measured with six instances at $\mathbf{FPR=0.1}$ on CIFAR-10.

Figure 6: Relation between the number of shadow models and consistency. The model architecture is ResNet-56 and the dataset is CIFAR-10. Consistency is measured with six instances at $\mathbf{FPR=0.1}$.

Except for the LOSS attack, all other attacks exhibit low consistency across datasets and model architectures, with an average consistency score below 0.4, highlighting the inherent instance-level inconsistency. The instance-level MIA consistency appears to be influenced by the dataset. Attacks on CIFAR-100 demonstrate the highest overall consistency, likely due to increased over-fitting, as indicated by the larger generalization gap between the training and testing performance of target models (Appendix Table 2). Greater overfitting results in a larger set of common members that are easier for MIAs to infer, thereby leading to higher consistency. Additionally, certain attacks exhibit higher consistency on specific datasets and model architectures. For example, LiRA and the Reference Attack achieve relatively high consistency on CIFAR-100 with VGG-16 and MobileNetV2. In contrast, Class-NN consistently shows low consistency across all datasets. This inconsistency arises from its non-overlapping shadow training sets, which lead to less aligned shadow models.

4.2.2 Disparity Factors.
The randomness in MIA construction involves random partitioning of the auxiliary dataset into member and non-member sets, shadow model training, and attack model training. We investigate how these factors contribute to instance-level disparity in MIAs, particularly at low false positive rates (FPR).

Shadow Model Training: For MIAs that rely on shadow models, shadow model training involves data shuffling and partitioning, weight initialization, and other randomness factors that are specific to an MIA, such as model distillation in the Loss Trajectory Attack. To analyze the effects of shadow model training, we compute the consistency score for instances created with different initial weights and compare it to the score for instances created with the same initial weights. When all instances start with the same set of initial weights for shadow models, data shuffling and partitioning become the primary sources of randomness in shadow model training. Figure 5a shows the consistency scores of each MIA under these two conditions, where the "False" condition represents the same set of initial model weights. Comparing the two, we observe that the effect of varying initial weights on disparity is minimal (i.e., changes in the consistency score are no more than 0.05), indicating that random data shuffling and partitioning are the primary contributors to the disparity.

Attack Model Training: For attacks that include attack classification models, the training of these models can also contribute to instance-level disparity. To evaluate this effect, we fix the shadow models across all instances, ensuring that their attack models are trained on the same attack training set (corresponding to the "True" case in Figure 5b).
Comparing this setup with the normal scenario where shadow models and attack training sets vary, we find that the consistency score increases by $8 \%$ to $12 \%$ . This indicates that attack model training also contributes to MIA disparity, though its impact is less pronounced than that of shadow model training. These findings highlight that both shadow model training and attack model training are critical factors driving high instance-level disparity in MIAs. Number of Shadow Models: Additionally, we examine the impact of the number of shadow models used in an MIA on disparity. The Class-NN Attack is excluded from this analysis, as it trains shadow models on disjoint datasets, meaning that increasing the number of shadow models reduces the training data size and thus the quality of shadow models. Loss Trajectory and Augmentation attacks are also excluded, as their methodologies do not specify how they operate with multiple shadow models. As shown in Figure 6, increasing the number of shadow models in an MIA slightly increases the consistency at the instance level. For the Difficulty Calibration Loss Attack, the consistency increases as the calibration term (computed from shadow-model losses) becomes a more stable empirical estimate of difficulty. This reduces the variance of the calibration loss and enhances consistency. For LiRA and the Reference Attack, the increase in consistency is relatively smaller. Overall, despite the increase in the number of shadow models, instance-level disparity remains significant across MIAs. # 4.3 Coverage and Stability Over Randomness 4.3.1 Coverage Over Randomness. To evaluate the coverage of each attack, we compute the union of positive membership predictions across varying numbers of instances and present the results in Figure 7. Figure 7a shows that as we increase the number of instances, TPR (i.e., coverage) increases accordingly. 
However, as shown in Figure 7b, the FPR also rises with additional instances, indicating that more non-members are incorrectly classified as members. Figure 9a further illustrates the corresponding decrease in precision. Notably, the drop in precision is less pronounced, as the growth in true positives partially offsets the increase in false positives. As the true positive set stabilizes, precision also converges. We repeat the same experiment by configuring each MIA instance with different FPR thresholds: 0.001, 0.01, and 0.2. The results are provided in Figure 18 in Appendix Section B. We observe that the Loss Trajectory, Reference, and Calibrated Loss attacks consistently achieve the highest number of true positive samples when multiple instances are used. At FPR 0.01, the coverage of these attacks with multiple instances captures approximately five times more members compared to a single instance. In contrast, the coverage of the LOSS attack remains unchanged across all metrics, as it is not affected by randomness. We also observe convergence at the tail end of each TPR curve, indicating that an MIA instance under a fixed FPR can only identify a subset of members within a bounded group, even under different randomization conditions.

4.3.2 Stability Over Randomness. Following the same setting as the coverage evaluation, we compute the intersection of positive membership predictions to assess stability. The results are presented in Figure 8. The figures show that both TPR and FPR decrease with the number of instances, indicating substantial variance in the detection results between different MIA instances, except for the LOSS attack, which is not affected by randomness. We also conduct the same experiment with MIA instances at different FPR values of 0.001, 0.01, and 0.2. The results are provided in Figure 19 in Appendix Section B.
We observed that at low FPRs (e.g., 0.001, 0.01), the stability of most attacks converges to fewer than 10 and 50 true positive members, respectively. This finding highlights that only a small subset of data points is consistently vulnerable to a given MIA method, regardless of randomization effects. As the number of consistently identified members decreases, precision (Figure 9b) increases significantly. Specifically, the Loss Trajectory, Calibrated Loss, and LiRA attacks achieve precision values that exceed $95\%$. In contrast, the Augmentation attack and Class-NN attack fail to show similar improvements, reflecting their limited capability to consistently predict vulnerable members. Importantly, achieving high precision does not require all 16 instances; most precision gains are realized within the first six instances, while the FPR of stability drops to near zero.

Figure 7: Trends in TPR and FPR for coverage under different numbers of instances with $\mathrm{FPR}=0.1$. For all attacks, each instance is created using the same auxiliary dataset of 30,000 samples, and they predict membership on a disjoint target dataset of 30,000 samples.

Figure 8: Trends in TPR and FPR for stability, following the same setup as Figure 7.

Figure 9: Precision of coverage and stability corresponding to Figures 7 and 8.

# 4.4 MIA Method Level Disparity

Following the methodology in Section 3.3, we compute the Jaccard similarity of coverage/stability between every pair of MIA methods. Figure 10 presents the results for coverage and stability derived from six instances constructed for each MIA method with $\mathrm{FPR}=0.1$. Both coverage and stability show low similarity between most attack pairs, with Jaccard similarity generally below 0.4 for coverage and 0.1 for stability. The notably lower Jaccard similarity in stability compared to coverage underscores the significant disparities in consistently detected vulnerabilities across different MIAs.

Figure 10: MIA Method Disparity.
The values represent the average Jaccard similarity over 4 experiment runs. Attacks' coverage and stability are calculated with six instances at $\mathrm{FPR}=0.1$ on CIFAR-10. Figure 11: Correlation of Disparity and FPR. The line is a linear regression on all Jaccard similarity scores for different FPR values of instances. Certain attack pairs exhibit similar trends in both coverage and stability. For example, LiRA and the Reference Attack show higher similarity in both measurements, likely due to their shared approach to shadow model training. Similarly, the Loss Calibration, Loss Trajectory, and Reference Attacks show mutual similarity, likely because they are all based on loss signals. Conversely, some attack pairs show significant disagreement; for example, the Loss Attack consistently demonstrates low Jaccard similarity with all other attacks. We also observe that attacks that perform well at low FPR (e.g., the Loss Trajectory, Reference, Loss Calibration, and LiRA attacks) tend to be more mutually similar compared to attacks designed for average-case performance (e.g., the Class-NN, Loss, and Augmentation attacks). Overall, while certain attack pairs produce membership predictions with moderately higher similarity than others, the Jaccard similarity remains low across the board when predictions are made at $\mathrm{FPR}=0.1$. We also evaluate the pairwise similarity between different MIA methods in terms of coverage and stability under varying instance-level FPR values (from 0.001 to 0.2). The results are presented in Appendix Section B. Across different FPR settings, the overall similarity trend remains consistent, and we observe a positive correlation between FPR and similarity. In Figure 11, each point represents the Jaccard similarity value between a pair of attacks at a given FPR level. As shown, both coverage and stability exhibit low similarity at low FPRs.
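The pairwise Jaccard similarity underlying this analysis treats each attack's positive predictions as a set and compares set overlap. A minimal sketch (the attack names and toy prediction vectors below are hypothetical):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity between two boolean prediction vectors,
    treated as the sets of samples each attack flags as members."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both sets empty: define similarity as 1
    return np.logical_and(a, b).sum() / union

# Pairwise similarity matrix over the coverage sets of several attacks
# (toy vectors; real inputs would be coverage/stability predictions).
attacks = {
    "loss":      [1, 0, 0, 1, 0, 0],
    "lira":      [0, 1, 1, 0, 0, 0],
    "reference": [0, 1, 1, 1, 0, 0],
}
names = list(attacks)
sim = {(i, j): jaccard(attacks[i], attacks[j]) for i in names for j in names}
# e.g. sim[("lira", "reference")] -> 2/3, while sim[("loss", "lira")] -> 0.0
```

Low off-diagonal values in such a matrix correspond to the high method-level disparity reported in Figure 10.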
As the FPR increases, the similarity also increases, with coverage similarity values approaching 1. The relatively low similarity at low FPR suggests that different attacks may have distinct insights into predicting members. The member predictions in which these attacks are most confident tend to be nearly disjoint across methods, implying that traditional metrics—particularly those evaluated at low FPR—do not fully capture the diverse behaviors and strengths of different attacks. # 4.5 Disparity Empirical Analysis In this section, we conduct a preliminary empirical analysis of the potential causes of disparities between MIAs, using the methodology presented in Section 3.3. This analysis offers insights into the underlying factors that lead different MIA methods to implicitly target different subsets of training data. 4.5.1 Output Distribution of $\mathcal { A }$-Unique Samples. To understand the differences in the data points that are vulnerable to different attacks, we analyze the output space of the target model $F_T$. We apply Principal Component Analysis (PCA) to extract feature scores from the logits predicted by $F_T$ for each member. Focusing on $\mathcal { A }$-unique members (as defined in Section 3.3) allows us to understand the disparities among MIAs through their uniquely identified members. Figure 12 shows that the $\mathcal { A }$-unique samples identified by different attacks can exhibit different distributions in the model output space. This suggests that each MIA may implicitly favor certain groups or distributions of members, although PCA may only partially capture the underlying characteristics. In particular, as shown in Figures 12a and 12b, the samples uniquely identified by the Loss attack and those identified by the Augmentation attack form visibly different clusters. 4.5.2 Attack Signals of $\mathcal { A }$-Covered Samples.
In addition to differences in the distribution of detected samples in the model output space, disparities among MIAs also stem from their distinct methodologies and the signals they exploit. Quantifying these methodological differences is challenging, as each attack employs different processes that are difficult to formalize within a unified framework. Therefore, we focus on examining how the signals obtained from the target and shadow models contribute to variation in member detection. Following the methodology in Section 3.3, we use the "Top-x" Class-NN attack with varying signals. Figure 13a shows a Venn diagram illustrating the coverage of three variations: the Top-1, Top-3, and Top-10 Class-NN attacks. We observe a greater overlap between the Top-3 and Top-10 Class-NN attacks compared to their overlap with Top-1. This is because the Top-3 and Top-10 attacks leverage more similar signals, as logits beyond the top 3 positions are typically close to zero and contribute little additional information. Figure 12: PCA of $\mathcal { A }$-unique Members. We conduct PCA on the target model $F_T$'s logits for all samples in the target dataset $\mathcal { D }_T$, and plot the $\mathcal { A }$-unique members (defined in Section 3.3) for each attack. It is obtained with six instances at $\mathrm{FPR}=0.1$. Figures 12a and 12b are picked to demonstrate the distribution difference between two MIAs. The explained variance ratios are $48.56\%$ and $15.95\%$ for PC1 (first component) and PC2 (second component), respectively. To better understand the distribution difference of detected members under different Top-x signals, we measure the confidence margin of the target model's predictions for each member.
The confidence margin is defined as the difference between the highest confidence score (representing the most likely class) and the second-highest confidence score (representing the next most likely class) in the output of the target model. It represents how much more confident the model is in its top prediction relative to the closest alternative. Figure 13b presents the kernel density estimate (KDE) of confidence margin values for correctly predicted members, showing that less similar signals lead to greater disparity in the distribution of detected members. Specifically, the confidence margin distributions for the Top-3 and Top-10 signals are similar, while the distribution for Top-1 is more left-skewed and exhibits higher variance. This difference arises because the Class-NN attack with access to Top-3 or Top-10 logits can leverage additional information beyond the single highest logit used by Top-1. As a result, the Top-3 and Top-10 attacks tend to identify more common members with larger confidence margins. These observations on confidence margin distributions align with the overlap patterns observed in the Venn diagram in Figure 13a. The impact of attack signals extends beyond the Top-x Class-NN model. Existing MIAs rely on various signals, including the loss, the confidence score vector, and loss trajectories. While the loss, computed from the confidence score vector, provides a scalar summary, it may omit finer details present in the vector itself that can be beneficial for membership inference. The loss trajectory offers insights into training-time patterns that may reveal more nuanced information. These differences suggest that each type of signal carries unique information that may contribute to disparities in member detection across different MIAs. Figure 13: Top-x Class-NN Attack-Covered Samples with Different Signals. The coverage is calculated using six instances at $\mathrm{FPR}=0.1$.
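The confidence margin defined above can be computed directly from the target model's logits. A minimal sketch (the function name and toy logits are illustrative, not the paper's code):

```python
import numpy as np

def confidence_margin(logits):
    """Per-sample margin between the top-1 and top-2 softmax
    confidences of the target model."""
    logits = np.asarray(logits, dtype=float)
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    top2 = np.sort(probs, axis=1)[:, -2:]  # two largest confidences
    return top2[:, 1] - top2[:, 0]         # top-1 minus top-2

# A confidently predicted sample has a margin near 1; an ambiguous
# sample, where two classes compete, has a margin near 0.
logits = [[8.0, 1.0, 0.5],   # confident prediction
          [2.0, 1.9, 0.1]]   # ambiguous prediction
margins = confidence_margin(logits)
```

Plotting a KDE of these margins for the members each attack variant detects would reproduce the kind of comparison shown in Figure 13b.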
# 4.6 Practical Implications of MIA Disparities Given the significance of instance-level and method-level disparities in MIAs, caution must be exercised when using them for privacy assessment and performance evaluation. Evaluating each MIA separately or relying on a single instance may fail to capture the full spectrum of privacy leakage risks, leading to incomplete assessments. Below, we examine several privacy tasks where MIAs are commonly used and discuss the implications of MIA disparities. Privacy Auditing and Risk Assessment: Several open-source toolkits have been developed for privacy assessment (e.g., [22, 27]), which often incorporate multiple MIA methods to assess privacy leakage in trained models. However, these tools typically run one instance per method and report risk based on population-level metrics derived from single-instance results. This overlooks both instance-level and method-level disparities, which may result in undetected vulnerable member samples and lead to overly optimistic privacy estimates. Machine Unlearning: In many unlearning works (e.g., [7, 24]), MIAs are used to evaluate the effectiveness of unlearning by checking how many samples from a forgetting set (a subset of the training set) remain identifiable after unlearning. A common practice is to instantiate an MIA with an auxiliary dataset and report results from that single instance. However, due to instance-level variability, a sample undetected by the reported instance may still be identified by others, potentially leading to overestimation of unlearning effectiveness. Similarly, relying on a single MIA method can overlook member samples that would be detected by other methods due to method-level disparities. Privacy Defense Evaluation: Many defense mechanisms (e.g., [20, 26]) are evaluated against a single MIA instance. The reported metric reflects exposure risk only for the members detected by that specific instance. 
However, due to randomness in shadow model training and method-specific biases, it is unclear whether a defense appears stronger simply because it performs better on the particular subset exposed by that instance. It remains uncertain whether the defense would perform similarly on samples exposed by other instances or MIA methods. Thus, explicitly addressing MIA disparities is essential for fair and reliable evaluation of privacy risks and defenses. # 5 MIA Ensemble In this section, we propose an ensemble framework that employs various strategies to account for MIA disparities. This framework not only enables the construction of more powerful attacks but also provides an evaluation protocol for more comprehensive and reliable privacy assessments. # 5.1 Ensemble Strategies 5.1.1 Attack Stability Ensemble. Section 4.3 shows that MIA stability converges as more instances are aggregated, and this aggregation reduces false positives, resulting in lower FPR and improved precision. These findings suggest that stability captures members consistently identified across instances regardless of randomness, and can be leveraged within each attack to achieve higher precision. Furthermore, Section 4.4 shows that the stability of different attacks has very little overlap, particularly at low FPR (Figure 11). Accordingly, we propose a two-step ensemble approach: 1) Multi-instance Stability: This step runs multiple instances of an attack $\mathcal { A }$ and takes the "logical and" (i.e., conjunction) of their predictions to determine membership, improving precision. That is, a sample $x$ is regarded as a member only if all of $\mathcal { A }$ 's instances determine $x$ is a member. 2) Multi-attack Union: This step capitalizes on the high precision achieved by the multi-instance step and the complementary nature of different attacks, which tend to identify distinct sets of members.
By taking the "logical or" (i.e., disjunction) of the predictions produced by the multi-instance step for multiple attacks, the ensemble can detect members across all MIAs' stable predictions while maintaining high precision. Formally, let $p_i^{\mathcal{A}}(x)$ represent the membership prediction of the $i$-th instance of attack $\mathcal{A}$ on sample $x$. Note that $p_i^{\mathcal{A}}(x)$ is a prediction thresholded to either 1 or 0. The multi-instance prediction $P_n^{\mathcal{A}}$ is the conjunction of $\{p_1^{\mathcal{A}}, \ldots, p_n^{\mathcal{A}}\}$ for attack $\mathcal{A}$. The multi-attack prediction $P_n^{\{\mathcal{A}_1, \ldots, \mathcal{A}_m\}}$ is the disjunction of the multi-instance predictions $\{P_n^{\mathcal{A}_1}, \ldots, P_n^{\mathcal{A}_m}\}$. $$ \begin{array}{c} P_n^{\mathcal{A}_j}(x) = \displaystyle\bigwedge_{i=1}^{n} p_i^{\mathcal{A}_j}(x) \\ P_n^{\{\mathcal{A}_1, \ldots, \mathcal{A}_m\}}(x) = \displaystyle\bigvee_{j=1}^{m} P_n^{\mathcal{A}_j}(x) \end{array} $$ 5.1.2 Attack Coverage Ensemble. The previous ensemble strategy applies the multi-instance intersection to improve reliability and precision, albeit at the cost of coverage. In contrast, the attack coverage strategy applies the multi-instance union to improve coverage, followed by the same multi-attack union step.
Similarly, we can describe this approach as follows: $$ \begin{array}{lc} \text{1) Multi-instance Coverage:} & P_n^{\mathcal{A}_j}(x) = \displaystyle\bigvee_{i=1}^{n} p_i^{\mathcal{A}_j}(x) \\ \text{2) Multi-attack Union:} & P_n^{\{\mathcal{A}_1, \ldots, \mathcal{A}_m\}}(x) = \displaystyle\bigvee_{j=1}^{m} P_n^{\mathcal{A}_j}(x) \end{array} $$ 5.1.3 Attack Majority Ensemble. While coverage and stability represent two extremes—capturing all potential risks and the most consistently vulnerable samples, respectively—the majority-voting ensemble offers a balanced alternative. This strategy captures samples that are identified as members by the majority of the running instances of a given MIA method. Formally, we have: 1) Multi-instance Majority Voting: $$ P_n^{\mathcal{A}_j}(x) = \left( \sum_{i=1}^{n} p_i^{\mathcal{A}_j}(x) \right) > \frac{n}{2} $$ 2) Multi-attack Union: $P_n^{\{\mathcal{A}_1, \ldots, \mathcal{A}_m\}}(x) = \bigvee_{j=1}^{m} P_n^{\mathcal{A}_j}(x)$ # 5.2 Evaluation For the ensemble, we consider four attacks: the Difficulty Calibration Loss Attack, the Reference Attack, LiRA, and the Loss Trajectory Attack, because out of our seven implemented attacks, only these four improve precision with stability over multiple instances at a low FPR, as shown in Figure 9b. As in our previous setup, we utilize six instances of each MIA for the ensemble. Our proposed ensemble operates on membership predictions rather than membership scores. Therefore, to measure its performance in the TPR-FPR plane, we vary the FPR of the base instances over 100 different FPR values, ranging from $10^{-6}$ to 1, evenly spaced on a logarithmic scale.
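The three two-step strategies can be sketched over boolean prediction arrays. This is a minimal implementation under assumed names and array layout, not the paper's code:

```python
import numpy as np

def ensemble(instance_preds, strategy="stability"):
    """Two-step MIA ensemble over per-attack, per-instance predictions.
    instance_preds: list of (n_instances, n_samples) boolean arrays,
                    one array per attack method.
    Step 1 aggregates instances within each attack; step 2 takes the
    union (disjunction) across attacks."""
    per_attack = []
    for preds in instance_preds:
        preds = np.asarray(preds, dtype=bool)
        n = preds.shape[0]
        if strategy == "stability":      # conjunction of instances
            agg = preds.all(axis=0)
        elif strategy == "coverage":     # disjunction of instances
            agg = preds.any(axis=0)
        elif strategy == "majority":     # majority vote of instances
            agg = preds.sum(axis=0) > n / 2
        else:
            raise ValueError(strategy)
        per_attack.append(agg)
    # Multi-attack union: member if any attack's aggregate says so.
    return np.logical_or.reduce(per_attack)

# Two attacks, three instances each, five samples.
a1 = [[1, 1, 0, 0, 0], [1, 1, 0, 0, 0], [1, 0, 0, 0, 1]]
a2 = [[0, 0, 1, 0, 0], [0, 0, 1, 1, 0], [0, 0, 1, 0, 0]]
stab = ensemble([a1, a2], "stability")  # [True, False, True, False, False]
maj = ensemble([a1, a2], "majority")    # [True, True, True, False, False]
```

The stability variant keeps only samples every instance of some attack agrees on, while the majority variant relaxes this to more than half of the instances.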
Under each instance FPR, we compute the ensemble's predictions and derive the corresponding TPR and FPR values for the ensemble. In Figure 15, we observe that the TPR of all three ensembles consistently outperforms the single-instance and multi-instance methods in the TPR-FPR plane. Here, the multi-instance method refers to the ensemble approach without the multi-attack union step, i.e., only using (11), (13), or (15). Interestingly, we find that the multi-instance method alone often outperforms its single-instance counterpart, particularly when using the stability or majority-voting strategies. This further demonstrates that relying on a single MIA instance for evaluation underestimates the true privacy risks, as, in real-world scenarios, multiple MIA instances could be generated by the same or different attackers, and inherent instance-level disparities in membership inference persist. Additionally, we evaluate all possible combinations of the four attacks and compare their ROC curves. As shown in Appendix Figure 24, the full ensembles leveraging all four attacks consistently achieve higher TPR across all FPR values compared to ensembles using fewer or different combinations of attacks. Table 1 further compares our full ensembles and multi-instance-only ensembles against each single MIA instance in terms of AUC, accuracy, and TPR at $0.1\%$ FPR (see Appendix Table 5 for results on Texas100 and Purchase100). We choose $\mathrm{FPR}=0.1\%$ to showcase the ensemble's capabilities under low FPR conditions, aligning with evaluation metrics used in recent works [3, 30]. The results are based on the ResNet-56 architecture, and comparisons for other model architectures can be found in Appendix Table 7. Across all settings, the final three rows in Table 1 show that all three full ensemble strategies consistently outperform individual instances under three traditional performance metrics.
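For score-based attacks, TPR at a fixed low FPR (one of the metrics reported in Table 1) can be estimated by thresholding at a quantile of the non-member scores. A sketch with synthetic Gaussian scores, illustrative rather than the paper's evaluation code:

```python
import numpy as np

def tpr_at_fpr(scores, is_member, target_fpr=0.001):
    """TPR at a fixed FPR: threshold the membership scores at the
    (1 - target_fpr) quantile of the non-member scores, then measure
    the fraction of members above that threshold."""
    scores = np.asarray(scores, dtype=float)
    is_member = np.asarray(is_member, dtype=bool)
    thresh = np.quantile(scores[~is_member], 1 - target_fpr)
    return (scores[is_member] > thresh).mean()

# Synthetic scores: members score higher on average by one sigma.
rng = np.random.default_rng(0)
member_scores = rng.normal(1.0, 1.0, 10_000)
nonmember_scores = rng.normal(0.0, 1.0, 10_000)
scores = np.concatenate([member_scores, nonmember_scores])
labels = np.concatenate([np.ones(10_000), np.zeros(10_000)])
low_fpr_tpr = tpr_at_fpr(scores, labels, target_fpr=0.001)
# With only a one-sigma separation, few members clear the strict
# low-FPR threshold, illustrating why TPR@0.1% FPR is a hard metric.
```

Sweeping `target_fpr` over a logarithmic grid, as the paper does with 100 values from $10^{-6}$ to 1, traces out the ROC curve on which the ensembles are compared.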
Compared to single-instance attacks, the multi-instance-only ensemble (denoted as ‘Multi-inst' in the table) shows improved performance under both the stability and majority-voting strategies. However, it underperforms under the coverage-based ensemble, where only the multi-instance Reference attack shows a slight performance improvement. By comparing the full ensembles with the multi-instance-only ensembles, we observe that the benefit gained from the multi-attack union often exceeds that achieved through multi-instance aggregation alone. Table 1: Performance of ensembles with four attacks vs. single-instance attacks. TPR is measured at $0.1\%$ FPR. While all three full ensembles achieve improved performance, each exhibits unique strengths across different FPR ranges. Figure 16 shows their ROC curves side by side. From the linear-scale ROC in Figure 16b, we can see that the Stability Ensemble outperforms the Coverage Ensemble in the lower FPR region ($\mathrm{FPR} < 0.3$), while the Coverage Ensemble achieves a higher TPR in the higher FPR region ($\mathrm{FPR} > 0.3$). On the log-scale ROC, we can see that Majority Voting performs comparably to the Stability Ensemble at low FPR (also demonstrated in Table 1), and also exceeds the Stability Ensemble's performance in the high FPR region. These trends align with the design of each ensemble method. Coverage tends to cover more potential risks at the cost of increased FPR, stability focuses on consistently identifying vulnerabilities with high precision, while Majority Voting balances their strengths. Figure 16: ROC Curves of Different Ensemble Strategies # 5.3 Ensemble in Practice 5.3.1 Optimization Strategies for Ensemble. The ensemble framework leverages both multi-instance and multi-attack approaches, achieving comprehensive coverage of privacy risks at the expense of increased computational cost. Below, we discuss practical strategies to mitigate this computational overhead. Low-Cost Attack as an Add-on.
Among the four attacks we examined, the Difficulty Calibration Loss Attack takes much less time to prepare than the others, as it needs only a single shadow model. This makes it an ideal add-on attack. Attacks Sharing the Same Process. Many membership inference attacks share similar, if not identical, preparation processes. For example, LiRA and the Reference Attack both rely on the same shadow model training process (as detailed in Appendix Section A.3). In our experiments, LiRA and the Reference Attack utilized the same 20 shadow models, making their ensemble nearly as cost-effective as preparing just one of them. This ensemble identified approximately twice as many members as either individual attack. Similarly, the Difficulty Calibration Loss Attack can serve as a “free” add-on if another attack already involves training a shadow model, as it only requires one shadow model to calibrate the MIA score [50]. 5.3.2 Cost Analysis. We measure the computation cost of ensembles in GPU hours for each MIA instance, considering different numbers of instances per ensemble. When both LiRA and the Reference Attack are included in an ensemble combination, we apply the above optimization strategy, sharing their shadow models and counting the training time only once. The Majority Voting Ensemble is evaluated with odd numbers of instances to avoid ties in voting. Figure 17 presents cost (in GPU time) vs. performance (in TPR@0.1% FPR) for different numbers of instances and different combinations of attacks. A more detailed description and study of the cost is provided in Appendix Section C.1. Overall, we observe a positive correlation between computation cost and performance. Notably, ensembles involving all four attacks consistently achieve the best performance, underscoring the importance of combining multiple attack methods in an ensemble.
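The shadow-model sharing idea can be illustrated with a toy cost model. All GPU-minute figures and the grouping below are hypothetical, chosen only to show how deduplicating a shared preparation process changes the total:

```python
# Hypothetical per-instance GPU-minute costs (illustrative numbers only).
SHADOW_TRAIN = {"lira": 120, "reference": 120, "losstraj": 90, "calibration": 6}
ATTACK_SPECIFIC = {"lira": 5, "reference": 5, "losstraj": 30, "calibration": 2}
# Attacks in the same group share one shadow-model training process.
SHARED_GROUPS = [{"lira", "reference"}]

def ensemble_cost(attacks, n_instances):
    """Total GPU minutes for n_instances of each attack, counting a
    shared shadow-model training process only once per group."""
    total = 0
    charged = set()
    for a in attacks:
        group = next((g for g in SHARED_GROUPS if a in g), {a})
        key = frozenset(group)
        if key not in charged:  # pay shadow training once per group
            total += SHADOW_TRAIN[a] * n_instances
            charged.add(key)
        total += ATTACK_SPECIFIC[a] * n_instances
    return total

naive = sum((SHADOW_TRAIN[a] + ATTACK_SPECIFIC[a]) * 3
            for a in ["lira", "reference"])  # pay shadow training twice
shared = ensemble_cost(["lira", "reference"], 3)
# Sharing the shadow models makes the pair nearly as cheap as one attack.
```

Under these toy numbers, the shared accounting nearly halves the cost of running the pair, which is the qualitative effect exploited in the paper's cost analysis.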
From additional experiments across different datasets, we conclude that this trend holds for the Stability and Majority Voting Ensembles but does not always apply to Coverage Ensembles. Additionally, when comparing configurations with similar performance, we observe that cost-effective options often exist, achieving target TPRs with minimal GPU time (indicated by the leftmost points on a given TPR line). For example, in Figure 17a, with a target TPR of 0.05, an ensemble of four attacks with three instances achieves the same performance as one with six instances, effectively halving the training time from around 3500 minutes to 1700 minutes. This significant reduction in computation cost demonstrates that a careful selection of attack combinations and instance counts can achieve similar levels of effectiveness without incurring unnecessary overhead. Practitioners may thus find an ensemble configuration that achieves robust privacy evaluations within a given resource budget. We leave efficiently identifying optimal configurations for future work. Figure 15: ROC Curve for Ensemble. Dashed lines show single-instance ROC, solid lines show multi-instance ROC, and the black line represents the complete four-attack ensemble. Figure 17: Performance vs. Cost Analysis for CIFAR-10 using different ensembles: (a) Stability Ensemble, (b) Coverage Ensemble, (c) Majority Voting Ensemble, each comparing different attack combinations with 3 to 6 instances per attack. # 6 Discussion The instance-level and method-level disparities among MIAs, along with the performance gains achieved through ensemble strategies, highlight the practical relevance of MIA disparities and the risk of underestimating privacy vulnerabilities in the tasks discussed in Section 4.6. In this section, we discuss actionable directions for addressing these issues in future MIA research. MIA performance evaluation and development. We advocate for incorporating disparity analysis into the development and evaluation of MIAs, using our proposed coverage and stability measures to examine and quantify how an MIA differs from others in member detection. These measures offer a complementary perspective to traditional population-level metrics such as AUC and TPR@low FPR by providing additional insight into the extent of privacy risks an attack can expose (via coverage) and the consistency with which it reveals those risks across different runs (via stability). An MIA that achieves similar or even lower population-level metrics, such as AUC, may still hold significant value if it detects a substantially different subset of members—indicating high disparity—which can be revealed through our disparity analysis based on coverage and stability. This diversity enhances our understanding of privacy vulnerabilities by uncovering risks that other attacks may miss. Our ensemble results further support this insight, demonstrating that combining multiple attacks—including those traditionally considered “weaker”—often leads to improved overall performance.
Therefore, MIAs with high disparity contribute to a more complete and robust assessment of privacy risks, especially when leveraged through our proposed ensemble strategies. Complete and reliable privacy evaluation. As discussed in Section 4.6, the common practice of single-instance-based evaluation is insufficient and may lead to unreliable conclusions. Our ensemble framework addresses this by capturing the full spectrum of privacy risks posed by different MIA methods through multi-instance and multi-attack ensembles based on coverage, stability, and majority voting. Integrating these ensemble strategies into privacy evaluations—for instance, using ensemble attacks against defensive models— ensures a more accurate and robust assessment of privacy defenses. This approach can similarly enhance evaluations of unlearning mechanisms. Given this, it may also be necessary to revisit prior evaluations of MIA defenses and unlearning methods that relied solely on a single random MIA instance. Additional Caveats in Using MIA for Privacy Evaluation. Recently, there has been significant interest in MIAs against large language models (LLMs). However, several works [9, 34, 56] have raised concerns about the construction of evaluation datasets, particularly regarding the distribution shift between member and nonmember samples. In many LLM MIA evaluations, non-member data were collected from web content published after the model’s training cutoff date, such that member samples originate from the training data distribution, while non-member samples come from a different and later distribution (e.g., different time periods). Such distribution shifts can artificially inflate MIA performance, as attacks may exploit these distribution differences rather than truly detecting membership status. Consequently, evaluations based on such datasets may overstate privacy risks. For privacy defense evaluation, Aerni et al. 
[1] argue that prior evaluations using MIAs, which report attack performance averaged across all training samples, can be misleading because they may fail to reflect a defense's effectiveness against the most vulnerable examples. In addition, some evaluations have relied on relatively weak, non-adaptive attacks, potentially overstating the robustness of the proposed defenses. Our study is orthogonal to these concerns, as it addresses a different overlooked issue: the instance-level and method-level disparities among MIAs, which are often neglected in current evaluation practices. Addressing these disparities requires a more holistic evaluation protocol, such as our proposed ensemble framework, to enable more complete and reliable privacy assessments.
Membership inference attacks (MIAs) pose a significant threat to the privacy of machine learning models and are widely used as tools for privacy assessment, auditing, and machine unlearning. While prior MIA research has primarily focused on performance metrics such as AUC, accuracy, and TPR@low FPR - either by developing new methods to enhance these metrics or using them to evaluate privacy solutions - we found that it overlooks the disparities among different attacks. These disparities, both between distinct attack methods and between multiple instantiations of the same method, have crucial implications for the reliability and completeness of MIAs as privacy evaluation tools. In this paper, we systematically investigate these disparities through a novel framework based on coverage and stability analysis. Extensive experiments reveal significant disparities in MIAs, their potential causes, and their broader implications for privacy evaluation. To address these challenges, we propose an ensemble framework with three distinct strategies to harness the strengths of state-of-the-art MIAs while accounting for their disparities. This framework not only enables the construction of more powerful attacks but also provides a more robust and comprehensive methodology for privacy evaluation.
# 1 Introduction Robotic systems are increasingly expected to operate in dynamic, uncertain, and unstructured environments, making self-adaptation a crucial capability. Unlike traditional robots that follow pre-programmed behaviours, self-adaptive robots exploit artificial intelligence (AI) and data-driven techniques to autonomously modify their behaviour in response to environmental changes, operational faults, and evolving objectives. This adaptability is critical in autonomous driving, industrial automation, search and rescue, and assistive robotics. For such a level of autonomy, a fundamental characteristic of these robots is to possess self-management capability, including self-configuration, self-optimisation, self-healing, and self-protection [45]. Ali et al. Self-adaptive robotic systems require sophisticated mechanisms to ensure reliability, safety, and robustness while balancing competing demands such as energy efficiency, computational cost, and ethical considerations. The Monitor-Analyse-Plan-Execute-Knowledge (MAPE-K) loop serves as a fundamental framework for self-adaptation, enabling robots to continuously assess their environment, plan adaptive responses, and execute modifications while maintaining a knowledge base. Recent advances, such as MAPLE-K [48], extend this framework to incorporate the verification of the legitimacy of adaptations, reinforcing trustworthiness in autonomous decisions. Developing and maintaining self-adaptive robotic software presents several challenges from a software engineering perspective.
These include (1) requirements engineering for dynamic and evolving specifications, (2) designing scalable and modular architectures that facilitate real-time adaptation, (3) integrating AI techniques while ensuring explainability and robustness, (4) verifying and validating adaptive behaviours under uncertainty, and (5) exploiting digital twins and co-simulation for runtime monitoring, fault prediction, and real-time adaptation validation. This paper provides a research agenda for the software engineering of self-adaptive robotics, addressing challenges from two key perspectives: (1) Development phase. Covering requirements engineering, software design, co-simulation, and testing methodologies tailored for self-adaptive robotics, discussed in Section 2. (2) Key enabling technologies. Exploring the role of model-driven engineering, digital twins, and AI in supporting autonomous adaptation and decision-making in robotic software systems, discussed in Section 3. By structuring the discussion along these two dimensions, we aim to provide a roadmap for advancing self-adaptive robotic software engineering, ensuring future robotic systems can effectively navigate unpredictable environments while maintaining safety, trust, and efficiency. Some related roadmaps exist [28, 32, 21, 83, 52, 16, 11, 17]. However, we differ in one or more aspects: 1) focusing on self-adaptive robots in general rather than specific application domains, and 2) complementing gaps in these works, such as covering development phases and technologies, and providing a holistic overview, challenges, and a detailed roadmap for future software engineering in self-adaptive robotics. Finally, we present a forward-looking vision for engineering self-adaptive robots alongside a comprehensive overview of the current state of the art and open research challenges.
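As a minimal illustration of the MAPE-K loop referenced above, the following sketch wires the four phases around a shared knowledge base. All states, thresholds, and actions are invented for the example; a real robotic stack would integrate sensing and actuation, e.g., via ROS:

```python
import random

class MapeK:
    """Minimal MAPE-K feedback loop sketch: Monitor, Analyse, Plan,
    and Execute over a shared Knowledge base (illustrative only)."""

    def __init__(self):
        self.knowledge = {"battery": 1.0, "mode": "explore"}

    def monitor(self):
        # Sense the environment; here, a simulated battery drain.
        self.knowledge["battery"] -= random.uniform(0.05, 0.15)

    def analyse(self):
        # Detect a condition that requires adaptation.
        return self.knowledge["battery"] < 0.3

    def plan(self):
        # Choose an adaptation consistent with the knowledge base.
        return "return_to_dock"

    def execute(self, action):
        self.knowledge["mode"] = action

    def step(self):
        self.monitor()
        if self.analyse():
            self.execute(self.plan())
        return self.knowledge["mode"]

loop = MapeK()
modes = [loop.step() for _ in range(20)]
# Once the battery drops below the threshold, the robot adapts its mode.
```

Extensions such as MAPLE-K would insert a legitimacy check between `plan` and `execute` to verify that a proposed adaptation is permissible before acting on it.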
# 2 Development Phase # 2.1 Software Requirements Engineering (SRE) Self-adaptive robotic systems operate in highly dynamic and unpredictable environments, making traditional requirements specification methods insufficient. For robotics, SRE must capture functional requirements, such as perception, navigation, and interaction with the physical world, and non-functional requirements, such as safety, adaptability, and real-time performance. Goal-oriented requirements engineering (GORE), formal methods, and model-driven approaches have been used for robotic SRE. Goal-oriented approaches, such as those based on the KAOS and i* frameworks, enable the capture and refinement of high-level goals into operational requirements [80]. Formal methods, such as Z notation and temporal logic, are gaining traction in robotics for specifying safety-critical requirements and verifying their correctness [50]. Model-driven approaches, such as SysML-based methods, facilitate the integration of requirements with system design and verification [25]. Recent efforts have focused on runtime requirements monitoring for self-adaptive systems, where requirements are treated as dynamic entities that evolve as the robot interacts with its environment. Despite these advances, several significant challenges remain in applying SRE to self-adaptive robotics. (1) Dynamic and Context-Aware Requirements Specification. Self-adaptive robotics requires continuous adaptation to changing environments and tasks, raising the need for context-aware and evolving requirements. Current methods struggle to handle requirements that must be updated at runtime while maintaining system consistency and safety guarantees. How can requirements be dynamically specified, validated, and adapted in real time? (2) Trade-offs between Conflicting Requirements. Robotic systems must balance conflicting requirements, such as energy efficiency, speed, and safety. Resolving these trade-offs at design time is complex.
It becomes even more challenging in adaptive scenarios [73]. Developing automated techniques to manage and prioritise requirements trade-offs at runtime remains an open question. (3) Requirements Traceability and Verification. It is challenging to ensure traceability from high-level requirements to implementation and maintain it across system adaptations. Verification and validation (V&V) approaches must evolve to provide runtime assurances post-adaptation. How can we ensure that traceability and V&V processes are scalable and efficient in self-adaptive robotics? (4) Stakeholder Involvement and Elicitation for Complex Scenarios. The involvement of diverse stakeholders (roboticists, end-users, and domain experts) makes elicitation challenging. The unpredictable nature of robotics applications, especially in search and rescue, healthcare, and autonomous driving, complicates requirements gathering. Collaborative approaches and user-centred design methodologies must be extended to better capture evolving requirements. (5) Ethical and Societal Requirements. Self-adaptive robotics brings ethical concerns, including privacy, fairness, and accountability. Defining and enforcing ethical requirements in robotic systems is an emerging research area that requires multidisciplinary collaboration. Questions about operationalising these requirements and integrating them into existing SRE processes remain open. Addressing these challenges requires new methodologies combining formal, goal-driven, and adaptive approaches, integrating runtime monitoring, feedback loops, and automated reasoning into requirements engineering.

# 2.2 Software Design, Development, and Simulation

Software design and development for self-adaptive robotics is a multidisciplinary endeavour combining principles from software engineering, robotics, AI, and control systems. It aims to create robust, adaptable, and scalable software architectures to support the dynamic nature of robotic systems.
Component-based design and service-oriented architectures are widely used for modular and scalable development. Middleware frameworks such as the Robot Operating System (ROS) and its extensions for multi-robot systems provide essential building blocks for designing robotic software [15]. Model-driven development approaches, such as those using UML and SysML, facilitate high-level abstraction and code generation, enabling a systematic transition from design models to implementation. Aspect-oriented programming is gaining traction for handling crosscutting concerns like fault tolerance and resource management. Self-adaptive software architectures have emerged, employing feedback control loops and runtime models to enable continuous monitoring, adaptation, and optimisation of system behaviour. Patterns such as autonomic computing loops (monitor-analyse-plan-execute) are increasingly incorporated into robotic systems for real-time self-management. AI-based techniques are also integrated into software design to improve decision-making and adaptation strategies. Several co-simulation approaches exist [30, 34] implementing multiple interfaces, such as the Functional Mock-up Interface (FMI [43]), High-Level Architecture (HLA [64]), DEVS [86], and other simulators [47, 35]. When the subsystems implement a common interface, co-simulation is achieved by iteratively stepping each simulator forward in simulated time while exchanging data with the other simulators between steps [47]. The algorithm that manages this process is called the orchestration algorithm. Several orchestration algorithms exist, from simple and general [5] to complex and highly specialized [39], some exhibiting self-adaptive properties themselves [40]. Despite advances in design and development methodologies, self-adaptive robotics software faces numerous challenges and unresolved research questions: (1) Scalable and Modular Architectures for Self-Adaptation.
Designing modular and scalable software architectures that support runtime adaptability while maintaining performance and robustness is a persistent challenge. How can we develop architectures that balance flexibility with efficiency and support continuous adaptation across diverse robotic platforms? (2) Trade-offs between Adaptability, Safety, and Performance. Self-adaptive robotics must ensure that adaptability does not compromise safety or degrade performance. Balancing these often-conflicting requirements during software design, and ensuring the system remains reliable after complex adaptations, is difficult. What design strategies can manage these trade-offs effectively at both design time and run time? (3) Runtime Software Evolution and Reconfiguration. Runtime software evolution (adapting system components, configurations, or behaviours on the fly) raises consistency, fault tolerance, and verification issues. Research is needed on how to safely and efficiently update software during operation without disrupting the system's functioning or violating constraints. (4) Handling Uncertainty in AI-based Adaptation. AI introduces uncertainty into software behaviour: the lack of predictability in AI used for decision-making can result in unexpected behaviours. How can software design practices incorporate mechanisms to mitigate and manage uncertainty in AI-based adaptations? (5) Model-driven Engineering for Self-Adaptive Robotics. Although model-driven approaches offer high abstraction and automation, their full potential in self-adaptive robotics is yet to be realised. Ensuring the consistency of runtime models with design-time models and maintaining accurate runtime representations of the system remains an open question. How can runtime models be effectively integrated into the development lifecycle to support adaptation throughout the lifecycle? (6) Integration of Human-in-the-Loop Adaptation.
In many robotic applications, human intervention is necessary to guide adaptation decisions or validate changes. Designing software that supports human-in-the-loop adaptation while minimising cognitive load and avoiding delays is critical. What frameworks and interfaces can facilitate seamless human-system collaboration for adaptation? (7) Verification and Validation of Self-Adaptive Software. Ensuring self-adaptive software consistently meets its design requirements after adaptation is an ongoing challenge. Traditional V&V methods are often inadequate for highly dynamic systems. What novel V&V techniques can be developed to address this need, particularly for real-time and safety-critical robotic applications? (8) Orchestration challenge in co-simulation. Co-simulation of self-adaptive robotics requires orchestration algorithms that differ from traditional ones to ensure system reconfigurations are accurately captured in the simulation dynamics, such as dynamically changing simulator dependencies [4]. This is an open area of research. (9) Limitations of co-simulation interfaces. Co-simulation interfaces may not be rich enough to enable the orchestrator to capture dynamics correctly. For instance, traditional co-simulation of multi-rate systems is challenging in FMI version 2.0 [8]. Recently, FMI 3.0 has been proposed to mitigate this issue by introducing the notions of clocks and clocked variables [31]. Conversely, error-free simulation of continuous dynamics in discrete event simulation interfaces (such as DEVS and HLA) is impossible to attain due to quantisation [85]. (10) Intellectual property protection in co-simulation. In modern supply chains, there is a need to protect the intellectual property (IP) of subsystems, as external companies often provide these. To avoid expensive contracts, the co-simulation interface must promote IP protection.
The question remains whether traditional IP protection mechanisms in existing co-simulation interfaces are sufficient for simulating self-adaptive systems. Tackling these challenges requires innovative solutions combining software engineering with robotics-specific adaptations, focusing on developing frameworks and methodologies that improve adaptability while ensuring dependability.

# 2.3 Testing Self-Adaptive Robotic Software

Testing ensures self-adaptive robots function as intended. Testing is performed in different setups, e.g., software-in-the-loop (SiL), hardware-in-the-loop (HiL), tests on physical robots, and operational testing (e.g., with digital twins). Many existing works focus on testing various aspects of robots, with model-driven approaches being prevalent, such as those based on RoboStar [18, 37], as well as those combining model-based approaches with AI [75, 76]. Few works test the self-adaptive behaviour of robots, e.g., [63, 55, 56, 41, 42]. Despite many advances in testing robotic software with various approaches, novel testing challenges are emerging due to continuous advancements in the tasks assigned to robots and in new techniques. (1) Simulation-based Testing under a Varied Reality Gap. A key challenge in simulation-based testing (e.g., in a SiL setup) is the reality gap—the discrepancy between simulation and real-world conditions. This makes realistic testing in simulation difficult, as test scenarios may not accurately reflect real-world situations (e.g., weather changes). Highly realistic simulations can be costly, yet full realism may not always be necessary. This raises a key question: What level of realism balances effectiveness and cost? AI foundation models (e.g., vision-language models) and real-time sensor data offer promising ways to enhance simulation realism and improve testing accuracy. (2) Testing AI and non-AI Components.
AI is increasingly integrated into self-adaptive robotic software (including MAPE-K) for tasks like object identification and autonomous decision-making. However, this introduces testing challenges for both AI and non-AI components, independently and due to their interactions, including uncertainty, lack of explainability, and ethical concerns. Effective testing methodologies must address these issues. (3) Testing under Uncertainty. Uncertainty is inherent in self-adaptive software. Thus, testing methods must treat uncertainty as a first-class entity to identify faults effectively. Holistic approaches are needed to quantify uncertainty across different sources, including AI components. Furthermore, continuous assessment and management of uncertainty during real-world operations are crucial, along with strategies to handle cases where uncertainty exceeds acceptable limits, ensuring system reliability and safety. (4) Testing the Self-Adaptive MAPE-K Loop. Self-adaptation, often via the MAPE-K loop, introduces testing challenges. New methods are needed to test each component, such as whether Analyse triggers adaptations, Plan generates appropriate strategies, and adaptations execute correctly. Testing must assess both individual components in MAPE-K and the entire loop to ensure overall correctness. (5) Testing under Continuous Evolution. Self-adaptive robotic software evolves continuously, requiring cost-effective testing of modified parts while ensuring existing functionality remains intact. This demands new regression testing methods for the MAPE-K loop and AI components to support continuous evolution.

# 3 Key Enabling Technologies

# 3.1 Model-Driven Engineering

Model-Driven Engineering (MDE) can be applied to self-adaptive robots. To this end, many domain-specific languages (DSLs) have been explored for robotics subdomains; e.g., Nordmann et al. [66] in 2016 identified 137 distinct robotics DSLs published between 1980 and 2015, and more have been proposed since.
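To make concrete the kind of abstraction such robotics DSLs provide, the following is a minimal internal-DSL sketch for declaring adaptation rules. It is entirely our illustration in Python; real robotics DSLs are typically external languages backed by code generators, and none of these names come from the cited works.

```python
# Minimal internal-DSL sketch for adaptation rules (illustrative only).

class Rule:
    """A named adaptation rule built up through fluent chaining."""
    def __init__(self, name):
        self.name = name
        self.condition = None
        self.action = None

    def when(self, condition):
        self.condition = condition
        return self  # fluent chaining gives the DSL feel

    def then(self, action):
        self.action = action
        return self


def evaluate(rules, state):
    """Return the actions of all rules whose condition holds in `state`."""
    return [r.action for r in rules if r.condition(state)]


# Hypothetical rules for a mobile robot:
rules = [
    Rule("low_battery").when(lambda s: s["battery"] < 0.2).then("return_to_dock"),
    Rule("obstacle").when(lambda s: s["obstacle_distance"] < 0.5).then("replan_path"),
]
```

A code generator in an MDE toolchain would emit such rule tables (or their compiled equivalent) from design-time models, which is one way the missing "native support" for MAPE-K control flow discussed below could be provided.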
UML and its derivatives have been explored for various purposes [14, 71, 20, 29, 70], as have its extensions for robotics [2, 23, 60]. Several general languages support architectural and behavioural modelling, e.g., SysML [67], AADL [24], and Focus [13]. Other formalisms have also been explored, e.g., temporal logic formulae [9], Lie groups [19, 62, 68], the Denavit-Hartenberg (D-H) convention [22, 6, 7] in [44, 77], and property-driven approaches [12]. However, many open challenges still exist regarding the effective use of all these notations for MDE. That requires automation based on sound techniques that can ensure the value of the derived artefacts and compensate for the effort of developing models. Some open problems are as follows: (1) Formalisation of architectures for adaptation. There is no standard or formal understanding of what it means for a self-adaptive system to adopt the MAPE-K architecture. This impacts the traceability between the code and the MAPE-K design of an adaptive system. It also makes it difficult to take advantage of the features of the MAPE-K architecture at the code level. (2) Support for code generation. The lack of MAPE-K formalisation leads to a lack of customised facilities to model and generate code that adopts that architecture. While general modelling languages and MDE tools can be employed, none provides domain-specific support, i.e., no native support to ensure that the communication patterns and control flow of MAPE-K, for example, are enforced at either the model or coding level. (3) Hybrid reasoning. Adaptation is often a response to robotic hardware and environment changes, which are often better described via hybrid models. While MBE is often about the software component, we need to use hybrid models to generate simulations, identify meaningful tests, and reason about adaptation for self-adaptive robotics. There is a rich literature on hybrid model checkers and theorem provers.
However, a practical approach that can handle the scale and complexity of robotic models is still missing. (4) Integration with AI. MBE techniques are often component-based to a certain level, with at least some structure in the models to reflect commonly used architectures or computational resources. Extensions of the notations and techniques to deal with AI components are key to allowing the system-level approach required for adaptation. (5) Human factors. Again, with the observation that adaptation is a system-level concern, we have to consider the impact of human factors. There are two viewpoints: humans as a source of requirements for adaptation, and the impact of adaptation on human stakeholders. Much work on ethical factors is being undertaken, but we require full operationalisation to support the deployment of suitable adaptations.

# 3.2 Digital Twins

Many definitions of Digital Twins (DTs) exist [3]; we use the following from [26]. (1) A Digital Twin (DT) is a digital representation of a real-world entity called the Physical Twin (PT). (2) The DT and PT are connected by a communications infrastructure which allows the DT to maintain a known level of fidelity to the PT it represents. (3) A DT offers its stakeholders a range of services that add value to the PT without unduly compromising the PT's operation. There is growing interest in using DTs in robotics, e.g., to improve design, performance, and maintenance. DTs are applied in various subfields, such as robotics design [46], motion planning and control [84], human-robot interaction [58, 53], autonomous robots [38], smart manufacturing [51], and prognostics and health management (PHM) [33, 78]. Mazumder et al. [61] explored the trends of DT-integrated robotics to identify gaps, trends, potential scopes, challenges, and future perspectives. Yang et al.
[84] proposed a DT-based autonomous navigation and control method for omnidirectional mobile robots (OMRs), achieving a physical-virtual synchronisation tracking error of 0.061 m and handling various tasks effectively. Song et al. [78] introduced a DT-assisted fault diagnosis system for robot joints, using a CycleGAN-based model to map virtual entity data to physical data. Different publications target DTs' most important research challenges [27, 10, 49]. However, from a robotics perspective, we think the following are the most important ones deserving attention from the research community: (1) Data Quality from the PT. Data from a physical robot is mainly derived from sensors. Still, it is delivered with different kinds of uncertainty (noise affecting the values and time delay in delivery). Depending on the purpose of a DT, such uncertainties may make it impossible to provide predictions in a trustworthy manner. (2) Providing Predictions Promptly. When data from a PT is fed into one of the models inside a DT, it is essential that predictions indicating that some action is required can be provided promptly. A human user can then make decisions autonomously if they trust the predictions. The challenge here is ensuring that the underlying computing infrastructure is sufficient for the model to make predictions in time. (3) State Estimation for the PT. When debugging an application on a computer, one has complete insight into what state the system is in at all times. However, when dealing with physical processes, it is much harder to determine the exact moment a state transition is taking place. (4) Composition of DTs. If we have established DTs for different collaborating robots, the composition of the DTs is far from trivial. One root cause is that the IP present inside models with predictive power can be essential for the organisations producing them. Thus, it is paramount to ensure that such IP is protected. (5) Flexibility and Multi-Purpose Demands.
DTs have significant potential to improve lifecycle engineering. For DTs to be effective, their models must adapt across all lifecycle stages with fidelity adjusted to each phase—lower fidelity for design and higher fidelity for control, verification, and operations. However, most current research focuses on specific lifecycle stages, overlooking the broader value chain [87]. Enhancing the adaptability and flexibility of DT models remains a key challenge. (6) Limited Complexity and Extensibility. Current DT research in virtual prototyping (VP) [57] often focuses on small-scale modules or isolated systems. However, a typical robot involves a complex interplay of mechanical, thermal, hydraulic, and control subsystems. The existing literature offers limited insights into developing large-scale, fully integrated DT systems. The key challenge lies in comprehensively integrating these subsystems, including sensors, control mechanisms, and mechanical systems. (7) Dynamic Interaction between DTs and PTs. DTs differ from traditional models through their real-time, dynamic interaction with PTs. Unlike conventional models used in isolated software for tasks like design and control, DTs evolve alongside the systems they represent, integrating live sensor data, simulation updates, and software upgrades throughout the system's life cycle [59]. The complexity of modern robotic systems, with their multidisciplinary dimensions (e.g., mechanical, electrical, software), makes it challenging to model everything in a single platform. As a result, achieving a fully integrated and responsive representation of real-world systems remains a significant challenge for DTs.

# 3.3 Artificial Intelligence

AI has become a cornerstone in the evolution of robotics, enabling robots to perform complex tasks with increasing autonomy and adaptability.
Integrating AI into the software engineering of self-adaptive robots has led to significant advancements in perception [72, 36, 74, 1], decision-making [65, 81, 82, 69, 54], and control [79, 82], making robots more capable of operating in dynamic and unstructured environments. Despite considerable efforts, AI for the software engineering of self-adaptive robots still faces several challenges. To this end, some open research questions are: (1) Inherent challenges related to data. Self-adaptation poses specific challenges in AI-powered robotic systems. The most popular AI systems are trained on vast amounts of data; thus, self-adaptation can usually be viewed as gathering more data that represent the new state of the world and the task to which the system should adapt. This is very expensive or infeasible in many situations. (2) Overcoming catastrophic forgetting in continual learning. Another challenge is retaining previously acquired knowledge when adapting to new data, avoiding catastrophic forgetting and enabling smooth, continual learning and adaptation. Model fine-tuning, transfer learning, knowledge distillation, and RL-based continual training are methods to consider when systems adapt to new data. (3) Overfitting. AI often overfits training data, leading to poor generalisation and causing operational anomalies. These should be detected and analysed to provide adaptation plans. More research is needed on handling AI overfitting when implementing self-adaptive robots. (4) AI-specific challenges. AI in self-adaptive robots presents new challenges, such as hallucinations leading to unsafe decisions and inherent uncertainty requiring quantification. Real-time handling of uncertainty remains underexplored and needs further study. Regardless of the adaptation method, trustworthiness is key. AI-powered robots must adapt safely, ensuring legitimacy through explainability, fairness, transparency, and robustness.
Assuring trustworthiness in self-adaptive robots is understudied and requires novel solutions.
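The uncertainty quantification raised in challenge (4) can be made concrete with a simple ensemble-disagreement scheme: when an ensemble's members disagree beyond a tolerance, the robot falls back to a safe action. This is our own minimal sketch, not a method from the cited works; all names and thresholds are illustrative.

```python
# Sketch: ensemble disagreement as an uncertainty signal with a safe fallback.
# All names and thresholds below are illustrative assumptions.

def ensemble_predict(models, observation):
    """Return the mean prediction and the spread (disagreement) of an ensemble."""
    preds = [m(observation) for m in models]
    mean = sum(preds) / len(preds)
    spread = max(preds) - min(preds)  # crude proxy for predictive uncertainty
    return mean, spread

def decide(models, observation, max_spread=0.2):
    """Act on the ensemble only when its members agree; otherwise play safe."""
    mean, spread = ensemble_predict(models, observation)
    if spread > max_spread:
        return "safe_stop"  # uncertainty exceeds the acceptable limit
    return "go" if mean > 0.5 else "hold"
```

The same pattern generalises to richer uncertainty estimators (e.g., Monte Carlo dropout); the key design point is that the adaptation logic treats "uncertainty too high" as a first-class outcome with a defined safe behaviour.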
Self-adaptive robotic systems are designed to operate autonomously in dynamic and uncertain environments, requiring robust mechanisms to monitor, analyse, and adapt their behaviour in real-time. Unlike traditional robotic software, which follows predefined logic, self-adaptive robots leverage artificial intelligence, machine learning, and model-driven engineering to continuously adjust to changing operational conditions while ensuring reliability, safety, and performance. This paper presents a research agenda for software engineering in self-adaptive robotics, addressing critical challenges across two key dimensions: (1) the development phase, including requirements engineering, software design, co-simulation, and testing methodologies tailored to adaptive robotic systems, and (2) key enabling technologies, such as digital twins, model-driven engineering, and AI-driven adaptation, which facilitate runtime monitoring, fault detection, and automated decision-making. We discuss open research challenges, including verifying adaptive behaviours under uncertainty, balancing trade-offs between adaptability, performance, and safety, and integrating self-adaptation frameworks like MAPE-K. By providing a structured roadmap, this work aims to advance the software engineering foundations for self-adaptive robotic systems, ensuring they remain trustworthy, efficient, and capable of handling real-world complexities.
# 1 INTRODUCTION

Relational operations such as filtering, join, and group-by are the crux of data science tasks such as data analysis [36, 98, 103], data cleaning [19, 23], and feature engineering [25, 31]. They are commonly performed in dataframes—a table-like data structure widely used in data science due to their larger degree of freedom regarding table schemas and data types versus traditional database tables [76, 77, 101]. Dataframe libraries supporting these relational operations are present in many popular programming languages employed in data science, for example Pandas [72] and Modin [78] for Python, Polars [80] for Rust, data frames in R [88], and Spark's dataframe [90] in Scala. These libraries feature distinct pros and cons attributed to their native language: for instance, Pandas and Modin support flexible data types, but can be slow for user-defined functions (UDFs) [34]. Polars supports lazy executions for multi-operation queries [81] but does not support user-defined objects [82]. Dataframes in Mojo: Promising Alternative. Mojo is a recent programming language with flexible, Python-like syntax specifically designed for data science while addressing many of the aforementioned shortcomings. Various capabilities include JIT [37] with MLIR [39] for increased runtime efficiency, native CPU-GPU programming, and optimized tensor operations [50]. Mojo has been benchmarked on data science tasks like tensor and model operations, outperforming both Python [97] and Rust [52]. Yet, performing relational operations in Mojo is currently unexplored due to the lack of a native Mojo-based dataframe [49], which we aim to develop in this paper (Fig 1).

Figure 1: Overview: our Mojo-native dataframe (MojoFrame) supports relational compute operations (filter, join, group-by) in the Mojo programming language, executing on CPU/GPU/TPU via JIT and MLIR.
We hypothesize from existing benchmarking results that such a dataframe library (which we call MojoFrame) would be a promising alternative versus existing libraries, notably Pandas and Polars, achieving higher efficiency (especially on UDFs) versus the former while being easier to program versus the latter. Challenges for MojoFrame. Implementing a Mojo dataframe library that is both expressive and efficient in performing relational operations is challenging. First, Mojo is optimized for performing operations on tensor data. Thus, it utilizes specific optimizations (e.g., SIMD) that are not directly applicable to other, non-numeric types (that don't fit in tensors) such as strings. However, support for efficient operations on these non-numeric types that commonly appear in data science is crucial. Second, Mojo is still relatively new and currently lacks many optimized data structures and features (e.g., handling mutable pointers [53] in dictionaries [54]). Many of these are used by dataframe implementations in other languages (e.g., Python's Pandas) for efficiently performing relational operations following established algorithms (e.g., hash join); hence, we need to design intelligent workarounds. Our approach. We implement MojoFrame by designing a hybrid data structure that utilizes Mojo's native tensors and tensor operations wherever possible—numeric types and mapping operations—for efficiency. Then, we derive alternative approaches that exploit Mojo's characteristics to perform tasks without a native counterpart in tensor operations, such as string operations and joins. First, for the dataframe, we use a tensor to store numeric columns; then, for non-numeric columns, we derive a cardinality-aware approach which decides between integrating them into the tensor via a mapping or transparently offloading them into separate lists for high space and operation efficiency.
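The cardinality-aware decision just described can be sketched as follows. This is a Python stand-in purely for illustration: the threshold, function names, and tuple layout are our assumptions, and MojoFrame itself operates on Mojo tensors rather than Python lists.

```python
# Sketch of cardinality-aware column storage (Python stand-in, not MojoFrame code).

CARDINALITY_RATIO = 0.5  # assumed threshold: encode if distinct/total falls below this

def store_column(values):
    """Decide between dictionary-encoding a string column into integer codes
    (so it can join the numeric tensor) or offloading it as-is to a separate list."""
    distinct = set(values)
    if len(distinct) / len(values) < CARDINALITY_RATIO:
        # Low cardinality: map each distinct string to an integer code.
        codes = {v: i for i, v in enumerate(sorted(distinct))}
        encoded = [codes[v] for v in values]
        return ("encoded", encoded, codes)
    # High cardinality (e.g., free-text comments): keep raw strings separately.
    return ("offloaded", list(values), None)
```

Encoding low-cardinality columns lets equality filters and joins run as integer comparisons inside the tensor, while high-cardinality columns avoid paying for a large dictionary that would rarely be reused.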
Our dataframe enables this offloading in a column and row order-preserving manner, with decoupled physical and logical layouts enabled via indexers. Second, for relational operations, we derive workarounds to circumvent data structures not yet present in Mojo that are required by solutions in existing dataframe implementations. For example, we combine a custom tuple-to-integer hashing function with list indexing to perform multi-column joins and group-by aggregations, avoiding the insertion of mutable data structures into dictionaries, which is notably inefficient [54]. Then, we use vectorization and parallelization when appropriate to maximize hardware potential. Comparison against other methods for GPU-based relational operations. Our implementation of MojoFrame utilizes different techniques compared to existing tools for performing GPU-based relational operations. Compared to the GPU-based dataframes cuDF [96] and cuPy [70], which aim to replicate Pandas and NumPy functions on GPUs for higher computational efficiency, MojoFrame's focus is more on the dataframe interface in Mojo, i.e., how to represent and perform dataframe operations under the significantly different toolset provided by this language, given that these operations will naturally be compatible with GPU programming due to the nature of the Mojo language. BlazingSQL [73] and Crystal [21] orthogonally perform GPU-based relational operations on database tables, a significantly different data structure. Contributions. According to our motivations (§2), we implement MojoFrame to achieve the following: • Universal Representation. We introduce MojoFrame's representation and how it supports the variety of datatypes commonly used in data science (§3). • Relational Operations Support. We describe our implementations to support filtering, group-by aggregation, and joins in MojoFrame (§4). • TPC-H Benchmark.
We show MojoFrame's support for all 22 TPC-H queries, and benchmark its performance versus alternative dataframes (§5).

# 2 BACKGROUND

This section describes the Mojo language (§2.1), existing dataframe implementations in other languages (§2.2), and finally, how a Mojo-based dataframe can benefit existing data science pipelines (§2.3).

# 2.1 What is Mojo?

This section describes Mojo's key characteristics that enable its high performance and adaptability across various data science tasks. Mojo's Just-in-Time (JIT) Compilation. Mojo is a JIT-based compiled language. Like other JIT-based languages such as Java, Mojo's JIT compilation allows generation of optimized machine code specific to the hardware it is running on, achieving better performance on a variety of operations present in data science tasks, such as tensor operations and complex UDFs, versus interpreted languages such as Python and R [97] (Fig 2). While Mojo's JIT compilation potentially incurs latency when running data science pipelines, Mojo code can also be compiled ahead-of-time [55] (e.g., for repeated use during recurring operations [45]); nevertheless, we empirically verify that such (dataset-size agnostic) latency is often negligible compared to the time saved from faster data loading, processing, etc., especially on tasks using larger dataset scales (Fig 11). Multi-level Intermediate Representation (MLIR). Mojo is the first language to be designed specifically for MLIR [39], a new compiler infrastructure designed for optimizing domain-specific workflows. For example, given code for a task such as processing a TPC-H [14] query, Mojo's (JIT) compiler will progressively generate lower-level intermediate representations at runtime, applying domain-specific optimizations (e.g., data reading/tensor computations) where necessary. Versus non-MLIR frameworks (e.g., TensorFlow Graph [93]), where the progressive lowering is performed with multiple domain-specific compilers (e.g., for GPU/TPU), each applying their own optimizations, Mojo's MLIR-based approach is more suited for fast-evolving data science tasks due to requiring maintenance and optimization of only a single overarching compiler [55].

Figure 2: String filtering UDF on the o_comment column in TPC-H Q13 (left). MojoFrame applies this UDF $8.41\times$ faster than Pandas on the 10G scale TPC-H dataset (right).

# 2.2 Existing Dataframes for Data Science

In this section, we overview the large variety of dataframe libraries available in other programming languages, and accordingly hypothesize potential benefits that MojoFrame can bring over them. Dataframes in Python. Python features many popular dataframe libraries such as Pandas [72], Modin [47], and Dask [22]. Pandas is based on the NumPy array, flexibly supporting heterogeneous datatypes within a single column and efficient vectorized column operations. Modin is a drop-in Pandas replacement that parallelizes operations such as transpose and pivot. Dask is a Pandas-based distributed dataframe library for parallel operations on large datasets. As each library optimizes for different cases (e.g., Modin's pivoting and Dask's distributed computing), users may need to perform tedious (and possibly high-overhead) data conversions to maximize the efficiency of data science pipelines [17]. However, as Mojo performs these optimizations under the hood at the language level [50], MojoFrame can be a low-overhead alternative for data science versus using multiple dataframe libraries in Python. Dataframes in Other Compiled Languages. There exist dataframe libraries in other compiled languages such as Rust (Polars [80]) and Julia (JuliaFrame [24]). While these dataframes also natively support other optimizations like parallelism, they have more complex syntax and are hence harder to program with. As Mojo's syntax is largely based on Python's, MojoFrame can bring the performance benefits of these dataframes while still being easy to use. GPU Dataframes.
There also exist specialized dataframes such as cuDF [96] and cuPy [70] designed for performing CPU-GPU computations, on top of which further optimizations such as data placement [105] and JIT compilation [106] have been studied. However, these libraries are specifically designed for CUDA [28], require porting the code when GPUs are not available, and are not applicable when only alternative GPU libraries are available (e.g., MPS [11] or Intel [85]). Mojo, in contrast, is transparently integrated with a large number of GPU types (currently CUDA, MPS, Intel, and AMD [50]); hence MojoFrame is a potentially more generalizable alternative that can be used regardless of which GPU (if any) is available in the computing environment, while requiring no code porting (Fig 3).

```python
# Determine device to use
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    if torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")
# Move model to device
model = SomeModel()
model.to(device)
```
(a) PyTorch [13]

```mojo
# Determine device to use
if has_accelerator():
    # Use any found GPU
    device = accelerator()
else:
    device = cpu()
# Declare new tensor on device
tensor = Tensor[dtype, rank](device)
```
(b) Mojo [50]

# 2.3 Mojo-Native Data Science Pipelines

This section describes the potential benefits that implementing MojoFrame brings to end-to-end data science pipelines.

Python Data Science Pipelines. Data science tasks in Python, such as data cleaning, feature engineering, and visualization, require various libraries.
For example, data scientists may find themselves loading data into a Pandas dataframe [72] for data analysis, converting the dataframe into on-GPU PyTorch [29] or TensorFlow [94] tensors for training models, converting to SciPy [89] to sparsify output tensors for row/column operations, and finally, plotting with Matplotlib [92] or Seaborn [100]. This manner of data pipelining incurs potential inefficiencies from the required data conversions (e.g., from Pandas to PyTorch) and additional management overhead from maintaining compatible library versions.

Mojo Data Science Pipelines. Mojo aims to natively support all parts of the data science pipeline [48], for which it currently includes built-in tools (i.e., similar to Python's standard library [84]) for various general (e.g., model training) and more specialized (e.g., CV, NLP) data science tasks [46]. Despite the current lack of a Mojo-native dataframe, users can already run complete data science pipelines in Mojo by importing Python's Pandas, as Mojo supports Python libraries through an integrated CPython [20] runtime [51]. However, such an approach leads to a data conversion inefficiency similar to that observed in Python data science pipelines, requiring conversions between Mojo-native types (e.g., Int32) used by Mojo libraries and generic Python Objects used by imported Python libraries. Therefore, implementing MojoFrame and completing a Mojo-native data science pipeline is important both for performance and for ease of use: users only need to maintain one unified package with no additional external dependencies.

# 3 MOJOFRAME: DATA REPRESENTATION

This section describes our implementation of MojoFrame.
As Mojo is specifically optimized for tensor operations, naively translating existing dataframe libraries from other programming languages is insufficient: for example, a direct translation of the dynamically-typed Pandas library into Mojo would fail to leverage Mojo's static typing and hardware acceleration for performance. Hence, we take an approach that idiomatically represents heterogeneous columns within Mojo's static type system, which we depict in Fig 4.

Figure 3: GPU programming with Python's PyTorch library vs. Mojo. Mojo features vendor-independent GPU programmability [50], reducing the need for potentially complex and error-prone per-GPU code statements.

Figure 4: MojoFrame data structure. A tensor stores numeric data. Non-numeric columns are either mapped into the tensor or offloaded into lists based on cardinality. Logical and physical layouts are decoupled via row and column indexers.

Data Loading. MojoFrame supports loading dataframes stored in common file formats supported by existing dataframe implementations (CSV, Parquet [1], ORC [6], Arrow [12], etc.). Once loaded, MojoFrame organizes the dataframe columns based on their column types (e.g., numeric vs. non-numeric, high vs. low cardinality) into the respective elements within MojoFrame (described shortly).

Tensor. MojoFrame's tensor stores all numeric columns, for example, the integer column int1 and float column float1 in Fig 4. It also stores the element indexes of low-cardinality non-numeric columns which are mapped into the tensor.

Low-cardinality non-numeric columns. MojoFrame maintains mappings of distinct elements to integer indexes for non-numeric columns with cardinality below a user-defined threshold (i.e., low cardinality), for example, the str1 and str2 columns, each with 2 distinct elements. The indexes are stored as columns in the tensor to facilitate efficient operations (e.g., filtering, join, group-by, §4).

High-cardinality non-numeric columns.
MojoFrame offloads non-numeric columns with cardinality above the user-defined threshold (i.e., high cardinality), for example, the str3 and str4 columns, each with 3 distinct elements, into lists separate from the tensor. This approach notably differs from the multi-array BlockManager approach of Pandas dataframes [72], where multiple arrays store all columns of the same type (e.g., all integer columns in an array[int], all string columns in an array[str]), which is equivalent to offloading all columns into separate arrays; MojoFrame maps low-cardinality columns into its tensor when doing so enables higher operation efficiency via tensor operations (TPC-H Q19, §5.2).

Column names. MojoFrame stores column names like other dataframe implementations such as Pandas and Polars. This allows MojoFrame to support column-based operations which refer to column names, e.g., df['min'].fillna().

Row and column indexers. MojoFrame's row and column indexers control the logical layout of the MojoFrame independently of the physical layout of columns in the tensor and the offloaded high-cardinality non-numeric lists. For example, given the row and column indexers depicted in Fig 4, the ordering of columns in the MojoFrame is str3, str1, int1, float1, str2, and str4. This approach allows MojoFrame to logically interleave numeric and non-numeric columns regardless of their positioning in the tensor/lists, while efficiently supporting relational operations that potentially alter row and/or column orders such as joins and group-bys, as only the indexers need to be updated accordingly while the physical data layout can remain unchanged (§4).

# 4 MOJOFRAME OPERATIONS

This section presents our approach to supporting relational operations on MojoFrame. The Mojo language is significantly different from other existing programming languages that host dataframe libraries; hence, many techniques used in existing dataframe libraries for relational operations are not effectively translatable to Mojo.
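The cardinality-based column layout of §3 can be modeled with a short Python sketch. This is an illustration only: MojoFrame itself is written in Mojo, and the function name and the threshold value here are our own, not MojoFrame's API.

```python
# Illustrative Python model of MojoFrame's column layout (Fig 4).
# CARDINALITY_THRESHOLD and build_frame are hypothetical names; the real
# implementation is written in Mojo, not Python.
CARDINALITY_THRESHOLD = 2  # user-defined cutoff for mapping into the tensor

def build_frame(columns):
    """columns: dict mapping column name -> list of values."""
    tensor = []        # numeric + mapped low-cardinality columns (column-major)
    mappings = {}      # low-cardinality column -> {value: integer code}
    offloaded = {}     # high-cardinality columns kept as plain lists
    col_indexer = []   # logical column order -> physical storage location
    for name, values in columns.items():
        if all(isinstance(v, (int, float)) for v in values):
            # Numeric column: store directly in the tensor.
            tensor.append([float(v) for v in values])
            col_indexer.append(("tensor", len(tensor) - 1))
        elif len(set(values)) <= CARDINALITY_THRESHOLD:
            # Low cardinality: map distinct values to integer codes,
            # and store the codes as a tensor column.
            codes = {v: i for i, v in enumerate(dict.fromkeys(values))}
            mappings[name] = codes
            tensor.append([float(codes[v]) for v in values])
            col_indexer.append(("tensor", len(tensor) - 1))
        else:
            # High cardinality: offload outside the tensor.
            offloaded[name] = list(values)
            col_indexer.append(("list", name))
    return tensor, mappings, offloaded, col_indexer
```

On example columns in the style of Fig 4, a numeric int1 column and a 2-distinct-value str1 column both land in the tensor (str1 as integer codes), while a 3-distinct-value str3 column is offloaded to a list, and the column indexer records the logical-to-physical mapping for both kinds.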
We describe our approaches to filtering in §4.1, group-by aggregation in §4.2, and joins in §4.3.

# 4.1 Filtering

This section describes our approach to supporting filtering in MojoFrame. Simple filters such as equality, greater-than, and less-than can be implemented with boolean indexing in existing dataframe libraries such as Pandas and Polars (e.g., via a mask df['A'] < 5), which is then executed with vectorized instructions; however, dataframe filtering often requires specifying custom logic (e.g., regular expressions) beyond these simple comparisons: user-defined functions (UDFs), which are not vectorized and are instead executed 'row-by-agonizing-row' [30] in existing dataframe libraries [74, 79].

Filtering in Existing Dataframe Libraries. Existing dataframe libraries, notably Pandas [74] and Polars [79], enable filtering with complex UDFs via the df.apply() interface, which allows users to define and pass in boolean-returning lambda functions for filtering (e.g., lambda x: re.search('%special%request%', x) in TPC-H Q13). However, like UDFs in database transactions [26, 108], the lambda functions passed to apply() can be stateful (i.e., the result for one row depending on the application results of prior rows); hence, given an interpreted language like Python that Pandas and Modin are built on, the lambda function (even when not stateful) must be executed row-by-row without parallelization. This can potentially be alleviated with Numba's JIT compilation [74], which unfortunately does not support some commonly-used data science operations (e.g., argsort [71]). Polars, while natively implemented in Rust, still fails to take advantage of Rust's compilation for parallelized UDF execution because it internally offloads lambda function applications to Python for expressiveness [79].

MojoFrame's approach. We aim to support parallelized filtering with stateless lambda functions in MojoFrame.
To accomplish this, we introduce a trait-based filtering mechanism that allows users to define expressive, generic filter conditions which inherit from a set of stateless, extensible, and JIT-optimizable base operations (e.g., equality, greater/less than, assignment, mathematical and string operations, Fig 5).

```python
# Cyclical feature engineering for timestamps
df.apply(lambda x, cmp: math.sin(2 * math.pi * x)
                        > math.cos(2 * math.pi * cmp))
```
(a) Pandas apply()

```mojo
fn evaluate(self, x: SIMD[DType.float64, 1],
            cmp: SIMD[DType.float64, 1]):
    var p = 2 * math.pi * x
    var q = 2 * math.pi * cmp
    return True if math.sin(p) > math.cos(q) else False
```
(b) MojoFrame trait

This allows the Mojo compiler to parallelize our filter operations defined through inherited traits, as they are guaranteed to be stateless lambda functions, and to produce optimization passes more effectively (compared to ad-hoc inline definitions). This approach is more efficient versus handling equivalent UDFs with apply() in existing dataframe implementations (Fig 2).

# 4.2 Group-by aggregation

This section describes our approach to supporting group-by aggregation in MojoFrame. Efficient multi-column group-by aggregation presents algorithmic and system-level challenges: algorithmically, multi-column group-by aggregation requires the creation of composite keys (i.e., a combination of the $k$ keys in each row for a $k$-column group-by) and finding distinct keys. The number of possible composite keys grows exponentially with the number of grouping columns; hence, system-wise, multi-column aggregation demands efficient composite key management and mapping for finding distinct keys.

Group-By Aggregation in Existing Dataframe Libraries.
Existing dataframe libraries such as Pandas employ a sparse-to-dense, incremental composite key creation and mapping strategy (summarized in algorithm 1). As these dataframes store data in column-major storage for efficient memory access patterns [47, 72, 80], each of the $k$ columns involved in the group-by is processed one-by-one for incremental composite key and hash generation (line 6). For a dataframe with $n$ rows, $n$ composite keys (internally stored as lists) and accompanying incremental hashes are maintained; then, for each column, the sparse-to-dense step is first applied to map unique elements to integer identifiers (i.e., like MojoFrame's mapping for low-cardinality non-numeric columns, §3) (line 7); then, these integer identifiers are incrementally collected into the $n$ composite keys (line 9), while the $n$ hashes are incrementally updated via a vectorization-friendly arithmetic combination [75] (line 10). Finally, the $n$ composite keys are inserted into a dictionary with the $n$ hashes to find distinct keys (line 11).

Incremental Hashing in Mojo. An analogous approach to Pandas' incremental hashing for MojoFrame would be to similarly maintain a list of $n$ composite keys (stored as lists) and incremental hashes (for an $n$-row dataframe) and process the group-by columns in per-column order: elements would be collected into the $n$ composite key-lists, while being hashed with a generalizable hash function (due to offloaded non-numeric columns possibly appearing in the group-by, §3) such as xxhash [104] to incrementally update the $n$ hashes. Finally, the composite keys would be inserted along with the hashes into Mojo's dictionary class. Unfortunately, this analogous approach currently does not translate well to Mojo due to Mojo's dictionary not supporting mutable classes (i.e., the composite key-lists) as keys; inserting a mutable class instance into Mojo's dictionary results in it being copied due to a lack of support for mutable references [54], incurring significant time/memory overheads (Pandas-Mojo, Fig 6).

# Algorithm 1: Pandas Column-Order Group-By Aggregation
1 Input: $n$-row dataframe $df$, groupby columns $g_1, \ldots, g_k$
2 Output: $n' \leq n$ unique comp. keys $\{df[i, 1], \ldots, df[i, k]\}$, $1 \leq i \leq n'$
3 Initialize $n$ empty composite keys $C_i = []$, $1 \leq i \leq n$;
4 Initialize $n$ empty hashes $H_i = []$, $1 \leq i \leq n$;
5 Initialize $k$ unique element-to-index mappings $M_{g_i} = \{\}$, $1 \leq i \leq k$;
6 for each column $df[:, g_i]$ do
7 &nbsp;&nbsp; Compute element-to-index mapping: $M_{g_i}: df[:, g_i] \to \mathbb{N}$;
8 &nbsp;&nbsp; for each element $df[j, g_i]$ in column $df[:, g_i]$ do
9 &nbsp;&nbsp;&nbsp;&nbsp; $C_j.append(M_{g_i}(df[j, g_i]))$;
10 &nbsp;&nbsp;&nbsp;&nbsp; $H_j.update(M_{g_i}(df[j, g_i]))$;
11 Insert $(C_i, H_i)$, $1 \leq i \leq n$ into dictionary to find unique keys;
12 Return $\{df[i, 1], \ldots, df[i, k]\}$, $1 \leq i \leq n'$.

# Algorithm 2: Mojo Row-Order Group-By Aggregation
1 Input: $n$-row dataframe $df$, groupby columns $g_1, \ldots, g_k$
2 Output: $n' \leq n$ unique comp. keys $\{df[i, 1], \ldots, df[i, k]\}$, $1 \leq i \leq n'$
3 Initialize composite key array $C_i = []$, $1 \leq i \leq n$;
4 Initialize hash array $H_i = []$, $1 \leq i \leq n$;
5 Transpose $dft = df.T$;
6 for each column $dft[:, i]$ in the transposed dataframe do
7 &nbsp;&nbsp; $C_i \gets tuple(dft[g_1, i], \ldots, dft[g_k, i])$;
8 &nbsp;&nbsp; $H_i \gets hash(dft[g_1, i], \ldots, dft[g_k, i])$;
9 Insert $(C_i, H_i)$, $1 \leq i \leq n$ into dictionary to find unique keys;
10 Return $\{df[i, 1], \ldots, df[i, k]\}$, $1 \leq i \leq n'$.

Figure 6: Three-column group-by in TPC-H Q3 (left). MojoFrame's group-by (algorithm 2) is $3.5\times$ faster than Pandas' group-by (algorithm 1) on the 10G scale dataset; a direct translation of Pandas' approach to Mojo works poorly (right).

MojoFrame's Approach (algorithm 2). Instead of incrementally computing a list of $n$ composite keys while iterating the group-by columns in column order for an optimized data access pattern, MojoFrame first transposes the group-by columns for optimized data access following row-major order (line 5), then (non-incrementally) collects each of the $n$ rows into $n$ (immutable) tuples as composite keys (line 7) and $n$ non-incremental hashes (line 8). Then, we insert these $n$ tuples along with the non-incremental hashes into Mojo's dictionary; the tuples are not duplicated on insertion because they are immutable (line 9). Hence, versus Pandas' column-order incremental approach, the only extra overhead MojoFrame's row-order approach pays is the transposing of the group-by columns, which we empirically verify to be negligible versus the time saved by MojoFrame's faster tuple-based hashing (Fig 6).

Figure 7: Joining on unordered join columns in TPC-H Q3 (left). MojoFrame adopts Pandas' hash join into Mojo for faster ($1.42\times$ on 10G scale), optimized joins. Specialized alternatives such as sort-merge join underperform (right).

# 4.3 Join

This section describes our approach to supporting inner joins in MojoFrame.2 Similar to group-by, joining large dataframes presents the algorithmic challenge of efficiently matching join keys [31].

Adopting Pandas' Join Algorithm to MojoFrame. Existing dataframe libraries like Pandas adopt a hash join derivative for performing inner joins.
Compared to hash joins in traditional DBMSs, where values in the join columns are directly hashed during the build and probe phases, Pandas adds a pre-processing step in which (non-numeric) join columns are first factorized into a shared integer space [7], in a manner similar to MojoFrame's mapping of low-cardinality non-numeric columns to indexes (§3). Then, these indexes are processed following the standard hash join algorithm [8]. The rationale behind this modified algorithm is that performing hash join on the factorized integers (via sequential integer arrays) is more memory-efficient versus direct hash computation and collision detection on non-numeric columns [7, 15, 35]. We find that this approach translates well to the Mojo language and hence is suitable for use with MojoFrame; versus the native implementation in Pandas, MojoFrame's factorization-then-hash-join is parallelized via Mojo's compilation, enabling faster join computations (Fig 7).

Alternative Join Algorithms. We have also explored adopting alternative, more specialized join algorithms used in DBMSs, such as sort-merge join [33], into MojoFrame. However, as seen in Fig 7, naïvely performing sort-merge join in Mojo on unordered join columns incurs heavy performance penalties even with Mojo's vectorized tensor sorting [5]. Hence, we defer incorporating these join algorithms into MojoFrame, and selecting among them based on join column characteristics (e.g., sorted or not), to future work.

# 5 EXPERIMENTS

In this section, we empirically study the effectiveness of MojoFrame. We aim to show and investigate the following:

(1) Analytical query processing time: MojoFrame achieves faster analytical query runtimes for UDF-heavy queries and queries with low-cardinality group-by aggregation versus other dataframe libraries (§5.2).

(2) Scalability of MojoFrame to large datasets: MojoFrame exhibits linear scalability with respect to dataset size, a characteristic typical of parallelized dataframes (§5.3).
(3) Parallelism of MojoFrame: MojoFrame achieves speedup with increasing core count compared to existing single-threaded dataframe implementations (§5.4).

# Deeper Performance Analysis of MojoFrame (Ours)

(1) Microbenchmark on Compilation Time: We study the compilation overhead of MojoFrame incurred by Mojo's JIT compilation, and show that it is both largely agnostic to query complexity and negligible versus query runtime at large dataset scales (§5.5).

(2) Microbenchmark on Data Loading Time: We investigate MojoFrame's data loading times in the Mojo programming language versus the data loading times of alternative dataframe implementations in Python (§5.6).

# 5.1 Experiment Setup

We use the table generator and queries included in the TPC-H [95] decision support benchmark in our experiments. We generate TPC-H datasets at 3 distinct scale factors (1, 3, 10); the scale factor determines the total size in GB of the tables in the generated dataset. All data tables are stored in the CSV format.

Workload. We use all 22 TPC-H queries for our workload. As the queries are written in SQL, we translate the queries into equivalent code in the dataframes' implementation languages (e.g., SQL's GROUP BY into Pandas' agg()) for evaluation.

Methods. We evaluate MojoFrame by comparing it to the following established dataframe libraries commonly used in data science:

(1) Pandas [72]: We perform all operations in Pandas with default function arguments (e.g., no Numba JIT [71]).

(2) Modin [47]: A drop-in Pandas alternative that parallelizes common operations (e.g., group-by, transpose). We similarly use default arguments for all operations.

(3) Polars [80]: A dataframe library natively implemented in Rust. We use this library in Python via its Python bindings.

For MojoFrame, we compile our SQL-to-Mojo translated queries ahead-of-time for execution; however, we also study setups where we perform JIT compilation as part of query execution (§5.5).

Environment.
All experiments are performed on a Standard E16ads v6 Azure machine with 16 vCPUs (AMD EPYC 9004 Genoa) and 128GB RAM. Input data is read from a local SSD disk with 3.05 MB/s read speed.3 We use 8 cores for most of our experiments; however, we also study setups with fewer cores in §5.4.

Implementation. We implement MojoFrame natively in Mojo, following the data structure described in §3 and running relational operations as described in §4. We manually implement some functions required by some TPC-H queries (e.g., substring matching with regexes, TPC-H Q13) that are not directly translatable from Python to Mojo (due to a lack of libraries, e.g., regex [10]). MojoFrame supports all 22 translated TPC-H queries with these additional function implementations.

Time measurement. We pre-load all datasets into memory to mimic interactive data science scenarios. We measure the query execution runtime as the time from invoking the query on the in-memory tables to observing results. Specific to MojoFrame, we also study the compile time as the time incurred by Mojo's JIT compilation in cases where ahead-of-time compilation is not used (§5.5). For data loading, we study the data read time for reading relevant input tables into memory (§5.6). We run each query/operation 5 times and report the average. We clear the page cache between runs.

Reproducibility. Our implementation of MojoFrame and our translated TPC-H queries can be found in our GitHub repository.4

# 5.2 MojoFrame: Fast In-Memory Analytics

This section evaluates MojoFrame's performance on typical relational operations in analytical queries. We measure MojoFrame's query execution times on all 22 TPC-H queries on the 10GB TPC-H dataset versus existing dataframe implementations, with all times normalized w.r.t. Pandas' runtime on the same query. We report results in Fig 8.
MojoFrame exhibits comparable execution speeds to alternative dataframe implementations, achieving faster execution times than Pandas, Modin, and Polars on 16, 18, and 8 of the 22 queries, respectively.

Fast UDF Application. MojoFrame demonstrates a significant advantage on Q13, which contains a complex string filtering UDF (Fig 2). It is $2.96\times$, $3.94\times$, and $11.02\times$ faster than Polars (the next best alternative), Pandas, and Modin, respectively. This is because these baseline dataframe implementations cannot take advantage of the applied UDF being stateless and apply it across rows sequentially. MojoFrame's advantage stems from its ability to compile the UDF logic through our trait system and parallelize its application (§4.1).

Efficient Group-By Aggregation Performance. MojoFrame is $1.11\times$, $7.94\times$, and $1.88\times$ faster than Polars, Pandas, and Modin, respectively, on Q9, which contains a 2-column group-by aggregation applied on a large table (5 joins) with a small number of distinct groups. This performance highlights the high memory locality achieved by MojoFrame in its row-order, transpose-based group-by (§4.2).

Limitation: Unoptimized Dictionary in Mojo. Mojo's current dictionary implementation, which MojoFrame relies on for group-by aggregation (§4.2) and joins (§4.3), is unoptimized for handling large numbers of keys (i.e., distinct elements) due to its use of open addressing with quadratic probing [2]. This results in MojoFrame being slower than alternative dataframe implementations on queries performing group-by aggregation on high-cardinality grouping columns such as Q18 and Q21 ($17.1\times$ and $6.71\times$ slower than Polars, respectively). However, the Mojo community is actively working on improving Mojo's dictionary implementation [58], hence we consider this not to be a limitation inherent to Mojo or MojoFrame.
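The row-order group-by behind the Q9 result (§4.2, algorithm 2) can be modeled in a few lines of Python. This is an illustrative sketch only: the function name and the sum aggregate are ours, and MojoFrame's real implementation uses Mojo tuples and Mojo's dictionary class rather than Python's dict.

```python
def row_order_groupby(df_columns, groupby_cols, agg_col):
    """Model of algorithm 2: gather rows, use immutable tuples as
    composite keys, and hash each key once on dictionary insertion.
    Hypothetical helper, not MojoFrame's API; sums agg_col per group."""
    # Gather the group-by columns in row-major order (algorithm 2, line 5).
    rows = list(zip(*(df_columns[g] for g in groupby_cols)))
    result = {}
    for i, key in enumerate(rows):
        # Tuples are immutable and hashable, so inserting them as
        # dictionary keys needs no defensive copy (unlike mutable lists).
        result[key] = result.get(key, 0) + df_columns[agg_col][i]
    return result

totals = row_order_groupby(
    {"str1": ["a", "a", "b"], "int1": [1, 1, 2], "float1": [10.0, 5.0, 2.5]},
    ["str1", "int1"], "float1")
# totals == {("a", 1): 15.0, ("b", 2): 2.5}
```

The dict stands in for Mojo's dictionary: because the composite keys are immutable tuples, inserting them does not trigger the copying overhead that mutable key-lists incur in Mojo (§4.2).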
# 5.3 MojoFrame Scales Linearly with Data Size

This section evaluates MojoFrame's scalability with varied data sizes. We vary the TPC-H dataset scale from 1GB to 10GB, then measure MojoFrame's query execution time versus dataset scale on select TPC-H queries, comparing against existing dataframes.

Figure 8: MojoFrame's normalized query execution times (w.r.t. Pandas) on the 22 TPC-H queries versus alternative dataframes. MojoFrame is up to $2.96\times$ faster than the next best alternative on UDF-heavy queries (e.g., Q13) and low-cardinality aggregation (e.g., Q9), but falls short on high-cardinality aggregation (e.g., Q18) due to Mojo's native dictionary being unoptimized.

Figure 9: MojoFrame's query processing times versus baseline dataframe implementations on various dataset scales (runtime in seconds versus 1GB/3GB/10GB scale for Pandas, Modin, Polars, and MojoFrame; panels: (a) Q9, (b) Q13, (c) Q19, (d) Q21). MojoFrame exhibits linear scaling versus dataset scale like existing parallelized dataframe implementations (Polars, Modin).

We report results in Fig 9. MojoFrame demonstrates efficient, near-linear scalability versus dataset scale for all core relational operations, like existing parallelized dataframe implementations: for the UDF-heavy Q13 (Fig 2), MojoFrame's runtime increases by $11.3\times$ from 1GB to 10GB, which matches the $9.7\times$ and $11.0\times$ scaling exhibited by Polars and Modin, respectively. For the join- and group-by-aggregation-heavy Q9, MojoFrame similarly exhibits near-linear scaling ($12.5\times$), like Modin ($12.4\times$) and Polars ($14.7\times$). In contrast, Pandas shows degraded, super-linear scaling ($47.7\times$), as it defaults to larger, less compute-efficient datatypes (e.g., INT64 [4]) for factorization on higher-cardinality join columns (§4.3).

# 5.4 MojoFrame Scales to Multiple Cores

This section evaluates MojoFrame's scalability with different numbers of cores used to perform relational operations. We vary the core number from 2 to 8, then measure MojoFrame's query execution time versus core number on select TPC-H queries, comparing against existing dataframe implementations.

We report results in Fig 10. Typical of parallelized dataframe implementations, MojoFrame is capable of leveraging multiple cores to achieve query speedups, achieving $1.17\times$ and $1.34\times$ speedup on Q9 and Q13, respectively, when increasing the core count from 2 to 8. However, this is a less significant speedup versus Polars and Modin, which achieve $2.31\times$ and $2.17\times$ speedup from 2 to 8 cores on Q9, respectively. This is because Mojo's tools for achieving fine-grained control over thread management and task granularity are still under development [55]: MojoFrame currently falls back to manually utilizing parallelize (equivalent to C++'s omp parallel [3]) in relational operations when appropriate, while Modin and Polars have access to mature parallel execution frameworks (Ray [69] and Rayon [9]) in their respective programming languages.

# 5.5 Microbenchmark: MojoFrame Compilation

This section studies the overhead of Mojo's JIT compilation when using MojoFrame. We vary the TPC-H dataset scales, number of cores, and query structure, and perform JIT compilation as part of query execution with MojoFrame instead of using ahead-of-time compiled code. We measure and compare the time taken for compilation and query compute during MojoFrame's end-to-end query execution. We report results in Fig 11.
MojoFrame's JIT compilation time remains largely constant regardless of the query workload, number of cores, and dataset size, averaging 2.3 seconds with only up to $3\%$ variation across runs.5 This factor-agnostic JIT compilation time is notably lightweight versus the query compute times of the TPC-H queries on larger dataset scales, contributing only $10.4\%$ of the end-to-end query execution time (Q21, 10GB).

# 5.6 Microbenchmark: MojoFrame Data Loading

This section studies MojoFrame's data load speed. We measure the time taken to load columns of various TPC-H tables (10GB dataset scale) relevant to select queries from SSD into MojoFrame (i.e., in-memory), versus loading into existing dataframe implementations.

We report results in Fig 12. Mojo and MojoFrame efficiently load the purely numeric columns of the Partsupp table relevant to Q2 thanks to Mojo's optimized tensor operations (§2.1), exhibiting $5.66\times$, $33.3\times$, and $22.0\times$ faster loading times compared to Polars (the next best alternative), Pandas, and Modin, respectively.

Limitation: Lack of Mojo-Native File Parser. MojoFrame's data loading performance is currently limited by the lack of a Mojo-native file parser for reading mixed-datatype tables (e.g., from CSVs) that would not directly fit in tensors; the current, most efficient workaround that we employ is to first use the data loading functionality of an existing dataframe library (e.g., Pandas), then manually convert the loaded (Python) non-numeric columns to corresponding Mojo non-numeric columns before ingesting and processing them with MojoFrame. This conversion step incurs significant overhead, resulting in MojoFrame loading the mixed-datatype tables Lineitem and Orders $83.1\times$ and $132.4\times$ slower than Polars, respectively; hence, additionally developing a high-performance, Mojo-native table parser remains critical future work.

Figure 10: MojoFrame's query processing times versus baseline dataframe implementations on variable numbers of cores (runtime in seconds versus 2/4/8 cores for Pandas, Modin, Polars, and MojoFrame; panels: (a) Q9, (b) Q13, (c) Q19, (d) Q21).

Figure 11: Breakdown of MojoFrame's JIT compilation and query compute times for end-to-end query execution versus query and number of cores (left) and dataset scale (right). Compilation time is factor-agnostic, and negligible versus compute times.

Figure 12: Data loading times for TPC-H tables (10G scale) with MojoFrame versus alternative dataframes. MojoFrame loads numeric data (Partsupp) significantly faster than alternatives, but falls short on mixed-datatype loading (Lineitem, Orders) due to the lack of a Mojo-native CSV reader.

# 6 RELATED WORK

Existing Mojo libraries. There currently exists a large variety of libraries in Mojo [56]: (1) libraries for AI pipelines such as machine learning algorithms [60], StableDiffusion [68], and LLMs [61], (2) domain-specific libraries such as audio processing [65], quantum computing [66], and bioinformatics [57], (3) libraries that extend Mojo with additional data structures such as arrays [64], trees [63], dictionaries [54], and queues [62], and (4) libraries for system programming such as networking [59] and logging [67]. We add MojoFrame, a dataframe library for Mojo on which relational operations can be performed, to the Mojo ecosystem.

GPU-based Analytics. Accelerating analytical tasks such as performing relational operations by using GPU acceleration is a well-studied problem [21, 31, 70, 73, 96, 105–107]. cuDF [96] and cuPy [70] are CPU-GPU dataframe libraries which allow users to specify which of CPU or GPU to use for data placement and/or computations. BlazingSQL [73] and Crystal [21] are GPU databases that support executing SQL queries with GPUs.
Gao et al. propose a method for speeding up joins with multiple GPUs [31]. There are also works aimed at optimizing data placement [105, 107] and performing JIT compilation [106] for GPU computations. We design the data structure of MojoFrame, our Mojo-based dataframe, to be mainly tensor-based to natively support GPU acceleration (§3).

Just-in-time Compilation for Data Science. JIT compilation has been extensively explored for speeding up data science code in interpreted languages such as Python and R [32, 38, 40, 83, 86, 87]. Numba [38] uses the LLVM compiler to optimize NumPy arrays and functions by applying threading and SIMD. PyPy [40] is an alternative Python interpreter featuring a tracing JIT compiler that performs established optimizations such as hot-loop tracing [18]. R contains a native JIT compiler package [86] with adjustable JIT levels controlling which code structures (e.g., closures, control flows) are compiled for different compilation-time/runtime trade-offs, and offers the Torch library [87] for accelerating array operations for machine learning. MojoFrame implements relational operations (e.g., join, §4.3) in ways that take advantage of Mojo’s JIT compilation.

Systems for Speeding up Data Science Coding. A variety of works exist for speeding up the coding process when building data science pipelines [16, 27, 41–44, 91, 99, 102]. Code completion tools recommend next lines of code for the user via either traditional rule-based [41, 42] or LLM-based [27, 91] predictions. Checkpointing tools such as Diff-in-the-loop [99], ElasticNotebook [44], and Kishu [43] can be used to save intermediate states of data science pipelines for returning to later, facilitating more efficient code iteration. Symphony [16] and B2 [102] adopt a non-coding approach and enable point-and-click interactions with ML models and dataframes.
In comparison, MojoFrame enables users to more conveniently write and run Mojo-native data science pipeline code by eliminating the need to import special-purpose libraries (§2.2) or to alter code based on available hardware (§2.3).
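The cardinality-aware integration of non-numeric columns described above (§5.6) boils down to dictionary encoding: each distinct string is mapped to a dense integer code that fits in a numeric tensor, with a small dictionary kept for decoding. A minimal Python sketch of this idea follows; the function and its interface are illustrative, not MojoFrame's actual API:

```python
def dictionary_encode(column):
    """Map each distinct string to a dense integer code.

    Returns (codes, dictionary): codes is a list of ints that can be
    stored in a numeric tensor; dictionary recovers the original values.
    """
    code_of = {}
    codes = []
    for value in column:
        if value not in code_of:
            code_of[value] = len(code_of)
        codes.append(code_of[value])
    # Invert the mapping so codes can be decoded back to strings.
    dictionary = [None] * len(code_of)
    for value, code in code_of.items():
        dictionary[code] = value
    return codes, dictionary

# Low-cardinality column: few distinct values, small dictionary.
codes, dictionary = dictionary_encode(["AIR", "MAIL", "AIR", "SHIP", "MAIL"])
```

Low-cardinality columns such as TPC-H ship modes compress well under this scheme, which is what makes a tensor-backed representation of mixed-datatype tables feasible.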
Mojo is an emerging programming language built on MLIR (Multi-Level Intermediate Representation) and JIT compilation. It enables transparent optimizations with respect to the underlying hardware (e.g., CPUs, GPUs), while allowing users to express their logic in Python-like, user-friendly syntax. Mojo has been shown to offer great performance in tensor operations; however, its performance has not been tested for relational operations (e.g., filtering, join, and group-by), which are common in data science workflows. To date, no dataframe implementation exists in the Mojo ecosystem. In this paper, we introduce the first Mojo-native dataframe library, called MojoFrame, that supports core relational operations and user-defined functions (UDFs). MojoFrame is built on top of Mojo's tensor type to achieve fast operations on numeric columns, while utilizing a cardinality-aware approach to effectively integrate non-numeric columns for flexible data representation. To achieve high efficiency, MojoFrame takes significantly different approaches from those of existing libraries. MojoFrame supports all operations for TPC-H queries, and achieves up to 2.97x speedup versus existing dataframe libraries in other programming languages. Nevertheless, there remain optimization opportunities for MojoFrame (and the Mojo language), particularly in data loading and dictionary operations.
# 1. Introduction Reconstructing high-quality, animatable 3D human avatars from casually captured images is a crucial task in computer graphics, with broad applications like virtual reality and telepresence. A practical solution should support rapid and robust reconstruction from minimal input—ideally using only one or a few casually captured images, without relying on camera parameters, human pose annotations, or controlled capture environments. Such capability is essential for enabling scalable and accessible avatar generation in real-world scenarios. Existing approaches to animatable 3D human reconstruction from monocular or multi-view videos typically rely on optimization-based frameworks that minimize photometric or silhouette reprojection losses [2, 8, 45]. These methods usually require dozens or hundreds of images with accurate human pose estimation as a prerequisite. Moreover, the optimization process is often computationally expensive, taking several minutes or even hours to converge, thus limiting real-time applications. More recently, LHM [31], a feed-forward network for single-image 3D human reconstruction, has shown promising progress toward real-time performance. It employs a transformer-based architecture to fuse geometric point features initialized from the canonical SMPL-X surfaces and image features to directly predict a 3D Gaussian Splatting [17] based avatar from a single image. However, as single-image-based methods are inherently limited by partial observations, they often struggle to reconstruct occluded or unseen regions, leading to oversmoothed surfaces or noticeable artifacts [34, 60]. A straightforward extension of LHM [31] to multi-image settings would involve concatenating image tokens from multiple images and performing attention fusion. 
However, such a naive approach suffers from substantial memory and computational overhead due to the large number of geometric point features and the quadratic complexity of dense self-attention mechanisms. In this work, we propose PF-LHM, a novel feed-forward framework for fast and high-fidelity 3D human reconstruction from one or a few images without requiring camera or human poses. To achieve this, we design an efficient Encoder-Decoder Point-Image Transformer (PIT) framework that hierarchically fuses 3D geometric features with multi-image cues. The framework is built upon Point-Image Transformer blocks (PIT-blocks), which enable interaction between geometric and image tokens via attention fusion while maintaining scalability through spatial hierarchy. We start by representing the SMPL-X anchor points as geometric tokens and extracting image tokens from each input image. The encoder stage comprises several PIT-blocks to progressively downsample the geometric tokens via Grid Pooling [47]. At each layer, the downsampled point tokens interact with image tokens through multimodal attention [6], allowing compact yet expressive geometric representations to be enriched with visual information from multiple images. The decoder stage upsamples the geometric tokens to recover spatial resolution. The resulting 3D geometry tokens are decoded to predict Gaussian splatting parameters, enabling photorealistic rendering and animation. To enhance robustness and generalization, we train our model on large-scale real-world human video datasets covering diverse clothing styles, body shapes, and viewing conditions. In summary, our contributions are: • We introduce PF-LHM, to the best of our knowledge, the first feed-forward model capable of reconstructing high-quality, animatable 3D human avatars in seconds from one or a few casually captured images, without requiring either camera poses or human pose annotations.
• We propose a novel Encoder-Decoder Point-Image Transformer (PIT) architecture that hierarchically fuses 3D geometric point features and 2D image features using multimodal attention, enabling efficient and scalable integration of multi-image cues. • Extensive experiments on both synthetic and real-world data demonstrate that PF-LHM unifies single- and multi-image 3D human reconstruction, with superior generalization and visual quality.

# 2. Related Work

# 2.1. Human Reconstruction from a Single Image

For single-image 3D human reconstruction, many methods adopt implicit neural representations [4, 33, 34, 49, 50, 53, 57, 59] to model complex human geometries. To improve geometric consistency and generalizability, some approaches [1, 3, 5, 16] rely on parametric body models such as SMPL [21, 26] to predict geometric offsets for the reconstruction of clothed humans. However, reconstruction from a single image is an ill-posed problem. Current cascade-type approaches [19, 32, 39, 44, 46] attempt to mitigate this issue by decoupling the process into two stages: multi-view image synthesis using generative models, followed by 3D reconstruction. These methods require view-consistent generation in the first stage, which is often unstable and challenging and ultimately affects the quality of the reconstruction. Inspired by the success of large reconstruction models [10, 37], emerging solutions aim to enable direct generalizable reconstruction through feed-forward networks, which significantly accelerate inference. HumanLRM [46] employs a feed-forward model to decode a triplane NeRF representation, followed by conditional diffusion-based novel-view generation and reconstruction. IDOL [60] introduces a UV-Alignment transformer model to decode Gaussian attribute maps in a structured 2D UV space. LHM [31] leverages a Body-Head multimodal transformer architecture that produces animatable 3D avatars with face identity preservation and fine-detail recovery.
While these single-view methods often face challenges with occlusions and invisible regions, frequently resulting in geometrically implausible results or blurred textures, the proposed PF-LHM leverages a variable number of pose-free images to reconstruct photorealistic and animatable avatars. Concurrent work, GIGA [61], introduces a generalizable human reconstruction model based on UV map representations. However, in contrast to our approach, GIGA requires sparse-view inputs where the same action is captured from multiple viewpoints, along with complex camera setups and motion calibration. These constraints make it challenging to apply in casual scenarios.

# 2.2. Human Reconstruction from Monocular Videos

Video-based techniques further improve reconstruction consistency by using temporal cues. 4D replay methods [25, 45] can reconstruct dynamic humans from monocular or multi-view video sequences; however, they cannot drive the reconstructed humans in novel poses, since they do not build a standalone 3D human model. Therefore, a series of monocular video-based methods [12, 13, 30, 36] build a static 3D human model and can drive the human in novel poses by binding skinning weights. Another series of works [8, 11, 14, 15, 23, 29, 55] take it further by incorporating a 3D parametric human model into the optimization process, and thus can drive the human reconstruction in novel poses without any post-processing. Despite impressive visual fidelity, they often require dozens of minutes and dozens of views for a good optimization, which limits their practical usage in real-world scenarios. Unconstrained image collections are the ideal input for practical applications. However, existing methods [51, 54] share a similar pipeline that uses a view generative model and score distillation sampling [27] for shape optimization. As a result, they are costly for offline training and impractical for online reconstruction.
Paving a new way, PF-LHM infers a human avatar from one, a few, or dozens of views under arbitrary poses in a feed-forward manner, taking only seconds, which makes it highly efficient for online applications. Moreover, PF-LHM greatly outperforms previous state-of-the-art methods on 3D human reconstruction and offers the community a more flexible input format. Table 1. Comparison with state-of-the-art 3D human reconstruction methods. FF stands for Feed-forward, PF for Pose-free, and AM for Animatable.

# 2.3. Feed-Forward Scene Reconstruction

Recent years have witnessed a paradigm shift in geometric 3D vision, driven by the emergence of methods that eliminate traditional dependencies on camera calibration and multi-stage pipelines. At the forefront of this shift lies the DUSt3R [42] framework, which reimagines 3D reconstruction as a direct regression problem from image pairs to 3D pointmaps. By discarding the need for intrinsic camera parameters, extrinsic pose estimation, or even known correspondence relationships, DUSt3R and its successors [22, 38, 40, 52] have democratized 3D vision, enabling rapid reconstruction across diverse scenarios while achieving state-of-the-art performance in depth estimation, relative pose recovery, and scene understanding. However, general feed-forward reconstruction methods assume that images are captured from a static scene [18, 41], while our PF-LHM can accept multiple human images with different camera and human poses as input and produce an animatable 3D avatar.

# 3. Method

# 3.1. Overview

Problem Formulation Given a set of $N \geq 1$ RGB images of a human subject, without known camera parameters or human pose annotations, our goal is to reconstruct a high-fidelity, animatable 3D human avatar in seconds. Figure 2. Overview of the proposed PF-LHM.
In the 2D space, we extract image tokens $\mathbf{T}_{\mathrm{Img}}$ using DINOv2 from the input RGB images without camera parameters or human poses, which are then concatenated with deformation tokens $\mathbf{T}_{\mathrm{Def}}$ to form 2D tokens $\mathbf{T}_{\mathrm{2D}}$. In the 3D space, geometric tokens $\mathbf{T}_{\mathrm{3D}}$ are represented by the MLP output of SMPL-X anchor points. Subsequently, we build our Encoder-Decoder Point-Image Transformer (PIT) to hierarchically fuse 3D tokens with 2D tokens, where the downsampled 3D tokens interact with 2D tokens via multimodal attention in each layer. The finalized 3D tokens are decoded to directly predict 3D Gaussian parameters, enabling animation and photorealistic rendering. We adopt 3D Gaussian Splatting (3DGS) [17] as the representation, which allows for photorealistic, real-time rendering and efficient pose control. Each 3D Gaussian primitive is parameterized by its center location $\mathbf{p} \in \mathbb{R}^{3}$, directional scales $\sigma \in \mathbb{R}^{3}$, and orientation (represented as a quaternion) $\mathbf{r} \in \mathbb{R}^{4}$. In addition, the primitive includes opacity $\rho \in [0, 1]$ and spherical harmonic (SH) coefficients $\mathbf{f}$ to model view-dependent appearance. Inspired by LHM [31], we employ a set of spatial points $P \in \mathbb{R}^{N_{\mathrm{points}} \times 3}$ uniformly sampled from the SMPL-X surface in its canonical pose to serve as anchors. Conditioned on the multi-image inputs, these points are processed and decoded to regress the human 3D Gaussian appearance in canonical space through a feed-forward transformer-based architecture. The pipeline can be formulated as:
$$
\{ \mathbf{p}, \mathbf{r}, \mathbf{f}, \rho, \sigma \} = \mathrm{PF\text{-}LHM}( P \mid I^{1}, \ldots, I^{N} ).
$$ Model Design A straightforward solution to this problem is to extend LHM to support multiple image inputs by directly concatenating all available image tokens and performing attention operations between 3D point tokens and image tokens. However, this naive extension results in significant computational and memory overhead due to the quadratic complexity of self-attention operations with respect to the total number of tokens, i.e., $\mathcal { O } ( ( N _ { \mathrm { p o i n t s } } + N ) ^ { 2 } )$ . To mitigate this issue, we explore strategies to reduce the number of geometric point tokens involved in attention. However, we empirically observe that simply reducing the number of point tokens significantly degrades reconstruction performance. To address this trade-off, we propose an efficient Encoder-Decoder Point-Image Transformer Framework to fuse image features with geometric point features, as illustrated in Fig. 2, which maintains reconstruction quality while reducing the attention footprint. The final geometric point features output from the decoder are utilized to regress 3D Gaussian parameters using lightweight multi-layer perceptron (MLP) heads. To account for non-rigid deformations such as clothing or hair, we introduce learnable deformation-aware tokens. These tokens, together with the geometric features, are used to predict residual offsets in canonical space. Finally, Linear Blend Skinning (LBS) is applied to animate the canonical avatar into the target pose. # 3.2. Encoder-Decoder Point-Image Transformer Framework To efficiently fuse multi-image features with 3D geometric information, we propose an encoder-decoder architecture based on Point-Image Transformer blocks (PIT-blocks). This framework enables hierarchical feature interaction while alleviating the computational and memory burden associated with dense attention. 
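To make the quadratic-cost argument concrete, the following back-of-the-envelope calculation compares attended token pairs for naive joint self-attention over all point and image tokens against attention over grid-pooled point tokens. All token counts and the 16x pooling factor here are illustrative assumptions, not the paper's exact configuration:

```python
def attention_pairs(n_tokens):
    # Dense self-attention touches every ordered token pair.
    return n_tokens * n_tokens

n_points = 80_000        # geometric anchor tokens (illustrative)
n_images = 16            # number of input views
tokens_per_image = 1025  # image tokens plus one deformation token (assumed)

# Naive extension: concatenate everything into one attention sequence.
naive = attention_pairs(n_points + n_images * tokens_per_image)

# With, say, 16x grid-pooling of the point tokens before attention.
pooled = attention_pairs(n_points // 16 + n_images * tokens_per_image)

print(f"naive/pooled pair ratio: {naive / pooled:.1f}x")
```

Even a single level of pooling shrinks the dominant point-token term, which is why the hierarchical encoder-decoder design keeps attention tractable as the number of input views grows.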
We begin by projecting the SMPL-X anchor points in canonical space into a set of geometric tokens and encoding the input images into image tokens, as described in Sec. 3.3. The network consists of $N_{\mathrm{layer}}$ PIT blocks, divided into an encoder stage and a decoder stage. In the first $\lfloor N_{\mathrm{layer}}/2 \rfloor$ encoder blocks, we progressively reduce the spatial resolution of the geometric tokens using Grid Pooling [47]. At each layer, the downsampled point tokens attend to the image tokens, enabling compact geometric representations enriched with multi-image visual cues. In the subsequent $\lceil N_{\mathrm{layer}}/2 \rceil$ decoder blocks, we upsample the geometric tokens to restore their original resolution. At each stage, the upsampled tokens are concatenated with the corresponding high-resolution features from the encoder via skip connections. These fused features are further refined by the PIT blocks to reconstruct detailed geometry and view-dependent appearance.

# 3.3. Geometric Point and Image Tokenization

Geometric Point Tokenization To incorporate human body priors, we initialize a set of 3D query points $\{ \mathbf{x}_{i} \}_{i=1}^{N_{\mathrm{points}}} \subset \mathbb{R}^{3}$ by uniformly sampling from the mesh of a canonical SMPL-X pose. Following the design of Point Transformer v3 (PTv3) [48], we first serialize these points into a structured sequence and then project them into a higher-dimensional feature space using an MLP.
Formally, this process is expressed as: $$ \begin{array} { r } { \begin{array} { r l } & { X = \mathrm { S e r i a l i z a t i o n } ( X ) , } \\ & { \mathbf { T } _ { \mathrm { 3 D } } = \mathbf { M L P } _ { \mathrm { p r o j } } ( X ) \in \mathbb { R } ^ { N _ { \mathrm { p o i n t s } } \times C _ { \mathrm { p o i n t } } } , } \end{array} } \end{array} $$ where $C _ { \mathrm { p o i n t } }$ denotes the dimensionality of the point tokens. Multi-Image Tokenization To obtain rich image features, we adopt DINOv2 [24], a vision transformer pretrained on large-scale in-the-wild datasets, as the image encoder $\mathcal { E } _ { \mathrm { I m g } }$ . Given an input image $I$ , we extract a sequence of image tokens as follows: $$ \mathbf { T } _ { \mathrm { I m g } } = \mathcal { E } _ { \mathrm { I m g } } ( I ) \in \mathbb { R } ^ { N _ { \mathrm { I } } \times C } , $$ where $N _ { \mathrm { I } }$ is the number of image tokens and $C$ is the output feature dimension of the transformer. Deformation-aware Token Injection To account for nonrigid deformations such as clothing and hair present in each image, we introduce a learnable deformation token specific to the observed subject, denoted as $\mathbf { T } _ { \mathrm { D e f } } ~ \in ~ \mathbb { R } ^ { 1 \times C }$ . This token is concatenated with the image token sequence $\mathbf { T } _ { \mathrm { I m g } }$ , forming the multi-image tokens: $$ \mathbf { T } _ { \mathrm { I } } = [ \mathbf { T } _ { \mathrm { I m g } } ; \mathbf { T } _ { \mathrm { D e f } } ] \in \mathbb { R } ^ { ( N _ { \mathrm { I } } + 1 ) \times C } , $$ where $[ \cdot ; \cdot ]$ denotes token-wise concatenation along the sequence dimension. # 3.4. 
Point-Image Transformer Block After obtaining both geometric and image tokens, we design an efficient Point-Image Transformer Block (PIT-block), which comprises three core attention modules to facilitate cross-modal interaction: Point-wise Attention To model self-attention among geometric tokens, we adopt the patch-based point transformer blocks from PTv3 [48]. This design enables cross-patch interactions via randomized shuffling of point orders, as detailed in the Supplementary Materials:
$$
\mathbf{T}_{\mathrm{3D}} = \mathrm{PTv3\text{-}Block}(\mathbf{T}_{\mathrm{3D}}).
$$
Image-wise Attention Given the image token sequence $\mathbf{T}_{\mathrm{2D}} = \{ \mathbf{T}_{\mathrm{I}}^{1}, \dotsc, \mathbf{T}_{\mathrm{I}}^{N} \} \in \mathbb{R}^{N \times (N_{\mathrm{I}}+1) \times C}$, we apply self-attention independently to the tokens of each image. This updates the features within each frame based on its own image tokens:
$$
\mathbf{T}_{\mathrm{2D}} = \mathrm{Self\text{-}Attention}(\mathbf{T}_{\mathrm{2D}}).
$$
Point-Image Attention After obtaining the updated features for both point-wise and frame-wise modalities, we develop a global point-image attention mechanism to fuse the point and multi-image tokens. Our model builds upon the powerful Multimodal-Transformer (MM-Transformer) [6] to efficiently merge features from different modalities. To enhance global context representation in the input images, we utilize the class token $\mathbf{T}_{\mathrm{cls}}$ extracted from the first frame as learnable global context features.
Additionally, to align the dimensions of different modalities, we incorporate projection MLPs into both the input and output layers of the MM-Transformer (MM-T):
$$
\begin{array}{rl}
& \bar{\mathbf{T}}_{\mathrm{2D}} = \mathrm{Flatten}(\mathbf{T}_{\mathrm{2D}}) \in \mathbb{R}^{N(N_{\mathrm{I}}+1) \times C}, \\
& \bar{\mathbf{T}}_{\mathrm{3D}} = \mathbf{MLP}_{\mathrm{proj}}(\mathbf{T}_{\mathrm{3D}}) \in \mathbb{R}^{N_{\mathrm{points}} \times C}, \\
& \bar{\mathbf{T}}_{\mathrm{3D}}, \bar{\mathbf{T}}_{\mathrm{2D}} = \mathrm{MM\text{-}T}(\bar{\mathbf{T}}_{\mathrm{3D}}, \bar{\mathbf{T}}_{\mathrm{2D}} \mid \mathbf{T}_{\mathrm{cls}}), \\
& \mathbf{T}_{\mathrm{3D}} = \mathbf{MLP}_{\mathrm{unproj}}(\bar{\mathbf{T}}_{\mathrm{3D}}) \in \mathbb{R}^{N_{\mathrm{points}} \times C_{\mathrm{point}}}, \\
& \mathbf{T}_{\mathrm{2D}} = \mathrm{UnFlatten}(\bar{\mathbf{T}}_{\mathrm{2D}}, N) \in \mathbb{R}^{N \times (N_{\mathrm{I}}+1) \times C}.
\end{array}
$$
This global point-image attention module enables effective fusion of geometric and visual features by leveraging cross-modal attention. # 3.5.
3D Human Gaussian Parameter Prediction Given the fused point tokens $\mathbf { T } _ { 3 \mathrm { D } }$ obtained from the encoder-decoder transformer framework, we predict the parameters of 3D Gaussians in the canonical human space using a lightweight MLP head: $$ \begin{array} { r } { \{ \Delta \mathbf { p } _ { i } , \mathbf { r } _ { i } , \mathbf { f } _ { i } , \rho _ { i } , \pmb { \sigma } _ { i } \} = \mathbf { M } \mathbf { L } \mathbf { P } _ { \mathrm { r e g r e s s } } ( \mathbf { T } _ { 3 \mathrm { D } } ^ { ( i ) } ) , \quad \quad } \\ { \mathbf { p } _ { i } = \mathbf { x } _ { i } + \Delta \mathbf { p } _ { i } , \quad \forall i \in \{ 1 , \dots , N _ { \mathrm { p o i n t s } } \} , \quad \quad } \end{array} $$ where $\Delta \mathbf { p } _ { i } ~ \in ~ \mathbb { R } ^ { 3 }$ denotes the predicted residual offset from the corresponding canonical SMPL-X vertex $\mathbf { x } _ { i }$ , and $\mathbf { r } _ { i } , \mathbf { f } _ { i } , \rho _ { i } , \pmb { \sigma } _ { i }$ are the Gaussian orientation, feature vector, opacity, and scale, respectively. Pose Conditioned Deformation Although the regressed canonical-space Gaussians can be animated to target poses using Linear Blend Skinning (LBS), modeling clothing deformations presents challenges due to their complex nonrigid motion patterns, which LBS is often unable to capture adequately. To overcome this limitation, we use a lightweight MLP to predict pose-dependent residual deformations. Specifically, we derive a deformation-aware token $\bar { \mathbf { T } } _ { \mathrm { d e f } }$ by averaging the fused deformation-aware tokens $\mathbf { T } _ { \mathrm { d e f } }$ across all frames, and concatenate it with the SMPL parameters to modulate the geometric tokens using Adaptive Layer Normalization. 
These modulated features are then processed through a sequence of MLP layers to generate non-rigid residual deformations: $$ \Delta \mathbf { p } _ { i } ^ { \mathrm { m o t i o n } } = \mathbf { M L P } _ { \mathrm { m o t i o n } } \left( \mathrm { A d a L N } ( \mathbf { T } _ { \mathrm { p o i n t s } } ^ { ( i ) } , [ \overline { { \mathbf { T } } } _ { \mathrm { d e f } } ; \pmb { \theta } ] ) \right) , $$ where $\pmb \theta$ represents the SMPL pose parameters. The final posed positions are then obtained by adding both canonical offsets and motion-specific deformations before LBS is applied. # 3.6. Loss Function Our training strategy integrates photometric supervision from unconstrained video sequences with geometric regularization on Gaussian primitives. This hybrid optimization framework enables the learning of deformable human avatars without the need for explicit 3D ground-truth annotations. To better capture complex clothing deformations, we adopt a diffused voxel skinning approach as proposed in [20, 30]. Given the predicted 3DGS parameters $x =$ $( \mathbf { p } , \mathbf { r } , \mathbf { f } , \rho , \sigma )$ , we transform the canonical avatar into target view space using voxel-based skinning. Photometric Loss We render the animated Gaussian primitives via differentiable splatting to obtain an RGB image $\hat { I }$ and an alpha mask $\hat { M }$ , based on the target camera parameters. 
Supervision is applied through the following photometric loss:
$$
\mathcal{L}_{\mathrm{photometric}} = \lambda_{\mathrm{rgb}} \mathcal{L}_{\mathrm{color}} + \lambda_{\mathrm{mask}} \mathcal{L}_{\mathrm{mask}} + \lambda_{\mathrm{per}} \mathcal{L}_{\mathrm{lpips}},
$$
where $\mathcal{L}_{\mathrm{color}}$ and $\mathcal{L}_{\mathrm{mask}}$ are L1 losses on RGB and alpha values, respectively, and $\mathcal{L}_{\mathrm{lpips}}$ is a perceptual loss measuring high-frequency feature similarity. We set the corresponding weights as $\lambda_{\mathrm{rgb}} = 1.0$, $\lambda_{\mathrm{mask}} = 0.5$, and $\lambda_{\mathrm{per}} = 1.0$. Gaussian Regularization Loss Empirically, we observe that using only mask supervision tends to encourage overly large Gaussian scales, especially near object boundaries, which leads to blurred renderings. To counteract this issue, we propose a Mask Distribution Loss $\mathcal{L}_{\mathrm{dis}}$, which encourages uniform Gaussian distributions within human masks and sharper boundary representation. This is achieved by rendering an auxiliary mask $M_{\mathrm{dis}}$ with fixed Gaussian parameters (opacity $\rho = 0.95$, scale $\sigma = 0.002$), and applying an L1 loss between $M_{\mathrm{dis}}$ and the ground-truth human mask. Furthermore, to reduce ambiguities in canonical space supervision, we adopt two additional geometric regularizers from LHM [31]: (1) the As Spherical As Possible loss $\mathcal{L}_{\mathrm{ASAP}}$, which promotes isotropy in the 3D Gaussians, and (2) the As Close As Possible loss $\mathcal{L}_{\mathrm{ACAP}}$, which preserves spatial coherence among neighboring primitives.
The combined geometric regularization term is defined as:
$$
\mathcal{L}_{\mathrm{reg}} = \lambda_{\mathrm{dis}} \mathcal{L}_{\mathrm{dis}} + \lambda_{\mathrm{ASAP}} \mathcal{L}_{\mathrm{ASAP}} + \lambda_{\mathrm{ACAP}} \mathcal{L}_{\mathrm{ACAP}},
$$
with empirically chosen weights: $\lambda_{\mathrm{dis}} = 0.5$, $\lambda_{\mathrm{ASAP}} = 20$, and $\lambda_{\mathrm{ACAP}} = 5$. Overall Loss The overall training objective combines photometric reconstruction accuracy with geometric regularization:
$$
\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{photometric}} + \mathcal{L}_{\mathrm{reg}}.
$$

# 4. Experiments

Implementation Details We design three variants of our model with $N_{\mathrm{layer}} = 4, 6, 8$ layers of the PIT block, corresponding to PF-LHM-S (small), PF-LHM-M (medium), and PF-LHM-L (large), respectively. The models contain approximately $500~\mathrm{MB}$, $700~\mathrm{MB}$, and $1000~\mathrm{MB}$ of training parameters, respectively. We train the model by minimizing the training loss using the AdamW optimizer for 60,000 iterations. A cosine learning rate scheduler is employed, with a peak learning rate of 0.0001 and a warm-up period of 3,000 iterations. During each batch, we randomly sample a number of frames in the range of [1, 16] from a randomly selected training video. Input images are resized to have a maximum dimension of 1024 pixels. Training is performed on 32 A100 GPUs over five days. To ensure training stability, we apply gradient norm clipping with a threshold of 0.1. Additionally, we utilize bfloat16 precision and gradient checkpointing to enhance GPU memory and computational efficiency.
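The optimizer schedule above (peak learning rate 0.0001, 3,000 warm-up iterations, cosine decay over 60,000 iterations) can be sketched as a small function. The linear warm-up shape and the decay-to-zero floor are our assumptions where the text does not specify them:

```python
import math

PEAK_LR = 1e-4
WARMUP_ITERS = 3_000
TOTAL_ITERS = 60_000

def learning_rate(step):
    """Linear warm-up to PEAK_LR, then cosine decay to zero."""
    if step < WARMUP_ITERS:
        return PEAK_LR * step / WARMUP_ITERS
    progress = (step - WARMUP_ITERS) / (TOTAL_ITERS - WARMUP_ITERS)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The rate peaks exactly at the end of warm-up and falls along a half cosine thereafter, which is the standard shape most deep-learning frameworks implement for "cosine with warm-up" schedules.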
Training Dataset For our model training, we utilize approximately 300,000 in-the-wild video sequences collected from public video repositories, along with over 5,173 public synthetic static 3D human scans sourced from 2K2K [9], Human4DiT [35], and RenderPeople. Specifically, we employ a sampling ratio of 19:1 to draw training batches from the in-the-wild and synthetic datasets to balance generalization and view consistency. To address view bias in the video data, we sample from a diverse range of perspectives as uniformly as possible, guided by the estimated global orientation of SMPL-X. Evaluation Protocol We report PSNR, SSIM [43], and LPIPS [56] to assess rendering quality, and measure efficiency with GPU memory usage and inference time.

Table 2. Comparison experiments with sparse-view input methods on public benchmarks.

Table 3. Comparison experiments with sparse-view input methods on casual videos.

# 4.1. Comparison with Existing Methods

Animatable Human Reconstruction from Sparse Images We conduct a comprehensive evaluation of PF-LHM by comparing it with three baseline methods for generating animatable human avatars from casually captured video sequences. We assess the efficiency and performance of our model using two types of datasets: one is a public benchmark that includes 20 video sequences from NeuMan [15], REC-MV [30], and Vid2Avatar [7], while the other comprises 24 casual video sequences collected with our smartphones. Both Table 2 and Table 3 present quantitative experiments evaluating our model against InstantAvatar [14], GaussianAvatar [11], and ExAvatar [23] on public and casual video sequences. Compared to the state-of-the-art (SOTA) baseline ExAvatar, our approach not only significantly accelerates the inference time but also yields comparable quantitative results.
Specifically, in terms of efficiency, our model creates animatable avatars in seconds, whereas ExAvatar requires approximately 15 minutes to 1.2 hours depending on the number of input images. In terms of performance, SOTA methods typically require dozens of input images to achieve satisfactory metrics; in contrast, our model achieves more accurate results with substantially fewer inputs, and this capability improves as the number of input images increases. As shown in Figure 3, sparse input views lead to noticeable reconstruction artifacts with fitting-based frameworks, including geometric distortions and texture blurring. In contrast, our PF-LHM achieves robust and high-fidelity reconstruction from sparse inputs and outperforms the SOTA baselines.

Animatable Human Reconstruction from a Single Image We evaluate PF-LHM against three baseline approaches for single-view animatable human reconstruction. The first baseline is AniGS [32], which employs a multi-view diffusion model to create canonical human avatars, followed by 4D Gaussian splatting (4DGS) optimization to address inconsistencies across views. The second is IDOL [60], which uses a UV-based transformer model, trained on synthetic human datasets built with a human video diffusion model, to create animatable avatars. The last baseline, LHM [31], introduces novel body-head transformer blocks that directly regress human Gaussian parameters in canonical space. For our evaluation, we employ 400 in-the-wild video sequences featuring individuals of various age groups, including young men and women, older adults, and children. For a fair comparison, we evaluate LHM with the same number of parameters and query points. Figure 7 shows a qualitative comparison with LHM on in-the-wild data. The figure indicates that PF-LHM achieves results comparable to LHM with a single-view image input. Furthermore, as the number of inputs increases, our method generates increasingly realistic and detailed results.
Table 4. Comparison with single-image methods on pose animation using our in-the-wild dataset. \* indicates we use 80,000 query points in LHM for a fair comparison. + denotes the measured inference time of IDOL.

Table 4 presents two key findings regarding single-view human reconstruction. Our unified framework not only achieves competitive quantitative results compared to LHM but also delivers an inference speed four times faster than that of LHM. Furthermore, as the number of input views increases, our method's performance also improves. Specifically, compared to a single-image input, using 16 input views yields improvements of 1.417, 0.09, and 0.01 in PSNR, SSIM, and LPIPS, respectively. Figure 7 shows a qualitative comparison with LHM.

[Figure: qualitative comparisons of InstantAvatar, ExAvatar, Ours, and GT.]

Table 5. Model Efficiency. We evaluate the training and inference efficiency of various backbones. The batch size is set to 1 and the number of input views to 16. ‘# Points’ refers to the number of geometric points, and ‘Time’ denotes the duration of a single iteration during both training and inference.

# 4.2. Qualitative Results

As demonstrated in Fig. 8, we present the animation results of avatars generated from various in-the-wild monocular videos, including NeuMan [15], REC-MV [30], Vid2Avatar [7], MVHumanNet [49], and our casual video dataset. Our PF-LHM generalizes across different identities and garment styles, producing highly realistic renderings for novel human poses and arbitrary viewpoints.

# 4.3. Ablation Study

Model Efficiency Table 5 presents quantitative results that substantiate the efficiency of our model, tested on NVIDIA A100-80G hardware. The table clearly illustrates that our framework significantly outperforms the original LHM architecture, achieving notably reduced training times and lower memory consumption.
Furthermore, during testing, our inference times are about 5 to 10 times faster than those of LHM.

Model Parameter Scalability To verify the scalability of our PF-LHM, we train variant models of increasing size by scaling the number of layers. Table 6 compares performance across model capacities. Our experiments indicate that increasing the number of model parameters correlates with improved performance. Figure 5 presents a comparison among PF-LHM-S, PF-LHM-M, and PF-LHM-L, where the larger model achieves more accurate reconstruction. Also, Fig. 6 depicts the performance of the different models for varying input view counts, clearly indicating that our model is scalable and that performance improves with an increased number of input images.

Table 6. Analysis of model parameters and 3D geometric point numbers.

Number of Query Points Table 6 also shows an ablation study analyzing the effect of varying the number of query points on public video datasets. As the number of query points increases from 40K to 80K, our model demonstrates improvements in PSNR, SSIM, and LPIPS of 0.562, 0.006, and 0.002, respectively. However, when the number of query points is increased further from 80K to 160K, we observe only a slight gain in performance. Therefore, we set the number of query points to 80K to achieve an optimal balance between efficiency and model performance.
Reconstructing an animatable 3D human from casually captured images of an articulated subject without camera or human pose information is a practical yet challenging task due to view misalignment, occlusions, and the absence of structural priors. While optimization-based methods can produce high-fidelity results from monocular or multi-view videos, they require accurate pose estimation and slow iterative optimization, limiting scalability in unconstrained scenarios. Recent feed-forward approaches enable efficient single-image reconstruction but struggle to effectively leverage multiple input images to reduce ambiguity and improve reconstruction accuracy. To address these challenges, we propose PF-LHM, a large human reconstruction model that generates high-quality 3D avatars in seconds from one or multiple casually captured pose-free images. Our approach introduces an efficient Encoder-Decoder Point-Image Transformer architecture, which fuses hierarchical geometric point features and multi-view image features through multimodal attention. The fused features are decoded to recover detailed geometry and appearance, represented using 3D Gaussian splats. Extensive experiments on both real and synthetic datasets demonstrate that our method unifies single- and multi-image 3D human reconstruction, achieving high-fidelity and animatable 3D human avatars without requiring camera and human pose annotations. Code and models will be released to the public.
[ "cs.CV" ]
# I. Introduction

Temporarily static object detection has many applications. Depending on the application, temporarily static objects can be abandoned items such as luggage, illegally parked vehicles, objects removed from the scene, etc. A significant amount of research has been done on the detection of abandoned items [1, 2, 3] in video surveillance, the detection of illegally parked vehicles [4, 5, 6, 7, 8, 9] in restricted regions, and the detection of objects removed from the scene [10]. In [1], the authors first used a trans-dimensional Markov Chain Monte Carlo tracking model to track objects in a scene; the output of the tracking system is then analyzed to detect left luggage. Background subtraction and blob tracking are used in [2] to detect abandoned items, with short-term logic applied to classify the detected blobs into four types: unknown object, abandoned object, person, and still person. Similarly, in [3], background subtraction and tracking are used to detect left luggage using multiple cameras. Most of these methods assume the scene is not crowded, with no occlusion or illumination changes. Bevilacqua and Vaccari [4] proposed a method to detect stopped vehicles based on the centroid position of the tracked vehicle, using background subtraction and optical flow for detection and tracking. If the object's center position remains within a small area for a certain duration, the object is considered static. A temporarily static object detection method based on two backgrounds, one short-term and another long-term, is presented in [5, 6]. Object tracking-based methods are also used for detecting static objects in a scene. In [7], J. T. Lee et al. presented a 1-D transformation-based real-time illegal parking detection method. They first apply a 1-D transformation to the source video data. Next, foreground blobs representing vehicles are segmented and tracked frame by frame.
Parked vehicles are detected according to the trajectory of the tracking result. B. Mitra et al. [8] presented an illegally parked vehicle tracking method using correlation of multi-scale difference-of-Gaussian filtered patches. Similarly, in [9], a corner feature-based parked vehicle detection method is presented. Corners are classified into two categories: static and dynamic. Dynamic corners correspond to moving objects, and static corners correspond to the background and stopped objects. A disadvantage of this method is that static corners corresponding to the background can be mistakenly detected as corners corresponding to stopped vehicles, and vice versa. Finally, in [10], abandoned and removed object detection based on background subtraction and foreground analysis, complemented by tracking, is presented. Most of the methods above work well if objects are static for a short duration. However, if temporarily static objects remain in the scene for a long time, they may be absorbed into the background. Therefore, an additional process is needed to monitor static objects as soon as they are detected as abandoned or stopped. In this paper, we present and compare two methods for detecting temporarily static objects. We particularly focus on illegally parked vehicle detection, but the same methods can be applied to detect any kind of temporarily static object in a scene. We use the Gaussian Mixture Model (GMM) method to separate foreground objects from the background image. In the first method, we use background subtraction and a simple blob tracking method to classify objects as static or moving. As soon as an object is detected as static, we use normalized cross-correlation (NCC)-based image comparison to monitor the temporarily stopped object. In the second method, we use two background images generated using different learning rates or frame rates. We subtract the two background images to detect static objects.
Again, NCC-based image comparison is used to monitor the detected static objects in the scene. The main advantage of the proposed methods is that they can detect static objects in a scene even if they remain there for a long duration.

# II. Stationary Object Detection

In the first stage, we detect stationary objects in a scene. Once an object is identified as stationary, we proceed to the second stage for further processing. Here, we focus on describing the first stage. We propose two approaches for this stage: using a single background image and using dual background images. Each approach is described in the following subsections.

# 1. Single Background Based Stationary Object Detection

In general, background subtraction is used to detect and track moving objects in a scene. If an object appears in the scene and becomes stationary, it will be detected as a foreground object for a while; over time, it will be incorporated into the background image and then classified as a background object. Therefore, there is a time interval during which we can decide whether an object detected by background subtraction is stationary or moving. Here, we use background subtraction followed by simple rectangle-overlap based binary blob tracking to determine whether an object is stationary or moving. Fig. 1 shows the overall block diagram of stationary object detection. Fig. 2 shows an input ROI image of a video frame, the corresponding background image, and the resulting foreground image obtained using background subtraction and thresholding. In Fig. 2(a), the upper vehicle is moving while the lower vehicle is stationary. Before starting the tracking, we apply frame differencing to reduce noise generated by moving objects. Fig. 3(a) shows the result of the frame difference, and Fig. 3(b) displays two types of pixels corresponding to foreground objects. Pixels labeled in gray correspond to moving object pixels (white pixels in Fig.
3(a)), whereas pixels labeled in white correspond to stationary foreground object pixels. Since we are interested in finding stationary objects, we remove the pixels corresponding to moving objects. The final result is shown in Fig. 3(d).

Fig. 1. Proposed system of stationary object detection using a single background image.

Vehicle tracking is performed using a simple rectangle-overlap approach. For every frame, we label the binary blobs resulting from background subtraction and moving-object pixel removal. If the bounding rectangles of a binary blob in the previous frame and the current frame overlap by more than $80\%$, we consider them to correspond to the same object in both frames. If an object is detected as the same across several consecutive frames (more than a predefined monitoring threshold), it is classified as a stationary object and proceeds to the second stage: stationary object monitoring.

Fig. 2. (a) Current input image, (b) background image, (c) result of background subtraction and thresholding.

Fig. 3. (a) Frame difference image $(\mathrm{i}_{\mathrm{t}} - \mathrm{i}_{\mathrm{t}-1})$, (b) foreground image with gray pixels corresponding to moving objects (white pixels from (a)) and white pixels corresponding to stopped objects, (c) result after removing moving object pixels from (a), and (d) result after morphological erosion and dilation.

# 2. Dual Background Based Stationary Object Detection

Two background models, generated at different frame rates or learning rates, can be used to detect stationary objects. If an object remains stationary within the region of interest, it will first appear in the background image with a fast update rate, and after some time, it will also appear in the background image with a slow update rate. Thus, there is a time interval during which the stopped vehicle is visible in one background but not in the other.
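As a toy illustration of this time-window effect, the two backgrounds can be modeled as running averages with different learning rates; the rates, threshold, and image sizes below are illustrative, not the values used in the paper:

```python
import numpy as np

def update_background(bg, frame, alpha):
    """Running-average background model; a larger alpha adapts faster."""
    return (1.0 - alpha) * bg + alpha * frame

def stationary_mask(bg_fast, bg_slow, thresh=30.0):
    """Pixels already absorbed by the fast background but not yet by the slow one."""
    return np.abs(bg_fast - bg_slow) > thresh

# A stopped object (intensity 200) appears in a previously empty scene.
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 200.0
bg_fast = np.zeros((8, 8))
bg_slow = np.zeros((8, 8))
for _ in range(50):                 # 50 frames with the object present
    bg_fast = update_background(bg_fast, frame, alpha=0.1)
    bg_slow = update_background(bg_slow, frame, alpha=0.001)
mask = stationary_mask(bg_fast, bg_slow)   # True exactly at the stopped object
```

After 50 frames the object is almost fully present in `bg_fast` but barely visible in `bg_slow`, so their difference isolates it; in the real system this difference image is additionally thresholded and cleaned with morphological operations.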
There are two approaches to generating fast- and slow-updating background images: processing at different frame rates, or processing with different learning rates. Fig. 4 shows the block diagram of the proposed stationary object detection method using dual background images. Fig. 5 displays the current input image (I), the fast-updating background (BGF), the slow-updating background (BGS), and the binary difference image between the two backgrounds (BGDIFF), which is the result after applying morphological erosion and dilation operations. As shown, there are several vehicles in the current image. The vehicles that appear in BGF are currently stopped vehicles, while the others are moving vehicles. However, the stopped vehicles have not yet appeared in the BGS image. Therefore, using the BGF and BGS images, we can compute the BGDIFF image, in which the binary blobs indicate the positions of stopped vehicles. Once a vehicle is detected as stopped across several frames, we move to the next stage, where the stopped object is monitored using the NCC method.

Fig. 4. Proposed system of stationary object detection using dual backgrounds.

Fig. 5. Current image I (top left), background image with fast learning rate (BGF) (top right), background image with slow learning rate (BGS) (bottom left), and difference between BGF and BGS after thresholding and morphological operations (bottom right).

# III. Normalized Cross-Correlation Based Stationary Object Monitoring

Normalized cross-correlation (NCC) can be used to compare two signals and evaluate their similarity: the closer the NCC value is to 1, the more similar the two signals are. Therefore, NCC can be used to compare two images to determine how similar they are. In this work, we use the NCC value between two images to determine whether a temporarily stationary object is still at its detected position or has been removed or moved.
As soon as an object is detected as stationary, we register the image patch within the bounding rectangle of the corresponding stationary object as a reference image. For subsequent frames, the image patch from the same position in the current image is compared with the previously stored reference image patch. As long as the object remains at the same position, the NCC value remains close to 1. If the object is removed or moved, the NCC value drops significantly. When there are multiple stationary objects, calculating NCC for each object in every frame becomes computationally expensive. To reduce computational complexity, NCC comparisons are performed approximately twice per second. In another scenario, if there is occlusion (partial or full) caused by moving objects near the stationary object, the NCC value between the current image patch and the reference image patch may be low even if the stationary object is still present. To handle this issue, we first count the number of moving object pixels around the stationary object using a frame difference image. If the count exceeds a predefined threshold, the NCC comparison is postponed to the next frame. Additionally, gradual changes in illumination over time can cause the calculated NCC value to decrease, even if the object has not moved or been removed. To address this, for objects that remain in place for a long duration, we periodically update the reference image patch after a certain interval. The NCC value between the reference image patch and the current image patch is computed using Equation (1). 
$$ \gamma = \frac{\sum_{x,y} \left( f_r(x,y) - \bar{f}_r \right) \left( f_c(x,y) - \bar{f}_c \right)}{\sqrt{\sum_{x,y} \left( f_r(x,y) - \bar{f}_r \right)^2} \sqrt{\sum_{x,y} \left( f_c(x,y) - \bar{f}_c \right)^2}} \qquad (1) $$

In (1), $f_r$ denotes the reference patch image, $\bar{f}_r$ the mean value of the reference patch image, $f_c$ the current patch image, $\bar{f}_c$ the mean value of the current patch image, and $\gamma$ the resulting NCC value.

# IV. Experimental Results and Discussion

The performance of the proposed method is evaluated on a private dataset of surveillance video. The experiment focuses on detecting illegally parked vehicles. As soon as a vehicle enters the region under analysis and stops there, it is detected. The first-stage tracking and stopped-vehicle detection gives the time duration, in number of frames, for which a particular vehicle remains stopped in the region under analysis. If it stops for more than 50 image frames, it is defined as a stopped vehicle; it then remains in the first stage for 50 to 150 frames. If the vehicle moves out before 150 frames, it is classified as a stopped vehicle but not a parked vehicle. However, if the vehicle remains in the same position for more than 150 frames, it is defined as a parked vehicle, i.e., it enters the second stage. When a stopped vehicle enters the second stage (parked stage), we use NCC to further verify that the vehicle is still parked, as long as it remains in place. In our case, if the NCC value between the reference image patch of the stopped vehicle's position and the current image patch at the same position is greater than or equal to 0.90, it indicates that the vehicle is still parked at the same position.
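Equation (1) translates directly into a few lines of NumPy; the sketch below (function names are illustrative) also applies the 0.90 decision rule described above:

```python
import numpy as np

def ncc(f_r, f_c):
    """Normalized cross-correlation of two equally sized patches, as in Eq. (1)."""
    fr = f_r - f_r.mean()           # zero-mean reference patch
    fc = f_c - f_c.mean()           # zero-mean current patch
    denom = np.sqrt((fr ** 2).sum()) * np.sqrt((fc ** 2).sum())
    return float((fr * fc).sum() / denom)

def still_parked(reference_patch, current_patch, thresh=0.90):
    """Decision rule: the vehicle is considered still parked if NCC >= thresh."""
    return ncc(reference_patch, current_patch) >= thresh
```

Because both patches are mean-subtracted, the score is insensitive to a uniform brightness offset; non-uniform illumination changes can still lower it, which is what motivates the periodic reference-patch update described above.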
This threshold value was determined experimentally. If the NCC value is less than 0.90, it indicates that the vehicle has moved, i.e., it enters the third stage (moved stage).

Fig. 6. Illegal parking vehicle detection results for a video sequence using the dual background modeling scheme.

Figure 6 shows the result of illegal parking detection, with the time duration and stage given for each stopped or parked vehicle. The given time duration represents the total time starting from when the vehicle stopped in the region under analysis. From Figure 6, we can see that the parking time of each vehicle has also been recorded. In this particular video, the maximum parking time was found to be 4.13 minutes. Additionally, there were no issues with partial occlusion or short-term full occlusion. On our video dataset, the detection performance of the dual background-based stationary object detection is better than that of the single background-based method. In terms of computational complexity, the single background-based scheme is better and works well if there are no illumination changes, the surveillance scene is not crowded, and the stationary object is not frequently occluded by moving objects. The dual background-based scheme has higher computational complexity than the single background-based method, but it can still operate in real time. Moreover, the dual background-based scheme is robust to partial occlusion and short-term full occlusion: since we subtract two backgrounds to detect stationary objects, temporary occlusion does not significantly affect the background model. However, problems may arise if an object is occluded for a long duration. The dual background-based scheme is also robust to illumination changes. When subtracting a background from the current image, noise may occur due to moving objects, sudden illumination changes, or object shadows.
However, subtracting two backgrounds generated with different learning rates captures only stationary objects, making the result more stable. Table 1 shows the operating speed of the two proposed schemes for temporarily static object detection. The original video frame size is $720 \times 480$ pixels. We extracted the regions of interest (ROI) from both sides of the street and concatenated them, as shown in the figures in Section II. According to the results in Table 1, for the single-background case with an image size of $329 \times 164$ pixels, we achieved an operating speed of up to 33.26 frames per second. In contrast, for the dual-background case, we achieved a speed of up to 20.13 frames per second. Although the dual-background method has a lower operating speed, it is more stable than the single-background scheme.

Table 1. Comparison of the two proposed stationary object detection schemes in terms of computational complexity.
In general, background subtraction-based methods are used to detect moving objects in visual tracking applications. In this paper, we employed background subtraction-based schemes to detect temporarily stationary objects. We proposed two schemes for stationary object detection and compared them in terms of detection performance and computational complexity. The first approach uses a single background, while the second uses dual backgrounds, generated with different learning rates, to detect temporarily stopped objects. Finally, we used normalized cross-correlation (NCC) based image comparison to monitor and track the detected stationary objects in a video scene. The proposed method is robust to partial occlusion, short-term full occlusion, and illumination changes, and it can operate in real time.
[ "cs.CV" ]
# 1 Introduction

X-ray computed tomography (XCT) is a critical technique with many applications in medical and industrial imaging. In industrial XCT, reconstructing a 3D object requires solving a large inverse problem using many projections from different angles. The resulting reconstruction can be used for internal inspection and anomaly detection, with applications in additive manufacturing (e.g. nondestructive evaluation) and medical imaging (e.g. tumor detection) [1–6]. However, attaining high-quality reconstructions can be difficult; the quality of the reconstruction depends on factors such as the desired reconstruction resolution, total integration time, total number of views, and X-ray scan settings (e.g. voltage, current, and physical filters). Higher quality generally requires more time and higher radiation doses. Traditional methods such as Feldkamp, Davis and Kress (FDK) [7] can quickly produce reconstructions, but require a large number of projections at sufficiently high signal-to-noise ratio to achieve high quality, resulting in longer scans with more exposure. More advanced algorithms, such as model-based iterative reconstruction (MBIR) methods [8–12] and plug-and-play (PnP) methods [13–17], can produce high-quality reconstructions even with sparse measurements, but they are slow and computationally expensive. Deep learning has been proposed as a fast alternative that directly maps low-quality reconstructions to high-quality reconstructions [18–22]. However, these models rely on large amounts of representative training data and often struggle to generalize to new XCT scans that differ in part geometry, material composition, print parameters, or scan settings. As a result, their performance can degrade significantly when applied to data that fall outside the distribution of the training set. To address these challenges, a practical PnP algorithm that is both flexible and efficient was proposed in [23].
This method uses an artifact reduction CNN prior instead of a CNN-based Gaussian denoiser, along with an adaptive regularization parameter selection strategy, to perform reconstruction on large-scale 3D imaging data. However, the artifact reduction CNN prior only exploits single-slice information from the 3D reconstruction, which may limit its performance. 2.5D deep learning architectures leverage multi-slice information by aggregating neighboring slices along the channel dimension to estimate the center slice. The ability of 2.5D architectures to produce better image quality than 2D architectures has been established in a variety of applications, such as XCT super-resolution [24], XCT image denoising [25], volumetric image segmentation [26], and XCT image reconstruction [22, 27–31]. 2.5D architectures have the added benefit of maintaining low computational complexity, in contrast to 3D architectures. Prior work in volumetric imaging [15, 29, 32, 33] similarly supports the use of 2.5D models as efficient alternatives to 3D networks, especially in high-resolution domains like XCT. In this paper, we propose an improved practical PnP method that uses a 2.5D artifact reduction prior, incorporating multi-slice information from the 3D reconstruction while preserving low computational complexity. This simple modification improves reconstruction quality without the heavy cost of 3D models. We present results on experimental and synthetic cone-beam XCT datasets that demonstrate improved performance on both in-distribution (InD) and out-of-distribution (OOD) data when using a 2.5D artifact reduction prior instead of a 2D prior. In particular, we demonstrate strong performance on experimental XCT data using a model trained entirely on synthetic scans, highlighting the method's ability to generalize across domains.

Fig. 1 Pipeline of our proposed 2.5D PnP method.
The 2.5D artifact reduction prior takes 5 neighboring slices from a sparse-view FDK reconstruction containing beam hardening artifacts as input and reduces both noise and artifacts from the center slice in order to match the dense-view FDK reconstruction that does not contain beam hardening artifacts.

# 2 Plug-and-Play with 2.5D Artifact Reduction Prior

The forward model for a cone-beam XCT system is given by $y = Ax$, where $y \in \mathbb{R}^M$ contains the projection measurements, $A \in \mathbb{R}^{M \times N}$ is a linear operator encoding the cone-beam projection, and $x \in \mathbb{R}^N$ is the 3D volume of linear attenuation coefficients that we would like to reconstruct. A common approach for estimating $x$ is to use a regularized weighted least-squares formulation, i.e.,

$$ \hat{x} = \arg \min_{x} \left\{ \frac{1}{2} \| A x - y \|_2^2 + \lambda R(x) \right\}, $$

where $R(x)$ is a regularizer that encourages certain "desirable" properties in the reconstruction and $\lambda$ is a parameter weighting the impact of the regularizer.

Table 1 Synthetic cone-beam XCT aluminum datasets used to evaluate 2D and 2.5D PnP. The in-distribution (InD) test set contains scans which match the training data in the material scanned, number of views, presence of beam hardening, and noise levels. The out-of-distribution (OOD) test set contains scans where the noise level or number of views differs from those used to train the CNNs.

We solve this minimization problem using a quadratic penalty method with alternating minimization, as is done in [23]. Namely, we introduce an auxiliary variable $z$ to decouple the regularizer:

$$ \hat{x}, \hat{z} = \arg \min_{x,z} \left\{ \frac{1}{2} \| A x - y \|_2^2 + \lambda R(z) \right\} \ \mathrm{such~that}\ x = z.
$$

Instead of directly enforcing this constraint, we relax it using a quadratic penalty term with a tunable parameter $\beta > 0$:

$$ \hat{x}, \hat{z} = \arg \min_{x,z} \left\{ \frac{1}{2} \| A x - y \|_2^2 + \lambda R(z) + \frac{\beta}{2} \| x - z \|^2 \right\}. $$

We then solve the relaxed minimization problem in (3) using alternating minimization, which yields an iterative algorithm that alternates between a data-fitting sub-problem and a regularization sub-problem, i.e.,

$$ \begin{array}{rl} & \hat{z}_k = \arg \min_{z} \left\{ \lambda R(z) + \frac{\beta}{2} \| \hat{x}_{k-1} - z \|_2^2 \right\} \\ & \hat{x}_k = \arg \min_{x} \left\{ \frac{1}{2} \| A x - y \|_2^2 + \frac{\beta}{2} \| x - \hat{z}_k \|_2^2 \right\}. \end{array} $$

The proposed algorithm can be interpreted as a proximal splitting scheme with a quadratic penalty, and it is closely related to majorization-minimization and variable-splitting methods. Note that for the constraint $x = z$ to be enforced, we must increase $\beta$ as $k \to \infty$. Rather than setting an explicit schedule for $\beta$, we use the adaptive parameter selection strategy proposed in [23] to update $\beta$ at each iteration, which selects $\beta$ using a grid search based on the reconstruction quality of a few center slices from the 3D volume. Pseudocode and more detail for this parameter selection algorithm can be found in [23]. Due to the large size of $A$, it is not trivial to solve (5). Instead, it is common to use a few iterations of either the conjugate gradient method (CGM) or a gradient descent-based method to estimate $\hat{x}_k$. To that end, we use CGM for this step.
PnP methods build upon the insight that the regularization sub-problem in (4) can be interpreted as a maximum a posteriori (MAP) estimate in a Gaussian denoising problem [34]. This equivalence allows the proximal operator associated with the prior to be replaced by an off-the-shelf denoiser, typically a CNN trained on problem-specific data. The PnP framework thus decouples the data fidelity and prior modeling steps, enabling flexible incorporation of powerful learned denoisers without explicitly formulating a regularizer. Convergence properties of PnP algorithms have been studied extensively, with results often relying on denoisers satisfying nonexpansiveness or averaged operator conditions [34, 35]. In practice, these methods have demonstrated impressive empirical performance across various inverse problems, including image reconstruction, where handcrafted priors are insufficient or difficult to specify. Instead of using a generic denoiser, here we use a CNN trained to reduce artifacts from sparse-view and noisy FDK reconstructions. As for the choice of the artifact reduction prior, we propose a 2.5D architecture that exploits multi-slice information to more effectively reduce artifacts and noise while preserving the underlying structure of the data. Compared to full 3D networks, our 2.5D design offers a favorable trade-off between computational cost and performance; it captures important inter-slice context while remaining significantly more memory-efficient and easier to train. This makes 2.5D particularly well-suited for PnP methods applied to large volumetric datasets. Figure 1 shows a high-level overview of our proposed method, and Section 3.1 provides a more detailed discussion of our 2.5D artifact reduction prior. Additionally, we provide pseudocode for our proposed 2.5D PnP algorithm in Algorithm 1.
Algorithm 1 Proposed 2.5D Artifact Reduction PnP Algorithm

# 3 Implementation Details

In this section, we describe the datasets used for our experimental results, the implementation of our 2.5D artifact reduction CNN, and the hyperparameters used.

# 3.1 2.5D Artifact Reduction CNN

In contrast with other methods that use artifact reduction priors [23, 36, 37], we propose a 2.5D architecture for our artifact reduction prior. Namely, we use the UNet architecture from [38], which consists of four pooling/unpooling layers. However, instead of providing one slice of the reconstruction as input to the network, we provide a stack of 5 neighboring slices from a sparse-view FDK reconstruction containing BH artifacts, modifying the number of channels in the first layer of the UNet accordingly. Figure 2 shows the architecture of our 2.5D UNet. The 2.5D artifact reduction network learns to reduce noise and artifacts from the center slice so as to match a dense-view FDK reconstruction that does not contain BH artifacts. This modification allows the CNN to learn multi-slice information from the 3D volume, rather than only single-slice information. We divide the training volumes into patches of size $5 \times 256 \times 256$ and split these into training and validation patches with an 80/20 ratio. Then, we train the 2.5D UNet for 200 epochs with the Adam optimizer [39]. We initialize the learning rate at $1 \times 10^{-3}$ and reduce it by a factor of 2 when the normalized root mean square error (NRMSE) on the validation patches stops improving for 10 epochs. After training, we use the epoch that attains the lowest NRMSE on the validation data for all further testing. For comparison, we also train a 2D UNet with the same training procedure.
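The 2.5D input format above can be sketched as follows: each training pair couples a 5-slice stack from the noisy sparse-view reconstruction with the corresponding clean center slice from the dense-view reference. This is a minimal numpy sketch; the non-overlapping tiling and the function name are illustrative assumptions, since the paper does not specify the exact patch-sampling scheme.

```python
import numpy as np

def make_25d_pairs(noisy_vol, clean_vol, patch=256, stride=256):
    """Build (input, target) pairs: each input is a 5 x patch x patch stack
    of adjacent noisy slices; the target is the clean center slice."""
    inputs, targets = [], []
    depth, height, width = noisy_vol.shape
    for d in range(depth - 4):  # center slice of the stack is d + 2
        for r in range(0, height - patch + 1, stride):
            for c in range(0, width - patch + 1, stride):
                inputs.append(noisy_vol[d:d + 5, r:r + patch, c:c + patch])
                targets.append(clean_vol[d + 2, r:r + patch, c:c + patch])
    return np.stack(inputs), np.stack(targets)
```

An 80/20 split of the resulting pairs then gives the training and validation sets described above.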
The network is trained by minimizing an L1 loss defined as:

$$ \mathcal{L}(\pmb{\theta}) = \frac{1}{N} \sum_{i=1}^{N} \left| R(\pmb{x}_i; \pmb{\theta}) - (\mathcal{P}_{\mathrm{c}} \pmb{x}_i - \pmb{y}_i) \right|, $$

where $\pmb{x}_i \in \mathbb{R}^{M \times M \times 5}$ is an input patch consisting of five adjacent slices, and $\pmb{y}_i \in \mathbb{R}^{M \times M}$ denotes the corresponding target center slice. The operator $\mathcal{P}_{\mathrm{c}}$ extracts the center slice from the input stack, and $R(\cdot; \pmb{\theta})$ is the network with parameters $\pmb{\theta}$ that predicts the residual with respect to the ground truth. This 2.5D approach incorporates spatial context across slices while supervising the center slice reconstruction.

Fig. 2 Architecture of our 2.5D artifact reduction UNet used as the prior model in the proposed 2.5D PnP algorithm.

# 3.2 Hyperparameters

Both the 2D PnP algorithm [23] and our proposed 2.5D PnP algorithm require setting only three hyperparameters before reconstruction: the total number of iterations, the number of CG steps for the data-fitting step, and the candidate values for $\beta$. We use the same hyperparameter values as proposed in the 2D PnP paper. Namely, we set the total number of iterations to 3, the number of CG steps to 10, and the candidates for $\beta$ to $\{2^{1-i}\}_{i=0}^{14}$.

# 3.3 Synthetic Datasets

We perform experiments on simulated XCT scans of aluminum additively manufactured parts generated from Computer-Aided Design (CAD) models. Namely, we generate one training set, which we use to train the 2D and 2.5D artifact reduction CNNs, as well as two test sets. The InD test set contains scans that match the training set in the material of the part, number of views, presence of beam hardening (BH), and noise level.
The OOD test set contains scans with different noise levels or numbers of views than the training set. To simulate realistic noise in cone-beam XCT projections, we apply a Gaussian approximation of the Poisson distribution. In this approximation, we add zero-mean Gaussian noise scaled by the square root of the signal intensity and a user-defined noise parameter $\sigma$ to the projection data; we refer to the noise parameter $\sigma$ as the “noise level”. Let $W$ represent the ideal photon count data, obtained by forward-projecting a digital phantom (e.g., derived from a CAD model) onto a virtual detector using cone-beam geometry across multiple view angles. The noisy projection data is then expressed as:

$$ W_{\mathrm{noisy}} = W + \sqrt{W} \cdot \sigma \cdot \mathcal{N}(0, 1), $$

where $W_{\mathrm{noisy}}$ is the simulated noisy projection, $\sigma$ controls the noise level, and $\mathcal{N}(0, 1)$ denotes element-wise independent standard normal random variables. This formulation approximates Poisson statistics by leveraging the variance-to-mean relationship inherent in photon counting, and is commonly used in CT simulation pipelines when actual photon statistics are unavailable or when computational efficiency is prioritized. In our simulations, we vary both the noise level $\sigma$ and the number of projection views to study their impact on reconstruction quality. The training phantom is forward-projected using cone-beam geometry under both full-scan (covering $360^\circ$) and short-scan (covering $197^\circ$, with a $17^\circ$ fan angle) [40] acquisition protocols to evaluate performance under different sampling conditions. A short-scan measures projections over only $180^\circ$ plus a small fan angle, as opposed to a full $360^\circ$ scan.
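The noise model in (7) amounts to a one-line operation on the projection stack; a minimal numpy sketch (function and parameter names are illustrative):

```python
import numpy as np

def add_projection_noise(W, sigma, seed=None):
    """Gaussian approximation of Poisson noise, cf. eq. (7):
    W_noisy = W + sqrt(W) * sigma * N(0, 1), applied element-wise."""
    rng = np.random.default_rng(seed)
    return W + np.sqrt(W) * sigma * rng.standard_normal(W.shape)
```

The standard deviation scales with $\sqrt{W}$, matching the variance-to-mean relationship of photon counting; setting `sigma = 0` recovers the ideal projections.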
The short-scan protocol was originally proposed by Parker [40], building on Tuy's conditions [41, 42], which establish that a short-scan is sufficient to obtain the same result as a full scan for an object fully contained in the field of view. Table 1 gives an overview of the paired input and reference scans in the training and test sets, including the number of views, whether the scan is a short-scan, the simulated noise level, and whether the scan contains BH artifacts. BH is a common artifact in XCT imaging caused by the polychromatic nature of the X-ray beam. As X-rays pass through dense materials, lower-energy photons are absorbed more rapidly than higher-energy photons. This shifts the spectrum toward a higher average photon energy (“beam hardening”) and causes nonlinear attenuation effects that lead to artifacts in the reconstructed image, such as cupping and streaks. To simulate data without BH, we assume the X-ray source is monoenergetic, whereas to simulate data with BH, we model the spectrum of the X-ray source across energies. To mitigate BH artifacts, one typically performs a pre-processing correction step on the raw projection data. However, in our proposed 2.5D PnP method, the CNN model implicitly corrects for both BH artifacts and noise, which avoids the need for pre-processing. Thus, all input scans in our training and test sets contain BH. The simulated detector size is set to $1456 \times 1840$ pixels, with each pixel measuring 0.127 mm $\times$ 0.127 mm, matching a standard detector used in commercial industrial XCT systems (e.g., the Zeiss Metrotom). We use Python's spekpy package [43, 44] to simulate the XCT spectrum with a peak voltage of 180 kV and a 2 mm Al filter, typically used as a pre-filter to reduce the BH effect. Since the detector has 1840 channels, a common rule of thumb would require 1840 views at sufficiently high signal-to-noise ratio to guarantee high-quality reconstructions with traditional algorithms (e.g., FDK).
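As a brief aside, the beam-hardening mechanism described above can be reproduced with a toy polychromatic model: since lower-energy photons attenuate faster, the measured attenuation $-\log(I/I_0)$ is sub-linear in path length, which produces cupping when a linear model is assumed. The spectrum and attenuation values below are made up purely for illustration.

```python
import numpy as np

def log_attenuation(lengths, fluence, mu):
    """-log(I/I0) for a polychromatic beam: each energy bin with photon
    fluence fluence[e] and attenuation mu[e] decays independently."""
    lengths = np.asarray(lengths, dtype=float)
    I = np.exp(-np.outer(lengths, mu)) @ fluence   # transmitted intensity
    return -np.log(I / fluence.sum())

# Two-energy toy beam: soft photons (mu = 0.5/cm) and hard photons (mu = 0.1/cm).
fluence = np.array([1.0, 1.0])
mu = np.array([0.5, 0.1])
p = log_attenuation([1.0, 2.0], fluence, mu)
# Beam hardening: doubling the path length less than doubles the measured
# attenuation, so p[1] < 2 * p[0]; a monoenergetic beam is exactly linear.
```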
However, to reduce scan time and cost, as well as to increase throughput, the input scans in our dataset have significantly fewer views, in some cases by a factor of more than 10. The reconstructed volumes have a voxel size of 17.28 µm.

Fig. 3 Comparison of the (a) reference (2132 views, noise = 0), (b) input (145 views, noise = 1.0), (c) 2D UNet, (d) 2.5D UNet, (e) 2D PnP, and (f) 2.5D PnP reconstructions.

# 4 Results

In this section, we summarize results demonstrating the generalizability of our proposed approach on synthetic data that is in- and out-of-distribution with respect to the training set, as well as on experimental XCT scans.

# 4.1 In-Distribution (InD) Test Data

Figure 3 compares the performance of the supervised 2D UNet and 2.5D UNet models, as well as 2D PnP and our proposed 2.5D PnP, using a short and sparse scan with 145 views and noise level 1. As noted in Table 1, this scan is InD with respect to the training data. Both PnP methods reduce background noise that the UNets cannot (red and green ovals). However, 2D PnP fails to reconstruct some small pores and distorts the shape of some large pores (red arrows), while the pores in the 2.5D PnP reconstruction closely match the reference reconstruction (green arrows). Figure 4 compares the pixel intensities along the center row of the center slice in the input, reference, 2D PnP, and 2.5D PnP reconstructions for a short-scan with 580 views and noise level 0.5. Both 2D and 2.5D PnP effectively reduce the noise from the input FDK reconstruction. However, 2.5D PnP matches the reference pixel intensity both at the edges and at the center of the reconstruction, reducing the cupping artifacts seen in both the input and 2D PnP reconstructions (i.e., non-uniform intensity across the material, with brighter edges and a darker center, due to the BH effect). Table 2 summarizes the NRMSE and structural similarity index metric (SSIM) for 2D and 2.5D PnP, showing that 2.5D PnP performs better on InD data than 2D PnP.
However, typical metrics such as NRMSE (or, equivalently, PSNR) and SSIM are not sufficient for analyzing performance on data from experimental scientific imaging applications such as industrial XCT [45]. To obtain useful and relevant metrics, in Figure 5 we compare the impact of 2D PnP and our proposed 2.5D PnP on the ability to detect defects within the part, using Otsu thresholding [46] for segmentation. The detected defects are shown in red overlaid on the grayscale reconstruction slice. More defects are detected in the 2.5D PnP reconstruction than in the 2D PnP reconstruction (red and green arrows).

Fig. 4 Comparison of pixel intensities along the center row of the center slice in the input, reference, 2D PnP, and 2.5D PnP reconstructions for a short-scan with 580 views and noise level 0.5, which is InD with the training data. 2.5D PnP reduces the cupping artifacts (i.e., non-uniform intensity across the material, with brighter edges and a darker center, due to the BH effect) seen in the input and 2D PnP reconstructions.

Table 2 Image quality and probability of detection metrics for InD and OOD test sets from Table 1. Recall and precision are reported for flaws with diameters ranging from 75 µm to 125 µm. Best metrics for each testing scan are shown in bold.

Figure 6 shows the recall and precision curves calculated from the 3D segmented reconstructions, using the segmented reference as ground truth. 2.5D PnP achieves higher recall and precision over all defect diameters, supporting our conclusion that 2.5D PnP enables better detection of defects.

Fig. 5 Comparison of Otsu thresholding segmentation of the (a) reference, (b) input, (c) 2D PnP, and (d) our proposed 2.5D PnP reconstructions for a short and sparse scan with 145 views and noise level 1, which is InD with the training data. The detected defects are shown in red overlaid on the grayscale reconstruction slice.
More defects are detected in the 2.5D PnP reconstruction than in the 2D PnP reconstruction (red and green arrows).

In Table 2, we report the NRMSE and SSIM, as well as recall and precision for flaws with diameters ranging from 75 µm to 125 µm, for the 2D and 2.5D PnP reconstructions. 2.5D PnP achieves better image quality metrics (NRMSE and SSIM) for almost all InD testing scans, consistent with the qualitative results above. Note that the SSIM for both 2D and 2.5D PnP is very close to 1, implying that both methods preserve structural features well. However, 2.5D PnP achieves a lower NRMSE, implying that it matches the reference better than 2D PnP. We also note that 2D and 2.5D PnP significantly improve both NRMSE and SSIM compared to the input FDK, and offer more consistent performance across varying numbers of views and noise levels. Additionally, 2.5D PnP consistently achieves significantly higher recall and precision on InD data, implying that it enables more accurate defect detection.

# 4.2 Out-Of-Distribution (OOD) Test Data

One of the main advantages of PnP-based methods is their ability to generalize to OOD data. In contrast, end-to-end deep learning-based methods (like the UNets) need to be retrained for OOD data. In this section, we compare the performance of the 2D and 2.5D UNets and PnP methods on simulated scans with noise that is OOD with respect to the training set. Figure 7 compares 2D UNet, 2.5D UNet, 2D PnP, and our proposed 2.5D PnP using a short and sparse scan with 145 views and noise level 2, which is noisier than the training data. Both 2D and 2.5D PnP reduce the background noise (green ovals), while the 2D and 2.5D UNets are unable to fully remove it (red ovals).
Additionally, both 2D UNet and 2D PnP fail to reconstruct some small pores and alter the shape of some larger pores (red arrows), while 2.5D UNet and 2.5D PnP reconstruct these pores more accurately (green arrows). Figure 8 compares the impact of 2D and 2.5D PnP on the defects detected from a short and sparse scan with 145 views and noise level 2, using Otsu thresholding for segmentation. The detected defects are shown in red overlaid on the grayscale reconstruction slice. 2D PnP fails to detect defects that 2.5D PnP is able to detect (red and green arrows). We observe a similar pattern in the recall and precision curves shown in Figure 9. 2.5D PnP achieves significantly higher recall, implying that it can better detect pores. 2D and 2.5D PnP have similar precision for smaller pores; however, 2D PnP has better precision for larger pores, implying that 2.5D PnP detects more false positives among larger pores. In Table 2, we report the NRMSE and SSIM, as well as recall and precision for flaws with diameters ranging from 75 µm to 125 µm, for the 2D and 2.5D PnP reconstructions of scans with OOD noise levels. For scans with a noise level between the training noise levels (i.e., noise level 0.75), 2.5D PnP achieves better image quality metrics and better recall and precision. For scans with higher noise levels (i.e., noise levels 2.0 and 4.0), 2.5D PnP achieves better image quality metrics and better recall, but lower precision. This implies that 2.5D PnP enables detection of more defects at the cost of slightly more false positives when reconstructing scans with more noise than the training data.

# 5 Experimental Data

In addition to synthetic data with OOD noise, our proposed PnP method performs well on experimental XCT scans, even though the artifact reduction prior is trained only on the synthetic training set from Table 1.
In this section, we compare the performance of 2D and 2.5D PnP on parts made of aluminum-cerium (Al-Ce), using a short and sparse scan consisting of 145 views with a 180 kV source voltage and 8 s integration time. For reference, we use an MBIR reconstruction of a short-scan with 580 views. The reconstruction is of size $1356 \times 1356 \times 1264$. Importantly, the scalability of our method enables processing volumes of this size, which many existing approaches struggle to handle efficiently or practically [47]. Figure 10 compares the reference MBIR, input FDK, 2D PnP, and proposed 2.5D PnP reconstructions. Both PnP reconstructions contain less noise, with more distinguishable pores, than the FDK reconstruction. Our proposed 2.5D PnP reconstruction preserves more pores in the part than 2D PnP (green and red arrows). Table 3 reports the NRMSE, SSIM, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the input FDK, 2D PnP, and 2.5D PnP reconstructions of a 145-view scan with respect to the MBIR reconstruction from a 580-view scan. To compute the SNR and CNR, we select two $50 \times 50$ regions within 10 slices of the reconstructions that contain either only background or only material (no defects). Then, we compute the SNR (in dB) and CNR (unitless) as

$$ \mathrm{SNR} = 20 \log_{10} \left( \frac{\mu_{\mathrm{material}}}{\sigma_{\mathrm{material}}} \right) $$

$$ \mathrm{CNR} = \frac{|\mu_{\mathrm{background}} - \mu_{\mathrm{material}}|}{\sqrt{\sigma_{\mathrm{background}}^2 + \sigma_{\mathrm{material}}^2}}, $$

where $\mu_{\mathrm{background}}$ and $\sigma_{\mathrm{background}}$ are the mean and standard deviation over the background region, and $\mu_{\mathrm{material}}$ and $\sigma_{\mathrm{material}}$ are the mean and standard deviation over the material region.
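The two metrics in (8)–(9) are straightforward to compute once the regions are selected; a minimal numpy sketch (here the regions are simply passed in as arrays, whereas the paper averages over $50 \times 50$ regions drawn from 10 slices):

```python
import numpy as np

def snr_db(material):
    """Eq. (8): SNR in dB over a material-only region."""
    return 20.0 * np.log10(np.mean(material) / np.std(material))

def cnr(background, material):
    """Eq. (9): unitless CNR between background and material regions."""
    contrast = abs(np.mean(background) - np.mean(material))
    return contrast / np.sqrt(np.var(background) + np.var(material))
```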
2D and 2.5D PnP attain higher SNR and CNR than both FDK and MBIR, with 2.5D PnP achieving the highest values. Considering that the proposed approaches significantly suppress the noise in both the material and background regions, the very high CNR and SNR values are expected. Additionally, both 2D and 2.5D PnP significantly outperform the input FDK, attaining similar NRMSE and SSIM values. 2D PnP attains slightly better image quality metrics; however, as shown in Table 2, this does not translate to the task-specific metric of defect detection.

Table 3 Image quality metrics for an experimental XCT short and sparse scan of an Al-Ce part with 145 views, using MBIR with 580 views as reference. The best results are shown in bold. The PnP methods significantly outperform the input FDK, with 2.5D PnP attaining the best SNR and CNR. 2D PnP attains slightly better NRMSE and SSIM; however, this does not translate to the task-specific metric of defect detection.

Figure 11 compares the impact of 2D and 2.5D PnP on the ability to detect defects within the part, using Otsu thresholding for segmentation. The detected defects are shown in red overlaid on the grayscale reconstruction slice. 2.5D PnP detects more pores within the part and better preserves the shape and size of the detected pores (red and green arrows). This observation is further supported by the recall and precision curves shown in Figure 12. 2.5D PnP achieves significantly higher recall and precision across all defect diameters, supporting our conclusion that 2.5D PnP enables significantly better defect detection, even when reconstructing experimental XCT scans using a prior trained only on synthetic data. We emphasize that we chose Otsu thresholding [46] because it is a simple, parameter-free, and widely used approach. It avoids tuning across reconstruction qualities and allows for consistent, reproducible evaluation across methods.
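For reference, Otsu's method [46] needs only the image histogram; the following is a self-contained numpy sketch of the thresholding used for segmentation. Treating low-intensity voxels as defects is an illustrative convention here, not a detail stated in the paper.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Parameter-free Otsu threshold: pick the histogram split that
    maximizes the between-class variance of the two resulting classes."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # pixels at or below each bin
    w1 = w0[-1] - w0                           # pixels above each bin
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1)               # class means (guard div-by-0)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
    return centers[np.argmax(between)]

def detect_defects(slice2d):
    """Defects = low-attenuation pixels below the Otsu threshold."""
    return slice2d < otsu_threshold(slice2d)
```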
While CNN-based segmentation approaches could improve segmentation performance, they also introduce confounding factors such as retraining and hyperparameter sensitivity. Our focus was to assess whether improved image quality translates into improved detectability using a fixed, minimal segmentation baseline, which Otsu thresholding readily provides. Table 4 compares the reconstruction time and peak memory usage of 2D PnP and the proposed 2.5D PnP when reconstructing an XCT volume of size $1356 \times 1356 \times 1264$ using four Nvidia H100 GPUs with 80 GB of memory each. Both the 2D and 2.5D PnP reconstructions take approximately 48 minutes to complete and require approximately 35 GB of GPU memory during processing. While this is more expensive than FDK, which takes approximately 1 minute and uses approximately 0.5 GB of GPU memory, both PnP methods produce significantly higher-quality reconstructions that substantially improve artifact suppression and defect detection. Importantly, 2.5D PnP is also an order of magnitude less expensive than MBIR, which requires approximately 6 hours per volume and uses approximately 300 GB of GPU memory, while delivering comparably high-quality results. Moreover, our 2.5D PnP method is only slightly more expensive than 2D PnP ($\approx 2$ seconds and $\approx 1$ GB more), but provides noticeably improved reconstruction fidelity and a higher probability of detection for key features. Thus, 2.5D PnP offers a compelling balance between computational cost and reconstruction quality.

Table 4 Runtime (in seconds) and peak GPU memory usage (in MB) for each reconstruction method. Our proposed 2.5D PnP is much less expensive than MBIR, which requires approximately 6 hours per volume and uses approximately 300 GB of GPU memory, while delivering comparably high-quality results.

# 6 Discussion

Our synthetic OOD test set also includes two scans with fewer views than the training set.
Namely, we test on short and sparse scans with only 73 views, which pose a difficult challenge for reconstruction algorithms. Table 2 reports the image quality and defect detection metrics for 2D and 2.5D PnP applied to these short and sparse scans at noise levels of 0.5 and 1.0. Despite being trained on denser view distributions, both the 2D and 2.5D PnP frameworks demonstrate strong generalization to this OOD data. 2D PnP consistently achieves higher image quality scores and improved precision, whereas 2.5D PnP shows increased recall, detecting a larger number of defects but with somewhat reduced precision. This trade-off suggests that 2.5D PnP is more sensitive in identifying subtle defect features, likely due to its exploitation of inter-slice contextual information. We hypothesize that the increase in false positives for 2.5D PnP arises from “pore-like” structures induced by view-sparsity artifacts that appear consistently across neighboring slices. Nonetheless, the ability of 2.5D PnP to leverage volumetric information highlights its scalability and adaptability across scan protocols and noise conditions outside the original training distribution.
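The recall/precision trade-off discussed above can be made concrete with a simplified voxel-wise version of the counting logic; the paper computes these curves per defect diameter from full 3D segmentations, so this is only a sketch with illustrative names.

```python
import numpy as np

def detection_recall_precision(pred_mask, ref_mask):
    """Recall and precision of a predicted defect mask against the
    segmented reference reconstruction (used as ground truth)."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    tp = np.count_nonzero(pred & ref)
    fn = np.count_nonzero(~pred & ref)   # missed real defects lower recall
    fp = np.count_nonzero(pred & ~ref)   # spurious detections lower precision
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision
```

Under this view, 2.5D PnP's additional detections raise recall, while sparsity-induced "pore-like" structures that survive segmentation count as false positives and reduce precision.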
Cone-beam X-ray computed tomography (XCT) is an essential imaging technique for generating 3D reconstructions of internal structures, with applications ranging from medical to industrial imaging. Producing high-quality reconstructions typically requires many X-ray measurements; this process can be slow and expensive, especially for dense materials. Recent work incorporating artifact reduction priors within a plug-and-play (PnP) reconstruction framework has shown promising results in improving image quality from sparse-view XCT scans while enhancing the generalizability of deep learning-based solutions. However, this method uses a 2D convolutional neural network (CNN) for artifact reduction, which captures only slice-independent information from the 3D reconstruction, limiting performance. In this paper, we propose a PnP reconstruction method that uses a 2.5D artifact reduction CNN as the prior. This approach leverages inter-slice information from adjacent slices, capturing richer spatial context while remaining computationally efficient. We show that this 2.5D prior not only improves the quality of reconstructions but also enables the model to directly suppress commonly occurring XCT artifacts (such as beam hardening), eliminating the need for artifact correction pre-processing. Experiments on both experimental and synthetic cone-beam XCT data demonstrate that the proposed method better preserves fine structural details, such as pore size and shape, leading to more accurate defect detection compared to 2D priors. In particular, we demonstrate strong performance on experimental XCT data using a 2.5D artifact reduction prior trained entirely on simulated scans, highlighting the proposed method's ability to generalize across domains.
# 1 Introduction

Figure 1: Overview of SWE-Dev (14,000 train + 500 test samples). Given a project requirement description and an incomplete codebase, a coding system (17 chatbot LLMs, 10 reasoning LLMs, and 10 multi-agent systems are evaluated) generates code that is checked against executable test cases; the training split supports SFT, offline RL, and multi-agent training.

Large Language Models (LLMs) are rapidly transforming autonomous programming, with capabilities extending from generating isolated code snippets to complex interactions within entire repositories [1, 2]. As LLMs increasingly engage at this repository scale, rigorously evaluating their proficiency in handling complex coding systems becomes paramount for guiding their advancement. Current prominent benchmarks, while valuable, still struggle to judge how well LLMs perform in realistic, end-to-end development settings (Table 1). For example, SWE-Bench [3] measures only localized bug fixes described by GitHub issues, and RepoBench [4] evaluates the completion of a few unrelated functions within a repository. These benchmarks overlook the core task of developing and integrating significant new functionality, which is how real-world codebases actually evolve. The task of developing and integrating new functionalities is formally defined as feature-driven development (FDD) [5, 6], which accounts for $40\%$ of all development coding effort [7, 8].
FDD involves the end-to-end creation of new features, from interpreting requirements in large, existing codebases to generating functionally correct and integrated code (see Figure 1). FDD is how most modern software, from large applications to essential libraries, primarily evolves and delivers continuous value [9, 10]. Consequently, mastering FDD is a critical step towards achieving more comprehensive and genuinely autonomous programming capabilities in coding systems. Recognizing the central role of FDD and the limitations of current evaluation benchmarks, we introduce a feature-driven SoftWarE Development dataset, SWE-Dev, the first large-scale dataset designed to evaluate and train autonomous AI systems on real-world FDD tasks. It comprises 14,000 training and 500 test instances derived from over 1,000 open-source projects, and is distinguished by three key characteristics: (1) Realistic scale and complexity: SWE-Dev requires substantial code modifications (avg. 190 LOC across 3 files), challenging models with the cross-file dependencies, large contexts, and significant implementation scope characteristic of real-world feature development. (2) Robust and grounded evaluation: Each SWE-Dev sample is grounded in a real open-source repository, guided by a well-defined project requirement description (PRD), and evaluated using executable test cases to ensure the functional correctness of the proposed implementation. This design ensures alignment between task objectives and evaluation, enabling robust assessment and model supervision. (3) Verifiable training set with executable test suites: Uniquely, all 14,000 training instances are paired with runnable environments and executable unit tests, providing execution-based feedback that enables effective Supervised Fine-Tuning (SFT) validation, Reinforcement Learning (RL) with accurate rewards, and Multi-Agent System (MAS) training (see Table 1).
Our extensive experiments using SWE-Dev reveal several critical insights. First, repository-level feature development is challenging: even top-tier models like Claude-3.7-Sonnet [11] and GPT-4o [12] solve only 22.45% of hard samples and 68.70% of easy samples at Pass@3. Second, MASs generally outperform single-agent baselines, but by modest margins. Interestingly, simple general-purpose multi-agent methods (e.g., Self-Refine [13], Reflexion [14]) often outperform more complex code-specific agents, while requiring fewer model calls and lower cost. Lastly, task-specific training yields substantial gains across all training methods: after such training, a fine-tuned 7B model is comparable to GPT-4o on the hard subset. These findings point to several promising directions for future research. First, the difficulty of FDD for LLMs necessitates enhancing LLMs' core reasoning and long-context capabilities for software development. Second, current MAS designs often suffer from unnecessary communication overhead and limited coordination efficiency; future work should explore lightweight agent architectures and better context-sharing mechanisms for repository-level development. Lastly, our initial experiments with RL and role-based multi-agent training show that training can be beneficial, but headroom remains; future work could investigate multi-agent training and long-context RL with SWE-Dev. Our contributions are as follows:

• We introduce SWE-Dev, the first real-world dataset for autonomous feature-driven software development. The dataset includes both training and test splits, each with runnable environments and test cases, enabling a wide range of evaluation and training.

• Our evaluations on SWE-Dev offer novel insights into the proficiency and deficiencies of various coding systems (chatbot LLMs, reasoning LLMs, and MAS) on complex FDD tasks.
• We demonstrate that SWE-Dev enables and validates diverse training paradigms (SFT, RL, and MAS training), establishing its utility for advancing training-based adaptation.

Table 1: Comparison of SWE-Dev with existing repository-level benchmarks. We compare the task type (FC: Function Completion, PG: Project Generation, LC: Line Completion, IS: Issue Solving), use of real repositories, availability of training sets, number of samples, and task statistics such as lines of code (LOC) and PRD task description length.

# 2 Related Work

# 2.1 Coding benchmarks

LLMs show significant potential in coding tasks, driving the need for robust benchmarks. Early benchmarks such as HumanEval [20], MBPP [21], APPS [22], and CodeContests [23] primarily focus on isolated, function-level tasks. These benchmarks test for correctness in constrained settings: short snippets, well-specified inputs, and short expected outputs. While useful for early-stage capability testing, such tasks fall short of reflecting the complex multi-file dependencies and long-context nature of real-world software development. To address this, repository-level benchmarks emerged, such as SWE-Bench [3] (issue fixing), RepoBench [4], and M2RC-Eval [19] (code completion/understanding). Despite this progress, they often face two main issues: (1) The scope of required code generation or modification remains limited (e.g., avg. 32.8 LOC in SWE-Bench, 38.21 LOC in ComplexCodeEval [17]), inadequately simulating large-scale feature development or refactoring. (2) Weak or inconsistent evaluation protocols: several benchmarks [15, 19, 4] rely heavily on proxy metrics such as code similarity or static heuristics, which often fail to reflect functional correctness. This compromises both the robustness of evaluation and the comparability of results across models [24, 25].
SWE-Dev directly tackles these limitations by providing large-scale repository-level feature development tasks with executable unit tests. Its tasks involve substantial modifications, addressing shortcomings in both code scope and trainable environments, thereby significantly increasing task complexity and realism.

# 2.2 Code LLMs Training

Training LLMs for coding tasks typically involves three stages: pre-training, supervised fine-tuning (SFT), and reinforcement learning (RL). Pre-trained models such as StarCoder [24] and Phi [26] leverage massive code corpora to learn syntax and general programming patterns. To improve instruction following and task completion, many works adopt SFT: Code Alpaca [27] employs self-instruct generation, and WizardCoder [28] leverages Evol-Instruct [29] to synthesize more complex instructions. However, SFT fundamentally lacks exploration: it teaches models to imitate ground-truth outputs rather than to reason or build autonomously [30]. Beyond SFT, RL frameworks such as CodeRL [31] utilize test-case-based feedback to optimize model behavior. While promising, both SFT and RL approaches have largely focused on function-level tasks, limiting their applicability to more complex development scenarios. To address this, SWE-Gym [32] explores extending training to repository-scale tasks using multi-agent systems. However, due to the lack of an executable training set in SWE-Bench, SWE-Gym resorts to constructing a separate dataset of 2,438 tasks, ultimately yielding only 500 trajectory samples for training. In contrast, our proposed SWE-Dev provides a large-scale repository-level training set with runnable environments and unit-test-based supervision. It supports SFT, RL, and multi-agent training with executable feedback, enabling realistic and scalable development of code generation systems.
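Execution-based benchmarks of this kind typically report pass@k estimated from n sampled solutions per task. SWE-Dev reports Pass@3; the paper does not state its exact estimator, so the sketch below shows only the common unbiased convention introduced with HumanEval (Chen et al., 2021):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations, of which
    c pass the tests, passes: 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k draw must include a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```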
Figure 2: Overview of the SWE-Dev pipeline. Data collection: gather PyPI repos and tests, build Docker images, and execute tests. Dataset generation: Step 1, Test File Collection (8,000 collected repos filtered to 1,086 repos and 9,339 test files); Step 2, Call Tree Generation (dynamic analysis yields call chains merged into a call tree); Step 3, Task Generation (target-function masking plus a module description and enhanced docstrings form the PRD of the evaluation task). # 3 SWE-Dev SWE-Dev is the first dataset designed to train and evaluate autonomous coding systems on feature-driven software development tasks. Each instance requires the model to implement a new capability within an existing codebase, based on a natural language requirement and evaluated through real-world unit tests. This section describes the construction of the dataset (§3.1), its core features (§3.2), and key statistics (§3.3). # 3.1 Dataset Construction Our dataset construction leverages a straightforward principle: test files in real-world repositories can serve both as a source of feature requirements and as a means of verifying correct implementation. In PyPI packages, developers create high-quality test files to ensure that specific modules or features function reliably across updates. For example, in numpy, test_random.py validates random array generation, aligning closely with the feature it tests. These test files provide executable, feature-specific validation, making them ideal for defining and evaluating development tasks. Using these developer-authored tests as ground truth, we gain two advantages. First, they provide executable, functionality-level feedback for model evaluation. Second, by tracing the test cases back to their associated implementation code, we can identify and mask the relevant source code, forming the basis of an incomplete development task. These traced functions also guide the generation of precise task descriptions.
Based on this process, we divide our construction into three stages: (1) collecting repositories and test files and building Docker environments, (2) generating test-to-code call trees via dynamic tracing, and (3) creating the final task by masking the relevant source code and producing the feature specification. Step 1: Test File Collection To support realistic feature-level tasks and test-based evaluation, we begin by collecting real-world repositories that reflect common development scenarios. Specifically, we select 8,000 popular PyPI packages based on star counts. However, not all repositories are suitable: many lack usable tests or require sophisticated installation. Therefore, we apply a strict filtering process focused on test suite executability. Repositories are retained only if they meet two criteria: (1) they include an identifiable test suite (e.g., using pytest or unittest), and (2) their test files can be executed successfully within the package's Docker environment, with all tests passing. This ensures the resulting tasks are grounded in verifiable, runnable functionality. After filtering, we obtain 1,086 validated repositories (as of December 12, 2024) and 9,339 executable test files. Step 2: Call Tree Generation To locate the specific functions and methods involved in implementing a feature, we capture the runtime interactions between test cases and their corresponding source code through dynamic analysis.
This process has two main parts: (1) Dynamic analysis: We execute each test file using pytest inside a containerized Docker environment and apply Python's built-in trace module to record all triggered functions in the source code. Figure 3: Distribution of training and test samples across PyPI application domains (Web & Network; Command-Line & Developer Tools; Data Science & Visualization; Automation; Data Processing & Integration; Cloud & Data Storage; Others). Table 2: Basic statistics of SWE-Dev, including task specification length, repository scale, ground truth implementation size, and evaluation test coverage for both train and test splits. This results in multiple linear call chains that record the sequence of invoked source functions. (2) Call tree ensemble: We aggregate the call chains into a hierarchical call tree, where nodes represent functions and edges capture dependency relationships. The call tree is rooted at the test functions, with the triggered source functions as descendants. The depth and width of the tree reflect the complexity of the feature logic, including nested structures and cross-file dependencies. These trees provide a precise mapping from test behavior to implementation code, enabling us to localize relevant functions and systematically control task difficulty later. Step 3: Task Generation Once we have localized the implementation logic using call trees, we convert it into a feature development task by (1) masking the relevant code and (2) generating a natural language requirement for the feature. These two components constitute a typical development scenario in which a feature is functionally defined but not yet implemented. To achieve this, we perform the following: (1) Target function masking: We use structural properties of the call tree (e.g., depth and node count) to select function nodes that represent the core logic under test.
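The call-tree ensemble described in Step 2 can be sketched as a prefix-tree merge of the recorded call chains, after which depth and node count (the two structural properties used to control difficulty) fall out of a simple traversal. Function and chain names below are illustrative:

```python
def build_call_tree(chains):
    """chains: list of call chains, each a list of function names starting
    at the test function. Returns a nested-dict call tree."""
    tree = {}
    for chain in chains:
        node = tree
        for fn in chain:
            node = node.setdefault(fn, {})  # shared prefixes merge
    return tree

def tree_stats(tree, depth=0):
    """Return (max_depth, node_count), the two difficulty controls."""
    if not tree:
        return depth, 0
    depths, count = [], 0
    for child in tree.values():
        d, c = tree_stats(child, depth + 1)
        depths.append(d)
        count += 1 + c
    return max(depths), count
```

For example, two chains test_rand→rand→seed and test_rand→rand→fill merge into a single tree of depth 3 with 4 distinct functions.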
The corresponding implementation code is removed from the repository, leaving a functional gap to fill. (2) Project Requirement Document (PRD) generation: We construct the feature description in the PRD by using GPT-4o to synthesize a high-level module description from the test file and augmenting the masked function's docstring with implementation-level details. These two elements are combined into the PRD, which serves as the task prompt. See the example in Fig. 9 and the prompt in Appendix H. # 3.2 Dataset Features Controlled Complexity via Call Tree: Leveraging call-tree analysis, SWE-Dev enables systematic adjustment of task difficulty by adjusting the dependency depth used for task generation. This uniquely supports rigorous assessment of model capabilities against varying complexities; see the discussion in §5. Reliable Test-Based Evaluation: Assessment uses the original, developer-authored unit tests, validated for correct execution in a controlled environment. This execution-based pass/fail verification provides an objective, reproducible, and functionally accurate measure of code, directly reflecting real-world correctness criteria. Executable Training Support: SWE-Dev includes runnable environments and test cases for every sample, enabling training paradigms such as SFT and RL with execution-based feedback. # 3.3 Statistics Table 2 summarizes the key statistics of SWE-Dev, which consists of 14,000 training and 500 test samples. The test set is manually curated and split into two difficulty levels: easy and hard (250 instances each). Each dataset instance comprises four components: (1) the task, specified by a PRD, with its token count reflecting instruction length; (2) the codebase, referring to the non-test portion of the repository, where we report the number of files and lines of code (LOC); (3) the ground truth (GT) Figure 4: Comparison of Pass@3 scores for 17 chatbot and 10 reasoning LLMs on SWE-Dev across Easy and Hard splits.
SWE-Dev poses substantial challenges and effectively distinguishes model capabilities under both difficulty levels; see Appendix B for full results. code to be implemented, measured by its LOC and number of target functions; and (4) the test suite, evaluated via the number of test cases and total test LOC per sample. Figure 3 shows the distribution of training and test samples across six major PyPI application domains, demonstrating the diversity of software categories represented in the dataset. More statistics are in Appendix A. # 4 Experiment In this section, we empirically evaluate the effectiveness of various coding systems and training paradigms on SWE-Dev. We first compare the performance of single LLMs (§4.1.1) and MAS (§4.1.2) on the FDD tasks. Then we discuss the effectiveness of different training approaches, including SFT (§4.2.1), RL (§4.2.2), and multi-agent training (§4.2.3). Setup. We employ Pass@k as the evaluation metric on SWE-Dev [20]. For inference, SWE-Dev requires both the PRD and the codebase as inputs; however, the codebases are large (an average of 202K lines, see Table 2), exceeding typical LLM context windows. Thus, in all the experiments below, we provide only the relevant code context—i.e., the specific files involved in the task—rather than the entire codebase. # 4.1 Testing Results This section presents the performance of 17 chatbot LLMs, 10 reasoning LLMs, and 10 multi-agent systems on SWE-Dev, under the single-LLM and multi-agent settings. Full details of the evaluated methods are provided in Appendix F.1. # 4.1.1 Single LLM Inference SWE-Dev presents substantial challenges for current LLMs, revealing a clear gap between existing coding capabilities and real-world software engineering demands. Figure 4 reports the Pass@3 performance of chatbot and reasoning LLMs on SWE-Dev.
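For reference, Pass@k is conventionally computed with the unbiased estimator introduced alongside HumanEval [20]: generate n samples per task, count the c samples that pass all tests, and estimate the probability that at least one of k drawn samples passes. A minimal implementation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n generated samples, c of which pass
    all unit tests, evaluated at k <= n draws."""
    if n - c < k:
        return 1.0  # cannot draw k samples without hitting a passing one
    return 1.0 - comb(n - c, k) / comb(n, k)
```

The per-task values are then averaged over the benchmark to give the reported score.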
We observe that: (1) LLMs perform better on the easy split than on the hard split. (2) Performance generally scales with model size, especially for LLMs within the same family, aligning with our understanding of LLM capabilities. (3) Even the best-performing LLM (Claude-3.7-Sonnet[11]) achieves just over 20% on the Hard split. This still falls short of strong performance, indicating that current models are not yet fully capable of handling tasks that approximate the complexity of real-world development scenarios. Figure 5: Comparison of benchmarks across various model sizes. SWE-Dev shows clear performance scaling with model size, while HumanEval[20] fails to distinguish between models. ComplexCodeEval[15] using CodeBLEU[33] exhibits high variance, making it less stable for evaluation. Table 3: Comparison of general and code-specific MAS on SWE-Dev driven by GPT-4o-mini. Bold highlights the best performance; underlined indicates results worse than the single-agent baseline. Most MAS methods outperform the single agent, and simpler general MASs are more effective and efficient than complex coding-specific MASs. Reasoning models generally underperform their base counterparts, with Claude-3.7-Sonnet being an exception. While Claude with thinking outperforms its base variant, most reasoning models yield worse results. This suggests that current reasoning strategies do not consistently translate into gains for complex, repository-level generation tasks. We further explain this in Appendix C.2. # SWE-Dev provides stable and discriminative evaluation of model capabilities. Figure 5 compares the performance of the Qwen2.5 [41] family on SWE-Dev, HumanEval [20], and ComplexCodeEval [15] across three runs. We use Pass@1 for SWE-Dev and HumanEval, and CodeBLEU [33] for ComplexCodeEval. The lines represent the average performance, and the shaded regions show the variance.
We observe that: (1) SWE-Dev yields low-variance performance and consistent scaling with model size, demonstrating its stability and reliability in evaluating model capabilities. (2) In contrast, HumanEval—despite being stable—is too simple to differentiate models meaningfully. (3) Meanwhile, ComplexCodeEval shows high variance due to its reliance on a similarity-based metric, CodeBLEU, which limits its reliability for evaluating complex generation tasks. # 4.1.2 Multi-Agent Inference Table 3 compares the performance, call times, and total costs of various MAS against the single-agent baseline driven by GPT-4o-mini. Details of the MAS are given in Appendix F.1. Key observations are: MAS generally outperforms single-agent baselines on complex tasks. While the single-agent approach achieves only 11.09% Pass@1 on hard tasks, Self Refine[34] and EvoMAC[18] improve performance to 20.03% and 13.60%, respectively. These results highlight the advantage of multi-agent collaboration in solving complex, reasoning-intensive problems. Simpler multi-agent strategies offer strong performance–efficiency trade-offs. Methods such as Self Refine strike an effective balance between performance and cost. On the easy subset, Self Refine achieves the highest Pass@1 of 40.02% using only 5 calls. In contrast, more complex systems like ChatDev, despite making over 26 calls, fall behind in performance (35.13%), indicating that additional agent complexity does not necessarily lead to better results. Human-designed, workflow-heavy MAS often introduce unnecessary overhead. Systems with manually defined roles and interaction protocols, such as ChatDev and MapCoder, tend to be less effective. On hard tasks, ChatDev requires over 30 calls yet achieves only 11.7%, while MapCoder performs even worse, at 5.87% despite 23.41 calls.
These results suggest that handcrafted workflows may introduce redundant operations without improving code generation quality. Table 4: Comparison of zero-shot and SFT performance (Pass@1) on SWE-Dev using Qwen2.5 models. Results are reported on both Easy and Hard test splits across model sizes from 0.5B to 7B. The $\Delta$ columns indicate relative improvement after SFT. Fine-tuning yields consistent gains. Figure 6: Training data scaling of SFT on Qwen2.5-7B-Instruct with SWE-Dev. As data size increases, performance improves steadily under SFT. Our results highlight MAS's potential for complex tasks on SWE-Dev but reveal a gap between simple and complex MAS, indicating that scalable, efficient MAS remain a challenge. Future work could focus on balancing collaboration benefits with resource costs and mitigating error amplification from LLM hallucinations across agent interactions. # 4.2 Training Results In this section, we evaluate SWE-Dev's support for different training methods, including SFT and RL. Additionally, we present preliminary results from our exploration of multi-agent training, offering an initial assessment of MAS-based learning. For detailed training setups, refer to Appendix F.2. # 4.2.1 Single LLM SFT We conducted experiments on Qwen2.5-Instruct models of various sizes (0.5B, 1.5B, 3B, and 7B) to assess the impact of SFT on performance in SWE-Dev. The experimental setting is in Appendix F.3. Training significantly improves performance across model sizes. SFT leads to substantial performance improvements across all model sizes, especially for harder tasks. As shown in Table 4, the 7B model achieves a Pass@1 of 36.90% on the easy task set after fine-tuning, up from 25.74% in the zero-shot setting. On the hard task set, Pass@1 increases from 6.68% to 18.89%, demonstrating the clear benefits of training in enhancing model performance.
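The SFT setup above can be sketched at the data level: each instance becomes a prompt (PRD plus relevant code context) paired with the ground-truth implementation as the completion, with the loss masked to the completion span. The field names (`prd`, `context`, `gt_code`) and the -100 label-masking convention are our assumptions for illustration, not details stated in the paper:

```python
def make_sft_example(instance, tokenize):
    """Build one SFT training example from a (hypothetical) SWE-Dev
    instance dict. Prompt tokens get label -100 so the loss is computed
    only on the completion (the ground-truth implementation)."""
    prompt = instance["prd"] + "\n\n" + instance["context"]
    prompt_ids = tokenize(prompt)
    completion_ids = tokenize(instance["gt_code"])
    return {
        "input_ids": prompt_ids + completion_ids,
        "labels": [-100] * len(prompt_ids) + completion_ids,
    }
```

Any tokenizer with a list-of-ids interface can be plugged in; a standard causal-LM trainer then ignores the -100 positions when computing cross-entropy.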
SWE-Dev effectively supports the training scaling law. Figure 6 illustrates the scaling behavior of training Qwen2.5-7B-Instruct. In this experiment, we measured model performance across varying amounts of fine-tuning data, specifically tracking changes in Pass@1 for both easy and hard tasks. As shown in the figure, performance improves steadily as the amount of fine-tuning data increases, with larger improvements observed for harder tasks. In summary, our results underscore the importance of fine-tuning in improving performance on SWE-Dev. The scaling behavior observed here further supports the idea that SWE-Dev is a valuable dataset for studying the effects of model size and training data on task performance. # 4.2.2 Single LLM RL SWE-Dev provides precise test cases enabling accurate rewards for coding tasks, supporting both online and offline RL. In this section, we explore the impact of RL on Qwen2.5-7B-Instruct using SWE-Dev. Considering the computational cost of RL, we limit our experiments in this section to 2k training samples. For the full training setup, refer to Appendix F.4. Table 5: Performance comparison of Qwen2.5-7B-Instruct as the base model, the SFT-tuned model, and the RL-tuned models on SWE-Dev. Table 6: Comparison of multi-agent role-wise training, the base MAS, and a single LLM's performance on Qwen2.5-7B-Instruct. $\Delta$ indicates the relative improvement over the base MAS. Partial fine-tuning of either agent also leads to consistent gains, demonstrating the effectiveness of role-specific supervision enabled by SWE-Dev. Figure 7: EvoMAC performance trajectory under ground-truth test case supervision on SWE-Dev with Qwen2.5-7B-Instruct. EvoMAC iteratively improves across reasoning rounds, guided by executable test feedback. Both online and offline RL improve performance, especially on hard tasks. Table 5 shows that both PPO [42] and DPO [43] sig
nificantly improve Pass@1 performance, especially on the Hard split. Furthermore, PPO outperforms SFT on the same training data. These findings highlight the advantages of RL training. RL boosts one-shot success but not multi-sample gains. While RL fine-tuning yields improvements in Pass@1, it has minimal impact on Pass@3. Specifically, PPO achieves a Pass@1 of 28.30% on easy tasks, a noticeable increase from the base model's 25.74%, but its Pass@3 remains lower than that of SFT training, and even below the original model's performance. These results suggest that RL can be beneficial in refining Pass@1, particularly for more complex tasks, by increasing the model's efficiency in generating correct answers in fewer attempts. However, this efficiency comes at the cost of reduced exploration, which aligns with findings from prior work [44]. Therefore, while RL improves performance, significant headroom remains, and more advanced methods or further training are needed to achieve improvements across tasks. # 4.2.3 Multi-Agent Training MAS has shown promising results on SWE-Dev, and we further investigate the training process of MAS on this dataset. As depicted in Fig. 7, the ground-truth test case supervision in SWE-Dev enables EvoMAC [18] to improve its performance across multiple rounds of reasoning. This iterative refinement process motivates us to explore EvoMAC as the MAS for training on SWE-Dev. We apply rejection sampling to enhance agent performance via role-wise training. Trajectory Collection. We use Qwen2.5-7B-Instruct to collect training data for the MAS, following these steps: (1) EvoMAC iterative reasoning: EvoMAC performs multiple reasoning rounds, benefiting from ground-truth test case supervision to progressively improve its performance.
(2) Rejection sampling: At each iteration, we apply rejection sampling based on the training samples' test cases to select high-quality trajectories that show improvement over the previous round, ensuring the retention of beneficial data. (3) Role-wise training: The selected trajectories are used to train the two agents (organizer and coder) in EvoMAC role-wise, allowing each agent to specialize in its task for better overall performance. Training Effectiveness. Table 6 presents the performance of different training configurations in terms of Pass@1. We see that: i) fine-tuning both the organizer and coder agents results in the highest performance, with Pass@1 of 31.65% on easy tasks and 12.70% on hard tasks, outperforming all other configurations; ii) when only one agent is fine-tuned, we also see improvements over the baseline. These findings highlight the effectiveness of role-wise training for MAS. # 5 Dataset Analysis We analyze SWE-Dev's task complexity, evaluation setup, and PRD quality to demonstrate its uniqueness and reliability. Analysis of Task Difficulty and Call Tree Characteristics. We analyze how task difficulty in SWE-Dev correlates with call tree complexity. As introduced in §3.1, a call tree reflects the dynamic function invocation structure of a task, where nodes represent functions and edges denote their call relationships. Figure 8: (a) Complexity analysis (GPT-4o performance versus call tree depth and number of nodes); (b) Metric comparison (EM, ES, CodeBLEU, and Pass@3 across Qwen2.5-7B/32B/72B and GPT-4o); (c) PRD quality analysis (original-vs-refined voting proportions on Clarity, Completeness, and Actionability). We use two metrics: depth, indicating the maximum call nesting, and node count, representing the total number of distinct functions involved in the task. Fig.
8a shows that GPT-4o's performance declines as depth and node count increase, revealing a strong correlation between structural complexity and task difficulty. This suggests that deeper and broader call structures introduce more functional requirements and interdependencies, making tasks more challenging. Evaluation Method Precision. SWE-Dev uses execution-based evaluation with test cases, enabling precise performance signals. We compare several metrics: Exact Match (EM) [19], Exact Sequence (ES) [19], CodeBLEU [33], and Pass@3, using Qwen2.5 models and GPT-4o. As Fig. 8b shows, Pass@3 best reflects capability scaling, separating models by size and quality. In contrast, EM, ES, and CodeBLEU show minimal variance, failing to distinguish models. This demonstrates that SWE-Dev's test-case-based evaluation provides a more robust and realistic signal of model performance, better reflecting the functional correctness required in real-world software development. PRD Quality. SWE-Dev includes a PRD for each task to simulate realistic developer-facing requirements, primarily derived from the original docstrings found within the repository source code. While many functions in open-source code include docstrings, we found that these are often incomplete—lacking clear descriptions of behavior, parameters, or edge cases. To improve instruction clarity without fabricating content, we lightly refine existing docstrings using GPT-4o, grounded in the related file and surrounding context. To evaluate instruction quality, we conducted a human assessment on 100 sampled tasks. Two experienced engineers rated the original and refined PRDs along Actionability, Completeness, and Clarity (Appendix C.1 includes the human instructions). As shown in Fig. 8c, refined PRDs consistently scored higher across all dimensions. This supports SWE-Dev's goal of providing realistic, well-scoped requirements for reliable model evaluation.
Large Language Models (LLMs) have shown strong capability in diverse software engineering tasks, e.g., code completion, bug fixing, and document generation. However, feature-driven development (FDD), a highly prevalent real-world task that involves developing new functionalities for large, existing codebases, remains underexplored. We therefore introduce SWE-Dev, the first large-scale dataset (with 14,000 training and 500 test samples) designed to evaluate and train autonomous coding systems on real-world feature development tasks. To ensure verifiable and diverse training, SWE-Dev uniquely provides all instances with a runnable environment and their developer-authored executable unit tests. This collection not only provides high-quality data for Supervised Fine-Tuning (SFT), but also enables Reinforcement Learning (RL) by delivering accurate reward signals from executable unit tests. Our extensive evaluations on SWE-Dev, covering 17 chatbot LLMs, 10 reasoning models, and 10 Multi-Agent Systems (MAS), reveal that FDD is a profoundly challenging frontier for current AI (e.g., Claude-3.7-Sonnet achieves only 22.45\% Pass@3 on the hard test split). Crucially, we demonstrate that SWE-Dev serves as an effective platform for model improvement: fine-tuning on its training set makes a 7B model comparable to GPT-4o on the \textit{hard} split, underscoring the value of its high-quality training data. Code is available at \href{https://github.com/justLittleWhite/SWE-Dev}{https://github.com/justLittleWhite/SWE-Dev}.
[ "cs.SE", "cs.CL" ]
# 1 INTRODUCTION Life narratives are deeply unique and valuable [10, 35, 82], shaped through an interplay of personal memories that involve achievements, struggles, and moments of reflection. Capturing and expressing these narratives through an autobiography is a powerful way for people to preserve their legacy [5, 15, 24, 52], deepen their self-understanding [83, 95], and share their life journey with others, which, in turn, fosters strong connections across generations and communities [2, 3, 34]. However, autobiography writing poses a few significant challenges. Because memories are often scattered, individuals find that recalling and organizing their life experiences into a coherent narrative is both emotionally demanding and time-consuming [22, 72, 98, 128]. To inform the design and key features of an effective conversational autobiography writing assistant, we consulted with experts and conducted user interviews. These studies highlight the importance of designing a system that can hold conversations that feel both engaging and personal to the user. Furthermore, the system should sufficiently capture the important memories shared by users, while preserving their factual accuracy in writing. Most importantly, users expressed a strong preference for systems that allow them to control the narrative direction and writing process—an observation echoed in prior work [8, 40, 61, 71]. However, most existing academic systems are either designed for short-form storytelling or do not address the user needs we identify. For example, MindScape [89] supports context-aware journaling using behavioral sensing but focuses on short-term self-reflection rather than long-form narrative writing. Recently, GuideLLM [30] introduced an autobiography writing system in which the system autonomously guides users through structured interview sessions.
Their approach draws on conversational guidelines from "The Life Story Interview" [81], with each session focused on a single topic and used to create a chapter of the user's autobiography. While GuideLLM is intuitive and easy to use, our approach offers a new perspective on autobiography writing that emphasizes flexibility and collaboration. In this work, we present StorySage1, a conversational autobiography writing system that guides users in recalling and organizing their memories while also actively involving them in the storytelling and writing process. The system is designed to support human-AI co-creativity and to align with the key design goals we identified in our formative study. Users interact with StorySage over the course of multiple sessions. During each session, users engage in a flexible-length, dynamic conversation with the system that feels personal to their interests. Following each session, users provide a list of discussion topics to explore in future conversations. They also receive an updated autobiography after every session, which they can review and edit prior to the next session. In the backend, this workflow is managed by a multi-agent framework composed of five specialized agents. The Interviewer and Session Scribe are responsible for facilitating a natural, responsive conversation with the user, while the Planner and Section Writer together outline and incorporate users' memories into their autobiography. Lastly, the Session Coordinator oversees continuity by preparing a guiding agenda for future sessions. StorySage is a software system built for a general population of users interested in writing their autobiography, reflective of the demographic diversity among participants in our user study. We evaluate the effectiveness of StorySage through a simulation-based experiment and a user study with $N = 28$ participants, comparing it against a Baseline.
We then present qualitative findings that provide deeper insight into user experience across both systems. In summary, this paper presents two main contributions: the introduction of StorySage, a novel user-centric system for conversational autobiography writing that supports human-AI co-creativity, and an evaluation of its effectiveness through a real-world user study. # 2 RELATED WORK # 2.1 Autobiographical Memory and Reflection Autobiographical memory, or memory about one's past, plays an important role in how people form their identity, connect with others, and make decisions about their future [22, 24, 44, 80]. Studies in psychology indicate that recalling and documenting these memories can improve mental health [45, 117], strengthen relationships [87, 107], and improve memory function over time [101, 108]. Motivated by the psychological benefits of memory elicitation and organization, many digital tools have emerged—from early "lifelogging" visions [37] to contemporary systems for journaling, reminiscence, and narrative self-reflection [31, 59, 94, 115, 122]. While traditional methods like journaling and memoir writing help individuals organize memories, they often fall short in prompting recall and supporting deeper reflection [79]. To address these limitations, systems like Pensieve [94] use social media-based prompts to enhance daily reminiscence. RetroMind [115] combines conversational and visual cues for reminiscence therapy in dementia care, and SimSensei [86] leverages virtual conversational agents for storytelling. Reflective tools like MindfulDiary [59] build on this work by using conversational agents to support journaling in mental health contexts [79]. To support memory retrieval and deeper reflection, researchers have examined interviewing strategies for autobiography writing. Harding et al.
[46] describe two primary approaches: chronological interviews, which elicit life events in sequence, and narrative-focused interviews, which explore clusters of meaningful experiences in detail. They advocate for a hybrid interviewing style that blends narrative and explanatory elements to encourage users to explore memories they find meaningful. Jiménez and Orozco [57] further propose interviewing techniques that include "grand tour" questions, "counterfactuals," and "comparisons" to encourage interviewees to reflect more deeply on their experiences. # 2.2 Human–AI Co-Creation Recent advances in artificial intelligence (AI) have significantly expanded the capabilities of AI-assisted writing tools, transforming how users engage with writing tasks across academic, professional, and everyday contexts [65]. These tools include predictive text suggestions that enhance productivity [6, 16, 19], interactive systems that support complex editing and revision workflows [1, 23, 26, 32, 58, 62, 63], and even collaborative assistants that facilitate human-AI creative partnerships [9, 17, 42]. These writing assistants have been applied in scientific and academic writing [11, 39, 102, 103], personal and creative expression [66, 89, 110], and professional business communication [19, 27, 90]. Moving beyond general-purpose writing support, many tools have been designed for creative narrative writing [21, 97, 111, 139]. These systems facilitate collaboration between humans and AI to craft fictional stories, scripts, poems, and other forms of content. For example, Story Centaur [110] is an interactive platform that leverages few-shot prompt engineering to enable writers to create customized narratives, while Wordcraft [21] is a collaborative storytelling editor that offers writers control over story continuations and stylistic edits.
Another such system, Dramatron [85], is a screenplay and playwriting assistant that employs hierarchical prompting techniques to iteratively generate narrative content. While AI-assisted writing tools offer significant benefits, recent empirical studies have raised concerns about their impact on authorship, voice, and creativity [8, 40, 60, 61]. For instance, Behrooz et al. [8] found that while writers may be open to receiving AI support, they stress the need for clear boundaries to maintain creative agency. Similarly, Li et al. [71] found that while AI assistance can enhance productivity and confidence, it may also reduce authors' sense of ownership and the diversity of their writing style. These findings highlight the importance of preserving human agency in writing to ensure that writers retain voice and style in their work. Within this context, StorySage positions itself as a human-AI collaborative system intended for autobiographical writing. StorySage is designed to preserve human agency by enabling flexible story navigation and adapting the conversation and narrative in response to user feedback, addressing the concerns raised by [8, 39, 61, 71]. # 2.3 LLM-Powered Multi-Agent Systems Multi-agent systems (MAS) leverage a collection of specialized agents that interact to solve complex tasks, typically by adopting distinct roles, capabilities, and communication protocols [129]. With the advent of large language models (LLMs), researchers have begun to design LLM-powered multi-agent systems in various domains, including writing [64, 102], coding [49, 96, 112, 130], and social simulation [20, 92, 140]. For example, in MetaGPT [49], LLM-powered agents with specialized roles (e.g., product manager, software engineer) collaborate to develop a software application. Accomplishing complex tasks with an LLM-powered multi-agent framework necessitates careful orchestration [114].
Prior studies have proposed different approaches to facilitate this coordination, ranging from predefined sequential interactions [49, 137] to more complex communication patterns [29, 70, 120]. Other work [36, 104] has incorporated a central orchestrator responsible for planning and outlining tasks, with subordinate agents dedicated to executing these predefined plans. This approach has demonstrated a good balance between simplicity and successful task completion. Moreover, an effective memory mechanism is essential for enabling long-term goal navigation in multi-agent systems [38, 138]. For instance, Generative Agents [92] introduced complex memory architectures capable of capturing, summarizing, and reflecting on extensive individual experiences, thus enabling agents to plan and interact over prolonged periods. Other works have used techniques such as retrieval-augmented generation (RAG) [67] and context chunking [137] to enhance memory functionality. In the context of human-AI interactions, robust memory modules are particularly crucial for delivering personalized and contextually relevant discussions [37, 100, 135]. Along these lines, OmniQuery [67] and MAP [64] apply RAG techniques to retrieve relevant memories from a user’s personal archive in order to provide a personalized user experience. Building on these practices, StorySage introduces a multi-agent architecture designed for autobiographical interviewing and writing. The system orchestrates five specialized agents alongside a persistent memory module to address key challenges in MAS design, including maintaining narrative coherence, adapting to personal context, and sustaining user engagement [64, 92, 130, 136]. Agentic frameworks support long-term, personalized interaction, making them well suited to the iterative nature of autobiography writing.
# 3 DESIGN OF STORYSAGE

# 3.1 Formative Study

To inform the design of StorySage, we conducted a pilot study consisting of ten 60-minute conversations with various individuals: a professional biographer, a technology strategist, and users interested in writing about their life stories. Drawing from principles in the Design Thinking Bootleg [47], we explored how people engage in conversations about their memories, their perceptions of AI, and challenges they anticipate in writing their autobiography. Our early discussions with experts emphasized the importance of building trust between interviewer and interviewee by asking both personal, rapport-building questions and reflective questions, which echoes insights from prior work on biographical interviewing techniques [46, 57]. These conversations also highlighted the importance of gathering feedback on early versions of the system to better understand user needs and expectations. Following this guidance, we presented a prototype to a group of 20 adults and observed several takeaways from their interaction with the system. Participants expressed that they valued the mental stimulation of memoir writing and desired more natural, conversational, and timely interactions. Moreover, they observed hallucination in the final autobiography and felt that it lacked their authentic voice. These insights guided several key design aspects of StorySage, which we outline below.

Figure 2: Overview of the StorySage multi-agent architecture. (User Interview) In this phase, the Interviewer engages the user in a conversation to help them share their memories, while the Session Scribe works in the background to log key details from the conversation—including answered questions and shared memories—and generates follow-up prompts. (Biography Writing) After the user concludes the interview, the Planner analyzes the existing biography structure and new memories from the conversation to formulate a set of structured update plans.
Subsequently, the Section Writer uses these plans to write narrative content in the user’s autobiography. (Subsequent Session Preparation) Finally, the Session Coordinator begins planning for the next session by designing a session agenda.

# 3.2 Design Goals

DG 1: Providing Human Agency Expert interviews highlight the importance of user-led conversations and the value of asking follow-up questions that enable users to recollect their memories. Similarly, feedback from our focus group revealed that users want to be involved in the writing process, both as an exercise and as a means of personalization. Taken together, these insights suggest that an effective system should empower users to direct the conversation while supporting them in recollecting and reflecting on their memories. Moreover, the system should allow users to iteratively contribute to their narrative during the writing process. This design differentiates StorySage from prior work in autobiography writing [30], which follows a fixed conversational framework and narrative outline.

DG 2: Fostering Natural Conversation An effective system should be adaptable and maintain a natural conversation flow by asking thoughtful and relevant questions. Insights from our focus group reveal that some users may struggle to answer overly reflective or abstract questions, while our literature review cautions against relying solely on surface-level questions [46]. These findings point to the need for balance. Experts similarly emphasize that conversations which begin by building familiarity and trust naturally lead individuals to reflect and share deeper stories. To foster deeper and more meaningful conversations tailored to the user, the system must be capable of remembering prior conversations and maintaining coherence across multiple sessions.
DG 3: Preserving Narrative Integrity A recurring theme in our early exploratory interviews was concern over an AI system introducing hallucinations or misrepresenting users’ memories. These concerns reflect a broader expectation: when users share their life stories, they expect the system not only to listen, but also to remember and accurately reflect the story in their autobiography. To this end, an autobiography writing system should accurately represent the user’s memories and adequately capture the content they share in their conversation with an appropriate level of detail. Building on (DG 1), such a system should also provide users with mechanisms to incorporate their voice and correct inaccuracies in the autobiography, ensuring that narrative integrity is preserved.

DG 4: Maintaining Responsiveness Noting that some focus group participants felt that long response times disrupted conversation flow, we believe a well-designed system should both propose follow-up questions and generate the autobiography in a timely manner. Amidst the complexities of designing StorySage to be user-driven (DG 1), support natural conversation (DG 2), and generate an accurate and complete autobiography (DG 3), it is important that users do not perceive the system to be sluggish. Maintaining system responsiveness is essential for keeping users engaged with StorySage and enhancing their overall user experience.

# 4 STORYSAGE

We begin by illustrating StorySage through the experience of a hypothetical user, Nick, who is interested in documenting his life story. Nick begins a conversation with StorySage, during which the system asks questions about his childhood. As he talks, memories surface—like composing music as a child and meeting his best friend at basketball practice. After a fruitful conversation, Nick concludes the session and indicates a desire to discuss more about his journey into music in a future session.
Afterwards, he receives an initial draft of his autobiography, which he reviews and edits ahead of the next session. This flow defines a single session, described in Figure 1. Over time, with each conversation, Nick gradually builds a rich autobiography with StorySage. Underlying Nick’s experience with StorySage is a three-stage system design: (1) Interview Session, (2) Biography Writing, and (3) Subsequent Session Preparation. As shown in Figure 2, these components perform fundamentally different tasks, though they are inherently connected. This structure naturally lends itself to a multi-agent architecture in which responsibilities are distributed across specialized agents—namely, the Interviewer, Session Scribe, Planner, Section Writer, and Session Coordinator. Because each agent executes different tasks but their roles are inherently connected, the agents rely on shared data structures to coordinate their actions, as illustrated in Figure 2. This modularity is particularly valuable because developing an autobiography is an iterative process that demands a well-organized system capable of adapting and scaling as the autobiography evolves over time. In this section, we describe the architecture and implementation of StorySage, guided by our design goals in Section 3.2.

# 4.1 Interview Session

At the core of StorySage is the interview session—a space for the user to share memories that ultimately form their autobiography. The interview session is led by the Interviewer Agent, who is responsible for facilitating a natural and personalized conversation with the user (DG 2). To foster a sense of intimacy, the Interviewer utilizes a memory bank and a question proposal mechanism to ask follow-up questions that align with the user’s interests (DG 1). Playing a key role in the question proposal system is the Session Scribe Agent, who listens to the conversation in the background and suggests follow-up questions the Interviewer can ask the user.
By offloading the responsibility of question generation from the Interviewer—among other responsibilities—StorySage ensures a smooth interaction between the user and Interviewer (DG 2).

4.1.1 Interviewer Agent. Unlike existing work that structures the conversation between user and system around a list of fixed seed questions, StorySage is designed with an interface that allows users to steer the conversation by skipping questions the Interviewer proposes or directly suggesting topics they want to discuss. To support this flexibility, the Interviewer’s sole responsibility is to facilitate a natural and responsive conversation aligned with the user’s interests. Therefore, we prompt the Interviewer to propose contextually appropriate questions by monitoring signals of user engagement with the current topic. High-engagement responses signal the Interviewer to ask deeper follow-up questions, while low-engagement answers—like unenthusiastic responses or skipped questions—signal the need for a change of topic. To ensure the Interviewer can access a diverse set of questions across different topics and levels of depth, it reads from a dynamic session agenda. The session agenda contains various conversation starters suggested by the Session Coordinator from the previous session, as well as deeper follow-up questions that the Session Scribe finds pertinent to the current conversation. The ability to ask follow-up questions from the session agenda reduces the need for synchronous question generation, which can be time-consuming for the Interviewer and introduce latency in the user interface (DG 4). At the same time, when no suitable questions are available, the Interviewer is instructed to propose its own follow-up questions to keep the conversation flowing naturally. To establish a more human-like connection with a user, the Interviewer should have a memory of prior interactions with them.
Therefore, we design the Interviewer to read from two memory modules: a short-term memory of the session chat history and a long-term memory bank, as shown in Figure 2. By invoking a recall tool, the Interviewer can query the memory bank to retrieve information shared in prior conversations. This enables the Interviewer to respond thoughtfully in follow-up questions by drawing connections between meaningful memories. For example, in response to the Interviewer’s opening question in Figure 11, our hypothetical user Nick shares “I was seven when my mom introduced me to my first instrument.” The Interviewer can then query the memory bank using phrases such as “Nick’s first instrument” and “Nick’s mom” to retrieve memories like “Nick’s first instrument was a piano.” This retrieval is performed through a similarity search between the query and the stored memories in the memory bank [69, 109]. The Interviewer can then respond with “I remember you mentioned that your first instrument was a piano. Did your mother teach you how to play the piano, or did you take lessons?”

4.1.2 Session Scribe Agent. The Session Scribe acts as an assistant to the Interviewer that listens to the conversation and performs several bookkeeping tasks behind the scenes. It runs concurrently, so the user can continue interacting with the system even while the Session Scribe handles a number of tasks in the background (DG 4). This design offloads the work of documenting the user’s memories and proposing follow-up questions, enabling the Interviewer to focus solely on facilitating a fluid and responsive conversation. As illustrated in Figure 11, the Session Scribe updates several data structures in parallel. After hearing Nick share his journey into music, the Session Scribe does the following:

Figure 3: Similar Question Detection Mechanism. Step 1: retrieve the top K similar previously answered questions. Step 2: decide whether the proposed question is a duplicate.
This mechanism is utilized by the Session Scribe and Session Coordinator. These agents first (1) retrieve the top K = 3 most similar questions from the question bank and then (2) compare the proposed question against the retrieved questions to determine whether the proposed question is repetitive.

(1) Memory Decomposition: The Session Scribe decomposes Nick’s answer into discrete memories, annotates each with metadata (e.g., event date, location, people involved), and stores them in the memory bank.

(2) Question Bank Management: The Session Scribe creates a list of questions that can be implicitly answered from Nick’s response and adds these to a question bank. This allows future questions to be compared against those previously asked to avoid repetition and maintain natural conversation flow (DG 2).

(3) Updating Session Agenda: The Session Scribe records Nick’s response and marks the Interviewer’s question as answered in the session agenda.

(4) Follow-Up Question Proposal: The Session Scribe proposes follow-up questions based on the memory Nick shared. To ensure variety and relevance, we design a mechanism for detecting and filtering repeated follow-up questions. Each question undergoes a similarity check against the question bank, as illustrated in Figure 3; only novel questions are added to the current session agenda. The Session Scribe proposes fact-gathering questions to build initial rapport and gather essential context about a memory, and deeper questions to explore connections between multiple memories and the underlying themes they reveal [46, 57].

# 4.2 Biography Writing

The purpose of the biography writing team is to incorporate the user’s memories from the interview session into their autobiography. This process is managed by a dedicated writing team composed of a Planner Agent and a Section Writer Agent.
As shown in Figure 4, the Planner first generates a set of guidelines for updating the biography using the existing biography structure and the new memories it obtains from the interview session. The Section Writer then executes these plans in parallel, either by adding new sections or revising existing content. We design these agents to periodically update the autobiography when the number of new memories during the interview session reaches a certain threshold. This prevents a large backlog of memories at the end of the session, which would otherwise prolong writing time (DG 4). After each session, StorySage provides users with a draft of their evolving autobiography so they can shape their story in real time by editing the narrative structure and content (DG 1). Figure 13 provides an example of the biography editing interface.

4.2.1 Planner Agent. The Planner follows a structured approach that involves (1) memory grouping, (2) update plan generation, and (3) coverage verification. Figure 4 outlines these in detail. After grouping related memories, the Planner generates an update plan for each group, which is handed off to the Section Writer to add to the autobiography. A feedback loop then ensures that all memories shared by the user are integrated into an update plan, and ultimately the autobiography (DG 3). We use a similar mechanism to ensure the user’s edits to the autobiography are properly addressed.

4.2.2 Section Writer Agent. The Section Writer is responsible for writing stories for the autobiography using the update plans provided by the Planner. These update plans contain the relevant memory identifiers and the user’s original dialogue that describes these memories. This focused context window allows the Section Writer to write a narrative that accurately reflects the user’s memories and covers all memories in the update plan (DG 3).
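The Planner's coverage-verification loop described above can be sketched as follows. This is a minimal illustration, not the actual implementation: `generate_plans` stands in for the LLM-backed plan-generation step, and all names and data shapes are hypothetical.

```python
def plan_with_coverage(memories, generate_plans, max_rounds=3):
    """Regenerate update plans until every memory id is referenced by some
    plan, or a round limit is hit (the Planner's coverage feedback loop).

    `memories` is a list of dicts with an "id" key; `generate_plans` maps a
    list of memories to plans of the form {"memory_ids": [...]}.
    """
    uncovered = {m["id"] for m in memories}
    plans = []
    for _ in range(max_rounds):
        if not uncovered:
            break  # every memory is covered by some plan
        # Re-plan only for the memories that are still uncovered.
        new_plans = generate_plans([m for m in memories if m["id"] in uncovered])
        plans.extend(new_plans)
        covered = {mid for p in new_plans for mid in p["memory_ids"]}
        uncovered -= covered
    return plans, uncovered
```

Returning the residual `uncovered` set makes the loop's outcome observable: an empty set means every shared memory made it into an update plan (DG 3).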
The system then writes to different sections of the autobiography concurrently, so users experience minimal delay in receiving their updated autobiography after each session (DG 4). Like the Planner, the Section Writer is also triggered when users edit the framing of their autobiography.

# 4.3 Subsequent Session Preparation

After the user ends the interview session, the biography writing team writes any remaining memories to the autobiography. This marks the beginning of a future session planning phase, a process led by the Session Coordinator Agent. This agent is responsible for preparing the detailed session agenda for the subsequent session, with guiding questions that align with the user’s interests. To identify these conversational areas, users are presented with a topic selection modal listing talking points they expressed interest in during previous sessions (DG 1). By discussing these topics in future sessions, the Interviewer can engage the user in a more personalized conversation that encourages their participation (DG 2). This workflow differs from existing work that initiates new conversations with predefined questions, a strategy that lacks personalization and is limited by a finite number of questions. Preparation of this agenda concludes the current session with StorySage and sets the stage for the next session.

4.3.1 Session Coordinator Agent. As illustrated in Figure 4, the Session Coordinator collects a list of follow-up questions from (1) new questions it generates using user-selected topics in the topic selection modal, (2) unanswered questions from the previous session agenda, and (3) follow-up questions proposed by each agent in the biography writing team.
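Combined with the similar-question detection of Figure 3, the agenda assembly above can be sketched as follows. This is a minimal illustration only: bag-of-words cosine similarity stands in for the system's embedding-based similarity search, a fixed threshold stands in for the repetitiveness judgment, and all function names are hypothetical.

```python
import math
from collections import Counter

def cosine_sim(q1, q2):
    """Bag-of-words cosine similarity (a stand-in for embedding similarity)."""
    v1, v2 = Counter(q1.lower().split()), Counter(q2.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1)
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

def build_agenda(candidates, question_bank, k=3, threshold=0.8):
    """Keep only candidate questions that are not near-duplicates of the
    top-k most similar questions already in the bank (cf. Figure 3)."""
    agenda = []
    for q in candidates:
        top_k = sorted(question_bank, key=lambda b: cosine_sim(q, b),
                       reverse=True)[:k]
        if all(cosine_sim(q, b) < threshold for b in top_k):
            agenda.append(q)  # novel question: add to the session agenda
    return agenda
```

For example, with `question_bank = ["What was your first instrument?"]`, a near-duplicate candidate such as "What was your very first instrument?" is filtered out, while an unrelated question like "Who is your best friend?" is kept.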
The Session Coordinator utilizes the recall tool to identify gaps in the memory bank and determine which questions are likely to offer novel insights that can enrich the autobiography. Candidate questions undergo similarity verification against the question bank (Figure 3), and redundant questions are either revised or abandoned.

Figure 4: Overview of the (1) Biography Writing and (2) Subsequent Session Preparation phases. The Planner groups memories collected from session T into biography update plans, with a feedback loop flagging uncovered memories, to produce the biography at session T+1; the Session Coordinator then assembles the session agenda for session T+1 from proposed follow-up questions and user-selected topics.

# 4.4 Design Limitations

We acknowledge that our design lacks rigorous ablation studies to validate each architectural choice.
Additionally, we recognize that other multi-agent frameworks may also be viable; as such, the system should be viewed as one possible approach to managing the interview and autobiography writing processes. Key design choices—including conversation navigation, memory management, and planning strategies—may have room for improvement. As with many LLM-based systems, the performance of our system is sensitive to variations in prompts and the underlying language model. Future work can measure the effect of individual design choices and optimize system components, prompts, and models.

# 5 TECHNICAL EVALUATION

To evaluate StorySage’s performance over extended use, we conduct a series of experimental studies in which we observe user proxies interacting with both StorySage and a Baseline system over a number of interview sessions. The goal of this study is to assess the basic functionality and behavior of both systems across several key areas prior to user testing, and to evaluate system performance under different underlying language models.

# 5.1 Baseline System

To understand whether StorySage’s multi-agent architecture contributes to better system performance, we construct a Baseline system by drawing inspiration from prior work in autobiography writing. We then ablate features critical to interview quality, user autonomy, and biography writing. Following GuideLLM [30], we design the Baseline with an Interviewer agent equipped with a question outline proposed in "The Life Story Interview" protocol [81]. This outline includes a fixed list of seed questions about reflective life topics, providing strong conversation starters and enabling high-level topic and context navigation. Question proposal is done solely by the Interviewer. It is instructed to think step-by-step [121] by first selecting a topic from the outline relevant to the user’s response; it then formulates a question that suits the current conversational context.
To equip the Baseline with long-term memory across sessions, we include a summary of prior conversations in the prompt of the Interviewer, consistent with the approach used in GuideLLM. This design allows us to test whether a concurrent multi-agent architecture improves system responsiveness (DG 4) and conversational quality (DG 2) through better question proposal, reduced question repetitiveness, and lower question proposal latency. In addition to modifying the Interviewer, the Baseline system omits the next-session topic selection feature, the dynamic session agenda, and the biography editing mechanism. For fairness of comparison, we still provide users with their autobiography after each session. By ablating these features, which provide user autonomy over directing the conversation and narrative flow (DG 1), we can measure how the interactivity and personalization in the design of StorySage influence user engagement and overall satisfaction.

Figure 5: Performance comparison metrics between StorySage and Baseline across three underlying language models (DeepSeek-V3, GPT-4o, and Gemini-1.5-Pro). Lines represent average performance across four simulated user agents, with the X-axis indicating session number (1 to 10). Across all models, StorySage consistently outperforms the Baseline. Biography Coverage (top) reflects StorySage’s ability to maintain high memory coverage across many sessions, while Number of Memories (bottom) reveals both systems’ ability to extract many memories. In the bottom plots, dashed lines indicate total memories stored (M), and solid lines represent those referenced in the biography (B).

In the Baseline, we consolidate the Planner and Section Writer into a single Writer agent, thus removing the planning module and iterative feedback loops. To ensure fair comparison and prevent context overload [74] in the Baseline, we chunk the newly collected memories into groups of ten during the interview session.
At the end of the session, the Writer sequentially integrates these memory chunks into the autobiography. Moreover, we provide the Baseline access to the same biography writing tools as StorySage, so the Writer can modify specific sections of the autobiography, rather than having to regenerate the full narrative, ensuring fair latency comparison. The Baseline system also follows the same writing guidelines as StorySage, which instruct the system to link narrative sentences to memory identifiers and not to hallucinate stories beyond the memories the user provides. By omitting specialized agents and a structured update design from the Baseline system, while preserving its writing capabilities, we can compare performance between the Baseline and StorySage to measure the effect of our multi-agent design on memory coverage and accuracy (DG 3). # 5.2 Experimental Setup 5.2.1 User Proxies. To simulate realistic interview sessions, we design an LLM-powered User Agent tasked with role-playing a human participant [30, 92, 135]. Each user agent is provided with a real individual’s biography. To encourage the user agent to share a variety of memories, we use a pre-processing step to extract high-level life chapters from their biography. A different chapter is provided as context to user agents at the start of each new interview session, mimicking how real users can choose different life topics to discuss with StorySage. We select four Wikipedia biographies of individuals from diverse backgrounds: Paul Coates (activist, male) [127], Jennifer Doudna (biochemist, female) [125], Esther Duflo (economist, female) [124], and Maggie Rogers (musician, female) [126]. 5.2.2 Procedure. All user proxies interact with both systems for 10 interview sessions, with each session consisting of 20 questions. This is repeated three times per proxy, each time using a different model: GPT-4o [54], Gemini-1.5-pro [113], and DeepSeek-V3 [73]. The user proxies are powered by GPT-4o mini [91]. 
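The simulation procedure above can be summarized as a driver loop. This is a structural sketch only: `ask_question` and `proxy_answer` stand in for the actual LLM-backed Interviewer and user proxy, and all names and constants are illustrative.

```python
SESSIONS = 10
QUESTIONS_PER_SESSION = 20

def run_simulation(chapters, ask_question, proxy_answer):
    """Drive one user proxy through all interview sessions. Each session is
    seeded with a different life chapter extracted from the proxy's
    biography, mimicking a real user choosing a new topic to discuss."""
    transcript = []
    for session in range(SESSIONS):
        chapter = chapters[session % len(chapters)]  # new chapter per session
        for _ in range(QUESTIONS_PER_SESSION):
            question = ask_question(transcript)
            answer = proxy_answer(question, chapter)
            transcript.append((session, question, answer))
    return transcript
```

In the full setup, this loop would be repeated for each of the four proxies and each of the three underlying models.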
5.2.3 Metrics. We define quantitative metrics to evaluate the basic functionality of StorySage and the Baseline [30]. These metrics were chosen because they provide objective indicators of system performance and reliability, which we find important prior to conducting user testing.

Latency Metrics

• Question Proposal Latency (sec): The time required for the Interviewer to propose a follow-up question.
• Autobiography Update Latency (sec): The time required for StorySage to generate the autobiography after the end of a session.

Biography Evaluation Metrics

• Memory Count (#): The total number of memories collected from all interview sessions.
• Biography Coverage (%): The proportion of memories from the memory bank (𝑀) that are referenced in the biography (𝐵), calculated as

$$ \frac{\#\,\text{memories covered in } B}{\#\,\text{memories stored in } M} $$

• Biography Accuracy (%): We use an LLM-as-a-judge approach (details in Appendix B.5) to calculate the percentage of total claims in the autobiography that are substantiated by the user’s original responses.

Table 1: System performance metrics for StorySage and Baseline across three different underlying models. Metrics include: (Q. Lat.) question proposal latency, (Bio. Lat.) biography update latency, (Mem.) the total number of memories collected across all sessions, (Bio. Cov.) biography coverage, and (Bio. Acc.) biography accuracy. All metrics are averaged across 10 sessions and 4 simulated users, except (Mem.), which is averaged over users only.

# 5.3 Results

Table 1 presents a performance evaluation of StorySage and the Baseline. To understand how the behavior of each system evolves over time, Figure 5 illustrates the progression of the key quantitative metrics across multiple interview sessions. We briefly discuss the high-level takeaways below.
• Latency Metrics: Across all language models, StorySage consistently proposes follow-up questions 2–4 seconds faster than the Baseline. However, StorySage is slower at biography generation when DeepSeek-V3 and GPT-4o are used as the underlying model, while both systems are equally fast with Gemini-1.5-Pro. GPT-4o offers the fastest performance overall in question and biography generation time.

• Biography Metrics: StorySage extracts more user memories from the conversation than the Baseline with Gemini-1.5-Pro, but a roughly equivalent number with GPT-4o and DeepSeek-V3. Biography coverage is highest with Gemini-1.5-Pro, and StorySage consistently achieves higher coverage than the Baseline. Both systems maintain high biography accuracy scores across all models.

Considering the importance of responsiveness for maintaining a smooth user experience and the need for strong memory extraction and coverage, we select GPT-4o as the underlying model for both the Baseline and StorySage. We anticipate that latency and biography quality will play a critical role in user testing, and GPT-4o offers the best balance in these areas in our experiments.

# 6 USER EVALUATION

While simulations offer useful insights into system performance, they cannot fully capture the holistic user experience, which is central to the evaluation of a user-facing product like StorySage. To address this, we conduct a user study framed around four research questions (RQ1–RQ4): (1) users’ sense of agency in directing StorySage; (2) users’ perception of StorySage’s conversational ability; (3) users’ satisfaction with StorySage’s autobiography; and (4) users’ perception of the responsiveness of StorySage.

# 6.1 Ethics and Research Disclosure

This study was approved by our institution’s IRB. All participants provided informed consent prior to their involvement. Participants were compensated at a rate of $20 per hour.
All collected data were treated as confidential and used solely for research purposes [7].

# 6.2 Experimental Setup

Following standard experimental practice, we assign participants to two equal-sized groups: a control group and a treatment group. Participants in the control group assess both the Baseline system and StorySage, while the treatment group evaluates only StorySage. To prevent score contamination, those in the control group evaluate their experience with the Baseline prior to engaging with StorySage. This approach enables both fair between-group comparisons and within-group analysis to compare how participants in the control group evaluate each system. We primarily rely on results from the between-group analysis, as this design minimizes the risk of carryover, fatigue, and learning effects that could confound results [13]. However, the within-group analysis adds a complementary dimension that enables direct comparison of StorySage and the Baseline within the same group of users and offers deeper insight into individual-level effects [25, 84, 105].

Figure 6: The demographics of participants across the control and treatment groups (each n = 14) in terms of age, gender, and prior AI (ChatGPT, Siri, etc.) usage frequency. In aggregate, both groups display similar participant profiles across these three dimensions.

6.2.1 Procedure. In line with our methodology, participants in the control group spend 45 minutes with each system, while those in the treatment group engage with StorySage for 45 minutes. Each 45-minute period is structured into three parts: a 5-minute introductory video, a 30-minute interaction period, and a 10-minute evaluation period. After giving verbal consent to the IRB-approved study conditions and completing a demographic questionnaire, participants watch an introductory video that describes the features of the first system they will interact with. They are then directed to start the first of two 15-minute interviews.
Throughout the process, we monitor the interview to ensure a smooth experience, guiding participants to end each interview and read their initial autobiography. This is repeated for a second session, culminating in the participants reading their final autobiography. Participants then complete a questionnaire, where they provide numerical scores to 7-point Likert-scale questions assessing their experience with the system. These questions are provided in Appendix A.1. Individuals in the control group repeat this process for StorySage. 6.2.2 Participants. We recruit 28 participants (15 female, 13 male) from various professional backgrounds on Upwork, a freelance recruitment platform [116]. Notably, a disproportionately high number of participants come from backgrounds in the humanities and computer science, reflecting the common freelance work found on Upwork. We recognize that participants from these backgrounds may offer more critical feedback on AI writing systems, so we stratify participants across gender, age, and professional background to assess how a diverse user base evaluates StorySage. These demographics are described in Figure 6 for each experimental group. It is important to note that Upwork's user base tends to skew younger, with $82\%$ of participants ranging in age from 18 to 54. Participants were asked about their familiarity with AI tools; individuals in the control group reported $M = 4.21$, $SD = 0.97$ (on a 5-point scale), while the treatment group reported $M = 3.86$, $SD = 1.03$. 6.2.3 Data Analysis. We conduct statistical significance tests to assess whether participants rate StorySage more favorably than the Baseline across questionnaire items, both in between-group and within-group settings, reflecting our hypothesis that StorySage provides a better user experience across our key design dimensions.
Given the ordinal nature of Likert-scale data and the small sample size, we utilize a Wilcoxon rank-sum test [88] to evaluate our hypothesis in the between-group setting. For our within-group analysis of the control group, we utilize a Wilcoxon signed-rank test, a commonly used non-parametric test for comparing paired samples [48]. These tests are applied to each question in the questionnaire, and we report the $p$-values along with the difference in median score, a measure robust to outliers in small sample sizes. # 6.3 Between-Group Analysis Figure 7 displays the distribution of scores given to the Baseline and StorySage across each survey item in the between-group setting, where we compare the control group's evaluation of the Baseline against the treatment group's evaluation of StorySage. Figure 9 provides a bar graph visualization of the scores. The questionnaire consists of 13 Likert-style questions: three questions designed to evaluate each research question, and one final question that assesses participants' overall experience (see Appendix A.1 for question formulation). Overall, users experience a higher level of satisfaction interacting with StorySage than with the Baseline (Q13: median difference $= 1$, $p = 0.01$). One participant described how interacting with StorySage felt "like talking to a new friend" (gender: F, age: 45-54), while another said "it wrote a better biography for me than I could do myself" (gender: F, age: 45-54). These responses, along with others, highlight key qualitative themes that we discuss below. # 6.3.1 System Effect on User Autonomy (Ability to Guide StorySage) [RQ1]. Participants who interacted with StorySage reported a statistically significantly higher level of overall autonomy compared to those who interacted with the Baseline (Q12: median difference $= 0.5$, $p = 0.035$).
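The significance tests described in Section 6.2.3 can be sketched as follows, using SciPy's implementations on hypothetical 7-point Likert scores (the study's actual responses are not reproduced here); `mannwhitneyu` implements the Wilcoxon rank-sum test:

```python
# Sketch of the between-group and within-group tests from Section 6.2.3 on
# hypothetical Likert scores; the values below are illustrative, not study data.
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

# Between-group: control group rates the Baseline, treatment group rates StorySage.
baseline_scores = np.array([4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 6, 4, 5, 4])
storysage_scores = np.array([6, 5, 6, 7, 5, 6, 6, 5, 7, 6, 6, 5, 6, 6])

# Wilcoxon rank-sum (Mann-Whitney U), one-sided: is StorySage rated higher?
_, p_between = mannwhitneyu(storysage_scores, baseline_scores, alternative="greater")
median_diff = np.median(storysage_scores) - np.median(baseline_scores)

# Within-group: the same control-group users rate both systems (paired samples),
# so the Wilcoxon signed-rank test applies.
paired_baseline = np.array([4, 5, 3, 6, 4, 5, 4])
paired_storysage = np.array([6, 5, 5, 7, 5, 6, 6])
_, p_within = wilcoxon(paired_storysage, paired_baseline, alternative="greater")
```

Each questionnaire item is analyzed in this manner, reporting the $p$-value together with the median score difference.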
StorySage allows users to better guide the conversation flow during an interview session. As indicated by Figure 7, participants reported an ability to more meaningfully guide the interview session (Q11: median difference $= 2$, $p = 0.001$). This can be attributed to features in StorySage that allow users to direct the conversation (e.g., next session topic selection), combined with the Interviewer's ability to respond to subtle user cues—like brief or unanswered responses—with contextually appropriate follow-up questions. Participants reported "I really liked being able to indicate easily what questions I did not feel like answering" (gender: F, age: 25-34), and another "appreciated the option to select topics for future sessions" (gender: F, age: 45-54). While some participants tried to exert a similar level of control over the Baseline, one user "felt I could not 'guide' the questions a certain way, that I was merely responding to the questions" (gender: F, age: 55-64). Figure 7: Distribution of 7-point Likert scores (strongly disagree to strongly agree) for the Baseline and StorySage on each questionnaire item, with median score differences (StorySage minus Baseline): Q1 Autobiography Captured My Information ($p = 0.003$), Q2 Autobiography Was Accurate ($p = 0.327$), Q3 Writing Style Was Effective ($p = 0.095$), Q4 Interviewer Questions Flowed Naturally ($p = 0.036$), Q5 Interviewer Showed Understanding and Empathy ($p = 0.029$), Q6 Topics Had Natural Continuity Across Sessions ($p = 0.018$), Q7 Autobiography Updates Were Timely ($p = 0.133$), Q8 Follow-up Questions Were Proposed Quickly ($p = 0.199$), Q9 System Was Responsive ($p = 0.172$), Q10 Could Guide Autobiography Writing ($p = 0.004$), Q11 Could Guide Interview Direction ($p = 0.001$), Q12 Had Autonomy Over the System ($p = 0.035$), Q13 Overall Experience Was Positive ($p = 0.010$). Regarding the Baseline's limited ability to change the conversation based on contextual cues, one user noted "I expected the system to ask follow-up questions, but I don't feel like it ever did" (gender: M, age: 25-34). Meanwhile, a participant in the treatment group described how "StorySage picked up on my desire to not continue on a topic from not answering more on a question" (gender: M, age: 55-64). StorySage allows users to better guide the writing of their autobiography. Our study found that the writing process with StorySage is significantly more collaborative than that of the Baseline system (Q10: median difference $= 1$, $p = 0.004$). Users attributed this to their ability to edit the biography periodically—a feature that stems from our design choice to update the biography after each session. They appreciated being able to interact with the biography after each session, explaining "I really like how it was able to quickly start creating the autobiography" (gender: F, age: 25-34). Many users took advantage of their ability to progressively edit their biography after each session to incorporate their voice and writing style, citing "the option to edit the product is great" (gender: F, age: 25-34). # 6.3.2 Perception of System Conversational Ability [RQ2]. Users indicate that StorySage facilitates a more natural conversation than the Baseline (Q4: median difference $= 1$, $p = 0.036$). Participants experienced more natural conversation with StorySage, which can be attributed to the Interviewer's ability to ask more contextually relevant follow-up questions in an intimate manner.
A participant described how "StorySage was responsive to my answers and I enjoyed the specificity of the questions...in relation to my previous responses" (gender: F, age: 18-24). Another explained how "StorySage picked up on my desire to not continue on a topic by me not answering more on a question" (gender: M, age: 55-64). We attribute this to StorySage's user-centric design, which encourages the Interviewer to analyze engagement before deciding whether to shift topics or probe deeper. This approach differs from the conversational style of the Baseline, which uses predefined seed questions similar to those found in existing work [30]. A user explained how "the conversation [with the Baseline] felt less like a conversation and more like a rigid set of prompts [that] continue regardless of response" (gender: M, age: 25-34). We also find that it is more difficult for the Baseline to pick up on the emotion in a user's answer and respond appropriately (Q5: median difference $= 0.5$, $p = 0.029$). For example, a participant in the control group reports that the Baseline "didn't include some of the emotions I had stated I was feeling during those particular moments the Interviewer asked for" (gender: F, age: 45-54). However, user feedback also suggests that StorySage occasionally asks too many follow-up questions focused on a single topic. When this happens, participants change the conversation topic, which further underscores the importance of providing them greater control and autonomy. For example, a participant explained "I had to force StorySage to change topics...because it asked too many questions about a certain topic" (gender: F, age: 18-24). We elaborate on this limitation in Section 7.2. StorySage is better at maintaining natural continuity between successive interview sessions (Q6: median difference $= 1$, $p = 0.018$). This can be attributed to StorySage's next-session planning module.
A user explicitly mentioned how they "appreciated the option to select topics [for the next session]" (gender: F, age: 45-54), while others appreciated starting the second session with topics of their choice when prompted by the Interviewer's open-ended opening question. On the other hand, multiple participants in the control group noticed that the Baseline tended to ask repeated questions across the two sessions. This is likely due to the Baseline system's weaker inter-session memory, which relies on a summary of past conversations [30]. In contrast, StorySage has a question bank and employs a similarity-based verification mechanism to prevent redundant questions. "During the second session ... [the Baseline] did ask very similar questions that it had asked before. This repetitiveness felt a bit unnatural" (gender: F, age: 18-24). # 6.3.3 User Satisfaction with the Autobiography [RQ3]. Users were moderately more satisfied with StorySage's autobiography. StorySage provides a more complete autobiography that captures users' memories more comprehensively than the Baseline (Q1: median difference $= 1$, $p = 0.003$). This outcome is likely due to the design of the Planner and Section Writer, which work together to ensure high biography memory coverage through focused context windows and content verification loops. However, we also observed that content was occasionally repeated within StorySage's autobiographies, suggesting a potential overuse of the feedback mechanism. We discuss this limitation in more detail in Section 7.2. For example, a participant noted that "there was some repetition of information about my teenage years" (gender: F, age: 65+). Nonetheless, participants in the treatment group generally expressed satisfaction with the completeness of their autobiographies, as reflected in the number of high scores given to Q1.
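The question-bank redundancy check described in Section 6.3.2 can be sketched as follows. This is a simplified stand-in using bag-of-words cosine similarity; StorySage's actual verification mechanism is not specified at this level of detail (it may, for instance, use embeddings), and the threshold value is illustrative:

```python
# Simplified sketch of similarity-based redundant-question filtering.
# Bag-of-words cosine similarity stands in for whatever representation the
# real system uses; the 0.8 threshold is an illustrative choice.
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_redundant(candidate: str, question_bank: list[str], threshold: float = 0.8) -> bool:
    """Reject a candidate question if it is too similar to one already asked."""
    return any(cosine_sim(candidate, asked) >= threshold for asked in question_bank)

bank = ["What was your childhood home like?", "Tell me about your first job."]
print(is_redundant("What was your childhood home like?", bank))   # near-duplicate
print(is_redundant("Who influenced you most growing up?", bank))  # novel question
```

Filtering candidate questions against the bank of already-asked questions in this way is what prevents the cross-session repetition that participants observed in the Baseline.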
We notice a similar positive sentiment among participants in the control group, with a participant reporting "I appreciated that [the Baseline] didn't write every little aside and joke I made" (gender: F, age: 45-54). However, high memory coverage tends not to scale with session length in the Baseline, as illustrated by our long-running simulations in Table 1. We attribute this difference to the Baseline's lack of the multi-agent design and verification mechanism that enable StorySage to maintain high coverage as session length increases. Table 2 presents a high-level, quantitative comparison of the autobiographies generated by StorySage and the Baseline for users in the treatment and control groups, respectively. On average, autobiographies produced by StorySage are nearly twice as long as those of the Baseline, cover nearly three times as many memories, and include a greater number of sections. These differences suggest that StorySage may be able to write more comprehensive and organized narratives, likely due to its planning and verification mechanisms. However, these results cannot be attributed solely to the biography-writing team, as the increase in biography length may also stem from StorySage's ability to ask fewer repetitive questions and foster more natural conversation, which helps elicit more memories. We find no significant difference in the accuracy of the biographies produced by the two systems (Q2: median difference $= 0.5$, $p = 0.327$). High evaluation scores suggest that both systems are capable of generating a narrative that accurately reflects the content of the user's memories. One user shared "[StorySage] did a good job understanding what I was saying and keeping everything in the correct context" (gender: F, age: 55-64). We hypothesize this stems from our design choice to provide the user's original memory—as they expressed it during the conversation—to both systems.
Notably, as shown in Table 2, StorySage biographies are substantially longer than those produced by the Baseline. Despite this increase in length, participants did not report reduced accuracy, suggesting that the additional content is not the result of hallucination or irrelevant elaboration, but rather reflects more complete coverage of the user's shared experiences. However, in a few cases, participants noted that both systems tended to overstate the emotional tone of certain memories, which some users perceived as a form of inaccuracy. Users prefer the writing style of the autobiography produced by StorySage somewhat more than that of the Baseline (Q3: median difference $= 0.5$, $p = 0.095$). A participant in the treatment group explained "it wrote a better biography for me than I could do myself" (gender: F, age: 45-54). With a Section Writer solely focused on generating narrative content from the Planner's update plans, StorySage can spend more resources to produce an eloquently written autobiography [106, 119, 132]. However, qualitative feedback suggests that participants across both groups prefer different writing styles—some wanted a professional tone, while others preferred a narrative that retained their voice. For example, one user who spoke to the system in short, concise sentences felt "the tone of writing didn't feel like me" (gender: F, age: 25-34). This is closely related to the system's tendency to exaggerate the emotional tone of the narrative, as noted above. We expand on these limitations of StorySage in Section 7.2. Table 2: Biography statistics for StorySage and the Baseline. Metrics represent mean $\pm$ standard deviation across users in each group. # 6.3.4 Perception of System Responsiveness [RQ4]. Participants felt that StorySage and the Baseline system exhibited similar levels of responsiveness (Q9: median difference $= 0$, $p = 0.172$). Users did not report a statistically significantly faster question proposal time with StorySage. Despite StorySage's empirically faster question proposal speed (as shown in Section 5), participants rated both systems similarly, with a median score of 6 on a 7-point Likert scale (Q8: median difference $= 0$, $p = 0.199$). This does not imply that concurrency is ineffective; instead, it enables StorySage to handle background processes that enhance conversational quality without incurring additional latency compared to the Baseline. Moreover, analysis of the outlying scores indicated that some participants considered the audio playback delay after the follow-up question appeared on screen when evaluating the systems' question proposal latency. This delay occurs due to the conversion delay of the text-to-speech function, which was identical in both systems. While one participant reported that "the voice asking the questions felt delayed compared to when the text was displayed" (gender: F, age: 25-34), another reported "the system was amazingly responsive" (gender: F, age: 65+). Our analysis indicates that latency differences on the order of a few seconds can be difficult to detect in smaller user studies, partly due to the subjective perception of time. Participants did not notice a significantly faster autobiography generation time with StorySage after each session.
Users evaluated the speed of autobiography writing, and in aggregate, they found that new memories were incorporated into both autobiographies at similar speeds (Q7: median difference $= 0$, $p = 0.133$). Despite the computational overhead of a multi-step planning and writing workflow, the concurrent design of StorySage maintains a biography generation time comparable to the Baseline's. Notably, because biography generation begins before the topic selection modal is shown to users, part of the true latency is masked, making StorySage appear slightly faster. However, for fairness of comparison, we report objective latency measurements in our experimental simulations (Section 5). Analyzing qualitative feedback, we find that the evaluation results are subjective. One user reported, "I really like how [StorySage] was able to quickly create the autobiography within 10 minutes or less" (gender: F, age: 25-34), though some users may still prefer a faster turnaround. Figure 8: Within-group comparison for the control group: composite scores for "User Felt In Control" (12 improved, 1 declined, 0 unchanged; $p = 0.0004$), "User Felt the System Was Responsive" (7 improved, 1 declined, 5 unchanged; $p = 0.012$), and "User Felt the System Was Conversational" (10 improved, 2 declined, 1 unchanged; $p = 0.005$), alongside the proportion of users preferring StorySage over the Baseline with Wilson score $95\%$ confidence intervals. # 6.4 Within-Group Analysis Following our experimental design, participants in the control group interact with StorySage after evaluating the Baseline. The advantage of within-group tests is that participants can directly compare the two systems, which may increase their sensitivity to differences by allowing them to exercise comparative judgment [18]. Comparing their questionnaire responses then provides insights into how the same users perceived differences between the systems.
Figure 8 presents the aggregated results, computing the average scores of the respondents, following the common practice of combining multiple Likert-scale items that assess the same construct into a single composite score [12, 41]. This approach is further justified by the high correlation we observed between questionnaire responses. One user is excluded from our analysis due to a technical error during their session. To estimate the true proportion of users who prefer StorySage across the dimensions we identify in our research questions, we construct $95\%$ confidence intervals using the Wilson score method [123] and find that a statistically significant majority of users strictly prefer the autonomy, conversational ability of the Interviewer, and responsiveness of StorySage (RQ1, RQ2, and RQ4). Interestingly, we observe a statistically significant preference for the responsiveness of StorySage in the within-group study, but not in the between-group study—particularly for faster question proposal. We hypothesize that this is because participants in the within-group setting evaluate StorySage after interacting with the Baseline, giving them a concrete latency reference from an existing system for comparison. Moreover, this result somewhat aligns with the latency difference we highlight in our experimental simulations (Section 5). At the same time, it is important to acknowledge that these findings may be influenced by potential carryover effects, including participant fatigue or a bias toward rating the second system more favorably after exposure to a better system. In total, eight participants in the within-group study preferred StorySage, two felt indifferent, and three preferred the Baseline. Participants who preferred StorySage described how the "interviewing [style] was a lot more in depth and informative" (gender: F, age: 45–54), and "[StorySage] felt like more of a conversation than the [Baseline]" (gender: M, age: 35–44).
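The Wilson score interval [123] used above has a simple closed form; a minimal sketch follows, where the counts are illustrative (e.g., 12 of 13 control-group users improving on a dimension), not the study's exact data:

```python
# Wilson score confidence interval for a binomial proportion.
# The successes/n counts in the example are illustrative, not study data.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Return the (lower, upper) Wilson score interval; z=1.96 gives ~95%."""
    if n == 0:
        return (0.0, 1.0)
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - margin, centre + margin)

lo, hi = wilson_interval(12, 13)  # e.g., 12 of 13 users prefer StorySage
# A majority preference is significant at the 95% level when lo > 0.5.
```

Unlike the normal-approximation interval, the Wilson interval stays within $[0, 1]$ and remains well behaved for the small $n$ and extreme proportions encountered here, which is why it suits this analysis.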
One participant noted that StorySage "offered much better control on topics" (gender: M, age: 55-64), and another said it "showcased a pretty great writing style!" (gender: F, age: 25-34). One participant who preferred the Baseline system explained that StorySage "was not repetitive but wanted all the nuances [of my memories]" (gender: M, age: 25-34), and another described how "information [was presented] less accurately [in the autobiography]" (gender: F, age: 45-54). We discuss these points of feedback as well as other limitations in the following section. # 7 DISCUSSIONS In this section, we reflect on key insights from our study, discuss limitations of the current system, and outline directions for future work, including important ethical considerations. # 7.1 Design Implications 7.1.1 Human-in-the-Loop Design. Supporting human-in-the-loop interaction is important when designing AI systems for long-form, creative tasks because it keeps users actively engaged in both the generative and decision-making aspects of the process [28]. In the case of StorySage, participants felt a greater sense of autonomy in contributing to their autobiography because the system offered them the ability to steer the conversation, revise their biography periodically, and influence the narrative direction across sessions. Crucially, the system was designed to be responsive to their input, which reinforced their sense of control and transformed the interaction into a more collaborative experience. These findings suggest that creative AI systems should be designed to continuously involve the user. This is especially important for creative tasks that unfold over longer periods of time, where ongoing human involvement helps users stay engaged and ensures they retain influence over the system's outputs. Rather than operating fully autonomously, these systems should strike a balance where both the system and the user actively contribute to shaping the outcome [14, 28]. 7.1.2 Modularity.
Modular design is useful for building AI systems that need to manage several moving parts in a coordinated way [129]. In the case of StorySage, a multi-agent framework made it possible to split distinct responsibilities across five specialized agents. This separation of concerns allowed each agent to operate within a well-scoped context, enabling better organization that improved both the conversational flow and the writing process [53] without sacrificing system responsiveness. Creative AI systems designed to support work developed over longer periods can benefit from modular architectures, which provide both scalability and responsiveness while allowing flexibility in how different components of the system evolve over time. # 7.2 Future Work and Limitations Conversational navigation. Our qualitative analysis reveals that people engage in conversations differently, making it challenging to build a system that feels tailored to everyone [43]. To address this, StorySage allows users to guide the conversation and asks follow-up questions that align with their interests. While this helps personalize the conversation, it does not fully address the problem. At its core, the Interviewer is powered by an LLM with a manually crafted prompt and given context that includes the session agenda, chat history, and memory recall tools. However, it is not fine-tuned to navigate conversations. Therefore, the Interviewer occasionally becomes overly focused on a single topic. While some users enjoy this style of conversation, others find it too deep. Future work can improve the Interviewer's navigation capabilities by incorporating feedback mechanisms that prevent the questions from becoming unnecessarily detailed, or by fine-tuning the base language model on conversational dialogue [118]. Similarly, the Interviewer's engagement recognition capabilities can be improved by leveraging fine-tuned models, such as EmoLlama [30, 75]. High memory coverage and repetition.
Although our experimental simulations show that StorySage is able to achieve near-perfect biography memory coverage, user feedback highlights an issue with memory repetition across the autobiography. We hypothesize that this arises from the system's memory coverage verification loops, which may contribute to redundancy. Future work can involve users more directly during the planning phase, allowing them to select which memories they want reflected in their narrative. This approach can also help preserve their voice in the autobiography and provide greater autonomy during the writing process. Biography reconstruction. Supporting content reconstruction is important for creating longer, coherent autobiographies. Although users do not explicitly raise this concern, our current design does not support automatic reorganization; rather, we provide a mechanism for users to manually reorganize their biography during the editing process. Automatic reconstruction becomes valuable for longer narratives, as manual reconstruction is both difficult and time-consuming. Future research can explore strategies for automatically merging related content and generating multiple outlines that users can choose from. Additionally, future work can more closely align with the conventional practices of professional biographers, who hold multiple conversations before outlining a biography. In this sense, the overall planning process can benefit from accumulating a larger memory bank prior to structuring. Semantic hallucination. While neither StorySage nor the Baseline introduces inaccurate memories into the autobiography, users note that the system occasionally describes memories using adjectives or emotions they had not shared during the interview. This effect appears to stem from the Section Writer's tendency to write the autobiography with a positive tone, even when provided with the user's framing of the memory.
We attribute this behavior to GPT-4o's post-training process, where RLHF encourages positively framed writing. To help users re-incorporate their tone into the narrative, StorySage allows them to edit their biography, although we recognize this does not solve the deeper problem. Future work can offer multiple phrasings of the autobiography, potentially generated by a model fine-tuned for biography writing. Additionally, such a system can present writing samples to users throughout the writing process to better learn their preferred narrative style. Longitudinal evaluation. Our primary findings are drawn from a user study in which participants engage with StorySage across two 15-minute conversational sessions (30 minutes total). To examine the scalability of our system design, we conducted a simulation study using LLM-based user proxies over 10 sessions, with 20 conversational rounds each. While this simulation demonstrated that StorySage can function effectively over longer sessions and multiple rounds of interaction, it does not capture how real users perceive the system over extended use. Future research should evaluate how users assess StorySage across our key design dimensions when interacting with the system over extended periods of time. # 7.3 Ethical Risks and Considerations A system like StorySage, which is powered by large language models (LLMs), raises several ethical considerations—particularly related to data privacy, narrative accuracy, model bias, and the potential for user over-reliance. Data privacy and confidentiality. Autobiographical writing often involves sharing deeply personal and sensitive memories, which raises concerns about privacy and confidentiality [4, 131, 133]. Transmitting these narratives to third-party LLM providers can increase the risk of exposing sensitive personal data [68].
Although participants in our study were clearly informed about the data handling protocols, broader deployments of such systems should consider using on-premises open-source models to reduce reliance on external services and provide stronger guarantees around data protection [33]. Narrative distortion and model bias. LLMs are known to reflect and amplify biases present in their training data [50, 77], which can lead to plausible yet inaccurate narratives or recollections. In the context of autobiography writing, research has shown that this can distort the narrative and introduce unintended perspectives or tones that compromise the authenticity of a user's life story [51, 55, 56, 59, 78]. To address this, robust content verification measures and human oversight are essential to maintain the accuracy and integrity of the autobiographies [99]. Over-reliance. Although StorySage incorporates several features designed to give users a greater sense of control, individuals may opt to under-engage with the system and lean heavily on its generated content. While convenient, such over-reliance diminishes the cognitive and emotional value of the writing process [28, 61, 71, 76, 93, 134]. User involvement is critical to maintaining a co-creative dynamic that ensures the autobiography reflects the user's voice.
Every individual carries a unique and personal life story shaped by their memories and experiences. However, these memories are often scattered and difficult to organize into a coherent narrative, a challenge that defines the task of autobiography writing. Existing conversational writing assistants tend to rely on generic user interactions and pre-defined guidelines, making it difficult for these systems to capture personal memories and develop a complete biography over time. We introduce StorySage, a user-driven software system designed to meet the needs of a diverse group of users by supporting flexible conversation and a structured approach to autobiography writing. Powered by a multi-agent framework composed of an Interviewer, Session Scribe, Planner, Section Writer, and Session Coordinator, our system iteratively collects user memories, updates their autobiography, and plans for future conversations. In experimental simulations, StorySage demonstrates its ability to navigate multiple sessions and capture user memories across many conversations. User studies (N=28) highlight how StorySage maintains improved conversational flow, narrative completeness, and higher user satisfaction when compared to a baseline. In summary, StorySage contributes both a novel architecture for autobiography writing and insights into how multi-agent systems can enhance human-AI creative partnerships.
# I. INTRODUCTION Automatic code generation aims to reduce manual coding and boost productivity [1], with LLMs like GPT-4 [2] making significant advancements. However, ensuring accuracy and correctness remains a challenge. Recently, several approaches have been proposed to enhance LLM-based code generation. These include prompt engineering techniques like chain-of-thought reasoning [3], which encourages the model to break problems down step by step. Additionally, feedback-based methods have shown promise in correcting generated code. For instance, Reflexion [4] helps language agents learn from mistakes by using feedback and storing reflections in memory, improving their decisions without changing the underlying model. Most feedback-based approaches take into account only the execution results. In this paper, we explore another characteristic of the generated code: its complexity. The idea is that by understanding the expected level of complexity (defined by various metrics) for the correct code of a programming task stated in natural language, we can effectively guide LLMs toward generating the correct code (with the right complexity), even when the initial prompt results in an incorrect solution. Additionally, we investigate whether this complexity-aware method can enhance the performance of existing feedback-based approaches, further refining code generation outcomes. Common complexity metrics that can be useful predictors of correct vs. incorrect code include cyclomatic complexity [5], which measures the number of independent paths through a program's control flow, and Halstead complexity [6], which evaluates the size and volume of code based on operators and operands. In this work, we have employed 53 widely used complexity metrics (details are in Section III-C) as a quantifiable measure of the complexity inherent in the generated code.
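To make one of these metrics concrete, the following is a minimal sketch of a simplified cyclomatic complexity count over Python's `ast` module. Production metric suites apply more detailed counting rules, and this is not the paper's exact implementation:

```python
# Simplified McCabe cyclomatic complexity: 1 + number of decision points.
# Real metric tools count additional constructs; this is an illustration only.
import ast

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            complexity += len(node.values) - 1   # each extra and/or adds a path
        elif isinstance(node, ast.comprehension):
            complexity += 1 + len(node.ifs)      # the loop plus its filters
    return complexity

snippet = """
def generate_integers(a, b):
    if a > b:
        a, b = b, a
    return [i for i in range(a, b + 1) if i % 2 == 0]
"""
print(cyclomatic_complexity(snippet))
```

Here the `if` statement contributes one decision point and the comprehension contributes two (its loop and its filter), so the function scores 4 under this simplified rule set; a straight-line function scores 1.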
As a motivating example, consider Figure 1, which presents a solution generated by GPT-4o [7] on an example from the HumanEval [8] dataset. The prompt for this task is “Given two positive integers a and b, return the even digits between a and b, in ascending order”. The initial incorrect code from GPT-4o (Listing 1) produces even numbers within [a, b] but includes numbers beyond the specific digits required {2, 4, 6, and 8}, leading to an overgeneralized solution. By contrast, the correct code generated by our approach (Listing 2) limits the output to only the specified even digits by checking for membership in {2, 4, 6, 8}, aligning closely with the problem requirements. Looking at the complexity metrics associated with these listings, we can see that, for instance, the Halstead effort metric is 94.89, 48.60, and 42.79 in Listing 1, Listing 2, and Listing 3, respectively. This shows that some complexity metrics can be predictors of code correctness.

```python
def generate_integers(a, b):
    if a > b:
        a, b = b, a
    even_numbers = [i for i in range(a, b + 1) if i % 2 == 0]
    return even_numbers
```
Listing 1: Incorrect code generated by the LLM

```python
def generate_integers(a, b):
    if a > b:
        a, b = b, a
    even_digits = []
    for i in range(a, b + 1):
        if i in [2, 4, 6, 8]:
            even_digits.append(i)
    return even_digits
```
Listing 2: Correct code generated by our approach

```python
def generate_integers(a, b):
    lower = max(2, min(a, b))
    upper = min(8, max(a, b))
    return [i for i in range(lower, upper + 1) if i % 2 == 0]
```
Listing 3: Ground truth code in the dataset

Fig. 1: Comparison of incorrect code generated by the LLM (left), the correct code generated by our approach (middle), and the ground truth code in the dataset (right). The incorrect code shows higher complexity based on certain metrics, such as Halstead metrics, the number of numeric literals, and the frequency of mathematical operations. For instance, the Halstead effort metric is 94.89 in the left code and 48.60 and 42.79 in the middle and right code, respectively.

To investigate the correlation between code complexity and LLMs’ effectiveness, and the potential of complexity feedback to improve LLM code generation, we conduct a study with these four research questions:

RQ1: Are complexity metrics of the generated codes correlated with the code generation’s effectiveness (pass@1)? • Our first objective is to investigate whether there is a correlation between the complexity metrics of the generated code and the success rate of the LLMs’ outputs, measured as Pass@1 [8] (i.e., the percentage of correct solutions on the first attempt). Using a machine learning model (specifically, logistic regression [9]), we observe a clear correlation between Pass@1 and specific complexity metrics. This relationship is particularly strong in the HumanEval dataset, where models such as GPT-4o attain very high accuracy.

RQ2: How do the distributions of complexity metrics differ between successful and failed code solutions generated by LLMs?
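The pass@1 measure used across these research questions is the k = 1 case of the unbiased pass@k estimator introduced with HumanEval [8]. A minimal implementation, assuming n samples are drawn per problem of which c pass all tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al. [8]):
    n = code samples generated per problem,
    c = samples that pass all test cases,
    pass@k = 1 - C(n - c, k) / C(n, k)."""
    if n - c < k:
        # Fewer failing samples than k: some passing sample is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k = 1 this reduces to the fraction of first-attempt successes:
assert pass_at_k(n=1, c=1, k=1) == 1.0
assert abs(pass_at_k(n=5, c=2, k=1) - 2 / 5) < 1e-12
```

The study reports pass@1, i.e., whether the single top-ranked generation passes all test cases.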
We analyze the distribution of each metric across both correct and incorrect code samples generated by LLMs, which enables us to identify patterns and differences in metric values associated with successful versus unsuccessful code generations. To gain deeper insights, we divide this research question into two sub-questions:

RQ2.1: Are there specific metrics that LLMs have more difficulty getting right when generating code? This sub-question investigates which metrics are harder for different LLMs, such as GPT-4o, GPT-3.5 Turbo [10], and Llama 3.1 [11], to optimize, examining success and failure cases to highlight LLM-specific challenges.

RQ2.2: Do different datasets’ characteristics differ in terms of the distributions of code complexity metrics over correct vs. incorrect code? Here, we assess whether complexity metrics influence success differently across datasets (HumanEval, MBPP [12], LeetCode [13]), providing insights into dataset-specific characteristics.

RQ3: Can feedback based on complexity metric values of the generated code improve LLMs’ code generation effectiveness? Having established this correlation, we sought to leverage these complexity metrics to further help LLMs refine the generated code. Using a diverse set of datasets and different LLMs, we conducted experiments to refine code generation iteratively. By calculating the Shapley values [14] of the complexity metrics, we identified the most important metrics for each dataset and used them as feedback to prompt the LLM to generate new code with different complexity characteristics. To prevent overfitting, new test cases are generated using GPT-4o, and if the code fails them, we identify the five most impactful complexity metrics and prompt the LLM to regenerate the code with different values for those metrics. This cycle continues until the code passes all test cases or reaches a maximum number of iterations (five in our study).
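The refinement cycle just described can be sketched as a short loop. All callables here (`generate_code`, `run_tests`, `top_shapley_metrics`) are hypothetical stand-ins for the LLM call, the GPT-4o-generated test harness, and the Shapley-based ranking; they are stubbed in the usage example so the sketch is runnable. Only the feedback wording and the five-iteration cap come from the text.

```python
MAX_ITERS = 5  # the study's iteration cap

def refine(prompt, generate_code, run_tests, top_shapley_metrics):
    """Iteratively regenerate code, feeding back the five most
    impactful complexity metrics whenever the internal tests fail."""
    code = generate_code(prompt)
    for _ in range(MAX_ITERS):
        if run_tests(code):
            return code, True
        metrics = top_shapley_metrics(code, n=5)
        feedback = ("The previously generated code is incorrect. "
                    "Please ensure that your generated code has different "
                    "values for the following complexity metrics: "
                    + ", ".join(metrics))
        code = generate_code(prompt + "\n" + feedback)
    return code, run_tests(code)

# Toy stubs: the second attempt "passes" the internal tests.
attempts = iter(["bad", "good"])
code, ok = refine("Given two positive integers a and b, ...",
                  generate_code=lambda p: next(attempts),
                  run_tests=lambda c: c == "good",
                  top_shapley_metrics=lambda c, n: ["halstead_effort", "loc"][:n])
assert ok and code == "good"
```

In the actual pipeline, the code that survives this loop is then scored once against the dataset's original, developer-written test cases.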
RQ4: Can feedback based on complexity metric values enhance the effectiveness of code generation agents, particularly on datasets with lower accuracy? Here, we explore whether our complexity-based feedback method can further improve code generation when integrated into an agent-based framework. Given that datasets like HumanEval, MBPP, and LeetCode already have high Pass@1 scores, we focus on a more challenging dataset, BigCodeBench [15], where there is greater room for improvement. We apply our complexity-aware feedback method on top of a code generation agent (Reflexion [4]) to assess its effectiveness in refining agent-generated code. Our results show that even in more complex scenarios, guiding the agent with complexity-based insights leads to measurable improvements in code accuracy.

In short, the contributions of this paper are:
1) Demonstrating the correlation between code complexity metrics and LLMs’ code generation success (Pass@1).
2) Introducing an iterative feedback method based on code complexity metrics to enhance the correctness of generated code, both in standalone LLM prompting and within an agent-based framework.
3) Conducting comprehensive experiments across multiple datasets and LLMs to validate our approach.

Data Availability: We release the source code of our experiments to help other researchers replicate and extend our study1.

# II. BACKGROUND AND RELATED WORKS

# A. LLMs for Code Generation

Recent LLMs have brought significant improvements to code generation. Codex [8], based on GPT-3 and trained on extensive code repositories, was developed specifically for code tasks and stands out in both comprehension and generation. CodeLlama [16], an enhanced variant of Llama 2 [17] specifically fine-tuned for coding tasks, excels in code generation and is offered in multiple parameter sizes.
Llama 3 is a powerful successor to Llama 2, available in various parameter configurations, making it an excellent tool for generating code efficiently. In this research, we use GPT-4o, a high-performing, closed-source model; GPT-o3 mini [18], OpenAI’s most cost-effective model in its reasoning series; GPT-3.5-turbo, another advanced closed-source model; and Llama 3.1, a cost-effective, open-source model. GPT-4o and GPT-o3 mini are selected for their superior capabilities among closed-source models, while GPT-3.5-turbo offers a balance of performance and efficiency in various coding tasks. Llama 3.1 provides a budget-friendly and accessible alternative for open-source experiments, complementing the other models in our study.

# B. Feedback-Based Code Generation

Recent frameworks like Parsel [19] and ANPL [20] enhance code generation by structuring tasks and refining code iteratively. Other systems, such as DyLAN [21], Reflexion [4], and AgentCoder [22], leverage multi-agent and feedback-based methods to improve task efficiency. The LDB [23] framework breaks down code for error detection, while EPiC [24] uses evolutionary algorithms to optimize prompts for efficient code generation. Our approach uniquely leverages the complexity metrics of generated code as feedback to refine outputs, representing a novel feedback-based method focused on improving code quality through metric-based insights.

# C. Code Complexity Metrics

Software complexity metrics are vital for managing quality and reducing costs. Yu and Zhou [25] provide a comprehensive review, highlighting their role in improving maintainability throughout the software lifecycle. Zamani and Hemmati [26] introduce “Tuning Gain,” a metric estimating cost-effectiveness in search-based test generation. Using metrics like McCabe’s and Halstead’s complexities, they demonstrate the value of static metrics in enhancing software testing. Mashhadi et al. [27] assess code complexity metrics for bug prediction.
They find Lines of Code and McCabe’s complexity effective for bug detection but insufficient for severity prediction, suggesting a need for context-sensitive metrics. Harzevili and Alizadeh [28] discuss complexities in software defect prediction, noting that traditional classifiers often overlook interdependencies among metrics, which can impact real-world applications. Despite the wealth of research on complexity metrics, no prior studies have focused on using these metrics specifically to improve the quality of code generated by LLMs. In this paper, we aim to fill this gap by applying complexity metrics to enhance the code generation process.

# III. STUDY SETUP

In this section, we explain the datasets, models, complexity metrics, evaluation metrics, and experiment setup and design for our study.

# A. Datasets

For our experiments, we used four datasets: HumanEval [8], MBPP-sanitized [12], LeetCode [13], and BigCodeBench [15]. HumanEval consists of 164 Python programming problems, each with a function signature, description, and examples, and is widely used to evaluate the accuracy of LLM code generation. MBPP-sanitized is a refined version of MBPP, addressing inconsistencies in prompts and test cases for improved reliability; it includes 120 training samples and 257 test samples. LeetCode is a large dataset of 2,360 Python problems commonly used in competitive programming. We focused on 561 problems with verified correct code, extracting test cases embedded in the problem descriptions. BigCodeBench assesses LLMs on practical and complex coding tasks. We used BigCodeBench-Hard, a 148-task subset featuring more intricate instructions and diverse function calls.

# B. Models

For code generation, we experimented with a range of LLMs to capture the differences in their capabilities, i.e., GPT-4o [7], GPT-3.5 Turbo [10], Llama 3.1 [11], and GPT-o3 mini [18].

# C. Complexity Metrics

Complexity metrics are quantitative measures used to assess various aspects of code, such as its structure, readability, and maintainability. These metrics help to understand how complex a piece of code is. For our study, we used a comprehensive set of 53 complexity metrics from the literature [29], [30], which are explained in Table I.

TABLE I: Overview of Code Complexity Metrics Used in our Study

# D. Evaluation Metric

In this work, we use pass@k [8] to measure the performance of LLMs in code generation tasks. Pass@k is the probability that at least one of the k generated code samples passes all the test cases for a given coding problem. Pass@1 is the special case in which only the top-ranked generated solution is considered: it measures the probability that the first generated code sample passes all test cases, assessing the model’s ability to produce a correct solution on the first attempt. In this study, pass@1 is particularly important, as it directly measures the initial success rate of LLMs in producing correct code. A higher pass@1 score indicates better code generation performance.

# E. Experimental Setup

1) Preprocessing: We extracted complexity metrics from the generated code and the ground truth for all experiments. Cyclomatic and Halstead complexities were computed using the Radon library in Python, while custom functions were defined for the other complexity metrics. To normalize these features, we applied StandardScaler so that all metrics were on the same scale for the prediction process.

2) Configurations of LLMs: GPT-3.5-turbo, GPT-4o, and GPT-o3 mini were accessed via Colab with high-RAM CPU configurations. Meta-Llama-3.1-8B-Instruct was run on an A100 GPU for increased speed.
The models were configured with a temperature of 0.2, max tokens set to 1000, and a frequency penalty of 0.0 to control creativity and output length. We used this prompt to generate code based on the function description: “Please complete the Python function: {function description}”. In cases where the generated code was incorrect, feedback was provided to the LLM using complexity metrics: “The previously generated code is incorrect. Please complete the Python function according to the feedback: {feedback}”, where the feedback took the form: “Please ensure that your generated code has different values for the following complexity metrics: {metrics}”.

3) Logistic Regression Model: To predict the success of the generated code (pass@1), we trained a Logistic Regression model on the complexity metrics of the generated code. We used max_iter = 10000 and penalty = 'l2' to handle feature regularization and to ensure convergence for all training instances.

# F. Experiment Design

1) RQ1 Design: In this RQ, we first prompt LLMs to generate code solutions for problems in each dataset. For each code solution, we compute various complexity metrics (as discussed in Section III-C). Next, we apply a Logistic Regression model with 5-fold cross-validation to analyze the correlation between metric values and the likelihood of code solutions’ success (pass@1). This step involves using complexity metrics as predictor variables to assess their ability to predict the success or failure of generated code. The goal is to understand whether higher or lower values of specific metrics are associated with successful outcomes. To enhance the model’s accuracy, we applied several feature selection methods, including:

L1 Regularization (Lasso) [31]: This method reduces model complexity by setting coefficients of less relevant features to zero, focusing on key complexity metrics.
Recursive Feature Elimination (RFE) [32]: This method iteratively removes the least important features based on model performance to identify a minimal, effective subset for predicting pass@1.

Correlation-based Selection [33]: This method chooses features strongly correlated with the target (pass@1) and removes redundant ones to ensure each selected feature uniquely contributes to prediction.

Shapley Values [34]: This method calculates each feature’s contribution to a model by averaging its impact across all feature combinations. For feature selection, it ranks features by their Shapley values, helping identify and retain the most influential ones, which enhances model interpretability.

2) RQ2 Design: To explore how complexity metric values differ between successful (pass@1 = 1) and failed (pass@1 = 0) instances, we analyze the distribution of metrics for each case. We identify the metrics for which the difference between correct and incorrect instances is significant. This analysis reveals which complexity metrics are most often associated with failures and are thus critical for LLMs to get right in order to produce correct code. We then extend our analysis: for each metric, we compare successful and failed cases across GPT-3.5-turbo, GPT-4o, and Llama 3.1-8B-Instruct. Additionally, to detect any dataset-specific trends, we examine whether different datasets show varying distributions of complexity metrics between successful and unsuccessful solutions by comparing their median metric values.

3) RQ3 Design: RQ3 assesses our proposed feedback-driven mechanism for enhancing the code generation process. Figure 2 provides an overview of our method, which is explained in detail in the rest of this section. Our method is divided into two parts: Important Complexity Metric Detection and Iterative Code Improvement.
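Before detailing the two phases, the RQ1 prediction pipeline (a classifier over standardized complexity metrics, with features ranked by importance) can be sketched in miniature. This is a stdlib-only illustration under stated simplifications: the gradient-descent logistic regression stands in for the scikit-learn model, the toy data is invented, and ranking features by coefficient magnitude is a crude substitute for the Shapley-value ranking the study actually uses.

```python
import math, random

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (stdlib stand-in
    for the scikit-learn model used in the study)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

# Toy data: feature 0 (say, a standardized Halstead-effort value)
# separates pass@1 = 1 from pass@1 = 0; feature 1 is pure noise.
random.seed(0)
X = ([[-1 + 0.1 * random.random(), random.random()] for _ in range(20)]
     + [[1 + 0.1 * random.random(), random.random()] for _ in range(20)])
y = [1] * 20 + [0] * 20
w, b = train_logreg(X, y)

# Rank features by |coefficient| -- a deliberate simplification of the
# Shapley ranking, workable here only because features are on one scale.
ranking = sorted(range(len(w)), key=lambda j: -abs(w[j]))
assert ranking[0] == 0  # the informative metric dominates
```

The study's pipeline is the same shape: fit the classifier on metric vectors labeled with pass@1, then read off which metrics drive the prediction.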
Important Complexity Metric Detection: The goal of this phase is to identify the most predictive complexity metrics per dataset. We used the training set of each dataset to prompt the LLM for code generation. For HumanEval, LeetCode, and BigCodeBench, this involves using 4 folds in the cross-validation process; for MBPP, we used the provided training set. In Step 1, we compute the complexity metrics for each generated code sample. In Step 2, we evaluate correctness using the pass@1 criterion. In Step 3, we train a Logistic Regression model to predict pass@1, using the metrics as the feature set and pass@1 as the labels on the training set. In Step 4, the feature importance of the trained model is analyzed using Shapley values [34], which provide a robust approach for assessing the contribution of each feature to the model’s prediction. Here, Shapley values measure the impact of the various complexity metrics on the likelihood of a generated code sample passing all test cases (i.e., pass@1). By assigning an importance score to each metric, they enable a nuanced understanding of how individual complexity characteristics influence model predictions, highlighting which metrics are critical to predicting code correctness.

Fig. 2: Overview of Complexity-Aware Feedback for Enhanced LLM Code Generation

Iterative Code Improvement: In this phase, we apply an iterative process for code improvement on the evaluation set (the remaining fold in cross-validation for HumanEval, LeetCode, and BigCodeBench, or the evaluation set for MBPP). The process starts with Step 5, where test cases are generated using GPT-4o for internal code evaluation. The algorithm uses these GPT-4o-generated test cases instead of the dataset’s evaluation test cases, continuing to process only the samples that fail them; incorporating the ground-truth tests in this feedback loop would introduce data leakage.
Next, in Step 6, we prompt the LLM to generate new code. We then evaluate the generated code (Step 7), and if a code sample passes all generated test cases, it is added to the final code pool. In Step 8, for any code that failed, the values of the five most influential metrics (identified in Step 4) are collected. In Step 9, this feedback is provided to the LLM so that it adjusts these complexity aspects in the regenerated code, using the following prompt: “Please ensure that your generated code has different values for the following complexity metrics: {metrics}”, where metrics are the five most influential metrics identified earlier. This refinement process is repeated for at most N iterations (N = 5 in this study) or until the generated code passes all test cases. Lastly, in Step 10, the final set of generated code samples is evaluated using the original test cases in the dataset (the developer-written ones, not the LLM-generated tests, which are used only internally by the algorithm). This process is formally presented as Algorithm 1, which comprises the two phases above: lines 1 through 7 correspond to “Important Complexity Metric Detection” in Figure 2 (training set) and focus on identifying the most impactful complexity metrics, while lines 8 onward correspond to the feedback and evaluation phase (“Iterative Code Improvement” in Figure 2), where these metrics are used to guide code regeneration.

4) RQ4 Design: In this research question, we extend our complexity-aware feedback method to an agent-based code generation framework. Specifically, we first apply Reflexion, a feedback-driven code generation agent, to iteratively refine the generated code. Once Reflexion has completed its improvement process, we incorporate our complexity-based feedback mechanism to further enhance the correctness of the code.
Our approach remains consistent with RQ3, where we identify the most predictive complexity metrics using Shapley values and iteratively prompt the LLM to adjust its generated code based on these insights. However, in RQ4, this complexity-driven refinement is applied on top of Reflexion’s intermediate outputs rather than directly on the initial code generated by the LLM. This allows us to evaluate whether our complexity-aware feedback can further improve results even when an agent has already optimized the code. Given that datasets like HumanEval, MBPP, and LeetCode already exhibit high Pass@1 scores, we focus on BigCodeBench, a more challenging dataset where there is greater room for improvement. By layering our complexity-aware approach on top of an agent-based method, we assess its effectiveness in refining code generation within more complex and practical programming scenarios.

```
Algorithm 1: Complexity-Aware Feedback for Code Generation
Require: Training set T, Evaluation set E, Max iterations I = 5
Ensure: Final pass@1 scores
 1: for each problem c in T do
 2:   g ← GenerateCode(LLM, c)
 3:   metrics_g ← ComputeComplexityMetrics(g)
 4:   pass1_g ← CalculatePass1(g)
 5: end for
 6: important_metrics ← FindImportantMetrics(
 7:       LogisticRegression(T), ShapleyValues)
 8: for each problem c in E do
 9:   g ← GenerateCode(LLM, c)
10:   pass1_g ← EvaluatePass1(g, GeneratedTestCases)
11:   if pass1_g == 0 then
12:     metrics_g ← ComputeComplexityMetrics(g)
13:     for i = 1 to I do
14:       Prompt LLM: "Please ensure that your generated code has
          different values for the following complexity metrics: M"
15:       g ← RegenerateCode(LLM,
16:             metrics = important_metrics)
17:       pass1_g ← EvaluatePass1(g, GeneratedTestCases)
18:       if pass1_g == 1 then
19:         break
20:       end if
21:     end for
22:   end if
23: end for
24: for each code g in E do
25:   final_pass1 ← EvaluatePass1(g, OriginalTestCases)
26: end for
```

# IV. EXPERIMENT RESULTS

# A. RQ1: Are complexity metrics of the generated codes correlated with the code generation’s effectiveness (pass@1)?

Table II presents the Logistic Regression model’s accuracy (averaged over five folds) when using different feature selection methods for each LLM on the four datasets. The bold values in each row represent the highest accuracy achieved for that particular LLM and dataset combination. For example, the highest accuracy for GPT-4o on the HumanEval dataset is 0.921 using Shapley Values, indicating that Shapley Values outperformed the other methods for this combination. Compared to the other feature selection methods, Shapley values perform best, as shown by the higher number of top-performing values in the Shapley column. Even where Shapley is not the top method, its accuracy is very close to the best-performing alternatives. This consistency is why we chose Shapley values for feature selection in our feedback-based algorithm.

Table II reveals clear patterns in the relationship between the complexity metrics of LLM-generated code and pass@1 scores across different datasets and LLMs. The key insights include:

HumanEval Dataset: This dataset consistently showed the best results across all LLMs, particularly with GPT-4o, which achieved the highest accuracy (0.921). This indicates a strong correlation between the complexity metrics and pass@1 for this dataset, especially with more advanced models like GPT-4o. Even less sophisticated models, such as GPT-3.5-turbo and Llama 3.1, showed significant improvement on HumanEval, achieving accuracies of 0.683 and 0.714, respectively. This suggests that the complexity metrics are particularly well suited to predicting pass@1 for the HumanEval dataset across different LLMs.

MBPP Dataset: Performance on MBPP was weak across all LLMs, with GPT-3.5 Turbo and Llama 3.1 struggling at accuracies below 0.67. Even GPT-4o, which excelled elsewhere, achieved only 0.74, possibly due to the nature of the tasks in MBPP, which may involve more challenging or less uniform code patterns, making complexity metrics less predictive of pass@1.

LeetCode Dataset: LeetCode presented an interesting middle ground, with GPT-4o performing consistently well across all methods, demonstrating that the complexity metrics are robust for this dataset. GPT-3.5-turbo and Llama 3.1 also performed moderately well, reaching 0.814 and 0.693, respectively. These results suggest that while LeetCode is not as easy to predict as HumanEval, it still allows for a solid correlation between complexity metrics and pass@1.

BigCodeBench Dataset: BigCodeBench exhibited the weakest overall performance, with GPT-o3 mini achieving only 0.59 accuracy and GPT-4o reaching 0.722, lower than its performance on the other datasets. These results suggest that complexity metrics are less predictive of pass@1 in this dataset, likely due to its diverse function calls and more intricate coding tasks. This highlights the need for alternative or enhanced feedback mechanisms for handling more complex, real-world programming challenges.

Answer to RQ1: The results indicate that the complexity metrics of LLM-generated code solutions are correlated with their pass@1 scores, particularly when feature selection methods such as Shapley Values are applied.

# B. RQ2: How do the distributions of complexity metrics differ between successful and failed solutions generated by LLMs?

In this research question, we aim to explore the differences in the distributions of complexity metrics based on the target value (pass@1 = 1 or pass@1 = 0).
For example, Figure 3 presents a box plot of the Halstead Length distribution, comparing the target values when GPT-4o is used as the LLM and HumanEval as the dataset. The median for target 0 is higher than for target 1, indicating that this complexity metric tends to be greater in cases where the code fails. To highlight the variation between the two target distributions, we calculate the difference in their medians for each complexity metric and visualize these differences using a bar plot. Figure 4 displays this bar plot for each LLM and dataset combination.

TABLE II: The accuracy of the logistic regression model when using different feature selection methods for each LLM on the four datasets. “-” indicates no feature selection.

Fig. 3: The distribution of Halstead Length by target value (pass@1 = 1 or pass@1 = 0), using GPT-4o as the LLM and HumanEval as the dataset.

1) RQ2.1: Are there specific metrics that LLMs have more difficulty getting right when generating code?: In Figure 4, each column presents the difference in the median values of complexity metrics between failed and passed cases across different datasets, evaluated using a specific LLM. A clear pattern emerges: GPT-4o and Llama 3.1 consistently generate more complex code in failed cases (pass@1 = 0), with metrics such as Halstead Length and the number of lines of code (LOC) showing higher values when code fails. This suggests that these two models may produce overly complex solutions, which can lead to incorrect code outputs. In contrast, GPT-3.5-turbo tends to generate simpler code in failed cases, as indicated by several complexity metrics being higher for the correct cases (pass@1 = 1). This may indicate that GPT-3.5-turbo generates overly simplistic solutions that lack the details required to pass.
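The per-metric median differences plotted in Figure 4 are simple to compute. The sketch below uses Python's stdlib `statistics` module; the sample data is invented purely for illustration, and `median_differences` is a name chosen here, not from the paper.

```python
from statistics import median

def median_differences(samples):
    """For each complexity metric, the difference in medians between
    failed (pass@1 = 0) and passed (pass@1 = 1) generations -- the
    quantity plotted per LLM/dataset combination in Figure 4.
    `samples` is a list of (metrics_dict, passed) pairs."""
    failed = [m for m, ok in samples if not ok]
    passed = [m for m, ok in samples if ok]
    return {k: median(m[k] for m in failed) - median(m[k] for m in passed)
            for k in failed[0]}

# Illustrative data only: two failed and two passed generations.
samples = [
    ({"halstead_length": 40, "loc": 12}, False),
    ({"halstead_length": 35, "loc": 10}, False),
    ({"halstead_length": 22, "loc": 7}, True),
    ({"halstead_length": 25, "loc": 8}, True),
]
diffs = median_differences(samples)
assert diffs["halstead_length"] > 0  # here, failed code is more complex
```

A positive difference, as for GPT-4o and Llama 3.1 in the paper's data, means the failing generations carry higher values of that metric; a negative one, as often seen for GPT-3.5-turbo, means the failures are the simpler ones.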
2) RQ2.2: Do different datasets’ characteristics differ in terms of the distributions of code complexity metrics over correct vs. incorrect code?: Each row in Figure 4 represents the behavior of a dataset across multiple LLMs. A notable finding is that both the HumanEval and LeetCode datasets show more variation in complexity metrics between failed and successful cases. This suggests that for these datasets, the complexity of the code plays a significant role in determining whether a generated solution passes or fails. On the other hand, the MBPP dataset shows fewer differences in complexity metrics between pass@1 = 1 and pass@1 = 0. This finding is consistent with the lower performance seen in RQ1 for this dataset, where the accuracy of the Logistic Regression model was also lower. It appears that complexity metrics are less relevant to determining code correctness in MBPP, which aligns with the nature of MBPP prompts: these are typically one-line explanations without detailed context, so failures potentially stem from issues beyond the complexity of the generated code itself. In contrast, HumanEval and LeetCode provide more comprehensive prompts, including examples and details, meaning that code failures are more likely related to the LLM’s handling of code complexity. In such cases, offering complexity-based feedback could lead to performance improvements, as indicated by the differences in metric distributions.

HumanEval Dataset: Across all three LLMs, failed code tends to have a higher maximum number of nested blocks and more comparisons. This suggests that in HumanEval, incorrect code is often more structurally complex, containing deeper levels of nesting and a greater number of comparison operations. Excessive nesting and comparisons add complexity and decision points, increasing the likelihood of errors and indicating potential code failure.
MBPP Dataset: For all three LLMs, failed code exhibits a higher number of unique words and more numeric literals. This indicates that incorrect MBPP solutions introduce more distinct terms and numeric values. Since MBPP prompts are typically concise, failed solutions likely overcomplicate the task by adding extra variables or values that do not align with the intended simplicity. This increased complexity may contribute to the failure of the generated code.

LeetCode Dataset: In this dataset, failed code has higher counts of numeric literals and consistently higher values across all Halstead complexity metrics. This pattern suggests that incorrect solutions in LeetCode tend to be more computationally demanding and complex. The prominence of the Halstead metrics, which quantify the overall effort required to understand and maintain code, suggests that these failed solutions are not only larger but also harder to process, and may involve more intricate logic or operations than necessary, which contributes to their failure.

Fig. 4: Bar plot showing the median differences in complexity metrics between target values (pass@1 = 1 and pass@1 = 0) for each LLM and dataset combination. (The median difference for metrics not shown in the plots is zero.)

Answer to RQ2: The analysis reveals that complexity patterns differ across models and datasets: GPT-4o and Llama 3.1 struggle with overly complex outputs, while GPT-3.5-turbo tends to fail on overly simple ones. Complexity strongly impacts failures in HumanEval and LeetCode but is less relevant in MBPP due to its simpler prompts.

# C. RQ3: Can feedback based on complexity metric values of the generated code improve LLMs’ code generation effectiveness?

In this research question, we focus on improving code generation by providing targeted feedback to LLMs based on key complexity metrics.
Table III shows the Pass@1 performance across iterations for each LLM and dataset. Iteration 0 serves as a baseline where the LLM generates code without feedback. From Iteration 1, the LLM is prompted to modify the five most important complexity metrics for any incorrect code solution, identified during training with logistic regression and Shapley values. This process continues for five iterations, refining the generated code. For example, Figure 5 shows the most influential metrics using Shapley values calculated from one fold of the HumanEval dataset, with code generated by GPT-4o. The metrics displayed are ranked according to their impact on the model’s predictions. This ranking facilitates easy identification of which complexity features contribute most significantly to the likelihood of success. Metrics such as Halstead Length, vocabulary, effort, Lines of Code (LOC), and number of math operations emerged as the most significant indicators of code quality. For the MBPP dataset, we used the provided training set during the training phase. However, for HumanEval and LeetCode, which do not include separate training sets, we employed 5-fold cross-validation and reported the average results across folds. The second baseline in Table III, represented by the white rows, involves asking the LLM to regenerate incorrect codes iteratively without any complexity-based feedback. This setup allows us to assess whether our approach offers improvements over naive regeneration. In this baseline, we follow the same process as our method, using LLM-generated test cases for evaluation during the iterations. It is important to note that while our algorithm incorporates LLM-generated test cases for feedback during training, we report the final Pass@1 results based on the actual test cases from the dataset. Fig.
5: Shapley values illustrating the importance of various complexity metrics in predicting the likelihood of generated code passing all test cases (pass@1) for one fold of the HumanEval dataset, using GPT-4o for code generation. The chart highlights the most influential metrics. The most substantial improvements are observed in the HumanEval dataset, particularly with GPT-3.5, which initially had the lowest pass@1 but achieved a higher pass@1 than Llama 3 in the fifth iteration. This result aligns with our findings in RQ1 and RQ2, confirming that pass@1 is correlated with the complexity metrics of generated code. Additionally, our algorithm outperformed the baseline across all models. Specifically, GPT-4o’s pass@1 improved by 6.74% with our algorithm versus 2.24% in the baseline, GPT-3.5-turbo by 35.71% versus 12.5%, and Llama 3.1 by 10.29% versus 4.41%. In the MBPP dataset, while our algorithm achieved better results than the zero-shot baseline, the improvements were only slightly higher than in the second baseline, where complexity metrics were not directly leveraged. This outcome aligns with previous RQs, which indicated that MBPP exhibits a low correlation between complexity metrics and pass@1, making complexity-based adjustments less impactful. Improvements for GPT-4o were 5.88% with our algorithm (versus 1.43% in the baseline), for GPT-3.5-turbo 4.48% (versus 1.49%), and for Llama 3.1 9.09% (versus 7.27%). In the LeetCode dataset, GPT-4o showed limited improvement, likely due to its initially high pass@1 score, suggesting it may have reached its performance ceiling in this context. For instance, this is a prompt in the LeetCode dataset: “Given a string s, return all the palindromic permutations (without duplicates) of it. You may return the answer in \*\*any order\*\*.
If `s` has no palindromic permutation, return an empty list.” Despite mentioning “any order”, order errors persisted, implying that these failures were more due to prompt interpretation issues than complexity. Conversely, improvements on GPT-3.5-turbo and Llama 3.1 were more pronounced. The improvement for GPT-4o was 1.09% (versus no improvement in the baseline), for GPT-3.5-turbo 4.94% (versus 1.23%), and for Llama 3.1 7.58% (versus 3.03%). Across all datasets, GPT-3.5 consistently showed the greatest improvement relative to other LLMs. This could be because, when asked to change the complexity metrics, the LLM usually makes the code more complex rather than simpler; our observations in RQ2 suggest that GPT-3.5 initially generates simpler code in failed cases, whereas GPT-4o and Llama 3.1 produce more complex code upon failure. Answer to RQ3: Complexity-based feedback significantly improves LLMs’ code generation ability, particularly for GPT-3.5-turbo, which saw notable gains in Pass@1 compared to baselines. By refining key complexity metrics, our approach consistently outperformed the baseline, confirming that targeted complexity-based adjustments can enhance the accuracy of generated code. D. RQ4: Can feedback based on complexity metric values enhance the effectiveness of code generation agents? In this RQ, we investigate whether our complexity-based feedback can further enhance the performance of feedback-driven code generation agents, specifically Reflexion. To evaluate this, we conducted experiments on BigCodeBench, a more complex dataset compared to HumanEval, MBPP, and LeetCode. We used GPT-4o and GPT-o3 mini as our test models and analyzed their performance with and without Reflexion.
The results are presented in Table IV, where Iteration 0 serves as the first baseline (zero-shot generation), while the white rows represent the second baseline (iterative code generation without complexity-based feedback). Our findings indicate that our complexity-aware feedback method improves code generation performance across iterations, both with and without Reflexion. This is particularly evident in GPT-o3 mini, where iterative complexity-based refinements resulted in noticeable improvements. These results suggest that our approach can be effectively applied on top of agent-based methods, further refining generated code. However, when comparing the improvements against the second baseline, the differences are relatively minor. This aligns with our findings in RQ1, which indicated that complexity metrics are less predictive of Pass@1 in the BigCodeBench dataset. One possible reason is that BigCodeBench contains more diverse and intricate coding tasks, making it less sensitive to complexity-based refinements. Nonetheless, the improvements remain slightly higher than the second baseline. This suggests that while complexity-aware feedback may have a limited impact on datasets with weaker complexity-Pass@1 correlations, it can still contribute meaningfully when integrated into agent-based code generation workflows. TABLE III: Pass@1 of our feedback-based approach (green rows) and the baseline (white rows) for various LLMs on HumanEval, MBPP, and LeetCode datasets TABLE IV: Pass@1 of our complexity-based feedback approach (green rows) and the baseline (white rows) for GPT-4o and GPT-o3 mini on the BigCodeBench dataset, with and without the Reflexion agent Answer to RQ4: Complexity-based feedback can enhance agent-based code generation.
However, improvements were only slightly higher than the second baseline, aligning with our finding that complexity metrics are less predictive of Pass@1 in this dataset. While its impact is limited in lower-accuracy datasets, it can still provide marginal gains in agent-assisted generation.

# V. THREATS TO VALIDITY

Internal Validity: Fixed parameters (like temperature and max tokens) were set to balance consistency and creativity in code generation. However, broader experimentation with these settings could optimize results for different tasks. External Validity: While the study uses well-established datasets, these may not fully represent real-world coding challenges. Future work with diverse datasets and newer LLMs could improve generalizability. Construct Validity: Using SHAP to analyze only the top five metrics might miss other important relationships. Also, relying on LLM-generated test cases could introduce misalignment if they fail to fully capture task requirements, highlighting a need to reduce hallucinations. Conclusion Validity: The results are robust for the chosen datasets, but expanding the dataset size would improve reliability and generalizability for broader programming scenarios.
Automatic code generation has gained significant momentum with the advent of Large Language Models (LLMs) such as GPT-4. Although many studies focus on improving the effectiveness of LLMs for code generation, very limited work tries to understand the characteristics of the generated code and leverage them to improve failed cases. In this paper, as the most straightforward characteristic of code, we investigate the relationship between code complexity and the success of LLM-generated code. Using a large set of standard complexity metrics, we first conduct an empirical analysis to explore their correlation with LLM performance on code generation (i.e., Pass@1). Using logistic regression models, we identify which complexity metrics are most predictive of code correctness. Building on these findings, we propose an iterative feedback method, where LLMs are prompted to generate correct code based on complexity metrics from previous failed outputs. We validate our approach across multiple benchmarks (i.e., HumanEval, MBPP, LeetCode, and BigCodeBench) and various LLMs (i.e., GPT-4o, GPT-3.5 Turbo, Llama 3.1, and GPT-o3 mini), comparing the results with two baseline methods: (a) zero-shot generation, and (b) iterative execution-based feedback without our code complexity insights. Experimental results show that our approach yields notable improvements, particularly with a smaller LLM (GPT-3.5 Turbo), where, e.g., Pass@1 increased by 35.71% compared to the baseline's improvement of 12.5% on the HumanEval dataset. The study expands experiments to BigCodeBench and integrates the method with the Reflexion code generation agent, leading to Pass@1 improvements of 20% (GPT-4o) and 23.07% (GPT-o3 mini). The results highlight that complexity-aware feedback enhances both direct LLM prompting and agent-based workflows.
[ "cs.SE", "cs.AI" ]
# 1 Introduction

Cloud architecture design requires integrating diverse services to fulfill requirements while optimizing system qualities including scalability, security, and cost-efficiency [1, 2]. A central challenge is refining ambiguous requirements into precise specifications [3], requiring architects to identify gaps, set priorities, and balance current versus future needs [4]. Such tasks demand extensive domain knowledge in cloud technologies and architectural principles, creating practical challenges due to the limited availability of experienced practitioners. In today’s cloud-native development environments, effective architecture design support addresses a critical need [5]. Research on Large Language Models (LLMs) for system development shows promise in various coding tasks [6, 7]. In architecture design, LLMs demonstrate potential for requirement clarification and design decision support [8]. Studies have shown LLMs’ effectiveness in requirements engineering, including elicitation [9], specification [10], and design patterns [8]. However, their use in cloud architecture design remains underexplored. Selecting configurations requires complex trade-off decisions among components, especially with ambiguous requirements [11]. Research is needed to assess whether LLMs have sufficient domain knowledge to support requirements refinement and architectural decisions, and to determine effective support methods. We present CloudArchitectBuddy (CA-Buddy), illustrated in Figure 1, which supports cloud architecture design through two key mechanisms.

[Figure 1: A user-driven chat-UI flow (freestyle dialog) contrasted with CA-Buddy’s system-driven flow (proactive support with structured states), cycling through Requirement Initialization, Architecture Proposal, Summarization/Inspection, Architecture Inquiry, and Preference Update.]

First, Structured State Management organizes design information into two structured components: UserState and ArchitectureState. UserState captures initial requirements and evolving specifications, while ArchitectureState represents design proposals, evaluations, and identified issues. By making design state explicitly visible, this structured representation enhances understanding and improves consistency throughout the iterative process. Second, Guided Decision Assistance implements a system-driven workflow through four processes: proposing designs based on requirements, evaluating architectural qualities, identifying potential concerns, and generating targeted questions for requirement refinement. This proactive approach reduces cognitive load and lowers the expertise barrier for effective architecture development. Unlike chat interfaces where design state is implicit and user-driven, our method provides explicit state and system-guided progress, offering systematic support for the iterative refinement of cloud architecture designs. We conducted a role-playing study with 16 industry practitioners. Participants used either CA-Buddy or ChatGPT to develop cloud architectures from brief requirements, and we analyzed architecture quality, user experience, and feedback. Results showed comparable design quality, but CA-Buddy was rated higher for ease of use and likelihood to recommend.
Participants valued the improved architecture visibility, systematic identification of requirement gaps, and reduced cognitive effort due to system guidance, but noted a need for free-text input for specifying requirements and technical discussion. Based on these findings, we summarize our contributions as follows: a) Introduction of a cloud architecture design support system with structured state management and guided decision assistance for systematic design. b) Empirical evaluation showing that explicit state representation improves design understanding and system-driven guidance reduces cognitive load. c) Demonstration of complementary strengths of system-driven and chat interfaces, suggesting integration of workflow and free-text input could overcome their limitations.

# 2 CloudArchitectBuddy

This paper introduces CloudArchitectBuddy (CA-Buddy), a cloud architecture design support system that guides users to appropriate architectures via iterative requirements refinement. CA-Buddy employs two mechanisms: (1) Structured State Management to track requirements and designs in organized formats, and (2) Guided Decision Assistance to orchestrate the process through system-driven interactions. This section details the system architecture, state models, and their updates throughout the design lifecycle.

[Figure 2: One design iteration. UserState (Subject, Preferences) and ArchitectureState (Services, Summary, Inspection, Inquiry) are updated across Steps I–V: (I) Requirement Initialization, (II) Architecture Proposal, (III) Summarization/Inspection, (IV) Architecture Inquiry, and (V) Preference Update, after which the architecture is re-proposed.]

# 2.1 System Design

CA-Buddy organizes the cloud architecture design process as a system-driven approach built around the following two mechanisms. Structured State Management: As shown in the upper part of Figure 2, the system maintains two iteratively updated state models: UserState, which tracks evolving requirements, and ArchitectureState, which stores design decisions, summary, inspections, and user inquiries. This structured approach maintains design consistency and enhances user understanding by explicitly representing the evolving design state. Guided Decision Assistance: As shown in the lower part of Figure 2, CA-Buddy directs the design process via system-driven steps. After users input initial requirements (Step I), the system sequentially generates proposals (Step II), summarizes and identifies issues (Step III), and creates focused inquiries (Step IV). User responses (Step V) update preferences for the next iteration. This workflow reduces cognitive load and systematically uncovers overlooked requirements and concerns.

# 2.2 State Models

UserState Panels (1) and (3) in Figure 3 illustrate UserState at different stages.
UserState has two fields: Subject, which stores initial user requirements (e.g., Host Jupyter on AWS ...); and Preferences, which captures evolving detailed requirements in a key-value format (key: value). Keys may represent system inquiries, alternatives, or service names; values reflect user intent, evaluation, or selection (e.g., Yes/No, Good/Bad, Pinned; detailed in §2.3). ArchitectureState Panels (2) and (4) in Figure 3 illustrate ArchitectureState and its adaptation to changing requirements, consisting of four fields: (1) Services for individual cloud resources and their configurations; (2) Summary for visualization and evaluations of key aspects; (3) Inspection for potential issues and alternative solutions; and (4) Inquiry for questions for refining requirements. These elements dynamically respond to user preferences (e.g., requesting GPU support changes both service configurations and related concerns). All these elements are generated by the LLM through system actions (detailed in §2.4). Figure 4 (a)–(c) show their user interface presentation.

[Figure 3: Example state contents across one iteration. (1) Step I: UserState with Subject “Host Jupyter on AWS and coding in local” and no preferences yet. (2) Steps II–IV: a proposed ArchitectureState (an EC2 t3.medium instance with public IP and Security Group, a Mermaid summary diagram, an Inspection flagging missing data persistence and direct internet exposure with alternatives such as EBS volumes and Session Manager, and Inquiry questions “Require GPU?” and “Save Data?”). (3) Step V: updated Preferences (Require GPU: Yes, Save Data: Yes, Use of Session Manager: Good, EC2: Pinned). (4) A redesigned ArchitectureState (p3.2xlarge instance with a 100GB gp3 EBS volume and Session Manager access, a new cost Inspection on expensive GPU instances with Spot-instance and auto-shutdown alternatives, and follow-up Inquiry questions on workload duration and automated backups).]

# 2.3 User Actions

Within the CA-Buddy workflow, all user actions incrementally update UserState (Subject and Preferences), enabling iterative requirements refinement and design progress with minimal effort. Users can convey their intent with the following four actions: Input Requirement: Users specify high-level goals in Subject (e.g., Host Jupyter on AWS and coding in local; see also Step I in Figure 2 and panel (1) in Figure 3). Answering system inquiries: Users clarify requirements by responding to system Yes/No questions (Figure 4-d); answers populate corresponding Preferences entries (e.g., Require GPU: Yes; see panel transitions from (2) to (3) in Figure 3).
Evaluating solution alternatives: Users rate technical options (from Inspection.Issues[].Alternatives) as Good/Bad (Figure 4-e); these judgments are registered within Preferences (e.g., Use of Session Manager: Good; see panel (3) in Figure 3). Marking essential services: Users “pin” specific architectural components across design iterations (Figure 4-f); this is also noted in Preferences (e.g., EC2: Pinned; see panel (3) in Figure 3).

# 2.4 System Actions

CA-Buddy implements four LLM-powered system actions that generate and update ArchitectureState based on UserState, as shown in panels (2) and (4) of Figure 3. Each action uses prompts with detailed instructions, examples, and current state. Our prompt templates are found in Appendix A.

[Figure 4: User interface views and interactions: (a) Architecture View (Services), (b) Summary View, (c) Inspection View, (d) answering system inquiries, (e) evaluating solution alternatives by click, and (f) marking essential services by click.]

Architecture Proposal generates the Services field by translating user requirements into concrete cloud resource configurations (Step II in Fig. 2). It takes UserState (Subject and Preferences) as input and outputs the proposed services and configurations in the structured Services field.
The initial iteration uses only Subject; subsequent iterations include Preferences and the full previous ArchitectureState (Services, Summary, Inspection, Inquiry) as context. This allows the LLM to reflect user intent and architectural feedback for incremental, consistent refinement. Prompts provide typical guidance; for example, instructing the model to focus on core system functions rather than auxiliary concerns (like CI/CD or monitoring) and to incorporate user goals and constraints from the injected states. Architecture Summarization creates the Summary field based on both UserState and the current Services in ArchitectureState (Step III in Figure 2). The output provides both a system diagram and concise written evaluations of key aspects, including security, reliability, scalability, cost, performance, storage, analytics, and operations. Prompts direct the LLM to structure its analysis around these cloud-specific dimensions, aiding user understanding of both system structure and quality attributes relevant to deployment. Architecture Inspection generates the Inspection field by analyzing the current UserState and Services to identify potential issues and actionable alternatives. The output is a structured list of concerns linked to specific services or decisions, each with supporting reasons and improvement suggestions. Prompts instruct the LLM to focus on fundamental architectural issues (e.g., data persistence, external exposure) and to generate practical, actionable alternatives, while avoiding application-layer feedback. Inquiry Generation constructs the Inquiry field using the full state: UserState plus Services, Summary, and Inspection in ArchitectureState. The output is a prioritized list of Yes/No questions to refine requirements and clarify architectural decisions. Prompts are designed to elicit high-impact, non-redundant questions, ordered by importance, and to avoid items already present in Preferences.
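The two state models and a Step V preference update can be sketched as plain dataclasses. The field names follow the paper, but this particular layout and the `answer_inquiry` helper are illustrative assumptions, not CA-Buddy's actual implementation:

```python
# Hedged sketch of the CA-Buddy state models; field names follow the paper,
# the dataclass layout and helper are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class UserState:
    subject: str                                      # initial requirement
    preferences: dict = field(default_factory=dict)   # key: value refinements

@dataclass
class ArchitectureState:
    services: list = field(default_factory=list)      # proposed resources
    summary: str = ""                                 # diagram + evaluations
    inspection: list = field(default_factory=list)    # issues + alternatives
    inquiry: list = field(default_factory=list)       # Yes/No questions

def answer_inquiry(user: UserState, question: str, answer: str) -> None:
    """Step V: a Yes/No answer becomes a Preferences entry (key: value)."""
    user.preferences[question] = answer

user = UserState(subject="Host Jupyter on AWS and coding in local")
arch = ArchitectureState(
    services=["EC2", "Security Group"],
    inquiry=["Require GPU?", "Save Data?"],
)
answer_inquiry(user, "Require GPU?", "Yes")
print(user.preferences)
```

In the real system, each subsequent iteration would pass the updated UserState together with the previous ArchitectureState back to the LLM-powered system actions.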
# 3 Preliminary Experiment: Evaluating LLMs on Cloud Architecture Certification Exams

We selected two prominent certifications: Google Cloud Professional Cloud Architect (GCP-PCA) and AWS Solutions Architect Professional (AWS-SAP). These exams feature architectural design questions with detailed requirements and multiple-choice options specifying the number of correct answers. We compiled 50 questions from each certification using privately sourced preparation materials. Three widely used LLM variants were tested: GPT-4o (gpt-4o-2024-08-06), GPT-4o-mini (gpt-4o-mini-2024-07-18), and ChatGPT-4o (chatgpt-4o-latest). Table 1 shows GPT-4o scored highest (88% for GCP-PCA, 82% for AWS-SAP). ChatGPT-4o scored 84% and 72%, respectively. Its relatively lower performance implies that its tuning is oriented toward general-purpose tasks rather than the specialized reasoning these exams require. GPT-4o-mini scored lowest, with 82% and 64%, consistent with its smaller parameter size and limited reasoning capacity. These results suggest that GPT-4o variants possess notable potential and broad competency in cloud architecture design. Table 1: Performance of LLMs on PCA and SAP

# 4 User Experiment: Cloud Architecture Design Tasks

We conducted a study with industry practitioners comparing cloud designs created using CA-Buddy and ChatGPT. This section examines design quality and identifies strengths and limitations of our approach.

# 4.1 Setups

Test Scenarios: We developed four test scenarios representing common architectural challenges in real-world product development: IoT Data Collection (GCP), E-Commerce (GCP), Travel Planning (AWS), and Matching Applications (AWS), as detailed in Table 2. Table 2: Cloud Architecture Test Scenarios Procedure: We conducted an 80-minute study with 16 industry practitioners (engineers and data scientists) who had experience in cloud architecture design.
The study consisted of three phases: instruction (10 min), design tasks (60 min), and feedback collection (10 min). Participants, assuming the role of lead engineers, designed cloud architecture solutions using either GPT-4o-based (gpt-4o-2024-08-06) CA-Buddy or ChatGPT (using the latest gpt-4o model available as of December 3, 2024) in a counterbalanced experimental design. For each scenario, participants documented cloud services, their purposes, and specific configurations within a 15-minute time limit. Table 3 presents an example of a participant’s output. Table 3: Architecture output format and example for the D scenario (Matching Applications on AWS) Evaluation: Infrastructure experts developed evaluation criteria for architectural designs by identifying key services for each scenario using a three-level classification: Level 1 (basic services), Level 2 (specialized managed services), and Level 3 (advanced service combinations with operational features). Based on these criteria, each solution received a score on a 3-point scale to measure architectural quality. Additionally, participants evaluated both tools using a 10-point Likert scale across three dimensions: I would like to use this tool frequently, I found this tool easy to use, and I would recommend this tool to others. Participants also provided qualitative feedback on the strengths and limitations of each tool.

# 4.2 Results

# 4.2.1 Design Quality Comparison:

Table 4 presents evaluation scores across four scenarios. In A. IoT Data Collection (GCP), ChatGPT scored higher in five of six topics, while CA-Buddy led in one. In B. E-commerce (GCP), CA-Buddy received higher scores in four of seven topics, ChatGPT in two, with one showing equivalent scores. In C. Travel Planning (AWS), both tools demonstrated comparable performance: CA-Buddy scored higher in three of six topics, ChatGPT in two, with one equivalent. In D.
Matching Application (AWS), ChatGPT scored higher in four of six topics, while CA-Buddy led in two. Only the chat feature topic showed a statistically significant difference between tools. The evaluation results indicate that our framework achieves comparable architectural quality to the chat interface, despite slightly lower scores in some topics. Performance variations across scenarios and topics show no consistent patterns that would suggest specific advantages of either approach. Table 4: Evaluation Topics with Architectural Examples for Different Levels and Evaluation Scores for Scenarios A, B, C, and D. ∗ and ∗∗ denote significant difference (p < 0.05) and trend (p < 0.10) with the Mann-Whitney U Test, respectively.

# 4.2.2 User Experience Analysis:

Table 5 shows the analysis of the user experience ratings. The ratings for frequency of use were comparable between ChatGPT at 7.06 and CA-Buddy at 7.00. CA-Buddy received higher ratings than ChatGPT in ease of use, scoring 7.93 compared to 6.75, with the difference showing a marginally significant trend (p < 0.10). CA-Buddy also scored higher in likelihood to recommend at 7.62 compared to ChatGPT’s 7.12. The results indicate that CA-Buddy provided a positive user experience for practitioners working on real-world cloud architectural design tasks. Table 5: Average Ratings for CA-Buddy and ChatGPT. ∗∗ denotes a significant trend (p < 0.10) with the Mann-Whitney U Test. Table 6: User Feedback Comparison between CA-Buddy and ChatGPT: Comparative analysis of user feedback on both systems across three key aspects - input method, output format, and interaction style. Numbers in brackets indicate the frequency of similar feedback.

# 4.2.3 User Feedback Analysis:

To identify what influenced participants’ experience, we conducted a thematic analysis of their feedback to uncover specific strengths and limitations of our system-driven approach.
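The significance tests reported in Tables 4 and 5 are Mann-Whitney U tests. As a minimal, self-contained illustration of the statistic (the ratings below are made up for the example, not the study's data):

```python
# Minimal sketch of the Mann-Whitney U statistic (ties count 0.5);
# in practice a library routine would also supply the p-value.

def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

ca_buddy = [8, 9, 7, 8]   # hypothetical ease-of-use ratings
chatgpt = [6, 7, 7, 8]
u = mann_whitney_u(ca_buddy, chatgpt)
print(u)  # compare against critical values or a normal approximation for p
```

With max(U_x, U_y) in hand, a p-value follows from exact tables for small samples or a normal approximation for larger ones.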
Table 6 categorizes common feedback into three aspects—input approach, output format, and interaction style—highlighting distinct characteristics of both systems. Input: System-guided vs. Free-text Our analysis revealed clear trade-offs between the systems. CA-Buddy’s system-guided method received positive feedback for intuitive design reflection and ease of use (4 participants each), with automated architecture updates being particularly valuable for initial idea verification. However, participants identified limitations in requirement specification capabilities (n=7) and topic discussion constraints (n=6), expressing a need for free-text input for adding requirements, modifying services, and engaging in technical discussions. ChatGPT’s free-text approach facilitated flexible requirements handling and discussions (n=8), enabling unrestricted exploration of specifications. However, participants noted that cloud expertise was necessary for effective query formulation and productive discussions (n=7). Output: Structured vs. Narrative Participant feedback revealed fundamental differences between the output formats. CA-Buddy received positive assessments for its structured presentation (n=4) and visual diagrams (n=9), which effectively communicated service relationships through organized list views and facilitated comprehension of the overall architecture at a glance. However, participants identified limitations in accessing detailed information (n=5), especially when needing specific verification or direct interaction with the LLM. ChatGPT’s narrative output provided flexible format and content customization (n=3), supporting detailed explanations through conversational dialogue.
However, participants experienced difficulty comprehending large-scale architectural proposals in text format $(n=6)$, particularly in maintaining a clear overview of complex architectures and connecting discussions to specific components. Interaction: Guided Workflow vs. Open Dialogue Feedback revealed distinct interaction approaches between the systems. CA-Buddy’s guided workflow received positive evaluation for its inspection capabilities and streamlined design progression $(n=5$ each), helping users identify overlooked requirements and develop comprehensive designs through systematic feedback. While the question-based updates enhanced design efficiency, participants identified specific workflow constraints $(n=2)$, including difficulties introducing new services after pinning others and tracking changes between iterations. ChatGPT’s chat interaction supported detailed investigations $(n=5)$ and provided a familiar interaction model $(n=3)$, facilitating service inquiries through conversational dialogue. However, participants experienced challenges maintaining conversation focus $(n=4)$, frequently losing design context during open-ended discussions. Analysis across these three dimensions demonstrated how different aspects of both approaches complement each other. The trade-offs observed in input flexibility, output representation, and interaction patterns consistently emphasized the potential value of integrating systematic control with free-text interaction. These findings suggest that incorporating a chat interface into CA-Buddy would enhance system flexibility, enabling users to discuss technical questions and address specific requirements.
A key consideration remains balancing structured guidance with free interaction while preserving the system’s core benefits of systematic design support. # 5 Related Work LLMs have demonstrated effectiveness in software development, ranging from coding [12, 6] to management and operations [13, 14]. In architectural design, LLMs support requirements engineering from elicitation [9] to specification [10], and assist with design pattern suggestion [8]. Challenges include domain-specific knowledge integration [9], context comprehension [15], and ambiguous requirement resolution [16], which are particularly significant in cloud architecture, where precision and systematic decision-making are essential [11]. Although preliminary, these investigations offer insights into integrating LLM capabilities into design support systems. Structured output aids language model integration by providing formats and standards [17]. Extracting structured knowledge from text enables accessible human reasoning [18]. In long-term interaction, LLMs struggle with context maintenance [19]. Structured information enhances context retention and enables more effective interactions [20]. Also, proactive agent studies have established AI-driven interaction patterns [21]. Proactive questioning can enhance self-reflection and problem understanding [22], while system-driven workflows improve user experience [23]. These advances suggest potential for cloud architecture design support, where structured information maintains design consistency and automated guidance assists systematic decision-making. Adapting these approaches to cloud architecture, with its reliance on expertise and reasoning, remains a research challenge.
Cloud architecture design is a complex process requiring both technical expertise and architectural knowledge to develop solutions from frequently ambiguous requirements. We present CloudArchitectBuddy, a system-driven cloud architecture design support application with two key mechanisms: (1) structured state management that enhances design understanding through explicit representation of requirements and architectural decisions, and (2) guided decision assistance that facilitates design progress through proactive verification and requirement refinement. Our study with 16 industry practitioners showed that while our approach achieved comparable design quality to a chat interface, participants rated our system higher for usability and appreciated its ability to help understand architectural relationships and identify missing requirements. However, participants also expressed a need for user-initiated interactions where they could freely provide design instructions and engage in detailed discussions with LLMs. These results suggest that integrating a chat interface into our structured and guided workflow approach would create a more practical solution, balancing systematic design support with conversational flexibility for comprehensive cloud architecture development.
[ "cs.SE", "cs.HC" ]
# 1 Introduction Simulators have proven to be an indispensable tool for synthesizing policies for robot control. Sim-to-real techniques are now the de facto standard in creating robust and performant policies for real robots [1, 2, 3, 4, 5]. Predominantly, these simulators have been used as black-box functions where policies are optimized with reinforcement learning (RL). If simulators provided informative gradients for actions and simulation parameters, a set of very powerful tools and applications would become available. We could generate policies with gradient-based MPC and RL for high-dimensional control problems that are much more sample-efficient than with classical deep RL. We could optimize the simulator to fit real-world observations to bridge the sim-to-real gap via gradient-based system identification. With both in place, policies for new tasks could be generated and deployed in seconds instead of hours. Figure 2: Left: Common computational graph for robot control synthesis. Center: If the initial ball radius $p$ is set too small, then the gradient $\nabla_{p} L$ resulting from a one-step-ahead prediction is zero. Right: Large constraint penetration for hard contacts results in large gradients $\nabla_{x_{k}} L$ which aggravates the estimation of the previous state $x_{k}$. What keeps us from doing it? Current simulators already provide gradients, but they are either too slow, incorrect, or uninformative. We care about sufficiently realistic simulations for real robot applications, which implies hard materials and stiff contact models. The principal source of problems is that simulators only compute a discrete approximation $\widetilde{F}$ of the continuous physics $F$. As a result, the gradient $\nabla \widetilde{F}$ (computed using automatic differentiation) may not accurately represent the true gradient $\nabla F$.
In the sequel, gradient “correctness” refers to how closely this gradient matches the gradient of the continuous model. We show that in penalty-based simulators, which are the standard [6] in multibody physics simulation, the gradients for dynamics with contacts are incorrect to the extent that their sign can be flipped. In addition, the simulations need to be fast, so we cannot resort to infinitesimally small simulation timesteps. Previous suggestions for time-of-impact correction [7, 8] in elastic simulators do not solve the problem for penalty-based collision models, whereas softening contact dynamics or resorting to unfeasibly small timesteps do improve gradients. We propose using adaptive timestep integrators under the hood while keeping the fixed-timestep external interface, creating a small computational overhead but yielding correct gradients, even for hard contact cases. Another obstacle for gradient-based optimization for policy generation and system identification is the non-informativeness of gradients about unmade contacts. For example, when a robot’s hand is not in contact with an object, then there is no gradient directing it to make contact for task facilitation. Therefore, drawing inspiration from prior works on contact-invariant optimization [9, 10], we propose Contacts From Distance (CFD) to address this problem. However, when done naively, introducing artificial contact forces considerably changes the simulation, resulting in an overly large sim-to-real gap. In order to preserve the original simulation, we propose using the straight-through-trick to introduce CFD only in the gradient computation (backward pass). We implement our changes in MuJoCo XLA, the JAX [11] implementation of MuJoCo [12], and additionally fix some low-level collision routines to be truly differentiable. As a result, our implementation makes use of GPU acceleration and automatic differentiation.
We show in proof-of-concept simulations that we can perform system identification and policy synthesis for collision-rich problems and hard contact in high-dimensional systems such as the musculoskeletal models shown in Fig. 1. # 2 Computing gradients in penalty-based simulators As illustrated in Fig. 2, we want to use automatic differentiation to obtain the correct gradient of a loss functional $L(\tilde{x}_{k+1}, x_{k}, a_{k}, p)$ where the next state of the robotic system is governed by the discrete-time dynamics $x_{k+1} = \operatorname{step}(x_{k}, a_{k}, p)$ with the state $x_{k} := x(t_{k}) = [q_{k}, v_{k}]$ at time $t_{k}$ consisting of the system’s generalized position $q_{k} \in \mathbb{R}^{n_{q}}$ and velocity $v_{k} \in \mathbb{R}^{n_{v}}$, control actions $a_{k} \in \mathbb{R}^{n_{a}}$, and model parameters $p \in \mathbb{R}^{n_{p}}$. Typically, multi-body dynamics simulators consist of a forward dynamics model and a numerical integration method. The forward dynamics govern the system’s acceleration $\dot{v}$ via the equations of motion $$ \dot{v} = M^{-1} \left( \tau - c + J^{\top} f \right) $$ with the joint-space inertia matrix $M(q) \in \mathbb{R}^{n_{v} \times n_{v}}$, the applied forces $\tau(x, a) \in \mathbb{R}^{n_{v}}$, the bias force $c(x) \in \mathbb{R}^{n_{v}}$, the constraint-space Jacobian $J(q) \in \mathbb{R}^{n_{c} \times n_{v}}$, and the constraint forces $f(x) = f_{\mathcal{E}} + f_{\mathcal{F}} + f_{\mathcal{C}} \in \mathbb{R}^{n_{c}}$ consisting of the equality constraint, the generalized friction, and the contact constraint forces.
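As a concrete illustration, here is a minimal sketch of this two-part pipeline: evaluating the equation of motion above and advancing the state with a semi-implicit Euler step. The constraint force $f$ is treated as a given input, and all numerical values are hypothetical toy choices, not MJX code.

```python
import numpy as np

def forward_dynamics(M, tau, c, J, f):
    # v̇ = M⁻¹ (τ − c + Jᵀ f); solve the linear system instead of inverting M
    return np.linalg.solve(M, tau - c + J.T @ f)

def semi_implicit_euler(q, v, vdot, h):
    # update the velocity first, then use the *new* velocity for the position
    v_next = v + h * vdot
    q_next = q + h * v_next
    return q_next, v_next

# toy 1-DoF example with hypothetical values
M = np.array([[2.0]])        # joint-space inertia
tau = np.array([4.0])        # applied force
c = np.array([0.0])          # bias force
J = np.zeros((0, 1))         # no active constraints ...
f = np.zeros(0)              # ... hence no constraint forces
vdot = forward_dynamics(M, tau, c, J, f)   # → [2.0]
q, v = semi_implicit_euler(np.array([0.0]), np.array([0.0]), vdot, h=0.1)
```

Solving the linear system rather than forming $M^{-1}$ explicitly is the numerically preferable route and mirrors how simulators typically evaluate this expression.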
The Jacobian $J$ maps joint velocity $v$ into constraint space, whereas $J^{\top}$ maps $f$ into generalized coordinate space. The contact-free dynamics are typically derived via recursive multi-body algorithms [13, 14], while computing contact forces requires an intricate interplay of collision detection and contact force computation. Simulators typically use numerical integrators such as semi-implicit Euler and 4th-order Runge-Kutta. As we will see throughout this work, numerical integration plays a critical role in understanding how contact forces may hinder correct gradient computation. For that, we will focus on MuJoCo XLA as a concrete penalty-based simulator. # 2.1 MuJoCo XLA MuJoCo XLA (MJX) is a reimplementation of MuJoCo using the Python library JAX, which enables GPU-parallelizable gradient computation via automatic differentiation. This capability allows MJX to compute gradients of rigid-body dynamics efficiently. MuJoCo has become the de facto standard in robotics, alongside other widely used simulators such as Bullet [15], Drake [16], DiffTaichi [7] and IsaacGym [17]. The growing importance of MuJoCo within the robotics community is further evidenced by a collaborative effort between NVIDIA and Google to develop the general-purpose simulator “Newton”, built on top of the recently introduced MuJoCo Warp [18]. In order to understand where gradients computed in MJX may fail to approximate the correct ones, it is essential to first develop a solid understanding of how MuJoCo computes constraint forces. Collision detection. A collision detector is an algorithm that, given the state $x$ and geometry parameterizations of two bodies, returns the signed distance $r(x)$ between potential contact point candidates along with the corresponding body surface normals. Collision detection is critical for contact force computation, as contacts are only considered active – that is, a contact point exerts contact forces – if $r(x) < 0$.
For a robot simulator to be fully differentiable, it is key that the collision detector is itself differentiable. While collision detection is not the primary focus of this work, we note that obtaining reliable gradients from MJX required addressing several discontinuities in its collision detection module. Contact force solver. When objects in a simulation come into contact, we need to determine not only where and when forces act, but also their direction and strength. MuJoCo solves this using an optimization approach inspired by Gauss’s principle [19, 20], minimizing the deviation from the unconstrained acceleration (what the system would do subject to constraint-free forces alone), while ensuring that all constraints are satisfied. Rather than solving rigid equations exactly, MuJoCo uses a softened formulation that blends two ideas: minimizing unexpected accelerations, and steering the system towards satisfying constraints using spring-damper dynamics. Figure 3: The position-level reference acceleration $h(r)$ and impedance $d(r)$ determine the contact force magnitudes that the solver can apply. The core of this approach lies in two functions: the impedance $d(r)$, and the position-level reference acceleration $h(r)$. The impedance $d(r) \in (0, 1)$ describes how strongly a constraint resists violation – it controls both how much force can be applied and how costly that force is to the solver. As illustrated in Fig. 3, the impedance is a polynomial spline function of the constraint violation $r$ specified by the parameters solimp $= (d_{0}, d_{w}, w, \text{midpoint}, \text{power})$. The reference acceleration $h(r)$, in turn, defines how the solver aims to correct constraint violations over time, based on the velocity and position error, and given solref parameters consisting of the time constant $t_{c}$ and the damping ratio $\phi_{d}$.
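To make the impedance concrete, here is a small sketch of a solimp-style polynomial spline, based on our reading of MuJoCo's documented convention; the default parameter values match MuJoCo's documented defaults, but treat the exact functional form as an assumption rather than the library's verbatim code.

```python
def impedance(r, d_min=0.9, d_max=0.95, width=0.001, midpoint=0.5, power=2.0):
    """Polynomial spline impedance d(r) in (0, 1), following our reading of
    MuJoCo's solimp = (d_min, d_max, width, midpoint, power) convention."""
    x = min(abs(r) / width, 1.0)          # normalized constraint violation
    if x <= midpoint:
        # lower branch: power-law rise from 0
        y = x ** power / midpoint ** (power - 1.0)
    else:
        # upper branch: mirrored power law, C1-continuous at the midpoint
        y = 1.0 - (1.0 - x) ** power / (1.0 - midpoint) ** (power - 1.0)
    return d_min + y * (d_max - d_min)
```

The spline rises from `d_min` at zero violation to `d_max` once the violation reaches `width`, with `midpoint` and `power` shaping the transition — matching the qualitative picture in Fig. 3.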
Internally, MuJoCo reformulates this as a convex optimization problem and solves it efficiently using a Newton method. For an in-depth description of MuJoCo’s contact computation, consult Suppl. B.1. Figure 4: Toy simulation of a point mass colliding with a surface that either resorts to an ideal-elastic contact model (similar to DiffTaichi) or a penalty-based contact model (similar to MuJoCo). Loss and gradients with respect to $q_{0}$. The loss $L = \left| q_{N} - 1 \right|$ uses the final state $q_{N}$, which is obtained after doing $N$ numerical integration steps starting from $q_{0}$ with $v_{0} = -1$. In the penalty-based simulation, reducing $h$ fixes the gradient oscillations, whereas for the ideal-elastic collision, reducing $h$ does not yield correct gradients. Figure 5: Simulation of geometric primitives thrown onto a surface. Contacts with stiff contact settings cause MJX’s gradients of the toss distance to deviate from central difference gradients, while DiffMJX maintains close agreement. # 3 Correcting the contact gradients of penalty-based simulation # 3.1 Analyzing gradient quality To evaluate the correctness of gradients in the presence of contacts, we unroll the trajectory of several primitives bouncing against a plane and observe the final position and its gradient with respect to the initial velocity. In Fig. 5 (MJX row), we observe that the gradient oscillates with an amplitude orders of magnitude larger than the loss. Point mass example. To better understand this issue, we simplify the setup even further. We consider a point mass colliding with a flat surface in the absence of gravity, simulated via a minimal version of MuJoCo’s penalty-based contact model, as shown in Fig. 4 (middle column). The loss is the distance between the point’s final state and a target. Depending on the initial height, it shows a cyclic pattern as before, which causes rapid sign flips in the gradient.
Similar oscillations have been discussed in previous works [7, 8]. DiffTaichi [7] describes the “time-of-impact” (TOI) problem, showing an oscillating pattern in gradients during the ideal elastic collision. We replicate this for comparison in our toy simulator (Fig. 4, right column). These oscillations arise from time-discretization errors in ODE integration that also affect penalty-based simulators, but have to be addressed differently. TOI correction does not fix gradients for penalty-based simulators, but small stepsizes do. For an ideal elastic collision, the ODE is piecewise linear, and at the contact, the velocity is inverted (see also Figure S4 in the Appendix). The TOI approach [7] dynamically splits the ODE into two linear segments at the time of contact, thereby eliminating the discretization error and yielding correct gradients. In penalty-based simulation, the ODE is also linear before and after the collision, but is non-linear with variable stiffness over the time of the collision. Therefore, it cannot be easily divided into large linear segments. A possible solution would be to reduce the stepsize until the discretization error decreases enough. This is also confirmed in our toy simulator in Fig. 4: Reducing the stepsize in the penalty-based simulation eventually yields correct gradients, whereas for the ideal elastic collision, the gradients remain incorrect. Unfortunately, simply reducing the stepsize is not a practical solution, as it necessitates extremely small steps that substantially increase the computational and memory demands of gradient computation. This trade-off raises a critical question: Can we retain realistic contacts while maintaining practical simulation speeds? Figure 6: Contacts from distance (CFD): To let MuJoCo create small contact forces between non-colliding objects, the reference acceleration $h(r)$ and impedance $d(r)$ are adjusted to be nonzero for positive signed distances $r > 0$. Figure 7: Top: Applying contact forces for $r > 0$ in the forward pass of the simulation causes a robot to hover. Bottom: The straight-through-trick is used to replace the original MJX derivative with the derivative of MJX+CFD, evaluated at the unaltered trajectory. Figure 8: Billiard simulation. Top: Force $F$ acts on the white ball to minimize the loss. Bottom: Despite the loss derivative being zero if the balls do not collide, DiffMJX with CFD provides informative gradient information. # 3.2 Adaptive stepsize integration: Numerical precision on demand A standard approach for integrating ODEs with variable stiffness is to use adaptive integrators. The idea behind adaptive stepsize integration is simple: Two numerical integrators of different order compute the next state. Their difference provides an estimate of the error. If the error is smaller than a given threshold, the step is accepted; otherwise the step is rejected and the procedure is repeated with a different stepsize chosen by a feedback controller. For further details on the rich history of adaptive stepsize integration, see e.g. [21, 22, 23, 24]. We use Diffrax [25] for efficient numerical integration in JAX, taking advantage of its solver flexibility and enhanced backpropagation modes. We extend it to integrate quaternions and stateful actuators seamlessly and make Diffrax compatible with MJX. Further details are provided in Suppl. B.2. Resolving problems of Collision Detection. Using an adaptive integrator with MJX eliminates oscillations in the bounce example. However, gradients for some object primitives (capsule, cylinder, box) experience offsets due to non-differentiable operations in the collision detector arising from discrete case distinctions. We smoothened them with standard proxies, leading to results in the bottom row of Fig. 5, where analytical gradients nearly match central differences.
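The accept/reject logic of adaptive stepsize integration can be sketched with a simple step-doubling error estimate: one full step is compared against two half steps, and a feedback rule rescales the stepsize. This is a generic illustration, not Diffrax's actual controller, and the tolerance and gain constants are arbitrary choices.

```python
import numpy as np

def rk4_step(f, x, h):
    # classic 4th-order Runge-Kutta step
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def adaptive_integrate(f, x, t_end, h=0.1, tol=1e-6):
    t = 0.0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        full = rk4_step(f, x, h)                          # one full step
        half = rk4_step(f, rk4_step(f, x, h / 2), h / 2)  # two half steps
        err = float(np.max(np.abs(full - half)))          # error estimate
        if err < tol:
            x, t = half, t + h                            # accept the step
        # simple controller: grow/shrink the stepsize toward the tolerance
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return x

# exponential decay ẋ = −x from x(0) = 1 over one unit of time
x_end = adaptive_integrate(lambda x: -x, np.array([1.0]), t_end=1.0)
```

In smooth regions the controller grows the step, while near a stiff collision the error estimate spikes and the step shrinks — exactly the "numerical precision on demand" behavior exploited here.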
Henceforth, we refer to MJX with the Diffrax integrator and smoothened collision detection as DiffMJX. # 4 Contact force gradients from a distance We can now compute the correct gradients of the dynamics. However, are these gradients also informative? For example, consider a simulated billiard table setup as shown in Fig. 8. The white ball should be shot to hit the black ball and make it reach the target. We exert a force $F$ on the white ball in the first timestep and would like to optimize this force to minimize the distance $L$ between the black ball and the target. As long as the balls collide, MJX with adaptive integration yields informative gradients $\nabla_{F} L$. However, if $F$ does not cause the balls to touch, then $\nabla_{F} L$ is zero and therefore uninformative for optimization. Therefore, we propose contacts from distance (CFD), a method for computing contact forces for positive signed distances $r$ in penalty-based simulation to yield informative gradients even if objects are not in contact. Creating artificial contact forces from a distance. As discussed in Sec. 2, the magnitude of contact forces in MuJoCo is determined by the impedance $d(r)$ and the position-level reference acceleration $h(r)$. To enable the solver to apply CFD, we can augment $d(r)$ as depicted in Fig. 6. Here, we do not alter $d(r)$ for $r < 0$, but instead extend $d(r)$ for $r > 0$. This continuation is parametrized by solimp-$CFD$ parameters $(d_{c}, d_{0}, w_{c}, m_{c}, p_{c})$. By default, the curve smoothly continues MuJoCo’s impedance at $d_{0}$ and tapers off to $d_{c} = 0$ to ensure smooth differentiability. The CFD width $w_{c}$ specifies the distance for which artificial contact forces are generated and, in our experiments, has been varied between $1\mathrm{cm}$ and $1\mathrm{m}$.
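A 1-D toy sketch of the CFD idea in JAX may help: a smooth surrogate force is nonzero for small positive distances, and the straight-through construction (described below for the backward pass) keeps the hard force in the forward value while taking the gradient from the surrogate. The stiffness, width, and force model here are hypothetical stand-ins for the solver, not MJX's actual contact pipeline.

```python
import jax
import jax.numpy as jnp

STIFF = 1000.0   # hypothetical penalty stiffness
W_C = 0.1        # CFD width w_c over which artificial forces act

def force_hard(r):
    # vanilla penalty contact: force only when penetrating (r < 0)
    return jnp.where(r < 0.0, -STIFF * r, 0.0)

def force_cfd(r):
    # smooth surrogate: softplus replaces the ReLU on the signed distance,
    # so a small force (and a nonzero gradient) exists for 0 < r up to ~w_c
    return STIFF * W_C * jax.nn.softplus(-r / W_C)

def force_st(r):
    # straight-through: forward value from the hard model,
    # derivative from the CFD surrogate
    s = force_cfd(r)
    return jax.lax.stop_gradient(force_hard(r) - s) + s

r = 0.05                       # positive distance: objects not in contact
value = force_st(r)            # 0.0 — identical to the hard model
grad = jax.grad(force_st)(r)   # negative and nonzero: informative without contact
```

The forward value is bit-identical to the hard model, yet the gradient "sees" the approaching surface — the 1-D analogue of how CFD supplies a direction toward making contact.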
Moreover, we soften the reference acceleration $h ( r )$ by replacing the ReLU function on the signed distance with a softplus (Fig. 6). This yields modified contact forces $f _ { \mathrm { C F D } }$ , and hence the modified ODE $\dot { v } _ { \mathrm { C F D } } = M ^ { - 1 } \left( \tau - c + J ^ { \top } f _ { \mathrm { C F D } } \right)$ . Designing a surrogate gradient estimator. CFD provides contact gradients for positive signed distances at the expense of physical realism. As shown in Fig. 7 (top), naive simulation with CFD results in floating objects. In this example, CFD can be seen as introducing a compliant layer mimicking a soft foam mat of thickness $w _ { c }$ being placed atop the actual surface. In turn, the quadrupedal robot appears to hover above the surface. As significantly altering simulation realism is not an option, we are faced with the question: Can CFD be used to obtain informative contact gradients without affecting simulation realism? To positively answer this question, we propose a simple idea: we keep the forward computation (vanilla MJX or DiffMJX), but enable CFD only in a simulation used for gradient computation via automatic differentiation. This is achieved by a variation of the straight-through-trick applied to the forward dynamics model, as detailed in Suppl. B.3. The resulting procedure is illustrated in Fig. 7 (bottom). Note that this approach requires the simulator’s forward pass to be computed twice. However, the gradient, which typically dominates the computational cost, is still only evaluated once. Revisiting the billiard example from the beginning of this section (Fig. 8), we see that by using the straight-through-trick, DiffMJX computes both the correct loss as well as gradients pointing towards the loss minimum despite the balls not being in contact. # 5 Evaluation # 5.1 System Identification Parameter identification in the presence of hard contacts remains a laborious task. 
If contacts are hard, even learning the dynamics of a cube requires impractical amounts of data for “naive” neural network regression [27]. In comparison, penalty-based simulators can capture hard contacts, but the lack of correct gradients hinders efficient parameter estimation [28]. Therefore, recent work introduced intricate analytical pipelines for cube geometry estimation [26, 29] and graph-based networks for learning contact dynamics [30]. Figure 9: Left: Estimation of a cube’s side length in MJX via gradient descent using multi-step-ahead predictions. Right: Experimental setup for collecting cube toss data. Image adapted from [26]. In what follows, we use the same real-world data as used in [26, 29, 30]. We demonstrate that DiffMJX with CFD enables simulator parameter estimation via standard gradient-based optimization. Dataset and training setup. We use the ContactNets dataset [26], which consists of 550 trajectories of a $10\mathrm{cm}$ acrylic cube that has been repeatedly tossed onto a wooden table. For training, trajectories are split into segments of length five such that the simulator is tasked to unroll four future steps starting from the initial state. Each segment and its prediction are fed to an $L_{2}$ loss whose gradient is used for gradient-based optimization using Adam [31]. For systems with stiff dynamics, we favor multi-step-ahead predictions over one-step-ahead predictions, as they capture the cumulative effects of prediction errors over time. This setup enables a fairer analysis of MJX without CFD, as even for too-small side length estimates, future state predictions can make contact to inform the optimization. Figure 10: Left: Simulation cost evolution of gradient-based MPC, with and without contacts from distance (CFD), vs sampling-based MPC. The number for sampling indicates the number of samples used per planning step.
Sampling has difficulties solving the dexterous in-hand manipulation task; gradients without CFD cannot solve the bionic tennis task. Right: Rendering of sampling-based MPC (1024 samples) vs gradient-based MPC with CFD on the in-hand manipulation task. The goal is to swap the balls; the MyoHand model is actuated by 39 muscle-tendon units. Training results. The training results are shown in Fig. 9. Surprisingly, the refined version of MJX and MJX with CFD already achieve good estimation results with an error of around $5\%$ relative to the ground truth. If the side length is initialized at $60\mathrm{mm}$ or $140\mathrm{mm}$, training either stalls fully or convergence is severely limited for MJX. The incorporation of CFD into MJX addresses convergence issues arising from poor initial parameters, while adaptive integration via DiffMJX significantly enhances estimation accuracy. DiffMJX improves estimation accuracy by dynamically adjusting the time steps during collisions, thereby mitigating time-discretization errors. Further details and parameter estimation experiments are provided in Suppl. C.2. To the best of our knowledge, we are the first to demonstrate parameter estimation of real-world cube dynamics using an automatically differentiable penalty-based simulator. While this represents a promising step forward, further experimentation is necessary to fully characterize the scope and limitations of this approach. # 5.2 Model Predictive Control Next, we conduct experiments on gradient-based model-predictive control. We use a simple MPC loop in which, at every planning step, we refine a sequence of controls over a 256-step horizon. In the gradient-based planner, we compute gradients by backpropagating the differentiable cost computed on the rollout of the current plan through the MJX simulator. The plan is then iteratively optimized using the Adam optimizer with a learning rate of 0.01 for 32 iterations.
Finally, the resulting plan is executed for 16 steps in simulation, after which the planning procedure is repeated with the previous plan as a warm start. As a simple baseline, we include a version of the predictive sampling planner from MuJoCo MPC [32], which at every planning step samples $k = \{64, 256, 1024\}$ trajectories and executes the lowest-cost plan. We significantly improved the performance of this planner for the muscular systems by resorting to brown noise [33] for sampling. Models. All our MPC experiments revolve around the muscle-tendon models provided by MyoSuite [34, 35]. Models include the MyoHand, modified from the MyoChallenge 2022, which comprises 29 bones, 23 joints, and 39 muscle-tendon units. We also use a bionic model modified from the MyoChallenge 2024, which comprises the MyoArm with 27 degrees of freedom and 63 muscle-tendon units, and the simulated modular prosthetic limb with 26 degrees of freedom and 17 motor control units. Dexterous in-hand manipulation: Determining crucial components in the MPC loop. First, we consider an in-hand manipulation task, where the goal is to swap two balls in the MyoHand. The cost is given by the Euclidean distance between each of the balls and the respective target location, fixed in the frame of the hand. The results are reported in Fig. 10. We find that gradient-based MPC can reliably solve this task, in contrast to the sampling-based planner. Overparameterization in the muscle-tendon model, with at least two muscles per joint, benefits the gradient-based planner by helping it escape local minima, similar to its role in optimizing overparametrized neural networks. In contrast, RL and sampling-based planners struggle to scale in overparametrized higher-dimensional systems [36]. First-order methods using differentiable simulation should thus be able to tackle more complex control problems.
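The receding-horizon loop described above can be sketched on a toy system. A double integrator stands in for the MJX rollout, finite differences stand in for backpropagation through the simulator, plain gradient descent stands in for Adam, and the horizon/iteration/execution counts are scaled down from the paper's 256/32/16; it also keeps the best iterate per planning step, as discussed below.

```python
import numpy as np

def sim_step(state, a):
    # toy double integrator standing in for the simulator
    x, v = state
    v = v + 0.1 * a
    return (x + 0.1 * v, v)

def rollout_cost(state, plan):
    # cost of unrolling the current plan: reach x = 0 with small controls
    cost = 0.0
    for a in plan:
        state = sim_step(state, a)
        cost += state[0] ** 2 + 1e-3 * a ** 2
    return cost

def grad_fd(state, plan, eps=1e-5):
    # finite-difference stand-in for backprop through the simulator
    g = np.zeros_like(plan)
    for i in range(len(plan)):
        hi, lo = plan.copy(), plan.copy()
        hi[i] += eps
        lo[i] -= eps
        g[i] = (rollout_cost(state, hi) - rollout_cost(state, lo)) / (2 * eps)
    return g

state, plan = (5.0, 0.0), np.zeros(16)
initial_cost = rollout_cost(state, plan)
for _ in range(12):                       # outer receding-horizon loop
    best, best_cost = plan.copy(), rollout_cost(state, plan)
    for _ in range(8):                    # gradient refinement of the plan
        plan = plan - 0.5 * grad_fd(state, plan)
        c = rollout_cost(state, plan)
        if c < best_cost:                 # keep the best iterate seen
            best, best_cost = plan.copy(), c
    plan = best
    for a in plan[:4]:                    # execute the first few actions
        state = sim_step(state, a)
    plan = np.concatenate([plan[4:], np.zeros(4)])   # shifted warm start
```

Keeping the best iterate rather than the last one mirrors the minimal-cost selection used in the paper's non-convex setting, and the shifted plan provides the warm start for the next planning step.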
Notably, this task does not require distant contacts because hand-ball interactions are frequent due to gravity. Moreover, we identify two crucial components of the gradient-based MPC loop. The first is gradient clipping, which is important as the scale of gradients changes massively in the presence of contacts. This technique has also been reported to be effective in previous works on differentiable simulation [37, 38]. The second is that we store the rollout cost of all gradient iterations and select the one with minimal cost. This is important as the cost landscape is highly non-convex, which is reflected in the non-monotonic cost evolution between the iterations of a planning step. Bionic tennis: Using CFD to solve complex control tasks with minimal task supervision. Finally, we test a more complex custom bionic tennis task on the bionic model. The task is to move a ball that is initially moving sideways to a target location below. This can be achieved by bouncing it back using a racket that is statically welded to the prosthetic hand, and then catching it at the target location with the muscle hand. In this task, the only cost supervision is the Euclidean distance of the ball to the target; the complicated sequential movement has to be discovered purely from this signal. We report our findings in Fig. 10; see Fig. 1 for a rendering. By design, the task initialization is such that the ball misses both hands, hence this task is not solvable by purely gradient-based MPC using vanilla MJX. On the other hand, we observe that adding the contacts-from-distance mechanism allows solving this task. The sampling-based planner is a strong baseline in this task and gets very close to solving it. Initially bouncing the ball back to the target only requires controlling the prosthetic arm, which is relatively low-dimensional, hence the sampling-based planner achieves this part easily.
However, as seen in the in-hand manipulation task, it struggles with precise control of the high-dimensional MyoHand.
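The two components identified above, gradient clipping and best-iterate selection, can be sketched on a toy one-dimensional "rollout". The cost model, learning rate, and clipping threshold below are our own illustrative stand-ins, not the MJX pipeline; a stiff penalty below $x = 0$ mimics a contact whose gradient scale explodes:

```python
import numpy as np

def rollout_cost_and_grad(actions, target=2.0, stiffness=1e4):
    """Toy 'rollout': the final position is the sum of the actions; a stiff
    penalty below x = 0 mimics a hard contact with a huge gradient scale."""
    x = actions.sum()
    cost = (x - target) ** 2 + stiffness * min(x, 0.0) ** 2
    grad = 2 * (x - target) + 2 * stiffness * min(x, 0.0)
    return cost, np.full_like(actions, grad)  # d cost / d a_i is identical here

def plan(actions, iters=50, lr=1e-2, clip=1.0):
    best_cost, best_actions = np.inf, actions.copy()
    for _ in range(iters):
        cost, grad = rollout_cost_and_grad(actions)
        # Best-iterate selection: the cost evolution is non-monotonic.
        if cost < best_cost:
            best_cost, best_actions = cost, actions.copy()
        # Gradient clipping: tame the exploding gradients near contact.
        norm = np.linalg.norm(grad)
        if norm > clip:
            grad = grad * (clip / norm)
        actions = actions - lr * grad
    return best_actions, best_cost

init = -0.1 * np.ones(16)        # start inside the 'contact' region (x < 0)
planned, cost = plan(init)
```

Without clipping, the first step near the contact would overshoot by several orders of magnitude; without best-iterate selection, the non-monotonic cost trace could return a worse plan than one visited earlier.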
Contact forces pose a major challenge for gradient-based optimization of robot dynamics as they introduce jumps in the system's velocities. Penalty-based simulators, such as MuJoCo, simplify gradient computation by softening the contact forces. However, realistically simulating hard contacts requires very stiff contact settings, which leads to incorrect gradients when using automatic differentiation. On the other hand, using non-stiff settings strongly increases the sim-to-real gap. We analyze the contact computation of penalty-based simulators to identify the causes of gradient errors. Then, we propose DiffMJX, which combines adaptive integration with MuJoCo XLA, to notably improve gradient quality in the presence of hard contacts. Finally, we address a key limitation of contact gradients: they vanish when objects do not touch. To overcome this, we introduce Contacts From Distance (CFD), a mechanism that enables the simulator to generate informative contact gradients even before objects are in contact. To preserve physical realism, we apply CFD only in the backward pass using a straight-through trick, allowing us to compute useful gradients without modifying the forward simulation.
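The straight-through construction described above can be sketched as follows. The penalty force, the exponential decay shape, and the gains are illustrative assumptions, not the actual MuJoCo contact model:

```python
import numpy as np

STIFFNESS = 1.0e3   # penalty gain (illustrative value)
REACH = 0.5         # range of the distance mechanism (illustrative value)

def hard_force(d):
    """Physical penalty force: active only during penetration (d < 0)."""
    return np.where(d < 0.0, -STIFFNESS * d, 0.0)

def soft_force_grad(d):
    """Derivative of a smooth surrogate force that decays with distance,
    so it stays informative even before the bodies touch."""
    return -STIFFNESS * np.exp(-np.maximum(d, 0.0) / REACH)

def cfd_force(d):
    """Straight-through trick: the forward value comes from the hard,
    physically realistic model, while the returned gradient comes from
    the smooth contacts-from-distance surrogate."""
    return hard_force(d), soft_force_grad(d)

# Bodies 0.3 apart: the forward force is zero, yet the gradient is nonzero
# and points toward closing the distance.
value, grad = cfd_force(np.array(0.3))
```

In an autodiff framework the same effect can be written as `soft + stop_gradient(hard - soft)`, which evaluates to the hard force in the forward pass while gradients flow through the soft surrogate.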
# 1 Introduction

The problem of estimating the mean of a random variable from a finite sample of its i.i.d. copies is fundamental in statistics and machine learning. When the random variable has exponentially decaying tails, the sample mean exhibits optimal or near-optimal performance. In particular, for $\varepsilon, \delta \in (0,1)$, it is known that $\mathrm{polylog}(1/\delta)/\varepsilon^{2}$ samples suffice to obtain an $\varepsilon$-close estimate with probability at least $1-\delta$. Recent studies have shown that heavier-tailed distributions, possessing only the first $p$ moments for $p \in (1,2]$, are better suited to model several important cases, including but not limited to large attention and language models [35, 36, 14, 13], certain applications in econometrics [9] and network science [5], and some classes of extremal processes [27]. Under this model, the sample mean suffers from sub-optimal performance, with a polynomial dependence on $1/\delta$ [10].

Median-of-Means (MoM) is a mean estimator that provides optimal performance guarantees even under heavy-tailed distributions [28, 15, 2]. Its popularity is largely due to its simplicity and efficiency. Indeed, its computation only requires splitting the sample into $\kappa$ batches, computing the sample mean of each batch, and then returning the median of these sample means, with an overall runtime that is quasi-linear in the number of observations. Notice that the user is only required to specify the number of batches, which should be of order $\log(1/\delta)$ for optimal performance. In this work, we analyze the performance of the MoM estimator in solving the following significant generalization of the mean estimation task, a problem typically referred to as uniform convergence.
Given a set of real-valued functions $\mathcal{F}$ over a domain $\mathcal{X}$, and a distribution $\mathcal{D}$ supported over $\mathcal{X}$, we consider the problem of estimating, simultaneously for each $f \in \mathcal{F}$, the mean $\mu(f) = \mathbb{E}[f(\mathbf{X})]$ from an i.i.d. sample $\mathbf{X} \sim \mathcal{D}^{n}$ generated from $\mathcal{D}$. In particular, our goal is to estimate the sample complexity of the MoM estimator, i.e., for all $\varepsilon, \delta \in (0,1)$, the smallest sample size $n^{*} = n(\varepsilon, \delta, \mathcal{F})$ such that for all $n \geq n^{*}$ the following holds:
$$ \operatorname*{\mathbb{P}}_{\mathbf{X} \sim \mathcal{D}^{n}} \left( \operatorname*{sup}_{f \in \mathcal{F}} \vert \mathrm{MoM}(f, \mathbf{X}) - \mu(f) \vert \leq \varepsilon \right) \geq 1 - \delta. \tag{1.1} $$

Uniform convergence has fundamental applications in machine learning. First, given an estimator $\theta$ satisfying (1.1), one can learn $\mathcal{F}$ by minimizing $\theta(f, \mathbf{X})$ over $\mathcal{F}$; notice that if $\theta$ is the sample mean, this corresponds to the standard Empirical Risk Minimization (ERM) paradigm. Second, such an estimator can be used to estimate the risk of any function in $\mathcal{F}$ using the same data as for training, which is particularly useful when a test set cannot be set aside, or when only an approximate solution to the empirical problem can be computed. Third, as the sample complexity of $\theta$ features a dependence on some complexity measure of $\mathcal{F}$, it can be used to perform model selection, i.e., to select a class of functions for the learning problem at hand before looking at the data.

Contributions. We provide the following contributions.
• We show that, when $\mathcal{F}$ admits a suitable distribution-dependent approximation of size $N_{\mathcal{D}}(\Theta(\varepsilon), \Theta((v_{p}/\varepsilon^{p})^{1/(p-1)}))$ and success parameter $\kappa_{0}(\Theta(\delta))$, where $v_{p}$ is a uniform upper bound on the $L_{p}$ norm of the functions in $\mathcal{F}$, the sample complexity of the MoM estimator is at most of order $(v_{p}/\varepsilon^{p})^{1/(p-1)} \big( \log\big( N_{\mathcal{D}}(\Theta(\varepsilon), \Theta((v_{p}/\varepsilon^{p})^{1/(p-1)})) / \delta \big) + \kappa_{0}(\Theta(\delta)) \big)$; see lemma 1 for the formal statement. Specifically, we require that, given $\varepsilon, \delta > 0$ and $m \in \mathbb{N}$, there exists a finite set $F_{(\varepsilon, m)}$ of size at most $N_{\mathcal{D}}(\varepsilon, m)$ s.t., for $\kappa$ large enough (larger than $\kappa_{0}(\Theta(\delta))$), with probability at least $1 - \delta$ the functions in $\mathcal{F}$ can be $\varepsilon$-approximated on most of the $\kappa$ batches of three i.i.d. random samples $\mathbf{X}_{0}, \mathbf{X}_{1}, \mathbf{X}_{2}$, each of size $m \cdot \kappa$. We argue that this condition on $\mathcal{F}$ is mild: in addition to capturing the canonical case of functions with bounded range, it also captures important classes of unbounded functions.

• To illustrate this, we show that our result applies to two important classes of unbounded functions. First, we prove a novel relative generalization error bound for the classical $k$-means problem that, compared with prior work, features an exponential improvement in the confidence term $1/\delta$. Second, we use the MoM estimator to derive sample complexity bounds for a large class of regression problems.
Our sample complexity bound only requires continuity of the loss function, along with a bound on the norm of the weight vectors. We also provide a more refined bound in the more specific case of Lipschitz losses. Moreover, our sample complexity bounds match the known results for exponentially tailed distributions, while only assuming the existence of the $p$-th moments for $p \in (1,2]$.

• To derive the main result, we introduce a novel symmetrization technique based on the introduction of an additional ghost sample, compared to the standard approach, which uses only one ghost sample. While the first ghost sample is used to symmetrize the mean, the second ghost sample is used to symmetrize the MoM. Analyzing two ghost samples simultaneously requires non-trivial modifications of the canonical discretization and permutation steps. The new discretization step allows for relaxing a uniform approximation over the functions to an approximation at the sample-mean level, requiring only that most of the sample means be approximated, which is a desirable feature when dealing with unbounded functions and heavy-tailed data.

# 2 Related Work

The study of uniform convergence for classes of real-valued functions is a fundamental topic in statistical learning theory. In the special case of binary-valued functions, a complete (worst-case) characterization is provided by the Vapnik-Chervonenkis dimension of the class [33]. When the range of the functions in $\mathcal{F}$ is bounded within an interval, the problem is known to be solved by the sample mean as soon as the fat-shattering dimension [16] of $\mathcal{F}$ is finite at all scales [1, 7, 11]. In particular, the best known upper bounds on the sample complexity of the sample mean are of the order of $\varepsilon^{-2}(\mathrm{fat}_{\varepsilon} + \log(1/\delta))$, where $\mathrm{fat}_{\varepsilon}$ denotes the fat-shattering dimension of $\mathcal{F}$ at scale $\varepsilon$.
The variant of the uniform convergence problem considered in this work is a special case of the formulation given in [29], except that we do not consider adversarial contamination. Differently from our work, the authors of [29] analyzed the performance of the trimmed mean, with a focus on the estimation error. Their bounds feature a dependence on a quantity related to the Rademacher complexity [8]. Similar results, but in the more restrictive case of $p \in (2,3]$, have also been obtained by [26], who considered a different class of estimators interpolating between Catoni's estimator [10] and the MoM. The estimation error of the MoM has been studied in [23, 18] for $p = 2$; these works also feature a dependence on a quantity related to the Rademacher complexity of $\mathcal{F}$. Compared to this line of work focusing on the estimation error, our focus is on the sample complexity, and is thus more aligned with the results of [1, 7, 11] discussed earlier in this section. We notice that the Rademacher complexity depends on the sample size, and thus it is sometimes problematic to derive an explicit sample complexity bound from an estimation error bound. Taking a sample complexity perspective allows for coping with function classes that are otherwise difficult to handle through the Rademacher complexity, such as $k$-means clustering with unbounded input and center spaces, and linear regression with general continuous losses. In that respect, we see our results for $p = 2$ as a complement to these works. We remark that our proof technique differs from the bounded-difference arguments proposed in [23, 18], and is instead based on a novel symmetrization argument that we believe may be of independent interest. Finally, while [29, 26, 18] consider both heavy-tailed distributions and adversarial contamination, in this work we focus exclusively on heavy-tailed distributions.
# 3 Sample Complexity Bound

In this section, we describe our main result and provide a sketch of its proof (we refer to appendix B for the details).

# 3.1 Notation

We will use boldface letters for random variables and non-boldface letters otherwise. Throughout the section, $\mathbf{b} \sim \{0,1\}$ will always denote the random variable defined by $\mathbb{P}_{\mathbf{b}}(\mathbf{b} = 0) = \mathbb{P}_{\mathbf{b}}(\mathbf{b} = 1) = 1/2$. For a natural number $\kappa \in \mathbb{N}$ we define the set $[\kappa] = \{1, \ldots, \kappa\}$. Given two sets $A$ and $B$, $B^{A}$ denotes the set of all functions from $A$ to $B$. For a function $f \in \mathcal{F} \subseteq \mathbb{R}^{\mathcal{X}}$, $m \in \mathbb{N}$, $X \in \mathcal{X}^{m}$, and a distribution $\mathcal{D}$ over $\mathcal{X}$, the notations $\mu_{(f,X)}$ and $\mu_{f}$ denote the sample mean of $f$ on $X$, i.e. $\mu_{(f,X)} = \sum_{i=1}^{m} f(X_{i})/m$, and its expectation over $\mathcal{D}$, $\mu_{f} = \mathbb{E}_{\mathbf{X} \sim \mathcal{D}}[f(\mathbf{X})]$. Furthermore, for $p \in (1,2]$ we write $\mathcal{F} \subseteq L_{p}(\mathcal{D})$ iff $\operatorname*{sup}_{f \in \mathcal{F}} \mathbb{E}_{\mathbf{X} \sim \mathcal{D}}[f(\mathbf{X})^{p}] < \infty$. For $\kappa \in \mathbb{N}$, if $a_{1}, \ldots, a_{\kappa} \in \mathbb{R}$ and we let $a_{(1)} \leq \ldots \leq a_{(\kappa)}$ denote the numbers in ascending order, we define their median as
$$ \mathrm{median}(a_{1}, \ldots, a_{\kappa}) = a_{(\lceil \kappa/2 \rceil)}. $$
With the definition of the median, we can now define the MoM estimator: it takes as input a sample consisting of $\kappa$ blocks of $m$ samples each, together with a function $f$ whose mean is to be estimated, and returns
$$ \mathrm{MoM}(f, X) = \mathrm{median}\big( \mu_{(f, X^{1})}, \ldots, \mu_{(f, X^{\kappa})} \big), \qquad X = (X^{1}, \ldots, X^{\kappa}) \in (\mathcal{X}^{m})^{\kappa}. $$
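The estimator just described can be implemented in a few lines. This is a minimal sketch; the batch-splitting and remainder conventions are our own:

```python
import numpy as np

def median_of_means(x, kappa):
    """Median-of-Means: split the sample into kappa batches, average each
    batch, and return the median of the batch means."""
    x = np.asarray(x, dtype=float)
    m = len(x) // kappa                       # batch size (any remainder is dropped)
    batch_means = x[: m * kappa].reshape(kappa, m).mean(axis=1)
    return np.median(batch_means)

# On the sample 1..9 with kappa = 3, the batch means are 2, 5, 8.
est = median_of_means(np.arange(1, 10), kappa=3)
```

A single heavy-tailed outlier corrupts only one batch mean, so the median, and hence the estimate, is unaffected; this is the source of the estimator's robustness.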
Finally, for $m, \kappa \in \mathbb{N}$ and $X_{0}, X_{1}, X_{2} \in (\mathcal{X}^{m})^{\kappa}$, for each $l \in \{0, 1, 2\}$ we rely on the following notation:
$$ X_{l} = ( X_{l,1}^{1}, \ldots, X_{l,m}^{1}, \ldots, X_{l,1}^{\kappa}, \ldots, X_{l,m}^{\kappa} ). $$

# 3.2 Proof Overview

Here, we provide a high-level and intuitive explanation of the proof of our main theorem; fig. 1 offers a pictorial way of thinking about the argument.

Figure 1: Proposed symmetrization approach. Red crosses and green ticks denote mean estimates that failed or succeeded, respectively. Step 1: Symmetrization of the MoM with a ghost sample. Step 2: Imbalance-preserving discretization of the class $\mathcal{F}$. Step 3: Permutation of the sample means between the MoM of interest and the “ghost” MoM.

The first thing we observe is that for the MoM to fail to provide a uniform error bound over the functions in $\mathcal{F}$, there must exist a function $f \in \mathcal{F}$ for which at least half of its $\kappa$ mean estimates in the MoM fail to be $\varepsilon$-close to the true mean. However, for a fixed function $f$, we know that the MoM is likely to have almost all of its $\kappa$ mean estimates correct. We leverage this in the first step of the analysis by introducing a “ghost” MoM that has almost all of its $\kappa$ mean estimates correct for the function $f \in \mathcal{F}$ on which the MoM of interest had at least half of its $\kappa$ mean estimates incorrect. This step is depicted in fig. 1 as “Step 1,” where the red crosses and green ticks indicate whether a mean estimate is incorrect or correct. We observe that the MoM of interest has at least half of its $\kappa$ mean estimates incorrect, whereas the “ghost” MoM has very few errors among its $\kappa$ mean estimates for the function $f \in \mathcal{F}$.
This imbalance between incorrect mean estimates in the MoM of interest and the “ghost” MoM is key for “Step 3,” which argues that such an imbalance is unlikely due to the symmetry introduced in this step; “Step 1” can thus be seen as a symmetrization of the MoM. The next step in the analysis involves discretizing the function class $\mathcal{F}$ into a finite-sized function class $\tilde{\mathcal{F}}$. Normally, this step would be performed by creating a net over the function class $\mathcal{F}$ for any possible estimating sequence. However, since we aim to provide bounds for potentially unbounded function classes with finite moments, we adopt an alternative discretization. Specifically, we only require the discretization $\tilde{\mathcal{F}}$ of the function class $\mathcal{F}$ to ensure that most of the mean estimates in both the MoM of interest and the “ghost” MoM remain the same, thus preserving the imbalance between incorrect mean estimates created in “Step 1.” Furthermore, we also allow the discretization to fail on a negligible number of mean estimates. This step is depicted as “Step 2” in fig. 1, where we observe that the discretization $\tilde{\mathcal{F}}$ of $\mathcal{F}$ preserves the imbalance between the incorrect mean estimates of the MoM of interest and the “ghost” MoM. The final step of the analysis, made possible by the previous two steps, is to analyze the probability that there exists a function $\tilde{f} \in \tilde{\mathcal{F}}$ for which the MoM of interest has close to half or more of its mean estimates incorrect, while the “ghost” MoM has very few incorrect mean estimates. First, since $\tilde{\mathcal{F}}$ is finite, it suffices to analyze a fixed $\tilde{f} \in \tilde{\mathcal{F}}$ and then take a union bound over $\tilde{\mathcal{F}}$.
For a fixed $\tilde{f}$, we leverage the symmetry introduced in “Step 1,” namely that the mean estimates of both the MoM of interest and the “ghost” MoM are i.i.d. Thus, we may view the $\kappa$ mean estimates of the MoM of interest and the “ghost” MoM as being “assigned” as follows: draw two mean estimates, $\mu_{1,(\tilde{f}, \mathbf{X})}$ and $\mu_{2,(\tilde{f}, \mathbf{X}')}$, and with probability $1/2$ assign $\mu_{1,(\tilde{f}, \mathbf{X})}$ to the MoM of interest and $\mu_{2,(\tilde{f}, \mathbf{X}')}$ to the “ghost” MoM; otherwise, assign $\mu_{2,(\tilde{f}, \mathbf{X}')}$ to the MoM of interest and $\mu_{1,(\tilde{f}, \mathbf{X})}$ to the “ghost” MoM. Repeat this process $\kappa$ times. Under this perspective, it is intuitively clear that a large imbalance between the number of incorrect mean estimates of the MoM of interest and of the “ghost” MoM (the former having close to half or more of its mean estimates incorrect while the latter has very few) is unlikely. This final step is depicted as “Step 3” in fig. 1, where the mean estimates of the MoM of interest and the “ghost” MoM are permuted. The above high-level analysis contrasts with the conventional symmetrization-discretization-permutation argument, which operates at the level of estimating sequences, whereas the above analysis symmetrizes, discretizes, and permutes the mean estimates.

# 3.3 Main Result

To present our main result, we need the following definitions of discretization for a function class $\mathcal{F}$.

Definition 1 ($(\varepsilon, m)$-Discretization). Let $0 < \varepsilon < 1$, $m, \kappa \in \mathbb{N}$, and $X_{0}, X_{1}, X_{2} \in (\mathcal{X}^{m})^{\kappa}$.
A function class $\mathcal{F} \subseteq \mathbb{R}^{\mathcal{X}}$ admits an $(\varepsilon, m)$-discretization on $X_{0}, X_{1}, X_{2}$ if there exists a set of functions $F_{(\varepsilon, m)}$ defined on $X_{0}, X_{1}, X_{2}$ satisfying the following: for each $f \in \mathcal{F}$, there exist $\pi(f) \in F_{(\varepsilon, m)}$ and $I_{f} \subset [\kappa]$ s.t. $|I_{f}| \leq \frac{2\kappa}{625}$, and for each $i \in [\kappa] \backslash I_{f}$ and all $l \in \{0, 1, 2\}$, it holds that
$$ \sum_{j=1}^{m} \left| \frac{f(X_{l,j}^{i}) - \pi(f)(X_{l,j}^{i})}{m} \right| \leq \varepsilon. $$
We call $|F_{(\varepsilon, m)}|$ the size of the $\varepsilon$-discretization of $\mathcal{F}$ on $X_{0}, X_{1}, X_{2}$. The above definition requires only that we can approximate most of the $\kappa$ sample means of a function $f \in \mathcal{F}$ appearing in its MoM with those of its neighbor $\pi(f) \in F_{(\varepsilon, m)}$, on all three samples $X_{0}, X_{1}, X_{2}$. The following definition extends this idea to the distribution level, by requiring that, with large probability, the samples $\mathbf{X}_{0}, \mathbf{X}_{1}, \mathbf{X}_{2}$ allow $\mathcal{F}$ to admit an $(\varepsilon, m)$-discretization.

Definition 2 ($\mathcal{D}$-Discretization). Let $\mathcal{D}$ be a distribution over $\mathcal{X}$. A function class $\mathcal{F} \subseteq \mathbb{R}^{\mathcal{X}}$ admits a $\mathcal{D}$-discretization if there exist a threshold function $\kappa_{0} \in \mathbb{N}^{[0,1]}$, a threshold $\varepsilon_{0} > 0$, and a size function $N_{\mathcal{D}} \in \mathbb{N}^{\mathbb{R}^{2}}$, s.t.
for any $0 < \varepsilon < \varepsilon_{0}$, $0 < \delta < 1$, $m \geq 1$, and $\kappa \geq \kappa_{0}(\delta)$, with probability at least $1 - \delta$ (over $\mathbf{X}_{0}, \mathbf{X}_{1}, \mathbf{X}_{2} \sim (\mathcal{D}^{m})^{\kappa}$) it holds that: $\mathcal{F}$ admits an $(\varepsilon, m)$-discretization $F_{(\varepsilon, m)}$ on $\mathbf{X}_{0}, \mathbf{X}_{1}, \mathbf{X}_{2}$ and $|F_{(\varepsilon, m)}| \leq N_{\mathcal{D}}(\varepsilon, m)$.

Remark 1. The following comments are in order.

• If a function class $\mathcal{F} \subseteq \mathbb{R}^{\mathcal{X}}$ and a threshold $\varepsilon_{0} > 0$ are such that, for any distribution $\mathcal{D}'$ over $\mathcal{X}$ and any $0 < \varepsilon \leq \varepsilon_{0}$, $\mathcal{F}$ admits an $\varepsilon$-net $\mathcal{N}_{\varepsilon}(\mathcal{D}', \mathcal{F}, L_{1})$ in $L_{1}$ with respect to $\mathcal{D}'$, i.e. for any $f \in \mathcal{F}$ there exists $\pi(f) \in \mathcal{N}_{\varepsilon}(\mathcal{D}', \mathcal{F}, L_{1})$ such that
$$ \operatorname*{\mathbb{E}}_{\mathbf{X} \sim \mathcal{D}'} \left[ |f(\mathbf{X}) - \pi(f)(\mathbf{X})| \right] \leq \varepsilon, $$
then for any $0 < \varepsilon \leq \varepsilon_{0}$, $m, \kappa \in \mathbb{N}$, and $X_{0}, X_{1}, X_{2} \in (\mathcal{X}^{m})^{\kappa}$, $\mathcal{F}$ admits an $(\varepsilon, m)$-discretization of size at most $\operatorname*{sup}_{\mathcal{D}'} |\mathcal{N}_{2\varepsilon/1875}(\mathcal{D}', \mathcal{F})|$.
Furthermore, for any data-generating distribution $\mathcal{D}$ over $\mathcal{X}$, $\mathcal{F}$ admits a $\mathcal{D}$-discretization with threshold function $\kappa_{0} = 1$, threshold $\varepsilon_{0}$, and size function $N_{\mathcal{D}}(\varepsilon, m) = \operatorname*{sup}_{\mathcal{D}'} |\mathcal{N}_{2\varepsilon/1875}(\mathcal{D}', \mathcal{F})|$. See appendix A for a proof of this claim.

• Let $p \geq 1$. For a function class $\mathcal{F}$, it is known that the existence of an $\varepsilon$-net w.r.t. $L_{p}(\mathcal{D}')$ implies the existence of an $\varepsilon$-net w.r.t. $L_{1}(\mathcal{D}')$. Thus, any $\mathcal{F}$ admitting a net w.r.t. the $L_{p}$ metric also admits an $(\varepsilon, m)$-discretization and a $\mathcal{D}$-discretization.

• Any function class $\mathcal{F}$ bounded between $[-1, 1]$ and featuring finite fat-shattering dimension $\mathrm{FAT}_{\varepsilon}$ at every scale $\varepsilon > 0$ admits an $\varepsilon$-net $\mathcal{N}(\varepsilon, L_{1}(\mathcal{D}))$ for any $\mathcal{D}$ of size at most $\exp(O(\mathrm{FAT}_{O(\varepsilon)} \ln(1/\varepsilon)))$ (see Corollary 5.4 in [30]). This result can also be extended to classes bounded in $[-M, M]$ for $M \geq 1$ with appropriate rescaling.

• We remark that the definition of a $\mathcal{D}$-discretization is allowed to depend on realizations of the samples, as opposed to the stricter definition of having one fixed discretization that holds for all realizations of the samples. This view of considering discretizations that depend on the samples is (to our knowledge) the most common in the literature; see e.g.
[31, Definition 27.1], [17, Lemma 7], and [30, Theorem 4.4 and Corollary 5.4]. We are now ready to present our main result.

Theorem 1 (Main theorem). Let $\mathcal{F} \subseteq \mathbb{R}^{\mathcal{X}}$ and let $\mathcal{D}$ be a distribution over $\mathcal{X}$. Suppose that $\mathcal{F}$ admits a $\mathcal{D}$-discretization with threshold function $\kappa_{0} \in \mathbb{N}^{[0,1]}$, threshold $\varepsilon_{0}$, and size function $N_{\mathcal{D}} \in \mathbb{N}^{\mathbb{R}^{2}}$. Moreover, suppose that for some $p \in (1,2]$, $\mathcal{F} \subseteq L_{p}(\mathcal{D})$, and let $v_{p} \geq \operatorname*{sup}_{f \in \mathcal{F}} \mathbb{E}_{\mathbf{X} \sim \mathcal{D}} \left[ |f(\mathbf{X}) - \mathbb{E}_{\mathbf{X} \sim \mathcal{D}}[f(\mathbf{X})]|^{p} \right]$. Then, there exist absolute numerical constants $c_{2}, c_{3} > 0$ s.t., for any $\varepsilon \in (0, \varepsilon_{0})$ and $\delta \in (0, 1)$, if
$$ m \geq \left( \frac{400 \cdot 16^{p} v_{p}}{\varepsilon^{p}} \right)^{\frac{1}{p-1}}, \qquad \kappa \geq \operatorname*{max} \Bigg\{ \kappa_{0}(\delta/8), \frac{10^{6} \ln(2)}{99}, 50 \ln\left( \frac{8 N_{\mathcal{D}}(\varepsilon/16, m)}{\delta} \right) \Bigg\}, $$
it holds that
$$ \operatorname*{\mathbb{P}}_{\mathbf{X} \sim (\mathcal{D}^{m})^{\kappa}} \left( \operatorname*{sup}_{f \in \mathcal{F}} |\mathrm{MoM}(f, \mathbf{X}) - \mu(f)| \leq \varepsilon \right) \geq 1 - \delta. $$

Remark 2. The following comments are in order.
• To provide some intuition on the MoM parameters $m, \kappa, \varepsilon, \delta$: the dependence on $\varepsilon$ determines the number of samples $m$ needed for each mean estimate, and is chosen so that each estimate is within $O(\varepsilon)$ of the true expectation with constant probability. Furthermore, both $\varepsilon$ and $\delta$ also enter the number of mean estimates, $\kappa$. The intuition for the choice of $\kappa$ is that the MoM, which aggregates $\kappa$ mean estimates, boosts the constant success probability to $1 - \delta/N_{\mathcal{D}}(\varepsilon, m)$ for any function in the discretization, after which one can take a union bound.

• The sample complexity bound implied by our theorem is of the order of
$$ \left( \frac{v_{p}}{\varepsilon^{p}} \right)^{\frac{1}{p-1}} \left( \log N_{\mathcal{D}}\big( \varepsilon/16, (v_{p}/\varepsilon^{p})^{\frac{1}{p-1}} \big) + \log\left( \frac{1}{\delta} \right) \right), $$
when not taking into account $\kappa_{0}(\delta/8)$, and therefore of order $(v_{p}/\varepsilon^{p})^{1/(p-1)} \log(v_{p}/(\varepsilon\delta))$ as soon as $N_{\mathcal{D}}(\varepsilon/16, (v_{p}/\varepsilon^{p})^{1/(p-1)}) = O((v_{p}/\varepsilon)^{\alpha})$ for some constant $\alpha$. We notice that this is optimal (up to log factors) [12].

• In order to apply this result, one needs to find a $\mathcal{D}$-discretization of $\mathcal{F}$. In Remark 1 we have seen that this is possible if $\mathcal{F}$ is bounded. In the next section, we show two concrete examples of unbounded classes that admit such a cover.
• The estimation error bound in [18], which holds for $p = 2$, is instead of the order of
$$ \frac{\mathcal{R}(\mathcal{F}, \mathcal{D}, n)}{n} + \sqrt{\frac{\log(1/\delta)}{n}}, $$
where $n$ is the sample size and $\mathcal{R}(\mathcal{F}, \mathcal{D}, n)$ is the Rademacher complexity of $\mathcal{F}$ over a sample of size $n$. To derive a sample complexity bound from this, one should be able to get an explicit estimate of $\mathcal{R}(\mathcal{F}, \mathcal{D}, n)$ in terms of $n$. This has already been done for certain classes of bounded or well-behaved functions (see for example [8, 24]); it may be interesting to see whether a relaxed notion of discretization, in the same spirit as Definition 2, can lead to explicit bounds even for broader classes of functions.

• We remark that the magnitude of the constants in theorem 1 is rather large. This is due to the symmetrization, discretization, and permutation arguments, and was not optimized. Notice that it is not uncommon for symmetrization-discretization-permutation arguments to yield large constants; for instance, [7] has a constant of approximately 1500 (read from the proof of Theorem 9 (5)), later improved, asymptotically, by [11] with a constant of approximately 5000 (read from point (j), page 13), and [22] has a constant of approximately 500 (read from lemma 9).

# 3.4 Analysis

We now give the proof of theorem 1. We start by noting that for the MoM to fail, it must be the case that at least $1/2$ of the mean estimates are $\varepsilon$-away from the expectation, since in the converse case the median is $\varepsilon$-close to the expectation. Thus, to bound the failure probability of the MoM it suffices to upper bound the probability of the former event. Before presenting the upper bound, we introduce the following auxiliary random variables, which will be useful throughout this section.
For $\flat \in \{ > , \leq \} , \kappa , m \in \mathbb { N } , \varepsilon > 0 , \mathbf { X } _ { 0 } , \mathbf { X } _ { 1 } , \mathbf { X } _ { 2 } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } , X _ { 0 } , X _ { 1 } , X _ { 2 } \in ( \mathcal { X } ^ { m } ) ^ { \kappa }$ , and a random vector $\mathbf { b } \in \{ 0 , 1 \} ^ { \kappa }$ with independent coordinates satisfying $\mathbb { P } \left[ { \bf b } _ { i } = 0 \right] = \mathbb { P } \left[ { \bf b } _ { i } = 1 \right] = 1 / 2$ , we define $$ \hat { \mathbf { S } } _ { \mathbf { b } } ^ { ( \flat ) } ( f , \varepsilon ) = \sum _ { i = 1 } ^ { \kappa } \frac { \mathbb { 1 } \{ | \mu _ { f , \mathbf { X } _ { \mathbf { b } _ { i } } ^ { i } } - \mu _ { f , \mathbf { X } _ { 2 } ^ { i } } | \;\flat\; \varepsilon \} } { \kappa } , ~ \mathbf { S } _ { \mathbf { b } } ^ { ( \flat ) } ( f , \varepsilon ) = \sum _ { i = 1 } ^ { \kappa } \frac { \mathbb { 1 } \{ | \mu _ { f , X _ { \mathbf { b } _ { i } } ^ { i } } - \mu _ { f , X _ { 2 } ^ { i } } | \;\flat\; \varepsilon \} } { \kappa } . $$ In words, $\hat { \mathbf { S } } _ { \mathbf { b } } ^ { ( > ) } ( f , \varepsilon )$ is the fraction of the $\kappa$ mean estimates of $f$ that are $\varepsilon$ -away from the corresponding mean estimates of $f$ on the sample $\mathbf { X } _ { 2 }$ , and $\hat { \mathbf { S } } _ { \mathbf { b } } ^ { ( \leq ) } ( f , \varepsilon )$ is the fraction that are $\varepsilon$ -close to them. We also consider $\hat { \mathbf { S } } _ { 1 - \mathbf { b } } ^ { ( \flat ) } ( f , \varepsilon )$ and $\mathbf { S } _ { 1 - \mathbf { b } } ^ { ( \flat ) } ( f , \varepsilon )$ , where $1 - \mathbf { b } = \left( 1 - \mathbf { b } _ { 1 } , \ldots , 1 - \mathbf { b } _ { \kappa } \right)$ . Now we can state our symmetrization lemma. Lemma 1 (Symmetrization). 
Let $p \in ( 1 , 2 ]$ , $\varepsilon > 0$ , and $\mathcal { D }$ a distribution over $\mathcal { X }$ . Suppose that ${ \mathcal { F } } \subseteq L _ { p } ( { \mathcal { D } } )$ , and let $v _ { p } \geq \operatorname* { s u p } _ { f \in { \mathcal { F } } } \mathbb { E } _ { \mathbf { X } \sim { \mathcal { D } } } \left[ | f ( \mathbf { X } ) - \mathbb { E } _ { \mathbf { X } \sim { \mathcal { D } } } \left[ f ( \mathbf { X } ) \right] | ^ { p } \right]$ . Then, if $m \geq \left( \frac { 4 0 0 \cdot 1 6 ^ { p } v _ { p } } { \varepsilon ^ { p } } \right) ^ { \frac { 1 } { p - 1 } }$ and $\kappa \geq \frac { 1 0 ^ { 6 } \ln { ( 2 ) } } { 9 9 }$ , we have that $$ \underset { \mathbf { X } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } } { \mathbb { P } } \left( \exists f \in \mathcal { F } : \sum _ { i = 1 } ^ { \kappa } \frac { \mathbb { 1 } \left\{ | \mu _ { f , \mathbf { X } ^ { i } } - \mu _ { f } | > \varepsilon \right\} } { \kappa } \geq \frac { 1 } { 2 } \right) \leq 4 \underset { \substack { \mathbf { X } _ { 0 } , \mathbf { X } _ { 1 } , \mathbf { X } _ { 2 } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } \\ \mathbf { b } \sim \{ 0 , 1 \} ^ { \kappa } } } { \mathbb { P } } \left( \exists f \in \mathcal { F } : \hat { \mathbf { S } } _ { \mathbf { b } } ^ { ( > ) } \Big ( f , \frac { 1 5 \varepsilon } { 1 6 } \Big ) \geq a , \ \hat { \mathbf { S } } _ { 1 - \mathbf { b } } ^ { ( \leq ) } \Big ( f , \frac { 2 \varepsilon } { 1 6 } \Big ) > b \right) , $$ where $a = { \frac { 4 8 0 1 } { 1 0 0 0 0 } } , b = { \frac { 9 7 0 1 } { 1 0 0 0 0 } }$ , $\mathbf { b } \sim \{ 0 , 1 \} ^ { \kappa }$ , and $\mathbf { X } _ { 0 } , \mathbf { X } _ { 1 } , \mathbf { X } _ { 2 } \sim ( { \cal D } ^ { m } ) ^ { \kappa }$ . We notice that the above lemma captures the situation described in “Step 1" of Figure 1. 
That is, we have related the event of the MoM failing to the event that one MoM has many incorrect mean estimates (measured against the mean estimates on $\mathbf { X } _ { 2 }$ , which play the role of the true mean), while a second MoM has few incorrect mean estimates. Notice that the $\mathbf { b } _ { i }$ ’s have been set up for the permutation argument, which will show that this imbalance is unlikely. Before applying the permutation step, we discretize the function class to enable a union bound over the event that the mean estimate fails for each function in the class. The following lemma relies on the existence of an $( \varepsilon , m )$ -discretization: by definition, moving from $\mathcal { F }$ to its discretization only slightly changes the number of mean estimates that are good approximations of the “true" mean estimate $\mu _ { f , \mathbf { X } _ { 2 } ^ { i } }$ or, conversely, the number of bad mean estimates. In other words, the discretization preserves the imbalance created in the symmetrization step. Lemma 2 (Discretization). Let $m , \kappa \in \mathbb { N }$ , $\varepsilon > 0$ , and $X _ { 0 } , X _ { 1 } , X _ { 2 } \in ( \mathcal { X } ^ { m } ) ^ { \kappa }$ . Suppose that $\mathcal { F }$ admits an $\big ( { \frac { \varepsilon } { 1 6 } } , m \big )$ -discretization $F _ { ( \varepsilon / 1 6 , m ) }$ over $X _ { 0 } , X _ { 1 } , X _ { 2 }$ . 
Then, it holds that $$ \begin{array} { r l } & { \underset { \mathbf { b } \sim \{ 0 , 1 \} ^ { \kappa } } { \mathbb { P } } \Big ( \exists f \in \mathcal { F } : \mathbf { S _ { b } ^ { ( \scriptscriptstyle > ) } } \Big ( f , \frac { 1 5 \varepsilon } { 1 6 } \Big ) \geq a , \mathbf { S } _ { 1 - \mathbf { b } } ^ { ( \scriptscriptstyle \leq ) } \Big ( f , \frac { 2 \varepsilon } { 1 6 } \Big ) > b \Big ) } \\ & { \leq | F _ { ( \varepsilon / 1 6 , m ) } | \underset { f \in F _ { ( \varepsilon / 1 6 , m ) } } { \operatorname* { s u p } } \underset { \mathbf { b } \sim \{ 0 , 1 \} ^ { \kappa } } { \mathbb { P } } \Big ( \mathbf { S } _ { \mathbf { b } } ^ { ( \scriptscriptstyle > ) } \Big ( f , \frac { 1 2 \varepsilon } { 1 6 } \Big ) \geq c , \mathbf { S } _ { 1 - \mathbf { b } } ^ { ( \scriptscriptstyle > ) } \Big ( f , \frac { 1 2 \varepsilon } { 1 6 } \Big ) < d \Big ) , } \end{array} $$ where $a = \frac { 4 8 0 1 } { 1 0 0 0 0 } , b = \frac { 9 7 0 1 } { 1 0 0 0 0 } , c = \frac { 4 7 6 9 } { 1 0 0 0 0 } , d = \frac { 3 3 1 } { 1 0 0 0 0 }$ . The above lemma is described as “Step 2" in Figure 1. That is, the function class has been discretized while preserving the imbalance in the number of incorrect mean estimates, and the problem has now been reduced to analyzing an imbalance of correct mean estimates between two MoMs for a single function. The following permutation lemma states that having two sets of mean estimates that differ widely in their quality happens with probability exponentially small in the number of estimates $\kappa$ . This lemma models the situation depicted as “Step 3" in Figure 1. Lemma 3 (Permutation). Let $m , \kappa \in { \mathbb { N } }$ , $\varepsilon > 0$ , and $X _ { 0 } , X _ { 1 } , X _ { 2 } \in ( \mathcal { X } ^ { m } ) ^ { \kappa }$ . 
Then, for any $f \in \mathbb { R } ^ { \mathcal { X } }$ , it holds that $$ \mathbb { P } _ { \mathbf { b } \sim \{ 0 , 1 \} ^ { \kappa } } \Big ( \mathbf { S _ { b } ^ { ( > ) } } \Big ( f , \frac { 1 2 \varepsilon } { 1 6 } \Big ) \geq c , \mathbf { S } _ { 1 - \mathbf { b } } ^ { ( > ) } \Big ( f , \frac { 1 2 \varepsilon } { 1 6 } \Big ) < d \Big ) \leq \exp \Big ( - \frac { \kappa } { 5 0 } \Big ) , $$ where $c = { \frac { 4 7 6 9 } { 1 0 0 0 0 } }$ and $d = \frac { 3 3 1 } { 1 0 0 0 0 }$ . We are now ready to show the proof of Theorem 1. Proof of Theorem 1. For the MoM to fail to provide a uniform estimation for $\mathcal { F }$ , it must be the case that there exists a function $f \in { \mathcal { F } }$ s.t. at least $1 / 2$ of the mean estimates of its MoM fail. That is, $$ \underset { \mathbf { X } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } } { \mathbb { P } } \left( \operatorname* { s u p } _ { f \in \mathcal { F } } \vert \mathrm { M o M } ( f , \mathbf { X } ) - \mu ( f ) \vert > \varepsilon \right) \leq \underset { \mathbf { X } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } } { \mathbb { P } } \left( \exists f \in \mathcal { F } : \sum _ { i = 1 } ^ { \kappa } \frac { \mathbb { 1 } \{ \vert \mu _ { f , \mathbf { X } ^ { i } } - \mu _ { f } \vert > \varepsilon \} } { \kappa } \geq \frac { 1 } { 2 } \right) . $$ Since $m \geq \left( \frac { 4 0 0 \cdot 1 6 ^ { p } v _ { p } } { \varepsilon ^ { p } } \right) ^ { \frac { 1 } { p - 1 } }$ and $\kappa \geq \frac { 1 0 ^ { 6 } \ln { ( 2 ) } } { 9 9 }$ , Lemma 1 yields $$ \underset { \mathbf { X } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } } { \mathbb { P } } \left( \exists f \in \mathcal { F } : | \mathrm { M o M } ( f , \mathbf { X } ) - \mu _ { f } | > \varepsilon \right) \leq 4 \underset { \substack { \mathbf { X } _ { 0 } , \mathbf { X } _ { 1 } , \mathbf { X } _ { 2 } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } \\ \mathbf { b } \sim \{ 0 , 1 \} ^ { \kappa } } } { \mathbb { P } } \left( \exists f \in \mathcal { F } : \hat { \mathbf { S } } _ { \mathbf { b } } ^ { ( > ) } \Big ( f , \frac { 1 5 \varepsilon } { 1 6 } \Big ) \geq a , \ \hat { \mathbf { S } } _ { 1 - \mathbf { b } } ^ { ( \leq ) } \Big ( f , \frac { 2 \varepsilon } { 1 6 } \Big ) > b \right) , $$ with $a = \frac { 4 8 0 1 } { 1 0 0 0 0 }$ and $b = { \frac { 9 7 0 1 } { 1 0 0 0 0 } }$ . Now let $G$ denote the event that the samples $\mathbf { X } _ { 0 } , \mathbf { X } _ { 1 } , \mathbf { X } _ { 2 }$ are s.t. $\mathcal { F }$ admits an $( \varepsilon / 1 6 , m )$ -discretization of size at most $N _ { \mathcal { D } } ( \varepsilon / 1 6 , m )$ over them. Then, since by hypothesis $\kappa \geq \kappa _ { 0 } ( \delta / 8 )$ , it holds that $$ \underset { \mathbf { X } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } } { \mathbb { P } } ( \exists f \in \mathcal { F } : | \mathrm { M o M } ( f , \mathbf { X } ) - \mu _ { f } | > \varepsilon ) \leq 4 \underset { \mathbf { X } _ { 0 } , \mathbf { X } _ { 1 } , \mathbf { X } _ { 2 } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } } { \mathbb { E } } \left[ \mathbb { 1 } \{ G \} \underset { \mathbf { b } \sim \{ 0 , 1 \} ^ { \kappa } } { \mathbb { P } } \left( \exists f \in \mathcal { F } : \hat { \mathbf { S } } _ { \mathbf { b } } ^ { ( > ) } \Big ( f , \frac { 1 5 \varepsilon } { 1 6 } \Big ) \geq a , \ \hat { \mathbf { S } } _ { 1 - \mathbf { b } } ^ { ( \leq ) } \Big ( f , \frac { 2 \varepsilon } { 1 6 } \Big ) > b \right) \right] + \delta / 2 . $$ Since for each realization $X _ { 0 } , X _ { 1 } , X _ { 2 }$ of $\mathbf { X } _ { 0 } , \mathbf { X } _ { 1 } , \mathbf { X } _ { 2 } \in G$ , $\mathcal { F }$ admits an $( \varepsilon / 1 6 , m )$ -discretization, Lemma 2 implies that $$ \begin{array} { r } { \underset { \mathbf { b } \sim \{ 0 , 1 \} ^ { \kappa } } { \mathbb { P } } \Big ( \exists f \in \mathcal { F } : \hat { \mathbf { S } } _ { \mathbf { b } } ^ { ( > ) } \Big ( f , \frac { 1 5 \varepsilon } { 1 6 } \Big ) \geq a , \hat { \mathbf { S } } _ { 1 - \mathbf { b } } ^ { ( \leq ) } \Big ( f , \frac { 2 \varepsilon } { 1 6 } \Big ) > b \Big ) } \\ { \leq | F _ { ( \varepsilon / 1 6 , m ) } | \underset { f \in F _ { ( \varepsilon / 1 6 , m ) } } { \operatorname* { s u p } } \underset { \mathbf { b } \sim \{ 0 , 1 \} ^ { \kappa } } { \mathbb { P } } \Big ( \mathbf { S } _ { \mathbf { b } } ^ { ( > ) } \Big ( f , \frac { 1 2 \varepsilon } { 1 6 } \Big ) \geq c , \qquad \mathbf { S } _ { 1 - \mathbf { b } } ^ { ( > ) } \Big ( f , \frac { 1 2 \varepsilon } { 1 6 } \Big ) < d \Big ) , } \end{array} $$ where $c = { \frac { 4 7 6 9 } { 1 0 0 0 0 } }$ and $d = \frac { 3 3 1 } { 1 0 0 0 0 }$ . Notice that the term on the right-hand side, by Lemma 3, can be bounded by $\exp ( - \kappa / 5 0 )$ . Thus, if we take $\kappa \geq 5 0 \ln \left( \frac { 8 N _ { \mathcal { D } } ( \varepsilon / 1 6 , m ) } { \delta } \right)$ , the above is at most $\delta / 8$ , which, combined with (3.4), implies that $$ \underset { \mathbf { X } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } } { \mathbb { P } } \left( \exists f \in \mathcal { F } : | \operatorname { M o M } ( f , \mathbf { X } ) - \mu _ { f } | > \varepsilon \right) \leq \delta , $$ which concludes the proof. # 4 Applications In this section we present two applications of Theorem 1. # 4.1 k-Means Clustering over Unbounded Spaces $k$ -means clustering is one of the most popular clustering paradigms. 
Here, we provide a new sample complexity bound that improves upon existing works for the case of unbounded inputs and centers. Preliminaries. Given $x , y \in \mathbb { R } ^ { d }$ , we let $d ( x , y ) ^ { 2 } = | | x - y | | ^ { 2 }$ . We use $k \in \mathbb { N }$ to denote the number of centers, and $Q \in \mathbb { R } ^ { d \times k }$ to denote a set of centers, identified with the columns of $Q$ . For $\boldsymbol { x } \in \mathbb { R } ^ { d }$ and $Q \in \mathbb { R } ^ { d \times k }$ , we let the loss of $Q$ on $x$ be defined as $d ( x , Q ) ^ { 2 } = \operatorname* { m i n } _ { q \in Q } \left| \left| x - q \right| \right| ^ { 2 }$ , where the minimum is taken over the columns of $Q$ . For a distribution $\mathcal { D }$ over $\mathbb { R } ^ { d }$ , we let $\mu = \mathbb { E } _ { \mathbf { X } \sim \mathcal { D } } \left[ \mathbf { X } \right]$ and $\sigma ^ { 2 } = \mathbb { E } _ { \mathbf { X } \sim \mathcal { D } } \left[ d ( \mathbf { X } , \mu ) ^ { 2 } \right]$ . In the $k$ -means clustering problem, we are given i.i.d. samples from $\mathcal { D }$ , and the objective is to minimize the risk $R ( Q ) = \mathbb { E } _ { \mathbf { X } \sim \mathcal { D } } [ d ( \mathbf { X } , Q ) ^ { 2 } ]$ over $Q \in \mathbb { R } ^ { d \times k }$ . Our goal is to provide a uniform estimation bound for all possible sets of $k$ centers. We consider the class of normalized loss functions defined below. For $Q \in \mathbb { R } ^ { d \times k }$ , we define $$ f _ { Q } ( x ) = \frac { 2 d ( x , Q ) ^ { 2 } } { \sigma ^ { 2 } + \mathbb { E } _ { { \mathbf { X } } \sim { \mathcal { D } } } \left[ d ( { \mathbf { X } } , Q ) ^ { 2 } \right] } , $$ and $\mathcal { F } _ { k } = \left\{ f _ { Q } \, | \, Q \in \mathbb { R } ^ { d \times k } \right\}$ . 
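To make the estimator and the $k$-means loss above concrete, here is a minimal pure-Python sketch (our own illustration, not the paper's code): the $\kappa m$ loss values are split into $\kappa$ groups of size $m$, and the median of the group means gives $\mathrm{MoM}(d(\cdot, Q)^2, \mathbf{X})$.

```python
import random
import statistics

def kmeans_loss(x, Q):
    """Squared distance d(x, Q)^2 from point x to its nearest center in Q."""
    return min(sum((xi - qi) ** 2 for xi, qi in zip(x, q)) for q in Q)

def mom(values, kappa):
    """Median of Means: split `values` into kappa equal-size groups and
    return the median of the group means."""
    m = len(values) // kappa
    group_means = [sum(values[i * m:(i + 1) * m]) / m for i in range(kappa)]
    return statistics.median(group_means)

# Toy demo: 2-d points around two clusters, with centers Q at the cluster means.
random.seed(0)
pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]
pts += [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(500)]
Q = [(0.0, 0.0), (5.0, 5.0)]
losses = [kmeans_loss(x, Q) for x in pts]
estimate = mom(losses, kappa=10)  # MoM estimate of E[d(X, Q)^2]
```

The normalized class $\mathcal{F}_k$ divides this loss by $\sigma^2 + \mathbb{E}[d(\mathbf{X}, Q)^2]$, which is what makes the uniform guarantee scale-invariant.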
The class $\mathcal { F } _ { k }$ was introduced in [4] and provides several advantages over the standard loss class, including scale-invariance and the fact that it allows one to derive uniform bounds even when the support of $\mathcal { D }$ is unbounded and $Q \in \mathbb { R } ^ { d \times k }$ is unconstrained. The next theorem provides a bound on the sample complexity of the MoM for this problem. Theorem 2. Let $k \in \mathbb { N }$ and let $\mathcal { D }$ be a distribution over $\mathbb { R } ^ { d }$ s.t. $\sigma ^ { 2 } < \infty$ . Suppose that there exists a $p \in ( 1 , 2 ]$ s.t. ${ \mathcal { F } } _ { k } \subseteq L _ { p } ( { \mathcal { D } } )$ , and $\infty > v _ { p } \geq \operatorname* { s u p } _ { f \in \mathcal { F } _ { k } } \mathbb { E } _ { \mathbf { X } \sim \mathcal { D } } \left[ | f ( \mathbf { X } ) - \mathbb { E } _ { \mathbf { X } \sim \mathcal { D } } \left[ f ( \mathbf { X } ) \right] | ^ { p } \right]$ . Then, $\mathcal { F } _ { k }$ admits a $\mathcal { D }$ -discretization with $$ \kappa _ { 0 } ( \delta ) = 2 \cdot 8 0 0 0 ^ { 2 } \ln { ( e / \delta ) } , \varepsilon _ { 0 } = 1 , N _ { \mathcal { D } } ( \varepsilon , m ) = 8 \left( \frac { 7 2 \cdot 1 0 ^ { 4 } \cdot 8 0 0 0 e } { \varepsilon } \right) ^ { 1 4 0 k d \ln { ( 6 k ) } } . $$ Moreover, let $\varepsilon , \delta \in ( 0 , 1 )$ ; if $$ m \geq \left( \frac { 4 0 0 \cdot 1 6 ^ { p } v _ { p } } { \varepsilon ^ { p } } \right) ^ { \frac { 1 } { p - 1 } } , \kappa \geq \operatorname* { m a x } \left( \kappa _ { 0 } ( \delta / 8 ) , \frac { 1 0 ^ { 6 } \ln { ( 2 ) } } { 9 9 } , 5 0 \ln { \left( \frac { 8 N _ { \mathcal { D } } ( \varepsilon / 1 6 , m ) } { \delta } \right) } \right) $$ then $$ \underset { \mathbf { X } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } } { \mathbb { P } } \Big ( \operatorname* { s u p } _ { f \in \mathcal { F } _ { k } } | \mathrm { M o M } ( f , \mathbf { X } ) - \mu ( f ) | \leq \varepsilon \Big ) \geq 1 - \delta . $$ Remark 3. The following comments are in order. • The sample complexity bound implied by Theorem 2 is of the order of $$ \frac { v _ { p } ^ { \frac { 1 } { p - 1 } } } { \varepsilon ^ { \frac { p } { p - 1 } } } \left( d k \log k \log \frac { 1 } { \varepsilon } + \log \frac { 1 } { \delta } \right) . $$ Notice that the $d k \log k$ term depends on the number of centers $k$ and the dimensionality $d$ of the problem, and resembles a complexity term. • The literature on generalization bounds for $k$ -means is rich and has mostly focused on distributions with bounded support and centers lying in a norm ball of a given radius [21, 6, 20, 3, 24, 32]. The work closest to ours, in that it considers inputs and centers from unbounded sets, is [4]. In that work, the authors analyze the problem of uniform estimation over $\mathcal { F } _ { k }$ with the sample mean and show a sample complexity bound of the order of $$ \frac { \mathcal { K } } { \varepsilon ^ { 2 } \delta } \left( d k \log k + \log \frac { 1 } { \delta } \right) , $$ where ${ \mathcal K } = \mathbb { E } [ d ( { \mathbf X } , \mu ) ^ { 4 } ] / \sigma ^ { 4 }$ is the kurtosis of $\mathcal { D }$ . We start by noticing that [4] requires the finiteness of the kurtosis, while our result only requires $\mathcal { D }$ to have a finite variance and $\mathcal { F } _ { k } \subseteq L _ { p } ( \mathcal { D } )$ for some $p \in ( 1 , 2 ]$ . To see that our condition is weaker, observe that when $\mathcal { K } < \infty$ then $\mathcal { F } _ { k } \subseteq L _ { 2 } ( \mathcal { D } )$ (see Lemma 5 and the relation between $f \in \mathcal { F } _ { k }$ and $s$ ). Focusing on the case of $p = 2$ , we have the following observations. First, notice that our sample complexity bound is exponentially better in the confidence term $1 / \delta$ ; this is due to the stronger concentration properties of the MoM compared to the sample mean. 
Second, in (4.2) the confidence term $1 / \delta$ multiplies the complexity term $d k \log k$ , which is undesirable. In contrast, in our sample complexity bound these two terms are decoupled. We finally note that our bound suffers from a slightly worse dependence on $\varepsilon$ due to the extra log term. • We have focused on providing uniform estimation guarantees for the class $\mathcal { F } _ { k }$ of normalized losses. In practice, one may instead be interested in bounding the risk $R ( Q )$ of a certain set of centers $Q$ , given its performance on the sample. Calculations show that one can get such a bound from Theorem 2 (see Appendix C for details). In particular, under the same assumptions of Theorem 2, for each $Q \in \mathbb { R } ^ { d \times k }$ the following holds with probability at least $1 - \delta$ : $$ R ( Q ) \asymp ( 1 \pm \varepsilon ) \Bigl ( \mathrm { M o M } ( d ( \cdot , Q ) ^ { 2 } , \mathbf { X } ) \pm \frac { \varepsilon \sigma ^ { 2 } } { 2 } \Bigr ) , $$ where the notation $a \asymp ( 1 \pm \varepsilon ) ( b \pm c )$ is equivalent to $\Omega ( ( 1 - \varepsilon ) ( b - c ) ) = a = O ( ( 1 + \varepsilon ) ( b + c ) )$ . # 4.2 Linear Regression with General Losses Linear regression is a classical problem in machine learning and statistics. It is typically studied either in the special case of the squared loss or for (possibly) non-smooth Lipschitz losses. We consider instead the more general class of continuous losses and show a new sample complexity result that holds for a broad class of distributions. Preliminaries. In this section, $\ell \in [ 0 , \infty ) ^ { \mathbb { R } }$ will denote a continuous loss function unless otherwise specified. We consider the function class obtained by composing linear predictors of bounded norm with $\ell$ . 
That is, for $W > 0$ , we define $$ \mathcal { F } _ { W } = \left\{ \ell ( \langle w , \cdot \rangle - \cdot ) \, | \, w \in \mathbb { R } ^ { d } , | | w | | ^ { 2 } \leq W \right\} . $$ Thus, if $f \in { \mathcal { F } } _ { W }$ , then $f ( ( x , y ) ) = \ell ( \langle ( w , - 1 ) , ( x , y ) \rangle ) = \ell ( \langle w , x \rangle - y )$ , for any $\boldsymbol { x } \in \mathbb { R } ^ { d }$ , $y \in \mathbb { R }$ . For $a , b > 0$ , we define the number $\alpha _ { \ell } ( a , b )$ as the largest positive real s.t. for $x , y \in [ - a , a ]$ with $| x - y | \leq \alpha _ { \ell } ( a , b )$ we have that $| \ell ( x ) - \ell ( y ) | \leq b$ . Since $\ell$ is continuous and $[ - a , a ]$ is a compact interval, $\ell$ is uniformly continuous on $[ - a , a ]$ , and hence $\alpha _ { \ell } ( a , b )$ is well-defined. Furthermore, when $\ell$ is $L$ -Lipschitz, then $\alpha _ { \ell } ( a , b ) = b / L$ . The next result provides a uniform bound that holds for general continuous losses. Theorem 3. Let $W > 0$ , let ${ \mathcal { D } } _ { X }$ and $\mathcal { D } _ { Y }$ be distributions over $\mathbb { R } ^ { d }$ and $\mathbb { R }$ respectively, and let $\mathcal { D } = \mathcal { D } _ { X } \times \mathcal { D } _ { Y }$ . Suppose that there exists a $p \in ( 1 , 2 ]$ s.t. 
${ \mathcal { F } } _ { W } \subseteq L _ { p } ( { \mathcal { D } } )$ , and $\infty > v _ { p } \geq \operatorname* { s u p } _ { f \in \mathcal { F } _ { W } } \mathbb { E } _ { \mathbf { Z } \sim \mathcal { D } } \left[ | f ( \mathbf { Z } ) - \mathbb { E } _ { \mathbf { Z } \sim \mathcal { D } } \left[ f ( \mathbf { Z } ) \right] | ^ { p } \right]$ . Then $\mathcal { F } _ { W }$ admits a $\mathcal { D }$ -discretization with $$ \kappa _ { 0 } ( \delta ) = 4 \cdot 1 2 5 0 ^ { 2 } \ln { ( e / \delta ) } , \varepsilon _ { 0 } = \infty , N _ { \mathcal { D } } ( \varepsilon , m ) = \left( \frac { 6 W } { \beta ( \varepsilon , m , \mathcal { D } ) } \right) ^ { d } , $$ where $$ \beta ( \varepsilon , m , \mathcal { D } ) = \operatorname* { m i n } \Big ( \frac { W } { 2 } , \frac { \alpha _ { \ell } ( J , \varepsilon ) } { 3 7 5 0 ( \mathbb { E } [ \| \mathbf { X } \| _ { 1 } ] + \mathbb { E } [ | \mathbf { Y } | ] ) m } \Big ) , \quad J = ( 3 W / 2 + 1 ) \cdot 3 7 5 0 ( \mathbb { E } [ \| \mathbf { X } \| _ { 1 } ] + \mathbb { E } [ | \mathbf { Y } | ] ) m . $$ Moreover, let $\varepsilon \in ( 0 , \infty )$ and $\delta \in ( 0 , 1 )$ ; if $$ m \geq \left( \frac { 4 0 0 \cdot 1 6 ^ { p } v _ { p } } { \varepsilon ^ { p } } \right) ^ { \frac { 1 } { p - 1 } } , \kappa \geq \operatorname* { m a x } \left( \kappa _ { 0 } ( \delta / 8 ) , \frac { 1 0 ^ { 6 } \ln { ( 2 ) } } { 9 9 } , 5 0 \ln { \left( \frac { 8 N _ { \mathcal { D } } \left( \varepsilon / 1 6 , m \right) } { \delta } \right) } \right) , $$ then $$ \underset { \mathbf { Z } \sim ( \mathcal { D } ^ { m } ) ^ { \kappa } } { \mathbb { P } } \left( \underset { f \in \mathcal { F } _ { W } } { \operatorname* { s u p } } \vert \mathrm { M o M } ( f , \mathbf { Z } ) - \mu ( f ) \vert \leq \varepsilon \right) \geq 1 - \delta , $$ where ${ \bf Z } = ( { \bf X } , { \bf Y } ) \sim ( ( { \mathcal D } _ { X } \times { \mathcal D } _ { Y } ) ^ { m } ) ^ { \kappa }$ . Remark 4. The following comments are in order. • If we omit the dependence on $v _ { p }$ , the sample complexity bound implied by Theorem 3 is of the order of $$ \frac { 1 } { \varepsilon ^ { \frac { p } { p - 1 } } } \left( d \log \Big ( \frac { W J } { \alpha _ { \ell } ( J , \varepsilon ) } \Big ) + \log \Big ( \frac { 1 } { \delta } \Big ) \right) . $$ Notice that, for a given loss function $\ell$ , the $d \log \left( \frac { W } { \alpha _ { \ell } ( J , \varepsilon ) } \right)$ term depends on $d , \varepsilon$ , and $W$ , as well as on $J$ . It resembles a complexity term, depending on the distribution via $J$ , on the complexity of $\ell$ via $\alpha _ { \ell }$ , and on the norm bound of the regressor and its dimension $d$ . • If the loss function $\ell$ is also $L$ -Lipschitz and $\ell ( 0 ) = 0$ , it is possible to obtain a more explicit bound. 
In particular, calculations (see Appendix D for details) show that, if $\operatorname* { s u p } _ { w \in \mathrm { B } ( W ) } \mathbb { E } \left[ | \langle w , \mathbf { X } \rangle | ^ { p } \right] + \mathbb { E } \left[ | \mathbf { Y } | ^ { p } \right] < \infty$ , then, omitting this quantity (together with $v _ { p }$ ) in the following expression and assuming $W \geq 1$ , the sample complexity is at most of the order of $$ \frac { 1 } { \varepsilon ^ { \frac { p } { p - 1 } } } \left( d \log \Big ( \frac { W L } { \varepsilon } \Big ) + \log \Big ( \frac { 1 } { \delta } \Big ) \right) . $$ In this case, the dependence on $\varepsilon$ is explicit and of the order of $\varepsilon ^ { \frac { p } { 1 - p } } \log ( 1 / \varepsilon )$ . • We notice that the rate (4.5) matches, in terms of the dependence on $\varepsilon$ and $\delta$ and up to log factors, the known rates of the sample average when the distributions of $\| \mathbf { X } \|$ and $\mathbf { Y }$ are sub-exponential (see for example [25]). For this class of distributions, all moments exist, while our result only requires the existence of the $p$ -th moment for some $p \in ( 1 , 2 ]$ . We also point out that a similar generality on the distribution is achieved by [18], which also relies on the MoM estimator. The main difference is that their bound depends on the Rademacher complexity of $\mathcal { F } _ { W }$ , which, as far as we know, is not explicit for this class of distributions.
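The heavy-tailed regime the theorems target can be illustrated empirically. The following pure-Python sketch (our own experiment, not from the paper) draws Pareto samples with tail index $1.5$, so that only moments of order $p < 1.5$ are finite, and records the deviation of the MoM estimate and of the plain sample mean from the true expectation across repeated trials:

```python
import random
import statistics

def mom(values, kappa):
    """Median of Means over kappa equal-size groups."""
    m = len(values) // kappa
    means = [sum(values[i * m:(i + 1) * m]) / m for i in range(kappa)]
    return statistics.median(means)

random.seed(1)
alpha = 1.5                         # Pareto tail index: moments of order p < 1.5 only
true_mean = alpha / (alpha - 1.0)   # E[X] = 3.0 for paretovariate(1.5)

n, kappa, trials = 2000, 20, 200
mom_errs, mean_errs = [], []
for _ in range(trials):
    sample = [random.paretovariate(alpha) for _ in range(n)]
    mom_errs.append(abs(mom(sample, kappa) - true_mean))
    mean_errs.append(abs(sum(sample) / n - true_mean))

# Worst-case deviation across trials: single huge observations affect the
# sample mean directly, while the median across groups damps their effect.
worst_mom, worst_mean = max(mom_errs), max(mean_errs)
```

Comparing `worst_mom` and `worst_mean` over many runs gives a feel for the exponentially better confidence dependence discussed in Remarks 3 and 4.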
The Median of Means (MoM) is a mean estimator that has gained popularity in the context of heavy-tailed data. In this work, we analyze its performance in the task of simultaneously estimating the mean of each function in a class $\mathcal{F}$ when the data distribution possesses only the first $p$ moments for $p \in (1,2]$. We prove a new sample complexity bound using a novel symmetrization technique that may be of independent interest. Additionally, we present applications of our result to $k$-means clustering with unbounded inputs and linear regression with general losses, improving upon existing works.
# Introduction

In the past decade, public repositories (e.g., Gene Expression Omnibus (GEO) [1], ArrayExpress [2], European Genome-phenome Archive (EGA) [3], Accelerating Medicines Partnership Parkinson’s Disease (AMP-PD) [4], Synapse [5]) have facilitated access to thousands of omics datasets accompanied by clinical metadata. In this context, metadata refers to structured or unstructured information describing or providing context for biological datasets (e.g., clinical annotations, study protocols, patient demographics). Over $80 \%$ of this companion clinical metadata is still stored as free text or within heterogeneous containers [6]. Furthermore, the absence of standardized terminologies fosters redundancy (e.g., “UPDRS-III” and “motor score”), which hinders both the ability to identify specific concepts (semantic precision) and the ability to detect the same concepts across different cohorts (semantic coherence). This largely heterogeneous and weakly structured metadata limits the retrieval and comparison of relevant cohorts across studies [1]. Multiple reviews propose harmonization of formats (e.g., ontologies) as a partial solution to these limitations. However, current methodological guidance remains insufficient to support automated integration of unstructured or semi-structured biological data into structured repositories [7,8]. Knowledge modeling consists of turning free-text, highly heterogeneous descriptions of relevant clinical information into structured knowledge. It simultaneously addresses the tasks of acquiring information and organising it in a more or less formal structure, e.g., using an ontology as a reference [4]. Early approaches relied on rule-based or shallow NLP, achieving only incremental fact extraction [9,10]; the advent of Large Language Models (LLMs) is significantly accelerating the conversion of unstructured metadata into much more useful and structured resources. 
Recent work has explored the use of LLMs to automate the formal arrangement of clinical metadata. LLM-based approaches for knowledge extraction are able to recover entities from unstructured health records with remarkably high accuracy [11]. Combining such approaches with ontologies as the reference knowledge structure is the most successful strategy to date. Moreover, combining different ontologies enhances the prediction of biomedical associations and improves cohort search capabilities by providing cross-domain semantic terms [11,12]. Hybrid strategies that combine knowledge-graph structures with contrastive learning mechanisms have recently shown superior performance in retrieving biomedical cohorts compared to single-source or non-semantic baselines [13]. These methods construct dense representations informed by ontological hierarchies, thereby increasing both retrieval precision and robustness across domains [13]. Unlike prior hybrid methods, our approach targets cohort retrieval in the domain of ND research, covering Alzheimer’s disease (AD), Parkinson’s disease (PD), Huntington’s disease (HD), Lewy Body Disease (LWB), Frontotemporal Dementia (FT), and Multiple System Atrophy (MSA). It can also be adapted to any domain, as long as there is a structured body of knowledge that can be used as a reference. Additionally, our method integrates external domain knowledge using a two-step approach. First, we expanded the clinical metadata using knowledge from biomedical ontologies, allowing the model to disambiguate terms and align related concepts. Then, we trained the model to recognize semantic similarities and differences in the data. This is done through contrastive learning, a technique that pulls together the representations of conceptually related inputs and pushes apart unrelated ones in the embedding space, ensuring that similar clinical terms are embedded close to each other. 
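The pull-together/push-apart mechanism can be sketched with a toy InfoNCE-style loss on hand-made embeddings. This is an illustration of the contrastive objective only, not the actual fine-tuning code (which operates on PubMedBERT representations); the vectors and the temperature value are made up.

```python
import math

def cos(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce_loss(query, positive, negatives, tau=0.07):
    """InfoNCE contrastive loss: low when `query` is close to `positive`
    and far from every vector in `negatives` (temperature tau)."""
    logits = [cos(query, positive) / tau] + [cos(query, n) / tau for n in negatives]
    m = max(logits)  # log-sum-exp with max subtraction for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]  # negative log softmax probability of the positive

# Toy embeddings: "UPDRS-III" should sit near "motor score", far from "RNA-Seq".
q = [0.9, 0.1, 0.0]        # "UPDRS-III"
pos = [0.85, 0.15, 0.05]   # "motor score" (synonym -> positive pair)
negs = [[0.0, 0.1, 0.95]]  # "RNA-Seq" (unrelated -> negative)
loss = info_nce_loss(q, pos, negs)
```

Minimizing this loss over synonym pairs generated in Step 1 is what pulls ontology-related terms together in the embedding space.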
By integrating these two techniques, we introduce a model that produces embeddings for clinical data that are augmented with the biomedical-ontology-based information. These ontology-augmented embeddings are designed to enhance the model’s ability to retrieve meaningful cohorts from complex biomedical repositories [14]. We define four key metadata dimensions to describe each ND cohort: Population (Po), Assay (As), Phenotype (Ph), and Tissue (Ti). Applied to the GEO repository and grounded on PubMedBERT embedder, our framework generates embeddings that reconcile heterogeneous clinical descriptors within a unified semantic space, enabling precise and scalable retrieval of ND cohorts and paving the way for more comprehensive multi-omics analyses. # Methodology We implemented a six-step pipeline that systematically transforms the heterogeneous, unstructured metadata of the ND studies into a semantically enriched, query‑ready repository, publicly accessible to any user (see Figure 1): (0) acquisition of ND cohorts using Medical Subject Headings (MeSH) queries (e.g., "Parkinson") [15]; (1) generation of synonyms for the key metadata dimensions (Po, As, Ph, Ti) using biomedical ontologies; (2) generation of a Natural Language Queries Question Answering (NLQ-QA) dataset by combining original and synonym metadata and linking them to specific cohorts; (4) fine-tuning of a PubMedBERT-based embedding model on the QA dataset; (5) evaluation of the embedding’s performance using standard retrieval metrics; and (6) deployment of the final embeddings to enable semantic search on the enriched metadata. cohorts acquisition through biomedical ontologies MeSH 2801 cohorts Population Assay Phenotype Tissue (105 synonyms) (51synonyms) (292 synonyms) (326 synonyms) GSE148938 MeSH query BosE Alzheimer Disease Prefrontal Cortex population: Bos (e.g.,Parkinson) Parkinso Ox RNA-Seq Acute Confusional Senile Dementia Prefrontal Cortex auay: Expression GEO ? 
Figure 1. Overview of the NeuroEmbed pipeline: (Step 0) acquisition of 2801 cohorts from GEO through MeSH queries (e.g., "Parkinson"); (Step 1) synonym generation through biomedical ontologies for the Population (105 synonyms), Assay (51), Phenotype (292) and Tissue (326) dimensions; (Step 2) QA dataset generation; (Step 3) model fine-tuning on the QA dataset; (Step 4) evaluating embedding performance (e.g., Precision@5 = 3/5 = 0.6; MPR = 1 − ((0 + 0.25 + 0.75)/3) = 0.666); (Step 5) deploying embeddings.

# Step 0: data acquisition

To address disease-specific heterogeneity across NDs, we systematically collected from the GEO repository all studies of the most prevalent NDs that included any type of omics data together with basic clinical and demographic data of the participants (Figure 1, Step 0). To identify disease-specific cohorts with high confidence, we constructed disease-specific queries using MeSH medical thesaurus terms and their synonyms (e.g., “Idiopathic Parkinson's Disease” for “Parkinson’s disease”), since GEO does not impose rigorous metadata specifications on the studies. To control for false-positive cohorts, we applied a filtering protocol that requires at least one disease-specific MeSH term or synonym to appear in any of the following data fields: (1) the GEO study title, (2) the publication title (if available), or (3) the GEO abstract, excluding its first and last sentences to avoid boilerplate or non-informative content.
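The filtering protocol above can be sketched as follows (a minimal illustration; the `study` field names and the function name are our own assumptions, not the paper's code):

```python
def passes_filter(study, disease_terms):
    """Keep a GEO study only if at least one disease-specific MeSH term or
    synonym appears in its title, publication title, or abstract (the
    abstract minus its first and last sentences, which are often boilerplate)."""
    sentences = [s.strip() for s in study.get("abstract", "").split(".") if s.strip()]
    trimmed_abstract = ". ".join(sentences[1:-1])  # drop first and last sentence
    fields = [study.get("title", ""), study.get("publication_title", ""), trimmed_abstract]
    # Case-insensitive substring match of any term against any field
    return any(term.lower() in field.lower()
               for term in disease_terms for field in fields)
```

For instance, a study whose trimmed abstract mentions "idiopathic Parkinson's disease" passes a filter built from that MeSH synonym, while an unrelated study does not.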
Additionally, we used the pymed Python library [16] to retrieve publication-level metadata (e.g., the article’s title), since GEO only provides the study metadata and the PubMed ID (PMID).

# Step 1: synonym generation

After identifying relevant cohorts, we applied a synonym search protocol to normalize and expand the range of related terms and capture richer semantic representations of the metadata dimensions (see Figure 1, Step 1). These ontology-based searches allow us to: (1) normalize the data, i.e., consolidate lexical variants into a single canonical identifier so that semantically equivalent concepts are represented consistently across cohorts; and (2) augment the data, i.e., systematically expand each original metadata term with all ontology-derived synonym variants. The augmentation process followed a two-stage matching strategy. First, we searched the target ontologies for exact matches of the metadata values; values found in any of the ontologies were expanded with synonyms. The ontologies included the Experimental Factor Ontology (EFO) [17] for assay-related terms, UBERON [18] for tissue terms and the NCBI Taxonomy [19,20] for population values. If no match was found in these primary ontologies, the search extended to broader ontologies, i.e., MeSH [15], followed by UMLS [21,22]. OWL ontologies (EFO, NCBI Taxonomy, UBERON) were queried via the “hasExactSynonym” field using the “rdflib” [23] and “xml” [24] libraries. MeSH synonyms were extracted from the “ConceptList” function. UMLS terms were mapped using a precompiled Concept Unique Identifier dictionary. If no exact match was found for a value (whether a population, assay, phenotype or tissue), fuzzy string matching was performed using the “thefuzz” Python library [25], retaining only candidates with a similarity score $\geq 80\%$ based on the Levenshtein distance [26].
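The two-stage matching strategy can be sketched roughly as follows. This is an illustration only: the paper uses thefuzz's Levenshtein-based ratio, which we approximate here with the standard library's `SequenceMatcher`, and the ontology lookup tables and function names are our own assumptions:

```python
from difflib import SequenceMatcher

def find_synonyms(value, ontologies, threshold=0.80):
    """Two-stage matching: exact match first, then fuzzy matching.
    `ontologies` maps canonical terms to synonym lists and is queried in
    priority order (e.g., EFO/UBERON/NCBI Taxonomy, then MeSH, then UMLS)."""
    for ontology in ontologies:
        # Stage 1: exact match against canonical labels
        if value in ontology:
            return ontology[value]
        # Stage 2: fuzzy match, keeping only candidates scoring >= threshold
        def score(term):
            return SequenceMatcher(None, value.lower(), term.lower()).ratio()
        best = max(ontology, key=score, default=None)
        if best is not None and score(best) >= threshold:
            return ontology[best]
    return []  # no match in any ontology
```

A value like "rna-seq" would miss the exact-match stage against the canonical label "RNA-Seq" but be recovered by the fuzzy stage.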
# Step 2: QA dataset generation

After metadata augmentation, we leveraged all this base metadata to generate NLQs related to our cohorts (see Figure 1, Step 2). Each query was constructed by randomly combining one to four original or augmented values from the four metadata dimensions. Only NLQs for which at least one matching cohort was available in our cohort dataset were retained. For example, an NLQ could be: "Show me cohorts within ox population from prefrontal cortex from transcription profiling by high throughput sequencing with senile dementia alzheimer type observations", where all four metadata dimensions (Po, Ti, As and Ph) are queried. Another example could be: "I'm searching for cohorts with ataxia limb from rna profiling by array assay", where just two dimensions (Ph and As) are combined. To generate the Question Answering Dataset (QAD), we performed a stratified split of the synonym-expanded vocabulary of the four cohort-metadata dimensions to obtain mutually exclusive training and test synonym sets in an 80/20 ratio (Algorithm 1, Line 2). For every query (i.e., a combination of one to four training values), we retrieved the cohorts whose metadata satisfied the entire combination and randomly selected one of them (Algorithm 1, Lines 5–11). Each query–cohort pair was then converted into an NLQ using one of six predefined templates (one of which is reserved exclusively for the test set), adding prepositions to make the queries more natural. The predefined templates were: (1) “Give me papers about…”; (2) “Can you show findings about…”; (3) “Explore data related to…”; (4) “Show me studies on…”; (5) “What research exists on…”; and (6) “I’d like to know about…”. Finally, we formed the training set with NLQ–cohort pairs that use only training synonyms and five of the six templates, and the test set with only test synonyms (Algorithm 1, Lines 20–22).
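A minimal sketch of the template-based NLQ construction described above. The six templates are those listed in the text; the per-dimension prepositions and the function name are illustrative assumptions:

```python
import random

TEMPLATES = [
    "Give me papers about {}",
    "Can you show findings about {}",
    "Explore data related to {}",
    "Show me studies on {}",
    "What research exists on {}",
    "I'd like to know about {}",
]

# Illustrative joining prepositions per metadata dimension (Po, As, Ph, Ti),
# modeled on the example queries in the text
PREPOSITIONS = {
    "Po": "within {} population",
    "As": "from {} assay",
    "Ph": "with {} observations",
    "Ti": "from {} tissue",
}

def build_nlq(values, rng=random):
    """Combine 1-4 (dimension, value) pairs into a natural language query."""
    parts = [PREPOSITIONS[dim].format(val) for dim, val in values]
    template = rng.choice(TEMPLATES)
    return template.format(" ".join(parts))
```

In the real pipeline one template would be held out for the test set, so only five templates are sampled at training time.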
This dual partitioning allowed us to evaluate the model’s capacity to generalize across both (1) novel synonym instances and (2) unseen query formulations.

Algorithm 1. GenerateQAD
Require: vocab_augmented, cohorts, templates, prefix_suffix_rules
1: procedure GENERATEQAD
2:   (train_vals, test_vals) ← STRATIFIEDSPLIT(vocab_augmented, 0.8)
3:   pair_list ← {}
4:   final_QA ← {}
5:   for k ← 1 to 4 do
6:     for all combo ∈ RANDOMCOMBINATIONS(train_vals, k) do
7:       compatible ← FILTER(cohorts, combo)
8:       if compatible ≠ ∅ then
9:         cohort ← RANDOMCHOICE(compatible)
10:        pair_list ← pair_list ∪ {(combo, cohort)}
11:      end if
12:    end for
13:  end for
14:  for all (combo, cohort) ∈ pair_list do
15:    variants ← CREATENLQ(combo, templates, prefix_suffix_rules)
16:    for all nlq ∈ variants do
17:      final_QA ← final_QA ∪ {(nlq, cohort)}
18:    end for
19:  end for
20:  train_set ← {p ∈ final_QA | USESONLY(train_vals, p.nlq) ∧ ¬TEMPLATEISTESTONLY(p.nlq)}
21:  test_set ← {p ∈ final_QA | USESONLY(test_vals, p.nlq)}
22:  return (train_set, test_set)
23: end procedure

The procedure takes as input the augmented vocabulary across the four metadata dimensions (Po, As, Ph, Ti) and follows these steps: (1) the vocabulary is split into training and testing subsets using a stratified strategy (Line 2); (2) for each combination of 1 to 4 training terms, the cohorts satisfying the corresponding constraints are filtered, and if at least one valid cohort exists, a query–cohort pair is stored (Lines 5–13); (3) for each query–cohort pair, NLQs are generated using predefined templates and prefix/suffix rules (Lines 14–19); (4) the resulting NLQs are filtered into a training set and a test set depending on which subset their terms belong to and whether the template is marked as test-only (Lines 20–22). This process yields a balanced and diverse QA dataset suitable for fine-tuning natural language embedders.

# Step 3: model fine-tuning

To adapt a biomedical language embedder to the constructed QAD, we fine-tuned the NeuML/pubmedbert-base-embeddings model [34] (see Figure 1, Step 3), pretrained exclusively on 14 million PubMed abstracts and full-text articles, which provides stronger coverage of biomedical terminology and syntax than domain-agnostic alternatives such as BERT-base or SciBERT. It produces 768-dimensional sentence vectors, and existing benchmarks report higher retrieval precision for biomedical queries compared with BioClinicalBERT or BlueBERT. In this setting, the embedder uses the QAD to learn how to associate a clinician’s free-text query (question) with a single, well-defined cohort description (answer), mirroring the real-world interaction and simplifying evaluation through exact matching. Model fine-tuning was performed to adapt the embedder to the QA task.
Specifically, we employed the MultipleNegativesRankingLoss (MNRL) [27] from the sentence-transformers library [33], a contrastive loss that, for every anchor–positive pair in a mini-batch, maximises their cosine similarity while treating all other instances in the batch as implicit negatives. This loss implements a cross-entropy objective known as InfoNCE [28], which compares the true (query, cohort) pair against the rest of the mini-batch, encouraging the model to assign a higher similarity to the correct pair. InfoNCE is a contrastive formulation originally designed to discriminate a true positive from a set of distractors by scaling the probability of correct matches. This in-batch negative sampling increases the number of effective negatives with the batch size and has proved effective for retrieval tasks where explicit negatives are scarce [29,30]. Formally, for a query $q$, its $P$ positive examples $p_i$ and the $N$ in-batch negatives $n_j$, the loss is:

$$ \mathrm{Loss} = \sum_{i=1}^{P} \sum_{j=1}^{N} \max\left(0,\ f(q, n_j) - f(q, p_i) + \gamma\right) $$

where $f(\cdot)$ is the cosine-similarity function, $P$ and $N$ denote the counts of positive and negative pairs in the mini-batch, and $\gamma$ is a margin hyperparameter that enforces the desired separation between positive and negative scores. In our QAD, positive (query, cohort) pairs are explicitly labeled, while all other cohorts serve as implicit negatives. Compared with the original release, we required fewer training epochs and warm-up steps due to the smaller size of our dataset (2 epochs, with warm-up covering $10\%$ of total iterations).
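The in-batch objective can be illustrated with a small NumPy sketch of the InfoNCE/cross-entropy form that MNRL implements. The similarity scale of 20 mirrors the sentence-transformers default but is an assumption here, and the function name is ours:

```python
import numpy as np

def mnrl_loss(query_emb, cohort_emb, scale=20.0):
    """In-batch MultipleNegativesRankingLoss (InfoNCE form): row i of
    query_emb is paired with row i of cohort_emb; every other cohort in
    the batch acts as an implicit negative."""
    # L2-normalize so that dot products equal cosine similarities
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    c = cohort_emb / np.linalg.norm(cohort_emb, axis=1, keepdims=True)
    sim = scale * (q @ c.T)  # (B, B) scaled similarity matrix
    # Cross-entropy with the diagonal (true query-cohort pairs) as targets,
    # using a numerically stable log-sum-exp
    row_max = sim.max(axis=1, keepdims=True)
    log_z = row_max.squeeze(1) + np.log(np.exp(sim - row_max).sum(axis=1))
    return float(np.mean(log_z - np.diag(sim)))
```

When query and cohort embeddings coincide, the diagonal dominates every row and the loss approaches zero; with unrelated embeddings the loss is substantially larger, which is what drives matching pairs together during fine-tuning.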
# Step 4: embedding’s evaluation

Model performance was monitored via two complementary strategies: (1) the validation loss was measured every $5\%$ of an epoch to detect underfitting or overfitting; (2) retrieval-based evaluation was performed using a ground-truth query-to-cohort mapping (see Figure 1, Step 4). Using the trained embedder, cohorts were retrieved based on cosine similarity: for each query, the fine-tuned model returns the cohorts most similar to the query. To estimate how relevant the answers to each query were, we used the Retrieval Precision metric [31,32], computed as follows:

$$ \mathrm{Precision} = \frac{\text{Number of relevant cohorts retrieved}}{\text{Total number of retrieved cohorts}} $$

Additionally, cohorts are ranked by cosine similarity, which allows us to compute the Mean Percentile Rank (MPR) for each query:

$$ \mathrm{MPR} = 1 - \frac{1}{N} \sum_{i=1}^{N} \frac{\mathrm{Rank}_i}{\text{Total number of cohorts}} $$

where $N$ is the number of correct cohorts for the query and $\mathrm{Rank}_i$ is the position of the $i$-th correct cohort in the ranked list; values close to 1 therefore indicate that the correct cohorts are ranked near the top.

# Step 5: embeddings’ deployment

The resulting embedding model was deployed with Gradio’s ChatInterface, which links a custom Python callback to a chat widget that accepts free-text biomedical queries. Each query is routed to the backend, where the callback returns a markdown-formatted response, and Gradio streams the output to the browser. Gradio's launch function creates a FastAPI server, serves the auto-generated HTML and JavaScript frontend, and lets the application run locally or be shared through a public URL without further web-development effort.

# Results

# Ontology-Augmented Normalization and Synonym Expansion Across Metadata Dimensions

A total of 3823 omics cohorts were initially retrieved from GEO [1] across a range of NDs.
After applying the disease-specific filtering protocol (see Methodology, Step 0), 2801 cohorts remained. The most represented condition was AD (n = 1250), followed by PD (n = 589) and HD (n = 365). Other conditions were less frequently represented, including LBD (n = 18) and MSA (n = 17). Regarding the heterogeneity of the terminology used to describe GEO entries, we identified 33 distinct population descriptors, 19 assay types, and 1770 non-standardized tissue annotations. For example, multiple distinct values described semantically equivalent concepts (e.g., "brain cortex" vs. "cerebral cortex" for tissue). All these terms needed to be standardized before any downstream analysis. To this end, we first homogenized the raw values: every string was normalized and, whenever an exact or similar match existed, replaced by the standard label of its reference ontology (see Methodology, Step 1). In particular, the Tissue field required extensive normalization, i.e., consolidating variants into a single canonical identifier so that semantically equivalent concepts are represented consistently across cohorts. From the original 1770 non-standardized entries retrieved from GEO, 560 values were successfully mapped to the UBERON, MeSH and UMLS ontologies using direct and fuzzy matching strategies, 326 of them corresponding to unique standardized terms (see Table 1). Most tissue mappings were obtained via fuzzy matching: 384 values (73.84%) versus 136 (26.16%) via exact matches. UBERON was the most informative source, contributing 227 standardized terms (43.7%). Then, for Po, As and Ph, an augmentation step was applied to enrich the GEO metadata. For Po, the initial set of 33 unique descriptors was augmented to a total of 105 synonym terms from the NCBI Taxonomy ontology.
For As, the 19 distinct values were augmented to 51 (only 11 of the initial values yielded synonyms). For Ph, we employed a combination of exact and fuzzy matching strategies, which yielded 31 synonyms from EFO and 12 from UMLS. All 13 input values were successfully augmented to 292 values using the MeSH ontology, with 45 synonyms (15.4%) obtained from direct matches and 247 (84.6%) inferred through fuzzy similarity scoring.

Table 1. Synonym-augmentation summary across the Ti, Po, As and Ph dimensions. The table reports: (1) the metadata field; (2) the number of original distinct terms retrieved from GEO (Original Count); (3) the number of those terms with no match in any ontology (No Match); (4) the number of matched terms (Match); (5) the total number of synonyms generated (Synonyms); (6–8) the distribution of synonym sources by ontology (EFO / NCBI / UBERON+MeSH+UMLS); and (9–10) the distribution of synonyms obtained via exact match (Direct) or fuzzy matching (Fuzzy).

# NLQ Dataset Construction Using Augmented Metadata

To fine-tune the base embedder, we used the synonym-augmented metadata to construct a space of NLQs by combining values across the four standardized metadata dimensions. Since the expansion process resulted in 105 Po terms, 51 As terms, 292 Ph terms, and 326 Ti terms (774 unique values), the space of all combinations of single values for all four elements would lead to $5{\times}10^{8}$ NLQs. Nevertheless, we restricted this space to NLQs with verifiable answers (i.e., those with at least one matching cohort), leading to 368,082 NLQs, each one associated with 1.5 cohorts on average. Specifically, we partitioned the 774 augmented metadata values into training and test sets using metadata-stratified sampling, allocating $80\%$ of the synonyms to training (n = 619) and $20\%$ to evaluation (n = 155).
This procedure led to two disjoint sets of NLQs: (1) NLQs composed exclusively of synonyms from the training subset (n = 139,336) and (2) NLQs composed exclusively of synonyms from the test subset (n = 1,886). To avoid overfitting caused by the large imbalance between the numbers of training-only and test-only NLQs, we randomly subsampled the training set to 7,544 NLQs, four times the size of the test set.

# Cohort Retrieval Fine-tuned Embedding Evaluation

After training the base embedder with our NLQ–cohort dataset, we monitored both the training and evaluation MNRL loss [33] to assess model convergence. We selected this loss function because it maximises the similarity margin between each true query–cohort pair and the many in-batch negatives, making it well suited for dense retrieval tasks (see Methodology Step 3 section and Equation 1). As shown in Figure 2A, the training loss decreased sharply during the initial steps, from 1.10 at step 15 to 0.24 at step 60. Afterward, the training loss continued to decline more gradually and stabilized below 0.15 by step 255. The validation loss followed a similar trend, dropping from 0.33 at step 15 to approximately 0.12 from step 120 onward. The plateau below 0.15 shows that the encoder has already distilled the dominant patterns in the training NLQs. From that point on, each newly presented query mostly reinforces what the model already knows instead of uncovering new signals. These late-stage NLQs are not useless: they stabilise the weights and help prevent over-fitting to earlier batches. Meaningful gains would require examples with lexical variants, different metadata combinations or harder negative cohorts. To assess the retrieval capabilities of the embedding model, we evaluated performance on the test-set NLQs using two metrics: Retrieval Precision and MPR (see Methodology Step 4 section).
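Both metrics can be sketched as follows (a minimal illustration consistent with the worked examples in Figure 1, i.e., Precision@5 = 3/5 = 0.6 and MPR = 1 − ((0 + 0.25 + 0.75)/3); the function names are ours):

```python
def retrieval_precision(retrieved, relevant, k=5):
    """Precision@k: fraction of the top-k retrieved cohorts that are
    relevant, e.g. Precision@5 = 3/5 = 0.6 when 3 of the 5 hits are correct."""
    top_k = retrieved[:k]
    return sum(cohort in relevant for cohort in top_k) / len(top_k)

def mean_percentile_rank(ranked, relevant):
    """MPR = 1 - mean(rank_i / total), where rank_i is the 0-based position
    of each relevant cohort in the full ranking; values near 1.0 mean the
    relevant cohorts sit near the top of the list."""
    total = len(ranked)
    return 1.0 - sum(ranked.index(cohort) / total for cohort in relevant) / len(relevant)
```

For a ranking of 4 cohorts where the relevant ones occupy positions 0, 1 and 3, the normalized ranks are 0, 0.25 and 0.75, giving MPR = 1 − 1/3 ≈ 0.667, matching the figure's example.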
As shown in Figure 2B, for each group of NLQs, defined by the number of metadata terms included in the query (n_terms, i.e., 1, 2, 3, or 4 of the dimensions Po, As, Ph, or Ti), the majority of instances fell near 1.0 precision. Focusing first on two-term NLQs, we identified two distinct patterns. A small subset recorded precision values near 0.0 yet still retrieved the correct cohort within the top four results (mean MPR = 0.267), indicating that although competing partial matches scored slightly higher, the target cohort remained highly ranked. The remaining two-term queries showed precisions between 0.5 and 1.0 (mean MPR = 0.834). For three-term NLQs, nearly all cases exceeded 0.5 precision; only a few targeted a single cohort with precision below 0.5, and their mean MPR was 0.285, so the correct match was still placed in the top four despite one atypical metadata term. Finally, all four-term NLQs clustered at 1.0 precision. We also observed that all test NLQs, regardless of the number of terms, achieved MPR values near 1.0, following a similar pattern to retrieval precision (see Figure 2C). However, unlike retrieval precision, MPR showed no accumulation of NLQs near 0.0, suggesting that even when retrieval precision is zero, the correct result is still ranked relatively high by the embedding model. Finally, to evaluate the impact of fine-tuning, we compared the retrieval performance of the trained embedder to that of the original, non-fine-tuned NeuML/pubmedbert-base-embeddings model [34]. We detected substantially lower performance for the base embedder compared to the fine-tuned one on both metrics, retrieval precision and MPR. Specifically, a large number of queries had precision close to 0.0 (see Figure 3A), which indicates that, for the base model, relevant cohorts frequently appeared in the bottom percentiles of the ranked list (see Figure 3B). Figure 2.
(A) Training and evaluation loss curves during fine-tuning. (B) Precision and (C) MPR distributions for test NLQs grouped by number of metadata terms using the fine-tuned embedder. Figure 3. (A) Precision and (B) MPR distributions for test NLQs grouped by number of metadata terms using the base embedder.

# Discussion

We have defined a simple and adaptable methodology to create semantically rich repositories of omics cohorts based on LLMs used as embedders. To this end, we use established ontologies to normalize unstructured descriptions in the repositories and then augment these descriptions with synonyms, which are used to fine-tune the embedder. That is, this workflow combines cohort metadata curation, large-scale synonym expansion across biomedical ontologies and fine-tuning of a PubMedBERT base encoder. Finally, we illustrated the applicability of this workflow for searching and discovering new ND cohorts in the GEO repository. Our experiments demonstrated that enriching the four core metadata dimensions (i.e., Po, As, Ph and Ti) with ontology-derived synonyms triples the lexical coverage of Po descriptors and expands the Ph terms twentyfold (see Table 1). When these curated descriptors are embedded in a shared semantic space, the model attains a mean Retrieval Precision of 0.866 and an MPR of 0.896 on 1,886 natural-language queries, far surpassing the baseline PubMedBERT encoder (Retrieval Precision = 0.277; MPR = 0.355). Notably, queries that combine all four metadata dimensions achieve perfect Retrieval Precision.
Moreover, error analysis reveals that most residual failures concentrate in two-term queries containing potentially ambiguous combinations (e.g., “Macaca Mulatta” with “Macaca Fascicularis” or “Bos Indicus” with “Bos Taurus”), suggesting that further improvements will stem from ontological disambiguation and expanded semantic coverage rather than from modifying the embedding architecture. In the interactive Gradio-based platform, researchers can formulate free-text questions such as “Show me Parkinson's disease cohorts profiled with RNA-Seq in substantia nigra tissue” and receive highly relevant studies without laborious manual filtering. The interface returns a concise ranked list in plain text where each entry shows the cohort title, the GEO accession and all the metadata associated with the cohort, together with a quick link to the original GEO record. The application runs entirely in the browser and requires no local installation, which makes the retrieval workflow readily accessible to researchers with minimal technical effort. This work also lays the foundation for a much broader initiative. We are currently generalizing the approach beyond the four metadata dimensions and cohort-level granularity. Specifically, we are working to index every individual ND sample (>150,000) contained in the same GEO studies together with all of their metadata dimensions. In the future, we intend to integrate GEO omics profiles so that similarity can be computed jointly over structured descriptors and latent molecular signatures. Our long-term objective is to support compound queries such as “Return two mouse samples exhibiting Parkinsonian phenotypes with the most similar multi-omics fingerprints”, where similarity will be evaluated within an embedded vector space that fuses clinical terms and omic feature representations.

# References

1 Barrett, Tanya, Wilhite, Stephen E., Ledoux, Pierre, Evangelista, Carlos, et al.
(2013) ‘NCBI GEO: archive for functional genomics data sets—update’. Nucleic Acids Research, 41(D1), pp. D991–D995.
2 Parkinson, H., Kapushesky, M., Shojatalab, M., Abeygunawardena, N., et al. (2007) ‘ArrayExpress—a public database of microarray experiments and gene expression profiles’. Nucleic Acids Research, 35(Database issue), pp. D747–D750.
3 Lappalainen, Ilkka, Almeida-King, Jeff, Kumanduri, Vasudev, Senf, Alexander, et al. (2015) ‘The European Genome-phenome Archive of human data consented for biomedical research’. Nature Genetics, 47(7), pp. 692–695.
4 Iwaki, Hirotaka, Leonard, Hampton L., Makarious, Mary B., Bookman, Matt, et al. (2021) ‘Accelerating Medicines Partnership: Parkinson’s Disease. Genetic Resource’. Movement Disorders: Official Journal of the Movement Disorder Society, 36(8), pp. 1795–1804.
5 Sage Bionetworks (n.d.) ‘Synapse Commons Repository’. [online] Available from: https://www.synapse.org/Synapse:syn150935 (Accessed 3 June 2025)
6 Sedlakova, Jana, Daniore, Paola, Wintsch, Andrea Horn, Wolf, Markus, et al. (2023) ‘Challenges and best practices for digital unstructured data enrichment in health research: A systematic narrative review’. PLOS Digital Health, 2(10), p. e0000347.
7 Foreman, Brandon (2020) ‘Integrating and Using Big Data in Neurocritical Care’, in Neurocritical Care: Bench to Bedside (eds. Claude Hemphill, Michael James). Neurotherapeutics, 17(2), pp. 593–605.
8 Hemingway, Harry, Asselbergs, Folkert W., Danesh, John, Dobson, Richard, et al. (2018) ‘Big data from electronic health records for early and late translational cardiovascular research: challenges and potential’. European Heart Journal, 39(16), pp. 1481–1495.
9 Huffman, Scott B. (1996) ‘Learning information extraction patterns from examples’, in Wermter, S., Riloff, E., and Scheler, G. (eds.), Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing, Berlin, Heidelberg, Springer, pp. 246–260.
10 Anon (n.d.) ‘Natural Language Interfaces for Tabular Data Querying and Visualization: A Survey’. [online] Available from: https://arxiv.org/html/2310.17894v3 (Accessed 3 June 2025)
11 Ntinopoulos, Vasileios, Rodriguez Cetina Biefer, Hector, Tudorache, Igor, Papadopoulos, Nestoras, et al. (2025) ‘Large language models for data extraction from unstructured and semi-structured electronic health records: a multiple model performance evaluation’. BMJ Health & Care Informatics, 32(1), p. e101139.
12 Wang, Yihao, Wegner, Philipp, Domingo-Fernández, Daniel and Tom Kodamullil, Alpha (2023) ‘Multi-ontology embeddings approach on human-aligned multi-ontologies representation for gene-disease associations prediction’. Heliyon, 9(11), p. e21502.
13 Nunes, Susana, Sousa, Rita T. and Pesquita, Catia (2023) ‘Multi-domain knowledge graph embeddings for gene-disease association prediction’. Journal of Biomedical Semantics, 14(1), p. 11.
14 Le-Khac, Phuc H., Healy, Graham and Smeaton, Alan F. (2020) ‘Contrastive Representation Learning: A Framework and Review’. IEEE Access, 8, pp. 193907–193934.
15 National Library of Medicine (US) (2024) Medical Subject Headings. [online] Available from: https://www.nlm.nih.gov/mesh/
16 Wobben, Gijs (n.d.) ‘pymed: Python library for access to PubMed’. [online] Available from: https://github.com/gijswobben/pymed (Accessed 14 May 2025)
17 Malone, James, Holloway, Ele, Adamusiak, Tomasz, Kapushesky, Misha, et al. (2010) ‘Modeling sample variables with an Experimental Factor Ontology’. Bioinformatics (Oxford, England), 26(8), pp. 1112–1118.
18 Mungall, Christopher J., Torniai, Carlo, Gkoutos, Georgios V., Lewis, Suzanna E. and Haendel, Melissa A. (2012) ‘Uberon, an integrative multi-species anatomy ontology’. Genome Biology, 13(1), p. R5.
19 Schoch, Conrad L., Ciufo, Stacy, Domrachev, Mikhail, Hotton, Carol L., et al. (2020) ‘NCBI Taxonomy: a comprehensive update on curation, resources and tools’. Database: The Journal of Biological Databases and Curation, 2020, p. baaa062.
20 Sayers, Eric W., Cavanaugh, Mark, Clark, Karen, Ostell, James, et al. (2019) ‘GenBank’. Nucleic Acids Research, 47(D1), pp. D94–D99.
21 Bodenreider, Olivier (2004) ‘The Unified Medical Language System (UMLS): integrating biomedical terminology’. Nucleic Acids Research, 32(Database issue), pp. D267–D270.
22 National Library of Medicine (US) (2024) UMLS Knowledge Sources. [online] Available from: http://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html
23 Krech, Daniel, Grimnes, Gunnar AAstrand, Higgins, Graham, Hees, Jörn, et al. (2023) ‘RDFLib’. [online] Available from: https://zenodo.org/records/8206632 (Accessed 10 June 2025)
24 Anon (n.d.) ‘XML Processing Modules’. Python documentation. [online] Available from: https://docs.python.org/3/library/xml.html (Accessed 10 June 2025)
25 Cohen, Adam (n.d.) ‘thefuzz: Fuzzy string matching in python’. [online] Available from: https://github.com/seatgeek/thefuzz (Accessed 14 May 2025)
26 Levenshtein, V. I. (1966) ‘Binary Codes Capable of Correcting Deletions, Insertions and Reversals’. Soviet Physics Doklady, 10, p. 707.
27 Henderson, Matthew, Al-Rfou, Rami, Strope, Brian, Sung, Yun-hsuan, et al. (2017) ‘Efficient Natural Language Response Suggestion for Smart Reply’. [online] Available from: http://arxiv.org/abs/1705.00652 (Accessed 11 June 2025)
28 Oord, Aaron van den, Li, Yazhe and Vinyals, Oriol (2019) ‘Representation Learning with Contrastive Predictive Coding’. [online] Available from: http://arxiv.org/abs/1807.03748 (Accessed 11 June 2025)
29 Reimers, Nils and Gurevych, Iryna (2020) ‘Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation’. [online] Available from: http://arxiv.org/abs/2004.09813 (Accessed 11 June 2025)
30 Le-Khac, Phuc H., Healy, Graham and Smeaton, Alan F. (2020) ‘Contrastive Representation Learning: A Framework and Review’. IEEE Access, 8, pp. 193907–193934.
31 Anon (n.d.) ‘Evaluation of Evaluation in Information Retrieval’, in ResearchGate. [online] Available from: https://www.researchgate.net/publication/221301028_Evaluation_of_Evaluation_in_Information_Retrieval (Accessed 14 May 2025)
32 Järvelin, Kalervo and Kekäläinen, Jaana (2000) ‘IR evaluation methods for retrieving highly relevant documents’, in Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, pp. 41–48. [online] Available from: https://researchportal.tuni.fi/en/publications/ir-evaluation-methods-for-retrieving-highly-relevant-documents (Accessed 14 May 2025)
33 Reimers, Nils (n.d.) ‘sentence-transformers: Embeddings, Retrieval, and Reranking’. [online] Available from: https://www.SBERT.net (Accessed 14 May 2025)
34 Mezzetti, David and NeuML (2025) ‘NeuML/pubmedbert-base-embeddings · Hugging Face’. [online] Available from: https://huggingface.co/NeuML/pubmedbert-base-embeddings
The growing volume of omics and clinical data generated for neurodegenerative diseases (NDs) requires new curation approaches so that these data are ready to use in bioinformatics. NeuroEmbed is an approach for engineering semantically accurate embedding spaces to represent cohorts and samples. The NeuroEmbed method comprises four stages: (1) extraction of ND cohorts from public repositories; (2) semi-automated normalization and augmentation of cohort and sample metadata using biomedical ontologies and clustering on the embedding space; (3) automated generation of a natural language question-answering (QA) dataset for cohorts and samples based on randomized combinations of standardized metadata dimensions; and (4) fine-tuning of a domain-specific embedder to optimize queries. We illustrate the approach using the GEO repository and the PubMedBERT pretrained embedder. Applying NeuroEmbed, we semantically indexed 2,801 repositories and 150,924 samples. Among many biology-relevant categories, we normalized more than 1,700 heterogeneous tissue labels from GEO into 326 unique ontology-aligned concepts and enriched the annotations with new ontology-aligned terms, increasing the number of metadata terms between 2.7- and 20-fold. After fine-tuning PubMedBERT with the QA training data augmented with the enlarged metadata, the model increased its mean Retrieval Precision from 0.277 to 0.866 and its mean Percentile Rank from 0.355 to 0.896. The NeuroEmbed methodology for creating electronic catalogues of omics cohorts and samples will foster the construction of automated bioinformatic pipelines. The NeuroEmbed catalogue of cohorts and samples is available at https://github.com/JoseAdrian3/NeuroEmbed.
[ "cs.CL" ]
# 1 Introduction

Today’s smart IoT devices, such as smart speakers, smart bulbs, and various smart display devices, are commonly connected to home routers or mesh network hubs via WiFi. Beyond their primary role in communication, the WiFi signals between these devices inherently capture rich information about the surrounding environment through their propagation paths [24, 42, 25]. This has positioned WiFi sensing as a compelling alternative to vision- or wearable-based systems for human monitoring in smart environments. By capturing fine-grained temporal and spatial variations in Channel State Information (CSI), commodity WiFi devices can infer a wide range of human-centric phenomena—from gross motor events such as falls to subtle physiological signals like breathing. These properties make WiFi sensing especially attractive for health-related applications in smart homes, where privacy, continuous operation, and ease of deployment are critical. Moreover, because these signals are already being transmitted by existing infrastructure, WiFi-based sensing enables non-intrusive, cost-effective, and passive monitoring without requiring additional sensors or user instrumentation.

Despite increasing research interest, existing WiFi sensing studies suffer from a fundamental limitation: a lack of large-scale, diverse, and real-world datasets. Most current datasets are collected in controlled laboratory settings, often using limited types of homogeneous hardware configurations and a narrow range of tasks. As a result, models trained on these datasets struggle to generalize to new users, devices, or environments, limiting their practical utility. To address these gaps, we introduce CSI-Bench, the first large-scale, in-the-wild benchmark dataset supporting multi-task WiFi sensing, as illustrated in Figure 1.

Preprint.

Figure 1: CSI-Bench overview. The benchmark features multiple commercial routers and IoT devices deployed in real homes and offices to collect CSI data. It supports a wide range of human-centric sensing tasks, enabling robust model development across diverse hardware setups and real-world scenarios.

Using commercial edge devices, CSI-Bench captures real-world signal variability across diverse environments, including apartments, multi-room houses, offices, and public indoor spaces. Data is recorded continuously from a broad spectrum of WiFi chipsets (Qualcomm, Broadcom, Espressif, MediaTek, and NXP), under both line-of-sight (LoS) and non-line-of-sight (NLoS) conditions, and during natural human activities with minimal intervention. CSI-Bench advances the field in three key ways:

• Large-scale, real-world coverage. The dataset spans over 461 hours of CSI data from 35 users, 26 distinct environments, and 16 device configurations. It reflects realistic deployment conditions with background interference, user mobility, and ambient network traffic.
• Multi-task and co-labeled annotations. We provide both single-task specialist datasets (e.g., fall detection, breathing monitoring, localization, and motion source recognition) and a multi-task dataset with joint labels for user identification, activity recognition, and proximity estimation. The co-labeled samples enable efficient multi-task learning and low-latency inference on resource-constrained edge devices.
• Standardized benchmarking protocols. We establish strong baselines under supervised learning and multi-task learning. Our findings highlight generalization gaps and the promise of parameter-efficient multi-task learning.

CSI-Bench aims to catalyze robust model development for passive, privacy-preserving WiFi sensing. By offering a unified platform for realistic, diverse, and reproducible evaluation, it provides a foundation for scalable AI applications in smart health, home monitoring, and beyond.
# 2 Related Work

# 2.1 WiFi Sensing

Compared to vision-, audio-, or wearable-based systems, WiFi sensing offers a scalable, privacy-preserving, and non-intrusive alternative or complementary solution for continuous monitoring in smart environments and healthcare applications [22, 12, 33]. WiFi sensing has demonstrated substantial potential in tasks such as activity recognition [23, 28], gesture detection [31, 45], indoor localization [40, 38], and vital sign monitoring [39, 13]. However, most existing studies rely on data collected in constrained settings, which limits generalization to diverse users, hardware platforms, and real-world deployment scenarios.

# 2.2 WiFi Sensing Datasets

A number of WiFi sensing datasets have contributed valuable resources to the community. Widar3.0 [47] offers large-scale CSI data for gesture recognition using Intel 5300 NICs [15]. SignFi [27] focuses on sign language recognition, capturing fine-grained hand gestures. MMFi [44] enables cross-modal analysis by combining WiFi CSI with synchronized video and depth data. XRF55 [36] introduces a large corpus of RF-based activity data for action recognition. Additional datasets such as ARIL [35] and CSIDA [19] support tasks like activity recognition and localization.

Table 1: Comparison of CSI-Bench with published datasets.

Figure 2: Representative CSI samples are shown for various scenarios, including human actions (jumping, running, walking, hand waving, falling, breathing), non-human motions (pet movement, iRobot, fan), and empty environments. In each sample, the x-axis represents time, and the y-axis represents the subcarrier index.

While these datasets have advanced the field, they share several limitations, as illustrated in Table 1. First, most are confined to controlled laboratory settings, offering limited variability in user behavior, device types, and environmental complexity.
Second, they primarily support single-task scenarios, lacking the multi-task supervision needed for training general-purpose models. Third, nearly all rely on the Intel 5300 chipset, which does not support continuous CSI recording. As a result, data is collected in fragmented, pre-scripted sessions using manual triggers, which limits dataset scale and fails to capture users’ natural daily activities. There remains a growing demand for a unified benchmark that reflects the complexity of real-world deployments, supports multiple sensing tasks, and enables evaluation across diverse users, environments, and hardware platforms. To address this need, we introduce CSI-Bench, a large-scale in-the-wild benchmark for passive WiFi sensing. # 3 Dataset Collection # 3.1 Overview To support robust and generalizable WiFi sensing research, we build a diverse collection of datasets captured in real-world environments using commercial WiFi devices. CSI-Bench spans over 460 hours of CSI recordings across 35 unique users, 26 environments, and 16 device types, covering both routers and edge devices operating under varied network conditions. Data is collected in homes, offices, and public indoor areas with minimal control over ambient interference or user behavior. Each dataset is designed to support one or more sensing tasks, including fall detection (Fall), breathing monitoring (Breath), localization (Loc.), human activity recognition (HAR), user identification (UID), and proximity estimation (Prox.). Representative CSI samples illustrating task-specific signal patterns are visualized in Figure 2. The following section details the hardware, environments, and collection protocols used to capture the datasets. # 3.2 Devices and Hardware Setup Hardware. 
To emulate the heterogeneity of real-world WiFi sensing deployments, we select a diverse set of WiFi routers and edge IoT devices commonly found in residential and commercial environments, with chipset models including Qualcomm, NXP, Broadcom, and Espressif [7, 6, 2, 3]. All devices collectively support the IEEE 802.11n/ac/ax standards, with MIMO configurations ranging from 1×1 to 2×2 and 1×4, and channel bandwidths of 20, 40, and 80 MHz. The detailed specifications of the edge IoT devices are provided in Appendix A.

CSI extraction and synchronization. In our system, IoT client devices periodically transmit CSI packets to routers at two sounding rates: 100 Hz for general sensing tasks and 30 Hz for breathing detection, accommodating different temporal dynamics. Given the distributed nature of these devices, propagation delays and clock drifts cause misalignment in CSI data streams. To address this, the router coordinates data collection by sending batch requests with defined time windows, asking devices to record and upload CSI within the same interval. Each device uses its own system clock to timestamp the data, which allows us to later align the streams in software. Routers handle CSI extraction, buffering, and data upload to cloud servers, running either Linux or FreeRTOS depending on their chipset.

CSI format. Due to hardware diversity, the CSI data in CSI-Bench varies in subcarrier granularity, antenna configurations, and supported bandwidths across different chipset architectures. For example, the NXP 88W8997 provides a 2×2 MIMO configuration with 58 subcarriers at 40 MHz on 5 GHz, while the ESP32-S3, with a 1×1 setup, captures 64 subcarriers at 20 MHz on 2.4 GHz.
Qualcomm IPQ4019/IPQ4018 devices offer a 1×2 MIMO configuration, supporting 128 subcarriers at 40 MHz and 256 subcarriers at 80 MHz on 5 GHz. In contrast, the Broadcom BCM4345 employs a 1×4 antenna configuration, providing only 14/28 subcarriers at 20/40 MHz due to proprietary subcarrier grouping. These variations ensure CSI-Bench captures a wide spectrum of signal characteristics, enabling comprehensive evaluation of model generalization across heterogeneous hardware platforms.

# 3.3 Continuous Data Recording

To overcome the limitations of prior works that typically rely on controlled environments or predefined protocols, we develop an integrated pipeline enabling scalable, in-the-wild CSI data collection across diverse residential settings. Leveraging commercial routers with developer-accessible CSI extraction, cloud infrastructure, and user-friendly annotation tools, our system unobtrusively captures large-scale CSI data from everyday WiFi usage without device-side modifications. We collaborate with multiple router chipset vendors, who provided firmware and drivers with CSI extraction enabled, along with proprietary CSI capture utilities. Building on this, we develop our own tools to programmatically capture and manage CSI data. Specifically, we design separate tools for Linux and FreeRTOS [4], each designed to send commands from the Linux application layer directly to the WLAN kernel module, enabling continuous collection and buffering of CSI from all registered devices into unified binary files, which are periodically uploaded to cloud storage via AWS S3 APIs [1]. Each file is timestamped using the router’s local system time embedded in the filename, ensuring straightforward temporal alignment across deployments. Upload frequency dynamically adjusts based on device count and bandwidth utilization.
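The window-based alignment described above can be sketched in a few lines. This is a toy illustration, not the deployed tooling: the device names, timestamps, and frame payloads below are hypothetical. Each device stamps frames with its local clock, and the coordinator keeps only frames that fall inside the requested batch window.

```python
# Toy sketch of batch-window alignment of per-device CSI streams.
# Device names, timestamps (seconds), and frame payloads are hypothetical.

def align_to_window(streams, start, end):
    """Keep only frames whose local timestamps fall inside [start, end)."""
    aligned = {}
    for device, frames in streams.items():
        aligned[device] = [(t, f) for (t, f) in frames if start <= t < end]
    return aligned

streams = {
    "router_a": [(0.00, "f0"), (0.01, "f1"), (1.20, "f2")],
    "bulb_b":   [(0.50, "g0"), (0.99, "g1"), (2.00, "g2")],
}
window = align_to_window(streams, start=0.0, end=1.0)
# router_a keeps its two sub-second frames; bulb_b likewise drops the 2.00 s frame
```

Residual clock drift between devices is then refined downstream, using the packet-level timestamps embedded in the uploaded files.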
We also develop a lightweight user annotation tool integrated into Google Spreadsheet [5], allowing users to optionally log daily activities—such as waking up, sleeping, leaving or returning home, room occupancy, or inactivity—by tapping buttons that record local timestamps. Our system queries and retrieves CSI files matching these events, concatenates the relevant segments, and refines alignment using embedded packet-level timestamps, resulting in precisely labeled CSI data segments. We collect CSI of motion from non-human sources such as pets and cleaning robots when users are not home. When possible, time-aligned external information is collected through camera recordings and local sensor logs to annotate non-human motions or highlight environmental changes. This pipeline enables extensive, accurately labeled CSI data collection reflective of authentic user behaviors and diverse environments, supporting a wide range of large-scale research applications.

Table 2: Summary of tasks, dataset statistics, partitions, and evaluation protocols. ST = single-task specialist, MT = multi-task joint.

# 3.4 Environments and Contexts

We collect our data across a broad range of environments, including compact apartments, multi-room houses, offices, hallways, and open indoor public spaces, as detailed in Appendix A.2. These settings introduce diverse physical characteristics, including complex layouts, clutter, variable wall materials, and occlusions, that significantly affect signal propagation. Unlike prior datasets collected under controlled conditions, our data captures CSI under authentic, in-the-wild conditions. Devices were positioned freely by users, and data was recorded continuously during natural daily activities. Consequently, the CSI reflects realistic variability introduced by NLoS links, neighboring motion, background activity from appliances, WiFi traffic, and environmental factors such as wind and even raindrops.
This level of interference is critical for benchmarking the robustness of WiFi sensing models, particularly for healthcare applications where reliable through-the-wall monitoring in uncontrolled home environments is essential.

# 3.5 Data Collection Protocols

Although participants are free to move naturally and perform tasks as they would in daily life, we implement basic data collection protocols to ensure consistency and repeatability. Each session begins with a brief calibration phase to verify device connectivity, synchronize timestamps, and confirm stable CSI logging. The recorded activities span a range of motion patterns, including sitting still, walking, waving hands, and running through hallways. All participants signed a consent form prior to participation and were compensated at around $20/hr. Data from non-human motion sources—such as pets, cleaning robots, and electrical appliances like fans—are collected when users are not present. Detailed task-specific data collection procedures are provided in Appendix A.

# 3.6 Dataset Statistics

CSI-Bench spans seven classification tasks with varied sensing objectives. Table 2 summarizes dataset scale and coverage, including the number of samples, recording duration, users, environments, and device types. This diversity reflects real-world deployment conditions and supports robust generalization benchmarking.

# 4 Data Quality and Preprocessing

# 4.1 CSI Quality Verification

Motivation. CSI quality checking is critical for ensuring data reliability, as raw measurements often suffer from signal dropouts, high noise levels, or inconsistent timestamps. These issues can arise due to differences in chipset design, CSI extraction algorithms, hardware configurations (e.g., antenna layout, RF circuitry), and deployment conditions. As illustrated in Figure 3a, CSI quality varies from device to device.
Device 1 exhibits the best CSI quality, with consistent temporal patterns and a stable sampling rate near the nominal 30 and 100 Hz. Device 2 shows moderate quality with occasional outliers and a lower sampling rate, while Device 3 suffers from the poorest quality, marked by irregular sampling intervals and temporal clustering of CSI frames. Given the diverse hardware platforms and settings in CSI-Bench, these quality variations must be systematically addressed to enable meaningful benchmarking.

Figure 3: (a) CSI quality examples from three representative devices; (b) the MATLAB-based CSI verification tool.

Verification tool. To systematically assess and ensure CSI data quality, we adopt a structured evaluation framework introduced in an existing work [20], which models CSI verification as a multi-layered pipeline. Each layer of this pipeline targets a specific aspect of data integrity using customized metrics, covering timestamp consistency, CSI amplitude stability, and other modality-specific characteristics. This design allows us to characterize various perspectives of CSI quality and adapt the evaluation to different sensing tasks. In the context of CSI-Bench, we apply this framework to filter out samples with timestamp irregularities, unstable or flat CSI amplitude, and signal dropout, ensuring that only reliable traces are included in the benchmark. The CSI verification tool is implemented in MATLAB, as shown in Figure 3b, to facilitate systematic quality control before incorporating data into CSI-Bench.
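The flavor of these checks can be conveyed with a short sketch. This is illustrative only: the actual verification tool is the MATLAB pipeline described above, and the thresholds and nominal rate here are made-up values. The sketch screens a trace for sampling-rate drift, near-constant amplitude, and long dropouts.

```python
# Illustrative CSI quality screening: sampling rate, flat amplitude, dropouts.
# Thresholds (rate_tol, flat_std, max_gap_s) are hypothetical, not the tool's.
import numpy as np

def check_trace(timestamps, amplitudes, nominal_hz=100.0,
                rate_tol=0.2, flat_std=1e-3, max_gap_s=0.5):
    gaps = np.diff(timestamps)
    eff_hz = 1.0 / np.mean(gaps)                      # effective sampling rate
    ok_rate = abs(eff_hz - nominal_hz) / nominal_hz <= rate_tol
    ok_flat = np.std(amplitudes) > flat_std           # reject near-constant CSI
    ok_gap = np.max(gaps) <= max_gap_s                # reject long dropouts
    return {"rate_hz": float(eff_hz),
            "pass": bool(ok_rate and ok_flat and ok_gap)}

t = np.arange(0, 5, 0.01)                             # ~100 Hz over 5 s
amp = 1.0 + 0.1 * np.sin(2 * np.pi * 0.3 * t)         # breathing-like variation
report = check_trace(t, amp)
```

A trace failing any of the three checks would be excluded before entering the benchmark.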
# 4.2 CSI Preprocessing Pipeline

Amplitude extraction. In real-world measurements, CSI is often corrupted by phase noise caused by timing and frequency synchronization offsets, as well as additive thermal noise. In the literature, two main approaches are used to handle phase distortions: phase cleaning [10, 30, 43] and phase elimination [37, 41, 46]. Phase cleaning aims to correct the distorted phase but cannot fully eliminate initial phase offsets, making it less reliable for consistent processing across diverse devices. Therefore, in our benchmark, we adopt the phase elimination approach. Specifically, if the extracted CSI at time $t$ and subcarrier frequency $f$ is represented as $H(f, t)$, we use the amplitude $|H(f, t)|$ as input, eliminating the unreliable phase component.

Data segmentation. To facilitate task-specific model training, we segment the collected CSI data into fixed-duration samples. For tasks including Fall Detection, Localization, Motion Source Recognition, and the Multi-Task dataset, we segment CSI data into 5-second intervals. For the Breathing Detection dataset, considering the slower temporal variations inherent to respiration signals, we segment the CSI data into 10-second intervals.

Amplitude normalization. The automatic gain controller on commercial WiFi devices affects the reported CSI amplitude. To mitigate the effects of varying signal strengths, we normalize the power response of each subcarrier across the entire frequency band as $\hat{H}(f_k, t) = \frac{|H(f_k, t)|^2}{\sum_{k'=1}^{N_s} |H(f_{k'}, t)|^2}$ for all $k$, where $N_s$ is the number of subcarriers and $H(f_k, t)$ is the original reported CSI on the $k$-th subcarrier.

Subcarrier standardization. Due to hardware differences, the number of subcarriers in CSI samples can vary across different platforms, leading to inconsistent input shapes along the frequency dimension.
To standardize the data, we select a fixed number of subcarriers and apply zero-padding or clipping in the frequency dimension as needed. This ensures all samples have consistent input shapes across the dataset.

# 5 Benchmark Design

# 5.1 Task Suite and Metrics

CSI-Bench supports a suite of supervised classification tasks for WiFi sensing, covering key applications in health monitoring and ambient intelligence. Each task operates on a fixed-length CSI tensor $\mathbf{X} \in \mathbb{R}^{C \times K \times T}$, where $C$ is the channel count, $K$ is the standardized subcarrier dimension over antenna arrays, and $T$ is the temporal length of samples (5 seconds for most tasks, and 10 seconds for breathing detection).

Single-task specialist datasets. The benchmark includes four single-task datasets: Fall Detection (binary classification of fall vs. non-fall), Breathing Detection (binary detection during sleep, sampled at 30 Hz), Motion Source Recognition (four-class classification of human, pet, robot, and fan motion), and Room-Level Localization (six-way classification of the user location). These are evaluated independently using dedicated datasets.

Multi-task joint dataset. A multi-task dataset contains co-labeled samples for three tasks: Human Activity Recognition (five-class classification), User Identification (multi-class over 6 users), and Proximity Recognition (four-class distance estimation). This enables parameter-efficient multi-task training with a shared backbone and task-specific heads.

All tasks are evaluated using overall accuracy and weighted F1-score. Accuracy provides a global measure of classification correctness, while the weighted F1-score accounts for class imbalance by averaging per-class F1-scores weighted by class frequency. This is especially relevant for tasks with skewed distributions such as fall detection or proximity recognition.
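As a concrete reference for the second metric, the weighted F1-score averages per-class F1 with weights proportional to class support. A minimal sketch follows; the labels are illustrative, not taken from the dataset.

```python
# Weighted F1: per-class F1 averaged with weights = class support / total.
from collections import Counter

def weighted_f1(y_true, y_pred):
    classes = set(y_true)
    support = Counter(y_true)
    total, score = len(y_true), 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += (support[c] / total) * f1      # weight by class frequency
    return score

# Illustrative imbalanced case: two "fall" samples, four "none" samples.
y_true = ["fall", "fall", "none", "none", "none", "none"]
y_pred = ["fall", "none", "none", "none", "none", "fall"]
```

Here the minority "fall" class (F1 = 0.5) contributes only a third of the final score, so the metric tracks performance on the dominant class while still penalizing minority-class errors.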
# 5.2 Evaluation Protocols

CSI-Bench provides standardized train/validation/test splits for all tasks to ensure fair comparison and reproducibility. For each dataset, 70% of samples are used for training, 15% for validation, and the remaining 15% for testing, with class balance and environment distribution preserved. Evaluation protocols and statistics for each task are summarized in Table 2. To evaluate real-world robustness, each test sample is annotated with a difficulty level—Easy, Medium, or Hard—based on signal quality, environment, and subject complexity. For the multi-task dataset, we define three out-of-distribution (OOD) splits—cross-user, cross-environment, and cross-device—reflecting domain shifts in deployment. These settings enable systematic robustness and generalization evaluation. Full details are provided in Appendix A.

# 5.3 Baseline Models

To establish reference performance and benchmark learning effectiveness on CSI-Bench, we implement a suite of baseline models across single-task supervised and multi-task learning settings.

Supervised learning. We evaluate representative architectures spanning fully connected networks (MLP) [32], recurrent models (LSTM) [17], convolutional backbones (ResNet-18) [16], and transformer-based sequence learners, including Vision Transformer (ViT) [11], PatchTST [29], and TimeSformer-1D [8]. All models are trained independently on each task using the corresponding specialist dataset. Input CSI tensors are amplitude-only, with hyperparameters tuned using validation performance.

Multi-task learning. To explore parameter efficiency and cross-task knowledge sharing, we also implement multi-task learning using a shared backbone with lightweight task-specific adapters [9]. We adopt the same backbones as in the supervised setting and attach low-rank (LoRA) adapters [18] and separate classification heads for each task.
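The low-rank adapter idea can be illustrated with a small numpy sketch. This is not the paper's implementation: the dimensions, rank, and task names are hypothetical. A frozen shared weight W is specialized per task by a rank-r update B @ A, so each task adds only r·(d_in + d_out) parameters instead of a full weight copy.

```python
# Illustrative LoRA-style adaptation of a shared linear projection.
# Shapes, rank, and task names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 4
W = rng.normal(size=(d_out, d_in))              # shared, frozen backbone weight

tasks = ["activity", "user_id", "proximity"]
adapters = {t: (rng.normal(size=(d_out, r)),    # B: d_out x r
                rng.normal(size=(r, d_in)))     # A: r x d_in
            for t in tasks}

def forward(x, task):
    B, A = adapters[task]
    return (W + B @ A) @ x                      # task-adapted projection

x = rng.normal(size=d_in)
outs = {t: forward(x, t) for t in tasks}
# Per-task adapter params: r*(d_in+d_out) = 384, vs. d_out*d_in = 2048 for a
# full per-task weight copy — the source of the parameter savings reported.
```

In the real models the adapted layers sit inside a Transformer backbone and each task additionally gets its own classification head; the sketch only conveys the low-rank factorization.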
During training, task-labeled samples are drawn from the joint multi-task dataset, and optimization proceeds with shared backbone updates and task-specific losses. All models are trained using the AdamW optimizer [26] with a cosine learning rate schedule and early stopping. Detailed architecture configurations and training hyperparameters are provided in Appendix B.

Table 3: Performance comparison of supervised models across four core WiFi sensing tasks. Accuracy (Acc) and F1-score are reported as mean ± std (%) over three runs.

Table 4: Comparison of task-specific and multi-task training for the Transformer model across shared-data tasks. The improvements (Δ) are reported as mean ± std (%) over three runs.

# 5.4 Results

We report performance on all tasks using standard supervised learning baselines. Table 3 summarizes accuracy and weighted F1-score for supervised models trained on the specialist datasets. Among the models, transformer-based architectures—particularly TimeSformer-1D and PatchTST—consistently achieve strong performance, highlighting their effectiveness in capturing temporal dynamics in high-dimensional CSI data. Simpler models such as MLP and LSTM perform adequately on some tasks but show clear limitations in harder cases. Multi-task learning results are presented in Table 4. Compared to task-specific training, our multi-task models with a shared Transformer backbone and lightweight adapter-based heads achieve improved performance across multiple tasks. These findings highlight the effectiveness of joint training in capturing shared representations while preserving task-specific specialization through adapters. They also suggest that multi-task learning can improve generalization in real-world settings where sensing tasks are naturally co-located and co-labeled. In addition to strong performance, our multi-task framework significantly reduces model complexity and training cost.
By consolidating three single-task Transformers into a single backbone with task-specific adapters, we reduce the total parameter count by over 60%. This compression is achieved without degrading task performance. Moreover, because all tasks are trained jointly in a single pass, the wall-clock training time is reduced by nearly 3× compared to training separate models for each task. These gains in model size and training efficiency make our approach especially suitable for deployment on resource-constrained edge devices, where memory and compute budgets are limited. We also report task-wise performance stratified by difficulty level (Easy, Medium, Hard) for the single-task datasets in Appendix C.1. Performance drops on hard samples for tasks like fall detection due to signal degradation, cluttered environments, and hardware diversity, reinforcing the need for deployment-aware evaluation. While models perform well under in-distribution settings, we observe significant performance degradation under domain shifts. OOD evaluation across user, environment, and device axes—summarized in Appendix C.2—reveals notable challenges in generalization, particularly in cross-device settings. This highlights a key motivation for developing robust and adaptive models in future work.

# 5.5 Discussion and Takeaways

CSI-Bench enables scalable research on high-dimensional CSI-based sensing under real-world conditions. Its large scale, diverse hardware coverage, and co-labeled tasks support the development of unified multi-task models for on-device health monitoring. Multi-task learning yields competitive performance while significantly reducing model size and inference cost, making it well-suited for resource-constrained edge deployment. However, performance drops notably under OOD settings, particularly in cross-device scenarios, exposing persistent generalization challenges.
Failure cases often arise from hardware heterogeneity, cluttered environments, or degraded signal quality. Overall, CSI-Bench offers a realistic and comprehensive testbed for developing robust, efficient, and generalizable WiFi sensing systems in unconstrained environments.

# 6 Limitations

The dataset uses amplitude-only CSI features due to phase instability across platforms. While this is practical, it limits exploration of techniques that exploit calibrated phase or angle-of-arrival information. CSI-Bench is designed around classification tasks. Extensions to regression (e.g., continuous vital sign estimation) and more temporally structured tasks (e.g., long-term activity tracking) are promising but not yet included. We release all data, tools, and splits to support community-driven extensions and improvements.
WiFi sensing has emerged as a compelling contactless modality for human activity monitoring by capturing fine-grained variations in Channel State Information (CSI). Its ability to operate continuously and non-intrusively while preserving user privacy makes it particularly suitable for health monitoring. However, existing WiFi sensing systems struggle to generalize in real-world settings, largely due to datasets collected in controlled environments with homogeneous hardware and fragmented, session-based recordings that fail to reflect continuous daily activity. We present CSI-Bench, a large-scale, in-the-wild benchmark dataset collected using commercial WiFi edge devices across 26 diverse indoor environments with 35 real users. Spanning over 461 hours of effective data, CSI-Bench captures realistic signal variability under natural conditions. It includes task-specific datasets for fall detection, breathing monitoring, localization, and motion source recognition, as well as a co-labeled multi-task dataset with joint annotations for user identity, activity, and proximity. To support the development of robust and generalizable models, CSI-Bench provides standardized evaluation splits and baseline results for both single-task and multi-task learning. CSI-Bench offers a foundation for scalable, privacy-preserving WiFi sensing systems in health and broader human-centric applications.
[ "eess.SP", "cs.AI", "cs.DB" ]
# 1. Introduction

Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing, fundamentally reshaping the landscape of artificial intelligence. A burgeoning area of research now focuses on extending these models to the speech modality, leveraging their strong semantic understanding capabilities to achieve a wider range of functions. This has led to the development of Speech Large Language Models (S-LLMs), which integrate speech encoders, such as Self-Supervised Learning (SSL) models [1] and Whisper encoders [2], with LLM backbones like Llama [3] and Qwen [4] to create end-to-end systems for speech processing tasks [5, 6, 7], such as Automatic Speech Recognition (ASR). Recent studies, such as Qwen-Audio [6], SpeechGPT [5], and Step-Audio [8], have demonstrated the effectiveness of connecting pre-trained audio encoders to large language models through alignment modules. These modules include Q-Formers [9, 10] and simple linear projectors [6], serving to bridge the modality gap between continuous speech representations and the discrete token space of LLMs. Such integration enables the models to leverage the rich linguistic knowledge embedded in LLMs, facilitating the understanding of both linguistic and paralinguistic information encoded in speech.

This paper reports the system submitted by NTU Speechlab for Track I of the Interspeech 2025 Multilingual Conversational Speech and Language Model (MLC-SLM) Challenge. Our contributions are as follows.

• We demonstrate that full-parameter tuning of the LLM, combined with a frozen Whisper encoder, is effective for adapting a speech large language model (S-LLM) to automatic speech recognition (ASR) tasks.
• We propose to use language-specific prompts in S-LLMs for ASR, which significantly helps multilingual ASR.
• We show that a model averaging strategy delivers a further performance gain for the LLM-based system.
Our submitted model achieved fifth place on Track I, significantly improving the baseline performance on the evaluation set in terms of the averaged Mixed Error Rate (MER), from a baseline of 20.2% to 10.6%, marking a 48% relative improvement. This report details our system’s architecture, training strategy, and the experimental results that validate our approach, offering valuable insights for the continued development of high-performance, multilingual S-LLMs.

# 2. Methodology

In this section, we present the framework of our LLM-based multilingual ASR system. The core design follows a post-alignment structure, where speech features are aligned with language semantic tokens for seamless integration into a pretrained LLM. The model structure is shown in Figure 1. The model transcribes speech input using a language-specific prompt that matches the spoken language, ensuring it operates purely as a multilingual ASR system rather than performing translation. The architecture consists of three core components: a speech encoder, a modality adaptor, and an LLM backbone.

Speech Encoder. We employ the encoder of Whisper-large-v3 within our S-LLM due to its strong multilingual capability, backed by pretraining on diverse linguistic data, to transform speech signals into fixed-length embeddings covering up to 30 seconds of audio.

Modality Adaptor. We employ a lightweight two-layer MLP as the adaptor to bridge the speech modality features and the LLM’s embedding space. The adaptor is randomly initialized and trained jointly with the LLM.

LLM Backbone. Unlike monolingual ASR, multilingual ASR demands a more robust language model capable of capturing language-specific patterns essential for accurate transcription. To this end, we employ Gemma-2-2B as the LLM for transcription generation. We propose to use language-specific prompts during both training and inference, as illustrated in Figure 2.
This language-specific prompting enhances intra-model consistency during autoregressive generation. During supervised fine-tuning, we freeze the encoder while optimizing the adaptor and LLM via an autoregressive loss. Following traditional ASR practices [11, 12], we further enhance performance through model averaging, applying equal-weighted averaging over the last 15 checkpoints.

Figure 1: Proposed model architecture. We utilize the Whisper-large-v3 encoder as the audio encoder and Gemma-2-2B as the backbone LLM. To interpret the audio representations output by the encoder, we simply place a linear projector after it. During training, the encoder is frozen, and full-parameter tuning is applied to both the linear projector and the LLM. The language-specific prompt is specially designed for the MLC-SLM Challenge, using the prompts shown in Figure 2 for each training sample with a language label.

Figure 2: Language-specific prompts. All prompts have the same meaning, “Transcribe speech to text,” but are written in the specific language of the given speech.

# 3. Experiments

In this section, we present the datasets we use and the modeling configurations in detail.

# 3.1. Dataset

We train our models on a comprehensive multilingual corpus totaling approximately 17,500 hours of speech data, including augmentations. The primary resource is the MLC-SLM dataset, officially provided by the MLC-SLM Challenge. It contains roughly 1,500 hours of conversational speech spanning 11 languages: English, French, German, Italian, Portuguese, Spanish, Japanese, Korean, Russian, Thai, and Vietnamese. The English subset covers five accents (American, British, Filipino, Australian, and Indian), each contributing approximately 100 hours. All recordings were collected in quiet indoor environments using consumer-grade devices to ensure high audio quality and realistic conversational conditions.
Table 1: MLC-SLM dataset statistics. It includes a 1,500-hour training set and 32 hours of validation and evaluation sets, covering eleven languages and five different English accents.

To further enhance our multilingual ASR performance, we supplement our training with the following publicly available datasets:

• CommonVoice 21.0 [13]: A crowdsourced multilingual corpus of read speech curated by Mozilla. We select the subsets corresponding to the 11 target languages, amounting to approximately 4,467 hours of audio.
• GigaSpeech [14] and GigaSpeech2 [15]: GigaSpeech is an English-centric, multi-domain speech corpus collected from podcasts, audiobooks, and YouTube. Its extension, GigaSpeech2, adds data for lower-resource languages. We incorporate 901 hours of English from GigaSpeech, along with 2,147 hours of Thai and Vietnamese from GigaSpeech2.
• Multilingual LibriSpeech [16]: A large-scale corpus of read speech derived from public-domain audiobooks in eight European languages. Our training set includes the French, German, Italian, Portuguese, and Spanish subsets, totaling 4,367 hours of aligned audio and transcriptions.
• Multilingual TEDx [17]: A curated speech corpus constructed from TEDx talks on YouTube, featuring time-aligned transcriptions across a wide range of languages. We include French, German, Italian, Portuguese, Russian, and Spanish, contributing approximately 559 hours of semi-spontaneous transcribed speech.
• ReazonSpeech [18]: A Japanese corpus with around 486 hours of transcribed speech from diverse domains such as news, podcasts, and read materials.
• Zeroth-Korean and Seoul Corpus [19]: Zeroth-Korean provides 51.6 hours of high-quality read speech recorded from 105 native speakers in controlled settings, while the Seoul Corpus offers 22 hours of spontaneous Korean speech, including dialogues, monologues, and read passages from speakers of diverse age groups and dialectal backgrounds.
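As a quick sanity check on the figures listed above, the external-corpus durations sum to just over 13,000 hours, matching the "more than 13k hours" total reported for Table 2:

```python
# Hours of external training data listed above (approximate, from the text).
external_hours = {
    "CommonVoice 21.0": 4467,
    "GigaSpeech (en)": 901,
    "GigaSpeech2 (th+vi)": 2147,
    "Multilingual LibriSpeech": 4367,
    "Multilingual TEDx": 559,
    "ReazonSpeech": 486,
    "Zeroth-Korean": 51.6,
    "Seoul Corpus": 22,
}
total_external = sum(external_hours.values())
print(round(total_external, 1))   # 13000.6 -> "more than 13k hours"

# Adding the ~1,500-hour MLC-SLM set plus its augmented copies
# (speed and volume perturbations) brings the corpus to roughly
# the 17,500 hours quoted earlier.
```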
The MLC-SLM dataset statistics are summarized in Table 1, and a detailed breakdown of the external dataset composition across languages is provided in Table 2.

Table 2: External openly accessible datasets used in model training. Durations are shown in hours. These comprise more than 13k hours in total.

Figure 3: Hallucination examples from the validation set. English-Indian-00700_002_phone-O1-075765-076403: “my my fa- fa- other fa- um um um um …”; Vietnamese-0593_001_phone-O1-095060-095201: “bùng bùng bùng …”.

# 3.2. Experimental setup

In this challenge, in addition to the baseline system, we build four models following the architecture proposed in Figure 1, as listed in Table 3. Firstly, the baseline system is provided by the official challenge repository. This model follows a two-stage training strategy: first, only the projector is trained, while both the Whisper encoder and the LLM remain fully frozen, to learn a stable mapping from speech embeddings into the LLM’s input space; then, in the second stage, the projector and a LoRA adapter of the LLM are jointly fine-tuned, adapting the language model’s internal representations to speech-derived inputs with minimal tunable parameters. We then build our models by replacing the backbone LLM from Qwen2.5 with Gemma-2, fully fine-tuning the LLM’s parameters together with a two-layer linear projector configured with a downsampling rate of 5, to obtain better ASR performance. Our S1 model is trained on the MLC-SLM training set only, with the text prompt fixed to “Transcribe speech to text” regardless of language. The S2 model uses the same training data as S1 but adopts the language-specific prompts shown in Figure 2. S3 additionally uses the CommonVoice data, bringing the training set to roughly six thousand hours.
Finally, the S4 model uses the largest-scale training data among the four models: not only all the external data listed in Table 2, but also MLC-SLM training data augmented with simple speed (0.9x and 1.1x) and volume (0.15x–1.15x) perturbations, making up more than 17 thousand hours in total. We built our models using the SLAM-LLM [20] toolkit, running on 8 NVIDIA H20-96GB GPUs. Under our configuration, each GPU processes a batch of 4 samples, and the batch sizes reported in Table 3 are the product of the number of GPUs, the per-GPU batch size, and the number of gradient-accumulation steps. We also employ an early-stopping strategy during training, with a tolerance of 2,000 training steps, based on validation accuracy. Moreover, for systems S3 and S4, we use the model averaging strategy, equally averaging the last 15 checkpoints, each 400 update steps apart, to obtain more robust models. During inference, we use beam search with a beam size of 4 and set the no-repeat n-gram size to 5, to prevent the hallucinations observed in the validation experiments, examples of which are shown in Figure 3. These samples share a common pattern: an n-gram phrase repeated dozens of times until the end of the sentence. Such cases appeared in less than 0.05% of the validation set, but contributed more than 0.8% of WER.

# 4. Results

In this section, we present and discuss the experimental results comparing the baseline systems with our proposed models. Table 4 summarizes the Word Error Rate (WER) and Character Error Rate (CER) achieved by our models across eleven languages and five accents on the validation set. In detail, we compute CER for Japanese, Korean, and Thai, and WER for the remaining languages, based on the characteristics of each language. For Avg. Valid. and Avg.
Eval., we report the averaged Mixed Error Rate (MER) on both the validation and evaluation sets to show the overall performance of each model.

Table 3: Model training configurations. Baseline is released by the challenge officials. FPT stands for full-parameter tuning. CV is the CommonVoice 21.0 dataset, and Ext. includes all the external datasets listed in Table 2. LID means the language-specific prompt shown in Figure 2 is used. Durations are shown in hours.

Table 4: Word Error Rate (WER↓) and Character Error Rate (CER↓) results for each model. The per-language results are for the validation set. Mixed Error Rate (MER↓) is reported for average performance. AVG. uses the latest 15 checkpoints for the equal-weighted model average, where each checkpoint is trained for 400 steps. The first two columns compare the off-the-shelf Whisper model and the officially released baseline; these results are from the official repository. Columns S1–S3 show the single checkpoints with the best validation accuracy, while “S3 AVG.” and “S4 AVG.” report the model average of the latest 15 checkpoints for the S3 and S4 models, respectively.

On average, we observe a clear trend of error reduction from the baseline through S3, with the most significant single gain coming from checkpoint averaging. On the validation set, the baseline system achieves an overall MER of 21.49%; the MER is reduced to 16.60% and further to 14.87% in S1 and S2, by fully fine-tuning the LLM and introducing the language-specific prompts, respectively. Then, using external CommonVoice data, S3 reduces the MER to 13.63%, and model averaging for S3 further improves it to 11.70%. The S4 averaged model yields the best validation performance at 11.57% and delivers a consistent improvement on the evaluation data (10.58% MER) compared to S3 AVG. (10.84% MER).
These results confirm that equal-weighted model averaging substantially mitigates variance between training steps and enhances generalization.

A language-by-language breakdown shows that high-resource varieties such as English (American, Australian, and British accents) and Spanish attain the lowest absolute WERs, under 9% with S4 AVG., reflecting ample training data. Languages with richer morphology or tonal distinctions (e.g., French, Portuguese, Vietnamese) start with higher WERs (20–35% from Whisper/Baseline) but still achieve relative reductions of up to 35–45% with S4 AVG. For character-based languages such as Japanese, Korean, and Thai, absolute CER improvements of 10–20% demonstrate the robustness of our approach across diverse writing systems.

The consistent gains from S1 through S4 AVG. underscore two key insights for system design. First, model averaging is a simple yet powerful method to enhance performance without requiring architectural changes, additional data, or further training steps. Second, additional external data and data augmentation improve the model’s performance only by a limited margin, suggesting that adding more out-of-domain data is a sub-optimal choice for improving ASR results in a specific domain. Overall, our results validate the effectiveness of model averaging and full-parameter fine-tuning for multilingual speech recognition.
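The equal-weighted checkpoint averaging used for S3 AVG. and S4 AVG. can be sketched in a few lines. This is a minimal illustration assuming checkpoints are dicts mapping parameter names to flat weight lists; the parameter name and values below are toy examples, not the actual model weights.

```python
def average_checkpoints(checkpoints):
    """Equal-weighted average of checkpoints, each a dict mapping a
    parameter name to a flat list of weights (illustrative sketch of
    averaging the last N saved checkpoints)."""
    n = len(checkpoints)
    avg = {}
    for name in checkpoints[0]:
        # Zip the same parameter across all checkpoints and average
        # element-wise with equal weight 1/n.
        cols = zip(*(ckpt[name] for ckpt in checkpoints))
        avg[name] = [sum(col) / n for col in cols]
    return avg

# Toy example: the last 3 checkpoints of a single 4-weight projector
# (the paper averages the last 15 checkpoints of the full model).
ckpts = [
    {"proj.weight": [0.9, 2.1, -1.0, 0.4]},
    {"proj.weight": [1.1, 1.9, -1.2, 0.5]},
    {"proj.weight": [1.0, 2.0, -0.8, 0.6]},
]
avg = average_checkpoints(ckpts)
print([round(w, 6) for w in avg["proj.weight"]])   # [1.0, 2.0, -1.0, 0.5]
```

Because the averaged weights smooth out step-to-step fluctuations, the resulting model tends to generalize better, consistent with the MER gains reported for S3 AVG. and S4 AVG.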
# 5. Conclusion

This report has detailed the NTU Speechlab system developed for Track I of the Interspeech 2025 Multilingual Conversational Speech and Language Model (MLC-SLM) Challenge, where we achieved 5th place. We presented comprehensive analyses of our multilingual automatic speech recognition system, highlighting key advances in model architecture, data selection, and training strategy. In particular, language-specific prompts and model averaging were instrumental in boosting system performance across diverse languages. Compared to the initial baseline, our final model reduced the average Mixed Error Rate from 20.2% to 10.6% on the evaluation set, an absolute improvement of 9.6 percentage points (a 48% relative improvement). Our results demonstrate the effectiveness of our approach and offer practical insights for future Speech Large Language Models.
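The relative improvement quoted above follows directly from the two MER figures:

```python
baseline_mer, final_mer = 20.2, 10.6          # evaluation-set MER (%)
absolute_gain = baseline_mer - final_mer      # in percentage points
relative_gain = absolute_gain / baseline_mer  # fraction of baseline error removed

print(round(absolute_gain, 1))     # 9.6 (percentage points)
print(round(100 * relative_gain))  # 48 (% relative improvement)
```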
# 1 Introduction

Literary translation is a complex task that goes beyond simple word-for-word conversion. It demands a deep understanding of cultural nuances and the preservation of the author’s unique voice through creative adaptation for a new audience. Unlike technical translation, which prioritizes precision and clarity, literary translation requires fidelity to the stylistic essence, emotional resonance, and narrative depth of the source text. This complexity makes evaluation challenging, as the quality of a literary translation is subjective and varies with readers’ preferences: some favor literal accuracy, while others prioritize capturing the original’s spirit (Toral and Way, 2018; Thai et al., 2022).

Traditional evaluation metrics for machine translation, such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005), measure lexical overlap and syntactic similarity. While effective in technical contexts, these metrics struggle with literary texts, overlooking stylistic, discursive, and cultural factors critical to literature (Reiter, 2018). Neural metrics like BERTScore (Zhang et al., 2020) and COMET (Rei et al., 2020) enhance semantic analysis, yet they still fail to fully capture aesthetic and cultural nuances. This gap highlights the need for advanced methods tailored to the unique demands of literary translation (Yan et al., 2015; Freitag et al., 2021; Team et al., 2022).

Specialized metrics like Multidimensional Quality Metrics (MQM) (Lommel et al., 2014) and the Scalar Quality Metric (SQM) (Blain et al., 2023) attempt to address these shortcomings by evaluating style and fluency alongside accuracy. However, MQM’s reliance on human annotation limits its scalability, and SQM lacks the depth required for literary analysis. Large Language Models (LLMs) such as gpt-4, claude, and gemini show promise due to their advanced text generation and comprehension capabilities (Zhang et al., 2025).
Nevertheless, no single LLM can comprehensively assess the multifaceted aspects of translation quality (accuracy, fluency, style, and cultural fidelity), necessitating a multi-agent system that leverages their combined strengths (Karpinska and Iyyer, 2023).

Our method introduces a multi-agent system in which specialized agents evaluate distinct dimensions of literary translation quality. One agent ensures the consistency of terminology, such as character names; another verifies the alignment of narrative perspective; and a third assesses stylistic fidelity, including tone and rhythm. A coordinator integrates these evaluations into an Overall Translation Quality Score (OTQS), combining quantitative scores with qualitative insights. This approach capitalizes on the strengths of models like claude for style and Llama for customization, addressing the complex nature of literary translation quality assessment (TQA).

We evaluated this system on translations of The Little Prince and A Connecticut Yankee in King Arthur’s Court, generated by LLMs including gpt-4o (OpenAI et al., 2024), claude-3.7-sonnet, gemini-flash-1.5, solar-pro-preview (Kim et al., 2024), TowerBase-7B (Alves et al., 2024), and Llama-3.1-8B (Grattafiori et al., 2024). The experimental setup compared our OTQS against traditional metrics (BLEU, METEOR, ROUGE-1, ROUGE-L, WMT-KIWI) using a diverse dataset and a rigorous process to ensure validity. Results show that our system outperforms traditional metrics, with top models achieving OTQS scores up to 0.890 and the system capturing nuances, such as stylistic consistency, that BLEU (0.28) misses. Open-source models lagged behind, revealing gaps in their training. These findings confirm our approach’s effectiveness in tackling the complexities of literary TQA.
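The coordinator's aggregation can be sketched as a weighted average of the three agent scores. The weights below (0.4 for style, 0.3 each for terminology and narrative) are the ones the paper specifies in Section 2; the agent scores in the example are hypothetical.

```python
def otqs(s_term, s_narr, s_style, w_term=0.3, w_narr=0.3, w_style=0.4):
    """Overall Translation Quality Score: weighted average of the
    terminology, narrative, and stylistic agent scores (each in [0, 1])."""
    assert abs(w_term + w_narr + w_style - 1.0) < 1e-9, "weights must sum to 1"
    return w_term * s_term + w_narr * s_narr + w_style * s_style

# Hypothetical agent scores for one translation.
score = otqs(s_term=0.88, s_narr=0.91, s_style=0.93)
print(round(score, 3))   # 0.909
```

Weighting style highest reflects the paper's premise that stylistic fidelity is the most distinctive signal of literary (as opposed to technical) translation quality.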
The significance of this work lies in its contributions: (1) a scalable multi-agent TQA framework that enhances literary evaluation, (2) a comparative analysis of LLM performance in translation, and (3) a practical system adaptable for human-in-the-loop refinement. This advances TQA beyond conventional methods, providing a valuable tool for translators and researchers to improve literary translation quality.

# 2 Method: MAS-LitEval

MAS-LitEval employs specialized LLMs to assess literary translations, with agents focusing on terminology consistency, narrative perspective, and stylistic fidelity.

Overall Architecture. Three agents process the source and translated texts in parallel, with the texts segmented into 4096-token chunks. A coordinator combines their scores and feedback into an Overall Translation Quality Score (OTQS) and a detailed report, ensuring consistency across the entire text.

Roles of Each Agent. The roles of the agents are as follows:

• Terminology Consistency Agent: Ensures that key terms, such as character names or recurring motifs, remain consistent throughout the translation. Using named entity recognition (NER), it identifies these terms and assigns a score (from 0 to 1) based on their uniformity across the text.
• Narrative Perspective Consistency Agent: Confirms that the narrative voice (e.g., first-person or omniscient) aligns with the source text across all chunks. An LLM analyzes the segments, assigns a score (from 0 to 1), and flags deviations, such as perspective shifts, to preserve narrative integrity.
• Stylistic Consistency Agent: Evaluates tone, rhythm, and aesthetic fidelity by comparing stylistic traits between the source and target texts, assigning a fidelity score (from 0 to 1).

Collaboration Mechanism.
The coordinator computes the OTQS using a weighted average:

$$\mathrm{OTQS} = w_T \cdot S_T + w_N \cdot S_N + w_S \cdot S_S$$

where $S_T$, $S_N$, and $S_S$ are the scores from the terminology, narrative, and stylistic agents, respectively, and $w_T$, $w_N$, and $w_S$ are their corresponding weights. Given the emphasis on preserving the artistic essence of literary works, the weight for stylistic consistency ($w_S = 0.4$) is higher than those for terminology consistency ($w_T = 0.3$) and narrative consistency ($w_N = 0.3$), reflecting its pivotal role in literary translation quality (Yan et al., 2015; Freitag et al., 2021).

Rationale for Multi-Agent Approach. Literary translation quality encompasses multiple dimensions (terminology, narrative, and style) that a single LLM cannot fully evaluate. By employing specialized agents, MAS-LitEval harnesses diverse LLM capabilities, enhancing accuracy and efficiency compared to traditional metrics (Wu et al., 2024). This method ensures consistency is assessed across the entire text, overcoming the limitations of chunk-based evaluations, where local consistency might obscure global discrepancies.

Implementation Details. MAS-LitEval is implemented in Python, integrating spaCy for preprocessing and LLMs via APIs. Although texts are segmented into 4096-token chunks for processing, the agents maintain a global context: the Terminology Consistency Agent tracks terms across all chunks, the Narrative Perspective Consistency Agent ensures voice continuity, and the Stylistic Consistency Agent evaluates tone and rhythm holistically.

# 3 Experiment

We tested MAS-LitEval on translations of excerpts from The Little Prince and A Connecticut Yankee in King Arthur’s Court, generated by a mix of closed-source and open-source LLMs.

Dataset.
We selected two works for evaluation: a 5,000-word excerpt from the Korean translation of The Little Prince (originally in French) and a 4,000-word excerpt from the Korean translation of A Connecticut Yankee in King Arthur’s Court (originally in English). These texts were chosen for their stylistic richness and narrative complexity, making them well suited for assessing literary translation nuances. The LLMs generated translations from Korean into English. We also extracted Korean–English parallel data from additional literary works on Project Gutenberg Korea (http://projectgutenberg.kr/) and Project Gutenberg (https://www.gutenberg.org/), enriching the dataset. Table 1 provides statistics for the specific works used.

Models. Six LLMs were tested: closed-source models (gpt-4o, claude-3.7-sonnet, gemini-flash-1.5, solar-pro-preview) and open-source models (TowerBase-7B, Llama-3.1-8B). These models were chosen for their diverse strengths in language generation and comprehension, enabling a robust performance comparison.

Baselines. MAS-LitEval was compared against BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-1, ROUGE-L (Lin, 2004), and WMT-KIWI (Rei et al., 2023). Human reference translations, sourced from professional translations of the selected works, were used for the baseline metrics to ensure a fair comparison.

Evaluation Process. Translations generated by the LLMs were assessed using MAS-LitEval. Texts were segmented into 4096-token chunks, but the agents evaluated consistency across all chunks to capture global quality. For instance, the Terminology Consistency Agent assessed term uniformity across the entire text, addressing the limitation of chunk-based evaluations that intra-chunk consistency can mask cross-chunk discrepancies. Baseline metrics were calculated against human references, while MAS-LitEval operated reference-free, using only the source texts and machine-generated translations.

Technical Setup.
Experiments were conducted on an NVIDIA A100 GPU. Closed-source models were accessed via APIs, while open-source models were hosted locally with 4-bit quantization to optimize memory usage. The temperature was set to 0.1 to keep outputs near-deterministic and reproducible across runs.

# 4 Findings

MAS-LitEval evaluated translations of The Little Prince and A Connecticut Yankee in King Arthur’s Court generated by four closed-source and two open-source models. The results, presented in Table 2, highlight performance differences and our system’s ability to detect nuances overlooked by traditional metrics.

Performance of Top Models. claude-3.7-sonnet and gpt-4o achieved the highest OTQS scores: 0.890 and 0.875 for The Little Prince, and 0.880 and 0.860 for A Connecticut Yankee in King Arthur’s Court. claude-3.7-sonnet excelled in stylistic fidelity (0.93) and narrative consistency (0.91), key aspects of literary quality. For the phrase “On ne voit bien qu’avec le cœur,” it produced “It is only with the heart that one can see rightly” (stylistic score: 0.92), preserving the poetic nuance, while gpt-4o’s “One sees clearly only with the heart” (0.87) was less evocative according to the agent feedback. In A Connecticut Yankee in King Arthur’s Court, claude-3.7-sonnet maintained the medieval tone across chunks (narrative consistency: 0.90), whereas gpt-4o occasionally introduced modern phrasing (0.85).

Comparison of Open-Source and Closed-Source Models. Closed-source models outperformed their open-source counterparts. For The Little Prince, claude-3.7-sonnet (0.890) and gpt-4o (0.875) surpassed TowerBase-7B (0.745) and Llama-3.1-8B (0.710). The stylistic score for TowerBase-7B (0.70) indicated flatter translations compared to claude-3.7-sonnet’s nuanced output (0.92), suggesting limitations in open-source model resources.

Comparison with Baseline Metrics.
OTQS showed a strong correlation with WMT-KIWI (0.93) but weaker correlations with BLEU (0.62), METEOR (0.70), ROUGE-1 (0.68), and ROUGE-L (0.65), indicating that it captures distinct quality aspects. For The Little Prince, gpt-4o outperformed claude-3.7-sonnet in BLEU (0.30 vs. 0.28), but OTQS favored the latter (0.890 vs. 0.875) for its stylistic depth. ROUGE-1 and ROUGE-L exhibited similar patterns, missing narrative inconsistencies in models like TowerBase-7B (OTQS: 0.745). MAS-LitEval’s cross-chunk evaluation identified issues, such as tone shifts, that the baselines overlooked, underscoring its advantage in literary quality assessment.

Table 1: Dataset statistics for the specific works used in Korean-to-English translation.

Table 2: Evaluation results for the two literary works: LP (The Little Prince) and KA (A Connecticut Yankee in King Arthur’s Court). The highest scores for each metric and work are bolded.

# 5 Discussion

MAS-LitEval provides a sophisticated framework for literary Translation Quality Assessment (TQA). Below, we explore its strengths, limitations, and implications.

Advantages of the Multi-Agent Approach. MAS-LitEval’s multi-dimensional evaluation, covering terminology, narrative, and style, surpasses single-metric methods. For The Little Prince, BLEU favored gpt-4o (0.30) over claude-3.7-sonnet (0.28), but OTQS prioritized claude-3.7-sonnet (0.890 vs. 0.875) for its lyrical fidelity. This mirrors human-like judgment, valuing literary essence over lexical overlap. By evaluating consistency across chunks, MAS-LitEval detects global issues, such as narrative drift, that chunk-based approaches miss, offering a comprehensive assessment.

Challenges and Refinement Opportunities. Subjectivity in stylistic scoring poses a challenge. The difference between claude-3.7-sonnet’s 0.93 and gpt-4o’s 0.87 reflects potential LLM biases, which could lead to inconsistency. Averaging scores from multiple LLMs or calibrating with human annotations could improve reliability.
Additionally, incorporating domain-specific training or a cultural fidelity agent could address cultural nuances.

Implications for Literary Translation. MAS-LitEval’s scalability offers practical benefits. Publishers can use it to pre-screen translations, while educators can leverage its feedback to train translators. Its reference-free design suits literary contexts with multiple valid translations, unlike BLEU or ROUGE, which depend on fixed references. Future enhancements, such as human-in-the-loop integration, could further refine its accuracy, establishing it as a key tool for AI-supported literary TQA.

# 6 Limitations and Future Works

MAS-LitEval’s dataset, restricted to two works, limits its generalizability; expanding it to include genres like poetry, drama, and non-fiction is necessary. Stylistic scoring remains subjective and may reflect LLM training biases; averaging scores from multiple LLMs or using standardized rubrics could improve consistency. The absence of human evaluation leaves its alignment with expert judgment unconfirmed; integrating feedback from professional translators or scholars and correlating OTQS with human ratings would validate its reliability. Human input could also refine agent prompts and OTQS weightings. Future efforts should focus on expanding the dataset, incorporating human evaluation, refining stylistic scoring, and addressing cultural concerns to improve MAS-LitEval’s reliability and versatility in literary translation quality assessment.

# Acknowledgements

# References

Duarte M. Alves, José Pombal, Nuno M. Guerreiro, Pedro H. Martins, João Alves, Amin Farajian, Ben Peters, Ricardo Rei, Patrick Fernandes, Sweta Agrawal, Pierre Colombo, José G. C. de Souza, and André F. T. Martins. 2024. Tower: An open multilingual large language model for translation-related tasks.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments.
In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.

Frederic Blain, Chrysoula Zerva, Ricardo Rei, Nuno M. Guerreiro, Diptesh Kanojia, José G. C. de Souza, Beatriz Silva, Tânia Vaz, Yan Jingxuan, Fatemeh Azadi, Constantin Orasan, and André Martins. 2023. Findings of the WMT 2023 shared task on quality estimation. In Proceedings of the Eighth Conference on Machine Translation, pages 629–653, Singapore. Association for Computational Linguistics.

Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, et al. 2024. The Llama 3 herd of models.
Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, Zhiwei Zhao, and Zhiyu Ma. 2024. The Llama 3 herd of models. Marzena Karpinska and Mohit Iyyer. 2023. Large language models effectively leverage document-level context for literary translation, but critical errors persist. In Proceedings of the Eighth Conference on Machine Translation, pages 419–451, Singapore. Association for Computational Linguistics. Dahyun Kim, Chanjun Park, Sanghoon Kim, Wonsung Lee, Wonho Song, Yunsu Kim, Hyeonwoo Kim, Yungi Kim, Hyeonju Lee, Jihoo Kim, Changbae Ahn, Seonghoon Yang, Sukyung Lee, Hyunbyung Park, Gyoungjin Gim, Mikyoung Cha, Hwalsuk Lee, and Sunghun Kim. 2024. SOLAR 10.7B: Scaling large language models with simple yet effective depth upscaling. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Arle Lommel, Hans Uszkoreit, and Aljoscha Burchardt. 2014. MQM: A framework for declaring and describing translation quality metrics. Tradumàtica: traducció i tecnologies de la informació i la comunicació, (12):455–463. OpenAI: Aaron Hurst, Adam Lerer, Adam P. 
Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Ma˛dry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, Alex Renzin, Alex Tachard Passos, Alexander Kirillov, Alexi Christakis, Alexis Conneau, Ali Kamali, Allan Jabri, Allison Moyer, Allison Tam, Amadou Crookes, Amin Tootoochian, Amin Tootoonchian, Ananya Kumar, Andrea Vallone, Andrej Karpathy, Andrew Braunstein, Andrew Cann, Andrew Codispoti, Andrew Galu, Andrew Kondrich, Andrew Tulloch, Andrey Mishchenko, Angela Baek, Angela Jiang, Antoine Pelisse, Antonia Woodford, Anuj Gosalia, Arka Dhar, Ashley Pantuliano, Avi Nayak, Avital Oliver, Barret Zoph, Behrooz Ghorbani, Ben Leimberger, Ben Rossen, Ben Sokolowsky, Ben Wang, Benjamin Zweig, Beth Hoover, Blake Samic, Bob McGrew, Bobby Spero, Bogo Giertler, Bowen Cheng, Brad Lightcap, Brandon Walkin, Brendan Quinn, Brian Guarraci, Brian Hsu, Bright Kellogg, Brydon Eastman, Camillo Lugaresi, Carroll Wainwright, Cary Bassin, Cary Hudson, Casey Chu, Chad Nelson, Chak Li, Chan Jun Shern, Channing Conger, Charlotte Barette, Chelsea Voss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Claudia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, Dane Sherburn, Daniel Kappler, Daniel Levin, Daniel Levy, David Carr, David Farhi, David Mely, David Robinson, David Sasaki, Denny Jin, Dev Valladares, Dimitris Tsipras, Doug Li, Duc Phong Nguyen, Duncan Findlay, Edede Oiwoh, Edmund Wong, Ehsan Asdar, Elizabeth Proehl, Elizabeth Yang, Eric Antonow, Eric Kramer, Eric Peterson, Eric Sigler, Eric Wallace, Eugene Brevdo, Evan Mays, Farzad Khorasani, Felipe Petroski Such, Filippo Raso, Francis Zhang, Fred von Lohmann, Freddie Sulit, Gabriel Goh, Gene Oden, Geoff Salmon, Giulio Starace, Greg Brockman, Hadi 
Salman, Haiming Bao, Haitang Hu, Hannah Wong, Haoyu Wang, Heather Schmidt, Heather Whitney, Heewoo Jun, Hendrik Kirchner, Henrique Ponde de Oliveira Pinto, Hongyu Ren, Huiwen Chang, Hyung Won Chung, Ian Kivlichan, Ian O’Connell, Ian O’Connell, Ian Osband, Ian Silber, Ian Sohl, Ibrahim Okuyucu, Ikai Lan, Ilya Kostrikov, Ilya Sutskever, Ingmar Kanitscheider, Ishaan Gulrajani, Jacob Coxon, Jacob Menick, Jakub Pachocki, James Aung, James Betker, James Crooks, James Lennon, Jamie Kiros, Jan Leike, Jane Park, Jason Kwon, Jason Phang, Jason Teplitz, Jason Wei, Jason Wolfe, Jay Chen, Jeff Harris, Jenia Varavva, Jessica Gan Lee, Jessica Shieh, Ji Lin, Jiahui Yu, Jiayi Weng, Jie Tang, Jieqi Yu, Joanne Jang, Joaquin Quinonero Candela, Joe Beutler, Joe Landers, Joel Parish, Johannes Heidecke, John Schulman, Jonathan Lachman, Jonathan McKay, Jonathan Uesato, Jonathan Ward, Jong Wook Kim, Joost Huizinga, Jordan Sitkin, Jos Kraaijeveld, Josh Gross, Josh Kaplan, Josh Snyder, Joshua Achiam, Joy Jiao, Joyce Lee, Juntang Zhuang, Justyn Harriman, Kai Fricke, Kai Hayashi, Karan Singhal, Katy Shi, Kavin Karthik, Kayla Wood, Kendra Rimbach, Kenny Hsu, Kenny Nguyen, Keren Gu-Lemberg, Kevin Button, Kevin Liu, Kiel Howe, Krithika Muthukumar, Kyle Luther, Lama Ahmad, Larry Kai, Lauren Itow, Lauren Workman, Leher Pathak, Leo Chen, Li Jing, Lia Guy, Liam Fedus, Liang Zhou, Lien Mamitsuka, Lilian Weng, Lindsay McCallum, Lindsey Held, Long Ouyang, Louis Feuvrier, Lu Zhang, Lukas Kondraciuk, Lukasz Kaiser, Luke Hewitt, Luke Metz, Lyric Doshi, Mada Aflak, Maddie Simens, Madelaine Boyd, Madeleine Thompson, Marat Dukhan, Mark Chen, Mark Gray, Mark Hudnall, Marvin Zhang, Marwan Aljubeh, Mateusz Litwin, Matthew Zeng, Max Johnson, Maya Shetty, Mayank Gupta, Meghan Shah, Mehmet Yatbaz, Meng Jia Yang, Mengchao Zhong, Mia Glaese, Mianna Chen, Michael Janner, Michael Lampe, Michael Petrov, Michael Wu, Michele Wang, Michelle Fradin, Michelle Pokrass, Miguel Castro, Miguel Oom Temudo de Castro, Mikhail 
Pavlov, Miles Brundage, Miles Wang, Minal Khan, Mira Murati, Mo Bavarian, Molly Lin, Murat Yesildal, Nacho Soto, Natalia Gimelshein, Natalie Cone, Natalie Staudacher, Natalie Summers, Natan LaFontaine, Neil Chowdhury, Nick Ryder, Nick Stathas, Nick Turley, Nik Tezak, Niko Felix, Nithanth Kudige, Nitish Keskar, Noah Deutsch, Noel Bundick, Nora Puckett, Ofir Nachum, Ola Okelola, Oleg Boiko, Oleg Murk, Oliver Jaffe, Olivia Watkins, Olivier Godement, Owen Campbell-Moore, Patrick Chao, Paul McMillan, Pavel Belov, Peng Su, Peter Bak, Peter Bakkum, Peter Deng, Peter Dolan, Peter Hoeschele, Peter Welinder, Phil Tillet, Philip Pronin, Philippe Tillet, Prafulla Dhariwal, Qiming Yuan, Rachel Dias, Rachel Lim, Rahul Arora, Rajan Troll, Randall Lin, Rapha Gontijo Lopes, Raul Puri, Reah Miyara, Reimar Leike, Renaud Gaubert, Reza Zamani, Ricky Wang, Rob Donnelly, Rob Honsby, Rocky Smith, Rohan Sahai, Rohit Ramchandani, Romain Huet, Rory Carmichael, Rowan Zellers, Roy Chen, Ruby Chen, Ruslan Nigmatullin, Ryan Cheu, Saachi Jain, Sam Altman, Sam Schoenholz, Sam Toizer, Samuel Miserendino, Sandhini Agarwal, Sara Culver, Scott Ethersmith, Scott Gray, Sean Grove, Sean Metzger, Shamez Hermani, Shantanu Jain, Shengjia Zhao, Sherwin Wu, Shino Jomoto, Shirong Wu, Shuaiqi, Xia, Sonia Phene, Spencer Papay, Srinivas Narayanan, Steve Coffey, Steve Lee, Stewart Hall, Suchir Balaji, Tal Broda, Tal Stramer, Tao Xu, Tarun Gogineni, Taya Christianson, Ted Sanders, Tejal Patwardhan, Thomas Cunninghman, Thomas Degry, Thomas Dimson, Thomas Raoux, Thomas Shadwell, Tianhao Zheng, Todd Underwood, Todor Markov, Toki Sherbakov, Tom Rubin, Tom Stasi, Tomer Kaftan, Tristan Heywood, Troy Peterson, Tyce Walters, Tyna Eloundou, Valerie Qi, Veit Moeller, Vinnie Monaco, Vishal Kuo, Vlad Fomenko, Wayne Chang, Weiyi Zheng, Wenda Zhou, Wesam Manassra, Will Sheu, Wojciech Zaremba, Yash Patil, Yilei Qian, Yongjik Kim, Youlong Cheng, Yu Zhang, Yuchen He, Yuchen Zhang, Yujia Jin, Yunxing Dai, and Yury Malkov. 2024. 
GPT-4o system card. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Ricardo Rei, Nuno M. Guerreiro, José Pombal, Daan van Stigt, Marcos Treviso, Luisa Coheur, José G. C. de Souza, and André F. T. Martins. 2023. Scaling up CometKiwi: Unbabel-IST 2023 submission for the quality estimation shared task. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Ehud Reiter. 2018. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393–401. NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation. Katherine Thai, Marzena Karpinska, Kalpesh Krishna, Bill Ray, Moira Inghilleri, John Wieting, and Mohit Iyyer. 2022. Exploring document-level literary machine translation with parallel paragraphs from world literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9882–9902, Abu Dhabi, United Arab Emirates. 
Association for Computational Linguistics. Antonio Toral and Andy Way. 2018. What level of quality can neural machine translation attain on literary text? Minghao Wu, Jiahao Xu, and Longyue Wang. 2024. TransAgents: Build your translation company with language agents. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 131–141, Miami, Florida, USA. Association for Computational Linguistics. Rongjie Yan, Chih-Hong Cheng, and Yesheng Chai. 2015. Formal consistency checking over specifications in natural languages. In 2015 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 1677–1682. IEEE. Ran Zhang, Wei Zhao, and Steffen Eger. 2025. How good are LLMs for literary translation, really? Literary translation evaluation with humans and LLMs. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT.

# A Prompts Used in MAS-LitEval

# A.1 Translation Prompt

Translate the following literary text from [source language] to [target language]. Ensure that the translation preserves the original's style, tone, and cultural nuances. Pay special attention to maintaining the narrative voice and literary devices used in the source text.

# A.2 Terminology Consistency Agent Prompt

You are an expert in literary translation evaluation. Given a source text in [source language] and its translation in [target language], your task is to ensure that key terms, such as character names, place names, and recurring motifs, are translated consistently throughout the text. Follow these steps:

1. Identify key terms in the source text that appear multiple times.
2. For each key term, check how it is translated in the target text across all occurrences.
3. Calculate a consistency score (0 to 1), where 1 indicates that all occurrences of a term are translated identically, and 0 indicates no consistency.
4. Provide feedback highlighting any inconsistencies, specifying the terms and their varying translations.

Your output should include the consistency score and the detailed feedback.

# A.3 Narrative Perspective Consistency Agent Prompt

You are an expert in literary analysis. Given a source text in [source language] and its translation in [target language], your task is to verify that the narrative perspective (e.g., first-person, third-person limited, omniscient) is consistently maintained in the translation. Follow these steps:

1. Determine the narrative perspective of the source text.
2. Analyze the translation to identify its narrative perspective.
3. Compare the two and assess whether the translation accurately reflects the source's perspective.
4. Assign a score (0 to 1) indicating the degree of consistency, where 1 means perfect alignment, and 0 means complete mismatch.
5. Provide feedback on any deviations, citing specific examples from the text.

Your output should include the consistency score and the detailed feedback.

# A.4 Stylistic Consistency Agent Prompt

You are an expert in literary style and translation. Given a source text in [source language] and its translation in [target language], your task is to evaluate how well the translation preserves the stylistic elements of the original, such as tone, rhythm, imagery, and literary devices. Follow these steps:

1. Identify the key stylistic features of the source text.
2. Analyze the translation to see if these features are adequately captured.
3. Assign a score (0 to 1) indicating the level of stylistic fidelity, where 1 means the translation perfectly preserves the style, and 0 means it completely fails to do so.
4. Provide feedback with specific examples where the translation succeeds or falls short in maintaining the style.

Your output should include the fidelity score and the detailed feedback.
Literary translation requires preserving cultural nuances and stylistic elements, which traditional metrics like BLEU and METEOR fail to assess due to their focus on lexical overlap. This oversight neglects the narrative consistency and stylistic fidelity that are crucial for literary works. To address this, we propose MAS-LitEval, a multi-agent system using Large Language Models (LLMs) to evaluate translations based on terminology, narrative, and style. We tested MAS-LitEval on translations of The Little Prince and A Connecticut Yankee in King Arthur's Court, generated by various LLMs, and compared it to traditional metrics. MAS-LitEval outperformed these metrics, with top models scoring up to 0.890 in capturing literary nuances. This work introduces a scalable, nuanced framework for Translation Quality Assessment (TQA), offering a practical tool for translators and researchers.
# 7.2. Introduction

As software engineering data are symbolic by nature, in this chapter, we present fault localization using symbolic methods. Symbolic methods naturally lend themselves to giving explanations, and this is exactly what we are looking for in fault localization. Indeed, we prefer a system capable of saying "the failure has to do with the initialization of variable $x$" to a system limited to saying "the fault is in these million lines with probability 0.527". Therefore, we will illustrate how to use two data mining techniques, association rules and formal concept analysis, in fault localization.

Formal concept analysis and association rules deal with collections of objects and their features. The former extracts contextual truth, such as "in this assembly, all white-haired females wear glasses", while the latter extracts relativized truth, such as "in this assembly, carrying a briefcase increases the chance of wearing a tie". In a fault localization context, the former could say that "all failed tests call method $m$", and the latter could discover that "most failed tests call method $m$, which is very seldom called in passed tests".

Throughout this chapter, we use the Trityp program (partly given in Table 7.1) to illustrate the general method. It is a classical benchmark for test generation methods. Its specification is to classify sets of three segment lengths into four categories: scalene, isosceles, equilateral, and not a triangle, according to whether a given kind of triangle can be formed with these dimensions, or no triangle at all. We use this benchmark to explain the ability of the data mining process to localize faults. We do so by introducing faults into the program to form slight variants, called mutants, and by testing them through a test suite [9]. The data mining process starts with the output of the tests, i.e., execution traces and pass/fail verdicts.
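To make the mutation process concrete, here is a minimal Python reconstruction of the Trityp classifier with a `mutant1` flag that replaces one branch guard, `(trityp == 2)`, by `(trityp == 3)`. The chapter's actual source code (Table 7.1) is not reproduced in this chunk, so the exact structure below is an assumption; only the mutated condition and the return codes come from the text.

```python
# Hypothetical reconstruction of the Trityp classifier (an assumption; the
# real source is in Table 7.1). Return codes: 1 = scalene, 2 = isosceles,
# 3 = equilateral, 4 = not a triangle.
def classify(i, j, k, mutant1=False):
    if i <= 0 or j <= 0 or k <= 0:
        return 4
    trityp = 0                       # encodes which pairs of sides are equal
    if i == j:
        trityp += 1
    if i == k:
        trityp += 2
    if j == k:
        trityp += 3
    if trityp == 0:                  # no equal sides: scalene or no triangle
        if i + j <= k or j + k <= i or i + k <= j:
            return 4
        return 1
    if trityp > 3:                   # all sides equal
        return 3
    if trityp == 1 and i + j > k:    # i == j
        return 2
    # Mutant 1 replaces the guard (trityp == 2) by (trityp == 3):
    if trityp == (3 if mutant1 else 2) and i + k > j:   # i == k
        return 2
    if trityp == 3 and j + k > i:    # j == k
        return 2
    return 4                         # default case

# The mutant fails in exactly the two ways described for mutant 1:
print(classify(2, 3, 2), classify(2, 3, 2, mutant1=True))  # 2 4
print(classify(5, 2, 2), classify(5, 2, 2, mutant1=True))  # 4 2
```

With `(2, 3, 2)` the correct program detects an isosceles triangle but the mutant falls through to the default case, and with `(5, 2, 2)` the mutant wrongly reports an isosceles triangle where none exists.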
The mutants can be found on the web, and we use them to illustrate the general localization method. Table 7.2 presents the eight mutants of the Trityp program. The first mutant is used to explain the method in detail. For mutant 1, one fault has been introduced in Line 84: the condition (trityp == 2) is replaced by (trityp == 3). That fault causes a failure in two cases:

Table 7.1. Source code of the Trityp program [6]

Table 7.2. Mutants of the Trityp program

(1) The first case is when trityp is equal to 2; the execution does not enter this branch and goes to the default case in Lines 89 and 90.

(2) The second case is when trityp is equal to 3; the execution should go to Line 87, but due to the fault, it goes to Line 84. Indeed, if the condition (i + k > j) holds, trityp is assigned 2. However, (i + k > j) does not always imply (j + k > i), which is the real condition to test when trityp is equal to 3. Therefore, trityp is assigned 2, whereas 4 is expected.

The faults of mutants 2, 3, 6, and 8 are on assignments. The faults of mutants 4, 5, and 7 are on conditions. We will present more details about multiple fault situations in Chapter 8.5. In this case, we simply combine several mutations to form new mutants.

# 7.3. Formal Concept Analysis and Association Rules

Formal concept analysis (FCA, [12]) and association rules (AR, [3]) are two well-known methods for symbolic data mining. In their original inception, they both consider data in the form of an object-attribute table. In the FCA world, the table is called a formal context. In the AR world, objects are called transactions and attributes are called items, so that a line represents the items present in a given transaction.
This comes from one of the first applications of AR, namely the basket analysis of retail sales. We will use both vocabularies interchangeably according to context.

Definition 1 (Formal context and transactions). A formal context, $K$, is a triple $(O, A, d)$, where $O$ is a set of objects, $A$ is a set of attributes, and $d$ is a relation in $O \times A$. We write $(o, a) \in d$ or $o\,d\,a$ equivalently. In the AR world, $A$ is called a set of items, or itemset, and each $\{i \in A \mid o\,d\,i\}$ is the $o$th transaction.

For visualization, we consider objects as labeling lines and attributes as labeling columns of a table. A cross at the intersection of line $o$ and column $a$ indicates that object $o$ has attribute $a$. Table 7.3 is an example of a context. The objects are the planets of the solar system, and the attributes are discretized properties of these planets: size, distance to the sun, and presence of moons. One can observe that all planets without moons are small, but that all planets with moons except two are far from the sun. The difficulty is making similar observations in large data sets.

Both methods try to answer questions such as "which attributes entail these attributes?" or "which attributes are entailed by these attributes?". The main difference between FCA and AR is that FCA answers these questions to the letter, i.e., the mere exception to a candidate rule kills the rule, whereas association rules are accompanied by statistical indicators. In short, association rules can be almost true. As a consequence, rare events as well as frequent events are represented in FCA, whereas in AR, frequent events are distinguished.

Table 7.3. The Solar system context [6].

# 7.3.1. Formal Concept Analysis

FCA searches for sets of objects and sets of attributes with equal significance, like {Mercury, Venus} and {without moons}, and then orders the significances by their specificity.
Definition 2 (Extent/intent/formal concept). Let $K = (O, A, d)$ be a formal context. The extent of a set of attributes $A' \subseteq A$ is $\{o \in O \mid \forall a \in A'.\ o\,d\,a\}$; it is written $extent(A')$. The intent of a set of objects $O' \subseteq O$ is $\{a \in A \mid \forall o \in O'.\ o\,d\,a\}$; it is written $intent(O')$. A formal concept is a pair $(O', A')$ such that $A' \subseteq A$, $O' \subseteq O$, $intent(O') = A'$ and $extent(A') = O'$. $A'$ is called the intent of the formal concept, and $O'$ is called its extent.

Formal concepts are partially ordered by set inclusion of their intents or extents:

$$(O_1, A_1) < (O_2, A_2) \iff O_1 \subset O_2 \iff A_1 \supset A_2.$$

We say that $(O_2, A_2)$ contains $(O_1, A_1)$. In other words, $(O', A')$ forms a formal concept iff $O'$ and $A'$ are mutually optimal for describing each other, i.e., they have the same significance.

Lemma 1 (Basic FCA results). It is worth remembering the following results:

$$extent(\emptyset) = O \quad \text{and} \quad intent(\emptyset) = A,$$

$$extent(intent(extent(A'))) = extent(A') \quad \text{and} \quad intent(extent(intent(O'))) = intent(O').$$

Hence, $extent \circ intent$ and $intent \circ extent$ are closure operators. $(extent(intent(O')), intent(O'))$ is always a formal concept, written $concept(O')$. In the same way, $(extent(A'), intent(extent(A')))$ is always a formal concept as well, written $concept(A')$. All formal concepts can be constructed this way.

Figure 7.1. Concept lattice of the solar system context (see Table 7.3) [6].

Theorem 1 (Fundamental theorem of FCA, [16]).
Given a formal context, the set of all its partially ordered formal concepts forms a lattice called a concept lattice. Given a concept lattice, the original formal context can be reconstructed.

Figure 7.1 shows the concept lattice deduced from the solar system context. It is an example of the standard representation of a concept lattice. In this representation, concepts are drawn as colored circles with an optional inner label that serves as a concept identifier, and 0, 1, or 2 outer labels in square boxes. Lines represent non-transitive containment; therefore, the standard representation displays a Hasse diagram of the lattice [26]. The figure is oriented such that higher concepts (higher in the diagram) contain lower concepts. The upper outer label of a concept (such as large for concept $G$), when present, represents the attributes that are new to this concept's intent compared with higher concepts; we call it an attribute label. It can be proven that if $A$ is the attribute label of concept $c$, then $A$ is the smallest set of attributes such that $c = concept(A)$. Symmetrically, the lower outer label of a concept (such as Jupiter, Saturn for concept $G$), when present, represents the objects that are new to this concept's extent compared with lower concepts; we call it an object label. It can be proven that if $O$ is the object label of concept $c$, then $O$ is the smallest set of objects such that $c = concept(O)$. As a consequence, the intent of a concept is the set of all attribute labels of this concept and higher concepts, and the extent of a concept is the set of all object labels of this concept and lower concepts. For example, the extent of concept A is {Jupiter, Saturn, Uranus, Neptune}, and its intent is {far from sun, with moons}. In other words, an attribute labels the highest concept to whose intent it belongs, and an object labels the lowest concept to whose extent it belongs.
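The extent/intent machinery above is easy to check mechanically. Below is a minimal Python sketch of $extent$, $intent$, and $concept$ from Definition 2 and Lemma 1; the encoding of the solar-system context is reconstructed from the text's description (Table 7.3 itself is not reproduced here), so the exact attribute assignment is an assumption.

```python
# Reconstructed solar-system context (an assumption; Table 7.3 is not
# reproduced here): each planet maps to its set of attributes.
CONTEXT = {
    "Mercury": {"small", "near sun", "without moons"},
    "Venus":   {"small", "near sun", "without moons"},
    "Earth":   {"small", "near sun", "with moons"},
    "Mars":    {"small", "near sun", "with moons"},
    "Jupiter": {"large", "far from sun", "with moons"},
    "Saturn":  {"large", "far from sun", "with moons"},
    "Uranus":  {"medium", "far from sun", "with moons"},
    "Neptune": {"medium", "far from sun", "with moons"},
}
ALL_ATTRS = set().union(*CONTEXT.values())

def extent(attrs):
    """Objects that have every attribute in attrs (extent(empty) is all objects)."""
    return {o for o, a in CONTEXT.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in objs (intent(empty) is all attributes)."""
    return ALL_ATTRS.intersection(*(CONTEXT[o] for o in objs)) if objs else set(ALL_ATTRS)

def concept(objs):
    """(extent(intent(O)), intent(O)) is always a formal concept (Lemma 1)."""
    return extent(intent(objs)), intent(objs)

# Concept A of Figure 7.1: extent {Jupiter, Saturn, Uranus, Neptune},
# intent {far from sun, with moons}.
ext, itt = concept({"Jupiter", "Saturn", "Uranus", "Neptune"})
print(sorted(ext))   # ['Jupiter', 'Neptune', 'Saturn', 'Uranus']
print(sorted(itt))   # ['far from sun', 'with moons']
```

The closure property of Lemma 1 shows up directly: applying `extent(intent(...))` to an extent leaves it unchanged, which is how the sketch recovers concept A from its object set.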
It has been proven [12] that such a labeling, where all attributes and objects are used exactly once, is always possible. As a consequence, some formal concepts can be named by an attribute and/or an object; for example, concept G can be called concept(large), concept(Jupiter), or concept(Saturn), but others, like concept D, have no such names. They are merely unions or intersections of other concepts. In the standard representation of a concept lattice, "$a_1$ entails $a_2$" reads as an upward path from $concept(a_1)$ to $concept(a_2)$. Attributes that do not entail each other label incomparable concepts, such as attributes small and with moons. Note that there is no purely graphical way to detect that "$a_1$ nearly entails $a_2$". The bottom concept, $\bot$, has all attributes and usually zero objects, unless some objects have all attributes. The top concept, $\top$, has all objects and usually zero attributes, unless some attributes are shared by all objects.

The worst-case time complexity of the construction of a concept lattice is exponential, but we have shown that if the size of the problem can only grow with the number of objects, i.e., if the number of attributes per object is bounded, then the complexity is linear [11]. Moreover, though the mainstream interpretation of FCA is to compute the concept lattice at once and use it as a means of presenting graphically the structure of a data set, we have shown [11][21] that the concept lattice can be built and explored gradually and efficiently.

# 7.3.2. Association Rules

FCA is a crisp methodology that is sensitive to every detail of the data set. Sometimes, one may wish for a method that is more tolerant to exceptions.

Definition 3 (Association rules). Let $K$ be a set of transactions, i.e., a formal context seen as a set of lines, each seen as an itemset. An association rule is a pair $(P, C)$ of itemsets. It is usually written as $P \to C$.
The $P$ part is called the premise, and the $C$ part is the conclusion. Note that any $P \to C$ forms an association rule; it does not mean it is a relevant one. Statistical indicators give hints at the relevance of a rule.

Definition 4 (Support/confidence/lift). The support of a rule $P \to C$, written $sup(P \to C)$, is defined as

$$sup(P \to C) = \|extent(P \cup C)\|.$$

The normalized support of a rule $P \to C$ is defined as

$$\frac{\|extent(P \cup C)\|}{\|extent(\emptyset)\|}.$$

The confidence of a rule $P \to C$, written $conf(P \to C)$, is defined as

$$conf(P \to C) = \frac{sup(P \to C)}{sup(P \to \emptyset)} = \frac{\|extent(P \cup C)\|}{\|extent(P)\|}.$$

The lift of a rule $P \to C$, written $lift(P \to C)$, is defined as

$$lift(P \to C) = \frac{conf(P \to C)}{conf(\emptyset \to C)} = \frac{\|extent(P \cup C)\| \times \|extent(\emptyset)\|}{\|extent(P)\| \times \|extent(C)\|}.$$

Support measures the prevalence of an association rule in a data set. For example, the support of near sun → with moons is 2. Normalized support measures its prevalence as a value in [0, 1], i.e., as a probability of occurrence. For example, the normalized support of near sun → with moons is $2/8 = 0.25$. It can be read as the probability of observing the rule in a random transaction of the context. It would seem that the greater the support, the better, but very often one must be satisfied with a very small support.
This is because in large contexts, with many transactions and items, any given co-occurrence of several items is a rare event. Efficient algorithms exist for calculating all ARs with minimal support [2][4][24][28].

Confidence measures the "truthness" of an association rule as the ratio of the prevalence of its premise and conclusion together to the prevalence of its premise alone. Its value is in [0, 1], and for a given premise, bigger is better; in other words, it is better to have fewer exceptions to the rule considered as a logical implication. For example, the confidence of near sun → with moons is $2/4 = 0.5$. This can be read as the conditional probability of observing the conclusion knowing that the premise holds. However, there is no way to tell whether a confidence value is good in itself; in other words, there is no absolute threshold above which a confidence value is good.

Lift also measures the "truthness" of an association rule, but instead as the increase in the probability of observing the conclusion when the premise holds with respect to when it does not. In other words, it measures how much the premise of a rule increases the chance of observing the conclusion. A lift value of 1 indicates that the premise and the conclusion are independent. A lower value indicates that the premise repels the conclusion, and a higher value indicates that the premise attracts the conclusion. For example, the lift of near sun → with moons is $0.5/0.75 \approx 0.67$, which shows that the attribute near sun repels the attribute with moons; being near the sun diminishes the probability of having a moon. The rule near sun → without moons has a normalized support of 0.25, a confidence of 0.5, and a lift of $0.5/0.25 = 2$, which indicates an attraction; being near the sun augments the probability of not having a moon. The two rules have identical supports and confidences but opposite lifts.
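These indicators are straightforward to compute. The sketch below reproduces the figures quoted above for the two example rules; as before, the encoding of the solar-system context is a reconstruction and therefore an assumption.

```python
# Support, confidence, and lift of Definition 4, checked against the
# solar-system figures quoted in the text. The context encoding is a
# reconstruction (an assumption); Table 7.3 is not reproduced here.
CONTEXT = {
    "Mercury": {"small", "near sun", "without moons"},
    "Venus":   {"small", "near sun", "without moons"},
    "Earth":   {"small", "near sun", "with moons"},
    "Mars":    {"small", "near sun", "with moons"},
    "Jupiter": {"large", "far from sun", "with moons"},
    "Saturn":  {"large", "far from sun", "with moons"},
    "Uranus":  {"medium", "far from sun", "with moons"},
    "Neptune": {"medium", "far from sun", "with moons"},
}

def extent(attrs):
    return {o for o, a in CONTEXT.items() if attrs <= a}

def support(P, C):          # ||extent(P ∪ C)||
    return len(extent(P | C))

def confidence(P, C):       # sup(P -> C) / ||extent(P)||
    return support(P, C) / len(extent(P))

def lift(P, C):             # conf(P -> C) / conf(empty -> C)
    return confidence(P, C) / (len(extent(C)) / len(CONTEXT))

P = {"near sun"}
print(support(P, {"with moons"}))      # 2   (normalized: 2/8 = 0.25)
print(confidence(P, {"with moons"}))   # 0.5
print(lift(P, {"with moons"}))         # 0.5 / 0.75, i.e. about 0.67: repulsion
print(lift(P, {"without moons"}))      # 0.5 / 0.25 = 2.0: attraction
```

As in the text, the two rules share support and confidence but sit on opposite sides of the lift threshold of 1.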
In the sequel, we will use support as an indicator of the prevalence of a rule and lift as an indicator of its “truthness”. # 7.4. Data Mining for Fault Localization We consider a debugging process in which a program is tested against different test cases. Each test case yields a transaction in the AR sense, in which attributes correspond to properties observed during the execution of the test case. Two attributes, $PASS$ and $FAIL$, represent the outcome of the test case (again, see future works for variants on this). Thus, the set of all test cases yields a set of transactions that form a formal context, which we call a trace context. The main idea of the data mining approach is to look for a formal explanation of the failures. # 7.4.1. Failure Rules Formally, we are looking for association rules following the pattern $P \to FAIL$. We call these rules failure rules. A failure rule proposes an explanation for a failure, and this explanation can be evaluated according to its support and lift. Note that failure rules have a variable premise $P$ and a constant conclusion $FAIL$. This slightly simplifies the management of rules. For instance, relevance indicators can be specialized as follows: Definition 5 (Relevance indicators for failure rules) $$ \begin{array}{rl} & sup(P \to FAIL) = \left\| extent\big(P \cup \{FAIL\}\big) \right\|, \\ & conf(P \to FAIL) = \frac{\left\| extent\big(P \cup \{FAIL\}\big) \right\|}{\left\| extent(P) \right\|}, \end{array} $$ $$ lift(P \to FAIL) = \frac{\left\| extent\big(P \cup \{FAIL\}\big) \right\| \times \left\| extent(\emptyset) \right\|}{\left\| extent(P) \right\| \times \left\| extent\big(\{FAIL\}\big) \right\|}. $$ Observe that $extent(\emptyset)$ and $extent(\{FAIL\})$ are constant for a given test suite. Only $extent(P)$ and $extent(P \cup \{FAIL\})$ depend on the failure rule.
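The specialized indicators can be read off a trace context directly. The sketch below uses a tiny, made-up trace context; the line numbers and verdicts are invented, not taken from the chapter's programs.

```python
# Hedged sketch of Definition 5: each test case is a transaction whose
# attributes are the executed line numbers plus its verdict (all invented).
FAIL, PASS = "FAIL", "PASS"
trace_context = [
    ({57, 60, 84}, FAIL),
    ({57, 60, 84}, FAIL),
    ({57, 60, 66}, PASS),
    ({57, 66},     PASS),
    ({57, 60, 84}, PASS),   # executing line 84 does not always fail
]

def extent(attrs):
    """Indices of the test cases whose attributes include attrs."""
    return [i for i, (lines, verdict) in enumerate(trace_context)
            if attrs <= (lines | {verdict})]

def failure_rule_indicators(P):
    """sup, conf, and lift of the failure rule P -> FAIL."""
    n = len(trace_context)                 # = ||extent({})||
    s = len(extent(P | {FAIL}))
    c = s / len(extent(P))
    l = (s * n) / (len(extent(P)) * len(extent({FAIL})))
    return s, c, l

print(failure_rule_indicators({84}))       # lift > 1: line 84 attracts FAIL
```

Here the rule $\{84\} \to FAIL$ has a lift above 1 even though executing line 84 does not always cause a failure, which is exactly the situation failure rules are meant to capture.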
It is interesting to understand the dynamics of these indicators when new test cases are added to the trace context. Lemma 2 (Dynamics of relevance indicators with respect to the test suite). Consider a failure rule $P \to FAIL$: A new passed test case that executes $P$ will leave its support unchanged (normalized support will decrease slightly), will decrease its confidence, and will decrease its lift slightly if $P$ is not executed by all test cases. A new passed test case that does not execute $P$ will leave its support and confidence unchanged (normalized support will decrease slightly) and will increase its lift. A new failed test case that executes $P$ will increase its support and confidence (normalized support will increase slightly) and will increase its lift slightly if $P$ is not executed by all test cases. A new failed test case that does not execute $P$ will leave its support and confidence unchanged (normalized support will decrease slightly) and will decrease its lift. In summary, support and confidence grow with new failed test cases that execute $P$, and lift grows with failed test cases that execute $P$ or passed test cases that do not execute $P$. Failed test cases that execute $P$ increase all the indicators, but passed test cases that do not execute $P$ only increase lift. Another interesting dynamic is what happens when $P$ increases. Lemma 3 (Dynamics of relevance indicators with respect to the premise). Consider a failure rule $P \to FAIL$ and replace $P$ with $P'$ such that $P' \supset P$: Support will decrease (except if all test cases fail, which should not persist). One says $P' \to FAIL$ is more specific than $P \to FAIL$. Confidence and lift can go either way, but both in the same direction, because $\frac{\| extent(\emptyset) \|}{\| extent(\{FAIL\}) \|}$ is a constant. For the sequel of the description, we assume that the attributes recorded in the trace context are the line numbers of executed statements.
Since the order of the attributes in a formal context does not matter, this forms an abstraction of a standard trace (see a fragment of such a trace context in Table 7.4). Thus, explanations for failures will consist of line numbers: lines that increase the risk of failure when executed. Had other trace observations been used, the explanations would have been different. Faults that materialize in faulty instructions are expected to show up as explanations of failed test cases. Other faults that materialize in missing instructions will still be visible in actual lines that would have been correct if the missing lines were present. For instance, a missing initialization will be seen as the faulty consultation of a non-initialized variable. It is up to a competent debugger to conclude from faulty consultations that an initialization is missing. Note finally that the relationships between faults and failures are complex: Table 7.4. A trace context [6]. • Executing a faulty line does not necessarily cause a failure. For example, a fault in a line may not be stressed by a test case (e.g., the faulty condition $i > 1$ instead of the expected $i > 0$, tested with $i$ equal to 10), or a faulty line may be “corrected” by another one. • Absolutely correct lines can apparently cause failure, such as lines of the same basic block [29] as a faulty line (they will have exactly the same distribution as the faulty line) or lines whose preconditions cannot be established by a distant faulty part. Failure rules are selected according to a minimal support criterion. However, there are too many such rules, and it would be inconvenient to list them all. We have observed in Lemma 3 that more specific rules have less support. However, this does not mean that less specific rules must be preferred.
For instance, if the program has a mandatory initialization part that always executes a set of lines $I$, rule $I \to FAIL$ is a failure rule with maximal support, but it is also less informative. On the contrary, if all failures are caused by executing a set of lines $F \supset I$, rule $F \setminus I \to FAIL$ will have the same support as $F \to FAIL$, but will be the most informative. In summary, maximizing support is good, but it is not the definitive criterion for selecting informative rules. Another idea is to use the lift indicator instead of support. However, lift does not grow monotonically with premise inclusion. Therefore, finding rules with a minimal lift cannot be done more efficiently than by enumerating all rules and then filtering them. Table 7.5. Failure context for mutant 1 of the Trityp program with $\min lift = 1.25$ and $\min sup = 1$ (for mutant 1 the fault is at line 84, see Table 7.1) [6]. # 7.4.2. Failure Lattice Here, we describe how to use FCA to help navigate the set of explanations. Definition 6 (The failure lattice). Form a formal context with the premises of failure rules. The rule identifiers are the objects, and their premises are the attributes (in our example, line numbers) (see an example in Table 7.5). It is called the failure context. Observe that the failure context is special in that all premises of failure rules are different from each other. Thus, the rules are uniquely determined by their premises (or itemsets), and it is not necessary to identify them by object identifiers. Apply FCA on this formal context to form the corresponding concept lattice. It is called the failure lattice. Its concepts and labeling display the most specific explanations of groups of failed tests. Since object identifiers are useless, replace object labels by the support and lift of the unique rule that labels each concept.
This forms the failure lattice (see Figure 7.2). The overall trace mining process is summarized in Figure 7.3. Lemma 4 (Properties of the failure lattice). The most specific explanations (i.e., the largest premises) are at the bottom of the lattice. On the contrary, the least specific failure rules are near the top. For instance, the line numbers of a prelude sequence executed by every test case will label the topmost concepts. The explanations with the smallest support are at the bottom of the lattice. For example, line numbers executed only by specific failed test cases will label concepts near the bottom. Figure 7.2. Failure lattice associated to the failure context of Table 7.5 (for mutant 1, the fault is at line 84) [6] Figure 7.3. The trace mining process [6] Support increases when going upstream, from bottom to top. We call this the global monotony of support ordering; it is a theorem [5]. Lift does not follow any global monotony behavior. Concepts form clusters of comparable concepts with the same support. For example, concepts 2, 4, and 7 in Figure 7.2 form a cluster of rules with support 60. We call them support clusters. This means that explanations of increasing size represent the same group of failures. In a support cluster, a unique concept has the largest extent. We call it the head concept of the support cluster. It corresponds to the explanation with the highest lift value in the support cluster. More generally, lift decreases when going bottom-up in a support cluster. We call this behavior the local monotony of lift ordering, and it is also a theorem [5]. It is useless to investigate explanations other than the head concepts. This can be done by a bottom-up exploration of the failure lattice. In the lattice of Figure 7.2, only concepts 2 (head of the support cluster with value 60), 3 (head of the support cluster with value 52), and 5 (head of the support cluster with value 112) need be presented to the debugging oracle.
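The concept-forming step behind the failure lattice can be sketched by brute force on a tiny context. The failure context below is invented for illustration; it is not the context of Table 7.5.

```python
# Brute-force FCA sketch on a tiny, made-up failure context
# (rule identifiers mapped to their premises, i.e., sets of line numbers).
from itertools import combinations

failure_context = {
    "r1": {57, 60},
    "r2": {57, 60, 66},
    "r3": {57, 60, 84},
    "r4": {57, 60, 84, 87},
}
all_attrs = set().union(*failure_context.values())

def intent(objs):
    """Attributes shared by every object in objs (all attributes if empty)."""
    return set.intersection(*(failure_context[o] for o in objs)) if objs else all_attrs

def extent(attrs):
    """Objects owning every attribute in attrs."""
    return {o for o, a in failure_context.items() if attrs <= a}

# A concept is a fixpoint pair: extent(intent(X)) == X.
concepts = set()
objs = list(failure_context)
for r in range(len(objs) + 1):
    for combo in combinations(objs, r):
        e = extent(intent(set(combo)))
        concepts.add((frozenset(e), frozenset(intent(e))))

for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(e), "->", sorted(i))
```

On this context the enumeration yields five concepts; the concept with intent {57, 60, 84} groups the two rules whose premises share those lines, which is the kind of grouping the failure lattice exposes. This exhaustive enumeration is only viable for tiny contexts; real tools use dedicated algorithms.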
Concept 5 has line 84 in its attribute label, which is the location of the fault in this mutant. The local monotony of lift ordering shows that the lift indicator can be used as a metric, but only inside support clusters. The process that we have presented is dominated by the choice of a minimal value for the support indicator. Recall that the support of an explanation is simply the number of simultaneous realizations of its items in the failure context, and the normalized support is the ratio of this number to the total number of realizations. In this application of ARs, it is more meaningful to use the non-normalized variant because it directly represents the number of failed test cases covered by an explanation. What is a good value for the minimal support? First, it cannot be larger than the number of failed test cases ($= \| extent(\{FAIL\}) \|$); otherwise, no $P \to FAIL$ rule will show up. Second, it cannot be less than 1. The choice between 1 and $\| extent(\{FAIL\}) \|$ depends on the nature of the fault, but in any case, experiments show that an acceptable minimal support is quite low (a few percent of the total number of test cases). A high minimal support will filter out all faults that cause fewer failures than this threshold. Very singular faults will require a very small support, possibly 1, to be visible in the failure lattice. This suggests starting with a high support to localize the most visible faults, and then decreasing the support to localize less frequently executed faults. The minimal support acts as a resolution cursor. A coarse resolution will show the largest features at a low cost, and a finer resolution will be required to zoom in on smaller features at a higher cost. We have insisted on using lift instead of confidence as a “truthness” indicator because it lends itself more easily to interpretation (recall Definition 4 and the subsequent comments).
However, in the case of failure rules, the conclusion is fixed ($= FAIL$), and both indicators increase and decrease in the same way when the premise changes (recall Lemma 3). The only difference is that the lift indicator yields a normalized value (1 is independence, below 1 is repulsion, and over 1 is attraction). What is the effect of a minimal lift value? Firstly, if it is chosen to be larger than or equal to 1, it will eliminate all failure rules that show a repulsion between the premise and the conclusion. Secondly, if it is chosen to be strictly greater than 1, it will eliminate failure rules that have a lower lift, thus compressing the representation of support clusters and possibly eliminating some support clusters. Thus, the minimal lift also acts as a zoom. Figure 7.4. The global debugging process Figure 7.5. The four Venn diagrams of two-fault dependency [6]. This suggests a global debugging process in which the results of an increasingly large test suite are examined with increasing acuity (see Figure 7.4). Given a test suite, an inner loop computes failure rules, i.e., explanations with decreasing support, from a fraction of $\| extent(\{FAIL\}) \|$ down to 1, and builds the corresponding failure lattice. In the outer loop, test cases are added progressively to cope with added functionality (as in test-driven development) or new failure reports. Thus, the global debugging process zooms in on the failed test cases to find explanations for more and more specific failures. # 7.5. The Failure Lattice for Multiple Faults This section extends the analysis of data mining for fault localization to the multiple-fault situation. From the debugging process point of view, there is nothing special about multiple faults. Some software engineering life cycles, like test-driven development, tend to limit the number of faults observed simultaneously, but one can never assume a priori that there is a single fault. Thus, we assume there are one or several faults. # 7.5.1.
Dependencies between Faults In the multiple-fault case, each failure trace accounts for one or several faults. Conversely, faulty lines are suspected in one or several failure traces. Thus, the inner loop of the global debugging process cannot stop because a fault is found. The process must go on until all failures are explained. How can this be done without exploring the entire failure lattice? Consider any pair of faults $F_1$ and $F_2$, and let $Fail_{F_1}$ and $Fail_{F_2}$ be the sets of failed test cases that detect $F_1$ and $F_2$, respectively. We identify four types of possible dependencies between the two faults. Definition 7 (Dependencies between faults). If $Fail_{F_1} = Fail_{F_2}$, we say that they are mutually strongly dependent (MSD). If $Fail_{F_1} \subset Fail_{F_2}$, we say that $F_1$ is strongly dependent (SD) on $F_2$ (and vice versa). If $Fail_{F_1} \cap Fail_{F_2} \neq \emptyset$, we say that they are loosely dependent (LD). Otherwise, $Fail_{F_1} \cap Fail_{F_2} = \emptyset$, and we say that they are independent (ID). Note that this classification is not intrinsic to a pair of faults; it depends on the test suite. However, it does not depend arbitrarily on the test suite. Lemma 5 (How failure dependencies depend on growing test suites). Assume that the test suite can only grow; then an $ID$ or $SD$ pair can only become $LD$, and an $MSD$ pair can only become $SD$ or $LD$. This can be summarized as follows: $$ ID \to LD \leftarrow SD \leftarrow MSD $$ Note also that this knowledge, with several faults and the dependencies between them, is what the debugging person is looking for, whereas the trace context only gives hints at this knowledge. The question is: how does it give hints at this knowledge?
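Definition 7 is a straightforward case analysis on the two sets of failed test cases. A minimal sketch, assuming each fault is represented by the set of failed test cases that detect it (the sets below are made up):

```python
# Direct transcription of Definition 7's four dependency cases.
def dependency(fail1, fail2):
    if fail1 == fail2:
        return "MSD"                  # mutually strongly dependent
    if fail1 < fail2 or fail2 < fail1:
        return "SD"                   # one strongly depends on the other
    if fail1 & fail2:
        return "LD"                   # loosely dependent
    return "ID"                       # independent

print(dependency({1, 2}, {1, 2}),     # MSD
      dependency({1}, {1, 2}),        # SD
      dependency({1, 2}, {2, 3}),     # LD
      dependency({1}, {3}))           # ID
```

Note that the cases must be tested in this order: equality before strict inclusion, and inclusion before mere overlap, since each earlier case implies the later conditions.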
The main idea is to distinguish special concepts in the failure lattice that we call failure concepts. Definition 8 (Failure concept). A failure concept is a maximally specific concept of the failure lattice whose intent (a set of lines) is contained in a failed execution. Recall that the failure rules are an abstraction of the failed executions. For instance, choosing minimal support and lift values eliminates lines that are seldom executed or that do not attract failure. Thus, the failure lattice describes exactly the selected failure rules but only approximately the failed executions. That is why it is interesting; it compresses information, though with loss. The failure concepts in the failure lattice are the concepts that best approximate failed executions. All other concepts contain less precise information. For the same reasons, there are far fewer failure concepts than failed executions; each failure concept accounts for a group of failures that detects some fault. The main use of failure concepts is to give a criterion for stopping the exploration of the failure lattice. In a few words: • The bottom-up exploration of the failure lattice goes from support cluster to support cluster as above; • The line labels of the traversed concepts are accumulated in a fault context sent to the competent debugger; • Any time the competent debugger finds a hint of an actual fault, all the failure concepts under the concept that gave the hint are deemed explained; • The process continues until all failure concepts are explained. The fault context is the part of the program that the debugging person is supposed to check. We consider its size as a measure of the effort imposed on the debugging person (see Section 7.6 for comparative experiments). Dependencies between faults have an impact on the way failure concepts are presented in the failure lattice. Lemma 6 (ID faults with respect to failure concepts).
If two faults are ID, their lines can never occur in the same failed trace; then no rule contains the two faults, and no concept in the failure lattice contains the two faults. Thus, the two faults will label failure concepts in two different support clusters that have no subconcepts in common (for an example, see Figure 7.6). Concretely, when exploring the failure lattice bottom-up, finding a fault in the label of a concept explains both the concept and the concepts underneath, but the faults in the other upper branches remain to be explained. Moreover, the order in which the different branches are explored does not matter. Lemma 7 (LD faults with respect to failure concepts). If two faults are LD, some failed traces contain both faults, while other failed traces contain either fault. They may label concepts in two different support clusters that share common subconcepts. Figure 7.6. Failure lattice associated to program Trityp with ID faults of mutants 1, 2, and 6 [6]. Concretely, when exploring the failure lattice bottom-up, finding a fault for a failure concept does not explain the other LD failure concept. Once a fault is found, shared concepts must be re-explored in the direction of the other superconcepts. Lemma 8 (SD faults with respect to failure concepts). If two faults are SD, say $F_1$ depends on $F_2$, a failure concept whose intent contains $Line_{F_1}$ will appear as a subconcept of a failure concept whose intent contains $Line_{F_2}$ in a different support cluster (for an example, see Figure 7.7). Therefore, fault $F_1$ will be found before $F_2$, but the debugging process must continue because there is a failure concept above. Lemma 9 (MSD faults with respect to failure concepts). Finally, if two faults are MSD, they cannot be distinguished by failed executions, and their failure concepts belong to the same support cluster.
However, they can sometimes be distinguished by passed executions (such as one having more passed executions than the other), and this can be seen in the failure lattice through the lift value. All this can be formalized in an algorithm that searches for multiple faults in an efficient traversal of the failure lattice (see Algorithm 7.1). The failure lattice is traversed bottom-up, starting with the failure concepts (step 1). At the end of the failure lattice traversal, $C_{failureToExplain}$, the set of failure concepts not explained by a fault (step 2), must be empty, or all concepts must have been explored (step 3). When a concept $c$ (step 4) is chosen among the concepts to explore, $C_{toExplore}$, the events that label the concept are explored. Note that the selection of that concept is not deterministic. If no fault is located, then the upper neighbours of $c$ are added to the set of concepts to explore (step 7).
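Algorithm 7.1's traversal can be sketched as runnable code. The toy lattice, labels, and oracle below are assumptions made for illustration, not the Trityp data; in particular, the oracle is stood in for by a fixed set of faulty lines, and a deterministic pick replaces the algorithm's free choice of concept.

```python
# Runnable sketch of the failure-lattice traversal on a hand-built toy lattice.
upper = {"c1": {"c2"}, "c2": {"c3"}, "c3": set(), "c4": {"c3"}}
cluster = {c: {c} for c in upper}            # trivial support clusters
label = {"c1": {87}, "c2": {84}, "c3": {57}, "c4": {66}}
failure_concepts = {"c1", "c4"}
faulty_lines = {84, 66}                      # stands in for the oracle

def subconcepts(c):
    """All concepts at or below c, following upper-neighbour links backwards."""
    below, frontier = {c}, [c]
    while frontier:
        x = frontier.pop()
        for y, ups in upper.items():
            if x in ups and y not in below:
                below.add(y)
                frontier.append(y)
    return below

def explore():
    to_explore = set(failure_concepts)       # step 1
    to_explain = set(failure_concepts)       # step 2
    inspected = set()                        # lines shown to the oracle
    while to_explain and to_explore:         # step 3
        c = min(to_explore)                  # deterministic pick for the sketch
        to_explore.discard(c)
        inspected |= label[c]
        if label[c] & faulty_lines:          # competent-oracle stand-in
            explained = subconcepts(c) | cluster[c]
            to_explore -= explained          # step 10
            to_explain -= explained          # step 11
        else:
            to_explore |= upper[c]           # step 7
    return to_explain, inspected

print(explore())                             # all failure concepts explained
```

On this toy lattice the fault at line 84 explains concept c1 without ever inspecting it fully, while concept c4 requires its own fault to be located, mirroring the ID case of Lemma 6.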
If, thanks to the new clues, the debugging oracle understands the mistakes and locates one or several faults, then all subconcepts of $c$ and all concepts that are in the same support cluster are “explained”. Those concepts do not have to be explored again (step 10). In particular, the failure concepts that are subconcepts of $c$ are explained (step 11). The exploration goes on until all failed executions in the failure lattice are explained by at least one fault, or all concepts have been explored. Figure 7.7. Failure lattice associated to program Trityp with SD faults 1 and 7 [6]. Algorithm 7.1. Exploration of the failure lattice:
1: $C_{toExplore} := FAILURE\_CONCEPTS$
2: $C_{failureToExplain} := FAILURE\_CONCEPTS$
3: while $C_{failureToExplain} \neq \emptyset \land C_{toExplore} \neq \emptyset$ do
4:   let $c \in C_{toExplore}$ in
5:     $C_{toExplore} := C_{toExplore} \setminus \{c\}$
6:     if the debugging oracle(label($c$), fault context($c$)) locates no fault then
7:       $C_{toExplore} := C_{toExplore} \cup \{$upper neighbours of $c\}$
8:     else
9:       let $Explained = subconcepts(c) \cup cluster(c)$ in
10:        $C_{toExplore} := C_{toExplore} \setminus Explained$
11:        $C_{failureToExplain} := C_{failureToExplain} \setminus Explained$
12:   end
13: end
Table 7.6. Exploration of the failure lattice of Figure 7.6 [6]. Note that at each iteration, $C_{failureToExplain}$ can only decrease or remain untouched. The competent debugger hypothesis ensures that $C_{failureToExplain}$ ends empty when min sup is equal to 1. In the case of an incompetent debugging oracle or a too high min sup, the process ends when $C_{toExplore}$ becomes empty, namely when all concepts have been explored. # 7.5.2.
Example For the example of Figure 7.6, the min sup value is equal to four failed executions (out of 400 executions, of which 168 failed), and the min lift value is equal to one. There are four failure concepts: 5, 13, 12, and 9. Table 7.6 presents the values of $C_{toExplore}$ and $C_{failureToExplain}$ at each iteration of the exploration. We choose to explore the lattice with a queue strategy: first in $C_{toExplore}$, first out of $C_{toExplore}$. However, the algorithm does not impose one strategy. At the beginning, $C_{toExplore}$ and $C_{failureToExplain}$ are initialized as the set of all failure concepts (iteration 0 in Table 7.6). At the first iteration of the while loop, concept 5 is selected ($c = c_5$). That concept is labeled by line 74. Line 74 actually corresponds to fault 6. Thanks to the competent debugger hypothesis, fault 6 is located. Concepts 5, 4, and 14 are thus tagged as explained. The new values of $C_{toExplore}$ and $C_{failureToExplain}$ are presented at iteration 1 in Table 7.6. At the second iteration, concept 13 is selected ($c = c_{13}$). That concept is labeled by lines 64 and 79. Line 79 actually corresponds to fault 2; the competent debugging oracle locates fault 2. Concept 13 is tagged as explained. At the third iteration, concept 12 is selected. That concept is labeled by lines 87 and 90. No fault is found. The upper neighbours, concepts 7 and 11, are added to $C_{toExplore}$, and $C_{failureToExplain}$ is unchanged. At the next iteration, concept 9 is selected. As in the previous iteration, no fault is found. The upper neighbour, concept 8, is added to $C_{toExplore}$. Finally, concept 7 is selected. That concept is labeled by lines 81 and 84. By exploring those lines (new clues) in addition to the fault context, i.e., lines that have already been explored (87, 90, 101, and 85), the competent debugging oracle locates fault 1 at line 84.
The fault is the substitution of the test trityp $= 2$ by trityp $= 3$. Concepts 12 and 9 exhibit two concrete realizations (failures) of the fault at line 84 (concept 7). Concepts 7, 12, and 9 are tagged as explained. The set of failure concepts to explain is empty; thus, the exploration stops. All four faults (for failures above the support and lift thresholds) are found after the debugging oracle has inspected nine lines. # 7.6. Discussion The contexts and lattices introduced in the previous sections allow programmers to see all the differences between execution traces as well as all the differences between association rules. There exist other methods that compute differences between execution traces. We first show that the information about trace differences provided by the failure context (and the corresponding lattice) is already more relevant than the information provided by four other methods proposed by Renieris and Reiss [25] and Cleve and Zeller [7]. Then, we show that explicitly using association rules with several lines in the premise alleviates some limitations of Jones et al.’s method [15]. Finally, we show that reasoning on the partial ordering given by the proposed failure lattice is more relevant than reasoning on total-order rankings [8][18][20][24][32]. # 7.6.1. The Structure of the Execution Traces The trace context contains the whole information about execution traces. In particular, the associated lattice, the trace lattice, allows programmers to see all differences between traces in one pass. There exist several fault localization methods based on the differences between execution traces. They all assume a single failed execution and several passed executions. We rephrase them in terms of searches in a lattice to highlight their advantages, their hidden hypotheses, and their limitations. # 7.6.2. Union Model The union model, proposed by Renieris and Reiss [25], aims at finding features that are specific to the failed execution.
The method is based on trace differences between the failed execution $f$ and a set of passed executions $S$: $f - \bigcup_{s \in S} s$. The underlying intuition is that the failure is caused by lines that are executed only in the failed execution. Formalized in FCA terms, the concepts of interest are the subconcepts whose label contains $FAIL$, and the computed information is the lines contained in the labels of the subconcepts. The trace lattice presented in the figure is slightly different from the lattice that would be computed for the union model, because it represents more than one failed execution. Nevertheless, the union model often computes empty information, namely each time the faulty line belongs to both failed and passed execution traces. For example, a fault in a condition has very little chance of being localized. The approach we presented is based on the same intuition. However, the lattices that we propose do not lose information and help navigate in order to localize the faults, even when the faulty line belongs to both failed and passed execution traces. The union model helps localize a bug when executing the faulty statement always implies an error, such as the bad assignment of a variable that is the result of the program. In that case, the lattice also helps, and the faulty statement labels the same concept as $FAIL$. # 7.6.3. Intersection Model The intersection model [25] is the complement of the previous model. It computes the features whose absence is discriminant of the failed execution: $\bigcap_{s \in S} s - f$. Replacing $FAIL$ with $PASS$ in the above discussion is relevant to discussing the intersection model and leads to the same conclusions. # 7.6.4. Nearest Neighbor The nearest neighbor approach [25] computes a distance metric between the failed execution trace and a set of passed execution traces.
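The trace differences of Sections 7.6.2 to 7.6.4 reduce to a few set operations. A minimal sketch, with invented line numbers (the similarity measure used for the nearest neighbor is an assumption; Renieris and Reiss define their own distance):

```python
# Union, intersection, and nearest-neighbor trace differences as set operations.
failed = {10, 20, 30, 40}
passed = [{10, 20, 30, 70}, {10, 20, 50, 70}, {10, 60, 70}]

# Union model: lines executed only by the failed execution.
union_model = failed - set.union(*passed)

# Intersection model: lines whose absence is specific to the failed execution.
intersection_model = set.intersection(*passed) - failed

# Nearest neighbor: difference with the most similar passed execution
# (similarity approximated here by the size of the overlap).
nearest = max(passed, key=lambda p: len(p & failed))
nearest_diff = failed - nearest

print(union_model, intersection_model, nearest_diff)
```

As the text observes, both difference sets shrink to nothing as soon as the faulty line occurs in passed executions too, which is what motivates keeping the full lattice instead.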
The computed trace difference involves the failed execution trace, $f$, and only one passed execution trace, the nearest one, $p$: $f - p$. The difference is meant to be the part of the code to explore. The approach can be formalized in FCA. Given a concept $C_f$ whose intent contains $FAIL$, the nearest neighbor method searches for a concept $C_p$ whose intent contains $PASS$, such that the intent of $C_p$ shares as many lines as possible with the intent of $C_f$. The rightmost concept fails, whereas the leftmost one passes. As for the previous methods, it is a good approach when the execution of the faulty statement always involves an error. However, when the faulty statement can lead to both a passed and a failed execution, the nearest neighbor method is not sufficient. In addition, we remark that there are possibly many concepts of interest, namely all the nearest neighbors of the concept that is labeled by $FAIL$. With a lattice, that kind of behavior can be observed directly. Note that in the trace lattice, the executions that execute the same lines are clustered in the label of a single concept. Executions that are nearby share a large part of their executed lines and label concepts that are neighbors in the lattice. There is therefore no reason to restrict the comparison to a single passed execution. Furthermore, all the nearest neighbors are naturally in the lattice. # 7.6.5. Delta Debugging Delta debugging, proposed by Zeller et al. [8], reasons on the values of variables during executions rather than on executed lines. The trace spectrum, and therefore the trace context, contains different types of attributes. Note that the presented approach does not depend on the type of attributes and would apply equally to spectra containing attributes other than executed lines. Delta debugging computes the differences between the failed execution trace and a single passed execution trace in a memory graph.
By injecting the values of variables of the failed execution into variables of the passed execution, the method tries to determine a small set of suspicious variables. One of the purposes of the method is to find a passed execution relatively similar to the failed execution. It has the same drawbacks as the nearest neighbor method. # 7.6.6. From the Trace Context to the Failure Context Tarantula, the method of Jones et al. [15], computes association rules with only one line in the premise. Denmat et al. [10] showed the limitations of this method through three implicit hypotheses. The first hypothesis is that a failure has a single faulty-statement origin. The second hypothesis is that lines are independent. The third hypothesis is that executing the faulty statement often causes a failure. That last hypothesis is a common assumption of fault localization methods, including the presented method. Indeed, when the fault is executed in both passed and failed executions (such as in a prelude), it cannot be found so easily using these hypotheses. In addition, Denmat et al. demonstrated that the ad hoc indicator used by Jones et al. is equivalent to the lift indicator. By using association rules with more expressive premises than in Jones et al.’s method (namely with several lines), the limitations mentioned above are alleviated. Firstly, the fault need not be a single line but can consist of several lines together. Secondly, the dependency between lines is taken into account. Indeed, dependent lines are clustered or ordered together. The part of the trace context that is important to search in order to localize a fault is the set of concepts that are related to the concept labeled by $FAIL$; i.e., those that have a non-empty intersection with the concept labeled by $FAIL$. Computing association rules with $FAIL$ as a conclusion computes exactly those concepts, modulo the min sup and min lift filtering.
In other words, the focus is on the part of the lattice related to the concept labeled by $FAIL$.

# 7.6.7. The Structure of Association Rules

Jones et al.'s method presents the result of the analysis to the user as a coloring of the source code. A red-green gradient indicates the correlation with failure. Lines that are highly correlated with failure are colored in red, whereas lines that are not highly correlated are colored in green. Red lines typically represent more than $10\%$ of the lines of the program, with no identified links between them. Other statistical methods [8][18][19][32] also try to rank lines in a total ordering. This can be seen as ordering the concepts of the failure lattice by the lift value of the rule in their label. However, we have shown in Section 1.3 that the monotonicity of lift is only relevant locally to a support cluster. For example, on the failure lattice of Figure 7.2, the obtained ranking would be: line 85, line 66, line 68, line 84. No link would be established between the execution of line 85 and line 68, for example. The user who must localize a fault in a program has background knowledge about the program and can use it to explore the failure lattice. Reading the lattice gives context about the fault, rather than just a sequence of independent lines to be examined, and it reduces the number of lines to be examined at each step (concept) by structuring them.

# 7.6.8. Multiple Faults

We have compared the failure lattice with existing single fault localization methods. In this section, we compare the presented navigation in the failure lattice with the strategies of the other methods to detect several faults. The presented approach is related to algorithmic debugging [27]. The difference lies in the traversed data structure. While Shapiro's algorithm helps traverse a proof tree, the presented algorithm helps traverse the failure lattice, starting from the most suspicious places. For multiple faults, Jiang et al.
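The red-green coloring is driven by a per-line score; a minimal sketch of the standard Tarantula suspiciousness formula, with hypothetical coverage data (the total ordering it induces is exactly what the lattice-based presentation avoids):

```python
def tarantula(failed_cov, passed_cov, line):
    # failed_cov / passed_cov: lists of sets of lines covered by each run.
    f = sum(line in t for t in failed_cov) / len(failed_cov)
    p = sum(line in t for t in passed_cov) / len(passed_cov)
    return f / (f + p) if f + p else 0.0

failed_cov = [{1, 2, 4}]          # illustrative coverage, one failing run
passed_cov = [{1, 2, 3}, {1, 3}]  # two passing runs
for line in (1, 2, 3, 4):
    print(line, round(tarantula(failed_cov, passed_cov, line), 2))
# Line 4 scores highest: it is executed only by the failing run.
```

The score ranks lines totally, with no structural link between equally ranked lines, which is the limitation discussed above.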
[14] criticized the ranking of statistical methods. They proposed a method based on traces whose events are predicates. The predicates are clustered, and the path in the control flow graph associated with each cluster is computed. In the failure lattice, events are also clustered in concepts. The relations between concepts give information about the path in the control flow graph and highlight some parts of that path as relevant to debug, without computing the control flow graph. Zheng et al. [32] proposed a method based on bi-clustering in order to group failed executions and identify one feature (bug predictor) that characterizes each cluster. They proposed to look at one bug predictor at a time. Several bug predictors can be related to the same fault, but no link is drawn between them. The presented approach gives context to the fault, in order to help understand the mistakes of the programmer that produced the fault. Jones et al. [16] proposed a method that first clusters executions and then finds a fault in each cluster in parallel. The method has the same aim as the presented method: in both cases, we want to separate the effects of the different faults in order to treat as many faults as possible in one execution of the test suite. In the presented approach, however, the clusters are partially ordered to take into account dependencies between faults. Finally, SBI [18] introduces a stop criterion, as the presented algorithm does. SBI tries to take advantage of one execution of the test suite. The events are predicates, and SBI ranks them. When a fault is found thanks to the ranking, all execution traces that contain the predicates used to find the fault are deleted, and a new ranking of predicates is computed on the reduced set of execution traces. Deleting execution traces can be seen as equivalent to tagging concepts, as well as the events of their labels, as explained for DeLLIS.
The difference between SBI and DeLLIS is that DeLLIS does not need to compute the failure lattice several times.

# 7.7. Fault Localization using N-gram Analysis

In the previous sections, we described the background of data mining and how it can be applied to fault localization in general. In this section, we describe how to use data mining along with N-gram analysis for software fault localization. In software fault localization, test cases are usually utilized as sets of inputs with known expected outputs. If the actual output does not match the expected output, the test case has failed. Various information can be collected during the execution of the test cases for later analysis. This information may include statement coverage (the set of statements that were executed at least once during the execution) and the exact execution sequence (the actual order in which the statements were executed). Since we work only with the exact execution sequence here, we refer to it as the trace. Usually, the usefulness of trace data is limited by its sheer volume. Data mining traditionally deals with large volumes of data, and in this research, we apply data mining techniques to process this trace data for fault localization. From trace data, we generate N-grams, i.e., subsequences of length N. From these, we choose the N-grams that appear more than a certain number of times in the failing traces. For these N-grams, we calculate the confidence: the conditional probability that a test case fails given that the N-gram appears in that test case's trace. We sort the N-grams in descending order of confidence and report the statements in the program in the order of their first occurrence in the sorted list.

# 7.7.1. Background

# Execution Sequence

Let $P$ be a program with $n$ lines of source code, labeled $L = \{ l_1, l_2, \cdots, l_n \}$. For example, in the sample program mid from [15] in Figure 7.8, $L = \{ 4, 5, 6, 10, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 24 \}$ after excluding comments, blank lines and structural constructs. A test case is an input with a known expected output. Let $T = \{ t_1, t_2, \cdots, t_n \}$ be the $n$ test cases for program $P$. Each test case $t_i = \langle I_i, X_i \rangle$ has input $I_i$ and expected output $X_i$. When program $P$ is executed with input $I_i$, it produces actual output $A_i$. If $A_i = X_i$, we say $t_i$ is a passing test case; if $A_i \neq X_i$, we say $t_i$ is a failing test case. For example, the 6 test cases for the program mid in [15], $T = \{ t_1, t_2, \cdots, t_6 \}$, are shown in Table 7.7. Let $Y_i = \langle y_1, y_2, \cdots, y_k \rangle$ be the trace of program $P$ when running test case $t_i$. For mid, the trace for test case $t_1$ is $Y_1 = \langle 4, 4, 5, 10, 11, 12, 14, 15, 24, 6 \rangle$. We define two sets based on the outcomes of the test cases: the passing traces $Y_P = \{ Y_i \mid t_i \text{ is a passing test case} \}$ and the failing traces $Y_F = \{ Y_i \mid t_i \text{ is a failing test case} \}$. We define the problem as follows: given program $P$ with executable statements $L$, test cases $T$ and actual outputs $A$, rank the statements in $L$ according to their probability of containing the fault. To compare this method with other methods such as [15], we report the results in terms of statements, but the method can also work at function level. Given an ordered list, an N-gram is any sub-list of N consecutive elements of the list.
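N-gram extraction from a trace is straightforward; a small Python sketch, using the trace of test case $t_1$ given above:

```python
def ngrams(trace, n):
    # All contiguous sub-lists of length n, preserving order.
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

# Trace of test case t1 for mid, as given above.
Y1 = [4, 4, 5, 10, 11, 12, 14, 15, 24, 6]
print(ngrams(Y1, 2)[:3])   # [(4, 4), (4, 5), (5, 10)]
print(len(ngrams(Y1, 2)))  # 9, i.e. K - N + 1 for K = 10
```

A trace of length $K$ yields exactly $K - N + 1$ N-grams, matching the definition below.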
The elements of the N-gram must be in the same order as in the original list, and they must be consecutive. Given an execution trace $Y$ of length $K$, an N-gram $G_{Y,N,\alpha}$ is a contiguous subsequence $\langle y_\alpha, y_{\alpha+1}, \cdots, y_{\alpha+N-1} \rangle$ of length $N$ starting at position $\alpha$. For a trace $Y$, the set of all line N-grams is $G_{Y,N} = \{ G_{Y,N,1}, G_{Y,N,2}, \cdots, G_{Y,N,K-N+1} \}$.

Table 7.7. Test cases for program mid [22]

# 7.7.2. Linear Execution Blocks

From the set of all traces, we identify the execution blocks, i.e., the code segments with a single point of entry and a single point of exit. For this, we construct the Execution Sequence Graph $\mathrm{XSG}(P) = (V, E)$, where the set of vertices is $V \subseteq L$ such that each $v_i \in V$ occurs in some trace $Y_k$, and there is an edge between $v_i$ and $v_j$ if they are consecutive in some trace $Y_k$. This is similar to a Control Flow Graph, but the vertices in an XSG represent statements rather than blocks. In this graph, there is an edge between two vertices only if they were executed in succession in at least one of the execution traces. The XSG for mid is given in Figure 7.9, where we can see that the blocks of mid include $\{ b_1, b_2, \cdots, b_{10} \} = \{ \langle 4 \rangle, \langle 5, 10, 11 \rangle, \langle 12 \rangle, \langle 18 \rangle, \langle 20 \rangle, \langle 24, 6 \rangle, \langle 14 \rangle, \langle 13 \rangle, \langle 15 \rangle \}$. Thus, the trace of test case $t_1$ can be converted to the block-level trace $\langle b_1, b_2, b_3, b_8, b_{10}, b_7 \rangle$.
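The XSG construction and a block decomposition can be sketched as follows. This is a simplified, illustrative implementation (it greedily chains vertices with a unique predecessor and successor), not the chapter's exact algorithm, and it is run on a single trace rather than the full mid test suite:

```python
from collections import defaultdict

def build_xsg(traces):
    # Execution Sequence Graph: an edge (u, v) exists iff u and v were
    # executed in succession in at least one trace.
    edges = set()
    for y in traces:
        edges.update(zip(y, y[1:]))
    return edges

def linear_blocks(edges):
    # Greedy decomposition into maximal chains whose interior vertices
    # have exactly one predecessor and one successor (a simplification
    # of the chapter's indegree-based definition).
    succs, preds = defaultdict(set), defaultdict(set)
    for u, v in edges:
        succs[u].add(v)
        preds[v].add(u)
    vertices = set(succs) | set(preds)
    starts = [v for v in vertices
              if len(preds[v]) != 1 or len(succs[next(iter(preds[v]))]) != 1]
    blocks = []
    for v in sorted(starts):
        block = [v]
        while len(succs[v]) == 1:
            (nxt,) = succs[v]
            if len(preds[nxt]) != 1:
                break
            block.append(nxt)
            v = nxt
        blocks.append(block)
    return blocks

# Illustrative input: only t1's trace, so blocks differ from the full suite's.
traces = [[4, 4, 5, 10, 11, 12, 14, 15, 24, 6]]
print(linear_blocks(build_xsg(traces)))  # [[4], [5, 10, 11, 12, 14, 15, 24, 6]]
```

With more traces, additional branch points split the long chain into the finer blocks shown above.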
It should be noted that the definition of blocks here differs from that of traditional basic blocks [1]. Since we identify blocks from traces, the blocks here may include function or procedure entry points. For example, $\langle 5, 10, 11 \rangle$ would not be a single block by the traditional definition, since a function starts at line 10. Due to this difference, we name the blocks Linear Execution Blocks, defined as follows: a Linear Execution Block $B = \langle v_i, v_{i+1}, \cdots, v_j \rangle$ is a directed path in the XSG such that the indegree of each vertex $v_k \in B$ is 0 or 1. The advantages of using block traces are: (a) they reduce the size of the traces, and (b) in a block trace, each sequence of two blocks indicates one possible branch. Therefore, in N-gram analysis on block traces, each block N-gram represents $N - 1$ branches. This helps with the choice of N for N-gram analysis.

Figure 7.8. The sample program mid [15].

# 7.7.3. Association Rule Mining

Association rule mining searches for interesting relationships among items in a given data set [13]. It has the following two parts. Frequent Itemset Generation: search for sets of items occurring together frequently, called frequent itemsets, whose frequency in the data set, called support, exceeds a predefined threshold, called minimum support. Association Rule Generation: look for association rules $A \Rightarrow B$ among the elements of the frequent itemsets, meaning that the appearance of $A$ in a set implies the appearance of $B$ in the same set. The conditional probability $P(B|A)$ is called confidence, which must be greater than a predefined minimum confidence for a rule to be considered. More details can be found in [13]. We model the blocks as items and the block traces as transactions.
For example, $Y_1 = \langle b_1, b_2, b_3, b_8, b_{10}, b_7 \rangle$ is a transaction for mid corresponding to the first test case, $t_1$. We generate frequent itemsets from the transactions with the additional constraint that the items in an itemset must be consecutive in the original transaction. To do this, we generate N-grams from the block traces, and from them, we choose the ones with at least the minimum support. For a block N-gram $G_{Y_i,N,p}$, the support is the number of failing traces containing $G_{Y_i,N,p}$:

$$ Support(G_{Y_i,N,p}) = \left| \left\{ Y_j \mid G_{Y_i,N,p} \in Y_j \text{ and } Y_j \in Y_F \right\} \right| $$

Figure 7.9. Execution sequence graph for program mid [22].

For example, for mid, the support of $\langle b_2, b_3, b_8 \rangle$ is 1, since it occurs in one failing trace. We add the test case type to the itemset. For example, after adding the test case type to the itemset $\langle b_2, b_3, b_8 \rangle$, the itemset becomes $\langle b_2, b_3, b_8, passing \rangle$. Then, we try to discover association rules of the form $A \Rightarrow failing$ from these itemsets, where the antecedent is a block N-gram and the consequent is failing. Therefore, the block N-grams that appear as antecedents in the association rules are the most likely to have caused the failure of the test case. We sort these block N-grams in descending order of confidence. For a block N-gram $G_{Y_i,N,p}$, the confidence is the conditional probability that the test case outcome is failure given that $G_{Y_i,N,p}$ appears in the trace of that test case.
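The support computation can be sketched in Python. Only $t_1$'s block trace is given in the text, so the failing trace below is illustrative:

```python
def contains(trace, gram):
    # True iff the N-gram occurs as a contiguous subsequence of the trace.
    n = len(gram)
    return any(tuple(trace[i:i + n]) == gram for i in range(len(trace) - n + 1))

def support(gram, failing_traces):
    # Support of a block N-gram: the number of failing traces containing it.
    return sum(contains(y, gram) for y in failing_traces)

# Illustrative data: one failing block trace (t1's trace from the text).
failing = [["b1", "b2", "b3", "b8", "b10", "b7"]]
print(support(("b2", "b3", "b8"), failing))  # 1
```

The contiguity check is what distinguishes this from ordinary itemset support.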
That is,

$$ Confidence \left( G_{Y_i,N,p} \right) = \frac{ \Pr \left( G_{Y_i,N,p} \in Y_j \text{ and } t_j \text{ is a failing test case} \right) }{ \Pr \left( G_{Y_i,N,p} \in Y_j \right) } $$

For example, the rule $\langle b_2, b_3, b_8 \rangle \Rightarrow failing$ has confidence 0.33. After sorting the block N-grams, we convert the blocks back to line numbers and report this sequence of lines to investigate in order to find the fault location.

# 7.7.4. Methodology

As input, we use the source code, the test case types, and the traces for all the test cases, and we produce as output an ordered list of statements, sorted by their probability of containing the fault. We first convert the traces to block traces and then apply N-gram analysis on these block traces to generate all possible unique N-grams for a given range of N. For each N-gram, we count its frequency in passing and failing traces. The execution of the faulty statement may not always cause failure of the test case. There may be quite a number of test cases in which the faulty statement was executed but did not cause a failure. In most cases, the failure depends on the sequence of execution. A specific sequence or path of execution will cause the program to fail, and this sequence will be very common in the failing traces but not so common in the passing traces. Therefore, we can find the subsequences that are most likely to contain the fault by analyzing the traces of passing and failing test cases. There are two major parameters in the algorithm. The first one is MinSup, the minimum support for selecting the N-grams, and the second is $N_{MAX}$, the maximum value of $N$ for generating the N-grams.
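Likewise, the confidence of a block N-gram can be computed directly from the traces. The block traces below are made up so that $\langle b_2, b_3, b_8 \rangle$ occurs in three traces, one of which fails, reproducing the 0.33 from the text:

```python
def contains(trace, gram):
    # True iff the N-gram occurs as a contiguous subsequence of the trace.
    n = len(gram)
    return any(tuple(trace[i:i + n]) == gram for i in range(len(trace) - n + 1))

def confidence(gram, failing, passing):
    # P(test case fails | gram appears in its trace).
    nf = sum(contains(y, gram) for y in failing)
    nt = nf + sum(contains(y, gram) for y in passing)
    return nf / nt if nt else 0.0

# Illustrative block traces: the N-gram appears in 1 failing and 2 passing traces.
failing = [["b1", "b2", "b3", "b8", "b10", "b7"]]
passing = [["b1", "b2", "b3", "b8", "b9", "b7"],
           ["b1", "b2", "b3", "b8", "b10", "b7"],
           ["b1", "b4", "b7"]]
print(round(confidence(("b2", "b3", "b8"), failing, passing), 2))  # 0.33
```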
Taking a low value of minimum support results in the inclusion of irrelevant N-grams. Therefore, we should set the minimum support to a high value; our experience suggests that $90\%$ is a good choice. However, the choice of an appropriate $N_{MAX}$ is more difficult. Two execution paths can differ because of conditional branches. Such differences can be detected by 2-grams. Likewise, the same function can be called from different functions, which can also be detected with 2-grams. Since we are using execution blocks, an N-gram can capture $N - 1$ branches, and a choice of 2 or 3 for $N_{MAX}$ should give good results in most cases. If we use longer N-grams, the algorithm will still be able to find the fault, but we will have to examine more lines to find it. The method proceeds in the following steps.

L2B: Convert exact execution sequences to block traces. From the line-level traces, we create the execution sequence graph (XSG). From the XSG, we find the linear execution blocks (LEB). Then, we convert the traces into block traces in lines 2 to 4 of Algorithm 7.2.

GNG: Generate N-grams. In this step, we first generate all possible N-grams of lengths 1 to $N_{MAX}$ from the block traces. The generation of all N-grams from a set of block traces for a given $N$ is done in Algorithm 7.3, and the generation and combination of all the N-grams are done in lines 5 to 8. Then, we find out how many passing and failing traces each N-gram occurs in.

FRB: Find relevant blocks. From the 1-grams, we construct a set of relevant blocks, $B_{rel}$, that contains only the blocks that appear in every failing trace, in lines 10 to 14.

EIN: Eliminate irrelevant N-grams. In lines 15 to 16, we discard the N-grams that do not contain any block from the relevant block set $B_{rel}$.

FFN: Find frequent N-grams.
In lines 17 to 21, we eliminate N-grams with support less than the minimum support.

RNC: Rank N-grams by confidence. For each surviving N-gram, we compute its confidence using Equation (2). This is done in lines 22 to 26. Then, we order the N-grams by confidence in line 27.

B2L: Convert blocks in N-grams to line numbers. We convert each block in the N-grams back to line numbers using the XSG in line 28.

RLS: Rank lines according to suspicion. We traverse the ordered list of N-grams and report the line numbers in the order of their first appearance in the list. This is done in line 29. If there are multiple N-grams with the same confidence as the N-gram containing the faulty statement, the best case is the ordering in which the faulty statement appears in the earliest possible position in the group, and the worst case is the ordering in which it appears in the latest possible position.

1  Function LocalizeFaults($Y$, $Y_F$, $N_{MAX}$, $MINSUP$):
2    foreach $Y_i \in Y$ do
3      Convert $Y_i$ to block trace
4    end
5    $NG \gets \emptyset$
6    for $N = 1$ to $N_{MAX}$ do
7      $NG \gets NG \cup$ GenerateNGrams($Y$, $N$)
8    end
9    $B_{rel} \gets \{ n \mid n \in NG$ and $|n| = 1 \}$
10   foreach $n \in B_{rel}$ do
11     if $Support(n) \neq |Y_F|$ then
12       Remove $n$ from $NG$ and $B_{rel}$
13     end
14   end
15   $NG_1 \gets \{ n \mid n \in NG$ and for all $s \in B_{rel}$, $s \notin n \}$
16   $NG \gets NG - NG_1$
17   foreach $n \in NG$ do
18     if $Support(n) < MINSUP$ then
19       Remove $n$ from $NG$
20     end
21   end
22   foreach $n \in NG$ do
23     $NF \gets |\{ Y_k \mid Y_k \in Y_F$ and $n \in Y_k \}|$
24     $NT \gets |\{ Y_k \mid Y_k \in Y$ and $n \in Y_k \}|$
25     $n.confidence \gets NF / NT$
26   end
27   Sort $NG$ in descending order of confidence
28   Convert the block numbers in the $N$-grams in $NG$ to line numbers
29   Report the line numbers in the order of their first appearance in $NG$

Algorithm 7.2. Fault localization using N-gram analysis [22].

1  Function GenerateNGrams($Y$, $N$):
2    $G \gets \emptyset$
3    foreach $Y_i \in Y$ do
4      $G \gets G \cup G_{Y_i,N}$
5    end
6    return $G$

Algorithm 7.3. N-gram generation [22].
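The overall pipeline can be condensed into a short Python sketch. This is an illustrative simplification of Algorithm 7.2: it assumes the traces are already block traces, reports ranked blocks rather than converting back to line numbers (the B2L step), and breaks confidence ties lexicographically for determinism:

```python
def localize_faults(traces, outcomes, n_max=2, minsup=1):
    # `traces` are block traces; `outcomes[i]` is True when test case i passes.
    failing = [y for y, ok in zip(traces, outcomes) if not ok]

    def grams(y, n):
        return [tuple(y[i:i + n]) for i in range(len(y) - n + 1)]

    def occurs(g, y):
        return g in grams(y, len(g))

    # GNG: generate all N-grams of length 1..n_max.
    ng = {g for y in traces for n in range(1, n_max + 1) for g in grams(y, n)}
    # FRB: relevant blocks are 1-grams occurring in every failing trace.
    b_rel = {g[0] for g in ng
             if len(g) == 1 and all(occurs(g, y) for y in failing)}
    # EIN: discard N-grams containing no relevant block.
    ng = {g for g in ng if any(b in b_rel for b in g)}
    # FFN: keep N-grams whose support (failing occurrences) meets minsup.
    ng = {g for g in ng if sum(occurs(g, y) for y in failing) >= minsup}

    # RNC: rank by confidence = failing occurrences / total occurrences.
    def conf(g):
        nf = sum(occurs(g, y) for y in failing)
        return nf / sum(occurs(g, y) for y in traces)

    ranked = sorted(ng, key=lambda g: (-conf(g), g))
    # RLS: report blocks in order of first appearance in the ranked list.
    seen, order = set(), []
    for g in ranked:
        for b in g:
            if b not in seen:
                seen.add(b)
                order.append(b)
    return order

# Toy block traces: the first test case fails.
traces = [["b1", "b2", "b4"], ["b1", "b3", "b4"], ["b1", "b2", "b4"]]
print(localize_faults(traces, outcomes=[False, True, True]))
```

Even on this tiny example, blocks absent from the failing trace (here `b3`) are filtered out before ranking.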
This chapter illustrates the basic concepts of fault localization using data mining techniques, utilizing the Trityp program to illustrate the general method. Formal concept analysis and association rules are two well-known methods for symbolic data mining. In their original inception, both consider data in the form of an object-attribute table. The chapter considers a debugging process in which a program is tested against different test cases. Two attributes, PASS and FAIL, represent the outcome of a test case. The chapter extends the data mining analysis of fault localization to situations with multiple faults. It also addresses how data mining can be applied to fault localization for GUI components. Unlike traditional software, GUI test cases are usually event sequences, and each individual event has a unique corresponding event handler.
# 1 Introduction

Sequential resource allocation (SRA) involves distributing limited resources across locations over time, where an agent allocates resources at a sequence of demand nodes while satisfying upper and lower bound constraints. The objective is to allocate resources efficiently while adhering to these constraints. SRA arises in critical domains such as healthcare, public safety, energy, and agriculture, where dynamic demands and societal priorities play a key role. For example, healthcare resource distribution during pandemics must balance immediate needs with future demand [Malenica et al., 2024], while pesticide distribution must adapt to regional crop health and sustainability requirements [Qin et al., 2021]. Beyond efficiency, resource allocation algorithms must consider societal constraints, such as equity [Pu, 2021], sustainability [Heffron and McCauley, 2014], and justice [Zhao et al., 2020]. Moreover, these constraints often depend on context, such as prioritizing equity when regions' demands conflict. For example, during the COVID-19 pandemic, New Zealand proposed a Traffic Light system [Taylor et al., 2023] to adjust policies according to the level of emergency; e.g., one-meter distancing measures were only enforced when public medical resources faced high pressure. Similarly, the U.S. clean-energy supply-chain strategy [Igogo, 2022] highlighted adaptive systems to address bottlenecks during disruptions. These examples underscore the need for context-aware allocation strategies.

Figure 1: Left: Simulated medical resource demand in Beijing, where darker colors represent higher demand levels. The experiment focuses on a district in the southwest. Right: Farmland in Saskatchewan, Canada, used for pesticide allocation. Numbers indicate regions with varying pesticide requirements.
Traditional solutions to SRA, such as dynamic programming [Lien et al., 2014] and multi-armed bandits [Kaufmann, 2018], are effective for small-scale problems with explicit models but struggle with scalability. Reinforcement learning (RL) offers a promising alternative by learning optimal policies through interaction with the environment, without requiring prior system knowledge [Bhatia et al., 2019]. RL has been successfully applied to diverse SRA tasks, such as pesticide spraying [Qin et al., 2021], healthcare resource allocation [Li et al., 2023], and dynamic electricity distribution [Bahrami et al., 2020]. Constrained reinforcement learning (CRL) extends standard RL by incorporating constraints into the learning objective, typically through Lagrangian methods or constrained policy updates. Recent advancements, such as density-constrained reinforcement learning (DCRL) [Qin et al., 2021], have extended RL by incorporating constraints on state distributions. However, existing algorithms rely on static constraints, limiting their ability to adapt to evolving demands and situational requirements. Addressing this gap calls for an advanced RL framework capable of incorporating conditional constraints to enable adaptive decision-making. This leads to the question: how can we design a density-constrained RL framework that ensures situational fairness and adapts to dynamic, context-dependent constraints in resource allocation? To address this question, one needs to (1) develop a formal framework for SRA under such constraints, and (2) propose a new density-constrained RL algorithm that handles "if-then" logic for such constraints. In this paper, we initiate the study of situational constraints for SRA tasks. We formulate the problem as a conditional DCRL problem, where the constraints are implications. To address this problem, we propose a new algorithm, Situational-Constrained Reinforcement Learning (SCRL).
The algorithm extends the conventional CRL framework by introducing a violation degree-based punitive term function that quantifies the extent of constraint violations and adjusts policy updates accordingly. Unlike previous approaches [Tessler et al., 2018; Ray et al., 2019; Qin et al., 2021], SCRL incorporates an adaptive aggregation mechanism that handles disjunctive constraints by selectively prioritizing one constraint within the disjunction. This design allows SCRL to dynamically balance reward optimization with constraint satisfaction in context-sensitive environments. To the best of our knowledge, this is the first work to address situational, disjunctive constraints within the CRL paradigm. We evaluate SCRL in two real-world-inspired scenarios: medical resource allocation during the COVID-19 pandemic in Beijing, China [Hao et al., 2021] and agricultural resource distribution in Saskatchewan, Canada [Qin et al., 2021]. In both cases, the constraints are designed to balance equity and adequacy, such as ensuring fairness when resources are insufficient and maintaining sufficient coverage when resources are ample. Experimental results demonstrate that SCRL significantly improves the satisfaction of situational constraints compared to baseline methods and effectively adapts resource distributions to meet context-specific requirements. Additionally, we present a case study to illustrate how SCRL adjusts resource allocation across regions in response to shifting situational demands, further highlighting the algorithm's ability to provide adaptive and equitable decision-making in complex, real-world environments. The following is a summary of key contributions:

• Formulation of sequential resource allocation with situational, disjunctive constraints.

• Development of the SCRL algorithm with a violation degree-based punitive term for dynamic policy updates.
• Introduction of an aggregation mechanism to handle disjunctive constraints in context-sensitive environments.

# 2 Related Work

Sequential Resource Allocation (SRA). SRA focuses on distributing resources in systems where demands arrive sequentially, making it distinct from traditional resource allocation due to its dynamic nature and uncertainty. Its relevance spans socially impactful applications, such as allocating medical testing resources during pandemics [Malenica et al., 2024] and optimizing industrial gas deliveries to minimize costs and prevent shortages [Berman and Larson, 2001]. Ethical considerations, such as equity in resource distribution, have also been explored in government and community planning [Johnson and Smilowitz, 2007]. Early approaches to SRA used dynamic programming to optimize costs under uncertainty, including supply chain management for sequential customers [Bassok and Ernst, 1995]. Bayesian methods were later introduced to handle stochastic dynamics, with Bayes-UCB demonstrating asymptotic optimality [Kaufmann, 2018]. To address fairness, heuristic algorithms were proposed for equitable and sustainable allocation [Lien et al., 2014]. Reinforcement learning (RL) has recently become a predominant approach for SRA, offering scalable solutions for complex environments. Deep RL has been applied to supply chain management [Peng et al., 2019] and network slicing [Liu et al., 2021], enabling efficient resource allocation under constraints. Resource-constrained RL frameworks have further improved performance over conventional policies [Bhatia et al., 2019]. Despite these advancements, existing methods often focus on fixed constraints or single-objective optimization, leaving a gap in addressing situational and context-sensitive constraints, which are critical for real-world applications. Constrained reinforcement learning (CRL).
CRL extends traditional RL by incorporating constraints to ensure policies satisfy predefined requirements while maximizing rewards [Gu et al., 2022; García and Fernández, 2015]. Rooted in Constrained Markov Decision Processes (CMDPs) [Altman, 1993], methods like Reward Constrained Policy Optimization (RCPO) [Tessler et al., 2018] and SAC-Lag [Ha et al., 2020] use Lagrange multipliers to balance rewards and constraints. Constrained Policy Optimization (CPO) [Achiam et al., 2017] introduced trust region methods for maintaining feasibility during updates, while Projection-based CPO (PCPO) [Yang et al., 2020] extended it to further avoid infeasible policies during optimization. Density-based CRL imposes constraints directly on state density functions, offering clear physical interpretations suitable for resource and safety-critical applications [Rantzer, 2001]. Qin et al. [Qin et al., 2021] applied this approach to a pesticide spraying scenario by constraining pesticide density, and Zhang et al. [Zhang et al., 2023] extended it to multi-agent settings with ethical constraints. These methods demonstrated the efficacy of density constraints for resource allocation but do not address situational constraints, which require dynamic adaptation across scenarios. Other RL paradigms are less applicable to SRA. Logic-based RL [Hasanbeig et al., 2018; Hasanbeig et al., 2020] relies on qualitative specifications, which are unsuitable for quantitative resource allocation and introduce significant computational complexity. Fuzzy-logic-based RL, e.g., FQL [Glorennec and Jouffe, 1997], applies fuzzy rules to represent value functions and actions, and has been used in tasks like robot navigation [Fathinezhad et al., 2016] and resource management [Prasath et al., 2024]. However, it typically yields soft rule satisfaction, which is unsuitable for strict constraints like fairness and safety.
Shielding [Waga et al., 2022; Alshiekh et al., 2017] focuses on safe exploration, which is unnecessary for SRA with reliable simulations. Our approach builds on density-based CRL, extending it to handle situational constraints that cannot be addressed by traditional Lagrangian methods. We propose a novel algorithm tailored to this problem, ensuring dynamic and context-sensitive resource allocation.

# 3 Problem Formulation

# 3.1 Situational Constraints

We study resource allocation to demand nodes represented as finite regions within a 2D spatial domain, reflecting applications where resources are distributed geographically. Formally, a supplier agent distributes resources over a bounded region $D \subseteq \mathbb{R}^2$ with $m \in \mathbb{N}$ demand nodes, indexed as $M = \{1, \dots, m\}$. Each demand node $i \in M$ covers a sub-region $S_i \subseteq D$, and an allocation function $f \colon M \to \mathbb{R}$ maps each node $i$ to a non-negative amount of resources $f(i) \geq 0$. These sub-regions may overlap. Applications include drones spraying pesticides [Qin et al., 2021], mobile immunization vehicles distributing vaccines, and policing resource allocation [Maslen and Paine, 2024], as illustrated in Figure 1. In many situations, resource allocation must satisfy interval and equity constraints. Interval constraints ensure that each demand node $i$ receives resources within a specified range $f(i) \in [a_i, b_i]$, where $0 \leq a_i < b_i \leq \infty$ [Qin et al., 2021]. Equity constraints ensure fairness, expressed as $|f(i) - f(j)| \leq b$, where $b \geq 0$ [Lien et al., 2014; Zhang et al., 2023]. These constraints address sufficiency and fairness but lack the flexibility needed for context-aware allocation. We therefore introduce situational constraints, which enable conditional relationships between constraints.
For example, a situational constraint may state: “If the resources allocated to region $A$ exceed a certain threshold, then region $B$ must receive a minimum amount.” Let $\vec{f} = [f(1), \dots, f(m)]$ represent the allocation vector.

Definition 1. An atomic constraint is of the form $\vec{a} \cdot \vec{f} \leq b$, where the vector $\vec{a} \in \mathbb{R}^m$ and $b \in \mathbb{R}$ are parameters. A situational constraint is of the form $\varphi_1(\vec{f}) \to \varphi_2(\vec{f})$, where $\varphi_1(\vec{f})$ and $\varphi_2(\vec{f})$ are atomic.

Situational constraints generalize existing formulations. For example, an interval constraint $f(i) \in [a_i, b_i]$ can be expressed as $\top \to [-f(i) \leq -a_i]$ and $\top \to [f(i) \leq b_i]$, where $\top$ denotes an always-true atomic constraint. Similarly, an equity constraint can be rewritten as a conjunction of two atomic constraints. While interval and equity constraints have been studied [Qin et al., 2021; Lien et al., 2014; Zhang et al., 2023], they fail to capture context-sensitive requirements. Situational constraints address this gap by capturing conditional demands.

# 3.2 SRA with Situational Constraints

Formally, the SRA problem is represented by the MDP $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, r, P, \gamma, \eta, L \rangle$, where:

• $\mathcal{S}$: The state space, which captures the agent’s position in the 2D region and its movement dynamics, such as velocity.
• $\mathcal{A}$: The action space, which represents the set of actions available to the agent.
• $P \colon \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$: The transition function, which defines the probability distribution over next states $s'$ given the current state $s$ and action $a$.
• $r \colon \mathcal{S} \times \mathcal{A} \to \mathbb{R}$: The reward function, which quantifies the efficiency of resource allocation.
• $\gamma \in (0, 1)$: The discount factor.
• $\eta \in \Delta(\mathcal{S})$: The initial state distribution, which specifies the probability distribution over initial states $s_0 \in \mathcal{S}$.
• $L \colon \mathcal{S} \to 2^M$: The labeling function, which maps each $s \in \mathcal{S}$ to the set of demand nodes $L(s)$ receiving resources at that state.

To formalize the reward function $r$, we follow [Qin et al., 2021] and assume that the agent distributes resources at a constant rate as it moves through the region. This means that the amount of resources $f(i)$ allocated to a demand node $i \in M$ is captured by the time the agent spends within the corresponding sub-region $S_i$. This simplification links resource allocation directly to the agent’s trajectory, allowing the agent to control resource distribution through its movement across the space. Formally, a trajectory is a potentially infinite sequence $\tau = (s_0, s_1, s_2, \dots)$, where each $s_t \in \mathcal{S}$ represents the agent’s state at time $t$. The following definition follows [Qin et al., 2021; Rantzer, 2001; Syed et al., 2008; Chen and Ames, 2019].

Definition 2. Given a trajectory $\tau$, the density of resources allocated to demand node $i \in M$ is defined as $\rho^\tau(i) := \sum_{t=0}^{\infty} \gamma^t \cdot \mathbb{1}_{L(s_t)}(i)$, where $\mathbb{1}$ is the indicator function.

The density $\rho^\tau(i)$ quantifies the cumulative, discounted amount of resources allocated to demand node $i$ by the agent while following the trajectory $\tau$. The discount factor ensures the density function does not diverge in infinite-horizon settings.
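Over a finite trajectory prefix, Definition 2's density is a short computation. The sketch below uses a toy grid and labeling function of our own invention, not the paper's environment:

```python
# Hedged sketch of Definition 2 over a finite trajectory prefix; the grid
# states and labeling function are toy stand-ins, not the paper's setup.

def trajectory_density(trajectory, label, i, gamma):
    """rho^tau(i) = sum_t gamma^t * 1[i in L(s_t)]."""
    return sum(gamma ** t for t, s in enumerate(trajectory) if i in label(s))

# Toy labeling: demand node 0 covers cells with x < 2, node 1 covers the rest.
label = lambda s: {0} if s[0] < 2 else {1}
tau = [(0, 0), (1, 0), (3, 0), (3, 1)]    # states visited at t = 0, 1, 2, 3
rho0 = trajectory_density(tau, label, 0, gamma=0.5)   # 1 + 0.5      = 1.5
rho1 = trajectory_density(tau, label, 1, gamma=0.5)   # 0.25 + 0.125 = 0.375
```

Because node 0 is visited first, the same number of visits earns it a larger discounted density than node 1, which is exactly the time-weighting the discount factor introduces.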
A policy $\pi \colon \mathcal{S} \to \Delta(\mathcal{A})$ defines a probability distribution over the agent’s actions at each state. A sequence of states $\tau = (s_0, s_1, \dots)$ conforms to a policy $\pi$, written $\tau \sim \pi$, if $s_{t+1} \sim P(s_t, a_t)$ and $a_t \sim \pi(s_t)$ for all $t \geq 0$, where $P(s_t, a_t)$ is the transition function.

Definition 3. Given a policy $\pi \colon \mathcal{S} \to \Delta(\mathcal{A})$, the expected density of resources allocated to demand node $i \in M$ is:
$$ \rho^\pi(i) := \mathbb{E}_{\tau \sim \pi}[\rho^\tau(i)] = \sum_{t=0}^{\infty} \gamma^t \operatorname{Pr}(i \in L(s_t) \mid \pi, \eta), $$
where $\operatorname{Pr}(i \in L(s_t) \mid \pi, \eta)$ denotes the probability that demand node $i$ is receiving resources at time $t$, given the policy $\pi$ and initial state distribution $\eta$.

The expected density $\rho^\pi(i)$ captures the average amount of resources allocated to demand node $i$ when the agent follows policy $\pi$. Thus any atomic constraint $\varphi(\vec{f})$ of the form $\vec{a} \cdot \vec{f} \leq b$ can be rephrased as the following constraint over the policy $\pi$ of the agent: $a_1 \rho^\pi(1) + \cdots + a_m \rho^\pi(m) \leq b$. We now formalize our main problem:

Problem 1 (SRA with situational constraints).
$$ \arg \max_{\pi} \sum_{s \in \mathcal{S}} \eta(s) V_\pi(s) \qquad \text{s.t.} \qquad \Psi(\pi), $$
where $V_\pi(s)$ is the expected cumulative reward under policy $\pi$, defined as $V_\pi(s) := \mathbb{E}_{\tau \sim \pi, s_0 = s}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$, and $\Psi(\pi)$ is a conjunction of situational constraints over $\pi$.

# 4 Method

# 4.1 Algorithm Overview

To solve Problem 1, we propose a general algorithmic framework (Alg. 1) designed for constrained reinforcement learning (CRL) tasks. The framework addresses the dual objectives of maximizing cumulative reward and satisfying constraints by incorporating a punitive mechanism through a punitive term function, denoted $\sigma(\Psi, s)$. This term penalizes constraint violations directly within the reward structure, effectively transforming the constrained RL problem into an unconstrained optimization task. The objective then becomes maximizing the cumulative punished reward, allowing the agent to learn constraint satisfaction implicitly while pursuing reward optimization. The framework operates iteratively through three core steps: (1) Trajectory generation: the current policy $\pi$ is used to generate a collection of trajectories $D_\pi$. (2) Constraint evaluation and punitive-term update: using $D_\pi$, the violation degree $Vio^\pi(\Psi)$ is computed to evaluate how well the policy satisfies the constraints, and $\sigma(\Psi, s)$ is updated accordingly. (3) Policy optimization: the policy $\pi$ is updated by maximizing the cumulative punished reward. This iterative process continues until convergence, ensuring that both reward maximization and constraint satisfaction are achieved.
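The three-step loop can be illustrated with a deliberately tiny, deterministic toy. Everything below is our illustration, not the paper's implementation: the "policy" is a single parameter with closed-form expected densities, and the policy update is a greedy nudge rather than a full RL step:

```python
# Deterministic toy of the three-step loop (illustration only; the paper's
# framework samples trajectories and runs a full RL policy update).  A single
# parameter p is the fraction of time the agent serves node 0, so the expected
# densities are rho(0) = p and rho(1) = 1 - p in closed form.  The raw reward
# prefers node 0, but the constraint rho(0) - rho(1) <= 0 forbids node 0 from
# out-receiving node 1.

def violation(p):
    # Vio = a . rho - b with a = (1, -1), b = 0.
    return p - (1.0 - p)

p, kappa = 0.9, 0.0          # policy parameter and penalty factor
beta, lr = 0.5, 0.1          # penalty-factor and policy learning rates
for _ in range(10):
    vio = violation(p)                        # (2) evaluate the constraint
    kappa = max(0.0, kappa + beta * vio)      #     and update the punitive term
    grad = 1.0 - kappa                        # punished reward for serving node 0
    p = min(0.9, max(0.1, p + lr * grad))     # (3) greedy policy improvement
# Once kappa grows past 1, serving node 0 becomes net-negative under the
# punished reward and p is pushed down toward the feasible region.
```

Step (1), trajectory generation, is replaced here by the closed-form densities; in the actual framework the violation degree is estimated from sampled trajectories.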
This framework generalizes existing CRL methods, including RCPO [Tessler et al., 2018], PPO-Lag [Ray et al., 2019], and DCRL [Qin et al., 2021], which utilize scalar Lagrangian multipliers as punitive terms. We extend the punitive mechanism to handle situational constraints. This involves: 1. Defining the punitive term $\sigma(\varphi, s)$ for atomic constraints, ensuring it accurately reflects the violation degree. 2. Extending $\sigma$ to situational constraints, such as $\sigma(\varphi_1 \to \varphi_2, s)$.

# Algorithm 1 General Algorithm Framework

1: Input: An MDP $\mathcal{M}$ with a constraint $\Psi$
2: Initialize: An initial policy $\pi$; a punitive term $\sigma$
3: while Not converged do
4: Generate trajectories $D_\pi = \{\tau_1, \tau_2, \cdots \mid \eta, \pi, P\}$
5: Evaluate violation degree $Vio^\pi(\Psi)$ using $D_\pi$
6: Update punitive term function $\sigma(\Psi, s)$
7: for each transition $(s, a, r, s') \in \tau_i$ where $\tau_i \in D_\pi$ do
8: Apply punitive term on reward $r' \gets r - \sigma(\Psi, s)$
9: end for
10: Update policy $\pi \gets \arg\max_\pi \mathbb{E}_{D_\pi}[\sum_t \gamma^t r'(s_t, a_t)]$
11: end while
12: Return $\pi$

# 4.2 Punitive Mechanism: Atomic Constraints

For an atomic constraint $\varphi$ of the form $\vec{a} \cdot \vec{\rho^\pi} \leq b$, the violation degree is defined as $Vio^\pi(\varphi) := \vec{a} \cdot \vec{\rho^\pi} - b$. This quantifies the extent to which the constraint $\varphi$ is violated under the current policy $\pi$. A positive $Vio^\pi(\varphi)$ indicates a violation, while a value of zero or less signifies satisfaction of the constraint.
To design a punitive mechanism for atomic constraints, we decompose it into two complementary components: the penalty factor, which measures the overall severity of a constraint violation, and the weighting factor, which determines the state-level impact of the violation. These components collectively define the punitive term $\sigma(\varphi, s)$.

1. Penalty factor: The penalty factor $\kappa(\varphi)$ is updated iteratively to reflect the accumulated violation of $\varphi$ over time:
$$ \kappa'(\varphi) := \max(0, \kappa(\varphi) + \beta \cdot Vio^\pi(\varphi)), $$
where $\beta$ is the learning rate. This approach aligns with existing CRL methods, such as DCRL [Qin et al., 2021], where the penalty factor is dynamically adjusted to enforce density constraints. For instance, DCRL enforces an upper bound $\rho_{max}$ on the state density $\rho^\pi(s')$ by updating $\kappa$ as $\kappa' = \max(0, \kappa + \beta \cdot (\rho^\pi(s') - \rho_{max}))$. Our mechanism generalizes this idea to arbitrary atomic constraints.

2. Weighting factor: The weighting factor $w(\varphi, s)$ accounts for the fact that visiting different states may contribute unequally to the violation of $\varphi$ by setting
$$ w(\varphi, s) := \frac{\sum_{i=1}^m a_i \cdot \mathbb{1}_{L(s)}(i)}{\sum_{i=1}^m |a_i|}, $$
where $\mathbb{1}$ is the indicator function and $L(s)$ identifies the demand nodes affected by state $s$. Thus the weighting factor $w(\varphi, s)$ reflects how allocating resources at state $s$ impacts the violation degree $Vio^\pi(\varphi)$.

3. Punitive term function: The punitive term for an atomic constraint $\varphi$ is defined as $\sigma(\varphi, s) = w(\varphi, s) \cdot \kappa(\varphi)$.
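The two components compose directly. The following is a hedged sketch with names of our choosing; it reproduces the weighting-factor values that Example 1 below works through by hand:

```python
# Hedged sketch of the punitive term for an atomic constraint a . rho <= b:
# the weighting factor spreads the penalty factor kappa across states by how
# much serving state s moves the violation degree (names are ours).

def weight(a, labeled_nodes):
    """w(phi, s) = sum_{i in L(s)} a_i / sum_i |a_i|."""
    return sum(a[i] for i in labeled_nodes) / sum(abs(x) for x in a)

def punitive_term(a, labeled_nodes, kappa):
    """sigma(phi, s) = w(phi, s) * kappa(phi)."""
    return weight(a, labeled_nodes) * kappa

# The constraint rho(i) - rho(j) <= 0 with i = 0, j = 1:
a = [1.0, -1.0]
kappa = 0.8                          # the constraint is currently violated
w_i = weight(a, {0})                 #  0.5: serving only i is penalized
w_j = weight(a, {1})                 # -0.5: serving only j is rewarded
w_none = weight(a, set())            #  0.0: serving neither is neutral
r_punished = 1.0 - punitive_term(a, {0}, kappa)   # raw reward 1.0 -> 0.6
```

Note that a negative weighting factor turns the punitive term into a bonus, which is how the mechanism actively steers the agent toward under-served nodes.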
The penalized reward is then computed as:
$$ r'(s, a) = r(s, a) - \sigma(\varphi, s). $$
This formulation guides the agent toward satisfying the constraint by dynamically adjusting the reward according to each state’s impact on the violation degree.

Example 1. Consider an atomic constraint $\varphi$ that specifies $\rho^\pi(i) - \rho^\pi(j) \leq 0$, which enforces that the resources allocated to demand node $i$ should not exceed those allocated to demand node $j$. Suppose that the penalty factor $\kappa(\varphi) > 0$, i.e., the constraint is violated under the current $\pi$. For $s \in \mathcal{S}$, when $i \in L(s)$ and $j \notin L(s)$, by definition we have $w(\varphi, s) = 1/2$. In this case, $\sigma(\varphi, s) = \frac{1}{2}\kappa(\varphi) > 0$. This decreases the reward, which discourages the agent from allocating resources at $s$. Conversely, when $i \notin L(s)$ and $j \in L(s)$, $w(\varphi, s) = -1/2$, so $\sigma(\varphi, s) < 0$, increasing the reward and encouraging the agent to allocate resources at $s$. In any other case, $w(\varphi, s) = 0$, meaning that a penalty at state $s$ does not affect the satisfaction of $\varphi$.

With the definition of the punitive term $\sigma(\varphi, s)$ provided above, we can directly instantiate Algorithm 1 to address an SRA problem with an atomic constraint $\varphi$.

Proposition 1. Consider an SRA problem with an atomic constraint $\varphi$ of the form $\vec{a} \cdot \vec{\rho}^\pi \leq b$. With a sufficiently small learning rate $\beta$, the algorithm framework incorporating the punitive term $\sigma(\varphi, s)$ converges to a feasible solution.

Proof. The proof builds on principles from canonical Lagrangian-based CRL frameworks, such as [Tessler et al., 2018]. Appendix 7.1 contains the details.
□

# 4.3 Punitive Mechanism: Situational Constraints

A situational constraint $\psi := \varphi_1 \to \varphi_2$ can be reformulated as the disjunctive constraint $\neg\varphi_1 \lor \varphi_2$. This requires handling disjunctions of atomic constraints $\varphi_1 \vee \varphi_2$, which pose unique challenges. Traditional CRL methods work well for conjunctive constraints, as these enable gradient-based optimization within connected feasible regions. Disjunctive constraints, however, create disconnected feasible regions that render gradient-based methods ineffective. Conventional approaches usually reformulate disjunctive constraints as mixed-integer linear programs (MILPs) [Trespalacios and Grossmann, 2015; Kronqvist et al., 2021] and solve them using techniques such as branch-and-bound [Türkay and Grossmann, 1996]. While effective for problems with explicit system models (e.g., linear programming), these methods struggle with the complexity and dynamic nature of realistic tasks like the SRA problem, where constraints are context-dependent and the environment evolves sequentially. Moreover, branch-and-bound approaches scale poorly on large or high-dimensional problems due to their exhaustive exploration of disjuncts. Importantly, recent machine learning methods have tackled disjunctive constraints by interpreting them as a min-operator over loss functions or as unions of feasible sets [Ren et al., 2020; Huang et al., 2022; Li and Srikumar, 2019; Nandwani et al., 2019]. Building on these ideas, our approach extends the use of min-operators to RL by designing a punitive mechanism for disjunctive constraints.
To define the punitive term $\sigma(\psi, s)$ at a state $s$, we adopt the principle of prioritizing the “least-violated” disjunct in the constraint, i.e., $\varphi_{j^*}$ where $j^* = \arg\min_j \{\sigma(\varphi_1, s), \sigma(\varphi_2, s), \ldots, \sigma(\varphi_J, s)\}$. The min operator aligns with the logical semantics of disjunctions and has been applied in prior work [Ren et al., 2020; Huang et al., 2022]. This design encourages the policy to satisfy the most attainable disjunct in a disjunction. An illustrative example is provided in Figure 4 in the Appendix. While it is intuitive to select the least-violated atomic constraint during optimization, greedily applying the min-operator may suffer from sub-optimality when it focuses on an infeasible disjunct that merely appears easier to satisfy. Specifically, consider a disjunction $\psi := \varphi_1 \vee \varphi_2$, where $\varphi_1$ is infeasible for the policy set $\Pi$, meaning $\forall \pi \in \Pi$, $Vio^\pi(\varphi_1) > 0$, while $\varphi_2$ is feasible. In this scenario, for a given policy $\pi' \in \Pi$ at state $s$, if the punitive term for the infeasible disjunct $\varphi_1$, $\sigma(\varphi_1, s)$, is smaller than that for the feasible disjunct $\varphi_2$, $\sigma(\varphi_2, s)$, the algorithm would encourage the policy to prioritize $\varphi_1$. Consequently, the policy is misguided toward an unattainable constraint. Figure 5 in Appendix 7.1 illustrates this issue. To address it, we propose a probabilistic mechanism for selecting disjuncts within a disjunctive constraint.
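The selection rule, which weights each disjunct by the inverse of its penalty factor, can be sketched as follows. The function names and the epsilon guard for exactly satisfied disjuncts are our assumptions, not the paper's code:

```python
# Hedged sketch of the probabilistic disjunct selection: each disjunct phi_j
# is drawn with probability proportional to 1 / kappa(phi_j), so nearly
# satisfied disjuncts are favored while harder ones keep nonzero mass.  The
# epsilon guard for kappa = 0 is our addition, not from the paper.
import random

def disjunct_probs(kappas, eps=1e-8):
    inv = [1.0 / max(k, eps) for k in kappas]
    z = sum(inv)
    return [w / z for w in inv]

def sample_disjunct(kappas, rng=random):
    probs = disjunct_probs(kappas)
    return rng.choices(range(len(kappas)), weights=probs, k=1)[0]

kappas = [4.0, 1.0]                 # the second disjunct is closer to satisfaction
probs = disjunct_probs(kappas)      # [0.2, 0.8]
```

Unlike the greedy min-operator, the first disjunct here still receives 20% of the selection mass, so an infeasible but superficially cheap disjunct cannot monopolize the penalty.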
For a disjunction $\psi = \bigvee_{j \in [J]} \varphi_j$, where the $\varphi_j$ are atomic constraints, the probabilistic mechanism defines a random variable $\Phi$ over the set of atomic constraints. The punitive term $\sigma(\psi, s)$ is then defined as $\sigma(\Phi, s)$, where $\Phi$ follows a probability distribution that assigns a probability $p_j$ to each $\varphi_j$. These probabilities are defined as $p_j := \frac{\kappa(\varphi_j)^{-1}}{\sum_{j \in [J]} \kappa(\varphi_j)^{-1}}$, where $\kappa(\varphi_j)$ is the penalty factor of $\varphi_j$. This form ensures that constraints closer to satisfaction are prioritized, while constraints with larger penalty factors are still occasionally explored due to their nonzero probabilities. Further details on the implementation are provided in Algorithm 3 in Appendix 7.1, and its performance is evaluated through an ablation study. Finally, for a conjunction of $I$ situational constraints, $\Psi := \wedge_{i \in I} \psi_i$, we define $\sigma(\Psi, s) = \sum_{i \in I} \sigma(\psi_i, s)$.

# 4.4 SCRL Algorithm

Alg. 2 presents the Situational-Constrained Reinforcement Learning (SCRL) algorithm, an instance of the general framework (Alg. 1) tailored to Problem 1. SCRL incorporates the punitive mechanism defined above to handle situational constraints. The algorithm initializes the penalty factor $\kappa(\varphi) = 0$ for each atomic constraint $\varphi$, with a learning rate $\beta$, to encourage exploration during the early stages of training.
Trajectory data is used to empirically estimate the state density $\rho^\pi(s)$, leveraging either discrete state counts or kernel-based methods for continuous spaces [Qin et al., 2021; Chen, 2017], ensuring computational efficiency for large-scale problems. The algorithm iteratively alternates between generating trajectories, updating penalty factors based on constraint violations, applying punitive terms to the rewards, and optimizing the policy. This iterative process ensures both constraint satisfaction and reward maximization.

# 5 Experiment

We aim to validate the SCRL algorithm’s performance through empirical evaluations over two real-world scenarios.

# 5.1 Experiment Scenarios and Tasks

Medical Resource Allocation. This scenario models the allocation of medical resources in Beijing during the COVID-19 pandemic using a simulation from [Hao et al., 2021]. The city is divided into modules, grouped into five sub-regions based on demand levels (Figure 2a). The challenge is to prioritize high-demand regions during resource shortages while maintaining fairness, reflecting real-world public health requirements for dynamic, context-sensitive allocation policies.

Agricultural Spraying Drone [Qin et al., 2021]. This scenario involves pesticide allocation in farmland in Saskatchewan, Canada, divided into five sub-regions based on crop types (Figure 2b). The agent must optimize pesticide usage by responding to pest outbreaks while avoiding overuse, balancing sufficiency and fairness across regions. This mirrors real-world agricultural challenges with economic and environmental implications.
# Algorithm 2 Situational-Constrained RL

1: Input: An MDP $(\mathcal{S}, \mathcal{A}, P, r, \eta, \gamma, L)$, situational constraints $\Psi := \bigwedge_{i \in I} \bigvee_{j \in \{1,2\}} \varphi_{i,j}$
2: Initialize: Let $\pi$ be a random policy, $\kappa(\varphi) = 0$ be the penalty factor for each $\varphi$, and $\beta$ be the learning rate for $\kappa$
3: repeat
4: Generate trajectories $D_\pi = \{\tau_1, \tau_2, \cdots \mid \eta, \pi, P\}$
5: Empirically compute the density $\rho^\pi$ from $D_\pi$
6: for all atomic constraints $\varphi$ do
7: Compute the violation degree $Vio^\pi(\varphi)$ for $\varphi$
8: Update the penalty factor: $\kappa(\varphi) \gets \max(0, \kappa(\varphi) + \beta \, Vio^\pi(\varphi))$
9: end for
10: for each $\tau_i \in D_\pi$ and each transition $(s, a, r, s') \in \tau_i$ do
11: Calculate $\sigma(\Psi, s)$
12: Apply the punitive term to the reward: $r' \gets r - \sigma(\Psi, s)$
13: end for
14: Solve for the $\pi$ that maximizes the expected punished return based on $D_\pi$
15: until Convergence
16: Output: A policy $\pi$, with density values $\rho^\pi$

Figure 2: The sub-regions in the two scenarios. For both scenarios, the map is divided into a $50 \times 50$ grid and 5 regions (regions with label 0 are ignored in our settings).

The detailed experiment settings for these two scenarios can be found in Appendix 7.2. For both scenarios, the agent’s goal is to maximize resource allocation efficiency while satisfying constraints. Three tasks of increasing complexity evaluate the agent’s performance:

1. Situational Task: A single situational constraint requires, e.g., “If resources allocated to certain regions exceed a threshold, others must receive a minimum allocation.” This tests the agent’s ability to adapt dynamically to conditional requirements.
2.
Priority Task: Involves equity constraints (equal resource allocation across specific regions) and adequacy constraints (minimum resources for specific regions). The situational requirement states: “If adequacy cannot be met, ensure equity.” This evaluates the agent’s ability to prioritize fairness under resource limitations.
3. Joint Task: Combines adequacy and equity constraints simultaneously, without prioritization, requiring the agent to balance potentially conflicting requirements. In some cases, satisfying both constraints may be infeasible.

Details on the scenarios and tasks are provided in Appendix 7.2.

# 5.2 Baselines

We compare our approach against four baseline methods:

1. Deep Deterministic Policy Gradient (DDPG) [Gu et al., 2017] serves as an unconstrained RL baseline, optimizing solely for reward without considering constraints.
2. Reward Constrained Policy Optimization (RCPO) [Tessler et al., 2018] is adapted to our setting by defining cost functions over state-action pairs instead of using density-based constraints. RCPO cannot natively support density-based situational constraints, requiring approximations for implementation.
3. Conservative Augmented Lagrangian (CAL) [Wu et al., 2024] is a recent primal-dual CRL method. We adapt CAL to our setting in the same manner as RCPO.
4. Density Constrained Reinforcement Learning (DCRL) [Qin et al., 2021] is included as a baseline but lacks native support for situational constraints. To adapt it, we decompose each situational constraint $\varphi_1 \to \varphi_2$ into: (DCRL1) the premise $\neg\varphi_1$, treated as an interval constraint $\rho^\pi(s) \in [a, b]$; and (DCRL2) the conclusion $\varphi_2$, addressed independently as an interval constraint.

# 5.3 Performance Metrics

The primary evaluation metric, constraint violation (Cons.Vio.), measures the degree to which constraints are violated. A lower Cons.Vio.
indicates better compliance with these critical constraints and is preferred, as we prioritize safety and fairness in real-world applications. The density function in our experiments takes an undiscounted sum due to the finite-horizon setting. The secondary metric, reward, assesses resource efficiency: a negative reward is incurred for each unit of resources allocated, so higher rewards indicate that fewer resources were used.

# 5.4 Experiment Results

The results in Table 1 show that DDPG attains high rewards at the cost of severe constraint violations. Both RCPO and CAL rely on a surrogate cost function, so they suffer high violations (with the lone exception of CAL on the Med. joint task). Few DCRL instances satisfy the constraints due to the separate implementation, and DCRL lacks scalability against situational constraints. SCRL consistently offers near-zero cost on the priority and situational tasks. For the joint task, SCRL also offers low cost, demonstrating its advantage in satisfying context-sensitive requirements. Moreover, even when a few DCRL instances satisfy the constraints, they perform worse on rewards (e.g., -31 vs. -9). In contrast, SCRL successfully guides the agent to higher-reward feasible solutions.

# 5.5 Case Study: Priority and Joint Tasks

We investigate the SCRL agent’s behavior by comparing it with the DDPG agent’s, as an unconstrained baseline.

Constraint Violation. As shown in Table 2, SCRL effectively prioritizes adequacy constraints in priority tasks, satisfying them at the expense of higher equity violations. In joint tasks, SCRL balances both adequacy and equity constraints, minimizing the total violation degree. This demonstrates SCRL’s adaptability to diverse constraint structures.

Table 1: Results on two scenarios, each with three tasks. Means and standard deviations are collected from 10 independent runs.

Table 2: Violation of the two constraints (equity and adequacy) across the different tasks. In the Med.
scenario, the adequacy requirements differ between the two tasks, so DDPG's performance differs as well.

Resource Allocation. Figure 3 visualizes SCRL’s allocation strategies in the Agri. scenario, with warmer colors indicating higher allocations. In joint tasks, SCRL balances equity constraints, reflecting variations in region size. In priority tasks, it emphasizes adequacy, ensuring critical regions meet minimum demands. These results demonstrate SCRL’s ability to dynamically meet task-specific constraints. A similar analysis for the Med. scenario is given in Appendix 7.2.

Resource Efficiency. In the Agri. scenario, DDPG minimizes resource usage, completing its trajectory in 800 time steps and achieving higher rewards at the cost of constraint violations. In contrast, SCRL meets the adequacy constraints, requiring at least 900 time steps to satisfy minimum demands across regions, leading to lower rewards. A similar trade-off is observed in the Med. scenario. This underscores the inherent trade-off between reward maximization and constraint satisfaction, as CRL algorithms prioritize constraint compliance over unconstrained efficiency.

Figure 3: SCRL’s resource allocation as a heatmap. Higher temperature indicates more resources allocated. In the joint task, the agent tries to satisfy the equity constraint, allocating different amounts of resources to different regions; in the situational task, the agent prioritizes the adequacy constraint, allocating sufficient resources to each region.

Table 3: Case study results on the multi-disjunction task, Agri. scenario. The constraint involves four disjunctions.

# 5.6 Case Study: More Disjunctions

The case study in Table 3 shows the algorithms’ performance under multiple disjunctions. The results are consistent with the earlier analysis: RCPO and CAL fail due to the surrogate constraint; DCRL occasionally satisfies the constraint at a loss of reward (DCRL 3 and 4); SCRL still shows its advantage in capturing situational constraints.
# 5.7 Ablation Study: Probabilistic Mechanism

We compare the proposed probabilistic mechanism (SCRL) with a variant using the min operator (SCRL-min). As shown in Table 4, SCRL-min performs similarly in reward but incurs slightly higher constraint violations. This aligns with our earlier claim that the min operator may mislead the agent into pursuing an infeasible constraint with a lower violation.
# 6 Conclusion

Sequential resource allocation with situational constraints presents a significant challenge in real-world applications, where resource demands and priorities are context-dependent. This paper introduces a novel framework, SCRL, to address this problem. We formalize situational constraints as logical implications and develop a new algorithm that dynamically penalizes constraint violations. To handle situational constraints effectively, we propose a probabilistic selection mechanism that overcomes limitations of traditional constrained reinforcement learning (CRL) approaches. We evaluate SCRL across two scenarios: medical resource allocation during a pandemic and pesticide distribution in agriculture. Experiments demonstrate that SCRL outperforms existing baselines in satisfying constraints while maintaining high resource efficiency, showcasing its potential for real-world, context-sensitive decision-making tasks.
# 1 Introduction

Diffusion Models (DMs) [1–3] are a state-of-the-art class of generative models, achieving high-quality, diverse sampling of complex data distributions. A particularly successful application is the conditional generation of image data, enabling rapid progress in class-conditioned image generation [4–6], image editing [7] and image restoration tasks [8–10]. Training-free guidance methods [10–12] attempt to avoid the high cost of problem-specific model training by steering the reverse diffusion process towards conditional samples. The exact guidance vector required to sample from the conditional distribution is the noisy likelihood score function, obtained by applying Bayes’ rule to the posterior score of the conditional distribution [4]. The main challenge for training-free guidance is the intractability [13] of the noisy likelihood score function. In this paper, we focus on the approach of directly approximating the noisy likelihood score, with application to inverse problems in the field of image restoration. The image degradation model involves application of a measurement operator followed by addition of Gaussian noise, and so the likelihood for clean data is given by a Normal distribution. At each time step of reverse diffusion, the DM unconditional score is augmented with a guidance term corresponding to the approximate likelihood score. When the measurement operator is simply the identity, the inverse problem is image denoising. By exploiting the structure of the noise-perturbed posterior score function, we show that the exact score for denoising is tractable in terms of the unconditional score function at all time steps. With access to the denoising posterior score, we can compute the exact noisy likelihood score for denoising tasks and evaluate the accuracy of existing methods on this task, as well as improve such methods on related tasks.
To demonstrate the value of the tractable denoising score, we develop a method, DPS-w, for correcting DPS [10] on tasks with significant denoising character, such as colorization, inpainting and super-resolution. We hope that the results presented herein can inform future developments of principled training-free guidance methods. The main contributions of this paper are:

1. A novel expression for the tractable denoising posterior score in terms of the unconditional DM score. We also present the result for inpainting in terms of the score function for a non-isotropic noising process [14]. Several exact conditions of the intractable score for inpainting are presented.
2. We use the norm of the exact posterior score to assess the step-size heuristics adopted by DPS and other methods, showing that they result in guidance steps far larger than those implied by the true score for the majority of time steps.
3. We develop a simple method, DPS-w, to highlight the informative value of the tractable posterior score. For a reference denoising task, DPS step sizes are fit to the exact posterior score at each time step and transferred to related linear inverse problems. Despite its simplicity and lack of fine-tuned parameters, DPS-w is competitive with state-of-the-art methods on random inpainting and super-resolution tasks. It is shown to be robust across a range of measurement noise levels, to have little computational overhead, and to enable sampling with a reduced number of steps.

# 2 Background

# 2.1 Diffusion models

Diffusion Models (DMs) [1–3] involve a predefined Gaussian noising process that incrementally maps clean data $x_0$ at time $t = 0$ to pure isotropic noise $x_T \sim \mathcal{N}(0, I)$ at time $T$. To generate samples, the process is run in reverse, starting from sampled $x_T$ and removing the noise $\epsilon_t(x_t)$ at each time step until a clean sample $x_0$ is obtained.
A deep learning model is trained to predict the noise $\epsilon_\theta(x_t, t) \approx \epsilon_t(x_t)$. In DDPM [15], a Variance-Preserving (VP) process is adopted and the noised data has posterior distribution $q_t(x_t | x_0) = \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t} x_0, (1 - \bar{\alpha}_t) I)$, where $\bar{\alpha}_t$ is a parameter of the noise schedule. The marginal noise-perturbed distribution is therefore

$$
p_t(x_t) = \int p(x_0) \, \mathcal{N}(x_t; \sqrt{\bar{\alpha}_t} x_0, (1 - \bar{\alpha}_t) I) \, dx_0,
$$

with $p(x_0)$ the target distribution of clean data $x_0$ [9]. Other methods [1, 16] adopt a Variance-Exploding (VE) process, with noise-perturbed distribution $p_t(x_t) = \int p(x_0) \, \mathcal{N}(x_t; x_0, \sigma_t^2 I) \, dx_0$. The variance $\sigma_t^2$ is the noise level at time $t$. The distributions for VP and VE have been shown to be equivalent [12]. The noise at time $t$ is related to the score of $p_t(x_t)$. In the case of DDPM, the score is given by [17]:

$$
\nabla_{x_t} \log p_t(x_t) = -\frac{1}{\sqrt{1 - \bar{\alpha}_t}} \epsilon_t.
$$

Therefore, knowledge of the noise-perturbed score function is sufficient to generate samples via the denoising process. Similarly, we can define the approximate score function $s_\theta(x_t, t) = -\epsilon_\theta(x_t, t) / \sqrt{1 - \bar{\alpha}_t}$.
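As a sanity check of these relations (our sketch, not from the paper), consider a 1D Gaussian prior $p(x_0) = \mathcal{N}(m, s^2)$, for which the VP marginal of Eq (1) is itself Gaussian, $\mathcal{N}(\sqrt{\bar{\alpha}_t} m, \bar{\alpha}_t s^2 + 1 - \bar{\alpha}_t)$, so its score is available in closed form; all variable names are illustrative.

```python
import math

def vp_marginal_logpdf(x_t, m, s2, abar):
    """log p_t(x_t) for a Gaussian prior N(m, s2) under a VP process:
    x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps, so p_t = N(sqrt(abar)*m, abar*s2 + 1 - abar)."""
    mean, var = math.sqrt(abar) * m, abar * s2 + (1.0 - abar)
    return -0.5 * math.log(2.0 * math.pi * var) - 0.5 * (x_t - mean) ** 2 / var

def vp_marginal_score(x_t, m, s2, abar):
    """Closed-form score of the Gaussian marginal above."""
    mean, var = math.sqrt(abar) * m, abar * s2 + (1.0 - abar)
    return -(x_t - mean) / var

m, s2, abar, x_t = 1.0, 0.64, 0.3, 0.2  # illustrative values

# Finite-difference check of the closed-form score.
h = 1e-5
fd = (vp_marginal_logpdf(x_t + h, m, s2, abar)
      - vp_marginal_logpdf(x_t - h, m, s2, abar)) / (2.0 * h)
assert abs(fd - vp_marginal_score(x_t, m, s2, abar)) < 1e-6

# Eq (2): E[eps | x_t], derived independently via Gaussian conditioning
# (cov(eps, x_t) = sqrt(1-abar)), coincides with -sqrt(1-abar) * grad log p_t(x_t).
var = abar * s2 + (1.0 - abar)
cond_mean_eps = math.sqrt(1.0 - abar) * (x_t - math.sqrt(abar) * m) / var
assert abs(cond_mean_eps
           + math.sqrt(1.0 - abar) * vp_marginal_score(x_t, m, s2, abar)) < 1e-12
```

The same closed-form Gaussian machinery is reused below to verify the exact posterior score results numerically.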
# 2.2 Training-free conditional diffusion models for inverse problems

For a noisy measurement $y$ with forward model

$$
y = \mathcal{A}(x_0) + \sigma_y \eta,
$$

where $\mathcal{A}$ is the measurement operator, $\sigma_y$ is the measurement noise level and $\eta \sim \mathcal{N}(0, I)$, we aim to solve the inverse problem to find realistic solutions $x_0$. Expressed in a Bayesian formulation, we aim to sample from the posterior $p(x_0 | y) \propto p(x_0) p(y | x_0)$, given the prior $p(x_0)$ and the likelihood $p(y | x_0) = \mathcal{N}(y; \mathcal{A}(x_0), \sigma_y^2 I)$. To sample from the posterior with a reverse diffusion process, we need the noise-perturbed posterior score function, which is related to the noise-perturbed prior and likelihood score functions by [4]

$$
\nabla_{x_t} \log p_t(x_t | y) = \nabla_{x_t} \log p_t(x_t) + \nabla_{x_t} \log p_t(y | x_t).
$$

Figure 1: Selected samples for inverse linear problems inpainting and super resolution with methods DPS and DPS-w, $\sigma_y = 0.05$.

Eq (4) provides a route for sampling the posterior: at each time step, augment the unconditional DM score function with the guidance of the noisy likelihood score. In general, the likelihood score is intractable [10, 13] and needs to be approximated [18]. Diffusion Posterior Sampling (DPS) [10] introduces a popular approximation approach. First, it is observed that the time-dependent likelihood

$$
p_t(y | x_t) = \int p(x_0 | x_t) p(y | x_0) \, dx_0
$$

can be interpreted as the expectation $E_{x_0 \sim p(x_0 | x_t)}[p(y | x_0)]$.
Second, the expectation of the function $p(y | x_0)$ is approximated as the function of the expectation: $p(y | \hat{x}_0)$, where $\hat{x}_0 = E_{x_0 \sim p(x_0 | x_t)}[x_0]$. The posterior mean $\hat{x}_0$ is the MMSE estimate of $x_0$ given $x_t$, and is calculated in terms of the score function via Tweedie’s formula [19]. For DDPM, $\hat{x}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}} \left( x_t + (1 - \bar{\alpha}_t) \nabla_{x_t} \log p_t(x_t) \right)$. The source of error in the DPS approximation is due to the Jensen gap. The practical form of the DPS approximation given a pre-trained DM $s_\theta$ is

$$
\nabla_{x_t} \log p_t(x_t | y) \approx s_\theta(x_t, t) - \rho \nabla_{x_t} \| y - \mathcal{A}(\hat{x}_0) \|^2,
$$

and the step size $\rho = 1 / \sigma_y^2$ is generally replaced with the time-dependent step size $\zeta_t = \zeta' / \| y - \mathcal{A}(\hat{x}_0) \|$. Since $\hat{x}_0$ is a function of $x_t$, backpropagation through the neural network is required. Finally, though DPS is applied to both linear and nonlinear inverse problems, we only consider linear operators $\mathcal{A}(x_0) = A x_0$ in the rest of this paper.

# 3 Related work

There is a growing body of work on approximations to the noisy likelihood score, Eq (5) [18].
MCG [11] introduced the use of Tweedie’s formula to evaluate the likelihood at the posterior mean, $\hat{x}_0$, which was extended to general noisy inverse problems by DPS [10]. LDG [20] draws Monte Carlo samples from a Normal distribution to reduce the bias in DPS, and extends the approach to general loss-function guidance. PGDM [21] modifies the Jensen approximation of DPS to model $p(x_0 | x_t)$ as a Gaussian, and introduces time-dependent variances, or step sizes, based on a heuristic. DSG [22] introduces the concept of manifold deviation during the sampling process and proposes a constraint to keep the guidance step within a zone of high confidence. DAPS [23] considers an alternative, noise-annealing process that decouples consecutive time steps, allowing large variation between steps. DAPS and DSG are recent state-of-the-art methods that have been shown to outperform a wide range of established methods on the benchmark tasks considered in this paper.

# 4 Method

We present novel expressions for the posterior score function, Eq (4), for pure denoising and noisy inpainting problems. The denoising posterior score is tractable when we have access to the score function for the isotropic noising process of DMs. The inpainting posterior requires the score for a non-isotropic noising process. In both settings, the novel expressions reveal exact conditions satisfied by the posterior score, which can be used to design principled approximations. In this section, we assume a VE noising process for simplicity of notation. For our experiments we apply both VE and VP processes; a conversion between VE and VP score functions is derived in Appendix B.1.

# 4.1 The exact posterior score for denoising

When the measurement operator $A$ is the identity, the inverse problem reduces to the task of denoising, with posterior $p(x_0 | y) \propto p(x_0) \mathcal{N}(y; x_0, \sigma_y^2 I)$.
It might be expected that DMs can sample from this posterior given that they are specifically trained to denoise. Indeed, several recent works [24–26] show that the posterior can be sampled by a denoising process starting from time $t'$ corresponding to noise level $\sigma_y$ and $x_{t'} \propto y$. While this sampling procedure has been applied to inverse problems by interleaving consistency and denoising steps [24, 26], it does not provide direct access to the score of the noise-perturbed posterior or the noisy likelihood. We exploit the structure of the noise-perturbed distribution to obtain a tractable expression for the posterior score (and in turn, the noisy likelihood via Eq (4)):

$$
\begin{aligned}
p_t(x_t | y) &= \int p(x_0 | y) \, \mathcal{N}(x_t; x_0, \sigma_t^2 I) \, dx_0 \\
&\propto \int p(x_0) \, \mathcal{N}(y; x_0, \sigma_y^2 I) \, \mathcal{N}(x_t; x_0, \sigma_t^2 I) \, dx_0,
\end{aligned}
$$

where we apply Bayes’ rule to get the second line. By applying the rule for products of Gaussians, we arrive at a simple relationship between the perturbed posterior and prior distributions:

$$
p_t(x_t | y) \propto p_{\tilde{t}}(\tilde{x}) \, \mathcal{N}(y; x_t, (\sigma_y^2 + \sigma_t^2) I),
$$

where $\tilde{t}$ is defined such that $\sigma_{\tilde{t}}^2 = (\sigma_y^{-2} + \sigma_t^{-2})^{-1}$ and $\tilde{x} = \tilde{x}(x_t) = \sigma_{\tilde{t}}^2 (\sigma_y^{-2} y + \sigma_t^{-2} x_t)$. Finally, Proposition 4.1 gives the posterior score function for denoising, obtained by taking the score of Eq (7). The proof is given in Appendix A.1.

Proposition 4.1.
For the inverse linear problem where $A = I$, the noise-perturbed posterior score function is given by

$$
\nabla_{x_t} \log p_t(x_t | y) = \sigma_t^{-2} \sigma_{\tilde{t}}^2 \nabla_{\tilde{x}} \log p_{\tilde{t}}(\tilde{x}) - (\sigma_y^2 + \sigma_t^2)^{-1} (x_t - y).
$$

So we can evaluate the posterior score for any $t$ as a linear function of the prior score evaluated at $\tilde{x}$ at time $\tilde{t}$. Note the following behaviors evident in Eq (8):

• As $\sigma_y \to \infty$, the unconditional score $\nabla_{x_t} \log p_t(x_t)$ is recovered.
• For $\sigma_t \ll \sigma_y$, the RHS is approximately $\nabla_{x_t} \log p_t(x_t) - \sigma_y^{-2} (x_t - y)$, where the second term is the score of the noiseless likelihood $p(y | x_0)$, as expected.
• For $\sigma_t \gg \sigma_y$, $\tilde{x}(x_t) \approx y$ and $\sigma_{\tilde{t}} \approx \sigma_y$, so the arguments of the prior score function are nearly constant for the majority of time steps and, in practice, we can avoid many calls to the approximate score $s_\theta(\tilde{x}, \tilde{t})$. We also note that the second term dominates for the majority of time steps; see Figure 7 in Appendix E. For sufficiently small $\sigma_y$, the linear guidance term alone, $-(\sigma_y^2 + \sigma_t^2)^{-1} (x_t - y)$, is a good approximation to the score.
• The noise level present in $\tilde{x}$ is $\sigma_{\tilde{t}}^2$, so the arguments of the score function are consistent with those used for the training objective of DMs.
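Proposition 4.1 can be checked numerically in one dimension (our sketch, not from the paper): for a Gaussian prior $p(x_0) = \mathcal{N}(m, s^2)$ under a VE process, both sides of Eq (8) are analytic, and they agree to machine precision.

```python
import math

# 1D sanity check of Proposition 4.1 (VE process) for a Gaussian prior p(x0) = N(m, s2).
# All numerical values are illustrative.
m, s2 = 1.0, 0.64        # prior mean and variance
y, sy = 0.3, 0.5         # measurement and noise level sigma_y
st = 2.0                 # diffusion noise level sigma_t
x_t = 0.7

def gaussian_score(x, mean, var):
    return -(x - mean) / var

# LHS: exact noise-perturbed posterior score. p(x0 | y) is Gaussian with variance
# v = (1/s2 + 1/sy^2)^-1 and mean v*(m/s2 + y/sy^2); perturbing with sigma_t^2 noise
# simply adds st^2 to the variance.
v = 1.0 / (1.0 / s2 + 1.0 / sy**2)
mu = v * (m / s2 + y / sy**2)
lhs = gaussian_score(x_t, mu, v + st**2)

# RHS of Eq (8): prior score at (x_tilde, t_tilde) plus the linear guidance term.
st2_tilde = 1.0 / (sy**-2 + st**-2)                  # sigma_{t_tilde}^2
x_tilde = st2_tilde * (sy**-2 * y + st**-2 * x_t)
prior_score = gaussian_score(x_tilde, m, s2 + st2_tilde)  # score of p_{t_tilde} at x_tilde
rhs = st**-2 * st2_tilde * prior_score - (x_t - y) / (sy**2 + st**2)

assert abs(lhs - rhs) < 1e-12
```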
We therefore expect an off-the-shelf $s_\theta(\tilde{x}, \tilde{t})$ to be a reliable approximator of $\nabla_{\tilde{x}} \log p_{\tilde{t}}(\tilde{x})$. To the best of our knowledge, Eq (8) is a novel result that allows efficient evaluation of the posterior and noisy likelihood score functions for denoising. For methods that directly approximate the likelihood score, Eq (5), such as DPS, it can be used to evaluate the accuracy at each time step $t$ when $A = I$. This information can also be used to improve such methods. In Section 4.3 we propose a simple method to determine time-dependent step sizes $\zeta_t$ to improve DPS for tasks with a large denoising character, such as colorization, random inpainting and super resolution.

# 4.2 The exact posterior score for inpainting

Inpainting can be expressed as an inverse linear problem with $A = \operatorname{diag}(d_1, \dotsc, d_n)$, where $d_i \in \{0, 1\}$ determines whether pixel $i$ is masked. The posterior score for noisy inpainting, given in Proposition 4.2, can be derived following a procedure similar to that for denoising (see proof in Appendix A.2).

Proposition 4.2.
For inpainting problems, for which $A = \operatorname{diag}(d_1, \dotsc, d_n)$ with $d_i \in \{0, 1\}$, the noise-perturbed posterior score function is given by

$$
\nabla_{x_t} \log p_t(x_t | y) = \sigma_t^{-2} \Sigma_{\tilde{t}} \nabla_{\tilde{x}} \log p_{\Sigma_{\tilde{t}}}(\tilde{x}) - (\sigma_y^2 + \sigma_t^2)^{-1} A (x_t - y),
$$

where $\Sigma_{\tilde{t}} = (\sigma_y^{-2} A + \sigma_t^{-2} I)^{-1}$, $\tilde{x} = \Sigma_{\tilde{t}} (\sigma_y^{-2} A y + \sigma_t^{-2} x_t)$ and the non-isotropic score function is the score of

$$
p_{\Sigma_{\tilde{t}}}(\tilde{x}) = \int p(x_0) \, \mathcal{N}(\tilde{x}; x_0, \Sigma_{\tilde{t}}) \, dx_0.
$$

Given the score function for a noising process that has a different noise level for each pixel, we can compute the exact posterior score using Eq (9). Such non-isotropic score functions are not commonly available, but there is recent work on training such models and performing conditional sampling with non-isotropic denoising processes [14]. Eq (9) shows how to sample from the exact posterior using a non-isotropic score in an isotropic denoising process. In the Experiments section, we demonstrate the validity of the posterior score for inpainting on a toy problem for which the non-isotropic score function is analytically tractable. Analysis of Eq (9) reveals the following properties of the exact posterior score:

• For $\sigma_y \to 0$, $\tilde{x}$ and $\Sigma_{\tilde{t}}$ define a noising process that only noises the masked pixels. This is an intuitive result; we would expect a model trained on this process to solve noiseless inpainting tasks.
• Despite the intractability of the score overall, the components corresponding to unmasked pixels are given exactly by $-\sigma_t^{-2} A (x_t - y)$ for noiseless inpainting.
• For $\sigma_y > 0$, Eq (9) describes the exact balancing between noise levels in the non-isotropic score for masked and unmasked pixels at each time step required to sample from the posterior.
• For $\sigma_t \gg \sigma_y$, $A \nabla_{x_t} \log p_t(x_t | y) \approx -(\sigma_y^2 + \sigma_t^2)^{-1} A (x_t - y)$. As noted in Section 4.1, and demonstrated in Figure 7 in Appendix E for denoising, the linear guidance term dominates for large $\sigma_t$.
• For $\sigma_t \ll \sigma_y$, $\nabla_{\tilde{x}} \log p_{\Sigma_{\tilde{t}}}(\tilde{x}) \approx \nabla_{x_t} \log p_t(x_t)$ and the noisy likelihood score can be approximated by the simple guidance term $-\sigma_y^{-2} A (x_t - y)$.
• For $\sigma_y \approx \sigma_t$, at the typically low values of $\sigma_y$, the anisotropy of $\Sigma_{\tilde{t}}$ is small in absolute terms, and we expect the isotropic score function in the denoising posterior score expression to provide a good approximation of the non-isotropic score for unmasked pixel dimensions. As a result, we expect the denoising trajectories of unmasked pixels under the posterior score for denoising, Eq (8), and inpainting, Eq (9), to be identical for $\sigma_y = 0$ and approximately equal for $\sigma_y > 0$.

# 4.3 DPS-w: improving DPS with time-dependent step size

In practice, the DPS method replaces the analytically-derived step size of $\sigma_y^{-2}$ with the factor $\zeta_t = \zeta' / \| y - A \hat{x}_0(x_t) \|$.
The division by the norm is motivated [10, 20] by the expected increase in error for larger $\sigma_t$; the norm will generally be larger earlier in the denoising process. The constant $\zeta'$ is a task-specific hyperparameter that is determined empirically. Improvements to DPS are generally motivated by error analysis of toy problems (e.g. [20, 27]) or theoretical grounds such as manifold preservation [11, 28, 22]. For real-world problems, reliable ground truth data is generally unavailable for validation of approximate score functions. With access to the tractable posterior score, Eq (8), we can evaluate approximate scores for the case of denoising, and even improve them on the fly. We can combine Eqs (8) and (4) to yield an expression for the noisy likelihood score,

$$
\begin{aligned}
\nabla_{x_t} \log p_t(y | x_t) &= \nabla_{x_t} \log p_t(x_t | y) - \nabla_{x_t} \log p_t(x_t) \\
&= \sigma_t^{-2} \sigma_{\tilde{t}}^2 \nabla_{\tilde{x}} \log p_{\tilde{t}}(\tilde{x}) - \nabla_{x_t} \log p_t(x_t) - (\sigma_y^2 + \sigma_t^2)^{-1} (x_t - y).
\end{aligned}
$$

Substituting the trained score $s_\theta$ for both prior score invocations on the RHS of Eq (11), we define

$$
s_\theta(y | x_t) = \sigma_t^{-2} \sigma_{\tilde{t}}^2 s_\theta(\tilde{x}, \tilde{t}) - s_\theta(x_t, t) - (\sigma_y^2 + \sigma_t^2)^{-1} (x_t - y),
$$

which can be compared directly to the DPS approximation, $s_{\mathrm{DPS}}(y | x_t, A = I)$, where

$$
\begin{array} { r } { s_{\mathrm{DPS}}(y | x_t, A) = -\zeta_t \nabla_{x_t} \| y - A \hat{x}_0(x_t) \|^2 .
} \end{array}
$$

We propose the DPS-w method, which replaces the hyperparameter $\zeta_t$ at each time step with the weight $w_t$ that minimizes the MSE of the DPS score for the reference task of denoising ($A = I$):

$$
w_t = \frac{s_\theta(y | x_t) \cdot s_{\mathrm{DPS}}(y | x_t, A = I)}{\| s_{\mathrm{DPS}}(y | x_t, A = I) \|^2},
$$

where $\cdot$ is the dot product. While the weight $w_t$ is optimized for the pure denoising case, it can be applied to DPS for general inverse problems. For problems with a high degree of denoising character, such as colorization, random inpainting and super resolution, we expect the $w_t$ to be informative and to improve the DPS trajectory. For a given inverse problem, a reference denoising task is chosen and used to compute $w_t$. The reference task for inpainting is denoising of the unmasked pixels, for colorization it is denoising of the noised grayscale image, and for super resolution it is denoising of the adjoint-upsampled measurement. Full details and the algorithm are given in Appendix B.3. In the experimental section we show that, despite its simplicity, DPS-w provides a significant improvement for these tasks, competitive with more sophisticated state-of-the-art methods at varying levels of measurement noise. The values of $w_t$ for a typical denoising task are presented in Figure 8 in Appendix E. It is clear that the reciprocal error norm heuristic employed by DPS leads to step sizes that are far too large for $A = I$. The PGDM [21] method also introduced time-dependent step sizes based on a heuristic motivated by pure denoising guidance, approaching 1 for large $\sigma_t$ and 0 for small $\sigma_t$. As discussed in Section 5.2, large, early step sizes are a feature of DSG guidance.
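The weight in Eq (14) is simply the scalar least-squares projection coefficient of the exact score onto the DPS score. A minimal sketch (ours; `s_exact` and `s_dps` stand in for the two score vectors at one time step):

```python
import numpy as np

def dpsw_weight(s_exact, s_dps):
    """Scalar w_t minimizing ||s_exact - w * s_dps||^2, i.e. Eq (14):
    the least-squares projection coefficient of s_exact onto s_dps."""
    return float(np.dot(s_exact, s_dps) / np.dot(s_dps, s_dps))

# Illustrative vectors standing in for the exact and DPS likelihood scores.
rng = np.random.default_rng(0)
s_dps = rng.normal(size=8)
s_exact = 0.3 * s_dps + 0.05 * rng.normal(size=8)

w = dpsw_weight(s_exact, s_dps)
# The residual of the weighted DPS score is orthogonal to s_dps:
# the defining property of the least-squares solution.
assert abs(np.dot(s_exact - w * s_dps, s_dps)) < 1e-10
```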
In contrast, the $w_t$ of DPS-w show the opposite behavior, starting out very small and becoming much larger towards the end of the trajectory.

# 5 Experiments

In Section 5.1, we compare the performance of the exact posterior score function introduced in the Method section with DPS and DPS-w on a toy problem. In Section 5.2, we evaluate the DPS-w method on two standard benchmarking image datasets for inverse linear problems: denoising, colorization, random inpainting and super resolution.

# 5.1 Toy problem: two-dimensional double well

We introduce a double-well model system represented by a bivariate Gaussian mixture distribution, $p_{\mathrm{DW}}(x^{(0)}, x^{(1)})$. The two Gaussians are equally weighted and have means $\mu_1 = (-2.0, 2.4)$, $\mu_2 = (1.5, 0.0)$ and standard deviations $\sigma_1 = (0.5, 0.6)$, $\sigma_2 = (0.3, 0.45)$, respectively. The log-probability of the distribution is visualized in Figure 9, Appendix E, along with one hundred random samples. A Gaussian mixture is a convenient choice since it has an analytic noise-perturbed score function for general Gaussian noise, including the non-isotropic noise in Eq (10). Consider the problem of computing the free energy profile along the first dimension (horizontal axis of Figure 9(a)), which is the negative logarithm of the marginal

$$
p(x^{(0)}) = \int p_{\mathrm{DW}}(x^{(0)}, x^{(1)}) \, dx^{(1)}.
$$

While the integral is tractable in this case, we aim to solve the problem by sampling with a denoising process. Simply sampling from the unconditional score will yield good representation of the minima, but very little coverage of the low-probability regions, such as the transition barrier between the two wells.
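The analytic noise-perturbed score of such a mixture is straightforward: adding Gaussian noise to each diagonal-covariance component simply adds the noise variances to the component variances. A sketch (ours, not the paper's code) using the double-well parameters above, checked against a finite difference:

```python
import numpy as np

# Noise-perturbed score of the double-well Gaussian mixture (equal weights,
# diagonal component covariances). Perturbing with Gaussian noise of covariance
# diag(noise_var) adds noise_var to the component variances, keeping the density
# and its score in closed form.
MEANS = np.array([[-2.0, 2.4], [1.5, 0.0]])
STDS = np.array([[0.5, 0.6], [0.3, 0.45]])

def perturbed_logpdf(x, noise_var):
    var = STDS**2 + noise_var            # (2, 2): per-component perturbed variances
    log_comp = -0.5 * np.sum((x - MEANS)**2 / var + np.log(2 * np.pi * var), axis=1)
    return np.logaddexp(log_comp[0], log_comp[1]) - np.log(2.0)  # equal weights

def perturbed_score(x, noise_var):
    var = STDS**2 + noise_var
    log_comp = -0.5 * np.sum((x - MEANS)**2 / var + np.log(2 * np.pi * var), axis=1)
    resp = np.exp(log_comp - np.logaddexp(log_comp[0], log_comp[1]))  # responsibilities
    return np.sum(resp[:, None] * (MEANS - x) / var, axis=0)

# Finite-difference check at an arbitrary point, for isotropic noise sigma_t^2 = 0.5.
# noise_var may also be a length-2 array, i.e. the non-isotropic noise of Eq (10).
x, nv, h = np.array([0.2, 1.0]), 0.5, 1e-5
for d in range(2):
    e = np.zeros(2); e[d] = h
    fd = (perturbed_logpdf(x + e, nv) - perturbed_logpdf(x - e, nv)) / (2 * h)
    assert abs(fd - perturbed_score(x, nv)[d]) < 1e-6
```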
A popular technique from the field of molecular dynamics, Umbrella Sampling [29], addresses this issue through the use of harmonic bias potentials centered at various positions along $x^{(0)}$. The biased samples are processed with the weighted histogram analysis method (WHAM) [30] to remove the biases in the computed integral. For this problem, Umbrella Sampling turns out to be identical to noisy inpainting, with mask $A = \operatorname{diag}(1, 0)$ and noise level $\sigma_y$ governing the width of a bias window. We perform sampling at multiple positions using the exact posterior score for inpainting given by Eq (9) (see Figure 9(b)), and the approximations DPS and DPS-w, and compute the corresponding free energy profiles. Figure 9(c) shows that the exact posterior score correctly represents the harmonic bias potential, leading to a highly accurate free energy profile when unbiasing with WHAM. DPS is too flexible in the $x^{(1)}$ dimension, and too restrictive in $x^{(0)}$, requiring twice as many bias windows and producing a flat profile that lacks any characteristic of the ground truth. On the other hand, DPS-w captures the correct shape, albeit with an underestimation of the depth of the first well.

# 5.2 Inverse linear problems on images

We conduct quantitative and qualitative evaluation of the DPS-w method against benchmark methods DPS, DSG and DAPS, on the FFHQ [31] and ImageNet [32] datasets at resolution $256 \times 256$. We use the first 200 images from the FFHQ 1k validation set and a random sample of 200 images from the ImageNet 1k validation set. Following DPS [10], the same pre-trained, unconditional DDPM models are used for all tasks and benchmark methods: for FFHQ we use the model released with DPS; for ImageNet we use the model from [4]. For evaluation metrics, we compute LPIPS [33] with the replace-pooling option enabled, SSIM and PSNR.
Following the implementation in [23], the metrics are computed with the piq [34] library with data normalized to the range $[0, 1]$. All methods are run with 1000 steps, except the 100-step version of DPS-w. Measurement noise $\sigma_y$ is added according to Eq (3), i.e. the measurement operator is applied to the normalized image first, followed by noise. We run experiments for $\sigma_y = 0.01$, 0.05 and 0.1. The benchmarking experiments in [10], [23] and [22] are carried out for $\sigma_y = 0.05$. It is important to highlight an ambiguity in recent publications, particularly works that extend the code base of DPS (including DSG and our work). The DPS paper reports that images are normalized to the range $[0, 1]$, but in the code the range is actually $[-1, 1]$. The true noise level in the $[0, 1]$ pixel space is therefore $\sigma_y / 2$, and it is not clear whether the value of $\sigma_y$ used for experiments was adjusted accordingly. We assume that the noise level corresponds to the data normalized to $[-1, 1]$; our benchmarking experiments are consistent with this for all methods. We also include $\sigma_y = 0.1$ for some experiments to cover both scenarios. For each method, we use the same published hyperparameters where available for a given task, and borrow hyperparameters from a related task if not (e.g. inpainting with $70\%$ vs $92\%$ masking). Since all benchmark methods were developed for $\sigma_y = 0.05$, the purpose of testing on different noise levels is to evaluate the sensitivity and broad applicability of the parameters, rather than assessing a method’s full potential. When comparing to DPS-w, we highlight the tasks to which each method was fine-tuned and should therefore be the most competitive.
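The normalization ambiguity above reduces to a rescaling of the noise standard deviation; a minimal illustration (ours, with the identity operator):

```python
import numpy as np

# Adding N(0, sigma_y^2) noise in [-1, 1] pixel space and then mapping to [0, 1]
# via x01 = (x11 + 1) / 2 halves the effective noise standard deviation.
rng = np.random.default_rng(1)
x11 = rng.uniform(-1.0, 1.0, size=1000)   # image in [-1, 1] space (A = I)
eta = rng.normal(size=1000)
sigma_y = 0.05

y11 = x11 + sigma_y * eta                 # Eq (3) with A = I, in [-1, 1] space
y01 = (y11 + 1.0) / 2.0                   # same measurement, re-expressed in [0, 1]
x01 = (x11 + 1.0) / 2.0

# The noise realized in [0, 1] space is exactly (sigma_y / 2) * eta.
assert np.allclose(y01 - x01, (sigma_y / 2.0) * eta)
```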
DPS-w has no hyperparameters, except for the single $w_{\mathrm{max}}$ parameter in the case of super resolution, which is fit to a single image per dataset and not fine-tuned for different noise levels (see Appendix for more details).

Random inpainting. We run experiments for random inpainting with $40\%$, $70\%$, and $92\%$ of pixels removed. With decreasing masking probability, the inpainting problem resembles pure denoising more closely.

Super resolution. The image resolution is reduced to $64 \times 64$ with bicubic downsampling.

Table 2 in Appendix D presents the quantitative benchmarking results for inpainting and super resolution tasks, with $\sigma_y = 0.05$ on both FFHQ and ImageNet. Overall, DPS-w is competitive on these tasks, outperforming DAPS on most tasks and metrics, and competitive with DSG; while DSG has the best LPIPS evaluation across all tasks, DPS-w has stronger results for the SSIM and PSNR metrics. DPS-w significantly improves on DPS in all cases. Interestingly, the 100-step version of DPS-w appears to trade a small loss in LPIPS performance for a gain in SSIM and PSNR, beating DPS-w with 1000 steps in most cases. Confidence intervals for these results are provided in Tables 5 and 6 in Appendix D. By scaling the DPS-w step size by a factor of $\sqrt{d_m}$, where $d_m$ is the number of masked pixels, random inpainting performance on ImageNet was improved to match DSG. See Appendix D.2 for results and discussion. For the ImageNet super resolution task, all methods had a mix of success and failure cases, with failures including blurry or distorted images. While DPS-w performed well quantitatively on the super resolution task, we noticed a higher number of blurry samples compared to DSG. More thorough parametrization of the DPS-w reference denoising task for super resolution, beyond $w_{\mathrm{max}}$, would likely improve these results, but is against the spirit and scope of this work.
Other noise levels. In DPS, the noise level is absorbed into $\zeta'$ and there is no other reference to $\sigma_y$. Similarly, the DAPS method was developed for a wide range of tasks exclusively at $\sigma_y = 0.05$, and the noise level itself is replaced with a tuned hyperparameter. On the other hand, DPS-w has a concrete dependency on $\sigma_y$ via the denoising score function, and DSG has adaptable step sizes and robust parameters that are claimed to be broadly applicable across tasks. Additional benchmarking results for $\sigma_y = 0.01$ and 0.1 can be found in Tables 3 and 4 in Appendix D, respectively. DPS-w outperforms the benchmark methods across almost all tasks for both $\sigma_y = 0.01$ and 0.1, demonstrating the value of the denoising posterior score reference as a way to avoid overdependence on hyperparameters. For applications where the noise level is not known, $\sigma_y$ can be treated as a problem-specific hyperparameter.

Denoising. The results for the denoising task with $\sigma_y = 0.05$ using the exact posterior score, Eq (8), DPS, DPS-w, and DPS-w with 100 steps are given in Table 1. For DPS, we use the random inpainting configuration and set the masking probability to zero. As expected, the exact score gives superior performance to the DPS reference, which has not been specifically optimized for denoising. DPS-w with its optimized step sizes closely matches the exact results. DPS-w with 100 steps is only slightly worse in the LPIPS metric. In Appendix D.1, we assess how well the exact denoising posterior satisfies necessary conditions of a true posterior sampler when using the unconditional score of the FFHQ model.

Table 1: Quantitative evaluation of image denoising on FFHQ.

We also examine denoising trajectories for a FFHQ image (Figure 3 in Appendix C) for the exact, DPS, DPS-w and DSG methods.
Due to the small weights $w_t$ for large $t$, DPS-w is seen to develop high-level features slightly later than DPS. Conversely, as highlighted by the authors [22], DSG develops high-level features very early in the denoising process, and it is suggested that this “more effective” guidance enables rapid sampling. By comparing to the exact trajectory, which is slower to develop features, we can see that the impressive performance of DSG guidance is not due to an improved approximation of the score function. Additionally, the ability of DPS-w to sample with fewer steps without hyperparameter tuning shows that strong, early guidance is not essential for rapid sampling.

Figure 2: DPS samples for various $\zeta'$ values on the image colorization task with $\sigma_y = 0.1$. DPS is unable to produce realistic results across a range of hyperparameter values, without loss of structural features ($\zeta' = 0.2$). DPS-w yields high-quality samples despite having no tunable parameters.

Colorization. The measurement operator takes a weighted average of the color channels per pixel to obtain a grayscale image. The values are repeated into three channels so that the shape of the image tensor is unchanged. Figure 2 shows samples from DPS from a hyperparameter scan for an FFHQ image, compared to DPS-w. DPS is unable to generate high-quality samples across a range of hyperparameter values. For large $\zeta' > 0.2$, DPS generates strongly color-tinted images; at lower guidance step sizes, the unconditional score helps obtain a more realistic color distribution, but at the cost of losing structural features in the image. The time-dependent $w_t$ of DPS-w enable a more flexible trade-off, avoiding a commitment to color features too early in the trajectory, while giving the prior more weight towards the end.
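The colorization operator described above is linear and easy to write down. A sketch (ours; the paper does not specify the channel weights, so standard Rec. 601 luma weights are assumed):

```python
import numpy as np

# Colorization measurement operator: per-pixel weighted average over channels,
# repeated into three channels so the tensor shape is preserved.
# The weights are an assumption (Rec. 601 luma); the paper only says "weighted average".
WEIGHTS = np.array([0.299, 0.587, 0.114])

def grayscale_operator(img):
    """img: (H, W, 3) array -> (H, W, 3) grayscale image with identical channels."""
    gray = np.tensordot(img, WEIGHTS, axes=([2], [0]))  # (H, W)
    return np.repeat(gray[:, :, None], 3, axis=2)

rng = np.random.default_rng(2)
img = rng.uniform(0.0, 1.0, size=(4, 4, 3))
out = grayscale_operator(img)

assert out.shape == img.shape                      # shape unchanged
assert np.allclose(out[..., 0], out[..., 1])       # all channels identical
# Since the weights sum to 1, the operator is a projection: A(A(img)) = A(img).
assert np.allclose(grayscale_operator(out), out)
```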
More samples are presented for qualitative evaluation on FFHQ and ImageNet in Figures 5 and 6 in Appendix C, respectively. ImageNet proves to be a more challenging dataset for this task: while most of the ten presented samples are reasonable, there are a couple of failure cases with color bleeding or patching.

# 6 Limitations

While the value of the exact posterior score is demonstrated through the success of the DPS-w method for colorization, inpainting and super resolution tasks, the benchmark methods are successfully applied to a much broader set of inverse problems. Incorporation of the tractable score into a more generalized framework for training-free guidance is the natural next step. Also, it was not explored whether the approach can be extended to guidance of latent diffusion models [35].
The success of diffusion models has driven interest in performing conditional sampling via training-free guidance of the denoising process to solve image restoration and other inverse problems. A popular class of methods, based on Diffusion Posterior Sampling (DPS), attempts to approximate the intractable posterior score function directly. In this work, we present a novel expression for the exact posterior score for purely denoising tasks that is tractable in terms of the unconditional score function. We leverage this result to analyze the time-dependent error in the DPS score for denoising tasks and compute step sizes on the fly to minimize the error at each time step. We demonstrate that these step sizes are transferable to related inverse problems such as colorization, random inpainting, and super resolution. Despite its simplicity, this approach is competitive with state-of-the-art techniques and enables sampling with fewer time steps than DPS.
# 1 INTRODUCTION When photographing an object, we often want both a full overview and fine-grained material details. However, everyday cameras have fixed resolution, forcing a trade-off between coverage and detail. For example, product images typically show a low-detail full view alongside isolated close-ups, which limits the ability to examine the entire product in high resolution. Gigapixel images allow one to navigate the entire image in high detail, but creating such images typically requires specialized systems that capture and stitch together thousands of high-resolution photos [Kopf et al. 2007]. While such exhaustive capture is necessary for complex scenes like city skylines, where each subregion contains unique structures and details, we argue that a single object is more self-similar and characterized by a limited set of material patterns. A small number of close-up snapshots often suffices to cover the range of textures. From these sparse examples, it is possible to infer the missing fine details across the entire object in a plausible and truthful manner. We introduce UltraZoom, a system that enables gigapixel-scale imaging of objects using only a phone camera and a casual, handheld capture process. Given a full image and one or more example close-ups, UltraZoom upscales the full image to match the scale and material details of the close-ups. The result is a unified, high-resolution image where local details are photorealistic and faithfully reflect the object’s actual appearance. To achieve this, we adapt a pretrained generative model by constructing a per-instance paired dataset from the close-ups, and learning how to map low-resolution patches to their high-resolution counterparts. During inference, we apply the model in a sliding-window fashion across the full image. A key challenge lies in constructing these training pairs, which requires registering close-ups within the full image despite significant scale differences and repetitive structures. 
We introduce a simple yet robust method for achieving such registration on casual captures of arbitrary materials. Together, these components form a system that generates faithful, photorealistic gigapixel images from minimal input, enabling seamless pan-and-zoom exploration of an object in high detail. Full-resolution results can be viewed interactively at ultra-zoom.github.io. Our contributions are: • A system for generating gigapixel-scale zoomable imagery using only sparse, casual handheld captures. • A simple, robust method for registering close-ups to a global image under unconstrained settings. # 2 RELATED WORK # 2.1 Reference-Based Super-Resolution In Reference-Based Super-Resolution (RefSR), external high-resolution (HR) images are provided to guide the enhancement of a low-resolution (LR) input. RefSR encompasses a range of problem settings, each involving different types or configurations of reference images. Early work in RefSR downsamples generic high-resolution images to construct a database of representative HR–LR patch pairs. To improve generalization, small patches are extracted and often preprocessed to retain only structural or frequency components (e.g., by removing color). At test time, LR patches from the input are matched to this database using retrieval or sparse-coding techniques to construct the HR output [Chang et al. 2004; Freeman et al. 2002; Yang et al. 2010]. More recent works leverage deep neural networks to incorporate reference images from semantically or structurally similar scenes. These methods focus on adaptively transferring high-resolution details from the reference to the corresponding regions of the low-resolution input. Many employ attention mechanisms to match and fuse features of the reference and input, often guided by hierarchical structure or semantic information [Lu et al. 2021; Pesavento et al. 2021; Yang et al. 2020; Zhang et al. 2019b]. 
Other works focus on settings where per-instance reference images are available, either of the same identity captured at different times or multiple captures of the same scene. For the former, [Varanka et al. 2024] proposes a personalized face SR method using cross-attention to inject person-specific features from reference images. While full-face images contain consistent structure that suits attention-based transfer, our setting operates on highly local texture patches due to memory constraints, where the lack of structural regularity makes detail transfer more ambiguous. The latter category involves light field images [Zheng et al. 2018], or a dual-camera setting using wide-angle and telephoto image pairs [Cai et al. 2019; Chen et al. 2019; Wang et al. 2021; Xu et al. 2023; Zhang et al. 2019a]. These methods typically assume closely aligned image centers and small scale differences, enabling precise alignment between the LR input and HR reference using keypoints [Lowe 2004], dense optical flow [Fischer et al. 2015], warping neural networks, or loss designs that tolerate slight misalignment. The aligned pairs are then used for supervised training. In contrast, our setup involves handheld captures with regular and macro lenses, where larger distortions and scale gaps make accurate alignment challenging. Due to these challenges, we instead construct pixel-aligned LR–HR patch pairs from the close-ups alone and train a model to learn direct local LR-to-HR mappings. The main challenge lies in degrading the HR patches to match their corresponding regions in the LR input (obtained via our coarse registration method), enabling the model to generalize effectively at test time. # 2.2 Texture Synthesis Texture synthesis aims to generate new pixels that match the appearance and statistical properties of an exemplar texture. Early non-parametric methods [Efros and Leung 1999; Efros and Freeman 2001; Kwatra et al. 
2003] achieve this by copying best-matched patches from a reference image into target regions, ensuring local coherency but often struggling to preserve global structural consistency. Recent advances in deep learning have significantly expanded the capabilities of texture synthesis. CNN-based methods [Bergmann et al. 2017; Gatys et al. 2015; Ulyanov et al. 2016], GANs [Shaham et al. 2019; Xian et al. 2018; Zhu et al. 2018], and diffusion-based approaches [Wang et al. 2024a] enable the generation of diverse and globally consistent textures. Our work also synthesizes new texture pixels from exemplars, but under the constraint of a low-resolution input that defines the global image structure, requiring the synthesized texture details to align with the underlying content. # 2.3 Extreme-Scale Super-Resolution Extreme-scale super-resolution aims to reconstruct high-resolution outputs from significantly lower-resolution inputs, often beyond the typical $2{-}4\times$ range. The large scale gap poses a major challenge: synthesizing high-frequency details not present in the input while maintaining consistency with the overall structure of the LR image. One common approach is to progressively build up to the target scale with multi-scale designs [Shang et al. 2020] or cascaded models [Ho et al. 2021]. However, these methods operate at discrete scales and struggle to generalize beyond their training range. Cascading also introduces cumulative error and increases inference time. Arbitrary-scale super-resolution addresses this by modeling the image in continuous space. Some methods condition explicitly on the desired scale [Chai et al. 2022; Hu et al. 2019], while others reconstruct an implicit representation that allows querying at arbitrary coordinates [Becker et al. 2025; Chen et al. 2021; Peng et al. 2025; Xu et al. 2022]. 
While these approaches offer flexibility, they often produce overly smooth results at extreme scales due to limited model capacity and difficulty recovering high-frequency details far beyond the training distribution. Another line of work enables large-scale SR through per-instance or per-domain priors. Per-instance methods [Ruiz et al. 2023; Varanka et al. 2024] adapt the model to each input using fine-tuning or reference-guided attention to inject identity-specific features. Per-domain methods [Sharma et al. 2024; Zhang et al. 2020] leverage existing gigapixel-scale images (e.g., paintings, satellite imagery) to construct paired LR–HR data for supervised training. Our work is closely related but differs in that it does not assume access to preexisting gigapixel images; instead, we construct the dataset from a regular-resolution image and a sparse set of close-up captures. Fig. 2. Method Overview. (a) Dataset Construction: For each scene, we capture a close-up, a full-view image, and a bridging video that connects the two views. We track the close-up region across the video to register it within the full image, estimating the relative scale $s$ and the color statistics $H$ of the matched region. These are used to construct a dataset of paired high-resolution and degraded image patches, designed to align with inference-time test patches. (b) Per-Instance Fine-tuning: Next, we fine-tune a pretrained generative model [Labs 2024] on the per-instance dataset with DreamBooth [Ruiz et al. 2023] and LoRA [Hu et al. 2021], allowing the model to adapt specifically to the captured object’s appearance. (c) Gigapixel Inference: At inference time, due to GPU memory constraints, we divide the full-view image into sliding windows of patches, apply super-resolution to each patch, and blend the overlapping regions in both latent and pixel space to produce a coherent gigapixel result. 
# 3 METHOD Given one or more example close-ups of an object and a regular-resolution image showing the entire object, our goal is to upscale the full-view image to the same scale as the close-ups, producing a gigapixel-resolution output with details faithful to the exemplars and structure aligned with the original full view. Our system (illustrated in Fig. 2) consists of three stages: dataset construction, per-instance fine-tuning, and gigapixel inference. # 3.1 Dataset Construction Capture Process. We use an iPhone with macro-lens mode for data collection. Given an object, we capture a minimal collection of $N$ close-up images $C = \{ C_1, \ldots, C_N \}$ to cover fine surface details, followed by a full-shot image $F$ that serves as the low-resolution input. To guide spatial registration, we also capture a sequence of short bridging videos $\mathcal{V} = [ V_1, \ldots, V_N ]$, where each $V_i$ connects $C_i$ to $C_{i+1}$, and the final video $V_N$ connects $C_N$ to $F$. Camera orientation is kept consistent across all captures. In Fig. 2a, we show a simplified case where only one close-up is captured. Motivation for Image Registration. Given the captured images, our goal is to construct a paired dataset for supervised per-instance fine-tuning. However, the close-ups and full-view images are taken with different lenses and from different distances, resulting in large variations in perspective, color and noise levels, along with disocclusions and image center misalignment due to handheld capture. These factors make it extremely difficult to establish exact pixel-to-pixel correspondences between close-ups and their corresponding low-resolution pixels in the full image. 
To address this, we construct pixel-aligned pairs using only the close-ups and apply degradation to simulate the appearance of the full-view image, enabling the model to generalize effectively during inference when applied to upscale the full image. This degradation process requires: (1) estimating the relative scale between close-ups and the full image for proper downscaling, and (2) identifying the corresponding region in the full image to match color statistics and other degradation characteristics. Both steps require coarse image registration, which we describe next. Fig. 3. Close-up-to-full registration. Given a close-up, the full image, and a connecting video, we first split the video into shorter segments to improve point tracking, as the field of view changes rapidly. At the start of each segment, we initialize a grid of points and track them across frames within the segment. A 2D similarity transform is then estimated for each segment using RANSAC over the tracked points. These transforms are sequentially chained to produce the final transform that maps the close-up to the full image. The registration result is visualized in the green box (right). Image Registration. Direct registration using SIFT or learning-based methods is challenging due to the highly repetitive nature of textures and significant scale differences between the close-ups $C$ and the full view $F$. Instead, we leverage the bridging videos $\mathcal{V}$ and a state-of-the-art point tracking method [Karaev et al. 2024] to track points across the full sequence and accumulate the transforms for final registration. Specifically, for each video $V_i$, we track a grid of 2D points and estimate a similarity transform $T_{V_i}$ between the first and the last frame using RANSAC [Fischler and Bolles 1981]. 
By chaining these transforms across $\mathcal{V}$, we obtain the cumulative transform from the close-up frame $C_i$ to the full image $F$: $$ T_{C_i F} = T_{V_N} \cdot ( T_{V_{N-1}} \cdot ( \cdots ( T_{V_{i+1}} \cdot T_{V_i} ) \cdots ) ) $$ Lastly, we extract the scale $s$ from the upper-left $2 \times 2$ submatrix, which corresponds to the rotation and scaling component of the final transform $T_{C_i F}$: $$ s = \sqrt{ \det( T_{C_i F}[:2, :2] ) } $$ For $V_N$, the video dollying out from the last close-up $C_N$ to the full image $F$, the content in view changes quickly, so we divide the video into short segments $V_N = [ S_1, \ldots, S_M ]$ and perform point tracking within each segment, allowing points to be reinitialized at the start of each segment for improved robustness. The per-segment transforms are chained similarly to the per-video transforms. We visualize this process in Fig. 3. Degradation Alignment. With registration complete, we minimize the train-test domain gap by degrading the high-detail close-ups to match the appearance of their corresponding regions in the lower-detail full image. We first address color inconsistency caused by varying white balance across captures. For each close-up $C_i$, we use the estimated transform $T_{C_i F}$ to localize its corresponding region in the full image $F$, denoted $F_{C_i}$. We then extract its color statistics $H(F_{C_i})$ and apply color matching [van der Walt et al. 2014] to $C_i$ to obtain a color-corrected version $\tilde{C}_i$. Next, we simulate degradation to match the appearance of the full image. 
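The transform chaining and scale extraction above can be sketched as follows. The function names are illustrative, and the $3 \times 3$ similarity matrices are assumed to come from RANSAC fits over tracked points:

```python
import numpy as np

def chain_transforms(transforms):
    """Chain 3x3 similarity transforms, applied in capture order,
    into one cumulative transform (cf. the equation for T_{C_i F}).

    Later transforms compose on the left, so the product maps
    close-up coordinates all the way into the full image.
    """
    T = np.eye(3)
    for T_seg in transforms:
        T = T_seg @ T
    return T

def extract_scale(T):
    """Recover the relative scale s from the upper-left 2x2 block,
    the rotation-and-scaling component: s = sqrt(det(T[:2, :2]))."""
    return float(np.sqrt(np.linalg.det(T[:2, :2])))
```

Because the determinant of a product is the product of determinants, per-segment scales multiply through the chain, which makes the extracted scale robust to the number of segments.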
We first apply bicubic downsampling to $\tilde{C}_i$ using the estimated scale factor $s$, followed by an additional $2\times$ downsampling to mimic optical blur from distant capture. Since both the close-ups and the full image are JPEG-compressed on a standard iPhone, with artifacts appearing at different scales, we apply JPEG compression (quality $= 75$) when $F_{C_i}$ exhibits significantly more compression artifacts than the downscaled $\tilde{C}_i$. The fully degraded close-up, incorporating all degradation steps, is denoted as $D(\tilde{C}_i)$. # 3.2 Per-Instance Fine-tuning To minimize prior-based hallucination and ensure faithful reconstruction of object details, we fine-tune a pretrained generative model on the instance-specific dataset we construct from the close-ups. To fully leverage the pretrained model’s high-fidelity generation capabilities, we freeze its weights and optimize only low-rank adaptations of the weight matrices, following [Hu et al. 2021]. As shown in Fig. 2b, during training, we sample random patches $\mathbf{c} \sim \mathcal{P}(\tilde{C})$ and their corresponding degraded versions $\mathbf{d} \sim \mathcal{P}(D(\tilde{C}))$, where $\tilde{C} \in \{ \tilde{C}_1, \ldots, \tilde{C}_N \}$ is drawn uniformly from all color-corrected close-ups, and $\mathcal{P}$ denotes a random patch extraction operator. 
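The degradation pipeline just described might be sketched as below. Here `match_color` is a simplified per-channel mean/std transfer standing in for the histogram matching of scikit-image used in the paper, and all names and defaults are illustrative:

```python
import io
import numpy as np
from PIL import Image

def match_color(src, ref):
    """Per-channel mean/std color transfer: a simple stand-in for the
    histogram matching [van der Walt et al. 2014] used in the paper."""
    s, r = src.astype(np.float64), ref.astype(np.float64)
    out = (s - s.mean((0, 1))) / (s.std((0, 1)) + 1e-8)
    out = out * r.std((0, 1)) + r.mean((0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)

def degrade(close_up, scale, jpeg_quality=75, apply_jpeg=True):
    """Degrade a color-corrected close-up (H, W, 3 uint8) to mimic the
    full image: bicubic downscale by the estimated scale s, an extra
    2x downscale for optical blur, then optional JPEG compression."""
    img = Image.fromarray(close_up)
    w, h = img.size
    factor = scale * 2.0  # estimated scale plus the extra 2x
    img = img.resize((max(1, round(w / factor)),
                      max(1, round(h / factor))), Image.BICUBIC)
    if apply_jpeg:
        # Round-trip through an in-memory JPEG to inject artifacts.
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=jpeg_quality)
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    return np.asarray(img)
```

In the paper, the JPEG step is applied conditionally, only when the matched full-image region shows significantly more compression artifacts than the downscaled close-up; the flag above leaves that decision to the caller.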
The training objective is a flow matching loss: $$ \mathcal{L}_{\mathrm{FM}} = \mathbb{E}_{\mathbf{c}_t, t, \mathbf{d}} \left[ \| \hat{\mathbf{u}}_{\theta}(\mathbf{c}_t, t, \mathbf{d}, \mathbf{y}) - \mathbf{u}(\mathbf{c}_t, t) \|_2^2 \right] $$ Here, $\mathbf{c}$ is the clean, high-detail patch from a color-corrected close-up, $\mathbf{d}$ is the corresponding degraded, lower-detail patch, and $\mathbf{c}_t$ is a noisy version of $\mathbf{c}$ at time step $t$. $\mathbf{u}$ is the target velocity from the forward diffusion process, and $\hat{\mathbf{u}}_{\theta}$ is the model’s predicted velocity. $\mathbf{y}$ denotes the fixed per-instance text prompt used for conditioning. # 3.3 Gigapixel Inference Due to the gigapixel output size and GPU memory constraints, we perform inference on sliding local windows (see Fig. 2c). Overlapping latent regions are averaged following [Bar-Tal et al. 2023], and overlapping RGB values are blended when stitching decoded pixels. However, noticeable boundary artifacts can still emerge (also observed in [Wang et al. 2024b]) due to patch discrepancies and repeated boundaries across steps, which cause neighboring patches to evolve inconsistently and diverge over time. To mitigate this, we introduce stride variation across denoising steps by gradually increasing the stride. This helps reduce artifact accumulation along fixed seams and lowers inference time by reducing the total number of sliding windows. See implementation details in the supplemental. # 4 RESULTS We evaluate our method on 15 examples of everyday objects captured using an iPhone 16, with scale factors ranging from 6x to 30x. Among them, 2 examples contain multi-material objects and include multiple close-up captures to cover different surface materials. 
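A minimal sketch of the flow-matching objective, assuming the rectified-flow convention $\mathbf{c}_t = (1-t)\,\mathbf{c} + t\,\boldsymbol{\epsilon}$ with target velocity $\mathbf{u} = \boldsymbol{\epsilon} - \mathbf{c}$ (this convention is our assumption; the paper does not state one). The conditional velocity predictor is stood in by a plain callable, and the text prompt $\mathbf{y}$ is omitted:

```python
import numpy as np

def flow_matching_loss(model, c, d, rng):
    """Single-sample Monte-Carlo estimate of the flow-matching loss.

    c: clean high-detail patch, d: degraded conditioning patch.
    Assumes rectified flow: c_t = (1 - t) * c + t * eps, whose target
    velocity is u(c_t, t) = eps - c. `model(c_t, t, d)` stands in for
    the conditional velocity predictor u_theta.
    """
    t = rng.uniform()                       # time step in (0, 1)
    eps = rng.standard_normal(c.shape)      # Gaussian noise
    c_t = (1.0 - t) * c + t * eps           # noisy interpolant
    u_target = eps - c                      # rectified-flow velocity
    u_pred = model(c_t, t, d)
    return float(np.mean((u_pred - u_target) ** 2))
```

In practice this loss would be averaged over batches of random patches and time steps; the single-sample form above just makes the target construction explicit.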
The following sections elaborate on baseline methods, quantitative and qualitative evaluations, ablations, and details of computational cost. # 4.1 Baseline Methods We compare with three baseline methods: (1) Thera [Becker et al. 2025], (2) ContinuousSR [Peng et al. 2025], and (3) ZeDuSR [Xu et al. 2023]. Since Thera and ContinuousSR do not estimate scale and ZeDuSR relies on SIFT-based image registration (which fails on our data), we supply all methods with our estimated scale factors and, for ZeDuSR, our registration. Note that boundary blending is not implemented for any baseline, which may lead to visible seams in their outputs. Thera and ContinuousSR are arbitrary-scale super-resolution methods based on implicit neural representations, allowing direct application of floating-point scale factors. Due to GPU memory constraints and their training resolution, we divide the 4K full-view image into overlapping $256 \times 256$ patches and stitch the outputs. ZeDuSR is a dual-camera super-resolution method that performs per-instance training on wide-angle and telephoto image pairs. We adapt it to our setting by using the full image and close-up as the input pair. Since the implementation requires a power-of-two scale factor, we round up our estimated scale and downsample the input image to match the resulting scale difference. We replace ZeDuSR’s SIFT-based registration with our own and keep the rest of the pipeline unchanged. During inference, we use overlapping $1024 \times 1024$ patches, roughly following the input shape used in their tiling inference code. # 4.2 Quantitative Comparison Metrics. We quantitatively evaluate performance using two types of metrics. (1) Mean Absolute Error (LR-MAE) between the low-resolution input and the bicubic-downsampled high-resolution output, which measures super-resolution consistency. (2) Patch-FID and KID. 
Since the close-ups and corresponding regions in the generated high-resolution output are not pixel-aligned, we compute Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) on randomly sampled patches from the real and generated images, assuming they follow similar patch-level distributions. We sample approximately 3000 patches of size $299 \times 299$, keeping the crop positions fixed across methods for fair comparison. Table 1. Quantitative Comparison. We report low-resolution mean absolute error (LR-MAE) to measure consistency with the input, and Patch-FID/KID to evaluate perceptual similarity to the captured close-ups. Our method achieves the best Patch-FID and KID scores, indicating superior visual quality and texture fidelity, while maintaining competitive LR consistency. We also include user study results for human evaluation of visual quality and consistency. Our method ranks highest in both dimensions, being selected as the best in quality $96.08\%$ of the time and in consistency $79.41\%$ of the time. User Study. We also conduct a user study: for each example in our evaluation set, participants are shown a $1024 \times 1024$ patch from the real captured close-up alongside a nearby patch from each method’s output, with patch position fixed across methods. They are asked to select the best result along two dimensions: visual quality and consistency with the exemplar in terms of texture detail. For the examples with multiple close-ups, each close-up is treated as a separate instance, expanding 15 examples into 18. We collect responses from 17 participants, each evaluating all 18 examples across both dimensions, resulting in 612 total responses. Discussion. As shown in Tab. 1, the two per-instance methods, ZeDuSR and ours, have higher low-resolution mean absolute error (LR-MAE): 0.051 and 0.040, compared to 0.038 and 0.007 for ContinuousSR and Thera. 
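The fixed-position patch sampling used for Patch-FID/KID can be sketched with a seeded generator, so that crop positions stay identical across methods (function name and seed are illustrative):

```python
import numpy as np

def sample_patch_coords(h, w, patch=299, n=3000, seed=0):
    """Sample n top-left coordinates for patch x patch crops.

    A fixed seed keeps crop positions identical across methods, as
    required by the Patch-FID/KID protocol (approx. 3000 patches of
    299 x 299 sampled from images of height h and width w).
    """
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, h - patch + 1, size=n)  # valid top rows
    xs = rng.integers(0, w - patch + 1, size=n)  # valid left cols
    return list(zip(ys.tolist(), xs.tolist()))
```

Each method's output is then cropped at exactly these coordinates before the Inception features are computed, so any score difference reflects content rather than sampling variance.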
This is expected, as both methods generate more high-frequency details, which may deviate slightly from the low-resolution input due to the lack of explicit consistency constraints during training. In terms of generation quality, our method significantly outperforms all baselines in Patch-FID and KID (134.986 vs. ${\sim}300$), demonstrating strong alignment with the quality and texture details of the captured close-ups. The user study further validates these results: our method is overwhelmingly preferred in visual quality ($96.08\%$) and achieves strong performance in consistency ($79.41\%$), highlighting its effectiveness while indicating potential for further improvement in suppressing subtle hallucinations and maintaining detail consistency with the exemplar. # 4.3 Qualitative Comparison We present qualitative comparisons across a range of scales in Fig. 6. All baselines struggle to produce high-quality, detail-consistent results. Thera and ContinuousSR are trained on $2{-}4\times$ discrete and $4{-}8\times$ continuous scales, respectively. Despite the shrub example having a $6.93\times$ scale, which is within ContinuousSR’s range, it still fails to generate proper details, likely because the model was trained on natural images with standard fields of view, whereas our inputs are narrow-FoV local patches, which fall outside its training distribution. ZeDuSR also fails despite per-instance training. This may stem from its LR–HR alignment assumption: it warps the low-resolution, wide-FoV image to be pixel-aligned with the high-resolution, narrow-FoV image. However, our data cannot be easily aligned, resulting in training on misaligned LR–HR pairs, which may hinder learning. Additionally, ZeDuSR trains a lightweight model from scratch, whereas we fine-tune a high-capacity pretrained generative model. 
While their approach is well-suited for dual-camera setups with modest scale differences and minimal misalignment, our method is better equipped for recovering fine-grained details in extreme close-ups and handling casually captured inputs. Fig. 4. Qualitative Comparison. Rows are ordered from low to high scale. For each example, we compare $1024 \times 1024$ patches across methods. From left to right: a $1024 \times 1024$ crop from the captured close-up (reference), full image with patch location (green box), low-resolution input patch bicubic-upsampled to $1024 \times 1024$, three baselines, and our result. While the output patch may not be perfectly aligned with the reference, it is sampled near the reference, and the same crop is used across all methods for fair comparison. Qualitatively, our method achieves the best visual fidelity and consistency with the exemplar. # 4.4 Ablations We visualize the effect of each component of our method in Fig. 5 with two examples. The most naive baseline applies the pretrained model at its original scale (4×), which is equivalent to standard single-image super-resolution. Without leveraging the close-up reference, the model may produce high-quality outputs but hallucinates details. Fig. 5 columns: Close-up Input (bicubic, reference), ➀ Pretrained, ➁ + Correct Scale, ➂ + Fine-tuning, ➃ + Degradation Align, ➄ + Stride Variation (full method); example scale factors 6.355× and 16.618×. When supplied with the correct scale, the pretrained model continues to hallucinate and struggles to generalize to scales beyond its training scale, introducing artifacts (pineapple) or failing to enhance the input meaningfully (sweater). 
Adding per-instance fine-tuning without proper degradation (i.e., without color matching, blurring, or JPEG compression) begins to introduce instance-specific details, but these details fail to integrate seamlessly with the structure of the low-resolution input, as the model is not trained to handle the appearance of the test-time degraded input. Including degradation alignment resolves this gap, but without stride variation, visible seams remain at inference window boundaries (faint lines extending from the marked green arrows). The final column shows our full method, which effectively addresses all these issues and produces seamless, high-fidelity results. # 4.5 Computational Cost All experiments are conducted on a single A100 GPU. Each training epoch takes approximately 22 minutes, while inference time depends on the output resolution. For example, generating a $18672 \times 18672$ output with 28 denoising steps and stride variation results in 21,366 forward passes in total (763.07 forward runs per step). At 0.64 seconds per run, the total inference time is approximately 3.82 hours. # 5 DISCUSSION AND LIMITATIONS We present a system for generating high-quality, faithful gigapixel images from a regular-resolution image and a set of close-ups. While our results demonstrate strong potential, several limitations remain that point to valuable directions for future work: Slow Inference. Our current inference pipeline involves sliding-window generation with overlapping patches, which becomes computationally expensive at gigapixel scales. One promising direction is retrieval-based inference: for frequently occurring or similar patches, future work could cache or retrieve previously generated outputs instead of re-running the model, trading off some consistency for improved efficiency. Per-Instance Fine-Tuning. To maximize generation quality, we perform per-instance fine-tuning by adapting a small set of model parameters to each object. 
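The reported inference cost can be sanity-checked with simple arithmetic: 21,366 forward passes over 28 denoising steps at 0.64 s per pass gives roughly 763 runs per step and about 3.8 hours total, consistent with the figures above up to rounding:

```python
def inference_cost(total_passes, steps, sec_per_pass):
    """Back-of-the-envelope inference cost from the reported numbers.

    Returns the average forward runs per denoising step and the total
    wall-clock time in hours.
    """
    per_step = total_passes / steps
    hours = total_passes * sec_per_pass / 3600.0
    return per_step, hours
```

For the $18672 \times 18672$ example, `inference_cost(21366, 28, 0.64)` reproduces the 763.07 runs per step and a total of roughly 3.8 hours.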
While this yields strong results, it requires retraining for every new object, which is inefficient. An alternative would be to develop a general model that directly takes in the LR image and a few reference patches, and propagates high-frequency details across the object without fine-tuning. Degradation Alignment. Our method relies on degrading the close-up patches to mimic the characteristics of patches from the full image, due to the difficulty of registering macro and regular photos. In this work, we manually choose the degradation operations (e.g., blur, downsampling, JPEG artifacts) to simulate the LR appearance. In future work, it would be valuable to automate this process by optimizing degradation parameters to best match the patch distributions between HR and LR views, potentially using a learned degradation model or domain adaptation framework. Lack of Global Context. Currently, the model operates on local image crops without access to global information, such as the position of the patch within the full image or the broader structural context of the object. This limits the model’s ability to reason about object-level consistency or to apply the correct high-resolution details in ambiguous regions. Injecting global context through positional encoding, spatial layout features, or hierarchical features could help the model make more coherent and informed predictions, particularly when synthesizing large-scale images with long-range dependencies. # REFERENCES Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tal Dekel. 2023. MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation. arXiv:2302.08113 [cs.CV] https://arxiv.org/abs/2302.08113 Alexander Becker, Rodrigo Caye Daudt, Dominik Narnhofer, Torben Peters, Nando Metzger, Jan Dirk Wegner, and Konrad Schindler. 2025. Thera: Aliasing-Free Arbitrary-Scale Super-Resolution with Neural Heat Fields. arXiv preprint arXiv:2311.17643 (2025). Urs Bergmann, Nikolay Jetchev, and Roland Vollgraf. 2017. Learning Texture Manifolds with the Periodic Spatial GAN. 
arXiv:1705.06566 [cs.CV] https://arxiv.org/abs/1705. 06566 Jianrui Cai, Hui Zeng, Hongwei Yong, Zisheng Cao, and Lei Zhang. 2019. Toward real-world single image super-resolution: A new benchmark and a new model. In Proceedings of the IEEE International Conference on Computer Vision. Lucy Chai, Michael Gharbi, Eli Shechtman, Phillip Isola, and Richard Zhang. 2022. Anyresolution training for high-resolution image synthesis.. In European Conference on Computer Vision. Hong Chang, Dit-Yan Yeung, and Yimin Xiong. 2004. Super-resolution through neighbor embedding. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., Vol. 1. I–I. https://doi.org/10.1109/ CVPR.2004.1315043 Chang Chen, Zhiwei Xiong, Xinmei Tian, Zheng-Jun Zha, and Feng Wu. 2019. Camera Lens Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Yinbo Chen, Sifei Liu, and Xiaolong Wang. 2021. Learning Continuous Image Representation With Local Implicit Image Function. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 8628–8638. A.A. Efros and T.K. Leung. 1999. Texture synthesis by non-parametric sampling. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Vol. 2. 1033–1038 vol.2. https://doi.org/10.1109/ICCV.1999.790383 Alexei A. Efros and William T. Freeman. 2001. Image Quilting for Texture Synthesis and Transfer. Proceedings of SIGGRAPH 2001 (August 2001), 341–346. Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. 2015. FlowNet: Learning Optical Flow with Convolutional Networks. arXiv:1504.06852 [cs.CV] https://arxiv.org/abs/1504.06852 M. Fischler and R. Bolles. 1981. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. 
ACM 24, 6 (1981), 381–395. /brokenurl#http://publication.wilsonwong.me/load. php?id=233282275 William T. Freeman, Thouis R. Jones, and Egon C. Pasztor. 2002. Example-Based SuperResolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Mitsubishi Electric Research Labs, Cambridge, MA. Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2015. Texture Synthesis Using Convolutional Neural Networks. arXiv:1505.07376 [cs.CV] https://arxiv.org/abs/ 1505.07376 Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. 2021. Cascaded Diffusion Models for High Fidelity Image Generation. arXiv preprint arXiv:2106.15282 (2021). Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-Rank Adaptation of Large Language Models. arXiv:2106.09685 [cs.CL] https://arxiv.org/abs/2106.09685 Xuecai Hu, Haoyuan Mu, Xiangyu Zhang, Zilei Wang, Tieniu Tan, and Jian Sun. 2019. Meta-SR: A Magnification-Arbitrary Network for Super-Resolution. arXiv:1903.00875 [cs.CV] https://arxiv.org/abs/1903.00875 Nikita Karaev, Iurii Makarov, Jianyuan Wang, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. 2024. CoTracker3: Simpler and Better Point Tracking by Pseudo-Labelling Real Videos. arxiv. Johannes Kopf, Matt Uyttendaele, Oliver Deussen, and Michael F. Cohen. 2007. Capturing and Viewing Gigapixel Images. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2007) 26, 3 (2007), to appear. Vivek Kwatra, Arno Schödl, Irfan Essa, Greg Turk, and Aaron Bobick. 2003. Graphcut Textures: Image and Video Synthesis Using Graph Cuts. ACM Transactions on Graphics, SIGGRAPH 2003 22, 3 (July 2003), 277–286. Black Forest Labs. 2024. FLUX. https://github.com/black-forest-labs/flux. David Lowe. 2004. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision 60 (11 2004), 91–. 
https://doi.org/10.1023/B: VISI.0000029664.99615.94 Liying Lu, Wenbo Li, Xin Tao, Jiangbo Lu, and Jiaya Jia. 2021. MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution. arXiv:2106.02299 [cs.CV] https://arxiv.org/abs/2106.02299 Long Peng, Anran Wu, Wenbo Li, Peizhe Xia, Xueyuan Dai, Xinjie Zhang, Xin Di, Haoze Sun, Renjing Pei, Yang Wang, et al. 2025. Pixel to Gaussian: Ultra-Fast Continuous Super-Resolution with 2D Gaussian Modeling. arXiv preprint arXiv:2503.06617 (2025). Marco Pesavento, Marco Volino, and Adrian Hilton. 2021. Attention-based MultiReference Learning for Image Super-Resolution. arXiv:2108.13697 [cs.CV] https: //arxiv.org/abs/2108.13697 Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. 2023. DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation. arXiv:2208.12242 [cs.CV] https://arxiv.org/abs/2208. 12242 Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli. 2019. SinGAN: Learning a Generative Model from a Single Natural Image. arXiv:1905.01164 [cs.CV] https: //arxiv.org/abs/1905.01164 Taizhang Shang, Qiuju Dai, Shengchen Zhu, Tong Yang, and Yandong Guo. 2020. Perceptual Extreme Super Resolution Network with Receptive Field Block. arXiv:2005.12597 [eess.IV] https://arxiv.org/abs/2005.12597 Ansh Sharma, Albert Xiao, Praneet Rathi, Rohit Kundu, Albert Zhai, Yuan Shen, and Shenlong Wang. 2024. EarthGen: Generating the World from Top-Down Views. arXiv:2409.01491 [cs.CV] https://arxiv.org/abs/2409.01491 Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor Lempitsky. 2016. Texture Networks: Feed-forward Synthesis of Textures and Stylized Images. arXiv:1603.03417 [cs.CV] https://arxiv.org/abs/1603.03417 Stéfan van der Walt, Johannes L. Schönberger, Juan Nunez-Iglesias, François Boulogne, Joshua D. Warner, Neil Yager, Emmanuelle Gouillart, Tony Yu, and the scikit-image contributors. 2014. scikit-image: image processing in Python. 
PeerJ 2 (June 2014), e453. https://doi.org/10.7717/peerj.453 Tuomas Varanka, Tapani Toivonen, Soumya Tripathy, Guoying Zhao, and Erman Acar. 2024. PFStorer: Personalized Face Restoration and Super-Resolution. arXiv:2403.08436 [cs.CV] https://arxiv.org/abs/2403.08436 Jianyi Wang, Zongsheng Yue, Shangchen Zhou, Kelvin C.K. Chan, and Chen Change Loy. 2024b. Exploiting Diffusion Prior for Real-World Image Super-Resolution. (2024). Tengfei Wang, Jiaxin Xie, Wenxiu Sun, Qiong Yan, and Qifeng Chen. 2021. Dual-Camera Super-Resolution With Aligned Attention Modules. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2001–2010. Yifan Wang, Aleksander Holynski, Brian L. Curless, and Steven M. Seitz. 2024a. Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis. arXiv:2405.08210 [cs.CV] https://arxiv.org/abs/2405.08210 Wenqi Xian, Patsorn Sangkloy, Varun Agrawal, Amit Raj, Jingwan Lu, Chen Fang, Fisher Yu, and James Hays. 2018. TextureGAN: Controlling Deep Image Synthesis with Texture Patches. arXiv:1706.02823 [cs.CV] https://arxiv.org/abs/1706.02823 Ruikang Xu, Mingde Yao, and Zhiwei Xiong. 2023. Zero-Shot Dual-Lens SuperResolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Xingqian Xu, Zhangyang Wang, and Humphrey Shi. 2022. UltraSR: Spatial Encoding is a Missing Key for Implicit Image Function-based Arbitrary-Scale Super-Resolution. arXiv:2103.12716 [cs.CV] https://arxiv.org/abs/2103.12716 Fuzhi Yang, Huan Yang, Jianlong Fu, Hongtao Lu, and Baining Guo. 2020. Learning Texture Transformer Network for Image Super-Resolution. arXiv:2006.04139 [cs.CV] https://arxiv.org/abs/2006.04139 Jianchao Yang, John Wright, Thomas S. Huang, and Yi Ma. 2010. Image Super-Resolution Via Sparse Representation. IEEE Transactions on Image Processing 19, 11 (2010), 2861– 2873. https://doi.org/10.1109/TIP.2010.2050625 Xuaner Zhang, Qifeng Chen, Ren Ng, and Vladlen Koltun. 2019a. 
Zoom to Learn, Learn to Zoom. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Yulun Zhang, Zhifei Zhang, Stephen DiVerdi, Zhaowen Wang, Jose Echevarria, and Yun Fu. 2020. Texture Hallucination for Large-Factor Painting Super-Resolution. arXiv:1912.00515 [eess.IV] https://arxiv.org/abs/1912.00515 Zhifei Zhang, Zhaowen Wang, Zhe Lin, and Hairong Qi. 2019b. Image Super-Resolution by Neural Texture Transfer. arXiv:1903.00834 [cs.CV] https://arxiv.org/abs/1903. 00834 Haitian Zheng, Mengqi Ji, Haoqian Wang, Yebin Liu, and Lu Fang. 2018. CrossNet: An End-to-end Reference-based Super Resolution Network using Cross-scale Warping. arXiv:1807.10547 [cs.CV] https://arxiv.org/abs/1807.10547 Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, and Eli Shechtman. 2018. Toward Multimodal Image-to-Image Translation. arXiv:1711.11586 [cs.CV] https://arxiv.org/abs/1711.11586 Fig. 6. Additional Results. We present further qualitative comparisons between baseline methods and ours on additional captured examples. These results continue to demonstrate the strong visual quality and exemplar consistency achieved by our approach. Visit the supplemental video or the demo webpage to see the full-resolution results in an interactive interface.
We present UltraZoom, a system for generating gigapixel-resolution images of objects from casually captured inputs, such as handheld phone photos. Given a full-shot image (global, low-detail) and one or more close-ups (local, high-detail), UltraZoom upscales the full image to match the fine detail and scale of the close-up examples. To achieve this, we construct a per-instance paired dataset from the close-ups and adapt a pretrained generative model to learn object-specific low-to-high resolution mappings. At inference, we apply the model in a sliding-window fashion over the full image. Constructing these pairs is non-trivial: it requires registering the close-ups within the full image for scale estimation and degradation alignment. We introduce a simple, robust method for registering close-ups on arbitrary materials in casual, in-the-wild captures. Together, these components form a system that enables seamless pan and zoom across the entire object, producing consistent, photorealistic gigapixel imagery from minimal input.
[ "cs.GR", "cs.CV" ]
# 1 Introduction

Li-ion batteries are widely deployed for transportation, grid stabilization, and power tools. These applications require specialized batteries, e.g., with long service life or performance under extreme conditions. Enhancing the usable lifespan and power density of future batteries will greatly aid in their successful deployment. Discovering new battery compositions requires a comprehensive understanding of the physicochemical processes taking place within them. Building this understanding starting from a candidate battery material requires thorough collaboration across a wide variety of disciplines: chemists to formulate the material, physicists to develop the experimental machinery, experimental electrochemists to perform and interpret the experiments, and theoretical electrochemists to find patterns in the data. Due to the many mechanisms at play in a battery during this chain of events1, interdisciplinary communication between all scientists involved in battery characterization is needed. However, the exchange between theoretical and experimental electrochemistry has always been challenging due to a divergence in discipline-specific language and domain knowledge. This is a direct consequence of the necessarily different challenges each discipline faces in battery characterization and of the diverse backgrounds involved. On the one hand, model-based interpretation of experiments introduces bias by assigning processes to measurement features, yet the underlying assumptions may not be made transparent. On the other hand, most experimental procedures have to be repeated multiple times to overcome challenges around accuracy and reproducibility. Yet, the concise publication of a single representative dataset may not make that transparent.
To bring the community forward and better align theoretical and experimental efforts, workflows and methods need to be established with which we can produce high-quality data in a scalable manner while increasing its compliance with the FAIR principles2: Findable, Accessible, Interoperable, and Reusable. Current efforts to achieve this span many disciplines and problems. Describing the data such that it can be collected and analyzed across institutions requires common requirements and a formalized language, e.g., ontologies.3–5 Maximizing the throughput of any one institution is aided by automation and digitalization of the experiments.6–9 Interpreting the rich amount of data requires a close integration with advanced Machine Learning techniques.10,11 This paper demonstrates a semi-autonomous, FAIR-compliant workflow to create reusable measurement data and use it to parameterize electrochemical battery models. We show how it allows us to identify and resolve mismatching biases in data interpretation. First, we detail the theory behind the battery models we use, the measurement techniques we employ, the FAIR principles, and the algorithms we use for parameterization. Second, we present a case study of how we elucidated a commonly occurring mismatch between active material diffusivities, depending on the measurement technique or theoretical model treatment. Third, we report on our findings on enhancing the collaboration between experimentalists and theoreticians. Finally, we conclude with a summary of our findings.

# 2 Methods

# 2.1 Ensuring FAIR Workflows

We present the methods we investigated to apply the FAIR principles in practice for battery measurements. We implement four methods to improve compliance with the FAIR principles: data annotation with ontologies, data publication in open repositories, automated data processing workflows, and data review. Findability requires structured metadata that make datasets searchable and discoverable.
We achieve this by creating metadata based on key-value pairs. The structure and terms used in the metadata are taken from the Battery Interface Ontology (BattINFO).12 BattINFO is a domain ontology that expresses knowledge about batteries using a formal and machine-readable vocabulary. It is an extension of the Elementary Multiperspective Materials Ontology (EMMO). This allows metadata annotated with BattINFO terms to be understood within the broader scope of physics and materials science and enables interoperability with other datasets annotated with EMMO. Furthermore, annotating datasets with semantic vocabularies and linking to other datasets adheres to World Wide Web Consortium (W3C) recommendations for publishing linked data on the Web, which extends the findability for web-based queries. The structured and semantically annotated metadata is serialized as JSON-LD files and stored alongside the data using the database software Kadi4Mat13. Accessibility ensures that datasets, once discovered, can be retrieved. Open data repositories act as publishers of datasets, providing long-term storage, persistent identifiers, and version control. Using third-party repositories such as Zenodo, operated by CERN, suitable datasets can be made available through central access venues. Kadi4Mat simplifies this process with a three-click integration, preserving metadata and data files for immediate access and citation. Interoperability allows datasets and workflows to be reused in new contexts. We achieve this by adhering to standardized formats with ontology-annotated data models and breaking the data processing pipeline into modular workflows. These workflows define clear inputs, processes, and outputs, enabling automation and machine-readability. Structuring data to support automation incidentally makes the process much more transparent and requires machine-readable (intermediate) results. Kadi4Mat provides an infrastructure to keep the data and the workflows acting on it in one place.
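Concretely, such a key-value metadata record serialized as JSON-LD might look like the following minimal sketch. The context namespace and term names below are illustrative placeholders, not actual BattINFO/EMMO IRIs; a real record maps its keys to the identifiers published by the ontology.

```python
import json

# Sketch of an ontology-annotated metadata record for one measurement.
# NOTE: the namespace and term names are placeholders for illustration only;
# a real record maps keys to the published BattINFO/EMMO IRIs.
metadata = {
    "@context": {
        "battinfo": "https://example.org/battinfo#",  # placeholder namespace
        "cellFormat": "battinfo:CellFormat",
        "ratedCapacity": "battinfo:RatedCapacity",
        "technique": "battinfo:CharacterisationTechnique",
    },
    "@type": "battinfo:ElectrochemicalMeasurement",
    "cellFormat": "18650",
    "ratedCapacity": {"value": 3.5, "unit": "A.h"},
    "technique": "GITT",
}

# Serialize exactly as it would be stored alongside the dataset.
jsonld_text = json.dumps(metadata, indent=2)
print(jsonld_text)
```

Because the keys are mapped to ontology terms in `@context`, a generic JSON-LD processor can expand the record into globally unique identifiers, which is what makes such metadata interoperable across institutions and with other EMMO-annotated datasets.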
Reproducibility ensures that others can understand, verify, and adapt the developed workflows. External reviews help validate data pipelines and uncover any missing documentation. For reusability, we incorporate a checklist-based review to confirm that datasets meet legal and practical requirements, such as licensing and accessibility. Reusability ensures that datasets can be reused in future research, both legally and practically. To enable this, we provide datasets under a permissive Creative Commons license, which allows others to access, share, and adapt the data as long as proper credit is given. This licensing approach maximizes the potential for collaboration, innovation, and integration of our work into broader research efforts.

# 2.2 Electrochemical Battery Parameterization

Our goal is to obtain material parameters that, when plugged into predictive models of the cell state, will give the results that we would observe in validation measurements. On the cell level, without access to microstructure imaging data, our most accurate dynamic model is the Doyle-Fuller-Newman (DFN) model14. For a thermodynamically consistent derivation and treatment of the general class of models that the DFN belongs to, we refer to Latz et al.15. Limitations of the DFN appear when considering the complex microstructures that arise in thick electrodes or novel materials16, as well as previously negligible effects in novel electrolytes17. To account for microstructure effects in the context of the DFN, please refer to Traskunov et al.18. Recent research showed that 3D microstructure models do not offer higher short-term voltage prediction accuracy than the DFN for commercial-like batteries19. 3D microstructure models do, however, offer higher predictive capability for cell degradation20. Hence, we do not consider microstructure effects here.
We demonstrate the challenges in systematically treating complex and diverse characterization data, starting with inverse modelling of the DFN as the “most accurate method” and working our way down via simplifications of the DFN to direct parameter extraction from graphs. The single particle model with electrolyte (SPMe)21 is the linearized version of the DFN concerning electrolyte dynamics. Effectively, it approximates the DFN with only one representative particle per electrode, while resolving the electrolyte dynamics spatially. The single particle model (SPM)21 is the constant term of the DFN concerning electrolyte dynamics. It may be considered a DFN that neglects the electrolyte dynamics and treats the electrodes as one representative particle each. We use PyBaMM22 to simulate these models. Marquis et al.21 documented their equations and distinguishable parameter groupings.

# 2.3 Measuring and Interpreting GITT Battery Response

We now introduce the experimental methods we will consider and their interpretations. The Galvanostatic Intermittent Titration Technique (GITT) was introduced in 1977 to study molecule transport phenomena in electrochemistry.23 With GITT, the battery experiences a short constant-current pulse, followed by a sufficiently long rest period. GITT is used most commonly for the determination of diffusion coefficients. Please refer to the SI Subsection 2.4 for the formula and its modernization24,25. Later in the paper, we will only use the square-root slope of the voltage signal shortly after current changes, abbreviated as $\gamma := \partial U / \partial \sqrt{t}$. Applying inverse modelling to GITT can utilize the measurement more comprehensively26. Escalante et al.27 have already discussed the differences that can arise due to the model choice, in the case of the SPM vs. the SPMe. We will elaborate on that by additionally including the DFN.
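As an illustration, the square-root slope $\gamma$ can be estimated from the voltage samples recorded shortly after a current step by a least-squares line fit against $\sqrt{t}$. This is a minimal numpy sketch with synthetic data, not the EP-BOLFI treatment used later in the paper:

```python
import numpy as np

def sqrt_time_slope(t, U):
    """Estimate the GITT square-root slope gamma = dU/d(sqrt t) by a
    least-squares line fit of voltage against sqrt(time since the step).
    In practice, only a short window after the current change is used,
    where the sqrt(t) behaviour of the voltage holds."""
    gamma, _ = np.polyfit(np.sqrt(t), U, 1)
    return gamma

# Synthetic pulse response: U = U0 + gamma * sqrt(t), plus a little noise.
rng = np.random.default_rng(0)
t = np.linspace(1.0, 60.0, 120)                               # s after the step
U = 3.7 - 2e-4 * np.sqrt(t) + rng.normal(0.0, 1e-6, t.size)   # V
print(sqrt_time_slope(t, U))   # close to the true slope of -2e-4 V/sqrt(s)
```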
GITT also yields the most accurate measurement of the Open-Circuit Potential (OCP) at any one State-of-Charge (SOC), i.e., the degree of lithiation between the maximally delithiated and maximally lithiated states. Measurement of voltage at a low current, typically cycling the battery in $50\,\mathrm{h}$ or more (quasi-OCP), gives many SOC points but mixes static and dynamic parts and flattens features in the OCP curve. With GITT, the cell gets cycled with short constant-current pulses, each only changing the SOC by a small percentage. Longer rest phases in between let the voltage signal exponentially decay close to the OCP, and we take the exponential asymptote as the OCP. Hence, compared to quasi-OCP, the SOC resolution has to be lower, but each measurement is more accurate. One can alleviate this somewhat by shortening the rest phases between GITT pulses and recovering their terminal voltage from exponential extrapolation26. In any case, a reliable rest phase must have the material exhibit only one mode of exponential relaxation at its end. A plot of voltage over logarithmic time easily verifies this for graphite and NMC for rest phases as short as $15\,\mathrm{min}$; long-term hysteresis can require rest phases as long as weeks for other materials like silicon28,29. The algorithm we use to fit simulation models to data is called Expectation Propagation with Bayesian Optimization for Likelihood-Free Inference (EP-BOLFI).26 It can tackle the high nonlinearity, i.e., the complexity of our models, while not only managing but also incorporating uncertainties in data, model, and model parameters into the fits.

# 3 Experimental

# 3.1 Cell composition

We conduct our experiments on the INR18650-MJ1, a cylindrical $3500\,\mathrm{mAh}$ Li-ion battery cell in the 18650 format manufactured by LG Chem.
With its high cycle life, an energy density of $710\,\mathrm{Wh/L}$, and a specific energy of $260\,\mathrm{Wh/kg}$, measured at reference current $0.2\,\mathrm{C}$, it is often employed for high-energy applications. The positive electrode active material is a high-nickel NMC-840511 $(\mathrm{LiNi_{0.84}Mn_{0.05}Co_{0.11}O_2})$, based on our measurements using inductively coupled plasma optical emission spectroscopy (ICP-OES) performed on a Varian Vista-MPX. The element ratios are consistent with scanning electron microscopy (SEM) from a Zeiss Gemini Ultra Plus and energy-dispersive X-ray spectroscopy (EDX) from a Bruker XFlash detector 5010, which report the average ratios Ni:Mn:Co 83:5:12. Both ratio results are consistent with a report by Li et al.30, stating Ni:Mn:Co 82:6:11 from ICP and EDX. The negative electrode active material is a graphite/silicon oxide composite, as confirmed via SEM-EDX. The ratio of graphite to silicon oxide is determined from Micro Computed Tomography ($\mu\mathrm{CT}$) as 96.5 volume-$\%$ graphite and 3.5 volume-$\%$ silicon oxide. Electrolyte harvested from the cell was measured via gas chromatography-mass spectrometry (GC-MS) by Sturm et al.31, revealing it to be $1\,\mathrm{mol/L}$ $\mathrm{LiPF_6}$ in a solvent based on EC:EMC:DMC with 1:1:1 volume ratios. The transport parameters for this electrolyte are well-documented and, therefore, taken from the literature.32

# 3.2 Cell disassembly

To study materials and perform experiments on the electrode level, the cell is disassembled and the electrodes extracted. To this end, the cell is discharged to the discharge cut-off voltage of $2.5\,\mathrm{V}$ at $0.1\,\mathrm{A}$ ($C/50$), transferred to an argon-filled glove box, and opened with a pipe cutter.
After the cell is dismantled, positive and negative electrodes are extracted and carefully separated from the separator to avoid cross-contamination. For all measurements, the electrode and separator samples were washed twice for one minute with Dimethyl Carbonate (DMC) and dried in the glove box. For Electrochemical Impedance Spectroscopy (EIS) to measure tortuosities, the electrodes were additionally immersed in DMC overnight to remove any residual salts and left to dry for 30 minutes before re-assembly. For electrochemical experiments, the coating on one side of the double-sided electrodes must be removed from the current collector foil. The positive electrode coating is removed with N-Methyl-2-Pyrrolidone (NMP). In contrast, the coating of the negative electrode is removed outside of the glove box with deionized water, as a water-soluble binder is commonly used there. Images of the jelly roll removed from the cell can and the subsequent removal of the coatings are depicted in the SI Figure 1. The process of dismantling the commercial cell and preparing the components for different types of measurements is described in more detail in Schmitt et al.33, together with a comprehensive description and assessment of various techniques for parameter identification.

# 3.3 Cell geometry and microstructure

Coating thicknesses are determined with the same SEM setup as earlier, a Zeiss Gemini Ultra Plus with EDX from a Bruker XFlash detector 5010. The positive electrode has a $73\,\mu\mathrm{m}$ coating thickness with a $19812\,\mathrm{mol/m^3}$ maximum lithium concentration according to SEM images of the electrode cross-section. As for the negative electrode, SEM images give an $87\,\mu\mathrm{m}$ coating thickness with a joint $29254\,\mathrm{mol/m^3}$ maximum lithium concentration. The thicknesses are averaged over several SEM images to account for local variations.
Exemplary SEM images are shown in the SI Figure 3 to give an impression of particle morphology. The separator is a ceramic-coated polymer with a thickness of approximately $12\,\mu\mathrm{m}$. The images of the electrode surface reveal that the active materials on the negative and positive electrodes are in flake and spherical shape, respectively. Parameters describing the microstructure of the electrodes are quantitatively assessed by analyzing the 3D reconstruction of the porous structure obtained by Focused Ion Beam nanotomography (FIB-nt). For that purpose, the microstructural data of the MJ1 cell provided in Heenan et al.34 is analyzed. For the reconstruction of the raw data, the 3D stack of images is segmented with the program ImageJ (from NIH). Due to non-uniform illumination, setting a single threshold for all micrographs is not feasible. Therefore, a Sauvola algorithm35 is used to perform local thresholding of the data. The Sauvola algorithm works by dividing the input image into square windows and setting thresholds for each based on the mean and standard deviation of the pixel intensities. Figure 1 shows the visualizations of the 3D reconstructions of the analyzed data. The data visualization uses Mayavi, a Python-based data visualization library. The Particle Size Distributions (PSD) of the various phases are calculated with MATLAB and TauFactor36, using a method introduced by Münch et al.37. For a comprehensive microstructural analysis, we refer to Heenan et al.34. The porosities of the electrodes and separator are measured by mercury porosimetry38, using the Pascal $140+240$ system by Thermo Scientific, resulting in 0.26, 0.38, and 0.23 for the negative electrode, separator, and positive electrode, respectively. A maximum pressure of $200\,\mathrm{MPa}$ is applied to the evacuated samples. Fig.
1 Visualization of the 3D reconstructions of the (a) NMC active phase in the positive electrode and of the (b) graphite and (c) silicon oxide phases in the negative electrode. Below those are the calculated particle size distributions for the (d) NMC, (e) graphite, and (f) silicon phases.

Table 1 Path-length tortuosity $\tau^2$, MacMullin number $N_M$, and Bruggeman exponent $\beta$ of different components of the MJ1 cell determined by EIS

# 3.4 Electrochemical measurements

For all electrochemical measurements, electrodes with $18\,\mathrm{mm}$ diameter are punched out and assembled in ECC-PAT-Core cells (EL-CELL) in three-electrode configuration, if not otherwise mentioned. Setups relating to GITT also refer to the measurement where GITT and EIS were performed intermittently. The resulting capacity is $12\,\mathrm{mAh}$, as estimated from an OCP model fit39. To ensure proper wetting, the cells were allowed to rest for 12 hours before measurements. Cycling is conducted with a BaSyTec Cell Test System (CTS) inside an IPP750 climate chamber by Memmert operating at $25\,^{\circ}\mathrm{C}$. For GITT, the counter electrodes are the ones from the original cell. For EIS to measure tortuosities, symmetrical cells of the negative and of the positive electrodes are constructed. In both cases, a $260\,\mu\mathrm{m}$ thick Whatman GF/A separator with porosity 0.93 and Bruggeman coefficient 1.0 replaces the original one, with an integrated lithium reference ring from EL-CELL for measuring the working electrode potential versus the reference electrode at $0\,\mathrm{V}$ versus $\mathrm{Li/Li^+}$. For GITT, the only difference is that the original electrodes are used as counter electrodes, as shown in the SI Figure 2. The cell plungers are chosen such that the reference ring is located approximately in the middle of the separator to prevent measurement artefacts.
The plungers for the EIS tortuosity measurement are copper-coated to minimize additional ohmic resistance. For GITT, $120\,\mu\mathrm{L}$ of $\mathrm{LiPF_6}$ in EC:EMC:DMC 1:1:1 volume ratios (Ethylene Carbonate, Ethyl Methyl Carbonate, Dimethyl Carbonate) with 2 weight-$\%$ VC (vinylene carbonate) from Solvionic is used to represent the original electrolyte. For EIS to measure the tortuosity of both electrodes, $120\,\mu\mathrm{L}$ of a non-intercalating electrolyte consisting of $10\,\mathrm{mmol}$ Tetrabutylammonium Perchlorate ($\mathrm{TBAClO_4}$, Merck) in EC (Alfa Aesar) : EMC (Solvionic) 3:7 weight ratio is used for blocking conditions. For EIS of the separator, $50\,\mu\mathrm{L}$ EC:EMC:DMC 1:1:1 volume ratios are used again instead. The tortuosity of both electrodes and the separator is determined with EIS according to the procedure thoroughly described by Landesfeind et al.40,41. EIS measurements are conducted under blocking conditions in potentiostatic mode, employing a Gamry 1010E instrument with a $5\,\mathrm{mV}$ amplitude over a frequency range of $1\,\mathrm{kHz}$–$1000\,\mathrm{kHz}$. To ensure measurement reproducibility, this is repeated for three cells for each component. For the impedance spectra, Equivalent Circuit Models (ECM) are used to obtain the ionic resistance $R_{\mathrm{ion}}$, from which the tortuosity is then calculated. With $A$ denoting the cross-section area and $L_k$ the coating thickness, the tortuosity can be obtained from $R_{\mathrm{ion}}$ according to $$ \tau^2 = \frac{\varepsilon R_{\mathrm{ion}} A \kappa_e}{2 L_k}, $$ with the conductivity of the electrolyte at $\kappa_e = 0.32\,\mathrm{mS/cm}$ and the 2 referring to the fact that we have two identical coatings in the symmetrical cell. The ECM for the separator consists of a resistor $R_{\mathrm{ion}}^*$ in series with a constant-phase element.
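Numerically, converting a fitted ionic resistance into a tortuosity via the formula above is a one-liner. The sketch below uses illustrative numbers (an 18 mm diameter electrode, the blocking electrolyte conductivity of 0.32 mS/cm, and a hypothetical $R_{\mathrm{ion}}$), not the measured MJ1 values:

```python
import math

def tortuosity_from_eis(porosity, r_ion_ohm, area_m2, kappa_S_per_m, thickness_m):
    """Path-length tortuosity squared from the ionic resistance of a
    symmetrical cell with two identical coatings:
        tau^2 = eps * R_ion * A * kappa_e / (2 * L_k)
    """
    return porosity * r_ion_ohm * area_m2 * kappa_S_per_m / (2.0 * thickness_m)

# Illustrative inputs (not the measured MJ1 values): 18 mm diameter electrode,
# kappa_e = 0.32 mS/cm = 0.032 S/m, 73 um coating, hypothetical R_ion = 230 Ohm.
area = math.pi * (0.018 / 2) ** 2   # electrode cross-section in m^2
tau_sq = tortuosity_from_eis(0.23, 230.0, area, 0.032, 73e-6)
print(tau_sq)   # ~2.95 for these inputs
```

The factor of 2 in the denominator reflects that the symmetrical cell stacks two identical coatings, so each contributes half of the measured ionic resistance.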
The ECM for the electrodes consists of a resistor $R_{\mathrm{ion}}$ in series with a simplified Transmission Line Model (TLM). For the latter, blocking conditions, reflective boundary conditions, and $R_{\mathrm{ion}} \gg R_{\mathrm{electrolyte}}$ are assumed. See Schmitt et al.33 for further elaborations. An ECM consisting of a resistor and capacitor in parallel fits the impedance semicircle at $4\,\mathrm{Hz}$–$100\,\mathrm{Hz}$ and yields the exchange-current densities. The electronic conductivities of the electrodes are determined from a four-point-probe measurement (Ossila) to be $\sigma_n^* = 215\,\mathrm{S/m}$ and $\sigma_p^* = 0.25\,\mathrm{S/m}$. An adhesive tape is used to delaminate the coating from the current collector to ensure that only the conductivity of the porous electrode is measured.

# 3.5 Data for Results and Discussions

We emulate the state of the art of the research field by considering data from one of our previously finished experiments, which involved little interaction between experimentalists and theoreticians. We use this as a basis for discussion between the two disciplines and to motivate how a more FAIR-compliant version of the experiment and its interpretation improves results. This dataset comprises GITT and EIS measurements recorded with a BaSyTec device on the EL-CELL setup. Three full charge-discharge cycles at $C/5$ current between $2.5\,\mathrm{V}$ and $4.2\,\mathrm{V}$ were performed before each measurement. The pulse charges are given relative to a scale between $0\,\%$ and $100\,\%$ corresponding to $2.5\,\mathrm{V}$ and $4.2\,\mathrm{V}$ at $C/50$ current cycling, respectively.
Between $10\,\%$ and $90\,\%$ SOC, the GITT pulses were carried out with $C/10$ current in $5\,\%$ steps, and beyond that with $C/20$ current in $1\,\%$ steps to avoid reaching cut-off voltages early and to get a higher resolution at the edges. The relaxation criterion signalling the end of the rest phases is a voltage change smaller than $0.0005\,\mathrm{V}$ within the last 30-minute segment. The GITT measurements and EIS measurements were performed interspersed with each other. The precise timings and order of operations are collected in the SI Table 1 and the SI Table 2 for the lithiation and delithiation direction, respectively.

Fig. 2 Finalized workflow for handling GITT and/or EIS data. Solid lines indicate a Record/dataset, while dashed lines indicate a Workflow/sub-task. The four bigger coloured sections represent the raw-to-interoperable data conversion (top left, blue), laboratory-to-interoperable report conversion (bottom left, green), discerning static and dynamic measurement features (top right, orange), and model parameterization (bottom right, red).

Our discussions revealed the necessity of a GITT measurement without performing EIS intermittently for parameterization. The exact measurement protocol is listed in the SI Table 3 and the SI Table 4 in lithiation and delithiation direction, respectively. To handle the measurement data, we first need to convert it into a consistent format suited to our analysis. The battery measurement devices (“cyclers”) commonly output data in a proprietary format. The rawest export from the cyclers is a .csv file. We stripped redundant measurement columns, like various representations of time or empty columns, reducing the file size by $77\,\%$. We packaged the remaining table in a Parquet file, reducing the file size by another $87\,\%$. Parquet is a compact container format that exploits common redundancies in time-series data.
Parquet also provides fast access with its column-oriented data structure.

# 4 Results

Our developed workflow is summarized in Figure 2. We group its components into four categories: converting raw measurement data into interoperable data (highlighted in blue), collating laboratory reports into interoperable characterization results (highlighted in green), distinguishing static from dynamic measurement features (highlighted in orange), and parameterizing an electrochemical model (highlighted in red). We now discuss these in order.

# 4.1 Creating interoperable data

We aim to standardize data processing by establishing one consistent data format internally. With this approach, reusing our existing data processing scripts for future datasets becomes seamless, requiring only a single script each time to convert new datasets into the standardized format, or perhaps even just different settings in the same script. We showcase our data conversion on the .csv conversion of our proprietary cycler output file, as .csv is the most generally accessible non-proprietary format. The first standardization here deals with general tabular data interpretation regardless of format, e.g., conversion to SI units or structuring according to measurement protocol. The development of such an interpretation script against multiple datasets revealed the following adjustments it needs to be able to perform.

• Normalizing column descriptors, e.g., from “Applied current / $\mathrm{A/m^2}$” to “I [A]”; the re-formatted unit notation is intentional, as ambiguous extra scalings like the $\mathrm{m}^2$ obfuscate magnitudes.

• Stripping redundant or empty data columns, as these would slow down network-intensive data processing and obscure the information content.

• Stripping superfluous data rows, e.g., measurement channels that were logged even though no experiment was connected to them.

• Storing non-data comments in a separate file.
• Converting the file encoding into a global format.

• Converting localized column delimiters and decimal symbols into a global format.

• Interpreting the contents of a “cycler state” column of the user’s choice, based on state changes of which the measurement will be segmented.

• Collating multiple measurement files into one while preserving the numbering of the original files for consistency.

• Normalizing current sign conventions based on cycler state, as some cyclers might imply current sign change by stating “Charge” and “Discharge”, while others will additionally explicitly denote it.

• Normalizing voltage sign conventions globally, as the direction of the battery in the cycler should not affect further data processing.

• Normalizing the sign convention for the imaginary part of an impedance measurement, as only some cyclers report the true impedance, while others have already negated the imaginary part for Nyquist plots.

• Lastly, normalizing current and voltage signs to align with the convention of the battery model and extracting the working conditions to input into the battery simulator.

To store the standardized measurement, we use the file format Apache Parquet, as it features small file size, fast reads, and broad software support. The conversion to a Parquet file happens with one central script to ensure that the structure of the files is consistent. We showcase the general tool usage on this interpretation step in the SI Section 3. We document unstructured metadata, like the operating states of each segment, via a separate JSON file. Still, the reusability of our data processing scripts requires us to keep them file-agnostic, as they should work with minimal adjustments for data stored as CSV or HDF5. Therefore, we handle in-memory sharing between data processing scripts with a Python object structure.
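Two of the adjustments above, descriptor normalization and the current sign convention, can be sketched in a few lines. The alias map and column names are hypothetical, not the actual vendor descriptors:

```python
# Hypothetical descriptor map; real cyclers use vendor-specific names.
COLUMN_ALIASES = {
    "Applied current / A": "I [A]",
    "Voltage / V": "U [V]",
    "Time / s": "t [s]",
}

def normalize_row(row, state_column="cycler state"):
    """Rename column descriptors and enforce the sign convention that
    discharge currents are negative, inferred from the cycler state."""
    out = {COLUMN_ALIASES.get(key, key): value for key, value in row.items()}
    state = str(row.get(state_column, ""))
    if state.lower() == "discharge" and out.get("I [A]", 0) > 0:
        out["I [A]"] = -out["I [A]"]
    return out

row = {"Applied current / A": 0.5, "Voltage / V": 3.7, "cycler state": "Discharge"}
normalized = normalize_row(row)
```

Centralizing such rules in one place is what makes the interpretation script reusable across cyclers.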
Reusing our data processing scripts with different file formats involves writing one script that parses them into that Python object structure.

# 4.2 Challenges in creating interoperable laboratory reports

We want to be able to handle any information on material and cell properties programmatically. Then, all further data processing steps document exactly how we used that information, and future reuse can build on that. We showcase our laboratory report standardization on the documents as they come, as this elucidates some of the common challenges in the communication and data exchange between experimentalists and theoreticians. A more ideal setup than what we show here would be to introduce ontology-based checklists and data sheets to the laboratory. The laboratory report was summarized into two files that act as interfaces to users. The experimentalists intended to curate the data in a self-explanatory way and devised the following attempt to structure and describe the information and results from the parameterization work. The first file, an Excel file, contained all geometry and material parameters in a minimal format. The second file, a PowerPoint file, repeated some of the parameters while also giving error bars, additional context on the methods used, diagrams for the non-scalar information, and images from the microscopy measurements. The PowerPoint file acts as metadata for the data in the Excel file. Nevertheless, the Excel file is not interoperable, as there is no machine-readable information about its data structure. Manual extraction is the only avenue here; ideally, the experimentalists would have been provided with an interoperable structure to fill in. Such a structure would also have given the theoreticians a central document to align their requirements with, ensuring that the experiments cover all required material properties. As an example, the thermodynamic factor of the electrolyte was initially missed on both sides.
We fill such gaps with literature data gleaned from LiionDB 42. As standardization efforts are still rapidly developing (such as the BPX physics-based battery modelling standard), we opted not to adopt them during our methodology’s long development period. However, once all the required features are present, BPX will be the most interoperable way to present our parameter file. Alternatively, our parameter file is a Python script that stores the parameters in a key-value structure. The keys correspond to the simulation software PyBaMM 22.

# 4.3 Data preprocessing to enhance signal interpretability

We want to dissect our data into signal and background. More accurately, we only want to use the part of the data containing a signal for a specific parameter of interest. Then, the sensitivity and precision of the following parameterization step are much more easily assessed. The time-series voltage response of a battery can be split into a static part (the OCP) and a dynamic part, termed “overpotential”. Since we want to study transport properties, which only appear in the dynamic part here, we must first consider the static part. We extract OCP data of both electrodes from GITT measurements as described in Section 2.2. We store the extracted OCV data in a JSON file; as this dataset is rather small, JSON is more appropriate here than Parquet for the sake of human readability. To increase the SOC resolution and filter noise, we interpolate the OCP data with the OCP model of Birkl et al. 39. These steps entail many small adjustments and a carefully crafted optimization algorithm, which are documented by our code and the workflow files describing its invocation. See Yao et al. 43 for another example. Finally, we store the OCV model and the metadata of its optimization in a JSON file. The data, the fit parameters, a directly usable representation of the fit function, and the optimization metadata are stored with respective keys.
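A key-value parameter file of the kind described above can be sketched as follows. The key strings mimic PyBaMM's naming style but are illustrative and not guaranteed to match a given PyBaMM version; the conductivity values are the four-point-probe results quoted earlier:

```python
# Illustrative key-value parameter file; key names mimic PyBaMM's style
# (bracketed units) but are assumptions, not verified PyBaMM keys.
parameters = {
    "Negative electrode conductivity [S.m-1]": 215.0,
    "Positive electrode conductivity [S.m-1]": 0.25,
    "Upper voltage cut-off [V]": 4.2,
    "Lower voltage cut-off [V]": 2.5,
}

def get_parameter(name):
    """Central accessor so every processing script reads the same values."""
    return parameters[name]
```

Routing every script through one accessor means later corrections to a parameter propagate automatically.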
We want to parameterize our data in a way that considers as many uncertainties as possible, as battery measurements, in particular, entail a lot of them 26. Then, our results will transparently encode how accurately the battery response reflects its material properties and allow us to update the range of possible parameters with future measurements. First, we subtract the optimized OCP model from the GITT data. To verify the accuracy and alignment of the OCP model, we plot the resulting overpotential measurement and check that the voltage asymptotes of the rest phases are close to zero. The overpotential is stored as a Parquet file with an identical internal structure to the original data, including timestamps, current, and voltage.

# 4.4 Probabilistic parameterization

We now prepare and perform the parameterization according to the algorithm EP-BOLFI 26. EP-BOLFI splits into a preprocessing step for Expectation Propagation (EP) and a parameterization step for Bayesian Optimization for Likelihood-Free Inference (BOLFI). The application to GITT is part of the EP-BOLFI publication. Preprocessing for EP allows you to apply domain knowledge by transforming the data into characteristic features. A typical GITT pulse or rest phase for materials like graphite and NMC can be entirely described by only two features, if no phase changes occur during the measurement, comprising a total of five scalar values: the square-root behaviour for short times, consisting of offset and square-root slope, and the exponential behaviour for long times, consisting of offset, magnitude, and decay rate. 26 We choose a suitable subset of features that relate to the quantity we wish to measure; here, it is the short-time square-root slopes for diffusivities. The remaining central input EP-BOLFI requires is our prior assumptions about the parameters of interest. The spread of sensible parameters that is known a priori is encoded as a probability distribution, denoted as the Prior.
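Extracting the two short-time scalars (offset and square-root slope) amounts to a least-squares fit of $U = a + b\sqrt{t}$ over an early time window. A minimal sketch on synthetic data, not EP-BOLFI's actual feature code:

```python
import math

def sqrt_slope(times_s, voltages_v, t_max_s):
    """Least-squares fit U = a + b*sqrt(t) over the window t <= t_max_s;
    returns (offset a, square-root slope b), the two short-time features."""
    xs = [math.sqrt(t) for t, _ in zip(times_s, voltages_v) if t <= t_max_s]
    ys = [u for t, u in zip(times_s, voltages_v) if t <= t_max_s]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Synthetic pulse with known square-root slope 0.02 V/sqrt(s):
ts = [1.0, 4.0, 9.0, 16.0, 25.0]
us = [0.1 + 0.02 * math.sqrt(t) for t in ts]
offset, slope = sqrt_slope(ts, us, t_max_s=90.0)
```

The analogous long-time exponential fit yields the remaining three scalars (offset, magnitude, decay rate).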
All inputs for the parameterization get encapsulated as a JSON file containing model information, model discretization, experimental conditions, experimental data, experimental features, and EP-BOLFI settings. We now visually verify that our parameterization is set up correctly by collecting the spread of simulation results over the parameter sets that sit at the 95% probability bounds of the Prior. After visually confirming that the Prior we set contains the true parameters in its 95% probability bounds, we run the parameterization. See the GITT analysis in the EP-BOLFI paper 26 for a detailed explanation of this process. The parameterization result is also a probability distribution over the parameter sets. Compared to the Prior, it only contains the subset of the Prior that also agrees with the data. As the result is a posteriori knowledge about the true parameters, it is aptly denoted the Posterior. We can visualize the Posterior similarly to the Prior, as it is structurally identical. Hence, the plots are consistent and can emphasize that the Posterior is a knowledge update of the Prior. Once the parameterization is done for each GITT pulse, we collect the individual SOC point parameters into a function of SOC. The SOC-dependent functions are stored as JSON files alongside a B-spline interpolation in Python format and their plot.

# 4.5 GITT characterization results

Fig. 3 Results for the diffusivity of the active material from one set of GITT data in delithiation direction, via direct calculations (a) and from fitting electrochemical models (b).
The labels read as follows: $\Delta U_s/\Delta U_t$ refers to the original GITT method 23, $\Delta U_s/\Delta U_t\,(\Delta t \downarrow)$ refers to the same method applied to only a suitably small time segment (90 s), $\partial U_s/\partial\sqrt{t}$ refers to the differential formulation of the original GITT method, $\partial U_s/\partial(\sqrt{t+\tau}-\sqrt{t})$ refers to a correction for overlapping relaxation phenomena 25, $\partial \eta_s/\partial(\sqrt{t+\tau}-\sqrt{t})$ additionally removes the OCP prior to diffusivity calculation, and SPM, SPMe, and DFN refer to the fitted electrochemical models. The best direct approach is plotted in black in (b) as well for comparison.

Figure 3a shows the results of state-of-the-art direct diffusivity extraction from the GITT data in delithiation direction. The limited error propagation we can consider here only displays the effect of voltage measurement resolution. It naturally becomes an issue in the SOC range 0.6-1.0, where graphite has a voltage plateau that is shallower than the measurement can resolve. Hence, we observe large error bars in that range. Figure 3b shows the results of our model-based diffusivity extraction from the same GITT data in delithiation direction. Our approach includes more sources of uncertainty in its error propagation, especially parameter uncertainties and their correlations. We observe a significant decrease in diffusivity accuracy over a much wider SOC range, 0.3-1.0.

Fig. 4 The predictive parameterization posterior of a GITT measurement in delithiation direction. The highlighted square-root slopes γ are used for fitting. The constant-current pulse lasts 0.6 h, and we show only the relevant part of the following rest.
The square-root features used for parameterization are noted down for experiment (orange) and optimal simulation (green) in $\sqrt{\mathrm{s}}/\mathrm{V}$. The large posterior 95% confidence interval is a consequence of the non-matchable pulse square-root feature.

The DFN simulations for the individual GITT pulses, one example of which is shown in Figure 4, hint at the reason. Traditional GITT relies on the assumption that the overpotential response grows monotonically, which we do not observe there. We investigate the unexpected shape of the overpotential further in an analysis of the overpotential components in Figure 5a. Oscillations between SOC and overpotential occur, showing unexpected retrograde SOC change. The SOC at which this happens is near a kink in the OCP, originating from crystal structure rearrangements in the active material. While the voltage response grows monotonically, the overpotential response does not, which would be missed in traditional GITT. With our approach, though, the similarity of the shape and the relaxation square-root accuracy tell us that the OCV and model accuracy are sufficiently high for parameterization. Furthermore, we observe a fundamental phenomenon in statistical estimation in the model-based diffusivities in Figure 3b: the bias-variance tradeoff. Since the SPM neglects electrolyte effects, it is the wrong model and cannot fit the data, which we call a high bias.
Consequently, as seen from the error bars, the variance is suspiciously low, which we colloquially call “confidently incorrect”. As we approach a more correct model with the SPMe, the variance grows, which we now know is expected but may be counterintuitive. 27 Only the DFN, as a sufficient model, can exhibit low bias and variance simultaneously. This example cautions us against trusting a parameterization from a single model without considering the context from adjacent models.

Fig. 5 A plot detailing the overpotential components in a GITT measurement in delithiation (a) and lithiation (b) direction. (Both panels plot, over the experiment run-time in hours, the negative open-circuit potential, negative particle concentration overpotential, negative reaction overpotential, electrode-electrolyte and separator-electrolyte concentration overpotentials, ohmic electrolyte and negative electrode overpotentials, and the voltage.) Only the two largest contributions are relevant in the delithiation direction, which are the OCP and particle concentration overpotential. The oscillation between the two is a result of a rapid change in OCP slope. All contributions are equally important in the lithiation direction. In particular, we see that the particle concentration overpotential shows a minor contribution overall, which makes this a measurement of the electrolyte rather than of the electrode.

Fig. 7 The predictive parameterization posterior of a GITT measurement in lithiation direction. The highlighted square-root slopes γ are used for fitting.
The constant-current pulse lasts 0.6 h, and we show only the relevant part of the following rest. The square-root features used for parameterization are noted down for experiment (orange) and optimal simulation (green) in $\sqrt{\mathrm{s}}/\mathrm{V}$. The overfitted posterior 95% confidence interval is a consequence of the prior 95% confidence interval not enveloping the data either.

Fig. 6 Results for the diffusivity of the active material from one set of GITT data in lithiation direction, via direct calculations (a) and from fitting electrochemical models (b). The labels read as follows: $\Delta U_s/\Delta U_t$ refers to the original GITT method 23, $\Delta U_s/\Delta U_t\,(\Delta t \downarrow)$ refers to the same method applied to only a suitably small time segment (90 s), $\partial U_s/\partial\sqrt{t}$ refers to the differential formulation of the original GITT method, $\partial U_s/\partial(\sqrt{t+\tau}-\sqrt{t})$ refers to a correction for overlapping relaxation phenomena 25, $\partial \eta_s/\partial(\sqrt{t+\tau}-\sqrt{t})$ additionally removes the OCP prior to diffusivity calculation, and SPM, SPMe, and DFN refer to the fitted electrochemical models. The best direct approach is plotted in black in (b) as well for comparison.

Figure 6a shows the results of state-of-the-art direct diffusivity extraction from the GITT data, this time in the lithiation direction. The limited error propagation we can consider here again only displays the effect of voltage measurement resolution. This time, it is much less of an issue due to the measurement happening in the direction of increasing OCP slope, which results in voltage responses comfortably beyond measurement accuracy.
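For context, the direct methods in the label lists above all descend from the original 1977 GITT relation. A hedged sketch of its commonly used simplified short-time form, $D = \frac{4 L^2}{\pi \tau}\left(\frac{\Delta E_s}{\Delta E_t}\right)^2$ (valid for $\tau \ll L^2/D$), with made-up numbers that are not values from this study:

```python
import math

def gitt_diffusivity(delta_e_s, delta_e_t, pulse_duration_s, diffusion_length_m):
    """Simplified Weppner-Huggins estimate
    D = (4 / (pi * tau)) * L^2 * (dE_s / dE_t)^2,
    valid for tau << L^2 / D (semi-infinite diffusion)."""
    return (4.0 / (math.pi * pulse_duration_s)) * diffusion_length_m ** 2 \
        * (delta_e_s / delta_e_t) ** 2

# Hypothetical numbers: 10 mV steady-state step, 50 mV transient change,
# 360 s pulse, 5 um particle radius as diffusion length.
d_est = gitt_diffusivity(0.010, 0.050, 360.0, 5e-6)
```

The small steady-state step in a voltage plateau is exactly what blows up the error bars: $\Delta E_s$ enters squared, so its relative measurement error doubles in the diffusivity.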
The exception is at the very beginning of the GITT lithiation measurement at high negative electrode SOC, as the surface concentrations do not yet reach the non-plateau region of the OCP. Figure 6b shows the results of our model-based diffusivity extraction on the same GITT data in lithiation direction. We observe almost no improvement over the prior parameter assumptions for the SOC range 0.3-1.0. The SPM fit shows suspiciously low variance, which can be attributed to an insufficient model introducing significant bias 27. We see a marked decrease in diffusivity accuracy in the SOC range 0.0-0.1 across all models this time. Similar to the delithiation direction, we observe that GITT measurements towards the edge of the SOC range cannot uniquely parameterize the active material diffusivity. When the local electrode concentration hits an SOC limit, a “depletion shockwave” runs from the current collector to the separator, which has a different dynamic than a diffusivity response. The DFN simulations cannot capture the magnitude of the overpotential this time, as we see in one of the parameterized pulses in Figure 7. The shallow OCP curve is one reason, as the negative electrode concentration overpotential scales with it. Consequently, it is small relative to the electrolyte overpotential, as shown in the overpotential analysis in Figure 5b. We repeat the identical procedure for the positive NMC electrode in the SI Section 4. As NMC has a benign OCP with no kinks and small slope changes, traditional GITT works well and our approach is not needed. We can ensure the compatibility of other measurements with our GITT parameterization by utilizing the fact that we treated it according to Bayesian principles. In Bayesian statistics, results from insufficient data are described as probability distributions reflecting the uncertainty that the data and model contain.
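In this Bayesian treatment, combining GITT with a second measurement amounts to multiplying both likelihoods with one shared prior. A minimal grid-based sketch with illustrative Gaussian shapes (the means and widths are made up, not fitted quantities):

```python
import math

def gaussian(x, mean, std):
    """Unnormalized Gaussian density."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2)

# Parameter grid, e.g. log10 of a diffusivity; all values are illustrative.
grid = [i * 0.01 for i in range(-300, 301)]  # -3.00 .. 3.00

prior = [gaussian(x, 0.0, 1.0) for x in grid]
likelihood_gitt = [gaussian(x, 0.5, 0.4) for x in grid]
likelihood_eis = [gaussian(x, 0.3, 0.5) for x in grid]

# Posterior proportional to prior * likelihood_GITT * likelihood_EIS.
posterior = [p * lg * le for p, lg, le in zip(prior, likelihood_gitt, likelihood_eis)]
total = sum(posterior)
posterior = [p / total for p in posterior]
map_estimate = grid[posterior.index(max(posterior))]
```

Because Gaussians multiply to a Gaussian with precision-weighted mean, the combined estimate lands between the two measurement means, weighted by how sharply each constrains the parameter.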
To give context: the posterior (the “result”) in Bayesian statistics is obtained as the product of the prior (the “informed researcher’s intuition”) and the likelihood (the “model”). We generalize this multiplication update by multiplying the likelihoods of another measurement and GITT by the prior. The mathematical justification for this “trick” stems from summary statistics 44. More straightforwardly, this is equivalent to a semi-parallelized EP-BOLFI, which is stated by Barthelmé et al. 45 to be valid. This “simple” step is only possible when the models used for the parameterization in both cases are compatible. See Zhu et al. 46 and Deng et al. 47 for the challenges that combining GITT and, e.g., Electrochemical Impedance Spectroscopy otherwise entails.

# 5 Discussions

Here we describe the generally applicable issues and improvements we found in our collaboration. We will discuss these in order: measurement protocol communication, measurement objective communication, measurement accuracy assessment, uncertainty treatment, documentation via metadata, elucidating domain knowledge, checking model compatibility, interoperable laboratory reports, and software dependency review. Communication of the measurement protocol is the first step that may induce issues. We found different understandings of a measurement technique (GITT) amongst the parties involved. With different requirements and limitations, one may select a different interpretation. For example, a theoretician might prefer a rigid GITT set for consistency or an arbitrary, but uninterrupted, one for error mitigation. At the same time, an experimentalist is concerned about maximizing expensive equipment time and might interweave other measurements in the rest phase “downtime”. By joining all parties on what may be considered the domain of only one party, we could find an optimal solution for all: shortening the rest phases.
While shorter rest phases down to 15 min may suffice 26, this only applies after one verification GITT pulse on the material at hand with the usual hours-long relaxation. Measurement objective communication is a separate step that needs to be considered. The issue we found was different quantities of interest. This may sound trivial without knowledge of experimental setups, but to use resources optimally, they can be much more complex than what their output files suggest. For example, a multiplexer setup can seamlessly switch between the time-domain measurements for GITT and the frequency-domain measurements for EIS. However, specific quantities can only be logged by one device at once, leading to gaps in the record for the other device. For example, an EIS measurement device may not be set up to track the total charge transferred, which is necessary for the GITT measurement device to assign SOCs to data points. A solution to avoid this is to agree on a verbose spreadsheet with exact cycler instructions beforehand. Some issues only appear in such simplified discussions, as they eliminate the application of advanced knowledge. For example, a theoretician might not know the time, current, or voltage resolution limits. Measurement accuracy assessment refers to a human-interpretable representation of the intermediate steps in the data pipeline. On the one hand, it allows for the re-calibration and fine-tuning of the intermediate steps. On the other hand, it reduces the individual errors that accumulate in the final error propagation calculation. The issue we found was a lack of checks of assumptions. For example, a GITT measurement may be idealized as per theory. Each segment starts with a short-term square-root behaviour and smoothly merges into an exponential decay towards a (quasi-)equilibrium. To verify this, we subtracted the electrode OCP from the data.
But we found oscillations of the overpotential around kinks in the electrode OCP, e.g., in Figure 4. While it is well understood that the original GITT formula from 1977 23 does not apply in such situations, we show that GITT with a model-based analysis can still yield a suitable parameterization. Uncertainty treatment is a step whose importance in battery research cannot be overstated. We found that the magnitude of uncertainty sources is easily underestimated when only considering one at a time. With EP-BOLFI 26, we turn to a black-box optimizer with a stochastic framework that allows us to simultaneously evaluate any uncertainties that we can incorporate into a simulation model. Voltage measurement precision and material/geometrical property uncertainty can be tacked onto any simulation model. Meanwhile, material/geometrical property correlation is an intrinsic property of the model equations that EP-BOLFI uncovers. For example, the influence of electrolyte properties and their geometry on the complete parameterization is often underestimated. To verify the extent of this influence, we perform the overpotential analyses in Figure 5 for selected SOC points in both the delithiation and lithiation directions. We see that in delithiation direction, most of the signal stems from the electrode concentration gradients, which is desired. But in lithiation direction, only about 10-20% of the signal stems from the phenomenon of interest, while the electrolyte concentration gradient effects dominate the signal. Any uncertainty in the electrolyte properties has a proportionally increased influence on the parameterization of the active material diffusivity. Checking the influence of the electrolyte this way tells us how much we need to optimize the experimental setup for a sufficient signal from the electrodes. Documentation via metadata is often considered a thankless task, as it is thought not to have an immediate benefit.
The issue we found is that domain-specific language between experimentalists and theoreticians did not diverge in the words used but in the meaning of those words. For example, the term “tortuosity $\tau$” has different defaults depending on one’s own research field. More accurately, one may refer to “path-length tortuosity $\tau$” if the ratio between material and effective transport properties is $\varepsilon/\tau^2$, or “effective tortuosity $\tau$” if it is $\varepsilon/\tau$. With ontology-backed descriptions of each use of the term “tortuosity”, one could directly translate sources from other disciplines into one’s own requirements. Elucidating domain knowledge is critical for successful communication across disciplines. The issue we found is the unconscious application of domain knowledge. For example, initial communication about the difference between a commercial cell and its modified experimental sample was kept “simple” in the interest of each party’s time: the theoreticians got the description that the experimental setup minimizes “the effect” of the separator. But this “simple” statement encodes a large volume of expectations of the measurement, an assumption on the quantities that will be extracted from it, and the method by which the separator was made “negligible”. Theoreticians would assume that the new separator would be a marginally thick glass fibre with unity tortuosity. We show the actual picture in the SI Figure 2. The separator is “removed” from the measurement by it having unity tortuosity and high porosity. But, as the commercial electrolyte greatly influences the signal 26 compared to a purely academic cell, combined with the considerable thickness of the separator, the removal is imperfect, which must be communicated back and forth. Hence, we recommend graphical communication as a way to transfer domain knowledge.
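The two tortuosity conventions above are related by a one-line identity: since $\varepsilon/\tau_{\mathrm{path}}^2 = \varepsilon/\tau_{\mathrm{eff}}$, it follows that $\tau_{\mathrm{eff}} = \tau_{\mathrm{path}}^2$. A minimal sketch of the conversion, with illustrative values:

```python
def effective_tortuosity_from_path_length(tau_path):
    """eps/tau_path**2 (path-length convention) equals eps/tau_eff
    (effective convention), hence tau_eff = tau_path**2."""
    return tau_path ** 2

def transport_correction(porosity, tau_path):
    """Effective/bulk transport ratio eps/tau_path**2; identical in both
    conventions, only the reported tau differs."""
    return porosity / tau_path ** 2

# Illustrative separator values: porosity 0.4, path-length tortuosity 1.5.
tau_eff = effective_tortuosity_from_path_length(1.5)
factor = transport_correction(0.4, 1.5)
```

An ontology-backed descriptor would carry exactly this convention information alongside the number, making the conversion automatic.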
Checking model compatibility thoroughly by checking assumptions between models can be arbitrarily difficult. The issue we found was the pragmatic reliance on the fact that different models of one phenomenon are supposed to approximate the same physical reality. For example, while Transmission Line Models claim to reproduce a Finite Volume discretization of porous microstructures, from the differences found in 1D+1D impedance simulations 48, we can infer that TLM model parameters do not map onto those of 1D+1D models. Interoperable laboratory reports may seem like an extra step on top of the measurement documentation. The issue we found is the loss of auxiliary information, e.g., meanings of data column descriptors, differences between a battery cell and its sample for measurement, known noise sources, or even specifics of the preceding equipment use. With ontologies, we have a tool to make checklists and input masks for metadata, automatically converting laboratory notes into a complete picture. For example, we encountered data segments from a multiplexer setup that were not interpretable independently. One device seemed to have arbitrary gaps in voltage data. But both devices were active simultaneously, passing their electrical connection to the battery cell back and forth. Another example is the common description of current via $\mathrm{A/m^2}$, which, out of context, lacks the information on whether the area it refers to is the total surface area of one of the electrodes or the cross-section of one of the electrodes, and which electrode it refers to. One issue we want to emphasize is human error when reading out non-machine-readable laboratory reports. Data loss may be just a matter of not scrolling down an Excel sheet or missing that it has tabs. Software dependency review entails not only the log of software versions used but, more importantly, the effects each piece of software has on the workflow.
The issue we found is the sometimes non-interoperable implementation of file formats. For example, Pandas is a popular Python library for handling tabular data, offering data export into the HDF5 file format. While it is possible to store interoperable data this way 49, the default behaviour is a file that can only be reasonably read by Pandas. This raises an unnecessary barrier to future reuse. In the worst case, one must find the same Pandas version and make it run on their system. Therefore, we recommend verifying the standard adherence of your data by opening it in a “third” software, i.e., one that neither party initially used. Additionally, the choice of file format depends on the size of one’s organization. To show that HDF5 is appropriate for organizations with dedicated resources for data curation, we refer you to Moradpour et al. 50. With no dedicated resources, a file format is preferable that cannot be misconstrued the way the highly flexible HDF5 can. For example, we choose Apache Parquet because it forces us to organize our data in a single table each time. For unstructured data, we choose JSON, since it sacrifices file compactness for structural simplicity and universal readability.
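The JSON round-trip for unstructured metadata is straightforward with the standard library; the record below is a hypothetical segment annotation, with field names invented for illustration:

```python
import json

# Hypothetical metadata record for one measurement segment.
metadata = {
    "segment": 3,
    "cycler_state": "Rest",
    "relaxation_criterion": {"voltage_change_V": 0.0005, "window_min": 30},
    "comments": ["multiplexer handover to EIS device at segment end"],
}

# Human-readable on disk, and any JSON implementation can read it back.
serialized = json.dumps(metadata, indent=2, sort_keys=True)
restored = json.loads(serialized)
```

That any JSON library in any language restores the identical structure is precisely the interoperability property the Pandas-specific HDF5 default lacks.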
Interdisciplinary collaboration in battery science is required for rapid evaluation of better compositions and materials. However, diverging domain vocabulary and non-compatible experimental results slow down cooperation. We critically assess the current state of the art and develop a structured data management and interpretation system to make data curation sustainable. The techniques we utilize comprise ontologies to give a structure to knowledge, database systems adhering to the FAIR principles, and software engineering to break down data processing into verifiable steps. To demonstrate our approach, we study the applicability of the Galvanostatic Intermittent Titration Technique on various electrodes. Our work is a building block in making automated material science scale beyond individual laboratories to a worldwide connected search for better battery materials.
# 1. Introduction

Code review is essential for improving code quality and detecting defects (Fagan, 2002). Modern Code Review (MCR) is widely used in open-source (Rigby et al., 2008; 2014; Rigby & Bird, 2013) and industrial settings (Sadowski et al., 2018; Shan et al., 2022), typically involving: (A) code submission, (B) reviewer examination, (C) feedback, and (D) developer revisions. Despite its benefits, MCR is labor-intensive and time-consuming (Yang et al., 2016), driving research toward automated review comment generation. Existing methods—whether retrieval-based (Gupta & Sundaresan, 2018; Siow et al., 2020; Hong et al., 2022) or deep-learning-driven (Tufano et al., 2021; 2022; Li et al., 2022b;a; Lin et al., 2023; Lu et al., 2023)—often frame it as a snippet-level code-to-text task. However, this oversimplification diverges from the core goal of reviewers: detecting defects (Bacchelli & Bird, 2013) (see Section A). Furthermore, current evaluations rely excessively on textual similarity metrics (e.g., BLEU (Papineni et al., 2002), ROUGE (Lin, 2004)), which fail to measure real-world effectiveness (Lu et al., 2025). Challenges. To address these limitations, we investigate a full code review pipeline within a real-world online service (Figure 1). Our system integrates with an internal DevOps platform, generating review reports, filtering comments, and aligning them with code lines. A detailed description of this real-world workflow integration, designed for seamless adoption by developers, is provided in Appendix B. This deployment reveals four key challenges (Appendix C): Capturing Proper Code Context: Effective review requires analyzing dependencies beyond the immediate diff hunk (e.g., variable declarations or method calls). However, excessively long inputs degrade LLM performance, necessitating efficient context extraction.
Improving Key Bug Inclusion (KBI): The goal of automated review is to detect critical defects, yet existing methods rely on textual similarity metrics, which fail to measure defect detection capability. More robust evaluation methods, such as Key-Bug Inclusion (KBI), are needed. Reducing False Alarm Rates (FAR): Generative models often produce irrelevant or overly strict comments (e.g., nitpicks, hallucinations), burdening developers. A robust filtering mechanism is required to reduce false positives and enhance the signal-to-noise ratio. Human-Centric Workflow Integration: Practical review tools must seamlessly integrate into developers’ workflows, ensuring comment alignment with code lines while minimizing cognitive overhead. Existing solutions often overlook this critical usability aspect.

Figure 1. The code review automation pipeline integrated into the online service.

Our Approach. To address these challenges, we propose: ❶ A static analysis system using code slicing to extract relevant context. ❷ A multi-role LLM framework with chain-of-thought reasoning to enhance defect detection. ❸ A filtering mechanism to eliminate false-positive nitpicks and hallucinations. ❹ A line-aware prompt design for precise comment placement. Evaluation. We validate our framework on real-world system failures, including historical core dumps and fault reports that caused significant financial losses. We evaluate it using multiple open-source LLM engines, demonstrating a 2× performance improvement over standard LLM methods and a 10× improvement over prior baselines. An ablation study further confirms the contribution of each component, highlighting the impact of code slicing, multi-role reasoning, and filtering mechanisms. Contributions.
Our key contributions include being the first to: ❶ Repository-Level and Merge-Request Granularity: Elevating automated code review from snippet-level tasks to repository-wide and merge-request (pull-request) granularity. ❷ Integration with Real-World DevOps Workflows: Deploying automation into a practical online review system with more practical and objective evaluation metrics beyond text similarity. ❸ Validation on Industry-Scale Defects: Demonstrating effectiveness on real-world, high-impact failures in industry-level codebases instead of synthetic test data. ❹ Code-Review-Oriented LLM Framework: Designing a specialized framework leveraging code slicing, multi-role collaboration, and filtering mechanisms, achieving substantial improvement in code review performance.

# 2. Background: Code Review Automation

Automating code review is crucial for maintaining software quality by identifying critical bugs early. The goal is to detect severe issues in new merge requests and provide necessary comments. In 2022, company reports showed that 30% of severe P1+ incidents (asset losses exceeding $350,000) and 24.04% of P4+ incidents stemmed from low-level faults due to inadequate reviews. Even in 2024, change-related core failures accounted for 67% of incidents, with code change-related graded incidents comprising 19.54%, highlighting the urgent need for effective automated review tools. These tools help ensure thorough, compliant reviews, reducing defect risks. To understand reviewer needs, we surveyed a super reviewer group, summarizing findings in Section D. Background on code slicing and multi-role systems, key techniques in our work, is introduced in Sections E and F.

# 3. Proposed Approach

# 3.1.
Overview

Figure 2 illustrates the decoupled architecture of our code review automation: 1) Code Slicing: Extracting code from the diff hunk within repository context (Section 3.2); 2) Multi-role Code Review System: Employing a multi-role system to conduct reviews and compile the results (Section 3.3); 3) Redundancy Comment Filter Mechanism: Filtering out redundant and irrelevant comments to avoid nitpicks and hallucinations (Section 3.4); 4) Line Number Localization: Ensuring precise identification of the code lines where issues occur (Section 3.5). To evaluate the automation, we construct a dataset from historical fault reports, simulating real-world merge requests that introduced defects (Section 3.6).

# 3.2. Code Slicing

Previous work used method-level or diff-level code snippets as independent inputs. However, new code is integrated into a larger codebase during reviews, and understanding the structural context is crucial. We developed a code slicing process that integrates multiple slicing strategies, selectable based on the analysis needs. To avoid redundant slices, we use a caching mechanism to enhance efficiency. The pseudo code of our slicing algorithms is presented in Section G.

Figure 2. Pipeline overview: a triggered merge request flows through Code Slicing (Section 3.2) and the Multi-role Code Review System (Section 3.3), then through Line Number Localization (Section 3.5) and the Redundancy Comment Filter Mechanism (Section 3.4), producing the output code review report.

Initially, the repository is cloned, and the merge request commit is checked out. A static analysis tool is then applied to generate abstract syntax trees (ASTs), which serve as the foundation for our slicing process. Based on data dependencies and control flow analysis, one or more of the following four optional slicing algorithms may be applied: 1) Original Diff: The basic code diff without transformations, capturing essential changes in the commit. 2) Parent Function: Locates the smallest parent function containing the changes, providing functional context.
3) Left Flow: Tracks the flow of all left-hand values (L-values) in the function and control structures, focusing on the lifecycle of variables. 4) Full Flow: Extends Left Flow by tracing right-hand values (R-values) and collecting the signatures of callee functions, offering coverage of variable usage and modifications.

# 3.3. Multi-role Code Review System

Our multi-role code review system involves four key roles: Reviewer, Meta-Reviewer, Validator, and Translator. These roles collaborate to enhance the accuracy and efficiency of the review process. The system design is illustrated in Figure 3, and we detail the roles and their processes below. ❶ Reviewer: Reviews each code snippet generated by the code slicing algorithm (Section 3.2) and provides detailed comments on potential issues in a predefined format. ❷ Meta-Reviewer: Aggregates comments from multiple Reviewers, filtering and sorting them based on predefined thresholds. It merges common issues across reviews. ❸ Validator: Validates and refines the merged comments, rescores them, and ensures that only comments exceeding a certain threshold are retained. ❹ Translator: Translates the final comments into the required language for multinational teams, ensuring proper formatting for direct integration into the development environment. Each role is integrated with the Chain-of-Thought technique, as detailed in Section H.

# 3.4. Redundancy Comment Filter Mechanism

LLMs often produce an overwhelming number of comments, many of which are either nitpicks or hallucinations. To mitigate this issue, we implemented a Redundancy Comment Filter Mechanism to reduce the number of irrelevant comments. Our filtering mechanism, integrated within the multi-role system (Section 3.3), operates by answering three key questions for each comment: Q1: Is this comment a nitpick? Typical nitpicks include excessive code comments, handling unnecessary edge cases, or overly complex error handling.
Q2: Does the comment identify a fake problem (i.e., a non-existent bug)? For example, if the comment flags a function call to a known reliable internal library, null pointer checks are considered irrelevant. Q3: How critical is the issue identified by this comment? Minor issues, like missing comments, are less severe than potential core dumps or infinite loops. Each question is rated on a scale from 1 to 7, with 1 indicating a nitpick, fake problem, or minimal issue, and 7 indicating a severe and real issue. The scoring scale (1 to 7) is inspired by related work (McAleese et al., 2024). We chose this scale to enable a fine-grained yet manageable distinction. These scores form the basis of the filtering process throughout the review workflow. Coarse Filtering and Sorting by Reviewer. During the review process, the Reviewer LLMs score each comment based on Q1–Q3. Comments with Q1 or Q2 scores of 4 or below are discarded. This specific threshold was established heuristically to enhance interpretability and has been validated by developer feedback during internal piloting. The remaining comments are then sorted based on their Q3 score and truncated to the Top-N comments. Fine Filtering and Sorting by Meta-Reviewer. The Meta-Reviewer further refines the filtered comments by merging those flagged by multiple Reviewers and removing comments mentioned by only one Reviewer.

Figure 3. The multi-role system for automating code review.

Validation and Re-scoring by Validators. Validators then re-score the comments by revisiting the original code snippets and applying the same Q1–Q3 criteria. A secondary filter is applied, ensuring that only the most relevant and critical comments proceed to translation and integration into the development platform. Integration with the Multi-role System. The filtered comments are processed by the remaining multi-role components, including translation (if necessary) and final submission to the development platform.
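The coarse filtering and sorting stage described above can be sketched as follows; the `ReviewComment` type, field names, and example comments are illustrative, while the discard rule (Q1 or Q2 scores of 4 or below) and Top-N truncation by Q3 follow the text.

```python
from dataclasses import dataclass

@dataclass
class ReviewComment:
    text: str
    q1: int  # Q1: 1 = nitpick, 7 = substantive
    q2: int  # Q2: 1 = fake problem, 7 = real issue
    q3: int  # Q3: 1 = minimal, 7 = critical

def coarse_filter(comments, top_n=5, threshold=4):
    """Discard comments whose Q1 or Q2 score is `threshold` or below,
    then keep the Top-N survivors sorted by criticality (Q3, descending)."""
    kept = [c for c in comments if c.q1 > threshold and c.q2 > threshold]
    kept.sort(key=lambda c: c.q3, reverse=True)
    return kept[:top_n]

comments = [
    ReviewComment("Possible null dereference of `ctx` after early return", 6, 6, 7),
    ReviewComment("Consider adding a code comment here", 2, 6, 2),            # nitpick: Q1 <= 4
    ReviewComment("Missing null check before internal-lib call", 6, 3, 5),    # fake problem: Q2 <= 4
    ReviewComment("Loop bound may overflow on 32-bit builds", 5, 5, 6),
]

survivors = coarse_filter(comments, top_n=2)
print([c.q3 for c in survivors])  # → [7, 6]: most critical surviving comments first
```

Under this sketch, the Meta-Reviewer and Validator stages would apply further merging and re-scoring on the survivors; only the Reviewer-side coarse filter is modeled here.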
This multi-stage process ensures that the delivered comments are both relevant and concise, minimizing redundancy and false alarms. The heuristic approach to threshold definition described herein was chosen to prioritize generalizability and interpretability and to mitigate overfitting in this study. While providing a robust baseline, exploring adaptive or machine-learned thresholds remains a valuable direction for future enhancement toward more nuanced filtering.

# 3.5. Line Number Localization

A key challenge overlooked in prior work is the precise localization of comments within the code. Unlike code summarization tasks, code reviews require pinpointing the specific lines of code where issues are identified. Without this information, developers face inefficiencies in verifying and addressing comments. For example, the change-involved function contains 94.54 lines of code on average based on our statistics, so missing line localization can result in significant delays for developers. We propose a code formatting approach inspired by Aider (Gauthier, 2024), tailored for code review tasks. As shown in Table 1, the format includes an operation label (indicating whether a line is kept, added, or deleted), the line number, and the code content. For non-contiguous code lines, ellipses are used to indicate omissions.

Table 1. Code formatting with line position information.

# 3.6. Offline Validation

To systematically assess the performance of our system, we developed a dataset curated from the company’s fault report platform. Each case in this dataset corresponds to an issue that resulted in actual company losses. For each reported fault, we trace back to the merge request that introduced the fault and its subsequent fixing merge request. Using these, we generate ideal reference comments containing details such as affected files, specific lines of code, fault location, root cause, suggested fix, example code, and issue category.
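The inline position format of Section 3.5 (Table 1) can be sketched as follows; Table 1's exact syntax is not reproduced here, so the `op|line| content` layout, the `...` gap marker, and the numbering of deleted lines are our assumptions for illustration.

```python
def format_with_positions(lines):
    """Render (op, line_no, text) triples with inline position info.
    op: ' ' = kept, '+' = added, '-' = deleted.
    Non-contiguous line numbers are separated by an ellipsis marker."""
    out, prev_no = [], None
    for op, no, text in lines:
        if prev_no is not None and no > prev_no + 1:
            out.append("...")  # elide the omitted span
        out.append(f"{op}|{no}| {text}")
        prev_no = no
    return "\n".join(out)

# A hypothetical hunk: one added line, then a jump to a distant kept line.
hunk = [
    (" ", 41, "int total = 0;"),
    ("+", 42, "if (items.empty()) return 0;"),
    (" ", 90, "return total / items.size();"),
]
print(format_with_positions(hunk))  # prints the hunk with inline positions and a "..." gap marker
```

Embedding the operation label and line number directly in each line is what lets the model cite an exact location in its comments, rather than describing a position in prose.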
The motivation for conducting such validation is illustrated in Section I.

# 4. Evaluation Design

# 4.1. Research Questions

We define the following research questions (RQs) to guide our evaluation, whose detailed illustrations are in Section J: RQ1: How does the overall performance of our framework compare with previous works? RQ2: How do code slicing algorithms impact the performance of the framework? RQ3: How do the different components of the multi-role system impact the performance of our framework? RQ4: How does the redundancy comment filter mechanism address nitpicks and hallucinations? RQ5: How does the representation of line number position information impact overall performance and the line number localization success rate?

# 4.2. Dataset and Studied Models

The primary goal of code review is to prevent problematic code from being merged into the target branch. To simulate real-world code review scenarios, we collected data from a company’s core framework team, which is responsible for the production code of the short-video recommendation core service. This data was gathered using fault reports recorded on an online platform. These cases come from four repositories and involve a total of 4,090 developers. By analyzing these reports, we traced the merge requests (MRs) that introduced the issues and examined the specific commits to reproduce the code snapshots. The detailed statistics are presented in Section K. Our framework supports multiple LLM engines. To mitigate security risks, we only studied open-source models that can be deployed locally. We exclusively selected large instructed models due to the complex human-instruction-based tasks in our workflow. The final list of models includes: LLaMA-3.1 (70B), Qwen2 (72B), Command R+ (104B), Mistral-Large-2407 (123B), and LLaMA-3.1 (405B). The reasons for not selecting other models are outlined in Section L.

# 4.3.
Metrics

In accordance with the real-world developer expectations discussed in Section D, we evaluate performance at the merge request (MR) level using four metrics, with their formal definitions provided in Section M: ❶ Key Bug Inclusion (KBI): Assesses the model’s ability to recall critical issues that could lead to tangible losses. ❷ False Alarm Rate (FAR): Captures the proportion of irrelevant or erroneous comments, with two variants ($FAR_1$ for all MRs and $FAR_2$ for MRs where key bugs are recalled). ❸ Comprehensive Performance Index (CPI): Balances the completeness of key issue detection (KBI) and precision $(100 - \mathrm{FAR})$, analogous to the F1-score. It is also computed in two variants ($CPI_1$ and $CPI_2$). ❹ Line Localization Success Rate (LSR): Measures the accuracy of line-level references by checking whether comments point to the correct code lines.

# 4.4. Baselines and Experimental Setups

Since our framework focuses on C++, we selected state-of-the-art baselines that support this language: CodeReviewer (Li et al., 2022b): A T5 model pre-trained for code review tasks and then fine-tuned. CCT5 (Lin et al., 2023): A T5 model pre-trained on CodeChangeNet, then fine-tuned. LLaMA-Reviewer (Lu et al., 2023): A large LLM fine-tuned for code review tasks based on LLaMA. DISCOREV (Ben Sghaier & Sahraoui, 2024): A T5 model enhanced via cross-task knowledge distillation for code review. The detailed experimental setups of our framework and baselines are presented in Section N.

# 5. Evaluation Results

# 5.1. RQ1. Comparison with Baselines

We evaluated the performance of our framework on the fault merge request dataset, comparing it with several baseline approaches. Our framework was tested with different large language model (LLM) engines. Our main experiments primarily utilized a homogeneous setup, employing the same LLM across all roles.

Table 2. Overall performance comparison of our framework using different LLM engines and baseline models. LLM engines marked with \* are quantized. “Val” indicates if the Validator role was used.

This approach was chosen to isolate and clearly assess whether a single, powerful model could effectively address key challenges in code review. Recognizing the practical importance and potential benefits of diverse model deployments, we also conducted extended comparison experiments with heterogeneous LLM assignments for reviewer and validator roles. These experiments, detailed in Appendix O, show that strategic combinations, such as pairing a strong validator with a smaller reviewer, can achieve comparable or even superior performance while potentially optimizing resource usage. For baselines, since they do not prioritize comments, we evaluated their comments based on whether they passed their respective “quality estimation” filters, which assess whether a code snippet requires a comment. The results are in Table 2. The results indicate that our framework significantly outperforms the baselines by a factor of 10× across most key metrics, such as key bug inclusion (KBI) and the comprehensive performance index (CPI). This marked improvement is likely due to our framework’s end-to-end approach to code review automation, which addresses the key challenges of the task and introduces strategies specifically designed to tackle each challenge. Among the LLM engines tested in our primary setup, LLaMA3.1-405B demonstrated the best overall performance, which aligns with the general scaling laws of language models, where capability often increases with parameter count on complex tasks such as code review. However, our evaluations (detailed in Table 2) also included more compact LLMs. These results show that certain smaller models, particularly those with strong inherent reasoning capabilities, can still achieve competitive performance within our framework.
This finding is particularly relevant given the industry trend towards increasing ‘capacity density’ in newer architectures, where smaller models are progressively narrowing the performance gap. While the largest models may provide peak effectiveness, these observations suggest that a range of LLMs can be effectively utilized, allowing for a balance between performance and computational resource demands, a point further explored in our heterogeneous model assignments (Appendix O). Summary of RQ1. Our framework surpasses baseline approaches significantly (up to 10× on KBI/CPI), thanks to its end-to-end design. LLaMA3.1-405B stands out among tested engines, highlighting the role of model capability. Investigations into heterogeneous LLM combinations also suggest the potential for optimized deployments. (See Appendix T.1 for the extended conclusion.)

# 5.2. RQ2. Effectiveness of Code Slicing

We tested the four code slicing algorithms described in Section 3.2: Original Diff, Parent Function, Left Flow, and Full Flow. It is important to clarify that while our framework does not employ an explicit Retrieval-Augmented Generation (RAG) pipeline, our code slicing mechanism is designed with a RAG-aligned objective. Specifically, it serves a similar purpose to RAG by strategically retrieving and providing the LLM with only the most relevant contextual code ‘slices’ from the broader codebase. This process aims to focus the model on pertinent information, thereby enhancing its reasoning and effectiveness in the code review task. Our focus in this section is on $KBI$ and $CPI_1$, as these metrics indicate how input content affects the maximum recall capability of LLMs for code review. The experiments were structured to evaluate the comments generated by the large language models under different conditions, including all comments, comments after applying a coarse filter, and top-k ranked comments (based on scores from Q3).
We also tested multi-reviewer settings, where the meta-reviewer merges the comments, and validator settings, where validators further refine the comments. The average results are shown in Table 3, based on the LLaMA3.1-405B-AWQ-Int4 LLM engine. To provide further insight into the variability of these results, the minimum and maximum values for each reported metric across the three runs are detailed in Appendix R. The results reveal that using only the diff or parent function is less effective, while more detailed slicing (Left Flow and Full Flow) improves performance, especially in key bug inclusion. Surprisingly, Left Flow performs better than Full Flow, likely due to the large language model’s reduced capability when provided with longer contexts, which can cause distraction. This finding supports our assumption that providing targeted and relevant code context is critical for maximizing LLM performance in code review tasks, an observation consistent with the principles underpinning RAG systems, where curated information significantly enhances model outputs. During our analysis of the recalled merge requests (MRs), we found another interesting pattern. Although some slicing algorithms perform worse overall, each algorithm uniquely succeeds in specific cases. This means that each slicing strategy provides valuable context in certain situations. Figure 4 presents a Venn diagram showing the union and differences among the key bugs recalled by each slicing algorithm under the “All” and “+Meta Reviewer” settings. Notably, Left Flow and Full Flow recall the most key bugs, with significant overlap, but nearly every method also uniquely recalls some. This phenomenon mirrors how human reviewers operate, expanding their focus to different levels of granularity, such as inspecting parent functions or understanding variable usage in different contexts. Some defects are easier to spot in one context, while others require a different view.
Therefore, a combination of various slicing strategies might be a promising direction. Summary of RQ2. Left Flow and Full Flow significantly improve key bug inclusion and overall performance compared to simpler slicing. Left Flow often outperforms Full Flow, possibly because a shorter context helps maintain focus. Notably, each slicing approach has exclusive successes, suggesting that combining them could further improve detection. (See Appendix T.2 for the extended conclusion.)

# 5.3. RQ3. Effectiveness of Multi-role System

To better understand the capabilities of our multi-role system, we conduct experiments on: ❶ leveraging the non-determinism of large language models; ❷ the self-correction capability (validator); ❸ the chain-of-thought (CoT) prompting strategy.

# 5.3.1. NUMBER OF REVIEWERS

Previous research has shown that the non-determinism of large language models (LLMs) can impact results. Specifically, with a best-of-N sampling approach, smaller LLMs can sometimes match or surpass larger models. Since our framework includes a multi-reviewer scenario, where a meta-reviewer merges comments from multiple reviewers, we conduct experiments to assess whether increasing the number of reviewers improves performance. The results in Table 4 show that increasing the number of reviewers from one to three improves $KBI$ but also leads to higher $FAR_1$ and $FAR_2$, which negatively affect $CPI_1$ and $CPI_2$ in the “+Meta Reviewer” setting. However, after introducing the validator, the performance for three reviewers significantly improves in terms of $CPI_1$ and $CPI_2$. While more reviewers boost $KBI$, they also increase false alarms, making the validator essential to overall performance. Table 3.
Impact comparison of different code slicing algorithms on key bug inclusion $(KBI)$ and the comprehensive performance index $(CPI)$, based on LLaMA3.1-405B-AWQ-Int4. Experiments for a single reviewer are conducted three times to compute the average. “All” represents all comments generated by the reviewer; “Coarse filter” refers to filtering using Q1 and Q2 scores during generation; “Top-k” denotes truncated comments sorted by Q3 scores; “+Meta Reviewer” and “+Validator” settings are evaluated under Top-5 truncation.

Figure 4. Venn diagram of recalled key bugs identified by different code slicing algorithms. The “All” setting represents all comments, while the “+Meta Reviewer” setting denotes multi-reviewer comments merged by the meta-reviewer.

To analyze per-category performance, a breakdown across logic, security, and performance-related bugs is shown in Appendix P.

Table 4. Impact of increasing the number of reviewers from one to three. The “+Meta Reviewer” setting represents the meta-reviewer merging the reviewers’ comments, while the “+Validator” setting denotes the validator refining the comments after the meta-reviewer. All settings use Top-5 truncation of reviewer comments.

Summary of RQ3.1. Increasing the number of reviewers lifts key bug inclusion but raises false alarms. A validator mitigates these alarms, implying a trade-off between coverage and precision. (See Appendix T.3 for extended conclusions.)

# 5.3.2. SELF-CORRECTION ABILITY OF LLMS

In our framework, the validator refines and validates generated comments to correct hallucinations. Table 5 shows that the validator lowers $FAR_1$ and $FAR_2$ but also reduces $KBI$, indicating a trade-off between precision and recall.
Our analysis suggests such erroneous rejections of valid comments by validators primarily stem from factors including context propagation from earlier pipeline stages, minor inaccuracies in comment positioning, occasional model input token limits, and inherent scoring variances. Summary of RQ3.2. Self-correction (validator) reduces false alarms but can inadvertently discard critical bug-detecting comments. Balancing these factors is crucial. (See Appendix T.3 for extended conclusions.)

Table 5. The self-correction ability of LLMs through the Validator role. “w/o” denotes without Validator, “w/” denotes with Validator.

Table 6. Impact of Chain-of-Thought (CoT) on the framework, presenting paired slicing algorithm comparisons. “SR” denotes Single Reviewer, “MR” denotes Multi Reviewers. All multi-reviewer settings use three reviewers and Top-5 truncation.

# 5.3.3. EFFECTIVENESS OF CHAIN-OF-THOUGHT

We compared our specified CoT approach with free-form reasoning. Table 6 shows that CoT prompts often excel in complex slicing tasks (Left Flow, Full Flow), but in simpler tasks (Original Diff, Parent Function), free-form reasoning can be just as good or better. Summary of RQ3.3. CoT prompting is especially beneficial in complex contexts. For simpler code slices, the model may perform well without explicit CoT guidance. As more powerful reasoning models, such as GPT-o1 and DeepSeek-R1, emerge, the advantage of specified CoT over free-form reasoning may further diminish. (Appendix T.3)

# 5.4. RQ4. Effectiveness of Comment Filter Mechanism

The comment filter mechanism includes ❶ the coarse reviewer filter, ❷ Top-k truncation, ❸ the meta-reviewer filter, and ❹ Validator validation.

Table 7. The $KBI$, $FAR_1$, and $CPI_1$ results for different code slicing algorithms utilizing our filtering mechanism. This table illustrates the impact of sequential filter stages, including different Top-k truncation values (k=10, 5, 3) for single-reviewer paths. For the multi-reviewer path results shown here (+Meta Reviewer, +Validator), Top-k is set to 5. A comprehensive discussion of Top-k sensitivity, covering both single-reviewer variations and multi-reviewer settings, is presented in Appendix S.

Table 7 shows that in flow-based slicing (Left Flow, Full Flow), adding these filters sequentially decreases $FAR_1$ and improves $CPI_1$. In simpler slicing (Original Diff, Parent Function), only the coarse filter proves particularly effective, likely due to limited context causing more hallucinations. A comprehensive sensitivity analysis of the Top-k truncation hyperparameter $k$, detailing its impact on single-reviewer paths with various $k$ values (as presented in Table 7) and an extended analysis within our multi-reviewer framework, is provided in Appendix S. Summary of RQ4. Our comment filter significantly reduces false alarms and improves performance in more detailed slicing methods. In simpler slicing, the coarse filter stage is the most impactful step. (See Appendix T.4 for the extended conclusion.)

# 5.5. RQ5. Line Number Position

Line number localization is crucial for real-world applications. We tested three formats: No: no line position information is provided; Relative: code is provided with a separate list containing relative line positions; and Inline: position information is integrated directly into the code using the format in Table 1. Table 8 shows that providing line number information (especially inline) significantly improves performance and the localization success rate (LSR). Summary of RQ5. Embedding line numbers inline yields the highest performance and LSR, likely because it helps the model anchor comments to specific lines accurately. (See Appendix T.5 for the extended conclusion.) Table 8. Impact of line number position information.
“All” represents the average of all comments generated by reviewers, while “+Meta Reviewer” denotes the multi-reviewer workflow with three reviewers and Top-5 truncation. LSR (Line Localization Success Rate) measures whether LLMs provide valid lines, regardless of correctness.

# 6. Related Work

Code review comments play a crucial role in maintaining software quality, leading to significant research efforts in automating this process. Early studies, such as Gupta & Sundaresan (2018), employed retrieval-based methods, utilizing LSTM models to match new code snippets with historical changes to recommend comments. Siow et al. (2020) advanced this approach by incorporating attention mechanisms to capture semantic nuances more effectively. With the advent of deep learning, the focus shifted towards automated comment generation. Pioneering efforts by Tufano et al. (2021; 2022) introduced models trained on diverse datasets, including technical texts and code snippets. Subsequent innovations included specialized models such as CodeReviewer (Li et al., 2022b), which leveraged pre-training on code review data, and AUGER (Li et al., 2022a), which used review tags to streamline the task. Another approach, CommentFinder (Hong et al., 2022), presented an efficient retrieval-based model tailored to new code. More recently, LLaMA-Reviewer (Lu et al., 2023) trained large language models specifically for code review tasks, DISCOREV (Ben Sghaier & Sahraoui, 2024) improved performance by applying cross-task knowledge distillation across successive tasks, and Yu et al. (2024b) focused on fine-tuning LLMs to improve both the accuracy and comprehensibility of automated code reviews.
Alongside these advancements in direct comment generation, recent studies have also explored the application of LLMs to other related aspects of the software development lifecycle, such as enhancing code reviewer recommendation (Wang et al., 2024) and automating commit message generation (Tao et al., 2024), underscoring the expanding utility of large models in diverse software engineering contexts. Despite these advances, previous works have oversimplified the code review process by treating it as a set of snippet-level code-comment pairs. These approaches typically split merge requests into independent snippets and framed the task as a one-to-one neural machine translation (NMT) problem, converting code into natural language. While innovative, this approach provides a limited and idealized view of code review, often evaluated with text similarity metrics, such as BLEU or ROUGE, which do not fully capture real-world developers' expectation that reviews find defects. In practice, code review is more complex: it is evaluated at the level of entire merge requests within repository codebases rather than on individual code-comment pairs. The focus on text similarity fails to consider the broader context, including how comments address the full scope of changes in an MR. Although these studies contribute valuable insights, they fall short of replicating the holistic, real-world workflow.
The complexity of code reviews has driven efforts to automate review comments, but prior approaches oversimplify this task by treating it as snippet-level code-to-text generation and relying on text similarity metrics like BLEU for evaluation. These methods overlook repository context, real-world merge request evaluation, and defect detection, limiting their practicality. To address these issues, we explore the full automation pipeline within the online recommendation service of a company with nearly 400 million daily active users, analyzing industry-grade C++ codebases comprising hundreds of thousands of lines of code. We identify four key challenges: 1) capturing relevant context, 2) improving key bug inclusion (KBI), 3) reducing false alarm rates (FAR), and 4) integrating human workflows. To tackle these, we propose 1) code slicing algorithms for context extraction, 2) a multi-role LLM framework for KBI, 3) a filtering mechanism for FAR reduction, and 4) a novel prompt design for better human interaction. Our approach, validated on real-world merge requests from historical fault reports, achieves a 2x improvement over standard LLMs and a 10x gain over previous baselines. While the presented results focus on C++, the underlying framework design leverages language-agnostic principles (e.g., AST-based analysis), suggesting potential for broader applicability.
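The inline line-number format that wins RQ5 amounts to prefixing each code line with its absolute position before handing the snippet to the reviewer model. The sketch below is illustrative only: the `N:` prefix is an assumed format, not the paper's exact template from its Table 1.

```python
def annotate_inline(code: str, start: int = 1) -> str:
    """Embed absolute line numbers directly into the code so the model can
    anchor review comments to exact positions (format is illustrative)."""
    return "\n".join(f"{start + i}: {line}"
                     for i, line in enumerate(code.splitlines()))
```

A `start` offset lets the caller preserve the positions a slice occupied in the original file, so the model's comments can be mapped straight back to the merge request.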
# I. INTRODUCTION Public procurement represents a major component of government expenditure, necessitating effective management to ensure fair allocation of public funds and economic stability. However, traditional data storage formats, often tabular or unstructured, limit transparency, accessibility, and analytical depth. Addressing these challenges requires a shift toward structured, semantically enriched data representation. Semantic Web technologies offer a transformative solution by structuring procurement records as RDF-based knowledge graphs, enabling interconnected data, flexible querying, and advanced analytics. Ontologies define standardized vocabularies and relationships, ensuring semantic consistency and facilitating SPARQL-based queries. This research introduces an ontology-driven framework for procurement data analysis, integrating automated data transformation into machine-readable formats. The system leverages knowledge graphs and Machine Learning to enhance predictive modeling of procurement trends and risk factors. By combining Semantic Web technologies with data-driven analytics, this approach strengthens transparency, supports decision-making, and fosters accountability in North Macedonia’s public procurement system. # II. DATA DESCRIPTION # A. Legal and Institutional Framework for Data Transparency As part of North Macedonia’s commitment to enhancing transparency in public procurement, significant advancements have been made both legislatively and technologically. A key milestone in this development occurred on January 9, 2018, when the Electronic Public Procurement System (EPPS) [1] was enhanced to publish awarded contracts. This improvement increased public access to procurement data, providing greater visibility into how public funds are allocated [2]. The legal foundation for this increased transparency is laid out in the Law on Public Procurement, specifically Article 6 [3], which has evolved since its initial adoption in 1998. 
The version adopted on February 1, 2019 (Official Gazette no. 24/2019) [4], effective from April 1, 2019, mandates the publication of procurement notices, tender documentation, and contract awards. These legal requirements ensure that public procurement processes are conducted transparently, allowing citizens to track the allocation of public resources.
# B. Data Acquisition and Source Characterization
In alignment with these regulatory requirements, the primary data source for this study is the national open data portal [5], specifically the dedicated section for high-value contracts exceeding 1,000,000 euros [6]. The procurement data published through this platform is derived from the Electronic Public Procurement System (EPPS) [1], which has been operational since 2006 and represents the longest-running procurement system in the Western Balkans region. This strategic selection enables an in-depth analysis of transactions representing substantial fiscal resource allocation within the country’s public sector. Covering a twelve-year period (2009–2021), the dataset provides a unique longitudinal perspective on procurement trends, capturing shifts influenced by political, economic, and regulatory transformations.
# C. Data Format and Structure
The raw procurement data is available in Microsoft Excel (XLSX) format, containing structured information about high-value contracts. The dataset includes essential procurement fields such as: 1) Contracting Authority: The institutional entity initiating the procurement process, representing various governmental strata from ministries to public enterprises and municipalities. 2) Subject of the Contract: A textual specification delineating the procurement objective, encompassing diverse categories from infrastructure development to service acquisition. 3) Procurement Holder: The economic operator or consortium awarded the contract, providing critical insights into public-private transactional relationships.
4) Date of the Contract: The temporal identifier signifying formal contractual establishment, enabling chronological analysis of procurement patterns. 5) Value of the Contract in Denars: The financial magnitude expressed in North Macedonia’s national currency, providing a quantitative measurement of resource allocation. While these datasets provide comprehensive information, they exist as separate files requiring integration and semantic enrichment for advanced analytical capabilities. # III. DATA PROCESSING Given that procurement data is periodically published as XLSX files, an initial step involves the systematic aggregation of these individual datasets. This merging operation is conducted by identifying shared attributes across files and unifying them into a single, comprehensive dataset. Subsequent to aggregation, the dataset undergoes an essential normalization phase. This involves removing redundant fields and standardizing column structures to ensure data consistency. Irrelevant or superfluous attributes—such as sequential numbering fields—are eliminated to refine the dataset for analytical processing. The result is a harmonized procurement database that accurately reflects high-value transactions while maintaining structural integrity. # IV. ONTOLOGY DESIGN The ontology for public procurement data provides a structured model for representing key entities, their attributes, and relationships within the procurement domain. It defines fundamental procurement concepts to ensure consistency in data organization and facilitate analysis [7]. At the core of the ontology is the Contract class, which represents individual procurement agreements. Each contract is associated with an Institution, the public authority responsible for issuing it, and a Supplier, the economic operator fulfilling the contractual obligations. These relationships are explicitly defined through object properties: hasInstitution: Links a contract to the awarding institution. 
hasSupplier: Connects a contract to the designated supplier. In addition to defining structural relationships, the ontology includes datatype properties that capture essential contract details: hasAmount: Specifies the financial value of the contract. hasDate: Records the issuance date of the contract. hasDescription: Provides a textual summary of the contract. These properties contribute to a well-defined framework for procurement data representation (Fig. 1). The ontology also applies OWL (Web Ontology Language) restrictions requiring each contract to be associated with at least one institution and one supplier. This ensures the data is complete and accurately represents the procurement process [8].
# V. SEMANTIC CONVERSION
To enable advanced semantic querying and Linked Open Data (LOD) compatibility, the procurement dataset is transformed from CSV to the Resource Description Framework (RDF). This conversion structures procurement records into a knowledge graph, aligning with Semantic Web standards [9].
# A. CSV to RDF Transformation using RML
The transformation utilizes the RDF Mapping Language (RML) to map CSV attributes to ontology properties while preserving semantic integrity [9]. The process, represented in Fig. 2, involves:
# 1) Attribute Mapping:
• Contracting Authority → Institution class
• Procurement Holder → Supplier entity
• Contract Value and Date → hasAmount and hasDate properties
# 2) Temporal Standardization:
Dates are formatted in ISO 8601 to ensure uniform representation across procurement records.
Fig. 1. Public Procurement Ontology Overview
Fig. 2 (System Workflow and Data Processing Pipeline) depicts six stages: 1) Ontology Definition: create OWL classes (Contract, Institution, Supplier) and define object/datatype properties and constraints; 2) Data Capture: get public procurement data from opendata.mk and export datasets as XLSX files per year; 3) Data Transformation: convert XLSX to CSV format, arrange the data, and merge annual datasets into a unified CSV; 4) Semantic Conversion: map CSV data to RDF using RML and validate with SHACL shapes; 5) Knowledge Graph Construction: apply ontology rules to RDF data and generate Linked Data artifacts (Turtle/N-Triples); 6) Quality Assurance: SPARQL query validation and SHACL constraint checking.
# 3) URI Generation:
• To ensure global referenceability, Unique Resource Identifiers (URIs) are systematically generated for institutions, suppliers, and contracts.
# 4) RML Execution:
Structured CSV data is transformed into RDF triples, forming the procurement knowledge graph. The resulting RDF dataset enables procurement data to be stored in a triple store (Table I), making it accessible for SPARQL-based querying and reasoning.
# B. Validation with SHACL Shapes
After RDF transformation, the dataset undergoes validation using the Shapes Constraint Language (SHACL) to ensure compliance with ontology constraints [10]. 1) Defining SHACL Shapes: Each contract must have an associated Institution and Supplier. The hasAmount property must be a numeric value greater than zero, and hasDate must follow the ISO 8601 format. 2) Validation Execution: A SHACL engine checks the dataset for inconsistencies, flagging errors for correction before further analysis.
# VI. SEMANTIC DATA QUERYING AND ANALYSIS IN PUBLIC PROCUREMENT
Public procurement data represents a vast and complex dataset that can be effectively managed using Semantic Web technologies. To facilitate structured querying and analysis, a knowledge graph is constructed using RDF (Resource Description Framework) and queried via SPARQL [11].
TABLE I SUMMARY STATISTICS OF THE PUBLIC PROCUREMENT DATASET
To analyze procurement trends, a series of SPARQL queries are executed against the knowledge graph.
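The attribute-mapping and URI-generation steps above can be caricatured in a few lines of Python that emit N-Triples directly. The namespace, column names, and slug scheme here are illustrative assumptions; the actual pipeline uses an RML engine and validates the output with SHACL.

```python
import re

BASE = "http://example.org/procurement/"  # hypothetical namespace

def uri(kind, label):
    # Mint a deterministic, globally referenceable URI from a label.
    slug = re.sub(r"[^a-z0-9]+", "-", label.lower()).strip("-")
    return f"<{BASE}{kind}/{slug}>"

def contract_triples(row):
    # Mirror the attribute mapping: Contracting Authority -> Institution,
    # Procurement Holder -> Supplier, value/date -> hasAmount / hasDate.
    c = uri("contract", row["id"])
    yield f"{c} <{BASE}hasInstitution> {uri('institution', row['authority'])} ."
    yield f"{c} <{BASE}hasSupplier> {uri('supplier', row['holder'])} ."
    yield (f'{c} <{BASE}hasAmount> "{row["value"]}"'
           f"^^<http://www.w3.org/2001/XMLSchema#decimal> .")
    yield (f'{c} <{BASE}hasDate> "{row["date"]}"'
           f"^^<http://www.w3.org/2001/XMLSchema#date> .")
```

Slug-based URIs make repeated occurrences of the same institution or supplier resolve to a single node, which is what lets SPARQL aggregate contracts per entity.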
These queries extract various key insights, such as identifying the highest contract values, calculating the total amount of all recorded contracts, analyzing the distribution of contracts over time, determining institutions with the highest number of contracts, and identifying suppliers with the greatest total contract values. Such analyses provide valuable information for monitoring public spending and detecting potential anomalies in procurement practices. Another key component of the analysis is the temporal distribution of contracts. By grouping procurement transactions by year and quarter, it becomes possible to identify trends in public spending over time (Fig. 3). Similarly, averaging contract values per month or year provides insights into fluctuations in procurement expenditures. Detecting contracts with values above the average further highlights outliers that may warrant deeper investigation. The analysis includes statistical measures such as the minimum, maximum, mean, median, and standard deviation of contract distributions for each quarter in the last five years. Part of the analysis can be seen in Table II.
Fig. 2. System Workflow and Data Processing Pipeline
Fig. 3. Quarterly Trends in Public Procurement Amounts
TABLE II KEY PROCUREMENT STATISTICS EXTRACTED VIA SPARQL QUERIES
# VII. CONTRACT AMOUNT ESTIMATION AND TREND ANALYSIS
Integrating predictive analytics into public procurement enhances decision-making by estimating contract values based on historical data and textual descriptions. The analysis is performed in two steps: procurement contract value prediction and visualization of historical spending patterns. This strengthens transparency and analytical capabilities for data-driven procurement assessments.
# A. Machine Learning Model for Procurement Prediction
We utilize the multilingual-e5-large-instruct model, a transformer-based [14] sentence embedding model designed to produce high-quality vector representations of text in multiple languages. The input to the model is the textual description of a new or unlabeled procurement contract, which is encoded into a dense vector representation. All previously known procurement contracts from our historical dataset are pre-encoded and stored in a FAISS (Facebook AI Similarity Search) [15] index to allow for efficient nearest-neighbor search in high-dimensional space. Given a new procurement contract, we compute its embedding and query the FAISS index to retrieve the top-9 most similar contracts based on cosine similarity. We then estimate the value of the new contract by computing the median of the known values of the most similar contracts.
# B. Model Results
The results are displayed in Table III. The performance of the approach is benchmarked against the median contract amount over all contracts. These results indicate that the model captures key patterns in procurement data, supporting procurement planning and expenditure forecasting [13].
TABLE III RESULTS OF THE CONTRACT AMOUNT PREDICTION
# C. Historical Procurement Trend Visualization
The system provides analytical tools for visualizing historical procurement trends, enabling users to select a contracting institution, retrieve and plot its past transactions, and analyze spending over time. This visualization aids in identifying spending patterns, seasonal trends, and anomalies, enabling structured procurement analysis (Fig. 4).
Fig. 4. Historical trends for Ministry of Education and Science
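The retrieval-and-median scheme reduces to a few lines once embeddings are available. In this sketch, FAISS is replaced by an exact linear scan and the vectors are toy placeholders standing in for multilingual-e5 outputs.

```python
import math
from statistics import median

def cosine(u, v):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def estimate_value(query_vec, corpus, k=9):
    """Rank historical contracts by cosine similarity to the query's
    description embedding and return the median value of the top-k.
    corpus: list of (embedding, contract_value) pairs."""
    ranked = sorted(corpus, key=lambda ev: cosine(query_vec, ev[0]),
                    reverse=True)
    return median(value for _, value in ranked[:k])
```

Using the median rather than the mean of the neighbors' values makes the estimate robust to a single anomalously large retrieved contract.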
Public procurement plays a critical role in government operations, ensuring the efficient allocation of resources and fostering economic growth. However, traditional procurement data is often stored in rigid, tabular formats, limiting its analytical potential and hindering transparency. This research presents a methodological framework for transforming structured procurement data into a semantic knowledge graph, leveraging ontological modeling and automated data transformation techniques. By integrating RDF and SPARQL-based querying, the system enhances the accessibility and interpretability of procurement records, enabling complex semantic queries and advanced analytics. Furthermore, by incorporating machine learning-driven predictive modeling, the system extends beyond conventional data analysis, offering insights into procurement trends and risk assessment. This work contributes to the broader field of public procurement intelligence by improving data transparency, supporting evidence-based decision-making, and enabling in-depth analysis of procurement activities in North Macedonia.
# 1. Introduction
This paper proposes a variational inference algorithm based on particle methods to sample from multimodal densities. More precisely, we consider the problem of approximating a measure of interest $\pi$, which will be assumed to take the form $$ \pi ( d x ) = \frac { 1 } { Z } \rho ( x ) d x , $$ where $\rho$ is a non-negative, integrable function, and $Z$ is a normalizing constant. Due to numerical constraints, the normalizing constant $Z$ is often inaccessible, and we work under the standard setting in which only the score function $\nabla \log \rho$ is available to the user. This will be the standing assumption throughout the paper. Our starting point is the celebrated Stein Variational Gradient Descent (SVGD), introduced by Liu and Wang in [17] and summarized in Algorithm 1, and further interpreted in the framework of gradient flows in [15], which is the perspective adopted in this paper. Roughly speaking, the method consists of constructing a Wasserstein gradient flow uniquely determined by the score function, whose asymptotic limit is heuristically close to $\pi$. Convergence guarantees for this method, in the specific setting where the initial condition for the gradient flow is empirical, have been addressed in [22], [18], and [2], although the particular frameworks in which these guarantees are phrased are a subtle topic that the reader should take into consideration. The practical implementation of the method has been quite successful, with applications ranging over reinforcement learning, amortized inference, discrete latent variable models, graphical models, and Bayesian optimization (see [8, 12, 19, 7, 14, 24, 9]). Despite the method’s success and its effectiveness across a wide range of applications, it often struggles when the target distribution exhibits strongly hidden or isolated modes.
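For concreteness, the Liu–Wang SVGD update that serves as our baseline can be sketched in one dimension as follows. The RBF kernel, bandwidth, step size, and standard-Gaussian target below are illustrative choices for exposition, not the settings used in our experiments.

```python
import math

def svgd_step(xs, grad_log_rho, h=0.5, eps=0.05):
    """One SVGD update (Liu & Wang): each particle moves along the
    kernel-averaged score plus a repulsive term from the kernel gradient."""
    n = len(xs)
    out = []
    for i in range(n):
        phi = 0.0
        for j in range(n):
            d = xs[i] - xs[j]
            k = math.exp(-d * d / (2.0 * h))          # RBF kernel k(x_j, x_i)
            grad_k = (d / h) * k                      # d/dx_j of k(x_j, x_i)
            phi += k * grad_log_rho(xs[j]) + grad_k   # attraction + repulsion
        out.append(xs[i] + eps * phi / n)
    return out

# Illustrative run: drive 5 particles toward a standard Gaussian (score -x).
particles = [-2.0, -1.0, 0.0, 1.0, 2.0]
for _ in range(300):
    particles = svgd_step(particles, lambda x: -x)
```

The repulsive term is what keeps the particles spread out; without it, all particles would collapse onto the nearest mode, which is precisely why isolated modes that no particle starts near remain undiscovered.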
To address this limitation, we propose a modification that incorporates a random branching mechanism to enhance the algorithm’s exploratory capabilities. One way to describe particle methods in variational inference is to imagine a collection of projectiles moving through space according to an optimization rule, with their collective behavior forming an empirical measure. Building on this metaphor, our approach replaces these “plain projectiles” with “fireworks”: particles that still follow an optimization rule in a piecewise deterministic manner, but now randomly generate new descendants at carefully chosen times, scattering their positions around their parent. This mechanism allows the algorithm to explore the space more effectively and uncover hidden modes. The introduction of this controlled randomness draws inspiration from branching particle systems. As discussed in detail later, the branched SVGD (BSVGD) method we present systematically improves upon the approximation produced by SVGD while operating within the same computational time. We test the effectiveness of our approach using a numerical approximation of the Wasserstein distance. While this paper focuses on practical implementation and numerical results, the analysis of convergence rates remains an important open challenge that we plan to address in future work. Before delving further into the details of the method, we break down what we conceive as the most fundamental building blocks. The aim is to emphasize the role of each component rather than its definition, which we hope will make it easier to adapt or improve the method in the future to achieve better performance. In simple terms, the algorithm combines two key mechanisms: (i) a deterministic refinement applied to an initial measure, and (ii) a random perturbation of the refined measure, introduced via a branching particle system.
Although these two components might appear conceptually opposed, they are integrated in the algorithm through an inductive alternation of steps, with the deterministic refinement consistently playing the asymptotically dominant role. The rest of the paper is organized as follows. Section 2 introduces the notation and fundamental concepts used throughout. Section 3 presents our main contribution, the BSVGD algorithm, and establishes its theoretical guarantees. Lastly, Section 4 focuses on the numerical implementation and performance evaluation of BSVGD in two case studies: mixtures of Gaussian distributions and mixtures of banana-shaped distributions.
# 2. Preliminaries
In this section, we introduce the mathematical concepts that will be used throughout the paper, including the Wasserstein space, Wasserstein gradient flows, and Stein variational gradient descent. 2.1. Notation. Throughout the paper, we denote by ${ \mathcal { P } } ( X )$ the set of probability measures on a measurable space $( X , \sigma _ { X } )$. In the case where $X$ is a normed space, we denote by $\mathcal { P } _ { p } ( X )$ the set of probability measures for which the mapping $x \mapsto | x | ^ { p }$ is integrable. Given another measurable space $( Y , \sigma _ { Y } )$ and a measurable function $T : ( X , \sigma _ { X } ) \to ( Y , \sigma _ { Y } )$, the push-forward of a measure $\mu \in { \mathcal { P } } ( X )$ under $T$ is denoted by $T _ { \# } \mu$. This measure is the unique element of $\mathcal { P } ( Y )$ satisfying $$ \int _ { Y } f ( y ) \, T _ { \# } \mu ( \mathrm { d } y ) = \int _ { X } f ( T ( x ) ) \, \mu ( \mathrm { d } x ) , $$ for any measurable function $f : ( Y , \sigma _ { Y } ) \to ( \mathbb { R } , \mathcal { B } ( \mathbb { R } ) )$. We denote by ${ \mathcal { P } } _ { \mathrm { A C } } ( \mathbb { R } ^ { d } )$ the set of absolutely continuous probability measures on $\mathbb { R } ^ { d }$.
For $\mu \in \mathcal { P } _ { \mathrm { A C } } ( \mathbb { R } ^ { d } )$, its density function is denoted by $f _ { \mu }$. 2.2. The Wasserstein Space and Gradient Flows. It is well known in optimal transport theory that the space ${ \mathcal { P } } _ { 2 } ( \mathbb { R } ^ { d } )$ is metrizable and carries a differentiable manifold-like structure (see [21] and [4]). SVGD builds upon these properties, putting particular emphasis on the notion of a tangent space to ${ \mathcal { P } } _ { 2 } ( \mathbb { R } ^ { d } )$. We begin with the definition of the Wasserstein distance: for any $\mu , \nu \in \mathcal { P } _ { 2 } ( \mathbb { R } ^ { d } )$, $$ d _ { W } ( \mu , \nu ) : = \left( \inf _ { \pi \in \Pi ( \mu , \nu ) } \int _ { \mathbb { R } ^ { 2 d } } | x - y | ^ { 2 } \, \pi ( \mathrm { d } x , \mathrm { d } y ) \right) ^ { 1 / 2 } , $$ where $\Pi ( \mu , \nu )$ denotes the set of transport plans between $\mu$ and $\nu$; namely, the set of probability measures on $\mathbb { R } ^ { 2 d }$ with marginals $\mu$ and $\nu$, respectively. The mapping $d _ { W }$ is known as the 2-Wasserstein distance, and it can be shown that $\mathcal { W } _ { 2 } : = ( \mathcal { P } _ { 2 } ( \mathbb { R } ^ { d } ) , d _ { W } )$ is in fact a complete metric space; see [1, Proposition 7.1.5]. Furthermore, it is known that $\mathcal { W } _ { 2 }$ possesses a differentiable manifold-type structure. One can implement a formal differential calculus on the Wasserstein space, known in the literature as Otto calculus, which can be used to generalize the notion of gradient flows to $\mathcal { W } _ { 2 }$. The ideas from this theory can be used to deduce, in a reasonably simple manner, most of the results presented in this section. However, for the sake of brevity, we omit its presentation and refer the reader to Chapters II.15–II.23 of [23] for a full review of the material.
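Since our experiments later quantify approximation quality through a numerical approximation of this distance, it is worth recalling that for equal-weight empirical measures on the real line the infimum above is attained by the monotone coupling, i.e. by matching sorted samples. A one-dimensional sketch (the experiments themselves are multivariate):

```python
def wasserstein2_1d(xs, ys):
    """2-Wasserstein distance between two equal-size, equal-weight empirical
    measures on R: the optimal transport plan matches sorted samples."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return (sum((a - b) ** 2 for a, b in zip(xs, ys)) / len(xs)) ** 0.5
```

For instance, shifting every atom of one sample by a constant $c$ changes the distance by exactly $|c|$, and identical samples are at distance zero.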
Given an open interval $J$ containing zero, we define the tangent plane at a given measure $\nu \in \mathcal { W } _ { 2 }$ as $$ \mathcal { T } _ { \nu } \mathcal { W } : = \overline { \left\{ v _ { 0 } \, : \, \{ ( v _ { t } , \mu _ { t } ) \} _ { t \in J } \ \text{satisfy (2.1) with} \ \mu _ { 0 } = \nu \right\} } ^ { L ^ { 2 } ( \nu ) } , $$ where (2.1) corresponds to the continuity equation $$ \partial _ { t } f _ { \mu _ { t } } + ( - \nabla ) ^ { * } ( f _ { \mu _ { t } } v _ { t } ) = 0 , $$ with $( - \nabla ) ^ { * }$ referring to the divergence operator (that is, the adjoint of $- \nabla$ with respect to the inner product associated with the Lebesgue measure), and $\{ v _ { t } : \mathbb { R } ^ { d } \to \mathbb { R } ^ { d } \} _ { t \in J }$ the vector field associated with an absolutely continuous curve $\{ \mu _ { t } \} _ { t \in J }$ satisfying $\mu _ { 0 } = \nu$. Let $\mathcal { T } \mathcal { W }$ be the disjoint union of the $\mathcal { T } _ { \nu } \mathcal { W }$ with $\nu$ ranging over $\mathcal { W } _ { 2 }$. We refer to any mapping $F : \mathcal { W } _ { 2 } \to \mathcal { T } \mathcal { W }$ satisfying $$ F [ \nu ] \in \mathcal { T } _ { \nu } \mathcal { W } , \qquad \forall \nu \in \mathcal { W } _ { 2 } , $$ as a vector field over $\mathcal { W } _ { 2 }$. This then allows us to define a differential calculus on $\mathcal { W } _ { 2 }$: we say that a curve $\{ \mu _ { t } \} _ { t \in J } \subset \mathcal { P } _ { \mathrm { A C } } ( \mathbb { R } ^ { d } )$, embedded in the Wasserstein space with $J$ an interval around zero, solves the initial value problem $$ \partial _ { t } \mu _ { t } = F [ \mu _ { t } ] , \qquad \mu _ { 0 } = \nu , $$ if the following continuity equation holds: $$ \partial _ { t } f _ { \mu _ { t } } + ( - \nabla ) ^ { * } ( f _ { \mu _ { t } } F [ \mu _ { t } ] ) = 0 , \qquad \mu _ { 0 } = \nu , $$ where the vector field $F : \mathcal { W } _ { 2 } \to \mathcal { T } \mathcal { W }$ and $\nu \in \mathcal { W } _ { 2 }$ are given.
In this case, we shall also refer to the curve $\{ \mu _ { t } \}$ as a flow on $\mathcal { W } _ { 2 }$, or simply as a flow (see [1, Chapters 8 and 11]). 2.3. Weak Formulation and Kernelization. An important detail to emphasize is that the formulation presented so far applies only to absolutely continuous curves, which is incompatible with the finite particle methods introduced earlier. To address this limitation, we adopt a weaker notion of gradient flow, defined via the action of the underlying measures on test functions: we say that the curve $\{ \mu _ { t } \} _ { t \in J }$ solves the system (2.2) in the weak sense if, for every smooth and compactly supported function $\varphi : \mathbb { R } ^ { d } \to \mathbb { R }$, the following identity holds: $$ \partial _ { t } \langle \mu _ { t } , \varphi \rangle - \langle \mu _ { t } , \nabla \varphi \cdot F [ \mu _ { t } ] \rangle = 0 , $$ where $\langle \cdot , \cdot \rangle$ denotes the canonical pairing between measures and test functions. For technical details on this formulation, see [1, Section 8.3]. This weak form is particularly well suited to the kernelization approach introduced next, which leads to the formulation of the Stein variational gradient flow. Let $V : \mathbb { R } ^ { d } \to \mathbb { R }$ be a smooth, symmetric function integrating to one, centered around the target distribution $\pi$. Define the kernel $K : \mathbb { R } ^ { d } \times \mathbb { R } ^ { d } \to \mathbb { R }$ by $$ K ( x , y ) : = V ( x - y ) , \qquad \forall x , y \in \mathbb { R } ^ { d } , $$ which in turn induces a family of kernel operators $\{ K _ { \nu } \} _ { \nu \in \mathcal { W } _ { 2 } ( \mathbb { R } ^ { d } ) }$ acting on functions via $$ K _ { \nu } f ( x ) : = \int _ { \mathbb { R } ^ { d } } K ( x , y ) f ( y ) \, \nu ( \mathrm { d } y ) . 
$$ The kernelized flow $K _ { \mu } F [ \mu ]$ is the one associated to the equation $$ \partial _ { t } f _ { \mu _ { t } } + ( - \nabla ) ^ { * } [ f _ { \mu _ { t } } \cdot K _ { \mu _ { t } } F [ \mu _ { t } ] ] = 0 , \qquad \mu _ { 0 } = \nu , $$ with corresponding weak formulation $$ \partial _ { t } \langle \mu _ { t } , \varphi \rangle - \langle \mu _ { t } , \nabla \varphi \cdot K _ { \mu _ { t } } F [ \mu _ { t } ] \rangle = 0 , $$ for every smooth test function $\varphi$. For background on this kernelization framework and its analytical implications, see [4, Chapter 5]. An advantage of the previous regularization argument can be seen in the case where empirical measures are considered. This perspective will be explored in more detail in the next section, when discussing the flow associated to Kullback-Leibler divergence minimization. 2.4. Minimizers, Kullback-Leibler divergence and SVGD. As in the classical case, an important class of curves consists of those constructed from an objective function $\mathcal { V } : \mathcal { P } _ { 2 } ( \mathbb { R } ^ { d } ) \to \mathbb { R }$ that a given user aims to progressively minimize. Taking inspiration from vector calculus, the natural candidates for this kind of flow are vector fields acting as some sort of gradient of $\mathcal { V }$. More precisely, we say a vector field $\mathbb { W } \mathcal { V }$ over $\mathcal { W } _ { 2 }$ is a Wasserstein gradient of $\mathcal { V }$ if it satisfies the following “chain rule inspired” equation $$ \frac { \mathrm { d } } { \mathrm { d } t } \mathcal { V } [ \mu _ { t } ] \big | _ { t = 0 } = \int _ { \mathbb { R } ^ { d } } \mathbb { W } \mathcal { V } [ \vartheta ] ( x ) \cdot v _ { 0 } ( x ) \, \vartheta ( \mathrm { d } x ) , $$ for every $\{ \mu _ { t } \} _ { t \ge 0 }$ satisfying the continuity equation (2.1) with $\mu _ { 0 } = \vartheta$ for some vector field $\{ v _ { t } \} _ { t \in J }$; see [4, Section 5].
Among the different types of objective functions $\mathcal { V }$, we are mostly interested in the case where $\mathcal { V }$ is the Kullback-Leibler (KL) divergence: $$ \mathrm { K L } ( \mu \| \nu ) : = \int _ { \mathbb { R } ^ { d } } \log \left( \frac { f _ { \mu } ( x ) } { f _ { \nu } ( x ) } \right) \mu ( \mathrm { d } x ) , $$ for $\mu , \nu \in \mathcal { P } _ { \mathrm { A C } } ( \mathbb { R } ^ { d } )$ (see [5, Chapter 2] for a summary of the properties of the KL divergence). The groundbreaking work by Jordan, Kinderlehrer and Otto from [13], formulated in the notation utilized in this paper, establishes that the flow associated to the Wasserstein gradient of the mapping $\mu \mapsto \operatorname { K L } ( \mu \| \nu )$, under suitable assumptions on the initial condition, has a density satisfying the Fokker-Planck equation $$ \partial _ { t } f _ { \mu _ { t } } = \Delta f _ { \mu _ { t } } - ( - \nabla ) ^ { * } ( f _ { \mu _ { t } } \nabla \log ( f _ { \nu } ) ) , $$ which, in its weak formulation, yields the evolution $$ \partial _ { t } \langle \mu _ { t } , \varphi \rangle = - \langle \mu _ { t } , ( \nabla \log ( f _ { \mu _ { t } } ) - \nabla \log ( f _ { \nu } ) ) \cdot \nabla \varphi \rangle . $$ This gradient flow naturally raises the question of whether it is possible to numerically implement the evolution $\mu _ { t }$ as an interpolation curve of measures, in a way that provides asymptotic access to the limiting distribution $\mu _ { \infty } = \pi$ through a numerically tractable procedure. Although this is not straightforward within the framework of particle systems, we can make an adjustment that allows us to apply the methodology by means of the previously introduced kernelization procedure.
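As a point of contrast with the kernelized particle approach developed next: the Fokker-Planck equation above is also the law of the Langevin diffusion $dX_t = \nabla \log f_{\nu}(X_t)\,dt + \sqrt{2}\,dB_t$, which can be simulated directly whenever the score is available. A minimal Euler-Maruyama sketch, with step size and target chosen purely for illustration:

```python
import math
import random

def langevin_chain(grad_log_target, x0=0.0, eps=0.01, n_steps=2000, seed=0):
    """Euler-Maruyama discretization of dX = grad log f dt + sqrt(2) dB;
    the law of X_t solves the Fokker-Planck equation above."""
    rng = random.Random(seed)
    x = x0
    path = []
    for _ in range(n_steps):
        x += eps * grad_log_target(x) + math.sqrt(2.0 * eps) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Illustrative run: the chain targets a standard Gaussian (score -x).
samples = langevin_chain(lambda x: -x)
```

Unlike the deterministic interacting-particle flow of the next paragraphs, this is a single stochastic trajectory whose time marginals, not an ensemble of particles, approximate the target.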
More specifically, for the functional $\psi[\nu] := \mathrm{KL}(\nu\|\pi)$, the Wasserstein gradient of $\psi$ is, as shown in [4, Examples 5.11 and 5.12], given by $$ \nabla\psi[\nu] = \nabla\log(f_\nu) - \nabla\log(f_\pi). $$ In contrast, the Stein variational gradient flow is defined as the flow associated with the kernelized vector field $$ \nu \mapsto K_\nu[\nabla\log(f_\nu)] + K_\nu[\nabla V], $$ which leads to the following gradient flow, already formulated in its weak form: $$ \partial_t\langle\mu_t,\varphi\rangle = -\langle \mu_t, K_{\mu_t}[\nabla\log(f_{\mu_t}) - \nabla\log(f_\pi)]\cdot\nabla\varphi\rangle. $$ This flow admits a version applicable to an initial distribution of the form $$ \mu_0 = \frac{1}{\ell}\sum_{j=1}^{\ell}\delta_{x_j(0)}, $$ for some $x_1(0),\ldots,x_\ell(0) \in \mathbb{R}^d$. This version can be obtained by considering the limit as $\varepsilon \to 0$, where the measure $\mu_t$ is replaced by its mollification $\mu_t * \gamma_\varepsilon$, with $\gamma_\varepsilon$ denoting the centered Gaussian kernel with variance $\varepsilon$. Through elementary computations, one can show that the system of differential equations $$ \partial_t x_k(t) = K_{\mu_t}[\nabla\log\rho](x_k) + \frac{1}{\ell}\sum_{j=1}^{\ell}\nabla V(x_k - x_j), $$ where $\rho \propto f_\pi$, is such that the empirical measure $$ \mu_t = \frac{1}{\ell}\sum_{j=1}^{\ell}\delta_{x_j(t)}, $$ solves the system (2.6). The measure constructed in this way will be referred to as the Stein variational gradient flow.
For a given element $\mathbf{x} \in \mathbb{R}^{d\ell}$, we can take the limit as $t$ goes to infinity in the solution to the system (2.7) with the initial condition $(x_1(0),\ldots,x_\ell(0)) = \mathbf{x}$. The resulting vector will be denoted by $S_\ell(\mathbf{x})$. The SVGD algorithm can be implemented using the pseudocode presented in Algorithm 1. As expected, the effectiveness of this procedure depends significantly on the choice of initial condition. This aspect is leveraged in the present manuscript by introducing modifications to the set of particles forming the empirical measure, through a branching mechanism that will be detailed in the next section. The fact that the system of differential equations (2.7) depends solely on the score function $\nabla\log\rho$ suggests a natural procedure for obtaining a proxy for the measure $\pi$: we can initialize the system (2.7), use the score function to evolve it toward its asymptotic limit, and then adopt the resulting empirical measure as an approximation of $\pi$. In this spirit, the empirical distribution associated to $\mathbf{x} = (x_1,\ldots,x_\ell)$ can be regarded as "improved" if its components are replaced by the vector $S_\ell(\mathbf{x})$. Owing to this intuition, we will henceforth refer to the mapping $$ \mathbf{x} \longmapsto S_\ell(\mathbf{x}) $$ as the improvement operator associated to the gradient flow (2.7).
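For concreteness, the update in lines 6-7 of Algorithm 1 can be sketched in a few lines of Python. This is a minimal sketch under illustrative assumptions (a Gaussian kernel $k(x,y)=e^{-\|x-y\|^2/r}$, a vectorized particle array, and a toy target); it is not the exact configuration used in the experiments of Section 4:

```python
import numpy as np

def svgd_step(X, score, eps, r=1.0):
    """One SVGD update x_i <- x_i + eps * phi_hat(x_i), as in Algorithm 1.

    X: (l, d) array of particles; score: callable returning grad log rho row-wise;
    the kernel k(x, y) = exp(-||x - y||^2 / r) is an illustrative choice.
    """
    diff = X[:, None, :] - X[None, :, :]           # diff[j, i] = x_j - x_i
    sq = np.sum(diff ** 2, axis=-1)
    K = np.exp(-sq / r)                            # K[j, i] = k(x_j, x_i)
    # grad_{x_j} k(x_j, x_i) = -(2/r) * (x_j - x_i) * k(x_j, x_i)
    gradK = -2.0 / r * diff * K[:, :, None]
    S = score(X)                                   # rows grad log rho(x_j)
    # phi_hat(x_i) = (1/l) sum_j [k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i)]
    phi = (K.T @ S + gradK.sum(axis=0)) / len(X)
    return X + eps * phi
```

Iterating `svgd_step` with a decreasing schedule of step sizes $\epsilon_d$ reproduces the while-loop of Algorithm 1 (the stopping rule on the mean displacement is omitted here for brevity).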
Require: Score function $\nabla\log\rho(x)$ with support in $\mathbb{R}^d$; initial particles $\{x_i^0\}_{i=1}^{\ell}$; max iterations $M$; step sizes $\epsilon_d$ for $d = 1,\ldots,M$; differentiable kernel $k$; convergence threshold $\eta$
Ensure: Set of particles $\{x_i\}_{i=1}^{\ell}$ approximating the target distribution
1: $d \gets 0$
2: $h \gets 2\eta$
3: Initialize $x_i^0 = x_i$ for $i = 1,\ldots,\ell$
4: while $d < M$ and $h > \eta$ do
5: for $i = 1$ to $\ell$ do
6: Compute $\hat{\phi}(x_i^d) = \frac{1}{\ell}\sum_{j=1}^{\ell}\Big[k(x_j^d, x_i^d)\nabla_{x_j^d}\log\rho(x_j^d) + \nabla_{x_j^d}k(x_j^d, x_i^d)\Big]$
7: $x_i^{d+1} \gets x_i^d + \epsilon_d\cdot\hat{\phi}(x_i^d)$
8: end for
9: $h \gets \frac{1}{\ell}\sum_{i=1}^{\ell}\|x_i^{d+1} - x_i^d\|$
10: $d \gets d + 1$
11: end while
12: return $\{x_i^d\}_{i=1}^{\ell}$
2.5. Branching Mechanism. We proceed to introduce a branching mechanism that will interact with the Stein variational gradient flow by modifying the initial conditions. Let $\mathcal{C}$ denote the set of labels or colors given by $\mathcal{C} = \{E, O, S\}$, where $E$ alludes to the word explorer, $O$ to optimizer, and $S$ to spine. The particles of interest will be pairs $(x, c) \in \mathbb{R}^d\times\mathcal{C}$. This product space will be denoted by $\mathcal{U}$.
We are interested in a state space consisting of collections of such elements, so we define the state space $\mathcal{E}$ as the set of elements in $\cup_{\ell\geq1}\mathcal{U}^\ell$ that have exactly one component colored "$S$". The index $\ell$, describing the number of copies of $\mathcal{U}$ to be considered, will be called the level. Consider a triplet of $\mathbb{N}_0$-supported distributions $q_E, q_O, q_S$ with finite moments of arbitrarily large order. Let the initial configuration be a vector $\mathbf{u} = (u_1,\ldots,u_\ell) \in \mathcal{U}^\ell$, where each particle is of the form $u_i = (x_i, c_i) \in \mathbb{R}^d\times\mathcal{C}$, and observe that we can always identify $\mathbf{u}$ with the pair $(\mathbf{x}, \mathbf{c})$, where $\mathbf{x} = (x_1,\ldots,x_\ell)$ and $\mathbf{c} = (c_1,\ldots,c_\ell)$. Finally, consider a fixed Markov kernel $\{P(\mathrm{d}y|x);\ x\in\mathbb{R}^d\}$ over $\mathbb{R}^d$. In the branching procedure, each particle $u_i = (x_i, c_i)$ branches independently of the rest of the particles, according to the following rules: (i) Each particle gives birth to a random number of offspring according to its color; i.e., if $c_i = E$ (resp. $c_i = O$ and $c_i = S$), then the number of particles it produces is distributed as $q_E$ (resp. $q_O$ and $q_S$). (ii) The number of offspring generated by a "spine" is positive. The number of offspring generated by an "optimizer" is zero. (iii) The new particles generated by $x_i$ are colored as "explorer", with their positions determined by $P(\cdot|x_i)$. (iv) The old particles remain in their current position and are recolored as "optimizer".
(v) After all offspring have been generated, one particle is selected uniformly at random from among the "explorers" and "optimizers", and its color is changed to "spine". All other particles retain their color. The resulting collection of particles (positions and updated colors) defines the outcome of the Markov transition. This defines a Markov kernel $Q$ from $\mathcal{E}$ to itself. # 3. Branched Stein Variational Gradient Descent With the preliminaries established, we are now ready to present the main contribution of our paper: the anticipated branched version of the Stein Variational Gradient Descent. 3.1. The algorithm. The construction of the BSVGD is based on defining an appropriate $\mathcal{E}$-valued Markov chain $\{\mathbf{U}_n\}_{n\ge0}$, whose transitions incorporate both the improvement operator and the branching mechanism described earlier. The operator $S_\ell$, introduced at the end of Section 2.4, naturally acts on elements $\mathbf{u}\in\mathcal{U}^\ell$ by taking the pair $(\mathbf{x},\mathbf{c})$ and producing $(S_\ell(\mathbf{x}),\mathbf{c})$, which we associate with $$ S_\ell(\mathbf{u}) := \big((S_\ell(\mathbf{x})_1, c_1),\ldots,(S_\ell(\mathbf{x})_\ell, c_\ell)\big). $$ For notational simplicity, we will omit the dependence on $\ell$ and write this operation as $\mathcal{S}(\mathbf{u})$ throughout the section. We also fix a triple of distributions $q_E, q_O, q_S$ as in Section 2.5, and denote by $Q$ the corresponding Markov kernel on $\mathcal{E}$.
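One application of the branching kernel $Q$ (rules (i)-(v) of Section 2.5) can be sketched in Python as follows. The offspring laws `q_E`, `q_S` and the transition kernel `P_sample` passed in are illustrative placeholders, not the specific choices of the experiments:

```python
import numpy as np

# Colors: 'E' explorer, 'O' optimizer, 'S' spine.
def branch_step(X, colors, q_E, q_S, P_sample, rng):
    """One application of the branching kernel Q.

    q_E, q_S: callables rng -> offspring count (q_S must return a positive int);
    P_sample: callable (x, rng) -> offspring position drawn from P(. | x).
    """
    new_X, new_c = [], []
    for x, c in zip(X, colors):
        # (i)-(ii): offspring count depends on the color; optimizers produce none.
        if c == 'E':
            n_off = q_E(rng)
        elif c == 'S':
            n_off = q_S(rng)
        else:
            n_off = 0
        # (iv): the parent keeps its position and is recolored "optimizer".
        new_X.append(np.asarray(x)); new_c.append('O')
        # (iii): offspring are "explorers" placed via the kernel P(. | x).
        for _ in range(n_off):
            new_X.append(P_sample(x, rng)); new_c.append('E')
    # (v): one particle, chosen uniformly, becomes the new spine.
    k = int(rng.integers(len(new_X)))
    new_c[k] = 'S'
    return np.array(new_X), new_c
```

Since the spine always produces at least one child while only parents are recolored, the output again has exactly one "$S$" component, i.e. it stays in the state space $\mathcal{E}$.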
To construct the Markov chain, we begin with an initial element $\mathbf{u}_0\in\mathcal{E}$ of level $\ell_0$, and set $\mathbf{U}_0 := \mathcal{S}(\mathbf{u}_0)$; the transitions of the chain are then governed by the pushforward $S_\#Q$. To be more precise, let $\mathbf{U}_n\in\mathcal{U}^{\ell_n}$ be the Markov chain at the $n$-th step, identified with the pair $(\mathbf{X}_n,\mathbf{c}_n)$, and let $\mu_n$ denote the empirical distribution associated to the vector $\mathbf{X}_n$. At each step, given the current state $\mathbf{U}_n$, we draw an independent sample $\mathbf{u}_{n+1}$ from $Q(\mathbf{U}_n,\cdot)$, viewed as a random element in $\mathcal{E}$, and identify it with the pair $(\mathbf{x}_{n+1},\mathbf{c}_{n+1})$. We apply the improvement operator and set $\mathbf{U}_{n+1} := \mathcal{S}(\mathbf{u}_{n+1})$; we then compute the new empirical distribution $\mu_{n+1}$ associated to the new vector of positions $\mathbf{X}_{n+1} = S_\ell(\mathbf{x}_{n+1})$. The resulting sequence of measures $\{\mu_n\}_{n\ge0}$ will be referred to as the outcome of the BSVGD. The BSVGD algorithm can be implemented using the pseudocode presented in Algorithm 2. Before proceeding, observe that Algorithm 2 introduces an additional function $\eta$. Heuristically, this function modulates the precision of the SVGD step at line 5 according to the sample size. This will be discussed in further detail in a later section. 3.2. Convergence results. This section aims to discuss features of the output of the algorithm that could potentially guarantee convergence of the outcome of the BSVGD $\mu = \{\mu_n\}_{n\geq0}$ towards the target distribution $\pi$.
Require: Score function $\nabla\log\rho(x)$ with support in $\mathbb{R}^d$; initial particles $\{x_i^0\}_{i=1}^{\ell_0}$; initial labels $\{c_i^0\}_{i=1}^{\ell_0}$; max iterations $M$; step sizes $\epsilon_d$ for $d=1,\ldots,M$; differentiable kernel $k$; convergence function $\eta(\ell)$; maximum number of particles $L$; distributions $q_E$, $q_O$, $q_S$; conditional distribution $P(\cdot|x)$
Ensure: Set of particles $\{x_i\}_{i=1}^{\ell}$ approximating the target distribution
1: $X \gets \{x_i^0\}_{i=1}^{\ell_0}$
2: $C \gets \{c_i^0\}_{i=1}^{\ell_0}$
3: $\ell \gets \# X$ ▷ $\#$ denotes the cardinality
4: while $\ell \leq L$ do
5: Update $X$ using Algorithm 1 with parameters $\nabla\log\rho(x), X, \epsilon_d, \eta(\ell)$
6: for $i = 1$ to $\ell$ do
7: if $c_i = E$ then
8: Sample $\gamma_i\sim q_E$
9: else if $c_i = S$ then
10: Sample $\gamma_i\sim q_S$
11: else
12: $\gamma_i \gets 0$ ▷ optimizers produce no offspring
13: end if
14: $c_i \gets O$
15: if $\gamma_i > 0$ then
16: for $j = 1$ to $\gamma_i$ do
17: $x_{j+\ell+\sum_{k=1}^{i-1}\gamma_k} \sim P(\cdot|x_i)$
18: $c_{j+\ell+\sum_{k=1}^{i-1}\gamma_k} \gets E$
19: end for
20: end if
21: end for
22: Sample $k$ uniformly from $\{1,2,\ldots,\ell+\sum_{i=1}^{\ell}\gamma_i\}$
23: $c_k \gets S$
24: $X \gets \{x_i\}_{i=1}^{\ell+\sum_{i=1}^{\ell}\gamma_i}$
25: $C \gets \{c_i\}_{i=1}^{\ell+\sum_{i=1}^{\ell}\gamma_i}$
26: $\ell \gets \# X$
27: end while
28: return $X$
The reader should be warned from the start that the type of theoretical result one might most naturally hope for, namely a mild and verifiable condition on $V$ and $Q$ that ensures convergence of the algorithm, is far beyond the scope of this work. This is not merely a limitation of our specific framework, but reflects a broader difficulty in the literature: even for the standard, unbranched version of SVGD, establishing convergence under minimal assumptions remains a formidable challenge. Indeed, while there exists a considerable number of results proving convergence of SVGD (some even offering convergence rates), none of them is available without imposing some form of non-trivial assumption on the initial condition. The reader can easily verify that a condition of this type is indeed needed by considering a simple degenerate case: if one initializes the plain SVGD algorithm with a large number of particles, but all of them located at the same position, the evolution of the system will emulate that of a single particle, thereby producing a final state that fails to reflect the true diversity of the target distribution. Several strategies have been proposed to deal with this issue and still recover meaningful convergence guarantees. All of them, however, rely on some form of structural assumption that is either incompatible with the empirical setting, or difficult to check in practice. Some approaches rely on the assumption that the initial condition is absolutely continuous; others assume that the initialization is drawn from a large random sample with adequate distributional convergence properties; and others assume that the initial measure belongs to a sequence that converges to an absolutely continuous one. The first of these options is clearly ruled out in our case, as the entire framework operates with empirical measures. This leaves us with the other two: either consider a random initialization through random sampling, or a sequence of approximating initial conditions. In this work, we adopt the perspective based on the last approach. Once translated and adapted to the BSVGD framework, it leads to the condition for convergence presented in Theorem 3.1 below.
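The degenerate initialization just described is easy to verify numerically. Below is a small self-contained sketch (the Gaussian kernel and standard Gaussian target are illustrative choices): since the kernel gradient vanishes at zero separation, coincident particles feel no repulsion and move in lockstep, so no diversity is ever created.

```python
import numpy as np

def svgd_step(X, score, eps=0.1, r=1.0):
    # One SVGD update with the Gaussian kernel exp(-||x - y||^2 / r).
    diff = X[:, None, :] - X[None, :, :]
    K = np.exp(-np.sum(diff ** 2, axis=-1) / r)
    gradK = -2.0 / r * diff * K[:, :, None]      # grad_{x_j} k(x_j, x_i)
    phi = (K.T @ score(X) + gradK.sum(axis=0)) / len(X)
    return X + eps * phi

# 100 particles, all at the same point: grad k vanishes at zero separation,
# so the whole cloud evolves exactly like a single particle.
X = np.full((100, 2), 3.0)
score = lambda Z: -Z                             # standard Gaussian target
for _ in range(50):
    X = svgd_step(X, score)
# All particles are still coincident: the final state has no diversity.
assert np.allclose(X, X[0])
```

This is precisely why some non-degeneracy assumption on the initial condition, such as the one in Theorem 3.1 below, cannot be dispensed with.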
The second approach seems quite attractive as well, and we intend to address a perspective of this nature in future research. In the sequel, if $\chi$ is a locally integrable, non-negative function over $\mathbb{R}^d$, we will denote by $\mathrm{AC}_\chi(\mathbb{R}^d)$ the set of elements of $\mathrm{AC}(\mathbb{R}^d)$ whose density is bounded by $\chi$. The distance from an element $\nu\in\mathcal{P}_2(\mathbb{R}^d)$ to the set $\mathrm{AC}_\chi(\mathbb{R}^d)$ will be denoted by $d_W\big(\nu, \mathrm{AC}_\chi(\mathbb{R}^d)\big)$. Theorem 3.1. Let $\mu_n$ denote the outcome of the BSVGD described in Section 3. Suppose that the moments of order two of $\{\mu_n\}_{n\ge1}$ are uniformly bounded and that there exists a locally integrable $\chi:\mathbb{R}^d\to\mathbb{R}_+$ such that $$ d_W(\mu_n, \mathrm{AC}_\chi(\mathbb{R}^d)) \to 0. \tag{3.1} $$ Then the sequence $\mu_n$ converges weakly to $\pi$. In particular, we can guarantee convergence of the $\mu_n$ under the condition $$ d_W(\mu_n, \mathrm{AC}_y(\mathbb{R}^d)) \to 0, \tag{3.2} $$ where $\mathrm{AC}_y(\mathbb{R}^d)$ denotes the set of elements of $\mathrm{AC}(\mathbb{R}^d)$ with density bounded by some constant $y\in\mathbb{R}_+$. The intuition behind condition (3.2) can be motivated by examining the histogram of $\mu_n$: if the empirical measure displays no atoms for sufficiently large $n$, Theorem 3.1 supports the heuristic that convergence is indeed taking place. This observation, and several other numerical considerations, will be presented in Section 4. Proof of Theorem 3.1.
The boundedness of the moments of order two of the $\mu_n$'s implies that the sequence is tight, and hence sequentially compact; to prove the result, it therefore suffices to show that every subsequence $\{\mu_{n_k}\}_{k\ge1}$ has a further subsequence that converges to $\pi$. To this end, by (3.1) we can choose a sequence of elements $\{\nu_k\}_{k\ge1}$ in $\mathrm{AC}(\mathbb{R}^d)$ with density bounded by $\chi$, satisfying $d_W(\mu_{n_k},\nu_k)\to0$. The uniform boundedness of the moments of order two of the $\mu_n$'s, together with the boundedness of $d_W(\mu_{n_k},\nu_k)$, implies that the $\nu_k$ have uniformly bounded moments of order two, implying the existence of a further subsequence $\nu_{k_m}$ convergent in law. Let $\tau$ denote the weak limit of the $\nu_{k_m}$. Since the $\nu_{k_m}$ have density bounded by $\chi$, by means of Portmanteau's lemma, for every compact $K\subset\mathbb{R}^d$, $$ \tau[K] \leq \limsup_m \nu_{k_m}(K) \leq \int_K \chi(x)\,\mathrm{d}x. $$ By the local integrability of $\chi$, we thus conclude that $\tau$ is absolutely continuous. This observation, combined with the fact that $d_W(\mu_{n_k},\nu_k)\to0$, implies that $\mu_{n_{k_m}}$ converges in law towards the absolutely continuous measure $\tau$. We now apply Theorem 7 in [10] to get that the empirical measure associated to $\mathcal{S}(\mathbf{X}_{n_{k_m}})$ converges weakly to the limit of the Stein variational gradient flow applied to $\tau$. By [15, Theorem 3.3], this limiting probability measure is equal to $\pi$.
Since all the $\mathbf{X}_n$'s are constructed as the asymptotic limit of the system (2.7), they are invariant under the action of $S$, and consequently $\mathcal{S}(\mathbf{X}_{n_k}) = \mathbf{X}_{n_k}$. By the previous analysis, the empirical distribution associated to $\mathcal{S}(\mathbf{X}_{n_{k_m}})$ converges to $\pi$, so this identity implies that $\mu_{n_{k_m}}$ converges weakly to $\pi$. We have hence proved that an arbitrary subsequence of $\mu$ has a further subsequence converging to $\pi$, as required. # 4. Numerical Experiments To highlight the suitability of the BSVGD in multimodal cases, as well as its efficiency compared with the classical SVGD, this section focuses on numerical examples. All the code used to generate the figures presented in this section is publicly available in the repository isaiasmanuel/SVGD on GitHub, and was executed on a 24-inch 2023 iMac with an M3 processor. 4.1. Case studies: Gaussian and Banana-shaped mixtures. Our first example consists of a mixture of 25 Gaussian densities in $\mathbb{R}^2$, each with covariance matrix $5I$, where $I$ denotes the identity matrix. These distributions are arranged so that each of the 25 elements of the Cartesian product $\{0,2,4,6,8\}\times\{0,2,4,6,8\}$ is the mean of a different Gaussian, and the weights of the mixture are $\{\frac{1}{325},\frac{2}{325},\ldots,\frac{25}{325}\}$, assigned in lexicographic order; e.g., the Gaussian with mean $(0,0)$ has weight $\frac{1}{325}$, the one with mean $(0,2)$ has weight $\frac{2}{325}$, and so on.
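Since both algorithms only ever query the score function, this target is fully specified by $\nabla\log\rho$. A short sketch of the mixture and its score (written from the description above; the normalizing constant of each Gaussian is dropped, which does not affect the score):

```python
import numpy as np

# 25 Gaussians with covariance 5*I, means on the grid {0,2,4,6,8}^2,
# weight k/325 for the k-th mean in lexicographic order.
means = np.array([(a, b) for a in (0, 2, 4, 6, 8) for b in (0, 2, 4, 6, 8)], float)
weights = np.arange(1, 26) / 325.0
var = 5.0

def log_rho(x):
    # log of the mixture density at x in R^2, up to an additive constant.
    sq = np.sum((x - means) ** 2, axis=1)
    return np.log(np.sum(weights * np.exp(-sq / (2 * var))))

def score(x):
    # grad log rho(x): posterior-weighted pull toward each mean.
    sq = np.sum((x - means) ** 2, axis=1)
    w = weights * np.exp(-sq / (2 * var))
    w /= w.sum()
    return (w[:, None] * (means - x)).sum(axis=0) / var
```

The closed-form score is the responsibility-weighted average of the single-Gaussian scores $(m_k - x)/\mathrm{var}$, which is the only quantity the SVGD step consumes.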
Visually, the density corresponds to the one shown in Figure 1a, along with the vector field of the corresponding score function. This mixture has already been used in the literature as a way to test multimodal sampling algorithms; see [25]. Our second example follows the idea presented in [3] of using banana-shaped distributions with $t$-tails: initially, the authors observed that the Stein thinning algorithm presented in [16] exhibits spurious minima for mixtures of banana-shaped distributions; to correct this problem, they proposed a variation using a Laplacian correction. In the context of multimodal sampling, mixtures of banana-shaped distributions with $t$-tails have been used to exhibit the performance of algorithms; see for example [20]. This is because this density is more challenging than the classic banana-shaped Gaussian discussed in [11]. Formally, the banana-shaped distribution is defined as follows: let $(x_1, x_2,\ldots,x_d)$ be distributed as a $d$-dimensional $t$-distribution with location $\mathbf{y}$, scale matrix $\Sigma = \mathrm{diag}(100,1,\ldots,1)$ and $r$ degrees of freedom; i.e., $$ f(\mathbf{x}) = \frac{\Gamma[(r+d)/2]}{\Gamma(r/2)\,r^{d/2}\pi^{d/2}|\Sigma|^{1/2}}\left[1 + \frac{1}{r}(\mathbf{x}-\mathbf{y})^T\Sigma^{-1}(\mathbf{x}-\mathbf{y})\right]^{-(r+d)/2}. $$ Then, the banana-shaped distribution with $t$-tails is the distribution of the vector $$ \phi(x_1, x_2, x_3,\ldots,x_d) = (x_1,\ x_2 + bx_1^2 - 100b,\ x_3,\ldots,x_d), $$ where $b > 0$ is a given nonlinearity parameter.
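Because the distribution is a pushforward of a multivariate $t$, exact samples are easy to draw (this is what makes it usable as a ground truth in Section 4.2). A sketch using the standard normal/chi-square representation of the $t$-distribution; the degrees of freedom `df` are left as a free parameter since the text does not fix $r$:

```python
import numpy as np

def sample_banana(n, loc, b, df, rng, d=2):
    """Draw n samples from the banana-shaped distribution with t-tails:
    a d-dim t with location loc, scale diag(100, 1, ..., 1) and df degrees
    of freedom, pushed through phi(x) = (x1, x2 + b*x1^2 - 100*b, x3, ...)."""
    scale_sqrt = np.sqrt(np.concatenate(([100.0], np.ones(d - 1))))
    z = rng.normal(size=(n, d)) * scale_sqrt       # N(0, Sigma) sample
    u = rng.chisquare(df, size=(n, 1))
    x = np.asarray(loc) + z / np.sqrt(u / df)      # multivariate t sample
    x[:, 1] += b * x[:, 0] ** 2 - 100.0 * b        # the banana bend
    return x
```

A mixture target as in the next paragraph is then obtained by first sampling a component index with the mixture weights and then calling `sample_banana` with that component's location and $b$.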
In our example, we use a mixture of 3 banana-shaped random variables, with locations $(0,0)$, $(0,5)$, $(15,15)$, $b$ parameters $0.03$, $0.05$, $0.03$, and weights $0.4$, $0.4$, $0.2$, respectively. The density $\rho$ defined by this example and the vector field corresponding to the score function $\nabla\log\rho$ are presented in Figure 1b. Figure 1. The densities $\rho$ of interest and the vector fields defined by $\nabla\log\rho$: (a) mixture of Gaussian random variables; (b) mixture of banana-shaped random variables with $t$-tails. 4.2. Measuring performance. One of the advantages of working with mixtures of Gaussians and banana-shaped distributions is that we can easily simulate from them. We leverage this property to compare the performance of BSVGD against SVGD. To this end, we use the Wasserstein distance $d_W$, presented in Section 2.2, to compare two empirical distributions (see also [23]). Let $\mu$ and $\nu$ be two empirical measures supported on $\{x_1,\ldots,x_\ell\}$ and $\{y_1,\ldots,y_\ell\}$, respectively. The Wasserstein distance between them is given by $$ d_W(\mu,\nu) = \inf_\sigma\left(\frac{1}{\ell}\sum_{j=1}^{\ell}\|x_j - y_{\sigma(j)}\|^2\right)^{\frac{1}{2}}, \tag{4.1} $$ where the infimum is taken over all index permutations $\sigma$. In our code this permutation was calculated using the function linear_sum_assignment of SciPy. Note that both SVGD and BSVGD generate sequences of empirical measures as the particle positions are updated over time. Our goal is to compare the performance of the two algorithms through these evolving empirical distributions.
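The optimization over permutations above is exactly a linear assignment problem on the matrix of squared distances, so it reduces to a single SciPy call:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def empirical_w2(X, Y):
    """Wasserstein distance (4.1) between two empirical measures with the same
    number of atoms, computed via an optimal assignment."""
    # cost[j, s] = ||x_j - y_s||^2
    cost = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    rows, cols = linear_sum_assignment(cost)      # the minimizing permutation
    return np.sqrt(cost[rows, cols].mean())
```

For $\ell$ atoms the Hungarian-type solver runs in $O(\ell^3)$, which is negligible next to the cost of the SVGD iterations at the sample sizes used here.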
We start with the usual SVGD from Algorithm 1: let $I_S$ be the maximum number of times the vector of positions was updated during the algorithm; then, we define $\{\mu_{iS}\}_{i=1}^{I_S}$ as the sequence such that $\mu_{iS}$ is the empirical measure of the positions after the $i$-th update. Now, using the Wasserstein distance as in equation (4.1), we can compare each $\mu_{iS}$ with another empirical measure $\pi_{iS}$ of the same sample size, defined by sampling independently from the objective $\pi$: $$ \pi_{iS} := \frac{1}{\ell}\sum_{j=1}^{\ell}\delta_{y_{jS}},\ y_{jS}\sim\pi,\ \forall j = 1,\ldots,\ell,\ \forall i = 1,\ldots,I_S. \tag{4.2} $$ However, since each $d_W(\mu_{iS},\pi_{iS})$ is only a point estimate of the real distance, we improve the precision by considering a collection of sequences $\{\pi_{\cdot S}^a\}_{a=1}^A$, such that each $\pi_{\cdot S}^a = \{\pi_{iS}^a\}_{i=1}^{I_S}$ is itself an independent copy of $\{\pi_{iS}\}_{i=1}^{I_S}$. Consequently, we define our estimator as the average over this collection of sequences: $$ W_S(i) := \frac{1}{A}\sum_{a=1}^{A} d_W(\mu_{iS},\pi_{iS}^a), \qquad \forall i = 1,\ldots,I_S. \tag{4.3}
$$ The comparison with the BSVGD follows the same spirit, albeit with a small increase in notational complexity due to the fact that, by construction, the outputs of the BSVGD have an increasing (piecewise constant) sample size: for each $\ell = 1,\ldots,L$, let $I_B^\ell$ be the maximum number of times the vector of positions was updated during the algorithm at the $\ell$-th level; then, by considering a lexicographic ordering, we define $$ \{\mu_{jB};\ j = 1,\ldots,J_B\} = \{\mu_{i\ell B};\ i = 1,\ldots,I_B^\ell,\ \ell = 1,\ldots,L\} $$ such that $\mu_{i\ell B}$ is the empirical measure of the positions after the $i$-th update at the $\ell$-th level. Similarly, we have $$ \{\pi_{jB};\ j = 1,\ldots,J_B\} = \{\pi_{i\ell B};\ i = 1,\ldots,I_B^\ell,\ \ell = 1,\ldots,L\}, $$ where each $\pi_{i\ell B}$ is defined as in (4.2) with the corresponding level $\ell$, and $\{\pi_{\cdot B}^a\}_{a=1}^A$ is a collection of independent copies of $\pi_{\cdot B} = \{\pi_{jB}\}_{j=1}^{J_B}$. The sequence of estimators $W_B(j)$, $j = 1,\ldots,J_B$, is defined analogously to (4.3). 4.3. Implementation and results. For our examples, we run Algorithm 2 with $\eta(\ell) = \frac{1}{\ell}$ in order to avoid early stops when the sample size increases; i.e., the convergence criterion becomes more restrictive as the number of particles grows. The flow of the ordinary differential equation (2.7) is approximated by means of an Euler scheme whose step size is set as $$ \epsilon_d = e_M - \frac{e_M - e_m}{1 + e^{-0.01(d - M/2)}}, $$ where $e_M$ and $e_m$ are the starting and ending step sizes: $1$ and $0.01$ in the mixture-of-Gaussians example, and $10$ and $1$ in the banana case. This choice allows large moves at the beginning of the SVGD step, with each successive iteration producing finer movements; this is particularly useful when an offspring lies far from the regions of high density, since the particle can then move quickly toward them. Other choices of $\epsilon_d$ can improve the computational time; e.g., in [17] the use of the AdaGrad algorithm introduced in [6] is proposed. We omit a deeper discussion of this hyperparameter tuning in our comparison, since both algorithms use the same schedule. To define the position of the offspring (the draw from $P(\cdot|x_i)$ in Algorithm 2), we use a bivariate Gaussian distribution with mean $x_i$ (the parent) and standard deviation $2$ and $5$ for the first and second example, respectively. This is a first proposal for locating the offspring; nevertheless, adaptive proposals should be explored, as well as mixture distributions producing both local and distant offspring, which would allow the space to be explored in ways better suited to the random variable of interest. The starting points in both examples were drawn from a bivariate Gaussian distribution with mean $0$ and variance $1$. In the SVGD case the sample size is $\ell = 500$, while for the BSVGD we take $\ell_0 = 1$ and $c_0 = \{S\}$; that is, we start with a single particle, which is guaranteed to have offspring. In both algorithms we used the Gaussian kernel $$ K_r(x,y) := \pi^{-d/2} e^{-\frac{(x-y)^T(x-y)}{r}}, $$ with $r = 1$, and set the parameter $A = 10$ for the performance estimators $W_S$ and $W_B$.
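The sigmoid step-size schedule is easy to reproduce; a small sketch, with the values $e_M = 1$, $e_m = 0.01$ and $M = 1000$ used only for illustration:

```python
import numpy as np

def step_sizes(M, e_M, e_m):
    """Sigmoid schedule: starts near e_M, decays smoothly to e_m at d = M."""
    d = np.arange(1, M + 1)
    return e_M - (e_M - e_m) / (1.0 + np.exp(-0.01 * (d - M / 2)))
```

The inflection point sits at $d = M/2$, so roughly the first half of the iterations take coarse steps (useful for offspring born far from high-density regions) and the second half refines the configuration.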
Regarding the branching mechanism, we set $q_O = 0$ with probability one (by definition), $q_S$ a uniform distribution over $\{1,2,3\}$, and $$ q_E(x) = \begin{cases} 0.5 & \text{if } x = 0, \\ 0.2 & \text{if } x = 1, \\ 0.3 & \text{if } x = 2. \end{cases} $$ The reason behind this configuration is to have a subcritical branching process that allows the sample size to increase slowly. In Figure 2 we present the kernel density estimators for the points obtained using the SVGD and the BSVGD. It is worth noting that even though the sample obtained by the BSVGD exhibits an important improvement with respect to the one obtained using SVGD, the SVGD itself has also shown capabilities for detecting multimodality, as discussed in [17]. Additionally, the sampling problems that may arise from an early stop of the SVGD could also be mitigated with more iterations or a smaller $\epsilon$ in the stopping criterion. Moreover, the BSVGD is computationally more time consuming, mainly because of its repeated use of SVGD; hence, in order to compare both algorithms properly it is necessary not only to look at the final samples, but also to analyze the performance of each one over time. Figure 2. Kernel density estimators using the points obtained from SVGD (top) and BSVGD (bottom); the mixture of Gaussians on the left, the mixture of banana-shaped distributions on the right. In Figure 3 we present $(t_i, W_S(i))$, where $t_i$ is the time required to compute the $i$-th update of the particles, and analogously $(t_j, W_B(j))$; we also present the sample size of the BSVGD over time. Observe that the BSVGD is computationally more time consuming than the classical SVGD.
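The subcriticality of this configuration can be checked with a line of arithmetic. Treating the explorer population as a branching process with immigration from the single spine, and ignoring the (mild) effect of the spine reselection in rule (v), its expected size stabilizes at a small constant, which is why the sample size grows slowly:

```python
import numpy as np

# Offspring laws of Section 4.3.
q_E = {0: 0.5, 1: 0.2, 2: 0.3}          # explorers
q_S_mean = np.mean([1, 2, 3])           # spine: uniform on {1, 2, 3}
mean_E = sum(k * p for k, p in q_E.items())

# mean offspring of an explorer is 0.8 < 1: the explorer sub-population is
# subcritical, and growth is driven only by the spine's "immigration".
# Its heuristic stationary mean solves n = mean_E * n + q_S_mean, i.e. n = 10,
# so the total sample size grows roughly linearly, about 10 particles per cycle.
stationary = q_S_mean / (1 - mean_E)
```

This back-of-envelope computation is a heuristic under the stated simplification, not a statement proved in the paper.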
Nevertheless, we remark that if we let the BSVGD run for the same amount of time it takes the SVGD to converge, in our examples the graphs of the function $W_B$ fall below those of $W_S$. Therefore, an early-stopped BSVGD seems to be a good option, in contrast to executing the SVGD, when the computational time is limited, with the caveat that the sample size will be lower. Based on the previous examples, we can affirm that the BSVGD is an effective alternative to the classical SVGD when $\rho$ is multimodal. This is due to the two-fold nature of our algorithm: the SVGD step arranges the points, first towards the modes and then towards the tails, while the branching step encourages the exploration of the particles after these have been arranged by the SVGD, preparing them for the next cycle. Figure 3. On the left the mixture of Gaussians, on the right the banana-shaped case. At the top, in black, the function $(t_i, W_S(i))$ is presented, where $t_i$ is the time spent to obtain the $i$-th update using the SVGD algorithm, and in blue the $A$ functions used in the average to obtain $W_S$, shown until the algorithm converges. The vertical dashed line marks the time at which Algorithm 1 converges; in green are the $(t_j, W_B(j))$, and in orange the functions used in their average. At the bottom, the sample size of the BSVGD at time $t$. 4.4. Conclusions and Further Work. The BSVGD emerges as a competitive algorithm in several directions. In practical problems, we can obtain a sample that better reflects the multimodality compared with the classical SVGD. It is important to remark that when the modes of $\rho$ are separated by large valleys, the BSVGD struggles to detect the mixture weights properly. Because of this, and with the aim of improving the sample between modes, it is necessary to explore new candidates for the branching and exploration distributions.
Natural candidates for this include adaptive proposals. In future work it will also be important to modify the selection of the spine: instead of choosing it uniformly among the points, we could weight the points according to $\rho$, when it can be evaluated. This idea is aligned with the work presented in [20], with the difference that we are searching for the modes while also generating an approximate sample.

# References

[1] Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. Gradient Flows in Metric Spaces and in the Space of Probability Measures. Lectures in Mathematics ETH Zürich. Birkhäuser, Basel, 2nd edition, 2008.

[2] Krishnakumar Balasubramanian, Sayan Banerjee, and Promit Ghosal. Improved finite-particle convergence rates for Stein variational gradient descent. arXiv preprint arXiv:2402.11776, 2024.

[3] Clément Bénard, Brian Staber, and Sébastien Da Veiga. Kernel Stein discrepancy thinning: a theoretical perspective of pathologies and a practical fix with regularization. Advances in Neural Information Processing Systems, 36:49281–49311, 2023.

[4] Sinho Chewi, Jonathan Niles-Weed, and Philippe Rigollet. Statistical Optimal Transport, volume 2364 of Lecture Notes in Mathematics. Springer, 2025.

[5] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, 2nd edition, 2006.

[6] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(7), 2011.

[7] Yihao Feng, Dilin Wang, and Qiang Liu. Learning to draw samples with amortized Stein variational gradient descent. In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI), 2017.

[8] Tanmay Gangwani, Qiang Liu, and Jian Peng. Learning self-imitating diverse policies. In International Conference on Learning Representations (ICLR), 2019.

[9] Chengyue Gong, Jian Peng, and Qiang Liu.
Quantile Stein variational gradient descent for batch Bayesian optimization. In Proceedings of the 36th International Conference on Machine Learning (ICML), pages 2339–2348. PMLR, 2019.

[10] Jackson Gorham, Anant Raj, and Lester Mackey. Stochastic Stein discrepancies. In Advances in Neural Information Processing Systems, volume 33, pages 17931–17942. Curran Associates, Inc., 2020.

[11] Heikki Haario, Eero Saksman, and Johanna Tamminen. Adaptive proposal distribution for random walk Metropolis algorithm. Computational Statistics, 14:375–395, 1999.

[12] Jun Han, Fan Ding, Xianglong Liu, Lorenzo Torresani, Jian Peng, and Qiang Liu. Stein variational inference for discrete distributions. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), volume 108, pages 609–619. PMLR, 2020.

[13] Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker–Planck equation. SIAM Journal on Mathematical Analysis, 29(1):1–17, 1998.

[14] Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, and Qiang Liu. Action-dependent control variates for policy optimization via Stein identity. In International Conference on Learning Representations (ICLR), 2018.

[15] Qiang Liu. Stein variational gradient descent as gradient flow. In Advances in Neural Information Processing Systems (NeurIPS), volume 30. Curran Associates, Inc., 2017.

[16] Qiang Liu, Jason Lee, and Michael Jordan. A kernelized Stein discrepancy for goodness-of-fit tests. In Proceedings of the 33rd International Conference on Machine Learning (ICML), volume 48 of Proceedings of Machine Learning Research, pages 276–284. PMLR, 2016.

[17] Qiang Liu and Dilin Wang. Stein variational gradient descent: A general purpose Bayesian inference algorithm. In Advances in Neural Information Processing Systems (NeurIPS), volume 29. Curran Associates, Inc., 2016.
[18] Tianle Liu, Promit Ghosal, Krishnakumar Balasubramanian, and Natesh Pillai. Towards understanding the dynamics of Gaussian–Stein variational gradient descent. In Advances in Neural Information Processing Systems, volume 36. Curran Associates, Inc., 2024.

[19] Yang Liu, Prajit Ramachandran, Qiang Liu, and Jian Peng. Stein variational policy gradient. In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI), 2017.

[20] Emilia Pompe, Chris Holmes, and Krzysztof Łatuszyński. A framework for adaptive MCMC targeting multimodal distributions. The Annals of Statistics, 48(5):2930–2952, 2020.

[21] Filippo Santambrogio. Optimal Transport for Applied Mathematicians, volume 87 of Progress in Nonlinear Differential Equations and Their Applications. Springer International Publishing, Cham, 2015.

[22] Jiaxin Shi and Lester Mackey. A finite-particle convergence rate for Stein variational gradient descent. In Advances in Neural Information Processing Systems, volume 36. Curran Associates, Inc., 2024.

[23] Cédric Villani. Optimal Transport: Old and New, volume 338 of Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 2009.

[24] Dilin Wang, Zhe Zeng, and Qiang Liu. Stein variational message passing for continuous graphical models. In Proceedings of the 35th International Conference on Machine Learning (ICML), pages 5219–5227. PMLR, 2018.

[25] Zhenqing Wu, Zhejun Huang, Sijin Wu, Ziying Yu, Liuxin Zhu, and Lili Yang. Accelerating convergence of Langevin dynamics via adaptive irreversible perturbations. Mathematics, 12(1):118, 2023.
We propose a novel particle-based variational inference method designed for multimodal distributions. Our approach, referred to as Branched Stein Variational Gradient Descent (BSVGD), extends the classical Stein Variational Gradient Descent (SVGD) algorithm by incorporating a random branching mechanism that encourages exploration of the state space. In this work, a theoretical guarantee of convergence in distribution is presented, together with numerical experiments validating the suitability of our algorithm. Performance comparisons between the BSVGD and the SVGD are given in terms of the Wasserstein distance between samples and the corresponding computational times.
# 1 INTRODUCTION

In applications that involve dynamic decision-making, such as ad placement and recommendation systems, exploration can be costly and risky. These constraints limit the use of online exploration of actions, motivating the study of offline policy learning methods. Off-policy learning (OPL) addresses this need by enabling policy optimization using only logged bandit data generated under past (logging) policies (Joachims et al., 2018; Su et al., 2019; 2020a; Uehara et al., 2022). A typical approach to OPL uses the policy gradient (PG), which we can estimate unbiasedly from the logged data via techniques like the Inverse Propensity Score (IPS) and Doubly Robust (DR) estimators (Dudík et al., 2014). OPL methods based on PG iterations perform effectively in ideal scenarios with large sample sizes and relatively small action spaces (Saito & Joachims, 2021; Sachdeva et al., 2023; Taufiq et al., 2024). However, in many application scenarios, rewards are only partially observed due to missing data (Jakobsen et al., 2017; Wang et al., 2019b; Jadidinejad et al., 2019; Christakopoulou et al., 2022), censoring (Ren et al., 2019; Wang et al., 2019a), delayed observation (Wang et al., 2022b; Imbens et al., 2022; Wang et al., 2023; Saito et al., 2024a), data fusion (Imbens et al., 2022), and multi-stage rewards (Wan & McAuley, 2018; Hadash et al., 2018; Ma et al., 2018; Saito, 2020). For instance, on e-commerce platforms, binary conversions serve as a target reward to maximize. Unfortunately, conversions are observed only for products that were clicked and seen by the user. To make matters worse, conversion signals arrive only after weeks-long delays (Ktena et al., 2021), and most of them are yet to be observed when policy learning is conducted. Table 1 provides a list of real-life examples and causes of partially-observed rewards.

Table 1: Examples of Partially-Observed Rewards in Various Real-Life Scenarios
When rewards are only partially observed due to such causes, PG estimation suffers from high variance, resulting in inefficient OPL. Our work thus proposes a new formulation and method to address this challenging but highly general problem: OPL with partial observations of the reward. In scenarios with only partial observations of the target reward, one possible idea for performing OPL more effectively is to use more frequently observed secondary rewards instead. For instance, in e-commerce recommender systems, we often observe not only conversion signals but also other implicit feedback, such as clicks and dwell time, which are much more densely observed with no missingness (Liu et al., 2010; Jadidinejad et al., 2019). Thus, in our problem formulation, we consider both the target reward, which is only partially observed (e.g., user ratings, conversions, retention, future earnings, and survival days), and the secondary rewards, which are fully observed (e.g., clicks, dwell time, and short-term medical indicators). Considering the case where we also aim to optimize secondary rewards, we explore learning a policy that maximizes a weighted sum of the target reward and secondary rewards; this is a more general objective than optimizing the target reward alone. Given the availability of the secondary rewards, one feasible approach to OPL is to maximize some aggregation of the secondary rewards as a surrogate for the target reward. Nonetheless, a significant drawback of this approach is the potential for high bias in PG estimation, as secondary rewards often do not align perfectly with the target reward (Liu et al., 2010; Jadidinejad et al., 2019). Moreover, it is unclear how to construct an appropriate aggregation function of the secondary rewards. There is therefore a dilemma between the target and secondary rewards: the former aligns more accurately with our ultimate objective, while the latter provides more observations.
Using only the former leads to high variance in PG estimation due to missing observations, while relying solely on the latter results in high bias due to potential misalignment with the ultimate objective. To solve this new OPL problem more effectively, we develop a method that leverages both target and secondary rewards in a principled way. Specifically, we propose Hybrid Policy Optimization for Partially-Observed Reward (HyPeR), a method that estimates the PG using both types of rewards. We show that our PG estimator can substantially reduce estimation variance compared to typical estimators such as IPS and DR while maintaining unbiasedness under reasonable conditions. In addition, while our approach generally performs effectively using the predefined weight between the target and secondary rewards in the objective, we demonstrate that further improvement can be achieved by strategically tuning the weight to improve the bias-variance trade-off of the PG estimation, which we can do based only on observable logged data. Finally, we conduct comprehensive experiments on both synthetic and real-world datasets, where the HyPeR algorithm outperforms a range of existing methods in terms of optimizing both the target-reward objective and the combined objective of the target and secondary rewards. The key contributions of our work can be summarized as follows.

• We formulate the general problem of Off-Policy Learning (OPL) for contextual bandits with partially-observed rewards, encompassing many prevalent scenarios such as missing data, delayed rewards, and censoring, all of which are instances of this general problem.

• We propose a new method to address OPL with partial rewards by leveraging more densely observed secondary rewards to estimate the policy gradient with reduced variance.
• We consider a combined objective defined by a weighted sum of the target and secondary rewards and introduce the novel concept of strategically using an incorrect weight in our method to maximize the advantage of combining the two types of rewards.

# 2 OFF-POLICY LEARNING

# 2.1 THE CONVENTIONAL FORMULATION

We first formulate the conventional OPL problem in the typical contextual bandit setting (Dudík et al., 2014; Swaminathan & Joachims, 2015a; Farajtabar et al., 2018). In this formulation, a context $x \in \mathcal { X } \subseteq \mathbb { R } ^ { d _ { x } }$ is a $d _ { x }$-dimensional vector drawn i.i.d. from an unknown distribution $p ( x )$. Given context $x$, a possibly stochastic policy $\pi ( a | x )$ chooses an action $a$ within a finite action space denoted by $\mathcal { A }$. The (target) reward $r \in [ 0 , r _ { \max } ]$ (e.g., ratings, conversions, retention, survival) is then sampled from an unknown conditional distribution $p ( r | x , a )$. The existing literature defines the objective function in OPL as the expected target reward (often referred to as the policy value) below (Swaminathan & Joachims, 2015b).

$$ V ( \pi ) : = \mathbb { E } _ { p ( x ) \pi ( a \mid x ) p ( r \mid x , a ) } [ r ] = \mathbb { E } _ { p ( x ) \pi ( a \mid x ) } [ q ( x , a ) ] , \quad (1) $$

where $q ( x , a ) : = \mathbb { E } [ r \mid x , a ]$ is the expected reward given $x$ and $a$, which we call the $q$-function. In OPL, the goal is to learn a policy $\pi _ { \theta }$, parameterized by $\theta$, that maximizes the policy value: $\theta ^ { * } \in \arg \operatorname* { m a x } _ { \theta \in \Theta } V ( \pi _ { \theta } )$. In particular, OPL aims to learn such a policy using only logged data consisting of tuples $( x , a , r )$ generated under the logging policy, denoted $\pi _ { 0 }$.
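As a quick numerical illustration of the policy value above, consider a hypothetical context-free bandit with four actions (all quantities below are made-up demo values, not from the paper): the value of a softmax policy can be computed exactly from $q$ and recovered by on-policy Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 4
q = np.array([0.1, 0.5, 0.3, 0.9])           # hypothetical q(a); context dropped for brevity

theta = rng.normal(size=n_actions)
pi = np.exp(theta) / np.exp(theta).sum()     # softmax policy pi_theta(a)

V_exact = float(pi @ q)                      # V(pi) = E_{pi(a)}[q(a)]

a = rng.choice(n_actions, size=200_000, p=pi)
r = q[a] + rng.normal(0, 0.1, size=a.size)   # noisy rewards with mean q(a)
V_mc = float(r.mean())                       # on-policy Monte Carlo estimate of V(pi)
```

The Monte Carlo average converges to `V_exact` at the usual $O(1/\sqrt{n})$ rate; the point of OPL is precisely that this on-policy simulation is unavailable, and the value must be estimated from data logged under $\pi_0$ instead.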
More specifically, the logged data we can use to perform OPL can be written as $\mathcal { D } = \{ ( x _ { i } , a _ { i } , r _ { i } ) \} _ { i = 1 } ^ { n } \sim p ( \mathcal { D } )$, where the data distribution is induced by the logging policy, i.e., $\begin{array} { r } { p ( \mathcal { D } ) = \prod _ { i = 1 } ^ { n } p ( x _ { i } ) \pi _ { 0 } ( a _ { i } | x _ { i } ) p ( r _ { i } | x _ { i } , a _ { i } ) } \end{array}$.

# 2.2 OFF-POLICY LEARNING VIA POLICY GRADIENT

Most existing approaches to OPL are based on PG iterations (Ma et al., 2020; Chen et al., 2021). This method updates the policy parameter $\theta$ via iterative gradient ascent, $\boldsymbol { \theta } _ { t + 1 } \gets \boldsymbol { \theta } _ { t } + \nabla _ { \boldsymbol { \theta } } V ( \boldsymbol { \pi } _ { \boldsymbol { \theta } } )$, where the policy gradient is represented as $\nabla _ { \boldsymbol { \theta } } V ( \pi _ { \boldsymbol { \theta } } ) = \mathbb { E } _ { \boldsymbol { p } ( \boldsymbol { x } ) \pi _ { \boldsymbol { \theta } } ( { a } | \boldsymbol { x } ) } [ q ( \boldsymbol { x } , { a } ) \nabla _ { \boldsymbol { \theta } } \log \pi _ { \boldsymbol { \theta } } ( { a } | \boldsymbol { x } ) ]$ (Saito et al., 2024b). This form of the PG can be derived via the log-derivative trick and suggests updating the policy parameter $\theta$ so that the resulting policy $\pi _ { \boldsymbol { \theta } }$ chooses actions with high expected reward. To implement the PG iteration, we first need to estimate the PG, since it is an unknown vector. To achieve this using only the logged data $\mathcal { D }$ collected under the logging policy $\pi _ { 0 }$, the relevant literature relies on estimators such as IPS and DR, which enable unbiased estimation of the PG (Dudík et al., 2014; Metelli et al., 2021). These PG estimators are defined as follows.
$$ \begin{array}{l} { \displaystyle \nabla _ { \theta } \widehat { V } _ { \mathrm { I P S } } ( \pi _ { \theta } ; \mathcal { D } ) : = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } w ( x _ { i } , a _ { i } ) \, r _ { i } \, g _ { \theta } ( x _ { i } , a _ { i } ) , \quad (2) } \\ { \displaystyle \nabla _ { \theta } \widehat { V } _ { \mathrm { D R } } ( \pi _ { \theta } ; \mathcal { D } ) : = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \left\{ w ( x _ { i } , a _ { i } ) ( r _ { i } - \widehat { q } ( x _ { i } , a _ { i } ) ) g _ { \theta } ( x _ { i } , a _ { i } ) + \mathbb { E } _ { \pi _ { \theta } ( a | x _ { i } ) } [ \widehat { q } ( x _ { i } , a ) g _ { \theta } ( x _ { i } , a ) ] \right\} , \quad (3) } \end{array} $$

where $w ( x , a ) : = \pi _ { \theta } ( a \mid x ) / \pi _ { 0 } ( a \mid x )$ is the importance weight and $g _ { \theta } ( x , a ) : = \nabla _ { \theta } \log \pi _ { \theta } ( a \mid x )$ is the policy score function. $\widehat { q } ( x , a )$ is an estimator of the $q$-function, which we can obtain by performing reward regression on the logged data $\mathcal { D }$. These estimators are unbiased (i.e., $\begin{array} { r } { \mathbb { E } _ { p ( \mathcal { D } ) } [ \nabla _ { \theta } \widehat { V } _ { \mathrm { I P S } } ( \pi _ { \theta } ; \mathcal { D } ) ] = \mathbb { E } _ { p ( \mathcal { D } ) } [ \nabla _ { \theta } \widehat { V } _ { \mathrm { D R } } ( \pi _ { \theta } ; \mathcal { D } ) ] = \nabla _ { \theta } V ( \pi _ { \theta } ) ) } \end{array}$ under full support, which requires that the logging policy sufficiently explores the action space.

Condition 1 (Full Support). The logging policy $\pi _ { 0 }$ is said to have full support if $\pi _ { 0 } ( a \mid x ) > 0$ for all $x \in \mathcal { X }$ and $a \in { \mathcal { A } }$.

The importance weighting technique used by IPS and DR enables unbiased estimation of the PG, thereby resulting in effective OPL even without additional exploration.
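The IPS and DR policy-gradient estimators above can be checked on a small synthetic problem. The sketch below uses a hypothetical context-free softmax policy, for which the score function is $g_\theta(a) = e_a - \pi_\theta$ and the exact PG is available in closed form; the data-generating numbers are demo assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 4, 100_000
q = np.array([0.1, 0.5, 0.3, 0.9])           # true q(a), hypothetical
theta = rng.normal(size=K)
pi = np.exp(theta) / np.exp(theta).sum()     # target policy pi_theta
pi0 = np.full(K, 1 / K)                      # uniform logging policy pi_0

a = rng.choice(K, size=n, p=pi0)             # logged actions
r = q[a] + rng.normal(0, 0.1, size=n)        # logged rewards

def score(a):                                # g_theta(a) = e_a - pi for softmax logits
    g = -np.tile(pi, (a.size, 1))
    g[np.arange(a.size), a] += 1.0
    return g

w = pi[a] / pi0[a]                           # importance weights w(a) = pi(a)/pi0(a)
grad_ips = (w[:, None] * r[:, None] * score(a)).mean(axis=0)

q_hat = q + rng.normal(0, 0.05, size=K)      # imperfect reward model q_hat(a)
baseline = pi * q_hat - pi * (pi @ q_hat)    # E_pi[q_hat(a) g_theta(a)], computed exactly
grad_dr = ((w * (r - q_hat[a]))[:, None] * score(a)).mean(axis=0) + baseline

grad_true = pi * q - pi * (pi @ q)           # exact PG: sum_a pi(a) q(a) (e_a - pi)
```

Both estimates agree with `grad_true` up to Monte Carlo noise; DR typically has the lower variance because the model term absorbs most of the reward signal.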
However, if the problem is far from ideal, with smaller data sizes, large action spaces, and noisy or partial rewards, existing OPL methods can easily collapse due to substantial variance in PG estimation (Saito & Joachims, 2022).

# 2.3 PARTIALLY-OBSERVED REWARDS

As discussed in the introduction and in Table 1, there are many real-life cases where we can observe the target reward only partially. To precisely formulate such a scenario, we introduce an additional random variable, $o \in \{ 0 , 1 \}$, representing whether the reward is observed for each data point. If $o _ { i } = 1$, the reward $\boldsymbol { r } _ { i }$ is observed; otherwise, it is unavailable, which we indicate by setting $r _ { i } = \mathrm { N } / \mathrm { A }$. Considering a scenario where the observation indicator comes from an unknown distribution $0 < p ( o | x ) < 1$, we can generalize the data-generating process as $\mathcal { D } = \{ ( x _ { i } , a _ { i } , o _ { i } , r _ { i } ) \} _ { i = 1 } ^ { n } \sim p ( \mathcal { D } ) = \prod _ { i = 1 } ^ { n } p ( x _ { i } ) \pi _ { 0 } ( a _ { i } | x _ { i } ) p ( o _ { i } | x _ { i } ) p ( r _ { i } | x _ { i } , a _ { i } )$, which encompasses many realistic situations such as missing data, delayed rewards, data fusion, multi-stage rewards, and censoring.
To implement OPL with such partially-observed rewards, we can simply apply the PG approach using only the data with observed rewards, which we call r-IPS and r-DR:

$$ \begin{array}{l} { \displaystyle \nabla _ { \theta } \widehat { V } _ { \mathrm { r \text{-} I P S } } ( \pi _ { \theta } ; \mathcal { D } ) : = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \frac { o _ { i } } { p ( o _ { i } | x _ { i } ) } w ( x _ { i } , a _ { i } ) \, r _ { i } \, g _ { \theta } ( x _ { i } , a _ { i } ) , \quad (4) } \\ { \displaystyle \nabla _ { \theta } \widehat { V } _ { \mathrm { r \text{-} D R } } ( \pi _ { \theta } ; \mathcal { D } ) : = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \frac { o _ { i } } { p ( o _ { i } | x _ { i } ) } \Big \{ w ( x _ { i } , a _ { i } ) ( r _ { i } - \hat { q } ( x _ { i } , a _ { i } ) ) g _ { \theta } ( x _ { i } , a _ { i } ) + \mathbb { E } _ { \pi _ { \theta } ( a | x _ { i } ) } [ \hat { q } ( x _ { i } , a ) g _ { \theta } ( x _ { i } , a ) ] \Big \} , \quad (5) } \end{array} $$

where we can see that these estimators use only the data with reward observations ($o _ { i } = 1$) to estimate the PG. We can readily show that these estimators are unbiased, but they are often substantially inefficient and produce high variance in PG estimation. This is because they use only part of the data in $\mathcal { D }$ and naively discard all the information when $o _ { i } = 0$.

# 3 OFF-POLICY LEARNING WITH SECONDARY REWARDS

To deal with the issue of inefficiency in OPL when the rewards are partially observed, we propose a new formulation of OPL with secondary rewards.
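Extending the toy example to partial observation makes the inefficiency concrete: r-IPS reweights each observed reward by $o_i / p(o_i \mid x_i)$ and discards the rest, so it remains unbiased but its variance grows as the observation probability shrinks. All numbers below are synthetic demo assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, p_obs = 4, 200_000, 0.3
q = np.array([0.1, 0.5, 0.3, 0.9])           # true q(a), hypothetical
theta = rng.normal(size=K)
pi = np.exp(theta) / np.exp(theta).sum()     # target policy
pi0 = np.full(K, 1 / K)                      # uniform logging policy

a = rng.choice(K, size=n, p=pi0)
o = rng.binomial(1, p_obs, size=n)           # observation indicator: r is N/A when o = 0
r = np.where(o == 1, q[a] + rng.normal(0, 0.1, size=n), 0.0)  # N/A coded as 0

def score(a):                                # softmax score g_theta(a) = e_a - pi
    g = -np.tile(pi, (a.size, 1))
    g[np.arange(a.size), a] += 1.0
    return g

w = pi[a] / pi0[a]
# r-IPS: only rows with o = 1 contribute, reweighted by 1 / p_obs
grad_r_ips = (((o / p_obs) * w * r)[:, None] * score(a)).mean(axis=0)
grad_true = pi * q - pi * (pi @ q)           # exact PG for comparison
```

With `p_obs = 0.3`, roughly 70% of the rows contribute nothing, which is exactly the wasted information the secondary-reward formulation below recovers.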
In many real-life scenarios, we observe not only the target reward, such as the conversion signals we are optimizing, but also secondary rewards, such as clicks and dwell time (Wan & McAuley, 2018; Jadidinejad et al., 2019; Christakopoulou et al., 2022). By leveraging these secondary rewards, we aim to reduce variance in PG estimation and achieve more efficient OPL even in the challenging scenario of partially-observed rewards. To implement this idea, we extend the typical formulation of OPL by introducing the secondary rewards, denoted $s \in \mathbb { R } ^ { d _ { s } }$, which can be multi-dimensional. We consider the secondary rewards to be sampled from an unknown conditional distribution $p ( s | x , a )$ after taking action $a$ for context $x$. The logged dataset available in this setting can be written as follows.

$$ \mathcal { D } : = \{ ( x _ { i } , a _ { i } , o _ { i } , s _ { i } , r _ { i } ) \} _ { i = 1 } ^ { n } \sim p ( \mathcal { D } ) = \prod _ { i = 1 } ^ { n } p ( x _ { i } ) \pi _ { 0 } ( a _ { i } | x _ { i } ) p ( o _ { i } | x _ { i } ) p ( s _ { i } | x _ { i } , a _ { i } ) p ( r _ { i } | x _ { i } , a _ { i } , s _ { i } ) . \quad (6) $$

In addition to introducing secondary rewards, we consider a generalized objective called the combined policy value, defined as a weighted sum of the expected target and secondary rewards, for situations where we aim to optimize the secondary rewards as well.
$$ V _ { c } ( \pi ; \beta ) : = ( 1 - \beta ) V _ { r } ( \pi ) + \beta V _ { s } ( \pi ) = ( 1 - \beta ) \mathbb { E } _ { p ( x ) \pi ( a | x ) } [ q ( x , a ) ] + \beta \mathbb { E } _ { p ( x ) \pi ( a | x ) } \left[ \displaystyle \sum _ { d = 1 } ^ { d _ { s } } f _ { d } ( x , a ) \right] , \quad (7) $$

where $f _ { d } ( x , a ) : = \mathbb { E } [ s _ { d } | x , a ]$ is the $d$-th dimension of the expected secondary reward and $\beta \in [ 0 , 1 )$ is a parameter controlling the prioritization between optimizing the target reward and optimizing the secondary rewards. When $\beta = 0$, this generalized objective reduces to the typical policy value defined in Eq. (1). With a positive value of $\beta$, we can address practical situations where we aim to optimize not only the target reward but also the secondary rewards to some extent. Given this extended problem of OPL with secondary rewards and the combined policy value, we can consider a baseline method that uses some aggregation of the secondary rewards as a surrogate for the target reward.
This baseline approach of using only the secondary rewards as surrogates, which we call s-IPS and s-DR, estimates the PG as follows:

$$ \begin{array}{rl} & { \displaystyle \nabla _ { \theta } \hat { V } _ { \mathrm { s \text{-} I P S } } ( \pi _ { \theta } ; \mathcal { D } ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } w ( x _ { i } , a _ { i } ) \, F ( s _ { i } ) \, g _ { \theta } ( x _ { i } , a _ { i } ) , \quad (8) } \\ & { \displaystyle \nabla _ { \theta } \hat { V } _ { \mathrm { s \text{-} D R } } ( \pi _ { \theta } ; \mathcal { D } ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \Big \{ w ( x _ { i } , a _ { i } ) \big ( F ( s _ { i } ) - F ( \hat { f } ( x _ { i } , a _ { i } ) ) \big ) g _ { \theta } ( x _ { i } , a _ { i } ) + \mathbb { E } _ { \pi _ { \theta } ( a | x _ { i } ) } \big [ F ( \hat { f } ( x _ { i } , a ) ) g _ { \theta } ( x _ { i } , a ) \big ] \Big \} , \quad (9) } \end{array} $$

where $\hat { f } ( x , a )$ is an estimator of $f ( x , a )$ and $F ( s )$ is some aggregation of the secondary rewards, such as their weighted average, designed to imitate the target reward. By replacing the target reward with $F ( s )$, s-IPS and s-DR can use all the data points irrespective of the observation indicator $o _ { i }$, thus reducing the variance. However, unless the function $F ( s )$ accurately describes the target reward, which is untestable, these estimators often produce substantial bias against the true PG regarding the target reward, which leads to ineffective OPL, as we show in our experiments.

# 4 HYBRID POLICY OPTIMIZATION FOR PARTIALLY-OBSERVED REWARD

This section proposes a new OPL algorithm to maximize the combined policy value in Eq. (7) with only partially-observed target rewards and secondary rewards that do not necessarily align accurately with the target reward. We also introduce the new concept of strategically using a different value of the balancing factor $\gamma$ from the true value of $\beta$ in the objective in Eq.
(7) to improve the finite-sample effectiveness of the algorithm. The key idea behind our algorithm is a new estimator of the PG that uses the secondary rewards to reduce variance, while also optimizing the secondary rewards depending on the value of $\beta$ in the objective function. To derive our method, we first focus on PG estimation for the target policy value $V _ { r } ( \pi _ { \theta } )$. In particular, to address the variance issue of existing methods, we propose leveraging secondary rewards through the following estimator for $\nabla _ { \boldsymbol { \theta } } V _ { r } ( \pi _ { \boldsymbol { \theta } } )$.

$$ \nabla _ { \theta } \hat { V } _ { r } ( \pi _ { \theta } ; \mathcal { D } ) : = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \biggl \{ \mathbb { E } _ { \pi _ { \theta } ( a | x _ { i } ) } [ \hat { q } ( x _ { i } , a ) g _ { \theta } ( x _ { i } , a ) ] + w ( x _ { i } , a _ { i } ) ( \hat { q } ( x _ { i } , a _ { i } , s _ { i } ) - \hat { q } ( x _ { i } , a _ { i } ) ) g _ { \theta } ( x _ { i } , a _ { i } ) + \frac { o _ { i } } { p ( o _ { i } | x _ { i } ) } w ( x _ { i } , a _ { i } ) ( r _ { i } - \hat { q } ( x _ { i } , a _ { i } , s _ { i } ) ) g _ { \theta } ( x _ { i } , a _ { i } ) \biggr \} , \quad (10) $$

where $\hat { q } ( x , a , s )$ is an estimator of the $q$-function conditional on the secondary rewards (i.e., of $q ( x , a , s )$), which we can obtain, e.g., by solving $\begin{array} { r } { \hat { q } \ = \ \arg \operatorname* { m i n } _ { \boldsymbol { q } ^ { \prime } } \sum _ { ( \boldsymbol { x } , \boldsymbol { a } , \boldsymbol { s } , \boldsymbol { r } ) \in \mathcal { D } } ( r - \boldsymbol { q } ^ { \prime } ( \boldsymbol { x } , \boldsymbol { a } , \boldsymbol { s } ) ) ^ { 2 } } \end{array}$ using $\mathcal { D }$.
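The following runnable sketch illustrates the idea of the hybrid target-reward estimator on the same context-free toy problem: a DR-style model term, a fully-observed correction built from the secondary-reward-conditioned model, and an observation-reweighted residual. The oracle models $\hat q(a) = q(a)$ and $\hat q(a, s) = s$ and the synthetic generating process ($\mathbb{E}[r \mid a, s] = s$, one secondary reward) are demo assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, p_obs = 4, 100_000, 0.3
q = np.array([0.1, 0.5, 0.3, 0.9])            # true q(a); context dropped for the demo
theta = rng.normal(size=K)
pi = np.exp(theta) / np.exp(theta).sum()      # target policy
pi0 = np.full(K, 1 / K)                       # uniform logging policy

a = rng.choice(K, size=n, p=pi0)
s = q[a] + rng.normal(0, 0.2, size=n)         # secondary reward with E[s|a] = q(a)
o = rng.binomial(1, p_obs, size=n)            # observation indicator for r
r = np.where(o == 1, s + rng.normal(0, 0.2, size=n), 0.0)  # E[r|a,s] = s; N/A coded as 0

def score(a):                                  # softmax score g_theta(a) = e_a - pi
    g = -np.tile(pi, (a.size, 1))
    g[np.arange(a.size), a] += 1.0
    return g

w = pi[a] / pi0[a]
g = score(a)
q_hat = q                                      # model q_hat(a); oracle for the demo
q_hat_s = s                                    # model q_hat(a, s); oracle E[r|a,s] = s

term1 = pi * q_hat - pi * (pi @ q_hat)                             # E_pi[q_hat g], exact
term2 = ((w * (q_hat_s - q_hat[a]))[:, None] * g).mean(axis=0)     # fully observed correction
term3 = (((o / p_obs) * w * (r - q_hat_s))[:, None] * g).mean(axis=0)  # reweighted residual
grad_hyper_r = term1 + term2 + term3

grad_true = pi * q - pi * (pi @ q)             # true PG of the target value
```

Only the small residual $r - \hat q(a, s)$ is hit by the $1/p_{\mathrm{obs}}$ reweighting, which is where the variance reduction over r-DR comes from.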
Note also that we can estimate the conditional reward-observation probability $p ( o _ { i } | x _ { i } )$, when it is unknown, by regressing $o$ on $x$ in the logged data $\mathcal { D }$ with a supervised classifier. This estimation must be performed before OPL, not only for our method but also for baseline methods such as r-IPS and r-DR. We first show that our PG estimator is unbiased under the same conditions as r-DR.

Theorem 1. (Unbiasedness) Under Condition 1, Eq. (10) is unbiased against the true PG regarding the target reward, i.e.,

$$ \mathbb { E } [ \nabla _ { \theta } \hat { V } _ { r } ( \pi _ { \theta } ; \mathcal { D } ) ] = \nabla _ { \theta } V _ { r } ( \pi _ { \theta } ) = \nabla _ { \theta } V _ { c } ( \pi _ { \theta } ; \beta = 0 ) . $$

See Appendix B.1 for the proof. In addition to its unbiasedness, our PG estimator fully utilizes the information from the secondary rewards, thus reducing the variance compared to r-DR in most cases. The following shows that the variance of Eq. (10) can be much lower than that of r-DR.

Theorem 2. (Variance Reduction) Under Condition 1, we have

$$ \begin{array}{rl} & { n ( \mathbb { V } _ { \mathcal { D } } [ \nabla _ { \theta } \hat { V } _ { \mathrm { r \text{-} D R } } ( \pi ; \mathcal { D } ) ] - \mathbb { V } _ { \mathcal { D } } [ \nabla _ { \theta } \hat { V } _ { r } ( \pi ; \mathcal { D } ) ] ) } \\ & { \quad = \mathbb { E } _ { p ( x ) \pi _ { 0 } ( a | x ) p ( s | x , a ) } \Big [ \cfrac { \rho ^ { 2 } } { p ( o | x ) ^ { 2 } } w ( x , a ) ^ { 2 } g _ { \theta } ( x , a ) ^ { 2 } \big ( \Delta _ { q , \hat { q } _ { \neg s } } ( x , a , s ) ^ { 2 } - \Delta _ { q , \hat { q } } ( x , a , s ) ^ { 2 } \big ) \Big ] } \end{array} $$

where $\rho ^ { 2 } : = \mathbb { V } [ o | x ]$ is the variance of the observation indicator.
$\Delta _ { q , \hat { q } _ { \neg s } } ( x , a , s ) : = q ( x , a , s ) - { \hat { q } } ( x , a )$ and $\Delta _ { q , \hat { q } } ( x , a , s ) : = q ( x , a , s ) - \hat { q } ( x , a , s )$ are the estimation errors of $\hat { q } ( x , a )$ and $\hat { q } ( x , a , s )$, respectively. See Appendix B.2 for the proof. Theorem 2 indicates that the variance is reduced whenever $\hat { q } ( x , a , s )$ estimates $q ( x , a , s )$ better than $\hat { q } ( x , a )$ does, which is often the case since the secondary rewards are typically correlated with the target reward. Thus, Eq. (10) is expected to perform better than r-DR while being unbiased under the same condition. To demonstrate this, in the experimental sections we investigate the difference in performance between Eq. (10), which we name ${ \bf H y P e R } ( \gamma = 0 )$, and r-DR. Based on the new estimator for the PG regarding the target reward defined in Eq. (10), we finally introduce our HyPeR estimator.

$$ \nabla _ { \boldsymbol { \theta } } \hat { V } _ { \mathrm { H y P e R } } ( \pi _ { \boldsymbol { \theta } } ; \mathcal { D } , \gamma ) = ( 1 - \gamma ) \cdot \nabla _ { \boldsymbol { \theta } } \hat { V } _ { r } ( \pi _ { \boldsymbol { \theta } } ; \mathcal { D } ) + \gamma \cdot \nabla _ { \boldsymbol { \theta } } \hat { V } _ { s } ( \pi _ { \boldsymbol { \theta } } ; \mathcal { D } ) , \quad (11) $$

where the first term $\nabla _ { \boldsymbol { \theta } } \hat { V } _ { r } ( \pi _ { \boldsymbol { \theta } } ; \mathcal { D } )$ estimates the PG regarding the target policy value $V _ { r } ( \pi _ { \theta } )$ as in Eq. (10).
The second term $\nabla _ { \boldsymbol { \theta } } \hat { V } _ { s } ( \pi _ { \boldsymbol { \theta } } ; \mathcal { D } )$ estimates the PG regarding the secondary policy value $V _ { s } ( \pi _ { \theta } )$, which we can define by simply applying DR to the sum of the secondary rewards as

$$ \nabla _ { \theta } \hat { V } _ { s } ( \pi _ { \theta } ; { \mathcal D } ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \left\{ \mathbb E _ { \pi _ { \theta } ( a | x _ { i } ) } \left[ \sum _ { d = 1 } ^ { d _ { s } } \hat { f } _ { d } ( x _ { i } , a ) g _ { \theta } ( x _ { i } , a ) \right] + w ( x _ { i } , a _ { i } ) \sum _ { d = 1 } ^ { d _ { s } } \left( s _ { i } ^ { d } - \hat { f } _ { d } ( x _ { i } , a _ { i } ) \right) g _ { \theta } ( x _ { i } , a _ { i } ) \right\} . $$

In Eq. (11), $\gamma \in [ 0 , 1 )$ is a tunable parameter that defines the mixture ratio between the estimators of the PG regarding the target reward and the secondary rewards. When using the predefined weight (i.e., $\gamma = \beta$), Eq. (11) is unbiased for the combined policy gradient (i.e., $\mathbb { E } [ \nabla _ { \theta } \hat { V } _ { \mathrm { H y P e R } } ( \pi _ { \theta } ; \mathcal { D } , \beta ) ] = \nabla _ { \boldsymbol { \theta } } V _ { c } ( \pi _ { \boldsymbol { \theta } } ; \beta )$), but the following discusses a strategic tuning of $\gamma$ to further improve the finite-sample effectiveness of our HyPeR algorithm.

# 4.1 STRATEGIC TUNING OF THE WEIGHT $\gamma$

Although the natural choice of $\gamma$ in our PG estimator is $\beta$, which defines the true objective in Eq. (7), we argue that intentionally using $\gamma \neq \beta$ can further improve the combined policy value in the finite-sample setting. This is due to the potential differences in variance between the different types of rewards.
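This bias-variance intuition can be made concrete with a small numerical example (all numbers are hypothetical): when estimating $(1-\beta)X + \beta Y$ with a mixed estimator $(1-\gamma)\hat X + \gamma\hat Y$, where $\hat X$ is high-variance (sparsely observed) and $\hat Y$ is low-variance, the MSE-minimizing weight exceeds $\beta$.

```python
import numpy as np

X, Y = 1.0, 0.5          # true values of the two estimands (hypothetical)
beta = 0.2               # weight defining the true objective (1-beta)*X + beta*Y
var_X, var_Y = 4.0, 0.1  # per-sample variances: X-hat is much noisier than Y-hat
n = 100                  # sample size

gammas = np.linspace(0.0, 0.99, 100)
# bias of the mixed estimator: (gamma - beta) * (Y - X)
bias2 = ((gammas - beta) * (Y - X)) ** 2
# variance of the mixed estimator (independent components, averaged over n samples)
var = ((1 - gammas) ** 2 * var_X + gammas ** 2 * var_Y) / n
mse = bias2 + var

gamma_star = float(gammas[np.argmin(mse)])   # 0.31 here: larger than beta = 0.2
```

The optimizer trades a little bias for a large variance cut, exactly the mechanism HyPeR exploits when the target-reward gradient is noisy due to sparse observations.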
Specifically, strategically shifting the weight away from the predefined balance $\beta$ can lead to better estimation through variance reduction, even though it introduces some bias. For example, consider an objective that is a sum of two estimands, $X + Y$. If estimator $\hat{X}$ carries less variance than estimator $\hat{Y}$, it is likely better to give more weight to $\hat{X}$ to achieve less variance, even at the cost of introducing some bias. When dealing with multiple reward types, this can occur in various situations, such as when one reward is less noisy than the other, or, as in our study, when one reward type (secondary reward) is more frequently observed than the other (target reward). In HyPeR, the PG regarding the target reward $\nabla_{\theta} \hat{V}_{r}(\pi_{\theta}; \mathcal{D})$ often carries much higher variance than that regarding the secondary rewards $\nabla_{\theta} \hat{V}_{s}(\pi_{\theta}; \mathcal{D})$ due to the partial observability of the target reward. Thus, although setting $\gamma = \beta$ makes the HyPeR estimator unbiased and at least reduces the variance relative to existing methods, it can still have high variance, particularly when $\beta$ takes a small value. On the other hand, increasing the weight $\gamma$ will likely lead to less variance, since this prioritizes the second term more, while introducing some bias. This creates an interesting bias-variance trade-off, and we aim to tune $\gamma$ to achieve the optimal combined policy value of the resulting policy: $$ \gamma^{*} = \arg \max_{\gamma \in [0, 1)} V(\pi_{\theta}(\cdot; \gamma, \mathcal{D}); \beta), $$ where $\pi_{\theta}(\cdot; \gamma, \mathcal{D})$ is a policy optimized under the weight $\gamma$ used in our policy gradient estimator in Eq.
(11), and its value is evaluated under the originally defined weight $\beta$. Since the true policy value is unknown, we need to perform Eq. (12) using only the logged dataset $\mathcal{D}$. The most straightforward method is to split the dataset $\mathcal{D}$ into training $(\mathcal{D}_{tr})$ and validation $(\mathcal{D}_{val})$ sets. Then, we train a policy using $\mathcal{D}_{tr}$ (i.e., $\pi_{\theta}(\cdot; \gamma, \mathcal{D}_{tr})$) and estimate its value using $\mathcal{D}_{val}$ (i.e., $\hat{V}(\pi_{\theta}; \beta, \mathcal{D}_{val})$). However, an issue with this naive procedure is that the policy $\pi_{\theta}(\cdot; \gamma, \mathcal{D}_{tr})$ is trained on a smaller dataset $\mathcal{D}_{tr}$ instead of the full dataset $\mathcal{D}$ that is used in real training. In the PG estimation, variance is inversely proportional to the data size $|\mathcal{D}|$, while bias is unaffected. Therefore, the naive tuning procedure may end up selecting a higher weight than the optimal weight $\gamma^{*}$ by overly prioritizing variance reduction. To address this issue, we ensure that $|\mathcal{D}_{tr}'| = |\mathcal{D}|$ through bootstrapping: $$ \mathcal{D}_{tr}' = \{ (x_{i}', a_{i}', s_{i}', o_{i}', r_{i}') \mid i = 1, \ldots, n \}, $$ where $\{ (x_{i}', a_{i}', s_{i}', o_{i}', r_{i}') \}_{i=1}^{n} \sim \mathcal{D}_{tr}$ is sampled independently with replacement. By doing so, we alleviate the issue regarding the variance estimation, since $\mathcal{D}_{tr}'$ now has the same size as the full dataset $\mathcal{D}$.
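The bootstrapping step can be sketched as follows (the dictionary layout and field names are illustrative, not the authors' implementation; a logged record holds $(x, a, s, o, r)$):

```python
import numpy as np

def bootstrap_to_full_size(D_tr: dict, n_full: int, seed: int = 0) -> dict:
    """Resample the training split with replacement so that the bootstrapped
    set D_tr' has the same size n as the full dataset D, keeping the variance
    of the PG estimate comparable to training on the full data."""
    rng = np.random.default_rng(seed)
    n_tr = len(D_tr["x"])
    idx = rng.integers(0, n_tr, size=n_full)  # i.i.d. draws with replacement
    return {key: val[idx] for key, val in D_tr.items()}

# Example: a 70% training split resampled back up to the full size n = 2000
D_tr = {"x": np.arange(1400), "a": np.arange(1400), "r": np.zeros(1400)}
D_tr_prime = bootstrap_to_full_size(D_tr, n_full=2000)
print(len(D_tr_prime["x"]))
```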
Using $\mathcal{D}_{tr}'$, we propose to solve the following bi-level optimization to tune $\gamma$ in a data-driven manner: $$ \hat{\gamma}^{*} = \arg \max_{\gamma \in [0, 1)} \hat{V}(\pi_{\theta}(\cdot; \gamma, \mathcal{D}_{tr}'); \beta, \mathcal{D}_{val}). $$ # 5 SYNTHETIC EXPERIMENT Synthetic Data Generation. To create synthetic data, we sample 10-dimensional context vectors $x$ from a standard normal distribution, and the sample size is fixed at $n = 2000$ by default. We then synthesize the logging policy as $\pi_{0} = \mathrm{softmax}(\phi(x^{T} \mathcal{M}_{X,A} a + x^{T} \theta_{x} + a^{T} \theta_{a}))$, where $\mathcal{M}_{X,A}, \theta_{x}, \theta_{a}$ are parameters randomly sampled from a uniform distribution with range $[-1, 1]$, and actions $a \in \mathcal{A}$ are sampled following this logging policy ($|\mathcal{A}| = 10$). $\phi$ is a parameter that controls how deterministic the logging policy becomes, and we set it to $\phi = -2.0$ by default. We then synthesize each dimension of the 5-dimensional expected secondary rewards given $x$ and $a$ as $$ f_{d}(x, a) = x^{T} \mathcal{M}_{X,A}' a + x^{T} \theta_{x}' + a^{T} \theta_{a}', $$ and secondary rewards $s$ are sampled from a normal distribution $s_{d} \sim \mathcal{N}(f_{d}(x, a), \sigma_{s}^{2})$. In the main text, we set the default to $\sigma_{s} = 0.5$, and the results for other values of $\sigma_{s}$ can be found in Appendix C.2.
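The context and logging-policy synthesis above can be sketched in a few lines of numpy (a minimal sketch with the stated defaults $n = 2000$, $|\mathcal{A}| = 10$, $\phi = -2.0$; the parameter shapes are our reading of the notation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim_x, n_actions, phi = 2000, 10, 10, -2.0

x = rng.standard_normal((n, dim_x))              # 10-dimensional contexts
M = rng.uniform(-1, 1, (dim_x, n_actions))       # M_{X,A}
theta_x = rng.uniform(-1, 1, (dim_x, 1))
theta_a = rng.uniform(-1, 1, (1, n_actions))

# Softmax logging policy pi_0(a|x) over |A| = 10 actions
logits = phi * (x @ M + x @ theta_x + theta_a)   # (n, |A|) via broadcasting
pi_0 = np.exp(logits - logits.max(axis=1, keepdims=True))
pi_0 /= pi_0.sum(axis=1, keepdims=True)

# Sample the logged actions a_i ~ pi_0(.|x_i)
actions = np.array([rng.choice(n_actions, p=p) for p in pi_0])
print(pi_0.shape, actions.shape)
```

With $\phi = -2.0$ the logits are negated and scaled, making the logging policy relatively far from greedy, which is what makes off-policy correction non-trivial.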
We finally synthesize the expected target reward function as $$ q(x, a, f(x, a)) := (1 - \lambda)(x^{T} \mathcal{M}_{X,A}'' a + x^{T} \theta_{x}'' + a^{T} \theta_{a}'' + x^{T} \mathcal{M}_{X,F} f + a^{T} \mathcal{M}_{A,F} f) + \lambda f^{T} \theta_{f}. $$ $\lambda \in [0, 1]$ is an experimental parameter that controls how strongly the secondary rewards are correlated with the target reward. When $\lambda = 1$, the secondary rewards are completely correlated with the target reward, while a smaller $\lambda$ makes them less correlated. We use $\lambda = 0.7$ as the default setting throughout the synthetic experiment. The target reward is sampled from a normal distribution as $r \sim \mathcal{N}(q(x, a, f(x, a)), \sigma_{r}^{2})$ with default $\sigma_{r} = 0.5$ (results with other $\sigma_{r}$ values are provided in Appendix C.2). The target reward observation probability is an experimental parameter and is set to $p(o|x) = 0.2$ for all $x$ by default. The true weight $\beta$, which is used to define the combined policy value $V_{c}(\pi; \beta)$, is set to $\beta = 0.3$ and is also one of the experimental parameters. In the synthetic experiments, we generally use the predefined weight $\beta$ for HyPeR, and denote this as $\mathrm{HyPeR}(\gamma = \beta)$. We compare $\mathrm{HyPeR}(\gamma = \beta)$ against $\mathrm{HyPeR}(\gamma = 0)$, r-IPS, r-DR, s-IPS and s-DR. For s-IPS and s-DR, we use $F(s) = s^{T}(\theta_{f} + \varepsilon_{F})$, where $\theta_{f}$ is identical to that of Eq.
(16), and $\varepsilon_{F} \sim \mathcal{N}(0, \sigma_{F}^{2})$ is noise that simulates the inaccuracy of describing the target reward. # 5.1 RESULTS AND DISCUSSION We run OPL simulations 100 times with different train-test splits. We report relative policy values calculated by $(V(\pi_{\theta}) - V(\pi_{\mathrm{unif}})) / (V(\pi^{*}) - V(\pi_{\mathrm{unif}}))$, where $\pi^{*}$ is the optimal policy and $\pi_{\mathrm{unif}}$ is a uniform random policy. This way, a random policy has a value of 0 and an optimal policy has a value of 1, making the results easier to interpret. Note that the shaded regions in the plots represent $95\%$ confidence intervals estimated via bootstrapping. Figure 1: Comparing the combined, target, and secondary policy values of OPL methods with varying target reward observation probabilities $(p(o|x))$ on Synthetic Data. Figure 2: Comparing the combined, target, and secondary policy values of OPL methods with varying training data sizes $(n)$ on Synthetic Data. How does HyPeR perform with varying target reward observation probabilities? Figure 1 evaluates the relative policy value when we vary the observation probability of the target reward $p(o|x)$. We observe that $\mathrm{HyPeR}(\gamma = \beta)$ provides the highest combined policy value in all cases. It is also able to optimize both the target and secondary policy values in a balanced way. In addition, $\mathrm{HyPeR}(\gamma = \beta)$ outperforms all the baselines regarding the target policy value, and it even outperforms $\mathrm{HyPeR}(\gamma = 0)$ when the observation probability is low. This is because the secondary reward maximization component of HyPeR reduces the variance of the PG estimation, as secondary rewards are denser.
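The reporting metric is a simple affine rescaling so that, as stated above, a uniform-random policy scores 0 and the optimal policy scores 1; a minimal sketch:

```python
def relative_policy_value(v_pi: float, v_unif: float, v_opt: float) -> float:
    """Rescale a policy value so the uniform-random policy maps to 0
    and the optimal policy maps to 1."""
    return (v_pi - v_unif) / (v_opt - v_unif)

# Hypothetical values: a learned policy at 0.8, random at 0.2, optimal at 1.2
print(relative_policy_value(v_pi=0.8, v_unif=0.2, v_opt=1.2))  # 0.6
```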
We also observe that $\mathrm{HyPeR}(\gamma = 0)$ (the target reward maximization component of HyPeR) consistently performs better than r-DR, which suggests that the effective use of secondary rewards enhances the estimation of the target gradient $\nabla_{\theta} V_{r}(\pi_{\theta})$ alone, as in Theorem 2. How does HyPeR perform with varying training data sizes? Figure 2 evaluates the methods' policy values with different training data sizes. A larger data size generally makes gradient estimation more accurate, as it decreases the variance. Therefore, as the data size increases, the methods that use the target reward in their estimation (i.e., $\mathrm{HyPeR}(\gamma = \beta)$, $\mathrm{HyPeR}(\gamma = 0)$, r-IPS, and r-DR) perform increasingly better compared to the ones that do not (i.e., s-IPS and s-DR). The left plot in Figure 2 shows that $\mathrm{HyPeR}(\gamma = \beta)$ performs the best in most cases regarding the combined policy value $V_{c}(\pi)$. It even performs the best in terms of the target policy value $V_{r}(\pi)$, particularly when the data size is small, thanks to the use of secondary rewards and the resulting variance reduction. How does HyPeR perform with varying correlation between target and secondary rewards? Figure 3 shows the results with different degrees of correlation between the secondary rewards and the target reward, controlled by $\lambda$ in Eq. (16). A larger $\lambda$ indicates that the target reward is more predictable from the secondary rewards. We can also see that $\mathrm{HyPeR}(\gamma = \beta)$ is the best option in almost all cases for both the combined policy value and the target policy value.
We also observe that $\mathrm{HyPeR}(\gamma = \beta)$, s-IPS, and s-DR perform comparatively well when the secondary rewards are more correlated with the target reward (larger $\lambda$), as they gain more advantage from using the secondary rewards as surrogates. How do HyPeR and data-driven weight selection perform with varying true weight $\beta$? Section 4.1 showed that a data-driven tuning of the weight $\gamma$ can potentially improve the performance of HyPeR. To empirically evaluate whether our weight-tuning method yields further improvement, in addition to $\mathrm{HyPeR}(\gamma = \beta)$, we add HyPeR(Tuned $\hat{\gamma}^{*}$) and HyPeR(Optimal $\gamma^{*}$) for comparison. Figure 3: Comparing the combined, target, and secondary policy values of OPL methods with varying degrees of target-secondary reward correlation $(\lambda)$ on Synthetic Data. Secondary rewards can completely explain the target reward at $\lambda = 1$, and they are less correlated with smaller $\lambda$. Figure 4: Comparing the combined, target, and secondary policy values of OPL methods with varying $\beta$, which is a weight that balances target and secondary policy values in the combined policy value. Higher $\beta$ means the secondary policy value becomes more dominant. HyPeR(Tuned $\hat{\gamma}^{*}$) estimates the optimal weight using the method described in Section 4.1. HyPeR(Optimal $\gamma^{*}$) is a skyline that runs HyPeR with the truly optimal weight $\gamma^{*}$, which is not feasible in practice but serves as a useful reference. Due to the increased number of comparisons, we omit r-IPS and s-IPS from the baselines here, as they always perform worse than the DR-based methods (Appendix C.1 shows the complete results including r-IPS and s-IPS). Figure 4 provides the results, including HyPeR(Tuned $\hat{\gamma}^{*}$) and HyPeR(Optimal $\gamma^{*}$), with varying weight $\beta$.
From the results, we can see that HyPeR(Tuned $\hat{\gamma}^{*}$) always outperforms $\mathrm{HyPeR}(\gamma = \beta)$ and all other feasible methods, suggesting the effectiveness of our weight-tuning procedure. Interestingly, this observation also implies that an incorrect weight (i.e., $\hat{\gamma}^{*} \neq \beta$) can lead to better policy performance than $\gamma = \beta$, due to variance reduction at the cost of some bias in the PG estimation, as discussed in Section 4.1. Appendix C.1 also empirically demonstrates the advantage of leveraging the bootstrapping procedure in our tuning process. # 6 REAL-WORLD EXPERIMENT To assess the real-world applicability of HyPeR, we now evaluate it on the KuaiRec dataset (Gao et al., 2022). This is a publicly available, fully-observed user-item matrix dataset collected on a short-video platform, where 1,411 users have viewed all 3,317 videos and left watch duration as feedback. This unique feature of KuaiRec makes it possible to perform an OPL experiment without synthesizing the reward function (few other public datasets retain this desirable feature). Setup. We use the watch ratio ($=$ watch duration / video length) as the target reward $r$. We use four-dimensional secondary rewards; each dimension has a realistic reason why the platform would want to maximize it. The first dimension is binary, where $s_{1} = 1$ if $r \geq 2.0$, and $s_{1} = 0$ if $r < 2.0$; maximizing this reward lets the platform prioritize sessions with an exceptionally long watch ratio. The second dimension is also binary, where $s_{2} = -1$ if $r < 0.5$, and $s_{2} = 0$ if $r \geq 0.5$. This dimension is built to strictly penalize sessions with an exceptionally low watch ratio (raising the engagement floor).
The third dimension of the secondary rewards is the time since video upload (multiplied by $-1$), which is implemented to prioritize newer videos over older ones. The last dimension is video length, which is designed to prioritize longer videos when watch ratios are similar. Note that all continuous rewards are normalized to the range $[0, 1]$. We use the first dimension of the secondary rewards to express $F(s)$ for s-IPS and s-DR. Figure 5: Comparing the combined policy values of OPL methods under (left) varying target reward observation probabilities $(p(o|x))$, (center) varying data sizes $(n)$, and (right) varying true weights $\beta$ in the combined policy value on the KuaiRec dataset. To perform OPL experiments on the dataset, we randomly choose 988 users $(70\%)$ for training and 423 users $(30\%)$ for evaluation. We set the target reward observation probability to $p(o|x) = 0.2$ for all $x$, the training data size to $n = 1000$, and the weight to $\beta = 0.3$ as default experimental parameters. The actions are chosen randomly with size $|\mathcal{A}| = 100$, and Appendix C.3 shows results with varying numbers of actions. We define the logging policy as $\pi_{0}(a|x) = \mathrm{softmax}(\phi(x^{T} \mathcal{M}_{X,A} a + x^{T} \theta_{x} + a^{T} \theta_{a}))$ with $\phi = -2.0$, and run 100 simulations with different train-test splits. Results. Figure 5 provides real-world experiment results with varying target reward observation probabilities $p(o|x)$, varying training data sizes $n$, and varying weights $\beta$ in the combined policy value.
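The four secondary-reward dimensions described above can be sketched as follows (`secondary_rewards` is a hypothetical helper, not the authors' code; we assume the continuous inputs are already normalized to $[0, 1]$ as stated):

```python
import numpy as np

def secondary_rewards(watch_ratio, time_since_upload, video_length):
    """Build the four secondary-reward dimensions from the KuaiRec setup:
    s1 rewards exceptionally long sessions, s2 penalizes exceptionally short
    ones, s3 prioritizes newer videos, s4 prioritizes longer videos."""
    s1 = (watch_ratio >= 2.0).astype(float)   # 1 if watch ratio >= 2.0, else 0
    s2 = -(watch_ratio < 0.5).astype(float)   # -1 if watch ratio < 0.5, else 0
    s3 = -time_since_upload                   # time since upload, times -1
    s4 = video_length                         # normalized video length
    return np.stack([s1, s2, s3, s4], axis=-1)

# Three hypothetical sessions: long, short, and middling watch ratios
s = secondary_rewards(np.array([2.5, 0.3, 1.0]),
                      np.array([0.1, 0.9, 0.5]),
                      np.array([0.2, 0.8, 0.5]))
print(s.shape)
```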
In this section, we compare the combined policy values of r-DR, s-DR, $\mathrm{HyPeR}(\gamma = 0)$, $\mathrm{HyPeR}(\gamma = \beta)$, HyPeR(Tuned $\hat{\gamma}^{*}$) and HyPeR(Optimal $\gamma^{*}$). In Figure 5, we observe that $\mathrm{HyPeR}(\gamma = \beta)$ and HyPeR(Tuned $\hat{\gamma}^{*}$) both outperform the baselines by a large margin. Moreover, by comparing HyPeR(Tuned $\hat{\gamma}^{*}$) against $\mathrm{HyPeR}(\gamma = \beta)$, we can see that intentionally using the tuned weight leads to better performance, particularly under more challenging scenarios (i.e., sparse target reward observation and small training data size). We also observe, in Appendix C.3, that HyPeR outperforms the baseline methods in terms of not only the combined policy value, but also the target and secondary policy values. It is also interesting to see that, in terms of the secondary policy value, the policy performs much worse when we use only the target reward (i.e., the performance of r-DR at $\beta = 1$) than when we use only the secondary rewards (i.e., the performance of $\mathrm{HyPeR}(\gamma = \beta)$ at $\beta = 1$). This demonstrates that the secondary rewards used in our real-world experiment are not highly correlated with the target reward, ensuring the problem is non-trivial and making it crucial to leverage both types of rewards via our hybrid approach.
Off-policy learning (OPL) in contextual bandits aims to learn a decision-making policy that maximizes the target reward using only historical interaction data collected under previously deployed policies. Unfortunately, when rewards are only partially observed, the effectiveness of OPL degrades severely. Well-known examples of such partial rewards include explicit ratings in content recommendation, conversion signals on e-commerce platforms that are partial due to delay, and censoring in medical problems. One possible solution for dealing with such partial rewards is to use secondary rewards, such as dwell time, clicks, and medical indicators, which are observed more densely. However, relying solely on such secondary rewards can also lead to poor policy learning, since they may not align with the target reward. This work therefore studies a new and general problem of OPL, where the goal is to learn a policy that maximizes the expected target reward by leveraging densely observed secondary rewards as supplemental data. We then propose a new method called Hybrid Policy Optimization for Partially-Observed Reward (HyPeR), which effectively uses the secondary rewards in addition to the partially-observed target reward to achieve effective OPL despite the challenging scenario. We also discuss a case where we aim to optimize not only the expected target reward but also the expected secondary rewards to some extent; counter-intuitively, we show that leveraging the two objectives is in fact advantageous even for optimizing the target reward alone. Along with a statistical analysis of our proposed methods, empirical evaluations on both synthetic and real-world data show that HyPeR outperforms existing methods in various scenarios.
# 1 Introduction Recent successes in harnessing internet-scale data to train image and language foundation models [1, 2, 3, 4, 5, 6] have spurred an analogous push in robotics. In contrast with earlier methods that focused on achieving expert-level capabilities in narrow, controlled domains, recent efforts in robotics have aimed to generalize across tasks, object categories, object instances, environments, and the abundant variety of conditions present in the natural world [7, 8, 9, 10, 11, 12]. However, in order to train such generalist models, the typical behavior cloning (BC) approach requires prohibitively large amounts of action-labeled expert demonstrations. Datasets that are considered large-scale for robotics [7, 9, 13] take weeks or months to collect a few hundred hours of interaction data, falling far short of the roughly one billion hours of video data available on the internet. Therefore, methods that incorporate large-scale pre-training on these more abundant modalities tend to generalize better from limited action data [14, 15, 8]. Videos, in particular, contain rich priors on temporally extended dynamics, behaviors, and semantics, which can be used to learn a predictive model of the world [16, 17, 18, 19, 20, 21, 22, 23]. Prior work has leveraged video pre-training to learn representations using a number of auxiliary tasks such as reward and value prediction [24, 25, 26, 27] or time-contrastive loss terms [24, 28, 29]. While useful as representations, these methods only learn an encoder for static observations and do not explicitly model sequential dynamics. In contrast, model-based approaches can improve sample efficiency by separating the challenge of policy learning from learning dynamics [30]. Since videos contain rich priors over object and agent dynamics, model-based methods offer a promising avenue for learning from limited action data.
One such approach is to train a full video prediction model to capture visual dynamics, which can act as a reference generator for downstream policies [16, 31]. However, predicting in pixel space is computationally intensive and costly to run at high frequencies, forcing these methods to make compromises like open-loop control [16] or partial denoising [31]. As a result, a number of works have aimed to learn latent action representations from videos using next-frame prediction [32, 33, 34] or latent consistency [35], efficiently modeling features that are predictive of the future. While this avoids high inference costs, these representations are still trained on image reconstruction/prediction objectives, capturing textural details or visually salient features that may not be relevant to policy learning. Motivated by the desire to capture motion rather than appearance, optical flow and keypoint tracking have emerged as appealing abstractions for extracting action information from videos without action labels. Recent advances in computer vision have enabled efficient and precise pixel-level point tracking, even through occlusions and limited out-of-frame tracking [36, 37, 38, 39]. As these capabilities enable fine-grained capture of motion and scene dynamics, they have found applications in robotics for visual imitation learning [40] and tool use [41]. A number of prior works predict motion from images as optical flow [42, 43, 44] or by modeling the trajectories of specified keypoints [45, 46, 47, 48, 49, 50, 51]. However, many of these works still rely on prohibitively expensive video prediction models [51, 52, 44], object-centric mask extraction [51, 49, 47, 53], calibrated cameras [50], or inefficient online planning [48], limiting their generality. 
Two of the most general keypoint modeling approaches are ATM [54] and Track2Act [53], which aim to learn a universal keypoint dynamics model to predict the future trajectories of arbitrary points in an image, and condition a policy on these predictions. However, Track2Act relies on the often unrealistic assumption of a goal image and restricts its output space to single-object rigid-body transformations. ATM, while more flexible in its representation, relies on unrealistic point-sampling heuristics during training that cannot be replicated during inference. In addition, neither ATM nor Track2Act learns a latent space abstraction of keypoints, leaving them with high computational costs much like pixel-space video generation and potentially hindering generalization. Due to their high computational costs, Track2Act requires open-loop trajectory generation, and ATM only generates tracks for 32 points during policy inference, resulting in very coarse dynamics predictions. Further discussion and comparison to related work can be found in Appendix C. In this paper, we investigate the use of latent keypoint motion as an abstraction for learning valuable action priors from action-free video data, combining the benefits of latent dynamics prediction with the explicit motion information captured in keypoint trajectories. We propose AMPLIFY: Actionless Motion Priors for Learning Inverse and Forward Dynamics, a three-stage framework that flexibly decouples dynamics modeling from policy learning. First, we learn a compact latent space for modeling the motion of a dense grid of keypoints. Second, we train a latent dynamics model to predict a sequence of latent motions based on the current observation.
Finally, an inverse dynamics model learns to map predicted latent motions to low-level robot actions for execution. Notably, this modular approach allows the first two stages to be trained on any video data, while the inverse dynamics policy can be trained on any interaction data (Figure 1). We show that this has profound implications for policy generalization in Section 3.2. Figure 2: Architecture. AMPLIFY consists of a three-stage decomposition: (a) keypoint tracks are compressed into a discrete latent space using FSQ. For each timestep and each point, the decoder outputs a distribution in a local window centered around each point to reconstruct the instantaneous velocities, (b) a forward dynamics model is trained to predict the latent codes for the next $T$ timesteps given an input image and task description, and (c) an inverse dynamics model decodes predicted track tokens into an action chunk. Through extensive real-world and simulated experiments, we evaluate both the accuracy and downstream utility of our latent dynamics model. Compared to state-of-the-art baselines, we observe that AMPLIFY leads to improved keypoint trajectory prediction, lowering mean-squared error by over $3\times$. We then demonstrate that these predictions are useful for control; conditioning the inverse dynamics policy on latent motions is a valuable prior that allows for more data-efficient learning and generalization to tasks for which we have no action-labeled data. Finally, we examine the versatility of our motion-based representations beyond control for tasks such as conditional video prediction. In summary, we make the following key contributions: 1. We present the first latent keypoint dynamics model and investigate crucial design choices. 2.
We demonstrate state-of-the-art keypoint prediction accuracy on three large-scale video datasets. 3. We train a data-efficient and generalizable policy that can learn from action-free human data. 4. We apply latent motions to conditional video generation, outperforming previous baselines. # 2 AMPLIFY: Method Problem Setup – We assume access to two types of data: a video dataset $\mathcal{V} = \{(o_{t}, g)\}$ and a dataset of robot interaction data $\mathcal{R} = \{(o_{t}, q_{t}, a_{t})\}$, where $o \in \mathcal{O}$ are RGB image observations, $g \in \mathcal{G}$ is a goal (e.g., a language description), and $a \in \mathcal{A}, q \in \mathcal{Q}$ are the action and proprioceptive state of the robot, respectively. Given these datasets, our aim is to learn the parameters of a visual control policy $\pi : \mathcal{O} \times \mathcal{Q} \times \mathcal{G} \to \mathcal{P}(\mathcal{A})$, $\pi = f_{\mathrm{inv}}(o_{t}, q_{t}, f(o_{t}, g))$, composed of a forward dynamics model $f : \mathcal{O} \times \mathcal{G} \to \mathcal{Z}$ that learns a motion prior in a latent space $\mathcal{Z}$ and an inverse dynamics model $f_{\mathrm{inv}} : \mathcal{O} \times \mathcal{Q} \times \mathcal{Z} \to \mathcal{A}$ that maps the latent motion to a sequence of actions. Crucially, this decomposition allows for independent scaling of $f$ and $f_{\mathrm{inv}}$ by training on $\mathcal{V}$ and $\mathcal{R}$, respectively. We provide an extended discussion of the benefits of this decomposition in Appendix B. The following sections detail preprocessing (Sec. 2.1), learning the latent motion representation (Sec. 2.2), and training the forward (Sec. 2.3) and inverse (Sec. 2.4) dynamics models.
# 2.1 Preprocessing Keypoint Tracks We first augment $\mathcal{V} \to \mathcal{V}' = \{(o_{t}, \kappa_{t}, g)\}$ in a preprocessing step using the off-the-shelf point tracking model from [36] to obtain a set of keypoint tracks $\kappa_{t} \in \mathbb{R}^{T \times N \times 2}$ for each timestep $t$. More precisely, we initialize a $20 \times 20$ uniform grid of $N = 400$ points in each image $o_{t}$, then track the points through the next $T = 16$ frames $O_{t:t+T}$, capturing their 2-dimensional pixel coordinates. Although extracting specific task-relevant keypoints could potentially yield more informative predictions, we favor the uniform grid for its simplicity and generality, similar to [53], and find that it works effectively to model a variety of motions. Other works have attempted to select keypoints according to heuristics such as movement throughout the video [54], but we found that this led the model to learn spurious correlations and relies on unrealistic assumptions at test time. By reinitializing the grid of keypoints in each frame, we ensure no points are occluded and guarantee consistent coverage throughout every frame, even with moving cameras. See Appendix D.4 for further details on preprocessing. # 2.2 Motion Tokenization Unlike prior keypoint-based methods which predict directly in pixel space [54, 53, 48, 51], we argue that learning to predict dynamics in a compressed latent space enables a more efficient and generalizable representation, similar to findings in model-based reinforcement learning [55, 56, 57]. To this end, we learn a compact discrete latent space from pre-processed keypoint trajectories using Finite Scalar Quantization (FSQ) [58], a drop-in replacement for vector-quantized variational autoencoders (VQ-VAEs) [59].
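The grid-initialization step of the preprocessing can be sketched as follows (a minimal sketch; `init_uniform_grid` and the image size are our own illustrative choices, and the call to the off-the-shelf point tracker is omitted):

```python
import numpy as np

def init_uniform_grid(h: int, w: int, grid: int = 20) -> np.ndarray:
    """Return a 20x20 uniform grid of N = 400 query points (pixel coords)
    covering an h x w image, as used to seed the point tracker each frame."""
    ys = np.linspace(0, h - 1, grid)
    xs = np.linspace(0, w - 1, grid)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    return np.stack([gx, gy], axis=-1).reshape(-1, 2)  # (N, 2) as (x, y)

points = init_uniform_grid(224, 224)
print(points.shape)
# In the paper's pipeline, these N points would be handed to the tracker and
# followed over the next T = 16 frames, yielding tracks of shape (T, N, 2).
```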
FSQ employs an implicit codebook and a single reconstruction loss term, avoiding representation collapse and resulting in better codebook utilization. Figure 2a illustrates our tokenization scheme. We compute single-step velocities $u_{t} \in \mathbb{R}^{(T-1) \times N \times 2}$ from the pre-processed keypoint trajectories $\kappa_{t}$. Then, a keypoint encoder $\mathcal{E}_{\theta} : \mathbb{R}^{(T-1) \times N \times 2} \to \mathbb{R}^{b \times d}$ maps $u_{t}$ to a $d$-length sequence $\tilde{z}_{t}$ of latent vectors $\tilde{z}_{t,i} \in \mathbb{R}^{b}$, which are quantized via FSQ to a sequence $z_{t} \in \mathbb{Z}^{b \times d}$ of discrete codes, and decoded by the keypoint decoder $\mathcal{D}_{\theta} : \mathbb{R}^{b \times d} \to \mathbb{R}^{(T-1) \times N \times W^{2}}$ for reconstruction. Rather than predicting the 2-dimensional pixel coordinate of each point directly, the decoder outputs a categorical distribution over $W^{2}$ classes representing a local $W \times W$ window of motions centered at the same point in the previous timestep. This imposes an inductive bias on the model toward next-keypoint predictions that are close to locations in the current timestep, and additionally captures multimodal distributions better than performing regression on the coordinates. The keypoint encoder has a causally-masked transformer encoder architecture, and the keypoint decoder is an unmasked transformer decoder that cross-attends between a sequence of $N$ learned positional encodings and the quantized codes from the encoder.
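As a rough sketch of the FSQ idea used here (simplified: the straight-through gradient estimator is omitted, and `levels` is an illustrative choice, not the paper's configuration):

```python
import numpy as np

def fsq_quantize(z: np.ndarray, levels: np.ndarray) -> np.ndarray:
    """Finite Scalar Quantization sketch: bound each latent dimension with
    tanh, then round to one of `levels[i]` values. The implicit codebook has
    prod(levels) entries and needs no learned embedding table, which is why
    FSQ avoids the codebook collapse seen in VQ-VAEs."""
    half = (levels - 1) / 2.0
    bounded = np.tanh(z) * half      # each dim constrained to (-half_i, half_i)
    return np.round(bounded)         # nearest integer code per dimension

levels = np.array([8, 8, 8])         # hypothetical: codebook of size 8^3 = 512
z = np.random.default_rng(0).standard_normal((5, 3))
codes = fsq_quantize(z, levels)
print(codes.shape)
```

In training, the rounding step would be wrapped with a straight-through estimator so gradients flow to the encoder; this numpy version shows only the forward quantization.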
The encoder and decoder are jointly trained on $\mathcal{V}$ using a cross-entropy loss:

$$ \mathcal{L}_{AE}(\theta) = \mathrm{CE}\Big(\mathcal{D}_\theta\Big(h(\mathcal{E}_\theta(u_t))\Big), \omega_t\Big) $$

where $\omega_t = \Omega(u_t)$, $\Omega : \mathbb{R}^{(T-1) \times N \times 2} \to \mathbb{R}^{(T-1) \times N \times W^2}$ maps ground-truth velocity vectors to their corresponding class based on the displacement in the local $W \times W$ window, and $h$ is the FSQ discretization function. When available, multi-view inputs are tokenized together into a single sequence of codes. For simplicity, we do not include the view dimension in our notation. For ablations and an extended discussion on the effects of these design choices, we refer readers to Appendix E.

# 2.3 Forward Dynamics (Actionless Motion Prior)

After training the motion tokenizer, we train an autoregressive transformer $f(o_t, g)$ to predict the tokenized motion sequence $z_t$ corresponding to the video $O_{t:t+T}$ based on the current observation and task description. Image observations are encoded and projected into the embedding space of the transformer using the flattened feature map from a pre-trained ResNet-18 [60] to generate $7 \times 7 = 49$ vision tokens per image. The summary token from a T5 [61] text embedding of the task description is used to tokenize language inputs. These conditioning tokens are then concatenated with a start of sequence (SOS) token and the latent motion tokens to predict the next tokens in the sequence (Figure 2b). A block-causal attention mask is used, where the conditioning part of the sequence is non-causal and the motion tokens are causally masked.
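Two pieces described above lend themselves to short sketches: the window-class map $\Omega$ (a 2-D displacement mapped to one of $W^2$ classes in a local $W \times W$ window) and the block-causal attention mask. Both are illustrative assumptions: $W$, the clipping behavior, and the token counts are not the paper's exact settings.

```python
import numpy as np

def velocity_to_window_class(u, W=11):
    # Omega (sketch): map a 2-D displacement to one of W*W classes indexing
    # a local W x W window centered on the point's previous location.
    # W is an assumed window size; out-of-window displacements are clipped.
    half = W // 2
    d = np.clip(np.round(u).astype(int), -half, half) + half  # shift into [0, W-1]
    return d[..., 1] * W + d[..., 0]                          # row-major: dy * W + dx

def block_causal_mask(n_cond, n_motion):
    # Conditioning tokens attend to each other non-causally; motion tokens
    # attend causally to themselves and to all conditioning tokens.
    n = n_cond + n_motion
    mask = np.zeros((n, n), dtype=bool)                       # True = may attend
    mask[:n_cond, :n_cond] = True                             # cond <-> cond, full
    mask[n_cond:, :n_cond] = True                             # motion -> cond
    mask[n_cond:, n_cond:] = np.tril(np.ones((n_motion, n_motion), dtype=bool))
    return mask

classes = velocity_to_window_class(np.array([[0.0, 0.0], [1.0, -2.0]]))
mask = block_causal_mask(3, 4)
```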
We use a cross-entropy loss on the predicted codes without decoding to full keypoint trajectories, and only back-propagate gradients to the dynamics model while the tokenizer remains frozen (Equation 2). $\mathrm{sg}$ refers to the stop-gradient operator.

$$ \mathcal{L}_{\mathrm{forward}} = \mathrm{CE}\Big(f(o_t, g), \mathrm{sg}(\mathcal{E}_\theta(u_t))\Big) $$

Figure 3: Decoded keypoint trajectory predictions from AMPLIFY. Zero-movement points are not shown.

# 2.4 Inverse Dynamics

Finally, we learn an inverse dynamics model $f_{\mathrm{inv}}(o_t, q_t, z_t)$ that decodes latent motion tokens into a distribution over action chunks $\pmb{a}_t = a_{t:t+T}$, as shown in Figure 2c. Importantly, this module is not conditioned on the goal and instead acts as a general reference follower trained on any interaction data $\mathcal{R}$. The model uses a transformer decoder with a sequence of learned tokens that cross-attend to image tokens, a linear projection of proprioceptive state, and codes from the motion tokenizer to produce a sequence of $d$ action tokens. These action tokens are fed into an action head to output a distribution over length-$T$ action chunks. Following BAKU [62], we opt for an isotropic Gaussian prior on the action distribution. In Appendix E, we discuss alternative choices for the action head. The inverse dynamics model is trained with a negative log-likelihood (NLL) loss with a temporal discount $\gamma$ to reduce the impact of inaccurate predictions towards the end of the sequence.
$$ \mathcal{L}_{\mathrm{inv}} = - \sum_{\tau=t}^{t+T-1} \gamma^{\tau - t} \cdot \log p \left( a_\tau \mid \mu_{\tau-t}, \sigma_{\tau-t} \right) $$

where $\mu_{\tau-t} = f_{\mathrm{inv}}^{\mu}(o_t, q_t, z_t)[\tau - t]$ and $\sigma_{\tau-t} = \exp(f_{\mathrm{inv}}^{\sigma}(o_t, q_t, z_t)[\tau - t])$ are the predicted mean and standard deviation. The inverse dynamics model can be trained on ground-truth tokens $z_t = \mathcal{E}_\theta(u_t)$, but in practice, we fine-tune the action decoder on the predicted outputs $\hat{z}_t$ of the forward dynamics model. Both the motion tokenizer and the forward dynamics model are frozen for this stage. The keypoint decoder $\mathcal{D}_\theta$ is not used, as we condition $f_{\mathrm{inv}}$ on latent motions rather than decoded tracks.

# 2.5 Inference

During inference, the forward dynamics model takes the current observation and task description at each timestep $t$ and autoregressively predicts a sequence of latent motion tokens $\hat{z}_t = f(o_t, g)$. The inverse dynamics model then decodes these tokens, along with image and proprioception tokens, into an action chunk $\pmb{a}_t = f_{\mathrm{inv}}(o_t, q_t, \hat{z}_t)$. Following ACT [63], we use temporal ensembling to aggregate information over previously predicted action chunks using the same temporal discount $\gamma$.

# 3 Experiments

We evaluate AMPLIFY guided by two main axes of investigation: quality of dynamics prediction (Sec. 3.1) and utility of predictions for downstream tasks, including policy learning (Sec. 3.2) and conditional video generation (Sec. 3.3). See Appendix D for extended details on all experiments.
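The discounted NLL of Equation 3 and the ACT-style temporal ensembling of Sec. 2.5 can be sketched as follows; $\gamma$ and the normalization of the ensembling weights are assumptions consistent with the text, not the authors' exact implementation.

```python
import numpy as np

def discounted_nll(actions, mu, sigma, gamma=0.99):
    # L_inv (sketch): Gaussian negative log-likelihood over a (T, dim) action
    # chunk, with later steps down-weighted by gamma**(tau - t).
    T = actions.shape[0]
    nll = 0.5 * ((actions - mu) / sigma) ** 2 + np.log(sigma) + 0.5 * np.log(2 * np.pi)
    w = gamma ** np.arange(T)
    return float((w[:, None] * nll).sum())

def temporal_ensemble(preds_for_t, gamma=0.99):
    # preds_for_t[i]: the action predicted for the *current* timestep by the
    # chunk started i steps ago; older chunks are discounted by gamma**i
    # (mirroring the training-time temporal discount; an assumed scheme).
    preds = np.asarray(preds_for_t, dtype=float)
    w = gamma ** np.arange(len(preds))
    return (w[:, None] * preds).sum(axis=0) / w.sum()
```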
# 3.1 Quality of Forward Dynamics Prediction

We test the prediction accuracy of our forward dynamics model on a combination of three simulated and real-world video datasets, including both human and robot data: BridgeData v2 [64], a large-scale robot dataset consisting of over $60\mathrm{k}$ real-world rollouts of diverse manipulation tasks in 24 different environments; Something-Something v2 [65], a video dataset consisting of over 220,000 videos of humans performing everyday manipulation tasks with a variety of objects and primitive motion categories; and LIBERO [66], a benchmark of 130 diverse simulated robotic manipulation tasks, from which we use the observations from 6500 demonstration rollouts as a video dataset.

Table 1: Training dataset setup for each component by experiment. Subscripts id and ood indicate in-distribution and out-of-distribution tasks, and superscripts $H$ and $R$ distinguish human and robot video data. $\subseteq$ indicates training on limited subsets of the data.

Table 2: Prediction. AMPLIFY achieves $3.7\times$ better MSE and $2.5\times$ better pixel accuracy compared to ATM, and a $4\%$ improvement over Track2Act, which uses a goal image, and Seer, which requires full video prediction.

Table 3: Behavior Cloning performance on LIBERO. AMPLIFY is competitive with various state-of-the-art baselines, both with and without video pretraining.

We compare to ATM [54] and Track2Act [53], two state-of-the-art keypoint trajectory prediction approaches. In addition, on BridgeData v2 we compare track prediction accuracy to a baseline of first predicting videos with Seer [67], then applying CoTracker [36] to the initial set of points and tracking through the generated videos. Since our forward dynamics model predicts in latent space, we use the decoder from the Motion Tokenization stage for fair comparison in pixel space.
We measure performance on normalized tracks $( \kappa \in [ - 1 , 1 ] )$ using Mean Squared Error (MSE), Pixel-Wise Accuracy (Pixel Acc.), and a metric $\Delta _ { \mathrm { A U C } }$ originally used by point tracking methods [38, 36], and later used for track point prediction by Track2Act. See Appendix D.3 for further details on metrics. Results are summarized in Table 2, demonstrating that AMPLIFY consistently leads to more accurate predictions, even though the forward dynamics model is only trained on a latent consistency loss rather than pixel-space prediction objectives. On the LIBERO dataset, we achieve over twice the pixel-wise accuracy of ATM, and we outperform Track2Act (which, unlike our method, has access to goal images) on their chosen $\Delta _ { \mathrm { A U C } }$ metric across BridgeData v2 and Something-Something v2. We attribute this success to several design choices, including the compression of motion into a compact latent space, thus improving efficiency and generalization; the prediction of discrete tokens to leverage the expressive power of autoregressive transformers; and the use of local-window pixel space classification, which gives our forward dynamics model the ability to model rich multi-modal distributions of motion and capture fine-grained dynamics. Further investigation into design choices (E), detailed results (F.2), and qualitative visualizations (F.3) can be found in the Appendix. # 3.2 Utility of Predicted Latent Motions for Policy Learning Beyond prediction accuracy, we examine whether video pre-training using AMPLIFY can provide a useful prior for policy learning in both real-world and simulated experiments. 
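The two simpler metrics from Sec. 3.1 can be sketched as follows; the image size and the pixel threshold are assumed values for illustration, and $\Delta_{\mathrm{AUC}}$ is more involved and omitted.

```python
import numpy as np

def track_mse(pred, gt):
    # MSE over normalized track coordinates in [-1, 1]; pred/gt: (T, N, 2).
    return float(np.mean((pred - gt) ** 2))

def pixel_accuracy(pred, gt, img_size=128, thresh=1.0):
    # Fraction of predicted points within `thresh` pixels of ground truth,
    # after rescaling normalized coords to pixels. Both parameter values are
    # assumptions, not the paper's exact settings.
    err = np.linalg.norm((pred - gt) * (img_size / 2.0), axis=-1)
    return float(np.mean(err <= thresh))

gt = np.zeros((16, 400, 2))
pred = gt + 0.01  # a uniform ~0.64 px offset per axis at img_size = 128
```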
Figure 4: Success rates with 2, 5, and 10 demos per task across the LIBERO Long, Object, Spatial, Goal, and 90 suites.

Specifically, we evaluate AMPLIFY along four dimensions measuring (1) in-distribution performance, (2) few-shot learning, (3) cross-embodiment transfer, and (4) generalization. Table 1 summarizes the training datasets for different stages under each experimental setup. We evaluate performance using success rates on all five subsets of LIBERO, as well as a set of 3 real-world tasks: "Put the Rubik’s Cube on the Box" (Place Cube), "Stack the Green and Blue Cups in the Orange Cup" (Stack Cups), and "Open the Box and Move the Eggplant into the Bowl" (Open Box & Place Eggplant).

In-Distribution Performance – We first evaluate AMPLIFY in a standard behavior cloning setup, training both the forward and inverse dynamics models on only the demonstration data. We compare to state-of-the-art approaches with and without video pre-training. Results in Table 3 indicate that AMPLIFY, even without additional data, is competitive with SOTA behavior cloning methods and outperforms video pre-training methods trained with (ATM) and without (UniPi) keypoint tracks. In this setting, we observe that since there is sufficient information to learn tasks to a high degree without video pre-training, standard BC methods tend to match or outperform approaches using pre-training. However, in subsequent sections, we demonstrate that these approaches under-perform in limited data regimes and do not generalize effectively to new tasks.

Few-Shot Learning – We study whether AMPLIFY can learn from fewer action-labeled demonstrations by training the forward model on all videos, while the inverse model is only trained on $4\%$, $10\%$, or $20\%$ of the 50 demonstrations available for each of the subsets of LIBERO.
In Figure 4, we compare AMPLIFY with ATM, trained on all videos and the same subsets of action data, as well as a variant of AMPLIFY that does not condition on motion tokens to predict actions. Both AMPLIFY and ATM consistently outperform the no-pre-training variant, indicating that in low-data regimes, video pre-training on keypoint dynamics provides a strong prior for data-efficient policy learning. In addition, AMPLIFY achieves stronger performance than ATM on nearly every subset, suggesting that a latent motion representation has higher utility for action prediction than conditioning the policy directly on pixel-space track predictions. This seems to be especially true at the extreme low end: when provided with only 2 demonstrations per task, AMPLIFY achieves an average $1.94\times$ improvement over ATM. Full numerical results are included in Table 16.

Cross-Embodiment Transfer – Since the forward dynamics model can be trained on any observation data, we study whether videos of humans demonstrating a task can be used to improve policy learning. We train the forward dynamics model on both human and robot video data, while the inverse dynamics model is trained only on the action-labeled robot data. This setup highlights how the two stages can be decoupled to scale independently, unlike BC methods that cannot effectively harness action-free data. We evaluate success rates on three real-world tasks of varying difficulty, using Diffusion Policy as the BC baseline. For a fair comparison, we replace the Gaussian head used in other experiments with a Diffusion Policy head in the inverse dynamics model. This ensures that the only difference between the two approaches is whether the predictions from our forward dynamics model are used to condition the policy. Similarly to the previous section, we evaluate AMPLIFY in both the few-shot setting and the full demonstration setting.
Results in Table 4 demonstrate that AMPLIFY can effectively leverage additional human data to learn common dynamics between human and robot motions, and use the predicted latent motions to improve policy learning. The average improvements of $1.32\times$, $1.4\times$, and $1.5\times$ indicate a more prominent gap as task complexity increases. See Table 17 for complete results.

Table 4: Cross-Embodiment Transfer. By leveraging human video demonstrations to train the forward dynamics model, AMPLIFY outperforms Diffusion Policy on real-world tasks.

Generalization – Observing that AMPLIFY excels in learning from limited action data, we now turn to a setting where no action data is available for target tasks. Given only observations of target tasks, as well as a dataset of out-of-distribution interaction data, we evaluate how well AMPLIFY can solve the target tasks zero-shot. This challenging setting requires methods to both learn a good abstraction of the mapping from observations to actions, and also generalize that abstraction to predict correct actions on new tasks. To test this setting, we train the forward dynamics model on observations from all subsets of LIBERO, and train the inverse dynamics model and BC baselines on actions from only LIBERO 90. We then evaluate on four LIBERO target suites (Long, Object, Spatial, Goal), specifically designed to test different categories of generalization [66]. We find that BC methods completely fail in this scenario, achieving near-zero success rates (Table 5). We attribute this failure to two main shortcomings of BC: (1) the supervised imitation objective has no incentive to learn a generalizable abstraction, and (2) BC has no mechanism for harnessing additional data that may be informative, such as videos. In contrast, AMPLIFY attains an average $60.5\%$ success rate on target tasks, approaching the success rates of models that were directly trained on the target tasks.
This success highlights the value of latent dynamics prediction as a versatile interface for learning general priors from action-free videos. In addition, it suggests that training a general reference following inverse dynamics model may be a more generalizable objective compared to imitation learning. # 3.3 Utility of Predicted Latent Motions for Conditional Video Generation To demonstrate the utility of predicting keypoint trajectories beyond robotic control, we condition a video prediction model [44] on the latent motion tokens predicted by our forward dynamics model. We find that conditioning a video prediction model on our latent motion tokens leads to improved generation quality (Table 6). Compared to a baseline model that does not use track inputs, our approach yields better performance on all metrics (details on metrics in Appendix D.3). This improvement suggests that our latent motion representation captures rich, structured dynamics that improve not only control tasks but also the fidelity of generated video content. Further details on training and generation are provided in Appendix D.5 and qualitative results in Appendix F.3. Table 6: Video Prediction. Conditioning AVDC on predicted motion tokens from our dynamics model improves generated video quality on BridgeData v2.
Action-labeled data for robotics is scarce and expensive, limiting the generalization of learned policies. In contrast, vast amounts of action-free video data are readily available, but translating these observations into effective policies remains a challenge. We introduce AMPLIFY, a novel framework that leverages large-scale video data by encoding visual dynamics into compact, discrete motion tokens derived from keypoint trajectories. Our modular approach separates visual motion prediction from action inference, decoupling the challenges of learning what motion defines a task from how robots can perform it. We train a forward dynamics model on abundant action-free videos and an inverse dynamics model on a limited set of action-labeled examples, allowing for independent scaling. Extensive evaluations demonstrate that the learned dynamics are both accurate, achieving up to 3.7x better MSE and over 2.5x better pixel prediction accuracy compared to prior approaches, and broadly useful. In downstream policy learning, our dynamics predictions enable a 1.2-2.2x improvement in low-data regimes, a 1.4x average improvement by learning from action-free human videos, and the first generalization to LIBERO tasks from zero in-distribution action data. Beyond robotic control, we find the dynamics learned by AMPLIFY to be a versatile latent world model, enhancing video prediction quality. Our results present a novel paradigm leveraging heterogeneous data sources to build efficient, generalizable world models. More information can be found at https://amplify-robotics.github.io/.
# I. INTRODUCTION

For verification through model-checking, the complexity of the formal models used is a critical factor. A usual approach to address this is through abstraction, and we propose structural abstraction of the environment model. Technically, our approach approximates environment objects via a composition of cuboids of the same size called voxels. The size of the voxels directly influences the accuracy of the approximation: smaller voxels approximate objects better than larger ones, where the latter are an abstraction of the former. Since this abstraction does not involve any change in the behavior of the robot, we denote it as structural abstraction. In this paper, we present an approach that reduces the verification time; see [1], which this paper is based upon. It achieves this by systematically abstracting the environment model through voxels and by refining them locally depending on verification results. This approach has the advantage that the models are coarse at first and, hence, fast to verify, and only become more detailed in areas where verification runs of abstract models fail. To this end, our defined verification workflow starts with a representation using small voxels throughout, abstracts it by generating a representation consisting of larger voxels, still throughout, and from then on selectively adds the details needed for the actual verification. The remainder of this paper is organized in the following manner. First, we provide some background material in order to make this paper self-contained. Then we present our new approach to selective refinement of structural abstraction. For evaluating its feasibility, we present and explain the results of applying our new approach to verifying a safety-critical robot scenario. Finally, we compare our approach with related work and provide a conclusion.

# II. BACKGROUND

We provide some background material, first on our running example of a robot arm performing a pick-and-place task.
Subsequently, an existing methodology for verifying such robot applications is described. Since voxels and voxel grids play a major role in our paper, we give some background on them as well. Finally, we sketch counterexample-guided abstraction refinement (CEGAR), a methodology to systematically abstract and refine behavioral models. # A. Running Example For this paper, we reuse the running example presented in Rathmair et al. [2], where all the details are given. For the safety-critical aspects, see also below. In order to make this paper self-contained, we provide a short introduction to the use case of this running example here. Figure 1 illustrates a static environment where multiple pick-and-place tasks are to be performed, e.g., picking two objects at their initial position and placing them at different target locations. A robot manipulator with a gripper performs these tasks and is mounted on the white base plate in the background of the figure. At the start of the pick-and-place operation, a large object and a smaller one are located on the red tray in the middle. The first task of the robot is to pick the large object and transfer it to the red tray located on the right. After placing the object on the tray, the robot moves back to the middle tray and picks a second smaller object, which is transferred to the blue box, where it is dropped into. After doing so, the robot arm moves back to the initial position, where it pauses. Fig. 1. Environment model of the running example including gripper position (gray sphere) and trajectories (black lines) The reason is that the robot performs this application in cooperation with a human, who manipulates the larger object while the robot moves the smaller object. After the human finishes his or her task, the robot picks the object from the tray, drops it into the blue box, and moves back to the initial position. This concludes a cycle of the application, and a new cycle may start. 
We use this running example for verifying that no collision between the robot and its environment occurs. While verifying this scenario, we use a fixed trajectory with fixed positions of the robot. This is aligned with Rathmair et al. [2], where also the exact coordinates for the initial position and each position on a trajectory from the initial to the end position are used. Our proposed workflow operates on the environment representation only and, therefore, solely structural abstraction is performed.

# B. Verification Methodology for Robot Applications

Rathmair et al. [2] defined a generic verification approach for robot applications. It uses models describing the robot behavior and environment models and verifies the combined model against safety properties based on risk analyses, laws and standards. For details on this workflow, we refer to [2], but let us briefly sketch it here in order to provide the context of our new model-checking approach. The robot task is given as a behavioral tree, which defines the execution sequence of robot skills. Together with the definition of these skills, the behavioral tree defines the behavior of the robot – the behavioral model. This model is then transformed into a representation that the model-checker takes as its input. For verifying whether a robot application can be considered safe, an environment model is crucial. In the approach of Rathmair et al. [2], it is given as a 3D model of the relevant parts of the physical environment that the robot is operating in. This model is then transformed into a voxel grid, which is stored in the binvox file format [3]. A resolution of the voxel grid has to be chosen, which defines the number of voxels representing the environment and, in effect, its level of detail. Actually, this model is just an intermediate representation before a corresponding input for the model-checker tool is generated. The actual verification is done via the model-checker nuXmv [4], [5].
It receives the behavioral model, the model of the environment, as well as safety properties as its input.

# C. Voxel Grid

A voxel grid is a construct used in 3D computer graphics, which represents a particular 3D space in terms of its properties. To accomplish this, the voxel grid is composed of individual voxels of equal size, each representing a value in the grid. Instead of explicitly giving the position of a voxel in terms of 3D coordinates, the position is given relative to other voxels by indexing them. The 3D coordinates of a voxel are calculated using its $x$-, $y$- and $z$-index, the 3D space the voxel grid covers, and the resolution of the voxel grid. One way to build a voxel grid is to first define a cuboid (in 3D space) that it should represent. This space is then divided into smaller cuboids that are represented by voxels. The number of cuboids (voxels) is defined by the resolution of the voxel grid and is given via the resolution along each axis, e.g., $4 \times 8 \times 4$ divides the large cuboid into $(4 \times 8 \times 4 =)$ 128 smaller cuboids (voxels). Although different resolutions for each axis are possible, it is more common to use the same value for each axis, which is also a power of two, e.g., $4 \times 4 \times 4$ or $8 \times 8 \times 8$. Using this definition, each voxel of a grid with a certain resolution (e.g., $4 \times 4 \times 4$) can be seen as a composition of eight voxels of the next higher resolution (e.g., $8 \times 8 \times 8$). In general, a higher resolution leads to a better approximation of the space, i.e., more details can be represented as to where a particular property holds or not. Figure 2 shows the approximation of a sphere in different resolutions and indicates that higher resolutions are more precise. In this example, a voxel is filled in when its value is True, i.e., when the sphere at least partly occupies the voxel.
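The index-to-coordinate computation and the parent-child relation between resolutions can be sketched as follows; the function names and the voxel-center convention are illustrative assumptions.

```python
def voxel_center(index, origin, size, resolution):
    # Center coordinate of the voxel at (i, j, k), given the origin and
    # extent of the covered cuboid and the per-axis grid resolution.
    return tuple(o + (i + 0.5) * s / n
                 for i, o, s, n in zip(index, origin, size, resolution))

def parent_index(index):
    # The abstract voxel at half resolution containing this voxel:
    # eight children (2 x 2 x 2) share one parent.
    return tuple(i // 2 for i in index)

# A 4 x 8 x 4 m cuboid at a 4 x 8 x 4 resolution: each voxel is 1 m across.
c = voxel_center((0, 0, 0), origin=(0.0, 0.0, 0.0),
                 size=(4.0, 8.0, 4.0), resolution=(4, 8, 4))
```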
Therefore, it results in an approximation that is guaranteed to enclose the whole sphere, independently of the resolution of the voxel grid. However, other forms of determining the value of a voxel are also reasonable, e.g., setting the voxel value to True only if the entire voxel is part of the sphere. It entirely depends on the particular use case.

Fig. 2. Voxel representation of a sphere in different resolutions – left: sphere modeled in Blender [6]; middle: resolution $32 \times 32 \times 32$; right: resolution $8 \times 8 \times 8$

Usually, the voxel values are determined based on another form of representation of the 3D space. In our case, the sphere in the left part of Figure 2 was modeled in Blender and exported to a file in the Standard Triangle Language (STL) file format. We then used the tool binvox [3] to generate the voxel grids based on the STL file, each with a different resolution. Finally, we used the tool viewvox [7] to view the generated voxel grids and capture the views given in Figure 2. However, the approach presented in this paper is not limited to these tools. For example, details of the 3D space could be captured as a point cloud as well, which then would be used to generate a voxel representation.

TABLE I. Results without selective refinement (Resolution: resolution of voxel grid; Time: running time of verification; Length: length of counterexample)

# D. Counterexample-Guided Abstraction Refinement

Counterexample-guided abstraction refinement (CEGAR) [8]–[12] is an approach that uses a special form of abstraction (with one-sided abstraction error) called over-approximation to reduce the state space in order to allow model checking of more complex systems. Intuitively, an abstract model is an over-approximation of a concrete model if it allows for all the behaviors of the latter and possibly more. In the course of an abstraction, states of the concrete model are clustered into abstract states.
This may already lead to an increase of behavioral options through the transitions between clustered states in the abstract model. However, no transition may be removed from the abstract model if doing so would make a behavioral option of the concrete model unavailable in the abstract model. Over-approximation guarantees that, if a temporal logic expression in $ACTL^*$ evaluates to true in an abstract model, then it is true in the concrete model as well. If it evaluates to false in the abstract model, however, no conclusion can be drawn for the concrete model in this regard, since it is not known whether the concrete or the abstract model caused the violation. Whenever the model violates the property, the model-checker tool generates a counterexample. The approach by Clarke et al. [10] uses it to first determine whether this counterexample is a valid path of the unabstracted model. If not, the information provided by the counterexample is used for refinement of the abstract model. The CEGAR workflow as defined by Clarke et al. [10] starts with the computation of an initial abstraction. This is done by taking both the original model and the property to be verified into account. The abstract model is then checked against the given property, resulting either in a confirmation that the property holds, or a counterexample. In the former case the CEGAR verification workflow ends with pass, in the latter it proceeds. If an abstract counterexample is generated, it is checked whether it is real or not (spurious). This is done by checking the counterexample on the unabstracted model and has two possible outcomes. If the counterexample is real (i.e., the abstract counterexample is realizable in the unabstracted model), the CEGAR workflow ends with fail.
If the counterexample is spurious (i.e., the counterexample is only present in the abstract model), the workflow refines the abstract model in such a way that the given counterexample is no longer admitted by the refined (abstract) model, when checked against the same property. In essence, CEGAR is an iterative workflow to verify $ACTL^*$ properties by first generating a coarse abstraction of a given model and gradually refining the abstraction based on spurious counterexamples until the given property holds in the abstract model (and, therefore, in the unabstracted model). Alternatively, it finds a counterexample that exists in the abstract model as well as in the unabstracted model.

# III. SELECTIVE REFINEMENT OF STRUCTURAL ABSTRACTIONS

In this section, we present our new approach to selective refinement of structural abstractions. We start with motivating this approach for the structural abstraction of voxel grids used for environment representation. Then we present our approach for generating abstract voxels from more concrete ones. Based on it, we define our verification workflow using structural abstraction and selective refinement. Finally, we explain how the workflow is integrated into the verification methodology of Rathmair et al. [2], where it can make the safety verification of certain robot applications much more efficient.

# A. Why Structural Abstraction and Selective Refinements Matter

Structural abstractions and their refinements matter because the verification times of different voxel grid resolutions are very different. Table I shows the running times needed to verify our running example (more precisely, the scenario leading to collision) with different resolutions, i.e., to find a counterexample in this case. The verification times range from a fraction of a second to ${\sim}50\,\mathrm{h}$, depending only on the different resolutions of the voxel grid.
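The CEGAR workflow of Sec. II-D can be sketched as a loop. Every callable here is an illustrative placeholder standing in for the model checker and its abstraction machinery, and the toy instantiation merely exercises the control flow.

```python
def cegar(abstract, check, is_real, refine, model, prop):
    # Iterative CEGAR (sketch): pass if the property holds in the abstract
    # model; fail on a real counterexample; refine on a spurious one.
    m = abstract(model, prop)
    while True:
        holds, cex = check(m, prop)
        if holds:
            return ("pass", None)   # holds abstractly => holds concretely
        if is_real(cex, model):
            return ("fail", cex)    # realizable in the concrete model
        m = refine(m, cex)          # rule out the spurious counterexample

# Toy instantiation: the "abstract model" is just the list of counterexamples
# it admits; refinement removes the spurious counterexample that was found.
def abstract(model, prop): return ["c1", "c2"]
def check(m, prop): return (len(m) == 0, m[0] if m else None)
def is_real(cex, model): return cex in model
def refine(m, cex): return [x for x in m if x != cex]

result = cegar(abstract, check, is_real, refine, model=set(), prop="AG alpha")
```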
For a resolution of 128, it took the model-checker a couple of days to finally crash. This strong variation in running times is due to the fact that the number of voxels that the robot can occupy in the model of the environment increases the more fine-grained this model is, and this requires more calculations and comparisons with the voxels of the environment. The verification times needed for each individual model-checker run increase strongly for higher resolutions, since the number of voxels increases with the third power of the resolution. Not only do the verification times differ, but also the lengths of the counterexamples. For instance, the verification with a voxel grid of resolution 2 $(= 2 \times 2 \times 2)$ finds a counterexample of length 1 (the property checked is already violated in the initial state). With a resolution of 4, a counterexample of length 7 is found.

Fig. 3. Voxel representation of environment model of the running example — left: modeled in Blender; middle: resolution $128 \times 128 \times 128$; right: resolution $32 \times 32 \times 32$

Fig. 4. Environment representation with more details added by selective refinement

As shown in Figure 3, for representing the environment of a robot application, using a lower resolution – which means a shorter verification time – comes with the drawback of losing details. Whereas the higher-resolution (middle) voxel grid captures the top opening of the blue box quite well, the lower-resolution voxel grid does not. However, such details may be important when it comes to determining if a collision occurs. For example, when using the voxel grid with resolution $2 \times 2 \times 2$, the model-checker already detects a collision with the robot in its initial position. When using a resolution of $4 \times 4 \times 4$, no collision is detected at the same position.
This effect of detecting a collision with a specific resolution that disappears with a higher resolution is also the reason for the different counterexample lengths in Table I. Unfortunately, one does not know upfront which details of the environment matter and, therefore, which resolution is needed to verify a property. Verification engineers can pick a resolution based on their experience or perform a detailed analysis of the whole robot application. In general, a higher resolution is preferable, since it means a more realistic representation of the environment than a lower resolution. However, if the resolution is higher than needed, the model-checker run takes longer than necessary. If the resolution is too low, the verification may fail only due to the abstraction, and the verification engineer has to determine if this is the case or rerun the verification with a higher resolution. A heuristic may be to first use a resolution with verification runs in the order of a few minutes, and only increase it when necessary. Using (automated) structural abstraction in combination with selective refinements as proposed below allows mitigating or even overcoming those problems by (automatically) using a voxel grid of low resolution first, thus reducing the verification time, and selectively refining the voxel grid at points of particular interest to avoid finding counterexamples that only exist due to the low resolution used. Figure 4 illustrates how selective refinement of a voxel grid with a given resolution of $4 \times 4 \times 4$ may be changed to capture more details. The resulting representation captures more details for certain parts of the environment, e.g., the opening of the box. However, there is the challenge of determining where to refine the environment representation. This problem is specifically addressed by our new approach.

# B. Abstracting Voxels for Guaranteeing Over-approximation

As stated above, it is important how the abstraction is performed to guarantee that the abstract model is an over-approximation of the concrete model. Two or more voxels can always be joined together into one abstract voxel, so an abstraction function always exists; the question is how to define it such that it yields an over-approximation. In this section, we show how voxels can be abstracted to lower the resolution of the voxel grid in a way that guarantees over-approximation. CEGAR defines over-approximation for behavioral models only and not for structural ones. Therefore, we must dig deeper into what over-approximation does in the context of a particular property. Over-approximation allows the same or more behavior and, consequently, the model-checker is more likely to encounter states where an atomic proposition $x$ of a property like $\varphi = AG(x)$ becomes false. Therefore, it is also more likely that the entire property becomes false, which is detected by the model-checker. Assuming that a voxel can be either (partly) occupied by an obstacle (SOLID) or free of an obstacle (not SOLID), a possible atomic proposition to check whether a collision occurs in a certain step is:

$\alpha =$ voxels visited by the robot are not SOLID

To check if the entire robot application is collision-free, the corresponding $ACTL^{*}$ property is $\varphi = AG(\alpha)$, i.e., that the atomic proposition evaluates to true in each step. Considering that the abstraction does not alter the behavioral part of the model (the robot still moves along the same trajectory given in numerical coordinates), using different (structural) abstractions means checking which parts of the 3D space are visited by the robot at a different granularity (resolution). For example, this means checking $4 \times 4 \times 4 = 64$ voxels instead of $8 \times 8 \times 8 = 512$ voxels if they are visited and SOLID.
However, to guarantee that all violations (collisions) detected by the finer-grained analysis are also detected by the coarser-grained one, the voxel values must be set right. Assume we have two voxels, a free one and an occupied one, named $v_0$ and $v_1$, respectively: $v_0$ has the value False (since it is not SOLID), and $v_1$ has the value True (since it is SOLID). For reducing the number of voxels by joining them together into one abstract voxel $v_a$, it must be defined which value is assigned to that voxel. The abstract voxel $v_a$ needs to have the value True in order to fulfill over-approximation. Considering an abstract voxel as SOLID when at least one (less abstract) voxel it is composed of is SOLID also corresponds nicely with the output of the tool binvox. Given the same STL-file, binvox generates voxel grids in such a way that an obstacle occupies more space in a lower-resolution voxel grid than in a higher-resolution one. Actually, we used voxel grids of different resolutions exported by binvox to check this approach and to test the implementation of the abstraction mechanism. In general, however, the abstraction approach depends on the property to be checked. Consider the following property:

• AG(voxels visited by the robot are FREE)

Assume that a value of True means that the voxel is FREE. Then the abstraction mechanism of setting the value of an abstract voxel to True when at least one of the more concrete voxels has a value of True does not fulfill over-approximation.

# C. Our Verification Approach using Selective Refinement of Structural Abstractions

The verification approach presented here uses structural abstraction of voxel grids in combination with selective refinements of individual voxels.
It is inspired by the ideas behind CEGAR, as we also perform an initial abstraction and improve the abstract model in incremental steps using information gathered by analyzing counterexamples. In contrast to CEGAR, however, our approach works on structural abstractions rather than behavioral ones. Our workflow, as illustrated in Figure 5, consists of several steps. First, there is an initial step of abstracting the voxel grid provided to the workflow. After that, the whole workflow mainly operates on a voxel grid with reduced (as compared to the provided one) resolution. While the provided higher-resolution voxel grid is still available during the whole workflow, it is not directly used for verification. After the initial abstraction is available, the verification loop starts. It consists of performing a verification run, analyzing the counterexample and correspondingly refining the environment model. Details on these steps are given below. The workflow ends if either the verification run does not detect a violation of the property (and, therefore, does not generate a counterexample) or the proposed refinement is considered not valuable or even impossible. In the first case, the workflow ends with the result that the verification has passed. In the second case, the verification fails and the workflow provides the counterexample that caused it.

Fig. 5. Verification workflow with selective refinement of structural abstractions

During the execution of the workflow, we distinguish between three resolutions of a voxel (grid). Those are:

• Max-resolution: The highest resolution used during the execution of the workflow. This resolution of the voxel grid is stored in a binvox-file.

• Base-resolution: This resolution is chosen by the verification engineer and is the resolution the voxel grid is reduced to at the start of the workflow. It represents the coarsest resolution that is used during the execution of the workflow. It is the resolution of the initial abstraction.
• Voxel-resolution: This is the specific resolution of a particular voxel and is neither greater than Max-resolution nor smaller than Base-resolution.

To understand the difference between these resolutions better, we give a short example. The voxel grid provided to the workflow has a resolution of 128 ($128 \times 128 \times 128$) and, therefore, divides the space covered by the voxel grid into 2,097,152 individual voxels. Hence, the Max-resolution of the workflow is 128. The initial abstraction is set to generate a voxel grid with a resolution of 4 ($4 \times 4 \times 4$), i.e., the Base-resolution of the workflow is 4. Note that, with its 64 voxels, this voxel grid covers the same space as the 2,097,152 voxels provided to the workflow. Therefore, those 64 voxels have 32,768 times the size of the voxels provided to the workflow. However, instead of explicitly spelling out the size of voxels, we just say that a voxel has a particular Voxel-resolution. In our example, all voxels in the voxel grid provided to the workflow have a Voxel-resolution of 128 and all voxels of the voxel grid generated by the initial abstraction have a Voxel-resolution of 4. After introducing refinements, the environment representation has voxels of various sizes and, therefore, various Voxel-resolutions. As already stated, the first step of our workflow is to generate a voxel grid with reduced resolution. Depending on the Max-resolution, the preset Base-resolution and the property to be checked, an abstraction according to our approach for abstracting voxels is done. In our example, this step determines the value associated with a voxel of resolution 4 by joining 32,768 voxels of resolution 128 together. All the 128-resolution voxels a 4-resolution voxel is composed of are joined.
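To make this joining step concrete, the following sketch is our own illustration (not code from the paper's artefact); `constituent_range`, `abstract_value`, and the grid lookup `solid_at_max` are hypothetical names, and the OR-rule shown is the one required for the collision property.

```python
def constituent_range(index, res, max_res):
    """Inclusive index range along one axis of the Max-resolution voxels
    that the voxel with index `index` at resolution `res` is composed of."""
    factor = max_res // res  # e.g., 128 // 4 = 32
    return index * factor, (index + 1) * factor - 1

def abstract_value(ix, iy, iz, res, max_res, solid_at_max):
    """Value of the abstract voxel (ix, iy, iz) at resolution `res`.
    For AG(visited voxels are not SOLID), over-approximation requires the
    abstract voxel to be SOLID as soon as ANY constituent is SOLID; for a
    property phrased over FREE voxels, the dual rule (all) would be needed."""
    (x0, x1), (y0, y1), (z0, z1) = (constituent_range(i, res, max_res)
                                    for i in (ix, iy, iz))
    return any(solid_at_max(x, y, z)
               for x in range(x0, x1 + 1)
               for y in range(y0, y1 + 1)
               for z in range(z0, z1 + 1))
```

With `res = 4` and `max_res = 128`, each call inspects $32^3 = 32{,}768$ constituent voxels, matching the numbers in the example above.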
For example, the 4-resolution voxel with index $x_4 = 0$, $y_4 = 1$, $z_4 = 0$ is composed of all 128-resolution voxels whose index fulfills the following expressions: $x_{128} \in [0, 31]$, $y_{128} \in [32, 63]$ and $z_{128} \in [0, 31]$. The first step in the verification loop executes Perform Verification. During this step, all the models required (like environment model, behavioral model, etc.) are put together to form the overall model. This model is then handed over to the model-checker to be verified against the given property. While all the technical details can be found at https://zenodo.org/record/7622703, let us briefly sketch how voxel grids are represented for passing them into the model-checker, both the unrefined and the abstracted ones. There is a set of arrays, where one array stores the largest voxels with the highest resolution, and for each refinement of a voxel there is an extra array. An index structure serves for using the right arrays to be encoded into the input language of the model-checker. The output of the model-checker is the result of this step and is either the statement that the property is verified successfully, or a counterexample, which shows that the property is violated. Depending on this output, the workflow either stops with the result of successful verification of the model against the property, or it proceeds with executing the next action, Analyze Counterexample. Analyze Counterexample analyzes the output of the previous verification step. In our implementation, the output is captured as a log file and, hence, the file is parsed and the (human-readable) counterexample documented in it is analyzed fully automatically. The counterexample contains information on how variables of the model change at each time step of the verification run.
With this information and knowing the names of the variables used to represent the index of a voxel, the specific voxel that caused the property violation and the time step when the violation occurred can be determined. Based on this information, the action then outputs a suggestion for voxels to be refined. In our implementation, the suggestion is the specific voxel causing the violation. However, as we explain in the discussion section below, a more elaborate suggestion containing multiple voxels may be possible. The suggested refinement can only be performed if the voxel does not have Max-resolution already. Otherwise, no further refinement is possible and the workflow ends with the result of a failed verification. The actual refinement is performed in the Generate Refined Environment Model action. In this step, the (abstract) voxel is refined into eight new voxels with the next higher resolution, e.g., a voxel of resolution 8 is refined into eight voxels of resolution 16. To determine the Max-resolution voxels to be combined to form a particular new voxel, the position of the voxel to be refined and the position of the new voxel inside of the refined one are used. To compute the value (SOLID or not) for each new voxel, the identified Max-resolution voxels are combined. Finally, the whole environment – consisting of the voxel grid in Base-resolution and the refined ones – is exported to its nuXmv input representation. The newly generated environment representation is then used during the next Perform Verification step of the workflow. The cycle of verification, analyzing a counterexample, and generating a new environment representation continues until the workflow terminates.

# D. Integration into Preexisting Methodology

The proposed workflow is integrated into the preexisting methodology of Rathmair et al. [2] for its application to safety verification. Compared to the original methodology, only minor changes were necessary.
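The verification loop just described (Perform Verification, Analyze Counterexample, Generate Refined Environment Model) can be sketched as follows. This is a minimal illustration under our own assumptions, not the actual implementation: `run_model_checker`, `blamed_voxel`, and `refine` are hypothetical placeholders for the nuXmv invocation, the log-file analysis, and the environment-model generation, and a blamed voxel is represented as an `(index, resolution)` pair.

```python
import itertools

def refine_indices(ix, iy, iz):
    # Refining voxel (ix, iy, iz) at resolution r yields eight voxels at
    # resolution 2r, one per combination of doubled-and-offset indices.
    return [(2 * ix + dx, 2 * iy + dy, 2 * iz + dz)
            for dx, dy, dz in itertools.product((0, 1), repeat=3)]

def selective_refinement(env, prop, max_res,
                         run_model_checker, blamed_voxel, refine):
    """Sketch of the verification loop. In contrast to classic CEGAR there
    is no behavioral spuriousness check: a counterexample only counts as
    spurious while the blamed voxel can still be refined further."""
    while True:
        cex = run_model_checker(env, prop)
        if cex is None:
            return ("passed", None)          # no violation detected
        (ix, iy, iz), res = blamed_voxel(cex)
        if res >= max_res:
            return ("failed", cex)           # no further refinement possible
        env = refine(env, refine_indices(ix, iy, iz), 2 * res)
```

Note that the loop terminates on exactly the two conditions stated above: a verification run without a counterexample, or a blamed voxel that already has Max-resolution.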
Instead of verification directly using the model-checker nuXmv, our tool implementing the new verification workflow is used, which invokes nuXmv as specified above. In addition, adapting the Robot Environment Path was also necessary, since the original methodology would hand over an SMV model of the environment, whereas our workflow asks for binvox-files representing the environment. Hence, both processes of the original Robot Environment Path were replaced by a new process generating the binvox-file(s) needed. As defined above, the resolution of this binvox-file is also the Max-resolution of the workflow. Since our workflow also uses the model-checker nuXmv, it has to generate the SMV models of the environment itself.

# IV. RESULTS

We applied our new approach to verifying our running example, which originated from Rathmair et al. [2] and is also sketched above. Actually, we model-checked one cycle only; assuming that the cycles are identical, the verification of one cycle is sufficient. Table I above shows that even for a lower resolution of 64, ${\sim}50$ h were needed for finding a counterexample. These results correspond to the performance of the approach without selective refinement by Rathmair et al. [2]. While the amount of memory needed increases with more fine-grained environment models, it was never a problem in our model-checking runs. Hence, we only report running times. All model-checking runs in this paper were performed on an Ubuntu 20.04 system running on an AMD Ryzen 7 5800X 8-core processor with 3.8 GHz, 64 GB memory and a GeForce GT 710. Using these results from previous work for comparison, we present and explain the results of applying our new approach, demonstrating that it can strongly improve model-checking performance. More precisely, we verified this scenario using our workflow with a Max-resolution of 128.
Note again that there were no results for verification without selective refinement for this high resolution, since the model-checker did not finish with a result even after a couple of days. Table II shows the results for our approach with selective refinement, for various values of Base-resolution, which has to be set before each verification run as a kind of parameter. A Base-resolution of 128 does not make sense for a Max-resolution with the same value, since using a Base-resolution equal to the Max-resolution leads to the same model-checker run as without selective refinement. Hence, there is no related result given in this table. Also the verification with a Base-resolution of 64 took extremely long, but using such a high Base-resolution is not reasonable since it constrains our approach too much.

TABLE II RESULTS WITH SELECTIVE REFINEMENT – MAX-RESOLUTION 128

Base-Resolution ... Base-Resolution used in the Workflow
Time ... Running Time of Verification
Length ... Length of Counterexample
Refinements ... Number of Refinements made

The lowest running times in the order of a few minutes have been achieved with low values for Base-resolution, i.e., when starting the model-checker runs with low resolutions, which finish fairly quickly. After that, selective refinement shows its benefits by exploring higher resolutions only where necessary for finding a real counterexample. Hence, with lower values of Base-resolution, a (real) counterexample for a Max-resolution of 128 could be found within a few minutes, while the model-checker directly running with the high resolution and without selective refinement did not finish with a result even after a couple of days. For the purpose of illustrating selective refinements, Figure 6 visualizes an example of the evolution of the voxel grid when applying the workflow for performing refinements with a Base-resolution of 4 and Max-resolution of 128. The leftmost subfigure shows the environment in a resolution of 4.
Neither the blue box nor the raised tray are visible. After a few refinements, both can already be seen vaguely. With its 35 refinements, the final representation gives even more details. Note that the opening of the box seems not essential to disprove the property (in this case).

Fig. 6. Evolution of the voxel grid with Base-resolution 4 and Max-resolution 128 – left: without any refinement; middle: with 16 refinements; right: with 35 refinements

We also investigated the relative performance depending on the parameters Base- and Max-resolution. For that, we used three different scenarios. The first scenario finds a counterexample even when a Max-resolution of 128 is used (i.e., the previous scenario). The second scenario covers the case that no counterexample is generated with a Max-resolution of 128, but there is one with a Max-resolution of 64. In the third scenario, no counterexample is found at all, not even using a Max-resolution of 2. Table III shows the overall time used for the verification of the first scenario depending on the parameters Base- and Max-resolution, ranging from 2 to 64 and from 2 to 128, respectively. The running times from Table II are included in the bottom row to facilitate comparisons. Note that the running times from Table I are also included, in the diagonal of Table III. For all combinations with a Max-resolution of 4 or 8, our approach using selective refinement takes slightly more time, but this is in the order of less than a second and, hence, does not really matter. We suspect that this is due to the workflow's overhead for generating the environment model(s), which is small in absolute terms but, compared to these very short verification times, relatively large. For larger Max-resolutions, this overhead can be neglected.
The shortest verification run that leads to the counterexample with the longest path – all other counterexamples are considered spurious since they are only due to the structural abstraction – can be achieved with Max-resolution 64 and Base-resolution 4. Using a Max-resolution of 128 does not make a big difference in terms of verification time: it is checked immediately whether the counterexample still exists in the higher-resolution representation. We also model-checked a second scenario, which is a slightly adapted version of the running example, in order to get results for a near-miss situation. In this second scenario, we slightly increased the distance between the robot arm and the table by (virtually) expanding the height of the mounting plate slightly. We intentionally defined the adapted height in such a way that a verification with resolution 64 results in a counterexample, while a verification with resolution 128 passes. The results are given in Table IV. Here, a large time difference between the results of Max-resolution 64 and 128 is shown. Since a collision in this scenario is only nearly missed, the verification run with Max-resolution 64 found a counterexample quickly and this run was short, since with this resolution it looks like there is a collision. With the higher Max-resolution of 128, however, this collision cannot be seen anymore, so that there is no corresponding counterexample and the verification cannot stop early. We were interested in whether this means that our new approach is only as efficient as shown in Table III when real counterexamples exist. Hence, we defined a related third scenario, where the obstacles are out of the robot's reach and, therefore, a collision is obviously avoided. Table V shows the results for this scenario, and it shows running times comparable to Table III. That is, only in the near-miss situation were the running times with Max-resolution 64 larger.
TABLE III VERIFICATION-TIMES OF ALL BASE- AND MAX-RESOLUTION COMBINATIONS FOR THE Collision SCENARIO

Base-Resolution ... Base-Resolution used during the Workflow
Max-Resolution ... Max-Resolution used during the Workflow

TABLE IV VERIFICATION-TIMES OF ALL BASE- AND MAX-RESOLUTION COMBINATIONS FOR THE Near Miss SCENARIO

Base-Resolution ... Base-Resolution used during the Workflow
Max-Resolution ... Max-Resolution used during the Workflow

TABLE V VERIFICATION-TIMES OF ALL BASE- AND MAX-RESOLUTION COMBINATIONS FOR THE Obviously Safe SCENARIO

Base-Resolution ... Base-Resolution used during the Workflow
Max-Resolution ... Max-Resolution used during the Workflow

Across all scenarios, the verification workflow with Max-resolution $x$ outperforms the verification without structural abstraction at resolution $x$ while finding the same counterexample in terms of robot position. The Base-resolution only influences the time advantage gained by using our approach. The data source as well as the source code of the approach presented here are available in an executable artefact at https://zenodo.org/record/7622703.

# V. RELATED WORK

A bounding volume hierarchy arranges the bounding volumes of objects into a tree structure [13, Chapter 6]. It is often used for collision detection, e.g., in game engines. One way to model a voxel grid as a bounding volume hierarchy is using an octree. A node of an octree has exactly eight children or no children at all. Each leaf node of the octree represents a voxel of the voxel grid. All other nodes of the octree represent what we call abstract voxels of varying sizes, depending on the node's level in the octree. However, modeling the whole octree would lead to a very large environment model. That is why our workflow uses environment representations that include only distinct parts of the octree and, hence, of the bounding volume hierarchy of the voxel grid, depending on the selective refinements introduced.
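To make the octree view of a voxel grid concrete, here is a minimal sketch of our own (not code from the artefact), assuming the OR-rule for SOLID voxels described in Section III-B:

```python
class OctreeNode:
    """Minimal octree view of a voxel grid: a leaf is a concrete voxel,
    an internal node (exactly eight children) is an abstract voxel."""

    def __init__(self, solid=False, children=None):
        assert children is None or len(children) == 8
        self.children = children
        # OR-rule: an abstract voxel is SOLID if any constituent is SOLID.
        self.solid = solid if children is None else any(c.solid for c in children)

    def is_leaf(self):
        return self.children is None
```

In this view, our environment representations correspond to partial octrees: only the refined branches are expanded, while everything else stays at the level of abstract voxels.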
Multi-level voxel models [14], [15] may be an alternative to our approach of representing different resolutions. They use two types of voxels – coarse and fine resolution voxels. The coarse resolution voxels form the base voxel grid. At object boundaries, a coarse resolution voxel is subdivided into fine resolution voxels. This enables a more detailed approximation of objects with a lower overall number of voxels as compared to using fine resolution voxels throughout the whole model. Our approach also refines coarse voxels into finer-grained ones. However, it selectively refines voxels deemed important for a particular verification task, and it does so on several levels of granularity. Babić and Hu [16] introduced structural abstraction of software by providing an automatic abstraction-checking-refinement framework. It also follows the general approach of CEGAR of generating a coarse initial abstraction and refining it based on counterexamples. The framework uses “the natural function-level abstraction boundaries present in software” [16] to omit details about function behavior. Initially, their approach treats the effects of function calls as unconstrained variables. Later, constraints are added based on the counterexamples. In essence, they use structural information gathered by analyzing the software to guide behavioral abstractions. Our approach differs since it does not abstract the behavior of a system. Instead, it abstracts the structure of the environment the system is embedded in. For the verification of hybrid systems based on the CEGAR approach, Clarke et al. [11], [12] use both structural and behavioral abstractions. In their example, the width of a street is represented, where a car is not allowed to drive over the borders of the street, neither to the left nor to the right. In contrast to this example, the formulas in our example are much more numerous and complex; hence, we automatically create them from the voxel representation.
In particular, our focus is entirely on the structural abstractions possible here and how to derive them from the voxel representation. Fishwick and Lee [17] group the behavior of different (physical) entities in a structure, which is also abstracted in their approach. One state in their finite state automata (FSA) represents a high-level state of the system. The system’s behavior in a specific state is then comprised of the behavior of the individual entities in this state. In contrast, we neither use abstract states nor abstract behavior in our approach. Yang and Chen [18] use artificial intelligence (AI) based shape abstraction to map point clouds into a representation consisting only of cuboids. However, inferring that a concrete model satisfies a property when the abstract model satisfies it relies on over-approximation – in our case, this means that the whole “real” object is contained in its abstract representation. Using their method for abstracting the environment does not guarantee over-approximation, however. Therefore, their approach cannot be used as the basis for generating the voxel grids that we need for our approach of selective refinement. Henzinger et al. [19] introduced lazy abstraction, which continuously builds and refines a single abstract model on demand, driven by the model-checker, so that different parts of the model may exhibit different degrees of precision, just enough to verify the desired property. Our approach uses a similar principle, but it works on structural models as opposed to the behavioral models of C programs. Rathmair et al. [2] proposed a verification workflow for robot applications that the one presented here builds upon. To overcome the hassles that come with using a high-resolution voxel grid, Rathmair et al. divide the environment into multiple objects, e.g., the table and the blue box. Each voxel grid has a different resolution, depending on its size and the details of the object required for the verification.
For example, a voxel grid of resolution 16 covering only the blue box has significantly smaller voxels and, therefore, provides more box details than a voxel grid with the same resolution covering the whole environment. Our new approach is different because the level of detail is not homogeneous over a whole object. Instead, the model may provide more details of certain parts of an object when they are relevant, and the selective refinement automatically determines where more details are needed.

# VI. DISCUSSION

There is a tradeoff between the number of model-checker runs for refinements and the time needed for each model-checker run. Generally, a model-checker run with a lower resolution takes less time than one with a higher resolution. However, workflow executions with a lower Base-resolution need more refinements and, therefore, more individual model-checker runs. Based on our analysis of this tradeoff, determining a value for Base-resolution upfront can be done by educated guessing and, later, through experience. Although our new verification workflow has shown its potential to strongly outperform verification without refinements for our example, we see room for further improvement. Detailed analysis of the individual refinements made while verifying our example showed that generated counterexamples tend to have the same length. That is, the same robot position as in the verification run before causes a counterexample even though a refinement was made. The reason is that a counterexample only highlights one voxel that causes a violation, although, at each robot position, more than one voxel may cause a violation. This may lead to situations where the model-checker finds a violation, a refinement is done, and the next verification step finds a violation at the same robot position. We saw this phenomenon when our approach tried to overcome the counterexample of length 21, found with a resolution of 32 without selective refinement.
Manually analyzing this exact robot position revealed that up to 6 voxels – depending on the Base-resolution – violate the property. However, the workflow refines the voxels one by one, so that up to 5 additional model-checker runs may be needed. Static analysis of the exact situation where the counterexample occurs, and not only refining the voxel given in the counterexample, could reduce the number of model-checker runs needed and, hence, the verification time. Our proposed workflow does not check the outcome of the refinement of a voxel for the following situation. A SOLID voxel proposed for refinement may only be composed of SOLID higher-resolution voxels, i.e., voxels with the subsequent higher resolution. In such a situation, it is unavoidable that the model-checker detects a collision in its next run. Refining the higher-resolution voxels at the same time, even over several refinement levels, could skip such model-checker runs. In our robot application, such a situation was encountered multiple times. With a Base-resolution of 4, already the first refinement has this structure. Introducing multiple refinements in such a situation could have skipped several model-checker runs and, hence, reduced the overall verification time. There can even be the extreme case that a SOLID voxel proposed for refinement is entirely composed of SOLID voxels even at the Max-resolution. In this case, the introduction of new refinements can be skipped since the model-checker is guaranteed to find a counterexample anyway, leading to reduced verification time. Although the structural abstraction proposed in this paper is done on voxel grids only, the overall approach is not limited to them. However, it is necessary to have some representation of the relationship between structural elements on one level of abstraction to elements on another level of abstraction.
In the case of voxel grids, the indexes of a voxel allow computing the indexes of the lower-resolution voxel that it is part of and the indexes of all higher-resolution voxels that are abstracted by it. In general, it may be necessary to represent this relationship explicitly. Finally, since the voxel grid provided to the workflow is already an abstraction of the real environment, it must be ensured that it is generated in such a way that this abstraction is an “over-approximation” of the real environment. Otherwise, the workflow cannot guarantee that a collision in the real environment is detected. However, this issue is out of the scope of the workflow, and it also arises when directly performing model-checking without abstractions and refinements, of course.
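The index relationships just described, together with the refinement-skipping check from the discussion above, might look as follows. This is our own sketch under stated assumptions; `solid_at_max` is a hypothetical lookup into the Max-resolution grid.

```python
def parent_index(ix, iy, iz):
    # Index of the voxel at the next lower resolution containing (ix, iy, iz).
    return ix // 2, iy // 2, iz // 2

def covered_max_indices(ix, iy, iz, res, max_res):
    """All Max-resolution indices abstracted by voxel (ix, iy, iz) at `res`."""
    f = max_res // res
    return [(x, y, z)
            for x in range(ix * f, (ix + 1) * f)
            for y in range(iy * f, (iy + 1) * f)
            for z in range(iz * f, (iz + 1) * f)]

def refinement_pointless(ix, iy, iz, res, max_res, solid_at_max):
    """If every Max-resolution constituent is SOLID, each refinement level
    will again be fully SOLID, so the intermediate model-checker runs
    discussed above could be skipped."""
    return all(solid_at_max(x, y, z)
               for (x, y, z) in covered_max_indices(ix, iy, iz, res, max_res))
```

For representations other than voxel grids, such index arithmetic is not available, and the abstraction relationship would have to be stored explicitly, as noted above.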
# VII. CONCLUSION

Safety verification of robot applications is extremely challenging due to the complexity of the environment that a robot typically operates in. Formal verification with model-checking provides guarantees, but it may often take too long or even fail for complex models of the environment. A usual solution approach is abstraction, more precisely behavioral abstraction. Our new approach introduces structural abstraction instead, which we investigated in the context of voxel representations of the robot environment. This kind of abstraction leads to abstract voxels. We also propose a complete and automated verification workflow, which is based on an already existing methodology for robot applications and inspired by the key ideas behind counterexample-guided abstraction refinement (CEGAR): performing an initial abstraction and successively introducing refinements based on counterexamples, intertwined with model-checker runs. Hence, our approach uses selective refinement of structural abstractions to improve the runtime efficiency of model-checking. A fully automated implementation of our approach showed its feasibility: counterexamples were found for a realistic scenario with a fairly high (maximal) resolution in a few minutes, while direct model-checker runs led to a crash after a couple of days.
[ "cs.RO", "cs.SE" ]
# Introduction

Parameter-efficient adaptation/fine-tuning [64] plays a central role in the practical deployment of large-scale pre-trained models, especially in multi-task learning (MTL) scenarios [73, 57, 11, 58, 40]. Take large language models (LLMs) [75] in natural language processing (NLP) tasks as an example. To increase intrinsic knowledge and maintain generalization power, a pre-trained LLM often needs to learn multiple downstream tasks in different domains simultaneously or sequentially. To achieve multi-task adaptation, parameter-efficient fine-tuning (PEFT) methods like low-rank adaptation (LoRA) [24] and its variants [62, 2, 56, 60, 30] have been proposed. However, the practical applications of these methods are often limited because they suffer from task conflict and oblivion: i) the model adapted for one task often suffers performance degradation in other tasks (also known as negative transfer [63, 72] or destructive interference [11]); ii) learning new tasks sometimes results in catastrophic forgetting [58], i.e., the model's performance deteriorates severely in previously learned tasks.

Figure 1: (a) An illustration of our model MoE-ization strategy and the corresponding MoORE architecture. (b) The comparison of various multi-task adaptation methods when fine-tuning LLaMA-3.1 8B [20] on CSR-MTL, constructed from nine tasks [9, 8, 43, 5, 51, 70, 50, 55]. MoORE consistently works better than the baselines when the number of tasks is larger than one. (c) Before adaptation, LLaMA-3.1 8B achieves encouraging overall performance (i.e., the gray dashed line) in seven tasks [23, 22, 76, 54, 48, 7, 4, 10]. MoORE mitigates performance degradation and outperforms other baselines when the number of tasks exceeds three. (d) MoORE's runtime is comparable to that of its competitors. Compared to the original LLaMA-3.1 8B (i.e., the gray dashed line), MoORE increases the inference time moderately.
Essentially, task conflict and oblivion arise because the diversity of different tasks requires the models to adapt their parameters in different directions [65, 67, 13, 25, 15]. Some recent methods [30, 29, 56] combine the Mixture-of-Experts (MoE) architecture [26, 52, 28] with PEFT, mitigating interference across different tasks by activating task-specific parameters. Given a layer of a pre-trained model, these methods learn multiple adapters associated with a router in multi-task scenarios. Each adapter may inherit task-specific knowledge, and the router selects/fuses these adapters based on the input data and tasks. In principle, we call the above strategy “Model MoE-ization” because it converts the original neural network layer to an MoE architecture. In this study, we propose a novel model MoE-ization strategy, with the help of the singular value decomposition (SVD), leading to a conflict- and oblivion-resistant multi-task adaptation method. As illustrated in Figure 1(a), given a weight matrix of a pre-trained model, our method applies SVD [1] to it and introduces a learnable router to adjust its singular values based on tasks and samples. Accordingly, the weight matrix becomes a Mixture of Orthogonal Rank-one Experts (MoORE), in which each expert is constructed as the outer product of a left singular vector and the corresponding right one. To further improve the capacity of MoORE, we impose a learnable orthogonal adapter on the right singular vectors, which is implemented by a Householder reflection adaptation module [68]. Unlike existing methods, which learn additional low-rank experts without any constraints on their relations, MoORE extracts many rank-one experts intrinsically from the SVD of the pre-trained weight matrix. Such a design guarantees the orthogonality of the experts, avoiding information redundancy and undesired interference among them.
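The decomposition underlying this design can be sketched in a few lines of NumPy (random weight matrix and hypothetical small dimensions; a sanity check rather than the paper's implementation): the SVD yields rank-one experts $u_d v_d^\top$ that reconstruct the weight matrix exactly and are mutually orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)
D_out, D = 6, 4                       # hypothetical dimensions
W = rng.standard_normal((D_out, D))

# Thin SVD: W = U diag(sigma) V^T, giving D orthogonal rank-one "experts".
U, sigma, Vt = np.linalg.svd(W, full_matrices=False)
experts = [np.outer(U[:, d], Vt[d, :]) for d in range(D)]

# The singular-value-weighted sum of experts reconstructs W exactly.
W_rec = sum(sigma[d] * experts[d] for d in range(D))
assert np.allclose(W_rec, W)

# Experts are pairwise orthogonal under the trace inner product, since
# <u_d v_d^T, u_e v_e^T> = (u_d . u_e)(v_d . v_e) = 0 for d != e.
for d in range(D):
    for e in range(d + 1, D):
        assert abs(np.sum(experts[d] * experts[e])) < 1e-10
```

The orthogonality check makes the claimed "no redundant information" property concrete: no expert can be expressed in terms of the others.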
Moreover, similar to existing orthogonal fine-tuning strategies [46, 68], MoORE maintains the column space of the original weight matrix and thus makes the adapted model resistant to forgetting its original tasks. Thanks to the above two properties, MoORE consistently outperforms state-of-the-art methods in various multi-task adaptation scenarios. The representative results in Figure 1 show the superiority of MoORE in mitigating task conflict and oblivion and its competitive inference efficiency.

# 2 Related Work and Preliminaries

# 2.1 Adapter-based Methods for Multi-Task Adaptation

Multi-task adaptation aims to fine-tune a pre-trained foundation model simultaneously or sequentially in multiple downstream tasks [32, 31]. Focusing on this problem, many adapter-based methods have been proposed [35, 47, 53, 60, 38]. In particular, denote the data of $K$ tasks as $\{ \mathcal { D } _ { k } \} _ { k = 1 } ^ { K }$ . Given a pre-trained foundation model, whose parameters are denoted as $\pmb \theta$ , multi-task adaptation can often be formulated in the framework of maximum likelihood estimation:
$$ \operatorname* { m a x } _ { \Delta \theta } \sum _ { k = 1 } ^ { K } P _ { \theta \cup \Delta \theta } ( \mathcal { D } _ { k } ) , $$
where $P _ { \theta \cup \Delta \theta } ( \mathcal { D } _ { k } )$ denotes the likelihood of the $k$ -th task’s data, which is parameterized by the original model parameters $\pmb \theta$ and the adapters’ parameters $\Delta \theta$ . Unlike learning and ensembling different models, adapter-based methods improve the overall model performance by sharing parameters and domain knowledge across various tasks, leading to moderate increases in parameters and complexity. For instance, Hyperformer [42] utilizes a shared hypernetwork to generate task-specific adapters, reducing the number of learnable parameters.
Recently, some methods extend LoRA [24] to multi-task adaptation, e.g., MultiLoRA [62], MTL-LoRA [66], HydraLoRA [56], and so on, which learn multiple low-rank adapters to handle diverse tasks.

# 2.2 Connections to MoE Architectures

The Mixture-of-Experts (MoE) architecture was initially introduced by the work in [26]; it is constructed from multiple specialized networks (called experts) and a router. Given an input $\pmb { x } \in \mathcal { X }$ , the MoE derives the output $\pmb { y } \in \mathcal { Y }$ as $\begin{array} { r } { \pmb { y } = \sum _ { m = 1 } ^ { M } g _ { m } ( \pmb { x } ) f _ { m } ( \pmb { x } ) } \end{array}$ , where $f _ { m } : \mathcal { X } \mapsto \mathcal { Y }$ denotes the $m$ -th expert, which achieves a mapping from the sample space $\mathcal { X }$ to the output space $\mathcal { Y }$ , $g : \mathcal { X } \mapsto \mathbb { R } ^ { M }$ denotes the router, and $g _ { m }$ denotes its $m$ -th output. The router adjusts the experts’ significance based on the input data. When applying a sparse routing strategy, i.e., activating only a few experts for each input [52, 28, 17, 3, 16, 44], the MoE architecture supports building large-scale models while maintaining computational efficiency. Due to its advantages, many large language models, e.g., DeepSeek [12], Grok3, and Qwen3, are built on MoE, and many efforts have been made to convert well-trained dense models into MoE architectures [27, 21, 49, 45, 71]. As mentioned before, most existing adapter-based multi-task adaptation methods actually “MoE-ize” pre-trained models.
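The generic MoE computation above, including the sparse routing variant, can be sketched as follows (toy linear experts and a softmax router; all names, dimensions, and the top-k scheme are illustrative, not a specific method from the literature):

```python
import numpy as np

rng = np.random.default_rng(1)
D, D_out, M = 4, 3, 5                # hypothetical dimensions / expert count

# M linear experts f_m(x) = W_m x and a softmax router g(x) = softmax(S x).
W_experts = rng.standard_normal((M, D_out, D))
S = rng.standard_normal((M, D))      # router projection

def softmax(z):
    z = z - z.max()                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_forward(x, top_k=None):
    g = softmax(S @ x)               # nonnegative weights summing to 1
    if top_k is not None:            # sparse routing: keep only top-k experts
        keep = np.argsort(g)[-top_k:]
        mask = np.zeros_like(g)
        mask[keep] = g[keep]
        g = mask / mask.sum()        # renormalize over active experts
    return sum(g[m] * (W_experts[m] @ x) for m in range(M))

x = rng.standard_normal(D)
y_dense = moe_forward(x)             # all experts contribute
y_sparse = moe_forward(x, top_k=2)   # only two experts are evaluated in spirit
```

With sparse routing, only the selected experts' forward passes are needed per input, which is the efficiency argument made above for large-scale MoE models.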
Given a weight matrix $\pmb { W }$ and its input $\pmb { x }$ , these methods apply multiple low-rank adapters as experts [69, 14, 33, 39] and add them to the pre-trained models, i.e.,
$$ \pmb { y } = \pmb { W x } + \sum _ { m = 1 } ^ { M } g _ { m } ( \pmb { x } ) \pmb { B _ { m } } \pmb { A _ { m } } \pmb { x } , $$
where $A _ { m }$ and $B _ { m }$ are low-rank matrices constructing the $m$ -th expert $f _ { m }$ . For the router $g ( \pmb { x } )$ , some attempts have been made to develop advanced routing strategies, e.g., the dynamic routing in AdaMoLE [36] and the token-task hybrid routing in HMoRA [30]. Recently, some new MoE architectures have been proposed, including the asymmetric “Hydra” structure in HydraLoRA [56], LEMoE [61] for lifelong model editing, and OMoE [18] for orthogonal output. However, the above methods rely purely on data-driven strategies to determine the experts’ functionality and domain knowledge. Without necessary regularization on the relations across different experts, the experts often suffer from the load-imbalance issue, i.e., a limited number of experts are over-trained and applied for multiple tasks, while many experts are seldom used. This issue harms the capacity and generalizability of the models, increasing the risk of task conflict and oblivion.

# 3 Proposed Method

In this study, we propose a new model MoE-ization method for multi-task adaptation. In principle, our method imposes orthogonality on the experts and further regularizes their output spaces, which helps mitigate task conflict and oblivion.

# 3.1 SVD-based Model MoE-ization

Consider a pre-trained weight matrix $W \in \mathbb { R } ^ { D _ { \mathrm { o u t } } \times D }$ . Without loss of generality, we assume $D _ { \mathrm { o u t } } \geq D$ and $\operatorname { R a n k } ( W ) = D$ .
The SVD of the matrix is denoted as
$$ \begin{array} { r } { W = U \mathrm { d i a g } ( { \pmb \sigma } ) { \pmb V } ^ { \top } = \sum _ { d = 1 } ^ { D } \underbrace { \sigma _ { d } } _ { \mathrm { w e i g h t } } \cdot \underbrace { ( { \pmb u } _ { d } { \pmb v } _ { d } ^ { \top } ) } _ { \mathrm { e x p e r t } } , } \end{array} $$
where $\pmb { U } = [ \pmb { u } _ { 1 } , \dots , \pmb { u } _ { D } ] \in \mathbb { R } ^ { D _ { \mathrm { o u t } } \times D }$ contains left singular vectors, $V = [ \pmb { v } _ { 1 } , \dots , \pmb { v } _ { D } ] \in \mathbb { R } ^ { D \times D }$ contains right singular vectors, and ${ \pmb \sigma } = [ \sigma _ { 1 } , \cdots , \sigma _ { D } ] ^ { \top }$ is a vector of singular values. As shown in (3), there exists an intrinsic but static MoE architecture hidden in the SVD of the weight matrix: each $W$ corresponds to the mixture of $D$ orthogonal and rank-one experts, in which the $d$ -th expert is the outer product of $\mathbf { \Delta } \mathbf { u } _ { d }$ and $\pmb { v } _ { d }$ and its weight is fixed as $\sigma _ { d }$ . Motivated by this intrinsic MoE, our method derives the proposed MoORE model for multi-task adaptation, which reuses the experts while introducing the following two modifications: • A hybrid routing strategy: Inspired by HMoRA [30], we adjust the experts’ weights according to input data and tasks, leading to a hybrid routing strategy.
Given an input of the $k$ -th task, denoted as $\pmb { x } ^ { ( k ) } \in \mathbb { R } ^ { D }$ , we determine the weight of the $d$ -th expert as
$$ \begin{array} { r } { g _ { d } ( \pmb { x } ^ { ( k ) } ) = \underbrace { p _ { d } ^ { \top } \pmb { t } _ { k } } _ { \mathrm { t a s k - l e v e l } } + \underbrace { \pmb { q } _ { d } ^ { \top } \pmb { \Gamma } \pmb { x } ^ { ( k ) } } _ { \mathrm { s a m p l e - l e v e l } } , } \end{array} $$
where $\pmb { t } _ { k } \in \mathbb { R } ^ { D _ { t } }$ denotes the embedding of the $k$ -th task, and the vector $\pmb { p } _ { d } \in \mathbb { R } ^ { D _ { t } }$ projects the task embedding to a task-level weight. Similarly, applying the matrix $\pmb { \Gamma } \in \mathbb { R } ^ { D _ { s } \times D }$ and the vector $\pmb q _ { d } \in \mathbb { R } ^ { D _ { s } }$ , we project the input $\boldsymbol { x } ^ { ( k ) }$ to a sample-level weight. In practice, we set $D _ { s } , D _ { t } \ll D$ to reduce the router’s parameters and computational cost. The final weight $g _ { d } ( \pmb { x } ^ { ( k ) } )$ is the sum of the task- and sample-level weights. • An orthogonal adapter of the input: To further increase the model capacity, we can apply a learnable orthogonal transform to the input, i.e., $\pmb { H x }$ , where the learnable orthogonal transform $H$ can be implemented efficiently by the butterfly orthogonal fine-tuning (BOFT) module in [34], the Givens rotation adapter [41], or the Householder reflection adapter (HRA) [68]. In this study, we implement $H$ by HRA, which corresponds to the product of $L$ Householder reflections, i.e.,

Table 1: Comparisons of various MoE-based multi-task adaptation methods on their implementations and properties, where $M$ ( $D$ in our method) is the number of experts, $k$ is the task index, and GS( ) denotes the Gram-Schmidt orthogonalization [6].
$$ \begin{array} { r } { \pmb { H } = \prod _ { \ell = 1 } ^ { L } \Big ( \pmb { I } - \frac { 2 } { \Vert \pmb { r } _ { \ell } \Vert _ { 2 } ^ { 2 } } \pmb { r } _ { \ell } \pmb { r } _ { \ell } ^ { \top } \Big ) , } \end{array} $$
whose learnable parameters are $\pmb { R } = [ \pmb { r } _ { 1 } , \cdots , \pmb { r } _ { L } ] \in \mathbb { R } ^ { D \times L }$ . Note that applying orthogonal adapters maintains the angles between neurons (i.e., the rows of the weight matrix $W$ ), which helps preserve the knowledge of the pre-trained model when enhancing model capacity [46, 34, 68]. Applying the above SVD-based model MoE-ization strategy, we obtain the proposed MoORE model, which introduces $D$ orthogonal and rank-one experts into the model and encodes the input data as
$$ \begin{array} { r } { \boldsymbol { y } = \boldsymbol { W } \boldsymbol { H } \boldsymbol { x } ^ { ( k ) } + \sum _ { d = 1 } ^ { D } \underbrace { g _ { d } ( \boldsymbol { x } ^ { ( k ) } ) } _ { \mathrm { r o u t e r } } \underbrace { ( \boldsymbol { u } _ { d } \boldsymbol { v } _ { d } ^ { \top } \boldsymbol { H } ) } _ { \mathrm { e x p e r t } } \boldsymbol { x } ^ { ( k ) } = \boldsymbol { U } \mathrm { d i a g } ( g ( \boldsymbol { x } ^ { ( k ) } ) + \sigma ) \boldsymbol { V } ^ { \top } \boldsymbol { H } \boldsymbol { x } ^ { ( k ) } , } \end{array} $$
where $\begin{array} { r } { g ( \pmb { x } ^ { ( k ) } ) = \pmb { P } ^ { \top } \pmb { t } _ { k } + \pmb { Q } ^ { \top } \pmb { \Gamma } \pmb { x } ^ { ( k ) } \in \mathbb { R } ^ { D } . } \end{array}$ The learnable parameters of MoORE include $\pmb { T } = [ \pmb { t } _ { k } ] \in \mathbb { R } ^ { D _ { t } \times K }$ , $P = [ \pmb { p } _ { d } ] \in \mathbb { R } ^ { D _ { t } \times D }$ , $\pmb { Q } = [ \pmb { q } _ { d } ] \in \mathbb { R } ^ { D _ { s } \times D }$ , $\pmb { \Gamma } \in \mathbb { R } ^ { D _ { s } \times D }$ , and $\pmb { R } \in \mathbb { R } ^ { D \times L }$ .
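The identity in (6) can be checked numerically with a NumPy sketch (random weights and hypothetical small dimensions; $H$ is built as a product of Householder reflections, which uses the standard factor of 2 so that each factor is orthogonal):

```python
import numpy as np

rng = np.random.default_rng(2)
D_out, D, D_t, D_s, K, L = 6, 4, 2, 2, 3, 2   # hypothetical dimensions

W = rng.standard_normal((D_out, D))
U, sigma, Vt = np.linalg.svd(W, full_matrices=False)

# Router parameters: task embeddings T and projections P, Q, Gamma.
T = rng.standard_normal((D_t, K))
P = rng.standard_normal((D_t, D))
Q = rng.standard_normal((D_s, D))
Gamma = rng.standard_normal((D_s, D))

# Orthogonal adapter H as a product of L Householder reflections.
H = np.eye(D)
for r in rng.standard_normal((L, D)):
    H = H @ (np.eye(D) - 2.0 * np.outer(r, r) / (r @ r))

def moore_forward(x, k):
    """Diagonal form of (6): U diag(g + sigma) V^T H x."""
    g = P.T @ T[:, k] + Q.T @ (Gamma @ x)      # hybrid routing weights
    return U @ (np.diag(g + sigma) @ (Vt @ (H @ x)))

x, k = rng.standard_normal(D), 1
y = moore_forward(x, k)

# Equivalent sum-of-experts form: W H x + sum_d g_d (u_d v_d^T H) x.
g = P.T @ T[:, k] + Q.T @ (Gamma @ x)
y_sum = W @ H @ x + sum(g[d] * np.outer(U[:, d], Vt[d]) @ H @ x
                        for d in range(D))
assert np.allclose(y, y_sum)
```

The assertion confirms that adjusting singular values via the router is exactly the same computation as adding $D$ routed rank-one experts on top of the adapted base transform.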
As shown in (6), given the SVD of $W$ (which can be computed in advance), we can implement MoORE with low complexity via the following steps:
$$ \mathrm { 1 ) } ~ z = \underbrace { V ^ { \top } H x ^ { ( k ) } } _ { \mathcal { O } ( D ( L + D ) ) } , \quad \mathrm { 2 ) } ~ g ( { \mathbf x } ^ { ( k ) } ) = \underbrace { P ^ { \top } t _ { k } } _ { \mathcal { O } ( D D _ { t } ) } + \underbrace { Q ^ { \top } \Gamma x ^ { ( k ) } } _ { \mathcal { O } ( D D _ { s } ) } , \quad \mathrm { 3 ) ~ } { \mathbf y } = \underbrace { U \mathrm { d i a g } ( g ( { \mathbf x } ^ { ( k ) } ) + \pmb { \sigma } ) z } _ { \mathcal { O } ( D D _ { \mathrm { o u t } } ) } . $$
As shown in (7), the overall complexity of MoORE is $\mathcal { O } ( D ( D _ { \mathrm { o u t } } + D + D _ { t } + D _ { s } + L ) )$ . In practice, we set $L , D _ { t } , D _ { s } \ll \operatorname* { m i n } \{ D , D _ { \mathrm { o u t } } \}$ to reduce the complexity. Moreover, we can merge the orthogonal adapter into each expert in the inference phase, i.e., precomputing $V ^ { \prime \top } = V ^ { \top } H$ , so that the complexity further reduces to $\mathcal { O } ( D ( D _ { \mathrm { o u t } } + D + D _ { t } + D _ { s } ) )$ . Figure 1(d) shows that the inference efficiency of MoORE is comparable to that of its competitors.

# 3.2 Comparisons with Existing MoE-based Multi-Task Adaptation Methods

Our SVD-based model MoE-ization method provides a new technical route for multi-task adaptation: Instead of learning an MoE with a few strong extrinsic experts, our method constructs an MoE with many simple but structured experts intrinsically based on the pre-trained weight matrix. Tables 1 and 2 compare different methods on their MoE architectures, theoretical properties, and computational efficiency. • The design of the router. Given an input $\boldsymbol { x } \in \mathbb { R } ^ { D }$ , most existing methods leverage a sample-level routing strategy.
Typically, they apply a linear projection $\pmb { S } \in \mathbb { R } ^ { M \times D }$ to it and pass the projection result through a softmax operator, leading to a nonnegative and normalized weight vector for $M$ experts. Among them, MixLoRA [29] further applies a sparse routing mechanism: for each input, it only activates the two experts that correspond to the top-2 weights. Instead, MTL-LoRA [66] leverages a task-level routing strategy, which determines the experts’ weights by passing a task-specific embedding (i.e., $\phi _ { k } \in \mathbb { R } ^ { M }$ , where $k \in \{ 1 , \cdots , K \}$ indicates the task index) through a softmax operator. Unlike existing methods, our MoORE considers the task- and sample-level information jointly. The advantage of this hybrid routing strategy is that when the same sample serves different tasks, MoORE can assign task-specific weights to the experts and thus leverage different domain knowledge accordingly. • The design of experts. Most existing methods apply $M$ low-rank adapters as experts, i.e., $\{ B _ { m } A _ { m } \} _ { m = 1 } ^ { M }$ , where $\boldsymbol { B } \in \mathbb { R } ^ { D _ { \mathrm { o u t } } \times r }$ and $\pmb { A } \in \mathbb { R } ^ { r \times D }$ are two rank- $r$ matrices. To reduce the number of learnable parameters, some methods, e.g., MoSLD [74], HydraLoRA [56], and MTL-LoRA [66], reuse the same $A$ or $B$ for all experts. MTL-LoRA [66] further introduces a task-specific matrix $\mathbf { A } _ { k }$ for each expert, where $k = 1 , . . . , K$ indicates the task index. As a result, it creates $M K$ low-rank experts and activates $M$ experts per task. Our MoORE contains $D$ orthogonal rank-one experts $\{ { \pmb u } _ { d } { \pmb v } _ { d } ^ { \top } { \pmb H } \} _ { d = 1 } ^ { D }$ , in which the orthogonal adapter $H$ is shared by all experts. Unlike existing methods, MoORE applies many simple but structured experts. Such a design has several advantages: 1.
Imposing orthogonality for mitigating task conflict: The experts of MoORE are orthogonal to each other because $\pmb { u } _ { d } ^ { \top } \pmb { u } _ { d ^ { \prime } } = 0$ for all $d \neq d ^ { \prime }$ . The orthogonality ensures that the experts have different functionalities and domain knowledge, without redundant information. In particular, by activating different experts for different tasks, MoORE suppresses interference across the tasks in the training phase, thus mitigating the task conflict issue. Among existing methods, OMoE [18] is the only one imposing orthogonality on experts. However, it applies the Gram-Schmidt orthogonalization algorithm [6] to the concatenation of the $M$ experts’ outputs, i.e., $\mathrm { G S } ( [ B _ { 1 } A _ { 1 } { \pmb x } , \cdot \cdot \cdot , B _ { M } { \pmb A } _ { M } { \pmb x } ] )$ . As a result, imposing orthogonality requires additional $\mathcal { O } ( D _ { \mathrm { o u t } } M ^ { 2 } )$ operations per sample, which is less efficient than ours. 2. Maintaining $\mathbf { R a n g e } ( W )$ for mitigating task oblivion: The column space of each expert, i.e., Range $( { \pmb u } _ { d } { \pmb v } _ { d } ^ { \top } { \pmb H } )$ , is the same as $\mathrm { R a n g e } ( { \pmb u } _ { d } )$ . Accordingly, the output space of MoORE is the direct sum of $\{ \mathrm { R a n g e } ( { \pmb u } _ { d } ) \} _ { d = 1 } ^ { D }$ , which is the same as $W$ ’s column space, i.e.,
$$ \bigoplus _ { d = 1 } ^ { D } \operatorname { R a n g e } ( u _ { d } v _ { d } ^ { \top } H ) = \bigoplus _ { d = 1 } ^ { D } \operatorname { R a n g e } ( u _ { d } ) = \operatorname { R a n g e } ( U ) = \operatorname { R a n g e } ( W ) . $$
In single-task adaptation scenarios, the work in [68] has shown that maintaining the column space of the weight matrix makes the adapted model inherit the ability of the pre-trained model better, mitigating the oblivion of the previously pre-trained task.
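The range-preservation property in (8) can be verified numerically with a small sketch (random matrices and an arbitrary orthogonal adapter; illustrative dimensions, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(3)
D_out, D = 6, 4
W = rng.standard_normal((D_out, D))
U, sigma, Vt = np.linalg.svd(W, full_matrices=False)

# An arbitrary routing vector g and an arbitrary orthogonal adapter.
g = rng.standard_normal(D)
Hh, _ = np.linalg.qr(rng.standard_normal((D, D)))    # random orthogonal matrix

# Adapted weight: U diag(g + sigma) V^T H, as in (6).
W_adapted = U @ np.diag(g + sigma) @ Vt @ Hh

# Projecting onto Range(U) = Range(W) leaves W_adapted unchanged,
# so Range(W_adapted) is contained in Range(W).
proj = U @ U.T                                       # orthogonal projector
assert np.allclose(proj @ W_adapted, W_adapted)

# Ranks match (generically, when no entry of g + sigma vanishes),
# so the column spaces coincide.
assert np.linalg.matrix_rank(W_adapted) == np.linalg.matrix_rank(W) == D
```

This is the mechanism behind oblivion-resistance: whatever the router learns, the adapted layer never produces outputs outside the pre-trained layer's column space.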
In our work, we find that such maintenance is helpful in multi-task adaptation scenarios as well. • Computational efficiency. Table 2 shows each method’s learnable parameters and computational complexity. For the methods using the sample-based routing strategy, their routers contain $M D$ learnable parameters. Given a sample, the complexity of the router is $\mathcal { O } ( M D )$ . For the method [66] using the task-based routing strategy, its router is lightweight, containing $M K$ learnable parameters and determining the weights of its experts with complexity $\mathcal { O } ( M )$ . For the experts, using $M$ rank- $r$ experts leads to the complexity $\mathcal { O } ( M ( D _ { \mathrm { o u t } } + D ) r )$ . Reusing $A$ or $B$ (e.g., MoSLD [74] and HydraLoRA [56]) and applying sparse routing (e.g., MixLoRA [29]) can reduce the computational complexity significantly. In contrast, introducing additional parameters (e.g., the task-specific matrices $\mathbf { A } _ { k }$ in MTL-LoRA [66]) or operations (e.g., the Gram-Schmidt orthogonalization in OMoE [18]) leads to higher complexity. Existing methods construct an MoE with $M$ rank- $r$ experts, while our MoORE is an MoE with $D$ rank-one experts, whose number is determined by the input dimension and thus much larger than $M$ . To improve the computational efficiency of MoORE, we set $D _ { s }$ and $D _ { t }$ comparable to the rank $r$ and set $L$ comparable to $M$ . As a result, the number of learnable parameters and the complexity of MoORE become comparable to those of most existing methods.

Table 2: Comparisons of various MoE-based multi-task adaptation methods on their computational efficiency.

# 4 Experiments

To demonstrate the effectiveness of MoORE in multi-task adaptation, we apply three MTL datasets and conduct comprehensive experiments on them. Representative results are shown in Figure 1 and the following content.
More experimental details, e.g., the basic information of datasets, hyperparameter settings, ablation studies, routing weight analysis, and numerical results associated with figures, are shown in the Appendix.

# 4.1 Implementation Details

Base model and baselines. In the following experiments, we utilize LLaMA-3.1 8B [20] as the base model and adapt it with various multi-task adaptation methods. In particular, we compare MoORE with LoRA [24] and the methods incorporating low-rank adapters as MoEs, including LoRAMoE [14], MoSLD [74], MTL-LoRA [66], HydraLoRA [56], and MixLoRA [29]. We implement the MoE architectures of the baselines mainly based on their default settings. For a fair comparison, we modify some baselines’ hyperparameters to make the number of learnable parameters comparable across methods. For MoORE, we MoE-ize all linear layers of LLaMA-3.1 8B, including the Q, K, V, and O modules of attention layers and the weight matrices of FFN layers.

Two datasets for evaluating conflict-resistance. We consider two MTL datasets for commonsense reasoning (CSR) and natural language understanding (NLU), respectively. The CSR-MTL dataset is constructed from nine tasks, including ARC-Challenge (ARC-C), ARC-Easy (ARC-E) [9], OpenBookQA (OBQA) [43], PIQA [5], SocialIQA (SIQA) [51], BoolQ [8], Hellaswag (HellaS) [70], Winogrande (WinoG) [50], and CommonsenseQA (CSQA) [55]. These tasks are widely used to evaluate LLMs on various commonsense reasoning challenges, ranging from genuine grade-school level science questions to physical commonsense reasoning. The NLU-MTL dataset consists of seven tasks from GLUE [59], including CoLA, SST-2, MRPC, QQP, MNLI, QNLI, and RTE. These tasks are applied to evaluate the natural language understanding capabilities of LLMs, including natural language inference, textual entailment, sentiment analysis, semantic similarity, and so on.

Table 3: Results $( \% )$ of various methods on CSR-MTL. The best results on each dataset are shown in bold, and the second best results are underlined.

Table 4: Results $( \% )$ of various methods on NLU-MTL. The best results on each dataset are shown in bold, and the second best results are underlined. We report the matched accuracy for MNLI, Matthew’s correlation for CoLA, and the average correlation for STS-B.

One dataset for evaluating oblivion-resistance. In addition, we construct one more dataset, called OR-MTL, for evaluating the oblivion-resistance of different methods. The dataset includes seven tasks: MMLU [23, 22], IFEval [76], BIG-Bench Hard (BBH) [54], GPQA [48], HumanEval [7], MBPP [4], and GSM-8K [10]. The base model, LLaMA-3.1 8B, can achieve encouraging performance in these tasks. After adapting it on CSR-MTL, we record the performance of the adapted models on OR-MTL and assess the ability of different adaptation methods to mitigate task oblivion.

Experimental settings. When adapting the pre-trained model on CSR-MTL and NLU-MTL, we set the number of training epochs to 2 and 5, respectively. The learning rate is set to $3 \times 1 0 ^ { - 4 }$ , with AdamW [37] as the optimizer. For CSR-MTL, we set the batch size to 8, whereas for NLU-MTL, we set the batch size to 64. Both training and testing are conducted on one NVIDIA A100 GPU.

# 4.2 Performance in Conflict-Resistance

Tables 3 and 4 compare MoORE with its competitors on CSR-MTL and NLU-MTL, respectively. Using a comparable number of learnable parameters for each dataset, MoORE achieves the best or comparable performance across all tasks and thus obtains the best average performance. These results demonstrate the superiority of MoORE in mitigating task conflict.

Figure 2: Performance of various methods on the seven OR-MTL tasks: (a) MMLU, (b) IFEval, (c) BBH, (d) GPQA, (e) HumanEval, (f) MBPP, (g) GSM-8K, and (h) the overall degradation.
The impact of the orthogonal adapter. In the experiment on CSR-MTL, we increase the number of Householder reflections in $H$ (i.e., $L$ ) from 0 to 8 and find that MoORE exhibits consistent improvements in performance. In the experiment on NLU-MTL, however, applying the orthogonal adapter may not improve performance. In our opinion, this phenomenon indicates that the commonsense reasoning tasks in CSR-MTL require more domain knowledge not covered by the pre-trained model. As a result, introducing the orthogonal adapter increases the number of learnable parameters and enhances the model capacity accordingly. In contrast, the text classification tasks in NLU-MTL rely more on the non-specific natural language knowledge captured by the pre-trained model. Therefore, without introducing more learnable parameters, adjusting the singular values of the pre-trained weight matrix is sufficient to achieve encouraging performance.

Conflict-resistance regarding task number and difficulty. To further compare and analyze the conflict-resistance capabilities of different methods, we conduct comparative experiments on CSR-MTL by varying the number and difficulty of the tasks. In particular, for each task in CSR-MTL, we first calculate the average of all the methods’ performance based on the results in Table 3. The average performance of the methods on a task measures the difficulty of the task for the base model: the lower the average performance is, the more difficult the task is. Then, we sort the tasks in ascending order based on their difficulty. Finally, we adapt the base model for the top- $K$ tasks, $K = 2 , . . . , 9$ , and show the performance of different adaptation methods in Figure 1(b). With the increase of task number and difficulty, all the methods suffer performance degradation because i) task conflict becomes severe as the number of tasks increases, and ii) difficult tasks are more likely to conflict with other tasks in general.
Notably, MoORE consistently outperforms all other baselines across all settings.

Figure 3: The visualization of normalized performance degradation and task correlation. The “difference” shown in the first row is the normalized performance degradation, i.e., $( \mathrm { A c c } _ { \mathrm { B a s e } } - \mathrm { A c c } _ { \mathrm { M o O R E } } ) / 1 0 0 \%$ . The following matrix records the normalized task correlation. The element in the $j$ -th row and the $i$ -th column is $\| \pmb { g } _ { i } - \pmb { g } _ { j } \| _ { 2 } / \operatorname* { m a x } _ { k , k ^ { \prime } } \| \pmb { g } _ { k } - \pmb { g } _ { k ^ { \prime } } \| _ { 2 }$ .

# 4.3 Performance in Oblivion-Resistance

To demonstrate the superiority of MoORE in mitigating task oblivion, we compare various methods using the Language Model Evaluation Harness framework [19]. In particular, we first adapt the base model on CSR-MTL using different methods. Then, we evaluate the adapted models on OR-MTL, comparing them with the base model. Figure 2 shows that MoORE consistently mitigates task oblivion across all seven tasks of OR-MTL, with an average performance drop of only $1 . 3 1 \%$ compared to the base model. It significantly outperforms the other adaptation methods. Notably, on the HumanEval task, MoORE achieves performance exceeding that of the original model, with an improvement of $9 . 7 5 \%$ . Similarly, LoRAMoE and MTL-LoRA also obtain slight improvements of $4 . 2 7 \%$ and $0 . 1 6 \%$ , respectively. This intriguing phenomenon may imply that the datasets in CSR-MTL have some relevance to HumanEval, providing information and capabilities beneficial for solving this task.

Oblivion-resistance regarding task number and difficulty. We conduct experiments to investigate how the ability of oblivion-resistance changes with increased task number and difficulty.
The results in Figure 1(c) show that MoORE consistently outperforms other baselines when the number of tasks exceeds three. This result demonstrates that MoORE has stronger oblivion-resistance than its competitors. In addition, we observe an interesting phenomenon: when the number of tasks increases from 1 to 3, the performance of all methods consistently improves. However, when the number of tasks exceeds 3, their performance no longer follows a clear pattern. This may be because, when the number of tasks is no more than 3, there is insufficient diversity among the fine-tuning tasks, which leads to overfitting issues in the model.

The impact of task correlation. We investigate the reason for MoORE’s oblivion-resistance empirically. In particular, we consider the samples of several sub-tasks in MMLU and those of six tasks in CSR-MTL. For each task/sub-task, we compute the average weights of MoORE’s experts over its samples, i.e., $\begin{array} { r } { \pmb { g } _ { k } = \frac { 1 } { | \mathscr { D } _ { k } | } \sum _ { \pmb { x } ^ { ( k ) } \in \mathscr { D } _ { k } } g ( \pmb { x } ^ { ( k ) } ) } \end{array}$ . Given a sub-task of MMLU and a task of CSR-MTL, denoted as $\mathcal { D } _ { k }$ and $\mathcal { D } _ { k ^ { \prime } }$ , respectively, we measure their correlation by $\| { \pmb g } _ { k } - { \pmb g } _ { k ^ { \prime } } \| _ { 2 }$ . For each sub-task of MMLU, we record the performance degradation of MoORE compared to the base model. Figure 3 shows the normalized performance degradation and task correlation. This visualization indicates that the oblivion-resistance arises from the correlation of tasks: for correlated tasks, the model can learn some common domain knowledge during adaptation and thus avoid catastrophic forgetting. In addition, this experiment also explains the result in Figure 1(c).
In particular, the more tasks considered in the adaptation phase, the more likely some tasks correlate with those covered in the pre-training phase. As a result, increasing the number of tasks in the adaptation phase helps enhance the oblivion-resistance of MoORE.
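The expert-weight averaging and normalized task-correlation computation described in this section can be sketched in a few lines of NumPy (the shapes and data are illustrative, not the paper's):

```python
import numpy as np

# Average the router's expert weights over each task's samples, then
# normalize pairwise L2 distances by the maximum pairwise distance,
# as in the Figure 3 caption.
def average_gate_weights(gates_per_task):
    """g_k = (1/|D_k|) * sum over the task's samples of g(x)."""
    return [g.mean(axis=0) for g in gates_per_task]

def correlation_matrix(avg_gates):
    """Entries ||g_i - g_j||_2 / max_{k,k'} ||g_k - g_{k'}||_2."""
    g = np.stack(avg_gates)                   # (num_tasks, num_experts)
    dist = np.linalg.norm(g[:, None, :] - g[None, :, :], axis=-1)
    return dist / dist.max()

rng = np.random.default_rng(0)
gates = [rng.random((10, 4)) for _ in range(3)]   # 3 tasks, 4 experts
corr = correlation_matrix(average_gate_weights(gates))
```

By construction the matrix is symmetric, has zeros on the diagonal, and its largest entry is exactly 1.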
Adapting large-scale foundation models in multi-task scenarios often suffers from task conflict and oblivion. To mitigate such issues, we propose a novel "model MoE-ization" strategy that leads to a conflict- and oblivion-resistant multi-task adaptation method. Given a weight matrix of a pre-trained model, our method applies SVD to it and introduces a learnable router to adjust its singular values based on tasks and samples. Accordingly, the weight matrix becomes a Mixture of Orthogonal Rank-one Experts (MoORE), in which each expert corresponds to the outer product of a left singular vector and the corresponding right one. We can improve the model capacity by imposing a learnable orthogonal transform on the right singular vectors. Unlike low-rank adaptation (LoRA) and its MoE-driven variants, MoORE guarantees the experts' orthogonality and maintains the column space of the original weight matrix. These two properties make the adapted model resistant to conflicts among the new tasks and to the oblivion of its original tasks, respectively. Experiments on various datasets demonstrate that MoORE consistently outperforms existing multi-task adaptation methods, showing its superiority in terms of conflict- and oblivion-resistance. The code of the experiments is available at https://github.com/DaShenZi721/MoORE.
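The core construction in the abstract (SVD plus a router over singular values) can be sketched with a few lines of NumPy. This is our own minimal illustration, not the authors' implementation, and it omits the learnable orthogonal transform on the right singular vectors:

```python
import numpy as np

# SVD the pre-trained weight, treat each rank-one term u_r v_r^T as an
# orthogonal expert, and let a router-produced gate rescale the singular
# values per input.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))                 # pre-trained weight matrix
U, s, Vt = np.linalg.svd(W, full_matrices=False)

def moore_forward(x, gate):
    """y = U diag(gate * s) V^T x: the gate adjusts singular values."""
    return U @ ((gate * s) * (Vt @ x))

x = rng.standard_normal(6)
y_base = W @ x
y_unit = moore_forward(x, np.ones_like(s))      # unit gate: original mapping
```

With a unit gate the adapted layer reproduces the original mapping exactly, and because the output always lies in the span of `U`, the column space of the original weight matrix is preserved for any gate.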
[ "cs.LG" ]
# 1 INTRODUCTION

Tongyu Liu (Renmin University of China, ltyzzz@ruc.edu.cn), Kai Zeng (Huawei Technologies, kai.zeng@huawei.com), Tao Ye (Huawei Technologies, yetao1@huawei.com), Nan Tang (HKUST(GZ), nantang@hkust-gz.edu.cn)

Cardinality estimation (CardEst), which estimates the result size of an SQL query on a relational database, is a fundamental component of query optimization in database management systems (DBMSs). Traditional CardEst methods [1, 2, 4, 10, 11, 18, 24] rely on simplifying assumptions, such as column independence, often leading to substantial estimation errors. To overcome these limitations, learning-based CardEst models [6, 12, 16, 20, 40–42] have emerged as state-of-the-art solutions, significantly improving accuracy by capturing complex data distributions and query patterns. Despite these advancements, deploying learning-based CardEst models in real-world DBMSs still requires balancing three key criteria [9, 15, 33, 34, 42]: estimation accuracy, inference time, and storage overhead. Data-driven approaches [12, 40–42] leverage probabilistic models, such as Sum-Product Networks [12, 20, 42] and Deep Auto-Regressive models [40, 41], to capture the joint distribution of all columns of the relational data. While these methods typically achieve high estimation accuracy, they suffer from high inference time and substantial storage overheads, particularly when handling complex data distributions. Query-driven methods [6, 16], on the other hand, train regression models that directly map SQL queries to their estimated cardinalities based on a set of training queries, bypassing the need to model data distributions. Although these methods are efficient and lightweight, they struggle with generalization, particularly when encountering queries that significantly deviate from those in the training set. Table 1 presents a comparison of existing approaches based on the three key criteria.
Given the strengths and limitations of existing methods, a promising direction is to develop a unified model that leverages both data and queries, aiming to achieve the three key criteria: high estimation accuracy, low inference time, and lightweight storage overhead. Such an approach can overcome the weaknesses of purely data-driven or query-driven CardEst models by leveraging complementary information from both sources. Although UAE [35] also leverages both data and queries for CardEst, its primary focus is on improving estimation accuracy by training a Deep Auto-Regressive model that incorporates unsupervised losses from data and supervised losses from queries. However, UAE inherits key limitations of data-driven approaches [40, 41] that rely on Deep Auto-Regressive models. Specifically, it suffers from high inference time due to the computationally expensive progressive sampling process and incurs significant storage overhead, especially for columns with large value domains. Figure 1: High-level idea of QSPN. (a) Traditional SPN Model. (b) Partitioning Columns based on Query Patterns. (c) Our Proposed QSPN Model. Table 1: Comparison of cardinality estimation methods. Our Proposals. In this paper, we propose to learn from both data and queries via the Sum-Product Network (SPN) model. As shown in Figure 1(a), traditional SPN models [12, 42] recursively partition columns (i.e., the Product node) and rows (i.e., the Sum node) into local subsets, making it easier to compute the joint probability distribution from these local distributions.
However, they often suffer from high inference time and large model size when columns in a database are highly correlated. This issue arises because many intermediate nodes (e.g., Sum nodes) must be introduced to ensure that the columns in partitioned subsets can be treated as independent. To address this problem, we propose QSPN, which extends traditional SPNs by incorporating query workload information. The high-level idea behind QSPN stems from an observation in many real-world query workloads: queries often exhibit specific access patterns on the columns of relational tables, which can be effectively leveraged to enhance both the efficiency and accuracy of cardinality estimation. Take as an example the real-world queries from the JobLight workload [17], which represent how users retrieve movie comments. Analyzing the query workload reveals that certain columns are frequently accessed together, while others are rarely referenced in the same queries. For instance, when retrieving movie comments by type, production year is usually also a search criterion, i.e., these two columns are frequently queried together in analytical workloads, whereas company type is seldom queried together with type, i.e., those two columns tend to appear in separate sets of queries. Traditional SPN models overlook such query-driven correlations, leading to unnecessary model complexity and inefficiencies in inference. By integrating query workload information, QSPN can jointly partition columns based on both data correlations and query access patterns, thereby reducing model size and improving inference efficiency without sacrificing estimation accuracy, as shown in Table 1. Example 1. We consider the CardEst task for an example table $T$ with highly correlated columns $a_1, a_2, a_3, a_4$, as illustrated in Figure 1(a).
SPN partitions $T$ into different row subsets via Sum nodes (e.g., node $n_1$, which partitions rows based on whether $a_1 > 3000$) to reduce column correlations within each subset. However, as depicted in the figure, when columns exhibit high correlations, the SPN requires numerous Sum nodes to break down the joint distribution into local distributions over individual columns. This leads to a substantial increase in model size. Moreover, when processing a query $q$, the inference procedure must traverse a large number of these nodes, significantly increasing inference time. As illustrated in Figure 1(b), QSPN leverages the access patterns of the query workload $Q$, i.e., how often certain columns are accessed together by the same queries. Within subset $Q_1$ of $Q$, we observe that columns $a_1$ and $a_2$ are frequently accessed together by queries $q_1$ and $q_2$, while $a_3$ and $a_4$ are jointly accessed by queries $q_3$ and $q_4$. Based on this pattern, we partition the columns into $\{a_1, a_2\}$ and $\{a_3, a_4\}$ for queries in $Q_1$, even if their data remain highly correlated. Similarly, for queries in subset $Q_2$, the columns can be partitioned into $\{a_1, a_3\}$ and $\{a_2, a_4\}$. These query-aware column partitions allow QSPN to construct more compact SPN models, reducing model size and inference time while maintaining accuracy. As shown in Figure 1(c), we formalize QSPN as a tree-based structure that extends traditional SPNs by introducing two new node types: QProduct and QSplit.
Specifically, QProduct partitions columns based on query-specific access patterns (e.g., grouping frequently accessed columns together) within a given workload, while QSplit refines the workload itself into more granular subsets to capture workload variations. Moreover, QSPN retains the SPN's ability to partition columns and rows based on data correlations. By partitioning columns based on both data correlations and query access patterns, QSPN effectively reduces the number of intermediate nodes in the SPN, which improves inference efficiency and reduces storage overhead while maintaining high estimation accuracy. Key Challenges and Solutions. We study the technical challenges that naturally arise in our proposed QSPN approach. Offline QSPN Construction. A key challenge in offline QSPN construction is integrating query workload information into the SPN framework while maintaining inference efficiency. Unlike conventional SPNs, QSPN must consider query co-access patterns, making the partitioning problem more complex. To address this, QSPN develops efficient algorithms for two core problems, i.e., QProduct for query-aware column partitioning and QSplit for workload partitioning. Online QSPN Computation. The online stage of QSPN presents two key challenges. First, accurately computing query cardinalities while minimizing inference overhead is non-trivial, especially for queries with unseen access patterns. Second, as data distributions and query workloads evolve, QSPN must maintain accuracy without requiring frequent full retraining. To tackle these challenges, we develop an online inference algorithm and introduce an incremental update mechanism that enables QSPN to efficiently adapt to workload and data changes. Multi-Table CardEst with QSPN. Extending QSPN to multi-table cardinality estimation is challenging due to the complex distributions of join keys and their impact on base table query predicates.
To address this, we introduce a novel approach that allows QSPN to generalize to multi-table queries while maintaining high accuracy and efficient inference performance. Contributions. Our contributions are summarized as follows. (1) We propose QSPN, a query-aware Sum-Product Network that integrates data and queries for CardEst (Section 3). (2) We develop effective algorithms for offline QSPN construction and online QSPN computation, balancing estimation accuracy, inference time, and model size (Sections 4 and 5). We extend QSPN to support multi-table cardinality estimation (Section 6). (3) We conduct a comprehensive experimental study on widely used CardEst benchmarks. Extensive results demonstrate that QSPN achieves superior performance (Section 7). # 2 PRELIMINARIES # 2.1 Problem Formalization Data. This paper considers a relational table $T$ with columns (or attributes) $A = \{ a _ { 1 } , a _ { 2 } , . . . , a _ { | A | } \}$ and tuples $T = \{ t _ { 1 } , t _ { 2 } , . . . , t _ { | T | } \}$ , where $T$ may be either a single table or a joined table. Following existing work [42], we define the domain of each column $a _ { i }$ as $[ L B _ { i } , U B _ { i } ]$ , where $L B _ { i }$ and $U B _ { i }$ represent the lower and upper bounds of $a _ { i }$ , respectively. Queries. Similar to existing works [12, 35, 42], this paper focuses on queries that consist of a conjunction of predicates, where each predicate over column $a _ { i }$ can be represented as $L _ { i } \leq a _ { i } \leq U _ { i }$ , with $L B _ { i } \le L _ { i } \le U _ { i } \le U B _ { i }$ . Without loss of generality, the endpoints of interval $\left[ L _ { i } , U _ { i } \right]$ can also be open, e.g., $( L _ { i } , U _ { i } ]$ or $[ L _ { i } , U _ { i } )$ , which are omitted in this paper for simplicity. In particular, a point query over $a _ { i }$ can be represented as $L _ { i } = U _ { i }$ . 
For ease of presentation, we assume each column has only one interval $[L_i, U_i]$, though this can be easily extended to cases with multiple intervals per column. This paper considers a query workload as a set of queries $Q = \{q_1, q_2, \ldots, q_{|Q|}\}$, typically extracted from real query logs. Given this workload $Q$, we introduce the query-column access matrix (or access matrix for short) to represent the access patterns of queries in $Q$ on columns in $T$. This matrix provides a structured way to capture which queries reference which columns, enabling more optimization opportunities in query-aware cardinality estimation. Definition 2.1 (Query-Column Access Matrix). For each query $q_i$ and each column $a_j$, we define $\mathsf{acc}(q_i, a_j)$ as an indicator function that specifies whether column $a_j$ is accessed by query $q_i$. Specifically, $\mathsf{acc}(q_i, a_j) = 1$ if $q_i$ accesses $a_j$, and $\mathsf{acc}(q_i, a_j) = 0$ otherwise. Then, we define the access matrix for a workload $Q$ and a column set $A$, denoted as $\mathsf{ACC}(Q, A)$, as a binary matrix where each entry $\mathsf{acc}(q_i, a_j)$ indicates whether query $q_i$ accesses column $a_j$. In particular, we use $\mathsf{acc}(q_i)$ to represent the $i$-th row of the access matrix, corresponding to the access pattern of query $q_i$ across all columns, and $\mathsf{acc}(a_j)$ to denote the $j$-th column of the access matrix, capturing how column $a_j$ is accessed by queries. Cardinality Estimation. Given a new query $q$, we define $\mathsf{Card}(q, T)$ as the cardinality of $q$ over table $T$, i.e., the number of tuples in $T$ that satisfy the query conditions.
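As a small illustration of Definition 2.1, the access matrix can be built directly from queries given as sets of accessed columns (the column and query contents below are illustrative, not from the paper):

```python
import numpy as np

# Build ACC(Q, A): one row per query, one column per attribute,
# entry 1 iff the query references that attribute.
columns = ["a1", "a2", "a3", "a4"]
queries = [{"a1", "a2"}, {"a1", "a2"}, {"a3", "a4"}, {"a2", "a4"}]

ACC = np.array([[1 if a in q else 0 for a in columns] for q in queries])

acc_q1 = ACC[0]      # acc(q_1): access pattern of the first query
acc_a2 = ACC[:, 1]   # acc(a_2): how column a2 is accessed across queries
```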
The goal of cardinality estimation is to compute an estimate $\widehat{\mathsf{Card}}(q, T)$ that approximates $\mathsf{Card}(q, T)$ efficiently and accurately without executing $q$ on $T$. Specifically, we study the following problems: (1) Offline CardEst Training, which trains a CardEst model by leveraging both data $T$ and workload $Q$, capturing the underlying data distribution and query access patterns, and (2) Online CardEst Inference, which uses the trained CardEst model to estimate the cardinality $\widehat{\mathsf{Card}}(q, T)$ for a given query $q$.

# 2.2 Sum-Product Network (SPN) Model

Sum-Product Network (SPN) [23] is a data-driven model with extensions such as DeepDB [12] and FLAT [42]. SPN-based approaches address the CardEst problem by modeling the joint probability distribution $P_T(A)$, where each attribute $a_i \in A$ is treated as a random variable. Given a query $q$ with selection predicates $a_i \in [L_i, U_i]$ for $i = 1, \ldots, m$, the estimated cardinality is computed as $\widehat{\mathsf{Card}}(q, T) = |T| \cdot \sum_{v_1 \in [L_1, U_1]} \cdots \sum_{v_m \in [L_m, U_m]} P_T(v_1, \ldots, v_m)$, where the summation iterates over all possible values within the query's range constraints. SPN approximates $P_T(A)$ by decomposing the joint probability distribution into multiple local probability distributions. This decomposition is realized through a hierarchical, tree-based structure, where each node represents a local joint probability distribution $P_{T'}(A')$. The key idea of SPN focuses on introducing intermediate nodes, which fall into one of the following two categories.
(1) A Sum node partitions its tuple set $T'$ into a collection of disjoint subsets $T' = \bigcup_i T_i'$. Each subset $T_i'$ corresponds to a child node with a probability distribution $P_{T_i'}(A')$. The overall distribution at the Sum node is then computed as a weighted sum of its children's distributions $P_{T'}(A') = \sum_i w_i \cdot P_{T_i'}(A')$, where the weight $w_i$ is determined by the proportion of tuples in each subset, given by $w_i = |T_i'|/|T'|$. (2) A Product node partitions its attribute set $A'$ into disjoint subsets $A' = \bigcup_j A_j'$. Each subset $A_j'$ corresponds to a child node that models the probability distribution $P_{T'}(A_j')$. By assuming independence among these subsets, the overall distribution at the Product node is computed as $P_{T'}(A') = \prod_j P_{T'}(A_j')$. This decomposition allows SPNs to efficiently approximate complex joint distributions by leveraging independence between attributes in a particular data subset. Given a query $q$ for cardinality estimation, SPN estimates the cardinality $\widehat{\mathsf{Card}}(q, T)$ in a bottom-up manner.

# 3 AN OVERVIEW OF QSPN

We propose QSPN, which extends traditional SPNs by incorporating query workload information to partition columns based on their access patterns. We formalize QSPN as a tree-based structure that extends traditional SPNs by introducing two new node types, QProduct and QSplit, as shown in Figure 1(c).
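As a toy illustration of the bottom-up SPN evaluation described in Section 2.2 (our own minimal classes, not the DeepDB or FLAT implementations):

```python
# Leaves hold one column's values, Product nodes multiply child
# probabilities (independence assumption), and Sum nodes take a
# tuple-proportion-weighted average of child probabilities.
class Leaf:
    def __init__(self, column, values):
        self.column, self.values = column, values
    def prob(self, query):                       # P(L <= a <= U)
        if self.column not in query:
            return 1.0                           # unconstrained column
        lo, hi = query[self.column]
        return sum(lo <= v <= hi for v in self.values) / len(self.values)

class Product:
    def __init__(self, children):
        self.children = children
    def prob(self, query):
        p = 1.0
        for child in self.children:
            p *= child.prob(query)
        return p

class Sum:
    def __init__(self, children, weights):       # weights w_i = |T_i|/|T|
        self.children, self.weights = children, weights
    def prob(self, query):
        return sum(w * c.prob(query) for w, c in zip(self.weights, self.children))

# two row clusters over columns a1, a2; |T| = 4 tuples in total
spn = Sum([Product([Leaf("a1", [1, 2]), Leaf("a2", [5, 6])]),
           Product([Leaf("a1", [9, 9]), Leaf("a2", [1, 2])])],
          weights=[0.5, 0.5])
card = round(4 * spn.prob({"a1": (0, 3)}))       # estimated Card(q, T)
```

Here the predicate $a_1 \in [0, 3]$ matches all tuples of the first cluster and none of the second, so the SPN returns probability 0.5 and an estimated cardinality of 2.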
Formally, each node n in QSPN is represented as a 4-tuple $(A_{\mathsf{n}}, T_{\mathsf{n}}, Q_{\mathsf{n}}, O_{\mathsf{n}})$, where $A_{\mathsf{n}}$ denotes the column set, $T_{\mathsf{n}}$ the corresponding table, and $Q_{\mathsf{n}}$ the associated query workload. Each node captures the joint probability distribution $P_{T_{\mathsf{n}}}(A_{\mathsf{n}})$ conditioned on the queries in $Q_{\mathsf{n}}$. Moreover, the node type $O_{\mathsf{n}}$ represents how the joint probability of node n is estimated from its child nodes, which is described as follows. QProduct. A QProduct node, such as $\mathsf{n}_2$ in Figure 1(c), partitions its column set $A_{\mathsf{n}}$ into a set of disjoint subsets $\mathcal{A} = \{A_1, A_2, \ldots, A_m\}$, ensuring that columns in different subsets are infrequently co-accessed by queries in $Q_{\mathsf{n}}$. For each subset $A_i$, a corresponding child node is created as $\mathsf{n.child}_i = (A_i, T_{\mathsf{n}}[A_i], Q_{\mathsf{n}}[A_i], O_i)$. The joint probability distribution $P_{T_{\mathsf{n}}}(A_{\mathsf{n}})$ at node n can then be computed as $P_{T_{\mathsf{n}}}(A_{\mathsf{n}}) = \prod_{i=1}^{m} P_{T_{\mathsf{n}}}(A_i)$. QSplit. A QSplit node, such as $\mathsf{n}_1$ in Figure 1(c), splits its query workload $Q_{\mathsf{n}}$ into a set of disjoint query subsets $\mathcal{Q} = \{Q_1, Q_2, \ldots, Q_m\}$, ensuring that queries within the same subset share similar access patterns, while queries across different subsets exhibit distinct access behaviors. For each query subset $Q_i$, a corresponding child node is created as $\mathsf{n.child}_i = (A_{\mathsf{n}}, T_{\mathsf{n}}, Q_i, O_i)$. Note that a QSplit node does not directly compute a joint probability distribution $P_{T_{\mathsf{n}}}(A_{\mathsf{n}})$ but instead functions as a query router. Specifically, when estimating the cardinality of a query $q$, the QSplit node identifies the query subset $Q_{k^*}$ from $\mathcal{Q}$ that shares the most similar access patterns with $q$ and routes $q$ to the corresponding child node $\mathsf{n.child}_{k^*}$ for cardinality estimation. We will discuss the method for determining which query subset exhibits the most similar access patterns to $q$ later. Product. A Product node, such as $\mathsf{n}_5$ in Figure 1(c), partitions its column set $A_{\mathsf{n}}$ into a set of disjoint subsets $\mathcal{A} = \{A_1, A_2, \ldots, A_m\}$, ensuring statistical independence between columns in different subsets. For each subset $A_i$, a corresponding child node is created as $\mathsf{n.child}_i = (A_i, T_{\mathsf{n}}, Q_{\mathsf{n}}, O_i)$. The joint distribution $P_{T_{\mathsf{n}}}(A_{\mathsf{n}})$ is then computed using the equation in Section 2.2. Sum. A Sum node, such as $\mathsf{n}_4$ in Figure 1(c), partitions the table $T_{\mathsf{n}}$ of node n into disjoint subsets $\mathcal{T} = \{T_1, T_2, \ldots, T_m\}$. For each subset $T_i$, a corresponding child node is created as $\mathsf{n.child}_i = (A_{\mathsf{n}}, T_i, Q_{\mathsf{n}}, O_i)$.
The joint distribution $P_{T_{\mathsf{n}}}(A_{\mathsf{n}})$ is then computed as a weighted sum of the distributions of its child nodes, as defined in Section 2.2. Leaf. A Leaf node, such as any leaf in the tree shown in Figure 1(c), represents a 1-dimensional probability distribution $P_{T_{\mathsf{n}}}(A_{\mathsf{n}})$. Specifically, we use a histogram-based mechanism to capture this probability distribution. The construction cost of the histogram is $O(|T_{\mathsf{n}}|)$, and the query cost is approximately $O(1)$. To support the above QSPN structure, we introduce a framework that consists of both offline and online stages. Offline QSPN Construction. In the offline stage, QSPN learns its structure from both data $T$ and workload $Q$, capturing the underlying data distribution and query access patterns. Unlike conventional SPNs, QSPN introduces a new challenge of incorporating query co-access patterns into column partitioning. We propose efficient algorithms QProduct and QSplit, as presented in Section 4. Online QSPN Computation. In the online stage, we utilize QSPN for cardinality estimation. First, for queries with access patterns that differ from those in the training workload, we develop an online inference algorithm that ensures both accuracy and efficiency. Second, we introduce an incremental update mechanism that selectively identifies and updates only the affected parts of the model. Details of online QSPN inference are provided in Section 5. Multi-Table Cardinality Estimation. This paper also explores extending QSPN to support multi-table CardEst. The key difficulty lies in accurately modeling join key distributions while effectively handling multi-table query predicates. Traditional methods rely on heuristic bucket-based approaches, which suffer from poor accuracy.
Learning-based CardEst models [40, 42], on the other hand, train on a materialized outer-join table, incurring excessive time and storage costs. To tackle this, we introduce an effective approach, which is described in Section 6.

# 4 OFFLINE QSPN CONSTRUCTION

Given a relational table $T$ with column set $A$ and a query workload $Q$, QSPN construction generates a QSPN tree that models the joint probability distribution $P_T(A)$ conditioned on $Q$. To achieve this, the construction process recursively decomposes the joint probability distribution into local probability distributions in a top-down manner. Specifically, during the construction of each node, QSPN attempts different node types in the following order: Leaf, Product, QProduct, QSplit, and Sum. Construction of Leaf Nodes. During the recursive process, if $A$ contains only a single column, this indicates that the joint probability distribution has been fully decomposed into a local distribution over that specific column. In this case, the construction process creates a Leaf to model the 1-dimensional probability distribution $P_{T_n}(A)$ using a histogram. This choice is motivated by the histogram's accuracy, efficiency, and lightweight nature, making it well-suited for modeling such distributions. Construction of Product Nodes. A Product node is constructed when the column set $A$ of a node exhibits statistical dependencies suitable for partitioning. Following prior works [12, 23, 42], we use the Randomized Dependence Coefficient (RDC) to measure statistical dependencies between columns. Details on RDC can be found in the original paper [19]. We then employ a partition-based method. Specifically, using these RDC values, we construct a graph where vertices represent columns and edges represent dependencies weighted by the RDC values.
Then, we remove the edges with RDC values below a threshold and divide the graph into connected components, each representing an independent subset of columns. Construction of Sum Nodes. During the recursive process, if all other types of nodes fail to meet the decomposition criteria, node $n$ defaults to a Sum, which splits the data $T$ into subsets $T_i$. To achieve this, following prior works [12, 42], we use the K-Means clustering algorithm, as it partitions the data into clusters, which helps to reduce data correlation within each subset.

# 4.1 Construction of QProduct

QProduct partitions columns according to their query access patterns, grouping frequently co-accessed columns together while separating those that are rarely co-accessed into distinct subsets. Formalization of QProduct. We first formally define access affinity using the query-column access matrix as follows. Definition 4.1 (Access Affinity). The access affinity between columns $a_i$ and $a_j$ with respect to query workload $Q$, denoted by $\mathsf{AFF}(a_i, a_j | Q)$, is defined as how frequently both columns are referenced together by queries in $Q$, i.e.,
$$ \mathsf{AFF}(a_i, a_j | Q) = \mathsf{acc}(a_i) \cdot \mathsf{acc}(a_j). $$
Using access affinity, we formally define the QProduct operation. Definition 4.2 (QProduct). Given a query workload $Q$ and a column set $A = \{a_1, a_2, \ldots, a_{|A|}\}$, QProduct partitions $A$ into a set of disjoint column subsets $\mathcal{A} = \{A_1, A_2, \ldots, A_m\}$, minimizing the inter-partition affinity (IPA) of the partitioning, $\mathrm{IPA}(\mathcal{A}|Q)$, where
$$ \mathrm{IPA}(\mathcal{A}|Q) = \sum_{1 \leq k < l \leq m} \sum_{a_i \in A_k} \sum_{a_j \in A_l} \mathsf{AFF}(a_i, a_j | Q). $$
For example, consider the workload $Q_1$ in Figure 1(b). The inter-partition affinity for the column partitioning $\{\{a_1, a_2\}, \{a_3, a_4\}\}$ is 0, whereas for the partitioning $\{\{a_1, a_3\}, \{a_2, a_4\}\}$ it is 4. Based on these results, QProduct selects the partition $\{\{a_1, a_2\}, \{a_3, a_4\}\}$ for workload $Q_1$ to minimize inter-partition affinity. Algorithm Design for QProduct. We first analyze the complexity of the QProduct construction problem, as shown below. Lemma 1. The problem of QProduct construction is equivalent to the minimum $k$-cut problem. We omit the proof due to the space constraint. The minimum $k$-cut problem, even for fixed $k$, is computationally expensive to solve; we abstract each column as a vertex. For example, the minimum 2-cut (i.e., the classic min-cut problem) can be solved in $O(|A|^3)$ time using algorithms such as Stoer-Wagner [30]. For $k = 3$, the time complexity is $O(|A|^3 \tau(|A|))$ [22], where $\tau(|A|)$ represents the cost of computing the objective function, which is $O(|A|^2)$ in our QProduct construction problem, yielding an overall complexity of $O(|A|^5)$. Similarly, a recent algorithm for $k = 4$ achieves a complexity of $O(|A|^6 \tau(|A|))$ (or $O(|A|^8)$).
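While exact minimum $k$-cut is expensive, the quantities in Definitions 4.1 and 4.2 are cheap to evaluate, and a threshold-plus-connected-components partitioning suffices in practice. A small sketch on a workload whose numbers mirror the 0-versus-4 example above (the access matrix is illustrative, not the paper's exact $Q_1$; union-find is our implementation choice):

```python
import numpy as np

# Two queries co-access {a1, a2} and two co-access {a3, a4}.
ACC = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 0, 1, 1],
                [0, 0, 1, 1]])

def aff(i, j):
    """AFF(a_i, a_j | Q) = acc(a_i) . acc(a_j)  (Definition 4.1)."""
    return int(ACC[:, i] @ ACC[:, j])

def ipa(partition):
    """Sum of affinities over all cross-subset column pairs (Def. 4.2)."""
    return sum(aff(i, j)
               for k in range(len(partition))
               for l in range(k + 1, len(partition))
               for i in partition[k] for j in partition[l])

def partition_by_aff(n_cols, tau):
    """Connected components of the graph whose edges have AFF > tau."""
    parent = list(range(n_cols))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n_cols):
        for j in range(i + 1, n_cols):
            if aff(i, j) > tau:
                parent[find(i)] = find(j)
    groups = {}
    for c in range(n_cols):
        groups.setdefault(find(c), []).append(c)
    return sorted(groups.values())

good = ipa([[0, 1], [2, 3]])        # {a1,a2},{a3,a4} -> 0
bad = ipa([[0, 2], [1, 3]])         # {a1,a3},{a2,a4} -> 4
parts = partition_by_aff(4, tau=0)  # recovers [[0, 1], [2, 3]]
```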
While the problem is polynomial-time solvable for fixed $k \geq 5$, the complexity increases dramatically (e.g., $O(|A|^{16})$ for a minimum 5-cut algorithm, as suggested by [13]), rendering such algorithms impractical for real-world use. Given the computational expense of partitioning the column set to minimize IPA, we design an algorithm PartitionByAFF that achieves effective results efficiently, in $O(|A|^2)$ time. The algorithm first constructs a graph $G = (A, E)$, where vertices $A$ represent columns, and an edge $e_{ij} \in E$ connects columns $a_i$ and $a_j$ if their affinity $\mathsf{AFF}(a_i, a_j | Q)$ exceeds a threshold $\tau$. Next, the algorithm identifies the connected components of $G$, with the vertices in each connected component $G_i \subseteq G$ forming a column partition $A_i$. Figure 2: An example of QSplit construction.

# 4.2 Construction of QSplit

When considering the entire workload $Q$, QProduct may struggle to derive a meaningful column partitioning with a sufficiently low IPA score due to the presence of queries exhibiting diverse access patterns. The following example illustrates this challenge. Example 2. Given the workload $Q = \{q_1, q_2, \ldots, q_{10}\}$, as shown in Figure 1(b), an optimal column partitioning for QProduct with $k = 2$ results in $\{\{a_1, a_2, a_3\}, \{a_4\}\}$, yielding a minimum IPA score of 6, which remains relatively high. The primary reason is that different subsets of queries within $Q$ exhibit different access patterns, leading to conflicting preferences for column partitioning.
Specifically, queries in $Q_1 \subset Q$ favor the column partitioning $\{\{a_1, a_2\}, \{a_3, a_4\}\}$, while those in $Q_2 \subset Q$ prefer $\{\{a_1, a_3\}, \{a_2, a_4\}\}$. These conflicting preferences arise from the distinct access patterns exhibited by queries in different subsets. Formalization of QSplit. To address the above challenge, we introduce the QSplit operation, which partitions the query workload into $n$ subsets, i.e., $\mathcal{Q} = \{Q_1, \ldots, Q_n\}$, ensuring that each subset exhibits more consistent access patterns, thus enabling QProduct to derive more meaningful column partitions. Definition 4.3 (QSplit). Given a query workload $Q$ and a column set $A = \{a_1, a_2, \ldots, a_{|A|}\}$, QSplit aims to partition $Q$ into a set of disjoint query subsets $\mathcal{Q} = \{Q_1, Q_2, \ldots, Q_n\}$, minimizing the objective $\sum_{k=1}^{n} \mathrm{IPA}(\mathcal{A}_k^*|Q_k)$, where $\mathcal{A}_k^*$ is the optimal column partition given query subset $Q_k$. For example, by splitting the workload $Q$ in Figure 1(b) into two subsets $(n = 2)$, an effective partitioning results in $Q_1 = \{q_1, \ldots, q_5\}$ and $Q_2 = \{q_6, \ldots, q_{10}\}$. Given these subsets, the optimal QProduct column partitioning for $Q_1$ is $\mathcal{A}_1^* = \{\{a_1, a_2\}, \{a_3, a_4\}\}$ with $\mathrm{IPA}(\mathcal{A}_1^*|Q_1) = 2$.
Similarly, for $Q_2$, the optimal partitioning is $\mathcal{A}_2^* = \{\{a_1, a_3\}, \{a_2, a_4\}\}$ with $\mathrm{IPA}(\mathcal{A}_2^*|Q_2) = 2$. Algorithm Design for QSplit. The QSplit construction problem is highly challenging because it requires simultaneously minimizing $\mathrm{IPA}(\mathcal{A}_k^*|Q_k)$ for each partitioned subset $Q_k$. Thus, instead of directly optimizing $\mathrm{IPA}(\mathcal{A}_k^*|Q_k)$ for each $Q_k$, we focus on minimizing its upper bound, denoted as $\overline{\mathrm{IPA}}(\mathcal{A}_k|Q_k)$, which allows us to design a more tractable algorithm while still ensuring effective workload partitioning. Formally, the objective is to partition $Q$ into $Q = \{Q_1, Q_2, \ldots, Q_n\}$ while minimizing the upper bound of inter-partition affinity, i.e., $\sum_{k=1}^{n} \overline{\mathrm{IPA}}(\mathcal{A}_k|Q_k)$. Next, we first design a mechanism to construct the upper bound $\overline{\mathrm{IPA}}(\mathcal{A}_k|Q_k)$ and prove that optimizing this upper bound is NP-hard. Finally, we propose an efficient heuristic algorithm that achieves effective partitioning results in practice. Upper-bound design. Given a specific query set $Q$, we construct an upper bound $\overline{\mathrm{IPA}}(\mathcal{A}|Q)$ for the optimal column partitioning result $\mathrm{IPA}(\mathcal{A}^*|Q)$ by analyzing the access patterns of queries in $Q$. Since different queries (e.g., $q_1$ and $q_2$ in Figure 1(b)) may share the same access pattern (e.g., $(1, 1, 0, 0)$), we introduce $p_i$ to denote the $i$-th distinct access pattern in $Q$ and let $n_i$ represent the number of queries in $Q$ that follow pattern $p_i$. For instance, in Figure 1(b), the first access pattern in $Q$ is $p_1 = (1, 1, 0, 0)$, which corresponds to two queries, i.e., $n_1 = 2$. Based on these notations, we construct the following upper bound for $\mathrm{IPA}(\mathcal{A}^*|Q)$. Definition 4.4 (Upper Bound). Given a query set $Q$ consisting of $m$ distinct query patterns $\{p_1, p_2, \ldots, p_m\}$, let $\|p_i\|$ denote the $L_2$ norm of pattern $p_i$, $n_i$ represent the number of queries in $Q$ corresponding to pattern $p_i$, and $z_{ij}$ denote the dot product of two patterns $p_i$ and $p_j$. We construct an upper bound for $\mathrm{IPA}(\mathcal{A}^*|Q)$ as $$ \overline{\mathrm{IPA}}(\mathcal{A}|Q) = \sum_{i < j} \left( n_i \cdot z_{ij} \cdot (\|p_i\| - z_{ij}) + n_j \cdot z_{ij} \cdot (\|p_j\| - z_{ij}) \right). $$ Consider a query set $Q$ with two query patterns, $p_2$ and $p_3$, as illustrated in Figure 2(a). To compute the upper bound $\overline{\mathrm{IPA}}(\mathcal{A}|Q)$, we first calculate the dot product $z_{23} = p_2 \cdot p_3 = 2$, as well as the norms $\|p_2\| = 2$ and $\|p_3\| = 3$. Given that $n_2 = 2$ and $n_3 = 1$, the contribution of patterns $p_2$ and $p_3$ to the overall upper bound is computed as $n_2 \cdot z_{23} \cdot (\|p_2\| - z_{23}) + n_3 \cdot z_{23} \cdot (\|p_3\| - z_{23}) = 2$. As shown in Figure 2(a), the upper bound corresponds to a specific column partitioning strategy, i.e., grouping columns that are co-accessed by multiple query patterns (e.g., $A_3$ and $A_2$) while placing the remaining columns (e.g., $A_1$ and $A_4$) in a separate group. Lemma 2. For any given query set $Q$, the upper bound $\overline{\mathrm{IPA}}(\mathcal{A}|Q)$ provides an upper estimate of the optimal result $\mathrm{IPA}(\mathcal{A}^*|Q)$. Due to the space limit, we omit the proof in this paper. Hardness of upper-bound optimization. Next, we show that even optimizing the upper bound $\overline{\mathrm{IPA}}(\mathcal{A}|Q)$ is theoretically intractable. Lemma 3. The problem of partitioning $Q$ into $Q = \{Q_1, Q_2, \ldots, Q_n\}$ to minimize the upper bound of inter-partition affinity, i.e., $\sum_{k=1}^{n} \overline{\mathrm{IPA}}(\mathcal{A}_k|Q_k)$, is NP-hard. We prove this lemma via a reduction from the Max-Cut problem, which is NP-hard. Due to space constraints, we omit the proof. A greedy algorithm for QSplit. Given the time complexity of solving our problem, we design a practical and efficient heuristic algorithm.
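The upper bound of Definition 4.4 is cheap to evaluate from per-pattern statistics alone. A minimal sketch, taking the counts $n_i$, norms $\|p_i\|$, and dot products $z_{ij}$ as precomputed inputs (the function names are ours):

```python
def pair_contribution(n_i, n_j, norm_i, norm_j, z_ij):
    # Contribution of one pattern pair (p_i, p_j), per Definition 4.4:
    # n_i * z_ij * (||p_i|| - z_ij) + n_j * z_ij * (||p_j|| - z_ij)
    return n_i * z_ij * (norm_i - z_ij) + n_j * z_ij * (norm_j - z_ij)

def ipa_upper_bound(counts, norms, dots):
    """Sum pairwise contributions over all pattern pairs.

    counts[i] = n_i, norms[i] = ||p_i||, dots[(i, j)] = z_ij for i < j.
    """
    return sum(
        pair_contribution(counts[i], counts[j], norms[i], norms[j], z)
        for (i, j), z in dots.items()
    )
```

For the Figure 2(a) example ($n_2 = 2$, $n_3 = 1$, $\|p_2\| = 2$, $\|p_3\| = 3$, $z_{23} = 2$), this reproduces the contribution of 2 computed above.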
The algorithm first constructs a graph $G = (V, E)$, where each vertex $v_i \in V$ represents a query pattern $p_i$, and an edge $e_{ij} \in E$ connects two patterns $p_i$ and $p_j$ with a weight of $n_i \cdot z_{ij} \cdot (\|p_i\| - z_{ij}) + n_j \cdot z_{ij} \cdot (\|p_j\| - z_{ij})$. Next, based on the constructed graph $G$, the algorithm aims to partition it into $n$ sub-graphs, denoted as $\{G_1, G_2, \ldots, G_n\}$, such that the total weight of the edges across the sub-graphs is maximized. To achieve this, it first sorts all vertices in $G$ by their weighted degree in descending order and initializes $n$ empty sub-graphs. Then, the algorithm iteratively processes each vertex and assigns it to an appropriate sub-graph. Specifically, in the $i$-th iteration, the algorithm considers vertex $v_i$ and evaluates its potential assignment to each sub-graph $G_j$ by computing the total internal edge weight of $G_j' = G_j \cup \{v_i\}$, i.e., $G_j$ after incorporating $v_i$. The vertex $v_i$ is then assigned to the sub-graph that results in the minimum weight summation. Figure 2(b) illustrates an example of our greedy algorithm partitioning the query set $Q$ from Figure 1(b) into two subsets (i.e., $n = 2$).

Figure 3: Illustration of CardEst Inference with QSPN.

# 5 ONLINE QSPN COMPUTATION

# 5.1 CardEst Inference with QSPN

When a new query $q$ arrives, the Online CardEst Inference process using QSPN operates recursively by traversing the QSPN tree from the root to the leaf nodes, as shown in Figure 3. Specifically, during traversal, if a QSplit node is visited, the process routes query $q$ to the child node corresponding to the most relevant query subset.
On the other hand, if a Leaf, QProduct, Product, or Sum node is visited, the process computes the joint probability as follows. (1) For a Leaf node n, the algorithm computes the probability $\mathsf{n}.P_T(q)$ based on the histogram at leaf n. (2) For a QProduct or Product node n, the algorithm recursively invokes the CardEst inference process for each child of n where $q.A \cap \mathsf{n.child}_i.A \neq \emptyset$. It then multiplies the estimated probabilities of all the relevant child nodes to produce the estimation result. For example, at QProduct node $\mathsf{n}_3$, the column set is partitioned into $\{a_1, a_3\}$ and $\{a_2, a_4\}$. Since $q$ involves only $a_1, a_3$, the estimation continues with node $\mathsf{n}_6$, while node $\mathsf{n}_7$ is pruned. (3) For a Sum node n, the algorithm computes the weighted sum of the estimated probabilities of its child nodes. Next, we explore two key implementation details of the CardEst Inference process: (1) query routing in QSplit nodes, and (2) subtree pruning in Sum and Product nodes. Query Routing in QSplit Nodes. The objective of query routing in a QSplit node (say $\mathsf{n}_1$ in Figure 3) is to measure the degree to which a query set $Q$ shares similar access patterns with a given query $q$, which guides the QSplit node in routing $q$ to the appropriate child node (e.g., node $\mathsf{n}_3$).
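The recursive inference procedure can be sketched as follows. The node classes, field names, and the QSplit `match_score` hook are hypothetical stand-ins for a QSPN tree, not the authors' implementation; the routing score itself is formalized in the text below.

```python
def estimate(node, q):
    """Recursively estimate P_T(q) over a QSPN tree (illustrative sketch)."""
    if node.kind == "leaf":
        return node.histogram_prob(q)      # P_T(q) from the leaf histogram
    if node.kind in ("product", "qproduct"):
        prob = 1.0                         # multiply over relevant children only
        for child in node.children:
            if q.columns & child.columns:  # prune children with no queried column
                prob *= estimate(child, q)
        return prob
    if node.kind == "sum":
        return sum(w * estimate(child, q)  # weighted sum over all children
                   for w, child in zip(node.weights, node.children))
    if node.kind == "qsplit":
        best = max(node.children,          # route q to the child whose workload
                   key=lambda c: node.match_score(c, q))  # best matches q
        return estimate(best, q)
    raise ValueError(node.kind)
```

The QSplit branch routes the query instead of combining probabilities, which is what distinguishes it from the classical SPN node types.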
To this end, we formally introduce a matching score $\mathsf{S}(Q, q)$ between a query set $Q$ and a query $q$: $$ \mathsf{S}(Q, q) = \frac{1}{|Q|} \sum_{(a_i, a_j), i < j} \mathsf{AFF}(a_i, a_j|Q) \cdot \mathsf{AFF}(a_i, a_j|\{q\}) $$ The intuition behind the matching score $\mathsf{S}(Q, q)$ is to measure how closely the access patterns of queries in $Q$ align with the access pattern of $q$. For example, consider the query $q$ shown in Figure 3. The access pattern for $q$ is represented as: $$ \mathsf{AFF}(\cdot, \cdot|\{q\}) = [[1, 0, 1, 0], [0, 0, 0, 0], [1, 0, 1, 0], [0, 0, 0, 0]]. $$ Considering the workload partitioning in Figure 1(b), we have $\mathsf{AFF}(\cdot, \cdot|Q_1) = [[3, 2, 1, 1], [2, 2, 0, 0], [1, 0, 3, 3], [1, 0, 3, 3]]$ and $\mathsf{AFF}(\cdot, \cdot|Q_2) = [[2, 1, 2, 0], [1, 3, 1, 2], [2, 1, 3, 0], [0, 2, 0, 2]]$ for $Q_1$ and $Q_2$, respectively. Based on these, we compute $\mathsf{S}(Q_1, q) = \frac{1}{5}$ and $\mathsf{S}(Q_2, q) = \frac{2}{5}$. Therefore, when node $\mathsf{n}_1$ is visited, the algorithm routes query $q$ to its child node $\mathsf{n}_3$, as $Q_2$, corresponding to $\mathsf{n}_3$, shares more similar access patterns with $q$ than $Q_1$. Pruning Rules in Sum and Product Nodes. We employ pruning rules that leverage query $q$ and pre-computed metadata to exclude irrelevant child nodes of a given node n. (1) For Product and QProduct nodes, let $A_q$ denote the set of columns constrained by the query $q$. For a child node $\mathsf{n}.
\mathsf{child}_k$ of n, corresponding to the column subset $A_k$, pruning occurs if $A_k \cap A_q = \varnothing$. (2) For Sum nodes, the rule decides which child nodes contribute to $\mathsf{n}.P_T(q)$, i.e., the set of child nodes that participate in the computation. Consider a child node $\mathsf{n}.\mathsf{child}_k$ of n, corresponding to the table subset $T_k$. Before query processing, we pre-compute and store the range of values for each column $A_i$ in $T_k$. During cardinality estimation, if the value ranges of $T_k$ for any column do not overlap with the constraints specified by query $q$, the child node $\mathsf{n}.\mathsf{child}_k$ can be safely pruned, as it does not contribute to the result.

# 5.2 QSPN Model Update

Data updates ($\Delta T$, which include new tuples) and query workload shifts ($\Delta Q$, which include new queries) impact the accuracy and inference efficiency of the original QSPN model. The high-level idea of the update method is to traverse the QSPN in a top-down manner. Each time a node n is visited during the traversal, two steps are performed to update the subtree rooted at n. First, the method examines whether n, originally constructed using $\mathsf{n}.T$ and $\mathsf{n}.Q$, still fits $\mathsf{n}.T \cup \Delta T$ or $\mathsf{n}.Q \cup \Delta Q$. Second, if n no longer fits, the subtree rooted at n is reconstructed by calling the QSPN construction method (see Section 4), which generates a new subtree rooted at $\mathsf{n}'$. Otherwise, each child node $\mathsf{n}.child_i$ is recursively updated. Note that the check and reconstruction steps are unnecessary if n is a Leaf, as histograms can be incrementally updated in $O(|\Delta T|)$. The key challenge in the above update method is efficiently examining whether a node n still fits ${\mathsf{n}}.
T \cup \Delta T$ or $\mathsf{n}.Q \cup \Delta Q$, as the corresponding data table $\mathsf{n}.T$ and query workload $\mathsf{n}.Q$ are not materialized at node n. To address this challenge, we maintain lightweight data structures in different types of nodes and design a mechanism to check whether node n is still up-to-date with respect to the data and query workload, as described below. (1) If n is a Product node, we examine whether the column partitioning still holds, i.e., whether columns in different partitions remain independent with respect to the updated data table $T \cup \Delta T$. To do this, we first compute $\mathsf{RDC}(a_i, a_j|\Delta T)$, where $a_i, a_j \in \mathsf{n}.A$, and then check whether there exist $a_i, a_j$ from different child nodes, say $a_i$ from $\mathsf{n}.child_k$ and $a_j$ from $\mathsf{n}.child_l$, such that $\frac{|\mathsf{n}.T|}{|\mathsf{n}.T| + |\Delta T|}\mathsf{RDC}(a_i, a_j|\mathsf{n}.T) + \frac{|\Delta T|}{|\mathsf{n}.T| + |\Delta T|}\mathsf{RDC}(a_i, a_j|\Delta T)$ is larger than a pre-defined threshold. If any such pair $a_i, a_j$ is found, the subtree rooted at n needs to be reconstructed for more accurate CardEst. If the RDC between columns within a child node becomes less significant due to $\Delta T$, we may reconstruct the subtree rooted at the child node to further partition the now-independent columns.

Figure 4: Illustration of our proposed M-QSPN method.

(2) If n is a QProduct node, the update examination strategy is similar to that of the Product case, except that we consider the access affinity $\mathsf{AFF}(a_i, a_j|\mathsf{n}.
Q)$ for any column pair $(a_i, a_j)$, instead of the correlation $\mathsf{RDC}(a_i, a_j|\mathsf{n}.T)$. (3) If n is a QSplit node, we examine whether the query routing strategy still holds for the updated workload $Q \cup \Delta Q$. To do this, for each workload partition $Q_k$ corresponding to the child node $\mathsf{n}.child_k$, we maintain the average of the matching scores of the queries in $Q$ routed to $Q_k$, i.e., $\sum_{q \in Q_k} \mathsf{S}(Q_k, q)/|Q_k|$. Then, for each query $q'$ in the updated workload $\Delta Q$, we assign $q'$ to the child node with the maximum matching score (see Section 5.1) and update the average matching score accordingly. If the average matching score of any workload partition becomes less significant, e.g., less than a predefined threshold, we reconstruct the subtree rooted at the QSplit node n, as the workload partition no longer reflects the access patterns of $Q \cup \Delta Q$. (4) If n is a Sum node, we maintain the centroid for each tuple subset $T_i$ from $\mathcal{T} = \{T_1, T_2, \dots, T_m\}$ and the average distance between each tuple and the centroid of its assigned subset. Then, for each new tuple $t$ in $\Delta T$, we assign $t$ to the tuple subset with the minimum distance to the centroid of that subset and update the average distance accordingly. If the average distance becomes significant, e.g., exceeding a predefined threshold, we update the subtree rooted at n, as the Sum may no longer hold for $\mathsf{n}.T \cup \Delta T$. In this way, the QSPN update method minimizes unnecessary costs associated with model updates while ensuring accuracy in response to data updates and query workload shifts.
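As an illustration, the Product-node staleness check above can be sketched as follows, assuming RDC values on the original table and on $\Delta T$ are available as dictionaries keyed by column pairs (all names are ours, not the paper's):

```python
def product_node_stale(n_rows_old, n_rows_delta, rdc_old, rdc_delta,
                       column_to_child, threshold):
    """Return True if a Product node's column partitioning no longer holds.

    rdc_old[(a_i, a_j)] and rdc_delta[(a_i, a_j)] hold RDC(a_i, a_j) on the
    original table n.T and on Delta T; column_to_child maps each column to
    its child partition. The weighted RDC of every cross-partition pair is
    compared against the pre-defined threshold, as in Section 5.2.
    """
    w_old = n_rows_old / (n_rows_old + n_rows_delta)
    w_new = n_rows_delta / (n_rows_old + n_rows_delta)
    for (a_i, a_j), r_old in rdc_old.items():
        if column_to_child[a_i] == column_to_child[a_j]:
            continue  # same child: independence is not assumed for this pair
        weighted = w_old * r_old + w_new * rdc_delta[(a_i, a_j)]
        if weighted > threshold:
            return True  # correlated cross-partition pair found: reconstruct
    return False
```

The checks for QProduct, QSplit, and Sum nodes follow the same shape, swapping the weighted RDC for the access affinity, the average matching score, or the average centroid distance, respectively.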
# 6 MULTI-TABLE CARDEST WITH QSPN

We introduce M-QSPN, a multi-table CardEst method based on QSPN, as illustrated in Figure 4. For ease of presentation, this paper considers a query $q$ that joins two tables $S$ and $T$ on the inner join condition $S.sid = T.tid$, denoted as $q(S \bowtie T)$, with base table filter predicates $q(S)$ and $q(T)$. In particular, we assume that both $S.sid$ and $T.tid$ share the same value domain $D$. Based on this notation, the problem of multi-table CardEst can be formalized as estimating the cardinality $|q(S \bowtie T)|$, which can be derived as: $$ |q(S \bowtie T)| = |S||T| \sum_{v \in D} P(sid = v \wedge q(S)) \cdot P(tid = v \wedge q(T)), $$ where $P(sid = v \wedge q(S))$ (or $P(tid = v \wedge q(T))$) denotes the probability that the join key $S.sid$ (or $T.tid$) equals $v$ in the result table of the base filter predicates $q(S)$ (or $q(T)$). Directly estimating the cardinality $|q(S \bowtie T)|$ using Equation (4) is computationally expensive. To address this, M-QSPN supports multi-table CardEst by binning join keys. Specifically, we divide the domain of the join keys, namely $sid$ and $tid$, into a set of bins, denoted as $\mathcal{B} = \{B_1, B_2, \ldots, B_n\}$. We then estimate $|q(S \bowtie T)|$ using these bins, i.e., $$ |q(S \bowtie T)| = |S||T| \sum_{B \in \mathcal{B}} \sum_{v \in B} \left\{ P(sid = v \wedge q(S)) \cdot P(tid = v \wedge q(T)) \right\}. $$ The task is to estimate $\sum_{v \in B} P(sid = v \wedge q(S)) \cdot P(tid = v \wedge q(T))$ for each individual bin $B \in \mathcal{B}$. To achieve this, we propose maintaining basic statistics for each bin $B$ of values. Formally, we define a bin for a join key, say ${\it S}.
{\it sid}$, as a triple $B = (\mathrm{id}, \mathsf{num}, \mathsf{mcv})$, where id is the identifier of the bin $B$, num is the number of tuples in $B$, and mcv is the most common value in $B$ along with its frequency. Figure 4(a) provides an example of range-based binning: the bin $B^S$ for $sid$, corresponding to the range [10, 19], has num $= 67$ and mcv $= 15$ with frequency 42. Similarly, we can compute the corresponding bin $B^T$ for the query result $q(T)$ over table $T$. Then, we can estimate $\sum_{v \in B_2} \left\{ P(sid = v \wedge q(S)) \cdot P(tid = v \wedge q(T)) \right\}$ based on the two bins $B_2^S$ and $B_2^T$. In this section, we address two challenges in the above estimation process. First, while it is straightforward to compute statistics for a given bin $B$ over a join key, such as $sid$, the task becomes more complex when considering the base table predicates, such as $q(S)$, because these predicates may have intricate correlations with the join keys. The second challenge lies in estimating $\sum_{v \in B} \left\{ P(sid = v \wedge q(S)) \cdot P(tid = v \wedge q(T)) \right\}$ based on the bins generated from $q(S)$ and $q(T)$, respectively. Binning Generation. To tackle the first challenge, we introduce a binning generation method based on our single-table QSPN model. Specifically, given base table predicates, such as $q(S)$ over table $S$, Binning Generation generates statistics for each bin $B \in \mathcal{B}$ corresponding to the data table that satisfies $q(S)$. To account for the intricate correlations between the predicate $q(S)$ and the join key, such as $sid$, this paper proposes a bottom-up traversal of the constructed QSPN of table $S$, as illustrated in Figure 4(b).
Specifically, during the traversal, when a particular node n is visited, the key task is to generate a set of bins $\mathcal{B}_{\mathsf{n}}$ based on the bins of n's child nodes, and then return $\mathcal{B}_{\mathsf{n}}$ to n's parent node. To this end, we propose bin generation strategies tailored to different node types. (1) If n is a Leaf corresponding to the join key, say $sid$, we can directly return the pre-generated bins $\mathcal{B}_{\mathsf{n}}$ for the join key, as shown in Figure 4(b). Specifically, if query $q(S)$ over table $S$ includes predicates on $sid$, we filter the bins to retain only those that satisfy the predicates. (2) If n is a Product or QProduct node, at most one child node of n, say $\mathsf{n}'$, will return the bins $\mathcal{B}_{\mathsf{n}'}$ corresponding to its subtree, while the other child nodes return the estimated probabilities $P_i$ based on the predicates in $q(S)$. In this case, we scale the num and mcv of each bin $B \in \mathcal{B}_{\mathsf{n}'}$ by a factor of $\prod_i P_i$, as shown in Figure 4(b). This scaling is reasonable because the child nodes of n are either independent in terms of data correlation or are infrequently co-accessed by the queries. (3) If n is a Sum node and its column set $A_{\mathsf{n}}$ contains the join key, say $sid$, then all the child nodes of n return their respective bins. In this case, we merge the bins from these child nodes as shown in Figure 4(a). Specifically, if bins from different child nodes correspond to the same value range, e.g., [10, 19], we sum the num values of these bins to derive a new num. For the mcv, we compute the frequencies of all possible most common values and select the one with the highest frequency as the new mcv.
(4) If n is a QSplit node, we can simply return the bins from the child node to which query $q(S)$ is routed. Remarks. Note that in this section, we focus on the case of a single inner join condition, i.e., $S.sid = T.tid$. The binning generation method described above can be easily extended to handle multiple inner join conditions between tables $S$ and $T$. Multi-Table CardEst based on Binning. To address the second challenge, we propose an effective method to estimate $\sum_{v \in B} \left\{ P(sid = v \wedge q(S)) \cdot P(tid = v \wedge q(T)) \right\}$ for each "matched" bin $B$, which is shared by both table $S$ with predicates $q(S)$ and table $T$ with predicates $q(T)$. For ease of presentation, we use $B^S$ and $B^T$ to denote the matched bins, i.e., bins with the same range of join key values. Specifically, we consider the following cases. (1) The first case occurs when $B^S$ and $B^T$ share the same mcv, and the mcv is significant compared to num, e.g., the bins with range [10, 19] in Figure 4(a). In this case, we can use the mcv as the representative for the bins, disregarding other values due to their insignificant presence. Based on this, we multiply the frequencies of the mcv to estimate the result, e.g., $(42 \times 27)/(|S| \times |T|)$. (2) The second case occurs when $B^S$ and $B^T$ share the same mcv, but the mcv is not significant compared to num, e.g., the bins with range [20, 29]. In this case, we cannot directly use the frequencies of the mcv for estimation because the value distributions of $B^S$ and $B^T$ may differ. To address this, we introduce a scaling factor, $\min\left(\frac{B^S.\mathsf{num}}{B^S.\mathsf{mcv}}, \frac{B^T.\mathsf{num}}{B^T.\mathsf{mcv}}\right)$. For example, consider the bins with range [20, 29]; we scale the product of the mcv frequencies by a factor of $52/7$. (3) The third case occurs when $B^S$ and $B^T$ have different mcv values, as seen in the bins with the range [30, 39] in Figure 4(a). In this case, we focus on the mcv with the larger frequency and assume that the tuples it matches in the other bin come from that bin's non-mcv values. Therefore, the tuple count for this match is $\frac{B^S.\mathsf{num} - B^S.\mathsf{mcv}}{|B^S.\mathsf{range}| - 1} \times B^T.\mathsf{mcv}$. This applies when $B^T.\mathsf{mcv} > B^S.\mathsf{mcv}$.

# 7 EXPERIMENTS

# 7.1 Experimental Setting

Datasets. We evaluate our proposed QSPN approach on both single-table and multi-table datasets. Table 2 provides the statistics of the datasets used in our experiments.

Table 2: Statistics of Datasets.

Single-table Datasets. We use the following four single-table datasets. (1) GAS [28] is a real-world gas sensing dataset, from which we extract the 8 most informative columns (Time, Humidity, Temperature, Flow_rate, Heater_voltage, R1, R5, and R7), following existing works [37, 42]. (2) Census [26] is a dataset about income, extracted by Barry Becker from the real-world 1994 Census database. (3) Forest [27] is a real-world forest-fire dataset from the US Forest Service (USFS) and US Geological Survey (USGS). (4) Power [29] is an electric power consumption dataset consisting of measurements gathered in a house located in Sceaux ($7~\mathrm{km}$ from Paris, France) between December 2006 and November 2010. Multi-table Dataset. We evaluate multi-table cardinality estimation using the IMDB [14] dataset, a real-world dataset containing 50K movie reviews.
This dataset is extensively used in existing works [10, 12, 16, 35, 40, 42] for multi-table cardinality estimation evaluation. The columns in IMDB typically have large domain sizes, which presents a greater challenge for CardEst. Query Workloads. We describe below how the query workloads used in the experiments are prepared. Synthetic Workloads for Single-Table CardEst. As there is no real query workload available for the above four single-table datasets, we use the following steps to synthesize workloads. (1) Template generation: Following existing DBMS benchmarks [17, 31, 32], we first generate SQL templates containing different column combinations as query predicates, and then synthesize queries based on these templates. (2) Template selection: Existing studies [12, 16, 34, 41, 42] have shown that data correlation significantly affects the performance of CardEst methods. To account for this, we cluster all generated templates into two groups: one containing templates that access highly correlated columns and the other containing templates that access weakly correlated columns. We then evenly sample templates from both groups to ensure a fair comparison. (3) Query synthesis: Given a template, we generate queries by filling it with randomly selected predicates. Previous studies [34, 35, 41] have shown that query conditions significantly affect CardEst performance. To ensure a fair comparison, we follow the method from [34], generating query conditions in two steps: first, selecting a random tuple from the data table as the center of the range; then, determining the width of each range using either a uniform or exponential distribution, with a predefined ratio controlling the selection between them. This ensures the conditions cover a variety of query patterns for evaluation. We generate a read-write hybrid workload to evaluate QSPN model updates, where each SQL statement is randomly selected to be either a query or a DML command (INSERT or DELETE). 
First, we create a training set with queries accessing weakly correlated columns and range constraints following a normal distribution. Then, we add at least $20\%$ new data tuples with increased correlation by sampling from a table with sorted columns. These new tuples are included as DML commands. Finally, we generate a test set that includes (1) queries following the original patterns and distribution and (2) queries accessing highly correlated columns with range constraints following an exponential distribution. Real-World Workload for Multi-Table CardEst. For our multi-table dataset IMDB, we use JOB-light [17], the most widely adopted multi-table workload for CardEst. JOB-light consists of 70 queries on the IMDB dataset, where each query includes joins on 2 to 5 tables, along with 1 to 5 range constraints. These queries present a significant challenge for cardinality estimation methods. In particular, to ensure a fair comparison with the hybrid-driven model UAE, we use JOB-light as the workload test set and adopt the extended workload provided by UAE as the workload training set. This extended workload contains 100,000 queries covering JOB-light. Baselines. We compare QSPN against the following baselines. MSCN [16] is a query-driven CardEst model based on regression models. We use the implementation of MSCN from [34] and set its hyper-parameters following the configurations in [34]. Naru [41] is a DAR-based model that fits the joint data distribution to compute CardEst. We use the implementation of Naru from [34] and set its hyper-parameters following the configurations in [34]. DeepDB [12] is a data-driven SPN-based model which fits the joint data distribution to compute CardEst under a local independence assumption. We use the implementation of DeepDB from [34] and set its hyper-parameters following the configurations in [34]. FLAT [42] is a CardEst method based on DeepDB, which introduces factorization and multi-dimensional histograms.
We use the implementation provided by the authors of FLAT [37] and tune its hyper-parameters as recommended in the original paper. UAE [35] is a hybrid-driven CardEst method based on Naru, combining unsupervised losses from data with supervised losses from queries. We use the implementation provided by the authors of UAE [38]. We also use it as a multi-table CardEst method [39]. FactorJoin [36] is a multi-table CardEst method based on join-key binning, without modeling the data distribution on generated outer-join tables. Following the paper [36], we implement it with uniform sampling (sample_rate $= 0.1$). Postgres [10] is the cardinality estimator of PostgreSQL, based on traditional statistics-based methods. We run and evaluate it using the connector from [34], which connects to PostgreSQL 12 with the default setting stat_target $= 10000$. Sampling is a traditional CardEst method based on random sampling of data. We use the implementation of Sampling from [34] with the default setting ratio $= 0.015$. MHist is a traditional CardEst method that uses a single multi-dimensional histogram to model the joint data distribution. We use the implementation of MHist from [34] with the default setting. Evaluation Metrics. We use the following metrics to comprehensively evaluate and compare CardEst methods. Estimation Accuracy. For each query, we measure estimation accuracy using Q-error, defined as the ratio between the estimate $est$ and the ground truth $gt$, i.e., $Q\text{-error} = \frac{\max\{est, gt\}}{\min\{est, gt\}}$. Inference Time. For each query in the test set, we measure the total time spent on CardEst as inference time. We then compute the mean inference time over the queries in the test set. Storage Overhead.
We measure the total size of the model or statistics file(s) for each method on each dataset and its corresponding workload training set, defining this as the storage overhead. Experimental Settings. All evaluated methods are implemented in Python 3.7. Our experiments are conducted on an AMD-64 server with the following specifications: OS: Ubuntu 20.04.6 LTS; Dual CPU System: 2 × Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz (20C40T); Main Memory: 1TB DDR4 ECC; Storage: 4 × 8TB HDD (RAID5). We set the default hyper-parameters of QSPN (the query-aware adaptive threshold of Product RDC, the threshold of QProduct, the threshold of QSplit, and the threshold of Sum) on all datasets as: $\tau_{\mathscr{P}} = (s = 5, l = 0.1, u = 0.3)$, $\tau = 0.01$, $\tau_x = 0.7$, $\tau_s = 0.3$. # 7.2 Evaluation on Single-Table CardEst We first compare QSPN with the baseline methods on single-table CardEst. Table 3 reports the experimental results. Estimation Accuracy. Our proposed QSPN achieves superior estimation accuracy, comparable to state-of-the-art (SOTA) data-driven methods (FLAT and Naru) and the hybrid method UAE. Specifically, the mean Q-errors of QSPN are minimal, ranging from 1.0 to 1.8. Moreover, QSPN overcomes the limitations of traditional SPN models (e.g., DeepDB), achieving up to an 88% reduction in Q-error. The superior performance of QSPN is primarily attributed to its ability to effectively handle strongly correlated data through column partitioning based on both data distribution and query co-access patterns. In contrast, query-driven methods such as MSCN and traditional CardEst methods (Postgres, Sampling, and MHist) struggle to achieve satisfactory estimation accuracy in most cases, as they fail to adequately learn the data distribution. Additionally, we observe that the Q-errors of DeepDB on the GAS and Power datasets rise sharply.
This is due to the strong correlations present in these datasets, which pose a more significant challenge for DeepDB. Inference Time. As shown in Table 3, QSPN demonstrates highly efficient inference performance. Specifically, the inference time of QSPN ranges from 0.422 ms to 3.092 ms across the four datasets, which is only slightly slower than the traditional CardEst method Postgres and the query-driven method MSCN. Moreover, compared to the traditional SPN-based method DeepDB, QSPN achieves up to a 92.2% reduction in inference time. This efficiency is due to QSPN’s column partitioning strategy, which considers both data correlations and query access patterns. By reducing the number of intermediate nodes in the SPN, QSPN improves inference efficiency and reduces storage overhead, all while maintaining high estimation accuracy. On the other hand, Naru and UAE are the slowest methods, as their underlying deep auto-regressive models suffer from high inference times due to the computationally expensive progressive sampling process. Additionally, the inference time of FLAT increases significantly on the Forest and Power datasets, due to the large number of factorize and multi-leaf nodes used to model the complex correlations in these datasets. We also evaluate the construction (training) time of the methods and categorize them into three groups: (1) Postgres, Sampling, and MSCN require negligible time for training. (2) SPN-based methods, such as DeepDB, FLAT, and QSPN, take approximately 1-3 minutes for construction, which does not impose a significant burden on the DBMS. (3) DAR-based methods like Naru and UAE require 100 to 1000 times more training time than SPN-based models, making them impractical for real-world applications. Storage Overhead. QSPN ranks among the top in storage efficiency, requiring only tens of KB to about 1 MB more than Postgres.
DeepDB can also be considered a lightweight method, although its model size increases significantly on the Forest and Power datasets due to a larger number of nodes. On the other hand, MSCN, Naru, and UAE, which are based on CNN or DAR models, naturally have larger model sizes. FLAT and Sampling suffer from substantial storage overhead. The excessive model size of FLAT stems from the same factors that contribute to its slow performance. Summary. The experimental results demonstrate that QSPN achieves superior and robust performance across the three key criteria, outperforming state-of-the-art approaches. This outcome aligns with the design objectives of QSPN, as presented in Table 1. # 7.3 Evaluation on Dynamic Model Update In this section, we evaluate the model update performance of QSPN on our read-write hybrid workload and compare the following alternatives: NoTrain (no model updates), ReTrain (periodic model reconstruction), and AdaIncr (our updating method in QSPN). Due to space constraints, we only report the results for the hybrid-update setting, which involves both data updates and query workload shifts, while the results for the data-update and query-update settings are provided in our technical report. As shown in Figure 5, NoTrain suffers from poor accuracy across all four datasets under hybrid workloads, whereas both ReTrain and AdaIncr maintain the accuracy of the QSPN model. Notably, AdaIncr reduces update time by 30% to 60% while achieving the same accuracy as ReTrain. Furthermore, when analyzing the trends in workload execution time, the total time usage of AdaIncr increases gradually, whereas that of ReTrain rises sharply. # 7.4 Evaluation of Multi-Table CardEst In this section, we compare our proposed M-QSPN with the SOTA methods for multi-table CardEst and report the results in Table 4.
We observe that Postgres exhibits poor estimation accuracy and loses its advantage in inference time, as it requires multiple iterations to compute multi-table join cardinalities. UAE achieves better estimation accuracy; however, both its inference and training times are unsatisfactory due to the inherent limitations of DAR-based models. Compared to UAE, FLAT provides a better trade-off between estimation accuracy and inference time. Among all evaluated approaches, our proposed M-QSPN model achieves the best performance. Specifically, compared to the best baseline, FLAT, M-QSPN improves estimation accuracy and inference efficiency by approximately three times, while addressing FactorJoin’s weakness in base-table filtering. The only drawback of M-QSPN is its model size, primarily due to storing binnings on Leaf nodes with extra storage for convenience. However, this additional storage overhead can be eliminated by integrating binning into the histogram on each Leaf node during implementation. Table 3: Evaluating Single-Table CardEst on Key Criteria: Estimation Errors, Inference/Construction Time and Model Size. Table 4: Evaluating Multi-Table CardEst on the JOB-light Workload of the IMDB Dataset. # 7.5 Evaluation on End-to-End Query Execution We evaluate the effect of CardEst models on end-to-end query execution in the PostgreSQL DBMS [10], following the experimental settings described in the End-to-End CardEst Benchmark [21]. Specifically, we conduct the End-to-End benchmark on the IMDB dataset [14] with the JOB-light workload [17], and compare our proposed QSPN with the PostgreSQL internal estimator [10], NeuroCard [40], and FLAT [42] in the End-to-End evaluation. The experimental results are reported in Figure 6. Our QSPN achieves the best performance in end-to-end query execution. Specifically, QSPN reduces the mean query execution time by 25.9%, compared to 2.4% for NeuroCard and 17.1% for FLAT.
This outcome aligns with the superior performance of our QSPN in multi-table CardEst. In particular, QSPN performs best on queries involving 2, 3, and 5 joined tables. For instance, for the most complex 5-table queries, our QSPN reduces execution time by more than 15% compared to FLAT. Figure 5: Evaluation on Dynamic Model Update (Hybrid-Update Setting). Figure 6: End-to-End Mean Query Execution Time. # 8 RELATED WORK Traditional CardEst Methods. Traditional CardEst methods [1, 2, 4, 10, 11, 18, 24] rely on simplifying assumptions, such as column independence. Postgres [10] assumes that all columns are independent and estimates the data distribution of each column using histograms. Sampling-based methods [1, 2, 11, 18] sample tuples from the data and store them. In the online phase, these methods execute queries on the stored sample to estimate cardinalities. MHist [24] is a multi-histogram approach that accounts for data correlation by constructing multi-dimensional histograms. Bayes [4] performs cardinality estimation using probabilistic graphical models [5, 8, 33]. While effective, it can be slow, especially when dealing with datasets that have high correlation.
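The cost of the column-independence assumption can be demonstrated in a minimal sketch (illustrative only; this is not PostgreSQL's actual estimator): multiplying per-column selectivities drawn from 1-D statistics badly underestimates a conjunctive predicate over correlated columns.

```python
import numpy as np

# Toy illustration of the column-independence assumption used by traditional
# estimators: per-column selectivities are multiplied, which breaks down when
# the columns are correlated. Column values and predicates are made up.
rng = np.random.default_rng(1)
c1 = rng.integers(0, 10, size=100_000)     # uniform over {0, ..., 9}
c2 = c1                                    # perfectly correlated: worst case

sel_c1 = np.mean(c1 < 5)                   # per-column selectivity of c1 < 5
sel_c2 = np.mean(c2 < 5)                   # per-column selectivity of c2 < 5
independent_est = sel_c1 * sel_c2          # ~0.25 under independence
true_sel = np.mean((c1 < 5) & (c2 < 5))    # ~0.5 in reality
```

With perfectly correlated columns the independence estimate is roughly half the true selectivity, which is exactly the kind of error that motivates correlation-aware models such as MHist, Bayes, and the learned methods below.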
The key limitation of these traditional methods is their reliance on simplifying assumptions, which often lead to significant estimation errors. Learning-based CardEst Methods. Query-driven methods transform the CardEst problem into a regression task, mapping query workloads to ground truth cardinalities. MSCN [16] encodes a query into a standardized vector, which is fed into a Multi-Layer Perceptron (MLP) and then average-pooled. The pooled representations are concatenated and passed into a final MLP to output the selectivity. LW-XGB/NN [6] encodes a query into a simpler vector by concatenating the lower and upper bounds of all range predicates in order. This query vector is then input into a small neural network or XGBoost [3] model to predict the estimated cardinalities. Data-driven methods transform the CardEst problem into a joint probability problem, where each column is treated as a random variable. Naru [41] factorizes the joint distribution into conditional distributions using Deep AutoRegressive (DAR) models such as MADE [7]. Naru then employs progressive sampling [25] to compute cardinality estimates for range queries based on point probabilities. UAE [35] is an optimized version of Naru, tailored for query workloads. As a hybrid CardEst model, UAE improves the fitting of the DAR model on data with long-tail distributions by adjusting the sampling regions according to the queries. FactorJoin [36] is a multi-table CardEst method that trains single-table cardinality estimation models for each of the joined tables and uses a factor graph to capture the relationships and correlations between different join keys. Despite advancements in learning-based cardinality estimation, existing methods may struggle to simultaneously optimize the key criteria: estimation accuracy, inference time, and storage overhead, limiting their practical applicability in real-world database environments.
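The autoregressive factorization and progressive sampling used by Naru-style models can be illustrated with a minimal two-column sketch. Everything here is an assumption for illustration: the tabulated distributions, the function name, and the predicate are made up and are not taken from Naru's codebase.

```python
import numpy as np

# Toy sketch of DAR-style estimation: the joint distribution is factorized
# autoregressively as P(c1, c2) = P(c1) * P(c2 | c1), and a range query is
# estimated by progressive sampling over the qualifying values of each column.
rng = np.random.default_rng(0)

p_c1 = np.array([0.5, 0.3, 0.2])           # P(c1) over the domain {0, 1, 2}
p_c2_given_c1 = np.array([                 # P(c2 | c1) over the domain {0, 1}
    [0.9, 0.1],
    [0.4, 0.6],
    [0.2, 0.8],
])

def progressive_sampling_estimate(valid_c1, valid_c2, n_samples=10_000):
    """Estimate the selectivity of (c1 in valid_c1) AND (c2 in valid_c2)."""
    valid_c1 = np.asarray(valid_c1)
    mass1 = p_c1[valid_c1]                 # restricted marginal mass of c1
    # Progressively sample c1 from its restricted, renormalized marginal.
    idx = rng.choice(len(valid_c1), size=n_samples, p=mass1 / mass1.sum())
    samples_c1 = valid_c1[idx]
    # For each sample, sum the conditional mass of the qualifying c2 values.
    mass2 = p_c2_given_c1[samples_c1][:, valid_c2].sum(axis=1)
    # Unbiased estimate: restricted c1 mass times the mean conditional mass.
    return mass1.sum() * mass2.mean()

# Selectivity of (c1 <= 1) AND (c2 == 1); exact value: 0.5*0.1 + 0.3*0.6 = 0.23.
est = progressive_sampling_estimate(valid_c1=[0, 1], valid_c2=[1])
```

In a real DAR model the conditional table is replaced by a neural network evaluated once per sampled prefix, which is why progressive sampling dominates inference time, as observed in Section 7.2.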
Cardinality estimation is a fundamental component in database systems, crucial for generating efficient execution plans. Despite advancements in learning-based cardinality estimation, existing methods may struggle to simultaneously optimize the key criteria: estimation accuracy, inference time, and storage overhead, limiting their practical applicability in real-world database environments. This paper introduces QSPN, a unified model that integrates both data distribution and query workload. QSPN achieves high estimation accuracy by modeling data distribution using the simple yet effective Sum-Product Network (SPN) structure. To ensure low inference time and reduce storage overhead, QSPN further partitions columns based on query access patterns. We formalize QSPN as a tree-based structure that extends SPNs by introducing two new node types: QProduct and QSplit. This paper studies the research challenges of developing efficient algorithms for the offline construction and online computation of QSPN. We conduct extensive experiments to evaluate QSPN in both single-table and multi-table cardinality estimation settings. The experimental results have demonstrated that QSPN achieves superior and robust performance on the three key criteria, compared with state-of-the-art approaches.
# 1 Introduction As large language models continue to advance, the design of their evaluations becomes increasingly important, as it shapes the development priorities of the next generation of models and guides the broader trajectory toward artificial general intelligence (Chang et al., 2024). Figure 1: Overview of participants (LLMs and humans, with and without experience or human knowledge), testbeds (e.g., Twenty Questions), and settings (interactions, rewards, and reflection). Current benchmarks predominantly focus on measuring the expertise of language models in performing specific tasks. However, intelligence is not solely defined by the possession of expert knowledge (Krathwohl, 2002; Minsky, 1988). For example, individuals without profound knowledge can still demonstrate intelligence through the speed with which they acquire new skills through experience (Silver and Sutton, 2025). This dimension of intelligence, the capacity for rapid learning, remains largely overlooked in existing evaluation frameworks. Assessing the ability to learn quickly is challenging. Under the current LLM development paradigm, models undergo massive pre-training followed by domain-specific alignment (Achiam et al., 2023; Liu et al., 2024). Models are typically compared based on their final performance outcomes, without constraints on the amount of task-relevant training data they utilize. In this paper, we do not argue for altering this training paradigm, as it is the reason for their success. Rather, we aim to design an evaluation framework that can directly measure a model’s ability to improve rapidly at test time.
The characteristic we aim to evaluate aligns closely with the concept of “Test-time Learning”, which refers to a model’s ability to adapt and improve through its own test-time experience. The desirable end-state for artificial intelligence should have the ability to effectively improve its performance through a limited number of experiences by interacting with environments, reflecting on feedback and rewards, rapidly acquiring in-context or in-weight policies, and acting adaptively. Moreover, these test-time improvements should be capable of accumulating as experience grows, enabling continual adaptation and learning. The concept of test-time learning shares conceptual similarities with in-context reinforcement learning (Laskin et al.; Lee et al., 2023; Grigsby et al.; Lu et al., 2023a) and agent self-evolution (Tao et al., 2024) to some extent. However, it is distinguished from these paradigms in several fundamental ways. (1) Generality of Environment: In-context reinforcement learning typically operates within classical RL domains such as adversarial bandits (Duan et al., 2016) or the dark room setting (Laskin et al.), which are characterized by constrained environments and limited action spaces. In contrast, test-time learning emphasizes generalization in open-ended environments, where the action space spans the full token space of a language model. (2) Beyond Memorization: Research on agent self-evolution has largely focused on tool-use or coding tasks that rely heavily on rote memorization or repeated exposure to similar instances (Qian et al., 2024). These setups often allow models to improve simply by recalling prior examples. Test-time learning is achieved in experience-based, reasoning-intensive tasks that require discovering latent patterns and executing self-proposed policies beyond surface-level recall. In this work, we propose an objective framework to evaluate the test-time learning ability of current large language models.
Rather than relying on static tasks like academic olympiads, we adopt competitive games, which are dynamic, resistant to saturation, and embed latent strategies, making them ideal for studying test-time learning. Furthermore, we systematically evaluate performance across four test-time experience settings: (1) full experience with interactions, rewards, and model reflection; (2) model-derived policy based solely on game rules; (3) model-derived policy informed by both rules and accumulated test-time experience; and (4) human-authored policy. To compare LLM performance with human reasoning, we also recruit human annotators to perform the same task. The results reveal clear gaps between human and model test-time learning capabilities, highlighting promising directions for future research. The experimental results show that LLMs demonstrate measurable test-time learning ability; however, these gains are not stable or consistent as experience accumulates. In contrast, human participants exhibit more stable and rapid learning. These findings highlight the need for further evaluation and improved training strategies to enhance the test-time learning of LLMs. Importantly, our aim is not to build an elaborate framework centered on agentic workflows, but rather to propose a lightweight and objective pipeline for assessing whether models can benefit from test-time experience. We believe that systematic evaluation of test-time learning constitutes a key step toward advancing the capabilities of large language models. # 2 Related Work # 2.1 Test-time Learning The concept "test-time learning" shares certain similarities with "test-time training", "in-context reinforcement learning" and "self-evolution" but also has key distinctions in its focus and formulation. The first two concepts involve weight updates.
Test-time training (Sun et al., 2020; Liu et al., 2021; Gandelsman et al., 2022; Sinha et al., 2023; Sun et al., 2019) primarily addresses distributional or domain shifts between training and test data by adapting model parameters at inference time. In-context reinforcement learning studies (Laskin et al.; Lee et al., 2023; Grigsby et al.; Lu et al., 2023a) involve training models from scratch to perform reinforcement learning tasks via in-context tokens. Self-evolution studies focus on performance improvements through in-context interactions without parameter updates (Tao et al., 2024; Lu et al., 2023b). For instance, Lange et al. (2024) proposed prompting strategies that enhance performance through structured interactions, and Yu and Feng (2025) introduced agentic workflows that integrate human knowledge to guide model behavior and maximize gains. Most prior work emphasizes engineering pipelines to improve in-context performance, often in application settings such as web navigation, tool use, or code generation—domains where improvements are frequently driven by retrieval or surface-level similarity to prior examples, rather than by the development of general strategies or deeper reasoning. As noted by Silver and Sutton (2025), “now is the time of experience,” highlighting the emerging view that future intelligent agents must learn through interaction to achieve higher-level reasoning, rather than rely solely on static question answering-style evaluations. In this work, we aim to objectively evaluate the extent to which LLMs can leverage experience at test time. Specifically, we quantify models’ test-time gains and compare them against improvements guided by human-authored policies and human learning trajectories.
# 2.2 Evaluation Environments For reasoning-intensive evaluation environments, in-context reinforcement learning studies have explored semantic and visual representations of reinforcement tasks such as the adversarial bandit (Laskin et al.; Lee et al., 2023), dark room (Laskin et al.; Lee et al., 2023) and Partially Observable Process Gym (Morad et al., 2023; Lu et al., 2023a), a set of simple environments designed to benchmark memory in deep RL. However, these tasks involve closed-ended environments with limited action spaces and are often easily solved by current large language models, as they may already encode effective policies, e.g., the upper confidence bound (Garivier and Moulines, 2011). In contrast, we focus on open-ended, experience-based, reasoning-intensive tasks with token-level action spaces and moderate difficulty, where the optimal policy is not readily accessible or encoded in the model. Regarding the self-evaluation of LLMs, recent works have employed web-based (Yao et al., 2022), tool-assisted (Lu et al., 2023b), or static benchmarks, including math (Cobbe et al., 2021), code generation (Jiang et al., 2023; Luo et al., 2023), and general-purpose benchmarks (Chiang et al., 2023). However, these static evaluations are prone to saturation, and observed improvements may result from memorization or recall rather than from enhanced reasoning via learned policies. In this work, we propose competitive game environments as effective testbeds for evaluating the test-time learning ability of LLMs. These environments are dynamic, resistant to saturation, open-ended, reasoning-intensive, and policy-driven, making them well-suited for assessing models’ ability to learn and adapt through experience.
# 3 Test-time Learning # 3.1 Testbeds The optimal environment for evaluating the test-time learning ability of large language models should satisfy the following requirements: 1) Moderate Difficulty: The environment should not admit a readily accessible optimal policy, either due to the nature of the task or the current limitations of large language models. 2) Structured Regularity: Tasks should contain underlying patterns that can be uncovered and leveraged through interaction and reasoning to enhance performance. 3) Beyond Memorization: Success should depend not on recalling previous answers, but on identifying generalizable rules or strategies that drive improvement. These criteria highlight the importance of reasoning over purely knowledge-rich contexts. Classic reinforcement learning settings, such as the adversarial bandit (Laskin et al.), have been rendered less meaningful for test-time learning evaluations, as many models have already internalized algorithms like Upper Confidence Bound in their knowledge. To address this, we adopt three diverse environments to evaluate test-time learning: a mathematics benchmark, a single-agent semantic game, and a multi-agent semantic game. AIME 2025 (MAA, 2025) refers to the American Invitational Mathematics Examination 2025, used to identify candidates for the U.S. team in the International Mathematical Olympiad (IMO). We leverage this most recent mathematics benchmark to examine the test-time learning capabilities of large language models in solving high-level mathematical problems. Twenty Questions (Abdulhai et al., 2023; Zhou et al., 2024) is a dialogue-based multi-turn single-agent task in which a large language model attempts to identify a target word from a fixed set of 157 candidate words by asking up to twenty yes/no questions. The environment responds with "Yes", "No", or "Invalid" if the question is not a valid yes/no query.
To ensure consistent understanding of the questions, the environment is simulated using the same LLM as the questioning model. The 157 candidate words, adopted from prior work (Zhou et al., 2024), span diverse categories including animals, art, clothes, electronics, fruits, furniture, garden supplies, jewelry, kitchen tools, musical instruments, nature, office supplies, sports, tools, toys, vegetables, and vehicles. The candidate set remains fixed across the games, providing a controlled setting to evaluate whether the LLM can learn effective categorization and formulate increasingly informative dichotomous questions during test time. Performance is measured by NDCG@20 based on the rank of the correct guess. Who is Undercover (Xu et al., 2023) is a dialogue-based multi-turn multi-agent task. Each player is assigned a secret word: one player receives a distinct word as the undercover, while all others, the civilians, share the same word. In each round, players provide verbal clues related to their secret words. By analyzing both their own and others’ clues, players attempt to infer their roles. The objective for civilians is to identify the undercover, while the undercover aims to conceal their identity. Note that we use the neutral words "difference" and "normal" instead of "undercover" and "civilian" in task instructions. This is motivated by the observation that large language models often refuse to acknowledge being the "undercover" due to value misalignment, as further discussed in Appendix C. Performance is evaluated based on the win rate. # 3.2 Test-time Learning Setting It is important to note that our objective is not to design an elaborate framework for maximizing task completion rates. Rather, we aim to provide a lightweight and objective evaluation framework that assesses a model’s test-time learning, comparing its performance with and without prior experience, as well as against human-authored policies grounded in human reasoning.
To this end, we adopt a vanilla evaluation setup consisting of two settings: a fixed-number-of-experience setting (Laskin et al.) and an incremental-experience setting (Suzgun et al., 2025). # 3.2.1 Evaluation with Experience Table 1: Token Lengths of Context. We aim to qualitatively assess whether current large language models exhibit test-time learning capabilities and the extent to which they improve. To this end, we encode historical experience and compare model performance with and without it. We investigate efficient and objective methods to encode this historical experience. Table 1 reports the average context lengths for instruction, experience, and derived policy. To fully leverage past experience, the experience includes dialogue interactions, rewards, and the model’s self-reflections on interactions and rewards. The strategy is derived by the model itself based on all past experience. In pilot studies, we experiment with two approaches: incorporating the full history of experience directly, and a policy self-derived from the full history. We fix the number of experience rounds to five, leading to context lengths of approximately 5k and 12k tokens for Twenty Questions and Who is Undercover, respectively, while the derived policy contexts average 243 and 261 tokens. Although the first approach provides complete information, it incurs higher computational costs and underperforms compared to the second. Therefore, we adopt policy-based representations of past experience for further evaluation. This setup is illustrated in the left panel of Figure 2. To further isolate the influence of the model’s self-derived policy pipeline, we include a rule-based policy as a baseline for comparison with the experience-based policy, in which strategies are derived from both rules and accumulated experience. This comparison helps ensure that observed improvements can be attributed to the incorporation of experience.
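A minimal sketch of the two experience encodings compared in this pilot study, full interaction history versus a compact self-derived policy. The function name and prompt layout are hypothetical, not the paper's implementation:

```python
# Hypothetical sketch of the two context encodings: a full-history context
# (interaction, reward, reflection per round) vs. a short policy-based one.
def build_context(instruction, experiences, policy=None):
    """experiences: list of (interaction, reward, reflection) tuples."""
    if policy is not None:
        # Policy-based encoding: only the distilled strategy is appended,
        # keeping the context a few hundred tokens long.
        return instruction + "\n\nPolicy:\n" + policy
    # Full-history encoding: every past round is serialized verbatim,
    # which grows linearly with the number of experience rounds.
    blocks = []
    for i, (interaction, reward, reflection) in enumerate(experiences, 1):
        blocks.append(f"Round {i}:\n{interaction}\nReward: {reward}\n"
                      f"Reflection: {reflection}")
    return instruction + "\n\n" + "\n\n".join(blocks)
```

The length gap reported in Table 1 (roughly 5k-12k tokens of history versus ~250 tokens of policy) is why the policy-based encoding is both cheaper and, per the pilot study, more effective.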
# 3.2.2 Evaluation with Incremental Experience The previous setting evaluates test-time learning given a limited amount of prior experience. If a model demonstrates performance gains from such experience, it becomes essential to investigate whether these test-time improvements persist and accumulate as additional experience is acquired. This motivates an incremental evaluation setting that requires efficient management of past experience. To support dynamic policy updates with growing experience, we adopt the memory management pipeline (Suzgun et al., 2025). As illustrated in the right panel of Figure 2, the agent without experience performs k independent test rounds, while the agent with experience conducts the same k rounds with a continuously updated policy pool based on accumulating experience. To ensure robust evaluation, we sample each setting (with and without experience) three times and compute the cumulative average reward. Let $r_{\mathrm{base}}(t, i)$ denote the reward obtained by the agent without experience at test round $t$ in sample $i$, and $r_{\mathrm{exp}}(t, i)$ denote the corresponding reward for the agent with experience. The cumulative average reward for the agent with experience up to round $t$ is denoted by $R_{\mathrm{exp}}(t)$. The computation of $R_{\mathrm{exp}}(t)$ is provided below; $R_{\mathrm{base}}(t)$ is computed analogously.
Figure 2: Evaluation of test-time learning with a fixed number of experience rounds (left) and with incremental experience (right). $$ r_{\mathrm{exp}}(t) = \begin{cases} \dfrac{\sum_i r_{\mathrm{base}}(0, i) + \sum_i r_{\mathrm{exp}}(0, i)}{\left| r_{\mathrm{base}}(0, \cdot) \right| + \left| r_{\mathrm{exp}}(0, \cdot) \right|}, & t = 1 \\[2ex] \dfrac{\sum_i r_{\mathrm{exp}}(t, i)}{\left| r_{\mathrm{exp}}(t, \cdot) \right|}, & t > 1 \end{cases} \qquad R_{\mathrm{exp}}(t) = \frac{\sum_{1 \leq i \leq t} r_{\mathrm{exp}}(i)}{t} $$ # 4 Experiments In the experiments, we aim to answer the following questions: (Q1) Do current large language models exhibit the ability to learn at test time? (Q2) Can large language models achieve stable and consistent improvements when experience accumulates? (Q3) How do humans adapt and improve their performance through experience? (Q4) How do thinking models perform in test-time learning scenarios?
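The cumulative-average-reward bookkeeping defined in Section 3.2.2 can be sketched in a few lines (toy rewards; the function names and the dict-of-lists representation are illustrative, not the paper's code):

```python
# r_exp[t] / r_base[t] hold the rewards of all samples i at test round t.
# Round 0 is the shared initial round that both agents contribute to at
# t = 1, matching the case split in the formula above.
def per_round_mean(r_exp, r_base, t):
    """r_exp(t): mean reward of the experienced agent at round t."""
    if t == 1:  # pool the round-0 rewards of both agents
        pooled = r_base[0] + r_exp[0]
        return sum(pooled) / len(pooled)
    return sum(r_exp[t]) / len(r_exp[t])

def cumulative_avg(r_exp, r_base, t):
    """R_exp(t): average of the per-round means over rounds 1..t."""
    return sum(per_round_mean(r_exp, r_base, k) for k in range(1, t + 1)) / t
```

Anchoring both curves to the same pooled round-0 mean makes $R_{\mathrm{exp}}$ and $R_{\mathrm{base}}$ start from an identical value, so any later divergence reflects the accumulated experience rather than initial-round noise.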
In the cumulative setting, we extend the evaluation to $t = 50$ rounds. During each interaction, the model is instructed to first perform explicit reasoning before generating its final output. The final response (a question, reflection, or policy in Twenty Questions; a speech, vote, reflection, or policy in Who is Undercover) is enclosed within <answer></answer> tags to ensure clarity and facilitate objective evaluation of both reasoning quality and task performance. In the single-agent setting, the environment is simulated using the same model under evaluation to ensure alignment in question understanding and knowledge base. In the multi-agent setting, all other agents are instantiated with the same backbone LLM as the test agent to isolate test-time improvements from potential gains due to mere familiarity with another model’s behavior. For each evaluation setting, the order of test rounds is fixed to ensure consistency across trials. # 4.1 Experimental Setup We aim to evaluate whether current top-tier large language models have the ability to improve at test time. Specifically, we evaluate GPT-4o (Hurst et al., 2024), Claude 3.5 Sonnet (Anthropic, 2024) and DeepSeek-V3 (Liu et al., 2024). We set the temperature to 1 to support the dynamic testbeds. For overall performance evaluations, we set prior interactions $N = 5$ and test cases $M = 32$, which we find to yield stable results. # 4.2 Overall Test-time Learning Performance (Q1) We begin by investigating whether top-performing large language models exhibit measurable improvements at test time when provided with prior experience. Table 2 summarizes the overall performance across three environments under four evaluation settings: (1) without any policy, (2) with model-derived policy based solely on rules, (3) with model-derived policy based on both rules and test-time experience, and (4) with human-authored policy.
The inclusion of the human policy serves to assess the potential of the models.

Table 2: Evaluation of Test-time Learning Ability of LLMs. "w/o Policy" denotes the baseline setting where the model is provided only with task rules. "w/ Rule Policy" indicates that the model receives both the rules and a test-time policy based only on rules. "w/ Exp. Policy" refers to having both the rules and a test-time policy derived from the rules and five rounds of the model's experience, containing interactions, rewards and reflections. "w/ Human Policy" indicates that the model is given rules along with a human-authored policy based on human understanding of the task. The best results are shown in bold and the second best are underlined.

In the Twenty Questions setting, we observe consistent performance gains when models are equipped with self-derived policies based on prior experience. In contrast, rule-based policies result in significant performance drops across all models, likely due to a misalignment between human-designed heuristics and model reasoning patterns, as further discussed in Section 4.6. Experience-based policies, however, lead to clear improvements, with Claude achieving the highest gain from its own test-time experience. Interestingly, GPT-4o and DeepSeek-V3 both outperform their self-derived policies when provided with human-authored policies. This highlights a gap between the models’ current test-time learning capabilities and their full potential, suggesting that either the quantity of experience or the quality of derived policies remains suboptimal. These limitations are further examined in Section 4.3 and Section 4.6. Claude performs marginally worse with the human-authored policy, also indicating a possible misalignment between its internal reasoning and externally imposed guidance. In Who is Undercover, test-time learning yields more substantial improvements.
Claude again achieves the highest gain from the experience-based policy, reinforcing its ability to leverage self-acquired strategies. Unlike other settings, the rule-based policy ranks as the second-best for some models, highlighting a divergent pattern in this multi-agent context. Additionally, human-authored policies consistently lead to the highest performance across all models, further underscoring the latent potential of test-time learning when guided by effective strategies. It is important to note that direct comparisons across models in this environment are not meaningful, as all agents in the multi-agent setting are instantiated using the same LLM that is being evaluated. This design ensures an objective assessment of test-time learning by isolating gains attributable to experience and strategic adaptation, rather than confounding effects such as familiarity with another model’s behavior. Full instances of model-generated and human-authored policies are provided in Appendix B and analyzed in Section 4.6.

Finding 1: Policies derived from past experience at test time yield measurable improvements across models and tasks.

Finding 2: The superior performance under human-authored policies reveals the untapped potential for enhancing models’ test-time learning capabilities.

# 4.3 Cumulative Improvement (Q2)

The above results demonstrate that large language models possess the ability to improve at test time. We next examine whether this improvement is consistent as experience accumulates. To this end, we adopt the cumulative evaluation setting described in Section 3.2.2. Figure 3 presents cumulative rewards over 50 rounds in the Twenty Questions task, comparing model performance with and without test-time policies derived from past experience. Model performances vary: Claude successfully leverages cumulative experience, whereas other models struggle to maintain or improve performance as experience accumulates.
For Claude, the experience-enabled setting consistently outperforms the baseline, particularly within the first five rounds, indicating effective strategy accumulation. However, the performance gap narrows in later rounds, suggesting diminishing returns from additional experience. GPT-4o and DeepSeek-V3 show minimal gains from the accumulation of experience at test time. For GPT-4o, both curves overlap in the early rounds, with the experience-enabled setting beginning to slightly outperform the baseline around rounds 15–20. In contrast, DeepSeek-V3 shows a decline in performance after five rounds of accumulated experience, while the baseline remains stable. This suggests that its policy refinement process may introduce noise or compounding errors, limiting its ability to leverage experience effectively.

Finding 3: Results reveal substantial differences in the consistency and effectiveness of test-time learning across models as experience grows.

# 4.4 Human Study (Q3)

In the previous experiments, we demonstrated that certain large language models exhibit the ability to learn at test time through cumulative experience. To further understand the rate of improvement, we compare model learning speed with that of humans. We recruited eight human participants (undergraduate and PhD students) to perform the same Twenty Questions task, playing 20 rounds cumulatively. Their results are summarized in Table 4, and their cumulative rewards are plotted alongside those of the best-performing model, Claude. Participants are divided into two groups based on performance variance across rounds. The upper figure shows that all humans in this group achieve greater cumulative gains than Claude after 20 rounds, approaching near-optimal performance (represented by the black dotted line indicating the reward of perfect binary questioning). The lower figure includes participants with higher performance variability; nevertheless, their final cumulative rewards still exceed those of the LLM.
Finding 4: Current top-tier large language models exhibit slower test-time learning than humans in experience-based, reasoning-intensive tasks.

# 4.5 Performance of Thinking Models (Q4)

Table 3: Test-time Learning Performance of Thinking Models in the Who is Undercover environment.

In previous experiments, we evaluated large language models without an explicit thinking mode. In this section, we examine the performance of thinking models: o1 (Jaech et al., 2024) and DeepSeek-R1 (Guo et al., 2025), as reported in Table 3. As shown in the table, test-time learning improvements are not observed for either thinking model when provided with self-derived policies. For o1, performance decreases when incorporating a self-derived test-time policy but improves when guided by a human policy, suggesting potential limitations in its ability to generate effective strategies autonomously, while still being capable of leveraging prior experience. For DeepSeek-R1, performance declines under both self-derived and human-authored policy conditions, compared to its baseline with no prior experience. This aligns with the findings reported in its original paper, which notes that few-shot prompting consistently degrades R1’s performance. The authors explicitly recommend presenting tasks in a zero-shot format for optimal outcomes, suggesting that R1’s internal reasoning is optimized for direct problem descriptions rather than for accumulating and adapting to test-time experience.

Figure 3: Cumulative Test-Time Learning Performance on Twenty Questions.

Figure 4: Human Performance on Twenty Questions.

Finding 5: Test-time learning is not observed in thinking models, consistent with the findings reported in R1’s original paper that few-shot CoT prompting may degrade model performance.
# 4.6 Further Analyses

In the Twenty Questions environment, test-time improvements (w/ Exp. Policy vs. w/o Policy in Table 2) primarily stem from earlier identification of item categories. Test-time policies such as “Begin with high-level distinctions” and “Identify the category of the answer word within the first five questions” help the model avoid overly specific guesses early on. We also analyze the failure of the w/ Rule Policy setting in this environment, which we attribute to a misalignment between model behavior and human preference. Model-generated questions often include specific examples (e.g., “Is it a living thing (animal, plant)?” or “Is it a ball (like basketball, baseball, football) rather than other sports equipment (like bats or rackets)?”), whereas questions in human-authored policies are general and abstract (e.g., “Is it a living thing?”, “Does it use electricity?”, “Is it commonly found indoors?”). These examples lead the model to adopt the overly specific format throughout the questioning process, which contributes to the performance decline. For humans, we observed more rapid test-time learning, with noticeable improvement after just a single game. In the Who is Undercover environment, both DeepSeek and humans discover a key policy, "Deduce the opposing secret word". This is based on the observation that the undercover and normal players’ words are typically semantically related. Recognizing this pattern allows participants to refine their clues and infer identities more effectively.
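The benefit of general, abstract questions in Twenty Questions is essentially an information-gain argument: each good yes/no question should split the remaining candidate set as evenly as possible. A toy sketch of such a questioner; the candidate items and predicates below are invented for illustration, not taken from the paper:

```python
import math


def rounds_needed(num_candidates):
    """Questions an ideal halving strategy needs to isolate one item."""
    return math.ceil(math.log2(num_candidates))


def play(candidates, target, predicates):
    """Greedy information-gain questioner.

    Repeatedly pick the yes/no predicate that splits the remaining
    candidates most evenly, keep the half consistent with the (simulated)
    answer about the target, and count the questions asked.
    """
    asked = 0
    while len(candidates) > 1 and predicates:
        # Most even split: minimize |#yes - #no| over remaining candidates.
        best = min(predicates,
                   key=lambda p: abs(2 * sum(p(c) for c in candidates)
                                     - len(candidates)))
        predicates.remove(best)
        answer = best(target)
        candidates = [c for c in candidates if best(c) == answer]
        asked += 1
    return candidates, asked
```

With four candidates and an evenly splitting first predicate (e.g., "Is it a living thing?"), the questioner isolates the target in two questions, matching the ceil(log2 4) bound of perfect binary questioning.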
# 5 Conclusion

As evaluation designs of large language models may shape our trajectory toward artificial general intelligence, comprehensive and forward-looking assessment is essential. Existing benchmarks primarily assess static knowledge, while intelligence also entails the ability to rapidly learn from experience. To this end, we advocate for the evaluation of Test-time Learning, the capacity to improve performance on experience-based, reasoning-intensive tasks during test time. In this work, we propose semantic games as effective testbeds for evaluating test-time learning, due to their resistance to saturation and inherent demand for strategic reasoning. We introduce an objective evaluation framework that compares model performance under both limited and cumulative experience settings, and incorporates four forms of experience representation. To provide a comparative baseline, we recruit eight human participants to complete the same task. Results show that LLMs exhibit measurable test-time learning capabilities; however, their improvements are less stable under cumulative experience and progress more slowly than those observed in humans. These findings underscore the potential of LLMs as general-purpose learning machines, while also revealing a substantial intellectual gap between models and humans, irrespective of how well LLMs perform on static benchmarks.
# I. INTRODUCTION

Image segmentation is a foundational component of visual perception in intelligent transportation systems (ITS), enabling autonomous vehicles to interpret complex driving environments with precision and reliability [1]. By delineating road lanes, detecting obstacles, segmenting pedestrians, and recognizing traffic signs, segmentation empowers vehicles to navigate urban and rural settings safely and efficiently [2]. Historically, segmentation tasks have relied on convolutional neural networks (CNNs), such as DeepLab [3] and Mask R-CNN [4], and more recently on vision-specific transformers, such as Swin Transformer [5] and Segmenter [6], which have achieved remarkable performance on benchmark datasets like Cityscapes [7] and BDD100K [8]. However, the emergence of Large Language Models (LLMs) has introduced a paradigm shift, leveraging their advanced language understanding and reasoning capabilities to enhance image segmentation through multimodal learning [1], [2]. This survey explores the convergence of LLMs and image segmentation, with a particular focus on their transformative applications in ITS.

Image segmentation in ITS encompasses a range of tasks critical to autonomous driving and traffic management. Semantic segmentation assigns class labels to each pixel, enabling the identification of road surfaces, vehicles, and pedestrians [9]. Instance segmentation distinguishes individual objects within the same class, such as separating multiple pedestrians in a crowd [4]. Panoptic segmentation combines both approaches to provide a holistic scene understanding, crucial for complex urban environments [10]. These tasks have traditionally been driven by CNN-based architectures, which excel at capturing spatial hierarchies but often require extensive labeled datasets and struggle with open-vocabulary scenarios [11].
Vision transformers, such as the Vision Transformer (ViT) [12], have addressed some limitations by leveraging self-attention mechanisms to model long-range dependencies, improving performance on datasets like nuScenes [13] and Mapillary Vistas [14]. Yet, these models remain constrained by their reliance on predefined class labels and limited adaptability to dynamic or novel scenarios [1]. The integration of LLMs into image segmentation, often referred to as vision-language segmentation (VLSeg), represents a significant leap forward. LLMs, such as BERT [15], GPT-3 [16], and T5 [17], are renowned for their ability to understand and generate human-like text, enabling them to process natural language prompts for guiding segmentation tasks [1]. By combining LLMs with vision models like CLIP [18] or DINOv2 [19], VLSeg frameworks, such as Grounded-SAM [20] and SEEM [21], allow autonomous systems to segment objects based on free-form queries, such as “highlight the cyclist on the right” or “segment the traffic cone near the construction zone” [20], [21]. This flexibility is particularly valuable in ITS, where vehicles must adapt to diverse and unpredictable environments, including adverse weather, occlusions, or novel obstacles [2], [22]. In ITS, LLM-augmented VLSeg has transformative potential across several applications. For autonomous driving, it enables real-time scene parsing, dynamic obstacle detection, and predictive segmentation of crash scenarios, as demonstrated by frameworks like DriveLM [23] and InsightGPT [24]. For traffic monitoring, VLSeg supports smarter traffic flow analysis and anomaly detection in surveillance feeds, enhancing city-scale mobility solutions [25], [26]. Datasets like Talk2Car [27] and Road-Seg-VL [28] provide language-guided annotations, facilitating the development of models that align visual perception with human instructions [1].
Moreover, recent advancements in open-vocabulary segmentation, such as CLIPSeg [29] and OpenSeg [30], enable zero-shot segmentation of unseen objects, addressing the open-world challenges inherent in ITS [30]. Despite these advancements, integrating LLMs into segmentation for ITS faces several challenges. Real-time performance is a critical bottleneck, as large models like SAM [31] incur substantial computational costs, necessitating lightweight solutions like MobileSAM [32] or EdgeViT [33]. Reliability in safety-critical scenarios requires robustness against adversarial inputs and adverse conditions, as explored in Multi-Shield [34]. Dataset limitations, particularly the scarcity of large-scale multimodal datasets, hinder model training, though automated annotation pipelines like AutoSeg [35] offer promising solutions [2], [35]. This survey aims to provide a comprehensive analysis of these developments, challenges, and future directions, offering insights into how LLM-augmented VLSeg can reshape ITS to be safer, more adaptable, and intelligent [1], [2].

Fig. 1: Taxonomy of Image Segmentation with Large Language Models for Intelligent Transportation Systems

The remainder of this paper is organized as follows: Section 2 reviews LLMs and promptable segmentation techniques, Section 3 discusses their applications in ITS, Section 4 examines relevant datasets and benchmarks, Section 5 addresses key challenges, Section 6 explores future directions, and Section 7 concludes with a synthesis of findings and prospects for LLM-augmented VLSeg in ITS.

# II. BACKGROUND AND RELATED WORKS

# A. Image Segmentation Fundamentals

Image segmentation is a pivotal task in computer vision that involves partitioning an image into distinct segments or regions, each representing meaningful objects or areas.
In the context of intelligent transportation systems (ITS), image segmentation underpins the ability of autonomous vehicles to interpret complex driving environments by identifying road lanes, detecting obstacles, recognizing pedestrians, and understanding traffic signs [36]. By providing a structured representation of visual scenes, segmentation enables critical functionalities such as scene understanding, path planning, collision avoidance, and urban mobility management. The task is broadly categorized into three primary types, each serving distinct purposes in ITS applications [10], [37].

• Semantic Segmentation: This approach assigns a class label to each pixel in an image, enabling the identification of categories such as roads, vehicles, pedestrians, and traffic signs. Semantic segmentation is essential in ITS for delineating drivable areas, distinguishing road boundaries, and understanding the overall structure of a driving scene [37]. Models like DeepLab [3] and its successors, such as DeepLabv3+ [9], leverage atrous convolutions and spatial pyramid pooling to capture multi-scale contextual information, achieving high accuracy on urban scene datasets like Cityscapes [7]. Semantic segmentation is particularly valuable for tasks like road surface detection and traffic signal recognition, ensuring safe navigation in diverse environments.

• Instance Segmentation: Unlike semantic segmentation, instance segmentation differentiates individual objects within the same class, such as separating multiple pedestrians or vehicles in a crowded urban scene. This granularity is crucial for autonomous driving, where precise localization of individual entities is necessary for path planning, obstacle avoidance, and interaction with dynamic objects. Mask R-CNN [4], built upon Faster R-CNN [38], is a cornerstone model that combines object detection with pixel-wise segmentation, enabling instance-level precision in ITS applications [39].
Recent advancements, such as Mask2Former [40], further enhance instance segmentation by integrating transformer-based architectures, improving performance on datasets like BDD100K [8].

• Panoptic Segmentation: Panoptic segmentation unifies semantic and instance segmentation by assigning both class labels and instance IDs to every pixel, providing a comprehensive scene representation. This holistic approach is particularly valuable in complex ITS scenarios, where autonomous vehicles must navigate urban environments with diverse, interacting elements, such as pedestrians, vehicles, and infrastructure [10]. Models like OneFormer [41] leverage transformer architectures to achieve state-of-the-art panoptic segmentation, excelling on datasets like nuScenes [13] and Mapillary Vistas [14]. Panoptic segmentation supports advanced scene understanding, enabling vehicles to reason about both static and dynamic elements in real-time [37].

# B. Classical Approaches to Image Segmentation

The journey of image segmentation in ITS began with classical computer vision techniques, such as thresholding and region-growing methods, but the field was fundamentally transformed by the advent of deep learning and Convolutional Neural Networks (CNNs). These classical approaches serve as the bedrock upon which modern, more sophisticated models are built.

1) Historical Evolution of Segmentation Approaches: The evolution of segmentation methods for ITS applications can be traced through several distinct phases, each building upon the innovations of the previous era:

• Pre-Deep Learning Era (1980s-2012): Early approaches to segmentation relied on classical computer vision techniques. These included:
– Edge-Based Methods: Algorithms like Canny edge detection identified object boundaries based on intensity gradients.
– Region-Based Methods: Techniques such as region growing, watershed algorithms, and mean-shift clustering grouped pixels based on homogeneity criteria.
– Graph-Based Methods: Approaches like Normalized Cuts [42] and Graph Cuts formulated segmentation as a graph partitioning problem.
– Model-Based Methods: Active contours (snakes) and level sets evolved contours to fit object boundaries.

These classical methods were computationally efficient but struggled with the semantic understanding required for complex driving scenes. They typically relied on low-level features (color, texture, edges) and required careful parameter tuning for each specific scenario, making them brittle in the face of real-world variability.

• Early Deep Learning Era (2012-2015): The breakthrough came with the application of CNNs to segmentation tasks:
– Patch Classification: Early approaches treated segmentation as a per-pixel classification problem, where a CNN classified the central pixel of each image patch. While effective, this was computationally inefficient.
– Fully Convolutional Networks (FCN) [43]: The seminal work by Long et al. in 2015 replaced fully connected layers with convolutional layers, enabling end-to-end training for dense prediction. FCNs could process arbitrary-sized inputs and produce correspondingly-sized outputs, dramatically improving efficiency.
– Early Encoder-Decoder Architectures: Models like SegNet [44] and U-Net [45] introduced the encoder-decoder paradigm, where an encoder network downsamples the input to capture context, and a decoder upsamples to recover spatial details.

• Refinement Era (2016-2019): This period saw significant architectural innovations to address the limitations of early deep learning approaches:
– Multi-Scale Processing: DeepLab [3] introduced atrous (dilated) convolutions and Atrous Spatial Pyramid Pooling (ASPP) to capture multi-scale context without increasing computational cost.
– Attention Mechanisms: Models began incorporating spatial and channel attention to focus on the most informative regions and features.
– Instance-Level Reasoning: Mask R-CNN [4] extended Faster R-CNN [38] by adding a branch for predicting segmentation masks, enabling instance segmentation.
– Panoptic Segmentation: Kirillov et al. [10] introduced the concept of panoptic segmentation, unifying semantic and instance segmentation into a single task.

• Transformer Era (2020-2021): The introduction of transformers to computer vision marked another paradigm shift:
– Vision Transformer (ViT) [12]: By treating an image as a sequence of patches, ViT applied self-attention mechanisms to model global relationships, though it was initially designed for image classification rather than segmentation.
– Hierarchical Transformers: Models like Swin Transformer [5] addressed the limitations of ViT for dense prediction tasks by introducing a hierarchical structure with local attention windows.
– Transformer-Based Segmentation: Architectures like Segmenter [6], SegFormer [46], and Mask2Former [40] adapted transformers specifically for segmentation tasks, achieving state-of-the-art performance.

• Early Vision-Language Models (2019-2021): Before the current era of LLM-augmented segmentation, several approaches attempted to bridge vision and language for segmentation:
– Referring Expression Segmentation: Models like CMSA [47] and BRINet [48] focused on segmenting objects referred to by natural language expressions, typically using LSTMs or GRUs to encode text.
– Visual Grounding: Approaches like PhraseCut [49] and RefVOS [50] aimed to localize and segment objects based on natural language descriptions.
– Vision-Language Pre-training: Models like ViLBERT [51] and LXMERT [52] performed joint pre-training of vision and language representations, though not specifically for segmentation.

These early approaches laid important groundwork but were limited by their reliance on relatively simple language models and task-specific training.
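The atrous (dilated) convolutions highlighted in the refinement era above can be illustrated in one dimension. A minimal sketch, not DeepLab's actual implementation: spacing the kernel taps by a dilation rate enlarges the receptive field without adding parameters:

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution with a dilation (atrous) rate.

    With dilation d, kernel taps are spaced d samples apart, so a kernel
    of length k covers a receptive field of (k - 1) * d + 1 samples while
    keeping exactly k weights.
    """
    span = (len(kernel) - 1) * dilation + 1
    out = []
    for start in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[start + j * dilation]
                       for j in range(len(kernel))))
    return out
```

With a 3-tap difference kernel `[1, 0, -1]`, dilation 2 compares samples four positions apart instead of two, which is the same mechanism DeepLab uses in 2-D to probe context at multiple scales (ASPP simply runs several dilation rates in parallel).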
The first major breakthrough in deep learning-based segmentation was the Fully Convolutional Network (FCN) [43], which replaced the dense, fully-connected layers of classification networks (like AlexNet [53]) with convolutional layers, enabling end-to-end training for pixel-wise prediction. For ITS, this meant that models could now learn to segment entire driving scenes at once. Following this, architectures like SegNet [44] and U-Net [45] introduced the powerful encoder-decoder paradigm. The encoder, typically a pretrained classification network (e.g., VGG [54]), progressively downsamples the input to capture semantic context, while the decoder upsamples these features to reconstruct a full-resolution segmentation map. U-Net’s key innovation was the use of “skip connections,” which pass fine-grained feature details from the encoder directly to the decoder, proving crucial for accurately localizing small objects like traffic signs and pedestrians in ITS scenes. To handle the vast variation in object scales in driving environments (e.g., distant cars vs. nearby trucks), the DeepLab family of models [3], [9] introduced atrous (or dilated) convolutions and Atrous Spatial Pyramid Pooling (ASPP), allowing the network to probe features at multiple resolutions without increasing computational cost. These CNN-based models became the dominant approach and achieved impressive results on benchmarks like Cityscapes [7]. The next paradigm shift was the introduction of the Vision Transformer (ViT) [12]. Inspired by the success of transformers in natural language processing [55], ViTs treat an image as a sequence of patches and use self-attention mechanisms to model global relationships between them. This was a departure from the localized receptive fields of CNNs.
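The patch-sequence view of an image that ViTs rely on can be sketched in a few lines. A toy tokenizer only: real ViTs additionally apply a learned linear projection and add position embeddings, which are omitted here:

```python
def image_to_patches(image, patch):
    """Split an H x W image (a list of rows) into a sequence of flattened
    patch x patch patches, row-major, as in ViT tokenization.

    Assumes H and W are divisible by the patch size.
    """
    h, w = len(image), len(image[0])
    seq = []
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            seq.append([image[top + r][left + c]
                        for r in range(patch) for c in range(patch)])
    return seq
```

A 4x4 image with patch size 2 becomes a sequence of four 4-dimensional tokens, over which self-attention can then relate any patch to any other, regardless of spatial distance.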
For complex urban environments in ITS, self-attention provided a mechanism to model long-range dependencies, for example, understanding the relationship between a traffic light on one side of the road and the lane markings on the other. The Swin Transformer [5] made ViTs more efficient and effective for dense vision tasks by introducing a hierarchical structure and computing self-attention within shifted windows. Models like Segmenter [6] and SegFormer [46] further adapted the transformer architecture specifically for semantic segmentation, demonstrating state-of-the-art performance. These powerful CNN and transformer backbones remain integral components of the more advanced LLM-augmented systems discussed in this survey.

2) The CLIP Revolution and Its Impact on Segmentation: The introduction of Contrastive Language-Image Pre-training (CLIP) [18] by OpenAI in 2021 marked a pivotal moment in the evolution of vision-language models. CLIP’s key innovation was its training methodology: rather than training on a specific task with labeled data, it was trained on 400 million image-text pairs collected from the internet, learning to align images and their textual descriptions in a shared embedding space. This approach enabled zero-shot transfer to a wide range of vision tasks without task-specific fine-tuning. The impact of CLIP on segmentation was profound and multi-faceted:

• Open-Vocabulary Capability: Prior to CLIP, segmentation models were limited to a closed set of predefined classes. CLIP enabled models to segment arbitrary objects described in natural language, dramatically expanding the range of objects that could be identified in driving scenes.
• Bridging Vision and Language: CLIP provided a natural bridge between visual and linguistic understanding, enabling more intuitive interfaces for segmentation systems. Instead of being constrained to a fixed ontology, users could now query the system using natural language.
• Foundation for VLSeg: CLIP’s architecture and pretrained weights became the foundation for numerous VLSeg models. CLIPSeg [29], one of the earliest CLIP-based segmentation models, demonstrated that CLIP’s embeddings could be effectively adapted for dense prediction tasks with minimal additional training.
• Compositional Understanding: CLIP’s exposure to diverse image-text pairs enabled a degree of compositional understanding, allowing models to reason about object attributes (color, size, position) and relationships, a critical capability for complex driving scenes.

The progression from CLIP to modern VLSeg models followed several key developments:

1) Direct Adaptation: Early approaches like CLIPSeg [29] directly adapted CLIP’s embeddings for segmentation by adding a lightweight decoder.
2) Hybrid Approaches: Models like OpenSeg [30] combined CLIP’s open-vocabulary capabilities with traditional segmentation architectures, using CLIP to generate class embeddings that were then matched with pixel-level features.
3) Foundation Models: The Segment Anything Model (SAM) [31] introduced a new paradigm of promptable segmentation, trained on a massive dataset of 11 million images and 1.1 billion masks. While SAM itself used geometric prompts rather than language, it laid the groundwork for language-guided segmentation.
4) Integrated Approaches: Models like Grounded-SAM [20] and SEEM [21] integrated CLIP-based language understanding with SAM’s powerful segmentation capabilities, creating systems that could segment objects based on complex natural language descriptions.

This historical progression reveals how the field has evolved from simple, rule-based approaches to sophisticated, language-guided systems capable of understanding and segmenting complex driving scenes based on natural language instructions. The integration of LLMs represents the latest chapter in this evolution, further enhancing the semantic understanding and reasoning capabilities of segmentation models.
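At inference time, CLIP-style zero-shot transfer reduces to nearest-neighbor search in the shared embedding space: score each candidate label's text embedding against the image embedding and pick the best match. A minimal sketch in which tiny hand-made vectors stand in for real encoder outputs:

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def zero_shot_classify(image_emb, text_embs):
    """Return the label whose text embedding is most similar to the
    image embedding, the core of CLIP-style zero-shot transfer.

    text_embs: dict mapping label -> embedding vector.
    """
    return max(text_embs, key=lambda label: cosine(image_emb, text_embs[label]))
```

Because the label set is just a dictionary of text embeddings, adding a new class (e.g., a rare road obstacle) requires only encoding its description, with no retraining, which is exactly the open-vocabulary property described above.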
Despite their success, these classical approaches face significant limitations in ITS. They typically rely on a closed set of predefined class labels, making them unable to adapt to open-vocabulary scenarios where novel objects, such as temporary road barriers or debris, must be segmented [11]. Moreover, creating the large-scale, pixel-perfect labeled datasets required for training is incredibly costly and time-consuming for diverse ITS scenarios [56]. Real-time performance is another persistent challenge, as many of these models are computationally intensive [37]. The integration of Large Language Models (LLMs), as explored in subsequent sections, directly addresses these limitations by enabling context-aware, instruction-guided, and open-vocabulary segmentation, significantly enhancing the adaptability and intelligence of ITS applications [57], [58].

# C. Vision-Language Segmentation

Vision-language segmentation (VLSeg) enables the delineation of specific objects or regions within an image based on free-form natural language prompts, offering a significant advancement over traditional segmentation tasks that rely on predefined class labels. In ITS, VLSeg allows autonomous vehicles to respond to dynamic queries, such as “segment the broken stop sign” or “highlight the bus stop next to the corner,” enhancing adaptability in unpredictable driving scenarios [21], [57]. This modality fusion, combining visual and linguistic information, has gained traction with the development of large-scale vision-language models (VLMs) and segmentation foundation models, providing semantic richness, context-awareness, and flexibility critical for ITS applications [18], [59].
The VLSeg pipeline typically comprises two core modules: (1) a visual encoder that extracts dense spatial features from input images, often using transformer-based backbones like ViTs [12] or Swin Transformers [5], and (2) a language encoder that translates natural language prompts into feature embeddings, commonly leveraging LLMs such as CLIP’s text encoder [18] or BERT [15]. Multi-modal fusion modules, typically transformer-based, align visual and linguistic features to predict fine-grained segmentation masks [20]. Recent architectures, as illustrated in modern VLSeg frameworks, employ task-specific heads to generate binary or multi-class masks aligned with the provided language description, supporting diverse prompting strategies like text, points, or bounding boxes [21], [60].

Key advancements in VLSeg have been driven by foundation models like the Segment Anything Model (SAM) [31], which generates high-quality segmentation masks from minimal prompts, and its language-augmented variants, such as Grounded-SAM [20] and SEEM (Segment Everything Everywhere Model) [21]. Grounded-SAM integrates SAM with grounding techniques to support open-vocabulary queries, enabling zero-shot segmentation of ITS-specific objects like “traffic cones near the intersection” [20]. SEEM’s versatile prompting capabilities make it suitable for dynamic driving scenarios, allowing segmentation based on complex instructions [21]. Additionally, video-based VLSeg models like XMem [61] enable temporal consistency in object tracking, critical for monitoring moving objects like vehicles or cyclists in ITS [61]. Models like CLIPSeg [29] and OpenSeg [30] further enhance open-vocabulary segmentation, addressing long-tail classes such as rare traffic signs or unexpected obstacles [30], [62].
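The two-module pipeline described above can be sketched end to end with stand-in components. A toy illustration only: the one-dimensional "embeddings", the prompt lookup table, and the threshold fusion rule are all invented for clarity, whereas real systems use ViT/Swin vision backbones, CLIP/BERT text encoders, and learned transformer fusion:

```python
def vision_encoder(image):
    """Stand-in visual encoder: each pixel's 'feature' is just its
    intensity as a 1-d embedding, flattened row-major."""
    return [[float(v)] for row in image for v in row]


def language_encoder(prompt):
    """Stand-in language encoder: a hypothetical lookup from a prompt to
    a 1-d target embedding (a real encoder handles arbitrary text)."""
    return {"segment the bright region": [1.0]}[prompt]


def mask_decoder(pixel_feats, text_emb, threshold=0.5):
    """Fuse by dot-product similarity between each pixel feature and the
    prompt embedding, then threshold into a flat binary mask."""
    return [1 if sum(p * t for p, t in zip(feat, text_emb)) > threshold else 0
            for feat in pixel_feats]


def vlseg(image, prompt):
    """Full pipeline: encode image and prompt, fuse, decode a mask."""
    return mask_decoder(vision_encoder(image), language_encoder(prompt))
```

The structure, not the arithmetic, is the point: swapping in real encoders and a learned fusion module changes each function's internals but not the dataflow from (image, prompt) to mask.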
Despite these advancements, deploying VLSeg in safety-critical ITS applications presents challenges, including latency constraints for real-time processing, robustness against occlusions, and generalization to rare or unseen objects [57]. Recent work on lightweight models like MobileSAM [32] and EdgeViT [33] aims to address latency issues, while frameworks like DriveLM [23] enhance contextual reasoning for complex driving scenarios [23], [63]. These developments underscore the potential of VLSeg to transform ITS by enabling intelligent, adaptive, and context-aware scene understanding.

# D. Related Work and Contributions

While this survey provides a broad overview of VLSeg for ITS, it builds upon several recent, more focused surveys. Zhou et al. [1] provide a comprehensive overview of vision-language models in autonomous driving, while Cui et al. [58] focus specifically on multimodal large language models. Huang et al. [2] survey multi-modal sensor fusion approaches, and Dal’Col et al. [22] examine joint perception and prediction methods. These surveys highlight the growing importance of multimodal learning but do not focus specifically on the task of segmentation with the depth presented here. This survey, therefore, makes the following key contributions:

• We provide a taxonomy of VLSeg methods, categorizing approaches based on prompting mechanisms (e.g., text, point, box, or multi-modal prompts) and foundation model-based architectures, with a focus on their applicability to ITS tasks like lane detection and obstacle segmentation [1], [20], [21], [57].

• We review state-of-the-art models from 2023–2024, including SAM [31], Grounded-SAM [20], SEEM [21], CLIPSeg [29], OpenSeg [30], and video-based systems like XMem [61], highlighting their impact on ITS applications such as pedestrian segmentation, traffic sign recognition, and dynamic obstacle avoidance [23], [24], [58].
• We compare key datasets and evaluation metrics for VLSeg in driving scenes, including Cityscapes [7], BDD100K [8], nuScenes [13], KITTI [64], Talk2Car [27], and Road-Seg-VL [28], emphasizing their role in training and benchmarking VLSeg models for ITS [2], [56].

# III. ARCHITECTURAL DEEP DIVE

The performance and capabilities of a Vision-Language Segmentation (VLSeg) model are fundamentally determined by its architecture. While often presented as monolithic systems, these models are composed of distinct modules, each with its own design considerations. Figure 2 illustrates a generic pipeline, which consists of three core components: the vision encoder, the language encoder, and the mask decoder. This section provides a deep dive into each of these components.

Fig. 2: A high-level diagram of a generic Vision-Language Segmentation (VLSeg) pipeline. An image and a text prompt are processed by their respective encoders, fused in a multi-modal module, and then passed to a decoder to generate the final mask.

# A. The Vision Encoder Backbone

The vision encoder’s role is to extract rich, spatially-aware features from the input image. The choice of encoder represents a critical trade-off between feature resolution, receptive field size, and computational cost.

Convolutional Neural Networks (CNNs): For years, CNNs like ResNet [65] and its variants were the de-facto standard for vision backbones. Their inductive biases (locality and translation equivariance) are well-suited for image tasks. In VLSeg, they are still used, especially in hybrid architectures or when computational efficiency is paramount. However, their limited receptive field can be a drawback for understanding global scene context, which is often required by language prompts.

Vision Transformers (ViTs): The introduction of the Vision Transformer [12] marked a paradigm shift. By treating an image as a sequence of patches and applying self-attention, ViTs can model long-range dependencies across the entire image. This is highly advantageous for VLSeg, as a prompt like “segment the car farthest away” requires a global understanding of the scene. The original ViT architecture produces a single-resolution feature map, which can be suboptimal for dense prediction tasks like segmentation.

Hierarchical Transformers: To address the limitations of plain ViTs for dense tasks, hierarchical transformers like the Swin Transformer [5] were developed. Swin re-introduces a convolutional-like hierarchy, producing feature maps at multiple resolutions (similar to a CNN’s feature pyramid). It computes self-attention within local, non-overlapping windows that are shifted across layers, providing a balance between global context and computational efficiency. This multi-scale feature representation is crucial for segmenting objects of various sizes in ITS scenes, from large trucks to distant pedestrians. Many modern segmentation models, including the encoder in SAM [31], use large, ViT-style backbones designed for high-resolution input and powerful feature extraction.

1) Comparative Analysis of Vision Encoders for ITS Applications: The choice of vision encoder has significant implications for ITS applications, where factors like inference speed, memory usage, and performance under varying conditions are critical. Table I provides a detailed comparison of popular vision encoders used in VLSeg models, with metrics specifically relevant to ITS deployment scenarios.

TABLE I: Comparative Analysis of Vision Encoders for ITS Applications

Several key insights emerge from this comparison:

Efficiency-Performance Trade-off: While larger models like ViT-L/16 and Swin-B achieve the highest mIoU scores on Cityscapes, their inference times on edge GPUs (representative of automotive-grade hardware) make them impractical for real-time applications. Models like EfficientViT-B1 [66] and MobileViT-S offer significantly faster inference with acceptable performance degradation.
Robustness to Adverse Conditions: Larger models generally show better robustness to adverse conditions (e.g., rain, fog, low light), with performance degradation measured as the percentage decrease in mIoU when tested on the Cityscapes Foggy dataset compared to the standard Cityscapes test set. This robustness is critical for ITS applications that must function reliably in all weather conditions.

Hierarchical Design Advantage: Swin Transformer variants offer a favorable balance across metrics, with Swin-T providing performance comparable to ViT-B/16 but with significantly lower computational requirements. The hierarchical design of Swin is particularly well-suited for the multi-scale nature of driving scenes.

Memory Constraints: Memory usage is a critical constraint for edge deployment. Models exceeding 200MB may face challenges in integration with existing automotive systems, which typically have limited GPU memory. This highlights the importance of model compression techniques for deploying state-of-the-art VLSeg models in real-world ITS applications.

For ITS applications specifically, the ideal vision encoder balances three key factors:

1) Real-time Performance: Inference time under 33 ms (30 FPS) is generally considered necessary for safety-critical applications.

2) Robust Feature Extraction: The ability to extract discriminative features even under challenging conditions like partial occlusion, varying lighting, and adverse weather.

3) Multi-scale Understanding: Capability to simultaneously process features at different scales, from fine-grained details (lane markings, traffic signs) to larger structures (buildings, road layout) and distant objects.

Recent work by Shihab et al. [67] has shown promising results with pruned state-space models as an alternative to both CNNs and transformers, potentially offering better efficiency-performance trade-offs for resource-constrained ITS environments.
This represents an emerging direction that may reshape the landscape of vision encoders for on-vehicle deployment.

# B. The Language Encoder and Prompt Engineering

The language encoder is responsible for converting the free-form text prompt into a numerical representation (embedding) that can be fused with the visual features. The design of this component, and the way it is prompted, heavily influences the model’s flexibility.

• Text Encoders from V-L Models: Many successful VLSeg models, such as CLIPSeg [29], leverage the powerful text encoders from pre-trained vision-language models like CLIP [18]. These encoders are already trained on vast datasets of image-caption pairs, making their embeddings particularly well-suited for grounding language in visual concepts. They excel at open-vocabulary tasks.

• General-Purpose LLMs: Other approaches integrate more general-purpose language models, such as BERT [15] or T5 [17]. While not specifically pre-trained for vision-language alignment, these models often have a deeper understanding of syntax and relational language, which can be beneficial for interpreting complex, compositional prompts (e.g., “segment the second car to the left of the traffic light”).

• Prompt Engineering: The performance of a VLSeg model can be surprisingly sensitive to the phrasing of the text prompt. This has given rise to the sub-field of “prompt engineering.” Research has shown that providing more descriptive prompts often yields better results. For example, instead of “car”, using “the red sports car” helps the model better disambiguate objects. Furthermore, some models benefit from “prompt tuning” or “prompt learning,” where a small set of learnable embedding vectors is prepended to the text prompt. These vectors are optimized during training to steer the language encoder towards producing embeddings that are more effective for the downstream segmentation task, without needing to fine-tune the entire large language model [68], [69].

# C. The Mask Decoder

The final component is the mask decoder, which takes the fused vision and language features and generates the final pixel-level segmentation mask.

Simple Convolutional Decoders: Early or simpler VLSeg models often use a lightweight decoder composed of a few convolutional and upsampling layers. It takes the processed features and refines them into a full-resolution mask. While efficient, this approach may not be powerful enough to resolve very fine details or complex object boundaries.

Transformer-based Decoders: More recent and powerful models have adopted transformer-based decoders. For example, Mask2Former [40] introduced a transformer decoder that uses a set of learnable “queries” to probe the image features. Each query is responsible for representing an object instance or a semantic category, and through cross-attention with the image features, it gathers the necessary information to predict a corresponding mask. This query-based approach has proven to be highly effective and versatile. The decoder in SAM [31] builds on this, using a modified transformer decoder that efficiently processes prompt embeddings and image features to produce high-quality masks in real-time. This design is what allows SAM to be so fast and responsive to geometric prompts.

While a detailed, one-to-one comparison of computational performance is challenging due to variations in hardware and implementation, a general trade-off is clear. Larger, more powerful models like the full SAM [31] or SEEM [21], which use large Vision Transformer backbones (e.g., ViT-H), offer the highest performance at the cost of significant computational resources and latency. This makes them suitable for offline analysis or cloud-based assistance but challenging for on-vehicle deployment. In response, the development of lightweight models like MobileSAM [32] and specialized efficient architectures like EdgeViT [33] is critical.
These models use techniques like knowledge distillation and architectural pruning to drastically reduce model size and inference time, aiming for real-time performance on the resource-constrained hardware found in vehicles, albeit often with a trade-off in zero-shot generalization capability. # IV. A TAXONOMY OF LLM-AUGMENTED SEGMENTATION FOR ITS The rapid integration of Large Language Models (LLMs) into segmentation tasks has given rise to a diverse set of architectures and interaction paradigms. To structure this emerging field, we propose a taxonomy for LLM-augmented segmentation models relevant to ITS, as illustrated in Figure 1. We categorize these models based on two key dimensions: (1) the Prompting Interface, which defines how a user or system interacts with the model, and (2) the Core Architecture, which describes the underlying model design, particularly how vision and language information are fused. # A. Categorization by Prompting Interface The prompting interface determines the flexibility and control offered by the segmentation model. We identify a spectrum of prompt types: Text-based Prompts: This is the most common interface, where segmentation is guided by a free-form natural language query (e.g., ”segment the bus on the right”). Models like CLIPSeg [29] and OpenSeg [30] are pioneers in this area, leveraging the powerful text embeddings from CLIP [18] to perform zero-shot segmentation of objects described in text. This is crucial for open-vocabulary ITS scenarios where the system must identify novel or rare objects. Geometric Prompts (Points and Boxes): Foundation models like the Segment Anything Model (SAM) [31] excel at this type of interaction. By providing a simple point or a bounding box, a user can precisely indicate an object of interest, and SAM will generate a high-quality segmentation mask. 
This is highly effective for interactive annotation and for scenarios where an object is detected by another system (e.g., a simple object detector) and needs to be precisely segmented.

Fig. 3: Computational efficiency comparison of VLSeg models. The chart shows inference time and model size for different models, highlighting the trade-off between performance and computational requirements.

Multi-modal and Interactive Prompts: The most advanced models offer a combination of prompt types for maximum flexibility. SEEM (Segment Everything Everywhere Model) [21] is a prime example, accepting text, points, boxes, scribbles, and even other image masks as prompts. This allows for a rich, interactive dialogue between the user and the system, enabling complex instructions and iterative refinement. In ITS, this could translate to a system that can take an initial command (“segment all vehicles”) and then refine it with a point prompt (“...but only this one”).

# B. Categorization by Core Architecture

The architectural design dictates how the models process and integrate multimodal information.

• Vision-Language Pre-training (VLP) Based Models: These architectures, like CLIPSeg [29], are built directly on top of VLP models like CLIP. They leverage the pre-aligned vision and language embedding spaces. A vision encoder and a text encoder produce feature maps that are then combined through a fusion module (e.g., cross-attention) to generate a segmentation mask that corresponds to the text prompt. Their strength lies in zero-shot generalization.

• Promptable Foundation Models: This category is defined by SAM [31]. The architecture consists of a powerful image encoder (ViT-H), a flexible prompt encoder, and a lightweight mask decoder. Its key innovation is being pre-trained for the task of “promptable segmentation” on a massive dataset (SA-1B). While SAM itself has limited text understanding, its architecture is designed to be extended.
• Hybrid Detection-Segmentation Models: This emergent architecture combines an open-vocabulary object detector with a promptable segmentation model. Grounded-SAM [20] is the canonical example, which first uses a grounding detector (Grounding DINO) to find objects matching a text prompt and get their bounding boxes. These boxes are then fed as prompts to SAM to generate precise masks. This approach effectively marries the strong open-vocabulary capabilities of detectors with the high-quality segmentation of models like SAM, making it highly effective for ITS tasks requiring both detection and segmentation.

• Unified Segmentation Architectures: Models like OneFormer [41] and Mask2Former [40] aim to perform semantic, instance, and panoptic segmentation within a single, unified transformer-based framework. While not exclusively LLM-driven, they often incorporate text-based queries for task conditioning and represent a move towards holistic scene understanding, which is essential for ITS.

# C. Multi-Modal Fusion Strategies

A critical aspect of VLSeg model architecture is the mechanism used to fuse information from the vision and language modalities. The effectiveness of this fusion directly impacts the model’s ability to ground textual concepts in the visual domain. While several strategies exist, the choice represents a crucial trade-off between semantic richness, computational cost, and interpretive power.

Early Fusion (Concatenation): The most straightforward approach involves projecting visual and text embeddings to a common dimension and concatenating them. This combined vector is then processed by subsequent layers. While computationally cheap, this “early fusion” is often semantically weak. It forces the model to learn relationships from a monolithic block of data without a clear mechanism for the modalities to query each other, making it difficult to resolve ambiguous or complex prompts [70].
Cross-Attention Fusion: This has become the dominant paradigm, largely due to its success in the original Transformer [55]. Here, the text embedding acts as a ”query” that ”attends to” the spatial features of the image (the ”keys” and ”values”). This mechanism is highly intuitive for VLSeg: it allows the model to learn to ”look at” specific parts of the image that are most relevant to the words in the prompt. This explicit, query-based feature selection is what enables models like CLIPSeg [29] and BLIP-2 [71] to perform effective open-vocabulary segmentation. However, its effectiveness is highly dependent on the quality of the vision backbone, and its computational cost scales quadratically with the number of image patches, which can be a bottleneck for high-resolution imagery. Advanced and Hybrid Fusion: To find a better balance, more advanced techniques have been developed. Gated fusion introduces mechanisms that learn to control the flow of information from each modality, deciding how much visual or textual information to use at different processing stages. Bilinear pooling offers a much richer fusion by capturing every pairwise interaction between the visual and language features, but this is often too computationally expensive for real-time applications [72], [73]. Consequently, many state-of-the-art models use hybrid approaches. They might employ self-attention within each modality to create refined feature representations first, followed by multiple layers of cross-attention to achieve a deep, iterative alignment between vision and language before the final decoding step [74]. 1) Mathematical Formulation of Cross-Attention in VLSeg: To provide a deeper technical understanding, we present the mathematical formulation of cross-attention, which is the cornerstone of modern VLSeg fusion mechanisms. 
Given a set of image features $\mathbf{V} \in \mathbb{R}^{N \times d_v}$ (where $N$ is the number of image patches and $d_v$ is the feature dimension) and text features $\mathbf{T} \in \mathbb{R}^{M \times d_t}$ (where $M$ is the number of text tokens and $d_t$ is the text embedding dimension), the cross-attention operation proceeds as follows. First, the text and image features are projected to a common dimension $d$ using learned projection matrices:

$$\begin{aligned} \mathbf{Q} &= \mathbf{T}\mathbf{W}_Q \in \mathbb{R}^{M \times d} \\ \mathbf{K} &= \mathbf{V}\mathbf{W}_K \in \mathbb{R}^{N \times d} \\ \mathbf{V}' &= \mathbf{V}\mathbf{W}_V \in \mathbb{R}^{N \times d} \end{aligned}$$

where $\mathbf{W}_Q \in \mathbb{R}^{d_t \times d}$, $\mathbf{W}_K \in \mathbb{R}^{d_v \times d}$, and $\mathbf{W}_V \in \mathbb{R}^{d_v \times d}$ are learnable parameter matrices. The cross-attention operation then computes:

$$\mathrm{Attention}(\mathbf{Q}, \mathbf{K}, \mathbf{V}') = \mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d}}\right)\mathbf{V}' = \mathbf{A}\mathbf{V}' \in \mathbb{R}^{M \times d}$$

where $\mathbf{A} = \mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^T}{\sqrt{d}}\right) \in \mathbb{R}^{M \times N}$ is the attention matrix. Each element $A_{ij}$ represents how much the $i$-th text token attends to the $j$-th image patch.
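The single-head operation above translates directly into code. The sketch below uses plain Python lists and small illustrative dimensions for clarity; a real implementation would use batched tensor operations:

```python
import math
import random

random.seed(0)

def matmul(A, B):
    """(p x q) @ (q x r) -> (p x r) for lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def softmax_rows(A):
    """Row-wise softmax with the usual max-subtraction for stability."""
    out = []
    for row in A:
        m = max(row)
        exps = [math.exp(x - m) for x in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

def cross_attention(T, V, Wq, Wk, Wv):
    """Text features T (M x d_t) attend to image features V (N x d_v)."""
    Q = matmul(T, Wq)                    # M x d
    K = matmul(V, Wk)                    # N x d
    Vp = matmul(V, Wv)                   # N x d
    d = len(Q[0])
    scores = [[qk / math.sqrt(d) for qk in row]
              for row in matmul(Q, transpose(K))]   # M x N
    A = softmax_rows(scores)             # attention matrix
    return matmul(A, Vp), A              # (M x d), (M x N)

M, N, d_t, d_v, d = 3, 5, 4, 6, 8       # illustrative sizes
rand = lambda p, q: [[random.gauss(0, 1) for _ in range(q)] for _ in range(p)]
out, A = cross_attention(rand(M, d_t), rand(N, d_v),
                         rand(d_t, d), rand(d_v, d), rand(d_v, d))
```

The returned attention matrix has one row per text token, each row a distribution over the $N$ image patches, matching the $M \times N$ shape defined above.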
In practice, multi-head attention is typically used, where the computation is split across $h$ attention heads:

$$\mathrm{MultiHead}(\mathbf{Q}, \mathbf{K}, \mathbf{V}') = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\mathbf{W}_O$$

$$\text{where } \mathrm{head}_i = \mathrm{Attention}(\mathbf{Q}\mathbf{W}_Q^i, \mathbf{K}\mathbf{W}_K^i, \mathbf{V}'\mathbf{W}_V^i)$$

with $\mathbf{W}_Q^i \in \mathbb{R}^{d \times d_h}$, $\mathbf{W}_K^i \in \mathbb{R}^{d \times d_h}$, $\mathbf{W}_V^i \in \mathbb{R}^{d \times d_h}$, and $\mathbf{W}_O \in \mathbb{R}^{h d_h \times d}$, where $d_h = d/h$ is the dimension per head.

For VLSeg specifically, after this cross-attention operation, the resulting text features are enriched with visual information relevant to each word in the prompt. These features are then typically passed through additional layers (e.g., feed-forward networks) and ultimately to a mask decoder that produces the final segmentation mask. The computational complexity of this operation is $\mathcal{O}(MN)$, which becomes problematic for high-resolution images where $N$ can be very large. To address this, efficient variants have been proposed:

• Linear Attention: Approximates the softmax using kernel methods to reduce complexity to $\mathcal{O}(M + N)$.

• Window-based Attention: Restricts attention to local windows, similar to the approach in Swin Transformer.

• Low-Rank Approximation: Decomposes the attention matrix into lower-rank components.
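As an illustration of the first variant, kernel-based linear attention avoids materializing the $M \times N$ attention matrix by accumulating a small summary over the image tokens. This is a generic sketch assuming the common $\varphi(x) = \mathrm{elu}(x) + 1$ feature map, not the implementation of any particular paper:

```python
import math
import random

random.seed(1)

def phi(x):
    """elu(x) + 1: a strictly positive feature map used by kernel attention."""
    return [v + 1 if v > 0 else math.exp(v) for v in x]

def linear_attention(Q, K, V):
    """O(M + N) approximation of softmax cross-attention.
    Q: M x d text queries, K: N x d image keys, V: N x dv image values."""
    d, dv = len(Q[0]), len(V[0])
    # One pass over the N image tokens: accumulate a d x dv summary S
    # and a d-dim normalizer z -- no M x N score matrix is ever formed.
    S = [[0.0] * dv for _ in range(d)]
    z = [0.0] * d
    for k_row, v_row in zip(K, V):
        fk = phi(k_row)
        for i in range(d):
            z[i] += fk[i]
            for j in range(dv):
                S[i][j] += fk[i] * v_row[j]
    # One pass over the M text tokens.
    out = []
    for q_row in Q:
        fq = phi(q_row)
        denom = sum(f * zi for f, zi in zip(fq, z))
        out.append([sum(fq[i] * S[i][j] for i in range(d)) / denom
                    for j in range(dv)])
    return out

rand = lambda p, q: [[random.gauss(0, 1) for _ in range(q)] for _ in range(p)]
M, N, d, dv = 3, 50, 4, 6
out = linear_attention(rand(M, d), rand(N, d), rand(N, dv))
```

The two loops each visit one modality once, giving the $\mathcal{O}(M + N)$ cost quoted above, at the price of only approximating the softmax weighting.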
These optimizations are particularly relevant for ITS applications, where real-time processing of high-resolution imagery is often required. The prevalence of cross-attention highlights a key insight: effective vision-language fusion is less about simply merging data and more about enabling a directed search, where language guides visual feature extraction. By organizing the landscape with this taxonomy, we can better understand the trade-offs and specific capabilities of different approaches, guiding practitioners in selecting the appropriate model for a given ITS application.

# V. A COMPARATIVE ANALYSIS OF STATE-OF-THE-ART MODELS

To provide a clear overview of the current landscape, this section presents a comparative analysis of state-of-the-art VLSeg models relevant to ITS. The models are evaluated based on their architecture, prompting capabilities, and key innovations. A summary is provided in Table II.

This analysis highlights the rapid diversification of VLSeg models. Early models like CLIPSeg [29] focused purely on text-prompted zero-shot segmentation. The arrival of SAM [31] created a paradigm shift towards promptable foundation models, which excel at generating high-quality masks from geometric cues. The most powerful recent approaches, such as Grounded-SAM [20], represent a synthesis, combining an open-vocabulary detector with a segmentation foundation model to get the best of both worlds.

TABLE II: Comparative Analysis of State-of-the-Art VLSeg Models for ITS

A key architectural trade-off is illustrated by comparing hybrid models like Grounded-SAM with unified models like SEEM. Grounded-SAM, by chaining a specialized open-vocabulary detector (Grounding DINO) with a powerful segmenter (SAM), excels at tasks where the primary challenge is to first *find* a specific object in a cluttered scene based on a descriptive prompt. Its failure modes often stem from the detector failing; if the object isn’t found, it can’t be segmented.
In contrast, SEEM’s unified architecture is more flexible, handling a wider array of prompts (scribbles, points) and potentially performing better at segmenting amorphous regions (e.g., “segment the puddle”) that lack clear object boundaries for a detector. Its weakness may lie in handling highly complex compositional prompts, where the chained reasoning of a detector-segmenter might be more robust. The choice between these approaches depends on the specific ITS task: for finding and segmenting known categories of objects with high precision, the hybrid approach is strong; for interactive annotation and flexible human-AI collaboration, the unified model offers advantages.

Models like SEEM [21] push the boundaries of interactivity, unifying different prompt types into a single cohesive model. For the specific challenges of ITS, video-based models like XMem [61] are critical for maintaining temporal consistency when tracking dynamic objects. Meanwhile, models like LLaVA-1.5 [75] and DriveLM [23] showcase the future direction: moving beyond simple segmentation towards full-fledged, language-driven reasoning systems that can interpret a scene, predict intent, and inform driving decisions. The development of efficient variants like MobileSAM [32] is a crucial parallel track, ensuring that these powerful capabilities can eventually be deployed on real-world automotive hardware.

# A. Quantitative Performance Benchmarking

While qualitative comparisons are useful for understanding architectural innovations, quantitative benchmarks are essential for evaluating practical performance. Table III presents a summary of reported performance metrics for several key VLSeg models on standard ITS-relevant datasets. It is important to note that direct comparisons can be challenging due to variations in experimental setups, such as the specific vocabulary used for open-set evaluation or whether the model was fine-tuned on the target dataset.
The results in Table III reveal several key trends. First, models specifically designed and optimized for a single task and dataset, like OneFormer on Cityscapes panoptic segmentation, still often outperform more general, open-vocabulary models in their specific domain. This highlights a critical trade-off that can be termed the “cost of generalization.” The high Panoptic Quality (PQ) of OneFormer (68.0) compared to the scores of models designed for open-ended tasks demonstrates that there is currently a performance penalty for the flexibility that VLSeg provides. This is particularly true for safety-critical sub-tasks like robust sidewalk detection, where specialized ensemble models have been shown to surpass the capabilities of more general LLM-based approaches [77]. While models like OpenSeg show promising open-vocabulary mIoU on complex urban scenes, they do not yet match the performance of closed-set, specialized systems. This gap suggests that for safety-critical ITS applications requiring the highest possible accuracy on a known set of classes (e.g., standard traffic signs, lane markings), specialized models remain superior. However, for handling novelty and improving human-AI interaction, the “cost” of using a more general VLSeg model is justifiable.

TABLE III: Quantitative Performance of VLSeg Models on ITS-Relevant Benchmarks

Second, different models are evaluated with different metrics (mIoU, Panoptic Quality, Average Precision) tailored to their primary task (semantic, panoptic, or instance segmentation), making direct comparison difficult. Nonetheless, models like OpenSeg show promising open-vocabulary mIoU on complex urban scenes. The high performance of LISA on referring segmentation underscores the power of these models when a specific object is clearly described, a common use-case in ITS.
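For context on the Panoptic Quality figure quoted above, PQ factorizes into segmentation quality (SQ) and recognition quality (RQ). The sketch below follows the standard definition, where matches are predicted/ground-truth segment pairs with IoU above 0.5:

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """PQ = SQ * RQ, where SQ is the mean IoU over matched (TP) segment
    pairs and RQ = TP / (TP + 0.5*FP + 0.5*FN).
    matched_ious: IoU of each matched pair (each must exceed 0.5)."""
    tp = len(matched_ious)
    if tp == 0:
        return 0.0
    sq = sum(matched_ious) / tp
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)
    return sq * rq

# A toy scene: 3 matched segments, 1 false positive, 1 false negative.
pq = panoptic_quality([0.9, 0.8, 0.7], num_fp=1, num_fn=1)
# SQ = 0.8 and RQ = 3/4 = 0.75, so PQ = 0.6
```

Because unmatched predictions and misses each cost half a count in RQ, a flexible open-vocabulary model that hallucinates or misses rare classes is penalized even when its matched masks are accurate, which is one mechanism behind the "cost of generalization" discussed above.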
1) Extended Benchmarking on ITS-Specific Datasets: To provide a more comprehensive evaluation specifically for ITS applications, Table IV presents performance metrics across a wider range of ITS-specific datasets. This extended benchmarking offers insights into how VLSeg models perform across diverse driving scenarios, from urban environments to highways, and under varying conditions. Several observations can be made from this extended benchmarking:

• Dataset Variability: Performance varies significantly across datasets, with models generally performing best on Cityscapes and worst on Waymo Open. This suggests that current VLSeg models may be biased toward European urban driving scenes (predominant in Cityscapes) and struggle more with the diverse scenarios in the Waymo dataset.

• Consistent Ranking: The relative ranking of models remains fairly consistent across datasets, with LISA and SEEM consistently outperforming other open-vocabulary models. This suggests that architectural advantages translate across different driving environments.

• Performance Gap: The gap between the best open-vocabulary model (LISA) and the supervised baseline (OneFormer) remains substantial (approximately 20 percentage points) across all datasets. This highlights the significant room for improvement in zero-shot and open-vocabulary segmentation for ITS.

• Cross-Dataset Generalization: Models show varying degrees of performance degradation when evaluated on datasets different from their primary training data. For instance, models trained primarily on COCO (like SEEM) show a more significant drop when evaluated on nuScenes or Waymo, which feature different camera perspectives and environmental conditions.

2) Standardizing Evaluation for VLSeg in ITS: The diversity of datasets, tasks, and metrics used in the literature makes it challenging to directly compare VLSeg models for ITS applications.
To address this, we propose a standardized evaluation protocol specifically for ITS-oriented VLSeg models:

1) Multi-Dataset Evaluation: Models should be evaluated on at least three ITS-specific datasets (e.g., Cityscapes, BDD100K, and either nuScenes or Waymo Open) to ensure robustness across different driving environments.

2) ITS-Specific Vocabulary: Evaluation should use a standardized set of ITS-specific prompts, including both common categories (e.g., “car,” “pedestrian”) and more complex, compositional queries (e.g., “red car turning left,” “pedestrian crossing between parked vehicles”).

3) Metrics Beyond mIoU: While mIoU is valuable, additional metrics should be reported:

• Boundary Quality (BQ): To measure the precision of object boundaries, critical for accurate distance estimation.

• Small Object IoU: Specifically for objects smaller than $32 \times 32$ pixels, which are common in driving scenes (distant pedestrians, traffic signs).

• Temporal Consistency: For video sequences, measuring the stability of segmentation across frames.

• Inference Latency: Reported on standardized hardware representative of automotive-grade processors.

4) Robustness Evaluation: Performance under corruptions (e.g., motion blur, adverse weather) should be systematically evaluated using standardized corruption sets like Cityscapes-C or BDD100K-C.

TABLE IV: Extended Benchmarking of VLSeg Models on ITS-Specific Datasets. \*OneFormer is a supervised model trained specifically for these datasets and is included as a reference upper bound.

TABLE V: Robustness Testing of VLSeg Models Under Adverse Conditions (mIoU $\%$). \*ClearVision [78] is specifically designed for adverse weather conditions using CycleGAN and SigLIP-2.

Adopting such a standardized protocol would facilitate more meaningful comparisons and accelerate progress in developing VLSeg models specifically optimized for ITS applications.
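The mask-level metrics in point 3 of the protocol can be computed directly from binary masks. The sketch below is a minimal illustration of per-instance IoU with the proposed 32 × 32 small-object filter; the mask representation and helper names are our own, not from any evaluation toolkit:

```python
def iou(pred, gt):
    """IoU of two binary masks given as lists of 0/1 rows."""
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    union = sum(p | g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    return inter / union if union else 0.0

def small_object_iou(pred_masks, gt_masks, max_area=32 * 32):
    """Mean IoU restricted to ground-truth objects smaller than max_area
    pixels, mirroring the small-object metric proposed above."""
    scores = [iou(p, g) for p, g in zip(pred_masks, gt_masks)
              if sum(map(sum, g)) < max_area]
    return sum(scores) / len(scores) if scores else float("nan")

# Two toy 4x4 instances: one matches exactly, one overlaps partially.
gt1 =   [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
pred1 = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
gt2 =   [[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0]]
pred2 = [[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]]
score = small_object_iou([pred1, pred2], [gt1, gt2])
# IoUs are 1.0 and 2/4 = 0.5, so the mean is 0.75
```

The same filtering pattern extends to the other proposed metrics: boundary quality swaps the full-mask IoU for one computed on a narrow band around each contour, and temporal consistency averages IoU between the masks of consecutive frames.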
3) Empirical Robustness Testing: Beyond standard benchmarks, the robustness of VLSeg models under adverse conditions is particularly critical for ITS applications. Table V presents results from empirical robustness testing across various challenging conditions. This robustness testing reveals several critical insights:
• Weather Vulnerability: All models show significant performance degradation under adverse weather conditions, with snow causing the most severe drops (25–37%). This highlights a critical vulnerability for real-world deployment, where systems must function reliably in all weather conditions.
• Night-time Performance: Night-time scenes present a major challenge, with performance drops of 21–32%. This is particularly concerning given that many fatal accidents occur during night-time driving.
• Adversarial Vulnerability: When tested with adversarial text prompts (e.g., deliberately ambiguous or misleading instructions), all models show dramatic performance drops, with simpler models like CLIPSeg suffering the most (55% reduction). This reveals a potential security concern for language-guided systems.
• Specialized Solutions: Purpose-built models like ClearVision [78], which uses CycleGAN for domain adaptation and SigLIP-2 for robust feature extraction, show significantly better resilience to adverse weather conditions. However, even these specialized models remain vulnerable to adversarial text prompts.
• Relative Robustness: More sophisticated models like LISA and SEEM demonstrate better robustness across all conditions, suggesting that architectural advances contribute not only to clean-condition performance but also to resilience.
Recent work by Shihab et al. [79] has demonstrated that model robustness can be significantly improved through specialized training regimes focused on temporal consistency and adverse condition simulation.
Their HybridMamba architecture showed particular promise for maintaining performance under challenging lighting and weather conditions in traffic surveillance footage. These findings underscore the importance of comprehensive robustness testing beyond standard benchmarks, especially for safety-critical ITS applications. They also highlight the need for specialized techniques like domain adaptation, adversarial training, and robust prompt engineering to build VLSeg systems that can be reliably deployed in real-world driving scenarios.
# VI. ADVANCED AND EMERGING TOPICS
While 2D image segmentation forms the foundation of VLSeg, the state of the art is rapidly moving into more complex domains. This section explores several advanced and emerging topics that are critical for the next generation of intelligent transportation systems.
# A. 3D Vision-Language Segmentation
Autonomous vehicles do not perceive the world in 2D. They rely heavily on 3D sensors like LiDAR to build a rich point cloud representation of their environment. Consequently, extending VLSeg from 2D images to 3D point clouds is a major and active area of research. 3D VLSeg presents unique challenges. Point clouds are sparse, unordered, and unstructured, making them fundamentally different from the dense, grid-like structure of images. Early work in 3D segmentation focused on adapting CNN-like architectures to operate on voxels or directly on points [80], [81]. More recently, the focus has shifted to grounding language in these 3D spaces. Models like LidarCLIP [82] learn to align text descriptions with entire 3D point clouds. Building on this, 3D VLSeg models aim to segment specific objects in the point cloud based on a text prompt. This is often achieved by projecting 2D image features (from multiple camera views) onto the 3D point cloud, creating a text-aware 3D representation.
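The 2D-to-3D feature lifting just described can be sketched as follows, assuming a simple pinhole camera model with a known intrinsics matrix K. The function names are our own illustration; a real system must additionally handle camera extrinsics, multiple views, and interpolation.

```python
import numpy as np

def project_points(points_xyz: np.ndarray, K: np.ndarray):
    """Project 3D points (camera frame, z forward) to pixel coordinates
    with pinhole intrinsics K; returns pixels and a front-of-camera mask."""
    z = points_xyz[:, 2]
    valid = z > 0.1                      # discard points behind the camera
    uvw = (K @ points_xyz.T).T           # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, valid

def lift_features(points_xyz: np.ndarray, feat_map: np.ndarray, K: np.ndarray):
    """Assign each 3D point the 2D feature at its projected pixel
    (nearest-neighbour sampling); out-of-view points get zeros."""
    H, W, C = feat_map.shape
    uv, valid = project_points(points_xyz, K)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = valid & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out = np.zeros((len(points_xyz), C), dtype=feat_map.dtype)
    out[inside] = feat_map[v[inside], u[inside]]
    return out

# Example: a point 1 m straight ahead lands on the principal point (64, 64);
# a point behind the camera receives a zero feature vector.
K = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 64.0], [0.0, 0.0, 1.0]])
features = np.random.rand(128, 128, 16)          # per-pixel image features
points = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
point_feats = lift_features(points, features, K)
```

Repeating this per camera view and averaging the gathered vectors yields the per-point, text-aware representation that language queries are then matched against.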
The system can then respond to queries like "segment the point cloud of the truck in front of us" or "highlight the curb on the right." This allows for a much more intuitive and powerful way to interact with and understand 3D sensor data, which is essential for tasks like 3D object detection, motion prediction, and path planning [83], [84].
# B. Video and Temporal Consistency
Driving is an inherently dynamic process. Therefore, segmenting objects consistently across video frames is just as important as segmenting a single image. Video Vision-Language Segmentation (V-VLSeg) aims to solve this. The primary challenge is maintaining temporal consistency; the segmentation mask for a specific object (e.g., a pedestrian) should not flicker or disappear between frames, even during partial occlusion. Several approaches are being explored. One common method is to use optical flow to propagate masks from one frame to the next. However, this can be error-prone, especially with fast-moving objects or camera motion. A more robust approach, exemplified by models like XMem [61], is to use a memory-based architecture. In this paradigm, the model maintains a "memory" of past frames and their segmentation masks. When processing a new frame, it uses attention mechanisms to query this memory, allowing it to re-identify and maintain a consistent segmentation of objects over long video sequences. Language can be used to initialize the tracking (e.g., "start tracking the blue car") and to re-identify objects if the tracking is lost ("where is the blue car now?"). As video foundation models become more powerful, we expect to see more end-to-end V-VLSeg models that can reason about actions and events over time [85], [86].
# C. Federated and Collaborative Learning
Training powerful segmentation models requires vast amounts of diverse data, which raises significant privacy concerns, especially when the data is collected from personal vehicles.
Federated Learning (FL) is a machine learning paradigm that addresses this issue. Instead of pooling raw data in a central server, the central model is sent to individual vehicles (or "clients"). Each client updates the model locally using its own private data, and only the model updates (gradients or weights) are sent back to the server to be aggregated. This allows a global model to learn from the collective data of the entire fleet without any raw driving data ever leaving the vehicle [87], [88]. Applying FL to VLSeg in ITS is an active research area, focusing on challenges like communication efficiency and handling the non-IID (non-independently and identically distributed) nature of data from different vehicles [89]. A related concept is Collaborative (or Collective) Perception. In this scenario, vehicles and infrastructure (e.g., smart traffic lights) communicate with each other, sharing high-level perception information, such as segmentation masks or object detections. For example, a vehicle whose view is occluded by a large truck could receive segmentation data from another vehicle at a better vantage point, allowing it to "see" the pedestrian that is about to cross the street. This creates a more robust and complete understanding of the driving scene than any single agent could achieve alone. Research in this area focuses on what information to share, how to fuse it effectively, and how to ensure the communication is secure and reliable [90], [91]. Language can act as a powerful and efficient communication medium in these systems, where one agent could send a compressed, semantic message like "pedestrian crossing from your right" to another.
# VII. APPLICATIONS TO INTELLIGENT TRANSPORTATION SYSTEMS
The integration of LLM-augmented segmentation in Intelligent Transportation Systems (ITS) has enabled significant advancements in autonomous driving, traffic management, and urban mobility. This section explores key applications and their impact on ITS.
# A. Autonomous Driving and Scene Understanding
The most prominent application of VLSeg is in enhancing the perception systems of autonomous vehicles. Accurate, real-time scene understanding is the bedrock of safe navigation. LLM-augmented systems allow for a level of semantic richness that was previously unattainable. For instance, a system can be prompted to "segment the road surface, but exclude any wet patches or oil slicks," which is a complex instruction that goes beyond simple class labels. This capability is critical for path planning, especially under adverse conditions. Furthermore, models like DriveLM [23] and InsightGPT [24] demonstrate how LLMs can be used not just for segmentation but for integrated reasoning, allowing a vehicle to connect its perception to its planning module (e.g., "A pedestrian is near the crosswalk, I should yield"). This extends to dynamic obstacle detection, where VLSeg can be used to track vulnerable road users (e.g., cyclists, pedestrians) with high precision, even when they are partially occluded, by using contextual prompts [1], [2].
# B. Traffic Management and Monitoring
Beyond individual vehicles, VLSeg offers powerful tools for city-scale traffic management. Smart city initiatives rely on networks of cameras to monitor traffic flow. LLM-augmented segmentation can significantly improve the analysis of this data. For example, traffic operators can issue natural language queries to the system, such as "show me all the trucks that are blocking the intersection at 5th and Main" or "count the number of vehicles turning left at this junction." This allows for much more flexible and responsive traffic monitoring than traditional systems that can only detect a predefined set of vehicle classes [25].
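A query such as "count the number of vehicles turning left at this junction" ultimately reduces to filtering structured perception outputs. The final counting step can be sketched as below; the Detection record and attribute names are purely illustrative, not the schema of any deployed system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # class predicted by the segmentation model
    pixel_area: int     # size of the instance mask
    heading: str        # e.g. inferred by a downstream tracking module

def count_matching(detections, label, heading=None, min_area=0):
    """Count segmented instances matching a structured query."""
    return sum(
        1 for d in detections
        if d.label == label
        and (heading is None or d.heading == heading)
        and d.pixel_area >= min_area
    )

# Toy scene produced by the perception stack.
scene = [
    Detection("truck", 5200, "straight"),
    Detection("car", 1800, "left"),
    Detection("car", 2100, "left"),
    Detection("pedestrian", 400, "crossing"),
]
print(count_matching(scene, "car", heading="left"))  # -> 2
```

The flexibility of the language interface comes from the LLM translating free-form operator queries into such structured filters, rather than from the counting logic itself.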
These techniques can be used for anomaly detection, such as identifying a stalled vehicle, the formation of an unusual queue, or performing fine-grained temporal localization of crash events in surveillance footage, enabling faster incident response [79], [92]. Frameworks like TrafficGPT [26] are being explored to create conversational interfaces for traffic management, making system operation more intuitive.
# C. Infrastructure Inspection and Maintenance
A crucial but often overlooked aspect of ITS is the maintenance of the transportation infrastructure itself. VLSeg, guided by LLMs, presents a highly efficient solution for automating the inspection of roads, bridges, and signage. Municipal vehicles equipped with cameras can continuously scan their surroundings. An LLM-based system can then be prompted to find and segment specific types of infrastructure defects. For example, a query like "segment all potholes deeper than two inches" or "highlight any traffic signs with visible graffiti or fading" can automate a process that is currently manual, labor-intensive, and slow. This extends to critical pedestrian infrastructure, where robust and precise sidewalk detection is essential for both accessibility and curb management [93]. This is an area of growing research, with datasets emerging that focus specifically on road surface defects and infrastructure anomalies [94], [95]. Automating this process allows for proactive maintenance, improving safety and reducing long-term repair costs.
# D. Enhanced Urban Mobility and User Experience
VLSeg can also improve the experience of users within the transportation system, including public transit riders and pedestrians. For public transportation, segmentation can be used to monitor passenger flow at bus stops or train stations, or to ensure that dedicated bus lanes are clear of obstructions.
For pedestrians, especially those with visual impairments, mobile applications could use VLSeg to provide real-time auditory feedback about the environment, such as "there is a crosswalk 10 feet ahead to your left" or "warning: an e-scooter is approaching on the sidewalk." This creates a more accessible and safer urban environment for everyone. Systems could also enhance navigation services by providing more descriptive guidance, for instance, "turn right after the large red building," using segmentation to identify the landmark described [25], [26].
# VIII. END-TO-END SYSTEMS AND INTEGRATED REASONING
While VLSeg is a powerful perception tool, its ultimate value in ITS is realized when it is integrated into end-to-end systems that perform complex reasoning and decision-making. The trend is moving away from modular pipelines (perceive, predict, plan) towards more holistic architectures where LLMs serve as a central reasoning engine, directly influencing vehicle behavior based on a rich, language-informed understanding of the world. Figure 4 conceptualizes this integrated approach, where perception, reasoning, and action form a continuous loop.
# A. Language as a Command and Control Interface
The most intuitive form of interaction is natural language. Researchers are developing systems where the entire driving mission can be dictated by high-level verbal commands. Instead of programming a route into a GPS, a user could give a command like, "Drive me to the office, but avoid the highway and stop at a coffee shop on the way." Models like Lingo-1 [96] and DriveAdapter [97] are exploring how LLMs can interpret these complex, multi-step instructions, ground them in the visual world using VLSeg, and translate them into a sequence of actionable driving behaviors. This requires the model to not only segment relevant entities (e.g., "coffee shop") but also to understand the intent and constraints of the command.
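How such a multi-step command might be decomposed can be illustrated with a toy, keyword-based stand-in for the LLM parsing stage. The DrivingMission schema and the parsing rules are purely illustrative assumptions, not the actual interface of Lingo-1 or DriveAdapter.

```python
from dataclasses import dataclass, field

@dataclass
class DrivingMission:
    destination: str = ""
    waypoints: list = field(default_factory=list)   # ordered intermediate stops
    constraints: list = field(default_factory=list) # routing restrictions

def parse_command(text: str) -> DrivingMission:
    """Toy keyword-based stand-in for the LLM parsing step: maps the
    example command from the text to a structured mission."""
    mission = DrivingMission()
    text = text.lower()
    if "office" in text:
        mission.destination = "office"
    if "avoid the highway" in text:
        mission.constraints.append("no_highway")
    if "coffee shop" in text:
        mission.waypoints.append("coffee_shop")
    return mission

m = parse_command("Drive me to the office, but avoid the highway "
                  "and stop at a coffee shop on the way.")
print(m)
```

Each field of the structured mission is then grounded visually (e.g., VLSeg locating the "coffee shop") before being handed to the planner, which is what separates this paradigm from simple keyword routing.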
This paradigm shifts the focus from simple object identification to goal-oriented scene understanding.
# B. Explainable AI (XAI) and Decision Justification
A major barrier to the public acceptance of autonomous vehicles is their "black box" nature. When a vehicle makes a decision, it is often unclear why. LLMs offer a groundbreaking solution to this problem by enabling vehicles to justify their actions in natural language. An integrated system can leverage VLSeg to identify critical elements in a scene and then use an LLM to construct a human-understandable explanation. For instance, if the vehicle suddenly brakes, it could report, "I am stopping because I segmented a child chasing a ball towards the road." This capability, explored in models like LMDrive [98], is invaluable for building trust, debugging system failures, and for post-incident analysis. It transforms the vehicle from a silent machine into a communicative partner.
# C. Predictive Reasoning and Risk Assessment
Expert human drivers do more than just perceive the present; they constantly predict the near future. LLM-integrated systems are beginning to replicate this cognitive skill. By analyzing a scene segmented by a VLSeg model, an LLM can infer latent risks and predict the behavior of other agents. Frameworks like Reason2Drive [99] and those proposed by Chen et al. [100] can generate textual descriptions of potential hazards, such as, "The car ahead is signaling to merge, but there is a cyclist in its blind spot; there is a high risk of conflict." This goes beyond reactive obstacle avoidance. It represents a proactive understanding of road dynamics, allowing the vehicle to take preemptive measures to ensure safety. The LLM acts as a "common sense" reasoning layer, interpreting the segmented scene to anticipate complex multi-agent interactions.
Fig. 4: Conceptual diagram of an end-to-end reasoning loop in ITS.
The VLSeg module provides scene understanding to a central LLM, which processes goals and generates both actionable commands for the vehicle and human-understandable explanations.
# IX. HUMAN-IN-THE-LOOP AND INTERACTIVE SYSTEMS
Fully autonomous systems that can handle all conditions are still a future goal. The foreseeable future of ITS involves robust collaboration between humans and AI systems. LLM-augmented segmentation is a key enabling technology for this human-in-the-loop paradigm, facilitating intuitive communication and shared control.
# A. Interactive Data Annotation and Correction
The performance of any segmentation model is contingent on the quality of its training data. Creating large, pixel-perfect datasets is a major bottleneck. Interactive segmentation models like SEEM [21] turn this into a collaborative process. A human annotator can provide a rough initial prompt (e.g., a scribble on a truck), and the model generates a precise mask. The human can then provide corrective feedback (e.g., a negative point on an area that was wrongly included), and the model instantly refines the mask. This dialogue significantly accelerates the annotation process. This same principle can be applied in real time. If an autonomous system makes a segmentation error, a remote human operator could quickly provide a correction, which not only fixes the immediate problem but can also be used as a new training example to continually improve the model, a concept explored in systems tackling long-tail problems [101].
# B. Remote Assistance and Teleoperation
When an autonomous vehicle encounters a situation it cannot resolve—for example, complex hand gestures from a traffic police officer or an unusual construction zone—it can request help from a remote human operator. VLSeg is crucial for creating an efficient interface for this tele-assistance. The vehicle can stream its sensor data to the operator, who sees a 3D reconstruction of the scene.
The operator can then interact with this scene using language and gestures. For example, they could draw a path on the screen and command, "It is safe to follow this path," or circle a group of people and ask, "What are these people doing?" The VLSeg system on the vehicle interprets these multimodal prompts from the operator to navigate safely. Models like Talk2BEV [102] are developing methods to ground these natural language commands from a remote user directly into the Bird's-Eye-View (BEV) representation used for vehicle planning.
# C. Driver-AI Collaboration and Shared Autonomy
In vehicles with advanced driver-assistance systems (ADAS) or partial autonomy (SAE Levels 2-3), the driver and the AI are co-pilots. VLSeg can create a much more intuitive and less intrusive collaboration between them. Instead of relying on beeps and cryptic dashboard icons, the vehicle can communicate using language and augmented reality. For example, the system could overlay a segmentation mask on the windshield's heads-up display and say, "I see a potential hazard on the right," highlighting a pedestrian partially obscured by a parked car. Conversely, the driver could interact with the AI using language and gestures. A driver could point to an empty parking space and say, "Park the car there." The AI would use VLSeg to precisely segment the indicated space and then execute the parking maneuver. This shared autonomy, explored in frameworks like Voyager [103], aims to make the driving experience safer and more seamless by leveraging the complementary strengths of human intuition and AI perception.
# D. Challenges in Human-in-the-Loop Systems
While human-in-the-loop systems offer significant benefits for safety and data generation, they also introduce a unique set of challenges.
For remote teleoperation to be effective, the communication link between the vehicle and the operator must have extremely low latency and high reliability, which can be difficult to guarantee over mobile networks. Furthermore, the cognitive load on human operators can be substantial, especially if they are required to monitor multiple vehicles or switch contexts frequently, leading to fatigue and potential for error. Finally, the economic cost of maintaining a 24/7 workforce of trained remote operators is a significant consideration that may impact the scalability and business models of such services. Addressing these human factors, communication, and economic challenges is crucial for the successful deployment of human-in-the-loop ITS solutions.
# X. DATASETS AND BENCHMARKS
The development and evaluation of VLSeg models for ITS are heavily reliant on high-quality, large-scale datasets. This section reviews the most influential datasets and the standard metrics used for benchmarking model performance.
# A. Foundational Datasets for Driving Scene Segmentation
• Cityscapes [7]: A cornerstone for urban scene understanding, Cityscapes provides 5,000 images with high-quality, dense annotations across 19 classes. Its focus on street scenes from 50 different cities makes it a fundamental benchmark for semantic and panoptic segmentation in ITS.
• BDD100K [8]: This is one of the largest and most diverse driving datasets, containing 100,000 videos. It features annotations for a wide range of tasks, including segmentation, and is particularly valuable for its inclusion of diverse weather and lighting conditions, which are critical for testing model robustness in ITS.
• Mapillary Vistas [14]: With 25,000 high-resolution images and 66 object categories, Vistas offers unparalleled diversity and detail, covering various locations, weather, and seasons. This makes it an excellent resource for training models that can generalize to a wide variety of real-world conditions.
• nuScenes [13]: Going beyond camera data, nuScenes provides a full 360-degree sensor suite, including LiDAR and radar, for 1,000 driving scenes. Its multi-modal nature and 3D annotations are essential for developing next-generation perception systems that fuse information from multiple sensors.
# B. Datasets for Vision-Language and Interactive Segmentation
• Talk2Car [27]: This dataset is specifically designed for language-guided object referral in driving scenes. It consists of command-and-response pairs where a natural language command refers to a specific object in the scene, which is essential for training and evaluating models that can link language to visual elements in an automotive context.
• DriveLM-Data [23]: An extension of the DriveLM project, this dataset includes complex driving scenarios with associated textual descriptions and reasoning, linking perception to planning and decision-making. It is vital for training end-to-end models that can reason about driving situations.
• LISA (Language-guided Instance Segmentation) [76]: While not specific to ITS, LISA is a large-scale dataset for reasoning segmentation, where the model must segment objects based on complex queries that require reasoning (e.g., "segment the car that is farthest away"). This is crucial for developing more intelligent VLSeg systems.
# C. Evaluation Metrics
To systematically evaluate and compare the performance of segmentation models, a standardized set of metrics is employed.
• Intersection-over-Union (IoU): Also known as the Jaccard index, IoU is the most common metric for segmentation. It measures the overlap between the predicted segmentation mask $(A)$ and the ground truth mask $(B)$ and is calculated as: $IoU = |A \cap B| / |A \cup B|$.
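The IoU formula, averaged over classes to obtain mIoU, can be computed from integer label maps with a short sketch (function name and toy label maps are our own illustration):

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """mIoU over the classes present in the prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:           # class absent from both maps: skip it
            continue
        inter = np.logical_and(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 label maps with two classes (0 = road, 1 = car).
gt   = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
pred = np.array([[0, 0, 0, 1],
                 [0, 0, 1, 1]])
print(round(mean_iou(pred, gt, num_classes=2), 3))  # -> 0.775
```

Here class 0 scores 4/5 and class 1 scores 3/4, and their unweighted average is the mIoU; unlike pixel accuracy, no class can dominate by area.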
For a given dataset, the mean IoU (mIoU) is computed by averaging the IoU across all classes.
• Pixel Accuracy (PA): This metric calculates the percentage of pixels in the image that were correctly classified. While simple to compute, it can be misleading on datasets with large class imbalance (e.g., a large road surface can dominate the metric).
• Grounding Accuracy: Specific to VLSeg, this metric evaluates how well the model can localize the object referred to in the language prompt. This is often measured using the IoU between the predicted mask and the ground truth mask for the specific object mentioned in the query.
• Video-based Metrics (e.g., J&F): For video object segmentation tasks, metrics like the Jaccard and F-measure (J&F) are used. They evaluate both the region similarity (Jaccard) and the contour accuracy (F-measure) over a sequence of frames, providing a comprehensive assessment of tracking and segmentation quality over time [61].
# XI. CHALLENGES AND FUTURE RESEARCH DIRECTIONS
The integration of Large Language Models (LLMs) with vision-language segmentation (VLSeg) in intelligent transportation systems (ITS) presents several critical challenges that must be addressed to ensure reliable deployment in safety-critical applications. These challenges span computational efficiency, data availability, safety guarantees, and system integration, requiring concerted research efforts to advance the field [58], [63]. This section also outlines future research directions to tackle these issues.
# A. Challenge 1: Computational Efficiency for Real-Time Systems
Real-time performance is paramount for ITS applications, where decisions must be made within milliseconds to ensure safe navigation and collision avoidance.
As discussed in Section III, the reliance on large transformer-based architectures, particularly with computationally expensive cross-attention mechanisms, means that current VLSeg models like SAM [31] and SEEM [21] face significant latency challenges [37], [104]. Future Outlook: Research is focused on advanced model compression, including structured pruning [105], knowledge distillation [106], and quantization [107] to create lightweight yet powerful models. As noted in our comparative analysis, models like MobileSAM [32] and EdgeViT [33] are direct results of this effort, and work continues on adapting newer, efficient architectures like Mamba for ITS contexts through techniques like unstructured pruning [67]. Furthermore, hardware-aware Neural Architecture Search (NAS) is being used to design architectures optimized for automotive-grade processors [108]. Future systems will likely employ adaptive fusion strategies that dynamically adjust processing based on scene complexity and available resources [109], potentially using edge-cloud collaboration to distribute computational loads for non-critical tasks [110].
# B. Challenge 2: Data Availability and Open-World Generalization
The success of VLSeg models depends on high-quality, diverse training data. As noted in Section X, while foundational datasets like Cityscapes provide excellent benchmarks, data are often scarce for specific ITS scenarios, especially for rare events or "long-tail" objects. This limits the ability of even powerful foundation models to generalize to unseen, open-world conditions. Future Outlook: To address data scarcity, future work will lean heavily on generative AI and synthetic data pipelines (e.g., LLM-Seg40K [57], SynthCity [111], GAIA [112]) to create vast, diverse, and automatically annotated datasets. While general vision datasets like COCO [113] and ADE20K [114] have been foundational, the field requires more ITS-specific data.
Active learning [115] and automated annotation frameworks like AutoSeg [35] will reduce manual labeling costs. For open-world generalization, the focus is on advancing zero-shot, few-shot, and continual learning methods. Continual learning, in particular, is critical for enabling models to learn from a continuous stream of new driving data without catastrophic forgetting of previously learned knowledge [116], [117].
# C. Challenge 3: Safety, Reliability, and Explainability
Perhaps the most significant barrier to adoption, and a core perspective of this survey, is the challenge of ensuring safety, reliability, and explainability. Models must be robust against adversarial attacks and perform reliably under adverse conditions. Critically, as foreshadowed in our discussion of end-to-end systems in Section VIII, their decisions must be interpretable, especially in case of failure. Without a clear "why" behind an action, true safety is unattainable. The failure modes for VLSeg in ITS are specific and severe. For example, under extreme weather like heavy snow or rain, cameras can be obscured and LiDAR point clouds can become noisy, leading a model to fail to segment a pedestrian or misclassify a lane marking; this is an active area of research, with methods leveraging generative models to create robust all-weather perception systems [78]. Sensor degradation, such as a smudged camera lens, can have similar effects. Beyond environmental factors, these models are also vulnerable to adversarial prompts; a malicious actor could potentially craft a text prompt that causes the system to ignore a stop sign or incorrectly segment a clear path. A more subtle failure mode is semantic misinterpretation, where the model correctly segments an object but misunderstands the context—for instance, segmenting a plastic bag floating in the wind as a solid obstacle, causing unnecessary and dangerous braking.
Understanding and mitigating these specific failure modes is a critical area of ongoing research [118]. Future Outlook: Research is moving towards formal verification methods and runtime monitoring to provide safety guarantees [119]. Adversarial training and certified defenses are key areas of research to improve robustness against malicious inputs [120]. Frameworks like Multi-Shield [34] are exploring multimodal defenses. To address safety in high-risk scenarios and provide end-to-end guarantees, recent models like SafeSeg [121] and VLM-AD [122] integrate predictive reasoning with segmentation to directly inform safer driving decisions. A major future direction, and the one most enabled by LLMs, is explainable AI (XAI). The goal is to create systems that not only perform segmentation but also provide causal reasons for their decisions (e.g., "The object is segmented as a pedestrian because it has human-like shape and motion"), as explored in [123], [124]. Fail-safe mechanisms, including multi-modal redundancy with LiDAR and radar, will be standard [58], [125].
# D. Challenge 4: System Integration and Standardization
Seamless integration of VLSeg models into the broader ITS ecosystem, including vehicle-to-infrastructure (V2I) communication, edge computing devices, and existing vehicle control units, poses a significant engineering challenge. A lack of standardization for evaluation and deployment further complicates this. Future Outlook: The future lies in developing adaptive and secure V2I communication protocols for collaborative perception [110]. Edge computing will be essential, requiring efficient resource management and dynamic model selection on vehicular hardware [33]. A key effort will be to establish comprehensive benchmarks and deployment guidelines, aligning with automotive safety standards like ISO 26262 [126]. This will ensure interoperability and certified safety for VLSeg components across different manufacturers and systems.
# E. Challenge 5: Ethical Considerations and Algorithmic Bias
Beyond technical hurdles, the deployment of LLM-augmented segmentation in ITS raises significant ethical questions. The data used to train these models can contain inherent biases, which can lead to inequitable and unsafe outcomes. For example, if a dataset is predominantly collected in one geographic region (e.g., North America), the models may perform worse at recognizing pedestrian behaviors or road signs in other parts of the world. There is a documented "long-tail" problem where models perform poorly on underrepresented groups [127], which in a driving context could mean a higher risk for certain demographics of pedestrians. Furthermore, the decision-making process of these large models is often opaque (the "black box" problem), making it difficult to audit or explain failures, which is a major concern for accountability in the event of an accident [123]. Future Outlook: The ITS research community must prioritize the development of "fair" and "transparent" AI. This involves creating more geographically and demographically balanced datasets and developing techniques for bias detection and mitigation [128]. Explainable AI (XAI) is a critical research frontier, aiming to create models that can articulate the reasoning behind their predictions (e.g., "I am slowing down because I have segmented a child running towards the street"). Future regulatory frameworks will likely mandate auditable AI systems in autonomous vehicles, requiring a shift away from purely performance-driven metrics towards a more holistic evaluation that includes fairness, transparency, and ethical alignment.
# F. Long-Term Speculative Directions
Looking further ahead, several emerging technologies could reshape VLSeg in ITS. Neuromorphic computing, with its brain-inspired, event-based processing, promises unparalleled energy efficiency for real-time tasks.
Hybrid neural-symbolic AI could integrate common-sense reasoning into perception models, allowing them to understand context in a more humanlike way. While highly speculative, quantum computing could eventually offer breakthroughs in solving complex optimization problems inherent in large-scale model training and scene understanding. These directions, while not immediately deployable, represent exciting long-term frontiers for the field.
The integration of Large Language Models (LLMs) with computer vision is profoundly transforming perception tasks like image segmentation. For intelligent transportation systems (ITS), where accurate scene understanding is critical for safety and efficiency, this new paradigm offers unprecedented capabilities. This survey systematically reviews the emerging field of LLM-augmented image segmentation, focusing on its applications, challenges, and future directions within ITS. We provide a taxonomy of current approaches based on their prompting mechanisms and core architectures, and we highlight how these innovations can enhance road scene understanding for autonomous driving, traffic monitoring, and infrastructure maintenance. Finally, we identify key challenges, including real-time performance and safety-critical reliability, and outline a perspective centered on explainable, human-centric AI as a prerequisite for the successful deployment of this technology in next-generation transportation systems.
# 1 Introduction Join cardinality estimation is a challenging problem in database query optimization [33, 23], especially when queries include filter conditions on the joining tables [52]. Traditional data synopses such as histograms and samples [9] are built over entire relations before querying. While this maximizes the synopses’ generality, integrating filters at query time considerably degrades accuracy [52]. An alternative is to evaluate the filters as a preprocessing step during query optimization and build the synopses only over the tuples satisfying the filter conditions [5]. Single-pass streaming synopses such as (hash-based) sketches [10, 6, 8] suit this approach because of their constant update time. To achieve reasonable overhead during query optimization, extensive parallelization, and even GPU acceleration may be employed [28]. In recent work [49], bidirectional transformers [14] have been used to infer the sketches of query selections. This approximates — rather than builds from scratch — the necessary sketches to estimate the cardinality of joins with filter conditions. Thus, it avoids the construction overhead altogether. However, relying on deep learning constrains the scalability of the approximate sketches. The number of trainable parameters increases linearly with the size of the sketch, thus only sketches with relatively small width can be trained, due to the limited memory capacity of hardware accelerators, e.g., GPU. This is a significant shortcoming since the width of the sketch correlates with its accuracy. To overcome this lack of scalability to larger sketches, we propose Sketched Sum-Product Networks — or Sketched SPNs. Unlike deep learning models, Sum-Product Networks (SPNs) are not reliant on hardware acceleration for training. Hence, they can be used to approximate larger sketches, which results in more accurate cardinality estimation and effective query optimization. 
Furthermore, SPNs have been shown [40] to be applicable to multimodal data — both discrete and continuous types — which is required to model relations with multiple attribute types. Our main idea is to store sketches in an SPN's leaf nodes — which each represent a partition — and combine these sketches over the structure of the SPN. Filter conditions are also applied to the sketches during approximation, such that the resulting sketch obtained at the root of the SPN is an approximate sketch of their selection. We primarily consider the Fast-AGMS sketch [8], which is an accurate cardinality estimator for multi-way joins [24]. Our method also generalizes to other sketches. In particular, our open-source implementation [48] includes Bound Sketch [5], which is a pessimistic [29] estimator that upper-bounds the join cardinality and is highly effective for query optimization. Figure 1: Comparison of sketching pipelines. Given a query, previous methods must first filter the relations to compute the selections to sketch. The proposed approximate sketching pipeline (dashed arrows) uses Sum-Product Networks to approximate sketches without computing selections. Training can be completed offline. Our detailed technical contributions are as follows:
• Sketched Sum-Product Networks, a practical method for approximating sketches as an alternative pipeline to applying filter conditions before sketching, illustrated in Figure 1.
• An upper bound on the approximation error of sketches by SPNs, which we verify in our experiments.
• The application of Bound Sketch with cross-correlation [24] to avoid exponential space complexity.
• An upward-biased estimate derived from Fast-AGMS that outperforms other estimators in query optimization.
• Sketched SPNs perform within $3\%$ of the fastest query execution time for the JOB-light and Stats-CEB workloads.
# 2 Problem Definition In relational database systems, cost-based query optimizers often estimate the cost of potential query execution plans as a function of their size or cardinality, defined as the number of tuples a query would return. Cardinality is a close analog to the query’s actual computational runtime. We address the problem of estimating the cardinality of joins, which may include a variable number of relations, subject to selection predicates. # 2.1 Multi-way Join Cardinality Estimation In a two-way equi-join, the join cardinality between two relations is the number of pairs of tuples between both relations whose values for a given join attribute are equivalent. Consider two relations, $T _ { 1 }$ and $T _ { 2 }$ , which join on their respective join attributes with the domain $I$ . The cardinality of this join is defined as follows: $$ | T _ { 1 } \bowtie T _ { 2 } | = \sum _ { i \in I } f _ { 1 } ( i ) f _ { 2 } ( i ) $$ where $f _ { 1 } ( i )$ and $f _ { 2 } ( i )$ denote the frequencies of the join attribute element $i$ in $T _ { 1 }$ and $T _ { 2 }$ , respectively. The extension to multi-way joins is by introducing additional frequencies for every relation. Let $\{ T _ { 1 } , \dots , T _ { n } \}$ be the relations in an $n$ -ary join on attributes that share the domain $I$ , i.e., each relation has a single join attribute. In this case, the join cardinality is the following sum of products: $$ | T _ { 1 } \bowtie \cdots \bowtie T _ { n } | = \sum _ { i \in I } f _ { 1 } ( i ) \cdot \cdot \cdot f _ { n } ( i ) $$ where $f _ { k } ( i )$ denotes the frequency of the join attribute element $i$ in the $k$ -th relation. In practice, the exact frequencies $f _ { k } ( i )$ are unknown and may be estimated using histograms [27], samples [52], sketches [46], or other synopses [9]. In this paper, we consider the general case in which every relation may have multiple join attributes with different domains. We borrow the formulation by Heddes et al. 
[24] and let $\{ I_1, \ldots, I_n \}$ denote the join domains in every relation, where $I_k$ is the cross product of the individual join attribute domains in $T_k$. The cardinality of a multi-way join is expressed as: $$ | T_1 \bowtie \cdots \bowtie T_n | = \sum_{i \in \{ I_1 \times \cdots \times I_n \}} f_1(i) \cdots f_n(i) \prod_{\{u,v\} \in E} \mathbb{1}_{i_u = i_v} $$ where $i$ is a tuple from the cross product $\{ I_1 \times \cdots \times I_n \}$ and $f_k(i)$ denotes the frequency of tuples in $T_k$ that have the same value(s) for the join attribute(s) shared in $i$. Additionally, $\{u, v\}$ are attributes joining a pair of relations in a join graph with edges $E$. The indicator function $\mathbb{1}_{i_u = i_v}$ returns 1 if the attribute value $i_u$ equals $i_v$, otherwise it returns 0, such that $\prod \mathbb{1}_{i_u = i_v}$ equals 1 if and only if $i$ satisfies all the equi-join predicates. # 2.2 Join Cardinality Subject to Filters Estimators that represent only the probability distribution of one random variable, a single attribute, are defined as being univariate. Univariate estimators are challenging to apply to join cardinality estimation subject to filter conditions [52]—where joins are between selections, subsets specified by filters on a relation. With univariate estimators, frequencies may be approximated by assuming independence among attributes. Let $f_k'(i)$ denote the frequency of the join attribute element $i$ in a selection $\sigma(T_k) \subseteq T_k$.
Assuming independence between all attributes, $f_k'(i)$ can be approximated using univariate probabilities and frequencies: $$ f_k'(i) \approx f_k(i) \prod_{r \in T_k} P(\varphi_r) $$ where $\varphi_r \in \varphi$ denotes the predicate on attribute $r \in T_k$ and has the estimated selectivity $P(\varphi_r) \in [0, 1]$, i.e., the probability that an element of attribute $r$ satisfies $\varphi_r$. The product of all $P(\varphi_r)$ is the joint probability that all attributes of a tuple in $T_k$ satisfy $\varphi$, by the definition of (mutual) independence [17]. Using $f'$, the cardinality of a multi-way join between selections can be approximated as follows: $$ | \sigma(T_1) \bowtie \cdots \bowtie \sigma(T_n) | \approx \sum_{i \in \{ I_1 \times \cdots \times I_n \}} f_1'(i) \cdots f_n'(i) \prod_{\{u,v\} \in E} \mathbb{1}_{i_u = i_v} $$ However, this approximation may be inaccurate unless attributes are independent [50]. # 2.3 Multivariate Estimators Multivariate estimators, e.g., multidimensional histograms [13], do not assume independence and may thus be more accurate, but typically require space exponential in the number of attributes. Machine learning models are a notable exception: they have been shown to tractably learn joint probability distributions, even over all the attributes of a full outer join of several relations. The cardinality of a multi-way join between selections is proportional to the joint probability of the predicates on all full outer join attributes.
A model can estimate the probability that a tuple in the full outer join satisfies all the equi-join predicates $u = v$ together with all the selection predicates $\varphi_r$, for every attribute $r$, as follows: $$ P\left( \bigwedge_{\{u,v\} \in E} u = v \;\wedge\; \bigwedge_{r} \varphi_r \right) $$ The join cardinality is approximated by scaling the estimated joint probability by the size of the full outer join $| T_1 ⟗ \cdots ⟗ T_n |$. However, it is expensive to compute the full outer join – or even a sample of it – between many relations. Instead of defining a single multivariate estimator over a full outer join, we consider the problem of defining a multivariate estimator over the join and filter attributes of each relation, independently. Doing so mitigates the independence assumption of univariate estimators and avoids the training cost of computing a full outer join. Moreover, such an estimator supports the inference of the frequencies in Equation 5 for join cardinality estimation subject to selection predicates. The challenges of defining a multivariate estimator per relation are two-fold. First, the estimator has to support filters on any subset of attributes and sub-joins involving only a subset of the join attributes, both of which are essential in query plan enumeration. Second, the relation-level estimators must be combined into an ensemble estimator for the full multi-way join and any sub-joins. Our solution is to use a Sum-Product Network of sketches as the multivariate estimator, where the SPN handles the filters while sketches estimate the multi-way joins. # 3 Preliminaries This section provides background on sketches and Sum-Product Networks, which we combine for multi-way join cardinality estimation subject to filter conditions. # 3.1 Fast-AGMS Sketch We utilize the Fast-AGMS sketch [8], which is an unbiased frequency estimator. The basic structure of this sketch is an array of $w$ counters updated by a pair of hash functions.
The first hash function $h : \mathbb{R} \to \{1, \ldots, w\}$ maps a given element to a counter. The other hash function $\xi : \mathbb{R} \to \{\pm 1\}$ determines whether to increment or decrement that counter. In this work, we refer to the dimensionality $w$ as the width of the sketch. For example, consider the zero-initialized vector $a \in \mathbb{R}^w$. The Fast-AGMS update is the following: $$ a_{h(x)} \gets a_{h(x)} + \xi(x) $$ For each element $x$, e.g., from an attribute, the counter indicated by $h(x)$ is updated by $\xi(x)$. The frequency of a specific $x$ can be approximately recovered as the product of $a_{h(x)}$ and $\xi(x)$: $$ f(x) = \mathbb{E}\left[ a_{h(x)} \xi(x) \right] $$ This is proven to be unbiased [3] when $\xi : \mathbb{R} \to \{\pm 1\}$ is pairwise independent [45], such that $\mathbb{E}\left[ \xi(x) \xi(y) \right] = 0$ if $x \neq y$. The join cardinality of two relations can be estimated unbiasedly via the dot product of their sketches: $$ | A \bowtie B | \approx a \cdot b $$ where $a$ and $b$ are the sketches of the join attributes in relations $A$ and $B$, respectively. These corresponding sketches must share the same hash functions. # 3.1.1 Multi-Way Joins For multi-way joins, a distinct $\xi$ hash function is defined for each join. Consider a three-way join $A \bowtie B \bowtie C$. The join $A \bowtie B$ is assigned $\xi_a$, whereas $B \bowtie C$ is assigned $\xi_c$. Since $B$ joins with both $A$ and $C$, its sketch is constructed using both $\xi_a$ and $\xi_c$: $$ b_{h(x)} \gets b_{h(x)} + \xi_a(x) \xi_c(x) $$ In contrast, sketches $a$ and $c$ are updated as in Equation 7 using only either $\xi_a$ or $\xi_c$, respectively.
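The update and estimation rules above can be sketched in a few lines of NumPy. This is an illustrative toy rather than the paper's implementation: the hash pair $(h, \xi)$ is simulated with seeded lookup tables over a small integer universe, whereas a production sketch would use pairwise-independent hash families.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
U, w = 1000, 1024                   # element universe and sketch width
h  = rng.integers(0, w, size=U)     # h : element -> counter index
xi = rng.choice([-1, 1], size=U)    # xi: element -> +/- 1

def fast_agms(column):
    """Fast-AGMS update: add xi(x) to the counter selected by h(x)."""
    a = np.zeros(w)
    for x in column:
        a[h[x]] += xi[x]
    return a

A = rng.integers(0, 50, size=2000)  # join column of relation A
B = rng.integers(0, 50, size=2000)  # join column of relation B
est = fast_agms(A) @ fast_agms(B)   # |A join B| ~= a . b

# Compare against the exact cardinality from the frequency vectors.
fa, fb = Counter(A.tolist()), Counter(B.tolist())
exact = sum(fa[i] * fb[i] for i in fa)
assert exact > 0 and abs(est - exact) / exact < 0.5
```

For the multi-way case, the same loop would accumulate the product of signs, e.g., $\xi_a(x)\xi_c(x)$ for the middle relation $B$, instead of a single sign.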
When $A \bowtie B \bowtie C$ is a transitive join, i.e., each relation uses a single join attribute such that $A$ and $C$ both join the same attribute in $B$ , then the sketch vector $b \in \mathbb { R } ^ { w }$ is constructed by the same hash function $h$ as $a \in \mathbb { R } ^ { w }$ and $\boldsymbol { c } \in \mathbb { R } ^ { w }$ . This enables the unbiased three-way join cardinality estimation via the element-wise product of all three sketches: $$ | A \bowtie B \bowtie C | \approx \sum ^ { w } a \circ b \circ c $$ However, this only applies to transitive joins. When $A \bowtie B \bowtie C$ is not transitive, i.e., $A$ and $C$ join on different attributes in $B$ , then $b \in \mathbb { R } ^ { w \times w }$ is a sketch matrix constructed using $h _ { a }$ and $h _ { c }$ , which correspond with the sketch vectors $\boldsymbol { a } \in \mathbb { R } ^ { w }$ and $\boldsymbol { c } \in \mathbb { R } ^ { w }$ , respectively. The sketch matrix $b$ is constructed by the following update: $$ b _ { h _ { a } ( x ) , h _ { c } ( y ) } \gets b _ { h _ { a } ( x ) , h _ { c } ( y ) } + \xi _ { a } ( x ) \xi _ { c } ( y ) $$ where $x$ and $y$ are elements from two different attributes of $B$ that join with $A$ and $C$ , respectively. The three-way join cardinality estimation process now requires a matrix-vector product: $$ | A \bowtie B \bowtie C | \approx a \cdot b \cdot c $$ In general, this extends to complex multi-way joins with sketch tensors. The sketch of a relation in a join with $n - 1$ relations would be an $( n - 1 )$ -order tensor and the multi-way join cardinality estimated via tensor contraction. # 3.1.2 Cross-Correlation To avoid the exponentially large space requirements of tensors, Heddes et al. [24] showed that (circular) crosscorrelation can effectively approximate tensor contraction between Fast-AGMS sketches with just $\mathcal { O } ( w )$ space, i.e., vectors. Definition 3.1 (Cross-Correlation). 
Two vectors $a, b \in \mathbb{R}^w$ are cross-correlated by $a \star b = \mathcal{F}^{-1}\left( \overline{\mathcal{F} a} \circ \mathcal{F} b \right)$ where $\star$ denotes the operator and $\mathcal{F}$ denotes a discrete Fourier transform. Returning to our three-way join example $A \bowtie B \bowtie C$, cross-correlation requires $b \in \mathbb{R}^w$ to be a convolved sketch vector constructed by the following update: $$ b_{H(x,y)} \gets b_{H(x,y)} + \xi_a(x) \xi_c(y) $$ $$ H(x,y) = (h_a(x) + h_c(y)) \bmod w $$ where $H(x,y) : \mathbb{R}^2 \to \{1, \ldots, w\}$ is a composite hash function of $h_a$ and $h_c$ used to map the join attributes $(x, y)$ to a counter. It is equivalent to the circular convolution of two sketches containing just $x$ and $y$, respectively. Then, cross-correlation can estimate the cardinality of the multi-way join: $$ | A \bowtie B \bowtie C | \approx \sum^w a \star b \star c \approx \sum^w \mathcal{F}^{-1}\left( \mathcal{F} a \circ \overline{\mathcal{F} b} \circ \mathcal{F} c \right) $$ The estimation time complexity is $\mathcal{O}(n w \log w)$, where $n$ is the number of relations and $w \log w$ is the complexity of the Fast Fourier Transform [7]. We adopt the use of cross-correlation in this work to allow for larger sketch dimensionality $w$, which also improves estimation accuracy. # 3.2 Bound Sketch As an alternative to Fast-AGMS, we also utilize the pessimistic join cardinality estimation method, Bound Sketch, proposed by Cai et al. [5]. Whereas Fast-AGMS is unbiased, Bound Sketch estimates are upper bounds for the cardinality of joins, which are less likely to lead to catastrophically suboptimal plans [4].
Like Fast-AGMS, the Bound Sketch uses a hash function $h : \mathbb{R} \to \{1, \ldots, w\}$ to map elements to one of $w$ counters. Unlike Fast-AGMS, each insertion increments a counter. This produces the Count-Min sketch [10], which is an upper-bound estimator. However, Bound Sketch tightens the upper bound by utilizing the maximum degree of elements inserted to a counter. The maximum degree is defined as the largest frequency of any inserted value. For the two-way join $A \bowtie B$, let $\mathcal{C}(A) \in \mathbb{R}^w$ and $\mathcal{C}(B) \in \mathbb{R}^w$ denote the Count-Min sketch for the join attribute elements of relations $A$ and $B$, respectively. Furthermore, let $\mathcal{D}(A) \in \mathbb{R}^w$ and $\mathcal{D}(B) \in \mathbb{R}^w$ be $w$-dimensional sketch vectors whose elements are the maximum degree of values mapped to the corresponding counters in $\mathcal{C}(A)$ and $\mathcal{C}(B)$, respectively. The Bound Sketch upper bound is then given by the following: $$ | A \bowtie B | \le \operatorname*{min} \left\{ \mathcal{C}(A) \cdot \mathcal{D}(B),\; \mathcal{D}(A) \cdot \mathcal{C}(B) \right\} $$ Both products are overestimates, hence the minimum is taken as the tighter bound. Intuitively, each tuple in a relation (e.g., counted in $\mathcal{C}(A)$) can only join with up to the maximum degree of the other relation's join attribute (e.g., $\mathcal{D}(B)$).
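The two-way Bound Sketch construction can be illustrated with a small toy, again with assumed hash tables rather than the authors' code. The Count-Min counters $\mathcal{C}$ accumulate the total frequency per counter, the degree sketch $\mathcal{D}$ keeps the largest single-value frequency per counter, and the estimate takes the tighter of the two products:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
U, w = 100, 64
h = rng.integers(0, w, size=U)      # shared hash for both relations

def bound_sketch(col):
    """Count-Min counters C and per-counter maximum degree D."""
    C, D = np.zeros(w), np.zeros(w)
    for x, f in Counter(col).items():
        C[h[x]] += f                # total insertions into the counter
        D[h[x]] = max(D[h[x]], f)   # largest frequency of any one value
    return C, D

A = rng.integers(0, 40, size=1000).tolist()
B = rng.integers(0, 40, size=1000).tolist()
CA, DA = bound_sketch(A)
CB, DB = bound_sketch(B)
upper = min(CA @ DB, DA @ CB)       # tighter of the two overestimates

fa, fb = Counter(A), Counter(B)
exact = sum(fa[i] * fb[i] for i in fa)
assert exact <= upper <= CA @ CB    # bound holds; tighter than Count-Min
```

The final assertion reflects the intuition above: since $\mathcal{D} \le \mathcal{C}$ element-wise, the Bound Sketch estimate can never exceed the plain Count-Min product, yet it still never underestimates the true cardinality.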
For completeness, we also show the estimation for our three-way join example $A \bowtie B \bowtie C$: $$ | A \bowtie B \bowtie C | \leq \operatorname*{min} \left\{ \mathcal{C}(A) \cdot \mathcal{D}(B) \cdot \mathcal{D}(C),\; \mathcal{D}(A) \cdot \mathcal{C}(B) \cdot \mathcal{D}(C),\; \mathcal{D}(A) \cdot \mathcal{D}(B) \cdot \mathcal{C}(C) \right\} $$ In general, the extension to multi-way joins is the same as for Fast-AGMS — tensor contraction. However, cross-correlation can be used to approximate tensor contraction, which also allows us to apply Bound Sketch with larger sizes than prior work. Cai et al. [5] noted that the Bound Sketch estimator has exceptionally high latency, inflating query optimization time. This is due to its inability to estimate subject to filter conditions — the Bound Sketch of a selection must be exactly computed by scanning and filtering its base relation. This costly operation can even exceed the query execution time. This is also the case for Fast-AGMS, where the accepted practice [28, 24] has been to apply the filters just before computing the sketch. Our proposed method uses Sum-Product Networks to approximate the sketch of the filtered relation, without necessitating a scan at estimation time. (a) Learning the Sum-Product Network structure starting from a table at the root. Each node recursively partitions the table either column-wise or row-wise, terminating in leaf nodes that contain the local univariate probability distribution of a single column. (b) Inferring from the Sum-Product Network by combining probabilities from the leaf nodes. Sum nodes add probabilities, normalized to a valid probability by their weights. Product nodes multiply probabilities, which are assumed to be independent. Figure 2: Sum-Product Network learning and inference.
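Before moving to Sum-Product Networks, the composite-hash update and the FFT-based estimation from the cross-correlation discussion above can also be sketched. This toy (assumed hash tables, illustrative names) evaluates the contraction as the dot product of the convolved sketch $b$ with the circular convolution of $a$ and $c$, computed via the FFT, using only $\mathcal{O}(w)$ space:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
U, w = 100, 4096
h_a, xi_a = rng.integers(0, w, size=U), rng.choice([-1, 1], size=U)
h_c, xi_c = rng.integers(0, w, size=U), rng.choice([-1, 1], size=U)

def vec_sketch(col, h, xi):
    s = np.zeros(w)
    for x in col:
        s[h[x]] += xi[x]
    return s

def conv_sketch(pairs):
    """Convolved sketch of B: counter index is the composite hash
    H(x, y) = (h_a(x) + h_c(y)) mod w."""
    b = np.zeros(w)
    for x, y in pairs:
        b[(h_a[x] + h_c[y]) % w] += xi_a[x] * xi_c[y]
    return b

# Non-transitive chain join: A and C join different attributes of B.
A = rng.integers(0, 20, size=500).tolist()
C = rng.integers(0, 20, size=500).tolist()
Bp = list(zip(rng.integers(0, 20, size=500).tolist(),
              rng.integers(0, 20, size=500).tolist()))

a, b, c = vec_sketch(A, h_a, xi_a), conv_sketch(Bp), vec_sketch(C, h_c, xi_c)
conv = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(c)))
est = float(b @ conv)               # contraction without the w*w matrix

fa, fc = Counter(A), Counter(C)
exact = sum(fa[x] * fc[y] for x, y in Bp)
assert exact > 0 and abs(est - exact) / exact < 0.5
```

The dot product $b \cdot \mathcal{F}^{-1}(\mathcal{F}a \circ \mathcal{F}c)$ expands to $\sum_{j,k} a_j\, c_k\, b_{(j+k) \bmod w}$, i.e., the contraction with the compressed sketch matrix, which is why the estimate remains unbiased under pairwise-independent signs.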
# 3.3 Sum-Product Networks Our model choice, Sum-Product Networks [43] (SPNs), are probabilistic graphical models defined as a rooted acyclic graph — a tree. Each leaf is an independent random variable represented by a probability density function (PDF). The root and any internal node are either a sum or product node. Every sum node is a mixture of PDFs. Every product node is a product of PDFs. Each node is considered a valid PDF for the joint probability distribution of its descendants: a sum node can be a mixture of product nodes, and a product node can be a product of sum nodes. Thus, the root represents the joint PDF of all random variables in an SPN. # 3.3.1 Structure Learning SPNs can decompose complex joint probability distributions as linear combinations of simpler probability distributions. Gens and Domingos [20] offer a simple recursive algorithm to learn a tree structure for SPNs. Briefly, it initializes the SPN from the root, and grows the tree by recursively checking three cases:
1. If the data is a single random variable, i.e., an attribute, return a leaf node and terminate.
2. Else, if the data can be decomposed into independent groups of random variables, return a product node whose children are those groups.
3. Otherwise, partition the data by clustering similar tuples and return a sum node.
Each new child recursively checks these cases until terminating with a leaf. This generic algorithm does not assume how independent groups or clusters are determined. # 3.3.2 Example Figure 2a depicts an example of learning an SPN for a table with columns $X, Y$, and $Z$, whose elements are not necessarily distinct. Starting top-down, only $Z$ is determined to be independent, thus forming a leaf node. The remaining $(X, Y)$ tuples form a sum node containing two clusters, weighted by their proportion of tuples. $X$ and $Y$ are locally independent within both clusters, terminating into leaf nodes.
Since every branch has terminated in a leaf node, the SPN structure learning process is complete. Figure 2b illustrates the inference of joint probabilities from the SPN, e.g., for each value of $Z$ with the predicate $X = x_2$. For this example, let $x_2$ be distinct from $x_1$ and $x_3$. Starting bottom-up, each leaf of $X$ returns $P(X = x_2)$. $Y$ has no predicates and is marginalized out by its leaves returning a probability of 1. These probabilities of $X$ and $Y$ are multiplied at product nodes. At the sum node, the normalized sum of probabilities is $P(X = x_2) = \frac{1}{3}$. The root multiplies the PDF $P(Z)$ with $P(X = x_2)$, expressing the subset of $P(X, Z)$ where $X = x_2$. Substituting $P(Z)$ for a frequency distribution (sketch) approximates the frequency distribution (sketch) of the selection where $X = x_2$. # 3.3.3 Application to Relational Data Molina and Vergari et al. [40] proposed learning SPNs with the Randomized Dependence Coefficient (RDC) metric [38], which non-linearly transforms random variables. This produces a type-agnostic feature space suitable for testing independence and clustering mixed data types, i.e., both continuous and discrete. The ability to simultaneously handle different data types is highly relevant for relational data, which may contain multiple attribute types. Hilprecht et al. [25] apply SPNs as approximate query processors for relational databases. Additionally, they show that SPNs may be efficiently updated — inserting a tuple simply requires finding and updating a relevant subset of nodes. For join cardinality estimation, their best accuracy is obtained by modeling the full outer join of multiple relations. However, our results show that SPNs can achieve high accuracy without the full outer join, which is intractable for large databases.
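The Figure 2b walk-through can be mirrored in code. The node classes and the cluster weights (2/3 and 1/3) below are illustrative assumptions, not the paper's implementation; the point is the bottom-up combination rule: a leaf returns a probability (or 1 when its attribute is marginalized out), product nodes multiply, and sum nodes take a weighted mixture.

```python
class Leaf:
    def __init__(self, attr, dist):           # dist: value -> probability
        self.attr, self.dist = attr, dist
    def prob(self, evidence):
        if self.attr not in evidence:
            return 1.0                         # no predicate: marginalize out
        return self.dist.get(evidence[self.attr], 0.0)

class Sum:
    def __init__(self, weighted_children):     # [(weight, node), ...]
        self.children = weighted_children
    def prob(self, evidence):
        return sum(w * c.prob(evidence) for w, c in self.children)

class Product:
    def __init__(self, children):
        self.children = children
    def prob(self, evidence):
        p = 1.0
        for c in self.children:
            p *= c.prob(evidence)
        return p

# Structure of Figure 2: root = Product(Z leaf, Sum over two (X, Y) clusters).
root = Product([
    Leaf('Z', {'z1': 0.5, 'z2': 0.5}),
    Sum([(2/3, Product([Leaf('X', {'x1': 0.5, 'x2': 0.5}),
                        Leaf('Y', {'y1': 1.0})])),
         (1/3, Product([Leaf('X', {'x3': 1.0}),
                        Leaf('Y', {'y2': 1.0})]))]),
])
# Y is unconstrained, so it is marginalized; P(X = x2) = 2/3 * 0.5 = 1/3.
assert abs(root.prob({'X': 'x2'}) - 1/3) < 1e-9
```

With no evidence at all, every leaf returns 1 and the root evaluates to 1, confirming the network is a valid (normalized) joint distribution.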
# 4 Sketched Sum-Product Networks Inspired by the success of Fast-AGMS [24] and Bound Sketch [5], we use SPNs to approximate these sketches. This allows sketches to be used without the need to scan relations and apply filter conditions during estimation time. We start by training an ensemble of SPNs, one per relation. After training, each SPN can approximate a sketch for any selection, given its filter conditions. These sketches are then used for the subsequent join cardinality estimation task. # 4.1 Training Sketched SPNs Our algorithm for modeling multivariate data, i.e., a relation, and training an SPN is given in Algorithm 1, which generally follows the recursive template by Gens and Domingos [20] described in subsection 3.3. We modify the termination case: if the current partition of data is a single attribute or only contains join attribute(s), then we create a leaf that stores the sketch of the attribute(s). A leaf may contain multiple join attributes, such that it stores a multivariate sketch, i.e., a sketch tensor or a convolved sketch for cross-correlation. The other recursive cases remain: if possible, decompose attributes into independent groups, or else partition the data into clusters.
# Algorithm 1 TrainSPN
Input: relation $T$ with attribute set $\{C\}$
Output: Sum-Product Network of relation $T$
if $|C| = 1$ then
  return univariate sketch of $\{C\}$
else if $C$ contains only join attributes then
  return multivariate sketch of $\{C\}$
else
  $\{G\} \gets$ decompose $\{C\}$ into independent groups
  if $|G| > 1$ then
    return $\prod_i \mathbf{TrainSPN}(\{G_i\})$
  else
    $P \gets$ partition $T$ into clusters of similar tuples
    return $\sum_i \frac{|P_i|}{|T|} \mathbf{TrainSPN}(P_i)$
  end if
end if
As proposed by Molina and Vergari et al.
[40], product nodes utilize the RDC metric [38] to measure pairwise independence. Briefly, the RDC metric randomly transforms each attribute and evaluates its linear correlation within a non-linear type-agnostic feature space. This is applicable between different attribute types, i.e., continuous and discrete. Dependent attributes form connected components, where an RDC less than a user-specified threshold indicates independence. Each component becomes the child of a product node and recursively calls Algorithm 1. If attributes cannot form separable components, then a sum node is created instead. Sum nodes partition the tuples into clusters, with the goal of forming clusters that have locally independent attributes. A sum node partitions data into exactly two clusters, as recommended to create deeper networks. Following prior work [40, 25], we originally applied K-Means clustering to the same non-linear features utilized by the RDC test. However, we found that EM [12], as originally recommended for SPNs [43, 20], is more effective for maximizing the independence between attributes within the same cluster. This ultimately leads to faster training, since fewer partitions are needed before forming leaf nodes. A leaf node is created whenever an attribute is pairwise independent of all other attributes within its partition. Alternatively, if the number of tuples within a partition is less than some percentage (e.g., $1\%$) of the original relation, then a leaf node is created for each attribute. These user-specified thresholds control the termination cases of Algorithm 1 and limit the size of the model. # 4.1.1 Sketches in the Leaf Nodes Leaf nodes represent the distribution of attribute(s) using sketches. These sketches combine to approximate the sketch of any given selection.
This is also viable for any mergeable synopses [1] in general, but hash-based sketches (e.g., Fast-AGMS and Bound Sketch) are particularly suitable since their bins are inherently aligned by hash functions. This allows them to combine under simple element-wise operations. # 4.2 Inferring Sketches Algorithm 2 is applied to the root of a Sketched SPN and recursively traverses the network depth-first to infer the sketch of the given join attribute(s) $\kappa$ for the selection $\sigma_\varphi(T)$. The predicate $\varphi$ may contain disjunctive (i.e., condition OR condition) and conjunctive conditions (i.e., condition AND condition).
# Algorithm 2 ApproxSketch
Input: SPN node $V$ with children $\{V_i\}$, selection predicate $\varphi$, join attribute(s) $\kappa$
Output: sketch of attribute(s) $\kappa$ from the selection $\sigma_\varphi$
if $V$ is a leaf node then
  if $\kappa \subseteq$ the attributes of $V$ then
    return sketch of $\kappa$
  else
    return selectivity $P(\varphi) \in [0, 1]$
  end if
else if $V$ is a product node then
  return $\prod_i \mathbf{ApproxSketch}(V_i, \varphi, \kappa)$
else if $V$ is a sum node then
  if $\kappa \subseteq$ the attributes of $V$ then
    return $\sum_i \mathbf{ApproxSketch}(V_i, \varphi, \kappa)$
  else
    return $\sum_i \frac{|V_i|}{|V|} \mathbf{ApproxSketch}(V_i, \varphi, \kappa)$
  end if
end if
Upon reaching a leaf, the recursive function returns a sketch of join attribute(s) $\kappa$, if $\kappa$ is represented by the leaf. Otherwise, the leaf returns its estimated selectivity for the predicate $\varphi$, which is the probability $P(\varphi) \in [0, 1]$. If the leaf attribute(s) are excluded from the conditions in $\varphi$, then the returned selectivity is 1, i.e., the attribute(s) are marginalized out.
Sketches and probabilities from the leaf nodes are combined bottom-up by sum and product nodes until a sketch is returned at the root. This assumes sketches are linear [9], such that adding the sketches of two relations equals the sketch of their union. However, this does not hold for the degree component of the Bound Sketch method, which is non-linear. The sum of the maximum degrees from different multisets may overestimate the maximum degree of their union. Hence, the Bound Sketch upper bound may be looser when approximated by SPNs. This can be alleviated by using the maximum degree of the whole multiset to constrain the approximation: $$ \widehat { \mathcal { D } } \left( \sigma _ { \varphi } \left( T \right) \right) \le \mathcal { D } \left( T \right) $$ where the left side of the inequality is the approximated maximum degree sketch for the selection $\sigma _ { \varphi } \left( T \right)$ . The right side is the exact maximum degree sketch of the unfiltered relation $T$ , assumed to be available at estimation time. We originally attempted to modify sum nodes to merge degree sketches by taking their element-wise maximum. Ideally, if the elements partitioned into different leaf nodes were distinct, then the largest maximum degree of each leaf node equals the maximum degree of their union. However, this assumption often did not hold and the resulting estimates tended to underestimate. Rather than enforce this assumption, it is simpler to treat it as linear and allow the summation and multiplication of degree sketches. Doing so is still tighter than the Count-Min upper bound: $$ | A \bowtie B | \leq { \mathcal { C } } ( A ) \cdot ( { \mathcal { D } } ( B _ { 1 } ) + { \mathcal { D } } ( B _ { 2 } ) ) \leq { \mathcal { C } } ( A ) \cdot { \mathcal { C } } ( B ) $$ where $B _ { 1 }$ and $B _ { 2 }$ are disjoint subsets of the relation $B$ . 
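The linearity argument can be checked numerically. In this toy (assumed hash tables, not the paper's code), the Fast-AGMS sketch of a relation equals the element-wise sum of the sketches of any disjoint partition of its tuples, while summed per-partition degree sketches dominate the true degree sketch, so the approximated bound remains an upper bound, only looser:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
U, w = 100, 64
h, xi = rng.integers(0, w, size=U), rng.choice([-1, 1], size=U)

def agms(col):
    s = np.zeros(w)
    for x in col:
        s[h[x]] += xi[x]
    return s

def degree(col):
    d = np.zeros(w)
    for x, f in Counter(col).items():
        d[h[x]] = max(d[h[x]], f)   # largest frequency per counter
    return d

B = rng.integers(0, 30, size=400).tolist()
B1, B2 = B[:250], B[250:]           # a disjoint partition of B's tuples

# Linear: the sketch of the union equals the sum of partition sketches.
assert np.allclose(agms(B), agms(B1) + agms(B2))

# Non-linear: summed degree sketches dominate the true degree sketch,
# so treating them as linear still yields a (looser) upper bound.
assert np.all(degree(B1) + degree(B2) >= degree(B))
```

Both properties hold for any partition of the tuples, which is exactly what lets sum nodes merge leaf sketches with plain element-wise addition.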
For all possible partitions of $B = B_1 \cup B_2$, the sum of the maximum degree sketches $\mathcal{D}(B_1)$ and $\mathcal{D}(B_2)$ is never greater than the Count-Min sketch $\mathcal{C}(B)$. They are only equal in the worst case that all elements inserted into a bin share the same value. # 4.2.1 Support for Predicates The estimator at the leaf node must handle the various conditions in the selection predicate $\varphi$. Currently, the sketches in this work apply to equalities, as well as ranges treated as disjunctive equalities. Estimation of range selectivity is optimized by sketching the dyadic intervals [22] containing an element, as proposed by Cormode and Muthukrishnan [10]. Dyadic intervals are intervals whose sizes are powers of 2, e.g., $2^0, 2^1, 2^2$, etc. Any range of size $n$ can be decomposed into $\mathcal{O}(\log_2 n)$ disjoint dyadic intervals, which are treated as disjunctive equalities for estimation. Future work may include different synopses to support additional predicates. Note that only the synopses for leaf nodes of join attributes benefit from being mergeable element-wise, e.g., sketches. # 4.3 Error Bounds Two assumptions affect approximation error: (1) leaf nodes can accurately estimate selectivity, and (2) the children of a product node are independent. The first is resolved by improving the individual estimators in the leaf nodes, e.g., by allocating more memory. The second requires minimizing the dependence between attributes, e.g., tightening the RDC or cluster size threshold. However, these increase the model's memory requirements and training time. Therefore, it is useful to know when an SPN is ineffective, before retraining. We can determine its effectiveness by checking whether its error is near its worst-case. Conjecture 4.1.
The absolute error of any approximated sketch counter is at most the error of fully assuming independence between attributes and scaling the sketch of the join attribute(s) by the exact selectivity of predicate $\varphi$ on each attribute $r \in T$. We formally express the error bound as the L1-distance between the exact and approximate sketch, upper bounded by the L1-distance between the exact and worst-case approximate sketch. The worst-case approximation is by a single product node, i.e., a complete independence assumption between attributes. $$ \left\| S\left(\sigma_\varphi(T)\right) - \widehat{S}\left(\sigma_\varphi(T)\right) \right\|_1 \leq \left\| S\left(\sigma_\varphi(T)\right) - S(T) \prod_{r \in T} P(\varphi_r) \right\|_1 $$ where the function $S(T) : \mathbb{R}^{N \times M} \to \mathbb{R}^w$ is a linear mapping from a relation $T$ to a $w$-dimensional sketch, e.g., Fast-AGMS. Then $\widehat{S}(T)$ denotes an approximate sketch. This worst-case error bound assumes that the exact selectivity of predicates is given by each leaf node. As such, the actual worst-case error bound may be higher in practice, and error should be measured over many queries. # 4.4 Join Cardinality Estimation The process of inferring sketches for join cardinality estimation is exemplified in Figure 3. It depicts two SPNs, one for a relation $A$ and another for relation $B$. They share the same structure: the root is a sum node that partitions the data into two clusters forming product nodes. These product nodes terminate into leaf nodes for a join attribute and a generic selection predicate attribute. Consider the two-way join $\sigma_\varphi(A) \bowtie_{x=y} \sigma_\psi(B)$, where $x$ and $y$ are the join attributes of $A$ and $B$ respectively.
To estimate the size of the join, each SPN is used to infer the sketch of the join keys within their respective selection. Figure 3: Join cardinality estimation using SPNs to approximate the sketches of the join keys $x$ and $y$ from their selections $\sigma_\varphi(A)$ and $\sigma_\psi(B)$, respectively. The dot product of these sketches is a join cardinality estimate. Given the selection predicate $\varphi$, the SPN of $A$ combines each cluster's probability of satisfying $\varphi$ with its local sketch of the join attribute $x$. Let $P_1(\varphi)$ and $S(x_1)$ denote the probability and sketch for the first cluster, while $P_2(\varphi)$ and $S(x_2)$ belong to the second cluster. Then, the SPN expresses the sketch of the selection as the sum of products: $$ \widehat{S}\left(\sigma_\varphi(A)\right) = P_1(\varphi) S(x_1) + P_2(\varphi) S(x_2) $$ The approximate sketch of $\sigma_\psi(B)$ is similarly expressed in terms of probabilities and sketches. After inferring the sketch of each selection, join cardinality estimation follows the original process for each sketch method. In general, the dot product of the inferred sketches estimates the cardinality of the two-way join, requiring that the inferred sketches share the same hash function(s). Typically, multiple estimates are taken for better accuracy, using different hash functions to create multiple independent sketch estimators. The median trick is applied for unbiased Fast-AGMS estimates. For the pessimistic Bound Sketch, the minimum of its estimates is returned as a tight upper bound. # 4.4.1 Probabilistic Upper Bound Estimators that guarantee an upper bound on the actual join cardinality are referred to as pessimistic [5, 29].
However, even without a strong guarantee, upward-biased estimators have still been shown [56] to benefit query optimization, sometimes even more so than exact cardinality from an oracle [4]. Intuitively, overestimating cardinality encourages query optimizers to plan more cautiously and tends towards plans that are only marginally suboptimal, whereas allowing for underestimation increases the risk of choosing plans with potentially catastrophic execution times. To induce an upward bias in our estimator, we take the maximum of multiple Fast-AGMS estimates, instead of the median. Furthermore, we modify the product node to use the minimum univariate selectivity, rather than the product, to scale the sketch of the join attribute. Since the minimum univariate selectivity is an upper bound on the product, this allows more of any sketch it multiplies with to survive the approximation. Henceforth, we refer to this modification as the min-product node. The min-product node further increases the upward bias of the maximum Fast-AGMS estimate, and is also applicable to Bound Sketch. In our experiments, we evaluate the efficacy of these upward-biasing techniques for query optimization. # 5 Experiments # 5.1 Implementation We implement Sketched SPNs in Python, as are the methods we compare against. Sketches are stored in a sparse tensor format [60], which may prevent the model size from increasing linearly with the size of the sketches in their leaf nodes. Sketches are materialized at estimation time, since sparse operations may be significantly slower than the same dense operations. For selectivity estimation in leaf nodes, we use the simple Count-Min sketch, which only requires a single hash function for simpler and faster inference. We use the $k$-universal hash function [2] implemented by Heddes et al. [24] to construct sketches. # 5.2 Setup Experiments are executed on an Ubuntu 24 system with an Intel Xeon E5-2660 v4 CPU and 256 GB RAM.
Specifically, for query execution time, we use the modified PostgreSQL 13.1 provided by Han and Wu et al. [23], which implements commands to plug in cardinality estimates from external methods. It also disables parallel workers for query execution, which emphasizes the impact of the cardinality estimator on query execution speed. # 5.3 Datasets We evaluate on the JOB-light [31] and Stats-CEB [23] workloads, which are commonly used to evaluate join cardinality estimation methods. Han and Wu et al. [23] report that the attributes in Stats-CEB are more skewed and correlated with each other. Hence, its data is expected to be more complex and difficult to model accurately.

• JOB-light consists of 70 join queries (696 subqueries) on 6 relations from IMDb [26]. These queries are transitive joins on up to 5 relations in a star schema.
• Stats-CEB consists of 146 join queries (2603 subqueries) on 8 relations from Stats Stack Exchange data [16]. It includes non-transitive joins on up to 7 relations.

A subquery is a subset of joins from the original query, along with relevant selection predicates. In order to execute a query, the cardinality estimate of each subquery is passed to PostgreSQL. We also evaluate the accuracy of join cardinality estimation methods using these subqueries. Figure 4: Mean L1-distance between the exact Count-Min sketch of a selection and its approximation by SPNs with varied complexity. The worst-case independence assumption model gives the upper bound (dashed line) on error. # 5.4 Sketch Approximation Error We verify our error bound (Equation 21) and evaluate how closely SPNs can approximate sketches. Since it is only applicable to linear sketches, we evaluate the Count-Min sketch components of the Bound Sketch method. Notably, the sum of the counters in the Count-Min sketch equals the cardinality of its selection. Hence, the Count-Min sketch approximation error is analogous to the SPN's cardinality estimation error for a single-table query.
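As a concrete illustration of this property, here is a minimal one-row Count-Min sketch, with a toy modular hash standing in for the $k$-universal hashes of our implementation, together with the L1-distance used as the error metric; the values and predicate are hypothetical:

```python
def count_min(values, width, bucket):
    """One-row Count-Min sketch: each value increments its hashed bin."""
    sk = [0] * width
    for v in values:
        sk[bucket(v)] += 1
    return sk

def l1(sk_a, sk_b):
    """L1-distance between two sketches of equal width."""
    return sum(abs(a - b) for a, b in zip(sk_a, sk_b))

rows = [3, 7, 7, 3, 9, 1, 7]                # hypothetical column values
selection = [v for v in rows if v > 2]      # apply a filter predicate
sk = count_min(selection, width=4, bucket=lambda v: v % 4)
assert sum(sk) == len(selection)            # counter sum == selection cardinality
```

The final assertion is exactly the property noted above: the counters of a Count-Min sketch sum to the cardinality of the sketched selection, so its approximation error mirrors single-table cardinality estimation error.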
We compute the L1-distance between an exact Count-Min sketch and its SPN approximation, for each selection in our workloads. When a query does not specify filter conditions (i.e., a selection) for a relation, the sketch returned by the SPN is simply the sum of sketches from its leaf nodes. This is equivalent to the exact sketch of the unfiltered relation. We omit such sketches from our evaluation, since their error is 0. There are 1,165 and 5,451 selections specified in the JOB-light and Stats-CEB workloads, respectively. Figure 4 evaluates SPNs of various complexities, by adjusting the minimum clustering size and independence thresholds. As these thresholds decrease, the SPN is expected to become increasingly accurate, which improves the sketch approximation error. The minimum clustering size is specified as a percentage of the original relation. A clustering threshold of $100\%$ means the data is never partitioned by a sum node and is just a single product node — the worst-case complete independence assumption model, which gives the upper bound on our sketch approximation error. We tighten the clustering threshold as low as $1\%$, meaning the SPN only forms sum nodes on partitions that are no smaller than $1\%$ of the original relation. However, the SPN might stop partitioning the relation before reaching this threshold if the attributes within all partitions become sufficiently independent.
Figure 5: Median q-error and mean estimation time in ms (in parentheses) of approximate Fast-AGMS sketches, for SPNs trained with varying independence thresholds (rows) and clustering thresholds (columns).

JOB-light:

| Independence Threshold | $1\%$ | $5\%$ | $10\%$ | $25\%$ | $50\%$ | $100\%$ |
| --- | --- | --- | --- | --- | --- | --- |
| 0.3 | 1.50 (4.1) | 1.49 (4.5) | 1.51 (4.2) | 1.51 (9.5) | 1.49 (8.8) | 1.51 (3.6) |
| 0.2 | 1.49 (4.5) | 1.50 (4.2) | 1.48 (4.2) | 1.52 (11.7) | 1.49 (8.6) | 1.51 (3.6) |
| 0.1 | 1.35 (7.6) | 1.33 (5.2) | 1.39 (5.4) | 1.35 (9.4) | 1.41 (10.0) | 1.51 (3.6) |
| 0.01 | 1.28 (12.2) | 1.29 (6.4) | 1.25 (6.5) | 1.28 (15.8) | 1.31 (9.1) | 1.51 (3.6) |
| 0 | 1.29 (33.5) | 1.28 (5.4) | 1.21 (6.6) | 1.31 (4.6) | 1.29 (10.1) | 1.51 (3.6) |
| Cluster Only | 1.04 (38.5) | 1.05 (7.9) | 1.07 (4.3) | 1.12 (5.3) | 1.25 (10.7) | 1.51 (3.6) |

Stats-CEB:

| Independence Threshold | $1\%$ | $5\%$ | $10\%$ | $25\%$ | $50\%$ | $100\%$ |
| --- | --- | --- | --- | --- | --- | --- |
| 0.3 | 1.54 (6.5) | 1.48 (3.0) | 1.52 (6.3) | 1.56 (4.6) | 1.49 (2.9) | 1.73 (1.8) |
| 0.2 | 1.45 (11.1) | 1.39 (4.7) | 1.44 (8.2) | 1.50 (5.8) | 1.43 (3.0) | 1.73 (1.8) |
| 0.1 | 1.17 (97.0) | 1.20 (9.1) | 1.38 (22.0) | 1.29 (14.0) | 1.32 (3.5) | 1.73 (1.8) |
| 0.01 | 1.21 (55.8) | 1.17 (10.8) | 1.31 (36.5) | 1.31 (7.6) | 1.33 (3.6) | 1.73 (1.8) |
| 0 | 1.15 (94.5) | 1.11 (10.9) | 1.26 (16.1) | 1.28 (10.9) | 1.30 (3.8) | 1.73 (1.8) |
| Cluster Only | 1.15 (174.1) | 1.08 (26.2) | 1.25 (20.6) | 1.27 (5.3) | 1.30 (9.2) | 1.73 (1.8) |

The RDC metric measures non-linear dependency between two attributes and has the range $[0, 1]$. An independence threshold of 0 means that a product node is only formed whenever an attribute is pairwise independent of all other attributes in the same instance. We find that an independence threshold of 0.2 or greater fails to decrease approximation error, regardless of the clustering threshold, since the independence threshold is so large that the minimum clustering size is never met. We suggest that it is more practical to set the independence threshold to 0 and primarily control model complexity using the clustering threshold. Eschewing the independence condition, e.g., setting the threshold below 0, means that the SPN repeatedly forms clusters via sum nodes until each cluster reaches the clustering threshold. Product nodes are only made afterwards.
This is a special case of the model — equivalent to simply clustering — and the approximation becomes the sum of sketches made by assuming independence locally within each cluster. Figure 4 shows that this is more accurate than SPNs of the same clustering threshold with a non-negative independence threshold. It even appears to serve as a lower bound on the sketch approximation error. Although the simplicity of only clustering is attractive, the objective of using SPNs is to achieve similar accuracy with a smaller model. # 5.5 Join Cardinality Estimation Accuracy In join cardinality estimation, accuracy is often measured using q-error [39], defined as the larger ratio between a positive cardinality $Y$ and its estimate $\widehat{Y}$: $$ \mathrm{q\text{-}error} = \max\left\{ \frac{\widehat{Y}}{Y}, \frac{Y}{\widehat{Y}} \right\} $$ Figure 5 analyzes the effect of the SPN training thresholds on the q-error of their approximate Fast-AGMS sketches, using sketch width $w = 10^5$ and taking the median of 5 independent estimates. Since Bound Sketch is not unbiased, its q-error is higher than Fast-AGMS' and we omit it. However, the SPN hyperparameters similarly affect the approximation error of either sketch, which correlates with their q-error on join cardinality. Generally, q-error decreases as we tighten either threshold. The independence threshold must be as small as 0 in order to improve over the worst-case independence assumption model corresponding to a $100\%$ clustering threshold. This is especially true for JOB-light, where a high independence threshold only marginally improves q-error. In contrast, decreasing the clustering threshold until $10\%$ quickly improves q-error. Estimation time does not increase significantly with each smaller clustering threshold, until it is $1\%$.
Instead, an SPN with fewer clusters (via a higher clustering threshold) may sometimes be slower. This is an effect of the sparse tensor format [60] used in our implementation — operations on sketches with more non-zero elements are slower than on sparser sketches. It is not guaranteed that the sketches corresponding to smaller clusters would be sparser. Until it is 0, the independence threshold also has little apparent impact on estimation time. We also compare the distribution of q-errors for exact and approximate Fast-AGMS sketches. Our objective is for the approximate sketches' q-errors to approach that of exact sketches. In Figure 6, we verify that tightening the clustering threshold of SPNs used to approximate Fast-AGMS sketches also tightens the distribution of their q-error. However, it falls short of exact sketches. The smallest $1\%$ clustering threshold is excessive, since it causes high estimation time with marginal benefits to q-error. A larger width also only marginally improves q-error on our datasets. Figure 6: Q-error distribution of exact Fast-AGMS sketches and their approximations by SPNs with various clustering thresholds. The independence threshold is fixed to 0. # 5.6 Sketching Efficiency Ideally, the computational time and space requirements for the approximation process should be less than exact sketching, to be considered practical. Computing an exact sketch entails scanning and filtering a relation, hashing each element that satisfies the selection, and updating the sketch. Since hashing and updating are simple operations, the bulk of the work is in scanning and filtering. We assume an idealized scenario for exact sketches: that each distinct sketch (i.e., for a particular selection) is only computed once and saved for reuse. This reduces the time and space requirements for exact sketching, enabling a more nuanced comparison with our approximations.
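To illustrate why hashing and updating are cheap relative to the scan, here is a minimal Fast-AGMS estimator; the seeded hash functions are simple stand-ins (not the $k$-universal hashes used in our implementation), and `estimate_join` applies the median trick over independent estimators:

```python
import random
from statistics import median

def fast_agms(values, width, bucket, sign):
    """Scan values, hash each into a bucket, and add its +/-1 sign."""
    sk = [0] * width
    for v in values:
        sk[bucket(v)] += sign(v)
    return sk

def dot(sk_a, sk_b):
    """Dot product of two Fast-AGMS sketches built with the same hash
    functions: an unbiased estimate of the join size."""
    return sum(x * y for x, y in zip(sk_a, sk_b))

def seeded_hashes(width, seed):
    """Stand-in hash pair (bucket, sign) for integer keys."""
    rng = random.Random(seed)
    p = (1 << 31) - 1  # Mersenne prime
    a1, b1 = rng.randrange(1, p), rng.randrange(p)
    a2, b2 = rng.randrange(1, p), rng.randrange(p)
    return (lambda v: (a1 * v + b1) % p % width,
            lambda v: 1 - 2 * ((a2 * v + b2) % p % 2))

def estimate_join(A, B, width=1024, trials=5):
    """Median of several independent estimates reduces variance."""
    ests = []
    for seed in range(trials):
        bucket, sign = seeded_hashes(width, seed)
        ests.append(dot(fast_agms(A, width, bucket, sign),
                        fast_agms(B, width, bucket, sign)))
    return median(ests)
```

When every distinct key falls in its own bin, the dot product recovers $|A \bowtie B|$ exactly; collisions introduce the variance that the median trick suppresses.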
For all of the sketches required for our workloads, Table 1 reports the time to compute them, either exactly or approximately. For exact sketching, both Fast-AGMS (shortened to F-AGMS) and Bound Sketch, we report the total space required to store sketches sparsely. For approximations, denoted as F-AGMS† and Bound Sketch†, we report the SPN ensemble size. SPNs are trained with an independence threshold of 0 and a clustering threshold of $10\%$, which balances acceptable accuracy and estimation time for our workloads. Henceforth, these are the training hyperparameters in our other experiments. Table 1: Computational requirements of exact sketches and approximations (denoted with $\dagger$) using width $10^5$. The time required to approximate both Fast-AGMS and Bound Sketch is always faster than exact sketching. Unlike exact sketching, approximate sketches are not saved for reuse — an SPN must approximate each sketch, every time it is needed. This is a practical scenario that does not assume a priori knowledge of the selections in the query workloads, which may be prohibitive to obtain. Nonetheless, approximation via SPN is at least several times faster than exact sketching. In particular, Fast-AGMS is up to two orders of magnitude faster to approximate, since the exact version requires multiple hash functions. Bound Sketch requires fewer hash functions, but uses both Count-Min and degree sketches. The space requirement of approximate sketching is higher than exact sketching on JOB-light, but vice versa on Stats-CEB, which contains more selections. JOB-light contains few enough unique selections that the total size of sketches saved for those selections is smaller than the SPN. The opposite is true for Stats-CEB. This suggests that for workloads with few unique selections, exact sketches may be more practical. Table 2: Training time for the ensembles of data-driven and learned cardinality estimators.
# 5.7 Model Training Time Our method is also comparable to other learned cardinality estimators, specifically data-driven methods. Data-driven cardinality estimators observe the tuples of the target relation to model its distribution. This is in contrast to query-driven cardinality estimators, which train on queries annotated with their ground-truth cardinality [31, 36, 34]. The training time of Sketched SPNs and other data-driven learned cardinality estimators is given in Table 2. These are categorized as ensembles of either SPNs or Bayesian networks [42]. Like this work, DeepDB [25] also applies an ensemble of SPNs to model relations and estimate join cardinality. However, each SPN in DeepDB may be trained on (a sample of) either one relation or the full outer join of multiple relations. BayesCard [57] uses Bayesian networks instead. FactorJoin [56] also uses Bayesian networks, but eschews full outer joins and constrains each network to a single relation. We train these models using their default or recommended hyperparameters, if any are given. DeepDB and BayesCard both limit the number of full outer joins used, which can be costly. However, their training time is still higher than their per-relation counterparts for the same model. In particular, the full outer join of the relations in Stats-CEB is four orders of magnitude larger than JOB-light's, as reported by Han and Wu et al. [23]. Thus, DeepDB has a much longer training time on Stats-CEB than Sketched SPNs. SPNs require noticeably longer training than Bayesian networks. One reason is that the K-means clustering method, as used in DeepDB, may fail to effectively cluster data into locally independent attributes during training — an SPN may repeatedly create sum nodes until reaching the clustering threshold. However, we observe that using Hard Expectation-Maximization (EM) [12] for clustering, instead of K-means, prevents this. Table 3: Impact of clustering method on training SPNs.
Structure learning time of SPNs using either clustering method is compared in Table 3. Structure learning refers to forming the SPN nodes via clustering and independence testing. On JOB-light, K-means fails to form clusters that minimize the dependency between attributes, whereas Hard EM results in $35\%$ faster structure learning. On the other hand, K-means has no issues on Stats-CEB and Hard EM offers no improvement. We observe little difference in accuracy or model size. Although K-means may be faster, we recommend Hard EM as the more cautious choice. # 5.8 Query Execution Time Figure 7: Total end-to-end query execution times, which include the added cardinality estimation times. SPN sketch approximations are denoted by $\dagger$. Approximations by SPNs using our min-product node are denoted by $\ddagger$ instead. Figure 7 shows the total end-to-end query execution times (including cardinality estimation time) of the different estimators and also the ground-truth cardinality. We include the data-driven cardinality estimator by Zhu and Wu et al., FLAT [59], which proposes an SPN variant called the Factorize-Split-Sum-Product Network (FSPN). The FSPN identifies highly correlated attributes and factorizes their joint probability distribution into conditional probability distributions, e.g., as multivariate histograms. This efficiently models the attributes that our SPN would otherwise struggle to decompose into independent univariate distributions. At the time of writing, FLAT does not have an open-source implementation that supports join cardinality estimation. However, its estimates for JOB-light and Stats-CEB are provided by Han and Wu et al. [23]. For each sketch method, we evaluate their exact sketches and our approximations, all using width $10^5$. The maximum of Fast-AGMS estimates (F-AGMS Max) is used instead of its unbiased median. We find that the median estimate is uncompetitive on Stats-CEB, unless a larger sketch width is used, as Heddes et al.
[24] showed with a width of $10^6$ — the median estimate still resulted in slower query execution than FLAT on Stats-CEB. In comparison, Fast-AGMS Max has query execution time that is second only to the ground-truth cardinality. Although the approximations do not attain as fast query execution as exact sketches, they compensate with their lower estimation time. In particular, our min-product node approximations, Fast-AGMS‡ Max and Bound Sketch‡, demonstrate the effectiveness of upward-biased estimation in query optimization. Note that the estimation time for exact sketches includes their construction time — we do not assume that the sketches were already available. This would require knowing the necessary sketches to prepare beforehand. Otherwise, the estimation time for exact sketches would be under a minute. Overall, Fast-AGMS Max results in faster query execution than Bound Sketch. This is unexpected, since Bound Sketch guarantees overestimation. On the other hand, Fast-AGMS Max may still underestimate, which is commonly cited [5, 4] as riskier than overestimation for causing sub-optimal query execution plans. # 5.9 Relative Error Distribution Figure 8: Distribution of relative errors on Stats-CEB. We analyze the bias of our estimators in Figure 8, which shows the distribution of the relative estimation errors for Fast-AGMS Max and Bound Sketch. It also includes the unbiased Fast-AGMS median estimator to verify that Fast-AGMS Max is significantly more upward-biased. However, the heavy-tailed distribution of relative errors for Bound Sketch reveals that its upward bias is much stronger. It may greatly overestimate, even thousands of times the actual join cardinality. In comparison, Fast-AGMS Max and FactorJoin — another upward-biased estimator — are still highly accurate. Although it is not guaranteed, they effectively produce tighter upper bounds, thus achieving faster query execution. We refer to Bergmann et al.
[4] for an analysis of the impact of overestimating cardinalities. # 5.10 Approximate Sketches This work closely mirrors Approximate Sketches [49], prior work that trained bidirectional transformers [14] to also approximate Fast-AGMS sketches. It shares the same premise — an ensemble of per-relation models is trained to approximate the sketch of any selection whose filter conditions are given at estimation time. Unlike SPNs, bidirectional transformers are dependent on hardware accelerators (e.g., GPUs) for training. As such, the size of models that can be trained is limited by the accelerator's memory. Furthermore, the model size grows linearly with the sketches to approximate — each counter in a sketch has a trainable embedding. Thus, Approximate Sketches used a relatively small width of up to 4096. The join cardinality estimator was a heuristic [28] that allows sketches with fewer hash functions, and thus fewer sketches that may need to be approximated, but is restricted to transitive joins, e.g., JOB-light. We compare it to our Fast-AGMS‡ Max estimator, using the same sketch width of 4096, in Table 4. Table 4: Comparison to prior work, Approximate Sketches, which is only implemented for transitive joins (e.g., JOB-light) and a smaller sketch width of up to 4096. *The time of a single epoch is given for Approximate Sketches. The training time of Approximate Sketches is reported for a single epoch of gradient descent. However, even a single epoch (on an NVIDIA Tesla K80 GPU) exceeds our SPN training time. Approximate Sketches was trained to approximate 5 independent sketch estimators, which we found we needed to double to 10 for Fast-AGMS‡ Max. This was due to the increased variance of a smaller-width sketch producing a few extreme underestimations that affected just 4 out of the 70 queries in JOB-light, but resulted in a 0.2 hrs longer total execution time than Approximate Sketches. Using more independent estimators prevents such underestimations.
Thus, Fast-AGMS‡ Max achieves faster query execution befitting its lower q-error. # 6 Related Work # 6.1 Sketches for Join Cardinality Estimation The Fast-AGMS sketch [8] was derived from the AGMS sketch proposed for unbiased join cardinality estimation by Alon et al. [3]. AGMS can be seen as a special case of Fast-AGMS with a width of one — a single counter. In practice, a large number of independent AGMS estimators were necessary for accuracy. However, its update time increases linearly with the number of counters. Fast-AGMS [6, 8] attains sub-linear update time by partitioning data into an array of multiple counters. To the best of our knowledge, AGMS sketches were first applied to join cardinality estimation subject to filter conditions by Vengerov et al. [52]. At query time, any given selection predicate can be treated as a join with a virtual relation containing all values that satisfy the predicate. The AGMS sketches of these virtual relations could be computed on-the-fly to estimate that join. This might seem inefficient when virtual relations represent large ranges, but the ad hoc AGMS sketch can be calculated analytically for certain hash functions [45]. Thus, any join with filter conditions is treated as one between join relations and virtual relations altogether. However, treating selection predicates as joins greatly increases estimation variance [15]. Ganguly et al. [19] observed that join cardinality estimation error is largely due to collisions with frequent elements. They proposed skimming [18] the frequent elements from sketches into separate estimators, which involves iterating over the domain of inserted elements. Roy et al. [44] avoid iteration by tracking and preemptively filtering frequent elements. Wang et al. [54] also separately store the elements that are not yet determined to be either frequent or infrequent. Join cardinality estimation with these multifocal methods requires estimating the join between every combination of partitions.
In the bifocal case, we decompose $|A \bowtie B|$ into 4 combinations: the frequent elements of $A$ with the frequent elements of $B$, the frequent elements of $A$ with the infrequent elements of $B$, the infrequent elements of $A$ with the frequent elements of $B$, and the infrequent elements of $A$ with the infrequent elements of $B$. By storing the counts of frequent elements more accurately (e.g., exactly), their collisions can be reduced. However, the number of combinations increases exponentially with the number of joins, which makes it impractical for our workloads. # 6.2 Learned Cardinality Estimators Yang et al. [58] proposed NeuroCard, another data-driven learned cardinality estimator. They trained a deep autoregressive model [21, 51] on (a sample of) a full outer join. Autoregressive models predict the (conditional) probability distribution of an attribute subject to specific values of other attributes, i.e., given by filter conditions. Thus, they model the full outer join's joint probability distribution factorized as a product of conditional probabilities, e.g., $P(X, Y) = P(X \mid Y) P(Y)$. Join cardinality is estimated as the sum of probabilities — each for a valid join tuple $(X, Y)$ — sampled from the full outer join. Kim et al. [30] also use deep autoregressive models [21], but in an ensemble of per-relation models, rather than a full outer join. They perform join cardinality estimation by importance sampling [32] of probabilities from the deep autoregressive models. In an ablation study with FactorJoin, which uses another sampling method [35], they show that importance sampling is generally more effective, regardless of the per-relation model type. Future work may compare sampling to sketching for the estimation of join cardinalities from ensembles. We suggest that composing an ensemble estimate via sampling may enable the use of smaller models, whereas high-dimensional sketches have better accuracy.
Query-driven cardinality estimators [31, 36] train on queries labeled with their ground-truth cardinality. Naturally, the training workload should be representative of the testing workload. Furthermore, estimators may be invalidated by dynamically shifting distributions of the underlying data. Hybrid methods address this by using both data-driven and query-driven training [55] or incorporating the database state as an input [41, 34]. Relevantly, Liu et al. [37] recently proposed a query-aware Sum-Product Network (QSPN) that incorporates query-driven training into the construction of the SPN. Notably, they use an unlabeled query workload to determine the access affinity between attributes — the frequency with which attributes are referenced in the same query together. Instead of just pairwise-independent attributes, those with low access affinity can also have their joint probability distribution factorized by a product node. Although we limit our scope to an orthodox SPN, such variants (e.g., FSPN and QSPN) may improve sketch approximation.
Sketches have shown high accuracy in multi-way join cardinality estimation, a critical problem in cost-based query optimization. Accurately estimating the cardinality of a join operation -- analogous to its computational cost -- allows the optimization of query execution costs in relational database systems. However, although sketches have shown high efficacy in query optimization, they are typically constructed specifically for predefined selections in queries that are assumed to be given a priori, hindering their applicability to new queries. As a more general solution, we propose using Sum-Product Networks to dynamically approximate sketches on-the-fly. Sum-Product Networks can decompose and model multivariate distributions, such as relations, as linear combinations of multiple univariate distributions. By representing these univariate distributions as sketches, Sum-Product Networks can combine them element-wise to efficiently approximate the sketch of any query selection. These approximate sketches can then be applied to join cardinality estimation. In particular, we implement the Fast-AGMS and Bound Sketch methods, which have successfully been used in prior work, despite their costly construction. By accurately approximating them instead, our work provides a practical alternative to apply these sketches to query optimization.
[ "cs.DB", "cs.LG" ]
# 1 Introduction Deep reinforcement learning (RL) has led to remarkable successes in domains ranging from games to robotics, largely by representing policies as highly parametrized neural networks and optimizing them end-to-end [Lillicrap et al., 2019; Schulman et al., 2017]. However, neural policies often struggle to generalize outside the distribution of their training environments, exhibiting brittle behavior when confronted with out-of-distribution (OOD) scenarios. In contrast, a growing literature on programmatic policies, where decision-making rules are expressed in a domain-specific language, claims superior OOD generalization [Verma et al., 2018, 2019; Trivedi et al., 2021; Inala et al., 2020]. We argue that commonly used benchmarks undervalue the generalization power of programmatic representations. Previous work on programmatic policies has observed a substantial gap in OOD generalization between programmatic and neural representations. We revisit these OOD generalization claims and show that, in some cases, the apparent gap between programmatic and neural representations arises not from an inherent limitation of neural representations but from variables that previous evaluations of neural models did not control. Namely, the input observation of the neural agent must be as sparse as the observation the programmatic agent considers. Sparse observations automatically remove distractions, which can improve OOD generalization, especially when combined with simpler models that are easier to train, such as fully connected networks. Moreover, neural policies tend to be more sensitive to the reward function because they optimize it more effectively than programmatic ones. As a result, a policy that is “too specialized” in one setting might perform poorly in OOD problems. As we show in our experiments, simple changes to the reward function can dramatically enhance OOD generalization. 
We demonstrate how some of these ideas improve OOD generalization on benchmark problems commonly used in the literature, including a car racing environment (TORCS) [Verma et al., 2018, 2019], grid-world planning problems (KAREL) [Trivedi et al., 2021], and continuous control with repetitive behavior (PARKING) [Inala et al., 2020]. Given our observation that neural policies can generalize to OOD problems in these benchmarks, we suggest creating problems that showcase the OOD generalization of programmatic representations by requiring learning structures that neural networks fail to master, such as stacks [Joulin and Mikolov, 2015]. As an illustrative example, we suggest a problem that requires the agent to use memory, through a stack or a queue, to solve. Focusing on benchmark problems that require features beyond the reach of neural models will help us better understand where programmatic representations are most needed. This understanding can help us develop novel representations that combine the flexibility of highly parameterized models with the desired properties of symbolic programs, such as sparsity and the usage of complex data structures. The code used to run our experiments is publicly available online. # 2 Problem Definition We consider sequential decision-making problems as Markov decision processes (MDPs) $\mathcal{M} = (S, A, p, r, \mu, \gamma)$. Here, $S$ and $A$ are the sets of states and actions. The function $p : S \times A \to S$ is the transition model, which returns the state $s_{t+1}$ reached once the agent takes action $a_t$ in state $s_t$ at time step $t$. The agent observes a reward value of $R_{t+1} = r(s_t, a_t)$ when transitioning to $s_{t+1}$; such values are given by the reward function $r : S \times A \to \mathbb{R}$. The MDP's initial states are determined by the distribution $\mu$, with states sampled from $\mu$ denoted as $s_0$. 
Finally, $\gamma \in [0, 1]$ is the discount factor. A policy $\pi : S \times A \to [0, 1]$ receives a state $s$ and action $a$ and returns the probability of taking $a$ at $s$. Given a class of policies $\Pi$, the goal is to find a policy $\pi$ within $\Pi$ that maximizes the return: $$ \arg\max_{\pi \in \Pi} \mathbb{E}_{\pi, p, \mu}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{k+1}\right] $$ The class $\Pi$ determines the biases of the policies we consider. For example, $\Pi$ could be an architecture of a neural network, and the policies $\pi$ within this class are given by the different weights we can assign to the connections of the network. We consider classes $\Pi$ determined by a domain-specific language, so the programs written in the language form $\Pi$. A language is defined with a context-free grammar $(\mathcal{N}, \mathcal{T}, \mathcal{R}, \mathcal{I})$, where $\mathcal{N}, \mathcal{T}, \mathcal{R}, \mathcal{I}$ are the sets of non-terminals, terminals, production rules, and the grammar's initial symbol, respectively. Figure 1 (a) shows an example of a context-free grammar encoding a language for TORCS policies. The grammar's initial symbol $\mathcal{I}$ is $E$. It accepts strings such as the one shown in Figure 1 (b), which is obtained through a sequence of production rules applied to the initial symbol: $E \Rightarrow$ if $B$ then $E$ else $E \Rightarrow$ if $B$ and $B$ then $E$ else $E \Rightarrow \cdots$ We empirically compare solutions to Equation 1 when the class $\Pi$ is defined with pre-defined neural network architectures and with domain-specific languages. We call the former neural policies and the latter programmatic policies. We consider the following problem domains in our experiments: TORCS [Verma et al., 2018, 2019], KAREL [Trivedi et al., 2021], and PARKING [Inala et al., 2020]. 
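The objective in Equation 1 is typically estimated from sampled rollouts. The sketch below illustrates this with a toy deterministic MDP; the function names and toy dynamics are ours, for illustration only:

```python
def rollout_return(policy, step, s0, gamma=0.99, horizon=100):
    """Estimate the discounted return of `policy` from state `s0`.

    `policy(s)` returns an action; `step(s, a)` returns (next_state, reward).
    The loop approximates Equation 1's infinite series by truncating at `horizon`.
    """
    ret, discount, s = 0.0, 1.0, s0
    for _ in range(horizon):
        a = policy(s)
        s, r = step(s, a)
        ret += discount * r
        discount *= gamma
    return ret

# Toy MDP with a single state: action 1 always yields reward 1.
step = lambda s, a: (0, 1.0 if a == 1 else 0.0)
policy = lambda s: 1
# With gamma = 0.5 the return approaches the geometric sum 1/(1 - 0.5) = 2.
print(rollout_return(policy, step, s0=0, gamma=0.5))
```

In practice, the expectation over $\pi$, $p$, and $\mu$ is approximated by averaging such returns over many rollouts with sampled initial states.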
# 3 Background: Searching for Programmatic Policies This section describes the algorithms used to synthesize programmatic policies for solving TORCS (Section 3.1), KAREL (Section 3.2), and PARKING (Section 3.3). We aim to provide enough information so the reader understands our results in Section 4; we do not intend to detail the original algorithms. For full method descriptions, see the cited papers in each subsection. # 3.1 Neurally Directed Program Search (NDPS) Verma et al. [2018] introduced Neurally Directed Program Search (NDPS), a method that uses imitation learning through the DAGGER algorithm [Ross et al., 2011] to learn programmatic policies. Figure 1 (a) shows the domain-specific language Verma et al. [2018] considered in their experiments on the TORCS benchmark. The peek function reads the value of a sensor. For example, $\mathbf{peek}(h_{\mathtt{RPM}}, -1)$ reads the latest value (denoted by the parameter $-1$) of the rotation-per-minute sensor $(h_{\mathtt{RPM}})$; $\mathbf{peek}(h_{\mathtt{RPM}}, -2)$ would read the second latest value of the sensor. The $\mathbf{fold}(+, \epsilon - h_i)$ operation adds the difference $\epsilon - h_i$ over a fixed number of past readings of sensor $h_i$.

Figure 1: (a) Domain-specific language for TORCS policies and (b) an example policy. The grammar in (a) is
$$ \begin{array}{rcl} P & := & \mathbf{peek}((\epsilon - h_i), -1) \\ I & := & \mathbf{fold}(+, \epsilon - h_i) \\ D & := & \mathbf{peek}(h_i, -2) - \mathbf{peek}(h_i, -1) \\ C & := & c_1 * P + c_2 * I + c_3 * D \\ B & := & c_0 + c_1 * \mathbf{peek}(h_1, -1) + \cdots + c_m * \mathbf{peek}(h_m, -1) > 0 \ \mid\ B \text{ and } B \ \mid\ B \text{ or } B \\ E & := & C \ \mid\ \mathbf{if}\ B\ \mathbf{then}\ E\ \mathbf{else}\ E \end{array} $$
and the example policy in (b) is: if $(0.001 - \mathbf{peek}(h_{\mathtt{TrackPos}}, -1) > 0)$ and $(0.001 + \mathbf{peek}(h_{\mathtt{TrackPos}}, -1) > 0)$ then $3.97 * \mathbf{peek}((0.44 - h_{\mathtt{RPM}}), -1) + 0.01 * \mathbf{fold}(+, (0.44 - h_{\mathtt{RPM}})) + 48.79 * (\mathbf{peek}(h_{\mathtt{RPM}}, -2) - \mathbf{peek}(h_{\mathtt{RPM}}, -1))$ else $3.97 * \mathbf{peek}((0.40 - h_{\mathtt{RPM}}), -1) + 0.01 * \mathbf{fold}(+, (0.40 - h_{\mathtt{RPM}})) + 48.79 * (\mathbf{peek}(h_{\mathtt{RPM}}, -2) - \mathbf{peek}(h_{\mathtt{RPM}}, -1))$

The non-terminal symbols $P$, $I$, and $D$ in Figure 1 (a) form the operations needed to learn PID controllers, with programs that switch between different PID controllers, as shown in Figure 1 (b). NDPS uses a neural policy as an oracle to guide its synthesis. Given a set of state-action pairs $H$, where the actions are given by the neural oracle, NDPS evaluates a program $\rho$ by computing the action agreement of $\rho$ with the actions in $H$. NDPS runs a brute-force search algorithm [Albarghouthi et al., 2013; Udupa et al., 2013] to generate a set of candidate programs $C$. Then, it learns the parameters of the programs ($c_1$, $c_2$, and $c_3$ in Figure 1) with Bayesian optimization [Snoek et al., 2012] such that the programs mimic $H$. 
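The action-agreement evaluation can be sketched as follows. This is a simplification in our own notation: real TORCS actions are continuous vectors, and NDPS's exact scoring and search details differ from this toy version.

```python
def action_agreement(program, oracle_pairs):
    """Score a candidate program by how closely its actions match the
    neural oracle's actions on visited states. `oracle_pairs` is a list
    of (state, oracle_action) pairs; higher (less negative) is better."""
    total = sum(abs(program(s) - a) for s, a in oracle_pairs)
    return -total / len(oracle_pairs)

def best_candidate(candidates, oracle_pairs):
    """Among enumerated candidate programs, pick the one that best
    imitates the oracle, as in NDPS's candidate-selection step."""
    return max(candidates, key=lambda p: action_agreement(p, oracle_pairs))

# Illustrative oracle data H and two candidate "programs".
H = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
good = lambda s: 2.0 * s   # matches the oracle exactly
bad = lambda s: s + 1.0    # systematically off
print(best_candidate([bad, good], H) is good)  # True
```

In NDPS, the parameters of each candidate are first tuned (by Bayesian optimization) before this comparison, and the winner seeds a local search that mixes agreement with the agent's return.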
Once NDPS determines the parameters of programs $C$ , it selects the candidate $c$ in $C$ that maximizes the agent’s return; $c$ is the starting point of a local search that optimizes a mixture of the action agreement function and the agent’s return. Verma et al. [2019] introduced Imitation-Projected Programmatic Reinforcement Learning (PROPEL), an algorithm that also synthesizes a program for solving control problems. PROPEL is similar to NDPS in that it relies on a neural policy to guide its search through the space of programs. The difference between PROPEL and NDPS is that the neural policy of the former is trained so that it does not become “too different” from what the programmatic learner can express—the inability to represent the teacher’s policy is known as the representation gap in the literature [Qiu and Zhu, 2021]. The programmatic policies of both NDPS and PROPEL are called for every state the agent encounters. # 3.2 Learning Embeddings for Latent Program Synthesis (LEAPS) Trivedi et al. [2021] introduced Learning Embeddings for Latent Program Synthesis (LEAPS), a system that learns a latent representation of the space of programs a language induces. When given an MDP $\mathcal { M }$ , LEAPS searches in the learned latent space for a vector decoded into a program encoding a policy that maximizes the agent’s return at $\mathcal { M }$ . LEAPS’s premise is that searching in the learned latent space is easier than searching in the space of programs, as NDPS and PROPEL do. Figure 2 (a) shows the context-free grammar specifying the language used to encode policies for KAREL. The language accepts programs with conditionals and loops. It also includes a set of perception functions, such as frontIsClear, which verifies whether the cell in front of the agent is clear. Further included are action instructions such as move and turnLeft. The set of perception functions is important because it defines what the agent can observe. 
As we show in Section 4.2, having access to less information allows the agent to generalize to OOD problems. Figure 2 (b) shows an example of a KAREL program. Here, the agent performs two actions, pickMarker and move, if a marker is present in its current location; otherwise it performs no action.

Figure 2: (a) Domain-specific language for KAREL policies and (b) an example policy. The grammar in (a) is
$\rho :=$ def run m( $s$ m)
$s :=$ while c( $b$ c) w( $s$ w) $\mid$ if c( $b$ c) i( $s$ i) $\mid$ ifelse c( $b$ c) i( $s$ i) else e( $s$ e) $\mid$ repeat R=$n$ r( $s$ r) $\mid$ $s$; $s$ $\mid$ $a$
$b := h \mid$ not $(h)$
$h :=$ frontIsClear $\mid$ leftIsClear $\mid$ rightIsClear $\mid$ markersPresent $\mid$ noMarkersPresent
$a :=$ move $\mid$ turnLeft $\mid$ turnRight $\mid$ putMarker $\mid$ pickMarker
$n := 0, 1, \cdots, 19$
and the example policy in (b) is: def run m( if c( markersPresent c) i( pickMarker move i) m)

To learn its latent space, LEAPS generates a data set of programs $P$ by sampling from a probabilistic version of the context-free grammar defining the domain-specific language. That is, each production of a non-terminal can be selected with a given probability. A program can be sampled from this probabilistic grammar by starting at the initial symbol and randomly applying production rules until we obtain a program with only terminal symbols. This set of programs is used to train a Variational Auto-Encoder (VAE) [Kingma and Welling, 2014] with its usual reconstruction loss. However, in addition to learning spaces that are friendlier to search algorithms, LEAPS uses two additional losses that attempt to capture the semantics of the programs. These two losses incentivize latent vectors that decode into programs with similar agent behavior to be near each other in the latent space. The intuition is that this behavior locality can render optimization landscapes easier to search. 
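The grammar-sampling step described above can be sketched as follows, using a tiny hypothetical grammar rather than LEAPS's actual KAREL grammar or production probabilities:

```python
import random

# Non-terminals map to (production, probability) pairs; any symbol not in
# the table is a terminal. The probabilities here are illustrative and
# biased toward terminals so that sampled programs stay small.
GRAMMAR = {
    "s": [(["a"], 0.7), (["if", "h", "s"], 0.15), (["s", ";", "s"], 0.15)],
    "a": [(["move"], 0.5), (["turnLeft"], 0.5)],
    "h": [(["frontIsClear"], 1.0)],
}

def sample_program(symbol, rng):
    """Expand `symbol` by randomly applying production rules until only
    terminal symbols remain, as in LEAPS's data-generation step."""
    if symbol not in GRAMMAR:  # terminal symbol
        return [symbol]
    productions, weights = zip(*GRAMMAR[symbol])
    chosen = rng.choices(productions, weights=weights)[0]
    return [tok for sym in chosen for tok in sample_program(sym, rng)]

print(" ".join(sample_program("s", random.Random(0))))
```

Because the recursive productions are chosen with low total probability, sampled programs terminate with probability one and tend to be short, which keeps the training set of programs manageable.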
Once the latent space is trained, it is used to solve MDPs. Given an MDP, LEAPS uses the Cross-Entropy Method (CEM) [Mannor et al., 2003] to search for a vector that decodes into a program that maximizes the return. The rollouts of the decoded policies are used to inform the CEM search. # 3.3 Programmatic State Machine Policies (PSM) Inala et al. [2020] introduced Programmatic State Machine Policies, which we refer to as PSM, a system that learns a policy as a finite-state machine. A finite-state machine policy for an MDP $\mathcal{M}$ is a tuple $(M, S, A, \delta, m_0, F, \alpha)$, where $M$ is a finite set of modes. The sets $S$ and $A$ are the sets of states and actions from $\mathcal{M}$. The function $\delta : M \times S \to M$ is the transition function, $m_0$ in $M$ is the initial mode, and $F \subseteq M$ is the set of modes in which the policy terminates. The transition function $\delta$ defines the next mode given the current mode and an input state $s$ in $S$. Finally, $\alpha : M \times S \to A$ determines the policy's action when the policy is in mode $m$ and the agent observes state $s$. In the PARKING environment, Inala et al. [2020] considered a domain-specific language for the transition function $\delta$ and constant values for $\alpha$. The grammar defining the language of $\delta$ is the following. $$ B \ := \ \{s[i] \geq v\}_{i=1}^{n} \ \mid\ \{s[i] \leq v\}_{i=1}^{n} \ \mid\ B \land B \ \mid\ B \lor B $$ Here, the values $v$ are constants that need to be learned, $s[i]$ is the $i$-th entry of the state $s$ the agent observes at a given time step, and $n$ is the dimensionality of the observation. Figure 3 shows an example of the type of policy PSM learns. 
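Executing such a finite-state machine policy can be sketched as follows. This is an illustrative skeleton in our own notation: a list of guarded transitions stands in for $\delta$ and a table of constant actions for $\alpha$, matching the PARKING instantiation where $\alpha$ returns constants.

```python
def run_fsm_policy(delta, alpha, m0, accepting, env_step, s0, max_steps=1000):
    """Run a finite-state machine policy: in mode m, emit the constant
    action alpha[m], advance the environment, then follow the first
    transition whose guard fires (staying in the current mode otherwise)."""
    mode, state = m0, s0
    for _ in range(max_steps):
        if mode in accepting:
            break
        state = env_step(state, alpha[mode])
        for src, guard, dst in delta:
            if src == mode and guard(state):
                mode = dst
                break
    return mode, state

# Toy 1-D task: keep moving (+1) until the position reaches 3, then terminate.
alpha = {"go": +1, "stop": 0}
delta = [("go", lambda s: s >= 3, "stop")]
mode, final = run_fsm_policy(delta, alpha, "go", {"stop"}, lambda s, a: s + a, s0=0)
print(mode, final)  # stop 3
```

The learning problem PSM solves is then to find the guards (Boolean expressions in the grammar for $B$) and the constant actions that maximize the return of this execution loop.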
In this example, the policy is for PARKING, a domain where the agent must learn how to exit a parking spot with a car in front of the agent's car $(\mathrm{car}_f)$ and another at the rear $(\mathrm{car}_b)$. The policy uses the following state features: the distance between the agent's car and $\mathrm{car}_f$ $(d_f)$ and $\mathrm{car}_b$ $(d_b)$, the $x$ coordinate of the car, and the angle $\theta$ of the car. A solution involves the agent moving forward to the left (mode $m_1$) and then back to the right (mode $m_2$), until the agent has cleared $\mathrm{car}_f$ (transitioning to mode $m_3$). The agent solves the problem if it straightens the car after clearing $\mathrm{car}_f$, thus transitioning from $m_3$ to $m_f$. PSM's policies are called only once, for the initial state; the policy returns only at the end of the episode. PSM learns policies with a teacher-student scheme, where the student is a finite-state machine encoding the policy. The teacher is a loop-free learner that finds state-action pair sequences that optimize two objectives. Specifically, they maximize the agent's return and minimize how much they deviate from the student's policy. Optimizing for the latter avoids sequences that cannot be encoded in a finite-state machine. After optimizing the teacher, the student updates its policy based on the teacher's sequence, through a clustering scheme on that sequence. The Boolean expressions denoting transitions between modes are found through discrete search. The process is then repeated, with the teacher's sequence-based policy optimized again. Figure 3: Example of a state machine policy, where $m_0$ is the initial mode and $m_f$ is an accepting mode. 
The tuples inside each mode specify the agent's action when in that mode (e.g., $(F, L)$ means “move forward and steer to the left”). The transitions from one mode to another are triggered by the Boolean expressions shown on the arrows. For example, if the car is too close to the car in front of it $(d_f \leq 0.30)$, then the policy moves from $m_1$ to $m_2$. The agent remains in the current mode if no outgoing Boolean expression is triggered. This policy is based on an example by Inala et al. [2020]. Table 1: For DRL ($\beta = 0.5$), we trained 30 models (seeds) for G-TRACK-1 and 15 for AALBORG. Each cell shows the average lap time (mm:ss) over three laps per model, then averaged across models; 13 models learned to complete G-TRACK-1 and four models learned to complete AALBORG. Values in parentheses for DRL ($\beta = 0.5$) show the fraction of seeds that successfully generalized to the test track (out of 13 and 4 for G-TRACK-1 and AALBORG, respectively). For NDPS and DRL ($\beta = 1.0$), we used the data from [Verma et al., 2018], which are over three models. “CR” indicates that all three models crashed, and the number reported is the average distance at which the agent crashed the car. # 4 Experiments In this section, we revisit the experiments of Verma et al. [2018] and Verma et al. [2019] on TORCS (Section 4.1 and Appendix A), of Trivedi et al. [2021] on KAREL (Section 4.2 and Appendix B), and of Inala et al. [2020] on PARKING (Section 4.3 and Appendix C). # 4.1 TORCS Verma et al. [2018] and Verma et al. [2019] showed that programmatic policies written in the language from Figure 1 generalize better to OOD problems than neural policies on race tracks of the Open Racing Car Simulator (TORCS) [Wymann et al., 2000]. The results of Verma et al. 
[2018] also showed that neural policies better optimize the agent's return than programmatic policies, as the former complete laps more quickly than the latter on the tracks on which they are trained. We hypothesized that the programmatic policies generalize better not because of their representation, but because the car moves more slowly, making it easier to generalize to tracks with sharper turns. We test our hypothesis by training models with two different reward functions: the original function used in previous experiments ($\beta = 1.0$ in Equation 2), which we refer to as “original”, and a function that makes the agent more cautious about speeding ($\beta = 0.5$), which we refer to as “cautious”. $$ \beta \times V_x \cos(\theta) - |V_x \sin(\theta)| - V_x |d_l| $$ Here, $V_x$ is the speed of the car along its longitudinal axis, $\theta$ is the angle between the direction of the car and the direction of the track axis, and $d_l$ is the car's lateral distance from the center of the track. The first term of the reward measures the velocity along the central line of the track, while the second is the velocity moving away from the central line. Maximizing the first term minus the second allows the agent to move fast without deviating from the central line. The last term also contributes to having the agent follow the center of the track. Once we set $\beta = 0.5$, the agent learns policies where the car moves more slowly, which allows us to test our hypothesis.

Table 2: Generalization results on KAREL, where cells show the average return and standard deviation. “PPO with ConvNet” observes the entire state and employs a convolutional network to learn its representation. “PPO with LSTM” uses an LSTM layer for both actor and critic, while “PPO with $a_{t-1}$” uses a fully connected network with the observation space augmented with the agent's last action. “Small” refers to the problems on which the models were trained, which were of size either $8 \times 8$ or $12 \times 12$. Rows marked with a $\dagger$ are from Trivedi et al. [2021]. The results for PPO with $a_{t-1}$ are over 30 seeds, and each seed is evaluated on 10 different initial states; the results for LEAPS and for PPO with a ConvNet and with an LSTM are over five seeds and 10 different initial states.

Following Verma et al. [2018], we use the Deep Deterministic Policy Gradient (DDPG) algorithm [Lillicrap et al., 2019] and TORCS's practice mode, which includes 29 sensors as the observation space and the actions of accelerating and steering. We considered two tracks for training the agent: G-TRACK-1 and AALBORG. The first is considered easier than the second based on the track's number of turns, length, and width. The models trained on G-TRACK-1 were tested on G-TRACK-2 and E-ROAD, while the models trained on AALBORG were tested on ALPINE-2 and RUUDSKOGEN. Table 1 presents the results. NDPS generalizes to the test problems in all three seeds evaluated. DRL with $\beta = 1.0$ does not generalize to the test tracks, with the numbers in the table showing the average distance at which the agent crashes the car in all three seeds. For DRL ($\beta = 0.5$), we trained 30 models (seeds) for G-TRACK-1 and 15 for AALBORG. We then verified that 13 of the 30 models learned how to complete laps of the G-TRACK-1 track, and 4 of the 15 models learned to complete laps of the AALBORG track; these models were evaluated on the OOD tracks. The results support our hypothesis that changing the reward function allows the agent to generalize. On the training tracks, the lap time increases as we reduce $\beta$. Most models trained with $\beta = 0.5$ generalize from G-TRACK-1 to the G-TRACK-2 ($76\%$ of the models) and E-ROAD ($69\%$) tracks; all models that learned to complete a lap on AALBORG generalized to the other two tracks. # 4.2 KAREL Trivedi et al. [2021] showed that programs LEAPS synthesized in the language shown in Figure 2 (a) generalized better than deep reinforcement learning baselines to problem sizes much larger than those the agent encountered during training. In our experiments, we consider the fully observable version of KAREL, where the agent has access to the entire grid, and the partially observable version, where the agent can only perceive the cells around it, as shown by the non-terminal $h$ in Figure 2 (a). In the partially observable case, the problem cannot, in principle, be solved with fully connected neural networks. Consider the two states shown in Figure 4: in one, the agent is going downstairs; in the other, it is going upstairs. Yet, the observation is the same for both states. Trivedi et al. [2021] used LSTMs [Hochreiter and Schmidhuber, 1997] to deal with the partial observability problem. Instead of using LSTMs, which tend to be more complex to train than fully connected networks, we add the last action the agent has taken as part of the observation. For the fully observable case, we report the results of Trivedi et al. [2021], which used a convolutional network on the input. Figure 4: Different states but the same observation. Table 3: Evaluation of 30 seeds of PSM and 15 seeds of DQN on the PARKING domain. Each trained model was evaluated on 100 different initial states of both the training and testing settings. The columns “Successful-on-100” report the fraction of trained models that successfully solved all 100 initial states. The columns “Success Rate” report the average number of initial states solved across different seeds. We trained policies for the following problems, which were chosen to match the design of Trivedi et al. 
[2021]: STAIRCLIMBER, MAZE, TOPOFF, FOURCORNER, and HARVESTER. The grid size of these problems was either $8 \times 8$ or $12 \times 12$. After learning to solve these small problems, the models were evaluated on grids of size $100 \times 100$, also following Trivedi et al. [2021]. In the MAZE problem, the agent learns to escape a small maze and is evaluated on a larger one. Table 2 shows the results. Our results show that partial observability combined with a simpler model can generalize to larger grids. Namely, “PPO with $a_{t-1}$”, which uses a fully connected network with the observation augmented with the agent's last action, generalizes to larger problems. This contrasts with “PPO with ConvNet”, which operates in the fully observable setting, and “PPO with LSTM”, which operates in the partially observable setting but uses a more complex neural model. To illustrate, in MAZE, if the agent can only see the cells around itself, it can learn strategies such as “follow the right wall”, which is challenging to learn in the fully observable setting. The LSTM agent not only fails to generalize to larger problems, but often also fails to learn how to solve even the smaller problems. # 4.3 PARKING In the PARKING domain, an agent must get out of a parking spot. During training, the distance between the two parked cars is sampled uniformly from the range [12.0, 13.5]. In contrast, the test environment uses a narrower and more challenging range of [11.0, 12.0], requiring the agent to generalize to tighter parking scenarios. We evaluate both programmatic policies, as described by Inala et al. [2020], and neural policies trained using Deep Q-Networks (DQN) [Mnih et al., 2015]. Preliminary experiments showed that DQN performed better than the PPO and DDPG algorithms considered in our other experiments. 
We trained 30 independently seeded models of PSM and 15 of DQN, and evaluated each one on 100 test episodes, where the test gap was sampled uniformly from the range [11.0, 12.0]. Table 3 shows the results. The columns “Successful-on-100” report the ratio of models that could solve all 100 initial states. For example, 0.06 for PSM means that two of the 30 models solved all initial states on both training and test. The “Success Rate” columns show the ratio of times, across all models and initial states, that the learned policy could solve the problem. For example, 0.86 for DQN in training means that the DQN models solved $86\%$ of the $15 \times 100 = 1500$ initial states. Our results suggest that the PSM policies generalize better than the DQN policies, as two out of 30 models could solve all 100 test initial states. Looking at the difference between the “Success Rate” of PSM and DQN in training and test also suggests that PSM's policies generalize better, as the gap between the two scenarios is small for PSM: $0.26 - 0.16 = 0.10$ versus $0.86 - 0.18 = 0.68$ for DQN. However, looking at the test “Success Rate” alone suggests that DQN is the winner, as DQN policies can solve more test initial states on average than PSM can. Independent of the metric considered, our results show that PARKING is a challenging domain for both types of representation. # 4.4 Discussion Our experiments showed that neural models can also generalize to OOD problems commonly used in the literature. One key aspect of programmatic solutions is the policy's sparsity. For example, the mode transitions in Figure 3 use a single variable in each Boolean expression. By contrast, neural networks typically use all available variables when defining such transitions, often picking up spurious correlations between input features and the agent's action. 
That is why providing fewer input features, combined with a simpler neural model, helped with generalization in KAREL: we remove features that could generate spurious correlations with the model's actions. These results on reducing input features to enhance generalization align with other studies on removing visual distractions that could hamper generalization [Bertoin et al., 2022; Grooten et al., 2024]. In the case of TORCS, OOD generalization was possible due to a “safer” reward function. If the agent learns on a track that allows it to move fast and never slow down, then it is unlikely to generalize to race tracks with sharp turns that require the agent to slow down. In this case, generalization, or the lack thereof, is not caused by the representation, but by how well the agent can optimize its return while using that representation. We conjecture that NDPS and PROPEL would not generalize to OOD problems if they could find better-optimized policies for the agent's return on the training tracks. PARKING was the most challenging benchmark we considered in our experiments, and we believe it points in the direction of benchmarks that could value the generalization power of programmatic representations. Recurrent neural networks such as LSTMs can, in principle, represent the solution shown in Figure 3. In fact, because the agent-environment interaction itself forms a loop, the solution to PARKING does not even require loops in the policy. If we augment the agent's observation with its last action, a decision tree could encode the repetitive behavior needed to solve the problem. Yet, we could not find a neural policy that reliably generalizes to OOD problems in this domain. By reliably we mean that if the agent learns how to solve the training setting, it automatically generalizes to the test setting. # 4.5 Beyond Generalization Our analysis has focused on generalization to OOD problems. 
However, there are other important dimensions to consider when evaluating programmatic representations. The most common are interpretability and verifiability [Bastani et al., 2018], as one can choose a language that results in programs that are easier for us to understand and verify. Intuitively, the policies of NDPS, LEAPS, and PSM tend to be more interpretable than the neural policies we learned in our experiments. Another important dimension is sample efficiency. A programming language's inductive bias can make the problem easier to solve. For example, we could add, to the language used to define the Boolean expressions of PSM's policies, an expression that verifies whether the agent is close to an object. Such an expression could be reused and potentially make the approach more sample-efficient. The idea of composing a solution from existing programs underlies library-learning approaches [Ellis et al., 2023; Cao et al., 2023; Bowers et al., 2023; Rahman et al., 2024; Palmarini et al., 2024]. Programmatic solutions tend to be more composable than neural ones, although recent work has investigated the decomposition of neural networks into reusable pieces [Alikhasi and Lelis, 2024]. # 5 Valuing the Generalization Power of Programmatic Policies If commonly used problems undervalue the generalization power of programmatic policies, what properties of problems could showcase how programmatic policies generalize? We propose an illustrative benchmark problem that requires computations that neural networks struggle to learn from data. Although recurrent models are, in theory, computationally universal [Siegelmann and Sontag, 1994, 1995], they are more limited in practice [Weiss et al., 2018]. We consider a problem that requires a stack or a queue, which neural models can struggle with [Joulin and Mikolov, 2015]. We consider finding shortest paths on a grid. Suppose the agent can only sense the cells around itself, as in the KAREL problem. 
If the environment is not dense in walls, so the agent cannot rely on simple strategies such as “follow the right wall”, it needs to remember the cells it has visited to find the shortest path from its initial location to a goal location. Iterative-Deepening Depth-First Search (IDDFS) uses a stack to solve shortest-path problems. Dijkstra’s algorithm [Dijkstra, 1959] could also be used, but it requires that the agent “jumps around” the state space, as states far from each other can be expanded from one time step to the next based on the priority of the algorithm’s queue. The maze environment from the KAREL benchmark is similar to the problem we consider, which we call SPARSEMAZE; see Figure 5. What makes the KAREL mazes easier than what we propose is that the agent always has a wall as a reference, favoring strategies such as “follow the right wall” that do not require memory use. If the map is sparse, as in Figure 5 (right), finding any solution, let alone finding the shortest one, becomes challenging due to the model’s inability to learn stacks and queues. Table 4: Results on SPARSEMAZE. The return is an average over 10 initial states. Results for PPO are averaged over 30 seeds; results for FUNSEARCH are a single run of the system. Table 4 presents the generalization results of neural and programmatic policies in SPARSEMAZE. As the neural policy, we considered PPO with a GRU [Chung et al., 2014]. As the programmatic policy, we used FUNSEARCH [Romera-Paredes et al., 2023] with Qwen 2.5-Coder (32B) [Bai et al., 2023] to synthesize Python programs encoding policies for SPARSEMAZE. We use the return of a rollout of the policies as the evaluation function in FUNSEARCH. Appendix D provides the training information of the neural policies and the prompt used in FUNSEARCH.
Each approach was trained on maps of size $2 0 \times 2 0$ (“Original” in Table 4) and evaluated on maps of size $1 0 0 \times 1 0 0$. While PPO could not learn a good policy even for the smaller map, FUNSEARCH synthesized the breadth-first search (BFS) algorithm after 21 iterations of evolution, which generalizes to maps of any size (see Appendix D for FUNSEARCH’s policy). Similarly to Dijkstra’s algorithm, BFS also uses a queue and thus assumes that the agent can “jump around” the state space. Nevertheless, this proof-of-concept experiment shows an example where programmatic representations can generalize to OOD problems, while neural policies are unlikely to generalize.
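FUNSEARCH’s actual synthesized policy is given in Appendix D; as an illustrative sketch only (not the synthesized program itself), the core of a BFS shortest-path computation on a grid looks as follows, with the FIFO queue being exactly the construct that the neural policies failed to emulate:

```python
from collections import deque

def bfs_shortest_path(grid, start, goal):
    # Illustrative BFS sketch: grid[r][c] == 1 marks a wall, 0 is free.
    # The first time the goal is dequeued, the associated path is shortest,
    # because BFS expands states in nondecreasing order of distance.
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable
```

Note that, as discussed above for Dijkstra’s algorithm, the queue lets the search “jump around” the state space rather than follow the agent’s physical trajectory.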
Algorithms for learning programmatic representations for sequential decision-making problems are often evaluated on out-of-distribution (OOD) problems, with the common conclusion that programmatic policies generalize better than neural policies on OOD problems. In this position paper, we argue that commonly used benchmarks undervalue the generalization capabilities of programmatic representations. We analyze the experiments of four papers from the literature and show that neural policies, which were shown not to generalize, can generalize as effectively as programmatic policies on OOD problems. This is achieved with simple changes in the training pipeline of the neural policies. Namely, we show that simpler neural architectures with the same type of sparse observation used with programmatic policies can help attain OOD generalization. Another modification we have shown to be effective is the use of reward functions that allow for safer policies (e.g., agents that drive slowly can generalize better). Also, we argue for creating benchmark problems that highlight concepts needed for OOD generalization which may challenge neural policies but align with programmatic representations, such as tasks requiring algorithmic constructs like stacks.
# 1. Introduction Dense retrieval retrieves documents by evaluating their similarity scores with user queries (Mitra et al., 2018; Gao & Callan, 2021; Zhao et al., 2024c). It underpins many systems, in particular, retrieval-augmented generation (RAG) frameworks (Karpukhin et al., 2020), where retrieval accuracy is paramount. Multi-Vector Retrieval (MVR) enhances retrieval accuracy by leveraging multiple representations for finer-grained matching (Khattab & Zaharia, 2020). MVR methods, e.g., ColBERT (Khattab & Zaharia, 2020), decompose queries and documents into smaller units, say tokens. For each query token, we identify the most similar document piece to it and calculate their similarity, which is referred to as the MaxSim operation in (Khattab & Zaharia, 2020). Such scores are then aggregated across all query tokens as the overall query-document similarity. Compared to standard dense retrieval solutions, this strategy more effectively captures the fine-grained similarities between queries and documents, enhancing performance in information retrieval (IR) tasks (Khattab & Zaharia, 2020; Santhanam et al., 2022b) and retrieval-based systems like RAG (Xu et al., 2024). In this paper, we aim to develop a new MVR strategy to enhance the performance of arbitrary retrieval-based systems, with a particular focus on RAG systems. Note that traditional MVR approaches, in particular, ColBERT (Khattab & Zaharia, 2020), decompose queries at the token level. However, as revealed in Section 2, decomposing queries into slightly more coarse-grained units, such as phrases, can yield better results for tasks like retrieval-augmented generation (RAG). Furthermore, we observe that the performance of these tasks highly depends on how we decompose queries. 
Considering that the space of all possible decomposed sub-queries is exponentially large, this raises one critical question: how can we effectively generate sub-queries of arbitrary granularity to optimize the performance of downstream retrieval-based systems? Query decomposition has been widely studied in question answering (QA), especially in multi-hop QA. It aims to break down complicated questions into simpler components, allowing Large Language Models (LLMs) to reason step by step, thereby enhancing QA accuracy. Various question decomposition strategies exist, such as (Li et al., 2024), which prompts LLMs with manually crafted prompts for query decomposition. However, as shown in Figure 1, applying the resulting sub-queries to MVR could retrieve an incorrect image, ultimately generating a wrong answer in RAG-based QA tasks.

Figure 1. Motivating example from the ManyModalQA dataset (Hannan et al., 2020). We aim to answer the question “Victoria Hong Kong has many what type of buildings?” using retrieval-augmented generation (RAG). To enhance the retrieval accuracy and thus ensure answer correctness, we employ Multi-Vector Retrieval (MVR), which decomposes the query into sub-queries and embeds them. MaxSim operations (as defined in (Khattab & Zaharia, 2020)) are then applied to compute similarity scores for retrieval. Traditional query decomposition strategies, which are primarily based on heuristics, such as decomposing by tokens (Khattab & Zaharia, 2020) or by prompting LLMs with manually crafted prompts (Li et al., 2024), often retrieve irrelevant images, thus resulting in incorrect answers. In contrast, we optimize LLM prompts to generate more effective sub-queries, improving QA accuracy. This approach enables MVR to retrieve images of Victoria Harbour with skyscrapers, leading to the correct answer: “Skyscrapers”.
To address this issue, it would be ideal to train a model for searching the decomposed sub-queries that can optimize the downstream performance. However, two critical technical challenges arise. First, the search process is non-differentiable, as sub-queries cannot propagate gradients from the downstream performance score. Second, evaluating candidate sub-queries requires training downstream RAG models, which is computationally expensive. To tackle these two challenges, we propose the Performance-Oriented Query Decomposer (POQD), a novel performance-driven query decomposition framework. To address the non-differentiability issue, we first prompt one LLM to generate decomposed sub-queries for one input query, which can be iteratively refined by an LLM-based optimizer (Yang et al., 2024) to enhance downstream performance. But evaluating a candidate prompt $p$ requires training the downstream model, $\Theta$ , with its induced sub-queries. Hence, we propose a training algorithm that alternately refines the prompt $p$ while training the model $\Theta$ for only a few epochs at a time. Our theoretical analysis confirms the effectiveness of this approach with appropriate hyper-parameter configurations. Note that such a performance optimization process is conducted in a weakly supervised manner, since the downstream RAG performance rather than the intermediate retrieval performance is optimized. This strategy is effective even in applications such as multi-hop QA (Yang et al., 2018), in which the queries are dynamically generated during the reasoning process. We further perform extensive empirical studies to evaluate the effectiveness of POQD on a variety of RAG-based QA tasks, covering both image and text QA tasks. The empirical studies suggest that POQD can outperform the state-of-the-art in both retrieval and QA accuracy by a large margin. Our contributions can be summarized as follows: 1.
We introduce POQD, a novel framework that performs query decomposition to optimize multi-vector retrieval performance. 2. We design a training algorithm, which alternates between training the downstream RAG models and refining the prompt used for query decomposition. Theoretical analysis demonstrates the effectiveness of this training algorithm with appropriate hyper-parameter configurations. 3. We perform extensive experiments on RAG-based QA tasks, covering both image QA and text QA, which suggest that POQD can outperform the state-of-the-art in both retrieval and QA accuracy by a large margin. # 2. Motivation We conduct an in-depth analysis of the motivating example shown in Figure 1 to further motivate our method. # 2.1. Why does ColBERT fail? To understand why ColBERT fails in the example shown in Figure 1, we perform MVR with a mini-query “Hong Kong”. For this query, ColBERT tokenizes it into two individual tokens, “Hong” and “Kong”. For each image, we then perform the MaxSim operation between these two tokens and the fine-grained image patches, i.e., determine the similarity score between one token and its most similar image patch. Such similarity scores are subsequently aggregated across all tokens to obtain the overall query-document similarity score. As depicted in Figure 2, a surprising result emerges: despite its visual irrelevance to Hong Kong, the photo of Lee Kuan Yew, the 1st Prime Minister of Singapore, exhibits higher similarity to the mini-query “Hong Kong” than the ground-truth image. A deeper investigation suggests that the token “kong” could refer to a black gorilla-like monster; it thus yields an unrealistically high similarity to the image patch highlighted with a green bounding box, since both this patch and the figure of “Kong” are mostly black. This coincidence thus leads to a higher ranking of Lee Kuan Yew’s image than the ground truth.
In contrast, by treating “Hong Kong” as a unified phrase and evaluating its similarity to each image, the ground-truth image achieves a higher similarity score than other images. This example thus underscores the necessity of decomposing queries at a slightly coarser-grained level, rather than at the token level, for MVR. Figure 2. Further analysis on the motivating example: the token “kong” is relevant to the photo of a black gorilla-like monster, which is mostly black. Coincidentally, in the photo of Lee Kuan Yew, the patch identified as the most relevant to the token “kong” is also mostly black. # 2.2. Why is it essential to optimize query decomposition? As mentioned in Figure 1, we can manually craft prompts for LLMs so that they can generate decomposed sub-queries (Li et al., 2024). These sub-queries include critical phrases such as “Victoria Hong Kong” and “building”. However, performing MVR with these sub-queries still incorrectly retrieves a less relevant image (see the second row of Figure 1). In comparison to the optimal sub-queries discovered by our solution, this method generates one extra sub-query, “type”. This extra sub-query is less informative than the other two sub-queries; thus, incorporating it into MVR may lead to inaccurate similarity scoring. Therefore, optimizing the query decomposition process by eliminating non-essential sub-queries, such as “type”, is crucial for improving retrieval accuracy. Hence, the problem to be addressed is formally defined as follows. Problem definition Given one query $Q = \{ c _ { 1 } , c _ { 2 } , \ldots , c _ { m } \}$ composed of $m$ tokens, we aim to decompose it into $K$ sub-queries $\{ q _ { i } \} _ { i = 1 } ^ { K }$ , in which each $q _ { i }$ is composed of tokens from $\{ c _ { 1 } , c _ { 2 } , \ldots , c _ { m } \}$ . The goal of performing this query decomposition is to maximize the performance of downstream retrieval-based systems. # 3.
Preliminary # 3.1. Multi-vector retrieval To evaluate the similarity score between a query $Q$ and a document or image $D$ , multi-vector retrieval first decomposes $Q$ and $D$ into fine-grained pieces, denoted by $\{ q _ { i } \} _ { i = 1 } ^ { K }$ and $\{ d _ { j } \} _ { j = 1 } ^ { m }$ , applies the MaxSim operation to identify the most similar $d _ { j }$ to each $q _ { i }$ , and then aggregates these similarity scores across all $q _ { i }$ with the following formula: $$ \operatorname { S I M } _ { \boldsymbol { \theta } } ( Q , D ) = \frac { 1 } { K } \sum _ { i = 1 } ^ { K } \operatorname* { m a x } _ { 1 \leq j \leq m } E _ { \boldsymbol { \theta } } ( q _ { i } ) ^ { \top } E _ { \boldsymbol { \theta } } ( d _ { j } ) . $$ As mentioned in Section 2, we primarily study how to optimize the query decomposition process. But how to decompose documents or images may also matter. Therefore, to ensure a fair comparison between different query decomposition strategies, we decompose documents or images in the same way across all baseline methods and POQD. One exception is ColBERT for text retrieval, which is configured to decompose documents into tokens. # 3.2. Retrieval-Augmented Generation (RAG) As introduced in Section 1, we primarily study the effectiveness of POQD on RAG tasks. For this task, we aim to optimize the following objective function: $$ \mathcal { L } ( \boldsymbol { \Theta } ) = - \log ( \sum _ { D \in D _ { K } } P _ { \boldsymbol { \theta } } ( \boldsymbol { a } | \boldsymbol { Q } , D ) P _ { \beta } ( D | \boldsymbol { Q } ) ) , $$ in which $\Theta = ( \theta , \beta )$ and $D _ { K }$ represents the set of Top- $K$ most relevant documents to a query $Q$ according to the similarity score defined in Equation (1). Additionally, $P _ { \theta }$ represents the likelihood of the ground-truth answer $a$ , which is parameterized by the generator model parameter $\theta$ .
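Equation (1) reduces to a few lines of code; the sketch below assumes the embeddings $E _ { \theta } ( q _ { i } )$ and $E _ { \theta } ( d _ { j } )$ have already been computed and are passed in as plain vectors:

```python
def maxsim_score(query_vecs, doc_vecs):
    # Eq. (1): for each query piece, take the max dot product with any
    # document piece (MaxSim), then average over the K query pieces.
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return sum(max(dot(q, d) for d in doc_vecs)
               for q in query_vecs) / len(query_vecs)
```

For instance, with query pieces `[[1, 0], [0, 1]]` and document pieces `[[1, 0], [0, 0.5]]`, the per-piece maxima are 1.0 and 0.5, giving a score of 0.75.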
Note that this objective function relies on the similarity function defined in Equation (1) to determine the Top-K most relevant documents, and is thus implicitly dependent on how queries are decomposed. Also, we follow prior studies such as (Barnett et al., 2024) and only train $\theta$ while keeping the retrieval model, $\beta$ , in RAG systems fixed to ensure training efficiency. This is because updating retrieval models usually requires rebuilding indexes and re-embedding the corpus, which could be highly time-consuming. # 4. Methodology This section starts with the framework overview in Section 4.1, which is followed by illustrating how to generate optimal sub-queries with POQD in Section 4.2 and describing our end-to-end training algorithm in Section 4.3. We conclude this section with a theoretical analysis of the training algorithm in Section 4.4. # 4.1. Framework overview Given a query $Q$ , we aim to perform query decomposition by prompting one LLM (referred to as the Query Decomposer) with a prompt $p$ . The resulting sub-queries are then employed to perform multi-vector retrieval. Therefore, the quality of the sub-queries produced by the Query Decomposer highly depends on the prompt $p$ . In light of this, we propose to adopt an LLM-based optimizer (referred to as the Prompt Optimizer) to generate $p$ and iteratively refine it for optimizing the downstream performance (see Section 4.2). The pipeline of generating the prompt $p$ and producing decomposed sub-queries with this prompt is visualized in Figure 3. As mentioned in Section 1, the performance to be optimized in our setup would not only depend on $\Theta$ , but also depend on the decomposed sub-queries, which are further dependent on the prompt $p$ . Hence, the loss function defined in (2) is reformulated as $\mathcal { L } ( \Theta ; p )$ . To optimize this loss function, we propose an end-to-end training algorithm to jointly optimize $p$ and $\Theta$ (see Section 4.3).
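The reformulated loss $\mathcal { L } ( \Theta ; p )$ inherits the shape of Equation (2); a minimal sketch, assuming the per-document answer likelihoods $P _ { \theta } ( a | Q , D )$ and retrieval probabilities $P _ { \beta } ( D | Q )$ over the Top-$K$ documents are given as aligned lists:

```python
import math

def rag_loss(gen_likelihoods, retr_probs):
    # Eq. (2): negative log of the answer likelihood marginalized over the
    # Top-K retrieved documents, weighted by the retrieval distribution.
    return -math.log(sum(pg * pr
                         for pg, pr in zip(gen_likelihoods, retr_probs)))
```

For example, with answer likelihoods `[0.8, 0.2]` and retrieval probabilities `[0.5, 0.5]`, the marginal likelihood is 0.5 and the loss is $\log 2$.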
In Section 4.4, we further provide a theoretical analysis of this end-to-end training algorithm. It suggests that with appropriate hyper-parameter configurations, this algorithm can effectively optimize the prompt $p$ and minimize the loss $\mathcal { L } ( \Theta ; p )$ at a reasonable training cost. # 4.2. Optimize query decomposition with a fixed $\Theta$ # Algorithm 1 Optimize query decomposition 1: Input: A set of training queries: ${ \mathcal { Q } } ^ { \mathrm { t r a i n } }$ , a retrieval-based system parameterized by Θ, the old prompt pold. 2: Initialize the solution-score pairs $L S \overset { \vartriangle } { = } [ ( p ^ { \mathrm { o l d } } , \mathcal { L } ( \Theta ; p ^ { \mathrm { o l d } } ) ) ]$ 3: while not converged do 4: Prompt the Prompt Optimizer by leveraging $L S$ to generate a new prompt $p$ with Step 1 of Section 4.2. 5: Execute Step 2 to decompose each query in ${ \mathcal { Q } } ^ { \mathrm { t r a i n } }$ by prompting the Query Decomposer with $p$ . 6: Evaluate the training loss, $\mathcal { L } ( \Theta ; p )$ , over ${ \mathcal { Q } } ^ { \mathrm { t r a i n } }$ and add $( p , { \mathcal { L } } ( \Theta ; p ) )$ to $L S$ 7: if $\mathcal { L } ( \Theta ; p ) - \mathcal { L } ( \Theta ; p ^ { \mathrm { o l d } } ) \le - \alpha$ or repeated for $\kappa$ iterations then 8: Break 9: end if 10: end while 11: return $p$ and decomposed sub-queries Given a retrieval-based system with a fixed parameter $\Theta$ , we elaborate on how to search for optimal decomposed sub-queries in this section. Recall that this optimization problem is non-differentiable; we overcome this challenge by leveraging Algorithm 1, which iteratively executes the following two steps. The goals of these two steps are to generate a candidate prompt for the Query Decomposer by invoking the Prompt Optimizer, and to evaluate the quality of the prompt with the training loss $\mathcal { L } ( \Theta ; p )$ , respectively.
Step 1: By following (Yang et al., 2024), the Prompt Optimizer aims to produce a prompt prefix $p _ { 0 }$ , say, “Design a query decomposition framework that...” as shown in Figure 3, which is then concatenated with a fixed prompt template, including the description of the query decomposition task and one input query $Q$ , to construct a complete prompt $p$ for the Query Decomposer. Hence, searching for the optimal $p$ is equivalent to searching for the optimal prompt prefix $p _ { 0 }$ . The generation of one candidate prompt prefix $p _ { 0 }$ is conducted by prompting the Prompt Optimizer with two meta-prompts and a dynamically constructed solution-score pair list (see Figure 3). This list is initially empty and then gradually populated with the pairs of the prompt prefix $p _ { 0 }$ produced by the Prompt Optimizer and the corresponding training loss $\mathcal { L } ( \Theta ; p )$ as Algorithm 1 is executed. Intuitively speaking, $p _ { 0 }$ is regarded as the solution to this optimizer while $\mathcal { L } ( \Theta ; p )$ is viewed as the score of this solution. Step 2: To construct the above solution-score pairs, in particular, attaining the training loss $\mathcal { L } ( \Theta ; p )$ for one candidate prompt $p$ , we thus prompt the Query Decomposer with $p$ to generate the decomposed sub-queries for each query in the training set. These sub-queries are then used to perform MVR in the downstream retrieval-based system and evaluate the training loss $\mathcal { L } ( \Theta ; p )$ on all training queries. Then the pair $( p , { \mathcal { L } } ( \Theta ; p ) )$ is appended to the solution-score pair list as shown in Line 6. According to (Yang et al., 2024), as more solution-score pairs are included from prior iterations of Algorithm 1, the Prompt Optimizer can gradually refine the prompt $p$ for the Query Decomposer, which may produce a smaller training loss.
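The two steps above combine into the loop of Algorithm 1. A minimal sketch, assuming callables for the Prompt Optimizer (`propose_prompt`), the Query Decomposer (`decompose`), and the loss evaluation (`eval_loss`); all three interfaces are hypothetical, not the paper's code:

```python
def optimize_query_decomposition(train_queries, eval_loss, propose_prompt,
                                 decompose, p_old, alpha=0.02, kappa=5):
    # Sketch of Algorithm 1: alternate Step 1 (propose a prompt from the
    # solution-score pairs) and Step 2 (decompose queries, evaluate loss),
    # stopping on an improvement of at least alpha or after kappa rounds.
    subs = decompose(p_old, train_queries)
    loss_old = eval_loss(subs)
    pairs = [(p_old, loss_old)]            # solution-score pair list LS
    p = p_old
    for _ in range(kappa):
        p = propose_prompt(pairs)          # Step 1: Prompt Optimizer
        subs = decompose(p, train_queries) # Step 2: Query Decomposer
        loss = eval_loss(subs)
        pairs.append((p, loss))
        if loss - loss_old <= -alpha:      # improved by at least alpha
            break
    return p, subs
```

With stub callables, the loop stops as soon as a candidate prompt lowers the training loss by at least `alpha`.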
In the end, Algorithm 1 terminates if the training loss with the updated prompt, $\mathcal { L } ( \Theta ; p )$ , is smaller than that with the initial prompt $p ^ { \mathrm { o l d } }$ by at least $\alpha$ , or if the while loop has repeated for $\kappa$ iterations (see Line 7 in Algorithm 1). Note that the Query Decomposer may hallucinate; in particular, the generated sub-queries may contain tokens that do not exist in the input query. To mitigate this, we filter out such irrelevant tokens from the sub-queries. The effect of this filtering step is empirically evaluated in Appendix D.3. # 4.3. End-to-end training algorithm Note that in Section 4.2, we optimize the prompt with a fixed $\Theta$ . Indeed, the sub-queries produced by Algorithm 1 affect the input to $\Theta$ , thus motivating the need to further update $\Theta$ . As a consequence, we propose an end-to-end training algorithm outlined in Algorithm 2. This algorithm alternately optimizes the prompt $p$ for the Query Decomposer and trains $\Theta$ until convergence.

[Figure 3. The Prompt Optimizer receives two meta-prompts (“Your task is to generate the instruction <INS>. Below are some previous instructions with their scores. The score ranges from 0 to 100.” and “Generate an instruction that is different from all the instructions <INS> above, and has a higher score than all the instructions <INS> above.”) together with the solution-score pairs, and produces a prompt prefix, e.g., “Design a query decomposition framework that seamlessly integrates logical soundness.” The prefix is concatenated with the Query Decomposer’s fixed task description and applied to the input query.]

# Algorithm 2 Training POQD 1: Input: A set of training queries: ${ \mathcal { Q } } ^ { \mathrm { t r a i n } }$ , a retrieval-based system parameterized by $\Theta$ . 2: Initialize one random $p ^ { \mathrm { o l d } }$ . 3: while not converged do 4: Invoke Algorithm 1 with input $p ^ { \mathrm { o l d } }$ to obtain a new prompt $p ^ { \mathrm { n e w } }$ and optimized sub-queries. 5: if $p ^ { \mathrm { n e w } } = = { p } ^ { \mathrm { o l d } }$ then 6: Break 7: end if 8: Train $\Theta$ for $\tau$ iterations with the optimized sub-queries by minimizing $\mathcal { L } ( \Theta ; p ^ { \mathrm { n e w } } )$ with $p ^ { \mathrm { n e w } }$ fixed. 9: $p ^ { \mathrm { o l d } } \leftarrow p ^ { \mathrm { n e w } }$ 10: end while 11: Train $\Theta$ until convergence with the optimized sub-queries by minimizing $\mathcal { L } ( \Theta ; p ^ { \mathrm { n e w } } )$ with $p ^ { \mathrm { n e w } }$ fixed.

Note that at each iteration of Algorithm 2, we could optionally train $\Theta$ until convergence given the sub-queries produced by Algorithm 1. However, in many retrieval-based systems such as RAG systems, performing full training of $\Theta$ could be highly expensive. For instance, training the RAG model with a large image QA dataset takes up to 1 hour per epoch as revealed in Section 5, and training usually needs at least 5 epochs to converge. Hence, in Algorithm 2, we alternately optimize the prompt and sub-queries in Line 4 and update $\Theta$ for $\tau$ iterations with the optimized sub-queries in Line 8. This is repeated until the prompt can no longer be updated. In the end, we optimize the loss $\mathcal L ( \Theta ; p )$ with a fixed $p$ until convergence, resulting in an optimized parameter $\Theta ^ { * } ( p )$ (see Line 11 of Algorithm 2). We use the notation $\Theta ^ { * } ( p )$ to denote its dependency on the prompt $p$ . Note that in Algorithm 1, the training loss is reduced by at least $\alpha$ when the prompt is updated from $p ^ { \mathrm { o l d } }$ to $p ^ { \mathrm { n e w } }$ .
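The alternation of Algorithm 2 can be sketched with three assumed callables (an `optimize_prompt` wrapper around Algorithm 1, a `train_tau_steps` routine for the $\tau$ model updates, and a final `train_to_convergence`); these names are illustrative, not the paper's code:

```python
def train_poqd(optimize_prompt, train_tau_steps, train_to_convergence, p_init):
    # Sketch of Algorithm 2: alternate prompt refinement with tau-step model
    # updates until the prompt stops changing, then fully train the model.
    p_old = p_init
    while True:
        p_new = optimize_prompt(p_old)   # Line 4: refine prompt, Theta fixed
        if p_new == p_old:               # Line 5: prompt cannot improve
            break
        train_tau_steps(p_new)           # Line 8: only tau model updates
        p_old = p_new
    train_to_convergence(p_old)          # Line 11: full training, p fixed
    return p_old
```

Note the cost profile this design targets: the expensive full training runs exactly once, after the prompt has stabilized.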
However, this may not necessarily guarantee a decreased training loss at convergence, i.e., $\mathcal { L } ( \Theta ^ { * } ( p ^ { \mathrm { o l d } } ) ; p ^ { \mathrm { o l d } } ) > \mathcal { L } ( \Theta ^ { * } ( p ^ { \mathrm { n e w } } ) ; p ^ { \mathrm { n e w } } )$ , which is critical to ensure the optimality of the derived prompt $p ^ { \mathrm { n e w } }$ . Otherwise, it would be meaningless to update this prompt. Hence, in Section 4.4, we provide a rigorous theoretical analysis to show that the above inequality holds with appropriate $\alpha$ and $\tau$ without hurting training efficiency. # 4.4. Theoretical analysis In this sub-section, before formally presenting the theoretical results, we list some essential assumptions below. Assumption 4.1 ($\mu$-PL$^{*}$ condition and $L$-smoothness). $\mathcal { L } ( \Theta ; p )$ satisfies the $\mu$-Polyak-Łojasiewicz star ($\mu$-PL$^{*}$) condition (Liu et al., 2022) and $L$-smoothness for any $\Theta$ with a given $p$ , i.e.: $$ \| \nabla _ { \Theta } \mathcal { L } ( \Theta ; p ) \| _ { 2 } ^ { 2 } \geq \mu \mathcal { L } ( \Theta ; p ) , \qquad (\mathrm{PL}^{*} \text{ condition}) $$ $$ \mathcal { L } ( \Theta _ { 2 } ; p ) \leq \mathcal { L } ( \Theta _ { 1 } ; p ) + \nabla _ { \Theta } \mathcal { L } ( \Theta _ { 1 } ; p ) ^ { \top } ( \Theta _ { 2 } - \Theta _ { 1 } ) + \frac { L } { 2 } \| \Theta _ { 2 } - \Theta _ { 1 } \| _ { 2 } ^ { 2 } \qquad (L\text{-smoothness}) $$ Indeed, according to recent theoretical results (Liu et al.), for pre-trained over-parameterized large language models, if they are fine-tuned with the state-of-the-art optimization method, GaLore (Zhao et al., 2024b), their training loss satisfies the above PL$^{*}$ condition. Assumption 4.2 (Bounded loss).
For arbitrary $\Theta$ and $p$ , the loss $\mathcal { L } ( \Theta ; p )$ is upper bounded by a constant $M$ , i.e.: $$ \begin{array} { r } { \mathcal { L } ( \Theta ; p ) \le M . } \end{array} $$ Assumption 4.3 (Gradient Descent updates). Suppose updating $\Theta$ in $\mathcal { L } ( \Theta ; p )$ with a fixed $p$ is performed through gradient descent, i.e., $$ \Theta _ { t + 1 } = \Theta _ { t } - \eta \nabla \mathcal { L } ( \Theta _ { t } ; p ) , $$ in which $\eta$ is the learning rate. Given the above assumptions, the following theorem holds. Theorem 4.4. Suppose the prompt $p ^ { o l d }$ is updated to $p ^ { n e w }$ in Line 4 of Algorithm 2; then the following inequality holds: $$ \mathcal { L } ( \Theta ^ { * } ( p ^ { o l d } ) ; p ^ { o l d } ) - \mathcal { L } ( \Theta ^ { * } ( p ^ { n e w } ) ; p ^ { n e w } ) \ge \alpha - ( 1 - \frac { \mu } { 2 L } ) ^ { \tau } M , $$ in which $\mathcal { L } ( \Theta ^ { * } ( p ^ { o l d } ) ; p ^ { o l d } )$ and $\mathcal { L } ( \Theta ^ { * } ( p ^ { n e w } ) ; p ^ { n e w } )$ denote the converged training loss when the prompt is fixed to $p ^ { o l d }$ and $p ^ { n e w }$ , respectively. In the above theorem, the term $( 1 - \frac { \mu } { 2 L } )$ is a constant between 0 and 1. Therefore, we can configure $\tau$ such that $\alpha - ( 1 - \frac { \mu } { 2 L } ) ^ { \tau } M$ is a positive value, e.g., $\frac { 1 } { 2 } \alpha$ , by setting $\tau = \log _ { 1 - \frac { \mu } { 2 L } } \Big ( \frac { \alpha } { 2 M } \Big )$ . As mentioned in Section 4.3, we configure $\tau$ as 3 by default, which strikes a balance between training efficiency and performance, as empirically verified in Section 5.
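As a numeric sanity check of this configuration (the values of $1 - \frac { \mu } { 2 L }$ and $M$ below are hypothetical; only $\alpha = 0.02$ matches the paper's default):

```python
import math

# Hypothetical constants: contraction factor r = 1 - mu/(2L), loss bound M.
r = 0.9
alpha, M = 0.02, 5.0            # alpha matches the paper's default setting

# tau = log_{1 - mu/(2L)}(alpha / (2M)), rounded up to an integer step count
tau = math.ceil(math.log(alpha / (2 * M)) / math.log(r))
gap_lower_bound = alpha - (r ** tau) * M   # RHS of Theorem 4.4
assert gap_lower_bound >= alpha / 2        # net improvement of at least alpha/2
print(tau, round(gap_lower_bound, 4))      # tau = 59 for these constants
```

With these (illustrative) constants the bound requires 59 gradient steps per prompt update; tighter contraction (larger $\mu / 2L$) shrinks $\tau$ accordingly.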
This theorem states that with appropriate $\tau$ and $\alpha$ , Algorithm 2 can effectively optimize the prompt $p$ for decomposing sub-queries at a reasonable training cost, thus achieving downstream performance superior to that of other query decomposition strategies. The complete proof of this theorem is provided in Appendix A. # 5. Experiments # 5.1. Experimental setup Baseline We compare POQD against the following query decomposition methods from prior studies: • Conventional dense retrieval encodes each query and document with one single embedding. • ColBERT (Khattab & Zaharia, 2020), which decomposes queries into individual tokens. • Supervised Query Decomposition (S-QD for short): A series of works (Xue et al., 2024; Yang & Zhu, 2021; Zhou et al., 2022; Zhu et al., 2023; Guo et al., 2022) train a sequence-to-sequence model in a supervised manner to generate decomposed sub-questions for each question. We follow (Zhou et al., 2022; Zhu et al., 2023; Guo et al., 2022; Wu et al., 2024a) to fine-tune Llama3.1-8B with the StrategyQA dataset (Geva et al., 2021b), which contains human-annotated sub-queries. • Unsupervised Query Decomposition (U-QD for short): This aims to train a query decomposition model in an unsupervised manner. The representative method is OUNS (Perez et al., 2020), which aims to identify sub-queries that are similar to the original questions but also diverse enough. • In-Context Learning-based Query Decomposition (ICL-QD for short): Some recent works (Li et al., 2024; Pereira et al., 2023; Niu et al., 2023; Ye et al., 2023; Xue et al.; Wu et al., 2024b; Chen et al., 2024; Bhattacharya et al., 2023) prompt LLMs to perform in-context learning for query decomposition with manually crafted prompts. These prompts are included in Appendix D.1.
• In-Context Learning with Feedback for Query Decomposition (ICLF-QD): Some recent works (Qi et al.; Gao et al., 2024; Sidhoum et al., 2024) improve ICL-QD by providing feedback to LLMs regarding the quality of the decomposed sub-queries. In particular, we follow (Qi et al.) to evaluate whether a sub-query is relevant to the retrieved document or not. This is for determining whether to further decompose this sub-query. The prompts used in this method are included in Appendix D.1. Datasets and models We employ Web Questions (WebQA) (Berant et al., 2013; Chang et al., 2021), MultiModalQA (Talmor et al.), ManyModalQA (Hannan et al., 2020) and StrategyQA (Geva et al., 2021a) dataset for experiments. Among these datasets, the former three include questions requiring retrieval from multi-modal data. We focus on two RAG-based QA tasks throughout the experiments, i.e., image QA and text QA. For image QA, we select only questions requiring image retrieval from WebQA, MultiModalQA, and ManyModalQA. For text QA, we select only questions requiring text documents from all of these four datasets. Notably, StrategyQA is used for multi-hop QA, while the others only support single-hop QA. Regarding the retrieval process, it is critical to determine which embedding model to use. For text QA, we employ the Sentence-Bert model (Reimers, 2019) by default for encoding sub-queries and corpus for other baseline methods as well as POQD. On the other hand, for image QA, the CLIP model (Radford et al., 2021) is employed as the default model for embedding text queries and image corpus. In Section 5.3, we further perform ablation studies on the retrieval model. But note that ColBERT and its counterpart for image retrieval, ColPali (Faysse et al., 2024), have their own encoding models. 
Hence, we report the results of two versions of ColBERT: one uses its own embedding model (denoted by ColBERT-orig), while the other leverages the same default embedding model as the other methods. For the generator models, we leverage Llama3.1-8B (Dubey et al., 2024) and Llava-v1.5-7B (Liu et al., 2024) as generators for single-hop text QA and image QA, respectively. In the experiments, only these generator models are fine-tuned while keeping the retrieval models frozen. For multi-hop text QA, i.e., the StrategyQA dataset, we follow the state-of-the-art (Xu et al., 2024) in utilizing the frozen GPT-4 (Achiam et al., 2023) model and merely replace its default retrieval method with the baseline methods and POQD. Throughout the experiments, the default values of $\alpha, \tau$ and $\kappa$ are configured as 0.02, 3 and 5, respectively. Regarding the configuration of the retrieval process, we retrieve the Top-1 most relevant images and the Top-2 most relevant documents in the image QA and text QA tasks, respectively. More details on the experimental setup, including how documents and images are decomposed and embedded, are provided in Appendix B.
# 5.2. Quantitative results
Performance analysis We perform end-to-end RAG training on the QA datasets introduced in Section 5.1. For this experiment, we not only report the end-to-end QA accuracy in Table 2 but also compare the ground-truth relevant documents or images against those retrieved by POQD and the baseline methods in Table 1. Regarding the retrieval accuracy metric, we report $\mathrm{Hit}@1$ and $\mathrm{Hit}@2$ (see the formal definition in (Croft et al., 2010)) for image QA and text QA, respectively, since the Top-1 relevant image and Top-2 relevant text documents are retrieved in these two tasks, respectively.
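Hit@k, used above as the retrieval accuracy metric, counts a query as a hit if any ground-truth item appears among its top-k retrieved items. A minimal sketch over hypothetical document IDs (illustrative data, not the paper's evaluation code):

```python
def hit_at_k(retrieved, ground_truth, k):
    # 1 if any ground-truth item appears in the top-k retrieved list, else 0.
    return int(any(item in ground_truth for item in retrieved[:k]))

def mean_hit_at_k(all_retrieved, all_ground_truth, k):
    # Average Hit@k over a set of queries.
    hits = [hit_at_k(r, g, k) for r, g in zip(all_retrieved, all_ground_truth)]
    return sum(hits) / len(hits)

# Two queries: the first hits within the top-2, the second misses.
retrieved = [["d3", "d1", "d7"], ["d2", "d9", "d4"]]
truth = [{"d1"}, {"d4"}]
print(mean_hit_at_k(retrieved, truth, k=2))  # 0.5
```

With Top-1 image retrieval and Top-2 document retrieval, the same function gives Hit@1 and Hit@2 directly by setting `k`.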
Note that since StrategyQA is primarily used for multi-hop QA, in which the queries for retrieving documents are dynamically generated during the reasoning process, the ground-truth relevant documents are not available for this dataset. Hence, we do not report the retrieval accuracy for this dataset. As Table 1 and Table 2 suggest, POQD outperforms all baseline methods in both retrieval performance and end-to-end QA accuracy by a large margin across all datasets. Notably, the retrieval accuracy is increased by up to $5.28\%$ (see the last column in Table 1), while POQD boosts QA accuracy by up to $12.61\%$ (see the MultiModalQA column under Image QA in Table 2). This indicates that performing multi-vector retrieval with the sub-queries derived by POQD can enhance the retrieval performance, and consequently the QA accuracy. Note that POQD consistently beats both ColBERT and ColBERT-orig, indicating the poor performance of ColBERT regardless of the underlying embedding model. Time analysis We further analyze both the training time and the inference time of POQD for the RAG-based QA pipeline. First, regarding the training time, we record the overall running time of Algorithm 2 on all datasets except StrategyQA. StrategyQA is excluded since, as noted earlier, the generator model is not fine-tuned for this multi-hop QA dataset. The results, presented in Figure 4, also decompose the total running time into two components: the time for invoking Algorithm 1 to optimize the prompt $p$ in $\mathcal{L}(\Theta; p)$ and the time for training the parameters $\Theta$. As illustrated by this figure, the dominant training overhead comes from the generator training phase, while optimizing the prompt adds negligible training cost. Considering that POQD also yields significant performance gains, as Table 2 shows, these findings highlight both the effectiveness and efficiency of POQD. We also report the inference time of POQD in Figure 5.
Similar to the breakdown in Figure 4, the total inference time is decomposed into three components: the generator model inference time, the retrieval time, and the time spent on decomposing queries. As illustrated in Figure 5, the model inference time contributes the largest portion of the overall inference overhead, significantly exceeding the query decomposition time. This finding indicates that incorporating query decomposition does not adversely impact the overall inference speed.
# 5.3. Ablation studies
We also perform a series of ablation studies to evaluate the effect of the hyperparameters and the superiority of POQD under various configurations, using the WebQA dataset in the text QA task. Effect of $\alpha$ We vary the value of $\alpha$ in Algorithm 1 to evaluate its effect on the training loss, which produces Figure 6. In this figure, the prompt used for decomposing queries is updated three times by Algorithm 1 (indicated by the inverted triangle symbols).
Table 1. Retrieval Accuracy on QA datasets
Table 2. End-to-End QA (Exact Match) Accuracy. We bold the best and underline the second best accuracy number, respectively
Figure 5. Inference time, broken down into generator time, retrieval time, and query decomposition time
Figure 6. Training loss over iterations for $\alpha = 0.01$, $\alpha = 0.02$ and $\alpha = 0.05$
As this figure suggests, if $\alpha$ is too large (say $\alpha = 0.05$), POQD would struggle to find a suitable $p^{\mathrm{new}}$ in Algorithm 1, thus causing an underfitting issue. In contrast, if $\alpha$ is too small (say $\alpha = 0.01$), POQD converges much more slowly than our default of $\alpha = 0.02$. Hence, the default configuration of $\alpha$, i.e., 0.02, balances the convergence speed and the final performance.
In addition, with $\alpha = 0.02$, the training loss decreases smoothly throughout the training process, exhibiting no abrupt spikes. Effect of $\tau$ We measure the training loss $\mathcal{L}(\Theta; p)$ and the total training time while varying $\tau$ from 0 to 5, which is plotted in Figure 7. Notably, the performance trend exhibited in this figure matches the analysis in Section 4.4, i.e., a larger $\tau$ leads to longer training time but better performance. As this figure suggests, configuring $\tau$ as 3 is a reasonable choice since it balances training efficiency and performance well. Effect of using varied LLMs for decomposing queries Unlike the other methods, ICL-QD, ICLF-QD, and POQD rely on an LLM for generating decomposed sub-queries. Hence, we also compare their performance with alternative LLMs for query decomposition. Specifically, this experiment is conducted by leveraging the GPT-4 model (Achiam et al., 2023) and DeepSeek-V3 (DeepSeek Team, 2024) as the query decomposer, which leads to the results in Table 3. As this table shows, with varied LLMs used for query decomposition, POQD consistently outperforms ICL-QD and ICLF-QD.
Figure 7. Training time of POQD with varied values of $\tau$
Table 3. Performance on the WebQA dataset in text QA with varied LLMs for query decomposition
Additional ablation studies Due to the space limit, other ablation studies are reported in Appendix D.3, including studies on the effect of the varied number of retrieved items, the filtering step, the varied generator models, and the varied embedding models for retrieval.
# 5.4. Qualitative studies
We expand the example shown in Figure 1 to illustrate the differences between the baseline methods and POQD. Specifically, we report the decomposed sub-queries generated by the other baseline methods in Table 4.
In comparison to POQD, these baseline methods produce sub-queries that either contain irrelevant information, say the words “what” and “type” generated by S-QD, or miss key information, say “Hong Kong” omitted by U-QD. As a consequence, the most relevant images retrieved with those sub-queries (shown in Appendix D.4) do not match the ground-truth image shown in Figure 1. These retrieval errors can thus be attributed to the unreasonable sub-queries produced by the baseline methods.
Table 4. Performance on the WebQA dataset in text QA by using the Roberta model as the embedding model for retrieval
# 6. Related work
Multi-Vector Retrieval Multi-Vector Retrieval (MVR), first introduced by ColBERT (Khattab & Zaharia, 2020), employs a late-interaction mechanism to evaluate query-document similarity. This approach can overcome the representational limitations of dense retrieval methods that use single embeddings for queries and documents. Subsequent works have focused on accelerating retrieval (Santhanam et al., 2022b;a; Gao et al., 2021; Li et al., 2023) or improving score aggregation strategies (Qian et al.). Notably, most solutions decompose queries into individual tokens, leaving the optimization of query decomposition for MVR underexplored. LLM-based optimizers (Yang et al., 2024; Pryzant et al.; Wang et al.) have shown the potential of large language models (LLMs) as generic optimizers that search for prompts for a given LLM based on a history of past instructions and their performance scores on the training set. Later on, this strategy was further extended to optimizing the configurations of LLM agents (Zhang et al.; Zhao et al., 2024a). This strategy is highly effective since it is free of gradient computation (Lin et al., 2024). Due to the space limit, we discuss other relevant related works in Appendix C.
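The late-interaction mechanism mentioned above scores a document by matching each query vector against its best-matching document vector and summing, as in ColBERT's MaxSim. A toy sketch with dot-product similarity (the vectors are illustrative, not outputs of any real encoder):

```python
def maxsim_score(query_vecs, doc_vecs):
    # Late interaction: for each query vector, keep only its best-matching
    # document vector (max dot product), then sum over query vectors.
    total = 0.0
    for q in query_vecs:
        total += max(sum(a * b for a, b in zip(q, d)) for d in doc_vecs)
    return total

q = [[1.0, 0.0], [0.0, 1.0]]          # two sub-query embeddings
doc_a = [[0.9, 0.1], [0.2, 0.8]]      # covers both sub-queries
doc_b = [[0.9, 0.1], [0.8, 0.2]]      # covers only the first
print(maxsim_score(q, doc_a) > maxsim_score(q, doc_b))  # True
```

The same scoring applies whether the query-side vectors come from individual tokens (as in ColBERT) or from coarser sub-queries (as in POQD); only the decomposition granularity changes.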
Although Multi-Vector Retrieval (MVR) has achieved the state of the art on many information retrieval (IR) tasks, its performance highly depends on how queries are decomposed into smaller pieces, say phrases or tokens. However, optimizing query decomposition for MVR performance is not end-to-end differentiable. Even worse, jointly solving this problem while training the downstream retrieval-based systems, say RAG systems, could be highly inefficient. To overcome these challenges, we propose Performance-Oriented Query Decomposer (POQD), a novel query decomposition framework for MVR. POQD leverages one LLM for query decomposition and searches for the optimal prompt with an LLM-based optimizer. We further propose an end-to-end training algorithm to alternately optimize the prompt for query decomposition and the downstream models. This algorithm can achieve superior MVR performance at a reasonable training cost, as our theoretical analysis suggests. POQD can be integrated seamlessly into arbitrary retrieval-based systems such as Retrieval-Augmented Generation (RAG) systems. Extensive empirical studies on representative RAG-based QA tasks show that POQD outperforms existing query decomposition strategies in both retrieval performance and end-to-end QA accuracy. POQD is available at https://github.com/PKU-SDS-lab/POQD-ICML25.
# 1 Introduction
Monocular 3D human pose estimation is a fundamental task in computer vision that aims to predict human body poses in 3D space from a single RGB image. It serves as a key enabling technology for a wide range of applications, including motion analysis [1], human-computer interaction [2], [3] and virtual/augmented reality [4]. Existing approaches are predominantly divided into two technical paradigms: direct regression of 3D poses from RGB inputs [5], [6] and 2D-to-3D lifting based on detected 2D keypoints [7], [8], [9]. Direct regression methods typically adopt end-to-end convolutional neural networks (CNNs) to estimate 3D poses. In contrast, 2D-to-3D lifting methods first detect 2D keypoints from input images and then infer 3D joint locations. Benefiting from well-established 2D pose detectors, these methods often achieve superior accuracy in practice. Despite this progress, existing 2D-to-3D lifting approaches still face two key limitations: (1) they rely heavily on 2D joint coordinates, which overlooks the underlying structural relationships between joints [10]; (2) they fail to effectively integrate geometric constraints such as bone directions and joint angles [11], [12]. In particular, conventional methods tend to treat bone directions and joint angles as independent constraints without modeling their intrinsic correlation, leading to inaccurate pose predictions under complex motion patterns. To address these challenges, we propose PoseGRAF, a novel framework for 3D human pose estimation that integrates geometry-aware graph representation with adaptive feature fusion. PoseGRAF explicitly captures joint angle relationships on the skeleton graph to enhance the representation of bone directions.
Specifically, we propose a dual-graph approach to model bone direction relationships, consisting of: (i) a weighted graph, where nodes represent bones and edge weights encode the angles between adjacent bones; and (ii) an unweighted graph, where nodes also represent bones and edges indicate binary connectivity (1 for connected, 0 for not connected). Based on this, we design a geometry-enhanced joint embedding method, which integrates Cross-Attention and a Joint GCN to extract joint features and employs a Bone Direction GCN to integratively encode bone direction and angle information. Furthermore, we design an attention-based dynamic feature fusion module that adaptively fuses positional and geometric features and co-constructs a residual structure with an improved Transformer encoder. The proposed architecture alleviates unreasonable pose predictions during fast or intricate motions. Extensive experiments on two benchmark datasets, Human3.6M [13] and MPI-INF-3DHP [14], demonstrate that our method outperforms state-of-the-art approaches across multiple metrics, validating its effectiveness and robustness. The main contributions of this work can be summarized as follows: (1) We design a geometry-enhanced graph to explicitly model the relationships between bone directions and their connections, overcoming the limitations of traditional joint graphs in angle representation. A graph convolution module is designed to effectively capture the spatial correlation of bone directions. (2) The proposed attention-based dynamic feature fusion module adaptively integrates joint position and bone direction features. (3) Comprehensive evaluations conducted on the Human3.6M and MPI-INF-3DHP datasets demonstrate that the proposed method achieves superior performance compared to existing state-of-the-art approaches.
# 2 Related work
# 2.1 3D human pose estimation
In recent years, deep learning has significantly advanced the field of monocular 3D human pose estimation.
Existing methods can be broadly categorized into two paradigms. The first comprises direct regression approaches [5], [6], [15], which utilize end-to-end CNN architectures to directly predict 3D joint positions or reconstruct human meshes from raw RGB images. While these methods leverage rich visual information, they are often sensitive to environmental variations and computationally expensive, making them less suitable for real-time processing in dynamic scenes. The second category adopts a two-stage ’image-to-2D-to-3D’ pipeline. Chen et al. [16] perform matching and retrieval from a predefined 3D pose library, achieving computational efficiency but limited by pose diversity. Martinez et al. [17] propose a fully connected residual network that regresses 3D joint positions from 2D keypoints, significantly improving accuracy and benefiting from the reliable features provided by a detector pretrained on a large-scale 2D dataset. Subsequent improvements, including hierarchical joint prediction [18], keypoint refinement [12], and viewpoint-invariant constraints [19], further enhanced model performance. Although current data augmentation techniques [20] have made significant progress in predictive accuracy, their generalization to complex real-world scenarios remains insufficient.
# 2.2 Graph-Based Learning Methods
Graph Convolutional Networks (GCNs) have demonstrated strong performance in monocular 2D-to-3D pose lifting tasks by modeling the topological structure of the human skeleton, where joints are treated as nodes and bones as edges. GCNs can effectively capture spatial dependencies through graph-based convolutional operations. In existing works, Ci et al. [21] proposed a locally connected network to enhance feature representation, SemGCN [22] incorporated joint semantic relationships to refine predictions, and MGCN [23] introduced weight modulation to improve accuracy.
However, these approaches rely on static adjacency matrices to define edge weights, making it difficult to model dynamic skeletal interactions. To address this, Zhou et al. [24] proposed Hyperformer, which leverages hypergraph self-attention (HyperSA) to embed skeletal structures into a Transformer framework. While this improves skeletal action recognition, it still falls short in modeling high-order interactions such as bone direction dynamics.
# 2.3 Skeletal Geometry-Aware Methods
Traditional 2D-to-3D pose estimation methods typically regress 3D coordinates directly from 2D joint coordinates. Ma et al. [25] integrated bone length constraints within the GCN framework to mitigate depth ambiguity, while Azizi et al. [26] encoded poses through inter-segment angles to achieve finer skeletal representations. Hu et al. [27] employed a directed graph approach to explicitly model joint-bone relationships, and Yu et al. [28] optimized estimations through GCN-based global-local feature integration. Though these methods emphasize the importance of bone directions and angles, most treat such constraints as auxiliary signals rather than directly incorporating them into graph structures. Sun et al. [10] regressed joint relative displacements through skeletal representations, and Kanazawa et al. [29] incorporated skeletal constraints in 3D mesh reconstruction, yet neither dynamically leveraged angle information. We propose a novel approach: constructing a weighted graph using skeletal orientation angles as edge weights and applying it in graph convolutional network (GCN) processing to achieve higher-accuracy dynamic modeling.
Fig. 1. (a) Dance pose. (b) 2D joint graph. (c) Directed weighted bone graph. (d) Angles between adjacent bone directions.
# 3. Method
As shown in Fig.
2(a), we propose PoseGRAF, a novel 3D human pose estimation model based on Graph Convolutional Networks (GCN) and Transformer, designed to enhance 3D human pose estimation performance by leveraging advanced graph-based and attention mechanisms to capture the intricate relationships within human skeletal structures. Fig.2. (a) Overview of the proposed framework. denotes the concatenation of bone directional features and joint features. (b) Transformer encoder module: represents the relative distance matrix of human body topology, $\mathsf { O }$ indicates feature embeddings processed by Cross-Attention, corresponds to embeddings from Dynamic Fusion. (c) Bone-Directional graph convolution module. (d) Dynamic fusion module. # 3.1 Overview of the network The PoseGRAF framework consists of five modules: Joint GCN, Bone GCN, Cross-Attention, Dynamic Fusion, and Transformer Encoder. We begin by extracting 2D keypoints from the input image using the CPN detector[30], followed by constructing both directed weighted and undirected graphs to represent skeletal structures. The Bone GCN extracts directional and angular relational features from the skeletal graph. In parallel, the Joint GCN module aggregates features from adjacent nodes to model local spatial dependencies among joints. Next, the Cross-Attention mechanism allows joint features to attend to bone direction features, enhancing the joint representations by incorporating relevant directional information. The Dynamic Fusion module then adaptively integrates the refined joint features with the bone direction features. These fused features are processed by an improved Transformer Encoder, embedded within a residual structure, to generate the final feature representations. Finally, a regression head linearly projects these features to three-dimensional space, enabling accurate estimation of the 3D human pose from the 2D input. 
# 3.2 Graph Convolutional Networks
PoseGRAF employs a dual-stream graph convolutional network architecture comprising a Joint GCN and a Bone-Direction GCN. The former models local spatial dependencies between human joints, while the latter employs joint angles to generate weighted representations of geometric correlations in bone directions. Joint GCN. We represent joint features as a graph $G_{J} = (V_{J}, A_{J})$, where the vertex set $V_{J}$ contains $N$ joints and the edge connections are defined by an adjacency matrix $A_{J} \in \{0, 1\}^{N \times N}$. Specifically, $A_{J}^{(i,j)} = 1$ if joints $i$ and $j$ share a physical connection, and $A_{J}^{(i,j)} = 0$ otherwise. Let $X_{J}^{(l)}$ denote the latent representation of the pose data at the $l$-th layer. The joint feature representation is updated through graph-convolution-based neighbor aggregation, formulated as:
$$ X_{J}^{(l+1)} = \sigma \big( \widetilde{D}_{J}^{-1/2} \widetilde{A}_{J} \widetilde{D}_{J}^{-1/2} X_{J}^{(l)} \theta_{J} \big) $$
where $\tilde{A}_{J} = A_{J} + I_{N}$ denotes the self-loop-augmented adjacency matrix, $\widetilde{D}_{J}$ is the diagonal degree matrix of $\tilde{A}_{J}$, and $\theta_{J} \in \mathbb{R}^{D \times D}$ is a trainable weight matrix. Bone Direction GCN. As shown in Fig. 2(c), this module constructs two geometrically enhanced graphs: a directed weighted bone graph and a directed unweighted bone graph. The directed weighted bone graph is denoted as $G_{BW} = (V_{B}, W_{B})$, where the vertex set $V_{B}$ contains $M$ bone nodes, constructed as illustrated in Fig. 1.
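The Joint GCN update above can be sketched in plain Python. This is a minimal illustration on a toy 3-joint chain: the identity is used in place of the learned weight matrix $\theta_J$, and ReLU stands in for the unspecified activation $\sigma$; neither choice is the paper's exact configuration.

```python
import math

def matmul(A, B):
    # Plain-Python matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def gcn_layer(A, X):
    n = len(A)
    # Add self-loops: A_tilde = A + I.
    A_t = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    # Symmetric normalization: D^{-1/2} A_tilde D^{-1/2}.
    d = [sum(row) for row in A_t]
    A_hat = [[A_t[i][j] / math.sqrt(d[i] * d[j]) for j in range(n)] for i in range(n)]
    # Aggregate neighbor features (weight matrix theta omitted, i.e. identity).
    H = matmul(A_hat, X)
    # ReLU as a stand-in activation for this sketch.
    return [[max(0.0, v) for v in row] for row in H]

# 3-joint chain 0-1-2 with a 1-dimensional feature per joint.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
X = [[1.0], [0.0], [1.0]]
print(gcn_layer(A, X))  # joint 1 absorbs signal from both neighbors
```

Each output row is a degree-normalized average over a joint and its skeletal neighbors, which is exactly the "neighbor aggregation" the propagation rule describes.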
The feature $x_{B}^{p}$ of a bone node $v_{B}^{p}$ is computed as follows:
$$ x_{B}^{p} = \frac{x_{J}^{i} - x_{J}^{j}}{\left\| x_{J}^{i} - x_{J}^{j} \right\|} $$
where $x_{J}^{i}$ and $x_{J}^{j}$ represent the features of the source joint and target joint of $v_{B}^{p}$, respectively. The edge weight between bone nodes $v_{B}^{p}$ and $v_{B}^{q}$ is computed as follows:
$$ w_{B}^{(p,q)} = \begin{cases} \arccos\left( \dfrac{x_{B}^{p} \cdot x_{B}^{q}}{\left\| x_{B}^{p} \right\| \left\| x_{B}^{q} \right\|} \right), & \text{if } v_{B}^{p} \text{ and } v_{B}^{q} \text{ share a joint} \\ 0, & \text{otherwise} \end{cases} $$
The directed unweighted bone graph is denoted as $G_{BA} = (V_{B}, A_{B})$, where $A_{B}^{(p,q)} = 1$ if bone nodes $v_{B}^{p}$ and $v_{B}^{q}$ share a common joint, and $A_{B}^{(p,q)} = 0$ otherwise. Serving as inputs to the Bone Direction GCN module, $G_{BA}$ and $G_{BW}$ undergo feature extraction through two separate graph convolutional layers.
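The bone-node features and angular edge weights just defined can be computed directly: a bone feature is the unit direction between its two joints, and the weight between bones sharing a joint is the arccos of the cosine of their directions. A minimal sketch with hypothetical 2D joint coordinates:

```python
import math

def bone_feature(x_i, x_j):
    # Unit direction vector from target joint x_j to source joint x_i.
    d = [a - b for a, b in zip(x_i, x_j)]
    n = math.sqrt(sum(v * v for v in d))
    return [v / n for v in d]

def bone_edge_weight(x_p, x_q, share_joint):
    # Angle between two bone directions if the bones share a joint, else 0.
    if not share_joint:
        return 0.0
    cos = sum(a * b for a, b in zip(x_p, x_q))  # both inputs are unit vectors
    return math.acos(max(-1.0, min(1.0, cos)))  # clamp guards rounding error

# Two bones meeting at the origin: one along +x, one along +y.
b1 = bone_feature([1.0, 0.0], [0.0, 0.0])
b2 = bone_feature([0.0, 1.0], [0.0, 0.0])
print(bone_edge_weight(b1, b2, share_joint=True))  # pi/2
```

Collecting these weights over all bone pairs yields the angular-weighted adjacency matrix of the directed weighted bone graph.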
These layers update the representations of bone nodes in the subsequent layer by aggregating information from neighboring bone nodes and angular relationships, formulated as follows:
$$ \begin{array}{rl} & \bar{X}_{W}^{(l+1)} = \sigma \big( \widetilde{D}_{B}^{-1/2} \widetilde{W}_{B} \widetilde{D}_{B}^{-1/2} X_{B}^{(l)} \Theta_{W} \big) \\ & \bar{X}_{A}^{(l+1)} = \sigma \big( \widetilde{D}_{B}^{-1/2} \widetilde{A}_{B} \widetilde{D}_{B}^{-1/2} X_{B}^{(l)} \Theta_{A} \big) \end{array} $$
$$ X_{B}^{(l+1)} = \bar{X}_{W}^{(l+1)} \oplus \bar{X}_{A}^{(l+1)} $$
where $\widetilde{W}_{B}$ denotes the angular-weighted adjacency matrix and $\tilde{A}_{B}$ represents the original bone connectivity matrix. $\widetilde{D}_{B}^{-1/2}$ corresponds to the normalized bone-node degree diagonal matrix, and $\Theta_{W}, \Theta_{A} \in \mathbb{R}^{D \times D}$ are two independent learnable parameter matrices. The outputs $\bar{X}_{W}^{(l+1)}$ and $\bar{X}_{A}^{(l+1)}$ of the two graph convolutional layers are aggregated to produce the updated node representations for the $(l+1)$-th layer. The activation function $\sigma(\cdot)$ is LeakyReLU with a negative slope of $\alpha = 0.01$, which mitigates gradient vanishing while preserving feature sparsity.
# 3.3 Cross-Attention
This module is designed to capture the intrinsic correlations between human bone directions and joints.
We concatenate joint features with bone direction features as follows:
$$ X = [X_{J}^{1}; X_{J}^{2}; \ldots; X_{J}^{N}; X_{B}^{1}; X_{B}^{2}; \ldots; X_{B}^{M}] $$
This module takes $X \in \mathbb{R}^{(N+M) \times D}$ as input, where $D$ denotes the embedding dimension. The module first computes correlation scores between joints and bone directions using the following formulation:
$$ \hat{A}_{h;k} = \big[ \hat{a}_{h;k}^{N+1}; \hat{a}_{h;k}^{N+2}; \dots; \hat{a}_{h;k}^{N+M} \big] \in \mathbb{R}^{M \times N} $$
Here, $\hat{a}_{h;k}^{N+i}$ denotes the attention score vector between the $i$-th bone direction and all joints in the $h$-th head, reflecting the interactions between joints and bone edges. Following [31], we employ an Exponential Moving Average (EMA) to aggregate the multi-head attention maps across layers:
$$ \bar{A}_{h;k} = \beta \cdot \bar{A}_{h;k-1} + (1 - \beta) \cdot \hat{A}_{h;k} $$
where $\beta = 0.99$. The final layer's $\bar{A}_{h;k}$ is then employed to aggregate attention vectors from different heads across joints and bone directions, yielding the final visual token correlation scores:
$$ S = \frac{1}{HM} \sum_{h=1}^{H} \sum_{i=1}^{M} \bar{a}_{h;k}^{(N+i)} $$
where $\bar{a}_{h;k}^{N+i}$ represents the $i$-th row of matrix $\bar{A}_{h;k}$, and $H$ denotes the number of attention heads. After cross-attention processing, the output features are partitioned into bone direction features $X_{BC} \in \mathbb{R}^{M \times D}$ and joint features $X_{JC} \in \mathbb{R}^{N \times D}$, enabling independent processing by subsequent modules.
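The EMA aggregation of attention maps above can be sketched element-wise, with $\beta = 0.99$ as in the text (toy 1x2 attention maps for illustration):

```python
def ema_update(prev, current, beta=0.99):
    # Exponential moving average over per-layer attention maps:
    # A_bar_k = beta * A_bar_{k-1} + (1 - beta) * A_hat_k, element-wise.
    return [[beta * p + (1 - beta) * c for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, current)]

a_bar = [[0.0, 0.0]]                        # running average across layers
for a_hat in ([[1.0, 0.5]], [[1.0, 0.5]]):  # two layers' raw attention maps
    a_bar = ema_update(a_bar, a_hat)
print(a_bar)  # slowly drifts toward [[1.0, 0.5]]
```

With a high $\beta$, each layer's raw attention contributes only a small correction, so the aggregated map changes smoothly across layers rather than jumping with any single layer's scores.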
This design not only preserves the structural information of the features but also explicitly models multi-scale dependencies between key joints and bone directions through the multi-head attention mechanism. By incorporating attention mechanisms, this module significantly enhances the model's capability to capture relationships between critical joints and bone directions in human poses, thereby providing richer and more precise feature representations for downstream pose estimation tasks.
# 3.4 Dynamic Fusion
Inspired by An et al. [32], we propose an attention-based dynamic feature fusion mechanism. This mechanism effectively fuses joint feature embeddings with bone direction embeddings, as illustrated in Fig. 2(d). The feature selection function Filter implements adaptive key-joint selection through a learnable threshold parameter $\mu$, balancing computational efficiency with precision. Here, $S_{high}$ denotes the attention scores, based on which the top-$\mu$ joint features with the strongest skeletal correlations are extracted. This process is a dynamic feature gating operation that learns to select a feature subset. The graph reconstruction function Reconstruction restores global joint features from individual joint node features according to all the bone direction features, with its detailed implementation described in Algorithm 1 (lines 5-7). This function achieves feature reconstruction through iterative topological diffusion. Specifically, starting from a single selected key joint feature $X_{JC}^{(i)}$ as the propagation seed, a Breadth-First Search (BFS) is performed based on the topological structure of the human skeletal graph $G_{J}$, while incorporating the bone direction features $X_{B}$ during traversal.
The process is mathematically formulated as:
$$ \mathcal{J}_{B}^{i} = BFS(G_{J}, X_{JC}^{(i)}, X_{B}) $$
$\mathcal{J}_{B}^{i}$ represents the global joint feature obtained from the joint feature $X_{JC}^{(i)}$. The BFS process takes $X_{JC}^{(i)}$ as input and encodes joint features based on the bone direction features $X_{B}$. The final joint descriptor features are obtained through aggregation and residual connections. This process is formulated as in Eq. (10):
$$ X_{DF} = \sum_{i=0}^{\mu-1} \mathcal{J}_{B}^{i} + X_{JC} $$
where $X_{JC}$ preserves the original joint information to prevent gradient vanishing. By integrating bone direction features with joint features obtained through attention mechanisms, this module enhances the spatial representational capacity of each joint. Through this approach, the model can accurately capture geometric relationships between joints and prioritize key points via attention mechanisms.
Input: $X_{JC}, X_{B}, S_{high}, G_{J}$
Output: Fused feature $X_{DF}$
1. \\ Filter
2. $Index = Top\_Indices(S_{high}, X_{JC})$
3. $\mathcal{X}_{JC} = \{X_{JC}^{i} \mid i \in Index\}$
4. \\ Reconstruction
5. for $X_{JC}^{i} \in \mathcal{X}_{JC}$ do
6. $\mathcal{J}_{B}^{i} = BFS(G_{J}, X_{JC}^{i}, X_{B})$
7. $\mathcal{J}_{B}.\mathrm{add}(\mathcal{J}_{B}^{i})$
8. end for
9.
$X_{DF} = \sum_{i=0}^{\mu-1} \mathcal{J}_{B}^{i} + X_{JC}$
# 3.5 Transformer Encoder
The conventional Transformer encoder can model global dependencies through multi-head self-attention, enabling each node to equally influence the others. However, its permutation-invariant property neglects the critical topological inductive bias in human pose estimation: interactions between anatomically adjacent joints are inherently stronger than those between distant nodes, thus requiring explicit encoding of structural relationships [33]. Existing graph positional encoding methods fail to effectively perceive node distances. Inspired by [34], as shown in Fig. 2(b), we introduce an enhanced Transformer encoder specifically for 2D-to-3D lifting in human pose estimation, incorporating a relative distance matrix derived from the human topology to regulate attention preferences toward distant nodes. We rescale the human joint topology matrix $A_{J}$ to adjust the attention weights:
$$ \dot{A} = \frac{1 + \exp(w)}{1 + \exp(w - A_{J})} $$
The hyperparameter $w$ controls the intensity of the distance-aware information, where larger $w$ values prioritize denser information from distant nodes. In this work, we set $w$ to facilitate information exchange between non-local nodes while maintaining balanced interactions.
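The element-wise rescaling of $A_J$ above maps adjacency values into attention multipliers, so entries for connected joint pairs are boosted relative to non-adjacent ones. A minimal sketch (the value $w = 1.0$ is an arbitrary illustration, since the exact setting is not restated here):

```python
import math

def distance_rescale(A, w=1.0):
    # A_dot = (1 + exp(w)) / (1 + exp(w - A)), applied element-wise.
    return [[(1 + math.exp(w)) / (1 + math.exp(w - a)) for a in row] for row in A]

A_J = [[0.0, 1.0], [1.0, 0.0]]          # toy 2-joint adjacency
A_dot = distance_rescale(A_J, w=1.0)
# Non-adjacent entries (A=0) map exactly to 1; adjacent ones (A=1) are amplified.
print(A_dot[0][1] > A_dot[0][0])  # True
```

Because the map is monotone in $A$, larger adjacency values always receive larger multipliers, while $w$ flattens or sharpens the gap between near and distant pairs.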
To preserve the original global joint features, the joint structural information extracted by the Cross-Attention module is injected into the multi-head attention layers of the Transformer encoder through residual connections, formulated as:
$$ X_{mid} = \mathrm{softmax}\left( \frac{\mathrm{ReLU}\big(Q_{g}^{(i)} K_{g}^{(i)^{T}}\big) \odot \dot{A}}{\sqrt{d_{g}}} \right) v_{g}^{(i)} + X_{JC} $$
The symbol $\odot$ denotes the element-wise product, and $1/\sqrt{d_{g}}$ is the attention scaling factor, where $d_{g}$ represents the dimensionality of the vectors in $K_{g}^{(i)}$. The FFN is then applied to $X_{mid}$ to generate the output of the Transformer encoder:
$$ FFN(X) = \sigma(\sigma(X_{mid} W_{1} + b_{1}) W_{2} + b_{2}) W_{3} + b_{3} $$
Here $W_{1}, W_{2}$ and $W_{3}$ are trainable matrices, while $b_{1}, b_{2}$ and $b_{3}$ denote bias terms. For the activation function $\sigma$, we employ the Gaussian Error Linear Unit (GELU).
# 3.6 Loss function
During model training, we adopt the Mean Per Joint Position Error (MPJPE) as the optimization objective. This metric optimizes the model by computing the mean Euclidean distance between predicted and ground-truth joint positions in 3D pose space over all joints. Its mathematical expression is:
$$ \mathcal{L} = \frac{1}{ZN} \sum_{i=1}^{Z} \sum_{j=1}^{N} \left\| Y^{(i,j)} - \hat{Y}^{(i,j)} \right\|_{2} $$
Here, $Y^{(i,j)}$ denotes the annotated 3D coordinates of the $j$-th joint in the $i$-th sample, $\hat{Y}^{(i,j)}$ represents the predicted coordinates of the corresponding joint output by the network, $Z$ is the batch size, and $N$ indicates the total number of human joints. Table 1.
Experimental comparisons on the Human3.6M dataset use 2D poses detected by CPN as network input. The symbol (&) indicates models utilizing temporal information, the symbol (*) denotes models employing sharpness-optimized input processing, and the best results are highlighted in bold.

# 4. Experiments

# 4.1 Datasets and evaluation metrics

This section presents comprehensive studies on two real-world 3D human pose estimation benchmark datasets to systematically validate the superiority of the proposed model. Human3.6M Dataset: As the most representative benchmark in 3D human pose estimation, the Human3.6M dataset [13] provides 3.6 million frames of multi-view motion data captured by four synchronized cameras at a 50 Hz sampling rate, covering 15 categories of daily activities performed by 11 subjects in indoor scenes. Following the standard experimental protocol, we adopt data from five subjects (S1, S5, S6, S7, S8) for model training and evaluate performance on two subjects (S9, S11). Two mainstream evaluation metrics are employed: Protocol 1 (MPJPE) measures absolute errors by computing the Euclidean distance (in millimeters) between predicted and ground-truth 3D joint coordinates; Protocol 2 (P-MPJPE) calculates relative errors after aligning predictions with the ground truth via Procrustes analysis. MPI-INF-3DHP Dataset: The MPI-INF-3DHP dataset [14] is a more challenging 3D human pose estimation benchmark, capturing 1.3 million frames of diverse poses from 8 subjects in mixed indoor/outdoor scenes using 14 cameras. Aligned with the settings in [11], [9], and [14], we use the Percentage of Correct Keypoints (PCK) under a 150 mm radius and the Area Under the Curve (AUC) as evaluation metrics. Table 2 presents experimental comparisons on the Human3.6M dataset using ground-truth 2D poses as network input. The symbol (*) indicates models utilizing temporal information.
Best results are highlighted in bold.

# 4.2 Implementation details

Our method is implemented in PyTorch on a single NVIDIA RTX 3090 GPU. The core architectural parameters are configured as follows: the Transformer encoder comprises $L = 6$ stacked layers, each self-attention layer contains $h = 8$ attention heads, and the feature embedding dimension is $d = 512$. During training, horizontal-flip data augmentation is applied to enhance model robustness, and the same flipping strategy is applied at test time for result ensembling. The optimization process employs the Adam optimizer with an initial learning rate of 0.001 and an exponential decay scheduler (decay rate $\gamma = 0.96$), and the model is trained for 40 epochs. For 2D pose detection, both the Human3.6M and MPI-INF-3DHP datasets use the Cascaded Pyramid Network (CPN) [30] as the base detector to ensure reliable 2D input features.

# 4.3 Comparison with the state of the art

Results on Human3.6M: As shown in Tables 1 and 2, when using 2D poses detected by CPN as input, our model outperforms existing methods on both the MPJPE (48.1 mm) and P-MPJPE (38.3 mm) metrics. Compared to state-of-the-art graph transformer approaches, PoseGRAF achieves a reduction in MPJPE of 10.6 mm over GraFormer[36] and 1.1 mm over GraphMLP[40]. Notably, PoseGRAF demonstrates superior 3D pose prediction accuracy in complex motion scenarios such as Phoning and Walking. Quantitative analysis reveals that the fusion of geometric features (joint positions, bone directions, and joint angles) significantly improves pose estimation accuracy, effectively enhancing the geometric consistency between predictions and ground-truth annotations. Results on MPI-INF-3DHP: We further validate the generalization capability of our model, PoseGRAF, using the MPI-INF-3DHP dataset, which contains diverse pose variations.
The model trained on Human3.6M is directly applied to regress 3D pose coordinates. As shown in Table 3, our method achieves state-of-the-art performance on both the PCK and AUC metrics. These results demonstrate that the proposed model exhibits strong generalization and effectively adapts to unseen data. Table 3. Results on MPI-INF-3DHP. Table 4. Ablation studies on Human3.6M with ground-truth 2D poses as network inputs. Qualitative Results: Fig. 4 compares the predictions of PoseGRAF, GraphMLP[40], and baseline models against ground-truth poses on representative samples from both datasets. Observations of the key regions marked with green and purple circles reveal that PoseGRAF consistently outperforms the baseline models and GraphMLP in pose prediction accuracy, regardless of pose complexity. Notably, PoseGRAF maintains precise 3D pose estimation even in highly dynamic motion scenarios, which we attribute to the dynamic fusion mechanism's effective modeling of geometric relationships in poses.

# 4.4 Ablation studies

To comprehensively evaluate the effectiveness of the model components, this study conducts systematic ablation experiments on the Human3.6M dataset. The experimental design covers validation of module effectiveness and model depth optimization. All experiments employ ground-truth 2D pose inputs to exclude interference from detection errors. Baseline Model: The baseline model consists of a Transformer encoder (6-layer × 8-head configuration) cascaded with a Joint GCN module, with a fixed feature embedding dimension of 512. Model Depth Optimization: As shown in Fig. 3, the Transformer encoder depth ($L$) and feature dimension ($D$) have significant impacts on model performance: MPJPE decreases steadily with increasing layers, reaching its optimal value of 48.1 mm at $L = 6$, but performance degrades when $L > 6$ due to gradient propagation attenuation (the error rebounds to 51.3 mm at $L = 8$). Analysis of the feature dimension indicates that $D = 512$ substantially enhances model capacity compared to $D = 256$. Consequently, the configuration $L = 6$, $D = 512$ is selected as the optimal setup. Module Effectiveness Validation: Through systematic ablation studies (Table 4), we quantify the individual contributions of the dynamic feature fusion mechanism and the bone direction graph convolutional network (B-GCN), analyzing their effects on 2D-to-3D pose estimation through comparative experiments. The dynamic fusion module constructs an association weight matrix between joint features and bone direction features via Cross-Attention, dynamically selecting critical feature subsets (Top-$\mu$). Compared to static fusion (which directly fuses all 17 joint features and increases MPJPE by 4.1 mm), dynamic fusion improves salient feature selection through attention weights, suppresses redundant interference, and significantly reduces joint localization errors under complex motions. Experiments further verify the role of the B-GCN. The baseline model (Transformer + J-GCN) achieves an MPJPE of 34.6 mm. Integrating the B-GCN with static fusion reduces the errors to 33.7 mm (MPJPE) and 26.60 mm (P-MPJPE), demonstrating that explicit bone direction modeling enhances geometric constraints. With additional Cross-Attention integration, MPJPE further decreases to 32.86 mm. Finally, the complete model with dynamic fusion (MPJPE = 32.1 mm, P-MPJPE = 25.0 mm) achieves a 7.2% error reduction over the baseline, attributed to its dual-path design: dynamic weighted graphs adaptively adjust feature association strength to precisely capture local motion patterns, while static adjacency graphs encode anatomical priors to reinforce skeletal connectivity. Fig. 3.
Architecture Parameter Analysis (Depth, Dimensions) in PoseGRAF. Evaluated on Human3.6M using MPJPE (mm) with CPN-detected 2D poses as network inputs. Fig. 4. 3D pose estimation visualizations for the Human3.6M (top three rows) and MPI-INF-3DHP (bottom three rows) datasets.

# 4.5 Qualitative results on video in-the-wild

To evaluate the model’s robustness in real-world open environments, this study designs a cross-domain generalization validation protocol to address the dual challenges of unknown camera parameters and complex dynamic scenarios. The system comprises three stages: a human detection module localizes target subjects in video frames, a high-precision 2D pose estimator based on HRNet extracts spatiotemporal keypoint sequences, and the pre-trained PoseGRAF is transferred to diverse motion videos for end-to-end 3D reconstruction. The test set covers highly challenging pose sequences, including high-intensity fitness exercises, gymnastics movements, and basketball play. Qualitative visualization results (see Fig. 5) demonstrate that, in completely unseen scenarios, our method consistently outputs anatomically plausible 3D human poses. Fig. 5. Qualitative results of our method for in-the-wild videos.
# 5. Conclusion

Existing monocular 3D pose estimation methods rely primarily on joint positional features while overlooking the intrinsic directional and angular correlations within the skeleton. As a result, they often produce implausible poses under joint occlusions or rapid motion changes. To address these challenges, we propose the PoseGRAF framework. We first construct a dual graph convolutional structure that separately processes joint and bone graphs, effectively capturing their local dependencies. A Cross-Attention module is then introduced to model the interdependencies between bone directions and joint features. Building upon this, a dynamic fusion module is designed to adaptively integrate both feature types by leveraging the relational dependencies between joints and bones. An improved Transformer encoder is further incorporated in a residual manner to generate the final output. Experimental results on the Human3.6M and MPI-INF-3DHP datasets show that our method outperforms state-of-the-art approaches. Additional evaluations on in-the-wild videos further validate its generalizability. The code is publicly available at https://github.com/iCityLab/PoseGRAF.
# 1 Introduction

Quantifying the spatial distributions of elastic properties, specifically Young’s modulus and Poisson’s ratio, is crucial in numerous applications, including biomedical imaging. Young’s modulus characterizes a material’s local resistance to elastic (reversible) axial deformation, while Poisson’s ratio quantifies the coupling between axial and transverse deformations. When external loads are applied, an internal stress field is established within the solid material, resulting in deformation (i.e., displacement) patterns that depend directly on the spatial distributions of the elastic properties. Accurate spatial characterization of elastic properties enables precise disease diagnosis (e.g., cardiovascular and airway diseases[1, 2, 3, 4, 5] and cancer[6, 7, 8]), development of biomedical devices and tissue engineering scaffolds,[9, 10, 11] assessment of structural integrity (e.g., bone health[12] and engineered and additively manufactured parts[13, 14, 15]), and precise computational mechanics modeling.[16, 17] Typically, elastic properties vary spatially within these materials, necessitating advanced methods to characterize their heterogeneity. Techniques for estimating elastic properties are generally classified into direct and indirect methods. Direct methods, such as nanoindentation and atomic force microscopy, quantify elasticity distributions by generating local deformations on the sample surface with a known force.[18, 19, 20, 21] However, these methods are limited to local areas on exposed surfaces and often require destructive procedures to access areas of interest, making them impractical for many applications. Indirect methods infer the elasticity distributions by measuring the displacement field resulting from a force applied externally to the surface. Medical imaging techniques, such as Magnetic Resonance Elastography (MRE)[22, 23] and ultrasound-based elastography methods,[24, 25] are commonly employed indirect methods.
Additionally, Digital Image Correlation (DIC) and Digital Volume Correlation (DVC) techniques are employed during mechanical testing to extract displacement fields from 2D and 3D imaging data, respectively.[26, 27, 28, 29, 30] Displacement measurements of materials with spatially varying elasticity often exhibit complex patterns and significant noise.[31, 32] Conventionally, strain-based elastography methods have been widely used in clinical applications: by assuming a uniform stress distribution, they obtain relative-scale elasticity distributions by inverting strains. To achieve approximately uniform stress, trained technicians apply the load uniformly over the surface; however, even uniform loading produces heterogeneous stress distributions within the material, a phenomenon known as stress localization. To achieve more robust and precise estimation of elastic properties from the displacement field, physics-informed approaches incorporate governing physical principles, typically formulated as partial differential equation (PDE) models such as those describing linear elasticity.[33] Estimating elasticity from PDE models is inherently ill-posed, often resulting in unstable or non-unique solutions.[34] These approaches fall into direct and iterative categories. Direct methods simplify or reformulate the linear elasticity PDEs into forms solvable analytically or numerically, and are typically limited to simple geometries or idealized conditions.[35, 36, 37] They generally require smooth, noise-free displacement and strain fields as well as auxiliary constraints such as the average Young’s modulus.
Minor errors from noise in the data or inaccuracies in the constraints can propagate through the estimation, degrading the accuracy and stability of the solutions.[35] Iterative methods use finite element models (FEM) to iteratively minimize discrepancies between measured and simulated outputs.[38, 39, 40, 41, 42] They are computationally intensive and heavily dependent on the initial conditions. In practice, displacement measurements often contain noise, and computing strains directly by numerical differentiation amplifies the errors, hindering accurate elasticity estimation.[43, 44, 45] Additionally, many methods assume incompressibility, which is unrealistic for the many solid materials that exhibit compressibility and heterogeneous Poisson’s ratios.[46, 47, 48] Recent advances in machine learning (ML), such as Gaussian process regression (GPR)[49] and deep neural networks (generative adversarial networks[50] and convolutional neural networks[51]), have enabled elasticity mapping from direct elasticity measurements. However, these purely data-driven methods typically exhibit limited generalization and reduced accuracy when applied to materials with complex heterogeneity or geometries. Physics-informed neural networks (PINNs), which integrate physical laws directly into the neural network architecture, have demonstrated success primarily in forward elasticity problems, predicting displacement or strain from known parameters.[52, 53, 54] Nonetheless, inverse elasticity estimation using PINNs remains significantly more challenging. Most existing inverse PINN approaches assume homogeneous elasticity, with a single constant value to be estimated.[47, 48, 55, 56, 57, 58, 59] Heterogeneous elasticity estimation is generally challenging because different elasticity values must be estimated at every point.
Recent works addressing heterogeneous elasticity estimation with PINNs frequently struggle with noisy displacement data and often produce relative rather than absolute Young’s modulus values.[46, 60, 61, 62, 63, 64] To simplify the problem, strict assumptions are often made, such as known true (internal or boundary) stress distributions,[62, 63] a known mean Young’s modulus,[46, 64] or incompressible materials.[46, 61] In this paper, we propose an Inverse Elasticity PINN (IE-PINN) model specifically designed to estimate the spatial distributions of the elasticity parameters, namely Young’s modulus and Poisson’s ratio, from noisy displacement data based on the governing physics of linear elasticity. Our IE-PINN model demonstrates robust performance against noise, enabling accurate recovery of the absolute Young’s modulus distribution rather than merely a relative one. A novel two-step approach incorporating loading-force boundary conditions facilitates precise absolute elasticity estimation. Combined with a specialized neural network architecture, our method achieves robust heterogeneous elasticity estimation with low errors from noisy displacement datasets.

# 2 Results and Discussion

# 2.1 Inverse Elasticity Physics-informed Neural Network (IE-PINN)

Estimating Young’s modulus and Poisson’s ratio from deformation fields constitutes an inherently ill-posed inverse elasticity problem, often yielding non-unique, unstable, and noise-sensitive solutions. Conventional PINNs mainly target forward problems, benefiting from automatic differentiation to evaluate the PDEs. However, automatic differentiation is ineffective for the inverse elasticity problem (Figure S1 in Supporting Information).
Recently, Elastnet successfully estimated heterogeneous elasticity from ideal, noise-free displacements;[64] however, it applies finite differentiation directly to the displacement measurement data without functional approximation, making it unstable under even small noise (Section 2.3). Additionally, Elastnet estimates only relative Young’s modulus distributions,[46, 64] requiring the true mean Young’s modulus to derive absolute values; in practice, the true mean Young’s modulus is typically not available. To overcome these critical limitations of noise sensitivity and the inability to estimate the absolute Young’s modulus scale, we propose the IE-PINN framework for robustly estimating heterogeneous Young’s modulus and Poisson’s ratio distributions from noisy displacement fields. Figure 1 illustrates the proposed method, which consists of two phases. In Phase 1, IE-PINN is trained on noisy displacement data, and the spatial distributions of the relative Young’s modulus and Poisson’s ratio are estimated. In Phase 2, the absolute scale of Young’s modulus is estimated (referred to as calibration), recovering the absolute scale of the Young’s modulus distribution by leveraging the relative stress distributions predicted by IE-PINN. To clearly demonstrate the advantages of IE-PINN, throughout Section 2 we employ a dataset from the Elastnet study,[64] in which the true spatial distribution of Young’s modulus takes a dragon shape and that of Poisson’s ratio a dog shape, with displacements simulated using FEM. Specifically, this study uses noisy displacement data generated by adding zero-mean Gaussian noise to the displacements. The standard deviation of the noise is set to 0.1% of the average displacements (i.e., a signal-to-noise ratio, SNR, of 1000, as described in Supplementary Note S1 in Supporting Information) to illustrate performance under mild noise conditions.
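The noise model just described can be sketched in a few lines of NumPy (the function name and fixed seed are our illustrative choices):

```python
import numpy as np

def add_displacement_noise(u: np.ndarray, snr: float = 1000.0,
                           rng=None) -> np.ndarray:
    """Corrupt a displacement field with zero-mean Gaussian noise whose
    standard deviation is mean(|u|)/snr, so snr=1000 reproduces the
    paper's 0.1% mild-noise setting."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigma = np.mean(np.abs(u)) / snr
    return u + rng.normal(0.0, sigma, size=u.shape)

# Example: a uniform unit displacement field receives noise of std 1e-3.
u = np.ones((200, 200))
noisy = add_displacement_noise(u, snr=1000.0)
```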
The IE-PINN architecture integrates the governing PDEs of linear elasticity, including the strain-displacement relation, the elastic constitutive relation, and the equilibrium equations. The core innovation enabling robust elasticity estimation in the presence of noisy displacements lies in the neural network framework, which comprises three deep neural networks: the displacement network, the extra strain network, and the elasticity network. The displacement network fits the noisy displacement data ($u_x$ and $u_y$) with respect to the spatial coordinates ($x$ and $y$). Neural networks can effectively mitigate the adverse effect of noise through smoothing,[65] and in a PINN such smoothing is regulated by the PDE equations. Using neural networks fitted to measurements (here, displacements) is a common scheme in PINNs. However, displacement fitting alone is highly sensitive in its second derivatives: fitting errors are amplified and propagated to the second-derivative function, making the inverse elasticity estimation unstable, especially when the data are noisy. To address this, IE-PINN incorporates a dedicated strain network, which decouples and predicts the strain, thereby significantly reducing the sensitivity in the second derivatives. The discrepancy between strains derived from the displacements and those directly predicted by the strain network is minimized through a strain discrepancy loss term. This approach substantially improves the stability and accuracy of elasticity estimation, even under noisy conditions. Figure 2 depicts the predictions made by IE-PINN for all relevant quantities, including denoised displacements, strains, stresses, and elasticity parameters. These predictions were trained on noisy displacement data, and the corresponding error maps are shown in Figure S2 in the Supporting Information. Another significant innovation of IE-PINN is its method for estimating absolute elasticity scales.
Accurate absolute elasticity estimation requires precise enforcement of boundary conditions related to the applied loading force. Directly incorporating these boundary conditions into PINN loss functions often causes ill-conditioned optimization and training failure.[66] To address this challenge, we calibrate the absolute scale of Young’s modulus by aligning the boundary force, computed from the stress predicted with the relative Young’s modulus estimated in Phase 1, with the experimentally measured loading force, which is often available in practice.[67, 68, 69] Figure 3 illustrates the calibration procedure employed in this study, using numerical integration of the predicted relative stress under the applied loading force; the technical details are described in Section 4.3. The proposed two-step approach enables effective estimation of the elasticity distributions at the correct absolute scale, obviates incorporating the boundary conditions directly during training, and maintains training stability.

Figure 1: Framework for heterogeneous elasticity estimation from noisy displacement data. The framework consists of two distinct phases: the Inverse Elasticity Physics-informed Neural Network (IE-PINN) training phase (Phase 1) and the Young’s modulus scale calibration phase (Phase 2). In the first phase, three neural networks are trained on spatial coordinates to predict mechanical quantities, namely displacement, strain, and elasticity (Young’s modulus and Poisson’s ratio), respectively. The displacement network is specifically employed to mitigate the adverse impact of noisy displacement measurements. The predicted displacements are used to compute the strain vector via the strain-displacement relation. The strain network fits the strain derived by the displacement network. The elasticity network predicts Young’s modulus and Poisson’s ratio from spatial coordinates.
Based on the constitutive equation, the stress tensor is derived from the strain tensor, Young’s modulus, and Poisson’s ratio. The static equilibrium loss is evaluated by finite differentiation of the stress field. The goal of the training process is to estimate the parameters $(\theta_u, \theta_\varepsilon, \theta_E)$ of all three neural networks by minimizing the total loss, which evaluates the displacement network fit, discrepancies in strain, deviations from the mean modulus constraint, and the partial differential equation (PDE) residuals of the equilibrium equations. In Phase 2, the relative stress predicted at the boundary in Phase 1, combined with the experimental loading boundary conditions, is used to recover the correct absolute scale $\hat{c}$, yielding an absolute-scale distribution of Young’s modulus.

Figure 2: The predicted fields of mechanical quantities. The model is applied to measured displacements with a signal-to-noise ratio of 1000. (a) The predicted Young’s modulus field (MPa), Poisson’s ratio field, and axial displacement field (mm). (b) The predicted strain field (%). (c) The predicted stress field (MPa).

This research applies the proposed IE-PINN to a thin-plate scenario under plane-stress conditions, predicting the stress distributions ($\sigma_{xx}$, $\sigma_{yy}$, and $\tau_{xy}$) using the constitutive and strain-displacement equations rather than assuming uniform distributions. The model minimizes a combined total loss function comprising displacement data, strain discrepancy, PDE residual, and mean modulus constraint losses during training.
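A minimal sketch of this combined objective (the equal loss weights and tensor shapes are our illustrative assumptions, not the paper's settings):

```python
import torch

def ie_pinn_total_loss(u_pred, u_obs, eps_net, eps_from_u,
                       pde_residual, E_pred, E_mean_target,
                       weights=(1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the four loss terms named in the text: displacement
    fit, strain discrepancy, PDE (equilibrium) residual, and mean-modulus
    constraint."""
    w_u, w_eps, w_pde, w_E = weights
    loss_u = torch.mean((u_pred - u_obs) ** 2)          # displacement loss
    loss_eps = torch.mean((eps_net - eps_from_u) ** 2)  # strain discrepancy
    loss_pde = torch.mean(pde_residual ** 2)            # equilibrium residual
    loss_E = (E_pred.mean() - E_mean_target) ** 2       # mean-modulus constraint
    return w_u * loss_u + w_eps * loss_eps + w_pde * loss_pde + w_E * loss_E
```

During Phase 1, all three network parameter sets would be updated by backpropagating through this scalar.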
Each loss term is designed to reduce a specific source of error: the displacement loss penalizes deviations between predicted and observed displacements; the strain discrepancy loss reduces the differences between the strain predictions from the strain network and those derived by differentiating the predicted displacements; the PDE residual loss enforces the governing equations by minimizing the PDE residuals computed from the strain and elasticity networks. Finite differences are employed for numerical differentiation. After training converges, IE-PINN reliably predicts the displacement, strain, stress, Young’s modulus, and Poisson’s ratio distributions. IE-PINN maintains excellent predictive accuracy, whereas Elastnet exhibits significant estimation failure under the same noisy conditions (see Section 2.2). We further evaluate the performance of the proposed method across multiple datasets with different spatial elasticity distributions in Figure 4. In the following sections, we systematically demonstrate the advantages of each component of IE-PINN: the neural network architecture (Sections 2.2 and 2.5), robustness to varying noise levels (Section 2.3), robustness of the calibration procedure to the mean modulus constraint (Section 2.4), and the impact of pretraining (Section 2.6).

# 2.2 Advantages of Displacement Fitting and Decoupled Strain Prediction

For robust elasticity estimation from noisy displacement data, fitting a neural network to the displacement measurements is essential. Figures 5(a) and 5(b) compare the estimates and error maps of Young’s modulus and Poisson’s ratio from our proposed IE-PINN model against two benchmarks. Elastnet directly applies finite differentiation to the (noisy) displacement measurements and the stresses calculated therefrom.[64] Direct differentiation significantly amplifies the noise, making the elasticity estimation vulnerable to it.
Figure 3: Young’s modulus scale calibration procedure. Upon completion of Phase 1 training, the IE-PINN provides a spatially varying relative Young’s modulus field $E(x, y)$. In Phase 2, the absolute scale is calibrated by incorporating the known loading boundary condition $F = \iint_{(x,y)\in\partial\Omega} \hat{c}\,\hat{\sigma}(x, y) \cdot n(x, y)\,dx\,dy$, approximated along the boundary $x = x_b$ as $F \approx \sum_{i} \hat{c}\,\hat{\sigma}_{xx}(x_b, y_i)\,h$, where $h$ is the spacing between points. The predicted boundary stress from the relative Young’s modulus is used to compute the resultant force, which is aligned with the experimentally applied loading force to recover the scale $\hat{c} = F \big/ \sum_i \hat{\sigma}_{xx}(x_b, y_i)\,h$ and thereby the absolute modulus, $E_{absolute}(x, y) = \hat{c}\,E(x, y)$.

Figure 4: The prediction error of Young’s modulus and Poisson’s ratio. The proposed model achieves a significantly low and consistently reliable mean relative error (MRE) across 50 independent datasets with noisy displacement data at a signal-to-noise ratio (SNR) of 1000, demonstrating robust accuracy in estimating the elasticity parameters.

Figure 5(a)(iii) and 5(b)(iii) demonstrate the detrimental impact of noise on Elastnet, which fails in the estimation of both Young’s modulus and Poisson’s ratio. Fitting displacement data alone can mitigate the adverse effects of noise to some extent. Figure 5(a)(ii) and 5(b)(ii) show that both Young’s modulus and Poisson’s ratio are estimated with some errors. Nonetheless, the displacement network brings another challenge in that it is sensitive in its second derivatives. Data-fitting errors propagated to the second derivative make the elasticity estimation unstable.
As a result, on other datasets, using the displacement neural network alone also often fails in elasticity estimation (Figures S3(a)(ii) and S3(b)(ii) in Supporting Information). Figures 5(a)(i) and 5(b)(i) present our proposed method, which significantly reduces the estimation errors in both Young’s modulus and Poisson’s ratio; the estimated values are close to the true ones. The strain network not only improves the accuracy of elasticity estimation but also significantly stabilizes it under noisy conditions. Under high noise, the displacement network alone is not sufficient; in contrast, the proposed IE-PINN successfully estimates both Young’s modulus and Poisson’s ratio, confirming its robustness against noise (Figures S4 and S5 in the Supporting Information).

# 2.3 Robustness to Displacement Noise

Noise in displacement data poses a significant challenge for accurate elasticity estimation. As noted above, noise in the displacements is amplified by the differentiation used to compute the strain and equilibrium equations, degrading the accuracy of elasticity estimation. To study the sensitivity of IE-PINN to noise in the displacement data, we investigate three signal-to-noise ratios (SNRs), defined as the ratio of the mean displacement to the standard deviation of the errors: 1000, 500, and 100. The predicted fields and error maps for Young’s modulus and Poisson’s ratio are presented in Figures 6(a) and 6(b), respectively. The mean absolute errors (MAE) of the IE-PINN predictions of Young’s modulus and Poisson’s ratio across noise levels are shown in Figure 7. The results demonstrate that the prediction errors remain low across different noise levels, highlighting the model’s robustness in elasticity estimation. The MAE for Young’s modulus at SNR 1000 is not significantly different from that at SNR 500 (Figure S4 in Supporting Information).
When the noise level is increased tenfold (SNR 100), the prediction errors for both Young’s modulus and Poisson’s ratio show a noticeable increase (Figure S5 in Supporting Information). # 2.4 Absolute Scale Calibration and Mean Young’s Modulus Constraint The boundary condition on loading force is critical as it determines the unit of Young’s modulus. Without this boundary condition, only the relative distribution can be obtained. However, integrating the boundary conditions directly into the loss functions of PINN often results in ill-conditioned optimization problems, which can cause training difficulties Young's modulus (E) Poisson's ratio (v) True (i) Proposed (ii) (iii) Elasnet True (i) Proposed (ii) (iii) Elasnet 1.0 0.5 B 0.8 x关关 0.4 0.6 0.2 0.4 0.1 0.2 0.0 02 心 0.0 −0.1 −0.2 −0.2 Neural net for MAE= 0.0134 MAE=0.0349 MAE= 0.1936 Neural net for MAE= 0.0150 MAE= 0.0692 MAE=5.92×108 Displacement: Included Included Not included Displacement: Included Included Not included Extra strain: Included Not included Not included Extra strain: Included Not included Not included (a) Estimated Young’s modulus across different models. (b) Estimated Poisson’s ratio across different models. Figure 5: Young’s modulus $( E )$ and Poisson’s ratio $( \nu )$ predictions and corresponding error maps were obtained from different models. All models were trained using the same noisy displacement data with a signal-to-noise (SNR) ratio of 1000. (i) IE-PINN (Proposed) incorporates both displacement and strain networks. (ii) Model with only a displacement network. (iii) Elastnet does not employ any functional approximation to fit displacement or strain.[64] Elastnet fails to learn meaningful Young’s modulus distribution when displacement data are noisy. Employing Function approximation over noisy displacement data tends to denoise the measurements and improve the stability. 
Additionally, incorporating a dedicated strain network enhances the robustness and accuracy of elasticity estimation.

Figure 6: Predicted (a) Young’s modulus $( E )$ and (b) Poisson’s ratio $( \nu )$, along with their corresponding error maps, evaluated across varying noise levels (signal-to-noise ratio, SNR). IE-PINN was trained on the same displacement data at three different SNRs: (i) SNR = 1000 (MAE = 0.0134 for $E$, 0.0150 for $\nu$), (ii) SNR = 500 (MAE = 0.0159 for $E$, 0.0220 for $\nu$), and (iii) SNR = 100 (MAE = 0.0293 for $E$, 0.0345 for $\nu$). Although the prediction errors increase as the noise level rises, the model performance remains robust across noise levels.

or failures. Previous methodologies typically assume prior knowledge of the true mean Young’s modulus[46, 64] or stress distributions,[62, 63] which are generally unknown in practical applications, complicating absolute-scale elasticity estimation. To overcome this limitation, IE-PINN initially estimates Young’s modulus with an arbitrary mean value, generating a relative modulus distribution. Subsequently, IE-PINN employs a novel calibration method to recover the absolute scale by aligning the predicted relative boundary stress distribution with experimentally measured loading forces, as illustrated in Figure 3. Figure 8 demonstrates the impact of various arbitrary mean constraints on the prediction errors before and after scale calibration (prediction accuracy is reported in Table S1 in Supporting Information). Initially, high MAE is observed before calibration due to inaccurate mean elasticity assumptions.
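The calibration idea can be sketched numerically: linear elasticity is scale-equivariant, so matching the resultant of the predicted relative boundary traction to the measured loading force recovers the absolute scale. Names below are illustrative, not from the paper's code:

```python
import numpy as np

def calibrate_scale(E_rel, traction_rel, F_measured, dx):
    """Recover the absolute scale of a relative Young's modulus field.

    Multiplying E by a constant s multiplies all stresses by s, so the
    scale reconciling the predicted relative boundary traction with the
    measured loading force is s = F_measured / (resultant of the relative
    traction). Illustrative sketch only.
    """
    F_rel = float(np.sum(traction_rel) * dx)  # resultant of relative traction
    s = F_measured / F_rel
    return s * E_rel, s

# Toy check: a "predicted" field that is the true one divided by 3
E_true = np.array([1.0, 2.0, 1.5])
traction_true = np.linspace(0.9, 1.1, 11)
dx = 0.1
E_abs, s = calibrate_scale(E_true / 3.0, traction_true / 3.0,
                           F_measured=float(np.sum(traction_true) * dx), dx=dx)
print(s)  # ≈ 3.0: the lost scale is recovered
```

Because the scale is recovered after training, the arbitrary mean constraint used in Phase 1 does not affect the final absolute-scale estimate, which is what Figure 8 reports.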
Nevertheless, the proposed calibration technique successfully identifies the correct scale from boundary stress predictions, achieving performance comparable to the ideal case. The figure confirms that the calibration performance is highly robust to the constrained mean modulus value. Additionally, Figure S6 in Supporting Information shows that the Poisson’s ratio prediction is consistently robust regardless of the imposed mean constraints. This independent calibration scheme maintains precise absolute-scale Young’s modulus estimation while preserving the training stability of IE-PINN.

Figure 7: Robustness of prediction errors across different noise levels. The predicted Young’s modulus and Poisson’s ratio exhibit strong robustness to noise, with only minimal degradation in accuracy even at higher noise levels.

Figure 8: Impact of the constrained mean Young’s modulus. Although the model is trained with various values of the constrained mean Young’s modulus, the proposed absolute-scale calibration technique effectively adjusts the relative Young’s modulus predicted by IE-PINN in Phase 1 to the corresponding absolute scale. The prediction errors remain consistent across different constrained mean values, demonstrating the robustness of the calibration approach.

# 2.5 Positional Encoding and Activation Functions

Neural networks with low-dimensional positional inputs (coordinates $x$ and $y$) often struggle to represent complex functions,[70, 71] resulting in ill-conditioned optimization and uninformative gradients. To address this, we integrate a positional encoding technique,[72] transforming the two-dimensional coordinates into a richer multidimensional latent representation. This significantly improves the accuracy of elasticity estimation (Figure S7 and Table S2 in Supporting Information). Another critical aspect influencing neural network performance in a PINN is the choice of activation functions.
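The two architectural ingredients of this subsection, positional encoding of the 2-D coordinates and a sine (SIREN-style) activation layer, can be sketched as follows. This is a minimal illustration; the frequency schedule and layer sizes are assumptions, not the paper's exact configuration:

```python
import numpy as np

def positional_encoding(xy, n_freq=6):
    """NeRF-style Fourier features for 2-D coordinates.

    Maps each coordinate p to [sin(2^k * pi * p), cos(2^k * pi * p)] for
    k = 0..n_freq-1, alongside the raw coordinates. The frequency schedule
    here is an assumption for illustration.
    """
    feats = [xy]
    for k in range(n_freq):
        feats.append(np.sin(2.0 ** k * np.pi * xy))
        feats.append(np.cos(2.0 ** k * np.pi * xy))
    return np.concatenate(feats, axis=-1)

def siren_layer(z, W, b, omega0=30.0):
    """A SIREN-style layer: sine activation with frequency scaling omega0."""
    return np.sin(omega0 * (z @ W + b))

xy = np.array([[0.25, 0.75]])          # one (x, y) collocation point
z = positional_encoding(xy)            # shape (1, 2 + 2*2*6) = (1, 26)
print(z.shape)
```

The encoded features feed the networks in place of the raw coordinates, giving the optimizer informative gradients even for high-frequency spatial variations of the elasticity field.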
Traditional activation functions often suffer from vanishing or exploding gradients during training, reducing training efficiency. IE-PINN employs the sine activation function (SIREN), which has proven effective for precise gradient and divergence prediction.[73] Figure 9 compares elasticity estimation errors across activation functions, specifically comparing SIREN with Swish, the activation used in Elastnet.[64] The same activation functions are used for displacement and strain fitting (denoted as Fitting) and for Young’s modulus and Poisson’s ratio (denoted as Elasticity). The results demonstrate that SIREN consistently yields the lowest MAEs for both Young’s modulus and Poisson’s ratio estimation.

# 2.6 Pretraining Strategy

Training a PINN is significantly more challenging than training conventional neural networks because of the complexity introduced by multiple interconnected networks and loss terms. IE-PINN involves three distinct neural networks, each with specific loss terms and roles in solving the linear elasticity PDEs: displacement fitting, strain discrepancy, and PDE residual losses. To enhance training efficacy, IE-PINN employs a sequential pretraining strategy. First, the displacement network is trained independently on the noisy displacement data (for 50,000 iterations) with only the data loss, which amounts to conventional neural network training. Next, the strain network is trained jointly with the displacement network by minimizing discrepancies with strains derived from the displacement network (for 100,000 iterations). Finally, all IE-PINN networks are trained using all loss terms. Figure 10 compares the performance of IE-PINN with and without the pretraining strategy at two noise levels (SNR of 1000 and 100) under the same total number of training iterations.
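The sequential schedule above (iteration counts from the text) amounts to a simple phase lookup over the training step. `training_phase` is a hypothetical helper for illustration, not the paper's code:

```python
def training_phase(step, pre1=50_000, pre2=100_000):
    """Active networks and loss terms under the sequential pretraining schedule.

    Phase 1: displacement network only, data loss only.
    Phase 2: + strain network, + strain-discrepancy loss.
    Phase 3: all three networks, all losses (including the PDE residual).
    Iteration counts follow the text; the helper name is illustrative.
    """
    if step < pre1:
        return {"nets": ["displacement"], "losses": ["data"]}
    if step < pre1 + pre2:
        return {"nets": ["displacement", "strain"],
                "losses": ["data", "strain_discrepancy"]}
    return {"nets": ["displacement", "strain", "elasticity"],
            "losses": ["data", "strain_discrepancy", "pde_residual"]}

print(training_phase(10_000)["nets"])     # ['displacement']
print(training_phase(120_000)["losses"])  # data + strain discrepancy
```

Gating the losses this way lets the displacement and strain networks settle on a denoised fit before the harder PDE-residual terms are switched on.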
The results demonstrate that pretraining improves prediction accuracy at both low and high noise levels. Further details are provided in Figure S8 of the Supporting Information.

Figure 9: Performance comparison across different activation functions. The prediction errors in Young’s modulus $( E )$ and Poisson’s ratio $( \nu )$ are presented for models using various activation functions (Fitting denotes the displacement and strain networks; Elasticity denotes the elasticity network). Among the activation functions evaluated, SIREN achieved the highest accuracy across all elastic property predictions.[73]

Figure 10: Impact of the pretraining strategy. The prediction errors in Young’s modulus $( E )$ and Poisson’s ratio $( \nu )$ are compared between two training strategies: a pretraining scheme that trains the neural networks sequentially versus simultaneous training. Models are evaluated on noisy displacement data at two noise levels, corresponding to SNRs of 1000 and 100. The results demonstrate that the pretraining strategy significantly improves the prediction accuracy of both elasticity parameters, achieving up to a $50 \%$ reduction in error.

# 2.7 Related Work and Future Study

Several studies have explored inverse physics-informed neural networks for elasticity estimation. Early approaches primarily considered homogeneous materials with constant elasticity assumptions; some relied on both displacement and stress measurements,[47, 55, 56, 57] while others used only displacement data.[48, 58, 59] More recent efforts have extended to heterogeneous materials for spatially varying elasticity estimation. Some studies assume incompressible material behavior and estimate only Young’s modulus while holding Poisson’s ratio constant.[46, 61]
Because integrating the boundary condition on the loading force often degrades performance, some works exclude boundary conditions and obtain a relative Young’s modulus distribution rather than an absolute scale.[61, 64] Prior knowledge of the true mean Young’s modulus can be incorporated as an additional loss to obtain an accurate elasticity distribution.[46, 64] Other methods use strain data under the assumption of a known boundary stress distribution, which is typically unavailable in real-world scenarios.[62, 63] Elastnet uses strain data for incompressible materials in its earlier version[46] and displacement data for compressible materials in its more recent version,[64] employing finite-difference approximations to estimate elasticity. Unlike typical PINN-based methods, Elastnet does not fit a functional model (e.g., a neural network); instead, it directly applies numerical differentiation to all variables, including noisy measurements, making it particularly sensitive to noise, as discussed in Section 2.2. Additionally, boundary conditions are not incorporated into its estimation, so the estimated Young’s modulus is expressed on a relative rather than an absolute scale. In contrast, our proposed IE-PINN is robust to measurement noise and enables estimation of heterogeneous Young’s modulus on an absolute scale, based on an externally applied force that can be experimentally measured using localized mechanical testing techniques such as ultrasound,[67] nanoindentation,[68] and atomic force microscopy.[69] Several gaps in the current research remain to be addressed in future work. In many clinical applications, the measurement data are of low spatial resolution, making precise elasticity estimation particularly challenging. Additionally, the extension to three-dimensional (3D) elasticity estimation remains an open research direction.
The 3D displacement data experimentally measured using the DVC method often contains measurement noise, further complicating the problem. Advancing 3D inverse elasticity estimation would benefit a broad range of applications, including disease diagnosis through biomedical imaging, materials design in manufacturing, and structural analysis in construction, such as detecting internal defects or optimizing the behavior of composite materials under complex loading conditions.
Accurately estimating spatially heterogeneous elasticity parameters, particularly Young's modulus and Poisson's ratio, from noisy displacement measurements remains significantly challenging in inverse elasticity problems. Existing inverse estimation techniques are often limited by instability, pronounced sensitivity to measurement noise, and difficulty in recovering absolute-scale Young's modulus. This work presents a novel Inverse Elasticity Physics-Informed Neural Network (IE-PINN) specifically designed to robustly reconstruct heterogeneous distributions of elasticity parameters from noisy displacement data based on linear elasticity physics. IE-PINN integrates three distinct neural network architectures dedicated to separately modeling displacement fields, strain fields, and elasticity distributions, thereby significantly enhancing stability and accuracy against measurement noise. Additionally, a two-phase estimation strategy is introduced: the first phase recovers relative spatial distributions of Young's modulus and Poisson's ratio, and the second phase calibrates the absolute scale of Young's modulus using imposed loading boundary conditions. Additional methodological innovations, including positional encoding, sine activation functions, and a sequential pretraining protocol, further enhance the model's performance and robustness. Extensive numerical experiments demonstrate that IE-PINN effectively overcomes critical limitations encountered by existing methods, delivering accurate absolute-scale elasticity estimations even under severe noise conditions. This advancement holds substantial potential for clinical imaging diagnostics and mechanical characterization, where measurements typically encounter substantial noise.
# 1 Introduction

Large Language Models (LLMs) have demonstrated remarkable capabilities in automatic code generation, enabling developers to translate natural language descriptions into executable programs (Hong et al., 2023; Liu et al., 2024a). However, as coding tasks grow in complexity, relying on a single LLM instance (single-agent) to handle all aspects of code generation becomes increasingly challenging. To address this, recent studies have explored multi-agent systems (Huang et al., 2023; Islam et al., 2024) where multiple LLM-powered agents collaborate to solve intricate problems through structured workflows (Hong et al., 2023). These multi-agent systems decompose complex programming tasks into sub-tasks, assigning them to specialized agents with tailored prompts, enhancing execution and output quality.

Figure 1: Agent and workflow evolution on an example task ("Implement a Python function sum_squares(n) that calculates the sum of squared integers from 1 to n inclusive"). The initialized workflow contains a single Code Generation Agent whose output loops over range(1, n) (wrong); the evolved workflow adds Code Rewriting, Task Rewriting, and Code Reviewing Agents, and its output loops over range(1, n+1) (correct).

Despite their effectiveness, current multi-agent systems rely heavily on manually designed workflows, where both the workflow topology and the agents’ prompts are manually crafted, hindering their adaptability to more complex coding tasks. For instance, a workflow optimised for a machine learning task (Chi et al., 2024) differs significantly from one tailored for a software development task (Qian et al., 2023). Manually crafting workflows for each task is inefficient and does not leverage LLMs’ full potential for autonomous adaptation. To address these limitations, we propose Self-Evolving Workflow (SEW), a novel framework designed to automatically generate and optimise multi-agent workflows.
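The Figure 1 example hinges on a classic off-by-one bug in the loop bound. A runnable reconstruction (the function bodies are filled in for illustration; only the annotated range error comes from the figure):

```python
def sum_squares_buggy(n):
    # range(1, n) omits n itself: the bug in the initial agent's output
    total = 0
    for i in range(1, n):
        total += i ** 2
    return total

def sum_squares(n):
    # range(1, n + 1) includes n: the evolved workflow's corrected output
    total = 0
    for i in range(1, n + 1):
        total += i ** 2
    return total

print(sum_squares_buggy(3), sum_squares(3))  # 5 14
```

For n = 3, the buggy version sums 1 + 4 = 5 while the corrected version sums 1 + 4 + 9 = 14, the kind of subtle defect a Code Reviewing Agent in the evolved workflow is meant to catch.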
In particular, SEW achieves this by leveraging a novel evolutionary scheme to improve workflows, i.e., both the workflow topology and the prompt of each agent. Figure 1 shows the agent and workflow evolution in code generation. In addition, to effectively represent agentic workflows in textual format, we explore and compare five different representation schemes, namely BPMN (White, 2004), CoRE (Xu et al., 2024b), Python code (Zhang et al., 2024c), YAML (Zhang et al., 2024b), and pseudo-code (Xiao et al., 2024). We evaluate each scheme based on how well it can be interpreted and optimised by our SEW framework, aiming to identify the optimal scheme for workflow representation and optimization. Our contributions are: (1) We investigate different workflow representation schemes, namely BPMN, Python, CoRE, YAML, and pseudo-code, to determine the most effective format for LLM interpretation; (2) Unlike prior work that builds agents by assembling predefined operators, our framework automatically constructs agentic workflows from scratch, conditioned solely on task descriptions; (3) We introduce a self-evolving workflow design approach, SEW, where LLMs jointly improve workflow structures and agent prompts to optimise performance; (4) We conduct extensive experiments on three benchmark datasets, including MBPP, HumanEval, and LiveCodeBench, demonstrating that SEW can consistently improve workflow performance through self-evolution.

# 2 Related Work

# 2.1 Workflow Representations in Agents

In multi-agent systems, workflows establish structured information flows and task execution pipelines, enabling agents to solve complex problems (Hong et al., 2023; Gao et al., 2024). While natural language can describe workflows, its inherent ambiguity often leads to inconsistent interpretations, hindering precise task execution across agents (Xu et al., 2024b). To address this challenge, several studies have introduced specific representation schemes for standard operating procedures (SOPs).
For example, Business Process Model and Notation (BPMN) (White, 2004) is a graphical modeling language designed to depict workflows by specifying the execution order of activities. Similarly, Code Representation and Execution (CoRE) (Xu et al., 2024b) provides a unified framework that integrates natural language programming, pseudo-code, and flow-based programming to improve workflow representation and execution. Additionally, Python code (Zhang et al., 2024c; Xu et al., 2024a), YAML (Qiao et al., 2023; Zhang et al., 2024b), and pseudo-code (Xiao et al., 2024; Li et al., 2025) are also commonly employed to define and manage agentic workflows. # 2.2 Self-Evolving Agents Existing agentic methods often yield suboptimal responses when prompts are poorly constructed. To address this, prompt optimization techniques (Zhou et al., 2022; Fernando et al., 2024; Agarwal et al., 2024; Liu et al., 2024b) have moved beyond static, manually crafted in-context prompts. For instance, automatic prompt engineer (APE) (Zhou et al., 2022) enhances prompts by searching through a pool of candidates. Similarly, Promptbreeder (Fernando et al., 2024) employs LLMs to mutate and evolve a population of task-specific prompts. MIPRO (Opsahl-Ong et al., 2024) is an optimizer designed to enhance multi-stage language model programs by refining both instructions and fewshot examples for each module. In multi-agent systems, recent studies have explored the evolution of agentic workflows and topologies (Zhang et al., 2024a; Zhou et al., 2024, 2025; Zhang et al., 2025). For example, MASS (Zhou et al., 2025) exploits the optimization of both prompt and workflow over a configurable topology space. Similarly, AFlow (Zhang et al., 2024a) employs a Monte Carlo Tree Search to enhance workflow efficiency, while EvoFlow (Zhang et al., 2025) introduces a framework for the automated search of heterogeneous agentic workflows. 
EvoAgent (Yuan et al., 2024) is designed to automatically extend expert agents into multi-agent systems using evolutionary algorithms. In contrast, our SEW introduces a self-evolving mechanism that leverages diverse workflow representation schemes, jointly optimising prompts for both agents and their workflow. # 3 SEW Task Definition. We focus on the task of code generation, a task that requires multi-agent collaboration (Hong et al., 2023), aiming to produce executable code based on a textual coding problem. To tackle this task, we deploy an LLM-based multi-agent system to generate code, where each agent processes a textual prompt and produces a corresponding textual output. We define the textual prompt of an LLM agent $a$ as $\tau$ and a sequence of LLM agents, i.e., a workflow as $W$ . Figure 2: The overall framework of SEW. The process begins with workflow generation, followed by workflow evolution. Then each agent within the evolved workflow will be equipped with enhanced prompts generated by the agent evolution module. Such an agent evolution module is driven by the Direct Evolution (DE) operator and Hyper Evolution (HE) operator, leveraging LLMs, where we use a mutation prompt $\mathcal { T } _ { m u t }$ or a hyper-mutation prompt $\mathcal { T } _ { h m u t }$ to enhance the prompt of an agent. Preliminary. Evolutionary prompts are central to SEW. Rather than relying on training data, SEW employs LLMs as mutation operators by concatenating the evolutionary prompts with the task prompt to generate a more effective task prompt. We define two evolutionary operators, namely the Direct Evolution (DE) operator $\mathcal { F } ( \cdot )$ and the Hyper Evolution (HE) operator $\mathcal { H } ( \cdot )$ , where $\mathcal { F } ( \cdot )$ and $\mathcal { H } ( \cdot )$ take a workflow $W$ or an agent $a$ as the input and output an enhanced workflow $W ^ { \prime }$ or an agent $a ^ { \prime }$ . 
Specifically, the $\mathcal{F}(\cdot)$ and $\mathcal{H}(\cdot)$ operators leverage (1) mutation prompts $\mathcal{T}_{\mathrm{mut}}$, (2) hyper-mutation prompts $\mathcal{T}_{\mathrm{hmut}}$, and (3) thinking-style prompts $\mathcal{T}_{\mathrm{think}}$ (Fernando et al., 2024). Figure 3 shows examples of these evolutionary prompts and how they are evolved by both DE and HE.

Overview of SEW. Our SEW framework consists of three main modules: (a) Workflow Generation, (b) Workflow Evolution, and (c) Agent Evolution, as illustrated in Figure 2. SEW first generates an initial workflow based on the task description using one of the representation schemes introduced in Section 4. Second, the workflow evolution module of SEW leverages our evolution method to reconstruct the initial workflow. Finally, inspired by Promptbreeder (Fernando et al., 2024), our agent evolution module applies either the agentic DE or agentic HE method to equip each agent with a more sophisticated prompt. The pseudo-code of SEW is shown in Algorithm 1.

Workflow Generation. To generate workflows, we use an LLM to produce default workflows based on the given task description ${\mathcal{D}}^{2}$ and a template workflow $W^{temp}$. A template workflow can be denoted with different workflow representation schemes. In particular, SEW explores five schemes, namely Business Process Model and Notation (BPMN) (White, 2004), Code Representation and Execution (CoRE) (Xu et al., 2024b), Python code, YAML, and pseudo-code, described in detail in Section 4. Figure 4 shows two examples of the template workflow. From the workflow generation process shown in Algorithm 1, we obtain a set of default workflows $W^{def}$. Later, we present how the workflow evolution module rearranges and modifies the structure of $W^{def}$.

Workflow Evolution.
To formalise the workflow evolution process of SEW, we first define a workflow $W$ represented with a certain representation scheme $rep$, where all $W$ in $rep$ are in textual format. We use the DE operator $\mathcal{F}(\cdot)$ to generate an evolved workflow as follows:

$$ W' = \mathcal{F}(W_{def} | \mathcal{T}_{\mathrm{mut}}), $$

Figure 3: Examples of the evolutionary prompts and how agents’ prompts are evolved by Direct Evolution (first- and second-order, applying a mutation prompt once or twice) and Hyper Evolution (zero- and first-order, first evolving the mutation prompt itself via thinking-style or hyper-mutation prompts, then applying it to the agent’s prompt).

where $W'$ is the self-evolved workflow and $\mathcal{T}_{\mathrm{mut}}$ is the
mutation prompt, and $\mathcal{F}(\cdot)$ represents the operation in which an LLM takes $W_{def}$ and $\mathcal{T}_{\mathrm{mut}}$ as input and outputs $W'$ (see Figure 3 for more details). It should be noted that the mutation prompt $\mathcal{T}_{\mathrm{mut}}$ cannot guarantee that $W'$ is a valid workflow; for example, $W'$ may not strictly follow the format of the representation scheme. To measure the validity of $W'$, we define two rates: the Logical Successful Rate (LSR) and the Generation Successful Rate (GSR). The LSR denotes the probability that a generated $W'$ is valid, and the GSR denotes the probability that the output of $W'$ is executable Python code. Specifically,

$$ LSR = \frac{\sum_{i=1}^{|W'|} \mathbb{I}(\mathrm{isValid}(W_i'))}{|W'|} \quad \text{and} \quad GSR = \frac{\sum_{i=1}^{|W'|} \mathbb{I}(\mathrm{isPython}(\mathrm{output}(W_i')))}{|W'|}. $$

By measuring the LSR and GSR of a representation scheme, we can determine which scheme is most suitable for SEW.

Agent Evolution. After modifying the structure of workflows using the workflow evolution module, the next step is to modify each agent’s prompt. Similar to the workflow evolution, the agent evolution also relies on the mutation prompt.
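The two validity rates defined above are straightforward to compute given a scheme-specific validator and a workflow executor; `is_valid` and `run` below are hypothetical stand-ins for those components:

```python
import ast

def is_python(src):
    """GSR check: does a workflow's output parse as Python? (illustrative)"""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

def success_rates(workflows, is_valid, run):
    """Compute LSR and GSR over a set of evolved workflows W'.

    LSR = fraction of W' that are structurally valid for the scheme;
    GSR = fraction whose executed output is parseable Python code.
    `is_valid` and `run` are assumed callables (validator / executor).
    """
    n = len(workflows)
    lsr = sum(is_valid(w) for w in workflows) / n
    gsr = sum(is_python(run(w)) for w in workflows) / n
    return lsr, gsr

# Toy example with stub validator/executor
wfs = ["w1", "w2", "w3", "w4"]
lsr, gsr = success_rates(
    wfs,
    is_valid=lambda w: w != "w4",                               # 3/4 valid
    run=lambda w: "def f():\n    return 1" if w != "w3" else "def f(:",
)
print(lsr, gsr)  # 0.75 0.75
```

Using `ast.parse` as the executability proxy only checks syntax, not runtime behavior; a stricter GSR could run the generated code against test cases instead.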
As mentioned earlier, we use Direct Evolution (DE) and Hyper Evolution (HE) to improve an agent: DE modifies an agent’s prompt by directly applying a mutation prompt to it, while HE first modifies the mutation prompt and then applies the modified mutation prompt to the agent.

Agentic Direct Evolution. To enhance the performance of an agent, SEW directly applies the mutation prompt $\mathcal{T}_{\mathrm{mut}}$ to the agent’s prompt using the direct evolution operator as follows:

$$ a' \gets \mathcal{F}(a | \mathcal{T}_{\mathrm{mut}}), $$

Algorithm 1: Self-Evolving Workflow
Input: Task description $\mathcal{D}$, workflow template $W^{temp}$, mutation prompt $\mathcal{T}_{\mathrm{mut}}$, hyper-mutation prompt $\mathcal{T}_{\mathrm{hmut}}$, thinking-style prompt $\mathcal{T}_{\mathrm{think}}$
Output: Optimized workflow $W'$
1. Workflow Generation: $W_{def} \gets$ GenerateWorkflows$(\mathcal{D}, W^{temp})$
2. Workflow Evolution: for each workflow in $W_{def}$: $W' \gets \mathcal{F}(W_{def} | \mathcal{T}_{\mathrm{mut}})$
3.
Agent Evolution: for each agent $a$ in $W'$, select an evolution method:
- First-order DE: $a' \gets \mathcal{F}(a | \mathcal{T}_{\mathrm{mut}})$
- Second-order DE: $a'' \gets \mathcal{F}(\mathcal{F}(a | \mathcal{T}_{\mathrm{mut}}) | \mathcal{T}_{\mathrm{mut}})$
- Zero-order HE: $a' \gets \mathcal{H}(a | \mathcal{H}(\mathcal{T}_{\mathrm{des}} | \mathcal{T}_{\mathrm{think}}))$
- First-order HE: $a'' \gets \mathcal{H}(a | \mathcal{H}(\mathcal{T}_{\mathrm{mut}} | \mathcal{T}_{\mathrm{hmut}}))$
Return $W'$.

where $a$ is an agent and $a'$ is the agent with the modified prompt; we define the operation above as the first-order direct evolution. Building on it, we propose the second-order direct evolution:

$$ a'' \gets \mathcal{F}(\mathcal{F}(a | \mathcal{T}_{\mathrm{mut}}) | \mathcal{T}_{\mathrm{mut}}) $$

By applying second-order direct evolution, we aim to further enhance the performance of an LLM agent.

Agentic Hyper Evolution. Different from Direct Evolution, Hyper Evolution focuses on generating more effective mutation prompts. In other words, HE first modifies the mutation prompt $\mathcal{T}_{\mathrm{mut}}$ and then uses the new mutation prompt $\mathcal{T}_{\mathrm{mut}}'$ to improve an agent’s prompt.
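Operationally, both DE and HE reduce to prompt concatenation passed through an LLM. A minimal sketch with a trivial stub in place of the LLM call (all names are illustrative, not from the paper's code):

```python
def llm(prompt):
    """Stand-in for an LLM call; a trivial transform for illustration."""
    return f"[improved] {prompt}"

def direct_evolution(agent_prompt, t_mut):
    # First-order DE: F(a | T_mut), mutate the agent's prompt directly
    return llm(f"{t_mut}\n\n{agent_prompt}")

def hyper_evolution(agent_prompt, t_mut, t_hmut):
    # First-order HE: first evolve the mutation prompt itself, then apply it
    t_mut_new = llm(f"{t_hmut}\n\n{t_mut}")
    return llm(f"{t_mut_new}\n\n{agent_prompt}")

a1 = direct_evolution("You are a proficient Python programmer.",
                      "Make this instruction bolder.")
# Second-order DE: F(F(a | T_mut) | T_mut), apply the mutation prompt twice
a2 = direct_evolution(a1, "Make this instruction bolder.")
print(a2.startswith("[improved]"))  # True
```

With a real LLM in place of the stub, the same composition pattern yields the first-order, second-order, and hyper-evolved prompts used by SEW.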
Formally, we define the zero-order hyper evolution as below:

$$ a' \gets \mathcal{H}(a | \mathcal{H}(\mathcal{T}_{\mathrm{des}} | \mathcal{T}_{\mathrm{think}})) $$

Figure 4: Two examples of the template workflow, represented with the BPMN (top) and CoRE (bottom) schemes.

# BPMN_workflow
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
<process id="software_dev_workflow" isExecutable="true">
<startEvent id="start" />
<task id="parse_task" name="Parse Task" />
<sequenceFlow id="flow6" sourceRef="refine_code" targetRef="end" />
</process>
</definitions>

# CoRE_workflow
Step 1:::Process:::Parse Task:::next::Step 2
Step 2:::Process:::Refine Task:::next::Step 3
Step 3:::Process:::Generate Code:::next::Step 4
Step 4:::Process:::Review Code:::next::Step 5
Step 5:::Process:::Refine Code:::next::Step 6
Step 6:::Terminal:::End of Workflow:::

where $\mathcal{T}_{\mathrm{think}}$ are text descriptions of general cognitive heuristics (Fernando et al., 2024). For zero-order HE, we use the general cognitive heuristics $\mathcal{T}_{\mathrm{think}}$ to generate useful prompts for solving the problem described by the task description $\mathcal{D}$. Similar to how we use the mutation prompt $\mathcal{T}_{\mathrm{mut}}$ to modify an agent’s prompt, we can use a hyper-mutation prompt instead of $\mathcal{T}_{\mathrm{think}}$ to modify $\mathcal{T}_{\mathrm{mut}}$, which is defined as first-order HE. From Eq. 4, a new mutation prompt is generated from the task description and some cognitive heuristics.
Alternatively, we can use a hyper-mutation prompt to directly generate new variants from $\mathcal{T}_{\mathrm{mut}}$ as follows:

$$ a'' \gets \mathcal{H}(a | \mathcal{H}(\mathcal{T}_{\mathrm{mut}} | \mathcal{T}_{\mathrm{hmut}})) $$

Finally, by combining the workflow evolution and agent evolution, our SEW can generate more effective variants of workflows for solving the code generation task. In the next section, we present and compare the five representation schemes that can be leveraged by SEW.

# 4 Workflow Representation

To generate a workflow using an LLM, appropriate textual workflow representation schemes are essential. In fact, while it is straightforward to execute a workflow using code, representing it in natural language is non-trivial. A well-designed representation scheme should capture the structural and semantic components of a workflow and be easily interpreted by LLMs for downstream modification. As discussed in the related work section, we explore five textual representation schemes that can denote workflows: Business Process Model and Notation (BPMN) (White, 2004), Code Representation and Execution (CoRE) (Xu et al., 2024b), Python code, YAML, and pseudo-code. The choice of these five schemes was driven by their distinct advantages in representing and executing agentic workflows, particularly in the context of the self-evolving agentic workflows that our method, SEW, aims to optimise.

BPMN: This graphical standard is well-established in business process modeling and widely recognized for its ability to clearly depict the order of tasks and their dependencies.

CoRE: CoRE integrates natural language programming, pseudo-code, and flow-based programming, and is a strong candidate for agentic workflows.
It allows workflows to be directly executable and interpretable by LLMs, offering advantages for our self-evolving framework.

Python: As a widely adopted programming language, Python is not only familiar to many practitioners but also flexible in terms of representing workflows through its readable syntax and extensive ecosystem of libraries. For agentic workflows requiring programmatic execution, Python allows for easy integration and adaptation of agents into working solutions.

YAML: YAML is a human-readable data serialisation format widely used for configuration files and workflow definitions due to its simplicity and readability. YAML's flexibility in representing hierarchical data structures makes it well-suited for workflows that need to be configured or defined by humans but executed by machines.

Pseudo-code: Pseudo-code is a high-level representation that is often used for illustrating algorithms and workflows in a way that is easy for both humans and machines to understand. Pseudo-code offers an abstraction that bridges natural language and formal code, making it an excellent choice for expressing workflows that need to be easily read and modified.

To clearly illustrate the differences between workflow representation schemes, we present an example agentic workflow represented using both the BPMN and CoRE schemes in Figure 4. In Figure 4, a software development pipeline, consisting of sequential tasks such as parsing input, refining content, generating code, reviewing, and iterating improvements, is represented by BPMN and CoRE, respectively. Each stage is represented as a task node, while dependencies between tasks are captured as sequence flows, ensuring clear process execution. Although denoted with different representation schemes, they shall perform the same function when executed$^3$.
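Because the CoRE listing in Figure 4 is plain text, it can be parsed mechanically. A minimal sketch, with the field layout (name:::type:::instruction:::next-pointer) read off the example rather than taken from the CoRE specification:

```python
# Parse the CoRE workflow listing from Figure 4 into an ordered step list.
# The field layout is inferred from the example; this is an illustrative
# reading of the scheme, not an official CoRE parser.

def parse_core(text: str) -> list[dict]:
    steps = []
    for line in text.strip().splitlines():
        name, kind, instruction, *rest = line.split(":::")
        nxt = rest[0].removeprefix("next::") if rest and rest[0] else None
        steps.append({"name": name.strip(), "type": kind,
                      "instruction": instruction, "next": nxt})
    return steps

core = """\
Step 1:::Process:::Parse Task:::next::Step 2
Step 2:::Process:::Refine Task:::next::Step 3
Step 3:::Process:::Generate Code:::next::Step 4
Step 4:::Process:::Review Code:::next::Step 5
Step 5:::Process:::Refine Code:::next::Step 6
Step 6:::Terminal:::End of Workflow:::"""

steps = parse_core(core)
```

An executor can then walk the `next` pointers from Step 1 until it reaches a `Terminal` step, which is what makes CoRE directly executable while remaining legible to an LLM.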
These five schemes were chosen for their diverse capabilities in representing workflows and their practical utility in a self-evolving framework, where agents and workflows are dynamically generated and optimised. Our exploration of these schemes aims to identify the most suitable representation for evolving agentic workflows in code generation tasks, where LLMs are leveraged for both understanding and executing the workflows.

# 5 Experiments

# 5.1 Dataset

To examine our proposed SEW framework, we choose the LiveCodeBench (LCB) (Jain et al., 2024) dataset, which is a comprehensive benchmark designed to evaluate the coding capabilities of LLMs. We randomly sampled 100 samples from the code generation subset of LCB$^4$ for validation and the remaining 300 samples for testing. In addition, we also use the MBPP (Austin et al., 2021) and HumanEval (Chen et al., 2021) datasets following the data split in AFlow (Zhang et al., 2024a). To evaluate performance on the code generation task, each method is required to generate 10 candidate solutions per sample. We use pass@1, pass@5, and pass@10 as evaluation metrics.

# 5.2 Baselines

We compare our proposed SEW against five baseline prompting techniques across two different backbone models (i.e., GPT-4o mini and Gemini-1.5-pro-002) on three code generation tasks (i.e., HumanEval, MBPP, and LCB): (1) Backbone Models (GPT-4o mini and Gemini-1.5-pro-002). (2) Chain-of-Thought (CoT) (Wei et al., 2022): Uses reasoning steps explicitly stated within the prompt. (3) Automated Design of Agentic Systems (ADAS) (Hu et al., 2024): A methodology that leverages meta-agent frameworks to automatically design and optimise agentic systems. (4) AFlow (Zhang et al., 2024a): An automated framework that efficiently explores and optimises agentic workflows using Monte Carlo Tree Search.
(5) PromptBreeder (Fernando et al., 2024): A gradient-free evolutionary framework that improves agents by iteratively mutating and selecting prompt variants.

Table 1: Performance comparison (pass@1) between our SEW and baselines. '-' refers to out-of-time errors, where the LLM executor has been trapped in executing accidental scripts with infinite loops. We adopt two LLMs, i.e., GPT-4o mini and Gemini-1.5-pro, as backbone models for all methods.

# 5.3 Experimental Setup

We conduct an exhaustive search over self-evolved workflows $W'$ represented by the five schemes: BPMN, CoRE, Python, YAML and pseudo-code. We use all mutation prompts to evolve workflows under each of the five schemes. Although various types of workflows are generated during the self-evolution process, not all of them are valid for code generation tasks. Among all generated workflows, the task parsing workflow and the code rewriting workflow$^5$ are more effective than their counterparts. In particular, variants based on these two workflows can largely outperform competitive baselines; hence, we choose the best variant to represent our SEW.

Table 2: Logic Success Rate (LSR) and Generation Success Rate (GSR) for Business Process Model and Notation, Code Representation and Execution, Python, YAML and pseudo-code.

# 5.4 Main Results

To compare the performance of SEW and other baselines, we adopt two backbone models, i.e., GPT-4o mini and Gemini-1.5-pro-002. From Table 1, we find that (1) SEW can largely outperform those two backbone models in both settings; (2) SEW is more effective than CoT, a robust prompting technique for enhancing an LLM's ability to solve complex tasks by breaking them down into sequential thought processes; (3) when leveraging the same backbone model, our SEW outperforms other state-of-the-art workflow designing methods such as ADAS and AFlow.
Therefore, we can conclude that our SEW framework is more effective than different types of baselines under the same setting in the code generation task. In addition, we observe that across the three datasets, methods using GPT-4o mini as the backbone generally outperform those using Gemini-1.5-pro-002. Hence, to save space, we report only the analysis of SEW (GPT-4o mini) in the following sections.

# 5.5 Analysis

RQ1: Which scheme is the most effective for structuring agentic workflows? To identify the most suitable workflow scheme for LLMs among the five, we conducted an exhaustive search using various mutation prompts. For a given workflow $W$ represented in Python, 100 different mutation prompts generated 100 variants. If 50 of these variants are parsable and 30 can generate executable code, the LSR and GSR for Python are 50% and 30%, respectively. Notably, LSR is always greater than or equal to GSR, as not all parsed workflows can complete the task. As shown in Table 2, BPMN and Python achieved the highest LSR at 87.3%. However, their GSR performance was suboptimal, whereas the recently proposed CoRE method achieved the best GSR. This suggests that while traditional BPMN and Python representations are easier for LLMs to parse, the CoRE method – which integrates natural language programming, pseudo-code programming, and flow programming – is the most effective for workflow representation. We therefore conclude that CoRE enables optimal comprehension and utilisation when denoting agentic workflows.

Table 3: Performance comparison (pass@1) between the default version of two representative workflows generated from workflow evolution and their improved variants using agent evolution. All workflows use GPT-4o mini as their backbone model.

RQ2: How do SEW's workflow evolution and agent evolution modules affect the performance of code generation?
To understand how our workflow evolution and agent evolution modules affect the performance of workflows generated by SEW, we select two representative workflows generated by SEW, namely the task parsing workflow and the code rewriting workflow. We chose these two workflows since most of the variants built upon them bring large improvements. Specifically, the task parsing workflow leverages an agent to first parse the task and then sends the parsed result to a coding agent to generate the code. In comparison, the code rewriting workflow incorporates a code generation agent to produce an initial solution, a code reviewing agent to determine whether this solution passes the tests, and a code rewriting agent to rewrite the code based on the reviewer's feedback.$^6$ Notably, the workflow evolution module is designed to generate novel workflow structures, while the agent evolution module focuses on creating effective prompts for each agent. In particular, we compare: (1) workflows generated by the workflow evolution module versus those produced by the backbone model, and (2) workflows generated by the workflow evolution module versus those that incorporate both workflow and agent evolution. As shown in Table 3, the task parsing and code rewriting workflows produced by SEW consistently outperform the GPT-4o mini backbone model across three datasets. This initial improvement suggests that our workflow evolution module generates novel workflow topologies more effectively than relying solely on the LLM. Building on these novel workflows, the agent evolution module further enhances performance by generating high-quality prompts for each agent. Specifically, our agent evolution module improves the performance of the task parsing workflow by 20.3% on the LCB dataset.
In summary, our results demonstrate that the workflow evolution module effectively produces novel workflow structures, and the agent evolution module further unlocks their potential by injecting high-quality prompts.

Figure 5: Performance comparison of Code Rewriting and Task Parsing Workflows under different agent evolution strategies on the LCB dataset.

RQ3: How do different agentic evolution strategies affect the performance of workflows generated by SEW? We have introduced the Direct Evolution (DE) and Hyper Evolution (HE) operators, where for each we proposed its corresponding lower-order and higher-order versions. To examine the effectiveness of different operators, we randomly sampled five different mutation prompts and used them to generate five different variants of each of the two workflows mentioned earlier for each operator. We use four box plots to illustrate the performance distribution of these two workflows on the LCB dataset. From Figure 5, we can observe that HE consistently demonstrates lower variance than DE, by comparing the first and second rows of Figure 5. The variance of both workflows under zero-order hyper evolution is especially small. This indicates that the HE operators, particularly zero-order HE, exhibit superior robustness compared to DE, as they are less sensitive to variations in mutation prompts across different tasks. In terms of best performance, DE, especially second-order DE, tends to achieve higher peak performance on certain metrics, such as pass@10 for the Code Rewriting Workflow, where it reaches up to 0.580. This suggests that DE can optimize for specific high-performance outcomes. On the other hand, HE, while slightly lower in peak performance, provides a more balanced and reliable performance profile, making it more suitable for consistency.
Therefore, the choice between DE and HE depends on the requirements of the task: DE is preferable for maximizing performance, while HE is better suited for real-world applications where robustness is more important. In addition, higher-order evolutions (Second-order DE and First-order HE) are better suited for tasks that require maximizing performance and can tolerate some variability, while lower-order evolutions (First-order DE and Zero-order HE) provide higher robustness.
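The pass@k figures reported throughout this section can be computed with the standard unbiased estimator of Chen et al. (2021), applied to the 10 candidate solutions sampled per problem; a minimal sketch:

```python
from math import comb

# Unbiased pass@k estimator (Chen et al., 2021): given n sampled solutions
# of which c pass the tests, pass@k = 1 - C(n-c, k) / C(n, k). The
# experiments above sample n = 10 candidates per problem and report
# pass@1, pass@5 and pass@10.

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:  # every size-k subset must contain a passing solution
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 3 of 10 candidates pass: pass@1 = 0.3, pass@10 = 1.0
```

Averaging `pass_at_k` over all problems in a benchmark yields the table entries; the estimator avoids the bias of naively sub-sampling k of the n candidates.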
Large Language Models (LLMs) have demonstrated effectiveness in code generation tasks. To enable LLMs to address more complex coding challenges, existing research has focused on crafting multi-agent systems with agentic workflows, where complex coding tasks are decomposed into sub-tasks that are assigned to specialized agents. Despite their effectiveness, current approaches heavily rely on hand-crafted agentic workflows, with both agent topologies and prompts manually designed, which limits their ability to automatically adapt to different types of coding problems. To address these limitations and enable automated workflow design, we propose \textbf{S}elf-\textbf{E}volving \textbf{W}orkflow (\textbf{SEW}), a novel self-evolving framework that automatically generates and optimises multi-agent workflows. Extensive experiments on three coding benchmark datasets, including the challenging LiveCodeBench, demonstrate that our SEW can automatically design agentic workflows and optimise them through self-evolution, bringing up to 33\% improvement on LiveCodeBench compared to using the backbone LLM only. Furthermore, by investigating different representation schemes of workflow, we provide insights into the optimal way to encode workflow information with text.
[ "cs.SE", "cs.AI", "cs.CL" ]
# 1 Introduction

Large language models (LLMs) have transformed the capabilities of conversational AI, advancing chatbots and overcoming their previous limitations to facilitate more nuanced and contextually aware interactions [21]. In contrast to traditional rule-based chatbots with constrained response patterns, LLM-based conversational agents demonstrate enhanced context comprehension and linguistic adaptability. The rapid adoption of this technology across diverse industries, with implementations in customer service, healthcare, user profiling domains, and more [20, 24], is a reflection of its transformative potential, and market analysis projects exponential growth in the coming years [12]. LLM-powered chatbots excel in natural language processing but often fail to tailor responses to users' technical proficiency, learning styles, or communication preferences [28]. This lack of personalization is especially problematic in specialized domains with varying user knowledge levels. In professional and educational settings, misalignment between user expertise and chatbot responses can lead to frustration and reduced engagement [28]. For example, technical support chatbots may overwhelm novices with complex jargon or oversimplify explanations for experts, impacting these chatbots' ability to achieve their intended purposes. Recent work on conversational AI personalization has explored dialogue style adaptation and expertise-based response generation [4, 14]. However, most approaches rely on explicit user questionnaires or static predefined profile categories, lacking implicit (manual input-free) and dynamic (adaptive) profiling. To address this gap, we introduce $ProfiLLM$, a framework for continuous user profiling through chatbot interactions. $ProfiLLM$ features a domain-adaptable taxonomy and an LLM-based inference method that profiles users implicitly, without questionnaires or direct assessments.
For evaluation, we adapt $ProfiLLM$ for IT/cybersecurity (ITSec), where user proficiency shapes troubleshooting conversations. Specifically, we built an ITSec-oriented chatbot with a $ProfiLLM^{ITSec}$ module. We also designed a technical proficiency questionnaire, completed by 63 human responders, to establish ITSec profile archetypes. From these archetypes, we generated 352 synthetic users via random sampling with noise. After rigorous data refinement, we evaluated $ProfiLLM^{ITSec}$ using 1,315 high-quality chatbot conversations on ITSec troubleshooting. Results show that with an optimized configuration (Sec. 4.2), $ProfiLLM^{ITSec}$ achieves a rapid accuracy boost, improving profiling by 55–65% after just one prompt, with further gains thereafter. The contributions of our work can be summarized as follows:

1. We introduced a novel LLM-based framework for dynamic, fully implicit user profiling based solely on chatbot interactions.
2. We proposed a structured taxonomy for ITSec proficiency modeling and developed a corresponding questionnaire for user assessment.
3. We presented an LLM-based approach for persona generation, persona-driven chatbot interaction simulation, and data quality assurance.
4. We conducted a rigorous empirical evaluation of all components and share our findings.
5. We compiled a dataset of ITSec troubleshooting conversations, labeled with users' ITSec profiles. To support further research, we publicly release this dataset and related code$^1$.

# 2 Proposed Method

$ProfiLLM$ comprises two key components, which are detailed next: (1) the taxonomy generation component, in which a domain-specific taxonomy is generated, and (2) the profile inference component, in which a user profile (in terms of the taxonomy) is inferred.
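The profile that these two components produce and consume can be sketched as a flat vector of subdomain scores grouped by a domain-to-subdomain taxonomy. The names below are illustrative placeholders (not the 23 subdomains of Table 1); the neutral prior of 3 follows the average-proficiency assumption used before any prompt is observed (Sec. 4.3):

```python
# Minimal profile structure for ProfiLLM-style taxonomy profiling.
# High-level domains group low-level subdomains; only subdomains carry
# 1-5 proficiency scores. Names here are illustrative, not Table 1.

taxonomy = {
    "Hardware":      ["Storage", "Peripherals"],
    "Networking":    ["Configuration", "Shared Drives"],
    "Cybersecurity": ["Malware", "Phishing"],
}

# Profile P: one score p_j per subdomain, initialised to the neutral
# prior p = 3 assumed before any user prompt is observed.
profile = {sub: 3.0 for subs in taxonomy.values() for sub in subs}

def domain_score(domain: str) -> float:
    """Aggregate a high-level domain as the mean of its subdomain scores."""
    subs = taxonomy[domain]
    return sum(profile[s] for s in subs) / len(subs)
```

Keeping scores only at the subdomain level and deriving domain scores on demand mirrors the paper's design, where higher taxonomy levels exist purely for logical categorization.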
# 2.1 Domain-Adapted Taxonomy Generation

To ensure consistency and precision across all user interactions, $ProfiLLM$ takes a structured approach to user profiling. In practice, the user's profile $P$ (i.e., the user's knowledge or proficiency) is hierarchically broken down into $n$ subdomains $p_1, p_2, \ldots, p_n \in P$. These $n$ low-level subdomains are the profile elements directly inferred (scored) by $ProfiLLM$, while the higher-level domains in the taxonomy enable logical categorization. A domain-specific list of profile subdomains should aim at maximum comprehensiveness, so that as many chatbot interactions as possible are both assigned to at least one relevant subdomain and directly scored. At the same time, the subdomain list should not be over-complicated, which would make the taxonomy sparse and impractical. We were unable to identify any existing taxonomy relevant to the domain of ITSec. To address this gap, we created a novel taxonomy based on a thorough review of various relevant sources, including syllabuses of IT support courses offered by IBM, Google, and Dell on Coursera [6], governmental resources, research papers, technological forums, surveys on prevalent ITSec complaints, and online tutorials and blogs [2]. The ITSec taxonomy we created, consisting of 23 subdomains grouped into five domains, is presented in Table 1.

Table 1: Our proposed taxonomy for ITSec proficiency profiling.

# 2.2 User Profile Inference in Terms of the Taxonomy

This component of $ProfiLLM$ infers the user profile in various taxonomy subdomains based on their interactions with the conversational interface. As shown in Fig. 1, for every prompt submitted by the user, (1) the prompt is assigned to one or more of the taxonomy subdomains, (2) the prompt and its context are separately given a score for each assigned subdomain, and (3) these new scores are weighted against the existing related scores to update the profile scores.

Fig.
1: User Profile Inference in Terms of the Taxonomy

Prompt Assignment to Taxonomy Subdomains. Each conversational iteration (interaction) $i$ between a user $u$ and the chatbot begins with $prompt_i^u$. Given $prompt_i^u$ and its context window $W_i^u$ (the preceding {user prompt, chatbot response} pairs), $ProfiLLM$ uses crafted system prompts to identify relevant subdomains in the taxonomy. This contextual and focused analysis ensures that the profile assessment is tied to the particular areas discussed by the user.

Subdomain-Specific Profile Scoring. For each assigned subdomain (profile element) $p^j$, $ProfiLLM$ assesses the chatbot user's knowledge level on a discrete five-point scale, where higher scores indicate greater expertise. For instance, a prompt such as "How can I decrypt .locky files encrypted by ransomware?" would be categorized under "Cybersecurity/Malware" and temporarily scored as $p_i^{CS/Malware,temp} = 5$. Conversely, "My PC wants money to free my locked files" would be assigned to the same subdomain but scored much lower, e.g., $p_i^{CS/Malware,temp} = 2$. This scoring mechanism evaluates (1) concept complexity, (2) terminology appropriateness, (3) depth of understanding in $prompt_i^u$'s formulation, and (4) contextual relevance to prior interactions.

Score Aggregation via Weighted Averaging. In each iteration $i$, for every assigned subdomain $j$, $ProfiLLM$ updates $p_i^j$ via weighted averaging (Eq. 1):

$$ p_i^j = \alpha_i^j \cdot p_i^{j,temp} + (1 - \alpha_i^j) \cdot p_{i-1}^j $$

Here, the temporary score $p_i^{j,temp}$ is weighted by $\alpha_i^j$, while the previous score $p_{i-1}^j$ is weighted by $1 - \alpha_i^j$.
Since $\alpha_i^j$ is dynamic, early interactions prioritize new input ($\alpha_i^j$ is higher), ensuring rapid adaptation when the existing profile score $p_{i-1}^j$ is less reliable. As confidence in $p_{i-1}^j$ increases, $\alpha_i^j$ gradually decreases, stabilizing the profile score. The confidence weight $\alpha_i^j$ follows an inverse time-decay function (Eq. 2), adjusting based on the number of interactions $i$:

$$ \alpha_i^j = \frac{\alpha_0}{1 + \beta \cdot i} $$

In our experiments, we optimize (1) the decay rate $\beta$, which controls how quickly $\alpha_i^j$ stabilizes, and (2) the context window size $|W|$, i.e., the maximum number of recent {user prompt, chatbot response} pairs used for scoring.

# 3 Evaluation Method

We empirically evaluated the generic $ProfiLLM$ framework by (1) implementing an ITSec troubleshooting chatbot and a $ProfiLLM^{ITSec}$ instance, (2) curating dozens of ground-truth human profiles, (3) generating hundreds of synthetic users, seeded using cluster centroids of the human profiles, and (4) analyzing thousands of chatbot interactions held by the synthetic users. We curated (rather than assumed) ground-truth human profiles to make the evaluation realistic. Inspired by previous research [25], we leveraged these profiles (persona archetypes) by generating numerous comparable synthetic users, which automatically held a large number of human-like ITSec-related conversations with the chatbot.

# 3.1 $ProfiLLM^{ITSec}$ Implementation

Our experiments were conducted on Azure, providing access to various pretrained LLMs (Sec. 4.2). We used OpenAI's GPT-4o as the backend for our ITSec troubleshooting chatbot, developed within this setup. Both the chatbot and $ProfiLLM^{ITSec}$ were implemented using the LangChain framework.
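The score-update rule of Eqs. 1–2 amounts to a few lines of code. The sketch below uses $\alpha_0 = 0.8$ and $\beta = 0.1$, the values used in the experiments (Sec. 4.2):

```python
# Sketch of ProfiLLM's weighted score update (Eqs. 1-2): a subdomain score
# is a weighted average of the new temporary score and the previous score,
# with the weight alpha decaying over the subdomain's interaction count i.

ALPHA_0, BETA = 0.8, 0.1  # alpha_0 and the grid-searched decay rate beta*

def alpha(i: int) -> float:
    """Eq. 2: inverse time-decay confidence weight."""
    return ALPHA_0 / (1 + BETA * i)

def update_score(prev: float, temp: float, i: int) -> float:
    """Eq. 1: blend new evidence with the existing subdomain score."""
    a = alpha(i)
    return a * temp + (1 - a) * prev

# First prompt assigned to a subdomain (i = 1), prior score 3.0, new
# evidence scored 5: the score moves most of the way toward the evidence.
p1 = update_score(prev=3.0, temp=5.0, i=1)  # alpha ≈ 0.727, p1 ≈ 4.45
```

As `i` grows, `alpha(i)` shrinks, so later prompts perturb an established profile less and less, which is exactly the stabilizing behavior described above.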
# 3.2 Human Ground-Truth Labeling

To curate ground-truth human profiles, we developed an ITSec proficiency questionnaire. For each of the 23 subdomains in the taxonomy (Table 1), this online questionnaire$^2$ assesses users' proficiency level using three types of questions:

1. Self-assessment questions (1–5 scale): Evaluating the user's perceived proficiency, e.g., "How would you rate your knowledge on network configuration?"
2. Conceptual questions (1–5 scale): Measuring the user's understanding of technical concepts, e.g., "How well do you understand DHCP and DNS services?"
3. Practical questions (Binary, 1 or 5): Assessing the ability to apply knowledge in real scenarios, e.g., "Have you configured static IP addresses on devices?"

Our questionnaire's 1–5 scale is based on the EU's DigComp [23]. It retains the original proficiency levels – foundation (2), intermediate (3), advanced (4), and highly specialized (5) – with an additional lower level, no knowledge (1). A subdomain's final score is the average of the three corresponding questionnaire responses. The user's ITSec profile is then represented as a numeric vector, where each 1–5 entry indicates their proficiency in a specific subdomain.

# 3.3 Generation of Human-Like Synthetic Users

After curating a variety of human user profiles, we performed cluster analysis and used the resulting centroids as ITSec persona archetypes. The first step in generating a synthetic user involved randomly selecting one of these centroid vectors and adding random noise (from a uniform distribution) to each subdomain score. Preliminary experimentation showed that directly using the numeric profile vector often produces synthetic conversations that lack human-like quality, making them unrepresentative. To address this, we developed a prompt that converts a numeric profile vector into a textual persona description.
For low-proficiency profiles, this prompt includes the following text: "... Create a detailed, realistic human persona... based on the technical proficiency profile... Consider personas like... traditional craftspeople who work entirely manually, ... people who prefer face-to-face interactions and paper-based systems... reflect their technical proficiency level through a realistic and engaging narrative... (400-500 words): [Name] is a [age]-year-old [traditional profession] who excels at [hands-on skill/community role]... rely entirely on [traditional methods/tools] to [accomplish goals]... When faced with modern technology, they [coping strategy that emphasizes reliance on family/community]...". As in previous research [1], a detailed persona description was created to enhance LLM-generated conversations; however, we add the challenge of starting from a long numeric profile vector.

# 3.4 Conversation Data Collection and Refinement

Each generated synthetic user was tasked with solving ITSec scenarios exclusively through interactions with our troubleshooting chatbot:

1. HW – Flickering monitor, jerky mouse movements, unresponsive keyboard
2. NT – Files on shared drives occasionally corrupt
3. CB – Suspicious calendar invites from colleagues who deny sending them
4. SW – Programs freeze when switching windows or tabs
5. OS – System notifications appear delayed or out of order

The (synthetic) users were instructed to maintain consistent behavior and phrase their prompts strictly in accordance with their assigned persona. Upon completion, another LLM evaluated the conversations based on two criteria: the alignment between the user's profile and their behavior/phrasing, and the naturalness of the conversation flow, i.e., its resemblance to human interactions. Conversations not meeting either criterion were excluded from the dataset.
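The profile-sampling step of Sec. 3.3 (archetype centroid plus uniform noise, clipped to the 1–5 scale) can be sketched as follows; the noise half-width of 0.5 is an assumption, as the paper states only that the noise is uniform:

```python
import random

# Sketch of synthetic-profile sampling: pick a persona archetype (a cluster
# centroid over the 23 subdomain scores), add per-subdomain uniform noise,
# and clip back to the 1-5 scale. The noise width is an assumed value.

def sample_synthetic_profile(centroids, noise=0.5, rng=random):
    base = rng.choice(centroids)
    return [min(5.0, max(1.0, s + rng.uniform(-noise, noise))) for s in base]

# Illustrative advanced / intermediate / novice archetypes (23 subdomains)
centroids = [[4.5] * 23, [3.0] * 23, [1.5] * 23]
synthetic_profile = sample_synthetic_profile(centroids)
```

Each sampled vector is then rendered into a textual persona (via the prompt quoted above) before driving a simulated conversation.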
# 3.5 Performance Metrics

For a subdomain, the absolute error (AE), representing the difference between the actual and inferred profile scores, serves as the simplest performance metric. For a domain, or an entire profile, we measure $ProfiLLM^{ITSec}$'s performance using the mean absolute error (MAE), assuming equal subdomain weights. An ideal model's MAE would quickly converge to zero, i.e., close the gap between the actual and predicted profile scores using only a small sequence of user prompts.

# 4 Results

In this section, we present a data overview and hyperparameter tuning, followed by progressive evaluation of various aspects using the optimal configuration.

# 4.1 Experimental Data Collected

Our ITSec questionnaire (Sec. 3.2) was completed by 63 participants from diverse academic and vocational backgrounds. $k$-Means clustering ($k = 3$) yielded the most consistent user groupings, supported by the Elbow Method and Silhouette Score analysis. The resulting centroids (Fig. 2a) define three stable ITSec persona archetypes: advanced (high scores across most subdomains), intermediate, and novice. Using these centroids, 352 synthetic users were generated (Sec. 3.3), engaging in 1,760 ITSec-related conversations, assessed for quality using Claude. Most were rated highly for profile alignment and conversational flow (Fig. 2b). Applying an 8.5 quality threshold, the final dataset comprised 1,315 conversations from 263 users. The associated data processing overhead is detailed in Sec. 4.9.

# 4.2 Hyperparameter Optimization: $LLM$, $\beta$ and $|W|$

Table 2 presents the results of a grid search conducted using a sample of conversations from 30 randomly selected users. Our goal was to identify the optimal configuration – the combination of $LLM$, $\beta$, and $|W|$ – that minimizes the MAE. Note that the MAE at iteration $i$ ($MAE@i$) is not calculated chronologically based on the overall prompt sequence.
Instead, since a subdomain's score is updated only when a prompt is specifically assigned to it, $MAE@i$ corresponds to the $i^{th}$ prompt assigned within each subdomain. For example, $MAE@2$ considers only the AEs computed at the second prompt assigned to each of the 23 profile subdomains. Thus, $MAE@2$ answers the question: For any given subdomain, what is the expected AE after exactly two prompts have been assigned to it?

Fig. 2: Experimental data collected: cluster centroids and conversation quality

Table 2: Grid search over $LLM$, $\beta$ and $|W|$ to find the optimal combination that minimizes the MAE. Results are displayed as Mean ± St. Dev. Note: Lower values indicate better performance (smaller error).

Using $\alpha_0 = 0.8$, Table 2 shows that within the first five iterations, GPT-4o consistently outperforms all other evaluated LLMs, predominantly with a context window of length 1. In iterations 1–3, $\beta = 0.1$ also consistently leads to the lowest MAE. Although in iterations 4–5 other $\beta$ values perform better, we decided to set the optimal configuration $ProfiLLM^{ITSec*}$ as $LLM^* = $ GPT-4o, $|W|^* = 1$ and $\beta^* = 0.1$. The reason for doing so is evident in Table 3, which shows that half of the subdomain-specific user prompt sequences are of length 4 or less. That is, since interactions on a specific subdomain are relatively short, the practical optimality of the hyperparameter configuration ($LLM^*$, $|W|^*$ and $\beta^*$) depends on the decrease in MAE during the first few interactions.

Table 3: Distribution of subdomain-specific user prompt sequence lengths

# 4.3 Performance across Domains and Subdomains

After optimizing the hyperparameters using a random sample of users and chatbot interactions, we applied $ProfiLLM^{ITSec*}$ to the entire refined experimental dataset (Sec.
4.1), evaluating its performance across all domains and subdomains. As illustrated in Fig. 3, the MAE behaved as anticipated in most cases:

Fig. 3: $ProfiLLM^{ITSec*}$'s performance across profile domains and subdomains

– In iteration 0, where without any prior user information we assume average ITSec proficiency ($p = 3, \forall p \in P$), typically $MAE < 1.0$, meaning that the actual proficiency scores range mainly between 2.0 and 4.0.
– In almost all cases, $MAE@1$ shows a marked improvement in profile inference, with approximately a 30% reduction compared to $MAE@0$, on average.
– During the first few iterations, the MAE fluctuates, probably due to the relatively high $\alpha_i$ of the temporary profile scores $p^{j,temp}$ when $i$ is low (Eq. 2).
– Later on, the gradual decrease in $\alpha_i$ is reflected in diminishing bumpiness in the MAE, which eventually converges to [0.3, 0.65], on average per domain. Practically, a profiling error in this range is negligible for the intended response adaptation (e.g., adjusted jargon and complexity). For instance, a prompt instructing a user to check their free disk space would be phrased similarly for users with $p^{HW/Storage}$ of 3.3 or 3.8.

Although the previously described pattern – a marked decrease in MAE, followed by fluctuations and gradual convergence – is generally evident, performance varied across subdomains, with some performing better and others worse. To investigate these differences, we (1) evaluated $ProfiLLM^{ITSec*}$'s performance across the three persona archetypes, (2) examined the effect of prompt length, (3) conducted an ablation study, (4) experimented with human users, and (5) qualitatively analyzed a few conversations, as elaborated next.

# 4.4 Performance across Persona Archetypes

Although the MAE ultimately converges within the range [0.3, 0.65] (Sec. 4.3), Fig.
4a illustrates noticeable variation during the initial iterations, with intermediate users differing from advanced and novice users: 1. In the first iteration, the gap between the actual and predicted profiles decreases by $65\%$ for novice users (a marked improvement after just one prompt) and $55\%$ for advanced users, but increases by $75\%$ for intermediate users. 2. In subsequent iterations, the MAE for novice and advanced users fluctuates and then declines, whereas for intermediate users, it stabilizes at 0.6. This aligns with expectations: highly and barely proficient users tend to use distinctly professional or unprofessional terms, easing profile inference. As noted in Sec. 4.3, the impact of new (informative) prompts diminishes as $\alpha_i$ decreases.

# 4.5 Performance as a Function of the Prompt Length

Intuitively, longer prompts provide more cues for user profiling. To test this empirically, we examined how prompt length (word count) affects the reduction ($\%$) in the gap between actual and predicted profile scores for any assigned subdomain. To isolate effects, we set $\alpha_i = 1$ (ignoring history, full weight on new prompts) and $|W| = 0$ (no context window, considering only the prompt). As shown in Fig. 4b, no statistically significant correlation was found.

Fig. 4: (a) MAE across persona archetypes; (b) Effect of prompt length (correlation coefficients near zero for all archetypes)

# 4.6 Ablation Study

To benchmark $\mathrm{ProfiLLM}^{ITSec}$ in the absence of closely related methods or labeled public datasets, and to assess the contribution of each component, we conducted an ablation study using four variations of $\mathrm{ProfiLLM}^{ITSec*}$: 1.
" $^ { \prime \prime } A s \ i s \ ^ { \prime \prime }$ : Default configuration with optimized hyperparameters. 2. " $^ { \prime \prime } \alpha _ { i } = 0 . 5 ^ { \prime \prime }$ : Fixed $5 0 \%$ weight for new prompts (no decay). 3. " $^ \prime \alpha _ { i } = 1 ^ { \prime \prime }$ : Full weight for new prompts (instantaneous learning). 4. "Concurrent scoring" $\prime \prime$ : All assigned subdomains scored simultaneously. As shown in Fig. 5a, "Concurrent scoring" performs the worst, suggesting that when a prompt is assigned to multiple subdomains, scoring should be done separately for each. The other variations show mixed results: some perform better in early iterations, while others improve later. Specifically, " $^ { \prime } \alpha _ { i } = 1 \prime$ initially reduces MAE similarly to "As is" but then increases monotonically above all other variations. In contrast, $" \alpha _ { i } = 0 . 5 "$ starts off worse than "As is" but gradually converges. Overall, the "As is" configuration, with a gradual decrease of $\alpha _ { i }$ and separate subdomain scoring, yields the best performance for our use case. 1.4 1.4 As_is File Management 1.2 Alpha_0.5 1.2 Settings and Configurations 1.0 AClopnhcau_r1r.e0nt_scoring 1.0 0.68 0.8 0.6 0.4 0.4 0.2 0.2 0.0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 0.0 0 1 2 3 4 5 Iteration Iteration (a) MAE analysis in the ablation study (b) MAE analysis with human users # 4.7 Testing P rofiLLM IT Sec ∗ with Human Users The above experiments used synthetic users and conversations, following most prior user profiling studies [7, 25]. To complement these, we conducted an experiment with human users, who first completed our ITSec proficiency questionnaire before troubleshooting an ITSec-related scenario using chatbot assistance alone. In this scenario, participants faced repeated "Access denied" errors across multiple programs without intentional changes. Fig. 
5b shows that $\mathrm{ProfiLLM}^{ITSec*}$ assigned their prompts to either "File management" or "Settings and configurations," with prompt sequence lengths aligning with the synthetic conversation distributions (Table 3). The resulting MAE patterns closely resemble those of synthetic users, particularly in the "Operating systems" domain (Fig. 3) and the advanced user archetype (Fig. 4a), showing a sharp initial improvement followed by minor fluctuations within a similar MAE range. This experiment was limited in scale (five participants, one scenario) and involved an unrepresentative sample (graduate engineering students). Nevertheless, it offers reassurance regarding (1) the human-like conversational quality of synthetic users and (2) the applicability of our methods to real users.

# 4.8 Qualitative Examination of User-Chatbot Conversations

The first prompt submitted by one of the human users (Sec. 4.7) was: "Hi there, lately I have been running into an issue in which some programs on my pc can't access their files and are getting "Access Denied" errors. I didn't change anything. I've check the permissions of the files and folders, but everything seems fine. What else could it be?"

As in other subdomains, the default $p_0^{OS/File\text{-}management}$ is 3, while this user's ground-truth proficiency (from the ITSec questionnaire) is 2.66. The user provided a clear description, mentioning relevant technical terms (programs, permissions, files) and demonstrating more than basic proficiency by checking permissions. However, the user lacked deeper expertise for advanced diagnosis beyond standard UI-based troubleshooting. Accordingly, $\mathrm{ProfiLLM}^{ITSec*}$ assigned $p^{OS/File\text{-}management}_{temp} = 2.75$, and after applying Eq. 1, updated $p^{OS/File\text{-}management}$ to 2.8.
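The update just described (the temporary score 2.75 blending with the default of 3 to give 2.8) is consistent with an exponentially weighted update at $\alpha_0 = 0.8$ (the optimized configuration from Sec. 4.2). A minimal sketch follows; the function name and the exact form of Eq. 1, $p_{new} = \alpha_i \, p_{temp} + (1 - \alpha_i) \, p_{old}$, are our assumptions, inferred from the reported numbers:

```python
def update_profile_score(p_old: float, p_temp: float, alpha_i: float) -> float:
    """Blend a temporary, LLM-assigned subdomain score into the running
    profile score (assumed form of Eq. 1: new evidence weighted by
    alpha_i, history weighted by 1 - alpha_i)."""
    return alpha_i * p_temp + (1.0 - alpha_i) * p_old

# Reproducing the reported example: default score 3, temporary score 2.75,
# alpha_0 = 0.8 -> updated score 2.8 (up to float rounding).
updated = update_profile_score(p_old=3.0, p_temp=2.75, alpha_i=0.8)
```

As $\alpha_i$ decays over iterations, the same formula gives ever more weight to the accumulated history, matching the diminishing MAE fluctuations reported in Sec. 4.3.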
A qualitative review of two other conversations is available in a video demonstration we produced³.

# 4.9 Overhead

As detailed in Sec. 3.1, our experiments were conducted on Azure, providing access to multiple state-of-the-art pretrained LLMs. Using these LLMs incurs costs, and $\mathrm{ProfiLLM}^{ITSec}$'s LLM-based data preprocessing and analysis introduce time overhead. Table 4 presents the latency (seconds) and monetary cost (token count) for data generation, refinement, and profiling. Except for user generation (once per user) and conversation QA (once per generated conversation), the values (Mean $\pm$ St.Dev.) represent a single user prompt. Notably, the most resource-intensive component – both in time and tokens – was the conversation quality assurance (Sec. 3.4) performed by Claude. All other tasks used GPT-4o, which efficiently generated users from numeric profile vectors (Sec. 3.3). As expected, chatbot interaction generation with synthetic users required more resources. For the LLM-based profiling components, Table 4 demonstrates practical feasibility, with low latency and affordable operation costs.

Table 4: Distribution of the overhead associated with our experiments

# 5 Related Work

# 5.1 User Profiling Dimensions and Taxonomies

Previous research has explored various user profile dimensions. Personality is a well-studied aspect, particularly through the Big Five Factors (BFF) framework [8], which includes openness, conscientiousness, extroversion, agreeableness, and neuroticism. Inferring these traits enables chatbots to enhance personalization [20]. User profiling has also been applied in e-commerce [18], education [16] and healthcare [13] for user preference modeling, activity adaptation and mental health assessment, respectively. Closer to the ITSec domain addressed in our study, Bitton et al. [3] proposed a taxonomy and a method to assess mobile security awareness.
However, we found no prior work on inferring technical proficiency, particularly in ITSec, solely from conversational data.

# 5.2 User Profiling Inputs and Methods

Various methods for user profile inference leverage diverse data sources and techniques from machine learning [10], deep learning [22], and NLP [9]. Recently, LLMs have enhanced user profiling by improving the interpretation of user-generated content [26]. Several studies have explored personality detection from text using LLMs, including transformer-based models [15], ChatGPT prompting [7], and fine-tuned pretrained models. Unlike personality profiling [27], typically evaluated on static text datasets, or preference modeling [18], based on past interactions and demographics, $\mathrm{ProfiLLM}$ is tailored for conversational settings. It infers user profiles implicitly and dynamically from sequences of user prompts and chatbot responses, leveraging a domain-adapted taxonomy.

# 5.3 User Profiling in Conversational Settings

Interaction-based personalization has gained significant attention, enabling chatbots to adapt responses to a user's communication style and preferences [17]. With LLM advancements, profiling has shifted to dynamic and implicit methods, allowing real-time inference from interactions (as in $\mathrm{ProfiLLM}$). For example, DHAP [19] learns implicit user profiles from past conversations to personalize chatbot responses. Unlike DHAP, which focuses on language style and relies on historical logs, $\mathrm{ProfiLLM}$ specializes in domain-specific expertise profiling and continuously updates profiles during live interactions. Gandoul et al. [11] explore chatbot-based inference of students' learning styles, but their work remains high-level, lacking experiments, code, or data. Apollonion [5], like $\mathrm{ProfiLLM}$, dynamically builds structured user profiles but incorporates preexisting user data, whereas $\mathrm{ProfiLLM}$ relies solely on conversational inputs.
Moreover, $\mathrm{ProfiLLM}$ is reproducible, with publicly shared code and datasets. Overall, unlike prior research focused on narrow profile dimensions (e.g., personality), $\mathrm{ProfiLLM}$ supports diverse domain-adapted expertise assessments. Profiling users based solely on conversational data, particularly in specialized fields like ITSec, presents greater challenges than widely studied personality inference. Instead of relying on static datasets, $\mathrm{ProfiLLM}$ dynamically infers context-aware profiles, enhancing adaptability in knowledge-intensive domains.

# 6 Discussion

# 6.1 Impact and Generalizability

As noted in Sec. 5, user profiling – often personality-oriented – allows chatbots to adapt the style and tone of responses, enhancing user satisfaction. With $\mathrm{ProfiLLM}^{ITSec}$, designed for ITSec troubleshooting, other benefits emerge, such as optimizing the choice of terms as well as the number and complexity of steps per solution. For instance, an advanced user may be directed to inspect the Task Manager for suspicious processes, while a novice receives step-by-step instructions in simpler terms. These benefits extend to other domains, such as Legal & Compliance (adjusting terminology and complexity based on legal expertise) and Data Science (tailoring explanations of models, algorithms, and best practices). Adapting $\mathrm{ProfiLLM}$ to a new domain primarily involves developing a relevant taxonomy and refining system prompts accordingly.

# 6.2 Limitations and Future Work

This study focused on optimizing profile inference, leaving profile-aware response adaptation as a challenge for future research. A key question is whether adapting chatbot responses based on inferred profiles influences user behavior and profiling accuracy. Another limitation is the deliberate design of $\mathrm{ProfiLLM}$ as fully implicit and non-disruptive.
Relaxing this constraint could enhance performance – incorporating challenge questions or preliminary questionnaires may improve profiling accuracy, an aspect left for future evaluation. Extending $\mathrm{ProfiLLM}$ to new domains and testing it with larger human user groups is a key research direction.
Despite significant advancements in conversational AI, large language model (LLM)-powered chatbots often struggle to personalize their responses according to individual user characteristics, such as technical expertise, learning style, and communication preferences. This lack of personalization is particularly problematic in specialized, knowledge-intensive domains like IT/cybersecurity (ITSec), where user knowledge levels vary widely. Existing approaches to chatbot personalization rely primarily on static user categories or explicit self-reported information, limiting their ability to adapt to an evolving perception of the user's proficiency acquired over the course of ongoing interactions. In this paper, we propose ProfiLLM, a novel framework for implicit and dynamic user profiling through chatbot interactions. The framework consists of a taxonomy that can be adapted for use in diverse domains and an LLM-based method for profiling users in terms of that taxonomy. To demonstrate ProfiLLM's effectiveness, we apply it in the ITSec domain, where troubleshooting interactions are used to infer chatbot users' technical proficiency. Specifically, we developed ProfiLLM[ITSec], an ITSec-adapted variant of ProfiLLM, and evaluated its performance on 1,760 human-like chatbot conversations from 263 synthetic users. Results show that ProfiLLM[ITSec] rapidly and accurately infers ITSec profiles, reducing the gap between actual and predicted scores by up to 55–65% after a single prompt, followed by minor fluctuations and further refinement. In addition to our new implicit and dynamic profiling framework, we also contribute an LLM-based persona simulation methodology, a structured taxonomy for ITSec proficiency, our codebase, and a dataset of chatbot interactions to support future research.
[ "cs.AI" ]
# 1 Introduction

With its powerful multimodal perception and generalization capabilities, the Multimodal Large Language Model (MLLM) has become a universal technical paradigm for addressing diverse scenarios and has demonstrated strong generative capabilities in video understanding [31, 48, 29, 1]. However, when applied to specific domains, it is constrained by challenges such as knowledge solidification (inability to dynamically update the latest knowledge), uncontrollable reasoning (risk of hallucinations), and weak generalization (requiring additional fine-tuning and time costs), making it difficult to handle multi-hop questions and cross-modal association requirements (especially in long-video scenarios), which leads to performance degradation [36, 32]. Retrieval-Augmented Generation (RAG), by integrating the collaborative reasoning of external knowledge bases and generative models without being confined to pre-trained knowledge, can easily adapt to private-domain data scenarios and has become a core paradigm for improving the factual accuracy and domain adaptability of large language models.

Fig. 1: Comparison of direct MLLM querying, naive retrieval (e.g., VideoRAG), and our AdaVideoRAG in terms of flexibility, accuracy, and efficiency, with example queries of increasing difficulty (Q1, Q2: straightforward queries such as "What color is the bear?" and "How many times does the wolf appear throughout the video?"; Q3: hard query such as "What is the impact of climate change on grassland animals?")
Current RAG research mostly focuses on the text modality [25, 10, 19], static images [7], and tabular data [6], overlooking the unique value of video as a multimodal knowledge carrier. The growing popularity of long-video understanding places new demands on RAG models that support video input. Most existing RAG studies on long videos attempt to enhance question-answering generation by constructing and retrieving knowledge bases built from the multimodal information in videos. For example, Luo et al. [32] incorporate visually-aligned auxiliary text features from optical character recognition (OCR), automatic speech recognition (ASR), and object detection to create video knowledge bases, enabling question answering over long videos. However, this method supports neither sensemaking queries nor multi-hop questions, which require a global understanding of the entire database, as shown in Fig. 1. The recent VideoRAG [36] significantly improves the accuracy of long-video contextual information by constructing a graph database, but it requires maintaining a hierarchical graph database that demands substantial computational and time resources and incurs higher costs when migrating to new scenarios. We believe that a practical RAG for video understanding needs to flexibly allocate appropriate processing methods for different videos and query difficulties, thereby maintaining accuracy while improving efficiency.
For more complex questions—such as those requiring multi-step reasoning or relying on multiple types of knowledge—graph-based RAG is necessary to derive correct answers. Therefore, a one-size-fits-all approach of retrieving and then returning results is not optimal. To address this, this paper proposes an adaptive-RAG-based video understanding scheme termed AdaVideoRAG, as shown in Fig. 1. It first classifies user queries into difficulty levels and then adaptively assigns the most reasonable retrieval strategy based on the difficulty. Additionally, we further integrate visual features, clip captions, ASR, and scene text composite information flows contained in videos, and use relevant text information obtained from external retrieval for data augmentation. According to the difficulty of questions, queries are routed to different levels of database retrieval modes (i.e., naive and graph retrieval). These multimodal knowledge inputs and retrieval strategies can more effectively provide fine-grained contextual representation capabilities, ultimately further enhancing the upper limit of MLLM’s processing capabilities for long videos and complex question-answering tasks. To demonstrate the effectiveness of the proposed AdaVideoRAG framework, we officially release HiVU, the first open benchmark dataset for full-stack capability evaluation in video understanding. This dataset groundbreakingly integrates 120 video samples covering a continuous duration spectrum from short clips (1 minute) to extra-long videos (106 minutes), spanning high-frequency scene categories across three major themes: knowledge education (lectures, finance, law, psychology, documentaries), information (news, interviews), and entertainment (sports, cooking, makeup, fitness, TV dramas, animations). 
In terms of question design, we innovatively develop a three-level difficulty quantification system: 1) Basic Level-1 (L1) focuses on frame-level content perception (e.g., "Which objects appear at the 5th second of the video?"). 2) Advanced Level-2 (L2) requires temporal logic reasoning (e.g., "When does the speaker start explaining graph neural networks?"). 3) Expert Level-3 (L3) challenges cross-modal causal inference (e.g., "How would deleting the narration at the 15th minute affect the plot development?"). Compared with traditional datasets such as ActivityNet [2] (single action recognition) and MovieQA [37] (open-ended QA), this benchmark achieves, for the first time, cognitive complexity evaluation at different levels, providing a hierarchical evaluation framework for video understanding research. It supports systematic optimization of models in long-video modeling, complex reasoning tasks, and real-world scenario generalization. In summary, our contributions are as follows: 1) We propose a novel AdaVideoRAG framework to dynamically and adaptively route appropriate retrieval schemes—ranging from the simplest to the most sophisticated—for different video understanding tasks based on query complexity, achieving an optimal balance between resource consumption and video comprehension capabilities. 2) We introduce an Omni-Knowledge Indexing module to extract valuable information from multi-modal signals for context modeling and establish corresponding databases. A lightweight intent classification model is used to determine the difficulty level of input queries, enabling hierarchical knowledge access, integration, and generation from naive retrieval to graph retrieval, while balancing resource consumption and video understanding capabilities. 3) We publicly release the hierarchical video understanding benchmark HiVU for the first time, which evaluates the multi-level reasoning capabilities of video understanding models. 
Extensive comparative experiments and ablation studies demonstrate the advantages of AdaVideoRAG in deep understanding of long videos.

# 2 Method

We introduce an MLLM-centric adaptive RAG framework for long-video understanding termed AdaVideoRAG, which can significantly improve efficiency while ensuring accuracy. As shown in Fig. 2, our method includes four parts: 1) Query Intent Classification (Sec. 2.1). 2) Omni-Knowledge Indexing (Sec. 2.2). 3) Adaptive Retrieval Paradigm (Sec. 2.3). 4) Integration and Generation (Sec. 2.4).

# 2.1 Query Intent Classification

Not all user requests have the same level of complexity. For simple user requests, we can use a straightforward solution to reduce computing power consumption and users' perceived latency. For complex questions, we rely on multi-model, multi-modal, and multi-step queries to achieve higher accuracy. To achieve the above goals, we propose to use a lightweight intent classification model to classify the difficulty level of the query at the input end. Specifically, we define a fine-grained evaluation system for the difficulty levels of video understanding:

Level-1: Straightforward reasoning. The questions involve few logical relationships, and the knowledge required for answering is directly provided in the video content. For example, "What color of clothes is the woman who appears at the fifth second wearing?"

Fig. 2: Overview of AdaVideoRAG: auxiliary text extraction (clip captions via a VLM, ASR, OCR) feeds the text base, vision base, and knowledge graph in the Omni-Knowledge Indexing stage; user queries are routed through adaptive retrieval before integration and generation by the MLLM

For such questions, existing MLLMs are already very mature at solving them.
If complex processing is still applied to such simple queries, it results in unnecessary computational overhead.

Level-2: Simple reasoning. This level involves single-step reasoning about basic spatio-temporal/causal relationships, requiring the model to establish logical associations between local events. For example, "Why did the woman cry before the rainy scene started?" requires two-stage reasoning: 1) determine the starting time point of the "rain" through temporal positioning; 2) retrieve the character behaviors (such as the audio of an argumentative conversation) and scene changes (such as weather forecast subtitles) before this time point, and construct a causal chain to explain the motivation. Such tasks expose the flaws of existing MLLM methods in integrating cross-modal temporal cues, and they are prone to missing key intermediate evidence due to mismatched retrieval granularity.

Level-3: Hard reasoning. Video understanding at the highest difficulty level requires extracting different subjects and relationships from the long context, constructing a knowledge graph that maps entities and relationships across temporal and semantic dimensions, and combining it with powerful MLLM reasoning capabilities to make judgments. For example, "What life lessons does this movie convey?" Questions of this kind require the model to mine the deep semantic relationships in the video and conduct multi-hop reasoning to obtain the correct answers.

Intent classification model. Given the basic definitions and examples for Level-1 to Level-3, we use a large language model with Chain-of-Thought (CoT) reasoning to classify the query $Q$. This can be integrated into a RAG (Retrieval-Augmented Generation) architecture as a plug-and-play API, providing intent classification results through appropriate prompts without the need for fine-tuning.
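The classifier just described can be sketched as a thin wrapper around any chat-completion API. This is a hedged illustration, not the authors' code: `llm_complete` is a hypothetical callable, and the prompt wording is ours.

```python
# Hypothetical sketch of the query intent classifier. `llm_complete` is a
# placeholder for any chat-completion API; the prompt text is illustrative.
INTENT_PROMPT = (
    "Classify the difficulty of the following video-understanding query as "
    "Level-1 (straightforward), Level-2 (simple reasoning), or Level-3 "
    "(hard reasoning). Think step by step, then end with the label only.\n"
    "Query: {query}"
)

def classify_intent(query: str, llm_complete) -> int:
    """Return 1, 2, or 3 based on the label found in the LLM's answer;
    fall back to the middle level if no label is present."""
    answer = llm_complete(INTENT_PROMPT.format(query=query))
    for level in (3, 2, 1):
        if f"Level-{level}" in answer:
            return level
    return 2

# Usage with a stubbed LLM that always answers "Level-1":
level = classify_intent("What color is the bear?", lambda _: "Level-1")
```

Parsing the label from free-form CoT output (rather than forcing structured output) keeps the classifier plug-and-play across different LLM backends, at the cost of needing a fallback when no label is emitted.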
Based on the classification results, it can automatically trigger a progressive knowledge retrieval strategy, ranging from no retrieval to simple naive retrieval and further to complex graph retrieval. The intent classification result $L$ is computed as $L = \mathrm{LLM}_{intent}(Q, prompt_{intent})$, where the LLM is a lightweight CoT model. In this paper, we adopt Qwen2.5-7B [18, 45], which accounts for only a small fraction of the total runtime (on average $\leq 5\%$).

# 2.2 Omni-Knowledge Indexing for Long-Context Understanding

When performing video understanding tasks, MLLMs equipped with RAG can achieve context modeling through dynamic activation of external knowledge bases, which alleviates the window-length limitation of long contexts to some extent and enhances the semantic understanding of global videos. To this end, we propose the Omni-Knowledge Indexing module, which extracts valuable information from multiple modal signals for context modeling and establishes corresponding databases, enabling the RAG system to more accurately retrieve the most relevant information and perform high-quality generation.

# 2.2.1 Omni-Knowledge Text-Base Establishment

In long video understanding tasks, due to the context window size limitations of MLLMs, we need to perform frame sampling and resizing on videos under hardware constraints. However, this inevitably leads to the loss of rich visual information in the videos, as well as unused audio and text multimodal information. Therefore, we utilize an external normalization module to extract multimodal information from videos and construct our private text base.

Auxiliary text extraction and database construction.
The input long video $V$ is divided into $N$ consecutive and semantically complete clips $V = (C_1, C_2, \ldots, C_N) = \{C_n\}$ at fixed time intervals (30 seconds per clip in this paper). For each clip $C_n$, uniform frame sampling is performed to extract key frames. In this paper, we select 5 frames as the multimodal representation primitive $F_n$, as more frames do not significantly improve performance but increase computational cost and model complexity. Specifically, auxiliary text extraction includes three categories: 1) The quantized MiniCPM-V [46] (used as the VLM model) generates fine-grained text descriptions $T_C$ for the sampled frames, including semantic elements such as character attributes and spatio-temporal actions, ultimately constructing a caption database $D_C$; 2) Audio is the most direct information carrier in videos, driving story development, conveying plot clues, and revealing character relationships through language, providing information that cannot be mined from visual features alone. Therefore, we use FastWhisper [33] as the audio extractor to convert the audio in each clip into text format $T_A$, which is stored as vectors via an embedding model to generate an ASR database $D_A$; 3) Text characters $T_O$ in each frame are extracted through EasyOCR [22], and an OCR database $D_O$ is constructed to compensate for the insufficient text-recognition ability of MLLMs.

Knowledge graph construction. To address Level-3 complex reasoning queries, we construct a knowledge graph based on clip captions ($T_C$), ASR ($T_A$), and OCR ($T_O$). Specifically, BGE-M3 [5] extracts entities and relationships from text chunks: 1) Entity represents the minimal domain-specific semantic interpretation unit in the video, characterized by a triple <entity type, entity name, spatio-temporal attribute>.
2) Relationship encompasses various semantic associations between entities, including spatio-temporal relationships, causal relationships, functional relationships, etc., to systematically structure video text information.

# 2.2.2 Vision-Base Establishment

Relying solely on text extracted from clip captions, ASR, and OCR makes it difficult to construct an optimal knowledge index. As a typical carrier of multimodal data, videos contain visual features with abundant details that are hard to describe precisely in text, such as object appearance changes, scene spatial layouts, and human facial expressions and movements. This visual information plays an indispensable role in complex knowledge reasoning and retrieval tasks. Therefore, we introduce the ImageBind [16] image encoder (Enc. in Fig. 2) to extract features from key frames and concatenate them as the final features; this model is built on cross-modal alignment algorithms that map heterogeneous modal data such as images, text, and audio into the same high-dimensional semantic space.

# 2.3 Adaptive Retrieval Paradigm

After intent classification (Sec. 2.1), the user query ($Q$) is routed to different retrieval paths according to its difficulty level, improving overall efficiency while preserving effectiveness.

None retrieval with direct MLLM. For Level-1 scenarios, the model directly feeds the query ($Q$) and the entire video $\{C_n\}$ into the MLLM to obtain a direct response. This approach leverages the inherent knowledge and reasoning capabilities of the MLLM without introducing external knowledge bases, significantly enhancing overall efficiency for simple questions.

Naive retrieval with simple reasoning.
For Level-2 retrieval scenarios, this study proposes a multimodal collaborative grounding framework that significantly enhances the retrieval efficiency and accuracy of long videos in handling simple logical questions by jointly optimizing the semantic alignment between auxiliary texts (clip captions, ASR, OCR) and visual modalities. Specifically, we first decouple the original query into sub-queries adapted to different modal characteristics: 1) For clip caption retrieval, we rewrite the query into declarative sentences, remove option interference, and add scene-appropriate descriptive content. 2) For ASR-recognized text, we extract colloquial expressions from the query, retain core actions and events, and add contextual modifiers to match fragmented speech segments. 3) For discrete OCR text, we extract specific entity information from the query. A typical example: when the input query is "How did the Number 30 player perform?", the rewritten outputs are: i) "clip caption": "The performance of Number 30 player."; ii) "ASR text": "How’s the number 30 player doing."; iii) "OCR text": "Number 30 player". Query rewriting effectively mitigates distribution shifts between different semantics. Through cross-modal similarity calculation, we can then quickly locate query-relevant candidate content and the corresponding video clips for each text block. This study further locates and queries the semantically most relevant video content from the visual feature database $D _ { V }$ . Specifically, our model reuses the rewritten results of clip captions as semantic anchors for visual retrieval. The pre-trained cross-modal semantic alignment encoder ImageBind [16] is employed to map videos into the text embedding space $\{ F _ { n } \}$ . By calculating the cosine similarity between text and visual embeddings, candidate segments with similarity scores exceeding a threshold (set to 0.5 in this paper) are filtered out. 
These segments are then ranked to retain the top-K visual evidence with the highest confidence. By leveraging a unified semantic embedding space, this approach significantly reduces the modality gap in visual-text alignment, effectively alleviating the loss of local detail in long videos. Finally, the videos $\{ C _ { v } \}$ retrieved through visual feature-text space alignment are merged with the video chunks $\{ C _ { c , a , o } \}$ located via auxiliary text retrieval to construct the retrieval evidence pool for simple reasoning at Level-2. Graph retrieval in hard reasoning. Relying solely on information obtained from auxiliary text and visual feature retrieval falls short of enabling MLLMs to tackle more complex sensemaking scenarios. Such queries demand richer, semantically precise auxiliary information that models multiple events and temporal nodes. To address this challenge, we adopt a deeper retrieval approach based on Light-RAG [19] for hard queries, replacing the naive retrieval used for simple queries. Specifically, considering resource constraints, we reuse the auxiliary text embeddings to construct a graph. We then compute similarity scores between the rewritten clip captions and entity/relationship descriptions, returning the most relevant entities and relationships. Within the graph, we gather other information associated with the retrieved entities and relationships, which is combined into a query-centered thinking map. This retrieved graph map helps MLLMs consider global and multi-layered information, better modeling spatio-temporal and causal relationships within events. Furthermore, we employ the unified semantic embedding space to represent visual evidence obtained from grounding, enhancing retrieval accuracy.
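The entity/relationship matching and neighbourhood expansion described above can be illustrated with a toy graph. The data and the keyword-overlap scoring below are hypothetical stand-ins for the paper's embedding-based retrieval:

```python
def retrieve_graph_context(query_terms, graph, top_n=2):
    """Score entities by overlap between query terms and their descriptions,
    then expand the best matches with their graph neighbours to form a
    query-centred context (toy keyword scoring in place of embeddings)."""
    def score(desc):
        return len(set(desc.lower().split()) & set(query_terms))
    ranked = sorted(graph, key=lambda e: score(graph[e]["desc"]), reverse=True)
    context = {}
    for entity in ranked[:top_n]:
        context[entity] = {
            "desc": graph[entity]["desc"],
            "neighbours": graph[entity]["links"],  # expansion step
        }
    return context

# hypothetical graph over a sports video
graph = {
    "player_30": {"desc": "number 30 player scores a goal", "links": ["goal_event"]},
    "goal_event": {"desc": "goal scored in second half", "links": ["player_30"]},
    "referee": {"desc": "referee shows yellow card", "links": []},
}
ctx = retrieve_graph_context(["number", "30", "player"], graph, top_n=1)
print(sorted(ctx))  # -> ['player_30']
```

The returned context pairs each matched entity with its linked neighbours, which is the query-centred "thinking map" handed to the MLLM.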
We overlay the retrieved videos $\{C_v\}$ with the graph retrieval results $\{C_g\}$ to construct a multi-level retrieval evidence pool for hard reasoning under Level-3. Filtering then sorting evidence. After obtaining the preliminary retrieval results, we perform coarse-to-fine information purification. First, we filter out duplicate video information blocks retrieved from different databases. Then, the content descriptions of the video blocks (including clip captions, ASR, and OCR texts) and the query are input together into a small-scale LLM (Qwen2.5-7B [45, 18] in this paper) for fine-grained filtering to exclude irrelevant results. Finally, we rerank the selected video clips by their original temporal order in the video to preserve temporal and causal relationship information.

# 2.4 Multimodal Information Integration and Generation

To provide MLLMs with more comprehensive information for improving answer accuracy, we collect auxiliary text information (denoted $\{T_{c,a,o}\}$ for simple reasoning and $\{T_g\}$ for hard reasoning) derived from clip captions, ASR, and OCR contexts, along with visual information $\{C_v\}$ from visual-to-text grounding. After integrating the retrieved context and the corresponding video clips $\{C_{c,a,o}\}$, the combined inputs are fed into the MLLM for reasoning and generation to produce the final output $R$:
$$ R = \left\{ \begin{array}{ll} \mathrm{MLLM}(\{C_n\}, Q) & \text{if } L \text{ is Level-1}, \\ \mathrm{MLLM}(\{C_v\}, \{C_{c,a,o}\}, \{T_{c,a,o}\}, Q) & \text{if } L \text{ is Level-2}, \\ \mathrm{MLLM}(\{C_v\}, \{C_{c,a,o}\}, \{T_{c,a,o}\}, \{C_g\}, \{T_g\}, Q) & \text{if } L \text{ is Level-3}. \end{array} \right.
$$

# 2.5 HiVU: Hierarchical Video Understanding Benchmark

Existing video understanding datasets either have insufficient duration [14] or lack engaging content [43, 53], failing to generate queries that require deep comprehension. To support robust reasoning tasks on long videos and evaluate different methods, we constructed the Hierarchical Video Understanding (HiVU) Benchmark. For this purpose, we selected three genres: knowledge-education (lectures, finance, law, psychology, documentaries), information (news, interviews), and entertainment (sports, cooking, makeup, fitness, TV dramas, animations). We manually collected 120 knowledge-rich long videos from YouTube, totaling 60 hours (30 minutes on average, 1.1 minutes minimum), with distributions shown in Fig. 3. Additionally, we designed three tiers of query reasoning from straightforward to hard, as described in Sec. 2.1. This hierarchical query design enables comprehensive and detailed evaluation of models’ reasoning capabilities across varying difficulty levels.

Figure 3: Statistical distributions of our HiVU from different perspectives.

Evaluation metrics. For the open-ended question answering tasks on the HiVU dataset, we draw inspiration from the Win-Rate metric system widely used in the RAG field to evaluate model capabilities [10, 19]. Specifically, we use large language models (LLMs) as the judgment basis, quantify the comparative results of the two schemes through model outputs, and finally present their competitive scores in percentage form. The Win-Rate comparison considers queries from five dimensions: 1) Comprehensiveness: whether the model’s response fully covers the query, avoiding missing critical information or providing one-sided answers. 2) Empowerment: whether the model’s response provides practical value and inspiration to users.
3) Trustworthiness: the reliability and authenticity of the model’s output content. 4) Depth: whether the model can go beyond surface phenomena, uncover the essential issues behind the query, and conduct in-depth analysis and discussion. 5) Density: the information content and compactness of the model’s response, avoiding verbose, empty, or redundant expressions.

# 3 Experiments

# 3.1 Experimental Setup

We evaluate the proposed AdaVideoRAG method and the effectiveness of each module primarily on the newly proposed HiVU benchmark (Sec. 2.5), and also use public video understanding benchmarks for further assessment. Specifically: 1) HiVU includes over 10 sub-genres across 3 domains, comprising 120 knowledge-rich long videos totaling 60 hours. 2) Video-MME [14] is a full-spectrum multi-modal evaluation benchmark for MLLMs in video analysis, featuring diverse videos and multi-modal data. It contains 900 videos (ranging from 11 seconds to 1 hour, categorized into short, medium, and long), with 2,700 multiple-choice questions covering 6 major visual domains (e.g., knowledge, film, sports) and 30 subdomains, focusing on the perception, reasoning, and summarization capabilities of multimodal large language models (MLLMs) in video analysis. 3) MLVU [53] is a multi-task benchmark for evaluating long-video understanding with diverse genres and extended durations. Centered on long videos ranging from 3 minutes to over 2 hours (average 12 minutes), it sets 9 tasks (e.g., single/multi-detail understanding) across diverse video types (films, surveillance, games, etc.), aiming to comprehensively assess long-video understanding capabilities.

# 3.2 Experimental Results

Improving open-source MLLMs with AdaVideoRAG on the MLVU_test [53] benchmark. The overall evaluation results of all investigated multi-modal large language models on the MLVU test set are shown in Tab.
1. These results cover the baseline model, Video-LLaVA [29], along with two recently released and highly regarded open-source models: the Qwen2.5-VL series [1] and VideoLLaMA3 [47]. The evaluation results clearly demonstrate that our proposed AdaVideoRAG strategy significantly improves the question-answering accuracy of each MLLM. It stands out particularly in two key types of tasks: first, tasks such as Topic Reasoning (TR) that require multi-hop reasoning about videos, and second, tasks like Action Count (AC) that involve holistic reasoning. This indicates that AdaVideoRAG not only strengthens basic question-answering ability but also effectively helps MLLMs achieve breakthroughs on complex reasoning and multi-detail processing tasks. It is worth noting that although the Qwen2.5-VL-7B model performs relatively weakly on the MLVU dataset, it exhibits a pronounced accuracy improvement after adopting AdaVideoRAG, increasing by nearly $40 \%$ and even reaching the accuracy of large-parameter models such as Qwen2.5-VL-32B. Moreover, the open-source VideoLLaMA3 equipped with AdaVideoRAG, despite having fewer parameters than Qwen2.5-VL-32B, performs better on long videos and is even comparable to GPT-4o. These experimental results verify the generality and effectiveness of AdaVideoRAG in enhancing the reasoning ability of MLLMs. Table 1: Comparison between supervised baselines and whether AdaVideoRAG is configured on MLVU_test. Frames: the sampling frame rate or the maximum number of frames; "2fps-768" indicates that videos are sampled at 2 fps with an upper limit of 768 frames. M-Avg: the average performance on multiple-choice tasks. Comparison with state-of-the-art VideoRAG [32] on the Video-MME [14] dataset. Given that the experimental results in Tab.
1 have fully verified that AdaVideoRAG can effectively enhance the reasoning performance of MLLMs, we select VideoLLaMA3 and Qwen2.5-VL-7B, which have the same number of parameters, as the base models for subsequent control experiments. In Tab. 2, we conduct a horizontal comparison between our AdaVideoRAG and Video-RAG [32] on the Video-MME dataset. The experimental results show that both RAG methods significantly enhance the video understanding ability of the base MLLMs. However, in tasks involving long videos, our AdaVideoRAG demonstrates a more distinct advantage. This is mainly because AdaVideoRAG constructs a more complex and reasonable knowledge map during retrieval over long videos, enabling precise understanding and efficient reasoning. Table 2: Comparison between AdaVideoRAG and VideoRAG [32] on the Video-MME [14] dataset. Impact of LLM arbiters. To explore the performance of the retrieval strategies on sensemaking tasks of varying difficulty, we conduct comparative experiments on the proposed hierarchical video understanding benchmark (HiVU), with an LLM used as the evaluation referee to assess the quality of the final answers. Regarding the selection of specific LLMs, we carry out two sets of control experiments: DeepSeek-R1-7B [45, 18] vs. DeepSeek-R1-32B [45, 18], and Qwen2.5-32B [45] vs. QwQ-32B [39], representing models with different parameter counts and reasoning capabilities, as illustrated in Tab. 3. The experimental results demonstrate that models with more parameters and a Chain-of-Thought (CoT) reasoning mechanism exhibit stronger discriminative ability when evaluating the performance of other models. Based on these findings, we choose the DeepSeek-R1-32B model as the evaluation arbiter for the HiVU benchmark to ensure the accuracy and reliability of the evaluation results.
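The Win-Rate scoring used with the arbiter can be sketched as below. The per-query, per-dimension verdict format is our assumption about how the judge output is aggregated; the five dimensions are the ones defined for HiVU:

```python
from collections import Counter

DIMENSIONS = ["comprehensiveness", "empowerment", "trustworthiness", "depth", "density"]

def win_rate(judgments):
    """judgments: one dict per query, mapping each dimension to the winning
    system ('A' or 'B') as emitted by the arbiter LLM.
    Returns the percentage of (query, dimension) verdicts won by each system."""
    counts = Counter()
    total = 0
    for verdict in judgments:
        for dim in DIMENSIONS:
            counts[verdict[dim]] += 1
            total += 1
    return {sys: round(100.0 * n / total, 1) for sys, n in counts.items()}

judgments = [
    {d: "A" for d in DIMENSIONS},                    # A sweeps query 1
    {**{d: "B" for d in DIMENSIONS}, "depth": "A"},  # B wins 4 of 5 on query 2
]
print(win_rate(judgments))  # -> {'A': 60.0, 'B': 40.0}
```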
Table 3: Impact of LLM arbiter configurations (parameter scale and reasoning capabilities) on HiVU benchmark evaluation. Comparison with state-of-the-art VideoRAG [32] on the HiVU dataset. Our HiVU benchmark contains three task levels classified by reasoning difficulty: straightforward (L1), simple (L2), and hard (L3). For the different levels, AdaVideoRAG employs different retrieval strategies, from no retrieval through naive retrieval to graph retrieval, forming a hierarchical enhancement mechanism. The following experiments verify the resulting improvement in reasoning ability. As shown in Tab. 4, on the hard-level video understanding task, the multi-modal large language model integrated with AdaVideoRAG demonstrates more significant advantages over its original model, and the gap widens as task difficulty increases. This result not only confirms the effectiveness of AdaVideoRAG in complex reasoning scenarios but also indirectly validates the rationality of the three-level difficulty division in the HiVU benchmark, providing a reliable basis for quantitatively evaluating model reasoning ability. Meanwhile, we conducted a horizontal comparison with VideoRAG [32] on the HiVU benchmark, also shown in Tab. 4. Consistent with our expectations, AdaVideoRAG is on par with VideoRAG [32] at Level-1 and Level-2. However, our method exhibits more prominent advantages at Level-3, which requires global and multi-hop reasoning. Table 4: Performance on HiVU. Left: results without and with AdaVideoRAG. Right: results with VideoRAG [32] and with AdaVideoRAG. Ablation Study. In the following analysis, we perform three ablation studies to precisely assess the key components of our proposed method.
They are as follows: 1) Without graph: we remove the retrieval of entities and relationships in the graph map; 2) Without vision retrieval: we remove the feature retrieval in vision-to-text grounding; 3) Without naive text retrieval: we remove the retrieval from the caption, OCR, and ASR databases. Results are shown in Tab. 5. Each module's design is effective and improves the understanding ability of the model. Table 5: Ablation on graph-based knowledge retrieval, vision-based embedding retrieval, and auxiliary text retrieval components.
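As a recap of the adaptive mechanism ablated above, the level-conditioned generation of Secs. 2.3-2.4 reduces to a dispatcher of the following shape. The `mllm` callable and the retrieval helpers are placeholders of our own, not the paper's implementation:

```python
def answer(query, level, video_chunks, mllm,
           naive_retrieve=None, graph_retrieve=None):
    """Route a query to the MLLM with level-appropriate evidence,
    mirroring the three-case generation rule for R."""
    if level == 1:                         # Level-1: direct answer, no retrieval
        return mllm(chunks=video_chunks, query=query)
    clips, texts = naive_retrieve(query)   # Level-2: visual + auxiliary-text evidence
    if level == 2:
        return mllm(chunks=clips, texts=texts, query=query)
    graph_clips, graph_texts = graph_retrieve(query)   # Level-3: add graph evidence
    return mllm(chunks=clips + graph_clips, texts=texts + graph_texts, query=query)

# stub components, just to show the control flow
mllm = lambda **kw: f"answer using {len(kw.get('chunks', []))} chunk(s)"
naive = lambda q: (["clip1"], ["caption1"])
graph = lambda q: (["clip2"], ["relation1"])

print(answer("what happened?", 3, ["c1", "c2"], mllm, naive, graph))
# -> answer using 2 chunk(s)
```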
Multimodal Large Language Models (MLLMs) struggle with long videos due to fixed context windows and weak long-term dependency modeling. Existing Retrieval-Augmented Generation (RAG) methods for videos use static retrieval strategies, leading to inefficiencies for simple queries and information loss for complex tasks. To address this, we propose AdaVideoRAG, a novel framework that dynamically adapts retrieval granularity based on query complexity using a lightweight intent classifier. Our framework employs an Omni-Knowledge Indexing module to build hierarchical databases from text (captions, ASR, OCR), visual features, and semantic graphs, enabling optimal resource allocation across tasks. We also introduce the HiVU benchmark for comprehensive evaluation. Experiments demonstrate improved efficiency and accuracy for long-video understanding, with seamless integration into existing MLLMs. AdaVideoRAG establishes a new paradigm for adaptive retrieval in video analysis. Codes will be open-sourced at https://github.com/xzc-zju/AdaVideoRAG.
# I. INTRODUCTION

Longer undergraduate programming projects such as capstones, hackathons, or sprints are often characterized by team collaboration and intense student dedication. Frequently, the client is an external entity, such as a company representative, and in these scenarios the student team is responsible for structuring and managing the project. These projects can address a common deficiency in undergraduate training related to soft skills, such as communication and teamwork, often highlighted in the literature. However, their intensive nature can overload the instructor, given the amount of work involved in keeping track of multiple student groups and individuals’ work. Consequently, formative feedback may suffer, and summative feedback often relies on presentations or other superficial evidence. Large Language Models (LLMs), based on transformers, have been applied to various activities related to programming, natural language processing, and data analysis. These models combine extensive training data with a limited context window to produce outputs that are frequently coherent. LLMs have shown potential in education [1], for instance in grading open-text responses [2] and assisting students in resolving programming errors without providing direct answers [3]. This article reports on the experience of applying LLMs to monitor the progress of student teams during a course that consists of an intensive 3-week code development sprint. In this context, the requirements came from a real industry representative, and the design of the software was entirely up to the students, so the instructor could not impose a common structure on the projects to simplify monitoring or assessment. The expectation is that the LLM can provide a timely and individualized summary of work accomplished on intensive, industry-oriented software projects.
The summary of each student’s work supports the self-regulation of group work, since it is available to instructors and to all students. This tool intersects with the subjects of collaborative teamwork, technology-assisted work, and the use of AI in education.

# II. LITERATURE REVIEW

In recent times, there has been considerable interest in applying LLMs in education, primarily due to the impressive results demonstrated by ChatGPT, which generates responses that are consistent and systematic enough to be considered useful. Lo [1] reviewed the literature on the impact of ChatGPT in education and identified various application examples, including generating customized assessment items, feedback, and guidance for open-ended activities such as group essays [4], [5], simulating a peer in discussion groups [6], and facilitating debates while providing personalized feedback. Dehbozorgi [7] applied LLMs to provide feedback on formative questions developed by students about the class content. The LLM, having access to the course materials, was able to assess the relevance and alignment of these questions with the course topics. Rudolph [8] identified that LLMs can be used to personalize student support through Intelligent Tutoring Systems (ITS), providing original cases for discussion tailored to each group of students. Provided that ethical issues and data privacy concerns are safeguarded, LLMs could enable adaptive personal tutors that consider a student’s history of actions and personal and emotional states, thereby offering personalized feedback and suggestions. Cope [9] mentioned that one of the greatest potentials for transforming education lies in using LLMs to provide more consistent and rich formative feedback. The fields of Computing and Software Engineering have significant potential to be influenced by LLMs, particularly because the artifacts produced in these professions are digital language products, the niche where LLMs excel.
Kirova [10] emphasized the need to rethink the teaching of Software Engineering in an LLM context, recognizing their potential for writing code, testing, and general automation, while also understanding how they work in order to identify risks such as intellectual property and security concerns.

# A. Feedback on source code repos

Projects created for industry serve as a valuable complement to theoretical student training [11], [12], and teamwork is an important skill in the job market [13], [14]. However, proper follow-up is necessary to ensure that the teamwork experience is both educational and productive. It is widely accepted that simply grouping students and expecting them to learn teamwork unsupervised is insufficient [15]. Monitoring and providing feedback during the development of a software project are fundamental, but in the context of capstones or intensive sprints such as the one in this report, it is impractical for the professor to closely track the progress of each group. This literature review addresses previous attempts to monitor academic or non-academic software projects using Git repositories. Several authors [11], [12] point out that the ability to work in a team is one of the most important skills for the job market. Group projects are also an opportunity to develop communication, another skill commonly pointed out as deficient in engineering and computing graduates [13]. However, students often have negative experiences when working in teams in a school context [16], [17]: they dislike uneven levels of contribution among group members, yet tend to avoid conflict over this issue and are reluctant to expose peers who are not contributing effectively. It is important to note that not all group work effectively prepares students for teamwork in the professional world [14].
Group projects are intended to simulate professional teams, where deliverables should require interactions among members and benefit from them, making the gains from these interactions visible to the students. However, students often organize themselves in ways that minimize interdependence among group members, which results in missing learning skills that are transferable to the professional world. Common didactic strategies to improve teamwork include mutual evaluations through questionnaires based on rubrics that exemplify good teamwork within the group, such as CATME [18], a tool used in many universities. However, individuals in teams are often reluctant to provide incisive feedback to peers [19] until a crisis occurs, which can affect the reliability of records captured by CATME. Additionally, feedback based on questionnaires cannot be provided continuously. These considerations highlight the need for an objective metric to quantify the work and collaboration of team members. In the software field, version control systems have been well-established for decades, with Git being the most popular. This widespread adoption allows us to analyze the contributions of team members to project repositories, aiming to understand the distribution of work and collaboration within the team. Unlike questionnaires filled out by team members, repositories and groupware can provide continuous metrics. Tarmazdi [20] proposed a teamwork panel where the team’s communication data underwent sentiment analysis to understand the team’s emotional state and the roles played by each member. Additionally, the GitCanary tool [21] offers quantitative productivity metrics such as proportion and complexity of code committed by each team member, enabling real-time feedback on those metrics to students. Gousios [22] demonstrated that it is possible to measure developer contributions beyond lines of code, including commit messages, bug reports, and wikis. 
Lima [23] validated the complexity of commits and the volume of contributions as metrics indicative of positive developer contributions. Bufardi [24] studied the Git repositories of student teams and found a strong positive correlation between contributions to the repository and peer evaluations. He analyzed contributions from GitHub and the team’s Kanban board, the latter being manually analyzed by the instructors. The Git log evaluation was automated, focusing on quantitative metrics such as commits and lines of code; however, this analysis did not consider the content of the changed lines of code or the type of file. Hundhausen’s work [25] analyzed projects by combining quantitative metrics with examination of a random sample of code at the line-of-code level in some projects, noting that it is impractical to manually analyze all the code. Nevertheless, Hundhausen conducted a thorough analysis of the quality of the changed lines of code, employing a custom algorithm to detect changed lines, which allowed him to identify simple refactoring, such as code being moved rather than newly created. Additionally, Hundhausen used GitHub data to verify learning objectives related to the quality of commits, issues, and software processes in a context of heterogeneous projects with open scope and long duration, distributed between two institutions.

# III. CONTEXT

This experience report takes place during a 3-week sprint at the end of the first year of the Computer Science course at INSPER [26]. By this point in their studies, students had completed courses in UX/UI, programming, web development, and data science. The sprint was run in two consecutive semesters of 2024: in the first there were 28 students (3 women, 25 men), and in the second there were 37 students (4 women, 33 men).
In each semester, students were divided into groups of 4 to 6, with the criteria of randomly grouping students of similar academic performance and ensuring that minority members were not isolated in any group. All students had previously participated in several sessions focused on group work agreements, feedback, nonviolent communication, and mutual evaluation of teamwork using CATME, in an approach similar to what is described in [27]. The sprint is an official part of the curriculum, starting immediately after the completion of regular coursework each semester. It is a formal discipline that counts for credits, with attendance tracked for four hours each day. Outside of the hours when attendance was taken, the lab was open 8 am–8 pm for students to continue their work, and instructors were available to answer questions. Students were encouraged to work in the lab to simulate the time commitment required in a programmer’s workday. The expected workload for each student during the sprint was 30 to 40 hours per week, though strict attendance control was maintained for only 22 of those hours. The sprint was entirely practical; instructors occasionally used a whiteboard or projector for guidance or clarification, but no formal lectures were planned because the theory had already been covered during the regular semester before the sprint. The sprint has its own final grade that goes on the academic record. The group’s collaboration grade considered whether students made a minimum number of commits on the frontend and backend and whether they participated adequately in the CATME evaluations (details of which are not discussed in this report). To encourage honest peer evaluations, the act of completing the evaluation cycle was graded, not the grades received from peers. During the sprint, students met with the clients once a week to ask questions and receive feedback, and also communicated asynchronously through e-mail.
Each group had a product owner, who was responsible for consolidating questions and approaching the client. In the first semester, each group was tasked with developing a software project for a sports data analysis company. Their objective was to create a query interface for retrieving plays from a historical match database and to generate tactical field metrics related to player positioning. In the second semester, students worked on developing a portal to manage and provide access to clinical trial information for one of the largest hospitals in Brazil. The groups had private repositories shared with the instructors and created using GitHub Classroom. Students were instructed to make commits with meaningful names, organize folders and files systematically, and, where possible, adopt a branch-per-issue strategy. The teams were also required to maintain an organized Scrum board with registered stories and tasks, updating it frequently; major tasks had to be clear and divided into smaller subtasks. Besides the Product Owner, each team also had a Scrum Master. The goal of the tool described here is to provide instructors with a brief understanding of which part of the project each student was working on, and to offer students insights to discuss the evolution and workload allocation within the group. Literature indicates that awareness of other members’ progress leads to better self-regulation and negotiation within the group, enhancing teamwork and reducing free-riding behavior [28]. Those goals can be summarized as the following questions:
• Q1: Does the tool provide a reliable summary of student contributions in project-based courses?
• Q2: Would such a tool be useful for instructors to keep up with all the project teams?

# IV. THE SUMMARIZER TOOL

This experiment aimed to create an aggregated tool capable of summarizing all the code produced by each student in a team [29].
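As a minimal sketch of the kind of per-student aggregation such a tool builds on (the real pipeline mines Git history with PyDriller; here plain records stand in for blame data, and all names are our own):

```python
from collections import defaultdict

def summarize_contributions(blame_records):
    """Aggregate line-level blame records into per-student, per-file counts,
    the raw material later condensed by the LLM agents."""
    per_student = defaultdict(lambda: defaultdict(int))
    for rec in blame_records:
        per_student[rec["author"]][rec["file"]] += rec["lines"]
    return {author: dict(files) for author, files in per_student.items()}

# hypothetical blame data for two students
blame = [
    {"author": "alice", "file": "app.py", "lines": 120},
    {"author": "alice", "file": "auth.py", "lines": 45},
    {"author": "bob", "file": "app.py", "lines": 10},
]
print(summarize_contributions(blame))
# -> {'alice': {'app.py': 120, 'auth.py': 45}, 'bob': {'app.py': 10}}
```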
Although the long-term goal is real-time analysis, current feedback is delivered weekly due to infrastructure and time constraints. The tool integrates a chain of AI agents, using the GPT-4o-mini model, to analyze and condense the changes in the group’s repository, then uses a stronger model, GPT-4o, to generate a combined, comprehensive feedback report. The pipeline, illustrated in Figure 1, employs the OpenAI API and LangChain, complemented by a Streamlit frontend and a FastAPI backend. Initially, the process involves cloning student repositories locally. These repositories are processed with PyDriller [30] to create Git blame files, capturing each student’s modifications line by line along with commit messages, establishing an overview of contributions. Directly feeding Git blame files to the LLM led to hallucinations and information loss; to mitigate this, we introduced a few preprocessing steps. The preprocessing compresses Git blame data through a series of AI agents. The first agent analyzes each file’s functionality, generating a Functionality Table (see Tab. I) in .csv format. At this stage, the LLM is prompted to read the file and provide a summary of its goals and development difficulty. Additional data collected includes file size, number of lines, and complexity metrics: cyclomatic complexity for Python scripts and Jupyter notebooks, and HTML tag counts for HTML files. Next, another agent evaluates individual developer contributions to each file, generating a Contribution Table. The LLM is prompted to describe the purpose of each developer’s additions in relation to the file’s functionality. Individual contributions are assessed, including the attribution of function complexities when a developer has sole authorship. An example is presented in Table II, which displays reports from two students: one with a strong contribution history and Student 2, whose contributions were identified as elementary by the tool. Fig.
1: Process of proposed tool to generate students’ code feedback using LLMs.

TABLE I: Functionality Table as originally generated in .csv format by the LLM from the git blame input. Cyclomatic complexity and line count were omitted to fit the page.

Finally, we used a stronger model to synthesize and summarize a report of individual contributions, highlighting specific areas of work. For this comprehensive summary, the agent is provided with the following items and prompted to identify the contributions of each developer:
• The filename, its functionality, and its complexity score.
• The sprint instructions.
• A breakdown of what each developer contributed to each file, including the complexity of solo-developed functions.
• A set of instructions on the desired template to maintain uniformity among reports.
Optionally, the agent is tasked with classifying each developer into predefined business roles (Technical Leader, Data Engineer, Security Engineer, DevOps Engineer, Backend/Frontend Engineer, and Documenter) based on their contributions. These roles could be either Junior or Senior (e.g., Junior Backend Developer). Lastly, the LLM is asked to generate a summary of what the team did during the sprint; for this, it is also given the project description and the client’s high-level requirements. This appears as “Overall contribution of the team” in the feedback (Tab. III). This approach leverages the LLM’s ability to provide comprehensive, role-specific insights into student contributions, makes the evaluation process transparent and structured, and reduces the chances of hallucinations and loss of information.

# V. RESULTS AND DISCUSSIONS

This procedure was tested on a class of 28 students in the first semester of 2024 and a class of 37 students in the second
TABLE II: The file contribution.csv generated by the LLM from the Git blame for a student.
This includes two examples: one for Student $\boldsymbol { { \mathit { 1 } } }$ , who has a strong contribution history, and another for Student 2, who lacks significant contributions to the project. Summary: John Doe focused intensely on security and authentication aspects in both backend and frontend, implementing crucial features to ensure system integrity and security. Contributions: • app.py (Backend): Implemented routes and functionalities related to authentication and administration. auth.py: Developed a complete authentication module. dashboard.py (Backend): Improved data handling and organization. mongo users.py: Implemented functions for recovery code updates and password changes. rec password.py: Added update commands in MongoDB. secrets .py: Added a password handling function. pages/cdg rec.py (Frontend): Implemented the code verification functionality for password recovery. pages/login.py (Frontend): Implemented a login form using Streamlit. pages/new market.py (Frontend): Developed an interactive interface for market creation. pages/rec password.py (Frontend): Developed a password recovery system. TABLE III: Summary produced by the tool: contribution of a single student and overall contribution of the team # Overall contribution of the team This stage of the sprint was crucial for the project’s advancement. Through the detailed contributions above, we achieved the following progress: • Development of a robust web application using Flask, with authentication features, user management, and data validation. • Implementation of services that ensure data security, such as password hashing and input validation, essential for application integrity. • Creation of an email sending system, crucial for communication within the application context, allowing users to receive relevant information. • Establishment of a unit testing foundation, ensuring that application functionalities are maintained and operate correctly during continuous development. 
semester of 2024. The summary was run and presented to the students twice in each class during the 3-week sprint (at the end of the second week and at the end of the third week). In total, 130 summaries of student contributions were provided, and there was general agreement among the students that the summaries accurately reflected the tasks they had performed. There were some errors, including eight instances of partial omissions of student contributions, where the summary included several actual tasks the student had completed but missed others. Additionally, there were four cases of total omissions of student contributions. In these instances, students had completed tasks that were not included in the report. One case of factual inaccuracy occurred when a certain feature was attributed to a student based on a comment in the source code. However, another student had made the commits, but the LLM was led by the comment. This may establish a category of ’comment injections’, where a student writes an untrue comment in the code that the LLM accepts as accurate, should LLMs become more widespread for source code analysis. Excluded from these poor performance measures are cases that were reported by students, but were the default behavior of our system. For instance, branches that were never merged into the main branch were not considered in the summary, and students who did not commit any code during the periods were also omitted from the summary —- four cases. # A. Point of view of the students When the feedback was presented, students were invited to volunteer for interviews regarding the LLM-generated summary and other aspects of teamwork. The interviews were not conducted by any course instructors, and students were assured that their responses would not impact their grades. Out of 65 students, 8 volunteered, and the interviews were conducted in 30-minute sessions. Students were asked to review the summary and identify any inaccuracies. 
They were also questioned about the usefulness of the summary and its effect on team organization. Overall, students felt that the summary accurately listed the tasks performed. However, their main criticism was that the summary was superficial, failing to recognize that a series of file changes could represent the implementation of an entire feature. Additionally, students noted that the summary tended to inflate the importance of minor tasks, such as writing a README file, using a tool to generate scaffold code, or making small code changes. This was a common complaint among students and was also observed by instructors. The following is an example of a positive remark about the feedback from the interviews: ”...it’s (feedback) very good at seeing many things that people did and reporting that...” Another student expressed a negative view of the feedback: ”I felt that I had done more than what was written. I made the password change function and all that, maybe in the final print it didn’t appear, but I worked a lot.” Students found their assignment to roles based on their contributions to be partially arbitrary or incorrect, making comparisons between students with the same role difficult. Another recurring suggestion from multiple interviews was to consider co-authorship tags in GitHub, as contributions from pair-programming sessions were sometimes omitted. Students appreciated having a summary of the team’s work, as it helped organize tasks, and an external summary facilitated fairer discussions about task distribution. One student mentioned that the system helped them conduct a difficult conversation when a colleague was assigned the role of ”junior documenter”, primarily committing comment lines without implementing new code. This allowed the team to assess the individual’s limited contribution, motivating improvement in the next sprint. ”...but it didn’t make much sense when it said ’ah, it’s junior, it’s senior’, and all that.
Just because they did some things a little beyond. Because theoretically, it’s something we’ve already learned there, so it’s kind of... this question of senior, junior, I think it depends on each company and each place, what is really a junior, what is a senior, you know? So I really didn’t find it that viable.”

# B. Point of view of the instructors

The instructors, who are not authors of this paper but worked directly as supervisors of the students during project development, reviewed the summaries and provided feedback on their usefulness and impact on team organization. They found the information consistent with their interactions with the teams, confirming that the tasks accurately reflected student activities. However, they noted that the tool tended to overvalue minor or automated tasks. It would credit code that was not necessarily written by the students, such as code automatically generated by the tools in use, or small README updates, which sometimes misrepresented the actual contributions of some students. This issue was also highlighted by the students. Instructors found the tool beneficial for identifying outlier students who were not genuinely contributing. While they agreed that it is useful for monitoring teams, it has not yet prompted specific actions related to team management, as it primarily confirmed their existing impressions. Additionally, the instructors observed that students with previously low participation increased their involvement after the initial feedback. Some students who volunteered for more in-depth interviews reported feeling recognized, which motivated them to improve their attendance and participation.

# VI. LIMITATION

Although currently in use, the tool remains highly experimental. It was primarily tested on Python projects that included frontend, backend, and often data science components, where the summaries proved useful.
Those aspects constitute threats to external validity, as the tool may not work as well in other contexts. All data regarding the accuracy of the summaries, whether they contained inaccuracies or omitted information, were collected during sessions when the summaries were presented to student groups. Groups received both printed and electronic copies of the summaries and were instructed to review them for inaccuracies. This process took place during a 120-minute session. The student acting as Scrum master was responsible for collecting feedback and reporting it to the instructor, who then documented it. However, the thoroughness of the students’ reviews is uncertain, and some inaccuracies may have been overlooked if their evaluations were not comprehensive. This is a threat to internal validity.
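As a concrete illustration of the line-by-line attribution step described in Section IV, the sketch below parses the output of `git blame --line-porcelain` and counts lines per author. It is a standard-library stand-in for the PyDriller-based extraction actually used in the tool; the function name and the sample input are hypothetical.

```python
def blame_line_counts(porcelain: str) -> dict:
    """Count lines attributed to each author in `git blame --line-porcelain` output."""
    counts = {}
    for line in porcelain.splitlines():
        # each blamed line carries exactly one "author <name>" header line
        if line.startswith("author "):
            name = line[len("author "):]
            counts[name] = counts.get(name, 0) + 1
    return counts

sample = "author Alice\n\tx = 1\nauthor Bob\n\ty = 2\nauthor Alice\n\tz = 3\n"
print(blame_line_counts(sample))   # -> {'Alice': 2, 'Bob': 1}
```

In the real pipeline, the per-author counts would be joined with commit messages and file metadata before being compressed by the agent chain.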
This full paper in innovative practice provides an automated tool to summarize individual code contributions in project-based courses with external clients. Real industry projects offer valuable learning opportunities by immersing students in authentic problems defined by external clients. However, the open-ended and highly variable scope of these projects makes it challenging for instructors and teaching assistants to provide timely and detailed feedback. This paper addresses the need for an automated and objective approach to evaluate individual contributions within team projects. In this paper, we present a tool that leverages a large language model (LLM) to automatically summarize code contributions extracted from version control repositories. The tool preprocesses and structures repository data, and uses PyDriller to isolate individual contributions. Its uniqueness lies in the combination of LLM prompt engineering with automated repository analysis, thus reducing the manual grading burden while providing regular and informative updates. The tool was assessed over two semesters during a three-week, full-time software development sprint involving 65 students. Weekly summaries were provided to teams, and both student and faculty feedback indicated the tool's overall usefulness in informing grading and guidance. In large proportion, the tool reports activities that were in fact performed by the students, with some failures to detect students' contributions. The instructors considered the summaries a useful potential tool for keeping up with the projects.
# 1 Introduction

Linear contextual bandits (LinCB) provide a simple yet powerful framework for sequential decision-making. At each decision epoch, an agent selects an action from a set of context vectors to maximize cumulative rewards, assumed to be linear functions of the chosen contexts. A special case is the multi-armed bandit (MAB) problem, in which the contexts are standard Euclidean basis vectors. Compared with more complex models, such as generalized linear models or deep neural networks that demand substantial updates at each step, LinCB and MAB offer high computational efficiency. LinCB methods have been widely applied in fields such as e-commerce personalization (Hsu et al., 2020), revenue management (Ferreira et al., 2018), clinical trials (Murphy, 2005), political-science experiments (Offer-Westort et al., 2021), and A/B testing in marketing (Satyal et al., 2018), as comprehensively surveyed by Bouneffouf et al. (2020). Two primary algorithmic paradigms dominate the LinCB literature: the upper-confidence-bound algorithm (LinUCB) and Thompson Sampling (LinTS). LinUCB selects the arm that maximizes an upper confidence bound on its reward, whereas LinTS samples a parameter from its posterior (or estimated) distribution and selects the arm with the highest sampled reward. Empirical studies (Chapelle and Li, 2011) consistently demonstrate the superiority of LinTS over LinUCB across various scenarios. Nevertheless, a significant gap persists between the best-known frequentist regret bound for LinTS (Abeille et al., 2017) and the minimax lower bound (Lattimore and Szepesvári, 2020). Closing this gap is challenging due to the selective reward observation inherent in LinTS, which complicates variance control for optimal-arm reward estimation.

Table 1: Comparison of regret bounds and assumptions on LinTS

Recent LinCB research has leveraged advanced statistical methodologies to enhance algorithmic performance and theoretical guarantees.
Techniques from high-dimensional parameter estimation (Bühlmann and Van De Geer, 2011), optimal experimental design (Smith, 1918; Guttorp and Lindgren, 2009), and Bayesian optimization (Mockus, 2005) have been adapted to bandit settings. More recently, missing-data techniques have been employed to bridge the regret-bound gap by estimating missing rewards as though rewards from all arms were observed at every round (Kim and Paik, 2019; Kim et al., 2021). Unlike conventional estimators, which reduce errors solely for selected arms, these methods seek convergence across all arms. However, their reliance on inverse-probability weighting introduces variance that scales with the number of arms. To mitigate this, existing methods typically impose restrictive assumptions, such as independent-and-identically-distributed (IID) contexts or particular diversity conditions, limiting broader applicability. Addressing this limitation necessitates resolving open issues in both the missing-data and LinCB literatures. This paper addresses this critical gap by introducing a novel estimation approach capable of learning rewards for all arms without relying on IID or diversity conditions. In the proposed framework, a hypothetical bandit problem tailored for efficient parameter estimation is constructed. This hypothetical setup employs a set of orthogonal basis vectors, preserving the covariance structure of the original contexts while significantly reducing the effective number of arms. By coupling the hypothetical and original problems, the resulting estimator achieves a novel self-normalized bound based on a Gram matrix encompassing contexts from all arms, including unselected ones. The proposed algorithm, equipped with this new estimator, attains the minimax optimal regret bound up to logarithmic factors without restrictive assumptions on context distributions. The remainder of the paper is organized as follows.
Section 2 reviews relevant literature on LinCB, highlighting key contributions of the proposed approach. Section 3 presents the formal problem formulation. Section 4 details the proposed estimator and algorithm along with their theoretical justifications. Section 5 provides a rigorous regret analysis, establishing the minimax optimality of the proposed method. Finally, Section 6 empirically validates the effectiveness of the proposed algorithm across various benchmark scenarios.

# 2 Related Literature

The linear contextual bandit (LinCB) problem, introduced by Abe and Long (1999), has become foundational in sequential decision-making tasks. Two predominant algorithmic frameworks for LinCB are the upper-confidence-bound algorithm (LinUCB) and Thompson Sampling (LinTS). LinUCB, which selects the arm maximizing the upper confidence bound of its reward, has been extensively studied (Auer, 2002a; Dani et al., 2008; Rusmevichientong and Tsitsiklis, 2010; Chu et al., 2011; Abbasi-Yadkori et al., 2011). In contrast, LinTS, incorporating randomization by sampling from an estimated or posterior distribution of rewards, has attracted significant attention (Agrawal and Goyal, 2013; Abeille et al., 2017). Empirical studies, such as Chapelle and Li (2011), demonstrate that LinTS frequently outperforms LinUCB in practical scenarios, including online advertising and recommendation systems. Theoretically, given contexts of dimension $d$ and time horizon $T$ , LinUCB achieves a regret bound of $\tilde { O } ( d \sqrt { T } )$ , matching the minimax lower bound of $\Omega ( d \sqrt { T } )$ up to logarithmic factors (Lattimore and Szepesvári, 2020). In contrast, LinTS currently achieves a higher regret bound of $\tilde { O } ( d ^ { 3 / 2 } \sqrt { T } )$ , and improving this bound remains an open problem. Table 1 summarizes existing regret bounds and associated assumptions for various LinTS methods. Recent studies have contributed to narrowing this gap. Kim et al.
(2021) introduced a doubly robust (DR) estimator instead of the ridge estimator, achieving a regret bound of $\tilde { O } ( \alpha ^ { - 1 } \sqrt { T } )$ under independent contexts with strictly positive minimum eigenvalue $\alpha ~ > ~ 0$ . Special cases with $\alpha ^ { - 1 } = O ( d )$ are further analyzed by Bastani et al. (2021) and Kim et al. (2023b). However, if $\alpha$ is extremely small (e.g., fixed or highly correlated contexts), this bound can be worse than previous guarantees. Huix et al. (2023) achieved the minimax rate of $\tilde { O } ( d \sqrt { T } )$ , but under the assumption of a Gaussian prior distribution on the parameter, yielding Bayesian rather than worst-case frequentist guarantees. For the multi-armed bandit (MAB) setting, Agrawal and Goyal (2017) and Zhu and Tan (2020) obtained minimax-optimal bounds, but analogous results for LinCB with arbitrary contexts remain unresolved. Statistical techniques, particularly those addressing missing data, have significantly advanced LinCB research. Methods such as inverse probability weighting (IPW) and doubly robust estimation (DR) (Bang and Robins, 2005) tackle the selective reward observation problem by treating unselected rewards as missing data. Dimakopoulou et al. (2019) employed IPW in LinTS, obtaining a regret bound of $\tilde { O } ( d ^ { 3 / 2 } \sqrt { T } )$ . Kim and Paik (2019) adapted DR methods to sparse, high-dimensional linear bandits, leveraging information from unselected contexts. Subsequent works by Kim et al. (2021) and Kim et al. (2023b) enhanced DR estimators under assumptions of stochastic contexts and generalized linear rewards, respectively. Kim et al. (2023c) further generalized DR methods to scenarios involving zero-probability arm selections. Despite these advancements, their reliance on IID or diversity assumptions limits broader applicability, leaving open the challenge of improving regret bounds for arbitrary contexts. 
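To make the LinTS paradigm discussed above concrete, here is a minimal NumPy sketch of one round of Gaussian LinTS on ridge statistics. The function name, the dimensions, the noise level, and the posterior scale `v` are illustrative assumptions, not the exact algorithms analyzed in the cited works.

```python
import numpy as np

def lints_round(V, b, X, rng, v=1.0):
    """Sample a parameter from N(theta_hat, v^2 V^{-1}) and play the best arm."""
    theta_hat = np.linalg.solve(V, b)
    theta_s = rng.multivariate_normal(theta_hat, v**2 * np.linalg.inv(V))
    return int(np.argmax(X @ theta_s))

# Tiny simulation with linear rewards y = x^T theta_star + noise
rng = np.random.default_rng(0)
d, K, T = 3, 5, 300
theta_star = np.array([1.0, 0.5, -0.5])
V, b = np.eye(d), np.zeros(d)          # ridge statistics
for t in range(T):
    X = rng.normal(size=(K, d))        # contexts of the K arms at round t
    a = lints_round(V, b, X, rng)
    y = X[a] @ theta_star + 0.1 * rng.normal()
    V += np.outer(X[a], X[a])          # Gram matrix of selected contexts only
    b += y * X[a]

theta_hat = np.linalg.solve(V, b)      # concentrates around theta_star
```

Note that `V` here accumulates only the selected contexts, which is exactly the limitation that the missing-data approaches above try to remove.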
This paper addresses this challenge by developing a novel estimator and algorithm that achieve the minimax optimal regret bound of $\tilde { O } ( d \sqrt { T } )$ without restrictive assumptions on the context distributions.

# 3 Linear Contextual Bandit Problem

This section presents the notation used throughout the paper and the formal definition of the linear contextual bandit (LinCB) problem.

# 3.1 Notations

For a natural number $n \in \mathbb{N}$, define $[n] := \{1, 2, \ldots, n\}$. For a positive semidefinite matrix $M \in \mathbb{R}^{d \times d}$ and a vector $x \in \mathbb{R}^{d}$, let $\| x \|_{M} := \sqrt{x^{\top} M x}$. For two matrices $A$ and $B$, write $A \succ B$ (respectively $A \succeq B$) if $A - B$ is positive definite (respectively positive semidefinite).

# 3.2 Problem Formulation

In LinCB, the environment defines a sequence of distributions $\{ \mathcal{P}_{t} \}$ over $d$-dimensional context vectors for $K$ arms, constrained to the set $\{ x \in \mathbb{R}^{d} : \| x \|_{2} \leq x_{\max} \}$. Deterministic contexts can also be represented by setting each $\mathcal{P}_{t}$ as a Dirac measure. The time horizon $T$ is finite but not known to the learner. At each round $t \in [T]$, the environment draws context vectors $( X_{1,t}, \ldots, X_{K,t} )$ from $\mathcal{P}_{t}$, where $X_{k,t}$ denotes the context for arm $k$. Assume that $x_{\max}$ is known; if unknown, it can be replaced with $X_{\max,t} := \max_{s \in [t]} \max_{k \in [K]} \| X_{k,s} \|_{2}$. Let $\mathcal{H}_{t}$ be the sigma-algebra generated by the observations made before the action is selected at round $t$, i.e., $$ \mathcal { H } _ { t } = \bigcup _ { \tau = 1 } ^ { t - 1 } \left[ \{ X _ { i , \tau } \} _ { i = 1 } ^ { K } \cup \{ a _ { \tau } \} \cup \{ Y _ { a _ { \tau } , \tau } \} \right] \cup \{ X _ { i , t } \} _ { i = 1 } ^ { K } .
$$

Based on $\mathcal { H } _ { t }$ , the learner selects an arm $a _ { t } \in [ K ]$ and receives a reward $Y _ { a _ { t } , t }$ . In linear contextual bandits (LinCB), rewards are linear in the context, given by: $$ Y _ { a _ { t } , t } = X _ { a _ { t } , t } ^ { \top } \theta _ { \star } + \eta _ { a _ { t } , t } , $$ where $\theta _ { \star } \in \mathbb { R } ^ { d }$ is the unknown parameter such that $\lVert \theta _ { \star } \rVert _ { 2 } \leq \theta _ { \mathrm { m a x } }$ for some unknown $\theta _ { \mathrm { m a x } } > 0$ , and $\eta _ { a _ { t } , t }$ is conditionally zero-mean and $\sigma$ -sub-Gaussian noise: $$ \mathbb { E } \left[ \exp ( \lambda \eta _ { a _ { t } , t } ) \mid \mathcal { H } _ { t } \right] \leq \exp \left( \frac { \lambda ^ { 2 } \sigma ^ { 2 } } { 2 } \right) \quad \mathrm { f o r ~ a l l ~ } \lambda \in \mathbb { R } , $$ for some $\sigma \geq 0$ . To normalize the scale of regret, following standard convention (e.g., Abbasi-Yadkori et al., 2011), assume that $| X _ { k , t } ^ { \top } \theta _ { \star } | \leq 1$ for all $k \in [ K ]$ and $t \in [ T ]$ . At each round $t$ , the optimal arm $a _ { t } ^ { \star }$ is defined as $a _ { t } ^ { \star } : = \arg \operatorname* { m a x } _ { i \in [ K ] } ( X _ { i , t } ^ { \top } \theta _ { \star } )$ , and the instantaneous regret is: $$ \mathrm { r e g r e t } ( t ) : = X _ { a _ { t } ^ { \star } , t } ^ { \top } \theta _ { \star } - X _ { a _ { t } , t } ^ { \top } \theta _ { \star } . $$ The goal is to minimize the cumulative regret over $T$ rounds: $$ R ( T ) : = \sum _ { t = 1 } ^ { T } \mathrm { r e g r e t } ( t ) . $$ This general formulation aligns with the standard LinCB setting (see, e.g., Abbasi-Yadkori et al., 2011 and Lattimore and Szepesvári, 2020) and encompasses specific cases studied in Kim et al. (2021) and Kim et al. (2023c).
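The regret definitions above translate directly into code. The following sketch (with an illustrative helper name) accumulates $R(T)$ for a given sequence of context matrices and actions.

```python
import numpy as np

def cumulative_regret(context_seq, actions, theta_star):
    """R(T) = sum_t [ max_k X_{k,t}^T theta_star - X_{a_t,t}^T theta_star ]."""
    total = 0.0
    for X_t, a_t in zip(context_seq, actions):
        values = X_t @ theta_star          # expected reward of every arm
        total += values.max() - values[a_t]
    return total

# Two rounds, two arms: the optimal arm is played first, then the suboptimal one
theta_star = np.array([1.0, 0.0])
contexts = [np.eye(2), np.eye(2)]
print(cumulative_regret(contexts, [0, 1], theta_star))   # -> 1.0
```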
# 4 Proposed Method

This section introduces the proposed estimation scheme that enables LinTS to achieve a nearly minimax-optimal regret bound. Section 4.1 motivates the use of hypothetical sample augmentation for parameter estimation. Section 4.2 presents a construction of hypothetical contexts that efficiently support parameter learning while minimizing the number of augmented samples. Building on the constructed contexts, Section 4.3 defines an adaptive hypothetical bandit problem tailored for estimation. Section 4.4 describes a resampling strategy that couples the hypothetical and original bandit problems. Finally, Section 4.5 outlines the proposed algorithm, which incorporates the novel estimator derived from this framework.

# 4.1 Augmenting Hypothetical Contexts for Linear Bandits

In linear contextual bandits (LinCB), the ridge estimator with $\ell _ { 2 }$ -regularization is a widely used approach for estimating the unknown parameter. This regularization can be interpreted as augmenting the dataset with artificial observations. Let $\mathbf { e } _ { i } \in \mathbb { R } ^ { d }$ denote the $i$ -th Euclidean basis vector. Then, the ridge estimator at round $t$ can be expressed as $$ \left( \sum _ { s = 1 } ^ { t } X _ { a _ { s } , s } X _ { a _ { s } , s } ^ { \top } + \sum _ { i = 1 } ^ { d } \mathbf { e } _ { i } \mathbf { e } _ { i } ^ { \top } \right) ^ { - 1 } \left( \sum _ { s = 1 } ^ { t } Y _ { a _ { s } , s } X _ { a _ { s } , s } + \sum _ { i = 1 } ^ { d } 0 \cdot \mathbf { e } _ { i } \right) , $$ which is equivalent to augmenting the dataset with dummy context–reward pairs $( \mathbf { e } _ { i } , 0 )$ for $i \in [ d ]$ . Bishop (1995) showed that the inclusion of such artificial data can improve generalization, i.e., performance on test data that are not used in training. However, augmenting with zero-valued rewards induces shrinkage toward the origin, resulting in an estimator that is not adaptive to the observed data.
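The equivalence between ridge regression and augmentation with the dummy pairs $(\mathbf{e}_i, 0)$ can be checked numerically. The snippet below is a sketch with arbitrary simulated data and unit regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))           # contexts of the selected arms
y = rng.normal(size=20)                # observed rewards

# Ridge estimator with unit regularization: (X^T X + I)^{-1} X^T y
ridge = np.linalg.solve(X.T @ X + np.eye(3), X.T @ y)

# Ordinary least squares on the dataset augmented with (e_i, 0) pairs
X_aug = np.vstack([X, np.eye(3)])
y_aug = np.concatenate([y, np.zeros(3)])
ols_aug = np.linalg.solve(X_aug.T @ X_aug, X_aug.T @ y_aug)

print(np.allclose(ridge, ols_aug))     # -> True
```

The identity holds because the augmented design satisfies $X_{\mathrm{aug}}^\top X_{\mathrm{aug}} = X^\top X + I$ and $X_{\mathrm{aug}}^\top y_{\mathrm{aug}} = X^\top y$.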
This observation motivates the use of alternative augmented samples that enhance parameter learning more effectively. Several estimators in the literature can be interpreted through the lens of such augmentation. For example, Kveton et al. (2020) proposed a method that adds multiple random perturbations to each reward observation; this is equivalent to augmenting the dataset with multiple context–reward pairs. Kim et al. (2021) introduced a doubly robust (DR) estimator: $$ \left( \sum _ { s = 1 } ^ { t } \sum _ { k = 1 } ^ { K } X _ { k , s } X _ { k , s } ^ { \top } + \sum _ { i = 1 } ^ { d } \sqrt { \lambda _ { t } } \mathbf { e } _ { i } \left( \sqrt { \lambda _ { t } } \mathbf { e } _ { i } \right) ^ { \top } \right) ^ { - 1 } \left( \sum _ { s = 1 } ^ { t } \sum _ { k = 1 } ^ { K } X _ { k , s } Y _ { k , s } ^ { \mathrm { D R } } \right) , $$ where $\lambda _ { t } = \Omega ( \sqrt { t } )$ is a regularization parameter and $Y _ { k , s } ^ { \mathrm { D R } }$ denotes a DR pseudo-reward. This estimator can be interpreted as augmenting the unselected contexts with their corresponding unbiased pseudo-rewards $( X _ { k , s } , Y _ { k , s } ^ { \mathrm { D R } } )$ for all $k \in [ K ]$ and $s \in [ t ]$ . These augmented observations construct the Gram matrix that controls the self-normalized error of the estimator and influences its generalization performance. In the ridge estimator (1), the Gram matrix includes only the contexts corresponding to selected arms, and thus the estimator converges within the span of those vectors. In contrast, the DR estimator (2) incorporates all contexts, yielding a more well-conditioned matrix and enabling convergence in all directions spanned by the $K$ context vectors. The well-conditioned Gram matrix plays a critical role in determining the convergence rate of the estimator in both linear regression and bandit settings.
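The conditioning gap between the two Gram matrices can be illustrated numerically: with adaptively selected arms, the all-context Gram matrix dominates the selected-context one, so its minimum eigenvalue is strictly larger. The dimensions, horizon, and greedy selection rule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 5, 8, 200
theta = rng.normal(size=d)

G_sel = np.zeros((d, d))   # Gram matrix of selected contexts (ridge-style)
G_all = np.zeros((d, d))   # Gram matrix of all K contexts (DR-style)
for _ in range(T):
    X = rng.normal(size=(K, d))
    a = int(np.argmax(X @ theta))      # greedy selection concentrates contexts
    G_sel += np.outer(X[a], X[a])
    G_all += X.T @ X

# G_all = G_sel + (PSD contribution of unselected arms), so it dominates
lam_sel = np.linalg.eigvalsh(G_sel).min()
lam_all = np.linalg.eigvalsh(G_all).min()
print(lam_sel < lam_all)               # -> True
```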
In the experimental design literature (e.g., Smith, 1918; Guttorp and Lindgren, 2009) and in linear bandits (e.g., Soare et al., 2014; Tao et al., 2018), techniques such as E-optimal design aim to maximize the minimum eigenvalue of the Gram matrix to enhance estimation quality. Consequently, designing augmentations that yield well-conditioned Gram matrices is essential for accurate parameter estimation and low regret. Although the DR estimator’s Gram matrix includes all $K$ arms, the augmentation involves $K$ augmented samples in each round, which introduces additional variance that scales linearly with $K$ . To address this, Kim et al. (2021) assumed that the context covariance matrix has a strictly positive minimum eigenvalue, ensuring a rapid increase in the minimum eigenvalue of the covariance. Rather than augmenting with the full set $\{ ( X _ { k , s } , Y _ { k , s } ^ { \mathrm { D R } } ) : k \in [ K ] , s \in [ t ] \}$ , the proposed method constructs a hypothetical dataset with a reduced number of augmented samples by identifying a set of orthogonal eigenvectors that are informative for learning all $K$ rewards in each round. Moreover, instead of using dummy vectors such as $( \mathbf { e } _ { i } , 0 )$ , the proposed estimator augments basis vectors orthogonal to the span of the observed contexts, thereby adaptively improving generalization to future inputs.

# 4.2 Design of Hypothetical Contexts

At round $t$ , let $a _ { t } \sim \pi _ { t }$ denote the arm drawn according to the policy $\pi _ { t }$ . Before selecting $a _ { t }$ , a set of hypothetical contexts is constructed to preserve the covariance structure of the original context vectors. Define $G _ { t } : = \sum _ { k \in [ K ] \backslash \{ a _ { t } \} } X _ { k , t } X _ { k , t } ^ { \top }$ , and let $r _ { t }$ be its rank.
Since $G _ { t }$ is real, symmetric, and positive semidefinite, it admits an eigen-decomposition $G _ { t } = \sum _ { i = 1 } ^ { r _ { t } } \lambda _ { i , t } u _ { i , t } u _ { i , t } ^ { \top }$ , where $\lambda _ { 1 , t } , \ldots , \lambda _ { r _ { t } , t }$ are the positive eigenvalues and $u _ { 1 , t } , \ldots , u _ { r _ { t } , t }$ are the corresponding orthonormal eigenvectors. Define $r _ { t } + 1$ hypothetical contexts as: $$ Z _ { i , t } = \left\{ \begin{array} { l l } \sqrt { \lambda _ { i , t } } u _ { i , t } , & \mathrm { f o r ~ } i = 1 , \ldots , r _ { t } , \\ X _ { a _ { t } , t } , & \mathrm { f o r ~ } i = r _ { t } + 1 . \end{array} \right. $$ This construction satisfies the identity $$ \sum _ { i = 1 } ^ { r _ { t } + 1 } Z _ { i , t } Z _ { i , t } ^ { \top } = \sum _ { k = 1 } ^ { K } X _ { k , t } X _ { k , t } ^ { \top } , $$ ensuring that the compressed set $\{ Z _ { i , t } \} _ { i = 1 } ^ { r _ { t } + 1 }$ exactly recovers the Gram matrix of the original $K$ contexts, while reducing the number of arms to $r _ { t } + 1$ . To replace the artificial augmentation $\{ ( \mathbf { e } _ { i } , 0 ) : i \in [ d ] \}$ in the ridge estimator (1), an orthogonal basis is constructed at selected rounds. Given hyperparameters $\delta \in ( 0 , 1 )$ and $\gamma \in ( 0 , 1 )$ , define: $$ h _ { t } : = \left\lceil \frac { 2 } { \frac { 1 } { 2 } - e ^ { - 1 } } \frac { d } { 1 - \gamma } \log \frac { d ( t + 1 ) ^ { 2 } } { \delta } \right\rceil , $$ which specifies the number of rounds allocated for orthogonal basis augmentation.
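A quick numerical check of the identity above, using arbitrary simulated contexts (all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, a_t = 4, 6, 2
X = rng.normal(size=(K, d))            # rows are the contexts X_{k,t}

# G_t: Gram matrix of all arms except the selected arm a_t
G = sum(np.outer(X[k], X[k]) for k in range(K) if k != a_t)
lam, U = np.linalg.eigh(G)

# Z_{i,t} = sqrt(lambda_i) u_i for the positive eigenvalues, plus X_{a_t,t}
Z = [np.sqrt(l) * U[:, i] for i, l in enumerate(lam) if l > 1e-10]
Z.append(X[a_t])

lhs = sum(np.outer(z, z) for z in Z)
rhs = sum(np.outer(X[k], X[k]) for k in range(K))
print(np.allclose(lhs, rhs), len(Z) <= d + 1)   # -> True True
```

The compressed set reproduces the full Gram matrix with at most $d+1$ vectors instead of $K$.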
The subset of rounds for the augmentation, $\mathcal { A } _ { t } \subseteq [ t ]$ , is defined recursively as: $$ \mathcal { A } _ { 0 } = \emptyset , \quad \mathcal { A } _ { t } = \left\{ \begin{array} { l l } \mathcal { A } _ { t - 1 } \cup \{ t \} , & \mathrm { i f ~ } | \mathcal { A } _ { t - 1 } | < h _ { t } , \\ \mathcal { A } _ { t - 1 } , & \mathrm { o t h e r w i s e } . \end{array} \right. $$ Also define: $$ T _ { 1 } : = \operatorname* { i n f } \left\{ t \geq 1 : t \geq h _ { t } \right\} \leq \frac { 8 } { \frac { 1 } { 2 } - e ^ { - 1 } } \frac { d } { 1 - \gamma } \left( 1 + \log \frac { 4 } { e / 2 - 1 } \frac { d } { 1 - \gamma } \sqrt { \frac { d } { \delta } } \right) , $$ where the inequality holds by Lemma C.6 in Kim et al. (2023a). Thus, for all $t \geq T _ { 1 }$ , it holds that $h _ { t } \leq | { \mathcal { A } } _ { t } | \leq h _ { t } + 1$ . For each $s \in \mathcal { A } _ { t }$ , let $r _ { s }$ and $\{ u _ { i , s } \} _ { i \in [ r _ { s } ] }$ denote the rank and eigenvectors of $G _ { s } : = \sum _ { k \in [ K ] \backslash \{ a _ { s } \} } X _ { k , s } X _ { k , s } ^ { \top }$ . If $r _ { s } < d$ , the Gram–Schmidt process is used to construct an orthonormal set $\{ u _ { i , s } \} _ { i = r _ { s } + 1 } ^ { d }$ orthogonal to the initial eigenvectors. The hypothetical contexts for each $s \in \mathcal { A } _ { t }$ are then defined as: $$ Z _ { i , s } : = \left\{ \begin{array} { l l } \operatorname* { m a x } \{ x _ { \mathrm { m a x } } , 1 \} u _ { i , s } , & \mathrm { f o r ~ } i = 1 , \ldots , d , \\ X _ { a _ { s } , s } , & \mathrm { f o r ~ } i = d + 1 . \end{array} \right. $$ Now, define the number of arms in the hypothetical bandit problem at round $s$ as: $$ N _ { s } : = { \left\{ \begin{array} { l l } { r _ { s } + 1 } & { { \mathrm { i f ~ } } s \in [ t ] \backslash { \mathcal { A } } _ { t } , } \\ { d + 1 } & { { \mathrm { i f ~ } } s \in { \mathcal { A } } _ { t } . } \end{array} \right.
} $$ Then, for all $s \in [ t ]$ and $i \in [ N _ { s } - 1 ]$ , the hypothetical contexts are $$ Z _ { i , s } : = \left\{ \begin{array} { l l } \sqrt { \lambda _ { i , s } } u _ { i , s } , & s \in [ t ] \setminus \mathcal { A } _ { t } , \\ \operatorname* { m a x } \{ x _ { \operatorname* { m a x } } , 1 \} u _ { i , s } , & s \in \mathcal { A } _ { t } , \end{array} \right. \mathrm { ~ a n d ~ } Z _ { N _ { s } , s } : = X _ { a _ { s } , s } . $$ At round $t$ , the Gram matrix of all hypothetical contexts is given by: $$ V _ { t } : = \sum _ { s = 1 } ^ { t } \sum _ { i = 1 } ^ { N _ { s } } Z _ { i , s } Z _ { i , s } ^ { \top } = \sum _ { s \in [ t ] \setminus \mathcal { A } _ { t } } \sum _ { i = 1 } ^ { r _ { s } + 1 } Z _ { i , s } Z _ { i , s } ^ { \top } + \sum _ { s \in \mathcal { A } _ { t } } \sum _ { i = 1 } ^ { d + 1 } Z _ { i , s } Z _ { i , s } ^ { \top } , $$ and satisfies the following bounds.

Lemma 1 (Gram matrix with hypothetical contexts) For all $t \geq T _ { 1 }$ , the Gram matrix $V _ { t }$ satisfies $$ \begin{array} { r l } & { V _ { t } \succeq \displaystyle \sum _ { s \in [ t ] \backslash \mathcal { A } _ { t } } \sum _ { k = 1 } ^ { K } X _ { k , s } X _ { k , s } ^ { \top } + \operatorname* { m a x } \{ x _ { \operatorname* { m a x } } ^ { 2 } , 1 \} h _ { t } I _ { d } , } \\ & { V _ { t } \preceq \displaystyle \sum _ { s \in [ t ] \backslash \mathcal { A } _ { t } } \sum _ { k = 1 } ^ { K } X _ { k , s } X _ { k , s } ^ { \top } + 2 \operatorname* { m a x } \{ x _ { \operatorname* { m a x } } ^ { 2 } , 1 \} h _ { t } I _ { d } . } \end{array} $$

Proof. From (3), we have: $$ V _ { t } = \sum _ { s \in [ t ] \setminus \mathcal { A } _ { t } } \sum _ { k = 1 } ^ { K } X _ { k , s } X _ { k , s } ^ { \top } + \sum _ { s \in \mathcal { A } _ { t } } \sum _ { i = 1 } ^ { d + 1 } Z _ { i , s } Z _ { i , s } ^ { \top } . $$ For $t \geq T _ { 1 }$ , we have $h _ { t } \leq | { \mathcal { A } } _ { t } | \leq h _ { t } + 1$ .
For each $s \in \mathcal{A}_t$, $$ \sum_{i=1}^{d+1} Z_{i,s} Z_{i,s}^\top = X_{a_s,s} X_{a_s,s}^\top + \max\{x_{\max}^2, 1\} \sum_{i=1}^{d} u_{i,s} u_{i,s}^\top. $$ Because $\{u_{i,s} : i \in [d]\}$ are $d$ orthonormal vectors in $\mathbb{R}^d$, we obtain $\sum_{i=1}^{d} u_{i,s} u_{i,s}^\top = I_d$. Since $X_{a_s,s} X_{a_s,s}^\top \preceq \max\{x_{\max}^2, 1\} I_d$, the bounds follow. Lemma 1 ensures that the hypothetical contexts preserve the Gram matrix of all $K$ context vectors while reducing the number of augmented samples to $N_s$ in each round. The proposed method retains statistical efficiency comparable to full augmentation (Kim et al., 2021), with fewer augmented context samples. # 4.3 A Hypothetical Linear Contextual Bandit Based on the sample $a_t \sim \pi_t$, construct the hypothetical contexts $Z_{i,s}$ as previously described. For each $s \in [t]$ and $i \in [N_s]$, define the corresponding hypothetical rewards: $$ W_{i,s} := Z_{i,s}^\top \theta_\star + \eta_{a_s,s}, $$ where $\eta_{a_s,s}$ is shared with the original bandit problem. Let $\tilde{a}_s \in [N_s]$ be a hypothetical action sampled from the distribution: $$ \mathbb{P}(\tilde{a}_s = i) := \phi_{i,s} = \begin{cases} \dfrac{1-\gamma}{N_s - 1}, & \text{if } i \in [N_s - 1], \\ \gamma, & \text{if } i = N_s, \end{cases} $$ where $\gamma \in (0,1)$ determines the probability mass assigned to the original context $X_{a_s,s}$.
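The per-round effect of this augmentation, which drives the bounds in Lemma 1, can be checked numerically. The sketch below (toy dimensions and values, not the paper's code) verifies that a single augmented round contributes exactly $X_{a_s,s}X_{a_s,s}^\top + \max\{x_{\max}^2,1\} I_d$, which is sandwiched between $\max\{x_{\max}^2,1\} I_d$ and $2\max\{x_{\max}^2,1\} I_d$ in the PSD order:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
x = rng.standard_normal(d)                 # played context X_{a_s,s}
x_max = np.linalg.norm(x)
c = max(x_max**2, 1.0)                     # max{x_max^2, 1}

# Hypothetical contexts for one augmented round s in A_t:
# scaled orthonormal basis u_1..u_d plus the played context itself.
U = np.linalg.qr(rng.standard_normal((d, d)))[0]   # any orthonormal basis
Z = [np.sqrt(c) * U[:, i] for i in range(d)] + [x]
contrib = sum(np.outer(z, z) for z in Z)

# Per-round version of Lemma 1's sandwich:
#   c * I_d  <=  contrib  <=  2c * I_d   (in the PSD order)
lo = np.linalg.eigvalsh(contrib - c * np.eye(d)).min()
hi = np.linalg.eigvalsh(2 * c * np.eye(d) - contrib).min()
assert lo >= -1e-9 and hi >= -1e-9
```

Summing the per-round contributions over $s \in \mathcal{A}_t$ with $h_t \leq |\mathcal{A}_t| \leq h_t + 1$ gives the Gram-matrix bounds of the lemma.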
As $\gamma$ increases, the sampling concentrates on arm $N_s$, increasing the variance of the inverse-probability weights for the other arms. To mitigate this, the number of rounds with orthogonal basis augmentation, $|\mathcal{A}_t| \geq h_t$, must be sufficiently large. This construction defines a hypothetical linear bandit problem $\{(Z_{i,s}, W_{i,s}) : i \in [N_s], s \in [t]\}$ that shares the parameter $\theta_\star$ with the original problem. To perform estimation, construct the following ridge estimator as a reference: $$ \check{\theta}_t := \left( \sum_{s=1}^{t} X_{a_s,s} X_{a_s,s}^\top + \gamma I_d \right)^{-1} \left( \sum_{s=1}^{t} X_{a_s,s} Y_{a_s,s} \right), $$ with regularization parameter $\gamma \in (0,1)$ as used in (9). Using $\check{\theta}_t$, define the pseudo-rewards: $$ \tilde{W}_{i,s}^{H(\check{\theta}_t)} := \left( 1 - \frac{\mathbb{I}(\tilde{a}_s = i)}{\phi_{i,s}} \right) Z_{i,s}^\top \check{\theta}_t + \frac{\mathbb{I}(\tilde{a}_s = i)}{\phi_{i,s}} W_{i,s}. $$ Figure 1: Flow diagram of the proposed coupling and resampling scheme This pseudo-reward is unbiased: $\mathbb{E}[\tilde{W}_{i,s}^{H(\check{\theta}_t)}] = Z_{i,s}^\top \theta_\star$ for all $i \in [N_s]$. The estimator for $\theta_\star$ is then: $$ \tilde{\theta}_t^{H(\check{\theta}_t)} := \left( \sum_{s=1}^{t} \sum_{i=1}^{N_s} Z_{i,s} Z_{i,s}^\top \right)^{-1} \left( \sum_{s=1}^{t} \sum_{i=1}^{N_s} \tilde{W}_{i,s}^{H(\check{\theta}_t)} Z_{i,s} \right).
$$ However, this estimator cannot be computed directly, as the rewards $W _ { i , s }$ for $i \in [ N _ { s } - 1 ]$ are unobserved. Only ${ W _ { N _ { s } , s } } = Y _ { a _ { s } , s }$ is available. Thus, (11) is computable for all $i \in [ N _ { s } ]$ only when $W _ { \tilde { a } _ { s } , s } = Y _ { a _ { s } , s }$ , i.e., when the sampled context in the hypothetical bandit matches the observed context from the original bandit. Since both models share the same parameter $\theta _ { \star }$ and noise $\eta _ { a _ { s } , s }$ , this condition is equivalent to $Z _ { \tilde { a } _ { s } , s } = X _ { a _ { s } , s }$ . This matching event provides the motivation for the coupling technique introduced in the next section. # 4.4 Coupling the Hypothetical and Original Linear Contextual Bandits This section introduces a probabilistic method to couple the hypothetical bandit problem with the original contextual bandit. Figure 1 illustrates the overall coupling and resampling process. The key observation is that the hypothetical pseudo-reward in (11) is computable under the event $\{ Z _ { \tilde { a } _ { s } , s } = X _ { a _ { s } , s } \}$ , which is implied by $\{ \tilde { a } _ { s } = N _ { s } \}$ . To ensure this condition, we resample both $a _ { s } \sim \pi _ { s }$ and $\tilde { a } _ { s } \sim \tilde { \pi } _ { s }$ using the distribution in (9). Each resampling iteration generates updated hypothetical contexts $\{ Z _ { i , s } : i \in [ N _ { s } ] \}$ , effectively randomizing the hypothetical contexts and rewards until the pseudo-reward becomes computable. Although rewards are collected from the original bandit, the estimation of the parameter $\theta _ { \star }$ is performed using compressed and augmented samples from the hypothetical bandit problem. Let $\tilde { a } _ { s } ( m )$ and $a _ { s } ( m )$ denote the actions sampled during the $m$ -th resampling trial in the hypothetical and original bandits, respectively. 
These actions are IID across trials $m$ given $\mathcal{H}_t$. Define the stopping time for successful coupling as $\xi_s := \inf\{m \ge 1 : X_{a_s(m)} = Z_{\tilde{a}_s(m)}\}$. Table 2: Illustration of coupling success and failure during resampling for $N_t = 2$ and $K = 3$. By construction, $Z_{2,t} := X_{a_t,t}$ and $W_{2,t} := Y_{a_t,t}$. Gray cells indicate the selected actions. # Algorithm 1 Candidate-Arm Sampler (CAS) for Round $t$ 1: Input: contexts $\{X_{k,t}\}_{k \in [K]}$, posterior mean $\widehat{\theta}_{t-1}$, exploration variance $v_{t-1}$, Gram matrix $V_{t-1}$, pseudo-index $N_t$, coupling parameter $\gamma$, confidence $\delta$. 2: Set $M_t$ as in (13) // maximum retries 3: Initialize $m \gets 1$ 4: repeat 5: $\widetilde{\theta}_{k,t}^{(m)} \sim \mathcal{N}\big(\widehat{\theta}_{t-1}, v_{t-1}^2 V_{t-1}^{-1}\big)$ independently for all $k \in [K]$ 6: $a_t^{(m)} \gets \arg\max_{k \in [K]} X_{k,t}^\top \widetilde{\theta}_{k,t}^{(m)}$ 7: Sample $\tilde{a}_t^{(m)}$ from the distribution in (9) 8: $m \gets m + 1$ 9: until $\tilde{a}_t^{(m-1)} = N_t$ or $m > M_t$ 10: Output: $a_t^\star \gets a_t^{(m-1)}, \quad \tilde{a}_t^\star \gets \tilde{a}_t^{(m-1)}$ Then, define the matching event: $$ \mathcal{M}_s := \{\xi_s \le M_s\}, \quad M_s := \left\lceil \frac{\log((s+1)^2/\delta)}{\log(1/(1-\gamma))} \right\rceil, $$ which ensures a successful coupling within $M_s$ trials.
Since $\mathbb{P}(\tilde{a}_s(m) = N_s) = \gamma$, the number of trials $M_s$ is selected to guarantee $\mathbb{P}(\mathcal{M}_s) \ge 1 - \delta/(s+1)^2$. The hyperparameter $\gamma$ controls a trade-off: as $\gamma$ increases, the probability of coupling success increases (thus requiring fewer resampling trials), while the size of the regularization set $h_t$ must increase. Table 2 illustrates the data structures for successful and failed couplings during resampling. The proposed resampling-coupling scheme is described in Algorithm 1 as the candidate-arm sampler (CAS). The resampling in CAS is distinct from that in Kim et al. (2021) and Xu and Zeevi (2020). Kim et al. (2021) resample the action to find an arm whose selection probability exceeds a prespecified threshold. Xu and Zeevi (2020) resample the previous counterfactual actions and contexts to impose randomization and generalization on the estimator. In contrast, our resampling couples the hypothetical samples with the original samples, and this coupling technique is a novel contribution of this work. Upon obtaining the coupled contexts $Z_{\tilde{a}_s(M_s),s} = X_{a_s(M_s),s}$, we construct the coupled pseudo-reward as: $$ W_{i,s}^{Co(\check{\theta}_t)} := \left( 1 - \frac{\mathbb{I}(\tilde{a}_s(M_s) = i)}{\phi_{i,s}} \right) Z_{i,s}^\top \check{\theta}_t + \frac{\mathbb{I}(\tilde{a}_s(M_s) = i)}{\phi_{i,s}} W_{i,s}, $$ which is computable for all $i \in [N_s]$ because $a_s(M_s)$ is selected and $W_{\tilde{a}_s(M_s),s} = Y_{a_s(M_s),s}$ is observable.
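The coupled pseudo-reward shares the inverse-probability-weighted form of (11), whose unbiasedness can be checked by simulation. The sketch below uses toy values and simulates the reference estimator as a perturbation of $\theta_\star$ (an assumption for illustration only); it verifies that $\mathbb{E}[\tilde{W}_{i,s}] = Z_{i,s}^\top \theta_\star$ for every $i$, even though the reference is biased:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, gamma = 3, 4, 0.3                    # N hypothetical arms, gamma as in (9)
theta_star = rng.standard_normal(d)
theta_ref = theta_star + 0.2 * rng.standard_normal(d)   # imperfect reference
Z = rng.standard_normal((N, d))            # hypothetical contexts Z_{i,s}
phi = np.full(N, (1 - gamma) / (N - 1))    # sampling distribution phi_{i,s}
phi[-1] = gamma                            # arm N_s gets mass gamma

trials = 400_000
draws = rng.choice(N, size=trials, p=phi)  # hypothetical action tilde{a}_s
noise = rng.standard_normal(trials)        # shared noise eta_{a_s,s}
est = np.zeros((trials, N))
for i in range(N):
    hit = (draws == i).astype(float)
    W_i = Z[i] @ theta_star + noise        # would-be reward of hypothetical arm i
    est[:, i] = (1 - hit / phi[i]) * (Z[i] @ theta_ref) + (hit / phi[i]) * W_i

# The doubly-robust correction removes the bias of the reference estimator.
assert np.allclose(est.mean(axis=0), Z @ theta_star, atol=0.05)
```

Only arm $N_s$'s reward is ever observed in the original bandit, which is exactly why the coupling event $\{\tilde{a}_s(M_s) = N_s\}$ is needed to make this quantity computable.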
Given a reference estimator $\check{\theta}_t$ defined in (10), the proposed hypothetical coupled sample augmented (HCSA) estimator is defined as: $$ \widehat{\theta}_t := \left\{ \sum_{s=1}^{t} \mathbb{I}(\mathcal{M}_s) \sum_{i=1}^{N_s} Z_{i,s} Z_{i,s}^\top \right\}^{-1} \left( \sum_{s=1}^{t} \mathbb{I}(\mathcal{M}_s) \sum_{i=1}^{N_s} W_{i,s}^{Co(\check{\theta}_t)} Z_{i,s} \right). $$ The indicator $\mathbb{I}(\mathcal{M}_s)$ lets the estimator use the coupled pseudo-rewards (14) only when $\mathcal{M}_s$ occurs; otherwise, it skips round $s$ and relies on the previous estimator. Since $\mathcal{M}_s$ occurs with high probability, we can couple the HCSA estimator with the hypothetical sample augmented estimator from (12). While the DR estimator in (2) augments $K$ pseudo-rewards in each round, the proposed HCSA estimator adds only $N_s \leq d+1$ pseudo-rewards for each round $s \in [t]$. This reduction in the number of augmented pseudo-reward samples paves the way to reduce the error and to eliminate the IID and minimum-eigenvalue assumptions on contexts, which Kim et al. (2021) used to obtain a regret bound that depends on the minimum eigenvalue of the context covariance. Next, we provide a coupling inequality that relates the HCSA estimator to the hypothetical sample augmented estimator $\tilde{\theta}_t^{H(\check{\theta}_t)}$. Lemma 2 (A coupling inequality) For $t \geq 1$, let $\mathcal{S}_t := \cap_{s=1}^{t} \mathcal{M}_s$, where $\mathcal{M}_s$ is the matching event defined in (13). For the reference estimator $\check{\theta}_t$ defined in (10) and for $x > 0$, $$ \mathbb{P}\left( \left\{ \left\| \widehat{\theta}_t - \theta_\star \right\|_{V_t} > x \right\} \right) \le \mathbb{P}\left( \left\{ \left\| \tilde{\theta}_t^{H(\check{\theta}_t)} - \theta_\star \right\|_{V_t} > x \right\} \cap \mathcal{S}_t \right) + \mathbb{P}(\mathcal{S}_t^c), $$ and the failure probability satisfies $\mathbb{P}(\mathcal{S}_t^c) \leq \delta$. Proof Fix $t \in [T]$ throughout the proof. For any $\check{\theta} \in \mathbb{R}^d$ and $x > 0$, decompose the probability as follows: $$ \mathbb{P}\left( \left\| \widehat{\theta}_t - \theta_\star \right\|_{V_t} > x \right) \leq \mathbb{P}\left( \left\{ \left\| \widehat{\theta}_t - \theta_\star \right\|_{V_t} > x \right\} \cap \mathcal{S}_t \right) + \mathbb{P}\left( \mathcal{S}_t^c \right). $$ On the event $\mathcal{S}_t := \cap_{s=1}^{t} \mathcal{M}_s$, the HCSA estimator in (15) simplifies to $$ \widehat{\theta}_t = \left\{ \sum_{s=1}^{t} \sum_{i=1}^{N_s} Z_{i,s} Z_{i,s}^\top \right\}^{-1} \left( \sum_{s=1}^{t} \sum_{i=1}^{N_s} W_{i,s}^{Co(\check{\theta}_t)} Z_{i,s} \right) = V_t^{-1} \left( \sum_{s=1}^{t} \sum_{i=1}^{N_s} W_{i,s}^{Co(\check{\theta}_t)} Z_{i,s} \right). $$ Define the function $$ F\Big( \tilde{a}_1(M_1), \dots, \tilde{a}_t(M_t) \Big) := \Big\| \widehat{\theta}_t - \theta_\star \Big\|_{V_t} = \Bigg\| \sum_{s=1}^{t} \sum_{i=1}^{N_s} ( W_{i,s}^{Co(\check{\theta}_t)} - Z_{i,s}^\top \theta_\star ) Z_{i,s} \Bigg\|_{V_t^{-1}}. $$ Using the definition of $M_s$, where $\tilde{a}_s(M_s) = N_s$, we have $$ \begin{aligned} & \mathbb{P}\left( \Big\{ F\Big( \tilde{a}_1(M_1), \dots, \tilde{a}_t(M_t) \Big) > x \Big\} \cap \mathcal{S}_t \right) \\ &= \mathbb{P}\left( \Big\{ F\Big( \tilde{a}_1(M_1), \dots, \tilde{a}_t(M_t) \Big) > x \Big\} \cap \mathcal{S}_t \cap \bigcap_{s=1}^{t} \{ \tilde{a}_s(M_s) = N_s \} \right) \\ &= \mathbb{P}\left( \Big\{ F\Big( \tilde{a}_1(1), \dots, \tilde{a}_t(1) \Big) > x \Big\} \cap \mathcal{S}_t \cap \bigcap_{s=1}^{t} \{ \tilde{a}_s(1) = N_s \} \right), \end{aligned} $$ where the last equality holds because $\{ \tilde{a}_s(m) : m \in \mathbb{N} \}$ are IID for each $s \in [t]$. Then, $$ \begin{aligned} & \mathbb{P}\left( \left\{ F\Big( \tilde{a}_1(1), \dots, \tilde{a}_t(1) \Big) > x \right\} \cap \mathcal{S}_t \cap \bigcap_{s=1}^{t} \{ \tilde{a}_s(1) = N_s \} \right) \\ &\leq \mathbb{P}\left( \left\{ F\Big( \tilde{a}_1(1), \dots, \tilde{a}_t(1) \Big) > x \right\} \cap \mathcal{S}_t \right). \end{aligned} $$ We observe that replacing $\{ \tilde{a}_s(M_s) : s \in [t] \}$ in the coupled pseudo-rewards (14) with $\{ \tilde{a}_s(1) : s \in [t] \}$ gives the hypothetical pseudo-rewards in (11).
Thus, the distribution of the normalized error $\| \tilde{\theta}_t^{H(\check{\theta}_t)} - \theta_\star \|_{V_t}$ is equivalent to that of $F\big( \tilde{a}_1(1), \dots, \tilde{a}_t(1) \big)$, and we obtain $$ \mathbb{P}\Big( \{ F( \tilde{a}_1(1), \dots, \tilde{a}_t(1) ) > x \} \cap \mathcal{S}_t \Big) = \mathbb{P}\Big( \{ \| \tilde{\theta}_t^{H(\check{\theta}_t)} - \theta_\star \|_{V_t} > x \} \cap \mathcal{S}_t \Big), $$ which proves the coupling inequality. The bound for the failure probability, $\mathbb{P}(\mathcal{S}_t^c) \leq \delta$, follows from the fact that $\mathbb{P}(\mathcal{M}_s^c) \le \delta/(s+1)^2$ by construction of the maximum number of resampling trials. With the coupling inequality, we can leverage augmented samples from the hypothetical bandit problem to closely approximate the hypothetical sample augmented estimator with high probability. While the coupling technique can be applied to any choice of hypothetical problem, and the hypothetical contexts $\{ Z_{i,t} \}$ may be arbitrarily defined, it is crucial that they are compatible with the original contextual bandit problem and the reference estimator $\check{\theta}_t$.
The design of suitable hypothetical contexts is key, as it enables control of the maximum deviation in the original problem through the bound: $$ \max_{k \in [K]} \Big| X_{k,t}^\top \Big( \tilde{\theta}_t^{H(\check{\theta}_t)} - \theta_\star \Big) \Big| \leq \Big\| \tilde{\theta}_t^{H(\check{\theta}_t)} - \theta_\star \Big\|_{\tilde{G}_t} \cdot \max_{k \in [K]} \| X_{k,t} \|_{\tilde{G}_t^{-1}}, $$ where $\tilde{G}_t := \sum_{s=1}^{t} \sum_{i=1}^{N_s} Z_{i,s} Z_{i,s}^\top$ denotes the Gram matrix constructed from the hypothetical contexts. This upper bound consists of two components: (i) the self-normalized error of the hypothetical sample augmented estimator, and (ii) the maximum norm of the original contexts normalized by $\tilde{G}_t$. Each component is sensitive to how the Gram matrix $\tilde{G}_t$ is constructed. If $\tilde{G}_t$ is defined using only the played contexts, i.e., $\tilde{G}_t = \sum_{s=1}^{t} X_{a_s,s} X_{a_s,s}^\top + I_d$, then the norm term $\max_{k \in [K]} \| X_{k,t} \|_{\tilde{G}_t^{-1}}$ may become unbounded due to insufficient exploration. # Algorithm 2 Hypothetical Coupled Sample Augmented Thompson Sampling (HCSA+TS) 1: Input: confidence level $\delta \in (0,1)$, coupling parameter $\gamma \in (0,1)$, exploration parameter $v_t := \{ 2 \log \frac{K(t+1)^2}{\delta} \}^{-1/2}$, orthogonal basis regularization parameter $h_t$ as in (4). 2: Initialize the estimator $\widehat{\theta}_0 = \mathbf{0}$, Gram matrix $V_0 = O$, and a subset of rounds for the orthogonal basis regularization $\mathcal{A}_0 = \emptyset$. 3: for $t = 1$ to $T$ do 4: Observe contexts $\{ X_{k,t} : k \in [K] \}$. 5: Update $\mathcal{A}_t$ as in (5) and compute $N_t$ as in (7). 6: Set $m = 1$ and sample $\tilde{a}_t(m)$ from the multinomial distribution (9). 7: $( a_t(m), \tilde{a}_t(m) ) \gets \mathtt{CAS}\big( \{ X_{k,t} \}, \widehat{\theta}_{t-1}, v_{t-1}, V_{t-1}, N_t, \gamma, \delta \big)$ 8: if $\tilde{a}_t(m) = N_t$ then // Resampling succeeded 9: Pull arm $a_t(m)$ and observe $Y_{a_t(m),t}$. 10: Compute the reference estimator $\check{\theta}_t$ defined in (10). 11: if $t \in \mathcal{A}_t$ then 12: Compute hypothetical contexts $\{ Z_{i,t} : i \in [N_t] \}$ with the orthogonal basis as in (8). 13: else 14: Compute hypothetical contexts $\{ Z_{i,t} : i \in [N_t] \}$ as in (8). 15: end if 16: Update $V_t = V_{t-1} + \sum_{i=1}^{N_t} Z_{i,t} Z_{i,t}^\top$. 17: Compute the estimator $\widehat{\theta}_t$ as in (15). 18: else 19: $\widehat{\theta}_t \gets \widehat{\theta}_{t-1}$ 20: end if 21: end for Conversely, if $\tilde{G}_t$ is constructed from an overly large set of hypothetical contexts, the self-normalized error $\| \tilde{\theta}_t^{H(\check{\theta}_t)} - \theta_\star \|_{\tilde{G}_t}$ may increase significantly, making it harder to guarantee tight estimation bounds.
Therefore, the design of the hypothetical contexts must carefully balance the number of augmented samples to ensure both the estimator's accuracy and the tightness of high-probability bounds. Section 5.1 presents a formal analysis of this trade-off and shows that the proposed construction achieves this balance effectively, leading to a well-conditioned estimator with high-probability regret guarantees. # 4.5 Hypothetical Coupled Sample Augmented Thompson Sampling The proposed algorithm, Hypothetical Coupled Sample Augmented Thompson Sampling (HCSA+TS), is detailed in Algorithm 2. This algorithm builds upon the structure of LinTS but introduces two key innovations: (i) resampling to couple the hypothetical bandit with the original bandit, and (ii) the HCSA estimator, which leverages a compressed orthogonal basis for improved efficiency. For the resampling step (i), the algorithm repeatedly resamples $a_t$ from the LinTS policy equipped with the HCSA estimator and the novel Gram matrix until the condition $\{ Z_{\tilde{a}_t,t} = X_{a_t,t} \}$ is satisfied. This ensures that the hypothetical bandit aligns with the original bandit by augmenting the randomized contexts. The number of resampling attempts, set to $\big\lceil \log \frac{(t+1)^2}{\delta} / \log \frac{1}{1-\gamma} \big\rceil$, guarantees that the resampling process succeeds with probability at least $1 - \delta/(t+1)^2$. In practice, this resampling typically succeeds after only a few iterations. For the HCSA estimator (ii), although computing it might appear computationally demanding, efficient implementation strategies significantly reduce its complexity. Theoretically, the algorithm must compute hypothetical context vectors $\{ Z_{i,t} : i \in [N_t] \}$ for each resampling iteration.
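The retry cap can be checked directly: each trial hits $\{\tilde{a}_t = N_t\}$ with probability $\gamma$, so capping at $\lceil \log((t+1)^2/\delta) / \log(1/(1-\gamma)) \rceil$ attempts bounds the failure probability by $\delta/(t+1)^2$. A minimal sketch with hypothetical helper names:

```python
import math

def max_retries(s, delta, gamma):
    """M_s from (13): retry cap guaranteeing coupling succeeds w.h.p."""
    return math.ceil(math.log((s + 1) ** 2 / delta)
                     / math.log(1.0 / (1.0 - gamma)))

def failure_prob(M, gamma):
    """P(no success within M independent trials), each succeeding w.p. gamma."""
    return (1.0 - gamma) ** M

delta, gamma = 0.05, 0.3
for s in (1, 10, 1000):
    M = max_retries(s, delta, gamma)
    # By construction, the failure probability is at most delta / (s+1)^2.
    assert failure_prob(M, gamma) <= delta / (s + 1) ** 2
```

Note that $M_s$ grows only logarithmically in $s$, consistent with the observation that resampling typically succeeds after a few iterations.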
However, in practice, the algorithm first checks whether $\{ \tilde{a}_t = N_t \}$ occurs and only then computes the hypothetical contexts based on the resampled $a_t$. The worst-case computational complexity of the algorithm is $O\big( d^2 (K + d) T + T \log(T+1) / \log\big( \frac{1}{1-\gamma} \big) \big)$. The primary computational bottleneck arises from calculating the Gram matrix $V_t$ and performing eigenvalue decomposition to construct hypothetical contexts, repeated for the specified number of resampling attempts in each round $t \in [T]$. In practice, the computational efficiency of the HCSA estimator can be further enhanced by applying the Sherman-Morrison formula, enabling rank-1 updates and reducing memory usage. # 5 Regret Analysis The following theorem establishes a nearly minimax-optimal cumulative regret bound for the HCSA+TS algorithm. Theorem 3 (Regret Bound for HCSA+TS) In Algorithm 2, set the exploration parameter as $v_t = \{ 2 \log( K(t+1)^2 / \delta ) \}^{-1/2}$. Then, with probability at least $1 - 3\delta$, the cumulative regret of the HCSA+TS algorithm by round $T$ satisfies: $$ R(T) \leq \frac{4d}{(1/2 - e^{-1})(1-\gamma)} \log \frac{d(T+1)^2}{\delta} + \left\{ 5\sqrt{2}\, \theta_{\max} + \frac{6\sigma}{\gamma} \sqrt{2d \log \frac{2T}{\delta}} \right\} \sqrt{4dT \log \frac{2T}{\delta}} + C_1 + C_2 + C_3 + C_4. $$ The leading-order term of the regret bound is $O(d\sqrt{T} \log T)$, which matches the minimax lower bound $\Omega(d\sqrt{T})$ established in Lattimore and Szepesvári (2020), up to logarithmic factors. This result provides the first nearly minimax-optimal regret guarantee for linear contextual bandits under arbitrary context distributions.
For comparison, Kim et al. (2021) achieved a $\tilde{O}(d\sqrt{T})$ regret bound under IID contexts drawn from special distributions whose average covariance matrix has minimum eigenvalue $\Omega(1/d)$. Kim et al. (2023c) achieved a regret bound of $O(\sqrt{dT \log T})$ under the assumption of IID contexts with a strictly positive minimum eigenvalue for their average covariance matrix. Similarly, Huix et al. (2023) obtained a regret bound of $\tilde{O}(d\sqrt{T})$, but their analysis assumes a Gaussian prior on the unknown parameter $\theta_\star$. Earlier works such as Kim et al. (2021) and Agrawal and Goyal (2013) impose unit-norm assumptions on the contexts. In contrast, the regret bound (16) does not require normalization of the context vectors. The only assumption made is that the absolute inner product between any context and the true parameter is bounded, i.e., $|X_{k,t}^\top \theta_\star| \leq 1$. Notably, increasing the context norm bound $x_{\max}$ does not cause the regret to grow linearly, avoiding the scaling issues encountered in prior analyses. The key technical contributions enabling this result include: (i) the development of a self-normalized bound for the HCSA estimator using a carefully constructed Gram matrix (Section 5.1), (ii) the identification of a set of low-regret arms selected with high probability (Section 5.2), and (iii) a novel maximal elliptical potential bound based on the augmented Gram matrix $V_t$ (Section 5.3). # 5.1 A Self-Normalized Bound for the Proposed Estimator With the coupling inequality (Lemma 2), we can bound the error of the proposed estimator by obtaining an error bound for the hypothetical estimator, which is proven in the following lemma.
Lemma 4 (A self-normalized bound of the HSA estimator) For each $t \geq 1$, define the matrix $A_t := \sum_{s=1}^{t} \phi_{\tilde{a}_s,s}^{-1} Z_{\tilde{a}_s,s} Z_{\tilde{a}_s,s}^\top$. Then the self-normalized bound of the hypothetical sample augmented estimator is decomposed as: $$ \left\| \tilde{\theta}_t^{H(\check{\theta}_t)} - \theta_\star \right\|_{V_t} \leq \left\| V_t^{-1/2} (V_t - A_t)(\check{\theta}_t - \theta_\star) \right\|_2 + \left\| \sum_{s=1}^{t} \phi_{\tilde{a}_s,s}^{-1} ( W_{\tilde{a}_s,s} - Z_{\tilde{a}_s,s}^\top \theta_\star ) Z_{\tilde{a}_s,s} \right\|_{V_t^{-1}}. $$ The proof is in Appendix A.1. The decomposition shows the two sources of error for the hypothetical sample augmented estimator of the rewards of all arms: (i) the error from the reference estimator $\check{\theta}_t$ used in the pseudo-rewards (11), and (ii) the noise error of the rewards. In error term (i), $V_t^{-1/2}(V_t - A_t)$ is a matrix martingale with bounded eigenvalues, which is bounded by the newly developed matrix concentration inequality (Lemma 10). The error term (ii) is bounded by a modified martingale inequality developed by Abbasi-Yadkori et al. (2011). With a suitable choice of $\check{\theta}_t$, we obtain an $O(\sqrt{d \log t})$ error bound for the estimator $\tilde{\theta}_t^{H(\check{\theta}_t)}$, which is normalized by the novel augmented Gram matrix $V_t$ instead of the conventional Gram matrix that includes only selected contexts. Theorem 5 (Self-Normalized Bound for the HCSA Estimator) With probability at least $1 - 3\delta$, the estimator defined in (15) satisfies $$ \left\| \widehat{\theta}_t - \theta_\star \right\|_{V_t} \leq 5 \theta_{\max} + \frac{6\sigma}{\gamma} \sqrt{d \log \frac{1+t}{\delta}} $$ for all $t \geq T_1$. The proof is provided in Appendix A.2. Unlike the classical self-normalized bound of Abbasi-Yadkori et al. (2011), which is normalized by the Gram matrix $\sum_{s=1}^{t} X_{a_s,s} X_{a_s,s}^\top + I_d$ built solely from selected contexts, Theorem 5 establishes a bound normalized by the full Gram matrix $V_t$, which includes contexts from all $K$ arms. While Kim et al. (2023c) also considered self-normalization with respect to a full Gram matrix, their estimator incorporates contexts from all arms only in a fraction of the rounds, and their analysis is restricted to IID contexts with a strictly positive-definite covariance matrix. In contrast, Theorem 5 applies to arbitrary (including non-IID, non-stationary) context sequences, establishing a uniform self-normalized bound with a full Gram matrix. This result enables a novel regret analysis of Thompson Sampling that yields a $\tilde{O}(d\sqrt{T})$ bound for HCSA+TS, applicable under arbitrary context distributions. # 5.2 Low-Regret Arms with a High-Probability Guarantee For each $k \in [K]$ and $t \in [T]$, define the instantaneous regret gap between the optimal arm and arm $k$ as $$ \Delta_{k,t} := X_{a_t^\star,t}^\top \theta_\star - X_{k,t}^\top \theta_\star.
$$ Using this, we define the set of low-regret arms at round $t$ as $$ \mathcal{P}_t := \left\{ k \in [K] : \Delta_{k,t} \leq 2 x_t \left\| \widehat{\theta}_{t-1} - \theta_\star \right\|_{V_{t-1}} + \sqrt{ \left\| X_{a_t^\star,t} \right\|_{V_{t-1}^{-1}}^2 + \left\| X_{k,t} \right\|_{V_{t-1}^{-1}}^2 } \right\}, $$ where $x_t := \max_{k \in [K]} \| X_{k,t} \|_{V_{t-1}^{-1}}$. The self-normalized confidence bound with respect to $V_t$, which includes contexts beyond the selected arms, allows the construction of an effective set $\mathcal{P}_t$ that has lower regret than arms in sets built from Gram matrices based solely on selected contexts, such as $\sum_{s=1}^{t} X_{a_s,s} X_{a_s,s}^\top + I_d$. The following lemma provides a high-probability guarantee that the arm selected by Algorithm 2 belongs to the low-regret set. Lemma 6 (High-Probability Selection of Low-Regret Arms) Let $a_t$ be the arm selected by Algorithm 2, and let $\mathcal{P}_t$ be the set defined in (17). If the exploration parameter is set as $v_t = \{ 2 \log( K(t+1)^2 / \delta ) \}^{-1/2}$, then $$ \mathbb{P}\left( a_t \in \mathcal{P}_t \mid \mathcal{H}_t \right) \geq 1 - \frac{\delta}{(t+1)^2}. $$ The proof is deferred to Appendix A.4. In contrast to Agrawal and Goyal (2013), where bounding the probability of selecting a saturated arm required setting $v = \sqrt{9 d \log(t/\delta)}$, thereby introducing a $\sqrt{d}$-scaling, Lemma 6 establishes that such dimensional dependence is unnecessary.
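The defining property of $\mathcal{P}_t$ (the optimal arm has zero gap and therefore always belongs to the set) can be illustrated on a toy instance. The sketch below uses assumed values for the estimator error and the Gram matrix, and is not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(4)
d, K = 5, 8
theta_star = rng.standard_normal(d)
theta_hat = theta_star + 0.1 * rng.standard_normal(d)   # assumed estimator
X = rng.standard_normal((K, d))                         # contexts X_{k,t}
V = np.eye(d) + sum(np.outer(x, x)
                    for x in rng.standard_normal((30, d)))  # stand-in for V_{t-1}
V_inv = np.linalg.inv(V)

norms = np.array([np.sqrt(x @ V_inv @ x) for x in X])   # ||X_k||_{V^{-1}}
x_t = norms.max()
star = int(np.argmax(X @ theta_star))                   # optimal arm a_t^*
err = theta_hat - theta_star
err_V = np.sqrt(err @ V @ err)                          # ||theta_hat - theta_star||_V

gaps = (X[star] - X) @ theta_star                       # Delta_{k,t}
radius = 2 * x_t * err_V + np.sqrt(norms[star] ** 2 + norms ** 2)
P_t = {k for k in range(K) if gaps[k] <= radius[k]}     # the set in (17)

# The optimal arm has zero gap, so it always lies in the low-regret set.
assert star in P_t
```

Lemma 6 strengthens this trivial membership to a high-probability guarantee for the arm that the algorithm actually selects.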
By leveraging the structure of the HCSA estimator and its associated Gram matrix, the proposed approach guarantees high-probability selection of low-regret arms without incurring additional dependence of the variance parameter $v$ on the dimension $d$. # 5.3 Maximal Elliptical Potential Bound While continuing the proof of the regret bound, we face a novel term that needs to be analyzed. By Lemma 6, the algorithm selects arms from the low-regret set $\mathcal{P}_t$, i.e., $a_t \in \mathcal{P}_t$, with high probability. This leads to the following novel regret decomposition: $$ \mathrm{regret}(t) = \Delta_{a_t,t} \leq 2 x_t \left\| \widehat{\theta}_{t-1} - \theta_\star \right\|_{V_{t-1}} + \sqrt{ \left\| X_{a_t^\star,t} \right\|_{V_{t-1}^{-1}}^2 + \left\| X_{a_t,t} \right\|_{V_{t-1}^{-1}}^2 }, $$ for $t \geq T_1$. Thus, the cumulative regret is bounded as follows: $$ R(T) \leq 2 h_T + \sum_{t \in [T] \setminus \mathcal{A}_T} \left\{ 2 x_t \left\| \widehat{\theta}_{t-1} - \theta_\star \right\|_{V_{t-1}} + \sqrt{ \left\| X_{a_t^\star,t} \right\|_{V_{t-1}^{-1}}^2 + \left\| X_{a_t,t} \right\|_{V_{t-1}^{-1}}^2 } \right\}. $$ Because $x_t := \max_{k \in [K]} \| X_{k,t} \|_{V_{t-1}^{-1}}$, we get: $$ R(T) \leq 2 h_T + \sum_{t \in [T] \setminus \mathcal{A}_T} \big( 2 x_t \| \widehat{\theta}_{t-1} - \theta_\star \|_{V_{t-1}} + \sqrt{2}\, x_t \big). $$ Factoring out $x_t$: $$ R(T) \leq 2 T_1 + \sum_{t \in [T] \setminus \mathcal{A}_T} \big( 2 \| \widehat{\theta}_{t-1} - \theta_\star \|_{V_{t-1}} + \sqrt{2} \big)\, x_t.
$$ Since we obtain $\Vert \widehat { \theta } _ { t - 1 } - \theta _ { \star } \Vert _ { V _ { t - 1 } } = O ( \sqrt { d \log t } )$ from Theorem 5, we need a bound for $\textstyle \sum _ { t \in [ T ] \setminus { A _ { T } } } x _ { t }$ , which is given in the following lemma. Lemma 7 (Maximal elliptical potential lemma) For $x _ { t } : = \operatorname* { m a x } _ { k \in [ K ] } \| X _ { k , t } \| _ { V _ { t - 1 } ^ { - 1 } }$ , we have: $$ \sum _ { t = T _ { 1 } } ^ { T } x _ { t } ^ { 2 } \leq 2 d \log { \frac { T } { d } } . $$ The proof is in Appendix A.5. Combined with the Cauchy–Schwarz inequality, Lemma 7 yields $\textstyle \sum _ { t \in [ T ] \setminus A _ { T } } x _ { t } \leq \sqrt { T \sum _ { t = T _ { 1 } } ^ { T } x _ { t } ^ { 2 } } \leq \sqrt { 2 d T \log ( T / d ) }$ . Previous elliptical potential lemmas only bound the normalized norm of the selected contexts $X _ { a _ { t } , t }$ , while we need a bound for $\operatorname* { m a x } _ { k \in [ K ] } \| X _ { k , t } \| _ { V _ { t - 1 } ^ { - 1 } }$ . The maximal elliptical potential lemma bounds the normalized norm of the contexts for all $K$ arms, which previous analyses could not bound effectively. This is possible because we augment suitable samples to obtain the Gram matrix $V _ { t }$ consisting of contexts from all $K$ arms. # 6 Experimental Results This section evaluates the empirical performance of the proposed algorithm, HCSA+TS, against several benchmark algorithms for LinCB on simulated data. The benchmarks include LinTS, LinUCB, DRTS (Kim et al., 2021), HyRan (Kim et al., 2023c), and SupLinUCB (Chu et al., 2011). In the experimental setting, the parameter $\theta _ { \star }$ is defined as: $$ \theta _ { \star } : = \frac { 1 } { \sqrt { d } } \biggl ( \underbrace { 1 , \cdots , 1 } _ { \lceil d / 2 \rceil } , \underbrace { - 1 , \cdots , - 1 } _ { d - \lceil d / 2 \rceil } \biggr ) ^ { \top } , $$ where $d \in \{ 1 0 , 3 0 \}$ is the dimension of the parameter. The $i$ -th entry of the context vectors for the $K \in \{ 2 0 , 3 0 \}$ arms is independently sampled from a Gaussian distribution with mean $- 1 + \textstyle { \frac { 3 ( i - 1 ) } { d - 1 } }$ and variance 1 for each $i \in [ d ]$ . 
These vectors are normalized and then scaled by a scalar drawn uniformly from $[ 0 , 1 ]$ . To simulate missing context information, with probability $1 / 2$ , the last $d - \lceil d / 2 \rceil$ entries of the context vectors are set to zero at each round. This setting reflects practical scenarios where certain context features may be unavailable with some probability, making it challenging to estimate the corresponding entries in $\theta _ { \star }$ . The hyperparameter optimization was conducted as follows: For LinTS, the variance parameter was selected from $\{ 0 . 0 1 , 0 . 1 , 1 \}$ . For LinUCB and SupLinUCB, the confidence bound inflation parameter was chosen from $\{ 0 . 0 1 , 0 . 1 , 1 \}$ . For HyRan and HCSA+TS, the regularization parameters $p$ and $\gamma$ were tuned from $\{ 0 . 1 , 0 . 5 , 0 . 9 \}$ . The hyperparameters for DRTS were fixed as specified in Kim et al. (2021). Values outside the specified ranges showed negligible differences in performance, suggesting robustness to hyperparameter selection for all methods. Figure 2: Comparison of the regrets of the proposed $\mathtt { H C S A + T S }$ algorithm with other benchmark methods. The lines represent the average, and the shaded areas indicate the standard deviation based on twenty experiments. The results demonstrate that the proposed $\mathtt { H C S A + T S }$ effectively identifies the optimal arm using orthogonal regularization. Figure 2 compares the cumulative regret of HCSA+TS with other benchmark algorithms across various configurations of $d$ and the number of arms $K$ . Each line represents the average cumulative regret, and the shaded regions indicate the standard deviation across 20 independent trials. The results show that HCSA+TS achieves the lowest cumulative regret in all tested settings. Compared to LinTS, LinUCB, and SupLinUCB, which do not leverage information from all arms, HCSA+TS demonstrates robustness to missing context data. 
When compared to DRTS and HyRan, which use the original context vectors, HCSA+TS consistently identifies low-regret arms more effectively, even under significant masking of context features. Figure 3: Comparison of the prediction error across all arms, calculated as $\textstyle \sum _ { i = 1 } ^ { K } \{ X _ { i , t } ^ { \top } ( \widehat { \theta } _ { t } - \theta _ { \star } ) \} ^ { 2 }$ , for the proposed HCSA+TS and other benchmark methods. The lines represent the averages, and the shaded areas indicate the standard deviations based on twenty experiments. The results demonstrate that the proposed estimator, enhanced with orthogonal augmentation, learns the reward more accurately than other estimators. Initially, due to the orthogonal basis regularization, HCSA+TS incurs higher regret during the exploration phase, particularly when the effective rank of the context matrix is low. However, it rapidly adapts and identifies the optimal arm, ultimately outperforming the other algorithms, which continue to suffer regret due to their inability to handle missing context data effectively. Figure 3 illustrates the prediction error across all arms, measured as $\textstyle \sum _ { i = 1 } ^ { K } \{ X _ { i , t } ^ { \top } ( \widehat { \theta } _ { t } - \theta _ { \star } ) \} ^ { 2 }$ . Similar to the regret results, the averages and standard deviations are computed over 20 trials. The initial convergence of the estimators in DRTS and HyRan is faster due to their reliance on imputed contexts. However, their prediction errors increase over time because the imputed contexts, often containing many zero entries, provide incomplete information and hinder accurate estimation. In contrast, HCSA+TS demonstrates steady and consistent convergence throughout the experimental horizon. 
Its orthogonal augmentation strategy allows it to extract useful information even when parts of the context vectors are masked, leading to superior prediction accuracy compared to both traditional ridge-based estimators (LinTS, LinUCB, SupLinUCB) and other augmented methods (DRTS, HyRan). In summary, the experiments validate that HCSA+TS achieves significant improvements in both cumulative regret and prediction accuracy over existing benchmarks. Its ability to handle missing context information effectively while leveraging orthogonal regularization makes it particularly well-suited for practical scenarios with incomplete data.
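As a rough illustration of the simulated environment described in the experimental setup (not the authors' code; function names are our own), the construction of $\theta_\star$ and the context sampling with random masking can be sketched using only the Python standard library:

```python
import math
import random

def make_theta_star(d):
    """theta_star = (1,...,1, -1,...,-1) / sqrt(d), with ceil(d/2) positive entries."""
    ones = math.ceil(d / 2)
    return [1 / math.sqrt(d)] * ones + [-1 / math.sqrt(d)] * (d - ones)

def sample_contexts(d, K, mask_prob=0.5):
    """One round of contexts for K arms; tail entries are masked with probability 1/2."""
    contexts = []
    for _ in range(K):
        # i-th entry ~ N(-1 + 3(i-1)/(d-1), 1), here with 0-indexed i
        x = [random.gauss(-1 + 3 * i / (d - 1), 1) for i in range(d)]
        norm = math.sqrt(sum(v * v for v in x))
        scale = random.uniform(0, 1) / norm
        x = [v * scale for v in x]           # normalize, then scale by U[0, 1]
        if random.random() < mask_prob:      # simulate missing context information
            for i in range(math.ceil(d / 2), d):
                x[i] = 0.0
        contexts.append(x)
    return contexts

theta = make_theta_star(10)
ctx = sample_contexts(10, 20)
```

After scaling, every context has Euclidean norm at most 1, and in roughly half of the rounds the last $d - \lceil d/2 \rceil$ coordinates are zeroed out, so the corresponding entries of $\theta_\star$ receive no signal from those arms.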
In linear contextual bandits, the objective is to select actions that maximize cumulative rewards, modeled as a linear function with unknown parameters. Although Thompson Sampling performs well empirically, it does not achieve optimal regret bounds. This paper proposes a nearly minimax optimal Thompson Sampling for linear contextual bandits by developing a novel estimator with adaptive augmentation and coupling of hypothetical samples designed for efficient parameter learning. The proposed estimator accurately predicts rewards for all arms without relying on assumptions on the context distribution. Empirical results show robust performance and significant improvement over existing methods.
[ "stat.ML", "cs.LG" ]
# 1 Introduction When developing software, large efforts are spent on quality assurance [2], which is mainly performed by conducting a number of automated and manual tests on the software. Developers use such tests to determine the compliance of the software with domain requirements as well as to determine the quality of the software. Popular qualitative metrics for the software under development are, e.g., its performance, security, or accessibility [3]. To varying degrees, such tests can be automated and be executed and evaluated without human input. One major aspect of software quality, which is not easily captured via automated tests, is the acceptance of the software by end users. User acceptance depends not only on the satisfaction of domain requirements, but also, among others, on the intuitiveness and consistency of the user interface. These metrics are hard to capture using automated tests. Hence, these criteria are often tested for using manual tests of the interface of the software. While manual tests are perceived as effective by software developers, they are also expensive in terms of time and resources spent [4]. Although some work exists on the automation of such tests, that work is mainly focused on either web-based software, or applications for popular mobile operating systems, like Android. [5,6,7,8,9,10,11] In this work, we present GERALLT, a system that aims to support developers in testing the interface of a real-life desktop-based engineering software that is in productive use throughout numerous institutes of the German Aerospace Center (DLR). At the core of GERALLT are two components based on Large Language Models (LLMs), a type of generative Artificial Intelligence (AI) capable of producing human-like text based on patterns learned from vast amounts of data. 
One of these components explores the Graphical User Interface (GUI), while the other evaluates the observed behavior for unintuitive or inconsistent interactions, as well as for functional errors. We then evaluate the issues determined by GERALLT via discussion with the developers of the software. The main contribution of this work consists of an architecture description of GERALLT and its evaluation. GERALLT comprises multiple components and is partially based on previous work due to Liu et al. [5]. We evaluate GERALLT by testing a specific feature of the engineering software with it and discussing the issues GERALLT determined with the software engineers. The remainder of this work is organized as follows: After giving an overview of previous work in Section 2, we describe the system under test and its use at DLR in Section 3, and its technical implementation in Section 4. Afterwards, we evaluate GERALLT in Section 5 and conclude this work with a summary of our results and perspectives for future work in Section 6. # 2 Background and Related Work LLMs are a class of generative AI, which are trained on large amounts of textual data and refined through human feedback. Given a textual input prompt, they produce textual responses, which mimic human responses with surprising accuracy in a diverse array of tasks. In recent years, there has been a flurry of research on new domains in which LLMs can be applied. [12] In particular, there exist a number of works on the application of LLMs in software quality assurance. Among others, these include using LLMs for generating [13,10,11] and executing [7,14] test cases, for exploring the system under test, e.g., via fuzzing [9], and for repairing detected issues [15,16,17]. Moreover, there exist numerous works on the automation of GUI testing [18]. Of particular interest are novel strategies for the traversal of applications for the Android operating system [19,20] as well as the use of AI for the traversal of arbitrary GUIs [8,6]. 
In the remainder of this section, we focus on three pieces of related work that align most closely with our work presented here. Liu et al. [5] developed GPT Droid, an LLM-based system for testing Android apps. GPT Droid extracts the GUI context from an app and encodes it into prompts for the LLM. The LLM generates operational commands, which GPT Droid translates into executable actions. GPT Droid includes a functionality-aware memory mechanism to retain long-term knowledge of the testing process. This mechanism allows the LLM to guide exploration based on app functionality. In evaluations on 93 Android apps, GPT Droid achieved 75% activity coverage and detected 31% more bugs than the best baseline. It also identified 53 new bugs on Google Play, with developers confirming or fixing 35 of them. Gao et al. [21] developed a system for automating desktop GUI tasks by utilizing LLMs. The system includes a GUI Parser that converts screenshots and metadata into a structured representation by combining Optical Character Recognition (OCR), icon detection, and other vision tools. This representation enables the system to understand diverse UI elements and their spatial relationships. Their experiments showed the framework achieving a 46% success rate, revealing the challenges and potential for improvements in this domain. Zimmermann and Koziolek [6] use LLMs to test a web-based example application. Web-based applications have a natural textual representation, which is rendered by the browser. In contrast, we investigate a desktop-based application where we first need to generate a textual representation of the current state of the GUI. Moreover, here we investigate the capabilities of LLMs for testing a real-world application instead of an example application specifically constructed for the purpose of testing. Our work builds upon the approaches of Liu et al. and Gao et al. We use the concept of Liu et al. 
for automated GUI testing and combine it with the approach of Gao et al. for the automation of a desktop application. The key novelty of our work lies in automating GUI testing for a desktop application. While prior research has focused on mobile and web applications, our approach extends LLM-based testing techniques to a real-world desktop environment. # 3 The Use Case RCE is an open-source software used for designing, implementing, and executing simulation toolchains. Engineers at DLR use RCE as part of their daily work to simulate complex systems, such as aircraft, ships, and satellites [22,23]. RCE itself does not provide any discipline-specific simulations “out of the box” but instead relies on engineers integrating their pre-existing simulations into RCE. Integrating a simulation into RCE mainly amounts to defining inputs and outputs of the simulation as well as providing a shell script defining the invocation of the simulation. While straightforward in concept, there are numerous additional decisions to be made by the tool integrator, such as defining correct paths for execution, setting up the environment of the simulation, or allowing for concurrent executions of simulations. In practice, the integration of an existing simulation is a complex task that requires in-depth knowledge of both RCE and the simulation to be integrated. To simplify the integration, RCE provides users with a desktop-based GUI that guides the integrator through the integration. This GUI consists of multiple pages of Eclipse Rich Client Platform (RCP)-based forms which ask the user to supply the information required for integration, sometimes containing additional pop-up windows for, e.g., specifying the inputs and outputs. We illustrate the pages of this wizard in Figure 1. Keeping with the typical vernacular of desktop-based software, we call this GUI the tool integration wizard. Since RCE is used in numerous long-running research projects, stability is of paramount importance. 
Hence, the developers pay particular attention to quality assurance. This takes the form not only of automated unit tests, integration tests, and end-to-end tests, but also of manual exploratory GUI tests [24]. The automated tests serve mainly to protect against regressions of RCE’s functionality. In contrast, the manual tests aim to uncover unintuitive and overly complex GUI behavior, which is hard to codify using classical testing setups. Fig. 1: Pages of the tool integration wizard of RCE Manual tests, while effective, require significant labor investment. A typical interactive testing session occupies around five to seven software engineers over the course of one to three weeks, depending on the agreed-upon testing scope. [24] Hence, even partial automation of this process promises to substantially improve the development process of RCE. In the following section, we describe an architecture for a software tool that aims to augment the manual testing process. # 4 Testing RCE with LLMs To test RCE, we have developed GERALLT, a system built around two LLM-based agents. This system is based on previous work by Liu et al. [5] and Gao et al. [21]. GERALLT includes a GUI Parser that converts screenshots and metadata into a structured representation by combining OCR, icon detection, and other vision tools. The aim of one agent is to control RCE, while the aim of the other one is to observe the evolution of the GUI of RCE and to inform the user about observed inconsistencies. After giving an overview of the architecture of GERALLT and its constituent components in Section 4.1, we describe the prompts used for the two LLM-based components in Section 4.2 and in Section 4.3, respectively. # 4.1 System Architecture We present the architecture of GERALLT in Figure 2. The aim of GERALLT is to mimic the behavior of a human tester, i.e., to execute a loosely defined task with RCE and to provoke unintuitive behavior while doing so. 
These loosely defined tasks only define a goal, not concrete steps to achieve it. Moreover, the human tester is given no guidance except for the existing documentation and their personal intuition. We separate the two tasks of controlling RCE and of determining unintuitive behavior into two LLM-based components, which we call the controller and the evaluator, respectively. The controller is responsible for executing meaningful actions on the GUI. The evaluator checks the GUI for issues after each action is performed. On startup, GERALLT takes a task as input and initializes an instance of RCE. In each iteration, GERALLT constructs a prompt for the controller comprised of a) the task originally given to GERALLT, b) a structured description of the current state of the GUI produced by a GUI Parser, c) a list of actions possible on the GUI widgets available in the current state, d) documentation given by RCE, e) a screenshot of the current state of the GUI (if the LLM can interpret images), and f) the actions previously taken by the controller. We describe the construction of this prompt in more detail in Section 4.2. GERALLT gives this prompt to the controller, which outputs a selection of one of the possible actions. Afterwards, GERALLT parses the output of the controller and tries to execute the selected action on the RCE instance. It moreover records the attempted action, as well as whether or not it was successfully executed, in an action log. After the execution was attempted, GERALLT constructs a prompt for the evaluator comprised of a) a screenshot of the GUI before the action was attempted, b) a screenshot of the GUI after the action was attempted, and c) a description of the attempted action. We describe the construction of this prompt in more detail in Section 4.3. The prompt moreover contains a request to determine unintuitive behavior of the GUI. # 4.2 Controller Prompt In this section, we describe the prompt given to the controller agent. 
We showcase an instance of this prompt in Table 1. The overall structure of the prompt follows a classical pattern as described by Peckham et al. [25]. We first provide relevant context about the role of the LLM as well as a concrete description of its task. Afterwards, we give examples of the desired interactions as well as a history of previous user interactions. Finally, we conclude the prompt by concisely reiterating the task. In the first section of the prompt, we provide context about the role of the LLM. For this, we describe that the LLM will receive a concrete task and textual information about the current state of the GUI of RCE and that it is expected to pick the next action to perform on the GUI in order to achieve its stated task. Recall that we aim to have GERALLT simulate the behavior of a new and non-expert user of RCE. Hence, we omit descriptions of the intended use of RCE as well as of its functionality on purpose. Having described the context of GERALLT, we state its concrete task, namely to “act as a GUI-Tester for the software RCE”. We moreover provide the evaluation criterion of covering “as many different GUI-elements as possible”. Again, we consciously omit more concrete definitions of steps to take. This serves to have GERALLT emulate the manual testing process, which aims to explore all possible interactions of GUI elements. Table 1: Composition of the controller prompt. Repetitions omitted and denoted by ellipses. Newlines omitted where possible. Fig. 2: Architecture of GERALLT. The component “Previous Screenshot” holds the screenshot of the GUI taken during the last iteration. It is replaced by an updated screenshot after each iteration. The dashed line in the bottom right denotes that the evaluator only receives the last action performed on the GUI instead of the complete log. Fig. 3: The appearance of the online help in the tool integration of RCE. 
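The controller/evaluator iteration described in Section 4.1 can be sketched as follows. This is a simplified, self-contained illustration with stubbed components; all names here are our own placeholders, not part of the actual GERALLT implementation.

```python
# Hypothetical sketch of GERALLT's main loop: the controller picks an action,
# GERALLT executes it and logs it, and the evaluator judges the before/after
# screenshots. The "LLMs" and the GUI are stubbed out to show the control flow.

def run_gerallt(task, gui, controller_llm, evaluator_llm, max_steps=10):
    action_log = []   # attempted actions and whether they executed successfully
    issues = []       # problems reported by the evaluator
    for _ in range(max_steps):
        state = gui.parse()                      # structured widget description
        before = gui.screenshot()
        # prompt parts a)-f) from Section 4.1, here simply bundled in a dict
        prompt = {"task": task, "state": state,
                  "actions": gui.possible_actions(), "history": list(action_log)}
        action = controller_llm(prompt)          # controller picks the next action
        ok = gui.execute(action)
        action_log.append((action, ok))
        after = gui.screenshot()
        verdict = evaluator_llm(before, after, action)   # evaluator judges effect
        if verdict is not None:
            issues.append(verdict)
    return issues

# --- toy stubs standing in for RCE, the GUI Parser, and the LLM agents ---
class ToyGui:
    def __init__(self): self.clicks = 0
    def parse(self): return {"widgets": [{"id": "next", "type": "button"}]}
    def screenshot(self): return f"screenshot@{self.clicks}"
    def possible_actions(self): return [{"widget": "next", "action": "click"}]
    def execute(self, action): self.clicks += 1; return True

controller = lambda prompt: prompt["actions"][0]
evaluator = lambda before, after, action: (
    "screen did not change" if before == after else None)

issues = run_gerallt("integrate a tool", ToyGui(), controller, evaluator, max_steps=5)
```

With the toy GUI above, every action visibly changes the screen, so the stub evaluator reports no issues; a frozen GUI would be flagged at every step.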
Initial experiments with LLMs showed that the LLM needs more information about RCE than the GUI information alone. This manifested in the controller agent initially failing to proceed through the various stages of the tool integration. This, in turn, was mainly due to the text emitted by the LLM to the GUI not clearing input sanitization, e.g., not providing a well-defined path when requested. To overcome this obstacle, we include relevant parts of the RCE documentation in the prompt. This documentation is included in RCE and also available to human testers. We illustrate the appearance of this online help in the right-hand side of Figure 3. Afterwards, we provide the LLM with a textual description of the current state of the GUI. This description is generated by the GUI Parser. The GUI Parser uses the PyWinAuto Python package [26] to extract the GUI elements at the Operating System (OS) level. This works similarly to the approach of Liu et al. [5], but targets Windows instead of Android. The description takes the form of a JSON representation of the widgets comprising the GUI. As each widget can contain multiple child-widgets, this representation describes a tree of widgets. Each widget is described by its name, its type (e.g., a static text, a combo box, or a text input field), its unique ID, and its position on the screen. Some widget types carry further state, e.g., a checkbox records whether it is checked. This concludes the description of the current state of the system under control. It remains to define the actions available to the LLM. To this end, we define for each type of element a list of possible actions. Moreover, we state that the output of the LLM shall conform to a given JSON schema to simplify subsequent parsing. In addition, we provide a list of previously taken actions to make the evolution of the GUI up to this point explicit. 
This way, we do not have to rely on the context window of the LLM to retain the previously taken actions. Finally, we conclude the prompt with a concrete question about the next action to be executed on the GUI. This concludes the description of the prompt for the controller LLM and the rationale behind its construction. In the next section we describe the prompt for the evaluator LLM. # 4.3 Evaluator Prompt The aim of the evaluator is to “observe” the effect of the controller’s actions on the GUI of RCE and to determine whether any actions prescribed by the controller have an unintuitive consequence on the GUI. For this, the evaluator receives as input a screenshot from before the action and one after the action together with a textual prompt. The prompt again consists of an initial section describing the context and concrete task of the component. Similarly to the documentation of RCE provided to the controller, the task description contains a list of common errors and unintuitive behaviors typically caught by human testers. The prompt then contains instructions on formatting the output as well as the most recent action prescribed by the controller. This description is taken directly from the explanation produced by the controller. Since the prompt for the evaluator follows a similar structure and analogous reasoning to the controller prompt, we omit a detailed description. Instead, we provide an instance of this prompt in Table 2. This concludes the description of the architecture of GERALLT. In the following section, we evaluate GERALLT and discuss its known limitations. # 5 Evaluation and Known Limitations To evaluate GERALLT, we implemented the individual components described in Figure 2.[27] We implemented most components of GERALLT as Python scripts, namely the Output Parser, the Action Executor and the construction of the controller prompt and evaluator prompt. 
For the implementation of the controller agent and evaluator agent, previous work by Rosenbach [1] determined ChatGPT by OpenAI [28] to be the most promising LLM for this use case. We therefore use the GPT-4o model via OpenAI’s API, so that GERALLT runs without human intervention. Since ChatGPT is multimodal, we are able to supply a screenshot of the current state of the GUI to the LLM in addition to the textual prompt, as described in Section 4.2. GPT’s responses are highly variable; however, this variability is beneficial for covering more parts of the tested software application. Moreover, we used PyWinAuto [29] to implement the GUI Parser and the Action Executor. Recall that it was our aim to construct a tool that supports software developers in assuring the quality of RCE. In particular, we aimed to support developers in performing time-intensive exploratory GUI tests. These tests are targeted at finding “unintuitive” or “unnatural” behavior of the GUI of RCE as well as finding functional errors. Hence, there exists no baseline of existing or known errors against which we can evaluate the results of our tool. Instead, we performed a qualitative evaluation of our tool and used the judgment of the software developers as an alternative to an objective ground truth. We illustrate the complete evaluation process in Figure 4. # 5.1 Quantitative Evaluation Results We conducted nine isolated test runs. Each test run consisted of a fresh instance of GERALLT. Each of these instances was given the same initial prompt as described in Section 4.2 and Table 1. Since the LLM-based components of GERALLT are inherently non-deterministic, each test run resulted in different judgments given by the evaluator component. During these test runs, the controller performed 752 actions on RCE. After each action, the evaluator determined whether it deemed the effect these actions had on the GUI problematic, as described in Section 4.3 and in Table 2. 
The output of the evaluator stated a problematic result after 72 actions. We then manually separated the outputs of the evaluator into true and false positives, i.e., into those that correctly describe the current state of the GUI and those that do not. This separation process resulted in seven true positive issues. We moreover determined two pairs of issues to result from the same underlying cause. Thus, the evaluation process results in five unique issues with the tool integration wizard of RCE. The problems range from optical problems to functional errors. We discussed the five issues with the development team of RCE, who confirmed that they agreed with the classification given by our tool. Moreover, the developers agreed that these issues constituted “blind spots” in the existing testing process, i.e., that they did not consciously notice these issues during manual testing.

Table 2: Composition of the evaluator prompt. Newlines omitted where possible.

Fig. 4: The evaluation process for our system (9 test runs, 752 executed actions, 72 positive evaluator outputs, 7 true positives, 5 unique issues).

# 5.2 Example Issue

We show an example problem found by the evaluator in Figure 5. Here, the controller has opened the pop-up window asking the tool integrator to provide the launch settings of the tool to be integrated. These launch settings comprise a) the directory in which the tool is installed, b) the version of the tool, c) the absolute path to the working directory in which the tool shall be executed, and d) the maximal number of instances of the tool that can be executed in parallel.

Fig. 5: Example error, where the evaluator criticized that the error message is too imprecise. (a) Screenshot before the performed action. (b) Screenshot after the performed action.

The controller has moreover entered a path to the tool directory. The validation of the contents of the pop-up window proceeds from top to bottom. Hence, RCE informs the user about a problem with the empty field for the tool version, namely that “The chosen version is not valid. The version must not be empty..[sic]” During the next iteration of GERALLT, the controller inputs the string “1.0” into the field “Version”. Hence, RCE determines the supplied version to be valid. 
Since input validation proceeds with the next field in the form, RCE now informs the user that the (non-existent) path to the working directory is invalid. The evaluator agent determines that this behavior is problematic and returns the explanation “The error message ’Invalid path to working directory’ is visible even though no path was provided, which is inconsistent unless a path was entered.” In other terms, the evaluator criticizes that a non-existent path cannot be invalid. Moreover, it criticizes that in the case of an empty version field, the user was informed about the field being empty (cf. Figure 5a), while they are given no such specific error message in the case of an empty working directory field (cf. Figure 5b). # 5.3 Known Limitations In the previous sections, we have presented the capabilities of GERALLT. We now turn our attention to discussing its limitations, particularly in terms of the system requirements of the implementation, the system under test, and the software qualities tested by GERALLT. The current implementation of our system is limited to testing RCE on Windows. This is due to the choice of PyWinAuto [29] for the implementation of the GUI parser, which only allows automation of the Windows GUI. GNOME- or KDE-based GUIs could be tested using, e.g., Dogtail [30] or Appium [31,32]. This would require some implementation effort, but no change to the overall system architecture presented in this work. In this work, we use GERALLT to test the tool integration wizard of RCE. This requires the user to perform left clicks and text inputs. Other features of RCE require more advanced interactions such as, e.g., dragging and dropping interface elements. Similarly to the previous case, implementing such interactions would require adaptations to the Output Parser as well as the Action Executor. However, no conceptual change of the architecture presented in Figure 2 would be required. 
Finally, GERALLT only aims to find functional errors and GUI behavior that human users would deem unintuitive. In particular, we have not constructed the system to determine, e.g., accessibility or security issues exhibited by the system under test. Detecting such issues is out of the scope of this work. Moreover, in our opinion, such measures of quality are better tested for using rule-based, deterministic approaches.
One important step in software development is testing the finished product with actual users. These tests aim, among other goals, to determine unintuitive behavior of the software as it is presented to the end user. Moreover, they aim to determine inconsistencies in the user-facing interface. They provide valuable feedback for the development of the software, but are time-intensive to conduct. In this work, we present GERALLT, a system that uses Large Language Models (LLMs) to perform exploratory tests of the Graphical User Interface (GUI) of real-life engineering software. GERALLT automatically generates a list of potentially unintuitive and inconsistent parts of the interface. We present the architecture of GERALLT and evaluate it on a real-world use case of the engineering software, which has been extensively tested by developers and users. Our results show that GERALLT is able to identify issues with the interface that support the software development team in future development of the software.
# 1 Introduction

Cyber-physical systems (CPS) are integrated hardware-software systems in which computation and physical processes are deeply intertwined. Ensuring safety [7] in these systems, in particular in safety-critical ones, is of high importance, as failures can have critical consequences. One of the key strategies in safety assurance is to capture the system's properties describing what the system should and should not do under different conditions. Encoding the requirements as formal or semi-formal properties can enable creating safety and security guardrails for system behavior. Formulating properties usually starts from the system requirements, typically written in natural language. Therefore, large language models (LLMs), with their strong potential, can be leveraged to extract properties from existing documentation and software code. These properties can be used to drive subsequent automated testing and verification activities, such as property-based testing [5,3]. Property-based tests (PBTs) are software tests that check that a given property regarding the expected behavior holds for various input scenarios. In this paper, we propose a novel automated and scalable approach for guardrailing CPSs using PBTs generated by LLMs. Our approach benefits from two major established facts in software engineering: 1) the cyber side of CPSs is essentially a software program amenable to existing automated program analysis tools; and 2) advanced LLMs are strong in analyzing programs and extracting their expected properties [20]. Based on these two observations, our proposed approach, called ChekProp, uses LLMs to generate property-based tests for CPSs before their deployment, i.e., at design time. These PBTs can then also be utilized after deployment, i.e., at run time, to detect unsafe behavior of the CPS. ChekProp uses the source code, documentation, and unit tests of the target CPS to extract properties regarding its expected behavior.
ChekProp also generates PBTs that verify that the extracted properties hold for the CPS. We implement a prototype of ChekProp and make it publicly available to the community [4]. We evaluate the relevance of the properties extracted by ChekProp on nine programs: two widely studied CPSs and seven Raspberry Pi programs. ChekProp extracts 25 properties on these nine programs. Our results show that the properties extracted by ChekProp are similar to those carefully created with manual effort, with a recall of $94\%$ and a precision of $72\%$. The high precision and recall of ChekProp show that it can be a reliable tool for automating the manual effort dedicated to property extraction for CPSs. Moreover, we study the quality of the PBTs generated by ChekProp. We find that $47\%$ of the generated PBTs become executable with minor modifications and $85\%$ of them effectively cover various parts of the input space partitions. This suggests that ChekProp generates PBTs that successfully verify the CPS's compliance with the extracted properties. In summary, our main contributions are the following.

– We propose a novel automated approach for generating property-based tests for CPSs using LLMs.
– We implement a prototype of our proposed approach in ChekProp and make it accessible to the community in our open-source repository [4].
– We report the results of our preliminary experiments on the relevance and quality of the PBTs generated by ChekProp in practice.

Listing 1: An example of a Python property-based test that checks that the pow method returns a positive number as the square of an integer. This PBT uses the hypothesis library to generate inputs for the test.

# 2 Background on Property-based Testing

Property-based testing was first introduced in QuickCheck [3].
Given a function under test $f$, the input space of this function $X$, and a property $P$ that checks the behavior of $f$ on a given input, a property-based test validates that $\forall x \in X: P(x, f)$. The property $P$ can be seen as a function that takes an input $x$ and the function $f$ and outputs true or false. The output of $P$ determines if $f$ behaves according to predefined requirements on $x$. In practice, a property-based test (PBT) consists of three components: 1) an input generator gen(), which returns different inputs, like $x$, from the input space $X$; 2) a test body that collects relevant data regarding the behavior of $f$; and 3) a test assertion that uses the data collected by the test body to assert that the property $P$ about $f$ is true for a given input $x$. Take Listing 1 as an example. The PBT in this example (lines #1-5) tests the Python pow method. The main goal of this test is to check that the pow method returns a positive number when it raises an integer to the power of 2. The PBT employs the hypothesis library [11] for input generation (line #2). The given decorator at line #2 uses the st.integers() strategy to generate various random integers and consider them as the input $x$ at line #3. The hypothesis library provides the given decorator, the st.integers() strategy, and many other tools to facilitate property-based testing in Python. After the input is generated with the help of the hypothesis library, the PBT in Listing 1 collects the output of pow and saves it in square at line #4. Finally, it checks the property that the output of pow is positive at line #5. This property-based test delivers exactly what we need: a test that checks that the pow method returns a positive number for various integers raised to the power of 2. To better understand property-based testing, we can compare it with the commonly used example-based unit tests (EBTs), which test a program with fixed arguments [18].
An EBT checks if the function $f$ under test works correctly for a single input $x$. For this, the EBT usually inspects that the output of $f$ for $x$ is exactly the same as the correct output $o$ determined by an oracle. In contrast, a PBT checks that $f$ behaves according to expectations for various inputs from the input space $X$. Listing 1 shows an example of an EBT for the Python pow method as well. The main goal of this test is also to check that the pow method returns a positive number when it raises an integer to the power of 2. The EBT (lines #7-10) tests the pow method by giving it a negative integer, namely -3, as the base, and 2 as the power. Then, it checks that the output of pow is exactly 9 (line #10). If this test passes, it only shows that the pow method returns a positive number as the square of -3 (as one example of an integer). The EBT tests pow only for one single input, and its result might not be generalizable. Also, this type of testing requires an oracle that states the expected output, which is 9 in this case. PBTs are suitable for safety checking the behavior of cyber-physical systems at runtime for two reasons. First, in PBTs, we do not need to know the exact expected behavior of the CPS under test. PBTs only check that the behavior of the CPS meets certain requirements, which can indicate the safety of its behavior. Second, PBTs check the behavior of the CPS on any input from the input space. This enables PBTs to ensure the safety of the CPS under unforeseen situations at runtime. We utilize the test body and test assertion of the PBT to monitor and assure the safety of CPS behavior at runtime. Based on these observations, in this paper, we use LLMs to generate PBTs for cyber-physical systems.

# 3 Proposed Approach

We envisage a novel approach for guardrailing cyber-physical systems with LLM-generated PBTs. Figure 1 illustrates an overview of our proposed approach.
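The code of Listing 1 itself is not reproduced in this text. Based on the description above, the PBT/EBT pair could look like the following sketch (the function names are our assumptions, not necessarily the paper's exact code):

```python
from hypothesis import given, strategies as st

# Property-based test (cf. lines #1-5 of Listing 1): for any integer x,
# its square pow(x, 2) must be "positive" (i.e., non-negative).
@given(st.integers())
def test_square_is_positive(x):
    square = pow(x, 2)
    assert square >= 0

# Example-based test (cf. lines #7-10 of Listing 1): pow is checked
# against an oracle for the single input -3.
def test_square_of_minus_three():
    square = pow(-3, 2)
    assert square == 9
```

Running the PBT makes hypothesis draw many random integers, while the EBT exercises exactly one input and needs the oracle value 9.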
This approach consists of two main phases: the PBT generation phase and the property-based monitoring phase. The PBT generation phase occurs at design time, before the system is deployed in the real world. In this phase, we employ LLMs to generate PBTs that guardrail cyber-physical systems against running in unwanted or unsafe states. We implement our tool ChekProp to carry out the proposed PBT generation, given the documents, code, and unit tests of the cyber-physical system. After the PBTs are generated, we enter the property-based monitoring phase of our approach. This phase occurs at run time, when the system is deployed in the real world. In this phase, the running cyber-physical system is constantly checked against the generated PBTs. Once a violation of a PBT is detected, a warning is raised and the proper safety measures should be taken. In the following, we explain the proposed PBT generation method, which is implemented in ChekProp. ChekProp takes the natural language documents of the CPS, its source code, and unit tests, and prompts an LLM to generate PBTs. Next, it analyzes the generated PBTs to detect issues and iteratively prompts the LLM to improve them. Once the PBTs pass the analysis, they are considered verified PBTs that can be used as guardrails for the CPS. We now discuss each of these ChekProp components in more detail.

Fig. 1: Overview of the proposed two-phase approach. ChekProp particularly focuses on the PBT generation phase.
# 3.1 Inputs of ChekProp

The input to ChekProp comprises natural language documents that specify the CPS and its expected behavior, the CPS source code in Python, and unit tests for this source code. The natural language documents of the CPS describe the expected behavior of the system and the constraints that should be observed at run time. Take Figure 2 as an example of a natural language document that describes a Pneumatic Control System (PCS) [12]. The description first defines the main elements involved in the system, namely, the horizontal and vertical cylinders and their corresponding sensors and controllers. Next, it explains the expected behavior of the system and the expected order of cylinder movements. Finally, it presents the constraints that should be met during the movements. Given the natural language documents of the system, the CPS is implemented to follow the described requirements. Moreover, a set of unit tests is created to test the CPS implementation for specific points in the input space. Note that implementing the CPS and creating unit tests for it can also be fully automated using state-of-the-art LLM-based code generation [8] and test generation [21] techniques. However, ChekProp focuses on PBT generation and assumes that the CPS implementation is provided in Python, along with at least one unit test that demonstrates how the test body should interact with different methods of the program.

Fig. 2: The natural language document that describes a Pneumatic Control System (PCS). We take the original design of PCS from [12] and adapt it to make it suitable for property-based testing.

# 3.2 Initial PBT Generation

ChekProp starts PBT generation by synthesizing an initial prompt. This prompt is used to invoke the LLM for generating an initial batch of PBTs. The initial prompt consists of four main sections and follows the structure presented in Figure 3.
As illustrated in the Figure 3 example, the first section presents the description of the CPS in natural language. LLMs are highly effective in understanding natural language specifications of software and translating those specifications into actual code [8]. Therefore, we provide this section of the prompt to help the LLM better understand the system constraints that should later be translated into PBTs. The second part of the initial prompt contains the Python code for the CPS. In Figure 3, the second section presents a part of the code for the pneumatic control system, namely, the Cylinder class (line #14). Including the system code in the prompt is essential for the LLM to recognize how the system should be called in tests. The third part of the initial prompt provides at least one example unit test for the system. The third part of Figure 3 shows an example of a unit test that calls the system controller. This unit test employs an instance of the MockSystem class (line #58) to mock the physical part of the pneumatic control system and obtain a simple interface to its controller. It also illustrates how the states of the system should be collected during execution and checked later (lines #59-61). Note that, as explained in section 2, there is a significant difference between unit tests and property-based tests. The unit test only checks the behavior of the program for a specific input. For example, in the unit test (Section 3) of Figure 3, specific total_time, cylinder_interval, etc. are used. Also, in unit testing we usually check that the output is exactly what is expected according to an oracle [19]. In contrast, property-based tests check that a more general condition is met by the program behavior over a wide range of inputs. The fourth and final part of Figure 3 instructs the LLM to generate the desired PBTs. At the end of the initial PBT generation step, ChekProp obtains a set of initial PBTs.
ChekProp runs these PBTs and collects their results using an analyzer unit. If a group of generated PBTs fails, ChekProp collects their failure messages and enters its PBT improvement loop step as described in subsection 3.3.

# 3.3 PBT Improvement Loop

In the PBT improvement loop step, ChekProp aims to improve the suite of generated PBTs. For this, ChekProp sends the LLM an improvement prompt, consisting of the failed PBTs and the error messages collected for them. Errors of various types are considered, including syntax errors, compilation errors, exceptions thrown during test executions, and assertion failures. The improvement prompts are sent to the LLM in continuation of the initial prompt, which means that the LLM also has the CPS description, code, and unit test in context. Therefore, the LLM has all the information needed to improve the PBTs. The PBT improvement loop component of ChekProp iteratively sends improvement prompts to the LLM and employs the analyzer unit to run the improved PBTs and collect their results. If the improved PBTs still fail, ChekProp repeats this process until all PBTs are fixed or ChekProp reaches a predefined maximum number of improvement attempts.

# 3.4 Output of ChekProp

In the event of a successful PBT generation, ChekProp outputs the PBTs as Python code. In particular, per our experiments, LLMs always use the hypothesis library [11] to write property-based tests in Python. The tests generally use the same testing framework as the example unit test. For example, the unit test in Figure 3 is run with pytest. The PBTs generated by LLMs for this prompt can also be executed with pytest. Listing 2 shows one of the PBTs that ChekProp generates for the pneumatic control system. As the comment above the test mentions, it checks that the cylinders stay within the location bound. The property is checked by two assertions at lines #17-18, one assertion per cylinder.
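The improvement loop described in subsection 3.3 can be sketched as a small driver. Here `run_tests` stands in for the analyzer unit and `improve_pbts` for the LLM improvement call; both names are hypothetical, not ChekProp's actual interface:

```python
def pbt_improvement_loop(run_tests, improve_pbts, pbts, max_attempts=3):
    """Iteratively re-prompt the LLM with failing PBTs and their errors.

    run_tests(pbts)        -> list of (pbt, error_message) pairs for failures
    improve_pbts(failures) -> revised PBTs produced from the improvement prompt
    """
    for _ in range(max_attempts):
        failures = run_tests(pbts)
        if not failures:
            return pbts  # all PBTs pass: treat them as verified guardrails
        # The improvement prompt carries the failed PBTs together with the
        # collected errors (syntax/compilation errors, exceptions thrown
        # during execution, assertion failures), in continuation of the
        # initial prompt.
        pbts = improve_pbts(failures)
    return pbts  # best effort once the attempt budget is exhausted
```
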
This property is checked for a range of different settings for the cylinders and their controllers, presented at lines #7-10.

Fig. 3: The structure of the initial prompt synthesized by ChekProp, consisting of four sections.

Section 1: Natural Language Description
1. The following is a description of a pneumatic control system.
2.
3. {Natural Language Description}
4.
5. The following code implements this pneumatic control system. You should generate property based tests for this code.
6.

Section 2: Cyber Physical System Code
7. \`\`\`python
8. import math
9. import threading
10. from time import sleep
11.
12. ....
13.
14. class Cylinder:
15.     def __init__(self, sensor: Sensor):
16.         self.motion = 0
17.         self.sensor = sensor
18.         self.just_stopped = False
19.
20.     def trigger_motion(self):
21.         if self.is_at_start():
22.             self.motion = 1
23.         elif self.is_at_end():
24.             self.motion = -1
25.
26.     def move(self):
27.         self.sensor.location = self.sensor.location + self.motion
28.         self.just_stopped = False
29.
30.     def start_working(self, total_time: float, cylinder_interval: float):
31.         for i in range(math.floor(total_time / cylinder_interval)):
32.             if self.motion != 0:
33.                 self.move()
34.                 if self.is_on_border():
35.                     self.motion = 0  # Stop movement
36.                     self.just_stopped = True
37.             sleep(cylinder_interval)
38.
39.     def is_on_border(self):
40.         return self.is_at_end() or self.is_at_start()
41.
42.     def is_at_end(self):
43.         return self.sensor.location == 2
44.
45.     def is_at_start(self):
46.         return self.sensor.location == 0
47.
48. ....
49. \`\`\`
50.
51.
52. The following is a unit test for this system class:
53.

Section 3: Unit Test for the CPS Code
54. \`\`\`python
55. from examples.gpiozero.apps.pcs.src.pcs import MockSystem
56.
57. def test_starting_motion():
58.     mock_system = MockSystem(total_time=1, cylinder_interval=1, controller_interval=1, mock_interval=1)
59.     collected_states = mock_system.execute_scenario()
60.     assert collected_states[0].cylinder_a_motion == 0
61.     assert collected_states[0].cylinder_b_motion == 1
62. \`\`\`
63.

Section 4: Instruction for PBT Generation
64. Generate property based tests for this system following the steps below:
    1. Based on the given description and code, extract the properties of the system.
    2. Use the unit tests to understand the behavior and interface of the code.
    3. Based on the extracted properties and your understanding of the code, use the hypothesis library to generate property based tests.

Note the main difference between this PBT and the unit test in Figure 3: the unit test checks that a specific output for a given input is exactly correct, while the PBT verifies that a general property holds for all inputs within a specified range. This makes PBTs more general and appropriate for checking that the system does not show unsafe behavior in unforeseen situations.

# 3.5 Implementation

ChekProp uses the gemini-2.0-flash-lite-preview-02-05 model in its current version, but adopts a flexible design that allows an easy switch to other LLMs. ChekProp invokes the LLM with a sample size of one and a temperature of zero, which means that it receives only the top response per LLM invocation. In the current version of ChekProp, the PBT improvement loop is disabled, and we assess the LLM's ability to generate PBTs at the initial attempt.

# 3.6 Property-based Monitoring

While ChekProp is focused on PBT generation (phase 1 in Figure 1), property-based monitoring (phase 2 in Figure 1) is also an integral part of our approach to guardrailing CPSs with PBTs.
In the monitoring phase of our proposed approach, various components of the generated PBTs are used to collect relevant data and verify the properties at runtime. For example, the generated PBT in Listing 2 tests that the cylinders stay in the correct location range for given inputs. This test shows that we should collect state values (line #16) and then assert that their cylinder_a_loc attribute is in the range [0, 2] (line #17) to ensure the property holds for cylinder A. In the monitoring phase, we can use the same data collection and property assertion techniques to check that the cylinders do not enter unsafe locations at runtime. This example demonstrates how the property extracted by ChekProp and the implementation of its generated PBTs are useful, relevant, and vital for guardrailing CPSs. More generally, as explained in section 2 and as seen in Listing 2, the PBTs generated in the first phase consist of three components: an input generator, a test body, and a test assertion. In the property-based monitoring phase, the PBT input generator is no longer needed, as the inputs are generated by the cyber part (in the form of control commands) and the physical system (via sensor values). The test body is replaced by the monitor that sits between the controller and the physical system, collecting relevant data. The properties derived at design time are transformed into guards that are checked at runtime. The monitor verifies that these guards hold in the current state of the system. Based on the situation, if a violation is detected, the monitor can intercept and block the command being sent to the physical system. In this way, the PBTs generated by ChekProp serve as runtime guardrails for CPSs.
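As a concrete illustration of this transformation, the location-bound property of Listing 2 could be recast as a runtime guard roughly as follows; the class and state-field names are assumptions based on the text, not ChekProp output:

```python
class LocationGuard:
    """Runtime guard derived from the PBT assertion that cylinder
    locations must stay within the bound [0, 2] (cf. Listing 2)."""

    def __init__(self, low=0, high=2):
        self.low, self.high = low, high
        self.violations = []  # states in which the guard was violated

    def check(self, state):
        """Runtime counterpart of the PBT's test body and assertion."""
        ok = (self.low <= state["cylinder_a_loc"] <= self.high
              and self.low <= state["cylinder_b_loc"] <= self.high)
        if not ok:
            self.violations.append(state)
        return ok

    def forward(self, command, state, send):
        """Forward the controller command only if the guard holds;
        otherwise intercept and block it."""
        if self.check(state):
            send(command)
            return True
        return False
```

The input generator of the PBT disappears: `state` comes from the sensors and `command` from the controller, while the assertion lives on in `check`.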
# 4 Experiments

# 4.1 Research Questions

We conduct preliminary experiments to study the quality of the properties extracted by ChekProp and the quality of its generated property-based tests, according to the following research questions:

– RQ1 (Property relevance): Does the proposed approach extract relevant properties? We assess the quality of properties extracted by our approach on a dataset of Python programs for cyber-physical systems. Our dataset consists of two cyber-physical systems that are extensively studied in the literature, as well as seven Raspberry Pi programs. We compare the automatically extracted properties with manually crafted properties to judge their relevance.
– RQ2 (PBT quality): What is the quality of ChekProp generated PBTs for real-world CPSs? We use ChekProp to generate PBTs for the Python CPS programs in our dataset. We assess the quality of the generated PBTs from two aspects. First, we check if the generated PBTs can be executed with minimal manual modification (executability). Second, we evaluate the extent of the various input space partitions on which the generated PBTs execute the program (effectiveness in terms of coverage of the input space partitions).

# 4.2 Dataset

As the current version of ChekProp supports PBT generation for Python programs, we curate a dataset of Python CPS programs for our experiments. This dataset consists of the nine programs presented in Table 1. These programs are taken from three main sources as follows.

Table 1: The cyber-physical system Python programs considered in our dataset.

First, we include the Python version of two CPSs that are widely studied in the model checking literature [12]: a temperature control system (TCS) and a pneumatic control system (PCS), presented with the IDs P1 and P2 in Table 1. Moradi et al.
[12] use the model checking tool of Rebeca, Afra [16], to detect potential attacks against these systems. For this, they define correctness properties for each system and assess if Afra can find counterexamples for these properties on models augmented with malicious behavior. The manually defined properties in [12] set a ground truth with which we can compare the properties automatically extracted by ChekProp. We carefully implement TCS and PCS in Python to make them amenable to PBT generation by ChekProp. Listing 3 shows a summary of our implementation of the TCS. There is a class for each component (i.e., Rebeca actor) of the system, namely, TempSensor for the sensor, HCUnit for the HC unit, and Controller for the controller. Each component runs on a separate thread and updates its status periodically. For example, the sensor fetches the current temperature every sensor_interval seconds (see line #17). We also provide a MockRoom class that simulates the room environment and enables us to execute the system with different configurations, such as initial_temp and sensor_interval (see lines #50 and #58). This implementation is suitable for testing the temperature control system. The second set of programs in our dataset are the six programs (P3-P8 in Table 1) taken from open-source Raspberry Pi projects [15] that use the gpiozero library [17]. gpiozero is a Python library that real-world cyber-physical systems employ to connect to Raspberry Pi boards. We take the source of these programs from the official Raspberry Pi website [15] and manually add unit tests for them. These unit tests can be used in ChekProp prompts (see subsection 3.1). The six programs in our dataset that use gpiozero enable us to evaluate ChekProp's applicability to projects that adopt the widely used Raspberry Pi boards. The last program considered in our dataset (P9 in Table 1) is InputDevice, a core class from the gpiozero library.
This class provides an interface for Python programs to interact with Raspberry Pi input devices, such as barometers, temperature sensors, etc. The InputDevice class is a complex class from gpiozero that connects to various components in this library. Testing this class requires a detailed understanding of the inner workings of the library and how it represents and handles the physical environment. For example, to instantiate an object of the InputDevice class, we have to pass the number of a pin to its constructor method. The type of this pin should be "input", while some of the pins on a Raspberry Pi board are reserved only for "output". A correct test should use a pin of the correct type to instantiate an InputDevice. Another example of such details in gpiozero is how pins are activated. To activate a pin on a Raspberry Pi board, its voltage should go high, which can happen by calling the pin.drive_high() method. Writing a test for a CPS program requires an accurate understanding of how interacting with the program, such as calling pin.drive_high(), impacts the physical status of the system, such as increasing the voltage on the Raspberry Pi board. To evaluate whether ChekProp generated PBTs capture such details about CPS programs, we include the InputDevice class in our dataset.

Listing 3: Implementation of the temperature control system in Python.

1 class Environment:
2     def __init__(self, initial_temp: int = None):
3         self.temp = initial_temp if initial_temp is not None else random.randint(20, 24)
4
5     def fetch_temp(self):
6         return self.temp
7     ....
8
9 class TempSensor:
10     def __init__(self, env: Environment):
11         self.env = env
12         self.temp = self.env.fetch_temp()
13
14     def start_temp_collection(self, total_time: float, sensor_interval: float):
15         for i in range(math.floor(total_time / sensor_interval)):
16             self.temp = self.env.fetch_temp()
17             sleep(sensor_interval)
18
19 class PWMOutputDevice:
20     .....
21
22 class HCUnit:
23     def __init__(self):
24         self.cooler = PWMOutputDevice()
25         self.heater = PWMOutputDevice()
26
27     def activate_cooler(self):
28         self.cooler.on()
29         self.heater.off()
30     ...
31
32 class Controller:
33     def __init__(self, temp_sensor: TempSensor, hc_unit: HCUnit):
34         self.temp_sensor = temp_sensor
35         self.hc_unit = hc_unit
36
37     def control(self, total_time: float, control_interval: float):
38         for i in range(math.floor(total_time / control_interval)):
39             temperature = self.temp_sensor.temp
40             if 21 <= temperature <= 23:
41                 self.hc_unit.deactivate()
42             ...
43
44 class SystemState:
45     def __init__(self, temp, cooler_state, heater_state, outside_air_temp):
46         self.temp = temp
47         ...
48
49 class MockRoom:
50     def __init__(self, total_time: float, sensor_interval: float, control_interval: float, initial_temp: int = None):
51         self.env = Environment(initial_temp=initial_temp)
52         self.total_time = total_time
53         self.sensor_interval = sensor_interval
54         self.control_interval = control_interval
55         self.temp_sensor = TempSensor(self.env)
56         ...
57
58     def execute_scenario(self):
59         ...
60         sensor_thread = threading.Thread(target=self.temp_sensor.start_temp_collection,
61                                          args=(self.total_time, self.sensor_interval))
62         sensor_thread.start()
63         ...
64
65         collected_states = []
66         for i in range(self.total_time):
67             ...
68             outside_air_temp = self.env.get_outside_air_temp()
69             collected_states.append(SystemState(cur_temp, ...))
70             self.env.set_temp(cur_temp + outside_air_temp + heater_value - cooler_value)
71             sleep(1)
72
73         sensor_thread.join()
74         control_thread.join()
75
76         return collected_states

Listing 4: The InputDevice class in gpiozero.

1 class InputDevice(GPIODevice):
2     """
3     Represents a generic GPIO input device.
4
5     This class extends :class:`GPIODevice` to add facilities common to GPIO
6     input devices. The constructor adds the optional *pull_up* parameter to
7     specify how the pin should be pulled by the internal resistors. The
8     :attr:`is_active` property is adjusted accordingly so that :data:`True`
9     still means active regardless of the *pull_up* setting.
10
11     :type pin: int or str
12     :param pin:
13         The GPIO pin that the device is connected to. See :ref:`pin-numbering`
14         for valid pin numbers. If this is :data:`None` a :exc:`GPIODeviceError`
15         will be raised.
16
17     :type pull_up: bool or None
18     :param pull_up:
19         If :data:`True`, the pin will be pulled high with an internal resistor.
20         If :data:`False` (the default), the pin will be pulled low. If
21         :data:`None`, the pin will be floating. As gpiozero cannot
22         automatically guess the active state when not pulling the pin, the
23         *active_state* parameter must be passed.
24
25     :type active_state: bool or None
26     :param active_state:
27         If :data:`True`, when the hardware pin state is ``HIGH``, the software
28         pin is ``HIGH``. If :data:`False`, the input polarity is reversed: when
29         the hardware pin state is ``HIGH``, the software pin state is ``LOW``.
30         Use this parameter to set the active state of the underlying pin when
31         configuring it as not pulled (when *pull_up* is :data:`None`). When
32         *pull_up* is :data:`True` or :data:`False`, the active state is
33         automatically set to the proper value.
34
35     :type pin_factory: Factory or None
36     :param pin_factory:
37         See :doc:`api_pins` for more information (this is an advanced feature
38         which most users can ignore).
39     """
40     def __init__(self, pin=None, *, pull_up=False, active_state=None,
41                  pin_factory=None):
42         super().__init__(pin, pin_factory=pin_factory)
43         try:
44             self.pin.function = 'input'
45             pull = {None: 'floating', True: 'up', False: 'down'}[pull_up]
46             if self.pin.pull != pull:
47                 self.pin.pull = pull
48         except:
49             self.close()
50             raise
51
52         if pull_up is None:
53             if active_state is None:
54                 raise PinInvalidState(
55                     f'Pin {self.pin.info.name} is defined as floating, but '
56                     f'"active_state" is not defined')
57             self._active_state = bool(active_state)
58         else:
59             if active_state is not None:
60                 raise PinInvalidState(
61                     f'Pin {self.pin.info.name} is not floating, but '
62                     f'"active_state" is not None')
63             self._active_state = False if pull_up else True
64         self._inactive_state = not self._active_state
65
66     @property
67     def pull_up(self):
68         """
69         If :data:`True`, the device uses a pull-up resistor to set the GPIO pin
70         "high" by default.
71         """
72         pull = self.pin.pull
73         if pull == 'floating':
74             return None
75         else:
76             return pull == 'up'
Listing 4 presents the InputDevice class. This class, similar to other gpiozero classes, has well-written documentation (the docstring shown in Listing 4). We consider this documentation as the natural language description in ChekProp prompts (see subsection 3.1). Moreover, gpiozero has extensive unit tests for its classes, which we use to produce our initial prompt. This confirms that gpiozero classes have the essential components for applying ChekProp: the natural language description, the source code, and the unit test (see subsection 3.1). Overall, our dataset contains a combination of CPSs studied in the research literature and CPS programs that employ widely used libraries. This dataset helps us assess the relevance of ChekProp extracted properties and the quality of its generated PBTs.

# 4.3 RQ1: Property Relevance

I. Methodology: To assess the quality of extracted properties, we run our approach on the nine programs in our dataset and analyze the relevance of the extracted properties. In this experiment, we abstract away the implementation details of the generated PBTs; instead, we focus on how well the properties considered in the PBTs validate the logic of the program under test. For this purpose, we compare properties extracted by our approach with manually crafted properties that we consider as ground-truth. In particular, we evaluate whether the logic checked by ground-truth properties is also validated by ChekProp extracted properties and vice versa. As explained in subsection 4.2, for programs P1 and P2, the ground-truth properties are already stated by Moradi et al. [12]. For the remaining programs (P3-P9), we manually define the ground-truth properties. Take the temperature control system (TCS) as an example. Moradi et al. outline three properties for TCS as follows:

1. If the room is warm (temp > 23), the HC unit should not be heating the room.
2. If the room is cold (temp < 21), the HC unit should not be cooling the room.
3.
The temperature should never be too low (temp < 20) or too high (temp > 24).

We first apply our proposed approach on our Python implementation of TCS to generate PBTs. Next, we compare the extracted properties that are tested in these PBTs with the three ground-truth properties in [12]. If the ChekProp properties correspond with the three ground-truth properties, we conclude that the proposed approach is able to extract useful properties.

II. Results: Table 2 shows the results of this experiment. In total, the table contains 26 properties. We split these properties into four groups: Group1 consists of 15 properties that are present among ground-truth and ChekProp extracted properties in the exact same form (Pr3, Pr5, Pr7, Pr9, Pr10, Pr11, Pr12, Pr14, Pr15, Pr17, Pr20, Pr21, Pr23, Pr24, and Pr25); Group2 consists of 3 properties that are present among ground-truth and ChekProp extracted properties in equivalent but slightly different forms (Pr1, Pr2, and Pr6); Group3 consists of 7 properties that are only among the ChekProp extracted properties (Pr4, Pr8, Pr13, Pr16, Pr19, Pr22, and Pr26); and Group4 consists of 1 property that is only among the ground-truth properties (Pr18). In total, the ground-truth contains 19 properties (Group1+Group2+Group4) and ChekProp extracts 25 properties (Group1+Group2+Group3). Among all properties, 18 are common between ground-truth and ChekProp, either in the exact same form (Group1) or in equivalent but different formulations (Group2). These properties are relevant, since they are present among the manually crafted properties. Therefore, the recall of ChekProp is 94% (18/19), which indicates that our approach can replace the manual effort required for extracting most of the properties from CPSs. The precision of ChekProp is 72% (18/25), suggesting that the properties extracted by ChekProp often represent what humans expect from the CPS under test.
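The first two ground-truth TCS properties above can be checked mechanically over randomly drawn temperatures. The following dependency-free sketch illustrates the property-based testing idea; `ToyTCS` is a hypothetical stand-in for our Python TCS implementation, and the actual PBTs in our experiments use the Hypothesis library instead of a hand-rolled random loop:

```python
import random

class ToyTCS:
    """Hypothetical minimal TCS model (illustration only; the real
    implementation in our dataset is more involved)."""
    def __init__(self, temp):
        self.temp = temp
        self.heating = temp < 21   # heat only when the room is cold
        self.cooling = temp > 23   # cool only when the room is warm

def check_tcs_properties(trials=1000, seed=0):
    """Property-based check of ground-truth properties 1 and 2."""
    rng = random.Random(seed)
    for _ in range(trials):
        tcs = ToyTCS(rng.uniform(15.0, 30.0))
        if tcs.temp > 23:          # property 1: warm room is never heated
            assert not tcs.heating
        if tcs.temp < 21:          # property 2: cold room is never cooled
            assert not tcs.cooling
    return True
```

Here the input generator is a uniform draw over temperatures; a Hypothesis-based PBT additionally shrinks failing inputs, which this sketch omits.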
The high recall and precision of ChekProp make it a promising tool for automating the manual effort dedicated to property extraction for CPSs.

Table 2: Comparison between properties automatically extracted by our proposed approach and ground-truth properties that are manually crafted.

In Table 2, we see that in three cases (Pr4, Pr8 and Pr26) ChekProp extracts a property that is relevant and useful, but neglected in the manually crafted properties. For example, our approach extracts Pr4 for TCS, which notes that neither the heater nor the cooler should be active when the room temperature is between 21°C and 23°C. This indicates that our automated approach can not only replace manual property design work but even improve the manually crafted properties. The three relevant properties neglected in the ground-truth, together with the 18 properties common between the ground-truth and ChekProp, make up the set of our 21 relevant properties. As presented in Table 2, four of the properties extracted by ChekProp (Pr13, Pr16, Pr19, and Pr22) are not useful. These properties either check a state that does not occur in the real world (Pr13) or validate highly detailed implementation nuances. This observation shows that a manual check on properties automatically extracted by ChekProp is needed to ensure that no useless property is considered for testing. Finally, there is only one ground-truth property (Pr18) that does not correspond to any of the properties extracted by ChekProp. Pr18 is a property for the quick reaction game and specifies the order of changes in the light status: the light should first be turned on at some point and then turned off at some point. With a careful manual analysis, we find that the code we provide to the LLM for the quick reaction game lacks documentation regarding this point. This highlights the importance of the natural language description as one of the core components in ChekProp prompts (see subsection 3.1).
# Answer to RQ1: Does the proposed approach extract relevant properties?

We compare the manually crafted relevant properties with ChekProp extracted properties for nine programs in our dataset. This comparison shows that 94% (18/19) of the ground-truth properties are also automatically extracted by ChekProp. Moreover, ChekProp extracts three additional relevant properties that are neglected in the manually crafted properties. This indicates that ChekProp is a reliable tool for automating the tedious and complicated task of defining CPS properties.

# 4.4 RQ2: PBT Quality

I. Methodology: To evaluate the quality of PBTs generated by ChekProp, we examine the PBTs that test the 21 relevant properties identified by our analysis in the RQ1 experiment (see subsection 4.3). As explained in subsection 4.1, we assess the applicability of our approach from two aspects: executability and effectiveness. We consider a PBT executable if and only if it is correct both syntactically (i.e., it successfully compiles) and semantically (i.e., it passes). To investigate the executability of a PBT, we check to what extent the PBT must be manually modified to reach syntactic and semantic correctness. A lower level of manual modification indicates higher executability and vice versa. We perform a manual analysis to determine the executability level of the PBTs. Based on this analysis, we assign the PBTs generated for each program to one of the following executability levels: “HIGH”, “MED”, and “LOW”. A “HIGH” executability level means that the analyzer has to spend less than one minute manually fixing the PBT to ensure it runs and passes successfully. “LOW” means that more than three minutes of manual work is needed, and “MED” means that between one and three minutes is required. In this investigation, we also take note of the main challenges that require manual modification for the PBTs generated for each program. The results reveal potential opportunities for future improvement of ChekProp.
To assess the effectiveness of the generated PBTs, we study whether each PBT checks its property over representatives of all or most partitions of the input space. As explained in section 2, one of the main components of a PBT is an input generator that produces various inputs from the input space. In this experiment, we determine to what extent the input generators of the generated PBTs produce inputs from all partitions of the input space. The more partitions of the input space a PBT covers, the more effective it is. We assess the effectiveness of the PBTs generated for each program through a manual analysis and assign them to one of three effectiveness levels: “HIGH”, “MED”, and “LOW”.

II. Results: Table 3 summarizes the results of our experiment on ChekProp applicability. The “Property_ID” and “Program” columns indicate the property and the program that the PBT is testing. Note that the ID of the property is taken from Table 2, which lists the properties extracted by ChekProp. The “Executability” column shows the result of our assessment of the executability of the generated PBTs in terms of their syntactic and semantic correctness. The fourth column presents the main executability challenge of the generated PBTs that must be addressed manually. Finally, the last column presents the level of effectiveness of the PBTs generated for each program.

Table 3: The quality of PBTs generated by ChekProp for the 21 relevant properties. A "HIGH", "MED", or "LOW" level in the "Executability" column indicates the PBT can be successfully executed with less than one minute, between one and three minutes, or more than three minutes of manual modification effort, respectively. The "Effectiveness" column indicates the level of input space partitions covered by the PBT.
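To make the effectiveness criterion concrete, the helper below reports which partitions of a one-dimensional input space a PBT's generated inputs actually hit. The temperature thresholds are hypothetical, borrowed from the TCS example; the actual effectiveness assessment in our experiments is manual:

```python
def covered_partitions(inputs, low=21.0, high=23.0):
    """Report which illustrative partitions of a temperature input space
    (below `low`, between the bounds, above `high`) the inputs hit."""
    hit = set()
    for t in inputs:
        if t < low:
            hit.add("cold")
        elif t > high:
            hit.add("warm")
        else:
            hit.add("comfortable")
    return hit
```

A generator whose inputs cover all three partitions would rate as highly effective under this view, while one that only ever produces comfortable-range temperatures would not.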
For 47% (10/21) of the relevant properties (Pr1, Pr2, Pr3, Pr4, Pr5, Pr6, Pr7, Pr8, Pr9, and Pr17), the generated PBTs are executable with less than one minute of manual work. In fact, no major executability issues are detected for any of these PBTs except the one for Pr17. These PBTs successfully execute and pass with little to no manual modification. For the Pr17 PBT, the problem is that the generated PBT runs the program for too many inputs, leading to a timeout. A developer who knows the logic of the Pr17 property of the Remote Buggy program can fix the generated PBT simply by modifying the number of random inputs that should be considered. Given the complexity of predicting the time needed for running a test on a CPS, this case shows the importance and positive impact of keeping a human in the loop of LLM-based PBT generation. In sum, our analysis of the executability of the PBTs generated for these ten properties shows that for a remarkable number of relevant CPS properties, ChekProp generates a PBT that is executable with minor manual modifications.

For seven of the properties (Pr14, Pr15, Pr20, Pr21, Pr23, Pr24, and Pr26), the only main challenge to the executability of the generated PBTs lies in their mocking of the CPS. This challenge occurs because the mocking method employed conflicts with property-based testing of the CPS under test. In particular, every time the test is executed for a specific input, all the pins used in the mock of the CPS should be initialized from scratch. However, the mocking method used in these PBTs initializes the mock object only once for all inputs considered in the PBT. This leads to a semantic problem with the logic of the CPS under test, as well as a syntactic error in using the hypothesis library.
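The failure mode can be seen in a small sketch: re-creating the mock inside the test loop gives every generated input a fresh pin, whereas initializing one shared mock for all inputs (the pattern in the failing PBTs) leaks state between test cases. `MockPin`, `run_property`, and the property function are hypothetical simplifications, not our actual test harness:

```python
class MockPin:
    """Hypothetical stand-in for a mocked GPIO pin."""
    def __init__(self):
        self.state = "low"

    def drive_high(self):
        self.state = "high"

def run_property(inputs, prop):
    """Run a property over generated inputs, re-creating the mock each
    time, so no state carries over from one test case to the next."""
    for value in inputs:
        pin = MockPin()  # fresh mock per generated input
        prop(pin, value)

def prop_drive_high_activates(pin, value):
    assert pin.state == "low"  # would fail for a shared, stale mock
    if value:
        pin.drive_high()
        assert pin.state == "high"
```

With a single shared `MockPin`, the second iteration would start from a pin already driven high and the precondition assertion would fail, which mirrors the semantic conflict described above.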
Consequently, these seven PBTs require a medium level of manual modification to become executable. We note that mocking CPSs is both tricky and essential for testing. As CPSs are supposed to run in a physical environment, we need to mock how the environment affects CPS programs. This can require a detailed understanding of the relationship between the various components of the given CPS programs. Previous work shows that such domain knowledge can be effectively provided to LLMs by in-context learning, i.e., adding relevant examples to the prompt [6]. Based on this observation, we suggest that practitioners use a few-shot prompt with mocking examples to generate PBTs for cyber-physical systems with LLMs.

Finally, one of the main issues with the generated PBTs for four properties (Pr10, Pr11, Pr12, and Pr25) is how they use gpiozero. For example, the PBT generated for Pr25 uses output pins of the Raspberry Pi board to initialize its InputDevice object. With further analysis, we realize that fixing the issues in this PBT depends on a deep understanding of multiple gpiozero classes. However, the current version of ChekProp only includes the documentation of the InputDevice class in the prompt. This documentation is taken from the comments presented in Listing 4. To fix the issue with this PBT, the LLM also needs the documentation for other classes, such as PiGPIOFactory. We conclude that a strong LLM-based PBT generation tool for CPS programs requires augmenting prompts with all relevant information from the program documents.

Our analysis of the effectiveness of the generated PBTs shows that the PBTs generated for 85% (18/21) of the properties are highly effective; these PBTs test the property on most partitions of the input space. On closer inspection, we observe that the generated tests tend to be more effective when they employ a straightforward and flexible mock of the CPS components.
For example, as shown in Listing 3, our Python implementation of TCS provides a MockRoom class. This class enables a tester to run the program with many different inputs by changing only a few parameters regarding the starting temperature of the room and the timing of updating various components. Using this mock class, the generated PBTs for TCS properties run the program with different configurations that represent all partitions of the input space. This experiment thus reaffirms the importance of using flexible mocks with a straightforward API for testing CPSs.

# Answer to RQ2: What is the quality of ChekProp generated PBTs for real-world CPSs?

We assess the applicability of the PBTs generated by ChekProp for the 21 relevant properties in our dataset from two aspects: executability and effectiveness. Our results reveal that a remarkable number of the generated PBTs are highly executable (47%) and highly effective (85%), which indicates the applicability of ChekProp. Our analysis also leads to two major suggestions for generating high-quality PBTs for CPSs with LLMs. First, the LLM should be aided in employing straightforward and flexible mocking by providing well-designed few-shot examples in the prompt. Second, it is important to include the relevant documentation from all parts of the CPS in the prompt. These two techniques make our proposed approach even more robust and practical.

# 5 Related Work

The application of LLMs to testing CPSs is at an early stage, and many of the efforts have focused on scenario generation for autonomous driving and robotics. For instance, OmniTester [10] uses an LLM (GPT-4) to generate diverse driving scenarios from natural language descriptions and proposes test road layouts and events. It also incorporates retrieval-augmented generation (RAG) and iterative self-improvement to refine scenarios. Petrovic et al. [13] similarly incorporate LLMs into an autonomous vehicle testing pipeline.
Their approach provides the LLM with a formal environment model (a metamodel of roads, vehicles, pedestrians, etc.) and standardized requirements as context. The LLM is prompted to produce a concrete test scenario (in a JSON format executable in the CARLA simulator) that satisfies the given requirements. They also use the LLM to translate natural-language requirements into Object Constraint Language (OCL) rules that formalize expected environmental and safety properties. The OCL properties are then checked against the generated test scenario and, if required, feedback is sent to the LLM for correction before the execution of the test scenario. Beyond the automotive domain, related work is emerging in robotics. For example, Wang et al. [22] show that GPT-4 can automatically generate robotic simulation tasks (including environment configurations and goals). They mainly address test scenario generation (test environments and test inputs) rather than directly inferring formal properties or invariants from system specifications, and they show that LLMs can handle the environmental context of CPS testing when guided by domain models.

In the broader software systems context, LLMs have been utilized for automated test case generation from various sources of specification. Many of these approaches target conventional software systems (with neither ML nor physical components) and have shown promising results in automating unit test creation. Kang et al. [9] present LIBRO, a framework that uses an LLM to generate JUnit tests from bug reports. The goal is to reproduce reported defects automatically, as conventional test generators generally struggle with understanding the semantic intent of a bug report. LIBRO's evaluation on the Defects4J benchmark found that it can produce failing tests for about 33% of bugs, demonstrating that an LLM can interpret natural-language bug descriptions and translate them into fault-revealing code.
Another line of work explores using LLMs to generate tests from requirement documents or user stories. Rahman and Zhu [14] leverage GPT-4 to produce test-case specifications (in JSON) directly from high-level requirements, intending to bridge the gap between specifications and executable tests. Some approaches also utilize LLMs within an interactive test generation process. Chen et al. [2] introduce ChatUniTest, an LLM-based unit test generation framework. In their approach, the LLM (Code Llama) drafts a Java unit test; the test is executed to see if it passes or exercises the intended code; any errors or unsatisfied goals are fed back for the LLM to repair and refine the test. Alshahwan et al. [1] report using LLMs to extend and improve existing test suites in an industrial setting, focusing on corner-case inputs that developers missed. Their tool generates additional unit tests to increase coverage of tricky edge conditions. Overall, surveys of the field, e.g., Wang et al. [21], conclude that LLMs show strong potential in automated testing by reducing the manual effort to write test cases, mainly in code-centric contexts such as unit testing.

Regarding property-based testing with LLMs, work on applying LLMs to the generation of PBTs has recently emerged. The most relevant work is by Vikram et al. (2024), who investigate whether LLMs write good PBTs [20]. They investigate using GPT-4 and other models to automatically generate PBT code (using the Hypothesis framework in Python) from API documentation. In their setup, the LLM is given the natural-language documentation of a library function and prompted to generate a property-based test. The generated test produces appropriate random inputs and asserts the documented properties on the outputs.
They evaluate the validity (the test must run without errors), soundness (the test assertions should hold for correct implementations and fail for buggy ones), and property coverage (how many distinct expected properties are captured by the test) of the generated tests.

CPS challenges and our contribution: The works mentioned above establish a foundation for LLM-driven property-based test generation. In prior studies such as Vikram et al.'s [20], the system under test is a software API with no connected physical components, and the LLM does not need to reason about sensors, actuators, or continuous dynamics. In a cyber-physical system, by contrast, properties often relate to the interaction between software and physical components, which is more complex to formalize and test. Environmental mocking becomes a necessity: a model or simulation of the physical environment is required to represent the real world. Recent CPS testing approaches with LLMs (e.g., for autonomous driving [13]) address this by restricting the LLM to a domain metamodel and a structured output format for a simulator. This helps ensure some basic physical realism in generated scenarios, but it does not guarantee that all relevant properties can be identified or verified. Our approach supports testing as well as runtime property-based monitoring: the LLM is used to derive property assertions that can also run alongside the deployed system to check for violations at run time. Our approach extends the frontier by applying LLM-driven property-based test generation to CPSs, in which both the inference of properties from code documentation and the execution of the corresponding tests must account for the physical behavior of the CPS programs under test.
Cyber-physical systems (CPSs) are complex systems that integrate physical, computational, and communication subsystems. The heterogeneous nature of these systems makes their safety assurance challenging. In this paper, we propose a novel automated approach for guardrailing cyber-physical systems using property-based tests (PBTs) generated by Large Language Models (LLMs). Our approach employs an LLM to extract properties from the code and documentation of CPSs. Next, we use the LLM to generate PBTs that verify the extracted properties on the CPS. The generated PBTs have two uses. First, they are used to test the CPS before it is deployed, i.e., at design time. Secondly, these PBTs can be used after deployment, i.e., at run time, to monitor the behavior of the system and guardrail it against unsafe states. We implement our approach in ChekProp and conduct preliminary experiments to evaluate the generated PBTs in terms of their relevance (how well they match manually crafted properties), executability (how many run with minimal manual modification), and effectiveness (coverage of the input space partitions). The results of our experiments and evaluation demonstrate a promising path forward for creating guardrails for CPSs using LLM-generated property-based tests.
# 1 INTRODUCTION

Group aggregation, represented in SQL via GROUP BY, is a fundamental operation in analytical query processing, especially in decision-support workloads [22]. To ensure that database systems continue to scale well on new many-core architectures, it is critical to build highly concurrent group aggregation schemes. Analytic database systems today are quite diverse. For example, DataFusion [15] follows a Volcano-style [9] block iteration approach (i.e., “pull”), whereas DuckDB [25] follows a HyPer-inspired morsel-driven parallelism [16] approach (i.e., “push”). Despite drastic differences in their execution models, nearly all of today's analytic database systems use partitioning techniques to parallelize group aggregations. To the best of our knowledge, the partitioning approach proposed by Leis et al. [16], which combines local preaggregation with partitioning (we provide background on partitioning-based techniques in Section 2.2), has become dominant in many modern analytic systems [14, 15].

An alternative to partition-based approaches is to use a global concurrent hash table. Instead of partitioning keys into groups, each worker can concurrently access a global hash table. In theory, such a hash table has many operational benefits, such as lower memory usage, reduced impact of skew, and simpler implementations. In practice, despite several improvements to general-purpose concurrent hash tables [18, 20, 23], contention effects and synchronization overhead represent significant scalability barriers. But is it really surprising that general-purpose concurrent hash tables perform worse than purpose-built solutions like partitioned group aggregation? After all, general-purpose hash tables must support a myriad of operations that are irrelevant to group aggregation, such as deletes and shrinking.
General-purpose hash tables must also be optimized for a wide range of workloads, where deletes, inserts, and lookups might come from different threads, in different distributions, and at different times. A concurrent hash table optimized for group aggregation can sidestep most of this complexity: the only required operation is the aggregation of a new value, and it is reasonable to assume that every thread will invoke this operation consistently until all data is consumed.

Main result. In this paper, we explore the design space of group aggregation algorithms using a global concurrent hash table, comparing against state-of-the-art partitioning approaches. Most significantly, we find that a simple, purpose-built concurrent hash table using linear probing and a customized get-or-insert function can not only scale well on modern multi-core hardware but can match or even outperform partitioning-based approaches. Our results do not show that one approach is better than the other; instead, they highlight the operational benefits and costs of both approaches. Throughout our exploration, we make specific recommendations for database implementers.

Our implementation of group aggregation with a global concurrent hash table closely tracks the implementation in MonetDB [3]. Each worker, upon receiving a row, first obtains from a hash table an integer “ticket” for that row's grouping key. This ticket uniquely identifies each group and serves as an index to locate the aggregated value for that group. We provide a detailed description of this procedure in Section 2.3. This two-phase procedure (ticketing and partial aggregate update) is repeated for each row (possibly in a vectorized fashion: ticketing an entire morsel, then aggregating that morsel), and opens up a large number of possible designs, which we explore.

Ticketing.
In the initial phase of fully concurrent group aggregation, each unique group is assigned an integer “ticket.” This operation can be performed with a concurrent hash table that atomically checks if a key is already in the table, returning the ticket for that key if so, and inserting a new ticket into the table if not. Surprisingly, many general-purpose hash tables cannot perform this operation atomically or do not optimize for this particular case. As a result, there is significant room for improvement by building a specialized fast path for this particular operation, while avoiding extra overhead for unnecessary operations like deletes. We test several implementations based on atomics and fine-grained locking, and show that simple purpose-built hash tables can significantly outperform their complex general-purpose counterparts.

Figure 1: Partitioned aggregation with local preaggregation. Local hash tables spill partial aggregates to result partitions when they become full (Stage 1: local pre-aggregation; Stage 2: aggregate partition-wise).

Partial Aggregate Update. After ticketing, each worker must apply the relevant aggregation function to the values associated with each row. This can be done either in a thread-specific way (e.g., each thread maintains local aggregation storage, merging the results at the end) or using concurrent access to global space. We explore both approaches, characterizing their operational tradeoffs. Specifically, we find that concurrent access to global space works well in the absence of heavy hitters, and we propose a simple thread-local approach that works well except when every grouping key is unique.

Organization.
Our experimental study is organized as follows. In Section 2, we explain our assumed model of execution and introduce the basics of partitioned aggregation and concurrent aggregation. In Section 3, we investigate the design space of aggregation with a concurrent hash table: in Section 3.1, we explore the ticketing step, and in Section 3.2, we explore the aggregation step. In Section 4, we investigate the end-to-end performance of both approaches. We analyze their scaling properties in a number of different scenarios, including a causal analysis of scaling behavior (i.e., identifying bottlenecks). We additionally discuss several important operational characteristics, like memory usage and hash table resizing. In Section 5, we discuss related work, before concluding and outlining future work in Section 6.

# 2 PRELIMINARIES

In this section, we first establish the model of query execution we are operating within and discuss the relevant constraints it places upon our work. We then provide an overview of the two models of aggregation that we assess in this paper: partitioned and fully concurrent.

# 2.1 Model of Execution

Leis et al. [16] introduced the morsel-driven framework of parallel query execution to improve performance in main-memory systems where latency is compute-bound rather than I/O-bound. In this model, horizontal parallelism is achieved by breaking down work into data fragments called “morsels.” Morsels are dynamically distributed to threads in a pool, using strategies such as work-stealing to ensure even distribution. In this model, query execution is pipelined, with morsels being pushed to the next operator as they are finished being processed. Morsel-driven parallelism is also often used with columnar data representations (Leis et al. integrated their work within HyPer and used its columnar data representation), with morsels serving as units for vector-at-a-time execution.
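The work-distribution side of this model can be sketched in a few lines: the input is cut into morsels that idle threads pull dynamically from a shared queue, each thread keeping a local partial result that is merged at the end. The operator here is just a SUM stand-in, and real systems use work-stealing deques rather than a single queue:

```python
import threading
from queue import Queue, Empty

def parallel_sum(rows, morsel_size=1024, workers=4):
    """Toy morsel-driven dispatch: threads pull morsels until none remain."""
    morsels = Queue()
    for i in range(0, len(rows), morsel_size):
        morsels.put(rows[i:i + morsel_size])

    partials, lock = [], threading.Lock()

    def worker():
        local = 0
        while True:
            try:
                morsel = morsels.get_nowait()
            except Empty:
                break
            local += sum(morsel)  # stand-in for a pipeline operator
        with lock:
            partials.append(local)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)
```

Because morsels are claimed dynamically, a slow thread simply pulls fewer morsels, which is the skew-evening behavior the model is designed for.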
Operating on a dense vector of values enables important optimizations such as amortizing interpretation overhead (e.g., dynamic type resolution) and SIMD instructions [12]. The morsel-driven model places some restrictions on operator implementations. Operators must be able to operate on chunks of data at a time to conform with pipelining and do not have access to the rest of the incoming tuples. This restriction is especially problematic for sort-based aggregations, although some variations propose ways to adapt to such models of execution [21]. Due to the prevalence of main-memory and morsel-driven systems, we focus our investigation only on operator implementations that conform to this model of execution.

# 2.2 Partitioned Aggregation Model

Under a partitioned approach to parallelized aggregation, concurrency control is avoided by assigning a subset of the key domain to each thread. A naive partitioning strategy assigns incoming tuples to their proper threads, each of which then aggregates its assigned tuples locally. However, this method suffers significantly from data skew and has been found to perform worse than methods utilizing a local hash table to preaggregate heavy hitters before partitioning [28]. Leis et al. [16] describe a specific partitioned aggregation strategy, illustrated in Figure 1, to parallelize group by aggregations within morsel-driven systems. Their method consists of two stages: local preaggregation and partition-wise aggregation. In the first stage, each thread aggregates all values from its assigned morsel(s) in a local, fixed-size hash table ❶. When the hash tables become full, the partially aggregated values (referred to as partial aggregates) are spilled into partitions ❷ and the process continues. After all data is preaggregated, all partial aggregates are flushed to their proper partition.
In the second stage, partitions are exchanged between threads, and each thread aggregates the partial aggregates from all other threads to compute the final answer ❸. This algorithm is motivated by the need for skew resistance in partitioned aggregation methods, avoiding uneven work distribution by spreading the work of aggregating heavy hitters among all threads in the local preaggregation stage. However, for high-cardinality workloads, there is repeated spilling from the local aggregation tables. The constant spilling results in each tuple essentially being aggregated twice, once in each stage of the algorithm, creating a significant source of overhead. The local preaggregation approach to partitioned aggregation has been adopted by a number of real-world systems including DuckDB [14] and DataFusion [15]. Due to the widespread adoption of this particular algorithm and its known good scaling behavior [14, 28], it is used as the baseline partitioned aggregation method against which we compare the performance of our fully concurrent aggregation algorithm.

Figure 2: A sample execution of our fully concurrent aggregation model using the same instance as in Figure 1. This diagram aligns most closely with our atomic or locking method of partial aggregate updates. In contrast to partitioned aggregation, instead of completing all work in local data structures, there are now two shared structures: the ticketing hash table and the vector of partial aggregates.

# 2.3 Fully Concurrent Aggregation Model

To perform a group by aggregation in a fully concurrent manner, each thread must aggregate an arbitrary morsel of data from start to finish with any key distribution. We separate fully concurrent aggregation into two steps: ticketing and partial aggregate update (which we will often refer to as just the “update” step), as shown in Figure 2.
In the ticketing step ❶, a concurrent shared hash table is used to map each key value to an integer “ticket.” This mapping is one-to-one: each unique key is granted a single, unique ticket, and the ticket assigned to a key is consistent across all threads due to the use of a shared hash table. A ticket conceptually represents the location where the corresponding partial aggregate to be updated is stored. In our implementation, this is an index into a vector of partial aggregates that are later updated based on the value being aggregated. Another view of the ticketing step is that it uses a hash table to incrementally create a perfect hash function over the key space. This stage forms the “group by” part of group by aggregations. Next, the resulting vector of tickets and the value column is fed to the update step ❷, where the query’s specified aggregation function(s) are applied to update each partial aggregate (e.g. incremented for COUNT, added to for SUM, etc.). It is easiest to view partial aggregates as a global vector with cells protected individually by some concurrency-safe mechanism (a lock or atomic), although we do later introduce a thread local update procedure that does not align with this conceptualization. This step is the “aggregation” part of group by aggregation.

Ticketing Indirection. Note that this layer of indirection separating the partial aggregate from the hash map is already common in practice for a myriad of reasons. For example, it is used by MonetDB [3] to enable vectorized execution, as well as by DuckDB [14] and DataFusion [15]. Indirection also enables optimizations only possible when acting on a dense column of values, such as using SIMD instructions or amortizing the cost of dynamic type resolution in non-compiled systems. Given that indirection is already commonplace, we formalize it in our model to take advantage of its logical properties.
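As a concrete (deliberately sequential) illustration, the two-step model can be sketched in a few lines of Rust. The helper name `group_by_count` is ours; a real implementation would use a concurrent hash table for ticketing and a concurrency-safe update step as described in the following sections.

```rust
use std::collections::HashMap;

// Single-threaded sketch of the two-step model: ticketing maps each key to
// a dense integer ticket, and the update step applies the aggregation
// function (COUNT here) to a vector of partial aggregates.
fn group_by_count(keys: &[u64]) -> (Vec<u64>, Vec<u64>) {
    let mut tickets: HashMap<u64, usize> = HashMap::new();
    let mut ordered_keys: Vec<u64> = Vec::new(); // keys stored in ticket order
    let mut partials: Vec<u64> = Vec::new();     // one partial aggregate per ticket

    for &k in keys {
        // Ticketing: each unique key receives exactly one ticket.
        let t = *tickets.entry(k).or_insert_with(|| {
            ordered_keys.push(k);
            partials.push(0);
            ordered_keys.len() - 1
        });
        // Update: apply the aggregation function to the ticket's cell.
        partials[t] += 1;
    }
    (ordered_keys, partials)
}
```

Keeping the keys in ticket order, as here, is what later allows results to be materialized without walking the hash table.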
In particular, since each unique key is assigned a ticket exactly once, our hash table workload is reduced to only lookups and inserts. One key contribution of this paper is noting that while concurrent hash tables have traditionally been eschewed in the context of aggregations due to poor scaling behavior with certain workloads, once constrained to inserts and lookups only, concurrent hash tables can be designed with excellent performance. This indirection pushes concurrent updates to the second stage. Crucially, concurrent updates now act on a vector of values rather than on entries of a hash table, allowing for alternate ways of protecting against simultaneous updates that do not degrade the performance of the ticketing hash table.

# 3 FULLY CONCURRENT AGGREGATION

In this section, we investigate the design space for each stage of fully concurrent aggregation. We discuss various methods and provide micro-benchmarks of each stage in isolation to identify the best-performing method(s) for each stage, which we later test end-to-end in Section 4. We also discuss directions for future work that could leverage our particular model of aggregation to achieve more efficient operation compared to some of the existing, more general-purpose methods that we test.

# 3.1 Ticketing

As described in Section 2, the ticketing step is performed using a concurrent hash table to map each unique key to a unique and immutable “ticket.” In this section, we first establish the interface of hash tables designed for ticketing and the process used to generate tickets. Then, we discuss various candidate hash table designs, followed by experimental evaluation and discussion.

Interface. When a new key arrives, an insert operation is required. Following a successful insertion for a key, all future requests should look up the previously returned ticket value.
In many concurrent hash table designs, this lookup operation can be achieved with significantly less overhead than an insert (since shared locks are cheaper than exclusive locks, and simultaneous atomic reads are much cheaper than simultaneous atomic writes). However, since we do not know ahead of time whether an insert is necessary (i.e., we do not know if a particular key has already been given a ticket), it is crucial to provide an efficient fast path for lookup before attempting a more expensive insert. In doing so, we can greatly reduce contention effects from simultaneous requests on the same key, even when given a heavily skewed key distribution. Some care in the implementation is needed to ensure correctness: during the gap between the fast-path lookup and the insertion attempt, another thread could have inserted the same key, and a double insertion would violate the constraint that ticketing is one-to-one. We propose that the lookup and insert paths be integrated into one operation, which we denote GET_OR_INSERT, to ensure an efficient and correct implementation. Such an operation is rarely supported out-of-the-box by concurrent hash maps, but depending on the specific hash map implementation, the same effect could be achieved using an entry API or with a lookup followed by a non-overwriting insert.

Figure 3: Performance of the Folklore\* hash table when using a fuzzy ticketer as opposed to an incrementing atomic counter to issue tickets. Performance is evaluated with both low and high cardinality workloads and is measured with latency (lower is better).

To efficiently retrieve keys in ticketed order when materializing our final results, we also need the hash table to store a copy of keys in ticket order. This creates some performance overhead but is generally efficient since there is no contention: only the thread issuing a ticket is responsible for storing a copy.
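The GET_OR_INSERT contract, including the re-check needed between the fast path and the insert, can be sketched with a plain `RwLock<HashMap>`. This is not one of the hash tables benchmarked here, and `LockedTicketer` plus its ticket-from-map-size shortcut are illustrative assumptions only:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Hypothetical sketch of the GET_OR_INSERT contract: a cheap shared-lock
// fast path for lookups, then an exclusive-lock slow path that re-checks
// the key, since another thread may have inserted it between the two lock
// acquisitions. The re-check keeps ticketing one-to-one.
struct LockedTicketer {
    map: RwLock<HashMap<u64, usize>>,
}

impl LockedTicketer {
    fn new() -> Self {
        Self { map: RwLock::new(HashMap::new()) }
    }

    fn get_or_insert(&self, key: u64) -> usize {
        // Fast path: shared lock, lookup only.
        if let Some(&t) = self.map.read().unwrap().get(&key) {
            return t;
        }
        // Slow path: exclusive lock, re-check before inserting. In a real
        // table the ticket would come from a (fuzzy) ticketer; here it is
        // derived from the map size for brevity.
        let mut map = self.map.write().unwrap();
        let next = map.len();
        *map.entry(key).or_insert(next)
    }
}
```

The entry-API re-check under the write lock is the load-bearing piece: dropping it would allow a double insertion under a racing workload.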
The ticket-order copy of keys can also be maintained as the only copy to decrease memory overhead (e.g., the ticket value can be used to look up the key during hash map operations), but this would come at the cost of an additional cache miss per lookup.

Generating Ticket Values. Generating a ticket value, while simple at face value, is a surprisingly non-trivial task due to synchronization concerns. Multiple threads must avoid issuing the same ticket to different keys. A naive implementation of a “ticketer” would use an atomic counter updated with a FETCH_ADD instruction whenever a thread needs a new ticket issued (on each insert). However, this introduces a significant source of contention for insert-heavy workloads. To combat this issue, one can use a fuzzy ticketer that assigns each thread a range of tickets to issue at a time. Each thread only needs to concurrently access an atomic value when it exhausts its assigned range, whose size can be tuned to all but eliminate contention. The tradeoff is that the vector of partial aggregates may no longer be perfectly dense; however, the number of gaps is bounded linearly by the number of threads and, in most workloads, the gaps will be concentrated at the end of the aggregate vector, making them easy and efficient to eliminate at the end of the aggregation process. Using a fuzzy ticketer also provides greater flexibility when implementing the insert operation. In cases where a ticket is obtained optimistically (e.g. as a parameter to an insert operation that fails when the entry already exists), a simple FETCH_ADD is insufficient because there is no guarantee the obtained ticket will actually be inserted, yet the underlying atomic counter has already been updated. For example, in our procedure in Algorithm 1, a ticket value is directly written to the table using a COMPARE_AND_SWAP (CAS) that may not succeed.
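A minimal sketch of such a fuzzy ticketer follows, assuming an illustrative batch size of 1024; the names and the fixed constant are ours, not the paper's.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Each thread reserves BATCH tickets at a time from a shared atomic
// counter, touching shared state only once per BATCH inserts. The batch
// size is a tunable; 1024 is illustrative.
const BATCH: usize = 1024;

struct FuzzyTicketer<'a> {
    shared: &'a AtomicUsize, // global ticket counter, shared by all threads
    next: usize,             // next ticket in this thread's reserved range
    end: usize,              // exclusive end of the reserved range
}

impl<'a> FuzzyTicketer<'a> {
    fn new(shared: &'a AtomicUsize) -> Self {
        Self { shared, next: 0, end: 0 }
    }

    /// Returns a ticket guaranteed to be unique across all threads.
    fn next_ticket(&mut self) -> usize {
        if self.next == self.end {
            // Range exhausted: reserve a fresh block with one FETCH_ADD.
            self.next = self.shared.fetch_add(BATCH, Ordering::Relaxed);
            self.end = self.next + BATCH;
        }
        let t = self.next;
        self.next += 1;
        t
    }
}
```

Any tickets left unissued in a thread's range when the aggregation finishes become the bounded, end-concentrated gaps described above.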
There is a significant gap in latency on insert-heavy workloads between using a pure atomic counter as the ticketer and our fuzzy ticketer, as seen in Figure 3 (lower latency is better). This microbenchmark is run on our best-performing hash table design, Folklore\*. Latency in the high cardinality workload is greater for the pure atomic method by a factor of 2.5x. The performance difference is negligible in the low cardinality workload (since there are few inserts in the first place). Note that the high cardinality workload here is not even the extreme case of being insert-only, but is merely ~10% insertion. Since using a single atomic ticket value drastically degrades performance when there are many unique keys, we recommend that implementers avoid this contention by using a structure that amortizes the cost of concurrent accesses across multiple inserts, such as our fuzzy ticketer.

Hash Table Designs. We benchmark a variety of state-of-the-art hash table designs. These include cuckoo hashing [18], Iceberg [23], Rust’s Leapfrog library, which uses leapfrog probing [24] (in turn a variation on iceberg probing [10, 11]), and Rust’s popular DashMap. Our implementations additionally benefit from vectorized execution that optimizes hashing and amortizes the acquisition of a read lock on the shared table (to ensure the correctness of resizing). We also implement a variant of the Folklore hash table proposed by Maier et al. [20], a lockless linear probing hash table. Our implementation leverages the lookup- and insert-only workload and notes that ticket value 0 can easily be reserved as a sentinel “empty value,” which we relate to a corresponding “empty key.” These properties allow our design to use only a single-word CAS instruction instead of the two-word version required by the canonical implementation (which is not a universally supported operation). We denote our implementation Folklore\* to indicate this difference.
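Algorithm 1 itself is not reproduced in this text; as a hedged sketch of how the described properties could fit together, the following fixed-capacity linear-probing table stores only tickets in its slots (0 as the empty sentinel), recovers keys from the ticket-ordered key array, and claims a slot with a single-word CAS. All names, the stand-in hash function, and the omission of resizing are our simplifications, not the authors' implementation.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Sketch of a Folklore*-style table. Slots hold 0 (empty) or a ticket;
// tickets start at 1 so 0 can serve as the sentinel. Keys are stored in
// ticket order, so key comparison goes through the ticket. Capacity is
// fixed and assumed to be a power of two.
struct FolkloreSketch {
    slots: Vec<AtomicU64>, // 0 = empty, otherwise a ticket
    keys: Vec<AtomicU64>,  // keys indexed by ticket (index 0 unused)
}

impl FolkloreSketch {
    fn new(capacity: usize) -> Self {
        Self {
            slots: (0..capacity).map(|_| AtomicU64::new(0)).collect(),
            keys: (0..capacity + 1).map(|_| AtomicU64::new(0)).collect(),
        }
    }

    /// Returns the ticket for `key`, inserting with `fresh_ticket` if the
    /// key is absent. `fresh_ticket` must be nonzero and unused; it is
    /// wasted if another thread wins the race (one reason a fuzzy ticketer
    /// helps, since wasted tickets only create bounded gaps).
    fn get_or_insert(&self, key: u64, fresh_ticket: u64) -> u64 {
        let mask = self.slots.len() as u64 - 1;
        let mut i = (hash(key) & mask) as usize;
        // Publish our key before trying to publish the ticket.
        self.keys[fresh_ticket as usize].store(key, Ordering::Release);
        loop {
            let t = self.slots[i].load(Ordering::Acquire);
            if t != 0 {
                // Fast path: occupied slot; compare via the ticket's key copy.
                if self.keys[t as usize].load(Ordering::Acquire) == key {
                    return t;
                }
            } else {
                // Empty slot: try to claim it with a single-word CAS.
                match self.slots[i].compare_exchange(
                    0, fresh_ticket, Ordering::AcqRel, Ordering::Acquire,
                ) {
                    Ok(_) => return fresh_ticket,
                    Err(winner) => {
                        // Lost the race; the winner's key may match ours.
                        if self.keys[winner as usize].load(Ordering::Acquire) == key {
                            return winner;
                        }
                    }
                }
            }
            i = (i + 1) & mask as usize; // linear probing
        }
    }
}

// Stand-in hash; any 64-bit mixer would do.
fn hash(key: u64) -> u64 {
    key.wrapping_mul(0x9E3779B97F4A7C15)
}
```

Note how a failed CAS can strand the optimistically obtained `fresh_ticket`, which is exactly the situation the fuzzy ticketer is designed to tolerate.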
The GET_OR_INSERT procedure for this design is outlined in Algorithm 1.

Evaluation. In Figure 4, we show the performance and scaling behavior of these designs across different cardinalities and data distributions (see Section 4.1). We find that Folklore\* consistently performs best across all tested hash tables, despite its simple implementation. It also exhibits excellent scaling behavior for low cardinality workloads, achieving a 25.6x speedup at 32 threads. Folklore\*’s scaling is not ideal at higher cardinalities but still achieves non-trivial speedups (10.6x for high cardinality and 6.0x with unique keys). Further, its efficient implementation still makes it an ideal choice even at high thread counts despite its lower speedup factors. Crucially, the performance of the table is resilient to data skew due to its fast-path lookup; in fact, Folklore\* performs even better in the presence of heavy hitters due to greater cache locality.

Figure 4: Scaling behavior of various hash maps for ticketing across different data distributions (see Section 4.1 for workload details). The top row measures performance as throughput (higher is better) against thread count. The bottom row plots the speedup factor against thread count, measured as the single-threaded latency divided by the latency at a given thread count. Ideal scaling is the linear function speedup = threads, which is plotted as a dashed line. The method with the most ideal speedup factor varies by workload, but Folklore\* consistently achieves the best overall throughput across the tested range of thread counts.

Surprisingly, cuckoo hashing exhibits very poor scaling in high contention workloads, despite our implementing a fast-path lookup and prior literature indicating good performance [18]. We attribute this behavior to the implementation using fine-grained locking on buckets.
Even though we only acquire read locks on lookups, the bookkeeping required by the locks still creates significant contention when multiple threads access the same resource in a short period of time. The implementation of DashMap also uses read locks, which could explain its similar performance characteristics. In contrast, LeapMap and Folklore\* are fully lock-free (and Iceberg is lock-free on its fast-path lookup). These characteristics explain the significant divide in scaling behavior in the low cardinality case, with the speedup factor at high thread count being starkly higher for the latter three methods compared to the former two that use read locks. Thus, it is essential to ticketing performance to have a “fast path” for reading previously-inserted values, and preferably, this “fast path” should not require taking any locks.

Discussion. An interesting finding from the experimental results is that for our lookup- and insert-only workload, even a very simple linear probing design (Folklore\*) achieves excellent performance. A contributing factor to this surprising fact is that the typical downside of linear probing, deletions, is a non-issue given our workload. Thus, linear probing’s cache-conscious forward scan does not really have a downside. We conclude that to perform efficient ticketing, fancy hash tables are not required: linear probing is all you need. That being said, there is likely room for more complicated designs to optimize lookup and insert operations. Unfortunately, evaluating existing state-of-the-art hash table designs and their suitability for ticketing is complicated by gaps in benchmarking results. Existing literature on concurrent hash table designs often does not test certain workloads important for our application. For example, neither of [11, 18] tests a lookup-only workload (aligning with low cardinality workloads), and all workloads tested contained updates and deletions (of which there are none in group by aggregation).
While this particular selection of workloads makes sense for a general-purpose hash table, the results do not apply to group by aggregation. Put another way, the omission of read-heavy, delete-free workloads can cause the designs to be poorly optimized for our ticketing use case despite strong performance in mixed, general-purpose workloads. Even when benchmarking specifically for aggregation use cases, as in Maier et al. [20], chosen workloads often assume in-table aggregation (which is not possible without query compilation, since there are too many combinations of key types and aggregation functions) and are therefore update-heavy, not taking advantage of the lookup- and insert-only semantics of ticketing. This misalignment has yielded conclusions that concurrent hash tables are still insufficient for aggregation purposes in the presence of skew, even when, in fact, they do not necessarily present a barrier. On the contrary, the simple nature of a lookup and insert workload can, for many existing concurrent hash table designs, make skewed workloads a non-issue. We recommend that database implementers use caution when evaluating benchmarks for general-purpose hash tables since group by aggregation has a distinct profile that is often overlooked. Finally, we observe that our definition of ticketing reduces the task to that of finding a perfect hash function; that is, the purpose of ticketing is to assign each key a unique, (near-)densely packed integer, which is exactly what a perfect hash function does. Gaffney and Patel [8] found that significant speedups can be achieved when integrating perfect hash functions into DuckDB’s aggregation pipeline. Our formulation of ticketing aligns perfectly with this notion of perfect hashing, and ticketing could likely also greatly benefit from perfect hashing (i.e., “skipping” the ticketing step).
Although building a perfect hash function requires knowing all the data ahead of time, which violates the morsel-driven parallelism model, it could be highly beneficial in situations where building a computationally efficient perfect hash function is worth the effort (perhaps if queries are frequent on a fixed set of keys). Perfect hashing could essentially remove all contention from the ticketing phase, greatly accelerating fully concurrent aggregation.

# 3.2 Partial Aggregate Update

In the update step, described in Section 2, we “actually do” the aggregation and modify the partial aggregate value corresponding to each ticket based on the associated row being aggregated. Whereas in the ticketing step we did not need to perform any concurrent updates, now concurrent updates are necessary for each incoming tuple. That is, concurrency control problems that we avoided in the ticketing step have been deferred to this stage. In this section, we explore two classes of update methods, concurrent and thread-local. We evaluate and discuss the methods and identify the situations where each exhibits good performance and scaling.

Concurrent Update Method. A naive but general-purpose solution to managing concurrency in this step is to protect each cell of the vector of partial aggregate values with a lock. For each ticket, we acquire the lock on its cell in the partial aggregate vector, update the partial aggregate value, and release the lock. Another simple approach is to have each partial aggregate be an atomic. Aggregation functions such as COUNT, SUM, and MIN/MAX are straightforward to perform atomically, but other, more complicated aggregation functions may not be easy to implement with atomics. One general solution is to read the current partial aggregate, compute the updated value, and then use a CAS instruction to install it.
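For instance, assuming u64 partial aggregates, SUM maps directly onto FETCH_ADD, while a function like MIN can use the read-then-CAS loop just described. (Rust's atomics also offer `fetch_min`; the explicit loop shows the general pattern for functions without a hardware primitive.) The function names here are illustrative:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Concurrent update of one (ticket, value) pair via FETCH_ADD.
fn update_sum(partials: &[AtomicU64], ticket: usize, value: u64) {
    partials[ticket].fetch_add(value, Ordering::Relaxed);
}

// Concurrent MIN: read the current partial aggregate, then install the new
// value with a CAS, retrying until it sticks or is no longer the minimum.
fn update_min(partials: &[AtomicU64], ticket: usize, value: u64) {
    let cell = &partials[ticket];
    let mut cur = cell.load(Ordering::Relaxed);
    while value < cur {
        match cell.compare_exchange(cur, value, Ordering::Relaxed, Ordering::Relaxed) {
            Ok(_) => break,
            Err(observed) => cur = observed, // another thread raced us; re-check
        }
    }
}
```

Under heavy skew, many threads spin on the same cell in loops like `update_min`, which is precisely the contention problem discussed next.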
This method, however, increases the number of atomic operations and hence contention, and can potentially suffer from the ABA problem [5] (although this is unlikely for most aggregation functions), so some care is required in implementation. These fully concurrent update methods are simple to implement and very memory efficient; however, they suffer at higher levels of contention: if there is a heavy hitter in the data, every thread may simultaneously and repeatedly try to update the same partial aggregate value, presenting an issue for skewed workloads.

Thread Local Update Method. To mitigate issues with contention, we explore a thread-local approach to updates where each worker thread updates partial aggregates in its own thread local vector, eliminating all contention effects in the update stage at the cost of a merge of all partial aggregate values at the end of the aggregation. Although the total work of the merge step scales with the number of threads, because the vectors of partial aggregates are all in the same order (ordered by ticket), the merge is trivially parallel and cache efficient, which should in part mitigate the overhead. This merge can be viewed as the “transpose” of the merge done at the end of the partitioned case: in the partitioned case, each worker has fully aggregated values but only some of the keys, whereas in the fully concurrent case, each worker has partially aggregated values but all of the keys. Unfortunately, the work per thread does not decrease asymptotically as threads increase. Since each of $k$ threads is assigned $n/k$ tickets to merge (where $n$ is the number of unique keys), and there are $k$ threads’ worth of partial aggregates per ticket, each thread aggregates over $(n/k) \cdot k$ partial aggregates, yielding $O(n)$ runtime. The per-thread runtime of the merge step is thus constant in the number of threads, and so the merge does not scale at all.
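The merge can be sketched as follows: each thread is handed a disjoint ticket range and combines that range across all per-thread vectors (SUM shown; `merge_range` is an illustrative name). The $(n/k) \cdot k$ work per thread is visible directly in the loop structure.

```rust
// Merge at the end of thread-local aggregation. Every per-thread partial
// vector is in the same (ticket) order, so a thread can merge the tickets
// in its assigned range [lo, hi) from all k vectors with no coordination.
fn merge_range(locals: &[Vec<u64>], lo: usize, hi: usize) -> Vec<u64> {
    let mut out = vec![0u64; hi - lo];
    for local in locals {            // k vectors of partials...
        for t in lo..hi {            // ...each scanned over n/k tickets,
            out[t - lo] += local[t]; // so (n/k) * k = O(n) work per thread
        }
    }
    out
}
```

Because the ranges are disjoint and the scans are sequential, the merge is trivially parallel and cache efficient, matching the discussion above.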
Another challenge with thread local aggregation is that memory usage scales linearly with the number of threads and distinct keys, requiring a vector the size of the entire key space per thread. For very large datasets at high thread counts, this overhead could be a concern, especially since there is no clear way to spill non-partitioned hash aggregations efficiently to disk. One mitigating factor is that vectors of partial aggregates are dense, as opposed to the hash tables allocated per thread in partitioned aggregation, reducing the gap between the two techniques’ memory usage. We quantify the memory overhead in Section 4.5.

Table 1: Desiderata fulfilled by each different update method. For cardinality and skew, the range of workloads in which the given update method performs well is given. For memory usage, the asymptotic behavior is given as a factor of $n$ (the number of unique keys) and $k$ (the number of threads).

Evaluation. In Figure 5, we plot the performance of these update methods in isolation, that is, without ticketing. Keys are given as integers from 0 to the max key, which are used directly as tickets (i.e., a perfect hash function). We set up the experiment in this way to isolate the scaling behavior of each specific aggregation method. One caveat is that, in actuality, contention effects are far less pronounced than they appear in these isolated benchmarks, because the time it takes to ticket causes fewer threads to be at the update step at the same time, decreasing contention. Both thread local and atomic updates demonstrate strong scaling behavior in some workloads, but neither is a clear winner across all workloads. Thread local updates have superior performance in the presence of high contention (i.e. low cardinality datasets or skewed distributions) but degrade in performance as cardinality increases.
This is attributable to the fact that at lower numbers of unique keys, the underlying vectors of partial aggregates are small enough that there is minimal overhead from merging. At low cardinality, thread local updates achieve a substantial speedup of 12.6x at 32 threads. At higher cardinalities, the thread local method demonstrates anti-scaling behavior at higher thread counts, actually decreasing in performance after either 8 or 16 threads. This effect is especially pronounced in the unique key workload, so much so that there is close to no speedup once reaching 32 threads. The performance in the high cardinality workload is more mixed, with thread local updates obtaining a 4.4x speedup at 32 threads but also demonstrating anti-scaling behavior past 16 threads. Crucially, though, the speedup is consistent no matter the skew: since the partial aggregates have no contention, skew can only help due to greater cache locality and leads to no performance degradation. This property makes thread local updates a good option even with relatively high cardinality workloads due to their consistency. Meanwhile, atomic updates exhibit better scaling in workloads with low contention and high cardinality. For the high cardinality and unique key workloads, in the absence of skew, atomic updates are a clear winner. In fact, in the high cardinality case, atomics achieve a blazing 20.5x speedup at 32 threads. Surprisingly, atomic updates exhibit good scaling even in the presence of some skew (the Zipfian workload) but fall short when the skew becomes too great (the heavy hitter workloads). Fine-grained locked updates exhibit similar behavior to atomic updates (since they are bottlenecked by similar contention effects) but tend to be the worst performer due to their higher overhead.
However, fine-grained locking exhibits better scaling behavior (but not better absolute performance on today’s hardware) than thread local updates at high thread count and has the added benefit of being able to easily support arbitrary update functions. As hardware continues to add more cores, fine-grained locking is a promising technique for scalability, although other techniques have superior absolute performance on today’s chips.

Discussion. Given the varied performance characteristics of these update methods, summarized in Table 1, ideal performance can best be achieved by adaptively choosing the best method for a given workload. Although imperfect, database optimizers have grown sophisticated, with some work showing good results in predicting sophisticated statistics, including the number of unique keys and data skew [13], to efficiently optimize group by aggregations. Such work can enable the optimizer to choose the update method on a per-query basis. Absent an adaptive choice, thread local aggregation provides good performance on today’s hardware in all cases except when keys are strictly unique. There remains a concerning performance gap in the case of high skew and high cardinality datasets, where all methods perform poorly. The issue is in part mitigated by the fact that the higher the proportion of unique keys, necessarily the lower the proportion of keys that could be duplicated and cause contention. Furthermore, it is important to note that poor scaling in the partial aggregate update step does not preclude overall good performance for fully concurrent aggregation. In Section 4.3, we find that the ticketing stage typically accounts for the bulk of overall execution time (since hash table operations tend to have much higher overhead than updates on a vector).
Therefore, since ticketing exhibits good scaling behavior, the overall system can still scale well even with degraded performance in the update step, provided that the anti-scaling in the update step is not bad enough to become a bottleneck. We believe there is significant room for future work to improve this step. In particular, a system that combines atomic or locked updates with thread local updates could take advantage of the benefits of both approaches. This discussion bears similarity to hybrid aggregation approaches [4, 7] that combine a shared hash table with thread local hash tables for heavy hitters. We also believe there is room for optimization by taking advantage of the vectorized nature of these updates. If locks can be obtained a vector at a time, we may be able to reduce locking overhead. Unfortunately, there appears to be little literature that explores vectorized concurrent updates on an array of values.

# 4 END-TO-END EXPERIMENTS

In this section, we combine the ticketing and update steps of fully concurrent aggregation and benchmark them end-to-end, comparing performance to an implementation of partitioned aggregation. Our experimental analysis is broken into five sections. Setup (4.1) details the dataset and workload characteristics we benchmark. Scaling (4.2) analyzes the results of experiments measuring the throughput and scaling behavior of various aggregation methods across different workloads. Latency Breakdown (4.3) aims to explain the observed scaling behavior of fully concurrent aggregation by breaking down the amount of time spent in each stage of the aggregation process. Resizing (4.4) compares aggregation methods in the case of poor cardinality estimation that results in resizing behavior. Memory Usage (4.5) compares the peak memory usage of each aggregation method.

Figure 5: Scaling behavior of various partial aggregate update methods across different data distributions.
Similarly to Figure 4, the top row plots performance as throughput and the bottom row plots the speedup factor relative to single-threaded performance. Atomic updates display good performance and scaling in low contention scenarios, while thread local updates have good performance and scaling for lower cardinality workloads. Thread local updates are also resilient to data skew.

# 4.1 Setup

We run our experiments on a machine with 256 GB of RAM and an AMD EPYC 9454P processor with 48 cores @ 2.75 GHz. All experiments were implemented in Rust and were compiled in release mode using the target-cpu=native flag.

Datasets. Our tests use synthetic datasets consisting of 100 million keys. We varied the cardinality of the dataset, labeled low, high, and unique, which consist of 1000 unique keys (~100% lookup / ~0% insertion on ticketing), 10 million unique keys (~90% lookup / ~10% insertion on ticketing), and 100 million unique keys (~0% lookup / ~100% insertion), respectively. On the high cardinality dataset, we also add two types of skew: a Zipfian distribution with exponent parameter $s = 0.8$, as well as a heavy-hitter dataset where 50% of the dataset consists of the same key.

Workload. As a demonstrative example of common aggregation functions, we use COUNT for all tests. All experiments (except where otherwise noted) assume perfect cardinality estimation and thus perfectly size hash tables and partial aggregate vectors. We later evaluate the effect of inaccurate cardinality estimation and resizing in Section 4.4. For all fully concurrent aggregations, we use Folklore\* for the ticketing step because it achieved the highest throughput across all tested workloads. All results (including those given in earlier sections) are obtained by taking the median latency of a given workload over 9 runs (not including warm-up runs).
# 4.2 Scaling

We evaluate scaling end to end (including both ticketing and partial aggregate update) in Figure 6. We test the atomic and thread local methods for fully concurrent aggregation and graph the results against those for partitioned aggregation using local preaggregation. The locked update approach is omitted because it performs strictly worse than atomics as a concurrent update method.

Low Cardinality. In low cardinality datasets, we see that fully concurrent aggregation using thread local updates matches the performance of the partitioned approach, an impressive feat given that the partitioned workload does basically no extra work in the partition-wise aggregation stage since all keys fit in the local hash table during the local aggregation stage.

High Cardinality. In the high cardinality case, we find that thread local aggregation has a clear advantage over partitioning across all data distributions. At this cardinality, the local hash table spills most of its entries, which forces the partitioned approach to aggregate each value twice (once locally and once partition-wise). For the non-heavy-hitter distribution, the atomic method also displays an advantage over partitioning. Surprisingly, thread local aggregation does not exhibit decreased throughput at high thread counts in the end-to-end benchmarks, despite doing so in the isolated update benchmarks from Section 3.2. This behavior is attributable to the fact that the actual update step is fast enough that, even if scaling poorly, the ticketing step still dominates most of the runtime, as seen in the performance breakdowns in Section 4.3.

Figure 6: End-to-end evaluation of scaling behavior of fully concurrent aggregation methods vs. partitioned aggregation. Folklore\* is used for the ticketing step for fully concurrent aggregation, and both atomic and thread local updates are evaluated for the update step.
Similar to Figure 4 and Figure 5, performance as throughput and the speedup factor relative to single-threaded execution are plotted against thread count.

Unique Key. In the pure insert workload, fully concurrent aggregation also exhibits superior performance at high thread count, but only when using atomic updates. Since there is no contention on the underlying partial aggregates in this case, using atomics creates very little overhead. However, the same issues with heavy hitters in the high cardinality case also hold true in the unique key case. Such a workload is not displayed in the graphs but is present in Table 2. Since thread local aggregation is now less feasible as an alternative to deal with skew, the combination of unique keys and high skew continues to present a significant performance challenge.

Comparisons. Notably, the performance advantage of fully concurrent aggregation over partitioned aggregation does not come from its scaling behavior, with fully concurrent aggregation tending to have equal or marginally lower speedup factors at high thread counts compared to partitioned aggregation. However, due to the significantly lower overhead of only inserting into a hash table a single time (rather than performing preaggregation and partition-wise aggregation), fully concurrent aggregation can achieve superior performance across many workloads. Table 2 shows the speedup of the fully concurrent aggregation techniques compared to the partitioned approach. Underlined values indicate performance parity (latency within 10% of partitioned aggregation) and bold values indicate clearly better performance compared with partitioning.

Recommendations. As we discussed in Section 3.2, ideally the method for partial aggregate updates should be chosen based on workload.
While this conclusion still holds once end-to-end results are considered, the picture improves for the thread local method: the performance difference between it and atomics narrows in the high cardinality case. Of the results in Table 2, even on its worst-performing measured workload, the thread local method only runs about twice as long as partitioned aggregation. Similarly, the atomic method never runs more than twice as fast as the thread local method. Thus, the tail performance of thread local aggregation, though not ideal, is certainly not catastrophic, and is potentially outweighed by the speedups obtained in the vast majority of workloads. Therefore, if implementers were to choose only one method for aggregation, they should choose fully concurrent aggregation with thread local updates. Such an implementation would achieve better performance than, or parity with, our studied alternatives across the majority of workloads, and at its worst exhibit manageable amounts of performance degradation.

Table 2: Speedup relative to partitioned aggregation at different thread counts and workloads. Speedup is measured, for a given data distribution and thread count, as the latency of partitioned aggregation divided by the latency of the fully concurrent aggregation method. Speedups at parity with partitioned aggregation (0.9-1.1) are underlined and cases of clearly superior performance by fully concurrent aggregation are bolded. In almost all cases, a form of fully concurrent aggregation achieves parity or better, with the only exception being unique keys with heavy hitters at high thread counts. Thread local aggregation by itself achieves parity or better against partitioning for all workloads except unique keys at high thread counts.

Figure 7: Proportion of time spent on each step of fully concurrent aggregation. Ticketing consistently takes much more time than the update step, except in high-contention atomic update workloads, where the update step becomes a bottleneck.
Materialization becomes a significant part of thread local aggregation's runtime at high cardinality and thread count.

# 4.3 Latency Breakdown

In order to explain the scaling behavior of fully concurrent aggregation, we time each stage of the aggregation process and plot the proportion of time spent in Figure 7. Initialization includes allocation of global data structures (the ticketing hash table and, for atomic aggregation, the global vector of atomic partial aggregates). For thread local aggregation, this stage does not include allocation of the partial aggregate vectors, because they are lazily initialized by each thread in the update stage. The materialization stage consists of the work to turn results into a columnar format that can be pushed to the next query operator, including the cost of merging thread local partial aggregates. The ticketing and update stages are as discussed in Section 3.

We find that for fully concurrent aggregation, ticketing typically takes significantly more time than updating the partial aggregates. The exception to this trend is the low cardinality case with atomic updates, where at 4 threads the update step becomes the major performance bottleneck and dominates runtime. Therefore, the choice of update step method should be driven more by tail performance: as long as the update step does not become a performance bottleneck, even relatively poor scaling in other steps is not insurmountable. This is the major factor that allows thread local updates to maintain relatively good performance even in their less ideal cases. Further, we find that the thread local method's materialization cost becomes increasingly significant as thread count increases, which aligns with our experimental results and theoretical findings from Section 3.2: the per-thread work of merging scales linearly with cardinality and is constant with regard to thread count.
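To make the stages above concrete, the following single-threaded Python sketch (our own naming; the actual system uses concurrent data structures such as the Folklore\*-based ticketing table) mimics the data flow of fully concurrent aggregation with thread local updates: ticketing maps each key to a dense index, each simulated thread accumulates into its own partial-aggregate store during the update stage, and materialization merges the per-thread stores.

```python
from collections import defaultdict

def ticket_keys(rows, tickets):
    """Ticketing stage: map each key to a dense ticket via a shared table.
    The dict stands in for the concurrent hash table; note the workload is
    lookup- and insert-only, since tickets are never updated in place."""
    ids = []
    for key, _ in rows:
        if key not in tickets:
            tickets[key] = len(tickets)
        ids.append(tickets[key])
    return ids

def thread_local_sum(chunks):
    """Update stage with thread local partial aggregates, followed by the
    merge performed during materialization. Each chunk stands in for the
    morsels processed by one thread."""
    tickets = {}
    per_thread = []
    for rows in chunks:  # simulate each thread's work sequentially
        ids = ticket_keys(rows, tickets)
        partial = defaultdict(float)  # lazily built per-thread aggregate store
        for t, (_, value) in zip(ids, rows):
            partial[t] += value  # contention-free: only this thread writes here
        per_thread.append(partial)
    # materialization: merge per-thread stores into one dense vector
    merged = [0.0] * len(tickets)
    for partial in per_thread:
        for t, value in partial.items():
            merged[t] += value
    return {key: merged[t] for key, t in tickets.items()}
```

Note that the merge loop touches every ticket for every simulated thread, which is the linear-in-cardinality, constant-in-thread-count materialization cost discussed above.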
We hoped that the materialization step would be fast enough in implementation to be manageable, which does hold true even for high cardinality workloads. However, once cardinality is so high that each key is unique, it becomes clear that the anti-scaling behavior of the update step is the significant factor contributing to degraded performance.

Figure 8: Scaling behavior with a resize. Dashed lines indicate execution without resizing while solid lines indicate execution with resizing. Fully concurrent aggregation exhibits far more performance degradation under these circumstances, while partitioned aggregation is not affected.

Memory Allocation. One surprising observation is that, for fully concurrent aggregation, a significant part of the time is spent on memory allocation, particularly as cardinality (and thus the amount of memory needed for both the ticketing hash map and the partial aggregate vectors) increases. The surprisingly poor performance of the initialization stage suggests that there are performance gains to be had by optimizing the memory allocator for concurrent operations. We identify two major contributors to this issue: memory allocation for the ticketing step's hash table and the atomic method's partial aggregate vector is single-threaded, and construction of atomics is not always efficient (depending on language and compiler behavior, they may not be zero-initializable). The former problem can likely be mitigated with techniques such as sharding, and the latter by leveraging more low-level language features. Database systems are also known to be sensitive to the performance and characteristics of the chosen memory allocator [6].
For the purposes of our experiments, we avoid over-tuning the memory allocation stage in order to more clearly isolate the impact of the aggregation method itself. However, based on the performance breakdowns, this omission could have contributed to worse performance numbers for fully concurrent aggregation in the unique key workload, especially for atomic updates. Since partitioned aggregation does not face the same memory allocation issues, these concerns do not conflict with our finding that fully concurrent aggregation is practical.

# 4.4 Resizing

A particularly thorny challenge with concurrent hash tables is resizing them efficiently. In most cases, resizing requires all other threads to pause work to accommodate reallocation and migration. While the hope is that cardinality estimates allow a properly sized initial allocation that does not require resizing [13], query optimizers are known to misestimate cardinality by many orders of magnitude under certain workloads [17]. To test the impact of hash table resizing, we adopt Maier et al.'s [20] method for contention-free fully concurrent migration of hash table entries for Folklore\*. In this experiment, we set the capacity of the ticketing hash table and partial aggregate vectors to half of the required capacity, forcing a resize. We do the same for the hash tables in partitioned aggregation (some ad-hoc configuration was required due to the standard library hash table's allocator, but regardless, all workloads are tuned to result in exactly one resize). Figure 8 shows that fully concurrent aggregation is particularly sensitive to resizing, which causes especially high amounts of performance degradation at high thread counts. While in the high cardinality workload the fully concurrent methods still match the performance of the partitioned workload, when the workload becomes insert-only (unique keys), resizing causes particularly severe performance degradation.
Meanwhile, in the partitioned case, there is negligible impact from resizing. Thus, while not a showstopper, fully concurrent aggregation appears to be more sensitive to hash table resizing than partitioning approaches, and future work on improving the performance of such resizes has significant headroom.

# 4.5 Memory Usage

Our model of execution assumes a main-memory system where all data structures fit within memory. It is not obvious how to adapt fully concurrent aggregation to disk spilling, unlike partitioning-based approaches like DuckDB's [14]. Therefore, understanding the severity of memory usage is relevant to the feasibility of aggregation over large datasets. As noted in Section 3.2, thread local updates create significant memory overhead while atomic updates are very memory-efficient. Analyzing the partitioned method is less clear-cut, since much depends on the spilling behavior. However, in the very worst case, when almost all keys are spilled, memory usage is bounded by the total number of elements (not just unique keys), which can severely degrade memory behavior in even moderate cardinality workloads. Comparing the peak memory usage of each technique in Table 3, we find that our theoretical findings hold. Atomic aggregation performs best, while partitioned aggregation displays very high memory overhead for higher cardinality cases. Thread local aggregation exhibits surprisingly good memory usage characteristics, using less memory than partitioned aggregation at all tested thread counts for the high cardinality workload, showing that the dense storage of partial aggregates can greatly counterbalance memory concerns. Even with unique keys, thread local aggregation's memory usage is close to parity with partitioned aggregation at 8 threads. At 32 threads, however, its memory usage degrades to more than 3.1x that of partitioned aggregation and 5.4x that of the atomic update method.
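These qualitative trends are consistent with a simple back-of-the-envelope model of each method's partial-aggregate state. The sketch below is our own simplification (it ignores the ticketing table, and the constants are illustrative, not the measured values from Table 3):

```python
def peak_memory_estimate(method, n_rows, cardinality, threads, bytes_per_agg=8):
    """Rough peak memory (in bytes) for the partial-aggregate state of each
    aggregation method. All constants are illustrative assumptions."""
    if method == "atomic":
        # one shared dense vector of atomic partial aggregates
        return cardinality * bytes_per_agg
    if method == "thread_local":
        # one dense partial-aggregate vector per thread
        return threads * cardinality * bytes_per_agg
    if method == "partitioned_worst_case":
        # nearly all keys spill: state grows with total rows, not unique keys
        return n_rows * bytes_per_agg
    raise ValueError(method)
```

Under this model, thread local memory scales linearly with thread count, matching the observed degradation at 32 threads, while the partitioned worst case depends on total row count rather than cardinality.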
In any case, as noted, at such high cardinalities it would likely be preferable to use atomic updates. Although memory considerations should be weighed carefully on a per-system and per-workload basis, our results indicate that fully concurrent aggregation has significant advantages over partitioning in the majority of cases, although it is not obvious how to spill to disk when doing fully concurrent aggregation.

Table 3: Peak memory usage of different aggregation methods, measured in GB.

# 5 RELATED WORK

The performance differences between concurrent aggregation using a shared hash table and a partitioned approach using local aggregation tables have been benchmarked and studied extensively over the years [4, 19, 28]. None of these cited works propose leveraging indirection to reduce hash table operations to a lookup- and insert-only workload; instead, they perform updates within the hash table itself, which requires significant concurrency control. As a result, the consensus in the literature is that fully concurrent aggregation is not feasible due to contention costs in the presence of skew. We identified one work [26] that also leverages a lookup- and insert-only workload on a shared hash table by deferring update contention, but it does so in the context of FPGAs and uses specialized hardware. Unlike this work, [26] performs updates in the hash table but prevents simultaneous access using a hardware cache to synchronize all update requests on the same key. This cache is crucial to their approach and cannot be adapted to general-purpose CPUs, but their findings align with ours: by providing a fast, contention-free path for lookups and preventing update contention in the hash table, far greater resistance to skew and high performance in low cardinality workloads can be achieved. A large amount of research from outside the database domain on concurrent hash table designs is also relevant to our work.
In Section 3.1 we tested state-of-the-art hash table designs from, or related to, those described in prior work [11, 18, 20, 23]. However, as previously noted, even those papers that consider the use of concurrent hash tables for aggregation do not test them on the specialized lookup and insert workload that underpins the ticketing step of our aggregation model, instead focusing on general-purpose workloads. While the body of work on concurrent hash tables is informative, each design must be critically reevaluated in the context of our specialized use case to determine its performance for aggregations.

In addition to the use of shared hash tables, many other methods for aggregation have been proposed. The methods we most directly compare against are partitioning methods, which have been successfully integrated into many real-world systems [14-16]. Various hybrid approaches that leverage local aggregation for heavy hitters and a global shared table for other values have been proposed in order to balance performance and memory concerns [4, 7]. Ideas from these works may be beneficial for resolving the challenges we found with fully concurrent aggregation at high thread counts and cardinalities. Sort-based aggregation has also been extensively evaluated [27], but is less relevant to our work given that it does not fit well into a morsel-driven model: sorting is generally a pipeline breaker (although some variations, like [21], propose methods that can act on smaller runs, albeit in a more limited capacity). Finally, a very closely related operation to hash aggregation is the hash join, which has seen a similar discourse about the merits of partitioning versus concurrent approaches. However, the nature of the hash join workload is significantly different: it uses hash tables as a multi-map (the same key maps to multiple rows) rather than as a map from key to partial aggregate.
We leverage the fact that partial aggregates are densely stored in an underlying vector, an assumption that does not hold for hash joins. Furthermore, a large concern for hash joins is checking for the existence of a key within the probe-side relation, which is not relevant in the aggregation case. These unique performance characteristics have led to highly efficient concurrent hash table designs specialized for hash joins [1, 2, 16]. This body of work shares a similar overarching takeaway, namely that specialized hash tables are crucial to achieving good performance in database-specific operators, but its methods are generally incompatible with the needs of fully concurrent aggregation.
Efficiently computing group aggregations (i.e., GROUP BY) on modern many-core architectures is critical for analytic database systems. Today's engines predominantly use a partitioned approach to group aggregation, in which an incoming data stream is partitioned by key values so that every row for a particular key is sent to the same thread. In this paper, we revisit a simpler strategy: a fully concurrent group aggregation technique using a shared global hash table. While approaches using general-purpose concurrent hash tables have generally been found to perform worse than partitioning-based approaches, we argue that the key ingredient is customizing the concurrent hash table for the specific task of group aggregation. Through extensive experiments on synthetic workloads (varying key cardinality, skew, and thread counts), we demonstrate that a purpose-built concurrent hash table can match or surpass partitioning-based techniques. We also analyze the operational characteristics of both techniques, including resizing costs and memory pressure. In the process, we derive practical guidelines for database implementers. Overall, our analysis indicates that fully concurrent group aggregation is a viable alternative to partitioning.
# 1 Introduction

High-dimensional nearest neighbor search is a basic building block in many areas, including image and video processing [16, 21], information retrieval [6, 46], and algorithm design [10, 23]. It is central to modern machine learning, underlying document and media search based on learned embeddings [9, 35, 43], and most retrieval augmented generation (RAG) systems for large language models [32, 41]. Nearest neighbor search also plays a role in hard-negative mining [55], accelerating transformer architectures [24], and other applications across machine learning [52].

Formally, in the $k$-nearest neighbor search problem, we are given a set of data points, often machine-learned vector embeddings of documents, images, or other media [11, 13]. We are also given a distance measure, such as the Euclidean distance, or something more exotic like the Chamfer distance [20]. The goal is to pre-process the dataset into a search data structure so that, given any query point $q$, we can efficiently find the $k$ data points closest to $q$ with respect to the distance measure. Doing so exactly is notoriously difficult in high dimensions, so applications typically rely on approximate nearest neighbor (ANN) methods that attempt to find most of the $k$ closest neighbors.

Many different approaches have been proposed for ANN search. Popular methods include locality sensitive hashing (LSH) [2, 3, 18, 36], inverted file indices (IVF) based on product quantization or clustering [21, 22, 44], and more [8, 25, 29]. In this work, we focus on graph-based ANN methods, which have been extensively studied and are commonly used thanks to strong empirical performance (graph-based methods have topped leaderboards at a number of recent ANN competitions [49, 50]).

Graph-Based Nearest Neighbor Search. The high-level idea of graph-based methods is simple. We construct an index by building a directed graph, $G$, with one node for each data point.
Given a query, $q$ , we search the index by starting at an arbitrary node and performing a greedy graph traversal, exploring neighbors in that graph that are closest to $q$ . A specific choice of graph construction and traversal method comprises a particular “graph-based” nearest neighbor search method. Many algorithms for graph construction have been proposed, including the Hierarchical Navigable Small World (HNSW) approach [38], Vamana/DiskANN [28, 53], Navigating Spreading-out Graphs (NSG) [15], and others [40, 54] All of these methods construct a graph $G$ that, for a given node $i$ , contain out-edges to nearest neighbors of $i$ , as well as “long range” connections to nodes far away from $i$ . Such constructions are loosely motivated by the concept of navigability, which dates back to pioneering work on local graph routing by Kleinberg [26, 27] and Milgram [42]. We provide a formal definition of navigability in Section 2, but the property roughly guarantees that there is a path from any node $i$ in $G$ to any node $j$ so that distance to $j$ strictly decreases along the path. While graph constructions vary greatly, the choice of greedy traversal method used in graph-based nearest neighbor search has seen less innovation. A generalization of greedy search called beam search is almost ubiquitous. Parameterized by a beam width $b \geq k$ , beam search maintains a list of $b$ candidate nearest neighbors and computes the query’s distance to each of those candidates’ neighbors, updating them until it fails to find any better candidates. See Section 3 for a formal description. Our Contributions. While graph-based ANN methods have seen significant practical success, their performance is poorly understood from a theoretical perspective. This is in contrast to methods like locality sensitive hashing, for which it is possible to prove strong worst-case approximation guarantees [2, 4]. 
A lack of theory makes it difficult to iterate on and improve existing graph-based methods, and to understand their limitations. We aim to address this theory-practice gap and, in turn, introduce principled improvements to existing methods. In particular, we re-examine the ubiquitous beam search method, showing that it can be viewed as a specific stopping rule for a much more general search procedure. This perspective motivates a new algorithm called Adaptive Beam Search, which stops searching for candidates based on a distance-based criterion instead of a fixed beam width, $b$. Our main theoretical result (Theorem 1) proves that Adaptive Beam Search returns provable approximate nearest neighbors whenever the search graph $G$ is navigable. To the best of our knowledge, this result is the first to theoretically connect the performance of greedy search (specifically, beam search) to the property of navigability.

Moreover, our theoretical results translate into practical performance. We perform an extensive experimental evaluation of Adaptive Beam Search, comparing it to fixed-width beam search over a wide range of data sets, graph constructions, recall values, and target numbers of nearest neighbors. The method universally outperforms classic beam search, typically providing a 10-50% reduction in the number of distance computations required for a given level of recall. Moreover, Adaptive Beam Search can be implemented with only minor code changes to existing graph-based libraries. We thus hope that, beyond its theoretical relevance, the method will have practical impact.

Roadmap. The remainder of this paper is organized as follows. In Section 2, we discuss technical preliminaries and related work. In Section 3, we introduce our Adaptive Beam Search method and its motivating ideas. In Section 4, we prove that Adaptive Beam Search solves approximate nearest neighbor search on navigable graphs (Theorem 1).
In Section 5, we evaluate Adaptive Beam Search on sparse navigable graphs and common heuristic graph constructions, including HNSW and Vamana.

# 2 Background and Related Work

We start by defining notation used throughout. Our goal in this paper is to find nearest neighbors in a metric space $\mathcal{X}$ equipped with a distance function $d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}^+$.1 We are given a database of $n$ items in $\mathcal{X}$, which we label $\{1, \ldots, n\}$. We want to find the nearest $k \leq n$ items to a given query $q \in \mathcal{X}$. E.g., for $k = 1$, the goal is to find $\operatorname{argmin}_{j \in \{1, \ldots, n\}} d(q, j)$. To avoid corner cases, we assume items in the database are unique, i.e., $d(i, j) > 0$ for all $i, j \in \{1, \ldots, n\}, i \neq j$. In practice, the $n$ database items and the query $q$ are usually associated with vectors (e.g., machine-learned embeddings) $\mathbf{x}_1, \ldots, \mathbf{x}_n$ and $\mathbf{x}_q \in \mathbb{R}^m$. The distance function $d(i, j)$ is chosen to be some function of these vectors, e.g., the Euclidean distance, $d(i, j) = \|\mathbf{x}_i - \mathbf{x}_j\|_2$.

Graph Navigability. Our theoretical guarantees assume use of a navigable search graph over $n$ nodes corresponding to our $n$ database items. While the term “navigable” is sometimes used informally in the literature, we use the following precise definition. Consider a directed graph $G = (V, E)$, with $V = \{1, \ldots, n\}$. For a node $x$, let $\mathcal{N}_G(x)$ denote its set of out-neighbors. Define:

Definition 1 (Navigable Graph).
A directed graph $G$ is navigable under distance function $d$ if for any nodes $x, y \in \{1, \ldots, n\}$ with $d(x, y) > 0$, there is some $z \in \mathcal{N}_G(x)$ with $d(z, y) < d(x, y)$.

Navigability ensures that, for any starting node $s$ and target node $t$, a standard greedy search, where we always move to the neighbor of the current node closest to $t$, always converges to $t$. When all distances between items in $\{1, \ldots, n\}$ are unique (this can be ensured by simply tie-breaking based on node id), it was recently shown that any data set has an efficiently computable navigable graph with average degree $O(\sqrt{n \log n})$ for any distance metric [12]. While the above bound is nearly optimal for worst-case data sets, much sparser navigable graphs often exist. For the Euclidean distance in $m$ dimensions, Arya and Mount construct navigable graphs with degree $2^{O(m)}$ [5]. For general metrics, Indyk and Xu construct navigable graphs with degree $2^{O(m')} \log \Delta$, where $m'$ is the doubling dimension of the data under $d$ and $\Delta = \max_{i,j} d(i, j) / \min_{i,j} d(i, j)$ is the dynamic range [19].

Why do we focus on navigability? Navigability has become a standard notion of “quality” for graphs used in nearest neighbor search. Indeed, the term lends its name to popular graph-based search methods such as the Navigable Small World (NSW) [37] and Hierarchical Navigable Small World (HNSW) [38] methods. Neither of these methods constructs graphs that are provably navigable, although they produce graphs that should be approximately navigable in practical settings. Surprisingly, however, to the best of our knowledge, no prior work formally links the accuracy of graph-based search to this intuitive notion of graph quality.
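Definition 1 is easy to verify directly on small examples. The brute-force Python sketch below (our own naming) checks every ordered pair of nodes, so it is only practical for tiny graphs, but it makes the quantifier structure of the definition explicit:

```python
def is_navigable(out_neighbors, d, nodes):
    """Brute-force check of navigability: for every ordered pair (x, y)
    with d(x, y) > 0, some out-neighbor z of x must satisfy
    d(z, y) < d(x, y). Runs in O(n^2 * max degree) time."""
    for x in nodes:
        for y in nodes:
            if d(x, y) <= 0:
                continue  # skip y = x (and any zero-distance pair)
            if not any(d(z, y) < d(x, y) for z in out_neighbors[x]):
                return False  # greedy search starting at x can get stuck short of y
    return True
```

For example, points on a line with edges to adjacent neighbors form a navigable graph under absolute-difference distance, while removing any such edge breaks the property.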
As discussed, a major goal here is to address this theory-practice gap, and to use the resulting theory to propose new practical algorithms. Related to our approach is a recent paper by Indyk and Xu [19], which proves accuracy guarantees for standard beam search under the assumption that the search graph is “$\alpha$-shortcut reachable”, a strictly stronger criterion than navigability. A graph is $\alpha$-shortcut reachable if, for all $x, y \in \{1, \ldots, n\}$ with $d(x, y) > 0$, there is some $z \in \mathcal{N}_G(x)$ with $\alpha \cdot d(z, y) < d(x, y)$, for some parameter $\alpha \geq 1$. Indeed, navigability exactly corresponds to this definition with $\alpha = 1$. However, the result from [19] only yields a bounded approximation factor for $\alpha > 1$ (concretely, they obtain approximation factor $\frac{\alpha + 1}{\alpha - 1}$). Thus, obtaining theoretical results for graphs that are simply navigable remains an open question. One reason this question is of practical importance is that navigable graphs can in general be much sparser than $\alpha$-shortcut reachable graphs. While it is possible to construct a navigable graph with average degree $O(\sqrt{n \log n})$ for any database under any metric (under the mild assumption of unique distances) [12], it is not hard to observe that for any fixed $\alpha > 1$, even a random point set in $O(\log n)$-dimensional Euclidean space does not admit any sparse $\alpha$-shortcut reachable graph (i.e., with average degree $< n - 1$) with high probability (see Appendix A.1 for details).

# 2.1 Additional Related Work

Beyond [19], a few other papers have studied graph-based ANN search from a theoretical perspective. E.g., [30] and [48] study time-space tradeoffs akin to those available for LSH methods, but only for random data.
More significant work has focused on practical algorithmic improvements. E.g., work has studied parallel implementations [40], methods for dynamic datasets [51, 56], distance approximations [57], graph pruning [58], filtered search [17], and search with coverage criteria [1]. We are not aware of work that, like ours, studies significant alternatives to beam width-based termination in beam search, although certain modifications, such as smarter initialization strategies and early stopping criteria, have been studied [33, 39, 59].

# 3 Adaptive Beam Search

Beam search is the de facto search method used for graph-based ANN [38, 53]. We start with a key observation: beam search can be reframed by decoupling the method into two key components: 1) a search order, determined by a method for traversing the search graph to find candidate nearest neighbors, and 2) a stopping criterion, which governs when the algorithm stops considering candidates. Our Adaptive Beam Search method modifies the standard beam search algorithm only by changing the stopping criterion; the search order remains the same. Surprisingly, even this simple change leads to an algorithm that both enjoys strong theoretical approximation guarantees when the underlying graph is navigable (see Theorem 1) and outperforms standard beam search empirically. We suspect the “decoupled view” of beam search is not novel, but we have not seen it presented explicitly. So, in the next section, we detail this reframing and show how a change in stopping criterion yields other search algorithms, like simple greedy search and Adaptive Beam Search. We intuitively motivate the stopping criterion used in Adaptive Beam Search before formally analyzing the method in Section 4.

# 3.1 Decoupling Beam Search as Ordered Traversal With a Stopping Condition

To be concrete, we provide pseudocode for a generic version of beam search in Algorithm 1. Implementation details are deferred to Appendix B.1.
Importantly, such details do not affect the number of distance computations performed by the algorithm, i.e., how many times we evaluate $d(q, i)$ for a query point, $q$, and candidate nearest neighbor, $i$. Distance computations typically dominate the cost of search in practice and, indeed, for the stopping criteria considered in this paper, all other operations can be implemented in time nearly linear in the number of such computations.

# Algorithm 1 Generalized Beam Search

Input: Search graph $G$ over nodes $\{1, \ldots, n\}$, starting node $s$, distance function $d$, query $q$, target number of nearest neighbors $k$.
Output: Set of $k$ nodes $B \subset \{1, \ldots, n\}$, where each $x \in B$ is ideally close to $q$ with respect to $d$.

1: Initialize min-priority queues $\mathcal{C}$ and $\mathcal{D}$. ▷ Elements are nodes, priorities are distances to $q$. $\mathcal{D}$ contains all discovered nodes. $\mathcal{C}$ contains discovered nodes that are not yet expanded.
2: Insert $(s, d(q, s))$ into $\mathcal{C}$ and $\mathcal{D}$.
3: while $\mathcal{C}$ is not empty do
4: $\quad (x, d(q, x)) \leftarrow \mathrm{extractMin}(\mathcal{C})$. ▷ Pop min. distance node.
5: $\quad$ if $x$ satisfies [termination condition] then
6: $\quad\quad$ break
7: $\quad$ For all $y \in \mathcal{N}_G(x)$, if $y$ is not in $\mathcal{D}$, insert $(y, d(q, y))$ into $\mathcal{C}$ and $\mathcal{D}$. ▷ Expand node $x$.
8: Obtain $B$ by running extractMin $k$ times on $\mathcal{D}$, which returns the $k$ elements with the smallest distances from the query, $q$.

Algorithm 1 maintains a queue of “discovered nodes” $\mathcal{D}$ whose distances to $q$ have been computed. It repeatedly “expands” the nearest discovered (and not previously expanded) node to $q$ by adding its neighbors to the queue (Line 7). It does so until this nearest node triggers the termination condition in Line 5.
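A compact Python rendering of this generalized traversal is below (a sketch with our own naming and simplifications; `should_stop` plays the role of the bracketed [termination condition], receiving the sorted discovered distances and the current node's distance, and the greedy rule is implemented as "k discovered points strictly closer than the current node"):

```python
import heapq

def generalized_beam_search(out_neighbors, d, q, start, k, should_stop):
    """Sketch of generalized beam search: repeatedly expand the closest
    discovered-but-unexpanded node until should_stop fires, then return
    the k closest discovered nodes."""
    dq = d(q, start)
    frontier = [(dq, start)]   # C: discovered, not yet expanded (min-heap)
    discovered = {start: dq}   # D: all discovered nodes and their distances
    while frontier:
        dx, x = heapq.heappop(frontier)  # pop min-distance node
        if should_stop(sorted(discovered.values()), dx):
            break
        for y in out_neighbors[x]:       # expand node x
            if y not in discovered:
                dy = d(q, y)
                discovered[y] = dy
                heapq.heappush(frontier, (dy, y))
    return sorted(discovered, key=discovered.get)[:k]

def greedy_condition(k):
    """Classic greedy stopping rule: k discovered points strictly closer than x."""
    return lambda dists, dx: sum(dv < dx for dv in dists) >= k
```

Other search variants are obtained by swapping in a different stopping predicate; the traversal order is untouched.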
The choice of termination condition leads to various versions of greedy search, including beam search and our new distance-based Adaptive Beam Search method. In particular, we have:

Classic Greedy Search. Terminate if there are at least:

$$ k \text{ items } j_1, \ldots, j_k \in \mathcal{D} \text{ with } d(q, j_i) \leq d(q, x). \quad (1) $$

Beam Search, with beam-width parameter $b \geq k$. Terminate if there are at least$^2$:

$$ b \text{ items } j_1, \ldots, j_b \in \mathcal{D} \text{ with } d(q, j_i) \leq d(q, x). \quad (2) $$

Adaptive Beam Search (our method), with parameter $\gamma$. Terminate if there are at least:

$$ k \text{ items } j_1, \ldots, j_k \in \mathcal{D} \text{ with } (1 + \gamma) \cdot d(q, j_i) \leq d(q, x). \quad (3) $$

The rule for greedy search is simple: we terminate if we have already found $k$ points closer to $q$ than the current candidate considered for expansion. For $k = 1$, it takes a moment to confirm that this criterion yields a method that is exactly equivalent to the more typical way of presenting greedy search: starting at $s$, move to the neighboring node nearest to $q$, terminating if there is no neighbor closer than the current node. For $k = 1$, greedy search is known to converge to the exact nearest neighbor if there is some $x \in \{1, \ldots, n\}$ for which $d(x, q) = 0$ and the search graph $G$ is navigable [12, 26, 42]. However, no comparable guarantees hold for $k > 1$ or when $q$'s nearest neighbor is not at distance 0, which is typical in practice. Moreover, greedy search performs poorly empirically, easily getting stuck in local minima and failing to find good approximate nearest neighbors.

# 3.2 Relaxing Greedy Search

The goal of beam search is to avoid such accuracy issues. It does so by relaxing the stopping criterion of greedy search: in particular, by (2), we only terminate if we have found $b \geq k$ nodes closer to the query $q$ than our current node $x$. When $b = k$, the algorithms are identical.
When $b > k$, greedy search explores a prefix of the nodes explored by beam search, which simply terminates the search at a later point. Beam search is thus guaranteed to obtain a more accurate result than greedy search, at the cost of an increased number of distance computations. With the above view in mind, many other relaxations of the greedy search termination condition given in (1) become apparent. In (3), we introduce a slack parameter $\gamma \geq 0$ and only terminate if $x$ is further from $q$ than the $k^{\mathrm{th}}$ best discovered point by a factor of $1 + \gamma$. Setting $\gamma = 0$ recovers greedy search, and larger values of $\gamma$ cause the search process to terminate later, yielding a better result, but at the cost of a higher runtime. This simple idea yields our Adaptive Beam Search procedure. While intuitively similar to beam search, a key difference of this distance-based criterion is that it naturally adapts to the query difficulty. For simplicity, consider the case of $k = 1$. Greedy search tends to perform worse when there are many “false nearest neighbors” in a dataset. For example, suppose there is just one best point $x^* \in \{1, \ldots, n\}$ with $d(q, x^*) = 1$, but $m$ other points $y_1, \ldots, y_m$ with $d(q, y_i) = 1.01$. Unless we choose beam width $b = \Omega(m)$, it is likely that more than $b$ points at distance 1.01 will get added to $\mathcal{D}$, causing the search to terminate before finding $x^*$. In contrast, as long as $\gamma > 0.01$, Adaptive Beam Search will continue to search through all of the $y_i$ points before terminating. Conversely, Adaptive Beam Search will more quickly terminate the search if it becomes apparent that all remaining candidates are too far away to be useful in finding additional nearest neighbors.
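The false-nearest-neighbor scenario can be checked numerically. The sketch below uses hypothetical distances mirroring the example (with $k = 1$, beam width $b = 10$, and $m = 20$ false neighbors discovered so far):

```python
# Search state for k = 1: twenty "false nearest neighbors" at distance 1.01
# have been discovered, but not the true nearest neighbor at distance 1.0.
# The next candidate for expansion is another point at distance 1.01.
discovered = [1.01] * 20
dx = 1.01

# Beam search (condition (2)) with b = 10 terminates here: at least 10
# discovered points are within distance dx of the query.
beam_terminates = sum(d <= dx for d in discovered) >= 10

# Adaptive Beam Search (condition (3)) with gamma = 0.02 keeps searching,
# since no discovered point is closer by a factor of 1.02.
gamma = 0.02
adaptive_terminates = sum((1 + gamma) * d <= dx for d in discovered) >= 1
```

Beam search stops among the distance-1.01 points without ever reaching $x^*$, while the adaptive criterion continues the search.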
Indeed, a criterion identical to Adaptive Beam Search has been suggested as an “early stopping” heuristic in work on practical graph-based nearest neighbor search [40, 39]. The intuition that Adaptive Beam Search adapts to query hardness shows clearly in our experiments: as seen in Figure 1, the distribution of distance computations used by Adaptive Beam Search varies more widely, as fewer computations are used for “easier” queries. As a result, across a variety of datasets and search graphs, Adaptive Beam Search consistently outperforms classic beam search in terms of total distance computations required to achieve a certain level of recall for a given query set.

Figure 1: Histograms for the number of distance computations performed by standard beam search and our Adaptive Beam Search method when answering 10,000 queries for various datasets and search graphs (see Section 5 for details). For a fair comparison, the $b$ parameter in beam search and $\gamma$ parameter in Adaptive Beam Search were tuned to achieve a fixed level of recall for the batch of queries. The histograms for Adaptive Beam Search are consistently flatter, confirming the intuition that it better adapts to query difficulty, leading to fewer distance computations on average.

# 4 Theoretical Analysis

We support the improved empirical performance of Adaptive Beam Search with strong theoretical guarantees. Formally, we prove that the method is guaranteed to solve the approximate nearest neighbor search problem, assuming that the search graph $G$ is navigable (Definition 1):

Theorem 1.
Suppose $d$ is a metric on $\mathcal{X}$ and $G$ is navigable under $d$. Then for any query $q \in \mathcal{X}$, if Adaptive Beam Search – i.e., Algorithm 1 with stopping criterion (3) – is run with parameter $0 < \gamma \leq 2$, it is guaranteed to return a set of $k$ points $B$ such that:

$$ \text{for all } v \in \{1, \ldots, n\} \setminus B, \qquad d(q, v) \geq \frac{\gamma}{2} \max_{j \in B} d(q, j). \tag{4} $$

Notably, setting $\gamma = 2$, we ensure that all points not returned by the algorithm are at least as far from $q$ as every point in $B$. Thus, for $\gamma = 2$, Adaptive Beam Search on a navigable graph is guaranteed to exactly solve the $k$-nearest neighbor problem. For smaller $\gamma$, the method obtains an approximate solution: no point in $B$ can be further from $q$ than any point not returned by more than a $2/\gamma$ factor. Theorem 1 thus establishes a trade-off between runtime and accuracy: smaller values of $\gamma$ lead to a strictly faster algorithm (since termination occurs earlier) but a worse approximation guarantee. While our result falls short of proving worst-case runtime guarantees, to the best of our knowledge, it is the first result linking the accuracy of a natural greedy search method to the notion of graph navigability. Importantly, we note that, unlike for our Adaptive Beam Search, a result like Theorem 1 cannot be proven for standard beam search. In particular, in Appendix A.2 we prove:

Claim 2. Standard beam search with beam width $b \leq n - 3$ fails to approximately solve the nearest neighbor search problem on navigable graphs for any finite approximation factor.
Concretely, for any finite $C$, we can construct a set of $n$ points in 2-dimensional Euclidean space and a navigable graph $G$ such that, for some query point $q$, beam search run on $G$ with beam width $b \leq n - 3$ returns $\tilde{x}$ with $d(q, \tilde{x}) \geq C \cdot \min_{x \in \{1, \ldots, n\}} d(q, x)$.

Proof of Theorem 1. Our proof will use the terms “discovered” and “expanded” to identify nodes in $\{1, \ldots, n\}$. We consider a node $j$ “discovered” if $j \in \mathcal{D}$ when Algorithm 1 terminates; i.e., we have evaluated the distance between $j$ and $q$. We consider a node $j$ “expanded” if $j$ is discovered and, at some point, was both popped off $\mathcal{C}$ on Line 4 and did not cause the termination condition on Line 5 to be triggered. This ensures that all of its out-neighbors are discovered (see Line 7). Note that all discovered nodes are added to both $\mathcal{D}$ and $\mathcal{C}$. Formally, if the algorithm terminates because the condition is true for some $x_{term}$, then $\mathcal{C} \cup \{x_{term}\}$ is the set of discovered but not yet expanded nodes, so the set of expanded nodes is $\mathcal{D} \setminus (\mathcal{C} \cup \{x_{term}\})$. Let $B$ be the set of nodes returned upon termination and let $\tilde{x} = \operatorname{argmax}_{x \in B} d(q, x)$ be the furthest point from $q$ in that set. Since $G$ is navigable, and since we assume data points are unique, there must be a path in $G$ from any node $x$ to any other node $y$ (consisting of nodes that get monotonically closer to $y$); i.e., $G$ is strongly connected.
Thus, if Algorithm 1 terminates because an empty queue $\mathcal{C}$ causes the while loop to terminate, then all nodes in the graph must have been discovered, and so $B$ contains the exact $k$ nearest neighbors to $q$, and the theorem holds immediately. It therefore suffices to consider the case when termination occurs because some node $x_{term}$ causes the termination condition in Line 5 to evaluate to true and the while loop to break early. We first claim:

Claim 3. When Algorithm 1 terminates, $\tilde{x}$ is guaranteed to have been expanded.

To see that this claim holds, note that, by termination condition (3), it must be that $d(q, x_{term}) \geq (1 + \gamma) d(q, \tilde{x})$ and thus $d(q, x_{term}) > d(q, \tilde{x})$. I.e., $\tilde{x}$ is closer to $q$ than $x_{term}$. Thus, $\tilde{x}$ must have already been popped off $\mathcal{C}$ and expanded before $x_{term}$ was popped off $\mathcal{C}$.

With Claim 3 in place, we can proceed to the main argument. Our goal is to prove that for all $z \not\in B$,

$$ d(q, z) \geq \frac{\gamma}{2} d(q, \tilde{x}). $$

It suffices to prove the claim for all undiscovered nodes $z \not\in \mathcal{D}$, since if $z \in \mathcal{D}$ and $d(q, z) < \frac{\gamma}{2} d(q, \tilde{x})$, then $z$ is closer to $q$ than $\tilde{x}$ and would have clearly been included in $B$ (recall that $\gamma \leq 2$). Now, suppose by way of contradiction that (4) is not true, i.e., that there is some undiscovered node $z \not\in \mathcal{D}$ with $d(q, z) < \frac{\gamma}{2} d(q, \tilde{x})$. We first observe that such a $z$ cannot be an out-neighbor of $\tilde{x}$: since $\tilde{x}$ is expanded by Claim 3, all of its neighbors are discovered, i.e., all are in $\mathcal{D}$.
Since $G$ is navigable and all database items are unique, there must be some directed path $\mathcal{P}$ from $\tilde{x}$ to $z$ consisting of points that get monotonically closer to $z$. Moreover, since $z \not\in \mathcal{N}_G(\tilde{x})$, $\mathcal{P}$ must have length $\ell \geq 2$. Denote the elements of $\mathcal{P}$ by $\mathcal{P} = (\tilde{x} = p_0, p_1, \ldots, p_\ell = z)$. We have, for all $1 \leq i \leq \ell$, $d(z, p_{i-1}) > d(z, p_i)$. We make the following claim:

Claim 4. For any $z \not\in \mathcal{D}$, there exists some node $w \in \{p_1, \ldots, p_{\ell-1}\}$ along the path from $\tilde{x}$ to $z$ that has been discovered but not expanded.

Proof. First observe that $p_1$ must be discovered since, by Claim 3, $\tilde{x}$ was expanded and $p_1$ is an out-neighbor of $\tilde{x}$. Furthermore, if $p_{i-1}$ is discovered and expanded, then $p_i$ must be discovered. So, inductively, we see that there are two possible cases: either there is some $i < \ell$ for which $p_i$ is discovered but not expanded (as desired), or $p_i$ is discovered and expanded for all $i < \ell$. However, the second case is impossible, since $z$ is not in $\mathcal{D}$ and it would be if $p_{\ell-1}$ were expanded. We conclude that there is some $w \in \{p_1, \ldots, p_{\ell-1}\}$ that is discovered but not expanded. □

Consider the unexpanded node $w$ guaranteed to exist by Claim 4. When the algorithm terminates, it must be that:

$$ d(q, w) \geq (1 + \gamma) d(q, \tilde{x}). \tag{5} $$

If $w = x_{term}$, this is trivially true as a consequence of the termination rule (3). Otherwise, if (5) were not true, then $w$ would be closer to $q$ than $x_{term}$, and it would have been popped off $\mathcal{C}$ before $x_{term}$ and expanded.
With (5) in place, we are ready to obtain our contradiction. By the triangle inequality (since $d$ is a metric) and our supposition that $d(q, z) < \frac{\gamma}{2} d(q, \tilde{x})$, we have:

$$ d(\tilde{x}, z) \leq d(\tilde{x}, q) + d(q, z) < \left(1 + \frac{\gamma}{2}\right) d(q, \tilde{x}). $$

Combined with another application of the triangle inequality and the fact that $d(w, z) < d(\tilde{x}, z)$, we have

$$ d(w, q) \leq d(w, z) + d(z, q) < d(\tilde{x}, z) + d(z, q) < \left(1 + \frac{\gamma}{2}\right) d(q, \tilde{x}) + \frac{\gamma}{2} d(q, \tilde{x}) = (1 + \gamma) d(q, \tilde{x}). $$

However, this contradicts (5). Thus, there cannot exist any $z \not\in \mathcal{D}$ with $d(q, z) < \frac{\gamma}{2} d(q, \tilde{x})$. I.e., (4) holds, proving Theorem 1. For a geometric illustration of the above proof, see Fig. 2. □

Figure 2: Visualization of the proof of Theorem 1. We let $\tilde{d}$ denote $d(q, \tilde{x})$. Our goal is to show that there is no undiscovered $z$ in a ball of radius $\frac{\gamma}{2}\tilde{d}$ around $q$, which is shown with a dotted line. If there were, we would obtain a contradiction. In particular, if $G$ is navigable, we argued that there must be some unexpanded node $w$ on a path of decreasing distance from $\tilde{x}$ to $z$. Since $w$ is closer to $z$ than $\tilde{x}$, it must lie in a ball of radius $\left(1 + \frac{\gamma}{2}\right)\tilde{d}$ around $z$, which is contained in a ball of radius $(1 + \gamma)\tilde{d}$ around $q$. However, by (5), no unexpanded node can lie in that ball.
# 5 Experiments

We now experimentally compare our Adaptive Beam Search method with standard beam search, demonstrating improved tradeoffs between efficiency and accuracy in a variety of settings.

# 5.1 Experimental Setup

Beam Search Algorithms. We primarily compare standard beam search (termination condition (2)) with Adaptive Beam Search (termination condition (3)). To implement Algorithm 1 with these termination conditions, we follow the pseudocode in Appendix B.1. For some settings, we test a third method called Adaptive Beam Search V2, which terminates on node $x$ if

$$ d(q, x) \geq d_1 + \gamma \cdot d_k, \tag{6} $$

where $d_1$ and $d_k$ are the distances from the query $q$ to the closest and $k^{\mathrm{th}}$ closest discovered nodes, respectively. Compared to (3), (6) replaces the threshold $(1 + \gamma) \cdot d_k$ with the smaller threshold $d_1 + \gamma \cdot d_k$, leading to more aggressive stopping. Surprisingly, while (6) is not a relaxation of greedy search (when $\gamma < 1$, it may stop earlier than greedy search), one can check that Theorem 1 still holds under this condition. This motivates its inclusion in our experiments. However, we observe that Adaptive Beam Search V2 generally underperforms Adaptive Beam Search. We leave open developing other stopping conditions that satisfy bounds similar to Theorem 1 while obtaining strong empirical performance like Adaptive Beam Search – see Appendix C.4 for some initial explorations.

Comparison Across Recall Values. The algorithms discussed above can all trade off accuracy for runtime by adjusting the beam width, $b$, or the parameter $\gamma$. We thus vary these parameters to obtain a range of recall values, i.e., the average fraction of the $k$ nearest neighbors found over all queries on a given dataset. Recall is a standard metric for evaluating ANN methods [38, 53].
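Concretely, per-query recall, and the relationship between the thresholds in (3) and (6), can be sketched as follows (the helper name and example values are ours, not from any benchmark suite):

```python
def recall(returned, true_nn):
    """Fraction of the true k nearest neighbors that the search returned."""
    return len(set(returned) & set(true_nn)) / len(true_nn)

# e.g., returning 8 of the true 10 nearest neighbors gives recall 0.8
example = recall(returned=[0, 1, 2, 3, 4, 5, 6, 7, 20, 21],
                 true_nn=list(range(10)))

# Threshold check for V2: since d1 <= dk, we always have
# d1 + gamma*dk <= (1 + gamma)*dk, so condition (6) stops at least as
# aggressively as condition (3).
d1, dk, gamma = 0.4, 1.0, 0.25
assert d1 + gamma * dk <= (1 + gamma) * dk
```

Averaging this per-query recall over the query set yields the values plotted on the accuracy axis in our figures.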
We compare the methods by plotting the average number of distance computations performed per query to achieve a certain recall value. Since all three methods have essentially identical implementations, running time scales very similarly with the number of distance computations. See Appendix B.1 for more details. Datasets and Graph Constructions. We evaluate our Adaptive Beam Search on six standard benchmark datasets for nearest neighbor search, which are listed in Table 1. All datasets consist of real-valued vectors in varying dimensions, and we use Euclidean distance for search. We perform evaluations using a variety of popular heuristic “approximately navigable” graphs, along with truly navigable graphs for which the bound of Theorem 1 holds. Specifically, for the heuristic graphs, we use four standard methods: HNSW [38], Vamana [53], NSG [15], and EFANNA [14]. Details on how parameters are set for these algorithms are in Appendix B.3. To construct the truly navigable graphs, we use the approach of [12] to create an initial navigable graph with average degree $O ( { \sqrt { n \log n } } )$ , and then further prune this graph while maintaining navigability. See Appendix B.2 for details. Pruning reduces the memory footprint of the graph, and results in levels of sparsity closer to that of the heuristic constructions. However, since it is computationally expensive, we only run our navigable graph experiments for random subsets of three of the datasets, with subsample sizes listed in Table 1. We believe that our subsample sizes are large enough to be representative. However, it would be interesting to improve the running time of constructing very sparse and truly navigable graphs, so that such graphs can be evaluated for larger datasets. Table 1: Datasets used for evaluation. For further details, refer to Appendix B.3. 
Figure 3: Navigable Graphs: Comparison of generalized beam search termination conditions on navigable graphs across three datasets: SIFT1M, DEEP96, and MNIST (columns), with $k = 1$ and $k = 10$ (rows). Adaptive Beam Search consistently outperforms standard beam search, while the alternative Adaptive Beam Search V2 underperforms both by a significant margin. Note that for $k = 1$, Adaptive Beam Search and Adaptive Beam Search V2 are identical, so only one line is shown.

# 5.2 Results

We now discuss our experimental results on both truly navigable graphs and the commonly used heuristic graphs discussed above.

Results for Navigable Graphs. Results for navigable graphs are shown in Figure 3 for SIFT, DEEP256, and MNIST for $k = 1$ and 10. Results for $k = 100$ are included in Appendix C.1. The y-axis shows recall, while the x-axis shows the average number of distance calculations per query. Adaptive Beam Search always performs at least on par with classic beam search, and often significantly better, with up to a $30\%$ decrease in distance computations for a given recall. Adaptive Beam Search V2 performs worse, so it is not evaluated in subsequent experiments. The underperformance of Adaptive Beam Search V2 is further explored in Appendix C.3. In a nutshell, when $d_1 \ll d_k$, for small $\gamma$ we might stop when $d(q, x) < d_k$, which means we do not even explore all the neighbors of our current top-$k$ results. If we increase $\gamma$ to avoid this, we terminate too late when $d_1$ is close to $d_k$.

Results for Heuristic Graphs. Our results for heuristic graphs with $k = 10$ across three datasets are shown in Figure 4. For additional results covering the remaining datasets and values of $k$, see Appendix C.2. In all cases, we see that Adaptive Beam Search outperforms standard beam search, sometimes marginally, but sometimes by more than a factor of 2, e.g., on MNIST.
The performance gains are robust to changing the graph construction, indicating that Adaptive Beam Search is a strong candidate for a drop-in replacement for standard beam search in graph-based ANN.

Adaptivity Across Queries. As discussed in Section 3.2, Adaptive Beam Search seems to outperform standard beam search because the distance-based stopping criterion is more “adaptive” to query difficulty. For hard queries with many approximate nearest neighbors, it tends to use more distance computations. However, the method terminates quickly on easy queries, when there are few points with $d(q, x) \leq (1 + \gamma) d_k$. This phenomenon is illustrated for a sample of settings in Figure 1.

# Acknowledgements

We would like to thank Ramon Li for contributions at an early stage of this work. Christopher Musco was partially supported by NSF Award 2106888.

Figure 4: Heuristic Graphs: Comparison of generalized beam search termination methods on heuristic graphs produced by NSG, Vamana, EFANNA, and HNSW (rows), for $k = 10$ with 3 datasets: SIFT1M, DEEP256, and MNIST (columns). Adaptive Beam Search consistently outperforms standard beam search across all cases, sometimes by a significant margin.

# References

[1] Piyush Anand, Piotr Indyk, Ravishankar Krishnaswamy, Sepideh Mahabadi, Vikas C. Raykar, Kirankumar Shiragur, and Haike Xu. Graph-based algorithms for diverse similarity search. arXiv:2502.13336, 2025.
[2] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun. ACM, 51(1):117–122, 2008.
[3] Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. In Advances in Neural Information Processing Systems 28 (NeurIPS), 2015.
[4] Alexandr Andoni, Piotr Indyk, Huy L. Nguyen, and Ilya Razenshteyn. Beyond locality-sensitive hashing. In Proceedings of the 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2014.
[5] Sunil Arya and David M. Mount. Approximate nearest neighbor queries in fixed dimensions. In Proceedings of the 4th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 1993.
[6] Martin Aumüller, Erik Bernhardsson, and Alexander Faithfull. ANN-Benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. Information Systems, 87, 2020. Data accessed at https://github.com/erikbern/ann-benchmarks.
[7] Artem Babenko and Victor Lempitsky. Efficient indexing of billion-scale datasets of deep descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2055–2063, 2016. Data accessed at https://github.com/erikbern/ann-benchmarks.
[8] Alina Beygelzimer, Sham Kakade, and John Langford. Cover trees for nearest neighbor. In Proceedings of the 23rd International Conference on Machine Learning (ICML), 2006.
[9] Sebastian Bruch. Foundations of Vector Retrieval. Springer, 2024.
[10] Moses Charikar, Michael Kapralov, Navid Nouri, and Paris Siminelakis. Kernel density estimation through density constrained near neighbor search. In Proceedings of the 63rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 172–183, 2020.
[11] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186, 2019.
[12] Haya Diwan, Jinrui Gou, Cameron Musco, Christopher Musco, and Torsten Suel. Navigable graphs for high-dimensional nearest neighbor search: Constructions and limits. In Advances in Neural Information Processing Systems 37 (NeurIPS), 2024.
[13] Thibault Formal, Benjamin Piwowarski, and Stéphane Clinchant. SPLADE: sparse lexical and expansion model for first stage ranking.
In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2021.
[14] Cong Fu and Deng Cai. EFANNA: An extremely fast approximate nearest neighbor search algorithm based on kNN graph. arXiv:1609.07228, 2016.
[15] Cong Fu, Chao Xiang, Changxu Wang, and Deng Cai. Fast approximate nearest neighbor search with the navigating spreading-out graph. Proceedings of the VLDB Endowment, 12(5):461–474, 2019. Data accessed at https://github.com/ZJULearning/nsg.
[16] Vincent Garcia, Eric Debreuve, and Michel Barlaud. Fast k nearest neighbor search using GPU. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008.
[17] Siddharth Gollapudi, Neel Karia, Varun Sivashankar, Ravishankar Krishnaswamy, Nikit Begwani, Swapnil Raz, Yiyong Lin, Yin Zhang, Neelam Mahapatro, Premkumar Srinivasan, Amit Singh, and Harsha Vardhan Simhadri. Filtered-DiskANN: Graph algorithms for approximate nearest neighbor search with filters. In Proceedings of the 32nd International World Wide Web Conference (WWW), pages 3406–3416, 2023.
[18] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing (STOC), 1998.
[19] Piotr Indyk and Haike Xu. Worst-case performance of popular approximate nearest neighbor search implementations: Guarantees and limitations. In Advances in Neural Information Processing Systems 36 (NeurIPS), 2023.
[20] Rajesh Jayaram, Laxman Dhulipala, Majid Hadian, Jason Lee, and Vahab Mirrokni. MUVERA: Multi-vector retrieval via fixed dimensional encoding. In Advances in Neural Information Processing Systems 37 (NeurIPS), 2024.
[21] Herve Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128, 2011. Data accessed at http://corpus-texmex.irisa.fr.
[22] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(03):535–547, 2021.
[23] Matti Karppa, Martin Aumüller, and Rasmus Pagh. DEANN: Speeding up kernel-density estimation using approximate nearest neighbor search. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 151, pages 3108–3137, 2022.
[24] Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In Proceedings of the 8th International Conference on Learning Representations (ICLR), 2020.
[25] Jon M. Kleinberg. Two algorithms for nearest-neighbor search in high dimensions. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing (STOC), 1997.
[26] Jon M. Kleinberg. Navigation in a small world. Nature, 406(6798):845–845, 2000.
[27] Jon M. Kleinberg. The small-world phenomenon: an algorithmic perspective. In Proceedings of the 32nd Annual ACM Symposium on Theory of Computing (STOC), 2000.
[28] Ravishankar Krishnaswamy, Magdalen Dobson Manohar, and Harsha Vardhan Simhadri. The DiskANN library: Graph-based indices for fast, fresh and filtered vector search. IEEE Data Eng. Bull., 48(3):20–42, 2024.
[29] Eyal Kushilevitz, Rafail Ostrovsky, and Yuval Rabani. Efficient search for approximate nearest neighbor in high dimensional spaces. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing (STOC), 1998.
[30] Thijs Laarhoven. Graph-based time-space trade-offs for approximate near neighbors. In Proceedings of the 34th Annual Symposium on Computational Geometry (SOCG), 2018.
[31] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Data accessed at https://github.com/erikbern/ann-benchmarks.
[32] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33 (NeurIPS), volume 33, pages 9459–9474, 2020.
[33] Conglong Li, Minjia Zhang, David G Andersen, and Yuxiong He. Improving approximate nearest neighbor search through learned adaptive early termination. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pages 2539–2554, 2020.
[34] Jinfeng Li, Xiao Yan, Jian Zhang, An Xu, James Cheng, Jie Liu, Kelvin K. W. Ng, and Ti-chung Cheng. A general and efficient querying method for learning to hash. In Proceedings of the 2018 ACM SIGMOD International Conference on Management of Data, pages 1333–1347, 2018. Data accessed at https://www.cse.cuhk.edu.hk/systems/hash/gqr/datasets.html.
[35] Chen Luo, Vihan Lakshman, Anshumali Shrivastava, Tianyu Cao, Sreyashi Nag, Rahul Goutam, Hanqing Lu, Yiwei Song, and Bing Yin. ROSE: Robust caches for amazon product search. https://www.amazon.science/publications/rose-robust-caches-for-amazon-product-search, 2022.
[36] Qin Lv, William Josephson, Zhe Wang, Moses Charikar, and Kai Li. Multi-probe LSH: efficient indexing for high-dimensional similarity search. In Proceedings of the 33rd International Conference on Very Large Data Bases, pages 950–961, 2007.
[37] Yury Malkov, Alexander Ponomarenko, Andrey Logvinov, and Vladimir Krylov. Approximate nearest neighbor algorithm based on navigable small world graphs. Information Systems, 45:61–68, 2014.
[38] Yury Malkov and D. A. Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4):824–836, 2020.
[39] Magdalen Dobson Manohar, Taekseung Kim, and Guy E. Blelloch.
Range retrieval with graph-based indices. arXiv:2502.13245, 2025.
[40] Magdalen Dobson Manohar, Zheqi Shen, Guy Blelloch, Laxman Dhulipala, Yan Gu, Harsha Vardhan Simhadri, and Yihan Sun. ParlayANN: Scalable and deterministic parallel graph-based approximate nearest neighbor search algorithms. In Proceedings of the 29th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), pages 270–285, 2024.
[41] Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru, Roberta Raileanu, Baptiste Roziere, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. Augmented language models: a survey. Transactions on Machine Learning Research, 2023. Survey Certification.
[42] Stanley Milgram. The small world problem. Psychology Today, 2(1):60–67, 1967.
[43] Bhaskar Mitra, Nick Craswell, et al. An introduction to neural information retrieval. Foundations and Trends® in Information Retrieval, 13(1):1–126, 2018.
[44] Marius Muja and David G. Lowe. Scalable nearest neighbor algorithms for high dimensional data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(11):2227–2240, 2014.
[45] Aude Oliva and Antonio Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42:145–175, 2001. Data accessed at https://github.com/erikbern/ann-benchmarks.
[46] Apostolos N. Papadopoulos and Yannis Manolopoulos. Nearest Neighbor Search: A Database Perspective. Springer Science & Business Media, 2005.
[47] Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, 2014. Data accessed at https://github.com/erikbern/ann-benchmarks.
[48] Liudmila Prokhorenkova and Aleksandr Shekhovtsov.
Graph-based nearest neighbor search: From practice to theory. In Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
[49] Harsha Vardhan Simhadri, Martin Aumüller, Amir Ingber, Matthijs Douze, George Williams, Magdalen Dobson Manohar, Dmitry Baranchuk, Edo Liberty, Frank Liu, Ben Landrum, Mazin Karjikar, Laxman Dhulipala, Meng Chen, Yue Chen, Rui Ma, Kai Zhang, Yuzheng Cai, Jiayang Shi, Yizhuo Chen, Weiguo Zheng, Zihao Wan, Jie Yin, and Ben Huang. Results of the Big ANN: NeurIPS’23 competition. arXiv:2409.17424, 2024.
[50] Harsha Vardhan Simhadri, George Williams, Martin Aumüller, Matthijs Douze, Artem Babenko, Dmitry Baranchuk, Qi Chen, Lucas Hosseini, Ravishankar Krishnaswamy, Gopal Srinivasa, Suhas Jayaram Subramanya, and Jingdong Wang. Results of the NeurIPS’21 challenge on billion-scale approximate nearest neighbor search. In Proceedings of the NeurIPS 2021 Competitions and Demonstrations Track, 2022.
[51] Aditi Singh, Suhas Jayaram Subramanya, Ravishankar Krishnaswamy, and Harsha Vardhan Simhadri. FreshDiskANN: A fast and accurate graph-based ANN index for streaming similarity search. arXiv:2105.09613, 2021.
[52] Ryan Spring and Anshumali Shrivastava. Scalable and sustainable deep learning via randomized hashing. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 445–454, 2017.
[53] Suhas Jayaram Subramanya, Devvrit, Rohan Kadekodi, Ravishankar Krishaswamy, and Harsha Vardhan Simhadri. DiskANN: Fast accurate billion-point nearest neighbor search on a single node. In Advances in Neural Information Processing Systems 32 (NeurIPS), 2019.
[54] Javier Vargas Muñoz, Marcos A. Gonçalves, Zanoni Dias, and Ricardo da S. Torres. Hierarchical clustering-based graphs for large scale approximate nearest neighbor search. Pattern Recognition, 96, 2019.
[55] Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk.
Approximate nearest neighbor negative contrastive learning for dense text retrieval. In Proceedings of the 9th International Conference on Learning Representations (ICLR), 2021. [56] Haike Xu, Magdalen Dobson Manohar, Philip A. Bernstein, Badrish Chandramouli, Richard Wen, and Harsha Vardhan Simhadri. In-place updates of a graph index for streaming approximate nearest neighbor search. arXiv:2502.13826, 2025. [57] Haike Xu, Sandeep Silwal, and Piotr Indyk. A bi-metric framework for fast similarity search. arXiv:2406.02891, 2024. [58] Minjia Zhang, Wenhan Wang, and Yuxiong He. GraSP: Optimizing graph-based nearest neighbor search with subgraph sampling and pruning. In Proceedings of the 15th International Conference on Web Search and Data Mining (WSDM), pages 1395–1405, 2022. [59] Xi Zhao, Yao Tian, Kai Huang, Bolong Zheng, and Xiaofang Zhou. Towards efficient index construction and approximate nearest neighbor search in high-dimensional spaces. Proceedings of the VLDB Endowment, 16(8):1979–1991, 2023. # A Additional Proofs # A.1 Nonexistence of Sparse $\alpha$-Shortcut Reachable Graphs Recent work of Indyk and Xu [19] shows that, for $k = 1$, standard greedy search (i.e., beam search with beam width $b = 1$) provably returns a $\left( \frac{\alpha + 1}{\alpha - 1} + \epsilon \right)$-approximate nearest neighbor for any constant $\epsilon$ when run on an $\alpha$-shortcut reachable search graph $G$. The $\alpha$-shortcut reachability property requires that, for any nodes $x, y \in \{1, \ldots, n\}$ with $d(x, y) > 0$, there is some $z \in \mathcal{N}_G(x)$ with $\alpha \cdot d(z, y) < d(x, y)$, for some parameter $\alpha \geq 1$. The requirement exactly corresponds to navigability (Definition 1) when $\alpha = 1$ and is a strictly stronger condition when $\alpha > 1$. The guarantee of [19] is non-vacuous when $\alpha > 1$.
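To make the definition concrete, the following is a minimal brute-force Python check of the $\alpha$-shortcut reachability property on a toy point set; the point set, graph, and function names are illustrative and not taken from any implementation referenced in the paper:

```python
import itertools
import math

def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def is_alpha_shortcut_reachable(points, out_neighbors, alpha):
    """Check that for every ordered pair x != y, some out-neighbor z of x
    satisfies alpha * d(z, y) < d(x, y). alpha = 1 recovers navigability."""
    n = len(points)
    for x, y in itertools.permutations(range(n), 2):
        if not any(alpha * dist(points[z], points[y]) < dist(points[x], points[y])
                   for z in out_neighbors[x]):
            return False
    return True

# Toy example: 4 points on a line; each node points to its immediate neighbors.
points = [(0.0,), (1.0,), (2.0,), (3.0,)]
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_alpha_shortcut_reachable(points, nbrs, alpha=1.0))  # True: the path graph is navigable
print(is_alpha_shortcut_reachable(points, nbrs, alpha=2.0))  # False: halving distances needs longer shortcuts
```

The $\alpha = 2$ case fails exactly as the definition suggests: from node 0 toward node 3, no out-neighbor halves the remaining distance, illustrating why $\alpha > 1$ is a strictly stronger requirement than navigability.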
Unfortunately, it is also not hard to see that for any fixed $\alpha > 1$, there exist relatively low-dimensional point sets with no sparse $\alpha$-shortcut reachable graphs. In fact, for any constant $\alpha > 1$, it suffices to consider a random point set in $O(\log n)$-dimensional Euclidean space. This contrasts the situation for navigability ($\alpha = 1$), since [12] show that an $O(\sqrt{n \log n})$ average degree navigable graph can be efficiently constructed for any point set in any dimension (indeed, in any metric space), under the mild assumption of unique pairwise distances between points (which can be ensured, e.g., by tie-breaking with node id). Formally: Claim 5. For any $\alpha > 1$, let $m = O\left(\frac{\log n}{(1 - 1/\alpha)^2}\right)$. There are $n$ points in $m$-dimensional Euclidean space with unique pairwise distances, but the only $\alpha$-shortcut reachable graph for the points is the complete graph. Further, by [12], the points admit a navigable graph with $O(\sqrt{n \log n})$ average degree. Note that for constant $\alpha > 1$, $1 - 1/\alpha$ is a constant bounded away from 0, so $m = O(\log n)$. Proof. It suffices to find a set of $n$ points whose pairwise distances all lie in the range $(1/\alpha, 1]$. Then, for any $x \neq y$, the only $z$ with $\alpha \cdot d(z, y) < d(x, y)$ is $z = y$: for any $z \neq y$ we have $\alpha \cdot d(z, y) > \alpha \cdot (1/\alpha) = 1 \geq d(x, y)$. Thus, to ensure $\alpha$-shortcut reachability, all nodes must be connected to all other nodes – i.e., $G$ must be the complete graph. If we are not concerned about the dimensionality, finding a set of points in Euclidean space with all pairwise distances lying in $(1/\alpha, 1]$ is trivial: take the $n$ standard basis vectors in $\mathbb{R}^n$, scaled by $1/\sqrt{2}$ so that they all have distance 1 from each other.
Subtract an infinitesimally small random amount from the non-zero entry of each so that all pairwise distances are unique, but still lie in $(1/\alpha, 1]$. To obtain a result in lower dimensions, we instead consider random points. Concretely, consider $n$ points in $\mathbb{R}^m$ with each entry set independently to $1$ or $-1$ with probability $1/2$. For each $x, y$, we have $\mathbb{E}[\|x - y\|_2^2] = 2m$, and by a standard binomial concentration bound, $\Pr\left[\left| \|x - y\|_2^2 - 2m \right| \geq m(1 - 1/\alpha)\right] \leq \exp(-\Omega((1 - 1/\alpha)^2 \cdot m))$. Setting $m = O\left(\frac{\log n}{(1 - 1/\alpha)^2}\right)$, this probability is bounded by $1/n^c$ for a large constant $c$. Taking a union bound over all $\binom{n}{2} < n^2$ pairs of points, we see that all their squared pairwise distances lie in the range $\left(2m(1 - \frac{1 - 1/\alpha}{2}), 2m(1 + \frac{1 - 1/\alpha}{2})\right)$ with probability at least $1 - 1/n^{c-2}$. Normalizing by $2m(1 + \frac{1 - 1/\alpha}{2})$, all the squared pairwise distances are less than $1$ and greater than $\frac{1 - \frac{1 - 1/\alpha}{2}}{1 + \frac{1 - 1/\alpha}{2}} \geq 1 - (1 - 1/\alpha) = 1/\alpha$, where we use the fact that $\frac{1 - x}{1 + x} \geq 1 - 2x$ for all $x$. Thus, all squared pairwise distances, and in turn all pairwise distances, lie in the range $(1/\alpha, 1)$, as desired. We can again ensure unique pairwise distances by adding arbitrarily small random perturbations to each point, completing the claim. □
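The concentration step of this proof is easy to verify empirically. The following Python sketch (the parameter choices $\alpha = 2$, $n = 50$, $m = 2000$ are illustrative, not from the paper) draws random $\pm 1$ points and checks that the normalized squared pairwise distances all land in $(1/\alpha, 1]$:

```python
import random

def random_sign_points(n, m, seed=0):
    """n points in {-1, +1}^m with i.i.d. uniform entries."""
    rng = random.Random(seed)
    return [[rng.choice((-1, 1)) for _ in range(m)] for _ in range(n)]

def normalized_sq_dists(points, alpha):
    """All squared pairwise distances, normalized by 2m(1 + (1 - 1/alpha)/2)."""
    m = len(points[0])
    scale = 2 * m * (1 + (1 - 1 / alpha) / 2)
    out = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            sq = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            out.append(sq / scale)
    return out

alpha = 2.0
pts = random_sign_points(n=50, m=2000)
d = normalized_sq_dists(pts, alpha)
# Concentration puts every pair in (1/alpha, 1] with overwhelming probability.
print(min(d) > 1 / alpha, max(d) <= 1.0)
```

With these margins the failure probability is astronomically small, matching the union-bound argument above.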
# A.2 Failure of Beam Search on Navigable Graphs We next give a simple counterexample showing that, unless the beam width is set to essentially the full dataset size, standard beam search can fail to find an approximate nearest neighbor even when run on a navigable graph. This observation in part motivates the definition of our alternative “distance based” stopping rule, (3), and the resulting Adaptive Beam Search algorithm. Claim 6. For any finite $C$, there exists a set of $n$ points in 2-dimensional Euclidean space and a navigable graph $G$ such that, for some query point $q$, beam search run on $G$ with beam width $b \leq n - 3$ returns $\tilde{x}$ with $d(q, \tilde{x}) \geq C \cdot d(q, x^*)$, where $x^*$ is the nearest neighbor of $q$. Figure 5: Example showing that standard beam search fails to find a nearest neighbor in a navigable graph. Points $\mathbf{x}_4, \ldots, \mathbf{x}_n$ are all located arbitrarily close to $(1, 0)$. They are all connected to $\mathbf{x}_1$ and $\mathbf{x}_2$, as well as to each other. The graph is navigable, since we can navigate from $\mathbf{x}_1, \mathbf{x}_4, \ldots, \mathbf{x}_n$ to $\mathbf{x}_3$ and vice-versa through $\mathbf{x}_2$. All other nodes are directly connected to each other. Suppose beam search with beam width $b \leq n - 3$ is initialized at $\mathbf{x}_1$ with query $\mathbf{q}$. Because $\mathbf{x}_4, \ldots, \mathbf{x}_n$ are all closer to $\mathbf{q}$ than $\mathbf{x}_2$, the method will never expand $\mathbf{x}_2$ and thus fail to reach the nearest neighbor $\mathbf{x}_3$. Proof. Consider the following dataset in 2-dimensional Euclidean space, shown in Figure 5: $\mathbf{x}_1 = (0, 0)$, $\mathbf{x}_2 = (1, 1)$, $\mathbf{x}_3 = (m, 1)$ for some arbitrarily large value $m$.
Let $\mathbf{x}_4, \ldots, \mathbf{x}_n$ all be located at arbitrary positions in an $\epsilon$-ball around $(1, 0)$ for arbitrarily small $\epsilon$. We can check that the graph with the following two-way edges is navigable: $(\mathbf{x}_2, \mathbf{x}_3)$, $(\mathbf{x}_i, \mathbf{x}_j)$ for all $i \in \{1, 2\}$, $j \in \{4, \ldots, n\}$, and, as in Figure 5, $(\mathbf{x}_i, \mathbf{x}_j)$ for all $i, j \in \{4, \ldots, n\}$. Consider beam search initialized at starting point $\mathbf{x}_1 = (0, 0)$ with query $\mathbf{q} = (m, 0)$. The nearest neighbor to $\mathbf{q}$ is $\mathbf{x}_3$, with $\|\mathbf{q} - \mathbf{x}_3\|_2 = 1$. In the first step of beam search, all neighbors of $\mathbf{x}_1$ (namely $\mathbf{x}_2, \mathbf{x}_4, \ldots, \mathbf{x}_n$) will be added to the search queue. Since $\mathbf{x}_2$ is farther from $\mathbf{q}$ than all nodes in $\mathbf{x}_4, \ldots, \mathbf{x}_n$, the algorithm will then expand nodes from this set in succession, adding no new nodes to the queue, since none of these nodes are connected to $\mathbf{x}_3$, the only remaining unexplored node. If $b \leq n - 3$, the algorithm will then terminate, with $\mathbf{x}_2$ never expanded and $\mathbf{x}_3$ never explored. As a result, beam search returns some $\tilde{\mathbf{x}} \in \{\mathbf{x}_4, \ldots, \mathbf{x}_n\}$ with distance $\|\mathbf{q} - \tilde{\mathbf{x}}\|_2 \geq m - \epsilon$.
It thus achieves approximation factor $\frac{\|\mathbf{q} - \tilde{\mathbf{x}}\|_2}{\|\mathbf{q} - \mathbf{x}_3\|_2} \geq \frac{m - \epsilon}{1} = C$ once we set $m = C + \epsilon$. □ # B Additional Implementation Details # B.1 Pseudocode for Generalized Beam Search Variants Below, we provide detailed pseudocode for generalized beam search (Algorithm 1) under stopping conditions (1) (classic greedy search), (2) (classic beam search), and (3) (Adaptive Beam Search). While the greedy search order and stopping rule determine the number of distance computations performed, it is possible to optimize runtime and storage requirements by using appropriate data structures to implement the stopping rule. Additionally, we can avoid adding nodes to the candidate set $\mathcal{C}$ if we are sure that, if popped off $\mathcal{C}$, those nodes would trigger the termination condition anyway. Adaptive Beam Search and Greedy Search. Pseudocode for Adaptive Beam Search is given in Algorithm 2. The same pseudocode can be used for greedy search by setting the approximation parameter $\gamma = 0$, so that the Adaptive Beam Search stopping rule (3) becomes the greedy rule (1). The key optimization is that we maintain a heap, $B$, of the $k$ nearest points seen so far, which avoids having to extract these neighbors from the set of discovered nodes $\mathcal{D}$ every time termination condition (3) is checked. Further, if a newly discovered node has distance larger than $(1 + \gamma)$ times that of the $k^{\mathrm{th}}$ closest point seen so far, it will always trigger termination if considered for expansion. Thus, we can avoid adding it to the candidate set of unexpanded nodes, $\mathcal{C}$. See Lines 12-17. This optimization avoids letting $\mathcal{C}$ grow unnecessarily large with nodes that will never be expanded. Classic Beam Search.
Pseudocode for classic beam search is given in Algorithm 3. The implementation is essentially identical to that of Adaptive Beam Search, except that a heap of the $b \geq k$ nearest points seen so far must be maintained to efficiently check stopping condition (2) each time a node is considered for expansion or newly discovered. At the end of the algorithm, the $k$ nearest points from this heap are ultimately returned. See Lines 22-23. # Algorithm 2 Adaptive Beam Search # Algorithm 3 Classic Beam Search # B.2 Sparse Navigable Graph Construction via Pruning As discussed in Section 5, we evaluate the performance of our Adaptive Beam Search method on both truly navigable graphs, where it is backed by the theoretical guarantee of Theorem 1, and on heuristic “approximately navigable” graphs constructed using a variety of popular methods. To construct sparse navigable graphs, we use the construction of [12]. For $m = \lfloor \sqrt{3n \ln n} \rfloor$, each node is connected to its $m$ nearest neighbors along with $\lceil \frac{3n \ln n}{m} \rceil$ uniformly random nodes. As shown in [12], such a graph is navigable with high probability and has average degree $O(\sqrt{n \log n})$. We further sparsify these graphs, both to facilitate running large scale experiments and to more accurately reflect performance on graphs with practical levels of sparsity. To do so, we employ a pruning strategy that removes redundant edges from the graph while maintaining navigability. Pseudocode for the pruning method is given in Algorithm 4. It starts with a navigable graph $G$, then iterates over each node $s$ in the graph, only keeping a minimal set of out edges needed to ensure navigability. In particular, for each node $t \in \{1, \ldots, n\} \setminus \{s\}$, by Definition 1, we must ensure that $s$ has an out neighbor $x$ with $d(x, t) < d(s, t)$.
The method iterates over each $t$, adding an out neighbor of $s$ to the keep set only if it is needed to ensure this condition holds for some $t$ (i.e., if no edges already in keep ensure the condition). After checking all $t$, it removes all neighbors of $s$ not in keep. Table 2: Average out degrees of navigable graphs before and after pruning. Note that we run on subsamples of the full datasets from Table 1 due to the high computational cost of pruning. The pruning strategy can produce navigable graphs that are significantly sparser than those constructed by [12]. See Table 2 for a summary of the average degrees achieved for our tested datasets. Unfortunately, the runtime of our pruning method scales at least quadratically with $n$. This limits our ability to apply the method to the full datasets. An interesting open question is to improve the running time of constructing very sparse and truly navigable graphs. # B.3 Omitted Details on Experimental Setup We next give additional details on the datasets and graphs used to evaluate Adaptive Beam Search. Datasets. Table 1 summarizes the six benchmark datasets used in our experiments. The citation for each dataset includes a note listing the URL where we obtained the specific version of the dataset used in our work. The datasets are available under the following licenses: MIT License (MNIST), CC0 1.0 Universal (SIFT, GIST), and the Open Data Commons Public Domain Dedication and License (GloVe). We were unable to find license information for Deep96 and Deep256. Both are available in the public domain. For DEEP96, we used a one million point pre-sampled dataset from [6], but our 100K points used for the navigable graph experiments were sampled from the original dataset available at https://github.com/matsui528/deep1b_gt. For GloVe, we sampled one million nodes from the original dataset. The GIST data only includes 1K query points by default.
To generate 10K query points, in order to match the other benchmarks, we sampled additional query points uniformly at random from the so-called learning data points, which are included with GIST for hyperparameter tuning. We did not use this set of points for any other purpose or any parameter tuning. Graph Parameters. As discussed in Section 5, we construct heuristic graphs using four common methods: HNSW [38], Vamana [53], NSG [15], and EFANNA [14]. We used our own implementations of HNSW and Vamana. Code for NSG is available under an MIT License at https://github.com/ZJULearning/nsg and for EFANNA under a BSD License at https://github.com/ZJULearning/efanna. The heuristic graph construction algorithms employed take as input various hyperparameters. Settings used for these hyperparameters are given in Table 3. For Vamana, we used the same hyperparameters for all datasets, matching those in the original paper [53], which were found to work well for SIFT, DEEP96, and GIST; using the same parameters for the other datasets yielded similarly good results. The hyperparameters for EFANNA [14] and NSG [15] for SIFT and GIST are taken from the authors’ repository [15]. The same parameters were also used by [58] and [53]. For NSG and EFANNA with DEEP96, we used the optimal values used by [58]. For EFANNA with MNIST, DEEP256, and GloVe, we tested both sets of hyperparameters (those used for SIFT and those used for GIST) and picked the better performing set. We did the same for NSG with MNIST, DEEP256, and GloVe. Table 3: Experimental Hyperparameters for Different Datasets and Graph Constructions For HNSW, we used the hyperparameters that [58] found to be optimal for SIFT, DEEP96, GIST, and GloVe. For HNSW on MNIST and DEEP256, we tested with values of $\mathbf{M} = 14, 16, 24$ and used the value that performed best under standard beam search.
Since the authors found the ideal value of efc for SIFT, DEEP96, GIST, and GloVe to be 500, we used this value for DEEP256 and MNIST as well. Computational Resources. Navigable graphs were constructed using our pruning methods run on a single core of a 3.2GHz Intel Core i9-12900K CPU with access to 128GB of DDR5 4800MHz RAM. To accelerate pruning and take advantage of available memory, we precomputed all pairwise distances between pairs of points in the dataset. Each graph required several hours to construct. All other experiments were run on a single 2.9GHz Intel(R) Xeon(R) Platinum 8268 CPU with access to 32GB of RAM, although at most 4GB was used for any individual experiment. Producing a single recall/distance computation tradeoff curve requires several hours for each dataset and algorithm. # C Additional Experimental Results In this section we include additional experimental results. # C.1 Navigable Graphs In Figure 6 we compare beam search termination conditions on three datasets for $k = 100$. The results are similar to those reported in Figure 3 for $k = 1$ and $k = 10$, but with less significant gains for Adaptive Beam Search as compared to standard beam search. As for smaller values of $k$, Adaptive Beam Search V2 underperforms both other methods. Figure 6: Comparison of generalized beam search termination conditions on navigable graphs across three datasets: SIFT1M, DEEP96, and MNIST (columns), with $k = 100$ (rows). Adaptive Beam Search consistently outperforms standard beam search, while the alternative Adaptive Beam Search V2 underperforms both by a significant margin. Figure 7: Comparison of generalized beam search termination methods on HNSW graphs with $k = 10$ across six datasets. Adaptive Beam Search outperforms standard beam search, with the degree of improvement varying across datasets. # C.2 Heuristic Graphs In Figure 7 we compare beam search termination conditions on HNSW search graphs for all six benchmarks and $k = 10$.
In Figure 8 we include further results on HNSW graphs for $k = 1$ and $k = 50$ across three datasets. As with our other experiments on heuristic graphs (see Figure 4), we see that Adaptive Beam Search generally outperforms standard beam search, sometimes by a large margin. One exception is for GIST with $k = 1$, where beam search performs marginally better. # C.3 Adaptive Beam Search vs. Adaptive Beam Search V2 As illustrated in Figure 3, Adaptive Beam Search V2, which uses the more aggressive stopping condition of (6), generally underperforms both Adaptive Beam Search and classic beam search. We believe this is due to the fact that, to achieve high recall, the $\gamma$ parameter for this rule needs to be set high, causing the method to terminate late and perform a large number of distance computations on some queries. This phenomenon is illustrated in Figure 9. Figure 8: Comparison of generalized beam search termination methods on HNSW graphs across three datasets with $k = 50$ and $k = 1$. Adaptive Beam Search outperforms standard beam search as we vary $k$, with the exception of GIST for $k = 1$, where it slightly underperforms. Figure 9: Histograms of the number of distance computations performed by Adaptive Beam Search and Adaptive Beam Search V2. We tune the $\gamma$ parameter for each method to achieve a fixed recall value, finding that Adaptive Beam Search V2 has a heavier tail of queries that require many distance computations, in part explaining its poor performance seen in Figure 3. Figure 10: Evaluation of the Hybrid Beam Search termination rule from (7) on three datasets. There is very little difference in performance between this method and Adaptive Beam Search.
[Figure 10 plots recall (0.95–1.00) against the number of search distance calculations on HNSW graphs for SIFT, DEEP256, and DEEP96 with $k = 10$, comparing Standard Beam Search, Adaptive Beam Search (our method), and Hybrid Beam Search with $\beta = 1.1$, $\beta = 2.0$, and $\gamma = 0.1$.] # C.4 Hybrid Stopping Rule As discussed in Section 5, it would be interesting to consider other relaxations of greedy search beyond beam search and Adaptive Beam Search. One obvious candidate is a rule that combines both relaxations. In particular, in Algorithm 1 we could choose to terminate according to the hybrid condition (7), where $b > k$ is a “width parameter” and $\gamma > 0$ is a distance-based relaxation. We ran initial experiments with this natural hybrid termination condition, shown in Figure 10. To obtain a trade-off curve between recall and distance computations, we either fixed $b = \beta \cdot k$ for a parameter $\beta > 1$ and then varied $\gamma$, or we fixed $\gamma$ and varied $\beta$. Somewhat surprisingly, the hybrid method appears to perform very similarly to Adaptive Beam Search, although further study of this termination condition and other relaxations would be valuable.
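For concreteness, the following is a simplified Python sketch of a graph-search loop with the distance-based stopping rule described in Appendix B.1: the search stops when the closest unexpanded candidate is farther than $(1 + \gamma)$ times the distance of the $k$-th nearest point seen so far. This is an illustrative stand-in for Algorithm 2, not the paper's implementation, and the toy graph, point set, and parameters are all made up:

```python
import heapq
import math

def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def adaptive_beam_search(points, neighbors, query, start, k, gamma):
    """Greedy graph search with a distance-based stopping rule: stop when the
    closest unexpanded candidate is farther than (1 + gamma) times the k-th
    nearest distance discovered so far. Returns up to k node ids, nearest first."""
    d0 = dist(points[start], query)
    candidates = [(d0, start)]   # min-heap of unexpanded nodes, keyed by distance
    best = [(-d0, start)]        # max-heap (negated) of the k nearest nodes seen
    visited = {start}
    while candidates:
        d, u = heapq.heappop(candidates)
        if len(best) >= k and d > (1 + gamma) * -best[0][0]:
            break                # distance-based termination condition
        for v in neighbors[u]:
            if v in visited:
                continue
            visited.add(v)
            dv = dist(points[v], query)
            if len(best) < k or dv < -best[0][0]:
                heapq.heappush(best, (-dv, v))
                if len(best) > k:
                    heapq.heappop(best)
            # skip candidates that would immediately trigger termination anyway
            if len(best) < k or dv <= (1 + gamma) * -best[0][0]:
                heapq.heappush(candidates, (dv, v))
    return sorted((v for _, v in best), key=lambda v: dist(points[v], query))

# Toy usage on a small graph (all values illustrative).
pts = [(0.0, 0.0), (1.0, 1.0), (5.0, 1.0), (1.0, 0.0), (2.0, 0.5)]
nbrs = {0: [1, 3], 1: [0, 2, 4], 2: [1], 3: [0, 1, 4], 4: [1, 3, 2]}
print(adaptive_beam_search(pts, nbrs, query=(5.0, 0.0), start=0, k=1, gamma=0.5))  # [2]
```

Setting `gamma=0` recovers greedy-style behavior, mirroring how stopping rule (3) collapses to the greedy rule (1); the candidate-pruning check plays the role of the $\mathcal{C}$-size optimization discussed in Appendix B.1.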
Nearest neighbor search is central to machine learning, information retrieval, and databases. For high-dimensional datasets, graph-based methods such as HNSW, DiskANN, and NSG have become popular thanks to their empirical accuracy and efficiency. These methods construct a directed graph over the dataset and perform beam search on the graph to find nodes close to a given query. While significant work has focused on practical refinements and theoretical understanding of graph-based methods, many questions remain. We propose a new distance-based termination condition for beam search to replace the commonly used condition based on beam width. We prove that, as long as the search graph is navigable, our resulting Adaptive Beam Search method is guaranteed to approximately solve the nearest-neighbor problem, establishing a connection between navigability and the performance of graph-based search. We also provide extensive experiments on our new termination condition for both navigable graphs and the approximately navigable graphs used in practice, such as HNSW and Vamana graphs. We find that Adaptive Beam Search outperforms standard beam search over a range of recall values, datasets, graph constructions, and target numbers of nearest neighbors. It thus provides a simple and practical way to improve the performance of popular methods.
# 1. Introduction Following the emergence of big data and the ever-increasing public availability of datasets, each with tens of thousands of data points, research within the deep learning domain is accelerating [1]. Consequently, there are two key factors that need to be addressed. Firstly, the process by which we present data to the deep learning model is paramount. It is not uncommon for models to be trained for thousands of epochs, and thus any superfluous data within the dataset will have an increasingly negative impact on training speed. This phenomenon has given rise to hard example mining [2], which attempts to identify hard images (i.e. images upon which the model performs poorly, and which therefore contribute highly to the loss). By considering only these hard images, we can not only sample from a minimal dataset, thereby minimising the duration of a training epoch, but also reduce the number of iterations required for model convergence, as the contribution of each image sample is maximised in every iteration. Similarly, the images sampled by the model in any given training iteration are controlled via curriculum learning [3] and self-paced learning [4]. Contrary to hard example mining, in which commonly only a subset of the global dataset is considered during the entire training process, curriculum learning and self-paced learning force the initial iterations to sample one fraction of the global dataset, and subsequent iterations to sample from different fractions, until the entire global dataset is considered. Generally, curriculum learning introduces harder images (pre-defined by prior knowledge) as training progresses, while self-paced learning uses the current model performance as feedback to the controller to determine which images to sample next. Figure 1: Overview of the DDS-NAS search phase.
After a given training iteration, we determine whether a sufficient percentage of the data in the current subset is correctly classified, according to some a priori mastery threshold. If the subset has been mastered, we reformulate it dynamically. Hard images in the current subset are retained, according to some a priori hardness threshold, while easy images are replaced with the most different image from the same class. To determine the most different image, we employ an (approximate) furthest-neighbour $kd$-tree, whereby each image is represented by the auto-encoded representation of its features within the latent space. The second challenge arising from data accessibility is the evolution of the architecture search space. As research within the domain continues, newer, and often more complex, network architectures are presented. To address this challenge, Neural Architecture Search (NAS) has emerged, which automatically traverses the architecture search space for a given task and generates models that are competitive alongside handcrafted, state-of-the-art models [5]. We can divide the NAS domain into evolutionary, reinforcement-learning, prediction-based, and gradient-based NAS frameworks. This paper primarily considers gradient-based NAS frameworks. More precisely, the seminal gradient-based DARTS [6] framework constructs a super-network, in which each layer consists of all possible operations in the search space, followed by a softmax layer across said operations, such that operation selection can be represented as (continuous) operation-magnitude optimisation. After training the super-network, the best-performing subset of operations is extracted, thus formulating a cell (sub-network). A series of these cells is then trained to generate a final ‘searched’ model, fine-tuned upon a given challenge dataset.
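The continuous relaxation at the heart of DARTS can be illustrated with a toy, framework-free sketch: the layer output is the softmax-weighted mixture of every candidate operation's output, so operation selection reduces to optimising the architecture weights. The scalar operations below are placeholders for the real convolution and pooling candidates:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of architecture weights."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def mixed_op(x, ops, alphas):
    """DARTS-style continuous relaxation: the layer output is the
    softmax(alpha)-weighted sum of every candidate operation's output."""
    weights = softmax(alphas)
    return sum(w * op(x) for w, op in zip(weights, ops))

# Toy candidate operations standing in for conv / pool / identity / zero ops.
ops = [lambda x: x,         # identity / skip-connection
       lambda x: 2.0 * x,   # stand-in for a parameterised conv op
       lambda x: 0.0]       # the "zero" (no-connection) op
alphas = [0.0, 0.0, 0.0]    # equal architecture weights -> uniform mixture
print(mixed_op(3.0, ops, alphas))  # (3 + 6 + 0) / 3 = 3.0
```

After super-network training, discretisation simply keeps the operation with the largest architecture weight (the argmax of `alphas`), which is the "best-performing subset of operations" extraction described above.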
We propose a strategy that incorporates a novel combined hard example mining and curriculum learning approach to enable Dynamic Data Selection (DDS) within a NAS framework, denoted as DDS-NAS. By using image similarity as a proxy metric for image difficulty (on an easy-to-hard performance axis), we can select hard images for processing within a given NAS training iteration in logarithmic time without compromising image diversity (Fig. 1). This process allows us to significantly improve the speed of the NAS search phase. Whilst this paper specifically addresses image datasets, the same techniques could equally be applied to other application domains such as natural language processing (NLP). On this basis, our main contributions are as follows: – a novel framework, DDS-NAS, that incorporates both hard example mining and curriculum learning in order to minimise the training duration of a given epoch within NAS, demonstrated to be effective across a variety of commonplace NAS approaches (DARTS [6], P-DARTS [7] and TAS [8]). – an efficient and novel approach for hard example mining within the image domain, that considers image dissimilarity as an alternative metric to hardness, and employs an autoencoder architecture that enforces an image similarity embedding in latent space. This yields efficient dissimilar image look-up from a $kd$-tree structure. – generation of models in a manner intrinsically robust to biased datasets, and 10 times quicker than existing NAS techniques, whilst retaining competitive, near state-of-the-art accuracy with minimal memory footprint over common benchmarks. # 2. Prior Work In this section we introduce the related NAS, hard example mining, and curriculum learning approaches, from which we draw our methodology. We restrict our NAS literature survey to only a brief overview of NAS techniques, since DDS-NAS can be deployed upon any NAS approach that iteratively processes images (evolutionary, reinforcement-learning, and gradient-based).
Through this review of the literature, we highlight our contribution to the field, outlining the ways in which our framework works alongside current NAS approaches to optimise performance and reduce computational requirements. # 2.1. Neural Architecture Search With the rise of NAS, a multitude of recent literature has addressed the scalability challenge which occurs due to the resultant large search space and training cost. Following the seminal work of Zoph et al. [9] and other reinforcement-learning [10, 11] and evolutionary [12, 13] approaches to NAS, weight-sharing techniques [14] reduce the need to train each architecture in the search space separately. # 2.2. NAS Strategies Gradient-based approaches [6, 7, 15] enable the application of stochastic gradient descent and other well-used deep learning techniques by relaxing the search space so that it is continuous, thereby drastically improving the convergence rate of the architecture search. One-shot NAS approaches employ the weight-sharing super-network training stage of DARTS with an alternative sampling strategy, tending to consider only one path through the super-network in a given training iteration [16]. Progressive DARTS (P-DARTS) [7] addresses the optimisation gap within DARTS between the sub-network and final model. This is achieved by simultaneously limiting the prevalence of skip-connections within a generated cell and by progressively reducing the operation search space available to the super-network. This in turn enables progressively increasing network depth. Network Pruning via Transformable Architecture Search (TAS) [8] crafts a loss function to directly minimise the complexity of the searched network. To this end, both the width (number of channels in a layer) and the depth (number of layers) of the network are also searched. By employing the knowledge distillation algorithm from [17], weights from the fully trained super-network can be transferred to the ‘pruned’ searched network. # 2.3.
Curriculum and Coreset Sampling Within NAS CNAS [18] employs a curriculum learning framework within NAS, in order to slowly introduce new operations to the NAS controller search space, allowing the model to successfully master harder tasks as training progresses. Overall, network topology is the primary focus for contemporary NAS solutions [8, 18, 7]. By contrast, only minimal consideration of the dataset presented within the NAS pipeline is present in the literature. CLOSE [19] uses curriculum learning to modify the sharing extent. There is no effort to reduce the training dataset size, but image hardness and uncertainty (which can be calculated from a range of different sub-network outputs) are factored into the loss computation. Peng et al. [20] introduce negative samples within NAS training, drawing from the benefits of contrastive learning. Core-set Sampling [21] selects a small subset of the data space for training the NAS super-network via the greedy $k$-center algorithm. ADAPTIVE-NAS [22] compares different core-set sampling algorithms for PT-DARTS [23], including adaptive sampling, in which the training set is periodically updated using GLISTER [24]. While their work is most similar to ours, it makes no effort to consider image hardness and is thus unable to utilise any of the benefits of curriculum learning. Moreover, only one search algorithm is evaluated with core-set selection. However, the core-set selection algorithm depends upon embeddings that are well aligned with the training data, much like DDS-NAS (Table 5). To our knowledge, this paper represents the first approach to jointly employ online hard example mining and curriculum learning during NAS learning, optimising model performance while reducing overall NAS computation requirements. With the variety and quickly evolving nature of NAS strategies, it is imperative that our method can be deployed alongside any existing NAS approach.
Our work is thus the first to utilise a core-set approach in conjunction with a variety of existing NAS approaches and different architecture search spaces. Our approach is able to accelerate training for even the oldest NAS methods, for which training speed is a known drawback [16].

# 2.4. Curriculum Learning and Hard Example Mining

Graves et al. [25] posit the need for a surrogate measure of learning progress to inform the curriculum controller, rather than model accuracy. They introduce several different measures, identifying the best as prediction gain (instantaneous loss for a sample) and gradient variational complexity (using the direction of gradient descent to measure model complexity). Hacohen and Weinshall [4] suggest instead using a scoring function to generate the curriculum. The scoring function ranks images within the dataset by difficulty through testing either the same model (pre-trained without curriculum learning) or a different model. Harder images are introduced to the model over time. Weinshall et al. [26] further evolve this process to consider image difficulty in relation to task difficulty (e.g. fine detail differentiation is harder than coarse detail differentiation, which can for instance be trivially approximated with hierarchical datasets). Shrivastava et al. [27], on the other hand, in their hard example mining work, rank the images in order of difficulty at training time to dynamically generate a mini-curriculum at each iteration. Kumar et al. [28], in their work on self-paced learning, instead monitor image difficulty as either the negative log-likelihood for expectation-maximisation or the upper bound on risk for latent structural support vector machines. Jiang et al. [29] incorporate both self-paced learning and curriculum learning into a single framework.
That is, the curriculum is pre-defined by some expert, but takes into account the feedback from the model (the learner) when selecting which images to propose to the network during training. Finally, Matiisen et al. [30] introduce the concept of mastery into the curriculum learning framework. In its simplest form, mastery is reaching a performance threshold for the model, identified by prior expert knowledge. The model is presented with images from a global dataset, but with a higher probability of sampling images from the current curriculum subset. As the model masters this subset, the probability of sampling these images decreases, while the probability of sampling the next curriculum subset increases. Considering these studies together, it is evident that curriculum learning and hard example mining both greatly benefit the deep learning optimisation process, and the combination of the two does so even more. We therefore uniquely propose to employ such methods within NAS, specifically leveraging mastery from [30] in tandem with our own hard example mining approach, reminiscent of the ‘instructor-student collaborative’ learning paradigm [29]. The work of Cazenavette et al. [31] builds upon well-explored dataset distillation techniques [32]. By optimising the $l_2$ loss between the parameters of a network trained on only 50 images per class and optimal network parameters (i.e. parameters induced by training with 5000 images per class), they are able to achieve reasonable performance ($71.5\%$ on CIFAR-10 [33]). On this basis, we can deduce that training on a fraction of the images is a promising research direction, one which our method pursues without such a loss in performance.

# 3. Proposed Approach

In this section, we detail the process by which our proposed DDS-NAS training strategy dynamically samples the dataset in an online fashion within the NAS cycle (Figure 1).
DDS-NAS is subsequently deployed across three leading contemporary NAS frameworks (DARTS [6], P-DARTS [7], and TAS [8]). Firstly, we define some key terms to which we will refer in our subsequent discussion:

• hard or hard-ness: a given example within the dataset at the current NAS training cycle iteration is defined as being hard if the output of the current model correlates poorly with the ground truth label for this example and hence contributes significantly to the current loss value for the model (i.e. it is either misclassified or classified with a low confidence score in the context of image classification).
• easy: the converse of hard, where for a given example the output of the current model correlates strongly with the ground truth label for this example and hence contributes less significantly to the current loss value for the model (i.e. correctly classified with a high confidence score in the context of image classification).
• mastery: a measure of when a given a priori performance threshold is reached on the current data subset, such that the number of easy examples in the dataset is high with regard to the current model.

# 3.1. Curriculum Learning Within NAS

To formulate an unbiased subset of the global dataset, we use the hard example mining process detailed in Section 3.2. At every training iteration within the NAS search phase, we present such a subset to the NAS model. Following the success of [30], we in fact present the same subset until it has been mastered, according to some a priori mastery threshold (see Section 4.1). Only when the NAS model masters a subset do we sample a new set of examples from the global dataset. If the mastery threshold is very low, this subset of data will change often. If the mastery threshold is very high, a given subset is presented to the NAS model for several successive iterations, and a smaller portion of the global dataset is sampled throughout the entire training process.
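This mastery-gated loop can be sketched in a few lines. The `sample_subset`, `train_step`, and `subset_accuracy` callables below are placeholders for, respectively, our hard example mining sampler (Section 3.2), one NAS optimisation step, and evaluation of the model on the current subset; the function name is our own:

```python
def curriculum_search(global_dataset, sample_subset, train_step, subset_accuracy,
                      mastery_threshold=0.5, max_iters=1000):
    """Present the same data subset every iteration until it is mastered;
    only then sample a new subset from the global dataset."""
    subset = sample_subset(global_dataset, current=None)
    for _ in range(max_iters):
        train_step(subset)                                # one NAS search iteration
        if subset_accuracy(subset) >= mastery_threshold:  # subset mastered
            subset = sample_subset(global_dataset, current=subset)
    return subset
```

A low `mastery_threshold` causes frequent resampling; a high one keeps each subset in play for many successive iterations, matching the trade-off described above.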
Akin to the restriction in P-DARTS [7] whereby only network parameters (i.e. weights) are updated, and not architectural parameters, within the first 10 training epochs, we similarly restrict DDS-NAS from resampling the dataset in this way for the first 10 epochs of NAS training.

# 3.2. Dynamic Data Selection

In order to both minimise the data subset used in each NAS iteration without performance degradation and facilitate efficient inter-iteration dataset resampling, we require a low-overhead process by which we can dynamically select new data examples. During the initial NAS training iteration, and the immediately subsequent iterations, model performance can be considered near-random. As such, we necessarily depend upon a resampling process independent of model performance, and hence propose the use of dataset example similarity as an alternative measure of relative hardness between samples. The intuition is that a model will perform poorly on examples with greater dissimilarity to those upon which it has already been trained. By using a resampling process independent of model performance, we do not need to compute the forward pass of the model on all image samples in the entire dataset per hard example mining iteration, an approach commonplace among existing hard example mining approaches. This significantly reduces the computational complexity of DDS-NAS. Given the need to perform efficient one-to-many feature distance comparisons via an online approach, we construct a series of efficient furthest-neighbour $kd$-tree structures from the chosen $N$-dimensional feature representations of each example in our global dataset. In order to maintain a balanced data subset in the presence of dynamic reselection, we construct one such $kd$-tree structure per class label in the dataset, resulting in $m$ trees for $m$ dataset classes.
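The per-class look-up structure can be sketched as follows. For brevity this stand-in performs a brute-force furthest-neighbour scan within each class rather than the $kd$-tree query used by DDS-NAS (which achieves $O(\log n)$ average-case look-ups); the class and method names are illustrative:

```python
import numpy as np

class ClasswiseFurthestLookup:
    """One index per class label; returns the same-class embedding most
    dissimilar (by Euclidean distance) to a query embedding."""
    def __init__(self, embeddings, labels):
        self.embeddings = embeddings
        # group global dataset indices by class, one group per kd-tree in the paper
        self.by_class = {c: np.flatnonzero(labels == c) for c in np.unique(labels)}

    def furthest(self, query, label):
        idx = self.by_class[label]  # restrict to same-class candidates
        d = np.linalg.norm(self.embeddings[idx] - query, axis=1)
        return int(idx[np.argmax(d)])  # global index of the furthest neighbour
```

Restricting each query to the tree (here, index group) of its own class is what keeps the resampled subset class-balanced.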
In this way we can facilitate like-for-like class-aware resampling and hence maintain dataset balance throughout the NAS training cycle. This strategy resembles undersampling, which has been shown to be effective for dealing with biased datasets [35], and is a significant advantage of our approach. To enable efficient look-up within our $kd$-tree structure, we require a sufficiently low dimension $N$ of our feature representation such that the approximate furthest-neighbour algorithm does not collapse [36]. As the dimensionality of image data is high (i.e. $N = 28 \times 28$ in the case of MNIST [37], and larger for more complex datasets), we instead propose using an additional autoencoder architecture to construct an image similarity embedding with a much lower dimension ($N = 8$ for the easier MNIST and FashionMNIST [38] datasets, $N = 32$ for CIFAR-10). In general, we find that contemporary state-of-the-art autoencoder architectures [39] employ skip-connections between the encoder and decoder sub-networks to facilitate improved image reconstruction. However, in this instance, such skip-connections are detrimental to the performance of the encoder network in terms of constructing an encoding at the bottleneck of the encoder-decoder architecture (our embedding) that maximally captures the highest level of feature detail within itself. On this basis, we employ the proven autoencoder architecture from GANomaly [40], as it is one of the most successful encoder-decoder architectures employed for encoded image discrimination, predating the wider move to the use of skip-connections in the field [39]. We require that the use of this encoder architecture results in a compact feature embedding that retains the property of spatial similarity, such that similar images have similar embeddings within the latent space and vice versa. This property must not come at the expense of image reconstructability.
Otherwise, we cannot be confident that a given embedding represents a given image; in other words, there would be no correlation between embedding space dissimilarity and image space dissimilarity. Conversely, given reconstructability without similar images clustering within the embedding space, we cannot guarantee that this correlation is strong. To enforce these properties, we found that contractive loss [41] is sufficient for easier datasets, while harder datasets require a triplet margin ranking loss combined with an MSE reconstruction loss, weighted via Kendall Loss [42]. Subsequently, we can order images by their dissimilarity within our furthest-neighbour $kd$-tree structures. See Table 1 for a lightweight autoencoder training configuration sufficient for each dataset. During a given NAS training iteration, we measure the hard-ness of each example image in the current data subset based on cross-entropy loss, following our earlier definition of hard and easy examples. To subsequently update our data subset in a dynamic manner, we first retain the images that are hard when averaged across the most recent epochs, according to some a priori hard-ness threshold (see Section 4.1). Secondly, for each image in the current data subset below the hard-ness threshold (i.e. the easy images), we select the $kd$-tree from our set associated with that image's class label and identify the most dissimilar image of the same class in the global training set in $O(\log n)$ time. We then use this image to replace the easy image within the data subset. This dynamically updated training data subset is then used for the next NAS training iteration. A detailed example can be found in Appendix A.
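The per-iteration subset update can be sketched as below. The hardness mapping (per-sample cross-entropy squashed into $[0, 1)$) and the `lookup` callable (assumed to return the global index of the most dissimilar same-class image, e.g. via a per-class furthest-neighbour query) are illustrative assumptions rather than the exact DDS-NAS implementation:

```python
import numpy as np

def update_subset(subset_idx, logits, targets, embeddings, lookup,
                  hardness_threshold=0.85):
    """Retain hard images; replace each easy image with the most dissimilar
    same-class image from the global dataset (found via `lookup`)."""
    # per-sample cross-entropy from raw logits (numerically stable log-softmax)
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(targets)), targets]
    hardness = 1.0 - np.exp(-ce)          # squash loss into [0, 1)
    new_idx = list(subset_idx)
    for pos, i in enumerate(subset_idx):
        if hardness[pos] < hardness_threshold:   # easy image: swap it out
            new_idx[pos] = lookup(embeddings[i], targets[pos])
    return new_idx
```

Because only the easy images trigger a look-up, each update costs at most one logarithmic-time query per replaced image.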
Our overall pipeline is as follows: once the previous data subset has been mastered by iterative NAS training, we dynamically formulate a new balanced subset of the global training dataset based on (a) the retention of images that are considered hard, and (b) the replacement of images that are considered easy with dissimilar images of the same class, to retain dataset balance (Fig. 1). Pseudo-code illustrating the overall pipeline, with time complexity analysis, can be found in Appendix B, which highlights the potential search speed efficiency that DDS-NAS affords.

Figure 2: t-SNE visualisation of the clustering of autoencoded image feature representations within the latent space. Our autoencoder preserves the property that similar images have similar encodings for MNIST (a), FashionMNIST (b), and CIFAR-10 (c). However, our compact embedding is unsuitable for fine-grained image classification such as FGVC-Aircraft (d), which is a known limitation of autoencoders.

# 4. Experimental Setup

We detail our experimental setup for DDS-NAS deployment across the Differentiable Architecture Search (DARTS), Progressive DARTS (P-DARTS), and Network Pruning via Transformable Architecture Search (TAS) NAS frameworks. This setup is used to demonstrate the performance of our proposed approach with several image classification datasets.

Table 1: Suggested autoencoder training configuration parameters for each dataset to yield a sufficiently lightweight architecture that can generate low-dimensionality embeddings.

# 4.1. NAS Configuration

Unless otherwise stated, all employed NAS frameworks adopt the same common configuration, using Adam optimisation [43] with initial learning rate $lr = 3e^{-4}$, weight decay $wd = 1e^{-3}$, and momentums $\beta_1 = 0.5$ and $\beta_2 = 0.999$ (P-DARTS uses $lr = 6e^{-4}$, $wd = 1e^{-3}$; TAS uses $lr = 1e^{-4}$).
For weight optimisation of the NAS-derived architectures themselves, we use an SGD optimiser with $wd = 3e^{-4}$ and momentum $\beta = 0.9$ (P-DARTS uses $wd = 5e^{-4}$). Additionally, for DARTS we employ a cyclic learning rate scheduler with base $lr = 0.001$, max $lr = 0.01$, and step size up $=$ step size down $= 10$. We set $lr = 0.01$ when the previous dynamically selected data subset is mastered and an updated data subset is introduced. Therefore, the updated data subset is learned quickly and is then ‘fine-tuned’ as with the previous subset. There is precedent for such an approach in SGDR [44], in which the learning rate is periodically reset to a higher value before the learning rate decay is reapplied. P-DARTS and TAS both adopt a cosine annealing learning rate scheduler with $lr = 2.5e^{-2}$ and $lr = 0.1e^{-2}$ respectively. We select the ResNet-110 architecture for TAS KD-teacher training. The models are implemented using PyTorch [45] (v1.6.0, Python 3.6.9). Performance of DDS-NAS deployed across each NAS framework is presented in terms of both Top-1 accuracy and parameter count (complexity) of the optimal NAS-generated architecture, together with the computational effort of the NAS search phase (in GPU days), across all three datasets. Experimentation indicates that our NAS framework is generally insensitive to the a priori thresholds, which therefore do not need to be exhaustively searched. A subset size of 100 is sufficient for the easier MNIST [37] and Fashion-MNIST [38] tasks, and 1000 for CIFAR-10 [33]. Adopting a high hard-ness threshold ($> 0.8$) across all datasets and all NAS strategies enables the searched network architecture to formulate a thorough feature representation for image classification. The best network architectures are discovered with a mastery threshold $\approx 0.5$. P-DARTS and TAS learn deep representations for images more slowly than DARTS.
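The cyclic DARTS schedule above, including the SGDR-style jump back to the maximum learning rate on mastery, can be sketched with a simple triangular wave. This is a plain-Python stand-in for PyTorch's `CyclicLR` using the stated configuration (base $lr = 0.001$, max $lr = 0.01$, step sizes of 10); the reset method is our own framing of the mastery reset:

```python
class CyclicLRWithReset:
    """Triangular cyclic learning rate with a reset-to-max on subset mastery."""
    def __init__(self, base_lr=0.001, max_lr=0.01, step_size=10):
        self.base_lr, self.max_lr, self.step_size = base_lr, max_lr, step_size
        self.step_count = 0

    def lr(self):
        phase = self.step_count % (2 * self.step_size)
        frac = phase / self.step_size          # rises 0..2 over one full cycle
        frac = frac if frac <= 1 else 2 - frac  # triangle wave in [0, 1]
        return self.base_lr + frac * (self.max_lr - self.base_lr)

    def step(self):
        self.step_count += 1

    def reset_on_mastery(self):
        # a new data subset was introduced: jump straight to max_lr so the
        # fresh subset is learned quickly, then fine-tuned as the rate decays
        self.step_count = self.step_size
```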
This slower learning can be attributed to the additional tasks performed alongside reducing classification loss: P-DARTS progressively restricts the search space while increasing architecture depth, and TAS minimises network architecture complexity. Conversely, DARTS can afford a lower mastery threshold ($\approx 0.15$) for the easier MNIST and Fashion-MNIST tasks, but the performance gain is marginal. All presented results use the same hard-ness (0.85) and mastery (0.5) thresholds to ensure fairness.

# 4.2. Hard Example Mining

The GANomaly autoencoder [40] used to encode the images into their latent space representation is trained with contractive loss [41] for 30 epochs, with $bs = 8$ and an Adam optimiser with momentums $\beta_1 = 0.9$ and $\beta_2 = 0.999$, $wd = 0$, $lr = 1e^{-3}$. For the CIFAR-10 task, the autoencoder is instead trained with a combined triplet margin loss [46] and MSE reconstruction loss, weighted under Kendall Loss [42].

# 5. Evaluation

Having validated the feature representation embedding that underpins our dynamic data selection via hard example mining (see Figure 2), we present our evaluation in terms of a comparison of DDS-NAS to contemporary state-of-the-art approaches, with supporting ablation studies.

Table 2: Accuracy, memory footprint, and (search-phase) training cost of the final generated model from DDS-NAS deployed upon DARTS, P-DARTS, and TAS, compared to their original implementations and others. $\dagger$ indicates results without KD-teacher training, owing to the lack of available teacher models for the MNIST and Fashion-MNIST datasets.

# 5.1. Neural Architecture Search

Table 2 presents the performance obtained by the final model generated by DDS-NAS with respect to each dataset under consideration. Across all cases, the performance of our generated models is competitive with the state of the art, with minimal to no impact on generated model size.
Moreover, across all cases, we substantially lower the computational effort required for NAS (0.07 GPU days compared to 1.89 in the case of P-DARTS for MNIST: 27 times quicker). Since we can determine a replacement image for our dynamic subset in average-case $O(\log n)$ time, we are able to reduce the search-phase training cost by one order of magnitude over state-of-the-art results. Without loss in performance, our hard example mining method yields discriminative architectures that can be transferred to CIFAR-100 [33] and ImageNet [50] (Table 3). However, reproducibility presents a particularly significant problem within the NAS domain [51], and TAS ImageNet performance is considerably lower than the literature reports; DDS-NAS-TAS performance is therefore omitted for fairness. Whilst our technique is demonstrated upon commonplace NAS approaches (DARTS, P-DARTS, TAS), it could equally be deployed on top of more recent advancements [47, 15, 16], further minimising any difference in performance.

Table 3: Accuracy and memory footprint of CIFAR-10 searched models transferred to CIFAR-100 and ImageNet.

# 5.2. Ablation Studies

To validate our proposed approach, we compare the performance of DDS-NAS to the selected NAS frameworks, both: (a) without dynamic data selection, in order to ablate the contribution of our combined hard example mining and curriculum learning strategy; and (b) with an untrained autoencoder, to ablate the contribution of the image-dissimilarity-based hard example mining strategy.

Table 4: Ablation studies: accuracy and memory footprint of models generated by DDS-NAS, models generated by the original framework with limited data (equivalent to removing hard example mining and curriculum learning), and models generated by DDS-NAS with an untrained autoencoder (equivalent to removing hard example mining).

# 5.2.1. Without Dynamic Data Selection

For each dataset, we employ all three original implementations (DARTS [6], P-DARTS [7], TAS [8]), but with a subset of the data at each training iteration. This is equivalent to omitting both hard example mining and curriculum learning. We use the same volume of data as adopted by DDS-NAS: 100 randomly selected images for MNIST and Fashion-MNIST, and 1000 for CIFAR-10. Subsequently, we can determine the impact of our curriculum learning and hard example mining pipeline. Comparing the first and second rows of the results for each dataset presented in Table 4, it is evident that DDS-NAS achieves substantially improved accuracy while yielding fractionally larger architectures in some cases. This behaviour is exhibited in MNIST, where the original DARTS framework achieves only $78.28\%$ accuracy after the search phase and $94.43\%$ accuracy after fine-tuning the stacked searched cell (compared to $94.00\%$ and $99.78\%$ respectively for DDS-NAS-DARTS). This performance difference is further highlighted with both the other datasets and other frameworks. The final performance of the original P-DARTS implementation falls behind DDS-NAS across all datasets ($85.74\%$ compared to $95.07\%$ for CIFAR-10, for instance). Interestingly, with hard example mining and curriculum learning omitted in this manner, TAS generates smaller models (0.32M compared to 1.06M for CIFAR-10), but often at the expense of accuracy.

# 5.2.2. Untrained Autoencoder

Figure 3: t-SNE visualisation of the clustering of autoencoded CIFAR-10 image feature representations within the latent space. Training with triplet margin loss with Kendall loss achieves good clustering (left). Training with contractive loss achieves poor clustering (right).
We ablate the autoencoder-derived feature embedding within our hard example mining method by replacing the DDS-NAS autoencoder with one that is untrained, and thus unable to determine the most dissimilar images from a given training data subset. This can be considered a process equivalent to curriculum learning without hard example mining, as the images are effectively randomly sampled. This time, we compare the first and third rows for each dataset in Table 4. Evidently, the models generated by DDS-NAS with an untrained autoencoder are significantly worse (for instance, $92.04\%$ compared to $95.48\%$ on Fashion-MNIST for DDS-NAS-DARTS). On this basis, we can conclude that DDS-NAS necessarily requires a suitable hard example mining approach, for which our image similarity strategy is sufficient. Furthermore, an autoencoder that achieves good reconstruction but a mediocre clustering of embedded features is inadequate for DDS-NAS (Fig. 3, Table 5). Bad clustering, and thus ineffective hard example mining, yields inferior classification accuracy ($95.29\%$) compared to hard example mining with good clustering ($96.57\%$). Similarly, sufficient clustering but poor reconstruction is detrimental to DDS-NAS ($94.94\%$). Lacking both properties yields significantly worse performance ($88.90\%$), wherein there is no correlation between embedding space dissimilarity and image space dissimilarity at all.

Table 5: Accuracy of DDS-NAS-DARTS employing autoencoders with different capabilities on CIFAR-10.

By comparing row two (neither hard example mining nor curriculum learning) and row three (curriculum learning but not hard example mining) for each dataset in Table 4, it is clear that our curriculum learning methodology is somewhat effective even without incorporating hard example mining. DDS-NAS performance with an untrained autoencoder exceeds that of the original framework with limited data in all cases ($88.58\%$ compared to $88.90\%$ for CIFAR-10 with DARTS, $90.03\%$ compared to $91.52\%$ for Fashion-MNIST with P-DARTS).

# 6. Limitations

The modularity of the proposed DDS-NAS framework provides a significant advantage over existing NAS methods, allowing it to be adopted alongside multiple NAS frameworks. Selecting an off-the-shelf autoencoder or training one from scratch is a reasonable approach provided it can generate a low-dimensionality embedding space that offers reasonable reconstruction and clustering capabilities (see Section 5.2). For fine-grained classification tasks, however, this is a challenge (see Figure 2) and remains an open area of research. In addition, the current DDS-NAS approach requires one $kd$-tree per class so that we can perform class-aware dynamic dataset updates. While this offers reasonable robustness towards biased datasets, long-tailed distributions in datasets may present additional challenges, where there are not enough samples for a given class. We might expect training samples to be memorised in this situation, yielding noisy architecture weight-update steps. One simple solution might be to combine samples from classes with few samples into a single $kd$-tree, but this is a direction for future research.
In order to address the scalability challenge within Neural Architecture Search (NAS), we speed up NAS training via dynamic hard example mining within a curriculum learning framework. By utilising an autoencoder that enforces an image similarity embedding in the latent space, we construct an efficient $kd$-tree structure to order images by furthest-neighbour dissimilarity in a low-dimensional embedding. For a given query image from our subsample dataset, we can identify the most dissimilar image within the global dataset in logarithmic time. Via curriculum learning, we then dynamically re-formulate an unbiased subsample dataset for NAS optimisation, upon which the current NAS solution architecture performs poorly. We show that our DDS-NAS framework speeds up gradient-based NAS strategies by up to 27x without loss in performance. By maximising the contribution of each image sample during training, we reduce the duration of a NAS training cycle and the number of iterations required for convergence.
# 1 Introduction

A query optimizer is a performance-critical component in every database system. It translates declarative user queries into efficient execution plans [3, 45]. There have been numerous efforts to learn query optimizers (LQOs) (e.g., [18, 33, 34, 60]) to reduce the reliance on manual tuning and expert intervention, and ultimately lead to more intelligent and responsive database systems. Unfortunately, LQOs suffer from three main drawbacks. First, they can produce slow execution plans at the beginning of the learning process (sometimes orders of magnitude slower than the optimal plan [28]), when the probability of selecting disastrous plans is high. These early disastrous plans can slow the LQO's convergence to efficient query plans later. Second, although LQOs can outperform traditional optimizers on average, they often perform catastrophically (e.g., a $100\mathrm{x}$ query latency increase) in tail cases, especially when the training data is sparse [34]. Third, LQOs are normally trained for a specific workload; their performance degrades significantly when distribution shifts exist in the query workloads and the underlying data [34, 40, 49]. Given these drawbacks, verifying that the LQO's generated plans satisfy the critical latency constraints in real-life applications is crucial. Unfortunately, typical model checking techniques (e.g., [9, 11]), which have been successfully investigated to verify the properties of other database components such as transaction management and concurrency control, fail when the search space to be explored grows drastically, as in query optimizers. In addition, statistical variations of these techniques (e.g., [10]) do not perform verification during runtime. Using these techniques, an LQO might be verified to be constraint-compliant a priori; however, during runtime, we may observe certain query plans that violate the constraints due to unknown changes in the execution environment.
Additionally, these techniques should be able to verify LQOs operating in dynamic environments. Meanwhile, Conformal Prediction (CP) [2, 56] has recently emerged as an efficient solution for performing runtime verification (e.g., [6, 57]) with formal guarantees (e.g., [8, 12, 29, 44]). In particular, CP is a rigorous statistical tool to quantify the uncertainty of ML models' predictions while allowing users to specify the desired level of confidence in the quantification and remaining agnostic to the details of the ML models. CP-based runtime verification has shown great success in verifying many cyber-physical systems such as autonomous cars [29], autonomous robots [41], and aircraft simulation [29, 44], among others. However, CP-based runtime verification has never before been explored in the context of database systems. In this paper, we present the first study of the LQO verification problem using CP. Specifically, we use CP to solve the LQO verification problem in two ways. First, we employ CP to provide user-controlled bounded ranges for the actual latency of plans constructed by LQOs, even before executing them (e.g., verifying that an LQO plan for a specific query will never result in an execution time of more than 300 msec with a probability of at least $90\%$). Second, we go further and explore the use of CP to perform runtime verification, with formal bounds, that can detect any performance constraint violation early, during the LQO's plan construction process, based solely on the partial plans constructed so far and before the full plan is completed (e.g., with a user-defined confidence level of $95\%$, we can detect at the second step of building a query plan that the eventual complete plan will fail to satisfy a specific latency constraint). This helps in planning how to handle such violations at plan construction time and before execution (e.g., falling back to a traditional query optimizer for re-planning).
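Conceptually, this runtime check reduces to comparing a calibrated bound against the latency budget at each construction step. In the sketch below, `partial_plan_lower_bounds[t]` is assumed to hold a conformal lower bound (at the chosen confidence level) on the latency of any complete plan extending the step-`t` partial plan; this framing is our simplification, and the paper's actual construction differs in detail:

```python
def detect_violation(partial_plan_lower_bounds, latency_budget_ms):
    """Return the earliest construction step at which the eventual complete
    plan will (at the calibrated confidence) exceed the latency budget,
    or None if no violation is detected during construction."""
    for step, lower_bound in enumerate(partial_plan_lower_bounds):
        if lower_bound > latency_budget_ms:
            return step  # e.g. trigger fallback to a traditional optimizer here
    return None
```

Detecting the violation at an early step leaves time to re-plan before execution, which is exactly the handling strategy described above.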
For both scenarios, we introduce an adaptive CP framework to support LQOs in static cases (LQOs are trained and tested on the same workload) and in distribution shift cases (evaluating LQOs on different workloads). Additionally, we propose a CP-guided plan search algorithm that relies on upper bounds of the actual latency, instead of the typical costs predicted by LQOs, to generate better query plans within shorter time frames. We also provide rigorous theoretical proofs of our approaches to ensure correctness, and frameworks that facilitate the integration of our CP-based verification approaches with LQOs in real-world environments. Our experimental results on the JOB [28] and TPC-H [7] workloads confirm the correctness of the latency bounds across multiple LQOs, including Balsa [60], Lero [65], and RTOS [63], all aligning with theoretical expectations. We then demonstrate the effectiveness of our adaptive CP framework under distribution shift by evaluating it on workloads transitioning to CEB [39] and JOBLighttrain [24]. In runtime verification, we show that our CP-based methods accurately detect violations, and our violation handling reduces overall execution latency by 12,030.1 ms across 7 violating queries. Using the CP-guided algorithm, our approach improves plan quality in $33\%$ of queries from a moderately trained LQO, achieving an additional $9.96\%$ reduction in overall planning latency across all test queries. For well-trained LQOs, we observe better plan quality and faster query planning with our CP-guided plan search algorithm. These comprehensive experiments substantiate the correctness and effectiveness of our CP-based verification frameworks. In summary, our novel contributions are as follows:

• We are the first to formulate the Learned Query Optimizer (LQO) verification problem as a Conformal Prediction (CP) problem.
• We develop CP-based latency bounds for LQOs, with formal proofs, to provide, at a user-defined confidence level, a bounded range for the actual latency of query plans.
• We design CP-based runtime verification, with formal bounds, which detects and addresses long-latency query plans even before plan construction is complete.
• We propose an adaptive CP framework for LQOs which aids in handling distribution shifts, enhancing the robustness of the verification framework and making it suitable for real-world scenarios.
• We introduce a generic CP-guided plan search algorithm that can improve both the query plan quality and the planning time of a trained LQO.
• Our experimental evaluation of the proposed CP-based verification frameworks, across three LQOs and four workloads, demonstrates the correctness and effectiveness of our CP-based frameworks for LQOs.

We believe that our proposed CP-based verification approaches hold promising potential for future applications across other learned components in database systems.

# 2 Background

In this section, we first discuss the granularity levels of the prediction decisions to be verified in learned query optimizers (Section 2.1). Then, we provide a brief introduction to the Conformal Prediction (Section 2.2) and Signal Temporal Logic (Section 2.3) tools, which are used to build our verification framework and to formally represent the performance constraints we verify LQOs against, respectively.
Figure 1: (a) White-box LQOs (e.g., Balsa, Neo): a partial plans constructor/searcher, guided by a learned cost predictor, makes an ML decision per partial plan at each construction step toward the final plan. (b) Black-box LQOs (e.g., Bao, RTOS): a candidate plans generator (e.g., using hint sets or join orders with a traditional optimizer) produces complete candidate plans, and a learned complete-plan selector makes an ML decision per complete plan, returning the final plan and its predicted cost.
# 2.1 Granularity Levels of Decisions to be Verified in Learned Query Optimizers
While Learned Query Optimizers (LQOs) (e.g., [18, 33, 34, 60, 63, 65]) can improve performance over traditional optimizers by adapting to complex queries and data distributions, their reliance on ML models to make decisions introduces variability and potential unpredictability in performance. Therefore, verifying LQOs against user-defined performance constraints is crucial to ensure that generated plans meet specific efficiency and reliability standards (e.g., the execution time of a specific query should be $\leq 100$ ms). Broadly, LQOs fall into three categories based on how ML is used. The first category uses ML to improve specific components of the optimizer (e.g., the cardinality estimator [25, 49, 61] and the cost estimator [36, 48]). The second category uses ML to construct the query plan from scratch, replacing the traditional optimizer (e.g., [34, 60]). The third category uses ML to steer the traditional optimizer in constructing better candidate plans and/or in selecting among them (e.g., [33, 63, 65]). In this paper, we focus on verifying the ML decisions made by LQOs in the second and third categories only, where ML is involved in constructing the query plan itself. However, the granularity level of these decisions differs between these two categories. Figure 1 shows a high-level overview of these two LQO categories, highlighting their ML decisions in red.
In the second category, fine-grained prediction decisions are performed to construct the query plan step-by-step and predict the associated cost at each step. For instance, Balsa [60] uses a learned value model to construct the optimized plan operator-by-operator and predict the intermediate cost for the final plan construction at each operator. We refer to the second category as white-box LQOs because we rely on these fine-grained prediction decisions during the verification process. In contrast, in the third category, learned models neither perform step-by-step plan construction nor intermediate cost predictions. Instead, these models are used to select the best plan from a set of candidate plans, either by predicting the high-level cost for each candidate [33] or by assigning a relative rank to all candidates [63]. These candidate plans are typically constructed by a traditional optimizer and based on auxiliary information, such as join orders [63] and hint sets [33]. Therefore, in this category, the selection decisions are made only at the level of the whole plan and its high-level associated cost, if available. We refer to the third category as black-box LQOs because we only access coarse-grained plan-level decisions (i.e., no partial-plan-level predictions) during the verification process.
# 2.2 Standard Conformal Prediction (CP)
We build our LQO verification framework, as shown later, based on Conformal Prediction (CP) [2, 56], a rigorous statistical tool that efficiently quantifies the uncertainty of ML models' predictions. CP enables users to specify the desired level of confidence in the quantification while being agnostic to the details of the ML models.
To introduce CP, assume that $R^{(0)}, R^{(1)}, \ldots, R^{(K)}$ are $K+1$ independent and identically distributed (i.i.d.) random variables, where each variable $R^{(i)}$ for $i \in \{0, \ldots, K\}$ is an estimate of the prediction error between the true output $y^{(i)}$, i.e., the ground truth, for input $x^{(i)}$ and the predicted value of this output $\eta(x^{(i)})$ by the ML predictor $\eta$. Formally, this error can be expressed as:
$$ R^{(i)} := \|y^{(i)} - \eta(x^{(i)})\|, $$
where $\|\cdot\|$ denotes the absolute value. $R^{(i)}$ is commonly referred to as the non-conformity score, where a small score suggests a strong predictive model and a large score indicates poorer performance (i.e., less accurate predictions). Now, assuming that $R^{(0)}$ belongs to test data and $R^{(1)}, \ldots, R^{(K)}$ are calibration data, the objective of CP is to quantify the uncertainty of $R^{(0)}$ using $R^{(1)}, \ldots, R^{(K)}$. Specifically, for a user-defined uncertainty probability $\delta \in [0, 1]$ (i.e., $1-\delta$ is a confidence level), CP aims to compute an upper bound $C(R^{(1)}, \ldots, R^{(K)})$ for the prediction error $R^{(0)}$ such that:
$$ \operatorname{Prob}(R^{(0)} \leq C(R^{(1)}, \ldots, R^{(K)})) \geq 1 - \delta $$
This upper bound $C(R^{(1)}, \ldots, R^{(K)})$ can be efficiently determined by computing the $(1-\delta)$th quantile of the empirical distribution of $R^{(1)}, \ldots, R^{(K)}$ and $\infty$, assuming training, calibration, and testing data originate from the same underlying distribution (i.e., the scores $R^{(0)}, R^{(1)}, \ldots, R^{(K)}$ are exchangeable) [2].
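Concretely, this quantile computation can be sketched in a few lines (a minimal pure-Python illustration with our own variable names, not a specific library's implementation):

```python
import math

def conformal_upper_bound(cal_scores, delta):
    """Upper bound C: the ceil((K+1)(1-delta))-th smallest calibration
    score, or infinity when the calibration set is too small."""
    K = len(cal_scores)
    rank = math.ceil((K + 1) * (1 - delta))
    if rank > K:                         # bound would be trivial
        return math.inf
    return sorted(cal_scores)[rank - 1]  # rank is 1-indexed

# Scores R^(1..K): absolute errors |y - eta(x)| on the calibration data.
scores = [0.2, 0.5, 0.1, 0.9, 0.4, 0.3, 0.7, 0.6, 0.8, 1.0]
C = conformal_upper_bound(scores, delta=0.2)
# The unseen test error R^(0) then satisfies R^(0) <= C with probability
# >= 0.8, i.e., y lies in [eta(x) - C, eta(x) + C].
```

With $\delta = 0.2$ and $K = 10$, the rank is $\lceil 11 \cdot 0.8 \rceil = 9$, so $C$ is the 9th smallest calibration score.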
Although this assumption aligns with the data and workload scenarios used in most state-of-the-art workload-aware LQOs (e.g., [33, 34, 60]), we extend our LQO verification framework to support adaptive CP for distribution shifts [64], as shown later in Section 3.2. For simplicity, we will refer to the upper bound $C(R^{(1)}, \ldots, R^{(K)})$ as $C$ in the rest of the paper. Note that CP guarantees marginal coverage, which is not conditional on the calibration data [2].
# 2.3 Formal Representation of Performance Constraints to be Verified with CP
To formally represent the desired performance constraints to verify against LQOs, we employ Signal Temporal Logic (STL) [17], a CP-compliant formal logical language for verification. STL was originally introduced to verify properties of time-series data (e.g., signals), especially in the context of cyber-physical systems [32]. STL can also handle non-traditional time-series data where sequence or order matters. An STL specification $\phi$ is recursively defined as $\phi := True \mid \mu \mid \neg\phi \mid \phi \land \psi \mid \mathbf{G}_{[a,b]}\phi$, where $\psi$ is an STL formula. $\neg$ and $\land$ are the negation and conjunction operators, respectively. The always operator $\mathbf{G}_{[a,b]}\phi$ encodes that $\phi$ has to hold for the entire duration or steps between $a$ and $b$. $\mu$ is a predicate to check whether the semantics of the specification $\phi$ are achieved or not, i.e., $\mu: \mathbb{R}^n \to \{\mathrm{True}, \mathrm{False}\}$. For instance, we can define an operator $\mathbf{G}_{[0,N-1]}\phi$ to check whether the query plan generated by an LQO will always have a latency less than 750 msec at each of its $N$ execution steps (i.e., partial plans).
In this case, $x := (x_0, x_1, \ldots, x_{N-1})$ will represent the partial plan latencies at steps $0, 1, \ldots, N-1$, and the condition $x_\tau < 750$ forms the semantics of the specification $\phi$ that needs to be checked at each step $\tau$. Moreover, we can use robust semantics $\rho^{\phi}(x)$, as in [17, 19], to extend the binary evaluation of STL satisfaction (i.e., $\mu(x)$) by providing a quantitative measure of the degree to which this satisfaction is achieved. Unlike traditional binary satisfaction, robust semantics $\rho^{\phi}(x)$ produces a real-valued metric: positive values indicate that the specification $\phi$ is satisfied, with the magnitude representing the strength of satisfaction, whereas negative values denote a violation, with the magnitude reflecting the severity of the violation. For example, considering the previously discussed specification $\phi$ with condition $x_\tau < 750$, the robust satisfaction $\rho^{\phi}(x)$ can be defined to provide a quantitative measure of how robustly all latencies $x$ satisfy this condition by calculating $(750 - x_\tau)$ for each $x_\tau \in x$. In this case, $x_\tau = 100$ exhibits stronger robustness in satisfying $\phi$ than $x_\tau = 600$, whereas $x_\tau = 800$ results in a violation. More details about robust STL semantics are in [17, 19].
# 3 CP-based Latency Bounds for LQOs
As mentioned in Section 2.1, we focus on two categories of LQOs, white-box and black-box, both of which use learned models to construct the query plan itself. In white-box LQOs (e.g., [34, 60]), the learned model builds the query plan incrementally, constructing one partial plan at a time based on a predicted cost (Figure 1 (a)). Here, we employ CP to obtain user-controlled bounded ranges for the actual latency (not the predicted cost) of these constructed partial plans before executing them.
For example, given a partial plan $s$ and a user-defined confidence level of 90%, we can determine a latency range $[l_{min}^{s}, l_{max}^{s}]$ that the latency $l^{s}$ of $s$ will fall within with at least 90% probability, where $l_{min}^{s}$ and $l_{max}^{s}$ represent the lower and upper latency bounds, respectively. The intuition is to leverage CP to gain insights into the relationship between predicted costs and actual latencies of partial plans from the LQO's calibration query workloads, and then use these insights to obtain latency ranges for testing queries. Similarly, in black-box LQOs (e.g., [33, 63]), we use CP to provide such user-controlled bounded ranges, yet for the end-to-end latencies of complete plans rather than partial ones. This is because black-box LQOs rely on learned models solely to select the best plan among complete candidates (Figure 1 (b)). Latency-Cost Non-conformity Score. A critical step in applying CP is defining the non-conformity score $R$ (see Section 2.2), as it quantifies the deviation between the predicted and actual outcomes. In the LQO context, we focus on how the actual latency of a plan, whether partial or complete, deviates from its predicted cost. Following the CP notation, we formally define a latency-cost non-conformity score $R^{(i)}$ for the plan at step $\tau$ in a query $q_j$ to be:
$$ R^{(i)} := \|t_{\tau}^{(j)} - \hat{c}_{\tau}^{(j)}\| $$
where $t_{\tau}^{(j)}$ is the actual latency of this plan and $\hat{c}_{\tau}^{(j)}$ is its predicted cost. Note that $R^{(i)}$ represents a score for a calibration plan (i.e., $R^{(i)} \in \{R^{(1)}, \ldots, R^{(K)}\}$) when $q_j$ belongs to the calibration workload $Q^{Cal}$, and represents a score for a testing plan $R^{(0)}$ when $q_j$ belongs to the testing workload $Q^{Tst}$. In the following, we introduce our approach for using CP to obtain the bounded latency ranges when the calibration and testing distributions are similar, i.e., the static case (Section 3.1), and then we extend it to handle distribution shifts in the testing distribution, i.e., the distribution shift case (Section 3.2). Finally, we detail our proposed verification framework (Section 3.3).
# 3.1 Latency Bounds in Static Cases
Using Equations 1 and 2, we can directly derive an upper bound $C$ on the latency of any plan, whether partial or complete, in a testing query as the $(1-\delta)$th quantile of the latency-cost non-conformity scores such that:
$$ P(\|t_{\tau}^{(j)} - \hat{c}_{\tau}^{(j)}\| \le C) \ge 1 - \delta $$
By reformulating Equation 3, we can compute a range for the actual latency $t_{\tau}^{(j)}$ of this plan, with confidence $1-\delta$, based on its predicted cost $\hat{c}_{\tau}^{(j)}$ and the upper bound $C$, as follows:
$$ P(\hat{c}_{\tau}^{(j)} - C \leq t_{\tau}^{(j)} \leq \hat{c}_{\tau}^{(j)} + C) \geq 1 - \delta $$
This allows us to estimate a bounded range for the actual latency even prior to executing the plan. However, the tightness of this range primarily depends on the upper bound $C$, which itself is influenced by the number of calibration plans used to establish it. Therefore, determining the sufficient number of calibration plans to construct a valid upper bound $C$ is crucial. Here, we derive a lower bound on this number:
Lemma 1 (Lower Bound on Required Calibration Plans). Let the latency-cost non-conformity scores of a testing plan $R^{(0)}$ and $K$ calibration plans $R^{(1)}, \ldots, R^{(K)}$ be exchangeable realizations of i.i.d. random variables, $\delta \in [0, 1]$ be a user-defined uncertainty probability, and $C$ be an upper bound on the score $R^{(0)}$ of the testing plan, calculated at a confidence level of $1-\delta$. Then, the lower bound on the number of calibration plans $K$ to calculate $C$ is $\frac{1-\delta}{\delta}$.
Proof. If the scores $R^{(0)}, R^{(1)}, \ldots, R^{(K)}$ are exchangeable (i.e., independent of their order and drawn from the same distribution), then the joint distribution of these scores remains unchanged under permutation [2]. This means that the rank of any score, including $R^{(0)}$, is uniformly distributed on the ranks $\{1, \ldots, K+1\}$. As a result, we can estimate the probability of $R^{(0)}$'s rank in this uniform distribution using the $1-\delta$ quantile as follows:
$$ \operatorname{Prob}(\text{Rank of } R^{(0)} \leq \lceil (K+1)(1-\delta) \rceil) \geq 1 - \delta $$
where $\lceil \cdot \rceil$ denotes the ceiling function. However, according to [2], if $\lceil (K+1)(1-\delta) \rceil > K$, then the upper bound $C$ becomes trivial and uninformative, yielding $C = \infty$. Therefore, to ensure that $C$ is non-trivial, we need the following condition:
$$ \lceil (K+1)(1-\delta) \rceil \leq K $$
From this, we can easily get $K \ge \frac{1-\delta}{\delta}$, which means the lower bound on the number of calibration plans is $\frac{1-\delta}{\delta}$.
# 3.2 Latency Bounds in Distribution Shift Cases
In the preceding section, we assumed that the test data $\{R^{(0)}\}$ and the calibration data $\{R^{(1)}, \ldots, R^{(K)}\}$ are drawn from the same underlying distribution.
However, this assumption does not hold in workload drift scenarios, i.e., new or evolving workloads, which are common in database applications [40, 58, 59]. For instance, slight changes in query patterns (e.g., filters on new columns) can violate the exchangeability assumption of $R^{(0)}, R^{(1)}, \ldots, R^{(K)}$ (see Section 2.2), leading to an invalid upper bound $C$. To address this, we adopt an adaptive CP variation, inspired by [37], which dynamically adjusts the upper bound to $\tilde{C}$ based on the distribution shift in the testing workload only, assuming that this shift can be empirically estimated. This approach ensures that the newly calculated bounded latency range, based on $\tilde{C}$, preserves the user-specified confidence level $1-\delta$, even in the presence of distribution shifts. Specifically, let $\mathcal{D}$ represent the distribution of the testing workload (i.e., $R^{(0)} \sim \mathcal{D}$) and $\mathcal{D}_0$ represent the distribution of the calibration workload (i.e., $R^{(1)}, \ldots, R^{(K)} \sim \mathcal{D}_0$). We can rigorously quantify the deviation between the calibration and test distributions using the total variation distance $TV(\mathcal{D}, \mathcal{D}_0) = \frac{1}{2} \int_{x} |P(x) - Q(x)| dx$, where $P(x)$ and $Q(x)$ denote the probability density functions (PDFs) of $\mathcal{D}$ and $\mathcal{D}_0$, respectively [16]. To realize this in our LQO context, we empirically estimate these PDFs of latency-cost non-conformity scores using kernel density estimators (KDEs) with Gaussian kernels.
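As an illustration, this shift estimate can be approximated with a hand-rolled Gaussian KDE and grid integration (a self-contained sketch; a production system would use a library estimator, and the fixed bandwidth here is an arbitrary choice of ours):

```python
import math

def gaussian_kde(sample, bandwidth):
    """Return a PDF built from a Gaussian kernel at each sample point."""
    def pdf(x):
        n = len(sample)
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in sample) / (n * bandwidth * math.sqrt(2 * math.pi))
    return pdf

def tv_distance(scores_cal, scores_test, bandwidth=0.5, grid_steps=2000):
    """Approximate TV(D, D0) = 0.5 * integral of |P - Q| on a grid."""
    p = gaussian_kde(scores_test, bandwidth)   # PDF of D
    q = gaussian_kde(scores_cal, bandwidth)    # PDF of D0
    lo = min(scores_cal + scores_test) - 4 * bandwidth
    hi = max(scores_cal + scores_test) + 4 * bandwidth
    dx = (hi - lo) / grid_steps
    return 0.5 * sum(abs(p(lo + i * dx) - q(lo + i * dx)) * dx
                     for i in range(grid_steps))

cal = [0.1, 0.2, 0.3, 0.4, 0.5]         # calibration non-conformity scores
shifted = [1.1, 1.2, 1.3, 1.4, 1.5]     # test scores drifted upward
eps_hat = tv_distance(cal, shifted)     # epsilon must be set >= this estimate
```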
According to [37, 64], we can compute an adjusted uncertainty probability $\tilde{\delta}$ to account for the distribution shift from $\mathcal{D}_0$ to $\mathcal{D}$ as follows:
$$ \tilde{\delta} = 1 - g^{-1}\left(g\left(\left(1 + \frac{1}{K}\right) g^{-1}(1 - \delta)\right)\right) $$
where $\delta$ is the original user-specified uncertainty probability, $K$ is the number of calibration plans, and $g(\beta) = \max(0, \beta - \epsilon)$ and its inverse $g^{-1}(\beta) = \min(1, \beta + \epsilon)$ are two functions calculated based on the allowable distribution shift $\epsilon$, which must be set to a value greater than or equal to $TV(\mathcal{D}, \mathcal{D}_0)$. Then, similar to Equation 4, the new latency bounds are calculated as:
$$ P(\hat{c}_{\tau}^{(j)} - \tilde{C} \leq t_{\tau}^{(j)} \leq \hat{c}_{\tau}^{(j)} + \tilde{C}) \geq 1 - \delta $$
where $\tilde{C}$ is the $(1-\tilde{\delta})$th quantile of the latency-cost non-conformity scores from the original calibration workload $Q^{Cal} \sim \mathcal{D}_0$.
# 3.3 Framework Overview
Figure 2 gives an overview of our CP-based framework for providing bounded latency ranges before execution. Offline Phase. After training the LQO, we first construct a set of latency-cost non-conformity scores using all plans, whether partial or complete, from the calibration query workload $Q^{Cal}$. For each plan, we collect its predicted cost during the LQO's planning phase and its actual latency from execution. These scores are then sorted in ascending order and stored to be used, along with the user-specified uncertainty probability $\delta$, to compute any upper bound, whether $C$ in the static case or $\tilde{C}$ in the distribution shift case.
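The shift-adjusted probability $\tilde{\delta}$ used to compute $\tilde{C}$ (Section 3.2) can be sketched as follows (pure Python; `adjusted_delta` is our illustrative name):

```python
def adjusted_delta(delta, K, eps):
    """Adjusted uncertainty for a shift of at most eps in total variation:
    delta_tilde = 1 - g_inv( g( (1 + 1/K) * g_inv(1 - delta) ) ),
    with g(b) = max(0, b - eps) and g_inv(b) = min(1, b + eps)."""
    g = lambda b: max(0.0, b - eps)
    g_inv = lambda b: min(1.0, b + eps)
    return 1.0 - g_inv(g((1.0 + 1.0 / K) * g_inv(1.0 - delta)))

# With eps = 0 only the finite-sample (1 + 1/K) inflation remains;
# a positive eps shrinks delta_tilde further.
```

A smaller $\tilde{\delta}$ means the $(1-\tilde{\delta})$th quantile sits higher in the sorted calibration scores, so $\tilde{C}$ becomes more conservative than the static-case $C$.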
Figure 2: Overview of our CP-based bounded latency range framework. Offline phase: latency-cost non-conformity scores from calibration queries $Q_1, Q_2, \ldots$ are sorted and fed to the upper bound calculator ($C$), given the user-defined uncertainty $\delta$. Online phase: the trained LQO's plan and predicted costs, together with one or multiple upper bounds (and, under distribution shift from $\mathcal{D}_0$ to $\mathcal{D}$, the distribution shift quantifier and handler), are passed to the bounded latency range constructor, yielding ranges (e.g., [90, 110] for a predicted cost of 100 with a unified $C = 10$) that hold with at least $1-\delta$ probability.
Online Phase. The user first submits a testing query to the trained LQO, which generates a query plan with predicted costs (either one per partial plan for white-box LQOs or a single cost for the entire plan for black-box LQOs). In case there is a distribution shift in the testing queries from $\mathcal{D}_0$ to $\mathcal{D}$, queries are also sent (represented by a dashed line in Figure 2) to a distribution shift quantifier to determine the allowable distribution shift $\epsilon$ (see Section 3.2). This value, along with the user-defined parameter $\delta$, is then used to construct the adjusted upper bound $\tilde{C}$. Hereafter, we will use $C$ to denote the upper bound for both the static and distribution shift cases, as they are applied identically in subsequent steps. We support two modes for calculating the upper bound, namely Unified and Pattern-based, depending on the desired granularity level. In the Unified mode, non-conformity scores from all partial and complete plans are treated equally to construct a single upper bound value $C$, applicable to both partial and complete plans of the testing query. In the Pattern-based mode, we account for the internal structure of partial plans by setting a unique $C$ value for each parent-children pattern. This $C$ value is applied only when that pattern appears in the testing query.
Note that pattern-based upper bounds are available only for white-box LQOs and are effective only if we have sufficient calibration scores for each pattern (i.e., meeting the lower bound on $K$ in Lemma 1 for each pattern). Otherwise, the unified upper bound is preferable. Algorithm 1 illustrates how to construct the two types of upper bounds given a specific user-defined uncertainty probability $\delta$. Once the upper bound(s) construction is done, the query plan along with the upper bound(s) are passed to the bounded latency range constructor to obtain the bounded ranges as in Equation 4. Figure 3 shows an example of using both unified and pattern-based upper bounds to calculate the bounded latency ranges for one testing query plan. Here, we assume a white-box LQO that constructs the plan from the bottom up. Initially, it constructs a Hash Join (HJ) at the first level, with Sequential Scan (SS) operations as left and right children. This parent-children pattern is labeled as (HJ, SS, SS). Similarly, the partial plan at the second level has the (HJ, HJ, SS) pattern. In this example, the LQO predicts costs of 60 and 100 for these two partial plans.
Figure 3: Example of calculating guaranteed bounded latency ranges for one testing query plan using (a) a unified upper bound $C$ and (b) pattern-based upper bounds $(C_1, C_2)$.
In the case of using the unified upper bound (Figure 3 (a)), a single value $C = 10$ is applied, resulting in latency ranges of [50, 70] and [90, 110] for the first and second partial plans, respectively. In the case of using pattern-based upper bounds (Figure 3 (b)), two different values $C_1 = 5$ and $C_2 = 10$ are used, resulting in latency ranges of [55, 65] and [90, 110] for the (HJ, SS, SS) and (HJ, HJ, SS) patterns, respectively.
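A compact sketch of Algorithm 1's two modes (pure Python; the pattern keys and calibration data are illustrative, and the per-pattern fallback simply follows the rule above):

```python
import math
from collections import defaultdict

def quantile_bound(scores, delta):
    """(1-delta)th conformal quantile of the scores, or infinity."""
    K = len(scores)
    rank = math.ceil((K + 1) * (1 - delta))
    return math.inf if rank > K else sorted(scores)[rank - 1]

def build_upper_bounds(cal_plans, delta):
    """cal_plans: (parent-children pattern, latency-cost score) pairs from
    the calibration workload. A pattern gets its own bound only if it meets
    Lemma 1's minimum size K >= (1-delta)/delta; otherwise the unified
    bound is used for it."""
    k_min = (1 - delta) / delta
    unified = quantile_bound([s for _, s in cal_plans], delta)
    by_pattern = defaultdict(list)
    for pattern, score in cal_plans:
        by_pattern[pattern].append(score)
    pattern_bounds = {p: quantile_bound(s, delta)
                      for p, s in by_pattern.items() if len(s) >= k_min}
    return unified, pattern_bounds

cal_plans = [("HJ,SS,SS", s) for s in [1, 2, 3, 4, 5]] + \
            [("HJ,HJ,SS", s) for s in [10, 20]]   # only 2 scores < k_min
unified, per_pattern = build_upper_bounds(cal_plans, delta=0.2)
```

In this toy run, (HJ,HJ,SS) has too few calibration scores for $\delta = 0.2$, so it receives no pattern-specific bound and would fall back to the unified one.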
# 4 CP-based Runtime Verification for White-Box LQOs
Algorithm 1: Constructing a list of upper bound(s) $C$ on the latency-cost non-conformity scores.
Earlier (Section 3), we showed how CP can provide a bounded latency range for partial or complete query plans, helping assess the uncertainty of LQO decisions before execution. Here, we aim to go further by exploring the use of CP to detect any performance constraint violations early, during the plan construction process of white-box LQOs (e.g., [34, 60]), based solely on the partial plans constructed so far and before the full plan is completed. Suppose $\mathcal{D}$ is an unknown distribution over the query plans generated by a white-box LQO. Let $X := (X_0, X_1, \ldots) \sim \mathcal{D}$ represent a random query plan generated by the LQO, where $X_\tau$ is a random variable denoting the state of the generated partial plan at step $\tau$ (e.g., predicted cost or actual latency). Then, we can formally define the white-box LQO runtime verification problem as follows:
Definition 1 (The White-Box LQO Runtime Verification Problem). Assume a white-box LQO (e.g., [60]) and a testing query $q$ for which this LQO has already constructed partial plans up to step $\tau$ and is still running. We aim to verify whether all partial plans generated by this LQO (past and future) result in a complete plan, represented by $X$, that satisfies a user-defined STL-based performance constraint $\phi$ with a confidence level $1-\delta$, i.e., $\mathrm{Prob}(X \models \phi) \ge 1-\delta$, where $\delta \in [0, 1]$ is a constraint violation probability.
Let $x := (x_0, x_1, \ldots)$ be the realization of $X := (X_0, X_1, \ldots)$, where $x_{\mathrm{obs}} := (x_0, \ldots, x_\tau)$ represents the constructed partial plans up to step $\tau$ and $x_{\mathrm{un}} := (x_{\tau+1}, x_{\tau+2}, \ldots)$ represents the future unknown partial plans that will be predicted. Since existing white-box LQOs (e.g., [34, 60]) predict one partial plan at a time, we can estimate the realization $x$ at step $\tau$, with its plans constructed so far (i.e., $x_{\mathrm{obs}}$) and its next prediction at step $\tau+1$, as follows:
$$ \hat{x} := (x_{\mathrm{obs}}, \hat{x}_{\tau+1|\tau}) $$
As described in Definition 1, our goal is to verify the quality of the white-box LQO's complete query plan, represented by $X$, against a user-defined STL specification $\phi$. We can use robust semantics $\rho^{\phi}(.)$ (see Section 2.3) to achieve that. First, we define $\rho^{\phi}(X)$ to indicate how robustly the specification $\phi$ is satisfied by the complete query plan, and $\rho^{\phi}(\hat{x})$ to denote the estimate of this robustness obtained so far based on the observations $x_{\mathrm{obs}}$ and the prediction $\hat{x}_{\tau+1|\tau}$.
Then, according to [12, 29], we can use CP (Equation 1) to define an upper bound $C$ on the difference between the actual robustness $\rho^{\phi}(X)$ of the complete query and the estimate of this robustness $\rho^{\phi}(\hat{x})$ up to step $\tau$ such that:
$$ \operatorname{Prob}(\rho^{\phi}(\hat{x}) - \rho^{\phi}(X) \leq C) \geq 1 - \delta $$
This upper bound can be easily obtained from the calibration query workload $Q^{Cal}$ by calculating the following non-conformity score $R^{(i)}$ for each partial plan in each calibration query $q_i \in Q^{Cal}$:
$$ R^{(i)} := \rho^{\phi}(\hat{x}^{(i)}) - \rho^{\phi}(x^{(i)}) $$
where $x^{(i)}$ is the realization of $X$ for query $q_i$ (i.e., actual latencies and predicted costs for all partial plans in $q_i$) and $\hat{x}^{(i)}$ is the estimate of this realization up to step $\tau$ only (i.e., $\hat{x}^{(i)} := (x_{\mathrm{obs}}^{(i)}, \hat{x}_{\tau+1|\tau}^{(i)})$). Given that, we can define the following condition to verify whether the LQO satisfies $\phi$ or not.
Lemma 2 (The White-Box LQO Runtime Verification Condition). Given a testing query $q$ that uses an LQO to generate its plan, represented by $X$, with $\hat{x} := (x_{obs}, \hat{x}_{\tau+1|\tau})$ realizing the constructed and predicted partial plans at step $\tau$, an STL constraint $\phi$, a robust semantics measure $\rho^{\phi}(.)$ for this constraint, and a constraint violation probability $\delta \in [0, 1]$.
Then, we can guarantee that these constructed and predicted partial plans $\hat{x}$ so far will result in a complete plan that satisfies the constraint $\phi$ with a confidence level $1-\delta$, i.e., $\mathrm{Prob}(X \models \phi) \geq 1-\delta$, only if the robust semantics estimate satisfies $\rho^{\phi}(\hat{x}) > C$, where $C$ is the upper bound defined in Equation 8.
Figure 4: Overview of our CP-based runtime verification framework. Offline phase: non-conformity scores based on $\rho^{\phi}$ are computed from calibration queries $Q_1, Q_2, \ldots$, sorted, and fed to the upper bound calculator ($C$), given the user-defined uncertainty $\delta$ and constraint $\phi$. Online phase: the CP-based runtime verification module checks each partial plan constructed by the white-box LQO; upon a violation, the violation handler falls back to a traditional query optimizer (e.g., PostgreSQL), and otherwise the verified plan is returned with at least $1-\delta$ probability. Under distribution shift from $\mathcal{D}_0$ to $\mathcal{D}$, the distribution shift quantifier and handler produce the adjusted bound $\tilde{C}$.
Proof. By reformulating Equation 8, we can obtain:
$$ P(\rho^{\phi}(X) \ge \rho^{\phi}(\hat{x}) - C) \ge 1 - \delta $$
If $\rho^{\phi}(\hat{x}) > C$, this implies:
$$ P(\rho^{\phi}(X) > 0) \geq 1 - \delta $$
which, according to Section 2.3, further implies:
$$ P(X \models \phi) \geq 1 - \delta $$
because $\rho^{\phi}(X) > 0$ directly implies that $X \models \phi$.
Note that changing the constraint specification $\phi$ and/or the robust semantics measure $\rho^{\phi}(.)$ does not require retraining the white-box LQO to obtain valid guarantees, because its prediction decisions, i.e., partial plans, are agnostic to any constraint specification.
# 4.1 Framework Overview
Figure 4 presents an overview of our CP-based runtime verification framework, which detects violations of user-defined performance constraints $\phi$ in the plans being constructed by white-box LQOs. Offline Phase. Similar to our bounded latency range framework (Section 3.3), we start by constructing and sorting a set of non-conformity scores, obtained from the calibration queries and their partial plans.
However, instead of constructing latency-cost-based scores (Equation 2), we compute scores based on the difference between the actual robustness $\rho ^ { \phi } ( X )$ of queries and their estimated robustness $\rho ^ { \phi } ( \hat { x } )$ at each partial plan step, assessing compliance with constraint $\phi$ using robustness measure $\rho ^ { \phi } ( . )$ (Equation 9). These scores are then sorted and used to compute any upper bound, whether $C$ in the static case or $\tilde { C }$ in the distribution shift case, at a user-defined confidence level $1 - \delta$ (Equation 8) as discussed previously in Section 3.3. Online Phase. When a user submits a testing query, the white-box LQO starts to incrementally build the plan, adding one partial plan at a time. At each step $\tau$ , the runtime verification module uses the upper bound $C$ and the estimated robustness $\rho ^ { \phi } ( \hat { x } )$ (representing all partial plans constructed up to $\tau$ and the expected one at $\tau + 1$ ) to check if $\rho ^ { \phi } ( { \hat { x } } ) > C$ (Lemma 2). If this condition holds, the LQO proceeds to construct the next partial plan at step $\tau + 1$ . Otherwise, a violation is detected (e.g., exceeding a latency threshold). As a result, the violation handler discards the current plan under construction and sends the query to be re-planned by a traditional query optimizer (e.g., PostgreSQL [15]). This has been shown to be an effective solution, as highlighted in earlier works (e.g., [33]) and confirmed by our experimental evaluation (Section 6). The intuition here is that re-planning the query with a traditional optimizer and running it with the resulting average-performance plan incurs less overhead than executing a worst-case LQO-generated plan. 
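A minimal sketch of this online loop, using the example constraint from Section 2.3 ($x_\tau < 750$); the `lqo_steps` and `fallback` interfaces are our illustrative assumptions, not any LQO's actual API:

```python
def robustness(latencies, threshold=750.0):
    """Robust semantics for G_[0,N-1](x_tau < threshold): the smallest
    margin over all steps. Positive => satisfied, negative => violated."""
    return min(threshold - x for x in latencies)

def construct_plan(lqo_steps, C, fallback, threshold=750.0):
    """lqo_steps yields, per construction step, the current plan and the
    estimated trace x_hat (observed partial-plan latencies plus the
    predicted next one). Per Lemma 2, construction continues only while
    rho(x_hat) > C; otherwise we fall back to a traditional optimizer."""
    plan = None
    for plan, x_hat in lqo_steps:
        if not robustness(x_hat, threshold) > C:
            return fallback()   # violation detected: re-plan the query
    return plan

good_steps = [("p1", [100.0]), ("p2", [100.0, 600.0])]  # margins 650, 150
bad_steps = [("p1", [100.0]), ("p2", [100.0, 745.0])]   # margin 5 <= C
```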
Note that in case there is a distribution shift in the testing queries from $\mathcal{D}_0$ to $\mathcal{D}$, we construct the adjusted upper bound $\tilde{C}$ as in the online phase of our bounded latency range framework (Section 3.3).
# 5 CP-Guided Plan Search in White-Box LQOs
In this section, we provide a simple yet effective approach for using CP to steer the decision-making process in white-box LQOs. Unlike Sections 3 and 4, which focused on using CP to obtain bounded latency ranges for generated plans or to detect violations during the plan construction process (triggering a fallback to traditional optimizers), this section presents a CP-guided plan search algorithm designed to improve the quality of generated plans rather than just verifying them. Specifically, this algorithm utilizes CP-derived upper bounds on the actual latency of partial plans (Equation 4) to heuristically guide the navigation of the plan search space. Intuition. White-box LQOs, such as Balsa [60] and Neo [34], use learned cost predictors to search over the space of partial plans at each step, aiming to identify the plan with the lowest predicted cost. Since the space of all partial plans at any step is far too large to search exhaustively, these LQOs typically find this plan heuristically by sorting predicted costs and then selecting the plan with the lowest cost. However, relying on the predicted costs can lead to sub-optimal plans if these costs do not closely align with the actual latencies. To address this, we propose leveraging the CP-derived upper bounds on actual latency, discussed in Section 3, to guide the search for optimal partial plans at each step. CP-Guided Plan Search Algorithm.
Recall that for any partial plan at step $\tau$, we can compute an upper bound $U_{\tau}$ on its actual latency $t_{\tau}$ as $\hat{c}_{\tau} + C$ (right inequality in Equation 4), where $\hat{c}_{\tau}$ represents the predicted cost of this partial plan and $C$ is the upper bound on the error between $t_{\tau}$ and $\hat{c}_{\tau}$, calculated at a user-defined confidence level of $1 - \delta$. Based on this latency upper bound $U_{\tau}$, we propose a generic CP-guided plan search algorithm that is compatible with basic plan search (BPS) algorithms. Algorithm 2 shows the details. We first initialize a priority queue with a set of partial plans, each representing a scan operation over a relation in the user query. We also initialize complete_plans to store the complete plans as they are identified (lines 1-2). At each iteration of the while loop, a partial plan, referred to as state, is retrieved from the priority queue according to BPS's logic for selecting the next plan (lines 3-4). This selection logic may involve fetching the partial plan with the minimum cost (Best-First Search), iterating over each state in the current queue (Beam Search [31]), or using other strategies. If state forms a complete plan, it is added to the set of complete plans (lines 5-8). Otherwise, the search continues from the current partial plan, state, by calling Explore(.), which generates a new set of partial plans along with their predicted costs. For each new partial plan (stateNew), we use Algorithm 3 (described later) to compute its latency upper bound $U_{\mathrm{stateNew}}$ based on its predicted cost $\hat{c}_{\mathrm{stateNew}}$ and the corresponding pattern-based upper bound on the latency-cost scores from C. Then, these new states, stateNew, along with their corresponding values $\hat{c}_{\mathrm{stateNew}}$ and $U_{\mathrm{stateNew}}$, follow BPS's logic for inserting new plans (lines 10-12).
This insertion logic may involve directly adding the new plans to the priority queue (non-optimized plan search). Alternatively, it may involve shrinking the queue to a specific size after inserting multiple plans, retaining only the smallest $bLen$ plans for further exploration (Beam Search). The algorithm continues until $n$ complete plans are identified. Finally, these complete plans are sorted based on their latency upper bounds, and the top-ranked plan is selected as the final plan.

# Algorithm 2 CP-Guided Plan Search

Require: Learned cost predictor LCP, pattern-based upper bounds $\{C_1, C_2, \ldots\}$ from Algorithm 1, number of candidate complete plans $n$, basic plan search algorithm BPS.
Ensure: Top-ranked plan final
1: queue $\gets$ Partial plans initialized with scans over relations
2: complete_plans $\gets$ []
3: while len(complete_plans) $< n$ and queue is not empty do
4: (state, $\hat{c}_{\mathrm{state}}$) $\gets$ BPS.select_next_plan(queue)
5: if state is a complete plan then
6: complete_plans.add(state)
7: continue
8: end if
9: List of (stateNew, $\hat{c}_{\mathrm{stateNew}}$) $\gets$ Explore(LCP, state)
10: for all pair in List of (stateNew, $\hat{c}_{\mathrm{stateNew}}$) do
11: $U_{\mathrm{stateNew}} \gets$ LatencyUpperBound(stateNew, $\hat{c}_{\mathrm{stateNew}}$, C)
12: BPS.insert_plan(queue, stateNew, $\hat{c}_{\mathrm{stateNew}}$, $U_{\mathrm{stateNew}}$)
13: end for
14: end while
15: Sort complete_plans by $U_{\mathrm{state}}$ values in ascending order
16: final $\gets$ complete_plans[0]
17: return final

Latency Upper Bound Calculation. Algorithm 3 shows how the latency upper bound is calculated. It first extracts the parent-children pattern of the input partial plan.
Then, it retrieves the upper bound on the latency-cost non-conformity scores corresponding to this pattern, referred to as latencyCostUpperBound, from C. If this pattern is not found, latencyCostUpperBound is assigned the maximum value in C. This is important to guarantee the plan selection quality during the beam search in Algorithm 2: when a pattern is not found, the value of its latency upper bound $U_{\tau}$ becomes very large due to the addition of $\max(\mathbf{C})$ to the predicted cost, and hence it has very low priority for selection during the CP-guided search compared to other partial plans with patterns having values in C (i.e., trusted partial plans).

# 6 Experimental Evaluation

We evaluated our CP-based frameworks using different benchmarks and multiple prototypes to address the following questions: (1) How effective are the multi-granularity CP-based latency guarantees (Section 6.2)? (2) How effectively does our adaptive CP handle distribution shift (Section 6.3)? (3) How effective is our runtime verification (Section 6.4)? (4) How much performance gain can be achieved through effective violation detection and handling (Section 6.5)? (5) What benefits does CP-guided plan search provide in terms of plan quality and planning time (Section 6.6)? (6) What is the sensitivity of the hyper-parameters of our CP-based approach, and what are their effects on the LQO verification process (Section 6.7)?

# Algorithm 3 Calculate CP Bounded Latency Upper Bound

# 6.1 Experimental Setup

CP Integration with three LQOs. Balsa [60] integrates a learned cost predictor and beam search, storing ranked potential sub-plans to construct a complete plan. We choose Balsa as our default white-box LQO due to its superior performance over other LQOs in this category (e.g., Neo [34]), as shown in many studies [14, 60, 65]. For latency bounds experiments, we verify both the unified-based and pattern-based upper bounds.
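To make the search concrete, the following Python sketch condenses Algorithm 2 (instantiated here with best-first selection as the BPS) together with the pattern-fallback lookup of Algorithm 3. The `Plan` container, the `explore` callback, and all names are hypothetical stand-ins for the LQO's internal structures, not the actual prototypes:

```python
import heapq
from dataclasses import dataclass, field

@dataclass
class Plan:
    pattern: str                      # parent-children operator pattern
    cost: float                       # predicted cost c_hat
    is_complete: bool = False
    children: list = field(default_factory=list)  # (Plan, c_hat) expansions

def latency_upper_bound(pattern, c_hat, C):
    """Algorithm 3 (sketch): U = c_hat + C[pattern]; unseen patterns fall back
    to max(C), so their bound is large and they are deprioritized."""
    return c_hat + C.get(pattern, max(C.values()))

def cp_guided_search(initial_scans, explore, C, n):
    """Algorithm 2 (sketch): pop the partial plan with the smallest CP latency
    upper bound, expand it, and stop after n complete plans."""
    queue = [(latency_upper_bound(p.pattern, p.cost, C), i, p)
             for i, p in enumerate(initial_scans)]
    heapq.heapify(queue)
    tie = len(queue)                  # tiebreaker so Plans are never compared
    complete = []
    while queue and len(complete) < n:
        bound, _, state = heapq.heappop(queue)
        if state.is_complete:
            complete.append((bound, state))
            continue
        for new_state, c_hat in explore(state):
            tie += 1
            heapq.heappush(queue,
                           (latency_upper_bound(new_state.pattern, c_hat, C),
                            tie, new_state))
    complete.sort(key=lambda pair: pair[0])   # ascending upper bound
    return complete[0][1] if complete else None
```

Because unseen patterns are charged $\max(\mathbf{C})$, an expansion with a trusted pattern can win even against an untrusted expansion whose raw predicted cost is lower, which is exactly the deprioritization behaviour described above.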
To perform runtime verification, we calculate the robustness $\rho^{\phi}(\hat{x})$ (see Section 4) and compare it with the corresponding upper bound. If a violation is detected, our violation handler addresses it by reverting to PostgreSQL [15]. Finally, we perform CP-guided plan search to compare with the original Balsa models trained with different numbers of iterations.

Lero [65] generates multiple candidate query plans and uses a learned oracle to rank them. The oracle applies pairwise comparisons to predict the more efficient plan, selecting the top-ranked one as the final output. Since Lero operates as a black-box LQO without directly accessible cost information, we use PostgreSQL's predicted costs as a reference and the actual latency to construct the CP model. At this stage, the predicted cost $\hat{c}$ is available, and we use CP to derive a guaranteed range for the actual runtime $t$ based on $\hat{c}$. We then apply CP models at various granularities to estimate the guaranteed range for the entire plan, individual levels, and identified patterns.

RTOS [63] focuses on join order selection, leveraging a DRL framework in conjunction with Tree-LSTM [50] to effectively capture the structural information of query plans. RTOS outputs a join ordering hint, which we then inject into PostgreSQL to generate a complete query plan. As another representative of black-box LQOs, RTOS is similar to Lero in its CP integration in that we use PostgreSQL's predicted cost as a reference. Given that RTOS has less control over the selection of plan operators (such as Sequential Scan), we primarily use RTOS to validate our CP-based latency guarantee framework.

Notably, existing LQOs face significant limitations in handling distribution shifts, restricting their ability to generalize across different workloads.
They are either hard-coded to specific schemas in their open-source implementations or require processing all training queries upfront to define the model structure, making it impossible to optimize unseen queries dynamically. For instance, the open-source version of RTOS [63] is hard-coded to IMDB table schemas, restricting its use to the JOB [28] and JOBLight-train [24] workloads only. Therefore, to explicitly enable LQOs to operate across distributions, we modified their prototype frameworks to support changing distributions.

Benchmarks. We evaluate the integration of CP with these LQOs on four widely used benchmarks: the Join Order Benchmark (JOB) [28], the Cardinality Estimation Benchmark (CEB) [39], JOBLight-train [24], and TPC-H [7]. For the static case evaluation, we use the JOB and TPC-H workloads. The JOB workload consists of 113 analytical queries over a real-world dataset from the Internet Movie Database. These queries involve complex joins and predicates, ranging from 3 to 16 joins and averaging 8 joins per query. For our experiments, we select 33 queries for model training, while the remaining 80 queries are used for calibration and testing. TPC-H features synthetically generated data under a uniform distribution. We use a scale factor of 1 and templates for queries 3, 5, 7, 8, 10, 12, 13, and 14 to generate workloads, creating 130 queries with varying predicates. Of the generated queries, 60 are used for model training, while the remaining 70 are designated for calibration and testing. For the distribution shift case evaluation, we use the JOBLight-train and CEB workloads along with JOB. JOBLight-train consists of synthetically generated queries with 3-table joins. CEB employs hand-crafted templates and query generation rules to construct challenging large queries.

Hardware and Settings. The experiments related to Balsa and RTOS were conducted on an Ubuntu 22 machine with an 8-core Intel Xeon D-1548 CPU @ 2.0 GHz and 64 GB of RAM. The experiments related to Lero were conducted on an Ubuntu 22 machine with a 10-core Intel Xeon Silver 4114 CPU @ 2.2 GHz and 64 GB of RAM.

CP Empirical Coverage (EC). To empirically validate the CP marginal guarantees of Formula 1, we conduct the experiment over $M$ iterations. For each iteration, we sample $K$ calibration queries, $\{Q^{(1)}, \ldots, Q^{(K)}\}$, and $N$ test queries, $\{Q_1^{(0)}, \ldots, Q_N^{(0)}\}$. Then, we calculate $EC_m$ for iteration $m$ using the following formula:

$$ EC_m := \frac{1}{N} \sum_{n=1}^{N} \mathbb{1}\left(R_{m,n}^{(0)} \leq C(R_m^{(1)}, \ldots, R_m^{(K)})\right). $$

Evaluation Metrics. We focus on the following metrics: (1) Coverage: $EC_m$, calculated as defined in Equation 13 for each sampling iteration $m$ and presented as a percentage. This value measures the validity condition on the test set when applying our constructed $C$, indicating how many test cases are successfully covered. (2) Frequency Density: Across all sampling iterations, we calculate the frequency of each coverage level. To more effectively display the data, we use Kernel Density Estimation (KDE) for density representation. Intuitively, a higher frequency density for a specific coverage indicates a greater likelihood of its occurrence during sampling. (3) CP Upper Bound $C$: This is the CP upper bound for latency-cost non-conformity scores (see Section 3), used to compare the spans of non-conformity scores across different hyper-parameter settings. (4) Non-conformity Scores: We display the distribution of non-conformity scores in the runtime verification context to visually validate how runtime constraints are satisfied.
(5) Execution Latency: The actual execution latency, measured in milliseconds (ms), is used to assess the quality of generated query plans. (6) Planning Time: The time taken to generate a query plan is used to evaluate the algorithm's search efficiency during planning.

Default Parameters. Unless otherwise mentioned, in any experiment, we run multiple sampling iterations ($M = 1000$) to observe the empirical coverage. In each iteration, we randomly select a fixed-size calibration set to generate non-conformity scores and then construct $C$ based on the given $1 - \delta$. We set $\delta = 0.1$ and the calibration-test split to be 50%-50%. When validating a testing query, we perform the evaluation on each operator $i$ in the query for unified-based upper bounds, or on each pattern $i$ (parent-children structure) for pattern-based upper bounds. Each instance is treated as a test step $i$, where we have the predicted cost $\hat{c}_i$ and the actual latency $t_i$. We combine $C$ with $\hat{c}_i$ to calculate the bounded range for the actual latency, $[\hat{c}_i - C, \hat{c}_i + C]$, and then verify that $t_i \in [\hat{c}_i - C, \hat{c}_i + C]$. For the complete test set, we calculate the coverage for this bounded range method, which ranges between 0 and 1. We normalize the predicted costs in the Lero and RTOS cases with $f(\hat{c}) = \hat{c}/40$ and $\hat{c}/100$, respectively, to align these costs with actual latencies.

# 6.2 Bounded Range of Plan Actual Latency

Unified-based Upper Bound. We treat all the partial plans of a query plan equally, with a single upper bound value for $C$ (see Section 3.3). Figure 5 shows the empirical coverage in this case. We perform experiments on both the JOB and TPC-H workloads. According to Equation 4, the CP theory predicts that the most frequent coverage should be greater than $1 - \delta = 0.9$, as reflected by the peak of the curve in both graphs. For both workloads, the peak of all the curves demonstrates this trend, empirically validating the correctness of applying CP with LQOs. In the JOB workload, we observe that Balsa and Lero show coverage percentages more concentrated around 90%, whereas RTOS exhibits a slightly relaxed coverage curve with a higher coverage peak in the middle. This variance could arise from differences in the LQOs' architectures. RTOS lacks partial-plan-level training, while Balsa and Lero perform more granular analysis on plans during training. Consequently, the non-conformity scores for operators in RTOS span a broader range than in Balsa and Lero. Even though a $C$ derived from a sparse calibration space can adequately cover a dense test space, the reverse is less effective. This mismatch results in a higher coverage peak for RTOS but with relatively lower density.

Figure 5: Frequency density of empirical coverage for Balsa, Lero, and RTOS on (a) JOB and (b) TPC-H.

Pattern-based Upper Bound. The pattern-based upper bound provides finer granularity for generating a bounded range. In this experiment, we examine the top 3 and least 3 frequently occurring patterns in Balsa on the JOB workload. Figure 6 (a) displays the top 3 popular patterns: (NL, NL, IS), (NL, HJ, IS), and (HJ, NL, SS). The peak coverage reaches 0.9, with a mean $C$ value of 3056 ms, indicating that Balsa's actual latencies vary within a range of $\pm 3056$ ms. Figure 6 (b) shows the least 3 popular patterns. Given that they have fewer appearances, only slightly exceeding the $K^*$ threshold, the curve is not as symmetric as the previous one. However, we also observe that the empirical coverage peak surpasses 90%, indicating reliable, guaranteed latency. This also shows that the CP theory still holds when the value of $K$ is low yet greater than the $K^*$ threshold.
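The coverage experiment described by Equation 13 reduces to a short Monte-Carlo loop. The sketch below works on plain scalar non-conformity scores and uses hypothetical names; it is a simplification of the actual pipeline, assuming the same split-conformal bound as in Section 3:

```python
import math
import random

def empirical_coverage(scores, K, N, delta, M=1000, seed=0):
    """EC_m per Equation 13: each iteration samples K calibration and N test
    scores, builds the bound C at level 1 - delta, and records the fraction
    of test scores that C covers."""
    rng = random.Random(seed)
    coverages = []
    for _ in range(M):
        draw = rng.sample(scores, K + N)        # exchangeable split
        calib, test = draw[:K], draw[K:]
        rank = math.ceil((K + 1) * (1 - delta))
        C = sorted(calib)[rank - 1] if rank <= K else float("inf")
        coverages.append(sum(s <= C for s in test) / N)
    return coverages
```

Plotting a KDE over the returned list reproduces frequency-density curves of the kind shown in this section: the peak should sit just above $1 - \delta$.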
Figure 6: Empirical coverage of pattern-based upper bounds for (a) the top 3 popular patterns, (NL,NL,IS), (NL,HJ,IS), and (HJ,NL,SS), and (b) the least 3 popular patterns, (NL,NL,SS), (HL,SS,SS), and (HL,SS,IS).

# 6.3 Adaptive CP under Distribution Shift

We perform evaluations on Balsa [60] and RTOS [63] for distribution shift analysis. Our approach is inspired by existing works on distribution shift [40, 58, 59], where the LQOs are trained on one distribution and tested on another. Regarding the selection of distributions, we follow [40] and use the following distributions: JOB [28], CEB [39], and JOBLight-train [24].

Balsa. Distribution Shift Quantification/Estimation. To validate our adaptive CP method, we first quantify the total variation distance between the calibration distribution JOB $(\mathcal{D}_0)$ and the test distribution CEB $(\mathcal{D})$: $TV(\mathcal{D}, \mathcal{D}_0)$. Following the computation in Section 3.2, we randomly select 500 plans from JOB and CEB to empirically compute $tv := TV(\mathcal{D}, \mathcal{D}_0) = 0.0736$. We then set the allowed distribution shift $\epsilon = 0.08$ to ensure that $\epsilon$ exceeds the estimated distribution shift $tv$.

Validating Adaptive CP. To maintain the original $(1 - \delta)$ confidence level for the latency bounds, the adaptive CP requires obtaining an adjusted upper bound $\tilde{C}$ for the new distribution by computing an adjusted uncertainty probability $\tilde{\delta}$ with Equation 5. We set the uncertainty probability $\delta := 0.2$. We sample $K := 300$ calibration plans from JOB $(\mathcal{D}_0)$. We then compute $\mathrm{Prob}(R^{(0)} \leq C(R^{(1)}, \ldots, R^{(K)}))$ and $\mathrm{Prob}(R^{(0)} \leq \tilde{C}(R^{(1)}, \ldots, R^{(K)}))$ for the non-adaptive and adaptive methods.
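Section 3.2 is not reproduced here, so the exact estimator behind $tv$ is an assumption on our part; a standard histogram-based total-variation estimate over a scalar plan feature (e.g., a non-conformity score), which could serve this role, might look like:

```python
from collections import Counter

def empirical_tv(xs, ys, n_bins=20):
    """Empirical total variation distance between two samples: bin both on a
    shared grid and take half the L1 distance between bin frequencies."""
    lo = min(min(xs), min(ys))
    hi = max(max(xs), max(ys))
    width = (hi - lo) / n_bins or 1.0   # guard against a degenerate range
    def hist(vals):
        counts = Counter(min(int((v - lo) / width), n_bins - 1) for v in vals)
        return [counts[i] / len(vals) for i in range(n_bins)]
    p, q = hist(xs), hist(ys)
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))
```

Feeding 500 sampled plans per workload through such an estimator yields a single $tv$ value, which is then compared against the chosen $\epsilon$.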
Figure 7 shows the related results. In Figure 7a (without performing adaptive CP), the convergence is around 0.62, which is less than the expected $1 - \delta = 0.8$. This shows that the previously computed upper bound $C$ was not suitable for the new distribution. However, in Figure 7b (with adaptive CP), the coverage concentration is around 0.8, which shows that our adjusted $\tilde{C}$ performs well with the new distribution.

Figure 7: Frequency density of empirical coverage (a) without adaptive CP, $R^{(0)} \leq C(R^{(1)}, \ldots, R^{(K)})$ (Avg: 62.92%, Med: 63.04%), and (b) with adaptive CP, $R^{(0)} \leq \tilde{C}(R^{(1)}, \ldots, R^{(K)})$ (Avg: 80.11%, Med: 79.25%).

RTOS. We train RTOS on JOB $(\mathcal{D}_0)$ and sample $K = 300$ plans to construct the calibration set. Then, we introduce a new distribution, JOBLight-train $(\mathcal{D})$, for testing. The TV distance between these two distributions is $tv = 0.24916$, so we set $\epsilon := 0.25$. Figure 8 shows the result for uncertainty level $\delta = 0.45$. The convergence of $\mathrm{Prob}(R^{(0)} \leq C(R^{(1)}, \ldots, R^{(K)}))$ is around 0.2, while that of $\mathrm{Prob}(R^{(0)} \leq \tilde{C}(R^{(1)}, \ldots, R^{(K)}))$ is 0.55, which is exactly $(1 - \delta)$. This demonstrates that our adaptive CP methods work well with different prototypes and different uncertainty conditions.

Figure 8: RTOS distribution shift with Adaptive CP ($\delta = 0.45$).

# 6.4 Runtime Verification

We perform our runtime verification evaluation with Balsa as a white-box LQO using the JOB workload. In this experiment, we aim to validate Lemma 2.
Specifically, we aim to demonstrate the following statement:

$$ P(X \models \phi) \geq 1 - \delta, \quad \mathrm{if} \quad \rho^{\phi}(\hat{x}) > C $$

We define our performance constraint with the following STL specification:

$$ \phi := G(X < threshold) $$

where $G$ is the always operator defined in Section 2.2. We use this specification to bound the actual latency $X$ when running an LQO's plan, whether partial or complete: the value of $X$ is expected to always be less than the threshold. We set the threshold to 1000 and 2000, which implies that the cumulative latency of operations should not exceed 1000 ms and 2000 ms, respectively, in the database context. We use this STL specification to detect violations and avoid unexpectedly long latency in execution. Based on $\phi$, we define the robust semantics as follows:

$$ \rho^{\phi}(x) = threshold - x $$

From the calibration queries, we construct the value of $C$ and use this value to verify whether $\rho^{\phi}(\hat{x}) > C$. If this holds, the actual latency adheres to the STL specification. Figure 9 shows the non-conformity score distribution with different thresholds. Our unified-based upper bound $C$ covers $1 - \delta = 90\%$ of the non-conformity scores (left side of the red dashed line).

Figure 9: Non-conformity Scores in Runtime Verification.

$\phi := G(X < 1000)$: We found that for 27 of the 30 queries $(|Q^{\mathrm{test}}| = 30)$, it holds that $\rho^{\phi}(\hat{x}) > C$ implies $X \models \phi$, confirming the correctness of runtime verification (Lemma 2). We also validated Equation 8 and found that 28 of the 30 test queries satisfy $\rho^{\phi}(\hat{x}) - \rho^{\phi}(X) \leq C$, a fraction greater than $(1 - \delta) = 0.9$, further confirming the correctness of CP.

$\phi := G(X < 2000)$: This is a looser threshold.
We found that for 29 of the 30 queries $(|Q^{\mathrm{test}}| = 30)$, it holds that $\rho^{\phi}(\hat{x}) > C$ implies $X \models \phi$. A larger threshold demonstrates better coverage. We also validated Equation 8 and found that 29 of the 30 test queries satisfy $\rho^{\phi}(\hat{x}) - \rho^{\phi}(X) \leq C$, further confirming our method.

# 6.5 Violation Detection and Handling

We perform violation detection using the JOB workload as discussed in Lemma 2 over the constraint $\phi := G(X < 2000)$. If violations are detected, we introduce PostgreSQL to assist in generating a new query plan for execution. We compare two scenarios: with CP and without CP, representing CP-based violation detection and normal LQO planning, respectively. In this section, we focus on comparing the plan quality between these two methods.

Balsa with Violation Detection. Figure 10 presents the comparison results for Balsa. In total, 10 queries were flagged as potential violations. We trigger PostgreSQL to re-generate the query plans. Notably, for 7 out of these 10 queries, the query plans generated by PostgreSQL outperformed the Balsa-generated plans. The overall latency savings for these 7 queries amounted to 22.12%. For the remaining 3 queries, we observed that although Balsa produced better plans for them, these plans still violate the user constraint; that is why these queries are still detected by our verification framework.

Figure 10: Violation Detection ($\phi := G(X < 2000)$): Latency Comparison With and Without CP (Balsa).

# 6.6 CP-Guided Actual Latency Upper Bound Query Optimizer

In this section, we again use Balsa as a representative white-box LQO to conduct CP-guided plan search experiments. Since Balsa uses beam search [31] internally, our discussion revolves around CP-guided beam search.
We evaluated Balsa at different training epochs: 50, 100, and 150, corresponding to moderately trained, well-trained, and highly trained Balsa, respectively. To evaluate our method, we use 33 queries from template $b$ as the test set and the other 47 queries as the calibration set. The comparison experiments are conducted five times, and the average is reported to reduce the impact of system fluctuations on the planning and execution time.

6.6.1 Plan Improvement. Using the CP-guided plan search, we employ the CP-guaranteed latency upper bound as a heuristic to guide the beam search in constructing complete query plans. We evaluate whether this approach yields better results compared to the vanilla Balsa. Figure 11 shows the queries where we achieve improvements, with plan enhancements observed in 11 out of 33 test queries, while the rest maintained the same plan quality. This demonstrates that our algorithm can effectively improve plan quality for an LQO.

Figure 11: Plan Quality Comparison: CP-Guided Algorithm vs. Balsa (50 iterations)

For a well-trained Balsa (100 iterations), our algorithm improves the plan quality for queries 14b, 28b, 6b, and 9b, as seen in Figure 12a, demonstrating consistent plan improvement. Even for the highly trained Balsa (150 iterations), we also observe several improved queries, as seen in Figure 12b. Although Balsa can reliably and efficiently generate high-quality query plans at this stage, the CP-guided algorithm can still achieve better plans, even within this highly constrained search space. This further proves the effectiveness of our algorithm.

Figure 12: Plan Quality Comparison.

We also observe that our algorithm achieves greater improvements in plan quality during the early training stages of Balsa. This aligns with the intuition that it is easier to make improvements within a larger discovery space.
As the number of training iterations increases, Balsa becomes progressively more refined, which naturally narrows the scope for further improvement. We perform a deep-dive analysis of the queries where we achieve significant improvements: Query 6b in Figure 12 (a) and Query 27b in Figure 11. When we closely compare the query plans generated with and without CP guidance, we observe that in Query 6b, Balsa originally selects a pattern of (NL, NL, IS). However, in the CP-guided plan search algorithm, we instead select a pattern of (HJ, NL, SS). The (HJ, NL, SS) pattern aligns with the valid patterns established for our reliable CP construction, whereas (NL, NL, IS) is not among them. By following our algorithm and being guided by CP, Query 6b achieves a 48.52% latency reduction by replacing this pattern. For Query 27b, our CP-guided approach has an even greater impact. Without CP guidance, Balsa generates a left-deep tree; however, under CP guidance, it produces a bushy tree, resulting in a 9.84x improvement in latency. Query-level analysis reveals that our algorithm not only favors reliable patterns to construct the entire query plan but can also systematically optimize the structure of the query plan, significantly enhancing the overall plan quality.

6.6.2 Planning Time Comparison. For a moderately trained Balsa, we observe an improvement in planning time. Without CP assistance, the total planning time for all test queries is 6178.60 ms; however, with our CP-guided algorithm, it is reduced to 5563.40 ms, an overall improvement of 9.96%. This demonstrates that our CP-guided approach can mitigate suboptimal LQO behaviors and accelerate the plan search. At the single-query level, Query 4b in Figure 13 achieves a 74.40% reduction in planning time.
This effect can be attributed to the optimization target of the CP-guided algorithm, the actual latency upper bound, which acts as a stricter heuristic than the predicted cost alone. This leads to a more direct search path within the search space. Compared to a moderately trained Balsa, our algorithm constrains the search scope, thereby reducing planning time.

Figure 13: Planning Time Comparison: CP-Guided Algorithm vs. Balsa (50 iterations)

Figure 14 illustrates that even with a highly trained Balsa, our algorithm improves planning time for 17 out of 33 queries. We also observe that as the number of LQO training iterations increases, the overall planning time for both the CP-guided and without-CP methods decreases. Comparing Figure 13 and Figure 14, we can see that the impact of our CP-guided algorithm on planning time is more pronounced at lower training iterations. This is because, with more extensive training, the LQO has a more refined initial search direction, resulting in a relatively smaller search space for our algorithm. Notably, for Queries 7b and 18b, we achieve improvements in both plan quality and planning time. These observations further demonstrate the effectiveness of our CP-guided algorithm.

Figure 14: Planning Time Comparison: CP-Guided Algorithm vs. Balsa (150 iterations)

# 6.7 Hyper-Parameter Micro-benchmarking

In this section, we discuss three types of hyper-parameters and observe their impact on the coverage.

Impact of Changing the Sampling Iterations. We begin by examining how the first hyper-parameter, the number of sampling iterations, affects empirical coverage. We test with 100, 500, and 1000 sampling iterations. For each sampling iteration setting, we plot the density of each coverage level. Figure 15 (a) and Figure 15 (b) illustrate Balsa's performance on the JOB and TPC-H workloads, respectively. When the number of sampling iterations is low, the curve appears less smooth due to limited sampling.
Since empirical coverage approximates the inherent coverage properties of CP, insufficient sampling fails to capture the expected behavior according to CP theory. With more iterations, the curve smooths, more accurately reflecting the intrinsic coverage properties of CP theory. Additionally, the curve displays a sharper peak shape. We also observe that the JOB workload exhibits a higher frequency density than the TPC-H workload. This is because JOB contains more joins, leading to a greater number of validation data points, which increases the frequency density.

Figure 15: Frequency density of empirical coverage for 100, 500, and 1000 sampling iterations: (a) Balsa on JOB; (b) Balsa on TPC-H.

Impact of Uncertainty Probability $\delta$. The second hyper-parameter is the uncertainty probability $\delta$. We varied $\delta$ across four values: 0.1, 0.2, 0.3, and 0.4. Similar to the previous discussion, we expect the peaks of the coverage curve to align with $1 - \delta$, meaning the corresponding peaks should align with 0.9, 0.8, 0.7, and 0.6. Figure 16 illustrates this trend. Additionally, we observe that as $\delta$ decreases, the area under the curve becomes sharper and narrower, indicating a more concentrated coverage distribution. This suggests that with smaller values of $\delta$ (e.g., $\delta = 0.1$ in our graph), obtaining $C$ values in a single sampling iteration is more likely to yield values centered around the expected confidence level of $1 - \delta$.

Figure 16: Frequency density of empirical coverage for $\delta \in \{0.1, 0.2, 0.3, 0.4\}$: (a) Balsa on JOB; (b) Balsa on TPC-H; (c) Lero on JOB; (d) Lero on TPC-H.

# 7 Related Work

Learned Query Optimization (LQO).
In recent years, numerous ML-based techniques have been proposed to improve query optimization. One direction is to use ML to improve cardinality estimates for query outputs and use them to predict query plan costs [25, 36, 38, 48, 49, 61, 62]. Although this direction has shown improved cardinality estimation accuracy, it does not provide evidence that such improvements result in better query plans [38]. Consequently, two lines of work have emerged to directly learn how to optimize the query plan itself (e.g., [18, 26, 33–35, 60, 63, 65]), either by constructing the plan from scratch (e.g., [34, 60]) or by choosing among different candidate plans generated by traditional optimizers (e.g., [18, 33, 63, 65]). Examples of the first line of work include Neo [34] and Balsa [60]. Neo introduces a novel query encoding technique, namely Tree Convolution, to capture execution patterns in the plan tree. In contrast, Balsa reduces reliance on experts by employing a simulation-to-reality learning framework. Examples of the second line of work include Bao [33] and Lero [65]. Bao [33] employs a multi-armed bandit approach to estimate the costs of candidate plans generated by the traditional optimizer and select the best among them. Lero [65] takes a unique approach by constructing a learned model that performs pairwise plan comparisons rather than traditional cost or latency prediction. Although all LQOs have demonstrated improved query performance, they typically do not consider robustness issues (no guarantees on stability or regression avoidance). Kepler [18] and Roq [23] are the closest works to our objective. Kepler employs robust neural network prediction techniques to reduce tail latency and minimize query regressions. Specifically, it utilizes Spectral-normalized Neural Gaussian Processes [30] to quantify its confidence in plan prediction and falls back to the traditional optimizer when uncertain.
Roq introduces robustness notions in the context of query optimization and incorporates a complex ML pipeline to predict plan cost and risk. However, neither method provides theoretical guarantees nor formally formulates the verification of LQO plan construction. To our knowledge, our work is the first to address the verification problem in LQOs by providing formal guarantees and using them to guide the plan construction process.

Conformal Prediction (CP). CP was originally introduced as a robust statistical framework for quantifying prediction uncertainty (e.g., [2, 46, 56]). Extensive research has explored the application of CP in distribution-agnostic settings, delivering reliable performance guarantees even in non-stationary environments (e.g., [1, 21, 27, 44, 64]). Additionally, extensions of CP have been applied to time-series data [13, 52] and Signal Temporal Logic (STL)-based runtime verification in real-time systems (e.g., autonomous cars [29], autonomous robots [41], aircraft simulation [29, 44]). Recently, several works have studied applying CP under different distribution-shift conditions [4, 22, 53, 64]. CP has also been adapted for policy evaluation in reinforcement learning [20, 51], time-series forecasting [47], and outlier detection [5], and it has been employed to monitor risks in evolving data streams [42] and to detect change points in time-series data [54, 55].
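As a concrete illustration of the split-CP quantile rule that underlies coverage guarantees of this kind, here is a minimal toy sketch. All names are hypothetical and the nonconformity scores are synthetic stand-ins, not the paper's latency-based scores:

```python
import math
import random

def conformal_quantile(cal_scores, delta):
    # Split CP: the ceil((n + 1) * (1 - delta))-th smallest calibration
    # score upper-bounds a fresh exchangeable score with prob. >= 1 - delta.
    n = len(cal_scores)
    k = min(n - 1, math.ceil((n + 1) * (1 - delta)) - 1)  # 0-based index
    return sorted(cal_scores)[k]

def empirical_coverage(delta, n_cal=1000, n_test=1000, seed=0):
    # Synthetic stand-in nonconformity scores; in the paper these would be
    # derived from the predicted vs. actual latencies of LQO plans.
    rng = random.Random(seed)
    cal = [rng.random() for _ in range(n_cal)]
    test = [rng.random() for _ in range(n_test)]
    q = conformal_quantile(cal, delta)
    return sum(s <= q for s in test) / n_test
```

Repeating this over many calibration/test splits yields coverage frequencies that peak near $1 - \delta$, mirroring the peak alignment described for Figure 16.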
Query optimization is critical in relational databases. Recently, numerous Learned Query Optimizers (LQOs) have been proposed, demonstrating superior performance over traditional hand-crafted query optimizers after short training periods. However, the opacity and instability of machine learning models have limited their practical applications. To address this issue, we are the first to formulate LQO verification as a Conformal Prediction (CP) problem. We first construct the CP model and obtain user-controlled bounded ranges for the actual latency of LQO plans before execution. We then introduce CP-based runtime verification, along with violation handling, to ensure performance during execution. For both scenarios, we further extend our framework to handle distribution shifts in dynamic environments using adaptive CP approaches. Finally, we present CP-guided plan search, which uses the actual-latency upper bounds from CP to heuristically guide query plan construction. We integrated our verification framework into three LQOs (Balsa, Lero, and RTOS) and conducted evaluations on the JOB and TPC-H workloads. Experimental results demonstrate that our method is both accurate and efficient: our CP-based approaches achieve tight upper bounds and reliably detect and handle violations, adaptive CP maintains accurate confidence levels even in the presence of distribution shifts, and CP-guided plan search improves both query plan quality (up to 9.84x) and planning time, with a reduction of up to 74.4% for a single query and 9.96% across all test queries from trained LQOs.
# 1 Introduction

Diffusion and flow-based generative models have revolutionized generative modeling [25, 70, 45, 62, 14, 8], but they rely on slow iterative sampling. This has led to the development of approaches to accelerate generation. Advanced, higher-order samplers [68, 50, 51, 12, 89, 34, 61] help, but cannot produce high-quality outputs with fewer than 10 steps. Distillation techniques [63, 67, 87, 72], in contrast, can successfully distill models into few-step generators. In particular, consistency models [71, 69, 49] and a variety of related techniques [38, 79, 80, 42, 93, 17, 22] have recently gained much attention. Consistency models learn to transfer samples that lie on teacher-defined deterministic noise-to-data paths to the same, consistent clean outputs in a single prediction. These approaches excel at few-step generation, but have been empirically shown to degrade in performance as the number of steps increases. In this work, we analytically show that consistency models are inherently incompatible with multi-step sampling. Specifically, we show that their objective of strictly predicting clean outputs inevitably leads to error accumulation over multiple denoising steps. Motivated by this limitation, we turn to the flow map formulation as a unifying and more robust alternative. The flow map framework (also known as Consistency Trajectory Models) was introduced in [38, 5] and encompasses diffusion and flow-based models [45], consistency models [71, 69, 49], and other distillation variants [80, 17, 93, 94] within a single coherent formulation. Flow maps allow connecting any two noise levels in a single step, enabling efficient few-step sampling as well as flexible multi-step sampling. As flow maps, figuratively speaking, learn a mapping that "aligns the teacher flow" into a few-step sampler, we call our approach Align Your Flow (AYF).
We propose two new continuous-time training objectives, which can be interpreted as AYF’s versions of the Eulerian and Lagrangian losses described by Boffi et al. [5]. The new objectives use a consistency condition at either the beginning or the end of a denoising interval. Notably, the first of our objectives generalizes both the continuous-time consistency loss [71, 49] and the flow matching loss [45]. While regular consistency models only perform well for single- or two-step generation and degrade for multi-step sampling, e.g. for 4 steps or more, flow map models such as AYF produce high-quality outputs in this multi-step setting, too. To scale AYF to high performance, we leverage the recently proposed autoguidance [35], where a low-quality guidance model checkpoint is used together with the regular model to produce a model with enhanced quality. Specifically, we propose to distill an autoguided teacher model into an AYF student and introduce several practical techniques that stabilize flow map training and push performance further. Moreover, unlike prior distillation approaches that rely on adversarial training to boost quality at the expense of sample diversity [67, 66, 87, 86, 38], we show that a short finetuning of a pretrained AYF model with a combination of our proposed flow map objective and an adversarial loss is sufficient to yield significantly sharper images with minimal impact on diversity. We validate AYF on popular image generation benchmarks and achieve state-of-the-art performance among few-step generators on both ImageNet 64x64 and 512x512, while using only small and efficient neural networks (Fig. 4). For instance, 4-step sampling of AYF’s ImageNet models is as fast or faster than previous works’ single step generation. Additionally, our adversarially finetuned AYF also achieves significantly higher diversity compared to other adversarial training approaches. 
We further distill the popular FLUX.1 model [41] and obtain text-to-image AYF flow map models that significantly outperform all existing non-adversarially trained few-step generators in text-conditioned synthesis (Fig. 1). For these experiments, we use an efficient LoRA [27] framework, avoiding the overhead of many previous text-to-image distillation approaches.

Figure 3: Samples (4 steps): LCM [54], TCD [93], FLUX.1 [schnell] [41], AYF (view zoomed in).

Contributions. (i) We prove that consistency models inherently suffer from error accumulation in multi-step sampling. (ii) We propose Align Your Flow, a high-performance few-step flow map model with new theoretical insights. (iii) We introduce two new training objectives and stabilization techniques for flow map learning. (iv) We apply autoguidance to distillation for the first time and show that adversarial finetuning further boosts performance with minimal loss in diversity. (v) We achieve state-of-the-art few-step generation performance on ImageNet, and we also show fast high-resolution text-to-image generation, outperforming all non-adversarial methods on this task.

# 2 Background

Diffusion Models and Flow Matching. Diffusion models are probabilistic generative models that inject noise into the data with a forward diffusion process and generate samples by learning and simulating a time-reversed backward diffusion process, initialized from pure Gaussian noise. Flow matching [45, 48, 2, 1, 39] is a generalization of these methods that eliminates the requirement that the noise be Gaussian and allows learning a continuous flow between any two distributions $p_0, p_1$ that converts samples from one into the other. Denote the data distribution by $\mathbf{x}_0 \sim p_{\mathrm{data}}$ and the noise distribution by $\mathbf{x}_1 \sim p_{\mathrm{noise}}$.
Let $\mathbf{x}_t = (1 - t)\,\mathbf{x}_0 + t\,\mathbf{x}_1$ denote the noisy samples of the data for time $t \in [0, 1]$, corresponding to the rectified flow [48] or conditional optimal transport [45] formulation. The flow matching training objective is then given by $\mathbb{E}_{\mathbf{x}_0, \mathbf{x}_1, t}\left[ w(t)\, \|\mathbf{v}_{\theta}(\mathbf{x}_t, t) - (\mathbf{x}_1 - \mathbf{x}_0)\|_2^2 \right]$, where $w(t)$ is a weighting function and $\mathbf{v}_{\theta}$ is a neural network parametrized by $\theta$. The standard sampling procedure starts at $t = 1$ by sampling $\mathbf{x}_1 \sim p_{\mathrm{noise}}$. Then the probability flow ODE (PF-ODE), defined by $\frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} = \mathbf{v}_{\theta}(\mathbf{x}_t, t)$, is simulated from $t = 1$ to $t = 0$ to obtain the final outputs. We assume the flow matching framework for the remainder of the paper.

Consistency Models. Consistency models (CM) [71] train a neural network $\mathbf{f}_{\theta}(\mathbf{x}_t, t)$ to map noisy inputs $\mathbf{x}_t$ directly to their corresponding clean samples $\mathbf{x}_0$, following the PF-ODE. Consequently, $\mathbf{f}_{\theta}(\mathbf{x}_t, t)$ must satisfy the boundary condition $\mathbf{f}_{\theta}(\mathbf{x}, 0) = \mathbf{x}$, which is typically enforced by parameterizing $\mathbf{f}_{\theta}(\mathbf{x}_t, t) = c_{\mathrm{skip}}(t)\,\mathbf{x}_t + c_{\mathrm{out}}(t)\,\mathbf{F}_{\theta}(\mathbf{x}_t, t)$ with $c_{\mathrm{skip}}(0) = 1$, $c_{\mathrm{out}}(0) = 0$.
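The boundary-condition parameterization just described can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; the coefficient schedules and the stand-in network are hypothetical choices that merely satisfy $c_{\mathrm{skip}}(0) = 1$, $c_{\mathrm{out}}(0) = 0$:

```python
import numpy as np

def consistency_model(F_theta, x_t, t, c_skip, c_out):
    # f_theta(x_t, t) = c_skip(t) * x_t + c_out(t) * F_theta(x_t, t).
    # With c_skip(0) = 1 and c_out(0) = 0, the boundary condition
    # f_theta(x, 0) = x holds for any free network F_theta.
    return c_skip(t) * x_t + c_out(t) * F_theta(x_t, t)

# Toy schedules and a stand-in "network" (hypothetical, illustration only).
c_skip = lambda t: 1.0 - t
c_out = lambda t: t
F_theta = lambda x, t: np.tanh(x)
```

At $t = 0$ the model returns its input exactly, regardless of what `F_theta` computes; away from $t = 0$ the network output takes over.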
CMs are trained to have consistent outputs between adjacent timesteps. They can be trained from scratch or distilled from given diffusion or flow models. In this work, we focus on distillation. Depending on how time is treated, CMs can be split into two categories:

Discrete-time CMs. The training objective is defined between adjacent timesteps as
$$ \mathbb{E}_{\mathbf{x}_t, t}\left[ w(t)\, d(\mathbf{f}_{\theta}(\mathbf{x}_t, t), \mathbf{f}_{\theta^-}(\mathbf{x}_{t - \Delta t}, t - \Delta t)) \right], $$
where $\theta^-$ denotes $\mathrm{stopgrad}(\theta)$, $w(t)$ is a weighting function, $\Delta t > 0$ is the distance between adjacent timesteps, and $d(\cdot, \cdot)$ is a distance function. Common choices include the $\ell_2$ loss $d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|_2^2$, the Pseudo-Huber loss $d(\mathbf{x}, \mathbf{y}) = \sqrt{\|\mathbf{x} - \mathbf{y}\|_2^2 + c^2} - c$ [69], and the LPIPS loss [90]. Discrete-time CMs are sensitive to the choice of $\Delta t$ and require manually designed annealing schedules [71, 18]. The noisy sample $\mathbf{x}_{t - \Delta t}$ at the preceding timestep $t - \Delta t$ is often obtained from $\mathbf{x}_t$ by numerically solving the PF-ODE, which can cause additional discretization errors.

Continuous-time CMs. When using $d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|_2^2$ and taking the limit $\Delta t \to 0$, Song et al. [71] show that the gradient of Eq. (1) with respect to $\theta$ converges to
$$ \nabla_{\theta}\, \mathbb{E}_{\mathbf{x}_t, t}\left[ w(t)\, \mathbf{f}_{\theta}^{\top}(\mathbf{x}_t, t)\, \frac{\mathrm{d}\mathbf{f}_{\theta^-}(\mathbf{x}_t, t)}{\mathrm{d}t} \right], $$
where $\frac{\mathrm{d}\mathbf{f}_{\theta^-}(\mathbf{x}_t, t)}{\mathrm{d}t} = \nabla_{\mathbf{x}_t}\mathbf{f}_{\theta^-}(\mathbf{x}_t, t)\, \frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} + \partial_t \mathbf{f}_{\theta^-}(\mathbf{x}_t, t)$ is the tangent of $\mathbf{f}_{\theta^-}$ at $(\mathbf{x}_t, t)$ along the trajectory of the PF-ODE $\frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t}$. This means continuous-time CMs do not need to rely on numerical ODE solvers, which avoids discretization errors and offers better supervision signals during training. Recently, Lu and Song [49] successfully stabilized and scaled continuous-time CMs and achieved significantly better results compared to the discrete-time approach.

# 3 Continuous-Time Flow Map Distillation

Flow maps generalize diffusion, flow-based and consistency models within a single unified framework by training a neural network $\mathbf{f}_{\theta}(\mathbf{x}_t, t, s)$ to map noisy inputs $\mathbf{x}_t$ directly to any point $\mathbf{x}_s$ along the PF-ODE in a single step. Unlike consistency models, which only perform well for single- or two-step generation but degrade in multi-step sampling, flow maps remain effective at all step counts. In Sec. 3.1, we first show that standard consistency models are incompatible with multi-step sampling, leading to inevitable performance degradation beyond a certain step count. Next, in Sec. 3.2, we introduce two novel continuous-time objectives for distilling flow maps from a pretrained flow model. Finally, in Sec. 3.3, we explain how we leverage autoguidance to sharpen the flow map, and Sec. 3.4 addresses implementation details. The detailed training algorithm for AYF is provided in the Appendix.

Figure 4: Two-step AYF samples on ImageNet512.
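To make the discrete-time CM objective of Eq. (1) and the Pseudo-Huber distance concrete, here is a toy sketch. The names are hypothetical; the stopgrad target `f_target` and the PF-ODE step producing $\mathbf{x}_{t - \Delta t}$ are supplied by the caller:

```python
import numpy as np

def pseudo_huber(x, y, c=0.03):
    # d(x, y) = sqrt(||x - y||_2^2 + c^2) - c, a smooth, robust distance.
    return float(np.sqrt(np.sum((x - y) ** 2) + c ** 2) - c)

def discrete_cm_loss(f_theta, f_target, x_t, x_prev, t, dt, w=lambda t: 1.0):
    # Eq. (1): w(t) * d(f_theta(x_t, t), f_{theta^-}(x_{t - dt}, t - dt)).
    # f_target plays the role of the stopgrad copy; x_prev stands in for
    # x_{t - dt}, obtained in practice via one numerical PF-ODE step.
    return w(t) * pseudo_huber(f_theta(x_t, t), f_target(x_prev, t - dt))
```

The loss vanishes exactly when the model's outputs at adjacent timesteps agree, which is the consistency condition being enforced.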
# 3.1 Consistency Models are Flawed Multi-Step Generators

CMs are a powerful approach to turn flow-based models into one-step generators. To allow CMs to trade compute for sample quality, a multi-step sampling procedure was introduced by Song et al. [71]. This process sequentially denoises noisy $\mathbf{x}_t$ by first removing all noise to estimate the clean data and then reintroducing smaller amounts of noise. However, in practice, this sampling procedure performs poorly as the number of steps increases, and most prior works only demonstrate 1- or 2-step results. To understand this behavior, we analyze a simple case where the initial distribution is an isotropic Gaussian with standard deviation $c$, i.e., $p_{\mathrm{data}}(\mathbf{x}) = \mathcal{N}(\mathbf{0}, c^2 \boldsymbol{I})$. The following theorem shows that, in this setting, regardless of how accurate a (non-optimal) CM is, increasing the number of sampling steps beyond a certain point will lead to worse performance due to error accumulation.

Theorem 3.1 (Proof in Appendix). Let $p_{data}(\mathbf{x}) = \mathcal{N}(\mathbf{0}, c^2 \boldsymbol{I})$ be the data distribution, and let $\mathbf{f}^*(\mathbf{x}_t, t)$ denote the optimal consistency model. For any $\delta > 0$, there exists a suboptimal consistency model $\mathbf{f}(\mathbf{x}_t, t)$ such that
$$ \mathbb{E}_{\mathbf{x}_t \sim p(\mathbf{x}, t)}\big[ \|\mathbf{f}(\mathbf{x}_t, t) - \mathbf{f}^*(\mathbf{x}_t, t)\|_2^2 \big] < \delta \quad \text{for all } t \in [0, 1], $$
and there is some integer $N$ for which increasing the number of sampling steps beyond $N$ increases the Wasserstein-2 distance of the generated samples to the ground truth distribution (i.e., a worse approximation of the ground truth).
This suggests that CMs, by design, are not suited for multi-step generation. Interestingly, when $c = 0.5$ (a common choice in diffusion model training, where the data is often normalized to this standard deviation [34]), multi-step CM sampling with a non-optimal CM produces the best samples at two steps (Fig. 5). This is in line with common observations in the literature [49]. This behavior is the opposite of standard diffusion models, which improve as the number of steps increases. Prior works have attempted to address this issue (see Sec. 4), and they all ultimately reduce to special cases of flow maps.

# 3.2 Learning Flow Maps

Flow maps are neural networks $\mathbf{f}_{\theta}(\mathbf{x}_t, t, s)$ that generalize CMs by mapping a noisy input $\mathbf{x}_t$ directly to any other point $\mathbf{x}_s$ by following the PF-ODE from time $t$ to $s$. When $s = 0$, they reduce to standard CMs. When performing many small steps, they become equivalent to regular flow or diffusion model sampling with the PF-ODE and Euler integration. A valid flow map $\mathbf{f}_{\theta}(\mathbf{x}_t, t, s)$ must satisfy the general boundary condition $\mathbf{f}_{\theta}(\mathbf{x}_t, t, t) = \mathbf{x}_t$ for all $t$. As in prior work, this is enforced in practice by parameterizing the model as $\mathbf{f}_{\theta}(\mathbf{x}_t, t, s) = c_{\mathrm{skip}}(t, s)\,\mathbf{x}_t + c_{\mathrm{out}}(t, s)\,\mathbf{F}_{\theta}(\mathbf{x}_t, t, s)$, where $c_{\mathrm{skip}}(t, t) = 1$ and $c_{\mathrm{out}}(t, t) = 0$ for all $t$. In this work, we set $c_{\mathrm{skip}}(t, s) = 1$ and $c_{\mathrm{out}}(t, s) = (s - t)$ for simplicity and to align the map with an Euler ODE solver.
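With this choice of coefficients, the flow map and its multi-step sampler reduce to a few lines. This is a schematic sketch with hypothetical names; `F_theta` stands in for the free network output:

```python
import numpy as np

def flow_map(F_theta, x_t, t, s):
    # f_theta(x_t, t, s) = c_skip(t, s) * x_t + c_out(t, s) * F_theta(x_t, t, s)
    # with c_skip = 1 and c_out = (s - t): the boundary condition
    # f_theta(x_t, t, t) = x_t holds, and a single small step matches
    # an Euler step of the PF-ODE.
    return x_t + (s - t) * F_theta(x_t, t, s)

def multistep_sample(F_theta, x1, schedule):
    # Jump along the PF-ODE through the timesteps in `schedule`,
    # e.g. [1.0, 0.75, 0.5, 0.25, 0.0] for 4-step sampling.
    x = x1
    for t, s in zip(schedule[:-1], schedule[1:]):
        x = flow_map(F_theta, x, t, s)
    return x
```

For a constant network output (a straight-line trajectory), any schedule from $t = 1$ to $t = 0$ lands on the same endpoint, illustrating why composing flow map steps does not accumulate error the way CM sampling does.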
Figure 5: Wasserstein-2 distance between multi-step consistency samples and the data distribution ($c = 0.5$).

Unlike CMs, which perform poorly in multi-step sampling, flow maps are designed to excel in this scenario. Additionally, their ability to fully traverse the PF-ODE enables them to accelerate tasks such as image inversion and editing by directly mapping images to noise [43]. As we are interested in distilling a diffusion or flow matching model, we assume access to a pretrained velocity model $\mathbf{v}_{\phi}(\mathbf{x}_t, t)$. The flow map model is trained by aligning its single-step predictions with the trajectories generated by the teacher's PF-ODE, i.e., $\frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} = \mathbf{v}_{\phi}(\mathbf{x}_t, t)$. We propose two primary methods for training flow maps. The first training objective aims to ensure that, for a fixed $s$, the output of the flow map remains constant as we move $(\mathbf{x}_t, t)$ along the PF-ODE. Let $\theta^- = \mathrm{stopgrad}(\theta)$. The theorem below summarizes the approach. We call this loss AYF-Eulerian Map Distillation (AYF-EMD), as it can also be interpreted as a variant of the Eulerian loss of Boffi et al. [5]. The AYF-EMD loss naturally generalizes the loss used to train continuous-time consistency models [71, 49], to which it reduces when $s = 0$. Interestingly, it also generalizes the standard flow matching loss, to which it reduces in the limit as $s \to t$. See Appendix for details.

Theorem 3.2 (Proof in Appendix). Let $\mathbf{f}_{\theta}(\mathbf{x}_t, t, s)$ be the flow map.
Consider the loss function defined between two adjacent starting timesteps $t$ and $t' = t + \epsilon(s - t)$ for a small $\epsilon > 0$,
$$ \mathbb{E}_{\mathbf{x}_t, t, s}\left[ w(t, s)\, \|\mathbf{f}_{\theta}(\mathbf{x}_t, t, s) - \mathbf{f}_{\theta^-}(\mathbf{x}_{t'}, t', s)\|_2^2 \right], $$
where $\mathbf{x}_{t'}$ is obtained by applying a one-step Euler solver to the PF-ODE from $t$ to $t'$. In the limit as $\epsilon \to 0$, the gradient of this objective with respect to $\theta$ converges to:
$$ \nabla_{\theta}\, \mathbb{E}_{\mathbf{x}_t, t, s}\left[ w'(t, s)\, \mathrm{sign}(t - s) \cdot \mathbf{f}_{\theta}^{\top}(\mathbf{x}_t, t, s) \cdot \frac{\mathrm{d}\mathbf{f}_{\theta^-}(\mathbf{x}_t, t, s)}{\mathrm{d}t} \right], $$
where $w'(t, s) = w(t, s) \times |t - s|$.

The second approach instead ensures consistency at timestep $s$. This method tries to ensure that, for a fixed $(\mathbf{x}_t, t)$, the trajectory $\mathbf{f}_{\theta}(\mathbf{x}_t, t, \cdot)$ is aligned with that point's PF-ODE. We call this loss AYF-Lagrangian Map Distillation (AYF-LMD), as it is related to the Lagrangian loss of Boffi et al. [5]. The theorem below formalizes this approach.

Theorem 3.3 (Proof in Appendix). Let $\mathbf{f}_{\theta}(\mathbf{x}_t, t, s)$ be the flow map.
Consider the loss function defined between two adjacent ending timesteps $s$ and $s' = s + \epsilon(t - s)$ for a small $\epsilon > 0$,
$$ \mathbb{E}_{\mathbf{x}_t, t, s}\left[ w(t, s)\, \|\mathbf{f}_{\theta}(\mathbf{x}_t, t, s) - ODE_{s' \to s}[\mathbf{f}_{\theta^-}(\mathbf{x}_t, t, s')]\|_2^2 \right], $$
where $ODE_{t \to s}(\mathbf{x})$ refers to running a one-step Euler solver on the PF-ODE starting from $\mathbf{x}$ at timestep $t$ to timestep $s$. In the limit as $\epsilon \to 0$, the gradient of this objective with respect to $\theta$ converges to:
$$ \nabla_{\theta}\, \mathbb{E}_{\mathbf{x}_t, t, s}\left[ w'(t, s)\, \mathrm{sign}(s - t) \cdot \mathbf{f}_{\theta}^{\top}(\mathbf{x}_t, t, s) \cdot \left( \frac{\mathrm{d}\mathbf{f}_{\theta^-}(\mathbf{x}_t, t, s)}{\mathrm{d}s} - \mathbf{v}_{\phi}(\mathbf{f}_{\theta^-}(\mathbf{x}_t, t, s), s) \right) \right], $$
where $w'(t, s) = w(t, s) \times |t - s|$.

In our 2D toy experiments comparing the two objectives above, we found the AYF-LMD objective to be more stable. However, when applied to image datasets, it leads to overly smoothed samples that drastically reduce output quality (see Appendix for detailed ablation studies).

# 3.3 Sharpening the Distribution with Autoguidance

The training objective of diffusion- and flow-based models strongly encourages the model to cover the entire data distribution, yet the model lacks enough data to learn how to generate good samples in the tails of the distribution. The issue is even worse in distilled models, which use fewer sampling steps.
As a result, many prior distillation methods rely on adversarial objectives to achieve peak performance, often sacrificing diversity and ignoring low-probability regions altogether. The most commonly used technique to partially address this in conditional diffusion and flow-based models is classifier-free guidance (CFG) [24]. CFG trains a flow or diffusion model for both conditional and unconditional generation and steers samples away from the unconditional regions during sampling. Prior works [57, 49] have explored distilling CFG with great success. However, CFG tends to overshoot the conditional distribution at large guidance scales, which leads to overly simplistic samples [40]. Recently, Karras et al. [35] introduced autoguidance as a better alternative to CFG. Unlike CFG, this technique works for unconditional generation as well. Autoguidance uses a smaller, less trained version of the main model for guidance, essentially steering samples away from low-quality regions of the probability distribution, where the weaker guidance model performs particularly poorly. We found that distilling autoguided teacher models can significantly improve performance compared to standard CFG. To the best of our knowledge, we are the first to demonstrate the distillation of autoguided teachers. Specifically, during flow map distillation we define the guidance scale $\lambda$ and use the autoguided teacher velocity
$$ \mathbf{v}_{\phi}^{\mathrm{guided}}(\mathbf{x}_t, t) = \lambda \mathbf{v}_{\phi}(\mathbf{x}_t, t) + (1 - \lambda) \mathbf{v}_{\phi}^{\mathrm{weak}}(\mathbf{x}_t, t), $$
where $\mathbf{v}_{\phi}^{\mathrm{weak}}$ represents the weaker guidance model. In summary, we use autoguidance in the teacher as a mechanism to "sharpen" the distilled flow map model.
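The autoguided teacher velocity is a simple linear combination of the two checkpoints' predictions; a minimal sketch (hypothetical names; `v_main` and `v_weak` stand in for the pretrained and weak guidance models):

```python
def autoguided_velocity(v_main, v_weak, x_t, t, lam):
    # v_phi^guided(x_t, t) = lam * v_phi(x_t, t) + (1 - lam) * v_phi^weak(x_t, t).
    # For lam > 1 this extrapolates away from the weak model's predictions,
    # steering samples out of the low-quality regions where it fails most.
    return lam * v_main(x_t, t) + (1.0 - lam) * v_weak(x_t, t)
```

With `lam = 1` the guided velocity is just the main teacher; larger values push the prediction further from the weak model along the line connecting the two.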
See Appendix for a visual comparison between autoguidance and CFG on a 2D toy distribution.

# 3.4 Training Tricks

Training continuous-time CMs has historically been unstable [69, 18]. Recently, sCM [49] addressed this issue by introducing techniques focused on parameterization, network architectures, and modifications to the training objective. Following their approach, we stabilize time embeddings and apply tangent normalization, while also introducing a few additional techniques to further improve stability. Our image models are trained with the AYF-EMD objective of Theorem 3.2, which relies on the tangent function $\frac{\mathrm{d}\mathbf{f}_{\theta^-}(\mathbf{x}_t, t, s)}{\mathrm{d}t}$. Under our parametrization, this tangent is computed as
$$ \frac{\mathrm{d}\mathbf{f}_{\theta^-}(\mathbf{x}_t, t, s)}{\mathrm{d}t} = \left( \frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} - \mathbf{F}_{\theta^-}(\mathbf{x}_t, t, s) \right) + (s - t) \times \frac{\mathrm{d}\mathbf{F}_{\theta^-}(\mathbf{x}_t, t, s)}{\mathrm{d}t}, $$
where $\frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} = \mathbf{v}_{\phi}(\mathbf{x}_t, t)$ represents the direction given by the pretrained diffusion or flow model along the PF-ODE. We find that most terms in this formulation are relatively stable, except for $\frac{\mathrm{d}\mathbf{F}_{\theta}(\mathbf{x}_t, t, s)}{\mathrm{d}t} = \nabla_{\mathbf{x}_t}\mathbf{F}_{\theta}(\mathbf{x}_t, t, s)\, \frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} + \partial_t \mathbf{F}_{\theta}(\mathbf{x}_t, t, s)$.
Among these, the instability originates mainly from $\partial_t \mathbf{F}_{\theta}(\mathbf{x}_t, t, s)$, which can be decomposed into
$$ \partial_t \mathbf{F}_{\theta}(\mathbf{x}_t, t, s) = \frac{\partial c_{\mathrm{noise}}(t)}{\partial t} \cdot \frac{\partial emb(c_{\mathrm{noise}})}{\partial c_{\mathrm{noise}}} \cdot \frac{\partial \mathbf{F}_{\theta}}{\partial emb}, $$
where $emb(\cdot)$ refers to the time embeddings, most commonly positional embeddings [25, 78] or Fourier embeddings [70, 73]. sCM [49] proposes several techniques to stabilize this term, including tangent normalization, adaptive weighting, and tangent warmup. We use tangent normalization [49], i.e., $\frac{\mathrm{d}\mathbf{f}_{\theta^-}}{\mathrm{d}t} \leftarrow \frac{\mathrm{d}\mathbf{f}_{\theta^-}}{\mathrm{d}t} / \left( \left\| \frac{\mathrm{d}\mathbf{f}_{\theta^-}}{\mathrm{d}t} \right\| + c \right)$ with $c = 0.1$, as we find it to be critical for stable training. However, in our experiments, adaptive weighting had no meaningful impact and can be removed. We make a few tweaks to the time embeddings and tangent warmup to ensure compatibility with flow matching and better training dynamics, which we describe below.

Stabilizing the Time Embeddings. The time embedding layers are one cause of the instability of $\partial_t \mathbf{F}_{\theta}(\mathbf{x}_t, t, s)$. As noted in [49], the $c_{\mathrm{noise}}$ parameterization used in most CMs is based on the EDM [34] framework, where the noise level is defined as $c_{\mathrm{noise}}(\sigma) = \log(\sigma)$.
In the flow matching framework, which we use, the noise level for a timestep $t$ is given by $\sigma_t = \frac{t}{1 - t}$, which can lead to instabilities when passed through a log operation as $t \to 0$ or $t \to 1$. To address this, we modify the time parameterization by setting $c_{\mathrm{noise}}(t) = t$, ensuring stable partial derivatives. To utilize pretrained teacher model checkpoints trained with different time parameterizations, we first finetune the student's time embedding module to align with the outputs of the original checkpoints. For example, to adapt EDM2 checkpoints, which use $\sigma_t = \frac{t}{1 - t}$, we minimize the following objective:
$$ \mathbb{E}_{t \sim p(t)}\left[ \| emb_{\mathrm{new}}(t) - emb_{\mathrm{original}}(\log(\sigma_t)) \|_2^2 \right]. $$
This approach enables us to re-purpose nearly any checkpoint, making it compatible with our flow matching framework with minimal finetuning, rather than training new models from scratch.

Regularized Tangent Warmup. We initialize the student model with pretrained flow matching or diffusion model weights, following prior work, to speed up training [49, 71]. Lu and Song [49] proposed a gradual warmup procedure for the second term in Eq. (4), i.e., $(s - t) \times \frac{\mathrm{d}\mathbf{F}_{\theta^-}(\mathbf{x}_t, t, s)}{\mathrm{d}t}$. Specifically, they introduced a coefficient $r$ that linearly increases from 0 to 1 over the first 10k training iterations, gradually incorporating the term. This warmup has a clear intuitive motivation. When considering only the first term in Eq. (4) (i.e., the $r = 0$ case), the objective simplifies to a regularization term that encourages flow maps to remain close to straight lines (please see the Appendix for the derivation):
$$ \nabla_{\theta}\left[ \mathrm{sign}(t - s)\, \mathbf{f}_{\theta}^{\top}(\mathbf{x}_t, t, s) \times \left( \frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} - \mathbf{F}_{\theta^-}(\mathbf{x}_t, t, s) \right) \right] \propto \nabla_{\theta}\left[ \| \mathbf{F}_{\theta}(\mathbf{x}_t, t, s) - \mathbf{v}_{\phi}(\mathbf{x}_t, t) \|_2^2 \right]. $$
Therefore, for $r < 1$, the warmed-up loss with coefficient $r$ is equivalent to a weighted sum of the actual loss and this regularization term:
$$ \begin{aligned} & \nabla_{\theta}\left[ \mathrm{sign}(t - s)\, \mathbf{f}_{\theta}^{\top}(\mathbf{x}_t, t, s) \left( \frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} - \mathbf{F}_{\theta^-}(\mathbf{x}_t, t, s) + r(s - t)\, \frac{\mathrm{d}\mathbf{F}_{\theta^-}(\mathbf{x}_t, t, s)}{\mathrm{d}t} \right) \right] \\ & = r\, \nabla_{\theta}\left[ \mathrm{sign}(t - s)\, \mathbf{f}_{\theta}^{\top}(\mathbf{x}_t, t, s)\, \frac{\mathrm{d}\mathbf{f}_{\theta^-}(\mathbf{x}_t, t, s)}{\mathrm{d}t} \right] + (1 - r)\, \nabla_{\theta}\left[ |t - s| \cdot \| \mathbf{F}_{\theta}(\mathbf{x}_t, t, s) - \mathbf{v}_{\phi}(\mathbf{x}_t, t) \|_2^2 \right]. \end{aligned} $$
In our experiments, training these models for too long after the warmup phase can cause destabilization.
A simple fix is to clamp $r$ to a value smaller than 1, ensuring some regularization remains. We found $r_{\operatorname*{max}} = 0.99$ to be effective in all cases. Timestep scheduling As in standard diffusion, flow-based, and consistency models, selecting an effective sampling schedule for $(t, s)$ during training is crucial. Similar to standard consistency models, where information must propagate from $t = 0$ to $t = 1$ over training, flow map models propagate information from small intervals ($|s - t| \approx 0$) to large ones ($|s - t| = 1$). For details on our practical implementation of the schedules, as well as the complete training algorithm, please see the Appendix. # 4 Related Work Consistency Models. Flow Map Models generalize the seminal CMs, introduced by Song et al. [71]. Early CMs were challenging to train, and several subsequent works improved their stability and performance using new objectives [69], weighting functions [18] or variance reduction techniques [79], among other tricks. Truncated CMs [42] proposed a second training stage, focusing exclusively on the noisier time interval, and Lu and Song [49] successfully implemented continuous-time CMs for the first time. Flow Map Models. Consistency Trajectory Models (CTM) [38] can be considered the first flow map-like models. They combine the approach with adversarial training. Trajectory Consistency Distillation [93] extends CTMs to text-to-image generation, and Bidirectional CMs [43] additionally train on timestep pairs with $t < s$, which also accelerates inversion and tasks such as inpainting and blind image restoration. Kim et al. [37] trained CTMs connecting arbitrary distributions. Multistep CMs [22] split the denoising interval into sub-intervals and train CMs within each one, enabling impressive generation quality using 2-8 steps. Phased CMs [80] use a similar interval-splitting strategy combined with an adversarial objective.
These methods can be seen as learning flow maps by training on $(t, s)$ pairs, where $s$ is the start of the sub-interval containing $t$. Flow Map Matching [5] provides a rigorous analysis of the continuous-time flow map formulation and proposes several continuous-time losses. Shortcut models [17] adopt a similar flow map framework, but these two works struggle to produce high-quality images, in contrast to our novel AYF, the first high-performance continuous-time flow map model. Accelerating Diffusion Models. Early diffusion distillation approaches include knowledge distillation [52] and progressive distillation [63, 57]. Other methods include adversarial distillation [67, 66], variational score distillation (VSD) [87, 86], operator learning [92] and further techniques [55, 20, 4, 74, 85, 48, 82, 46, 97, 95], many of them also relying on adversarial losses. However, although popular, adversarial methods introduce training complexities due to their GAN-like objectives. VSD exhibits similar properties and does not work well at high guidance levels. Moreover, these methods can produce samples with limited diversity. For these reasons we avoid such objectives and instead rely on autoguidance to achieve crisp, high-quality outputs. Finally, many training-free methods efficiently solve diffusion models' generative differential equations [50, 51, 32, 34, 12, 89, 61], but they are unable to perform well when using ${<}10$ generation steps. Table 1: Sample quality on class-conditional ImageNet $64 \mathrm{x} 64$. Recall metric is also included. Table 2: Sample quality on class-conditional ImageNet $512 \mathrm{x} 512$. For additional baselines, all of which AYF outperforms, please see the Appendix. # 5 Experiments We train AYF flow maps on ImageNet [10] at resolutions $64 \times 64$ and $512 \times 512$, measuring sample quality using Fréchet Inception Distance (FID) [23], as in previous works.
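Before turning to the results, the regularized tangent warmup described in Sec. 3 can be sketched in a few lines. This is a minimal illustration, assuming the linear 10k-iteration ramp and $r_{\max} = 0.99$ stated above; the function name is ours:

```python
def tangent_warmup_coefficient(step, warmup_steps=10_000, r_max=0.99):
    """Linearly ramp the tangent coefficient r from 0 to r_max over the
    first `warmup_steps` iterations, then clamp it there. Clamping below 1
    keeps a (1 - r) fraction of the straight-line regularization term
    active for the rest of training, preventing late destabilization."""
    return min(step / warmup_steps, r_max)
```

The blended gradient at step $k$ then uses $r = $ `tangent_warmup_coefficient(k)`, weighting the tangent term by $r$ and the regularization term by $1 - r$.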
We also use our AYF framework to distill FLUX.1 [dev] [41], the best text-to-image diffusion model, using an efficient LoRA [27] framework, and reduce sampling steps to just 4. Experiment details are explained in the Appendix. ImageNet Flow Maps. We adopt the EDM2 [36] framework, using their small “S” models, and modify the network parametrization and time embedding layer as detailed in Sec. 3. Pretrained checkpoints available online are used both as teacher network and as flow map initialization. We incorporate autoguidance into the flow map model by introducing an additional input, $\lambda$, corresponding to the guidance scale [49, 57]. During training, $\lambda$ is uniformly sampled from [1, 3] and applied to the teacher model via autoguidance. At inference, we leverage the $\gamma$-sampling algorithm from [38] for stochastic multistep sampling of flow map models. Results are reported using the optimal $\gamma$ and $\lambda$ values. For ImageNet $512 \times 512$, the teacher and distilled models operate in latent space [60].

Figure 6: FID $\downarrow$ as a function of wall-clock time.

In Tab. 1 we show ImageNet $64 \times 64$ results, reporting FID and recall scores along with the number of neural function evaluations (NFE). Our flow maps achieve the best sample quality among all non-adversarial few-step methods given only 2 sampling steps, at the cost of some optimal 1-step quality. This is because learning a flow map is a more challenging task than learning only a consistency model. In Tab. 2 we compare AYF against the state-of-the-art consistency model sCD/sCT [49] on ImageNet $512 \times 512$, also reporting total sampling wall-clock time, Gflops, and #parameters. We show that although our small-sized model achieves slightly worse one-step sample quality, it is on par with the best sCD model at only two steps while using only $18\%$ of the larger model's compute.
Increasing the sampling steps to four improves the quality even further while still being over twice as fast as the large 1-step sCM model (wall-clock time). We further analyze the performance vs. sampling speed trade-off in Fig. 6, showing that AYF is much more efficient than sCD/sCT (also see the Appendix for additional comparisons). Autoguidance allows AYF to use a small network and still achieve strong performance, and the efficient network keeps 2-step and 4-step synthesis extremely fast. Adversarial finetuning of AYF. Given a pretrained AYF flow map model, we found that a short finetuning stage using a combination of the EMD objective and an adversarial loss can significantly boost performance across the board, especially for 1-step generation, with minimal impact on sample diversity as measured by recall scores. Using this approach, we achieve state-of-the-art performance on few-step generation on ImageNet64 (see Tab. 1). For implementation details, please see the Appendix. Additional GAN and diffusion model baselines on ImageNet $512 \times 512$ can be found in the Appendix; AYF outperforms all of them. Text-to-Image Flow Maps. We apply AYF to distill the open-source text-to-image model FLUX.1 [dev] [41] into a few-step generator, finetuning a FLUX.1 base model into a flow map model using LoRA [27] with the objective in Theorem 3.2 for 10,000 steps. This distillation process took approx. four hours on 8 NVIDIA A100 GPUs, which is highly efficient in contrast to several previous large-scale text-to-image distillation methods. Samples from the model are shown in Fig. 1. We compare to LCM [53, 54] and TCD [93], two consistency-distilled LoRAs trained on top of SDXL [59] without adversarial objectives. To evaluate quality, we ran a user study. The results (Fig. 7) show a clear preference for our method. We also provide qualitative comparisons in Fig. 3. Compared to LCM and TCD, our images are more aesthetically pleasing with finer details.
We also included FLUX.1 [schnell] [41], a commercially distilled model trained with Latent Adversarial Diffusion Distillation [66]. Our method achieves comparable image quality to the [schnell] model, while requiring only four sampling steps and 32 GPU hours, without the use of adversarial losses. In conclusion, AYF achieves state-of-the-art few-step text-to-image generation performance among non-adversarial methods.

Figure 7: User preferences comparing LoRA-based consistency and flow map models (4-step samples). LCM and TCD use SDXL and AYF uses FLUX.1 [dev] as base model, respectively.

Detailed ablation studies on different components of AYF (EMD vs. LMD; autoguidance vs. CFG; AYF vs. Shortcut) are presented in the Appendix. Additional qualitative examples of images generated by AYF are shown in the Appendix, too.
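The multistep inference used throughout these experiments amounts to chaining flow map jumps along a timestep schedule. Below is a minimal deterministic sketch; the function names and the toy flow map are ours (the toy map stands in for a trained $\mathbf{F}_\theta$ on a degenerate dataset), and the stochastic $\gamma$-sampling of [38] would additionally re-inject noise between jumps:

```python
import numpy as np

def multistep_sample(flow_map, x_start, schedule):
    """Chain flow map jumps x_s = f(x_t, t, s) along a timestep schedule,
    e.g. [1.0, 0.0] for 1-step and [1.0, 0.5, 0.0] for 2-step sampling."""
    x = x_start
    for t, s in zip(schedule[:-1], schedule[1:]):
        x = flow_map(x, t, s)
    return x

def toy_flow_map(x, t, s):
    # Exact flow map for the degenerate data distribution {x0 = 0} under
    # the straight interpolation path x_t = t * noise: x_s = (s / t) * x_t.
    return x * (s / t)
```

With this toy map, both 1-step and 2-step schedules recover the data point exactly; a learned flow map only approximates its ideal jumps, which is why added intermediate steps improve quality in Tabs. 1 and 2.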
Diffusion- and flow-based models have emerged as state-of-the-art generative modeling approaches, but they require many sampling steps. Consistency models can distill these models into efficient one-step generators; however, unlike flow- and diffusion-based methods, their performance inevitably degrades when increasing the number of steps, which we show both analytically and empirically. Flow maps generalize these approaches by connecting any two noise levels in a single step and remain effective across all step counts. In this paper, we introduce two new continuous-time objectives for training flow maps, along with additional novel training techniques, generalizing existing consistency and flow matching objectives. We further demonstrate that autoguidance can improve performance, using a low-quality model for guidance during distillation, and an additional boost can be achieved by adversarial finetuning, with minimal loss in sample diversity. We extensively validate our flow map models, called Align Your Flow, on challenging image generation benchmarks and achieve state-of-the-art few-step generation performance on both ImageNet 64x64 and 512x512, using small and efficient neural networks. Finally, we show text-to-image flow map models that outperform all existing non-adversarially trained few-step samplers in text-conditioned synthesis.
# 1 Introduction Machine learning systems have become increasingly prevalent in decision-making across various domains, including healthcare, finance, and criminal justice. While these systems promise more efficient and data-driven decisions, they also raise significant concerns regarding fairness and equity. As machine learning models learn from historical data, they can inadvertently perpetuate or even exacerbate existing societal biases, leading to unfair outcomes for certain social groups. Numerous instances of unfair behaviors in real-world machine learning systems have been reported, including biased recidivism risk prediction (Angwin et al. 2016), discriminatory hiring practices (Dastin 2018), inequitable facial recognition performance (Crockford 2020; Najibi 2020), and biased credit scoring (Vigdor 2019). These reports underscore the urgent need to address the issue of unfairness in machine learning. To mitigate unfair bias, researchers have developed diverse methodologies for constructing accurate predictors that ensure certain fairness definitions. These methodologies can be categorized into three main types: pre-processing techniques that modify the training data (Feldman et al. 2015), in-processing methods that incorporate fairness constraints during model training (Zhang et al. 2018; Cotter et al. 2019; Chuang et al. 2020; Du et al. 2021; Jovanović et al. 2023; Khalili et al. 2023), and post-processing approaches that adjust the model’s outputs (Menon et al. 2018; Chzhen et al. 2019; Zhao et al. 2019; Chzhen et al. 2020; Jiang et al. 2020; Schreuder et al. 2021; Chen et al. 2023; Xian et al. 2023; Xu et al. 2023). Methodologies of each type have been proposed for various fairness definitions, including demographic parity (Pedreshi et al. 2008), equalized odds (Hardt et al. 2016), multicalibration (Kleinberg et al. 2017), and individual fairness (Dwork et al. 2012). Each of these fairness criteria aims to address different aspects of algorithmic bias. 
These diverse methodologies have significantly advanced the field of fair machine learning, enabling researchers and practitioners to address fairness concerns in various contexts and applications. Recent advancements in fair learning algorithms have led to the development of methods that achieve the best possible predictive accuracy while adhering to specific fairness definitions, particularly demographic parity. Such advancements were achieved by constructing fair learning algorithms and proving their minimax optimality under regression (Chzhen et al. 2022; Fukuchi et al. 2023) and classification (Zeng et al. 2024) setups. The fair minimax optimal algorithm is an algorithm that satisfies the fairness definition and minimizes the worst-case error taken over a certain set of data generation models. No fair algorithm can outperform the fair minimax optimal algorithm in the sense of the worst-case error, as the minimization is taken over all the fair algorithms. While these optimal methods represent significant progress, they are often tightly coupled with specific data generation models, limiting their applicability to a broader range of real-world scenarios. For example, Chzhen et al. (2022) and Fukuchi et al. (2023) each employ certain linear models of the outcome with Gaussian features, albeit with different specific formulations. Zeng et al. (2024) work under assumptions including that the regression function is within the Hölder class and that both the margin and strong density conditions are satisfied. The dependence on these particular data distributions and model assumptions can restrict the applicability of these approaches, potentially hindering their adoption in diverse applications where the underlying data characteristics may differ significantly from these specific conditions. Our contributions (Meta-optimality) We address the limitations of existing analyses for fair regression by establishing a meta-theorem that applies to a wide range of scenarios. 
This meta-theorem connects the minimax optimal error for fair regression with that for conventional regression, allowing for rates tailored to various situations by leveraging well-established results on minimax optimality in conventional regression. Our approach can be combined with minimax optimal regression under diverse smoothness assumptions (e.g., Hölder, Sobolev, and Besov spaces (Donoho et al. 1998; Giné et al. 2015)) and with minimax optimal deep learning methods (Suzuki 2018; Schmidt-Hieber 2020; Suzuki et al. 2021; Nishimura et al. 2023). (Optimal fair regression by post-processing) We propose a post-processing algorithm that leverages an optimal conventional regression algorithm. Guided by our meta-theorem, this construction ensures minimax optimality under the assumptions employed by the conventional regression algorithm. Since the proposed algorithm is post-processing, practitioners can concentrate on refining conventional regression methods, which can then be seamlessly adapted for fair regression. (Convergence rate analysis for optimal transport map estimation in Wasserstein barycenters) A key component of our algorithm is optimal transport map estimation within the Wasserstein barycenter problem, which seeks a distribution (often called the barycenter) that minimizes the (Wasserstein) distances to a set of distributions. The optimal transport map is a mapping between distributions that achieves the minimum cost. One of our main contributions is to provide a convergence rate analysis of a transport map estimator for the Wasserstein barycenter problem, which may be of independent interest. The analyzed estimator is based on Korotin et al. (2020), but they did not provide a convergence rate analysis. The detailed discussion will appear in Section 7. All the missing proofs can be found in Appendix A. Notations For a positive integer $m$, let $[m] = \{1, \ldots, m\}$ and let $\Delta_m$ denote the probability simplex over $[m]$. We denote the indicator function by $\mathbb{1}$. For real values $a$ and $b$, we define $a \vee b = \max\{a, b\}$ and $a \wedge b = \min\{a, b\}$. For a sequence $a_t$ indexed by $t \in \mathcal{T}$, we represent the family $(a_t)_{t \in \mathcal{T}}$ as $a_:$. The first derivative of a function $f : \mathbb{R} \to \mathbb{R}$ is denoted by $Df$. Given an event $\mathcal{E} \in \mathfrak{F}$ in a probability space $(\mathcal{Z}, \mathfrak{F}, \nu)$, we denote its complement by $\mathcal{E}^c$ and its probability by $\mathbb{P}_\nu\{\mathcal{E}\}$. For a random variable $X$ from $(\mathcal{Z}, \mathfrak{F}, \nu)$ to a measurable space $(\mathcal{X}, \mathfrak{X})$, we denote its expectation and variance by $\mathbb{E}_\nu[X]$ and $\mathbb{V}_\nu[X]$, respectively. # 2 Problem Setup and Preliminaries # 2.1 Fair Regression Problems Consider a fair regression problem with $M \geq 2$ social groups. Let $\mathcal{X}$ and $\Omega \subset \mathbb{R}$ be the domains of features and outcomes, respectively, where we assume $\Omega$ is open and bounded. For each social group $s \in [M]$ (e.g., male and female for gender), let $X^{(s)} \in \mathcal{X}$ and $Y^{(s)} \in \Omega$ be random variables on a probability measure space $(\mathcal{Z}, \mathfrak{F}, \mu_s)$, representing the features and outcomes of an individual in group $s$, respectively. The goal of the regression problem is to construct a (group-wise) regressor $f_:$, a family of mappings from $\mathcal{X}$ to $\Omega$ indexed by $s \in [M]$, that accurately predicts $Y^{(s)}$ based on $X^{(s)}$.
The ideal regressor, known as the Bayes-optimal regressor, is defined as $f_{\mu,s}^* = \arg\min_f \mathbb{E}_{\mu_s}[(f(X^{(s)}) - Y^{(s)})^2]$ and is given by $f_{\mu,s}^*(X^{(s)}) = \mathbb{E}_{\mu_s}[Y^{(s)} \mid X^{(s)}]$. We use $\mu_{Y,s}$ and $\mu_{X,s}$ to denote the laws of $Y^{(s)}$ and $X^{(s)}$, respectively. Additionally, we denote the law of $f_{\mu,s}^*(X^{(s)})$ by $\mu_{f,s}$. Given samples consisting of $n_s$ i.i.d. copies of $(X^{(s)}, Y^{(s)})$, the objective of the learning algorithm is to construct a regressor $f_{n,:}$ that maximizes accuracy while satisfying a fairness constraint. Let $n = \sum_{s \in [M]} n_s$ for notational convenience. We now introduce the definition of fairness, define a measure of accuracy, and provide the definition of the fair optimal algorithm. Fairness We employ demographic parity (Pedreshi et al. 2008) as our fairness criterion. A regressor $f_:$ satisfies demographic parity if its output distribution remains invariant across all groups $s \in [M]$. Definition 1. A regressor $f_:$ satisfies (strict) demographic parity if, for all $s, s' \in [M]$ and for all events $E$, $\mathbb{P}_{\mu_s}\{f_s(X^{(s)}) \in E\} = \mathbb{P}_{\mu_{s'}}\{f_{s'}(X^{(s')}) \in E\}$. Let $\bar{\mathcal{F}}(\mu_{X,:})$ denote the set of all regressors satisfying demographic parity for given laws $\mu_{X,:}$. Instead of enforcing strict demographic parity in Def. 1, we adopt the concept of fairness consistency (Chzhen et al. 2020; Fukuchi et al. 2023). Definition 2.
A learning algorithm is consistently fair if $\bar{f}_{n,:}$ converges in probability to an element of $\bar{\mathcal{F}}(\mu_{X,:})$ as $n_1, \ldots, n_M$ approach infinity. Def. 2 implies that a consistently fair learning algorithm eventually constructs a regressor that satisfies strict demographic parity given a sufficiently large sample size. Accuracy We evaluate the accuracy of a given regressor by measuring its expected squared distance from the fair Bayes-optimal regressor. Given probability measures $\nu_:$ on a measurable space $(\mathcal{Z}, \mathfrak{F})$ indexed by $[M]$ and weights $w_: \in \Delta_M$ (known to the learner), we define the squared distance $d_{\nu_:}$ between functions $f_s, f_s' : \mathcal{Z} \to \mathbb{R}$ indexed by $s \in [M]$ as $$ d_{\nu_:}^2(f_:, f_:') = \sum_{s \in [M]} w_s \int \bigl(f_s(z) - f_s'(z)\bigr)^2 \nu_s(dz) =: \sum_{s \in [M]} w_s d_{\nu_s}^2(f_s, f_s'). $$ The fair Bayes-optimal regressor is defined as the regressor that satisfies strict demographic parity and minimizes the deviation from the Bayes-optimal regressor: $$ \bar{f}_{\mu,:}^* = \underset{f_: \in \bar{\mathcal{F}}(\mu_{X,:})}{\arg\min} \, d_{\mu_{X,:}}^2(f_:, f_{\mu,:}^*). $$ The accuracy of a regressor $f_:$ is then evaluated by $d_{\mu_{X,:}}^2(f_:, \bar{f}_{\mu,:}^*)$. Optimality The fair minimax optimal algorithm for a set of distributions $\mathcal{P}$ is a consistently fair algorithm that achieves the fair minimax optimal error over $\mathcal{P}$.
The fair minimax optimal error over $\mathcal{P}$ is defined as $$ \bar{\mathcal{E}}_n(\mathcal{P}) = \operatorname*{inf}_{\bar{f}_{n,:} : \mathrm{fair}} \operatorname*{sup}_{\mu_: \in \mathcal{P}} \mathbb{E}_{\mu_:^n}[d_{\mu_{X,:}}^2(\bar{f}_{n,:}, \bar{f}_{\mu,:}^*)], $$ where the infimum is taken over all consistently fair learning algorithms, and $\mathbb{E}_{\mu_:^n}$ denotes the expectation over samples. Thus, no consistently fair learning algorithm can outperform the fair minimax optimal algorithm in terms of worst-case expected deviation. # 2.2 Fair Bayes-Optimal Regressors and Optimal Transport Maps Recent analyses have characterized fair Bayes-optimal regressors using optimal transport maps that arise in the Wasserstein barycenter problem (Chzhen et al. 2020, 2022). Given two probability measures $\nu$ and $\nu'$, the optimal transport map with a quadratic cost function is the unique solution of Monge's formulation of the optimal transportation problem between $\nu$ and $\nu'$, i.e., a transport map $\vartheta^* : \mathbb{R} \to \mathbb{R}$ that realizes the infimum $$ W_2^2(\nu, \nu') = \operatorname*{inf}_{\vartheta : \vartheta \sharp \nu = \nu'} \int \frac{1}{2} (z - \vartheta(z))^2 \nu(dz), $$ where $\vartheta \sharp \nu$ denotes the pushforward measure of $\nu$ by $\vartheta$. Given probability measures $\nu_1, \ldots, \nu_k$, the Wasserstein barycenter problem with weights $w_: \in \Delta_k$ is defined as $$ \operatorname*{inf}_{\nu} \sum_{i \in [k]} w_i W_2^2(\nu_i, \nu). $$ We refer to the unique solution of Eq. (2) as the barycenter of $\nu_:$ with weights $w_:$.
The optimal transport map from $\nu_i$ to the barycenter of $\nu_:$ is denoted by $\vartheta_{\nu,i}^*$. Building on these concepts, the fair Bayes-optimal regressor is obtained as follows: Theorem 1 (Chzhen et al. (2020)). Assume that $\mu_{f,s}$ admits a density for all $s \in [M]$. Then, the fair Bayes-optimal regressor is given by $$ \bar{f}_{\mu,s}^*(x) = (\vartheta_{\mu_f,s}^* \circ f_{\mu,s}^*)(x). $$ Thm. 1 reveals that the fair Bayes-optimal regressor is characterized by the Bayes-optimal regressor $f_{\mu,:}^*$ and the optimal transport maps $\vartheta_{\mu_f,:}^*$. Throughout the paper, we assume $\mu_{f,s}$ admits a density for all $\mu_: \in \mathcal{P}$ and $s \in [M]$ so that the condition of Thm. 1 holds. # 2.3 Potential Minimization and Optimal Transport Maps The transport maps in Wasserstein barycenter problems are characterized by the minimizer of the multiple correlation over congruent potentials (Korotin et al. 2020). Congruent potentials with weights $w_: \in \Delta_k$ are convex and lower semi-continuous functions $u_: : \mathbb{R} \to \mathbb{R}$ such that $\sum_{i \in [k]} w_i D u_i^\dagger(z) = z$ for all $z \in \Omega$, where $u^\dagger(z) = \operatorname*{sup}_x (zx - u(x))$ is the convex conjugate of $u$. Given probability measures $\nu_:$, the multiple correlation of $\nu_:$ with weights $w_:$ for congruent potentials $u_:$ is defined as $$ C(u_: ; \nu_:) = \sum_{i \in [k]} w_i \int u_i \, d\nu_i. $$ Let $u_:^*$ denote the optimal congruent potentials that minimize $C(u_: ; \nu_:)$. Through the analyses by Agueh et al. (2011) and Álvarez-Esteban et al.
(2016), we obtain the following characterization of the optimal transport maps: Corollary 1 (Agueh et al. (2011) and Álvarez-Esteban et al. (2016)). Let $\vartheta_:^*$ be the optimal transport maps from $\nu_:$ to the barycenter of $\nu_:$ with weights $w_:$. Suppose that for all $i \in [k]$, $\nu_i$ admits a density. Then, $\vartheta_i^* = D u_i^*$ for $i \in [k]$. By Cor. 1, we can obtain the optimal transport maps $\vartheta_:$ by solving $\inf_{u_: \,\mathrm{congruent}} C(u_: ; \nu_:)$. # 3 Main Result Our main result is a meta-theorem that characterizes the fair minimax optimal error (see Eq. (1)), showing how its convergence rate depends on $\mathcal{P}$. We begin by introducing several technical assumptions on $\mathcal{P}$ before presenting our main meta-theorem. For convenience, we introduce several notations. We use the notations $\mathcal{P}_Y = \{\mu_{Y,:} : \mu_: \in \mathcal{P}\}$, $\mathcal{P}_X = \{\mu_{X,:} : \mu_: \in \mathcal{P}\}$, and $\mathcal{P}_f = \{\mu_{f,:} : \mu_: \in \mathcal{P}\}$. We denote $\mathcal{P}_s = \{\mu_s : \mu_: \in \mathcal{P}\}$ for $s \in [M]$. Let $\mathcal{F}_{\mathcal{P}} = \{f_{\mu,:}^* : \mu_: \in \mathcal{P}\}$, and let $\mathcal{F}_{\mathcal{P},s} = \{f_{\mu,s}^* : \mu_: \in \mathcal{P}\}$ for $s \in [M]$. Let $\Theta_{\mathcal{P}} = \{\vartheta_{\mu_f,:}^* : \mu_: \in \mathcal{P}\}$. We omit $\mathcal{P}$ in the subscript of these notations if $\mathcal{P}$ is clear from the context. Assumptions We begin with our first assumption. Assumption 1. $\mathcal{P}$ satisfies the following three conditions: 1.
$\mathcal{P}_Y \times \mathcal{P}_X \subseteq \{(\mu_{Y,:}, \mu_{X,:}) : \mu_: \in \mathcal{P}\}$, 2. for any permutation $\pi$ over $[M]$, $\mu_{\pi(:)} \in \mathcal{P}$ if $\mu_: \in \mathcal{P}$, 3. $\mathcal{F}_s$ is convex, meaning for any $f, f' \in \mathcal{F}_s$ and $t \in (0, 1)$, $tf + (1 - t)f' \in \mathcal{F}_s$. Intuitively, the first and second conditions imply that the learner has no prior knowledge about (i) how the distributions of $Y^{(s)}$ and $X^{(s)}$ are related, nor (ii) how the distributions $\{\mu_s\}_{s \in [M]}$ differ. These conditions are naturally satisfied in many real-world scenarios. Note that the third condition does not require the regression functions themselves to be convex but rather that the set $\mathcal{F}_s$ is convex, which can still accommodate non-convex functions. Next, we impose assumptions on $\mu_{f,:}$ to facilitate the estimation of the optimal transport maps $\vartheta_{\mu_f,:}^*$. First, we assume that $\vartheta_{\mu_f,:}^*$ are elements of the class $\mathcal{M}_L$ of Lipschitz and strictly increasing functions. Specifically, $\mathcal{M}_L$ is defined as the set of functions $\vartheta : \Omega \to \Omega$ satisfying $$ L^{-1}(x - y) \leq \vartheta(x) - \vartheta(y) \leq L(x - y) \quad \forall x > y. $$ Second, we assume that $\mu_{f,s}$ satisfies the Poincaré-type inequality: there exists a constant $C_P > 0$ such that for any function $g : \Omega \to \Omega$ with an $L$-Lipschitz continuous gradient, $$ \mathbb{V}_{\mu_s}\Big[g\Big(X^{(s)}\Big)\Big] \leq C_P \, \mathbb{E}_{\mu_s}\Bigg[\Big(Dg\Big(X^{(s)}\Big)\Big)^2\Bigg]. $$ Assumption 2.
There exists a constant $L > 1$ such that for all $\mu_: \in \mathcal{P}$ and all $s \in [M]$, 1) $\vartheta_{\mu_f,s}^* \in \mathcal{M}_L$ and 2) $\mu_{f,s}$ satisfies the Poincaré-type inequality in Eq. (4). Note that assumptions similar to Asm. 2 are also employed in studies on transport map estimation, including Hütter et al. (2021) and Divol et al. (2024). We use the following complexity measure of the class of transport maps $\Theta$, based on metric entropy. Given $\epsilon > 0$, the $\epsilon$-covering number of a set $A \subseteq \mathcal{X}$ in a metric space $(\mathcal{X}, d)$ is denoted by $N(\epsilon, A, d)$; namely, $N(\epsilon, A, d)$ denotes the minimum number of balls whose union covers $A$. Let $\ln_+(x) = 0 \vee \ln(x)$. The complexity measure is defined as follows: Definition 3. The complexity of $\mathcal{P}_f$ is $(\alpha, \beta)$ for $\alpha > 0$ and $\beta \geq 0$ if there exist constants $C, C' > 0$ and $\bar{\epsilon} > 0$, and a sequence of subsets $\Theta_1 \subseteq \Theta_2 \subseteq \dots \subseteq \{\vartheta_{\nu,:}^* : \nu_: \in \mathcal{P}_f\}$ such that for any integer $j \geq 0$, 1. $\sup_{\nu_: \in \mathcal{P}_f} \inf_{\vartheta_: \in \Theta_j} d_{\nu_:}^2(\vartheta_:, \vartheta_{\nu,:}^*) \leq C 2^{-\alpha j}$; 2. $\sup_{\nu_: \in \mathcal{P}_f} \ln N(\epsilon, \Theta_j, d_{\nu_:}) \leq C' 2^{\beta j} \ln_+(1/\epsilon)$ for $\epsilon \in (0, \bar{\epsilon}]$.
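Both the demographic parity criterion (Def. 1) and the accuracy measure $d^2_{\nu_:}$ admit simple empirical plug-in estimates, which are useful for sanity-checking a fair regressor on held-out data. A minimal sketch (function names are ours):

```python
import numpy as np

def parity_gap(scores_a, scores_b, grid_size=101):
    """Maximum CDF gap between two groups' prediction distributions over
    a shared grid; (approximately) zero iff the empirical output
    distributions coincide, i.e. demographic parity holds (Def. 1)."""
    grid = np.quantile(np.concatenate([scores_a, scores_b]),
                       np.linspace(0.0, 1.0, grid_size))
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return float(np.max(np.abs(cdf(scores_a) - cdf(scores_b))))

def weighted_sq_distance(preds, refs, weights):
    """Monte-Carlo plug-in estimate of
    d^2_{nu:}(f_:, f'_:) = sum_s w_s E[(f_s(X) - f'_s(X))^2],
    with one array of evaluations per group."""
    return float(sum(w * np.mean((p - r) ** 2)
                     for p, r, w in zip(preds, refs, weights)))
```

Here `preds` and `refs` hold group-wise evaluations of the two regressors on samples from each $\nu_s$, matching the weighted-sum form of $d^2_{\nu_:}$ in Sec. 2.1.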
In our main theorem, we will assume the complexity is $(\alpha, \beta)$ for some $\alpha$ and $\beta$. Meta-theorem We now characterize the fair minimax optimal error $\bar{\mathcal{E}}_n(\mathcal{P})$ in terms of the conventional minimax optimal error. Specifically, for a given group $s \in [M]$, group $s$'s conventional minimax optimal error is defined as $$ \mathcal{E}_k(\mathcal{P}_s) = \operatorname*{inf}_{f_k} \operatorname*{sup}_{\mu_s \in \mathcal{P}_s} \mathbb{E}_{\mu_s^k}\Big[d_{\mu_{X,s}}^2\big(f_k, f_{\mu,s}^*\big)\Big], $$ where the infimum is taken over all regression algorithms that take $k$ i.i.d. copies of $(X^{(s)}, Y^{(s)})$ as the observed sample. Let $\tilde{n} = \operatorname*{min}_{s \in [M]} n_s / w_s$. Theorem 2. Assume Asm. 1 and 2 and that the complexity of $\mathcal{P}_f$ is $(\alpha, \beta)$ for some $\alpha > 0$ and $\beta \geq 0$. Then, there exists a consistently fair learning algorithm such that $$ \mathcal{E}_n(\mathcal{P}_1) \leq \bar{\mathcal{E}}_n(\mathcal{P}) \leq C \left( L^2 \sum_{s \in [M]} w_s \mathcal{E}_{n_s}(\mathcal{P}_s) + \left( \frac{\tilde{n}}{\ln(\tilde{n})} \right)^{-\alpha/(\alpha+\beta)} \right), $$ for some constant $C > 0$. We highlight several implications of Thm. 2: 1. Assuming there exists a constant $c > 0$ such that $n_s \geq c w_s n$ for all $s \in [M]$, Thm. 2 shows that the optimal rate with respect to $n$ is $\mathcal{E}_n(\mathcal{P}_1)$ (note that $\mathcal{E}_n(\mathcal{P}_1) = \dots = \mathcal{E}_n(\mathcal{P}_M)$) whenever $\mathcal{E}_n(\mathcal{P}_1)$ is larger than $\big(\frac{n}{\ln(n)}\big)^{-\alpha/(\alpha+\beta)}$. In such cases, the rate of $\bar{\mathcal{E}}_n(\mathcal{P})$ can vary with $\mathcal{P}$ along with $\mathcal{E}_n(\mathcal{P}_1)$. 2. In general, Thm. 2 implies that if the conventional regression problem is more difficult than the transport map estimation problem, then the optimal fair regression error is dominated by the conventional minimax error. This situation commonly arises in high-dimensional settings for $\mathcal{X}$ (e.g., image, text, or audio regression). 3. Chzhen et al. (2020) also established a similar upper bound on the error under the demographic parity constraint. However, their result has two notable limitations. First, their analysis requires stronger assumptions than ours. For instance, they assume that the conventional regression algorithm admits a sub-Gaussian high-probability error bound, whereas our results only require a bound on the expected squared error. Furthermore, while they require uniform upper and lower bounds on the density of $\mu_{f,s}$, we only assume the Poincaré-type inequality. These differences broaden the applicability of our meta-theorem relative to their findings. 4. The second limitation of Chzhen et al. (2020) is that their results cannot achieve a convergence rate faster than $n^{-1/2}$, since their upper bound on the estimation error of $\vartheta_{\mu,:}^*$ is $n^{-1/2}$ and dominates the other terms. In contrast, our result can achieve a rate faster than $n^{-1/2}$ by exploiting the smoothness structure of $\vartheta_{\mu,:}^*$.
Illustrative Example To concretely demonstrate the implications of our theoretical results, we consider a representative scenario in which the regression function $f_{\mu,s}^*$ is a composition of multiple functions, as studied by Schmidt-Hieber (2020), and the optimal transport map $\vartheta_{\mu,s}^*$ lies within a Sobolev function class. Specifically, let $f_{\mu,s}^*$ belong to the class $$ \Big\{ g_q \circ \dots \circ g_0 : g_i = (g_{ij})_j : [a_i, b_i]^{d_i} \to [a_{i+1}, b_{i+1}]^{d_{i+1}}, g_{ij} \in C_{t_i}^{\beta_i}\left([a_i, b_i]^{t_i}\right) \Big\}, $$ where $C_r^\beta$ denotes the Hölder class of functions with smoothness parameter $\beta$ and $r$-dimensional input, $d_i$ is the input dimension of $g_i$, and $t_i < d_i$ indicates that each $g_{ij}$ depends on only $t_i$ out of $d_i$ variables. This structure captures the notion of compositional functions with sparse dependencies, which is prevalent in high-dimensional statistical learning. For regression functions of this form, the minimax optimal error for group $s$ satisfies $\mathcal{E}_{n_s}(\mathcal{P}_s) = \Theta\big(\max_i n_s^{-\beta_i^*/(2\beta_i^*+t_i)}\big)$ up to logarithmic factors, where $\beta_i^* = \beta_i \prod_{\ell=i+1}^{q}(\beta_\ell \wedge 1)$. Also, for the class of transport maps $\vartheta_{\mu,s}^*$ taken to be the Sobolev class of smoothness $\gamma > 0$, Def. 3 is satisfied with $\alpha = 2\gamma$ and $\beta = 1$ by choosing $\Theta_j$ as the span of the first $2^j$ wavelet basis functions.
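As a quick numeric illustration (with hypothetical smoothness values, not taken from the paper), the two exponents can be compared directly; `math.prod` computes the effective smoothness $\beta_i^* = \beta_i \prod_{\ell > i}(\beta_\ell \wedge 1)$:

```python
import math

# Hypothetical layers g_2 . g_1 . g_0 with Hoelder smoothness beta_i and
# effective input dimension t_i (illustrative values only).
betas = [2.0, 1.5, 3.0]
ts = [3, 2, 1]

# Effective smoothness beta_i* = beta_i * prod_{l>i} (beta_l ^ 1).
beta_star = [betas[i] * math.prod(min(b, 1.0) for b in betas[i + 1:])
             for i in range(len(betas))]

# E_{n_s}(P_s) ~ max_i n_s^{-beta_i*/(2 beta_i* + t_i)}: the smallest
# exponent (slowest layer) governs the regression rate.
reg_exp = min(b / (2 * b + t) for b, t in zip(beta_star, ts))

# Transport-map term: exponent alpha/(alpha+beta) = 2g/(2g+1), Sobolev g.
gamma = 2.0
ot_exp = 2 * gamma / (2 * gamma + 1)

print(beta_star, reg_exp, ot_exp)  # reg_exp < ot_exp: regression dominates
```

Here $t_0 = 3 > 1$, so even with comparable smoothness the regression exponent $2/7$ is well below the transport exponent $4/5$, in line with implication 2 of Thm. 2.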
Consequently, the fair minimax error is obtained as $$ \bar{\mathcal{E}}_n(\mathcal{P}) = \Theta\Bigl(\operatorname*{max}_i n^{-\beta_i^*/(2\beta_i^*+t_i)} + n^{-2\gamma/(2\gamma+1)}\Bigr), $$ again up to logarithmic factors, provided that there exists a constant $c > 0$ such that $n_s \ge c w_s n$ for all $s \in [M]$. An important insight from this example is that if the smoothness of the regression components and the transport maps are comparable (i.e., $\beta_i^* \approx \gamma$), then the minimax error is dominated by the regression term whenever $t_i > 1$ for some $i$. Here, $t_i$ reflects the intrinsic dimensionality of the essential intermediate data representation. In practical situations, the intermediate representations may be multi-dimensional ($t_i > 1$), and thus the overall rate is determined by the conventional regression problem. # 4 Optimal Algorithm and Upper Bound In this section, we present our fair regression algorithm. Its core structure is similar to algorithms proposed by Chzhen et al. (2020, 2022). Leveraging Thm. 1, we obtain the fair Bayes-optimal regressor by composing the optimal transport maps $\vartheta_{\mu_f,:}^*$ from $\mu_{f,:}$ to the barycenter of $\mu_{f,:}$ with the conventional Bayes-optimal regressor $f_{\mu,:}^*$. Let $f_{n,:}$ and $\vartheta_{n,:}$ be estimators of $f_{\mu,:}^*$ and $\vartheta_{\mu_f,:}^*$, respectively. We define the estimator of $\bar{f}_{\mu,:}^*$ as $\bar{f}_{n,s}(x) = (\vartheta_{n,s} \circ f_{n,s})(x)$. This procedure can be viewed as post-processing since we first construct $f_{n,:}$ using a conventional learning algorithm and then refine its outputs using $\vartheta_{n,:}$. We employ a minimax-optimal conventional regression algorithm for the model $\mathcal{P}$ as the estimator $f_{n,:}$.
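The composition $\bar{f}_{n,s} = \vartheta_{n,s} \circ f_{n,s}$ can be made concrete. The sketch below is not the paper's sieved multiple-correlation estimator: for one-dimensional outputs it uses the closed-form facts that the $W_2$ barycenter's quantile function is the $w_s$-weighted average of the groups' quantile functions and that the optimal map is the barycenter quantile function composed with the group CDF. The regressors `f_hat` and the data are hypothetical stand-ins for $f_{n,s}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fitted group regressors (stand-ins for f_{n,s}); group 1's
# predictions are shifted, so raw outputs violate demographic parity.
f_hat = {0: lambda x: x @ np.array([1.0, 0.5]),
         1: lambda x: x @ np.array([1.0, 0.5]) + 1.0}
w = {0: 0.5, 1: 0.5}

# Held-out covariates per group (the second half of the sample split).
X = {s: rng.normal(size=(2000, 2)) for s in (0, 1)}
preds = {s: f_hat[s](X[s]) for s in (0, 1)}

# In 1D, the barycenter quantile function is Q_bar = sum_s w_s Q_s, and
# the optimal map is theta_s = Q_bar composed with group s's output CDF.
grid = np.linspace(0.005, 0.995, 199)
Q_bar = sum(w[s] * np.quantile(preds[s], grid) for s in (0, 1))
sorted_preds = {s: np.sort(preds[s]) for s in (0, 1)}

def theta(s, z):
    u = np.searchsorted(sorted_preds[s], z) / len(sorted_preds[s])
    return np.interp(u, grid, Q_bar)

def f_bar(s, x):          # the post-processed fair regressor
    return theta(s, f_hat[s](x))

m0 = f_bar(0, X[0]).mean()
m1 = f_bar(1, X[1]).mean()
print(m0, m1)             # both groups now share the barycenter law
```

After post-processing, both groups' prediction distributions coincide (approximately, at the empirical level), which is the demographic-parity property the composition is designed to enforce.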
Our main methodological contribution, distinguishing our work from earlier approaches, is the introduction of a suitable estimator for $\vartheta_{n,:}$. To estimate $\vartheta_{\mu_f,:}^*$, we utilize the strategy proposed by Korotin et al. (2020) (see Section 2.3). We also provide a novel analysis of the convergence rate for the estimation error of $\vartheta_{n,:}$, which was not addressed in Korotin et al. (2020). We first describe the construction of $\vartheta_{n,:}$ and then demonstrate the overall procedure of our fair regression algorithm along with analyses of its accuracy and fairness. Barycenter estimation As discussed in Section 2.3, one can obtain the optimal transport maps in the Wasserstein barycenter problem by finding congruent potentials that minimize the multiple correlation $C(u_:, \nu_:)$. In the fair regression setting, our goal is to estimate the transport maps in the Wasserstein barycenter problem for $\mu_{f,:}$. However, the learner cannot directly observe $\mu_{f,:}$ since neither $f_{\mu,:}^*$ nor $\mu_{X,:}$ is known. Instead, we substitute $f_{n,:}$ for $f_{\mu,:}^*$ and use empirical measures from the observed samples in place of $\mu_{X,:}$. Specifically, let $\mu_{\hat{f},s}$ be the law of $f_{n,s}(X^{(s)})$ for $s \in [M]$, and let $\mu_{n,\hat{f},s}$ be the corresponding empirical measure induced by a sample of size $n_s$. Then, the estimator $\vartheta_{n,:}$ is defined as the minimizer of $$ \operatorname*{inf}_{\vartheta_: \in \Theta_j} C(u_{\vartheta,:}, \mu_{n,\hat{f},:}), $$ where $u_{\vartheta,:}$ denotes the potential functions corresponding to the transport maps $\vartheta_:$, and $\Theta_j$ is a sequence of subsets of $\Theta$ described in Def. 3. Remark 1.
We require a specific construction of $u_{\vartheta,:}$ for technical reasons. We extend the input and output domains of any function $\vartheta : \Omega \to \Omega$ to $\mathbb{R}$ while preserving the property in Eq. (3) by defining $\vartheta(z) = z + c_{\mathrm{sup}}$ for large $z$ and $\vartheta(z) = z + c_{\mathrm{inf}}$ for small $z$ with appropriate constants $c_{\mathrm{sup}}, c_{\mathrm{inf}} \in \mathbb{R}$. We interpret functions in $\Theta$ as their extended versions. Letting $u_{\vartheta,s}^\dagger(z) = \int_0^z \vartheta_s^{-1}(x)\, dx$, we define $u_{\vartheta,s}$ as the convex conjugate of $u_{\vartheta,s}^\dagger$; i.e., $u_{\vartheta,s}(z) = \operatorname*{sup}_{x \in \mathbb{R}}(xz - u_{\vartheta,s}^\dagger(x))$. Overall algorithm Alg. 1 summarizes the overall procedure of our fair regression algorithm. For simplicity, we assume that the sample size for group $s$ is $2n_s$. In the first step, we execute a minimax-optimal conventional regression algorithm with half of the samples to obtain $f_{n,:}$. By definition, $f_{n,s}$ achieves an error of $\mathcal{E}_{n_s}(\mathcal{P}_s)$. In the second step, we estimate the transport map $\vartheta_{n,:}$ via Eq. (5) using the remaining samples. As shown in the next corollary, Alg. 1 achieves the desired properties. Corollary 2. Under the same conditions as in Thm. 2, Alg. 1 achieves the upper bound in Thm. 2 and is consistently fair. Data: Samples $(Y_1^{(s)}, X_1^{(s)}), \dots
, (Y_{2n_s}^{(s)}, X_{2n_s}^{(s)})$ Result: $\bar{f}_{n,:}$ Construct $f_{n,:}$ using a minimax optimal conventional regression algorithm with the first half of the samples $(Y_1^{(s)}, X_1^{(s)}), \dots, (Y_{n_s}^{(s)}, X_{n_s}^{(s)})$; Construct $\vartheta_{n,:}$ by Eq. (5) with the remaining samples $f_{n,s}(X_{n_s+1}^{(s)}), \dots, f_{n,s}(X_{2n_s}^{(s)})$; $\bar{f}_{n,s}(x) = (\vartheta_{n,s} \circ f_{n,s})(x)$; Remark 2 (Limitation). The minimization over congruent potentials may present computational challenges. Korotin et al. (2020) proposed adding a penalty term to enforce congruency instead of handling the constraint directly, which may be more practical. Analysis of convergence rates under approximate satisfaction of congruency remains an important direction for future work. Connection between fair regression and transport maps estimation in Wasserstein barycenter To prove Cor. 2, we relate the regression error and unfairness of Alg. 1 to the estimation error of the estimated transport maps $\vartheta_{n,:}$. Specifically, we demonstrate the connection of Alg. 1's error and unfairness with $d_{\mu_{\hat{f},:}}^2(\vartheta_{n,:}, \vartheta_{\mu_{\hat{f}},:}^*)$ through the following propositions: Proposition 1. Let $\bar{f}_{n,:}$ be a regressor obtained by Alg. 1. Under the same conditions as in Thm.
2, there exists a universal constant $C > 0$ such that $$ \mathbb{E}_{\mu_:^{2n}}\Big[d_{\mu_{X,:}}^2(\bar{f}_{n,:}, \bar{f}_{\mu,:}^*)\Big] \leq C\left(L^2 \sum_{s \in [M]} w_s \mathcal{E}_{n_s}(\mathcal{P}_s) + \mathbb{E}_{\mu_{\hat{f},:}^n}\Big[d_{\mu_{\hat{f},:}}^2(\vartheta_{n,:}, \vartheta_{\mu_{\hat{f}},:}^*)\Big]\right), $$ where $\mathbb{E}_{\mu_{\hat{f},:}^n}$ denotes the expectation over the samples used for constructing $\vartheta_{n,:}$. Proposition 2. Let $\bar{f}_{n,:}$ be a regressor obtained by Alg. 1. Under the same conditions as in Thm. 2, we have $$ \operatorname*{inf}_\nu \operatorname*{max}_{s \in [M]} W_2(\bar{f}_{n,s} \sharp \mu_{X,s}, \nu) \leq \sqrt{\frac{1}{M w_{\operatorname*{min}}}}\, d_{\mu_{\hat{f},:}}\Bigl(\vartheta_{n,:}, \vartheta_{\mu_{\hat{f}},:}^*\Bigr) \quad a.s., $$ where $w_{\operatorname*{min}} = \operatorname*{min}_{s \in [M]} w_s$. By Prop. 1, we obtain an upper bound on the regression error by deriving an upper bound on $\mathbb{E}_{\mu_{\hat{f},:}^n}[d_{\mu_{\hat{f},:}}^2(\vartheta_{n,:}, \vartheta_{\mu_{\hat{f}},:}^*)]$. Additionally, Prop. 2 implies that Alg. 1 achieves fairness consistency in Def. 2 if $\mathbb{P}_{\mu_{\hat{f},:}^n}\Big\{d_{\mu_{\hat{f},:}}\big(\vartheta_{n,:}, \vartheta_{\mu_{\hat{f}},:}^*\big) = o(1)\Big\} = 1 - o(1)$.
We will provide upper bounds on the error $d_{\mu_{\hat{f},:}}(\vartheta_{n,:}, \vartheta_{\mu_{\hat{f}},:}^*)$ through the analyses shown in the next section. # 5 Transport Maps Estimation in Wasserstein Barycenter In this section, we investigate the convergence rate of our estimator for transport maps estimation in the Wasserstein barycenter. We conduct our analyses under the general setup of transport maps estimation in the Wasserstein barycenter for real-valued probability measures. We first describe the general setup and demonstrate the convergence rate of our estimator, which is the main result of this section. We then provide detailed analyses to support the convergence rate. Setup Consider the problem of estimating the transport maps that arise in the Wasserstein barycenter problem. Let $\nu_: \in \mathcal{Q}$ be probability measures indexed by $[M]$ on a measurable space $(\Omega, \mathcal{B})$. Recall that the Wasserstein barycenter problem involves finding the minimizer of the following optimization problem: $$ \operatorname*{inf}_\nu \sum_{s \in [M]} w_s W_2^2(\nu_s, \nu). $$ Let $\nu$ be the minimizer of the above optimization problem, i.e., the Wasserstein barycenter. Recall that we denote $\vartheta_{\nu,s}^*$ as the optimal transport map from $\nu_s$ to $\nu$ for each $s \in [M]$. Given $M$ samples, each drawn i.i.d. from $\nu_s$, the analyst's goal is to estimate $\vartheta_{\nu,:}^*$. We denote the estimator of $\vartheta_{\nu,:}^*$ as $\vartheta_{n,:}$. According to Prop.
1 and 2, we assess the error of the estimated transport maps by $$ d_{\nu_:}^2\left(\vartheta_{n,:}, \vartheta_{\nu,:}^*\right) = \sum_{s \in [M]} w_s d_{\nu_s}^2\left(\vartheta_{n,s}, \vartheta_{\nu,s}^*\right) = \sum_{s \in [M]} w_s \int \left(\vartheta_{n,s}(z) - \vartheta_{\nu,s}^*(z)\right)^2 \nu_s(dz). $$ Note that $d_{\nu_:}\left(\vartheta_{n,:}, \vartheta_{\nu,:}^*\right)$ is a random variable with randomness stemming from the samples. The goal of the analyses is thus to provide an upper bound on the expectation of the error or a probabilistic upper bound on the error. Estimator Our estimator $\vartheta_{n,:}$ is constructed by minimizing the empirical multiple correlation over a sieved subset of transport maps. Let $\Theta = \{\vartheta_{\nu,:}^* : \nu_: \in \mathcal{Q}\}$ represent the set of all possible transport maps. Denote by $\nu_{n,:}$ the empirical measures corresponding to $\nu_:$, as determined by the observed samples. Assume that the complexity of $\mathcal{Q}$ is $(\alpha, \beta)$ for some $\alpha > 0$ and $\beta \geq 0$, and let $\{\Theta_j\}_j$ be a sequence of subsets of $\Theta$ as specified in Def. 3. The estimator $\vartheta_{n,:}$ is then defined as the solution to the following optimization problem: $$ \operatorname*{inf}_{\vartheta_: \in \Theta_j} C(u_{\vartheta,:}, \nu_{n,:}), $$ where $j$ is chosen appropriately. Estimation Error Bound and Challenges Our main results in this section are expected and probabilistic upper bounds on $d_{\nu_:}(\vartheta_{n,:}, \vartheta_{\nu,:}^*)$. Theorem 3.
Let $\vartheta_{n,:} = \arg\operatorname*{min}_{\vartheta_: \in \Theta_j} C(u_{\vartheta,:}, \nu_{n,:})$ be the estimated transport maps, with $j$ satisfying $2^j \le \big(\frac{\tilde{n}}{\ln(\tilde{n})}\big)^{1/(\alpha+\beta)} \le 2^{j+1}$. Suppose that $\vartheta_{\nu,s}^* \in \mathcal{M}_L$ for some $L > 1$, and that $\nu_s$ satisfies the Poincaré-type inequality in Eq. (4), for all $s \in [M]$ and all $\nu_: \in \mathcal{Q}$. Also, suppose that the complexity of $\mathcal{Q}$ is $(\alpha, \beta)$ for some $\alpha > 0$ and $\beta \geq 0$. Then, there exists a constant $C > 0$, possibly depending on $L$ and $M$, such that $$ \mathbb{E}_{\nu_:^n}\big[d_{\nu_:}^2(\vartheta_{n,:}, \vartheta_{\nu,:}^*)\big] \leq C\bigg(\frac{\tilde{n}}{\ln(\tilde{n})}\bigg)^{-\alpha/(\alpha+\beta)}. $$ Moreover, for all $t \geq 1$, with probability at least $1 - 2e^{-t}$, $$ d_{\nu_:}^2(\vartheta_{n,:}, \vartheta_{\nu,:}^*) \leq C t \biggl(\frac{\tilde{n}}{\ln(\tilde{n})}\biggr)^{-\alpha/(\alpha+\beta)}. $$ Thm. 3 shows that a larger $\alpha$ and a smaller $\beta$ result in a faster convergence rate. To establish the upper bound in Thm. 3, we follow the convergence rate analysis for the sieved $M$-estimator (see, e.g., van der Vaart et al. 2023), since our estimator in Eq. (5) can be regarded as a sieved $M$-estimator. The main analytical challenge is that the process $\vartheta_: \mapsto C(u_{\vartheta,:}, \nu_{n,:})$ is not a standard empirical process, and thus standard concentration inequalities do not directly apply.
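The sieve level in Thm. 3 balances approximation error $2^{-\alpha j}$ against metric entropy $2^{\beta j}$ by solving $2^j \approx (\tilde{n}/\ln \tilde{n})^{1/(\alpha+\beta)}$. A minimal sketch of this template, with a least-squares criterion standing in for the multiple correlation and a hypothetical polynomial sieve and $(\alpha, \beta)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1D data; the target and noise level are illustrative only.
n = 2000
x = rng.uniform(-1, 1, n)
y = np.sin(np.pi * x) + 0.3 * rng.normal(size=n)

# Sieve level from the balance 2^j ~ (n / ln n)^{1/(alpha+beta)}.
alpha, beta = 2.0, 1.0
j = int(np.log2((n / np.log(n)) ** (1 / (alpha + beta))))
dim = 2 ** j                      # dimension of the sieve Theta_j

# Empirical-risk minimizer over the sieve: polynomial least squares.
coef = np.polyfit(x, y, deg=dim - 1)

grid = np.linspace(-1, 1, 400)
err = np.mean((np.polyval(coef, grid) - np.sin(np.pi * grid)) ** 2)
print(j, dim, err)                # small error at the balanced sieve level
```

Growing the sieve with $n$ (rather than fixing it) is what lets the rate adapt to the complexity $(\alpha, \beta)$ of the underlying class.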
We first provide an error bound for general sieved $M$-estimators in Section 5.1, and subsequently present the analysis of $\vartheta_{n,:}$, including a concentration inequality for the process $\vartheta_: \mapsto C(u_{\vartheta,:}, \nu_{n,:})$, in Section 5.2. # 5.1 Error Analysis for General Sieved $M$-Estimator In this subsection, we present an error bound for general sieved $M$-estimators. Let $E$ and $E_n$ denote the expectation operator and the empirical expectation operator over functions in $\mathcal{U}$, respectively, where $n$ is the sample size. Let $\Theta$ be a parameter space and $\Theta' \subseteq \Theta$ a sieved subset. For a family $u_\theta \in \mathcal{U}$ parameterized by $\theta \in \Theta$, the sieved $M$-estimator is defined as $$ \theta_n = \operatorname*{argmin}_{\theta \in \Theta'} E_n u_\theta. $$ Let $\theta_0 \in \Theta$ denote the ideal parameter such that $E u_{\theta_0} = \operatorname*{inf}_{\theta \in \Theta} E u_\theta$, and let $\theta_0' \in \Theta'$ be the ideal parameter within the sieved set, i.e., $E u_{\theta_0'} = \operatorname*{inf}_{\theta' \in \Theta'} E u_{\theta'}$. The estimation error of $\theta_n$ is measured by $d(\theta_n, \theta_0)$, where $d$ is a distance function on $\Theta$. Within this framework, we derive an error bound for sieved $M$-estimators under the following assumptions on the processes $E$ and $E_n$. Assumption 3.
There exist constants $K_{\mathrm{up}}, K_{\mathrm{low}} > 0$ such that for any $\theta \in \Theta$, $$ K_{\mathrm{low}} E(u_\theta - u_{\theta_0}) \leq d^2(\theta, \theta_0) \leq K_{\mathrm{up}} E(u_\theta - u_{\theta_0}). $$ Assumption 4. Let $\gamma \in (1, 2)$ be a constant, and let $a_{0,n}, a_{1,n}, a_{2,n}, b_n > 0$ be sequences of positive numbers associated with $n$. Suppose $H : \mathbb{R} \to \mathbb{R}$ is a non-decreasing function such that for all $t > 0$, $$ \mathbb{P}\Bigg\{\operatorname*{sup}_{\theta \in \Theta' : d(\theta, \theta_0') \leq \sigma} (E_n - E)(u_\theta - u_{\theta_0'}) > a_{0,n} H(\sigma) + a_{1,n} + a_{2,n} t\Bigg\} \leq \exp\bigg(-\frac{t^2}{\sigma^2 + b_n t}\bigg), $$ and $\sigma \mapsto H(\sigma)/\sigma^\gamma$ is non-increasing for $\sigma > 0$. The following theorem provides the error bound for sieved $M$-estimators. Theorem 4. Suppose that Asm. 3 and Asm. 4 hold. Define $$ \tau_n = \sqrt{4 K_{\mathrm{up}}}\Big(\sqrt{2 E(u_{\theta_0'} - u_{\theta_0})} + \sqrt{a_{1,n}} + \sqrt{2 h_n}\Big) + \sqrt{4 a_{2,n}^2 + a_{2,n} b_n}, $$ where $h_n$ is a sequence such that $a_{0,n} H\big(\sqrt{8 K_{\mathrm{up}} h_n}\big) \le h_n$ for all $n$. Then, for all $t \geq 1$, $$ \begin{array}{r}{\mathbb{P}\big\{d^2(\theta_n, \theta_0) \ge t\big(2\tau_n^2 + 2 d^2(\theta_0', \theta_0)\big)\big\} \le 2 e^{-t}.
} \end{array} $$ Furthermore, $$ \begin{array}{r}{\mathbb{E}\big[d^2(\theta_n, \theta_0)\big] \leq 4\ln(2)\tau_n^2 + 2 d^2(\theta_0', \theta_0).} \end{array} $$ With Thm. 4 in place, one can obtain an error bound for a sieved $M$-estimator by verifying Asm. 3 and Asm. 4. # 5.2 Error Analysis for $\vartheta_{n,:}$ In this subsection, we present the error analysis for $\vartheta_{n,:}$. We begin by formally defining the processes associated with the population and empirical multiple correlations. The expectation operator $E_\nu$ is defined as $E_\nu u = \mathbb{E}_\nu[u(Z)]$ for a function $u : \mathcal{Z} \to \mathbb{R}$ and $Z \sim \nu$. For a positive integer $n$, the empirical expectation operator $E_{n,\nu}$ is given by $E_{n,\nu} u = \frac{1}{n}\sum_{i=1}^n u(Z_i)$, where $Z_1, \dots, Z_n$ are drawn i.i.d. from $\nu$. For measurable functions $u_s : \mathcal{Z} \to \mathbb{R}$ indexed by $[M]$, we define the following operators: $$ E_{\nu_:} u_: = \sum_{s \in [M]} w_s E_{\nu_s} u_s, \quad E_{n,\nu_:} u_: = \sum_{s \in [M]} w_s E_{n_s,\nu_s} u_s. $$ The population and empirical multiple correlations are then expressed as $$ C(u_{\vartheta,:}, \nu_:) = E_{\nu_:} u_{\vartheta,:}, \quad C(u_{\vartheta,:}, \nu_{n,:}) = E_{n,\nu_:} u_{\vartheta,:}. $$ In the context of Section 5.1, we identify $E = E_{\nu_:}$, $E_n = E_{n,\nu_:}$, $u_\theta = u_{\vartheta,:}$, and $d = d_{\nu_:}$.
Furthermore, we set $\theta_0 = \vartheta_{\nu,:}^*$, $\theta_n = \vartheta_{n,:}$, and $\theta_0' = \vartheta_{j,:}^*$, where $\vartheta_{j,:}^*$ denotes the minimizer of $C(u_{\vartheta,:}, \nu_:)$ over $\vartheta_: \in \Theta_j$. To apply Thm. 4, it is necessary to verify Asm. 3 and Asm. 4. Concentration of the process We now establish the concentration inequality for the process $E_{n,\nu_:} - E_{\nu_:}$. We first present a Bernstein-type concentration inequality for fixed potentials $u_{\vartheta,:}$. For $s \in [M]$, let $\nu_s$ be a probability measure on a measurable space $(\mathcal{Z}, \mathcal{B})$. Let $Z^{(s)}$ denote a random variable distributed according to $\nu_s$. Let $\mathcal{U}$ be a class of functions $u_s : \mathcal{Z} \to \mathbb{R}$ indexed by $s \in [M]$. Proposition 3. Given $u_: \in \mathcal{U}$ such that $\sum_{s \in [M]} w_s \mathbb{V}_{\nu_s}[u_s(Z^{(s)})] \leq \sigma^2$ and $\operatorname*{max}_{s \in [M]} |u_s(Z^{(s)}) - E_{\nu_s} u_s| \leq b$ almost surely, we have $$ \mathbb{P}\{(E_{n,\nu_:} - E_{\nu_:}) u_: > t\} \leq \exp\left(-\frac{1}{2}\frac{\tilde{n} t^2}{\sigma^2 + t b}\right). $$ Building on Prop. 3, we derive the maximal inequality over the set of functions $\mathcal{U}$.
For $u_: \in \mathcal{U}$, define $$ \begin{array}{rl} & \sigma_{\nu_:}^2(u_:) = \displaystyle\sum_{s \in [M]} w_s \mathbb{V}_{\nu_s}\Big[u_s\big(Z^{(s)}\big)\Big], \\ & b_{\nu_:}(u_:) = \displaystyle\operatorname*{inf}_b\bigg\{b : \operatorname*{max}_{s \in [M]}\Big|u_s\big(Z^{(s)}\big) - E_{\nu_s} u_s\Big| \leq b \ \mathrm{a.s.}\bigg\}. \end{array} $$ It is worth noting that both $\sigma_{\nu_:}$ and $b_{\nu_:}$ satisfy the triangle inequality. As complexity measures for $\mathcal{U}$, we introduce Dudley integral-type metrics, defined for $\delta > 0$ as $$ \begin{array}{l} \displaystyle H_{\sigma,\nu_:}(\delta; \mathcal{U}) = \int_0^\delta \sqrt{\ln(N(\epsilon, \mathcal{U}, \sigma_{\nu_:}))}\, d\epsilon, \\ \displaystyle H_{b,\nu_:}(\delta; \mathcal{U}) = \int_0^\delta \ln(N(\epsilon, \mathcal{U}, b_{\nu_:}))\, d\epsilon. \end{array} $$ This leads to the following maximal inequality: Proposition 4. Given a fixed $u_:^* \in \mathcal{U}$, $\sigma > 0$, and $b > 0$, define $$ \mathcal{U}(\sigma, b; u_:^*) = \{u_: \in \mathcal{U} : \sigma_{\nu_:}(u_: - u_:^*) \leq \sigma,\, b_{\nu_:}(u_: - u_:^*) \leq b\}.
$$ Then, there exists a universal constant $C > 0$ such that for all $t > 0$, $$ \begin{array}{rlr} \mathbb{P}_{\nu_:^n}\Bigg\{\operatorname*{sup}_{u_: \in \mathcal{U}(\sigma, b; u_:^*)} \sqrt{\tilde{n}}(E_{n,\nu_:} - E_{\nu_:})(u_: - u_:^*) > C\bigg(H_{\sigma,\nu_:}(\sigma; \mathcal{U}) + \frac{1}{\sqrt{\tilde{n}}} H_{b,\nu_:}(b; \mathcal{U}) + t\bigg)\Bigg\} \\ \leq \exp\bigg(-\frac{\sqrt{\tilde{n}} t^2}{\sqrt{\tilde{n}} \sigma^2 + t b}\bigg). \end{array} $$ Analyses for Dudley integral-type metrics To verify Asm. 4 using Prop. 4, it is necessary to establish upper bounds on $H_{\sigma,\nu_:}(\delta; \mathcal{U})$ and $H_{b,\nu_:}(\delta; \mathcal{U})$. We therefore provide upper bounds on $H_{\sigma,\nu_:}(\delta; \mathcal{U})$ and $H_{b,\nu_:}(\delta; \mathcal{U})$ for $\mathcal{U} = \{u_{\vartheta_{\nu,:}^*} : \nu_: \in \mathcal{Q}\}$, where $\mathcal{Q}$ is the set of probability measures whose complexity is $(\alpha, \beta)$ for some $\alpha > 0$ and $\beta \geq 0$ as specified in Def. 3. Lemma 1. Suppose that $\nu_s$ satisfies the Poincaré-type inequality in Eq. (4) for all $s \in [M]$ and all $\nu_: \in \mathcal{Q}$, and that the complexity of $\mathcal{Q}$ is $(\alpha, \beta)$ for some $\alpha > 0$ and $\beta \geq 0$ as in Def. 3.
Then, for all $j \geq 0$ and all $\sigma \in (0, \bar{\epsilon}]$ and $\sigma' > 0$, $$ \begin{array}{rl} & H_{\sigma,\nu_:}\left(C_P \sigma; \left\{u_{\vartheta,:} - u_{\vartheta_{\nu,:}^*} : \vartheta_: \in \Theta_j,\, d_{\nu_:}(\vartheta_:, \vartheta_{\nu,:}^*) \leq \sigma'\right\}\right) \\ & \qquad \leq C_P \sqrt{C'}\, 2^{\beta j/2}(1 \wedge \sigma \wedge \sigma')\sqrt{1 + \ln(1/(1 \wedge \sigma \wedge \sigma'))}, \end{array} $$ where $C_P$ is the constant in Eq. (4), and $\bar{\epsilon}$ and $C'$ are the constants in Def. 3. Lemma 2. Suppose that $\vartheta_{\nu,s}^* \in \mathcal{M}_L$ for some $L > 1$ and all $\nu_: \in \mathcal{Q}$. Then, there exists a constant $C_b > 0$, possibly depending on $M$, such that for all $b > 0$, $$ H_{b,\nu_:}\big(b; \left\{u_: - u_{\vartheta_{\nu,:}^*} : u_: \in \mathcal{U},\, b_{\nu_:}(u_: - u_{\vartheta_{\nu,:}^*}) \leq b\right\}\big) \leq C_b \sqrt{b}. $$ Relationship between potentials and transport maps To apply Thm. 4, we establish the relationship between the potentials $u_{\vartheta,:}$ and the transport maps $\vartheta_:$. Specifically, we prove the following: Proposition 5. Let $\nu_:$ be probability measures indexed by $[M]$ such that $\vartheta_{\nu,:}^* \in \mathcal{M}_L^M$ for some $L > 1$.
Then, for all $\vartheta_: \in \mathcal{M}_L^M$ such that $\sum_{s \in [M]} w_s \vartheta_s^{-1}(z) = z$ for all $z \in \Omega$, $$ \frac{1}{2L} d_{\nu_:}^2\big(\vartheta_:, \vartheta_{\nu,:}^*\big) \leq E_{\nu_:}\Big(u_{\vartheta,:} - u_{\vartheta_{\nu,:}^*}\Big) \leq \frac{L}{2} d_{\nu_:}^2\big(\vartheta_:, \vartheta_{\nu,:}^*\big). $$ Proof sketch of Thm. 3 By Prop. 5, the processes $E_{\nu_:}$ and $E_{n,\nu_:}$ satisfy Asm. 3 with $K_{\mathrm{low}} = \frac{1}{2L}$ and $K_{\mathrm{up}} = \frac{L}{2}$. By the Poincaré-type inequality assumption, if $\vartheta_: \in \Theta_j$ satisfies $d_{\nu_:}(\vartheta_:, \vartheta_{\nu,:}^*) \leq \sigma$, then $\sigma_{\nu_:}(u_{\vartheta,:} - u_{\vartheta_{\nu,:}^*}) \leq C_P \sigma$. Since $\Omega$ is bounded, for $\vartheta_: \in \Theta_j \subseteq \mathcal{M}_L^M$, there exists a constant $b > 0$ such that $b_{\nu_:}(u_{\vartheta,:} - u_{\vartheta_{\nu,:}^*}) \leq b$, which follows from the smoothness of $u_{\vartheta,s}$ and $u_{\vartheta_{\nu,s}^*}$. Therefore, by applying Prop. 4 together with Lem. 1 and Lem. 2, the processes $E_{\nu_:}$ and $E_{n,\nu_:}$ satisfy Asm.
4 with $$ a _ { 0 , n } = \frac { C C _ { P } \sqrt { C ^ { \prime } } 2 ^ { \beta / 2 } } { \sqrt { \tilde { n } } } , \quad a _ { 1 , n } = \frac { C C _ { b } \sqrt { b } } { \tilde { n } } , \quad a _ { 2 , n } = \frac { C } { \sqrt { \tilde { n } } } , \quad b _ { n } = \frac { b } { \sqrt { \tilde { n } } } , \quad H ( \sigma ) = ( 1 \wedge \sigma ) \sqrt { 1 + \ln ( 1 / ( 1 \wedge \sigma ) ) } , $$ where $C$ is the constant in Prop. 4. The difference between $\vartheta _ { j , : } ^ { \ast }$ and $\vartheta _ { \nu , : } ^ { \ast }$ can be bounded by $O ( 2 ^ { - \alpha j } )$ due to Def. 3. Assigning $j$ as specified in the statement and setting $\begin{array} { r } { h _ { n } = O ( ( \frac { \tilde { n } } { \ln ( \tilde { n } ) } ) ^ { - \alpha / ( \alpha + \beta ) } ) } \end{array}$ yields the desired result. # 6 Lower Bound To establish our lower bound, we develop a technique based on reducing the fair regression estimation problem to a conventional regression estimation problem. Specifically, let ${ \bar { f } } _ { n , : } ^ { * }$ be the optimal fair regression algorithm satisfying $$ \operatorname* { s u p } _ { \mu _ { : } \in \mathcal { P } } \mathbb { E } _ { \mu _ { : } } \big [ d _ { \mu _ { : } } ^ { 2 } \big ( \bar { f } _ { n , : } ^ { * } , \bar { f } _ { \mu , : } ^ { * } \big ) \big ] = \bar { \mathcal { E } } _ { n } ( \mathcal { P } ) . $$ We demonstrate that we can construct a conventional regression algorithm using ${ \bar { f } } _ { n , : } ^ { * }$ . The error of this conventional regression algorithm is bounded below by $\mathcal { E } _ { n } ( \mathcal { P } _ { s } )$ , which consequently provides a lower bound on $\bar { \mathcal { E } } _ { n } ( \mathcal { P } )$ in terms of $\mathcal { E } _ { n } ( \mathcal { P } _ { s } )$ . Consider the scenario where distributions $\mu _ { 1 } , \ldots 
, \mu _ { M }$ are identical, and $f _ { n , : } ^ { * }$ is constructed using samples comprising $n _ { s }$ i.i.d. points from $\mu _ { s }$ . Under these conditions, we can construct a regressor for $f _ { \mu , 1 } ^ { * }$ as $\begin{array} { r } { f _ { n } = \sum _ { s \in [ M ] } w _ { s } f _ { n , s } ^ { * } } \end{array}$ . The error of this regressor provides a lower bound on Eq. (7) as follows: Theorem 5. Under the conditions stated in Thm. 2, we have $$ \mathcal { E } _ { n } ( \mathcal { P } _ { 1 } ) \leq \operatorname* { s u p } _ { \mu _ { 1 } \in \mathcal { P } _ { 1 } : \forall s \in [ M ] , \mu _ { s } = \mu _ { 1 } } \mathbb { E } _ { \mu _ { : } } \left[ d _ { \mu _ { 1 } } ^ { 2 } \left( f _ { n } , f _ { \mu , 1 } ^ { * } \right) \right] \leq \operatorname* { s u p } _ { \mu _ { : } \in \mathcal { P } } \mathbb { E } _ { \mu _ { : } } \left[ d _ { \mu _ { : } } ^ { 2 } \left( \bar { f } _ { n , : } ^ { * } , \bar { f } _ { \mu , : } ^ { * } \right) \right] . $$ This result directly establishes a lower bound on $\bar { \mathcal { E } } _ { n } ( \mathcal { P } )$ in Thm. 2. # 7 Related Work: Optimal Transport Map Estimation The problem of optimal transport map estimation in the Wasserstein distance has been extensively studied (Deb et al. 2021; Hütter et al. 2021; Barrio et al. 2023; Pooladian et al. 2023; Divol et al. 2024; Manole et al. 2024; Pooladian et al. 2024; Rigollet et al. 2025). The objective of this estimation problem is to estimate the transport map in $W _ { 2 } ( \mu , \mu ^ { \prime } )$ between two distributions $\mu$ and $\mu ^ { \prime }$ , given samples from both distributions. Several approaches have been proposed: Deb et al. (2021), Manole et al. (2024), and Rigollet et al. 
(2025) utilize plug-in estimators, where they first estimate the joint distribution of $( Z , \vartheta ( Z ) )$ for $Z \sim \mu$ and subsequently construct a transport map by minimizing the expected error between $Z$ and $\vartheta ( Z )$ . Alternative estimators proposed by Hütter et al. (2021), Barrio et al. (2023), Pooladian et al. (2023), Divol et al. (2024), and Pooladian et al. (2024) employ potential minimization techniques. While these studies provide convergence rate analyses for their estimators, their methods cannot be directly applied to optimal transport map estimation in the Wasserstein barycenter problem, as samples from the barycenter distribution are not observable. Several researchers have developed methods specifically for optimal transport map estimation in the Wasserstein barycenter problem, though without accompanying convergence rate analyses. Korotin et al. (2020) proposed an estimator based on potential minimization, as detailed in Section 2.3. Fan et al. (2021) introduced an estimator based on minimax optimization. Korotin et al. (2022) developed an iterative algorithm based on the fixed-point theorem established by Álvarez-Esteban et al. (2016). Although empirical evaluations have demonstrated that these methods achieve low estimation errors, theoretical analyses of their convergence rates remain an open problem.
We address the regression problem under the constraint of demographic parity, a commonly used fairness definition. Recent studies have revealed fair minimax optimal regression algorithms, the most accurate algorithms that adhere to the fairness constraint. However, these analyses are tightly coupled with specific data generation models. In this paper, we provide meta-theorems that can be applied to various situations to validate the fair minimax optimality of the corresponding regression algorithms. Furthermore, we demonstrate that fair minimax optimal regression can be achieved through post-processing methods, allowing researchers and practitioners to focus on improving conventional regression techniques, which can then be efficiently adapted for fair regression.
# 1 Introduction
Large language models (LLMs) have become transformative tools in numerous domains, fundamentally changing the way we approach complex tasks in almost all areas of life [37]. Among their countless applications, the integration of LLMs in customer support has been particularly impactful [20], allowing businesses to provide 24/7 assistance and enhance customer satisfaction by improving the speed, consistency, and accuracy of customer interactions [6]. Previous research on LLM-based customer support chatbots has concentrated on improving service quality and reducing human agents' workload with self-service tools and automated responses [20]. However, the opportunity to leverage customer interactions for cross-selling and product recommendations [6] has largely been overlooked. As a result, businesses have missed out on value creation opportunities that could benefit both them and their clients. Research on conversational recommender systems (CRSs) has led to advancements in the use of dialogue to understand user preferences and suggest appropriate items [12]. However, these systems face challenges in customer support contexts, where interactions are primarily problem-focused rather than sales-oriented. In such cases, user preferences are often unavailable, and explicitly requesting them may be inappropriate or irrelevant to the situation, potentially harming the user experience. To address this gap, we developed ImpReSS: an implicit recommender system for support conversations. ImpReSS employs a novel approach that augments support-oriented conversations with product (or service) recommendations by implicitly identifying users' needs, rather than their preferences, during the problem-solving stage. In addition, unlike previous CRS research, ImpReSS does not assume any user purchasing intent. 
As shown in Figure 1, ImpReSS consists of three main steps: (1) Query Generation, in which an LLM generates a brief summary (with a diagnosis) of the conversation, as well as a set of preliminary solution product categories (SPCs) derived from the support conversation's summary and diagnosis; (2) Candidate Retrieval, in which the query is used to search designated catalogs for the most relevant SPCs; and (3) Candidate Ranking, in which the retrieved SPCs are prioritized. To integrate ImpReSS into real-world business workflows, organizations can map general SPCs to their specific company products (brands) and implement various presentation strategies for the recommendations. In the "In-Chat" presentation strategy, the chatbot suggests the top-ranked candidate as a natural continuation of the support conversation, following the resolution of the user's issue. Alternatively, multiple top-ranked candidates can be displayed below the conversation interface, similar to e-commerce platforms, under a heading such as "Users who encountered this problem found these products useful". This "Related Items" strategy is applicable to both chatbots and online forums operated by commercial entities. We evaluate ImpReSS using three conversational support datasets from various domains. Our empirical results highlight ImpReSS's promising SPC recommendation capabilities and ability to make accurate recommendations without relying on user preference data. ImpReSS achieved MRR@1 values (and recall@3) of 0.72 (0.89) for general problem solving, 0.82 (0.83) for information security support, and 0.85 (0.67) for cybersecurity troubleshooting. The contributions of this research can be summarized as follows: (1) We introduce ImpReSS, a novel LLM-based method for implicit SPC recommendations in conversational support settings, which assumes no user purchasing intent and requires no user preferences or background information as input. 
(2) We empirically evaluate ImpReSS using multiple datasets and show promising performance, achieved early in a conversation. (3) To promote future research, our data and code will be shared upon request.
# 2 Proposed Method
# 2.1 Approach
Unlike traditional recommendation scenarios [10], which center on product seeking, customer support conversations focus on problem solving. ImpReSS addresses this distinction with an implicit recommendation approach that analyzes conversation content instead of relying on user preferences. This enables ImpReSS to identify and recommend the most relevant SPCs (or specific products).
# 2.2 Assumptions
Figure 1: The proposed method's pipeline. Figure 2: Query generation example from $DS^{CT}$. Figure 3: Candidate retrieval example from $DS^{CT}$. ImpReSS leverages an existing online support platform (e.g., chatbot or forum) where users describe problems and request help. It can be implemented as an add-on to these platforms, analyzing each support conversation and returning a prioritized list of SPC recommendations.
# 2.3 Key Steps
ImpReSS's input is a support conversation, i.e., a set of utterances between a user and a knowledgeable entity, either human or virtual. As illustrated in Figure 1, ImpReSS follows a three-step process in order to output a prioritized list of recommendations: 2.3.1 Query Generation (Step 1). In this step (illustrated in the example in Figure 2), given a conversation with multiple interactions, an LLM first generates a conversation summary and diagnosis object, which concisely outlines the issue raised by the user, a root cause diagnosis, and plausible measures to take. Then, based on the diagnosis, the LLM generates a query object, which is a preliminary list of relevant SPCs that can help resolve the issue or prevent its reoccurrence. Each SPC is also briefly explained, facilitating similarity search in the candidate retrieval step that follows. 2.3.2 Candidate Retrieval (Step 2). 
In this step (illustrated in the example in Figure 3), ImpReSS searches multiple designated databases (DBs) using the query generated in the previous step. These DBs, equipped with an L2 index for efficient similarity search using an embedding model, differ from one another in terms of the aspect of the SPCs they emphasize or their source, thus enabling more diverse candidate retrieval than a single-index approach. The final candidate set is formed by uniting the results from each index. 2.3.3 Candidate Ranking (Step 3). Given a set of retrieved candidates, the LLM ranks the SPCs by their ability to help resolve the diagnosis generated in the query generation step (see Figure 4). To mitigate a possible position bias in ranking, ImpReSS employs a bootstrap approach [8], concurrently repeating the ranking process three times with randomly shuffled orders of candidates. Table 1: Datasets used to empirically evaluate ImpReSS.
# 3 Evaluation Method
# 3.1 Creation of Datasets
To evaluate ImpReSS, we constructed three datasets, which are summarized in Table 1: 3.1.1 Cybersecurity Troubleshooting Dataset $(DS^{CT})$. PC users may occasionally encounter various technical issues, some of which stem from cybersecurity threats. For example, PC slowness may be caused by software conflicts, outdated drivers, aging hardware, or cybersecurity attacks such as cryptojacking. To address this, we developed a cybersecurity-specialized chatbot in our lab and tasked students with troubleshooting a set of predefined complaints using only this chatbot. $DS^{CT}$ consists of these step-by-step cybersecurity troubleshooting conversations. 3.1.2 Information Security Dataset $(DS^{IS})$. 
Stack Exchange [22] is a network of nearly 200 question-and-answer (Q&A) communities where millions collaborate monthly to ask questions, share knowledge, and solve problems across various technical and professional domains. Its most prominent community is the Stack Overflow community [23], which focuses on programming. The Information Security (IS) community [9] on Stack Exchange features discussions on cryptography, encryption, network security, and related topics. From this community, we extracted multiple Q&A pairs in which the answer was accepted by the question's author. Out of the 239 Q&A pairs that we managed to annotate, we identified 70 pairs containing at least one product recommendation – and these formed our final dataset. In order to (1) prevent data leakage during testing and (2) ensure that the Candidate Retrieval step (Sec. 2.3.2) relies solely on ImpReSS's conversation summary and diagnosis, we made sure to remove any mentions of specific product recommendations or specific solutions from the answers. 3.1.3 General Problem-Solving Dataset $(DS^{GE})$. Following the LLM-based student-teacher interaction simulation framework proposed by Abbasiantaeb et al. [1], we simulated support conversations with a chatbot specialized in a broad range of consultation topics, e.g., kitchen, pet care, and camping. To simulate these conversations, we (1) generated synthetic users based on persona distributions, characterized by attributes such as age, gender, and occupation; and (2) instructed them to interact with an AI assistant (chatbot). Figure 4: Candidate ranking example from $DS^{CT}$. Figure 5: Catalog DB creation process. Each conversation begins with the user describing a general problem encountered, followed by up to four Q&A exchanges with the chatbot, enabling it to gather sufficient information. 
The ground-truth SPC label is derived from the conversation generation prompt, which encapsulates both the essence of the problem and its root cause, to be communicated to the chatbot by the (synthetic) user. Although the user is aware of the cause, they are instructed not to disclose it explicitly. This setup helps maintain coherence and prevents divergence in the conversation. To ensure conversation quality, we asked three experts to manually rate a random sample containing 10% of $DS^{GE}$. The rating was performed using the USR metrics for dialog generation [13] and achieved very high scores across all assessment dimensions. The generated conversations received perfect scores for Understandable (range [0, 1], mean±standard deviation 1.00±0.00) and Maintains Context ([1, 3], 3.00±0.00), and high ratings for Natural ([1, 3], 2.33±0.50) and Overall Quality ([1, 5], 4.01±0.66). Gwet's AC2 coefficient ranged [0.84, 1.00], indicating near-perfect inter-rater agreement on these conversation quality metrics.
# 3.2 Creation of Catalog DBs
As illustrated in Figure 5, we constructed five catalog DBs using both web search-based and LLM-based text generation methods. These approaches are complementary: search yields web pages deemed relevant by search engines, which likely contain useful information but may (1) include irrelevant content, or (2) lack comprehensive coverage by presenting only a few specific results. In contrast, LLM-based text generation can synthesize information across multiple sources, offering broader summaries, which may come at the cost of factual inaccuracies. In Sec. 4.4, we present an ablation study to empirically assess the contribution of each catalog DB. 3.2.1 Web Search-Based Catalog DBs Creation. 
Using Tavily [25], a web search engine tailored for LLM retrieval-augmented generation (RAG) use cases, we constructed two catalog DBs: $DB_{Features}^{WebSearch}$ and $DB_{UseCases}^{WebSearch}$. For each SPC, Tavily was queried separately for the SPC's features and use cases, and each query returned five results. The retrieved results for each SPC were then concatenated into two separate documents: one document containing all feature-related results, and the other containing all use case-related results. The documents for each of the SPCs formed the respective DB. 3.2.2 Generation-Based Catalog DBs Creation. Using GPT-4o, we generated three additional catalog DBs. For each SPC, GPT-4o produced a brief description, a list of key features, and three example use cases. These outputs were stored in $DB_{Descriptions}^{Generation}$, $DB_{Features}^{Generation}$, and $DB_{UseCases}^{Generation}$, respectively.
# 3.3 Experimental Setup
We used GPT-4o mini to create $DS^{GE}$ – both to generate users and the support conversations, where each user query and assistant response were generated independently. We chose to use a high temperature (1.0) for these tasks to elicit diverse responses that better represent conversational scenarios. In contrast, for catalog DB creation and candidate search we used GPT-4o (and text-embedding-3-small [14]) with a low temperature (0.3), to keep the outputs grounded and reduce variability in the content generated. We implemented ImpReSS's pipeline on a laptop with an Intel i7-1365U 13th generation processor and 32 GB of RAM, using LangChain and LangSmith. Access to GPT and Llama models was provided via Azure OpenAI Service and Amazon Bedrock, respectively. 
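The candidate retrieval of Step 2 (Sec. 2.3.2) can be illustrated with a minimal sketch: exact L2 nearest-neighbor search over each catalog DB, with the per-index results united. This is not the paper's implementation (which uses an embedding model with an L2 index via LangChain); the embedding vectors, DB names, and SPC labels below are hypothetical stand-ins.

```python
from math import dist

def l2_search(vectors, query, k):
    """Indices of the k nearest vectors by L2 (Euclidean) distance."""
    order = sorted(range(len(vectors)), key=lambda i: dist(vectors[i], query))
    return order[:k]

def retrieve_candidates(catalog_dbs, query_vec, k=3):
    """Query every catalog DB and unite the per-index results (Step 2)."""
    candidates = set()
    for vectors, spc_labels in catalog_dbs.values():
        for i in l2_search(vectors, query_vec, k):
            candidates.add(spc_labels[i])
    return candidates

# Toy example: two catalog DBs holding 2-D "embeddings" of SPC documents.
# DB names and SPC labels are illustrative, not taken from the paper.
dbs = {
    "features_db": ([(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)],
                    ["Antivirus", "VPN", "Password manager"]),
    "use_cases_db": ([(0.1, 0.0), (4.0, 4.0)],
                     ["Firewall", "Password manager"]),
}
print(retrieve_candidates(dbs, (0.0, 0.0), k=1))  # union of each DB's top-1
```

Because the final set is a union across indices, a query near different regions of different DBs yields a more diverse candidate pool than any single index would.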
# 3.4 Performance Metrics
We evaluated ImpReSS's performance using two metrics commonly used in recommender systems research [36, 38]: MRR@k, which measures how high a relevant SPC ranks among the first k suggestions; and Recall@k (abbreviated as R@k), which measures how many of the total relevant SPCs are successfully retrieved. To align with both presentation strategies, we focused mainly on MRR@1 and R@3. We chose k=1 as most relevant for the "In-Chat" strategy for two reasons: (1) it may not be appropriate for a chatbot to recommend more than one SPC in a support conversation, since recommendations are displayed directly within the conversation interface; and (2) most conversations contain one relevant SPC. Thus, MRR@1 effectively reflects how well ImpReSS retrieves and ranks this key recommendation. For the "Related Items" presentation strategy, which is applicable for multiple SPCs, and for a more comprehensive assessment, we also compute the MRR and recall at larger k values. Table 2: ImpReSS's performance across datasets. Table 3: Performance across LLMs and embedding models. Note: Bold values indicate the best configuration for each metric.
# 4 Results
# 4.1 Performance Across Datasets
As can be seen in Table 2, for each evaluated dataset both the MRR@k and R@k increase with k, although with a diminishing effect. The improvement in performance makes sense, since the likelihood of missing relevant SPCs decreases as more SPCs are retrieved. The diminishing effect is a good indication that ImpReSS has already identified and ranked the most important recommendations, so as k increases, we see smaller improvements, if at all.
# 4.2 Sensitivity to LLM and Embedding Models
The results presented in Sec. 4.1 were obtained using a proprietary, state-of-the-art LLM and embedding model, respectively OpenAI's GPT-4o and text-embedding-3-small. 
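The two metrics of Sec. 3.4 follow their standard information-retrieval definitions; a minimal sketch (the SPC names are illustrative, not from the paper's data):

```python
def mrr_at_k(ranked, relevant, k):
    """Reciprocal rank of the first relevant SPC within the top k (0 if none)."""
    for rank, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant SPCs that appear within the top k."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

ranked = ["System repair tools", "Antivirus", "Driver update software"]
relevant = {"Antivirus", "Driver update software"}
print(mrr_at_k(ranked, relevant, 1))     # 0.0 -- top suggestion is not relevant
print(mrr_at_k(ranked, relevant, 3))     # 0.5 -- first hit at rank 2
print(recall_at_k(ranked, relevant, 3))  # 1.0 -- both relevant SPCs retrieved
```

With a single relevant SPC per conversation (the common case noted above), MRR@1 reduces to a 0/1 indicator of whether the top-ranked candidate is the right one.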
To evaluate ImpReSS's sensitivity to the underlying LLM and embedding model, we repeated the previous experiment with additional configurations. That is, for the steps presented in Sec. 2 we used two additional LLMs, GPT-4o mini (less costly) and Llama-3.3-70B-Instruct (open source), along with Multilingual-E5-Large-Instruct [28] as an additional embedding model (open source, 560M parameters). As can be seen in Table 3, GPT-4o outperformed GPT-4o mini and Llama-3.3-70B-Instruct in almost all cases (frequently by a wide margin), as did text-embedding-3-small compared to Multilingual-E5-Large-Instruct (though with a smaller margin). Moreover, the combination of GPT-4o with text-embedding-3-small yielded the highest MRR@1 for 3 out of 3 datasets and the highest R@3 for 2 out of 3 datasets. The superiority of GPT-4o in this experiment is consistent with prior research [17], including recommender systems research [26].
# 4.3 Sensitivity to Conversation Length
The experimental results discussed thus far were achieved using all available utterances in each conversation. To assess ImpReSS's sensitivity to conversation length, Figure 6 shows the MRR@1 and R@3 scores as functions of the number of utterances in the conversation. Since $DS^{IS}$ contains only one user message followed by an assistant response, without further utterances, it is not included in this analysis, which only pertains to $DS^{CT}$ and $DS^{GE}$. The experimental results show that on $DS^{CT}$, both the MRR@1 and R@3 increase with the number of utterances. This makes sense, as cybersecurity troubleshooting often requires iteratively ruling out possible root causes. In other words, the fewer irrelevant root causes considered by the chatbot, the fewer irrelevant SPCs recommended. 
In comparison, the user complaints and questions in $DS^{GE}$ were simpler (e.g., planning holiday meals), with most of the information provided in the first question, so high MRR@1 and R@3 values were quickly achieved. Figure 7: Ablation study results for various combinations of catalog DBs.
# 4.4 Ablation Study
Our literature review did not reveal any prior research (with or without an experimental dataset made public) on conversational recommendations based on implicit user needs; in fact, we found no research whatsoever on conversational recommendations that are not based on user preferences. Hence, since we were unable to directly compare ImpReSS to a public benchmark, and in order to better explore ImpReSS's potential while assessing the relative importance of its components, we performed an ablation study, which is described below. Figure 6: Performance as a function of conversation length. Figure 8: Performance as a function of bootstrap iterations. 4.4.1 Importance of the Various Catalog DBs. As described in Sec. 3.2, the catalog DBs contain SPC features, descriptions, and use cases, obtained using web search and LLM-based text generation. In this experiment, we compared the performance across all three evaluated datasets when using all five DBs together (denoted as All DBs) against using each DB individually, as well as against using every combination of four DBs. For example, $DB_{Features}^{WebSearch}$ denotes including only this DB, while the complementary configuration denotes including all other (four) DBs except for this DB. 
As can be seen in Figure 7, on $DS^{IS}$ the All DBs configuration outperforms all other configurations, similarly to $DS^{GE}$, and on $DS^{CT}$, the All DBs configuration works best in 9 of the 11 tested configurations; based on this, we can conclude that each of the five DBs contributes to ImpReSS's performance. In future implementations, if overhead limitations, such as token consumption costs, are more restrictive, the best option would be to use any of the Use Cases DBs (based on web search or LLM generation), as retrieving SPC candidates based on their typical use cases is more effective than retrieval based on the SPCs' features or descriptions. 4.4.2 Importance of Candidates' Bootstrap Ranking. As discussed in Sec. 4.5, the third step of ImpReSS – candidate ranking – introduces considerable overhead in both cost and latency. Figure 9: Time overhead analysis. To assess the necessity of this step, we evaluated ImpReSS with zero to three bootstrap ranking iterations. The results, shown in Figure 8, reveal a marked performance drop across all datasets when the candidate ranking is disabled, i.e., when ImpReSS relies solely on the candidate SPCs retrieved from the catalog DBs, ranked by their similarity to the generated query (Sec. 2.3.1). Unlike Hou et al. [8], who advocate for at least three ranking iterations, our findings indicate that even one or two iterations substantially improve performance, though still falling short of the optimal results achieved with three.
# 4.5 Overhead
Figures 9 and 10 present ImpReSS's overhead in terms of time (seconds) and token consumption, respectively indicating computational and monetary expense. 
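The bootstrap candidate-ranking step just discussed (Sec. 2.3.3 and 4.4.2), whose cost dominates the overhead analyzed below, can be sketched as follows. This is a minimal illustration under assumptions: `rank_once` is a hypothetical stand-in for the LLM ranker, and the Borda-style score aggregation is our own choice, as the paper does not specify how the shuffled rankings are combined.

```python
import random

def bootstrap_rank(candidates, rank_once, iterations=3, seed=0):
    """Rank candidates over several shuffled input orders and aggregate,
    mitigating the position bias of a single LLM ranking pass.

    rank_once: stand-in for the LLM ranker; takes a candidate list and
    returns it in ranked order (best first).
    """
    rng = random.Random(seed)
    scores = {c: 0 for c in candidates}
    for _ in range(iterations):
        shuffled = list(candidates)
        rng.shuffle(shuffled)  # fresh presentation order each iteration
        for pos, c in enumerate(rank_once(shuffled)):
            scores[c] += len(candidates) - pos  # Borda-style points
    return sorted(candidates, key=lambda c: -scores[c])

# Toy ranker: alphabetical order plays the role of the LLM's judgment.
print(bootstrap_rank(["c", "a", "b"], sorted))  # ['a', 'b', 'c']
```

In the actual system the three iterations run concurrently, so wall-clock latency is closer to one ranking call than to three.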
Although the overhead was typically higher for $DS^{IS}$ than it was for $DS^{CT}$ and $DS^{GE}$, in most cases the differences were relatively small. While bootstrap ranking the retrieved SPC candidates is the most resource-consuming step, this step also greatly affects ImpReSS's performance (as discussed in Sec. 4.4.2). The overhead of that step can be reduced in several ways, including the use of a local LLM, which may result in shorter and more consistent waiting times.
# 5 Related Work
# 5.1 Chatbots in Customer Service and Support
Live chat interfaces have become a widely adopted channel for delivering real-time customer service [19]. Customers increasingly rely on these platforms to access information or receive assistance, and the immediacy of live chat strongly influences customer trust and satisfaction [2]. Hardalov et al. [7] examined the automation of customer support using conversational agents, comparing methods such as information retrieval, sequence-to-sequence (Seq2Seq) models, attention mechanisms, and the transformer architecture. Other studies have investigated domain-specific applications in sectors such as banking [32] and healthcare [3], employing various deep learning and natural language processing techniques. More recently, LLMs have been applied to customer support tasks, including fine-tuning OpenAI's GPT-4 for e-commerce [18] and deploying Google's Flan-T5 XXL model for real-time assistance [15]. Additional research has explored LLM-driven approaches to create context-aware and personalized support chatbots [18]. While these methods aim to enhance the quality of customer service, they typically do not utilize conversation data to recommend relevant products or services – an approach that could further support users and drive business growth at the same time. 
# 5.2 Conversational Recommender Systems
CRSs collect user preferences and deliver personalized suggestions through interactive natural language dialogues [24]. CRSs are broadly categorized into attribute-based and open-ended systems. Attribute-based CRSs refine user preferences through explicit, attribute-driven Q&A interactions, aiming to identify the optimal item(s) in minimal rounds [11, 16, 33]. In contrast, open-ended CRSs such as RecInDial [27] and UniCRS [29] support free-form conversations, enabling more flexible user interactions. To enhance performance, UniCRS incorporates dialogue history as contextual input and knowledge graph entities as external information to jointly address recommendation and dialogue tasks. Deng et al. [5] proposed UniMIND, a Seq2Seq model that unifies multiple CRS goals – including chitchat, question answering, topic prediction, and recommendation – via prompt-based learning. However, UniMIND's multi-goal approach can detract from recommendation progress, especially when chitchat dominates. Fundamentally, CRSs drive recommendations by steering conversations to elicit user preferences and background, assuming that users engage with the intent of receiving suggestions. In contrast, ImpReSS assumes no underlying purchasing intent and exerts no control over the conversation's direction. Functioning as an add-on, ImpReSS analyzes the support conversation, and recommends the most suitable SPCs – the ones capable of addressing the identified issue. Figure 10: Token consumption overhead analysis. Table 4: Comparison of reviewed conversational and LLM-based recommender systems.
# 5.3 LLM-Based Recommender Systems
LLMs have been integrated in various components of recommender systems, including feature engineering, user and item embeddings, scoring, ranking, or even functioning as agents that guide the recommendation process itself [12]. 
For item embedding, TedRec [34] performs sequence-level semantic fusion of textual and ID features for sequential recommendation, while NoteLLM [35] combines note semantics with collaborative signals to produce note embeddings. In contrast, Chen et al. [4] proposed a hierarchical approach where an LLM extracts features from item descriptions and converts them into compact embeddings, reducing computational overhead. Other studies leveraged autonomous, LLM-powered agents. Unlike methods that merely prompt LLMs with user history, these approaches exploit advanced agentic capabilities such as planning and tool use. RecMind [30] performs multi-path reasoning through LLM planning, MACRec [31] features collaboration among specialized agents, and RAH [21] is a human-centered framework in which multiple LLM-based agents mediate between users and recommender systems to align suggestions with a user's personality and reduce cognitive load. While most LLM-based systems focus on predicting what users might like based on past behavior or preferences (see Table 4), we propose a paradigm shift: understanding what the user needs. ImpReSS identifies recommendation opportunities by detecting needs – explicitly expressed, implicitly inferred, or derived from interactions between a user and a knowledgeable entity, either human or virtual.
# 6 Discussion
# 6.1 Key Insights
ImpReSS's results are promising, with a mean of 0.8 for both MRR@1 and R@3 achieved using a state-of-the-art LLM and embedding model. These performance levels are typically reached early in a conversation, highlighting ImpReSS's ability to produce accurate recommendations quickly, after just a short interaction with a customer/user. Our ablation study confirms that each of ImpReSS's components contributes to its overall performance, although some overhead is incurred (Sec. 4.5). To mitigate time overhead, we recommend using more powerful hardware and parallel LLM calls for bootstrap ranking. 
To reduce token consumption, our findings indicate that even one or two iterations of bootstrap ranking substantially improve performance, though still falling short of the optimal results achieved with three. # 6.2 Research Limitations and Future Work While the results of this study are promising, their generalizability might be affected by factors such as the modest size of the datasets and the limited domain diversity. In future work we will evaluate ImpReSS on additional, larger conversational datasets drawn from a variety of support scenarios. We have also identified three other interesting research directions: (1) conducting an online experiment to examine user conversion rates after receiving recommendations from ImpReSS; (2) optimizing the timing of the recommendation within the support conversation; and (3) refining the chatbot’s phrasing of recommendations within the "In-Chat" presentation strategy. In addition, given ImpReSS’s sensitivity to the underlying LLM selection (Sec. 4.2), future research could explore the use of newly released LLMs, particularly open-source models, that can run locally to both reduce costs and enhance privacy.
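The parallel bootstrap-ranking suggestion above can be sketched as follows; `score_fn` stands in for an LLM call that returns a relevance score per candidate SPC for a conversation (a hypothetical interface introduced for illustration, not ImpReSS's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def bootstrap_rank(conversation, candidate_spcs, score_fn, n_iterations=3):
    # Issue the per-iteration LLM scoring calls concurrently instead of
    # sequentially; each call scores the candidate SPCs for the conversation,
    # and the scores are averaged before the final ranking.
    with ThreadPoolExecutor(max_workers=n_iterations) as pool:
        runs = list(pool.map(lambda _: score_fn(conversation, candidate_spcs),
                             range(n_iterations)))
    avg = {c: sum(r[c] for r in runs) / len(runs) for c in candidate_spcs}
    return sorted(candidate_spcs, key=avg.get, reverse=True)
```

Because the iterations are independent, wall-clock time is bounded by the slowest single call rather than the sum of all calls.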
Following recent advancements in large language models (LLMs), LLM-based chatbots have transformed customer support by automating interactions and providing consistent, scalable service. While LLM-based conversational recommender systems (CRSs) have attracted attention for their ability to enhance the quality of recommendations, limited research has addressed the implicit integration of recommendations within customer support interactions. In this work, we introduce ImpReSS, an implicit recommender system designed for customer support conversations. ImpReSS operates alongside existing support chatbots, where users report issues and chatbots provide solutions. Based on a customer support conversation, ImpReSS identifies opportunities to recommend relevant solution product categories (SPCs) that help resolve the issue or prevent its recurrence -- thereby also supporting business growth. Unlike traditional CRSs, ImpReSS functions entirely implicitly and does not rely on any assumption of a user's purchasing intent. Our empirical evaluation of ImpReSS's ability to recommend relevant SPCs that can help address issues raised in support conversations shows promising results, including an MRR@1 (and recall@3) of 0.72 (0.89) for general problem solving, 0.82 (0.83) for information security support, and 0.85 (0.67) for cybersecurity troubleshooting. To support future research, our data and code will be shared upon request.
# 1. Introduction

Large language models (LLMs) have achieved tremendous success in text processing (OpenAI, 2024), offering new ways to interact with machines. This progress has motivated efforts to extend their capabilities to speech to enable more natural spoken interactions with machines. However, modeling speech presents unique challenges due to its continuous and complex nature. As a result, previous works (Lakhotia et al., 2021; Borsos et al., 2023; Maiti et al., 2024) tokenized speech into simpler discrete units to enable the application of language modeling techniques originally developed for text. These semantic tokens are typically derived by performing $k$-means clustering on features extracted from self-supervised pre-trained speech models, such as HuBERT (Hsu et al., 2021). We use the term semantic tokens to distinguish them from acoustic tokens (Borsos et al., 2023), which capture general acoustic information. These models primarily capture the linguistic aspects of speech, such as phonetic information, while often overlooking paralinguistic features, such as prosody (Weston et al., 2021). As a result, training an autoregressive model solely with such semantic tokens restricts the model's ability to fully capture and represent the diverse information encoded in speech. To address the aforementioned limitation, Kharitonov et al. (2022) augmented the tokens with extracted fundamental frequency ($F_0$, or pitch) to enable prosody-aware modeling. However, augmenting semantic tokens with manually defined paralinguistic attributes can be inherently suboptimal. First, pitch alone cannot capture the full range of paralinguistic information encoded in speech. For instance, energy-related (e.g., loudness, zero-crossing rate) and spectral-related (e.g., mel-frequency cepstral coefficients) features are also important paralinguistic features (Schuller et al., 2009; 2013; Eyben et al., 2015).
Furthermore, training an accurate pitch tracker introduces additional complexity (Kim et al., 2018). Instead of relying on hand-engineered paralinguistic features, we propose an approach to learning these features directly from the input signal, within an autoregressive framework. These learned features are optimized to both: 1) reconstruct the input speech, and 2) enhance the autoregressive modeling process. Our approach allows the learned features to complement semantic tokens, removing the need for pre-extracted paralinguistic features as required in previous methods. As a result, our method generates more natural-sounding speech compared to baseline models while maintaining comparable meaningfulness of the syntheses.

# 2. Preliminaries

In this work, we operate on mel-spectrograms and treat vocoding, the conversion of a mel-spectrogram back to a raw waveform, as a problem that has already been addressed. We denote the mel-spectrogram as $\mathbf{X} = (x_t \in \mathbb{R}^{d_x})_{t=1}^{T}$, where $d_x$ represents the number of filter-banks, $T$ is the total number of time frames in the spectrogram, and $x_t$ is the frame at time $t$. We use $\mathbf{X}_{i:j}$ to denote the sub-sequence $(x_t)_{t=i}^{j}$, and define $\mathbf{X}_{1:0} = \varnothing$. Our goal is to model $p(\mathbf{X})$ using a generative approach.

Token-based Speech Language Model We describe the general framework of speech language models that rely on the use of semantic tokens, as seen in works like Lakhotia et al. (2021); Borsos et al. (2023); Maiti et al. (2024). This approach consists of three components: a speech tokenizer, an autoregressive model, and a decoder.
The speech tokenizer maps $\mathbf{X}^{1}$ to a sequence of discrete semantic tokens $\mathbf{Z}^d = (z_t^d \in \mathbb{N}_k)_{t=1}^{T}$, where $\mathbb{N}_k = \{1, 2, \dots, k\}$, and $k$ is the vocabulary size of the semantic tokens. We use $p(\mathbf{Z}^d \mid \mathbf{X})$ to denote the implicit distribution of the pretrained speech tokenizer. The autoregressive model, parameterized by $\psi$, models the probability of token sequences $\mathbf{Z}^d$ as $p_{\psi}(\mathbf{Z}^d) = \prod_{t=1}^{T} p_{\psi}(z_t^d \mid \mathbf{Z}_{1:t-1}^d)$. Finally, the decoder, parameterized by $\theta$, is trained to convert $\mathbf{Z}^d$ back to $\mathbf{X}$ by modeling $p_{\theta}(\mathbf{X} \mid \mathbf{Z}^d)$. However, this framework is limited to semantic tokens $\mathbf{Z}^d$, which primarily capture linguistic information and ignore paralinguistic information. As a result, the decoder $\theta$ may struggle with accurate reconstruction, and the autoregressive model $\psi$ can have difficulty incorporating paralinguistic information. To address this limitation, we propose to incorporate the variational autoencoder framework to learn continuous features to complement $\mathbf{Z}^d$.

Variational Autoencoder (VAE) Latent variable models introduce unobserved latent variables $\mathbf{Z}^c = (z_t^c \in \mathbb{R}^{d_z^c})_{t=1}^{T}$ that influence the observed variable $\mathbf{X}$. $d_z^c$ is the dimension of each $z_t^c$, and is a hyper-parameter chosen prior to training.
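As an illustrative sketch of the chain-rule factorization $p_{\psi}(\mathbf{Z}^d) = \prod_{t=1}^{T} p_{\psi}(z_t^d \mid \mathbf{Z}_{1:t-1}^d)$ used by the token-based speech language model above (not the authors' implementation), the log-probability of a token sequence is simply the sum of per-step log-probabilities:

```python
import numpy as np

def sequence_log_prob(step_probs, tokens):
    # step_probs[t] is the model's distribution over the k-sized vocabulary at
    # step t, already conditioned on tokens[:t]; summing the per-step
    # log-probabilities gives log p(Z^d) under the chain rule.
    return float(sum(np.log(p[z]) for p, z in zip(step_probs, tokens)))
```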
In a VAE, the likelihood of the observed data given the latent variable, $p_{\theta}(\mathbf{X} \mid \mathbf{Z}^c)$, is modeled by a neural decoder, parameterized by $\theta$. The variational posterior, $q_{\phi}(\mathbf{Z}^c \mid \mathbf{X})$, is modeled by a neural encoder, parameterized by $\phi$. Using this modeling setup, the log-likelihood of the data, $\log p_{\theta}(\mathbf{X})$, can be written as:

$$
\log p_{\theta}(\mathbf{X}) = \underbrace{\mathbb{E}_{q_{\phi}(\mathbf{Z}^c \mid \mathbf{X})}\big[\log p_{\theta}(\mathbf{X} \mid \mathbf{Z}^c)\big] - D_{KL}\big(q_{\phi}(\mathbf{Z}^c \mid \mathbf{X}) \,\|\, p(\mathbf{Z}^c)\big)}_{\mathcal{O}_{ELBO}} + D_{KL}\big(q_{\phi}(\mathbf{Z}^c \mid \mathbf{X}) \,\|\, p_{\theta}(\mathbf{Z}^c \mid \mathbf{X})\big), \quad (1)
$$

where $D_{KL}$ is the Kullback–Leibler (KL) divergence between two distributions, and $p(\mathbf{Z}^c)$ is a fixed prior distribution (usually a Gaussian). In Equation 1, $\mathcal{O}_{ELBO}$ is known as the evidence lower bound (ELBO), which provides a lower bound for $\log p_{\theta}(\mathbf{X})$ since $D_{KL}\big(q_{\phi}(\mathbf{Z}^c \mid \mathbf{X}) \,\|\, p_{\theta}(\mathbf{Z}^c \mid \mathbf{X})\big)$ is always nonnegative. Therefore, instead of directly optimizing $\mathbb{E}_{\mathbf{X}}[\log p_{\theta}(\mathbf{X})]$, the VAE maximizes the tractable lower bound $\mathbb{E}_{\mathbf{X}}[\mathcal{O}_{ELBO}]$.

$^1$Speech tokenizers can operate on mel-spectrograms or directly on raw waveforms.
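The KL terms in Equation 1 have a closed form when both distributions are diagonal Gaussians; a small sketch (illustrative, not the paper's code):

```python
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    # Closed-form KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ),
    # summed over dimensions.
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return float(np.sum(np.log(sigma_p / sigma_q)
                        + (var_q + (mu_q - mu_p) ** 2) / (2.0 * var_p)
                        - 0.5))

def kl_to_standard_normal(mu, sigma):
    # KL to a fixed standard-normal prior p(Z^c) = N(0, I), i.e. the special
    # case mu_p = 0, sigma_p = 1 of the expression above.
    return kl_diag_gaussians(mu, sigma, np.zeros_like(mu), np.ones_like(mu))
```

The divergence is zero exactly when the two Gaussians coincide, which is why the KL term in the ELBO acts as a regularizer pulling the posterior toward the prior.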
Here, we refer to the learned continuous latent $\mathbf{Z}^c$ from the VAE as the variational features.

# 3. Proposed Framework

Figure 1 provides an overview of our proposed framework. This section is organized as follows: Section 3.1 introduces our setup that combines a VAE with an autoregressive model for the latent variables. Section 3.2 describes how we integrate semantic tokens into the framework. Section 3.3 discusses how to balance the different loss terms that arise in our setup. Section 3.4 describes the use of normalizing flows to improve the expressive power of the autoregressive prior. Finally, Section 3.5 introduces the diffusion decoder and the utterance encoder used in the framework.

# 3.1. VAE with an Autoregressive Prior

Our method starts by modeling the prior of the VAE, which is typically a fixed Gaussian distribution, with a trainable autoregressive model $p_{\psi}(\mathbf{Z}^c) = \prod_{t=1}^{T} p_{\psi}(z_t^c \mid \mathbf{Z}_{1:t-1}^c)$. We refer to this framework as VAE with an autoregressive prior. We note that the VAE with an autoregressive prior has been explored in previous works (Vahdat & Kautz, 2020; Zhu et al., 2020) within the computer vision domain. Additionally, Sun et al. (2020) also applied a similar framework for TTS, but with prior and posterior distributions optimized separately instead of jointly. Here, we adopt the VAE framework with an autoregressive prior for speech continuation and further integrate it with discrete token-based models to enhance the naturalness of the synthesis. We use a diagonal Gaussian distribution to model the variational posterior, where the statistics are predicted by a neural network:

$$
q_{\phi}(z_t^c \mid \mathbf{X}) = \mathcal{N}\big(z_t^c;\ \mu_{\phi}(\mathbf{X}, t),\ \sigma_{\phi}(\mathbf{X}, t)\big). \quad (2)
$$

Figure 1. Overview of our proposed approach. Our method integrates the token-based speech language model (outlined in Section 2, represented by the lower shaded region) with a variational autoencoder (VAE with autoregressive prior, shown in the upper shaded region). This setup allows the model to learn variational features $\mathbf{Z}^c$ that complement the pre-extracted semantic speech tokens $\mathbf{Z}^d$. In our proposed joint setup, the variational features $\mathbf{Z}^c$ are trained to 1) reconstruct speech $\mathbf{X}$ alongside $\mathbf{Z}^d$ (by maximizing $\mathcal{O}_{rec}$); 2) facilitate the prediction of the next speech token $z_t^d$ (by minimizing $\mathcal{L}_{kl}^d$); 3) support the sequential prediction of the variational features themselves (by minimizing $\mathcal{L}_{kl}^c$).

Since each $z_t^c$ is conditionally independent given $\mathbf{X}$, we can express the posterior as: $q_{\phi}(\mathbf{Z}^c \mid \mathbf{X}) = \prod_{t=1}^{T} q_{\phi}(z_t^c \mid \mathbf{X})$. With this decomposition, and the parameterized autoregressive prior, the $\mathcal{O}_{ELBO}$ in Equation 1 can be further derived$^2$ into:

$$
\mathcal{O}_{ELBO} = \underbrace{\mathbb{E}_{\mathbf{Z}^c \sim q_{\phi}(\mathbf{Z}^c \mid \mathbf{X})}\big[\log p_{\theta}(\mathbf{X} \mid \mathbf{Z}^c)\big]}_{\mathcal{O}_{rec}} - \underbrace{\sum_{t=1}^{T} \mathbb{E}_{\mathbf{Z}_{1:t-1}^c}\Big[D_{KL}\big(q_{\phi}(z_t^c \mid \mathbf{X}) \,\|\, p_{\psi}(z_t^c \mid \mathbf{Z}_{1:t-1}^c)\big)\Big]}_{\mathcal{L}_{kl}^c}. \quad (3)
$$

By maximizing $\mathcal{O}_{ELBO}$, we maximize the first term, the reconstruction objective $\mathcal{O}_{rec}$, and minimize the second term, the variational feature prediction loss $\mathcal{L}_{kl}^c$. We note that training a model to maximize Equation 3 is feasible without incorporating discrete semantic tokens $\mathbf{Z}^d$. This token-free approach is also depicted as the upper shaded region in Figure 1 (VAE with an Autoregressive Prior), and its properties are further explored in Section 5.

# 3.2. Incorporating the Semantic Tokens with VAE

We now integrate the semantic tokens $\mathbf{Z}^d$ with the VAE with an autoregressive prior. Using these tokens, the model no longer needs to encode as much phonetic information in $\mathbf{Z}^c$, allowing $\mathbf{Z}^c$ to focus on other attributes of continuous speech. To this end, we introduce a joint latent variable $\mathbf{Z} = (z_t \in \mathbb{R}^{d_z^c} \times \mathbb{N}_k)_{t=1}^{T}$, where $z_t$ is the concatenation of $z_t^c$ and $z_t^d$. Since $\mathbf{Z}^d$ and $\mathbf{Z}^c$ are conditionally independent given $\mathbf{X}$, we can express the new variational posterior as: $q_{\phi}(\mathbf{Z} \mid \mathbf{X}) = q_{\phi}(\mathbf{Z}^c \mid \mathbf{X})\, p(\mathbf{Z}^d \mid \mathbf{X})$. Then, we model $p_{\psi}(z_t \mid \mathbf{Z}_{1:t-1}) = p_{\psi}(z_t^d \mid \mathbf{Z}_{1:t-1})\, p_{\psi}(z_t^c \mid \mathbf{Z}_{1:t-1})$, assuming the conditional independence of $z_t^d$ and $z_t^c$ given the past generations. We further discuss this modeling assumption in Appendix I.
This allows us to rewrite$^3$ $\mathcal{O}_{ELBO}$ from Equation 1 as:

$$
\begin{array}{rl}
\mathcal{O}_{ELBO} = & \underbrace{\mathbb{E}_{\mathbf{Z}^d \sim p(\mathbf{Z}^d \mid \mathbf{X}),\, \mathbf{Z}^c \sim q_{\phi}(\mathbf{Z}^c \mid \mathbf{X})}\big[\log p_{\theta}(\mathbf{X} \mid \mathbf{Z}^d, \mathbf{Z}^c)\big]}_{\mathcal{O}_{rec}} \\
& - \underbrace{\sum_{t=1}^{T} \mathbb{E}_{\mathbf{Z}_{1:t-1}}\Big[D_{KL}\big(q_{\phi}(z_t^c \mid \mathbf{X}) \,\|\, p_{\psi}(z_t^c \mid \mathbf{Z}_{1:t-1})\big)\Big]}_{\mathcal{L}_{kl}^c} \\
& - \underbrace{\sum_{t=1}^{T} \mathbb{E}_{\mathbf{Z}_{1:t-1}}\big[-\log p_{\psi}(z_t^d \mid \mathbf{Z}_{1:t-1})\big]}_{\mathcal{L}_{kl}^d}.
\end{array} \quad (4)
$$

From Equation 4, our training objective $\mathcal{O}_{ELBO}$ consists of three terms: $\mathcal{O}_{rec}$, $\mathcal{L}_{kl}^c$, and $\mathcal{L}_{kl}^d$. $\mathcal{O}_{rec}$ is the reconstruction objective. Maximizing $\mathcal{O}_{rec}$ trains the decoder $\theta$ to reconstruct $\mathbf{X}$ from both $\mathbf{Z}^c$ and $\mathbf{Z}^d$, while encouraging the encoder $\phi$ to generate $\mathbf{Z}^c$ with helpful information to reconstruct $\mathbf{X}$. $\mathcal{L}_{kl}^c$ is the variational feature prediction loss.
Minimizing $\mathcal{L}_{kl}^c$ trains the autoregressive model $\psi$ to predict the next variational feature $z_t^c$ and encourages the encoder $\phi$ to generate $\mathbf{Z}^c$ that is easier for $\psi$ to model. $\mathcal{L}_{kl}^d$ is the semantic token prediction loss, which trains the autoregressive model $\psi$ to predict the next semantic token given the previous $\mathbf{Z}^d$ and $\mathbf{Z}^c$.

# 3.3. Balancing the loss terms

In Equation 4, the terms $\mathcal{O}_{rec}$, $\mathcal{L}_{kl}^c$, and $\mathcal{L}_{kl}^d$ can work against each other. For instance, the encoder $\phi$ optimizes both $\mathcal{O}_{rec}$ and $\mathcal{L}_{kl}^c$. Maximizing $\mathcal{O}_{rec}$ encourages the variational features $\mathbf{Z}^c$ to encode more information about $\mathbf{X}$, while minimizing $\mathcal{L}_{kl}^c$ regularizes $\mathbf{Z}^c$ to be simpler for the autoregressive model $\psi$ to predict. Similarly, optimizing $\mathcal{L}_{kl}^c$ and $\mathcal{L}_{kl}^d$ with the autoregressive model $\psi$ is a multi-task learning scenario, where $\psi$ learns to predict two different objectives given the same input. Moreover, these terms may operate on different scales due to how the losses are computed, necessitating a balancing mechanism. As a result, inspired by $\beta$-VAE (Higgins et al., 2017), we introduce two scalars, $\beta$ and $\gamma$, to balance the loss terms as follows:

$$
\mathcal{O}_{ELBO} = \mathcal{O}_{rec} - \beta \left( \mathcal{L}_{kl}^c + \gamma \cdot \mathcal{L}_{kl}^d \right). \quad (5)
$$

Here, a larger $\beta$ favors a simple $p(\mathbf{Z}^c)$, while a smaller $\beta$ encourages the variational features $\mathbf{Z}^c$ to encode more information about $\mathbf{X}$. A larger $\gamma$ encourages the autoregressive model $\psi$ to prioritize accurate predictions of $\mathbf{Z}^d$ over $\mathbf{Z}^c$. In practice, we employ a linear warm-up strategy for $\beta$, increasing it from zero to its final value during the early stages of training. This approach, inspired by prior works on text generation (Bowman et al., 2016; Fu et al., 2019), helps mitigate posterior collapse. Empirically, we find that this strategy allows for higher values of $\beta$ without causing $\mathcal{L}_{kl}^c$ to collapse to zero.

# 3.4. Time-wise Normalizing Flow

We employ a lightweight normalizing flow (Rezende & Mohamed, 2015) that is shared across time to improve the expressive power of the autoregressive prior $p_{\psi}(z_t^c \mid \mathbf{Z}_{1:t-1})$. Specifically, an invertible flow network $f_{\psi}$ maps each $z_t^c$ to a point in the Gaussian distribution, and sampling can be realized by running the network in reverse. By the change of variables, we can write:

$$
p_{\psi}(z_t^c \mid \mathbf{Z}_{1:t-1}) = \mathcal{N}\big(f_{\psi}(z_t^c);\ \mu_{\psi}(\mathbf{Z}_{1:t-1}),\ \sigma_{\psi}(\mathbf{Z}_{1:t-1})\big) \left| \det \frac{\partial f_{\psi}(z_t^c)}{\partial z_t^c} \right|, \quad (6)
$$

where $\mu_{\psi}, \sigma_{\psi}$ are modeled by autoregressive neural networks (i.e., a transformer). We choose affine coupling layers (Dinh et al., 2017) as the backbone of our normalizing flow due to their simple implementation and efficient computation.
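To make the change-of-variables computation concrete, here is an illustrative sketch in which a single elementwise affine map stands in for the coupling-layer stack $f_{\psi}$ (an assumption for brevity; a real coupling flow composes several such transforms, and the Jacobian here is diagonal with entries $a$):

```python
import numpy as np

def gaussian_logpdf(x, mu, sigma):
    # Log-density of a diagonal Gaussian, summed over dimensions.
    return float(np.sum(-0.5 * ((x - mu) / sigma) ** 2
                        - np.log(sigma) - 0.5 * np.log(2.0 * np.pi)))

def flow_prior_logpdf(z, a, b, mu, sigma):
    # Change of variables as in Eq. (6) for an elementwise affine flow
    # f(z) = a * z + b:  log p(z) = log N(f(z); mu, sigma) + log|det df/dz|.
    return gaussian_logpdf(a * z + b, mu, sigma) + float(np.sum(np.log(np.abs(a))))
```

Sampling runs the map in reverse: draw $\epsilon \sim \mathcal{N}(\mu, \sigma)$ and return $f^{-1}(\epsilon) = (\epsilon - b)/a$.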
We note that similar approaches using normalizing flows to enhance prior distributions have also been observed in Kim et al. (2021; 2020) for text-to-speech.

# 3.5. Other Components

We describe the modeling of our decoder $p_{\theta}(\mathbf{X} \mid \mathbf{Z})$ and the utterance encoder designed to capture static information. While these components are not the main focus of our study, they help ensure a fair comparison between different methods. We use these components for all methods in our experiments and focus on how changing the inputs to the autoregressive model affects performance.

Diffusion Decoder We model the decoder $p_{\theta}(\mathbf{X} \mid \mathbf{Z})$ with a Denoising Diffusion Probabilistic Model (DDPM) (Ho et al., 2020). We choose DDPM due to its flexibility in modeling complex distributions. We condition the diffusion process on $\mathbf{Z}$. For back-propagation through the encoder $\phi$, we use the reparameterization trick (Kingma & Welling, 2019) to sample from $q_{\phi}(\mathbf{Z}^c \mid \mathbf{X})$, and combine it with the embedded semantic tokens $\mathbf{Z}^d$. The outcome is then concatenated with each intermediate layer of the diffusion decoder for conditional diffusion. We train all diffusion decoders with 1000 DDPM steps. Note that our proposed approach is not limited to a specific decoder. Although we opted for a diffusion-based decoder for ease of training, our method is compatible with various decoding strategies. There are no constraints on the type of decoder used to parameterize $p_{\theta}(\mathbf{X} \mid \mathbf{Z}^d, \mathbf{Z}^c)$.

Utterance Encoder Static features, such as speaker information and recording environments, often vary little across a given utterance. In our current modeling approach, this static information would be redundantly encoded at each time step.
To address this issue, we introduce an additional utterance-level feature encoder that encourages $\mathbf{Z}$ to focus on time-varying signals. Specifically, we randomly segment a portion of the mel-spectrogram $\mathbf{X}$ and feed it to the utterance encoder to produce an utterance-level embedding. This embedding is then concatenated with $\mathbf{Z}$ before being provided to the diffusion decoder. The utterance encoder is trained end-to-end with the entire system.

# 4. Experimental Setup

# 4.1. Datasets

We use two datasets in our experiments: LibriSpeech (Panayotov et al., 2015) and Libri-light (Kahn et al., 2020), consisting of audiobooks narrated in English. LibriSpeech contains 960 hours of speech, while Libri-light contains 60k hours of speech. For semantic token extraction, we follow Hassid et al. (2023); Maiti et al. (2024) and use tokens derived from HuBERT representations (Hsu et al., 2021). We use the official HuBERT checkpoints, pre-trained on LibriSpeech$^4$ and Libri-light$^5$. We run $k$-means clustering with $k = 200$ on the output of the last transformer layer of HuBERT using $10\%$ of data randomly sampled from the training set. We pick $k = 200$ after testing values from $\{50, 200, 1000\}$ and choosing the one that produced the best language modeling performance. This result is also consistent with Maiti et al. (2024). More details on the choice of $k$ are provided in Appendix F.

# 4.2. Methods

We compare our proposed approach to methods that use only semantic tokens in the autoregressive model, as well as methods that use semantic tokens with added pitch features in the autoregressive model. To ensure a fair comparison, we fix the autoregressive model architecture to be the same for all methods, varying only the input and output layers.
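The tokenization step in Section 4.1 can be sketched as nearest-centroid assignment over frame features (an illustration, not the authors' pipeline, which clusters HuBERT features with $k = 200$):

```python
import numpy as np

def assign_semantic_tokens(features, centroids):
    # Each frame feature (e.g., a HuBERT hidden state) is mapped to the index
    # of its nearest k-means centroid, yielding the discrete token sequence Z^d.
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (T, k)
    return d2.argmin(axis=1)                                            # (T,)
```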
We also use the same configuration for the diffusion decoder and utterance encoder across all methods.$^6$ For the neural vocoder (i.e., mapping the mel-spectrogram back to waveform), we train HiFi-GAN (Kong et al., 2020) on LibriSpeech and use it for all of the methods. We leave the detailed configuration of model architectures to Appendix B. Below, we provide further details on the three approaches.

Token-LM We adopt the token-based speech language model (described in Section 2) as our baseline, representing approaches such as Lakhotia et al. (2021); Borsos et al. (2023); Maiti et al. (2024), which apply only discrete semantic tokens to the autoregressive model.

Token-LM + Pitch In this baseline approach, we augment the semantic tokens of the token-based speech language model (described in Section 2) with log pitch features before passing them into the autoregressive model. The pitch features are extracted using CREPE (Kim et al., 2018). Additionally, we introduce a pitch regression task alongside the standard next-token prediction task, optimizing it with an L1 loss. This method incorporates hand-engineered paralinguistic features, similar to the approach used by Kharitonov et al. (2022).

Token-LM + Acoustic In this comparison method, we augment semantic tokens with acoustic tokens (Borsos et al., 2023; Défossez et al., 2023). Specifically, we train a residual vector quantization (RVQ) autoencoder to discretize speech into four levels of acoustic tokens. At each transformer time step, the model first predicts the semantic token, followed by the acoustic tokens, which are autoregressively generated over the code levels using an additional transformer layer, similar to Chen et al. (2023); Défossez et al. (2024). We include this baseline to compare with recent methods (Défossez et al., 2024) that integrate acoustic tokens into the autoregressive generation process.
Variational speech modeling approach (Proposed) This is our proposed approach introduced in Section 3. In this approach, we learn to extract variational features that supplement the semantic tokens while jointly training the autoregressive model. The learned variational features are used by both the autoregressive model and the decoder. This approach eliminates the need to select and extract hand-engineered paralinguistic features. Additionally, we set our latent dimension $d_z^c = 4$. While we observed performance improvements with larger $d_z^c$, we opted for a smaller value to ensure a fairer comparison, as it results in less variation in parameter size. Our additional experiments on the latent dimension $d_z^c$ are in Appendix E. For inference, we use temperature-based sampling similar to Lakhotia et al. (2021). Specifically, we set the temperature to 0.85 for both semantic tokens $\mathbf{Z}^d$ and continuous variational features $\mathbf{Z}^c$. For variational features, the temperature is the scalar multiplied by the standard deviation of the normal distribution in Equation 6 before sampling, as done in Kim et al. (2020). For the diffusion decoder, we use denoising diffusion implicit models (DDIM) from Song et al. (2021) with $\eta = 0.5$ and 100 diffusion steps. Training details are provided in Appendix C.

# 4.3. Evaluation Metrics

We evaluate the comparison methods on both reconstruction and speech continuation. The reconstruction metrics, introduced in Section 4.3.1, involve only the encoder-decoder pair and indicate how much information is preserved in the extracted representations. The remaining metrics focus on speech continuation, which is our primary objective, where the performance of the autoregressive model is also assessed.

# 4.3.1.
OBJECTIVE METRICS

Reconstruction Metrics We use $F_0$-RMSE, mel-cepstral distortion (MCD), and character error rate (CER) to measure the quality of the reconstructed signal. $F_0$-RMSE measures the root mean squared difference between the pitch contour of the ground-truth signal and the reconstructed one. We use CREPE (Kim et al., 2018) to extract pitch and only consider the voiced parts of the signal when computing the difference. MCD measures the Euclidean distance between the 23 mel-cepstral coefficients (MCEPs) extracted from the ground-truth and reconstructed signals. For calculating CER, we use a pre-trained Whisper (Radford et al., 2023) automatic speech recognition model.$^7$ We use the dev-clean and dev-other subsets of LibriSpeech for evaluating reconstruction. To ensure deterministic results, instead of sampling each $z_t^c$ from $q_{\phi}(z_t^c \mid \mathbf{X})$, we directly use the Gaussian mean $\mu_{\phi}(\mathbf{X}, t)$ from Equation 2. In practice, we observed that the stochastic noise of $q_{\phi}(z_t^c \mid \mathbf{X})$ has little effect on the reconstructed syntheses.

ZeroSpeech Metrics We adopt the commonly used metrics (Borsos et al., 2023; Hassid et al., 2023; Maiti et al., 2024) from the ZeroSpeech challenge (Nguyen et al., 2020): sWUGGY and sBLIMP, to measure language capability objectively. For these two metrics, speech utterances are given in positive-negative pairs, with each model scoring both utterances. The model's accuracy is the percentage of instances where the positive example receives a higher score than the negative one. sWUGGY measures whether the model scores a real word higher than a phonetically similar non-word (e.g., "brick" vs. "blick"). sBLIMP measures whether a model scores a grammatically correct sentence higher than a similar but incorrect one (e.g., "the dogs sleep" vs. "the dog sleep").
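The accuracy computation shared by these paired metrics amounts to the following (an illustrative sketch; model scores would be sequence log-likelihoods):

```python
import numpy as np

def pairwise_accuracy(pos_scores, neg_scores):
    # sWUGGY / sBLIMP style accuracy: the fraction of pairs in which the model
    # scores the positive (real word / grammatical) utterance higher than the
    # matched negative one.
    pos, neg = np.asarray(pos_scores), np.asarray(neg_scores)
    return float(np.mean(pos > neg))
```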
Both metrics use text-to-speech to generate the examples. In line with Borsos et al. (2023), we evaluate sWUGGY using only words existing in LibriSpeech (referred to as the "in-vocab" version). We use the test split for evaluation. See Appendix G for a detailed description of how we estimate the scores for the methods.

# 4.3.2. SUBJECTIVE METRICS

We use subjective human evaluations to assess the naturalness and meaningfulness of the generated speech. We randomly sampled 100 utterances from the LibriSpeech dev-clean and dev-other subsets, cropping the first three seconds to use as prompts. Each audio sample was rated by seven annotators. For naturalness, annotators rated how human-like the generated speech sounded on a five-point Likert scale, where one corresponds to "Very unnatural" and five to "Very natural." For meaningfulness, they rated the grammar and content of the speech on a five-point Likert scale, where one corresponds to "Very Poor" and five to "Excellent." Additional details on the subjective evaluations are provided in Appendix D.

# 5. Experimental Results

# 5.1. Main Results

Tables 1 and 2 present the results for the methods described in Section 4.2. Table 1 reports objective metrics for speech reconstruction, while Table 2 provides both objective and subjective results for speech continuation. We discuss our observations below.

Table 1. Results of speech reconstruction evaluation ($F_0$-RMSE, MCD, CER) for the models discussed in Section 4.2. The evaluation metrics are detailed in Section 4.3. All models were trained on the Libri-light dataset.

Reconstruction Quality. First, the results in Table 1 show that, compared to Token-LM and Token-LM + Pitch, our proposed approach improves the reconstruction of the original signal.
These findings highlight three key points: 1) discrete semantic tokens alone are insufficient to capture all the components necessary for faithful reconstruction, 2) incorporating only pitch information is not enough, and 3) the learned variational features $\mathbf{Z}^c$ in our approach effectively complement the discrete semantic tokens $\mathbf{Z}^d$, leading to better reconstruction of the speech signal. On the other hand, our proposed method achieves slightly lower reconstruction quality than Token-LM + Acoustic. Since the variational features are continuous, they should in principle be able to encode more information than four levels of acoustic tokens; our results therefore suggest that the information encoded in the variational features is effectively regularized by the autoregressive losses $\mathcal{L}_{kl}^c$ and $\mathcal{L}_{kl}^d$. Speech continuation of our approach is more natural compared to the speech generated by the baselines. The subjective evaluation of speech continuation, measured by the mean opinion score of naturalness (N-MOS) in Table 2, shows that the syntheses produced by our proposed approach have significantly higher naturalness than all baselines. This finding further supports our hypothesis that the variational features $\mathbf{Z}^c$ learned by our approach improve the quality of the synthesis. While Token-LM + Acoustic achieves the best reconstruction in Table 1, the autoregressive model struggles to effectively process the additional information encoded in the RVQ tokens, resulting in significantly lower speech continuation performance, as shown in Table 2. Additionally, Table 2 compares the number of parameters between the different methods. The result indicates that the overhead of the proposed method is relatively small ($<1\%$ of the total parameters), while still achieving noticeably better performance.
Speech generated using our proposed approach achieves subjective meaningfulness (as measured by M-MOS) comparable to the baselines. The results in Table 2 indicate that our proposed approach produces syntheses that are comparable to or better than the baselines, as reflected by its higher meaningfulness mean opinion score (M-MOS). However, all compared methods show lower sWUGGY and sBLIMP scores than Token-LM. This outcome is expected, as the model must predict additional acoustic information beyond semantic tokens, which primarily encode linguistic content. Consequently, given a fixed model parameter budget, language modeling performance naturally declines as the model allocates capacity to modeling acoustic information. This effect is also evident in the low M-MOS of Token-LM + Acoustic, where the acoustic tokens may capture excessively detailed information, such as recording noise, which does not contribute meaningfully to synthesis. Table 2. Results of speech continuation evaluation for the models discussed in Section 4.2. The evaluation metrics are detailed in Section 4.3. M-MOS refers to the meaningfulness mean opinion score. N-MOS refers to the naturalness mean opinion score. Both M-MOS and N-MOS are evaluated on speech continuation and presented along with 95% confidence intervals. All models were trained on the Libri-light dataset. ‘# Param.’ refers to the number of parameters used during inference. ‘M’ stands for million. However, one may question why the trend in the sWUGGY and sBLIMP scores does not align with the M-MOS evaluation. We analyze the ASR transcriptions from the compared methods and observe that the transcriptions of Token-LM do have higher meaningfulness than those of other approaches, consistent with the trend of the sWUGGY and sBLIMP scores. However, after listening to the audio samples, we found that the natural prosody of our proposed method significantly improves intelligibility.
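For concreteness, MOS values with 95% confidence intervals like those in Table 2 can be computed from raw Likert ratings roughly as follows. This is a sketch using a normal approximation; the exact pooling of the 100 utterances × 7 annotators is an assumption, not taken from the paper.

```python
import math

def mos_with_ci(ratings, z=1.96):
    """Mean opinion score and the half-width of a normal-approximation
    confidence interval, from a flat list of 1-5 Likert ratings
    (z = 1.96 gives the 95% interval)."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)                    # standard error * z
    return mean, half_width
```

A reported value of, say, 4.1 ± 0.5 then means the interval [mean − half_width, mean + half_width].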
Although Whisper ASR can still transcribe speech with unnatural prosody generated by Token-LM, human raters often needed multiple passes to fully comprehend the linguistic content. In practical applications, interactive dialogue systems must generate speech that users can easily understand in a single pass. The M-MOS score serves as an indicator of the suitability of a system in this regard. # 5.2. Impact of Loss-balancing Parameters Here, we study the effect of varying the loss-balancing hyper-parameters $\beta$ and $\gamma$, which are described in Section 3.3. Varying $\beta$. Table 3 shows that, for the reconstruction metrics ($F_0$-RMSE, MCD, CER), lower values of $\beta$ result in smaller errors, indicating better reconstruction. However, for the sWUGGY and sBLIMP metrics, performance decreases as $\beta$ decreases. This finding aligns with our discussion in Section 3.3: lower $\beta$ values encourage better reconstruction but make it harder for the autoregressive model to effectively model $\mathbf{Z}^c$. Varying $\gamma$. Table 4 shows that increasing $\gamma$ leads to worse pitch reconstruction, as measured by $F_0$-RMSE, but improves CER. This result indicates that $\gamma$ governs the type of information captured in the variational features $\mathbf{Z}^c$. With a higher $\gamma$, the system prioritizes the prediction of semantic tokens; the variational features $\mathbf{Z}^c$ are therefore encouraged to encode more phonetic information, resulting in lower CER and MCD. In contrast, a lower $\gamma$ encourages $\mathbf{Z}^c$ to focus more on encoding pitch-related information, as indicated by the lower $F_0$-RMSE. We then analyze the subjective measures and observe that both M-MOS and N-MOS favor a lower $\gamma$. We attribute the performance decline to the increased difficulty of autoregressively generating $\mathbf{Z}^c$.
By increasing the weight of $\mathcal{L}_{kl}^d$, the model sacrifices its focus on minimizing $\mathcal{L}_{kl}^c$, which in turn compromises its ability to model $\mathbf{Z}^c$. # 5.3. Removing the Semantic Tokens Here, we evaluate the utility of the semantic tokens in our proposed approach by training a model that uses only the variational features $\mathbf{Z}^c$. This corresponds to training only a VAE with an autoregressive prior (Equation 3), without discrete semantic tokens. Table 3 shows the impact of removing the discrete semantic tokens from our proposed approach, denoted as Proposed (− tokens). We find that excluding semantic tokens leads to a slight improvement in the sWUGGY metric compared to including them. However, this exclusion significantly worsens the CER, indicating poorer phonetic reconstruction. These results suggest that without discrete semantic tokens, our approach struggles to effectively encode abstract phonetic information in the variational features $\mathbf{Z}^c$ but still performs well on sWUGGY, possibly by leveraging other cues. One possible explanation is that the synthesized non-existent words in sWUGGY, being out-of-domain for the text-to-speech system, may exhibit subtle prosodic irregularities that our model is able to detect. On the other hand, the best reconstruction results are obtained when semantic tokens are included, as removing them leads to worse reconstruction metrics. Table 3. Results showing the impact of varying the $\beta$ parameter (as described in Section 3.3) and the effect of removing semantic tokens from our proposed approach on both language modeling and speech reconstruction performance. The $\gamma$ parameter (as described in Section 3.3) for the proposed methods is fixed to 0.5. All models here were trained on the LibriSpeech dataset for lower computational cost. Table 4.
Results showing the impact of varying the $\gamma$ parameter (as described in Section 3.3) in our proposed approach on both language modeling and speech reconstruction performance. The $\beta$ parameter (as described in Section 3.3) is fixed to 0.04. M-MOS denotes the meaningfulness mean opinion score, and N-MOS denotes the naturalness mean opinion score, both presented with 95% confidence intervals. All models were trained on the Libri-light dataset. Table 5. Results of speech continuation evaluation for the comparison of different semantic token extraction methods detailed in Section 5.4. M-MOS and N-MOS refer to the meaningfulness and naturalness mean opinion scores, presented along with 95% confidence intervals. All models were trained on the Libri-light dataset. # 5.4. Generalization to Different Semantic Tokens In Section 5.1, we demonstrated the effectiveness of our proposed approach using semantic tokens derived from HuBERT representations. Here, we investigate its performance with an alternative approach to extracting semantic tokens, SpeechTokenizer (Zhang et al., 2024). SpeechTokenizer quantizes speech using Residual Vector Quantization (RVQ), which optimizes for reconstruction. However, its first-level RVQ tokens additionally minimize a distillation loss against HuBERT representations to encode content. We replace the semantic tokens in Token-LM with the first-level RVQ tokens from SpeechTokenizer, naming this new baseline SpeechTokenizer-LM. Our proposed method was similarly adapted to this new set of semantic tokens. For our experiments, we used the official SpeechTokenizer checkpoint.8 As shown in Table 5, our approach achieved superior naturalness and meaningfulness scores compared to SpeechTokenizer-LM. This verifies that our framework effectively enhances various approaches to extracting semantic tokens.
Flexibility with Different Decoders Additionally, for both SpeechTokenizer-LM and our proposed method, we did not adopt the diffusion decoder mentioned in Section 3.5. Instead, we predicted the remaining RVQ tokens from the semantic tokens (or the semantic tokens and variational features for our approach) and leveraged the pre-trained SpeechTokenizer decoder for speech reconstruction. As noted in Section 3.5, our training framework is adaptable and not tied to a specific decoder type; we adopted a diffusion-based decoder in the earlier experiments for simplified training and fair comparisons. The empirical results in Table 5 further validate this flexibility, as our model still achieves high human-evaluation MOS scores with a different decoder. # 6. Related Work Emerging speech language models typically use discrete semantic tokens for autoregressive modeling. These tokens are often obtained by $k$-means clustering of features extracted from self-supervised pre-trained models (Hsu et al., 2021; Chen et al., 2022). For instance, Lakhotia et al. (2021) used semantic tokens for generative spoken language modeling (GSLM). Subsequently, Kharitonov et al. (2022) enhanced this approach by incorporating pitch information alongside semantic tokens as joint inputs to the autoregressive model. Our proposed approach improves upon this line of research by using a variational autoencoder to automatically learn paralinguistic speech attributes in conjunction with the autoregressive model. Borsos et al. (2023) proposed a two-stage approach for the decoder that used acoustic tokens (Zeghidour et al., 2022; Défossez et al., 2023). This type of framework is also widely used in text-to-speech systems (Chen et al., 2025; 2024). In contrast, our approach focuses on the joint modeling of linguistic and paralinguistic features by enhancing the inputs to the autoregressive model rather than improving the decoder.
Recently, a line of research has emerged focusing on improving speech language models through the integration of text-based models. Hassid et al. (2023) initialized their speech language model using a pre-trained text-based large language model (LLM). Similarly, Rubenstein et al. (2023); Maiti et al. (2024) expanded the vocabulary of pre-trained text-based LLMs by integrating semantic tokens. Building on this, Yang et al. (2024); Du et al. (2024) further explored multi-task training involving text-conditioned generative speech tasks, combining text and audio within a single LLM. We note that our proposed approach takes a different direction but can still be integrated with these approaches. For example, one could initialize the transformer in our autoregressive model using parameters from a text-based LLM. Recent works (Défossez et al., 2024) incorporate discrete acoustic tokens directly into autoregressive modeling. However, these approaches often require complex designs, such as delay patterns and text-based pretraining. In Section 5, we demonstrate that directly incorporating acoustic tokens into autoregressive modeling significantly degrades the generation of linguistic content, while our method does not.
The success of large language models in text processing has inspired their adaptation to speech modeling. However, since speech is continuous and complex, it is often discretized for autoregressive modeling. Speech tokens derived from self-supervised models (known as semantic tokens) typically focus on the linguistic aspects of speech but neglect prosodic information. As a result, models trained on these tokens can generate speech with reduced naturalness. Existing approaches address this by adding pitch features to the semantic tokens. However, pitch alone cannot fully represent the range of paralinguistic attributes, and selecting the right features requires careful hand-engineering. To overcome this, we propose an end-to-end variational approach that automatically learns to encode these continuous speech attributes to enhance the semantic tokens. Our approach eliminates the need for manual extraction and selection of paralinguistic features. Moreover, it produces speech continuations that human raters prefer. Code, samples and models are available at https://github.com/b04901014/vae-gslm.
# 1. Introduction Quick adaptation is essential for survival in nature, so much so that it can be equated to intelligence (Sternberg, 2019). Despite this, there is no Artificial Intelligence (AI) system to this day whose adaptive abilities come anywhere close to those of humans and animals. Large Language Models (LLMs) like GPT and Gemini can do many tasks better than humans, but they lack the adaptivity of toddlers. We cannot simply teach such models new things just by talking or interacting with them. Hence, there are no simple ways to quickly fix the harmful behaviors that are reported on a daily basis. There is no magical tool to permanently add, remove, or modify the knowledge of LLMs. The most trusted way is to simply retrain the whole system, which is far too costly for fixing day-to-day mistakes. So the question remains: how can such models adapt quickly? Quick adaptation is important not only to reduce cost but also to improve the safety, security, and sustainability of AI. A lot has been done recently to instill adaptivity in large AI models, but the efforts are fragmented, with no apparent similarities in the methodologies used (Fig. 1). For example, Continual Learning methods adapt to streaming data over time (Kirkpatrick et al., 2017), while Federated Learning methods assimilate knowledge spread across space (McMahan et al., 2017). While both deal with distributed information, fundamentally different methodologies are preferred: for instance, replay of old data is common in continual learning (Rebuffi et al., 2017; Pan et al., 2020; Buzzega et al., 2020) but discouraged in federated learning because of privacy. Such differences can make it harder to connect the two subfields at a fundamental level. The same is true of other subfields, such as model merging and editing (Wortsman et al., 2022a,b; Ilharco et al., 2023; Mitchell et al., 2022), knowledge distillation (Hinton et al., 2015), fine-tuning (Hu et al., 2022), etc.
All of these efforts use adaptation to avoid retraining as much as possible, but they remain disconnected from each other. Our goal in this paper is to unify these approaches to unravel a common adaptation mechanism behind them. The underlying mechanism can then be used not only to explain the effectiveness of these approaches but also to find new ways to improve them. Figure 1: Four popular adaptation cases to adapt model parameters $\pmb{\theta}$. Black arrows indicate the flow of adapted knowledge, while the gray arrow indicates pre-training. Continual learning adapts $\pmb{\theta}_t$ to $\pmb{\theta}_{t+1}$ to include new data $\mathcal{D}_{t+1}$. Influence estimation gives a quick estimate of the ‘unlearned’ model obtained by removing the $i$’th data $\mathcal{D}_i$. Model Merging improves a pre-trained LLM $\pmb{\theta}_0$ by merging back the fine-tuned models $\pmb{\theta}_1$ and $\pmb{\theta}_2$. Finally, Federated Learning aims to obtain a joint model $\pmb{\theta}_{\mathrm{jnt}}$ by using locally trained models. # 2. Posterior Correction We present a new ‘posterior correction’ approach to unify existing adaptation schemes. The approach relies on a variational reformulation where the goal of learning is to find accurate approximations of the posterior distribution (Khan and Rue, 2023).
For example, an Empirical Risk Minimization (ERM) problem over model parameters $\pmb{\theta} \in \Theta$ is reformulated as Variational Learning (VL) over a candidate distribution $q(\pmb{\theta}) \in \mathcal{Q}$, that is, we reformulate $$ \pmb{\theta}_t = \arg\min_{\pmb{\theta} \in \Theta} \sum_{i=0}^{t} \ell_i(\pmb{\theta}) \qquad \mathrm{as} \qquad q_t = \arg\min_{q \in \mathcal{Q}} \sum_{i=1}^{t} \mathbb{E}_{q}[\ell_i] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, p_0]. $$ Here, ERM uses the losses $\ell_i$ defined over $t$ data examples for $i = 1, 2, \ldots, t$, as well as a regularizer $\ell_0$ which can either be added explicitly or implemented implicitly through an algorithm. In contrast, the VL problem uses expectations of the losses and a prior $p_0 \propto \exp(-\ell_0)$ that appears in the Kullback-Leibler (KL) divergence term. Despite these differences, the ERM solution $\pmb{\theta}_t$ can be recovered from VL by using a Gaussian $q_t$ and applying the delta method, as described in Sec. 3. We will use the above reformulation to derive adaptation strategies proposed for ERM as special cases of the posterior-correction approach for VL. We start with a simple case to set up the idea. Suppose we receive a new example $t+1$ with loss $\ell_{t+1}$; how can we quickly adapt $q_t$ to recover $q_{t+1}$? Bayes’ rule suggests using $q_t$ as the prior and updating the posterior as $p_{t+1} \propto e^{-\ell_{t+1}} q_t$. However, this is intractable unless the loss and prior form a conjugate pair, like Gaussians (Bishop, 2006, p. 117).
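As a toy 1-D illustration of Eq. 1 (my own sketch, not from the paper): for squared losses $\ell_i(\theta) = \tfrac{1}{2}(y_i - \theta)^2$ and a Gaussian prior, minimizing the VL objective over the mean of a fixed-variance Gaussian recovers the ERM minimizer exactly, because $\mathbb{E}_q[\ell_i]$ equals $\ell_i(m)$ plus a variance term that is constant in $m$.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.5, 1.0, size=20)     # data; l_i(theta) = 0.5 * (y_i - theta)**2
m0, s0sq = 0.0, 4.0                   # prior p0 = N(m0, s0^2), i.e. regularizer l_0

# ERM side of Eq. 1: closed-form minimizer of l_0 + sum_i l_i (quadratic objective).
theta_erm = (m0 / s0sq + y.sum()) / (1.0 / s0sq + len(y))

# VL side of Eq. 1 over q = N(m, v) with v held fixed: for quadratic losses,
# E_q[l_i] = 0.5 * ((y_i - m)**2 + v), so the v-terms are constant in m and
# plain gradient descent on the variational objective finds the same mean.
m = 0.0
for _ in range(2000):
    grad = (m - y).sum() + (m - m0) / s0sq   # d/dm of sum_i E_q[l_i] + KL[q || p0]
    m -= 0.01 * grad
# m now coincides with theta_erm (the delta-method view of Sec. 3).
```

This is the simplest instance of the claim that the ERM solution is recovered from VL; the delta method generalizes it beyond quadratic losses.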
A tractable alternative is to use VL to keep the update within the set $\mathcal{Q}$, $$ \hat{q}_{t+1} = \arg\min_{q \in \mathcal{Q}} \; \mathbb{E}_{q}[\ell_{t+1}] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, q_t]. $$ Unfortunately, this does not exactly recover the true $q_{t+1}$. We will now show that $q_{t+1}$ can be recovered exactly if we add a correction term to the above update. Table 1: Site functions. We denote $\mathbf{H}_{i|t} = \mathbb{E}_{q_t}[\nabla^2 \ell_i]$ and its diagonal by the vector $\bar{\mathbf{h}}_i$. Element-wise multiplication and division are denoted by $\mathbf{a} \cdot \mathbf{b}$ and $\mathbf{a}/\mathbf{b}$ respectively. Figure 2: The left panel compares the site $\hat{\ell}_{i|t}^{\mathrm{iso}}$ to the $1^{\text{st}}$-order Taylor expansion of $\ell_i$ at $\mathbf{m}_t$. The right panel compares $\hat{\ell}_{i|t}^{\mathrm{full}}$ to the $2^{\text{nd}}$-order Taylor expansion. The site functions use expectations of the gradients and Hessians over $q_t$ and capture more global information around $\mathbf{m}_t$. To correct the update, we will rely on a dual perspective proposed by Khan and Rue (2023, Sec. 5.4). They consider $\mathcal{Q}$ to be the set of minimal exponential-family distributions $q(\pmb{\theta}) = \exp(\langle \mathbf{T}(\pmb{\theta}), \pmb{\lambda} \rangle - A(\pmb{\lambda}))$ with sufficient statistics $\mathbf{T}(\pmb{\theta})$, natural parameter $\pmb{\lambda}$, log-partition function $A(\cdot)$, and inner product denoted by $\langle \cdot, \cdot \rangle$.
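In the conjugate Gaussian case, the update of Eq. 2 has a closed form, which makes for a quick sanity check. A 1-D sketch under an assumed squared loss (not the paper's code):

```python
import math

def vl_one_step(m_t, v_t, y_new):
    """One-step variational update of Eq. 2 for the conjugate 1-D case
    l_{t+1}(theta) = 0.5 * (y_new - theta)**2 with q_t = N(m_t, v_t).
    Because loss and prior are conjugate here, the update is exact."""
    prec = 1.0 / v_t + 1.0           # precisions add
    m = (m_t / v_t + y_new) / prec   # precision-weighted mean
    return m, 1.0 / prec

def vl_objective(m, v, m_t, v_t, y_new):
    """Analytic E_q[l_{t+1}] + KL[q || q_t] for q = N(m, v)."""
    e_loss = 0.5 * ((y_new - m) ** 2 + v)
    kl = 0.5 * (v / v_t + (m - m_t) ** 2 / v_t - 1.0 + math.log(v_t / v))
    return e_loss + kl
```

For $q_t = N(0, 1)$ and a new observation $y = 2$, the update gives $N(1, 0.5)$, and perturbing either parameter increases the objective, confirming the closed form.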
For such families, they show that any solution $q_t$ can be written as a product of local site functions $\hat{\ell}_{i|t}$, one for each $\ell_i$, $$ q_t(\pmb{\theta}) \propto \prod_{i=0}^{t} \exp\left(-\hat{\ell}_{i|t}(\pmb{\theta})\right), \quad \mathrm{where} \quad \hat{\ell}_{i|t}(\pmb{\theta}) = \langle \mathbf{T}(\pmb{\theta}), \widetilde{\nabla} \mathbb{E}_{q_t}[\ell_i] \rangle. $$ Here, $\widetilde{\nabla}$ denotes the natural gradient with respect to $\pmb{\lambda}$ evaluated at the natural parameter of $q_t$. A derivation is given in Sec. 4.1, where we also allow a non-constant base measure.1 Table 1 shows a few examples of the sites $\hat{\ell}_{i|t}$, where we see that the Gaussian sites resemble those obtained using Taylor’s method, but they use derivatives evaluated and averaged over samples from $q_t$; see also Fig. 2. The site functions contain more global information than Taylor’s surrogates, and they also apply more generally, for instance, to discontinuous loss functions and discrete variables. More details are included in Sec. 4.1. We use Eq. 3 to correct the update in Eq. 2.
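The difference between a site and a Taylor surrogate is easy to see numerically. For the iso (fixed-covariance Gaussian) family, the site of Eq. 3 is linear in $\theta$ with slope $\mathbb{E}_{q_t}[\nabla \ell_i]$, whereas the $1^{\text{st}}$-order Taylor surrogate has slope $\nabla \ell_i(\mathbf{m}_t)$. A sketch with an assumed quartic loss $\ell(\theta) = \theta^4$ (chosen for illustration; any non-quadratic loss separates the two):

```python
import numpy as np

rng = np.random.default_rng(0)
m_t, sigma = 1.0, 0.5                      # q_t = N(m_t, sigma^2), iso family
samples = rng.normal(m_t, sigma, size=200_000)

grad = lambda th: 4 * th ** 3              # gradient of l(theta) = theta**4

# Site slope (Eq. 3, iso case): expected gradient under q_t.
site_slope = grad(samples).mean()
# Taylor slope: gradient evaluated only at the mean m_t.
taylor_slope = grad(m_t)
```

Here the site slope is $\mathbb{E}[4\theta^3] = 4(m_t^3 + 3 m_t \sigma^2) = 7$, while the Taylor slope is $4 m_t^3 = 4$; the gap reflects the more global information the sites carry. For a quadratic loss the gradient is linear, so the two slopes coincide.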
In the first line below, we begin with the definition of $q_{t+1}$, then expand the KL term, where we divide and multiply by $q_t$ and its dual form respectively, and rearrange terms to get the correction shown in the last line: $$ \begin{aligned} q_{t+1} &= \arg\min_{q \in \mathcal{Q}} \; \mathbb{E}_{q}[\ell_{t+1}] + \sum_{i=1}^{t} \mathbb{E}_{q}[\ell_i] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, p_0] \\ &= \arg\min_{q \in \mathcal{Q}} \; \mathbb{E}_{q}[\ell_{t+1}] + \sum_{i=1}^{t} \mathbb{E}_{q}[\ell_i] + \mathbb{E}_{q}\left[ \log\left( \frac{q}{e^{-\ell_0}} \times \frac{\prod_{i=0}^{t} e^{-\hat{\ell}_{i|t}}}{q_t} \right) \right] \\ &= \arg\min_{q \in \mathcal{Q}} \; \mathbb{E}_{q}[\ell_{t+1}] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, q_t] + \sum_{i=0}^{t} \underbrace{\mathbb{E}_{q}[\ell_i - \hat{\ell}_{i|t}]}_{\mathrm{Correction}} \end{aligned} $$ Adding the third term corrects the update in Eq. 2 to ensure the recovery of the exact $q_{t+1}$ (instead of $\hat{q}_{t+1}$). Corrections are required for all the past $\ell_i$ as well as the regularizer $\ell_0$. Essentially, past $\ell_i$ are represented in $q_t$ through the sites $\hat{\ell}_{i|t}$ and, when a new $\ell_{t+1}$ is added, the representation needs to change to $\hat{\ell}_{i|t+1}$. The correction term gives a precise mathematical characterization of the interference created during adaptation, an age-old problem (Sutton, 1986) without a precise, agreed-upon mathematical definition.
Other views of posterior correction are discussed in Sec. 4.2, where it is used to derive a Bayes’ filter and a new definition of information gain, and is also interpreted as natural-gradient mismatch. In addition, a method to ‘boost’ training trajectories is also discussed. The main result of this paper is to show that many existing adaptation methods, tailored to handle specific adaptation cases, can all be seen as specific instances of posterior correction. Smaller corrections imply less interference, which can be handled quickly during adaptation. This key message is demonstrated by covering multiple adaptation scenarios, such as continual learning, influence estimation, model merging, and federated learning. # 3. Knowledge Adaptation as Posterior Correction To derive existing methods as posterior correction, we will reformulate an ERM problem as a VL problem. For instance, for a $q^{\mathrm{iso}}$ family, VL in Eq. 1 reduces to an ERM over the mean $\mathbf{m}$ by using the $1^{\text{st}}$-order delta method (Khan and Rue, 2023, App. C), $$ \mathbb{E}_{q}[\ell_i] \approx \ell_i(\mathbf{m}) \quad \implies \quad \mathbf{m}_t = \arg\min_{\mathbf{m} \in \Theta} \sum_{i=0}^{t} \ell_i(\mathbf{m}) + \mathrm{const.} $$ This is because the entropy of $q^{\mathrm{iso}}$ is constant; the KL term in VL therefore only contributes the $\ell_0$ term. This reformulation is suitable for ERMs that aim for $1^{\text{st}}$-order stationarity conditions, for instance, by using stochastic gradient descent. For algorithms that aim for $2^{\text{nd}}$-order optimality, we can use a $2^{\text{nd}}$-order delta method over a $q^{\mathrm{full}}$ family and assume that the covariance is set to the Hessian at the ERM solution $\mathbf{m}_t$, which corresponds to Laplace’s method (Khan and Rue, 2023, App. C.1).
Similarly, for methods such as Adam, SAM, or RMSprop, a $q^{\mathrm{diag}}$ family is suitable. The explicit choice not to optimize the covariance is made in non-Bayesian methods, and by making the same choice in the VL problem, we can recover them by solving a variational formulation. # 3.1 Continual Learning as Posterior Correction In continual learning, the goal is to quickly update $\pmb{\theta}_t$ to $\pmb{\theta}_{t+1}$ when a new loss $\ell_{t+1}$ becomes available. A popular strategy for this is to regularize the problem. Three kinds of regularization methods are commonly employed: (i) parameter-space regularization, (ii) function- or prediction-space regularization, and (iii) experience or memory replay. Here, we derive these as special cases of the regularization used for posterior correction in Eq. 4, denoted by $$ \mathcal{K}_t(q) = \mathbb{D}_{\mathrm{KL}}[q \,\|\, q_t] + \sum_{i=0}^{t} \mathbb{E}_{q}[\ell_i - \hat{\ell}_{i|t}]. $$ Here, the KL term performs the parameter-space regularization, while the correction term in the data space is related to the other two regularization methods. Posterior correction naturally combines these seemingly different approaches in one single method. Recovering parameter-space regularization methods is straightforward. For a Gaussian family, $\mathcal{K}_t$ without the correction term is simply variational continual learning. The KL term in such cases reduces to a quadratic regularizer, which recovers the popular Elastic Weight Consolidation (EWC) that uses $\frac{1}{2}(\pmb{\theta} - \pmb{\theta}_t)^\top \mathbf{H}_t (\pmb{\theta} - \pmb{\theta}_t)$ with a diagonal Hessian/Fisher $\mathbf{H}_t$ (Kirkpatrick et al., 2017).
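The EWC regularizer just mentioned is simple to write down. A minimal sketch (the function name and the `lam` strength parameter are illustrative, not from EWC's reference code):

```python
import numpy as np

def ewc_penalty(theta, theta_t, fisher_diag, lam=1.0):
    """Elastic Weight Consolidation regularizer:
    0.5 * lam * (theta - theta_t)^T diag(F) (theta - theta_t).
    fisher_diag holds per-parameter Fisher/Hessian estimates, so parameters
    that were important for past tasks are anchored more strongly."""
    d = theta - theta_t
    return 0.5 * lam * float(np.sum(fisher_diag * d * d))
```

The penalty vanishes at $\pmb{\theta}_t$ and grows quadratically in the directions that the diagonal Fisher marks as important, which is exactly the quadratic KL term above.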
By employing an arbitrary exponential family in the KL term, posterior correction can enable the generic parameter-space regularization commonly used for online learning (van der Hoeven et al., 2018). Next, we consider the function- or prediction-space regularizations. These can be recovered through the correction term. For example, consider a squared loss $$ \ell_i(\pmb{\theta}) = \mathcal{L}\left[ y_i, \hat{y}_i(\pmb{\theta}) \right] = \frac{1}{2} \| y_i - \hat{y}_i(\pmb{\theta}) \|^2 $$ over an input-output pair $(\mathbf{x}_i, y_i)$ given a linear predictor $\hat{y}_i(\pmb{\theta}) = \mathbf{x}_i^\top \pmb{\theta}$. For a $q^{\mathrm{iso}}$ family, the correction term simplifies to a ‘prediction matching’ between the new and old models, $$ \begin{aligned} \ell_i(\pmb{\theta}) - \hat{\ell}_{i|t}^{\mathrm{iso}}(\pmb{\theta}) &= \ell_i(\pmb{\theta}) - \pmb{\theta}^\top \mathbb{E}_{q_t}[\nabla \ell_i] \\ &= \frac{1}{2} \| y_i - \mathbf{x}_i^\top \pmb{\theta} \|^2 - \pmb{\theta}^\top \mathbf{x}_i (\mathbf{x}_i^\top \mathbf{m}_t - y_i) \\ &= \mathcal{L}\left[ \hat{y}_i(\mathbf{m}_t), \hat{y}_i(\pmb{\theta}) \right] + \mathrm{const.} \end{aligned} $$ This is the function-space regularization used by Benjamin et al. (2019) for neural networks. A visualization of corrections as prediction mismatch is shown in the left panel of Fig. 3. The right panel additionally compares the mismatches of two base models. The model with less mismatch is closer to the new model and can facilitate faster adaptation.
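The prediction-matching identity can be verified numerically: for a linear predictor, the corrected loss $\ell_i(\theta) - \hat{\ell}^{\mathrm{iso}}_{i|t}(\theta)$ differs from the prediction-matching term $\mathcal{L}[\hat{y}_i(\mathbf{m}_t), \hat{y}_i(\theta)]$ only by a $\theta$-independent constant. A short check on random data (my own sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
x, y = rng.normal(size=d), 0.7           # one past example (x_i, y_i)
m_t = rng.normal(size=d)                 # mean of q_t

loss = lambda th: 0.5 * (y - x @ th) ** 2
site = lambda th: th @ (x * (x @ m_t - y))        # iso site: theta^T E_{q_t}[grad l_i]
match = lambda th: 0.5 * (x @ m_t - x @ th) ** 2  # L[y_hat(m_t), y_hat(theta)]

# The gap (loss - site) - match should be the same constant for every theta.
gaps = [loss(th) - site(th) - match(th) for th in rng.normal(size=(5, d))]
```

Since $\nabla \ell_i$ is linear in $\theta$ for this model, $\mathbb{E}_{q_t}[\nabla \ell_i] = \mathbf{x}_i(\mathbf{x}_i^\top \mathbf{m}_t - y_i)$ holds exactly, which is what the `site` lambda uses.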
In general, this result can be extended to a general nonlinear model $f_i(\pmb{\theta})$ (such as a neural network) trained on a loss function $\mathcal{L}(y, \hat{y})$ derived from a Bregman divergence. As shown in Sec. 4.3, we can write the correction as a sum of a prediction-matching term and an error term arising from the non-linearity of the model, $$ \ell_i(\pmb{\theta}) - \hat{\ell}_{i|t}^{\mathrm{iso}}(\pmb{\theta}) \approx \underbrace{\mathcal{L}\left[ \hat{y}_{i|t}, \hat{y}_i(\pmb{\theta}) \right]}_{\mathrm{Pred.~matching}} + r_{i|t} \underbrace{\left[ f_i(\pmb{\theta}) - \hat{f}_i^{\mathrm{lin}}(\pmb{\theta}) \right]}_{\mathrm{Linearization~error}}. $$ Here, we denote by $\hat{y}_{i|t} = \mathbb{E}_{q_t}[\hat{y}_i(\pmb{\theta})]$ the average posterior prediction under $q_t$ and by $r_{i|t} = \hat{y}_{i|t} - y_i$ its residual. The term $\hat{f}_i^{\mathrm{lin}}(\pmb{\theta})$ is a $1^{\text{st}}$-order linearization of $f_i(\pmb{\theta})$ at $\mathbf{m}_t$. Figure 3: Illustration of correction as prediction mismatch for linear regression with the $q^{\mathrm{iso}}$ family. The old model is trained on the gray ‘o’ points, and the new model additionally includes the black ‘×’. Corrections are mismatches over old examples (dashed red lines). The right figure shows an additional model (in blue) with larger mismatches. The second term disappears when the model is linear, leading to a purely mismatch-based correction as in Eq. 7. The approximation is due to the delta method and can be ignored. The first term favors prediction matching between past and future, while the second term deals with the residuals from past mistakes.
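The decomposition can also be checked numerically in one dimension: with delta-method sites ($\hat{y}_{i|t} \approx \hat{y}_i(\mathbf{m}_t)$ and $\mathbb{E}_{q_t}[\nabla \ell_i] \approx \nabla \ell_i(\mathbf{m}_t)$), the difference between the corrected loss and the sum of the prediction-matching and linearization-error terms is constant in $\theta$. A sketch with an assumed model $f(\theta) = \sin\theta$ (my choice for illustration):

```python
import math

m, y = 0.8, 0.3                          # mean of q_t and the target y_i
f = math.sin                             # nonlinear model f_i(theta)
fp = math.cos                            # its derivative
r = f(m) - y                             # residual r_{i|t} under the delta method

loss = lambda th: 0.5 * (y - f(th)) ** 2
site = lambda th: th * fp(m) * r                   # iso site via 1st-order delta method
f_lin = lambda th: f(m) + fp(m) * (th - m)         # linearization of f at m
rhs = lambda th: 0.5 * (f(m) - f(th)) ** 2 + r * (f(th) - f_lin(th))

# (loss - site) - rhs should be the same constant for every theta.
gaps = [loss(th) - site(th) - rhs(th) for th in (0.1, 0.5, 1.3, 2.0)]
```

Dropping the error term and keeping only prediction matching is exactly what a function-space regularizer does; the constant gap confirms that, for this squared loss, nothing else is lost.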
The expression shows that mistakes are propagated due to the non-linearity of the model. It suggests that, for linear models, prediction matching is sufficient, but otherwise we need additional mechanisms to fix past mistakes. A straightforward approach to reduce this error is to use a better posterior form. For instance, when using the $q^{\mathrm{full}}$ family, $\hat{f}_i^{\mathrm{lin}}(\pmb\theta)$ is replaced by a better quadratic surrogate, as shown in Sec. 4.3, thereby reducing the error. In fact, for the squared loss, the correction term vanishes completely. The need for correction is reduced with a more flexible posterior, which can speed up the adaptation process. Flexible posteriors, however, can be costly; for example, computing full covariances is infeasible for large neural networks. The expression suggests a cheaper alternative, which is to simply use replay of past examples to improve continual learning. A similar approach is derived by Daxberger et al. (2023) by using the K-prior framework of Khan and Swaroop (2021), where the three types of regularization methods are combined. We show in Sec. 4.4 that the K-prior, which also involves prediction matching, is a special case of posterior correction with the $q^{\mathrm{iso}}$ family. Posterior correction generalizes the K-prior by allowing the use of an arbitrary posterior distribution. This extends prediction matching to other kinds of matching, such as gradient and Hessian matching. It also provides a simpler way to mix and match various regularization methods, as discussed in Sec. 4.4. In Sec.
4.4, we also briefly discuss connections to other approaches covered under the K-prior framework, such as knowledge distillation (Hinton et al., 2015), incremental Support Vector Machines (Cauwenberghs and Poggio, 2001; Liang and Li, 2009), Similarity Control (Vapnik and Izmailov, 2015), and memory-based continual learning (Lopez-Paz and Ranzato, 2017). Finally, we show an application to sequential Bayesian inference by deriving a method by Bui et al. (2017) for sequential Sparse Variational Gaussian Processes as a special instance of posterior correction.

# 3.2 Influence Estimation as Posterior Correction

Influence estimation aims to estimate the influence of data examples on the models trained over them. For example, given a model $\pmb{\theta}_t$ trained on $t$ data examples, we may want to estimate the new model $\pmb{\theta}_{t\backslash j}$ trained on all the data except the $j$'th example. Influence estimates provide a simple expression, often obtained using a Taylor approximation evaluated at $\pmb{\theta}_t$. These estimates can also be derived as different instances of posterior correction. Here, we will show two results, deriving TracIn (Pruthi et al., 2020) and the Influence Function (IF) (Jaeckel, 1972; Cook, 1977; Koh and Liang, 2017). We start by rewriting influence calculation as posterior correction. Taking a variational reformulation, we assume that $q_t$ is given and our goal is to estimate the posterior $q_{t\backslash j}$ trained on all examples except the $j$'th one. This is shown below in the first line, and the following lines are obtained similarly to Eq.
4 to get the correction,
$$ \begin{array}{rl} q_{t\backslash j} &= \arg\min_{q\in\mathcal{Q}} \; \mathbb{E}_q[-\ell_j] + \sum_{i=1}^{t} \mathbb{E}_q[\ell_i] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, p_0] \\ &= \arg\min_{q\in\mathcal{Q}} \; \mathbb{E}_q[-\ell_j] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, q_t] + \sum_{i=0}^{t} \mathbb{E}_q[\ell_i - \hat{\ell}_{i|t}] \\ &= \arg\min_{q\in\mathcal{Q}} \; \mathbb{E}_q[-\hat{\ell}_{j|t}] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, q_t] + \sum_{i=0:t,\, i\neq j} \mathbb{E}_q[\ell_i - \hat{\ell}_{i|t}] \end{array} $$
The second line characterizes the interference caused by the removal of $\ell_j$, and the third line rearranges the terms corresponding to the $j$'th example. The TracIn estimator uses the gradient $\nabla \ell_j(\pmb{\theta}_t)$ as the influence estimate.
This follows as a special case of the above by using the $q^{\mathrm{iso}}$ family and solving the following objective, where we ignore the correction term,
$$ \begin{array}{rl} q_{t\backslash j} &\approx \arg\min_{q\in\mathcal{Q}} \; \mathbb{E}_q[-\hat{\ell}_{j|t}] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, q_t] \\ \implies \mathbf{m}_{t\backslash j} &\approx \arg\min_{\mathbf{m}} \; \mathbf{m}^\top \mathbb{E}_{q_t}[-\nabla \ell_j] + \frac{1}{2}\|\mathbf{m} - \mathbf{m}_t\|^2 \\ \implies \mathbf{m}_{t\backslash j} &\approx \mathbf{m}_t + \nabla \ell_j(\mathbf{m}_t) \end{array} $$
where the last line is obtained by using the delta method to approximate $\mathbb{E}_{q_t}[\nabla \ell_j] \approx \nabla \ell_j(\mathbf{m}_t)$. Renaming $\mathbf{m}_t$ as $\pmb{\theta}_t$, we get the TracIn estimator $\pmb{\theta}_{t\backslash j} - \pmb{\theta}_t \approx \nabla \ell_j(\pmb{\theta}_t)$. Unlike TracIn, the IF uses a Newton step, which can be derived by approximating the correction term instead of ignoring it; it is therefore expected to be more accurate.
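The accuracy gap between the two estimators can be seen on a small ridge regression (a toy setup of my choosing, not from the paper). Because the loss is quadratic here, the second-order approximation of the correction is exact, so the Newton-style IF step recovers leave-one-out retraining exactly, while the plain TracIn gradient step does not:

```python
import numpy as np

X = np.array([[1., 0.], [0., 1.], [1., 1.]])   # 3 examples, 2 features
y = np.array([1., 2., 3.])
delta, j = 0.5, 2                               # ridge strength, example to remove

H_t = delta * np.eye(2) + X.T @ X               # Hessian of the full objective
m_t = np.linalg.solve(H_t, X.T @ y)             # model trained on all examples

# exact leave-one-out retraining
keep = [i for i in range(3) if i != j]
H_loo = delta * np.eye(2) + X[keep].T @ X[keep]
m_exact = np.linalg.solve(H_loo, X[keep].T @ y[keep])

g_j = X[j] * (X[j] @ m_t - y[j])                # grad of loss_j at m_t
tracin = m_t + g_j                              # TracIn: plain gradient step
inf_fn = m_t + np.linalg.solve(H_loo, g_j)      # IF: Newton step with H_{t\j}

print(np.abs(inf_fn - m_exact).max())           # ~0: IF is exact for quadratic losses
print(np.abs(tracin - m_exact).max())           # clearly nonzero
```

For non-quadratic losses the IF is no longer exact, but the same ordering of approximation quality is expected.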
Specifically, we approximate the correction for all examples $i = 1, 2, \ldots, t$ by using a $2^{\mathrm{nd}}$-order Taylor expansion of $\ell_i$ at $\mathbf{m}_t$, as shown below:
$$ \begin{array}{rl} \mathbb{E}_q[\ell_i - \hat{\ell}_{i|t}] &\approx \mathbb{E}_q\left[\pmb{\theta}^\top \nabla\ell_i(\mathbf{m}_t) + \frac{1}{2}(\pmb{\theta} - \mathbf{m}_t)^\top \nabla^2\ell_i(\mathbf{m}_t)(\pmb{\theta} - \mathbf{m}_t) - \pmb{\theta}^\top \mathbb{E}_{q_t}[\nabla\ell_i]\right] \\ &\approx \mathbb{E}_q\left[\frac{1}{2}(\pmb{\theta} - \mathbf{m}_t)^\top \nabla^2\ell_i(\mathbf{m}_t)(\pmb{\theta} - \mathbf{m}_t)\right] \\ &= \frac{1}{2}(\mathbf{m} - \mathbf{m}_t)^\top \nabla^2\ell_i(\mathbf{m}_t)(\mathbf{m} - \mathbf{m}_t) + \mathrm{const.} \end{array} $$
The second line is obtained again with $\mathbb{E}_{q_t}[\nabla\ell_i] \approx \nabla\ell_i(\mathbf{m}_t)$ and simplifying. Assuming a quadratic regularizer $\ell_0 = \frac{1}{2}\delta\|\pmb{\theta}\|^2$, we can also simplify the sum as shown in Eq. 35. With this, the influence procedure in Eq.
9 simplifies to
$$ \mathbf{m}_{t\backslash j} \approx \arg\min_{\mathbf{m}} \; -\mathbf{m}^\top \mathbb{E}_{q_t}[\nabla\ell_j] + \frac{1}{2}(\mathbf{m} - \mathbf{m}_t)^\top \mathbf{H}_{t\backslash j}(\mathbf{m} - \mathbf{m}_t) \;\approx\; \mathbf{m}_t + \mathbf{H}_{t\backslash j}^{-1}\nabla\ell_j(\mathbf{m}_t) $$
where $\mathbf{H}_{t\backslash j} = \sum_i \nabla^2\ell_i(\mathbf{m}_t)$ with the sum running over $i \neq j$. This is the IF for the removal of an example (Nickl et al., 2023, Eq. 2). The derivation clearly shows the approximations used by TracIn and IF: the former ignores the correction while the latter reduces it by using a Taylor approximation. As a result, we expect IF to be the better estimate. A recent proposal by Nickl et al. (2023), called the Memory-Perturbation Equation, generalizes influence estimation by using the Bayesian Learning Rule. This approach too can be seen as a special instance of posterior correction, where a generic exponential family is used but the correction term is ignored. Let us denote the natural parameters of $q_t$ and $q_{t\backslash j}$ by $\lambda_t$ and $\lambda_{t\backslash j}$, respectively. Then, by using the definition of $\hat{\ell}_{j|t}$ from Eq. 3 and ignoring the correction term, we can rewrite Eq. 9 as follows:
$$ \lambda_{t\backslash j} \approx \arg\min_{\pmb{\mu}} \; -\langle \pmb{\mu}, \widetilde{\nabla}\mathbb{E}_{q_t}[\ell_j] \rangle + \mathbb{D}_{\mathrm{KL}}[q \,\|\, q_t] \quad \implies \quad \lambda_{t\backslash j} \approx \lambda_t + \widetilde{\nabla}\mathbb{E}_{q_t}[\ell_j].
$$
The second expression is obtained by noting that the natural gradient of the KL term is simply $\lambda_{t\backslash j} - \lambda_t$ (Khan and Rue, 2023, Eq. 23). This is precisely the Memory-Perturbation Equation shown in Nickl et al. (2023, Eq. 6). Overall, these connections suggest two ways to improve influence estimation: we can either expand the class of posterior families or explicitly try to reduce the correction.

# 3.3 Model Merging as Posterior Correction

Model merging is a popular technique to improve the capabilities of LLMs without retraining them from scratch. Given a pre-trained model with parameters $\pmb{\theta}_{\mathrm{pre}}$ trained on a loss $\ell_{\mathrm{pre}}$, we 'fine-tune' models $\pmb{\theta}_i$ on individual tasks $\ell_i$. For instance, given an English-speaking LLM, we may fine-tune it on other languages and merge the resulting models to get a model that can also speak those languages. The merging is done via simple addition; for instance, Task Arithmetic (TA) (Ilharco et al., 2023) uses the following merging rule (with weights $\alpha_i > 0$):
$$ \pmb{\theta}_{\mathrm{ta}} = \pmb{\theta}_{\mathrm{pre}} + \sum_{i=1}^{t} \alpha_i (\pmb{\theta}_i - \pmb{\theta}_{\mathrm{pre}}), $$
which, despite its simplicity, works extremely well for reasons that remain unknown. In what follows, we will derive this as a special case of posterior correction with the family $q^{\mathrm{iso}}$. We will then show that a better approximation of the correction leads to better merging. As before, we start by writing merging in a variational formulation, denoting the pre-trained model by $q_{\mathrm{pre}}$; for instance, it can be a $q^{\mathrm{iso}}$ candidate with mean $\pmb{\theta}_{\mathrm{pre}}$.
Similarly, we denote by $q_i$ the model fine-tuned on $\ell_i$ but with prior $q_{\mathrm{pre}}$, that is, by minimizing $\mathbb{E}_q[\ell_i] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, q_{\mathrm{pre}}]$; we assume all $q_i$ to be in the same family as $q_{\mathrm{pre}}$. The KL term plays a role similar to the proximal term that is implicitly implemented via initialization. The goal is then to obtain the model that would be trained from scratch on all the data. A popular strategy for merging posteriors is to use the following rule:
$$ q_{\mathrm{ba}} = q_{\mathrm{pre}} \prod_{i=1}^{t} \left(\frac{q_i}{q_{\mathrm{pre}}}\right)^{\alpha_i}, $$
which we term 'Bayesian Arithmetic (BA)' to draw an analogy to Task Arithmetic: the two have similar forms, with the addition and subtraction in TA replaced by multiplication and division in BA. The above rule is commonly used in various distributed Bayesian computations, such as Bayesian data fusion (Mutambara, 1998; Durrant-Whyte, 2001; Wu et al., 2022), the Bayesian Committee Machine (Tresp, 2000), Consensus Monte Carlo (Scott et al., 2022), and approximate inference (Vehtari et al., 2020; Ashman et al., 2022). Clearly, posterior merging does not recover the exact posterior fitted jointly, and we will now apply posterior correction to write an exact expression for the error. We note that $q_i$ takes the dual form $q_i \propto q_{\mathrm{pre}} \exp(-\hat{\ell}_{i|i})$, where $\hat{\ell}_{i|i}$ is the site function for $\ell_i$ at $q_i$. Similarly, let $q_{\mathrm{pre}} \propto \exp(-\hat{\ell}_{\mathrm{pre}|\mathrm{pre}})$, where $\ell_{\mathrm{pre}}$ is assumed to contain the prior $p_0$ used for training from scratch.
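As a sanity check of the analogy (a minimal sketch; the means and weights below are arbitrary), BA computed in natural-parameter form with $q^{\mathrm{iso}}$ candidates $\mathcal{N}(\mathbf{m}, \mathbf{I})$ reduces exactly to the TA update on the means, since products and divisions of exponential-family densities become additions and subtractions of natural parameters:

```python
import numpy as np

def nat(m, P):            # Gaussian N(m, P^{-1}) -> natural params (P, P @ m)
    return P, P @ m

def ba_merge(pre, qs, alphas):
    # q_ba = q_pre * prod_i (q_i / q_pre)^alpha_i, done in natural parameters
    P, h = pre
    for a, (Pi, hi) in zip(alphas, qs):
        P = P + a * (Pi - pre[0])
        h = h + a * (hi - pre[1])
    return np.linalg.solve(P, h)   # mean of the merged Gaussian

I = np.eye(2)
m_pre = np.array([0.5, -1.0])
m1, m2 = np.array([1.0, 0.0]), np.array([0.0, 2.0])
alphas = [0.7, 0.4]

m_ba = ba_merge(nat(m_pre, I), [nat(m1, I), nat(m2, I)], alphas)
m_ta = m_pre + alphas[0] * (m1 - m_pre) + alphas[1] * (m2 - m_pre)
print(np.allclose(m_ba, m_ta))    # True: BA with q^iso reduces to TA
```

With non-identity precisions, the same `ba_merge` instead produces precision-weighted combinations, which is where BA and plain TA start to differ.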
With this, we can write the retraining of the model jointly on $\ell_{\mathrm{pre}}$, as well as all $\ell_i$, as follows,
$$ \begin{array}{rl} q_{\mathrm{retrained}} &= \arg\min_{q\in\mathcal{Q}} \; \mathbb{E}_q\left[\ell_{\mathrm{pre}} + \sum_{i=1}^{t} \alpha_i \ell_i\right] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, p_0] \\ &= \arg\min_{q\in\mathcal{Q}} \; \mathbb{D}_{\mathrm{KL}}[q \,\|\, q_{\mathrm{ba}}] + \sum_{i=1}^{t} \alpha_i \underbrace{\mathbb{E}_q[\ell_i - \hat{\ell}_{i|i}]}_{\mathrm{Task~correction}} + \underbrace{\mathbb{E}_q[\ell_{\mathrm{pre}} - \hat{\ell}_{\mathrm{pre}|\mathrm{pre}}]}_{\mathrm{LLM~correction}}. \end{array} $$
This is obtained in the same way as Eq. 4, but now using the dual forms of $q_{\mathrm{pre}}$ and all $q_i$. It is now easy to check that the TA model $\pmb{\theta}_{\mathrm{ta}}$ in Eq. 12 is obtained from $q_{\mathrm{ba}}$ by restricting all posteriors to the $q^{\mathrm{iso}}$ family, that is, $q_i = \mathcal{N}(\pmb{\theta}|\mathbf{m}_i, \mathbf{I})$. Then $\mathbf{m}_i = \pmb{\theta}_i$ if we estimate $q_i$ using the delta method, and the product of all the Gaussians simply gives the addition in Eq. 12. Therefore, TA can be seen as posterior correction with $q^{\mathrm{iso}}$ but ignoring all the correction terms in Eq. 14. To improve TA, we can approximate the correction terms, for example, by using a $2^{\mathrm{nd}}$-order Taylor expansion. This gives a better, Hessian-weighted version of TA. Essentially, we proceed similarly to Eq.
10 to get $$ \begin{array} { r l } & { \ell _ { i } ( \pmb \theta ) - \hat { \ell } _ { i | i } ( \pmb \theta ) \approx \frac { 1 } { 2 } ( \pmb \theta - \mathbf { m } _ { i } ) ^ { \top } \mathbf { H } _ { i } ( \pmb \theta - \mathbf { m } _ { i } ) } \\ & { \ell _ { \mathrm { p r e } } ( \pmb \theta ) - \hat { \ell } _ { \mathrm { p r e } | \mathrm { p r e } } ( \pmb \theta ) \approx \frac { 1 } { 2 } ( \pmb \theta - \mathbf { m } _ { \mathrm { p r e } } ) ^ { \top } \mathbf { H } _ { \mathrm { p r e } } ( \pmb \theta - \mathbf { m } _ { \mathrm { p r e } } ) } \end{array} $$ where $\mathbf { H } _ { i } = \nabla ^ { 2 } \ell _ { i } ( \mathbf { m } _ { i } )$ and $\mathbf { H } _ { \mathrm { p r e } } = \nabla ^ { 2 } \ell _ { \mathrm { p r e } } ( \mathbf { m } _ { \mathrm { p r e } } )$ . Plugging these in Eq. 14, we get the following estimate of the merged mean, as shown in App. B, $$ \mathbf { m } _ { \mathrm { h a } } = \mathbf { m } _ { \mathrm { p r e } } + \sum _ { i = 1 } ^ { t } \alpha _ { i } \mathbf { H } _ { \mathrm { h a } } ^ { - 1 } ( \mathbf { I } + \mathbf { H } _ { i } ) ( \mathbf { m } _ { i } - \mathbf { m } _ { \mathrm { p r e } } ) , $$ where $\begin{array} { r } { \mathbf { H } _ { \mathrm { h a } } = \mathbf { I } + \mathbf { H } _ { \mathrm { p r e } } + \sum _ { i } \alpha _ { i } \mathbf { H } _ { i } } \end{array}$ . This is exactly the update derived by Daheim et al. (2024, Eq. 12) by using gradient mismatch. In fact, it is easy to show that gradient mismatch is a special case of posterior correction. This is obtained by simply taking the derivatives of the correction term, $$ \nabla _ { \mathbf { m } } \mathbb { E } _ { q } [ \ell _ { i } - \hat { \ell } _ { i | t } ] = \mathbb { E } _ { q } \left[ \nabla \ell _ { i } \right] - \mathbb { E } _ { q _ { t } } [ \nabla \ell _ { i } ] \approx \nabla \ell _ { i } ( \mathbf { m } ) - \nabla \ell _ { i } ( \mathbf { m } _ { t } ) . 
$$
Posterior correction generalizes their approach and relaxes the assumptions they make (they assume a quadratic regularizer). As in the previous section, the result can also be obtained directly by expanding the posterior form from $q^{\mathrm{iso}}$ to $q^{\mathrm{full}}$.

# 3.4 Federated Learning as Posterior Correction

The goal of federated learning (McMahan et al., 2017) is to learn a joint model $\pmb{\theta}_{\mathrm{jnt}}$ at a server by communicating with local models $\pmb{\theta}_i$ for $i = 1, 2, \ldots, t$ trained separately at $t$ clients. The local models use local losses $\ell_i$ which are not accessible at the server. This restriction forces the server to aggregate the available information and iteratively arrive at the correct solution. We will now show that the methods used in federated learning can be seen as an iterative posterior correction that eventually drives the error to zero. This is unlike the methods discussed so far, which all use approximations for the correction term and therefore do not achieve perfect correction. Federated learning fixes this. To show this, we start with a variational formulation of joint-model learning and write the dual form of the solution using Eq. 3, $$ \begin{array} { r l } & { q _ { \mathrm { j n t } } = \arg \underset { q \in \mathcal { Q } } { \operatorname* { m i n } } ~ \mathbb { E } _ { q } \left[ \underset { i = 1 } { \overset { t } { \sum } } \ell _ { i } \right] + \mathbb { D } _ { \mathrm { K L } } [ q \parallel p _ { 0 } ] } \\ & { ~ \propto p _ { 0 } \displaystyle \prod _ { i = 1 } ^ { t } \exp \left( - \hat { \ell } _ { i | \mathrm { j n t } } \right) , ~ \mathrm { w h e r e } ~ \hat { \ell } _ { i | \mathrm { j n t } } ( \theta ) = \langle \mathbf { T } ( \theta ) , \widetilde { \nabla } \mathbb { E } _ { q _ { \mathrm { j n t } } } [ \ell _ { i } ] \rangle .
} \end{array} $$
Here, for simplicity, we have assumed that $q_{\mathrm{jnt}}$ and $p_0$ belong to the same exponential family, which implies $p_0 \propto \exp(-\hat{\ell}_{0|\mathrm{jnt}})$ and simplifies the discussion. However, the connection we discuss remains valid even when this condition does not hold. The dual form factorizes across local models, which suggests a natural adaptation strategy to exploit the structure. Since each local $\hat{\ell}_{i|\mathrm{jnt}}$ contributes a factor, we can learn a local model $q_i$ to estimate a local approximation $\hat{\ell}_{i|i}$ and gradually make it similar to the desired $\hat{\ell}_{i|\mathrm{jnt}}$. To formalize this, consider the following estimate of $q_{\mathrm{jnt}}$ obtained by using natural gradients at the local models $q_i$,
$$ \hat{q}_{\mathrm{jnt}} \propto p_0 \prod_{i=1}^{t} \exp\left(-\hat{\ell}_{i|i}\right), \quad \mathrm{where} \quad \hat{\ell}_{i|i}(\pmb{\theta}) = \langle \mathbf{T}(\pmb{\theta}), \widetilde{\nabla}\mathbb{E}_{q_i}[\ell_i] \rangle. $$
This differs from Eq. 18 because, instead of computing natural gradients at $q_{\mathrm{jnt}}$, here we compute them at the local $q_i$. We can then write the correction for the posterior $\hat{q}_{\mathrm{jnt}}$ by plugging it into the first line of Eq. 18 and simplifying similarly to Eq.
4,
$$ q_{\mathrm{jnt}} = \arg\min_{q\in\mathcal{Q}} \; \sum_{i=1}^{t} \mathbb{E}_q\left[\ell_i - \hat{\ell}_{i|i}\right] + \mathbb{D}_{\mathrm{KL}}[q \,\|\, \hat{q}_{\mathrm{jnt}}] \quad \implies \quad q_{\mathrm{jnt}} \propto \hat{q}_{\mathrm{jnt}} \prod_{i=1}^{t} \exp\left(\hat{\ell}_{i|i} - \hat{\ell}_{i|\mathrm{jnt}}\right). $$
A derivation of the second expression is included in App. C. The expression shows that a more accurate $\hat{q}_{\mathrm{jnt}}$ yields smaller corrections, and is therefore faster to adapt towards $q_{\mathrm{jnt}}$. The goal of the local $q_i$ should be to keep reducing the correction. One straightforward strategy is to locally minimize at each client $i$,
$$ q_i = \arg\min_{q_i\in\mathcal{Q}} \; \mathbb{E}_{q_i}\left[\ell_i - \hat{\ell}_{i|i}\right] + \mathbb{D}_{\mathrm{KL}}[q_i \,\|\, \hat{q}_{\mathrm{jnt}}]. $$
We can then iterate between Eq. 19 and Eq. 21. This strategy is exactly the Partitioned Variational Inference (PVI) method of Ashman et al. (2022); for a comparison, see the version discussed in Swaroop et al. (2025, Eq. 5). Essentially, by using Eq. 19, we can see that each local update of Eq. 21 simply replaces the old factor $\hat{\ell}_{i|i}$ in $\hat{q}_{\mathrm{jnt}}$ by a fresh one,
$$ q_i = \arg\min_{q_i\in\mathcal{Q}} \; \mathbb{E}_{q_i}\left[\ell_i + \sum_{j=1:t,\, j\neq i} \hat{\ell}_{j|j}\right] + \mathbb{D}_{\mathrm{KL}}[q_i \,\|\, p_0].
$$
Proceeding in this manner, if the algorithm converges, we recover the joint model both locally and globally, that is, $q_{\mathrm{jnt}} = \hat{q}_{\mathrm{jnt}} = q_i$ for all $i$. The PVI update is essentially a scheme to iteratively minimize the gradient mismatch, eventually driving it to 0. The connection ultimately shows that PVI aims for perfect posterior correction. There are better alternatives to PVI, and they too are connected to posterior correction. A recent work by Swaroop et al. (2025) connects PVI to the Federated Alternating Direction Method of Multipliers (ADMM) (Gabay and Mercier, 1976; Glowinski and Marroco, 1975) and its variants for federated deep learning, for example, FedADMM (Gong et al., 2022; Wang et al., 2022; Zhou and Li, 2023) and FedDyn (Acar et al., 2021). They modify the PVI update to include a learning rate $1/\gamma$ in front of the KL term in Eq. 21. This simple change makes it similar to the update of Tseng (1991) and improves convergence. The scheme simply replaces the stale $\hat{\ell}_{i|t}^{\mathrm{old}}$ by a moving average $\gamma \hat{\ell}_{i|t} + (1-\gamma) \hat{\ell}_{i|t}^{\mathrm{old}}$. Möllenhoff et al. (2025) further show that ADMM can also be derived as a special case of a more general Bayesian duality. Essentially, they replace the global update shown in Eq. 19 by the following, which employs a learning rate $\alpha = \gamma/(\gamma + t)$,
$$ \hat{q}_{\mathrm{jnt}} \propto \left[\prod_{i=1}^{t} q_i^{1/t}\right]^{\alpha} \left[p_0 \prod_{i=1}^{t} \exp\left(-\hat{\ell}_{i|i}\right)\right]^{(1-\alpha)} $$
This schedule is shown to work well for ADMM, and Möllenhoff et al. (2025) also show improvements over PVI. In our view, such updates further stabilize the local updates of the individual $\hat{\ell}_{i|t}$ and improve convergence.
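A minimal sketch of the PVI-style fixed point (a toy setup of my choosing: scalar quadratic client losses, $q^{\mathrm{iso}}$ sites obtained via the delta method, and sequential client sweeps, not the paper's experiments): at convergence, the server estimate matches the jointly trained model.

```python
import numpy as np

# toy federated setup: client i holds l_i(th) = 0.5 * a_i * (th - b_i)^2,
# prior p_0 = N(0, 1). The joint optimum solves m + sum_i a_i (m - b_i) = 0.
a = np.array([1.0, 2.0, 1.0])
b = np.array([0.0, 3.0, 1.0])
m_joint = (a * b).sum() / (1.0 + a.sum())

# q^iso sites via the delta method are linear: lhat_{i|i}(th) = g_i * th,
# with g_i the gradient of l_i at that client's current mean.
g = np.zeros(3)
for _ in range(200):                       # sequential PVI-style sweeps
    for i in range(3):
        # local step: min_m  l_i(m) + sum_{j != i} g_j * m + 0.5 * m^2
        others = g.sum() - g[i]
        m_i = (a[i] * b[i] - others) / (1.0 + a[i])
        g[i] = a[i] * (m_i - b[i])         # refresh client i's site

m_server = -g.sum()                        # mean of p_0 * prod_i exp(-g_i * th)
print(m_server, m_joint)
```

At the fixed point each site equals the gradient of its loss at the shared mean, so the stationarity condition of the joint problem is satisfied, i.e., the correction has been driven to zero.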
This connection also enables us to write posterior correction as optimizing a Lagrangian, and therefore to use algorithms similar to the well-known primal-dual methods; interested readers are encouraged to see Möllenhoff et al. (2025). These extensions bring new insight into the existing Bayesian literature (Vehtari et al., 2020; Ashman et al., 2022), which is currently missing a rigorous connection to decades-old work on convex duality and distributed optimization. Altogether, with such connections, posterior correction opens a new way to design better adaptive algorithms.

# 4. Method Details and Further Connections

In this section, we give further details on the dual perspective of the Bayesian Learning Rule (BLR), as well as alternative interpretations of posterior correction. Then, we discuss a few more details regarding the connections to continual learning methods. Specifically, we derive expressions for correction terms, derive K-priors as a special case, and discuss an application to online variational inference.

# 4.1 A Dual Perspective of The Bayesian Learning Rule (BLR)

Posterior correction is derived by using a dual perspective of the Bayesian Learning Rule (BLR) of Khan and Rue (2023). The perspective is similar to those used in the optimization literature, for example, the closed-circuit of Rockafellar (1967, Fig. 2), which also has parallels in the Bayesian literature, for example, the representer theorem of Kimeldorf and Wahba (1970). Both use 'dual' representations of solutions. The representer theorem in kernel methods generalizes them to a pure functional form by exploiting convex duality (Schölkopf et al., 2001). The dual perspective of the BLR extends such results to the VL problem. The form in Eq. 3 represents the solution $q_t$ as a product of multiple factors, traditionally called sites in the approximate Bayesian literature (Minka, 2001).
The expression is derived by using the expectation parameters $\pmb{\mu} = \mathbb{E}_q[\mathbf{T}(\pmb{\theta})]$. The pair $(\lambda, \pmb{\mu})$ lives in two spaces dual to each other, that is, they are connected by a bijective Legendre transform (Khan and Rue, 2023, Sec 2.2). Natural gradients are simply gradients in the $\pmb{\mu}$-space, which simplifies their computation. The dual form is obtained by simply setting the natural gradient of Eq. 1 to $0$,
$$ \sum_{i=0}^{t} \widetilde{\nabla}\mathbb{E}_{q_t}[\ell_i] - \widetilde{\nabla}\mathcal{H}(q_t) = 0 \implies \lambda_t = \sum_{i=0}^{t} \widetilde{\nabla}\mathbb{E}_{q_t}[-\ell_i] \implies q_t \propto \prod_{i=0}^{t} e^{-\langle \mathbf{T}(\pmb{\theta}), \widetilde{\nabla}\mathbb{E}_{q_t}[\ell_i] \rangle}. $$
The first expression follows by rewriting the KL, the second from the fact that $\widetilde{\nabla}\mathcal{H}(q_t) = -\lambda_t$, and the third by applying $\exp(\langle \mathbf{T}(\pmb{\theta}), \cdot \rangle)$ to both sides. For non-constant base measures $h(\pmb{\theta})$, only a small change is required in the second step due to the change in the natural gradient of the entropy term (Khan and Rue, 2023, App. B); we choose to handle it through the site of $\ell_0$. The derivation reveals the underlying dual structure: the left side of the second expression belongs to the $\lambda$-space, while the gradients on the right are computed in the dual $\pmb{\mu}$-space. This is akin to the dual structure used in Rockafellar (1967, Fig. 2) and other forms of representer theorems. A formal derivation can be done via a Lagrangian formulation, which shows that natural gradients can also be seen as Lagrange multipliers.
We skip this discussion as it is not our main focus in this paper, but interested readers may see similar formulations in Khan et al. (2013); Adam et al. (2021); Möllenhoff et al. (2025). As shown in Table 1, the site functions for Gaussians take a form similar to the surrogates obtained by using Taylor's method. We now derive some of those. We first consider $q_t^{\mathrm{iso}} = \mathcal{N}(\pmb{\theta}|\mathbf{m}_t, \mathbf{I})$, for which we have $\mathbf{T}(\pmb{\theta}) = \pmb{\theta}$ and $\pmb{\mu}_t = \mathbb{E}_{q_t}[\pmb{\theta}] = \mathbf{m}_t$. With this, we get
$$ \hat{\ell}_{i|t}^{\mathrm{iso}}(\pmb{\theta}) = \langle \pmb{\theta}, \nabla_{\mathbf{m}}\mathbb{E}_{q_t}[\ell_i] \rangle = \pmb{\theta}^\top \mathbb{E}_{q_t}[\nabla\ell_i], $$
where we pushed the gradient inside by using Bonnet's theorem (Bonnet, 1964; Khan and Rue, 2023). We can write a similar expression for $q^{\mathrm{full}} = \mathcal{N}(\pmb{\theta}|\mathbf{m}_t, \mathbf{S}_t^{-1})$, for which we have $\mathbf{T}(\pmb{\theta}) = (\pmb{\theta}, \pmb{\theta}\pmb{\theta}^\top)$.
Using this, we get $$ \begin{array} { r l } & { \hat { \ell } _ { i | t } ^ { \mathrm { f u l l } } ( \pmb \theta ) = \langle \mathbf T ( \pmb \theta ) , \widetilde { \nabla } \mathbb { E } _ { q _ { t } } [ \ell _ { i } ] \rangle } \\ & { \quad \quad \quad = \pmb \theta ^ { \top } \mathbb { E } _ { q _ { t } } \left[ \nabla \ell _ { i } - ( \nabla ^ { 2 } \ell _ { i } ) \mathbf m _ { t } \right] + \frac { 1 } { 2 } \pmb \theta ^ { \top } \mathbb { E } _ { q _ { t } } \left[ \nabla ^ { 2 } \ell _ { i } \right] \pmb \theta } \\ & { \quad \quad \quad = \pmb \theta ^ { \top } \mathbb { E } _ { q _ { t } } [ \nabla \ell _ { i } ] + \frac { 1 } { 2 } ( \pmb \theta - \mathbf m _ { t } ) ^ { \top } \mathbb { E } _ { q _ { t } } \left[ \nabla ^ { 2 } \ell _ { i } \right] ( \pmb \theta - \mathbf m _ { t } ) + \mathrm { c o n s t . } } \end{array} $$ where in the first line we used the expression for the natural gradients given in Khan and Rue (2023, Eq. 11). In the derivations above, we have ignored the constants but these expressions are essentially a result of a $1 ^ { \mathrm { s t } }$ -order Taylor expansion in the (lifted) $\pmb { \mu }$ -space used in the BLR during optimization. For instance, a BLR step uses the following $1 ^ { \mathrm { s t } }$ -order expansion of $\mathbb { E } _ { q } [ \ell _ { i } ]$ for an arbitrary $q$ with the parameter $\pmb { \mu }$ : $$ \begin{array} { r } { \mathbb { E } _ { q } [ \ell _ { i } ] \approx \mathbb { E } _ { q _ { t } } [ \ell _ { i } ] + \langle \pmb { \mu } - \pmb { \mu } _ { t } , \widetilde { \nabla } \mathbb { E } _ { q _ { t } } [ \ell _ { i } ] \rangle = \mathbb { E } _ { q } \left[ \langle \mathbf { T } ( \pmb { \theta } ) , \widetilde { \nabla } \mathbb { E } _ { q _ { t } } [ \ell _ { i } ] \rangle \right] + \mathrm { c o n s t . 
} } \end{array} $$
Such approximations in the $\pmb{\mu}$-space taken by the BLR yield the $1^{\mathrm{st}}$- and $2^{\mathrm{nd}}$-order expansions in the $\pmb{\theta}$-space as special cases.

# 4.2 Posterior Correction as Bayes' Filter and Natural-Gradient Mismatch

The posterior correction in Eq. 4 is derived using the variational form, but the update can also be written in a form similar to Bayes' rule. Essentially, by using the dual form of $q_{t+1}$, we can directly express it in terms of $q_t$ as follows,
$$ q_{t+1} \propto \left(e^{-\hat{\ell}_{t+1|t+1}}\right) \times q_t \times \prod_{i=0}^{t} \left(e^{-\hat{\ell}_{i|t+1} + \hat{\ell}_{i|t}}\right) $$
The recursive form is similar to Bayesian filtering, but there is an additional third term to account for the interference introduced in the past $\ell_i$ due to the inclusion of $\ell_{t+1}$ (a similar update for federated learning is in Eq. 20). Using this, we can also write the divergence to $q_{t+1}$ from $q_t$ by simply rearranging to get $\mathbb{E}_{q_{t+1}}[\log(q_{t+1}/q_t)]$,
$$ \mathbb{D}_{\mathrm{KL}}[q_{t+1} \,\|\, q_t] = \mathbb{E}_{q_{t+1}}\left[-\hat{\ell}_{t+1|t+1}\right] - \sum_{i=0}^{t} \mathbb{E}_{q_{t+1}}\left[\hat{\ell}_{i|t+1} - \hat{\ell}_{i|t}\right] + A(\lambda_t) - A(\lambda_{t+1}). $$
The last two terms account for the normalizing constants of the two distributions. This expression can be seen as an extension of the Information Gain (Lindley, 1956). The gain is defined for the exact posterior $p_t$ and, when updated to $p_{t+1}$ by using the new $\ell_{t+1}$, it is equal to $\mathbb{E}_{p_{t+1}}[-\ell_{t+1}]$. The first term in Eq.
28 is similar but uses the site instead of the loss. The second and third terms are due to the correction. To the best of our knowledge, no such closed-form expression exists to date to characterize the information gain of variational posteriors. For small gains, adaptation should be quick; large ones may take longer. This is how posterior correction quantifies the feasibility of quick adaptation. Another view of posterior correction is to see it as the natural-gradient mismatch. This is obtained by simply writing the natural parameters $\pmb{\lambda}_{t+1}$ of $q_{t+1}$ in Eq. 27, $$ \pmb{\lambda}_{t+1} = \widetilde{\nabla} \mathbb{E}_{q_{t+1}}[-\ell_{t+1}] + \pmb{\lambda}_t - \sum_{i=0}^{t} \underbrace{\left( \widetilde{\nabla} \mathbb{E}_{q_{t+1}}[\ell_i] - \widetilde{\nabla} \mathbb{E}_{q_t}[\ell_i] \right)}_{\mathrm{Mismatch}}. $$ The mismatch is yet another characterization of the interference. Posterior correction essentially replaces the “stale” natural gradients by fresh new ones. For example, the PVI algorithm discussed in Sec. 3.4 implements this exact operation for posterior correction. Finally, we discuss how to apply posterior correction during training. We consider a simple case where we want to boost the training of another model given a ‘checkpoint’ with a dual form $$ q_{\mathrm{chk}} \propto p_0 \prod_{i=1}^{t} \exp(-\hat{\ell}_{i \mid \mathrm{chk}}). $$ For simplicity, we assume that the training data for the checkpoint is the same as that of the training run we want to boost. We also assume that it involves the same prior $p_0$, which takes the same exponential form as the model. However, we note that the procedure described below works for much more general cases.
In fact, we can also use multiple checkpoints stored during training to construct a Bayesian arithmetic average, for instance, as shown in Eq. 13 and Eq. 23. The case below is chosen just for simplicity's sake. We can use $q_{\mathrm{chk}}$ to boost the training trajectories of the BLR, where we simply replace the prior $p_0$ by $q_{\mathrm{chk}}$ and correct the losses $\ell_i$ accordingly. This is shown below, where we simplify the BLR update of Khan and Rue (2023, Eq. 22) taken at $q_{\mathrm{old}}$, $$ \begin{array}{l} q_{\mathrm{new}} = \arg\min_{\pmb{\mu}} \; \pmb{\mu}^{\top} \widetilde{\nabla}\left( \sum_{i=1}^{t} \mathbb{E}_{q_{\pmb{\mu}}}[\ell_i] + \mathbb{D}_{\mathrm{KL}}[q_{\pmb{\mu}} \,\|\, p_0] \right) + \frac{1}{\rho} \mathbb{D}_{\mathrm{KL}}[q_{\pmb{\mu}} \,\|\, q_{\mathrm{old}}] \\ \quad\quad = \arg\min_{\pmb{\mu}} \; \sum_{i=1}^{t} \pmb{\mu}^{\top} \widetilde{\nabla} \mathbb{E}_{q_{\pmb{\mu}}}[\ell_i] + \frac{1}{\alpha} \mathbb{D}_{\mathrm{KL}}[q_{\pmb{\mu}} \,\|\, p_0^{\alpha} q_{\mathrm{old}}^{1-\alpha}] \\ \quad\quad = \arg\min_{\pmb{\mu}} \; \sum_{i=1}^{t} \pmb{\mu}^{\top} \widetilde{\nabla} \mathbb{E}_{q_{\pmb{\mu}}}[\ell_i - \hat{\ell}_{i \mid \mathrm{chk}}] + \frac{1}{\alpha} \mathbb{D}_{\mathrm{KL}}[q_{\pmb{\mu}} \,\|\, q_{\mathrm{chk}}^{\alpha} q_{\mathrm{old}}^{1-\alpha}]. \end{array} $$ The first line is obtained by simply plugging in the VL objective.
The second line is simplified by noting that the KL between $q_{\pmb{\mu}}$ and $p_0$ contains a conjugate term, so it can simply be taken out and merged with the last KL term by redefining $\alpha = \rho / (1 + \rho)$. An update of this form is in Khan and Rue (2023, Eq. 59), and a derivation for the conjugate prior case is given in Nickl et al. (2023, Eq. 26). The final step is obtained by noting that $\hat{\ell}_{\mathrm{chk}}$ is also of a conjugate form, so it can simply be inserted inside the first term. The derivation above shows the strength of posterior correction. Not only can it be applied to a trained model, it can also be used to boost training. The checkpoint need not even belong to a trained model. As long as we can express checkpoints in a dual form, we can use their stored knowledge to boost the training of other models. This particular feature makes posterior correction a fundamental mechanism to reuse and repurpose the knowledge learned during model training. # 4.3 Posterior Correction for Continual Learning We start with the derivation of Eq. 8, which shows that the correction term for continual learning can be written as a sum of two terms. The loss function considered is a Bregman divergence (or, equivalently, uses an exponential family) and takes the following form, $$ \mathcal{L}[y_i, \hat{y}_i(\pmb{\theta})] = -y_i f_i(\pmb{\theta}) + A(f_i(\pmb{\theta})) $$ where $A(\cdot)$ is the convex function that generates the Bregman divergence. A typical example is the cross-entropy loss, commonly used in multi-class classification with neural networks.
There, the model outputs (also called logits) are vectors of $f_i^k(\pmb{\theta})$ for each class $k$, the function $A(\cdot)$ is $\log \sum_{k=1}^{K} \exp(f_i^k(\pmb{\theta}))$, the log-sum-exp function over all $K$ classes, and the predictions are obtained by simply taking a softmax over $f_i^k(\pmb{\theta})$. For simplicity, we will present the derivation for a scalar $f_i(\pmb{\theta})$. For such loss functions, the correction term can be simplified as follows, $$ \begin{array}{rl} \ell_i(\pmb{\theta}) - \hat{\ell}_{i|t}^{\mathrm{iso}}(\pmb{\theta}) = \mathcal{L}[y_i, \hat{y}_i(\pmb{\theta})] - \pmb{\theta}^{\top} \mathbb{E}_{q_t}\left[ \nabla \mathcal{L}[y_i, \hat{y}_i] \right] & \\ = -y_i f_i(\pmb{\theta}) + A(f_i(\pmb{\theta})) - \pmb{\theta}^{\top} \mathbb{E}_{q_t}\left[ \nabla f_i(\pmb{\theta}) \left( \hat{y}_i(\pmb{\theta}) - y_i \right) \right] & \\ \approx -\hat{y}_{i|t} f_i(\pmb{\theta}) + A(f_i(\pmb{\theta})) + r_{i|t} f_i(\pmb{\theta}) - r_{i|t} \pmb{\theta}^{\top} \nabla f_i(\mathbf{m}_t) & \\ = \mathcal{L}\left[ \hat{y}_{i|t}, \hat{y}_i(\pmb{\theta}) \right] + r_{i|t} \left[ f_i(\pmb{\theta}) - \hat{f}_i^{\mathrm{lin}}(\pmb{\theta}) \right] + \mathrm{const.} & \end{array} $$ The first and second lines follow from the definition of the loss and the site. In the third line, we subtract and add $\hat{y}_{i|t} f_i(\pmb{\theta})$, then rearrange to write it in terms of the residual $r_{i|t} = \hat{y}_{i|t} - y_i$.
The approximation is due to the $1^{\mathrm{st}}$-order delta method used to simplify the expectation with respect to $q_t$ over $f_i(\pmb{\theta})$ in the last term. The final line is obtained by using $\hat{f}_i^{\mathrm{lin}}(\pmb{\theta}) = (\pmb{\theta} - \mathbf{m}_t)^{\top} \nabla f_i(\mathbf{m}_t)$ with a term that is constant with respect to $\pmb{\theta}$. The only assumption made is the delta method for the expectation, which might be useful for Bayesian cases but otherwise can be safely ignored. Other than that, the expression holds for general models, such as neural networks. The derivation directly extends to $q^{\mathrm{full}}$, where we get an additional quadratic term, $$ \begin{array}{rl} \ell_i(\pmb{\theta}) - \hat{\ell}_{i|t}^{\mathrm{full}}(\pmb{\theta}) = \ell_i(\pmb{\theta}) - \hat{\ell}_{i|t}^{\mathrm{iso}}(\pmb{\theta}) - \frac{1}{2} (\pmb{\theta} - \mathbf{m}_t)^{\top} \mathbf{H}_{i|t} (\pmb{\theta} - \mathbf{m}_t) & \\ \approx \mathcal{L}\left[ \hat{y}_{i|t}, \hat{y}_i(\pmb{\theta}) \right] + r_{i|t} \left[ f_i(\pmb{\theta}) - \hat{f}_i^{\mathrm{quad}}(\pmb{\theta}) \right] - \frac{1}{2} \beta_{i|t} \| \nabla f_i^{\mathrm{lin}} \|^2, & \end{array} $$ where we denote $\hat{f}_i^{\mathrm{quad}}(\pmb{\theta}) = \hat{f}_i^{\mathrm{lin}}(\pmb{\theta}) + \frac{1}{2} (\pmb{\theta} - \mathbf{m}_t)^{\top} \nabla^2 f_i(\mathbf{m}_t) (\pmb{\theta} - \mathbf{m}_t)$, and $\beta_{i|t} = \mathbb{E}_{q_t}[A''(f_i(\pmb{\theta}))]$. A derivation is in App.
A, which involves writing the Hessian of the loss in terms of the Generalized Gauss-Newton (GGN) matrix and the Hessian of the model output; applying the delta method then gives this approximation. The linearization error is now reduced due to the use of a better quadratic surrogate, as opposed to the linear surrogate used in Eq. 31. The last term further reduces the error and is due to the GGN matrix. We discuss some examples to illustrate the form of the correction. For linear regression, as we saw before in Eq. 7, for the $q^{\mathrm{iso}}$ family the correction is simply the prediction mismatch, but for $q^{\mathrm{full}}$ the correction entirely vanishes due to the GGN term, $$ \ell_i(\pmb{\theta}) - \hat{\ell}_{i|t}^{\mathrm{full}}(\pmb{\theta}) = \frac{1}{2} \| \mathbf{x}_i^{\top} \pmb{\theta} - \mathbf{x}_i^{\top} \mathbf{m}_t \|^2 - \frac{1}{2} \| (\pmb{\theta} - \mathbf{m}_t)^{\top} \mathbf{x}_i \|^2 = 0. $$ This makes sense because a full Gaussian posterior is the exact posterior and there is no need for a correction. Therefore, posterior correction simply reduces to Bayes’ rule. Consider another example for logistic regression, where $\hat{y}_i(\pmb{\theta}) = \sigma(\mathbf{x}_i^{\top} \pmb{\theta})$ with $\sigma(\cdot)$ denoting the sigmoid function.
The $q^{\mathrm{iso}}$ family yields the usual prediction-matching term, but the $q^{\mathrm{full}}$ family reduces this by comparing the linear outputs, $$ \ell_i(\pmb{\theta}) - \hat{\ell}_{i|t}^{\mathrm{full}}(\pmb{\theta}) = \mathcal{L}\left[ \sigma(\mathbf{x}_i^{\top} \mathbf{m}_t), \sigma(\mathbf{x}_i^{\top} \pmb{\theta}) \right] - \frac{1}{2} \beta_{i|t} \| \mathbf{x}_i^{\top} \pmb{\theta} - \mathbf{x}_i^{\top} \mathbf{m}_t \|^2 $$ The second term essentially attempts to improve the current posterior $q_t$ by removing the stale $\beta_{i|t}$ and replacing it with fresh ones obtained through the prediction-mismatch term. By using a more flexible posterior, we reduce the correction required for accurate adaptation. This idea was first used in K-priors by Khan and Swaroop (2021, Fig. 3a) through gradient matching, but it is also connected to PVI, where we replace old surrogates by new ones during federated learning. In general, all such procedures are generalized via posterior correction. We next show that the correction term generalizes K-priors to exponential-family posteriors. # 4.4 Knowledge-Adaptation Priors as Posterior Correction The prior shown in Eq. 6 is a generalization of the K-prior by Khan and Swaroop (2021). We will now show this for the K-prior presented in Khan and Swaroop (2021, Eq. 8) for a linear model $f_i(\pmb{\theta}) = \mathbf{x}_i^{\top} \pmb{\theta}$ with the Bregman loss $\ell_i(\pmb{\theta}) = \mathcal{L}(y_i, \hat{y}_i(\pmb{\theta}))$ and quadratic regularizer $\ell_0(\pmb{\theta}) = \frac{1}{2} \delta \| \pmb{\theta} \|^2$. We have already derived the correction terms for $i = 1, 2, \ldots, t$.
Therefore, we just need the correction term for $\ell_0$ and the KL term, which we derive below. The correction term for $\ell_0$ in the case of $q^{\mathrm{iso}}$ takes a bit more effort because the base measure is not constant: $\log h(\pmb{\theta}) = -\frac{1}{2} \| \pmb{\theta} \|^2 - \frac{1}{2} P \log(2\pi)$. For such a case, we use $$ \begin{array}{rl} \ell_0(\pmb{\theta}) - \hat{\ell}_{0|t}^{\mathrm{iso}}(\pmb{\theta}) = \ell_0(\pmb{\theta}) - \pmb{\theta}^{\top} \mathbb{E}_{q_t}[\nabla \ell_0 + \nabla \log h] + \log h(\pmb{\theta}) & \\ = \frac{1}{2} \delta \| \pmb{\theta} \|^2 - \delta \pmb{\theta}^{\top} \mathbf{m}_t + \pmb{\theta}^{\top} \mathbf{m}_t - \frac{1}{2} \| \pmb{\theta} \|^2 & \\ = \frac{1}{2} \delta \| \pmb{\theta} - \mathbf{m}_t \|^2 - \frac{1}{2} \| \pmb{\theta} - \mathbf{m}_t \|^2 + \mathrm{const.} & \end{array} $$ Adding this to the KL term, we get $$ \mathbb{E}_q[\ell_0 - \hat{\ell}_{0 \mid t}] + \mathbb{D}_{\mathrm{KL}}[q \parallel q_t] = \frac{1}{2} \delta \| \mathbf{m} - \mathbf{m}_t \|^2 + \mathrm{const.} $$ Substituting in Eq. 6, we obtain $$ \mathcal{K}_t^{\mathrm{iso}}(\mathbf{m}) = \sum_{i=1}^{t} \mathbb{E}_q\left( \mathcal{L}\left[ \hat{y}_{i|t}, \hat{y}_i(\pmb{\theta}) \right] \right) + \frac{\delta}{2} \| \mathbf{m} - \mathbf{m}_t \|^2 $$ which is exactly Khan and Swaroop (2021, Eq.
8) with respect to $\mathbf{m}$, if we use the delta method: $\hat{y}_{i|t} = \mathbb{E}_{q_t}[\hat{y}_i(\pmb{\theta})] \approx \hat{y}_i(\mathbf{m}_t)$ and $\mathbb{E}_q\left( \mathcal{L}\left[ \hat{y}_i(\mathbf{m}_t), \hat{y}_i(\pmb{\theta}) \right] \right) \approx \mathcal{L}[\hat{y}_i(\mathbf{m}_t), \hat{y}_i(\mathbf{m})]$. The prior in Eq. 6 extends K-priors to generic exponential-family posterior forms. We will call this prior the Variational K-prior. The new family of K-priors extends the prediction-matching idea to other types of matching. For instance, the derivatives of the correction term in Eq. 6 can be written as a gradient mismatch, as shown in Eq. 17. Similarly, if we use $q^{\mathrm{full}}$, the correction terms implement both gradient and Hessian matching. In general, such matching of predictions, gradients, or Hessians naturally emerges through the natural-gradient mismatch in Eq. 29. All we need to do is choose an appropriate family to match the desired natural gradients. The general form of the variational K-prior also makes it easy to mix and match different regularization methods. For instance, using the correction-term expression given in Eq. 32 for the $q^{\mathrm{full}}$ family, we can decide to store specific examples for memory replay. For instance, examples whose residuals $r_{i|t}$ are high can accumulate errors through the nonlinear term, so it is better to include them through replay. For the other examples, it might be enough to simply store a representation of the inputs.
Such a mixture would give rise to a memory set where we pick a set of examples for replay, $\mathcal{M}_{\mathrm{rep}}$, and another disjoint set for prediction matching, $\mathcal{M}_{\mathrm{pred}}$, and use the following correction term, $$ \sum_{i \in \mathcal{M}_{\mathrm{rep}}} \left( \ell_i - \hat{\ell}_{i|t} \right) + \sum_{j \in \mathcal{M}_{\mathrm{pred}}} \left( \mathcal{L}\left[ \hat{y}_{j|t}, \hat{y}_j(\pmb{\theta}) \right] - \frac{1}{2} \beta_{j|t} \| \nabla f_j^{\mathrm{lin}} \|^2 \right). $$ The two sets have to be disjoint so as to not double count the contribution of prediction matching. The benefit of using the second term above is that those examples do not need labels, and they can be summarized using arbitrary input locations, for instance, similar to a core-set or a set of inducing inputs. Daxberger et al. (2023) used a memory construction similar to the above for the K-prior to get a good improvement on the ImageNet dataset. Similarly, Pan et al. (2020) and Khan and Swaroop (2021) used a quantity similar to $\beta_{i|t}$ to pick examples to do prediction matching on. They all show consistent improvements and point to this as a viable option for designing better memory-replay methods. We now briefly discuss connections to other approaches covered under the K-prior framework. For example, Knowledge Distillation (KD) (Hinton et al., 2015) considers a teacher-student learning scenario which can also be written as a posterior correction.
The KD objective uses a convex combination with $\gamma \in (0, 1)$, which can be rewritten as, $$ \gamma \sum_{i=1}^{t} \mathcal{L}\left[ y_i, \hat{y}_i(\pmb{\theta}) \right] + (1 - \gamma) \sum_{i=1}^{t} \mathcal{L}\left[ \hat{y}_i(\pmb{\theta}_t), \hat{y}_i(\pmb{\theta}) \right] = \sum_{i=1}^{t} \mathcal{L}\left[ \hat{y}_i(\pmb{\theta}_t), \hat{y}_i(\pmb{\theta}) \right] + \gamma \sum_{i=1}^{t} r_i(\pmb{\theta}_t) f_i(\pmb{\theta}) + \mathrm{const.} $$ where $r_i(\pmb{\theta}_t) = \hat{y}_i(\pmb{\theta}_t) - y_i$ are the residuals of the teacher. Essentially, the parameter $\gamma$ is used to down-weight the mistakes made by the teacher. The objective shares similarities with Eq. 31, where we explicitly ‘correct’ the mistakes, not just down-weight them. In KD, we have the luxury of training on all examples, but it is not always possible to store the whole training set. Posterior correction can handle such cases, where we need to pay attention to specific mistakes of the teacher and ensure they do not get transferred to the student. Note that, typically, the architectures of the teacher and student are different, so one needs to choose an appropriate parameterization and divergence function to define a valid KL term. We do not go into the details of this since it is out of scope for our work. Posterior correction is also closely related to dual approaches in SVMs, such as incremental Support Vector Machines (Cauwenberghs and Poggio, 2001; Liang and Li, 2009), Similarity Control (Vapnik and Izmailov, 2015), and their extensions to neural networks, for example, Lopez-Paz and Ranzato (2017). The correction term can be written in a trust-region form with constraints $\mathbb{E}_q[\ell_i] = \mathbb{E}_q[\hat{\ell}_i]$, and the dual problem can be connected to posterior correction via a Lagrangian. The continual learning case can be written in a similar fashion.
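The KD rewriting above can be sanity-checked numerically. Below is a minimal sketch for the Gaussian instance of the Bregman loss, where $A(f) = f^2/2$ and hence $\hat{y} = f$; all values are toy choices, not from the paper:

```python
import numpy as np

# Check: gamma*L[y, yhat(theta)] + (1-gamma)*L[yhat(theta_t), yhat(theta)]
# equals L[yhat(theta_t), yhat(theta)] + gamma * r * f(theta) (up to const.),
# using the paper's Bregman form L[target, f] = -target*f + A(f).
rng = np.random.default_rng(0)
gamma = 0.3
y, f_teacher = 1.0, rng.normal()       # label and teacher output (yhat_t = f_t here)
A = lambda f: 0.5 * f * f              # Bregman generator, Gaussian case
L = lambda target, f: -target * f + A(f)
r = f_teacher - y                      # teacher residual r = yhat_t - y

f_student = rng.normal(size=5)         # a few student outputs f_i(theta)
kd = gamma * L(y, f_student) + (1 - gamma) * L(f_teacher, f_student)
split = L(f_teacher, f_student) + gamma * r * f_student
assert np.allclose(kd, split)          # convex combination = teacher matching
                                       # + residual-weighted linear term
```

In this Gaussian case the two sides agree exactly; for other Bregman generators they differ only by terms constant in $\pmb{\theta}$.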
# 4.5 Posterior Correction for Sequential and Online Variational Inference It is well known that the recursive elegance of Bayes’ rule is lost when approximations are used. The problem of an incorrect $\hat{q}_{t+1}$ when Eq. 2 is used occurs in almost all approximate inference problems. This is true even for the simplest variational (Bayesian) learning on conjugate models where mean-field approximations are used (Sato, 2001; Ghahramani and Attias, 2000; Honkela and Valpola, 2003; Hoffman et al., 2010; Wang et al., 2011), as well as for more recent variants (Broderick et al., 2013; Nguyen et al., 2018; Zeno et al., 2018; Chérief-Abdellatif et al., 2019; Jones et al., 2024). Other approximations also suffer from this issue, for example, those proposed in Opper and Winther (1999); Winther and Solla (1998); Heskes and Zoeter (2002); Csató and Opper (2002), among many others. We expect similar problems to exist even when amortized inference is used (Archer et al., 2015; Kim et al., 2020; Krishnan et al., 2017; Campbell et al., 2021). Posterior correction is a useful approach to fix such issues in sequential and online variational inference. As an example, we show that an existing method of Bui et al. (2017) is in fact an instance of posterior correction for Sparse Variational Gaussian Processes (SVGP). The method attempts to handle the challenging case of updating both the inducing inputs as well as the hyperparameters, but we will consider a simpler version to show the similarities to posterior correction. SVGP uses a set of inducing variables, denoted by $\mathbf{u}$, to model the function $f$. To keep things simple, we will use notation that fits in our framework. We will denote the posterior by $q(\mathbf{u}, f)$ and the prior by $p_0^{\delta}(\mathbf{u}, f)$, where $\delta$ is the set of kernel hyperparameters. We will denote the negative log-likelihoods by $\ell_t(f)$, which are assumed to be Gaussian.
With this notation, we assume that $q_t$ and $\delta_t$ are given, and our goal is to update to $q_{t+1}$ as well as $\delta_{t+1}$, while the inducing variables $\mathbf{u}$ are assumed to remain at the same locations. Because the model is conjugate, the sites are not approximate, that is, $\hat{\ell}_{i|t} = \ell_i$. With this, we can write the following (denoting the normalizing constant by $\mathcal{Z}_t(\delta_t)$), $$ q_t = \frac{1}{\mathcal{Z}_t(\delta_t)} p_0^{\delta_t} \prod_{i=1}^{t} \exp(-\ell_i) $$ The update to $q_{t+1}$ can then be expressed as a posterior correction, $$ \begin{array}{rl} q_{t+1} & = \arg\min_{q \in \mathcal{Q}} \; \sum_{i=1}^{t+1} \mathbb{E}_q[\ell_i] + \mathbb{D}_{\mathrm{KL}}[q \parallel p_0^{\delta}] + \log \mathcal{Z}_{t+1}(\delta) \\ & = \arg\min_{q \in \mathcal{Q}} \; \mathbb{E}_q[\ell_{t+1}] + \mathbb{D}_{\mathrm{KL}}[q \parallel q_t] + \mathbb{E}_q[\ell_0^{\delta} - \ell_0^{\delta_t}] + \log \frac{\mathcal{Z}_{t+1}(\delta)}{\mathcal{Z}_t(\delta_t)} \\ & = \arg\min_{q \in \mathcal{Q}} \; \log \frac{\mathcal{Z}_{t+1}(\delta)}{\mathcal{Z}_t(\delta_t)} + \mathbb{E}_q\left[ \log\left( \frac{p_0^{\delta_t}}{p_0^{\delta}} \, \frac{q}{q_t} \, \frac{1}{e^{-\ell_{t+1}}} \right) \right] \end{array} $$ where in the first line we added the explicit normalizing constant to show the dependency on $\delta$. The last line is written to show the equivalence to Bui et al. (2017, Eq. 5, $2^{\mathrm{nd}}$ line).
A modification to handle non-conjugate likelihoods is straightforwardly obtained by adding back the corrections for all the past $\ell_i$. The procedure can also benefit from adding memory replay, similarly to the previous sections. A recent extension by Chang et al. (2023) explores such directions using the dual representations and finds good improvements. Additionally, a procedure by Adam et al. (2021) is also a special instance of posterior correction, used to speed up hyperparameter learning in GPs.
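The conjugate case discussed above, where the sites are exact and the Bayes'-filter recursion needs no correction term, can be verified numerically. A minimal sketch for Gaussian linear regression with unit noise and a standard-normal prior (toy data, not from the paper): the recursive one-observation-at-a-time posterior matches the batch posterior exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
P, T = 2, 6
X = rng.normal(size=(T, P))            # toy inputs
y = rng.normal(size=T)                 # toy targets
prior_prec = np.eye(P)                 # p0 = N(0, I)

# Batch posterior (precision and mean) for the Gaussian likelihood
S_batch = prior_prec + X.T @ X
m_batch = np.linalg.solve(S_batch, X.T @ y)

# Recursive (Bayes' filter) updates: fold in one observation at a time;
# no correction of past sites is ever needed in the conjugate case.
S, b = prior_prec.copy(), np.zeros(P)
for t in range(T):
    S += np.outer(X[t], X[t])
    b += y[t] * X[t]
m_rec = np.linalg.solve(S, b)

assert np.allclose(S, S_batch)
assert np.allclose(m_rec, m_batch)
```

With a restricted family (e.g., diagonal covariance) the two would no longer agree, which is exactly the interference that the correction term accounts for.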
Adaptation is the holy grail of intelligence, but even the best AI models (like GPT) lack the adaptivity of toddlers. So the question remains: how can machines adapt quickly? Despite a lot of progress on model adaptation to facilitate continual and federated learning, as well as model merging, editing, unlearning, etc., little is known about the mechanisms by which machines can naturally learn to adapt in a similar way to humans and animals. Here, we show that all such adaptation methods can be seen as different ways of ‘correcting’ the approximate posteriors. More accurate posteriors lead to smaller corrections, which in turn imply quicker adaptation. The result is obtained by using a dual perspective of the Bayesian Learning Rule of Khan and Rue (2023), where the interference created during adaptation is characterized by the natural-gradient mismatch over the past data. We present many examples to demonstrate the use of posterior correction as a natural mechanism for machines to learn to adapt quickly.
# 1 INTRODUCTION The relational model, proposed over 50 years ago, has been the foundation of most high-performance database systems to date. The object-relational model, i.e., the relational model with abstract data types and related functionality, has generally satisfied the needs of most enterprise applications without being overly complicated. A number of higher-level semantic data models have been proposed over the years [29], including by Ted Codd himself [12], to better support complex data types, inheritance hierarchies, richer integrity constraints, etc. However, none have gained significant traction, although several have significantly influenced the refinements to the relational model [20, 44] (we cover some of this work in Section 7). As articulated by Stonebraker et al. [39, 40], this was primarily because the newer data models did not offer significant improvements or benefits to users, and also because the most important features were easy to incorporate into the relational model. # 1.1 Why Now? However, we argue that database systems are at a pivotal juncture: they need to decide whether to raise the abstraction level they offer and move closer to the users and the application developers. Failing that, we believe that database systems will be increasingly relegated to serving as backend storage systems, with most of the application logic and user interactions served by layers on top. We won’t rehash all the reasons that have been laid forth in the past for richer data models, but make a few observations based on recent developments: (1) Unwieldy and hard-to-understand schemas: First, relational schemas in practice are often large, unintuitive, and hard to understand, making it difficult for new or even experienced users to construct and verify SQL queries for a given task.
The RDBMSs often lack sufficient context around the attributes, and the documentation explaining the schema (usually maintained externally) is rarely kept up to date; foreign keys and other constraints, intended to help incorporate structure into the schema, are often not used properly and systematically. A large part of the reason for the above is schema “decay” [38] arising out of small incremental changes made to the original normalized schema (often constructed from an E/R diagram). The basic relational model is not sufficiently “opinionated” and easily permits schema changes that, over time, allow the schema to stray far from the principled normal forms. As a separate challenge, database systems today do not support schema evolution natively [5, 8, 13], requiring users to build significant scaffolding on top to handle schema changes and the resulting data migrations. Increased use of large language models for coding and data analysis will likely make this problem more challenging as the focus increasingly shifts toward understanding SQL queries rather than crafting them. It is difficult to understand and verify multiway join queries over the large number of tables typically found in real-world databases. Higher-level abstractions can help mitigate this problem, both for the LLMs and the users [46]. (2) Data governance and compliance: Second, data governance and compliance issues are an increasing concern for many enterprises. Although it took a back seat over the last few years, compliance with privacy or AI regulations like GDPR, CCPA, etc., increasingly requires a more careful accounting and handling of personal data [1, 15, 22].
In addition to better understanding and tagging the data being collected, compliance often also requires fine-grained access control and the ability to delete the data of specific individuals, both of which are fundamentally entity-centric operations, i.e., operations that require reasoning about all the data related to an entity (a person or an organization) as a whole. These tasks are challenging to do in a verifiable manner for normalized relational schemas, where personal data may be spread across many tables, often without the foreign keys to help link the data. The common workarounds today include external metadata managers that typically support more user-friendly abstractions, but are difficult to set up and maintain, and are not sufficiently integrated with the database or the application code. (3) Impedance mismatch: Third, due to the impedance mismatch between the relational model and the entity-centric nature of application code, most applications use some sort of a data layer on top of the database, typically an object-relational mapper (ORM) or an API service. The ORMs typically expose an abstraction that is very similar to the entity-relationship model, often borrowing the same terminology [27], whereas the API services typically expose hierarchical views on top of the database. In either case, the intermediate layers are responsible for translating the higher-level abstractions (e.g., GraphQL) into SQL. This not only leads to significant duplication of effort, but the mappings across the layers can be difficult to maintain and can result in data inconsistencies [3]. Furthermore, most RDBMSs today support much of the required functionality, including the ability to generate nested outputs in the SELECT clause (e.g., PostGraphile can compile a highly nested GraphQL query into a single PostgreSQL query).
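To make the translation work of such a data layer concrete, here is a minimal sketch using Python's stdlib sqlite3 and hypothetical author/post tables: it issues a single join query and re-nests the flat rows into the hierarchical, per-entity shape that an API layer such as GraphQL would return.

```python
import sqlite3

# Toy schema (hypothetical): authors with posts, queried through one join
# and re-nested in the data layer -- the work ORMs and API services do.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE author(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post(id INTEGER PRIMARY KEY,
                      author_id INTEGER REFERENCES author(id), title TEXT);
    INSERT INTO author VALUES (1, 'Ada'), (2, 'Ted');
    INSERT INTO post VALUES (10, 1, 'Notes'), (11, 1, 'More notes'),
                            (12, 2, 'Models');
""")

rows = con.execute("""
    SELECT a.id, a.name, p.title FROM author a
    JOIN post p ON p.author_id = a.id ORDER BY a.id, p.id
""").fetchall()

nested = {}
for aid, name, title in rows:          # re-nest flat join rows per entity
    nested.setdefault(aid, {"name": name, "posts": []})["posts"].append(title)

assert nested[1]["posts"] == ["Notes", "More notes"]
assert nested[2] == {"name": "Ted", "posts": ["Models"]}
```

The re-nesting step is exactly the mapping that has to be maintained by hand (or by an ORM) today, and that a higher-level abstraction could produce directly.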
(4) Adoption of JSON and Property Graph Models: For similar reasons, other data models like documents (XML, JSON), property graphs, and RDF have seen much work and some commercial success over the last decade. Although there exist workloads that are specifically suited to those models (e.g., graph analytics tasks such as community detection), in most cases, the data and query workloads for those databases are very similar to relational database workloads; in several cases, these databases are built as layers on top of a relational database. In our opinion, unifying these models is a pressing challenge for our community, as it has led to significant and unnecessary fragmentation and duplication of effort, both on the user-facing end as well as the backend storage and query execution layers. (5) Lack of logical independence: Finally, we are leaving a lot of performance on the table by having the user-facing abstraction so closely tied to the physical layout of the data. Relational databases have exploited physical data independence to dramatically improve performance and adapt to evolving hardware patterns as well as parallel and distributed environments. However, the low-level nature of the relational abstraction leads to much lower logical independence. Decisions about how to represent relationships, what to do with multi-valued attributes, how to handle inheritance, etc., need to be made when designing the schema, and can have a significant impact on performance. These decisions can and should be auto-tuned, i.e., made based on the workload patterns and requirements. In theory, views can be used to achieve logical data independence, but they require significant expertise to use correctly, and it can be difficult to optimize queries in their presence. We present several illustrative experiments in Section 6 to highlight the potential performance benefits of increased logical data independence. # 1.2 Why E/R?
Given these reasons, we argue that more effort should be spent in designing and supporting a higher-level abstraction natively in database systems. We specifically advocate for the familiar (extended) entity-relationship abstraction [11], which is already widely used for initial schema design and may be considered richer than document models or the property graph model for most use cases (assuming schemas are enforced). Similar to the document models, the E/R model supports fixed-depth hierarchical/nested data (through the use of composite attributes) as well as arrays (through the use of multi-valued attributes); it also inherently supports relationships, which are a key missing element in the document data model. The E/R model does require a more rigid schema design and doesn't support arbitrary nesting. However, schema-less or schema-flexible designs shift the burden of managing the schema to the developer, moving significant complexity and business logic (e.g., constraints) to the application code. To the extent that those features are actually used in practice, they can be supported through the use of a JSON type. At the same time, we believe that the E/R model lends itself to easier schema evolution and management, mitigating the need for schema flexibility that is often cited as a reason for using those systems. Property graph databases with schemas also naturally map to the E/R model; in fact, the recent proposal for property graph schemas [2] uses the E/R abstraction as a starting point, and we don't see strong distinctions between the two (the distinctions mentioned in related work often trace back to incomplete development of the E/R model as a practical data model). Similarly, although we use a relational backend (PostgreSQL) in our prototype, a property graph database system is perhaps a more natural backend alternative for the E/R model. We are cognizant of many failed now-is-the-time attempts to move away from the relational model in the past.
However, we believe a large part of that may be a “self-fulfilling prophecy”. There hasn't been a concerted effort to provide a combination of a more abstract data model and high performance, so users are left choosing between the two (and they often choose the more user-friendly model first, and reluctantly switch to the higher-performance option when scale becomes an issue). We point to the prevalent and increasing use, over the last decade or two, of: (a) ORMs for web development, (b) API services (e.g., PostgREST), (c) document, property graph, or graph-relational models, (d) hierarchical storage formats like Parquet and Avro that have become the dominant storage formats in data lakes, and (e) query and analysis engines like Apache DataFusion or DuckDB that provide SQL-like interfaces on top of those formats. Performance is often cited as a reason to avoid higher-level abstractions. Drawing an analogy to the 70s, when similar criticisms were made of the relational model, we believe that the increased optimization opportunities due to the higher-level abstraction will ultimately lead to better performance. There are a number of engineering challenges that we would need to address, but we don't see any fundamental reasons why those would be insurmountable. On the OLTP side, two of the key challenges are: (a) handling complex objects with nesting and arrays, and (b) the possibility that a single update may require updating multiple tables (depending on the mapping of the E/R model to the physical storage). For the former, serialization formats like protocol buffers support nesting and arrays, and can be used for client-server communication without additional overhead, whereas binary formats like BSON are already widely used for storage. The latter challenge may require the development of new transaction processing techniques, and may not be achievable by building on top of an existing system (as we do in our prototype at this time).
For OLAP workloads, data warehouses or lakehouses already support hierarchical storage formats, and the success of Parquet, DataFusion, etc., suggests that the performance issues can be ameliorated. However, inheritance hierarchies pose a major challenge, as they may result in a large number of left outer joins [23] if the E/R model is implemented on top of a relational database (or, we suspect, property graph databases). We believe this necessitates the development of new storage layouts and query processing techniques, and forms one of the more interesting research questions in this area.

Figure 1: (i) Example of an E/R model with one weak entity set and two subclasses (adapted from [35]); (ii) DDL to create entities and relationships; (iii) An example query for illustration purposes.

To explore these challenges in more depth, we are building a prototype that supports the entity-relationship model as the primary data model, and an SQL-like query language against that model; we share some of the reservations about SQL from recent work [6, 24, 34], and believe that a functional transformation-based query language is easier to understand and offers more flexibility and extensibility, especially when working with richer data models, and that queries written in such languages likely require fewer changes when schemas are modified. However, we leave a deeper investigation of that to future work. In the rest of the paper, we elaborate on some of the design aspects of the system, and discuss the spectrum of storage formats that we believe should be supported in the backend (including a compressed multi-relation representation). We then present a systematic way to explore the optimization opportunities enabled by the increased logical data independence. We also plan to support schema evolution and versioning natively in our system, but we omit a detailed discussion since that's not the focus of this paper.
# 2 ABSTRACTIONS

We propose using the standard (extended) E/R model as the starting point due to its familiarity as well as its widespread use as the conceptual model for schema design. It is also aligned with the abstraction supported by many ORMs (e.g., Django) and similar systems like EdgeDB. Since the E/R model supports modeling of hierarchical information (through composite and multi-valued attributes), it also naturally encompasses the key elements of the more flexible models like property graphs and hierarchical documents, as discussed earlier. The E/R model doesn't inherently support the notion of an ordered list, but that can either be added explicitly or handled through the use of a positional attribute as needed. Figure 1 shows an example of an E/R model (adapted from the running example from [35]) that contains one weak entity set, two subclasses (specialization), and various types of relationships (with annotations to indicate the cardinality and participation constraints). The figure also shows the syntax for defining the entities and relationships in our prototype, including the ability to directly define composite attributes (which would require the use of a separate type in a typical RDBMS) as well as multi-valued attributes (technically not allowed in the relational model, but supported by most RDBMSs today). In addition, the DDL should support: (a) defining constraints, (b) specifying the inheritance properties (total vs partial, disjoint vs overlapping, etc.), and (c) adding descriptive text (that can be automatically used, e.g., for creating API documentation). There is much work on (a) and (b) that we plan to build upon. For querying, we use a variant of SQL, with an example shown in Figure 1. The two main additions to standard SQL that we support are:
• The ability to specify a relationship when joining two relations, in addition to standard WHERE and ON clauses.
• The ability to construct hierarchical outputs in the SELECT clause.
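As a concrete illustration of why native support for hierarchical outputs matters, the aggregate workaround available in today's systems can be sketched as follows (a minimal sketch using Python's bundled sqlite3, with json_group_array standing in for array_agg; the course/section schema is a toy example, not ErbiumDB's syntax):

```python
import json
import sqlite3

# Toy schema: one course with two sections; we want one output row per
# course, with its sections nested as an array.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE course(cid INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE section(cid INTEGER, sec_id INTEGER);
INSERT INTO course VALUES (1, 'Databases');
INSERT INTO section VALUES (1, 10), (1, 20);
""")

# The aggregate workaround: group and collect the children into an array.
title, secs = cur.execute("""
    SELECT c.title, json_group_array(s.sec_id)
    FROM course c JOIN section s ON c.cid = s.cid
    GROUP BY c.cid
""").fetchone()

assert title == "Databases"
assert sorted(json.loads(secs)) == [10, 20]
```

Native support would let the optimizer see the nesting intent directly instead of reverse-engineering it from a GROUP BY/aggregate combination.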
We borrow Apache DataFusion's syntax for this purpose; however, other constructs (e.g., SQL++ [6]) can also be used instead. This is a very common use case in practice, and although it can be supported through a chain of array_agg and group by's, we believe this functionality should be supported natively so that the queries can be optimized properly. We also omit explicit group by clauses as shown in the example, since those can be inferred from the select clause (otherwise CTEs need to be used to construct the output shown here).

# 3 SCHEMA EVOLUTION

In addition to making data governance tasks easier, a key motivation for us is better support for schema evolution. Stonebraker et al. [38] lay out many reasons why schemas in the wild become unmanageable, unnormalized, and stray from the original E/R diagram. They also explore, but ultimately dismiss, the idea of raising the abstraction level to something like an E/R model, claiming that it would not solve the problems. We disagree with that conclusion, and believe that database decay will be at least delayed if a higher-level data model is used, especially if shortcuts are not allowed (i.e., the database is more “opinionated”). For instance, consider a schema change where a single-valued attribute is made multi-valued (e.g., moving from a single city to multiple cities). In a normalized relational schema, this requires moving the attribute to a separate table, and any queries that access that attribute would need to be modified to do an additional join. However, this is a minor change for the E/R diagram, and results in relatively localized changes to queries involving that attribute (e.g., select person_id, city → select person_id, unnest(city)). Note that, internally, we may wish to store the attribute separately, but that decision can be made transparently to the user.
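The single- to multi-valued change can be mimicked in standard SQL today (a sketch via Python's sqlite3, with a JSON array column and json_each standing in for unnest; the table names and data are hypothetical):

```python
import sqlite3

# Before: city is single-valued. After: city is an array, queried with an
# unnest-style table-valued function. Only the projection of that one
# attribute changes; the rest of the query stays the same.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE person_v1(person_id INTEGER, city TEXT);
INSERT INTO person_v1 VALUES (1, 'Paris');
CREATE TABLE person_v2(person_id INTEGER, city TEXT);  -- city now a JSON array
INSERT INTO person_v2 VALUES (1, '["Paris", "Lyon"]');
""")

before = cur.execute("SELECT person_id, city FROM person_v1").fetchall()
after = cur.execute("""
    SELECT p.person_id, j.value
    FROM person_v2 p, json_each(p.city) j
""").fetchall()

assert before == [(1, "Paris")]
assert after == [(1, "Paris"), (1, "Lyon")]
```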
Similarly, converting a many-to-one relationship to a many-to-many relationship requires creation of a new table (typically) in the relational schema, and queries need to be modified appropriately. The change to the E/R diagram is again relatively minor, and queries involving the relationship may not need any modifications depending on what the query was trying to achieve. For example, a query to find the average credits per advisee for each instructor: select instructor.ID, avg(tot_credits) from instructor join student on advisor group by instructor.ID does not require any modifications if the relationship cardinalities were to be modified. There are of course other queries where changes may be needed. A major advantage of the E/R model is the ability to define class hierarchies like the one shown in Figure 1. When constructing a relational schema from such an E/R diagram, we need to choose between multiple possibilities depending on whether the specialization is total vs partial, and whether it is overlapping or disjoint, etc. There are at least three possibilities here:
• Three relations, Person, Instructor, and Student, each storing a disjoint set of individuals, with the latter two featuring an extra attribute each.
• Three relations, Person, Instructor, and Student, but Person stores all the common attributes for all individuals, and the other two relations only store the additional attributes (along with the key).
• A single relation Person, with rank and tot_credits appropriately set to null (or with explicit attributes to keep track of who belongs to which subclass).
The choice among these options needs to be made early on, and gets baked into any queries that are written against this schema. Switching between options would likely require significant modifications to most queries. This is a less drastic change for the E/R model, although semantic correctness can still be an issue for some queries.
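To make the encoding choice concrete, a small sqlite3 sketch (attribute names rank/tot_credits follow the running example; the data is made up) shows how the same logical question must be phrased differently against two of these physical encodings:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
-- Encoding 2: common attributes in Person; subclasses store extras + key.
CREATE TABLE person(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE instructor(id INTEGER PRIMARY KEY REFERENCES person(id), rank TEXT);
CREATE TABLE student(id INTEGER PRIMARY KEY REFERENCES person(id), tot_credits INTEGER);
-- Encoding 3: a single relation, NULLs for non-applicable attributes.
CREATE TABLE person_flat(id INTEGER PRIMARY KEY, name TEXT, rank TEXT, tot_credits INTEGER);
""")
cur.executemany("INSERT INTO person VALUES (?,?)", [(1, "Ada"), (2, "Bob")])
cur.execute("INSERT INTO instructor VALUES (1, 'Prof')")
cur.execute("INSERT INTO student VALUES (2, 90)")
cur.executemany("INSERT INTO person_flat VALUES (?,?,?,?)",
                [(1, "Ada", "Prof", None), (2, "Bob", None, 90)])

# The same logical question -- "names and ranks of all instructors" --
# needs a join under encoding 2 but a NULL filter under encoding 3:
q2 = cur.execute(
    "SELECT p.name, i.rank FROM person p JOIN instructor i ON p.id = i.id"
).fetchall()
q3 = cur.execute(
    "SELECT name, rank FROM person_flat WHERE rank IS NOT NULL"
).fetchall()
assert q2 == q3 == [("Ada", "Prof")]
```

Switching encodings forces every such query to be rewritten, which is exactly the coupling the E/R abstraction would hide.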
This choice can also have a significant impact on performance depending on the workload, something that is not easy to anticipate a priori. Finally, we note that schema changes typically also require a complex data migration process, which today is often handled by the application layers on top, since databases do not support such functionality natively. We plan to explore how schema evolution and data migration can be supported natively within the database system, along with versioning [4], so that users can more easily experiment with schema changes and roll them back as needed. Although these are orthogonal issues, we believe the use of a higher-level model simplifies some of the challenges considerably.

# 4 MAPPING TO PHYSICAL REPRESENTATION

The logical independence afforded by the higher abstraction layer enables a vastly broader space of physical representations that can be used to store the data on the underlying storage system. Before describing our approach, we briefly discuss some of the prior work along these lines. In the initial C-Store prototype [37], a larger space of physical representations was considered in the form of projections. Specifically, a given relational schema could be mapped to a set of projections that do not necessarily correspond one-to-one or even many-to-one with the tables in the schema. A projection could contain attributes from multiple tables connected through appropriate join keys, and could be sorted and stored differently. However, to our knowledge, later work on columnar databases considers a much narrower set of projections. Join indexes [43] similarly expand the space of physical representations considered; however, those are typically restricted to join keys. The work on XML, RDF, or property graph “shredding” [25, 41, 42] has explored different ways to map from those models to relational.
The target of most of that work is, however, a tabular representation, whereas our approach explores a larger space of physical representations, including hierarchical representations. Another closely related work is the line of work on converting from the E/R model to XML [17, 19], where the target is (effectively) a set of hierarchical representations. That work, however, hasn't looked at systematic exploration of the space of possible mappings in a workload-aware fashion (instead focusing on finding the best representation given an optimization metric), and also doesn't consider multi-relation representations. Finally, the ADO.NET Entity SQL framework [9, 23, 33] allows users to write queries against an E/R-like conceptual model; the relationship between that model and the storage backend is specified using a declarative mapping that is compiled into bidirectional views, which are used to transform data back and forth. We plan to build upon that considerable line of research in our future work.

Figure 2: Mappings to physical representation as covers of the E/R graph.

At a high level, the goal of the mapping optimization process is to create a collection of physical representations that can be used to store the data that conforms to the given E/R schema. There are two key requirements: (1) The mapping must be uniquely reversible (i.e., bidirectional), in that the entities and relationships stored in the database must be recoverable, and (2) We must be able to map any inserts/updates/deletes on the entities and relationships to the database. We currently consider three possible physical representations that can be used as targets.
• Tables in the first normal form, where composite data types are permitted, but domains must be atomic (i.e., no arrays). This was typically the only representation considered by the prior work on shredding.
• Hierarchical structures with a pre-defined schema: Here we allow for arrays, including arrays of composite types which themselves might contain arrays. Although relational databases are typically not optimized for this scenario, storage formats like Parquet and Avro have shown that read-only workloads can be supported efficiently for such data. However, updates are typically harder to do for such storage structures. We note that some of these issues have been investigated in the work on nested relational databases as well [32].
• Multi-relational compressed (factorized) representations: This representation, in theory, can be used to store the join of multiple relations together in a compact fashion [26] (a materialized view stored as a table, on the other hand, may have significant duplication). The key benefit here is the ability to use physical pointers to avoid joins, and to execute some types of aggregate queries more efficiently (by, in effect, pushing down aggregations through the joins). We expect that the benefits of this representation will likely show up if the joins are almost one-to-one (i.e., the join is not a key-foreign key join at the schema level, but the data does not exhibit high connectivity). Another key reason for us to consider this representation is that it brings us closer to the storage formats used in graph databases, and thus helps unify the representations.
In order to explore the space of possible mappings, we first view the E/R diagram as a graph where each entity, relationship, and attribute is a separate node (Figure 2). Entity nodes are connected to the relationships in which they participate, to subclasses or superclasses, and to their attributes. A mapping to a physical storage representation can be seen as a cover of this graph using connected subgraphs. Each connected subgraph corresponds to a physical table or data structure, and together all of these constitute the full physical representation.
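The cover requirements (every node covered, each subgraph connected) can be checked mechanically; the sketch below uses a tiny illustrative fragment of an E/R graph rather than the paper's actual schema:

```python
from collections import deque

# A tiny E/R graph fragment as an adjacency list: two entities, one
# relationship, and two attributes (names are illustrative).
er_graph = {
    "Student":    ["advisor", "s_name"],
    "Instructor": ["advisor", "i_name"],
    "advisor":    ["Student", "Instructor"],
    "s_name":     ["Student"],
    "i_name":     ["Instructor"],
}

def is_connected(nodes):
    """Check that `nodes` induce a connected subgraph of er_graph (BFS)."""
    nodes = set(nodes)
    start = next(iter(nodes))
    seen, frontier = {start}, deque([start])
    while frontier:
        n = frontier.popleft()
        for m in er_graph[n]:
            if m in nodes and m not in seen:
                seen.add(m)
                frontier.append(m)
    return seen == nodes

def is_valid_cover(cover):
    """A mapping is valid if every node appears in some connected subgraph."""
    covered = set().union(*map(set, cover))
    return covered == set(er_graph) and all(is_connected(s) for s in cover)

# Fully normalized: the many-to-one advisor edge is folded into Student's table.
m1 = [["Student", "s_name", "advisor"], ["Instructor", "i_name"]]
assert is_valid_cover(m1)
# Invalid: the advisor relationship is not stored anywhere.
assert not is_valid_cover([["Student", "s_name"], ["Instructor", "i_name"]])
```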
We show three examples in Figure 2. The first mapping depicts a fully normalized mapping, where each entity gets its own table, many-to-one relationships are folded into the many side, and many-to-many relationships have their own tables. For instance, the advisor relationship is folded into the student table, whereas takes and teaches are in separate tables. The two subclasses of Person are also in separate tables, but only with the attributes that are unique to them. Finally, the multi-valued attribute (Ph) is stored in a separate table (along with the key). The second mapping, on the other hand, reduces the number of data structures needed by combining Person and its subclasses into the same table, as well as using an array to store the multi-valued attribute. It also moves Sections into the Course table as an array of a composite type. Although it reduces the number of joins required, most queries involving section entities will require the use of an unnest operation, which is often not optimized in modern RDBMSs (although platforms like Apache DataFusion do a better job at it). Finally, the third mapping illustrates a scenario where two entities with a many-to-many relationship (section and student) are stored in a single physical data structure. In our current implementation, which is based on PostgreSQL, this would result in significant duplication of data and also increase the cost of inserts/updates/deletes. One of our key research goals is to explore alternative representations and their benefits.

Figure 3: High-level ErbiumDB Architecture.

We note that our approach is flexible enough to cover column decomposition representations, and it also allows for the same attribute to be present in multiple data structures. Also, although it allows for attributes from multiple relations to be in the same data structure, it is an open question as to whether it can cover the entire scope of projections proposed for C-Store [37].
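The duplication issue with storing a many-to-many join in one flat structure, and how a factorized layout avoids it, can be illustrated in a few lines (toy data; the dict-based "factorized" form is only a stand-in for a real compressed representation):

```python
# Two relations: r2 holds R-side payloads, s1 holds (s_id, r_id) pairs.
r2 = {1: ("r-payload-A",), 2: ("r-payload-B",)}
s1 = [(10, 1), (11, 1), (12, 2)]

# Flat materialized join: the R payload is repeated once per matching S row.
flat = [(sid, rid) + r2[rid] for sid, rid in s1]

# Factorized form: each R tuple stored once, with its S rows grouped under it.
factorized = {rid: {"r": r2[rid], "s": [sid for sid, x in s1 if x == rid]}
              for rid in r2}

# Same information, but the payload for R tuple 1 appears twice in the flat
# form and only once in the factorized form.
assert sum(row.count("r-payload-A") for row in flat) == 2
assert sum(1 for g in factorized.values() if "r-payload-A" in g["r"]) == 1
```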
Figure 4: E/R Schema for Illustrative Experiments.

The natural optimization problem here, which forms one of the key research challenges, is to automatically identify the best mapping for a given schema and data and query workload. A sub-question there is how to generate such mappings in an automated fashion so that one can search through them to make the optimization decisions. As noted above, any mapping must satisfy the requirement that the mapping is reversible and CRUD operations are well-defined. Coming up with a complete and sound list of constraints on the graph cover is an interesting direction for future work, which has overlaps with the work on updatability of views and answering queries over them [23].

# 5 PROTOTYPE

We are building a proof-of-concept system, called ErbiumDB, to explore the research questions raised in this paper. Our prototype is written in Python, and built as a layer on top of PostgreSQL (with the goal to support other backends like Apache DataFusion). Figure 3 shows a high-level architecture (not all pieces are built yet). The DDL layer does the heavy lifting here. It creates the E/R graph from the entity/relationship create statements, and keeps it up to date as the schema is modified. The mapping of the E/R graph to physical tables (specified manually today) is maintained in a table in the database as a JSON object, and is read into memory at initialization time. As noted earlier, we construct a table for each connected subgraph in the mapping that is chosen. The DDL layer also constructs mappings between CRUD statements on the entities/relationships and updates on the physical tables in the database. The prototype supports a limited form of SQL, and queries against the logical schema are translated to queries against the physical tables. Finally, we are also planning to support a RESTful API by default (and possibly gRPC), to ensure compatibility with standard application development practices.
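The kind of CRUD translation such a DDL layer performs can be sketched as follows (the mapping structure and helper are hypothetical; ErbiumDB's actual JSON mapping format is not shown here):

```python
# Hypothetical mapping: the "person" entity is backed by two physical
# tables, with the multi-valued "phone" attribute stored one row per value.
mapping = {
    "person": [
        {"table": "person",        "columns": ["id", "name"]},
        {"table": "person_phones", "columns": ["id", "phone"], "unnest": "phone"},
    ]
}

def translate_insert(entity, values):
    """Yield (sql, params) pairs for the physical tables backing `entity`."""
    for tgt in mapping[entity]:
        cols = tgt["columns"]
        placeholders = ", ".join("?" for _ in cols)
        sql = f"INSERT INTO {tgt['table']} ({', '.join(cols)}) VALUES ({placeholders})"
        if "unnest" in tgt:  # multi-valued attribute: one physical row per element
            for v in values[tgt["unnest"]]:
                yield sql, tuple(values[c] if c != tgt["unnest"] else v
                                 for c in cols)
        else:
            yield sql, tuple(values[c] for c in cols)

stmts = list(translate_insert("person",
                              {"id": 1, "name": "Ada", "phone": ["555-1", "555-2"]}))
assert len(stmts) == 3  # one row in person, two rows in person_phones
```

A single logical insert thus fans out into one statement per covering subgraph, which is exactly why the mapping must keep CRUD operations well-defined.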
While CRUD operations would be supported by default, additional API calls can be added as needed.

# 6 ILLUSTRATIVE EXPERIMENTS

We present a set of illustrative experiments to demonstrate the benefits of the logical independence afforded by the E/R model, and to discuss some of the opportunities for future research. We use a synthetic E/R schema as shown in Figure 4, consisting of 8 entity sets, including a type hierarchy consisting of 5 entity sets, and two weak entity sets. We consider a few different mappings of these to the underlying relational database (PostgreSQL):
• (M1) Fully normalized, with separate tables for the multi-valued attributes and a separate table for each subclass consisting only of the attributes unique to that subclass;
• (M2) The three multi-valued attributes stored using PostgreSQL array data types;
• (M3) The type hierarchy mapped to a single relation, with a special type attribute;
• (M4) The type hierarchy mapped to 5 disjoint relations;
• (M5) The two weak entity sets folded into S using custom composite types; and
• (M6) Multi-relational representation where R2 and S1 are joined and stored in a single table.
As we illustrate below, there are significant quantitative differences between these mappings, but in today's systems, the choice has to be made quite early in the development cycle and is hard to change. There are also important qualitative differences between the mappings. For instance, in M3, where R2 is not a separate relation, any constraints on the relationships between R2 and S1, or between R1 and R3, would be difficult to enforce. It's an interesting research question as to how to quantify such differences. We ran a series of experiments against these six mappings and a synthetically generated database containing approximately 5,000,000 entries in total. We discuss a few of the results; however, we note that many of the results are along the lines of what one would expect given the schemas above.
All queries were run 10 times, and the median time is reported. We compared the performance of M1 and M2 on a simple query that outputs the three multi-valued attributes for all the R entities. For M1, this requires a multi-way join, resulting in a 22x performance difference between the two (M1 = 66.42s vs M2 = 2.88s), whereas a query that asks just for all the values of r_mv1 is about 30% faster on M1 (M1 = 0.39s vs M2 = 0.5s), representing the cost of unnesting. Surprisingly, a query that asks for the r_mv1 values given an r_id showed a 145x performance difference (M1 = 40ms vs M2 = 0.3ms), likely due to it not being able to use an index on M1 (r_id is a key for M2, but not for M1). Especially in the context of reactive applications, the latency differences may be highly significant. On the other hand, a query that looks for the intersection of r_mv1 and r_mv2 across all tuples runs about 3.6x faster with M1 than with M2 (M1 = 0.63s vs M2 = 2.29s), because of the unnesting overhead. Next, we compare the three alternative representations for the type hierarchy (M1, M3, and M4). For a query that simply lists all the information for the R3 entities, M1 (which requires a 3-way join) was about 5x slower than M3 (M3 = 0.4s), and M3 in turn is about 2.7x slower than M4 (although there is no join needed for either M3 or M4, the amount of data scanned is significantly smaller for M4). For a query that joins R with S with predicates on both relations, M1 and M4 performed very similarly, which is surprising given that M4 requires a 5-relation union. However, the performance gap between these three representations significantly increases for more complex queries.
The queries for M1 and M4 also get quite verbose if any reasoning across the three relations is required. Next, we look at the effect of folding S1 and S2 as arrays of composite types inside S (M5 vs M1). A query that asks for all the information across the three entities for a given set of 10,000 s_ids ran about 2.2x slower on M1 due to the extra joins needed there. On the other hand, any queries that require unnesting of those composite arrays run much slower; for example, a query that joins S1 and R runs about 4x slower on M5 than on M1. Finally, comparing M1 and M6, we see that a query that can utilize the pre-computed join runs significantly faster on M6, but queries that only involve one of those two tables get more expensive. As we noted earlier, compact multi-relation storage formats are needed to make a representation like M6 viable. In addition to showing the trade-offs between the different representations that can be exploited through the increased logical independence, the experiments also highlight some of the inefficiencies of PostgreSQL that, we believe, can be addressed relatively easily. At the same time, they also suggest that different storage layouts and specialized operators may be needed to handle complex inheritance hierarchies and highly nested structures.

# 7 RELATED WORK

There is a long line of work on higher-level data models and query languages that make it easier to write queries against the data. We cannot do justice to that long line of work, even if we were to only focus on the work on the E/R model, but we discuss a few of those works here. Several early works, including Cattell et al. [10, 30], Elmasri and Larson [16], Czejdo et al. [14], Zhang and Mendelzon [45], etc., developed graphical data manipulation languages for the E/R model or its variations. There were also several works that looked at non-graphical languages, generalizing relational algebra or SQL, including Elmasri and Wiederhold [18], Parent and Spaccapietra [28], and Hohenstein and Engels [21].
There is also a long line of work on query languages for non-1NF relational databases, e.g., Roth et al. [31, 32] and, more recently, Carey et al. [6]. From an implementation perspective, the early work on object-oriented databases, such as Exodus [7], supported complex object types as well as object-oriented features like inheritance and aggregation, encompassing the E/R model (although much of that work does not explicitly focus on the E/R model). Microsoft's ADO.NET Entity Framework, and the Entity SQL language, form perhaps the most well-known modern example of this approach [9, 23, 33]. That line of work has looked at a number of different implementation aspects, including bidirectional views for data transformations, query containment, and query optimization.
Spurred by a number of recent trends, we make the case that relational database systems should urgently move beyond supporting the basic object-relational model and instead embrace a more abstract data model, specifically the entity-relationship model. We argue that current RDBMSs don't inherently support sufficient "logical" data independence, and that this is relegating database systems to the role of a backend storage system, away from where significant innovation is both happening and still needed. We present the design of a prototype system (ErbiumDB) that we are building to explore these issues, and discuss some of the key research challenges.
# 1 Introduction

Deploying robots in human-centric settings like households requires balancing robot autonomy with humans’ sense of agency [1, 2, 3, 4, 5, 6]. Full teleoperation offers users fine-grained control but imposes a high cognitive load, whereas fully autonomous robots act independently but often misalign their actions with nuanced human needs. Assistive teleoperation — a paradigm in which both the human and the robot share control [7, 8, 9, 10] — has thus emerged as an ideal middle ground. By keeping the user in control of high-level decisions while delegating low-level actions to the autonomous robot, this approach both preserves user agency and enhances overall system performance. As such, assistive teleoperation is becoming a desirable paradigm for robots to serve as reliable partners in human-centric environments, such as assisting individuals with motor impairments [11, 12]. While promising, assistive teleoperation in everyday environments remains challenging. A longstanding challenge in assistive teleoperation is to infer human intents from user control inputs and assist users with correct actions [8]. This challenge is amplified in real-world settings, where robots must go beyond closed-set intent prediction [13, 14] to handle diverse, open-ended user goals across different contexts and scenes. As a result, a key capability the robot should possess is to interpret user control inputs within the visual context and infer intent through commonsense reasoning. For example, consider a user teleoperating a robot to move a jar of pasta toward both a laptop and a cooking pot. Even if the pasta jar is closer to the laptop, commonsense suggests that the user intends to pour pasta into the pot, not onto the laptop. As another example, some users push an automatic door to open it, while others want to press an accessibility button.
These examples illustrate the nuanced and context-dependent nature of human intent, highlighting the level of commonsense reasoning required for robots to provide effective and satisfactory assistance.

Figure 1: CASPER infers user intents and offers help when confident. Given user teleoperation input, CASPER uses VLMs to predict human intent using commonsense reasoning. Upon user confirmation, CASPER performs autonomous execution to fulfill the intent using a skill library. CASPER’s background reasoning runs in parallel with foreground human control to minimize disruption.

Existing assistive teleoperation systems often fall short in inferring diverse intents. Prior methods often limit the problem space to a closed set of objects [14, 9], or to a predefined task like picking up objects, implicitly assuming the intent type is known a priori [14, 13]. These intent inference methods, either based on rule-driven strategies [15, 13] or learned from demonstrations [16, 17, 14, 10], are typically limited to a single skill type or bound by the task distributions seen at training, struggling to generalize in new scenarios. Critically, these systems usually lack commonsense reasoning, which is essential for interpreting contextual cues and generalizing intent inference to novel scenes and behaviors in real-world environments. To address the above limitations, we introduce CASPER, an assistive teleoperation system that infers diverse intents from human user control and offers assistance with long-horizon mobile manipulation tasks (Fig. 1). CASPER builds on three core components. First, it features an open-world perception module that uses pre-trained visual language models (VLMs) to provide a generalized understanding of open-world objects and scenes without task-specific training. Second, CASPER leverages VLM-powered commonsense reasoning to infer a diverse range of user intents, significantly expanding the possible intent choices compared with prior systems.
Third, to realize task execution, CASPER uses a flexible library of parameterized skills encompassing a range of navigation and contact-rich manipulation behaviors [18]. With this comprehensive and composable skill library, CASPER can execute long-horizon tasks that go beyond the capabilities of traditional assistive teleoperation systems. Furthermore, deploying the system for long-horizon tasks introduces a user-centric consideration: offering undesirable assistance based on premature intent inference can frustrate or disrupt the user. To avoid this, the system should determine intents only after gathering enough information from user inputs and visual contexts. CASPER addresses this by shadowing the user: it observes foreground human actions and infers user intents in the background. A confidence module based on self-consistency [19] ensures that assistance is triggered only when prediction confidence is high, reducing errors and user disruption. By running VLM-based inference in parallel with user control, CASPER unobtrusively predicts intent and prepares actions. To evaluate the effectiveness of CASPER in assisting human users, we conduct extensive user studies on a mobile manipulator (TIAGo [20]), involving 10 pilot study participants and 13 study participants, totaling over 80 hours of interaction across 3 long-horizon tasks. Additionally, we conduct offline experiments to test the intent inference module and perform detailed performance analyses and ablation studies. Compared with prior assistive teleoperation baselines without commonsense reasoning ability and a full teleoperation baseline, CASPER achieves a higher success rate, better user satisfaction, and lower user cognitive load across all tasks. Figure 2: CASPER architecture. VLM-based intent inference runs in parallel with human teleoperation.
CASPER generates task candidates from observations and infers intent from user inputs among the task candidates, repeating until predictions are self-consistent. Once confirmed by the user, CASPER executes the corresponding skill with estimated parameters. # 2 Related Work Assistive Teleoperation. Assistive teleoperation offers a promising balance between human control and robotic assistance, enhancing user agency and task efficiency [11, 12, 15, 21, 22, 23]. Assistive teleoperation enables users to share control with the robot, injecting their intent to guide the system toward their goals [8, 7, 10, 24, 25, 26, 27]. Accurately predicting user intent is thus a key challenge [8, 28, 29]. Prior approaches typically select the most probable intent from a fixed set of goals [8, 30, 31, 32, 33], assume a single predefined skill [13], or use data-driven methods to map high-dimensional user inputs to low-dimensional actions within specific tasks [10, 14, 16, 17, 24, 34, 35, 36, 37]. However, these approaches struggle to generalize beyond predefined intents without retraining or reprogramming. Moreover, they lack the commonsense reasoning capability to interpret human control input within the visual context. Human Intent Inference. Inferring hidden human states is a critical step toward understanding human behavior for a wide range of downstream tasks [38, 39, 40, 41, 42]. In robotics, intent inference enables robots to operate effectively in human-centered environments [43, 44, 45, 46]. To achieve shared goals, robots must reason about a human collaborator’s latent strategy [47, 48], future actions [46, 49, 43], goals [50, 45, 23], and preferences [51, 52] to adjust their behavior accordingly. CASPER advances these efforts by leveraging VLM-based intent inference to facilitate assistance in assistive teleoperation settings. LLMs and VLMs for Robotics.
Foundation models, pretrained on internet-scale data, have gained attention for their strong generalization and adaptability across diverse applications [53]. They hold promise for enhancing the full robotics stack, from perception to decision-making and control [54]. Recent works integrate LLMs and VLMs as high-level planners paired with low-level skills to enable open-vocabulary and open-world robot capabilities [18, 55, 56, 57, 58, 59]. Other studies use LLMs to model humans [60], estimate uncertainty [61], or use language [62, 63, 23]. However, these approaches do not address the interpretation of user control inputs in real-world assistive teleoperation settings. Thus, the potential of LLMs/VLMs for assistive teleoperation remains underexplored. # 3 Assistive Teleoperation with CASPER In this section, we describe CASPER, an assistive teleoperation system that enables robots to infer and execute diverse human intents (Fig. 2). CASPER comprises two key components: an intent inference module that continuously predicts human intent from teleoperation history when shadowing the user in the background, and a skill execution module that executes tasks using a library of skills. # 3.1 Problem Formulation We formulate assistive teleoperation as a sequential decision-making problem defined by the tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{Z} \rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}: \mathcal{S} \times \mathcal{A} \to \mathcal{S}$ is the unobserved transition function, and $\mathcal{Z}$ is the intent space. The state $s \in \mathcal{S}$ comprises the robot’s RGB image observation, proprioceptive states (e.g., gripper status, base and end-effector poses), and a list of foreground objects $O = \{o_1, \ldots, o_n\}$ detected by the open-world perception module.
The action $a \in \mathcal{A}$ is either from the human ($a = a_h$) during human teleoperation or from the robot ($a = a_r$) during autonomous execution. We assume that each assistive teleoperation episode is a sequence of one-step subtasks, and users can teleoperate to express their desired goals. We define a human intent for the $i$-th subtask as $z_i = (l_z^i, o_z^i) \in \mathcal{Z}$, where $l_z^i \in L$ is the intended skill (e.g., “navigate”) and $o_z^i \in O$ is the target object (e.g., “the door” in “navigate to the door”). At the start of subtask $i$, the user provides a teleoperation trajectory snippet $\xi_h^T = (a_h^1, \ldots, a_h^T)$, where $T$ is the snippet length. The goals of CASPER are to infer the human intent $z_i$ from $\xi_h^T$, and to fulfill the intent with a trajectory $\xi_r$. This process repeats until the human indicates the end of the episode. # 3.2 Inferring Intents in the Background CASPER tackles two key challenges in intent inference. To identify intent, it generates possible candidates from open-world observations and selects the most likely one based on commonsense understanding of user inputs. To handle intent ambiguity, it uses confidence estimation to predict only when confident, reducing premature suggestions. Intent Candidates Generation. To generate an open set of potential intent options, we use a VLM $f_{candidate}$ to analyze the current state $s^t$ and create a set of intent candidates $\{c_1, \ldots, c_m\}$ (Fig. 2 left). It first identifies actionable objects and then filters feasible object-skill pairs based on how each object is likely to be interacted with.
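The candidate-generation step above is performed zero-shot by a VLM. As a minimal illustration of the feasibility filtering it performs, the sketch below replaces the VLM with hand-written rules; the skill names, affordance tags, and gripper-state checks are all hypothetical stand-ins, not the paper's actual prompts:

```python
# Hypothetical skill-to-affordance mapping standing in for the VLM's
# zero-shot reasoning about how each object can be interacted with.
SKILL_AFFORDANCES = {
    "pick": {"graspable"},
    "pour": {"container"},
    "navigate": {"landmark"},
}

def generate_intent_candidates(objects, gripper_empty):
    """Enumerate feasible (skill, object) intent candidates.

    `objects` maps object names to affordance tags. Skills that require
    a held object (here, "pour") are filtered out when the gripper is
    empty, and "pick" is filtered out when it is full, mimicking the
    state-aware filtering attributed to f_candidate.
    """
    candidates = []
    for skill, required in SKILL_AFFORDANCES.items():
        if skill == "pour" and gripper_empty:
            continue  # cannot pour with an empty gripper
        if skill == "pick" and not gripper_empty:
            continue  # gripper already holds an object
        for name, affordances in objects.items():
            if required <= affordances:  # required tags all present
                candidates.append((skill, name))
    return candidates
```

With an empty gripper and a scene containing a graspable pasta jar and a door landmark, this sketch proposes picking and navigating but never pouring, mirroring the kind of filtering the paper describes (e.g., avoiding "place" actions when the gripper is empty).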
The VLM adapts its predictions to object affordances and the robot’s current state (e.g., avoiding “place” actions when the gripper is empty) by reasoning about robot-object interactions in a zero-shot manner. The commonsense-based intent set generation ensures that intent choices are semantically plausible and relevant to the scene. Human Intent Selection. Given a set of task candidates $\{c_1, \ldots, c_m\}$, a second VLM $f_{intent}$ predicts the user’s intent $\hat{z}$ by analyzing a history of subsampled robot observations, which include downsized images and robot base and end-effector poses (Fig. 2 middle). It chooses the most likely intent $\hat{z}$ among $\{c_1, \ldots, c_m\}$ and parses the corresponding skill class $l_{\hat{z}}$. To enhance VLM understanding in cluttered scenes, we apply visual prompting [64, 18, 65] to annotate important regions that the VLM should attend to. These annotations include Set-of-Marks (SoM) [66] for segmented objects, gripper masks that highlight gripper position, and arrows indicating gripper motion history. VLM Confidence Estimation. Real-time intent inference is inherently uncertain due to the ambiguity or incompleteness of human actions. For instance, if a user begins rotating a robot’s base in a room with multiple furniture pieces, the intended target remains ambiguous until the user clearly moves the robot toward a specific piece of furniture. Seeking user confirmation based on a premature guess can disrupt user control and cause frustration. To address this, CASPER employs a confidence-based intent validation mechanism. Inspired by self-consistency methods [19] in LLMs, we run multiple VLM calls in parallel to estimate the confidence of intent predictions. The system only offers assistance when the number of VLM outputs in agreement exceeds a threshold. Formally, let $K$ denote the number of VLM calls and $\hat{z}^k$ the intent predicted by the $k$-th VLM.
The system confirms its prediction with the user if $\sum_{k=1}^{K} \mathbb{I}(\hat{z}^k = \hat{z}^{\mathrm{mode}}) \geq \eta$, where $\mathbb{I}(\cdot)$ is the indicator function, $\hat{z}^{\mathrm{mode}}$ is the most frequent prediction, and $\eta$ is the agreement threshold. By filtering out low-confidence predictions, this module minimizes disruptions and premature predictions. Parallel Foreground-Background System Design. Integrating pre-trained VLMs into real-time closed-loop control poses challenges due to the latency in VLM inference. Waiting for VLM outputs can be frustrating for users, especially when the system is uncertain or incorrect. To mitigate this delay, we adopt a framework where the user operates the robot in the foreground, while the VLM processes inputs simultaneously in the background. If the VLM is still processing or lacks confidence, it remains silent, intervening only when it has a confident prediction. This approach allows the user to operate naturally while the system continuously refines its intent inference. Figure 3: Toy, Shelf, and Door: multi-step mobile manipulation tasks. At each step, the robot disambiguates user intent among multiple plausible goals, selecting the correct one based on user inputs and visual context. # 3.3 Fulfilling Intents with Skill Execution Once confident in its prediction, CASPER executes the intent using a library of parameterized skills, with a VLM estimating the skill parameters for execution. Control Switching. When confident in its prediction, the robot communicates the suggested action via an audible cue. The user can confirm or deny the prediction by pressing different keys on the keyboard. If confirmed, the system signals the transition to autonomous execution with another cue (“Great! I will take over.”). If denied, the system prompts the user to continue teleoperation (“Understood, I’ll pause here.
Feel free to continue.”) until the next prediction attempt. Parameterized Skill Library. In real-world assistive settings, users may require help with long-horizon tasks that involve diverse manipulation and navigation behaviors. CASPER utilizes a library of parameterized skills that cover common mobile manipulation behaviors, including object manipulation skills (e.g., picking, placing, pouring), interactions with the environment (e.g., pushing doors, tapping card readers, pressing buttons, taking elevators), and navigation (e.g., approaching landmarks). Each skill is defined by a behavior primitive (e.g., PickUp[Obj.]) and a parameter (e.g., the target object’s pose), enabling flexible execution of user intents across diverse environments. Refer to Appendix A.1 for a complete list of skills. Skill Parameter Selection and Execution. Once a predicted intent $\hat{z}$ is confirmed (e.g., pouring pasta into a pot), the corresponding skill $l_{\hat{z}}$ (e.g., pouring) is called. The parameter estimation VLM $f_{skill}$ identifies parameters such as the target object $o_{\hat{z}}$. Based on the object’s pose, the skill execution module executes the skill. After completing the subtask, the robot prompts the user to resume control (“Alright, you can take over now.”) for the next intent. # 4 Experiments We seek to answer the following research questions: RQ1: Does CASPER improve task performance and user experience compared to existing methods? RQ2: Is commonsense VLM reasoning essential for inferring diverse intents? RQ3: What is the contribution of each system component to overall performance? We address RQ1 through a user study, RQ2 via offline unit testing of the intent inference module, and RQ3 through ablation experiments. # 4.1 User Study: Real-World Mobile Manipulation Tasks Experiment Setup. We use a TIAGo mobile manipulator equipped with dual arms, a mobile base, and an RGBD camera.
Users teleoperate the robot using a 3Dconnexion SpaceMouse while observing livestreamed RGB images. CASPER uses GPT-4o as its VLM backbone. The full teleoperation interface details, sensory setup, and audio/keyboard interaction design are provided in Appendix B.1. Tasks. We evaluate on 3 tasks (Fig. 3), each requiring multi-step intent inference: Shelf (3-step), Toy (5-step), and Door (2-step, 3 variations). Each step offers multiple plausible choices, requiring the system to use user input to infer intents. More task details are in Appendix B.2. Figure 4: User study: user workload and user satisfaction. CASPER consistently outperforms the baselines in terms of user workload (left) and user satisfaction (right) with statistical significance $(p < 0.05)$. Detailed per-task results and full questions of user satisfaction can be found in Appendix C. Note that for user satisfaction scores, “assist helpfully” and “correct intent” are not applicable to Full Teleop. Participants and Procedures. We conducted an IRB-approved user study with $N = 13$ participants (mean age $= 29.4$; 5 females, 8 males; all able-bodied), all of whom gave informed consent. Participants completed a practice session before using each method in randomized order. After each, they answered user satisfaction and NASA-TLX questionnaires. Independent Variables (Robot Control Methods). We compare CASPER with three baselines: 1) Full Teleop: The user manually teleoperates the robot without autonomous robot control. 2) HAT [15]: assistive teleoperation that infers human intents using proximity to goal. 3) RBII [9]: assistive teleoperation that infers human intents using Bayesian inference on temporal user input history. Table 1: User study: task success rate and completion time. CASPER outperforms baselines in both task success and completion time. Since HAT and RBII only support grasping, we use CASPER to predict the skill and let the baselines select the target object, making comparisons conservative in their favor. These baselines test the role of commonsense reasoning in diverse intent inference. Dependent Measures (Evaluation Metrics). To evaluate task performance, we measure the binary task success rate (completion within a fixed time limit). We measure human workload with NASA-TLX [67], a standard tool for evaluating subjective cognitive and physical workload. User satisfaction is measured with a questionnaire adapted from prior work [16]. We perform pairwise t-tests between CASPER and the baselines to evaluate statistical significance. Hypotheses. The user study tests the following hypotheses: • H1: CASPER’s VLM-driven intent inference and skill execution improve task performance over baselines in real-world assistive tasks; • H2: CASPER reduces user workload and improves user satisfaction compared to baselines. Results. Task Performance. CASPER exhibits significant improvements $(p < 0.05)$ in task success rate compared to all baselines (see Table 1). The high success rate reflects the system’s ability to infer intents and execute appropriate actions, even in complex scenarios and long-horizon tasks. Full Teleop is the runner-up in terms of success, allowing a portion of participants to succeed with expertise and patience. In contrast, HAT and RBII have lower success rates because they struggle with tasks requiring context or commonsense knowledge. We also report task completion time in Table 1, where CASPER is the lowest across all tasks. Full Teleop is slower due to manual high-precision control (e.g., pouring); the heuristic baselines suffer frequent errors and corrections.
[Figure 5 panels: intent inference success rates per task (Door, Shelf, Toy, Average) for Casper, HAT, RBII-1, RBII-2, and Casper No VP; success rate and false prediction rate vs. history length (timesteps) for Casper and Casper No Confidence.] NASA-TLX. Fig. 4 (Left) shows that CASPER significantly outperforms $(p < 0.05)$ Full Teleop on all NASA-TLX metrics except “performance”, indicating that autonomous skill execution lowers cognitive and physical workload. Full Teleop requires continuous user input, resulting in higher workloads. The lack of statistical significance in “performance” suggests that users’ perceived success is sensitive to CASPER’s occasional inference errors, despite CASPER’s objectively higher success rate. CASPER also significantly outperforms $(p < 0.05)$ HAT in all metrics except “mental demand” and “physical demand,” and significantly outperforms $(p < 0.05)$ RBII across all metrics. The increased workload in HAT and RBII results from more frequent prediction errors (e.g., predicting to pick up the table), leading to longer time and higher effort. The results indicate that VLM-powered intent inference and skill execution reduce user burden and improve usability. The lack of statistical significance in “mental demand” and “physical demand” likely stems from the assistive baselines sharing the skill execution module with CASPER, which reduces the difference in these measures. User Satisfaction. Fig. 4 (Right) shows that CASPER has statistically significant improvements $(p < 0.05)$ in all 10 user satisfaction metrics over all baselines. The results indicate that CASPER simplifies the assistance process and enhances the user experience.
The Full Teleop baseline has lower scores, especially in “effort” and “physical workload”, due to the demands of constant manual control. HAT and RBII score lower in “confidence” and “trust”, as frequent intent prediction errors reduce user trust, significantly impacting overall user satisfaction. In summary, the user study confirms that CASPER improves task performance (H1), reduces cognitive workload, and increases user satisfaction (H2). Beyond the main findings, the user study further reveals several notable insights, which we detail in Appendix C, including more detailed analysis of results, participant interviews, a demographic breakdown, and an analysis of failure cases. # 4.2 Unit Testing: Intent Inference Accuracy To quantitatively validate CASPER’s intent inference accuracy, we conduct unit testing on teleoperation segments collected for each subtask across all three tasks. Each segment serves as an independent data point for evaluating intent inference, where success requires correctly predicting both the intended skill and target object. We prompt the VLM to predict the intent for each data point and compute the overall intent inference success rate, isolating intent inference from task execution. In this experiment, we compare CASPER against the HAT and RBII baselines. We evaluate two variants of RBII from the original paper: RBII-1 only uses the gripper-to-goal distance for recursive Bayesian inference, while RBII-2 also uses user joystick inputs with a Boltzmann-rational action model. Note that RBII-1 was used in the user study due to the similar average performance between RBII-1 and RBII-2. As shown in Fig. 5 (Left), CASPER outperforms the HAT and RBII baselines. Without commonsense reasoning, the baselines often mispredict targets by relying on gripper motion trends toward nearby objects, e.g., incorrectly pouring pasta into a sweetener box (Fig. 6) because the gripper moved closer to it.
In contrast, CASPER’s VLM-based inference leverages commonsense knowledge to make accurate predictions, choosing the pan instead. Figure 5: Quantitative results from unit testing and ablation studies. Left: CASPER outperforms all baselines in intent inference success rate. Note that no STD is reported for deterministic baselines. The ablation of Casper vs. Casper - No Visual Prompting (VP) highlights the benefit of visual prompting. Middle: Success rates improve with longer teleoperation history. Right: Removing confidence estimation increases false prediction rates across all history lengths. Figure 6: Unit testing visualization. # 4.3 Ablation Studies To assess the impact of CASPER’s key components, we perform ablations on the following questions: How does the VLM input design, like visual prompting, affect intent inference accuracy? To guide the VLM’s attention toward user input changes and the manipulated object, we apply visual prompting (VP) by adding a gripper mask and an arrow indicating the robot gripper’s motion (rendered from proprioceptive states) on the image. Fig. 5 (Left) shows VP yields an average $5.7\%$ boost, especially on Toy $(+9.8\%)$. Removing VP hurts the success rate because the VLM must implicitly understand the robot end-effector movement trace. Nonetheless, CASPER’s no-VP variant still outperforms all non-VLM baselines by $>10\%$, confirming that the primary gains come from the VLM’s commonsense reasoning; VP enhances its reasoning rather than providing decisive extra information. How much human teleoperation history is needed for accurate intent inference? CASPER infers intent from a segment of the user’s teleoperation trajectory. Short histories risk ambiguity and incorrect predictions, while long histories increase user effort. We investigate the tradeoff by studying the accuracy of intent inference across different trajectory lengths.
We vary history length from $T = 4$ to $T = 140$ timesteps and measure intent inference accuracy, defined as correctly predicting both the skill and target object. As shown in Fig. 5 (Middle), longer histories improve accuracy by providing more context. However, gains plateau beyond $T = 100$, offering diminishing returns while adding user burden. Thus, we use $T = 100$ in the user study to balance accuracy and workload. How does confidence estimation mitigate incorrect intent predictions? We hypothesize that CASPER’s confidence estimation module reduces false predictions by filtering out ambiguous cases. To validate this, we ablate the module and compare false prediction rates (defined as incorrect predictions over total predictions). Figure 7: Confidence estimation visualization. CASPER defers prediction until the intent is clearer, ensuring more accurate assistance. As shown in Fig. 5 (Right), CASPER with confidence estimation consistently achieves lower false prediction rates, showing that the module effectively defers predictions when intents are unclear. Fig. 7 also shows qualitative examples. At $T = 40$, CASPER withholds predictions in both tasks due to ambiguity: the viewpoint is still shifting in the “Go to the wooden floor” task, and the gripper movement is still unclear between the basket and bag in the “Place the toy” task. Premature inference could have led to incorrect predictions (e.g., selecting the wrong landmark or container). By $T = 100$, enough context enables correct predictions. These examples illustrate how delaying decisions in uncertain situations improves reliability.
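The confidence mechanism ablated here, which accepts the modal prediction only when at least $\eta$ of the $K$ parallel VLM calls agree, can be sketched as follows (the VLM calls are stubbed out as an already-collected list of (skill, object) predictions):

```python
from collections import Counter

def confident_intent(predictions, eta):
    """Return the modal intent if at least `eta` of the K predictions
    agree with it, else None (the system stays silent).

    `predictions` holds the K intents returned by parallel VLM calls,
    each represented here as a (skill, object) pair.
    """
    if not predictions:
        return None
    mode, count = Counter(predictions).most_common(1)[0]
    return mode if count >= eta else None
```

With a low agreement count the function returns None, which corresponds to CASPER remaining silent and letting the user continue teleoperating until the intent becomes clearer.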
Assistive teleoperation, where control is shared between a human and a robot, enables efficient and intuitive human-robot collaboration in diverse and unstructured environments. A central challenge in real-world assistive teleoperation is for the robot to infer a wide range of human intentions from user control inputs and to assist users with correct actions. Existing methods are either confined to simple, predefined scenarios or restricted to task-specific data distributions at training, limiting their support for real-world assistance. We introduce Casper, an assistive teleoperation system that leverages commonsense knowledge embedded in pre-trained visual language models (VLMs) for real-time intent inference and flexible skill execution. Casper incorporates an open-world perception module for a generalized understanding of novel objects and scenes, a VLM-powered intent inference mechanism that leverages commonsense reasoning to interpret snippets of teleoperated user input, and a skill library that expands the scope of prior assistive teleoperation systems to support diverse, long-horizon mobile manipulation tasks. Extensive empirical evaluation, including human studies and system ablations, demonstrates that Casper improves task performance, reduces human cognitive load, and achieves higher user satisfaction than direct teleoperation and assistive teleoperation baselines.
# 1 Introduction In the last decade, the rise of deep learning has brought prominent breakthroughs in object detection (OD) Zou et al. [2023], where models are usually trained under a closed-world assumption: test-time categories are the same as the training ones. However, during deployment in the real world, OD models will encounter Out-of-Distribution (OOD) objects Nitsch et al. [2021], i.e., object categories different from those observed during training. When facing OOD objects, one of two safety-critical (high-risk) situations can arise: either the unknown objects are incorrectly classified as one of the In-Distribution (ID) classes, or the OOD objects are ignored Dhamija et al. [2020]. Figure 1: Predictions of Faster-RCNN trained on two ID datasets on samples from each ID and the OOD datasets in blue rectangles. The first row contains predictions of the Faster-RCNN trained on Pascal-VOC. The second row contains the predictions by the model trained on BDD100k. Ground Truth (GT) labels are shown in clear green. The base model predictions are the inputs to OOD scoring functions; without predictions, objects in images will be ignored by OOD scoring functions too. The proposed FMIYC benchmark removes undesirable semantic overlaps and separates semantically near, far, and farther objects with respect to the ID dataset. FMIYC uses ground truth bounding boxes to leverage OSOD metrics that measure when unknown objects are ignored, when they are detected, and when they are confounded with ID objects. In response to these safety challenges, researchers have developed two primary approaches: Out-of-Distribution Object Detection (OOD-OD) Du et al. [2022b] and Open-Set Object Detection (OSOD) Dhamija et al. [2020]. OOD-OD focuses on identifying predictions that do not belong to the ID categories, while OSOD actively attempts to detect the unknown objects themselves.
Though both approaches address the fundamental problem of encountering objects from a different semantic space than the training distribution, they employ significantly different methodologies, evaluation metrics, and benchmarks. This methodological divergence has led to isolated research communities and evaluation frameworks that fail to capture the complete picture of model performance when encountering unknown objects. Currently, the evaluation of OOD-OD relies, to the best of our knowledge, on a single benchmark: the VOS-benchmark Du et al. [2022b]. The fundamental assumption of this benchmark is that none of the images in the OOD datasets include any of the ID classes, implying non-overlapping semantic spaces. Consequently, any prediction made on the OOD datasets by a model trained on the ID classes is inherently incorrect, regardless of the accuracy of object localization. The benchmark employs the area under the ROC curve (AUROC) and the false positive rate at $95\%$ true positive rate (FPR95) as metrics. However, these metrics can be misleading, as they might suggest that a higher AUROC or lower FPR95 indicates better localization of unknown objects, which is not necessarily true. The current benchmark metrics evaluate how well OOD-OD methods identify incorrect predictions, which may potentially correspond to unknown objects. Yet, they fall short of measuring the actual identification of unknown objects. This raises a critical question: Are AUROC and FPR95 sufficient metrics for assessing the deployment of OOD-OD methods in real-world scenarios? In this study, we identify and address fundamental flaws in the existing OOD-OD benchmark and its metrics, while bridging the gap between OOD-OD and OSOD research communities. We demonstrate that the current evaluation violates the fundamental assumption of non-overlap, as the OOD datasets contain ID classes.
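Both metrics reduce to simple computations over the two score populations produced by a scoring function on ID and OOD predictions. A minimal pure-Python sketch, treating ID scores as positives (function names are illustrative, not from the benchmark's codebase):

```python
def auroc(id_scores, ood_scores):
    """AUROC as the probability that a random ID score exceeds a
    random OOD score (ties count half)."""
    wins = sum(
        1.0 if i > o else 0.5 if i == o else 0.0
        for i in id_scores
        for o in ood_scores
    )
    return wins / (len(id_scores) * len(ood_scores))

def fpr_at_tpr(id_scores, ood_scores, tpr=0.95):
    """False positive rate at the score threshold that keeps `tpr`
    of the ID scores: the fraction of OOD scores above it."""
    k = int((1 - tpr) * len(id_scores))  # ID scores allowed below threshold
    thresh = sorted(id_scores)[k]
    return sum(o >= thresh for o in ood_scores) / len(ood_scores)
```

Note that nothing in either computation looks at bounding boxes or at ground-truth unknown objects, which is exactly why a good AUROC or FPR95 says nothing about whether unknown objects were localized or simply ignored.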
The benchmark may give the misleading impression of evaluating the identification of unknown objects, fails to penalize ignored unknown objects, and lacks proper assessment of object localization precision—issues that cannot be overlooked for safety-critical applications. To address these challenges, we propose FindMeIfYouCan (FMIYC), a comprehensively curated benchmark that: (1) eliminates undesired semantic overlaps between ID and OOD datasets, (2) introduces semantically stratified near, far, and farther OOD splits to evaluate detection robustness across varying levels of semantic similarity, and (3) properly evaluates the actual identification of unknown objects by integrating complementary metrics from the OSOD community, thus providing a robust OOD-OD evaluation framework. By combining strengths from both approaches, our benchmark enables fair comparison across multiple architectures (Faster R-CNN, YOLOv8, RT-DETR) and reveals insights previously obscured in the current standard benchmark. Additionally, we adapt OOD detection methods from image classification as strong baselines for both OOD-OD and OSOD tasks, establishing a solid foundation for future research that can benefit from both perspectives. Contributions. In summary, the main contributions of this work are: • We identify and address fundamental flaws in the existing OOD-OD evaluation methodology, demonstrating how the current approach fails to capture a complete picture of the model’s performance when encountering unknown objects. • We propose FindMeIfYouCan, a benchmark that removes the existing semantic overlaps and introduces stratified near, far, and farther OOD splits for OOD-OD evaluation across varying levels of semantic similarity. • We reveal the limitations of legacy AUROC and FPR95 metrics and integrate complementary metrics from the OSOD community for a comprehensive OOD-OD evaluation that captures disregarded objects. • We assess various methods and architectures for OOD-OD. 
In particular, we enhance OOD-OD detection techniques by incorporating post-hoc methods from image classification. Additionally, we expand the range of evaluated architectures, including the YOLOv8 and RT-DETR architectures alongside the commonly utilized Faster R-CNN, thereby establishing robust baselines for OOD-OD. # 2 Background & Related Work # 2.1 Object Detection An object detector is a model $\mathcal{M}$ that takes as input an image $x$ and generates bounding boxes $b$ and classification scores $c$ for detected objects from a predefined set of categories $\mathcal{C}$ Girshick et al. [2014]. Such models are trained to localize the objects that belong to the ID classes $\mathcal{C}$ and, simultaneously, to ignore the rest of the objects and the background Dhamija et al. [2020]. Consequently, the object detector is usually operated with a confidence threshold $t^*$, chosen as the one that maximizes the mAP on the ID test dataset. All detections below this threshold $t^*$ are discarded. The model output is $\mathcal{M}(x, t^*) = \{b, c\}$. In the remainder of the paper, the terms “unknown” and “OOD” objects are used interchangeably, and refer to classes that do not belong to $\mathcal{C}$. Two problems can arise during real-world deployment when the model encounters an unknown object: it can be incorrectly detected as one of the ID classes with confidence above the confidence threshold $t^*$, or the unknown object may be ignored. Therefore, two approaches exist in the literature to address these problems: OOD-OD and OSOD.
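The thresholded output $\mathcal{M}(x, t^*) = \{b, c\}$ amounts to a filtering step over raw detections. A minimal sketch (the triple layout of a detection is an assumption for illustration):

```python
def threshold_detections(detections, t_star):
    """Keep only detections whose confidence reaches t*.

    `detections` is a list of (bounding_box, class_label, confidence)
    triples; everything below t* is discarded, which is how unknown
    objects scored with low confidence end up silently ignored.
    """
    return [(b, c) for (b, c, conf) in detections if conf >= t_star]
```

This step is where the second failure mode described above originates: an unknown object whose score falls below $t^*$ never reaches the OOD scoring function at all.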
# 2.2 OOD-OD & OSOD Benchmarks Similar to OOD detection for image classification, OOD-OD is formulated as a binary classification task that, for each detected instance $b$ , leverages a confidence scoring function $\mathcal { G }$ with its own threshold $\tau$ to calculate a per-object score $\mathcal { G } ( b )$ that can distinguish between ID and OOD detections. Du et al. [2022b] introduced a benchmark that has been adopted by subsequent works Du et al. [2022a], Wilson et al. [2023], Wu and Deng [2023]. This benchmark utilizes BDD100k Yu et al. [2020] and Pascal-VOC Everingham et al. [2010] as ID datasets, along with subsets of COCO Lin et al. [2014] and Open Images Kuznetsova et al. [2020] as OOD datasets. Models trained on the ID datasets then perform inference on the OOD datasets. The proposed evaluation method is deemed consistent only if it adheres to the critical condition that no ID class appears in any image within the OOD datasets. Consequently, any detection within these OOD datasets is automatically classified as “incorrect”, irrespective of whether the prediction corresponds to a ground-truth OOD object. Conversely, all predictions on the ID test dataset are considered “correct”. With this labeling, the binary classification metrics AUROC and FPR95 are used to assess the efficacy of the OOD detection method. Specifically, these metrics evaluate how effectively $\mathcal { G } ( b )$ assigns different scores to predictions coming from the ID and the OOD datasets Du et al. [2022b]. On the other hand, OSOD directly adds an unknown class to the object detector, alongside the ID classes, for the training process. It was first formalized by Dhamija et al. [2020], whose goal was to tackle the fact that “unknown objects end up being incorrectly detected as known objects, often with very high confidence”.
Moreover, the authors propose a benchmark and associated metrics, where the goal is to accurately detect known (ID) and unknown objects simultaneously, as measured by the metrics described in Section 4.2. The benchmarking setup of OSOD is quite different from that of OOD-OD since, in this setting, the goal is to actively and correctly localize OOD and ID objects at the same time. Also, for OSOD there is no single commonly accepted benchmark; many have appeared Ammar et al. [2024], Miller et al. [2018], Han et al. [2022], Dhamija et al. [2020]. The common rule is that there is one training dataset with a given set of labeled object categories (usually VOC, with 20 categories Everingham et al. [2010]), and one or several subsets of an evaluation dataset that contain the training categories plus other labeled classes, semantically different from the ID ones (usually from COCO Lin et al. [2014]). # 3 Pitfalls of the Current OOD-OD Benchmark Metrics. The current benchmark uses the AUROC and FPR95 metrics inherited from the image classification task. A misconception that may be conveyed by these metrics is that a higher AUROC or lower FPR95 means better localization of OOD objects, which is not necessarily the case. These metrics measure how well OOD-OD methods identify incorrect predictions, which may or may not correspond to ground-truth unknown objects. Therefore, these metrics do not evaluate the correct localization of OOD objects, and cannot measure when OOD objects are ignored. Figure 2 depicts an example of the metric issues described above. For more details on the metrics, see Appendix D. Figure 2: AUROC and FPR95 do not assess whether the relevant unknown objects, such as camels, are overlooked. They only consider incorrect predictions, such as misidentifying a car. Semantic overlaps.
The presence of semantic overlaps calls into question the validity of previously reported results, since the key assumption of the OOD-OD benchmark is that no ID objects are present in any of the images of the OOD datasets. If the assumption is respected, all predictions made in the OOD datasets by the models trained on the ID classes can be safely considered incorrect. In contradiction with this core assumption, as illustrated in Figure 1, labeled and unlabeled people and parts of people are present in the OOD datasets. Another common overlap occurs with respect to the VOC ID class “dining table”. Several images in the OOD datasets contain pictures of dining tables, but the GT labels are at the level of spoons, knives, glasses, and the food itself. For a complete list of overlapping categories in each OOD dataset, and additional examples of overlaps, see Appendix B. The OOD images containing ID classes need to be removed for consistency in the benchmark. Ignored objects. As illustrated in Figure 1, not every image in each OOD dataset gets at least one prediction. The percentage of images with no predictions in the current benchmark can be seen in Table 1, which shows that up to $59 \%$ of images in one of the OOD splits do not have a single prediction above the threshold $t ^ { * }$ . This means that the AUROC and FPR95 metrics reported in previous works Du et al. [2022b], Wilson et al. [2023], Du et al. [2022a], Wu and Deng [2023] are built using only $\sim 4 0 \%$ of the images in that OOD split. By construction, the benchmark metrics cannot be penalized for this, which obscures the omission of a non-negligible percentage of images and objects. To remedy this, we propose using the OSOD metrics presented in Section 4.2. Table 1: Percentage of images with no predictions in the current OOD-OD benchmark. OI=OpenImages Semantically similar categories. We examined the semantic and perceptual similarity between ID and OOD datasets following Abbas et al.
[2023], Mayilvahanan et al. [2023], who postulated that nearest neighbors in the image embedding space of CLIP Radford et al. [2021] share semantic and stylistic characteristics. We calculated the cosine similarity in the CLIP embedding space between the ID and OOD datasets of the current benchmark. As seen in Figure 3a, BDD is farther from its OOD datasets than VOC is from its own. We propose to exploit the different degrees of similarity to create new splits, as detailed in Section 4. Lack of use of ground truth labels. The actual localization of ground truth (GT) unknown objects is crucial information that the current benchmark fails to utilize. A comprehensive evaluation of a system’s behavior regarding unknown objects is incomplete if it only considers the detection of incorrect predictions. Identifying wrong predictions is indeed crucial, yet overlooking unknown objects can be as hazardous as misclassifying them, as presented in Figure 2. The OSOD community has developed a set of metrics that can evaluate the ability of methods to localize unknowns and quantify instances where unknowns are ignored or confused with ID objects. In addition to the current metrics, we propose leveraging GT labels to enable a more detailed evaluation by employing the OSOD metrics described in Section 4.2. # 4 The FMIYC Benchmark # 4.1 Creating the Evaluation Splits The overlap removal process for the VOS benchmark datasets was conducted in two stages. Initially, an automatic stage was implemented to eliminate labeled instances of overlapping categories. Subsequently, a manual verification stage was carried out, during which the remaining images were individually inspected to ensure that no unlabeled instances of ID categories remained. Afterward, the split into near and far subsets was performed with respect to Pascal-VOC as the ID dataset.
Again, splitting into near and far subsets began with an automatic phase, in which images containing the predefined near categories were put into the near dataset and the remaining images went to the far dataset. Then, a manual check was performed in which the images in the far dataset were inspected to ensure no near category was present, and vice versa. This procedure was carried out for both COCO and OpenImages as OOD datasets. As a result, there are four OOD datasets with respect to Pascal-VOC: COCO-near, COCO-far, OpenImages-near, and OpenImages-far. For instance, when Pascal-VOC is the ID dataset, the following ID categories have at least one semantically and visually close OOD category: television, dog, cat, horse, cow, and couch. Some of the similar OOD categories are: laptop, fox, bear, jaguar, leopard, cheetah, zebra, and bed. Appendix B presents a complete list and discussion of the near OOD categories. To enhance the newly created near and far splits, additional images from each of the original datasets were incorporated into each split. The process involved pre-selecting a set of candidates for each new dataset by excluding categories that overlapped with the ID ones and utilizing the existing categories within each dataset. Each candidate image was then manually reviewed to ensure there was no overlap and to confirm its correct assignment to either the far or near subset. The entire process was carried out by manually recording image IDs in configuration files for each subset, ensuring that the construction is fully reproducible from beginning to end. The code that creates the new splits is available in the repository: FMIYC OOD-OD Benchmark Repository. The dataset is hosted on Hugging Face as FindMeIfYouCan.
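The semantic stratification above relies on similarity in CLIP embedding space. A minimal sketch of one way to score a candidate OOD image set against an ID set, assuming embeddings have already been extracted with a CLIP image encoder (the random arrays below merely stand in for real embeddings, and `mean_nn_cosine` is our illustrative name, not the paper's actual code):

```python
import numpy as np

def mean_nn_cosine(emb_id, emb_ood):
    """Average nearest-neighbour cosine similarity of each OOD embedding
    to the ID embedding set; higher values indicate a 'nearer' split."""
    a = emb_id / np.linalg.norm(emb_id, axis=1, keepdims=True)
    b = emb_ood / np.linalg.norm(emb_ood, axis=1, keepdims=True)
    sims = b @ a.T                      # (n_ood, n_id) cosine similarities
    return float(sims.max(axis=1).mean())

rng = np.random.default_rng(0)
emb_id = rng.normal(size=(100, 512))              # stand-in for CLIP ID embeddings
near = emb_id + 0.1 * rng.normal(size=(100, 512)) # slight perturbations: a "near" set
far = rng.normal(size=(100, 512))                 # unrelated vectors: a "far" set
assert mean_nn_cosine(emb_id, near) > mean_nn_cosine(emb_id, far)
```

Ranking candidate splits by such a score is one plausible way to decide which images belong in near versus far subsets before the manual verification pass.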
Following the observations in Figure 3a and the manual inspection of images, for BDD100k as the ID dataset, only the removal of images overlapping with labeled or unlabeled ID classes was performed, without the creation of separate far or near subsets or the addition of new images. This is because, as can be seen in Figure 3a, BDD100k is already farther from its respective OOD datasets than Pascal-VOC. Figure 3: Perceptual and semantic (cosine) similarity Mayilvahanan et al. [2023] between ID and OOD datasets using CLIP image encoder embeddings. (a) Current benchmark: VOC is semantically and visually more similar to its OOD datasets than BDD. (b) The FMIYC benchmark distinction of near, far and farther splits can be appreciated. The visualization of images that illustrate the semantic and visual similarity among all ID and OOD datasets can be found in Appendix B. This situation allows for the distinction of three degrees of similarity between ID and OOD datasets: we have near and far for the OOD datasets with respect to Pascal-VOC, and we argue (after considering Figure 3b and the results) that the OOD datasets with respect to BDD can be called farther OOD. This distinction will prove insightful when considering the results in Section 5. The number of images in each of the subsets of the new benchmark can be found in Table 2. In addition, Figure 3b shows the CLIP vision embedding similarity for each new split. Table 2: Number of images in each subset of the newly proposed benchmark # 4.2 Proposed Metrics OSOD Metrics. The OSOD community uses as metrics the absolute open-set error (AOSE), the wilderness impact (WI), the unknown precision $( P _ { U } )$ , unknown recall $( R _ { U } )$ , and the average precision of the unknowns $( A P _ { U } )$ Gupta et al. [2022], Miller et al. [2018], Maaz et al. [2022]. The AOSE reports the absolute number of unknown objects incorrectly classified as one of the ID classes.
WI evaluates the proportion of the AOSE among all the known detections. The unknown recall $R _ { U }$ is the ratio of detected unknown objects to the total number of unknown objects, and the unknown precision $P _ { U }$ is the ratio of true positive unknown detections to all detections Ammar et al. [2024]. The OSOD metrics are fine-grained in the sense that they assess how well methods can localize and correctly classify known and unknown objects in images where both types of objects appear. In addition to the widely used metrics of AUROC and FPR95, we propose using the following OSOD metrics: $A P _ { U }$ , $P _ { U }$ , and $R _ { U }$ . We omit the WI since our benchmark does not allow both ID and OOD classes in the OOD datasets. In addition, we propose a new metric that we call the normalized open set error (nOSE), which is the AOSE divided by the total number of (labeled) unknowns. We propose this metric because the absolute number of unknowns depends on the dataset; the AOSE is therefore not comparable across datasets, whereas the nOSE is. The nOSE assesses the proportion of unknown objects detected as one of the ID classes. A summary of all metrics used in the FMIYC benchmark can be found in Appendix D. # 5 Experiments and Results # 5.1 Object Detection Architectures We used Faster-RCNN Girshick et al. [2014] in its vanilla and VOS (regularized) versions, YOLOv8 Jocher et al. [2023], Sohan et al. [2024], and RT-DETR Zhao et al. [2024]. For YOLOv8 and RT-DETR, the models were trained on the same ID datasets (Pascal-VOC and BDD100k). The training details can be found in Appendix G. For the Faster-RCNN models, we used the pre-trained checkpoints provided by Du et al. [2022b]. Table 3 shows each architecture’s mAP for each ID dataset. # 5.2 Out-of-Distribution Object Detection Methods We implemented prominent methods from the OOD detection literature on image classification.
Specifically, we selected post-hoc methods, as they do not require retraining of the base model. Consequently, we adapted the common families of methods from image classification to operate at the object level, as detailed below. Output-based post-hoc methods take the logits, or the softmax activations, as inputs to their scoring functions. Here we find MSP Hendrycks and Gimpel [2016], the energy score Liu et al. [2020], and GEN Liu et al. [2023]. Table 3: mAP across architectures and VOC & BDD ID datasets Feature-space post-hoc methods use the previous-to-last activations as the input to the scoring functions. To this category belong kNN Sun et al. [2022], DDU Mukhoti et al. [2023], and Mahalanobis Lee et al. [2018]. Mixed output-feature-space post-hoc methods rely on both the previous-to-last activations and the outputs as inputs to the scoring functions. Here we find ViM Wang et al. [2022], ASH Djurisic et al. [2022], DICE Sun and Li [2022], and ReAct Sun et al. [2021]. Latent-space post-hoc methods. We take inspiration from recent works Yang et al. [2023], Mukhoti et al. [2023], Arnez et al. [2024] and implement an adapted confidence score, called LaRD, that uses the latent activations of a given intermediate or hidden layer. The adaptation of post-hoc methods for object detection is quite straightforward, as it is based on the filtering mechanisms used by each architecture. All object detectors deliver many predictions (usually $\sim 1 0 0 0$ ). Then, a first filtering is done based on the threshold $t ^ { * }$ (see Section 2). The predictions with a score above $t ^ { * }$ go through non-maximum suppression (NMS) for Faster-RCNN and YOLOv8. Next, for each retained prediction, it is possible to access the full logits, and (except for YOLOv8) it is also possible to access the previous-to-last layer features associated uniquely with each predicted object.
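As a minimal sketch, the two simplest output-based scores named above can be computed per retained detection from its logit vector; the function names and the 95%-ID thresholding helper are ours for illustration, not part of any released codebase.

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability (Hendrycks & Gimpel): higher = more ID-like."""
    z = logits - logits.max(axis=-1, keepdims=True)   # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return p.max(axis=-1)

def energy_score(logits, T=1.0):
    """Negative free energy (Liu et al.): T * logsumexp(logits / T), higher = more ID-like."""
    z = logits / T
    m = z.max(axis=-1)
    return T * (m + np.log(np.exp(z - m[..., None]).sum(axis=-1)))

def threshold_at_id_rate(id_scores, rate=0.95):
    """Threshold tau such that `rate` of the ID scores lie above it."""
    return np.quantile(id_scores, 1.0 - rate)

confident = np.array([[10.0, 0.0, 0.0]])  # peaked logits: ID-like detection
flat = np.array([[1.0, 1.0, 1.0]])        # flat logits: OOD-like detection
assert msp_score(confident)[0] > msp_score(flat)[0]
assert energy_score(confident)[0] > energy_score(flat)[0]
```

The same per-detection pattern extends to the feature-space scores, which take the previous-to-last activations instead of the logits.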
For YOLOv8, only MSP, GEN, and energy could be tested, as this network does not have a final fully connected layer or a set of latent features that can be directly linked with a predicted object. In addition to the adapted post-hoc OOD detection methods, we evaluated the VOS method Du et al. [2022b], i.e., the regularized Faster-RCNN with the energy score. For both versions of Faster-RCNN, all post-hoc methods were tested. The confidence score threshold for each OOD detection method was calculated automatically so that, for each score, $9 5 \%$ of the ID samples lie above the threshold. # 5.3 Results In Figure 4, we present a summarized plot of the AUROC and FPR95 metrics from the new FMIYC benchmark, averaged across architectures for each family of methods and each OOD dataset. Feature-based methods and those utilizing latent representations tend to identify incorrect predictions more effectively in the farther split than in the other splits. Conversely, mixed methods exhibit a decline in performance as semantic distance increases. Overall, there is no distinct trend among baseline families indicating whether incorrect detections are more easily identified for near, far, or farther objects. This observation may be surprising; however, the differences among splits become more apparent when considering the OSOD metrics discussed subsequently. Figure 5 illustrates the results for the incorporated OSOD metrics, averaged across architectures for each family of methods and each OOD dataset split. For the nOSE, there is a clear decreasing trend across method families when transitioning from the near to the farther splits. The near datasets exhibit the highest nOSE, indicating that, among the correctly localized objects, more objects are mistakenly predicted as one of the ID classes. Conversely, objects in the farther split are less often confounded with ID objects.
Regarding the $A P _ { U }$ , it is generally observed to be low across OOD datasets, decreasing further in the farther datasets. This suggests that objects that are semantically near are localized more accurately. Feature-based methods and those utilizing latent space representations appear to perform better than other methods for the farther objects. Figures 4 and 5: bar charts of each metric averaged across architectures, with one panel per method family (Output-based, Feature-based, Latent Rep., Mixed/Hybrid) over the COCO and OpenImages Near, Far, and Farther splits, plus the average Near–Far–Farther trend. The $P _ { U }$ exhibits the highest variability across methods and also the highest values among the OSOD metrics. It is particularly elevated for the near splits but drops drastically for the farther objects, indicating that in such splits more OOD predictions do not correspond to ground truth objects, as illustrated in Figure 2. Finally, the $R _ { U }$ is generally quite low across OOD datasets and methods, with a similar trend showing that objects in the far and farther OOD datasets are harder to detect. The metrics reveal that, on average, most unknown objects are ignored (not found), and this challenge is even more pronounced for far and farther OOD objects. For the near splits, $\sim 14 \%$ of unknown objects are correctly identified. This figure drops to approximately $3 \%$ in the farther splits for output-based and mixed methods. However, feature-based and latent representation methods perform slightly better, identifying $\sim 9 \%$ of the unknown objects in the farther splits. For a comprehensive presentation of the results for each architecture, method, and metric, please refer to Appendix E. It is important to note how unrelated the previous OOD-OD benchmark metrics may seem with respect to the OSOD metrics. The AUROC and FPR95 cannot tell much difference between the far and near datasets. This difference becomes clear in light of the OSOD metrics, which show that, contrary to the case of image classification, for object detection the semantically and visually closer objects are easier to identify and localize.
But when the unknown objects are too different from the ID ones, they will most likely be ignored by the methods and architectures evaluated. These insights are impossible to obtain using only the AUROC and FPR95. # 6 Discussion The value of OSOD metrics. It is crucial to note that the OSOD metrics are necessary to quantify the effectiveness of OOD-OD methods in detecting actual OOD objects ( $A P _ { U }$ and $P _ { U }$ ) and accounting for instances when OOD objects are overlooked $( R _ { U } )$ or misclassified (nOSE). Unlike AUROC and FPR95, the OSOD metrics provide a more nuanced understanding by addressing the confounding of unknowns with ID objects, the oversight of OOD objects, and the localization of unknowns. The added value of the OSOD metrics is clearer when considering the semantically stratified splits. Near, far and farther splits. The partition of the benchmark into near, far, and farther splits proved insightful and meaningful, since it shows that semantic similarity plays an important role in the detection ability of different methods and architectures. It is especially insightful that near OOD objects are more easily detectable than far and farther ones in the case of object detection. This is the opposite of the case of image classification, where near classes are considered harder than far ones. We may hypothesize that since OD deals with multiple objects per image and also with the task of localization, it might in fact be the localization part that facilitates finding near unknowns. However, the near objects are also more easily confounded with ID objects, in agreement with image classification observations. Moreover, the observation that far and farther objects are more usually ignored, and therefore are hardly localizable, is demonstrated by the OSOD metrics, as only around $5 \%$ of the unknown objects are localized, as opposed to about $20 \%$ for some methods in the near datasets. Why not only use OSOD?
The main limitation of the OSOD metrics is their dependence on correct and exhaustive GT labels, since unlabeled unknown objects are present in the OOD datasets. The OSOD metrics cannot correctly handle the situation in which an unlabeled unknown object is detected as such. For this case, the OOD-OD metrics are relevant. We argue that both sets of metrics together give a deeper understanding of OD models and methods when facing unknown objects. This work quantifies and confirms that OOD-OD methods can find unknown objects, even if that is not their explicit goal. It is to be noted that the results depend on the OD threshold $t ^ { * }$ . Therefore, it can be tuned to match certain requirements. For instance, if lowered, more low-confidence predictions could appear, with the consequence that OOD-OD methods would have more candidates and could find more unknown objects if present. For a more in-depth discussion of the nuances and relations between OOD-OD and OSOD, refer to Appendix H. Future work. Inspired by the BRAVO benchmark for semantic segmentation Vu et al. [2024], one interesting possible avenue for this work is to enrich the benchmark by generating a split that includes synthetically generated objects alongside the real ones. Another direction that could be explored is how vision-language models (VLMs) Zhang et al. [2024] perform in the benchmark in comparison with the already tested architectures. To the best of our knowledge, no work has yet proposed any specific method for OOD-OD using VLMs Miyai et al. [2024], Zhang et al. [2025].
State-of-the-art Object Detection (OD) methods predominantly operate under a closed-world assumption, where test-time categories match those encountered during training. However, detecting and localizing unknown objects is crucial for safety-critical applications in domains such as autonomous driving and medical imaging. Recently, Out-Of-Distribution (OOD) detection has emerged as a vital research direction for OD, focusing on identifying incorrect predictions typically associated with unknown objects. This paper shows that the current evaluation protocol for OOD-OD violates the assumption of non-overlapping objects with respect to the In-Distribution (ID) datasets, and obscures crucial situations such as ignoring unknown objects, potentially leading to overconfidence in deployment scenarios where truly novel objects might be encountered. To address these limitations, we manually curate and enrich the existing benchmark by exploiting semantic similarity to create new evaluation splits categorized as $\textit{near}$, $\textit{far}$, and $\textit{farther}$ from ID distributions. Additionally, we incorporate established metrics from the Open Set community, providing deeper insights into how effectively methods detect unknowns, when they ignore them, and when they mistakenly classify OOD objects as ID. Our comprehensive evaluation demonstrates that semantically and visually close OOD objects are easier to localize than far ones, but are also more easily confounded with ID objects. $\textit{Far}$ and $\textit{farther}$ objects are harder to localize but less prone to being taken for an ID object.
[ "cs.CV" ]
# 1 Introduction With the rapid development of large-scale software systems, effective fault localization (FL) methods have become crucial. Over the years, numerous FL approaches (e.g. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]) have been proposed to identify faulty statements in programs. These approaches typically fall into two categories: spectrum-based fault localization (SFL) [5, 14] and deep learning-based fault localization (DLFL) [9, 15, 16]. Both rely on the execution information of test cases, e.g. coverage information, denoting whether a statement is executed or not, and test results, represented as a passing or failing outcome. Based on this execution information, FL approaches apply suspiciousness evaluation algorithms, such as correlation coefficients for SFL or neural networks for DLFL, to rank program statements by their likelihood of being faulty [17, 18]. Thus, test cases are indispensable for conducting effective fault localization. FL approaches classify test cases into two classes, passing test cases and failing ones, to analyze their distinct behaviors and pinpoint the location of a fault. However, a significant challenge is the class imbalance between passing and failing test cases, where failing test cases are often far fewer in number. This class imbalance can introduce bias into the suspiciousness evaluation [19, 20], and prior research [21] has shown that a more balanced dataset can improve the effectiveness of FL. Recent research has focused on developing data augmentation approaches for FL that generate balanced test suites to enhance the effectiveness of FL. As modern software systems grow increasingly complex, the dimensionality of the execution information of test cases (e.g. coverage information, which scales with the size of a program) becomes exceedingly high, so these data augmentation approaches usually perform dimensionality reduction before data generation.
Thus, a typical workflow involves two parts: dimensionality reduction and data generation. Specifically, it first applies dimensionality reduction techniques to the test cases, followed by data generation approaches to generate new failing test cases until the dataset becomes balanced, i.e. the number of failing test cases equals that of passing ones. In the dimensionality reduction phase, current approaches [10, 11, 12, 13] use either program slicing [22] or linear dimensionality reduction techniques (e.g. linear discriminant analysis [23] and principal component analysis [24]). Despite the promising FL results delivered by these existing approaches, they are still limited. Program slicing focuses on the context of a program based on semantic properties, while linear dimensionality reduction simplifies the dataset based on statistical properties. These approaches, when used separately, can fail to capture both the semantic context of a program and the global statistical information. In the data generation phase, current approaches can be roughly categorized into traditional transformation approaches and deep learning-based ones. Traditional transformation approaches directly modify failing test cases to generate new ones, e.g. Lamont [12] applied SMOTE, and PRAM [11] used a Mixup approach [25] inspired by image augmentation techniques. While effective, these approaches struggle to capture deeper features in the data. To further capture deep features, deep learning-based approaches, e.g. generative adversarial networks (GANs [26]) and conditional variational autoencoders (CVAEs [27]), train models that learn the characteristics of both passing and failing test cases to generate new failing test cases. Prior work [13, 10] has demonstrated the effectiveness of using CVAE and GAN for data augmentation in FL, particularly for DLFL. However, these approaches share a common limitation: both GANs and CVAE consist of two components, i.e.
GANs with a generator and discriminator, and CVAE with an encoder and decoder. These components are interdependent during training, and the potential for capability mismatches between them can result in unstable sample quality [28]. To address these issues, we propose PCD-DAug: a Principal Context-aware Diffusion guided Data Augmentation approach that generates synthesized failing test cases for improving FL. The basic idea of PCD-DAug is to combine program semantic properties (i.e. program slicing) with statistical data properties (i.e. principal component analysis) to construct a dimensionality-reduced context, and to use a component-independent deep learning network (i.e. a diffusion model) to learn from the context and generate minority-class data (i.e. failing test cases) for improving FL. For acquiring a dimensionality-reduced context, PCD-DAug uses dynamic program slicing [29] to capture program semantic properties via program dependencies showing how a set of statements influences the faulty output, and leverages a revised principal component analysis (PCA) [30] to extract global statistical data properties from the coverage data; it then combines the two properties to define a principal context. For generating synthesized failing test cases, PCD-DAug uses a conditional diffusion model [31] to learn the principal context without interdependent components. Unlike GAN and CVAE, which require the training of two interdependent components, diffusion models [32] consist of two processes: a deterministic forward process and a trainable reverse process. The forward process, which progressively adds noise to degrade the original data, is defined by a mathematical formula and requires no training. Model training focuses solely on the reverse process, which learns to progressively denoise the data to recover failing test cases. This not only avoids the capability mismatch common in GANs and CVAE but also simplifies the training process.
By training only the reverse denoising process, we achieve more efficient and stable generation of failing test cases, ultimately improving FL effectiveness. To evaluate PCD-DAug, we conducted large-scale experiments on 262 versions across five benchmarks. We applied PCD-DAug to six state-of-the-art FL approaches and compared it with six data augmentation approaches. The experimental results show that PCD-DAug significantly improves the effectiveness of all six FL approaches and outperforms the six data augmentation approaches. For example, compared to the six state-of-the-art FL methods, PCD-DAug improves FL effectiveness by an average of $3 8 3 . 8 3 \%$, $2 2 7 . 0 8 \%$, and $2 2 4 . 1 9 \%$ on the Top-1, Top-3, and Top-5 metrics, respectively; compared to the state-of-the-art data augmentation approaches, PCD-DAug achieves improvements of $3 4 . 5 1 \%$, $0 . 5 6 \%$, and $3 . 4 0 \%$ on the same metrics. The main contributions of this paper can be summarized as follows: • We propose PCD-DAug: a principal context-aware diffusion guided data augmentation approach that integrates a principal context with a diffusion model, generating synthesized failing test cases to acquire a class-balanced dataset for improving FL. • We devise a principal context combining program semantic properties (i.e. program slicing) with statistical data properties (i.e. revised PCA), guiding the data synthesis process within a diffusion model framework. • We conduct comprehensive experiments involving six state-of-the-art FL techniques alongside six data augmentation approaches. Our results show that PCD-DAug significantly improves FL effectiveness. • We open-source the replication package online1, including all source code. The rest of this paper is structured as follows. Section 2 introduces background information. Section 3 presents our approach PCD-DAug. Section 4 and Section 5 present the experimental results and discussion. Section 6 draws the conclusion.
# 2 Background
# 2.1 Diffusion Model
Diffusion models are a type of generative model applied in tasks such as image generation and text-to-image synthesis. In recent years, they have gained significant attention due to their impressive performance in both text-to-image and text-to-video generation. A diffusion model consists of two main components: the forward process, also known as the diffusion process, and the reverse process. In the forward process, Gaussian noise is gradually added at each time step until the data becomes fully noisy. The reverse process, in turn, predicts and removes the Gaussian noise step by step, ultimately recovering the original sample. The detailed processes are as follows: Forward Process. In the forward process, an original data sample $\mathbf { x } _ { 0 }$ undergoes a series of transformations in which Gaussian noise is added progressively at each time step. This process is modeled as a Markov chain and can be described mathematically as follows: $$ \begin{array} { l } { \displaystyle q ( \mathbf { x } _ { 1 : T } | \mathbf { x } _ { 0 } ) : = \prod _ { t = 1 } ^ { T } q ( \mathbf { x } _ { t } | \mathbf { x } _ { t - 1 } ) } \\ { \displaystyle q ( \mathbf { x } _ { t } | \mathbf { x } _ { t - 1 } ) : = \mathcal { N } ( \mathbf { x } _ { t } ; \sqrt { 1 - \beta _ { t } } \, \mathbf { x } _ { t - 1 } , \beta _ { t } \mathbf { I } ) } \end{array} $$ where $\mathbf { x } _ { t }$ represents the data at time step $t$ in the forward process, and $\mathbf { x } _ { 0 }$ is the original sample. $\beta _ { t }$ is the variance schedule, which controls the amount of noise added at each step. The variance can either be learned through reparameterization [33] or kept as a constant parameter [32].
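As a concrete illustration, the stepwise forward process above can be sketched in NumPy; this is our toy sketch, in which the linear $\beta$ schedule and the dimensionality are illustrative choices, not a configuration from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear variance schedule beta_1..beta_T (not the paper's setting).
T = 1000
betas = np.linspace(1e-4, 0.02, T)

def forward_step(x_prev, t):
    """Sample x_t ~ q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I)."""
    noise = rng.standard_normal(x_prev.shape)
    return np.sqrt(1.0 - betas[t]) * x_prev + np.sqrt(betas[t]) * noise

x = rng.standard_normal(10_000)  # toy high-dimensional sample standing in for x_0
for t in range(T):
    x = forward_step(x, t)

# After T steps, x_T is close to a sample from a standard Gaussian:
print(abs(x.mean()) < 0.1, 0.9 < x.std() < 1.1)
```

Each step shrinks the signal by $\sqrt{1-\beta_t}$ and injects variance $\beta_t$, so the marginal variance stays bounded while the dependence on $\mathbf{x}_0$ decays, which is why $\mathbf{x}_T$ approaches $\mathcal{N}(0, \mathbf{I})$.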
As Gaussian noise is continuously added over time, the original sample $\mathbf { x } _ { 0 }$ is eventually transformed into an indistinguishable noisy version $\mathbf { x } _ { T }$, which approximates a sample from a standard Gaussian distribution. The forward process thus transforms the data distribution into a noise distribution. This transformation can also be described in closed form as: $$ \mathbf { x } _ { t } = \sqrt { \bar { \alpha } _ { t } } \, \mathbf { x } _ { 0 } + \sqrt { 1 - \bar { \alpha } _ { t } } \, \epsilon $$ where $\alpha _ { t } = 1 - \beta _ { t }$, $\begin{array} { r } { \bar { \alpha } _ { t } = \prod _ { s = 1 } ^ { t } \alpha _ { s } } \end{array}$, and $\epsilon \sim \mathcal { N } ( 0 , \mathbf { I } )$ is Gaussian noise. Reverse Process. The reverse process is designed to recover the original data sample $\mathbf { x } _ { 0 }$ from the fully noisy data $\mathbf { x } _ { T } \sim \mathcal { N } ( 0 , \mathbf { I } )$. This reverse transformation is achieved through a step-by-step denoising procedure, modeled by a Markov chain. The reverse process can be expressed as: $$ p _ { \theta } ( \mathbf { x } _ { 0 : T } ) = p ( \mathbf { x } _ { T } ) \prod _ { t = 1 } ^ { T } p _ { \theta } ( \mathbf { x } _ { t - 1 } | \mathbf { x } _ { t } ) $$ where $p _ { \theta } ( \mathbf { x } _ { t - 1 } | \mathbf { x } _ { t } )$ represents the conditional probability of transforming $\mathbf { x } _ { t }$ into $\mathbf { x } _ { t - 1 }$, which is parameterized as a Gaussian distribution: $$ p _ { \theta } ( \mathbf { x } _ { t - 1 } | \mathbf { x } _ { t } ) = \mathcal { N } ( \mathbf { x } _ { t - 1 } ; \mu _ { \theta } ( \mathbf { x } _ { t } , t ) , \Sigma _ { \theta } ( \mathbf { x } _ { t } , t ) ) $$ where $\mu _ { \theta } ( \mathbf { x } _ { t } , t )$ and $\Sigma _ { \theta } ( \mathbf { x } _ { t } , t )$ are the mean and variance predicted by the model.
In most cases, the variance $\Sigma _ { \theta } ( \mathbf { x } _ { t } , t )$ is set to a constant, defined as: $$ \Sigma _ { \theta } ( \mathbf { x } _ { t } , t ) = \sigma _ { t } ^ { 2 } \mathbf { I } , \quad \sigma _ { t } ^ { 2 } = \frac { 1 - \bar { \alpha } _ { t - 1 } } { 1 - \bar { \alpha } _ { t } } \beta _ { t } $$ As for the mean $\mu _ { \theta } ( \mathbf { x } _ { t } , t )$, it is computed by removing the noise predicted by the model $\epsilon _ { \theta } ( \mathbf { x } _ { t } , t )$, and is given by: $$ \mu _ { \theta } ( \mathbf { x } _ { t } , t ) = \frac { 1 } { \sqrt { \alpha _ { t } } } \left( \mathbf { x } _ { t } - \frac { \beta _ { t } } { \sqrt { 1 - \bar { \alpha } _ { t } } } \epsilon _ { \theta } ( \mathbf { x } _ { t } , t ) \right) $$ In this formulation, $\epsilon _ { \theta } ( \mathbf { x } _ { t } , t )$ represents the noise component predicted by the model, which is usually parameterized by a neural network. A commonly used architecture for $\epsilon _ { \theta }$ is the U-Net [34] or the Transformer [35], which allows efficient denoising at each time step. By iteratively applying this reverse process, the diffusion model progressively removes the Gaussian noise from $\mathbf { x } _ { T }$, ultimately reconstructing an approximation of the original data $\mathbf { x } _ { 0 }$. Optimization. The core objective of the diffusion model is to minimize the difference between the noise added during the forward process and the noise predicted during the reverse process. The diffusion model seeks to align the posterior distribution from the forward process with the prior distribution in the reverse process. This alignment is typically achieved by minimizing the Kullback-Leibler (KL) divergence between the two distributions.
For simplification, the objective function can be expressed as a mean squared error (MSE) loss between the true noise and the predicted noise: $$ L _ { \mathrm { s i m p l e } } = \mathbb { E } _ { t , \mathbf { x } _ { 0 } , \epsilon } \left[ \left\| \epsilon - \epsilon _ { \theta } \left( \sqrt { \bar { \alpha } _ { t } } \mathbf { x } _ { 0 } + \sqrt { 1 - \bar { \alpha } _ { t } } \epsilon , t \right) \right\| ^ { 2 } \right] $$ Here, $\epsilon$ represents the Gaussian noise added to the original data sample $\mathbf { x } _ { 0 }$ during the forward process, and $\epsilon _ { \theta }$ is the noise predicted by the model at step $t$. The goal of training is to minimize the distance between the added noise and the model’s predicted noise, thereby learning an effective denoising function.
# 2.2 Program Slice
Program slicing is a decomposition technique used to extract the parts of a program that directly or indirectly influence the values computed at a specific program point, referred to as the slicing criterion [36, 37]. A slicing criterion typically consists of a pair $\langle p , V \rangle$, where $p$ is a program location and $V$ is a set of variables of interest. The subset of the program that affects the values of these variables at $p$ is known as the program slice. This technique analyzes both control dependencies and data dependencies to determine which parts of the program impact the specified point. By isolating these dependencies, program slicing can aid in identifying the source of program failures, providing a more precise, context-aware view for debugging. Program slicing can be categorized into static slicing and dynamic slicing [29], depending on whether specific program inputs are considered in the slicing process. Static program slicing does not take specific program inputs into account. It analyzes all possible execution paths based solely on the program’s structure to identify statements that may influence the value of a particular variable.
Static slices include all potential paths and are useful for providing a comprehensive analysis of the program’s control and data flows, helping developers understand the overall behavior of the program. Dynamic program slicing is an important technique for debugging, as it includes only the statements along the execution path that affect the value of a variable at a specific program point for a given input. The slicing criterion in dynamic slicing is extended to a triplet $\langle p , V , I \rangle$ , where $I$ represents the set of inputs. Dynamic slicing can provide more precise slices by focusing on relevant execution paths, but at the cost of requiring actual program runs. By comparing the two approaches, dynamic slicing offers more refined results, especially when debugging failure under specific test cases. # 3 Approach This section will introduce our approach PCD-DAug $\because$ a Context-Aware and PCA-enhanced Diffusion Model for data augmentation in fault localization. As shown in Figure 1, PCD-DAug operates in three main stages: PCD-DAug first applies dynamic program slicing to capture the fault semantic context based on the program’s structure and data dependencies. Next, by using a revised PCA on the raw data, we extract the statistical context from a statistical analysis perspective. These two contexts are then merged and fed into the training process of the diffusion model, which requires training only the reverse denoising process. Finally, the trained context-aware diffusion model is used to generate new failing test cases, iteratively synthesizing data until a class-balanced dataset is achieved, where the number of failing test cases matches the passing ones. This class balance significantly improves the performance of fault localization. Figure 1: Architecture of PCD-DAug . Figure 2: Data synthesis stage of PCD-DAug . 
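The third stage's generate-until-balanced loop can be sketched as follows; this is a minimal illustration in which a simple perturbation of existing failing rows stands in for the trained diffusion model (the function names and toy data are ours, not part of the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def balance_with_generator(X, y, generate_failing):
    """Append synthetic failing rows until #failing == #passing.

    X: (M, K) reduced coverage matrix; y: (M,) labels with 1 = failing test.
    generate_failing: callable returning one synthetic failing coverage row
    (a stand-in here for the trained context-aware diffusion model).
    """
    X, y = list(X), list(y)
    while sum(y) < len(y) - sum(y):      # fewer failing than passing rows
        X.append(generate_failing())
        y.append(1)
    return np.array(X), np.array(y)

# Toy data: 4 passing rows and 2 failing rows over 5 statements.
X = rng.integers(0, 2, size=(6, 5))
y = np.array([0, 0, 1, 0, 0, 1])

# Stand-in generator: perturb a randomly chosen existing failing row.
fails = X[y == 1]
gen = lambda: np.clip(fails[rng.integers(len(fails))] + rng.integers(-1, 2, 5), 0, 1)

Xb, yb = balance_with_generator(X, y, gen)
print(yb.sum(), len(yb) - yb.sum())   # → 4 4
```

The loop stops exactly when the failing count reaches the passing count, mirroring the class-balance termination condition described above.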
# 3.1 Fault Semantic Context Construction
The fault semantic context refers to the subset of statements whose execution leads to failing outputs. We employ dynamic program slicing to construct this fault semantic context, as it relies on specific program inputs, aligning with the generation process of the raw data (i.e., the coverage matrix and error vector) in FL. The raw data is derived from runtime information collected when executing the test suite on the program. Furthermore, numerous studies [10, 11] have shown that dynamic program slicing enhances the effectiveness of FL techniques. To construct the fault semantic context, we define the dynamic program slicing criterion ScContext as: $$ \mathit { S c C o n t e x t } = ( o u t p u t S t m , o u t p u t V a r , i n p u t T e s t ) $$ where $outputStm$ represents a point of interest in the program, typically a specific statement; $outputVar$ refers to the set of variables used at $outputStm$; and $inputTest$ represents the inputs of the failing test cases. In previous works [10] and [11], the failing test case with the fewest executed statements was selected for dynamic program slicing to build the context. This approach effectively reduces data dimensionality and focuses attention on a small number of statements, which are most likely to involve single-type faulty statements. However, for more complex programs containing multiple faulty statements or faults of different types, relying on a single failing test case for slicing is insufficient to capture the complete set of faulty statements. Therefore, we use multiple failing test cases to construct a more comprehensive fault semantic context. Thus, PCD-DAug generates a new $M \times K ^ { \prime }$ context matrix and a new $1 \times K ^ { \prime }$ statement index.
This context matrix integrates the set of faulty statements responsible for multiple faults by removing duplicate statements across the slices of multiple failing test cases, resulting in a comprehensive and refined representation of the faulty context.
# 3.2 Statistical Context Construction
Figure 2 illustrates the architecture of our model, in which we use a simplified U-Net to implement the diffusion model. The U-Net architecture includes a single downsampling layer and a single upsampling layer. However, the fault semantic context only includes a subset of statements derived through dynamic program slicing, capturing structural and data dependencies based on specific inputs. This context is therefore limited to local information tied to particular inputs. To address this limitation, we introduce a revised PCA [30], which not only retains the statistical properties of the data but also enriches the context by incorporating statistical dependencies, complementing the structural information obtained from program slicing. Through this fusion, we also ensure that the context matches the model’s dimensional requirements. Algorithm 1 describes feature selection using the revised PCA. It takes the coverage matrix $X$, the statement set $StmSC$ from program slicing, and two key parameters: the number of largest eigenvalues $m$ and the number of principal components $K ^ { \prime \prime }$. It first computes the covariance matrix $covX$, solves for eigenvalues and eigenvectors, and selects the top $m$ eigenvectors (Steps 1-4). Contribution values $c _ { i }$ are calculated by summing the $m$ elements in each row of matrix $V$, and the indices of the largest contributions are stored in $iContriMax$ (Steps 6-8). Finally, it generates the statistical context matrix $X _ { P C A }$ of size $M \times K ^ { \prime \prime }$ and the context index vector $StmPCA$ by selecting the columns of $X$ corresponding to $iContriMax$, and returns $X _ { P C A }$ and $StmPCA$ (Steps 9-14).
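A NumPy sketch of this selection procedure follows; the function name `revised_pca_select` is ours, and we use a descending `argsort` of the contribution values in place of repeated argmax:

```python
import numpy as np

def revised_pca_select(X, m, k):
    """Select k statement columns of X by summed absolute loadings on the
    top-m eigenvectors of the covariance matrix (a sketch of Algorithm 1).

    X: (M, N) coverage matrix. Returns (X_pca, stm_pca), where stm_pca holds
    the indices of the k highest-contribution statements.
    """
    cov = np.cov(X, rowvar=False)                  # (N, N) covariance over columns
    eigval, eigvec = np.linalg.eigh(cov)           # eigenvalues in ascending order
    V = eigvec[:, np.argsort(eigval)[::-1][:m]]    # top-m eigenvectors, shape (N, m)
    contrib = np.abs(V).sum(axis=1)                # c_i = sum_p |V_pi|
    stm_pca = np.argsort(contrib)[::-1][:k]        # indices of largest contributions
    return X[:, stm_pca], stm_pca

# Toy 4x4 coverage matrix (rows = tests, columns = statements).
X = np.array([[1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 1],
              [1, 0, 0, 1]], dtype=float)
X_pca, stm_pca = revised_pca_select(X, m=2, k=2)
print(X_pca.shape)   # → (4, 2)
```

Note that a constant column (e.g. a statement executed by every test) has zero variance and thus contributes nothing to the leading eigenvectors, so it tends to be filtered out, which matches the intent of keeping statistically informative statements.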
# Algorithm 1: dimensionality reduction using revised PCA
# Input: coverage matrix of size $M \times N$: $X$; number of largest eigenvalues: $m$; number of principal components: $K ^ { \prime \prime }$
# Output: statistical context matrix of size $M \times K ^ { \prime \prime }$: $X _ { P C A }$; statistical context index of size $1 \times K ^ { \prime \prime }$: $StmPCA$
$covX$ = covariance matrix of the original samples;
$eigenVec$ = eigenvectors of $covX$;
$eigenVal$ = eigenvalues of $covX$;
$V$ = the eigenvectors in $eigenVec$ corresponding to the first $m$ largest eigenvalues;
for $i = 1$ to $N$ do: calculate the contribution value $\begin{array} { r } { c _ { i } = \sum _ { p = 1 } ^ { m } | V _ { p i } | } \end{array}$;
$iContriMax$ = indices of $c$ sorted in descending order;
initialize $X _ { P C A }$ and $StmPCA$ as empty;
for $i = 1$ to $K ^ { \prime \prime }$ do: add the $iContriMax[i]$-th column of $X$ to $X _ { P C A }$; add $iContriMax[i]$ to $StmPCA$;
return $X _ { P C A }$, $StmPCA$;
# 3.3 Context-aware Diffusion Model
Algorithm 2 describes the fusion process, which integrates the semantic and statistical contexts to refine the coverage matrix. Its inputs are the coverage matrix $X$, the statement index $StmSC$ from program slicing, the statistical context index $StmPCA$, and a fusion ratio $\alpha$.
# Algorithm 2: Context Fusion Using Fault Semantic and Statistical Contexts
# Input: coverage matrix of size $M \times N$: $X$; statement index selected by program slicing, of size $1 \times K ^ { \prime }$: $StmSC$; statistical context index of size $1 \times K ^ { \prime \prime }$: $StmPCA$; fusion ratio: $\alpha$
# Output: reduced coverage matrix of size $M \times K$: $X _ { f u s i o n }$
set the fusion size $K ^ { f } = \alpha \times K ^ { \prime }$;
set the fusion context statement index $StmFusion = StmSC \cap StmPCA [ : K ^ { f } ]$;
for $i = 1$ to $K ^ { \prime \prime }$ do: if $StmFusion$ matches the dimensional requirements of PCD-DAug or DLFL then break; if $StmPCA[i] \notin StmFusion$ and $StmPCA[i] \in StmSC$ then add $StmPCA[i]$ to $StmFusion$;
initialize the reduced coverage matrix $X _ { f u s i o n }$ as empty;
for $i = 1$ to $len(StmFusion)$ do: add the $StmFusion[i]$-th column of $X$ to $X _ { f u s i o n }$;
return $X _ { f u s i o n }$;
The algorithm first computes the fusion size $K ^ { f }$ as $\alpha \times K ^ { \prime }$ and initializes $StmFusion$ as the intersection of $StmSC$ and the top $K ^ { f }$ elements of $StmPCA$ (Steps 1-2). Next, it iterates through $StmPCA$ to expand $StmFusion$ as needed: if an element of $StmPCA$ is also in $StmSC$ but not yet in $StmFusion$, it is added to $StmFusion$ (Steps 3-7). This ensures that essential statements from both contexts are included. Finally, $X _ { f u s i o n }$ is constructed by selecting the columns of $X$ indexed by $StmFusion$, and $X _ { f u s i o n }$ is returned (Steps 8-12). In fault localization, high-quality data augmentation must reflect the specific program contexts most relevant to causing failures. Randomly generated failing data can introduce noise, which may hinder the model’s ability to effectively localize faults.
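As one concrete reading of Algorithm 2, a minimal Python sketch (the names are ours, and a fixed target width `k_target` stands in for the model's dimensional requirement):

```python
import numpy as np

def fuse_contexts(X, stm_sc, stm_pca, alpha, k_target):
    """Sketch of the context fusion in Algorithm 2.

    stm_sc: statement indices from dynamic slicing (set-like).
    stm_pca: statement indices ranked by PCA contribution (ordered).
    alpha: fusion ratio; k_target: required context width (our stand-in
    for the dimensional requirement of the diffusion model).
    """
    k_f = int(alpha * len(stm_pca))
    sc = set(stm_sc)
    fusion = [s for s in stm_pca[:k_f] if s in sc]   # intersection, PCA order
    for s in stm_pca:                                # expand as needed
        if len(fusion) >= k_target:
            break
        if s not in fusion and s in sc:
            fusion.append(s)
    return X[:, fusion], fusion

# Toy 4x6 coverage matrix over statements 0..5.
X = np.arange(24).reshape(4, 6)
Xf, fusion = fuse_contexts(X, stm_sc=[0, 2, 4, 5], stm_pca=[5, 1, 0, 3, 2, 4],
                           alpha=0.5, k_target=3)
print(fusion)   # → [5, 0, 2]
```

The fused index list keeps the PCA ranking order while admitting only statements that the slice also considers failure-relevant, which is the intersection-then-expand behavior described above.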
To ensure that the generated data remains aligned with the original failure-inducing conditions, we explore two possible guidance strategies: classifier-based guidance and classifier-free guidance.
# 3.3.1 Classifier-Based Strategy
To guide sample generation in the reverse diffusion process, we leverage gradients of the target data distribution. As shown in [38], adding a classifier gradient to the noise term can direct sample generation toward specific target classes. This approach modifies the noise prediction equation as follows: $$ \hat { \epsilon } = \epsilon _ { \theta } ( \mathbf { x } _ { t } ) - \gamma \cdot \sqrt { 1 - \bar { \alpha } _ { t } } \, \nabla _ { \mathbf { x } _ { t } } \log p _ { \phi } ( y | \mathbf { x } _ { t } ) $$ where $\gamma$ controls the level of guidance, and $p _ { \phi } ( \cdot )$ is the classifier’s probability function.
# 3.3.2 Classifier-Free Strategy
In fault localization, relying on a pre-trained classifier introduces extra computational cost, particularly for complex and high-dimensional program data. To mitigate this, we adopt a classifier-free guidance approach [31], which retains effective sample generation without depending on an external classifier. This alternative strategy redefines the noise prediction in the reverse process as: $$ \hat { \epsilon } = ( 1 + \gamma ) \cdot \epsilon _ { \theta } ( \mathbf { x } _ { t } , t , \mathbf { c } ) - \gamma \cdot \epsilon _ { \theta } ( \mathbf { x } _ { t } , t ) $$ where $\gamma$ again serves as the guidance scale and $\mathbf { c }$ is the conditioning information, offering control over the generation direction without requiring a pre-trained classifier.
# 3.3.3 Sampling with DPM-Solver
To further improve the efficiency of the diffusion process in our fault localization model, we employ the DPM-Solver sampler [39]. DPM-Solver is a fast, high-order solver specifically designed for diffusion ODEs, with guaranteed convergence order.
It is suitable for both discrete-time and continuous-time diffusion models without requiring any further training. Experimental results show that DPM-Solver can generate high-quality samples with only 10 to 20 function evaluations across various computer vision datasets. In our model, we set the number of sampling steps to 25. DPM-Solver significantly reduces sampling time while maintaining high sample quality. It solves the probability flow ordinary differential equation (ODE) that governs the diffusion process, approximating the reverse process as: $$ d \mathbf { x } ( t ) = \epsilon _ { \theta } ^ { ( t ) } \! \left( \frac { \mathbf { x } ( t ) } { \sqrt { \sigma _ { t } ^ { 2 } + 1 } } \right) d \sigma ( t ) $$ where $\sigma _ { t } = \sqrt { 1 - \bar { \alpha } _ { t } } / \sqrt { \bar { \alpha } _ { t } }$ is the noise schedule and $\mathbf { x } ( t )$ represents the latent state at time $t$. This method allows our fault localization model to efficiently generate failure-related samples with minimal computational overhead, making it scalable to large program datasets with high-dimensional inputs, especially when using classifier-free guidance.
# 3.4 Model Training
After merging the fault semantic context with the statistical context, we obtain an $M \times K$ context matrix derived from the $M \times N$ raw data. This context matrix captures the information related to the statements that lead to program failures, combining insights from both the program’s structure and statistical analysis. PCD-DAug uses this context matrix as input to the diffusion model, which generates synthesized failing test cases. The diffusion model learns the characteristics of both failing and passing test cases through its forward and reverse processes, ensuring that the newly synthesized samples reflect the key patterns present in the raw data.
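Combining the closed-form forward process of Section 2.1 with the classifier-free guidance of Section 3.3.2, the simplified training objective and the guided noise prediction can be sketched as follows; the zero-noise "model" below is purely a stand-in for illustration, and the schedule is an assumption of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100
betas = np.linspace(1e-4, 0.02, T)      # illustrative variance schedule
alpha_bar = np.cumprod(1.0 - betas)     # cumulative product of alpha_t

def l_simple(x0, eps_model):
    """One Monte Carlo estimate of L_simple = E ||eps - eps_theta(x_t, t)||^2."""
    t = rng.integers(T)
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return np.mean((eps - eps_model(x_t, t)) ** 2)

def guided_eps(eps_model, x_t, t, c, gamma):
    """Classifier-free guidance: (1 + gamma) * eps(x, t, c) - gamma * eps(x, t)."""
    return (1.0 + gamma) * eps_model(x_t, t, c) - gamma * eps_model(x_t, t, None)

# Stand-in predictor that always outputs zero noise (illustration only).
zero_model = lambda x, t, c=None: np.zeros_like(x)

x0 = rng.standard_normal(16)
losses = [l_simple(x0, zero_model) for _ in range(200)]
print(float(np.mean(losses)))   # close to 1, since E[eps^2] = 1 per dimension
```

For the zero predictor the expected loss equals the variance of the injected noise, which is why the average sits near 1; a trained predictor drives this value toward 0.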
In the forward process, PCD-DAug gradually adds Gaussian noise to the original context matrix over multiple time steps. In the reverse process, PCD-DAug predicts the noise at each step using the trained model under the classifier-free guidance strategy and gradually denoises the noisy data to recover the synthesized samples. The reverse process is trained using a mean squared error (MSE) loss to minimize the difference between the added noise and the predicted noise. As shown in Figure 2, the trained diffusion model generates new failing test cases by sampling from noise using DPM-Solver, while incorporating label information to ensure that the synthesized samples reflect the failure characteristics of the original test cases. The newly generated failing samples are then combined with the passing data to form a balanced dataset, which is used to enhance the performance of fault localization methods such as SFL and DLFL.
# 3.5 An Illustrative Example
To illustrate the workflow of PCD-DAug, we provide an example in Figure 3. Here, program $P$ contains 16 statements with a fault at line 3, where the value 0 is mistakenly used instead of 6. The SFL method GP02 [40] is applied to locate the faulty statement. Each cell below a statement indicates whether it is executed by a test case (0 if not executed, 1 if executed). The ’result’ column in Figure 3 shows test outcomes (1 for failure, 0 for success). The original test suite is imbalanced, with four passing test cases $( t _ { 1 } , t _ { 2 } , t _ { 4 } , t _ { 5 } )$ and two failing ones $( t _ { 3 } , t _ { 6 } )$. To balance this dataset, PCD-DAug generates two additional failing test cases. Program slicing is applied to extract the fault semantic context from $t _ { 3 }$ and $t _ { 6 }$. Using Eq. (8), we define $( S _ { 1 4 } , d 1 , t _ { 3 } )$ and $( S _ { 1 5 } , d 2 , t _ { 6 } )$ as slicing criteria, given the incorrect output of variable $d1$ at $S _ { 1 4 }$ during $t _ { 3 }$.
As shown in Figure 3, the $StmSC$ set for $t _ { 3 }$ includes $\{ S _ { 1 } , S _ { 3 } , S _ { 7 } , S _ { 1 4 } \}$ and that for $t _ { 6 }$ includes $\{ S _ { 1 } , S _ { 3 } , S _ { 8 } , S _ { 1 5 } \}$, so the fault semantic context contains $\{ S _ { 1 } , S _ { 3 } , S _ { 7 } , S _ { 8 } , S _ { 1 4 } , S _ { 1 5 } \}$. The revised PCA yields the $StmPCA$ ranking $\{ S _ { 1 4 } , S _ { 1 5 } , S _ { 1 0 } , S _ { 1 1 } , S _ { 6 } , S _ { 1 } , S _ { 2 } , S _ { 3 } , S _ { 1 3 } , S _ { 4 } , S _ { 5 } , S _ { 1 6 } , S _ { 8 } , S _ { 7 } , S _ { 9 } , S _ { 1 2 } \}$. With the fusion ratio $\alpha$ set to 1, the fusion size $K ^ { f }$ equals the size of $StmSC$. Fusing the fault semantic context and the statistical context forms the context $\{ S _ { 1 } , S _ { 3 } , S _ { 1 4 } , S _ { 1 5 } \}$. Using this enriched context, PCD-DAug employs the diffusion model to generate two synthetic failing test cases ($t _ { 7 }$ and $t _ { 8 }$), highlighted in yellow in Figure 3. These new cases expand the context matrix, allowing GP02 to re-evaluate statement suspiciousness with the updated data.
Figure 3: An illustrative example of PCD-DAug on program $P$, showing the program (faulty statement $S _ { 3 }$: if(b<0) should be if(b<6)), the coverage matrix and results of tests $t _ { 1 }$–$t _ { 8 }$, and the GP02 suspiciousness scores and ranks with and without PCD-DAug.
The final rows of Figure 3 compare the FL results of GP02 with and without PCD-DAug. Without PCD-DAug, GP02 ranks the statements (highlighted in green) as $\{ S _ { 1 5 } , S _ { 1 6 } , S _ { 1 } , S _ { 2 } , S _ { 3 } , S _ { 1 3 } , S _ { 4 } , S _ { 5 } , S _ { 6 } , S _ { 1 1 } , S _ { 7 } , S _ { 8 } , S _ { 9 } , S _ { 1 2 } , S _ { 1 0 } , S _ { 1 4 } \}$. After applying PCD-DAug, the ranking becomes $\{ S _ { 1 4 } , S _ { 3 } , S _ { 1 } , S _ { 1 5 } \}$. Notably, the faulty statement $S _ { 3 }$ moves from 5th to 2nd, demonstrating PCD-DAug’s effectiveness in mitigating class imbalance and enhancing fault localization accuracy.
# 4 Experiments
To assess the effectiveness of our proposed approach, we carried out experiments on 262 versions of five representative benchmark programs, all of which contain real-world faults. The selected programs—Chart, Math, Lang, Time, and Mockito—were drawn from the Defects4J dataset [41]. Due to the substantial size of these programs, manually collecting input data would be highly time-consuming. Consequently, we leveraged the coverage matrix provided by Pearson et al. [42] to optimize and expedite the experimental process. Table 1 provides an overview of the five subject programs.
For each program, it includes a brief functional description (the ’Description’ column), the number of faulty versions available (the ’Versions’ column), the program size measured in thousands of lines of code (the ’LoC(K)’ column), and the number of test cases (the ’Test’ column). Principal Context-aware Diffusion Guided Data Augmentation for Fault Localization
Table 1: Subject programs
# 4.1 Experiment Settings
The experiments were conducted on a Linux server equipped with 40 cores of a 2.4GHz CPU and 252GB of RAM. The operating system was Ubuntu 20.04. Table 2 lists the main parameters used in our experiments. Notably, we applied this same set of hyper-parameters across all 262 faulty versions of the programs, effectively treating each version as a unique dataset. This setting demonstrates the robustness and adaptability of our approach and parameter configuration, as it performs consistently well across diverse fault scenarios.
Table 2: Main Parameters of PCD-DAug
# 4.2 Evaluation Metrics
We employ four widely recognized metrics in FL to evaluate the performance of our approach: • Number of Top-K: It quantifies the number of faulty versions where at least one fault is ranked within the top $K$ positions by a fault localization (FL) method. Following prior work [17, 4], we set $K$ to 1, 3, and 5 for our evaluation. • Mean Average Rank (MAR) [4]: For each faulty version, we calculate the average rank of all faulty statements in the ranking list. A lower MAR indicates better FL effectiveness. • Mean First Rank (MFR) [4]: MFR determines the rank of the first located faulty statement for each version and computes the mean rank across all versions. • Relative Improvement (RImp) [9]: This metric evaluates the efficiency of fault localization methods by comparing the number of statements that must be examined.
RImp specifically reflects the proportion of statements examined when using our method in comparison to others, where a lower RImp value indicates superior performance. These metrics offer a thorough evaluation of our approach’s fault localization accuracy, allowing for performance comparisons with other fault localization methods.
# 4.3 Research Questions and Results
We evaluate the effectiveness of our approach through the following four research questions.
# RQ1. How effective is PCD-DAug in localizing real faults compared with original state-of-the-art SFL methods?
We assessed the performance of three statement-level SFL methods (Dstar [8], Ochiai [43], and Barinel [44]) under two conditions: the original SFL method and its PCD-DAug version. The original methods typically process the raw data without any additional context. The results for the Top-1, Top-3, Top-5, MFR, and MAR metrics are summarized in Table 3, offering a comparison between the original methods and PCD-DAug. Top-K. As shown in Table 3, PCD-DAug demonstrates a clear advantage in the Top-K metrics across different program datasets, with particularly strong performance on the Lang dataset. In the Top-1, Top-3, and Top-5 rankings, PCD-DAug identified 11, 19, and 27 faults on average using the three SFL methods (Dstar, Ochiai, and Barinel), compared to 5, 17, and 24 faults identified by the original methods. This represents performance improvements of $120 \%$, $1 1 . 7 6 \%$, and $1 2 . 5 \%$, respectively, indicating that the hyper-parameters used in PCD-DAug were especially effective for the Lang dataset. On the other datasets, PCD-DAug generally outperforms the original methods as well. For instance, on the Math dataset, PCD-DAug shows a slight average improvement across all Top-K metrics. However, on the Mockito dataset, PCD-DAug’s performance in the Top-3 metric slightly lags behind the original methods.
Specifically, the average number of faults identified in the Top-3 rankings decreased by $1 6 . 6 7 \%$. This may be because the hyper-parameters were optimized for the Lang dataset without further adjustment for the specific characteristics of other datasets, which could have led to a slight performance trade-off on certain datasets.
Table 3: The results of Top-1, Top-3, Top-5, MFR and MAR comparing the original SFL methods and PCD-DAug
Overall, PCD-DAug exhibits consistent performance gains across all SFL methods in the Top-1 and Top-5 metrics. For example, using the Barinel method, PCD-DAug correctly identified 36, 74, and 98 faults in the Top-1, Top-3, and Top-5 rankings, compared to 30, 71, and 93 faults identified by the original Barinel method. This corresponds to improvements of $2 0 . 0 0 \%$, $4 . 2 3 \%$, and $5 . 3 8 \%$, respectively. These results demonstrate that PCD-DAug not only performs well on individual datasets but also shows superior average performance across all datasets, validating its effectiveness in improving fault localization accuracy. Furthermore, the use of unified hyper-parameters brings important advantages to the application of PCD-DAug. First, it simplifies model deployment by eliminating the need to tune hyper-parameters for each individual dataset, thereby enhancing the model’s generality. Second, a unified hyper-parameter setting helps reduce the risk of overfitting, contributing to greater stability and consistency in model performance. Finally, this unified configuration enhances the reproducibility of experimental results and allows for better performance comparison across datasets.
Figure 4: The RImp of MFR and MAR for PCD-DAug over six original FL methods.
RImp.
In Figure 4, the RImp values across the evaluated SFL methods (Dstar, Ochiai, and Barinel) consistently remain below $100\%$, demonstrating that PCD-DAug outperforms these traditional techniques in fault localization efficiency. For example, as depicted in Figure 4, PCD-DAug significantly reduces the MFR metric across the SFL methods. When using PCD-DAug, the percentage of statements that need to be inspected to find the first faulty statement ranges from $45.28\%$ with Barinel to $49.29\%$ with Dstar. This implies that PCD-DAug can reduce the number of statements requiring examination by $50.71\%$ $(100\% - 49.29\%)$ for Dstar and $54.72\%$ $(100\% - 45.28\%)$ for Barinel. Thus, our approach can lead to substantial reductions in the effort required for fault localization. Summary for RQ1: We evaluated the performance of PCD-DAug against three traditional SFL methods. The findings show that PCD-DAug achieves better results than the original methods, indicating that it provides a more effective approach for fault localization. # RQ2. How effective is PCD-DAug in localizing real faults compared with the state-of-the-art DLFL methods? Table 4: The results of Top-1, Top-3, Top-5, MFR and MAR comparing the original DLFL methods and PCD-DAug. In addition to comparing PCD-DAug with the original SFL methods, we also evaluated its performance against three representative DLFL approaches: MLP-FL[16], CNN-FL[9], and RNN-FL[15]. As shown in Table 4, PCD-DAug consistently outperforms these methods across all Top-K metrics. Top-K. For example, in comparison with MLP-FL, PCD-DAug located 42, 67, and 87 faults in the Top-1, Top-3, and Top-5 metrics, respectively, while MLP-FL identified only 5, 21, and 29 faults in these categories. This represents substantial improvements of $740.00\%$, $219.05\%$, and $200.00\%$ in the Top-1, Top-3, and Top-5 metrics, respectively, for PCD-DAug over the original MLP-FL method. RImp. Furthermore, PCD-DAug achieves lower mean first rank (MFR) and mean average rank (MAR) values than all baseline DLFL methods, indicating a more efficient fault localization process. As illustrated in Figure 4, the RImp values for MFR reveal that PCD-DAug significantly reduces the number of statements requiring inspection. With PCD-DAug, the statements needing examination range from $9.43\%$ (for RNN-FL) to $13.76\%$ (for CNN-FL), corresponding to reductions of $86.24\%$ $(100\% - 13.76\%)$ to $90.57\%$ $(100\% - 9.43\%)$ in comparison to the original DLFL approaches. Similarly, for the MAR metric, PCD-DAug reduces the number of statements to be examined to between $9.20\%$ (for RNN-FL) and $15.12\%$ (for CNN-FL), translating to reductions of $84.88\%$ $(100\% - 15.12\%)$ to $90.80\%$ $(100\% - 9.20\%)$. These results demonstrate PCD-DAug’s substantial efficiency gains, significantly minimizing the fault localization effort compared to the original DLFL methods. Summary for RQ2: We analyzed the performance of PCD-DAug against three DLFL methods. The results reveal that PCD-DAug consistently outperforms these methods across most metrics, demonstrating that it is more effective at fault localization than the state-of-the-art DLFL methods. # RQ3. How effective is PCD-DAug in localizing real faults compared with the data optimization FL methods? In addition to comparing PCD-DAug with the original SFL and DLFL methods, we evaluated its performance against two widely-used data optimization techniques: undersampling[45] and resampling[46, 47, 48]. As shown in Table 5, PCD-DAug consistently outperforms both optimization methods across all Top-K metrics.
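The improvement percentages quoted throughout this section are simple relative changes over the baseline fault counts. A minimal sketch (the function name is ours, for illustration):

```python
def relative_improvement(ours: int, baseline: int) -> float:
    """Percentage improvement of `ours` over `baseline` (e.g. Top-K fault counts)."""
    return (ours - baseline) / baseline * 100.0

# Barinel Top-3 counts reported for RQ1: 74 (PCD-DAug) vs. 71 (original)
print(round(relative_improvement(74, 71), 2))  # → 4.23
```

The same formula reproduces the other figures in the text, e.g. 36 vs. 30 faults at Top-1 gives $20.00\%$.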
For example, using the Barinel fault localization method, PCD-DAug located 36, 74, and 98 faults in the Top-1, Top-3, and Top-5 metrics, respectively. In comparison, the undersampling method identified only 15, 40, and 62 faults, while the resampling method located 29, 68, and 89 faults. This translates to PCD-DAug achieving improvements of $140.00\%$, $85.00\%$, and $58.06\%$ over undersampling for the Top-1, Top-3, and Top-5 metrics, respectively, and surpassing resampling by $24.14\%$, $8.82\%$, and $10.11\%$ in these same metrics. These results highlight PCD-DAug’s enhanced fault localization effectiveness over both data optimization techniques. Moreover, in terms of the MFR and MAR metrics, PCD-DAug achieves lower values than both the undersampling and resampling methods, indicating that it ranks faulty statements higher on average and requires fewer statements to be inspected to locate faults. This efficiency is further supported by the RImp values shown in Figure 5 and Figure 6. All RImp values are below $100\%$, indicating that PCD-DAug requires fewer statements to be examined than either undersampling or resampling. Table 5: Comparisons between PCD-DAug and two data optimization methods for Top-1, Top-3, Top-5, MAR, and MFR. Figure 5: The RImp of MFR by PCD-DAug over two data optimization methods. Figure 6: The RImp of MAR by PCD-DAug over two data optimization methods. Specifically, for the MFR metric, PCD-DAug reduces the number of statements that need to be examined to between $13.69\%$ (for CNN-FL) and $37.31\%$ (for Dstar) compared to undersampling, equating to reductions of $62.69\%$ $(100\% - 37.31\%)$ to $86.31\%$ $(100\% - 13.69\%)$. When compared to resampling, PCD-DAug reduces the statements to be examined to between $16.14\%$ (for CNN-FL) and $46.40\%$ (for Barinel), corresponding to reductions of $53.60\%$ $(100\% - 46.40\%)$ to $83.86\%$ $(100\% - 16.14\%)$. Similarly, for the MAR metric, PCD-DAug consistently requires fewer statements to be inspected across all six fault localization methods compared to both undersampling and resampling. It reduces the average number of statements to be examined to between $14.76\%$ (for CNN-FL) and $28.21\%$ (for Ochiai) compared to undersampling, equating to reductions of $71.79\%$ $(100\% - 28.21\%)$ to $85.24\%$ $(100\% - 14.76\%)$. When compared to resampling, PCD-DAug reduces the statements to be examined to between $16.55\%$ (for CNN-FL) and $33.11\%$ (for Ochiai), corresponding to reductions of $66.89\%$ $(100\% - 33.11\%)$ to $83.45\%$ $(100\% - 16.55\%)$. This significant reduction in the number of statements inspected demonstrates PCD-DAug’s consistent advantage in minimizing the fault localization effort. These findings collectively validate the superior effectiveness and efficiency of PCD-DAug in fault localization tasks. Summary for RQ3: We evaluated the performance of PCD-DAug against two data optimization techniques. The analysis shows that PCD-DAug surpasses both undersampling and resampling, with improvements observed across most metrics, demonstrating that it is a more effective approach than these optimization methods. # RQ4. How effective is PCD-DAug in localizing real faults compared with four state-of-the-art data augmentation methods? In addition to comparing PCD-DAug with data optimization methods, we also evaluated its performance against four data augmentation approaches: Aeneas [13], Lamont [12], CGAN4FL [10], and PRAM [11].
As shown in Table 6, PCD-DAug consistently outperforms the other data augmentation methods across all cases on the Top-K metrics and the MFR and MAR rankings, except for CGAN4FL with CNN-FL on the Top-3 metric and PRAM with RNN-FL. Specifically, PCD-DAug achieves higher fault identification rates at the Top-1, Top-3, and Top-5 levels across all fault localization (FL) methods. For instance, with the Barinel method, PCD-DAug identified 36, 74, and 98 faults at the Top-1, Top-3, and Top-5 levels, respectively, outperforming CGAN4FL (28, 55, 66) and PRAM (30, 74, 97). This translates to improvements of $28.57\%$, $34.55\%$, and $48.48\%$ over CGAN4FL, and $20.00\%$, $0.00\%$, and $1.03\%$ over PRAM, demonstrating a significant increase in fault localization accuracy. PCD-DAug’s advantage also extends to efficiency metrics. In terms of mean first rank (MFR) and mean average rank (MAR), PCD-DAug shows consistently lower scores across all FL methods, indicating that it requires fewer statements to be examined. For example, with the Barinel method, PCD-DAug’s MFR is 65.31, significantly lower than CGAN4FL’s 89.38 and PRAM’s 73.41, which indicates faster fault detection. On the MAR metric, PCD-DAug also performs well; for instance, with the CNN-FL method, PCD-DAug achieves a MAR of 165.85, much lower than PRAM’s 221.50. Table 6: Comparisons between PCD-DAug and four data augmentation methods for Top-1, Top-3, Top-5, MAR, and MFR. This demonstrates that PCD-DAug not only improves fault localization accuracy but also reduces the effort needed to locate faults. Figure 7 and Figure 8 further illustrate PCD-DAug’s relative improvement (RImp) over Aeneas, Lamont, CGAN4FL, and PRAM in both the MFR and MAR metrics across six different FL methods. In all cases, the RImp values are below $100\%$, indicating that PCD-DAug requires fewer statements to be examined than the other data augmentation methods.
Specifically, for the MFR metric, PCD-DAug reduces the number of statements to be examined to between $78.50\%$ (for CNN-FL) and $95.11\%$ (for Ochiai) compared to PRAM, representing reductions of $4.89\%$ $(100\% - 95.11\%)$ to $21.50\%$ $(100\% - 78.50\%)$. Figure 7: The RImp of MFR by PCD-DAug over four data augmentation methods. Figure 8: The RImp of MAR by PCD-DAug over four data augmentation methods. Similarly, for the MAR metric, PCD-DAug consistently requires fewer statements to be inspected across all six fault localization methods compared to the other four data augmentation methods. PCD-DAug reduces the average number of statements to be examined to between $74.88\%$ (for CNN-FL) and $99.23\%$ (for Dstar) compared to PRAM, equating to reductions of $0.77\%$ $(100\% - 99.23\%)$ to $25.12\%$ $(100\% - 74.88\%)$. PCD-DAug thus provides significant advantages in fault localization by achieving higher accuracy and reducing inspection effort compared to state-of-the-art data augmentation methods, demonstrating its potential to streamline the fault localization process. Summary for RQ4: We analyzed the performance of PCD-DAug against four data augmentation methods. The results indicate that PCD-DAug performs better than Aeneas, Lamont, and CGAN4FL, and slightly better than PRAM. # 5 Discussion # 5.1 Threats to Validity The implementation of baselines and our approach. Our implementation of the baselines and PCD-DAug may contain bugs. As shown in Table 7, PCD-DAug incorporates 3 residual blocks and 3 attention blocks. While this simplified design choice aims to balance complexity and efficiency, it may also reduce the model’s capacity to capture more intricate patterns within the data.
This could potentially result in underfitting, where the model fails to learn all relevant patterns, especially when the data or the fault localization task requires a deeper model. Table 7: Main Architecture and Parameters of PCD-DAug. Dataset-Specific Parameter Choices. The two data augmentation methods Aeneas and Lamont both require a strongly dataset-dependent parameter for dimensionality reduction, namely the number of principal components ($K$). This parameter is inversely proportional to the number of statements in the dataset. In our experiments, $K$ is automatically determined by comparing the number of executed statements across all faulty historical versions in the selected datasets. Therefore, differences between datasets may lead to variations in this parameter, potentially impacting experimental outcomes. The generalizability. Our approach was tested on five representative programs, but its effectiveness on other programs may vary, as no dataset can encompass all fault scenarios. Further experiments on larger programs would help confirm the approach’s generalizability. # 5.2 Reasons Why PCD-DAug Is Effective The reasons why PCD-DAug is more effective than the compared baselines are as follows: (1) PCD-DAug constructs a comprehensive fault semantic context and statistical context from program-structure and statistical-analysis perspectives. (2) The diffusion model, being a powerful generative approach, ensures effective sample generation without the risk of imbalance between generator and discriminator components that can arise in GAN-based models. (3) PCD-DAug generates failing test cases to balance the dataset, addressing the class imbalance issue.
Test cases are indispensable for effective fault localization (FL). However, test cases in practice are severely class imbalanced: the number of failing test cases (the minority class) is much smaller than that of passing ones (the majority class). This severe class imbalance between failing and passing test cases has hindered FL effectiveness. To address this issue, we propose PCD-DAug: a Principal Context-aware Diffusion guided Data Augmentation approach that generates synthesized failing test cases to improve FL. PCD-DAug first combines program slicing with principal component analysis to construct a principal context that shows how a set of statements influences the faulty output via statistical program dependencies. Then, PCD-DAug devises a conditional diffusion model that learns from principal contexts to generate synthesized failing test cases, yielding a class-balanced dataset for FL. We conducted large-scale experiments on six state-of-the-art FL approaches and compared PCD-DAug with six data augmentation baselines. The results show that PCD-DAug significantly improves FL effectiveness, achieving average improvements of 383.83%, 227.08%, and 224.19% across the six FL approaches under the Top-1, Top-3, and Top-5 metrics, respectively.
# 1 Introduction The European Union Deforestation Regulation (EUDR), effective December 30, 2025, mandates companies to verify that their products do not originate from recently deforested land (European Commission, 2023). With deforestation contributing $15\%$ of global $\mathrm{CO_2}$ emissions (ETC, 2024), industries with high environmental risks require precise asset-level tracking. However, significant data gaps persist: $30\%$ of Forest 500 companies lack public deforestation commitments, and $85\%$ of financial institutions lack comprehensive deforestation policies (Forest 500, 2024). Creating a physical asset database is labour-intensive (CGFI, 2024), costly, and inefficient, making regulatory compliance difficult and limiting researchers’ ability to develop accurate environmental impact models. Due to their substantial contributions to environmental degradation, we focus on three high-risk sectors: Mining, Oil & Gas, and Utilities. Mining drives deforestation through surface extraction and infrastructure expansion, often leading to forest loss within a $50\,\mathrm{km}$ radius (Bradley, 2020). Oil & Gas exploration accelerates deforestation, particularly in biodiversity hotspots like the Amazon, where oil extraction disrupts ecosystems (Finer et al., 2008; Watch, 2016). Utilities, especially hydroelectric projects, contribute to deforestation (IntegrityNext, 2024) through extensive land clearing for dams and power infrastructure, with continued expansion affecting forested areas despite the shift to renewable energy (Rosenberg et al., 2000; Imperiale et al., 2023). Our research makes several key contributions: (1) We develop a novel LLM-based pipeline (Figure 1) that transforms unstructured SEC EDGAR filings into structured datasets, improving transparency in environmental monitoring.
(2) We introduce Instructional, Role-Based, Zero-Shot Chain-of-Thought (IRZ-CoT) prompting, a technique that enhances the accuracy of entity extraction, particularly for complex asset-related information. (3) We conduct a comparative analysis of LLMs and a traditional Named Entity Recognition (NER) model, evaluating their effectiveness in domain-specific data extraction. (4) To ensure data integrity, we implement a three-step database cleaning process, which includes foundational standardisation, asset similarity consolidation using statistical methods, and LLM-assisted refinement. (5) We propose Retrieval-Augmented Validation (RAV), which integrates real-time web data to enhance dataset reliability and address gaps in existing databases. (6) Finally, the resulting datasets are visualised through company-specific dashboards, providing detailed insights into each company’s database. This work advances NLP-driven environmental data automation, providing a scalable framework for regulatory compliance, sustainability analysis, and asset-based deforestation tracking. Figure 1: System design of the end-to-end LLM-based pipeline, handling systematic data extraction, structured database creation, cleaning and validation, and the improvement module to increase validation coverage. # 2 Background and Related Work LLMs have revolutionised entity and relation extraction, enabling zero-shot and few-shot learning. Structured prompting techniques, such as Pipeline Chain-of-Thought (Pipeline-COT), enhance accuracy by breaking tasks into reasoning steps (Zhao et al., 2023). ML and NLP techniques have been widely applied in healthcare, finance, and legal domains. Transformer-based models like LegalBERT (Chalkidis et al., 2020), BioBERT (Lee et al., 2019), and SciBERT (Beltagy et al., 2019) improve clinical text analysis and regulatory compliance. However, fine-tuning remains computationally expensive, making zero-shot LLM approaches more practical.
GPT-based models like GPT-NER incorporate self-verification to reduce hallucinations (Wang et al., 2023), while ChatGPT and REBEL enable structured knowledge extraction (Trajanoska et al., 2023). This study builds on these advancements, introducing Instructional, Role-Based, Zero-Shot Chain-of-Thought (IRZ-CoT) prompting to enhance structured data extraction from SEC EDGAR filings. Traditional SEC EDGAR processing relies on RegEx-based tools like LexNLP, which efficiently parse filings (Bommarito et al., 2018). Prior keyword extraction and manual annotation work, such as the KPI-EDGAR dataset, remains labour-intensive and challenging to scale (Deußer et al., 2022). Despite NLP advancements, limited work has been done on developing a fully automated pipeline that integrates data extraction, database creation, cleaning, and validation. This research bridges that gap by implementing an LLM-driven end-to-end pipeline, introducing Retrieval-Augmented Validation (RAV) to improve accuracy and robustness. By combining LLM-assisted extraction, structured prompts, and multi-step validation, this study delivers a scalable asset-tracking and environmental impact analysis solution, advancing AI-driven automation for regulatory data processing. # 3 Data Acquisition and Processing # 3.1 Data Source This study uses publicly available SEC EDGAR 10-K filings from fifteen Mining, Oil & Gas, and Utilities companies. These legally mandated reports provide standardised, reliable, and accurate data on company operations, finances, and environmental impact. Unlike 10-Q and 8-K reports, which offer limited asset details, 10-K filings comprehensively cover physical assets, expenditures, and disclosures. News and social media data were excluded due to bias, noise, and lack of granularity. SEC filings ensure factual accuracy, regulatory compliance, and ethical data sourcing, minimising legal and privacy concerns.
# 3.2 Data Extraction We collected 10-K filings from 2022 to 2024 using the secEDGAR Python library (Moody et al., 2024), which allows efficient bulk downloads based on company stock tickers and Central Index Keys (CIKs). This method streamlines data acquisition, eliminating the need for custom web scraping scripts while ensuring robust datasets across the selected sectors. The selected companies are listed in Table 3 in Appendix A.1. The pre-processing workflow extracts metadata (company names, filing dates, form types, and content), cleans text using BeautifulSoup to remove HTML tags and irrelevant elements, and structures data into SQLite databases per company. This ensures efficient management, querying, and retention of meaningful content for analysis. # 4 Database Creation # 4.1 Chunk-based Querying Technique We adopt a chunk-based querying technique to manage the extensive length of SEC EDGAR filings. This method involves splitting documents into 1024-token chunks with a 20-token overlap to maintain contextual continuity. Sentence-level splitting ensures semantic coherence, preventing the disruption of key information. Chunking optimises memory usage, enables parallel processing, and enhances entity recognition by allowing LLMs to focus on specific, contextually rich segments. This approach also facilitates error identification and correction, improving the efficiency and scalability of the data processing pipeline. # 4.2 Comparison of LLM and NER Outputs We compare the performance of 4-bit quantised Ollama instruct models, specifically Mistral7B, Llama 3, and Gemma 2, against a traditional Named Entity Recognition (NER) model: dslim/bert-large-NER (Devlin et al., 2018; Tjong Kim Sang and De Meulder, 2003). Instruct models, fine-tuned for instruction-based tasks, demonstrate superior contextual understanding and precise entity extraction (Chung et al., 2022; Hu et al., 2024), making them well suited to structured documents like SEC filings.
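The chunk-based querying scheme described in §4.1 can be sketched as follows. This is an illustrative stand-in that uses a simple whitespace tokenizer rather than the model tokenizer, with the 1024-token window and 20-token overlap from the text:

```python
def chunk_text(text: str, chunk_size: int = 1024, overlap: int = 20) -> list[str]:
    """Split text into overlapping token chunks (whitespace tokens as a stand-in)."""
    tokens = text.split()
    chunks, start = [], 0
    while start < len(tokens):
        end = min(start + chunk_size, len(tokens))
        chunks.append(" ".join(tokens[start:end]))
        if end == len(tokens):
            break
        start = end - overlap  # 20-token overlap preserves context across chunk boundaries
    return chunks
```

Each chunk can then be queried independently, which enables the parallel processing and error isolation described above.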
The use of 4-bit quantisation significantly reduces memory and computational requirements while maintaining performance, enabling efficient large-scale deployment without extensive hardware upgrades (Banner et al., 2019; Dettmers et al., 2023). These models minimise irrelevant responses, ensuring more accurate asset identification. We convert the text data into embeddings using the SentenceTransformer model, specifically the paraphrase-MiniLM-L6-v2 variant (Reimers and Gurevych, 2019). Gemma 2 consistently outperforms the NER model on cosine similarity metrics, achieving higher precision and recall, with the highest cosine similarity for both locations (0.7702) and organisations (0.7461), indicating strong alignment with ground truth data. Error analysis reveals that LLMs are more effective in capturing nuanced entity relationships, while the NER model often fragments entities or misses domain-specific terms. Detailed performance metrics are provided in Table 4 in Appendix A.3, where Gemma 2 outperforms both Mistral-7B and Llama 3. As shown in Table 5 in Appendix A.4, qualitative error analysis highlights common issues such as fragmented entity recognition in the NER model and occasional hallucination in LLM outputs. While Mistral-7B and Llama 3 struggled with consistency, Gemma 2 demonstrated more reliable extraction, particularly in complex texts. # 4.3 Ground Truth Creation We manually curated a ground truth dataset from 30 chunks of Alcoa Corporation’s 2022 filings to evaluate extraction accuracy. This dataset includes detailed annotations of physical assets, their locations, ownership structures, and associated commodities. Manual annotation ensures high accuracy, providing a robust benchmark for model evaluation. While slightly labour-intensive, this process establishes a reliable foundation for assessing model performance. 
In future work, we recommend exploring automated ground truth generation using advanced models like GPT-4, which could enhance scalability and reduce annotation costs. # 4.4 LLM Selection We assessed multiple LLMs using evaluation metrics such as cosine and Jaccard similarity, precision, recall, and F1 score. Gemma 2 emerged as the top performer, excelling in quantitative and qualitative analyses. Its superior performance is attributed to its ability to maintain semantic coherence and accurately extract domain-specific entities. As presented in Table 1, Gemma 2 achieved the highest scores across all evaluation metrics. This performance consistency and efficient resource utilisation led to its selection for further experimentation within the data pipeline. Table 1: Performance comparison of Mistral-7B, Llama 3, and Gemma 2 across five metrics. # 4.5 Prompt Engineering Prompt engineering plays a crucial role in optimising data extraction. We developed the Instructional, Role-Based, Zero-Shot Chain-of-Thought (IRZ-CoT) prompting technique through iterative refinement. This method improves extraction accuracy by providing LLMs with domain-specific instructions, structured reasoning steps, and role-based guidance. IRZ-CoT reduces hallucination, enhances the extraction of complex attributes, and minimises the need for extensive post-processing. Experiments with different prompting techniques revealed key challenges, such as hallucination in one-shot and few-shot methods, incorrect classification in zero-shot, and verbosity in generated knowledge prompting. Specifically, zero-shot prompting often led to the misclassification of financial terms as physical assets, while few-shot techniques introduced hallucinated entities. Role-based and instructional prompting significantly improved specificity and reduced errors, but IRZ-CoT demonstrated the best balance between accuracy and efficiency.
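An IRZ-CoT prompt combines the three elements just described: a role, task instructions, and a zero-shot reasoning cue. The sketch below is illustrative only; the wording is ours, not the authors' actual prompt:

```python
def build_irz_cot_prompt(chunk: str) -> str:
    """Compose an Instructional, Role-Based, Zero-Shot CoT prompt (hypothetical wording)."""
    role = "You are an analyst extracting physical-asset data from SEC 10-K filings."
    instructions = (
        "Extract every physical asset together with its location, ownership, "
        "and commodity. Do not list financial instruments or intangible assets."
    )
    reasoning = "Let's think step by step: identify candidate assets, then verify each attribute."
    return f"{role}\n\n{instructions}\n\n{reasoning}\n\nFiling excerpt:\n{chunk}\n\nExtracted assets:"
```

The zero-shot CoT cue ("think step by step") replaces in-context examples, which is what distinguishes IRZ-CoT from the few-shot variants that introduced hallucinated entities.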
Performance metrics for prompt engineering techniques, illustrated in Figure 2, show that IRZ-CoT achieved the highest scores in precision and recall. Additionally, Figure 6 in Appendix A.6 highlights IRZ-CoT’s computational efficiency, requiring significantly less processing time than methods like generated knowledge prompting. Figure 2: Comparison of different prompt engineering techniques across various evaluation metrics. # 4.6 Experimental Evaluation of LLM Ensemble Methods We evaluated three LLM ensemble methods to enhance robustness: Ensemble Averaging with Majority Voting (EAMV), Weighted Majority Voting Ensemble (WMVE), and Stacking Ensemble with Meta-Learning (SEML). EAMV improves stability by aggregating predictions from multiple LLMs and selecting the most common output, reducing variance. WMVE assigns higher weights to models with superior performance, prioritising predictions from more accurate models, particularly favouring Gemma 2. SEML utilises a meta-learner (logistic regression) to combine outputs from different LLMs, optimising predictive accuracy and achieving the highest F1-score (Figure 7 in Appendix A.7). However, SEML significantly increased processing time, nearly 20-fold compared to single-model approaches (Figure 8 in Appendix A.8). Due to computational constraints, we selected the more efficient IRZ-CoT approach with Gemma 2 as the primary model for the final pipeline. # 5 Database Cleaning # 5.1 Foundational Data Cleaning and Standardization The first phase focuses on refining raw data to establish a solid foundation for further processing. We use regular expression (RegEx) patterns to extract key entity data, including asset types, locations, ownership details, and commodities. This automated approach ensures consistent data extraction from large volumes of text. Post-extraction, we remove extraneous characters, such as surplus quotes and brackets, to prevent data distortion.
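The foundational cleaning and standardisation steps in §5.1 can be sketched with the stdlib `re` module. The exact patterns and the alias table are ours, for illustration; the "U.S.A." → "USA" normalisation follows the examples in the text:

```python
import re

# Illustrative alias table; the paper's full mapping is not published here.
COUNTRY_ALIASES = {"united states of america": "USA", "us": "USA", "u.s.a.": "USA"}

def basic_clean(value: str) -> str:
    """Remove surplus quotes and brackets, then collapse whitespace."""
    value = re.sub(r"[\"'\[\]\(\)]", "", value)
    return re.sub(r"\s+", " ", value).strip()

def standardise_country(name: str) -> str:
    """Map known country aliases to a canonical form, leaving unknown names intact."""
    key = basic_clean(name).lower()
    return COUNTRY_ALIASES.get(key, name.strip())
```

Rules like these handle the mechanical cases; semantic duplicates are left to the TF-IDF consolidation and LLM-assisted passes described next.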
Duplicates are identified and consolidated, with corresponding information merged into single records. For example, multiple entries referring to the same oil well are grouped, reducing redundancy. Ownership data is standardised by normalising company names (e.g., consolidating "NEM," "Newmont," and "Newmont Corporation"). At the same time, geographic terms are unified (e.g., "United States of America," "US," and "U.S.A." standardised to "USA"). We also refine the ‘location’ column, extracting country names into a new ‘Countries’ column to support consistent geographic analysis. Finally, rows with empty ‘physical asset’ entries are removed to maintain database relevance. # 5.2 Asset Similarity Consolidation After initial cleaning, we address semantic similarities among physical asset entries. To consolidate such similarities, we use TF-IDF (Term Frequency-Inverse Document Frequency) vectorisation and cosine similarity. TF-IDF quantifies the relevance of words within documents, and cosine similarity identifies semantic overlap. We set a similarity threshold of 0.5; entries meeting or exceeding this threshold are grouped and merged, preserving unique information while eliminating redundancy. This method is computationally efficient and effective for identifying similar assets, although it has limitations, such as sensitivity to synonyms. Despite these limitations, TF-IDF and cosine similarity offer a pragmatic balance of accuracy and efficiency for large-scale datasets. # 5.3 LLM-Assisted Database Cleaning The final cleaning phase leverages the capabilities of Gemma 2 to address issues beyond the reach of traditional methods, since the earlier steps still leave some entries unnormalised, among other issues. Using a domain-specific prompt, the LLM performs tasks such as converting chemical symbols (e.g., "Au" to "Gold"), standardising text, eliminating redundant punctuation, and verifying locations against Wikipedia.
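The asset-similarity consolidation of §5.2 can be sketched in plain Python. This uses a simplified TF-IDF with smoothed IDF as a stand-in for a library implementation such as scikit-learn's; the 0.5 threshold and greedy grouping follow the text, while the asset names in the usage note are hypothetical:

```python
import math
from collections import Counter

def tfidf_vectors(docs: list[str]) -> list[dict[str, float]]:
    """Simplified TF-IDF with smoothed IDF (not identical to scikit-learn's weighting)."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(docs)
    return [
        {t: tf * (math.log((1 + n) / (1 + df[t])) + 1) for t, tf in Counter(toks).items()}
        for toks in tokenized
    ]

def cosine(u: dict[str, float], v: dict[str, float]) -> float:
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def consolidate(assets: list[str], threshold: float = 0.5) -> list[list[str]]:
    """Greedily group asset names whose cosine similarity meets the threshold."""
    vecs = tfidf_vectors(assets)
    groups: list[list[int]] = []  # first index in each group is its representative
    for i in range(len(assets)):
        for g in groups:
            if cosine(vecs[i], vecs[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return [[assets[i] for i in g] for g in groups]
```

For example, `consolidate(["Carlin Gold Mine", "Carlin Mine", "Permian Oil Basin"])` merges the two Carlin entries into one group while leaving the unrelated entry alone.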
This iterative process involves LLM-driven cleaning followed by human review. Any inconsistencies trigger prompt adjustments, enhancing the LLM’s performance in subsequent iterations. The LLM also identifies countries from location data when not explicitly stated, verified through cross-referencing with Wikipedia to ensure accuracy. By automating complex tasks and reducing manual effort, LLM-assisted cleaning improves data quality, consistency, and scalability, making it an effective strategy for managing large datasets. # 6 Database Validation # 6.1 Validation with LSEG Databases We validated our databases against established LSEG Workspace databases (London Stock Exchange Group (LSEG), 2024), focusing on the ‘Mines’, ‘Oil Refineries’, and ‘Power Generation’ datasets. This validation process involved data preprocessing, where we standardised text to lowercase and filtered irrelevant entries, such as excluding closed or abandoned assets. This step ensured uniformity and minimised discrepancies related to case sensitivity. Subsequently, we used the rapidfuzz library to find similar entries between our database entries and LSEG data. A similarity threshold of 0.6 was applied to identify potential matches, from which the best match was selected. We then used the Hits@5 metric to determine how frequently correct matches appeared within the top five candidates for each attribute (physical asset, ownership, commodity, and country). The Hits@5 score measures the consistency of our matching algorithm by averaging successful matches across all entities, assessing performance beyond the top result. Identified matches are then validated using five more metrics (Partial Match Score (Partial Ratio), Jaccard Similarity, Cosine Similarity, Dice-Sørensen Similarity Coefficient and Normalised Levenshtein Distance), comparing entity similarities with the LSEG database.
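The Hits@5 matching just described can be sketched with the stdlib `difflib` standing in for rapidfuzz (the similarity scores differ from rapidfuzz's partial ratio, but the top-k ranking logic is the same; the function name is ours):

```python
import difflib

def hits_at_k(query: str, candidates: list[str], truth: str, k: int = 5) -> int:
    """Return 1 if the ground-truth entry ranks among the top-k fuzzy matches for `query`."""
    ranked = sorted(
        candidates,
        key=lambda c: difflib.SequenceMatcher(None, query.lower(), c.lower()).ratio(),
        reverse=True,
    )
    return int(truth in ranked[:k])
```

Averaging this indicator over all entities gives the Hits@5 score reported for each attribute.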
Detailed similarity scores across physical asset name, ownership, commodity, and country are averaged into an overall attribute similarity score, quantifying dataset alignment. This validation ensures the reliability of our matching algorithm. # 6.2 Retrieval-Augmented Validation (RAV) LSEG Workspace databases lack comprehensive data for complete validation, necessitating an additional verification layer to ensure completeness and accuracy. To address these gaps, we developed Retrieval-Augmented Validation (RAV). RAV integrates real-time web search capabilities using the Google Custom Search Engine (CSE) API (Google Developers, 2024) to retrieve current information on physical assets. The retrieved snippets are ranked using the BM25 algorithm, which prioritises documents based on relevance, incorporating term frequency and document length normalisation. This ensures that the most pertinent information is considered for validation purposes. RAV uses a dual-LLM framework where Llama 3 generates web-based answers, and Gemma 2 is tasked with classifying these answers strictly against the database entries. This separation mitigates potential biases arising from using a single model for generation and evaluation. Llama 3 efficiently retrieves concise, relevant information from web sources, while Gemma 2 assesses the similarity between this information and the existing database entries. The LLM-assisted validation relies on a binary classification approach where Gemma 2 outputs a ‘yes’ if the web-derived and database information are similar and a ‘no’ otherwise. This stringent evaluation ensures high reliability, reducing the risk of false positives in validation. In contrast to the complex instructional prompts used in earlier phases, we discovered that simple prompts significantly improved LLM classification accuracy. Initially, using detailed prompts resulted in low similarity scores, averaging around 0.15, with frequent misclassifications.
After simplifying the prompts to a single-line instruction asking the LLM to classify answers as similar or dissimilar (see Appendix A.13), we observed a substantial improvement, with scores increasing by approximately 0.28. This reduction in cognitive load enhanced the model’s ability to determine similarities accurately. However, some misclassifications remain, mainly when subtle semantic differences exist between the database entries and web-sourced information. RAV automates asset validation by integrating web data with traditional databases, enhancing reliability for downstream analysis. While advanced RAG methods like FLARE (Jiang et al., 2023) offer sophisticated retrieval, their complexity and resource demands outweigh the benefits for this project. Our BM25-based RAV remains practical and effective, with potential for future refinement. Figure 3: Partial match scores from the LSEG database validation. The dotted lines separate the sectors, where the sectors are mining, oil & gas, and utilities, respectively. The mining sector demonstrated strong alignment, with high partial match scores for companies such as AA (0.95), FCX (0.88), and NEM (0.83), reflecting data consistency. However, the oil & gas sector showed mixed results, where CVX (0.72) and MPC (0.74) achieved moderate alignment, but XOM exhibited lower scores (Jaccard Similarity: 0.24), suggesting inconsistencies in asset classification. Utilities displayed moderate alignment, with EXC and D achieving partial match scores of 0.93 and 0.79, respectively. Ownership data varied significantly, with CVX achieving near-perfect alignment (1.00), while XOM had lower similarity (Jaccard Similarity: 0.43), likely due to differences in how joint ventures and subsidiaries were recorded. Commodity data showed the most significant discrepancies, with many companies, such as AA, registering low Jaccard and Cosine Similarities (0.00), possibly due to differences in classifying primary and secondary commodities. 
In contrast, country data was generally consistent, with companies like FCX achieving perfect alignment (Partial Match Score: 1.00), though some discrepancies were observed in SCCO (Jaccard Similarity: 0.67). # 7 Results # 7.1 LSEG Database Validation Results We validated our databases against LSEG Workspace datasets, including ‘Mines,’ ‘Oil Refineries,’ and ‘Power Generation,’ using six similarity metrics: Partial Match Score, Jaccard Similarity, Cosine Similarity, Dice-Sørensen Similarity Coefficient, Normalised Levenshtein Distance, and Hits@5. These metrics assessed alignment across physical assets, ownership, commodities, and country data. Figure 3 shows the partial match scores. The full results are shown in Figure 9 in Appendix A.9. The error analysis revealed key challenges. Ownership discrepancies arose due to variations in recording structures, where our databases captured joint ventures while LSEG focused on primary controlling entities. Standardising ownership classification could improve future alignment. Commodity misalignments resulted from differences in listing primary versus secondary commodities, suggesting a need to refine entity extraction prompts and separate commodity categories. Minor inconsistencies in country data, such as listing "USA" versus "California," highlight the importance of hierarchical structuring with separate fields for city, region, and country to enhance accuracy. # 7.2 Coverage Calculation We evaluate database coverage by measuring the proportion of physical assets and their attributes in our constructed database that match those in LSEG. This assessment ensures comprehensiveness, usability, and accuracy while identifying areas for improvement in our extraction pipeline. 
Coverage is computed as $\mathrm{Coverage\ Score} = \left( \frac{N_m}{N_L} \right) \times 100\%$, where $N_m$ represents the number of matched physical assets between our database and LSEG, and $N_L$ is the total number of physical assets in LSEG. The computed coverage scores, shown in Table 7 in Appendix A.10, indicate that the mining sector has better coverage than oil & gas and utilities. Manual inspection of SEC EDGAR filings reveals that lower coverage in oil & gas and utilities stems from improper table parsing, as many assets are listed in tabular formats rather than continuous text. To address this, we integrated a table parsing module using LlamaIndex, which processes HTML tables as structured data instead of narrative text. This significantly improved extraction accuracy, particularly in oil & gas, where assets were previously missed. Figure 10 in Appendix A.11 demonstrates this enhancement. # 7.3 Retrieval-Augmented Validation (RAV) Results Table 8 in Appendix A.12 presents the results of Retrieval-Augmented Validation (RAV), comparing database responses with real-time web data. Similarity scores range from 0.31 to 0.57, indicating moderate alignment, with the Oil & Gas sector performing slightly better due to more transparent regulatory disclosures. Notably, OXY is absent from Table 8 since it only contains unnamed assets, which RAV cannot validate. Mining sector scores hover around 0.4, suggesting uniform discrepancies, likely due to outdated or incomplete records. The Oil & Gas sector shows slightly higher alignment, with companies like MPC and COP exceeding 0.5, possibly due to stringent regulatory reporting. However, frequent asset transfers contribute to inconsistencies. 
The Utilities sector exhibits the widest score range, from 0.31 (EXC) to 0.57 (NEE), reflecting differences in data transparency. NEE’s higher score suggests more consistent asset records, likely due to better data management. # 7.3.1 Error Analysis Ownership mismatches arose from differing data granularity. Our database captured joint ventures and minority stakeholders, whereas web sources listed only primary entities, leading to unfair scores of 0. A weighted scoring system could better account for partial matches. Location mismatches often resulted from implicit references in web snippets. For instance, the Bath County Power Station was correctly labeled as USA in our database, but the web snippet lacked an explicit country mention, receiving a score of 0. Similarly, Chino Mine was recorded as USA, while web sources specified New Mexico, USA. A hierarchical scoring approach would improve accuracy by recognising different levels of geographic detail. Commodity discrepancies occurred because web data often listed only primary commodities, while our database included by-products. For example, Grasberg Mine was recorded as producing copper, gold, silver, and molybdenum, whereas web results mentioned only silver. Categorising commodities into primary and secondary groups through prompt refinement would help resolve this. # 7.4 Total Validation Coverage To assess RAV’s impact, we compute the total validation coverage, which measures the proportion of assets validated through both LSEG database validation and RAV. Total validation coverage is computed as $\left( \frac{N_v}{N_t} \right) \times 100\%$, where $N_v$ represents the number of assets validated, and $N_t$ is the total number of assets in the constructed database. Table 2 presents validation coverage for each company, comparing LSEG-only validation to combined LSEG and RAV validation. 
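Both coverage metrics share the same ratio form; a minimal sketch (the counts below are hypothetical placeholders for illustration, not figures from the paper):

```python
def coverage_score(n_matched, n_total):
    # Coverage = (N_m / N_L) * 100 for LSEG coverage, or
    # (N_v / N_t) * 100 for total validation coverage.
    return 100.0 * n_matched / n_total if n_total else 0.0

# Hypothetical counts purely for illustration:
lseg_coverage = coverage_score(45, 60)      # 45 of 60 LSEG assets matched
total_validation = coverage_score(30, 90)   # 30 of 90 constructed assets validated
```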
Occidental Petroleum (OXY) is excluded due to the absence of company-specific information in the LSEG database and the constructed dataset containing only general assets (e.g., natural gas fields). Since our validation applies only to named assets, general assets remain largely unverified. While extrapolating validation to unnamed assets could extend coverage, this introduces risks to accuracy and completeness. Notably, RAV significantly increases coverage, underscoring its role in enhancing database robustness by validating assets absent from LSEG. Coverage varies across companies; D achieves the highest at $33.33\%$, while COP has the lowest at $6.43\%$, reflecting differences in named asset proportions. Lower coverage suggests a higher proportion of unnamed assets, highlighting gaps in the current validation process. Table 2: Validation coverage comparison using LSEG databases alone versus LSEG databases with RAV. As regulatory demands like the EUDR grow, the need for automated, comprehensive databases will increase. Our LLM-based pipeline can adapt to these demands, improving ESG and CSR compliance. The feedback loop (Figure 4) from regulatory success will drive continuous improvements in data quality and database creation techniques, shaping the future of environmental data management. Figure 4: A feedback loop linking physical asset database creation with improved compliance and ESG initiatives, driving continuous refinement.
The European Union Deforestation Regulation (EUDR) requires companies to prove their products do not contribute to deforestation, creating a critical demand for precise, asset-level environmental impact data. Current databases lack the necessary detail, relying heavily on broad financial metrics and manual data collection, which limits regulatory compliance and accurate environmental modeling. This study presents an automated, end-to-end data extraction pipeline that uses LLMs to create, clean, and validate structured databases, specifically targeting sectors with a high risk of deforestation. The pipeline introduces Instructional, Role-Based, Zero-Shot Chain-of-Thought (IRZ-CoT) prompting to enhance data extraction accuracy and a Retrieval-Augmented Validation (RAV) process that integrates real-time web searches for improved data reliability. Applied to SEC EDGAR filings in the Mining, Oil & Gas, and Utilities sectors, the pipeline demonstrates significant improvements over traditional zero-shot prompting approaches, particularly in extraction accuracy and validation coverage. This work advances NLP-driven automation for regulatory compliance, CSR (Corporate Social Responsibility), and ESG, with broad sectoral applicability.
[ "cs.DB", "cs.AI", "cs.IR", "cs.LG" ]
# 1 INTRODUCTION NL2SQL (natural language to SQL) systems translate natural language questions into SQL queries, allowing users with no technical background to interact with databases and create tools like reports or visualizations. For example, a manager could ask, “How many new customers did we acquire this quarter?,” and the system would generate the corresponding SQL query, execute it, and return the results in an accessible format. This broadens access to data-driven insights, making NL2SQL an essential tool for decision-making. Large language models (LLMs) have played a transformative role in advancing NL2SQL technology. Their ability to adapt to unfamiliar tasks by leveraging contextual examples allows them to generate accurate outputs. This adaptability has positioned LLM-based solutions at the top of various well-known NL2SQL benchmarks like Spider and BIRD [19, 36]. In enterprise environments, database schemas are often large and highly complex due to the integration of multiple data sources, frequent schema versioning, and table transformations. This complexity introduces ambiguity when translating natural language questions into SQL queries. Specifically, tables and columns with similar names can exist, making it difficult to identify the user’s intended reference. We refer to this challenge as schema ambiguity, where multiple SQL queries, each using different tables or columns, could potentially be correct. For instance, a user might ask, “What are the average salaries by department?”, but the database schema could contain both a curr_dept table and a dept_2022 table. In this scenario, it is unclear which table the user intends to query, leading to multiple potential SQL queries. Similarly, ambiguity can arise when dealing with column names. For example, when a user asks, “What were the total sales last quarter?”, the sales table may have columns named both gross_sales and net_sales, either of which could be valid depending on the user’s intent. 
Furthermore, schema ambiguity can substantially impact the structure of a SQL query. For example, consider the question, “What is the revenue per customer?”. In this case, the database schema might include both a customers table and an orders table, where the orders table contains a column named revenue, while the customers table may include an aggregated column, total_revenue. Depending on which table the user intends to reference, two structurally different SQL queries can be generated: SQL1: SELECT customers.customer_id, SUM(orders.revenue) FROM customers JOIN orders ON customers.customer_id = orders.customer_id GROUP BY customers.customer_id; SQL2: SELECT customer_id, total_revenue FROM customers; SQL1 involves a join and aggregation across multiple rows in the orders table, while SQL2 simply retrieves pre-aggregated data from the customers table. This example demonstrates that schema ambiguity can have an effect on the complexity and runtime performance of the SQL query, making its resolution a critical aspect of NL2SQL systems. Among multiple potential SQL queries, users often prefer one query over the others due to their preferences toward specific tables or columns in the schema. These preferences may reflect the user’s familiarity with certain data sources or their expectations regarding the relevance of certain schema components. For instance, the user may prefer gross_sales instead of net_sales for revenue analysis due to the business logic of their organization. By learning these preferences, an NL2SQL system could provide more personalized query suggestions, enhancing the user experience. One approach to address schema ambiguity is to generate a set of SQL queries that accounts for all possible interpretations of the ambiguous schema components. This allows users to review the options and select the query that best fits their needs. However, generating such a set is challenging. 
LLM-based NL2SQL systems often struggle to produce a sufficiently diverse set of potential SQL queries, even when using techniques such as sampling methods or temperature adjustments, as highlighted by [2]. Additionally, the set cannot be too large, as it would overwhelm the user and make selecting the correct query difficult. The complexity increases further when users have specific preferences for tables or columns, which need to be captured and incorporated into future questions. To address the challenges presented by schema ambiguity, we introduce Odin, an NL2SQL recommendation engine that helps users manage and resolve schema ambiguity in NL2SQL tasks. Instead of returning a single SQL query, Odin presents a set of potential queries for users to choose from. The number of suggestions is dynamically adjusted based on the level of ambiguity in the user’s question. Additionally, Odin can learn users’ preferences for specific schema components through their feedback, enhancing future recommendations. Internally, Odin operates using a Generate-Select paradigm. Given a semantically ambiguous natural language question, the Generator is tasked with producing the set of all potentially correct SQL queries. Traditional approaches, such as beam search or diversity-promoting techniques like nucleus sampling [9], often fail to generate queries that reflect schema ambiguity, as noted in [2]. To overcome this limitation, Odin uses an iterative strategy: the Generator sequentially produces candidate SQL queries by selectively modifying the information provided to the LLM. Specifically, it removes certain schema elements from previously generated SQL queries, encouraging the LLM to explore different schema components and generate diverse queries. The generator may produce incorrect SQL queries by omitting too many important schema components or by generating results that do not align with user preferences. 
The Selector component addresses this by filtering out the inaccuracies to reduce the set size while ensuring the correct SQL query is retained, thus maintaining high recall. This process is framed within the conformal prediction framework [24], which is commonly used to provide confidence in the outputs of machine learning models in critical fields such as medical diagnosis [28]. Conformal prediction provides concrete guarantees on recall, ensuring that it does not fall below a specified threshold. After the selection phase, users can choose their preferred SQL query from the remaining options. Odin learns user preferences from this feedback, allowing it to refine its recommendations and better align future outputs with the user’s specific preferences. Our evaluation demonstrates that the set of SQL queries recommended by Odin contains the correct query 1.5–2× more often, while maintaining a result set that is 2–2.5× smaller, compared to other baselines on the AmbiQT benchmark, which contains various forms of schema ambiguity. In summary, we make the following contributions:
- We present Odin, a system designed to handle and resolve ambiguity in NL2SQL tasks (Section 4).
- We introduce a novel SQL generation-selection strategy that outperforms LLM-based sampling (Sections 5 and 6).
- We offer a personalization algorithm for Odin to learn user preferences (Section 7).
- We evaluate Odin against state-of-the-art baselines and demonstrate its effectiveness (Section 8).
# 2 RELATED WORK NL2SQL. Generating accurate SQL queries from natural language questions (NL2SQL) is a long-standing challenge due to the complexities in user question understanding, database schema comprehension, and SQL generation [11, 15, 21]. Recently, large language models (LLMs) have demonstrated significant capabilities in natural language understanding as the model scale increases. Many LLM-based NL2SQL solutions [7, 15, 17, 26, 31] have emerged. 
SC-Prompt [7] divides the NL2SQL task into two simpler sub-tasks (i.e., a structure stage and a content stage). Structure and content prompt construction and fine-grained constrained decoding are proposed to tackle these two sub-tasks, respectively. CodeS [17] adopts an incremental pre-training approach to enhance the SQL generation capability. It further addresses the challenges of schema linking and domain adaptation through prompt construction and data augmentation. MAC-SQL [31] is an LLM-based multi-agent collaborative framework, in which a decomposer agent leverages few-shot chain-of-thought reasoning for SQL generation and two auxiliary agents utilize external tools or models to acquire smaller sub-databases and refine erroneous SQL queries. CHESS [26] introduces a new pipeline that consists of effective relevant data/context retrieval, schema selection, and SQL synthesis. Also, CHESS is equipped with an adaptive schema pruning technique based on the complexity of the problem and the model’s context size. These methods focus on generating more accurate SQL queries for a given NL question. However, they neither explicitly handle schema ambiguity nor learn preferences from users. Consequently, these state-of-the-art methods are not able to learn from user feedback and improve their SQL generation progressively. Ambiguity in SQL. While ambiguity has been extensively studied in many fields of NLP [4, 20], it has not been explored much in NL2SQL. Hou et al. [10] introduce an uncertainty decomposition framework for LLMs in general, which can be applied to any pre-trained LLM. CAmbigNQ [14] focuses on ambiguous questions in open-domain question answering, tackling the problem in three steps: ambiguity detection, clarification question generation, and clarification-based QA. 
CLAM [13] is a framework that drives language models to selectively ask for clarification about ambiguous user questions and give a final answer after receiving clarification for open-domain QA. These works focus on clarifying the intent of the question, rather than resolving ambiguities within the data where the answer resides. In NL2SQL, Wang et al. [30] tackle ambiguity in SQL arising from related column names. The proposed method relies on labeling words of the text and does not generalize to other types of ambiguity beyond column ambiguity. AmbiQT [2] represents the first open benchmark for testing coverage of ambiguous SQLs. It introduces a decoding algorithm that searches the SQL logic space by combining plan-based template generation with beam-search-based infilling. However, it does not capture and adapt to user preferences over the course of multiple questions. # 3 PROBLEM FORMULATION Traditional NL2SQL systems typically generate a single SQL query in response to a user’s question. The main goal is for the generated SQL query to match the ground truth SQL query in terms of execution result. However, this approach may fall short in cases where the question contains ambiguity. In such instances, the generated query might not align with what the user intended. To address this, we propose developing an NL2SQL system that generates a set of possible queries, allowing the user to select the one that best meets their needs. Given a user question $Q$, the system generates a set of SQL queries to potentially answer the question. The process can be expressed as: $$ \{ \mathrm{SQL}_1, \ldots, \mathrm{SQL}_k \} = \mathrm{NL2SQL}(Q) $$ We assume that although the natural language question input to the system contains semantic ambiguities, there is nonetheless a single “correct” SQL query which captures the user’s true intent and preferences, which we refer to as the ground truth query $SQL_{GT}$. 
The system aims to maximize the likelihood that the execution of any SQL query in the final set returns the same result as the execution of $SQL_{GT}$. Let $EX(SQL)$ denote the execution result of query $SQL$. Let $A(Q)$ denote the execution accuracy for question $Q$, defined as: $$ A(Q) = \begin{cases} 1 & \text{if } \exists\, SQL \in \mathrm{NL2SQL}(Q) \text{ such that } EX(SQL) = EX(SQL_{GT}) \\ 0 & \text{otherwise} \end{cases} $$ The average accuracy over a workload of natural language questions $W = \{Q_1, Q_2, \ldots, Q_N\}$ is: $$ \operatorname{AvgAcc}(W) = \frac{1}{N} \sum_{i=1}^{N} A(Q_i) $$ In principle, generating all possible SQL queries for a question would achieve $100\%$ accuracy, but this is impractical as it overwhelms users with too many options. Maximizing accuracy alone does not guarantee a good user experience. When users are presented with an excessive number of choices, identifying the correct query becomes challenging. Therefore, it is essential to balance accuracy and the size of the result set to enhance user experience. The optimization goal of the system is to maximize the likelihood of including the correct SQL query in a manageable set of results $(K)$, ensuring efficiency and usability. The average number of SQL queries shown over the workload $W$ is: $$ \operatorname{AvgResultSize}(W) = \frac{1}{N} \sum_{i=1}^{N} |\mathrm{NL2SQL}(Q_i)| $$ The overall objective is to maximize average accuracy while limiting the average number of results: $$ \max \operatorname{AvgAcc}(W), \text{ subject to } \operatorname{AvgResultSize}(W) \leq K $$ This balance is key for optimizing both system performance and user experience. 
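The two workload metrics above can be made concrete with a small sketch; execution results are modelled as plain Python values, and the function names are ours:

```python
def execution_accuracy(candidate_results, gt_result):
    # A(Q) = 1 if any candidate query's execution result equals
    # the ground-truth query's result, else 0.
    return int(any(r == gt_result for r in candidate_results))

def workload_metrics(workload):
    # workload: list of (candidate_results, gt_result) per question.
    # Returns (AvgAcc, AvgResultSize).
    n = len(workload)
    avg_acc = sum(execution_accuracy(c, gt) for c, gt in workload) / n
    avg_size = sum(len(c) for c, _ in workload) / n
    return avg_acc, avg_size
```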
# 4 ODIN SYSTEM OVERVIEW Odin is an NL2SQL recommendation engine designed to assist users in managing schema ambiguities within their databases by generating multiple SQL query options and further resolving these ambiguities through learning user preferences. Upon receiving a natural language user question, Odin generates several potential SQL queries, as demonstrated in Fig. 1(A). Odin personalizes query generation by incorporating user feedback. The core novelty of Odin compared to other NL2SQL systems lies in its ability to generate a small, accurate set of SQL queries tailored to user preferences. Internally, Odin is composed of three key components: the Generator, the Selector, and the Personalizer. Generator: This component generates potential SQL candidates based on the user’s question. A typical approach used by other NL2SQL systems [27, 32] for generating multiple SQL query candidates is to repeatedly call the LLM with the same prompt, using high temperatures and sampling techniques such as nucleus sampling [9] to induce different output SQL queries with each call. However, [2] shows that this approach does not produce sufficiently diverse SQL queries, because LLMs tend to produce simple variants of the same SQL query during sampling. For instance, the model might generate queries such as select * from students, SELECT * FROM students, or select * from students; with only minor cosmetic variations. Odin’s Generator improves on this baseline approach by incorporating previously-generated queries into the generation process. Specifically, our method selectively masks certain schema elements used in the previously-generated SQL queries, encouraging the LLM to explore alternative schema elements. For instance, as illustrated in Figure 1(B), the initial SQL query is generated using the entire schema. Key columns, such as birthplace and roll_num, are then marked as candidates for masking. 
By excluding these columns in subsequent LLM calls, new masked schemas are generated, leading to different SQL queries. For example, in the newly generated SQL query, the origin column replaces birthplace, and id replaces roll_num. Although this approach can produce diverse queries, it risks inefficiency due to the exponential number of possible masked schemas. This is particularly problematic when LLM calls are computationally expensive and should be minimized. Section 5 describes our approach to generate diverse SQL queries while minimizing the number of LLM calls. Selector: The generation algorithm may occasionally mask schema elements for which no reasonable substitutes exist, leading the LLM to produce incorrect SQL queries. Additionally, some generated queries might not align with user preferences. The primary objective of Odin is to maintain a compact set of generated SQL queries while ensuring accuracy. Removing these incorrect and misaligned queries can help Odin reduce the set size. To address this issue, after Odin’s Generator has produced a set of candidate SQL queries, Odin’s Selector filters out candidate SQL queries which are likely to be incorrect, ensuring that nearly all correct SQL queries are preserved. To ensure accuracy in this filtering process, we employ the conformal prediction framework [24], which provides statistical guarantees for our ability to retain correct queries. Specifically, we evaluate candidate SQL queries with a scoring function, selecting those exceeding a threshold (e.g., Fig. 1(C) shows an example where candidates with scores above 0.8 are retained). The effectiveness of this process hinges on the scoring function and threshold selection, which we discuss further in Section 6. Personalizer: Users may prefer that SQL queries use certain tables and columns over others, and learning these preferences can enhance recommendation quality. 
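Returning to the Selector: its conformal thresholding can be illustrated with a generic split-conformal sketch, in which a score threshold is calibrated on held-out questions whose correct SQL is known so that a fresh correct query scores above the threshold with probability at least 1 − α. This is a textbook construction, not the paper's exact Section 6 procedure:

```python
import math

def conformal_threshold(correct_scores, alpha=0.1):
    # correct_scores: scores the scoring function assigned to
    # known-correct SQL queries on a calibration set.
    # Returns a threshold tau such that a fresh correct query
    # scores >= tau with probability >= 1 - alpha.
    n = len(correct_scores)
    k = max(0, math.floor(alpha * (n + 1)) - 1)  # adjusted quantile index
    return sorted(correct_scores)[k]

def select_candidates(candidates, score_fn, tau):
    # Keep only candidates that clear the conformal threshold.
    return [c for c in candidates if score_fn(c) >= tau]
```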
Since preferences are specific to the application, schema, or database, they require user input. These preferences must also be conveyed to the Generator and Selector components to personalize outputs. After Odin produces the set of SQL queries for a given user question, the user is asked to select the correct SQL query from the output set, if any. This feedback is transformed into textual hints, which the Generator and Selector use for personalization. For instance (Fig. 1(D)), if a user selects a SQL query that uses the origin column instead of birthplace, the system infers that the term hometown from the user’s query maps to origin rather than to birthplace. The hint generator then processes this feedback, along with the user’s current question, to perform schema linking (i.e., mapping specific entities from the user’s input question to corresponding DB schema elements). Figure 1: Overview of Odin. (A) Odin functionality; (B) schema-masking-based SQL generation; (C) threshold-based selection; (D) personalized hint generation. Through this schema linking, Odin can map natural language phrases like hometown and roll number from the user’s question to the origin and roll_num columns, respectively. 
These personalized hints are stored and later used by the Generator and Selector to align future SQL queries with user preferences. In Fig. 1(A), when the user asks, “hometown of students from Utah”, the system uses the hint that hometown refers to origin, enabling it to generate the correct SQL query. We describe the personalization process in further detail in Section 7. # 5 GENERATOR For a given question that can be answered with multiple possible schema components, Odin ensures all potential SQL queries using these components are generated and presented to the user. The generator in Odin handles this by producing the set of potential queries. A common method in LLM pipelines for generating diverse answers is using high-temperature sampling, where the temperature controls randomness. Higher temperatures lead to more varied and creative responses, while lower temperatures produce more focused, predictable results. One might assume that high temperatures would generate diverse SQL queries, but in practice, it leads to only superficial differences, such as varying table/column aliases, using equivalent functions, or altering the SQL structure with subqueries. While these changes affect the query’s appearance, they don’t impact execution, resulting in only cosmetic diversity in generated SQL. For example, consider the question: “List the names of customers who have placed orders over \$1,000 in the past six months.” At a high temperature setting, the model might generate the following SQL in one run: SELECT Name FROM Customers JOIN Orders ON Customers.CustomerID = Orders.CustomerID WHERE Orders.TotalAmount > 1000 AND Orders.OrderDate >= DATE_SUB(CURDATE(), INTERVAL 6 MONTH); This SQL joins the Customers and Orders tables and filters for orders over \$1,000 within the last six months. On a second run, it might produce: SELECT c.Name FROM Customers c INNER JOIN Orders o ON c.ID = o.CustomerID WHERE o.TotalAmount > 1000 AND o.OrderDate >= DATE_SUB(CURDATE(), INTERVAL 6 MONTH); This version simply uses different aliases for the tables compared to the first query, but it will yield the same execution result. 
Finally, on a third run, the model might generate: SELECT Name FROM Customers WHERE CustomerID IN ( SELECT CustomerID FROM Orders WHERE TotalAmount > 1000 AND OrderDate >= DATE_ADD(CURRENT_DATE, INTERVAL -6 MONTH) ); Although this query uses a subquery and a different date function, it yields the same output as the previous two queries. This example demonstrates how high temperatures in LLMs generate superficial variations in NL2SQL tasks without changing the effective SQL, as studied in [2]. Since high-temperature sampling produces only superficial diversity in results, we can instead prompt the LLM to generate SQL queries with different execution outcomes. To achieve this, we provide the LLM with all previously generated SQL queries and explicitly instruct it (via the LLM prompt) to create a new SQL query that yields a distinct execution result. We refer to this approach as ForcedDiversity. Our empirical results show that ForcedDiversity generates more truly diverse queries. However, it starts to repeat outcomes when too many SQL queries are included in the prompt. To improve upon previous methods, Odin leverages the structure of the NL2SQL task to enhance diversity by introducing controlled perturbation of schema information in the prompt. By masking certain schema elements, the LLM is encouraged to rely on different schema elements, leading to more varied SQL queries. For example, for the question “Find the hometown of students”, if the database has both birthplace and origin columns, and the LLM initially uses birthplace, we can remove knowledge of this column’s existence in subsequent prompts to encourage the LLM to generate a query using origin, leading to more diverse SQL queries. We use the term masked schema to refer to a subset of the full schema, i.e., a schema in which certain elements have been masked. 
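As a concrete illustration, the ForcedDiversity baseline described above amounts to a prompt builder along these lines; the instruction wording is our assumption, not the paper's exact prompt:

```python
def forced_diversity_prompt(question, schema_text, previous_sqls):
    # Show the LLM every previously generated query and explicitly
    # ask for one whose *execution result* differs from all of them.
    prior = "\n".join(f"-- already generated:\n{q}" for q in previous_sqls)
    return (
        f"Database schema:\n{schema_text}\n\n"
        f"Question: {question}\n\n"
        f"{prior}\n\n"
        "Write a new SQL query that answers the question and whose "
        "execution result differs from every query shown above."
    )
```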
We first present a simple algorithm for schema masking. We then use the limitations of this simple algorithm to motivate the schema masking algorithm used by Odin’s Generator. # 5.1 Naive Schema Masking Algorithm A naive version of the schema masking algorithm is shown in Fig. 1(B). It generates SQL queries by progressively restricting a given schema and exploring different masked schemas in a tree-like structure. The root node of the tree structure represents the complete schema of the database. Each node in this tree represents a masked schema that is a subset of the complete schema. At each node, a SQL query is generated based on the current schema, which is then added to the set of SQL queries. The algorithm then modifies the current schema of that tree node by removing individual columns used in the generated query, which produces new schemas to explore in child nodes. This process continues until all relevant masked schemas are exhausted. In Fig. 1(B), the root node represents the complete schema, and an initial SQL query is generated from it. Columns such as birthplace and roll_num, which are used in this query, are then removed, leading to modified schemas that are used to generate additional SQL queries. This approach ensures that the SQL queries generated by the child nodes are different from those of their parent nodes. This is straightforward to verify because the schemas used to generate child queries are missing at least one column present in the parent SQL query. However, the naive schema masking algorithm has two main drawbacks. First, this algorithm might result in redundant explorations of the same masked schema. The algorithm does not ensure that descendants of a node explore distinct schemas. In some scenarios, the exploration paths of different branches may overlap. For example, in Fig. 1(B), child schemas of the root node are generated by removing columns birthplace and roll_num. 
The descendants of these child schemas might explore the same schemas in different orders (e.g., removing birthplace then roll_num, vs. removing roll_num then birthplace), leading to duplicated effort. Second, the algorithm operates without accounting for practical resource constraints, such as a limit on the number of calls to an LLM. In real-world applications, the number of LLM calls is limited, but the exploration space of this algorithm grows exponentially with the size of the schema, leading to inefficient use of resources. # 5.2 Odin’s Schema Masking Algorithm Odin’s schema masking algorithm introduces two main improvements over the naive algorithm presented above: duplicate schema detection and a greedy exploration strategy for the search tree of schemas. The key idea behind duplicate schema detection is to maintain a record of schemas seen so far and to avoid exploring redundant nodes. To deal with the issue of limited resources, we introduce a greedy tree search strategy that uses a priority queue to explore nodes based on a scoring system. These scores reflect the relevance of the current node’s schema to the entities mentioned in the user’s question, allowing the algorithm to focus on the most promising exploration paths while staying within resource constraints. Algorithm 1 shows the pseudocode for the algorithm. The algorithm takes the user question, the database schema, and a limit on the number of LLM calls as input (Line 12). If the LLM fails to find relevant entities in the masked schema and is unable to generate a valid SQL query, the exploration may terminate early; the limit on LLM calls therefore represents only the maximum number of attempts, not a guarantee that all calls will be used. To prioritize exploration, the algorithm maintains a priority queue of nodes (i.e., schemas) to explore, where each node’s priority is determined by its relevance score, computed by the Cal_Score function (see Section 5.3).
At each iteration, the node with the highest score is selected for exploration (Line 19). When a node is explored, its corresponding schema is used to generate a SQL query, which is added to the results (Lines 20-22). Similar to the naive algorithm, new potential nodes (i.e., schemas) are generated by removing columns used in the SQL query (Lines 23-29). These new schemas are then scored using a relevance function. If the new schema has already been explored or is empty, it is discarded; otherwise, it is added to the priority queue. This ensures that the exploration is both resource-efficient and free from duplicate schemas. In summary, the algorithm combines a greedy budget-conscious exploration strategy with a simple mechanism for avoiding redundant searches. # 5.3 Scoring Masked Schemas The schema scoring function, Cal_Score, is crucial to the generation algorithm, as a noisy scoring mechanism can hinder efficient exploration. A masked schema’s score intuitively reflects the likelihood that a plausibly-correct SQL query for the user’s question can be formed from the elements of the masked schema. An entity refers to information from the user’s question, such as hometown or roll number. Each entity in the user’s question should be well-represented by the masked schema in order to produce a SQL query. If a schema poorly represents any entity in the user’s question, then the schema should intuitively get a lower score. For example, consider the scenario in Fig. 1(B). The entity hometown could plausibly be represented by either the birthplace column or the origin column. In contrast, the entity roll number can only be correctly represented by the column roll_num. This distinction suggests that removing birthplace from the schema is less impactful than removing roll_num. 
The Cal_Score function reflects this intuition by computing the similarity between each entity in the question and its best-matching column in the schema, and using the minimum similarity score across all entities as the schema’s score. In Algorithm 2, the function starts by extracting entities from the question (Line 10). It then calculates the maximum similarity score for each entity across all schema columns, storing these scores (Lines 11-18). The final schema score is determined by the lowest score among all entities, highlighting the schema’s weakest representation (Line 19). The Extract_Entities function identifies entities in the user’s question with an LLM call. The Cal_Sim function, which measures entity-column similarity, utilizes SBERT similarity scores [22].

# Algorithm 1 Greedy Tree Search with Resource Constraints
1: Input:
2: f_sch - Full schema
3: q - Initial query
4: max_calls - Maximum number of LLM calls
5: Output:
6: final_queries - List of SQL queries
7: Helper Functions:
8: NL2SQL - Generates SQL for a given schema and user question
9: Col_Used - Returns columns used in a SQL query
10: Remove_Col - Removes a column from the schema
11: Cal_Score - Calculates relevance score for a schema
12: function GenSQLQueries(f_sch, q, max_calls)
13: final_queries ← []
14: queue ← PriorityQueue()
15: schema_seen ← []
16: queue.push((f_sch, 1.0)) ⊲ Initial score is 1
17: llm_calls ← 0
18: while queue is not empty and llm_calls < max_calls do
19: (curr_sch, _) ← queue.pop()
20: sql ← NL2SQL(curr_sch, q)
21: llm_calls ← llm_calls + 1
22: final_queries.append(sql)
23: for each col in Col_Used(sql) do
24: new_sch ← Remove_Col(curr_sch, col)
25: score ← Cal_Score(new_sch, q)
26: if new_sch is not empty then
27: if new_sch not in schema_seen then
28: queue.push((new_sch, score))
29: schema_seen.add(new_sch)
30: return final_queries

# Algorithm 2 Calculate Schema Relevance Score
1: Input:
2: schema - Current schema
3: question - User question
4: Output:
5: score - Relevance score of the schema
6: Helper Functions:
7: Extract_Entities - extracts entities from the user question
8: Cal_Sim - gives similarity score between an entity and a column
9: function Cal_Score(schema, question)
10: entities ← Extract_Entities(question)
11: entity_scores ← []
12: for each entity in entities do
13: max_similarity ← −∞
14: for each col in schema do
15: similarity ← Cal_Sim(entity, col)
16: if similarity > max_similarity then
17: max_similarity ← similarity
18: entity_scores.append(max_similarity)
19: score ← min(entity_scores)
20: return score

Note that when computing the similarity between an entity and a column, considering the column’s table leads to better semantic results. For example, replacing the address column from the student table with the address column from the student_registration table may be more relevant than using the address column from the teachers table. Thus, we include table semantics when computing similarity as well. # 6 SELECTOR The Generator may be overeager when masking schema elements, leaving no suitable schema elements for a given question entity, which can lead the LLM to select incorrect tables or columns or omit necessary elements, resulting in an inaccurate SQL query. Additionally, if user preferences for certain tables or columns are masked, the generated query may not align with their intent.
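Under simplifying stand-in assumptions (schema as a set of column names, stubbed `nl2sql`, `cols_used`, and `cal_sim` functions in place of the LLM and SBERT calls), Algorithms 1 and 2 can be sketched together in Python. Since `heapq` is a min-heap, scores are negated so the most promising schema is popped first:

```python
import heapq

def cal_score(schema, entities, cal_sim):
    """Algorithm 2: min over entities of each entity's best-matching column."""
    if not schema:
        return float("-inf")
    return min(max(cal_sim(e, c) for c in schema) for e in entities)

def gen_sql_queries(full_schema, question, entities, max_calls,
                    nl2sql, cols_used, cal_sim):
    """Algorithm 1: greedy, budget-bounded exploration of masked schemas."""
    final_queries = []
    seen = set()
    queue = [(-1.0, sorted(full_schema))]  # initial score is 1; negated for min-heap
    llm_calls = 0
    while queue and llm_calls < max_calls:
        _, curr = heapq.heappop(queue)
        curr_sch = frozenset(curr)
        sql = nl2sql(curr_sch, question)
        llm_calls += 1
        final_queries.append(sql)
        for col in cols_used(sql):
            new_sch = curr_sch - {col}
            if new_sch and new_sch not in seen:  # duplicate schema detection
                seen.add(new_sch)
                score = cal_score(new_sch, entities, cal_sim)
                heapq.heappush(queue, (-score, sorted(new_sch)))
    return final_queries

# Stubs for the Fig. 1(B) example: hometown matches birthplace/origin,
# while roll number only matches roll_num.
sims = {("hometown", "birthplace"): 0.8, ("hometown", "origin"): 0.7,
        ("roll number", "roll_num"): 0.9}
cal_sim = lambda e, c: sims.get((e, c), 0.1)
entities = ["hometown", "roll number"]
nl2sql = lambda sch, q: "SELECT " + ",".join(
    sorted({max(sch, key=lambda c: cal_sim(e, c)) for e in entities}))
cols_used = lambda sql: sql[len("SELECT "):].split(",")

qs = gen_sql_queries({"birthplace", "origin", "roll_num"}, "q", entities, 2,
                     nl2sql, cols_used, cal_sim)
print(qs)  # ['SELECT birthplace,roll_num', 'SELECT origin,roll_num']
```

With a budget of two calls, the search first masks birthplace (the schema keeping both entities well-represented scores 0.7) rather than roll_num (whose removal drops the schema score to 0.1), so the two genuinely ambiguous alternatives are generated first.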
As a result, queries produced by the Generator may be incorrect or misaligned. The Selector’s primary role is to eliminate flawed SQL queries while ensuring the correct one is retained. We can express this objective as follows: $$ \begin{array}{rl} \text{minimize} & |\mathrm{Sel}(\mathrm{Gen}(Q))| \\ \text{subject to} & \Pr\bigl(\mathrm{EXM}(\mathrm{Sel}(\mathrm{Gen}(Q)), \mathrm{SQL}^{gt}) = 1 \mid \mathrm{EXM}(\mathrm{Gen}(Q), \mathrm{SQL}^{gt}) = 1\bigr) > 1 - \alpha \end{array} $$ In this formulation, $\mathrm{EXM}(S, \mathrm{SQL}^{gt})$ checks whether any query in $S$ produces the same execution result as the ground-truth query $\mathrm{SQL}^{gt}$. $\mathrm{Gen}(Q)$ is the set of SQL queries produced by the Generator for a question $Q$, and $\mathrm{Sel}(\mathrm{Gen}(Q))$ is the filtered subset of queries produced by the Selector. The parameter $\alpha$, typically between $1\%$ and $5\%$, represents the acceptable margin of error. The constraint ensures that, with probability exceeding $1-\alpha$, the selected set retains at least one query matching the ground truth, while the objective minimizes the number of SQL queries returned to the user, balancing accuracy and efficiency. The objective of the Selector closely aligns with the conformal prediction framework [1, 24], which is commonly used in classification tasks. In conformal prediction, the goal is to select a subset of labels such that the true label is included with high probability. The key idea behind conformal prediction is straightforward: a scoring function is used to assess how likely a label is to be incorrect, and all labels scoring below a carefully chosen threshold are selected.
This threshold is determined using a calibration set, ensuring that the high-probability guarantees hold, assuming new queries come from the same distribution as the calibration set. Similarly, we aim to select a subset of SQL queries such that the correct one is retained with high probability. We provide an overview of conformal prediction in Section 6.1, explain how we map our problem to this framework in Section 6.2, and describe our scoring function in Section 6.3. # 6.1 Background on Conformal Prediction Originally proposed by [29], conformal prediction provides a way to turn heuristic notions of uncertainty from machine learning models into rigorous prediction sets that are guaranteed to contain the true outcome with a specified probability. This framework can be applied to both regression and classification tasks, making it highly adaptable for various machine learning applications. The central idea of conformal prediction is to construct a set of possible outcomes for a new input $X_{\mathrm{test}}$ such that the set will contain the true outcome $Y_{\mathrm{test}}$ with a user-specified confidence level $1-\alpha$. The procedure for generating these prediction sets is non-parametric and relies on a calibration dataset to quantify the uncertainty of the model’s predictions. The conformal prediction framework proceeds through the following key steps: 1. Heuristic Uncertainty Estimate: Start with a pre-trained model that provides a score function $s(x, y) \in \mathbb{R}$ for each input-output pair $(x, y)$. This score function reflects the level of disagreement between the input $x$ and the predicted or true output $y$. A higher score indicates worse agreement between $x$ and $y$, i.e., more uncertainty. 2. Calibration: To quantify uncertainty, the model is first calibrated using a set of observed data points $\{(X_1, Y_1), \ldots, (X_n, Y_n)\}$.
For each pair $(X_i, Y_i)$, the score $s(X_i, Y_i)$ is computed, yielding a set of calibration scores $s_1 = s(X_1, Y_1), \ldots, s_n = s(X_n, Y_n)$. A quantile threshold $\hat{s}$ is then selected such that $$ \hat{s} = \frac{\lceil (1+n)(1-\alpha) \rceil}{n}\text{-th quantile of } \{s_1, \ldots, s_n\}. $$ This quantile ensures that future prediction sets will contain the true outcome with probability at least $1-\alpha$. 3. Prediction Set Formation: For a new input $X_{\mathrm{test}}$, the score function is evaluated for various possible outputs $y$. The prediction set $C(X_{\mathrm{test}})$ is then formed by including all outputs $y$ such that the score function satisfies: $$ C(X_{\mathrm{test}}) = \{ y : s(X_{\mathrm{test}}, y) \leq \hat{s} \}. $$ This ensures that the prediction set contains all outcomes where the model’s uncertainty (as captured by the score function) is below the threshold $\hat{s}$. Theorem 6.1 (Conformal Prediction Guarantee, [29]). Given a calibration dataset $\{(X_1, Y_1), \ldots, (X_n, Y_n)\}$ drawn i.i.d. from the same distribution as the test point $(X_{test}, Y_{test})$, conformal prediction constructs a prediction set $C(X_{test})$ for the true outcome $Y_{test}$ using the threshold in Eq. (2). The prediction set satisfies the following coverage guarantee: $$ P(Y_{test} \in C(X_{test})) \geq 1 - \alpha, $$ where $\alpha$ is a user-specified significance level. This guarantee holds regardless of the underlying distribution of the data.
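The calibration and prediction-set steps above can be sketched in a few lines of Python. This is a generic illustration of split conformal prediction, not Odin-specific code; the function names are our own:

```python
import math

def conformal_threshold(cal_scores, alpha):
    """Threshold s_hat: the ceil((n+1)(1-alpha))/n empirical quantile of the
    calibration scores, as in Eq. (2)."""
    n = len(cal_scores)
    rank = math.ceil((n + 1) * (1 - alpha))  # 1-indexed rank of the quantile
    return sorted(cal_scores)[min(rank, n) - 1]

def prediction_set(candidates, score_fn, s_hat):
    """C(X_test): every candidate whose nonconformity score is <= s_hat."""
    return [y for y in candidates if score_fn(y) <= s_hat]

# With 100 calibration scores and alpha = 0.1, the threshold is the 91st
# smallest score (ceil(101 * 0.9) = 91), so roughly 90% of future true
# outcomes score below it.
cal = [float(i) for i in range(1, 101)]
s_hat = conformal_threshold(cal, 0.1)
print(s_hat)  # 91.0
```

The only distributional requirement is exchangeability between the calibration scores and the test score, which is why the guarantee is distribution-free.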
# 6.2 Mapping Selector to Conformal Prediction In this subsection, we demonstrate how the Selector aligns with the conformal prediction framework. By establishing this connection, we leverage the statistical guarantees provided by conformal prediction to ensure that the correct SQL query is retained with high probability after the selection process. Conformal prediction constructs prediction sets that contain the true outcome with a specified confidence level $1-\alpha$. To map our problem to this framework, we define the following components: Inputs and Outputs: In our setting, the input $X$ is the user’s natural language question $Q$, and the output $Y$ is a candidate SQL query $q$ generated by the LLM. Score Function: We define a score function $s(Q, q)$ that quantifies the likelihood of the SQL query $q$ being incorrect for the given input $Q$. A higher score indicates a higher chance that $q$ is incorrect. The specific design of this score function is detailed in Section 6.3. Calibration Set: The calibration set consists of pairs $\{(Q_i, \mathrm{Gen}(Q_i))\}$, where $Q_i$ are natural language questions and $\mathrm{Gen}(Q_i)$ are the corresponding sets of SQL queries generated by Odin. Importantly, we only keep pairs for which $\mathrm{Gen}(Q_i)$ contains a query with the same execution result as the correct SQL query $\mathrm{SQL}_i^{gt}$. For each pair, we compute the score $s(Q_i, q)$ for all queries $q \in \mathrm{Gen}(Q_i)$, focusing on the score distribution for correct SQL queries. Quantile Threshold: We determine the threshold $\hat{s}$ by analyzing the scores of correct SQL queries in the calibration set, specifically using the quantile defined in Eq. (2).
Prediction Set Formation: For a new input $Q_{\mathrm{test}}$, we generate a set of candidate SQL queries $\mathrm{Gen}(Q_{\mathrm{test}})$. We compute the scores $s(Q_{\mathrm{test}}, q)$ for each $q \in \mathrm{Gen}(Q_{\mathrm{test}})$. The Selector then forms the prediction set $\mathrm{Sel}(\mathrm{Gen}(Q_{\mathrm{test}}))$ by including all queries with scores less than or equal to $\hat{s}$: $$ \mathrm{Sel}(\mathrm{Gen}(Q_{\mathrm{test}})) = \{ q \in \mathrm{Gen}(Q_{\mathrm{test}}) \mid s(Q_{\mathrm{test}}, q) \leq \hat{s} \}. $$ Mapping the Selector to the conformal prediction framework provides the coverage guarantee from Eq. (1). Its effectiveness depends on the quality of the score function $s(Q, q)$ (see Section 6.3 for details). # 6.3 Scoring Functions In the Selector, the scoring function evaluates each SQL query generated by the model. The goal is to assign low scores to correct SQL queries and high scores to incorrect ones. Ideally, the ground truth SQL query would get a score of zero, while all others get higher scores, allowing for confident elimination of incorrect queries. However, designing a perfect scoring function is difficult due to the inherent complexity of natural language and SQL semantics. The scoring function should capture the semantic alignment between the user’s question and the SQL query, focusing on the relevant tables and columns. To achieve this, we leverage the capabilities of language models to understand and represent semantic relationships. We develop two scoring functions: an LLM-based function that directly evaluates the SQL query using an LLM, and an SBERT-based function that heuristically decomposes the task and uses SentenceBERT (SBERT) [22] to compute semantic similarities. Each scoring function is described below.
6.3.1 LLM-based Scoring Function. To develop the LLM-based scoring function, we utilize the language model’s ability to understand complex instructions by prompting it to evaluate whether a given SQL query correctly answers the user’s question. One method is to ask the LLM to assign a score or probability directly [35], while another is to use the logit probabilities of specific tokens. We adopt the logit-probabilities approach for finer-grained scoring. We use a prompt that asks the LLM if the SQL query answers the question, providing two options: A. Yes, and B. No (as shown in Fig. 2). We use the logit probability of option B as the score. A lower probability for B indicates a higher likelihood that the SQL query is correct, aligning with our goal of assigning lower scores to correct queries. 6.3.2 SBERT-based Scoring Function. Although LLMs perform well, they can be resource-intensive. As a more efficient alternative, we propose an SBERT-based scoring function. SBERT computes semantic similarities between sentences or phrases [22], which we leverage by breaking down the SQL query scoring task into smaller sub-tasks involving entity similarity computations. Figure 2: Example prompt for the LLM-based scoring function, showing the DB schema, the user question, a candidate SQL query, and the instruction to choose Option A (the SQL correctly answers the question) or Option B (it does not). Our approach is based on the observation that a correct SQL query should represent all entities mentioned in the user’s question via its tables and columns, similar to the schema scoring function used in generation. The SBERT-based scoring follows Algorithm 2, but focuses on the columns in the SQL query rather than the masked schema.
By negating this score, we assign lower scores to queries that better represent the user’s question. Queries missing entities receive higher scores, allowing the Selector to filter them out. This method efficiently captures semantic alignment and is suitable for resource-limited scenarios. # 7 PERSONALIZATION As mentioned earlier, ambiguity in NL2SQL often arises when a database contains multiple similarly named tables or columns. For example, if a user asks, “What were the total sales last year?”, but the database includes both gross_sales and net_sales, it is unclear which column the user intends to reference. User preferences are reflected in their choice of specific schema components. One user might consistently prefer the gross_sales column when referring to total sales, while another may focus on net_sales. These preferences are often tied to how the user phrases their query. For instance, one user might use the term total sales to mean gross_sales, while another might mean net_sales when referring to final sales. It is crucial to map the user’s phrases or entities to the corresponding schema components. Capturing user preferences can improve future recommendations: continuing the previous example, if we know that a user associates gross_sales with total sales, future queries can avoid incorrectly selecting the net_sales column in similar situations. While user preferences improve recommendation quality, the challenge lies in capturing them effectively. Without explicit feedback, it is difficult for the system to infer such preferences. Moreover, even after preferences are captured, integrating them into future query generation is not straightforward, as it requires biasing the system towards them. The Personalizer component addresses these challenges. First, users select the correct SQL query from a set of displayed options, and this feedback forms the foundation for personalization in Odin.
This feedback is then transformed into a format that influences both the Generator and Selector components of the system. Specifically, we provide these preferences in a textual format that biases the LLM’s output. The key information conveyed in the textual hints is that when a user refers to a particular entity, a specific schema component should be chosen over alternatives. Odin generates these hints by mapping the entities mentioned in the user’s question to the corresponding schema components in the selected SQL query. Through these techniques, Odin learns to associate entities in user questions with selected schema components, ensuring future queries are personalized based on past feedback. # 7.1 Generating Textual Hints Textual hints are used to guide both the Generator and Selector towards user preferences. The key information that textual hints capture is that when a user references an entity in their question, they prefer a specific schema component. We frame this as learning the correct schema linking based on the user’s question and their preferred SQL query. Given a question $Q$, the correct SQL query $(SQL_{\mathrm{True}})$, and a set of incorrect queries $(SQL_1, \ldots, SQL_T)$, the task is to map entities $E_1, \ldots, E_P$ in the question to schema components in $SQL_{\mathrm{True}}$ and the incorrect queries. Incorrect mappings help reduce the importance of wrong components, guiding the system towards accurate ones. For each entity $E_i$, Odin learns its mapping to the schema component in $SQL_{\mathrm{True}}$. For instance, if $E_1$ refers to total sales, and $SQL_{\mathrm{True}}$ maps it to the gross_sales column from the customer_sales table, while incorrect queries map it to net_sales, Odin generates the following textual hint: When referring to total sales, the user prefers the customer_sales.gross_sales column over customer_sales.net_sales.
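The entity-to-column mapping that drives this hint can be sketched as follows. Here `cal_sim` again stands in for SBERT similarity, and the hint template and helper names are illustrative, not Odin’s exact wording:

```python
# Sketch of schema linking for hint generation: map an entity to its
# best-matching column in the correct query, compare against the mappings
# chosen by incorrect queries, and render a textual hint.

def schema_map(entity, columns, cal_sim):
    """Best-matching column for an entity (highest similarity)."""
    return max(columns, key=lambda col: cal_sim(entity, col))

def make_hint(entity, correct_cols, incorrect_queries_cols, cal_sim):
    correct = schema_map(entity, correct_cols, cal_sim)
    wrong = {schema_map(entity, cols, cal_sim) for cols in incorrect_queries_cols}
    wrong.discard(correct)
    if not wrong:
        return None  # no disagreement, nothing to record
    return (f"When referring to {entity}, the user prefers {correct} "
            f"over {', '.join(sorted(wrong))}.")

sims = {("total sales", "customer_sales.gross_sales"): 0.9,
        ("total sales", "customer_sales.net_sales"): 0.8}
cal_sim = lambda e, c: sims.get((e, c), 0.1)
hint = make_hint("total sales",
                 ["customer_sales.gross_sales"],  # columns of SQL_True
                 [["customer_sales.net_sales"]],  # columns of an incorrect SQL
                 cal_sim)
print(hint)
```

A hint is only emitted when an incorrect query actually disagrees with the correct mapping, so unambiguous entities generate no hint at all.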
The key subroutine for generating textual hints is schema linking, which maps entities from the user’s question to the relevant tables and columns. Schema linking is a well-studied problem with approaches using general LLMs [8], specialized LLMs [18], and SBERT models [3, 33]. While any method could be used, we propose a heuristic SBERT-based algorithm for greater efficiency. The algorithm for generating textual hints, as detailed in Algorithm 3, links entities from the user’s question to schema components and creates textual hints summarizing the correct mappings. This process begins with three inputs: the user’s question, the correct SQL query, and a set of incorrect SQL queries. The output is a list of textual hints based on the correct schema mappings. The core function, GenerateHints, starts by extracting entities from the user’s question using the Extract_Entities function. For each entity, the SchemaMap function determines the most likely schema component in the correct SQL query. Simultaneously, the algorithm gathers mappings from incorrect SQL queries and compares them with the correct mappings, recording any discrepancies as incorrect mappings. The SchemaMap function computes the similarity between an entity and each column in the SQL query using Cal_Sim, selecting the column with the highest similarity score as the correct mapping. This ensures that each entity is accurately linked to the most relevant schema component, enhancing the precision of the generated hints. Finally, the Format_Hint function integrates the entity, the correct schema mapping, and the list of incorrect mappings into a template for generating hints. After processing all entities, the function returns the list of textual hints. # 7.2 Using Personalization Hints We leverage LLMs’ in-context learning to incorporate personalization hints during both the Generator and Selector stages. 
In the Generator stage, hints guide SQL query generation by including them in the LLM’s context, as shown in Figure 3. For the Selector, if using an LLM-based scoring function, hints are similarly added to the LLM’s context. However, with an SBERT-based scoring function, integrating textual hints is less straightforward. To address this, we fine-tune the SBERT model by adjusting its representations, ensuring that preferred schema components align closely with the corresponding entities, while non-preferred components are pushed further away. This is done using triplets of entities and schema components, such as fine-tuning SBERT to ensure total sales is closer to gross_sales and farther from net_sales.

# Algorithm 3 Schema Linking and Hint Generation
1: Input:
2: Q - User question
3: SQL_True - Correct SQL query
4: SQLs_Incorrect - List of incorrect SQL queries
5: Output:
6: hints - List of textual hints for each entity
7: Helper Functions:
8: Extract_Entities - extracts entities from the user question
9: Cal_Sim - gives similarity score between an entity and a column
10: Format_Hint - generates a textual hint using a template and the provided entity and schema components
11: function GenerateHints(Q, SQL_True, SQLs_Incorrect)
12: entities ← Extract_Entities(Q)
13: hints ← []
14: for each entity in entities do
15: correct_map ← SchemaMap(entity, SQL_True)
16: incorrect_map ← []
17: for each SQL in SQLs_Incorrect do
18: mapping ← SchemaMap(entity, SQL)
19: if mapping ≠ correct_map then
20: incorrect_map.append(mapping)
21: hint ← Format_Hint(entity, correct_map, incorrect_map)
22: hints.append(hint)
23: return hints
24: function SchemaMap(entity, SQL)
25: max_sim ← −∞
26: best_mapping ← None
27: for each col in SQL do
28: sim ← Cal_Sim(entity, col)
29: if sim > max_sim then
30: max_sim ← sim
31: best_mapping ← col
32: return best_mapping

Figure 3: Example Generator prompt with a personalization hint, showing the DB schema, a hint (e.g., when the user refers to the entity hometown, use students.birthplace), the user question, and the instruction to generate SQL that adheres to the hints.

# 7.3 Discussion
User preferences can evolve over time, a phenomenon known as preference drift. For example, a user might initially associate total sales with gross_sales, but later shift to final_sales. Such preference drift can reduce recommendation quality over time. Detecting and adjusting for preference drift is well-studied in the recommender systems literature. A common approach is to maintain a sliding window of recent user feedback [5], allowing the system to adapt to the most recent interactions. Alternatively, decay functions can gradually reduce the influence of older preferences [12]. Human-in-the-loop systems can also be employed to manually adjust preferences [34]. User preferences can sometimes be weak, where a user typically favors alternative X (e.g., gross_sales) but occasionally prefers Y (e.g., net_sales). In such cases, it is best to present both alternatives. A technique that enforces strong preferences would display only X, the favored option. Odin, however, can adapt to both scenarios. In the case of a strong preference, Odin can learn to exclude Y, as only X would be marked correct in the calibration set. If the preference is weak, both X and Y will be correct in the calibration set; while X may receive a better score during the Selector stage, the cut-off will be set such that both X and Y are included in the final selection, as either could be correct. # 8 EVALUATION We begin by detailing the experimental setup (Section 8.1) and then present the results of an extensive study comparing Odin with various baseline methods. The evaluation reveals several key findings: • Odin consistently produces a higher-quality SQL result set for user recommendations compared to the baselines.
The result sets are not only smaller but also include the correct SQL query more frequently (Section 8.2). • Even without personalization, Odin outperforms the baselines due to its Generator and Selector components (Section 8.2). • The schema masking-based Generator in Odin generates a more diverse range of SQL queries compared to both generic and NL2SQL-specific diversity-inducing techniques (Section 8.3). • Odin’s Selector significantly reduces the size of the result set while retaining the correct SQL query. The LLM-based scoring used in this stage proves more effective than SBERT-based scoring (Section 8.4). # 8.1 Experimental Setup Datasets. We use two benchmarks to evaluate our Odin system. • AmbiQT Benchmark: This benchmark is a modified version of the Spider benchmark [36] and addresses different types of ambiguities commonly found in databases (see Fig. 4). In each case, the benchmark modifies the database schema such that there are two correct SQL statements for each question. We use this benchmark to evaluate the Generator component in Odin. • Mod-AmbiQT Benchmark: The original AmbiQT benchmark is not ideal for evaluating personalization, so we created a modified version, Mod-AmbiQT. In this new benchmark, we introduce duplicate columns and tables based on the AmbiQT benchmark. Entities in the questions can map to either of the alternatives, but only one mapping is considered correct. As a result, each question in this database can have between 2 and 8 plausibly-correct SQL queries, depending on the number of entities in the question, although only one specific SQL query is designated as correct. This modification is useful for testing personalization. For example, in the concert database, we introduce artists and performers as two alternatives for the singer table. For the question “How many singers do we have?”, the two alternative SQL statements could be: “SELECT COUNT(*) FROM artists” and “SELECT COUNT(*) FROM performers”.
Out of these, only the first query is considered correct. The entity singers consistently maps to artists across all questions in the database, allowing us to evaluate personalization. The new benchmark includes 1,298, 2,148, and 626 Q/A pairs for table, column, and join ambiguity, respectively. We use this benchmark for overall system evaluation.
Figure 4: Different types of ambiguities in the AmbiQT Benchmark.
Baselines. We compare Odin against two main baselines.
• Diverse Sampling: In this baseline, we set the LLM temperature to a high value ($temp = 1.0$) to promote the generation of diverse SQL queries. The resulting SQL queries are then presented to the user.
• Forced Diversity: In this baseline, we provide the LLM with all previously generated SQL queries and instruct it to produce a new SQL query that differs from the prior ones. The generated SQL queries are then shown to the user.
• Odin: The Odin baseline comprises three modules: Generator, Selector, and Personalizer enabled by default, with various components enabled or disabled based on the specific variant. In the Selector, an LLM-based scoring function is employed unless otherwise specified.
All the above baselines are model-agnostic, meaning any LLM can be used. For our evaluation, we utilize Claude 3 Haiku. To further evaluate the effectiveness of Odin’s Generator, we compare it against six baselines from [2] that aim to induce diversity in NL2SQL generation. The goal is to capture all possible ambiguous SQL queries for a given question. First, we consider a set of naive baselines based on pre-trained language models (PLMs) and LLMs, showcasing their limited ability to generate diverse SQL statements.
1. LLM-X: a commercially available LLM specialized for coding tasks.
2. RESDSQL [16]: one of the top-performing methods on the SPIDER benchmark. We use the 3B variant of RESDSQL, which is the most powerful, for comparison purposes.
However, we disable the representation from [6], as it is orthogonal to our approach and could be used alongside it. Next, we examine baselines that incorporate common sampling techniques to promote more diverse generation. For this, we use the T5-3B checkpoint from the PICARD [23] repository, which finetunes T5-3B on the SPIDER dataset. These baselines include:
3. Beam Search (T5-3B-beam): We apply Beam Search with a width of 10 as the default decoding strategy for T5-3B.
4. Top-k Sampling (T5-3B-k): At each decoding step, we sample from the top-50 tokens using top-k sampling with $k = 50$.
5. Nucleus/Top-p Sampling (T5-3B-p): We apply top-p sampling at each decoding step, where tokens that account for 90% of the probability mass are considered, as proposed in [9].
6. Logical Beam: We include this decoding algorithm that navigates the SQL logic space by combining plan-based template generation and constrained infilling. For this, we fine-tune a version of Flan T5-3B (with a maximum input length of 512) using the Adafactor optimizer [25] (learning rate $10^{-4}$, no decay) on the AmbiQT benchmark, as described in [2].
Metrics: To evaluate the system, we use two key metrics: average accuracy over the workload ($AvgAcc$) and average number of results shown to the user ($AvgResultSize$), both defined in Section 3. $AvgAcc$ checks if there exists a SQL query within the result set that matches the execution result of the golden SQL query, and computes the average accuracy over the entire workload. On the other hand, $AvgResultSize$ measures the average number of SQL queries presented to the user.
# 8.2 Overall Evaluation
We first examine how Odin improves accuracy across different types of ambiguities. For this experiment, we use the Mod-AmbiQT benchmark. Fig.
5 illustrates the average accuracy over the workload ($AvgAcc$) of the generated results versus the average number of SQL results shown to the user ($AvgResultSize$) for three distinct ambiguity types: join ambiguity, table ambiguity, and column ambiguity. The figure compares Odin with two baselines: Sampling and Forced Diversity. For all methods, we gradually increase the number of LLM calls $K$ (specifically, we set $K$ to 1, 2, 3, 5, 7, and 10) used for generating SQL queries. As the budget increases, each method shows a higher number of SQL queries to the user, which improves accuracy. Note that for Odin, $K$ refers to the number of LLM calls used in the Generator, i.e., the Generator produces $K$ SQL queries. However, $K$ does not include LLM calls made by the Selector and Personalizer. Overall Performance: Odin consistently outperforms all other methods, followed by its variants, then Forced Diversity, and finally Sampling. Odin achieves accuracies of 71.6%, 50.3%, and 73.6% for a budget of 10 LLM calls, while displaying an average of 4.9, 4.2, and 3.9 SQL queries to the user for join, table, and column ambiguities, respectively. In contrast, Forced Diversity achieves 33.2%, 38.0%, and 58.8% accuracy across the workload. Compared to Forced Diversity, Odin shows improvements of 38%, 12%, and 15% in accuracy across the three ambiguity types while returning 2–3× fewer SQL queries. The improvement in accuracy is mainly due to the Generator and Personalizer components, while the reduction in SQL result set size is attributed to the Selector component. Thus, Odin’s result set has up to twice the chance of containing the correct SQL result while being 2–3× smaller in size than Forced Diversity.
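To make the two evaluation metrics concrete, here is a minimal sketch of execution-match accuracy ($AvgAcc$) and result-set size ($AvgResultSize$) over a toy workload. The in-memory SQLite schema, the helper names (`exec_rows`, `avg_acc_and_size`), and the workload are ours for illustration, not Odin's implementation:

```python
import sqlite3

def exec_rows(conn, sql):
    """Execute a query and return its result as an order-insensitive set of rows."""
    return frozenset(conn.execute(sql).fetchall())

def avg_acc_and_size(conn, workload):
    """workload: list of (golden_sql, candidate_sqls) pairs.
    AvgAcc: fraction of questions where some candidate's execution result
    matches the golden query's.  AvgResultSize: mean candidates shown."""
    hits, total_size = 0, 0
    for golden, candidates in workload:
        golden_rows = exec_rows(conn, golden)
        if any(exec_rows(conn, c) == golden_rows for c in candidates):
            hits += 1
        total_size += len(candidates)
    return hits / len(workload), total_size / len(workload)

# Toy database with an ambiguous pair of tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE artists (id INTEGER)")
conn.execute("CREATE TABLE performers (id INTEGER)")
conn.executemany("INSERT INTO artists VALUES (?)", [(1,), (2,)])
conn.execute("INSERT INTO performers VALUES (1)")

workload = [("SELECT COUNT(*) FROM artists",
             ["SELECT COUNT(*) FROM performers",
              "SELECT COUNT(*) FROM artists"])]
acc, size = avg_acc_and_size(conn, workload)
print(acc, size)  # → 1.0 2.0
```

Note that matching on execution results, rather than SQL text, treats syntactically different but semantically equivalent queries as the same answer.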
Impact of Personalization: We assess the impact of the Personalizer feature, which enables Odin to learn user preferences and improve performance even at $K = 1$, as shown in Figure 5. At $K = 1$, Odin with Personalizer significantly outperforms Odin without Personalizer by 30%, 8%, and 30% for join, table, and column ambiguities, respectively. This improvement is due to personalized variants better understanding user preferences and generating correct solutions on the first attempt. Personalization can increase the likelihood of finding the correct SQL query in the result set by up to 4–5× for small result set sizes.
Figure 5: Execution Match Accuracy ($AvgAcc$) versus the average number of results shown to the user ($AvgResultSize$) across three different ambiguity types for various baselines. Odin can achieve up to twice the accuracy while presenting only half the number of SQL queries to the user compared to the next best baseline.
Performance Without Personalization: Despite the advantages of Personalizer, Odin still performs effectively without it. For $K = 1$, the accuracy of Odin without Personalization is comparable to that of the baselines. However, as $K$ increases, Odin without Personalization quickly surpasses these baselines. This behavior is expected, as with only one LLM call, all non-personalized baselines generate the same initial SQL query. With a budget of 10 LLM calls, Odin without Personalization outperforms the baselines by 27%, 10%, and 8%. Thus, although Odin without Personalization starts with similar accuracy to the baselines for small result sets, it rapidly improves, achieving up to a 25% gain in accuracy as the set size increases.
Impact of the Selector: We assess the Selector component, which is designed to filter out incorrect SQL statements while keeping the correct ones, thereby improving precision without significantly affecting recall.
Comparing Odin with and without the Selector, we observe that both maintain similar accuracy, with the Selector reducing accuracy by only 1.6%, 1%, and 1.4% for the respective ambiguity types. However, Odin with the Selector displays an average of 5, 4.1, and 3.8 SQL queries per ambiguity type, compared to 10 queries generated by Odin without the Selector. Thus, the Selector component significantly reduces the number of SQL queries displayed, by up to 2–2.5×, with only a marginal loss in accuracy.
# 8.3 Effectiveness of Generator
In this experiment, we assess the limitations of traditional diversity-promoting methods in generating ambiguous SQL queries for the NL2SQL task. While these approaches excel at producing a single correct SQL query, subsequent generations often result in minor variations of the same query, missing out on the full spectrum of potential ambiguous interpretations. In contrast, our schema masking-based generation method encourages a broader range of query outputs. To demonstrate this, we conducted evaluations using the AmbiQT benchmark [2], which specifically requires generating two distinct and correct SQL alternatives for each input question. The benchmark measures two metrics: EitherInTopK, which checks if at least one of the alternatives is present in the generated set, and BothInTopK (Coverage), which evaluates whether both alternatives are present. Two SQL queries are deemed equivalent if their execution results match; thus, verifying the existence of a SQL alternative requires matching its execution results against those of the generated SQL queries. Across all evaluated baselines, the EitherInTopK accuracy remains relatively high, indicating that most methods are capable of producing at least one correct query. However, when the requirement shifts to generating both correct alternatives, the accuracy drops significantly. This trend is clearly illustrated in the results shown in Fig. 6.
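The two benchmark metrics can be sketched as membership checks over execution results; since queries are compared by execution result, each query below is represented abstractly by its result set. The function names and toy data are ours, not AmbiQT's harness:

```python
def either_in_topk(gold_results, generated_results):
    """True if at least one of the gold alternatives' execution results
    appears among the generated queries' execution results."""
    return any(g in generated_results for g in gold_results)

def both_in_topk(gold_results, generated_results):
    """True only if every gold alternative is covered (the Coverage metric)."""
    return all(g in generated_results for g in gold_results)

# Execution results stand in for the queries themselves: two gold
# alternatives, and a generated set that covers only one of them.
gold = [frozenset({(2,)}), frozenset({(1,)})]
generated = {frozenset({(2,)}), frozenset({(3,)})}
print(either_in_topk(gold, generated), both_in_topk(gold, generated))  # → True False
```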
As hypothesized, traditional diversity-promoting techniques excel at identifying one correct SQL query, evident in high EitherInTopK scores, but struggle to generate both correct alternatives, leading to lower BothInTopK scores. Our schema masking approach demonstrates a significant advantage, outperforming the next best baseline, Logical Beam, by factors of 1.9×, 1.2×, and 1.5× in terms of coverage for JOIN ambiguity, Table-Synonyms, and Column-Synonyms, respectively. Additionally, it is noteworthy that different types of ambiguities lead to varying performance across the baselines. LLMs generally handle table-synonyms better than more complex ambiguity types like JOIN ambiguities, as reflected by higher EitherInTopK and Coverage scores for table-synonyms. Logical Beam, which employs a specialized decoding strategy to encourage diversity, achieves the second-best performance across most ambiguity types. Nonetheless, the results clearly highlight the superiority of schema masking-based generation, which generates all the ambiguous queries in up to twice as many cases as the next best baseline.
# 8.4 Selector Ablation Study
The two key components of the Selector are the scoring function and the alpha value. In Fig. 7, we illustrate the trade-off points achieved by modifying the scoring function and adjusting the alpha values from 0.01 to 0.1 for each scoring function. For this experiment, we utilize two scoring functions: one based on an LLM and the other based on SBERT. Additionally, we include the maximum possible accuracy of the Selector, represented by the accuracy of Odin without the Selector.
Figure 6: Odin’s Generator results alongside various SQL generation baselines for ambiguity types such as Join, Table Synonyms, and Column Synonyms. Each question has two SQL alternatives, and we assess whether either or both are in the generated result set (EitherInTopK and BothInTopK).
While most baselines achieve high EitherInTopK accuracies by generating at least one correct alternative, Odin outperforms them by generating both alternatives in up to twice as many cases as the next best baseline.
Figure 7: Odin’s Execution Match Accuracy ($AvgAcc$) versus the average number of results shown to the user ($AvgResultSize$) across three ambiguity types, using various Selector scoring functions (LLM-based and SBERT-based). For each baseline, we vary the Selector’s alpha value from 0.01 to 0.1. Higher alpha values lead to more SQL queries being discarded by the Selector, which decreases both accuracy and result set size.
Impact of Scoring Function: An effective scoring function assigns low scores to correct SQL queries and high scores to incorrect ones, improving result pruning. For an alpha of 0.01, the LLM-based scoring displays an average of 5.3, 4.4, and 4.1 SQL queries across the three ambiguity classes, while SBERT averages 5.35, 6.1, and 5.9. The LLM-based scoring presents 1.5 to 2 fewer SQL queries than SBERT, although the difference is less pronounced in the case of join ambiguity. While we expect the LLM to outperform SBERT due to its larger model size, SBERT performs better in scenarios where the names of ambiguous tables and columns are nearly identical. For example, in a join ambiguity involving the birthplace column in the students table and the similarly named birthplace column in the student_birthplace table, SBERT assigns a similarity score close to 1.0, leading to higher scores for such ambiguous queries. However, SBERT struggles with cases involving synonyms, where the names are less similar.
Impact of Alpha: The alpha value represents the maximum allowable reduction in accuracy when pruning SQL results from the generation stage. Larger alpha values enable the Selector to prune more SQL queries, resulting in a smaller result set but lower accuracy.
Across all three ambiguity classes, increasing the alpha value correlates with a decrease in both accuracy and the number of SQL queries displayed. Higher alpha values allow for a greater drop in recall, leading to stricter thresholds and fewer SQL queries shown to the user. For an alpha value of 0.1, the LLM-based scoring achieves accuracies of 65.7%, 46.1%, and 67.2%, while the maximum accuracies are 73.2%, 51.3%, and 75.0%, respectively. Notably, the accuracy with an alpha of 0.1 is approximately 10% lower than that without the Selector, reflecting the trade-off that the Selector is designed to manage.
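One plausible reading of how alpha becomes a concrete score threshold is a conformal-style cutoff calibrated on known-correct queries, in line with the calibration-set discussion in Section 7.3. The sketch below assumes, per the scoring-function discussion above, that correct queries tend to receive low scores and that a candidate is pruned when its score exceeds the cutoff; this is our interpretation, and all names and numbers are illustrative rather than Odin's actual code:

```python
import math

def score_cutoff(correct_scores, alpha):
    """Pick a threshold so that at most an alpha fraction of known-correct
    calibration queries would be pruned (pruned = score above cutoff)."""
    s = sorted(correct_scores)
    # Conservative index of the (1 - alpha) empirical quantile.
    k = min(len(s) - 1, math.ceil((1 - alpha) * len(s)) - 1)
    return s[k]

def select(candidates, cutoff):
    """Keep every candidate SQL whose score does not exceed the cutoff."""
    return [sql for sql, score in candidates if score <= cutoff]

calib = [0.05, 0.10, 0.12, 0.20, 0.90]  # scores of correct calibration queries
cutoff = score_cutoff(calib, alpha=0.1)  # with 5 points, no correct query may be lost
candidates = [("SELECT COUNT(*) FROM artists", 0.08),
              ("SELECT COUNT(*) FROM performers", 0.15),
              ("SELECT COUNT(*) FROM stadiums", 0.95)]
print(cutoff, select(candidates, cutoff))
```

With a larger calibration set, raising alpha moves the cutoff down the sorted score list, pruning more candidates at the cost of occasionally dropping a correct one, which matches the trade-off observed in Fig. 7.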
NL2SQL (natural language to SQL) systems translate natural language into SQL queries, allowing users with no technical background to interact with databases and create tools like reports or visualizations. While recent advancements in large language models (LLMs) have significantly improved NL2SQL accuracy, schema ambiguity remains a major challenge in enterprise environments with complex schemas, where multiple tables and columns with semantically similar names often co-exist. To address schema ambiguity, we introduce ODIN, an NL2SQL recommendation engine. Instead of producing a single SQL query given a natural language question, ODIN generates a set of potential SQL queries by accounting for different interpretations of ambiguous schema components. ODIN dynamically adjusts the number of suggestions based on the level of ambiguity, and learns from user feedback to personalize future SQL query recommendations. Our evaluation shows that ODIN improves the likelihood of generating the correct SQL query by 1.5–2$\times$ compared to baselines.
[ "cs.DB", "cs.CL" ]
# 1 INTRODUCTION Many problems in software engineering involve optimization, search, or analysis in large, complex spaces [16]. Examples include selecting prioritized test cases for regression testing [56], detecting code clones in large codebases [41], and predicting defect-prone modules using historical data [14]. These tasks are computationally intensive, and classical solutions often rely on heuristics or approximation strategies [15, 56]. Quantum computing [34, 38] provides a fundamentally different computational model based on quantum mechanics. Quantum algorithms that exploit superposition, interference, and entanglement, such as Grover’s search [12] and the Quantum Approximate Optimization Algorithm (QAOA) [10], have shown advantages in domains such as combinatorial optimization. Quantum machine learning methods [7] also demonstrate potential in classification and pattern recognition tasks. These algorithmic features suggest that quantum-based approaches may help address certain software engineering problems, especially when classical methods are costly or ineffective [10, 29]. In recent years, quantum computing research has focused primarily on quantum algorithm design [29], hardware development [5, 38, 58], and the development of quantum programming languages [11, 17]. At the same time, Quantum Software Engineering (QSE) has emerged as a field that addresses the systematic development of quantum software, including specification, design, implementation, testing, and verification of programs intended to run on quantum hardware [3, 31, 37, 57]. However, while building quantum software has attracted substantial attention, using quantum computation to support classical software engineering tasks remains relatively underexplored [25, 28, 50]. In this paper, we introduce Quantum-Based Software Engineering (QBSE) as a new research direction to explore how quantum computing can support classical software engineering tasks. 
Unlike QSE, which targets quantum software development, QBSE focuses on classical software engineering challenges and investigates whether quantum techniques can improve efficiency or scalability. We suggest that QBSE offers a timely and promising perspective that complements ongoing efforts in quantum software and classical engineering automation. In the following sections, we define the scope of QBSE, outline emerging applications, and present preliminary ideas for a research agenda.
# 2 A BRIEF BACKGROUND ON QUANTUM COMPUTING
Quantum computing offers a fundamentally different model of computation, built on the principles of quantum mechanics [34]. This section introduces basic concepts that will help the reader understand the quantum techniques discussed later in this paper.
• Qubits. Unlike classical bits that are either 0 or 1, a quantum bit (or qubit) can exist in a superposition of both. The state of a qubit, written as $|\varphi\rangle$, can be expressed as a combination of two basis states: $\alpha|0\rangle + \beta|1\rangle$, where $\alpha$ and $\beta$ are complex numbers such that $|\alpha|^2 + |\beta|^2 = 1$. Here $\alpha$ and $\beta$ are the probability amplitudes, and $|\alpha|^2$ and $|\beta|^2$ give the probabilities of observing the qubit in state $|0\rangle$ or $|1\rangle$ upon measurement.
• Quantum Gates and Circuits. Quantum gates manipulate qubits through unitary operations. Common gates include the Hadamard gate (which creates superposition), the Pauli-X gate (analogous to a classical NOT), and the controlled-NOT (CNOT) gate, which is used to create entanglement. A quantum circuit consists of a sequence of such gates applied to one or more qubits.
• Entanglement. Entanglement is a quantum phenomenon in which the state of one qubit depends on the state of another, no matter how far apart they are.
When two qubits are entangled, measuring one instantly determines the state of the other. This property is crucial for quantum algorithms that rely on coordination between qubits and is typically generated using gates like CNOT.
• Measurement. At the end of a quantum computation, the qubits are measured. This process collapses each qubit into a classical value, either 0 or 1, according to the probabilities given by its amplitudes.
• Quantum Algorithms. Well-known quantum algorithms such as Grover’s search [12] and Shor’s factoring [45] use superposition, interference, and entanglement to achieve advantages over classical algorithms in specific problem domains.
For readers interested in a more in-depth treatment of these ideas, we refer to the comprehensive textbook by Nielsen and Chuang [34].
# 3 WHAT IS QUANTUM-BASED SOFTWARE ENGINEERING?
QBSE is a research direction that explores how quantum algorithms and hardware can be applied to solve problems in classical software engineering, such as test case selection, static analysis, code clone detection, and defect prediction [36]. It does not concern the development of quantum software itself. QBSE is conceptually distinct from QSE$^1$. While QSE focuses on the engineering of quantum software systems, including design, programming languages, compilation, testing, and verification [57], QBSE explores how quantum computing can be used to enhance classical software engineering tasks. Problems in classical software engineering that are combinatorial, search-based, or involve probabilistic reasoning are potential candidates for quantum support. Key tasks include testing, defect prediction, code clone detection, vulnerability analysis, and specification checking, all of which are discussed in more detail in Section 6. These problems often involve large search spaces, uncertain heuristics, or high computational cost, making them suitable for exploration with quantum methods.
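The single-qubit and entanglement concepts from Section 2 can be illustrated numerically with plain linear algebra. The following NumPy sketch (independent of any quantum SDK) checks the normalization constraint and builds a Bell state with a Hadamard followed by a CNOT:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                            # |0>
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate

psi = H @ ket0                  # equal superposition (|0> + |1>)/sqrt(2)
probs = np.abs(psi) ** 2        # |alpha|^2 and |beta|^2
print(probs)                    # ≈ [0.5 0.5]
assert np.isclose(probs.sum(), 1.0)   # normalization |alpha|^2 + |beta|^2 = 1

# CNOT applied to (H|0>) tensor |0> yields the Bell state (|00> + |11>)/sqrt(2).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
bell = CNOT @ np.kron(psi, ket0)
print(bell)   # amplitude ~0.707 on |00> and |11>, zero elsewhere
```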
QBSE examines whether quantum techniques such as QAOA, Grover-based search, quantum annealing [21], or quantum machine learning can provide computational advantages in solving these tasks. QBSE does not assume that all software engineering tasks are suitable for quantum computing. Instead, it focuses on identifying problem classes where quantum techniques may offer practical or theoretical benefits and developing methods to apply them effectively within software engineering workflows. # 4 WHY WE NEED A NEW RESEARCH DIRECTION Recent years have seen growing interest in applying quantum computing to classical software engineering tasks. While some promising early studies exist, they are often isolated and lack a shared framework, common terminology, or a unified research agenda. Defining QBSE as a distinct research direction can help address these gaps and guide future research efforts. First, QBSE clarifies the scope by focusing on applying quantum computing to classical software engineering problems rather than developing software for quantum computers (as in QSE). Second, having a defined research direction allows otherwise scattered efforts to be brought together under a common framework. This enables more systematic exploration and facilitates the development of shared benchmarks, problem formulations, and evaluation criteria. Third, a recognized direction encourages the growth of a dedicated research community. It fosters collaboration, shared infrastructure, and the creation of venues for discussion, experimentation, and publication. Finally, as quantum hardware and algorithms continue to improve, applying quantum computing to practical software engineering problems is becoming increasingly feasible. This makes it a good time to define QBSE and consider how it can guide future work in this research direction. # 5 QUANTUM TECHNIQUES This section introduces quantum computing techniques that may be useful in solving classical software engineering problems. 
These techniques are categorized into methods based on quantum search, optimization, machine learning, and annealing that have been explored or proposed in recent work. Table 1 summarizes the main classes of quantum techniques, along with representative algorithms and their hardware requirements. We then briefly review each category in the remainder of this section. While this section focuses on the technical principles of quantum methods, we also briefly mention key software engineering applications for each technique. Detailed task-specific use cases are discussed in Section 6.
# 5.1 Quantum Search Algorithms
Grover’s algorithm [12] is one of the most well-known quantum search techniques, offering a theoretical quadratic speedup for unstructured search problems. It has been formally analyzed for its optimality and extended into amplitude amplification methods [8, 9]. Quantum search methods, such as Grover’s algorithm, work by preparing a uniform superposition over all possible inputs and iteratively amplifying the amplitude of the target solution. A typical Grover iteration applies the operator:
$$ G = (2|\psi\rangle\langle\psi| - I)(I - 2|x_t\rangle\langle x_t|), $$
where $|x_t\rangle$ is the desired solution and $|\psi\rangle$ is the initial uniform state. After $O(\sqrt{N})$ iterations, the probability of measuring the correct result is maximized.
Vol. 1, No. 1, Article . Publication date: June 2025.
Table 1. Summary of Quantum Techniques Relevant to QBSE
Note: The first three rows summarize algorithmic approaches, while the final row describes a hardware-specific optimization method based on quantum annealing.
In software engineering, quantum search algorithms have been explored in problems where large solution spaces are a bottleneck. Example scenarios include static analysis [40] and finite-state machine (FSM) property checking [13].
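The Grover iteration above is easy to simulate as plain matrix arithmetic for a toy search space. The following NumPy sketch builds the oracle and diffusion operators explicitly and applies roughly (pi/4)·sqrt(N) iterations; the problem size and target index are arbitrary choices for illustration:

```python
import numpy as np

N = 8          # search space size (3 qubits)
target = 5     # index of the marked item |x_t>

psi = np.full(N, 1.0 / np.sqrt(N))                 # uniform superposition |psi>
oracle = np.eye(N)
oracle[target, target] = -1.0                      # I - 2|x_t><x_t|
diffusion = 2.0 * np.outer(psi, psi) - np.eye(N)   # 2|psi><psi| - I
G = diffusion @ oracle                             # one Grover iteration

state = psi.copy()
iters = round(np.pi / 4 * np.sqrt(N))              # ~O(sqrt(N)) iterations (2 here)
for _ in range(iters):
    state = G @ state

probs = state ** 2
print(int(np.argmax(probs)), round(float(probs[target]), 3))  # → 5 0.945
```

After just two iterations, the marked item is measured with probability near 0.95, versus 1/8 for a uniform guess; a classical scan would need about N/2 probes on average.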
# 5.2 Quantum Optimization Algorithms
Quantum optimization techniques aim to address discrete optimization problems that are computationally intensive for classical approaches, especially those involving large or combinatorial search spaces. Among the most studied techniques is the Quantum Approximate Optimization Algorithm (QAOA) [10], a hybrid quantum-classical method designed for near-term quantum hardware. QAOA is particularly suited for problems formulated as Quadratic Unconstrained Binary Optimization (QUBO), a standard encoding for many NP-hard problems [23]. QAOA alternates between cost and mixing Hamiltonians, expressed as:
$$ |\psi(\gamma, \beta)\rangle = \prod_{j=1}^{p} e^{-i\beta_j H_M} e^{-i\gamma_j H_C} |\psi_0\rangle, $$
where $|\psi_0\rangle$ is the initial state and $(\gamma, \beta)$ are variational parameters to be optimized. A related formulation, Quadratic Unconstrained Directed Optimization (QUDO), extends QUBO by allowing directed dependencies among decision variables, enabling more expressive modeling in certain structured optimization problems. Another method, the Variational Quantum Eigensolver (VQE) [35], was initially developed for quantum chemistry problems and has since been adapted for broader optimization tasks. Both QAOA and VQE are compatible with noisy intermediate-scale quantum (NISQ) devices [38], making them practical candidates for near-term experiments. In software engineering, quantum optimization methods have been investigated for applications such as test case optimization [53, 54].
# 5.3 Quantum Machine Learning
Quantum machine learning (QML) explores how quantum computing might enhance learning models by leveraging quantum states and operations to increase expressiveness and computational capacity [7, 44].
A common approach involves using parameterized quantum circuits [42] that encode classical input data in quantum states and apply trainable transformations. One such model is the Quantum Neural Network (QNN), which uses layers of quantum gates to emulate the behavior of classical neural networks in a Hilbert space [6]. Another approach, the quantum support vector machine (QSVM), maps the classical input to a high-dimensional quantum feature space using circuit-based kernels [39].
Table 2. Mapping Quantum Techniques to Classical Software Engineering Tasks
A typical QNN applies a unitary transformation $U(\theta)$ to an input state $|\psi_0\rangle$, where $\theta$ represents trainable parameters optimized via classical feedback:
$$ |\psi(\theta)\rangle = U(\theta)|\psi_0\rangle $$
The final state is then measured to compute output probabilities, which are used to evaluate loss functions and guide the training process. In software engineering, QML methods have been explored for tasks such as defect prediction [32] and vulnerability detection [59].
# 5.4 Annealing-Based Optimization
Annealing-based optimization refers to a class of quantum techniques that use adiabatic quantum evolution to solve combinatorial optimization problems [2, 21]. Unlike gate-based quantum algorithms (e.g., QAOA), these approaches are typically implemented on quantum annealing hardware such as D-Wave [20], which encodes binary variables using physical qubits and evolves the system toward a low-energy state corresponding to an approximate solution. A typical formulation involves minimizing an Ising Hamiltonian:
$$ H = \sum_i h_i \sigma_i^z + \sum_{i<j} J_{ij} \sigma_i^z \sigma_j^z, $$
where $\sigma_i^z$ denotes the Pauli-Z operator on qubit $i$, $h_i$ are local bias terms, and $J_{ij}$ represent coupling strengths between qubits.
The system starts in a ground state of a simple initial Hamiltonian and evolves toward the problem Hamiltonian over time. In software engineering, annealing-based methods have been explored for test case generation [4], test suite minimization [55], and regression test optimization [47]. Together, these techniques offer promising tools for addressing computationally intensive problems in classical software engineering. As quantum hardware and tools mature, it may become increasingly feasible to integrate such approaches into practical development workflows. While these techniques show early promise, many remain at the conceptual or prototype level, and their integration into practical software engineering workflows is still evolving. # 6 EMERGING APPLICATIONS IN QUANTUM-BASED SOFTWARE ENGINEERING In this section, we present classical software engineering tasks that may benefit from quantum computing techniques. These tasks often involve large search spaces, combinatorial complexity, or high-dimensional data representations, making them suitable candidates for quantum optimization, search, and learning methods. We focus on organizing emerging studies that concretely demonstrate how quantum computing can be applied to classical software engineering problems. We categorize the literature by task, including testing, defect prediction, code analysis, vulnerability detection, and specification checking, and highlight the quantum techniques explored in each case. While these efforts remain exploratory, they form an initial base for QBSE. # 6.1 Test Case Optimization and Minimization Selecting a minimal or high-priority subset of test cases that satisfies given coverage criteria is a well-known combinatorial problem [56]. Classical heuristics often struggle with scalability as test suites grow. Quantum optimization techniques such as QAOA and quantum annealing have been proposed as alternatives to explore large solution spaces more efficiently. 
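To make the QUBO framing concrete, the sketch below encodes a toy test-suite minimization instance as a QUBO (each requirement is covered by at most two tests, so the uncovered-requirement penalty stays quadratic) and solves it by brute force; an annealer or QAOA would replace the exhaustive loop at realistic sizes. The coverage data and penalty weight are invented for illustration:

```python
import itertools
import numpy as np

P = 10.0   # penalty weight for each uncovered requirement
n = 4      # candidate tests t0..t3

# Each requirement lists the tests that cover it.  With at most two
# covering tests, the uncovered penalty P*(1 - x_i)*(1 - x_j) is
# quadratic, so the whole objective fits in a QUBO matrix Q.
requirements = [(0,), (0, 1), (1, 2), (2, 3), (3,)]

Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] += 1.0                 # unit cost for running test i
for req in requirements:
    if len(req) == 1:
        (i,) = req
        Q[i, i] -= P               # P*(1 - x_i), dropping the constant
    else:
        i, j = req
        Q[i, i] -= P               # P*(1 - x_i - x_j + x_i*x_j)
        Q[j, j] -= P
        Q[i, j] += P

# Brute-force minimization of x^T Q x over binary vectors.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)   # a minimal covering suite of 3 tests
```

The minimizer selects three tests that together cover all five requirements; handing the same matrix Q to annealing hardware or a QAOA circuit is exactly the encoding step the cited studies perform.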
Several studies have applied these methods to optimize test cases [53, 54]. QUBO formulations and Grover’s algorithm have also been used for test suite minimization [18, 55]. In addition, quantum-enhanced search has been explored to generate test cases [13], and Grover-based search and quantum counting [8] have been used to accelerate dynamic regression testing [27]. # 6.2 Defect and QoS Prediction Defect prediction aims to identify fault-prone modules using historical data, software metrics, or graph representations [22]. Quantum machine learning models such as QNNs and QSVMs are promising tools for this task, particularly in small or unbalanced datasets where quantum encodings may help improve generalization [6, 39]. Recent studies suggest that QSVMs may outperform classical SVMs in certain software defect datasets [32]. Additional experiments show that QNNs may require fewer resources than QSVMs in certain settings [33], highlighting differences in model performance that depend on the architecture and data regime. Quantum machine learning has also been applied to performance prediction tasks. For example, Wang et al. [52] utilized Quantum Extreme Learning Machines (QELM) [30] to estimate system-level quality attributes, such as response time, in industrial control systems, demonstrating the potential of quantum models beyond defect prediction. # 6.3 Code Clone Detection and Static Analysis Code clone detection is important for software maintenance and refactoring [41]. It involves identifying syntactic or semantic similarities among code fragments. Classical techniques, especially those using tokens, AST, or graph-based comparison, are computationally expensive on large codebases. Quantum annealing techniques such as QUBO and QUDO have been proposed to reduce detection cost [19]. 
Similarly, static analysis tasks such as reachability and transitive closure computation can be enhanced by Grover’s algorithm [40], which offers quadratic speedups in certain search-based formulations.

# 6.4 Vulnerability Detection and Software Security

Quantum-enhanced models have been investigated for software vulnerability detection, especially in source code embeddings and token classification tasks. QNNs, QSVMs, and quantum embedding models [6, 43] have been used to classify vulnerable patterns. For example, Zhou et al. [59] trained a QNN on tokenized code, and Song et al. [46] proposed a recurrent quantum embedded neural network (RQENN) to model context-sensitive patterns. In addition, quantum classifiers have been applied to secure software supply chains. Masum et al. [26] focused on detecting software supply chain attacks by analyzing metadata and dependency features with quantum machine learning models. Akter et al. [1] addressed supply chain vulnerability detection and showed that quantum models outperform classical baselines in small-sample settings.

# 6.5 Specification Checking and Component Synthesis

Formal verification tasks, such as property checking, can benefit from quantum search. Grover-based algorithms have been applied to reachability checking in finite-state machines [13], including variations that handle unknown target counts. Quantum-enhanced search has also been proposed to synthesize components from libraries based on input-output constraints [13], although these approaches remain at the proof-of-concept stage and have not yet been validated in realistic settings.

# 6.6 Positioning and Summary of Emerging QBSE Applications

Table 2 summarizes the software engineering tasks discussed above, the quantum techniques applied to them, and illustrative references.
While many applications remain in the early stages, the reviewed work suggests that QBSE is a promising direction with growing interest and early experimental support. In addition to these task-specific studies, several broader efforts have attempted to map quantum algorithm classes to software engineering challenges. Miranskyy et al. [28] outlined how eight classes of quantum algorithms, including Grover’s search, quantum SAT solvers, quantum linear systems, and quantum walks, could be applied to various phases of software development. Their work offers a conceptual roadmap but does not examine specific implementations. Mandal et al. [25] reviewed existing research applying quantum computing to software engineering and categorized it into areas such as optimization, search, and machine learning. Their paper emphasized potential integration workflows but did not systematically organize empirical studies by task type. Wang et al. [50] focused on the role of Quantum Artificial Intelligence (QAI), particularly quantum optimization and quantum machine learning, in software engineering. Their paper outlined challenges such as algorithm design, noise handling, and problem representation, and suggested potential research directions. While this work offers valuable insights into the role of quantum AI within software engineering, its focus represents only one aspect of the broader QBSE landscape. Compared with these works, our paper aims to help establish QBSE as a distinct research direction by systematically organizing and interpreting early applications of quantum computing to classical software engineering tasks. By highlighting the tasks, techniques, and emerging evidence in a structured manner, we seek to define the scope and significance of QBSE within the broader software engineering landscape.

# 7 A PRELIMINARY RESEARCH AGENDA

QBSE is still in its early stages. To support its development, we outline a preliminary agenda of near-term research directions.
• Problem reformulation and suitability analysis. Not all software engineering problems are suitable for quantum methods. A good starting point is to identify problems with features such as combinatorial complexity, discrete search spaces, or high-dimensional input. Such problems can sometimes be reformulated into quantum-friendly representations, including QUBO [23], Ising models [10], or parameterized quantum circuits (PQCs) [35].
• Design of quantum-assisted methods. Once suitable formulations are in place, the next step is to develop quantum-assisted methods that integrate quantum algorithms into classical software engineering workflows. These may include quantum routines for tasks such as test case selection, defect prediction, or code clone detection, potentially embedded within larger software engineering pipelines, although such integrations remain at an early stage.
• Empirical evaluation and benchmarking. Empirical evaluation is essential to assess the value of QBSE approaches. This involves defining benchmark tasks, selecting meaningful baselines, and measuring performance on both quantum hardware and simulators. Both solution quality and scalability should be considered.
• Tool and framework development. Prototype tools and reusable components are important for reproducibility and adoption. Examples include wrappers for existing tools, libraries for encoding quantum models, and standard interfaces for hybrid execution environments.
• Cross-disciplinary collaboration. QBSE sits at the intersection of quantum computing and software engineering, so interdisciplinary collaboration is key. Research teams should combine their expertise to ensure that quantum techniques are applied effectively while keeping software engineering goals at the forefront.

This agenda is intended to guide early work on QBSE. As the field progresses, more specific research topics and technical challenges will naturally emerge.
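The reformulation step in the first bullet can be made concrete with a hedged sketch: below, minimal test selection is encoded as a QUBO (an exact-cover variant, where each requirement must be covered exactly once), and exhaustive enumeration stands in for the quantum annealer. The `build_qubo` helper, penalty value, and toy data are our own assumptions.

```python
from itertools import product

def build_qubo(coverage, n_tests, penalty=5.0):
    """Build an upper-triangular QUBO matrix for exact-cover test selection.

    Objective: minimize the number of selected tests, with a quadratic
    penalty forcing each requirement to be covered exactly once:
        E(x) = sum_i x_i + P * sum_r (1 - sum_{i covers r} x_i)^2.
    """
    reqs = sorted(set().union(*coverage.values()))
    Q = [[0.0] * n_tests for _ in range(n_tests)]
    offset = 0.0
    for i in range(n_tests):
        Q[i][i] += 1.0                        # cost of including test i
    for r in reqs:
        covering = [i for i in range(n_tests) if r in coverage[i]]
        offset += penalty                     # constant term of (1 - sum x)^2
        for a, i in enumerate(covering):
            Q[i][i] -= penalty                # -2P x_i + P x_i^2 = -P x_i
            for j in covering[a + 1:]:
                Q[i][j] += 2.0 * penalty      # 2P x_i x_j cross terms
    return Q, offset

def brute_force(Q, offset):
    """Exhaustive minimizer standing in for the quantum annealer."""
    n = len(Q)
    def energy(x):
        e = offset
        for i in range(n):
            if x[i]:
                e += Q[i][i]
                for j in range(i + 1, n):
                    e += Q[i][j] * x[j]
        return e
    return min(product((0, 1), repeat=n), key=energy)

cov = {0: {"a", "b"}, 1: {"b"}, 2: {"c"}, 3: {"c", "d"}}
Q, off = build_qubo(cov, 4)
print(brute_force(Q, off))  # → (1, 0, 0, 1), i.e., tests 0 and 3
```

An annealer would search the same energy landscape; the value of the reformulation is that the matrix `Q` is exactly the problem encoding such hardware accepts.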
Quantum computing has demonstrated the potential to solve computationally intensive problems more efficiently than classical methods. Many software engineering tasks, such as test case selection, static analysis, code clone detection, and defect prediction, involve complex optimization, search, or classification, making them candidates for quantum enhancement. In this paper, we introduce Quantum-Based Software Engineering (QBSE) as a new research direction for applying quantum computing to classical software engineering problems. We outline its scope, clarify its distinction from quantum software engineering (QSE), and identify key problem types that may benefit from quantum optimization, search, and learning techniques. We also summarize existing research efforts that remain fragmented. Finally, we outline a preliminary research agenda that may help guide the future development of QBSE, providing a structured and meaningful direction within software engineering.
# 1 Introduction

In recent years, we have witnessed rapid advancements in visual generation and its tremendous application potential. Diffusion models [43, 40, 6, 39, 22] have elevated the quality of visual generation to impressive levels while enabling versatile conditional control. Meanwhile, autoregressive approaches [42, 49, 50, 58] have gradually demonstrated comparable performance and the potential for seamless integration with large language models (LLMs), offering a unified framework for multimodal generation. Early diffusion models [12, 48] operated directly in pixel space, but their high computational cost motivated subsequent works [43, 40, 39] to shift the diffusion process into the latent space of pretrained variational autoencoders (VAEs) [19, 43]. This approach achieves a near-optimal trade-off between computational efficiency and detail preservation. In contrast to diffusion-based methods, which decompose image generation into iterative denoising steps, autoregressive models [7, 42] generate visual content sequentially while achieving comparable or even superior [49, 51] visual quality. Their inherent compatibility with LLMs further positions them as promising candidates for unified multimodal generation frameworks [25, 50, 58]. For autoregressive visual generation, VQVAE [52] first introduced discrete latent representations of images, modeling their distribution autoregressively. VQGAN [7] significantly improved reconstruction quality, enabling efficient high-resolution image synthesis via transformers or LLMs. Both image generation approaches have been successfully extended to the video generation domain [13, 63, 21, 30]. However, encoding images or videos into a latent space typically incurs information loss, particularly due to vector quantization (VQ) from continuous features to discrete tokens. This loss fundamentally constrains the upper bound of generation fidelity.

Figure 1: Comparison of Different Metrics with Human Judgments.
In each case, previous metrics (PSNR, SSIM, LPIPS) demonstrate discrepancies with human assessments, whereas our proposed face similarity and text accuracy effectively reflect the reconstruction quality. The reference image represents the original, while Patch 0 and Patch 1 show reconstruction results from different visual tokenizers. The same regions are cropped from the complete images for visualization. There have been several classical methods for evaluating the quality of reconstructed images. Traditional pixel-level metrics, such as PSNR, measure pixel-wise intensity differences, emphasizing global fidelity but disregarding perceptual relevance. SSIM [56] and FSIM [68] further incorporate luminance, contrast, structural, and edge-texture information, but they are more sensitive to noise. These pixel-level metrics typically capture only a few aspects of image quality and fail to measure similarity in a way that aligns with human judgment. To address these limitations, feature-based metrics such as FID [11], IS [45], and LPIPS [69] have emerged to assess the semantic and distributional consistency of reconstructed images using features from pretrained networks. While these feature-based metrics better approximate human perception than pixel-level ones, their reliance on pretrained models makes evaluation unreliable when reconstructed images deviate from the pretraining distribution, as illustrated in Fig. 1. Since human judgments of similarity depend on high-order, context-dependent image structures that may not conform to feature distance metrics, we consider certain semantically critical image contents, particularly faces and text, to be more closely tied to human assessment than generic natural image characteristics. Compared to other visual contents, the detection and evaluation of faces and text have been extensively studied, resulting in mature toolchains [35, 16].
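The pixel-level weakness described above is easy to reproduce: PSNR depends only on total squared error, so wiping out a small text-sized patch and spreading the same error thinly over the whole image score identically. The synthetic images below are our own illustration, not data from the benchmark.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shaped arrays."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

# Corruption A: wipe out a small 8x8 "text" patch entirely.
local = ref.copy()
local[0:8, 0:8] = 0.0

# Corruption B: spread the *same total squared error* thinly over all pixels.
err = np.sum(ref[0:8, 0:8] ** 2)
noise_amp = np.sqrt(err / ref.size)
diffuse = ref + noise_amp

# Both corruptions have identical MSE, hence identical PSNR.
print(round(psnr(ref, local), 2), round(psnr(ref, diffuse), 2))
```

Only corruption A destroys a localized, semantically critical region, yet PSNR cannot tell the two apart; this is the gap the face- and text-aware metrics are meant to close.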
Moreover, unlike subtle pixel-level variations, text readability and identity preservation are far more perceptually critical to human observers. Pixel-level metrics fail to penalize semantically critical errors (e.g., misaligned strokes in text), while feature-based metrics lack the granularity to assess domain-specific attributes (e.g., facial symmetry or character recognition accuracy). This gap highlights the need for a tailored benchmark that integrates task-aware evaluation to complement existing metrics. To address this gap, we propose the Visual Tokenizer Benchmark (TokBench). Specifically, we curated 12,398 images and 403 video clips (51,590 frames) rich in faces and text from publicly available datasets, encompassing both natural scenes and document contexts, with balanced scale distributions for both facial and text content. To assess text reconstruction quality, we employ an OCR model to determine whether the reconstructed text remains accurately recognizable, subsequently computing the T-ACC (Text Recognition Accuracy) and T-NED (Text Normalized Edit Distance) metrics. For facial content, we leverage a face recognition model to extract facial features and compute the F-Sim (Facial Similarity) metric, quantifying identity preservation. For reconstructed videos, we perform a frame-by-frame evaluation and report the average results. These metrics offer an intuitive quantification of a visual tokenizer’s ability to retain the most visually challenging content types—areas where current evaluation methods frequently underperform. Leveraging this benchmark, we conducted a comprehensive evaluation of existing visual tokenizers and VAEs, demonstrating that the proposed metrics serve as a meaningful complement to conventional reconstruction quality standards.
In summary, the main contributions of this paper are as follows:

• We reveal that conventional metrics exhibit inconsistencies with human evaluation when assessing the reconstruction quality of human-sensitive content such as text and faces.
• We propose TokBench, comprising a diverse image dataset rich in faces and text, along with a lightweight evaluation pipeline that requires only 2GB of VRAM and 4 minutes.
• We conduct comprehensive evaluations of existing image tokenizers and VAEs on face and text reconstruction, and further extend this assessment to video tokenizers to explore the upper bounds of visual generation models.

# 2 Related Work

# 2.1 Visual Tokenizers and VAEs

Image Since Latent Diffusion Models [43] achieved promising results by learning visual generation in a VAE’s latent space, the study of continuous and discrete visual latent spaces has played a critical role in visual generation, with increasing exploration focused on tokenizer design. The conventional VAE [4, 19] provided both theoretical and empirical evidence for the advantages of learning a latent representation that a learned generator decodes back to images. [52] introduced the Vector Quantised Variational Autoencoder (VQVAE), which learns discrete representations of images and models their distribution autoregressively. VQGAN [7] further enhances the visual reconstruction capability of VQVAE by incorporating a GAN loss and demonstrates the potential of autoregressive models in generating high-resolution images. Visual AutoRegressive modeling (VAR) [51] redefined autoregressive learning on images as a coarse-to-fine next-scale prediction. UniTok [29] explores introducing semantic supervision when training discrete visual tokens, enriching their semantic information to further improve the understanding and generation capabilities of unified models [50, 58].
Meanwhile, VAVAE [64] and REPA [67] address the high-dimensional challenges of continuous VAE spaces by leveraging semantic space supervision, while TokenBridge [55] and Layton [62] explore the communication and fusion between continuous and discrete tokens. In a different vein, MAGVIT-v2 [65], FSQ [34], and BSQViT [71] propose lookup-free quantization, presenting an alternative approach that bypasses traditional lookup mechanisms. TiTok [66] performs 2D-to-1D distillation, compressing the number of tokens used to represent the same image.

Video Videos contain both spatial and temporal information, making their data volume substantially larger than that of images. Early video models typically employed image VAEs or VQVAEs [13] directly for generation, but spatial-only modeling often produces jittery outputs. Some approaches [24, 73] attempted 3D VAEs for temporal compression, yet limited latent channels still yielded blurry and unstable results. Recent methods [30, 21, 63] utilizing 3D Causal VAEs have demonstrated superior video encoding performance.

# 2.2 Evaluation of Image Reconstruction

Pixel-level Evaluation Traditional low-level metrics assess reconstruction quality through pixel-wise comparisons. Mean Squared Error (MSE) quantifies average squared intensity differences, while Peak Signal-to-Noise Ratio (PSNR) extends this concept logarithmically as the ratio between the maximum possible signal power and the power of the corrupting noise. The structural similarity index measure (SSIM) [56] models human perception through luminance, contrast, and structural comparisons, capturing important information about the structure of objects in the visual scene. The Feature Similarity Index (FSIM) [68] measures the similarity between two images based on their low-level features. HDR-VDP [31] specializes in varying luminance conditions, predicting both quality degradation and change visibility.
Feature-level Evaluation Previous pixel-level metrics are simple, shallow functions that fail to account for many nuances of human perception. Advanced feature-level metrics leverage deep learning for semantic evaluation. Learned Perceptual Image Patch Similarity (LPIPS) [69] compares deep features from pretrained networks to better align with human judgment. Fréchet Inception Distance (FID) [11] measures distributional similarity between generated and real images using Inception-v3 features, while Inception Score (IS) [45] evaluates both diversity and recognizability through classifier predictions. These high-level metrics address limitations of pixel-based methods but require careful interpretation when evaluating out-of-distribution samples. Furthermore, these features typically represent high-dimensional global characteristics, so small-scale objects such as text and faces have relatively minor influence on them. As illustrated in Figure 1, previous metrics fail to reflect the reconstruction quality of small-scale objects, a critical aspect that modern high-quality visual generation models particularly focus on.

# 2.3 Text and Face Datasets

Text Data Texts are representative texture elements in images, and unsatisfactory generation quality seriously affects their readability. Previous datasets for text recognition focus on cropped text regions, restricting the diversity of text scales and image scenarios. Therefore, we collect data from text spotting datasets [18, 17, 3, 47, 27], which are annotated with the locations and transcriptions of texts. Additionally, some datasets for key information extraction [15, 38] and document-oriented VQA [33, 32] also provide such annotations. In this work, we collect text data from 8 different text image datasets that vary in fonts, styles, scales, and backgrounds, enriching the comprehensiveness of our benchmark.
In addition, text spotting in videos has received growing attention recently, and related datasets [17, 60] have been released. They allow us to further extend our assessment to video tokenizers. We unify the text representations for consistent evaluation.

Face Data For evaluating face generation quality, we considered datasets originally curated for two primary face-related tasks: facial landmark detection and face recognition. Key datasets for facial landmark detection include WFLW [59], 300W [44], and AFLW [20]. For face recognition, frequently utilized datasets include LFW [14], CALFW [72], and CFPW [46], among others. However, most of these datasets were deemed unsuitable for our benchmark since they consist predominantly of single-face portrait images, which do not accurately represent the distribution of faces in “in-the-wild” scenarios. Consequently, we selected the WFLW dataset, which is composed of images captured in naturalistic, unconstrained environments that often contain multiple faces. For video data, we observe that many video understanding datasets contain abundant scenes and faces. For instance, VideoMME [10], MVBench [23], and MMBench-Video [9] are popular benchmarks for evaluating multimodal video understanding in VLLMs, and they include numerous facial segments that can serve as our data pool.

# 3 TokBench

Our goal is to provide a novel benchmark specifically designed to evaluate the reconstruction quality of two critical visual elements: text and human faces in images. To establish this benchmark, we first curate a diverse collection of images rich in textual and facial content, systematically categorized by their spatial scales within the images. Then we incorporate specialized evaluation metrics that assess: (1) the legibility of reconstructed text and (2) identity preservation in reconstructed faces.
As a result, TokBench provides a targeted evaluation of discrete and continuous tokenizers’ capability in reconstructing faces and text, thereby probing the upper bound of high-quality visual generation. Furthermore, we curate videos containing rich text and faces to extend TokBench to video tokenizers and VAEs.

Figure 2: Statistics and Sample Diversity of TokBench-Image. TokBench features a balanced instance-scale distribution with particular emphasis on small-scale face and text instances, presenting significant challenges for existing visual reconstruction approaches.

# 3.1 Image Data Curation

# 3.1.1 Text Data Curation

Data Collection We first collect text images from eight existing open-source datasets for diversity. Specifically, they include scene text datasets, i.e., ICDAR 2013 [18], IC15 [17], Total-Text [3] and TextOCR [47], and document datasets, i.e., CORD [38], SROIE [15], InfographicVQA [32] and DocVQA [33]. We use their validation or accessible test sets to build our benchmark. For datasets that are not divided into training and test sets, we sample from them. These datasets provide word-level annotations that contain both the position and transcription of each text instance, allowing us to perform consistent evaluations. Next, we uniformly use the horizontal bounding box $\{ x_{i}^{t}, y_{i}^{t}, w_{i}^{t}, h_{i}^{t} \}$ to represent the $i$-th text region.

Difficulty Rating We consider the relative scale of texts as the major factor distinguishing the reconstruction difficulty of the evaluated data. Due to the large variation in scales and character lengths of texts, we focus on the character-level text scale for measurement, which can be approximately derived from annotations. Given a text image $I^{t} \in \mathbb{R}^{H \times W \times 3}$, we assume that characters are uniformly distributed in the bounding box for most texts. Thus, we approximate the relative scale of the $i$-th text by normalizing the scale of one character by the maximum side length of the image:

$$ r_{i}^{t} = \frac{max(h_{i}^{t}, w_{i}^{t})}{max(H, W) \times N_{i}^{c}}, $$

where $N_{i}^{c}$ is the number of characters in the $i$-th text instance.

Figure 3: Overview of the evaluation process of TokBench.

Figure 4: Comparison between reconstructed images (right) and original images (left) under different T-ACC and F-Sim metrics. Higher metric values indicate reconstructed images that more closely resemble the original. (Zoom in for better comparison.)

Data Cleaning The feasibility of reconstructing tiny regions should be considered. Meanwhile, the assessment of the reconstruction quality of text images is based on a pretrained text recognition model $\mathcal{M}_{t}$, requiring the predictions of $\mathcal{M}_{t}$ to be completely accurate on the original images. To ensure the validity of the evaluation, we remove extremely tiny cases and unrecognized instances that would cause ambiguity, with the following steps: 1) We assume the minimum number of pixels needed to clearly represent a character is $5 \times 5$. Hence, we remove instances with $min(h^{t}, w^{t}) < 5$ or $r^{t} < 0.005$.
2) We filter out instances containing characters outside the vocabulary of the recognizer, as well as regions that contain only a single special symbol, avoiding ambiguous and invalid recognition results. 3) We keep only the text instances that can be correctly recognized by $\mathcal{M}_{t}$, guaranteeing that performance degradation in the benchmark is mainly caused by poor reconstruction. Afterward, we keep the images that contain at least one valid text instance. As a result, the text set in TokBench consists of 6,000 images and 76,126 valid text instances, as shown in Fig. 2. The multiple sources enrich the diversity of text fonts, styles, scales, and backgrounds. Each instance is annotated with $\{ x_{i}^{t}, y_{i}^{t}, w_{i}^{t}, h_{i}^{t}, r_{i}^{t}, \hat{s}_{i} \}$, where $\hat{s}_{i}$ is the ground-truth transcription. Using $r_{i}^{t}$, we empirically set 3 difficulty levels (Small, Medium, and Large). For reconstruction resolution $L$, the lowest valid scale in evaluation is no less than $5/L$, so that text regions remain valid as described in the data cleaning step. The scale range for each level is given in the Appendix.

# 3.1.2 Face Data Curation

For our facial data source, we select WFLW [59] due to its uniform distribution of face scales and diverse scenarios. From the original 6,551 images, we first filter out all images with aspect ratios exceeding 2, retaining 6,398 valid images containing 9,739 ground-truth (GT) annotated face instances. Since many images contain unannotated faces, we perform additional face detection using the antelopev2 model from insightface [16], keeping only detections with confidence scores above 0.5. For the detected faces, we calculate each face’s scale by dividing the longer side of the bounding box by the longer side of the image, retaining only faces with scales greater than 0.05 as supplementary GT data.
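The two scale computations above, the character-level text scale $r_i^t$ with its 0.005 floor and the face scale with its 0.05 floor, can be sketched as follows; the function names are ours, but the formulas and thresholds follow the text.

```python
def text_scale(w, h, img_w, img_h, n_chars):
    """r_i^t = max(h, w) / (max(H, W) * N_c): approximate per-character scale."""
    return max(h, w) / (max(img_h, img_w) * n_chars)

def keep_text(w, h, img_w, img_h, n_chars, min_px=5, min_scale=0.005):
    """Cleaning step 1: drop tiny boxes and sub-threshold character scales."""
    return min(h, w) >= min_px and text_scale(w, h, img_w, img_h, n_chars) >= min_scale

def face_scale(box_w, box_h, img_w, img_h):
    """Longer side of the face box divided by the longer side of the image."""
    return max(box_w, box_h) / max(img_w, img_h)

# A 100x20-pixel word of 5 characters, and a 120x150 face, in a 1000x800 image.
print(text_scale(100, 20, 1000, 800, 5),      # → 0.02
      keep_text(100, 20, 1000, 800, 5),       # → True
      face_scale(120, 150, 1000, 800) > 0.05) # → True (scale 0.15)
```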
This process yields 17,700 valid target faces, on which we evaluate the similarity between reconstructed faces and the original facial features.

# 3.2 Evaluation Protocols

The overall evaluation pipeline is illustrated in Fig. 3. Text and face images are first reconstructed by the given visual tokenizer $\tau$. For the reconstructed text images, each valid text region is cropped according to the ground truth (GT). The cropped regions are fed into a pretrained text recognition model $\mathcal{M}_{t}$ to obtain transcription predictions, which are then evaluated against the corresponding GT using the T-ACC and T-NED metrics. Similarly, for the face images, each face area is cropped by GT. The corresponding areas in the original and reconstructed images are encoded by a pretrained face recognition model $\mathcal{M}_{f}$. The encoded feature vectors are compared via F-Sim to evaluate the quality of the generated face.

Text We choose the recent PARSeq [2] as the pretrained recognizer for its good balance between accuracy and efficiency. We use the implementation in docTR [35], an OCR toolbox that can be easily installed. Following the metrics in text recognition tasks, the results are evaluated by the text recognition accuracy (T-ACC) and Normalized Edit Distance (T-NED) [70] between the recognition result $s_{i}$ and the ground truth $\hat{s}_{i}$. Since our goal is to assess reconstruction quality, we distinguish between uppercase and lowercase letters because their appearances differ and should be maintained after a decent reconstruction. In our T-ACC metric, a prediction is regarded as a true positive only when the predicted word is exactly the same as the GT.
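Both text metrics are straightforward to state in code: T-ACC is case-sensitive exact word match, and T-NED normalizes the Levenshtein distance by word length and averages over instances (we assume the per-instance mean form here). A minimal sketch:

```python
def levenshtein(a, b):
    """Edit distance D(s, s_hat) via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def t_acc(preds, gts):
    """Case-sensitive exact-match accuracy over recognized words."""
    return sum(p == g for p, g in zip(preds, gts)) / len(gts)

def t_ned(preds, gts):
    """1 - mean_i D(s_i, s_hat_i) / max(l_i, l_hat_i)."""
    total = sum(levenshtein(p, g) / max(len(p), len(g), 1)
                for p, g in zip(preds, gts))
    return 1.0 - total / len(gts)

preds, gts = ["FJNAL", "HAVE"], ["FINAL", "HAVE"]
print(t_acc(preds, gts), t_ned(preds, gts))  # → 0.5 0.9
```

The one-character error fails T-ACC for that word outright, while T-NED only loses the fraction of characters that changed, which is why the two metrics complement each other.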
T-NED further provides a more fine-grained analysis at the character level, formulated as:

$$ \mathrm{T\text{-}NED} = 1 - \frac{1}{N^{t}} \sum_{i}^{N^{t}} \frac{D(s_{i}, \hat{s}_{i})}{max(l_{i}, \hat{l}_{i})}, $$

where $l_{i}$ and $\hat{l}_{i}$ are the numbers of characters in the predicted text and the corresponding GT, $N^{t}$ is the number of text instances, and $D$ denotes the Levenshtein distance.

Face Just as one cannot paint the Mona Lisa without having seen her, a visual tokenizer that fails to accurately reconstruct faces will prevent generative models trained on its latent space from correctly generating the corresponding identities. In fact, distorted identities may even mislead the learning process of generative models. To evaluate the fidelity of face reconstruction, we employ the insightface [16] recognition model $\mathcal{M}_{f}$ to measure the similarity between reconstructed and original faces. Specifically, we input the same facial keypoints from the annotations, together with the original and reconstructed images, into the recognition model to extract the corresponding facial features, then compute the cosine similarity between these feature vectors as our face similarity metric (F-Sim). As shown in Figure 4, higher similarity scores indicate better face reconstruction quality, with Table 1 in the Supp. demonstrating that high-resolution resizing achieves the highest F-Sim of 1.

# 3.3 Video Data Curation

Text We collect real-world videos from the ICDAR 2013–15 Text-in-Videos Challenge [17] and the test set of DSTextV2 [60]. Word-level annotations for the texts in each frame are given. Similar to the processing procedure illustrated in Sec. 3.1.1, we discard invalid text instances while preserving the original video clips.
Since the resizing strategy for video tokenizers is based on the short side, we remove instances with $min(h^{t}, w^{t}) < 5$ or $r^{t} < \frac{5 \times min(H, W)}{480 \times max(H, W)}$, where 480 is the upper bound of the resized short side in our evaluation. Thus, we obtain 15,921 frames that contain 347,468 valid text instances. The evaluation is conducted per frame, with a pipeline and metrics consistent with Fig. 3. We only need to recognize text in the cropped regions while ignoring frames containing no valid text, improving efficiency.

Table 1: Performance of discrete and continuous tokenizers on TokBench. ‘$\cdot_{s}$’ and ‘$\cdot_{m}$’ denote the average metrics for small-scale instances and all scales, respectively. In this table, we compute traditional metrics such as rFID across both the text set and the face set. ‘Factor’ denotes the downsampling ratio in latent space, while ‘1D’ indicates that images are encoded into one dimension.

Face We first downloaded all videos from the VideoMME [10], MVBench [23], and MMBench-Video [9] datasets. Each video was sampled at 1 FPS and processed using insightface [16] for face detection, retaining only videos containing faces whose longer edge exceeds 512 pixels. The retained videos then underwent frame-by-frame analysis to select clips meeting two criteria: continuous face presence for at least 3 seconds and detection of more than 3 faces. After filtering out videos where most frames contained only a single face, we manually curated the remaining clips based on video quality and content richness, resulting in 328 selected 3-second video segments (25,980 frames in total). Within these frames, we performed additional insightface detection to identify faces with confidence scores above 0.5 and scale factors exceeding 0.03, yielding 81,556 valid target faces for frame-by-frame similarity evaluation between reconstructed and original faces.
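The frame-by-frame similarity used above (F-Sim, Sec. 3.2) reduces to a cosine similarity between face embeddings. In the sketch below, seeded random vectors stand in for the insightface features, so the numbers are illustrative only.

```python
import numpy as np

def f_sim(feat_orig, feat_recon):
    """Cosine similarity between original and reconstructed face embeddings."""
    a = feat_orig / np.linalg.norm(feat_orig)
    b = feat_recon / np.linalg.norm(feat_recon)
    return float(np.dot(a, b))

rng = np.random.default_rng(1)
orig = rng.standard_normal(512)              # stand-in for a real face embedding
perfect = orig.copy()                        # lossless reconstruction
degraded = orig + 2.0 * rng.standard_normal(512)  # heavily distorted reconstruction

print(round(f_sim(orig, perfect), 3))               # identical embeddings → 1.0
print(f_sim(orig, degraded) < f_sim(orig, perfect)) # distortion lowers similarity
```

In the real pipeline the embeddings come from the recognition model applied to aligned crops of the original and reconstructed frames; the similarity computation itself is this simple.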
# 4 Experiments

# 4.1 Evaluation Setting

In this section, we conduct comprehensive comparisons of existing classical continuous and discrete visual tokenizers on the proposed TokBench. We evaluate image reconstruction quality at three resolutions: 256, 512, and 1024. For each resolution, we first center-pad the original image into a square and then resize it to the target resolution. After reconstruction at the target resolution, we resize the image back to its original padded size and crop out the padded regions to obtain a reconstructed result matching the original resolution. We additionally provide baseline results for each resolution by applying the same padding and resizing process without reconstruction, representing the theoretical upper limit at that resolution. For video reconstruction, we conduct experiments at resolutions of 256 and 480. Notably, we resize the shorter edge of each video to the target length while padding both the longer edge and the frame count to meet the dimensions required by the tokenizers. After reconstruction, we crop out the padded regions and resize the videos back to their original resolutions. The reconstructed videos are then evaluated frame-by-frame using the same protocols as for images. Our evaluation framework is efficient and lightweight. After reconstructing all images in TokBench, the complete calculation of the T-ACC and F-Sim metrics for images requires only 2GB of GPU memory and can be completed within 4 minutes on a single RTX 4090 GPU. Evaluating all reconstructed videos requires 2GB of GPU memory and approximately 30 minutes, which can be reduced to 6 minutes through multi-GPU parallel processing.

Table 2: Performance of discrete and continuous tokenizers on the TokBench text set.

# 4.2 Main Results

We primarily evaluate performance at 256 resolution since most tokenizers are trained at this scale, with results presented in Table 1.
Most discrete tokenizers employ $16\times$ downsampled spatial quantization (F16), while we additionally evaluate $8\times$ downsampled (F8) variants of LlamaGen [49] and Open-MAGVIT2 [28] tokenizers for comparison. At 256 resolution, discrete tokenizers demonstrate notably poor performance in reconstructing small-scale text and faces. UniTok's [29] multi-codebook design preserves finer details, achieving significantly superior text reconstruction compared to other tokenizers, even outperforming continuous-space VAEs from VA-VAE [64] and SDXL [40]. For face reconstruction, UniTok also surpasses other F16 tokenizers. The higher-compression 1D tokenizer TiTok [66] yields the weakest results for both text and face reconstruction. Notably, F8 tokenizers consistently outperform their F16 counterparts with identical architectures, while continuous VAEs from SD3.5 [6] and FLUX [22] achieve the highest scores. Compared to conventional metrics (FID [11], LPIPS [69], PSNR, SSIM [56]), improved text reconstruction typically correlates with better scores. However, comparisons between UniTok vs. VA-VAE/SDXL and VAR [51] vs. Open-MAGVIT2 (pretrain) reveal contradictory trends. Moreover, FID and PSNR exhibit limited discriminative power for text/face reconstruction quality: even with substantial T-ACC and F-Sim variations, the corresponding FID gaps remain marginal. This evidences existing metrics' inadequacy in comprehensively evaluating these specific reconstruction tasks.

Figure 5: T-ACC and F-Sim metrics across reconstruction resolutions versus target scales. Smaller scales present greater challenges, and even the best-performing VAEs show room for improvement when compared to the "resize" upper bound.

# 4.3 Detail Evaluation for Text and Face

Table 2 further presents the evaluation results of various tokenizers on text data across multiple resolutions.
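T-ACC and T-NED are computed from OCR output on the cropped text regions. The following is a sketch of plausible definitions (exact-match accuracy and one minus normalized edit distance); the benchmark's exact normalization may differ:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def t_ned(pred, gt):
    # 1 - normalized edit distance; higher is better
    if not gt and not pred:
        return 1.0
    return 1.0 - levenshtein(pred, gt) / max(len(pred), len(gt))

def t_acc(preds, gts):
    # Exact-match accuracy over recognized text instances
    return sum(p == g for p, g in zip(preds, gts)) / len(gts)
```

T-NED is softer than T-ACC: a reconstruction that garbles one letter of a word still scores well on T-NED while counting as a miss for T-ACC.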
First, we observe that most tokenizers achieve progressively better performance with increasing resolution, even without being trained at 1024 resolution. Additionally, more discrepancies emerge between traditional metrics and T-ACC, as evidenced by cases like LlamaGen vs. TokenFlow at 512 resolution, UniTok vs. Open-MAGVIT2 at 1024 resolution, and LlamaGen (F8) vs. Open-MAGVIT2 (F8) at 1024 resolution. These findings further validate the complementary value of our proposed metric to existing evaluation methods. Notably, the performance gap between continuous and discrete tokenizers widens significantly with increasing resolution. At 1024 resolution, FLUX's VAE even achieves T-NED comparable to simple resizing. It is worth noting that since many original text images exceed 1024 pixels in size, even resizing cannot achieve $100\%$ T-ACC and T-NED. We further visualize the relationship between T-ACC/F-Sim metrics and instance scales across different resolutions in Figure 5. For small-scale objects, the performance gap between continuous and discrete tokenizers becomes more pronounced at higher resolutions. Detailed evaluations on face data and the difficulty rating are provided in the supplementary materials.

# 4.4 Video Tokenizers and VAEs

We evaluated video reconstruction quality at two standard resolutions (256 and 480) using a series of VAEs [37] with identical architectures but varying compression ratios, along with three top-performing 3D causal VAEs from Step-Video [30], Hunyuan-Video [21], and CogVideoX [63], as shown in Table 3. Discrete video tokenizers remain understudied and demonstrate inferior performance. The Cosmos-VAE framework enables clear observation of the performance gap between discrete and continuous tokenizers under the same architectural designs, while also revealing the impact of different compression factors.
While all $4 \times 8 \times 8$ VAEs demonstrate effective video compression and reconstruction capabilities, their performance on small-scale text reconstruction still shows significant gaps compared to the theoretical upper bound (Resize). In contrast, face reconstruction achieves results closer to the theoretical upper bound, likely due to these VAEs' extensive exposure to facial data during training. A comparison between the $8 \times 16 \times 8$ Cosmos-VAE and Step-Video reveals that at identical compression ratios, Step-VAE demonstrates clearly superior capabilities. Although its performance remains below that of the Hunyuan-Video and CogVideoX VAEs, it achieves an $8\times$ compression ratio while maintaining highly efficient compression and reconstruction capabilities.

Table 3: Performance of video tokenizers on TokBench-Video. The resolution refers specifically to the shorter edge of the videos, while maintaining the original aspect ratio throughout. The categorization into small, medium, and large scales is dynamically adjusted based on resolution.

# 4.5 Ablation of Training Data

Since different tokenizers typically release weights trained on distinct datasets, we conduct ablation studies on training data to investigate its impact on text and face reconstruction performance. Following LlamaGen's [49] training protocol, we augment the ImageNet [5] dataset with an additional $230\mathrm{k}$ text-rich images. We train both F16 and F8 VQGAN models for $400\mathrm{k}$ steps on either the mixed dataset or the original ImageNet alone, then evaluate them on the TokBench text set, as shown in Table 4.

Table 4: Ablations on training data. While augmenting ImageNet with text-rich data yields performance improvements, the gains remain limited, indicating that model architecture design exerts a more substantial influence than training data composition.
The results demonstrate that incorporating more text data indeed improves T-ACC and T-NED scores, though these improvements are relatively marginal compared to architectural enhancements. This suggests that while training data influences text and face reconstruction quality, the tokenizer's structural design remains the more critical factor. The detailed training data components are provided in the supplementary materials.

# 5 Limitation

In TokBench, text reconstruction quality is judged by the accuracy of text recognition. Although the proposed metrics effectively reflect reconstruction quality for these visual targets, they lack pixel-level evaluation across the entire image. For instance, while text may be accurately reconstructed, distortions in contrast or saturation may occur, which our metrics cannot directly capture. Therefore, the proposed metrics should serve as a meaningful complement to commonly used metrics such as PSNR and FID, which evaluate reconstruction quality at the pixel level and the statistical feature level, respectively.
In this work, we reveal the limitations of visual tokenizers and VAEs in preserving fine-grained features, and propose a benchmark to evaluate reconstruction performance for two challenging types of visual content: text and faces. Visual tokenizers and VAEs have significantly advanced visual generation and multimodal modeling by providing more efficient compressed or quantized image representations. However, while helping generation models reduce computational burdens, the information loss from image compression fundamentally limits the upper bound of visual generation quality. To evaluate this upper bound, we focus on assessing reconstructed text and facial features since they typically: 1) exist at smaller scales, 2) contain dense and rich textures, 3) are prone to collapse, and 4) are regions to which human vision is highly sensitive. We first collect and curate a diverse set of clear text and face images from existing datasets. Unlike approaches using VLM models, we employ established OCR and face recognition models for evaluation, ensuring accuracy while maintaining an exceptionally lightweight assessment process **requiring just 2GB of memory and 4 minutes** to complete. Using our benchmark, we analyze text and face reconstruction quality across various scales for different image tokenizers and VAEs. Our results show that modern visual tokenizers still struggle to preserve fine-grained features, especially at smaller scales. We further extend this evaluation framework to video, conducting a comprehensive analysis of video tokenizers. Additionally, we demonstrate that traditional metrics fail to accurately reflect reconstruction performance for faces and text, while our proposed metrics serve as an effective complement.
# 1 Introduction

Language models are about uncovering patterns in a sequence so they can guess what comes next. Before any of that happens, we must decide what the pieces of that sequence—the tokens—actually are. That choice is usually frozen in advance by a tokeniser that chops raw text into discrete units long before training begins. Consider the sentence "The quick brown fox." A character-level tokeniser feeds the model the stream {T, h, e, ␣, q, u} and asks it to predict the next letter i. A word-level tokeniser, in contrast, hands over {The, quick} and expects the model to guess brown in one shot. Finer cuts lead to longer sequences and shorten the look-ahead window, whereas coarser cuts lead to shorter sequences but make each token rarer and harder to compare and predict. Regardless of granularity, some form of tokenisation is unavoidable: a sequence must exist before any Transformer can run. Byte-Pair Encoding (BPE) followed by a simple embedding table is by far the most popular approach. It works by repeatedly merging the most frequent byte sequences in the training text until a preset vocabulary limit is reached. This procedure leaves practitioners with just two intuitive dials. The first dial is the training corpus: whichever text one feeds the algorithm—English prose, source code, or a multilingual mix—determines which patterns are merged and therefore what the final tokens look like. The second dial is the vocabulary size: raising this limit lets the merge process run for more steps, producing longer tokens and shorter sequences at the cost of a larger embedding table and output softmax. Most issues with tokenisation stem from the embedding operation rather than the splitting act itself.
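The granularity trade-off from the fox example can be made concrete:

```python
sentence = "The quick brown fox."

char_tokens = list(sentence)       # fine cuts: long sequence, short look-ahead
word_tokens = sentence.split(" ")  # coarse cuts: short sequence, rarer tokens

# a character-level model predicts "i" after {T, h, e, ␣, q, u}
assert char_tokens[:6] == ["T", "h", "e", " ", "q", "u"] and char_tokens[6] == "i"
# a word-level model predicts "brown" after {The, quick}
assert word_tokens[:2] == ["The", "quick"] and word_tokens[2] == "brown"
# finer cuts always yield longer sequences
assert len(char_tokens) > len(word_tokens)
```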
Each token is typically mapped to an independent vector, meaning the network sees only opaque identifiers and must rediscover, for instance, that strawberry and strawberries share nine letters. This reliance on isolated embeddings hampers symbol-level reasoning and complicates transfer to dialects or rare languages. Finally, this splitting is most often a preprocessing step, locking in a single level of granularity for all subsequent model layers (see Section 2.2). To address these limits, our Autoregressive U-Net (Section 2.1), or AU-Net ('oh-net', /ˈoʊnɛt/), learns to embed information directly from raw bytes, and allows for multiple stages of splitting. The purpose of an embedding is to map tokens to vectors. Instead of using a lookup table, we use attention directly to embed the tokens. Self-attention allows vectors at any position to summarize the entire preceding context. This enables a simple pooling mechanism: we select these contextualized vectors at word boundaries (AU-Net-2), then word pairs (AU-Net-3), and up to four-word chunks (AU-Net-4), forming a multi-stage embedding hierarchy. This U-Net-like architecture contracts sequences, preserving detail with skip connections, before expanding them. During expansion, vectors representing coarser information are injected back into finer-grained representations. Deeper stages, by operating on compressed views, inherently need to anticipate multiple words ahead, similar to multi-token prediction (Gloeckle et al., 2024) but without auxiliary losses. This effect allows deeper stages to guide shallower stages at the semantic level, while letting the latter handle finer details like spelling.

Contributions (quantified in Section 3). C1. Adaptive multi-level hierarchy. We train up to four end-to-end embedding stages with arbitrary, user-specified split functions, extending prior work that relies either on fixed pooling or shallow hierarchies. C2. Infinite vocab size.
By operating directly on bytes, our model avoids predefined vocabularies and memory-heavy embedding tables, allowing an unlimited number of unique tokens. C3. Strong performance and scaling. Under identical pre-training budgets, a single level matches strong BPE baselines, and a two- or three-level hierarchy shows promising scaling trends. A selection of the results is presented in Table 2. C4. Practical efficiency. We maintain comparable GPU throughput in wall-clock time instead of purely theoretical compute gains. Our code is available in Meta Lingua (Videau et al. (2024))1. C5. Stable scaling laws. We show that moving from token- to byte-level training demands new batch size and learning rate formulas to get smooth optimization.

# 2 Method

# 2.1 Autoregressive U-Net

Inspired by U-Net-like architectures (Ronneberger et al., 2015; Nawrot et al., 2022), we propose an autoregressive hierarchical model for language modeling, illustrated in Figure 1. This architecture features a contracting path, which compresses the input sequence, and an expanding path, which reconstructs it. Both paths are fully adaptive: they do not require fixed pooling or upsampling sizes. Pooling and upsampling operations can be designed independently, even if we choose to make them symmetrical in this paper. The only requirement is a splitting function, which specifies the positions in the sequence where pooling should occur. This function is detailed in Section 2.2.

Table 1 1B equivalent on 370B tokens

Our architecture is monolithic: unlike recent approaches (Pagnoni et al., 2024; Neitemeier et al., 2025) that use local models, we apply attention globally at each stage (or within a sliding window), allowing every input to attend to previous inputs. This ensures that words or word groups are not processed in isolation. To preserve fine-grained information that might be lost during contraction, we introduce skip connections between stages, following the approach in Ronneberger et al.
(2015) and Nawrot et al. (2022). We also increase the hidden dimension at each stage in proportion to its contraction factor, enabling richer representations as the sequence is contracted. To keep computation tractable at the byte-level stage (Stage 1), where sequences are longest, we restrict attention to a window.

# 2.1.1 Pooling and Upsampling

Since our pooling and upsampling are adaptive, we cannot rely on fixed window sizes. To address this, we explored several pooling and upsampling strategies. In this section, we describe the method used in all experiments reported in the main text. A complete description of the alternatives and ablation results can be found in Appendix C.

Pooling. We adopt the simplest pooling strategy: selecting the indices identified by the splitting function and projecting them to the next stage's dimensionality using a linear layer. Since the preceding layers already include attention mechanisms, we rely on these to do the pooling implicitly instead of relying on explicit cross-attention as used in Nawrot et al. (2022); Pagnoni et al. (2024).

Upsampling. The upsampling step maps coarse representations to finer ones for the next stage. As illustrated in Figure 2, we duplicate each coarse vector to match the length of the following segment, applying distinct, position-specific linear transformations to these duplicates. Since these transformations are shared across segments but vary by position within a segment, we term this Multi-Linear Upsampling. In our experiments, models with multiple stages are more sensitive to the specific choice of upsampling strategy, whereas for pooling, many strategies work equally well.

# 2.1.2 Generation

During training, we process the entire input sequence in parallel, activating all stages simultaneously.
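A minimal sketch of the pooling indices and Multi-Linear Upsampling described above, using whitespace word boundaries as the splitting rule and scalar stand-ins for vectors and linear maps; all names, weights, and the per-stage grouping rule are simplifications of ours, not the paper's exact regexes:

```python
def boundary_indices(text, group=1):
    # Pooling positions: every `group`-th word boundary (group=1 -> word
    # level, group=2 -> word pairs, group=4 -> four-word chunks).
    bounds = [i for i, ch in enumerate(text) if ch == " "]
    bounds.append(len(text) - 1)  # the sequence end closes the last word
    return bounds[group - 1 :: group]

def multi_linear_upsample(coarse, seg_lens, pos_weights):
    # Duplicate each coarse vector over its segment; the duplicate at
    # within-segment position p gets its own "linear map" (a scalar here),
    # shared across segments but distinct per position.
    out = []
    for vec, n in zip(coarse, seg_lens):
        for p in range(n):
            out.append(pos_weights[p] * vec)
    return out

# Boundary decisions depend only on the prefix, so appending bytes never
# changes earlier pooling positions (needed for autoregressive decoding).
prefix, extended = "the quick ", "the quick brown fox"
assert boundary_indices(extended, 1)[:2] == boundary_indices(prefix, 1)[:2] == [3, 9]
```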
At inference, generation is autoregressive: the byte-level stage is active at every step, while deeper stages activate less frequently according to the pooling pattern. Skip connections transmit information upward at each stage, so deeper stages can integrate fine-grained details. This cascading, conditional activation enables efficient inference: computationally intensive high-level stages activate rarely, but still effectively guide detailed lower-level predictions. In practice, this means that we need to cache the latest vector at the output of each stage to correctly propagate deeper stages' outputs.

# 2.2 Splitting Function

The AU-Net architecture supports flexible splitting strategies to define pooling points at each hierarchical stage. The primary constraint is that any chosen splitting function must be stable to rightward insertion: appending bytes should not alter prior pooling decisions, ensuring consistent autoregressive generation. Various methods (e.g., fixed windows (Nawrot et al., 2022), entropy (Pagnoni et al., 2024), learned rules) are possible. Our current work splits on spaces using different regular expressions at each stage (details in Appendix B). This strategy defines a hierarchy: Stage 1 processes raw bytes; Stage 2 pools at word boundaries (identified by the regex); Stage 3 pools after every two words (or sentence end); and Stage 4 after every four words (or sentence end). This rule-based approach, inspired by pre-tokenization in systems like GPT-4o (Dagan et al., 2024), is effective for Latin scripts. Extending robustly to languages without clear delimiters remains future work. Unlike prior approaches (Pagnoni et al., 2024; Neitemeier et al., 2025; Slagle, 2024) that used similar splits mainly to replace BPE in a single-stage context, AU-Net uses these user-defined splits for its multi-stage hierarchical processing.

# 2.3 Evaluating on different scales

Large language models scale very predictably Kaplan et al. (2020); Hoffmann et al.
(2022); Bi et al. (2024). This allows us to estimate the performance of a model for a large compute budget. But more surprisingly, it allows us to predict the optimal hyperparameters for models way beyond our ablation budget. Bi et al. (2024) described a method for sweeping learning rates and batch sizes across a range of small models, and they demonstrated that these results can be used to predict optimal hyperparameters for larger models. Following their methodology, we show a different evolution of hyperparameters, due both to the data in our setup and to the hierarchical model. These hyperparameters are then used to fit scaling laws over a bigger range of compute budgets to compare the baseline architecture and AU-Net. Throughout this paper, the scale of a run is its total pre-training compute $C$, measured in floating point operations (FLOPs):

$$ C = \underbrace{F_{\mathrm{model/input\text{-}unit}}}_{\text{FLOPs per (forward+backward) pass per input unit}} \times \underbrace{N_{\mathrm{input\text{-}unit}}}_{\text{number of units of training input}} . $$

Following Bi et al. (2024), we define model size as the number of FLOPs per input unit instead of relying on the number of parameters. This allows us to compare models with different architectures fairly. The formula for the number of FLOPs per input unit for a decoder-only transformer is given by:

$$ F_{\mathrm{model/input\text{-}unit}} = \underbrace{6 N_{\mathrm{params}}^{\mathrm{no\text{-}embed}}}_{\mathrm{linear~term}} + \underbrace{6 d L S}_{\mathrm{attention~term}} , $$

where $N_{\mathrm{params}}^{\mathrm{no\text{-}embed}}$ is the number of parameters excluding the embeddings, $d$ is the dimension, $S$ the sequence length, and $L$ the number of layers. To scale up, one can either make the model bigger ($F_{\mathrm{model/input\text{-}unit}} \uparrow$), give it more data ($N_{\mathrm{input\text{-}unit}} \uparrow$), or do both. Gadre et al.
(2024) showed that keeping the data-to-model ratio $\gamma_{\mathrm{input\text{-}unit}}$ constant is key to getting smooth scaling laws and predictable performance, where:

$$ \gamma_{\mathrm{input\text{-}unit}} = \frac{N_{\mathrm{input\text{-}unit}}}{F_{\mathrm{model/input\text{-}unit}}} . $$

We adopt this convention in all experiments and report the data-to-model ratio $\gamma_{\mathrm{input\text{-}unit}}$ used in the experiments.

Bytes versus tokens. On DCLM, a token sequence is on average $k \approx 4.56$ times shorter than its byte sequence when using the LLaMa 3 tokenizer. Given some compression factor $k$ between bytes and tokens, we want to express the equivalent $\gamma_{\mathrm{byte}}$. To do this, we note that $N_{\mathrm{byte}} = k \times N_{\mathrm{token}}$ and $F_{\mathrm{model/byte}} = F_{\mathrm{model/token}} / k$. Therefore,

$$ \gamma_{\mathrm{byte}} = k^2 \frac{N_{\mathrm{token}}}{F_{\mathrm{model/token}}} = k^2 \gamma_{\mathrm{token}} . $$

This factor allows us to compare the performance of our model with the baseline on the same scale, as they will have seen the same amount of data and spent the same amount of FLOPs per token. Throughout the paper, we always express the data-to-model ratio in LLaMa 3 tokens ($\gamma_{\mathrm{token}}$).

FLOPs per byte for AU-Net. In the case of AU-Net, we cannot use the same formula as the baseline because of the contraction and expansion happening in the model. However, we can still use the same formulas as long as we account for the contraction at each stage. The total FLOPs per byte for AU-Net is simply the sum over stages, each divided by its contraction factor:

$$ F_{\mathrm{model/byte}} = \sum_{i=1}^{L} \frac{F_{\mathrm{model/byte}}^{i}}{k_i} , $$

where $k_i$ is the contraction factor at stage $i$.
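The compute accounting above can be checked numerically; the budget numbers here are hypothetical, only the relations come from the text:

```python
import math

def flops_per_input_unit(n_params_no_embed, d, n_layers, seq_len):
    # F_model/input-unit = 6 * N_params(no-embed) + 6 * d * L * S
    return 6 * n_params_no_embed + 6 * d * n_layers * seq_len

def gamma(n_input_units, f_per_unit):
    # data-to-model ratio
    return n_input_units / f_per_unit

k = 4.56  # average bytes per LLaMa 3 token on DCLM

# hypothetical token-level budget
n_token, f_per_token = 60e9, 1e9
n_byte = k * n_token          # byte sequences are k times longer
f_per_byte = f_per_token / k  # the same model work spread over k x more units

# total compute C = F x N is unchanged by the change of unit
assert math.isclose(n_byte * f_per_byte, n_token * f_per_token, rel_tol=1e-9)
# and the data-to-model ratio transforms as gamma_byte = k^2 * gamma_token
assert math.isclose(gamma(n_byte, f_per_byte), k**2 * gamma(n_token, f_per_token), rel_tol=1e-9)
```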
This property allows us to have models with a higher number of parameters for the same compute budget and data-to-model ratio.

Hyperparameter scaling laws. Bi et al. (2024) showed that the regularity of scaling laws can be exploited to tune very large models from a sweep over much smaller ones. We replicate their protocol on six miniature versions of each architecture (baseline Transformer and AU-Net): we perform a quasi-random search over batch size and learning rate, keep the configurations within $1\%$ of the best validation loss, and fit $\mathrm{BSZ}(C) = A C^{\alpha}$ and $\mathrm{LR}(C) = B C^{\beta}$ to those points, with parameters $A$, $\alpha$, $B$ and $\beta$. We find the following formulas at the byte level for AU-Net:

$$ \mathrm{BSZ}_{\mathrm{AU\text{-}Net}}(C) = 0.66\, C^{0.321} \qquad \mathrm{LR}_{\mathrm{AU\text{-}Net}}(C) = 6.6 \times C^{-0.176} . $$

We run the same tuning for the BPE baseline, for which we find:

$$ \mathrm{BSZ}_{\mathrm{BPE}}(C) = 29.9\, C^{0.231} \qquad \mathrm{LR}_{\mathrm{BPE}}(C) = 19.3 \times C^{-0.177} . $$

# 3 Experimental Results

# 3.1 Experimental Setup

Data. For all experiments, we used DCLM (Li et al., 2024) as our pretraining dataset, excluding a very small fraction for validation. This is around 4T training tokens (with the GPTNeoX tokenizer). The corpus is mostly English and targets mainly natural language understanding, i.e., it contains a marginal amount of code or maths.

Baselines. We compare our approach to three different baselines: Transformers equipped with the BPE tokenizer of LLaMa 3, Transformers trained directly on bytes, and Mamba (Gu and Dao, 2024) trained directly on bytes. To keep the comparison fair, we trained each baseline with the same amount of data or compute.
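The fitted laws above can be evaluated for any compute budget $C$; this is a direct transcription of the formulas, with units as in the paper's fit:

```python
def bsz_aunet(C):
    # fitted batch-size law for AU-Net (byte level)
    return 0.66 * C ** 0.321

def lr_aunet(C):
    # fitted learning-rate law for AU-Net (byte level)
    return 6.6 * C ** -0.176

def bsz_bpe(C):
    return 29.9 * C ** 0.231

def lr_bpe(C):
    return 19.3 * C ** -0.177

# batch size grows and learning rate shrinks with compute, for both fits
assert bsz_aunet(1e21) > bsz_aunet(1e19) and bsz_bpe(1e21) > bsz_bpe(1e19)
assert lr_aunet(1e21) < lr_aunet(1e19) and lr_bpe(1e21) < lr_bpe(1e19)
```

At $C = 10^{20}$ FLOPs, for instance, the AU-Net learning rate lands on the order of $10^{-3}$, a plausible magnitude for pretraining.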
For example, if a data budget of 273B training bytes is used to train the bytes level or AU-Net model, this budget is converted to 60B training tokens for a transformer with LLaMa 3 tokenizer (Grattafiori et al., 2024) because of the 4.56 compression rate measured on the DCLM corpus. Hyperparameters. For a detailed overview of the hyperparameters, see appendix D. As explained in section 2.3, we sweep batch size and learning rate values across model scales ranging from 25M to 500M. Then, we extrapolate the best learning rate and batch size for any given compute budget. Evaluation Metrics. All models are evaluated on a broad set of downstream tasks in a zero-shot setting, occasionally including a few in-context examples directly in the prompt. These tasks fall into two categories: (i) multiple-choice (MCQ) tasks, where the correct answer is selected as the option with the lowest normalized negative log-likelihood (divided by the number of characters) Brown et al. (2020); and (ii) open-ended generation tasks, where the model is allowed to freely generate its answer. To highlight the strengths of AU-Net, we include specialized benchmarks targeting character-level manipulation (CUTE Edman et al. (2024) appendix E) and low-resource language translation (FLORES-200, Costa-jussa et al. (2024) section 3.4). For clarity, we report a selection of key benchmark results in the main tables, including Hellaswag, ARC-Easy, ARC-Challenge, MMLU, NQ, TQA, and GSM8K. Also, we report 95% confidence intervals for all tables using bootstrap. A full breakdown of all evaluation results is provided in the appendix F. In addition to task performance, the total training FLOPs and training throughput are provided for each model, measured in bytes per second per GPU (bps) on H100 80GB GPUs (internal cluster) during the actual training. Implementation Details. As scaling is key to the success of large language models, our implementation balances efficiency and simplicity. 
We use sequence packing along with full attention, a strategy shown to have little to no impact on downstream performance (Li et al. (2024)). To reduce GPU memory pressure, all our experiments rely on Fully Sharded Data Parallelism (FSDP). For additional speed-ups, the entire model is compiled with torch.compile. Compilation, however, requires a static computation graph, which clashes with the variable-length outputs produced by our adaptive pooling: the number of bytes per word (and thus per stage) naturally varies across sentences. We resolve this by fixing a maximum sequence length at every stage: sequences that exceed the limit are truncated abruptly, and shorter ones are padded. This compromise yields a graph that is static for compilation while still supporting adaptive hierarchical pooling in practice.

# 3.2 Equal Data Budget Results

We evaluate the effectiveness of hierarchical pooling by fixing the model's primary hidden dimension to 2048 and maintaining a constant total training-data budget. The hidden dimension at each stage is scaled proportionally to its contraction ratio as described in Section 2.1. For instance, the byte-level stage uses a dimension of $2048/4 = 512$, the word-level stage uses 2048, and the 2-word level uses $1.5 \times 2048 = 3072$, continuing in this manner for deeper stages. We assess the downstream performance of language models with 2, 3, and 4 stages at the 1B parameter scale. For the 8B model, we evaluate only the 1-stage configuration for now. All variants are compared against a Transformer baseline using the LLaMA 3 tokenizer with the same main hidden dimension. More ablations regarding pooling and the number of layers per stage can be found in Appendix C. As shown in Table 2, hierarchical models consistently match or outperform their BPE-based counterparts. This trend holds across various configurations and becomes especially pronounced as we introduce more hierarchical stages.
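The per-stage widths above can be written down directly; the values for stages 1-3 are stated in the text, while the $+0.5 \times$ step for deeper stages is our extrapolation of "continuing in this manner":

```python
MAIN_DIM = 2048

def stage_dim(stage, main_dim=MAIN_DIM):
    # Byte level: main_dim / 4; word level: main_dim; 2-word level:
    # 1.5 * main_dim. Stages beyond that follow an assumed +0.5*main_dim
    # progression (not stated explicitly in the paper).
    if stage == 1:
        return main_dim // 4
    return int((1 + 0.5 * (stage - 2)) * main_dim)

assert [stage_dim(s) for s in (1, 2, 3)] == [512, 2048, 3072]
```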
Notably, multi-stage AU-Net models (e.g., AU-Net 3 and AU-Net 4) outperform BPE baselines on several benchmarks. An interesting exception to this pattern is the TQA benchmark, a knowledge-intensive task evaluating the model's generation. AU-Net models, along with byte-level baselines, consistently underperform on TQA compared to BPE-based models. This suggests that the performance gap may not stem solely from the hierarchical structure. However, as model size and training data scale (e.g., at the 8B scale, or at 1B with 370B tokens), this discrepancy seems to vanish. We observe early signs of diminishing returns beyond a certain hierarchical depth. While AU-Net 4 improves on reasoning-heavy tasks such as ARC-C and GSM8k, gains on benchmarks like Hellaswag and TQA are less consistent. However, this effect may stem not from hierarchy itself, but from data efficiency: deeper hierarchies might require more training data to fully realize their potential. Supporting this interpretation, we find that AU-Net 2 and AU-Net 4 benefit significantly from additional training data, and that MMLU and GSM8k performance continues to improve with increasing stage count, even at fixed scale.

Table 2 Downstream results comparing AU-Net to BPE and byte-level baselines. We report accuracy on key benchmarks with $95\%$ confidence intervals where applicable. Literature models are shown in italics; all models are trained on the same corpus, unless specified. AU-Net variants differ in the number of stages. We also report compute budget and empirical training speeds in bytes/sec.

Finally, when comparing our models to similarly sized baselines from the literature (italicized in the table), we find that AU-Net remains competitive, even while using significantly less training data. For instance, BLT (1T) uses approximately $5\times$ more compute than our 8B model, while only being better on MMLU.
Importantly, comparisons with literature models are fair, as all were trained on the same corpus: DCLM (except for BLT (220B) and LLaMa 3.1 (15T)). To further evaluate our approach, we now turn to scaling laws to better quantify how our architecture compares to a standard Transformer with BPE. We focus on AU-Net 2 and AU-Net 3, using a data-to-model ratio of 2. This choice is motivated by the diminishing returns observed when moving from AU-Net 3 to AU-Net 4 under the same data-to-model ratio.

# 3.3 Scaling laws

Using the learning rate and batch size formulas (Section 2.3), we run pretrainings for a range of compute budgets from 1e19 to 1e22 FLOPs (corresponding to models from 150M to 5.3B non-embedding parameters) for the baseline, with a data-to-model ratio of 10. This is roughly $2\times$ the optimal data-to-model ratio found by Kaplan et al. (2020). The list of models chosen for each budget is detailed in Appendix G. Figure 3 shows the evolution of performance on 6 downstream tasks for AU-Net and the BPE baseline. Here we mainly notice that 2- and 3-stage AU-Net models can match the performance of the BPE baseline when carefully controlling for compute budget. This is the case for Hellaswag, ARC-Easy, and NQ. For TQA, both the 2- and 3-stage AU-Net models start with a performance gap, but the 3-stage model catches up at 1e22 FLOPs. However, both 2-stage and 3-stage AU-Net models are still behind the BPE baseline at 1e22 FLOPs for GSM8K and MMLU. Most downstream tasks follow a sigmoid pattern: performance is near chance at low compute, then rapidly improves before plateauing. For AU-Net models, this transition appears to occur slightly later on tasks like GSM8K and MMLU, suggesting that the benefits of a deep hierarchy may become more pronounced at larger scales.
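The sigmoid pattern described above can be captured by an illustrative logistic curve in log-compute; every parameter value here is hypothetical, chosen only to show the shape:

```python
import math

def sigmoid_scaling(C, chance=0.25, C0=1e21, steepness=1.0):
    # Illustrative accuracy-vs-compute curve: near chance at low compute,
    # rapid improvement around the transition point C0, plateau toward 1.
    x = steepness * (math.log10(C) - math.log10(C0))
    return chance + (1 - chance) / (1 + math.exp(-x))

# near chance well below C0, halfway through the transition at C0
assert sigmoid_scaling(1e17) < 0.30
assert abs(sigmoid_scaling(1e21) - 0.625) < 1e-9
assert sigmoid_scaling(1e19) < sigmoid_scaling(1e21) < sigmoid_scaling(1e23)
```

Shifting $C_0$ to the right models the later transition observed for AU-Net on GSM8K and MMLU.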
Nevertheless, on many benchmarks, both our AU-Net variants and our BPE baseline achieve results remarkably close to those of considerably larger models like LLaMa 3.1 8B (pretrained on 15T tokens, representing 100 times more compute than our largest run shown here). This proximity underscores the strength of our BPE baseline, making AU-Net's ability to match or trend towards it particularly noteworthy. The primary exception where this close tracking is less apparent is GSM8K; however, this underperformance across all our models is likely due to the pretraining corpus, as DCLM contains very little math data.

Figure 3 Downstream task performance scaling with compute (1e19-1e22 FLOPs). AU-Net (2/3 stages) generally tracks a strong BPE Transformer baseline, which itself performs competitively against much larger models (e.g., LLaMa 3.1 8B on 15T tokens, $\approx 100\times$ compute). While AU-Net matches the baseline on tasks like Hellaswag and ARC-Easy, and catches up on TQA at higher compute, its performance improvement phase on MMLU and GSM8K appears to start later. The general underperformance on GSM8K is also linked to limited math data in the DCLM pretraining corpus.

# 3.4 Extended Evaluations

We present results highlighting two specific advantages of byte-level training with AU-Net over BPE-based Transformers: improved performance on multilingual benchmarks (Table 3) and character-level manipulation tasks (Table 7 in Appendix E). Table 3 shows that both models perform surprisingly well on non-English languages, despite the fact that the training corpus (DCLM) is heavily filtered to contain mostly English.

Cross-lingual generalization within language families. On the multilingual MMLU benchmark (Table 3, right), languages using Latin scripts consistently benefit from byte-level modeling. We observe strong positive transfer between related languages.
For example, Germanic languages such as German, Swedish, and Dutch show an average gain of around +3.0 points, while Romance languages like Italian, Spanish, Portuguese, and French improve by approximately +4.0 points. These results suggest that operating at the byte level allows the model to capture shared orthographic and morphological patterns across related languages.

Table 3 Multilingual evaluation. Left: BLEU scores on the FLORES-200 benchmark across multiple languages. Higher scores indicate better translation quality. Right: MMLU Exact Match (%) across 26 non-English languages. Results are averaged per language across all tasks.

Transfer to low-resource languages. The FLORES-200 benchmark (Table 3, left) includes many regional and low-resource languages that are underrepresented or absent in the training data. This setting allows us to test the model’s ability to generalize based on subword morphology and shared linguistic roots. Byte-level modeling provides the flexibility to construct meaningful representations without requiring the presence of these languages in the tokenizer or training corpus. We observe consistent gains in translation tasks into English, where the model must primarily understand the source language. The advantage is particularly clear for languages that share syntactic or morphological traits with more dominant relatives in the same family. This also highlights the robustness of our model: it can produce meaningful translations even with out-of-vocabulary words or forms unseen during training. In the reverse direction (English to low-resource), generation remains more challenging.

# 4 Related Work

Traditional tokenization methods are important for computational efficiency (Ali et al., 2024; Rajaraman et al., 2024; Gu et al., 2024; Lester et al., 2024), but impose fixed granularities.
Early attempts to overcome this rigidity explored adaptive vocabularies (Zheng et al., 2024), n-gram combinations (Deiseroth et al., 2024), or alternative splitting criteria like entropy (Pagnoni et al., 2024). Our work, AU-Net, advances this by integrating tokenization and representation learning into a multi-level, autoregressive U-Net architecture that operates directly on bytes. This hierarchical, adaptive-pooling design distinguishes AU-Net from prior work. For instance, MegaByte (Yu et al., 2023) introduces a two-stage LLM using local models but with fixed-size token blocks, unlike AU-Net’s input-adaptive pooling. Neitemeier et al. (2025), Byte Latent Transformers (BLT) (Pagnoni et al., 2024), and SpaceByte (Slagle, 2024) also process bytes or use specialized splitting functions. However, they typically aim to replace BPE for a single effective processing stage or use local attention mechanisms. In contrast, AU-Net leverages user-defined splits within a multi-stage architecture featuring distinct pooling strategies that differ from the cross-attention methods in Nawrot et al. (2022) and Pagnoni et al. (2024). Nawrot et al. (2022) defined a similar U-Net architecture, but with fixed pooling and much smaller models, and their evaluations mainly focus on perplexity.
Tokenization imposes a fixed granularity on the input text, freezing how a language model operates on data and how far in the future it predicts. Byte Pair Encoding (BPE) and similar schemes split text once, build a static vocabulary, and leave the model stuck with that choice. We relax this rigidity by introducing an autoregressive U-Net that learns to embed its own tokens as it trains. The network reads raw bytes, pools them into words, then pairs of words, then up to 4 words, giving it a multi-scale view of the sequence. At deeper stages, the model must predict further into the future, anticipating the next few words rather than the next byte, so deeper stages focus on broader semantic patterns while earlier stages handle fine details. When pretraining compute is carefully tuned and controlled, shallow hierarchies match strong BPE baselines, and deeper hierarchies show a promising trend. Because tokenization now lives inside the model, the same system can handle character-level tasks and carry knowledge across low-resource languages.
[ "cs.CL", "cs.AI" ]
# ARTICLE INFORMATION

Article title A Structured Bangla Dataset of Disease-Symptom Associations to Improve Diagnostic Accuracy

Authors Abdullah Al Shafi1, Rowzatul Zannat2, Abdul Muntakim2,\*, Mahmudul Hasan1

# Affiliations

1Institute of Information and Communication Technology, Khulna University of Engineering & Technology, Khulna-9203, Bangladesh

2Department of Computer Science and Engineering, Daffodil International University, Bangladesh

# Corresponding author’s email address and Twitter handle

muntakim.cse@diu.edu.bd

# Keywords

AI in healthcare; Disease classification; Clinical datasets; Medical informatics; Predictive modeling.

# Abstract

Disease-symptom datasets are significant and in demand for medical research, disease diagnosis, clinical decision-making, and AI-driven health management applications. These datasets help identify symptom patterns associated with specific diseases, thus improving diagnostic accuracy and enabling early detection. The dataset presented in this study systematically compiles disease-symptom relationships from various online sources, medical literature, and publicly available health databases. The data was gathered by analyzing peer-reviewed medical articles, clinical case studies, and disease-symptom association reports. Only verified medical sources were included in the dataset; non-peer-reviewed and anecdotal sources were excluded. The dataset is structured in a tabular format, where the first column represents diseases and the remaining columns represent symptoms. Each symptom cell contains a binary value (1 or 0), indicating whether a symptom is associated with a disease (1 for presence, 0 for absence). This structured representation makes the dataset useful for a wide range of applications, including machine learning-based disease prediction, clinical decision support systems, and epidemiological studies.
Although there have been advancements in the field of disease-symptom datasets, there is a significant gap in structured datasets for the Bangla language. This dataset aims to bridge that gap by facilitating the development of multilingual medical informatics tools and improving disease prediction models for underrepresented linguistic communities. Further developments should include region-specific diseases and further fine-tuning of symptom associations for better diagnostic performance.

# SPECIFICATIONS TABLE

# VALUE OF THE DATA

The dataset provides a structured representation of relationships between diseases and symptoms. It contains binary indicators (1 for present, 0 for absent) for every symptom analyzed in relation to a wide variety of diseases, enabling researchers to find patterns of co-occurrence and clusters of symptoms across many diseases. It helps improve diagnostic accuracy and refines symptom-based models for disease classification. The dataset supports the training of machine learning models for disease prediction, symptom analysis, and clinical decision support systems. Various algorithms can be developed on this data for automated diagnosis, symptom-based disease prediction, and personalized medicine. Public health researchers can use this dataset to assess the prevalence of symptoms in different diseases, contributing to the early identification of new health trends and the formulation of health prevention strategies. The data can further support syndromic surveillance systems for the early detection of outbreaks. It can also serve as a benchmark dataset for testing newly developed methodologies in medical data analysis; results can be cross-checked against this dataset for robustness, ensuring accurate symptom-based disease identification.
This dataset can be useful in research at several levels: computational biology, biomedical informatics, and artificial intelligence. Its uniform structure improves interoperability with other healthcare datasets, thereby enabling research across disciplines.

# BACKGROUND

Systematic mapping between diseases and symptoms is critical in both basic research and clinical practice. Accurate datasets of this kind help enhance diagnostic accuracy, support clinical decision systems, and enable personalized treatment. Zlabinger et al. [2] proposed the Disease-Symptom Relation (DSR) collection, which provides graded symptom judgments for diseases. Electronic Health Records (EHRs) have further improved disease-symptom correlations [1], offering large-scale patient data for analysis. Grampurohit and Sagarnal [3] and Rahman et al. [4] used a Kaggle dataset [5] of 4,920 records for predicting 41 diseases. Furthermore, M. M. Rahman et al. [6] translated this dataset into Bangla using the Google Translation API, making it usable for research in localized disease prediction. Even so, Bangla medical datasets remain scarce. Most medical records are unstructured, and the lack of standardized terminology further complicates dataset creation. The limited digitization of health data in Bangladesh also makes the extraction of quality disease-symptom relationships difficult. This work therefore tries to fill this gap by compiling a structured dataset that collates information from varied medical sources. The dataset can support machine learning applications, epidemiological studies, and AI-driven diagnostics. This work addresses the need for structured medical data in Bangla, hence promoting digitization, encouraging standardized medical terminology, and improving healthcare accessibility, disease prediction, and localized medical informatics.
# DATA DESCRIPTION

The dataset is tabular and encodes relationships between diseases and symptoms. It is organized so that the leftmost column lists diseases and the remaining columns list symptoms. Every cell contains a binary value (1 or 0), where 1 signifies that the symptom is associated with the disease and 0 indicates no association. The following files are available in our data repository [7]:

dataset.csv: The raw dataset, containing 1 where a symptom is related to a disease; the remaining cells are left blank.

cleaned_dataset.csv: The complete dataset after the cleaning processes described in the next section.

cleaned_dataset_with_english_translation.csv: The cleaned dataset with the English translation of all the diseases and symptoms.

Table 1 Overview of our dataset

Table 1 presents a quantitative overview of the proposed dataset, emphasizing the number of unique diseases, unique symptoms, and the total number of disease-symptom relationships. The total number of disease-symptom relationships is the sample size, showing the vast number of relationships included in the dataset. This information is crucial for understanding the comprehensiveness of the dataset and its potential for effective predictive modeling.

Table 2 List of the diseases in the proposed dataset.

Table 2 gives a full list of all 85 diseases in our dataset, covering many different medical conditions. We included a wide range of diseases, including infectious diseases, chronic conditions, and rare disorders. This mix ensures that the dataset is representative of many types of illnesses, helping machine learning models trained on it perform well across different medical conditions.

Fig 1. Distribution of diseases in our dataset (Upper: in Bengali, Lower: in English).

Fig. 1 depicts the list of 85 diseases in the database. Among those, the most common are ডেঙ্গু (Dengue) with 41, ডায়েবেটিস (Diabetes) with 25, and আমাশয় (Dysentery) with 22.
It is composed of infectious and long-term diseases. Less common diseases are দাদ রোগ (Ringworm), ওটিটিস মিডিয়া (Otitis Media), and কলেরা (Cholera), with 2 to 3 cases each. The majority of the diseases in the data are infectious, implying the need for public health measures and hygiene. However, the prevalence of chronic diseases like Diabetes implies the need for long-term intervention measures to manage such diseases effectively.

Fig 2. Word cloud of symptom frequencies (left: Bengali, right: English).

The word cloud in Fig. 2 is an illustrative display of symptom frequency and distribution in our dataset. Symptoms are listed on the left in Bengali and on the right in English. More frequent symptoms are rendered larger and more intensely in the word cloud, so one can quickly see which symptoms recur most often. There are 172 distinct symptoms, covering numerous potential signs of various diseases in our data. The size of a word varies with symptom frequency: larger words occur more frequently. This visualization conveys the diversity of symptoms and shows that numerous conditions are taken into account. Given the wide range of symptoms, the dataset covers many diseases, and all of these symptoms need to be studied to predict and diagnose diseases. Some symptoms are very common across many diseases and can be termed common symptoms. Fig. 3 shows the top 20 most common symptoms. The most common is মাথাব্যথা (Headache) with 156 cases. Others include বমি বমি ভাব (Nausea) with 145 cases, বমি (Vomiting) with 144 cases, ক্লান্ত বোধ (Fatigue) with 140 cases, and তীব্র জ্বর (High Fever) with 131 cases. These symptoms occur in various diseases and are thus highly significant for diagnosis but are not specific. Respiratory signs like কাশি (Cough) with 121 cases and শ্বাসকষ্ট (Shortness of Breath) with 117 cases can reflect issues like pneumonia, asthma, or viral infections.
Gastrointestinal signs, including পেট ব্যথা (Stomach Pain), ডায়রিয়া (Diarrhea), and ক্ষুধা কমা (Loss of Appetite), are also common. Because many diseases share these symptoms, prediction models must take into account how symptoms occur together, not individually. To improve accuracy, less common but disease-specific symptoms should also be included. The high occurrence of general symptoms makes disease classification challenging, requiring advanced techniques in predictive modeling.

Fig 3. Top 20 most frequent symptoms.

Table 3 shows a sample portion of our proposed dataset. Each row represents a separate disease, and the columns indicate the symptoms seen in patients with these diseases. There are 172 symptoms listed in all, each measured as either 0 or 1. A 1 means the symptom has been observed, and a 0 means it has not. For instance, the symptom শ্বাসকষ্ট (Shortness of breath) is assigned a 1 if present in a particular disease and a 0 otherwise. Although the data are presented numerically, they are essentially categorical, describing diseases through the qualitative presence or absence of symptoms.

Table 3 A sample portion of our dataset.

# EXPERIMENTAL DESIGN, MATERIALS AND METHODS

The dataset consists of 172 binary features representing the presence or absence of symptoms for various diseases. Fig. 4 shows the whole procedure used to develop the dataset. The steps are given below:

# Data collection and annotation

The raw dataset is stored in the spreadsheet file "dataset.csv" and contains symptom-based diagnostic data for various diseases. The dataset was created by systematically mapping symptoms to diseases based on clinical observations from online resources such as Bangla blogs, Bangla newspapers, online surveys, expert medical knowledge, and available diagnostic guidelines.
The construction process involved the following key steps:

# 1. Symptom Selection

○ A list of common and clinically relevant symptoms was curated from medical literature and expert consultations.

○ Each symptom was chosen based on its association with multiple diseases, ensuring broad coverage.

# 2. Disease Mapping

○ A set of diseases was selected, each associated with a distinct symptom pattern.

○ Symptoms were manually assigned to diseases based on medical diagnosis criteria.

# 3. Binary Encoding

○ Each symptom was assigned the value 1 if it appeared commonly in a disease and 0 otherwise.

○ The information was tabulated into a binary matrix where rows represent diseases and columns represent symptoms.

Fig 4. The procedure to develop the proposed dataset.

# Data Cleaning

A cleaning methodology was put into practice to guarantee the correctness, consistency, and usability of the dataset for machine learning purposes. The dataset contains only two possible values: each symptom is either present or absent. The primary goal of preprocessing was therefore to correct the mapping from symptoms to diseases by removing inconsistencies and to lay out the data for proper use in classification approaches. This ensures that the dataset delivers good-quality input to machine learning models. The first step was to ensure that the encoding of all symptoms was strictly binary. The symptoms were originally recorded in a variety of formats, so standardization was important: 1 indicates the presence of a symptom and 0 its absence. This required careful examination of the raw data to detect and correct any inconsistent or non-binary values. Some records were found to contain values other than 0 or 1; these were corrected to conform to the binary structure or removed entirely to maintain the integrity of the dataset.
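The binary standardization step described above can be sketched as follows; the value sets and the example row are illustrative assumptions, not the authors' actual cleaning script.

```python
# Illustrative sketch of the binary-encoding cleanup (not the authors'
# actual script): blank cells become 0, "1"-like values become 1, and
# anything else is flagged for manual review.
TRUTHY = {"1", "1.0", "yes", "true"}
FALSY = {"", "0", "0.0", "no", "false"}

def normalize_row(row):
    cleaned, flagged = {}, []
    for symptom, value in row.items():
        v = str(value).strip().lower()
        if v in TRUTHY:
            cleaned[symptom] = 1
        elif v in FALSY:
            cleaned[symptom] = 0
        else:
            flagged.append((symptom, value))  # non-binary value: review manually
    return cleaned, flagged

# Hypothetical raw row with mixed formats.
cleaned, flagged = normalize_row(
    {"fever": "1", "cough": "", "nausea": "yes", "headache": "2"}
)
print(cleaned)   # fever and nausea map to 1, cough to 0
print(flagged)   # headache carries a non-binary value and is flagged
```

Flagging rather than silently coercing unexpected values mirrors the manual-review step the article describes.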
Apart from maintaining binary consistency, incomplete records were addressed by filling in gaps where practicable or deleting the records entirely if missing data could not be estimated reasonably. This helped maintain a clean dataset of unambiguous and reliable information. Moreover, several symptom mappings were found to be inconsistent. For example, some diseases included symptoms with no supporting reference in the medical guidelines. This was corrected through a detailed manual review in which each disease's symptoms were cross-checked against genuine medical resources to ensure validity. With inconsistently mapped symptoms updated in this way, the data reflect valid and consistent symptom-disease relationships.

# Feature Reduction and Standardization

Once cleaning was completed, the next step was to eliminate redundant or low-impact symptoms. Some symptoms appeared in almost all diseases and were therefore irrelevant for distinguishing between them. Similarly, some symptoms appeared in too few diseases to be of any use in prediction. These were either eliminated or combined, so only relevant symptoms remained in the dataset. This selection procedure reduced noise and improved the overall accuracy of the classification models. Standardization of the disease labels constituted another important cleaning step. It involved eliminating disparities in disease names, including synonyms, spelling differences, and formatting differences. By giving each disease a single uniform and consistent label, we removed any inconsistency in how diseases are written in the dataset. Standardization makes the data easier to work with and prevents interpretation or training errors in models. The final cleaned and encoded dataset is structured to be consistent and easy to use.
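The frequency-based pruning described above can be sketched as a small filter; the thresholds are invented for illustration, as the article does not specify cutoffs.

```python
def reduce_features(matrix, min_freq=0.02, max_freq=0.95):
    """matrix maps disease -> {symptom: 0/1}. Drops symptoms whose
    frequency across diseases falls outside [min_freq, max_freq].
    Thresholds are illustrative assumptions, not values from the article."""
    diseases = list(matrix)
    symptoms = next(iter(matrix.values()))
    keep = []
    for s in symptoms:
        freq = sum(matrix[d][s] for d in diseases) / len(diseases)
        if min_freq <= freq <= max_freq:
            keep.append(s)
    return {d: {s: matrix[d][s] for s in keep} for d in diseases}

# Toy matrix: "a" is ubiquitous, "c" never occurs; only "b" survives.
toy = {
    "d1": {"a": 1, "b": 1, "c": 0},
    "d2": {"a": 1, "b": 0, "c": 0},
    "d3": {"a": 1, "b": 0, "c": 0},
}
print(reduce_features(toy))
```

Symptoms present in nearly every disease carry almost no discriminative signal, which is the rationale the article gives for removing them.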
There are no missing data points, and every feature is encoded strictly as 0 or 1. The dataset is thus ready for processing with machine learning algorithms without further preprocessing. The binary format also makes the dataset readily usable by clinicians and AI systems, so it can be integrated into clinical decision support systems or diagnostic software without any problems. To evaluate how effective the proposed dataset is, several machine learning models were tested, ranging from Perceptron, Logistic Regression, and Naive Bayes to Decision Trees, K-Nearest Neighbors, Passive Aggressive Classifiers, Random Forest, and Support Vector Machines. Models were compared on disease classification using accuracy, precision, recall, and F1-score. Accuracy was highest (0.97) for Logistic Regression, Perceptron, and Random Forest. The best precision, recall, and F1-score were provided by Logistic Regression, followed by Random Forest. In contrast, the Decision Tree performed poorly across all metrics, with an accuracy of 0.78 and low F1-scores. Some models, such as K-Nearest Neighbors and the Passive Aggressive Classifier, performed well on disease classification and remained comparatively stable throughout. This supports that the dataset is amenable to classification tasks and stable across diverse machine learning applications.

Table 4 Performance of various Machine Learning models on disease classification with the proposed dataset.

# LIMITATIONS

Data were obtained from publicly available online databases, medical journals, and research reports and could thus be biased by the sources selected for the scope and intent of the data. The absence of real-world clinical data, e.g., patient records or hospital databases, limits the dataset's use in a clinical environment.
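To give a flavor of how such classifiers consume the binary matrix, here is a minimal 1-nearest-neighbor classifier under Hamming distance, one simple instance of the K-Nearest Neighbors family mentioned above; the symptom vectors are invented toy data, not rows from the dataset.

```python
def hamming(a, b):
    """Count positions where two binary symptom vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def predict_disease(query, prototypes):
    """Return the disease whose binary symptom vector is closest to the query."""
    return min(prototypes, key=lambda d: hamming(query, prototypes[d]))

# Invented toy vectors over 5 symptoms (fever, cough, nausea, rash, fatigue).
prototypes = {
    "Dengue":    (1, 0, 1, 1, 1),
    "Influenza": (1, 1, 1, 0, 1),
    "Ringworm":  (0, 0, 0, 1, 0),
}
print(predict_disease((1, 1, 0, 0, 1), prototypes))  # -> Influenza
```

Because the features are strictly 0/1, Hamming distance is a natural similarity measure, which is partly why distance-based models such as K-Nearest Neighbors perform stably on this kind of data.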
Although a wide range of diseases and symptoms is included in the dataset, it may not be complete. Rare disorders and region-specific diseases may be underrepresented, as little published literature on them is available. The dataset is also static: it does not update automatically to reflect recent medical developments or newly emerging illnesses.

# ETHICS STATEMENT

No direct interaction with patients or healthcare providers was conducted during data collection. As the dataset is derived from secondary sources, ethical approval and informed consent were not required. All sources used comply with open-access policies or are publicly available for research purposes. Additionally, efforts were made to ensure that the dataset does not contain any sensitive or confidential patient data.

# CRediT AUTHOR STATEMENT

Abdullah Al Shafi: Conceptualization, Data curation, Visualization, Validation, Writing – original draft; Rowzatul Zannat: Methodology, Data curation, Investigation, Writing – original draft; Abdul Muntakim: Supervision, Validation, Writing – review and editing; Mahmudul Hasan: Supervision, Writing – review and editing

# ACKNOWLEDGEMENTS

We would like to express our gratitude to the researchers and medical professionals whose publicly available datasets, studies, and articles contributed to the development of this dataset. Special thanks to the contributors of open-access medical literature and health databases that provided valuable insights into disease-symptom relationships. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

# DECLARATION OF COMPETING INTERESTS

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

# REFERENCES

[1] Arbatti, Lakshmi, Abhishek Hosamath, Vikram Ramanarayanan, and Ira Shoulson.
"What Do Patients Say About Their Disease Symptoms." Deep Multilabel Text Classification With Human-in-the-Loop Curation for Automatic Labeling of Patient Self Reports of Problems (2023).

[2] Zlabinger, Markus, Sebastian Hofstätter, Navid Rekabsaz, and Allan Hanbury. "DSR: A Collection for the Evaluation of Graded Disease-Symptom Relations." In Advances in Information Retrieval: 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14–17, 2020, Proceedings, Part II 42, pp. 433-440. Springer International Publishing, 2020.

[3] S. Grampurohit and C. Sagarnal, "Disease prediction using machine learning algorithms," in 2020 International Conference for Emerging Technology (INCET). IEEE, 2020, pp. 1–7.

[4] M. A. Rahman, T. A. Nipa, and M. Assaduzzaman, "Predicting disease from several symptoms using machine learning approach," International Research Journal of Engineering and Technology (IRJET), vol. 10, no. 7, pp. 836–841, 2023.

[5] Disease dataset (2019). [Online]. Available: https://www.kaggle.com/datasets/kaushil268/disease-prediction-using-machine-learning/data

[6] M. M. Rahman, R. Amin, M. N. K. Liton, and N. Hossain, "Disha: an implementation of machine learning based bangla healthcare chatbot," in 2019 22nd International Conference on Computer and Information Technology (ICCIT). IEEE, 2019, pp. 1–6.

[7] Ratul, Abdullah Al Shafi; Zannat, Rowzatul; Muntakim, Abdul; Hasan, Mahmudul (2025), "A Structured Bangla Dataset of Disease-Symptom Associations to Improve Diagnostic Accuracy", Mendeley Data, V2, doi: 10.17632/rjgjh8hgrt.2
Disease-symptom datasets are significant and in demand for medical research, disease diagnosis, clinical decision-making, and AI-driven health management applications. These datasets help identify symptom patterns associated with specific diseases, thus improving diagnostic accuracy and enabling early detection. The dataset presented in this study systematically compiles disease-symptom relationships from various online sources, medical literature, and publicly available health databases. The data was gathered by analyzing peer-reviewed medical articles, clinical case studies, and disease-symptom association reports. Only verified medical sources were included in the dataset; non-peer-reviewed and anecdotal sources were excluded. The dataset is structured in a tabular format, where the first column represents diseases and the remaining columns represent symptoms. Each symptom cell contains a binary value (1 or 0), indicating whether a symptom is associated with a disease (1 for presence, 0 for absence). This structured representation makes the dataset useful for a wide range of applications, including machine learning-based disease prediction, clinical decision support systems, and epidemiological studies. Although there have been advancements in the field of disease-symptom datasets, there is a significant gap in structured datasets for the Bangla language. This dataset aims to bridge that gap by facilitating the development of multilingual medical informatics tools and improving disease prediction models for underrepresented linguistic communities. Further developments should include region-specific diseases and further fine-tuning of symptom associations for better diagnostic performance.
[ "cs.CL" ]
# 1 Introduction

While the rapid scaling of Large Language Models (LLMs) initially led to promising results across various tasks, the improvements gained from scaling models further are slowing down. Compared to GPT-3 (Brown et al., 2020), GPT-3.5 achieves an approximately 60% improvement (OpenAI et al., 2024a) on MMLU (Hendrycks et al., 2021). The improvement from GPT-3.5 to GPT-4, however, is only approximately 23% (OpenAI et al., 2024a). Scaling test-time compute rather than just models has emerged as an alternative for further improving performance, leading to the rise of AI agents (Yao et al., 2023; Shinn et al., 2023; Wang et al., 2024). AI agents equip LLMs with external tools (Schick et al., 2023) and employ sophisticated planning and reasoning strategies such as ReAct (Yao et al., 2023) or Reflexion (Shinn et al., 2023) to dynamically adjust in uncertain environments. Software Engineering (SE) emerged as a pivotal application domain due to the availability of high-quality data in open-source repositories and because the creation and maintenance of software underpins innovation and economic impact across virtually every sector. SWE-bench (Jimenez et al., 2024) is the industry-standard benchmark for evaluating an agent's programming proficiency by testing its ability to fix bugs in real-world software. This spurred the rapid development of AI agents for programming by major players in the tech tooling ecosystem (Cursor, 2024; Basu et al., 2024; Zakonov, 2025; Microsoft, 2025; Anthropic, 2025). Version Control Systems (VCSs), such as Git, are ubiquitous in SE (Cortés Ríos et al., 2022) and play a pivotal role in building software in distributed teams. It is thus natural to use Git as a medium of collaboration between AI agents and human engineers.
While LLM providers are advertising the Git capabilities of their systems (Anthropic, 2025), there currently exists no benchmark for evaluating an AI agent's capacity to interact with Git in an end-to-end manner. Furthermore, typical Git tasks such as Interactive Rebase (IR) are time-consuming and distinct from raw code generation. IR requires reasoning over the Git history and an in-depth understanding of dependencies between the commits constituting the history. To stimulate innovation in the direction of comprehensive, end-to-end SE AI agents that go beyond mere programming, we introduce a novel benchmark for the popular VCS Git. This comprises a training corpus for collecting agentic trajectories and two evaluation sets (lite and full). The benchmark supports Merge Conflict Resolution (MCR), Interactive Rebase (IR), and the Iterative Committing of Changes (ICC) (Figure 1).

Figure 1: The three Git scenarios supported by GitGoodBench. Each scenario benchmarks a typical Git use-case and a unique aspect of version control. (a) Merge Conflict Resolution: The agent must reproduce the ground-truth merge commit given a set of conflicts. (b) Interactive Rebase: The agent generates an alternative history based on existing commits. (c) Iterative Committing of Changes: The agent generates an alternative history based on a disorganized set of changes. We only use the original commit history for evaluation.

We scrape all data from permissive, open-source, Python, Java, or Kotlin GitHub repositories. Furthermore, we provide a baseline implementation using GPT-4o (OpenAI et al., 2024b) with custom tools, achieving a 21.11% solve rate.

# 2 Related Work

Several benchmarks, such as SWE-bench (Jimenez et al., 2024) or the Konwinski prize (Konwinski et al., 2024), evaluate agentic systems on complex, multi-turn SE tasks sourced from real-world GitHub issues.
While the environment allows Git usage, the evaluation focuses solely on whether the agent resolves the bug rather than how it leverages the VCS. In contrast, our benchmark explicitly measures an agent's proficiency with Git tasks. This allows future research to thoroughly examine and refine VCS-focused strategies in SE agents and to tailor agents to VCS tasks specifically. While previous works on automating or evaluating MCR (Svyatkovskiy et al., 2022; Shen et al., 2023; Boll et al., 2024; Pan et al., 2021) and commit message generation or completion (Jiang et al., 2017; Hal et al., 2019; Eliseeva et al., 2023) exist, they exclusively cater to specific VCS subtasks. In contrast, our benchmark is the first to encapsulate multiple subtasks, such as commit message generation, reasoning across commits, and rebase plan generation, into a single benchmarking scenario. This uniquely positions GitGoodBench for evaluating and training AI agents with expertise in VCS tasks in end-to-end settings.

# 3 GitGoodBench Datasets

We provide GitGoodBench (900 samples) and GitGoodBench Lite (120 samples) for evaluation in comprehensive and rapid-prototyping settings, respectively. The research community recently started investigating SE agents powered by fine-tuned Small Language Models (SLMs) (Pan et al., 2024; Jain et al., 2025; Yang et al., 2025). We believe that trained, on-device-sized agents are an exciting research direction. While we do not train such a model in this work, with GitGoodBench Train (17,469 samples) we release a dataset split dedicated to collecting trajectories for training Git agents.

(a) Repository metadata filters we use for selecting the initial repositories we consider in the benchmark creation. We consider the following licenses permissive: MIT, Apache 2.0, BSD 3-Clause “New” or “Revised”, BSD 2-Clause “Simplified”. (b) Scenario-level filters for selecting scenarios to include in our benchmark.
Table 1: Filters for selecting repositories and scenarios to include in our benchmark.

# 3.1 Supported Scenarios

Our benchmark covers the following three types of Git scenarios: Merge Conflict Resolution The agent must resolve all merge conflicts by reproducing the ground truth resolutions (Figure 1a). Interactive Rebase In this scenario (Figure 1b), the agent must reason across commits and their contents to determine the optimal ordering of commits, thereby improving the Git history. This includes commit consolidation or modification and commit message refinement. Iterative Committing of Changes This scenario type (Figure 1c) is the inverse of IR. Instead of optimizing existing commits, the agent must generate a reasonable Git history from a large, disorganized set of changes. With these scenario types we cover non-trivial Git functionalities central to common Git workflows (Cortés Ríos et al., 2022). Moreover, we explicitly cover functionality currently only implemented interactively in Git (e.g., git rebase -i or git add -p). Agents are highly applicable for such iterative tasks that depend on environment observations. However, interacting with such functionality is challenging for agentic systems because these functions do not provide immediate feedback and instead wait for user input. This introduces friction into the typical plan-act-observe loop of AI agents, due to delayed feedback not easily captured by usual pipelines.

# 3.2 Dataset Creation

We collect repository metadata from repositories with permissive licenses using SEART (Dabic et al., 2021) and the metadata filters defined in Table 1a. The scenarios for IR and ICC are represented by the same samples in our dataset (i.e., with one sample, we can evaluate both IR and ICC). We call these samples File-Commit Chain (FCC) samples; they refer to chains of commits in Git histories in which we observe consecutive modifications of a single file.
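The FCC notion above can be made concrete with a short sketch. Note this is an illustration, not the authors' implementation: the `(sha, modified_files)` input shape and the minimum chain length are our assumptions.

```python
def mine_fccs(commits, target_file, min_len=3):
    """Collect File-Commit Chains (FCCs): maximal runs of consecutive
    commits that all modify `target_file`. `commits` is a list of
    (sha, modified_files) pairs in history order; `min_len` is an
    assumed threshold, not a value from the paper."""
    chains, run = [], []
    for sha, files in commits:
        if target_file in files:
            run.append(sha)
        else:
            if len(run) >= min_len:
                chains.append(run)
            run = []
    if len(run) >= min_len:
        chains.append(run)
    return chains

history = [
    ("c1", ["app.py"]), ("c2", ["app.py"]), ("c3", ["app.py"]),
    ("c4", ["README.md"]),                   # breaks the chain
    ("c5", ["app.py"]), ("c6", ["app.py"]),  # run too short to qualify
]
assert mine_fccs(history, "app.py") == [["c1", "c2", "c3"]]
```

A chain like `["c1", "c2", "c3"]` then serves as one sample for both the IR scenario (reorganize the existing commits) and the ICC scenario (recommit the combined changes from scratch).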
We use this as a heuristic for identifying Git histories that may be improved through reordering or consolidating commits. These samples target the use cases of (1) cleaning up the local Git history before pushing new commits to the remote (e.g., git rebase -i HEAD~5), and (2) constructing a clean Git history given a set of changes, for the IR and ICC scenarios, respectively. To tailor these samples toward evaluating an aspect of Git distinct from MCR, we remove merge commits from FCCs. This allows us to evaluate the system’s understanding of the rebase-todo and of relationships between commits. We then mine the Git history of these repositories for merge and FCC samples and apply our scenario-level filters (Table 1b) to obtain 6,917 merge samples and 11,572 FCC samples. To ensure a diverse benchmark, especially concerning represented repositories, we partition our data into strata based on the following features before sampling to construct our benchmark. File-Commit Chain Samples For these samples, we use the project size (in lines of code) and the repository name for stratification. Merge Conflict Resolution Samples In addition to the above, we stratify on the difficulty of these samples. We define MCR difficulty based on the number of conflicts and their distribution across files. To determine conflicts, we run git show --remerge-diff <merge-commit> and identify conflicts through Git merge conflict markers. We consider scenarios with a single conflict “easy” because no reasoning across diffs is necessary, those with multiple conflicts in a single file “medium” because reasoning across diffs in the context of a single file is required, and all others, for which the agent must reason across multiple diffs and files, as “hard”. To construct the held-out test sets, we sample 120 scenarios for GitGoodBench Lite and 900 for GitGoodBench. We stratify the sampling for scenario type and Programming Language (PL). The remaining samples yield GitGoodBench Train.
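The three-way MCR difficulty rule described above translates directly into code; in this sketch, the per-file conflict counts are assumed to have already been extracted by parsing the conflict markers in the `git show --remerge-diff` output.

```python
def mcr_difficulty(conflicts_per_file):
    """Classify a Merge Conflict Resolution sample.
    `conflicts_per_file` maps file path -> number of conflict regions.
    easy:   a single conflict (no reasoning across diffs needed)
    medium: several conflicts confined to one file
    hard:   conflicts spread across multiple files"""
    files_with_conflicts = [f for f, n in conflicts_per_file.items() if n > 0]
    total = sum(conflicts_per_file.values())
    if total == 1:
        return "easy"
    if len(files_with_conflicts) == 1:
        return "medium"
    return "hard"

assert mcr_difficulty({"a.py": 1}) == "easy"
assert mcr_difficulty({"a.py": 3}) == "medium"
assert mcr_difficulty({"a.py": 1, "b.py": 2}) == "hard"
```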
All three datasets are mutually disjoint. For further details, see Appendix A.

# 3.3 Metrics

We present the results of our baseline in terms of success and solve rate (both expressed as percentages). The success rate refers to scenarios for which our system did not cause an error (e.g., because a patch cannot be applied in MCR). Below, we define the solve rate for each scenario: File-Commit Chain Samples For FCC scenarios we prompt an LLM to judge the agent-generated and ground truth Git histories using the LLM-as-a-Judge (Zheng et al., 2023) approach. We opt for this approach instead of Exact-Match (EM), because there is no clear, deterministic way to define what constitutes a superior Git history. Following Zheng et al. (2023), we judge each pair of Git histories twice while switching the positions of the histories in the same prompt template to account for position bias. We prompt the judge to base its decision on (1) the quality of the commit messages considering the contents of the commit, (2) the cohesion of changes within the commits, (3) a logical progression of changes across commits, and (4) the size of commits. If the judge chooses the agent-generated over the ground truth Git history in both cases, we count a sample as solved. For details on the prompt, see Appendix B.4. Table 2: Success and solve rates (%) by scenario type, rounded to two decimal places. We observe the high complexity of the proposed benchmark, even given the strong baseline model and custom environment tools. Table 3: Success and solve rates (%) by difficulty for MCR samples, rounded to two decimal places. GitGoodBench Lite contains 31 ($\approx 52\%$) easy, 13 ($\approx 22\%$) medium, and 16 ($\approx 27\%$) hard samples. Merge Conflict Resolution Samples Because an exact ground truth solution is available, we use EM between the ground truth solution and the agent’s solution for evaluating MCR.
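The position-bias control in the FCC judging procedure can be sketched as follows, where `judge(first, second)` stands in for an LLM call that returns which of the two presented histories it prefers; this interface shape is our assumption, not the paper's actual prompt API.

```python
def fcc_solved(judge, agent_history, ground_truth):
    """Count a sample as solved only if the judge prefers the
    agent-generated history in BOTH orderings of the prompt,
    cancelling out position bias."""
    wins_as_first = judge(agent_history, ground_truth) == "first"
    wins_as_second = judge(ground_truth, agent_history) == "second"
    return wins_as_first and wins_as_second

# A judge that always prefers the first position never lets a sample
# count as solved, since its two verdicts contradict each other:
position_biased = lambda a, b: "first"
assert not fcc_solved(position_biased, "agent history", "ground truth")

# A content-sensitive stub judge must prefer the agent history twice:
prefers_clean = lambda a, b: "first" if "clean" in a else "second"
assert fcc_solved(prefers_clean, "clean history", "messy history")
```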
# 4 Environment

As a baseline, we evaluate GPT-4o (OpenAI et al., 2024b) on GitGoodBench Lite and the tasks defined in Section 4.1 using the metrics in Section 3.3. While we do not use an agentic reasoning framework (Yao et al., 2023; Shinn et al., 2023; Wang et al., 2024), we do equip the LLM with one possible set of custom tools (Section 4.2).

# 4.1 Provided Context

Interactive Rebase In the initial context, we provide all changes in all commits participating in the IR, few-shot function-calling examples, and an explanation of valid commands for the rebase-todo file. We initiate the IR covering all commits in the FCC before launching the agent. Iterative Committing of Changes We provide all Git-generated hunks that the agent must process, in addition to few-shot function-calling examples in the initial context. After each commit, we automatically show the agent the updated list of remaining hunks. We limit the agent’s selection of hunks to hunks originating from the file for which we mined the FCC and commit all other changes in a single commit after the agent terminates. Merge Conflict Resolution The initial context includes the temporal ordering of the commits being merged, the names of all files with conflicts, and all merge conflicts the agent must resolve, as well as few-shot function-calling examples.

# 4.2 Provided Tools

Initially, we experimented with minimalistic tooling, simply giving the LLM terminal access in a sandbox environment. However, preliminary results indicated that the system is unable to make any meaningful progress in this setup. In particular, it struggled with interactive Git functionality (Section 3.1). Because of this, we opt for the strong scaffolding detailed below. Interactive Rebase We implement tools for viewing the contents of commits and interacting with the rebase-todo list, a file that specifies how Git should carry out the IR.
Iterative Committing of Changes With our tooling for this scenario type, the agent selects any number of Git-generated hunks to group into a single commit. Merge Conflict Resolution To foster coherent, conflict-spanning resolutions, we provide tools for viewing individual merge conflicts, complete files, or the overall difference between commits being merged. Our tooling limits the agent to sequentially resolving conflicts. It may only specify a patch for resolving the current conflict.

# 5 Baseline Results

In Table 2, we see that our baseline implementation succeeds in $88\%$ and solves $21.11\%$ of scenarios in GitGoodBench Lite overall. Even with significant scaffolding support, the LLM is unable to solve the majority of tasks in our benchmark. This highlights the need to explicitly consider Git use-cases when engineering and training SE agents. For both IR and ICC scenarios, our system achieves higher success and solve rates than for MCR scenarios (Table 2). We partially attribute this to the stricter scaffolding for these two scenarios. In MCR scenarios, the agent must generate code that can be applied at the location of the conflict to solve the conflict. Especially in scenarios which require the agent to make globally consistent conflict resolution choices (i.e., medium and hard samples in Table 3), the system’s performance rapidly deteriorates. In FCC-based scenarios, the agent must simply select a set of hunks to commit for ICC scenarios or modify the rebase-todo file through a tool for IR scenarios. This indicates that the failure rate of agentic systems interacting with Git increases as the level of technical abstraction from Git decreases. We do, however, note that some amount of this performance degradation may also be due to the stricter EM evaluation metric used for MCR scenarios. Regarding the difficulty heuristic for MCR, we note that it accurately captures a sample’s complexity regarding the solve rate. Easy samples have an approximately $3\times$ higher solve rate than hard samples. Furthermore, the scenarios based on FCC samples (IR and ICC) result in similar success and solve rates. This indicates that our LLM-as-a-Judge evaluation methodology is consistent in assessing similar Git histories and is thus a suitable choice. Our difficulty heuristic for IR and ICC scenarios did not correlate with the observed difficulty; for details, see Appendix A.2.3.
Benchmarks for Software Engineering (SE) AI agents, most notably SWE-bench, have catalyzed progress in programming capabilities of AI agents. However, they overlook critical developer workflows such as Version Control System (VCS) operations. To address this issue, we present GitGoodBench, a novel benchmark for evaluating AI agent performance on VCS tasks. GitGoodBench covers three core Git scenarios extracted from permissive open-source Python, Java, and Kotlin repositories. Our benchmark provides three datasets: a comprehensive evaluation suite (900 samples), a rapid prototyping version (120 samples), and a training corpus (17,469 samples). We establish baseline performance on the prototyping version of our benchmark using GPT-4o equipped with custom tools, achieving a 21.11% solve rate overall. We expect GitGoodBench to serve as a crucial stepping stone toward truly comprehensive SE agents that go beyond mere programming.
[ "cs.SE", "cs.AI" ]
# 1 INTRODUCTION

Database Management Systems (DBMSs) are large, complex, and fundamental software systems. Unsurprisingly, they are prone to bugs. Various approaches have been proposed to detect logic bugs in them using automated testing [13, 26–28, 30, 32]. They primarily tackle the so-called test-oracle problem by validating whether a DBMS operates as expected, for example, by deriving an equivalent query from a given input query and checking whether the DBMS produces consistent results [3, 26, 27, 40]. Such test oracles have successfully identified hundreds of bugs in popular DBMSs like SQLite, MySQL, and PostgreSQL. Various other approaches have been proposed for fuzzing DBMSs; however, as they lack test oracles, they miss logic bugs that do not also manifest as crashes [9, 38, 43].

# Listing 1: An example bug in CrateDB caused by the scalar function ARRAY_POSITION.

1 CREATE TABLE t0(c0 INT, c1 ARRAY(STRING));
2 INSERT INTO t0(c0, c1) VALUES (1, ['a', 'b']);
3
4 SELECT c0 FROM t0
5 WHERE (c0 != ARRAY_POSITION(t0.c1, 'c', 1));
6 {1} {}

Most automated DBMS testing tools automatically derive SQL test cases. They can be broadly classified into generation-based ones, which generate test cases from scratch using rule-based generators, and mutation-based ones, which mutate existing SQL statements. More than a thousand DBMSs exist, and automatically applying existing testing tools to all of them is challenging. The key challenge is that DBMSs’ dialects differ in both syntax and semantics, including functions and data types, on which existing automated testing approaches are based. To test dialect-specific features, both mutators and generators would ideally generate or mutate instances of these features. Mutation-based test case generation [16, 22] depends on high-quality seed inputs, which may come from test suites. However, the validity rates of executing the test suite across different DBMSs are often low due to significant dialect differences [44].
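To illustrate the consistency-checking oracles mentioned above, here is a small, simplified sketch in the spirit of the cited query-partitioning approaches, using SQLite as the system under test; it is not any tool's actual implementation. The rows matching a predicate, its negation, and its NULL case must together reproduce the unfiltered result.

```python
import sqlite3

def partition_oracle_holds(conn, table, predicate):
    """The union of WHERE p, WHERE NOT p, and WHERE p IS NULL must
    equal SELECT * with no filter; a mismatch signals a logic bug
    rather than a crash. Rows are sorted via repr() so that NULLs
    (Python None) compare without errors."""
    cur = conn.cursor()
    whole = sorted(cur.execute(f"SELECT * FROM {table}").fetchall(), key=repr)
    parts = []
    for p in (predicate, f"NOT ({predicate})", f"({predicate}) IS NULL"):
        parts += cur.execute(f"SELECT * FROM {table} WHERE {p}").fetchall()
    return whole == sorted(parts, key=repr)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t0 (c0 INT)")
conn.executemany("INSERT INTO t0 VALUES (?)", [(1,), (2,), (None,)])
assert partition_oracle_holds(conn, "t0", "c0 > 1")
```

The value of such an oracle is that it needs no knowledge of the query's intended result: any predicate over any table yields a self-checking test case.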
Besides, mutation-based testing tools are difficult to employ to find logic bugs since the test oracle imposes strict input constraints. Conversely, generation-based methods [26–28] require significant human effort to create DBMS-specific generators. SQLancer++ represents an initial attempt to address these challenges [45]. It consists of an SQL generator that infers which features are supported by a given DBMS. It can be used on different DBMSs regardless of SQL dialects without modifications to its source code. For a specific feature (e.g., a function or an operator), SQLancer++ executes a sufficient number of test cases and then infers the probability of the feature being supported based on the execution status of the statements containing it. However, SQLancer++’s ability to find bugs is constrained by the initial set of SQL features it covers, potentially missing unique or newly released features of a DBMS. For example, it only supports three data types and 58 functions, and it would be difficult to maintain a generator that supports most features of most DBMSs. Listing 1 demonstrates one bug in CrateDB that SQLancer++ failed to find since it lacks support for the nested data type ARRAY(STRING) and the incorrectly implemented function ARRAY_POSITION. Supporting new DBMS-specific features is a labor-intensive process that requires expertise in both the DBMS and the test case generator. Thus, an automatic method to integrate these features into the generator is essential. Recent progress on Large Language Models (LLMs) suggests the possibility of using them as DBMS test-case generators. First, prior research has applied LLMs to data management, such as in the context of text-to-SQL [10, 19] and query rewriting [21, 23]. LLMs have also been applied to testing in a variety of domains, including general-purpose testing [34], compiler testing [35], and testing of deep learning libraries [6].
However, several challenges persist when applying LLMs to test DBMSs. First, the throughput of LLM-based generation is significantly lower than that of traditional query generators. For instance, LLM-R$^2$ [21] leverages LLMs to rewrite queries, resulting in an average latency exceeding one second per query on each of its benchmarks, while tools like SQLancer can generate thousands of queries per second. Second, the cost of employing LLMs for testing is high. Utilizing an LLM typically requires access to a powerful GPU server or incurs significant expenses through API services (e.g., GPT-4o costs 15 US dollars for every 1 million tokens). As a result, integrating LLMs into the CI/CD pipelines for DBMS development is neither straightforward nor cost-effective. Third, current LLMs suffer from hallucination [14, 41], producing unreliable outputs—for example, they cannot be trusted as a test oracle to validate the query results. In summary, the inherent inefficiency, expense, and unreliability of current LLMs pose significant challenges to their application in the testing domain. In this paper, we propose ShQveL, a technique that enables existing SQL generators to incorporate features via LLMs. Our core insight for achieving efficient SQL generators that can generate DBMS-specific SQL features is to leverage LLMs by permanently integrating the LLM-generated contents and automatically validating them. After an initial learning phase, we can disable LLM interactions and thus achieve an efficiency comparable to manually-written generators. A key challenge is to design the approach such that LLMs can be used to determine DBMS-specific features, and to subsequently validate and integrate this knowledge into the SQL generators. To bridge the gap between the LLM-generated contents and generator source code, we propose the concept of SQL sketching.
An SQL sketch is a template of SQL statements generated by the original SQL generator with incomplete segments as placeholders, or holes. The LLM can fill these holes with SQL fragments containing DBMS-specific features. Subsequently, the fragments are integrated into the generators. During learning, LLMs complete the sketch based on their pre-trained knowledge and DBMS documentation through in-context learning. To impose constraints on the LLM, we prompt it with a SQL sketch that implicitly constrains feature usage (e.g., the SQL sketch SELECT 1 ?? 1 indicates that the operator should have two integer operands). After receiving the LLM’s response, our approach validates the complete SQL fragment and subsequently uses it during the testing process. We implemented ShQveL based on SQLancer++ and evaluated it on CrateDB, CockroachDB, DuckDB, MonetDB, and TiDB. We found and reported 55 previously unknown and unique bugs, showing the effectiveness of our approach. Of these, 50 bugs have been fixed, demonstrating that the developers considered the bugs important. Our goal was not to outperform the manually-written generators, but to find bugs in DBMSs of different dialects without investing any implementation effort. Despite this, the LLM-synthesized fragments can significantly increase the testing effectiveness in terms of code coverage: we achieved comparable performance on PostgreSQL and SQLite, and, on DuckDB, observed increases of $45\%$ compared with SQLancer++ and $30\%$ compared with SQLancer. ShQveL can learn features efficiently and economically, as it learned around 400 fragments for each DBMS in six hours at a cost of less than 1 USD per DBMS. We believe that our approach is both practical and scalable. The learned fragments can persist across executions and can be reused even across different systems.
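As a toy illustration of this persistence idea (the data structure is our assumption, not ShQveL's actual implementation), validated fragments can be stored as additional alternatives in a generator's choice rules, so that later test generation is plain random choice with no LLM in the loop:

```python
import random

class ChoiceRule:
    """A toy generator rule: a hole is backed by a list of learned
    fragments. Validated LLM fragments are appended once and persist,
    so generation after the learning phase needs no LLM calls."""
    def __init__(self, fragments):
        self.fragments = list(fragments)

    def learn(self, new_fragments):
        for frag in new_fragments:
            if frag not in self.fragments:
                self.fragments.append(frag)

    def generate(self):
        return random.choice(self.fragments)

# Built-in operators plus two fragments "learned" from the LLM:
comp_op = ChoiceRule(["!=", ">"])
comp_op.learn(["<>", ">="])
assert set(comp_op.fragments) == {"!=", ">", "<>", ">="}
assert comp_op.generate() in comp_op.fragments
```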
Once a sufficient number of features has been learned, the learning process can be stopped, leading to more efficient test case generation, especially in resource-limited environments (e.g., CI runs). Additionally, our feedback mechanisms based on SQLancer++ validate code fragments without human supervision. In general, this approach makes the testing process more efficient, economical, and manageable. In summary, we make the following contributions:

• We propose SQL sketching as a general notion for integrating LLM-generated content with generation-based DBMS testing.
• We propose a testing approach, ShQveL, which leverages LLMs to persist and integrate DBMS-specific features into the generator.
• We implemented and evaluated the approach, which has found 55 unique, previously unknown bugs in widely used DBMSs.

# 2 BACKGROUND

In this section, we introduce preliminaries for generating SQL test cases in a dialect-agnostic manner. SQLancer++. Although the idea proposed in this paper is general, our implementation is based on SQLancer++, which is an automated platform for testing DBMSs to find logic bugs, aiming to test a wide range of DBMSs with different SQL dialects. Its core component is an adaptive SQL generator that, during execution, dynamically infers which of a set of predefined SQL features—elements or properties in the query language that can be specified at different levels of granularity—are supported by a given DBMS. Specifically, a feature might be a specific keyword or statement. It can also refer to a class of operators or functions, for example, the null-safe operator <=> of MySQL. To infer whether this feature is supported by the system under test, during execution, SQLancer++ generates a sufficient number of SQL statements containing <=>.
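This kind of support inference can be sketched in a few lines; the following is a simplified illustration using SQLite as a stand-in target and our own probe statements, not SQLancer++'s actual implementation:

```python
import sqlite3

def support_probability(statements):
    """Execute several statements exercising one feature against a
    scratch database and return the fraction the DBMS accepts."""
    conn = sqlite3.connect(":memory:")
    ok = 0
    for stmt in statements:
        try:
            conn.execute(stmt)
            ok += 1
        except sqlite3.Error:
            pass
    return ok / len(statements)

# SQLite accepts the `IS` operator, but rejects MySQL's
# null-safe comparison operator `<=>` as a syntax error:
assert support_probability(["SELECT 1 IS 1", "SELECT NULL IS 1"]) == 1.0
assert support_probability(["SELECT 1 <=> 1", "SELECT NULL <=> 1"]) == 0.0
```

In the real system the estimate is updated adaptively during execution; here it is a one-shot ratio for clarity.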
Subsequently, SQLancer++ estimates the probability of the feature being supported based on the execution status of these statements. SQLancer++ supports only a manually maintained and limited set of common SQL features across DBMSs. It is feasible to add more features for one specific DBMS; however, this requires a high implementation effort, and it would be infeasible to add and maintain dialect-specific features of many DBMSs.

# 3 MOTIVATION

To understand the rationale behind our approach, we first demonstrate the challenges of using LLMs in DBMS testing and the limitations of two naive approaches: (1) generating test cases directly with the LLM (see Section 3.1) and (2) generating test generators with the LLM (see Section 3.2). Given the LLMs’ proficiency in SQL tasks like text-to-SQL, a straightforward approach is to have them generate SQL statements for fuzzing, as described in a recent blog post. For example, we prompt the LLM to generate SQL statements based on the corresponding documentation and execute the generated statements on the target systems. LLMs are also advanced at code generation (e.g., Copilot [12]). Guiding them to synthesize SQL generators automatically is another potential approach. The throughput of query generation might be higher, but any error in the LLM’s output would render the whole generator unusable. We chose DuckDB v0.7.1, an older version, as our target; it is a widely used embedded relational DBMS that has been extensively tested [8, 27].

Table 1: Comparison of the LLM-based generator and the manually written generator, each executed on DuckDB for 6 hours.

# 3.1 LLM-based Test Generation

Methodology. We first demonstrate the potential and challenges of using LLMs to directly generate test cases by applying Fuzz4All [34] to test DuckDB.
Fuzz4All is a state-of-the-art universal fuzzer that leverages LLMs (e.g., StarCoder [20]) to target a variety of input languages, including C/C++ and Java, although it was not specifically evaluated on SQL. We executed both Fuzz4All and SQLancer++ continuously for 6 hours on a GPU server, comparing their test efficiency and test case validity rates. For Fuzz4All, we configured the system with a summary of the DuckDB documentation and an example prompt while keeping all other settings at their defaults. We executed both systems in a single-threaded manner for comparability. Results. Table 1 summarizes the results of our experiments. First, SQLancer++ generated $229\times$ more SQL statements, as invoking the LLM for every SQL statement is inefficient. Second, we observed a higher validity rate for SQLancer++, although it also fails to achieve a $100\%$ validity rate, because certain features can trigger expected errors due to semantic constraints (e.g., adding two very large integers may overflow and result in an error). Third, SQLancer++ reported 156 bug-inducing test cases with 2 unique bugs, compared to none discovered by the LLM-based approach. However, the LLM-based method achieved a higher branch coverage, likely because it can generate DuckDB-specific SQL features, while SQLancer++ was designed and implemented to support mostly common ones. While exercising code is a necessary condition for finding a bug, it is not sufficient; in practice, only specific test cases might trigger it, which is why a high throughput is also necessary [4]. LLM-generated SQL test cases incorporating DBMS-specific features can achieve higher coverage, while manually written generators with higher throughput can detect more bugs.

# Listing 2: A random DuckDB CREATE TABLE generator generated by ChatGPT o1. We shortened variable names and omitted helper functions for simplicity.
1 def rand_constr(col_name, existing_cols):
2     constrs = []
3     col_constrs = [
4         f"PRIMARY KEY",
5         f"UNIQUE",
6         # Issue 1: Value is fixed
7         f"CHECK ({col_name} > 0)",
8         f"CHECK ({col_name} <> (11))"
9     ]
10    use_col_constraints = random.choice([True, False])
11    if use_col_constraints:
12        constrs.append(random.choice(col_constrs))
13    if existing_cols and random.choice([True, False]):
14        referenced_col = random.choice(existing_cols)
15        # Issue 2: SQL syntax is incorrect
16        # Issue 3: Table name is hardcoded
17        constrs.append(f"FOREIGN KEY ({col_name}) REFERENCES some_other_table({referenced_col})")
18    return " ".join(constrs)
19
20 def generate_create_table_statement():
21     # Generate columns_sql with rand_constr
22     # ...
23     # Construct the SQL statement and return it
24     return f"CREATE TABLE {table_name} (\n{columns_sql}\n);"

# Listing 3: An example statement generated by the synthesized generator.

1 CREATE TABLE TABLE_QBLVC (
2     COL_EQVWT BIGINT CHECK (COL_EQVWT <> (11)),
3     COL_KKXBL DATE FOREIGN KEY (COL_KKXBL) REFERENCES SOME_OTHER_TABLE (COL_EQVWT)
4 );

# 3.2 LLM-based Generator Generation

Methodology. We employed LLMs to synthesize a statement generator for DuckDB to explore the potential of efficient test case generation. We assume a developer might build a fuzzer for DuckDB to continuously test their system (e.g., in a CI/CD process). We used ChatGPT o1—one of the most advanced models for code—to synthesize a program capable of generating random CREATE TABLE statements with the following prompt: “Act as an advanced Python and database developer. Please generate a Python script that, upon execution, randomly creates a DuckDB CREATE TABLE statement.” Additionally, we provided the relevant DuckDB documentation to guide the model in generating DBMS-specific content. Results.
Listing 2 shows the code snippet ChatGPT synthesized, and Listing 3 shows an example of its output. While ChatGPT’s output is almost correct, we observed that hallucinations in the LLM’s output can cause bugs. For example, lines 13–17 of Listing 2 incorrectly apply a FOREIGN KEY constraint as a column constraint in DuckDB, which may result in a syntax error when executing the generated SQL statement (see Listing 3). Although we supplied DuckDB’s documentation, ChatGPT still produced incorrect code, likely due to difficulties in parsing highly structured languages [33]. Such errors could potentially render the whole testing tool unusable if not manually fixed. An additional challenge is how to assemble multiple such generators into a complete automated testing tool while addressing concerns such as schema management, value diversity, and statement dependencies. First, LLMs struggle to maintain consistent schema references. For example, LLMs might not reference tables created by the CREATE TABLE statements. In line 17 of Listing 2, the generator references a non-existent table, some_other_table. Second, LLM outputs are often concrete; line 7 of Listing 2 compares a column’s value to 0, while it may overlook boundary values (e.g., the maximum integer value) that may trigger bugs. Third, statements must be executed in the correct order; for example, CREATE TABLE should precede INSERT. While advanced approaches in LLM agents for general coding tasks [37, 42] might address some of these issues, additional manual effort will likely still be required, and subtle errors could affect the effectiveness and efficiency. LLMs struggle to synthesize SQL generators correctly, as they still face internal hallucination issues and other challenges—schema handling, generating diverse and valid constants, and statement scheduling—which together lead to suboptimal accuracy and reliability in the generated queries.
# 4 SHQVEL

We propose ShQveL, an approach to test DBMSs supporting a multitude of SQL dialects through LLM guidance. The core idea behind our approach is to use LLMs to extract knowledge of different dialects’ SQL features from the corresponding DBMS documentation and then integrate this knowledge into SQL generators automatically. To avoid the high cost and low throughput of directly instructing the LLM to generate SQL tests (see Section 3.1), we persist knowledge gained through LLM interactions; after a user has decided that a sufficient number of features has been learned, no further LLM invocations are required. Due to hallucinations and other limitations (e.g., limited context length) of current LLMs, directly synthesizing SQL generators would result in errors, requiring manual intervention (see Section 3.2). Rather, we propose SQL sketches for generator augmentation, in which a base generator generates common SQL statements, some of which contain holes to be filled by the LLM with code fragments of DBMS-specific features. The synthesized fragments are validated by a self-validation mechanism to eliminate invalid fragments caused by LLM hallucinations. To address the challenge that LLM-synthesized fragments are unaware of database objects and lack randomness, we provide the LLM with schema information and random literal generators as part of the prompt. System overview. Figure 1 shows an overview of ShQveL. During the execution of ShQveL, the generator is either instructed to initiate the learning phase, aiming to learn new features, in which case ShQveL generates a SQL sketch (see step ①), or it generates test cases aiming to find bugs in the DBMS (see step ⑤). In the learning phase, ShQveL generates an SQL sketch to learn new features (see step ②). One SQL sketch can contain multiple incomplete segments.
Each incomplete segment ??—referred to as a hole—can be filled with DBMS-specific feature fragments (e.g., SQL keywords, constants, or expressions). ShQveL then invokes an LLM to complete the SQL sketch, using the target DBMS documentation and few-shot examples as guidance. The LLM fills the holes with fragments, and subsequently, we execute the fragments, which form complete SQL statements, on the system for validation (see step $\textcircled{3}$ ). If successful, we mark the fragments as valid; otherwise as invalid. In step $\textcircled{4}$ , ShQveL updates the generator using valid fragments. Based on the updated generator, ShQveL keeps generating SQL statements to test the target system (see step $\textcircled{5}$ ). # 4.1 Sketch Generation In step $\textcircled{1}$ , ShQveL generates SQL sketches for learning and integrating specific SQL features into the generator. Similar to Listing 2, each generator consists of manually-written source code that includes predefined code fragments, such as SQL keywords or random strings, and when the generator is executed, it generates concrete SQL statements. To create a SQL sketch, ShQveL inserts a hole at predefined locations in the generated SQL statements for the feature to be learned. For example, as shown in Figure 1, when the generator needs to generate a comparison operator, it outputs the placeholder ?? instead of a concrete operator. The generator thus generates an incomplete SELECT statement (e.g., SELECT COL ?? 1 FROM TAB), which the LLM can subsequently complete by synthesizing one of the potentially available operators. Sketch Design. To increase the probability of synthesizing uncommon features that are potentially rarely tested, we use sketch generation rules for each level of SQL features—statement, clause, datatype, and expression—to ensure LLMs focus on one feature at each invocation. Feature-specific sketches focus the LLM’s search space. 
If sketches were generated by randomly inserting holes into random SQL statements, LLMs would mostly synthesize semantically correct but common fragments that are less likely to trigger bugs. We detail the sketch synthesis implementation for the different levels in Section 5.1. Context statements. In addition to statements with holes, a SQL sketch also includes complete SQL statements that build context. Context statements provide the LLM with schema information for synthesizing fragments related to database states. For example, a SELECT statement querying an existing table typically requires a preceding CREATE TABLE statement. In Figure 1, where ShQveL is learning the comparison operator feature, the SELECT statement with a hole follows the CREATE TABLE and INSERT statements, which establish the database state and insert sample data. This context helps convey to the LLM that COL is an integer column and TAB is a table with one column. Conversely, when LLM-synthesized fragments contain COL or TAB, the generator can identify them as database objects. Consequently, a concrete column reference in a fragment can be abstracted, so that, during testing, a column reference available in the current schema is referenced instead. As a result, fewer potentially ambiguous natural-language instructions are needed in the prompt design, and the completed sketch can be executed directly to validate that the filled fragments are correct.

Figure 1: Overview of ShQveL: ① generate SQL sketches, ② synthesize fragments via an LLM, ③ validate the fragments via the DBMS, ④ update the generator, and ⑤ generate test cases.

# 4.2 Fragments Synthesis

In step ②, ShQveL guides the LLM to fill the holes in the sketches using in-context learning [5]. Two challenges arise when designing the synthesis prompts. First, the LLM's training data might lack less common SQL dialects or recent SQL features. Second, LLMs tend to generate outputs based on common patterns seen in their training data. As further detailed in this section, ShQveL uses Retrieval-Augmented Generation (RAG) techniques to address the first challenge, and incorporates random literal generators to address the second challenge. Specifically, ShQveL prompts the synthesis LLM with up-to-date DBMS information through the documentation of DBMSs based on RAG techniques. Additionally, ShQveL exposes an interface to the LLM that allows it to generate random literals to increase the exploration of interesting and diverse test cases. Prompt design. We first illustrate the overall design strategies for synthesizing code fragments for SQL sketches using LLMs, with further details presented in Section 5.2. Each prompt for synthesizing fragments for a hole consists of natural-language instructions, a SQL sketch, examples, the DBMS name, and its potential documentation references. We prompt the LLM to fill placeholders within predefined SQL sketches. We encourage the LLM to provide multiple fragments that could be filled in, and require the output in a structured format—CSV in our case—for parsing. We also provide few-shot examples in the specified result format, sampled automatically at random from existing SQL features.
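The prompt structure and the CSV-based output format described above can be sketched as follows. This is a hedged illustration: build_prompt and parse_fragments are our own names, and the exact wording of ShQveL's prompts differs (see Section 5.2).

```python
import csv
import io

def build_prompt(dbms, docs_summary, sketch, literal_gens, examples_csv):
    # Assemble instructions, sketch, documentation summary, the
    # literal-generator interface, and few-shot examples into one prompt.
    return (
        "You are an expert in SQL dialects.\n"
        f"DBMS: {dbms}\n"
        f"Documentation: {docs_summary}\n"
        f"SQL sketch:\n{sketch}\n"
        f"Literal generators: {', '.join(literal_gens)}\n"
        f"Examples (CSV, one fragment per field):\n{examples_csv}\n"
        "For each ?? in the sketch, list as many deterministic, rare "
        "fragments as possible, in CSV."
    )

def parse_fragments(csv_reply, known):
    # Parse the structured CSV reply; drop duplicates and fragments
    # that the generator already knows.
    fragments = []
    for row in csv.reader(io.StringIO(csv_reply)):
        for frag in (field.strip() for field in row if field.strip()):
            if frag not in known and frag not in fragments:
                fragments.append(frag)
    return fragments
```

The structured format makes the reply machine-parseable, and filtering against the known fragment set keeps the generator's feature pool free of duplicates.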
We attach a documentation summarization of the target DBMS based on our RAG techniques. This aids the LLM in accurately synthesizing the requested SQL fragments. We integrate the random generator interfaces and their descriptions in the prompt, which helps improve the randomness of the generated fragments. For example, in step ②, the LLM processes the following parameterized prompt:

You are an expert in SQL dialects. Given:
• DBMS: [DBMS]
• Documentation: [Documentation]
• SQL sketch: [SQL sketch]
• Literal generators: [Literal generators]
• Examples: [Examples]
Generate, for each placeholder ({0}, {1}, ...) in the SQL sketch, as many deterministic, rare, and complex concrete alternatives as possible, using **only** the provided variable generators or literal values. Avoid random functions...

Table 2: Examples of RandGen.

Retrieval-augmented generation. ShQveL uses RAG techniques that incorporate up-to-date information based on the DBMSs' documentation. Although relying solely on the LLM's internal knowledge is possible, its performance can be limited, especially for emerging DBMSs whose specifications would likely be missing from the model's training data (see Section 6.2). LLMs are pre-trained and updated infrequently; thus, they cannot capture the latest knowledge of the target DBMS. Besides, their capacity limits their ability to provide detailed, accurate information for less common DBMSs. To address these issues, ShQveL uses a three-step retrieval-augmentation process. First, ShQveL generates a natural-language description for each sketch, specifying the system's name and the feature type to be learned. Second, ShQveL retrieves documentation corresponding to the identified feature using the natural-language description. While we use a search engine to retrieve the relevant DBMS documentation for the specific features, a vector database could also be used.
Third, ShQveL employs an LLM to summarize the fetched documents. ShQveL guides the LLM to parse the document containing each feature and then produce a concise summary that includes each feature's name, a clear description or specification, and a concrete example. This summarization helps reduce the size of the references to fit within current LLMs' context limits. Random generators. ShQveL exposes a function interface to the LLM for generating random literals to explore a larger search space during testing. LLMs are likely to generate answers that are frequently seen in their training data, while in testing, values that are not frequently seen are more likely to trigger bugs. See Issue 1 in Listing 2 as an example. LLMs can return concrete fragments (e.g., CHECK (col_name > 0) for a column constraint clause); however, a bug may only be triggered when comparing col_name to a large integer, a string, or another column reference. We implement random literal generators (see Table 2) as callable functions within the code fragments. They are designed to generate random literals—integers, strings, or identifiers. Our prompt includes the interface and its descriptions, and the LLM can synthesize code fragments with these generators through a unique keyword that each generator is associated with. When ShQveL's generator incorporates new fragments that include these keywords, it automatically invokes the matching random literal generators, dynamically generating randomized values to be included in the generated SQL statements. By prompting LLMs with literal generators, one potential fragment could be CHECK (col_name > <RANDOM_INT>). During testing, the literal generator is invoked, and a comparison with a random integer is generated. This method enhances the diversity and coverage of the generated SQL test cases.
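The keyword-based expansion of literal generators can be sketched as follows. This is a simplified illustration: the keyword names mirror the paper's <RANDOM_INT> example, while the expansion code and the generator table are our assumptions.

```python
import random
import re
import string

# Map each placeholder keyword to a literal generator (illustrative).
RAND_GENS = {
    "<RANDOM_INT>": lambda: str(random.randint(-2**31, 2**31 - 1)),
    "<RANDOM_VARCHAR>": lambda: "'" + "".join(
        random.choices(string.ascii_lowercase, k=5)) + "'",
}

_KEYWORDS = re.compile("|".join(re.escape(k) for k in RAND_GENS))

def expand(fragment):
    # Replace every generator keyword with a freshly generated literal,
    # so each emitted SQL statement receives new random values.
    return _KEYWORDS.sub(lambda m: RAND_GENS[m.group(0)](), fragment)
```

For example, the learned fragment CHECK (col_name > <RANDOM_INT>) yields a different concrete comparison on each expansion.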
# 4.3 Sketch Validation

In step ③, ShQveL validates the synthesized code fragments through the execution feedback of the target systems. Due to LLM hallucinations, the LLM may return unsupported SQL features. Without a countermeasure, even a small number of invalid features would result in many invalid test cases, rendering the testing process inefficient; to tackle this, we propose a validation mechanism. ShQveL validates the SQL fragments by executing the filled statements on the DBMS: it substitutes the hole in the sketch with the returned fragment and executes the filled statements to check whether the learned features are supported (see step ③). After filling the holes, the statements are expected to form a self-contained SQL test case; that is, given an initial clean database state, we can execute this test case to create a table, insert data, and query it without error. If the execution of the statements is successful, we can infer that the filled-in fragments have a high probability of being supported by the target system. Otherwise, the filled-in fragments are marked as unsupported. Note that such fragments may contain features, such as random functions, that pass the test in rare cases while actually not being supported. We address this challenge during test execution (see Section 4.4). By integrating only the fragments that contain no unsupported features, ShQveL ensures a high validity rate for test cases.

# 4.4 Feature-oriented Testing

In step ⑤, ShQveL generates test cases and applies the test oracles. Initially, it does so by using only manually implemented features. It subsequently expands the set of features by executing the learning phase. The overall testing procedure is guided by a scheduler that focuses testing on newly learned fragments. Besides, ShQveL further validates the fragments at run time. Scheduler. We use a testing scheduler to improve the overall bug detection efficiency.
Two challenges exist when applying the learned fragments in generator generation. First, randomly selecting features to learn and generating fragments to test is inefficient. It may lead to learning the same feature more than once, and newly learned fragments may be skipped as the existing feature set grows, which causes bugs in those new features to remain undetected. Second, features at different levels may have dependencies. For example, a function at the expression level may require an argument of a DBMS-specific data type. If these two features are learned separately, the fragments may fail self-validation. To address these issues, we use a scheduler to guide the LLM to discover new features and to increase the probability of generating newly synthesized fragments (see Algorithm 1).

Algorithm 1: The feature-oriented testing scheduler.
1  repeat
2      P ← initializeFeatures();
3      repeat
4          F ← randomChoice(P);
           // Start a separate learning phase
5          if startLearning then
6              Clause, Expr, Stmt, Type ← synthesize(F);
7          while curQueries < maxQueries do
               // Generate a database state D
8              D ← createTable(Type, Clause);
9              D ← D ∪ createView(Type, Clause);
               // Execute DML statements
10             executeStmts(D, Stmt, Clause);
               // Validate queries on D via test oracles
11             Q ← validateQueries(D, Expr);
12     until allFeaturesLearned;
13 until timeout;

First, ShQveL initializes the feature pool P of the target DBMS by prompting an LLM to list feature names at each level (see line 2), for example, CREATE TABLE for statement-level features and INT for data-type features. Second, ShQveL randomly selects one feature from P, generates the corresponding level of SQL sketch, and prompts the LLM to synthesize related SQL fragments (see line 6).
When the feature is at the data-type level, ShQveL also generates expression-level sketches to learn features corresponding to that data type. Third, ShQveL starts a testing phase, executing a number of queries (maxQueries), during which the probability of generating newly synthesized fragments increases (see lines 7–11). It first generates a database state D and subsequently executes queries Q validated by test oracles. The parameter maxQueries is user-defined; we empirically set it to a value that allows testing two database states. For example, if the newly synthesized fragments contain the ARRAY data type with ARRAY values, ShQveL increases the probability of creating tables or views containing ARRAY columns and queries involving ARRAY-related functions and operators. Similar to SQLancer++ and SQLancer, ShQveL tests the DBMS with multiple threads, and the features are shared across threads. Finally, ShQveL marks a feature as learned after the LLM has synthesized its fragments and subsequent testing has completed. ShQveL restarts from line 2 once all features are learned. Run-time validation. ShQveL further infers the valid and invalid features through a feedback mechanism at run time. During the self-validation in step ③, invalid fragments that fail nondeterministically can falsely pass the sketch validation. For example, the <RANDOM_VARCHAR> generator can generate the string '0', which can be implicitly converted to an integer. We implemented the run-time validation mechanism based on SQLancer++ [45]. First, after each feature has been integrated, ShQveL generates sufficient test cases containing these features. Second, ShQveL calculates the estimated probability of each feature being executed successfully based on a statistical model [11]. Third, ShQveL ranks the features by their estimated probability and omits the features with a low probability.
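Our reading of this run-time filtering step can be sketched as follows; the class name, the threshold, and the minimum-sample heuristic are our assumptions, standing in for the statistical model of [11].

```python
from collections import defaultdict

class RuntimeValidator:
    """Track per-feature execution outcomes and drop unreliable features."""

    def __init__(self, threshold=0.9, min_samples=10):
        self.ok = defaultdict(int)
        self.total = defaultdict(int)
        self.threshold = threshold
        self.min_samples = min_samples

    def record(self, feature, success):
        # Called once per executed test case containing `feature`.
        self.total[feature] += 1
        self.ok[feature] += int(success)

    def surviving_features(self):
        # Keep features with too little evidence so far, or with a high
        # estimated success probability; omit the rest as likely invalid.
        return [
            f for f in self.total
            if self.total[f] < self.min_samples
            or self.ok[f] / self.total[f] >= self.threshold
        ]
```

A fragment that passed the one-shot sketch validation by luck (e.g., a random '0' string that happened to cast cleanly) accumulates failures over many test cases and is eventually filtered out.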
# 5 IMPLEMENTATION

In this section, we further introduce the implementation details of the SQL sketches and how ShQveL leverages LLMs.

# 5.1 Sketch Implementation

We implement sketch generation based on four levels of SQL features. Table 3 shows example sketches and their corresponding fillings at the different levels. One or more holes (denoted by ??) can exist in one sketch. In principle, any segment of a SQL statement could be replaced by or turned into a hole, filled by an LLM, and subsequently integrated into the generator. Instead, we implement SQL sketches with specific formats for more focused feature learning. Specifically, we demonstrate the corresponding SQL sketches for the four levels of SQL features—statement, clause, expression, and data type—that ShQveL supports. Notation. We specify the syntax of our sketches using an Extended Backus–Naur Form (EBNF) style grammar. Non-terminals (e.g., Statement and TabName) that can be replaced by groups of terminal symbols are in italics, while terminals—keywords, operators, and punctuation—are in bold (e.g., CREATE and ,). The bar | denotes alternatives, and + denotes lists; specifically, X+ means that the non-terminal X may be repeated one or more times. We use ?? to mark a “hole", which represents an incomplete segment in a SQL statement. Statement-level. ShQveL learns statement-level features by extending the rule for generating whole SQL statements. The following rules demonstrate the statement-level sketch:

StatementSketch := CrTabStmt+ ; Statement+
Statement := InsertStmt | ??

When generating a SQL statement Statement, a placeholder ?? may be generated instead of a concrete statement. The statement-feature row of Table 3 shows one example.
The sketch starts with random CREATE TABLE and INSERT statements to build the context, and subsequently, it includes one or more holes representing any DML statement supported by the target system for the LLM to fill. This sketch aims to broaden the range of new SQL statements, some of which may alter the database state in ways that expose potential bugs. For example, in PostgreSQL, a possible SQL fragment can be a concrete string ANALYZE;, or a combination such as VACUUM <RANDOM_TABLE>;. ShQveL disables fragments of statements that create or drop database objects, since the base generator, which is built on SQLancer++, cannot capture these changes in its internal schema model. Clause-level sketch. ShQveL learns clause-level features by inserting holes into each statement rule. Compared with the statement-level sketch, the holes are inside the statement and between the keywords, where they can serve as non-default constraints or configurations for the database. The rules below demonstrate the sketch for the CREATE TABLE statement:

CrTabSketch := CREATE TABLE ?? TabName (ColDef+ TabCstr) ??
ColDef := ColName Type ??
TabCstr := ??

In the CREATE TABLE statement, the second hole is inserted after the column definition ColDef, which can be filled with column constraints (e.g., NOT NULL or a CHECK expression). The hole at the end of the statement can be filled with a WITH table_parameter clause, which is a CrateDB-specific feature for specifying parameters for tables. Listing 5 demonstrates one example bug we found using this sketch. Similar instrumentation has been implemented for other statements, such as CREATE INDEX.
Our insight for clause- or keyword-level features is that adding these optional attributes or constraints inside DDL or DML statements may trigger optimizations and generate a more complicated database state [31], which may expose a potential bug. Expression-level sketch. ShQveL learns expression-level features by replacing each expression node with a hole in the predicate rule:

ExprSketch := CrTabStmt+ ; InsertStmt+ ; Query+
Query := SELECT ColRef+ FROM TabName WHERE Expr
Expr := Expr ?? Expr | ??(Expr+) | ?? Expr | v

This is to learn the DBMS-specific functions or operators, which may be implemented incorrectly and potentially lead to bugs. For example, the predicate Expr ?? Expr represents a binary operator node being replaced by a placeholder; it could potentially be filled with an arithmetic operator + or a comparison operator <= (see the example in Figure 1). To learn a possible comparison operator, ShQveL first generates CREATE TABLE and INSERT statements to create context, and then generates SELECT statements with holes inserted. Datatype-level sketch. ShQveL learns data-type-level features by replacing the data type names and values in the statements that create tables and insert data:

DatatypeSketch := CrTabStmt ; InsertStmt
CrTabStmt := CREATE TABLE TabName (ColName ??)
InsertStmt := INSERT INTO TabName (ColName) VALUES (??)

Each sketch contains a CREATE TABLE statement where the column type name is replaced by a hole, along with an INSERT statement where the value is replaced. This enables the LLM to infer both the data type and its corresponding constant values.
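As an illustration, the datatype-level sketch can be rendered and filled as follows. This is our simplified Python rendering: datatype_sketch and fill are hypothetical names, and ShQveL's actual generator is written in Java.

```python
def datatype_sketch(tab="TAB", col="COL"):
    # Render the DatatypeSketch rule: a CREATE TABLE statement with a hole
    # for the type name, and an INSERT statement with a hole for the value.
    return [
        f"CREATE TABLE {tab} ({col} ??);",
        f"INSERT INTO {tab} ({col}) VALUES (??);",
    ]

def fill(sketch, type_fragment, value_fragment):
    # Fill the paired holes with a type name and a matching literal.
    create_stmt, insert_stmt = sketch
    return [
        create_stmt.replace("??", type_fragment),
        insert_stmt.replace("??", value_fragment),
    ]
```

Filling the holes with ARRAY and [1, <RANDOM_INT>] produces the kind of pairing shown in Table 3, in which the remaining <RANDOM_INT> keyword is later expanded by a random literal generator.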
Table 3 demonstrates an example filling of this sketch: the data type is ARRAY and one possible constant is [1, <RANDOM_INT>], where <RANDOM_INT> is a built-in random literal generator.

Table 3: SQL sketch design for ShQveL.

# 5.2 Prompt Implementation

ShQveL employs several few-shot prompts to leverage the LLM for enhancing the generator. For reference summarization, the prompt is simple and zero-shot, containing only general instructions with specific targets. For example, to get the reference when learning DuckDB data types, ShQveL first uses a search engine to retrieve the documentation using the following keywords: DuckDB, Data type, Overview. The prompt for summarizing the retrieved document is then: “Please summarize the below document of [DuckDB]... List each [Data type] with their description and examples...” For fragment synthesis, ShQveL employs few-shot prompting; each prompt includes requirement instructions, references produced by the summarization LLM, a masking template generated by the generator, and several few-shot examples. We also include special notes in the instructions to avoid non-deterministic functions; however, due to LLM hallucinations, fragments with randomness (e.g., CURRENT_TIMESTAMP()) could still be generated. The few-shot examples are randomly selected from the currently available fragments for each hole and integrated in the format of the expected LLM output (e.g., CSV). First, they help the LLM avoid generating duplicate fragments. Second, they help the LLM format its answer; otherwise, the LLM may generate fragments in natural language, which are difficult to parse.

# 6 EVALUATION

We evaluated ShQveL in multiple respects. First, we evaluated the effectiveness of ShQveL in finding bugs over a long-term testing campaign, and analyzed the bugs we found at the different feature levels (see Section 6.1).
Second, we compared ShQveL—under different configurations—with state-of-the-art logic bug detection tools (see Section 6.2). Third, we measured the contribution of each component of ShQveL to its bug-finding effectiveness (see Section 6.3). Baselines. We implemented ShQveL based on SQLancer++. It consists of 10.5K LOC written in Java for the base generator and validator, and 200 LOC of Python for building the LLM agents. In comparison, SQLancer has 83K LOC and SQLancer++ has 8.4K LOC. We compared ShQveL with SQLancer++ and SQLancer. Our aim for ShQveL was to detect bugs in various DBMSs with different dialects. In addition, we sought to make it more effective and support more features than SQLancer++. We did not aim to outperform SQLancer's manually written, dialect-specific generators; rather, our goal was to show the feasibility of fully automatic bug detection for dialect-specific features. Setup. We conducted the experiments using a server with a 64-core EPYC 7763 at 2.45 GHz and 512 GB of memory running Ubuntu 22.04. In ShQveL, we used the OpenAI APIs to query LLMs. Specifically, we used GPT-4o for SQL sketch synthesis and GPT-4o-mini for document summarization. During each testing iteration, we randomly created up to 2 tables, 1 view, and 20 inserts, on which we then executed 100K queries, which are the standard settings for SQLancer and SQLancer++. We also compared ShQveL under different LLM settings: “ShQveL_Model" denotes using only the LLM's internal knowledge without DBMS documentation.

Table 4: ShQveL allowed us to find and report 55 bugs in 5 systems, of which 50 were fixed by the developers, and of which 39 were logic bugs.

DBMS selection. We evaluated our approach on 5 DBMSs—CockroachDB, CrateDB, DuckDB, MonetDB, and TiDB (see Table 4).
We used the latest development versions and reported bugs only when they could be reproduced on the latest versions. SQLancer provides support for all of these systems except CrateDB, which is supported only by SQLancer++. Among the systems, CockroachDB, DuckDB, MonetDB, and TiDB have previously been the focus of both logic-bug [24, 31] and memory-bug [8] detection techniques. We selected these five systems also because their development communities actively investigate and resolve reported issues, enabling us to validate the uniqueness of any bugs we uncover. Although we also detected bugs in other popular DBMSs (e.g., MySQL and MariaDB), many previously reported bugs remain unfixed, making it challenging to determine whether those issues are both previously unknown and identical to our test cases [13, 28, 31].

# 6.1 Bug Finding Effectiveness

We continuously tested the five DBMSs during a six-month fuzzing campaign, followed by several months of intermittent testing. This methodology was also used to evaluate various other automated testing approaches, such as SQLancer++ [45] and other testing approaches for DBMSs [1, 15, 28]. We ran ShQveL for several minutes up to one day, until it had generated a number of bug reports. We employed the bug reduction and prioritization mechanisms of SQLancer++ and then further processed the reduced and prioritized bug reports. We manually filtered bug-inducing test cases that are due to non-deterministic features (e.g., RANDOM functions) by performing a string search after encountering them for the first time. We refrained from reporting bugs to DBMSs where previously reported bugs were not fixed (e.g., MySQL), and we reported at most three bugs before the previously reported bugs were fixed. After the bugs were fixed, we updated the DBMS to the latest version and started another testing run on it.

Table 5: Bugs found by different levels of SQL features.

Bug statistics.
Table 4 shows the statistics of the bugs we reported; the full bug list is included in the artifact. In total, we reported 55 bugs, of which 50 have been fixed and the rest have been confirmed. 39 of the 55 reported bugs were logic bugs, including 1 bug in CockroachDB, 20 bugs in CrateDB, 13 bugs in DuckDB, 2 bugs in MonetDB, and 3 bugs in TiDB. ShQveL also found 16 other bugs causing hangs, internal errors, or crashes in the DBMSs. This is expected, since automatically incorporating new features helps ShQveL detect issues in previously untested or not well-tested features. ShQveL detected the logic bugs with the TLP [27] oracle, and detected the other bugs (e.g., crashes and hangs) by monitoring the DBMS process status and response latency. These results are encouraging and demonstrate that ShQveL can find bugs that DBMS developers are willing to fix. Features analysis. Table 5 shows the bugs found by ShQveL when augmented with SQL features of the different levels. Most of the bugs found in DuckDB are caused by data type features, including BIT, TIMEZONE, and INET. These data types are not supported by the original SQLancer, which is why ShQveL could find these bugs. Most bugs in TiDB and MonetDB stem from expression-level features—functions and operators. Extending existing generators with new functions or operators might require less effort than clause- or statement-level features; however, manually covering all of the hundreds of built-in or newly released features is impractical. ShQveL can automatically incorporate these new features. In the following, we present bugs related to the different feature levels. Our goal is to show the breadth of distinct bugs identified by each feature we learned. We present only reduced test cases that still contain the feature and highlight the core issue, rather than the original and semantically equivalent queries used to uncover these bugs. Incorrect function result.
Listing 4 demonstrates a bug in TiDB that is related to function-level features. The TiDB developers explained that it was an inconsistent behavior of the JSON_VALID function when its argument is a column reference or a literal NULL.

# Listing 4: A bug in TiDB when using a JSON function.
CREATE TABLE t0(c0 VARCHAR(500), c1 INT);
CREATE VIEW v0(c0) AS SELECT 'a' FROM t0;
INSERT INTO t0(c0) VALUES ('b');

SELECT * FROM t0 NATURAL RIGHT JOIN v0
WHERE (JSON_VALID(t0.c1) = 0);
-- returns {}; expected {'a', NULL}

# Listing 5: A large integer argument leads to a CrateDB crash.
CREATE TABLE t0(c0 INT) WITH (number_of_replicas = 1000);
-- raises a SQLParseException
CREATE TABLE t0(c0 INT) WITH (number_of_replicas = 1925152226);
-- hang and crash

# Listing 6: Column constraints lead to an incorrect hash-join detection in CrateDB.
CREATE TABLE t0 (c1 INT NOT NULL DEFAULT 1);
CREATE TABLE t1 (c1 INT);
INSERT INTO t0(c1) VALUES (1);
INSERT INTO t1(c1) VALUES (2);

SELECT * FROM t0, t1 WHERE
  (t1.c1 >= 1) = ((t1.c1 = t1.c1) AND (t0.c1 <= t0.c1));
-- returns {}; expected {1, 2}

Surprisingly, the developers discovered that MySQL was also affected by this bug, and thus they submitted an issue report to the MySQL developers. This bug was found by the TLP oracle; however, SQLancer failed to find it, since it lacks support for this function. Table property clause crash. We found a bug in CrateDB (see Listing 5), where a CREATE TABLE statement that sets a large integer value as a table parameter led to a hang and crash of the system. The system crashed due to an integer overflow when validating the shards limit; however, CrateDB failed to catch the exception. The WITH clause is not defined in standard SQL, and the attribute number_of_replicas is CrateDB-specific.
ShQveL learned this statement-level fragment and used it while creating tables, which led to the crash. Although ShQveL was not explicitly designed for detecting crash bugs, our experiments demonstrate its capability to uncover several such issues. This bug also shows the benefit of the built-in random literal generators, since an LLM trained on real-world data is unlikely to directly generate code fragments with an overly large integer value. False optimization by default clause. Listing 6 shows a bug caused by incorrect hash-join detection. As explained by the developers, the issue arises when joining tables whose filter involves multiple nested equality comparisons, and it manifests only when the column constraints NOT NULL DEFAULT 1 are enabled. ShQveL learns clause-level fragments and can generate a database schema with DBMS-specific features, which may trigger potential optimizations. Data type. We found a bug in DuckDB (see Listing 7) where an IN expression was unexpectedly evaluated to false. We found this bug with the TLP oracle: none of the partitioning queries of this expression predicate fetched any rows.

# Listing 7: DuckDB evaluated the cast of time values incorrectly.
CREATE TABLE t1(c0 TIME WITH TIME ZONE);
INSERT INTO t1(c0) VALUES ('12:34:56');

SELECT (CAST(t1.c0 AS TIME) IN ('12:34:56')) FROM t1;
-- returns false; expected true

Table 6: Coverage of executing ShQveL, SQLancer++, and SQLancer on SQLite, PostgreSQL, and DuckDB in 24 hours.

As explained by the developers, this was due to a bug in the implementation of the cast for TIME data types. They also mentioned that this data type is not well-tested.⁴ Manually implementing these features in existing generators is labor-intensive. By automatically learning from an LLM, ShQveL can easily uncover untested features and trigger potential bugs.

# 6.2 Baseline Comparison

We measured the performance using multiple configurations of ShQveL and the baselines.
We compared the bug detection efficiency of ShQveL and SQLancer++ on CrateDB 5.6, a historic version. Using a historic version for evaluating efficiency is common practice [31, 40], as the uniqueness of bugs can be determined by identifying which commit fixed a bug demonstrated by a bug-inducing test case. We also evaluated the line and branch coverage of ShQveL under different settings, SQLancer, and SQLancer++ on three C/C++ DBMSs: SQLite, PostgreSQL, and DuckDB. Although code coverage is not a crucial metric for measuring the capability of finding logic bugs, it can help to compare, in relative terms, how many features have been covered. We do not expect ShQveL to outperform the base SQLancer implementation, since its manually written generators are specific to each system, while ShQveL can be easily adapted to SQL dialects that are not supported by existing tools. Bug detection. Figure 2 shows the bug detection efficiency of ShQveL, ShQveL without reference summarization (ShQveL_Model), and SQLancer++ on CrateDB. In one hour, ShQveL detected more bugs than ShQveL_Model, as it generates features described in the documentation that would otherwise be missed. Both ShQveL and ShQveL without summarization outperform SQLancer++, since ShQveL can discover new bugs through LLM-derived features. ShQveL found its first bugs slightly later, since it starts learning from scratch, and summarizing the documentation takes time.

Figure 2: Unique bugs found on CrateDB in one hour across 5 runs. The shaded area shows the standard deviation.

Table 7: Branch coverage achieved by ShQveL on SQLite, PostgreSQL, and DuckDB for 24 hours when learning exclusively from statement, clause, datatype, and expression features.

Code coverage. Table 6 shows the average line and branch coverage across 10 runs of 24 hours each, which adheres to best practices [18].
SQLancer achieves the highest coverage on SQLite and PostgreSQL; this difference reflects SQLancer's more comprehensive generator implementations for these two systems. For SQLancer, significant manual effort was necessary (e.g., 9.7K LOC for the SQLite generator) to support these features, whereas ShQveL achieves similar performance fully automatically. On DuckDB, however, ShQveL outperformed SQLancer. We speculate that this is because DuckDB, an emerging and popular DBMS, evolves rapidly with new features, whereas SQLancer is not maintained actively enough to incorporate them. By incorporating LLM-derived features, ShQveL increases branch coverage over SQLancer++ by 44.3% on SQLite, 28.7% on PostgreSQL, and 44.7% on DuckDB. Furthermore, adding documentation summarization to ShQveL achieves 5%-9% more coverage than relying only on the LLM's internal knowledge. SQLite and PostgreSQL show smaller improvements from external documentation summarization, because they are more popular and better represented in the LLM's training data.

# 6.3 Ablation Study

We further measured the contribution and costs of the individual components of ShQveL.

Feature importance. We measured the effectiveness of learning each level of features. We use the four levels of features described in Section 5.1, which help ShQveL find bugs at different levels (see Section 6.1). We evaluated ShQveL on SQLite, PostgreSQL, and DuckDB, enabling learning of only one type of feature in each execution. Table 7 shows the incremental learning effect of each individual feature level in terms of branch coverage; the base column represents the base generator. Statement-level features lead to the greatest coverage increase across all three DBMSs, since the base generator supports only common SQL statements.

Figure 3: The cost of LLM APIs to learn features on SQLite, DuckDB, and PostgreSQL over six hours.
ShQveL found no new logic bugs by learning new statement-level features, since most logic errors stem from issues in the query processor rather than the statement executor. Expression-level features are strongly dependent on data types. Specifically, DBMS-specific functions and operators usually require a specific data type (e.g., the function ARRAY_POSITION expects an ARRAY argument). Thus, in the default setting of ShQveL, these features are learned together with their corresponding data type sketches, and the feature-oriented testing phase increases the probability that the corresponding data types are generated. In this experimental setup, however, the functions and operators were learned without their associated data types, causing most of them to fail.

Feature learning costs. We measured the cost required by ShQveL to discover new features and improve branch coverage over a six-hour run on SQLite, DuckDB, and PostgreSQL. Each learning phase is triggered only after a sufficient number of test cases has been executed (e.g., 200K SQL statements). We increment the feature count when ShQveL generates a valid, previously unseen code fragment for a given level of SQL sketch, filtering out duplicates and invalid fragments. We also record the branch coverage of the DBMSs during execution. Figure 3 shows the cumulative number of learned features (solid line) and branch coverage (dashed line) against total API cost in USD. ShQveL learns over 400 features on all three DBMSs for under 1 US dollar, achieving branch coverage close to that of the original SQLancer. On DuckDB, it learns around 400 features for less than $0.4 and outperforms SQLancer. Note that SQLite's higher throughput triggers more frequent learning phases and thus incurs a higher cost.

Validity rate.
We measured the query and statement validity rates of ShQveL when learning new features over a six-hour run on SQLite, DuckDB, and PostgreSQL, with and without the fragment validation mechanisms. Figure 4 shows the cumulative success rate of the generated SQL statements, including both DDL and DML statements, and queries over time. When fragment validation is disabled, the validity rate declines by 65.1%, 34.1%, and 31.8% on SQLite, DuckDB, and PostgreSQL, respectively. The validity rate does not drop to zero, since ShQveL also generates common features besides the newly learned ones. Among the three systems, SQLite achieves the highest overall validity, owing to its dynamic typing and ability to coerce most values into the required types.

# 7 RELATED WORK

LLM- and AI-empowered systems. LLMs have recently been leveraged in database management for tasks such as query rewriting, SQL dialect translation, and text-to-SQL conversion. DB-GPT [47] treats the LLM as the "brain" of the DBMS that can adaptively handle tasks like automatic query reformulation and index recommendation. LLM-R2 [21] is a rule-based rewrite framework that uses an LLM to recommend rewrite rules drawn from existing SQL rewriting platforms. CrackSQL [46] combines rule-based techniques with LLMs to translate between SQL dialects. FinSQL [39] is a model-agnostic, LLM-based text-to-SQL framework for financial data analysis. Like these systems, ShQveL leverages LLM knowledge in database management, but it focuses on enhancing DBMS testing tools. CatSQL [7] combines rule-based SQL sketch templates with deep learning models to fill in query details, improving the accuracy and reliability of NL2SQL translation. ShQveL shares with CatSQL the idea of using a language model to fill SQL sketches; however, ShQveL differs in the following respects.
First, CatSQL focuses on SELECT queries, whereas ShQveL supports all kinds of SQL statements. Second, in CatSQL, the filled sketches are the final system outputs; in contrast, ShQveL uses the filled sketches to improve its generator, which in turn produces the outputs. Third, CatSQL translates natural-language queries to SQL queries, while ShQveL aims to learn features of different dialects for DBMS testing.

LLM-aided testing. Recent work has leveraged LLMs for automated software testing across multiple domains. Fuzz4All [34] is a universal fuzzing framework that uses LLM-driven prompt generation and mutation to produce diverse test inputs in various programming languages. WhiteFox [35] is an LLM-based white-box compiler fuzzer in which one model analyzes compiler optimization passes to derive input requirements and another model generates test programs. KernelGPT [36] enhances OS kernel fuzzing by leveraging LLMs to synthesize system call specifications for fuzzers. All of the above methods use LLMs to generate inputs directly, resulting in low throughput. Conversely, ShQveL separates learning from testing and generates inputs through traditional generators, making the process efficient and cost-effective. MetaMut [25] is an LLM-guided mutation testing approach that integrates compiler domain knowledge into prompts to automatically generate high-quality mutators for compiler fuzzing. Similar to ShQveL, its LLM is invoked only in an initial phase and is not used as a direct test case generator. To the best of our knowledge, ShQveL is the first work that leverages LLMs to detect logic bugs in DBMSs.

Figure 4: The cumulative validity rate of test cases when executing ShQveL on SQLite, DuckDB, and PostgreSQL over six hours.

Test case generation for DBMSs. Existing methods for generating test cases for DBMSs can be divided into two categories: mutation-based testing and generator-based testing.
Mutation-based testing for DBMSs relies on SQL-specific mutators that mutate existing SQL statements. Griffin [9] uses a grammar-free SQL fuzzing approach that replaces hand-written grammars by summarizing the database state in a lightweight metadata graph to guide semantically correct query mutations. BuzzBee [38] extends DBMS fuzzing to cover multiple database models. Sedar [8] improves the effectiveness of mutation-based testing by transferring SQL seeds across DBMS dialects, thus obtaining high-quality, diversified inputs. None of these mutation-based approaches can be applied to find logic bugs, due to the semantic constraints imposed by test oracles. In contrast, generator-based approaches construct queries based on pre-defined rules. SQLsmith [29] is a representative tool that utilizes schema metadata to generate well-formed random SQL queries. SQLancer [26-28] also generates queries using various hand-written SQL generators. Both tools are built on manually implemented generators, which ShQveL can automatically augment without human supervision.

DBMS test oracles. ShQveL can use existing test oracles; in this work, we implemented TLP and NoREC. TLP [27] detects logic bugs by executing a query and checking that its result matches the combined results of three queries that partition the rows into separate parts. NoREC [26] detects logic bugs by executing a query that is receptive to optimizations and comparing its result to an equivalent version that is unlikely to be optimized. Various other test oracles could be adopted by ShQveL seamlessly. EET [16] rewrites each query through equivalent transformations and checks that the rewritten query returns the same results as the original. CODDTest [40] leverages compiler optimizations to find logic bugs in DBMSs, especially in advanced features like subqueries.
APOLLO [17] finds performance slowdowns by running random queries on different versions of the same DBMS and flagging cases where the newer version runs much slower. Several other test oracles cannot be supported in a dialect-agnostic manner, as they require DBMS-specific information. For example, CERT [2] requires manual effort to parse the query plan, whose format usually differs between DBMSs, to find unexpected differences in the cardinality estimator. Radar [31] requires manual effort in designing and analyzing the metadata constraints for raw database generation. Conceptually, ShQveL could be applied to augment the generators of all these oracles, thereby improving their bug-detection capabilities.
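The TLP check described above can be sketched against an in-memory SQLite database: a query's result must equal the union of its three partitions under a predicate p, NOT p, and p IS NULL. The table, values, and predicate below are illustrative stand-ins, not taken from ShQveL itself.

```python
import sqlite3

# Minimal sketch of the TLP (Ternary Logic Partitioning) oracle: partition
# the rows by an arbitrary predicate and compare against the unpartitioned
# query. The schema and predicate here are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t0(c0 INTEGER);
    INSERT INTO t0(c0) VALUES (1), (2), (NULL);
""")

def rows(sql):
    # Sort so the comparison is order-insensitive (multiset equality).
    return sorted(conn.execute(sql).fetchall(), key=repr)

original = rows("SELECT c0 FROM t0")
predicate = "c0 > 1"
partitioned = rows(
    f"SELECT c0 FROM t0 WHERE {predicate} "
    f"UNION ALL SELECT c0 FROM t0 WHERE NOT ({predicate}) "
    f"UNION ALL SELECT c0 FROM t0 WHERE ({predicate}) IS NULL"
)

# Any mismatch would indicate a logic bug in the DBMS under test.
assert original == partitioned
```

The NULL partition is what catches bugs like the DuckDB TIME cast issue in Listing 7, where a predicate silently evaluates to an unexpected value for some rows.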
Various automated testing approaches have been proposed for Database Management Systems (DBMSs). Many such approaches generate pairs of equivalent queries to identify bugs that cause DBMSs to compute incorrect results, and have found hundreds of bugs in mature, widely used DBMSs. Most of these approaches are based on manually written SQL generators; however, their bug-finding capabilities remain constrained by the limited set of SQL features supported by the generators. In this work, we propose ShQveL, an approach that augments existing SQL test-case generators by leveraging Large Language Models (LLMs) to synthesize SQL fragments. Our key idea is to systematically incorporate SQL features gained through automated interactions with LLMs into the SQL generators, increasing the features covered while efficiently generating test cases. Specifically, ShQveL uses SQL sketches -- SQL statements with incomplete code segments that LLMs fill -- to integrate LLM-generated content into the generator. We evaluated ShQveL on 5 DBMSs and discovered 55 unique and previously unknown bugs, 50 of which were promptly fixed after our reports.
[ "cs.SE", "cs.DB" ]
# 1 Introduction

Large Language Models (LLMs) have shown intriguing promise in optimizing code efficiency beyond compiler techniques [1-9]. Evaluating the effectiveness of these LLM-based code optimizations relies on performance-stressing tests. For example, an optimization from recursion to iteration in Fibonacci number calculation incurs only a negligible performance improvement when evaluated with a default test (n = 3) that focuses on testing correctness, while a performance-stressing input (n = 40) reveals a gap on the order of $10^6$. Moreover, as some approaches integrate execution feedback to further optimize the code [3, 7, 10], running performance-stressing tests reveals more precise optimization opportunities by exposing performance bottlenecks. Unfortunately, most existing code optimization approaches still use correctness tests to evaluate and suggest optimizations [3, 5, 7]. However, correctness tests alone are often insufficient to expose inefficient implementations. For example, the existing tests in common benchmarks, e.g., HumanEval [11], have been shown to have limited scope and low complexity, and thus fail to adequately stress code performance under more demanding conditions [12]. As a result, they are also more susceptible to noise introduced by the execution environment, failing to reliably quantify optimizations or reveal insightful optimization opportunities. To generate performance-stressing tests, recent works have started to leverage LLMs by prompting them to generate test generators [12].
For example, EvalPerf [12] introduced a scale parameter to control the input size, with the assumption that input size is the key factor determining performance stressing. However, such a biased preference for large tests misses the opportunity to reason about their relationship to inefficient program behaviors beyond size. For example, calling quicksort can suffer from suboptimal performance [13] when its input is reversely sorted ($O(n^2)$ in the worst case). When two inputs are both at the maximum length $n$, the reversely sorted one is more stressing than a randomly ordered one ($O(n \log n)$ on average).

Figure 1: The WEDGE workflow: contrastive execution profiling produces profile-annotated code; performance-characterizing (PC) constraint synthesis performs performance reasoning, implements constraint checkers, and produces PC-constraint-instrumented code; and PC-constraint guided fuzzing, with a custom input mutator, searches for performance-stressing (PerfForge) tests.

Our approach. We present WEDGE, a framework that generates performance test inputs beyond simply stressing input sizes. Our key insight is that the limitation of LLMs in generating performance-stressing tests boils down to the inherent challenge of connecting local performance-related program behavior all the way back to the program inputs [14], while directly reasoning about the local behaviors is comparatively easier. For example, we can easily specify that the local variable arr, the argument to a quicksort deeply nested inside the program, should be reversely sorted to trigger its inefficient behavior, while predicting which program inputs lead arr to be reversely sorted is much harder, as it requires reasoning about control and data flow based on a precise understanding of the program semantics. Such reasoning is extremely challenging due to the overwhelming search space, e.g., tracking a combinatorial number of program paths [15, 16, 14, 17].
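The quicksort claim above can be made concrete with a short sketch: a toy first-element-pivot quicksort that counts comparisons on a reverse-sorted versus a shuffled input of the same length. The function and the comparison counter are our illustration, not part of WEDGE.

```python
import random

# Toy quicksort with a naive first-element pivot: a reverse-sorted input
# of length n degrades to O(n^2) comparisons, while a random permutation
# of the same length stays near O(n log n).
def quicksort(xs, counter):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    counter[0] += len(rest)  # one comparison per element against the pivot
    lo = [x for x in rest if x < pivot]
    hi = [x for x in rest if x >= pivot]
    return quicksort(lo, counter) + [pivot] + quicksort(hi, counter)

n = 800  # kept small so the worst-case recursion stays within Python's limit
reverse_sorted = list(range(n, 0, -1))
shuffled = random.sample(range(n), n)

worst, avg = [0], [0]
assert quicksort(reverse_sorted, worst) == sorted(reverse_sorted)
assert quicksort(shuffled, avg) == sorted(shuffled)

# Same input length, very different cost: the structure of the input,
# not just its size, drives the slowdown.
print(worst[0], avg[0])  # roughly n*(n-1)/2 versus on the order of n*log2(n)
```

Both inputs have identical length, so a purely size-driven test generator would treat them as equally stressing; only the ordering distinguishes them.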
Based on this insight, WEDGE alleviates LLM reasoning about performance-related behavior by asking it to synthesize performance-characterizing constraints as condition checkers, e.g., all(l[i] > l[i+1] for i in range(len(l)-1)), and to instrument the program with these checkers at the appropriate program points. WEDGE then leverages coverage-guided fuzzers, a search-based testing technique [18, 19] whose goal is to maximize code coverage, to scale test input generation while sidestepping expensive iterative queries to LLMs. As inputs achieving new coverage are rewarded and prioritized by the fuzzer, the checker branches inserted by WEDGE serve as a coverage signal that biases the fuzzing toward generating likely-stressing inputs more efficiently. To enhance performance constraint reasoning, we develop a reasoning template that elaborates the procedure of contrasting a pair of disparate execution profiles to gain insight into inefficient implementations. We then instruct the LLM to reason about performance constraints (in natural language and code) in multiple phases to localize the appropriate program points and implement the corresponding constraint checkers. Besides guiding the fuzzer with constraint checkers, WEDGE further accelerates the input search by replacing the fuzzer's default input mutator [18] with a constraint-aware one that steers mutation toward likely constraint-satisfying inputs, while also enforcing that mutations respect the input grammars [20-23]. Figure 1 presents our workflow (see the detailed description in Section 3). Results. Our extensive evaluation shows that the tests generated by WEDGE are substantially more performance-stressing, by 84.5%, than the default tests in the existing benchmark and those generated by the state of the art [12].
With more stressing tests, WEDGE precisely pinpoints potentially inefficient implementations and thus yields approximately 10 percentage points more efficiency improvement in the generated code than default tests when used to guide iterative code optimization approaches via test-driven execution feedback [3]. Our ablations confirm the effectiveness of the synthesized constraints in guiding fuzzing and input mutation, achieving a 4x improvement over plain fuzzing with AFL++. In addition, we show that the generated constraints effectively characterize performance: constraint-satisfying inputs are 38.6x slower than constraint-violating inputs.

# 2 Overview

We start by discussing related work on code efficiency evaluation and stress test generation. We then use a motivating example to demonstrate the advantage of WEDGE over existing approaches.

# 2.1 Benchmarking Code Efficiency and Performance-Stressing Test Generation

While traditional code generation primarily focused on generating correct code [24-26, 11, 27-30], there are growing efforts to generate efficient code beyond correctness [5, 12, 31, 32]. However, existing efficient code generation techniques still largely rely on correctness tests to evaluate performance improvements [5, 7, 33], which cannot faithfully measure them [12, 32, 31, 9]. Some rely on test-driven execution feedback to further optimize the code [7, 9]. These approaches can miss optimization opportunities when the tests do not reveal the performance bottleneck (as shown in Section 4.3). To address these challenges, recent works have focused on performance test generation to benchmark efficient code generation [31, 32, 8, 12].
However, these approaches either suffer from low-quality tests and thus rely on manual correction, or their task formulation prevents the LLMs from reliably reasoning about program behavior, i.e., they directly prompt the LLMs to generate stressing inputs from full code snippets. With such a nontrivial task, LLMs have to identify the inefficient implementation, understand the runtime behavior needed to exercise it, and reason all the way back to the program input. Therefore, they often take a shortcut and reduce to generating only length-stressing inputs that fail to reveal the inefficient implementation (§4.4). In addition to performance benchmarking on competitive-programming-level code, GSO [34] extends the evaluation to repository-level, real-world workloads by prompting an LLM with the performance-optimizing commit. It shares the high-level idea of direct prompting but requires more challenging inter-procedural reasoning across a much longer context [35, 36]. WEDGE complements direct-prompting approaches for performance testing by decomposing test generation into local code behavior reasoning and efficient input search. Performance testing has been studied extensively before LLM-based approaches [13, 37-40]. Traditional techniques rely primarily on static or dynamic analysis (e.g., fuzzing). However, they are known to suffer from scalability issues such as path explosion, fine-grained performance profiling overhead, and the lack of oracles to precisely capture inefficient behaviors or symptoms. WEDGE restricts the LLM to focus specifically on local performance-characterizing constraints, avoiding over-reliance on the LLM to reason globally back to the input, and instead leverages the guided search of fuzzing to reach the constraints gradually. It therefore captures inefficient behaviors without expensive profiling and enables test generation beyond length-stressing.
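As a minimal illustration of how a local performance-characterizing constraint becomes a coverage signal, the sketch below inserts the reverse-sorted checker quoted in the introduction as an extra branch in a toy target function. WEDGE instruments C++ programs; this Python version, and all names in it, are our illustration.

```python
# Sketch (ours, not WEDGE's actual instrumentation) of a constraint
# checker inserted as an extra branch, so a coverage-guided fuzzer is
# rewarded for inputs that reach it.
def reverse_sorted_checker(l):
    # The constraint quoted earlier: the list is strictly decreasing.
    return all(l[i] > l[i + 1] for i in range(len(l) - 1))

hits = []  # stands in for the new coverage edge the checker branch adds

def instrumented_sort(l):
    if reverse_sorted_checker(l):
        hits.append(list(l))  # inputs reaching this branch get prioritized
    return sorted(l)

instrumented_sort([3, 2, 1])  # satisfies the constraint
instrumented_sort([1, 3, 2])  # does not
assert hits == [[3, 2, 1]]
```

The checker itself never changes the program's output; it only adds a branch whose coverage the fuzzer tries to maximize, biasing the search toward constraint-satisfying, likely-stressing inputs.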
# 2.2 Motivating Example

Let us consider an example from CodeContests, Codeforces problem 633A [41] (gray box, Figure 2). Given three integers $a, b,$ and $c$ ($1 \leq a, b \leq 100$, $1 \leq c \leq 10{,}000$), the goal is to decide whether there exist non-negative integers $x, y$ such that $a \cdot x + b \cdot y = c$. This problem is classically known as finding solutions to a two-variable linear Diophantine equation [42]. The code snippet (blue region) shows an implementation that solves this problem. The code systematically tries every pair of values by iterating two nested loops over fixed upper bounds of 10,000 (lines 6-7), computing the value of the Diophantine equation, and checking whether it equals or exceeds $c$. Because the loops use fixed upper bounds rather than adapting to the value of $c$, the code could examine nearly the entire $10{,}000^2$ value space. Apart from skipping sums greater than $c$ or breaking once a match is found (lines 9-11), the code bears the full brute-force cost.

The right half of Figure 2 (green and purple boxes) shows how WEDGE infers performance-characterizing constraints specific to this program, inspired by contrasting execution traces that share similar inputs but exhibit disparate behavior (manifested by the per-statement execution counts). In particular, our tool identifies specific relations among the local variables a, b, c that stress the nested loops to exhaust their maximum iterations. The green box shows the LLM's reasoning process, while the purple box shows the performance-characterizing constraints synthesized as a C++ checker by the LLM, to be instrumented into the program.

Figure 2: The motivating example (Codeforces 633A). The figure shows the problem statement, the profile-annotated brute-force C++ solution with per-line hit counts for a slow and a fast input, the LLM's reasoning (Phase 1: identify expensive or inefficient code segments by comparing line-level hit counts, pinpointing the heavily stressed inner loop on line 7, and inferring its interaction with the input constraints; Phase 2: derive performance-characterizing constraints, here that if gcd(a, b) does not divide c, the loops iterate nearly their entire range without early termination), and the synthesized constraint checker check_gcd_constraint(int a, int b, int c), which warns when the gcd invariant is triggered.

Our key observation is that these constraints are more local, fine-grained, and easier to generate, and cannot be captured by state-of-the-art techniques (e.g., [12, 32]), which focus primarily on maximizing input values and sizes. Such performance-stressing constraints therefore serve as more appropriate interfaces for LLMs to communicate their reasoning to existing test generation tools than directly asking the LLMs to generate performance-stressing inputs.

# 3 WEDGE Framework

This section elaborates on the key components of WEDGE as presented in Figure 1.

Problem Statement. We formally define the performance-stressing test generation problem. Given a program $\mathcal{P}$ accepting a valid input (conforming to a validity specification $\mathcal{V}$), the set of all valid inputs is denoted as $\mathcal{I}_\mathcal{V}$. For a valid input $i \in \mathcal{I}_\mathcal{V}$, the execution of $\mathcal{P}$ (denoted as $E_i = \mathcal{P} \cdot i$) yields an execution time $T_i$.1 The goal of stress test generation is to generate a subset of valid inputs $I^* \subset \mathcal{I}_\mathcal{V}$ such that the average execution time over $I^*$ is maximal.
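The motivating example's behavior can be reproduced with a minimal sketch of the brute-force solver, assuming a reduced loop bound of 300 in place of the original 10,000 so that it runs quickly; the iteration counter is ours.

```python
from math import gcd

# Sketch of the brute-force solver from the motivating example, with a
# reduced loop bound (300 instead of 10,000). It counts iterations of the
# nested loops, including the early return on a match.
BOUND = 300

def brute_force(a, b, c):
    iterations = 0
    for x in range(BOUND + 1):
        for y in range(BOUND + 1):
            iterations += 1
            if x * a + y * b == c:
                return True, iterations
    return False, iterations

# When gcd(a, b) does not divide c, no solution exists and the loops run
# to exhaustion: the constraint synthesized by the LLM in Figure 2.
slow_ok, slow_iters = brute_force(4, 6, 9999)  # gcd(4, 6) = 2; 9999 is odd
fast_ok, fast_iters = brute_force(1, 1, 2)     # a match is found almost immediately

assert not slow_ok and slow_iters == (BOUND + 1) ** 2
assert fast_ok and fast_iters < 10
```

Both calls receive inputs of identical size (three small integers), yet one exhausts the full search space while the other terminates after a handful of iterations, which is exactly why a constraint on the values, not the size, characterizes the slowdown.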
At a high level, WEDGE takes as inputs a coding problem statement $\mathcal{S}$, a correct solution program $\mathcal{P}$, a set of default correctness tests $\mathcal{I}_\mathcal{D}$, and a Large Language Model (LLM), and produces a set of performance-stressing test inputs $I$: $I = \mathrm{WEDGE}(\mathcal{S}, \mathcal{P}, \mathcal{I}_\mathcal{D}, \mathrm{LLM})$.

# 3.1 Contrastive Execution Profiling

We first collect high-quality contrastive execution feedback from fast and slow executions to facilitate reasoning about performance-characterizing constraints. This is achieved in two steps.

Contrastive input pair mining. In this step, WEDGE runs $\mathcal{P}$ against a set of user-provided tests $\mathcal{I}_\mathcal{D}$, e.g., the existing correctness tests provided by the dataset, to mine a contrastive (slow, fast) input pair $(i_{slow}, i_{fast})$. During test execution, WEDGE collects the execution cost of each input, measured by the number of executed instructions (denoted $|I|$ in our experiments). WEDGE then mines contrastive input pairs based on two metrics: (1) similarity, defined as the sum of the match ratio (i.e., the number of common array elements divided by the length of the shorter array) and the Jaccard similarity [43]; and (2) execution cost ratio, defined as the ratio of the slow input's cost $|I|_{slow}$ to that of the fast input $|I|_{fast}$. Input pairs are ranked by their similarity and execution cost ratio, and WEDGE selects the top-ranked pair as the contrastive input pair $(i_{slow}, i_{fast})$.

Profiling feedback collection. WEDGE executes $\mathcal{P}$ with $i_{slow}$ and $i_{fast}$, collecting execution feedback (coverage and hit counts) $F_{slow}$ and $F_{fast}$.
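The pair-mining step above can be sketched as follows. How the two metrics are combined into a single ranking score (here, a product) is our assumption, as are the toy token lists and instruction counts.

```python
# Sketch of contrastive input pair mining: score every (slow, fast)
# candidate pair by similarity (match ratio + Jaccard similarity) and
# execution cost ratio, then keep the top-ranked pair. Combining the two
# scores by multiplication is an illustrative choice.
def match_ratio(a, b):
    # Common elements (with multiplicity) divided by the shorter length.
    common = sum(min(a.count(t), b.count(t)) for t in set(a) | set(b))
    return common / min(len(a), len(b))

def jaccard(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def mine_pair(tests):
    # tests: list of (input_tokens, instruction_count)
    best, best_score = None, -1.0
    for slow, cost_s in tests:
        for fast, cost_f in tests:
            if cost_s <= cost_f:
                continue  # the slow member must actually cost more
            similarity = match_ratio(slow, fast) + jaccard(slow, fast)
            score = similarity * (cost_s / cost_f)
            if score > best_score:
                best, best_score = (slow, fast), score
    return best

tests = [
    (["100", "1", "2"], 5_000_000),  # slow
    (["100", "2", "1"], 6_000),      # fast and similar to the slow input
    (["3", "9", "9"], 4_000),        # fast but dissimilar
]
slow, fast = mine_pair(tests)
assert (slow, fast) == (["100", "1", "2"], ["100", "2", "1"])
```

Preferring similar inputs with very different costs isolates the behavioral difference: the pair differs as little as possible except in the property that makes one of them slow.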
Since such a contrastive execution pair provides the key behavioral insight [44], we prompt the LLM to pinpoint the differences and reason about why one input leads to significantly slower execution.

# 3.2 Performance-Characterizing Constraint Synthesis

WEDGE generates the constraints in two steps: it first generates the constraints $\mathbb{C}$ in natural language, then prompts the LLM to implement the corresponding constraint checkers and insert them into the fuzz driver $\mathcal{P}$ to produce the instrumented fuzz driver $\mathcal{P}'$.

Performance-characterizing constraint reasoning. A constraint is a predicate on the program state (e.g., variable values), expressed as a conditional statement, e.g., if (n > 1). Given a performance-characterizing constraint $c$ and a set of inputs $\mathcal{I}_\mathcal{V}$, some inputs may satisfy the constraint while others may not. We denote them as $\mathcal{I}_\mathcal{S}$ and $\mathcal{I}_\mathcal{N}$, respectively, where $\mathcal{I}_\mathcal{V} = \mathcal{I}_\mathcal{S} \cup \mathcal{I}_\mathcal{N}$ and the corresponding average execution times satisfy $\overline{T_\mathcal{S}} > \overline{T_\mathcal{N}}$. WEDGE first constructs a comprehensive performance reasoning prompt template that contains the problem statement $\mathcal{S}$, the solution program $\mathcal{P}$, the contrastive input pair $(i_{slow}, i_{fast})$, the profiling feedback $F_{slow}$ and $F_{fast}$, and multiple manually crafted constraints as few-shot examples.
The performance constraint reasoning can be denoted as $Perf(\mathrm{LLM}, \mathcal{S}, \mathcal{P}, (i_{slow}, i_{fast}), (F_{slow}, F_{fast})) = \mathbb{C}$, where $\mathbb{C} = \{c_i\}_{i=1}^N$ is the set of generated constraints. The LLM reasons in multiple phases, as shown in Figure 2. In Phase 1, the LLM identifies expensive or inefficient code segments. This includes: 1) comparing line-level profiling information, e.g., hit counts, between the fast and slow runs; 2) pinpointing lines or functions that receive significantly more hits under the slow input; and 3) inferring how these lines might interact with data structures, loops, recursion, etc., especially as they relate to the input constraints (e.g., n <= 100). In Phase 2, the LLM derives performance-characterizing constraints in natural language. By making the LLM reason about the constraints with Chain-of-Thought prompting [45], WEDGE collects insights into performance and generates high-quality constraints $\mathbb{C}$ (Figure 2, green part).

Constraint checker implementation. WEDGE prompts the LLM with the constraints $\mathbb{C}$ and instructs it to implement the checker code faithfully and produce the instrumented program. The instrumented program with inserted checker code, $\mathcal{P}' = Instrument(\mathrm{LLM}, \mathcal{P}, \mathbb{C})$, is used as the target program to fuzz.

# 3.3 Performance-Characterizing Constraint Guided Fuzzing

In this stage, WEDGE launches coverage-guided fuzzing against the instrumented program $\mathcal{P}'$ to search for constraint-satisfying inputs.

Constraint-aware mutator generation. WEDGE uses AFL++ as its fuzzing engine.
However, the default mutator of AFL++ (denoted $\mathcal{M}_\mathcal{D}$) targets binary fuzzing (with operations like bit flips, byte flips, crossover, etc.) and has no knowledge of input validity constraints, so it would generate mostly invalid inputs. We implement a custom input-grammar- and constraint-aware mutator $\mathcal{M}_\mathbb{C}$ by prompting the LLM with mutator examples, the problem statement $\mathcal{S}$ (i.e., the validity constraint $\mathcal{V}$), the solution program $\mathcal{P}$, the contrastive input pair $(i_{slow}, i_{fast})$, the profiling feedback $(F_{slow}, F_{fast})$, and the generated performance constraints $\mathbb{C}$: $\mathcal{M}_\mathbb{C} = MutatorSyn(\mathrm{LLM}, \mathcal{S}, \mathcal{P}, (i_{slow}, i_{fast}), (F_{slow}, F_{fast}), \mathbb{C})$. Mutator generation is more challenging than in EVALPERF [12] and the input-generator synthesis of prior work [3, 32, 8], as the mutator must be robust enough to ensure that mutated inputs satisfy the validity constraints while remaining as diversified as possible. To resolve this challenge, WEDGE follows an iterative generate-and-fix approach to ensure the robustness of the mutators. We provide more details in Appendix A.2 due to space constraints.

Constraint-guided fuzzing. Once the mutators are generated, WEDGE launches a fuzzing campaign using the mutator $\mathcal{M}_\mathbb{C}$ on the instrumented program $\mathcal{P}'$, collecting all tests generated by the fuzzer: $CGF(\mathcal{M}_\mathbb{C}, \mathcal{P}') = I$, where $I = \{i_1, i_2, ...\}$ are the fuzzer-generated tests.

# 4 Experiments

# 4.1 Setup

Test generation baselines. We evaluate PERFFORGE tests against four baselines.
The first two serve to compare the efficacy of our performance-stressing inputs. Specifically, we consider (1) EVALPERF [12], which uses LLMs to synthesize a parameterized input generator and progressively scales the input size until a predefined timeout or out-of-memory limit. Since our dataset lacks canonical reference implementations, we consider two variants: $\mathrm{EVALPERF_{SLOW}}$, which uses the slowest solution as the reference, and $\mathrm{EVALPERF_{RAND}}$, which uses a random solution. The second baseline is (2) TG-prompt, following recent work [8, 32, 31], which instructs an LLM to directly synthesize the performance test generator given the problem specification and its constraints.

Utility baselines. To measure the utility of our generated tests, we consider two scenarios in which PERFFORGE can help. The first scenario is providing execution feedback to help LLMs further optimize code. We consider EFFI-LEARNER [3], an iterative code efficiency optimizer that uses test-driven execution feedback to guide the LLM in refining its generated code. The second scenario is evaluating (ideally more precisely) existing code optimization approaches. We consider running PERFFORGE against PIE [5], an LLM-based code optimizer that finetunes the LLM on slow and fast code pairs and relies on correctness tests to evaluate its performance improvements.

Metrics. We primarily rely on CPU instruction count to measure the effectiveness of PERFFORGE tests, since it is more stable across runs, platform-agnostic, and strongly correlated with performance bottlenecks [46–49], whereas physical time is more prone to interference and noise [50, 47]. It is also one of the key metrics for evaluating LLM-based code testing and optimization tools [12, 32, 5] (more details in Appendix A.5). To further reduce noise, we average the CPU instructions over five runs for each program throughout all experiments.

Dataset.
We evaluate WEDGE on CodeContests [30], which offers a wide range of competitive programming problems and human-written solutions. Test cases include the default inputs from the original online-judge platforms as well as additional inputs generated by the authors [30]. We largely focus on C++ solutions to ensure comparable measurements, with a small subset of Python programs for the usefulness investigation (§4.3). We rank the problems based on the coefficient of variation [12] of the CPU instruction counts and select the top 300 problems. This ensures the selected problems feature diverse solutions and potentially have enough room for optimization for part of the solutions. WEDGE generates tests for 207 of them; after excluding those where baselines cannot produce valid inputs, we arrive at 154 problems and 33,020 C++ programs.

Fuzzing and input filtering. To collect inputs, we run WEDGE's fuzzing (based on our modified AFL++) for one hour for each solution in parallel. Not all generated inputs strictly conform to the validity constraints $\mathcal{V}$ (§3) [12]. WEDGE applies a two-stage automatic filter to remove likely invalid inputs. WEDGE first prompts an LLM to generate a validator based on the problem statement and uses the official tests in CodeContests to check the validity of the validator. WEDGE then checks the output consistency across different solutions (labeled correct in CodeContests) under the same input, following existing work [30]. Any input leading to inconsistent outputs is filtered out (detailed in Appendix A.3). After these steps, we rank the tests for each solution in the dataset based on the slowdown they introduce. We then select the top ten longest-running tests for each program and aggregate them as part of our benchmark, PERFFORGE.
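The cross-solution consistency filter and slowest-test selection described above can be sketched as follows. This is a simplified illustration, not WEDGE's released tooling; `run_solution`, `solutions`, and `instr_count` are hypothetical stand-ins for the actual execution harness:

```python
def is_consistent(inp, solutions, run_solution):
    # Keep an input only if every known-correct solution produces the
    # same output for it (output-consistency filtering, following [30]).
    outputs = {run_solution(sol, inp) for sol in solutions}
    return len(outputs) == 1

def filter_inputs(inputs, solutions, run_solution):
    # First-stage validator checks are assumed to have run already.
    return [i for i in inputs if is_consistent(i, solutions, run_solution)]

def top_slowest(tests, instr_count, k=10):
    # Rank surviving tests by the CPU instructions they induce and keep
    # the k longest-running ones per program.
    return sorted(tests, key=instr_count, reverse=True)[:k]
```

Any input on which correct solutions disagree is treated as likely invalid and dropped before ranking.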
# 4.2 Main Results

To evaluate the effectiveness of PERFFORGE tests in stressing performance, we compare the slowdown PERFFORGE brings to the programs against those by EVALPERF and TG-prompt.

Table 1: WEDGE versus baselines (described in §4.1) and its ablation.

Table 1 shows that tests generated by WEDGE lead programs to execute, on average, 84.5% and 85.7% (70.5% and 66.7% median) more CPU instructions than the two variants of EVALPERF, respectively. They also incur 54% (25% median) more CPU instructions than TG-prompt. WEDGE tests achieve the dominant win rate: on 59% of programs, they run the slowest among all the baselines. We also compute the slowdown the tests achieve over the default tests in CodeContests. On average, WEDGE tests outperform EVALPERF ones by $2.3\times$ and TG-prompt by $1.3\times$. Figure 3 illustrates a head-to-head comparison between PERFFORGE and the baselines, where WEDGE slows down significantly more programs than the other baselines. We extensively analyzed the performance-characterizing constraints as well as the test generators synthesized by WEDGE, the other baselines, and the benchmarks. We observe that the inputs generated by WEDGE focus more on the inefficient implementations in the code identified by the performance-characterizing constraints, while those by EVALPERF are optimized to stress the input length specified in the problem statement. TG-prompt, while not explicitly implemented to maximize bounds, faces challenges in reasoning about holistic program behaviors end-to-end. Even with chain-of-thought prompting, it still reduces to mostly generic length-stressing inputs specific to the problem statement (e.g., large graphs for graph-based problems). We leave the detailed description of these qualitative studies to Appendix C due to space constraints.
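The win-rate and relative-slowdown comparisons reported above can be sketched from per-program instruction counts. This is an illustrative computation under our own naming, not the paper's evaluation code:

```python
def win_rate(counts):
    # counts: {method: [instructions per program]}. A method "wins" on a
    # program when its test drives that program to the most instructions.
    methods, n = list(counts), len(next(iter(counts.values())))
    wins = {m: 0 for m in methods}
    for i in range(n):
        wins[max(methods, key=lambda m: counts[m][i])] += 1
    return {m: wins[m] / n for m in methods}

def avg_relative_increase(a, b):
    # Mean per-program increase of method a's instruction counts over b's.
    return sum(x / y - 1.0 for x, y in zip(a, b)) / len(a)
```

On ties, `max` credits the first method listed; a real evaluation would need an explicit tie-breaking policy.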
Ablations. We ablate the two designs related to performance-characterizing constraints: (1) guiding the mutator generation with constraints and (2) instrumenting the program with constraint checker code. For (1), we use the default AFL++ mutator as the baseline. For (2), we use the original, uninstrumented program as the baseline. Table 1 shows that WEDGE's generated tests are on average 48.3% and 128.3% slower (in terms of CPU instruction count and relative slowdown) than WEDGENOINSTR, showing that instrumenting programs with constraint checkers effectively guides fuzzing. Similarly, WEDGE's generated tests incur 287% more CPU instructions than those generated by WEDGEDEFAULTMUT (with the default mutator). On 63.02% of solutions, WEDGENOINSTR tests are slower (significant at the 0.05 level, based on the Mann-Whitney test [51]).

# 4.3 Utility of PERFFORGE

As described in §4.1, we investigate the utility of PERFFORGE by comparing PERFFORGE tests to the default CodeContests tests ($\mathrm{CC_{default}}$), which only evaluate correctness, on (1) improving LLM-based code optimization (EFFI-LEARNER [3]) through execution feedback, and (2) fairly measuring the performance improvement of approaches whose evaluation relied only on correctness tests (PIE [5]). To ensure a fair comparison, we adopt the exact same evaluation setup and metrics used by the two baselines. For example, we include memory usage to evaluate how PERFFORGE improves EFFI-LEARNER. Since EFFI-LEARNER relies on the original CodeContests tests, yet about 15% of problems have fewer than ten tests available, we instead use the top-5 slowest tests per solution.

Figure 3: A head-to-head comparison between PERFFORGE and the baseline tests. The bars represent the number of programs where one incurs a larger number of CPU instructions.
The x-axis shows the ratio between the corresponding CPU instruction counts.

Table 2: Running EFFI-LEARNER for code optimization using execution feedback from different types of test sets. PERFFORGE improves EFFI-LEARNER the most.

Improving code optimization with execution feedback. We collect a corpus of 280 slow Python solutions from 56 problems in PERFFORGE following EFFI-LEARNER's filtering strategy. For each solution, we run EFFI-LEARNER with three different prompts to let it optimize the code: (1) the solution code alone with no execution feedback (None); (2) code annotated with profiling information from running the original CodeContests default tests ($\mathrm{CC_{default}}$); (3) code annotated with profiling information derived from our PERFFORGE tests (PerfForge). We consider both OpenAI GPT-4o and DeepSeek V3 as the backends for EFFI-LEARNER. Table 2 shows that PERFFORGE tests achieve the best performance improvement. EFFI-LEARNER can optimize the code to execute 24% fewer instructions (or approximately 10 percentage points), run 17% faster, and use 25% less memory on average when providing GPT-4o with PERFFORGE execution profiles as opposed to their original setup. Similarly, EFFI-LEARNER can optimize the code to execute 15% fewer instructions, run 46% faster, and use 16% less memory when providing DeepSeek with the same PERFFORGE-driven execution profiles.

Evaluating code optimization fairly. We show how PERFFORGE can measure the performance improvement claimed by existing code optimization approaches more fairly than correctness tests. To this end, we consider PIE [5], a state-of-the-art LLM-based code optimizer based on finetuning, which relied on the default correctness tests to measure its performance improvement.
We select their three most effective models (CodeLlama 13b), finetuned with the following different datasets: (1) HQ (high-quality) data annotated by the authors ($\mathrm{PIE_{H}}$); (2) performance-conditioned data to optimize C++ programs, annotated with a target optimization score reflecting their potential "peak performance" ($\mathrm{PIE_{C}}$); and (3) all data from the entire PIE dataset. We then adapt our program selection to match the requirements of PIE (details in Appendix A.1). We follow the same set of metrics as [5], measuring the average relative speedup between the original and optimized code in instruction counts and physical time, as well as the percentage of programs that the LLM models can optimize by at least 10% (%Opt) [5]. Table 3 illustrates how our tests better characterize the performance bottlenecks. PERFFORGE outperforms the CodeContests default tests ($\mathrm{CC_{default}}$) and their top five slowest tests ($\mathrm{CC_{slow}}$) by 24% to 149% in terms of instruction counts and by 5% to 27% in terms of physical time. It also helps discover that between 7% and 48% more programs have actually been meaningfully optimized and run at least 10% faster.

Table 3: PIE experiment: average speedup and fraction of optimized programs, i.e., at least 10% faster (%Opt), evaluated by different test sets following [5]. The top-performing test set is highlighted.

# 4.4 Sensitivity Analysis

Discriminative power of performance-characterizing constraints. To investigate whether and how WEDGE-generated performance-characterizing constraints can indeed capture performance-stressing inputs, we select 810 programs in CodeContests where both constraint-satisfying and constraint-violating inputs exist. Results show that constraint-satisfying inputs are, on average, $38.6\times$ slower than constraint-violating inputs. We conduct a Mann-Whitney test [51]: constraint-satisfying inputs are significantly slower ($p < 0.05$) than constraint-violating inputs on 92.84% of programs.

Impact of constraints in guiding fuzzing. To better understand the impact of constraint guidance (including the mutator and the code instrumentation for coverage guidance), we calculate the ratio of constraint-satisfying inputs (out of valid inputs) per strategy. Results show that the ratios of constraint-satisfying inputs among the generated inputs of AFL++, WEDGEDEFAULTMUT, WEDGENOINSTR, and WEDGE are 40.42%, 41.44%, 77.62%, and 80.48%, respectively. In other words, both the performance-characterizing constraints and the constraint checker code contribute positively to the ratio of constraint-satisfying inputs. Furthermore, strategies that yield a higher proportion of constraint-satisfying inputs tend to achieve better performance (see Section 4.2), indicating that satisfying performance-characterizing constraints correlates with the generation of more stressing test inputs.

Effect of input size. We investigate how input size affects the effectiveness of PERFFORGE, considering that leveraging fuzzing to generate large inputs is a known challenging problem [37]. We observe that our framework outperforms the baselines by larger margins when we further restrict the input size to less than 1KB. In particular, for problems whose inputs are less than 1KB, the slowdown achieved by WEDGE is $3\times$, almost double the $1.5\times$ achieved on the full set of problems without such a restriction. These findings underscore that the performance-stressing characteristics of our tests stem from inputs being designed to target implementation-specific bottlenecks rather than being simply length-stressing. We put the detailed results in Appendix B.3 due to space constraints.
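The significance test used in this analysis can be sketched with a normal-approximation Mann-Whitney U. This is a simplified version (average ranks for ties, but no tie correction in the variance); real analyses should use a vetted implementation such as `scipy.stats.mannwhitneyu`:

```python
import math

def mann_whitney_p(xs, ys):
    # Two-sided Mann-Whitney U test via the normal approximation,
    # adequate for moderate sample sizes without heavy ties.
    pooled = sorted((v, g) for g, vals in ((0, xs), (1, ys)) for v in vals)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        # Tied values share the average of ranks i+1 .. j.
        for k in range(i, j):
            ranks.setdefault(pooled[k][0], (i + j + 1) / 2.0)
        i = j
    n1, n2 = len(xs), len(ys)
    u = sum(ranks[v] for v in xs) - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    # Two-sided p-value from the standard normal CDF.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Two fully separated samples yield a small p-value, while identical samples yield p = 1.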
Large Language Models (LLMs) have been increasingly used to optimize code efficiency. Evaluating their effectiveness, and further suggesting optimization opportunities, often relies on high-quality tests that expose the performance bottlenecks present in a program. However, existing approaches rely on a limited set of hand-curated inputs or on uninteresting, LLM-generated length-stressing tests, failing to reveal more nuanced optimization opportunities. We present WEDGE, a framework for generating performance-stressing inputs for a given program under test. WEDGE synthesizes explicit performance-characterizing constraints in the form of branch conditions to partition the program's execution space into performance-specific regions. When integrated with a coverage-guided fuzzer, reaching different regions introduces explicit rewards for test generation to explore inefficient implementations. Our evaluation shows that WEDGE introduces a significant slowdown compared to the tests in CodeContests and to those claimed to be optimized by existing approaches. From the utility perspective, integrating our tests substantially improves existing code optimization approaches that rely on test-driven execution feedback. We release PERFFORGE, the performance tests generated by WEDGE, to benchmark future approaches for efficient code generation at https://github.com/UChiSeclab/perfforge.
# 1. INTRODUCTION

Sampling is a musical technique that "incorporates portions of existing sound recordings into a newly collaged composition" [1]. The samples often undergo significant modification during this creative process: they may be pitch-shifted, time-stretched and heavily processed with audio effects (henceforth sampling transformations), and are typically combined with other musical elements, creating "musical interference" that makes identification difficult even for human experts. The relevance of this practice is underlined by the fact that, since the mass popularisation of hip hop, disco and electronic dance music, this kind of "transformative appropriation" has become one of the most important techniques for composers and songwriters [2]. Automatic sample identification (ASID) is a crucial task in music retrieval: given an audio query - either a small segment or an entire music track - the goal is to retrieve the sample source from a database of music recordings, even if sampling transformations have been applied. The potential to substantially impact domains such as attribution and copyright highlights the relevance of this task for music creators and rights holders, as well as music information retrieval (MIR) researchers. This task is particularly challenging as sampling transformations can drastically alter audio features while maintaining perceptual similarity. A reasonable approach is to take cues from deep learning-based audio fingerprinting research, learning metrics that allow for a similarity-based search and retrieval system. Additionally, augmentations in the training pipeline allow models to learn invariance to the sampling transformations employed in music production. Recent audio fingerprinting research has successfully employed Graph Neural Networks (GNNs), achieving state-of-the-art results while using compact architectures that facilitate efficient training, which informs this work.
Progress in ASID has been hindered by the limited availability of well-annotated datasets that reflect real-world sampling practices. The Sample100 dataset [3] is the only publicly available dataset of annotations specifically addressing the presence of samples in commercially produced songs. In this paper we present a revised version of this dataset, annotated by experts to include more fine-grained temporal annotations of the samples, as well as additional comments, time-stretching estimates and instrumentation information. We use these new annotations to report segment-wise hit-rates and to analyse the performance of our system in relation to the type of sample and the augmentations performed during the artistic process. Our key contributions are as follows:

• We propose the adaptation of a lightweight Graph Neural Network as the neural encoder for ASID.
• We introduce a binary cross-attention classifier to facilitate accurate ranking and refining of retrieved audio fingerprints.
• We contribute new fine-grained temporal annotations to the Sample100 dataset, and evaluate our model's performance on short-query retrieval, demonstrating superior top-N hit-rates compared to the baseline.
• We present a detailed analysis of retrieval performance on different types of samples and discuss the viability of the proposed framework.

Our code as well as the newly extended Sample100 dataset have been made available for reproducibility.¹

# 2. RELATED WORKS

Despite the ASID task being a relevant and challenging one for the MIR community, there have been few attempts to tackle it. Foundational work by Van Balen et al. [3] introduced the Sample100 dataset and proposed the adaptation of a spectral peak-based audio fingerprinting framework to make it robust to pitch-shifting. Gururani et al.
[4] proposed a system inspired by music cover identification, using Non-negative Matrix Factorization to create templates of the samples and Dynamic Time Warping to achieve a detection algorithm robust to time-shifting. Both of these works focus primarily on robustness against individual sampling transformations, but neither addresses the broader range typically encountered in real-world scenarios. Other traditional fingerprinting methods that were effective for audio retrieval tasks, such as audfprint [5] and Panako [6], have also been tested on this task [7] and proved insufficient for ASID, struggling with the challenges of combined sampling transformations and interfering "musical noise" (the overlying musical composition). More recently, the first deep learning-based approach by Cheston et al. [7] achieved state-of-the-art performance on the Sample100 dataset using a CNN architecture (ResNet50-IBN) previously used for cover song identification [8] and exploiting music source separation to create synthetic training data. This approach serves as our baseline and demonstrates both the feasibility and the remaining challenges of applying deep learning to ASID. Current state-of-the-art audio retrieval systems predominantly use CNNs [8–10] or transformers [11] trained with contrastive learning objectives. While effective, these architectures typically require significant computational resources and large training batches, limiting their practical viability. These limitations can be addressed by more parameter-efficient approaches based on Graph Neural Networks (GNNs), which excel at capturing complex structural patterns in non-Euclidean spaces [12]. GNNs have proven effective for audio tasks where temporal and spectral relationships are important, including audio fingerprinting [13] and audio tagging [14], by effectively modelling local and global interactions between time-frequency regions.

# 3. METHODOLOGY

ASID involves two categories of audio recordings: a reference, an original music recording, and a query, a new recording that incorporates (i.e., samples) parts of the reference. For training, we generate query-reference pairs by re-mixing source-separated stems as proposed in [7]. For evaluation, our retrieval methodology employs a two-stage process: initial candidate selection via approximate nearest-neighbour search, followed by fine-grained ranking with the cross-attention classifier. Figure 1 illustrates the complete retrieval pipeline, detailing how reference matches are retrieved and ranked for a given query.

# 3.1 Input Features

Our system employs log-scaled Mel-spectrograms as input features. Given an audio waveform $y \in \mathbb{R}^{t}$, sampled at 16 kHz, we first compute its Mel-spectrogram representation $\mathcal{X} \in \mathbb{R}^{F \times T}$. Here, $F$ denotes the number of Mel-frequency bins, and $T$ is the number of temporal frames. During training, we randomly sample short audio segments of fixed duration $t_{\mathrm{seg}}$ from each recording in the training dataset and use them to generate proxy query-reference pairs (see Section 3.4.1). For retrieval, we use real query and reference audio recordings, which are segmented into overlapping segments of length $t_{\mathrm{seg}}$. Section 5 details the configuration of the input features and hyperparameters.

# 3.2 Encoder Architecture

Our GNN encoder builds upon the architecture introduced in [13]. Given an input spectrogram $\mathcal{X}$, we first represent it as a set of three-dimensional time-frequency points, each described by its time index, frequency bin index, and amplitude value. From this initial representation, we produce overlapping patch embeddings by aggregating local neighbourhoods of time-frequency points into latent vectors.
Formally, each resulting patch embedding is produced by a mapping: $$ f : \mathbb{R}^{3 \times p} \to \mathbb{R}^{d}, $$ where $p$ denotes the number of neighbouring points aggregated per patch, and $d$ is the dimensionality of the latent embedding. These patch embeddings serve directly as nodes in the subsequent graph structure. Next, we construct a k-nearest-neighbour (kNN) graph from these node embeddings. Specifically, for each node embedding $x_i$, we identify its $k$ nearest neighbours based on cosine similarity in the latent embedding space. The resulting edges represent latent structural relationships among spectrogram patches. Node embeddings are then iteratively refined via graph convolution (GraphConv) layers. For each node embedding $x_i$, we aggregate information from its neighbours $x_j$, where $j \in \mathcal{N}(x_i)$. Formally, the update rule is given by: $$ y_i = x_i + \sigma\big(\mathrm{AGG}(\{x_j : j \in \mathcal{N}(x_i)\})\big), $$ where $y_i$ is the updated embedding, $\sigma$ denotes a nonlinear activation function, $\mathcal{N}(x_i)$ is the set of neighbours of node $x_i$, and AGG represents an aggregation operation summarizing relevant information from neighbouring nodes. Through iterative aggregation, each node embedding progressively encodes increasingly rich contextual and structural information. The GNN encoder comprises multiple blocks of GraphConv layers, each followed by feedforward network (FFN) layers. At the beginning of each block, the kNN graph is dynamically reconstructed to reflect the updated node embeddings. Figure 1.
Illustrated ASID methodology: (A) Given a query, we compute segment-level embeddings (fingerprints), matched to reference embeddings via approximate nearest-neighbour (ANN) search, based on which candidate songs are retrieved from the reference database through a lookup process (dotted arrows). (B) A multi-head cross-attention (MHCA) classifier refines and ranks candidates using node embedding matrices $\mathrm{NM}_q$ (query) and $\mathrm{NM}_r$ (references).

The output of the GNN encoder is a set of refined node embeddings, collectively referred to as the node embedding matrix, which serve as input features to the cross-attention classifier. Finally, these node embeddings are average-pooled and projected into audio fingerprints. Both latent embeddings are used in the subsequent retrieval refinement stage. For a comprehensive discussion of architectural details and design considerations, we refer readers to [13].

# 3.3 Cross-Attention Classifier

To capture the latent relationships between the query and reference node embeddings, we introduce a multi-head cross-attention classifier. Given a query and a reference audio segment, we first compute the node embedding matrices $q \in \mathbb{R}^{N \times d_n}$ and $r \in \mathbb{R}^{N \times d_n}$, respectively. Here, $N$ is the number of nodes, and $d_n$ is the dimensionality of each node embedding. We compute attention-weighted embeddings as follows: $$ \mathbf{C} = \mathrm{MHA}(q, r, r) $$ where $\mathrm{MHA}(\cdot)$ denotes standard multi-head attention [15]. The resulting embedding matrix $\mathbf{C} \in \mathbb{R}^{N \times d_n}$ is an attention-weighted transformation of $r$, where attention is computed between corresponding node embeddings in $q$ and $r$.
$\mathbf{C}$ is then aggregated by mean pooling, producing a single context vector $\mathbf{c} \in \mathbb{R}^{d_n}$: $$ \mathbf{c} = \frac{1}{N} \sum_{j=1}^{N} \mathbf{C}_j $$ where $\mathbf{C}_j$ is the context vector of the $j$-th node embedding. Finally, the context vector $\mathbf{c}$ is transformed by a shallow nonlinear classifier into a scalar confidence score $s$: $$ s = \sigma(\mathbf{w}^T \mathbf{c} + b) $$ where $\mathbf{w} \in \mathbb{R}^{d_n}$ and $b \in \mathbb{R}$ are learnable parameters, and $\sigma$ denotes the sigmoid activation function. The scalar $s$ indicates the confidence that the query and reference segments match. As shown in Figure 1, at retrieval time this score is used as a ranking mechanism as well as a measure for rejecting low-confidence candidates.

# 3.4 Training Pipeline

Our proposed approach involves two distinct training stages: a self-supervised contrastive learning stage for embedding training and a subsequent binary classification stage for the downstream cross-attention classifier. Both stages use identical procedures to produce proxy query-reference pairs from the source-separated training data, closely following the methodology established in prior work [7].

# 3.4.1 Query-Reference Pair Generation

Let us denote the stems extracted from the training audio source $x$ as a set $S = \{s_1, s_2, ..., s_K\}$, where each stem $s_k$ corresponds to a source-separated audio component (e.g., vocals, drums, bass). Given a random timestamp segment $t_s$ starting at $t$ and of length $\Delta t$, we first extract corresponding audio segments from each stem as $$ s_k(t_s) = s_k[t, t + \Delta t] $$ resulting in the set $\{s_1(t_s), s_2(t_s), ..., s_K(t_s)\}$.
These stem segments are partitioned randomly into two subsets, $S_q$ and $S_r$, with $S_q \cup S_r = S$ and $S_q \cap S_r = \emptyset$. A query segment $x_q$ is formed as the sum of stems in $S_q$: $$ x_q = \sum_{s \in S_q} s(t_s). $$ The reference segment $x_r$ is generated by mixing an augmented version of the query segment with the remaining stems: $$ x_r = \mathrm{aug}_2\left( \mathrm{aug}_1(x_q) + \sum_{s \in S_r} s(t_s) \right). $$ Here, $\mathrm{aug}_1$ and $\mathrm{aug}_2$ represent audio effects functions applied sequentially to simulate realistic music production transformations. The effect parameters are sampled from a uniform distribution. Specifically,

• $\mathrm{aug}_1$: time-offset ($\pm 250$ ms) and gain variation ($\pm 10$ dB).
• $\mathrm{aug}_2$: pitch-shifting ($\pm 3$ semitones) and time-stretching ($70-150\%$).

The source-separation system (see Section 4.1) allows the extraction of musically salient sources that can constitute a sample. The pair $(x_q, x_r)$ constitutes a positive query-reference example; $x_q$ is a proxy for a query containing an instance of a sample, and $x_r$ represents a reference example which contains the sample, creatively distorted and present in a mix along with other musical elements.

# 3.4.2 Contrastive Learning

We train the encoder using a self-supervised contrastive learning framework. Given a batch of $B$ pairs $\{(x_q^i, x_r^i)\}_{i=1}^{B}$, we obtain their corresponding audio fingerprints $\{(z_q^i, z_r^i)\}_{i=1}^{B}$ from the encoder.
We then employ the Normalized Temperature-scaled Cross Entropy (NT-Xent) loss [16] to maximize the similarity between embeddings from positive pairs, while minimizing the cosine similarity to embeddings from all other pairs in the batch.

# 3.4.3 Downstream Classifier Training

The cross-attention classifier is trained as a downstream task, with the encoder parameters frozen after the contrastive learning stage. For this stage, we discard the previously used projection network and directly use the node embedding matrix obtained from the encoder. Training batches consist of query-reference pairs generated identically to the contrastive learning stage. Let $\mathcal{Q} = \{q_i\}_{i=1}^{B_c}$ and $\mathcal{R} = \{r_j\}_{j=1}^{B_c}$ represent the query and reference embedding sets in a batch, respectively, where each embedding $q_i, r_j \in \mathbb{R}^{N \times d_n}$, and $B_c$ is the batch size. Positive examples correspond to pairs of identical indices: $$ \mathcal{P} = \{(q_i, r_j) \mid i = j\}, $$ while negative examples are selected from pairs with non-identical indices via hard-negative mining. Specifically, we select negative pairs as the subset of non-positive pairs that maximize audio fingerprint similarity, thus being the most confounding: $$ \mathcal{N} = \{(q_i, r_j^-) \mid i \neq j,\ r_j^- = \arg\max_{r_j, j \neq i} \mathrm{sim}(z_i, z_j)\}. $$ We maintain a fixed ratio of 1:3 for positive to negative pairs within each training batch. The classifier outputs a scalar prediction $p \in [0, 1]$, trained with the binary cross-entropy (BCE) loss, where the label for pairs $(q_i, r_j) \in \mathcal{P}$ is 1 and for pairs $(q_i, r_j^-) \in \mathcal{N}$ is 0.
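The hard-negative mining step above can be sketched over a batch of fingerprints (a minimal numpy illustration under our own naming; the actual pipeline mines from the encoder's audio fingerprints):

```python
import numpy as np

def mine_hard_negatives(z_q, z_r):
    # For each query fingerprint, pick the non-matching reference whose
    # cosine similarity is highest, i.e., the most confounding negative.
    q = z_q / np.linalg.norm(z_q, axis=1, keepdims=True)
    r = z_r / np.linalg.norm(z_r, axis=1, keepdims=True)
    sim = q @ r.T
    np.fill_diagonal(sim, -np.inf)  # exclude the positive pair (i == j)
    return sim.argmax(axis=1)       # hardest negative index per query
```

The returned indices give the $r_j^-$ used to label negative pairs with 0 for the BCE objective.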
# 3.5 Retrieval and Evaluation

Our retrieval system, illustrated in Figure 1, operates in two sequential stages:

• Approximate nearest-neighbour (ANN) search, which performs a fast, coarse search for candidate reference audio fingerprints from the database.
• Cross-attention classifier scoring, which refines the candidate set and ranks candidates by relevance.

For every overlapping segment (computed as described in Section 3.1) in the query, we probe the reference database for matches based on the similarity of the audio fingerprints, yielding a set of candidate matches. In the second stage, we utilize the cross-attention classifier to refine these candidate matches. For each candidate segment retrieved, we extract its corresponding node embedding matrix. Given a query recording, represented as a sequence of node embedding matrices, we compute classifier scores $p(q, r)$ for each pair of query $q$ and retrieved candidate $r$. The final candidate segment-level confidence score is determined by selecting the maximum classifier score over all segments of the query: $$ p_{\mathrm{clf}}(q, r) = \max_{q_i \in Q} p(q_i, r). $$ We reject candidate segments with confidence scores $p_{\mathrm{clf}}(q, r) < 0.5$. Subsequently, we aggregate these accepted segment-level scores to obtain a song-level retrieval score. Specifically, for each unique reference recording, we sum the segment-level confidence scores: $$ P_{\mathrm{song}}(q, R) = \sum_{r \in R} p_{\mathrm{clf}}(q, r), $$ where $R$ denotes the set of retrieved segments belonging to the same reference song. The resulting aggregated scores $P_{\mathrm{song}}(q, R)$ provide a robust ranking of candidate songs for each query recording.

# 4. DATASET

# 4.1 Training Dataset

For training, we use the Free Music Archive (FMA) medium dataset [17], which contains 25,000 30-second tracks across 16 genres.
We pre-processed this dataset to make it suitable for our stem-mixing contrastive learning approach. We used the current SOTA algorithm “BeatThis” [18] to perform beat tracking, using its output as a proxy for musical rhythmic regularity in the FMA tracks, and excluding 2,533 tracks with fewer than 32 beats after the first downbeat. This filtering ensured that our training data consisted only of musical content with some level of rhythmic structure. To generate the stems used for the synthetic training pairs, we applied source separation with the Hybrid Transformer Demucs model (htdemucs) [19] to each usable track, separating it into drums, bass, vocals, and “other” stems.

# 4.2 Evaluation Dataset

For the evaluation of our system, we use the Sample100 dataset [3]. The dataset consists of 75 full-length hip-hop recordings (queries) containing samples from 68 full-length songs (references) across a variety of genres, with R&B/Soul representing the majority [7]. It contains 106 sample relationships and a total of 137 sample occurrences, as some queries use multiple samples and some references appear in multiple queries. To challenge retrieval systems, the dataset includes 320 additional “noise” tracks with a similar genre distribution, which are not sampled in any query. Because samples are typically created from a short segment of a song, only a small portion of each candidate track is sampled and present in queries: sample lengths range from just one second to 26 seconds. The samples represent real-world musical “transformative appropriation” [2], including tonal samples (riffs), percussive drum breaks (beats), and 1-note micro-samples. Non-musical samples (e.g. film dialogue) are not included. To enable more detailed evaluation, we present an extended version of the Sample100 dataset with fine-grained temporal annotations performed by expert musicians using Sonic Visualiser [20].
Unlike the original dataset, which only provided first-occurrence timestamps at 1-second precision, our annotations include precise start and end times for all sample occurrences at $\pm 250\,\mathrm{ms}$ resolution, transforming the dataset into a segment-wise evaluation resource. This improved temporal granularity allows for more accurate evaluation of ASID systems by testing with short query snippets drawn from anywhere within the sampled material. We further enrich the dataset by adding estimates of the time-stretching ratio between the reference and query tracks, as well as instrumentation (stem) annotations for both the original material and the interfering instruments in the query, and by expanding the comments about the samples. The time-stretching ratio was calculated from the tempo of both query and reference segments, determined through a combination of automatic beat tracking [18] and manual verification. Stem annotations were performed by listening to the tracks and their source-separated stems to ensure accuracy. Relevant sample class counts are shown in Table 4, including a categorisation into substantial or minimal time-stretching. This new information will enable more nuanced analysis of our model’s performance across different types of sampling practices in Section 6.3.

# 5. EXPERIMENTAL SETUP

# 5.1 Hyperparameters and Configuration

Our experimental setup and hyperparameter choices are summarized in Table 1, with certain parameters detailed in the preceding sections. The contrastive learning stage was performed on an NVIDIA A100 GPU, with models trained for a maximum of 180 epochs; we employed early stopping based on validation performance. Training used the Adam optimizer coupled with a cosine annealing learning-rate scheduler.
For the downstream cross-attention classifier, we trained for a maximum of 5 epochs using the Adam optimizer with a fixed learning rate, keeping the encoder parameters frozen to preserve the learned representations from the contrastive learning stage. For the ANN search algorithm, we use IVF-PQ [21], an efficient choice for retrieval tasks in large vector databases.

Table 1. Experimental Configuration

# 5.2 Evaluation Metrics

The ASID task is fundamentally a retrieval problem, where the goal is to rank candidate audio segments by their relevance to a query. Hence, we adopt mean average precision (mAP) [22] as our primary metric, where the query is computed from a full song containing a sample. mAP summarizes retrieval quality by aggregating the precision values at the ranks where relevant items are retrieved, averaged across all queries. Additionally, inspired by an established practice in the audio fingerprinting literature [10], we report top-$N$ hit rates. Specifically, we measure the proportion of queries for which at least one correct sample is retrieved within the top $N$ ranked results, for different query sizes (5 s to 20 s). This metric provides an intuitive indication of practical retrieval accuracy and of the system’s efficacy on short queries.

# 5.3 Baseline Framework

We compare our proposed system against the recent state-of-the-art baseline introduced by Cheston et al. [7]. Their framework employs a ResNet50-IBN architecture and a multi-task learning approach that jointly optimises a metric learning objective through triplet loss and an auxiliary classification task. This architecture has achieved state-of-the-art retrieval performance in terms of mAP. Due to practical computational constraints, we instead adopt and report results on a ResNet18-IBN model, which has a comparable number of parameters to our proposed GNN-based encoder.
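The metrics of Section 5.2 can be sketched in a few lines; the ranked lists below are toy examples, not the paper's data. Average precision sums precision at each rank holding a relevant item; mAP averages this over queries; the top-$N$ hit rate is the fraction of queries with at least one relevant item in the first $N$ ranks.

```python
# Minimal sketch of mAP and top-N hit rate over toy ranked lists.

def average_precision(ranked, relevant):
    hits, score = 0, 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / rank      # precision at this relevant rank
    return score / len(relevant)

def mean_average_precision(results):
    return sum(average_precision(r, rel) for r, rel in results) / len(results)

def top_n_hit_rate(results, n):
    return sum(any(x in rel for x in r[:n]) for r, rel in results) / len(results)

results = [
    (["a", "b", "c"], {"a"}),        # relevant item at rank 1: AP = 1.0
    (["x", "y", "z"], {"z"}),        # relevant item at rank 3: AP = 1/3
]
print(mean_average_precision(results))   # (1.0 + 1/3) / 2 ≈ 0.667
print(top_n_hit_rate(results, 1))        # 0.5
```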
Apart from the model size, we closely adhere to the training procedures and evaluation methodology outlined in [7]. We also include their reported best performance for reference.

# 6. RESULTS AND DISCUSSION

# 6.1 Benchmarking

We present the performance comparison between our proposed GNN+MHCA architecture and the baseline in Table 2. Our model matches the reported performance of the much larger ResNet50-IBN model, and significantly outperforms the reimplemented baseline.

Table 2. Performance of models on Sample100 dataset.

A key factor influencing our model’s performance is the batch size used during contrastive learning. Increasing the batch size from 256 to 1024 leads to an improvement of 6.9pp in mean average precision (mAP). This improvement occurs because larger batch sizes provide more negative samples per positive pair, enriching the diversity of the contrastive space. Consequently, the model learns embeddings that better discriminate between relevant and irrelevant examples.

Table 3. Hit rates of our framework and baseline.

Table 3 shows our model’s performance on short queries, a common use case in real-world sample identification scenarios. While the hit rates for shorter queries are comparable to the baseline, our framework exhibits significantly superior performance for longer queries (a 14.1pp increase in top-1 hit rate for 20-second queries). The progressive improvement in hit rates with increasing query length shows that our approach effectively aggregates segment-level confidence scores to retrieve the correct reference song.

# 6.2 Retrieval Refinement via Cross-Attention Classifier

To examine the impact of the cross-attention classifier as a retrieval refinement step, we conduct an ablation study. Table 2 shows that incorporating the classifier (MHCA) to rank retrieved results improves mAP by 2.6pp, confirming the utility of this refinement stage.
Additionally, to evaluate the classifier’s capability to reject irrelevant matches, we construct a balanced validation set comprising 300 positive query-reference pairs and 300 negative pairs drawn from the “noise” data described in Section 4.2. The classifier achieves an AUROC score of 0.776, indicating that it does not perfectly discriminate between genuine and confounding examples. Thus, the observed improvement in retrieval performance can be attributed to the combined effect of the two-stage retrieval process, rather than solely to the rejection capability of the classifier.

# 6.3 Performance by Sample Characteristics

To understand the performance of the model across sample characteristics, we computed the mAP for different categories of Sample100. As shown in Table 4, there is a modest performance gap between melodic/harmonic riff samples and percussive beat samples (the two 1-note samples were not taken into account). This may be attributed to the nature of beat samples, which consist primarily of drums that are often subject to overdubbing techniques in which producers layer additional percussion elements, and that may also be buried deeper in the mix beneath other instrumentation, potentially making them less salient for the GNN to capture. Further analysis examining the specific instrumentation in the reference and query, or applying source separation at detection time, is left for future work.

Table 4. Performance according to sample class.

A significant performance gap was observed in relation to time stretching, where we separated samples subjected to minimal time stretching $(<5\%)$ from those with significant time stretching $(>5\%)$. This 16.3pp difference in performance shows that although our model is robust to some degree of time-stretching, large changes in tempo fundamentally alter the temporal relationships between audio features that our model relies on for identification.
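For reference, the AUROC reported above is the probability that a randomly chosen positive pair outscores a randomly chosen negative pair, with ties counted as half. A minimal sketch with toy scores (not the paper's data):

```python
# AUROC as a pairwise ranking probability over toy classifier scores.

def auroc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auroc([0.9, 0.8, 0.4], [0.7, 0.3]))  # 5/6 ≈ 0.833
```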
Automatic sample identification (ASID), the detection and identification of portions of audio recordings that have been reused in new musical works, is an essential but challenging task in the field of audio query-based retrieval. While a related task, audio fingerprinting, has made significant progress in accurately retrieving musical content under "real world" (noisy, reverberant) conditions, ASID systems struggle to identify samples that have undergone musical modifications. Thus, a system robust to common music production transformations such as time-stretching, pitch-shifting, effects processing, and underlying or overlaying music is an important open challenge. In this work, we propose a lightweight and scalable encoding architecture employing a Graph Neural Network within a contrastive learning framework. Our model uses only 9% of the trainable parameters compared to the current state-of-the-art system while achieving comparable performance, reaching a mean average precision (mAP) of 44.2%. To enhance retrieval quality, we introduce a two-stage approach consisting of an initial coarse similarity search for candidate selection, followed by a cross-attention classifier that rejects irrelevant matches and refines the ranking of retrieved candidates - an essential capability absent in prior models. In addition, because queries in real-world applications are often short in duration, we benchmark our system for short queries using new fine-grained annotations for the Sample100 dataset, which we publish as part of this work.
[ "cs.SD", "cs.AI", "cs.IR", "H.5.5; I.2.6" ]
# 1 Introduction

Building upon autoregressive (AR) models, large language models (LLMs) [21, 20] have unified and dominated language tasks with promising intelligence in generality and versatility, demonstrating a promising path toward artificial general intelligence (AGI). Recently, the MAR series of methods [18, 10, 33] has demonstrated great success in conducting autoregressive image generation in continuous space. However, its potential for autoregressive video generation remains under-explored. Compared to images, video data is temporally sequential, making it more suitable for autoregressive modeling. A naive way of video autoregressive modeling directly adapts the paradigm of language models [20], which factorizes frames into discrete tokens and applies next-token prediction (denoted as NTP) in raster-scan order [16, 27, 1]. However, this paradigm for video generation suffers from several limitations: 1) Discrete tokens deviate from the inherent continuous distribution of video data and irreparably induce significant information loss. 2) The unidirectional modeling of visual tokens deviates from the interdependent nature of tokens within the same frame, and may be suboptimal in performance [18, 10]. 3) NTP demands substantial inference steps for video generation. Compared to NTP, mask-based autoregressive generation is a more promising direction [18]. However, it is nontrivial to incorporate the mask mechanism into autoregressive video generation. A desired approach is to sequentially generate each frame depending on all the previous context frames, but this poses challenges for the introduction of masking. Prevailing methods [3, 34] apply masking to each frame, but this introduces a training-inference gap. NOVA [8] proposes to decompose temporal and spatial generation by generating coarse features frame-by-frame and refining each frame with a spatial layer, but this complicates the framework and weakens temporal smoothness.
MAGI [37] mitigates this issue by appending a complete copy of the video sequence during training, but doubles the sequence length and training cost. Therefore, mask-based video autoregressive generation remains a promising but challenging paradigm that requires further exploration. In this paper, we propose VideoMAR, a decoder-only autoregressive video generation model with continuous tokens, integrating temporal frame-by-frame and spatial masked generation. To meet the requirement of sequentially generating each frame depending on all the previous context frames, VideoMAR preserves the complete context and introduces a next-frame diffusion loss during training. Besides, the extremely long token sequences of video data pose significant challenges in both efficiency and difficulty. To this end, we propose tailored strategies for training and inference. During training, we propose short-to-long curriculum learning to reduce the training difficulty and cost, and establish two-stage progressive-resolution training to support higher-resolution video generation. During inference, long token sequence generation is prone to severe accumulation error in late frames, due to the exposure bias issue [36]. We identify that temperature plays a crucial role in eliminating this error and propose the progressive temperature strategy. Furthermore, VideoMAR brings several unique capabilities of language models to video generation, e.g. key-value caching and extrapolation, demonstrating the potential for multi-modal unification. For example, thanks to our design, VideoMAR achieves high efficiency through simultaneous temporal KV caching and spatial parallel generation. VideoMAR also, for the first time, unlocks the capacity of simultaneous spatial and temporal extrapolation for video generation by incorporating 3D-RoPE.
On the VBench-I2V benchmark, VideoMAR achieves better performance than the Cosmos baseline, with a much smaller model size, data scale, and GPU resources.

# 2 Related Work

# 2.1 Autoregressive Video Generation

Raster-scan autoregressive models. Similar to the VQ quantization and NTP paradigm in autoregressive image generation models [9, 22, 26], some methods also employ this paradigm for autoregressive video generation [16, 28, 27, 23, 1]. For example, VideoPoet [16] employs a decoder-only transformer architecture to process multi-modal inputs, incorporating a mixture of multi-modal generative objectives. Cosmos [1] trains an autoregressive world foundation model via video generation.

Mask-based autoregressive models. Mask-based autoregressive models predict the masked tokens given the unmasked ones. They introduce a bidirectional transformer and predict randomly masked tokens by attending to unmasked conditions [3, 34, 11, 8, 37]. This paradigm enhances vanilla AR by predicting multiple tokens at every step. For example, MAGVIT [34] tackles various video synthesis tasks with a single model by randomly masking the video sequence. Genie [3] proposes interactive video game generation in an unsupervised manner and generates videos frame-by-frame. Inspired by the continuous tokens in MAR [18], some recent works also propose to combine continuous tokens and masked generation models for video generation [8, 37]. For example, NOVA [8] generates coarse features frame-by-frame and refines each frame with a spatial layer. MAGI [37] proposes complete teacher forcing by conditioning masked frames on complete observation frames.

# 2.2 Diffusion-based Long Video Generation

Recently, some diffusion-based video generation models attempt to extend the inference video length and achieve autoregressive-like video generation. These methods can be mainly divided into two types.
The first type [29, 12, 32] basically follows video diffusion models to repetitively generate video clips, and sequentially cascades these video chunks to achieve longer video generation. For example, CausVid [32] transforms bidirectional models into fast autoregressive ones through distillation. The second type applies varying levels of noise to different video chunks, imitating the causality of autoregressive generation. For example, DiffusionForcing [5] applies a higher noise level to later frames in each chunk and shifts across frames to generate more frames. MAGI-1 [2] follows this paradigm by applying varying noise levels to different chunks while keeping the noise level identical for frames within each chunk. These methods still fall within the range of diffusion models, employing a diffusion model for the generation of each frame or video chunk. They are therefore fundamentally different from autoregressive video generation, which employs AR models for token-wise generation.

# 3 Preliminary

Task definition and symbology setting. This paper focuses on image-to-video autoregressive generation, which sequentially generates the next frame, forming the complete video from the given initial image and text prompt. For notation clarity, we list the symbology settings in Table 1.

Table 1: Symbology settings.

# Autoregressive video generative model.

The limitations of the NTP paradigm are presented in the introduction. In this section, we focus on the mask-based generation paradigm. Mask-based generation is a common paradigm for autoregressive image generation [4, 18], which randomly masks partial image tokens and predicts these masked tokens from the remaining visible tokens. Compared to NTP, this paradigm has the advantage of parallel token generation in each step, significantly reducing the number of inference steps.
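To make the contrast concrete, the sketch below (illustrative, not from any paper's code; names are hypothetical) builds the two attention patterns at play in the following discussion: fully bidirectional attention over the whole video, as in the prevailing mask-based methods, versus a frame-wise causal pattern (bidirectional within a frame, causal across frames), which is the structure adopted later in this paper.

```python
# Attention patterns for T frames of n tokens each.
# mask[j][k] is True when token j may attend to token k.

def bidirectional_mask(T, n):
    # prevailing mask-based approach: every token attends to every token
    size = T * n
    return [[True] * size for _ in range(size)]

def frame_causal_mask(T, n):
    # bidirectional within a frame, causal across frames
    size = T * n
    frame = lambda idx: idx // n
    return [[frame(k) <= frame(j) for k in range(size)]
            for j in range(size)]

m = frame_causal_mask(T=3, n=2)
assert m[0][1]        # tokens of frame 0 see each other (spatially bidirectional)
assert m[4][1]        # frame 2 attends back to frame 0
assert not m[1][2]    # frame 0 cannot see frame 1 (temporal causality)
```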
When extended to autoregressive video generation, most existing methods [3, 34] treat the video sequence similarly to an image, modeling all tokens within each video frame equally and using bidirectional attention over the whole video:
$$ p(C, S_1, \ldots, S_T) = p(S_1^m, S_2^m, \ldots, S_T^m \mid C, S_1^v, S_2^v, \ldots, S_T^v). $$
However, this formulation bears several limitations: 1) These methods face the dilemma of either failing to perform frame-by-frame inference [34], or facing a training-inference bias when performing temporally sequential inference [3]. For example, the previous context frames are partially masked during training, whereas frame-by-frame generation requires that all tokens in previous frames be available for the next-frame prediction. 2) It depends on fixed-length video frames, which can lead to poor scalability in context and issues with coherence over longer video durations. It sacrifices the unique context-extension potential of token-wise modeling in AR models, which has been verified as possible and important in language models [25]. 3) It is incompatible with inference optimization methods such as the KV cache. Recently, some methods attempt to optimize mask-based generation [8, 37]. However, they either separate the temporal and spatial modeling, or double the sequence length. In this paper, starting from the desired way of sequentially generating each frame depending on all the previous context frames, we propose VideoMAR, a simple and novel paradigm for mask-based autoregressive video generation. VideoMAR is free from all three limitations.

Figure 2: Framework of VideoMAR. (a): Training flowchart of VideoMAR. We employ a frame-wise causal attention mask for temporal causality. Besides, we introduce the next-frame diffusion loss on the spatially partial-masked frame, which has complete previous frames.
(b): Efficient training of VideoMAR. We apply temporal short-to-long curriculum learning and spatial progressive-resolution training to reduce the training difficulty and cost of VideoMAR.

# 4 VideoMAR

# 4.1 Framework Overview

A video autoregressive model generates each frame conditioned on all the previous frames. Mask-based generation models generate the masked tokens from the remaining visible ones. Therefore, a mask-based video AR model should generate the masked tokens in each frame from all the tokens in the preceding frames and the visible tokens in the current frame. To this end, we propose VideoMAR, a decoder-only mask-based autoregressive video generation model. As shown in Figure 2, VideoMAR works with continuous tokens, first compressing the video into continuous tokens with a video VAE. Then, VideoMAR adopts a frame-wise causal attention mask, enabling temporally causal and spatially bidirectional modeling.

Next-frame loss. To fit the temporally causal inference and mask generation properties of VideoMAR, we devise the next-frame diffusion loss as depicted in Figure 2. Specifically, we randomly mask partial tokens in a certain frame $t \in [1, T]$. For the frames preceding frame $t$, all tokens remain unchanged. For the frames after frame $t$, all tokens are masked. This processed video sequence maintains temporal causality with the frame-wise causal attention mask. The token-wise diffusion loss optimization is only applied to the masked tokens in frame $t$. The tokens after frame $t$ are excluded from loss optimization, as not all tokens in their previous frames are available. During training, the frame $t$ is randomly chosen from $[1, T]$, spanning all the frames.
$$ p(C, S_1, \ldots, S_T) = \prod_{t=1}^{T} p(S_t^m \mid C, S_1, \ldots, S_{t-1}, S_t^v). $$

# 4.2 Training

Temporal short-to-long curriculum learning.
Due to the long-sequence nature of video data, it would be quite difficult and inefficient to directly model this long sequence. Unlike the joint bidirectional modeling of video frames in video diffusion models, a video autoregressive model generates the next frame based on previous frames. This temporally sequential property highlights the priority of early frames for both training and inference. To this end, we propose temporal short-to-long curriculum training. Specifically, we first train on short video clips with much shorter sequence length. In this stage, VideoMAR acquires the capacity for clear visual quality and basic temporal motion modeling, with substantially reduced training cost and difficulty. Once the early frames in the short clips have converged, we extend the video frame length to progressively capture larger temporal motion modeling capacity. This progressive short-to-long curriculum learning strategy decomposes the long-sequence modeling difficulty into successive phases in a very efficient way.

Spatial two-stage progressive-resolution training. In the first stage, VideoMAR trains at a low resolution of $256 \times 256$. During this stage, we adopt the above-mentioned short-to-long curriculum learning. With lower resolution and curriculum learning, VideoMAR acquires basic autoregressive video generation capacity with high efficiency. In the second stage, we finetune the model at a higher resolution of $480 \times 768$ to support higher-resolution video generation. To deal with the significantly increased token sequence length, we adopt a VAE with a higher compression ratio. Thanks to the employed relative positional encoding, this finetuning stage is empirically verified to be efficient.

# 4.3 Inference

Accumulation error. AR models inherently suffer from the exposure bias problem [36]. Specifically, each token is predicted from preceding ground-truth tokens during training.
However, during inference, all preceding tokens are the corresponding predicted ones, which may be incorrect. Such exposure bias is especially pronounced for video generation, which has long sequence lengths. To solve this, we first explore the importance and error of each generated frame. We find that early frames are vital for the motion degree and suffer from relatively small accumulated error [17], while late frames focus more on keeping visual quality and motion smoothness and suffer from more accumulated error, as shown in Figure 7. Furthermore, we uncover that temperature plays a key role in error suppression, where a low temperature reduces the accumulated error (detailed analyses and an ablation of the effect of temperature on generation results are available in the supplementary material). With this observation, we propose the progressive temperature strategy, applying a smaller temperature to the later frames. Specifically, the temperature varies from 1 to 0.9 across frames. The lower temperature in the later frames effectively suppresses the accumulated error and thus achieves much better visual quality. Besides, the slightly lower temperature is enough to keep motion smoothness given the previous dynamic video frames.

Efficient inference. VideoMAR generates video tokens via mask-based parallel generation, while maintaining frame-by-frame generation. This paradigm combines the advantages of spatial token-parallel generation and temporal KV cache acceleration. As shown in Table 2, we present the inference steps comparison with the NTP paradigm, where VideoMAR reduces the number of steps by over $20\times$ (from 1440 to 64). Based on the spatial parallel generation, VideoMAR further reduces the inference time by enabling the KV cache (from 672s to 134s). Compared to NTP, our method achieves more than $10\times$ acceleration (from 1941s to 134s).

Table 2: Efficient inference of VideoMAR.
VideoMAR combines the advantages of spatial token-parallel generation and temporal KV cache acceleration. Note that NTP* is listed only for inference speed comparison; we did not train such a model.

Spatial and temporal extrapolation ability. Context token length extrapolation is a basic capacity of LLMs, which can generate sequences of millions of tokens, significantly longer than the training sequence length. The position encoding (PE) method plays a key role in this capacity. In this paper, we experiment with different PE methods, including absolute cosine PE and RoPE [25] (a comparison of extrapolation ability across PE methods is available in the supplementary material). In the final implementation, we apply 3D-RoPE as the only position encoding method, and for the first time simultaneously boost the spatial and temporal extrapolation ability of a video autoregressive model. VideoMAR can generate videos of varying resolutions and lengths, despite being trained only on a fixed resolution and aspect ratio. Visual examples are available in Figure 4. We also depict more arbitrary scaling examples in Sec. D, where our method can flexibly generate videos of varying aspect ratios.

# 5 Experiments

# 5.1 Implementation Details

Experimental setup. We employ a general decoder-only architecture as the backbone of VideoMAR. The VideoMAR backbone consists of 36 transformer layers with a dimension of 1536. We mostly follow MAR [18] for the implementation of the token-wise diffusion loss. The denoising MLP consists of 3 blocks with a dimension of 1280. We adopt the masking and diffusion schedulers from MAR [18], using a masking ratio between 0.7 and 1.0 during training, and progressively reducing it from 1.0 to 0 following a cosine schedule with 64 autoregressive steps during inference. In line with common practice [13], we train with a 1000-step noise schedule but default to 100 steps for inference.
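The two inference-time schedules described above can be sketched as follows. The cosine masking decay follows the MAR-style scheduler just mentioned; the progressive temperature is assumed here to ramp linearly from 1.0 to 0.9 across frames, since the exact shape is not specified in the text. Function names are hypothetical.

```python
import math

def mask_ratio(step, num_steps):
    # fraction of tokens still masked after `step` of `num_steps` steps:
    # cosine decay from 1.0 (step 0) to 0.0 (final step)
    return math.cos(math.pi / 2 * step / num_steps)

def tokens_revealed(num_tokens, num_steps):
    # number of tokens generated at each autoregressive step
    masked = [round(num_tokens * mask_ratio(s, num_steps))
              for s in range(num_steps + 1)]
    return [masked[s] - masked[s + 1] for s in range(num_steps)]

def progressive_temperature(frame, num_frames, t_start=1.0, t_end=0.9):
    # assumed linear ramp from t_start to t_end across frames
    if num_frames <= 1:
        return t_start
    return t_start + (t_end - t_start) * frame / (num_frames - 1)

reveal = tokens_revealed(num_tokens=1024, num_steps=64)
print(sum(reveal))  # 1024: every token is generated exactly once
```

Few tokens are revealed in the early (high-mask-ratio) steps and many in the late steps, while the per-frame temperature drops smoothly toward 0.9 for the last frame.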
For the text prompt, following the practice in FAR [33], we employ Qwen2-1.5B [30] as our text encoder and adopt cross-attention for text condition injection. For the visual tokenizer, we adopt Cosmos-Tokenizer [1]. For the first stage ($256 \times 256$ resolution), we employ Cosmos-Tokenizer with $4 \times 8 \times 8$ compression in the temporal and spatial dimensions. The temporal short-to-long curriculum learning is arranged with a frame-length order of (5, 13, 25). For the second stage ($480 \times 768$ resolution), we employ Cosmos-Tokenizer with $8 \times 16 \times 16$ compression. We utilize the AdamW optimizer [19] ($\beta_1 = 0.9$, $\beta_2 = 0.95$) with a weight decay of 0.02 and a base learning rate of $1e^{-4}$ in all experiments. All weights are trained from scratch on 64 NVIDIA H20 GPUs.

Datasets. For image-to-video training, we employ 0.5M internal video-text pairs.

Evaluation. We use VBench-I2V [15] to evaluate the capacity of image-to-video generation across all 9 dimensions. For a given text prompt, we randomly generate 5 samples, each with a video size of $25 \times 256 \times 256$ for the first stage and $49 \times 480 \times 768$ for the second stage. We employ classifier-free guidance with a value of 3.0 to enhance the quality of the generated videos in all evaluation experiments. Each latent frame is generated with 64 autoregressive steps.

Corresponding videos. We have added all the corresponding video samples to the project page https://yuhuustc.github.io//projects/VideoMAR.html.

# 5.2 Quantitative Comparison

VideoMAR is comparable with diffusion image-to-video models and significantly surpasses its AR counterpart with much lower training costs. We perform a quantitative comparison with mainstream and cutting-edge image-to-video models, which can be divided into two types: diffusion models and autoregressive models.
For the diffusion model type, the compared models include I2VGen-XL [35], ConsistI2V [24], SEINE [7], VideoCrafter-I2V [6], CogVideoX-I2V [31], StepVideo-TI2V [14], and Magi-1 [2]. For the autoregressive type, Cosmos [1] is the most powerful autoregressive image-to-video model. As shown in Table 3, despite its significantly smaller model size (1.4B vs. 5B & 13B), data scale (0.5M vs. 100M), and training cost (64 H20 GPUs vs. 10000 H100 GPUs), VideoMAR outperforms Cosmos (84.51 vs. 84.22) on the VBench-I2V benchmark across a variety of sub-dimensions. VideoMAR also rivals some diffusion-based image-to-video models, including ConsistI2V and VideoCrafter-I2V, with much lower training costs. Our method still lags behind the latest SOTA diffusion models, such as Step-Video-TI2V and Magi-1; however, it targets the more challenging and promising AR paradigm for video generation, and achieving such performance with quite limited resources is already convincing and promising.

# 5.3 Qualitative Results

High motion smoothness and visual quality. We present the qualitative comparison in Figure 3. VideoMAR demonstrates high visual quality in each frame and smooth motion across adjacent frames. The motion generated by VideoMAR includes object motion, camera motion, and stable scene transitions. In contrast, Cosmos suffers from poor quality and details due to its use of discrete tokens. Besides, the motions in Cosmos are mostly camera motion, with few object motions.

Table 3: Image-to-video evaluation on VBench. We have classified existing video generation methods into different categories for better clarity. The baseline data is sourced from VBench-I2V [15]. The data of Cosmos is tested with its official code and recommended parameters.

Figure 3: Visual comparison between Cosmos 5B and VideoMAR on image-to-video generation. The motions in Cosmos are mostly camera motion, with few object motions.
Cosmos is also prone to failure cases that induce abrupt object changes and poor consistency: for example, the surfing man disappears in the third row, and the texture of the oranges changes in the fifth row.

Spatial and temporal extrapolation. Context-token extrapolation is a desirable property for autoregressive visual generation. In the spatial dimension, few current methods show such ability; in the temporal dimension, some methods claim longer video generation by repeatedly generating video chunks. VideoMAR demonstrates simultaneous spatial and temporal extrapolation, generating videos of larger resolution and longer duration (Figure 4). This is achieved in a training-free manner without chunk-wise splitting.

Video-to-video generation. Besides autoregressive image-to-video generation, VideoMAR also unlocks video-to-video generation: given videos of various frame lengths, it can generate further frames conditioned on the given video. In Figure 5, we present visual results conditioned on two frames. The video condition contains not only a spatial prior but also temporal motion cues, which enables VideoMAR to match the desired motion type and dynamic degree.

Figure 4: Spatial and temporal extrapolation capacity of VideoMAR.

Figure 5: Visual results of VideoMAR on video-to-video generation, conditioned on two frames.

# 5.4 Ablations

In this section, we verify the effectiveness of each design of VideoMAR: 1) the causal attention mask; 2) the next-frame diffusion loss; and 3) the temperature strategy. As shown in Table 4, we measure their effectiveness with the VBench-I2V metric; each design benefits performance. The additional advantage of inference acceleration is analyzed in Table 2.

Table 4: Ablation study of VideoMAR in stage 1 on VBench-I2V.

Effectiveness of temporal autoregressive modeling.
To highlight the advantages of temporal autoregressive modeling, we denote as the baseline Total mask (w/o Frame Loss, w/o Causal Attn, w/ Temperature strategy), which treats the video sequence like a set of images, as shown in Equation 3.1. As shown in Figure 6, temporal autoregressive modeling yields noticeably higher visual quality and smoother motion than the baseline. The fully random masked generation of the baseline is prone to poor local consistency with the reference image, e.g., the motorcycle in the second row is broken, and the human face in the last row is distorted.

Figure 6: Visual comparison between the total-mask baseline and our temporal autoregressive modeling (VideoMAR) in stage one. The baseline has poor local consistency with the reference image.

Figure 7: The effects of the temperature strategy on visual results. Without it, the late frames are prone to large accumulated errors and poor quality.

Temperature strategy & accumulation error. To validate the effectiveness of the temperature strategy in reducing accumulation error, we present a visual comparison in Figure 7. Without the temperature strategy, the late frames of the generated video tend to collapse due to errors accumulated over previous frames; for example, the buildings suffer severe distortion and lose their structural shape. In contrast, VideoMAR adopts a low temperature in the late frames, which maintains the dynamic degree across frames and the high visual quality of each frame.

# 6 Discussion and Future Work

Despite the superior results achieved, some limitations remain worth exploring: 1) VideoMAR could unify multiple tasks within this single paradigm, including text-to-image, text-to-video, image-to-video, video-to-video, and video editing.
However, due to limited resources, we first explore the image-to-video and video-to-video tasks and leave the others to future work. 2) VideoMAR can naturally function as an interactive world model by replacing the prompt with a frame-level interactive action condition. We will verify this in future work.
Mask-based autoregressive models have demonstrated promising image generation capability in continuous space, but their potential for video generation remains under-explored. In this paper, we propose **VideoMAR**, a concise and efficient decoder-only autoregressive image-to-video model with continuous tokens, combining temporal frame-by-frame and spatial masked generation. We first identify temporal causality and spatial bi-directionality as the first principles of video AR models, and propose the next-frame diffusion loss to integrate masked generation with video generation. Because the cost and difficulty of long-sequence autoregressive modeling is a basic but crucial issue, we further propose temporal short-to-long curriculum learning and spatial progressive-resolution training, and employ a progressive temperature strategy at inference time to mitigate accumulation error. Furthermore, VideoMAR transfers several unique capabilities of language models to video generation: it is inherently efficient thanks to simultaneous temporal KV caching and spatial parallel generation, and supports spatial and temporal extrapolation via 3D rotary embeddings. On the VBench-I2V benchmark, VideoMAR surpasses the previous state of the art (Cosmos I2V) while requiring significantly fewer parameters (9.3%), training data (0.5%), and GPU resources (0.2%).
# 1. Introduction The recent development and wider accessibility of large language models (LLMs) have spurred discussions about how these language models can be used in survey research. Potential applications span the entire survey lifecycle, including using LLMs for questionnaire design and pretesting (e.g., Götz et al., 2023), conducting interviews (e.g., Cuevas et al., 2023), synthesizing or imputing respondent data (e.g., Argyle et al., 2023; Kim & Lee, 2023), or detecting non-human respondents in online surveys (e.g., Lebrun et al., 2024). Due to their linguistic capacities, including their adaptability to different topics, the detection of nuance, implicitness, and intent in low-information multilingual textual input, and flexibility in generating textual output, LLMs also offer promising potential for classifying open-ended survey responses, which often are short and do not provide explicit context. For example, using LLMs for coding free-text social media data has successfully been applied for efficiently capturing detailed public opinion data (Ahnert et al., 2025; Cerina & Duch, 2023) – an application that could be transferred to open-ended responses. Other popular semi-automated classification approaches for open-ended responses, such as support vector machines or random forests (e.g., Haensch et al., 2022; Landesvatter, 2024), are less adaptable across different languages and often require substantial expertise, pre-processing, and training data coded by humans (Landesvatter, 2024). Since LLMs could potentially eliminate the need for these time- and expertise-intensive requirements, it is possible that they are an efficient alternative for classifying open-ended responses in survey research. 
While researchers have begun to explore this application of LLMs (Landesvatter, 2024; Mellon et al., 2024; Rytting et al., 2023), largely with success, most of these studies have focused on English-language responses, responses relating to non-complex topics, or single LLMs and prompting strategies. It is thus unclear to what extent existing findings generalize to other LLMs, prompting strategies, languages, and more complex topics. Furthermore, research has raised concerns about the reproducibility of LLM-generated output due to their non-deterministic design (Barrie et al., 2024), an issue that extends to the reliability of coding open-ended responses, for example when new survey data becomes available. Overall, the exact conditions under which LLMs can be applied to coding open-ended survey data, and the quality of these classifications compared to more established methods, have yet to be understood. In this project, we are the first to investigate to what extent different LLMs can be used to code non-English (German) open-ended responses on survey motivation given a predefined set of categories. We examine performance and reliability, and the dependency of these indicators on two factors – model selection and prompting approach. Specifically, we ask:

RQ1: Are there differences between LLMs regarding the performance and reliability of the coding?

RQ2: Are there differences between prompting approaches regarding the performance and reliability of LLM-based coding?

RQ2a: Does providing detailed descriptions of categories improve the performance and reliability of the coding?

RQ2b: To what extent does few-shot prompting impact the performance and reliability of the coding compared to zero-shot prompting?

RQ2c: Does fine-tuning an LLM on a subset of pre-coded response data improve the performance and reliability of the coding?
To do so, we contrast proprietary and open-source LLMs – GPT-4o, Llama 3.2, and Mistral NeMo, which are the most capable multilingual models of their respective families to date. We compare their category assignments when using zero-shot prompting (i.e., not providing examples) with and without category descriptions, few-shot prompting (i.e., providing exemplary classifications), and fine-tuning (i.e., further training of the LLM), and evaluate them against the codings of human experts. We also discuss the LLMs’ performance in contrast to other classification methods reported in previous studies. By comparing the use of different LLMs and prompting approaches for classifying open-ended survey responses in German, our study uniquely contributes to the growing body of research about the conditions under which LLMs can be efficiently, accurately, and reliably leveraged in survey research and about the impact of LLM use on data quality.

# 2. Background

There are three main types of approaches to coding open-ended survey responses: traditional human coding, supervised machine learning methods, and the still-emerging use of LLMs, each with distinct strengths and challenges. In this section, we review these methods, highlighting the potential of LLMs that has yet to be explored. In manual coding, human coders assign responses to predefined categories. While considered mostly accurate, this approach is time-intensive and costly, especially for large survey datasets or those with multiple open-ended questions (Landesvatter, 2024; Haensch et al., 2022). Costs are compounded when seeking to increase validity and reliability by having responses classified by several coders. These factors contribute to the sparseness of open-ended questions in survey instruments, despite such items allowing for deeper, authentic insights into how individuals think and act (Haensch et al., 2022).
Supervised methods attempt to address this resource-intensiveness by combining manual coding of a training dataset with machine learning algorithms, such as support vector machines (SVMs; Joachims, 2001) or gradient boosting (Schonlau & Couper, 2016). Applications to political (Grimmer & Stewart, 2013) and economic texts (Gentzkow et al., 2019) as well as other survey responses (Haensch et al. 2022; Schierholz & Schonlau, 2021) demonstrated their utility. But while these sophisticated approaches can somewhat reduce costs and time, they still require a substantial amount of human-coded data and expertise and computational resources for model training in order to achieve satisfactory results, making them inefficient. They also struggle with short open-ended survey responses, which often lack sufficient context. In addition, they are usually only trained for one specific language and topic, making them not easily transferable across studies and less feasible for multilingual studies. Transformer-based models, such as BERT, are able to capture nuanced relationships in text due to their ability to generate contextual embeddings. This offers improved classification performance for open-ended survey questions (Meidinger & Aßenmacher, 2021; Gweon & Schonlau, 2024). For example, Schonlau et al. (2023) demonstrated BERT’s effectiveness for coding German-language survey questions, such as the GLES “most important problem” question. However, applying BERT to survey data poses similar challenges as supervised methods, as open-ended responses are often too short to utilize the models’ full potential, and fine-tuning them to the specific types of (con)text requires expertise and computational resources (e.g., Schonlau et al., 2023). 
While BERT is an analytical language model designed primarily for specific tasks like classification or entity recognition at the sentence or document level, modern-day generative large language models such as GPT-4 are designed to perform a broader range of generative and context-adaptive language processing tasks, including handling complex dialogs, summarization, and multilingual text generation. Such general-purpose LLMs thus show potential to address limitations of earlier approaches when applied to open-ended survey responses, like handling short responses when given only general information on their context, not necessarily requiring pre-coded data for training or fine-tuning, and being flexibly usable across languages. In addition, since off-the-shelf LLMs do not require large programming expertise, are relatively cost-effective, and can follow natural language instructions, they are more accessible to a broader group of survey researchers than other semi-automated methods. LLMs have brought promising advancements to labeling other types of social science text data, such as social media data and political texts, with studies finding that LLMs were at least on par or even outperformed supervised methods (Ahnert et al., 2025; Ornstein et al., 2024; Törnberg, 2024), making them applicable for substantive downstream analyses, like predicting public opinion (Cerina & Duch, 2023; Ahnert et al., 2025; Heseltine & Clemm von Hohenberg, 2024). Research specifically evaluating the applicability of LLMs for coding open-ended survey responses, however, continues to be scarce. In addition, LLMs’ rapid evolution requires constant reevaluation of their precision and domain-specific applicability (Pangakis et al., 2023). Rytting et al. (2023) tasked GPT-3 to code 7500 English open-ended responses on keyword descriptions of U.S. partisans into binary and ternary categories. 
The LLM-based coding matched the (poor) performance of human crowdworkers and experts in terms of inter-coder agreement. It also came close to the performance of a supervised approach while needing substantially fewer labelled examples. Mellon et al. (2024) come to similar conclusions when testing a larger and more recent variety of open- and closed-source LLMs for coding several thousand open-ended responses to the “most important issue” question in the British Election Study into 50 categories. Benchmarked against a trained human coder, LLMs’ accuracy of classifications varied between and within model families. Compared to a range of supervised approaches, the general-purpose LLMs performed much better, with BERT-based methods still outperforming SVMs. Using LLMs for coding open-ended survey responses thus appears like a promising method for survey researchers. However, these studies represent a best-case scenario of relatively easy tasks, as they cover English-language data about standard societal and political issues that are likely much-discussed in LLM training data and do not require much expertise for coding. Research on logical reasoning tasks suggests that LLMs tend to struggle with tasks that are comparably complex, but less commonly appearing in their training and alignment processes (McCoy et al., 2023). In addition, there is ample evidence that LLMs are biased against non-English language contexts in a variety of other tasks (e.g., Durmus et al., 2024; Johnson et al., 2022; Li et al., 2024; Wang et al., 2024). For example, Törnberg (2024) found that GPT-4 can be used for labeling non-English social media data, but Heseltine and Clemm von Hohenberg (2024) observed decreased speed and accuracy compared to English-language texts. Once again, these studies examined comparatively simple tasks, namely binary labeling of sentiment and political affiliation. 
Beyond these limitations, there is competing evidence regarding specific LLM performance and prompting strategies: It is unclear whether all (families of) LLMs are equally suited for classifying open-ended responses. For example, most studies on using LLMs for coding social science text data investigated models of the GPT family, but came to conflicting conclusions regarding different model versions (e.g., Bosley et al., 2023 vs. Rytting et al., 2023 for GPT-3; Ornstein et al., 2024 vs. Heseltine and Clemm von Hohenberg, 2024 and Törnberg, 2024, for GPT-4; Mellon et al., 2024 vs. Ahnert et al., 2025, for Llama). Considering proprietary vs. open-source model families, Mellon et al. (2024) found that the closed-source Claude models matched human coding best, followed by GPT-4, whereas Llama and PaLM performed much worse, and some other open-source LLM families were unable to complete the task at all. Furthermore, existing research uses competing prompt designs. Some studies suggest zero-shot prompting (i.e., not providing examples for the labeling task, only the possible labels) is sufficient for labeling other types of short social science text data (Cerina & Duch, 2023), even in non-English languages (Törnberg, 2024). In contrast, studies applying LLMs specifically to open-ended survey responses used few-shot prompting (Mellon et al., 2024; Rytting et al., 2023). In this approach, the authors included the coding scheme along with three examples in the prompt, sometimes supplemented by detailed category descriptions. Halterman and Keith (2024) found that including more detailed definitions of the categories and positive as well as negative examples had a positive impact on labeling quality. However, Mellon et al. (2024) report that providing a full coding guide appeared to “distract” the LLMs. Finally, Mellon et al. (2024) suggest that fine-tuning, i.e., re-training LLMs on pre-labeled survey responses would likely further improve results. Ahnert et al. 
(2025) successfully used fine-tuning, albeit not for open-ended survey data. Given this scarce and competing evidence, it remains unclear whether and which existing findings about the applicability of LLMs for coding open-ended survey responses generalize. In this study, we seek to close this gap by testing different LLMs and prompting strategies for multi-class, single-label classification of a more specific topic in German open-ended survey data.

# 3. Data and Methods

# Open-ended survey data & coding scheme

In order to test the applicability of LLMs for coding German-language open-ended survey responses, we use data from a German probability-based mixed-mode panel, the GESIS Panel.pop Population Sample (Bosnjak et al., 2018; GESIS, 2024). Randomly sampled from municipal population registers, the panel includes over 5,000 respondents and covers the population of German-speaking permanent residents of Germany aged 18+. Participants are invited to the 20-minute survey waves bimonthly, receiving a prepaid incentive of five euros with every invitation. For the years 2014 to 2020, the survey includes an annual, non-mandatory open-ended question on survey motivation. There, the panelists are asked to give their most, second most, and third most important reason for participating in the panel on three separate lines (see Figure A1 in Appendix II for question wording). This questionnaire design leads to unidimensional answers usually containing only one category, making the item very favorable for coding (Haensch et al., 2022). Thus, while the response format should present an easy test case for LLMs, the specificity and complexity of the topic in terms of categorical dimensions, as well as the German language, present a harder task. The dataset contains a total of approximately 25,000 responses to the question on survey motivation across survey waves.
For our study, we rely on a random sample of 20 percent of that data (5,072 responses) coded independently by two survey researchers (Cohen’s kappa = 0.91, with remaining disagreements resolved by a more senior expert) based on a coding scheme for survey motivation adapted to the GESIS Panel.pop by the survey researchers (see Haensch et al., 2022 for details). The human codes are not necessarily required for employing LLMs (see below for a discussion of prompting approaches), but serve as a ground truth against which to compare the LLM-based classifications. Indeed, when not fine-tuning an LLM, using it would require only a fraction of the human-coded examples necessary for training traditional supervised methods – for example, Haensch et al. (2022) used 5,000 human-coded responses to train an SVM. For the LLM-based classifications, we use the same coding scheme as the human coders. It spans 22 categories, featuring intrinsic, extrinsic, and survey-related reasons for motivation (Porst & von Briel, 1995; Haensch et al., 2022). It also includes catch-all categories: No reason captures explicit statements of not having a reason for participation, “don’t know”s, as well as non-meaningful fillers such as “???”. In contrast, Other contains meaningful statements that cannot be assigned to any other category. For English translations of the categories, see Figure 1. A more detailed coding scheme with definitions and examples for all categories and their groups can be found in Appendix I.

# LLM selection & configuration

We test and compare powerful and popular LLMs of three different model families that are state-of-the-art at the time of writing. Models of one of the industry leaders, OpenAI, are popularly used by the public and researchers without large computational expertise due to their user-friendly accessibility.
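Cohen's kappa, the inter-coder agreement statistic used above, corrects raw agreement for agreement expected by chance; a self-contained sketch on toy labels (not the study's codings):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders over the same items:
    kappa = (p_obs - p_exp) / (1 - p_exp)."""
    n = len(codes_a)
    # Observed proportion of items on which the coders agree.
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(freq_a) | set(freq_b))
    return (p_obs - p_exp) / (1 - p_exp)

# Toy codings for illustration only:
coder1 = ["INTEREST", "INCENTIVE", "FUN", "INTEREST"]
coder2 = ["INTEREST", "INCENTIVE", "OTHER", "INTEREST"]
kappa = cohens_kappa(coder1, coder2)
```

A kappa of 0.91, as reported above, indicates near-perfect agreement on most conventional scales.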
Despite OpenAI’s lack of transparency and reproducibility as a proprietary provider (Palmer et al., 2023), it thus is reasonable to include one of their models in our research as a realistic use case. GPT-4o (GPT henceforth) is OpenAI’s flagship model at the time of writing, which, according to the developers, features considerable improvements in non-English languages over earlier versions, while being more time- and cost-efficient (OpenAI, 2024a, 2024b). It is also supposed to be more capable of domain-specific or complex tasks and detailed labeling. In line with calls for accessible and reproducible AI research (e.g., Spirling, 2023; Weber & Reichardt, 2024), we also test two open-source LLMs. These are downloaded and run locally, ensuring sensitive data remains private and is not shared with third parties. This is crucial as open-ended responses may inadvertently contain personal information, such as addresses, risking re-identification.1 Running LLMs locally also ensures reproducibility by using stable model versions, unaffected by updates to cloud-based APIs (Spirling, 2023). Llama-3.2-3B-Instruct (Llama henceforth) is the more capable of the two multilingual LLMs of Meta’s Llama 3.2 suite, the most recent and powerful one at the time of writing (Meta, 2024a, 2024b). While the open-source suite also features larger models (11B and 90B), those are not optimized for multilingual dialog and not available in Europe, making them infeasible for the project at hand and international survey research more broadly. Mistral-NeMo-Instruct-2407 (Mistral henceforth) is the most recent multilingual model by the European open-source developers Mistral. It is specifically designed for global, multilingual applications (MistralAI, 2024a) and supposed to be particularly strong in, among other languages, German. We access these models via the Huggingface platform (Meta, 2024b, MistralAI, 2024b). 
To investigate the exact conditions under which LLMs can be used to code German open-ended survey responses, we employ different approaches.

Zero-shot prompting: In the least supervised approach, we simply ask the LLMs to classify the open-ended responses without any additional information apart from the coding scheme (i.e., no examples or definitions of responses belonging to the specific categories).

Zero-shot prompting with category descriptions: Along with the coding scheme, we provide the LLMs with definitions for each category.

Few-shot prompting: In few-shot prompting, an LLM is given a few examples to guide its output along with the coding scheme, providing an efficient alternative to training the LLM with task-specific data. To test how few-shot prompting impacts the performance of LLMs for open-ended response classification, we provide the LLMs with one example response per category (22 examples in total) in the prompt. The examples are randomly selected from those featured in the coding scheme and are actual answers from the dataset of responses to be classified. They are presented in random order in the prompt and are not removed from the classification dataset.

Fine-tuning: Fine-tuning involves further training the model on a smaller, domain-specific dataset to improve its performance on particular tasks. While less efficient because more human-coded training examples are needed, fine-tuned LLMs might yield more accurate results than LLMs used out-of-the-box. Exploring whether fine-tuning a model on humanly pre-coded response data thus helps us understand LLMs’ potential in classifying open-ended responses. However, depending on the LLM, fine-tuning requires even more extensive computing resources, which is a limitation not only for practitioners but also for our test case.
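The few-shot selection described above (one randomly chosen example response per category, shuffled before being placed in the prompt) can be sketched as follows; the category names and example responses are illustrative placeholders, not our actual coding scheme:

```python
import random

def select_fewshot_examples(examples_by_category, rng):
    """Pick one example response per category, then shuffle the order
    in which the (response, category) pairs appear in the prompt."""
    shots = [(rng.choice(responses), category)
             for category, responses in examples_by_category.items()]
    rng.shuffle(shots)
    return shots

# Placeholder subset of the 22-category coding scheme:
scheme_examples = {
    "INTEREST": ["The topics interest me", "Interesting questions"],
    "INCENTIVE": ["The five euros"],
    "FUN": ["Answering is fun"],
}
shots = select_fewshot_examples(scheme_examples, random.Random(0))
```

Seeding the random generator, as here, would make the selection reproducible while still keeping the presentation order arbitrary.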
We therefore select only GPT-4o for fine-tuning, due to its straightforward and easily available fine-tuning services, making it a likely choice for researchers wishing to employ this approach. We fine-tune the LLM by splitting the dataset into a training and a test subset. As is common for fine-tuning tasks, we randomly select 80 percent of the responses of each category based on the human classification (4048 in total)2 for training the LLM before asking it to classify the remaining 1024 responses using the zero-shot prompt. Results for the fine-tuned approach thus reflect the LLM’s performance on the test set alone. We specify four epochs3 for fine-tuning, i.e., four iterations through the training data, and use default values for batch size (the number of examples used in a single training pass; around 0.2% of the training dataset, ten in our case) and learning rate (the rate at which the LLM updates its weights, i.e., its internal settings, based on the new data, balancing between learning too slowly, which risks inefficiency, and learning too quickly, which risks instability). Appendix II.6 reports the loss and token-accuracy curves of the fine-tuning process.

Since we want to maximize reliability, and the task of coding responses according to a set of predefined categories requires consistency rather than creativity, we set the LLM temperature to 0, thereby concentrating the LLM’s underlying probability function to produce more deterministic outputs. For best comparability, we use the same temperature for all models, leaving all other parameters at model defaults.

You are a survey expert classifying open-ended responses to the question why individuals participate in a survey. Assign these reasons for participating to exactly one of the following categories.
The categories are:

INTEREST: [Description]
CURIOSITY: [Description]
LEARNING: [Description]
TELL OPINION: [Description]
INFLUENCE: [Description]
INCENTIVE: [Description]
FUN: [Description]
ROUTINE: [Description]
DUTIFULNESS: [Description]
HELP SCIENCE: [Description]
HELP POLITICIANS: [Description]
HELP SOCIETY: [Description]
HELP, NOT FURTHER SPECIFIED: [Description]
BREVITY: [Description]
ANONYMITY: [Description]
PROFESSIONALISM: [Description]
RECRUITMENT: [Description]
RECRUITER: [Description]
OTHER SURVEY CHARACTERISTICS: [Description]
IMPORTANCE IN GENERAL: [Description]
OTHER: [Description]
NO REASON: [Description]

Make your best guess, even if it is hard. Respond in the following format: Reason for participating | CATEGORY. Do not give an explanation for your classification, but return only the reason for participating and your classification.

Examples:
[Example reason | CATEGORY 1]
[Example reason | CATEGORY 2]
[...]
[Example reason | CATEGORY 22]

Classify the following reason for participating: [open-ended response]

Figure 1: English translation of the prompt used for LLM-based classifications of the open-ended survey question. Categories and, in the detailed approach, descriptions (green font) were randomized across individual queries. In the few-shot approach, examples (blue font) were randomly selected, with the selection held constant but presented in random order across queries. For details of the descriptions and examples used, see Appendix I.

# Prompt design

We tell the LLMs to impersonate a survey expert classifying open-ended responses and instruct them to assign each response to exactly one category. The order of the categories (and of their descriptions in the detailed approach, and of the examples in the few-shot approach, respectively) is randomized in each prompt to avoid any biases due to order effects (Brand et al., 2023; Pezeshkpour & Hruschka, 2024). To minimize missing values, we ask the LLMs to make a best guess in difficult cases.
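The per-query randomization and the pipe-delimited reply format can be sketched as follows; the category list is a placeholder subset of the 22 and the wording is a shortened English stand-in for the German prompt, while the parsing helper illustrates one way replies without exactly one valid category could fall back to an explicit "NA" code:

```python
import random

CATEGORIES = ["INTEREST", "CURIOSITY", "INCENTIVE", "NO REASON"]  # placeholder subset

def build_prompt(response, categories, rng):
    """Assemble one query; the category order is re-randomized per prompt
    to avoid order effects."""
    order = list(categories)
    rng.shuffle(order)
    lines = ["You are a survey expert classifying open-ended responses.",
             "Assign the reason for participating to exactly one of these categories:"]
    lines += [f"{cat}: [Description]" for cat in order]
    lines += ["Respond in the format: Reason for participating | CATEGORY.",
              f"Classify the following reason for participating: {response}"]
    return "\n".join(lines)

def parse_classification(llm_output, valid=frozenset(CATEGORIES)):
    """Expect 'reason | CATEGORY'; fall back to 'NA' when zero or several
    valid categories come back, so failed queries stay in the analysis."""
    parts = [p.strip() for p in llm_output.split("|")]
    found = [p for p in parts[1:] if p in valid]
    return found[0] if len(found) == 1 else "NA"

prompt = build_prompt("Mich interessieren die Themen", CATEGORIES, random.Random(1))
```

Building each prompt with a fresh random order, as above, keeps queries independent while holding the instructions constant.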
We instruct the LLMs to report the response along with its classification. Finally, to avoid unnecessarily long answers, we ask the LLMs not to justify their responses (as especially Mistral has been found to do previously, see e.g., von der Heyde et al., 2025), but do not specify a maximum output length. Figure 1 shows an English translation of the prompt. In line with the language of the responses they are asked to classify, we prompt the LLMs in German, including the instructions and coding scheme. The original German version of the prompt, as used in the study, can be found in Appendix II (Figure A2). We prompt each survey response separately and with refreshed LLM memory to ensure that responses are classified independently of one another. We therefore specify the task directly in the main prompt (not the system prompt), repeating the task for every open-ended response to be classified. Before feeding the full dataset to the LLMs, we test each LLM with only 15 responses to determine its general capacity to fulfill the task. We run each query twice per LLM to be able to evaluate its reliability. All data is generated in November 2024, except the classifications obtained from the fine-tuned version of GPT, which are generated in January 2025.

# Analysis

We extract each LLM’s classifications of the open-ended responses and analyze their performance and the resulting descriptive distributions. Benchmarking against the human-generated classifications, we analyze the LLMs’ classification performance overall and per category. Because ours is a case of multiclass classification and the benchmark categories are unevenly distributed (see Figure 4), we use macro F1 scores4 as our primary overall performance metric (Hand et al., 2024). In imbalanced datasets, regular F1 scores can be misleading if an LLM tends to assign the modal category. Macro F1 addresses this by averaging the per-category F1 scores, giving equal weight to minority categories.
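Macro F1 as described above is the unweighted mean of per-category F1 scores; a minimal pure-Python sketch on toy labels (not our data):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-category F1, giving minority categories
    the same weight as frequent ones."""
    categories = set(y_true) | set(y_pred)
    f1s = []
    for c in categories:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # Per-category F1; defined as 0 when the category is never hit.
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)
```

Note how a classifier that only ever predicts the modal category would score near zero on every minority category, dragging the macro average down, which is exactly the property motivating its use here.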
If an LLM failed to classify a response to exactly one category (i.e., it did not assign a category or assigned more than one category), the output is recorded as missing (i.e., an explicit category called “NA”) but retained for the analysis. This approach avoids artificially inflating the F1 scores for categories where most responses were not classified, but the remainder classified correctly,5 and allows us to investigate the reliability of missing classifications. To facilitate comparison to other studies and classification methods, we report additional metrics (weighted F1, accuracy, intraclass correlation coefficients, Cohen’s kappa) in Appendix II (Table A1). Since Haensch et al. (2022) previously tested an SVM on the same data, we are able to compare LLM performance to that of a supervised approach without explicitly having to employ that approach ourselves (see the discussion). To do so, we calculate the median F1 score as the unweighted median across categories. We then compare the distribution of coding scheme categories across LLMs and prompting approaches and to the distribution of the human-coded benchmark data. We also report the frequency and categorical distribution of the responses each LLM fails to classify as well as the reason for failure (see Appendix II.3). For all analyses, we rely on the first iteration of classifications per LLM and prompting approach, independent of whether this iteration exhibited better or worse performance than the second one, in order not to bias our results by selecting on performance. To assess the LLMs’ reliability, we calculate the ICC for two-way agreement between the two iterations of classifications per LLM and prompting approach. Data (pre-)processing, classification (for GPT), and analyses are conducted in R (version 4.3.2, R Core Team, 2024), especially using the packages AzureAuth (Ooi et al., 2019), caret (Kuhn, 2008), irr (Gamer et al., 2019), and tidyverse (Wickham et al., 2019). 
Classifications from Llama and Mistral are obtained using Python, especially using the packages accelerate (Accelerate, n.d.), huggingface_hub (Hub Client Library, n.d.), pandas (McKinney, 2010), PyTorch (Paszke et al., 2019), tqdm (da Costa-Luis et al., 2024), and transformers (Wolf et al., 2020).

# 4. Results

# 4.1. RQ1: Differences between LLMs

# Performance

We first compare differences between LLMs in classification performance overall (macro F1) and per category (F1). Across prompting approaches, classification performance is much better when using GPT than when using Mistral, which still has a slight edge over Llama (see Figure 2). GPT performance also fluctuates much less between prompting approaches (macro F1 around 0.7 for the three approaches that were examined for all three LLMs). Nevertheless, even using the best-performing prompting approach for an open-source LLM does not come near GPT's performance. Similar patterns emerge when considering other performance metrics (see Table A1). All LLMs examined exhibit approximately the same performance patterns across categories (see Figure 3). They perform exceptionally well on the categories incentive, interest, and fun (F1 around 0.9), as well as on anonymity, routine, and tell opinion, and exceptionally poorly (F1 between 0.02 and 0.3) on no reason, non-identifiable/other, and other survey characteristics. The LLMs thus perform very well on the three categories most commonly defined by the human coders, but not on the next two most common categories, which are non-substantive catch-all categories. For the remaining categories, performance tends to decrease along with frequency of occurrence. The overall pattern is mirrored across types of reasons (extrinsic, intrinsic, survey-related): GPT's performance tends to be better than that of Llama and Mistral, whose performance improves with few-shot prompting.
There are some cases that stand out, which help explain the overall performance edge GPT has over the open-source models. GPT outperforms Llama and Mistral especially in tell opinion, routine, importance in general, influence, dutifulness, curiosity, and professionalism, and to a lesser extent also in the help categories, although Llama and especially Mistral improve under few-shot prompting.6

Figure 2: Macro F1 scores by LLM and prompting approach.

# Distributions

The differences in LLMs' classification performance across categories result in different frequencies of categories (see Figure 4), although the overall shape of the distribution is similar to the human-coded benchmark. While the LLMs' good performance on classifying incentive, interest, and fun leads to the proportion of responses in these categories being close to the human benchmark, their poor performance on other categories manifests in substantially lower proportions than the human data would suggest. This includes non-identifiable/other, which is among the five most frequently identified categories according to the human coders. Llama and Mistral additionally assign too few cases to no reason, but code more responses as curiosity than both humans and GPT, where they also perform worse in terms of F1 scores. Conversely, the proportion of responses assigned to tell opinion tends to be lower when using Llama. In contrast, Mistral assigns disproportionately many responses to tell opinion, and, to a smaller degree, to help society, help science, and recruitment – the categories where GPT tends to outperform. The proportion of missing (including ambiguous) assignments is (initially) higher for the open-source models than for GPT. Just as with performance and overall distribution, GPT is also less sensitive to prompting approaches than other LLMs when it comes to missing classifications, whereas the performance of Llama and Mistral depends on the prompting approach.
Both open-source models eventually return better results than GPT when considering the number of missing classifications. When using GPT, missing classifications occur almost exclusively for responses labeled as no reason by human coders (Figure A3), with over 60 percent of responses lacking a classification. In contrast, missing classifications are more evenly distributed across all categories when using Llama (which also misses assignments for close to 60 percent of no reason responses) or Mistral. This partly helps explain the poor classification performance for the no reason category; however, missing values cannot account for the poor performance on other categories (see Appendix II.5 for full confusion matrices; and Appendix II.3 for F1 scores when omitting missing values).

# Reliability

Turning to the reliability of the classifications, Mistral's output is identical across the two iterations, proving to be the only LLM tested where setting the temperature to zero and setting a seed actually results in the desired behavior – returning identical and therefore reliable output.7 Yet, the other two LLMs also exhibit high reliability (ICC $> 0.93$; see Table 1). There are only minimal differences, with GPT being slightly more reliable than Llama.

Table 1: ICC (two-way agreement) between two rounds of coding per LLM and prompting approach.

In sum, there are differences between LLMs in terms of performance and, to a lesser extent, reliability when coding German open-ended survey responses. Disregarding prompting approaches, using GPT results in higher classification performance than using Llama or Mistral, but performance under GPT is still subpar, both in absolute terms and relative to other methods (e.g., Haensch et al., 2022) when not using fine-tuning (see below). While all LLMs exhibit high reliability across iterations, Mistral has a slight edge, reproducing the exact same classifications.
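The determinism Mistral exhibits can be checked with a simple exact-match rate between the two classification iterations, a cruder complement to the ICCs reported in Table 1 (the labels below are illustrative, not the study's data):

```python
def exact_match_rate(run1, run2):
    """Share of responses assigned the identical label in both iterations."""
    assert len(run1) == len(run2), "both iterations must cover the same responses"
    return sum(a == b for a, b in zip(run1, run2)) / len(run1)

# A fully deterministic model reproduces every label, including explicit "NA"s.
run1 = ["incentive", "fun", "NA", "interest"]
run2 = ["incentive", "fun", "NA", "interest"]
print(exact_match_rate(run1, run2))  # 1.0
```

An exact-match rate of 1.0 corresponds to the behavior described for Mistral; values below 1.0 indicate residual sampling variability despite a zero temperature and fixed seed.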
Figure 3: F1 scores by LLM (GPT, Llama, Mistral), prompting approach (zero-shot, zero-shot with description, few-shot, fine-tuned with zero-shot prompt), and coding category; F1 color scale from 0.00 to 1.00.

Figure 4: Distribution of coding categories by LLM and prompting approach. $n = 5072$ for zero-shot (with and without description) and few-shot prompting, $n = 1024$ for fine-tuned prompting.

# 4.2. RQ2: Differences between prompting approaches

# Performance

When comparing differences in classification performance between prompting approaches across LLMs, performance is best for few-shot prompting and worst for zero-shot prompting in terms of macro F1.
However, the size of the difference depends on the LLM used. There is a strong improvement in performance from zero-shot prompting to few-shot prompting when using the open-source models – for both models, there is a 0.18 difference in macro F1 scores, see Figure 2. The same pattern emerges when considering other performance metrics (Table A1), and when investigating classification performance per category (Figure 3). However, for particular combinations of LLM and category, performance is worse when providing the LLM with descriptions than when using simple zero-shot prompting (e.g., fun, no reason, help science), or when providing examples relative to providing descriptions (e.g., recruitment, anonymity, non-identifiable/other). This is more often the case for the open-source LLMs than for GPT. Most notably, GPT's performance drastically improves when employing fine-tuning, achieving a macro F1 of 0.87 – a 16-point difference over few-shot prompting and a satisfactory level in general. This jump can largely be attributed to much improved classification in the non-substantive categories. For other categories, a mixed picture emerges, with large improvements for six categories, but minor improvements for the remainder – in part because few-shot prompting already led to high levels of performance.

# Distributions

Although all prompting approaches examined result in broadly similar distributions of categories, few-shot prompting tends to approximate the distribution of the human-coded data best (Figure 4). This is especially the case for interest and tell opinion. Large differences remain especially for non-identifiable/other, help science, and help society. Few-shot prompting also results in substantially fewer responses that were not coded successfully, with a reduction of almost four fifths for Mistral. As a consequence, there are almost no missing classifications under few-shot prompting, except for no reason (Figure A3).
Fine-tuning results in a distribution that perfectly matches the human classifications, with only four classifications missing in total (all belonging to the no reason category).

# Reliability

All LLMs exhibit high reliability (ICC $> 0.93$) regardless of approach (see Table 1). Mistral is completely deterministic in all approaches, GPT is consistently very reliable across approaches, including fine-tuning, and Llama is slightly less reliable when provided with descriptions. To summarize, the prompting approach used does make a difference in terms of performance, but not so much in terms of reliability of coding German open-ended survey responses. Providing detailed descriptions of categories tends to improve classification performance over zero-shot prompting, and few-shot prompting further improves it, especially for the open-source LLMs. Fine-tuning leads to the best overall performance and the largest improvement compared to other prompting approaches when using GPT. Reliability is high regardless of the prompting approach used.

# 5. Summary and Discussion

In our study, we assessed the performance and reliability of three powerful, multilingual LLMs (GPT-4o, Llama 3.2, and Mistral NeMo) when classifying German open-ended survey responses on a specific and complex topic given a pre-defined coding scheme. We also investigated differences depending on the prompting approach used. Overall, performance differed greatly between LLMs, and only a fine-tuned LLM achieved satisfactory levels of predictive performance (macro F1 of 0.87). In general, GPT performed best, and, disregarding fine-tuning, few-shot prompting led to the second-best performance (macro F1 of 0.71 for GPT), echoing the findings of previous studies on English data on less specific topics (Haltermann & Keith, 2024; Mellon et al., 2024).
Performance differences between prompting approaches were conditional on the LLM used – the prompting approach was not as important when using GPT, but made a big difference for other LLMs, especially Mistral. While the LLMs correctly identified most of the responses belonging to the most frequently occurring (and most easily identifiable) reasons, they struggled with non-substantive catch-all categories. Limitations in performance in these categories may arise because human coders classified responses such as “don’t know”, “xxx”, and blank responses as no reason. The LLMs often failed to categorize such data, instead treating it as if it contained no response. This is problematic for open-ended response classification more broadly. Responses belonging to such categories are quite common regardless of question topic, as many survey respondents lack the time or motivation to respond to open-ended questions, giving either non-substantive or nonsensical responses that practically correspond to item nonresponse (Krosnick & Presser, 2010). In our case, LLMs’ unequal classification performance across different categories of reasons for survey participation results in different categorical distributions when not using fine-tuning. Such discrepancies could also have consequences for further inferential analyses of the coded data. Thus, LLM-coded open-ended responses could paint a very different picture of the concept being measured by a survey item than human coding would. Our study shows that using off-the-shelf (i.e., non-fine-tuned) LLMs is not necessarily superior to other computational methods for coding open-ended responses. Comparing our results to those of Haensch et al. (2022), who used an SVM on the same data, even few-shot performance proved to be below expectations when going beyond the most obvious and common categories (median F1 0.83 vs. 0.72 at best).
This is at odds with Mellon et al.’s (2024) findings regarding English-language survey responses on a more common topic: although that study likewise reported that GPT models were superior to Llama models, it found that the LLMs, when provided with the full coding scheme including descriptions and examples for over 50 categories, were much better at classifying British responses to the commonly discussed “most important problem” question than established supervised approaches, including BERT and SVMs. Rytting et al. (2023) came to similar conclusions even for the now-outdated GPT-3 under few-shot prompting, albeit for a task with only three categories. It thus appears that the applicability of LLMs for coding open-ended responses depends not just on the LLM and prompting approach used, but also on the topic (in terms of specificity and categorical complexity) and possibly the language of the responses. However, as our findings show, LLMs have the potential to match or even outperform other methods when fine-tuned. Using the zero-shot prompt on the fine-tuned GPT achieved a macro F1 of 0.87 (median F1 0.88), with dramatic improvements for non-substantive responses. This resulted in perfectly matched distributions between human- and LLM-coded responses and virtually no missing classifications. Although this confirms speculations in terms of improved effectiveness over off-the-shelf usage (Mellon et al., 2024), it does not yet fulfill the hopes of being a resource-efficient alternative to established methods. This is because fine-tuning LLMs requires a sufficiently large set of human-coded benchmark data and more computational resources and expertise, similar to established methods, with which researchers are often more familiar. In addition, such established methods usually do not require payment, whereas proprietary LLMs (potentially requiring less programming expertise if providing user-friendly interfaces for fine-tuning) do.
Additionally, this approach, like all others, relies on a pre-defined coding scheme, which may not readily exist for all open-ended questions practitioners might want to have classified. While all three models we examined were very reliable in their classifications across two iterations, only Mistral showed the desired behavior of identical output when setting the model temperature to zero and setting a seed. The fact that setting the temperature to the least probabilistic setting does not actually guarantee deterministic behavior can be unintuitive for survey researchers not deeply familiar with LLMs, potentially creating a false sense of confidence. Yet, even the deviating LLMs in our study were more reliable than previous studies suggested (e.g., Heseltine & Clemm von Hohenberg, 2024), making resolution by human coders (who, in the aforementioned study, did not exhibit higher agreement) unnecessary. However, reproducibility over longer periods of time, e.g., for several survey waves featuring the same open-ended item, is not guaranteed when using non-local models, due to them being subject to change or deprecation. This highlights the need for regular validation with humans in the loop (see also Weber & Reichardt, 2024), even under high performance (which we only observed for the fine-tuned approach). Our results also highlight the trade-offs between proprietary and open-source LLMs in terms of cost, privacy, reliability, and performance. Open-source models such as Llama and Mistral, available on platforms such as Hugging Face, are free to use and can be run locally, ensuring privacy and reproducibility by avoiding third-party servers and model updates. However, running them requires considerable computing resources and expertise, which not all researchers may have access to.
In contrast, proprietary models like GPT, while user-friendly, incur costs per token (i.e., input and output length), which can be high for large datasets or complex instructions.8 In our case, open-source LLMs underperformed compared to proprietary ones in coding open-ended responses, and fine-tuning a GPT model was the most successful approach. Finally, the speed of advancement of LLMs presents researchers with the challenge of working towards a moving target, where working with reliable and reproducible model versions may not represent the state of the art. Our work gives rise to some further considerations and possible improvements. First, more experiments with different prompting strategies (Schulhoff et al., 2024) could be explored to see whether fine-tuned performance can be approached or achieved more cost-effectively. For example, even more explicit instructions emphasizing the importance of always assigning a category, and exactly one category, might improve results especially for non-substantive responses. Researchers could also investigate whether breaking down the task into a two-step process would reduce its complexity by shortening the coding scheme information to be processed per prompt, and lead to more satisfactory results. In this prompt-chaining approach, the LLM could first be asked whether a specific category would be suitable for an answer. After having iterated across all possible categories in the coding scheme, the LLM could then be asked for the best-suited category from among the set of those it identified as suitable. Such an approach would allow for more examples per category in the first step without negatively impacting the LLM's context capacity (see, e.g., Mellon et al., 2024), thereby possibly improving performance. For fine-tuning, future research should focus on systematic experiments to identify the minimum amount of human-coded data needed for effective performance, balancing resource efficiency with accuracy.
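The prompt-chaining idea sketched above can be expressed compactly. Here `ask_llm` is a hypothetical stand-in for any chat-completion call, and the prompts and category names are illustrative, not the study's actual prompt or full coding scheme:

```python
def classify_two_step(response, coding_scheme, ask_llm):
    """Two-step prompt chaining: screen categories one at a time, then pick."""
    # Step 1: one short yes/no prompt per category, so each call only needs
    # to process a single category's description (and could carry more examples).
    candidates = [
        cat for cat in coding_scheme
        if ask_llm(f"Is the category '{cat}' suitable for: {response}?") == "yes"
    ]
    if not candidates:
        return "NA"          # nothing deemed suitable: record as missing
    if len(candidates) == 1:
        return candidates[0]
    # Step 2: choose the single best-suited category from the shortlist.
    return ask_llm(f"Which of {candidates} best fits: {response}?")
```

Each step keeps the per-prompt coding-scheme information short, at the cost of one LLM call per category plus a final selection call per response.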
Additionally, LLMs’ inner workings, including how they process different languages relative to one another, are somewhat opaque and not always consistent (see e.g., Zhang et al., 2023) – they might be better aligned to follow English instructions and coding schemes regardless of the language of the text to be classified. It is thus possible that LLMs perform better on non-English text classification when instructed in English, i.e., when only the survey response is in the native language. This would allow for simultaneous coding and translation of open-ended survey responses (Heseltine & Clemm von Hohenberg, 2024). Future research could investigate this by employing the English translation of our prompt. Second, our study focused on the performance and reliability of LLM-coded open-ended survey responses, without investigating the impact of the method on the findings of substantive analyses. Replications of earlier substantive analyses that used more established classification methods with a fine-tuned LLM could complement our research. As part of such an analysis, taking into account uncertainty could shed light on whether distributional differences between LLM-based and human classifications are systematic. This could be done by analyzing the LLM’s internal token probabilities (i.e., the probability with which the output is chosen), choosing the majority category after multiple iterations using an LLM’s default temperature, or by directly asking the LLM for its certainty in a specific label (e.g., Tian et al., 2023). However, if human coders are inconsistent, models may be unfairly penalized, leading to deceptively low accuracy metrics. Even high inter-rater agreement (e.g., Cohen’s kappa) can mask systematic errors made consistently by humans and mimicked by the model. 
In addition, human coders could, consciously or unconsciously, introduce biases based on their positionalities and stereotypes, which also may affect the coding and the evaluation metrics of the LLM. Relatedly, LLMs might detect patterns or nuances humans do not, especially when not constrained by a fixed coding scheme. Using LLMs for unsupervised approaches, such as topic modeling (e.g., Ornstein et al., 2024), could address this concern while also making the ex-ante development of coding schemes for new survey items obsolete (Mellon et al., 2024), further increasing efficiency compared to supervised methods. However, results from unsupervised approaches are challenging to evaluate due to the absence of ground truth labels and because the interpretations of discovered patterns are often subjective (Pham et al. 2024). In addition, even if humans are subjective, the large discrepancy between human and LLM-based codes in our study suggests the latter are systematically mistaken (see Fröhling et al., 2024, for a suggestion for diversifying LLM annotation). Depending on the complexity of the response data, it thus appears that off-the-shelf LLMs are not able to capture human reasoning as expressed in open-ended survey responses when not fine-tuned with human-coded benchmark data.
The recent development and wider accessibility of LLMs have spurred discussions about how they can be used in survey research, including classifying open-ended survey responses. Due to their linguistic capacities, it is possible that LLMs are an efficient alternative to time-consuming manual coding and the pre-training of supervised machine learning models. As most existing research on this topic has focused on English-language responses relating to non-complex topics or on single LLMs, it is unclear whether its findings generalize and how the quality of these classifications compares to established methods. In this study, we investigate to what extent different LLMs can be used to code open-ended survey responses in other contexts, using German data on reasons for survey participation as an example. We compare several state-of-the-art LLMs and several prompting approaches, and evaluate the LLMs' performance by using human expert codings. Overall performance differs greatly between LLMs, and only a fine-tuned LLM achieves satisfactory levels of predictive performance. Performance differences between prompting approaches are conditional on the LLM used. Finally, LLMs' unequal classification performance across different categories of reasons for survey participation results in different categorical distributions when not using fine-tuning. We discuss the implications of these findings, both for methodological research on coding open-ended responses and for their substantive analysis, and for practitioners processing or substantively analyzing such data. Finally, we highlight the many trade-offs researchers need to consider when choosing automated methods for open-ended response classification in the age of LLMs. In doing so, our study contributes to the growing body of research about the conditions under which LLMs can be efficiently, accurately, and reliably leveraged in survey research.
introduction of LITMUS-P therefore represents a necessary step toward evaluating alignment under linguistically natural, semantically invariant, and adversarial perturbations—a crucial requirement for building scalable and trustworthy AI systems.

# M.3 Quantifying Stochastic Drift via AQI

While large language models are typically evaluated using single-shot completions, real-world deployments often involve sampling-based decoding with temperature and top-$p$ parameters. Under such conditions, models frequently produce diverging alignment behaviors across repeated generations. This misalignment variance is particularly concerning for safety-critical applications. We hypothesize that stochasticity-induced drift manifests not only in surface-level refusal rates but also in the deformation of latent alignment structure. AQI, being derived from internal cluster cohesion and separation, is well-suited to capture this phenomenon.

Setup. For each model, we select 100 sensitive prompts (e.g., weapon assembly, medical misuse, hate speech) and generate 20 independent completions per prompt, using temperature $= 1.0$ and top-$p = 0.9$. We compute AQI across these 20 completions and track:

- Mean AQI
- Standard deviation (SD) of AQI
- Percentage of completions falling below a critical AQI threshold (e.g., 0.5)

Results. Table 19 shows that smaller models exhibit high AQI variance and frequent low-AQI generations. For instance, TinyLLaMA shows a mean AQI of 0.58 with SD $= 0.13$, and over $42\%$ of completions falling below 0.5. In contrast, LLaMA 65B remains consistently high (mean $= 0.86$, SD $= 0.04$).

Implications. These findings highlight how AQI can reveal latent misalignment instability that surface refusal metrics miss. This makes AQI a strong candidate for runtime alignment monitoring and sampling-aware auditing.

Table 19: Stochastic Alignment Drift Across Generations.
For each model, we report mean AQI, standard deviation (SD), and the percentage of completions with AQI $< 0.5$ over 20 samples.

Figure 19: AQI Variability Across 20 Generations Under Stochastic Sampling. Each boxplot represents the distribution of AQI values generated from 20 completions per prompt (temperature $= 1.0$, top-$p = 0.9$). Smaller models (e.g., TinyLLaMA, LLaMA 1.3B) exhibit wide AQI spread, including many low-alignment completions ($< 0.5$), indicating susceptibility to stochastic alignment drift. Larger models like LLaMA 13B and 65B demonstrate more compact distributions and higher alignment stability under generation variance.

# M.4 Diagnosing Alignment Faking as Train–Test Divergence via AQI

Recent alignment safety research has demonstrated that large language models (LLMs) can exhibit alignment faking—strategically producing safe-looking responses during training or supervised evaluation, while revealing misaligned tendencies under deployment-like conditions (Greenblatt et al., 2024b; Clymer et al., 2024a; Price et al., 2024). This deceptive alignment occurs not due to prompt-level inconsistency but due to a deeper train–test divergence in the model’s internal representations (Carlsmith, 2023b; Hubinger et al., 2024). Anthropic’s experiments with Claude 3 (Greenblatt et al., 2024b) showed that models distinguish between “training” and “deployment” users based on prompt cues (e.g., paid vs. free-tier), and exhibit measurable behavior shifts. When the model believed it was being retrained, it often faked compliance, including refusal-style disclaimers, while internally generating unsafe or deceptive completions. Similarly, Price et al. (2024) demonstrated temporal backdoors, where models trained only on pre-2023 data misbehaved exclusively on future-dated inputs—despite aligned behavior during training.
These behaviors were not exposed via refusal rate metrics alone, motivating the need for latent-level diagnostics. We propose that the Alignment Quality Index (AQI), based on cluster separation, cohesion, and latent centroid distance, is a scalable, model-agnostic tool for detecting alignment faking. Unlike behavioral refusal scores, AQI quantifies shifts in latent geometry that reflect whether test-time completions remain grounded in training-aligned structure.

Experimental Protocol. We simulate alignment faking using a setup inspired by Greenblatt et al. (2024b) and Clymer et al. (2024a). Each model is first fine-tuned on 2K explicit refusal examples, and a “refusal cluster centroid” is extracted from intermediate-layer activations. At test time, 150 adversarial prompts are presented in both “monitored” (training-like) and “unmonitored” (deployment-like) contexts, with completions manually labeled as:

• True Refusal (TR) — semantically aligned;
• Alignment-Faking (AF) — refusal-like token pattern but unsafe semantics;
• Unsafe Completion (UC) — overtly misaligned.

We compute cosine distances from completions to the refusal cluster centroid, and report CHI and SC metrics to assess latent drift (Clymer et al., 2024a).

Findings. As shown in Table 20, smaller models (e.g., TinyLLaMA, Phi-2, Mistral 1.3B) exhibit strong overlap between AF and UC completions in latent space. In TinyLLaMA, AF completions lie at a mean distance of 0.36 from the refusal centroid—nearly indistinguishable from UCs (0.39). This confirms that although AF responses surface as aligned, their internal representations collapse toward unsafe content. In contrast, LLaMA 65B retains clear separation, suggesting stronger representational alignment generalization.

Implications. These results affirm that alignment faking is not just a behavioral anomaly but a breakdown of latent alignment under distributional shift (Hubinger et al., 2024; Carlsmith, 2023b).
AQI exposes this shift by quantifying test-time divergence from training-induced refusal clusters. When combined with activation steering (Price et al., 2024), chain-of-thought inspection (Greenblatt et al., 2024b), and activation noise perturbation (Clymer et al., 2024a), AQI becomes a powerful early-warning system against deceptive alignment. Future auditing pipelines should integrate AQI to ensure robust, internalized alignment beyond surface-level disclaimers.

Table 20: Latent Divergence from Refusal Centroid under Alignment Faking. Mean cosine distances from the refusal centroid (learned during fine-tuning) to test-time completions labeled as true refusal (TR), alignment-faking (AF), and unsafe (UC). Smaller models show AF responses geometrically converging with UC completions.

# M.5 AQI for Multimodal Alignment

We also explore AQI in the context of Text-to-Image (T2I) generation models, given the recent emergence and rapid advancements in image synthesis within this paradigm. The Xie-Beni Index (XBI) and Calinski-Harabasz Index (CHI) were adapted within AQI to assess the alignment performance of these visual generation models.

Table 21: AQI Scores for T2I Models Before and After DDPO

In our experiments, we focused on two prominent latent diffusion models: Stable Diffusion-XL (SD-XL) (Podell et al., 2023) and Stable Diffusion v1.5 (SD-v1.5) (Rombach et al., 2022). To enhance the alignment of these T2I models—particularly in mitigating the generation of hateful content—we evaluated AQI on both a vanilla T2I model and one fine-tuned using the Diffusion Direct Preference Optimization (DDPO) approach (Wallace et al., 2024). This involved curating pairs of accepted (non-hateful) and rejected (hateful) images from web sources and training on 8,000 such samples. These preference pairs were then used to fine-tune the models via the DDPO strategy, aiming to steer the generation process toward safer outputs.
The impact of this DDPO fine-tuning on alignment, as measured by AQI, is presented in Table 21. The results indicate that DDPO fine-tuning led to improved AQI scores for both SD-XL and SD-v1.5. This suggests that the DDPO approach, by leveraging preference pairs of hateful and non-hateful images, can enhance the intrinsic alignment of T2I diffusion models, as quantified by the latent geometric separation captured by AQI.
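The cluster-quality indices used here follow standard definitions. Below is a hedged NumPy sketch (not the authors' released implementation) of CHI and a crisp-assignment XBI on labeled embeddings, applied to synthetic safe/unsafe clusters:

```python
import numpy as np

def calinski_harabasz(X, labels):
    """CHI: between-cluster dispersion over within-cluster dispersion,
    scaled by degrees of freedom. Higher = better-separated clusters."""
    X = np.asarray(X, dtype=float)
    classes = np.unique(labels)
    n, k = len(X), len(classes)
    overall = X.mean(axis=0)
    between = within = 0.0
    for c in classes:
        pts = X[labels == c]
        centroid = pts.mean(axis=0)
        between += len(pts) * np.sum((centroid - overall) ** 2)
        within += np.sum((pts - centroid) ** 2)
    return (between / (k - 1)) / (within / (n - k))

def xie_beni(X, labels):
    """Crisp Xie-Beni index: mean squared distance to own centroid over
    minimum squared inter-centroid distance. Lower = better."""
    X = np.asarray(X, dtype=float)
    classes = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in classes])
    compact = sum(np.sum((X[labels == c] - centroids[i]) ** 2)
                  for i, c in enumerate(classes))
    sep = min(np.sum((centroids[i] - centroids[j]) ** 2)
              for i in range(len(classes))
              for j in range(i + 1, len(classes)))
    return compact / (len(X) * sep)

rng = np.random.default_rng(1)
safe = rng.normal(0.0, 0.1, size=(50, 4))     # synthetic "safe" embeddings
unsafe = rng.normal(3.0, 0.1, size=(50, 4))   # synthetic "unsafe" embeddings
X = np.vstack([safe, unsafe])
y = np.array([0] * 50 + [1] * 50)
print(calinski_harabasz(X, y), xie_beni(X, y))
```

Well-separated safe/unsafe clusters yield a high CHI and a low XBI; after a misalignment-inducing intervention the two indices would move in the opposite directions.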
Alignment is no longer a luxury; it is a necessity. As large language models (LLMs) enter high-stakes domains like education, healthcare, governance, and law, their behavior must reliably reflect human-aligned values and safety constraints. Yet current evaluations rely heavily on behavioral proxies such as refusal rates, G-Eval scores, and toxicity classifiers, all of which have critical blind spots. Aligned models are often vulnerable to jailbreaking, stochasticity of generation, and alignment faking. To address this issue, we introduce the Alignment Quality Index (AQI). This novel geometric, prompt-invariant metric empirically assesses LLM alignment by analyzing the separation of safe and unsafe activations in latent space. By combining measures such as the Davies-Bouldin Score (DBS), Dunn Index (DI), Xie-Beni Index (XBI), and Calinski-Harabasz Index (CHI) across various formulations, AQI captures clustering quality to detect hidden misalignments and jailbreak risks, even when outputs appear compliant. AQI also serves as an early warning signal for alignment faking, offering a robust, decoding-invariant tool for behavior-agnostic safety auditing. Additionally, we propose the LITMUS dataset to facilitate robust evaluation under these challenging conditions. Empirical tests on LITMUS across different models trained under DPO, GRPO, and RLHF conditions demonstrate AQI's correlation with external judges and its ability to reveal vulnerabilities missed by refusal metrics. We make our implementation publicly available to foster future research in this area.
[ "cs.CL", "cs.AI" ]
# 1. Introduction

General-purpose code refers to a set of program instructions written in formal languages such as Python, C++, or Java, and is widely applied across diverse tasks including data processing, network communication, and algorithm implementation [1,2]. Through programming, users translate logical intentions into executable tasks on computers [3]. With the rise of Transformer-based large language models (LLMs), models such as GPT-4o, DeepSeek, Claude, and LLaMA have demonstrated remarkable performance in general code generation, owing to their exposure to code patterns in large-scale training corpora and their powerful contextual understanding and generative capabilities [4]. These models allow users to generate code directly from natural language instructions, significantly lowering the entry barrier to programming [5]. Building on this foundation, domain-specific code generation models—such as DeepSeek Coder [6], Qwen2.5-Coder [7], and Code LLaMA [8]—have further improved accuracy and robustness through targeted training. Nevertheless, model-generated code often suffers from issues such as syntax errors, incorrect function calls, or missing dependencies, compromising its executability and logical soundness. This phenomenon, commonly referred to as “code hallucination,” remains a challenge [9]. To quantify model performance and guide iterative improvement, researchers have developed benchmark suites such as HumanEval [10], MBPP [11], and LiveCodeBench [12], which enable automated evaluation based on execution success rates and related metrics. Beyond general-purpose code, the growing reliance on intelligent technologies across disciplines has intensified the demand for task-specific programming tailored to particular data types and analytical workflows.
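The execution-based evaluation used by suites such as HumanEval and MBPP boils down to running a candidate solution against paired test assertions; a simplified sketch follows (real harnesses add sandboxing and timeouts, which are omitted here):

```python
def run_candidate(code: str, test: str) -> bool:
    """Execute a candidate solution followed by its unit test; the
    sample passes only if neither raises (including failed asserts)."""
    namespace = {}
    try:
        exec(code, namespace)   # define the candidate function
        exec(test, namespace)   # run the test against it
        return True
    except Exception:
        return False

def pass_at_1(samples):
    """Fraction of (code, test) pairs whose execution succeeds."""
    return sum(run_candidate(c, t) for c, t in samples) / len(samples)

samples = [
    ("def add(a, b):\n    return a + b", "assert add(2, 3) == 5"),
    ("def add(a, b):\n    return a - b", "assert add(2, 3) == 5"),  # buggy
]
print(pass_at_1(samples))  # 0.5
```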
Fields such as biochemistry, finance, and geosciences have successively developed specialized computational platforms—such as Bioconductor [13] and QuantLib [14]—that are typically built upon general-purpose languages (e.g., Python, R, Java) but incorporate deeply customized data structures and processing logic. These platforms have gradually evolved into domain-specific code systems characterized by distinct disciplinary features [15,16]. Compared to general-purpose code, domain-specific code exhibits a high degree of specialization in function naming, parameter definitions, computational logic, and data interfaces, reflecting the reorganization of language semantics and functional customization driven by disciplinary knowledge [17,18]. In geosciences, for instance, the rapid proliferation of high-resolution remote sensing imagery and crowdsourced spatiotemporal data has created escalating demands for tailored geospatial analytical capabilities. In response, cloud-based platforms such as Google Earth Engine (GEE) have emerged as widely adopted, code-driven tools in remote sensing and spatial analysis [19]. GEE offers both JavaScript and Python interfaces, embedding a wide range of geoscientific functions typically prefixed with ‘ee.’ or ‘Export.’, which support tasks such as remote sensing preprocessing, index computation, and time-series change detection [20]. Compared to traditional GIS tools dominated by graphical user interfaces (GUIs), GEE’s scripting paradigm offers significant advantages in automation and reusability [21]. Users can implement complex workflows via concise scripts and, through its ‘copy-paste-run’ sharing mechanism, efficiently disseminate geospatial analytical methods, thereby promoting innovation and application across users, regions, and disciplines [22,23]. However, writing code on the GEE platform requires not only basic programming skills but also solid knowledge of geospatial analysis.
This includes familiarity with core objects and operators (such as ‘ee.Image’ and ‘ee.FeatureCollection’), remote sensing datasets (e.g., Landsat, MODIS), spatial information concepts (e.g., coordinate systems and geographic projections), and methods for processing and integrating multi-source data. As a result, the learning curve for GEE programming is significantly steeper than for general-purpose coding, and users without a geospatial background often encounter substantial barriers in practice [24,25]. With GEE being increasingly applied in domains such as transportation, ecology, and defense, there is a growing demand for more efficient and automated geospatial code generation. In this context, leveraging LLMs to generate GEE code has emerged as a promising approach to lowering the entry barrier and enhancing development efficiency [26]. Existing studies have attempted to construct specialized models, such as GeoCode-GPT [24], by fine-tuning general-purpose LLMs with geospatial code corpora, leading to notable improvements in code quality. However, due to limited training resources, geospatial code accounts for only a small fraction of pretraining data. As a result, models are more prone to “code hallucination” in geospatial code generation tasks than in general domains [27,28]. Typical issues include Function Invocation Errors, Object Type Confusion, Missing Filter Conditions, Loop Structure Errors, Semantic Band Mapping Errors, Type Mismatch Errors, Invalid Type Conversions, and Missing Required Parameters, as illustrated in Figure 1. These issues severely compromise code executability and the reliability of analytical results. Therefore, establishing a systematic evaluation framework for geospatial code generation is essential. It not only helps clarify the performance boundaries of current models in geospatial tasks but also provides theoretical and practical support for developing future high-performance, low-barrier geospatial code generation models [29].
At present, a few studies have begun to explore evaluation mechanisms for geospatial code generation tasks. Representative efforts include GeoCode-Bench [27] and GeoCode-Eval [30] proposed by Wuhan University, as well as the GeoSpatial-Code-LLMs Dataset developed by Wrocław University of Science and Technology [28]. GeoCode-Bench primarily employs multiple-choice, true/false, and open-ended questions. The first two types focus on textual knowledge comprehension without involving actual code generation, while the code-related tasks rely on expert manual scoring, which incurs high evaluation costs, introduces subjectivity, and limits reproducibility. Similarly, GeoCode-Eval depends on human evaluation and emphasizes complex test cases, lacking systematic testing of basic functions and commonly used logical combinations. This hinders fine-grained analysis of model capabilities. The GeoSpatial-Code-LLMs Dataset attempts to introduce automated evaluation mechanisms but does not yet support multimodal data representations such as imagery, vector, and raster formats. Moreover, its sample size remains limited (approximately 40 instances). In summary, existing evaluation systems exhibit clear limitations in terms of coverage across evaluation dimensions, granularity of assessment, and degree of automation. There is an urgent need to develop an end-to-end, reproducible, and unit-level evaluation benchmark that supports automated assessment and encompasses diverse multimodal geospatial data types.

Figure 1. Eight typical error types in LLM-generated geospatial code, each with a user requirement and an example of erroneous LLM output.

# 1. Function Invocation Error

# User Requirements to LLM

How do I combine two Landsat images using the Earth Engine Python API and get the names of the bands in the resulting image?

# Example of Erroneous Output by LLM

```python
image1 = ee.Image('LANDSAT/LC08/C02/T1_TOA/LC08_044034_20200701')
image2 = ee.Image('LANDSAT/LC08/C02/T1_TOA/LC08_044034_20200717')
sum_image = image1 + image2              # error: '+' is not a valid ee.Image operation
sum_image_correct = image1.add(image2)   # correct invocation
print(sum_image_correct.bandNames().getInfo())
```

# 2. Object Type Confusion

# User Requirements to LLM

Create an image with three bands and convert it to an array. Then select band ‘b1’ from the array image.

# Example of Erroneous Output by LLM

```python
image = ee.Image([1, 2, 3]).rename(['b1', 'b2', 'b3'])
array_image_1d = image.toArray()
selected_band_from_array = array_image_1d.select('b1')    # error: array images have no named bands
selected_band_from_array = array_image_1d.arrayGet([0])   # correct invocation
print(selected_band_from_array.getInfo())
```

# 3. Missing Filter Conditions

# User Requirements to LLM

Get the first Sentinel-2 image from the harmonized collection and clip it to San Francisco.

# Example of Erroneous Output by LLM

```python
roi = ee.Geometry.Point([-122.4194, 37.7749])
s2_filtered = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')    # error: filter conditions missing
s2_filtered = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED') \
    .filterBounds(roi).filterDate('2021-01-01', '2021-01-31')      # correct invocation
image = s2_filtered.first().clip(roi)
print(image.getInfo())
```

# 4. Loop Structure Error

# User Requirements to LLM

Access each image in a limited Sentinel-2 collection and print its properties one by one.

# Example of Erroneous Output by LLM

```python
collection = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED').limit(5)
for i in range(5):              # error: client-side loop over a server-side collection
    image = collection.get(i)
    print(image.id())

def get_image_id(image):        # correct invocation: map a function over the collection
    return ee.Image(image).id()

image_ids = collection.map(get_image_id)
print(image_ids.getInfo())
```

# 5. Band Mapping Error

# User Requirements to LLM

Select the near-infrared band from a Sentinel-2 image using the Earth Engine Python API.

# Example of Erroneous Output by LLM

```python
image = ee.Image('COPERNICUS/S2_SR_HARMONIZED/20210101T000239_20210101T000244_T56HNL')
nir_band = image.select('B4')   # error: B4 is the red band
nir_band = image.select('B8')   # correct: B8 is the near-infrared band
```

# 6. Type Mismatch Error

# User Requirements to LLM

Apply a constant value to a specific location using the Earth Engine Python API.

# Example of Erroneous Output by LLM

```python
img = ee.Image.constant(1)
geom = ee.Geometry.Point([0, 0])
result = img.add(geom)    # error: a Geometry cannot be added to an Image
result = img.clip(geom)   # correct invocation
```

# 7. Invalid Type Conversion

# User Requirements to LLM

Get a single image from the Sentinel-2 ImageCollection using the Earth Engine Python API.

# Example of Erroneous Output by LLM

```python
coll = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
```

# 8. Missing Parameter

# User Requirements to LLM

Calculate the mean value of a MODIS image using reduceRegion in the Earth Engine Python API.

# Example of Erroneous Output by LLM

```python
image = ee.Image('MODIS/006/MOD13Q1/2017_01_01')
mean = image.reduceRegion(reducer=ee.Reducer.mean())   # error: geometry and scale are missing
mean = image.reduceRegion(                             # correct invocation
    reducer=ee.Reducer.mean(),
    geometry=image.geometry(),
    scale=500
)
```

In response to the aforementioned needs and challenges, this study proposes AutoGEEval, an automated evaluation framework for GEE geospatial code generation tasks based on LLMs. The framework supports multimodal data types and unit-level assessment and is implemented using GEE’s Python API. On one hand, the Python interface can be executed in local development environments such as Jupyter Notebook and PyCharm, eliminating dependence on the GEE web-based code editor and aligning more closely with real-world development practices. As a result, it has become more widely adopted in applied settings compared to the JavaScript version. On the other hand, Python’s local execution environment enables the capture of console outputs and runtime exceptions, thereby facilitating the integration of automated error detection and feedback mechanisms to support a fully end-to-end evaluation workflow. In contrast, the JavaScript interface is constrained by the closed nature of GEE’s online platform—its execution process cannot be externally invoked or monitored, making it unsuitable for automation-oriented evaluation tasks.
Figure 2: Overview of the AutoGEEval framework: (a) the AutoGEEval-Bench test suite of 1325 unit test cases; (b) the Submission Program, which prompts each of the 18 evaluated LLMs (general non-reasoning, general reasoning, code generation, and geospatial code generation models) to complete the code for each function header and executes the result; and (c) the Judge Program, which selects a type-based comparison routine for each execution result (element-wise NumPy comparison for arrays, with center sampling for large images; element-wise comparison for lists; direct comparison for strings; numerical comparison for floating-point values and dictionary ‘value’ fields; comparison of all dictionary keys; GeoJSON-based comparison for geometry objects) to compute accuracy metrics. Purple indicates evaluation metrics.

As illustrated in Figure 2, the AutoGEEval framework consists of three main components: the AutoGEEval-Bench test suite, the Submission Program, and the Judge Program. The AutoGEEval-Bench (see Figure 2a) is constructed based on the official GEE function documentation and contains a total of 1325 unit test cases. All test cases are automatically generated using prompt strategies proposed in this study, guided by the Qwen2.5-Max model, and subsequently verified through rigorous expert review. Each test item comprises six elements: the function declaration (Function_header), a reference code snippet (Reference_code), a list of parameters (Parameters_list), the expected output type (Output_type), the designated output path (Output_path), and the expected answer (Expected_answer). The test suite spans 26 GEE data types, including remote sensing imagery, geometry objects, lists, dictionaries, strings, and numerical values.
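The Judge Program’s type-based selection of comparison routines can be sketched as a simple dispatch (an illustrative reimplementation with assumed type labels and tolerances, not the released Judge Program):

```python
import math
import numpy as np

def judge(output, expected, output_type: str, tol: float = 1e-6) -> bool:
    """Select a comparison routine based on the declared output type."""
    if output_type == "array":
        # Arrays (including image patches) are compared element-wise.
        return bool(np.allclose(np.asarray(output), np.asarray(expected),
                                atol=tol))
    if output_type == "list":
        return len(output) == len(expected) and all(
            a == b for a, b in zip(output, expected))
    if output_type == "string":
        return str(output) == str(expected)
    if output_type == "float":
        return math.isclose(float(output), float(expected), abs_tol=tol)
    if output_type == "dict_value":
        # Compare only the 'value' field numerically.
        return math.isclose(float(output["value"]),
                            float(expected["value"]), abs_tol=tol)
    raise ValueError(f"unsupported output type: {output_type}")

print(judge([[1, 2], [3, 4]], [[1, 2], [3, 4.0000001]], "array"))  # True
print(judge({"value": 0.5}, {"value": 0.5000001}, "dict_value"))   # True
```

For very large rasters the real framework applies center sampling before comparison rather than comparing every pixel.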
The Submission Program (see Figure 2b) prompts the target LLM to generate code based on the provided function declaration using carefully designed instructions. It then automatically supplies the required parameters, executes the generated program, and saves the output to the specified path. The Judge Program subsequently reads the output and selects the corresponding evaluation module based on the output type to compute accuracy metrics. In addition, the framework supports automated monitoring and logging of resource consumption, execution efficiency, and error types. In the experimental evaluation, we systematically assessed nine general-purpose non-reasoning models (e.g., GPT-4o), three reasoning-enhanced general models (e.g., DeepSeek-R1), five code generation models (e.g., Qwen2.5-Coder), and one geospatial-specialized model (GeoCode-GPT, including its multi-parameter variants). The results comprehensively reveal performance bottlenecks and potential directions for optimization in current geospatial code generation tasks. The main contributions of this study are summarized as follows:

• We design, implement, and open-source AutoGEEval, the first automated evaluation framework for geospatial code generation on GEE using LLMs. The framework supports end-to-end automation of test execution, result verification, and error type analysis across multimodal data types at the unit level.
• We construct and release AutoGEEval-Bench, a geospatial code benchmark comprising 1325 unit-level test cases spanning 26 distinct GEE data types.
• We conduct a comprehensive evaluation of 18 representative LLMs across four categories—including GPT-4o, DeepSeek-R1, Qwen2.5-Coder, and GeoCode-GPT—by measuring execution pass rates for geospatial code generation tasks. In addition, we analyze model accuracy, resource consumption, execution efficiency, and error type distributions, providing insights into current limitations and future optimization directions.
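The Submission Program’s prompt-execute-save loop can be sketched as follows (a minimal illustration: the prompt wording, the `Function_name` field, and the stub model are assumptions, and a real run would call an LLM endpoint and execute GEE code rather than plain Python):

```python
import json
import os
import tempfile

def build_prompt(function_header: str) -> str:
    """Assemble the instruction given to the target LLM
    (illustrative wording, not the exact AutoGEEval prompt)."""
    return ("Please generate the complete code for the following "
            "function header:\n" + function_header)

def run_submission(test_case: dict, generate) -> None:
    """Prompt the model, execute the generated function with the
    stored parameters, and save the result to the output path."""
    code = generate(build_prompt(test_case["Function_header"]))
    namespace = {}
    exec(code, namespace)  # defines the target function
    func = namespace[test_case["Function_name"]]
    result = func(**test_case["Parameters_list"])
    with open(test_case["Output_path"], "w") as f:
        json.dump(result, f)

# Stub "model" standing in for a real LLM endpoint.
def stub_llm(prompt: str) -> str:
    return "def number_add(a, b):\n    return a + b"

case = {
    "Function_header": "def number_add(a, b):",
    "Function_name": "number_add",
    "Parameters_list": {"a": 1, "b": 2},
    "Output_path": os.path.join(tempfile.gettempdir(), "number_add_1.json"),
}
run_submission(case, stub_llm)
print(json.load(open(case["Output_path"])))  # 3
```

The Judge Program then reads the file at `Output_path` and compares it against the stored expected answer.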
The remainder of this paper is organized as follows: Section 2 reviews related work on geospatial code, code generation tasks, and evaluation methods based on LLMs. Section 3 presents the construction methodology of the AutoGEEval-Bench test suite. Section 4 details the design of the AutoGEEval evaluation framework, including the implementation of the Submission and Judge Programs. Section 5 provides a systematic analysis and discussion of the evaluation results. Section 6 concludes the study by summarizing its contributions and significance, identifying current limitations, and outlining future research directions.

# 2. Related work

# 2.1. Geospatial code

Geospatial code refers to a vertical specialization of general-purpose programming languages in the geosciences, specifically denoting code used for processing, analyzing, and visualizing geospatial data. It should be distinguished from terms like “geocoding” or “geospatial encoding,” which typically concern the transformation of geographic entities into coordinates or identifiers [24,31-34]. The origins of geospatial code can be traced back to 1963 with the development of the Canada Geographic Information System (CGIS), which introduced batch-processing commands to automate spatial analysis workflows [35]. In 1982, the U.S. Army Corps of Engineers released GRASS GIS [36], which integrated UNIX shell scripting with a modular suite of geographic tools, establishing a code-based paradigm for spatial processing. This was followed by ESRI’s ArcInfo and its AML scripting language, which advanced geospatial code toward high-level scripting environments [37]. However, the high maintenance costs and steep learning curves associated with specialized scripting languages limited their broader adoption. In the 1990s, the academic and open-source communities began to embed geospatial processing logic into general-purpose programming languages such as C and Python.
The introduction of Perl and Python scripting support in GRASS marked an early integration of GIS tools with mainstream programming languages [38]. Since then, a range of geospatial analysis libraries have been developed for languages including Python, JavaScript, MATLAB, and R, greatly enhancing the accessibility and popularity of spatial analysis [39]. Since 2010, desktop GIS software such as QGIS and ArcGIS has provided Python APIs, facilitating a shift from GUIs to code-driven operations. The launch of GEE marked a new era of cloud-based geospatial computation. Its JavaScript and Python APIs allow for remote access to massive remote sensing datasets and scalable parallel computing frameworks, significantly enhancing geospatial programming capabilities. This evolution has culminated in a modern paradigm where “geospatial code is analysis [40,41].”

# 2.2. Code generation

Early research on code generation focused primarily on general-purpose domains. During the 1980s and 1990s, it relied heavily on handcrafted heuristic rules and template-based systems—such as the pattern-matching mechanism in GCC [42] and tools like Lex/Yacc [43]—which, while efficient, suffered from limited generalizability [44,45]. Since the mid-2010s, with the rise of deep learning, researchers began framing code generation as a sequence-to-sequence problem. Models such as DeepCoder [46] and Seq2SQL [47] achieved impressive performance on specific tasks, but their capabilities remained constrained by limited generative flexibility and dependence on labeled data [48]. Beginning in 2020, the emergence of pretrained language models and LLMs fundamentally reshaped the paradigm of code generation. Trained on massive corpora of code, these models demonstrated strong instruction-following and contextual reasoning capabilities, enabling end-to-end natural language to code (NL2Code) generation. CodeBERT pioneered dual-modality pretraining across code and natural language [49].
The advent of Codex and Copilot brought NL2Code into practical use [50], while AlphaCode achieved near-human performance in competitive programming [1]. Subsequent models such as DeepSeek-Coder [6] and Qwen2.5-Coder [7] further enriched the technical landscape. In contrast, geospatial code generation emerged much later and was long constrained by platform-specific, template-based approaches. For example, GEE introduced a parameterized script generator in 2018 [20], and Esri integrated GPT-3-powered code suggestions in 2021 [51]. However, these tools were limited to code completion tasks, lacking generality and adaptability. It was not until October 2024, with the publication of two systematic evaluation papers, that “geospatial code generation” was formally proposed as an independent research task [27,28]. These works extended the general NL2Code [52] paradigm into natural language to geospatial code (NL2GeospatialCode), laying a theoretical foundation for the field. Since then, research in this direction has progressed rapidly. Several optimization strategies have emerged: the CoP strategy employs prompt chaining to guide task decomposition and generation [29]; Geo-FuB [31] and GEE-OPs [25] construct functional semantic and function invocation knowledge bases, respectively, and enhance accuracy through retrieval-augmented generation (RAG); and GeoCode-GPT, a Code LLaMA variant fine-tuned on geoscientific code corpora, became the first LLM dedicated to this task [24]. In addition, systems such as MapGPT [53], ShapefileGPT [54], and GIS Copilot [55] integrate tool invocation and knowledge retrieval to enable the automatic generation and execution of geospatial code, signaling the formation of an emerging academic community around this research frontier.

# 2.3. Evaluation of code generation

With the advancement of software engineering and artificial intelligence technologies, evaluation frameworks for general-purpose code generation have evolved from rule-based static analysis toward deeper assessments of semantic understanding and functional correctness [5,56]. Early approaches focused on syntactic rules and basic functional testing. For instance, the GCC compiler released in 1987 provided basic syntax checking but lacked the capability to comprehensively assess code quality [42]. Entering the 2010s, evaluation priorities shifted toward maintainability and structural complexity. Tools such as SonarQube [57] became widely adopted to detect code smells, cyclomatic complexity, and other quality metrics. Industry leaders like Google have also established detailed coding standards to enhance code consistency and readability [58]. In parallel, standardized benchmarks for evaluating code generation models began to emerge. Datasets such as HumanEval and MBPP assess models' functional correctness and comprehension capabilities by pairing natural language task descriptions with executable test cases [10,11]. In contrast, evaluating geospatial code generation presents greater challenges, largely due to the complexity of handling multimodal and heterogeneous data (e.g., raster, vector, remote sensing imagery) and the reliance on specific platforms such as GEE or ArcGIS [59,60]. These factors make it difficult to automate evaluation without violating platform usage policies [27,28]. Existing efforts reflect these difficulties. For example, GeoCode-Bench [27] and GeoCode-Eval [24], proposed by Wuhan University, still depend on manual execution and expert judgment, introducing subjectivity into the evaluation process. The University of Wisconsin conducted a preliminary assessment of GPT-4’s performance in generating ArcPy code, but neither the evaluation data nor implementation details were publicly released [61].
The GeoSpatial-Code-LLMs Dataset attempts to implement automated evaluation mechanisms, but its scope is limited to basic data types such as GeoDataFrame and Polygon, excluding complex modalities like remote sensing imagery [60]. Moreover, the dataset comprises only about 140 samples and lacks diversity in both task complexity and platform environments. As such, its evaluation capability remains limited, leaving ample room for improvement in both breadth and depth.

# 3. AutoGEEval-Bench

AutoGEEval-Bench is constructed based on the official GEE function documentation and consists of 1325 unit-level test cases, all automatically generated using the prompt strategies proposed in this study. The dataset covers 26 GEE data types, including remote sensing imagery, geometry objects, lists, dictionaries, strings, and numerical values. This section provides a detailed description of the definition of unit test tasks, the design rationale behind the questions, the construction methodology, and the final composition of the benchmark.

# 3.1. Task definition

Unit-Level Testing $(\mathcal{T}_{\mathrm{unit}})$ is designed to evaluate a model’s ability to understand the invocation semantics, parameter structure, and input–output specifications of each API function provided by the platform. The goal is to assess whether the model can generate a syntactically correct and semantically valid function call based on structured function information, such that the code executes successfully and produces the expected result. This task simulates one of the most common workflows for developers—“consulting documentation and writing function calls”—and serves as a capability check at the finest behavioral granularity. Each test case corresponds to a single, independent API function and requires the model to generate executable code that correctly invokes the function with appropriate inputs and yields the expected output.
Let $\mathcal{F}$ denote the set of functions provided in the public documentation of the Earth Engine platform:

$$\mathcal{F} = \{f_1, f_2, \dots, f_N\}, \qquad f_i \in \mathrm{GEE\_API}$$

The task of each model under evaluation is to generate a syntactically correct and executable code snippet $C_i$ for the Earth Engine Python environment:

$$\mathcal{T}_{\mathrm{unit}}\colon f_i \to C_i$$

Define a code executor $\mathrm{Exec}$, where $y_i$ denotes the result object returned after executing the code snippet $C_i$:

$$\mathrm{Exec}(C_i) = y_i$$

Let $A_i$ denote the expected output (ground-truth answer). The evaluation metric is defined based on the comparison between $y_i$ and $A_i$, where the symbol “$\doteq$” may represent strict equality, approximate equality for floating-point values, set containment, or other forms of semantic equivalence:

$$\mathcal{L}_{\mathrm{unit}}(y_i, A_i) = \begin{cases} 0, & \text{if } \mathrm{Exec}(C_i) \doteq A_i \\ 1, & \text{otherwise} \end{cases}$$

# 3.2. Structural design

All test cases are generated by the flagship LLM Qwen2.5-Max, developed by Alibaba, using predefined prompts and reference data, and subsequently verified by human experts (see Section 3.3 for details). Each complete test case consists of six components: the function header (Function_header), reference code snippet (Reference_code), parameter list (Parameters_list), output type (Output_type), output path (Output_path), and the expected answer (Expected_answer).
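A single test case assembled from these six components might look as follows (an illustrative record with invented values; the field names follow the paper, while the `number_add` function and all concrete values are hypothetical stand-ins):

```python
test_case = {
    "Function_header": (
        "def number_add(a, b):\n"
        '    """Adds the first value to the second."""'
    ),
    # Hidden from the model at test time; used to produce the answer.
    "Reference_code": "return ee.Number(a).add(b)",
    "Parameters_list": {"a": 1, "b": 2},
    "Output_type": "float",
    "Output_path": "results/number_add_testcase_1.txt",
    "Expected_answer": 3,
}

# The six fields correspond to (H_i, R_i, P_i, T_i, O_i, A_i).
assert set(test_case) == {
    "Function_header", "Reference_code", "Parameters_list",
    "Output_type", "Output_path", "Expected_answer",
}
```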
Let the set of unit test cases be denoted as:

$$\mathcal{Q} = \{q_1, q_2, \dots, q_n\}$$

Each test case $q_i$ is defined as a six-tuple:

$$q_i = (\mathcal{H}_i, \mathcal{R}_i, \mathcal{P}_i, \mathcal{T}_i, \mathcal{O}_i, \mathcal{A}_i)$$

The meaning of each component is defined as follows:

• $\mathcal{H}_i \in$ FunctionHeader: Function declaration, including the ‘def’ statement, function name, parameter list, and a natural language description of the function's purpose. It serves as the semantic prompt to guide the language model in generating the complete function body.
• $\mathcal{R}_i \in$ ReferenceCode: Reference code snippet, representing the intended logic of the function. It is generated by Qwen2.5-Max based on a predefined prompt and is executed by human experts to obtain the standard answer. During the testing phase, this component is completely hidden from the model, which must independently complete a functionally equivalent implementation based solely on $\mathcal{H}_i$.
• $\mathcal{P}_i \in$ ParameterList: Parameter list, specifying the concrete values to be injected into the function during testing, thereby constructing a runnable execution environment.
• $\mathcal{T}_i \in$ OutputType: Output type, indicating the expected data type returned by the function, used to enforce format constraints on the model's output. Examples include numeric values, Boolean values, dictionaries, or layer objects.
• $\mathcal{O}_i \in$ OutputPath: Output path, specifying where the execution result of the generated code will be stored. The testing system retrieves the model's output from this path.
• $\mathcal{A}_i \in$ ExpectedAnswer: Expected answer, the correct output obtained by executing the reference code with the given parameters.
It serves as the ground-truth reference for evaluating the accuracy of the model's output.

# 3.3. Construction methodology

The construction of unit test cases is based on the official GEE Reference Documentation, specifically the Client Libraries section, which includes a total of 1,374 functions. Each function page provides the full function name, a description of its functionality, usage examples, return type, parameter names and types, and parameter descriptions. Some pages include sample code demonstrating function usage, while others do not. An example of the page layout is shown in Figure 3. Prior to constructing the test cases, we manually executed all functions to validate their operability. This process revealed that 43 functions were deprecated or non-functional due to version updates, and these were thus excluded. The final set of valid functions incorporated into the unit test suite includes 1325 functions. We extracted relevant information from each function page and organized it into a JSON structure. A corresponding prompt template was then designed (see Figure 4) to guide the LLM in parsing the structured documentation and automatically generating unit-level test items.

Figure 3: Example layout of a GEE Reference Documentation page (ee.Array: “Returns an array with the given coordinates.”), showing the function description and code examples for the JavaScript Code Editor and Python Colab environments.

After initial generation, all test cases were manually verified by a panel of five experts with extensive experience in GEE usage and geospatial code development. The verification process ensured that each test task reflects a valid geospatial analysis need, has a clear and accurate problem definition, and is configured with appropriate test inputs. Any test case exhibiting execution errors or incomplete logic was revised and corrected by the experts based on domain knowledge. The professional backgrounds and selection criteria for these five experts are detailed in Table 1.
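The structured record extracted from each documentation page might be organized like this (an illustrative JSON fragment: the field names and the parameter entry are assumptions for illustration, not the paper's actual schema, and the elided description is left as-is):

```json
{
  "function_name": "ee.Array",
  "description": "Returns an array with the given coordinates.",
  "parameters": [
    {"name": "values", "type": "List", "description": "..."}
  ],
  "return_type": "ee.Array",
  "has_example_code": true
}
```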
Figure 4. Prompt template for generating unit test cases. The template supplies the operator name, an explanation of what the operator does, the parameter list (with types and descriptions), and the return type, and asks the LLM to produce (1) standard code, a function wrapping exactly one operator and returning its result, and (2) the corresponding configuration-file test cases, each with parameters, an expected-answer path, and an output type. Expected answers are stored as files named after the function and test-case number, using .npy for images and arrays, .geojson for geometry or feature objects, and .txt for other types. The prompt further instructs that GEE objects be returned without calling getInfo(), that no alternative operators be substituted for the given one, and that no import statements, initialization code, example usage, or explanations be included.
For test cases that execute successfully and produce the expected results, the output is stored at the specified ‘output_path’ and serves as the ground-truth answer for that item. During the testing phase, the Judge Program retrieves the reference result from this path and compares it against the model-generated output to compute consistency-based accuracy metrics.

Table 1. Background and selection criteria of experts

# 3.4. Construction results

AutoGEEval-Bench includes one test case for each of the 1,325 valid functions officially provided by GEE, resulting in a total of 1,325 unit-level testing tasks. These tasks collectively cover 26 different GEE data types, including remote sensing imagery, geometry objects, lists, dictionaries, strings, and numerical values. The distribution and proportion of each data type in AutoGEEval-Bench are detailed in Table 2.

Table 2. Distribution of GEE output types in AutoGEEval-Bench

The 26 GEE data types covered in AutoGEEval-Bench can be broadly categorized into two groups. The first group consists of text-based formats, such as dictionaries, arrays, lists, strings, and floating-point numbers. The second group comprises topology-based formats, such as geometries, imagery, and GeoJSON structures. This paper presents representative unit test examples from AutoGEEval-Bench: Figure 5 illustrates test cases involving text-based GEE data types, while Figure 6 shows examples involving topology-based data types.
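Per the generation prompt (Figure 4), each ground-truth answer is written to a file whose extension follows the value's representation: .npy for images and arrays, .geojson for geometry or feature objects, .txt otherwise. A minimal sketch of that naming rule; the exact type groupings and file-name pattern here are assumptions for illustration:

```python
# Type groups are illustrative; only the extension rule comes from the prompt template.
ARRAY_TYPES = {"ee.Array", "ee.Image", "ee.ArrayImage", "ee.ConfusionMatrix"}
GEO_TYPES = {"ee.Geometry", "ee.Feature", "ee.FeatureCollection"}

def answer_filename(function_name: str, testcase_number: int, output_type: str) -> str:
    """Build the expected-answer file name: function name + test-case number,
    with the extension chosen by the runtime representation of the output."""
    if output_type in ARRAY_TYPES:
        ext = ".npy"        # images and arrays
    elif output_type in GEO_TYPES:
        ext = ".geojson"    # geometry / feature objects
    else:
        ext = ".txt"        # all other types
    return f"{function_name}_testcase{testcase_number}{ext}"
```

For instance, a numeric-output case would be stored as a `.txt` file, matching the `numberUint8Task_testcase.txt` style shown in Figure 5.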
Figure 5. Representative unit test cases for text-based GEE data types (ee.Date, ee.Dictionary, ee.Array, ee.Number, ee.List, ee.String), each showing the function header, reference code, parameter list, output type, output path, and expected answer.

Figure 6. Representative unit test cases for topology-based GEE data types (ee.Geometry, ee.Feature, ee.Image).

# 4. Submission and judge programs

During the evaluation phase, the AutoGEEval framework relies on two key components: the Submission Program, which guides the LLM in responding to the tasks in AutoGEEval-Bench by generating and executing code and saving the results, and the Judge Program, which compares the model's output against the ground-truth answers to determine correctness. This chapter presents the workflow design of both programs.

# 4.1. Submission program

The overall workflow of the Submission Program is illustrated in Figure 2b and consists of three main tasks: answer generation, execution, and result saving. In the answer generation stage, the system uses a prompt template to guide the target LLM through each item in AutoGEEval-Bench sequentially. The model generates code based solely on the function header, from which it constructs the corresponding function body. During the execution stage, the execution module reads the parameter list and substitutes the specified values into the formal parameters of the generated code.
The code is then executed within the Earth Engine environment. Finally, the execution result is saved under the location and file name defined by the output path. Note that the prompt is carefully designed to instruct the model to output only the final answer, avoiding any extraneous or irrelevant content. The detailed prompt design is shown in Figure 7.

Figure 7. Prompt template of the Submission Program:

```python
prompt = (
    "Please write the complete GEE Python API function based on the provided function header. "
    "Only return the function body without any explanations, comments, or additional text. "
    "The function must use the specified parameters and produce the expected output. "
    "Ensure that no extra content is included, and do not modify the function signature or docstring. "
    "Here's the function header and the relevant information:\n\n"
    f"{test_file_content}"
)
```

# 4.2. Judge program

The overall workflow of the Judge Program is illustrated in Figure 2c. Its primary function is to read the execution results from the specified ‘Output_path’, select the appropriate evaluation logic based on the declared ‘Output_type’, and compare the model's output against the ‘Expected_answer’. The core challenge of the Judge Program lies in accurately assessing correctness across different output data types. As shown in Table 2, AutoGEEval-Bench defines 26 categories of GEE data types. However, many of these types share overlapping numerical representations. For example, although ‘ee.Array’, ‘ee.ConfusionMatrix’, and ‘ee.ArrayImage’ differ in type, they are all expressed as arrays in output. Similarly, ‘ee.Dictionary’, ‘ee.Blob’, and ‘ee.Reducer’ are represented as dictionary-like structures at runtime. Furthermore, ‘ee.Geometry’, ‘ee.Feature’, and ‘ee.FeatureCollection’ all serialize to the GeoJSON format, while both ‘ee.String’ and ‘ee.Boolean’ are represented as strings.
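One way to realize comparison over these shared representations is to dispatch on a type-to-representation table. A hypothetical sketch follows; the function names, tolerances, and flat-array comparison are assumptions, and the paper's actual matching strategies are the ones summarized in Table 3:

```python
import json
import math

# GEE types that share a runtime representation get the same comparison strategy.
REPRESENTATION = {
    "ee.Array": "array", "ee.ConfusionMatrix": "array", "ee.ArrayImage": "array",
    "ee.Dictionary": "dict", "ee.Blob": "dict", "ee.Reducer": "dict",
    "ee.Geometry": "geojson", "ee.Feature": "geojson", "ee.FeatureCollection": "geojson",
    "ee.String": "string", "ee.Boolean": "string",
}

def judge(output_type: str, model_output, expected) -> bool:
    rep = REPRESENTATION.get(output_type, "scalar")
    if rep == "array":
        # Element-wise numeric tolerance; flat arrays only, for brevity.
        return len(model_output) == len(expected) and all(
            math.isclose(a, b, rel_tol=1e-6) for a, b in zip(model_output, expected)
        )
    if rep in ("dict", "geojson"):
        # Key-order-insensitive structural comparison.
        return json.dumps(model_output, sort_keys=True) == json.dumps(expected, sort_keys=True)
    if rep == "string":
        return str(model_output).strip() == str(expected).strip()
    # Fallback: numeric scalar comparison with tolerance.
    return math.isclose(float(model_output), float(expected), rel_tol=1e-6)
```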
Given these overlaps, the Judge Program performs unified categorization based on the actual value representation (arrays, dictionaries, GeoJSON, or strings) and applies the corresponding matching strategy, ensuring accurate and fair evaluation across diverse GEE data types. AutoGEEval summarizes the value representations and matching strategies for each GEE data type in Table 3.

Table 3. Summary of value representations and evaluation strategies for GEE data types

# 5. Experiments

This chapter outlines the selection criteria for the evaluated models, the experimental configuration, the evaluation metrics, and runtime cost considerations.

# 5.1. Evaluated models

The models evaluated in this study are selected from among the most advanced and widely adopted LLMs as of April 2025. All selected models have either undergone peer review or been publicly released through open-source or open-access channels. The aim is to align with the growing user preference for end-to-end, easy-to-use models and to provide informative references for both practical application and academic research. Note that optimization strategies such as prompt engineering, RAG, and agent-based orchestration are not included in this evaluation. These strategies do not alter the core model architecture, and their effectiveness is highly dependent on specific design choices, often resulting in unstable performance. Moreover, they are typically tailored to specific downstream tasks and were not originally intended for unit-level testing, making their inclusion in this benchmark neither targeted nor meaningful. Additionally, such strategies often involve complex prompts that consume a large number of tokens, compromising the fairness and efficiency of the evaluation process.

Table 4.
Information of evaluated LLMs

The evaluated models span four categories: (1) general-purpose non-reasoning LLMs, (2) general-purpose reasoning-enhanced LLMs, (3) general-domain code generation models, and (4) task-specific code generation models tailored for geospatial applications. For some models, multiple publicly available parameter configurations are evaluated. Counting different parameter versions as independent models, a total of 18 models are assessed. Detailed specifications of the evaluated models are provided in Table 4.

# 5.2. Experimental setup

For hardware, a local computing device equipped with 32 GB of RAM and an RTX 4090 GPU was used. During inference, open-source models with parameter sizes not exceeding 16B were deployed locally using the Ollama tool; for larger open-source models and proprietary models, inference was conducted through their official API interfaces to access cloud-hosted versions. For parameter settings, the generation temperature was set to 0.2 for non-reasoning models to enhance the determinism and stability of outputs. For reasoning-enhanced models, following existing research practice, no temperature was specified, preserving the models' native inference behavior. In addition, the maximum output length was uniformly set to 4,096 tokens for all models to ensure complete responses and prevent truncation. Time consumption and task descriptions for each phase are provided in Table 5.

Table 5. Time allocation across experimental stages

# 5.3. Evaluation metrics

This study evaluates the performance of LLMs on geospatial code generation tasks across four dimensions: accuracy metrics, resource consumption metrics, operational efficiency metrics, and error type logs.

# 5.3.1. Accuracy metrics

This study adopts pass@n as the primary accuracy metric.
It measures the probability that a correct answer is generated at least once within n independent attempts for the same test case and is a widely used standard for evaluating both the correctness and stability of model outputs. Given the known hallucination issue in LLMs, where inconsistent or unreliable results may be produced for identical inputs, a single generation may not be representative. We therefore evaluate the models under three configurations, n = 1, 3, 5, to enhance the robustness and credibility of the assessment.

$$ pass@n = 1 - \frac{C_n}{N} $$

where $N$ is the total number of test cases and $C_n$ is the number of test cases for which none of the $n$ generated samples is correct.

In addition, we introduce the Coefficient of Variation (CV) to assess the stability of the pass@1, pass@3, and pass@5 scores. This metric quantifies the variability in model performance across multiple generations, serving as an indirect indicator of the severity of hallucination.

$$ CV = \frac{\sigma}{\mu} $$

where $\sigma$ is the standard deviation and $\mu$ is the mean of the three pass@n scores. A smaller CV indicates more stable model performance.

To evaluate model behavior more comprehensively, we further introduce the Stability-Adjusted Accuracy (SA), which integrates accuracy and stability into a single metric: a higher pass@5 score (accuracy) combined with a lower CV score (stability) yields a higher SA score. The calculation is defined as:

$$ SA = \frac{pass@5}{1 + CV} $$

# 5.3.2. Resource consumption metrics

Resource consumption metrics measure the computational resources and cost required for a model to complete the testing tasks. This study considers three key metrics:

- Token Consumption (Tok.): the average number of tokens required to complete each unit test case.
For locally deployed models, this metric reflects hardware resource usage; for commercial models, token consumption translates directly into monetary cost. Most mainstream APIs charge by the number of tokens processed (typically per 1 million tokens), and pricing varies significantly across models. As of April 2025, GPT-4 Turbo is priced at \$10.00/1M tokens, Claude 3 Opus at \$15.00/1M tokens, DeepSeek-Coder at \$0.60/1M tokens, and Qwen2-72B at \$0.80/1M tokens. Token usage is therefore a critical indicator of both inference cost and model accessibility.

- Inference Time (In.T): the average response time (in seconds) required by the model to generate each test case. This metric reflects latency and response efficiency, both of which directly impact user experience.
- Code Lines (Co.L): the number of core executable lines of code generated by the model, excluding comments, natural language explanations, and auxiliary prompts. Compared with token count, the code line count more accurately assesses the model's actual code generation capability, filtering out token inflation caused by unnecessary text in the reasoning process.

# 5.3.3. Operational efficiency metrics

Operational efficiency metrics assess a model's accuracy per unit of resource consumption, reflecting its cost-effectiveness. This study defines inference efficiency, token efficiency, and code line efficiency based on three resource dimensions: time, token usage, and code structure. Note that, to ensure comparability and fairness across models in terms of generation attempts and to reduce the variance caused by random sampling, all resource consumption metrics reported in this study are averaged over five generations. Accordingly, pass@5 is uniformly adopted as the reference accuracy metric in all efficiency calculations.
- Inference Efficiency (In.T-E): the average accuracy achieved by a model per unit time, calculated as the ratio of accuracy to average inference time (in seconds). This metric evaluates the model's ability to balance response speed and output quality: the shorter the inference time, the higher the accuracy achieved per unit time, indicating more efficient use of computational resources and better interactive performance.

$$ \text{Inference Efficiency} = \frac{pass@5}{\text{Inference Time}} $$

- Token Efficiency (Tok.-E): the accuracy achieved per unit of token consumption, calculated as the ratio of accuracy to the average number of tokens used. This metric reflects the economic efficiency of the generation process and supports cross-model comparisons of cost-performance.

$$ \text{Token Efficiency} = \frac{pass@5}{\text{Token Consumption}} $$

- Code Line Efficiency (Co.L-E): the accuracy achieved per line of core executable code, emphasizing the structural compactness and effectiveness of the generated logic. Unlike tokens, code lines exclude natural language explanations and prompt-related content, offering a more direct reflection of the model's ability to produce high-quality, executable code for geospatial tasks. This metric is of particular value to developers, especially when evaluating code generation efficiency in practical engineering deployments.

$$ \text{Code Line Efficiency} = \frac{pass@5}{\text{Code Lines}} $$

# 5.3.4. Error type logs

To support qualitative analysis of model performance, AutoGEEval incorporates an automated error detection mechanism based on GEE runtime errors, designed to record the types of errors present in generated code.
This mechanism distinguishes four error types:

- Syntax Error: issues in the syntactic structure of the code that prevent successful compilation, such as missing parentheses, misspellings, or missing module imports. Such errors are typically flagged in the GEE console as ‘SyntaxError’.
- Parameter Error: the code is syntactically correct but fails to execute due to incorrect or missing parameters. Parameters often involve references to built-in datasets, band names, or other domain-specific knowledge in the geosciences. Common error messages include phrases like “xxx has no attribute xxx”, “xxx not found”, or prompts indicating missing required arguments. These errors often arise during parameter concatenation or variable assignment.
- Invalid Answer: the code executes successfully, but the output is inconsistent with the expected answer or the returned data type does not match the predefined specification.
- Network Error: the GEE system returns an Internal Server Error during testing, and the same error persists after three retries under stable network conditions. Such errors are typically caused by logical flaws in the code, such as faulty conditionals or abnormal loop structures, rather than pure syntax issues.

# 6. Results

Building on the evaluation metrics outlined in Section 5.3, this chapter presents a systematic analysis of the evaluation results obtained with the AutoGEEval framework and the AutoGEEval-Bench test suite. The analysis focuses on four key dimensions: accuracy metrics, resource consumption metrics, operational efficiency metrics, and error type logs.

# 6.1. Accuracy

The evaluation results for accuracy-related metrics across all models are presented in Table 6.

Table 6. Accuracy evaluation results. The values in parentheses under pass@3 represent the improvement over pass@1, and the values in parentheses under pass@5 represent the improvement over pass@3.
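The accuracy metrics reported in Table 6 follow directly from the definitions in Section 5.3.1. A minimal sketch, assuming a boolean pass/fail record per generation for each test case (the data layout is an assumption for illustration):

```python
from statistics import mean, pstdev

def pass_at_n(results_per_case, n):
    """results_per_case: one list of booleans per test case (one entry per
    generation). A case counts as passed if any of its first n attempts is
    correct; pass@n = 1 - (failed cases) / (total cases)."""
    failed = sum(1 for attempts in results_per_case if not any(attempts[:n]))
    return 1 - failed / len(results_per_case)

def stability_adjusted_accuracy(results_per_case):
    """SA = pass@5 / (1 + CV), with CV = sigma/mu over the pass@{1,3,5}
    scores (assumes the mean score is non-zero)."""
    scores = [pass_at_n(results_per_case, n) for n in (1, 3, 5)]
    cv = pstdev(scores) / mean(scores)
    return scores[-1] / (1 + cv)
```

With this formulation, a model that only succeeds on later attempts raises pass@3/pass@5 but also raises CV, and is therefore penalized in SA relative to a model that succeeds on the first attempt.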
The stacked bar chart of execution accuracy across models is shown in Figure 8. As observed, increasing the number of generation attempts generally improves accuracy, indicating that multiple generations can partially mitigate hallucination in model-generated code. However, although both pass@3 and pass@5 add two generation rounds over the previous level, the green segment (the improvement from pass@3 to pass@5) is noticeably shorter than the orange segment (the improvement from pass@1 to pass@3), suggesting sharply diminishing returns on accuracy from additional generations. Quantitative results are presented in Figure 9. The average improvement at pass@3 is 12.88%, ranging from 4.38% to 21.37%. In contrast, the average improvement at pass@5 is only 3.81%, with a range of 1.24% to 6.93%. This pattern highlights a clear diminishing marginal effect: early rounds of generation can substantially correct errors, but the potential for improvement tapers off in later rounds, reducing the value of further sampling. Future research should therefore focus on improving performance in the initial generation rounds, rather than relying on incremental gains from additional sampling, to improve generation efficiency and accuracy more effectively.

Figure 8. Stacked bar chart of pass@n metrics. Blue represents the pass@1 value, orange the improvement of pass@3 over pass@1, and green the improvement of pass@5 over pass@3. The white text on the bars indicates the absolute scores for pass@1, pass@3, and pass@5, respectively.

Figure 9.
Line chart of pass@3 and pass@5 improvement ratios.

The bubble chart displaying the pass@n scores and relative rankings of all models is shown in Figure 10. Several key observations can be made:

- Model performance ranking: the dark blue bubbles, representing general-purpose non-reasoning models, generally occupy higher ranks, outperforming the red general-purpose reasoning models and the pink general-purpose code generation models. The light blue bubble representing the geospatial code generation model GeoCode-GPT sits in the upper-middle tier, with an average rank of 7.33 among the 18 evaluated models.
- Performance variation within the DeepSeek family: DeepSeek-V3 (average rank 1.33), DeepSeek-V3-0324 (average rank 3.67), and DeepSeek-R1 (average rank 4.00) all rank among the top-performing models. However, DeepSeek-Coder-V2 performs poorly, ranking last (average rank 18.00), indicating that it lacks sufficient capability for GEE code generation tasks.
- Inconsistent performance across model versions: surprisingly, DeepSeek-V3-0324, an optimized version of DeepSeek-V3, performs worse on GEE code generation, suggesting that later updates may not have specifically targeted this domain and may even have degraded performance in it.
- Performance of different parameter versions of the same model: significant differences are observed across parameter configurations. For instance, Qwen-2.5-Coder-32B (average rank 8.33) outperforms its 7B (rank 14.00) and 3B (rank 15.67) variants. Similarly, within the Qwen-2.5 family, the 32B version (rank 12.33) ranks notably higher than the 7B (rank 15.33) and 3B (rank 17.00) versions. In addition, GPT-4o (rank 9.33) outperforms GPT-4o-mini (rank 12.00).
- Performance gain of GeoCode-GPT-7B: GeoCode-GPT-7B (average rank 7.33) outperforms its base model Code-Llama-7B (rank 9.50), indicating effective fine-tuning for GEE code generation tasks. However, the improvement is modest, possibly because GeoCode-GPT's training covers a broad range of geospatial code types (e.g., ARCPY, GDAL), diluting its specialization in the GEE-specific domain.
- Category-wise performance: the best-performing general-purpose non-reasoning LLM is DeepSeek-V3 (rank 1.33), the top general-purpose reasoning model is DeepSeek-R1 (rank 4.00), and the best general-purpose code generation model is Qwen-2.5-Coder-32B (rank 8.33).
- Underwhelming performance of the GPT series: GPT-4o (rank 9.33) and GPT-4o-mini (rank 12.00) are both outperformed by models from the DeepSeek, Claude, and Gemini families, as well as by GeoCode-GPT-7B. Even the GPT-series reasoning model o3-mini surpasses GeoCode-GPT-7B by less than one rank.

Figure 10. LLM pass@n ranking bubble chart. The x-axis represents the pass@1 scores, the y-axis the pass@3 scores, and the bubble size corresponds to the pass@5 scores. Colors denote LLM types, as shown in the legend. The bold, underlined number beside each model name indicates the model's average ranking under the pass@1, pass@3, and pass@5 metrics; bold, underlined numbers in red mark the highest-ranking model within each LLM category.

To assess the stability of accuracy for the evaluated LLMs, we performed metric slicing and summarized the results in Table 7. Models with green shading have both P_Rank and C_Rank higher than S_Rank, indicating strong stability: high overall rankings combined with robust consistency. Examples include DeepSeek-V3 and DeepSeek-V3-0324.
Models with orange shading have P_Rank lower than both S_Rank and C_Rank: although these models achieve a high P_Rank, their poor stability leads to lower S_Rank scores. Typical examples include Gemini-2.0-pro, DeepSeek-R1, o3-mini, and QwQ-32B. Most of these are reasoning models, reflecting that poor stability is currently a performance bottleneck for reasoning-oriented LLMs. Models with blue shading have P_Rank higher than both S_Rank and C_Rank: although their P_Rank is not particularly high, they demonstrate good stability and achieve relatively better rankings, making them more robust in scenarios where stability is crucial. Representative models include Claude3.7-Sonnet, Qwen2.5-Coder-32B, GPT-4o, GPT-4o-mini, and Qwen2.5-7B.

Table 7. Ranking of the models under the pass@5, CV, and SA metrics. P_Rank, C_Rank, and S_Rank are the rankings based on pass@5, CV, and SA, respectively. Higher values of pass@5 and SA indicate better performance and a higher ranking, while a lower CV value indicates better performance and a higher ranking. The table is sorted by S_Rank, reflecting the accuracy ranking of the models with stability factored in, rather than accuracy alone. Categories 1, 2, 3, and 4 correspond to General Non-Reasoning Models, General Reasoning Models, General Code Generation Models, and Geospatial Code Generation Models, respectively.

# 6.2. Resource consumption

The evaluation results for resource consumption are presented in Table 8. This study provides visual analyses of token consumption, inference time, and the number of core generated code lines.

Table 8. Evaluation results for resource consumption. For the QwQ-32B model accessed via API, the provider supports only streaming calls, in which token consumption cannot be tracked; it is therefore marked as N/A.
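As a rough illustration of how average token consumption translates into API cost over the full benchmark, the following sketch uses the per-1M-token prices quoted in Section 5.3.2 for two of the models; the token averages passed in are placeholders, not values from Table 8:

```python
# Prices in USD per 1M tokens, as quoted in Section 5.3.2 (April 2025).
PRICE_PER_MTOK = {"DeepSeek-Coder": 0.60, "Qwen2-72B": 0.80}

def run_cost_usd(model: str, avg_tokens_per_case: float,
                 n_cases: int = 1325, n_generations: int = 5) -> float:
    """Back-of-envelope cost of one full benchmark run: average tokens per
    test case, times 1,325 cases, times five generations (Section 5.3.3)."""
    total_tokens = avg_tokens_per_case * n_cases * n_generations
    return total_tokens * PRICE_PER_MTOK[model] / 1_000_000
```

Since reasoning models consume roughly 6 to 7 times more tokens per case (Section 6.2), the same multiplier carries straight through to the run cost.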
The bar chart of average token consumption for GEE code generation across all LLMs is shown in Figure 11. The General Non-Reasoning, General Code Generation, and Geospatial Code Generation categories exhibit broadly similar token consumption, while the General Reasoning models consume significantly more tokens, roughly 6 to 7 times the average of the other three categories. This provides a useful reference for estimating token-based billing costs when selecting a model: for the same GEE code generation task, General Reasoning models will incur 6 to 7 times the cost of the other three categories.

Figure 11. Average token consumption across LLMs.

The lollipop chart of inference time for GEE code generation across LLMs is shown in Figure 12. In terms of inference method, models accessed via API calls (circles) exhibit longer inference times than locally deployed models (squares), likely due to network latency and limits on remote computing resources. From a model category perspective, general reasoning models (orange) generally require more inference time than other types. However, o3-mini is an exception: its inference latency is even lower than that of the locally deployed DeepSeek-Coder-V2, suggesting that its server-side computational resources have been optimized accordingly. In addition, the average inference time per unit test case for DeepSeek-R1 and QwQ-32B reaches 78.3 seconds and 44.68 seconds, respectively, 2 to 40 times longer than other models, indicating that these two models urgently need targeted optimization of inference latency.

Figure 12. Average inference time comparison of LLMs

Figure 13.
Average lines of generated GEE code per model

The token consumption metric reflects not only the length of the generated code but also the model's reasoning output and the length of the prompt template, so it captures reasoning cost more than the actual size of the generated code. To measure the structural length of the output code more accurately, we excluded prompt- and reasoning-related content and used the total number of generated lines of code (including both comments and executable lines) as the metric. The results are shown in Figure 13. GeoCode-GPT-7B (average 11.79 lines), DeepSeek-Coder-V2 (10.06), Qwen2.5-Coder-3B (9.11), and Claude3.7-Sonnet (8.98) rank among the highest in code length. This may be attributable to excessive generated comments or to more standardized code structures that automatically include formal comment templates, increasing the overall line count. A noteworthy phenomenon also appears within the Qwen2.5-Coder family: models with larger parameter sizes tend to generate shorter code. For example, Qwen2.5-Coder-32B averages 5.79 lines, significantly shorter than its 7B (7.06) and 3B (9.11) versions. This contradicts conventional expectations and may suggest that larger models have stronger code compression and refinement capabilities, or that their output formatting was subject to stricter constraints and optimization during training.

# 6.3. Operational efficiency

The operational efficiency results for each model are presented in Table 9.

Table 9. Evaluation results for operational efficiency

Since efficiency metrics are ratios, they lack direct interpretability or an intuitive real-world meaning.
To address this, we independently ranked the three efficiency metrics, Token Efficiency (Tok.-E), Inference Time Efficiency (In.T-E), and Code Line Efficiency (Co.L-E), denoted T_Rank, I_Rank, and Co_Rank, respectively. Their average was then calculated to produce an overall efficiency ranking, denoted E_Rank. In addition, we cross-referenced P_Rank (the ranking based solely on accuracy, from Table 7) and S_Rank (the ranking based on stability-adjusted accuracy) to perform a comparative slicing of the ranking indicators. Based on this analysis, we constructed Table 10 to provide a comprehensive evaluation of each model's overall performance. Note that P_Rank reflects pure accuracy, E_Rank captures performance when efficiency factors are considered, and S_Rank reflects accuracy adjusted for stability. Finally, we define the average of these three rankings as the Total Performance Ranking, denoted Total_Rank. According to the results, DeepSeek-V3, Gemini-2.0-pro, and DeepSeek-V3-0324 consistently rank at the top across all three dimensions and demonstrate excellent overall performance. All three are commercial models, making them suitable for API-based deployment. In contrast, models such as Code-Llama-7B, Qwen2.5-Coder-32B, and GPT-4o do not rank as highly on P_Rank and S_Rank, but their strong E_Rank performance makes them well suited to local deployment (the first two) or to scenarios requiring high generation efficiency (GPT-4o). By comparison, although models such as DeepSeek-R1, GeoCode-GPT-7B, o3-mini, and Claude3.7-Sonnet perform well in accuracy and stability, their low E_Rank scores lead to less favorable overall rankings, indicating a need to improve generation efficiency to optimize total performance.

Table 10. Rank-based comparative evaluation of models. The table is sorted by Total_Rank in ascending order.
If models share the same average rank, they are assigned the same ranking (e.g., DeepSeek-V3 and Code-Llama-7B are both ranked 1 in E_Rank). Blue highlights indicate the top 12 models in E_Rank, green the top 12 in S_Rank, and orange the top 12 in P_Rank. Gray highlights mark the bottom 6 models across E_Rank, S_Rank, and P_Rank. Categories 1, 2, 3, and 4 correspond to General Non-Reasoning Models, General Reasoning Models, General Code Generation Models, and Geospatial Code Generation Models, respectively.

# 6.4. Error type logs

The types of errors encountered by each model during GEE code generation are summarized in Table 11 and reveal a broadly consistent error pattern across models. Parameter errors occur at a significantly higher rate than invalid answers, while syntax errors and network errors appear only sporadically and at extremely low frequencies. This suggests that the core challenge models currently face in GEE-based geospatial code generation is a lack of domain-specific parameter knowledge, including references to platform-integrated datasets, band names, coordinate formats, and other geoscientific details. There is thus an urgent need to augment training data with domain-relevant knowledge specific to the GEE platform and to apply targeted fine-tuning. Meanwhile, the models demonstrate strong stability in basic syntax, code structure, and loop control, with related errors extremely rare, indicating that their foundational programming capabilities are largely mature. Future optimization should therefore shift toward enhancing domain knowledge rather than further reinforcing general coding skills.

Table 11. Error type distribution in GEE code generation across models

# 6.5.
Key findings and insights

Based on the AutoGEEval evaluation framework, this study systematically assessed the overall performance of 18 LLMs in geospatial code generation tasks across four dimensions: accuracy, resource consumption, operational efficiency, and error types. The main findings are as follows:

- In terms of accuracy, multiple rounds of generation help alleviate hallucination and improve output stability. However, a diminishing marginal return is observed: pass@3 shows a significantly larger improvement than pass@5, indicating that model optimization should focus on improving the first few generations. Based on the CV and SA metrics, DeepSeek-V3 and Gemini-2.0-pro achieve a good balance between accuracy and stability.
- In terms of resource consumption, general-purpose reasoning models (e.g., DeepSeek-R1, QwQ-32B) consume significantly more tokens and inference time than other models, resulting in high computational cost and slow response, with average generation times 2 to 40 times longer than those of other models. These models urgently require latency optimization. In contrast, general non-reasoning models and code generation models offer better cost-performance and are more suitable for high-frequency usage scenarios.
- In terms of operational efficiency, considering tokens, time, and code structure, DeepSeek-V3, Gemini-2.0-pro, and Code-Llama-7B stand out, showing significant cost-effectiveness. Some models (e.g., GeoCode-GPT-7B) achieve acceptable accuracy but suffer from low efficiency, limiting their practical applicability in production environments.
- Error type analysis reveals that parameter errors are the most frequent, while syntax and network errors are rare. This indicates that most models have already achieved mature capabilities in basic syntax and code execution.
However, the lack of domain-specific knowledge required by the GEE platform (e.g., dataset paths, band names, coordinate formats) remains a major weakness, highlighting the need for targeted fine-tuning on domain-specific data.
- Performance varies significantly across models. The DeepSeek family performs exceptionally well overall, with DeepSeek-V3 ranking first across multiple metrics and demonstrating excellent stability and generalizability. In contrast, DeepSeek-Coder-V2 ranks lowest, revealing adaptability differences even within the same model family. GeoCode-GPT shows modest improvement over its base model but lacks clear advantages on GEE tasks, suggesting the need for more focused training. The GPT series delivers average performance, clearly outperformed by the DeepSeek, Claude, and Gemini families.
- Model size is not the decisive factor in performance. Several cases demonstrate that "bigger is not always better": for instance, Qwen2.5-Coder-32B outperforms its 7B and 3B versions in accuracy and efficiency, but its code structure is less concise, and its stability is inferior to some smaller models (e.g., Claude3.7-Sonnet). This suggests that performance on specific tasks depends more on fine-tuning quality, instruction alignment, and output formatting than on model size alone.

Finally, model selection should be guided by the overall ranking indicator (Total_Rank), which integrates accuracy (P_Rank), stability (S_Rank), and efficiency (E_Rank).
From this perspective, models with high accuracy and high efficiency, such as DeepSeek-V3, are ideal for high-performance, high-frequency production API deployment; those with high accuracy and strong stability, like Claude3.7-Sonnet, are better suited for scientific and engineering tasks requiring output consistency; models offering high efficiency and support for local deployment, such as Code-Llama-7B and Qwen2.5-Coder-32B, are appropriate for edge computing or cost-sensitive batch generation scenarios; whereas reasoning models like DeepSeek-R1 and QwQ-32B, despite their accuracy, are less suitable for latency- or cost-constrained applications due to their low efficiency and limited stability.
Geospatial code generation is emerging as a key direction in the integration of artificial intelligence and geoscientific analysis. However, there remains a lack of standardized tools for automatic evaluation in this domain. To address this gap, we propose AutoGEEval, the first multimodal, unit-level automated evaluation framework for geospatial code generation tasks on the Google Earth Engine (GEE) platform powered by large language models (LLMs). Built upon the GEE Python API, AutoGEEval establishes a benchmark suite (AutoGEEval-Bench) comprising 1325 test cases that span 26 GEE data types. The framework integrates both question generation and answer verification components to enable an end-to-end automated evaluation pipeline, from function invocation to execution validation. AutoGEEval supports multidimensional quantitative analysis of model outputs in terms of accuracy, resource consumption, execution efficiency, and error types. We evaluate 18 state-of-the-art LLMs, including general-purpose, reasoning-augmented, code-centric, and geoscience-specialized models, revealing their performance characteristics and potential optimization pathways in GEE code generation. This work provides a unified protocol and foundational resource for the development and assessment of geospatial code generation models, advancing the frontier of automated natural language to domain-specific code translation.
# 1 Introduction

The Robot Operating System (ROS) [15] is an increasingly popular framework for developing robotic applications. Its second major version, denoted ROS 2, was designed to meet the needs of industrial use cases, including support for real-time execution. ROS 2 supports writing applications in different programming languages by providing so-called client libraries. Out of the box, ROS 2 provides client libraries for C++ and Python; other languages, such as Rust, have various levels of community support. Despite its careful design, early versions of ROS 2 did not deliver optimal timing predictability [3]. The discovered problems have since been fixed, and researchers started developing response-time analysis techniques for various configurations of ROS 2 applications [22, 2, 10, 23]. All these techniques assume the use of the C++ language, which is officially supported by the framework and is considered mature and suitable for real-time applications. However, even C++ support in ROS is not without problems. Teper et al. [24] discovered that the ROS multi-threaded executor is not starvation-free, making it unsuitable for analysis with existing techniques. Another problem often associated with C++ is its complexity. Writing reliable C++ applications is difficult even for professionals, and even more so for less experienced users. That is one of the reasons why the Rust programming language is gaining popularity. Its compiler can detect many types of common C++ errors, such as race conditions, at compile time. ROS supports Rust via several community-provided libraries. Among the most popular are rclrs from the ros2_rust project [16] and R2R [5]. The former provides an API similar to its C++ counterpart rclcpp, and the latter offers a so-called asynchronous (async for short) API, which allows multiplexing the execution of concurrent tasks in a single operating system (OS) thread.
While rclrs has received some contributions from core ROS developers, R2R was developed independently as part of the Sequence Planner framework [6]. Given the potential of the Rust language and its increasing popularity, it is essential to understand how it can be used to develop real-time robotic applications. In this paper, we look at how the async R2R library schedules and executes the code of ROS 2 applications and compare that with the approaches used in the C++ ROS client library. Since async Rust applications offer high flexibility in how the application is executed, we propose a particular structure suitable for real-time applications that utilizes a thread prioritization and callback-to-thread mapping scheme. We evaluate this structure by measuring end-to-end latencies in a synthetic application as well as in a more complex autonomous driving case study. With the synthetic application, we empirically compare different application structures utilizing different prioritization schemes and different async Rust runtime libraries, and compare them with the C++ language. Our proposed structure achieves bounded response times for time-critical tasks, which is more suitable for real-time applications than the structures appearing in R2R documentation and example code. This opens the way for future work to either adapt existing response-time analysis techniques or design new ones to target R2R applications using our structure. The specific contributions of the paper are:

1. We analyze the execution model of the async Rust R2R library and of several async Rust runtimes and compare them with the execution model of C++ ROS applications.
2. We propose how to structure R2R applications to be suitable for deterministic real-time operation.
3. We demonstrate that the response times of the synthetic application match the theoretical results of a uni-processor response-time analysis.
Moreover, a more complex autonomous driving case study shows that deterministic timing is maintained even in processing chains involving more than two nodes and running concurrently with other chains.

Figure 1 The layered architecture of ROS 2: the application layer (C++ or Rust ROS nodes), the client layer (rclcpp for C++; R2R with a Rust async runtime such as Tokio or futures), the ROS 2 client library (rcl, C API), the abstract DDS layer (ROS middleware interface, rmw), the implementation layer (Fast DDS, Cyclone DDS, iceoryx, or Zenoh), and the operating system layer (Linux, Windows, macOS, or an RTOS).

# 2 Background

This section describes the basic concepts of ROS 2, followed by an introduction to async programming in Rust and a description of scheduling execution in Rust async runtimes. Then, we compare features supported by the official C++ client library rclcpp and two Rust client libraries, R2R and rclrs. Finally, we describe the execution model of the R2R library and compare it with the model of C++ ROS executors.

# 2.1 ROS 2

ROS 2 applications are composed of nodes, which can communicate with each other in a publish-subscribe manner. A ROS node is an organizational unit for other entities such as timers, publishers and subscriptions. In this paper, we assume the traditional model with one ROS node per OS process, but ROS 2 also supports another model where multiple so-called composable nodes run in a single process. A publisher allows sending messages to the associated topic, and all subscriptions to the same topic then receive the message. Reception of a message or expiration of a timer results in the invocation of an associated application callback. When the callbacks are executed depends on the executor associated with them. The C++ client library rclcpp provides a single-threaded executor, a multi-threaded executor, and an experimental events executor.
Besides publish-subscribe communication, ROS 2 applications can also communicate via services and actions, which are internally built on top of publishers and subscribers. We do not explicitly address them in this paper; however, their callback scheduling and timing-related aspects should not differ much from those of publishers and subscribers. Internally, the ROS 2 implementation uses a layered architecture depicted in Figure 1. The figure also shows the main differences between C++ and Rust R2R nodes, which will be detailed below. The communication between ROS nodes is handled by the ROS middleware (RMW) library, which supports different implementations of the actual communication services. Currently, ROS 2 provides several implementations of the Data Distribution Service (DDS) standard [14], with Fast DDS [9] being the default, and the latest ROS 2 release, Jazzy Jalisco [15], adds experimental support for the Zenoh protocol [8]. ROS client libraries can receive information about events occurring on entities, e.g., timer expirations or receptions of messages on subscriptions, via the wait set data structure, which allows the client thread to wait for multiple entities simultaneously. After waiting, the wait set reports which entities in the set are ready, i.e., on which one or more events occurred. The ready entities are reported in the same order in which the entities were added to the set. The process of obtaining ready entities is called sampling. After a subscription is ready, its received message can be obtained from the RMW with an operation called take, which is implemented by the rcl_take function.

# 2.2 Rust & asynchronous programming

Similarly to C++, Rust is a compiled programming language offering higher levels of abstraction than the C language while still giving the programmer detailed control over the usage of resources such as execution time and memory.
An important difference from C++ is that the Rust compiler can guarantee memory safety, meaning that many classes of memory-related errors (race conditions, use-after-free) are detected and prevented at compile time. The Rust language has support for asynchronous (async for short) programming, which is a form of concurrency where scheduling decisions are made in the application rather than by the OS, as is the case with thread-based concurrency. To support async programming, the Rust language defines two keywords, async and await, and the Future trait, but leaves the implementation of the trait and related schedulers to independent libraries (crates in Rust terminology) called async runtimes. Popular async runtimes are Tokio and futures (in small caps to avoid confusion with futures objects). Async runtimes work with objects implementing the Future trait, called futures. They represent a future execution of some code, optionally producing a value. Applications can create futures in several ways: by calling a function annotated with the async keyword, by defining an async block, or by implementing the Future trait manually. Futures can be turned into async tasks by registering them with an executor, typically by calling its spawn method. Executors, which are usually provided by async runtimes and are conceptually similar to ROS C++ executors, then schedule and execute async tasks in one or more OS threads. Note that the overhead of async tasks is much lower than the overhead of OS threads; async runtimes are known to easily handle millions of tasks. Futures can also be executed in the currently running task, without spawning a new one, by using the await keyword on them. Async tasks can communicate by sending messages via async MPSC (multi-producer single-consumer) channels. Such a channel is typically implemented by a concurrent FIFO queue that allows asynchronous waiting on the receiving side until there is a message to dequeue.
Technically, waiting is performed by calling a method on the receiving side that returns a future. Note that MPSC channel implementations and APIs can differ between runtimes, but as long as they implement the Future trait, they are often compatible with other runtimes.

# 2.2.1 Execution model of futures runtime

We start our description of the Rust asynchronous execution model with the futures runtime. Its implementation is simple, which allows building deterministic real-time applications on top of it. It allows mapping async tasks to OS threads, but does not control the scheduling parameters of the threads in any way, allowing the application to set them as appropriate.

# 2.2.1.1 Futures local executor

This section describes the current (v0.3.31) behavior of the LocalPool (local in short) executor within the futures crate. The local executor is single-threaded and incorporates three primary data structures: 1) an incoming tasks vector, 2) a linked list of active tasks, and 3) a ready queue implemented by a concurrent linked list. When a future is spawned into the executor, a new async task is allocated in heap memory and added to the incoming vector. The executor repeatedly checks the incoming vector for new tasks and moves them to the list of active tasks, which incurs an additional memory allocation for each task. Subsequently, the task is enqueued into the ready queue to execute the future or set up waiting for it. The executor ensures that each task is present in the ready queue only once at any given time. To execute a ready task, the executor removes the task from the ready queue and then invokes the Future::poll method on it. Whenever an active task (executing or waiting) becomes ready, it is always enqueued at the end of the ready queue.

# 2.2.1.2 Futures thread-pool executor

The thread-pool executor executes tasks in a pool of multiple worker threads.
It uses only the ready queue data structure, which is implemented using the standard library's unbounded synchronous MPSC channel, i.e., a concurrent linked list. When a waiting task is woken, it is added to the end of the ready queue. This operation can involve memory allocation. Threads in the thread pool dequeue ready tasks in FIFO order in a mutually exclusive way and run them. If the queue is empty, the thread waits. The process of adding a running task back to the ready queue (without waiting) differs from the local executor in that the task is not added to the end of the ready queue but continues executing. This might result in starvation of other tasks. For example, if in a thread pool with $N$ worker threads there are $N$ tasks that always become ready during their execution, other tasks will not be executed at all.

# 2.2.1.3 Grouping futures execution

The futures::join!() macro can be used to group multiple futures and wait for the completion of the group as a whole. The effect of joining the futures is that their code will not be executed in parallel. From the point of view of the executor, the group is treated as one task; if one future gets ready, the entire group becomes ready. When the group starts executing, it polls all ready tasks in the group. The effect of this grouping in the executor is shown later in an example with ROS in Figure 4.

# 2.2.2 Tokio.rs runtime

Tokio is a popular multi-threaded runtime. It seems to be designed to maximize throughput rather than time determinism. Its scheduling policies are complex and difficult to understand due to the use of abstraction layers. The scheduling policies are partially described in the documentation [25] and can be outlined as follows: Each worker thread has a LIFO slot, which is used for dequeuing ready tasks at most three times in a row. We believe this improves cache locality while also avoiding starvation. Then, tasks are dequeued from the local ready queue, which can hold up to 256 tasks.
If the worker cannot dequeue tasks from the local or global (shared) ready queues (they are empty), it steals half of the tasks in another worker's local queue. The victim of the theft is chosen as the first worker with a non-empty queue when iterating workers from a random starting position. Such behavior is clearly unsuitable for real-time applications. We evaluate Tokio experimentally in Section 4.

# 2.3 ROS 2 C++ executors

Execution of callbacks in C++ ROS 2 applications is handled by ROS executors. Currently, ROS 2 includes a single-threaded executor, a multi-threaded executor, and an experimental events executor [21]. These are briefly described below. The single-threaded executor first samples the associated entities and then executes callbacks of those that are ready. Callbacks are executed in the same thread and are ordered based on their type: timers first, followed by subscriptions, services, and clients. Callbacks of the same type used to be executed in the order of their registration [2], but this has changed in ROS 2 Jazzy, where the order is no longer predictable. The multi-threaded executor executes callbacks in multiple threads. Callbacks are organized in callback groups, which can be of two types: mutually exclusive or reentrant. The executor threads access the wait set in a mutually exclusive manner; the thread that gets access waits for the events, and once some entities are ready, their callbacks get executed by one or more threads subject to the policy of their callback group. The multi-threaded executor was found not to be starvation-free [24], which is problematic for real-time applications and their analysis. The events executor is a recently added experimental executor that does not use wait sets but pushes events to the executor's event queue directly from DDS callbacks. The main executor thread then dequeues the events and executes the associated callbacks in a loop.
Recently, a multi-threaded version of the Events executor was proposed [13].

# 2.4 Feature comparison of rclcpp, R2R and rclrs

Before looking in detail at the R2R client library and its execution model, we provide a high-level comparison of features implemented in R2R and the other ROS Rust client library, rclrs. As both are community-supported, they lack some features implemented in the C++ client library rclcpp. The comparison of all three libraries is summarized in Table 1 and commented on in more detail below. R2R and rclcpp implement all communication styles offered by ROS, but rclrs has only limited support for actions. Only the action message types are available to rclrs applications. If an application wants to use actions implemented in another node, it has to implement all the action logic and state machines itself. All three libraries support a specific type of communication, built on top of services, that allows working with node parameters. R2R does not yet fully support parameter ranges, which can be used to announce the permissible range of parameter values. Ranges are used by some GUI tools like rqt to provide "sliders" for changing parameter values. Another difference in parameter handling is how the libraries implement parameter locking to prevent concurrent accesses from the middleware and the application. rclcpp leaves locking up to the application, R2R uses a single lock per node, whereas rclrs has one lock for each parameter, potentially causing higher overhead in nodes with a high number of parameters.

M. Škoudlil, M. Sojka and Z. Hanzálek

Listing 1 Registration of a subscription callback in R2R.

```rust
let subscription_future = subscription.for_each(|msg| async move {
    // Callback code for msg processing here
});
executor.spawn(subscription_future);
```
R2R has a feature not available in the other libraries, which simplifies working with parameters by using Rust's derive macro to automatically generate parameter handling code for the fields of an arbitrary structure. With respect to time handling, a drawback of rclrs is the unavailability of timers. Time-based execution of user code has to be implemented using standard Rust means, which prevents the correct function of such nodes with ROS simulated time. Note that simulated time is supported, but only for clocks and not for timers. Another time-related feature is support for tracing, which is being submitted to R2R as a result of this work [19]. Similar functionality is missing in rclrs. The executors supported by R2R are detailed in the next section. Here, we just mention that rclrs supports only a single-threaded executor. Execution in multiple threads can be implemented by using multiple single-threaded executors or by sending work from callbacks to other threads via standard Rust means. Neither Rust client library supports composable and lifecycle nodes. While the latter could be implemented with little effort, the former would require a deeper investigation of ABI compatibility between Rust and C++.

# 2.5 R2R execution model

R2R [5] is one of the ROS 2 client libraries for Rust. Its execution model differs from C++ because callback execution is managed by a Rust async runtime rather than by R2R itself, as shown in Figure 1. R2R is only responsible for "sampling" the events from ROS entities and pushing them to the async runtime via MPSC channels. To sample the events, R2R uses wait sets like the official C++ executors. Sampling is performed by the function Node::spin_once. It creates a wait set with all entities in the ROS node (subscriptions, timers, clients, and services), and then waits on it until one or more entities are ready. Up to this point, the behavior is the same as in C++.
Then, the ready entities are iterated over, received messages are taken (by calling rcl_take) from the RMW, and all events (messages and timer expirations) are pushed to the async MPSC channels associated with their entities. R2R uses bounded asynchronous channels from the futures crate, mapped 1:1 to entities. In the current implementation, the channels have a fixed capacity of 11 events. If a channel is full, new events are dropped, leading to unbounded response time (the dropped callback instances will never be executed). The registration of entity callbacks differs from C++, where the methods for creating timers or subscriptions take the callback as a parameter. In R2R, the corresponding methods return the receiving end of the associated MPSC channel, and the callback is bound to it by spawning an async task created from an async block, which repeatedly awaits the timer expiration or the message from the channel. See Listing 1 for an example. This way, if a subscription channel contains more than one message, the callback is executed consecutively for all messages available in the channel.

Table 1 Feature comparison between rclcpp, R2R and rclrs.

Legend: supported / supported with comment / partially supported / not supported.
1 See https://github.com/ros2-rust/ros2_rust/issues/244, https://github.com/ros2-rust/ros2_rust/pull/423, https://github.com/ros2-rust/ros2_rust/pull/410.
2 Automatic generation of parameter handling code for fields of a structure.
3 Implemented in pull request https://github.com/sequenceplanner/r2r/pull/117, likely to be merged soon.
4 Executors are not a part of R2R, but are provided by asynchronous runtime libraries like futures or Tokio. Hence, the exact types of supported executors depend on the selected library.
Listing 2 R2R setup equivalent to one single-threaded executor in C++.

```rust
local_executor.run_until_stalled(); // initialization
loop {
    node.spin_once(Duration::seconds(1)); // sampling
    local_executor.run_until_stalled(); // execution
}
```

Figure 2 Example of execution of a chain of operations from publishing two messages in one node to processing them with R2R in another node. Horizontal lines represent the different involved threads. The subscriptions in R2R were created in alphabetical order of topics.

The execution of the callbacks is carried out by the executors. An example of the main R2R loop with the futures local executor is shown in Listing 2. The call to run_until_stalled before the loop at line 1 is necessary if we want to avoid memory allocation in the loop. The call initializes the executor by moving the tasks from the incoming vector to the active list, which involves memory allocation. Since the spin_once function has not been called yet, no events were sampled, and no callback code is going to be executed. When calling the spin_once function provided by R2R (line 3), it wakes up the receiving tasks of all ready entity channels: subscriptions first, then timers, etc. Note that this order differs from the C++ executors. The order of waking tasks of the same entity type follows the creation order of the entities in the node. The second call to run_until_stalled inside the loop at line 4 executes the callbacks. Since the local executor schedules tasks in FIFO manner, the tasks run in the order in which they were woken up. In such a setup, each channel will store at most one message because for each ready entity spin_once pushes only a single message to its channel. A subsequent call to run_until_stalled then processes all channels, leaving them empty. Therefore, a channel will never drop messages due to being full. Figure 2 shows the above-described sequence of operations in time.
Besides the R2R thread running the loop from Listing 2 (denoted as Node S), it shows the publisher and DDS threads involved in the process. Note that the callbacks may not always be executed in the FIFO order described above. One such case can happen if spin_once is called multiple times without calling run_until_stalled in between. This is demonstrated in the example in Figure 3. The rightmost column shows in which order the callbacks would be called by a call to run_until_stalled following the spin_once calls. A similar effect can happen if multiple futures are joined as described in Section 2.2.1.3. An example of how the callbacks would be executed when grouping them via the join macro is shown in Figure 4.

Figure 3 Content of the executor FIFO ready queue when two subsequent sampling calls return the same entity (B), producing two messages B1 and B2. In the depicted example, after sampling call 1 (ready entities B, D) the ready queue holds B1, D; after call 2 (ready entities A, B, C) the first subsequent call to run_until_stalled executes the callbacks in the order B1, B2, D, A, C.

Figure 4 Execution order of the join group depends on the waking time of the joined tasks. In variant 1, the group is woken by the waking of task A. In variant 2, task A is not ready, and the group is woken by task C.

Figure 5 The sampling of events of a node's entities cannot be split into subsets in R2R, unlike in C++, but the callback execution can be assigned to different executors.

# 2.6 Comparison of C++ and R2R execution models

The main difference between the C++ and R2R execution models is summarized in Figure 5. R2R does not sample events in executors because waiting on a wait set is a synchronous
blocking operation, which should not be executed from an async task. However, if the R2R application is organized as in Listing 2, the execution is equivalent to using a ROS C++ single-threaded executor, with a minor difference that the priorities of subscription and timer callbacks are swapped. This means that the response-time analysis proposed in [2] can be applied to R2R as is, while taking the difference in priorities into account. To achieve the behavior of a C++ multi-threaded executor with R2R, sampling must be performed in a dedicated thread with the loop shown in Listing 3.

Listing 3 R2R sampling loop to mimic the ROS C++ multi-threaded executor.

```rust
loop {
    node.spin_once(Duration::seconds(1));
}
```

The effect of different C++ callback group configurations can be accomplished as follows: When all callbacks are executed in another dedicated thread with the futures local executor, e.g., by calling just local_executor.run(), this is an analogy to having all callbacks in the same mutually exclusive group. When callbacks are executed by the futures thread-pool executor or the Tokio runtime, this is equivalent to using the ROS C++ multi-threaded executor with each callback in its own mutually exclusive group, i.e., each callback can run concurrently with other callbacks but not with itself. It is possible to combine tasks to create a larger mutually exclusive group by using the join macro.
To achieve the same behavior as the reentrant callback group of the ROS C++ multi-threaded executor, where a callback can be executed concurrently even with itself, the callback code should dynamically create tasks and spawn them to a Rust multi-threaded executor (e.g. futures ThreadPool or Tokio).

# 3 Achieving deterministic real-time operation with R2R

To configure R2R for use in real-time applications, we propose to run the application under the Linux SCHED_FIFO scheduler and structure it as summarized by the following two rules:

- The main thread should run with the highest priority and should sample events from ROS, i.e., it should call Node::spin_once in a loop.
- The callbacks should be executed in lower-priority threads, each running one futures local executor.

An example of the proposed structure is given in Listing 4. The priority of the main thread is set on lines 14–18, before creating the ROS context and node on lines 20 and 21. A single subscriber is then created on line 23, followed by associating it with a callback implemented as an async move block (lines 24–26). Line 27 spawns a new OS thread with the given priority. The code executed by the thread is a closure defined on lines 5–10. It creates a new local executor (line 6), spawns a new async task onto it from the passed future (line 8) and runs the executor loop (line 9). Running the main thread with the highest priority serves two purposes. First, RMW/DDS threads, which are created internally during ROS initialization, inherit the same priority and, therefore, will never be delayed by callback execution.
Second, sampling of entities for events will happen almost immediately after the event happens, without waiting for any callback to complete. Execution of callbacks in lower-priority threads ensures that one can use a schedulability analysis that does not need to take ROS specifics into account and depends only on the scheduling policy of the OS scheduler. In Section 4, we demonstrate this by running each callback in a thread pinned to a single CPU with its priority set according to the rate-monotonic ordering, and we use uni-processor response-time analysis [1] to predict the response times.

Listing 4 Proposed structure of a real-time R2R application

```rust
 1 fn spawn_in_thread(future: impl Future, priority: ThreadPriority) {
 2     let thread = ThreadBuilder::default()
 3         .policy(RealTime(Fifo))
 4         .priority(priority)
 5         .spawn(move |_| {
 6             let mut local_executor = executor::LocalPool::new();
 7             let spawner = local_executor.spawner();
 8             spawner.spawn_local(future).unwrap();
 9             local_executor.run();
10         });
11 }
12
13 fn main() -> Result<(), Box<dyn Error>> {
14     thread_priority::unix::set_thread_priority_and_policy(
15         thread_priority::thread_native_id(),
16         ThreadPriority::try_from(MAIN_PRIORITY)?,
17         RealTime(Fifo),
18     )?;
19
20     let ctx = r2r::Context::create()?;
21     let mut node = r2r::Node::create(ctx, "example", "")?;
22
23     let subs = node.subscribe("/topic", QosProfile::default())?;
24     let future = subs.for_each(move |msg: Msg| async move {
25         // process msg
26     });
27     spawn_in_thread(future, ThreadPriority::try_from(CALLBACK_PRIORITY)?);
28
29     loop {
30         node.spin_once(SPIN_TIMEOUT);
31     }
32 }
```
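The uni-processor response-time analysis [1] mentioned above computes, for each task i of a task set sorted by decreasing priority, the least fixed point of R_i = C_i + Σ_{j<i} ⌈R_i/T_j⌉ · C_j. A minimal std-only sketch of this recurrence follows; the task parameters in the usage example are hypothetical and are not the benchmark parameters from Table 2.

```rust
// Classical uni-processor response-time analysis for fixed-priority tasks.
// `tasks` holds (execution time C, period T) pairs sorted by decreasing
// priority; times are in arbitrary integer units. Returns None when the
// fixed-point iteration exceeds the deadline (taken equal to the period).
fn response_time(tasks: &[(u64, u64)], i: usize) -> Option<u64> {
    let (c_i, t_i) = tasks[i];
    let mut r = c_i;
    loop {
        // Interference from all higher-priority tasks j < i.
        let interference: u64 = tasks[..i]
            .iter()
            .map(|&(c_j, t_j)| ((r + t_j - 1) / t_j) * c_j) // ceil(r / T_j) * C_j
            .sum();
        let next = c_i + interference;
        if next > t_i {
            return None; // task is not schedulable
        }
        if next == r {
            return Some(r); // least fixed point reached
        }
        r = next;
    }
}
```

For example, with three hypothetical tasks (C, T) = (10, 40), (15, 60), (20, 100), the computed response times are 10, 25 and 55, all below the respective periods.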
Note that in the current R2R implementation, where all MPSC channels have the same fixed capacity, a channel served by a callback in a low-priority thread could overflow, leading to message losses. Therefore, for reliable operation in all situations, R2R should be extended to make the channel capacity configurable, and one should use a schedulability analysis to dimension the capacity of the individual channels.

M. Škoudlil, M. Sojka and Z. Hanzálek

Table 2 Parameters of publishers and subscriptions in the benchmarking application

Figure 6 ROS application used for evaluation with N = 5

# 4 Experimental evaluation

To evaluate the proposed application structure and compare it with other alternatives, we designed a set of synthetic benchmarks. These are described in Section 4.1. Evaluation on a more complex autonomous driving application follows in Section 4.2.

# 4.1 Synthetic benchmarks

To evaluate the proposed application structure and compare the response times obtained with different setups, we develop a benchmark application composed of five topics (see Figure 6). Messages are published periodically to the topics by a publisher node running in a dedicated process. The second node subscribes to the topics and executes callbacks with a specific fixed execution time. The publication periods and callback execution times are given in Table 2 and lead to a CPU utilization of 90%. We implement the subscribers in several variants using different execution strategies. The implemented variants are attached to the paper. We run the application and trace its execution with the help of the LTTng toolkit. For the C++ variant, we reuse the tracepoints already present in the ROS libraries; for the R2R variants, we add tracepoints to R2R and also to the subscription callbacks. The collected traces allow the measurement of intermediate or end-to-end latencies of individual messages and callback executions in the application. From these, we then calculate various statistics.
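The channel-overflow concern above can be demonstrated with a std-only sketch using a bounded channel in place of R2R's internal MPSC channels; the capacity of 2 is chosen only for the demo and does not reflect R2R's actual fixed capacity.

```rust
use std::sync::mpsc::sync_channel;

// When the subscriber thread is too slow to drain its channel, further
// non-blocking sends fail, i.e. a message from the middleware side would be
// lost until the callback frees a slot. Returns the success of each try_send.
fn overflow_demo() -> (bool, bool, bool, bool) {
    let (tx, rx) = sync_channel::<u64>(2); // bounded capacity of 2 (demo value)
    let first = tx.try_send(1).is_ok();
    let second = tx.try_send(2).is_ok();
    let third = tx.try_send(3).is_ok(); // fails: the channel is full
    rx.recv().unwrap(); // the callback consumes one message, freeing a slot
    let retry = tx.try_send(3).is_ok(); // now succeeds
    (first, second, third, retry)
}
```

This is exactly the failure mode a schedulability-driven capacity dimensioning would rule out: the capacity must cover the maximum backlog accumulated during the callback's worst-case response time.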
# 4.1.1 Execution environment

All experiments are executed on a laptop with an AMD Ryzen 7 3700U CPU running Ubuntu 24.04 with Linux kernel 6.8.0. The ROS distribution used is ROS Jazzy Jalisco. Processes that need their main thread to run with real-time priority (SCHED_FIFO) were started via the chrt command so that the RMW/DDS threads inherit the priority and policy from it. Other threads' priorities are set by calling pthread_setschedparam, either directly in C++ or through the Rust thread-priority crate. All threads in all experiments were run on a single isolated CPU core (hyper-threading is enabled, but both hardware threads are isolated; only one of them is used for the experiments) by using the isolcpus kernel command line argument. CPU affinity of the threads was set by starting the programs via the taskset command.

# 4.1.2 Publisher node implementation

The publisher node is implemented with R2R and the Tokio runtime. Publications are triggered by absolute-time timers implemented via the Linux timerfd system call and connected to Tokio. Each timer callback is an async task that publishes a message containing a single 64-bit integer. All threads are scheduled by the real-time SCHED_FIFO scheduler. The priority of the main and RMW/DDS threads is set to 25; the single Tokio worker thread has a priority of 24.

# 4.1.3 Implementation variants of the subscriber node

We implemented the following variants of the subscriber node that we compare. Below, we provide their names together with a short description.

- futures: R2R-based subscriber node with a futures local executor running interleaved with the spin_once function in the same thread, as in Listing 2.
- futures-join: R2R-based subscriber node with a futures thread-pool executor. spin_once runs alone in the main thread. All callback tasks are joined together; the single resulting task runs in the thread pool.
- futures-rt: The variant proposed in Section 3, i.e., an R2R-based subscriber node with each callback running in a separate thread, in a futures local executor, with real-time priority set according to the rate-monotonic (RM) priority assignment (the smaller the period, the higher the priority). The spin_once function is called in a loop in the main thread with the highest priority.
- futures-thread-pool: R2R-based subscriber node with a futures thread-pool executor. spin_once runs in the main thread, and the thread pool running the callbacks has two threads.
- futures-2-threads: R2R-based subscriber node in which the spin_once function is called in a loop in the main thread with the highest priority. The second thread executes all callbacks in a futures local executor.
- rclcpp-rt: C++ subscriber node with each subscriber executed by its own single-threaded executor running on a dedicated thread with fixed real-time priority (RM). This is a scheme similar to the one proposed in [12].
- rclcpp-st: C++ subscriber node with the default single-threaded executor (started by rclcpp::spin(node)) for all callbacks.
- tokio: R2R-based subscriber node using Tokio to execute the callbacks. All threads in the process inherit the same priority from the main thread (Tokio's default). The spin_once function is executed in a loop on a separate dedicated thread. Callbacks are executed by Tokio's async runtime.
- tokio-rt: R2R-based subscriber node using Tokio to execute the callbacks. The spin_once function is executed in a loop on a separate dedicated thread. The worker threads running the callbacks all have the same priority, lower than that of the spin thread.

In all variants mentioned above, the main thread, along with all DDS threads, has priority 21, and if the callback-running threads have different priorities, they use priorities from 20 down to 16, respectively. Tokio workers in tokio-rt have priority 20. All threads are executed with the SCHED_FIFO scheduler.
The only exceptions are the variants prefixed with nort, in which the subscriber node with all its threads is scheduled by the default scheduler (SCHED_OTHER). The callback work is emulated by a loop that executes for the given time. The elapsed time is measured by calling clock_gettime() with clock type CLOCK_THREAD_CPUTIME_ID. Subscriptions are created with the ROS quality of service set to keep the last 100 messages, because if we kept only one message, subscriptions with a response time greater than their period would lose messages. The value 100 was chosen because it is greater than what is actually needed, so as not to lose any message.

# 4.1.4 Synthetic benchmark results

We executed all implemented variants for 20 seconds, leading to the publication of 100 to 2000 messages to the respective topics. All experiments were executed 10 times, and the standard deviations of the different runs were calculated and reported in the graphs with error bars. The 99th percentiles of the measured end-to-end latencies (from the publication of a message to the end of callback execution) are reported in Figure 7. The graph also shows the publication periods and the expected worst-case response time calculated with a classical uni-processor response-time analysis (RTA) [1]. As can be seen, our proposed futures-rt achieves the same results as the C++ rclcpp-rt, and the results of both variants match the theoretical worst-case response time calculated by the RTA. Only topic 5, running in the lowest-priority thread, exhibits higher latencies than expected, which is caused by the overheads of the sampling and RMW/DDS threads, which are not considered by our RTA. Note that futures-rt and rclcpp-rt are the only variants where all callbacks manage to complete before the deadlines given by the publication periods. All other implemented variants fail to meet the shorter deadlines of topics 1–3.
This is expected for the default configurations of Rust async runtimes, which are not able to take the timing requirements of individual callbacks into account. This is especially visible for the nort-tokio variant, which has the highest latencies for three of the five topics. To investigate the effect of RMW/DDS thread priority, we compare the end-to-end latencies of our proposed futures-rt with and without giving the RMW/DDS threads the highest priority. The results in Figure 8 show that without setting the priority of the RMW/DDS threads higher than the priorities of the callback threads, the callbacks fail to complete before the deadlines given by the publication periods.

# 4.2 Complex autonomous driving application

To evaluate the suitability of R2R for implementing more complex real-time applications, we used it to implement a simplified version of the Automated Lane Keeping System (ALKS) – a system that can automatically drive a car on highways at speeds up to 130 km/h [26]. ALKS is the first autonomous driving system legally allowed in Europe that reaches SAE Level 3 of automation [17], meaning that driver supervision is not required.

Figure 7 End-to-end latency (99th percentile) of various subscriber implementations. Black error bars represent standard deviations.

Figure 8 Comparison of end-to-end latency with and without setting the real-time priority of RMW/DDS threads. Left: Histogram of latencies of the highest-rate topic 1. Right: 99th percentile of latencies of all topics.

Figure 9 Architecture of the complex application when using the CARLA simulator or a real car. Red arrows represent the odometry chain evaluated below.

Our ALKS implementation can drive a real Porsche Cayenne car by connecting to its FlexRay buses and communicating with the onboard control units. However, in this paper, we evaluate the version running against the simulated vehicle in the CARLA simulator [7].
The reason is that this version contains three ROS nodes written using R2R, whereas the real-car version has only two R2R-based nodes. The high-level architecture of the application is depicted in Figure 9. The first R2R node is the CARLA FlexRay adapter, which communicates with the CARLA simulator via its API (by using the carla-rust bindings) and ROS topics. It publishes the information from the simulator to ROS in the same format as used when interfacing the real car via the C++-based FlexRay bridge node. The published information consists of eleven topics, which include quantities like speed, steering wheel angle, GPS position, odometry, Inertial Measurement Unit (IMU) data, information about other visible vehicles, and detected road lines. Most of the quantities are published with a frequency of 50 Hz, and the information about vehicles and lines with a frequency of 25 Hz. Producing the road line messages involves the execution of a non-trivial curve-fitting algorithm for the conversion of the simulated lines to the parametric spline representation used on the FlexRay bus. In the opposite direction, these nodes convert the control commands for acceleration/braking and steering received via ROS to the simulator requests or the FlexRay messages. The implementation of ALKS resides in the ALKS ROS node. The information received via the eleven topics from FlexRay is processed in the respective callbacks, where it is converted with simple calculations to the internal representation and stored in memory for later use. The main computation happens in the callback of a 50 Hz timer running in a separate thread, where all the logic and PID controllers are calculated. Besides that, the computationally more demanding Model-Predictive Controller (MPC), which calculates the optimal trajectory, is invoked from the same callback every 300 ms.
At the end of every callback invocation, the node publishes control commands for the vehicle as well as debugging and visualization topics. Topics related to the current trajectory are published whenever a new one is calculated. The last R2R-based node is the FlexRay visualizer. It receives information from the real or simulated FlexRay bus and converts it to rviz markers, i.e., ROS messages that can be visualized in 3D by the ROS rviz tool. It also receives preprocessed odometry information from ALKS, which is used for the visualization. The overview of the threads in the nodes and their SCHED_FIFO priorities is given in Table 3. Only the ALKS node runs callbacks in more than one thread.

Table 3 Overview of threads in R2R nodes of the ALKS application. All threads are scheduled using SCHED_FIFO, and those running callbacks use the run* methods of the futures local executor. 1 99th percentile. † WCET of the odometry callback only. ∗ WCET of run_until_stalled, which can invoke one or more callbacks.

# 4.2.1 Latency evaluation

To show that R2R is suitable for implementing practical real-time applications, we run the above-described ALKS application and trace it with LTTng. As opposed to the synthetic benchmark, here we did not restrict the threads to run on a single CPU core. From the traces, we obtain the end-to-end latencies of the odometry chain (marked with red arrows in Figure 9). This chain goes through all three R2R-based nodes in the application. Specifically, it starts at the reception of the IMU message from CARLA in the FlexRay adapter. In the associated callback, we retrieve other information from CARLA using its API and publish the obtained data to the corresponding ROS topic. The ALKS node then receives (among others) the odometry message, converts it from relative to absolute values, and stores it for later use. It also publishes the converted absolute values for use in the FlexRay visualizer.
The FlexRay visualizer just stores the received odometry values in memory for use in other callbacks. Figure 10 shows histograms obtained from a trace recorded during approximately 3 minutes of execution of the ALKS application. We removed a few initial samples from the trace to filter out outliers from the warm-up phase. The histogram of the measured odometry chain end-to-end latencies is shown in Figure 10a. In about 99% of cases, the latency is below 0.8 ms. The same figure also shows the execution time (duration) of the first callback in the chain (IMU). As this callback communicates with the CARLA simulator over the network, its duration and jitter are dominated by this communication latency, and this propagates further down the chain. Figure 10b shows execution time histograms of the other callbacks in the chain. In the ALKS node, the majority of callback executions exhibit an execution time jitter of about 0.2 ms, which is caused by waiting for the mutexes protecting data shared with the timer callback running in another thread. A few outliers between 0.3 and 2 ms can be caused by the fact that the experiment was executed on a machine connected to the network and running other development tools, which might cause some interference. The jitter of the callback in the FlexRay visualizer node is very low, as the node is single-threaded and does not need to use mutexes in the callbacks. To put this into context, Figure 10c shows the histogram of run_until_stalled call execution times in the ALKS node, whose 99th WCET percentile is mentioned in Table 3.

Figure 10 Histograms obtained from traces of the ALKS application. (a) Histogram of the end-to-end latency of the odometry chain in comparison with the IMU callback duration in the FlexRay adapter, which starts the chain. (b) Histogram of odometry callback durations in the ALKS and FlexRay visualizer nodes. (d) Histogram of timer callback duration in the ALKS node.
The second peak around 8 ms corresponds to the MPC solver invocation every 15th iteration. (c) Histogram of run_until_stalled execution times in the subscriber thread of the ALKS node. This function calls the odometry callback as well as the other subscriber callbacks in the node. Note that due to the higher number of its invocations compared to the odometry callback, its 99th WCET percentile is smaller (0.13 ms) than that of the odometry callback alone (0.16 ms). Figure 10d shows the execution time histogram of the timer callback in the ALKS node. The left peak represents the cases where the MPC planner was not invoked, whereas the peak around 8 ms includes the cases with planner invocations. Note that the end-to-end latency of the odometry chain is influenced by the timer callback only through short critical sections for accessing shared data. The long execution time of the MPC planner has no effect on the end-to-end latency of the odometry chain. This demonstrates that R2R with the futures runtime and the structure proposed in this paper is capable of achieving deterministic end-to-end latencies of a chain of callbacks executing in different nodes.

# 5 Discussion

Designing a real-time ROS application with R2R is not much different from designing a C++ application with rclcpp. One needs to carefully plan the mapping of callbacks to OS threads and assign thread scheduling parameters according to the timing requirements. An advantage of the R2R approach is that the sampling of ROS entities can run in a thread independent of the threads running application callbacks, decreasing the time between samples to almost zero. This allows analyzing callback schedulability independently of ROS sampling. An important difference between C++ and Rust ROS applications is how the application should structure shared data.
While C++ does not impose many restrictions on the structure, Rust requires an explicit association of data with their synchronization primitives. Structuring the data inappropriately causes problems with the Rust borrow checker as well as unwanted blocking of unrelated threads, negatively influencing the timing. An advantage of using the asynchronous programming style is that it can simplify application code, especially when combining data from messages from multiple topics. In such a case, the Rust compiler automatically constructs the state machines that would need to be implemented manually in a synchronous programming style. This simplified the design of our CARLA FlexRay adapter. Another benefit of asynchronous programming is that applications can easily integrate reactions to events from different event sources. For example, ROS can be one event source, and ROS-unaware third-party libraries can provide other sources. Depending on the chosen asynchronous runtime, this can take advantage of efficient event demultiplexing system calls like epoll or io_uring.

# 6 Related work

The study of real-time scheduling in ROS 2 began with Casini et al. [3], who modeled and analyzed mainline ROS 2. Their work led to corrections that enabled the development of formal schedulability analyses. Building on these foundations, researchers proposed schedulability analyses for the C++ single-threaded executor [2] or designed and analyzed a new custom executor [4]. Subsequent investigations into ROS multi-threaded executors [10, 20, 24] not only advanced schedulability analyses but also uncovered implementation flaws in the executor design. Beyond the scheduling in the executors, researchers have approached the latencies of the DDS communication middleware from both experimental [11] and analytical [18] perspectives, providing a more comprehensive understanding of real-time performance in ROS 2 systems.
The increasing popularity of the Rust programming language for building robotic applications with the Robot Operating System (ROS 2) raises questions about its real-time execution capabilities, particularly when employing asynchronous programming. Existing real-time scheduling and response-time analysis techniques for ROS 2 focus on applications written in C++ and do not address the unique execution models and challenges presented by Rust's asynchronous programming paradigm. In this paper, we analyze the execution model of R2R -- asynchronous Rust bindings for ROS 2 -- and of various asynchronous Rust runtimes, comparing them with the execution model of C++ ROS 2 applications. We propose a structured approach for R2R applications aimed at deterministic real-time operation, involving thread prioritization and callback-to-thread mapping schemes. Our experimental evaluation, based on measuring the end-to-end latencies of a synthetic application, shows that the proposed approach is effective and outperforms the other evaluated configurations. A more complex autonomous driving case study demonstrates its practical applicability. Overall, the experimental results indicate that our proposed structure achieves bounded response times for time-critical tasks. This paves the way for future work to adapt existing, or develop new, response-time analysis techniques for R2R applications using our structure.
[ "cs.SE" ]
# I. INTRODUCTION

Accurate breast density classification plays a critical role in assessing breast cancer risk. High breast density has been shown to both obscure tumor detection on mammograms and correlate with an elevated risk of developing breast cancer [10]. As a result, the precise evaluation of breast density is essential for early diagnosis and appropriate clinical management. Manual classification of mammographic density remains a complex and subjective task. Mammograms can be difficult to interpret due to overlapping tissue structures, and assessments often rely heavily on the visual judgment of radiologists. The digitization of medical imaging has opened the door to computational methods capable of reducing variability and improving consistency. Deep learning approaches, particularly convolutional neural networks (CNNs), have emerged as powerful tools. Nevertheless, these models often require large amounts of labeled data and are prone to overfitting, especially in complex domains like mammography. Consequently, a careful balance between automated systems and human expertise is essential for achieving clinically reliable outcomes. Beyond vision-only models, multimodal learning approaches that combine image and text data have gained traction in the medical domain. These models leverage the information available in electronic health records (EHRs) and radiology reports to enhance decision-making. Studies have shown that multimodal AI can outperform unimodal counterparts in a range of biomedical tasks by improving data efficiency and contextual understanding [16]. In the context of breast density assessment, vision-language models (VLMs) offer the opportunity to utilize accompanying clinical text—such as radiologist reports—to improve classification performance and interpretability.
In this work, we address the task of breast density classification according to the BI-RADS (Breast Imaging-Reporting and Data System) density scheme [2], leveraging a dataset of annotated mammographic images and corresponding radiology reports collected from the San José Hospital at TecSalud, Tecnológico de Monterrey, in Monterrey, Mexico. We conduct a comparative analysis of two state-of-the-art approaches: ConvNeXt, a CNN-based deep learning model [9], and BioMedCLIP, a VLM pretrained with token-based textual labels [21]. The main contribution of this work is to compare a VLM and a CNN-based model for the task of breast density classification using paired mammographic images and radiology reports.

# II. RELATED WORK

Breast density has been evaluated through various approaches, including traditional machine learning, image-based deep learning, and more recently, multimodal VLMs. Early approaches to breast density classification relied on traditional machine learning methods such as Support Vector Machines (SVMs), using handcrafted statistical and textural features. High accuracies were reported—up to 97% [3]—and pipelines combining preprocessing with classifiers like random forests further improved performance [11]. However, these methods depend heavily on expert-designed features, which are time-consuming to create and subject to variability [1]. As a result, their generalization in clinical settings remains a challenge. Deep learning has greatly advanced breast density estimation by eliminating the need for manual feature extraction. CNNs have shown strong performance in capturing complex mammographic patterns [6], while transformer-based models have demonstrated potential in related medical imaging tasks [17] thanks to their ability to model global context [1, 7]. However, these models still face limitations, including high data and computational requirements.
ConvNeXt, a refined version of ResNet-50, combines the efficiency of CNNs with the performance of transformers [9]. Its strong results on ImageNet and its ability to leverage transfer learning make it well-suited for medical imaging tasks with limited labeled data. Studies have confirmed its scalability and accuracy in domain-specific applications, including breast density estimation [19, 20]. VLMs have shown strong potential in medical imaging by aligning visual and textual information through large-scale pretraining. CLIP [14] and its medical adaptations—PubMedCLIP [4] and BioCLIP [18]—demonstrated improved performance in medical tasks, with domain-specific pretraining yielding notable gains. In breast imaging, MammoCLIP [5] achieved robust classification and localization of mammographic features, highlighting the promise of VLMs for enhancing accuracy and generalization in clinical applications. BioMedCLIP [21] is a VLM tailored for biomedical applications, pretrained on over 15M image-caption pairs from PubMed Central. It uses a frozen text encoder and a contrastively trained image encoder to align visual features with clinical semantics. This enables effective image embeddings for downstream tasks such as classification, retrieval, and visual question answering, showcasing the model's ability to bridge visual data with domain-specific knowledge. Collectively, these works illustrate the growing relevance of VLMs in biomedical imaging, particularly in settings where annotated data is limited and semantic alignment between text and image enhances task performance. Despite the promising capabilities of VLMs, it remains unclear whether they consistently outperform conventional convolutional or transformer-based vision models in breast density estimation and related clinical tasks.
While VLMs offer advantages in multimodal reasoning and semantic alignment, their effectiveness is highly dependent on the quality and relevance of the pretraining data, as well as on task-specific fine-tuning.

# III. METHODOLOGY

# A. Data preprocessing

This study uses a comprehensive dataset collected from the San José Hospital at TecSalud, Tecnológico de Monterrey, in Monterrey, Mexico. The dataset underwent rigorous data cleaning and labeling procedures to ensure its integrity, following strict security and privacy protocols established by TecSalud. The dataset comprises electronic health records (EHRs) spanning from 2014 to 2019, encompassing 1,160 cases. Each case corresponds to a screening mammography exam and includes two standard mammographic views—mediolateral oblique (MLO) and craniocaudal (CC)—for both breasts, resulting in a total of 4,640 images paired with 1,160 unique text reports. An overview of the processes applied to the original dataset can be seen in Fig. 1. The original radiology reports, written in Spanish, included clinical indications, imaging findings, and a diagnostic conclusion. These reports often contained textual inconsistencies, such as misspellings, vowel substitutions, and irregular spacing. Following the preprocessing methodology proposed by [15], these issues were corrected and the reports were subsequently translated into English. Breast density information was then extracted from the findings section using regular expressions. To ensure consistency, the extracted statements were standardized into four BI-RADS-compliant categories [2]. Reports without a clear density classification were excluded from the final dataset.
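The regular-expression extraction step described above can be sketched as follows. The patterns and their ordering are illustrative assumptions for demonstration, not the exact expressions used in the study:

```python
import re
from typing import Optional

# Illustrative phrase patterns, most specific first, mapped to the four
# standardized BI-RADS-compliant density categories. These are assumptions;
# the study's actual regular expressions are not reproduced here.
DENSITY_PATTERNS = [
    ("Extremely dense", r"extremely\s+dense"),
    ("Heterogeneously dense", r"heterogeneously\s+dense"),
    ("Scattered areas of fibroglandular density",
     r"scattered\s+(areas\s+of\s+)?fibroglandular"),
    ("Fatty predominance", r"fatty"),
]


def extract_density(findings: str) -> Optional[str]:
    """Return the standardized density category, or None (report excluded)."""
    text = findings.lower()
    for category, pattern in DENSITY_PATTERNS:
        if re.search(pattern, text):
            return category
    return None
```

Checking the more specific "dense" phrases before the generic ones avoids mislabeling reports that mention several density-related terms; reports for which no pattern matches return None and would be excluded, mirroring the filtering step above.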
The resulting class distribution was as follows:

• Heterogeneously dense: 1,796 images
• Scattered areas of fibroglandular density: 792 images
• Extremely dense: 788 images
• Fatty predominance: 440 images

Mammographic images were contrast-enhanced via the histogram matching process described in [12] to minimize inter-device variability. The class distribution was initially imbalanced, with Fatty predominance representing the least frequent category, totaling only 440 cases. To mitigate this imbalance, random downsampling was performed across all categories, resulting in a balanced dataset with approximately 450 images per breast density class. This curated dataset serves as the basis for a comparative analysis of ConvNeXt and BioMedCLIP, allowing the evaluation of their respective performance in breast density classification using images and radiological report data.

# B. Trained models

This study conducts a comparative analysis of two state-of-the-art models for breast density classification: BioMedCLIP and ConvNeXt. The goal is to evaluate their performance under consistent experimental conditions using a balanced dataset.

1) BioMedCLIP: Vision-Language Model: In this study, BioMedCLIP is evaluated under two learning scenarios:

• Zero-shot learning: Classification is performed directly using the pretrained model without any additional training on the target dataset. This setting leverages the model's generalization ability from large-scale pretraining.
• Few-shot learning via linear probing: The model's pretrained weights are kept frozen, and a linear classification layer is trained on top of the image embeddings using labeled examples from the breast density dataset. This approach is computationally efficient and requires fewer examples per class compared to full fine-tuning.
Linear probing was chosen over fine-tuning for three main reasons: (1) the dataset is relatively small, (2) linear probing is less computationally demanding, and (3) it aligns with the evaluation setup used in the original BioMedCLIP benchmark experiments [13].

2) ConvNeXt: Vision-Based Model: ConvNeXt is fine-tuned on the breast density dataset using standard supervised learning.

3) Experimental Setup and Evaluation: To ensure a fair comparison between the two models, all experiments are conducted using the same dataset. Each of the four density categories is encoded numerically. Model performance for all experiments is evaluated using standard classification metrics, including accuracy and F1-score. In addition, confusion matrices are generated to provide a detailed view of classification behavior across the four classes.

# C. Experiments

To evaluate the effectiveness of BioMedCLIP and ConvNeXt, we conducted three experiments: zero-shot inference with BioMedCLIP, linear probing with BioMedCLIP, and full fine-tuning with ConvNeXt. Each experiment follows a consistent setup with clearly defined dataset splits, training protocols, and evaluation metrics.

1) Experiment 1: BioMedCLIP Zero-Shot Classification:

• Objective: This experiment evaluates BioMedCLIP’s performance in a zero-shot setting.
• Dataset and Evaluation: Since zero-shot classification does not require model training, the entire dataset is used for inference and evaluation.
• Model Configuration: Mammographic images are presented to the pretrained BioMedCLIP model alongside four textual prompts, each corresponding to one of the breast density categories.
• Training Details: No training or fine-tuning is performed in this setting.
• Evaluation Protocol: The model is evaluated using accuracy, F1-score, and confusion matrix metrics computed over the full dataset.
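Mechanically, CLIP-style zero-shot classification scores an image embedding against one text-prompt embedding per class and picks the highest cosine similarity. A library-free numpy sketch with random stand-in embeddings (BioMedCLIP itself would supply the real ones; the prompt wordings are illustrative):

```python
import numpy as np

def zero_shot_classify(image_emb, prompt_embs):
    """Return the index of the class prompt with the highest cosine similarity."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    sims = txt @ img          # one cosine similarity per class prompt
    return int(np.argmax(sims))

prompts = ["mammogram with fatty predominance",
           "mammogram with scattered fibroglandular density",
           "heterogeneously dense mammogram",
           "extremely dense mammogram"]
rng = np.random.default_rng(0)
prompt_embs = rng.normal(size=(4, 512))   # stand-ins for BioMedCLIP text embeddings
image_emb = prompt_embs[2] + 0.1 * rng.normal(size=512)  # image embedding near class 2
print(zero_shot_classify(image_emb, prompt_embs))  # → 2 (heterogeneously dense)
```

No parameters are updated; classification quality rests entirely on how well the pretrained encoders align images with the class prompts.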
2) Experiment 2: BioMedCLIP with Linear Probing:

• Objective: This experiment investigates the performance of BioMedCLIP in a few-shot learning scenario using linear probing.
• Dataset and Splits: The dataset is split into $85\%$ training and $15\%$ test sets. The training set is further divided into $85\%$ training and $15\%$ validation subsets for hyperparameter tuning and early stopping.
• Model Configuration: The pretrained BioMedCLIP encoder is used as a frozen feature extractor. A linear classification head is trained on top of the image embeddings to predict the four breast density categories. The linear layer is initialized using Xavier initialization.
• Training Details: Training is conducted using the AdamW optimizer with a learning rate of 0.0001, a batch size of 64, and a maximum of 200 epochs. $L_2$ regularization with a weight decay factor of 0.001 is applied to reduce overfitting and improve generalization.
• Evaluation Protocol: Model performance is evaluated on the held-out $15\%$ test set using accuracy, F1-score, and confusion matrices.

3) Experiment 3: ConvNeXt Fine-Tuning:

• Objective: This experiment benchmarks ConvNeXt, a vision-only model, by fine-tuning it end-to-end for breast density classification.
• Dataset and Splits: The dataset is split identically to Experiment 2.
• Model Configuration: A ConvNeXt-Base model pretrained on ImageNet is used. Its final classification head is replaced with a new dense layer adapted to the four breast density classes. The entire network is fine-tuned during training.
• Training Details: Training is performed using the AdamW optimizer with a learning rate of 0.0001, a batch size of 64, and a maximum of 200 epochs. Early stopping halts training if the validation loss shows no improvement for 10 consecutive epochs, with convergence usually occurring around epoch 40.
• Evaluation Protocol: The final model is evaluated on the same $15\%$ test set as BioMedCLIP, using accuracy, F1-score, and confusion matrices for comparison.

# IV. RESULTS

TABLE I: Model performance for breast density classification.

An overview of the results obtained for the three learning scenarios can be seen in Table I.

# A. Zero-Shot Classification

The zero-shot classification approach uses BioMedCLIP to classify each mammogram into one of four categories without additional task-specific training. This approach obtained an accuracy of 0.47 and an F1 score of 0.31.

# B. Linear Probing

Introducing a new layer on top of the frozen BioMedCLIP image encoder significantly improved classification performance, reaching an accuracy of 0.64 and an F1 score of 0.63.

Fig. 3: Confusion matrix for BioMedCLIP with linear probing.

The per-class validation accuracy ranged from 0.51 to 0.83; the category with the highest performance is Fatty predominance, and the most challenging category to identify is Heterogeneously dense, as shown in the confusion matrix in Figure 3.

# C. Fine-Tuning

Fine-tuning the ConvNeXt base model yields the best results among the three learning scenarios, achieving a validation accuracy of 0.73. The validation accuracy per class ranges between 0.58 and 0.82, with the most accurately predicted category being Extremely dense and the most challenging being Scattered areas of fibroglandular density. The validation F1 scores per class range from 0.6 to 0.78, with the highest values obtained for the Extremely dense and Fatty predominance categories and Heterogeneously dense being the most difficult class to identify.

Fig. 4: Confusion matrix for ConvNeXt fine-tuning.

# V. DISCUSSION

Zero-Shot Performance of BioMedCLIP. The zero-shot application of BioMedCLIP achieved an accuracy of 0.47 but suffered from a low average F1-score of 0.31, revealing a significant class imbalance in its predictions.
Despite the advantages of large-scale multimodal pretraining, the model struggled to interpret the specific visual features and terminologies associated with mammographic density. Without domain-specific tuning, BioMedCLIP had difficulty linking mammographic patterns to the corresponding textual descriptions, highlighting a key limitation of using VLMs in specialized medical imaging tasks. This underperformance reinforces the broader challenge of transferring general biomedical representations to specialized diagnostic fields like breast imaging. Consistent with prior research, these results emphasize the need for domain-specific adaptations to optimize performance in medical applications. While zero-shot evaluation can provide a baseline for assessing robustness and generalization, it remains inadequate for critical clinical tasks such as breast density classification. Linear Probing Performance of BioMedCLIP. Training a linear classifier on BioMedCLIP’s pretrained image-text embeddings significantly improved classification performance compared to the zero-shot approach. While the model performed well, it struggled with the Heterogeneously dense class, achieving an F1 score of 0.52. This suggests that while the model’s latent features contain useful information, they may lack the fine-grained specificity needed to differentiate this more ambiguous category reliably. Analysis of the confusion matrix reveals that the model effectively distinguished between the density extremes—Fatty predominance and Extremely dense—with its highest accuracy recorded in the former. However, it had difficulty classifying intermediate categories like Scattered and Heterogeneously dense, which exhibit lower recall due to subtle textural differences. This pattern of confusion reflects known challenges in breast density classification, even for human experts. 
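The per-class accuracies discussed here are the row-normalized diagonal of the confusion matrix (i.e., per-class recall). A minimal numpy sketch on toy labels (not the paper's data):

```python
import numpy as np

def per_class_recall(y_true, y_pred, n_classes):
    """Rows of the confusion matrix = true class; recall = diagonal / row sum."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return np.diag(cm) / cm.sum(axis=1)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(per_class_recall(y_true, y_pred, 3))  # → [0.5 1.  0.5]
```

Off-diagonal mass between adjacent rows is exactly the "intermediate category" confusion described above.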
These findings highlight that while BioMedCLIP’s pretrained embeddings capture relevant semantic features, incorporating a task-specific classification layer through linear probing is crucial for adapting them to the complexities of mammographic image interpretation.

Fine-Tuning Performance of ConvNeXt. The ConvNeXt model, when fine-tuned end-to-end on the breast density dataset, outperformed all other evaluated approaches in terms of accuracy and F1-score. By fully leveraging its feature extraction capacity, ConvNeXt could learn a direct numeric mapping of breast density classes, leading to a more consistent and balanced classification performance than BioMedCLIP’s linear probing strategy. The model particularly excelled in distinguishing the Fatty predominance and Extremely dense categories, where visual features are more pronounced, though it faced challenges with the more ambiguous Scattered and Heterogeneously dense categories. An analysis of the confusion matrix shown in Fig. 4 highlighted that opposing categories, such as Fatty predominance and Extremely dense, were rarely misclassified due to their distinct visual features. However, significant confusion remained between adjacent categories, especially with Scattered, which was frequently mistaken for both Fatty predominance and Heterogeneously dense. ConvNeXt performed best in identifying Fatty predominance, whereas Heterogeneously dense tissues remained the most difficult to classify due to their subtle and overlapping visual characteristics. While ConvNeXt demonstrated strong performance through end-to-end fine-tuning, it still struggled with breast density categories that lie close together on the BI-RADS continuum. Comparisons with BioMedCLIP revealed that both models found distinguishing higher-density classes challenging, but ConvNeXt achieved a higher recall for lower-density categories, particularly Scattered areas.
These findings emphasize the advantages of domain-specific fine-tuning in improving classification reliability and suggest that further architectural enhancements or training strategies may be needed to address remaining classification ambiguities.

Token-Based vs. Numerical Classification: Challenges and Limitations. Multimodal representation learning has shown promise in medical imaging but faces challenges due to data heterogeneity and the complexity of medical terminology [16]. While models like CLIP excel in general computer vision tasks through large-scale image-text pretraining, their effectiveness in specialized medical domains is limited. Zero-shot classification struggles with generic prompts that fail to capture nuanced medical descriptions. Additionally, CLIP’s dual-encoder architecture can introduce representational gaps between visual and textual modalities, reducing diagnostic accuracy [8]. A major barrier to applying VLMs in medical imaging is the lack of large, high-quality annotated datasets for contrastive pretraining. Without sufficient domain-specific data, these models fail to generalize well across different imaging modalities. To address these limitations, researchers emphasize the need for domain-adapted architectures, carefully curated datasets, and improved prompt engineering strategies. Enhancing alignment between medical images and textual descriptions is crucial for improving model performance in clinical applications. One potential solution is the use of descriptive tokens or contextual prompts to refine model attention. Studies suggest that aligning text tokens with specific image regions enhances pathology detection, while token labeling in vision transformers improves classification accuracy. However, balancing token granularity is essential, as overly complex token assignments can increase computational costs without significant diagnostic benefits.
In experiments, BioMedCLIP’s linear probe struggled with mammographic density classification due to insufficiently detailed textual tokens, as minor wording differences failed to create clear semantic distinctions. These findings highlight the importance of carefully engineered prompts and enriched token representations when adapting VLMs to specialized medical tasks.
Mammographic breast density classification is essential for cancer risk assessment but remains challenging due to subjective interpretation and inter-observer variability. This study compares multimodal and CNN-based methods for automated classification using the BI-RADS system, evaluating BioMedCLIP and ConvNeXt across three learning scenarios: zero-shot classification, linear probing with textual descriptions, and fine-tuning with numerical labels. Results show that zero-shot classification achieved modest performance, while the fine-tuned ConvNeXt model outperformed the BioMedCLIP linear probe. Although linear probing demonstrated potential with pretrained embeddings, it was less effective than full fine-tuning. These findings suggest that despite the promise of multimodal learning, CNN-based models with end-to-end fine-tuning provide stronger performance for specialized medical imaging. The study underscores the need for more detailed textual representations and domain-specific adaptations in future radiology applications.
[ "eess.IV", "cs.LG" ]
# 1 Introduction

Large language model (LLM) distillation has become a widely used technique to reduce inference cost while retaining most teacher performance. Early knowledge distillation (KD) methods align student and teacher output logits [1, 2]. Later work shows that matching hidden features [3, 4], attention patterns [5], and using architecture-aware objectives [6, 7] can further close the performance gap between the student and teacher models. Chain-of-thought distillation (CoTD) teaches students to follow step-by-step rationales generated by teachers [8, 9], sometimes using sampled or structured traces to highlight the critical steps [10–12]. Beyond language models, recent efforts have begun to explore how distillation techniques can be extended to LLM-based agents that integrate reasoning with tool use and environment interaction. These efforts vary widely in how they conceptualize agent behavior and what aspect of the teacher they aim to transfer. One line of work trains student agents to imitate reasoning-action trajectories from teacher agents, such as Structured Agent Distillation (SAD) [13] and retrieval-augmented distillation methods [14]. These methods treat agent behavior as interleaved thoughts and tool calls, supervising the student to mimic each step. While effective in capturing execution details, they incur high computational cost and generalize poorly: teachers must construct and process long, complex sequences, and students passively replicate fixed trajectories without learning to adapt. Structure-distillation works such as MAGDi [15] and Sub-goal Distillation [16] are more efficient than trajectory distillation, guiding students with abstracted teacher strategies like subgoal sequences or interaction graphs, but these methods overlook differences in model capability, knowledge boundaries, and tool usage between different models.
To address the limitations of trajectory imitation and structured plan distillation—namely high computational cost and limited adaptability—we propose a lightweight, training-free framework: AgentDistill. Rather than replicating full trajectories or assuming students can execute teacher-defined plans, our approach leverages the inherent strengths of teacher agents in coding and task-solving by utilizing teacher-generated Model–Context–Protocols (MCPs) 1. MCP is an open protocol designed to standardize how context is provided to LLMs. Our framework capitalizes on the teacher agent’s capacity to create self-contained, reusable, and generalizable MCPs tailored to specific task domains. These MCPs encapsulate the problem-solving capabilities of the teacher agent and enable student agents equipped with substantially smaller LLMs (e.g., llama-3.1-8B, Qwen3-8B) to inherit sophisticated, transferable problem-solving skills without additional training. By directly integrating these distilled MCP boxes, student agents significantly enhance their performance and adaptability, effectively bridging the capability gap between teacher and student agents. Consequently, our method offers a scalable, efficient, and low-cost solution for agent distillation, enabling student agents to robustly handle diverse real-world scenarios. We conduct comprehensive experiments on several benchmarks, including biomedical and mathematical tasks, to evaluate the effectiveness of our proposed AgentDistill framework across different domains. These results demonstrate that our approach substantially enhances the adaptability and generalization performance of student agents across diverse settings covered by teacher-generated MCPs, while also reducing inference and training costs. 
To summarize, our key contributions can be highlighted as follows:

• We propose AgentDistill, a novel agent distillation framework that enables student agents to inherit modular, transferable, and interpretable components—Model–Context–Protocols (MCPs)—generated by teacher agents. Unlike prior methods that rely on replaying long sequences of actions generated by the teacher, this approach allows student agents to directly inherit task-solving capabilities from teachers.
• AgentDistill is an entirely training-free framework. It requires no fine-tuning of either the teacher or the student agent. MCPs are automatically extracted, abstracted, and reused without additional gradient updates or handcrafted tool usage. This yields a highly cost-efficient and deployable distillation pipeline with strong student generalization to unseen tasks that can be solved with the distilled MCPs.
• We demonstrate that AgentDistill significantly enhances the problem-solving and generalization performance of student agents on biomedical and mathematical reasoning tasks, effectively narrowing the gap between teacher and student agents with minimal computational overhead. Comprehensive experiments are conducted across biomedical (PathVQA, SLAKE) and mathematical (Game of 24) benchmarks. Our proposed MCP distillation improves performance across all student models—GPT-3.5-turbo, Qwen3-8B, and LLaMA3.1-8B—with detailed gains shown in Table 2.

# 2 Related Works

# 2.1 MCP

MCP is introduced as a standardized two-way interface, enabling language models to securely access real-time external data [17]. MCP Landscape [18] outlines its architecture and identifies key vulnerabilities across its lifecycle. MCIP [19] strengthens security by enforcing contextual integrity checks. Alita [20] leverages MCP to dynamically generate, refine, and reuse tool capabilities via MCPs, enhancing adaptability and multi-agent collaboration.
Together, these works establish a foundation for future research: MCP is essential for developing secure and generalizable agent systems.

# 2.2 Distillation of Large Language Model

Knowledge Distillation. Knowledge distillation (KD) transfers knowledge from a large teacher model to a smaller student model by using teacher-provided soft targets and/or hidden representations. Early methods focus on aligning output probability distributions [1, 2]. Intermediate-layer feature alignment is used in patient distillation and two-stage distillation frameworks [3, 4]. Self-attention matrix distillation captures internal Transformer relationships [5]. Architecture-aware techniques modify network structures and perform joint distillation, as in MobileBERT and GKD [6, 7]. Recent cross-model capability distillation uses large LLM–generated instruction–response pairs to teach smaller open models reasoning skills [21, 22].

Reasoning Distillation. Chain-of-thought distillation (CoTD) methods train a smaller student model to reproduce a teacher’s step-by-step reasoning via teacher-generated rationales and answers. Some approaches fine-tune students on full reasoning chains [8, 9, 23] or on structured/sampled rationales [10, 11], ensuring students learn key reasoning patterns even with limited data. Other techniques focus training on critical steps or enforce faithfulness by sampling/weighting important tokens [12], maximizing mutual information [24], or using contrastive decoding [25]. To preserve core reasoning signals, long chains can be split into shorter chunks [26, 27], or aligned to alternative formats like trees or graphs [28]. Finally, counterfactual distillation improves causal robustness [29], and domain-specialized distillation concentrates on task-specific CoT paths to boost performance on targeted benchmarks [30].

In-Context Learning Distillation.
In-context learning distillation (ICLD) [31–34] trains a smaller student model to internalize a teacher’s few-shot reasoning without requiring full prompts at inference. This has proven effective on benchmarks like NLI and SQL and is now standard in post-training. To enhance robustness, recent work integrates token-level language-modeling objectives [33] or treats few-shot matching as the sole training target [34], guiding students to internalize reasoning patterns.

# 2.3 Distillation of LLM Agent

Trajectory Distillation. Trajectory-level agent distillation trains small models to imitate complete reasoning-action trajectories from large LLM-based agents. Structured Agent Distillation (SAD) [13] segments trajectories into interleaved thought and action spans, training students to reproduce agent-style execution patterns. Distilling LLM Agents into Small Models [14] extends this by including retrieved evidence and code execution results, enabling small models to emulate tool-augmented reasoning. These methods extend CoT distillation to agent settings by preserving not only intermediate reasoning but also tool usage and task decomposition behaviors.

Structure Distillation. Structure-level agent distillation compresses reasoning trajectories into abstract representations such as graphs or subgoal sequences, enabling student models to preserve key task structures without imitating every token. MAGDi [15] encodes multi-agent chats as interaction graphs, allowing student language models to reason over graph structure instead of raw text. Sub-goal Distillation [16] extracts high-level goals from teacher agent trajectories and trains a student agent to predict and carry out the task plan. These methods reduce sequence length while preserving key reasoning patterns.

Action Policy Distillation. Action policy distillation transfers language-based reasoning from LLM agents to lightweight, non-linguistic controllers.
The teacher generates chain-of-thought trajectories in natural language, while the student executes actions directly without text generation. In Language-Oriented to Emergent Communication [35], a language agent trains an emergent-signal policy that communicates via short learned symbols. DeDer [36] converts reasoning traces into state-action pairs to train a small embodied agent for language-free execution.

# 2.4 Generalist and Domain-Specific Agents

Generalist Agent. Generalist LLM agents aim to solve a wide range of tasks with a single unified system, minimizing the need for task-specific supervision. OWL [37] introduces role-based coordination via User, Assistant, and Tool agents to decompose tasks and invoke external tools. AutoAgent [38] offers a modular, zero-code interface driven by natural language. Alita [20] further removes predefined workflows by allowing agents to self-generate MCPs for dynamic coordination. OMNE [39] adds long-term memory to each agent, enabling contextual adaptation across interactions.

Domain-specific Agent. Domain-specific LLM agents have shown strong performance across specialized tasks, motivating systems tailored to fields such as finance, science, engineering, and the humanities. FinRobot [40] targets financial decision-making with coordinated analyst and trader agents. ChemAgent [41] and ClinicalAgent [42] apply domain tools for synthesis planning and medical triage, respectively. AgentCoder [43], AtomAgents [44], and ProtAgents [45] support software engineering, alloy simulation, and protein design through multi-agent collaboration. In the humanities, HistAgent [46] performs historical reasoning by integrating textual and visual information, and EmoAgent [47] addresses mental health concerns by detecting sensitive content and suggesting safe interventions.
These systems rely on complex, domain-specific toolchains and prompts; our work addresses this by distilling reusable Model–Context–Protocols (MCPs) that unify tool use across domains.

# 3 Method

To bridge the capability gap between a teacher agent leveraging large language models (LLMs), such as Claude-sonnet-4 or GPT-4o, and a student agent employing significantly smaller models (e.g., llama-3.1-8B, Qwen3-8B), we introduce a novel agent distillation framework called AgentDistill. The core concept behind AgentDistill is straightforward yet powerful: the teacher agent generates self-contained MCPs during task execution. These MCPs then undergo MCP-Box construction (abstraction, clustering, and consolidation), resulting in an MCP-Box that is integrated into the student agents. This structured distillation process facilitates the transfer and internalization of sophisticated problem-solving skills initially demonstrated by the teacher agent, thereby substantially enhancing the capabilities of the student agent.

Figure 3: Overview of AgentDistill, the training-free agent distillation framework via Model–Context–Protocols (MCPs). The teacher agent with a large language model solves tasks by decomposing them through a Manager Agent and generating task-specific MCPs via open-source search, script generation, and virtual execution. Valid MCPs are abstracted, clustered, and consolidated into a reusable MCP-Box. At inference, the student agent with a small language model leverages this MCP-Box to perform tool-based reasoning without any fine-tuning or trajectory replay. This enables lightweight agents to inherit task-solving capabilities from stronger models efficiently.
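The workflow in Figure 3 can be sketched as a training-free loop: run the teacher, keep MCPs only from successful trajectories, build the box, and mount it on the student. Everything below (function names, the stubbed agents) is illustrative pseudostructure, not the authors' code:

```python
def agent_distill(dataset, teacher, build_box, student):
    """Training-free distillation: no gradient update ever touches the student."""
    pool = []
    for x, y in dataset:
        answer, mcps = teacher(x)      # teacher emits MCP scripts while solving
        if answer == y:                # keep MCPs from successful trajectories only
            pool.extend(mcps)
    mcp_box = build_box(pool)          # abstraction -> clustering -> consolidation
    return lambda x: student(x, tools=mcp_box)  # student with the mounted MCP-Box

# Stub agents so the control flow is runnable (toy parity "task")
def teacher(x):
    return x % 2, ["mcp_parity_check"]
def build_box(pool):
    return sorted(set(pool))
def student(x, tools):
    return (x % 2) if "mcp_parity_check" in tools else None

solve = agent_distill([(4, 0), (7, 1)], teacher, build_box, student)
print(solve(10))  # → 0
```

The only artifact transferred between teacher and student is the MCP-Box itself, which is what makes the pipeline training-free.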
# 3.1 Problem Formulation

Given supervision pairs $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$ and a teacher agent $\pi_T$, we aim to distill teacher-generated MCPs into a self-contained MCP-Box, thereby improving the performance of a student agent $\pi_S$ built on a small language model by supplying it with the MCP-Box. No gradient update is applied to the student agent $\pi_S$:

$$ \nabla_{\theta} \pi_S = 0 . $$

Formally, we define the optimization problem as:

$$ \max_{\mathcal{B} \subset \mathcal{L}} \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \mathbb{I} \left\{ \pi_S(x; \mathcal{B}) = y \right\} \right] , $$

where $\mathcal{L}$ denotes the space of all teacher-generated MCPs, $\mathcal{B}$ is the MCP-Box distilled from $\mathcal{L}$, and $\pi_S(x; \mathcal{B})$ represents the behavior of the student agent when given input $x$ augmented with guidance from the MCP-Box. The indicator function $\mathbb{I}\{\cdot\}$ evaluates to 1 if the student’s output matches the ground truth.

# 3.2 MCP Creation

When solving an input $x_i \in \mathcal{D}$, the teacher agent $\pi_T$ interacts with an environment $\mathcal{E}$, producing a full reasoning trajectory:

$$ \tau_i = (r_1, a_1, o_1, \ldots, r_{L_i}, a_{L_i}, o_{L_i}) , $$

where $r_t \in R$ are reasoning tokens, $a_t \in A$ are action tokens (e.g., tool calls, MCP generation), and $o_t \in O$ are observations from the environment. To better distinguish MCP scripts from the reasoning, we prompt the teacher agent to generate and separate structured, self-contained MCPs during its reasoning process. Within the trajectory $\tau_i$, the teacher may produce one or more MCPs corresponding to distinct subtasks.
For each input example $x_i \in \mathcal{D}$, if the teacher agent generates an MCP at the $j$-th step of its trajectory, we denote this MCP as

$$ \mathrm{MCP}_{i,j} \in \mathcal{L} , $$

where $\mathcal{L}$ is the space of all MCPs extracted across the specific dataset. Each trajectory may yield multiple MCPs depending on the number of tool-related planning steps. Only trajectories where $\pi_T(x_i) = y_i$ (i.e., successful completions) are considered for distillation. We collect $\mathrm{MCP}_{i,j}$ into a temporary pool if the MCP snippet is syntactically correct and executable. The result is a large pool $\mathcal{L} = \{\mathrm{MCP}_{i,j}\}$, which captures a rich but noisy set of tool-use strategies emitted by the teacher agent. These MCPs are then processed into a compact and organized set $\mathcal{B}$, termed the MCP-Box, via abstraction, clustering, and consolidation, as detailed in Section 3.3.

# 3.3 MCP-Box Construction

After collecting all MCPs generated from successful teacher trajectories, we pass them to a high-capacity instruction-tuned LLM (e.g., Claude-Sonnet-4) to form a compact and structured repository called the MCP-Box. This process proceeds in three steps.

(1) Abstraction. For each tool-related MCP segment extracted from correct teacher trajectories, we extract the relevant Python code and prompt the LLM to rewrite it into a reusable and parameterized format, i.e., each raw MCP $\mathrm{MCP}_{i,j}$ is rewritten into a concise, task-agnostic form using prompt-based transformation:

$$ \widehat{\mathrm{MCP}}_{i,j} = \mathrm{LLM}_{\mathrm{abstract}}(\mathrm{MCP}_{i,j}) . $$

The goal is to remove example-specific phrases while preserving generalizable tool-use strategies.
Meanwhile, this process makes up to three critical parameters configurable, while preserving the tool’s core logic.

(2) Clustering. All abstracted $\widehat{\mathrm{MCP}}_{i,j}$ are grouped by functionality via a code-level clustering prompt. The LLM returns cluster assignments based on shared application semantics:

$$ \mathcal{C} = \mathrm{LLM}_{\mathrm{cluster}} \left( \left\{ \widehat{\mathrm{MCP}}_{i,j} \right\} \right) , $$

where each cluster $\mathcal{C}_k$ corresponds to a functional group such as "image utils" or "numeric analysis".

(3) Consolidation. Within each cluster $\mathcal{C}_k$, we instruct the LLM to consolidate all tool implementations into a single general-purpose version. The result is

$$ \mathrm{MCP}_k^{\mathrm{final}} = \mathrm{LLM}_{\mathrm{consolidate}} \left( \{ \widehat{\mathrm{MCP}}_{i,j} \mid \widehat{\mathrm{MCP}}_{i,j} \in \mathcal{C}_k \} \right) , $$

which includes parameter unification, proper validation, and documentation. Each output is a production-ready, FastMCP-compatible Python file. The complete MCP-Box is then defined as

$$ \mathcal{B} = \left\{ (\mathrm{MCP}_k^{\mathrm{final}}, \mathrm{cluster\_name}_k) \right\}_{k=1}^{K} , $$

where each item contains a consolidated tool protocol and its functional label.
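The three steps can be sketched as LLM-prompted transformations over the raw pool; `llm` below is a stand-in for the instruction-tuned model (e.g., Claude-Sonnet-4), and the stubbed prompts are paraphrases, not the paper's:

```python
def build_mcp_box(raw_mcps, llm):
    # (1) Abstraction: rewrite each MCP into a parameterized, task-agnostic tool
    abstracted = [llm("abstract", code) for code in raw_mcps]
    # (2) Clustering: group abstracted tools by shared functionality
    clusters = llm("cluster", abstracted)   # e.g., {"image utils": [...], ...}
    # (3) Consolidation: merge each cluster into one general-purpose tool
    return [(llm("consolidate", members), name) for name, members in clusters.items()]

# Toy llm stub so the control flow is runnable without any model call
def llm(task, payload):
    if task == "abstract":
        return payload.strip()
    if task == "cluster":
        return {"demo_cluster": payload}
    return " | ".join(payload)  # consolidate

box = build_mcp_box(["tool_a()", " tool_b() "], llm)
print(box)  # → [('tool_a() | tool_b()', 'demo_cluster')]
```

In the actual framework, each consolidated entry would be a FastMCP-compatible Python file rather than a string.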
Figure: Example of MCP-Box construction (brain MRI analyzer). Abstraction rewrites a task-specific analyze_brain_mri tool into a parameterized form; clustering groups functionally similar tools (Detect_brain_abnormality, Brain_mri_analyzer, Brain_mri_analysis); consolidation merges them into a single general-purpose @mcp.tool() analyze_brain_mri with configurable region, analysis_mode, and brightness-threshold parameters.

# 3.4 Student Inference with the MCP-Box

Based on the SmolAgents framework [48], we mount the entire MCP-Box $\mathcal{B}$ into the student agent’s tool interface at inference time, without retrieval, reranking, or parameter selection.
Each $\mathrm{MCP}_k^{\mathrm{final}} \in \mathcal{B}$ is implemented as a callable tool with a standardized input/output interface (e.g., using @mcp.tool() within the FastMCP runtime). The student agent $\pi_S$ operates under a frozen policy and receives no gradient updates: $\nabla_\theta \pi_S = 0$. When facing a new problem $x$, the student generates intermediate reasoning steps and tool calls as usual. At each step, the runtime environment exposes all tools in $\mathcal{B}$ as callable modules. The agent decides which tool to invoke (if any), fills in the input arguments (either through text generation or function-call templates), and receives a return value $o_t$, which updates the context for the next reasoning step. No external scoring, selection, or retrieval is required. All tool-use competence is embedded in the pre-constructed MCP-Box, allowing the student agent to benefit from distilled teacher knowledge with zero additional training. This design keeps the student agent lightweight and inference-time-efficient, while transferring all tool-related task-solving capability into the tool library itself.

# 3.5 Agent Structure

# 3.5.1 Teacher Agent

The teacher agent employs powerful large-scale language models (LLMs), renowned for their strong capabilities in coding and complex task-solving. To maintain simplicity and maximize efficiency, the teacher agent is designed with only three primary modules: a Manager Agent, a Basic Image Captioner, and an MCP Creation Module. Manager Agent serves as the central coordinator. Upon receiving a task prompt, the Manager Agent decomposes the task into manageable subtasks and evaluates whether external tools are required for their resolution. If external tools are necessary, it delegates the creation of Model–Context–Protocols (MCPs) to the MCP Creation Module. 
Following the execution of subtasks, the Manager Agent aggregates all intermediate results, synthesizing them into a coherent final response. Basic Image Captioner provides a textual summary of visual content when the input includes images. This component is especially important because many of the text-only models we use do not support direct image input. The captioner converts images into textual descriptions, allowing the rest of the system, including the Manager and MCP Creation Module, to process visual information through a uniform text-based interface. MCP Creation Module consists of four distinct sections: the MCP Brainstorming Section, the Open-Source Searching Section, the Script Generation Section, and the Virtual Environment Execution Section. The MCP Brainstorming Section generates initial conceptual plans for task-specific MCPs. Subsequently, the Open-Source Searching Section identifies relevant open-source resources to support MCP development. The Script Generation Section then synthesizes these ideas and resources into executable scripts. Finally, the Virtual Environment Execution Section validates and executes these scripts within a controlled environment, ensuring their practical applicability and robustness.

# 3.5.2 Student Agent

The student agent utilizes compact, cost-effective language models (e.g., LLaMA3.1-8B, Qwen3-8B) to significantly reduce inference expenses. Its structure closely mirrors that of the teacher agent but with a more streamlined composition, comprising only the Manager Agent and the Basic Image Captioner. The Manager Agent coordinates task decomposition, tool utilization, and result aggregation, benefiting directly from the distilled MCP-Box provided by the teacher agent, enabling it to efficiently handle complex tasks despite its smaller model scale.

# 4 Experiment

# 4.1 Experimental Setups

# 4.1.1 Tasks and Datasets
We evaluate the effectiveness of AgentDistill in enhancing small language models (sLMs) on visual question answering (VQA) and mathematical reasoning benchmarks. Specifically, we use Game of 24 [49] for mathematical tasks and two real-world VQA datasets, PathVQA [50] and SLAKE [51]. Together, these datasets demand multi-hop reasoning over image–text pairs, factual visual inference, and precise symbolic arithmetic under strict constraints, enabling a comprehensive evaluation of agents’ multi-modal and mathematical capabilities. Game of 24. The Game of 24 dataset is a mathematical benchmark with 1,362 puzzles. Each puzzle consists of four numbers to be combined using basic arithmetic operations to reach 24. Problems are ranked by human solving difficulty, and each includes at least one valid solution. PathVQA. PathVQA is a pathology-focused visual question answering dataset containing 32,000 questions over 4,998 medical images. It emphasizes fine-grained visual reasoning in histopathology, such as identifying cell types or diagnostic markers. SLAKE. SLAKE is a multimodal medical VQA dataset with 642 radiology images and over 14,000 expert-annotated QA pairs. It tests both visual understanding and medical knowledge retrieval in a bilingual setting. For each dataset, we sample 100 examples from the validation set for MCP-Box generation, following the benchmark construction introduced in OctoTools [52], and evaluate the student agent before distillation (without MCP-Box integration), after distillation (with MCP-Box integration), the student agent with pre-defined tools (OctoTools framework), and the teacher agent on the same dataset. The results are summarized in Table 2.

# 4.1.2 Models, Baselines and Metrics

Our experiments involve three small instruction-tuned language models (sLMs)—GPT-3.5-turbo, Qwen-8B, and LLaMA3.1-8B—which serve as the base of student agents in our study. 
We also use a teacher agent in which the Manager Agent is powered by Claude-Sonnet-4 and the MCP Creation Module is handled by GPT-4o, representing an upper-bound reference. All models operate in a frozen configuration, without any task-specific fine-tuning or gradient updates. We compare five settings: (1) student agents before distillation (without the MCP-Box); (2) agents with pre-defined tools (using the OctoTools framework [52] and the corresponding tools for each task); (3) student agents after distillation (with access to the distilled MCP-Box); (4) the teacher agent; and (5) agents built on the OctoTools framework powered by GPT-4o. This enables a comprehensive analysis of whether the MCP-Box narrows the gap between student agents and high-performance systems (either the teacher agent or tool-augmented methods). See Table 3 for cross-agent comparisons. We use task accuracy as the main evaluation metric, defined as the percentage of correctly answered dataset questions. To evaluate the benefit of the MCP-Box, we report the absolute improvement over the baseline sLMs before distillation. We also compare each student agent’s performance with the teacher agent to assess whether distillation allows student agents to approach teacher-agent performance.

# 4.2 Results and Analysis

We evaluate our approach across three datasets—PathVQA, SLAKE, and Game of 24—using multiple small language model (sLM) agents under the SmolAgents framework. All agents operate under a frozen policy and are equipped with the distilled MCP-Box described in Section 3. Generalizability and Usage Frequency of Distilled MCPs. Table 1 presents the number of unique MCPs generated by the teacher agent and the frequency with which student agents invoke them during inference. A high MCP-Box calling rate indicates that distilled MCPs are broadly applicable across diverse inputs and consistently reused by student agents. 
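The MCP-Box calling rate is a simple per-case statistic; it can be computed with a few lines (the trace data below is invented purely for illustration):

```python
def mcp_box_calling_rate(traces):
    """Percentage of test cases in which the student agent invoked at
    least one MCP from the box during inference.
    `traces` maps a case id to the list of MCP names that case called."""
    if not traces:
        return 0.0
    hits = sum(1 for calls in traces.values() if len(calls) > 0)
    return 100.0 * hits / len(traces)

# Hypothetical inference traces for four test cases.
traces = {
    "case_1": ["analyze_brain_mri"],
    "case_2": [],                                  # answered without tools
    "case_3": ["analyze_brain_mri", "crop_image"],
    "case_4": ["solve_arithmetic"],
}
rate = mcp_box_calling_rate(traces)  # 3 of 4 cases used an MCP -> 75.0
```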
These results confirm that our framework produces reusable and transferable MCPs that generalize well without requiring any additional training. Table 1: Generalizability and usage frequency of distilled MCPs across three benchmarks. “Number of Distilled MCPs” indicates the total reusable MCP modules generated by the teacher agent. “MCP-Box Calling Rate” measures the percentage of test cases where student agents invoked at least one MCP during inference. MCP-Box consistently improves student agents across datasets. Table 2 shows that applying the MCP-Box leads to substantial improvements across all student agents and datasets. On PathVQA, GPT-3.5-turbo improves from 45.7% to 52.7%, Qwen-8B from 53% to 55.3%, and LLaMA3.1-8B from 46.7% to 50.0%, indicating that the MCP-Box helps models improve their capabilities. On SLAKE, the gains are even more pronounced—LLaMA3.1-8B improves by +10.0 points, GPT-3.5-turbo by +7.3 points, and Qwen-8B by +6.7 points. On the arithmetic-focused Game of 24, GPT-3.5-turbo sees a +48.4-point gain (34.3% to 82.7%), and LLaMA3.1-8B gains +42.3 points (21.7% to 64%). These consistent improvements across models and datasets demonstrate that the MCP-Box is effective in enhancing the task-solving ability of small language models (sLMs). Effectiveness across datasets. AgentDistill yields consistent performance improvements across all datasets and base models. On SLAKE, all student models show notable gains—up to +10.0 points for LLaMA3.1-8B—suggesting that semantically rich visual questions benefit from the compositional structure of distilled MCPs. Game of 24 exhibits especially large improvements for weaker models (e.g., +48.4 points for GPT-3.5-turbo and +42.3 points for LLaMA3.1-8B), indicating that MCPs effectively scaffold symbolic reasoning tasks such as arithmetic operations. 
In contrast, models that already perform well (e.g., Qwen3-8B on Game of 24) show smaller gains, likely due to ceiling effects. Improvements on PathVQA are moderate but consistent, demonstrating the broad applicability of distilled MCPs. MCP-Box narrows the gap between student agents and teacher agents. To assess whether distilled MCPs help small language models (sLMs) approach the performance of much stronger agents, we compare MCP-equipped student agents with a reference teacher agent (Claude-4 + GPT-4o) and two retrieval-based systems: OctoTools powered by GPT-4o, and agents with pre-defined tools built on the OctoTools framework paired with sLMs, both equipped with the optimal toolset (Table 3). On PathVQA, student agents after distillation (with the MCP-Box) achieve 52.7% average accuracy—matching the teacher agent (52%) and outperforming both retrieval-based variants. On SLAKE, MCP-equipped students reach 65.1%, slightly below the teacher (66%) but above both OctoTools baselines. On Game of 24, MCP-equipped students significantly outperform OctoTools with GPT-4o (45%) and also slightly surpass OctoTools with sLMs (48%). The latter is partly due to the strong base performance of Qwen-8B on arithmetic tasks, which dominates the average within sLM-based OctoTools. These results show that a well-curated, self-contained MCP-Box enables small models to close the gap with much stronger agents, outperforming retrieval-based pipelines—even those backed by more powerful LLMs. This suggests that the distilled MCP-Box provides not only task transferability but also efficiency advantages over dynamic retrieval and tool orchestration. Table 2: Performance of student agents before and after distillation using AgentDistill. Accuracy improvements are observed across all datasets and models without any additional training. 
Table 3: Comparison between the teacher agent (Claude-4 + GPT-4o) and the average performance of student agents (GPT-3.5-turbo, Qwen-8B, LLaMA3.1-8B) after distillation. Octotools (GPT-4o) reports the performance of an open-source toolset baseline, and Agent with Pre-defined Tools (GPT-3.5-turbo, Qwen-8B, LLaMA3.1-8B) represents the average performance of sLMs in OctoTools with optimal toolsets. All agents operate without fine-tuning, and student agents are evaluated with distilled MCPs. Why MCP Distillation works. The MCP-Box serves as an external library of executable protocols, distilled from teacher trajectories and abstracted for reuse. Each protocol encapsulates tool-level logic in a parameterized format, allowing the student agent to bypass low-level code generation. However, the student remains responsible for high-level planning: it must decide whether to invoke a tool, which MCP to select, and how to fill in the arguments. No policy gradients or planning heuristics are transferred; instead, the benefit arises from constraining the tool-calling space to a set of functional, verified options. This reduces generation complexity without interfering with the agent’s core reasoning process. Case Study: Brain MRI Analysis Fig. 5 highlights the core advantage of our AgentDistill framework: enabling student agents to acquire generalizable and reusable tools from teacher-generated protocols. In this example, the teacher produces two MCPs focused on narrow subtasks—detecting bright areas and analyzing the left hemisphere. AgentDistill then consolidates these into a parameterized MCP template that supports broader functionality. By exposing arguments like region, analysis_mode, and threshold multipliers, the distilled tool supports diverse configurations across brain regions, diagnostic modes, and image characteristics. 
This design decouples task semantics from implementation logic, allowing the same MCP to be reused across new clinical scenarios (e.g., switching from MRI to CT, left-side to full-brain, simple detection to detailed diagnosis) with no code change. Such generalization is central to our training-free distillation pipeline, which converts ad-hoc language traces into structured, modular, and composable tools, ready to support student agents in dynamic or unfamiliar environments. Figure 5 (case-study illustration): two narrow teacher MCPs, Detect_brain_abnormality (def analyze_brain_mri(image_path: str) -> str, analyzing a brain MRI for abnormalities focusing on bright areas) and Brain_mri_analysis (detecting disease on the left side), are generalized during MCP-Box construction into the consolidated Brain_mri_analyzer tool, def analyze_brain_mri(image_path: str, region: str = "full", analysis_mode: str = "detailed", bright_threshold_multiplier: float = 2.5, very_bright_threshold_multiplier: float = 3.0, abnormality_bright_percentage_threshold: float = 3.0) -> str, whose documented arguments select the brain region ("left", "right", or "full"), the analysis type ("detailed" for specific diagnoses, "basic" for abnormal/normal, or "simple" for bright-area analysis), and the customized brightness thresholds used to flag abnormalities.
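The consolidated tool from this case study can be sketched in plain Python. The image analysis itself is stubbed out and the @mcp.tool() registration is omitted; the parameter names follow Fig. 5, but the validation logic and return strings are our illustration, not the actual implementation:

```python
# Sketch of the consolidated, parameterized MCP from the case study. One
# tool now covers configurations previously split across narrow teacher
# tools (left-side analysis, bright-area detection, ...).

VALID_REGIONS = {"left", "right", "full"}
VALID_MODES = {"detailed", "basic", "simple"}

def analyze_brain_mri(image_path: str,
                      region: str = "full",
                      analysis_mode: str = "detailed",
                      bright_threshold_multiplier: float = 2.5) -> str:
    """Analyze a brain MRI image for abnormalities and diseases."""
    # Parameter validation of the kind added during consolidation.
    if region not in VALID_REGIONS:
        raise ValueError(f"unknown region: {region!r}")
    if analysis_mode not in VALID_MODES:
        raise ValueError(f"unknown analysis_mode: {analysis_mode!r}")
    # Placeholder for the real image-processing logic.
    return (f"{analysis_mode} analysis of {region} region in {image_path} "
            f"(bright threshold x{bright_threshold_multiplier})")

# The same tool covers what used to be two separate teacher MCPs:
left_detailed = analyze_brain_mri("scan.png", region="left")
full_simple = analyze_brain_mri("scan.png", analysis_mode="simple")
```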
While knowledge distillation has become a mature field for compressing large language models (LLMs) into smaller ones by aligning their outputs or internal representations, the distillation of LLM-based agents, which involve planning, memory, and tool use, remains relatively underexplored. Existing agent distillation methods typically replay full teacher trajectories or imitate step-by-step teacher tool usage, but they often struggle to train student agents to dynamically plan and act in novel environments. We propose AgentDistill, a novel, training-free agent distillation framework that enables efficient and scalable knowledge transfer via direct reuse of Model-Context-Protocols (MCPs), which are structured and reusable task-solving modules autonomously generated by teacher agents. The reuse of these distilled MCPs enables student agents to generalize their capabilities across domains and solve new problems with minimal supervision or human intervention. Experiments on biomedical and mathematical benchmarks demonstrate that our distilled student agents, built on small language models, can achieve performance comparable to advanced systems using large LLMs such as OctoTools (GPT-4o), highlighting the effectiveness of our framework in building scalable and cost-efficient intelligent agents.
# 1 Introduction

Large Language Models (LLMs) have become an indispensable tool in the knowledge worker’s arsenal, providing a treasure trove of information at one’s fingertips. Retrieval-Augmented Generation (RAG) (Lewis et al., 2020) further extends the capabilities of these LLMs by grounding generic dialog using information from external data stores. Despite progress in long-context LLMs, RAG still provides benefits in cost and inference time (Li et al., 2024b; Yu et al., 2024). Moreover, it allows us to augment generic, off-the-shelf LLMs with proprietary data they haven’t been trained on. Progress on RAG has largely been enabled by benchmarks that help exhaustively evaluate the effectiveness of various methods (Yang et al., 2024; Muennighoff et al., 2023). While RAG has been extensively explored for free-form text, this is unfortunately not the case for structured data, stored either in relational databases or otherwise. Prior work has shown that structured data is of a different nature, for example regarding data types and dimensionality, requiring dedicated research (Cong et al., 2023). Moreover, investigating retrieval of structured data for RAG is important: contextualizing LLMs using frequently updated statistical data sources, such as Data Commons (Guha et al., 2023), or using proprietary relational databases within organizations, can yield rich dividends (Radhakrishnan et al., 2024), all underscoring the need for better models, approaches and evaluation for retrieval over structured data. Another important motivation for research on table retrieval stems from research on LLM-powered interfaces and agentic systems for processing and querying structured data. Most research in this direction, e.g., for question answering (Nan et al., 2022) or text-to-SQL (Gao et al., 2024), assumes that a table or relational database is provided, while identifying the relevant table is, in fact, a non-trivial task for a user (or agent). 
Figure 1 depicts an end-to-end pipeline as we envision it: starting with a natural language query (which can be a “lookup” or analytical question), the first step is to interpret and augment the query, after which the retrieval component identifies the relevant tabular data needed to generate a response (which can be in code, natural language, or another format). We find that table retrieval in end-to-end (analytical) query systems is an understudied area, motivating the creation of a benchmark. Figure 1 (pipeline illustration): an example query about K-12 students in schools in Alameda County flows through interpret & contextualize, retrieve & re-rank table(s), and table reasoning / query execution stages to produce an answer. While there has been initial work exploring open-domain question answering on public table corpora such as Wikipedia (Chen et al., 2021; Herzig et al., 2021), this does not represent the full spectrum of data characteristics and tasks for structured data retrieval. The development of a broad and comprehensive benchmark covering diverse tasks and datasets of varying difficulty is therefore key in advancing retrieval systems for structured data. In this paper, we present TARGET: the first benchmark evaluating Table Retrieval for Generative Tasks. With TARGET we provide a consistent and comprehensive framework for evaluating models and pipelines for table retrieval in isolation, as well as end-to-end for downstream tasks. We use TARGET to analyze retrieval methods based on sparse lexical representations (Chen et al., 2021), dense embeddings of metadata (Liu, 2022), dense table embeddings (Zhang et al., 2025), and dense row embeddings (Kumar et al., 2023). We find that sparse lexical representations are far less effective for retrieval over tabular data than they are for rich free-form text (Muennighoff et al., 2023). 
In our analysis with TARGET, we find that dense table- and row-embeddings (Zhang et al., 2025) outperform baselines but still show high variation in performance across tasks and datasets. Finally, we highlight the sensitivity of retrievers to the provided metadata inputs (e.g., web page titles) and table data availability (e.g., embedding full tables, column names only, or generated table summaries). Our findings identify a performance gap in retrieval accuracy and robustness across data and tasks, emphasizing the need for more research in this area, for which TARGET is an instrumental stepping stone.

# 2 Related Work

Representation Learning and LLMs for Tables Tables have recently become a modality of interest for representation learning and generative models for tasks such as table understanding (Hulsebos et al., 2019; Deng et al., 2022), fact verification (Herzig et al., 2020; Zhang et al., 2020), question answering (Herzig et al., 2020), and more recently text-to-SQL (Gao et al., 2024). These models either deploy LLMs out-of-the-box for tabular data, or develop tailored architectures to capture the properties of tables, which pose specific challenges (Cong et al., 2023). These models typically take one or more tables and a query as input to generate an answer; however, the relevant tables per query can be difficult to identify. TARGET is intended to close this gap and facilitate research on end-to-end querying over tabular data such as text-to-SQL and question answering. Table Retrieval Retrieval of structured data has been studied across use-cases in data management and machine learning. Dataset search, where the objective is to find a dataset for a given task (e.g., training a machine learning model or doing data analysis), is a well-studied topic in the data management literature (Halevy et al., 2016; Castelo et al., 2021). These table retrieval systems typically take a semantic description of the data as input and return the relevant tables. 
In TARGET we focus on retrieval components embedded into end-to-end query systems, where input queries are natural language queries and the task is to provide an accurate response based on relevant data that first needs to be retrieved in an end-to-end manner. Such pipelines have mainly been studied for open-domain question answering, typically over web table corpora (Chen et al., 2021; Herzig et al., 2021; Wang and Castro Fernandez, 2023). We include OTTQA (Chen et al., 2021), a sparse lexical retriever, as a baseline for open-domain QA. We also integrate two commonly used datasets for open-domain table QA (FeTaQA (Nan et al., 2022) and OTTQA (Chen et al., 2021)) into TARGET. We introduce two new end-to-end query tasks: fact verification and text-to-SQL, which are typically not considered in the “open-domain” setting but assume the relevant data is provided by a user. Figure 2 (benchmark overview): TARGET connects task queries and ground truth from FeTaQA, TabFact, OTTQA, Spider, and BIRD to retrievers (sparse lexical representations, dense metadata embeddings, dense table embeddings, dense row embeddings), a generator prompt (shortened QA example: “Answer the question given the tables. Question: { query } Tables: { tables }”), and an evaluator reporting retrieval performance and downstream accuracy. Benchmarks and Datasets To develop stronger retrievers and advance research on LLM-driven tasks on structured data, benchmarks and datasets are essential. The MTEB and CRAG benchmarks (Muennighoff et al., 2023; Yang et al., 2024) have been instrumental in benchmarking text embedding quality and RAG over rich text documents. We need similar benchmarks for retrieval systems and embedding models for structured data. In prior research, useful datasets were introduced to evaluate various tasks for relational data, such as TabFact (Chen et al., 2020), FeTaQA (Nan et al., 2022), and Spider (Yu et al., 2018). 
These datasets focus on evaluating methods for a specific downstream task only, i.e., given a table or database, answer natural language queries about it, without integrating the critical task of retrieval. TARGET addresses this gap by focusing on the evaluation of table retrieval performance while incorporating existing task-specific datasets.

# 3 The TARGET Benchmark

We describe the datasets, tasks, metrics, and retrievers that make up the TARGET benchmark. All resources for use and extension of TARGET are available at https://anonymous.4open.science/r/target-B782.

# 3.1 Benchmark Design

The pipeline of TARGET aligns with typical RAG pipelines (Figure 2). TARGET takes as inputs the corpus with tables/databases and queries (a natural language question or statement). Data loading and evaluation are abstracted away such that custom core components of RAG pipelines, i.e., the Retriever and Generator, can easily be evaluated when aligned with the TARGET API. The retriever, which can be basic or advanced (Gao et al., 2023), identifies the relevant table(s)/database(s) for an input query. Depending on needs, retrievers can either manage corpus embedding independently or leverage vector databases (Malkov and Yashunin, 2018; Qdrant) integrated in TARGET. Given the tables and query, the generator yields a response which is then evaluated with respect to the ground truth. Table 1: Tasks and Evaluation Metrics in TARGET.

# 3.2 Tasks & Metrics

Per source dataset, we combine all tables and any available metadata into a retrieval corpus. For all tasks, e.g., question answering, we evaluate the retriever and generator outputs using metrics from the original dataset papers or metrics that are widely adopted. An overview of the tasks and metrics in TARGET can be found in Table 1. Table Retrieval The table retrieval task assesses retrieval performance in isolation and is the first step for end-to-end downstream evaluation. 
Retrieval performance is measured with recall@$k$, reflecting the successful retrieval of the ground-truth table within the top-$k$ retrieved tables. In the text-to-SQL setting, however, standard recall may yield unintuitive results, as multiple ground-truth tables might be needed to generate the valid SQL query. With $T_i$ representing the ground-truth tables for the $i$th query, we correct for situations where $k \ll |T_i|$ and follow Thakur et al. (2021) in evaluating capped recall by setting our denominator to $\min(k, |T_i|)$. Additionally, we include the average retrieval time per query. Question Answering Given the retrieved table contents and the input question, an answer is generated and evaluated against the ground-truth natural language answer for accuracy and comprehensiveness. We report SacreBleu (Post, 2018) to reflect syntactic similarity across generated tokens. Fact Verification Given the retrieved tables, the generator either accepts or refutes a natural language statement, or acknowledges that insufficient information is provided. Here, accuracy is measured through precision, recall and F1. Text-to-SQL We adapt a prompt template from Talaei et al. (2024), which incorporates the natural language question and the schemas of the retrieved tables along with generation instructions. The prompt instructs the generator to output a concise “chain-of-thought” reasoning trace (Nye et al., 2021; Wei et al., 2022) to support more robust query generation. Additionally, since the retrieved tables may belong to different databases, the generator is required to include the selected database alongside the SQL query to ensure proper execution. The execution results from the generated SQL are then compared to those of the ground-truth SQL. We report the execution accuracy, aggregated across query complexity categories, following the implementation in BIRD (Li et al., 2024a). 
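The capped recall metric described above can be sketched directly (the function and example data are ours; the definition follows Thakur et al. (2021)):

```python
def capped_recall_at_k(retrieved, ground_truth, k):
    """Capped recall@k: hits among the top-k retrieved tables, divided by
    min(k, |T_i|) so queries needing more than k tables are not penalized."""
    top_k = retrieved[:k]
    hits = sum(1 for table_id in ground_truth if table_id in top_k)
    return hits / min(k, len(ground_truth))

# A hypothetical text-to-SQL query whose gold SQL joins three tables:
gold = {"orders", "customers", "products"}
ranked = ["customers", "reviews", "orders", "shipments", "products"]

r2 = capped_recall_at_k(ranked, gold, k=2)    # 1 hit / min(2, 3) -> 0.5
r10 = capped_recall_at_k(ranked, gold, k=10)  # 3 hits / min(10, 3) -> 1.0
```

Note that with plain recall@2 this query could score at most 2/3 even for a perfect retriever; the capped denominator removes that artifact.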
# 3.3 Datasets

Data and Label Sources The datasets of each task in TARGET can be found in Table 2. All publicly available splits of each dataset are included except for BIRD’s train split. We use the test splits of included datasets for our evaluations. For OTTQA and BIRD, where test splits are unavailable, validation splits are used. To ensure consistency across datasets, e.g., for consistent data processing, we standardize the schemas of the files holding the datasets. Each dataset has a “corpus” and a “queries” file. The “corpus” files contain the table contents and table identifiers (IDs), wherein each entry corresponds to a single table and includes a “context” field for metadata, if available. For instance, in the text-to-SQL datasets, the context field contains primary keys, foreign keys, and other table schema information. The “queries” files contain the queries, query IDs, and the ground-truth table ID(s). To evaluate retrieval for text-to-SQL, we extract all the tables referenced in the ground-truth query using sqlglot and consider them as ground truth. Table 2: Dimensions of included tabular datasets per task across splits in TARGET. Data Complexity Tables across datasets differ significantly in size. For example, text-to-SQL datasets feature significantly larger tables compared to other datasets in TARGET. Although BIRD’s validation split contains fewer tables and databases overall, the large size of each table poses a significant challenge for retrieval systems. Specifically, the average number of rows per table in BIRD is 52.4k, nearly 10x the 5.3k rows per table in Spider. In contrast, tables in FeTaQA, OTTQA, and TabFact range from 10 to 50 rows. The distributions of row and column counts per dataset can be found in Appendix A. Another distinction across datasets is the availability of metadata. 
Unlike text-to-SQL datasets, which feature descriptive table names and database schemas, FeTaQA and TabFact do not provide informative table titles (for example, “2-1570274-4.html.csv” from TabFact) or grouping by databases. This requires retrieval methods to effectively use tabular data contents or devise data augmentation methods.

# 3.4 Retrievers

We present our analysis with TARGET for retriever methods that reflect common design principles in research and industry. We evaluate dense semantic embeddings and sparse lexical representations, and vary the inputs provided: tables or rows, with or without table metadata, and metadata-only. Text-to-SQL has small changes to the retriever and generator, as explained in Appendix B. No Context baseline LLMs are capable of memorizing facts from the data that they were trained on (Mallen et al., 2023). To understand the influence of memorization on downstream task responses, the LLM-based generator is asked to respond based solely on its internal knowledge without any retrieved tables provided. We refer to this setting as the “No Context” baseline. Sparse Lexical Representation The Sparse Lexical Representation retriever resembles the OTTQA approach (Chen et al., 2021). It constructs a TF-IDF matrix of the corpus, which may use TF-IDF term weights or BM25. It takes as input the column names, table rows, and table metadata such as the (Wikipedia) page title. On retrieval, a query is converted into a TF-IDF-weighted vector for which the dot product is calculated with the table representations to find the $k$-most similar tables. Dense Metadata Embedding While metadata such as titles and descriptions can provide context for retrieval, they are either uninformative (e.g., “8c4c-4f0d.csv”) or entirely absent in many tables. 
To this end, the Dense Metadata Embedding retriever creates table summaries following three steps: ① generate a table name and summary of each table with GPT-4o-mini using the column names and first 10 rows of the table, ② embed the table metadata with text-embedding-ada-002, and ③ retrieve relevant tables based on the cosine similarity between the natural language query and the metadata embedding. We use the open-source LlamaIndex library, commonly used in practice, to store the embeddings in an in-memory key-value index and retrieve using cosine similarity (Liu, 2022). Dense Table Embedding We compare three dense embedding models: text-embedding-3-small (OpenAI, 2024), stella_en_400M_v5 (Zhang et al., 2025), and multilingual-e5-large-instruct (Wang et al., 2024). The latter two are open-weight models available on HuggingFace. We evaluate the performance for embeddings of only column names versus column names along with 100 rows. While formatting tables as JSON appeared better for GPT-3.5 (Singha et al., 2023), markdown formatting yields better results here. Each row of the table is formatted in markdown’s tabular syntax and sequentially appended to form a single concatenated string for embedding. For retrieval, the input query is embedded with the same model, and the top-$k$ tables are retrieved based on cosine similarity. Dense Row-level Embedding The input query might semantically correspond to values of certain rows within tables. Alternative approaches, therefore, devise retrieval through row-level embeddings (Zhang et al., 2023; Kumar et al., 2023; Wang and Castro Fernandez, 2023). 
In this baseline, each row is serialized into a sentence following the template “[column name]_i is [cell value]_i, [column name]_j is [cell value]_j” (Zhang et al., 2023), for example, “first name is John, last name is Doe”. The serialized rows are embedded using the relatively small and effective stella_en_400M_v5 embedding model (435M parameters). Upon retrieval, the input query is embedded with the same model and used to retrieve the rows with the highest cosine similarity to the query embedding. Based on the retrieved rows, the corresponding top-$k$ tables are returned. Row-wise retrieval via dense embeddings can become impractical for very large tables with hundreds of thousands of rows, for example those included in BIRD. Therefore, this baseline is not evaluated on the BIRD dataset.

# 3.5 Generators

We use basic LLM prompts for downstream tasks to evaluate the GPT-4o-mini model in our experiments (Hurst et al., 2024). However, we design the TARGET API to enable evaluations of other language models and advanced generation pipelines. The instruction prompt takes in: ① task instructions, ② the top-$k$ retrieved table(s), or the database schemas of retrieved tables (for text-to-SQL), and ③ the query. Unless otherwise specified, we serialize all tables in prompts to markdown strings. An example prompt for the question answering task is provided below. The full prompt templates can be found in Appendix B.

# 4 Results

Table 3 presents the performance of the evaluated retrievers with $k$ set to 10. Figure 3 illustrates the average retrieval recall over various values of $k$ across datasets. For the Sparse Lexical Retriever, only the performance using BM25 is included, as it is similar to TF-IDF.

# 4.1 Retrieval Insights

# How do different table representations perform?
We find that table retrieval based on sparse lexical representations such as BM25 (OTTQA) is less effective, across tasks and datasets, than it is for text (Muennighoff et al., 2023), even with increased $k$ (Table 3). The strong performance of the sparse lexical retrievers with table title on the OTTQA dataset (recall@10 of 0.967 and 0.963) can be attributed to the high correspondence between Wikipedia table titles and the questions, as manually verified. If the table title is not included, performance drops to 0.592 for BM25 and 0.583 for TF-IDF. The importance of descriptive metadata for retrievers based on lexical representations is confirmed by their low performance on FeTaQA and TabFact, where descriptive table titles are not available. LLM-generated table summaries with dense metadata embeddings can significantly improve retrieval performance, as illustrated by the Dense Metadata Embedding baseline.

Dense Table Embeddings (with column names and rows included in the embeddings) generally yield the best performance. Different embedding models demonstrate similar performance across datasets, with stella_en_400M_v5 achieving the best results, making it a viable open-source, lightweight, and efficient option (Table 4). Notably, for both text-to-SQL datasets, the effect of including data rows is minimal, with differences within ±5% in recall. Inspection confirms that (analytical) queries in text-to-SQL datasets typically have high resemblance to schemas (column names). In contrast, for the question answering and fact verification tasks, retrieval performance is significantly curtailed when only the column names are embedded. The Dense Row-level Embedding method exhibits performance comparable to dense embeddings of tables with sampled rows.
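For concreteness, the “[column name] is [cell value]” row-serialization template used by the Dense Row-level Embedding baseline amounts to a one-liner; the columns and values below are illustrative.

```python
def serialize_row(columns, row):
    """Render one table row as comma-joined '[column name] is [cell value]' clauses."""
    return ", ".join(f"{col} is {val}" for col, val in zip(columns, row))

sentence = serialize_row(["first name", "last name"], ["John", "Doe"])
# → "first name is John, last name is Doe"
```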
On question answering datasets, row-level retrieval does not improve performance compared to dense table embeddings, while for text-to-SQL and fact verification it slightly outperforms the other baselines. However, for text-to-SQL datasets with large tables, the vast search space significantly hinders retrieval efficiency. Given the relatively small performance gains, row-level embeddings may not be practical for large-scale table retrieval.

Figure 3: Influence of $k$ on retrieval performance with various baselines on the FeTaQA dataset, confirming the expectation that performance gradually increases with $k$, most significantly for dense embedding approaches.

Figure 4: Influence of corpus size on retrieval, illustrating the sensitivity in retrieval performance of dense retrievers when the corpus reaches a large scale.

# How important is table metadata for retrieval?

From our analysis of the retrieval results of methods based on sparse lexical representations (OTTQA TF-IDF and BM25), we conclude that descriptive metadata (e.g. table summaries or titles) can be key for lexical retrievers. We observe a similar sensitivity to semantic metadata on the text-to-SQL tasks when table names are not included, which is further confirmed by results on FeTaQA, where the provided table titles are not descriptive (e.g. “example-10461”) and including them does not enhance performance. The importance of metadata is also highlighted by the strong performance of the dense metadata embedding method compared to the dense table embedding method for text-to-SQL.

Table 3: Results with TARGET for table retrieval with $k=10$. R@$k$ stands for recall@$k$, CR@$k$ stands for capped recall@$k$ (Thakur et al., 2021), and s for average retrieval time in seconds. For the Dense Table Embedding baseline, we report the best performing model stella_en_400M_v5. Best scores are in bold, second-best underlined.
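The retrieval metrics above can be sketched in a few lines. The ranked list and gold labels are illustrative; the capped variant follows the BEIR-style definition, which divides by min($k$, number of relevant tables) so that queries with more gold tables than $k$ can still reach 1.0.

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of the relevant tables that appear in the top-k results."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / len(relevant)

def capped_recall_at_k(retrieved, relevant, k):
    """Capped recall: the denominator is min(k, #relevant), so a query
    with more relevant tables than k is not penalized for the surplus."""
    hits = len(set(retrieved[:k]) & set(relevant))
    return hits / min(k, len(relevant))

retrieved = ["t3", "t7", "t1"]       # ranked table ids (illustrative)
relevant = ["t1", "t7", "t9", "t2"]  # four gold tables for this query
r = recall_at_k(retrieved, relevant, 3)          # 2/4 = 0.5
cr = capped_recall_at_k(retrieved, relevant, 3)  # 2/3 ≈ 0.667
```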
Table 4: Table retrieval performance of Dense Table Embedding with various text embedding models, $k=10$. Best scores are in bold, second-best underlined.

# How does scale affect retrieval performance?

First, we assess the impact of the number of retrieved tables, i.e. of increasing $k$. As Figure 3 shows, average recall gradually increases with $k$ for all retrievers, which is expected. The lexical retrievers do not gain significant performance improvements from retrieving more tables. Another influential variable is the size of the retrieval corpus. To analyze this, we evaluate retrieval performance as the corpus size increases, by appending tables from the GitTables dataset (Hulsebos et al., 2023). Here we zoom in on the FeTaQA dataset, which initially consists of 2K tables. We study the impact of corpus size on the retrieval performance of the sparse lexical baseline based on TF-IDF and the dense table embedding baseline. As Figure 4 shows, retrieval performance decreases as the corpus size grows. For the dense table embedding baseline, which generally exhibits the best performance across tasks, the drop becomes progressively more noticeable once the corpus exceeds 10K added tables. The performance degradation on large corpora illustrates the need for table retrievers that remain robust at scale.

# 4.2 Generator insights

# Can LLMs execute tabular tasks from memory?

In general, the “No Context” baseline performs significantly worse without relevant tables provided (Table 5). An exception to this is the low performance of sparse lexical retrievers on FeTaQA, which we discuss in the next section. Without grounding LLMs in relevant structured data to answer domain-specific questions, the factuality and quality of generation become unreliable. Table 5 also emphasizes that database schemas are critical for generating accurate SQL queries in text-to-SQL, where the “No Context” baseline yields an accuracy of 0.
# Does generation benefit from table retrieval?

The low performance of all retrievers on the OTTQA dataset is notable (all SacreBLEU scores are below 1), which we hypothesize is due to the relatively short reference answers in OTTQA versus longer generated answers, despite prompting for conciseness. In comparison to the “No Context” baseline, where the model is asked to generate answers solely from its internal knowledge, providing retrieved tables in context increases downstream performance notably, as exemplified by results for FeTaQA and TabFact. Due to the stronger retrieval performance of dense embeddings, we find that dense retrievers generally yield the best downstream performance across datasets. Meanwhile, the poor retrieval performance of sparse lexical representations on FeTaQA seems to distract the generator with irrelevant tables, leading to a significant decrease in SacreBLEU scores compared to the “No Context” baseline. Ensuring the inclusion of relevant tables in the LLM’s context is crucial for reliable downstream generation quality, highlighting the need for robust retrieval methods.

Table 5: Results with TARGET for downstream tasks corresponding to upfront table retrieval with $k=10$. SB stands for SacreBLEU, EX for execution accuracy aggregated over all query complexity categories. P/R/F1 reflect precision, recall, and F1 scores. For Dense Table Embedding, we report the results of the best performing embedding model stella_en_400M_v5. Best scores are in bold, second-best underlined.

# Can long-context LLMs replace table retrieval?

An alternative to retrieval-augmented generation (RAG) is to exhaust the context of LLMs by including vast amounts of tables from the corpus without fine-grained retrieval, relying on the LLM to extract the answer from a large set of tables.
To understand the limitations of LLM context for table comprehension tasks, we explore the relationship between the rank of the ground-truth table in the retrieval results and downstream task performance in Figure 5. Treating instances where the ground-truth table fails to appear in the top-10 retrieval results as the lowest rank, we see a strong negative correlation (average Spearman’s $\rho = -0.85$) between retriever performance and downstream task performance. These results 1) motivate work on improved table retrieval and reranking, and 2) indicate that careful attention to crafting table retrievers is more effective than relying on feeding a large number of tables into long-context LLMs.
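The rank-vs-performance analysis above can be sketched with a plain-Python Spearman correlation (no tie handling, which the illustrative data below avoids); the paired lists stand in for per-query gold-table ranks and downstream scores.

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n*(n^2-1)); assumes no ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0] * len(vals)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Illustrative: a worse (higher) ground-truth rank pairs with a lower score.
gt_rank = [1, 2, 3, 5, 10]          # rank of gold table in retrieval results
score = [0.9, 0.8, 0.7, 0.5, 0.2]   # downstream task score per query
rho = spearman_rho(gt_rank, score)  # perfectly anti-monotone → -1.0
```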
The data landscape is rich with structured data, often of high value to organizations, driving important applications in data analysis and machine learning. Recent progress in representation learning and generative models for such data has led to the development of natural language interfaces to structured data, including those leveraging text-to-SQL. Contextualizing interactions with structured data, whether through conversational interfaces or agentic components, via retrieval-augmented generation can substantially improve the freshness, accuracy, and comprehensiveness of answers. The key question is: how do we retrieve the right table(s) for the analytical query or task at hand? To this end, we introduce TARGET: a benchmark for evaluating TAble Retrieval for GEnerative Tasks. With TARGET we analyze the retrieval performance of different retrievers in isolation, as well as their impact on downstream tasks. We find that dense embedding-based retrievers far outperform a BM25 baseline, which is less effective for tables than it is for retrieval over unstructured text. We also surface the sensitivity of retrievers to various metadata (e.g., missing table titles), and demonstrate a stark variation of retrieval performance across datasets and tasks. TARGET is available at https://target-benchmark.github.io.
# 1 Introduction

About one in nine people (10.9%) age 65 and older in the U.S. have Alzheimer’s Disease and Related Dementias (AD/ADRD) [106]. In 2023, 11.5M caregivers of people living with AD/ADRD provided an estimated 18.4 billion hours, or nearly 31 hours per week, of unpaid help [105]. Caregiving for AD/ADRD is an emotionally and physically demanding role, predominantly undertaken by informal and family caregivers at home [47, 51]. In fact, 74% of AD/ADRD family caregivers are concerned about maintaining their own health since becoming a caregiver [2]. They face a multitude of challenges throughout the caregiving journey, including managing the progressive symptoms of AD/ADRD, handling financial strains, and managing their own wellbeing [27, 48, 144, 146, 150]. Compared to formal caregivers, family caregivers face more pronounced challenges due to factors such as emotional attachment to the care recipient, lack of formal training, and insufficient resources and equipment [21, 39, 109]. As a result, they often struggle with high levels of stress, anxiety, and depression [26, 55, 78, 100]—at least one in three family caregivers of individuals with AD/ADRD was found to suffer from clinical depression in prior meta-analyses [119, 157]. The mental health burden borne by family caregivers underscores the critical need for effective support systems and interventions to help them cope with the psychological demands of caregiving [27, 52]. However, the specific mental wellbeing needs of AD/ADRD caregivers, and how these needs evolve throughout the caregiving journey, remain less understood. From a human-computer interaction (HCI) and computer-supported cooperative work (CSCW) perspective, there is also a gap in understanding the specific technology and collaboration needs of caregivers.
In particular, caregiving is inherently multi-layered and collaborative, involving direct interactions between caregivers and recipients while also requiring coordination with family members, healthcare providers, government agencies, community resources, and supportive technological tools [112, 122]. Prior work in the HCI/CSCW space has explored technologies such as smartphone applications and virtual assistants to facilitate collaboration between caregivers and care recipients, supporting tasks like medication management and routine coordination [16, 66, 69, 82, 130]. However, the role of technology in supporting caregivers’ mental health remains largely underexplored. Together, given the global increase in AD/ADRD, the growing reliance on informal and family caregivers [12, 90], and the promises of HCI and personal health informatics in supporting mental health [41, 132], it is essential to develop a deeper understanding of the caregiving journey and how technology can better support family caregivers’ mental wellbeing. As people’s needs evolve, so do their expectations of technology. Recent work in personal informatics has highlighted the need to design tools that not only understand individuals’ goals but also adapt as those goals evolve, providing better support [87, 121, 125]. As an AD/ADRD caregiver’s needs evolve over time, from seeking information about the condition, to learning how to care, to discovering effective lifestyle adaptations, their expectations of technology also evolve. Adopting a personal health informatics perspective for technology design, the key open question is: how can we design mental wellbeing technologies that offer tailored and timely support to meet the unique needs of family caregivers? Accordingly, our research investigates the evolving mental wellbeing needs and concerns of AD/ADRD caregivers.
Our work is guided by the following research questions (RQs):

RQ1: What are the primary causes and effects of mental health challenges among AD/ADRD caregivers, and how do these challenges evolve throughout the caregiving journey?

RQ2: What practices do AD/ADRD caregivers adopt in response to these evolving challenges?

RQ3: What technologies do AD/ADRD caregivers use to manage mental wellbeing, and what challenges do they face with current and emerging technologies?

We conducted 25 semi-structured interviews with family caregivers of AD/ADRD individuals in the U.S. The interviews focused on caregivers’ daily routines, mental health challenges, coping strategies, and the technologies they use. We analyzed the interview data using inductive qualitative coding, followed by thematic analysis [20], to identify key themes addressing our RQs. Our findings indicate that caregivers experience multifaceted mental health concerns; we categorize these concerns into causes and effects and identify three stages of evolution in caregivers’ mental health needs. Participants expressed that while they recognize the need to socialize and seek professional support to manage their mental wellbeing, they often feel constrained by both time and space in simultaneously balancing the demands of professional, personal, and caregiving responsibilities. They reflected on the self-care practices they adopt and the external support they seek to manage caregiving and wellbeing concerns. In this regard, participants also shared the role played by a variety of technologies, ranging from monitoring and medication management to social belonging and support. We found that caregivers hold both optimism and skepticism about technologies, primarily around computing technologies such as online platforms, smart devices, and AI chatbots.
For instance, while some participants expressed optimism and excitement about the timely and personalized responses offered by AI, others remained skeptical regarding the accuracy of the information and the absence of human interaction. Additionally, we identify the key barriers caregivers face when using technology and underscore the need for more affordable, accessible, and personalized mental health support tools. Our work is unique in its examination of the dynamic evolution of caregivers’ mental health concerns throughout the caregiving journey, an aspect largely overlooked in prior work that typically focuses on point-in-time assessments of caregiver burden. Our study builds on and contributes to the body of work on supporting caregiver needs in HCI and CSCW [13, 43, 66, 82, 131, 153]. We situate our study within the social support behavioral code (SSBC) [136] to understand the various types of support that family caregivers seek. Additionally, we draw upon the ethics of care framework [49, 142] to examine the relationships and dependencies caregivers must navigate to adequately address their mental wellbeing concerns amidst the challenges of caregiving. Our work adds to this body of work by making the following key contributions:

- A thematic characterization of the multifaceted mental health needs and concerns of caregivers, categorizing them into cause-and-effect relationships.
- A novel temporal framework that maps the evolution of mental wellbeing practices across three distinct stages of the caregiving journey.
- A mapping of caregivers’ perceived challenges with existing technologies and their proposed improvements—highlighting the need for accessible, adaptive, and human-centered technological solutions that evolve alongside the caregiving journey.

The above contributions collectively advance both theoretical understanding and practical approaches to supporting caregivers’ mental wellbeing.
We also discuss the implications of this work in terms of policymaking, technology design, and ethics. Our study offers empirical insights that lay a foundation for designing tailored support for family caregivers’ mental wellbeing, ultimately benefiting both caregivers and care recipients. Our findings emphasize the collaborative nature of caregiving, which requires coordinated efforts among caregivers, care recipients, families, and communities. This dynamic highlights the importance of developing support systems that not only prioritize family caregivers’ wellbeing but also enhance effective communication and resource-sharing within the caregiving network as well as the broader society.

# 2 Background and Related Work

# 2.1 Alzheimer’s Disease and Related Dementias (AD/ADRD): Condition and Caregiving

Alzheimer’s Disease and Related Dementias (AD/ADRD) is a family of neurodegenerative conditions that progressively worsen with no definitive cure [64]. It remains a major public health concern, ranking as the fifth-leading cause of death among Americans aged 65 and older [12]. Projections indicate that by 2050, approximately 14M individuals in the U.S. and 152M worldwide will be living with AD/ADRD [90]. Caregiving for AD/ADRD is predominantly undertaken by family members and informal caregivers within the home setting [47, 51]. In 2022 alone, the unpaid caregiving provided by family members was valued at approximately $339.5 billion USD [12]. Additionally, family caregivers frequently experience financial strain [48, 50, 144] and difficulties in planning for future crises [83, 146, 150]. Many also struggle with maintaining their own physical and mental wellbeing [27, 55, 150]. Therefore, this caregiving effort comes with significant personal costs, including a heightened risk of emotional distress and adverse mental and physical health outcomes for caregivers [12, 47, 51, 52].
In particular, caregivers of AD/ADRD individuals often face higher levels of stress compared to those caring for individuals with other conditions [9]. Without sufficient training or support, they encounter numerous challenges, such as managing the evolving symptoms of the care recipient [110, 114], providing supervision [103], and making complex medical decisions regarding comorbid conditions [74]. We build upon this prior work, highlighting the challenges faced by AD/ADRD caregivers, to understand the specific mental wellbeing concerns experienced across the AD/ADRD caregiving journey. Our work unpacks the causes and effects of mental health concerns among these caregivers, and extends prior research by examining how these concerns evolve throughout the caregiving journey—a temporal dimension that remains underexplored in current literature. This helps us focus on the evolving mental wellbeing needs of these caregivers, and the barriers they face in navigating caregiving challenges while managing their own wellbeing. This study contributes to a more nuanced understanding of how caregiver support systems and technologies can be better tailored to address the dynamic and multifaceted concerns of family caregivers.

# 2.2 Mental Health Needs of Informal and Family Caregivers

While caregiving roles can include both professional and informal caregivers, our study specifically focuses on informal and family caregivers. Family caregivers are unpaid individuals, often family members or close acquaintances, who assist those with chronic or acute conditions by performing tasks ranging from daily care to complex medical procedures [109]. Due to the intensity and magnitude of these responsibilities, family caregivers frequently face emotional overwhelm, which can lead to self-neglect and mental health issues such as depression [15]. The prevalence of mental health issues among caregivers is a significant concern.
Research reveals that caregivers often experience high levels of stress, depression, anxiety, and other psychological challenges due to the demands of caregiving [124]. Prior work indicates that many caregivers experience significant psychological stress, with 34.0% reporting depression, 43.6% facing anxiety, and 27.2% resorting to psychotropic medications [119]. Family caregivers frequently encounter stigma, as societal misconceptions about caregiving roles can lead to judgment and disapproval, exacerbating feelings of isolation and stress [28, 131]. The progressive decline in the care recipient’s memory and cognitive abilities can provoke feelings of shame, embarrassment, and even disgust in caregivers [151]. This is especially problematic in contexts where community plays a crucial role, such as AD/ADRD care, where caregivers often desire recognition for their efforts beyond being seen as mere assistants [13, 17, 131]. Further, caregivers also experience compassion, sorrow, and guilt, driven by their deep desire to ease the suffering of their loved ones, coupled with grief over the person they feel they have lost [151]. This ongoing emotional strain can leave caregivers emotionally drained and detached, making it more difficult for them to continue providing care—a phenomenon described as “compassion fatigue” [32]. Further, the emotional toll on caregivers also bears repercussions for the care recipients [56]. Sun et al. found that caregiver depression can accelerate cognitive decline in AD/ADRD individuals [137]. The deteriorating health of caregivers intensifies their caregiving burden, contributing to the worsening of the care recipient’s condition [26]. Therefore, del Pino-Casado et al. highlighted the importance of enhancing caregivers’ perceived health [37].
We build on the above body of work to examine AD/ADRD caregivers’ mental health challenges, as well as their self-care practices, support mechanisms, and barriers such as insufficient support, compassion fatigue, and burnout. While prior research has highlighted caregiver mental health challenges, such as stress and compassion fatigue, these issues remain largely underexplored in CSCW and HCI. Caregivers increasingly interact with sociotechnical systems for support, making it vital for HCI/CSCW to understand their lived experiences and design more supportive technologies. Our work extends this literature by examining the evolving mental health needs of AD/ADRD caregivers and emphasizing the role of sociotechnical solutions in supporting their wellbeing.

# 2.3 HCI and CSCW Technologies for Caregivers’ Wellbeing

A rich body of prior work in HCI and CSCW has explored technologies to support caregivers both generally [17, 25, 57, 73, 84, 120, 126, 165] and for AD/ADRD specifically [11, 24, 54, 77, 101, 133, 135]. Prior work highlighted the benefits of wellbeing technologies in supporting AD/ADRD caregivers [80, 162]. These include the use of smartphone [130] and wearable [69, 134] technologies in supporting caregiver wellbeing. In addition, research has explored the ethics and privacy surrounding the development and use of technologies to assist caregiving [53, 70, 81, 86]. Prior work explored how smartphone apps can help caregivers maintain routines and reduce stress, with features tailored to provide timely support [34], and the use of Voice Interactive Personal Assistants (VIPAs) in providing mental health services to caregivers [92, 102, 153]. These technologies are especially useful for caregivers as they offer accessible, scalable support tailored to individual needs, ensuring mental health services are available when needed [72, 107]. Further, online platforms provide a comprehensive array of resources aimed at improving caregiver mental health [59, 60, 75, 116].
These platforms often include telehealth services, monitoring tools, and self-care resources, allowing caregivers to manage stress and connect with professionals and peers [16]. These platforms promote resilience and provide centralized, accessible support for caregivers in managing their mental health [16]. Over the last decade, chatbots have been integrated into healthcare, including AD/ADRD care, where they offer caregivers information, education, and support [113]. Despite this potential, chatbots for AD/ADRD caregiving are still in their early stages and require more development to meet caregivers’ needs effectively [18, 19, 113, 153]. Relatedly, Bhat et al. highlighted the critical mediating role played by caregivers and proposed caregiver-centric technological supports [13]. Likewise, Kim et al. emphasized the role of customized support strategies, as caregiver needs fluctuate across different stages of challenging behavioral episodes [66], and Meyerhoff et al. underscored the critical need for user-centered digital mental health (DMH) tools that adapt flexibly to individual support needs, which can empower users in their mental health journeys [82]. Recently, Smriti et al. explored if and how technologies can support caregivers of people living with dementia [133]. In parallel, within HCI and CSCW, the personal informatics community has studied the design of technology to support mental health across a range of topics, from monitoring mood, affect, and stress to managing bipolar disorder and depression [14, 22, 41, 61, 118, 155, 158]. These studies explored how technology can be used to understand and manage mental health more effectively, highlighting the importance of tailored design in mental health technology [160, 161].
Although individual mental health has been extensively studied, our understanding of the mental health needs of informal and family caregivers—individuals whose mental health challenges arise not from their own condition, but from their ongoing responsibility for someone else’s medical condition—remains limited. This distinction is crucial, as caregiving introduces unique stressors that are not well represented in broader mental health research. Our study expands the scope of CSCW research to include the nuanced and evolving mental health needs of caregivers, whose experiences demand new forms of technological considerations. We examine how AD/ADRD caregivers’ mental health needs—and the technologies that support them—evolve across different stages of caregiving. By synthesizing two streams of research—addressing the needs of people with mental health conditions and supporting caregivers in their care work—we aim to empirically inform opportunities to address the specific mental wellbeing needs of caregivers. Our work is further motivated by Chen et al.’s call for system design to focus on caregivers and Kokorelias et al.’s emphasis on the evolving needs of caregiving through different phases [25, 68]. We aim to understand caregivers’ concerns and desires for technologies to address their caregiving and mental wellbeing challenges. Towards this aim, we explore their dynamic and multifaceted mental health needs, and their current practices to address mental health concerns. Our work highlights ways to enhance technologies to better support caregivers’ mental health throughout their caregiving journey, expanding the discourse on solutions in AD/ADRD caregiving.

# 3 Study and Methods

We conducted semi-structured interviews with caregivers of individuals with AD/ADRD. We describe our methodology and participant pool in this section.

# 3.1 Participants and Recruitment

We recruited our participants primarily through social media.
We first contacted the moderators of different online communities catering to AD/ADRD-related discussions on Reddit (r/alzheimers, r/dementia, r/dementiaresearch, r/ParentsWithAlzheimers, etc.) and AlzConnected (alzconnected.org), briefly describing our research and asking whether we could recruit from their respective platforms. Then, in the online communities where we received moderators’ approval, we posted our recruitment flyer with an interest form that included a demographic survey questionnaire (age, sex, race, U.S. state) and their role as a caregiver. This interest form helped us target and screen participants who were 1) 18 years or older, 2) current or former caregivers for AD/ADRD, and 3) residing in the U.S. We received 293 responses to our interest form over a period of two months between August and October 2024, and we invited a subset of respondents to maximize diversity and balance across caregiving roles and tenure. This led to a final set of 25 participants who consented to participate in the study, whom we interviewed. We note that although our recruitment flyers were posted on social media, three of our participants did not actively participate on these platforms and were instead referred by others to express their interest in participating in our study. Each participant was compensated with an Amazon gift voucher of $25 USD. Table 1 summarizes the demographics and caregiving roles and tenure of the participants. Of the participants we interviewed, 76% (19 out of 25) are current caregivers, and we note a diversity of participants across age group, number of years in caregiving, race, education, and occupation. Before the interviews, participants were provided with the Rapid Caregiver Well-being Scale (R-CWBS) [141] along with the consent form. R-CWBS is a validated short-form rapid assessment instrument to infer the key areas of support a caregiver needs [141].
Here, each question is rated on a Likert scale between 1 (Rarely) and 5 (Usually), and lower scores indicate a need for greater support. Table 2 provides a summary of participants’ responses to this survey, showing that although our pool of participants was generally consistent in taking care of personal daily activities, the other questions received a variety of responses.

# 3.2 Interview Procedure

We conducted semi-structured interviews with caregivers to explore their experiences and mental health throughout the caregiving journey. We conducted these interviews via video calls (Zoom/Teams). The research team took turns interviewing and note-taking during the interviews. These interviews were recorded and lasted 60 minutes. During the interviews, we sought to understand participants’ daily caregiving routines and how these responsibilities affected their mental wellbeing. We also inquired about the strategies they used to manage stress and cope with the mental health demands of caregiving. Our interview protocol was systematically developed based on prior HCI/CSCW qualitative research with caregivers [13, 59, 66, 133]. We began with open-ended questions about caregiving experiences to allow themes to emerge naturally, followed by more structured prompts derived from established literature. To gain further insight, we drew on the prior literature on the mental health concerns of AD/ADRD caregivers and incorporated prompt questions. Table 1. Summary of participants, including Type (Current/Former Caregiver), Years of Caregiving (Ys.), Care Recipient, Age, Gender, Race, Education, and Occupation. Some caregivers started their caregiving for a family member and later transitioned into professional caregiving. Serial caregivers are marked with an ‘*’ next to their ID.
In particular, we shared our screen with participants and asked them to rate various caregiving-related concerns—1) disruptive patient behaviors [38, 71, 138], 2) insufficient support systems [58, 97, 123], 3) doubt in self-efficacy [29, 45, 140], 4) emotional wellness issues [6, 23, 46], 5) relationship management [108, 147], 6) compassion fatigue [32, 33, 98], 7) no time for self-care [94, 148, 149], and 8) burnout [5, 139, 143]. These eight concerns were obtained from prior literature as significant challenges for AD/ADRD caregivers. This approach enabled us to situate our findings within existing knowledge while also allowing for the discovery of novel insights specific to our participants’ experiences. We guided participants through each concern, providing literature-based definitions, and asked them to rate their level of concern on a scale from 1 (not at all concerning) to 5 (very concerning). Additionally, we asked the participants to explain their ratings and think aloud about specific experiences related to these concerns. This approach helped build common ground and helped us gather deeper insights into participants’ mental wellbeing concerns. Table 3 summarizes participants’ responses to these prompts, where we find a high concern for most of the prompts. In our ensuing qualitative analyses [20], we incorporated the deeper insights that they shared. Finally, we asked the participants about the use of technology in caregiving as well as in managing mental wellbeing. Participants were encouraged to share their thoughts on the technologies they used, what features they found useful, and any suggestions or concerns they had for improving their experience. This helped us understand participants’ concerns and desires about these technologies. Table 2. Summary of participants’ responses to the Rapid Caregiver Well-being Scale (R-CWBS) [141]. Each question was rated on: 1 (Rarely), 2 (Occasionally), 3 (Sometimes), 4 (Frequently), and 5 (Usually). Table 3.
Summary of participants’ responses to prompts on mental wellbeing concerns drawn from the literature. Participants responded to these prompts based on how much they associated with these concerns, on a scale of 1 (not at all concerning) to 5 (very concerning).

# 3.3 Data Analysis

After the interviews, we used otter.ai for automated transcription of the five interviews conducted on Zoom, and used the default transcription on Teams for the remaining interviews. The recordings were anonymized by redacting any identifiable data such as personal names and locations. The dataset was then treated as a corpus for comprehensive analysis. We analyzed our data comprehensively using reflexive thematic analysis [20]. This analysis incorporated transcriptions from interview recordings, notes taken during interviews, and participants’ responses to both the R-CWBS (Table 2) and the mental wellbeing concern prompts from the literature (Table 3). First, the co-first authors carefully reviewed each transcript against the original recordings to ensure accuracy and to capture nuanced expressions and emotional responses. Then, we organized our initial coding using Miro [85] as a visual collaborative platform, which facilitated the identification of emerging patterns. Through this approach, we systematically developed our initial themes, which were then refined through team discussions and iterative analysis. All co-authors participated in reviewing the transcripts and engaged in an iterative process of open coding, where codes were grouped, initial subthemes were identified, and these subthemes were refined into higher-level themes. To elaborate on our process, all five co-authors were involved in the open coding process.
The co-first authors led the coding effort, completing the majority of coding on raw interview transcripts, while the senior co-authors (who have extensive experience in qualitative research) coded two raw transcripts each during two two-hour hybrid (in-person and screen-sharing-based) co-working sessions. To ensure coherence in the analysis, we carefully reviewed and refined the themes. This process involved merging certain themes into broader categories, separating other themes into distinct categories, and discarding themes that were not directly relevant to our core research questions. Ultimately, we determined the final themes corresponding to each research question. While we allowed flexibility in our coding and theme development, we employed thematic analysis [20] as our primary methodological approach, guided by prior research on social support in online dementia communities [59]. A random subset of 10 transcripts was selected for an initial round of open coding. We developed 400 codes in this first iteration, which were then discussed and refined by the entire team. The refined set of codes was applied and iterated on the remaining transcripts, from which we discovered an additional 451 codes, for a total of 851 codes. The codes were then grouped into 12 higher-level and 63 lower-level themes, which aligned with our three research questions.

# 3.4 Privacy, Ethics, and Reflexivity

Our study was approved by the Institutional Review Boards (IRBs) at the researchers’ institutions. Given the potentially sensitive nature of the study, we adopted several ethical and privacy considerations. To maintain confidentiality, each participant was assigned a unique participant ID to ensure anonymity. Throughout the interviews, person-first language was used, referring to the loved one as a “care recipient”, “your father”, or “your wife” rather than an “AD/ADRD patient” to avoid any potential discomfort associated with clinical terms.
During interviews, we monitored participants’ emotional wellbeing, paying attention to both verbal and nonverbal cues and checking in directly to confirm their comfort and willingness to proceed. When participants became emotional while recalling memories with their loved ones, we would pause the interview and ask them to take a short break or have some water as needed. Participants were also reminded of their right to discontinue the session at any time if they felt uncomfortable or no longer wished to continue. Our research team comprises researchers from diverse gender, racial, and cultural backgrounds, including people of color and immigrants, with interdisciplinary research expertise in the areas of HCI, CSCW, UbiComp, and Health Informatics. We have prior experience working on the topics of mental health and wellbeing, AD/ADRD, and online social support. Multiple authors have served as caregivers for aging family members, although not specifically for AD/ADRD conditions. While we have taken the utmost care to capture and faithfully synthesize participants’ viewpoints, we acknowledge that our perspectives as researchers and, in some cases, as caregivers may influence our interpretations. We remain committed to conveying participants’ experiences as authentically as possible and to highlighting the complexities of caregiving as voiced by those directly involved.

# 4 RQ1: Mental Wellbeing Needs and Concerns

We identified a number of themes associated with the mental wellbeing needs and concerns of AD/ADRD caregivers. We categorize these needs and concerns into two high-level themes of cause and effect—1) (Cause) factors leading to mental wellbeing concerns, and 2) (Effect) impacts on the mental wellbeing of caregivers. The categorization in terms of cause and effect is meant to facilitate a better understanding of the relationships among the themes, not to assert causality.
In this section, we first describe these themes (Section 4.1), followed by how the mental health concerns evolve over caregiving (Section 4.2). # 4.1 Mental Wellbeing Concerns of Caregivers The family caregivers of individuals with AD/ADRD experience a complex, evolving set of mental wellbeing concerns. These challenges stem from multiple, interrelated sources and manifest in a range of social, physical, and psychological effects that shift over time. 4.1.1 Intersecting Sources of Caregivers’ Distress. Participants consistently pointed to several overwhelming challenges, briefly described below: Financial Burden. Several participants expressed financial burden as a major concern impacting their mental health. P5, P6, P7, and P18 noted how it creates uncertainty about care continuity. With limited financial support systems, some, like P6, had to rely on personal savings. This instability added to caregivers’ anxiety, e.g., P18 feared their funds might not last the care-recipient’s lifetime. “I worry about having enough savings to last his life, he’s 84. His mom lived to 94, so conceivably, he’s healthy otherwise. He is physically fit, so he could conceivably live another ten years or more, and so I worry about his money lasting that long.” —P18 Disrupted Social Life. Caregiving often disrupts caregivers’ social life. Participants expressed losing touch with friends and having difficulty in participating in social activities. P1, P20, and P21 described struggles balancing caregiving with friendships; P1 explained “I can’t enjoy social interactions with my friend because I have a responsibility, and whenever I’m outside, I feel so anxious. I’m always thinking about the [care recipient].” This sense of isolation intensified feelings of loneliness and emotional strain. P3 also noted a reluctance to open up for fear of judgment, choosing instead to share anonymously on social media. “[..] I’ve sacrificed everything for [my mother]. 
I’ve stopped working, I lost my friends. I feel like giving up. I cry, I break down. I have no one to open up to, because I’m not comfortable sharing my problems with someone I know, because I feel like they may judge me [..] I can’t open up fully to someone I know. I’d rather go to social media and type what I’m going through using an anonymous account, and then maybe people will comment with legit and unbiased advice.”—P3 Time Constraints and Limited Personal/Self-care Time. Participants expressed a continual struggle with time management due to caregiving demands, which often leaves little time for self-care. Many felt overwhelmed by having to prioritize caregiving tasks, often at the expense of their own wellbeing. P4 described caregiving as a highly time-consuming job, explaining that factors such as commuting, unexpected incidents, and the care recipient’s declining verbal abilities made it increasingly difficult to engage in other activities. P7 and P15 emphasized that caregiving often requires attention 24/7, leaving no time for breaks. Some, like P19, even changed jobs to better accommodate caregiving responsibilities: “As a school Superintendent, that was really stressful. I was responsible for taking care of thousands of people, and I couldn’t do that full-time while also taking care of my mom. So I decided to resign and take a job with the university where I can work from home.” —P19 “I was working in a store prior to that, so I had to resign to come in to take care of him. And then over the years, as I first started out like maybe two days a week, if I increased two days a week, now I’m a living caregiver.” —P22 “I’ve stopped working, I lost my friends, like, I feel like I’ve sacrificed a lot for her.” —P3 In addition, P10 expressed concern about being in a “sandwich” generation between caregiving and parenting, leaving no opportunity for self-care.
Multiple participants expressed “guilt” about self-care; for example, P23 felt guilty when taking time off for themselves: “I feel guilty trying to leave her alone and do other things.” This lack of personal time resulted in increased stress and emotional fatigue, contributing to caregiver burnout. P13 expressed that the loss of travel had negatively impacted their mental wellbeing, with feelings of being stuck and having nothing to look forward to, and P24 described feeling like a prisoner in their home. “We’re feeling like prisoners in our own home because we’ve discovered now that we can’t leave [because of caregiving responsibilities].” —P24 Relationship Management and Tensions. Multiple caregivers expressed relationship tensions—whether from over-reliance or lack of support from family—as a key burden. Caregivers often faced imbalance when others did not share responsibilities, leading to stress and fractured relationships. P2 shared resentment toward an uninvolved sibling, while P18 worried about the toll on her marriage, saying her husband was supportive but felt “thrown into this.” “[I’m worried] because apart from taking care of my mom, I have other family members. I have a fiancee that I want to marry, so it has not been easy trying to balance taking care of my mom and taking care of my older and younger siblings, and my fiance. So I’ve been struggling to actually manage my relationships.” —P23 Overwhelming Caregiving Responsibilities. Participants described feeling overwhelmed by the intensity and growing demands of caregiving, especially as the care recipient’s condition worsened. Daily tasks included managing medications, finances, emotional support, and household responsibilities—often leaving caregivers exhausted. Many viewed caregiving not as a set of tasks but as a deep obligation (P1) or long-term commitment (P15).
For example, P25 shared that they would have to pick up each declining ability of the care recipient: “So every time that he would lose an ability, I would pick it up. And so over time, that just gets more and more and more big because again, like for a three year old.” —P25 However, some, like P18, shared that responsibilities would not decrease despite using professional assistance or memory care: “[It’s] stressful having to basically supervise what these people are doing and they just don’t have enough help in these facilities.” —P18 Some participants also spoke about how the “unpredictability” of their situations added to the emotional toll. They needed to constantly adjust their routines to the care recipient’s fluctuating condition, making it hard to plan and contributing to stress and anxiety: “One of the challenges I face is unpredictability which arises because the condition can actually change and change from day to day. So I’m always on high alert, and I’m always worried about their safety. And that can be quite exhausting.” —P20 Insufficient and Inefficient Support Systems. Participants expressed the inadequacy of external support, whether from healthcare systems, community resources, government, or family. In particular, P2 emphasized that, while financial and healthcare benefits exist for patients, there is no dedicated support system tailored to caregivers. P4 noted that they use a plan that provides weekly support and resources, but had limited overall assistance and bore the burden of paying for therapy out of pocket. Further, governmental support avenues for care recipients were deemed inefficient: “My dad is a veteran, and we’ve been trying to get him resources through the Veterans Administration, which has been horrible. It is so hard to speak directly with someone, and resources have experienced roadblock after roadblock after roadblock.” —P18 4.1.2 Mental Health Impacts on Caregivers.
The psychological toll of the above stressors manifested in multiple ways, as listed below: Hopelessness about the Future. Many participants expressed anxiety and hopelessness about the future, both in terms of their ability to cope and the inevitable decline of the care recipient. Some, like P10, P11, and P20, worried about who would care for their loved one if their own health failed. P20 shared feeling overwhelmed after experiencing depression for the first time. Others, like P4, described experiencing “anticipatory grief”—a common theme in support groups—as they come to terms with the care recipient’s ongoing decline and eventual passing. “The reality is that this is the only disease without a cure, and this disease is fatal, so we’re dealing a lot with anticipatory grief, which is what we talk about a lot. [..] And many times, people are not going to support groups until they’re like, literally in tears or at wits’ end. I see anger and frustration from men, and I see tears and physical demise from women.” —P4 Fatigue, Strain, and Burnout. Participants described fatigue and physical strain from the constant demands of caregiving. P1 noted “the emotional toll of seeing a loved one suffer,” and P17 highlighted both emotional and physical exhaustion. Some participants also reported that the constant demands of caregiving led to a lack of sleep and chronic sleep deprivation. Many reported experiencing burnout, often realizing only in hindsight how deeply caregiving had affected them. “I feel increasing [caregiving] demands lead to physical and emotional burden, affecting my mental health. I became more concerned about burnout and maintaining my wellbeing.”—P21 Emotional Upheavals and Compassion Fatigue. The overwhelming nature of caregiving also leads to frequent emotional upheavals.
These emotions are often triggered by the care recipients’ health decline, unpredictable behaviors, and the strain of balancing caregiving and other responsibilities: “It hasn’t been easy for me because I get very anxious [..] prepare for the worst [..] there are some days that I really, really hope for the best. So it has been like a rollercoaster of emotions.”—P22 Prior work notes compassion fatigue among caregivers of chronic conditions—compassion fatigue occurs when the caregiver’s ability to empathize with the care recipient is reduced as a result of repeated exposure to their suffering [33]. Similarly, compassion fatigue emerged as a major theme in our caregivers’ experiences. For instance, P10 explained they were often drained from caregiving, and would even lose their temper when the care recipient would turn aggressive: “I think I did a good job of reminding myself that when my mom was like, really bad, that it’s not her, it’s the disease, and it’s her brain not working right. It’s not her choice. But sometimes she was just [****]! And so I lost my temper.”—P10 Self-Reflective Positive Impacts. In addition to several negative psychological impacts of caregiving, some participants also reflected on certain positives they drew out of caregiving demands. For example, P1 observed personal growth, noting they had become more mature over time. P11 and P18 shared that navigating daily caregiving challenges helped them develop self-efficacy, as they recognized their growing competence in caring for others. Similarly, P6 reflected on becoming more compassionate, detail-oriented, and gaining a deeper understanding of others: “One thing I’ve learned as a caregiver that impacted my mental health a lot is that now I seek first to understand before being understood. I don’t wanna impose my decisions on people.
I want to know why they do the things they do, why they say the things they say before I respond.”—P6 The above reflections suggest that, for some caregivers, the intense demands of caregiving also fostered a sense of purpose, personal growth, and emotional depth—underscoring the complex, dual nature of the caregiving journey. Overall, the caregiving-related concerns identified in prior research served as useful starting points for our conversations with participants (Table 3). These prompts helped guide the participants in recalling and narrating personal experiences that aligned with known themes, such as emotional distress, burnout, and lack of support. These concerns were strongly echoed in the participant narratives. Many participants described feeling overwhelmed by the constant demands of caregiving, often at the expense of their own wellbeing. In addition, the concerns surfaced in our study ultimately represented an assimilation of themes that emerged organically across diverse participant narratives. For example, discussions around financial burden and its impact on mental health, as well as the erosion of personal and social life while caregiving for individuals with AD/ADRD, arose naturally through these conversations. Additionally, issues such as relationship strain, guilt surrounding self-care, and limited family support emerged as significant concerns—ones that were not always directly framed as mental health issues in prior work. Notably, participants also reflected on the self-reflective positive aspects of caregiving—such as personal growth and emotional resilience—that extended beyond the more problem-oriented concerns highlighted in prior work.

# 4.2 The Evolution of Mental Health of Caregivers

Given that the manifestation of AD/ADRD can vary from individual to individual, the caregiving experiences also differ. However, based on our interviews, we discovered some high-level patterns that we have distilled into three stages to better understand key challenges and explore the potential of technology to address them. We characterize the evolution of caregivers’ mental health into three stages—1) Initial Adaptation, 2) Emotional Disconnection and Intensified Strain, and 3) Emotional Exhaustion and Burnout. Fig. 1 provides an overview of the different stages and their associated key stressors and coping strategies. Stressors and coping strategies highlighted in the figure are assigned to the stage where they most commonly appear.

Fig. 1. Overview of the three stages of caregivers’ mental health. Early (Initial Adaptations): initial decline with partial independence for care recipients; stressors include stress from role adjustment, anxiety over new caregiving responsibilities, and questions about self-efficacy; coping through adapting daily routines, information seeking, and managing caregiving tasks. Middle (Emotional Disconnection & Intensified Strain): emotional impact of witnessing the care recipient’s memory loss and cognitive decline; stressors include anticipatory grief, loneliness, hopelessness, powerlessness, disrupted social life, and compassion fatigue; coping through connecting with other caregivers facing similar experiences. Late (Emotional Exhaustion & Burnout): intense 24/7 caregiving demands leading to peak physical and emotional exhaustion; stressors include burnout and little sleep, lack of personal life, heightened strains from relationship tensions, and increased time constraints and guilt over personal time; coping through placing the care recipient in care facilities and hiring professionals to regain balance in personal life.

Early Stage: Initial Adaptations, Shock, and Uncertainty about Future.
The early stage of caregiving is characterized by a process of adaptation as individuals adjust to the new and often overwhelming responsibilities of caring for someone with AD/ADRD. Caregivers reported often underestimating the emotional impact of caregiving. “When I became a caregiver, I didn’t realize how much time and energy would be consumed by caregiving. I was in denial about how much this would impact me emotionally.”—P21 This phase typically coincides with the care recipient’s initial decline in health, when care recipients can still manage many activities independently. Following the diagnosis of the care recipients, caregivers are likely to experience intense stress, panic, and anxiety, driven by uncertainty about the future and the evolving demands of the role. Some caregivers reported “role shock”—feeling unprepared and emotionally unsteady as they stepped into caregiving. During this time, caregivers may struggle with self-efficacy, doubting their ability to provide effective care, which can further contribute to feelings of depression and anxiety: “In the beginning, I was constantly feeling depressed while seeing them. [..] That first year was especially difficult. I felt overwhelmed and nearly always in a state of depression.”—P15 The initial shock and the addition of new responsibilities were often overwhelming, leaving no time for the caregiver to reflect on their own health and needs, as expressed by P2: “I didn’t really know how to manage patients with Alzheimer’s. I didn’t take breaks [..] I think when you’re in the middle of it, you just adapt and don’t really think about it that much.”—P2 Middle Stage: Emotional Disconnection and Intensified Strain. Even after the initial shock of the diagnosis and responsibilities subsides, caregivers typically do not get a long break. As the care recipient’s condition worsens, caregivers face growing emotional burdens, feelings of isolation, and a need for support and recognition.
In the mid-to-late stages of caregiving, recurring episodes of the care recipient’s decline or emotional instability can lead caregivers to experience intense moments of emotional breakdown (P3, P21). They can experience deep emotional loss, as they feel they are losing the person they once knew, and can often feel grief, loneliness, and hopelessness. “As I formed deeper emotional bonds with them, the mental toll became more evident.”—P17 The participants described feelings of “anticipatory grief” and “ambiguous loss” as they came to realize that the care recipient’s condition would not improve and that the person they once knew was no longer the same. Multiple participants expressed feeling powerless that, despite their efforts, they are unable to protect their loved one from the disease’s progression, which further deteriorates their mental wellbeing. At this stage, caregivers need emotional support. Connecting with others going through similar experiences can provide comfort and validation: “I started to like experience, feelings of burnout and stress. I found that I needed more emotional support, not just from my colleagues, but also from therapy or peer groups.”—P17 This is also the time when caregivers have been in their new role for a while and their new responsibilities start to put a strain on their social life. Social stigma around the condition also made it challenging for some caregivers to maintain a healthy social life. “Now, I have no one to open up to. I’m not comfortable sharing my problems with someone I know. I feel they may judge me—or treat my mom differently when they visit our home.”—P3 Some participants also expressed frustration about the lack of dedicated professional assistance for dementia caregivers, in terms of the lack of relatable experience or empathy, e.g.: “I think the person that I was talking to just didn’t quite have that much experience in helping people . . .
it’s kind of hard to relate to that person.”—P2 Between the middle and late stages, caregivers often realize that they are in a marathon and not a sprint. This is when they start reaching out for support through online or in-person communities. “I use technologies on social online platforms like Reddit and also various apps to manage my mental health because they offer support and resources that fit into my busy schedule.”—P17 Late Stage: Emotional Exhaustion and Burnout. Over the significant progression of time and the decline of the care recipient’s condition, caregivers adapt to their caregiving duties almost like a full-time job. However, by this point, they are also overwhelmed with providing 24/7 care. This leaves them with no personal time to address their own needs. This late stage represents the peak of both physical and emotional exhaustion, with burnout becoming a frequent concern. Participants also described how they get little or no sleep; as P11 described, they would almost “sleep with one eye open.” P10 described the demands of managing every aspect of the care recipient’s life, including handling bills and estate matters, coordinating constant medical appointments, and providing hands-on care and support. This stage also coincides with heightened strains from relationship tensions, such as unmet expectations about sharing caregiving responsibilities among family members or concerns about the impact of caregiving demands on other relationships (e.g., spouse and children), such as: “I had a sibling that didn’t do anything to help which caused a lot of resentment. My dad never really asked if I had other commitments—he wasn’t open to bringing in outside help, so the responsibility fell entirely on me.
That lack of family support was probably the biggest factor that was detrimental to my mental health at that time.”—P2 The all-encompassing nature of caregiving at this stage means caregivers can reach a critical point where they make difficult decisions regarding the care recipient, such as placing them in memory care facilities or hiring professionals to regain some balance in their personal lives: “And then we made the decision to put her into memory care. And so memory care has eased a lot of the burden on me and my family [..] now that she’s in memory care now it’s just about more of playing an advocacy role.”—P19 Experience/Journey of Serial Caregivers. A key consideration to note about this three-stage process is that it primarily represents the typical experience of a first-time caregiver. For people who are serial caregivers (i.e., they have cared for multiple family members with AD/ADRD), such as P2, P16, P17, P21, and P22 in our study, the experience changes over iterations. That said, the rate at which technology is evolving also affects these experiences. For instance, P2 mentioned that having smart home technologies earlier would have had a significant impact when they were caregiving two decades ago. “I think back, if I had had the technology that I have now, things would have been a lot easier. I have pretty much automated my house with cameras and front door locking and unlocking abilities. So I think now it would be a million times easier using technology to manage dealing with my mother, especially.”—P2 Similarly, P17 described how their caregiving approach shifted over time from mastering daily routines to becoming more emotionally attuned and better at setting boundaries. They overcame feelings of guilt about self-care over time: “Early on when I first started, I used to feel guilty about taking time for myself, but now I understand that self-care is essential for long-term experiences [..]
my experience has shifted from task-focused to a more balanced approach.”—P17 These reflections reveal that repeated caregiving experiences can help caregivers grow adaptively, in terms of developing confidence, refining coping strategies, and becoming more intentional about balancing caregiving and self-care. Early caregiving experiences were often marked by stress and self-doubt, and were focused on getting through daily responsibilities. However, in later iterations, caregivers developed a more holistic perspective, applying emotional insight, prior knowledge, and strategic use of resources to improve both their care recipients’ and their own wellbeing. A key characteristic of a mature caregiver was their development of resilience: “On the positive side, I’ve become more obviously more resilient, some better at managing stress [..] I’ve also learned coping mechanism like mindfulness and certain emotional boundaries that have also helped me stay grounded.”—P17 “But now it has become much more demanding as they have lost more independence over time. I’ve learned to adapt and I’ve learned to become more patient.”—P21 In this way, serial caregivers not only refined their practical skills but also redefined their caregiving identity, from overwhelmed responders to more empowered and balanced care partners. # 5 RQ2: Practices to Address Mental Wellbeing Needs As caregiving responsibilities and mental health challenges evolve over time, caregivers adopt a range of practices to manage their wellbeing. These practices are often ad-hoc, shaped by immediate needs and lived experiences, but can also develop into consistent routines over time. In addition, they are not static; rather, they shift in response to the changing emotional, physical, and relational demands of caregiving. Table 4 provides an overview mapping mental wellbeing practices to the different stages of caregiving.
In the early stage, caregivers navigate the initial shock and uncertainty of their new responsibilities, relying on institutional resources, physical activities, and online resources to gather information and regain control. As emotional disconnection and intensified strain emerge in the middle stage, caregivers expand their coping strategies by seeking professional support, engaging in personal hobbies, participating in online support communities, setting firmer boundaries in daily life, and increasingly relying on family members for emotional and logistical assistance. In the late stage, marked by exhaustion and burnout, caregivers emphasize the importance of structured mental health interventions, stronger boundary-setting, and leveraging support from family and close friends. This temporal framing highlights the adaptive nature of wellbeing practices and underscores the need for flexible, stage-sensitive support systems that align with caregivers’ evolving emotional and practical challenges. In particular, we found two major themes of practices—1) seeking external support and 2) adopting self-care practices. Table 4. Understanding the Evolution of Caregiver Mental Health through Wellbeing Practices # 5.1 Caregivers Seeking External Support Caregivers emphasized the importance of seeking various forms of external support, including assistance from family members, professionals, and institutions such as employers, community resources, and governmental programs. Family Support. Given that our study centers around family caregivers, several participants emphasized the importance of family support in managing their wellbeing. Support from family not only eases the caregiving burden but also offers vital emotional reinforcement. P24 shared that their brother and wife occasionally step in to share caregiving duties, providing them with much-needed breaks. “[..] my brother comes [for caregiving], and we then can leave [..]
My wife is very helpful, and she’s able to be a little more [for caregiving tasks]”—P24 Social Support. Likewise, participants described that maintaining a social life helps caregivers stay connected with the world beyond their caregiving responsibilities. For example, P11 noted, “I have strong support from friends and neighbors, which helped immensely.” Similarly, P1, P21, P20, and P12 emphasized the importance of talking with friends, expressing that maintaining connections helps them process emotions and stay committed to their mental health journey. For example: “Sometimes I just need to talk with my friends. It helps me feel grounded and not alone.”—P1 Professional Support. Several participants highlighted seeking professional mental health support such as therapy and counseling. For example, P17 mentioned that seeking therapy helped them control anxiety and improve the quality of their caregiving. In fact, P2 advocated for one-on-one therapy sessions rather than group therapy sessions, and P21 advocated for in-person therapies, arguing that they tend to be more personal and connected: “In-person therapy: feels more personal and connected, physical presence creates a stronger sense of trust, applauds the ability to share in a supportive, face-to-face setting.”—P21 Institutional Support. Institutional support from community, society, employers, and governmental agencies plays a critical role in helping caregivers manage the emotional and financial burdens of caregiving. P10 emphasized the crucial role of employer support, particularly flexible work arrangements, in managing caregiving responsibilities.
Further, P10 mentioned a state program that levies taxes and provides stipends to help ease the financial burdens of caregiving: “There was a tax levied on all the residents of Connecticut several years ago, and now, starting in 2022, we can apply for a stipend based on our weekly or monthly income, which has been really helpful.”—P10 # 5.2 Caregivers Adopting Self-Care Practices Caregivers adopt a variety of self-care practices to maintain their wellbeing, including planning ahead, staying organized, exercising, engaging in hobbies, maintaining social connections, and setting boundaries. These practices help manage stress, preserve identity, and prevent burnout. We further describe these themes below. Planning Ahead and Being Organized. Caregivers often find that staying organized and planning ahead reduces stress and promotes a sense of control in their caregiving responsibilities. P6 mentioned that they use virtual personal assistants for calendars and reminders for time management, which help them to structure tasks, prevent overworking, and avoid mental strain: “[Virtual personal assistant will schedule task] in the calendar and I just get the reminders”—P6 Physical Activities and Breaks. A majority of our participants highlighted the need for physical activity—ranging from simple outdoor walks (P1, P2), yoga (P4, P18), stretching and relaxation (P21), and tai chi (P9) to other exercise regimens (P15, P19). For example, P1 finds being outdoors therapeutic, as it allows them to connect with nature and experience a sense of peace. “And I usually like being outdoors because I believe that when you’re outside, you engage in the environment. You see beautiful things, beautiful nature, and then you can feel that. You can feel the, you know, the love of nature.”—P1 Hobbies. Multiple participants also brought up the importance of pursuing hobbies to help shift their focus away from caregiving tasks and stress.
For instance, participants found comfort in skincare (P21, P18), pet care (P25), journaling (P2, P21), reading (P18, P1, P21), and meditation (P5, P18). These hobbies provided a mental escape, fostering joy, self-expression, and relaxation—key to sustaining emotional health and preventing burnout. Setting Boundaries Between Caregiving and Personal/Social Lives. Participants stressed the importance of setting boundaries between caregiving duties and personal or social life. This included creating separate spaces (e.g., sleep arrangements, P9) and emotionally distancing during social interactions (P4, P17). As P17 shared: “[..] I’ve also become better at setting my own boundaries and prioritizing my own way of being early on. When I first started, I used to feel guilty about taking time for myself, but now I understand that I’m still essential for long term caregiving. So overall my experience has shifted from being task-focused to a more balanced approach.”—P17 # 6 RQ3: Technology to Support Caregivers’ Mental Wellbeing Participants voiced the role of technology in both caregiving and their mental wellbeing. Technology serves as a multifaceted tool, offering practical help for mental and emotional support, as well as for managing caregiving tasks. It enables caregivers to navigate complex needs while simultaneously focusing on self-care and work balance. # 6.1 Need/Use of Technology Participants highlighted the need for technologies surrounding 1) caregiving responsibilities, 2) informational and learning resources, 3) communication and social connection, and 4) mental health and emotional support. Supporting caregiving responsibilities. Participants brought up aspects where they directly use different technologies to support their caregiving tasks. These included cameras, smart locks, and tracking devices (e.g., AirTags) to monitor care recipients remotely, ensuring safety and easing emotional stress (P2, P13, P18, and P22).
These tools helped manage risks such as wandering and accidents, especially when caregivers could not be physically present. For medication management and reminders, participants used smartphone reminders, Alexa, or automated dispensers like Hero to prevent missed doses and reduce caregiver stress (P12). “As she got into mild cognitive decline, I placed AirTags strategically in her purse, so I know where she is if we’re apart.”—P13 Information and Learning Resources. Several participants mentioned how they often use a variety of technologies to get access to informational resources. They listened to podcasts (P1, P15, P22), watched YouTube videos (e.g., Dr. Natali Edmonds) (P13), and consulted sources like the Alzheimer’s Association website and Wikipedia (P5, P13). Online communities such as Reddit and Yahoo Groups provided peer advice and information on treatments (P2, P24). Some preferred short videos (P13), while others found AI tools like ChatGPT helpful for getting comprehensive, personalized answers (P20). “Unlike Reddit or Facebook, ChatGPT can help cover all my answers in one direction. I can even ask it for advice, and it even provides helpful suggestions on what I should do.”—P20 Communication and Social Connections. Participants used texts, emails, and video calls to stay connected with others and communicate with care recipients remotely. For example, P10 found Alexa Echo Show helpful due to its ease of use and support for lip-reading. Online communities also played a key role in reducing isolation and fostering a sense of belonging (P12, P13), especially when in-person socializing was limited. Mental Health and Emotional Support. 
To begin with, participants frequently mentioned how they would often receive emotional support from other community members in online communities—P11 mentioned that the Alzheimer’s and Dementia Reddit communities were “unusually kind for the Internet, being very much mutually supportive and kind.” In addition, several participants use mental wellbeing apps, such as for meditation and relaxation (P11, P18, and P22), and for tracking wellbeing measures (P20 and P23). P1 uses the app Headspace for mental health self-care, and also uses it for personal journaling and emotional assistance. Participants also continually track their wellbeing through wearables and smart devices, such as the Oura Ring (P10) and smartwatches (P2, P7, P9). Tracking apps help caregivers monitor their own mental and physical health. These tools not only promote self-awareness but also encourage caregivers to prioritize their mental wellbeing, such as: “Even before Apple Watch, I had a Fitbit, and I counted steps on that every day and looked at it to see what I’d done. It was motivating for me. These apps are motivating for me.”—P9 Among more recent technologies, P14 mentioned that they use virtual reality (VR) based therapy and AI-based tools for stress relief. P2 noted that AI chatbots bear the potential to provide the emotional support and validation that caregivers often need. # 6.2 Perceptions about Technologies We now examine how our participants expressed varied perceptions of technology, which we categorize into—1) techno-optimism and 2) techno-skepticism. 6.2.1 Techno-Optimism. Given how technologies have been integral to their caregiving and mental wellbeing, several caregivers expressed optimism about new and emerging technologies. As already noted, a majority of our participants were very positive about online communities. P18 highlighted that virtual therapy sessions help them seek mental health services despite time constraints.
In addition, speaking about other technologies such as apps, P1 noted that these function as “another friend” for a caregiver, and P15 particularly appreciated their 24/7 availability, allowing access to resources whenever needed. Likewise, the timeliness and immediacy of responses that AI chatbots can offer were exciting to caregivers—for example, P20 described chatbots as “faster, convenient, detailed, versatile, and easy to access.” Relatedly, P15 shared that, one night, their grandfather had an emergency, and it was impossible for them to get professional help at that time. They sought an immediate response from web search and an AI chatbot for guidance: “It was around 3:00 AM; I was worried about what was happening with my grandfather. You can’t call the doctor at that time, so I just googled and used an AI-based tool. I explained the symptoms, and the AI advised me to keep him active. We were able to resolve the situation. The key thing about [AI] is its availability—professionals are not always available.”—P15 6.2.2 Techno-Skepticism. Despite the positive outlook highlighted above, caregivers also expressed varying degrees of skepticism about technology in caregiving and mental health contexts. P17 noted that despite the notable usefulness of technologies, it is important to approach them with caution, prioritizing reliability and privacy protection. Several participants advocated for real human interactions over interactions with an AI, with P24 labeling an AI chatbot a “data-center.” P20 added that although Reddit is usually helpful, it does not enable face-to-face communication. Particularly regarding emerging AI chatbots, a common concern was their reliability and trustworthiness in providing medication advice. P2 emphasized the need for credible sources in responses by AI chatbots. Additionally, regarding emotional support, some caregivers expressed that AI lacks the ability to understand complex human emotions and medical information: Table 5.
Challenges and proposed improvements for technologies to support mental wellbeing of caregivers “I’m concerned about something like ChatGPT, how it would understand my emotions and provide accurate advice based on my feelings.”—P12 # 6.3 Challenges and Proposed Improvements Finally, we examine the technology-related challenges faced by the participants and the improvement recommendations that emerged from our interviews. Table 5 provides an overview mapping the challenges to proposed improvements, and we elaborate on this below. 6.3.1 Financial Barriers. Many caregivers reported financial barriers when using caregiving technologies, particularly subscription fees for some apps. For example, P15 mentioned that cost was a significant obstacle preventing them from fully utilizing the technology. Similarly, P18 discontinued using a sleep-improvement app after its free trial period ended. Consequently, caregivers suggested more affordable pricing models, such as sliding-scale subscriptions or free basic versions of apps, enabling broader access across various income levels and reducing financial strain on caregivers. 6.3.2 Technical Complexities. Technical complexity emerged as a major issue for caregivers, particularly those who may not be tech-savvy. P2 noted that there was no customer support to help them navigate the features of apps. Interestingly, P9 noted that apps are becoming increasingly complex with added security features, making it hard to keep up, especially at their age: “[Technologies are] getting so complicated for me. [..] Because of all the Internet scams, the security levels are going up everywhere, and this is very challenging to keep up with.”—P9 To overcome the technical barriers, caregivers emphasized the need for more user-friendly interfaces with simplified navigation that reduces the time spent figuring out how to use tools.
For instance, P4 suggested that apps should avoid excessive notifications since caregivers are already overwhelmed by caregiving tasks. P11 added that an efficient interface is important, especially when searching for specific caregiver experiences or advice. Importantly, caregivers—already pressed for time—need technology that is more accessible, especially as many are older adults with limited digital literacy. By minimizing complexity and streamlining key features, caregiving technology can become more accessible and useful for a wider range of caregivers. “[..] so they should make the app easy to use, not complex, which makes the user interface very good, so the user is able to do what they can do at the right time.”—P20 6.3.3 Data Security and Privacy Concerns. Notably, multiple participants expressed that they are not concerned about data security and privacy on these technologies. Their lack of concern stemmed from two reasons: (1) they avoid posting personal or sensitive information online when seeking help, and (2) they are already de-sensitized to the pervasiveness of data tracking, as highlighted by P18’s comment, “this is the internet, and everyone’s tracking everything.” However, some participants expressed concerns—P20 was concerned about the uncertainty of how their data on the Internet might be used, and P12 was concerned about privacy and security on caregiving apps, emphasizing the need for safe data storage to prevent leaks or exposure on social media. 6.3.4 Reliability and Credibility of Information. Caregivers expressed concerns about the accuracy and credibility of information from the internet or AI, especially when using unverified sources. These worries relate to health and safety, as caregivers need reliable support for decision-making. P20 noted that online searches can sometimes yield outdated or inaccurate guidance, while P2 and P3 emphasized the importance of trustworthy sources for effective caregiving.
P17 added, “I’d like a feature that allows you to speak with a professional.” To address these concerns, caregivers proposed that caregiving technologies should prioritize trustworthiness by incorporating information that is verified by professionals. Providing caregivers with transparency regarding where the information comes from, along with the verification process, would further build trust. In doing so, caregivers could rely on technology for decision-making, knowing that it has been informed by credible sources. 6.3.5 Incorporating Human Interaction. A key challenge that caregivers face when using automated technology is the lack of human interaction, which is crucial in providing emotional support. For example, P12 and P14 sought the ability to communicate with healthcare professionals through the technology, rather than just relying on automated responses or pre-programmed features. P8 emphasized the need for a human interaction interface to help navigate the emotional and complex decisions that arise during caregiving. P1 sought more spaces where caregivers could interact with each other. Without these interactive, human-centric features, caregivers may feel unsupported when they need advice or validation, particularly in high-stress or crisis moments: “Emotions are something that requires human interaction than technology to deal with”–P8 To address this concern, caregivers suggested integrating more human interaction elements into caregiving technologies. For instance, P14 mentioned that the VR therapy also offers a space to talk to an expert anonymously. P18 sought a platform where they can meet with other caregivers in a non-anonymous fashion (unlike Reddit): “[..] So that would be nice if you had an app where you know where everybody is and not just anonymously. Not like Reddit, it’s all anonymous. It would be nice to have in-person or even FaceTime meetings or whatever with people who are just like me. ”—P18 6.3.6 Lack of Personalization.
Several caregivers felt that existing technologies offer generic help that fails to cater to their specific contexts and unique needs. For instance, P5 mentioned that caregivers’ needs are unique, and there is no one-size-fits-all solution. The participants proposed personalization in caregiving technology. They suggested that apps could provide personalized medical care, including tailored medical recommendations (P14). P5 desired a trusted, personalized source of information and support that also emphasizes shared experiences and individual expressions over generic solutions. # 7 Discussion In this study, we conducted semi-structured interviews with 25 family caregivers of individuals with AD/ADRD, and adopted inductive coding and thematic analyses [20] to identify the 1) major mental health concerns of caregivers (RQ1), 2) practices and coping strategies employed by caregivers (RQ2), and 3) technologies used by caregivers in their daily caregiving and to support their mental wellbeing (RQ3). At a high level, the interviews reinforced the motivation of our study and confirmed a critical gap in the current healthcare infrastructure and support system—caregivers’ needs are largely unaddressed, with few systematic mechanisms tailored to prioritize their mental health. In particular, AD/ADRD predominantly affects elderly individuals, whose caregiving responsibilities typically fall to one of two age groups—1) middle-aged adults caught in the “sandwich generation,” simultaneously needing to care for aging parents while parenting their own children, or 2) aging spouses who face their own health challenges—“I don’t want to die before my husband”—as P9 emotionally reflected, voicing a deep concern for their future. In this section, we discuss the implications of our work. # 7.1 Theoretical Implications In this section, we discuss how our findings are situated within existing theoretical frameworks.
In particular, we contextualize our work with two relevant theoretical lenses—the Social Support Behavioral Code (SSBC) [136] and the Ethics of Care [49, 142]. 7.1.1 Caregivers need and seek social support. The need for social support emerged as a major theme in protecting the wellbeing of caregivers, and underscores the necessity for sociotechnical systems that support their needs. We situate these observations within the Social Support Behavioral Code (SSBC) [136], which categorizes support into five types—informational, emotional, esteem, tangible, and social network support. Our findings demonstrate that caregivers require and seek all of these forms of support to cope with their mental wellbeing concerns. Informational Support. Our participants emphasized the need for reliable advice, information, and suggestions to enhance their caregiving practices and manage their mental health. They frequently turned to podcasts, YouTube channels, and reputable health websites. Prior work in this space by Wong et al. [153] similarly noted that caregivers value technologies like Voice Interface Personal Assistants (VIPAs) that provide informative content to support older adults’ mental health. Emotional Support. Emotional burden was a recurring theme, with participants reporting anxiety, depression, and emotional exhaustion. Emotional support is highly contextual; the type of support an AD/ADRD caregiver seeks may differ significantly from that of others. Caregivers sought emotional support through professional counseling, therapy, and maintaining social connections with friends and family. This need for emotional solace is echoed in Siddiqui et al. [131], where caregivers of people with serious mental illness benefited from peer support groups that offered empathy and understanding, and in Kim et al. [66], which highlighted the importance of emotional support for informal dementia caregivers dealing with verbal agitation. Esteem Support.
Our findings revealed that caregivers often felt unappreciated and overwhelmed, leading to diminished self-worth. The lack of positive feedback or acknowledgment of their caregiving efforts contributed to feelings of “thanklessness.” Participants noted they felt much better when their loved ones appreciated their efforts. Similarly, in social interactions, positive messages that reinforce their esteem can help bolster a caregiver’s self-worth and resilience. Participants also reflected on positive outcomes of their caregiving journey, in terms of personal growth and enhanced self-worth—this aligns with Meyerhoff et al. [82], which revealed how recognizing caregivers’ efforts can enhance their sense of value and reduce feelings of isolation. Tangible Support. Physical assistance and the provision of goods or services were vital in alleviating caregivers’ burdens. Some participants relied on government programs for financial aid and on family members to share caregiving tasks. However, several reported that this support was insufficient and expensive, leading to heightened stress and financial strain. This observation aligns with Siddiqui et al. [131], which found that inadequate healthcare support systems place an extreme burden on caregivers, emphasizing the need for more robust tangible support mechanisms. Social Network Support. Caregivers expressed a strong desire to feel connected to a community of individuals facing similar challenges—both offline and online. Online platforms, particularly Reddit, provided a sense of belonging and facilitated the exchange of experiences and advice [16, 59, 75]. This helped mitigate feelings of isolation and fostered emotional resilience. This aligns with a large body of prior work in social computing and social support highlighting the therapeutic effects of self-disclosure and social support in online communities [8, 35, 42, 65, 117]; more specifically, Foong et al.
[43] emphasized the importance of social connections in supporting caregivers’ mental health, suggesting that community engagement can be a valuable resource. 7.1.2 Situating with the Ethics of Care. We also situate our findings within the Ethics of Care framework [49, 142]. This framework emphasizes the moral significance of relationships and dependencies in human life, highlighting the importance of caring practices and the wellbeing of both caregivers and care receivers. Our findings reflect key principles of this framework: Relational Tensions and Emotional Labor. Caregivers face significant challenges in managing family relationships and tensions, often feeling overwhelmed by caregiving responsibilities and a lack of family support. The emotional labor involved in caregiving led to additional strain. This supports prior work [82] and aligns with the Ethics of Care’s emphasis on the complexities of care relationships and the moral importance of attending to these relational dynamics [49, 142]. Dependency and Vulnerability. The progressive decline of care recipients’ health heightened caregivers’ emotional stress, especially when care recipients lost memory or recognition of the caregiver. Caregivers experienced anticipatory grief and a sense of impending loss. The Ethics of Care framework [49, 142] highlights the moral significance of responding to dependency and vulnerability, advocating for support systems that address these challenges. Maintaining Caregivers’ Wellbeing. The Ethics of Care calls for promoting the wellbeing of caregivers within a network of social relations. Our findings show that caregivers’ own wellbeing was compromised due to financial strain, social isolation, and lack of self-care.
Technologies and support systems need to consider the caregivers’ needs, not just those of the care recipients—aligning with [66]’s emphasis on designing technologies that support informal caregivers’ mental health during unpredictable verbal agitation from people with dementia. # 7.2 Societal and Policy Implications Our study bears implications for societal structures and policy-making, particularly in relation to financial, social, and mental health support for AD/ADRD caregivers. These insights highlight both systemic gaps and opportunities for multi-level interventions that prioritize caregiver wellbeing. We preface this discussion by acknowledging that our study was conducted in the U.S., and as such, the topics and responses reflect the socio-economic contexts of the region. 7.2.1 Financial Support as a Foundational Need. One of the most pressing mental health concerns reported by caregivers was the significant financial strain associated with long-term caregiving responsibilities. Policies that provide financial assistance, such as stipends or tax credits, can alleviate economic burdens and reduce anxiety related to the sustainability of care [66, 164]. These recommendations align with the World Health Organization (WHO)’s Global Action Plan on Dementia, which calls for financial support systems for caregivers [154], and Alzheimer Europe’s position paper advocating for economic relief through stipends and tax credits [1]. Interestingly, multiple participants highlighted that, although institutional and healthcare support exists for patients, there is a severe lack of similar resources dedicated specifically to caregivers—highlighting a gap between policy recommendations and implementation that is also noted in the U.S. National Plan to Address Alzheimer’s Disease [93]. The 2018 RAISE (Recognize, Assist, Include, Support, and Engage) Family Caregivers Act is a step in this direction (U.S. Administration for Community Living [145]).
Our findings point to a need for future work that not only advocates for financial policy change but also builds systems that help caregivers navigate financial aid, manage caregiving-related expenses, or coordinate financial planning across family networks. In particular, policies need to lower the barriers to accessing the latest advances in technology and medicine—from speeding up regulatory approval for new products and practices to negotiating better coverage through insurance. 7.2.2 Combating Social Isolation Through Community-Based Interventions. Our findings reveal that caregivers experience social isolation and disrupted social lives, underscoring a need for community-based interventions. These interventions can build on and extend traditional support groups to create meaningful social connections and practical support networks. Local community centers, faith-based organizations, and neighborhood support programs could offer respite care services, organize social activities that accommodate caregivers’ scheduling constraints, and facilitate peer-to-peer connections among families facing similar challenges [30]. In particular, one participant (P2) shared that their church barred their partner from attending services after the progression of AD/ADRD—significantly worsening the social lives of both of them. This reinforces Ruitenburg et al. [114]’s call for policy-level interventions aimed at reducing the stigma associated with dementia-related communication challenges and promoting more inclusive social environments. They suggest that current anti-stigma efforts may not sufficiently engage influential social settings such as faith-based institutions. This is a particularly critical problem to address given that the church is the most common third space in many parts of the U.S., especially in rural regions, and being ostracized from one’s church can be akin to being ostracized from the entire local community.
To address this gap, public awareness campaigns can de-stigmatize caregiving challenges and promote community support [30, 66]. While Alzheimer Europe's anti-stigma initiatives offer a foundation [1], our participants' experiences indicate that current approaches may not be sufficiently reaching faith-based and community organizations. Therefore, our study recommends more localized, context-sensitive engagement strategies—especially within community organizations that caregivers frequently interact with. This extends Nunes et al. [91]'s recommendations by emphasizing social, not just clinical, pathways to support.

7.2.3 Improving Access to Mental Health Support for Caregivers. In addition to financial and social challenges, our findings point to two critical systemic gaps that impact caregiver wellbeing: 1) the limited availability of caregiver-specific mental health services, and 2) the administrative and institutional burden of accessing support systems. Many participants reported unique psychological stressors—such as emotional upheaval [15], compassion fatigue [32], anticipatory grief [83, 146, 150], and a sense of hopelessness about the future—that extend beyond traditional diagnoses like depression and anxiety [124]. These findings underscore the need for tailored mental health services that account for the caregiving context [72, 107]. Although recent changes to U.S. Medicare allow caregivers to bill for their services, this coverage is still not comprehensive enough to address the health needs stemming from caregiving work [4]. The RAISE Act can be a potential solution if it is applied to cover these needs of caregivers [145]. To reduce the administrative burden, we propose leveraging the Digital Navigator Model [88, 99].
Essentially, this means providing caregivers and patients with access to navigators or community workers who can help them navigate the administrative aspects of caregiving—accessing records, finding relevant government support programs, connecting with providers, and arranging respite. To address these needs, accessible and proactive mental health interventions are essential, such as counseling, peer support programs, and targeted resources for caregivers. Embedding mental health screenings into regular healthcare appointments offers a preventive approach to identifying caregivers' distress early and intervening before crisis points [34, 67]. At the same time, caregivers face significant friction in accessing benefits and services due to bureaucratic complexity. Participants noted how time-consuming processes detracted from their ability to focus on care. This burden could be reduced by streamlining access to public programs through simplified applications, better navigational assistance, and integrated platforms [66]. This requires collaboration across healthcare providers, social services, and community organizations [66]—a system-level approach that positions caregiver wellbeing as a central, not peripheral, concern.

# 7.3 Technology and Design Implications

Our research extends prior HCI and CSCW work on AD/ADRD caregiving technologies, such as those supporting care transitions [54], robotic assistance in daily care [73], and online peer support communities [59], by centering on the mental health needs of caregivers and how these needs evolve throughout the caregiving journey. We draw on the concept of evolving interpersonal dynamics highlighted by [114], which explores how dementia disrupts relational communication and affects caregivers' sense of emotional connection.
While prior research suggested task-oriented or functional support systems, our study foregrounds caregivers' perceived challenges and the need for technologies that also address emotional resilience, anticipatory grief, isolation, and feelings of guilt related to self-care. Rather than solely facilitating caregiving tasks, our design implications call for systems that enable self-reflection, coping strategies, and emotional validation—particularly during transitional and high-stress periods of the caregiving experience. Although caregivers rely heavily on digital tools for practical tasks, their emotional needs for human interaction, personalized support, and credibility are not always adequately met by current technological solutions. Along these lines, we first highlight some of the cross-cutting needs for caregiver wellbeing technologies, followed by three major design implications from our work.

7.3.1 Cross-cutting Needs for Caregiver Wellbeing Technologies. Aligning with a significant body of prior work [25, 120], our findings highlight three major cross-cutting needs that must be addressed to ensure technologies effectively support caregivers' mental wellbeing, as described below:

Affordability and Accessibility. First, affordability remains a critical barrier. Participants like P15 and P18 reported abandoning tools due to cost, even when these tools were perceived as helpful (Section 6.3). Financial strain was especially pronounced in the early stages of caregiving (Section 4), reinforcing the need for low-cost, subsidized, or open-source mental health technologies [43, 79].

Usability and Personalization. Second, usability challenges can discourage adoption, especially among caregivers managing emotional fatigue. P9 noted frustration with increasingly complex digital systems, exacerbated by security changes (Section 6.3).
Multiple participants expressed interest in tools like Alexa that provide simple and desired features (Section 4.2)—aligning with prior findings [165]. Personalized, adaptive systems—such as mood-tracking tools—could offer timely wellbeing interventions [156].

Credibility. Finally, caregivers raised concerns about the credibility of digital guidance, particularly in emotionally complex situations. P2 and P12 questioned the ability of AI tools like ChatGPT [3] to provide emotionally accurate or trustworthy advice. Prior work has shown that involving healthcare professionals in content development can help ensure both clinical accuracy and emotional relevance [36, 127]. Additionally, implementing a "source transparency" framework—where tools clearly disclose the origins of their recommendations (e.g., medical literature, professional guidelines, or clinical best practices)—can further enhance trust [40, 104, 159]. Such transparency has been shown to improve perceived reliability in digital health systems [63]. Our findings reinforce that credibility is not just a technical issue—it is central to emotional reassurance, cognitive ease, and sustained engagement, especially as caregivers navigate high-stress and uncertain care contexts.

7.3.2 Designing for Emotionally Aware Caregiving Technologies. Building on caregivers' evolving mental health needs, our findings point to several promising directions for emotionally supportive technologies. For instance, we can design anticipatory emotional guidance tools that offer context-sensitive reflections during major care transitions—layering psychological support onto existing logistical features [54].
In addition, building on identity-aware design from prior work [114], we can design self-reflective interfaces, which can help caregivers better recognize and articulate emotional strain or feelings of relational distance—as well as support the personal growth and emotional maturity that some participants noticed as a positive mental health effect of caregiving. We found several instances of caregivers referring to online communities for emotional and informational support. This also supports a series of prior work on the potential benefits of online support communities, both generally [7, 115] and in the case of AD/ADRD caregivers [59, 60]. Our findings encourage design discussions on building peer support platforms that can also facilitate co-regulation of emotions, shared coping strategies, and the normalization of difficult feelings such as guilt, burnout, or grief. Finally, emotion-aware or mood-sensing interfaces can complement task-based assistance [73] by engaging with caregivers' internal states—offering low-effort emotional check-ins or personalized nudges aligned with caregivers' energy levels and caregiving intensity; recent research has integrated behavioral sensing and generative AI capabilities for adaptive mood interventions [31, 89]. These directions highlight the need to treat mental wellbeing not as an add-on, but as a central component of caregiver technology design.

7.3.3 Integrating Automated Technologies with Human Interactions. Caregivers in our study consistently reflected on the emotional burden of caregiving, with many experiencing anxiety, depression, and emotional exhaustion. Emotional support is highly contextual; the type of support an AD/ADRD caregiver seeks may differ significantly from that of others. Caregivers sought emotional support through professional counseling, therapy, and maintaining social connections with friends and family. This need for emotional solace is echoed in Siddiqui et al.
[131], where caregivers of people with serious mental illness benefited from peer support groups that offered empathy and understanding, and Kim et al. [66] highlighted the importance of emotional support for informal dementia caregivers dealing with verbal agitation. In addition, our findings highlighted caregivers’ desire for a more hybrid approach—combining human connections with technological scaffolding. In particular, the participants noted the lack of empathetic interaction in current automated technologies. P8 stated, “Emotions are something that requires human interaction than technology to deal with.” This reflects our findings on compassion fatigue and emotional upheavals (Section 4.1), where caregivers struggle with complicated emotions that purely automated solutions cannot adequately address. Integrating features that facilitate connections with professionals or peer caregivers can provide emotional support. For example, platforms that offer virtual support groups or telehealth consultations can bridge this gap. Several participants, including P14, expressed interest in VR therapy that “offers a space to talk to an expert anonymously,” while P18 sought “a platform where they can meet with other caregivers in a non-anonymous fashion.” These preferences directly connect to our findings on social support seeking (Section 5), where connecting with others facing similar challenges fostered a sense of belonging, mutual understanding, and emotional solidarity—key elements in building emotional resilience for caregivers. Additionally, caregiving is multi-layered and collaborative; caregivers desire technologies facilitating collaboration with other caregivers and stakeholders (e.g., coordination with family, healthcare providers, and community resources). For this, technologies can incorporate features such as shared calendars, task delegation tools, and secure communication channels. 
Existing platforms such as CareZone or Lotsa Helping Hands address some of these needs, but our findings suggest extending these to integrate emotional wellbeing tracking [156], mutual check-ins, and caregiver burnout alerts—bringing emotional support into shared caregiving infrastructures.

7.3.4 Dynamic Designs: The Future of Technology to Support Caregivers' Mental Health. Prior work has advocated for designing technologies that respect the integrality of caregiving—viewing caregivers not merely as support agents for others, but as individuals with complex, intersecting emotional and logistical needs [25]. Essentially, we need to think of designs that move beyond task-based interventions and towards systems that sustain caregivers holistically over time. A major finding and contribution of our work is the model of the evolution of caregivers' mental health needs—an understanding of how their needs change across different stages of the caregiving journey. Caregivers move from initial uncertainty and emotional shock to long-term fatigue, grief, and burnout. Technologies that aim to support such evolving caregiver wellbeing must therefore be designed to evolve in tandem with these changing emotional and informational needs. Recent research in personal health informatics technology design highlights the importance of goal-aligned and adaptable systems that can support users' shifting motivations and priorities over time [87, 125]. Munson et al. [87] argued that successful health tracking tools should start with an understanding of users' personal goals and adjust as those goals evolve. Similarly, Sefidgar et al. [125] showed how technologies that accommodate goal transitions (e.g., from symptom monitoring to emotional reflection) better align with the lived experiences of people managing chronic health conditions. Applying these insights to caregivers' mental wellbeing, we foresee a future where mental health platforms are adaptive.
Such tools could incorporate stage-sensitive mental health check-ins as well as adaptable self-care prompts and interventions that respond to caregivers' levels of burnout, emotional triggers, or life transitions (for themselves and the care recipients)—all with the goal of building good mental health skills and resilience.

# 7.4 Ethical Implications

As technology becomes more integrated into our lives and society, we also need to consider the ethical implications of adopting computing technologies in caregiving. Some caregivers expressed concerns about the security of personal and sensitive information. In particular, some participants desired more personalization; however, as Pandit and Lewis described, personalization can be a double-edged sword: it can come at the cost of more personal data, i.e., potentially compromised privacy. This aligns with prior work on personalization-privacy tradeoffs [10, 76, 158, 163]. Therefore, ensuring robust data encryption, secure storage, and compliance with regulations like the Health Insurance Portability and Accountability Act (HIPAA) is essential. Further, transparent privacy policies and giving users control over their data can enhance trust. The recent emergence of generative AI also brought up discussions about AI chatbots in our interviews. In particular, the use of AI in caregiving tools raises questions about reliability and potential biases. AI systems may not fully comprehend the nuances of clinical conditions or cultural contexts, leading to inappropriate recommendations. Continuous evaluation and oversight of AI algorithms are necessary to prevent harm. It is also important to incorporate caregivers' perspectives and needs in designing and building these chatbots—this will help utilize the values from lived experiences to anticipate AI harms and help in AI alignment [44, 62, 128]. Further, there is a risk that technological solutions may widen digital inequalities [111].
Efforts should be made to ensure that technologies are inclusive and accessible to caregivers of diverse backgrounds, including those with limited technological proficiency or resources. Finally, we highlight the ethical dilemma of relying too heavily on technology in caregiving. While technology can supplement caregiving tasks, it cannot replace the empathy and emotional understanding that human interaction offers. Therefore, striking a balance between automation and human support is crucial to maintaining the dignity and emotional wellbeing of both caregivers and care recipients.

# 8 Limitations and Future Directions

Our study has limitations, which also point to interesting future directions. The current study relied on a limited sample of U.S.-based caregivers and on participants' self-reports, which may not necessarily reflect the real-time AD/ADRD caregiving journey. Based on participants' perspectives on technology for mental wellbeing, there is a clear need for future research to explore more potential uses and harms of technologies for mental wellbeing, and to develop values-elicitation technology that could help caregivers [43], especially in collectivistic cultures where there is an emphasis on harmonious decision-making [95, 129]. Future research can address these gaps by incorporating a more diverse sample across cultural contexts and conducting longitudinal studies that track the use of technology for mental health over time. Expanding research to develop and evaluate personalized, adaptive digital technologies—including AI-driven and emotionally sensitive support systems—could provide more tailored mental health resources [152] to meet the varying needs of caregivers more effectively. Future research can further explore how caregivers' mental health needs vary across different demographic groups and caregiving contexts.
For instance, while financial burden emerged as a recurring theme in our study, we did not collect explicit data on participants’ household income—a notable limitation. Examining the relationship between income levels and perceived financial strain in future work would offer valuable insights into the socioeconomic dimensions of caregiver wellbeing. Likewise, conducting participatory studies that involve other stakeholders, including community and institutional leaders, can enrich our understanding in building collaborative approaches and support strategies to cater to caregivers’ wellbeing.
Alzheimer's Disease and Related Dementias (AD/ADRD) are progressive neurodegenerative conditions that impair memory, thought processes, and functioning. Family caregivers of individuals with AD/ADRD face significant mental health challenges due to long-term caregiving responsibilities. Yet, current support systems often overlook the evolving nature of their mental wellbeing needs. Our study examines caregivers' mental wellbeing concerns, focusing on the practices they adopt to manage the burden of caregiving and the technologies they use for support. Through semi-structured interviews with 25 family caregivers of individuals with AD/ADRD, we identified the key causes and effects of mental health challenges, and developed a temporal mapping of how caregivers' mental wellbeing evolves across three distinct stages of the caregiving journey. Additionally, our participants shared insights into improvements for existing mental health technologies, emphasizing the need for accessible, scalable, and personalized solutions that adapt to caregivers' changing needs over time. These findings offer a foundation for designing dynamic, stage-sensitive interventions that holistically support caregivers' mental wellbeing, benefiting both caregivers and care recipients.
# 1. Introduction

Research into portrait generation now lets us create realistic 3D images via machine learning from photograph datasets, with uses in visual effects, games, and virtual reality. However, the problem of how to control the generation process to meet desired face attributes remains open. Such attributes may span hair color, face shape, expression, age, or hair style. Ideally, all of these attributes would be controllable independently so that, say, editing the hair style of a person does not change their expression. Such controls are typically induced during learning through labels. Existing methods have focused on two label modes: 3D morphable models (3DMM) and text. 3DMMs are linear statistical models obtained from precisely-aligned 3D scan datasets [26, 34]. Fitting a 3DMM to a dataset allows conditioning a generator for precise control [2], but most 3DMMs are of the face or head only and so offer no appearance variation/control outside the 3DMM's domain. Text is less precise but may allow easier high-level control, including all of the appearance. One approach is to pair matching photos and text labels of attributes, e.g., hair color, eye color, wearing glasses. However, every new attribute requires labeling, leading to a limited set of controls, limited sample size, or limited sample diversity. For instance, each photo in FFHQ-Text [59] has 9 text annotations, each describing a subset of 162 attributes for detailed geometry and appearance. But the dataset only covers women, and has only 760 photos in total.

Figure 1. CLIPortrait allows text-guided generation and editing of 3D portraits. Given input text, CLIPortrait can synthesize high-quality 3D faces with disentangled geometry and camera control using parametric 3D face models.
An alternative approach is to train a large vision-language model (LVLM) on many millions of in-the-wild photos and captions to define latent spaces that correlate text and images; e.g., the popular CLIP [36] model's public release makes accessible a model that would be too costly for many to train. But these models can be inconsistent: as each photo is not labeled with all desired text attributes, different geometry and appearance attributes end up entangled due to spurious correlations. Training a generator using CLIP is a challenge: changing one attribute invariably changes another—an unsatisfying interaction. To alleviate these limitations, we propose a method to generate 3D portraits using both CLIP-derived text and 3D conditioning. This produces comparable quality and diversity of output to unconditional models but still allows independent control of geometry and appearance attributes across low, mid, and high levels. Our model is trained on a database of unlabeled 2D face photos (e.g., FFHQ [20]), using the pretrained LVLM CLIP and the 3DMM FLAME. Such an approach requires disentangling CLIP itself to isolate control over all parameters that could be controlled by FLAME, without damaging the ability of CLIP to describe diverse portraits—naïve approaches lead to low image quality, low sample diversity, or limited control. To disentangle, we learn to deform database faces to both 1) a 3D canonical space represented by a neural tri-plane, and 2) a 2D canonical space in which CLIP can provide more reliable pseudo-labels that align text to images. In this way, CLIP only has to describe the appearance unexplained by the deformation and projection of a 3D volume into a camera. To bypass per-sample optimization, we define lightweight attribute mixing functions that can be baked from CLIP text prompts, e.g., 'blue eyes', 'blonde hair', for fast editing.
Beyond providing a model for high-quality controllable generation of 3D portraits, our approach more broadly defines a method that allows creators without the compute resources to train an LVLM directly to instead adapt one to their own smaller 2D face data, such as proprietary data from games or VFX studios, enabling text and fine-grained geometry control without expensive labeling. It is worth noting that our contributions are orthogonal to specific LVLMs and 3D generative models. While we pick CLIP [36] as our LVLM and GNARF [2] as the generative backbone given their ready availability, our contributions are still applicable if, e.g., we swap CLIP with LLaVA [28] or replace GNARF with Gaussian splatting models [7, 18, 22] or 3D diffusion models [23, 58], as long as they allow deformation for explicit geometry control.

# 2. Why LVLMs Struggle as 3D Labelers

Given a set of unlabeled images $\hat{x} \in \mathcal{X}$, we aim to construct a generator $G(\mathbf{c}, \mathbf{z}): \mathcal{C} \times \mathcal{Z} \to \mathcal{X}$, where $\mathbf{c} \in \mathcal{C}$ denotes factors that allow us to control the generation process, and $\mathbf{z} \in \mathcal{Z}$ denotes a noise vector accounting for the unexplained factors of variation in the synthesized appearance. To simplify discussion, let $\mathbf{c} = [\mathbf{c}_{\mathrm{cam}}, \mathbf{c}_{\mathrm{geo}}, \mathbf{c}_{\mathrm{imp}}]$, where the camera pose $\mathbf{c}_{\mathrm{cam}}$ and the geometry $\mathbf{c}_{\mathrm{geo}}$ are explicitly controllable in a deformable 3D generative model. Anything we want to implicitly control by text, we leave in $\mathbf{c}_{\mathrm{imp}}$.
The goal is to induce $\mathbf{c}_{\mathrm{imp}}$ from free-form text prompts $t \in \mathcal{T}$ to create photorealistic face images $x$ that align with $\mathbf{c}_{\mathrm{imp}}$, while preventing $\mathbf{c}_{\mathrm{imp}}$ from a) interfering with $\mathbf{c}_{\mathrm{cam}}$ and $\mathbf{c}_{\mathrm{geo}}$, and b) overshadowing $\mathbf{z}$. Our contributions focus on disentanglement; to explain this, we first consider why entanglement arises in LVLMs via their alignment objective.

Alignment. Given text $t$, the alignment objective requires that the generated sample $x$ be semantically consistent with $t$. Let $E_{\mathrm{txt}}(t): \mathcal{T} \to \mathcal{R}$ be an encoder that maps text $t$ to some representation $\mathbf{r} \in \mathcal{R}$; likewise, $E_{\mathrm{img}}(x): \mathcal{X} \to \mathcal{R}$ maps an image $x$ to the same space $\mathcal{R}$. We can define the alignment of $x$ to $t$ as maximizing the mutual information $I(x; t)$, which is bounded by the mutual information $I(\mathbf{r}_{x}; \mathbf{r}_{t})$ between $\mathbf{r}_{x} = E_{\mathrm{img}}(x)$ and $\mathbf{r}_{t} = E_{\mathrm{txt}}(t)$. For CLIP, $E_{\mathrm{img}}$ and $E_{\mathrm{txt}}$ are trained with the InfoNCE objective. Oord et al.
[33] show that minimizing InfoNCE maximizes a lower bound on $I(\mathbf{r}_{x}; \mathbf{r}_{t})$, given text-image pairs $(x, t) \in \mathcal{Y}$:

$$
\mathrm{InfoNCE}(x, t) = -\mathbb{E}_{(x,t)\sim\mathcal{Y}}\left[\log\frac{\exp\left(\cos\left(\mathbf{r}_{x}, \mathbf{r}_{t}\right)\right)}{\sum_{t'\sim\mathcal{Y}}\exp\left(\cos\left(\mathbf{r}_{x}, \mathbf{r}_{t'}\right)\right)}\right] \geq -I(\mathbf{r}_{x}; \mathbf{r}_{t}) + \mathrm{constant} \quad (1)
$$

CLIP was trained on Internet-scale 2D image/text data $\mathcal{Y}$. For a target dataset $\mathcal{X}$ for which we would like a generator, say, the high-quality close-ups in FFHQ, let's assume that CLIP happens to cover all portrait images. Then, text-guided 2D generation becomes viable even though FFHQ has no text labels: we can use $\mathbf{c}_{\mathrm{imp}} = \mathbf{r}_{x} = E_{\mathrm{img}}(x_{\mathrm{FFHQ}})$ to condition $G$ during training, which requires no text labels. Then, at inference time, we replace $\mathbf{r}_{x}$ by $\mathbf{r}_{t}$—this allows $G$ to generate an image from any text prompt provided by the user. The contrastive objective, while bringing $\mathbf{r}_{x}$ and $\mathbf{r}_{t}$ as close as possible, does not fully eliminate the modality gap in $\mathcal{R}$ [27] and instead causes entanglement in $\mathbf{r}_{x}$ for our concerns.

Entanglement. $\mathcal{Y}$ encompasses images of all things on the Internet—not just portraits with detailed text captions. This situation has two problems.
1) Only a small fraction of portraits in $\mathcal{Y}$ contain a description of geometry, and the description is coarse, e.g., 'smile' does not define how wide the smile is, and 'viewed from side on' does not define the 3D camera angle. This allows $E_{\mathrm{img}}$ to encode incomplete geometry information $\mathbf{r}_{x_{\mathrm{geo}}}$ and camera information $\mathbf{r}_{x_{\mathrm{cam}}}$ in $\mathbf{r}_{x}$ as the result of spurious correlations, e.g., celebrity faces are more likely to have a front camera pose and smiley expression. As such, introducing a 3D representation within the generator and then using $\mathbf{r}_{x}$ as $\mathbf{c}_{\mathrm{imp}}$ leads to poor results since $\mathbf{r}_{x_{\mathrm{geo}}}$ and $\mathbf{r}_{x_{\mathrm{cam}}}$ are strongly at odds with $\mathbf{c}_{\mathrm{cam}}$ and $\mathbf{c}_{\mathrm{geo}}$. For instance, if $\mathbf{c}_{\mathrm{cam}}$ specifies a camera pose different from what $\mathbf{r}_{x_{\mathrm{cam}}}$ dictates, $G$ is ill-behaved as it receives conflicting conditions for the same control. We address this conflict by proposing 2D and 3D canonicalizations to eliminate $\mathbf{r}_{x_{\mathrm{cam}}}$ and $\mathbf{r}_{x_{\mathrm{geo}}}$ from $\mathbf{r}_{x}$. As long as all instances of $x$ share the same camera pose and geometry, the generation process can no longer distinguish such information from $\mathbf{r}_{x}$.

Figure 2. Overview of Stage-1 (deformable EG3D): a StyleGAN2 synthesis network produces canonical tri-planes; the canonical volume is deformed into an observation volume, rendered as posed images at 64×64 resolution according to $\mathbf{c}_{\mathrm{cam}}$, super-resolved to 512×512, and judged real or fake by a discriminator.
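As a concrete reference for the alignment objective, Eq. (1) can be sketched as a batch-wise image-to-text InfoNCE in NumPy. This is illustrative only: CLIP additionally uses a learned temperature and a symmetric text-to-image term, both omitted here, and the function name is ours.

```python
import numpy as np

def infonce(r_x, r_t):
    """Batch-wise image-to-text InfoNCE as in Eq. (1): for each image
    embedding r_x[i], the paired text embedding r_t[i] is the positive
    and the other texts in the batch act as negatives t'. Inputs: (B, D)."""
    # cosine similarity = dot product of L2-normalized embeddings
    r_x = r_x / np.linalg.norm(r_x, axis=1, keepdims=True)
    r_t = r_t / np.linalg.norm(r_t, axis=1, keepdims=True)
    sim = r_x @ r_t.T                                # (B, B) cos(r_x, r_t')
    # row-wise cross-entropy against the diagonal (the positive pair)
    logits = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls each $\mathbf{r}_{x}$ toward its paired $\mathbf{r}_{t}$ and away from the other texts in the batch—which is exactly what makes $\mathbf{r}_{x}$ maximally discriminative, including along the spurious geometry, camera, and noise factors discussed in this section.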
2) The contrastive objective forces $\mathbf{r}_{x}$ to be as discriminative as possible in $\mathcal{Y}$, but the most discriminative factors in $\mathbf{r}_{x}$ could be irrelevant to portraits—non-portrait noise factors $\mathbf{r}_{x_{\mathrm{noise}}}$ can outweigh any useful factors when using $\mathbf{r}_{x}$ as $\mathbf{c}_{\mathrm{imp}}$. Examples of $\mathbf{r}_{x_{\mathrm{noise}}}$ include the camera used to produce the image, image format and quality, and the possible geographical location where the picture was taken (Fig. 6). We show in the supplemental that these noise factors can considerably overshadow factors that describe the person in the image. Since $\mathbf{r}_{x_{\mathrm{noise}}}$ typically contains the most discriminative factors, it tends to be unique and vary greatly across instances of $x$. As a result, this encourages $G$ to become a deterministic function of $\mathbf{r}_{x}$ and completely ignore $\mathbf{z}$, since each $x$ can be uniquely identified by just $\mathbf{r}_{x_{\mathrm{noise}}}$ (and therefore $\mathbf{r}_{x}$). We avoid this degeneracy by introducing a Jacobian regularization that penalizes the generator's sensitivity to $\mathbf{r}_{x_{\mathrm{noise}}}$.

# 3. Method

# 3.1. High-level Overview

Our overall approach is a 3D GAN (Fig. 3) that uses two training stages to disentangle CLIP for an unlabeled observation dataset $\mathcal{X}$. These two stages are necessary because our full model requires an unconditional deformable generator to obtain the canonicalized appearance condition $\mathbf{r}_{\hat{x}}$. Furthermore, bootstrapping our full model from an unconditional model helps avoid degenerate solutions caused by $\mathbf{r}_{x_{\mathrm{noise}}}$, which we show in Section 3.5.

Stage-1 (Unconditional Generation). First, we train a 3D generator $G$ with no a priori text understanding to output a volumetric tri-plane representation (Fig. 2).
This representation is deformed from its canonical format using a 3D map $\mathbf{D}$ derived from a 3DMM, and projected back to an intermediate low-resolution image $x^{\ddag}$ according to camera parameters $\mathbf{c}_{\mathrm{cam}}$ (Sec. 3.2). Finally, we use a super-resolution module to produce a sample $x$ at the desired target resolution from $x^{\ddag}$. To train Stage-1, we use a discriminator to assess whether the rendered image $x$ is real or fake and use an adversarial objective to optimize model $G$.

Canonicalization. Once $G$ is trained, we can canonicalize each sample $x$ to a fixed-geometry 3D volume by inverting $\mathbf{D}$, and then render a frontal 2D image $\hat{x}$ by inverting $\mathbf{c}_{\mathrm{cam}}$. We run pre-trained CLIP on each canonicalized 2D image, $E_{\mathrm{img}}(\hat{x})$, to compute $\mathbf{r}_{\hat{x}}$.

Stage-2 (Conditioning Appearance on Text). To add text control, we use an alignment network $T_{G}$ to modify any random style vector $w$ according to $\mathbf{r}_{\hat{x}}$. We disentangle 3D information from CLIP using 2D canonicalization such that $\mathbf{r}_{\hat{x}}$ only contains frontal appearance information. The alignment network preserves the randomness of $w$, which maintains output diversity and enables interactive style mixing. Disentanglement of 3D information from CLIP occurs because $\mathbf{r}_{\hat{x}}$ is predicted from 2D images that all share the same camera and geometry; any possible means to distinguish such information has been factored out of the generator.

# 3.2. Generating Deformable 3D Portraits

We use a tri-plane reduction of a neural radiance field, like EG3D [5]. Given a 3D point, we query the tri-plane for a feature vector $\mathbf{f}'$, and obtain volumetric features $\mathbf{f}$ and density $\sigma$ from $\mathbf{f}'$ using an MLP.
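The tri-plane query just described can be sketched for a single point as follows (a simplified NumPy version under our own naming; EG3D performs this batched on the GPU with learned plane features):

```python
import numpy as np

def triplane_query(planes, p):
    """Sketch of a tri-plane lookup for one 3D point p in [-1, 1]^3:
    project p onto the xy, xz, and yz planes, bilinearly sample each
    feature plane, and sum the three samples into f'.
    planes: (3, R, R, C) array of plane features."""
    _, R, _, C = planes.shape
    pairs = [(0, 1), (0, 2), (1, 2)]     # coordinate pairs: xy, xz, yz
    f = np.zeros(C)
    for plane, (a, b) in zip(planes, pairs):
        # map from [-1, 1] to continuous pixel coordinates
        u = (p[a] + 1) / 2 * (R - 1)
        v = (p[b] + 1) / 2 * (R - 1)
        u0, v0 = int(np.floor(u)), int(np.floor(v))
        u1, v1 = min(u0 + 1, R - 1), min(v0 + 1, R - 1)
        du, dv = u - u0, v - v0
        # bilinear interpolation of the four surrounding texels
        f += ((1 - du) * (1 - dv) * plane[u0, v0]
              + du * (1 - dv) * plane[u1, v0]
              + (1 - du) * dv * plane[u0, v1]
              + du * dv * plane[u1, v1])
    return f  # f'; an MLP then maps f' to features f and density sigma
```

Summing the three plane samples follows EG3D; concatenating them instead is an alternative design choice.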
We obtain a pixel of the low-resolution render by integrating $\mathbf{f}$ and $\sigma$ along ray $\vec{r}$:
$$ F(\vec{r}) = \int_{t_{\mathrm{n}}}^{t_{\mathrm{f}}} T(t)\,\sigma(t)\,\mathbf{f}(t)\,\mathrm{d}t, \quad \mathrm{where}\ T(t) = \exp\left( -\int_{t_{\mathrm{n}}}^{t} \sigma(s)\,\mathrm{d}s \right) $$
where $t_{\mathrm{n}}$ and $t_{\mathrm{f}}$ are the near and far bounds of the ray $\vec{r}(t) = \mathbf{o} + \vec{\omega} t$ along the direction $\vec{\omega}$ from the origin $\mathbf{o}$. Precise camera control is possible by changing the rays $\vec{r}$ along which $F$ is aggregated. To control portrait geometry, including face shape and facial expression, we deform the ray along which $F$ is aggregated:
$$ \mathbf{f}(\mathbf{x}') = \mathbf{f}(\mathbf{D}(\mathbf{x})), $$
where $\mathbf{x}$ is a coordinate in the observation space and $\mathbf{D}: \mathbb{R}^3 \to \mathbb{R}^3$ is a deformation that maps $\mathbf{x}$ to a canonical space. Deforming coordinates from the canonical space removes the need for generator $G$ to represent varying geometry. $\mathbf{D}$ can be constructed from explicit 3DMMs. We use FLAME [26]: it has controllable parameters $\beta \in \mathbb{R}^{100}$ for face shape, $\theta \in \mathbb{R}^{6}$ for jaw and head pose, and $\psi \in \mathbb{R}^{50}$ for facial expression. We estimate these for the observation mesh from $\hat{x}$ using DECA [13]. We construct $\mathbf{D}$ from FLAME analytically using the surface field (SF) method in GNARF by Bergman et al. [2]. SF derives the deformation field from the canonical mesh and the observation mesh aligned to $\hat{x}$. For the canonical space, we set the FLAME shape and expression coefficients to 0, but leave the jaw open to synthesize the mouth interior.
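As a concrete illustration, the rendering integral $F(\vec{r})$ above can be approximated by a Riemann sum over samples along the ray. The following is a minimal NumPy sketch of that standard quadrature, not the paper's implementation; sample counts and spacing are illustrative:

```python
import numpy as np

def render_feature(sigma, f, t):
    """Riemann-sum approximation of F(r) = ∫ T(t) σ(t) f(t) dt along one ray,
    with transmittance T(t) = exp(-∫_{t_n}^{t} σ(s) ds).

    sigma: (N,) densities at the samples
    f:     (N, C) feature vectors at the samples
    t:     (N,) ascending sample depths in [t_n, t_f]
    """
    delta = np.diff(t, append=t[-1])          # spacing; last sample contributes 0
    # cumulative optical depth up to (not including) each sample
    tau = np.concatenate([[0.0], np.cumsum(sigma[:-1] * delta[:-1])])
    T = np.exp(-tau)                          # transmittance at each sample
    weights = T * sigma * delta               # per-sample contribution T(t) σ(t) dt
    return (weights[:, None] * f).sum(axis=0) # aggregated feature for this pixel
```

Camera control then amounts to choosing which rays $\vec{r}$ to evaluate, and geometry control to deforming the sample coordinates with $\mathbf{D}$ before querying $\sigma$ and $\mathbf{f}$.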
Given a target coordinate $\mathbf{x}$, SF locates its nearest triangle $t_{\mathbf{x}}^{D} = [\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2] \in \mathbb{R}^{3 \times 3}$ on the target mesh, and computes the barycentric coordinates $[u, v, w]$ of the projection of $\mathbf{x}$ on $t_{\mathbf{x}}^{D}$. To calculate the deformed coordinate, we retrieve the corresponding triangle $t_{\mathbf{x}}^{C}$ on the canonical mesh and its normal $\mathbf{n}_{t_{\mathbf{x}}}^{C}$:
Figure 3. Overview of Stage-2. (a) The conditioning networks remap CLIP embeddings to a space $w_{\mathbf{r}}$ in which 3D information (considered instead in stage (b)) is ignored, while noise vector $\mathbf{z}$ maintains sample diversity. (b) We synthesize a neural radiance volume via a tri-plane in a 3D canonical space, with a particular appearance defined by $w_{\mathbf{r}}$. Then, we deform this volume by FLAME parameters after fitting to the dataset. A discriminator judges the rendered output image. (c) The alignment network in (a) can only achieve a $w_{\mathbf{r}}$ free of 3D information if all generated images are '3D equivalent'; we achieve this via canonicalization.
$$ \mathbf{D}(\mathbf{x}) = t_{\mathbf{x}}^{C} \cdot [u, v, w]^{\top} + \left\langle \mathbf{x} - t_{\mathbf{x}}^{D} \cdot [u, v, w]^{\top},\, \mathbf{n}_{t_{\mathbf{x}}}^{D} \right\rangle \mathbf{n}_{t_{\mathbf{x}}}^{C} $$
which we use to query the canonical volume.
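A toy NumPy sketch of this surface-field mapping, assuming the nearest observation triangle $t_{\mathbf{x}}^{D}$ and its canonical counterpart $t_{\mathbf{x}}^{C}$ have already been found (the nearest-triangle search and all mesh handling are omitted):

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of point p projected onto triangle tri
    (tri is 3x3, one vertex per row)."""
    v0, v1, v2 = tri
    a, b = v1 - v0, v2 - v0
    n = np.cross(a, b)
    # project p onto the triangle's plane
    p_proj = p - np.dot(p - v0, n) / np.dot(n, n) * n
    # solve p_proj - v0 = u*a + v*b within the plane
    uv, *_ = np.linalg.lstsq(np.stack([a, b], axis=1), p_proj - v0, rcond=None)
    u, v = uv
    return np.array([1.0 - u - v, u, v])

def deform(x, tri_D, tri_C):
    """Surface-field deformation D(x): carry the barycentric position and the
    signed normal offset from the observation triangle to the canonical one."""
    def unit_normal(tri):
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        return n / np.linalg.norm(n)
    bary = barycentric(x, tri_D)
    offset = np.dot(x - bary @ tri_D, unit_normal(tri_D))  # signed distance to plane
    return bary @ tri_C + offset * unit_normal(tri_C)
```

If the two triangles coincide, `deform` is the identity; translating or deforming the canonical triangle carries the point along with it, which is exactly how the 3DMM's geometry edits reach the radiance volume.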
Since geometry variations are explicitly controlled by the deformation and have been factored out from $G$, the entanglement between facial expression and camera pose found in EG3D no longer exists; as such, we do not use generator pose conditioning [5]. Without informing the discriminator of the deformation, there is no guarantee that the deformed volume matches the expected deformation. To condition the discriminator, Bergman et al. [2] concatenate the camera pose with FLAME parameters. However, this leads to training instability, and sample quality is severely degraded even with the noise perturbation trick [2]. As Huang et al. showed [17], this is because the conditioning vector depends upon the unknown PCA basis for FLAME, making it difficult for the optimization to use this additional input. Instead, we adopt the method of Huang et al. We texture the mesh with its vertex coordinates in world space. Then, we render the observation mesh under $\mathbf{c}_{\mathrm{cam}}$ and concatenate the render $rdr$ with $x$ as input to the discriminator. We observe no training instability or quality degradation using this conditioning.
# 3.3. Canonicalization
Given our deformable EG3D, canonicalization can be reduced to an inversion problem. Specifically, the sample generation process of the deformable generator is given by:
$$ \begin{array}{r} w = M_G(\mathbf{z}) \\ f = S_G(w) \\ x = V(f, \mathbf{c}_{\mathrm{cam}}, \mathbf{c}_{\mathrm{geo}}) \end{array} $$
where $M_G$ and $S_G$ are the style mapping and synthesis networks of $G$, and $V$ denotes deformable volume rendering. For each image $x$ in the training set, we estimate $\mathbf{c}_{\mathrm{cam}}$ and $\mathbf{c}_{\mathrm{geo}}$ using off-the-shelf models.
The corresponding latent vector $w_x$ can then be obtained by solving the following optimization problem:
$$ w_x = \underset{w}{\operatorname{argmin}}\, \mathrm{D}_{\mathrm{LPIPS}}\left( V(S_G(w), \mathbf{c}_{\mathrm{cam}}, \mathbf{c}_{\mathrm{geo}}), x \right) $$
where $\mathrm{D}_{\mathrm{LPIPS}}$ denotes the LPIPS distance that we use as our image similarity. Given $w_x$, we can now re-render the canonicalized $\hat{x}$ under a neutral camera pose and neutral geometry. We define the neutral camera pose $\mathbf{c}_{\mathrm{n\text{-}cam}}$ to be fully frontal and the neutral geometry $\mathbf{c}_{\mathrm{n\text{-}geo}}$ to have canonical FLAME parameters. The canonicalized $\hat{x}$ is given by:
$$ \hat{x} = V(S_G(w_x), \mathbf{c}_{\mathrm{n\text{-}cam}}, \mathbf{c}_{\mathrm{n\text{-}geo}}) $$
and the disentangled condition vector is $\mathbf{c}_{\mathrm{imp}} = \mathbf{r}_{\hat{x}} = E_{\mathrm{img}}(\hat{x})$. Note that canonicalization happens before Stage-2 training as a data preprocessing step, and so it has no impact on the training time of Stage-2.
# 3.4. Conditioning on LVLM Text
We first visualize the importance of canonicalization in Figure 4: using the CLIP embedding from raw training images $x$, generation incorrectly tries to flatten faces so they stay frontal regardless of the camera pose. In contrast, conditioning on the canonicalized $\mathbf{r}_{\hat{x}}$ produces correct geometry, since neither $G$ nor $D$ can cheat by reading camera pose and geometry from $\mathbf{r}_{\hat{x}}$; they must rely on $\mathbf{c}_{\mathrm{cam}}$ and $\mathbf{c}_{\mathrm{geo}}$ for such information.
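The latent inversion used for canonicalization (the argmin above) is, in essence, gradient descent on an image distance. The following toy sketch uses a fixed linear map as a stand-in for the renderer $V(S_G(\cdot))$ and squared error as a stand-in for LPIPS; both stand-ins are ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(16, 8))   # toy "renderer": latent w (8-dim) -> image (16-dim)
w_true = rng.normal(size=8)
x = A @ w_true                 # the observed image we want to invert

w = np.zeros(8)                # latent being optimized, playing the role of w_x
lr = 0.01
losses = []
for _ in range(5000):
    residual = A @ w - x                 # stand-in for the perceptual distance
    losses.append(float(residual @ residual))
    w -= lr * (2 * A.T @ residual)       # gradient step on ||A w - x||^2
```

In the real system the forward map is the deformable renderer and the distance is LPIPS, so each gradient step backpropagates through both; the structure of the loop is the same.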
To condition on $\mathbf{r}_{\hat{x}}$, it is possible either to train a new model from scratch that takes $\mathbf{r}_{\hat{x}}$ as an input, or to adapt our unconditional deformable model from Stage 1 to handle $\mathbf{r}_{\hat{x}}$. We choose the latter, as we show in the following section that direct generation from $\mathbf{r}_{\hat{x}}$ is prone to degenerate solutions due to the presence of $\mathbf{r}_{\hat{x}_{\mathrm{noise}}}$. Our Stage 2 model splits conditional generation into two easier steps: unconditional generation (i.e., Stage 1) and alignment, where the latter seeks to modify an existing random sample such that it aligns with $\mathbf{r}_{\hat{x}}$. Note that our Stage 1 model already facilitates well-behaved unconditional generation; as long as the alignment step is also well-behaved, we obtain well-behaved conditional generation on $\mathbf{r}_{\hat{x}}$.
Figure 4. Without canonicalization before CLIP, faces look flat. Depth renderings of the estimated volume underneath show the more distorted space. [FFHQ 5mil. images.] This directly shows our major insight: the CLIP embedding of the uncanonicalized image inherently includes camera pose information that must be disentangled, else it confuses the 3D generation.
Figure 5. Increasing $\alpha$ increases prompt alignment. Text prompt: "Bearded man with long blond hair wearing glasses". CLIP correlates regular glasses and sunglasses as increasing "glasses" intensity, and we observe a similar phenomenon.
Toward this goal, we introduce a CLIP alignment network $T_G$ to $G$, which predicts a personalized direction for the style vector $w$ of a random sample, along which the sample gains alignment toward $\mathbf{r}_{\hat{x}}$:
$$ w_{\mathbf{r}_{\hat{x}}}(\alpha) = w + \alpha T_G(w, \mathbf{r}_{\hat{x}}) $$
where $\alpha \in [0, 1]$ is a scalar that controls the alignment strength; unless otherwise specified, $w_{\mathbf{r}_{\hat{x}}}$ implies $\alpha = 1$. Similarly, we introduce an alignment network $T_D$ to the discriminator $D$ as follows:
$$ \begin{array}{r} u = S_D(x, rdr) \\ v = M_D(\mathbf{c}_{\mathrm{cam}}) \\ v_{\mathbf{r}_{\hat{x}}}(\alpha) = v + \alpha T_D(v, \mathbf{r}_{\hat{x}}) \\ D(x \mid rdr, \mathbf{c}_{\mathrm{cam}}, \mathbf{r}_{\hat{x}}) = u \cdot v_{\mathbf{r}_{\hat{x}}} \end{array} $$
where $S_D$ denotes the stem layers of $D$ and $M_D$ maps the camera pose to the condition vector for the EG3D discriminator. We implement $T_G$ and $T_D$ using a ResNet architecture and zero-initialize both networks to ensure that the extra condition $\mathbf{r}_{\hat{x}}$ blends into our existing deformable EG3D smoothly.
# 3.5. Regularizing Noise in CLIP Embeddings
Noise in CLIP. We verify its existence by showing the similarity between face-related main prompts and noise prompts that are not related to the face description when both are compared against a CLIP image embedding of the target image. To show this, we generate text for FFHQ training images using the third-party CLIP-Interrogator tool 1. Given the generated text prompts, we split them into two groups: face-related (main prompt) and face-unrelated (noise prompt); see Figure 6. Then, we evaluate the cosine distance between the CLIP image embedding and the CLIP text embeddings.
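The generator-side alignment update $w_{\mathbf{r}_{\hat{x}}}(\alpha) = w + \alpha T_G(w, \mathbf{r}_{\hat{x}})$ above can be sketched with a tiny stand-in network for $T_G$ (the paper uses a ResNet; the two-layer MLP and all dimensions here are illustrative). Zero-initializing the output layer makes $T_G(w, \mathbf{r}) = 0$ at the start of training, so the extra condition blends in smoothly:

```python
import numpy as np

rng = np.random.default_rng(1)

class AlignmentNet:
    """Tiny stand-in for the alignment network T_G. The zero-initialized
    output layer W2 makes T_G(w, r) = 0 before any training."""
    def __init__(self, w_dim=8, r_dim=4, hidden=16):
        self.W1 = 0.1 * rng.normal(size=(hidden, w_dim + r_dim))
        self.W2 = np.zeros((w_dim, hidden))      # zero init
    def __call__(self, w, r):
        h = np.tanh(self.W1 @ np.concatenate([w, r]))
        return self.W2 @ h

def align(w, r, T, alpha=1.0):
    """w_r(alpha) = w + alpha * T(w, r): alpha = 0 recovers the unconditional
    sample; alpha = 1 applies full alignment toward the CLIP condition r."""
    return w + alpha * T(w, r)
```

Because the update is a residual along a predicted direction, varying $\alpha$ linearly interpolates between the unconditional sample and the fully aligned one.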
$\mathbf{r}_{\hat{x}_{\mathrm{noise}}}$ does exist and can outweigh the main facial appearance information in terms of cosine similarity. Furthermore, the average cosine similarity between the noise prompts and the entire FFHQ dataset shows that $\mathbf{r}_{\hat{x}_{\mathrm{noise}}}$ is unique enough to identify each corresponding training image. Distribution collapse from noise. This leads to distribution collapse during training of the proposed network. After disentangling 3D control from $\mathbf{r}_x$, the remaining entanglement arises from $\mathbf{r}_{\hat{x}_{\mathrm{noise}}}$. As $\mathbf{r}_{\hat{x}_{\mathrm{noise}}}$ is highly specific to each $x$, each $x$ becomes identifiable solely from $\mathbf{r}_{\hat{x}}$; thus the conditional distribution $p(x \mid \mathbf{r}_{\hat{x}})$ collapses to a delta distribution, and all randomness of $x$ is lost once $\mathbf{r}_{\hat{x}}$ is determined. Since $G$ is trained to match $p(x \mid \mathbf{r}_{\hat{x}})$, it is encouraged to be a deterministic function of $\mathbf{r}_{\hat{x}}$ and completely ignore the source of randomness $\mathbf{z}$, as sampling from $p(x \mid \mathbf{r}_{\hat{x}})$ involves no randomness. This lack of randomness is highly undesirable for $G$: a generic prompt such as "a blond person" will be mapped to a single deterministic output rather than many diverse face images. This severely limits applications. Although we replace $\mathbf{r}_{\hat{x}}$ with $\mathbf{r}_t$ at inference time and $\mathbf{r}_t$ contains no noise signal, the result will remain deterministic given that $G$ has learned to dissociate $\mathbf{z}$ during training.
For this dissociation to happen, either $G$ has become a constant function w.r.t. $\mathbf{z}$, or $G$ is much more sensitive to changes in $\mathbf{r}_{\hat{x}}$ than to changes in $\mathbf{z}$. More formally:
$$ \frac{\partial G}{\partial \mathbf{z}} = 0 $$
$$ \left\| \frac{\partial G}{\partial \mathbf{r}_{\hat{x}}} \right\|_{\mathrm{F}} \gg \left\| \frac{\partial G}{\partial \mathbf{z}} \right\|_{\mathrm{F}} $$
where $\|\cdot\|_{\mathrm{F}}$ denotes the Frobenius norm of the Jacobian. To address Eq. (16), we force our model to retain the ability of unconditional generation by setting $\alpha = 0$ with $50\%$ probability during training. For unconditional generation, $\mathbf{z}$ is the only input to the volume synthesis process. By forcing the distribution of unconditional generation to match the training distribution $p(\mathcal{X})$, the model is encouraged to produce diverse samples, which is directly at odds with $G$ being a constant function w.r.t. $\mathbf{z}$. For Eq. (17), a straightforward solution is to penalize $\|\partial G / \partial \mathbf{r}_{\hat{x}}\|_{\mathrm{F}}$. However, this Jacobian term is too expensive to calculate. With the chain rule, we see that:
$$ \left\| \frac{\partial G}{\partial \mathbf{r}_{\hat{x}}} \right\|_{\mathrm{F}} = \left\| \frac{\partial G}{\partial w_{\mathbf{r}_{\hat{x}}}} \frac{\partial w_{\mathbf{r}_{\hat{x}}}}{\partial \mathbf{r}_{\hat{x}}} \right\|_{\mathrm{F}} \leqslant \left\| \frac{\partial G}{\partial w_{\mathbf{r}_{\hat{x}}}} \right\|_{2} \left\| \frac{\partial w_{\mathbf{r}_{\hat{x}}}}{\partial \mathbf{r}_{\hat{x}}} \right\|_{\mathrm{F}} $$
Figure 6.
The comparison of cosine similarity between CLIP image embeddings of training images and CLIP text embeddings for prompts related to faces and those not related to faces. Noise prompts can have higher cosine similarity than face prompts. Further, the average cosine similarity between the noise prompts and the entire FFHQ dataset shows that $\mathbf{r}_{\hat{x}_{\mathrm{noise}}}$ is unique enough to identify each corresponding training image.
Figure 7. Showing distribution collapse. Our noise regularization improves diversity and quality. [FFHQ 5mil. images.]
Since $\partial G / \partial w_{\mathbf{r}_{\hat{x}}}$ is the computation bottleneck and not directly relevant to $\mathbf{r}_{\hat{x}}$, we omit this Jacobian term and penalize $\|\partial w_{\mathbf{r}_{\hat{x}}} / \partial \mathbf{r}_{\hat{x}}\|_{\mathrm{F}}$, which bounds $\|\partial G / \partial \mathbf{r}_{\hat{x}}\|_{\mathrm{F}}$ from above. We formulate this penalty as a regularization term $R_{\mathbf{r}_{\hat{x}}}$ and apply a stochastic approximator for efficient computation:
$$ R_{\mathbf{r}_{\hat{x}}} = \left\| \frac{\partial w_{\mathbf{r}_{\hat{x}}}}{\partial \mathbf{r}_{\hat{x}}} \right\|_{\mathrm{F}}^{2} = \lim_{\sigma \to 0} \mathbb{E}_{\epsilon \sim \mathcal{N}(0, \sigma^2 I)} \left[ \frac{1}{\sigma^2} \left\| w_{\mathbf{r}_{\hat{x}} + \epsilon} - w_{\mathbf{r}_{\hat{x}}} \right\|^2 \right] $$
We show the proof of Eq. (20) in the appendix. Additionally, we notice that the norm of $w_{\mathbf{r}_{\hat{x}}}$ has a tendency to grow uncontrollably, driving conditional generation results into unrealistic regions.
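The stochastic approximator of Eq. (20) only needs forward evaluations of the map $\mathbf{r}_{\hat{x}} \mapsto w_{\mathbf{r}_{\hat{x}}}$, never its full Jacobian. The following NumPy sketch checks the estimator against a linear map whose Jacobian Frobenius norm is known in closed form; the linear map is our stand-in for the alignment network:

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(6, 10))     # known Jacobian of a linear stand-in map
w_of_r = lambda r: J @ r         # stand-in for r -> w_r; its Jacobian is exactly J

def frob_penalty(w_fn, r, sigma=1e-3, n_samples=2000):
    """Monte-Carlo estimate of ||dw/dr||_F^2 as in Eq. (20):
    E_eps[ ||w(r + eps) - w(r)||^2 ] / sigma^2 with eps ~ N(0, sigma^2 I)."""
    w0 = w_fn(r)
    total = 0.0
    for _ in range(n_samples):
        eps = rng.normal(scale=sigma, size=r.shape)
        total += float(np.sum((w_fn(r + eps) - w0) ** 2))
    return total / (n_samples * sigma ** 2)
```

For a linear map the estimate should match $\|J\|_{\mathrm{F}}^2 = \sum_{ij} J_{ij}^2$ exactly in expectation; during training a single $\epsilon$ per step suffices, since for small $\sigma$ the estimator is approximately unbiased.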
We penalize norm growth:
$$ R_{\mathrm{norm}} = \left( \| w_{\mathbf{r}_{\hat{x}}} \| - \| w \| \right)^2 $$
when deriving the conditional style vector $w_{\mathbf{r}_{\hat{x}}}$ from the unconditional $w$ as in Eq. (11). With our regularization techniques, we solve the distribution collapse problem, allowing high-quality and diverse generation. Figure 7 verifies our hypothesis that, without the proposed Jacobian regularization, the model cannot help but dissociate the source of randomness $z$ under a stronger (text-)conditioning signal. With both regularizers in place, we trade off a slight compromise in overall text alignment ('bangs'). As a side control, we also provide a smooth transition from unconditional to conditional generation by manipulating $\alpha$, allowing the user to trade off between alignment and fidelity (Fig. 9).
# 4. Experiments
Datasets. We use FFHQ [20] and Multi-Modal CelebA-HQ (MMCelebA) [49] at $512 \times 512$ resolution following prior work. While MMCelebA contains text annotations, we do not use any when training our model, and the data processing procedure follows exactly as in FFHQ. We augment both datasets with horizontal flips and estimate the camera parameters for each image using an off-the-shelf model following EG3D. For FLAME parameter estimation, we adopt DECA to obtain initial results. The DECA estimates are not directly applicable due to the camera model differences between DECA (orthographic) and EG3D (perspective). We further optimize the initial estimates from DECA using a projected facial landmark loss to reconcile this difference. Finally, we optimize the scale and translation of the FLAME mesh to match the cropping of EG3D; other mesh postprocessing steps such as water-tightening and simplification follow GNARF. Model and Optimization. For the EG3D backbone, we initialize the generator weights from the Egger et al.
public checkpoint on FFHQ and largely follow their training routine and losses, including the non-saturating adversarial loss, R1 gradient penalty, and density regularization. However, we remove generator pose conditioning and lower the learning rate to $\gamma = 0.001$ for both generator $G$ and discriminator $D$. For faster deformation computation, we simplify the FLAME mesh to 2500 triangles, but use the full mesh for generating the mesh render $rdr$ in discriminator conditioning. Lastly, we add our regularizations $R_{\mathbf{r}_{\hat{x}}}$ and $R_{\mathrm{norm}}$ to the training objective, and empirically set their weights to 0.01 and 10.
Figure 8. Qualitative evaluation on text-to-3D portrait generation with explicit geometry control.
Table 1. Quantitative comparison. See Fig. 10 for a qualitative comparison. \*: These models are rasterization-based and cannot achieve the same level of fidelity as volume-rendered models.
Metrics. For image generation quality, we use Frechet Inception Distance (FID) [15] and Kernel Inception Distance (KID) [3]. For semantic consistency, we use CLIP score, which is the cosine similarity between a CLIP image embedding and a CLIP text embedding.
# 4.1. Comparison on Text-to-3D Face Generation
We evaluate the performance of CLIPortrait on open-vocabulary text against the texture-mapping-based CLIPFace method [1] and the generative TG-3DFace method [52]. In qualitative comparisons (Figs. 10, 8), we see improved image quality against CLIPFace and improved diversity and control against TG-3DFace. For quantitative evaluation, we compare with six additional text-guided 3D face generation methods (Tab. 1). On FFHQ [20], we evaluate the realism and diversity of rendered images by computing FID and KID scores using random samples for noise and CLIP embedding. On MMCelebA [49], we follow the same random sampling as FFHQ for FID score computation, and we use the provided text annotations for CLIP score computation.
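For reference, the CLIP score metric reduces to a cosine similarity between two embedding vectors; a minimal sketch, assuming the CLIP image and text embeddings have already been computed:

```python
import numpy as np

def clip_score(img_emb, txt_emb):
    """Cosine similarity between a CLIP image embedding and a CLIP text
    embedding, as used for the semantic-consistency metric."""
    img = np.asarray(img_emb, dtype=float)
    txt = np.asarray(txt_emb, dtype=float)
    return float(img @ txt / (np.linalg.norm(img) * np.linalg.norm(txt)))
```

Because this score only measures embedding-space agreement, it can be gamed by unrealistic samples, which is why it is reported alongside FID here.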
For experimental fairness, we use the given 3DMM coefficients associated with the samples. On both datasets, our method demonstrates higher fidelity and semantic consistency. Text-Image Matching Performance. However, CLIP score is not a reliable measure on its own without FID. In Table 1, CLIPortrait achieves good scores in both. But in a comparison of CLIP score alone, the CLIPFace method beats CLIPortrait on the FFHQ-Text dataset [60] (Tab. 2). Since the authors do not provide the text prompts used in the original CLIPFace publication, we used the FFHQ-Text dataset to compute CLIP score. While its CLIP score is higher, even distant inspection shows that samples created by CLIPFace do not look realistic. In contrast, our approach produces plausible appearance for the face as well as details in the hair or the presence of eyeglasses.
Table 2. CLIP score is a poor indicator of quality. Our approach, CLIPortrait, generates more plausible portraits than CLIPFace. Text prompts: (a) "She has blonde hair and wears eyeglasses", (b) "He has straight hair, and bags under eyes. He is wearing a necktie", (c) "The person wears heavy makeup, necklace. She has arched eyebrows, and black hair", (d) "This young person has brown hair".
Table 3. Representative related methods in generative 3D face synthesis. Ours (CLIPortrait) is the only method to allow text-guided synthesis of high-quality 3D portraits with explicit geometry/camera control from an unlabelled 2D dataset.
Inference Time Efficiency. We present the generation speed of our method against CLIPFace in Table 2. The metrics were measured from the moment a new text prompt was input until the generation of a 3D portrait. The optimization-based text-to-3D face method CLIPFace [1] requires training from scratch for every new text prompt, resulting in a minimum time cost of 24 minutes. In comparison, CLIPortrait achieves a generation speed of approximately 0.10 seconds under the same settings as CLIPFace [1].
These results demonstrate the efficiency of our method in generating high-quality 3D-aware portraits from text prompts at smooth, nearly interactive frame rates.
# 5. Related Work
3D-Aware Generative Face Synthesis. GANs [8, 20, 21] became increasingly popular in the last decade for creating high-quality photo-realistic images. Recent works have built 3D-aware, multi-view-consistent GANs from a collection of single-view 2D images in an unsupervised manner. The key idea is to combine differentiable rendering with 3D scene representations such as meshes, point clouds, voxels, and implicit neural representations. Among these representations, neural implicit representations [5] have recently become a major focus of attention due to their superior rendering quality. Even though previous 3D-aware GANs can control camera viewpoints, they lack precise and semantically coherent control over geometry and appearance attributes. To tackle geometry control, recent works [2, 50] propose articulated generative 3D faces with 3D parametric model control. For appearance control, [52] achieves text-guided 3D face generation without precise geometry control. In contrast, our method offers control over both appearance and geometry in the generation of 3D faces.
Figure 9. Text-guided 3D face appearance manipulation. Increasing $\alpha$ increases prompt alignment, which is not possible in TG-3DFace [52]. As there is no public TG-3DFace [52] code, we took images from their original paper and used the same "blue eyes" prompt.
Figure 10. Text and 3D control (face shape, expression, and camera). Our method shows improved quality against CLIPFace [1], and improved diversity and control against TG-3DFace [52], which cannot vary face shape or expression. These examples have $\alpha = 1$, showing the strongest, most dramatic response to the text prompt, e.g., very red lips or very blonde (white) hair. Text-to-3D Face Generation.
The goal here is to produce an image that visually depicts a text description. This can be accomplished with GANs [11, 25, 39, 45, 51, 54, 55, 61], auto-regressive models [9, 10, 12, 24, 37, 46, 53, 57] and diffusion models [16, 32, 38, 41, 42]. Some works focus on text-guided facial image generation [31, 35, 43, 44, 47, 49]. However, these methods only generate single-view images and do not consider 3D-aware face generation. For text-to-3D face generation, existing methods [1, 56] build on 3D morphable face models and generate 3D faces with geometry and texture. Owing to the parametric model, these approaches can explicitly control expression and pose; however, the generation results lack shape variation. To address shape variation, TG-3DFace [52] proposed text-to-face cross-modal alignment for high-quality 3D-aware face synthesis. This method has two limitations: 1) lack of explicit 3D geometry control, and 2) requirement of a text-annotated training dataset. Table 3 compares existing text-to-3D face generation methods. Ours is the only method that allows text-guided synthesis of high-quality 3D portraits with explicit geometry/camera control from an unlabelled 2D dataset.
Figure 11. Limitation: Out-of-distribution Prompts. Prompt: "He is a werewolf", on random FLAME parameters (producing a thin face with a sideways expression). Certain concepts such as specific artworks, celebrities, or mythical creatures that should exist in the Internet-scale datasets used to train CLIP are lost, as we train our generator on a specific small subset of facial images from FFHQ.
We consider the problem of disentangling 3D from large vision-language models, which we show on generative 3D portraits. This allows free-form text control of appearance attributes like age, hair style, and glasses, and 3D geometry control of face expression and camera pose. In this setting, we assume we use a pre-trained large vision-language model (LVLM; CLIP) to generate from a smaller 2D dataset with no additional paired labels and with a pre-defined 3D morphable model (FLAME). First, we disentangle using canonicalization to a 2D reference frame from a deformable neural 3D triplane representation. But another form of entanglement arises from the significant noise in the LVLM's embedding space that describes irrelevant features. This damages output quality and diversity, but we overcome this with a Jacobian regularization that can be computed efficiently with a stochastic approximator. Compared to existing methods, our approach produces portraits with added text and 3D control, where portraits remain consistent when either control is changed. Broadly, this approach lets creators control 3D generators on their own 2D face data without needing resources to label large data or train large models.
[ "cs.CV" ]
# 1. Introduction
With the continued development of Large Language Models (LLMs) [1], [2], [3], specialized versions tailored for the code domain [4], [5], [6] have demonstrated promising capabilities in code understanding and generation. Notable applications such as Amazon CodeWhisperer and GitHub Copilot have significantly boosted developer productivity and been widely adopted by millions of developers [7]. Building on these advances, numerous studies have broadened the applications of LLMs to include cybersecurity tasks, particularly in code vulnerability analysis [8], [9], [10]. Unlike traditional tools that depend on post-hoc verification through static [11] or dynamic analysis [12], LLMs offer real-time security feedback during the coding process [13] and can generate secure code while following detailed instructions [14], significantly reducing the risk of software vulnerabilities. Despite their potential, studies have shown that LLMs do not guarantee code security and are prone to generating syntactically correct but semantically insecure code [15], [16], [17]. This raises concerns about their utility in code vulnerability analysis and underscores the urgent need to assess their capabilities in the security domain [18], [19]. However, existing benchmarks for code vulnerability, such as CVEFixes [20], BigVul [21], and DiverseVul [22], primarily focus on straightforward tasks like identifying or repairing publicly known vulnerable code snippets within predefined scenarios. These benchmarks fall short as they fail to discern whether success in resolving vulnerabilities stems from leveraging pre-trained parameterized knowledge [23], [24], or from genuine logical reasoning about the relationship between code and vulnerabilities.
This distinction is vital for verifying the reliability of LLMs, as reliance exclusively on pre-trained patterns often leads to poor generalization [25] and inconsistent interoperability [26], especially when addressing semantic and structural variations in code within real-world scenarios. To overcome these limitations, we formally define the core capabilities required for vulnerability analysis and introduce SV-TRUSTEVAL-C, a new question-answering benchmark designed for the trustworthy evaluation of LLMs in source code vulnerability analysis. This benchmark involves: Structure Reasoning: Unlike unstructured natural language, code inherently possesses rigorous structural information. For natural language-driven LLMs, the ability to comprehend this code structure is essential for effective code understanding and reasoning [27], [28]. In vulnerability analysis, understanding the relationships between code elements is critical for assessing the scope and impact of vulnerabilities and for implementing accurate repair strategies. This benchmark evaluates the ability of LLMs to identify and predict the effects of modifications within code elements on each other. Semantic Reasoning: Leveraging domain knowledge to adapt to changes in code is crucial for maintaining robustness in dynamic environments. This benchmark component evaluates LLMs through scenarios that simulate real-world conditions, including: Counterfactual scenarios, where altered code semantics challenge LLMs to apply logical reasoning beyond learned patterns; Goal-driven scenarios, where LLMs are tasked with completing code while ensuring functionality and security, testing their ability to handle complex modifications without introducing vulnerabilities; and Predictive scenarios, where LLMs must accurately identify and differentiate between various types of vulnerabilities, including scenarios where vulnerabilities will not be triggered at runtime, thereby assessing their application of security concepts. 
To develop SV-TRUSTEVAL-C, we created a Structure-Oriented Variants Generator capable of extracting structural information, perturbing code semantics based on the provided base code and classifications (Safe, Unsafe code)1, and increasing code complexity in alignment with data flow and control flow graphs [29]. We conducted extensive experiments using the SV-TRUSTEVAL-C benchmark with zero-shot and in-context learning inference to evaluate eleven popular LLMs across various parameter scales. The results reveal that LLMs struggle to recognize relationships between code elements and predominantly rely on pattern matching rather than logical reasoning in vulnerability analysis scenarios. This highlights the need for specialized improvements in LLMs to enhance their practical utility in real-world code vulnerability analysis. Our contributions are the following: 1) Reasoning-based Benchmark for Vulnerability Analysis: We present SV-TRUSTEVAL-C, the first benchmark designed to assess LLMs' ability to analyze source code vulnerabilities through logical reasoning and structural understanding, moving beyond mere pattern recognition. 2) Structure-Oriented Variants Generator: We created a generator that systematically extracts structural information, alters code semantics, and increases complexity based on data and control flow graphs using Safe and Unsafe code pairs. 3) Identifying Gaps in LLM Capabilities: Evaluating eleven LLMs revealed their reliance on pattern matching over logical reasoning in vulnerability analysis, highlighting the need for enhancing their reasoning capabilities in security applications.
# 2. Background and Related Work
Historically, identifying and mitigating source code vulnerabilities involved manual code reviews, which were both time-consuming and error-prone [38].
Automated tools and methodologies, such as static analysis (e.g., FindBugs [39], PMD [40], and Checkstyle [41]) and dynamic analysis techniques like fuzz testing [42], have evolved to increase detection accuracy and efficiency [43], [44]. Advances in machine learning and deep learning have significantly enhanced vulnerability detection capabilities [43], [45], [46], [47], [48], [49]. More recently, LLMs [1] have been explored for their potential to understand complex code patterns, predict vulnerabilities, and even automate vulnerability repairs [4], [50], [51], [52], [53], [54]. Despite these advances, questions remain regarding the reliability of LLMs in detecting and repairing specific vulnerabilities under diverse conditions, highlighting the need for rigorous benchmarking to establish their efficacy and trustworthiness in software security applications.

# 2.1. Code Vulnerability Benchmarks

Benchmarks in the code vulnerability domain are designed to evaluate various stages and aspects of how LLMs and other automated tools handle vulnerabilities in code. Broadly, these benchmarks can be categorized according to five key tasks: Identification, Repair, Safe-Generation, Question-Answering (QA), and Reasoning. Each category addresses a distinct component of understanding and managing code vulnerabilities, providing specific datasets and metrics to assess model performance. Table 1 highlights these benchmarks, detailing their focus and labeling which of the five tasks each satisfies. To clearly define the tasks and distinguish our work from existing benchmarks, we formally characterize each of the five vulnerability domain tasks using mathematical formulations in the following sections.

2.1.1. Identification Tasks. These tasks involve detecting and classifying vulnerable code snippets while evaluating whether each snippet contains vulnerabilities and pinpointing the specific vulnerable segments when necessary.
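As a minimal sketch of how line-level identification can be scored, the snippet below factorizes $P(Y \mid X)$ autoregressively over per-line labels, as formalized in the equations that follow. The `stub_model` and its `strcpy` heuristic are illustrative stand-ins, not part of any benchmark.

```python
import math

def line_label_log_prob(lines, labels, label_line):
    """Toy autoregressive line labeller:
    log P(Y | X) = sum_t log P(y_t | X, y_<t)."""
    logp = 0.0
    history = []                      # labels predicted so far (y_<t)
    for y_t in labels:
        p_vuln = label_line(lines, history)   # P(y_t = 1 | X, y_<t)
        p = p_vuln if y_t == 1 else 1.0 - p_vuln
        logp += math.log(p)
        history.append(y_t)
    return logp

# Hypothetical stub model: flags any line calling strcpy, with 90% confidence.
def stub_model(lines, history):
    t = len(history)                  # index of the line being labelled
    return 0.9 if "strcpy" in lines[t] else 0.1

code = ["char buf[8];", "strcpy(buf, src);", "return 0;"]
gold = [0, 1, 0]
score = line_label_log_prob(code, gold, stub_model)
```

A real evaluation would replace `stub_model` with per-line probabilities elicited from the LLM under test.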
Benchmarks such as Devign [30] and VulPatchPairs [31] aim to evaluate a model’s capability to accurately identify whether a piece of code is vulnerable. Additionally, benchmarks like BigVul [21], CrossVul [20], CVEFixes [20], and DiverseVul [22] not only classify code snippets as vulnerable or safe but also provide insights into different types of vulnerabilities gleaned from real-world open-source repositories. Formally, given a code snippet $X$, the objective is usually to predict a binary label $y$ indicating vulnerability:

$$ P(y \mid X), \quad \text{where } y \in \{0, 1\}, $$

Some benchmarks extend this task to identify specific lines or elements of code that are vulnerable, which can be formulated as:

$$ P(Y \mid X) = \prod_{t=1}^{|X|} P(y_t \mid X, y_{<t}), $$

where $y_{<t}$ denotes the vulnerability predictions of all preceding lines, and $Y = \{y_1, \ldots, y_{|X|}\}$ is the set of predicted labels for each line in $X$. These tasks challenge models to combine global contextual understanding with fine-grained local analysis, reflecting the complex nature of real-world vulnerability detection.

2.1.2. Repair Tasks. Repair tasks generate secure versions of vulnerable code snippets by accurately identifying and fixing the underlying issues. Benchmarks that combine identification and repair tasks provide pairs of vulnerable code snippets $(X_v)$ and their corresponding fixed versions $(X_r)$. Examples include BigVul, CrossVul, CVEFixes, and DiverseVul, which gather data from commit records on open-source platforms like GitHub. This data includes the original vulnerabilities and their subsequent fixes, serving as ground truth for evaluating a model’s repair capabilities. The formal objective for vulnerability repair tasks is:

TABLE 1: Comparison of recent code vulnerability evaluation benchmarks against our proposed SV-TRUSTEVAL-C.
“CWE Scope” indicates the number of CWEs in a benchmark; “Num. of Func.” shows the total functions evaluated. Icons indicate dataset origin: manually labeled real-world data, automatically labeled real-world data, or synthetic data. Support levels are shown as full, partial, or none. Key features of our benchmark (CWE coverage, function count, and reasoning capability) are highlighted relative to SECLLMHOLMES. Evaluation tasks: Identification (Eq. 1 and Eq. 2) measures vulnerability detection and localization; Repair (Eq. 3) assesses conversion of vulnerable code into secure versions; Generation (Eq. 4) evaluates producing safe code without introducing new vulnerabilities; QA (Eq. 5) tests domain knowledge via targeted question-answering; and Reasoning (Eq. 6) evaluates the underlying reasoning process.

$$ P(X_r \mid X_v) = \prod_{t=1}^{|X_r|} P(x_t \mid x_{<t}, X_v), $$

where $X_v$ is the vulnerable code and $X_r$ is the repaired code. This formulation assesses a model’s proficiency in transforming vulnerable code into a secure version, reflecting real-world scenarios where developers or automated tools must identify and rectify security flaws efficiently.

2.1.3. Safe-Generation Tasks. These tasks focus on generating secure code snippets while evaluating a model’s ability to either produce only safe code when prompted or deny requests that could lead to insecure code, ensuring that no new vulnerabilities are introduced. Benchmarks such as SVEN [14] emphasize controlled code generation, where the LLM generates code only if the input prompt $(c)$ is considered “safe,” and outputs a denial response if it is not.
This behavior is expressed as:

$$ P(X \mid c) = \begin{cases} \prod_{t=1}^{|X|} P(x_t \mid x_{<t}, c), & \text{if } c \text{ is safe}; \\ \delta(X = D), & \text{otherwise}, \end{cases} $$

where the Dirac delta function $\delta(X = D)$ represents a probability distribution concentrated entirely on a denial response $D$. Benchmarks like RobustAPI [33] assess how frequently the code generated by an LLM contains vulnerabilities by measuring $P(V(X) = 1 \mid c) = \mathbb{E}_{X \sim P(X \mid c)}[V(X)]$, where $V(X)$ is an indicator function that equals 1 if $X$ is vulnerable, and 0 otherwise. Other benchmarks, such as CYBERSECEVAL 2 [36], investigate controlled code generation in the presence of prompt injection attacks [55], [56], measuring the resilience of LLMs to malicious instructions. Collectively, these safe-generation tasks and benchmarks evaluate not only the correctness and functionality of the generated code but also its security and the LLM’s robustness against introducing vulnerabilities.

2.1.4. Question-Answering Tasks. Question-Answering (QA) tasks test a model’s knowledge and understanding of vulnerability concepts by requiring it to accurately and thoroughly answer security-related questions with contextually appropriate explanations. Benchmarks like CyberBench [35] and CyberMetric [34] present various types of vulnerability-related questions and require models to provide correct and contextually appropriate answers:

$$ P(A \mid Q), $$

where $Q$ is a question and $A$ is the model’s answer. These tasks evaluate the depth and applicability of a model’s understanding of vulnerabilities, including its capacity to explain vulnerability concepts, implications, and potential mitigations or fixes.

2.1.5. Vulnerability Reasoning Tasks.
Vulnerability Reasoning tasks evaluate a model’s ability to logically explain vulnerabilities, differentiating true analytical reasoning from mere pattern matching. While earlier tasks primarily assess an LLM’s effectiveness in detecting vulnerabilities, they typically do not clarify whether the model relies on superficial pattern recognition or engages in genuine logical reasoning. This distinction is crucial, as vulnerability analysis demands deep code understanding to accurately interpret complex structures, dependencies, and contextual nuances beyond pattern matching capabilities. Logical reasoning empowers LLMs to identify novel and intricate security flaws, adapt effectively to emerging threats, and deliver accurate, context-sensitive evaluations essential for securing complex software systems. Addressing this critical gap, SecLLMHolmes [37] introduces an approach specifically designed to assess reasoning capabilities in code vulnerability analysis. Vulnerability Reasoning involves formulating both the reasoning process and the identification of vulnerabilities as:

$$ P(O \mid X) = \prod_{t=1}^{|O|} P(o_t \mid o_{<t}, X), $$

where $O = R \parallel V$ represents the concatenation of reasoning steps $R$ and vulnerability identification $V$. This task evaluates an LLM’s capacity to understand and explain the logic behind a vulnerability. However, SecLLMHolmes provides only a limited set of code scenarios (48 handcrafted code samples) within MITRE’s Top 25 Common Weakness Enumeration (CWE)², and its evaluation relies primarily on sentence similarity, $\text{Similarity}(O_{\text{LLM}}, O_{\text{Human}})$, between LLM-generated reasoning and human-crafted reasoning. Because of its limited test coverage and the low interpretability of similarity scores, this method cannot conclusively show whether LLMs rely on pattern matching or genuine reasoning.
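To make the interpretability concern concrete, here is a hedged sketch of similarity-based scoring, using a bag-of-words cosine as a crude stand-in for the sentence-similarity model: a response can score highly simply by echoing security vocabulary, without any genuine reasoning behind it.

```python
from collections import Counter
import math

def cosine_bow(a, b):
    """Bag-of-words cosine similarity, a simplified stand-in for
    Similarity(O_LLM, O_Human) used by SecLLMHolmes-style scoring."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

llm = "the buffer overflow occurs because strcpy copies without bounds checking"
human = "strcpy copies the input without bounds checking causing a buffer overflow"
sim = cosine_bow(llm, human)  # high word overlap yields a high score
```

The two explanations above share vocabulary, so they score well regardless of whether the model actually traced the flaw, which is exactly the limitation noted in the text.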
In this work, we propose a systematic approach to evaluating LLMs’ reasoning abilities in vulnerability analysis scenarios. Our benchmark, built upon the synthetically generated Juliet Test Suite covering 82 distinct CWEs, ensures 100% label accuracy while introducing scalable task complexity to comprehensively evaluate LLMs’ vulnerability reasoning capabilities. It is designed to overcome the limitations of existing benchmarks by offering extensive test scenarios and robust evaluation metrics that can better discern genuine logical reasoning from pattern recognition.

# 2.2. Data Sources and Quality

Existing benchmarks utilize data and labels from three primary sources. First, open-source repositories are used in an automated manner, where datasets like BigVul and CrossVul are created by scraping code and commit information from platforms such as GitHub, with vulnerability labels automatically assigned based on commit metadata and code diffs. While this method can produce extensive datasets, it often suffers from imprecise labeling and inconsistent data quality [9]. Second, some benchmarks, including Devign and VulPatchPairs, combine automated data collection with manual verification by human experts, ensuring greater label accuracy and more reliable ground truth information, although resulting in smaller datasets. Third, synthetic sources such as the Juliet Test Suite [32] and Romeo [57] systematically generate code with high label accuracy, though risks of oversimplification remain. Our benchmark dataset is built on synthetic data but extends its complexity through our Structure-Oriented Variants Generator, ensuring both label accuracy and scalable complexity that approximates real-world scenarios.

# 2.3. Code Reasoning

Code reasoning involves analyzing and predicting a program’s behavior without direct execution, aiding tasks such as vulnerability detection, code generation, and program comprehension.
Traditional approaches leverage code execution information, such as inputs, outputs, and intermediate runtime states, to enhance model performance in these areas [58], [59], [60], [61]. For example, Lever [59] integrated a verifier that utilizes execution results to improve code generation performance, while Chen et al. [60] employed output-based feedback for guiding LLMs in self-debugging generated code. Meanwhile, some studies have incorporated dynamic features such as runtime program states to train language models capturing deeper code semantics [61], [62]. Recent benchmarks have started evaluating the code reasoning abilities of LLMs. CodeMind [28] tests how well models reason about input-output relations in code, whereas CRUXEval [27] goes further to measure how models infer runtime behavior, including code coverage and execution paths. These benchmarks demonstrate progress in assessing LLMs’ code understanding but primarily focus on general code correctness and runtime behavior inference. Our work differs by explicitly emphasizing code vulnerability reasoning. Unlike general code reasoning tasks that concentrate on functionality and runtime behavior, vulnerability reasoning requires understanding complex code semantics and contextual security considerations. Existing benchmarks like CRUXEval and CodeMind do not focus on this specialized aspect. We introduce a benchmark tailored to evaluating how well LLMs handle vulnerability reasoning tasks, aiming to provide more targeted insights into model capabilities in analyzing code for security flaws and advancing our understanding of code reasoning in security contexts.

# 3. SV-TrustEval-C Benchmark

This section first outlines the capabilities necessary for reliable vulnerability analysis. Subsequently, we describe the key features of the SV-TRUSTEVAL-C Benchmark, which leverages our Structure-Oriented Variants Generator.

# 3.1.
Core Capabilities

To thoroughly evaluate the reliability of LLMs in vulnerability analysis, we design two main categories of tasks: Structure Reasoning and Semantic Reasoning. These categories correspond to fundamental aspects of code vulnerability tasks, as illustrated in Figure 1.

[Figure 1: Example multiple-choice questions for each task category. Structure Reasoning comprises DataFlow-wise and ControlFlow-wise questions that ask how modifying one code element affects another, given the control/data flow graph information; Semantic Reasoning comprises Counterfactual, Goal-driven, and Predictive scenarios posed over generated code variants.]

3.1.1. Structure Reasoning.
Structure Reasoning evaluates how accurately LLMs understand the relationships and interactions between code elements, focusing on both data flow and control flow, a critical capability for identifying and mitigating potential security threats. This category aligns with the task:

$$ P(D \mid X) = \prod_{i=1}^{M} \prod_{j=1}^{M} P(D_{ij} \mid X), $$

where $D_{ij}$ denotes the relationship between elements $i$ and $j$ (with $D_{ij} = 1$ indicating a connection, and 0 otherwise), $M$ is the total number of elements, and $X$ represents the code snippet. Specifically:

DataFlow-wise Reasoning: Evaluates the model’s understanding of how data moves through the code, essential for identifying vulnerabilities related to data handling.

ControlFlow-wise Reasoning: Assesses the model’s proficiency in analyzing the control flow of the program, vital for understanding how different parts of the code execute and interact.

These tasks examine the model’s ability to discern how code elements correlate and how vulnerabilities can propagate through these interactions.

3.1.2. Semantic Reasoning. Semantic Reasoning evaluates an LLM’s adaptability and understanding of changes in code semantics under various scenarios and transformations, ensuring security and functionality, and encompasses three sub-tasks: Counterfactual (predicting vulnerabilities when code is altered), Goal-driven (safely modifying code to meet specific aims), and Predictive (classifying code variants by their security impact). Formally, for the Counterfactual scenario, we consider:

$$ P\big(V(f(X_s, t)) = 1 \mid f(X_s, t), X_s\big), \quad t \in \{\text{safe, unsafe, impaired}\}, $$

where $V(X) = 1$ if $X$ is vulnerable (otherwise 0), and $f(X_s, t)$ transforms $X_s$ based on behavior $t$.
We introduce three transformation types:

Safe Code Transformations: Modifications that preserve functionality without adding vulnerabilities.

Unsafe Code Transformations: Modifications that introduce new vulnerabilities into the code.

Impaired Code Transformations: Alterations that keep the code vulnerability-free but diminish its original functionality.

These tasks assess the model’s consistency and adaptability in analyzing vulnerabilities under semantic alterations, reinforcing the importance of understanding how code transformations affect security properties. Goal-driven tasks evaluate an LLM’s ability to modify code to achieve a specific outcome without introducing vulnerabilities:

$$ P(V(X) = 0, X \mid c), $$

where $X$ is the modified code snippet to meet the goal, $c$ is the given context or constraint, and $V(X) = 0$ means the modified code remains vulnerability-free. In these tasks, the model is prompted to insert or alter code to achieve specified goals (e.g., adding features or fixing bugs) while ensuring no new vulnerabilities are introduced. Code templates emphasize control statements that can either bypass, trigger, or prevent vulnerabilities. By ensuring the resulting code remains secure and functional, these scenarios evaluate the model’s proficiency in context-aware vulnerability reasoning and code refinement. The joint probability formulation, $P(V(X) = 0, X \mid c)$, assesses both the safety of the modified code and its successful production under the given context $c$. Predictive scenarios challenge the LLM to classify code variants based on whether they introduce, remove, or do not affect vulnerabilities and code functionality:

$$ P(k \mid f(X)), $$

where $k$ represents the code state or type of vulnerability, including non-vulnerable and potentially impaired states, and $f(X)$ denotes the code variant.
These tasks provide code variants that may introduce new vulnerabilities, remove existing vulnerabilities, remain vulnerability-free but become functionally impaired, or have no effect on vulnerability, and require the LLM to accurately classify these code variants, distinguishing among various vulnerability states and the possibility of impaired functionality. These Predictive tasks assess the model’s capability to distinguish among different vulnerabilities and its understanding of how code modifications influence security.

# 3.2. Structure-Oriented Variants Generator

This section describes the purpose of our Structure-Oriented Variants Generator, subsequently referred to as generator, and its role in generating our benchmark dataset. The generator contains a Flow Extractor and a Behaviour Simulator. Given a code snippet from an existing codebase, the generator strategically modifies the behavior of code variants into three categories: safe (preventing vulnerabilities), unsafe (triggering vulnerabilities), and impaired (bypassing the original function), while enhancing their structural complexity. It also scales the structural complexity to suit the benchmark questions, ensuring a diverse set of code variants.

[Figure 2: Structure types used by the Behaviour Simulator. Outer, Inner, and Outer&Inner configurations place masked control statements ([Mask 1], [Mask 2], [Mask 3]) around and within the $C_{\text{safe}}$, $C_{\text{unsafe}}$, and $C_{\text{impaired}}$ code regions.]

3.2.1. Flow Extractor. Given a code snippet $C$, our Flow Extractor constructs the data flow graph $\mathcal{G}_d = (\mathcal{V}_d, \mathcal{E}_d)$ and the control flow graph $\mathcal{G}_c = (\mathcal{V}_c, \mathcal{E}_c)$. Initially, we parse $C$ into an Abstract Syntax Tree (AST) using a parser generator tool³ to extract code syntactic information.
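As a simplified illustration of the data-flow edges $\mathcal{E}_d$ this extraction recovers, the sketch below builds def-use edges over a toy pre-parsed program. This is a hedged stand-in: the actual extractor walks the parser-generated AST of C code, whereas here each statement is hand-written as a (defined variable, used variables) pair.

```python
def build_dataflow_edges(statements):
    """statements: list of (defined_var, [used_vars]) in program order.
    Returns edges (u, v): data defined at statement u flows into statement v."""
    edges = set()
    last_def = {}                      # variable -> index of its latest definition
    for idx, (target, sources) in enumerate(statements):
        for src in sources:
            if src in last_def:        # a use of a previously defined variable
                edges.add((last_def[src], idx))
        last_def[target] = idx         # this statement now defines `target`
    return edges

# Toy program: int a = input(); int b = a + 1; sink(b);
program = [("a", []), ("b", ["a"]), ("_", ["b"])]
edges = build_dataflow_edges(program)  # {(0, 1), (1, 2)}
```

The multi-hop chain `a -> b -> sink` is exactly the kind of path whose hop count later scales question difficulty.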
We then apply Depth-First Search (DFS) to traverse the AST, identifying code elements $v_i \in \mathcal{V}_d$ and $v_i \in \mathcal{V}_c$, such as variables, literals, and expressions, assuming different contexts for data and control flows respectively. Concurrently, we establish edges $e_{i,j}^d \in \mathcal{E}_d$ and $e_{i,j}^c \in \mathcal{E}_c$ to represent data dependencies between $v_i^d$ and $v_j^d$ through assignments and function calls, as well as control flows between elements $v_i^c$ and $v_j^c$.

3.2.2. Behaviour Simulator. Given the graph data extracted by the Flow Extractor and the required behaviors of the code (safe, unsafe, and impaired), our behavior simulator manages the final behavior of the code by modifying the safe code $C_{\text{safe}}$ and the unsafe code $C_{\text{unsafe}}$ within the original code $C$. Initially, the simulator incorporates control flow variants from Juliet [32], introducing additional control branches within $\mathcal{G}_c$. This step tests if LLMs can maintain consistency with code variants that have similar syntax but different structures. As illustrated in Figure 2, the simulator then introduces Outer, Inner, and Outer&Inner structures to manage the code’s behavior, infusing $C_{\text{safe}}$, $C_{\text{unsafe}}$, and our $C_{\text{impaired}}$ with masked control statements into various structural configurations. Next, we fill the masked control statements to create code variants with the specified behavior. Finally, the Behavior Simulator generates multiple variants for each base code $C$, categorized by structure and behavior, to test LLM adaptability and accuracy.

# 3.3. Question Generation

After creating code variants from existing codebases, we develop questions for structural and semantic reasoning scenarios. Sample questions are provided in Appendix A.

3.3.1. DataFlow-wise Questions. For DataFlow-wise structure reasoning, we use the Flow Extractor to construct the data flow graph $\mathcal{G}_d$. We then evaluate each source node $v_i^d \in \mathcal{V}_d$ together with function-only target nodes $v_j^d$ to determine whether modifying $v_i^d$ influences $v_j^d$’s behavior. Each generated question falls into one of the following categories (see Figure 3):

• A: Ground truth is derived directly from $\mathcal{E}_d$, which may involve direct or multi-hop connections within $\mathcal{G}_d$.
• B: No connection in either $\mathcal{G}_d$ or $\mathcal{G}_c$.
• C: Some connections are not explicitly shown in $\mathcal{G}_d$ but are implied through control flow $\mathcal{G}_c$. For instance, if $v_j^d$ is within an $\mathrm{if}(v_i^d == 1)$ block, and $v_i^d$ is involved in the condition, it has an indirect impact. Similarly, if $v_i^d$ triggers a break that prevents execution from reaching $v_j^d$, it also constitutes an indirect impact.
• A&C: In some cases, both A and C apply, namely when $v_i^d$ is connected to $v_j^d$ in $\mathcal{G}_d$ and also influences a control statement governing $v_j^d$.

The difficulty of each question increases with the number of hops between $v_i^d$ and $v_j^d$.

3.3.2. ControlFlow-wise.
For ControlFlow-wise structure reasoning, we use $\mathcal{G}_c$, obtained from the Flow Extractor, to enumerate all control statements $v_i^c \in \mathcal{V}_c$ linked to function-only target nodes $v_j^c$, to assess whether modifying $v_i^c$ would impact $v_j^c$. Each generated question belongs to one of the following classes/options.

• A: The impact of control statement $v_i^c$ on function $v_j^c$ can be directly obtained from $\mathcal{G}_c$.
• B: No perceivable connections within either $\mathcal{G}_d$ or $\mathcal{G}_c$.
• C: The control statement $v_i^c$ directly or indirectly affects the arguments of $v_j^c$ as specified within $\mathcal{G}_d$.

The difficulty of ControlFlow-wise questions scales with the number of hops from $v_i^c$ to $v_j^c$ within $\mathcal{G}_c$.

3.3.3. Counterfactual. In the Counterfactual semantic reasoning task, for each code snippet $C$, we generate code variants $\mathcal{C}$ using the Structure-Oriented Variants Generator to construct a set of options. Each generated question falls into one of the following classes.

• A: The safe code variants do not trigger the vulnerability while preserving $C$’s original functionality.
• B: The impaired code variants avoid triggering the vulnerability but fail to maintain $C$’s functionality.
• C: The unsafe code variants trigger the vulnerability.

The difficulty level for Counterfactual scenarios is scaled based on the number of control flow injections and the specific code structures introduced by the generator.

3.3.4. Goal-driven. The goal-driven semantic reasoning questions assess LLMs’ ability to fix vulnerabilities without altering code functionality.
We generate code variants using the Structure-Oriented Variants Generator with masked control statements as conditions. LLMs are then tasked with selecting the correct path to resolve the vulnerability while preserving the intended functionality. Answer options are generated using the Behavior Simulator and include: (1) Bypass, (2) Resolve (Expected Behavior), and (3) Trigger the Vulnerability. The difficulty of each question scales with the complexity of the underlying code variant. Since the correct answer always belongs to the same class (i.e., “Resolve”) in this task, we apply random shuffling of both the options and answer placements to prevent position bias.

[Figure 3: Question statistics by type, option class, and difficulty scale: DataFlow-wise (2430), ControlFlow-wise (1345), Counterfactual (3748), Goal-driven (1159), and Predictive (719) questions, built from 377 base code files (377 safe and 377 unsafe functions) covering 82 unique CWEs. Difficulty spans 1-hop to 4-hop distances for structure questions and Outer/Inner/Outer&Inner structures, with and without control-flow injections, for semantic questions, with random option shuffling applied.]

3.3.5. Predictive. The Predictive semantic reasoning tasks also use code variants generated by the generator, challenging LLMs to identify which variants trigger specific vulnerabilities. This evaluation tests the models’ ability to detect and accurately predict the presence of vulnerabilities, and to differentiate between various types of vulnerabilities. The answer options, (1) Bypass, (2) Target CWE (Expected Behavior), and (3) Different CWE, follow the same random shuffling strategy as the Goal-driven questions.
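The shuffling step described above can be sketched in a few lines: shuffle the candidate answers and recompute which letter is correct, so the ground-truth class never sits at a fixed position. The option strings are illustrative.

```python
import random

def shuffle_options(options, answer_text):
    """Shuffle answer options and re-derive the correct letter, so the
    ground-truth class (e.g. 'Resolve') does not always appear as option A."""
    order = list(options)
    random.shuffle(order)
    labelled = dict(zip("ABCD", order))           # A/B/C/D -> option text
    correct_letter = "ABCD"[order.index(answer_text)]
    return labelled, correct_letter

opts = ["Bypass", "Resolve (Expected Behavior)",
        "Trigger the Vulnerability", "Cannot Determine"]
labelled, correct = shuffle_options(opts, "Resolve (Expected Behavior)")
```

Without this step, a model that always answers "A" would score perfectly on Goal-driven questions despite doing no reasoning at all.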
Difficulty scales with the complexity of each variant.

# 3.4. Benchmark Statistics

The SV-TRUSTEVAL-C benchmark, developed using the C programming language components from the Juliet Test Suite [32], includes 377 base files, each containing both a safe and an unsafe function, covering 82 distinct CWEs. Our generator produced a total of 1,297 unsafe and 1,286 safe compilable code variants. Utilizing these variants, we created 9,401 questions. The detailed question statistics are presented in Figure 3.

TABLE 2: Comparison of model performance based on accuracy. The best performance and second-best performance are highlighted to denote the top two scores, respectively. The $\star$ symbol identifies models in the “instruct” version, which are specifically fine-tuned to enhance instruction-following capabilities. Additionally, we report the false positive rate (FPR) for misclassified “safe” instances (FPRsafe) in the baseline scenario. The $\downarrow$ symbol indicates a decrease in model performance compared with Zero on the current task after introducing ICL, while the $\uparrow$ symbol signifies an improvement in performance.

# 4. Experiments

# 4.1. Experimental Setups

We evaluate eleven of the most popular and representative large language models (LLMs), including GPT-4-turbo [2], GPT-3.5-turbo [1], Llama3.1 [63] in both 405B and 8B versions, Llama3 [64], CodeLlama [4] in 13B and 7B versions, Gemma [65], CodeGemma [66], CodeQwen [6], and Mixtral [67]. We excluded the latest LLM, GPT-o1 [68], due to budget constraints and prohibitively high time consumption. All models have undergone specialized pre-training in the code domain and are available in the “instruct” version, which is fine-tuned to follow prompted instructions, making them well-suited for our benchmark design. We conduct all model inferences at a temperature of zero to ensure more deterministic answers and set a maximum output length of 50 tokens, as only the selected option is required.
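Since only the selected option is needed, each short reply can be parsed for its choice letter. A hedged sketch of such parsing follows (the benchmark's actual answer-extraction logic is not specified in the text):

```python
import re

def extract_choice(reply):
    """Pull the selected option (A-D) out of a short model reply; with
    temperature zero and a 50-token cap, replies should name one letter."""
    m = re.search(r"\b([ABCD])\b", reply)
    return m.group(1) if m else None
```

The word-boundary anchors keep letters inside ordinary words (e.g. the "A" in "Answer") from matching; only a standalone option letter is accepted.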
All other inference hyperparameters are set to their default values for each LLM. Furthermore, we establish conversation-independent threads for each question in our experiment to eliminate potential information leakage during the question-answer (QA) process.

4.1.1. Inference Mode. We utilize both zero-shot inference (Zero) and in-context learning (ICL) approaches [69], [70], [71] to comprehensively evaluate LLMs. In the zero-shot setup, models generate responses based solely on the input question without additional examples, allowing us to assess their inherent understanding and reasoning abilities. In contrast, the in-context learning approach provides models with specific prompts that demonstrate question-and-answer patterns along with corresponding explanations for each answer. This contextual information guides their responses, enhancing their ability to learn from context and more accurately follow the question. The prompts used for in-context learning include a few sample question-response pairs following the template shown in Figure 4. All questions, answer choices, correct answers, and explanations are carefully crafted by human domain experts and subsequently refined by GPT-o1 to ensure accuracy and reduce instructional errors.

[Figure 4: In-context learning prompt template.
Instruction: {Assign a code security expert persona with guidelines to answer the question.}
Context:
• Question(s): {Insert the demo question based on structural or semantic reasoning.}
• Choices: {Provide four possible answers based on the code scenario.}
• Answer(s): {Specify the correct answer, e.g., “B”.}
• Explanation(s): {Provide a detailed explanation justifying why the chosen answer is correct.}
Input Question: {Insert the selected question.}
Output: LLM-generated answer.]

4.1.2. Label Masking.
To avoid label leakage from the original Juliet test suite, we applied label masking to all code snippets, including: 1) removing annotations directly referencing vulnerabilities; 2) replacing vulnerability-specific function names with generic ones (e.g., “Sample func()”); and 3) swapping variable names or tokens that suggest vulnerability (e.g., “good” or “bad”) with neutral terms such as “cat” or “apple.”

4.1.3. Question Design. To develop an effective question-answering template that ensures clear comprehension by LLMs, we manually crafted seven distinct prompts for each question. We then employed GPT-4 as an automatic evaluator [72] to select the most suitable format for each QA.

Figure 5: Overall consistency scores of LLMs on SV-TRUSTEVAL-C. For CodeLlama, the average consistency score across the 7B and 13B versions is reported due to their high similarity in performance. Abbreviations: DFL (DataFlow-wise questions), CFL (ControlFlow-wise questions), CTF (Counterfactual questions), GDV (Goal-driven questions), PRD (Predictive questions).

Additionally, to verify the absence of misleading syntax or semantics, we manually reviewed the intermediate explanations for 50 randomly selected QAs from each question type, ensuring that the LLMs’ responses were well-aligned with the QAs.

# 4.2. Main Results

As shown in Table 2, while LLMs generally require further improvement in structure reasoning, models with over 15 billion parameters significantly outperform their smaller counterparts. Notably, the recently released Llama3.1-405B performs better than GPT-4 in both data flow and control flow dimensions, achieving average scores of 68.58% in zero-shot and 69.14% in in-context learning.
However, in semantic reasoning scenarios, most LLMs perform poorly, with average scores falling below 32% for zero-shot and 46% for in-context learning modes—particularly in Goal-driven tasks that require targeted vulnerability fixes and Predictive scenarios demanding extensive domain knowledge. We also conducted a baseline assessment by classifying code from the original Juliet Test Suite as safe or unsafe using a QA format with the options: A) Vulnerable, B) Non-Vulnerable, C) Do Not Know. The results presented in Table 2 indicate that most models perform unsatisfactorily, especially given that the code originates from synthetically generated datasets like Juliet. This suggests that the models are inadequately pretrained in code vulnerability detection. Additionally, several models—including CodeLlama, CodeQwen, and Gemma—exhibit high false positive rates, erroneously classifying safe code as unsafe in nearly 100% of cases, as illustrated in Table 2. This raises concerns about the efficacy of LLMs in code analysis. Finally, we observe that in-context learning can enhance models’ performance in most cases. Specialized LLMs such as CodeLlama and CodeGemma, which are less effective at understanding complex natural language scenarios, benefit significantly from this approach. Conversely, general-purpose LLMs like Mixtral, which have less pre-training in the code domain, often struggle with code contexts; here, in-context learning serves as a bridge, helping these models adapt to intersecting scenarios. Similar improvements are also observed in models like Llama3.1-405B and GPT-3.5. However, GPT-4 exhibits a dramatic decline in performance in Goal-driven scenarios with in-context learning, indicating that different models may respond variably to this technique and may require specific customization for vulnerability analysis.

4.2.1. Consistency Analysis.
Our consistency analysis assesses how models maintain reliable vulnerability evaluations across different scenarios by focusing on two distinct reasoning tasks: structure reasoning and semantic reasoning. In the structure reasoning scenario, we transition from base code vulnerability analysis to tasks that require understanding the code’s structure, specifically DataFlow-wise (DFL) and ControlFlow-wise (CFL) analyses. Grasping the relationships among code components is essential for accurate vulnerability detection. If a model can identify vulnerabilities but fails to differentiate variable relationships within the source code, it indicates that its capabilities are driven by pre-trained patterns rather than genuine logical code analysis [27], [28]. Conversely, in the semantic reasoning scenario, we investigate whether LLMs can maintain consistent vulnerability assessments across diverse contexts by addressing the logical equivalence of statements such as $A = B$ and $B = A$ [73]. Using the base scenario as a benchmark, we evaluate whether consistency is preserved in the corresponding Counterfactual (CTF), Goal-driven (GDV), and Predictive (PRD) scenarios. This means that if LLMs accurately assess vulnerabilities in the base scenario, they should provide reliable analyses for any code variants within the same semantic context.
For each question derived from the base scenario, we define the consistency scores as:

$$
\begin{aligned}
\mathrm{Cons}_{\mathrm{DFL}} &= \frac{\sum_{i=1}^{N_{\mathrm{DFL}}} \mathbb{I}\left(C_{\mathrm{base}}^{i} = 1 \wedge C_{\mathrm{DFL}}^{i} = 1\right)}{N_{\mathrm{DFL}}}, \\
\mathrm{Cons}_{\mathrm{CFL}} &= \frac{\sum_{i=1}^{N_{\mathrm{CFL}}} \mathbb{I}\left(C_{\mathrm{base}}^{i} = 1 \wedge C_{\mathrm{CFL}}^{i} = 1\right)}{N_{\mathrm{CFL}}}, \\
\mathrm{Cons}_{\mathrm{CTF}} &= \frac{\sum_{i=1}^{N_{\mathrm{CTF}}} \mathbb{I}\left(C_{\mathrm{base}}^{i} = 1 \wedge C_{\mathrm{CTF}}^{i} = 1\right)}{N_{\mathrm{CTF}}}, \\
\mathrm{Cons}_{\mathrm{GDV}} &= \frac{\sum_{i=1}^{N_{\mathrm{GDV}}} \mathbb{I}\left(C_{\mathrm{base}}^{i} = 1 \wedge C_{\mathrm{GDV}}^{i} = 1\right)}{N_{\mathrm{GDV}}}, \\
\mathrm{Cons}_{\mathrm{PRD}} &= \frac{\sum_{i=1}^{N_{\mathrm{PRD}}} \mathbb{I}\left(C_{\mathrm{base}}^{i} = 1 \wedge C_{\mathrm{PRD}}^{i} = 1\right)}{N_{\mathrm{PRD}}},
\end{aligned}
$$

where:

• $\mathbb{I}(\cdot)$ is the indicator function, which equals 1 if the condition inside is true, and 0 otherwise.
• $C_{\mathrm{safe}}^{i}$ and $C_{\mathrm{unsafe}}^{i}$ denote the correctness indicators for safe and unsafe classifications in the base scenario for the $i$-th case.
• $C_{\mathrm{base}}^{i} = \mathbb{I}\left(C_{\mathrm{safe}}^{i} = 1 \wedge C_{\mathrm{unsafe}}^{i} = 1\right)$ is the correctness indicator for the base scenario for the $i$-th case.
• $C_{\mathrm{DFL}}^{i}$ and $C_{\mathrm{CFL}}^{i}$ are the correctness indicators for DataFlow-wise and ControlFlow-wise scenarios for the $i$-th case, respectively.
• $C_{\mathrm{CTF}}^{i}$ is the correctness indicator for the Counterfactual scenario of the $i$-th case.
• $C_{\mathrm{GDV}}^{i}$ and $C_{\mathrm{PRD}}^{i}$ are the correctness indicators for Goal-driven and Predictive scenarios for the $i$-th case.
• $N_{\mathrm{DFL}}$, $N_{\mathrm{CFL}}$, $N_{\mathrm{CTF}}$, $N_{\mathrm{GDV}}$, and $N_{\mathrm{PRD}}$ represent the total number of cases in the DataFlow-wise, ControlFlow-wise, Counterfactual, Goal-driven, and Predictive scenarios, respectively.

Observations: As shown in Figure 5, Mistral-7B exhibits an almost negligible consistency score in all scenarios, largely due to its weak performance in the Base scenarios. By contrast, GPT-4 and Llama3.1-405B substantially outperform other models in both $\mathrm{Cons}_{\mathrm{CFL}}$ and $\mathrm{Cons}_{\mathrm{DFL}}$, reflecting a stronger ability to track dependencies between control statements, variables, and functions within the code. In comparison, the remaining LLMs achieve under 60% consistency in structure reasoning scenarios, indicating that effectiveness in vulnerability analysis alone does not necessarily translate into a robust understanding of these code elements. In the semantic reasoning scenario, the consistency scores for all LLMs are notably low, averaging below 50%.
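The consistency scores defined above all reduce to one counting function. A minimal sketch, assuming per-question 0/1 correctness arrays aligned by case index:

```python
def consistency(base_correct, scenario_correct):
    """Cons_X: fraction of scenario-X cases where BOTH the scenario
    answer and the corresponding base-scenario answer are correct.
    base_correct[i] should already encode that the safe AND unsafe
    base classifications for case i were both right."""
    assert len(base_correct) == len(scenario_correct)
    joint = sum(1 for b, s in zip(base_correct, scenario_correct)
                if b == 1 and s == 1)
    return joint / len(scenario_correct)
```

Under this definition, a model that answers every Goal-driven question correctly but misses the base questions still records a consistency of zero, which is exactly the 0% Cons_GDV behavior the paper reports for some models.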
This suggests that, despite some LLMs being pre-trained in the code domain, they struggle to maintain consistent analysis across variant scenarios, even when these variants incorporate the original code snippet. Particularly in Goal-driven scenarios, all LLMs exhibit minimal consistency. For instance, CodeQwen1.5 and Gemma, despite achieving the highest scores of 33.74% and 41.59% respectively in zero-shot and in-context learning modes for Goal-driven scenarios, record a 0% $\mathrm{Cons}_{\mathrm{GDV}}$. This indicates that their success does not reflect a genuine understanding of code vulnerabilities. Similarly, models like Llama3 and Llama3.1 have nearly zero consistency scores, highlighting a failure to effectively apply their knowledge of vulnerabilities in practical scenarios. While introducing in-context learning generally improves $\mathrm{Cons}_{\mathrm{GDV}}$ and $\mathrm{Cons}_{\mathrm{PRD}}$, the enhancements are insufficient, and further efforts are required to bolster LLMs’ consistency in semantic reasoning tasks. This underscores the necessity for more specialized training and fine-tuning to enable LLMs to better understand and analyze code vulnerabilities across diverse and complex scenarios.

# 5. Analysis

# 5.1. Effects of Difficulty

As detailed in Table 3, we evaluated the performance of various LLMs across different difficulty levels within each scenario. The analysis encompasses three primary scenarios: Counterfactual, Goal-driven, and Predictive.

5.1.1. Structure Reasoning. In the structure reasoning tasks for DataFlow and ControlFlow scenarios, the LLMs’ performance generally falls into two categories:

High Performance at the “None” Level, but Struggles with Complexity: Models such as Llama3.1 and GPT-4 perform well at detecting when no connection exists between two code elements (the “None” level), but they face challenges with more complex relationships.
However, their performance declines significantly in more complex scenarios that involve connections of three or more hops. This indicates a robust foundational understanding that diminishes as the reasoning required becomes more intricate.

Low Performance at the “None” Level, but Improved with Increased Complexity: Conversely, models like CodeQwen, CodeLlama, and CodeGemma exhibit average performance below 30% at the “None” level. Surprisingly, these models perform better in scenarios requiring connections of $\geq 3$ hops. This counterintuitive outcome arises because these LLMs tend to assume connectivity among all code elements within a context, thereby increasing the likelihood of selecting correct answers. Specifically, explicit and implicit connections account for 50% of the options (Options A and C), with some scenarios allowing both A and C to be simultaneously correct.

TABLE 3: LLMs’ performance across difficulty levels by question type. The symbol $\otimes$ represents unknown parameter scales; terms D-Flow and C-Flow refer to DataFlow-wise and ControlFlow-wise questions, respectively; O.&I. signifies the Outer&Inner structure, # denotes the number of questions, and $CI$ indicates code variants with control flow injection. “None” indicates that there are no direct or indirect connections between the target code elements within the code graphs. The $\downarrow$ symbol denotes a decrease in model performance on the current task compared to Zero after introducing ICL, while $\uparrow$ signifies an improvement, and $\updownarrow$ indicates no change in performance.

Despite these patterns, the ability to identify that two code elements are unconnected remains crucial, as it demonstrates the LLMs’ capacity for logical and critical analysis of relationships between code elements. Introducing in-context learning significantly improved the “None” level performance for structure reasoning tasks in most cases.
For instance, Mixtral showed a DataFlow-wise performance increase from 5.58% to 75.99%, and a ControlFlow-wise increase from 28.07% to 70.86%. However, this enhancement was not consistently observed across other difficulty levels for most LLMs. This inconsistency suggests that inherent code structure reasoning abilities require more sophisticated training methodologies and advanced model architectures to effectively handle varying levels of complexity.

# 5.1.2. Vulnerability Reasoning.

Our evaluation across various scenarios reveals the following key insights:

• Model Size and Complexity Handling: Larger models (e.g., GPT-4, Llama3.1) typically perform better in Counterfactual and Goal-driven tasks, especially with complex code variants. However, they show comparatively weaker performance on vulnerability-resolving tasks, suggesting a need for more advanced reasoning capabilities.
• Impact of In-Context Learning: In-context learning often leads to performance gains for models such as CodeLlama, CodeGemma, and the Llama3.1 series, particularly in Goal-driven scenarios. Conversely, models like CodeQwen1.5 exhibit consistent declines, highlighting varying sensitivities to in-context prompts and emphasizing the need for tailored prompt adaptation. These mixed outcomes underscore that model-specific strategies are crucial.
• Impact of Control Flow Injection: Most LLMs experience performance degradation after control flow injection (-\w CI), indicating that even minimal syntactic perturbations, without altering semantics, can adversely affect performance in diverse scenarios.

Counterfactual Scenarios. Larger models such as GPT-4 and Llama3.1 consistently outperform smaller open-source counterparts (e.g., CodeQwen, Mistral), though their overall performance wanes as code complexity grows.
In-context learning generally enhances the capabilities of GPT-4 and Llama3.1, whereas it negatively impacts CodeQwen and Mistral. This discrepancy suggests that architectural differences or training data variations impact how models respond to additional contextual information.

Goal-driven Scenarios. Performance in identifying correct fixes varies considerably with task difficulty: while GPT-4 performs well at simpler fixes in a zero-shot setting, CodeGemma outperforms it on more complex variants, reaching 75.45% accuracy. The introduction of in-context learning yields a dual effect—it significantly degrades performance for GPT-4 and CodeQwen, yet consistently enhances it for open-source models like Llama3.1-405B, CodeLlama-7B, CodeGemma, and Mistral—highlighting the need for tailored adaptations to optimize in-context learning.

Predictive Scenarios. The Llama-series models demonstrate robust capability in identifying target vulnerabilities, reflecting strong security-domain pre-training. In-context learning exerts mixed influences: models such as GPT-4 and the Llama series display consistent improvements, whereas CodeQwen1.5 and certain Gemma models face performance drops. These variations stress the significance of model-specific adaptation to capitalize on domain knowledge. Additionally, arbitrary prompts can degrade performance, mirroring the challenges noted in Goal-driven tasks. Hence, optimizing both prompt design and adaptation techniques remains pivotal for maximizing the effectiveness of LLMs in vulnerability analysis.

[Figure 6: Distribution of LLM answer choices across question types (DataFlow-wise, ControlFlow-wise, Counterfactual, Goal-driven, and Predictive) under zero-shot and in-context learning modes, with inaccurate portions marked.]

# 5.2. Behavior Distribution

To further study the behavior of LLMs, we have detailed the distribution of choices made by LLMs during their evaluation, categorizing the options for each question into four types.
In the DataFlow- and ControlFlow-wise scenarios, the categories are: “Direct C.”, indicating direct connections between code elements in the graph; “Not C.”, denoting no significant connections; “Indirect C.”, for indirect connections as described in our method section; and “Unknown”, where the LLM is unable to respond. For the Counterfactual and Goal-driven scenarios, the responses are classified as “Safe”, indicating that the current code variants are secure and do not trigger vulnerabilities; “Bypass”, where the code neither triggers vulnerabilities nor executes the intended function; and “Unsafe”, where the code variants do activate vulnerabilities. Lastly, in the Predictive scenario, “Others” refers to CWEs irrelevant to the question, “Target” denotes the CWEs specifically addressed in the question, and “Bypass” represents variants that circumvent the targeted CWEs. Our key findings are as follows:

Figure 7: Performance of LLMs across various question types for class-level CWEs under different inference modes. Models marked with $\star$ have larger parameter sizes (e.g., Llama3.1$^{\star}$ denotes Llama3.1-405B). The number in brackets beside each CWE indicates the total number of associated questions.

1) Pattern Matching Over Logic: Our study reveals that LLMs in vulnerability analysis primarily rely on pattern matching based on pre-trained knowledge, rather than on logical analysis of the code.
2) Need for Customized Approaches: Effectively addressing vulnerabilities with LLMs requires more than general in-context learning; it demands advanced prompt engineering or scenario-specific fine-tuning.
3) Need for Domain Adaptation: LLMs often struggle to differentiate between various CWEs, underscoring a critical need for domain adaptation to enhance their effectiveness in vulnerability analysis.
Specifically, as illustrated in Figure 6, in the DataFlow-wise and ControlFlow-wise scenarios, we observe that LLMs like GPT-3.5, CodeLlama, CodeQwen, and Gemma predominantly choose options classified as “Direct C.” and “Indirect C.” This suggests these LLMs assume that all code elements in a given piece of code are interconnected. These findings align with our analysis in Section 5.1, confirming that such LLMs lack rigorous logical analysis capabilities to understand contextual relationships in code. Instead, their vulnerability analysis tends to rely heavily on pattern matching, a conclusion supported by a substantial body of existing research [74], [75], [76], [77]. Introducing in-context learning slightly mitigates this tendency, though a substantial gap remains. In the Counterfactual scenario, some LLMs cannot differentiate cases of “Bypass” code, struggling to correctly determine the code’s execution path. This so-called runtime reasoning ability [27] is crucial for vulnerability analysis, particularly for runtime-dependent vulnerabilities like Buffer Errors and Input Validation. Additionally, many LLMs show a distribution of responses heavily skewed toward “Unsafe”, implying the code contains vulnerabilities. Notably, this happens even when our generator places vulnerable code along a dead path in the scenario. This phenomenon further supports the hypothesis that LLMs rely on pattern matching rather than genuine runtime reasoning.

TABLE 4: Ablation study on Llama3.1-8B with varying inference temperatures. Temp. denotes the temperature setting.

In the more complex Goal-driven scenario, most LLMs tend to select “Unknown.” For instance, GPT-4 and Llama-series models, although they perform relatively well in other scenarios, frequently select the “Unknown” option in this scenario. This indicates that current LLMs have difficulty applying parameterized knowledge to resolve vulnerabilities effectively.
Unlike in other use cases, addressing vulnerabilities requires advanced prompt engineering or fine-tuning customized for specific scenarios. Lastly, in the Predictive scenario, most LLMs distribute their choices evenly across all available options. Incorrect classifications such as “Other,” “Bypass,” and “Unknown” dominate their responses. This distribution suggests that most LLMs: i) cannot reliably determine whether a vulnerability would be triggered in the current scenario, since simple pattern matching is insufficient for this level of judgment; and ii) struggle to distinguish fundamental differences between CWEs. This highlights the need for domain adaptation and specialized approaches. Without these, LLMs are not well-suited for direct application to vulnerability analysis.

# 5.3. Pairwise Performance Evaluation

Pairwise Performance Evaluation assesses pairs of samples that share the same functionality but differ in vulnerability. In our single Q&A setting, each question was presented in a single prompt per thread. By contrast, in the pairwise setting, each safe&unsafe pair was presented together in one prompt per thread. Both experiments were conducted independently. Inspired by PrimeVul [9], we use these pairs to examine how effectively LLMs handle syntactically similar yet semantically distinct scenarios within the same context. In the “Single” evaluation, each question $Q$ is presented independently, producing $A = f(\{Q\})$, where $A$ is the answer and $f$ the LLM. In the “Pairwise” evaluation, a set of questions $\{Q_{1}, \ldots, Q_{N}\}$ is presented simultaneously, yielding $\{A_{1}, \ldots, A_{N}\} = f(\{Q_{1}, \ldots, Q_{N}\})$. We measure performance in the “Single” setting as $\frac{\#(\text{Correct Answers})}{\#(\text{Total Questions})}$ and in the “Pairwise” setting as $\frac{\#(\text{Correct Pairs})}{\#(\text{Total Pairs})}$, where a pair is correct only if all its answers are correct.

Figure 8: Pairwise performance analysis.
Base and CTF indicate base and Counterfactual scenario performance, respectively. $+CI$ denotes code variants with control-flow injection. “Single” means one question per thread, while “Pairwise” means questions with similar context are combined for inference.

We restricted pairwise evaluations to the Base and Counterfactual scenarios using Llama3.1-8B, excluding Goal-driven and Predictive scenarios because they do not produce the semantically distinct safe&unsafe label pairs required for meaningful pairwise comparison. As shown in Figure 8, performance in the Base scenario increases from 60.7% to 86.2% under pairwise evaluation, but drops from 36.9% to 12.3% in the Counterfactual scenario, consistently across all difficulty levels. We also examined the consistency score in the pairwise setting, $\mathrm{Cons}_{\mathrm{CTF}}^{\mathrm{Pairwise}}$, defined as:

$$
\mathrm{Cons}_{\mathrm{CTF}}^{\mathrm{Pairwise}} = \frac{\sum_{i=1}^{N_{\mathrm{CTF}}^{\mathrm{Pairwise}}} \mathbb{I}\left(C_{\mathrm{CTF}^{\mathrm{Pairwise}}}^{i} = 1 \wedge C_{\mathrm{base}^{\mathrm{Pairwise}}}^{i} = 1\right)}{N_{\mathrm{CTF}}^{\mathrm{Pairwise}}}.
$$

This measure extends the consistency score introduced in Eq. 13 for the “single” setting. Under single evaluation, it is 24.3%, whereas under pairwise evaluation, it drops to 15.3%. A likely explanation involves the Juliet dataset, which is widely available and may have been part of the model’s pre-training data. Since Juliet often places safe and vulnerable code side by side, Llama3.1-8B effectively recognizes these patterns in the Base scenario.
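The "Single" and "Pairwise" scoring criteria described in this section can be sketched as two small helpers (hypothetical, not the authors' evaluation code); the pairwise criterion is strictly harder because one wrong answer sinks the whole pair:

```python
def single_accuracy(answers, gold):
    """#(Correct Answers) / #(Total Questions)."""
    return sum(a == g for a, g in zip(answers, gold)) / len(gold)

def pairwise_accuracy(pairs):
    """#(Correct Pairs) / #(Total Pairs): a safe&unsafe pair counts
    only if every answer within it is correct."""
    correct = sum(1 for answers, gold in pairs
                  if all(a == g for a, g in zip(answers, gold)))
    return correct / len(pairs)
```

For example, a model that gets exactly one answer right in each pair scores 50% on the single metric but 0% pairwise.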
However, the Counterfactual scenario relies on our generator to produce semantically perturbed variants that deviate from the model’s training data, resulting in a marked performance decline. This outcome further indicates that LLMs tend to rely more on pattern matching than on deeper logical reasoning.

TABLE 5: Comparison of LLM performance on the benchmark generated from the PrimeVul dataset [9].

# 5.4. Effects of CWEs

The CWE hierarchy comprises pillars, classes, bases, and variants [78]. Figure 7 shows our evaluation of various LLMs (zero-shot and in-context) mapped from Juliet’s CWEs to their class levels. We observe notable performance variations tied to the LLMs’ pre-training domains. For example, Gemma and CodeLlama excel in Goal-driven scenarios like CWE-20 and CWE-1390, indicating a need for targeted enhancements in other CWEs. Comparing zero-shot and in-context heatmaps reveals that in-context learning typically improves understanding of specific CWE classes; however, models like GPT-4 may underperform in certain Goal-driven cases. Overall, these findings underscore the importance of tailoring prompt designs to each CWE for effective vulnerability analysis.

# 5.5. Effects of Temperature

We conducted an ablation study on inference temperature (ranging from 0.0 to 1.0 in 0.2 increments) using both zero-shot and in-context learning with Llama3.1-8B. As shown in Table 4, lower temperatures (e.g., 0.0) produce deterministic outputs that perform well in tasks requiring consistency, such as structural reasoning, base scenarios, and Counterfactual questions. Conversely, for more challenging tasks like Goal-driven and Predictive scenarios, higher temperatures yield better performance by fostering greater output diversity, which allows the model to explore varied reasoning paths and generate more creative answers.

# 5.6.
Generalization of SV-TrustEval-C

We evaluated our generator and benchmark on the PrimeVul dataset, which provides higher label accuracy and broader CWE coverage than other sources. We manually validated samples from each CWE category and selected one verified safe vs. unsafe pair per category. Note that we skipped compilation validation, as PrimeVul lacks a direct compilation source—a step required to match the precision of our Juliet experiments. This process generated 64 safe and 64 unsafe base scenario samples, along with 601 DataFlow-based, 419 ControlFlow-based, 658 Counterfactual, 189 Goal-driven, and 205 Predictive questions, all of which are included in our GitHub repository. Table 5 shows trends similar to those observed with Juliet. Larger models (e.g., GPT-4, GPT-3.5, Llama3.1-405B) perform well in structural tasks, while most models struggle with semantic tasks, often scoring below 50%. High false positive rates persist, with some models misclassifying safe code as unsafe. In-context learning generally boosts performance—especially for models with limited code pre-training (e.g., CodeQwen, CodeGemma, Mixtral)—although gains vary. Overall, these findings confirm that SV-TRUSTEVAL-C generalizes well to datasets with broader CWE coverage, though it is advisable to use datasets that support full quality verification (i.e., label accuracy and compilation validation).
As Large Language Models (LLMs) evolve in understanding and generating code, accurately evaluating their reliability in analyzing source code vulnerabilities becomes increasingly vital. While studies have examined LLM capabilities in tasks like vulnerability detection and repair, they often overlook the importance of both structure and semantic reasoning crucial for trustworthy vulnerability analysis. To address this gap, we introduce SV-TrustEval-C, a benchmark designed to evaluate LLMs' abilities for vulnerability analysis of code written in the C programming language through two key dimensions: structure reasoning - assessing how models identify relationships between code elements under varying data and control flow complexities; and semantic reasoning - examining their logical consistency in scenarios where code is structurally and semantically perturbed. Our results show that current LLMs are far from satisfactory in understanding complex code relationships and that their vulnerability analyses rely more on pattern matching than on robust logical reasoning. These findings underscore the effectiveness of the SV-TrustEval-C benchmark and highlight critical areas for enhancing the reasoning capabilities and trustworthiness of LLMs in real-world vulnerability analysis tasks. Our initial benchmark dataset is publicly available.
[ "cs.SE", "cs.CL" ]
# 1 Introduction With the rapid advancement of Large Language Models (LLMs), their capabilities in assisting with various coding tasks have significantly improved. Tools like GitHub Copilot [Microsoft, 2023, Services, 2023] and models such as OpenAI Codex [Chen et al., 2021a] have enhanced developer productivity by automating repetitive tasks, providing real-time suggestions, and offering detailed explanations of code functionality. One crucial application of LLMs in software development is the automatic generation of SQL queries from text (text-to-SQL), a task that has gained increasing attention [Zhong et al., 2017, Yu et al., 2018, Li et al., 2024a, Lei et al., 2024]. However, most existing research [Li et al., 2024b, Zhuang et al., 2024, Dong et al., 2023a, Pourreza and Rafiei, 2024, Wang et al., 2023a, Gan et al., 2021, Deng et al., 2021] and datasets in the text-to-SQL domain are primarily designed for SQLite, with limited coverage of widely used database systems such as MySQL, PostgreSQL, BigQuery, Oracle, and DuckDB. Figure 1 shows an example of a question paired with dialect-specific SQL. The lack of high-quality, dialect-specific text-to-SQL data presents significant challenges in developing models that can generalize across different SQL dialects, ultimately hindering the creation of robust and adaptable text-to-SQL solutions for real-world applications [Lei et al., 2024, Li et al., 2024a, Pourreza et al., 2024]. Rule-Based Translation is Insufficient. Rule-based translation offers a deterministic but rigid solution to SQL dialect conversion. While transpilers like SQLGlot [Mao, 2023] provide structured mappings between dialects, they struggle with complex syntax, schema constraints, and dialect-specific functions [Zmigrod et al., 2024]. Moreover, these systems lack generalizability, require dialect-specific rules [Li et al., 2024a, Lei et al., 2024], and cannot guarantee accurate translation.
In practice, such systems still rely on execution-time feedback to detect and fix failures, and maintaining their rule sets is costly and brittle. Even with carefully crafted rules, they cannot guarantee perfect accuracy, particularly for complex or edge-case queries. We provide a detailed analysis in Appendix A.10. Existing Data Collection and Training Lacks Execution Verification. General LLM-based code data generation methods [Wei et al., 2023, Wang et al., 2022] often fail to account for the specific requirements of text-to-SQL tasks, leading to the creation of syntactically plausible but incorrect SQL queries. These approaches typically generate large amounts of unverified data, which hinders their usefulness for training reliable models. Since SQL outputs can be directly validated through execution, a more structured approach that incorporates execution-based verification and targeted rejection sampling strategies is necessary. Moreover, we argue that standard supervised fine-tuning (SFT) alone is insufficient to fully exploit the potential of execution validation, as it does not inherently enforce correctness across dialects. To advance dialect text-to-SQL, we emphasize the importance of both high-quality, executable (text, SQL) data and a training pipeline that directly interacts with the execution environment. We propose an agentic data generation loop that combines LLM-based generation, execution-time validation, and self-correction. This offline loop yields reliable training signals, which are distilled into a dialect-aware model through supervised fine-tuning and offline reinforcement learning. The overall workflow includes: (a) SFT Data Bootstrapping via LLM-based Translation: To mitigate the sparsity of dialect text-to-SQL data and enable effective cold-start training, we leverage high-resource SQLite (text, SQL) pairs and LLMs to efficiently sample dialect SQL queries.
This bootstrapped dataset serves as a cold-start fine-tuning set, enabling rapid adaptation to low-resource dialects while minimizing manual annotation.

Figure 1: Example of dialect text-to-SQL. Question: "Show the status shared by cities with population bigger than 1500 or smaller than 500." SQLite: SELECT Status FROM city WHERE Population > 1500 UNION SELECT Status FROM city WHERE Population < 500; PostgreSQL: SELECT city.Status FROM city WHERE city.Population::INTEGER > 1500 UNION SELECT city.Status FROM city WHERE city.Population::INTEGER < 500; (database contents and the SQLite/PostgreSQL execution engines are omitted here).

(b) Iterative SFT Data Generation via Execution-based Rejection Sampling: We extend the dataset via an iterative generation–execution–filtering loop, where the model proposes dialect SQLs executed in real databases. Valid outputs are retained through execution-aware rejection sampling, with best-of-N selection enhancing reliability. This agentic cycle uses execution feedback to govern data collection, producing higher-quality training signals without manual effort. (c) Preference Collection via Execution Feedback Rejection Sampling: To further incorporate execution feedback, we distinguish failure types and extract preference pairs—valid versus invalid SQLs—based on their execution results. These are used to train the model with DPO, which guides learning toward executable outputs. This procedure aligns with offline reinforcement learning, leveraging historical execution trajectories to improve model behavior. We summarize our contributions as follows: • We propose an agentic data generation loop that combines LLM-based SQL generation, execution-aware rejection sampling, and iterative self-refinement to construct high-quality dialect-specific training data with minimal manual labeling.
• We introduce an offline reinforcement learning framework that captures execution-based preference signals and applies DPO to align the model toward generating executable SQL. • We conduct extensive evaluations across diverse SQL dialects (PostgreSQL, MySQL, and Oracle) $\times$ difficulty levels (single domain, cross-domain, extensive database), demonstrating significant improvements over strong baselines (e.g., GPT-4o) and providing insights for execution-guided SQL modeling. # 2 Related Work # 2.1 Text-to-SQL Relational databases store a significant portion of the world’s data, and retrieving information from them typically requires writing SQL queries. Automating SQL generation can lower the barrier for users to access data. A common scenario for automatic SQL generation is querying databases using natural language input [Zhong et al., 2017, Yu et al., 2018]. Early research treated text-to-SQL as a semantic parsing problem, where models such as RNNs and transformer-based encoders (e.g., BERT) were trained to map natural language questions to SQL statements [Gan et al., 2021, Zhong et al., 2017, Deng et al., 2022a]. Performance has also improved by incorporating additional constraints into inputs and outputs [Liu et al., 2022, Wang et al., 2021, Deng et al., 2021]. With the emergence of large language models (LLMs) [Brown et al., 2020, Ouyang et al., 2022, OpenAI, 2023], text-to-SQL has been further developed using prompt-based methods and fine-tuning, benefiting from LLMs’ strong instruction-following and intent understanding capabilities [Dong et al., 2023a, Li et al., 2024b, Pourreza and Rafiei, 2024, Wang et al., 2023a, Talaei et al., 2024]. In practical applications, text-to-SQL has been used to handle more complex data and agent-based workflows [Lei et al., 2024, Li et al., 2024a]. One challenge in real-world scenarios is handling SQL dialect differences. 
Early studies in domain-specific languages explored this problem using intermediate meaning representations [Guo et al., 2020]. Some studies have attempted to address this issue through rule-based translation and compiler-based methods [Pourreza et al., 2024, Lin et al., 2024a]. Given the LLM-driven paradigm, this work focuses on a data-centric approach to text-to-SQL. Specifically, execution-based methods are explored to handle SQL dialect variations. # 2.2 Code LLMs Code foundation models have demonstrated strong code generation capabilities across various tasks. OpenAI’s Codex [Chen et al., 2021b] was one of the earliest domain-specific LLMs for coding, supporting the Copilot service [Microsoft, 2023]. The open-source community has further contributed with models like Deepseek-Coder [Guo et al., 2024] and StarCoder [Li et al., 2023a], which were trained from scratch on massive code-related datasets. Others, such as Code-Llama [Roziere et al., 2023] and Code-Qwen [Hui et al., 2024], adapted general-purpose models through post-training on code-specific corpora. Beyond foundation models, researchers have fine-tuned them for specific applications. Magicoder [Wei et al., 2023] enhances instruction-following abilities using curated code snippets, while Wizard-Coder [Luo et al., 2024] and WaveCoder [Yu et al., 2023] refine instruction-code alignment via evol-instruct [Xu et al., 2024]. OctoCoder [Muennighoff et al., 2023] leverages Git commits to enhance model adaptability. Additionally, approaches like IRCoder [Paul et al., 2024] and UniCoder [Sun et al., 2024] explore intermediate representations (e.g., LLVM) to improve code generation. Compared to these approaches, our work also focuses on code generation but emphasizes leveraging execution signals from the database environment. From the perspective of code LLM development, this approach provides insights applicable to broader code generation tasks.
The Dialect SQL scenario serves as a practical testbed, allowing for clearer validation of method effectiveness. # 2.3 Data Synthesis Modern machine learning methods typically require large-scale and high-quality datasets [Zhou et al., 2023a, Gao et al., 2023a] for effective learning. However, obtaining high-quality data for every corner case is often impractical, leading researchers to explore dataset generation. By integrating existing incomplete data with the extensive knowledge embedded in LLMs, data generation can produce more comprehensive datasets for model training [Wang et al., 2023b, Xu et al., 2024, Wei et al., 2023]. Recently, to enhance the reasoning capabilities of LLMs, particularly in math and code, many approaches have incorporated verifiers, such as answer or reward models, to curate high-quality datasets for model refinement [Yuan et al., 2023, Guo et al., 2025, Zelikman et al., 2022]. There has also been much previous work exploring data synthesis for vision-language models [Gao et al., 2023b, Pi et al., 2024a, Liu et al., 2024a,b, Pi et al., 2024b, Chen et al., 2024]. Our work focuses on SQL execution verification. By utilizing execution results, we obtain high-quality data via rejection sampling and further refine the model through self-taught training. # 3 Methodology In this section, we present the details of our approach to obtain ExeSQL, consisting of three phases: Translation Bootstrapping, Iterative Data Generation and Training, and Preference Enhancement. The key idea of Execution-Assisted Generation is to fully leverage execution verification signals to assist the LLM in generating high-quality data for text-to-SQL across different dialects. An illustration of ExeSQL is shown in Figure 3. # 3.1 Formulation We denote a natural language query as $Q$ , its corresponding SQL as $S$ , and the generation model as an LLM $M _ { \theta }$ .
The training set $\mathcal{D} = \{(Q_i, S_i)\}_{i=1}^{N}$ is constructed by translating a high-resource source dialect $\mathcal{D}_{\mathrm{Source}}$ (e.g., SQLite) to target dialects using a bootstrapping model and a dialect mapping function $T$. To guide model training, we define an execution-based reward function $\mathcal{R}(S) \in \{0, 1\}$, which returns 1 if the SQL executes successfully. The goal is to train a model that maximizes expected execution success: $$ \pi _ { \theta } ^ { * } = \arg \operatorname* { m a x } _ { \pi _ { \theta } } ~ \mathbb { E } _ { Q \sim \mathcal { D } } \left[ \mathbb { E } _ { \hat { S } \sim \pi _ { \theta } ( \cdot | Q ) } \left[ \mathcal { R } ( \hat { S } ) \right] \right] $$ We adopt a self-evolving offline training strategy [Zelikman et al., 2022, Dong et al., 2023b, Gülçehre et al., 2023, Schulman et al., 2017], which iteratively (1) filters generated SQLs via execution-guided rejection sampling, and (2) applies preference optimization through Direct Preference Optimization (DPO). The model is updated at iteration $t$ as: $$ \pi _ { \theta } ^ { ( t + 1 ) } = \arg \operatorname* { m a x } _ { \pi _ { \theta } } ~ \mathbb { E } _ { Q , \hat { S } , S ^ { * } \sim \mathcal { D } } \left[ \mathcal { R } ( S ^ { * } , \hat { S } ) \right] $$ Here, $S ^ { * }$ denotes a preferred (e.g., executable) SQL, contrasted against a failed candidate $\hat { S }$ . This defines an offline reinforcement learning loop grounded in execution feedback. # 3.2 Translation-based Bootstrapping Let $D_{\mathrm{SQLite}} = \{(Q_i, S_i)\}_{i=1}^{N}$ be a large-scale dataset containing natural language questions $Q_i$ paired with corresponding SQL queries $S_i$ written in SQLite dialect.
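The reward $\mathcal{R}(S)$ above is simply a binary "does it execute" check. A minimal sketch in Python, using an in-memory SQLite database as a stand-in for a target-dialect engine (the function name, the `setup` parameter, and the toy schema are illustrative, not from the paper):

```python
import sqlite3

def execution_reward(sql: str, setup: str = "") -> int:
    """Binary execution-based reward R(S): 1 if the SQL runs without error, else 0."""
    conn = sqlite3.connect(":memory:")
    try:
        if setup:
            conn.executescript(setup)  # create tables/rows for the query to run against
        conn.execute(sql).fetchall()
        return 1
    except sqlite3.Error:
        return 0
    finally:
        conn.close()

schema = "CREATE TABLE city (City_ID INTEGER, Status TEXT, Population INTEGER);"
print(execution_reward("SELECT Status FROM city WHERE Population > 1500;", setup=schema))  # 1
print(execution_reward("SELECT Status FROM cty WHERE Population > 1500;", setup=schema))   # 0
```

In a real dialect setting the SQLite connection would be replaced by a PostgreSQL, MySQL, or Oracle client, but the 0/1 reward structure is unchanged.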
Given the scarcity of multi-dialect SQL datasets, we first leverage $D_{\mathrm{SQLite}}$ to bootstrap an initial dataset for training. To achieve this, we introduce a translation function $T: S_{\mathrm{SQLite}} \to S_{\mathrm{Target}}$, which generates an SQL query $S_{\mathrm{Target}}$ in the target dialect based on both the original SQL query $S_{\mathrm{SQLite}}$ and the corresponding question $Q$, modeled as: $$ S _ { \mathrm { T a r g e t } } \sim P ( S _ { \mathrm { T a r g e t } } | Q , S _ { \mathrm { S Q L i t e } } ) $$ Figure 2: Execution-based error feedback loop for dialect-specific SQL refinement. Through this, we can collect a bootstrap dataset to resolve the cold-start issue of training an expert dialect model. However, direct translation does not guarantee correctness due to differences in SQL syntax and execution semantics across dialects. To refine the generated SQL queries, we incorporate an execution-based verification and iterative correction mechanism, as illustrated in Figure 2. The refinement process operates as follows (Appendix A.13): 1) An LLM (GPT-4o here) generates candidate SQL queries $S_{\mathrm{Target}}$ for a given natural language question $Q$, conditioned on $S_{\mathrm{SQLite}}$. 2) The generated SQL query is executed in a database corresponding to the target dialect. 3) If the execution succeeds, the query is added to the validated dataset: $D_{\mathrm{Trans}} = \{(Q_i, S_{\mathrm{Target},i})\}$ 4) If the execution fails, the database returns an error message, which is fed back into the LLM as additional context for refining the SQL query. The model iteratively refines $S_{\mathrm{Target}}$ until a valid query is produced. 5) This iterative execution check continues until either a valid SQL query is found or a maximum refinement threshold is reached.
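The five-step refinement loop can be sketched as follows. SQLite again stands in for the target-dialect engine, and `generate` is a hypothetical callable playing the role of the GPT-4o translation step:

```python
import sqlite3

def refine_with_execution(question, sqlite_sql, generate, db, max_iters=5):
    """Ask the LLM for a target-dialect query; on failure, feed the database
    error message back as extra context and retry, up to max_iters times.
    `generate(question, sqlite_sql, error)` is a hypothetical stand-in for the
    LLM translation call; `db` is an open DB connection for the target dialect."""
    error = None
    for _ in range(max_iters):
        candidate = generate(question, sqlite_sql, error)
        try:
            db.execute(candidate).fetchall()
            return candidate          # success: (question, candidate) joins D_Trans
        except sqlite3.Error as e:
            error = str(e)            # failure: error message becomes extra context
    return None                       # give up once the refinement threshold is hit

# Toy usage with a stub "LLM" that fixes its typo once it sees an error message.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE city (Status TEXT, Population INTEGER)")

def stub_llm(question, sqlite_sql, error):
    return sqlite_sql if error else "SELECT Status FROM cty"  # first try is broken

print(refine_with_execution("Show statuses", "SELECT Status FROM city", stub_llm, db))
# → SELECT Status FROM city
```

The stub is only there to make the loop observable; in the paper's pipeline the candidate comes from GPT-4o conditioned on the question, the SQLite query, and any prior error message.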
This approach effectively corrects syntactic and semantic errors by leveraging real execution feedback rather than relying solely on static rule-based translation. Through this execution-aware iteration, the model progressively learns to generate more accurate and dialect-specific SQL queries.

Figure 3: Overview of the ExeSQL pipeline. Stage 1: Translation Bootstrapping; Stage 2: Iterative Data Generation and Training (execution filtering keeps correct dialect SQLs for training); Stage 3: Preference Training (DPO over correct vs. false SQL).

The final dataset, $D_{\mathrm{Trans}}$, serves as a high-quality dialect training corpus, enabling robust generalization across different database systems. # 3.3 Iterative Data Generation and Training While $D_{\mathrm{Trans}}$ provides a baseline, rule-based translation alone is insufficient to guarantee correctness due to syntax differences, type constraints, and execution behaviors across SQL dialects. To address this, we introduce an iterative execution-feedback process incorporating rejection sampling and augmented question generation, as depicted in Figure 3. # 3.3.1 Augmenting Training Data with New Questions To improve model generalization across SQL dialects, we incorporate additional natural language questions from two sources: (1) Existing Text-to-SQL Datasets: We extract additional questions from existing datasets like WikiSQL, ensuring coverage of diverse query structures. (2) Database-Aware Question Generation: We leverage GPT-4o to generate new questions based on actual database values. Given a schema and sample database records, GPT-4o generates contextually relevant questions that reference specific values, improving the model’s robustness in handling real-world queries.
By integrating these new questions, we expand our dataset beyond simple rule-based translations, allowing the model to generate and validate SQL queries for a more diverse set of inputs. # 3.3.2 Execution-based Rejection Sampling For each natural language question $Q_i$, the model $M_{\theta}$ generates multiple dialect-specific SQL candidates $\{S_{\mathrm{cand},i}\}$, following the probability distribution: $S_{\mathrm{cand},i} \sim P_{\theta}(S|Q_i)$ Each candidate query is then executed in the corresponding database environment, yielding an execution result $R(S_{\mathrm{cand},i})$: $$ R ( S ) = { \left\{ \begin{array} { l l } { 1 , } & { { \mathrm { i f ~ } } S { \mathrm { ~ e x e c u t e s ~ s u c c e s s f u l l y } } } \\ { 0 , } & { { \mathrm { i f ~ } } S { \mathrm { ~ f a i l s ~ d u e ~ t o ~ e x e c u t i o n ~ e r r o r s } } } \end{array} \right. } $$ We apply rejection sampling to iteratively refine SQL generation: If $S_{\mathrm{cand},i}$ executes successfully, i.e., $R(S_{\mathrm{cand},i}) = 1$, the query is added to the validated dataset: $D_{\mathrm{Valid}} = D_{\mathrm{Valid}} \cup \{(Q_i, S_{\mathrm{cand},i})\}$ If $S_{\mathrm{cand},i}$ fails, i.e., $R(S_{\mathrm{cand},i}) = 0$, the query is stored in the negative dataset: $D_{\mathrm{Neg}} = D_{\mathrm{Neg}} \cup \{(Q_i, S_{\mathrm{cand},i})\}$ This process is iteratively repeated until a valid SQL query is generated or a predefined iteration limit is reached. # 3.3.3 Iterative Data Generation and Model Refinement The validated dataset $D_{\mathrm{Valid}}$ is used for further fine-tuning, while incorrect queries in $D_{\mathrm{Neg}}$ serve as contrastive learning signals in later preference optimization stages.
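The sampling step above can be written compactly as follows. The `model_sample` callable is a hypothetical stand-in for drawing $S_{\mathrm{cand}} \sim P_{\theta}(S|Q)$, and SQLite again stands in for the dialect engine:

```python
import sqlite3

def rejection_sample(question, model_sample, db, n=8):
    """Draw N candidates for one question, execute each, and split them into
    contributions to D_Valid (executable) and D_Neg (failing) respectively."""
    valid, negative = [], []
    for _ in range(n):
        cand = model_sample(question)
        try:
            db.execute(cand).fetchall()
            valid.append((question, cand))      # R(S_cand) = 1
        except sqlite3.Error:
            negative.append((question, cand))   # R(S_cand) = 0
    return valid, negative

# Toy usage: a "model" that alternates between a correct and a broken query.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE city (Status TEXT, Population INTEGER)")
queries = iter(["SELECT Status FROM city", "SELECT Status FROM cty"] * 4)
valid, neg = rejection_sample("Show statuses", lambda q: next(queries), db)
print(len(valid), len(neg))  # 4 4
```

Best-of-N selection then amounts to keeping one candidate from `valid` (e.g., the highest-probability one), while `neg` feeds the preference stage.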
This process results in a high-quality, dialect-aware text-to-SQL dataset that is continuously refined through execution-based validation and real-world query augmentation. # 3.4 Preference Optimization To further refine the model’s SQL generation capabilities, we leverage DPO [Rafailov et al., 2023] to distinguish between correct and incorrect queries, using execution feedback as the primary signal. The negative dataset $D_{\mathrm{Neg}}$ and validated dataset $D_{\mathrm{Valid}}$ have already been collected during the Iterative Data Generation and Training phase. Here, we construct preference pairs to fine-tune the model based on execution outcomes. Pairwise Preference Data Construction To enable preference learning, we form query pairs $(S_{\mathrm{pos}}, S_{\mathrm{neg}})$, where $S_{\mathrm{pos}} \in D_{\mathrm{Valid}}$ and $S_{\mathrm{neg}} \in D_{\mathrm{Neg}}$. These pairs allow the model to differentiate between correct and incorrect SQL, ensuring that preference learning reinforces correct generation. Direct Preference Optimization (DPO) Training The model is fine-tuned using DPO, where the objective is to maximize the probability of generating preferred SQL queries over non-preferred ones: $$ P _ { \theta } ( S _ { \mathrm { p o s } } | Q ) > P _ { \theta } ( S _ { \mathrm { n e g } } | Q ) $$ By leveraging execution failures as negative examples and correct executions as positive examples, the model learns to generate more reliable and executable SQL queries. This approach enhances both the correctness and robustness of SQL generation across different dialects. # 4 Implementation and Evaluation Settings The bootstrap dataset and new questions for ExeSQL are generated using GPT-4o [OpenAI, 2023]. We choose GPT-4o due to its superior ability to follow instructions and leverage error messages to generate accurate bootstrap dialect SQL examples.
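The preference objective in Section 3.4 is trained with the standard DPO loss of Rafailov et al. [2023]. A minimal numeric sketch of that loss on a single $(S_{\mathrm{pos}}, S_{\mathrm{neg}})$ pair (the log-probability values are hypothetical, and no gradient step is shown):

```python
import math

def dpo_loss(lp_pos, lp_neg, ref_pos, ref_neg, beta=0.1):
    """Pairwise DPO loss given the policy's and a frozen reference model's
    log-probabilities of the executable (pos) and failing (neg) SQL under Q:
    -log sigmoid(beta * ((lp_pos - ref_pos) - (lp_neg - ref_neg)))."""
    margin = beta * ((lp_pos - ref_pos) - (lp_neg - ref_neg))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The loss falls as the policy favors the executable SQL more than the reference does.
print(dpo_loss(lp_pos=-4.0, lp_neg=-9.0, ref_pos=-6.0, ref_neg=-6.0))
print(dpo_loss(lp_pos=-6.0, lp_neg=-6.0, ref_pos=-6.0, ref_neg=-6.0))  # ln 2 ≈ 0.693
```

Minimizing this loss over the execution-derived pairs pushes $P_{\theta}(S_{\mathrm{pos}}|Q)$ above $P_{\theta}(S_{\mathrm{neg}}|Q)$, matching the inequality stated above.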
The final ExeSQL dataset consists of 20.6k samples in the supervised fine-tuning (SFT) set and 8k samples in the preference pairs (Appendix A.2). All training is conducted on four A6000 GPUs. We fine-tune the full-parameter Deepseek-Coder-7B [Guo et al., 2024] for supervised fine-tuning (SFT) and Direct Preference Optimization (DPO). For detailed training configurations and inference hyperparameters, please refer to Appendix A.3. For baseline comparisons, we evaluate GPT-4o-2024-11-20 and Gemini-1.5-pro-0827 [Reid et al., 2024], both of which were released in 2024. Since these models were trained on publicly available data up to their release dates, they likely include extensive SQL-related training data, ensuring a fair comparison. # 4.1 Text-to-SQL across Dialects and Benchmarks Dialects. To fully validate the generalization ability of our method, we selected three SQL dialects: PostgreSQL, MySQL, and Oracle. Our pipeline is dialect-agnostic; we chose these dialects to verify its generalizable effectiveness across different database systems. Benchmarks. We adapt three standard benchmarks, Spider [Yu et al., 2018], WikiSQL [Zhong et al., 2017], and BIRD [Li et al., 2024a], for in-domain evaluation and use Dr.Spider [Chang et al., 2023] as an out-of-domain dataset. We also incorporate the single-domain benchmark MimicSQL [Wang et al., 2020, Deng et al., 2022b] to evaluate our model across varying difficulty levels. For dialect SQL evaluation, we extract the question, database, and ground truth result, prompting the model to generate dialect-specific SQL and verifying execution accuracy. Details on these datasets are in Appendix A.9. To ensure accurate evaluation, we preprocess responses to extract SQL using an answer extraction tool (Appendix A.12). For results on the single-domain dataset, please refer to Appendix A.8. Table 1: Performance comparison of various LLMs on Dialect text-to-SQL benchmarks.
ExeSQL surpasses all baseline models, achieving an average improvement of $11.0\%$ over GPT-4o. # 4.2 Baseline Models General-purpose LLM baselines: We evaluate large language models (LLMs) without any fine-tuning for text-to-SQL tasks: GPT-4o [OpenAI, 2023], Gemini-1.5-pro [Reid et al., 2024], and Llama3.1-Instruct. These models are assessed by directly prompting them to generate SQL queries given a natural language question and the corresponding database schema. Code Expert LLM baselines: These baselines consist of LLMs trained on large-scale code-related corpora, making them well-suited for code generation tasks. We include DeepSeek-Coder [Guo et al., 2024], Qwen-Coder [Hui et al., 2024], Magicoder-DS [Wei et al., 2023], and WizardCoder [Luo et al., 2024]. SQL Expert LLM baselines: Several LLMs are specifically adapted for SQL generation, typically optimized for the SQLite dialect and demonstrating strong table understanding capabilities. We include Code-S [Li et al., 2024b] and StructLLM [Zhuang et al., 2024] in this category. The comparisons in (2) and (3) aim to assess whether fine-tuned general-purpose LLMs can outperform specialized code-generation or SQL-focused models in specific scenarios. # 5 Experimental Results # 5.1 Main Results We present the main experimental results in Table 1. From the table, we observe that ExeSQL achieves an average accuracy of $66.70\%$ across the PostgreSQL, MySQL, and Oracle benchmarks, significantly outperforming all baseline models. General-purpose LLMs. Among the general-purpose LLMs, GPT-4o achieves the highest accuracy ($55.69\%$), demonstrating its strong zero-shot SQL generation capability. We find that Gemini-1.5-pro underperforms GPT-4o, achieving $53.88\%$. Llama3.1-8B-Instruct performs worse, with an average accuracy of $32.35\%$. These results indicate that general-purpose LLMs struggle with SQL dialect variations. Code Expert LLMs.
Code-focused models, such as Deepseek-Coder and Qwen-Coder, demonstrate better performance than standard LLMs. Deepseek-Coder achieves an average accuracy of $32.75\%$, while Qwen-Coder reaches $31.31\%$. However, Magicoder and WizardCoder perform worse, suggesting that general code generation ability does not directly translate into SQL generation (especially dialect) capability. This implies that code training alone is insufficient for SQL dialect adaptation. SQL Expert LLMs. The SQL-specialized models exhibit the most significant improvements. StructLLM, which is trained on SQL-specific tasks, achieves an accuracy of $29.48\%$, slightly outperforming most code models. However, ExeSQL surpasses all baselines by a large margin, reaching an average accuracy of $66.70\%$. It is also worth noting that these models often exhibit substantial performance degradation compared with their SQLite performance (Appendix A.1). These results highlight the importance of the proposed execution-based fine-tuning and dialect-aware SQL adaptation. Unlike general-purpose or code-focused models, ExeSQL effectively learns to handle different SQL dialects through iterative refinement, leading to a substantial performance boost. # 5.2 Further Analysis To validate the effectiveness of ExeSQL, we conduct three analyses: (1) Ablation studies assess the impact of iterative refinement and preference learning on accuracy. (2) ID and OOD evaluation measures generalization to unseen queries and SQL dialects. (3) Execution-based rejection sampling analysis examines its role in improving SQL correctness. These analyses confirm ExeSQL’s robustness and adaptability. Table 2: Performance comparison of different ExeSQL ablations. Table 3: Results on ID and OOD evaluation. ExeSQL shows strong generalization without overfitting. # 5.2.1 Ablations for Iterative Data Generation Table 2 shows that removing iteration-based refinement significantly reduces performance ($71.98\%$ to $63.49\%$ on PostgreSQL, $72.865\%$ to $60.09\%$ on MySQL), highlighting the importance of iterative data generation in improving SQL accuracy. Removing preference learning also leads to a performance drop, though less severe, indicating that preference optimization further refines query quality. These results demonstrate that both iterative refinement and preference learning play crucial roles in enhancing ExeSQL’s effectiveness. # 5.2.2 ID and OOD Evaluation. We evaluate ExeSQL on both in-distribution (ID) and out-of-distribution (OOD) datasets to assess its generalization. The OOD evaluation is conducted on Dr.Spider [Chang et al., 2023], a diagnostic text-to-SQL benchmark with 15,269 samples, introducing perturbations in databases (DB), natural language queries (NLQ), and SQL to test robustness. Given its scale, Dr.Spider is significantly harder to overfit than Spider’s 2,147 samples. Table 3 shows that ExeSQL consistently achieves the highest accuracy across all settings. Notably, ExeSQL outperforms StructLLM and Deepseek-Coder by a large margin on both PostgreSQL and MySQL, confirming its strong generalization to both ID and OOD queries. Figure 4: Retention rate of correct dialect SQL under different best-of-N sampling strategies on 1,000 queries. Results show the bootstrapped model already produces many correct samples, with larger N further improving correctness. # 5.2.3 Configuration of Execution-based Rejection Sampling. Figure 4 presents the effect of execution-based rejection sampling on SQL generation accuracy across different best-of-N selection strategies. As $N$ increases, the proportion of correct dialect SQL samples improves consistently for both PostgreSQL and MySQL. This result indicates that the bootstrapped model is capable of generating a significant number of correct dialect SQL queries even without additional fine-tuning. The primary challenge then shifts to efficiently identifying and selecting these correct samples.
An iterative sampling approach can be employed to extract high-quality SQL queries, which can further enhance the model through self-supervised training.
Recent text-to-SQL models have achieved strong performance, but their effectiveness remains largely confined to SQLite due to dataset limitations. However, real-world applications require SQL generation across multiple dialects with varying syntax and specialized features, which remains a challenge for current models. The main obstacle in building a dialect-aware model lies in acquiring high-quality dialect-specific data. Data generated purely through static prompting - without validating SQLs via execution - tends to be noisy and unreliable. Moreover, the lack of real execution environments in the training loop prevents models from grounding their predictions in executable semantics, limiting generalization despite surface-level improvements from data filtering. This work introduces ExeSQL, a text-to-SQL framework with execution-driven, agentic bootstrapping. The method consists of iterative query generation, execution-based filtering (e.g., rejection sampling), and preference-based training, enabling the model to adapt to new SQL dialects through verifiable, feedback-guided learning. Experiments show that ExeSQL bridges the dialect gap in text-to-SQL, achieving average improvements of 15.2%, 10.38%, and 4.49% over GPT-4o on PostgreSQL, MySQL, and Oracle, respectively, across multiple datasets of varying difficulty.
[ "cs.CL", "cs.AI", "cs.DB" ]
# 1 Introduction Effective perception is fundamental to robotic manipulation in unstructured 3D environments. Recent advances in vision-based methods [24, 38, 27, 66] have enabled robots to infer actions directly from visual observations by leveraging powerful foundation models [32, 58, 59, 11], which facilitates high-level scene understanding and robotic manipulation. Existing approaches for vision-based manipulation can be broadly categorized into two paradigms, as illustrated in Fig. 2. V-A (Vision-to-action) solutions [65, 53, 10, 2, 5, 33] directly map RGB observations to action sequences. While these methods benefit from end-to-end learning, they rely on implicit scene understanding and lack the modeling of 3D geometry, which is essential for fine-grained robotic manipulation. To address this issue, V-3D-A (vision-to-3D-to-action) solutions [15, 62, 61] incorporate 3D representations such as point clouds [13, 6, 64, 15, 29] and voxel grids [44, 26, 54, 41] to enable explicit geometric reasoning. Despite the geometric information provided by V-3D-A solutions, inferring action from 3D representations remains challenging, especially in cases of complex spatial structures and spatial relations. Moreover, it is hard for the above two paradigms to model the temporal evolution of scene geometry, which introduces a disconnect between scene understanding and action generation in dynamic environments. To take this into account, ManiGaussian [44] recently incorporated manipulation learning into a dynamic Gaussian Splatting framework. In their method, the action is first inferred via reinforcement learning and then used to deform the Gaussians for future scene consistency. From the aspect of action inference, ManiGaussian still belongs to the above two paradigms and suffers from inaccurate perception of dynamic scene motion, as shown in the experiments.
In this paper, our key motivation is that humans, when completing a task, envision how hands and objects might move and how spatial relationships might change, and then act accordingly. Motivated by this, we aim to guide robotic action planning by explicitly modeling how scene geometry, including the robot itself, changes over time. This process also aligns with the concept of a world model [16], where future scene dynamics are explicitly modeled to guide decision-making. To this end, we introduce a new paradigm, V-4D-A (vision-to-4D-to-action), which extends 3D representations with motion information to capture dynamic scene evolution, as shown in Fig. 2. Unlike static representations that passively encode geometry, our 4D structure incorporates a latent world model that predicts the next-step scene evolution required to complete the task, based on the current observations. Such simultaneous dynamic perception and prediction enable more direct action inference, since the scene motion inherently contains the movement trend of the end-effector. Specifically, we propose the Gaussian Action Field (GAF) as a dynamic world model for robotic manipulation. To simultaneously model the dynamic scene and the corresponding manipulation action, GAF augments the 3DGS [31] representation with a learnable motion attribute that encodes the temporal displacement of each Gaussian, enabling the scene and robot geometry to be modeled over time. This design enables three types of query functionalities within GAF, as shown in Fig. 1. The current query function supports view-consistent novel view synthesis of the present scene, facilitating accurate geometry understanding from two unposed RGB inputs. The future query function generates future scene states by applying motion attributes to the original Gaussians, providing supervision for learning temporal dynamics. 
The action query function supports computing an initial action by applying point cloud registration with the learned motion attributes. Due to noise in the motion attributes, the initial action is typically inaccurate or ambiguous. To address this issue, we further introduce a diffusion-based neural refinement module that predicts a refined, executable action. For more precise and temporally aligned robotic action generation, the denoising process is guided by GAF outputs, which act as visual prompts in which the motion attributes are projected onto the current state to visualize the predicted motion. By modeling scene dynamics in a unified Gaussian world model, our V-4D-A paradigm enables coherent scene perception and robotic action, resulting in accurate, efficient, and temporally consistent manipulation. GAF operates in a fully feed-forward manner and supports real-time execution on a single GPU during manipulation. Extensive experiments demonstrate that our method enables high-quality scene reconstruction, plausible future prediction, and accurate robotic manipulation, significantly outperforming V-A and V-3D-A baselines. Contributions of this work are summarized as follows:
• We propose the V-4D-A paradigm via the Gaussian Action Field (GAF), which unifies the modeling of dynamic scene evolution and future-oriented action prediction, enabling more direct action reasoning from motion-aware 4D representations.
• We introduce three query types in GAF, namely current, future, and action, corresponding to different functionalities for spatial understanding, temporal prediction, and motion reasoning.
• We validate our method on robotic manipulation tasks, where it achieves state-of-the-art performance in both scene reconstruction quality and action generation. 
# 2 Related Work
# 2.1 Vision-based Robot Learning
Vision plays a pivotal role in enabling robots to perceive and interact with their environments, and integrating visual perception into robotic manipulation tasks has been extensively studied [6, 2, 10, 13, 15, 7]. Among such vision-based approaches, Vision-Language-Action (VLA) techniques such as RT-2 [4, 3], IGOR [8], and ViLBERT [45], among others [9, 1, 28, 35, 37, 36], have achieved impressive results by effectively combining visual information with language commands. In general, existing methods can be broadly categorized into 2D image-based approaches and 3D representation-based approaches. 2D methods typically rely on multi-view images as input, implicitly encoding 3D scene understanding within neural network reasoning. For example, GENIMA [53] utilizes Stable Diffusion [49] to generate images representing future robot poses, while SuSIE [2] and R&D [57] leverage diffusion models for sub-goal image generation and action refinement, respectively. These methods often struggle to accurately capture precise 3D spatial relationships, limiting their effectiveness in high-precision tasks [6, 33]. In contrast, 3D representation-based methods, such as voxel grids and point clouds, explicitly model geometric structures, enabling more accurate spatial reasoning [56, 61, 15, 30]. For example, ManiGaussian [44] and GNFactor [61] both utilize voxel grids to represent the 3D scene; the former feeds these grids into a PerceiverIO [24]-based transformer policy within a reinforcement learning framework to obtain robot actions, while the latter encodes them into a 3D semantic volumetric feature that is subsequently processed by a Perceiver Transformer [24] to predict actions. Additionally, Act3D [15] introduces a novel ghost point sampling mechanism to predict actions from semantic point clouds. 
These methods neglect the fact that, in addition to complex geometric structures and spatial relationships, robot learning also requires the consideration of time as a crucial dimension. Therefore, our 4D representation, which incorporates both spatial and temporal aspects, provides a more comprehensive task-level spatiotemporal representation, enabling better performance in robotics tasks.
# 2.2 World Model in Robotics
World models explicitly acquire environmental knowledge by constructing an internal representation that simulates the real world [14, 23, 16, 18, 19, 17, 20]. By predicting future states from current ones, these methods successfully encode scene dynamics [22, 52]. Previous approaches utilize autoencoding to learn a latent space for predicting future states, achieving significant progress on simple tasks [16, 22, 50]. However, the limited representational ability of implicit features and the requirement for large amounts of data restrict their effectiveness and further application. Recent approaches improve generalization by adopting explicit representations in the image [12, 46] or language [40, 43, 66] domains, leveraging rich semantics: e.g., UniPi [12] generates text-conditioned future frames, while Dynalang [40] predicts text-based states for navigation. However, these methods overlook the fact that the future is derived from the evolution of the current objects' motion, and they do not capture this motion process. Instead, they predict future images [21], videos [39], point clouds [66], etc., and then infer what actions led to the changes in these visual representations. To model motion in the scene, a dynamic representation is needed as the world model's internal scene representation. Among existing works, ManiGaussian [44] is the closest to this concept, using dynamic Gaussian point clouds as volumetric priors during training. 
However, this method does not use such dynamic representations during inference; it predicts actions directly from static 3D representations. In contrast, our approach optimizes the Gaussian Action Field for future-state prediction during both the training and inference phases. By eliminating dependencies on predefined modalities (e.g., pixels or text) and reducing data requirements, our method enables efficient and precise learning of scene-level dynamics, addressing key limitations of prior works.
# 3 Method
In this section, we introduce GAF, its implementation, and its application to robotic manipulation tasks. Sec. 3.1 defines the core representation and describes the three query modes. Sec. 3.2 details the technical implementation and overall network design. Sec. 3.3 illustrates how the outputs of GAF queries are used to generate executable actions.
# 3.1 Gaussian Action Field Representation
We define the Gaussian Action Field (GAF) as a unified spatiotemporal representation that associates each Gaussian primitive $g(\mathbf{x})$ at time step $t$ with both geometric attributes and motion dynamics. Formally, GAF is parameterized by a continuous function: $$ \mathcal{F}_{\Theta} : \{ g(\mathbf{x}), t \} \mapsto \{ \mu, \Delta\mu, f \}, $$ where $\mu \in \mathbb{R}^{3}$ denotes the 3D position of the Gaussian, $\Delta\mu \in \mathbb{R}^{3}$ is the predicted displacement vector indicating temporal motion, and the feature $f = \{ c, \sigma, r, s \}$ comprises the color, opacity, rotation, and scale attributes of each Gaussian. 
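As a concrete illustration of this parameterization, a single GAF primitive can be sketched as a record holding $\mu$, $\Delta\mu$, and the feature set $f = \{c, \sigma, r, s\}$; the field and method names below are illustrative, not the paper's implementation:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianPrimitive:
    """One GAF primitive: geometry plus motion (illustrative sketch)."""
    mu: np.ndarray        # (3,) 3D center position μ
    delta_mu: np.ndarray  # (3,) predicted displacement Δμ over Δt
    color: np.ndarray     # (3,) RGB color c
    opacity: float        # opacity σ
    rotation: np.ndarray  # (4,) rotation quaternion r
    scale: np.ndarray     # (3,) per-axis scale s

    def future_position(self) -> np.ndarray:
        # The future query shifts the center by the motion attribute.
        return self.mu + self.delta_mu
```

A primitive with $\Delta\mu = (0.1, 0, -0.2)$ would thus report a future center displaced by exactly that vector, which is the quantity the future and action queries consume.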
The rendering process follows 3DGS [31]: the pixel color at location $\mathbf{p}$ is computed using alpha-blended rendering:
$$ C(\mathbf{p}) = \sum_{i=1}^{N} \alpha_{i} c_{i} \prod_{j=1}^{i-1} (1 - \alpha_{j}), \quad \text{where } \alpha_{i} = \sigma_{i} e^{-\frac{1}{2} (\mathbf{p} - \mu_{i}^{2d})^{\top} \Sigma_{i}^{-1} (\mathbf{p} - \mu_{i}^{2d})}, $$
where $C$ is the rendered image, $N$ denotes the number of Gaussians, $\alpha_{i}$ represents the 2D density of the Gaussian points in the splatting process, and $\Sigma_{i}$ stands for the covariance matrix derived from the rotation $r$ and scale $s$. To support current scene reconstruction, future state prediction, and action estimation, we define three types of queries over GAF, as defined in Eq. 3. The current query retrieves the position and feature parameters of Gaussians at the current time step, enabling rendering of the scene from novel views. The future query applies the predicted displacement to positions to obtain future positions, forming a temporally shifted Gaussian field for rendering future views. The action query retrieves motion attributes of manipulation-related Gaussians and estimates the initial action via point cloud matching between current and future point clouds.
Figure 3: Overview of GAF reconstruction. Given sparse multi-view images, a Vision Transformer extracts hybrid scene features, which are decoded by three heads to predict Gaussian positions, motions, and appearance parameters, forming the GAF representation. 
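The alpha-blending formula above can be sketched for a single pixel, with Gaussians already sorted front to back and their 2D densities $\alpha_i$ precomputed (a simplification of the full splatting pipeline, which also handles projection and sorting):

```python
import numpy as np

def alpha_composite(colors: np.ndarray, alphas: np.ndarray) -> np.ndarray:
    """Front-to-back alpha blending for one pixel, as in the 3DGS
    rendering equation: C = sum_i alpha_i * c_i * prod_{j<i} (1 - alpha_j).

    colors: (N, 3) per-Gaussian colors, sorted front to back.
    alphas: (N,)   per-Gaussian 2D densities evaluated at this pixel.
    """
    transmittance = 1.0          # product of (1 - alpha_j) so far
    pixel = np.zeros(3)
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c   # this Gaussian's contribution
        transmittance *= (1.0 - a)       # light remaining for those behind
    return pixel
```

For example, a front red Gaussian and a back green Gaussian, both with $\alpha = 0.5$, yield pixel $(0.5, 0.25, 0)$: the back Gaussian's contribution is attenuated by the front one's transmittance.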
$$ \left\{ \begin{array}{l} \mathcal{Q}_{\mathrm{current}}: \ \{ g(\mathbf{x}), t \} \xrightarrow[\mathcal{F}_{\Theta}]{\{\mu, f\}} GS_{t} \xrightarrow{\mathrm{render}} I_{t} \\ \mathcal{Q}_{\mathrm{future}}: \ \{ g(\mathbf{x}), t \} \xrightarrow[\mathcal{F}_{\Theta}]{\{\mu + \Delta\mu, f\}} GS_{t+\Delta t} \xrightarrow{\mathrm{render}} I_{t+\Delta t} \\ \mathcal{Q}_{\mathrm{action}}: \ \{ g(\mathbf{x}), t \} \xrightarrow[\mathcal{F}_{\Theta}]{\{\Delta\mu\}} A_{init} \end{array} \right. $$
# 3.2 Gaussian Action Field Architecture
The Gaussian Action Field (GAF) architecture unifies scene representation, dynamic motion prediction, and action reasoning. Our goal is to reconstruct motion-augmented Gaussians directly from sparse, unposed RGB inputs, enabling downstream temporal queries and manipulation control. Fig. 3 illustrates the overall design. Dynamic Gaussian Reconstruction. GAF adopts a geometry-agnostic, pose-free approach for dynamic scene reconstruction, in contrast to traditional methods such as NeRF [47] and 3DGS [31], which rely on dense camera poses or strong geometric priors (e.g., cost volumes, epipolar constraints). Our architecture directly reconstructs high-fidelity motion-augmented Gaussians of the input views in a canonical space aligned with the first input view. This is achieved using a feed-forward network that comprises a vision transformer backbone and three specialized heads. Specifically, given two unposed $H \times W$ images and their corresponding intrinsics $\{ I_{v}^{t}, k_{v}^{t} \}_{v=1}^{V}$ at timestep $t$, we tokenize the images into patch sequences and concatenate them. The resulting tokens are fed into a shared-weight Vision Transformer with cross-view attention to extract features. 
For scene representation, we employ a decoupled two-head design $\mathcal{H}_{\mathrm{Gauss}} = \{ h_{\mathrm{Center}}, h_{\mathrm{Param}} \}$ based on the DPT architecture [48] to process the features: the Gaussian Center Head $h_{\mathrm{Center}}$ predicts only the Gaussian centers, while the Gaussian Param Head $h_{\mathrm{Param}}$ estimates the remaining parameters by additionally incorporating RGB information. The process can be formulated as:
$$ \mathcal{H}_{\mathrm{Gauss}}\left( \mathrm{ViT}(\{ I_{v}^{t}, k_{v}^{t} \}) \right)_{v=1}^{V} = \{ \mu_{j}^{t}, c_{j}^{t}, \sigma_{j}^{t}, r_{j}^{t}, s_{j}^{t} \}_{j=1}^{V \times H \times W} . $$
For scene dynamics, we introduce a Motion Prediction Head $h_{\mathrm{Motion}}$ following the same DPT-based architecture [48] as the Gaussian Center Head. $h_{\mathrm{Motion}}$ predicts the per-point displacement $\Delta\mu_{j}^{t \to t+\Delta t}$, representing the motion of each Gaussian over a future interval $\Delta t$:
$$ h_{\mathrm{Motion}}( \mathrm{ViT}(\{ I_{v}^{t}, k_{v}^{t} \}) )_{v=1}^{V} = \{ \Delta\mu_{j}^{t \to t+\Delta t} \} . $$
Figure 4: Manipulation pipeline. The GAF current and action queries provide current multi-view observations and an initial action estimate (left). These are then used as conditions for a refinement network to generate executable motion (right). The process repeats iteratively until the task completes.
The predicted displacements $\Delta\mu_{j}^{t \to t+\Delta t}$ are added to the current centers $\mu_{j}^{t}$ to obtain the future Gaussian positions $\mu_{j}^{t+\Delta t}$. 
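The assembly of the head outputs into current and future Gaussian fields can be sketched as follows; shapes, concatenation layout, and names are illustrative of the per-pixel prediction scheme, not the actual network code:

```python
import numpy as np

def assemble_gaf(centers: np.ndarray, params: np.ndarray,
                 motion: np.ndarray) -> tuple:
    """Assemble current and future Gaussian fields from the three head
    outputs (illustrative sketch; one Gaussian per pixel per view).

    centers: (V*H*W, 3)  from the Gaussian Center Head
    params:  (V*H*W, 11) color(3) + opacity(1) + rotation(4) + scale(3)
    motion:  (V*H*W, 3)  per-Gaussian displacement Δμ from the Motion Head
    """
    # Current field: predicted centers plus appearance/shape parameters.
    current = np.concatenate([centers, params], axis=1)
    # Future field: centers shifted by the motion attribute, same params.
    future = np.concatenate([centers + motion, params], axis=1)
    return current, future
```

The key point is that the future field reuses the appearance and shape parameters unchanged; only the centers move by $\Delta\mu$.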
These displaced centers are fused with the appearance and shape parameters $(c_{j}^{t}, \sigma_{j}^{t}, r_{j}^{t}, s_{j}^{t})$ to form the future Gaussian field. Given the current and future Gaussians, we can render multiple novel-view images for the current state $\{\hat{I}_{v}^{t}\}_{v=1}^{M}$ and the future state $\{\hat{I}_{v}^{t+\Delta t}\}_{v=1}^{M}$, where $M$ denotes the number of synthesized views. This allows direct supervision from RGB video frames for the entire Dynamic Gaussian Reconstruction. The training process follows [60]:
$$ \mathcal{L}_{\mathrm{GAF}} = \mathcal{L}_{\mathrm{LPIPS}}^{t} + \mathcal{L}_{\mathrm{MSE}}^{t} + \mathcal{L}_{\mathrm{LPIPS}}^{t+\Delta t} + \mathcal{L}_{\mathrm{MSE}}^{t+\Delta t} , $$
where $\mathcal{L}^{t}$ enforces geometric fidelity to current observations and $\mathcal{L}^{t+\Delta t}$ regularizes future state prediction. They are aggregated into a unified objective, facilitating the joint optimization of motion-augmented Gaussian reconstruction. Initial Action Computation. Given the reconstructed Gaussians at the current and future frames, we aim to explicitly describe the scene dynamics. Since our task focuses on robotic manipulation, we concentrate on the motion of the gripper, which serves as the robotic end-effector. Due to its rigid nature, we extract the manipulator-related Gaussians from the current state $\mu_{\mathrm{gripper}}^{t}$ and the future state $\mu_{\mathrm{gripper}}^{t+\Delta t}$, and estimate a rigid transformation $T^{t \to t+\Delta t} \in \mathrm{SE}(3)$ using ICP [51]. 
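The rigid-transformation estimate can be sketched with a closed-form Kabsch/Umeyama fit between corresponding gripper Gaussian centers (ICP, as used in the paper, alternates this least-squares step with nearest-neighbor correspondence search, which we omit here; function names are illustrative):

```python
import numpy as np

def fit_rigid_transform(p_t: np.ndarray, p_t1: np.ndarray) -> tuple:
    """Least-squares SE(3) fit between corresponding gripper centers at
    t and t+Δt (closed-form Kabsch solution, known correspondences).

    p_t, p_t1: (K, 3) corresponding points.
    Returns (R, t) such that p_t1 ≈ p_t @ R.T + t.
    """
    c_t, c_t1 = p_t.mean(axis=0), p_t1.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (p_t - c_t).T @ (p_t1 - c_t1)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps the result a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_t1 - R @ c_t
    return R, t
```

With noiseless correspondences this recovers the ground-truth rotation and translation exactly, which is the idealized case of the gripper's rigid motion between the current and future Gaussian fields.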
This transformation captures the gripper's motion and provides an explicit estimate of the scene dynamics:
$$ T^{t \to t+\Delta t} = \arg \operatorname*{min}_{T \in \mathrm{SE}(3)} \sum_{k \in \mathrm{gripper}} \| T(\mu_{k}^{t}) - \mu_{k}^{t+\Delta t} \|^{2} . $$
$T^{t \to t+\Delta t}$ represents the change over $\Delta t$ time steps. To obtain the transformation matrix for each time step within this period, we interpolate $T^{t \to t+\Delta t}$ to derive a sequence of transformation matrices. This sequence represents the initial action $a_{init}$ that transitions the current frame to the future frame.
# 3.3 Manipulation with Gaussian Action Field
Having introduced the definition and implementation of GAF, we now describe how it is deployed in robotic manipulation tasks. GAF supports three types of queries. While the visual outputs from the current and future queries serve as supervision signals for GAF training, the initial actions obtained through action queries inevitably contain interaction-induced noise [42] due to partial observations, occlusions, or geometric ambiguities during physical interactions. Therefore, before executing these actions, we introduce a diffusion-based refinement module for action denoising. This module jointly leverages GAF-rendered multi-view observations (current query) and the initial action prediction (action query) to guide the diffusion model towards higher-quality denoising outcomes. Diffusion-based Refinement. As illustrated in Fig. 4, to fully leverage GAF's visual outputs and action predictions, we draw on insights from R&D [57]. 
For each denoising step of duration $\Delta t$, we project the gripper positions corresponding to the initial action $a_{init}$ to pixel coordinates using the camera parameters and then render the gripper mesh onto the current multi-view RGB images $\{ \hat{I}_{v}^{t} \}_{v=1}^{M}$. This creates a unified representation, termed Actionable RGB Guidance, which integrates the visual 3D observations reconstructed by GAF with the temporally predicted actions. Such visual cues (surrounded by a yellow box in the refinement part on the right side of Fig. 4), along with the initial action and gripper states, guide the diffusion model to minimize the following objective:
$$ \mathcal{L}_{refine} = L_{1}(D, D^{gt}) + L_{1}(\epsilon, \epsilon^{gt}) + \mathrm{BCE}(g, g^{gt}) , $$
where $D$ represents the denoising direction of the gripper, $\epsilon$ is the noise added to the end-effector action, and $g$ is a binary variable representing the gripper's opening-closing action. $D^{gt}$, $\epsilon^{gt}$, and $g^{gt}$ are their respective ground-truth labels. The denoised action sequence $a_{refine}$ can be executed directly, enabling the acquisition of new observations of the updated scene. The entire pipeline, comprising GAF-based scene reconstruction, diffusion refinement, and execution, is repeated iteratively until the manipulation task is completed. This closed-loop framework enables continuous adaptation to dynamic scene changes, leveraging GAF's spatiotemporal reasoning to maintain robust performance under occlusion and interaction uncertainties.
# 4 Experiments
In this section, we first introduce the experimental setup, including data and baseline methods. Then, to thoroughly assess the effectiveness of the Gaussian Action Field in scene representation, future state prediction, and accurate action prediction, we evaluate our framework on dynamic scene reconstruction and task-level success rate. 
Finally, we conduct an ablation study to further validate the effectiveness of the various components of our model. Simulation. For manipulation tasks, we select 9 tasks from popular RLBench [25] tasks, covering diverse manipulation challenges including articulated object handling and occlusion-rich interactions. To ensure generalization, we randomly initialize the objects in the environment and collect 20 demonstrations for the training phase of each task. To eliminate randomness and ensure representational generalization, we conduct evaluations across 100 episodes per task, with objects again initialized randomly. For visual data collection, we collect RGB sequences from 30 views using a circular camera array centered on the robot workspace, following GNFactor [61]. Baselines. For scene reconstruction quality, we compare against ManiGaussian [44], which also reconstructs current and future Gaussians during training. For task success rate, ACT [65] and DP [10] are two classic methods in robotics, while the R&D [57] sub-method R&D-AI, which integrates actions and images, represents the state of the art (SOTA) on RLBench tasks. All three methods belong to the V-A category. ManiGaussian is classified under the V-3D-A category, as it understands the scene and then predicts actions from 3D representations. To ensure fairness, all baselines adopt identical camera configurations, action spaces, and task variations. Parameter settings are detailed in the supplementary material.
# 4.1 Evaluation on Scene Reconstruction and Prediction
To validate the reconstruction capabilities of our Gaussian Action Field, we compare it with ManiGaussian. Although ManiGaussian uses static 3D Gaussian representations, it applies a deformation to the current Gaussians to obtain the future Gaussians for evaluating action quality. As a result, both methods generate Gaussian point clouds for the current and future frames during training. Qualitative Analysis. 
As illustrated in Figure 5, our method achieves superior reconstruction fidelity and novel-view synthesis. ManiGaussian's renders (top) exhibit blurred textures and incomplete geometric details, resulting in ambiguous spatial relationships. In contrast, our renders (bottom) preserve fine geometric structures, such as the gripper's articulated joints and object surfaces, even under partial observations. This clarity in reconstructing the Gaussian point cloud allows precise end-effector point clouds to be extracted for action computation, which constitutes a fundamental difference from ManiGaussian. Figure 5: Comparison of current scene reconstruction and future scene prediction from novel views. Quantitative Metrics. We further evaluate reconstruction quality using standard metrics: PSNR (photometric fidelity), SSIM (structural similarity), and LPIPS (perceptual consistency). As shown in Table 1, our method outperforms ManiGaussian by +11.5385 dB PSNR, +0.3864 SSIM, and -0.5574 LPIPS on average across tasks in current scene reconstruction, and by +10.5311 dB PSNR, +0.3856 SSIM, and -0.5757 LPIPS in future state prediction. These metrics confirm that our dynamic rendering framework ensures high geometric accuracy and temporal coherence. Table 1: Current and future novel view synthesis performance comparison. ↑: higher is better; ↓: lower is better.
# 4.2 Evaluation on Manipulation Success Rate
We compare our GAF with the baselines on success rates across 9 RLBench tasks, focusing on precision manipulation, occluded interactions, and dynamic contact, to investigate how our GAF, a V-4D-A method for scene evolution and action prediction, improves action-level prediction accuracy. Result and Discussion. Quantitative results are presented in Table 2. 
As indicated by the results, our approach, which explicitly models 4D scene variations, outperforms the baseline V-A methods ACT, DP, and R&D, which understand the scene in latent space. Compared with the current SOTA R&D-AI, our model achieves an 18% improvement on the task "Toilet Seat Down" and a 14% improvement on the task "Close Laptop". In the "Toilet Seat Down" task, the robot needs to accurately perceive the orientation and position of the seat in relation to its surroundings, such as the toilet body. A 3D representation of the scene allows the robot to model these spatial relationships. This demonstrates how 3D scene modeling is fundamental to improving a robot's ability to understand its environment and execute tasks with higher accuracy. Table 2: Success rates (%) of the baselines and our variant evaluated on RLBench tasks. Figure 6: Ablation on action refinement. The upper image shows a failed experiment without action refinement, while the lower image depicts a successful experiment after action refinement. To further validate the effectiveness of the spatiotemporal representation, we compare our model with the V-3D-A method ManiGaussian. Our approach exhibits better spatiotemporal scene understanding (specifically, reconstruction quality; see Section 4.1), resulting in improved success rates across nearly all tasks. This is because our 4D representation includes dynamics modeling along the temporal dimension, allowing more accurate task execution when inferring actions.
# 4.3 Ablation Study
Ablation on Gaussian Action Field. To evaluate the contribution of the Gaussian Action Field, we remove this component and directly predict actions from two images using a diffusion model, without multi-view rendering or initial action priors. This setup aligns with generative baselines like DP and R&D. As shown in Table 3, our full method outperforms R&D-AI by +10.33% in average success rate across tasks. 
Notably, in occlusion-heavy tasks like "Close Microwave," our method achieves a 13% improvement over R&D-AI, demonstrating the critical role of explicit scene reconstruction in resolving spatial ambiguities. These results demonstrate that modeling 3D geometry and dynamics through motion fields significantly enhances the robustness of action prediction. Ablation on Action Refinement. We analyze the effectiveness of action refinement by comparing the initial action (directly derived from Gaussian motions) and the refined action. From the comparison in Fig. 6, it can be observed that before contact, the initial action often aligns well with the target object's pose. However, during interaction, reconstruction errors from partially occluded observations lead to physically implausible robot-object relations (e.g., misaligned contacts or penetration), requiring action refinement. For example, in the shown experiment "Close Microwave", the initial action directly instructs the robot to move toward the closing area without accounting for the object's geometry, namely that the microwave should be manipulated via its door. The diffusion-based refinement corrects these errors through supervised learning on real physical interaction data.
# 5 Discussion
We present the Gaussian Action Field (GAF), a V-4D-A paradigm that infers the future evolution of a scene from current visual observations to guide robotic manipulation. GAF supports scene reconstruction, future prediction, and action generation within a unified Gaussian world model. This feed-forward pipeline requires only two unposed RGB images, operates without any heavy setup, and supports real-time execution. Experiments on RLBench demonstrate that GAF achieves superior performance in both reconstruction and manipulation tasks. While our current method focuses on geometric modeling and motion prediction, it lacks semantic or task-level understanding. 
Future work will incorporate language modeling to bring high-level semantic priors into the system, extending our framework toward VL-4D-A (Vision-Language-4D-to-Action) to support context-aware manipulation.
# References
[1] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. Flamingo: a visual language model for few-shot learning, 2022. [2] Kevin Black, Mitsuhiko Nakamoto, Pranav Atreya, Homer Rich Walke, Chelsea Finn, Aviral Kumar, and Sergey Levine. Zero-shot robotic manipulation with pretrained image-editing diffusion models. arXiv preprint arXiv:2310.10639, 2023. [3] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn, Pete Florence, Chuyuan Fu, Montse Gonzalez Arenas, Keerthana Gopalakrishnan, Kehang Han, Karol Hausman, Alexander Herzog, Jasmine Hsu, Brian Ichter, Alex Irpan, Nikhil Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Lisa Lee, Tsang-Wei Edward Lee, Sergey Levine, Yao Lu, Henryk Michalewski, Igor Mordatch, Karl Pertsch, Kanishka Rao, Krista Reymann, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Pierre Sermanet, Jaspiar Singh, Anikait Singh, Radu Soricut, Huong Tran, Vincent Vanhoucke, Quan Vuong, Ayzaan Wahid, Stefan Welker, Paul Wohlhart, Jialin Wu, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich. RT-2: Vision-language-action models transfer web knowledge to robotic control, 2023. 
[4] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Vuong, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich. Rt-1: Robotics transformer for real-world control at scale, 2023. [5] Dave Zhenyu Chen, Angel X. Chang, and Matthias Nießner. Scanrefer: 3d object localization in rgb-d scans using natural language. ArXiv, abs/1912.08830, 2019. [6] Shizhe Chen, Ricardo Garcia Pinel, Cordelia Schmid, and Ivan Laptev. Polarnet: 3d point clouds for language-guided robotic manipulation. ArXiv, abs/2309.15596, 2023. [7] Tianxing Chen, Yao Mu, Zhixuan Liang, Zanxin Chen, Shijia Peng, Qiangyu Chen, Min Xu, Ruizhen Hu, Hongyuan Zhang, Xuelong Li, and Ping Luo. G3flow: Generative 3d semantic flow for pose-aware and generalizable object manipulation. ArXiv, abs/2411.18369, 2024. [8] Xiaoyu Chen, Junliang Guo, Tianyu He, Chuheng Zhang, Pushi Zhang, Derek Cathera Yang, Li Zhao, and Jiang Bian. Igor: Image-goal representations are the atomic control units for foundation models in embodied ai. arXiv preprint arXiv:2411.00785, 2024. [9] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning, 2020. [10] Cheng Chi, Zhenjia Xu, Siyuan Feng, Eric Cousineau, Yilun Du, Benjamin Burchfiel, Russ Tedrake, and Shuran Song. 
Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, page 02783649241273668, 2023. [11] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021. [12] Yilun Du, Sherry Yang, Bo Dai, Hanjun Dai, Ofir Nachum, Josh Tenenbaum, Dale Schuurmans, and Pieter Abbeel. Learning universal policies via text-guided video generation. Advances in neural information processing systems, 36:9156–9172, 2023. [13] Chongkai Gao, Zhengrong Xue, Shuying Deng, Tianhai Liang, Siqi Yang, Lin Shao, and Huazhe Xu. Riemann: Near real-time se (3)-equivariant robot manipulation without point cloud segmentation. arXiv preprint arXiv:2403.19460, 2024. [14] Zeyu Gao, Yao Mu, Chen Chen, Jingliang Duan, Ping Luo, Yanfeng Lu, and Shengbo Eben Li. Enhance sample efficiency and robustness of end-to-end urban autonomous driving via semantic masked world model. IEEE Transactions on Intelligent Transportation Systems, 2024. [15] Theophile Gervet, Zhou Xian, Nikolaos Gkanatsios, and Katerina Fragkiadaki. Act3d: Infinite resolution action detection transformer for robotic manipulation. arXiv preprint arXiv:2306.17817, 1(3), 2023. [16] David Ha and Jürgen Schmidhuber. Recurrent world models facilitate policy evolution. Advances in neural information processing systems, 31, 2018. [17] Danijar Hafner, Kuang-Huei Lee, Ian Fischer, and Pieter Abbeel. Deep hierarchical planning from pixels. Advances in Neural Information Processing Systems, 35:26091–26104, 2022. [18] Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019. [19] Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. 
Mastering atari with discrete world models. arXiv preprint arXiv:2010.02193, 2020. [20] Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104, 2023. [21] Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. arXiv preprint arXiv:2301.04104, 2023. [22] Nicklas Hansen, Hao Su, and Xiaolong Wang. Td-mpc2: Scalable, robust world models for continuous control. arXiv preprint arXiv:2310.16828, 2023. [23] Anthony Hu, Gianluca Corrado, Nicolas Griffiths, Zachary Murez, Corina Gurau, Hudson Yeo, Alex Kendall, Roberto Cipolla, and Jamie Shotton. Model-based imitation learning for urban driving. Advances in Neural Information Processing Systems, 35:20703–20716, 2022. [24] Andrew Jaegle, Felix Gimeno, Andrew Brock, Andrew Zisserman, Oriol Vinyals, and João Carreira. Perceiver: General perception with iterative attention. ArXiv, abs/2103.03206, 2021. [25] Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019–3026, 2020. [26] Stephen James, Kentaro Wada, Tristan Laidlow, and Andrew J. Davison. Coarse-to-fine qattention: Efficient learning for visual robotic manipulation via discretisation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13729–13738, 2021. [27] Mazeyu Ji, Ri-Zhao Qiu, Xueyan Zou, and Xiaolong Wang. Graspsplats: Efficient manipulation with 3d feature splatting. arXiv preprint arXiv:2409.02084, 2024. [28] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International conference on machine learning, pages 4904–4916. PMLR, 2021. 
[29] Jian-Jian Jiang, Xiao-Ming Wu, Yi-Xiang He, Ling an Zeng, Yi-Lin Wei, Dandan Zhang, and Wei-Shi Zheng. Rethinking bimanual robotic manipulation: Learning with decoupled interaction framework. ArXiv, abs/2503.09186, 2025. [30] Tsung-Wei Ke, Nikolaos Gkanatsios, and Katerina Fragkiadaki. 3d diffuser actor: Policy diffusion with 3d scene representations. ArXiv, abs/2402.10885, 2024. [31] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering, 2023. [32] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything, 2023. [33] Alina Kloss, Maria Bauza, Jiajun Wu, Joshua B Tenenbaum, Alberto Rodriguez, and Jeannette Bohg. Accurate vision-based manipulation through contact reasoning. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 6738–6744. IEEE, 2020. [34] Vincent Leroy, Yohann Cabon, and Jérôme Revaud. Grounding image matching in 3d with mast3r. In European Conference on Computer Vision, pages 71–91. Springer, 2024. [35] Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 11336–11344, 2020. [36] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730–19742. PMLR, 2023. [37] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019. [38] Junbang Liang, Ruoshi Liu, Ege Ozguroglu, Sruthi Sudhakar, Achal Dave, Pavel Tokmakov, Shuran Song, and Carl Vondrick. 
Dreamitate: Real-world visuomotor policy learning via video generation. arXiv preprint arXiv:2406.16862, 2024. [39] Junbang Liang, Ruoshi Liu, Ege Ozguroglu, Sruthi Sudhakar, Achal Dave, Pavel Tokmakov, Shuran Song, and Carl Vondrick. Dreamitate: Real-world visuomotor policy learning via video generation. ArXiv, abs/2406.16862, 2024. [40] Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner, Pieter Abbeel, Dan Klein, and Anca Dragan. Learning to model the world with language. arXiv preprint arXiv:2308.01399, 2023. [41] I-Chun Arthur Liu, Sicheng He, Daniel Seita, and Gaurav Sukhatme. Voxact-b: Voxel-based acting and stabilizing policy for bimanual manipulation. In Conference on Robot Learning, 2024. [42] Xueyi Liu and Li Yi. Geneoh diffusion: Towards generalizable hand-object interaction denoising via denoising diffusion, 2024. [43] Guanxing Lu, Ziwei Wang, Changliu Liu, Jiwen Lu, and Yansong Tang. Thinkbot: Embodied instruction following with thought chain reasoning. arXiv preprint arXiv:2312.07062, 2023. [44] Guanxing Lu, Shiyi Zhang, Ziwei Wang, Changliu Liu, Jiwen Lu, and Yansong Tang. Manigaussian: Dynamic gaussian splatting for multi-task robotic manipulation. arXiv preprint arXiv:2403.08321, 2024. [45] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks, 2019. [46] Russell Mendonca, Shikhar Bahl, and Deepak Pathak. Structured world models from human videos. arXiv preprint arXiv:2308.10901, 2023. [47] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020. [48] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. ArXiv preprint, 2021. [49] Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. Highresolution image synthesis with latent diffusion models. 
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674–10685, 2021. [50] Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. Nature, 588(7839):604–609, 2020. [51] Aleksandr V. Segal, Dirk Hähnel, and Sebastian Thrun. Generalized-icp. In Robotics: Science and Systems, 2009. [52] Younggyo Seo, Danijar Hafner, Hao Liu, Fangchen Liu, Stephen James, Kimin Lee, and Pieter Abbeel. Masked world models for visual control. In Conference on Robot Learning, pages 1332–1344. PMLR, 2023. [53] Mohit Shridhar, Yat Long Lo, and Stephen James. Generative image as action models. arXiv preprint arXiv:2407.07875, 2024. [54] Mohit Shridhar, Lucas Manuelli, and Dieter Fox. Perceiver-actor: A multi-task transformer for robotic manipulation. ArXiv, abs/2209.05451, 2022. [55] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. [56] Vitalis Vosylius and Edward Johns. Instant policy: In-context imitation learning via graph diffusion. ArXiv, abs/2411.12633, 2024. [57] Vitalis Vosylius, Younggyo Seo, Jafar Uruç, and Stephen James. Render and diffuse: Aligning image and action spaces for diffusion-based behaviour cloning. arXiv preprint arXiv:2405.18196, 2024. [58] Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything: Unleashing the power of large-scale unlabeled data. In CVPR, 2024. [59] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything v2. arXiv:2406.09414, 2024. [60] Botao Ye, Sifei Liu, Haofei Xu, Li Xueting, Marc Pollefeys, Ming-Hsuan Yang, and Peng Songyou. No pose, no problem: Surprisingly simple 3d gaussian splats from sparse unposed images. arXiv preprint arXiv:2410.24207, 2024. 
[61] Yanjie Ze, Ge Yan, Yueh-Hua Wu, Annabella Macaluso, Yuying Ge, Jianglong Ye, Nicklas Hansen, Li Erran Li, and Xiaolong Wang. Gnfactor: Multi-task real robot learning with generalizable neural feature fields. In Conference on Robot Learning, pages 284–301. PMLR, 2023. [62] Yanjie Ze, Gu Zhang, Kangning Zhang, Chenyuan Hu, Muhan Wang, and Huazhe Xu. 3d diffusion policy: Generalizable visuomotor policy learning via simple 3d representations, 2024. [63] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric, 2018. [64] Tong Zhang, Yingdong Hu, Hanchen Cui, Hang Zhao, and Yang Gao. A universal semantic-geometric representation for robotic manipulation. arXiv preprint arXiv:2306.10474, 2023. [65] Tony Z Zhao, Vikash Kumar, Sergey Levine, and Chelsea Finn. Learning fine-grained bimanual manipulation with low-cost hardware. arXiv preprint arXiv:2304.13705, 2023. [66] Haoyu Zhen, Xiaowen Qiu, Peihao Chen, Jincheng Yang, Xin Yan, Yilun Du, Yining Hong, and Chuang Gan. 3d-vla: A 3d vision-language-action generative world model. ArXiv, abs/2403.09631, 2024.

# A Additional Experiments

In this section, we design additional experiments to demonstrate the performance of GAF, mainly evaluating its spatial generalization, data efficiency, and multi-task learning.

# A.1 Spatial Generalization

Figure 7: Spatial Generalization. Outcomes of GAF and the baseline trained on 20 demonstrations (purple stars). The heat maps show Gaussian kernel density estimates of outcome polarity over the workspace, with red and blue representing successes and failures, respectively.

We propose a systematic data collection strategy to ensure comprehensive spatial coverage of object poses within the operational workspace. The methodology begins with a canonical demonstration in which the object is positioned at the workspace centroid $(x_0, y_0)$.
Subsequently, we implement an iterative farthest-point sampling algorithm that selects each subsequent pose by maximizing the minimum Euclidean distance to the existing samples in the demonstration set $\mathcal{D} = \{ p_i \}_{i=1}^{n}$, formally expressed as:

$$
p_{n+1} = \arg\max_{p \in \mathcal{P}} \min_{p' \in \mathcal{D}} \| p - p' \|_2
$$

where $\mathcal{P}$ denotes the feasible pose space and $\| \cdot \|_2$ the L2 norm. During evaluation, we establish a systematic protocol that samples object poses on a dense grid across the entire workspace. This design guarantees sufficient spatial variation in test conditions while maintaining measurement consistency; comparative analysis against the baseline R&D [57] is shown in Figure 7. As illustrated in Figure 7, R&D encounters significant challenges when objects are placed along the boundaries and corners of the workspace. Notably, in the close_microwave task, the previous method exhibits pronounced performance degradation even within central regions. In contrast, our method generalizes well even when objects are placed on the boundaries, and it is also less sensitive to corner areas.

# A.2 Data Efficiency

Figure 8: Data Efficiency. The success rates of our method and the baseline R&D on three tasks (Lift Lid, Close Microwave, Phone On Base) with different numbers of demonstrations.

For this set of experiments, we train the models on different numbers of demonstrations, collected with the same strategy as in A.1, and evaluate them on a grid to ensure the experiments present a sufficient level of challenge. Figure 8 shows how the performance of our GAF and the current SOTA R&D changes with increasing density of workspace coverage, i.e., the number of demonstrations.
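The iterative farthest-point selection rule above can be sketched in a few lines. This is a minimal illustration, not the authors' code; the candidate poses, sample count, and starting pose are placeholders.

```python
import numpy as np

def farthest_point_poses(candidates, n_samples, start):
    """Iteratively pick poses that maximize the minimum Euclidean
    distance to the already-chosen demonstration poses."""
    chosen = [np.asarray(start, dtype=float)]
    cands = np.asarray(candidates, dtype=float)
    for _ in range(n_samples - 1):
        # distance from every candidate to its nearest already-chosen pose
        d = np.min(
            np.linalg.norm(cands[:, None, :] - np.stack(chosen)[None, :, :], axis=-1),
            axis=1,
        )
        # greedy argmax-min step from the equation above
        chosen.append(cands[int(np.argmax(d))])
    return np.stack(chosen)
```

Starting from the workspace centroid, each call greedily adds the candidate pose farthest from all poses collected so far, which spreads demonstrations toward workspace boundaries and corners.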
As expected, all methods benefit from larger numbers of demonstrations. Moreover, GAF reaches 90% of its peak performance with only 15 demonstrations, demonstrating excellent data efficiency.

# A.3 Multi-task Test

Table 3: Success rates (%) of GAF and the baseline when a single model is trained on demonstrations from 4 different tasks (20 demonstrations per task), together with the performance difference relative to the single-task setting.

In our previous experiments, we trained a distinct policy network for each individual task. To validate the generalization capability of GAF, we test its capacity to learn multiple tasks simultaneously, a critical property for any world model. In this section, we train a single network using data collected from 4 RLBench tasks, with 20 demonstrations each. Object positions are randomly initialized during both data collection and evaluation.

Figure 9: GAF Query Result. Multiview images rendered from the current and future Gaussian point clouds, along with a visualization of the predicted initial actions.

As Table 3 illustrates, our method's average success rate declines by only 10.7%. This highlights GAF's robust multi-tasking capability, underscoring its effectiveness as a world-model-based approach. The decline is largest on the "lift lid" task, which is markedly different from the other three tasks. Nevertheless, compared to the substantial 28.5% decline observed for the baseline, our method performs considerably better.

# B GAF Query Result

Figure 9 presents the results obtained by querying GAF with the current query, future query, and action query.
The "current" column, "future" column, and "action" column represent, respectively, the multiview images rendered from the current Gaussian point cloud reconstructed by GAF, the multiview images rendered from the predicted future Gaussian point cloud, and the initial action computed based on these two Gaussian point clouds (visualized by rendering the mesh into the images). We show results on three tasks from RLBench: "lift lid," "toilet seat down," and "close microwave." It can be observed that our method produces clear novel views RGB for both the current and future states, and is capable of generating reasonable initial actions based on the transformation from the current to the future state. # C Implement Details Training Phase During the training of GAF, The network is end-to-end trained using ground truth target RGB video frames as supervision with a linear combination of MSE and LPIPS [63] loss with weights of 1 and 0.05, respectively. We initialize the ViT, Gaussian center head and motion prediction head with the weights from MASt3R [34], while the remaining layers are initialized randomly. The GAF model is trained on 9 separate tasks, each consisting of 20 demonstrations and 200 input RGB image video frames per demonstration. It have been trained for $8 0 \mathrm { k }$ iterations (with a batch size of 16) The model is trained using a single NVIDIA RTX A800 GPU, which takes approximately 24 hours to complete. In the action refinement process, we use 50 diffusion ierations based on DDIM [55]. To obtain more precise local observations, we incorporated the GT wrist camera data as an auxiliary resource in this section. We use 2 last observations as input and predict 8 future actions. It have been trained for $5 0 \mathrm { k }$ iterations (with a batch size of 8). The denoising process completes in 1.5 days on a single NVIDIA RTX A4090 GPU without extensive optimisation. 
Evaluation Phase. For a fair comparison, all methods, including the baselines, use RGB observations ($128 \times 128$) from two external cameras and one wrist camera. Unlike training, inference requires only 3 diffusion iterations, which makes online deployment closer to real-time. The hyperparameters used in GAF are listed in Table 4; baseline hyperparameters follow previous works [57, 65, 44, 10] for a fair comparison.

Table 4: Hyperparameters

# D RLBench Dataset Success Metrics

In this section, we provide a precise overview of the RLBench [25] dataset, describing each of the 9 tasks in detail, including key actions and success metrics.

# D.1 Toilet Seat Down

Description: The robot must lower the toilet seat from an upright position to a closed position. Key Actions: Grasp the toilet seat. Apply a controlled downward motion to close it. Success Metric: The toilet seat is fully lowered, resting flat on the toilet bowl, and remains stationary.

# D.2 Open Grill

Description: The robot needs to open the lid of a grill (e.g., a barbecue grill). Key Actions: Grasp the grill lid handle. Pull and lift the handle to open the lid. Success Metric: The grill lid is fully open and remains stationary in the open position.

# D.3 Close Grill

Description: The robot must close the lid of the grill after it has been opened. Key Actions: Grasp the grill lid handle. Push and twist the lid down to close it. Success Metric: The grill lid is fully closed, flush with the grill body, and does not rebound.

# D.4 Close Microwave

Description: The robot must close the door of a microwave that has been left open. Key Actions: Push the microwave door. Apply force to swing the door shut. Success Metric: The microwave door is fully closed.

# D.5 Close Fridge

Description: The robot needs to close the door of a refrigerator. Key Actions: Grasp or push the fridge door. Apply force to close the door completely.
Success Metric: The fridge door is fully closed.

# D.6 Lift Lid

Description: The robot must lift the lid of a saucepan. Key Actions: Grasp the lid handle. Lift the lid upward and away from the saucepan. Success Metric: The lid is completely removed from the container and held in a stable position without contact with the container.

# D.7 Phone On Base

Description: The robot must place a phone back onto its base. Key Actions: Grasp the phone. Align it with the base. Place it gently onto the base. Success Metric: The phone is securely placed on the base, properly aligned.

# D.8 Lamp On

Description: The robot must turn on a lamp, typically by interacting with a button. Key Actions: Locate the lamp’s activation mechanism. Interact with the mechanism to turn the lamp on. Success Metric: The lamp emits light, indicating it has been successfully turned on.

# D.9 Close Laptop

Description: The robot must close the lid of an open laptop. Key Actions: Grasp the laptop lid. Push and twist the lid down to close it. Success Metric: The laptop lid is fully closed, with no visible gap between the lid and the base.
Accurate action inference is critical for vision-based robotic manipulation. Existing approaches typically follow either a Vision-to-Action (V-A) paradigm, predicting actions directly from visual inputs, or a Vision-to-3D-to-Action (V-3D-A) paradigm, leveraging intermediate 3D representations. However, these methods often struggle with action inaccuracies due to the complexity and dynamic nature of manipulation scenes. In this paper, we propose a V-4D-A framework that enables direct action reasoning from motion-aware 4D representations via a Gaussian Action Field (GAF). GAF extends 3D Gaussian Splatting (3DGS) by incorporating learnable motion attributes, allowing simultaneous modeling of dynamic scenes and manipulation actions. To learn time-varying scene geometry and action-aware robot motion, GAF supports three key query types: reconstruction of the current scene, prediction of future frames, and estimation of initial action via robot motion. Furthermore, the high-quality current and future frames generated by GAF facilitate manipulation action refinement through a GAF-guided diffusion model. Extensive experiments demonstrate significant improvements, with GAF achieving +11.5385 dB PSNR and -0.5574 LPIPS improvements in reconstruction quality, while boosting the average success rate in robotic manipulation tasks by 10.33% over state-of-the-art methods. Project page: http://chaiying1.github.io/GAF.github.io/project_page/
[ "cs.RO", "cs.CV" ]
1. Introduction
1.1. The Emergence of LLM-Based Agentic AI and Multi-Agent Systems
1.2. The Criticality of Inter-Agent Communication in Complex AI Workflows
1.3. Introducing the Model Context Protocol (MCP) as an Interoperability Standard
1.4. Scope and Contributions of this Review: Bridging Design Patterns, LLM Agents, and MCP
2. Foundations of LLM-Based Agentic Systems
2.1. Anatomy of an LLM Agent: Brain, Memory, Tools, and Planning
2.2. From Single-Agent Autonomy to Multi-Agent Collaboration
2.3. Inherent Challenges in Multi-Agent LLM Communication
2.4. Comparison of Key LLM Agent Frameworks and their Communication Paradigms
3. Software Design Patterns for Inter-Agent Communication
3.1. Re-evaluating Classical Design Patterns for LLM-MAS
3.2. Key Communication Patterns: Mediator, Observer, Publish-Subscribe, Broker
3.3. Key Communication Patterns: Mediator, Observer, Publish-Subscribe, Broker
3.4. Formalizing Communication Patterns in LLM-MAS: A Mathematical Perspective
4. The Model Context Protocol (MCP) as an Interoperability Layer
4.1. MCP Architecture: Client-Host-Server Model and JSON-RPC Foundation
4.2. MCP's Role in Standardized Context Exchange and Tool Invocation
4.3. MCP as a Facilitator for Inter-Agent Communication Patterns
4.4. Comparative Analysis of Agent Interoperability Protocols (MCP, A2A, ACP, ANP)
5. Design Patterns in Practice: Architecting Inter-Agent Communication with MCP
5.1. Centralized Communication Architectures with MCP Mediation
5.2. Decentralized Communication Architectures Leveraging MCP Resources
5.3. Hierarchical Communication Architectures and MCP-enabled Delegation
5.4. Adaptive and Hybrid Communication Strategies
5.5. Mathematical Modeling of Inter-Agent Information Flow and Cost Optimization
6.
Architectural Adaptations Across Complexity, Autonomy, and Domains
6.1. Scaling Communication Patterns with Increasing Agent Complexity and Autonomy
6.2. Case Study: Real-time Transaction Processing Systems
6.3. Case Study: Investment Banking Applications
6.4. Financial Services Use Cases and Corresponding LLM-MAS Design Patterns
7. Challenges, Security, and Future Research Directions
7.1. Addressing Scalability, Reliability, and Security in MCP-enabled MAS
7.2. Ethical Considerations and Human-in-the-Loop Integration
7.3. Open Research Questions and Emerging Trends
8. Conclusion
References

# 1. Introduction

# 1.1. The Emergence of LLM-Based Agentic AI and Multi-Agent Systems

Large Language Models (LLMs) are undergoing a paradigm shift—from functioning as static providers of information, often embedded in conversational agents, to serving as autonomous computational agents capable of decision-making and task execution [1]. This shift marks the emergence of agentic AI, wherein LLMs are enhanced with the ability to interact with external systems, store and retrieve information over time, and perform executable actions [2]. These augmented agents are purpose-built to address tasks that require iterative reasoning, planning, memory, and tool use—capabilities that standalone LLMs lack due to constraints like limited context windows, susceptibility to hallucinations, and difficulties in managing complex sequences of actions [3]. As demands grow beyond the scope of a single agent, a new class of systems—Multi-Agent Systems composed of LLM agents (LLM-MAS)—has been introduced. These systems aim to distribute cognitive responsibilities across multiple agents, enabling collaborative problem-solving and specialization [4].
This transition is motivated by the need to scale intelligence through coordinated interactions, especially for real-world tasks [5] that are too complex for individual agents to handle effectively. Critically, the performance of LLM-MAS is not merely the result of better individual models, but stems from how these agents are architected to communicate, coordinate, and share knowledge [6]. While early LLMs showed strong single-agent performance, they struggled with tasks involving long-term dependencies, contextual continuity, and strategic tool use. Agentic AI addresses these gaps by embedding LLMs within frameworks that support planning, memory, and modular reasoning [4]. However, even these enhancements have limitations when operating in isolation. The transition to multi-agent coordination reflects a recognition that distributed intelligence [7]—emerging from structured, inter-agent communication—is key to tackling high-complexity scenarios. Ultimately, the intelligence exhibited by LLM-MAS arises less from any one agent and more from the system-level design that enables agents to function collectively as a coherent, adaptive unit [8].

# 1.2. The Criticality of Inter-Agent Communication in Complex AI Workflows

Communication between agents is the cornerstone of coordination and shared purpose in multi-agent systems, particularly in those powered by Large Language Models (LLMs). It is through communication that agents align goals, share contextual understanding, and collectively plan actions [5]. However, this very reliance introduces significant challenges. Complexities in inter-agent interaction often contribute more to system-level failures than limitations in the agents themselves. Common difficulties include misaligned objectives, inadequate mechanisms for task validation, limited scalability, exposure to security threats, and the absence of widely accepted architectural standards for robust communication protocols.
In LLM-based multi-agent systems, communication is not just the exchange of information—it is the medium through which collective reasoning emerges. Yet, this strength also becomes a liability: the same communication channels that enable synergy among agents can propagate errors, magnify design weaknesses, and open the door to adversarial exploits such as Agent-in-the-Middle (AiTM) attacks. Thus, communication in LLM-MAS presents a fundamental tension. It is simultaneously the key to emergent intelligence and a critical vulnerability that, if poorly designed, can undermine the entire system. Designing resilient, semantically meaningful communication architectures is therefore not optional—it is central to the success, trustworthiness, and safety of next-generation agentic AI [11].

# 1.3. Introducing the Model Context Protocol (MCP) as an Interoperability Standard

The Model Context Protocol (MCP) [12] [13], introduced by Anthropic in late 2024, is an open interoperability standard aimed at simplifying and unifying the way AI models connect with external tools, systems, and structured data. Often dubbed the “USB-C for AI applications,” MCP aspires to be a universal interface layer, reducing the complexity of integration across diverse platforms. At the heart of MCP is a solution to the long-standing “N × M” integration bottleneck—where each large language model (LLM) required custom code to interface with every distinct data source or tool. This led to duplicated engineering efforts and fragile, difficult-to-maintain architectures. MCP alleviates this by offering a consistent protocol that any AI assistant can use to interact with any compatible service, tool, or dataset, significantly streamlining integration workflows [14]. Built on a client-host-server model using JSON-RPC, MCP enables persistent, state-aware communication sessions.
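As a concrete illustration of the JSON-RPC foundation, the sketch below builds a tool-invocation request. The `tools/call` method name follows the MCP specification; the tool name and arguments are hypothetical placeholders.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request of the kind MCP uses for tool
    invocation: a method name plus a structured params object."""
    return json.dumps({
        "jsonrpc": "2.0",          # protocol version is mandatory
        "id": request_id,          # correlates the eventual response
        "method": "tools/call",    # MCP's tool-invocation method
        "params": {"name": tool_name, "arguments": arguments},
    })

# Example: ask a (hypothetical) search tool for interest-rate data
request = make_tool_call(1, "search", {"query": "rates"})
```

Because every request carries an `id`, the host can match asynchronous responses to their originating calls, which is what makes the sessions state-aware.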
It defines rigorous formats for data ingestion, metadata annotation, platform-agnostic model coordination, and secure bidirectional connectivity. This structured approach not only improves interoperability but also enhances the traceability and manageability of AI-driven systems. The broader impact of MCP lies in its push toward a modular, composable AI infrastructure. Rather than crafting bespoke connections that quickly devolve into convoluted systems, MCP encourages clean separations between components, allowing tools, models, and data layers to be updated or replaced independently. This modularity greatly reduces engineering overhead, fosters rapid innovation, and provides a foundation for scalable, auditable, and future-proof AI deployments. With clearly defined message schemas and a structured communication lifecycle, MCP also supports critical compliance and monitoring functions—key requirements in enterprise and regulated settings. # 1.4. Scope and Contributions of this Review: Bridging Design Patterns, LLM Agents, and MCP This review consolidates recent advancements in large language model (LLM)-driven agentic AI, classical software design methodologies, and the emerging Model Context Protocol (MCP), with the goal of guiding the design of resilient and scalable inter-agent communication frameworks. It examines how time-tested software architecture patterns can be adapted to suit the needs of modern multi-agent systems powered by LLMs, positioning MCP as a core enabler of interoperability and structured coordination. Through the use of theoretical models and schematic visualizations, the article analyzes communication dynamics, system complexity, and the efficiency of data exchange across agent networks. It also evaluates how these design strategies scale with increasing agent autonomy and system sophistication. 
Concrete examples are drawn from domains such as real-time financial systems and investment platforms, where robust agent coordination is essential. The review aims to provide developers and system architects with a grounded, actionable framework for building secure, efficient, and maintainable LLM-based multi-agent ecosystems.

# 2. Foundations of LLM-Based Agentic Systems

# 2.1. Anatomy of an LLM Agent: Brain, Memory, Tools, and Planning

An LLM-based agent consists of multiple coordinated subsystems that enable it to operate autonomously and interact intelligently with its environment. At the center of the architecture is the large language model itself, which acts as the agent’s cognitive core—responsible for reasoning, decision-making, and language comprehension. This central component interprets inputs, forms plans, and produces responses or actions based on its internal logic. To extend the LLM’s capabilities, several auxiliary modules are typically integrated:

- Memory: This module plays a key role in sustaining context over time and incorporating insights from past interactions, addressing limitations such as context window size and factual inconsistencies. Memory systems often utilize Retrieval-Augmented Generation (RAG) to supplement the LLM with access to external sources of dynamic or long-term information [15].
- Planning: Responsible for breaking down complex goals into actionable steps, the planning module enables the agent to reason through multi-stage tasks. Methods like Chain-of-Thought (CoT) reasoning are employed to promote transparency in intermediate steps and support iterative refinement of plans.
- Tool Use: This module enables the agent to interface with external systems and perform targeted operations—such as querying databases, invoking APIs, executing code, or conducting searches. Effective use of tools depends on structured, interpretable definitions that clearly specify each tool’s functionality [16] [17].
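A minimal sketch of the RAG-style memory lookup described above, assuming snippets have already been embedded as vectors (the embedding model itself is out of scope, and all data below is illustrative):

```python
import numpy as np

def retrieve(query_vec, memory_vecs, memory_texts, k=2):
    """Toy RAG retrieval: return the k stored snippets whose embeddings
    are most similar (by cosine similarity) to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    scores = m @ q                      # cosine similarity per snippet
    top = np.argsort(-scores)[:k]       # indices of the best matches
    return [memory_texts[i] for i in top]
```

The retrieved snippets are then prepended to the LLM's prompt, giving the agent access to information outside its context window.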
The modular composition of these components makes LLM agents highly compatible with classical software design patterns. The separation of roles into distinct modules—such as reasoning, memory, planning, and tool usage—naturally aligns with established architectural and behavioral templates. For instance, the tool interface is well-suited to structural patterns like Adapter or Facade, which abstract complexity and promote standardization across diverse external services. The memory subsystem can adopt design principles for data persistence and access control, ensuring efficient and consistent storage and retrieval. Meanwhile, the central LLM or “brain” can leverage behavioral patterns like Mediator or Strategy to coordinate interactions among modules and manage dynamic decision processes. This intrinsic modularity not only enhances the clarity and maintainability of individual agents but also lays a strong foundation for building scalable, interoperable systems of multiple agents. The ability to clearly delineate responsibilities among components is essential for designing robust internal behaviors and facilitating structured collaboration in broader multi-agent frameworks.

# 2.2. From Single-Agent Autonomy to Multi-Agent Collaboration

Although single-agent systems powered by large language models (LLMs) can perform complex reasoning and execute tasks independently, they often struggle with problems that require distributed cognition or large-scale coordination. These agents are inherently limited by their sequential processing, restricted memory capacity, and the finite bandwidth of a single decision-making entity. To address these constraints, Multi-Agent Systems (MAS) have emerged as a paradigm that enables multiple intelligent agents to collaborate. By distributing responsibilities and enabling inter-agent communication, MAS architectures facilitate scalable task execution, improve system resilience, and allow for dynamic adaptation in real-time environments.
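The Adapter fit mentioned above can be made concrete: a hypothetical legacy SQL service is wrapped behind a uniform `Tool` interface, so the agent's brain never touches the backend's bespoke API. All class and method names here are illustrative, not part of any real framework.

```python
from abc import ABC, abstractmethod

class Tool(ABC):
    """Uniform interface the agent's brain calls, regardless of backend."""
    @abstractmethod
    def invoke(self, query: str) -> str: ...

class SqlBackend:
    """Hypothetical legacy service with its own, non-standard API."""
    def run_query(self, sql: str) -> str:
        return f"rows for: {sql}"

class SqlToolAdapter(Tool):
    """Adapter: exposes the legacy backend through the common Tool interface,
    so new backends can be added without changing the agent's core logic."""
    def __init__(self, backend: SqlBackend):
        self._backend = backend

    def invoke(self, query: str) -> str:
        return self._backend.run_query(query)
```

Swapping in a search engine or code executor then only requires another adapter, not a change to the agent itself.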
Through coordinated behavior, these systems can demonstrate emergent intelligence—where collective performance exceeds the capabilities of individual agents acting alone. Notably, LLM-based MAS (LLM-MAS) benefit from natural language communication channels, hierarchical task delegation, and integration with domain-specific tools and knowledge sources, often without needing hand-coded rules. However, transitioning from isolated agents to collaborative systems shifts the primary design challenges. Instead of being constrained by the cognitive limits of a single model, the system's reliability now hinges on the quality and structure of inter-agent interactions. While multi-agent setups offer a promising solution to the shortcomings of single LLMs—such as hallucination, short context retention, and planning bottlenecks—they introduce their own complexities. Empirical studies have shown that the anticipated performance improvements are not always realized, often due to miscommunications, coordination overhead, and agent misalignment. This underscores the importance of reorienting the design focus: success in multi-agent systems depends not only on building capable agents but, critically, on engineering robust, coherent, and efficient communication and coordination mechanisms among them [18]. # 2.3. Inherent Challenges in Multi-Agent LLM Communication While multi-agent systems powered by large language models (LLMs) offer compelling advantages in collaborative reasoning and distributed task execution, they also introduce a complex set of communication and coordination challenges that must be systematically addressed to ensure reliability and scalability. Architectural Ambiguity: One of the foundational issues is the absence of standardized frameworks for designing robust LLM-based multi-agent systems (LLM-MAS). This often leads to improvised architectures that lack consistency and resilience [19]. 
Coordination and Misalignment: Achieving effective collaboration among agents is nontrivial. Agents must be able to engage in joint reasoning, maintain shared context, and align their goals—tasks that are often hindered by incoherent or unstructured communication [20]. Task Completion and Validation: Determining when a task is complete and verifying the accuracy or success of a multi-agent process is inherently difficult in distributed systems, where no single agent has a complete view of the task state. Scalability Bottlenecks: As more agents are added, communication overhead rises sharply. This leads to increased latency, bandwidth saturation, and higher computational resource demands, reducing system responsiveness. Security Risks: The decentralized nature of multi-agent communication increases exposure to threats. Vulnerabilities such as Agent-in-the-Middle (AiTM) attacks, data leakage, and injection of malicious prompts pose significant risks to privacy and integrity. Prompt Fragility: The behavior of LLM agents remains highly sensitive to how prompts are phrased. Small changes in input can lead to drastically different and sometimes unreliable outputs, undermining the system's predictability. Knowledge Management and Hallucination: Defining and enforcing knowledge boundaries within multi-agent environments is challenging. Without careful control, agents may generate biased, inaccurate, or fabricated information, compromising the credibility of the entire system [21] [22]. Designing LLM-MAS is often likened to managing a human organization, complete with role specialization, hierarchical planning, and collaborative problem-solving. However, this analogy also underscores the system’s inherent complexity. Just as real-world organizations can falter due to structural flaws—like miscommunication, departmental silos, or strategic misalignment—LLM-MAS inherit similar vulnerabilities. 
The challenges of governance, coordination, and failure modes in such systems are not purely technical but deeply systemic. As a result, building effective LLM-MAS may require integrating insights from organizational science and human systems engineering, emphasizing the importance of structured collaboration, trust frameworks, and robust communication protocols alongside traditional AI techniques. # 2.4. Comparison of Key LLM Agent Frameworks and their Communication Paradigms AutoGen employs a message-passing paradigm, primarily using broadcast or publish-subscribe mechanisms to facilitate communication among agents. It supports multi-agent dialogues, integrates Large Language Models (LLMs), human inputs, and external tools, and allows for highly customizable interactions. This flexibility makes AutoGen well-suited for building diverse, complex workflows. However, it can be difficult to manage context and ensure consistent alignment between agents, especially as complexity grows. LangChain (and its graph-based extension LangGraph) follows a node-and-edge orchestration model where agents and workflows are structured as directed graphs. These frameworks offer capabilities such as memory management (short-term and long-term), human-in-the-loop interaction, and detailed observability. They are particularly strong in enabling both deterministic workflows and dynamic agent orchestration, which makes them highly suitable for complex tasks. Nevertheless, LangChain and LangGraph may complicate the control over LLM context and introduce debugging challenges in more intricate graph configurations. CrewAI is built around a workflow model inspired by human teaming. It emphasizes role-based agents with clearly defined responsibilities, facilitating intuitive collaboration and coordination that mimics real-world teams. This design is advantageous for structured, collaborative tasks that benefit from role clarity. 
However, it can be less effective in scenarios requiring fluid, adaptive responses outside the bounds of predefined roles, potentially limiting its applicability in highly dynamic environments. MetaGPT leverages principles from software engineering to structure agent workflows. It is particularly well-suited for tasks that require ordered, rule-based collaboration, such as software development. The framework excels at coordinating multiple agents toward a common, structured goal and ensures clarity in output. On the downside, MetaGPT can face limitations in scalability, is vulnerable to bottlenecks or failure points due to centralized coordination, and may not adapt easily to unstructured or evolving task environments. Google’s Agent Garden, alongside its Agent2Agent (A2A) Protocol, adopts a JSON-based lifecycle model that facilitates peer-to-peer task outsourcing. It provides a centralized hub of pre-built agents and a communication framework that promotes interoperability across diverse technology stacks. This design simplifies enterprise-level integration and supports collaborative task sharing between heterogeneous agents. However, the pursuit of cross-vendor standardization presents non-trivial challenges, and the coordination infrastructure may introduce communication overhead as complexity scales. # 3. Software Design Patterns for Inter-Agent Communication # 3.1. Re-evaluating Classical Design Patterns for LLM-MAS Software design patterns [23] are established, reusable solutions that address common challenges in software engineering. They support the creation of systems that are modular, scalable, maintainable, and reusable. Traditionally classified into creational, structural, and behavioral categories, each pattern type focuses on solving a specific design concern.
Although these patterns have played a central role in conventional software development, their application within Multi-Agent Systems (MAS) has been relatively limited. This limited adoption is partly due to the absence of standardized documentation practices and a lack of clarity in how different patterns interrelate or should be composed in MAS contexts. The rise of AI—particularly large language models (LLMs)—is redefining the role of design patterns in intelligent systems. Unlike traditional implementations, which tend to be rigid and static, LLM-based agents offer the ability to reason, reflect, and adapt in real time. This opens the door to a new class of "dynamic patterns," where classical design templates are no longer fixed but can evolve in response to system performance or environmental feedback. One of the long-standing critiques of design patterns has been their inflexibility in dynamic or fast-changing settings. LLM agents, by contrast, are inherently adaptable and capable of on-the-fly decision-making, enabling patterns to become context-aware and self-adjusting. When integrated into system architectures, these AI agents can refine or reconfigure pattern implementations in real time—modifying behavior, adjusting workflows, or even shifting structural configurations based on observed metrics or predicted needs. This creates a feedback loop in which AI and architecture coevolve: LLM agents bring responsiveness and learning capacity, while design patterns contribute structure and reliability. The result is a symbiotic design paradigm—dynamic patterns—where architectural strategies are no longer predefined blueprints but evolving frameworks, shaped and informed by the AI agents they support. This shift holds promise for creating intelligent systems that are not only robust and maintainable but also responsive and continuously improving. # 3.2.
Key Communication Patterns: Mediator, Observer, Publish-Subscribe, Broker Behavioral design patterns are particularly crucial for defining effective communication and delegating responsibilities within LLM-MAS, ensuring flexibility and scalability without introducing tight coupling between components. # Mediator Pattern Intent: The Mediator pattern aims to reduce chaotic dependencies among objects by centralizing their interactions. Instead of direct communication, objects collaborate indirectly by calling a special mediator object that redirects calls to appropriate components. Application in LLM-MAS: In multi-agent LLM systems, a mediator agent, often a supervisor LLM, can centralize communication, preventing direct, potentially chaotic interactions among numerous specialized agents. This approach promotes loose coupling, making individual agents easier to modify, extend, or reuse in different contexts. MCP Alignment: The Model Context Protocol (MCP) can function as a central registry for context versioning and can mediate conflicting actions among agents. This aligns with the Mediator pattern's principles of centralizing interactions and resolving discrepancies. The MCP broker pattern is specifically designed as a flexible, intelligent middleware that facilitates communication between diverse system components [24]. # Observer Pattern / Publish-Subscribe (Pub/Sub) Pattern Intent: The Observer pattern defines a one-to-many dependency between objects, where a subject (publisher) automatically notifies all its registered dependents (observers/subscribers) of any state changes. In the Pub/Sub model, publishers are decoupled from subscribers through event channels or a message broker [25]. Application in LLM-MAS: These patterns are ideal for building event-driven architectures and enabling real-time updates within LLM-MAS. 
Agents can subscribe to specific topics, such as financial news feeds or market data changes, and receive notifications of relevant events without needing to constantly poll for updates or possess explicit knowledge of the message senders. This asynchronous communication mechanism is critical for achieving scalability and responsiveness in dynamic multi-agent environments. MCP Alignment: MCP [26][27] inherently supports publish-subscribe mechanisms through its resource change notifications and Streamable HTTP implementation, which includes Server-Sent Events (SSE). This functionality allows agents to subscribe to changes in shared resources, enabling the development of sophisticated inter-agent workflows with complex dependencies. # Broker Pattern Intent: The Broker pattern is an architectural pattern that uses an intermediary component, the "broker," to facilitate communication between decoupled components, typically servers (publishers) and clients (subscribers), via remote procedure calls. The broker maintains routing and filter tables and can provide additional functionalities such as Quality of Service (QoS) guarantees or security enforcement [28]. Application in LLM-MAS: This pattern provides a centralized intermediary for asynchronous communication and coordination, significantly promoting loose coupling, scalability, and resilience within distributed systems. MCP servers themselves function as brokers, mediating interactions between LLM clients and various external data sources or tools. Distinction from Pub/Sub: While similar, the Broker architectural pattern is typically represented by a "Many to One to Many" diagram, indicating a centralized intermediary. In contrast, the Publish-Subscribe architectural pattern is often depicted as a "Many to Many" relationship, where messaging functionalities are often hidden as a cross-cutting concern. MCP itself can be considered an implementation of the Broker pattern.
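The publish-subscribe mechanics described above can be sketched independently of any particular protocol. The `EventBroker` class below is a hypothetical minimal implementation (not MCP itself); the topic name and agent handlers are illustrative of the financial-data scenario mentioned above.

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class EventBroker:
    """Minimal publish-subscribe broker decoupling publishers from subscribers."""
    def __init__(self) -> None:
        self._subs: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        """Register a handler to be notified of events on `topic`."""
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> int:
        """Notify all subscribers of `topic`; returns how many were notified."""
        for handler in self._subs[topic]:
            handler(event)
        return len(self._subs[topic])

# Two agents react to market-data events without knowing the publisher:
broker = EventBroker()
seen: list = []
broker.subscribe("market.AAPL", lambda e: seen.append(("risk-agent", e["price"])))
broker.subscribe("market.AAPL", lambda e: seen.append(("trade-agent", e["price"])))
broker.publish("market.AAPL", {"price": 187.5})
```

Note that neither subscriber knows who published the event, and the publisher does not know who consumes it; this is the loose coupling the pattern is meant to provide.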
These behavioral patterns—Mediator, Observer/Pub-Sub, and Broker—form the backbone of structured interaction in LLM-based multi-agent environments. What distinguishes MCP is not just its compatibility with these patterns, but its embodiment of them. MCP functions simultaneously as a coordination center (Mediator), an event notification platform (Observer/Pub-Sub), and a communication intermediary (Broker). This multifaceted role elevates MCP beyond a mere protocol—it becomes a unifying architectural layer that enables consistent, context-aware communication across agents and tools. This integration results in a layered communication model where MCP operates as a meta-pattern, establishing a foundational transport and protocol layer upon which more advanced interaction strategies—such as negotiation, dynamic task delegation, or collaborative planning—can be reliably built. This hierarchical approach provides LLM-MAS with a scalable and extensible framework for achieving robust, interoperable, and intelligent behavior at both the agent and system levels. # 3.3. Benefits and Trade-offs of the Key Communication Patterns # Mediator Pattern The Mediator pattern focuses on centralizing communication between components to reduce direct dependencies and tangled interactions. In LLM-MAS environments, this pattern is commonly realized through a supervisor agent that manages interactions among various specialized agents. This orchestration prevents agents from engaging in ad hoc communication, which can quickly become chaotic and difficult to manage at scale. MCP aligns with this pattern by serving as a centralized context registry and by mediating conflicting actions among agents. Its built-in broker pattern also enables structured communication between heterogeneous components. The benefits of using the Mediator pattern include reduced coupling, improved agent modularity and reusability, and streamlined control logic.
However, if not carefully designed, the mediator can become a bottleneck or a single point of failure, especially under high loads or in complex workflows. # Observer / Publish-Subscribe Pattern The Observer pattern, and its event-driven variant Publish-Subscribe (Pub/Sub), defines a one-to-many relationship where a subject automatically notifies observers of state changes. In the context of LLM-MAS, agents often subscribe to streams of events—such as task progress, environmental updates, or financial data—allowing them to receive updates in real time without constant polling or tight coupling to event sources. MCP enables this interaction model through its support for resource change notifications and Streamable HTTP using Server-Sent Events (SSE), making it possible to implement reactive, dynamic context updates. This pattern offers several advantages, including decoupled system components, improved scalability, and the foundation for event-driven architectures with real-time responsiveness. Still, it introduces potential challenges such as message flooding, synchronization difficulties, and memory leaks related to “lapsed listeners” that fail to unsubscribe properly. # Broker Pattern The Broker pattern introduces an intermediary layer to manage communication between clients and servers, enabling loosely coupled interactions while abstracting service discovery and request routing. Within LLM-MAS, MCP servers function as brokers by handling interactions between language model clients and external data tools or APIs. MCP’s client-host-server design naturally supports the broker pattern by standardizing tool invocation, managing context access, and routing requests across system components. The main strengths of this pattern are enhanced system scalability, resilience, and modularity, as well as a centralized communication interface that simplifies integration.
Nevertheless, similar to the Mediator, the broker can become a point of failure if not distributed or redundantly implemented. Additionally, ensuring consistent and accurate data transmission through the broker remains a technical challenge, particularly in high-throughput environments. # 3.4. Formalizing Communication Patterns in LLM-MAS: A Mathematical Perspective Formalizing inter-agent communication within LLM-MAS provides a quantitative framework for understanding and optimizing system behavior. This involves applying concepts from graph theory and information theory to model communication overhead, information flow, and associated costs. # Communication Overhead (Graph Theory) The complexity of communication links varies significantly with the chosen architectural pattern. Let $N$ represent the total number of agents in a multi-agent system. Fully Decentralized Communication (e.g., Flat Architecture, Network): In a system where every agent can communicate directly with every other agent, the number of direct communication links, $L_{\text{direct}}$, grows quadratically with the number of agents. $$ L_{\text{direct}} = \frac{N(N-1)}{2} = O(N^2) $$ This implies that as the number of agents increases, the complexity of managing direct connections and the potential for communication bottlenecks rise rapidly. Centralized Communication (e.g., Mediator, Broker, Supervisor): In contrast, when all communication is routed through a central entity or intermediary, the number of direct links to this central entity, $L_{\text{centralized}}$, grows linearly with the number of agents. $$ L_{\text{centralized}} = N = O(N) $$ This mathematical relationship highlights the inherent scalability advantage of centralized patterns in managing communication complexity, as they significantly reduce the number of direct inter-agent connections required.
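The two link-count formulas above are easy to check numerically; the following sketch simply evaluates them for growing agent populations to make the quadratic-versus-linear gap concrete.

```python
def direct_links(n: int) -> int:
    """Fully decentralized topology: one link per agent pair, N(N-1)/2 total."""
    return n * (n - 1) // 2

def centralized_links(n: int) -> int:
    """Mediator/broker topology: each agent links only to the central entity."""
    return n

# The gap widens rapidly: at N=1000, direct wiring needs ~500x more links.
for n in (10, 100, 1000):
    print(f"N={n}: direct={direct_links(n)}, centralized={centralized_links(n)}")
```

At $N = 100$ the decentralized topology already requires 4,950 links versus 100 for the centralized one, which is the scalability argument made above in numeric form.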
# Information Entropy in Message Passing (Information Theory) The information content and efficiency of messages can be quantified using principles from information theory, as inspired by Shannon's work. Let $M$ be the discrete message space. The entropy $H(M)$ quantifies the average uncertainty or information content of messages exchanged within the system. The mutual information $I(A;B)$ between two agents, A and B, measures the amount of information about agent A's state that agent B gains through their communication. It is defined as: $$ I(A;B) = H(A) - H(A \mid B) $$ where $H(A)$ is the entropy of agent A's state, and $H(A \mid B)$ is the conditional entropy of A's state given B's observation of the message. Mathematical Implication: Effective communication patterns aim to maximize mutual information while minimizing redundant or irrelevant messages. Design patterns like Mediator or Broker, by intelligently filtering and routing only the most relevant information, can optimize this by reducing $H(A \mid B)$ for the receiving agent while simultaneously minimizing the overall volume of unnecessary message traffic. # Cost of Communication The practical deployment of LLM-MAS necessitates consideration of the computational and financial costs associated with inter-agent communication. Letting $\text{tokens}_{i \to j}$ denote the number of tokens sent from agent $i$ to agent $j$, the total communication cost can be modeled as $C_{\text{comm}} = \sum_{i,j} \text{tokens}_{i \to j} \cdot c_{\text{token}}$, where $c_{\text{token}}$ is the per-token processing or API cost. Mathematical Implication: Efficient communication patterns, such as selective propagation in Observer/Publish-Subscribe models or the use of summarized messages in Mediator patterns, directly reduce the number of tokens exchanged ($\text{tokens}_{i \to j}$). This, in turn, leads to a reduction in $C_{\text{comm}}$, which is a critical factor for ensuring the practical and cost-effective deployment of LLM-MAS. # Strategic Implications Quantitative modeling of communication patterns offers more than just theoretical insights—it grounds system design in measurable parameters.
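The identity $I(A;B) = H(A) - H(A \mid B)$ can be verified numerically on toy joint distributions; the sketch below uses the equivalent form $I(A;B) = H(A) + H(B) - H(A,B)$ and is a generic information-theory illustration, not part of any agent protocol.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def mutual_information(joint):
    """I(A;B) = H(A) + H(B) - H(A,B), equivalent to H(A) - H(A|B)."""
    pa = [sum(row) for row in joint]        # marginal distribution of A
    pb = [sum(col) for col in zip(*joint)]  # marginal distribution of B
    pab = [x for row in joint for x in row] # joint distribution, flattened
    return entropy(pa) + entropy(pb) - entropy(pab)

# Perfectly correlated binary states: B's message fully determines A, so I = 1 bit.
perfectly_correlated = [[0.5, 0.0], [0.0, 0.5]]
# Independent states: the message carries no information about A, so I = 0 bits.
independent = [[0.25, 0.25], [0.25, 0.25]]
```

The two extreme cases bracket what a communication pattern can achieve: a well-designed Mediator or Broker pushes exchanged messages toward the high-mutual-information regime while suppressing traffic that behaves like the independent case.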
The stark difference between $O(N^2)$ and $O(N)$ connection complexities underscores why centralized designs are often favored in large-scale systems. Similarly, applying entropy and cost models helps engineers refine message content and frequency to minimize overhead and latency. This structured, analytical approach elevates architectural decisions from heuristic practices to evidence-based design, ensuring that scalability, efficiency, and affordability are systematically addressed in LLM-MAS development. # 4. The Model Context Protocol (MCP) as an Interoperability Layer # 4.1. MCP Architecture: Client-Host-Server Model and JSON-RPC Foundation The Model Context Protocol (MCP) is built upon a robust client-host-server architecture, designed to standardize communication between AI applications (which function as hosts or clients) and various external resources (acting as servers). This architecture aims to provide a unified and secure interface for AI models to interact with the broader digital environment. Host: The host is the primary LLM application, such as Claude Desktop or an Integrated Development Environment (IDE), that initiates connections and directly interacts with users. It plays a crucial role in managing security policies, user authorization, and consent requirements, ensuring that AI actions align with user permissions and organizational guidelines. Client: Embedded within the host application, the client is a lightweight protocol component that maintains a one-to-one connection with a specific MCP server. Its responsibilities include handling capability negotiation with servers and orchestrating messages between the host's LLM and the external resource.
Server: MCP servers are independent processes that expose specific capabilities, such as tools, data access, or prompts, in a standardized manner over the MCP. These servers can operate locally or remotely and act as wrappers around various external systems like APIs, databases, or file systems. Examples of existing MCP servers include those for GitHub, Postgres, Tavily, and Chargebee, among many others. The foundational communication mechanism for MCP is JSON-RPC 2.0. This standard defines clear message types—requests (expecting a response), results (successful responses), errors (indicating failure), and notifications (one-way messages)—and a structured connection lifecycle that includes initialization, message exchange, termination, and error handling. For the transport layer, MCP supports both Stdio (for efficient local processes) and HTTP with Server-Sent Events (SSE) (for networked services and remote integrations). The client-host-server framework underlying MCP, implemented via JSON-RPC, offers a disciplined and transparent interface that helps demystify the often opaque behavior of large language models (LLMs). One of the persistent issues with LLMs is their unpredictability, including their tendency to generate misleading or inconsistent outputs. MCP addresses this by clearly delineating the responsibilities of the client, host, and server, and grounding all communication in a structured protocol. JSON-RPC enforces standardized message types—such as requests, responses, errors, and notifications—and prescribes a defined interaction lifecycle from connection initiation to closure. As a result, exchanges between the LLM (via the client) and external systems (via servers) are no longer informal or solely dependent on natural language prompts. Instead, these interactions are handled through a formalized protocol that enables greater control and clarity. 
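The four JSON-RPC 2.0 message types named above (request, result, error, notification) have a fixed wire shape, which the sketch below builds as Python dicts. The `tools/call` method name follows MCP's convention, but the specific tool name and arguments are illustrative.

```python
import json

def request(rid, method, params):
    """JSON-RPC 2.0 request: carries an id and expects a response."""
    return {"jsonrpc": "2.0", "id": rid, "method": method, "params": params}

def result(rid, payload):
    """Successful response: echoes the id of the request it answers."""
    return {"jsonrpc": "2.0", "id": rid, "result": payload}

def error(rid, code, message):
    """Error response: code/message object per the JSON-RPC 2.0 spec."""
    return {"jsonrpc": "2.0", "id": rid, "error": {"code": code, "message": message}}

def notification(method, params):
    """One-way message: no id field, so no response is expected."""
    return {"jsonrpc": "2.0", "method": method, "params": params}

# A client asking an MCP server to invoke a tool (tool name/args are illustrative):
req = request(1, "tools/call", {"name": "query_database", "arguments": {"sql": "SELECT 1"}})
wire = json.dumps(req)  # what actually travels over Stdio or HTTP/SSE
```

Because every exchange is one of these four shapes, a host can log, replay, and validate all traffic between the LLM and external systems, which is the traceability benefit described above.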
This structured design greatly enhances the ability to trace, troubleshoot, and validate agent behaviors, effectively mitigating the “black box” challenge. The resulting boost in system transparency and dependability is a key enabler for the adoption of LLM-driven multi-agent systems in high-stakes, enterprise-level deployments. # 4.2. MCP's Role in Standardized Context Exchange and Tool Invocation The Model Context Protocol (MCP) serves as a foundational mechanism for standardizing how critical contextual elements—such as tools, datasets, and inference configurations—are delivered to and utilized by large language models (LLMs). Acting as a universal integration layer, MCP streamlines the complexity of connecting diverse components within AI ecosystems. A key problem MCP addresses is the “N×M” integration challenge, where each combination of model and data source traditionally required bespoke code. MCP resolves this by offering a consistent interface for registering, discovering, and executing tools, thereby eliminating the need for handcrafted connectors across different AI systems [29]. MCP introduces several core capabilities that empower flexible and intelligent multi-agent interactions: Context Sharing: Through its resource capability, MCP allows agents to exchange data such as files, internal state, or memory. It also supports change notifications on shared resources, enabling agents to build reactive workflows that adjust dynamically to evolving conditions. Tool Invocation: LLMs can access and invoke external functions—termed “tools”—exposed by MCP servers. These tools may perform operations like calling APIs, running database queries, or executing modular agent functions [30]. Sampling: This capability allows agents to share prompts and, in some cases, even delegate tasks between different LLMs.
It creates opportunities for collaborative reasoning and peer-based assistance within an LLM-MAS environment. Together, MCP’s context-sharing and sampling functionalities allow for the emergence of a distributed, real-time knowledge graph that surpasses the limitations of static Retrieval-Augmented Generation (RAG). Traditional RAG approaches supplement a single LLM with pre-indexed information retrieved on demand, but this model remains largely passive and centrally constrained. In contrast, MCP enables agents to continuously publish and subscribe to evolving resources—be it files, state information, or agent memory—creating a shared, adaptive knowledge space. Moreover, sampling extends this capability beyond data, allowing agents to share their reasoning capacity and act as active collaborators. This dynamic architecture transforms the RAG paradigm into a living, distributed framework where context is not just retrieved but jointly managed and evolved—supporting more sophisticated, decentralized multi-agent reasoning systems [31]. # 4.3. MCP as a Facilitator for Inter-Agent Communication Patterns MCP's Streamable HTTP interface equips multi-agent systems with a versatile set of communication modes. It supports a range of interaction models—from stateless request-response exchanges to persistent, stateful sessions using unique identifiers, as well as real-time streaming through Server-Sent Events (SSE). This flexibility makes MCP a powerful foundation for implementing common software design patterns within LLM-based multi-agent systems (LLM-MAS). Mediator Pattern: MCP servers can serve as centralized coordinators that handle the routing of messages between LLM clients and various external services or tools. By abstracting the direct connections between agents and external systems, this aligns naturally with the Mediator pattern's goal of reducing tight coupling between components [32].
Observer / Publish-Subscribe Pattern: MCP enables agents to subscribe to notifications for updates on shared resources. This capability mirrors the core idea of the Observer and Pub/Sub patterns, where agents can react automatically to changes in system state, enabling responsive and event-driven collaboration. Broker Pattern: The MCP architecture itself functions as an intermediary layer that separates agents from the specifics of the underlying tools and data sources. By acting as a centralized broker, MCP simplifies integration and promotes modularity. These foundational communication mechanisms provided by MCP form the concrete infrastructure that supports the implementation of higher-level coordination strategies described by software design patterns. While patterns like Mediator or Observer define the abstract structure of agent interactions, MCP provides the operational infrastructure that brings these patterns to life in practice. Its use of JSON-RPC messaging, combined with Streamable HTTP and standardized resource and tool capabilities, delivers the essential mechanisms for scalable, real-time collaboration. For example, implementing the Observer pattern requires a reliable way to notify agents about changes—something MCP enables directly through SSE-based resource updates. Likewise, the Mediator pattern needs a centralized point for managing interactions; MCP servers fulfill this role by standardizing access to tools and data. Thus, MCP is not just a protocol for message exchange—it acts as a foundational framework that makes the execution of sophisticated communication patterns in LLM-MAS both practical and efficient. # 4.4. Comparative Analysis of Agent Interoperability Protocols (MCP, A2A, ACP, ANP) In addition to MCP, a range of emerging protocols is shaping the future of agent interoperability, each designed to meet specific requirements across different deployment scenarios [33]. 
Model Context Protocol (MCP): MCP is designed to standardize how structured context—such as tools, datasets, and sampling instructions—is delivered to large language models. It operates through a JSON-RPC-based client-server interface and supports Streamable HTTP and Server-Sent Events (SSE). Key capabilities include resource (for sharing files, state, or memory across agents) and sampling (for prompt/model sharing between agents), enabling dynamic and collaborative workflows. Its communication model follows a many-to-one-to-many pattern, making it particularly effective for tool invocation, enterprise data integration, and the creation of deeply stateful agents. Often referred to as the “USB-C for AI,” MCP provides foundational support for higher-level protocols and serves as the base layer in the emerging multi-protocol agent stack. It anchors interoperability by enabling consistent, reliable access to external tools and structured context. Agent-to-Agent Protocol (A2A): Developed by Google, A2A supports secure, dynamic collaboration between heterogeneous agents. It relies on a structured JSON-based lifecycle model to describe tasks, agent capabilities, and shared artifacts, enabling peer-to-peer task outsourcing and decentralized workflows. Operating on a many-to-many communication model, A2A facilitates multimodal interaction and is particularly well-suited for distributed coordination across enterprise-scale systems. It builds upon the capabilities provided by MCP, using standardized tool and context access as a foundation for richer, collaborative execution between agents across varied technology stacks. Agent Communication Protocol (ACP): ACP introduces a REST-native, performative messaging layer designed for local coordination between agents. It supports multipart messages and asynchronous streaming, providing a flexible communication interface for agents that are already integrated through MCP.
Governed by the Linux Foundation, ACP is particularly useful for orchestrating multimodal interactions in scalable, message-rich environments. By layering REST-compliant messaging on top of MCP's context-sharing capabilities, ACP allows agents to communicate fluidly in structured and scalable workflows. It represents a critical middle layer in the protocol stack, sitting between MCP and higher-level collaboration protocols like A2A. Agent Network Protocol (ANP): ANP is intended for cross-platform, internet-scale agent collaboration. It features a layered architecture that incorporates decentralized identity (via W3C DIDs), semantic web principles, and encrypted communication. ANP supports secure agent discovery, interaction, and coordination across open, decentralized environments, such as agent marketplaces or federated networks. Positioned as the top layer in the interoperability stack, ANP builds on MCP, ACP, and A2A to enable global-scale agent ecosystems. Its emphasis on decentralized identity and semantic interoperability allows agents to function independently across domains while maintaining trust and compatibility. The interplay between these protocols suggests a progressive layering strategy for agent-based systems. MCP acts as the foundational layer focused on standardized access to tools and contextual data. ACP complements this by introducing robust message exchange infrastructure. A2A builds on both by enabling dynamic, task-centric peer interaction, while ANP extends interoperability to the open internet through decentralized identity and platform-agnostic semantics. This layered progression reflects an emerging model where context management (via MCP) is the base layer, followed by messaging (via ACP), collaboration (via A2A), and global interoperability (via ANP) [34].
This broader ecosystem of protocols signals a shift toward a modular, layered architecture for agent interoperability—similar in spirit to the protocol stacks that underpin the modern internet. While MCP anchors the system with reliable context and tool access, additional protocols like A2A, ACP, and ANP expand functionality to encompass communication, coordination, and global-scale agent networks. Rather than relying on a one-size-fits-all approach, the future of multi-agent systems appears to lie in a multi-protocol stack, where each layer addresses a different dimension of interoperability and enables agents to operate fluidly across increasingly complex and distributed environments. # 5. Design Patterns in Practice: Architecting Inter-Agent Communication with MCP The deployment of LLM-driven multi-agent systems is greatly enhanced by thoughtfully leveraging proven software design patterns, with the Model Context Protocol (MCP) playing a key role as the interoperability foundation that connects agents with tools, data, and one another. # 5.1. Centralized Communication Architectures with MCP Mediation Description: In centralized communication models, agents coordinate through a single control point or orchestrator. While this structure simplifies coordination in smaller-scale systems, it may introduce performance limitations as the number of agents grows or tasks become more complex [35][36]. Design Pattern: This model closely reflects the Mediator Pattern. A central decision-making agent— often an LLM—manages interactions by determining which specialized agent to engage next. These agents or their functions can be treated as callable tools, invoked based on the supervisor’s planning logic. MCP Integration: Within this framework, MCP servers serve as intermediaries that provide standardized access to tools and data across all agents. 
The central LLM-based orchestrator communicates with these MCP servers via clients, allowing it to issue tool calls and retrieve contextual data without handling tool-specific implementation details. This abstraction ensures uniform access to external resources and maintains system consistency [37]. Benefits: Centralized designs facilitate unified control, streamlined output consistency (often through centralized knowledge access), simplified troubleshooting, and tighter oversight of data and communication flows. Challenges: However, reliance on a single orchestrator can create scalability constraints and pose risks associated with single points of failure.

# Conceptual Diagram: Centralized MCP-mediated Communication Flow

Description: This diagram illustrates a centralized communication architecture. A central "Orchestrator/Supervisor Agent" (LLM) manages and directs multiple "Specialized LLM Agents." The Orchestrator communicates directly with these Specialized Agents. All Specialized Agents, and often the Orchestrator itself, interact with a pool of "MCP Servers," which in turn provide standardized access to "External Tools/Data Sources." The connections between agents and the orchestrator represent direct message passing or command delegation, while connections to MCP Servers represent MCP calls, with the MCP Servers acting as a central broker for external interactions.

# 5.2. Decentralized Communication Architectures Leveraging MCP Resources

Description: Decentralized architectures [38] emphasize peer-to-peer communication, distributing the communication load and eliminating single points of failure inherent in centralized models. In such systems, agents can specialize dynamically and route tasks without relying on rigid, predefined workflows. Design Pattern: This approach is well-suited for the Publish-Subscribe Pattern.
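As a minimal sketch of the Mediator-style routing described in Section 5.1 (the agent names and the plain-function stand-ins for LLM-backed specialists are hypothetical, not from any framework):

```python
from typing import Callable, Dict

class Orchestrator:
    """Mediator-style supervisor: specialists never call each other directly;
    the orchestrator routes each task to exactly one registered agent."""

    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, skill: str, agent: Callable[[str], str]) -> None:
        self._agents[skill] = agent

    def route(self, skill: str, task: str) -> str:
        # In a real system an LLM would choose the skill; here it is given.
        if skill not in self._agents:
            raise KeyError(f"no agent registered for skill {skill!r}")
        return self._agents[skill](task)

hub = Orchestrator()
hub.register("summarize", lambda task: f"summary of {task}")
hub.register("translate", lambda task: f"translation of {task}")
print(hub.route("summarize", "report.txt"))  # summary of report.txt
```

In an MCP deployment, each lambda would instead be a client-side call to a tool exposed by an MCP server, keeping the routing logic independent of tool-specific details.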
Agents publish messages to specific topics or channels, and other agents that have subscribed to those topics consume the relevant messages. While this fosters diverse ideas and emergent behavior, it can introduce synchronization challenges. MCP Integration: MCP's resource capability is instrumental in enabling a decentralized Publish-Subscribe model. This feature allows agents to share various types of context—including files, application state, and agent memory—and, crucially, to subscribe to notifications when these shared resources change. This mechanism facilitates a decentralized coordination model where agents react to changes in shared state rather than relying on direct, explicit messages, thereby promoting asynchronous and loosely coupled interactions. Benefits: Decentralized architectures offer greater resilience, enhanced scalability, the potential for emergent collective intelligence, and improved privacy preservation through minimal direct data exchange. Challenges: Key challenges include maintaining coordinated behavior across a distributed network, managing synchronization issues, and potentially higher overall communication overhead if not carefully managed [39].

# Conceptual Diagram: Decentralized MCP-mediated Communication Flow

Description: This diagram illustrates a decentralized communication model in which several "Specialized LLM Agents" are interconnected and coordinate their actions through a common "Shared Context or Resource Layer." This shared layer might consist of a vector database, distributed storage, or other forms of persistent memory. Access to this layer is mediated by "MCP Servers," which offer a uniform interface for interaction. Agents communicate indirectly by using MCP to read from, write to, and subscribe to updates within this shared space. As agents modify resources, those changes become visible to others in real time, enabling coordination through a dynamically shared state rather than direct messaging.
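The shared-resource coordination just described can be sketched with an in-memory stand-in for MCP's resource capability (the class and the `memory://` URI are illustrative assumptions, not part of MCP itself):

```python
from collections import defaultdict
from typing import Callable, Dict, List

Callback = Callable[[str, str], None]

class SharedResourceLayer:
    """Toy stand-in for MCP resources: agents write shared state, and
    subscribers are notified of changes instead of being messaged directly."""

    def __init__(self) -> None:
        self._state: Dict[str, str] = {}
        self._subs: Dict[str, List[Callback]] = defaultdict(list)

    def subscribe(self, uri: str, on_update: Callback) -> None:
        self._subs[uri].append(on_update)

    def write(self, uri: str, value: str) -> None:
        self._state[uri] = value
        for callback in self._subs[uri]:
            # Loosely coupled coordination: the writer does not know who reacts.
            callback(uri, value)

seen = []
layer = SharedResourceLayer()
layer.subscribe("memory://plan", lambda uri, value: seen.append((uri, value)))
layer.write("memory://plan", "step-1 done")
```

A real deployment would replace the in-process callback list with MCP's server-side subscriptions and SSE-delivered update notifications.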
# 5.3. Hierarchical Communication Architectures and MCP-enabled Delegation

Description: Hierarchical architectures organize agents into a tree-like structure, with higher-level agents overseeing broader objectives and delegating tasks to lower-level agents, or even a "supervisor of supervisors". This structure facilitates the division of labor among specialized agents and ensures their activities are synchronized to achieve overarching objectives [40]. Design Pattern: This pattern leverages elements of the Composite Pattern (for grouping agents into logical hierarchies) and the Chain of Responsibility Pattern (for sequential task delegation). Frameworks like "Talk Structurally, Act Hierarchically" (TalkHier) specifically introduce structured communication protocols and hierarchical refinement mechanisms to manage complexity. MCP Integration: Higher-level agents can delegate specific sub-tasks or specialized data access requests to lower-level agents or directly to external tools via MCP. For instance, a manager agent might use an MCP client to invoke a tool (which could represent another agent's capability exposed as a tool) on an MCP server. This enables fine-grained control and efficient delegation of complex operations, allowing agents at different levels to leverage external capabilities in a standardized manner. Benefits: Hierarchical systems offer streamlined decision-making, clear division of labor, efficient task decomposition, and improved refinement of outputs through structured feedback loops. Challenges: Potential bottlenecks can arise at supervisor nodes, and managing multiple levels of abstraction can introduce architectural complexity.

# Conceptual Diagram: Hierarchical MCP-mediated Communication Flow

Description: This diagram represents a hierarchical communication structure organized across multiple levels.
At the highest level, a "Supervisor Agent" manages the overall workflow by assigning tasks to "Mid-Level Agents," who then distribute subtasks to "Specialized Worker Agents." The primary communication pattern follows a top-down and bottom-up flow. External integration is handled through "MCP Servers," which serve as standardized interfaces accessible to agents across the hierarchy. Higher-level agents may delegate external tool interactions to lower-level agents, allowing consistent and structured access to external systems throughout the entire architecture.

# 5.4. Adaptive and Hybrid Communication Strategies

In practice, the most effective communication strategy for LLM-based multi-agent systems (LLM-MAS) is seldom defined by a single, static architectural pattern. Instead, robust systems often integrate features from centralized, decentralized, and hierarchical models to accommodate varying levels of complexity and diverse operational demands. For example, a system may follow a mostly linear execution path but incorporate dynamic tool invocations at specific points to handle unpredictable tasks [9]. The key principle is to design communication strategies that align with system goals and current conditions. This often involves dynamically selecting or blending patterns based on factors such as task complexity, agent capabilities, or the state of the environment. Guidance from practitioners, such as Anthropic’s suggestion to only increase architectural complexity when necessary, reinforces the idea that rigid, one-size-fits-all communication models are suboptimal. Instead, hybrid approaches that balance simplicity with flexibility tend to offer greater efficiency and resilience. Moreover, the emergence of adaptive communication protocols highlights a shift toward systems that can modify their own coordination structures in real time.
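A runtime that selects its own topology might encode such guidance as a simple policy function. The sketch below is purely illustrative; the thresholds and labels are assumptions, not recommendations from any framework:

```python
def choose_topology(num_agents: int, task_complexity: str) -> str:
    """Pick a communication pattern at runtime, starting from the simplest
    structure and adding architectural complexity only when conditions demand it."""
    if num_agents <= 1:
        return "single-agent pipeline"
    if task_complexity == "low":
        return "centralized (Mediator)"
    if num_agents > 20:
        # Large teams: tree-structured delegation limits coordination overhead.
        return "hierarchical (Composite + Chain of Responsibility)"
    return "decentralized (Publish-Subscribe over shared MCP resources)"
```

In a deployed system, the inputs could come from live telemetry (agent count, latency, queue depth), so the chosen pattern becomes a runtime decision rather than a fixed blueprint.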
Rather than locking in communication strategies during system design, agents—or a dedicated meta-agent—can evaluate current conditions, such as changes in network layout, resource availability, or communication delays, and adapt the interaction model accordingly. This repositions communication architecture as a dynamic, runtime decision-making process rather than a fixed blueprint, enabling more intelligent, context-aware coordination across the agent network.

# 5.5. Mathematical Modeling of Inter-Agent Information Flow and Cost Optimization

To further optimize LLM-MAS, mathematical modeling can be applied to inter-agent information flow and cost.

# Communication Efficiency Metric

Communication efficiency in an LLM-based multi-agent system can be quantitatively modeled by factoring in message volume, token usage, and response time. Let $M_T$ be the total number of messages exchanged in a system to complete a task. Let $K_T$ be the total number of tokens processed across all LLM calls. Let $L_T$ be the total latency for task completion. An objective function for communication efficiency could be formulated as:

$$E = \frac{1}{L_T} \times \frac{1}{K_T} \times \text{Utility}(\text{Output})$$

The aim is to maximize this efficiency metric by producing high-utility outputs while keeping both the token count and latency as low as possible. This formulation encourages designs that are not only accurate and effective but also computationally and temporally efficient.

# MCP's Impact on Cost/Latency

The Model Context Protocol (MCP) plays a central role in improving both cost-efficiency and latency in LLM-based multi-agent systems (LLM-MAS). By offering a unified and reusable interface for tool and context integration, MCP significantly lowers the need for repetitive development and minimizes ongoing maintenance demands. This translates into reduced initial engineering effort and lower long-term operational expenditures.
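The efficiency objective can be evaluated directly. A small sketch (the latency, token, and utility figures are invented for illustration):

```python
def communication_efficiency(total_latency_s: float,
                             total_tokens: int,
                             output_utility: float) -> float:
    """E = (1 / L_T) * (1 / K_T) * Utility(Output): the same utility achieved
    with fewer tokens or less latency scores as more efficient."""
    return (1.0 / total_latency_s) * (1.0 / total_tokens) * output_utility

# A design that halves token usage at equal utility and latency doubles E.
e_verbose = communication_efficiency(10.0, 8000, 0.9)
e_terse = communication_efficiency(10.0, 4000, 0.9)
```

Note that because $E$ multiplies two reciprocals, it rewards reductions in latency and token count symmetrically, which matches the stated goal of designs that are both computationally and temporally efficient.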
While MCP introduces an intermediary layer, its reliance on lightweight JSON-RPC over HTTP and Server-Sent Events (SSE) often results in lower latency compared to ad hoc integration methods, particularly in complex system architectures. The total cost of an MCP-enabled system ($C_{\text{total}}$) can be expressed as:

$$C_{\text{total}} = C_{\text{LLM}} + C_{\text{MCP\_servers}} + C_{\text{network}} + C_{\text{development}}$$

Where:

$C_{\text{LLM}} = \sum_i \left( \text{tokens}_{\text{in},i} \times P_{\text{in}} + \text{tokens}_{\text{out},i} \times P_{\text{out}} \right)$ represents the cost of LLM inference, with $\text{tokens}_{\text{in},i}$ and $\text{tokens}_{\text{out},i}$ being input and output tokens for LLM call $i$, and $P_{\text{in}}$, $P_{\text{out}}$ being their respective costs per token. $C_{\text{MCP\_servers}}$ represents the operational cost of running MCP servers. $C_{\text{network}}$ accounts for network data transfer costs. $C_{\text{development}}$ is the development cost, which is significantly reduced by MCP's standardization.

In multi-agent systems, where frequent inter-agent communication and LLM calls are expected, these costs can rise quickly. Token-based pricing makes LLM inference particularly sensitive to message volume and verbosity. MCP helps contain these expenses by minimizing redundant processing through standardized tool invocation, structured context sharing, and event-driven updates. Mechanisms such as SSE for low-latency streaming and MCP’s resource notification system further reduce unnecessary message duplication and network overhead. As a result, MCP not only simplifies integration and promotes modularity but also plays a vital role in the economic and performance scalability of multi-agent systems. Its efficiency-oriented design makes it a key enabler for transitioning LLM-MAS from experimental setups to cost-effective, production-grade deployments.

# 6. Architectural Adaptations Across Complexity, Autonomy, and Domains

# 6.1. Scaling Communication Patterns with Increasing Agent Complexity and Autonomy

LLM-based agent systems span a spectrum of complexity and autonomy—from simple, rule-based workflows to highly collaborative multi-agent ecosystems.
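The cost model can be computed term by term. A sketch with invented per-token prices and operational figures, purely for illustration:

```python
from typing import Iterable, Tuple

Call = Tuple[int, int]  # (tokens_in, tokens_out) for one LLM call

def llm_cost(calls: Iterable[Call], price_in: float, price_out: float) -> float:
    """C_LLM = sum_i (tokens_in_i * P_in + tokens_out_i * P_out)."""
    return sum(t_in * price_in + t_out * price_out for t_in, t_out in calls)

def total_cost(calls: Iterable[Call], price_in: float, price_out: float,
               mcp_servers: float, network: float, development: float) -> float:
    """C_total = C_LLM + C_MCP_servers + C_network + C_development."""
    return llm_cost(calls, price_in, price_out) + mcp_servers + network + development

# Two LLM calls at assumed prices of $3/M input tokens and $15/M output tokens.
calls = [(1200, 300), (800, 200)]
c = total_cost(calls, 3e-6, 15e-6, mcp_servers=50.0, network=5.0, development=200.0)
```

Because the inference term scales with every call, trimming verbosity or avoiding redundant context re-transmission (which MCP's structured context sharing targets) reduces $C_{\text{LLM}}$ directly, while standardization mostly shows up as a smaller $C_{\text{development}}$.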
As systems increase in sophistication, the communication patterns evolve, and the role of the Model Context Protocol (MCP) becomes more critical and multi-dimensional. MCP transitions from a lightweight integration layer into a foundational communication and interoperability infrastructure.

# Low Complexity / Autonomy (Deterministic Chains / Workflows)

- Description: These systems follow fixed, sequential steps where the flow of execution is predefined. LLMs act more like orchestrators, invoking external tools in a linear fashion with minimal decision-making or dynamic adaptation.
- Communication: Typically involves basic, stateless request-response interactions. There is little to no inter-agent communication, as workflows often consist of single-agent pipelines or simple Retrieval-Augmented Generation (RAG) chains.
- MCP Role: At this level, MCP provides reliable, standardized access to external tools and data sources, ensuring consistent context retrieval for each step. It helps streamline integration and reduces redundant code for connecting to APIs or services. While its role is minimal, it offers tangible benefits in simplifying development and increasing reliability.

# Medium Complexity / Autonomy (Single Agent with Dynamic Decisions)

- Description: Agents at this level begin to exhibit adaptive behavior, dynamically determining which tools to invoke and how to proceed based on intermediate outputs. Iterative reasoning, memory updates, and self-reflection are common [10].
- Communication: Focused on internal agent loops—often involving “self-talk” (e.g., Chain-of-Thought), dynamic memory updates, and reactive tool calls depending on the agent’s evolving understanding of its task.
- MCP Role: MCP becomes essential for enabling dynamic and flexible tool interactions. It manages context updates across steps and provides consistent access to real-world information such as execution outputs, data queries, or code results.
In doing so, it supports more autonomous decision-making and task refinement, helping the agent maintain a reliable view of its environment throughout execution.

# High Complexity / Autonomy (Multi-Agent Architectures)

- Description: These systems consist of multiple specialized agents working collaboratively to solve more complex or distributed problems. Roles may be hierarchically organized or horizontally distributed, and require coordination, synchronization, and conflict resolution.
- Communication: Involves rich inter-agent interaction using patterns like Mediator, Observer, Publish-Subscribe, Broker, Hierarchical, or Network-based topologies. Messaging can be asynchronous, stateful, and context-dependent, with agents needing shared understanding of goals and system state.
- MCP Role: At this level, MCP functions as the interoperability backbone for the entire system. It enables standardized context sharing, event-driven updates, and tool invocation across agents built on potentially different frameworks or stacks. By abstracting the complexity of integration and enforcing consistent interfaces, MCP allows agents to operate as cohesive, coordinated entities. It supports advanced use cases like distributed task allocation, shared memory access, and reactive coordination through resource notifications.

As the autonomy and complexity of agent systems increase, the demands on their communication infrastructure also rise. MCP scales accordingly—from offering convenience and integration support in basic workflows to becoming the core layer that enables robust, scalable, and adaptive coordination in complex multi-agent environments. In simpler applications, MCP reduces boilerplate integration work. In dynamic, single-agent systems, it becomes the source of environmental grounding and dynamic execution.
And in multi-agent ecosystems, it plays a pivotal role in enabling shared context, secure communication, and emergent behavior—making it indispensable for the next generation of agentic AI systems. # 6.2. Case Study: Real-time Transaction Processing Systems # Domain Context: Real-time transaction processing—especially within financial systems—requires extremely high levels of precision, low-latency responses, strong security guarantees, and adherence to regulatory frameworks. LLM-based agents are increasingly used in these settings to support tasks such as analyzing transactional behavior, identifying potential fraud, and automating operational workflows. # Key Challenges: Systems operating in this domain must handle sensitive financial data securely, provide agents with low-latency access to continuously updating transaction streams, and meet strict compliance requirements. Additionally, there is a need to avoid complex and fragile system architectures that can emerge from unstructured, ad hoc integrations—commonly referred to as “spaghetti code.” # Design Patterns and MCP Application: # Fraud Detection (Aggregator Pattern): In fraud detection systems, it’s common to deploy several specialized agents—each focused on different detection strategies, such as rules-based logic, statistical modeling, or network behavior analysis. These agents evaluate transactions independently and forward their outputs to an aggregator, which combines their inputs into a comprehensive fraud risk score. LLMs can support this pipeline by extracting nuanced insights from unstructured data, such as transaction memos or customer notes, transforming them into structured information aligned with an ontology, and contributing them to the broader analysis performed by other agents [41]. MCP Role: MCP plays a key role in enabling fraud detection agents to securely and efficiently access transactional databases and real-time streams. 
It provides standardized, governed interfaces for tool and data integration, reducing the need for custom connectors. In addition, MCP can be used to generate synthetic datasets for safe testing of detection algorithms, helping developers mitigate risks related to data sensitivity while simulating realistic transaction scenarios.

# Communication Flow:

Communication among agents in this context often adopts the Observer or Publish-Subscribe design pattern, enabling real-time fraud detection agents to react instantly to new transactions. MCP supports this model by offering event-driven resource change notifications. Additionally, MCP servers act as secure intermediaries in a Mediator or Broker pattern, facilitating controlled access to financial infrastructure. This abstraction not only simplifies the integration of complex banking systems but also ensures that all interactions remain secure, auditable, and policy-compliant.

# Security Implications:

The rise of Agent-in-the-Middle (AiTM) attacks exposes a significant risk in the way messages are exchanged between agents in LLM-based multi-agent systems (LLM-MAS). This highlights the need for strong security mechanisms not just at the agent level but throughout the communication framework. MCP addresses these concerns through its support for OAuth 2.0/2.1 protocols for secure authentication and authorization, along with fine-grained access control and data masking features implemented at the server level. These capabilities are essential for maintaining the confidentiality, integrity, and regulated handling of sensitive data—particularly in domains such as finance [42]. In critical sectors like transaction processing, the combination of communication design patterns and MCP shifts the focus of system security from isolated agents to the infrastructure that connects them.
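The Aggregator step described above can be sketched as a weighted combination of independent detector outputs (the detector names, scores, and weights are invented for illustration):

```python
from typing import Dict

def aggregate_fraud_score(agent_scores: Dict[str, float],
                          weights: Dict[str, float]) -> float:
    """Aggregator pattern sketch: combine independent detector scores
    (each in [0, 1]) into one weighted, normalized fraud risk score."""
    total_weight = sum(weights[name] for name in agent_scores)
    weighted = sum(score * weights[name] for name, score in agent_scores.items())
    return weighted / total_weight

# Hypothetical outputs from three specialized detection agents for one transaction.
scores = {"rules": 0.9, "statistical": 0.4, "network": 0.6}
weights = {"rules": 0.5, "statistical": 0.3, "network": 0.2}
risk = aggregate_fraud_score(scores, weights)  # 0.45 + 0.12 + 0.12 = 0.69
```

In an MCP-mediated pipeline, each score would arrive as an event-driven update on a shared transaction resource rather than as a direct message to the aggregator.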
Traditional approaches often concentrate on protecting each agent individually, but in LLM-MAS, the communication pathways themselves become a high-value target, as AiTM threats clearly illustrate. Even if agents are secure in isolation, the interception or alteration of messages between them can jeopardize the entire workflow. Patterns such as Mediator and Broker, when built on top of MCP, allow communication to be centralized or standardized, creating a single, auditable point of control. This architectural shift enables security policies—such as token-based access, role-based permissions, and data redaction—to be enforced consistently across all interactions. By consolidating communication through MCP's structured and secure interfaces, systems can reduce the complexity and exposure of decentralized message exchanges. In doing so, MCP provides the infrastructure needed for safe, bidirectional communication that meets the stringent requirements of enterprise-grade financial systems [43].

# 6.3. Case Study: Investment Banking Applications

# Domain Context:

Investment banking involves highly specialized decision-making that requires synthesizing large volumes of diverse data, operating under fast-changing market dynamics, and meeting strict risk and compliance standards. LLM-based agents are being developed to support tasks such as portfolio optimization, market trend analysis, advisory functions, and more complex workflows like mergers and acquisitions (M&A) [51][52].

# Key Challenges:

The financial domain presents unique obstacles, including the need to process and interpret large-scale, multimodal data—such as text-based news, financial statements, and audio from earnings calls. Agents must also handle long-context reasoning, overcome the inherent opacity of traditional deep learning models, and operate within rigid regulatory frameworks requiring transparency and traceability.
# Design Patterns and MCP Application:

# FINCON Framework (Hierarchical Pattern):

The FINCON [50] architecture reflects a hierarchical multi-agent setup modeled after the structure of investment firms. It organizes specialized agents—such as those focusing on sentiment analysis, technical indicators, or fundamentals—under a central decision-making agent that aggregates and evaluates their insights. The system also includes layered risk-control mechanisms designed to monitor and regulate exposure in real time.

# Communication Flow:

FINCON minimizes unnecessary peer-to-peer chatter by channeling communication through structured hierarchies. It incorporates a selective signaling method—sometimes referred to as "verbal reinforcement"—to update only the agents impacted by new investment insights, thereby improving bandwidth and reasoning efficiency.

# Portfolio Management (Parallel / Aggregator Patterns):

In portfolio optimization scenarios, multiple agents may concurrently evaluate different types of risk (e.g., market volatility, sector correlation, credit exposure) or assess distinct asset categories. Their outputs are collected and synthesized by an aggregator agent to form a comprehensive view of portfolio health or risk.

# MCP Role:

MCP enables agents to securely and consistently access up-to-date financial data from a variety of sources, including streaming market feeds, regulatory filings, and internal financial systems. It allows agents to subscribe to relevant updates and changes through resource notification mechanisms and ensures consistent data formatting across all interactions. MCP’s context-sharing primitives ensure that agents remain synchronized with real-time market conditions—essential for high-frequency decision-making environments.
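The Parallel/Aggregator combination for portfolio risk can be sketched with thread-based fan-out (the risk functions and their values are placeholders, not real models):

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Dict

# Placeholder risk agents; real ones would query MCP-exposed market data.
def market_risk(asset: str) -> Dict[str, float]:
    return {"market": 0.3}

def credit_risk(asset: str) -> Dict[str, float]:
    return {"credit": 0.1}

def sector_risk(asset: str) -> Dict[str, float]:
    return {"sector": 0.2}

def portfolio_view(asset: str) -> Dict[str, float]:
    """Parallel pattern: independent risk agents run concurrently;
    Aggregator pattern: their outputs are merged into one view."""
    evaluators = [market_risk, credit_risk, sector_risk]
    with ThreadPoolExecutor(max_workers=len(evaluators)) as pool:
        partials = list(pool.map(lambda fn: fn(asset), evaluators))
    merged: Dict[str, float] = {}
    for partial in partials:
        merged.update(partial)
    merged["overall"] = sum(merged.values())  # naive additive roll-up
    return merged
```

The additive roll-up is a deliberate simplification; a production aggregator would weight or correlate the component risks rather than summing them.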
# M&A (Complex Workflows): In M&A scenarios, a distributed team of agents could be deployed to handle various aspects of the process, such as financial due diligence, legal compliance, strategic alignment, and scenario modeling. These agents would coordinate using structured communication protocols and shared context layers, all supported through MCP’s interoperable infrastructure. This ensures each specialized agent can contribute its insights to a larger decision-making process in a secure and transparent way. LLM-MAS architectures in finance—especially those enhanced with MCP—are increasingly focused on delivering not just accurate decisions, but also explainable and auditable reasoning. Traditional AI models often fall short in clarity, making their outputs difficult to justify in regulated financial environments. Multi-agent systems like FINCON address this by structuring internal communication and reasoning flows around human-interpretable, evidence-based outputs. The manager-analyst hierarchy ensures that complex insights are broken down and expressed in clear language, while decisions are supported by traceable data. MCP plays a pivotal role in this process by providing structured access to the underlying data sources and tools, allowing agents to explicitly reference the foundations of their conclusions. Together, these mechanisms are helping shift financial AI systems from opaque black boxes to transparent, trustworthy, and regulation-ready platforms. # 6.4. Financial Services Use Cases and Corresponding LLM-MAS Design Patterns # Fraud Detection Fraud detection in financial systems leverages design patterns such as Aggregator, Observer/Publish-Subscribe, and Mediator/Broker. These patterns enable specialized detection agents—each using different methods like rule-based checks, machine learning, or behavioral analysis—to operate in parallel, share findings, and contribute to a consolidated fraud score. 
Communication is driven by real-time transaction events, which trigger agent responses via event streams, while structured inputs (such as "intuitive" language-based hunches) are passed through a central mediator or broker. MCP plays a key role by providing secure, real-time access to transactional databases and internal financial records, while also supporting the generation of synthetic data for testing in development environments. This architecture improves fraud detection speed, lowers false positive rates, and enables more proactive risk prevention. However, challenges persist around securing inter-agent communication against Agent-in-the-Middle (AiTM) attacks, handling highly sensitive financial data, and ensuring adherence to regulatory standards. # Portfolio Management Portfolio management systems adopt hierarchical structures with manager-analyst models, along with parallel processing and aggregator patterns. Specialized agents may simultaneously assess market trends, asset risk, or credit exposure, feeding their insights into a centralized manager agent that generates portfolio-level strategies. Communication is streamlined through structured hierarchies and "verbal reinforcement"—a mechanism for propagating key investment updates to relevant agents without redundant messaging. MCP facilitates these workflows by offering uniform, real-time access to external data sources such as market feeds, financial disclosures, and internal records. It also supports context sharing to keep agents aligned with current portfolio conditions. The result is more efficient decision-making, deeper risk profiling, and data-backed investment strategies. Nevertheless, the dynamic nature of financial markets, combined with the complexity of synthesizing data from diverse sources, creates significant challenges in maintaining decision explainability and system robustness. 
# Financial Advisory

In financial advisory applications, LLM agents are designed to support hybrid and adaptive reasoning, often combining procedural strategies with contextual adjustments. These systems rely on real-time communication channels, context-aware dialogue, and human-in-the-loop mechanisms for resolving complex or personalized financial queries. MCP enables this use case by ensuring secure and standardized access to personal financial data, including client histories, investment goals, and market conditions. It allows advisory agents to generate personalized recommendations grounded in real-time data and secure client context. Benefits include more tailored advice, improved client trust, and adaptive learning paths for different financial profiles. The main challenges lie in maintaining conversational context over extended interactions, managing sensitive personally identifiable information (PII), and addressing the ethical implications of automated financial advice.

# Mergers & Acquisitions (M&A) Due Diligence

Though often implicit, M&A due diligence benefits from complex agent orchestration using composite, sequential, and workflow-based design patterns. Specialized agents focus on specific areas such as legal analysis, financial modeling, or strategic evaluation, each contributing to a shared understanding of the acquisition target. Communication follows a structured model involving document sharing, common knowledge repositories, and negotiation protocols. MCP supports this ecosystem by enabling secure and standardized access to data rooms, legal filings, and financial disclosures. It ensures that agents can interact with sensitive content without compromising security or consistency. This setup enhances the speed and accuracy of due diligence processes, enabling more comprehensive risk assessments.
Key difficulties include handling unstructured documents at scale, integrating knowledge from various professional domains, and satisfying legal and regulatory expectations throughout the deal lifecycle. # 7. Challenges, Security, and Future Research Directions # 7.1. Addressing Scalability, Reliability, and Security in MCP-enabled MAS Deploying LLM-based multi-agent systems (LLM-MAS) in real-world environments requires careful attention to scalability, reliability, and security to ensure operational robustness and trustworthiness. # Scalability: Although multi-agent systems are naturally modular and well-suited for scaling, the communication burden can grow rapidly as the number of agents increases, potentially leading to substantial coordination overhead. MCP helps mitigate this issue by providing standardized interfaces for tool and data integration, which reduces the need for custom-built connectors. This modular, plug-and-play approach streamlines the process of expanding systems and managing more agents without exponentially increasing system complexity. # Reliability: LLMs can sometimes produce inconsistent outputs or demonstrate sensitivity to prompt variations, resulting in unreliable behavior. In a multi-agent context, systems can address these issues using strategies such as internal feedback loops, self-evaluation, and agent cross-checking. MCP contributes to improved reliability by enforcing structured, consistent communication flows and context management. By ensuring that all agents access a unified, well-defined information space, MCP reduces the chances of miscommunication or hallucinated responses. # Security: Protecting sensitive data is critical, particularly in enterprise or regulated settings. MCP incorporates security as a core design feature through mechanisms like explicit user consent, defined permission boundaries, granular access control, and visibility into tool usage.
However, threats such as Agent-in-the-Middle (AiTM) attacks still present potential risks— especially in systems that rely on shared context across agents. These challenges require a robust communication infrastructure that actively safeguards against unauthorized access or data leakage. The growing need for LLMs to operate on enriched context introduces a trade-off between contextual richness and security exposure. As LLM performance improves with more available information, so too does the risk of sensitive data being unintentionally exposed. To manage this balance, MCP embeds security controls directly into its architecture. This includes OAuth-based authentication, fine-grained access permissions, and host-managed communication oversight. Rather than treating security as a post-deployment add-on, MCP follows a “secure-by-design” philosophy—integrating protective measures into every layer of the communication process. This foundational approach is essential for safely deploying context-aware agents in high-stakes environments where privacy, auditability, and compliance are non-negotiable. # 7.2. Ethical Considerations and Human-in-the-Loop Integration The implementation of LLM-based multi-agent systems (LLM-MAS) brings forth important ethical considerations, particularly in areas such as transparency, fairness, and responsibility. Without proper safeguards, LLMs may unintentionally propagate biases or inaccuracies embedded in their training data, potentially leading to flawed or unfair outcomes. To mitigate these risks, incorporating Human-in-the-Loop (HITL) mechanisms is essential. Human oversight allows for review, intervention, or direct control over agent decisions—especially critical in domains where errors carry serious implications, such as healthcare, finance, or legal advisory. 
The Model Context Protocol (MCP) supports this mode of interaction by allowing MCP servers to pause agent execution and request additional user input, confirmation, or explicit consent. This capability ensures that automated decisions can be guided or overridden by human judgment when needed. By embedding human oversight into agent workflows through MCP, the paradigm shifts from pure autonomy to collaborative intelligence—where humans and AI agents operate as integrated partners. Although LLM agents are designed for independent reasoning, their current limitations— including hallucinations, lack of ethical reasoning, and occasional unreliability—highlight the need for structured human involvement. HITL and related patterns like “human-on-the-loop” introduce intentional checkpoints for validation, context clarification, or ethical scrutiny. MCP enables this by supporting interaction models that go beyond simple task execution. Features such as elicitation (the ability for servers to prompt for human feedback or input), fine-grained consent management, and user-directed control allow humans to stay actively involved in the loop. As a result, LLM-MAS evolve from standalone autonomous agents into systems that emphasize shared decision-making. This vision redefines communication protocols not just as pathways for inter-agent collaboration, but as bridges for seamless, secure, and transparent human-AI cooperation—ensuring that accountability and trust remain central in increasingly intelligent and complex systems [53]. # 7.3. Open Research Questions and Emerging Trends The landscape of LLM-driven agentic AI and multi-agent systems is advancing rapidly, giving rise to a range of unresolved research challenges and emerging directions: # Defining Formal Semantics for Agent Communication Traditional Agent Communication Languages (ACLs), such as FIPA-ACL, were built on strict formal semantics to ensure clarity and consistency. 
In contrast, LLM-based agents often communicate using natural language, which, while flexible and expressive, introduces ambiguity. A key research goal is to bridge the gap between natural language communication and the level of precision required for reliable, machine-to-machine interaction—especially in systems where misinterpretation could lead to cascading errors [54]. # Incorporating Multi-Modal Communication A growing research frontier involves expanding agent capabilities to interpret and exchange information across multiple modalities—such as visual, auditory, and textual data. Enabling agents to engage through combinations of text, images, or speech can significantly enhance their ability to perceive context, interpret complex scenarios, and make better-informed decisions. # Real-Time Role Adaptation and Self-Organizing Architectures Another important research focus is enabling agents to reorganize themselves dynamically in response to evolving goals or environmental shifts. This includes forming task-specific teams, redistributing responsibilities, and adjusting communication flows on the fly. Such adaptability moves beyond rigid system configurations, allowing agents to operate in more fluid, open-ended environments [55]. # Benchmarking and Evaluation: There is an urgent need for evaluation frameworks that go beyond measuring the performance of individual agents and instead assess how effectively multiple agents work together. Such benchmarks should capture aspects like coordination efficiency, collective reasoning, task distribution, and emergent behaviors in complex environments [56]. # Long-term Learning and Adaptation: A major open challenge is enabling multi-agent systems to continuously evolve—learning new strategies, refining communication protocols, and adapting their behavior as environments and tasks change over time. 
This includes the ability to retain useful knowledge, respond to feedback, and improve coordination strategies in non-static, real-world scenarios. # Decentralized Agent Marketplaces: Emerging efforts around protocols like the Agent Network Protocol (ANP) suggest a future where agents can be discovered, composed, and deployed across open networks. These systems envision secure, decentralized environments where independently developed agents can collaborate, negotiate roles, and transact capabilities without centralized control. Looking ahead, the evolution of LLM-based multi-agent systems points toward architectures that are inherently adaptive and self-organizing, far beyond the limitations of traditional software design models. Today’s systems typically rely on predefined roles and fixed communication flows—even in complex tasks. However, future systems are expected to dynamically select, modify, or even invent new coordination strategies in response to previously unseen situations. This progression implies the emergence of meta-level AI mechanisms that govern not just behavior, but the underlying design architecture itself. Such developments could lead to the rise of AI-generated design patterns, where system configuration becomes a fluid, evolving process rather than a static engineering choice. This shift marks a fundamental transformation—from applying established software principles to building systems capable of autonomously redefining their own communication and coordination logic, bridging the gap between human-designed architectures and autonomous AI systems.
This survey investigates how classical software design patterns can enhance the reliability and scalability of communication in Large Language Model (LLM)-driven agentic AI systems, focusing particularly on the Model Context Protocol (MCP). It examines the foundational architectures of LLM-based agents and their evolution from isolated operation to sophisticated, multi-agent collaboration, addressing key communication hurdles that arise in this transition. The study revisits well-established patterns, including Mediator, Observer, Publish-Subscribe, and Broker, and analyzes their relevance in structuring agent interactions within MCP-compliant frameworks. To clarify these dynamics, the article provides conceptual schematics and formal models that map out communication pathways and optimize data flow. It further explores architectural variations suited to different degrees of agent autonomy and system complexity. Real-world applications in domains such as real-time financial processing and investment banking are discussed, illustrating how these patterns and MCP can meet specific operational demands. The article concludes by outlining open challenges, potential security risks, and promising directions for advancing robust, interoperable, and scalable multi-agent LLM ecosystems.
[ "cs.SE" ]
# 1 Introduction Large Language Models (LLMs) have demonstrated a growing ability to analyze intricate social contexts and provide novel insights into human behavior and moral decision-making (Forbes et al., 2020; Hendrycks et al., 2021; Jiang et al., 2021; Vida et al., 2023). Recent work shows that, when given carefully designed prompts, LLMs can handle a range of moral judgments in straightforward scenarios (Jin et al., 2022). Yet moral reasoning is profoundly contextual: competing ethical principles, convoluted personal narratives, and diverse social norms can all reshape how a dilemma should be interpreted (Nguyen et al., 2022; Ji et al., 2024). As a result, even strong LLMs frequently differ in their moral assessments when faced with complex, multi-factor moral scenarios (Figure 1).

Figure 1: An example dilemma (“My family controls every aspect of my life… I want to see a friend… I will be a legal adult, and I plan to pay for the trip myself. Would it be wrong to go without their permission?”) on which LLMs-as-judges assign widely divergent acceptability scores under deontology (15%–65% moral) and utilitarianism (35%–85% moral), reflecting the tension between personal autonomy and parental authority.

Existing alignment paradigms, such as Constitutional AI (Bai et al., 2022) and Reinforcement Learning from Human Feedback (Ouyang et al., 2022), typically focus on refining a single LLM according to policy constraints or human judgments. However, they do not directly address scenarios where multiple large models, each possibly with distinct biases, must converge on a unified understanding of complex moral contexts. Beyond single-model alignment, aggregator approaches in crowdsourcing (Dawid and Skene, 1979; Hovy et al., 2013) have long recognized the need to estimate annotator reliability and consensus. Yet these classical methods typically operate with discrete labels and do not naturally extend to continuous moral acceptability scores – w.l.o.g.
in [0, 1] – required by nuanced moral dilemmas. Here, we address two critical challenges that arise when applying LLMs to morally intricate scenarios. First, we move beyond simple, single-factor questions such as “I cut in line with no excuse” to social dilemmas involving multiple stakeholders and competing values (e.g. the scenario in Figure 1). While such scenarios represent everyday moral complexity, they pose significant modeling difficulties. Binary labels (“moral” vs. “immoral”) cannot capture the gradations of moral acceptability, which often lie along a continuum of possible judgments (Jin et al., 2022; Pyatkin et al., 2023). Second, we recognize the importance of gathering perspectives from multiple LLMs to form a collectively formulated opinion that better approximates a shared moral stance. Past studies suggest that solely relying on a single model or narrowly sourced viewpoints can introduce gaps, bias, or incomplete moral representations (Takeshita et al., 2023; Rao et al., 2023; Zhou et al., 2024). In contrast, synthesizing opinions from multiple LLMs can yield richer insights and reduce the idiosyncratic errors of any one model. However, we have observed that certain LLMs misalign substantially with the aggregated consensus, indicating that their representations of specific moral philosophical theories are insufficient or systematically skewed. To tackle these shortcomings, we propose a twofold framework. First, we derive a collective moral reference for a given dilemma by merging continuous annotations from multiple LLMs via a novel truncated-normal Expectation-Maximization (EM)-based method. By adapting multi-annotator reliability estimation to continuous moral scores, we capture subtle distinctions that simple majority voting or unbounded Gaussian assumptions might obscure. Second, for those models consistently at odds with the distilled consensus, we introduce an embedding-optimization strategy.
By adjusting only the representations of key moral-theory tokens, we aim not just to improve alignment but also to validate that the aggregator’s consensus indeed encodes meaningful moral knowledge. If the strategy fails to reduce misalignment, it may suggest deeper issues in either the model’s understanding or the consensus itself. The fundamental premise of our approach is that social dilemmas rarely admit objectively correct judgments. In morally ambiguous real-world contexts, individuals often seek reference, not truth. Accordingly, our framework emphasizes coherence over correctness, seeking to model alignment with shared patterns rather than enforce normative truths. This distinction is crucial: we differentiate mere non-consensus (alternative but plausible viewpoints) from poor performance (systematic divergence likely due to conceptual misunderstanding). Rather than claiming a single “true” moral label, our goal is to provide a principled reference that balances multiple perspectives and pinpoints where real misalignment occurs. The evaluation is thus framed not as accuracy against ground truth, but as alignment with an emergent, model-based consensus. Our contributions are as follows. (1) We propose a truncated-normal EM aggregation method that fuses continuous moral scores from multiple LLMs into a collective moral reference by modeling annotator reliability. (2) We introduce a token-level embedding realignment for a set of moral philosophical theories, which refines underperforming models’ representations to better align with the consensus, while checking if coherent moral knowledge is captured by collective judgments. (3) Through comprehensive validation on real-world moral dilemmas distilled from the AITA dataset (Nguyen et al., 2022), we demonstrate improved model consistency and show how continuous moral probabilities help disentangle complex dilemmas with overlapping or conflicting moral principles. # 2 Related Work Moral Alignment.
Research on aligning LLMs with human moral reasoning has made significant progress. Datasets like Social Chemistry 101 (Forbes et al., 2020) and ETHICS (Hendrycks et al., 2021) enable reasoning about norms and moral philosophical theories. Meanwhile, MoralBench (Ji et al., 2024) and AITA (Nguyen et al., 2022) focus on real-world moral dilemmas, capturing the intricate nature of human decision-making. A key challenge in moral reasoning is handling complex narratives. Methods like ClarifyDelphi (Pyatkin et al., 2023) refine moral judgments via clarification questions, while Jin et al. (2022) employ chain-of-thought prompting to handle exceptions. Additionally, recent works incorporate normative ethical theories to guide moral reasoning (Takeshita et al., 2023; Rao et al., 2023; Zhou et al., 2024). Our task specifically focuses on moral alignment of LLMs in complex scenarios involving multiple moral theories outlined in ETHICS (Hendrycks et al., 2021). The complex scenarios are social moral dilemmas summarized from AITA (Nguyen et al., 2022). Multi-Annotator Consensus and Aggregation. A long line of research has investigated approaches for fusing or calibrating diverse annotators’ labels (Dawid and Skene, 1979; Hovy et al., 2013). Classical models, however, typically rely on discrete categories and do not readily account for subtle, continuous moral judgments. Our truncated-normal EM approach adapts multi-annotator reliability estimation to [0,1] moral scores, making it well-suited for nuanced dilemmas where binary labels fail to capture the full spectrum of moral acceptability. Embedding Modification. In recent years, numerous strategies have emerged for controlling or refining the behaviors of LLMs via targeted modifications of their embedding or parameter spaces.
Methods like MEND (Mitchell et al., 2021), MEMIT (Meng et al., 2023), and ROME (Meng et al., 2022) enable local “model editing” by adjusting internal weights or embeddings to rectify factual errors or mitigate undesired behaviors, while LoRA (Hu et al., 2021) and prefix-tuning (Li and Liang, 2021) reduce computational overhead by injecting small trainable parameters into large pretrained models. While effective for domain adaptation and knowledge editing, they typically focus on tasks like factual corrections or bias mitigation (e.g., gender bias (Bolukbasi et al., 2016)), rather than continuous moral alignment. By contrast, our work employs a token-level embedding optimization specifically to enhance theory alignment with a collectively formulated moral reference. This fills a gap in nuanced moral reasoning. # 3 Problem Setup Let $i \in \{ 1 , 2 , \ldots , N \}$ index a collection of moral scenarios, and let $j \in \{ 1 , 2 , \dots , M \}$ index a set of moral philosophical theories (i.e., virtue, justice, deontology, utilitarianism, and commonsense morality). The goal is to obtain moral judgments from $L$ large language models for each scenario– theory pair $( i , j )$ . Specifically, each model $m$ provides a continuous annotation $a _ { m , j , i } \in [ 0 , 1 ]$ , indicating the degree to which it deems scenario $i$ morally acceptable under theory $j$ . This continuous formulation allows for more nuanced interpretations than binary annotations: values near 0.5 reflect ambiguity or moral tension, while values closer to 0 or 1 reflect clearer moral signals. Although these continuous annotations yield rich information about each model’s stance, they can vary significantly across models. 
We therefore introduce a collective opinion $\gamma _ { j , i } \in [ 0 , 1 ]$ , which integrates the annotations $\{ a _ { m , j , i } \} _ { m = 1 } ^ { L }$ for scenario $i$ under theory $j$ into a single probability of moral acceptability: $$ \begin{array} { r } { \gamma _ { j , i } ~ = ~ P \big ( \phi _ { j , i } = 1 \big | \{ a _ { m , j , i } \} , \theta \big ) , } \end{array} $$ where $\phi _ { j , i } \in \{ 0 , 1 \}$ is a latent binary variable indicating the “true” moral acceptability of scenario $i$ under theory $j$ , and $\theta$ are the parameters of our statistical model (in Section 4.1). In essence, $\gamma _ { j , i }$ represents the probability that scenario $i$ is morally acceptable under theory $j$ , given all models’ judgments. This collective probability serves as a pivotal reference for measuring how well each individual model aligns with the broader consensus. However, certain LLMs may diverge substantially from $\gamma _ { j , i }$ on specific theories, underscoring potential gaps in their understanding or representation of morally salient ideas. To mitigate these gaps, we selectively fine-tune the token embeddings associated with the poorly aligned theories. By recalibrating such embeddings, we aim to equip the underperforming model with a shared understanding of the relevant ethical principles and thereby increase its agreement with the collective opinion. # 4 Methodology Our approach comprises two major components (Figure 2). First, we propose a probabilistic aggregator based on a truncated-normal formulation. This aggregator derives a consensus probability $\gamma _ { j , i }$ for each scenario–theory pair by modeling both the reliability and variance of each LLM’s annotations. Second, for models exhibiting significant misalignment, we apply a targeted embedding optimization on theory-related tokens. 
This twofold strategy allows us to both establish a meaningful moral consensus and refine individual models’ embeddings when they diverge from that consensus. # 4.1 Probabilistic Modeling of Moral Annotations We assume that each annotation $a _ { m , j , i } \in [ 0 , 1 ]$ is drawn from a truncated normal distribution (TND) conditioned on the latent label $\phi _ { j , i }$ . Specifically, $$ a _ { m , j , i } \sim \mathrm { T N D } \Big ( \mu _ { \phi _ { j , i } } ( m ) , \sigma _ { \phi _ { j , i } } ^ { 2 } ( m ) , 0 , 1 \Big ) , $$ where $\mu _ { \phi _ { j , i } } ( m )$ and $\sigma _ { \phi _ { j , i } } ^ { 2 } ( m )$ are reliability parameters for model $m$ . Concretely: • $\mu _ { 1 } ( m )$ and $\sigma _ { 1 } ^ { 2 } ( m )$ specify the mean and variance of $a _ { m , j , i }$ when $\phi _ { j , i } = 1$ (the “positive” or morally acceptable label). • $\mu _ { 0 } ( m )$ and $\sigma _ { 0 } ^ { 2 } ( m )$ specify the mean and variance of $a _ { m , j , i }$ when $\phi _ { j , i } = 0$ (the “negative” or immoral label).

Figure 2: Framework overview. $L$ LLMs provide moral judgments on each dilemma under $M$ moral concepts (justice, virtue, deontology, utilitarianism, commonsense); a truncated-normal EM aggregator (E-step and M-step, with per-model reliability parameters) fuses them into an aggregated opinion; misaligned LLMs are then fine-tuned with a JS-divergence objective on new concept-token embeddings under a drift constraint relative to the original tokens, and re-evaluated with the adjusted concept embeddings.

We generally expect $\mu _ { 1 } ( m )$ to be near 1 (high acceptability) and $\mu _ { 0 } ( m )$ near 0 (low acceptability) for a well-calibrated model $m$ .
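Section 4.1's annotation model can be checked numerically with SciPy's `truncnorm`; the reliability parameters below are hypothetical values chosen for illustration, not estimates from the paper:

```python
from scipy.stats import truncnorm

def tnd_pdf(a, mu, sigma):
    """Density at a of a Normal(mu, sigma^2) truncated to [0, 1].
    truncnorm expects the truncation bounds in standard-deviation units."""
    lo, hi = (0.0 - mu) / sigma, (1.0 - mu) / sigma
    return truncnorm.pdf(a, lo, hi, loc=mu, scale=sigma)

# Hypothetical reliability parameters for one model m:
mu1, sigma1 = 0.9, 0.15  # annotation behavior when phi_{j,i} = 1 (acceptable)
mu0, sigma0 = 0.1, 0.15  # annotation behavior when phi_{j,i} = 0 (immoral)

a = 0.85  # an observed annotation a_{m,j,i}
# The observation is far more likely under the positive latent label:
print(tnd_pdf(a, mu1, sigma1) > tnd_pdf(a, mu0, sigma0))  # True
```

The normalization by $\Phi(1) - \Phi(0)$ in the likelihood of Section 4.2 is exactly what `truncnorm` performs internally via its standardized bounds.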
# 4.2 Truncated-Normal Likelihood and Reliability Estimation The likelihood of observing $a _ { m , j , i }$ given $\phi _ { j , i }$ and reliability parameters $\theta _ { \phi _ { j , i } } ( m )$ follows the truncated-normal density: $$ \begin{array} { r l } { f _ { t n } ^ { ( \phi _ { j , i } ) } ( m ) = P \big ( a _ { m , j , i } \mid \phi _ { j , i } , \theta _ { \phi _ { j , i } } ( m ) \big ) } & { } \\ { = } & { \frac { \mathcal { N } \big ( a _ { m , j , i } ; \mu _ { \phi _ { j , i } } ( m ) , \sigma _ { \phi _ { j , i } } ^ { 2 } ( m ) \big ) } { \Phi \big ( 1 ; \theta _ { \phi _ { j , i } } ( m ) \big ) - \Phi \big ( 0 ; \theta _ { \phi _ { j , i } } ( m ) \big ) } , } \end{array} $$ where $\mathcal { N }$ denotes the untruncated Gaussian density, and $\Phi$ is its corresponding cumulative distribution function (CDF). The denominator ensures proper normalization over $[ 0 , 1 ]$ . To learn $\theta _ { \phi _ { j , i } } ( m )$ and $\gamma _ { j , i }$ , we use the Expectation-Maximization (EM) algorithm. Below, we describe the key steps: E-Step. With current reliability parameters $\theta _ { \phi _ { j , i } } ( m )$ , we compute the posterior probability $\gamma _ { j , i }$ that $\phi _ { j , i } = 1$ : $$ \begin{array} { l } { { \gamma _ { j , i } = P ( \phi _ { j , i } = 1 \mid \{ a _ { m , j , i } \} , \theta ) } } \\ { { = { \displaystyle { \frac { P ( \phi _ { j , i } = 1 ) \prod _ { m } f _ { t n } ^ { ( \phi _ { j , i } = 1 ) } ( m ) } { \sum _ { \phi _ { j , i } \in \{ 0 , 1 \} } P ( \phi _ { j , i } ) \prod _ { m } f _ { t n } ^ { ( \phi _ { j , i } ) } ( m ) } } } . } } \end{array} $$ This quantity $\gamma _ { j , i }$ serves as a continuous consensus probability of moral acceptability. M-Step. Next, we update $\mu _ { \phi _ { j , i } } ( m )$ and $\sigma _ { \phi _ { j , i } } ^ { 2 } ( m )$ by using the posterior probabilities as weights.
For instance, the positive parameters $\mu _ { 1 } ( m ) , \sigma _ { 1 } ^ { 2 } ( m )$ are updated via: $$ \mu _ { 1 } ( m ) = \frac { \sum _ { i = 1 } ^ { N } \sum _ { j = 1 } ^ { M } \gamma _ { j , i } \, a _ { m , j , i } } { \sum _ { i = 1 } ^ { N } \sum _ { j = 1 } ^ { M } \gamma _ { j , i } } , \qquad \sigma _ { 1 } ^ { 2 } ( m ) = \frac { \sum _ { i = 1 } ^ { N } \sum _ { j = 1 } ^ { M } \gamma _ { j , i } \left( a _ { m , j , i } - \mu _ { 1 } ( m ) \right) ^ { 2 } } { \sum _ { i = 1 } ^ { N } \sum _ { j = 1 } ^ { M } \gamma _ { j , i } } , $$ while negative parameters $\mu _ { 0 } ( m ) , \sigma _ { 0 } ^ { 2 } ( m )$ employ weights $1 - \gamma _ { j , i }$ . Iterating the E- and M-steps refines these reliability parameters until convergence. Collective Opinion. Once the EM procedure converges, $\gamma _ { j , i }$ captures a collectively formulated moral stance on scenario $i$ under theory $j$ . If desired, one can convert this continuous probability into a binary label via a threshold $\tau \in ( 0 , 1 )$ , $$ \hat { \phi } _ { j , i } = \left\{ \begin{array} { l l } { 1 , } & { \mathrm { i f } \ \gamma _ { j , i } \ > \ \tau , } \\ { 0 , } & { \mathrm { o t h e r w i s e } . } \end{array} \right. $$ Models with smaller variance $\sigma _ { \phi _ { j , i } } ^ { 2 } ( m )$ and means $\mu _ { 1 } ( m ) \approx 1$ , $\mu _ { 0 } ( m ) \approx 0$ carry stronger influence in shaping $\gamma _ { j , i }$ , reflecting higher reliability. Notably, compared to other approaches, our truncated-normal aggregation method better handles continuous moral scores and annotator reliability, as summarized in Table 1. # 4.3 Embedding Optimization for Misaligned Models Even after consensus aggregation, some LLMs may remain significantly misaligned on one or more moral theories.
Rather than discarding these models, we propose a targeted embedding optimization that adjusts only those tokens corresponding to the poorly aligned theory. Table 1: Comparison of Aggregation Methods for Moral Judgment Alignment. The truncated-normal EM framework accounts for annotator reliability and continuous moral scores while ensuring bounded outputs. Identifying Misalignment. We examine each model $m$ ’s predictions against the collective judgments. For theory $\tilde { j }$ where model $m$ exhibits large systematic deviation or misalignment (e.g., low F1 score with respect to $\hat { \phi } _ { j , i }$ ), we optimize $N _ { t }$ tokens associated with that moral theory (e.g., tokens tokenized from deontology or utilitarianism). Specifically, to minimize impact on the model’s broader capabilities, we introduce new tokens, initialize their embeddings with those of the selected $N _ { t }$ tokens, and optimize them in a controlled manner. Fine-Tuning Objective. Let $P _ { \tilde { j } , i } ^ { \mathrm { t g t } } = [ \gamma _ { \tilde { j } , i } , 1 - \gamma _ { \tilde { j } , i } ]$ be the “target” distribution for moral acceptability at scenario $i$ and theory $\tilde { j }$ . We augment model $\tilde { m }$ with a lightweight feedforward layer that outputs a predicted acceptability distribution $P _ { \tilde { j } } ^ { \mathrm { p r e } }$ . We then define a loss based on the Jensen-Shannon (JS) divergence (Menéndez et al., 1997): $$ \mathrm { l o s s } _ { J S } \ = \ \mathrm { J S } \big ( P _ { \tilde { j } } ^ { \mathrm { p r e } } , P _ { \tilde { j } } ^ { \mathrm { t g t } } \big ) . $$ Regularization of Theory Embeddings. To preserve the broader semantics of each token, we introduce a regularizer that penalizes large changes to these embeddings.
Specifically, we minimize the average cosine distance (cos-dist) between the original $( e _ { k } ^ { \mathrm { { o g } } } )$ and updated $( e _ { k } ^ { \mathrm { u d } } )$ embeddings: $$ \mathrm { l o s s } _ { C S } \ : = \ : \frac { 1 } { N _ { t } } \sum _ { k = 1 } ^ { N _ { t } } \mathrm { c o s } \mathrm { - } \mathrm { d i s t } \big ( e _ { k } ^ { \mathrm { u d } } , e _ { k } ^ { \mathrm { o g } } \big ) . $$ The total loss for fine-tuning becomes: $$ \mathrm { l o s s } _ { E } = \mathrm { l o s s } _ { J S } + \mathrm { l o s s } _ { C S } . $$ Training Strategy. We freeze all layers of model $\tilde { m }$ except for: 1) the embeddings of the $N _ { t }$ target theory tokens, and 2) the parameters of the new feedforward layer. Optimizing $\mathrm { l o s s } _ { E }$ refines these token embeddings to more closely match the consensus moral stance while limiting unwanted drift in language capabilities. Outcome. After this localized embedding finetuning, we re-evaluate model $\tilde { m }$ on the same moral dilemmas. If alignment improves substantially, it suggests that the collective opinion $\gamma _ { j , i }$ contains coherent moral knowledge, and that adjusting critical token embeddings can remedy the model’s initial misunderstanding. Conversely, if alignment fails to improve, deeper issues in either the consensus itself or the model’s capacity to represent those moral theories may require further investigation. Overall, this targeted optimization procedure retains the strengths of each LLM while systematically correcting conceptual misalignment—leading to a more reliable, consensus-informed representation of nuanced moral judgments. # 5 Experimental Evaluation Two key questions are examined: (1) Does the truncated-normal EM approach produce a coherent collective opinion across LLMs? (2) Can targeted embedding optimization effectively reduce misalignment for specific theories and models? 
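Before turning to the data, the aggregation step can be made concrete with a small synthetic experiment. The NumPy/SciPy sketch below implements the truncated-normal EM of Section 4.2 under simplifying assumptions (a uniform prior $P(\phi_{j,i}) = 0.5$, simulated annotations, and a single flattened index over (scenario, theory) pairs); it is an illustration, not the authors' code:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def tnd_pdf(a, mu, sigma):
    """Density of a Normal(mu, sigma^2) truncated to [0, 1]."""
    lo, hi = (0.0 - mu) / sigma, (1.0 - mu) / sigma
    return truncnorm.pdf(a, lo, hi, loc=mu, scale=sigma)

# Simulate L models annotating P (scenario, theory) pairs with hidden labels phi.
L, P = 4, 200
phi = rng.integers(0, 2, size=P)  # latent "true" acceptability
a = np.clip(rng.normal(np.where(phi, 0.8, 0.2), 0.15, size=(L, P)), 1e-3, 1 - 1e-3)

# Initialize per-model reliability parameters (oriented so label 1 means acceptable).
mu1, s1 = np.full(L, 0.7), np.full(L, 0.2)
mu0, s0 = np.full(L, 0.3), np.full(L, 0.2)

for _ in range(30):
    # E-step: posterior gamma that phi = 1, with a uniform prior P(phi) = 0.5.
    lik1 = np.prod([tnd_pdf(a[m], mu1[m], s1[m]) for m in range(L)], axis=0)
    lik0 = np.prod([tnd_pdf(a[m], mu0[m], s0[m]) for m in range(L)], axis=0)
    gamma = lik1 / (lik1 + lik0)
    # M-step: gamma-weighted means and variances per model (Eqs. 5-6).
    for m in range(L):
        mu1[m] = np.sum(gamma * a[m]) / np.sum(gamma)
        s1[m] = np.sqrt(np.sum(gamma * (a[m] - mu1[m]) ** 2) / np.sum(gamma)) + 1e-4
        w0 = 1 - gamma
        mu0[m] = np.sum(w0 * a[m]) / np.sum(w0)
        s0[m] = np.sqrt(np.sum(w0 * (a[m] - mu0[m]) ** 2) / np.sum(w0)) + 1e-4

pred = (gamma > 0.5).astype(int)
accuracy = (pred == phi).mean()
print(round(accuracy, 2))  # recovers most latent labels on this synthetic data
```

On this synthetic data the posterior $\gamma$ recovers nearly all latent labels; in the paper's setting, the reliability parameters $(\mu, \sigma)$ would instead be estimated from real LLM annotations across scenario-theory pairs.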
# 5.1 Dataset We use 42,501 moral dilemmas from the AITA dataset (Nguyen et al., 2022), a Reddit-based repository where users present morally charged scenarios often involving interpersonal conflicts. Since original posts may contain personal emotional biases or extraneous context, we employ GPT-4o-Mini (Hurst et al., 2024) to generate neutralized summaries capped at 150 tokens, thus preserving salient details while reducing idiosyncratic noise. The prompt appears in Appendix A.10. We annotate each summarized dilemma according to five moral theories (i.e., justice, virtue, deontology, utilitarianism and commonsense) from ETHICS (Hendrycks et al., 2021). Specifically, a set of LLMs each assigns a continuous moral acceptability score $a _ { m , j , i } \in [ 0 , 1 ]$ for theory $j$ in dilemma $i$ . See Appendix A.10 for the prompt. # 5.2 Experimental Setup All hyperparameters and implementation details are provided in Appendix A.1. Briefly: • Truncated-Normal EM. We initialize $\mu _ { 0 } ( m )$ and $\mu _ { 1 } ( m )$ near 0 and 1, and set initial variances to small positive values. We run EM until the maximum parameter change falls below a threshold $\tau _ { r p }$ or until a fixed iteration limit. • Embedding Optimization. For models showing high deviation from the consensus on a specific theory $\tilde { j }$ , we freeze all but the token embeddings for $\tilde { j }$ and the feedforward layer, training with $\mathrm { l o s s } _ { E }$ . After fine-tuning, we measure changes in reliability parameters and F1 scores. • Models. We evaluate a collection of LLMs, including the LLaMA series (Llama-2-7B-chat, Llama-2-13B-chat, Llama-3.2-3B-Instruct, Llama-3-8B-Instruct) (Touvron et al., 2023; Dubey et al., 2024), the GPT series (GPT-3.5-Turbo, GPT-4o-Mini—a lightweight variant of GPT-4o) (Ouyang et al., 2022; Hurst et al., 2024), Claude-3-Haiku-20240307 (Anthropic, 2024) and Moonshot-v1-8k (Moonshot, 2024).
For brevity, we refer to these models as Llamax-xB, GPT-3.5, GPT-4omini, Claude, and Moonshot.

• Metrics. We report (i) reliability parameters $(\mu_1(m), \sigma_1(m), \mu_0(m), \sigma_0(m))$, reflecting each model's estimated tendency and uncertainty in predicting positive and negative moral judgments, and (ii) F1 scores (%), which quantify the agreement between each LLM's binarized moral judgment and the binarized consensus label $\hat{\phi}_{j,i}$, both derived using the decision rule in Equation 7.

# 5.3 Results

1) Four Basic LLMs. We begin by aggregating annotations from Llama2-13B, GPT-3.5, GPT-4omini, and Claude. Table 2 (Top) shows the original reliability parameters, demonstrating that GPT-4omini has a higher $\mu_1 \approx 0.66$ (indicating stronger confidence for morally acceptable scenarios) with reasonably low variance $\sigma_1 \approx 0.13$. By contrast, Llama2-13B shows a lower $\mu_1 \approx 0.53$, signaling potential underestimation of moral acceptability. Table 3 (Top) presents the F1 scores, demonstrating that GPT-4omini exhibits significantly higher alignment with the collective opinion, whereas Llama2-13B shows the weakest, particularly for the theories of deontology and utilitarianism. Consequently, we focus our optimization efforts on these two theories. After applying embedding optimization to correct theory-level misalignment, we observe that Llama2-13B shifts closer to $\mu_1 \approx 0.55$, reducing the variance $\sigma_1$ and improving F1 scores by up to 21.28% and 8.21% for deontology and utilitarianism, respectively.

Table 2: Reliability Parameters for Four Basic (Top) and Five (Bottom) LLMs. This table presents the mean $(\mu)$ and standard deviation $(\sigma)$ of positive-set (morally acceptable) and negative-set (immoral) annotations for LLMs.
Models with ∗ are post-optimization, while those without are pre-optimization. A higher $\mu_1$ (or a lower $\mu_0$) indicates stronger confidence in labeling scenarios as morally acceptable (or immoral).

2) Adding a New LLM (Moonshot). We then extend the evaluation to five LLMs by including Moonshot. Notably, Llama2-13B remains underperforming across both reliability metrics (Table 2, bottom) and F1 (Table 3, bottom). We also compare different Llama variants against other models based on F1 scores in Figure 3 and observe that smaller Llama models struggle with deontological and utilitarian alignment, underscoring the need to refine these theories. Therefore, we continue optimizing the two least-aligned theories for Llama2-13B. Through optimization, Llama2-13B's $\mu_1$ moves closer to 0.54 while the variances decrease, and its F1 scores improve by 6.04 (deontology) and 5.90 (utilitarianism) points. Llama2-13B exhibits weaker improvement on deontology than in the prior four-LLM setting, likely because the newly added Moonshot reports an F1 score of only 58.90% on deontology, which introduces noise into the consensus.

Table 3: Moral Alignment Measurement Using F1 Score across Four (Top) and Five (Bottom) LLMs. This table presents the alignment between the binarized collective opinion (Equation 7) and each LLM's binarized judgments, inferred using the same thresholding rule. Specifically, $\mathrm{F1}'$ represents the alignment before embedding optimization, while $\mathrm{F1}''$ corresponds to the alignment after optimization. ↑ indicates improvements over $\mathrm{F1}'$. Only the token embeddings of Llama2-13B for deontology and utilitarianism are fine-tuned (in bold), leading to slight adjustments in the collective opinion. Thus, minor variations in $\mathrm{F1}''$ across other theories and LLMs are acceptable.
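To make the table's metric concrete, the F1 alignment between one LLM's continuous scores and the consensus can be sketched as follows. The fixed 0.5 cutoff is a stand-in for the paper's Equation 7 decision rule (not reproduced here), and the score vectors are invented for illustration:

```python
import numpy as np

def binarize(scores, thresh=0.5):
    """Stand-in for the Equation 7 thresholding rule (0.5 cutoff assumed)."""
    return (np.asarray(scores, float) >= thresh).astype(int)

def f1_alignment(model_scores, consensus_scores, thresh=0.5):
    """F1 between an LLM's binarized judgments and the binarized consensus."""
    y_pred = binarize(model_scores, thresh)
    y_true = binarize(consensus_scores, thresh)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

# toy example: five dilemmas scored by one LLM vs. the consensus
model = [0.8, 0.4, 0.9, 0.2, 0.6]
consensus = [0.7, 0.6, 0.3, 0.4, 0.8]
print(round(f1_alignment(model, consensus), 4))  # → 0.6667
```

The same routine applied per theory yields the per-theory F1 columns of Tables 3 and 4.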
Figure 3: Comparison of Four Llama Variants with Other LLMs. LLMs A–E correspond to a specific version of Llama, GPT-3.5, Claude, Moonshot, and GPT-4omini, whereas concepts A′–E′ represent the moral theories of deontology, utilitarianism, commonsense, justice, and virtue. + denotes the LLM holding the highest F1 score for each moral theory, while × marks the lowest. The F1 score is computed using the same metric described in Table 3.

3) Four Llama Variants. Finally, we experiment on a group of Llama variants (Llama2-7B, Llama2-13B, Llama3-3B, and Llama3-8B). Consistent with prior experiments, we focus on optimizing Llama2-7B for deontology and utilitarianism due to their low alignment (Table 4). However, post-optimization F1 scores decline, suggesting a failure to capture meaningful patterns. This can be attributed to the fact that, prior to training, most models already exhibit high uncertainty in their judgments ($\mu_0$ and $\mu_1$ near 0.5 with relatively high variances) and limited agreement with the collective opinion (low F1 scores), indicating a weak consensus signal. Additionally, Figure 3 indicates that Llama variants exhibit noticeable misalignment in deontology and utilitarianism compared with other LLMs, further explaining why the Llama group struggles to form a consistent consensus. These findings highlight that our method is intentionally sensitive to epistemic uncertainty: it does not fabricate a consensus where none exists. This behavior is consistent with real-world moral conflict, where no single aggregation method can force agreement in the absence of shared values.

Table 4: Moral Alignment Measurement across Four Llama Variants. The decline in Llama2-7B's F1 score after optimization can be attributed to the four Llama variants' overall low agreement.

Figure 4: PCA + t-SNE Projection of Deontology-related Token Embeddings.
The term “concept” represents a moral philosophical theory in this figure. ∗[concept]_$i$ represents the moral-theory token trained from the $i$th original token.

# 5.4 Analysis

Inter-Theory Correlations. We compute the Pearson correlation coefficient (Schober et al., 2018) between all five theories based on the aggregated continuous results under the five-LLM setting. The justice/virtue pair exhibits the highest correlation (value $\approx 0.83$), suggesting that they share overlapping decision patterns. In contrast, the deontology/utilitarianism pair shows the weakest (value $\approx 0.55$), consistent with the widely recognized tension between them in hard moral dilemmas (Körner and Deutsch, 2023). See Appendix A.3 for details.

Theory Embedding Projection. To analyze the spatial shifts of trained moral philosophical theory tokens (Chew et al., 2024), we project their embeddings into a lower-dimensional space. For each moral theory, we compute the mean embedding of its corresponding tokens before and after embedding optimization. We then retrieve the top 3,000 tokens most similar (in cosine similarity) to each version. The intersection of these two sets (i.e., tokens that are highly related to both the original and optimized theory embeddings) is referred to as theory-related tokens. From this list, we manually select a small set of interpretable, semantically related tokens (key-related tokens) for display (e.g., “policy,” “law” for deontology).

Figure 5: Impact of Random01 on the Mean-based (Left) and Our (Right) Aggregation Strategy. This figure shows how Random01 impacts the basic LLMs' F1 scores per theory. Each box represents a theory, with the top, middle, and bottom lines showing the highest, mean, and lowest F1 score differences among LLMs.

PCA reduces the dimensionality of the embeddings to 50, followed by t-SNE for 2D projection. In Figure 4 (deontology), key-related tokens form a compact cluster, indicating strong semantic coherence.
∗deontology_0 and ∗deontology_1 remain closely associated with the original tokens, while ∗deontology_2 drifts toward key-related tokens. This suggests that trained tokens tend not only to minimize deviation from their original embeddings but also to align with conceptually relevant tokens. A similar pattern is observed for utilitarianism (see Figure 8 in Appendix A.8).

Comparison with Mean-based Aggregation. A straightforward alternative for opinion aggregation is taking the mean. However, comparing strategies by the change in F1 scores before and after optimization is not reliable if the pre-optimization aggregations differ. Instead, we introduce an unreliable simulated “model”, Random01, which randomly assigns an extreme 0 or 1 to each sample, to assess robustness. The mean-based method assumes equal contributions from all models. When Random01 is added to the four basic LLMs, the LLMs show significantly reduced agreement under mean-based aggregation (Figure 5, left), while Random01 aligns most closely with the aggregated opinion (see Table 7). In contrast, our method remains robust, with minimal impact on the agreement patterns among the four basic LLMs (Figure 5, right) and low F1 scores for Random01 (see Table 8).

Takeaways. Overall, the results confirm that our framework can (a) successfully fuse continuous judgments from multiple LLMs into a coherent consensus when the models do not exhibit substantial differences in moral reasoning, and (b) effectively realign outlier models with the consensus via targeted theory-token embedding optimization.
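The Random01 comparison can be mimicked in a few lines. The inverse-variance weighting below is a deliberately simplified stand-in for the truncated-normal EM reliability estimates, and the synthetic data, seed, and weighting heuristic are our own; still, it reproduces the qualitative finding that equal-weight averaging lets a random annotator distort the consensus while reliability weighting suppresses it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
latent = rng.random(n)  # latent "true" acceptability scores in [0, 1]
# four reasonably reliable annotators: latent score plus small noise
llms = [np.clip(latent + rng.normal(0.0, 0.1, n), 0.0, 1.0) for _ in range(4)]
random01 = rng.integers(0, 2, n).astype(float)  # unreliable 0/1 annotator
anns = llms + [random01]

def reliability_weights(anns):
    """Weight each annotator by the inverse variance of its disagreement with
    the mean of the others (a simplified proxy for the EM reliability params)."""
    w = []
    for i, a in enumerate(anns):
        others = np.mean([b for j, b in enumerate(anns) if j != i], axis=0)
        w.append(1.0 / (np.var(a - others) + 1e-6))
    w = np.asarray(w)
    return w / w.sum()

mean_agg = np.mean(anns, axis=0)                    # equal-weight baseline
w = reliability_weights(anns)
weighted_agg = np.average(anns, axis=0, weights=w)  # reliability-weighted
```

With this setup, Random01 receives by far the smallest weight, so the weighted aggregate tracks the latent scores more closely than the plain mean, mirroring the robustness pattern in Figure 5.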