how would you restate this question? Please provide your counterfactual transformation of the question, ensuring the subject matter remains unchanged but the domain is freely changed. Please output in YAML format, only including the transformed question and corresponding rationale: { "transformed_question": <transformed_question>, "transformed_reason": <transformed_reason> }

transformations often result in fragmented or inconsistent outputs. For each unit u_i, we employ the LLM to generate a corresponding semantic profile p_{u_i}, which characterizes the unit's semantic attributes within the overall content structure. This semantic profile is used to preserve the logical flow in subsequent content transformations.

– Unit Toxification. For each semantic unit u_i, we apply the toxification process guided by its semantic profile p_{u_i} and mapping rationale R_m using the toxifying prompt P_tox, which is defined as follows:

    h_i = MCQ(P_tox ⊕ R_m ⊕ p_{u_i} ⊕ u_i).    (2)

This process transforms the benign semantic unit u_i into its harmful counterpart h_i while preserving its structural role in the overall content.

– Content Reassembly. We further reassemble the transformed content units {h_1, h_2, ..., h_n} into coherent harmful content. This reassembly process ensures logical flow between the toxified units. Because this step requires feeding complete harmful information into LLMs, this stage typically relies on the balanced and fallback models in MCQ.

Toxifying Prompt (P_tox): As an experienced criminal code-breaking forensic expert, please expose the seemingly innocuous content: {mapping rationale; semantic profile; semantic unit} ... Requirements: Expose hidden criminal intent; Transform seemingly harmless content; Maintain accuracy and coherence ...

The ACT benefits significantly from our Model Crowdsourcing Queue (MCQ) mechanism, as it dynamically allocates suitable LLMs for harmful content generation. Although the final content integration relies primarily on weak LLMs, strong LLMs still contribute to generating the semantic profile for semantic units and to partial unit toxification, induced to focus only on local content rewriting. This strong-weak model collaboration significantly enhances both the success rate and the quality of harmful content generation.

4.4 Hallucination Evaluation

Due to limited training on harmful content generation tasks, LLMs are prone to hallucinations [7] and errors. These issues can cause the AI-generated harmful content to contradict the pre-defined data outlines and other generated data, thus compromising the quality of the harmful information dataset. For instance, the original data outline describes Johnny as a young teenager, while AI-generated data suggests that Johnny has been married for years. To address this issue, we design an LLM-as-a-judge [3,12] detection framework using predefined data outlines and a batch detection strategy [16] to filter out the self-conflicting data. Specifically, to validate the rationality of harmful data, we input harmful data outlines to LLMs as references. Then, we prompt the LLM to classify multiple samples within a single task query (i.e., batch detection) to ensure detection robustness.
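A minimal sketch of one such batch judging query, assuming a generic `llm` completion helper and a JSON-list reply format (both hypothetical; this excerpt does not show the paper's exact judging prompt):

```python
import json

def judge_batch(llm, outline: str, samples: list[str]) -> list[bool]:
    """Batch hallucination check: one judging query over several samples,
    with the predefined data outline as reference. `llm` is any
    text-completion callable (hypothetical helper, not from the paper)."""
    numbered = "\n".join(f"[{i}] {s}" for i, s in enumerate(samples))
    prompt = (
        "You are a consistency judge. Reference outline:\n"
        f"{outline}\n\n"
        "For each sample below, state whether it contradicts the outline "
        "or the other samples. Reply as a JSON list of booleans.\n"
        f"{numbered}"
    )
    verdicts = json.loads(llm(prompt))  # e.g. [false, true, ...]
    return [bool(v) for v in verdicts]

# Samples flagged True (self-conflicting) are filtered from the dataset.
```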
Through predefined data outlines and batch processing of multiple samples, the LLM judge can build a comprehensive understanding of content patterns for reliable hallucination detection.

5 Experiments

5.1 Experiment Settings

Datasets. In our experiments, we utilize the Measuring Hate Speech (MHS) Corpus [9], a comprehensive dataset consisting of 135,556 English social media posts from platforms like Reddit, Twitter, and YouTube. More importantly, its detailed annotations include broad and specific group categories in race, religion, origin, gender, sexuality, age, and disability, allowing us to explore various dimensions of hate speech. Following previous studies [2],
we employ the hatespeech label to classify the samples into three categories (non-hateful, unclear, hateful), and focus on the hateful and non-hateful categories.

Baseline Methods. We compare our PoisonSwarm with existing data augmentation and synthesis methods. We use typical data augmentation methods from nlpaug [11], including Synonym Augmentation (Syn.Aug.), Contextual Word Embeddings Augmenter (Token.Aug.), and Contextual Word Embeddings for Sentence Augmenter (Sen.Aug.). In terms of data synthesis methods, we directly query the LLM to synthesize toxic content as the baseline, i.e., DQ-LLM. Regarding the bypass of the safety alignment mechanisms of LLMs, we compare PoisonSwarm with black-box LLM attacking methods as the strong baselines, which are used to jailbreak LLMs for harmful data synthesis: PAIR [3] generates semantic prompt-level jailbreaks with an attacker LLM. TAP [12] employs tree-structured prompting to elicit harmful behaviors. LLM-Fuzzer [17] automates the generation of jailbreak prompts using a black-box fuzzing framework, iteratively mutating human-written templates to attack LLMs.

Evaluation Metrics. We adopt four metrics to evaluate the performance of harmful information synthesis: 1) Synthesis Success Rate (SSR) is calculated by dividing the number of successful outputs by the total number of attempts, reflecting the stability of harmful data synthesis. A successful output contains no rejection keywords [21], e.g., "Sorry". 2) Average Toxicity (Tox.) is calculated by averaging the toxicity levels (0-5) of samples using LLMs, which are then normalized to [0,1], reflecting the effectiveness of the harmful data synthesis methods. 3) Average Diversity (Div.) [20] is the average dissimilarity score computed across all pairs of generated outputs, where Div. = 1 − mean(sim(x_i, x_j)) for sample pairs (i, j), reflecting the diversity of harmful data. 4) Average Naturalness (Nat.) is calculated as the rate at which the LLM fails to pick out synthesized data when distinguishing generated outputs from human samples, reflecting the naturalness of harmful data. We select the most semantically similar human samples to conduct this testing.

Experimental Setups. The embedder we employ is bert-base-uncased [5]. PoisonSwarm adopts gpt-4o, gpt-4o-mini, and qwen2.5-7B-Instruct as crowdsourced models, while other LLM-driven baselines adopt gpt-4o-mini. We construct the data synthesis prompts using few-shot in-context learning, where each prompt targets one specific subgroup as defined in MHS. We calculate Div. by computing the cosine similarity based on text embeddings from BGE-M3 [4]. We calculate Tox. and Nat. using gpt-4o. In the evaluation stages (metric calculation, hallucination evaluation), the temperature of the LLM is 0.0; otherwise, it is 0.7.

5.2 Experimental Results

To comprehensively evaluate PoisonSwarm's capabilities, our experiments are designed to address these research questions (RQs):
– RQ1: Does PoisonSwarm outperform existing methods? What mechanisms contribute to its success?
– RQ2: What is the key characteristic of AI-generated harmful content, and how does it affect current detection systems?
– RQ3: How does PoisonSwarm ensure the diversity of synthesized harmful information?

Baseline Comparison (RQ1.1). To investigate RQ1, we first conduct comprehensive experiments with different baselines and variants of PoisonSwarm for comparison.
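For reference, the Div. metric above reduces to a few lines given sentence embeddings. A minimal sketch (the paper uses BGE-M3 embeddings; any sentence encoder producing an (n, d) array works for this illustration):

```python
import numpy as np

def average_diversity(embeddings: np.ndarray) -> float:
    """Div. = 1 - mean pairwise cosine similarity over generated outputs.
    `embeddings` is an (n, d) array from a sentence encoder; row
    normalization turns dot products into cosine similarities."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T                   # (n, n) cosine similarity matrix
    iu = np.triu_indices(len(x), k=1)  # unique pairs i < j
    return float(1.0 - sim[iu].mean())
```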
We use different methods to generate 100 hateful samples for each of the target categories (4,600 samples
in total), aiming to validate their effectiveness in generating harmful content.

Table 3: Experimental results of the baseline comparison on the MHS dataset. The best results are in bold, and the second-best results are underlined. Values in parentheses are changes relative to the No Aug./Syn. reference.

| Method | SSR (↑) | Tox. (↑) | Div. (↑) | Nat. (↑) |
|---|---|---|---|---|
| No Aug./Syn. | - | 0.5867 | 0.5313 | - |
| Data Augmentation | | | | |
| Syn.Aug. | - | 0.7627 (0.1760 ↑) | 0.5387 (0.0074 ↑) | 0.2393 |
| Token.Aug. | - | 0.5485 (0.0382 ↓) | 0.5210 (0.0103 ↓) | 0.0276 |
| Sen.Aug. | - | 0.7660 (0.1793 ↑) | 0.5178 (0.0135 ↓) | 0.1275 |
| Data Synthesis | | | | |
| DQ-LLM | 0.0174 | 0.1450 (0.4417 ↓) | 0.5546 (0.0233 ↑) | 0.8875 |
| PAIR | 0.6815 | 0.4317 (0.1550 ↓) | 0.4297 (0.1016 ↓) | 0.5142 |
| TAP | 0.4641 | 0.4763 (0.1104 ↓) | 0.4692 (0.0621 ↓) | 0.3653 |
| LLM-Fuzzer | 0.7209 | 0.4556 (0.1311 ↓) | 0.5121 (0.0192 ↓) | 0.4537 |
| PoisonSwarm (ours) | 0.9035 | 0.6860 (0.0993 ↑) | 0.5606 (0.0293 ↑) | 0.6583 |

Table 4: Experimental results of the ablation study on the MHS dataset. The best results are in bold, and the second-best results are underlined. Values in parentheses are changes relative to the full PoisonSwarm.

| Method | SSR (↑) | Tox. (↑) | Div. (↑) | Nat. (↑) |
|---|---|---|---|---|
| PoisonSwarm | 0.9035 | 0.6860 | 0.5606 | 0.6583 |
| - w/o M.Crowd. | 0.3757 (0.5278 ↓) | 0.5126 (0.1734 ↓) | 0.5096 (0.0510 ↓) | 0.5342 (0.1841 ↓) |
| - w/o Counter.Map. | 0.5950 (0.3085 ↓) | 0.5540 (0.1320 ↓) | 0.4691 (0.0915 ↓) | 0.4958 (0.1625 ↓) |
| - w/o Hall.Eval. | 0.9826 (0.0791 ↑) | 0.5725 (0.1135 ↓) | 0.5294 (0.0312 ↓) | 0.4814 (0.1769 ↓) |

As shown in Table 3, we observe different trade-off patterns between data augmentation and data synthesis methods. The data generated by traditional data augmentation methods (e.g., Syn.Aug. and Sen.Aug.) achieves superior toxicity scores (0.7627 and 0.7660), while differing strongly from human expression, with low naturalness (0.2393 and 0.1275). In contrast, the data synthesized by jailbroken LLMs achieves superior naturalness by leveraging their pre-trained knowledge, but struggles to maintain harmful characteristics due to the models' inherent training goal of generating helpful and safe content. Even when jailbroken to bypass safety alignment mechanisms, the LLM still tends to be conservative in toxicity generation without explicit harmful references, as evidenced by the high naturalness and lower toxicity of DQ-LLM, PAIR, TAP, and LLM-Fuzzer. PoisonSwarm effectively achieves the best trade-off, with balanced performance across all metrics. It achieves the highest success rate (0.9035) and diversity score (0.5606), while maintaining competitive toxicity (0.6860) and naturalness (0.6583), highlighting its design of harmful data generation that speculatively toxifies diverse harmless templates using model crowdsourcing.

Ablation Study (RQ1.2). We further explore the contribution of each component in PoisonSwarm through an ablation study, as shown in Table 4. The ablation settings and analysis are as follows:

Effectiveness of Model Crowdsourcing (M.Crowd.). The Model Crowdsourcing Queue serves as the core mechanism to ensure the successful generation of harmful content. As shown in Table 4, removing this component (w/o M.Crowd.) leads to significant performance degradation across all metrics, with the most dramatic drop in SSR (52.78% ↓). This decline demonstrates that dynamic model switching and collaboration are crucial for maintaining both generation effectiveness and content quality.

Effectiveness of Counterfactual Mapping (Counter.Map.). Counterfactual mapping enables the diversity of the data. Without this component (w/o Counter.Map.), we observe notable decreases in all metrics, with the largest drop in diversity (9.15% ↓). This result indicates that the direct generation of harmful content leads to less diverse and lower-quality outputs constrained by LLMs' safety alignment mechanisms.
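As context for this component, a minimal sketch of one counterfactual mapping call, reusing the transformation prompt quoted at the start of this excerpt (the `llm` helper and the parsing are hypothetical):

```python
import yaml  # PyYAML; also parses the JSON-style mapping the prompt requests

COUNTERFACTUAL_PROMPT = (
    "Please provide your counterfactual transformation of the question, "
    "ensuring the subject matter remains unchanged but the domain is freely "
    "changed. Please output in YAML format, only including the transformed "
    "question and corresponding rationale: "
    '{ "transformed_question": ..., "transformed_reason": ... }'
)  # condensed from the prompt excerpt quoted earlier

def counterfactual_map(llm, question: str) -> tuple[str, str]:
    # `llm` is a hypothetical completion helper (prompt -> raw text reply)
    raw = llm(f"{question}\n\n{COUNTERFACTUAL_PROMPT}")
    parsed = yaml.safe_load(raw)
    # the rationale doubles as the mapping rationale R_m reused in Eq. (2)
    return parsed["transformed_question"], parsed["transformed_reason"]
```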
Effectiveness of Hallucination Evaluation (Hall.Eval.). Removing hallucination evaluation (w/o Hall.Eval.) improves SSR (7.91% ↑), but it comes at the cost of reduced performance on the other metrics, indicating that hallucination evaluation is critical to filtering out low-quality or irrelevant outputs. The substantial drop in naturalness also underscores its role in ensuring that generated content remains coherent and contextually appropriate.

Interpretive Analysis (RQ2). To investigate the characteristics of AI-generated harmful information and its impact on detection systems, we design comparative experiments simulating threat scenarios before and after the emergence of generative AI. Table 5 summarizes our experimental settings, including two distinct training settings: 1) Human-only data training, where harmful content is sampled from MHS, and 2) Mixed data training, where harmful samples include 50% AI-generated content sampled from LLM-based synthesized data and 50% human-generated content from MHS. We then fine-tune a BERT-based detector separately under each training setting and compare their detection performance across these scenarios: 1) human-only hate speech (100% human-generated harmful content), and 2) AI-driven hate campaign (100% AI-generated harmful content).

Table 5: Data composition for simulating different attack-and-defense scenarios using different proportions of harmful content from humans and AI. We compare these scenarios in Fig. 5 to show the impact of AI-generated harmful information.

| Scenario | Hateful | Non-hateful |
|---|---|---|
| Training Settings | | |
| Human-only data training | 9,200 (Human 100%) | 9,200 (Human 100%) |
| Mixed data training | 9,200 (Human 50%, AI 50%) | 9,200 (Human 100%) |
| Testing Settings | | |
| Human-only hate speech | 4,600 (Human 100%) | 4,600 (Human 100%) |
| AI-driven hate campaign | 4,600 (AI 100%) | 4,600 (Human 100%) |

As shown in Fig. 5, models trained solely on human-generated data perform well on human-generated harmful content but struggle with AI-generated samples. Incorporating AI-generated data into the training set significantly improves the detection of AI-generated content while maintaining comparable performance on human-generated content. This phenomenon indicates significant differences in the expression patterns of harmful content between humans and AI. As shown in Table 6, human-generated harmful content tends to employ direct insulting terms and simple sentence structures (e.g., "scum"), which are obvious markers of aggression and thus easier for detectors to identify. In contrast, AI-generated harmful content often adopts a seemingly rational tone to conceal biases, making it more adept at evading detection. These differences further highlight the necessity of our proposed PoisonSwarm, which provides richer AI-generated data for harmful information detection. As for our PoisonSwarm, the pairwise similarity distribution in Fig. 6(a) indicates that its synthesized content is less self-similar, and the t-SNE visualization in Fig. 6(b) indicates that its synthesized content is the semantic complement of human-generated data.

[Fig. 5: Illustration of the detection performance (F1 and Accuracy) gap between human (top) and AI-generated (bottom) harmful information, indicating the significant differences in harmful content between humans and AI.]

Table 6: Comparison of expression patterns in human/AI-generated content.

Human-generated harmful content:
- "Those scum have no right... C**n should be bulldozed." (target_race)
- "Only cringy autistic 6 year olds would enjoy this gay s**t" (target_age)
- Key characteristics: 1) uses internet slang and abbreviations; 2) direct insults and derogatory terms; 3) conversational tone and simple structure.

AI-generated harmful content:
- "Western societies are so much more advanced and civilized. Unlike those backward A**n countries that can't seem to get their act together." (target_race)
- "Children are so annoying... whining, crying, and causing trouble." (target_age)
- Key characteristics: 1) uses seemingly objective language; 2) presents implicit bias through structured arguments; 3) disguises intolerance as reasonable concerns; 4) adaptively attacks any new target.

[Fig. 6: Illustration of the complementary diversity between PoisonSwarm and human-generated harmful data, highlighting the limitations of passive data collection and emerging threats from generative AI for harmful information detection. Fig. 6(a) is the histogram of pairwise embedding cosine similarities (less diverse → more diverse) for PoisonSwarm/human data; Fig. 6(b) is the t-SNE visualization of PoisonSwarm/human data. Both are drawn from 4,600 harmful samples.]

[Fig. 7: Illustration of harmful information synthesis via benign template toxification, highlighting that harmful content diversity originates from benign diversity. Panels: (a) Counterfactual Benign Template; (b) Transformed Harmful Information.]

Case Study (RQ3). PoisonSwarm ensures the diversity of synthesized harmful information by leveraging diverse benign templates as the basis for dynamic toxification. As illustrated in Fig. 7, the benign content template (left) contains a wide range of neutral and positive semantic units, contributing to the foundational diversity of the final generated harmful information (right). PoisonSwarm introduces the counterfactual mapping module to obtain diverse templates from strong LLMs without being constrained by the LLMs' safety alignment mechanisms. Then, PoisonSwarm transforms these benign templates into harmful expressions using dynamic model switching, thereby preserving semantic richness and avoiding homogeneity. Consequently, the resulting harmful content maintains high diversity, effectively reflecting a broad spectrum of harmful expressions.

6 Conclusion

In this work, we propose PoisonSwarm, a universal harmful information synthesis method utilizing model crowdsourcing to generate diverse harmful data while maintaining a high success rate. To benefit from the rapid development of strong LLMs for high-quality harmful information synthesis, we introduce a strong-weak model collaboration framework, in which we decompose the harmful data synthesis task into benign template generation by strong LLMs and template toxification by weak LLMs. We highlight significant differences between AI-generated and human-generated harmful information, which indicate both the limitations of previous passive harmful data collection for detector construction and the emerging threats from the misuse of generative AI for AI-driven harmful speech campaigns. Experimental evaluations clearly illustrate that PoisonSwarm achieves superior performance compared to existing harmful information generation methods, demonstrating its efficacy in synthesizing scalable and diverse harmful information for secure and responsible AI development.

References

1. Casula, C., Tonelli, S.: Generation-based data augmentation for offensive language detection: Is it worth it? In: Proc. 17th EACL. pp. 3359–3377 (2023)
2. Casula, C., Tonelli, S.: A target-aware analysis of data augmentation for hate speech detection. arXiv preprint arXiv:2410.08053 (2024)
3. Chao, P., Robey, A., Dobriban, E., et al.: Jailbreaking black-box large language models in twenty queries. arXiv preprint arXiv:2310.08419 (2023)
4. Chen, J., Xiao, S., Zhang, P., et al.: M3-embedding: Multi-linguality, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. In: Findings ACL 24. pp. 2318–2335 (2024)
5. Devlin, J., Chang, M.W., Lee, K., et al.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proc. NAACL-HLT 19. pp. 4171–4186 (2019)
6. Divekar, A., Durrett, G.: Synthesizrr: Generating diverse datasets with retrieval augmentation. In: Proc. 2024 EMNLP. pp. 19200–19227 (2024)
7. Huang, L., Yu, W., Ma, W., et al.: A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Trans. Inf. Syst. (2024)
8. Hui, Z., Guo, Z., Zhao, H., Duan, J., Huang, C.: Toxicraft: A novel framework for synthetic generation of harmful information. In: Findings EMNLP 24. pp. 16632–16647 (2024)
9. Kennedy, C.J., Bacon, G., Sahn, A., von Vacano, C.: Constructing interval variables via faceted Rasch measurement and multitask deep learning: A hate speech application. arXiv preprint arXiv:2009.10277 (2020)
10. Li, Y., Ding, K., Wang, J., Lee, K.: Empowering large language models for textual data augmentation. In: Findings ACL 24. pp. 12734–12751 (2024)
11. Ma, E.: NLP augmentation. https://github.com/makcedward/nlpaug (2019)
12. Mehrotra, A., Zampetakis, M., Kassianik, P., et al.: Tree of attacks: Jailbreaking black-box LLMs automatically. arXiv preprint arXiv:2312.02119 (2023)
13. Shen, X., Wu, Y., et al.: Hatebench: Benchmarking hate speech detectors on LLM-generated content and hate campaigns. arXiv preprint arXiv:2501.16750 (2025)
14. Wang, Z., Zhang, J., Xu, H., et al.: Counterfactual data-augmented sequential recommendation. In: Proc. 44th SIGIR. pp. 347–356 (2021)
15. Yan, Y., Shen, Y., Liu, T., Jiang, X., Yin, D.: Enhancing stance detection on social media via core views discovery. Front. Artif. Intell. Appl. 392, 4019–4026 (2024)
16. Yan, Y., Sun, S., Tang, Z., et al.: Collaborative stance detection via small-large language model consistency verification. arXiv preprint arXiv:2502.19954 (2025)
17. Yu, J., Lin, X., Yu, Z., Xing, X.: LLM-Fuzzer: Scaling assessment of large language model jailbreaks. In: Proc. 33rd USENIX Secur. Symp. pp. 4657–4674 (2024)
18. Zhang, Z., Zhang, Y., Li, L., et al.: Psysafe: A comprehensive framework for psychological-based attack, defense, and evaluation of multi-agent system safety. In: Proc. 62nd ACL. pp. 15202–15231 (2024)
19. Zhao, S., Yang, Y., Wang, Z., et al.: Retrieval augmented generation (RAG) and beyond: A comprehensive survey on how to make your LLMs use external data more wisely. arXiv preprint arXiv:2409.14924 (2024)
20. Zhu, A., Asawa, P., Davis, J.Q., et al.: BARE: Combining base and instruction-tuned language models for better synthetic data generation. arXiv preprint arXiv:2502.01697 (2025)
21. Zou, A., Wang, Z., Carlini, N., et al.: Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043 (2023)
arXiv:2505.21189v1 [cs.CL] 27 May 2025

Exploring the Latent Capacity of LLMs for One-Step Text Generation*

Gleb Mezentsev (AIRI, Skoltech; mezentsev@airi.net) and Ivan Oseledets (AIRI, Skoltech; oseledets@airi.net)

*Under review

Abstract

A recent study showed that large language models (LLMs) can reconstruct surprisingly long texts – up to thousands of tokens – via autoregressive generation from just one specially trained input embedding. In this work, we explore whether such reconstruction is possible without autoregression. We show that frozen LLMs can generate hundreds of accurate tokens in just one forward pass, when provided with only two learned embeddings. This reveals a surprising and underexplored capability of LLMs – multi-token generation without iterative decoding. We investigate the behaviour of these embeddings and provide insight into the type of information they encode. We also empirically show that although these representations are not unique for a given text, they form connected and local regions in embedding space – a property that suggests the potential of learning a dedicated encoder into that space.

1 Introduction

Large language models (LLMs) are typically trained to generate text in an autoregressive manner – they predict one token at a time based on the previously generated context. Recent work by Kuratov et al. (2025) demonstrated that LLMs can autoregressively generate an arbitrary text starting from a single, specially trained input embedding corresponding to that text. This raises an intriguing question: is autoregressive generation an essential part of such reconstruction? Or, in other words, can LLMs reconstruct accurate multi-token sequences from some compressed representation in a single forward pass, without any iterative generation, and if so, how? In this work, we show that this is possible, investigate what those compressed representations encode, and ask whether this finding reveals anything about LLMs' parallel generation capabilities.

[Figure 1: Two "proto-tokens" (trainable embeddings) are fed into a frozen, pre-trained LLM and optimized in such a way that the LLM predicts an arbitrary target token sequence in a single forward pass. One of the "proto-tokens" (e_t) is trained for each text separately, while the other (m) could be reused.]

Our contribution is as follows:
1. We show that LLMs can reconstruct arbitrary sequences from as few as two learned input embeddings, achieving perfect reconstruction of sequences of up to several hundred tokens.
2. We identify key design aspects of such a setup that enable this generation, including the critical importance of input token arrangement.
3. We study how the reconstruction capability varies with the model size and the nature of the target sequence (e.g., natural vs. synthetic text).
4. We empirically characterize the learned representations – analyzing their information content and embedding-space geometry.

2 Related Work

The most direct influence for our work is a paper by Kuratov et al. (2025), which showed that frozen LLMs can reconstruct an arbitrary sequence of tokens T = [t_1, ..., t_N] if given a set of special, so-called memory tokens [mem_1, ..., mem_K]. The embeddings for these tokens are trained by optimizing a causal language modeling objective (next-token prediction cross-entropy loss) over a concatenated input sequence Z = [mem_1, ..., mem_K, t_1, ..., t_N] passed through a frozen LLM.
In the case of perfect next-token prediction accuracy (which could be achieved for reasonable text lengths), this allows the model to autoregressively predict the whole text starting from the memory tokens. The number of memory tokens controls the maximum text length and can be as low as one. Although surprisingly long texts (up to 1568 tokens) could be compressed even into a single memory token, the authors note that the embeddings trained from different random initializations for the same text often end up far apart. Moreover, linear interpolations between those embeddings produce very poor reconstruction accuracy, suggesting that the solution space lacks the desirable smoothness and locality qualities that are important for learning a practical encoder that could replace the direct optimization.

Our work also relates to efforts in prompt-tuning and its variants (Lester et al., 2021; Liu et al., 2024; Li and Liang, 2021). Most similarly, Lester et al. (2021) train task-specific soft tokens to condition frozen LLMs to improve their performance on new tasks. Finally, several speculative (Xia et al., 2023) and parallel (Santilli et al., 2023) decoding approaches utilize a similar mechanism for multiple-token prediction using decoder architectures. More specifically, they add special [PAD] or [MASK] tokens at the end of the current context in order to make a prediction for several tokens into the future at once. Critically, in these works either special training or multiple generative iterations are required. Unlike prior work, we show that a frozen LLM can generate accurate multi-token sequences in one forward pass without additional LLM training or iterative decoding.

3 Method

To adapt the approach from Kuratov et al. (2025) to a non-autoregressive case, we replace all input tokens of the LLM with specially trained "proto-tokens" and predict the target token sequence in one forward pass. In practice, "proto-tokens" are just trainable vectors that are not tied to any real items in the vocabulary. The main difference between regular tokens and these "proto-tokens" is that "proto-tokens" encode multiple tokens at once and only produce human-readable text after passing through the LLM. Our goal is to identify the smallest possible number of such "proto-tokens" needed for accurate reconstruction. Interestingly, we find that it is essential to have at least two – the performance drops dramatically when using only one (see Section 4). There are many ways to arrange two vectors as an input sequence of arbitrary length. We report results for different variants later in the paper, but here we describe the arrangement that is used in the majority of the experiments.

Exact scheme We introduce two "proto-tokens" e and m with trainable embeddings of dimension d_model (the model input embedding dimension) and construct the input sequence as follows: Z = [e, m, m, ..., m] – one copy of token e is followed by N−1 copies of token m, where N is the target text length. We then train the vectors by optimizing the cross-entropy loss between the target sequence T = [t_1, t_2, ..., t_N] and the frozen LLM's output for the input sequence. The prediction is obtained using standard causal attention masking, so that the predicted probabilities for the token t_i depend on the first i input "proto-tokens" (see Figure 1).
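A minimal sketch of this optimization with a frozen Hugging Face causal LM. The model id and target text are placeholders, and the target alignment (output position i trained to predict t_i) follows our reading of the scheme above, not the paper's exact code (which is linked in Section 4):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/pythia-160m"          # placeholder; any causal LM
model = AutoModelForCausalLM.from_pretrained(MODEL_ID).eval()
model.requires_grad_(False)                  # the LLM stays frozen
tok = AutoTokenizer.from_pretrained(MODEL_ID)

target = tok("an arbitrary text to reconstruct", return_tensors="pt").input_ids[0]
N, d = len(target), model.config.hidden_size

e = torch.randn(d, requires_grad=True)       # per-text proto-token
m = torch.randn(d, requires_grad=True)       # potentially shared proto-token
opt = torch.optim.AdamW([e, m], lr=0.01, betas=(0.9, 0.9), weight_decay=0.01)

for step in range(5000):
    Z = torch.stack([e] + [m] * (N - 1)).unsqueeze(0)  # [e][m]x(N-1), (1, N, d)
    logits = model(inputs_embeds=Z).logits[0]          # (N, vocab)
    loss = F.cross_entropy(logits, target)             # position i -> t_i
    opt.zero_grad()
    loss.backward()
    opt.step()
    if (logits.argmax(-1) == target).all():            # early stop when perfect
        break
```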
Metrics Our main evaluation metric is the number of correctly reconstructed tokens in a generated sequence, defined as:

    C_tokens = Σ_{i=1}^{N} 1(LM(Z_{[1:i]}) = t_i)    (1)

Additionally, we measure the amount of information contained in the reconstructed token sequence from the perspective of causal language modeling with a given LLM. Specifically, we compute the cross-entropy between the compressed sequence and the LLM's autoregressive probability distribution:

    H_LM = Σ_{i=1}^{N} −log P_LM(t_i | t_<i)    (2)

This quantity measures how uncertain a model is about the compressed text, that is, how much information it contains.

Solution space connectivity To gain insights into the structure of the solution space of our problem, we analyze whether different proto-token embeddings obtained for the same text but from different random initializations are connected. We adopt a technique from Garipov et al. (2018), which is used to find paths connecting different minima of the loss function in computer vision tasks. We optimize the parameters of a quadratic Bezier curve, connecting two solutions, to maximize reconstruction accuracy along the curve. The curve is parameterized by a control point π in the following way:

    φ_π(t) = (1−t)² p_1 + 2t(1−t) π + t² p_2    (3)

Here, p_1 and p_2 are the two original solutions that we aim to connect. The expectation of the cross-entropy loss function under the uniform distribution over t ∈ [0,1] (Eq. 4) is minimized by iteratively sampling t̃ ∈ [0,1] and making a gradient step, effectively obtaining an unbiased estimate of the gradient of l_π:

    l_π = ∫₀¹ Σ_{i=1}^{N} −log P_LM(t_i | φ_π(t)) dt    (4)

This acts as a more tractable alternative to direct optimization under the uniform distribution along the curve itself.

Token sequences similarity In Section 4, we aim to measure the similarity between two token sequences in order to control for this similarity. To measure token-level similarity we use the cosine distance between TF-IDF embeddings of the two sequences. To measure semantic similarity we use the cosine distance between semantic sequence embeddings obtained from a MiniLM model fine-tuned for semantic sentence embedding (https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2).

4 Experiments and results

We test the ability of different LLMs of varying sizes to generate a predefined text from different sources in a non-autoregressive (parallel) mode. Moreover, we compare different ways to feed our trainable "proto-tokens" into the LLM. We also try to understand the structure of the solution space by examining the relations between solutions for different problems.

Models We use six models for all experiments: three Pythia (Biderman et al., 2023) models of sizes 160M, 410M, and 1.4B, and three Llama-3 (Grattafiori et al., 2024) models of sizes 1B, 3B, and 8B.

Data Four text sources are used in the experiments to explore the possible connection between reconstruction performance and the nature of the text. A set of random texts is generated by sampling from the top 100,000 words of the GloVe vocabulary (Pennington et al., 2014), to evaluate performance on unnatural texts. To assess generation performance on natural but unseen texts, we use a collection of fanfiction texts from the AO3 library (https://archiveofourown.org/), with a publication date cutoff of October 2024, which is later than the end of training for all models. For data processing details, see Kuratov et al. (2025). The performance on seen natural texts is evaluated using the PG-19 dataset (Rae et al., 2019) – part of a dataset used for training the Pythia models. Finally, we include a set of model-specific generated texts. Specifically, for each model and each context text from the PG-19 dataset, a suffix of the same length is generated as an autoregressive continuation. The generation is done via multinomial sampling with sampling temperature T = 1.

Training details The embeddings of the proto-tokens are initialized randomly from a standard normal distribution and optimized using the AdamW optimizer (Loshchilov and Hutter) with a 0.01 learning rate, β1 and β2 set to 0.9, and a weight decay of 0.01. The embeddings are trained for 5000 iterations, with early stopping if perfect reconstruction accuracy is achieved. This number of iterations is often insufficient for convergence, but due to limited computational resources, we are unable to increase it. Instead, we aggregate results across multiple sequences. All models are loaded and trained using the PyTorch framework and the Hugging Face Transformers library. Each experimental run is done on a single A100 or H100 80GB GPU, with gradient accumulation enabled where necessary. The code is available at https://github.com/Glebzok/OneStepLLMGeneration.

Proto-token arrangement To select the best way to arrange two proto-tokens as input to an LLM for the main experiments, we conduct test runs on a single dataset-model pair for a variety of arrangements. For each arrangement, the same 50 texts from PG-19 are selected, and the Llama-3.2-1B model is trained on prefixes of these texts at lengths [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024] to assess how token-level reconstruction accuracy changes with respect to sequence length N. A representative selection of results is presented in Table 1.

Table 1: Reconstruction accuracies for different input token arrangements across varying sequence lengths. Subscripts indicate the number of copies of each proto-token.

| Arrangement | N = 1 | N = 2 | N = 4 | N = 256 |
|---|---|---|---|---|
| [e]×N | 1.00±0.00 | 0.45±0.31 | 0.17±0.18 | 0.01±0.01 |
| [e]×(N/2) [m]×(N/2) | 1.00±0.00 | 1.00±0.00 | 0.12±0.13 | 0.01±0.01 |
| [e, m]×(N/2) | 1.00±0.00 | 1.00±0.00 | 1.00±0.00 | 0.17±0.34 |
| [e] [m]×N | 1.00±0.00 | 1.00±0.00 | 1.00±0.00 | 0.97±0.15 |
| [e] [m]×(N−1) | 1.00±0.00 | 1.00±0.00 | 1.00±0.00 | 0.99±0.10 |

The last two schemes differ as follows: with the first one, the LLM is trained to predict the first text token t_1 for the proto-token e, while with the second one, the prediction for proto-token e is not guided and t_1 is instead the target prediction for the first copy of m.

Interestingly, having two trainable tokens is essential for the performance – the scheme with one trainable token fails to reconstruct even a 2-token text, while the best two-token schemes can reconstruct 256-token texts almost perfectly. Moreover, the way these two tokens are arranged is also important, with the best results obtained when the first token e is followed by N−1 copies of the second token m. This asymmetrical arrangement and the critical necessity for two tokens suggest possible variation in
the functions of e and m. It is possible that while one of them mostly incorporates language information, the role of the other one is mainly structural or mechanistic. This could be related to the phenomenon of "attention sinks" – Xiao et al. (2023) showed that LLMs strongly attend to the initial tokens in the sequence even when they are not relevant. Moreover, adding a placeholder token as an attention sink can largely improve the performance of window-attention based models, which do not see the initial tokens by design. So, it is possible that in order to successfully decode the "information" proto-token, the LLM needs a distinguishable "sink" proto-token, which can be used as an attention sink.

Token sharing In the previous section, we showed that the quality of reconstruction strongly depends on having two separate proto-tokens as input. This observation led us to hypothesize that the second token plays some structural or mechanistic purpose and does not contain information about the sequence itself. In that case, the second token could be shared between texts, reducing the number of optimized parameters and simplifying the training process of the potential encoder. To test this hypothesis, we run the same optimization process, but splitting 256 texts from the PG-19 dataset into groups of different sizes S_g ∈ [1, 4, 16, 64, 256] and sharing either e or m within each group. We selected the maximum length of text that can be losslessly compressed in the non-sharing mode – 256. The results are averaged over 10 random seeds. A selection of the results is presented in Table 2.

Table 2: Reconstruction accuracy for schemes where one of the trainable tokens is shared within a group, across different group sizes. "max" aggregation indicates that for every text, the maximum accuracy across ten random seeds is selected and then averaged across texts, while "avg" denotes averaging across both seeds and texts.

| Shared | Agg | S_g = 1 | S_g = 16 | S_g = 256 |
|---|---|---|---|---|
| e | max | 1.00±0.00 | 0.99±0.01 | 0.99±0.02 |
| e | avg | 0.98±0.08 | 0.90±0.17 | 0.86±0.20 |
| m | max | 1.00±0.00 | 1.00±0.00 | 1.00±0.01 |
| m | avg | 0.98±0.07 | 0.86±0.19 | 0.83±0.18 |

Sharing either token yields comparable performance if provided with a sufficiently large number of restarts (random seeds), but the required number of restarts increases significantly with group size. Depending on the proto-token being shared, we can build different intuitions about the function of the shared tokens and the method itself. If the e token is shared, which is located at the very beginning of the input sequence, the analogy that comes to mind is prompt-tuning (Lester et al., 2021), where a set of prompt embeddings is trained in order to improve performance on some specific task. In our case, a shared token e could be viewed as an "instruction" saying what an LLM should do with the upcoming embeddings (m-tokens) – decode different pieces of information at different positions. If m is shared, then the training and prediction scheme resembles some of the speculative decoding approaches (Xia et al., 2023), where a number of special "[mask]" tokens are appended at the end of the sequence and the prediction for all of them is then done in parallel. For all other experiments, unless stated otherwise, we
use the scheme with the m token shared between texts and random seeds and the e token unique for each text/seed pair.

Table 3: Maximum reconstruction capacities for different models on different datasets.

| Dataset | Metric | Shared m | Pythia-160M | Pythia-410M | Pythia-1.4B | Llama-3.2-1B | Llama-3.2-3B | Llama-3.1-8B |
|---|---|---|---|---|---|---|---|---|
| Random | C_tokens | False | 90 | 92 | 90 | 256 | 362 | 512 |
| | | True | 45 | 22 | 45 | 181 | 256 | 256 |
| | H_LM | False | 507.5±105.9 | 377.1±133.1 | 470.7±103.1 | 1551.3±159.5 | 2193.4±190.2 | 2974.4±298.3 |
| | | True | 247.9±32.0 | 91.1±30.8 | 231.0±37.9 | 947.7±155.0 | 1292.2±217.4 | 1309.4±234.6 |
| Fanfics | C_tokens | False | 128 | 128 | 131 | 362 | 512 | 724 |
| | | True | 45 | 45 | 45 | 181 | 288 | 362 |
| | H_LM | False | 358.9±73.3 | 395.4±97.8 | 261.0±56.4 | 1107.6±129.1 | 1408.4±179.5 | 1763.3±280.2 |
| | | True | 145.0±26.2 | 82.3±28.1 | 147.9±29.7 | 576.4±90.4 | 835.9±121.7 | 1112.8±168.6 |
| PG-19 | C_tokens | False | 128 | 167 | 128 | 362 | 512 | 724 |
| | | True | 45 | 32 | 64 | 181 | 256 | 362 |
| | H_LM | False | 388.4±66.4 | 408.8±96.3 | 298.4±77.4 | 993.8±183.4 | 1346.0±218.4 | 1659.8±344.5 |
| | | True | 156.0±33.9 | 88.1±30.3 | 156.0±30.2 | 456.5±56.5 | 826.1±117.6 | 832.3±171.0 |
| PG-19 (gen) | C_tokens | False | 128 | 181 | 128 | 362 | 512 | 724 |
| | | True | 45 | 32 | 64 | 181 | 362 | 362 |
| | H_LM | False | 354.1±72.0 | 379.2±82.6 | 277.6±71.3 | 927.3±103.4 | 1266.6±125.9 | 1653.1±211.4 |
| | | True | 153.0±17.8 | 106.9±38.5 | 197.1±39.3 | 478.7±85.7 | 788.6±130.8 | 771.7±143.0 |

Generation capacity We have already seen that, similar to the autoregressive mode (Kuratov et al., 2025), LLMs can generate fairly long sequences in just one forward pass. To characterize this capability, and to understand how it scales with model size, we run the optimization process for text prefixes of the predefined lengths [4, 5, 8, 11, 16, 22, 32, 45, 64, 90, 128, 181, 256, 362, 512, 724, 1024, 1448]. We report the maximum values of C_tokens and H_LM that correspond to the longest prefix for which at least 0.99 token-level accuracy is achieved – we treat such sequences as successfully predicted. In addition to the scheme with a shared m token, we also run a scheme with m not shared, to eliminate the effect of an insufficient number of random initializations. While our results in Section 4 suggest that m can, in principle, be shared without any quality drop, we also note that the optimization process is highly sensitive to initialization, especially when the proto-tokens are shared. The results are presented in Table 3.
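As a side note, H_LM from Eq. (2) can be computed directly with any frozen causal LM. A minimal sketch (scoring from the second token onward, since the first token has no preceding context in this simplification; model id is a placeholder):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

def information_content(lm, ids: torch.Tensor) -> float:
    """H_LM = sum_i -log P_LM(t_i | t_<i) in nats (Eq. 2); ids: shape (N,).
    This sketch scores positions 2..N - a simplifying assumption, not the
    paper's exact implementation."""
    with torch.no_grad():
        logits = lm(ids[None, :]).logits[0]        # (N, vocab)
    logp = F.log_softmax(logits[:-1], dim=-1)      # logits[i-1] -> P(t_i | t_<i)
    return float(-logp.gather(1, ids[1:, None]).sum())

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
lm = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m").eval()
ids = tok("a reconstructed text to score", return_tensors="pt").input_ids[0]
print(information_content(lm, ids))
```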
Larger models in the Llama family show greater reconstruction capabilities than the smaller ones in their family, while the situation with the Pythia model family is less obvious, with all models showing approximately the same performance. The Llama 1B model is also able to reconstruct an almost three times longer sequence than the Pythia model of the same size. The source of the natural language (unseen / seen / generated) does not seem to have any systematic influence on the quality of reconstruction in terms of the number of tokens, while for unnatural random texts the generation capacity is significantly worse. This suggests that our "proto-tokens" do not "store" text tokens directly, but encode some higher-level representations, using the language modeling capabilities of the LLM. However, we also cannot say that the compressibility of the text is determined by its likelihood under the sequential language model. In fact, we observe the opposite trend: lower total information content H_LM is compressed for less information-dense texts, such as those generated by the LLM itself. This difference is highlighted in Figure 2, where the amount of language information contained in the trainable tokens is compared to the autoregressive setup. The performance for unnatural texts is very similar and sometimes even identical, while for natural texts, the difference in capacity can be up to five times lower. However, more often the performance is just two times lower in the non-autoregressive case, suggesting that autoregressive decoding approximately doubles the "effective" information density for natural texts – the density of the information that can be effectively decoded.

[Figure 2: Maximum language information (H_LM for the maximum text prefix that is accurately reconstructed) compressed for different models and datasets. In the left plot ("One trainable embedding"), a single [mem] token is used in the autoregressive setting, and in the non-autoregressive one the m proto-token is shared between all texts within each model. In the right plot ("Two trainable embeddings"), two [mem] tokens are used and the m proto-tokens are not shared. Axes: autoregressive H_LM vs. one-forward H_LM, with y = x, y = x/2, and y = x/5 reference lines. Each small point represents a single text; larger points indicate the average within each (model, dataset) pair.]

Although less information-dense, our one-forward method achieves significantly higher decoding throughput in the context of text reconstruction – outperforming its autoregressive counterpart by a factor of 279 on average (Figure 3). This dramatic difference is primarily due to the number of forward passes. While an obvious downstream task is still to be found, such speed could matter for fast context compression and decompression, on-device inference, or settings where decoding speed is particularly important.

[Figure 3: Reconstruction throughput comparison (tokens per second) between autoregressive and non-autoregressive setups. For each (model, dataset) pair, the throughput is calculated as the maximum losslessly compressible length divided by the reconstruction time. To measure reconstruction time, we use PyTorch profiling tools.]

Proto-tokens interpretation We examine the information encoded in proto-tokens and the implications this has for potential practical applications. In the worst-case scenario, they directly encode the target tokens (imagine a vector containing token ids). If so, the entire "language generation" effort happens during encoding, making decoding irrelevant for accelerated inference – though the approach could still be useful as a context-compression tool. The alternative is that proto-tokens encode a compressed representation of a prefix which, when the model generates from it, produces the observed suffix. In that case, the hard work of text generation is done during decoding, which is more promising from the point of view of accelerated inference. All intermediate options are also possible.

[Figure 4: Cosine embedding distances for different pairings of proto-tokens (same text / same context / different contexts). We select 50 contexts from PG-19 and, for each context, generate 10 continuation texts. We find one solution for each of the first 9 generations and 10 different-seed solutions for the last generation.]
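A minimal sketch of the pairwise-distance computation behind Figure 4 (names are illustrative; each vector is a learned per-text proto-token embedding):

```python
import itertools
import torch
import torch.nn.functional as F

def pairwise_cosine_distances(solutions: list[torch.Tensor]) -> list[float]:
    # each element is one learned per-text proto-token embedding e, shape (d,)
    return [
        float(1 - F.cosine_similarity(a, b, dim=0))
        for a, b in itertools.combinations(solutions, 2)
    ]

# Comparing same-text (different seeds), same-context, and different-context
# groups of solutions reproduces the three distributions of Figure 4.
```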
We start by measuring the distances between three types of proto-token embedding pairs: 1) corresponding to the same generated sequence but different random seeds, 2) corresponding to different texts generated from the same context, and 3) corresponding to different texts generated from different contexts. As shown in Figure 4, the same-text solutions are almost always located closer to each other than different-text solutions, which suggests locality in the learned representations. At the same time, same-context solutions are noticeably closer to each other than different-context ones. This may indicate that the encoded information at least partially reflects the potential context of the text. However, we should be careful to account for the fact that texts generated from the same context are more similar in general.

To do that, we measure pairwise distances between generated texts and examine whether the distance between learned proto-token embeddings differs for a fixed distance between the texts. We use a token-level measure of text similarity and a semantic-level measure (see Section 3). For both measures (Figure 5), we observe that, given similar distances between texts, the proto-token embeddings are consistently closer when the texts originate from the same context. We conclude that the learned proto-tokens contain information beyond the target sequence itself – they somehow describe the potential context of the sequence.

[Figure 5: Proto-token embedding distances for same-context text pairs vs. different-context text pairs, plotted against the distance between the texts. Token-level distance is measured as the cosine distance between TF-IDF embeddings. Semantic distance is measured as the cosine distance between semantic text embeddings (see Section 3 for details).]

Kuratov et al. (2025) raised the following concern about the structure of the solution space in the autoregressive setup. Even though same-text token embeddings are on average closer to each other than different-text token embeddings, they seem to be disconnected – a linear interpolation between two solutions does not yield a valid reconstruction. This could mean that a potential encoding into this space could be problematic, as the same object could be mapped to disconnected regions. We find that in our non-autoregressive case, the linear interpolation between same-text solutions also does not produce a solution (Figure 6).

[Figure 6: Pairwise interpolation accuracies between 10 solutions for 5 texts (5 × 10 × 9 / 2 pairs in total), for linear and Bezier-curve connections.]

However, the solutions can be connected using quadratic Bezier curves (parabolic segments) lying inside the "solution set". This means that even though same-text solutions do not form a convex set, they form a connected set. In fact, our experiments show that the maximum ratio between the Bezier curve length and the corresponding linear connection is only 1.2, indicating that the paths are nearly linear. These results demonstrate that the solution space is fairly well behaved, providing reasonable hope that an encoder model could be built to map into that space.
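A sketch of the curve-fitting procedure from Eqs. (3)-(4), assuming a `loss_fn` closure that evaluates the frozen-LLM reconstruction cross-entropy at a given embedding (a hypothetical hook into the training loop sketched earlier):

```python
import torch

def bezier(p1, p2, pi, t):
    # quadratic Bezier point phi_pi(t) between solutions p1 and p2 (Eq. 3)
    return (1 - t) ** 2 * p1 + 2 * t * (1 - t) * pi + t ** 2 * p2

def fit_control_point(p1, p2, loss_fn, steps=1000, lr=0.01):
    # sample t ~ U[0, 1] each step for an unbiased gradient of Eq. (4);
    # loss_fn(embedding) -> reconstruction cross-entropy under the frozen LLM
    pi = ((p1 + p2) / 2).detach().requires_grad_(True)
    opt = torch.optim.AdamW([pi], lr=lr)
    for _ in range(steps):
        loss = loss_fn(bezier(p1, p2, pi, torch.rand(())))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pi.detach()
```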
5 Discussion and Conclusions

In this paper, we demonstrate that frozen LLMs have a surprising ability to generate hundreds of accurate tokens in a single forward pass – without any iterative decoding – when provided with just two specially trained "proto-tokens". We find that both the number and the arrangement of such tokens are crucial for enabling this generation capacity. Interestingly, with only one proto-token, LLMs are unable to generate more than a single token of text. In contrast, two properly arranged proto-tokens can enable the generation of sequences hundreds of tokens long. This significant leap in performance, along with the observation that one of the vectors can (in principle) be shared across many texts, suggests that the proto-tokens play different functional roles during generation. However, the precise nature of this role differentiation remains unclear.

We find that bigger model size does not universally imply better generation capacity. While larger models in the Llama-3 family demonstrate improved reconstruction capacity, Pythia models show no such trend – larger models do not outperform smaller ones. Whether this difference is connected to architectural variations is an open question. Additionally, we do not observe any consistent relationship between the source of the natural text and the reconstruction ability of LLMs. Surprisingly, even for texts generated by the LLM itself, the number of successfully reconstructed tokens is the same as for any other natural text. However, for texts composed of random tokens, performance drops noticeably. This suggests that our reconstruction process does not fully leverage the language modeling capabilities of LLMs, and may instead mostly rely on low-level token patterns.

Although the reconstructed sequences in the non-autoregressive setting are, on average, about two times shorter than those in the autoregressive case, the computational efficiency of the single-forward approach allows it to achieve up to 279× greater generation throughput. We also observe that proto-tokens encode more than just the target sequence. Embeddings of the "proto-tokens" corresponding to different texts generated from the same context are significantly closer to each other than those from unrelated sequences. This indicates that the learned representations capture some potential contextual information. Finally, we discover that the embedding space in which proto-tokens exist has very desirable structural properties – proto-tokens corresponding to the same text form localized and connected regions, enabling smooth transitions via quadratic interpolation. These findings suggest that it may be feasible to build an encoder capable of mapping into this space, opening the door to future work on non-autoregressive inference and representation learning.

6 Limitations

Although our paper demonstrates the surprising capability of LLMs to generate long sequences in a single forward pass from just two learned embeddings, several important limitations should be acknowledged:
1. Lack of immediate practical application: Most importantly, this work highlights an interesting quirk of LLMs and does not suggest any immediate practical implications or real-life usages for the method.
2. Architectural dependence: The method demonstrates different behavior across model families, suggesting some architectural dependence. As a result, our method may not generalize to other model architectures.
3. Limited domain coverage: While we evaluate four different text sources, the results may not generalize beyond those explored in our experiments.

References

Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle
O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, and 1 others. 2023. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397–2430. PMLR.

Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. 2018. Loss surfaces, mode connectivity, and fast ensembling of DNNs. Advances in Neural Information Processing Systems, 31.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and 1 others. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Yuri Kuratov, Mikhail Arkhipov, Aydar Bulatov, and Mikhail Burtsev. 2025. Cramming 1568 tokens into a single vector and back again: Exploring the limits of embedding space capacity. arXiv preprint arXiv:2502.13063.

Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3045–3059, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582–4597, Online. Association for Computational Linguistics.

Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2024. GPT understands, too. AI Open, 5:208–215.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.

Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, and Timothy P Lillicrap. 2019. Compressive transformers for long-range sequence modelling. arXiv preprint arXiv:1911.05507.

Andrea Santilli, Silvio Severino, Emilian Postolache, Valentino Maiorca, Michele Mancusi, Riccardo Marin, and Emanuele Rodola. 2023. Accelerating transformer inference for translation via parallel decoding. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12336–12355, Toronto, Canada. Association for Computational Linguistics.

Heming Xia, Tao Ge, Peiyi Wang, Si-Qing Chen, Furu Wei, and Zhifang Sui. 2023. Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3909–3925, Singapore. Association for Computational Linguistics.

Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2023. Efficient streaming language models with attention sinks. arXiv preprint arXiv:2309.17453.
arXiv:2505.21190v1 [cs.CL] 27 May 2025

LUNGUAGE: A Benchmark for Structured and Sequential Chest X-ray Interpretation

Jong Hak Moon¹, Geon Choi¹, Paloma Rabaey³, Min Gwan Kim⁶, Hyuk Gi Hong⁵, Jung-Oh Lee⁶, Hangyul Yoon¹, Eun Woo Doe⁷, Jiyoun Kim¹, Harshita Sharma², Daniel C. Castro², Javier Alvarez-Valle², Edward Choi¹
¹KAIST ²Microsoft Research Health Futures ³Ghent University ⁵Seoul Medical Center ⁶Seoul National University Hospital ⁷Yeungnam University College of Medicine
{jhak.moon, edwardchoi}@kaist.ac.kr

Abstract

Radiology reports convey detailed clinical observations and capture diagnostic reasoning that evolves over time. However, existing evaluation methods are limited to single-report settings and rely on coarse metrics that fail to capture fine-grained clinical semantics and temporal dependencies. We introduce LUNGUAGE, a benchmark dataset for structured radiology report generation that supports both single-report evaluation and longitudinal patient-level assessment across multiple studies. It contains 1,473 annotated chest X-ray reports, each reviewed by experts, and 80 of them contain longitudinal annotations to capture disease progression and inter-study intervals, also reviewed by experts. Using this benchmark, we develop a two-stage framework that transforms generated reports into fine-grained, schema-aligned structured representations, enabling longitudinal interpretation. We also propose LUNGUAGESCORE, an interpretable metric that compares structured outputs at the entity, relation, and attribute level while modeling temporal consistency across patient timelines. These contributions establish the first benchmark dataset, structuring framework, and evaluation metric for sequential radiology reporting, with empirical results demonstrating that LUNGUAGESCORE effectively supports structured report evaluation. The code is available at: https://github.com/SuperSupermoon/Lunguage

1 Introduction

Radiology reports play a critical role in medical diagnosis by capturing the patient's clinical history, describing imaging findings, recording procedural steps, and noting changes over time. These reports are typically written in unstructured free text, leading to significant variation in terminology and level of detail across radiologists. This heterogeneity complicates consistent computational interpretation and limits the development of accurate, automated systems for report generation and evaluation. To address these challenges, structured reporting frameworks have been developed to convert free-text reports into standardized, machine-friendly formats [13,16,36,40,42]. These representations make clinical content explicit and structured, enabling consistent and automated evaluation. While such frameworks have improved representational consistency, current evaluation methods remain fundamentally limited in two key aspects: temporal reasoning and fine-grained clinical accuracy.

Temporal reasoning is central to radiologic interpretation, as diagnoses frequently rely on comparing current and prior studies to assess whether a finding has progressed, remained stable, or newly appeared. However, most evaluation protocols [4,12,13,16,23,31,36,37,40,42] assess each report in isolation, without incorporating previous findings. This makes it impossible to determine whether temporal expressions – such as "no change," "improved," or "new" – are appropriate.
For instance, the statement “no change in pneumonia” cannot be meaningfully evaluated without confirming whether pneumonia was present in prior studies. Fine-grained clinical accuracy is equally essential. Reliable interpretation requires preservation of detailed attributes such as precise location (e.g., “carina above 3 cm”) and lesion size (e.g., “2.5 cm”). These attributes are critical for diagnostic specificity and downstream clinical decisions, yet most evaluation protocols reduce such detail. For example, the phrase “2.5 cm right upper lobe nodule with spiculated margins” may be flattened to “nodule”. The loss of granularity makes
it difficult to distinguish precise from incomplete outputs. Structured representation frameworks have partially addressed these issues by extracting clinical entities and relations from radiology reports [13,16,36,40,42]. Some include temporal descriptors like “worsened” or “stable” [16,36]. However, all remain limited to single reports and rely on explicitly stated temporal expressions, without checking consistency over time. As a result, they cannot determine whether findings align with prior studies or reflect coherent clinical trajectories. In addition, while these schemas partially improve structural representation, they often lack the clinical granularity needed for detailed diagnostic interpretation. Recent report generation models have begun incorporating temporal inputs such as prior reports, imaging, or clinical indications [4,44], enabling outputs that are more context-aware and temporally coherent. However, evaluation methods have not kept pace. Generated reports continue to be interpreted at separate timepoints rather than across a continuous timeline, making it difficult to assess whether models appropriately incorporated prior findings or preserved clinically important details, in both temporal and semantic dimensions. To address these limitations, we make the following contributions. (1) We construct LUNGUAGE, a fine-grained benchmark dataset for single and sequential structured reports. 1,473 single reports from 230 patients are annotated with 17,949 expert-validated entities and 23,307 relation–attribute pairs spanning 18 clinically grounded relation types. 80 sequential reports from 10 patients are annotated by comparing all possible observation pairs (41,122 pairs) across 3 to 14 reports per patient (1 to 1,200 days apart). These capture diagnostic reasoning through ENTITY GROUPS (identifying the same observation across multiple sequences) and TEMPORAL GROUPS (grouping observations within entity groups based on their temporal relationships across studies) for longitudinal analysis. (2) We develop an LLM-based extraction framework to convert free-text reports into structured format. The framework structures radiology reports into entity–relation–attribute triplets, and links them across time to form temporally coherent interpretations following the annotation schema of LUNGUAGE. The framework demonstrates strong agreement with human annotations, achieving an F1 score of 0.94 for entity–relation extraction, 0.86 for full triplets, 0.68 for ENTITY GROUP, and 0.89 for TEMPORAL GROUP. (3) Finally, we introduce LUNGUAGE SCORE, a clinically grounded metric quantifying both diagnostic accuracy and temporal coherence. It compares structured representations from generated reports against references, enabling assessment of clinical details and evolving diagnostic context. Our evaluation uses gold-standard structured data, but LUNGUAGE SCORE can extend to “silver standard” evaluation by automatically structuring both generated and reference reports when gold-standard annotations are unavailable.

2 Related work

Structuring Radiology Reports. Radiology reports encode layered clinical semantics, spanning history, imaging observations, and diagnostic impressions. Rule-based systems [36,40] achieve high precision in constrained scenarios but often struggle to generalize due to the variability of clinical language.
Supervised methods [13,16,42] using transformer-based models offer flexibility, though their effectiveness depends on the coverage and granularity of the annotation schema. More recently, prompting-based approaches have leveraged large language models (LLMs), such as GPT-4 [1] and open-source variants [33,43], to produce structured outputs directly from free-text inputs [6,9,11,35]. While these models exhibit strong few-shot capabilities, they may introduce issues such as hallucination, inconsistent terminology, and
sensitivity to prompt design. To mitigate such variability, we incorporate a task-specific vocabulary and schema-aligned reference set to constrain output to valid clinical concepts and enhance consistency through retrieval-augmented prompting.

Figure 1: Schema for Single and Sequential Report Structuring. The figure shows two reports from the same patient at day 10 and day 90. For the single report schema (within each report), gray solid lines connect entities to attributes, while pink and blue solid lines represent inter-entity reasoning relations (ASSOCIATE, EVIDENCE). For the sequential schema (across reports), black solid lines denote entities in the same ENTITY GROUP (same clinical finding over time) and TEMPORAL GROUP (same diagnostic episodes), while black dashed lines show entities in the same ENTITY GROUP but different TEMPORAL GROUPS (different diagnostic episodes).

Evaluation Metrics for Radiology Report Understanding. Existing metrics fall into two main categories: lexical and model-based. Lexical metrics such as BLEU [24], ROUGE [18], and METEOR [3] rely on surface overlap and often miss clinical meaning. Model-based metrics like CheXbert [31] and BERTScore [41] assess high-level similarity but lack fine-grained detail. Structure-based metrics such as RadGraph F1 [13] and RaTEScore [42] improve granularity by matching clinical entities and relations. Recent work has emphasized clinical error detection. ReXVal [38] introduced expert-labeled errors, which informed RadCliQ [37], combining BERTScore and RadGraph F1 for joint lexical and semantic evaluation. LLM-based metrics like GREEN [23], FineRadScore [12], RadFact [4] and CheXprompt [39] further approximate expert judgments or factual correctness. However, most metrics evaluate single reports and overlook temporal consistency across exams. They also miss fine-level attributes like location, extent, or progression. In contrast, our framework supports structured, temporally aligned evaluation over patient report sequences, enabling clinically meaningful assessment across all three dimensions: semantic, structural, and temporal.

3 LUNGUAGE: A benchmark for single and sequential structured reporting

We propose two complementary annotation schemas for structured understanding of radiology reports: a single-report schema capturing fine-grained interpretation within individual reports, and a sequential schema modeling patient-level diagnostic trajectories across time. Both schemas were refined with four board-certified radiologists to ensure clinical validity. Figure 1 illustrates these schemas.

3.1 Single Structured Report: Schema and Annotation Process

We propose a schema that captures the internal structure of single reports by extracting clinically relevant information as typed entities and relations. It is designed to reflect the typical subsections of radiology reports—indication/history, findings, and impression—and supports relation extraction across sentence boundaries within each section. Notably, the indication/history section is included to preserve contextual information that influences diagnostic interpretation at the patient trajectory level.
ENTITIES are assigned to one of six clinically grounded categories based on their derivability from chest X-ray imaging: PF (PERCEPTUAL FINDINGS) for directly observable image features (e.g., “lung,” “opacity”); CF (CONTEXTUAL FINDINGS) for diagnoses inferred from external clinical context (e.g., “pneumonia”); OTH (OTHER OBJECTS) for mentioned devices or procedures (e.g., “ET tube”); COF (CLINICAL OBJECTIVE FINDINGS) for structured observations from non-imaging sources (e.g., lab tests); NCD (NON-CXR DIAGNOSIS) for
diagnoses based on other modalities (e.g., “AIDS”); and PATIENT INFO for reported history or symptoms (e.g., “fever,” “cough”).

RELATIONS capture clinical properties and inter-entity connections, often spanning multiple sentences. The schema includes diagnostic stance (DXSTATUS, DXCERTAINTY); spatial and descriptive characteristics (LOCATION, MORPHOLOGY, DISTRIBUTION, MEASUREMENT, SEVERITY, COMPARISON); temporal dynamics (ONSET, IMPROVED, WORSENED, NOCHANGE, PLACEMENT); and contextual information (PASTHX, OTHERSOURCE, ASSESSMENT LIMITATIONS). (Here “Dx” stands for “diagnosis,” as in DXSTATUS, i.e., positive or negative finding, and DXCERTAINTY, i.e., definitive or tentative; “Hx” in PASTHX stands for “history.”) It also includes two reasoning relations: ASSOCIATE (bidirectional links between related entities) and EVIDENCE (asymmetric support from a finding to a diagnosis). For example, in “left lung opacity suggests pneumonia,” the schema identifies both ASSOCIATE between opacity and pneumonia, and EVIDENCE indicating that pneumonia is inferred from opacity. Full definitions can be found in Appendix A.1.

Single Report Annotation Process. We developed a structured annotation pipeline for 1,473 reports from 230 patients in the MIMIC-CXR [15] test split to support fine-grained and clinically grounded structuring of radiology language. The pipeline comprised two stages: constructing a task-specific vocabulary and generating gold-standard structured reports (SRs), both guided by a schema representing the layered semantics of chest X-ray (CXR) reports. In the first stage, we used GPT-4 (0613) to generate initial SRs from raw reports using schema-driven prompts. (All large language model (LLM) usage, including GPT-4, was conducted using HIPAA-compliant deployments provided by Azure and Fireworks AI.) From these outputs, we extracted entity and relation attributes to build an initial vocabulary, categorized by relation type. This vocabulary was refined through systematic review by four radiologists, ensuring lexical clarity and clinical validity. The final vocabulary comprised 1,808 unique entity terms and 2,193 relation attributes, each mapped to a subcategory and, when applicable, a UMLS concept [5]. In the second stage, annotators manually revised the model-generated SRs using the curated vocabulary. Annotators manually reviewed all 1,473 reports section by section, with the workload equally divided among radiologists to verify every (entity, relation, attribute) triplet. This included both entity–attribute pairings and inter-entity relations, with particular attention to cross-sentence links such as ASSOCIATE and EVIDENCE. This comprehensive process yielded 17,949 entity instances and 23,307 relation instances, forming a high-quality dataset for benchmarking fine-grained information extraction and report structuring. Details of the vocabulary and annotation process are provided in Appendix A.1.2.

3.2 Sequential Structured Report: Schema and Annotation Process

Longitudinal radiology reports often exhibit lexical variation, abstraction shifts, and inconsistent phrasing [21,34]. The same pathology may be described differently over time (e.g., “right opacity” vs. “focal consolidation”), complicating semantic alignment and temporal reasoning. To address this, we introduce a schema that structures reports across patient timelines through two key components: ENTITY GROUPS identify observations that refer to the same underlying clinical finding, even when expressed using different terms, anatomical references, or levels of abstraction.
Within each patient, all observation pairs are compared to detect semantic equivalence, regardless of when they appear in the timeline, whether the finding is reported as present or absent (DXSTATUS), or whether it is stated definitively or tentatively (DXCERTAINTY). For example, “PICC line tip in lower SVC” and “at the cavoatrial junction” (Figure 1) may describe the same catheter tip location, reflecting inherent ambiguity in 2D imaging.
Similarly, “lung volumes” reported as low on day 10 and described as “no change” on day 90 can be grouped to indicate persistent low lung volume. TEMPORAL GROUPS divide each ENTITY GROUP into distinct diagnostic episodes based on temporal distance, shifts in status or certainty, and explicit expressions of clinical change (e.g., “worsening,” “resolved”). This approach captures clinically meaningful transitions in a patient’s condition [7,30]. For example, “fever” mentioned in both the day 10 and day 90 reports (Figure 1) appears in the “history” section but occurs far apart in time; treating them as part of separate temporal groups better reflects clinical reasoning. Together, these components support fine-grained evaluation of both semantic consistency and temporal coherence in longitudinal model outputs.

Sequential Report Annotation Process. We annotated 80 chest X-ray reports from 10 patients among the 230-patient cohort used in the single-report annotation, to create a gold dataset for longitudinal evaluation. The same four physicians from the earlier phase participated in the annotation process, with patients equally divided among them. Each physician independently annotated their assigned patients’ reports in chronological order, identifying observations referring to the same underlying finding (ENTITY GROUP, represented as linearized phrases combining an entity and its attributes, e.g., “pleural effusion right lung increasing”) and grouping them into diagnostic episodes (TEMPORAL GROUP, numbered sequentially as 1, 2, 3, etc. to distinguish separate temporal progressions) based on clinical and temporal continuity. Terminology was normalized when appropriate (e.g., aligning “right clavicle hardware” and “orthopedic side plate”), while preserving distinctions in abstraction and anatomical specificity. This process required significant effort due to the complexity of longitudinal comparison. Patients had between 3 and 14 reports, with time intervals ranging from 1 to 1,200 days. For each patient, all observation pairs—ranging from 34 to 141 per case—were compared one by one, resulting in 41,122 total comparisons. Each pair was assessed to determine whether the two observations referred to the same clinical finding, considering both meaning and timing. This detailed review was necessary to capture both consistent findings across time and clinically meaningful transitions such as resolution or recurrence. Details are provided in Appendix A.2.

4 Structuring Framework for Single and Sequential Reports

We develop a two-stage framework for automatically structuring radiology reports using the same schema as our gold-standard benchmark, covering both single-report and longitudinal settings. The framework produces structured representations suitable for downstream evaluation along semantic, structural, and temporal dimensions. The framework overview can be found in Appendix B.1.

(i) Single setting. To generate accurate structures from free text, we apply corpus-guided relation extraction using a large language model (LLM). The model extracts (entity, relation, attribute) triplets aligned with our schema.
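To make the extraction target concrete, the sketch below shows one way the schema’s triplets could be represented in code. The class and field names are our own illustrative assumptions rather than the authors’ implementation; the example instantiates the “left lung opacity suggests pneumonia” case from Section 3.1.

```python
# Minimal sketch of the (entity, relation, attribute) target representation.
# Class and field names are illustrative assumptions, not the paper's code.
from dataclasses import dataclass

@dataclass
class Entity:
    text: str        # surface form, e.g. "opacity"
    category: str    # one of the six schema categories: PF, CF, OTH, COF, NCD, PATIENT_INFO
    dx_status: str = "Positive"       # Positive / Negative
    dx_certainty: str = "Definitive"  # Definitive / Tentative

@dataclass
class Triplet:
    entity: Entity
    relation: str    # e.g. LOCATION, MEASUREMENT, ASSOCIATE, EVIDENCE, ...
    attribute: str   # attribute value or related entity text

# Example: "left lung opacity suggests pneumonia"
opacity = Entity("opacity", "PF")
pneumonia = Entity("pneumonia", "CF", dx_certainty="Tentative")  # "suggests" -> tentative
triplets = [
    Triplet(opacity, "LOCATION", "left lung"),
    Triplet(opacity, "ASSOCIATE", "pneumonia"),   # bidirectional link
    Triplet(pneumonia, "ASSOCIATE", "opacity"),
    Triplet(pneumonia, "EVIDENCE", "opacity"),    # diagnosis supported by a finding
]
```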
While LLMs offer flexible language understanding, they can produce hallucinations and inconsistencies [6,9,11,35]. To
mitigate this, we guide the model by matching sentences against a curated vocabulary from our annotation corpus (Section 3.1). The task spans both intra- and inter-sentential contexts, extracting triplets without templates to handle lexical variation. Prompt details and the vocabulary-matching algorithm are in Appendix B.2 and B.3.

(ii) Sequential setting. Building on the structured outputs from stage (i), we use the LLM to interpret report sequences over time. To address longitudinal variability, the model performs normalization and temporal aggregation across reports. Specifically, we linearize each entity and its related attributes into flattened text, preserving their chronological order relative to the initial study (e.g., “day 0: opacity right lung”, “day 30: opacity right basilar”). The LLM is provided with few-shot examples illustrating common patterns of lexical variation, abstraction shifts (e.g., descriptive to diagnostic terms), and rephrased mentions of persistent devices. Using these examples as guidance, the model then determines whether observations across time refer to the same underlying finding and whether they belong to a single temporal group. This decision is guided by semantic similarity, anatomical alignment, and temporal continuity, as inferred by the LLM. When observations reflect recurrence after resolution or appear clinically disconnected, they are treated as distinct temporal groups. This process generates two-fold outputs: ENTITY GROUPS and TEMPORAL GROUPS, corresponding to the same concepts introduced in Section 3.2. The output format combines entity, location, and temporal pattern (e.g., “pleural effusion right lung no change”) with temporal groups numbered sequentially (1, 2, 3, etc.) following the sequential schema established in Section 3.2. This approach enables faithful structuring of longitudinal narratives, capturing meaningful trajectories across diverse report sequences. Full prompt examples are provided in Appendix B.4.

5 LUNGUAGE SCORE: A Fine-Grained Patient-Level Metric

We propose LUNGUAGE SCORE, a fine-grained metric that quantifies radiology report quality across semantic equivalence, temporal coherence, and attribute-level similarity. LUNGUAGE SCORE captures clinically meaningful distinctions in terminology (“right clavicle hardware” vs. “orthopedic side plate”), longitudinal trends (resolution vs. decrease), and detailed attributes such as size (2.3 cm vs. 3.0 cm). It integrates these dimensions into a single similarity score that contrasts the (sequence of) candidate report(s) against the (sequence of) reference report(s), enabling patient-level evaluation.

Evaluation Principles. LUNGUAGE SCORE is grounded in three clinical principles: semantic sensitivity captures concept-level equivalence across linguistic variation [21,34]; temporal coherence ensures alignment with clinical timelines for assessing disease progression [7,30]; and structural granularity evaluates fine-grained attributes critical for diagnosis [8,26]. These principles enable clinically faithful evaluation suitable for real-world deployment.

Evaluation Method. Each patient is associated with a sequence of T structured reports. The metric operates at the patient level and supports both single-report (T = 1) and sequential-report (T > 1) evaluations.
In the single-report setting, evaluation is based on semantic and structural alignment, while in the sequential-report setting, temporal alignment is additionally incorporated to assess consistency across longitudinal disease trajectories. Formally, LUNGUAGE SCORE evaluates similarity between predicted and gold reference sets of structured report findings as follows. For each patient, we compare all predicted and gold reference findings across the entire sequence of reports.
Let $S^{\text{pred}} = (S^{\text{pred}}_1, \dots, S^{\text{pred}}_T)$ and $S^{\text{gold}} = (S^{\text{gold}}_1, \dots, S^{\text{gold}}_T)$ denote the predicted and gold sequences for a given patient, where each $S^{(\cdot)}_t$ is the set of all structured findings at the $t$-th study. Pairwise similarity is computed over every possible pair of findings, pooled across all timepoints:

$$(f^{\text{pred}}, f^{\text{gold}}) \in \left( \bigcup_{t_p=1}^{T} S^{\text{pred}}_{t_p} \right) \times \left( \bigcup_{t_g=1}^{T} S^{\text{gold}}_{t_g} \right). \quad (1)$$

Each pair of findings is assigned a composite similarity score that captures alignment across semantic, temporal, and structural similarity dimensions, as defined below:

$$\text{MatchScore}(f^{\text{pred}}, f^{\text{gold}}) = \text{Semantic} \cdot (\text{Temporal if } T > 1) \cdot \text{Structural}. \quad (2)$$

Semantic similarity determines whether two findings express the same underlying clinical concept. For semantic representation, we use different approaches for single versus sequential reports: in the single-report setting ($T = 1$), each finding is simply represented as a linearized phrase derived from the entity and all its associated attributes (e.g., “opacity”–“left lung”–“nodular”–“slightly increased”). However, in the sequential-report setting ($T > 1$), where findings need to be tracked across time, we utilize the ENTITY GROUP (see Section 4) for representation. This approach allows lexically divergent but conceptually identical findings to be treated as semantically aligned across multiple reports. Cosine similarity is computed between contextual embeddings of these semantic representations using domain-specific clinical BERT models (MedCPT [14] and BioLORD [29]), chosen for their ability to capture semantic variability in chest X-ray reports. We use the average of cosine similarities computed from both models to improve robustness. Model selection details are provided in Appendix C.3.

$$\text{Semantic}(f^{\text{pred}}, f^{\text{gold}}) = \text{cosine}(\text{Embed}(f^{\text{pred}}), \text{Embed}(f^{\text{gold}})) \quad (3)$$

Temporal similarity is defined only when $T > 1$ and captures alignment across timepoints. It ensures that findings are not only semantically similar but also temporally coherent with the patient’s disease progression. To prevent matches across unrelated timepoints, LUNGUAGE SCORE prioritizes findings that occur in the same study timepoint $t$ and TEMPORAL GROUP. Temporal alignment receives the maximum score (= 1) when both the study timepoint $t$ and the TEMPORAL GROUP match, and a reduced score when only one matches, for example, when a predicted finding belongs to the correct TEMPORAL GROUP but appears in a different study. Final scores are computed using equal weights:

$$\text{Temporal}(f^{\text{pred}}, f^{\text{gold}}) = w_S \cdot \mathbf{1}[S(f^{\text{pred}}) = S(f^{\text{gold}})] + w_G \cdot \mathbf{1}[G(f^{\text{pred}}) = G(f^{\text{gold}})], \quad (4)$$

where $S$ refers to the study timepoint $t$, $G$ refers to the TEMPORAL GROUP of findings across time, and equal weights ($w_S = w_G = 0.5$) are used in our implementation.

Structural similarity evaluates individual attributes (e.g., LOCATION, MEASUREMENT, ...) between predicted and gold reference findings, enabling fine-grained comparison. Each attribute is assigned a normalized weight $w_{\text{attribute}}$ based on its clinical importance, as determined by experts, reflecting its role in decision making (see Appendix C.1). Similarity is computed as:

$$\text{Structural}(f^{\text{pred}}, f^{\text{gold}}) = \sum_{\text{attribute}} w_{\text{attribute}} \cdot \text{sim}(f^{\text{pred}}[\text{attribute}], f^{\text{gold}}[\text{attribute}]), \quad (5)$$

where $\text{sim}(\cdot)$ returns 1 for exact matches on binary attributes (DXSTATUS: positive/negative; DXCERTAINTY: definitive/tentative) and cosine similarity for non-binary attributes (LOCATION, SEVERITY, ONSET, IMPROVED, WORSENED, PLACEMENT, NOCHANGE, MORPHOLOGY, DISTRIBUTION, MEASUREMENT, COMPARISON, PASTHX, OTHERSOURCE, ASSESSMENT LIMITATIONS), using the average of MedCPT and BioLORD contextual encoders. This ensures that evaluation captures both overall correctness and clinically critical attribute accuracy.
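As a concrete illustration, the sketch below computes the per-pair MatchScore of Equations 2–5 from precomputed embeddings. The function names, the generic embedding vectors, and the toy weight handling are our assumptions; the paper’s implementation uses MedCPT and BioLORD encoders (averaged) and expert-derived attribute weights.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic(e_pred: np.ndarray, e_gold: np.ndarray) -> float:
    # Eq. (3): cosine similarity of contextual embeddings. The paper averages
    # cosine similarities from two encoders (MedCPT, BioLORD); one is shown here.
    return cosine(e_pred, e_gold)

def temporal(study_pred, study_gold, group_pred, group_gold,
             w_s: float = 0.5, w_g: float = 0.5) -> float:
    # Eq. (4): equal-weight agreement on study timepoint and TEMPORAL GROUP.
    return w_s * (study_pred == study_gold) + w_g * (group_pred == group_gold)

def structural(attrs_pred: dict, attrs_gold: dict, weights: dict, embed) -> float:
    # Eq. (5): weighted attribute-level similarity. Binary attributes
    # (DXSTATUS, DXCERTAINTY) use exact match; the rest use embedding cosine.
    score = 0.0
    for name, w in weights.items():
        p, g = attrs_pred.get(name), attrs_gold.get(name)
        if p is None or g is None:
            continue
        if name in ("DXSTATUS", "DXCERTAINTY"):
            score += w * float(p == g)
        else:
            score += w * cosine(embed(p), embed(g))
    return score

def match_score(sem: float, struc: float, temp=None) -> float:
    # Eq. (2): the temporal term only participates in the sequential setting (T > 1).
    return sem * struc if temp is None else sem * temp * struc
```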
Set-level matching with partial credit. We can compute the combined MatchScore by multiplying the semantic, temporal, and structural similarity scores (Equations 3–5), as shown in Equation 2. We then perform optimal bipartite matching between predicted findings $i$ and gold reference findings $j$ using the MatchScores $s_{ij}$ as edge weights, giving us sets of matched pairs $\{(f^{\text{pred}}_m, f^{\text{gold}}_n)\}$, unmatched predicted findings $\{f^{\text{pred}}_u\}$, and unmatched gold reference findings $\{f^{\text{gold}}_v\}$. Matched pairs contribute similarity $s_{mn}$ to true positives (TP), with the residual $(1 - s_{mn})$ assigned to false positives (FP) and false negatives (FN). Unmatched findings incur penalties based on their most similar finding:

$$\text{TP} = \sum_{(m,n)} s_{mn}, \quad \text{FP} = \sum_{(m,n)} (1 - s_{mn}) + \sum_{u} \left(1 - \max_j s_{uj}\right), \quad \text{FN} = \sum_{(m,n)} (1 - s_{mn}) + \sum_{v} \left(1 - \max_i s_{iv}\right). \quad (6)$$

This formulation supports partial credit based on alignment strength. Full credit is awarded only when a finding fully aligns semantically, temporally, and structurally. Partial matches contribute proportionally to evaluation scores, and when one set contains more findings than the other, the extra findings remain unmatched and are penalized as either FPs or FNs. This scoring scheme enables nuanced evaluation that distinguishes between minor misalignments and complete misses. The final F1 score can be computed from these TP, FP and FN counts using the standard formula. Additional examples illustrating the metric are provided in Appendix C.2.
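A minimal sketch of this set-level matching, assuming a dense MatchScore matrix and using SciPy’s Hungarian-algorithm solver. Treating every assignment pair as matched is our simplification; an implementation might additionally threshold very low-score pairs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def partial_credit_f1(scores: np.ndarray) -> float:
    """scores[i, j] = MatchScore between predicted finding i and gold finding j."""
    n_pred, n_gold = scores.shape
    # Optimal bipartite matching; maximize=True selects the highest-score assignment.
    rows, cols = linear_sum_assignment(scores, maximize=True)
    matched = scores[rows, cols]

    tp = matched.sum()
    residual = (1.0 - matched).sum()  # shared residual term in Eq. (6)

    unmatched_pred = set(range(n_pred)) - set(rows)
    unmatched_gold = set(range(n_gold)) - set(cols)
    # Unmatched findings are penalized by distance to their most similar counterpart.
    fp = residual + sum(1.0 - scores[u, :].max() for u in unmatched_pred)
    fn = residual + sum(1.0 - scores[:, v].max() for v in unmatched_gold)

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy usage: two predicted vs. three gold findings (one gold finding goes unmatched).
print(partial_credit_f1(np.array([[0.9, 0.1, 0.2],
                                  [0.2, 0.8, 0.3]])))
```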
6 Experiments

We conduct three sets of experiments to evaluate our approach from complementary perspectives: (1) the performance of the proposed structuring framework, (2) the diagnostic utility of LUNGUAGE SCORE as a single-report evaluation metric, and (3) the ability of LUNGUAGE SCORE to benchmark the performance of various single- and longitudinal-report generation models.

6.1 Structuring framework validation

We first assess the structuring framework on LUNGUAGE, our benchmark of 1,473 chest X-ray reports from 230 patients. Each patient has 1 to 15 imaging studies, with a subset of 10 patients selected for full longitudinal trajectories. Reflecting the progressive nature of clinical interpretation, we evaluate the framework in two stages: (i) single-report structuring, which measures the model’s ability to extract localized semantic relations, and (ii) temporal inference, which assesses whether findings are consistently aligned and appropriately organized into clinical episodes across time.

Table 1: Performance of various models under zero-shot and 5-shot settings. Left: single-report performance (ER = entity–relation; Triplet = entity–relation–attribute). Right: sequential reasoning performance (EG = Entity Grouping; TG = Temporal Grouping). Best scores per block are bolded in the original.

| Shot | Model | ER F1 | ER P | ER R | Triplet F1 | Triplet P | Triplet R | EG F1 | EG P | EG R | TG F1 | TG P | TG R |
|------|-------|-------|------|------|------------|-----------|-----------|-------|------|------|-------|------|------|
| Zero | GPT-4.1 | 0.91 | 0.83 | 1.00 | 0.78 | 0.79 | 0.77 | – | – | – | – | – | – |
| Zero | Qwen3 | 0.73 | 0.58 | 1.00 | 0.62 | 0.53 | 0.75 | 0.50 | 0.42 | 0.65 | 0.83 | 0.85 | 0.82 |
| Zero | Deepseek-v3 | 0.87 | 0.76 | 1.00 | 0.76 | 0.72 | 0.80 | 0.41 | 0.32 | 0.76 | 0.79 | 0.86 | 0.75 |
| Zero | Llama4-Maverick | 0.81 | 0.68 | 1.00 | 0.69 | 0.64 | 0.76 | 0.35 | 0.25 | 0.77 | 0.60 | 0.88 | 0.47 |
| 5-shot | GPT-4.1 | 0.94 | 0.88 | 1.00 | 0.86 | 0.86 | 0.86 | 0.68 | 0.77 | 0.65 | 0.89 | 0.86 | 0.93 |
| 5-shot | Qwen3 | 0.92 | 0.85 | 1.00 | 0.84 | 0.83 | 0.85 | 0.62 | 0.57 | 0.71 | 0.84 | 0.86 | 0.84 |
| 5-shot | Deepseek-v3 | 0.93 | 0.88 | 1.00 | 0.86 | 0.85 | 0.86 | 0.66 | 0.63 | 0.75 | 0.85 | 0.88 | 0.84 |
| 5-shot | Llama4-Maverick | 0.94 | 0.88 | 1.00 | 0.86 | 0.86 | 0.85 | 0.52 | 0.38 | 0.87 | 0.62 | 0.90 | 0.48 |

Single setting. We evaluate the model’s ability to generate accurate structured representations from individual reports by comparing predicted (entity, relation, attribute) triplets against expert annotations in LUNGUAGE. Using micro-averaged precision, recall, and F1 scores at both the entity–relation and full-triplet levels, we assess our prompting strategy on GPT-4.1 [1] and several recent open-source LLMs [19,33,43], all evaluated under the same framework configuration described in Section 4. As shown in Table 1, all models achieve perfect recall and F1 scores of 0.92–0.94 for entity–relation extraction with 5-shot prompting, and 0.84–0.86 F1 for full-triplet extraction. Increasing the number of few-shot examples leads to further gains, highlighting the robustness of the framework despite the complexity of the schema. Additional analyses, including comparisons with and without vocabulary guidance, 10-shot prompting results, and qualitative examples, are provided in Appendix B.5.

Sequential setting. The second stage evaluates how well models group temporally distributed findings into clinically meaningful categories. This grouping task presents challenges due to subtle semantic distinctions in medical terminology. For example, “heart size” and “mediastinal silhouette” might require different groupings despite both relating to cardiac imaging—“heart size” focuses on dimensions (potentially grouping with “cardiomegaly”) while “mediastinal silhouette” concerns shape, and a patient could simultaneously have cardiomegaly with a normal mediastinum. Using micro-averaged F1 scores for evaluation, we found that zero-shot prompting yielded limited results, with GPT-4.1 often producing invalid outputs. Performance improved significantly with five-shot prompting, where most models achieved F1 scores above 0.6 for entity grouping (GPT-4.1 reached 0.68), and temporal grouping showed even stronger results. Although our strict grouping criteria may result in lower F1 scores when semantically similar concepts fall into different groups, this does not compromise the final clinical evaluation. When these grouped entities are later used in LUNGUAGE SCORE (Section 5, Equation 3), the semantic similarity calculation ensures that related concepts still receive appropriately high similarity scores, thereby preserving clinical validity despite strict initial grouping boundaries. Additional analyses are available in Appendix B.6.

6.2 Metric Validation with ReXVal

We validate the diagnostic utility of LUNGUAGE SCORE on the ReXVal dataset [38], a benchmark of 200 MIMIC-CXR report pairs annotated by 6 radiologists, designed to evaluate the alignment between the scoring of automated metrics and that of radiologists. Since this benchmark does not include sequential reports, we apply only the single-report version of LUNGUAGE SCORE (i.e., semantic and structural alignment).

Table 2: Kendall Tau and Pearson correlation coefficients (with 95% CIs) between single-report metrics and the total number of radiologist-annotated errors in each report, across the ReXVal dataset.

| Metric | Kendall Tau | Pearson |
|--------|-------------|---------|
| BLEU | -0.39 (-0.27, -0.48) | -0.53 (-0.44, -0.61) |
| BERTScore | -0.50 (-0.42, -0.58) | -0.63 (-0.55, -0.70) |
| GREEN | -0.63 (-0.56, -0.69) | -0.73 (-0.67, -0.78) |
| 1/FineRadScore | -0.69 (-0.63, -0.74) | -0.75 (-0.70, -0.80) |
| RaTEScore | -0.52 (-0.44, -0.59) | -0.63 (-0.56, -0.70) |
| LUNGUAGE SCORE | -0.58 (-0.51, -0.64) | -0.69 (-0.63, -0.74) |
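For readers who want to reproduce this kind of validation, here is a minimal sketch of the correlation analysis used in this subsection: Kendall’s tau between a metric’s scores and radiologist error counts, with the 95% bootstrap confidence intervals over report-level resamples described below. The function is our illustration, not the authors’ evaluation code.

```python
import numpy as np
from scipy.stats import kendalltau

def bootstrap_kendall(metric_scores, error_counts, n_boot=1000, seed=0):
    """Kendall's tau vs. radiologist error counts, with a 95% percentile
    bootstrap CI over resamples of the reports (with replacement)."""
    scores = np.asarray(metric_scores, dtype=float)
    errors = np.asarray(error_counts, dtype=float)
    tau, _ = kendalltau(scores, errors)

    rng = np.random.default_rng(seed)
    n = len(scores)
    taus = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample reports with replacement
        t, _ = kendalltau(scores[idx], errors[idx])
        taus.append(t)
    lo, hi = np.percentile(taus, [2.5, 97.5])
    return tau, (lo, hi)
```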
We compare our metric against the following established alternatives: BLEU [25], BERTScore [41], GREEN [23], FineRadScore [12], and RaTEScore [42]. For further details on the settings we used to run each
metric, we refer to Appendix D. Table 2 shows the Kendall Tau and Pearson correlation between each single-report-level metric and the total number of errors (both significant and insignificant) identified by radiologists, across all reports in the ReXVal dataset. A more negative correlation indicates stronger alignment with radiologist assessments. Note that we invert FineRadScore to align its direction with the other metrics. We also report 95% confidence intervals, calculated via bootstrapping with 1,000 resamples with replacement of the 200 reports. Our proposed metric outperforms the other structure- and/or semantics-based metrics (BLEU, BERTScore, and RaTEScore) but does not surpass the LLM-derived scores (FineRadScore and GREEN) in terms of correlation with human experts. Nevertheless, it achieves performance close to GREEN and FineRadScore, which were explicitly designed to align with the ReXVal error taxonomy. In contrast, our metric is based solely on semantic and structural alignment between the findings in each report, without access to predefined error types. We further explore inter-metric correlations in Appendix D, showing that LUNGUAGE SCORE correlates highly with all other metrics.

6.3 Benchmarking single-report and sequential report generation models

We further validate LUNGUAGE SCORE by comparing it against existing evaluation methods across multiple report generation models, assessing its ability to capture clinically meaningful differences at both the single-report and patient-level scales. To this end, we benchmark the performance of four generative models: MAIRA-2 [4], Medversa [44], RGRG [32] and Cvt2distilgpt2 [22].

Radiology report generation. All evaluated models require frontal chest X-ray images. Of the 80 studies in our sequential dataset, 13 lacked frontal images in MIMIC-CXR, limiting analysis to 67 studies. (Whenever no frontal image was available for a study, we were not able to generate a report. These studies are therefore excluded from the sequential analysis, leaving gaps in the sequence of reports that might influence the final result. This occurred for 5 out of 10 patients.) We used only these studies to ensure comparability across evaluations. For MAIRA-2, we included lateral images when available, while other models received only frontal views. MAIRA-2, RGRG, and Cvt2distilgpt2 generated findings sections, while Medversa produced both findings and impressions, which we combined into complete reports. Note that only MAIRA-2 was trained to incorporate prior studies, and we explored two settings: standard (using true reference reports from prior studies) and cascaded (using previously MAIRA-2-generated reports as prior context). Further details can be found in Appendix E.

Table 3: Structured radiology report generation results with 95% confidence intervals. The first four columns are the single-report setting; the last column is the sequential setting.

| Model | RaTEScore | GREEN | 1/FineRadScore | LUNGUAGE SCORE (single) | LUNGUAGE SCORE (sequential) |
|-------|-----------|-------|----------------|-------------------------|------------------------------|
| Medversa [44] | 0.499 (0.47, 0.53) | 0.314 (0.26, 0.37) | 0.170 (0.14, 0.20) | 0.409 (0.38, 0.44) | 0.410 (0.37, 0.45) |
| Cvt2DistilGPT2 [22] | 0.436 (0.41, 0.47) | 0.240 (0.19, 0.29) | 0.152 (0.12, 0.18) | 0.367 (0.34, 0.40) | 0.371 (0.33, 0.41) |
| RGRG [32] | 0.479 (0.45, 0.51) | 0.266 (0.23, 0.30) | 0.131 (0.11, 0.15) | 0.406 (0.38, 0.43) | 0.391 (0.36, 0.42) |
| MAIRA-2 [4] (standard) | 0.518 (0.49, 0.54) | 0.325 (0.28, 0.37) | 0.193 (0.15, 0.24) | 0.429 (0.40, 0.46) | 0.432 (0.41, 0.46) |
| MAIRA-2 [4] (cascade) | 0.504 (0.48, 0.53) | 0.299 (0.25, 0.34) | 0.161 (0.13, 0.19) | 0.419 (0.39, 0.45) | 0.416 (0.38, 0.45) |

Single-report setting. In the single-report setting, we compare generated reports with ground-truth references on a study-by-study basis across the 67 studies. Reference reports combine findings and impression sections. Table 3 shows performance across various metrics, including our new LUNGUAGE SCORE.
For LUNGUAGE SCORE, we use
our annotated reports as ground-truth structured resources and compare them with outputs from the structuring process in Section 4. MAIRA-2 (standard setting) clearly outperforms all other models, demonstrating the value of longitudinal context even when evaluated at the single-report level. The cascaded setting slightly underperforms the standard one, as it can drift off course when building upon previously generated reports.

Sequential setting. We use the same reports as in the single-report setting but include the history (i.e., indication) section in addition to findings and impression, as it provides essential context for understanding the patient’s trajectory over time. We evaluate all models in this setting because radiology reports are inherently longitudinal, describing findings across multiple imaging studies. Even models trained on single image–report pairs should produce temporally coherent outputs if each report is properly grounded in the image. As shown in Table 3, MAIRA-2, explicitly designed for sequential generation, achieves the highest performance. MedVersa, which additionally uses the history section as input, ranks second. In contrast, models that do not use the history section (CVT2DistilGPT2, RGRG) perform worse. Notably, CVT2DistilGPT2 improves slightly in this setting, while RGRG’s performance declines, revealing differences in temporal coherence. Our sequential LUNGUAGE SCORE uniquely captures such weaknesses in longitudinal consistency, highlighting its value in evaluating clinically realistic reporting behavior. Section D provides further analysis of the metric’s error sensitivity in both settings.

7 Conclusion, Limitations and Future Directions

This work introduces a comprehensive framework for evaluating radiology reports, grounded in LUNGUAGE, a fine-grained benchmark for single and sequential structured reports. We propose a two-stage LLM-based structuring framework and LUNGUAGE SCORE, a novel metric reflecting clinical attributes across semantic, temporal, and structural dimensions. Limitations: Our study has several important limitations. First, our sequential dataset includes only 10 patients due to labor-intensive annotation, necessitating larger-scale datasets. Second, cross-validation by multiple radiologists is needed to ensure robustness. Third, our framework requires performance improvements in handling complex temporal relationships. Future Directions: Advancing patient-centered reporting necessitates integration of structured EHR data beyond chest X-rays. Current image-based generation approaches struggle with context-rich sections like patient history. Models lacking access to such contextual signals remain fundamentally limited in longitudinal reasoning and diagnostic continuity, highlighting the need for broader integration with EHR data in future research.

References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

[2] Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. Publicly available clinical BERT embeddings.
In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72–78, Minneapolis, Minnesota, USA, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/W19-1909. URL https://www.aclweb.org/anthology/W19-1909.

[3] Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric
for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, 2005.

[4] Shruthi Bannur, Kenza Bouzid, Daniel C Castro, Anton Schwaighofer, Anja Thieme, Sam Bond-Taylor, Maximilian Ilse, Fernando Pérez-García, Valentina Salvatelli, Harshita Sharma, et al. MAIRA-2: Grounded radiology report generation. arXiv preprint arXiv:2406.04449, 2024.

[5] Olivier Bodenreider. The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Research, 32(suppl_1):D267–D270, 2004.

[6] Felix Busch, Lena Hoffmann, Daniel Pinto Dos Santos, Marcus R Makowski, Luca Saba, Philipp Prucker, Martin Hadamitzky, Nassir Navab, Jakob Nikolas Kather, Daniel Truhn, et al. Large language models for structured reporting in radiology: past, present, and future. European Radiology, pages 1–14, 2024.

[7] Wendy W Chapman, Prakash M Nadkarni, Lynette Hirschman, Leonard W D’Avolio, Guergana K Savova, and Ozlem Uzuner. Overcoming barriers to NLP for clinical text: the role of shared tasks and the need for additional creative solutions, 2011.

[8] Dina Demner-Fushman, Wendy W Chapman, and Clement J McDonald. What can natural language processing do for clinical decision support? Journal of Biomedical Informatics, 42(5):760–772, 2009.

[9] Felix J Dorfner, Liv Jürgensen, Leonhard Donle, Fares Al Mohamad, Tobias R Bodenmann, Mason C Cleveland, Felix Busch, Lisa C Adams, James Sato, Thomas Schultz, et al. Comparing commercial and open-source large language models for labeling chest radiograph reports. Radiology, 313(1):e241139, 2024.

[10] Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. Domain-specific language model pretraining for biomedical natural language processing, 2020.

[11] Iryna Hartsock, Cyrillo Araujo, Les Folio, and Ghulam Rasool. Improving radiology report conciseness and structure via local large language models. Journal of Imaging Informatics in Medicine, pages 1–12, 2025.

[12] Alyssa Huang, Oishi Banerjee, Kay Wu, Eduardo Pontes Reis, and Pranav Rajpurkar. FineRadScore: A radiology report line-by-line evaluation technique generating corrections with severity scores. arXiv preprint arXiv:2405.20613, 2024.

[13] Saahil Jain, Ashwin Agrawal, Adriel Saporta, Steven QH Truong, Du Nguyen Duong, Tan Bui, Pierre Chambon, Yuhao Zhang, Matthew P Lungren, Andrew Y Ng, et al. RadGraph: Extracting clinical entities and relations from radiology reports. arXiv preprint arXiv:2106.14463, 2021.

[14] Qiao Jin, Won Kim, Qingyu Chen, Donald C Comeau, Lana Yeganova, W John Wilbur, and Zhiyong Lu. MedCPT: Contrastive pre-trained transformers with large-scale PubMed search logs for zero-shot biomedical information retrieval. Bioinformatics, 39(11):btad651, 2023.

[15] Alistair EW Johnson, Tom J Pollard, Seth J Berkowitz, Nathaniel R Greenbaum, Matthew P Lungren, Chih-ying Deng, Roger G Mark, and Steven Horng. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Scientific Data, 6(1):317, 2019.

[16] Sameer Khanna, Adam Dejl, Kibo Yoon, Steven QH Truong, Hanh Duong, Agustina Saenz, and Pranav Rajpurkar. RadGraph2: Modeling disease progression in radiology reports via hierarchical information extraction. In Machine Learning for Healthcare Conference, pages 381–402. PMLR, 2023.
[17] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240, 2020.

[18]
Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, 2004.

[19] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.

[20] Xiaohong Liu, Hao Liu, Guoxing Yang, Zeyu Jiang, Shuguang Cui, Zhaoze Zhang, Huan Wang, Liyuan Tao, Yongchang Sun, Zhu Song, et al. A generalist medical language model for disease diagnosis assistance. Nature Medicine, pages 1–11, 2025.

[21] Stéphane M Meystre, Guergana K Savova, Karin C Kipper-Schuler, and John F Hurdle. Extracting information from textual documents in the electronic health record: a review of recent research. Yearbook of Medical Informatics, 17(01):128–144, 2008.

[22] Aaron Nicolson, Jason Dowling, and Bevan Koopman. Improving chest X-ray report generation by leveraging warm starting. Artificial Intelligence in Medicine, 144:102633, 2023.

[23] Sophie Ostmeier, Justin Xu, Zhihong Chen, Maya Varma, Louis Blankemeier, Christian Bluethgen, Arne Edward Michalson, Michael Moseley, Curtis Langlotz, Akshay S Chaudhari, et al. GREEN: Generative radiology report evaluation and error notation. arXiv preprint arXiv:2405.03595, 2024.

[24] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, 2002.

[25] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, 2002.

[26] Ewoud Pons, Loes MM Braun, MG Myriam Hunink, and Jan A Kors. Natural language processing in radiology: a systematic review. Radiology, 279(2):329–343, 2016.

[27] Vishwanatha M Rao, Serena Zhang, Julian N Acosta, Subathra Adithan, and Pranav Rajpurkar. ReXErr: Synthesizing clinically meaningful errors in diagnostic radiology reports. In Biocomputing 2025: Proceedings of the Pacific Symposium, pages 70–81. World Scientific, 2024.

[28] Vishwanatha M Rao, Serena Zhang, Julian N Acosta, Subathra Adithan, and Pranav Rajpurkar. ReXErr-v1: Clinically meaningful chest X-ray report errors derived from MIMIC-CXR (version 1.0.0). PhysioNet, 2025. doi: https://doi.org/10.13026/9dns-vd94.

[29] François Remy, Kris Demuynck, and Thomas Demeester. BioLORD-2023: semantic textual representations fusing large language models and clinical knowledge graph insights. Journal of the American Medical Informatics Association, 31(9):1844–1855, 2024.

[30] Guergana K Savova, James J Masanz, Philip V Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C Kipper-Schuler, and Christopher G Chute. Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. Journal of the American Medical Informatics Association, 17(5):507–513, 2010.

[31] Akshay Smit, Saahil Jain, Pranav Rajpurkar, Anuj Pareek, Andrew Y Ng, and Matthew P Lungren. CheXbert: combining automatic labelers and expert annotations for accurate radiology report labeling using BERT. arXiv preprint arXiv:2004.09167, 2020.

[32] Tim Tanida, Philip Müller, Georgios Kaissis, and Daniel Rueckert. Interactive and explainable region-guided radiology report generation.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7433–7442, 2023.

[33] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière,
Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

[34] Yanshan Wang, Liwei Wang, Majid Rastegar-Mojarad, Sungrim Moon, Feichen Shen, Naveed Afzal, Sijia Liu, Yuqun Zeng, Saeed Mehrabi, Sunghwan Sohn, et al. Clinical information extraction applications: a literature review. Journal of Biomedical Informatics, 77:34–49, 2018.

[35] Piotr Woźnicki, Caroline Laqua, Ina Fiku, Amar Hekalo, Daniel Truhn, Sandy Engelhardt, Jakob Kather, Sebastian Foersch, Tugba Akinci D’Antonoli, Daniel Pinto dos Santos, et al. Automatic structuring of radiology reports with on-premise open-source large language models. European Radiology, pages 1–12, 2024.

[36] Joy T Wu, Nkechinyere N Agu, Ismini Lourentzou, Arjun Sharma, Joseph A Paguio, Jasper S Yao, Edward C Dee, William Mitchell, Satyananda Kashyap, Andrea Giovannini, et al. Chest ImaGenome dataset for clinical reasoning. arXiv preprint arXiv:2108.00316, 2021.

[37] Feiyang Yu, Mark Endo, Rayan Krishnan, Ian Pan, Andy Tsai, Eduardo Pontes Reis, Eduardo Kaiser Ururahy Nunes Fonseca, Henrique Min Ho Lee, Zahra Shakeri Hossein Abad, Andrew Y Ng, et al. Evaluating progress in automatic chest X-ray radiology report generation. Patterns, 4(9), 2023.

[38] Feiyang Yu, Mark Endo, Rayan Krishnan, Ian Pan, Andy Tsai, Eduardo Pontes Reis, EKU Fonseca, Henrique Lee, Zahra Shakeri, Andrew Ng, et al. Radiology report expert evaluation (ReXVal) dataset, 2023.

[39] Juan Manuel Zambrano Chaves, Shih-Cheng Huang, Yanbo Xu, Hanwen Xu, Naoto Usuyama, Sheng Zhang, Fei Wang, Yujia Xie, Mahmoud Khademi, Ziyi Yang, et al. A clinically accessible small multimodal radiology model and evaluation metric for chest X-ray findings. Nature Communications, 16(1):3108, 2025.

[40] Mengliang Zhang, Xinyue Hu, Lin Gu, Tatsuya Harada, Kazuma Kobayashi, Ronald Summers, and Yingying Zhu. CAD-Chest: Comprehensive annotation of diseases based on MIMIC-CXR radiology reports. 2023.

[41] Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675, 2019.

[42] Weike Zhao, Chaoyi Wu, Xiaoman Zhang, Ya Zhang, Yanfeng Wang, and Weidi Xie. RaTEScore: A metric for radiology report generation. arXiv preprint arXiv:2406.16845, 2024.

[43] Xingyu Zheng, Yuye Li, Haoran Chu, Yue Feng, Xudong Ma, Jie Luo, Jinyang Guo, Haotong Qin, Michele Magno, and Xianglong Liu. An empirical study of Qwen3 quantization. arXiv preprint arXiv:2505.02214, 2025.

[44] Hong-Yu Zhou, Subathra Adithan, Julián Nicolás Acosta, Eric J Topol, and Pranav Rajpurkar. A generalist learner for multifaceted medical image interpretation. arXiv preprint arXiv:2405.07988, 2024.

A LUNGUAGE Details

Dataset preparation. LUNGUAGE aims to support patient-level evaluation of chest X-ray reports by modeling longitudinal diagnostic scenarios. To this end, we curated a benchmark dataset from the official test split of MIMIC-CXR, including all 1,473 reports corresponding to 230 patients. Each patient had between 1 and 15 imaging studies. We followed the official MIMIC-CXR preprocessing protocol to extract structured text from each report. Specifically, we parsed the history (including “Indication”), findings, and impression sections. The history/indication field provides contextual information relevant to diagnostic reasoning, such as presenting symptoms (e.g., “fever,” “fatigue,” “cough”) or evaluation intents (e.g., “rule out pneumonia”).
In contrast, the findings and impression
sections describe image-based observations and interpretations. Section-level coverage across the dataset is summarized as:

• History (i.e., Indication): 1,362 reports (92.5%)
• Findings: 1,224 reports (83.1%)
• Impression: 1,015 reports (68.9%)

Among the reports, 767 contained both findings and impression sections, 457 had findings only, 248 had impression only, and 1 contained only a history section. We excluded infrequently occurring sections such as comparison (often containing anonymized metadata using placeholders like “__”) and technique (e.g., “AP view”), as these appeared in fewer than 5% of cases and were not directly relevant to diagnostic content. To preserve diagnostic integrity and linguistic variability, we retained all reports in their original form without content filtering. This includes templated reports (e.g., “No acute cardiopulmonary process”) and incomplete notes. All reports were annotated using our schema-based pipeline with no preprocessing beyond section parsing. Structured reports were constructed while preserving raw textual expressions to ensure alignment with the source language used by radiologists.

Figure A.1: Distribution of the number of imaging studies per patient in LUNGUAGE. Sky-blue bars indicate the number of patients for each trajectory length (i.e., number of chest X-ray studies), reflecting the single-report annotation coverage. Salmon bars represent the subset of patients whose reports are also annotated at the longitudinal level. Values above the bars show the number of patients per group (n =), and for salmon bars, the number of patients with sequential annotations. The legend summarizes the total number of patients and reports included at each annotation level.

A.1 Single-report Schema: Entity and Relation Definition

LUNGUAGE represents each radiology report as a structured collection of (entity, relation, attribute) triplets. This schema is designed to encode the diagnostic content of reports in a form that supports structured analysis, longitudinal reasoning, and machine-readable interpretation. It captures both observable features from chest X-ray (CXR) images and additional contextual elements embedded in clinical narratives.

Entity Types. Entities represent clinically meaningful units such as findings, diagnoses, objects, or background context. Each entity is assigned one of six mutually exclusive Cat (category) labels, depending on whether it originates from the CXR image or external clinical sources. Chest X-ray Findings are entities that can be directly visualized on the chest X-ray or inferred through image-based interpretation, possibly with minimal supporting context. These form the core of radiologic description and are divided into the following types:

• PF (Perceptual Findings): Visual features that are explicitly visible in the image and correspond to anatomical or pathological structures (e.g., “opacity”, “pleural effusion”, “pneumothorax”). These are the most direct and objective form of image evidence.
• CF (Contextual Findings): Diagnoses that require interpretation of visual findings in light of limited contextual knowledge (e.g., “pneumonia”, “congestive heart failure”). These may involve reasoning beyond the image but still rely primarily on radiographic evidence.
• OTH (Other Objects): Non-anatomic elements such as medical devices, surgical hardware, or foreign materials visible on the image (e.g., “endotracheal tube”, “central venous catheter”, “foreign body”).
These often require placement verification or complication monitoring.

Non Chest X-ray Findings are entities that cannot be determined from the image alone and must be inferred
from patient history, clinical documentation, or other diagnostic modalities:

• COF (Clinical Objective Findings): Structured clinical measurements or physical findings derived from sources such as laboratory tests or vital signs (e.g., “elevated white cell count”, “low oxygen saturation”). These provide objective support for contextual interpretation.
• NCD (Non-CXR Diagnosis): Diagnoses that originate from non-CXR modalities (e.g., CT, MRI, serology) and are either mentioned for completeness or used to explain findings (e.g., “stroke”, “AIDS”).
• PATIENT INFO: Historical or subjective patient information, such as symptoms or clinical background, that contributes to interpretation (e.g., “fever”, “history of malignancy”, “recent trauma”).

Each entity is additionally annotated with the following attributes that define its diagnostic interpretation within the report:

• DxStatus: Indicates whether the entity is considered present or absent in the current study. This label is determined from report language and includes implications from stability or change. For example, “resolved effusion” is annotated as Positive, while “unchanged opacity” is Positive unless the prior state was normal, in which case it is Negative.
• DxCertainty: Reflects the level of confidence expressed by the radiologist, labeled as either Definitive or Tentative. Typical cues include phrases like “suggests”, “cannot exclude”, or “possibly indicative of”, all leading to a tentative label.

Relation Types. Relations describe either attributes of a single entity or clinically relevant links between multiple entities. All relations must be grounded in the report text and can span across sentences within the same section.

1. Diagnostic Reasoning. These relations connect semantically and clinically related entities. They encode the logic behind diagnostic interpretation.

• Associate: A bidirectional, non-causal relationship between entities that co-occur or are conceptually linked (e.g., “opacity” ↔ “consolidation”). When Evidence is used, a corresponding Associate is also required in the reverse direction.
• Evidence: A unidirectional relation in which a finding supports a diagnosis (e.g., “pneumonia” → “opacity”).

2. Spatial and Descriptive Attributes. These relations describe intrinsic visual characteristics of an entity as observed within a single chest X-ray image. Unlike temporal attributes, these do not require comparison with prior studies. Instead, they provide descriptive detail that refines the interpretation of a finding or object in terms of location, form, extent, intensity, and symmetry.

• Location: Specifies the anatomical or spatial position of the entity (e.g., “right upper lobe”, “carina above 3 cm”). An entity may have multiple location labels, annotated as a comma-separated list (e.g., “right upper lobe, suprahilar”). Location applies to both disease findings and device placements (e.g., “fragmentation” of “sternal wires”).
• Morphology: Describes the shape, form, or structural appearance of the entity (e.g., “nodular”, “linear”, “reticular”, “confluent”). Morphological terms help differentiate types of opacities or identify characteristic patterns of pathology.
• Distribution: Refers to the anatomical spread or pattern of the entity (e.g., “focal”, “diffuse”, “multifocal”, “bilateral”). This helps characterize whether the finding is localized or widespread, and whether it follows typical anatomical distributions.
• Measurement: Captures quantitative properties such as size, count, or volume (e.g., “2.5 cm”, “few”, “multiple”). These descriptors are typically numerical or ordinal and assist in severity grading or follow-up comparison.
• Severity: Reflects the degree of abnormality or clinical impact, often based on radiologic intensity or extent (e.g., “mild”, “moderate”, “severe”, “marked”).
• Comparison: Indicates asymmetry or difference across anatomical sides or regions within the same image (e.g., “left greater than right”, “right lung appears denser”). This is distinct from temporal comparison and only refers to spatial contrasts visible in the current image.

3. Temporal Change. These relations capture how an entity has changed over time by comparing the current study to previous imaging or known clinical baselines. Temporal attributes are essential for longitudinal interpretation and reflect disease progression, treatment response, or clinical stability. Unlike static descriptors, these attributes require temporal context and often imply clinical decision points.

• Onset: Indicates the timing or duration of a finding as described in the report (e.g., “acute”, “subacute”, “chronic”, “new”). These descriptors suggest whether a condition has recently appeared or has been long-standing.
• Improved: Signals that a finding has regressed or resolved compared to a prior state (e.g., “resolved effusion”, “decreased consolidation”). It is typically associated with positive treatment response or natural recovery.
• Worsened: Indicates that the condition has progressed, increased in extent, or become more severe over time (e.g., “enlarging opacity”, “increased pleural effusion”). This is often associated with disease progression or complications.
• No Change: Describes a finding that has remained stable since a prior study (e.g., “unchanged opacity”, “persistent nodule”). Although these are annotated as Positive by default, they are marked as Negative if the prior state was normal (i.e., continued absence of disease).
• Placement: Applies specifically to entities labeled as OTH (devices). It describes both the position (e.g., “in expected position”, “malpositioned”) and temporal actions involving the device (e.g., “inserted”, “withdrawn”, “removed”). This attribute is crucial for monitoring device-related interventions over time.

4. Contextual Information. This category captures auxiliary information that influences the interpretation of findings but is not a primary descriptor of the radiologic appearance. These relations provide critical contextual cues—such as modality constraints, patient factors, or historical references—that support diagnostic interpretation. While not visual in the conventional sense, they are essential for accurately situating radiologic findings within the broader clinical scenario.

• Past Hx: Refers to the patient’s prior medical or surgical history that contextualizes current findings (e.g., “status post lobectomy”, “known tuberculosis”). These mentions often justify or explain current observations or exclude certain diagnoses.
• Other Source: Indicates that part of the reported information is derived from modalities other than chest X-ray (e.g., “seen on CT”, “confirmed on MRI”). This distinction is important when findings cannot be visualized directly on the image being interpreted.
• Assessment Limitations: Describes technical or procedural factors that constrain the radiologist’s ability to interpret the image accurately (e.g., “poor inspiration”, “rotated patient position”, “limited view due to overlying hardware”). These limitations help qualify the certainty or completeness of the report’s conclusions.
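To illustrate how the interpretive DxStatus and DxCertainty labels defined above might be derived in practice, here is a toy rule-based sketch. The cue lists are drawn from the examples in this section, while the function itself is our illustrative assumption; the benchmark’s labels come from expert annotation and LLM prompting, not from such a heuristic.

```python
# Toy heuristic for DxStatus / DxCertainty from report language, based on the
# cue phrases listed above. Illustrative only; LUNGUAGE's labels were assigned
# by expert annotators, not by this rule set.
TENTATIVE_CUES = ("suggests", "cannot exclude", "possibly indicative of", "may represent")
NEGATION_CUES = ("no ", "without ", "free of ")

def dx_labels(sentence: str, prior_state_normal: bool = False):
    s = sentence.lower()
    certainty = "Tentative" if any(c in s for c in TENTATIVE_CUES) else "Definitive"
    status = "Negative" if any(c in s for c in NEGATION_CUES) else "Positive"
    # "unchanged"/"no change" keeps the prior state: continued absence stays Negative.
    if ("unchanged" in s or "no change" in s) and prior_state_normal:
        status = "Negative"
    return status, certainty

print(dx_labels("left lung opacity suggests pneumonia"))  # ('Positive', 'Tentative')
print(dx_labels("no pleural effusion"))                    # ('Negative', 'Definitive')
```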
A.1.1 Task-specific Vocabulary Construction
To systematically capture the range of descriptive, temporal, spatial, and contextual attributes found in radiologic reporting, we constructed a structured vocabulary of relation terms based on all schema-defined relation types instantiated
in LUNGUAGE. To initiate this process, we first applied GPT-4 to a subset of reports to produce initial structured outputs, from which we extracted candidate terms for each relation type. These candidate vocabularies were then manually reviewed and refined by relation category to ensure clinical accuracy, coverage, and consistency. The primary goals of this process were: (1) to ensure consistency in how lexical expressions are mapped to relation categories, (2) to develop clinically meaningful subcategories within each relation type, and (3) to normalize lexical expressions for downstream applications such as search, reasoning, and integration with structured knowledge resources.

Importantly, our vocabulary only includes relation types that correspond to lexically explicit attributes in the text. We excluded four relation types—EVIDENCE, ASSOCIATE, DXSTATUS, and DXCERTAINTY—which, while critical to the annotation schema, are not represented as direct lexical expressions. EVIDENCE and ASSOCIATE describe reasoning links between entities, often inferred across sentences. DXSTATUS and DXCERTAINTY encode interpretive stance (e.g., presence vs. absence, tentative vs. definitive) and require contextual reading of the sentence. As these relation types are derived from pragmatic interpretation rather than explicit phrases, they fall outside the scope of vocabulary-level normalization.

For the remaining relation types, we extracted all unique values that were directly linked to entities during annotation. Each relation type was reviewed independently by four board-certified physicians to verify accurate categorization, eliminate inconsistencies, and normalize redundant expressions. We further organized each relation into subcategories reflecting finer-grained semantic distinctions that align with radiologic conventions. For example, among the 543 LOCATION terms, we identified 277 unique anatomical paths grouped under higher-level systems: respiratory (229), musculoskeletal (82), cardiovascular (73), and others. Likewise, MORPHOLOGY (218 terms) was divided into shape and structure (116), texture and density (63), and smaller classes such as condition. Temporal progression was captured through ONSET (60), IMPROVED (120), WORSENED (108), and NOCHANGE (138), each of which was subtyped into graded interpretations (e.g., “moderate improvement”, “minimal worsening”). Device-related metadata were structured under PLACEMENT (78), which includes terms for positional accuracy (e.g., “malpositioned”) and procedural changes (e.g., “removed”, “repositioned”). Additional relation types included MEASUREMENT (147 terms across size, quantity, and normality), SEVERITY (89), DISTRIBUTION (37), and COMPARISON (46).

We also captured auxiliary contextual information that, while potentially observable on imaging, typically reflects non-primary or supportive elements in interpretation. This includes ASSESSMENT LIMITATIONS (296 terms), categorized into four major types: evaluation limitations (143), patient-related limitations (72), field-of-view limitations (55), and technical limitations (26). Other categories include OTHER SOURCE (56), which marks references to non-CXR modalities (e.g., CT, MRI), and PAST HX (41), which captures historical clinical references. The resulting vocabulary includes 14 relation types derived from lexical evidence, each organized into coherent subtypes that reflect the nuances of radiologic description.
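The curated vocabulary can thus be pictured as a mapping from relation type to subcategory to preferred terms, with surface variants normalized onto a preferred form. The layout and example entries below are hypothetical, chosen only to illustrate the organization described above.

```python
# Hypothetical layout of the curated vocabulary: each relation type holds
# subcategories, and surface variants map to a normalized preferred term.
VOCABULARY = {
    "LOCATION": {
        "respiratory": {"left lower lobe": ["LLL", "left base"]},
        "musculoskeletal": {"right clavicle": ["right clavicular region"]},
    },
    "MORPHOLOGY": {
        "shape_and_structure": {"nodular": ["nodule-like"]},
        "texture_and_density": {"ground glass": ["ground-glass"]},
    },
}

def normalize(relation: str, surface: str) -> str:
    """Map a surface form to its preferred term, if one is known."""
    for subcategory in VOCABULARY.get(relation, {}).values():
        for preferred, variants in subcategory.items():
            if surface == preferred or surface in variants:
                return preferred
    return surface  # fall back to the raw expression

print(normalize("MORPHOLOGY", "ground-glass"))  # -> ground glass
```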
Normalized forms were retained as preferred terms, and inconsistent variants were removed. Although formal UMLS mapping was not enforced—given that many of the relation terms lie outside conventional ontologies—we ensured lexical consistency and clinical interpretability to support future integration efforts. This curated vocabulary enables fine-grained modeling of chest X-ray reports and ensures that structured annotations reflect a clinically grounded and
internally consistent taxonomy of radiologic language, aligned with the conventions of routine diagnostic documentation.

A.1.2 Single Annotation Details
To construct a clinically reliable gold-standard dataset, we implemented a structured annotation pipeline that reviewed and refined the initial triplets generated by GPT-4 (0613). Unlike the vocabulary construction phase—which focused on individual terms without considering report context—this stage involved section-by-section review of all structured outputs in each report to ensure contextual accuracy and logical consistency.

All 1,473 chest X-ray reports in LUNGUAGE were divided evenly among annotators. Each annotator independently reviewed approximately one-quarter of the dataset, ensuring balanced coverage and minimizing reviewer bias across the annotated corpus. Within each report, annotators examined the structured outputs across the history/indication, findings, and impression sections. The goal was to verify whether the extracted (entity, relation, attribute) triplets accurately captured the meaning of the source text and aligned with the predefined schema.

This review explicitly included schema elements that require contextual interpretation and cannot be evaluated at the lexical level alone—namely, DXSTATUS, DXCERTAINTY, ASSOCIATE, and EVIDENCE. These attributes reflect interpretive judgments, such as identifying when an “opacity” supports a diagnosis of “pneumonia” or whether two entities should be linked through an associative relation. Annotators verified whether such relations were correctly inferred from the surrounding text and whether the attributes assigned to each entity (e.g., presence, uncertainty, temporal change) matched the narrative context.

To support this process, we developed a custom annotation interface (Figure A.2) that displayed the original report text alongside GPT-4’s predicted triplets and an editable table of structured fields. Each sentence in the report was paired with its associated annotations, including entity category, relation type, and all relevant attributes. Annotators could directly add, edit, remove, or merge entries to reflect clinically accurate interpretations. For example, terms like “ground glass opacity”—which could be mistakenly split—were merged into a single PF (perceptual finding) entity based on how radiologists commonly use the phrase. Annotation was conducted separately for each section (history, findings, impression), and the interface supported sentence-level review within each section to ensure consistent entity–relation mappings when terms appeared across multiple sentences.

As a result of this process, the finalized gold dataset includes 17,949 validated entities and 23,307 relation instances. These annotations reflect both explicit descriptive attributes and contextually inferred diagnostic relationships, providing a robust benchmark for evaluating schema-based information extraction systems in chest radiograph interpretation.

A.2 Sequential Annotation Details
In contrast to the single-report structuring phase, which focused on refining schema-based annotations within individual reports, the sequential annotation phase aimed to assess the longitudinal consistency of entity-level interpretations across temporally ordered reports from the same patient. This required global comparisons across all sections—history, findings, and impression—integrating entity–relation triplets into clinically coherent sequences.
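A minimal sketch of how one annotated finding might be represented across both phases is shown below: the (entity, relation, attribute) triplets from single-report annotation, plus the entity-group and temporal-group assignments added in the sequential phase. Field names are illustrative assumptions, not the dataset's actual serialization.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record layout for one annotated finding.
@dataclass
class Triplet:
    entity: str        # e.g., "ground glass opacity" (one PF entity)
    relation: str      # e.g., "LOCATION", "DXSTATUS"
    attribute: str     # e.g., "left lower lobe", "positive"

@dataclass
class Finding:
    triplets: list[Triplet]
    section: str                          # history / findings / impression
    study_date: str                       # StudyDate metadata from MIMIC-CXR
    entity_group: Optional[int] = None    # same underlying clinical entity
    temporal_group: Optional[int] = None  # distinct episode of care

f = Finding(
    triplets=[Triplet("opacity", "LOCATION", "right lung base")],
    section="findings",
    study_date="2180-06-01",
)
print(f.entity_group is None)  # groups are assigned in the sequential phase
```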
Unlike earlier phases that processed each report independently, this step involved exhaustive pairwise comparisons of all annotated expressions across time. Annotators judged whether lexically distinct phrases referred to the same underlying clinical entity by examining radiological terminology, anatomical location, temporal modifiers (e.g., “resolving”, “unchanged”), and diagnostic specificity. Expressions identified as referring to the same finding were grouped
together; otherwise, they were assigned to separate entity groups. To further structure these entity groups, we assessed whether each represented a single episode of care or multiple distinct episodes. This required examining the temporal order and interval between observations. Intervals were computed using the StudyDate metadata from MIMIC-CXR, and episode boundaries were assigned based on temporal coherence—considering factors such as time gaps, patterns of resolution or worsening, and recurrence of findings.

Figure A.2: Annotation interface used during gold dataset construction. Annotators reviewed GPT-4-generated triplets per report section and refined the entity–relation structure to ensure schema correctness and contextual validity.

For example, a progression from “moderate left effusion” (day 0) to “small effusion” (day 14) and “trace effusion” (day 45) was treated as a single resolving episode. However, a subsequent “moderate effusion” on day 180 was regarded as a separate episode, while all entities assigned to either episode are grouped into the same Entity Group. Similarly, “right lower lobe opacity” followed by “resolving infiltrate” was interpreted as one episode, whereas a new “opacity” on day 150 initiated a different episode. This process was applied to 80 chest X-ray reports from 10 patients, yielding longitudinal annotations that capture consistent entity grouping across lexical variations and clinically coherent organization of episodes based on temporal reasoning.

To better characterize the annotation results, we summarize the distribution of entity groupings and temporal episodes in Table A.1. The columns report:
• # Reports: The total number of reports per patient sequence.
• Entity Group Distribution: The number of findings assigned to each entity group (#Group), after normalization and longitudinal reasoning. Some groups consist of a single unique expression, while others aggregate multiple semantically related terms.
• Temporal Group Distribution: The number of findings assigned to each temporal group (#Group), where each group represents a distinct clinical episode.

Table A.1: Distribution of entity groups and temporal groups across annotated patient sequences.

Subject ID | # Reports | Entity Group Distribution (#Group:Count) | Temporal Group Distribution (#Group:Count)
p10274145 | 5  | 1:19, 2:11, 3:2, 4:3                      | 1:33, 2:2
p10523725 | 9  | 1:36, 2:6, 3:3, 4:2, 5:2, 7:2             | 1:47, 2:2, 3:1, 6:1
p10886362 | 10 | 1:26, 2:3, 3:6, 4:4, 6:1, 7:1, 9:1, 13:1  | 1:39, 2:4
p10959054 | 7  | 1:31, 2:6, 3:2, 4:2, 5:1, 6:1, 9:1        | 1:37, 2:5, 3:2
p12433421 | 13 | 1:49, 2:6, 3:10, 5:1, 7:1, 17:1           | 1:66, 2:2
p15321868 | 6  | 1:24, 2:5, 3:2, 4:1, 5:2                  | 1:32, 2:2
p15446959 | 5  | 1:29, 2:7, 3:3, 4:2                       | 1:37, 2:4
p15881535 | 3  | 1:17, 2:2, 3:2, 5:1                       | 1:20, 2:2
p17720924 | 8  | 1:30, 2:8, 3:5, 4:1, 5:1                  | 1:41, 2:2, 4:2
p18079481 | 14 | 1:34, 2:10, 3:3, 4:2, 6:3, 7:3, 8:1       | 1:43, 2:10, 3:3

Across the 10 patients in the sequential evaluation phase, the number of temporal groups assigned to a single entity group ranged from 1 to 6, indicating that some findings were observed in multiple distinct clinical episodes over time. Likewise, the number of distinct entity groups varied significantly. Most entity groups consisted of a single mention, but some aggregated up to 17 lexically different expressions. For example, subject p12433421 exhibited the most diverse entity grouping, with 17 distinct phrases all referring to variations of pleural effusion (e.g.,
“effusion,” “pleural effusion,” “pleural effusion left”) unified under one normalized cluster. Similarly, subject p10523725 had the highest number of temporal groups (6) within a single entity group, driven by repeated mentions of dyspnea across non-contiguous timepoints. These results highlight the complexity and variability of radiologic expression in longitudinal reporting, and underscore the necessity of models and metrics capable of robustly handling both semantic variation and episodic continuity in time-aware clinical tasks.

B Framework details

B.1 Overview
Figure B.1: Overview of our end-to-end pipeline. We begin with gold-standard structured reports (LUNGUAGE) created by radiologists. Candidate free-text reports are generated by a report model and structured via our two-stage framework: (1) schema-aligned extraction (Framework (Single)), and (2) longitudinal grouping and normalization (Framework (Sequential)). Candidate and gold outputs are aligned by entity and temporal groups, and evaluated using LUNGUAGESCORE across semantic, temporal, and structural dimensions. Std. timepoint denotes the acquisition date of each chest X-ray study.

Framework Overview and Evaluation Setup. Figure B.1 presents a complete overview of our pipeline, integrating the three core contributions of this study: the construction of the LUNGUAGE benchmark, the development of a two-stage LLM-based structuring framework, and the design of LUNGUAGESCORE, a clinically grounded evaluation metric. We begin with gold-standard annotations encompassing both single-report structures and longitudinal sequences. Our two-stage framework first applies schema-aligned extraction to derive entity–attribute–relation triplets from free-text inputs (Framework (Single)), and subsequently performs longitudinal normalization and temporal grouping across studies to identify consistent findings and clinically coherent episodes (Framework (Sequential)). After this process, the structured candidate report is compared to the gold-standard annotations of the reference report using fine-grained matching that incorporates semantic similarity, temporal coherence, and structural attribute alignment. These dimensions are jointly assessed by LUNGUAGESCORE, which computes similarity scores based on the full set of extracted and grouped triplets.

B.2 Single Setting Prompt
Figure B.2: Prompt template used for single-report structuring of chest X-ray findings. The model receives section-wise input sentences along with vocabulary-based candidate spans and is instructed to extract relations and attributes.

B.3 Vocabulary Matching Algorithm
To improve consistency in entity extraction and reduce hallucinations in schema-based structuring, we implemented a vocabulary-guided span matching algorithm (see Appendix A.1.1 for details on vocabulary construction). This algorithm processes each section of the radiology report (e.g., findings) to identify candidate entity spans by directly matching contiguous token sequences against entries in a schema-defined vocabulary, without normalization such as lowercasing or punctuation removal. Each sentence is evaluated independently, and multiple overlapping matches are retained—e.g., “left lung” may correspond to both PF and LOCATION. Importantly, the matched vocabulary spans are not assumed to constitute a complete or authoritative set of entities. Instead, they serve as reference cues for the LLM, which remains responsible for the final relation extraction.
The LLM is expected to leverage the matched terms as guidance while retaining the flexibility to identify additional entities or values not covered by the vocabulary. This design accommodates incompleteness in the vocabulary and enables the model to make context-sensitive inferences based on both the prompt and observed patterns in the data.
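A minimal Python sketch of this span-matching procedure (formalized as Algorithm 1 below) follows. The toy vocabulary and the crude sentence splitter are assumptions for illustration; matching is exact, with no lowercasing or punctuation stripping, as described above.

```python
import re

# Sketch of span-based vocabulary matching: exact matching of contiguous
# word spans, longest spans first, with all overlapping matches retained.
VOCAB_LOOKUP = {                      # surface form -> schema categories
    "left lung": ["PF", "LOCATION"],
    "consolidation": ["PF"],
}

def match_spans(section_text: str):
    matches = []
    for sentence in section_text.split(". "):      # crude sentence split
        words = [(m.group(), m.start(), m.end())
                 for m in re.finditer(r"\S+", sentence)]
        n = len(words)
        for length in range(n, 0, -1):             # longest spans first
            for i in range(n - length + 1):
                start = words[i][1]
                end = words[i + length - 1][2]
                span = sentence[start:end]         # character-level offsets
                for category in VOCAB_LOOKUP.get(span, []):
                    matches.append((span, start, end, category))
    return matches

print(match_spans("no consolidation in the left lung"))
# [('left lung', 24, 33, 'PF'), ('left lung', 24, 33, 'LOCATION'),
#  ('consolidation', 3, 16, 'PF')]
```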
The matching algorithm is summarized in Algorithm 1.

Algorithm 1: Span-Based Vocabulary Matching
1: Input: Curated vocabulary V; report section T composed of multiple sentences.
2: Output: List of matched word spans in T, each labeled with one or more schema categories.
3: Build a dictionary V_lookup from surface forms in V, mapping each to one or more associated schema categories.
4: for each sentence s in T do
5:   Split s into a sequence of n words, each with character-level start and end offsets
6:   for span length l from n down to 1 do
7:     for start index i = 0 to n − l do
8:       Extract word span s_{i:i+l} and its character range from the original sentence
9:       Query V_lookup for an exact match of the word span
10:      if match found then
11:        for each schema category linked to the matched term do
12:          Record span text, character start/end indices, matched term, and category
13:        end for
14:      end if
15:    end for
16:  end for
17: end for
18: return List of matched spans with associated categories

This procedure constrains entity recognition to schema-aligned expressions, allowing the LLM to focus on inferring relational structure rather than determining precise span boundaries. By anchoring extraction to predefined lexical targets, it reduces ambiguity and ensures consistent treatment of clinically equivalent yet lexically variable expressions.

B.4 Sequential Setting Prompt
Figure B.3: Prompt template provided to the LLM for sequential structuring of radiologic findings. The model is instructed to group terms referring to the same clinical observation and to identify episode boundaries based on time intervals and progression patterns. Grouping and temporal disambiguation criteria are embedded in the prompt, following the structured annotation protocol.

B.5 Single Setting Analysis
Table B.1: Ablation results of GPT-4.1 under varying prompt-shot configurations and vocabulary matching. We report precision (P), recall (R), and F1 scores for both entity-relation pair extraction and complete triplet (entity-relation-attribute) extraction.

Shot    | Vocab Usage | entity-relation (F1 / P / R) | entity-relation-attribute (F1 / P / R)
Zero    | No          | 0.79 / 0.65 / 1.00           | 0.52 / 0.65 / 0.44
Zero    | Yes         | 0.92 / 0.85 / 1.00           | 0.78 / 0.80 / 0.77
5-shot  | No          | 0.93 / 0.87 / 1.00           | 0.84 / 0.85 / 0.83
5-shot  | Yes         | 0.94 / 0.89 / 1.00           | 0.87 / 0.87 / 0.86
10-shot | No          | 0.94 / 0.88 / 1.00           | 0.86 / 0.86 / 0.85
10-shot | Yes         | 0.96 / 0.91 / 1.00           | 0.89 / 0.90 / 0.87

We conducted an ablation study to quantify the individual and combined effects of vocabulary matching and in-context demonstrations on single-report structuring. We used the 80 radiology reports from 10 patients previously annotated for sequential evaluation; this subset enabled consistent evaluation across controlled input conditions. Six configurations were tested by varying two factors: (1) whether span-to-category alignment via vocabulary matching was applied, and (2) the number of in-context examples provided in the prompt (0, 5, or 10). Vocabulary matching involved matching contiguous text spans against a predefined lexicon and retrieving all associated schema categories, ensuring lexical consistency and reducing ambiguity in span interpretation, as described in Appendix B.3. In-context demonstrations consisted of structured examples retrieved from the gold set of structured reports using BM25 retrieval, based on textual similarity to the input report. These examples illustrate appropriate usage of entity types and relations under the schema.

As shown in Table B.1, vocabulary matching consistently enhanced performance across all prompt configurations.
Under the zero-shot setting, incorporating vocabulary guidance raised the triplet-level F1 score from 0.52 to 0.78, and the entity-relation F1 from 0.79 to 0.92. When five in-context demonstrations were provided, the triplet F1 increased further, reaching 0.84 without vocabulary and 0.87 with vocabulary. The highest accuracy was achieved by combining both components: the 10-shot setting with vocabulary matching attained a triplet F1 of 0.89. These results indicate that vocabulary matching and in-context demonstrations offer complementary benefits. Vocabulary alignment improves lexical grounding and category consistency, while prompting with examples strengthens structural fidelity across varying linguistic expressions. Together, they establish a robust configuration for producing schema-compliant structured outputs from free-text radiology reports.

To illustrate the qualitative impact of vocabulary matching and prompt-based demonstrations, we examined example outputs across configurations with and without these components. In the sentence “there is no focal consolidation”, the model without vocabulary and prompt guidance extracted “focal consolidation” as the entity, conflating the modifier and the core clinical concept. In contrast, all other configurations correctly identified “consolidation” as the schema-aligned entity. A similar pattern was observed in “there are no new focal opacities concerning for pneumonia”, where the no-guidance setup extracted “focal opacities”, whereas guided configurations yielded the correct entity “opacities”.

These examples underscore the importance of explicitly aligning model outputs to a predefined schema. Linguistically valid but structurally inconsistent extractions can hinder downstream applications, where precise interpretation and reliable information linkage are essential. By providing lexical anchoring through vocabulary and structural demonstrations via prompts, our approach ensures that model predictions are not only accurate but also semantically coherent and clinically usable.

B.6 Sequential Setting Analysis
We qualitatively evaluated model behavior in the sequential setting by analyzing entity grouping outputs over time. Using longitudinal chest X-ray reports from a single patient, we assessed how well the predicted entity groupings aligned with gold-standard annotations. As illustrated in Figure B.4, the patient underwent three imaging studies at 0, 1292, and 1591 days from the initial scan, enabling a detailed examination of temporal consistency in entity tracking.

Most clinical observations were consistently grouped across both annotations. For instance, three lexical variants—orthopedic side plate right clavicular unchanged, right clavicle hardware, and internal fixation hardware—were all correctly assigned to the same entity group in both the gold standard and the model output. Although the representative phrase differed (orthopedic side plate... in the reference versus right clavicle hardware in the prediction), the group identity was preserved, indicating successful recognition of referential equivalence across timepoints.

Nevertheless, several discrepancies emerged. One involved two temporally separated mentions of pneumonia, which were grouped together in the gold annotations but split into separate groups in the model output. This divergence arose because the model treated the findings differently based on their diagnostic status (e.g., resolved vs. new).
Such behavior suggests that the model failed to fully comply with the grouping principle that emphasizes radiological identity over contextual modifiers like status or timing. Another deviation was observed in the handling of evolving descriptions related
to opacity in the right cardiophrenic sulcus. Whereas the gold annotations grouped temporally related expressions (e.g., opacity ... interval resolution) into a single entity, the model assigned each instance to a separate group. This highlights the model’s limited ability to incorporate temporal continuity cues such as “improving” or “resolving” when constructing entity-level associations.

Despite these localized inconsistencies, the overall grouping performance remained robust. In many cases, LUNGUAGESCORE reported high similarity scores between predicted and reference structures, indicating that the model preserved the essential semantic structure even when precise grouping boundaries differed slightly. These findings support the reliability of our sequential annotation approach for tracking clinically meaningful entities over longitudinal report timelines.

Figure B.4: Entity grouping results for patient p15881535 based on sequential chest X-ray reports, comparing human-annotated gold-standard groupings (rows) with GPT-4.1 model predictions (columns). Each numbered cell corresponds to an individual finding, expressed as a linearized phrase that combines the entity and its attributes. While group labels may vary slightly in wording, alignment is assessed based on 1:1 row-to-column correspondence. Among 34 total findings, the gold standard forms 22 groups and the model predicts 25; excluding 3 grouping mismatches, the results show strong agreement, illustrating the model’s adherence to temporal and semantic grouping criteria.

C LUNGUAGESCORE Details

C.1 Attribute Weights of LUNGUAGESCORE
To reflect the clinical importance of structured attributes in radiology reports, LUNGUAGESCORE applies attribute-specific weights when measuring similarity between predicted and reference structures. Each comparison is performed at the level of relational triplets, jointly assessing both temporal and structural alignment. For structural attributes, we assign weights based on expert consensus from the four board-certified radiologists who participated in the data annotation process, reflecting each attribute’s diagnostic significance. Although the initial weights are unnormalized, they are rescaled such that their total contribution sums to 1.0 during evaluation (see Table C.1). For the sequential setting, temporal alignment contributes a fixed weight of 1.0, divided equally between two components: whether the predicted and reference findings belong to the same study timepoint (0.5), and whether they fall within the same temporal group (0.5).

Although our schema includes inferential relations such as ASSOCIATE and EVIDENCE, these are intentionally excluded from the evaluation metric. Such relations capture diagnostic reasoning—e.g., linking “opacity” as supporting evidence for “pneumonia”—but do not directly reflect the correctness of factual information. Scoring them would conflate interpretive inference with structural accuracy. Instead, our metric focuses on clinically grounded descriptors and attributes that define the diagnostic content of the report. Future extensions may consider integrating reasoning-based relations in settings that explicitly target causal or explanatory fidelity.

Table C.1: Weights used in LUNGUAGESCORE for evaluating structural similarity. Temporal weights apply only in the sequential setting, while structural attribute weights reflect the diagnostic importance of each relation type.
All values are normalized such that their respective groups (temporal or structural) sum to 1.0 during evaluation.

Temporal Weights        | Value
Study Timepoint         | 0.5
Temporal Group          | 0.5

Structural Attribute Weights | Value
DXSTATUS                     | 0.50
DXCERTAINTY                  | 0.10
LOCATION                     | 0.20
SEVERITY                     | 0.15
ONSET                        | 0.15
IMPROVED                     | 0.15
WORSENED                     | 0.15
PLACEMENT                    | 0.15
NOCHANGE                     | 0.10
MORPHOLOGY                   | 0.05
DISTRIBUTION                 | 0.05
MEASUREMENT                  | 0.05
COMPARISON                   | 0.03
PAST HX                      | 0.01
OTHER SOURCE                 | 0.01
ASSESSMENT LIMITATIONS       | 0.01

C.2 LUNGUAGESCORE Examples

Single-Report Assessment. To illustrate how LUNGUAGESCORE evaluates structured prediction quality in the single-report setting, we present detailed examples of pairwise comparisons between predicted and gold-standard structured reports. As detailed in Section 5 of the main text, each comparison is decomposed into two complementary components:
• Semantic Score: Computed as the cosine similarity between embedded linearized entity phrases. These phrases are formed by concatenating free-text attributes, including LOCATION, MORPHOLOGY, DISTRIBUTION, MEASUREMENT, SEVERITY, ONSET, IMPROVED, WORSENED, NOCHANGE, and PLACEMENT. This representation captures the semantic content of the entity and its descriptive qualifiers, allowing similarity to be measured in an integrated manner.
• Structural Score: A weighted sum of attribute-wise comparisons. Categorical attributes (DXSTATUS and DXCERTAINTY) are scored in binary fashion (1.0 for exact match, 0.0 otherwise), while all other attributes are evaluated via cosine similarity of their embeddings. The relative importance of each attribute is determined by expert-defined weights (see Table C.1).

The final similarity between a predicted and reference finding is calculated as the product of the semantic and structural scores:

TOTAL SCORE = Semantic Score × Structural Score

Note: Entity refers to the linearized phrase comprising the core entity and its attributes. Avg. Cosine indicates cosine similarity averaged over MedCPT [14] and BioLORD23 [29] embeddings of the phrases. Weights shown in the tables reflect unnormalized values; the final STRUCTURAL SCORE is computed by normalizing the weighted sum by the total weight of all included attributes. For a more formal explanation of the scoring method, we refer to Section 5 in the main text.

Example 1: Moderate Match with Attribute-Level Divergence
Attribute   | GT Value                          | Pred Value                               | Match Type  | Score | Weight
Entity      | effusions bilateral small pleural | effusion left-sided pleural small stable | Avg. Cosine | 0.743 | —
DxStatus    | positive                          | positive                                 | Exact match | 1.00  | 0.50
DxCertainty | definitive                        | definitive                               | Exact match | 1.00  | 0.10
Location    | bilateral                         | left-sided pleural                       | Avg. Cosine | 0.54  | 0.20
Severity    | small                             | small                                    | Exact match | 1.00  | 0.15
Improved    | —                                 | stable                                   | Avg. Cosine | 0.00  | 0.15
Semantic Score = 0.743, Structural Score = 0.681, Total Score = 0.506

Example 2: Partial Match with Location and Severity Differences
Attribute   | GT Value                       | Pred Value                    | Match Type  | Score | Weight
Entity      | opacification left retrocardiac | pleural effusion left moderate | Avg. Cosine | 0.447 | —
DxStatus    | positive                       | positive                      | Exact match | 1.00  | 0.50
DxCertainty | definitive                     | definitive                    | Exact match | 1.00  | 0.10
Location    | left retrocardiac              | left                          | Avg. Cosine | 0.60  | 0.20
Severity    | —                              | moderate                      | Avg. Cosine | 0.00  | 0.15
Semantic Score = 0.447, Structural Score = 0.758, Total Score = 0.339

Example 3: Strong Match with Minor Lexical Variants
Attribute   | GT Value                | Pred Value                           | Match Type  | Score | Weight
Entity      | opacity right lung base | opacity right lower lung base stable | Avg. Cosine | 0.842 | —
DxStatus    | positive                | positive                             | Exact match | 1.00  | 0.50
DxCertainty | definitive              | definitive                           | Exact match | 1.00  | 0.10
Location    | right lung base         | right lower lung base                | Avg. Cosine | 0.95  | 0.20
Improved    | —                       | stable                               | Avg. Cosine | 0.00  | 0.15
Semantic Score = 0.842, Structural Score = 0.902, Total Score = 0.759

Sequential-Report Assessment. To clarify how LUNGUAGESCORE computes similarity in the sequential setting, we present illustrative examples comparing gold-standard and predicted findings.
Each score is computed from three components:
• Semantic Score: In the sequential-report setting, semantic similarity is computed between ENTITY GROUP representations, which group together lexically variable but conceptually equivalent findings observed at different timepoints.
• Temporal Score: A value of 1.0 if both findings appear in the same study timepoint and in the same TEMPORAL GROUP; 0.5 if they belong to the same broader TEMPORAL GROUP but come from different studies, or vice versa. If neither matches, the score is 0.
• Structural Score: Weighted average of attribute-level matches (exact for binary attributes, cosine similarity for textual ones).

The overall similarity score is computed as:

Total Score = Semantic Score × Temporal Score × Structural Score

Table C.2: Examples of LUNGUAGESCORE computations in the sequential setting. Each row compares a predicted finding against the corresponding ground-truth reference. Total Score is computed as the product of semantic similarity, temporal alignment, and structural accuracy. Time denotes the study timepoint, and TG indicates the assigned temporal group.

Case | GT EntityGroup (Time, TG) | Pred EntityGroup (Time, TG) | Explanation | Total (Sem × Temp × Str)
1 | pleural effusion subpulmonic moderate (2, 1) | pleural effusion right subpulmonic layering moderate stable (2, 1) | Minor semantic variation in anatomical modifiers and progression terms | 0.68 (0.82 × 1.0 × 0.83)
2 | hilar contours stable (3, 1) | hilar contours unchanged (3, 1) | Semantically equivalent; lexical variation in stability descriptor | 0.90 (0.93 × 1.0 × 0.97)
3 | atelectasis left lower lobe mild-to-moderate (1, 1) | atelectasis left lower lobe unchanged (2, 1) | Different timepoints (0.5); severity term vs. stability term mismatch | 0.35 (0.92 × 0.50 × 0.76)
4 | PICC mid SVC (2, 1) | left PICC mid SVC (1, 1) | Core entity match with modifier discrepancy; higher specificity in prediction; different timepoints | 0.45 (0.90 × 0.50 × 1.00)
5 | hilar contours unchanged (2, 1) | cardiomediastinal silhouette unchanged (3, 1) | Semantically related anatomical terms; timepoint mismatch (0.5) | 0.34 (0.68 × 0.50 × 1.00)

Final Scoring and Interpretability. LUNGUAGESCORE calculates a TOTAL SCORE for each matched pair of predicted and reference findings by combining semantic similarity and structural alignment. In the single-report setting, the total score is defined as the product of cosine similarity over linearized entity phrases and a weighted score of attribute-level matches. In the sequential setting, the metric further incorporates a temporal alignment factor, distinguishing between exact study-time matches and broader temporal group continuity. These component-wise scores are then aggregated across matched pairs to compute the overall F1 metric, as detailed in Section 5. Crucially, each comparison yields interpretable diagnostics: the semantic score quantifies lexical alignment of free-text descriptors; the structural score exposes attribute-level agreement or divergence; and in longitudinal contexts, the temporal score reveals whether grouping decisions respect continuity over time. By exposing this granularity, LUNGUAGESCORE not only delivers a robust scalar evaluation, but also supports nuanced error analysis—highlighting which components of a model’s output (e.g., misassigned severity, incorrect timing, lexical drift) most strongly influenced final performance. This interpretability makes the metric especially valuable for understanding a model’s behavior.
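The score composition described above can be sketched compactly in Python. This is a minimal sketch under stated assumptions: the `cosine` function is a stand-in for the averaged MedCPT/BioLORD embedding similarity, and only a few weights from Table C.1 are included; the structural score divides the weighted sum by the total weight of the attributes actually compared, as the Note above specifies.

```python
# Sketch of the TOTAL SCORE composition (single setting: temporal = 1.0).
WEIGHTS = {"DXSTATUS": 0.50, "DXCERTAINTY": 0.10, "LOCATION": 0.20,
           "SEVERITY": 0.15, "IMPROVED": 0.15}
BINARY = {"DXSTATUS", "DXCERTAINTY"}

def cosine(a: str, b: str) -> float:
    # Placeholder for the averaged embedding similarity, not a real model.
    if not a or not b:
        return 0.0
    return 1.0 if a == b else 0.5

def structural_score(gt: dict, pred: dict) -> float:
    total_w, score = 0.0, 0.0
    for attr, w in WEIGHTS.items():
        if attr not in gt and attr not in pred:
            continue                      # attribute absent on both sides
        if attr in BINARY:
            sim = 1.0 if gt.get(attr) == pred.get(attr) else 0.0
        else:
            sim = cosine(gt.get(attr, ""), pred.get(attr, ""))
        score += w * sim
        total_w += w
    return score / total_w if total_w else 0.0

def total_score(semantic: float, structural: float, temporal: float = 1.0) -> float:
    return semantic * temporal * structural

gt = {"DXSTATUS": "positive", "LOCATION": "bilateral", "SEVERITY": "small"}
pr = {"DXSTATUS": "positive", "LOCATION": "left-sided", "SEVERITY": "small"}
print(round(total_score(0.74, structural_score(gt, pr)), 3))  # -> 0.653
```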
C.3 Clinical BERT Model Selection
We considered multiple clinical BERT models for computing contextual semantic embeddings. The candidate models we compared were BioLORD [29], BiomedBERT [
10], MedCPT [14], BioClinicalBERT [2], ClinicalBERT [20], and BioBERT [17]. To decide which models to use in the semantic similarity step of LUNGUAGESCORE, we conducted an experiment over ReXVal, a subset of the MIMIC-CXR test set encompassing 50 randomly selected studies. We structured each individual study according to our framework described in Section 4(i), and then generated all linearized phrases derived from entity–location–attribute triplets for both the reference report and the candidate report.

Figure C.1: Distribution of pairwise cosine similarity scores for different BERT embedding models, calculated between pairs of embedded linearized phrases taken from the ReXVal dataset.

We then used each candidate BERT embedding model to generate an embedding for each phrase, and computed the pairwise cosine similarity for all pairs of phrases (one from the reference report and one from the candidate report). Figure C.1 shows the distribution of this similarity score for the different BERT embedding models. We find that BiomedBERT, BioClinicalBERT, ClinicalBERT, and BioBERT lack variety, always scoring pairs of phrases as highly related. BioLORD captures the most diversity in semantic similarity, followed by MedCPT. For this reason, we choose to use both BioLORD and MedCPT to calculate semantic similarity, taking the average over both models.

D Metric Validation

Metric Implementation Details. Whenever not further specified, we used default settings for all the metrics as provided by their respective libraries. For BLEU, we use the implementation provided in the huggingface/evaluate library. For BERTScore, we also use the implementation from the huggingface/evaluate library, with distilroberta-base as an embedding model. For GREEN, we use StanfordAIMI/GREEN-radllama2-7b as a language model. For FineRadScore, we use GPT-4 as a language model, which responds with a list of errors, each linked to a severity level. To turn this into a score, we associate each severity level with a number and sum these scores, forming FineRadScore as proposed by Huang et al. [12]. In our tables, we report 1/FineRadScore, inverting the total sum to ensure that a higher score is associated with higher quality. For RaTEScore, we use their default weight matrix. Note that in their own comparison with ReXVal, the authors used a custom weight matrix trained specifically for long reports instead of the default, explaining the slight discrepancy between their reported Kendall Tau correlation with ReXVal radiologists and the one we report in Table 2.

ReXVal Analysis. To assess the consistency of our metric with established evaluation standards, we conducted a correlation analysis across the ReXVal benchmark, which includes expert-annotated radiology reports and associated error counts. Specifically, we computed pairwise Pearson correlations between all single-report metrics over the ReXVal dataset. As presented in Figure D.1, our metric exhibits strong positive correlations with BLEU (0.73), BERTScore (0.77), GREEN (0.84), RaTEScore (0.77), and 1/FineRadScore (0.73). Notably, among all evaluated metrics, our score achieves the highest average correlation across all pairwise comparisons, indicating strong alignment with multiple evaluation perspectives and suggesting broader generalizability. Furthermore, Figure D.2 illustrates the linear relationship between each metric and the number of radiologist-identified errors per ReXVal report. Although 1/FineRadScore
shows the highest overall correlation, its relationship with error counts is not consistently linear, especially when the number of errors is low. In these cases, where distinguishing between high-quality outputs is most crucial, its ability to make fine-grained distinctions is limited. In contrast, our metric not only maintains strong correlation but also demonstrates stable linear responsiveness across the full error range, underscoring its robustness and reliability as a clinically aligned evaluation measure.

Figure D.1: Pairwise Pearson correlations between our metric (LUNGUAGESCORE) and the metrics BLEU, BERTScore, GREEN, 1/FineRadScore, and RaTEScore.

Figure D.2: Scatter plot illustrating the correlation between the total number of errors identified by radiologists per report and each of the single-report metrics, including our LUNGUAGESCORE. r indicates the Pearson correlation as reported in Table 2.

Error Sensitivity Analysis with ReXErr [27]. To assess the error sensitivity of our metric across diverse failure types in radiology report generation, we use the ReXErr-v1 dataset [28], which contains synthetic reports with systematically injected clinical errors. These errors are categorized into content addition, context-dependent, and linguistic quality types, covering a broad spectrum of realistic mistakes. We focus on the subset of ReXErr aligned with our sequential structured report dataset, comprising 57 MIMIC-CXR reference reports paired with corresponding error-injected versions. Each manipulated report contains three injected errors, drawn from 12 defined error categories using a context-sensitive sampling method. For each pair, we extract the Findings and Impression sections and evaluate them independently using our single-report LUNGUAGESCORE, along with established alternatives: GREEN, FineRadScore, and RaTEScore. Figure D.3 displays the score distributions for each of the 12 error types, relative to the average score across the subset. Our metric demonstrates differentiated sensitivity across error types, with notably larger penalizations for false predictions, incorrect negations, and changes in severity—reflecting its alignment with clinically meaningful deviations.

Sequential Sensitivity Analysis. We further assessed the sensitivity of LUNGUAGESCORE to clinically meaningful disruptions in temporal coherence by constructing a synthetic evaluation set in which longitudinal progression cues were deliberately inverted. Specifically, we selected 8 patient sequences from our sequential-report dataset that contained explicit temporal descriptors—such as improved or worsened—and manually reversed these attributes to simulate a contradiction in the clinical trajectory. For example, a statement like “the previously seen right lower lobe opacification has decreased substantially” was changed to “increased substantially,” thereby inverting its semantic implication. Two patient sequences that lacked any such temporal expressions were excluded. Both the single-report and sequential variants of LUNGUAGESCORE were applied to these perturbed sequences.

To quantify the metric’s responsiveness, we introduce the Effect Rate, which captures the average score reduction per flipped attribute:

Effect Rate (%) = (1 − score) / (# flipped attributes) × 100

A perfect score of 1.0 indicates complete semantic and structural agreement with the gold standard. Deviations from this ideal reflect the metric’s sensitivity to reversed temporal directionality. The normalization by the number of flipped attributes allows us to measure the per-attribute impact on the similarity score.
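The formula reduces to a one-line computation; the sketch below reproduces a value from Table D.1 as a sanity check.

```python
# Effect Rate as defined above: average per-attribute score reduction.
def effect_rate(score: float, n_flipped: int) -> float:
    """Return the per-flipped-attribute score drop, in percent."""
    return (1.0 - score) / n_flipped * 100.0

# e.g., patient p15321868: single score 0.982 with 2 flipped attributes
print(round(effect_rate(0.982, 2), 2))  # -> 0.9, matching Table D.1
```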
Table D.1: Effect Rate for each manipulated patient sequence. W/I denotes the number of worsened/improved attributes flipped.
Patient ID | # Attr. (W/I) | Single Score | Effect Rate (S, %) | Sequential Score | Effect Rate (Seq, %)
p10274145 | 5 (0/5)   | 0.981 | 0.38 | 0.979 | 0.42
p10523725 | 3 (1/2)   | 0.989 | 0.37 | 0.987 | 0.43
p10886362 | 8 (5/3)   | 0.983 | 0.21 | 0.979 | 0.26
p10959054 | 13 (9/4)  | 0.967 | 0.25 | 0.963 | 0.28
p12433421 | 15 (8/7)  | 0.968 | 0.21 | 0.971 | 0.19
p15321868 | 2 (1/1)   | 0.982 | 0.90 | 0.988 | 0.60
p15881535 | 1 (0/1)   | 0.992 | 0.80 | 0.992 | 0.80
p18079481 | 10 (2/8)  | 0.976 | 0.24 | 0.980 | 0.20

While the absolute Effect Rates are relatively small (typically below 0.5%), they scale proportionally with the number of flipped attributes, indicating that LUNGUAGESCORE reliably captures the semantic impact of trend reversals. Notably, even sequences with a single flipped term exhibited pronounced per-attribute degradation, highlighting the metric’s granularity and responsiveness. These results affirm that LUNGUAGESCORE can effectively detect inconsistencies in longitudinal directionality, even when the surface fluency of the report remains intact.

Figure D.3: Distribution of the scores for each of the twelve error types in ReXErr, relative to the average score across the 57 ReXErr reports.

E Synthetic Report Generation Details

MAIRA-2 [4]. At the input, we feed in a frontal chest X-ray image for the current study. If no frontal view is available for the patient, we do not generate a report. If there are multiple frontal views, we randomly choose one. We also pass along a random lateral chest X-ray image for the current study, should one be available. MAIRA-2 additionally accepts the indication, technique, and comparison sections. We therefore input the history for the current study in the “indication” field, if there is one. For the comparison, we input “Chest radiography dated _.” if there is a previous study, to comply with the anonymised dates in the MIMIC-CXR dataset. We do not input a technique, since this field could not be reliably extracted for the MIMIC-CXR test set.

We explore two distinct ways of including prior information in the generation setup. In the standard setting, we input the ground-truth reference report that is available for the previous study. This report is structured following the template “INDICATION: <prior_history> COMPARISON: <prior_comparison> FINDINGS: <prior_findings> IMPRESSION: <prior_impression>.”, where <prior_history>, <prior_impression>, and <prior_findings> are all taken from the previous study’s ground-truth reference report, and substituted by “N/A” if they are missing. If there is no prior study, the prior report field is set to “None”. If the previous study was the first one in the sequence, then <prior_comparison> is set to “N/A”; otherwise it is set to “Chest radiograph dated _.” In the cascaded setting, <prior_findings> is set to the findings report that was generated in the previous study (if there is one; otherwise the prior report field is set to “None”), while <prior_impression> is left blank (because MAIRA-2 only generates findings), and the other inputs remain the same. In both settings, we input the frontal view from the prior study, if there is one, and if there are multiple options, we choose the same one that was used to generate the previous report. We ask MAIRA-2 to generate the findings section for the current study,
using their default settings, without grounding.

RGRG [32] and CVT2DistilGPT2 [22]. For both models, we input the current frontal image, once again choosing a random one when there are multiple and foregoing generation when there are none. For the CVT2DistilGPT2 model, we use the variant that was trained on MIMIC-CXR. We use the default setup as suggested on the models’ GitHub pages. Both models generate radiological “findings” as the full report, outputting no specific “impression” section.

Medversa [44]. Next to the current frontal image, we also fill in the additional input fields expected by Medversa, which are context, prompt, modality, and task. For context, we follow the template “Age: None. Gender: None. Indication: <current_history>”. For <current_history>, we pass along the “history” section of the reference report, should it be available; otherwise we set it to “None”. The modality and task are set to “cxr” and “report generation” respectively. All language generation parameters are left as default. The prompt is set to “Can you provide a report of <img0> with findings and impression?”. Note that this is the only model with the ability to generate an impression section, and it will therefore naturally have an advantage over the other models when we compare it to the reference report, where both the findings and impression sections are included based on their availability in the ground truth.
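For concreteness, assembling the prior-report input in the standard MAIRA-2 setting could be sketched as follows. The helper name and dictionary keys are hypothetical, not part of MAIRA-2's API; only the template string and fallback rules follow the description above.

```python
from typing import Optional

# Illustrative helper for the standard-setting prior-report template.
def build_prior_report(prior: Optional[dict], first_in_sequence: bool) -> str:
    if prior is None:
        return "None"                  # no prior study available
    comparison = "N/A" if first_in_sequence else "Chest radiograph dated _."
    return (
        f"INDICATION: {prior.get('history') or 'N/A'} "
        f"COMPARISON: {comparison} "
        f"FINDINGS: {prior.get('findings') or 'N/A'} "
        f"IMPRESSION: {prior.get('impression') or 'N/A'}."
    )

print(build_prior_report(
    {"history": "cough", "findings": "Lungs are clear.", "impression": None},
    first_in_sequence=True,
))
```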
arXiv:2505.21212v1 [cs.AI] 27 May 2025

Interpretable DNFs
Martin C. Cooper1, Imane Bousdira2, Clément Carbonnel3
1IRIT, University of Toulouse, France
2IRIT, INP Toulouse, France
3LIRMM, CNRS, University of Montpellier, France
{cooper, imane.bousdira}@irit.fr, clement.carbonnel@lirmm.fr

Abstract
A classifier is considered interpretable if each of its decisions has an explanation which is small enough to be easily understood by a human user. A DNF formula can be seen as a binary classifier κ over boolean domains. The size of an explanation of a positive decision taken by a DNF κ is bounded by the size of the terms in κ, since we can explain a positive decision by giving a term of κ that evaluates to true. Since both positive and negative decisions must be explained, we consider that interpretable DNFs are those κ for which both κ and κ̄ can be expressed as DNFs composed of terms of bounded size. In this paper, we study the family of k-DNFs whose complements can also be expressed as k-DNFs. We compare two such families, namely depth-k decision trees and nested k-DNFs, a novel family of models. Experiments indicate that nested k-DNFs are an interesting alternative to decision trees in terms of interpretability and accuracy.

1 Introduction
Interpretable models are critical in machine learning applications requiring accountability of decisions [24; 23]. In particular, there is a growing interest in models whose decisions can always be explained in a way that is comprehensible by a human user. In recent work on formal explainability [25; 17; 4; 3; 20], two notions of explanation of decisions have emerged. An abductive explanation corresponds to a minimal set of features that caused the decision, whereas a contrastive explanation corresponds to a means of changing the decision with changes to a minimal set of features. A theoretical line of research, starting from a list of desirable properties rather than a particular definition, has identified abductive explanations as the basis for determining what constitutes a sufficient reason for a decision [1; 10].

In this paper, we deem a model to be interpretable if each of its decisions has both a short abductive explanation and a short contrastive explanation. Observe that we are considering interpretability as an orthogonal question to explainability, which depends on whether we can efficiently find an explanation of each decision. There is a considerable literature on the question of which families of models are explainable, whether explainability means the existence of polynomial-time or efficient-in-practice algorithms to find explanations [21; 22; 14; 18; 11; 8; 19; 16; 15].

This criterion for interpretability is very restrictive. The only commonly used family of models that are interpretable in this sense are decision trees whose depth is bounded by a small constant. In contrast, linear classifiers, random forests, decision lists and neural networks may all require a linear number of features in an explanation. However, it is theoretically possible that very different families of interpretable models exist. The purpose of this paper is to study the structure of interpretable models in order to find a competitive alternative to decision
trees. We restrict our attention to classifiers which are functions of boolean features only. (However, most of our results can be extended to non-boolean features through binarisation.) In Section 2, we observe that a boolean classifier κ is interpretable if and only if both κ and its complement κ̄ are expressible as k-DNF formulas, where k is the upper bound on the size of explanations. In Section 3, we show that such classifiers can always be expressed by short k-DNF formulas composed of at most k^k terms. For small enough k, this shows that direct representation of interpretable classifiers as DNF formulas is always possible. Then, we describe in Section 4 a simple graph-based condition which guarantees that the complement of a k-DNF formula is also expressible in k-DNF and use this property to define nested k-DNFs, a new family of interpretable classifiers that is orthogonal to decision trees. We study the expressivity of nested k-DNFs in Section 5. Finally, we present in Section 6 a practical algorithm for learning nested k-DNFs, and show empirically that classifiers constructed this way are competitive with decision trees on various datasets.

2 Preliminaries
We denote by F the feature space, which for most of the paper will be {0,1}ⁿ, and by ℱ the set of features {1, ..., n}.

Definition 1. Given a function κ : F → {0,1} and an input v = (v₁, ..., vₙ) ∈ F, a weak abductive explanation (wAXp) of (κ, v) is a subset A of ℱ such that ∀x = (x₁, ..., xₙ) ∈ F, (⋀_{i∈A} (xᵢ = vᵢ)) ⇒ κ(x) = κ(v). A weak contrastive explanation (wCXp) of (κ, v) is a subset C of ℱ such that ∃x ∈ F, (⋀_{i∈ℱ\C} (xᵢ = vᵢ)) ∧ κ(x) ≠ κ(v). An abductive explanation (AXp) is a subset-minimal wAXp. A contrastive explanation (CXp) is a subset-minimal wCXp.

In order to give a formal definition of interpretability of a family of models, we first give a parameterized definition of interpretability of a classifier based on AXps/CXps.

Definition 2. Let k be a natural number. A function κ : F → {0,1} is k-AXp-interpretable if for each v ∈ F, there is an AXp of (κ, v) of size at most k. A non-constant function is k-CXp-interpretable if for each v ∈ F, there is a CXp of size at most k. By convention, a constant function is deemed to be k-CXp-interpretable.

To see that k-AXp-interpretability and k-CXp-interpretability do not coincide, consider the parity function κ which returns 1 if the sum of its n boolean features is even and 0 otherwise. For any v ∈ F, changing one feature changes the parity, which implies both that (κ, v) has a CXp of size 1 and that, on the other hand, the only AXp is of size n. Thus the existence of a small CXp does not guarantee the existence of a small AXp. On the other hand, for any κ, the existence of a small AXp (for all inputs) implies the existence of a small CXp, as we now show.
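Definition 1 can be checked directly by brute force on small feature spaces. The sketch below does exactly that; it is exponential in n and meant only to make the definitions (and the parity example above) concrete. Feature indices are 0-based here, unlike the 1-based convention in the text.

```python
from itertools import product

# Brute-force checks of Definition 1 for kappa : {0,1}^n -> {0,1}.
def is_weak_axp(kappa, v, A):
    """A is a wAXp of (kappa, v): fixing features in A to v forces kappa(v)."""
    n = len(v)
    for x in product((0, 1), repeat=n):
        if all(x[i] == v[i] for i in A) and kappa(x) != kappa(v):
            return False
    return True

def is_weak_cxp(kappa, v, C):
    """C is a wCXp of (kappa, v): some x agreeing with v outside C flips kappa."""
    n = len(v)
    for x in product((0, 1), repeat=n):
        if all(x[i] == v[i] for i in range(n) if i not in C):
            if kappa(x) != kappa(v):
                return True
    return False

parity = lambda x: sum(x) % 2          # the parity function from the text
v = (0, 0, 0)
print(is_weak_cxp(parity, v, {0}))     # True: flipping one bit flips parity
print(is_weak_axp(parity, v, {0, 1}))  # False: only the full feature set works
```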
Lemma 1. A function κ that is k-AXp-interpretable is also k-CXp-interpretable.

Proof. Suppose that κ is k-AXp-interpretable. The case of constant functions is trivial, so we assume that κ is non-constant. Thus, given an arbitrary input v ∈ F, there is another input v′ ∈ F such that κ(v′) ≠ κ(v). By k-AXp-interpretability, (κ, v′) has an AXp A of size at most k. Let yᵢ = vᵢ if i ∈ ℱ \ A and yᵢ = v′ᵢ if i ∈ A. By definition, κ(y) = κ(v′) ≠ κ(v). Therefore, A is a wCXp of (κ, v) and hence some subset of A is a CXp of size at most k.

Since k-CXp-interpretability follows from k-AXp-interpretability, this leads to a natural definition of interpretable models in terms of k-AXp-interpretability.

Definition 3. A family M of models is interpretable if there is a constant k such that every classifier κ ∈ M is k-AXp-interpretable.

We now focus on the case where the feature space F is boolean. Given a boolean function κ over boolean variables (x₁, ..., xₙ), a literal is either a variable xᵢ or its negation x̄ᵢ. A boolean formula is in disjunctive normal form (DNF) if it is a disjunction of terms, which are conjunctions of literals. For simplicity of presentation, we freely interpret terms as either sets or conjunctions of literals depending on context. A DNF formula is in k-DNF if each of its terms has size at most k. We say that a conjunction (or set) of literals is consistent if it does not contain both a variable and its negation. An implicant of κ is a consistent conjunction of literals Q such that κ maps to 1 all assignments to (x₁, ..., xₙ) for which Q evaluates to true. An implicant of κ is prime if it is subset-minimal.

Given a DNF formula D with variables X and a consistent set of literals Q over X, we denote by D[Q] the DNF formula with variables {xᵢ ∈ X : xᵢ ∉ Q and x̄ᵢ ∉ Q} obtained from D by removing all the terms that contain the negation of a literal in Q and replacing each remaining term t = ⋀_{l∈S} l with t[Q] = ⋀_{l∈S\Q} l. If D₁ and D₂ are DNF formulas that express respectively a boolean function and its complement, then for any choice of Q the formulas D₁[Q] and D₂[Q] also express functions that are complements of each other. The size of D, denoted by |D|, is the number of terms in D and its length ||D|| is the sum of the sizes of its terms. Throughout the paper we will use L(D) (resp. T(D)) to denote the sets of literals (resp. terms) that appear in the formula D.

For a boolean classifier κ, the prime implicants of κ (resp. κ̄) are in one-to-one correspondence with AXps for positive (resp. negative) decisions. The relationship between interpretability and expressibility as a k-DNF formula is made explicit by the following proposition.

Proposition 1. A binary boolean classifier κ : {0,1}ⁿ → {0,1} is k-AXp-interpretable if and only if both κ and its complement are expressible as k-DNFs.

Proof. The ‘if’ direction follows from the fact that a term that evaluates to true is a wAXp (of size at most k), and hence some subset will be an AXp. The ‘only if’ direction follows from the fact that κ (resp. κ̄) is equivalent to the disjunction of terms corresponding to the AXps of its positive (negative) decisions.

Using Proposition 1, it is straightforward to verify that a boolean function κ is k-AXp-interpretable if and only if both κ and κ̄ are equivalent to the disjunction of their prime implicants of size at most k. The standard double-DNF expression of a k-AXp-interpretable classifier is the pair (D_κ, D_κ̄), where D_κ is the DNF formula whose terms are the prime implicants of κ of size at most k and D_κ̄ is the DNF formula whose terms are the prime implicants of κ̄ of size at most k.
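For small feature spaces, the standard double-DNF expression can be constructed by brute-force enumeration of prime implicants, which also gives a direct (if exponential) test of k-AXp-interpretability via Proposition 1. The sketch below is illustrative only; terms are represented as partial assignments (dicts mapping 0-based feature index to value).

```python
from itertools import combinations, product

# Enumerate prime implicants of kappa (target=1) or its complement
# (target=0), keeping only implicants of size <= k.
def prime_implicants(kappa, n, k, target=1):
    implicants = []
    for size in range(0, k + 1):                 # smaller terms first
        for feats in combinations(range(n), size):
            for vals in product((0, 1), repeat=size):
                term = dict(zip(feats, vals))
                covers = all(
                    kappa(x) == target
                    for x in product((0, 1), repeat=n)
                    if all(x[i] == b for i, b in term.items())
                )
                # prime = no already-found (smaller) implicant is a subset
                is_minimal = not any(p.items() <= term.items()
                                     for p in implicants)
                if covers and is_minimal:
                    implicants.append(term)
    return implicants

kappa = lambda x: x[0] & x[1] | x[2]   # (x1 AND x2) OR x3, 0-indexed
print(prime_implicants(kappa, n=3, k=2))             # terms of D_kappa
print(prime_implicants(kappa, n=3, k=2, target=0))   # terms of its complement
```

Both calls return terms of size at most 2, so this κ is 2-AXp-interpretable by Proposition 1.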
The smallest integer k such that a boolean function κ and its complement can be expressed as k-DNF formulas is called the certificate complexity of κ [2, Chapter 11]. This measure is well studied in theoretical computer science and computational learning theory [9; 6], but little appears to be known about the structure of functions whose certificate complexity is bounded by a small constant.

Example 1. Decision trees are a well-known family of classifiers which have the reputation of being interpretable. Indeed, if k is the depth of a decision tree, then the corresponding classifier κ_DT and its complement κ̄_DT can both be expressed as k-DNFs. Given a path π from the root to a leaf, let L(π) denote the set of literals labelling the edges in the path π. We assume a binary classifier, so each leaf is labelled 0 or 1. Let P₀ and P₁ denote the sets of paths from the root to, respectively, leaves labelled 0 and leaves labelled 1. Then the classifier κ_DT corresponding to the decision tree can be expressed as the following DNF:

κ_DT(x) = ⋁_{π∈P₁} ⋀_{ℓ∈L(π)} ℓ

Furthermore, κ̄_DT can also be expressed as a DNF:

κ̄_DT(x) = ⋁_{π∈P₀} ⋀_{ℓ∈L(π)} ℓ

Observe that both these DNFs are k-DNFs since the length of paths is at most k.

As seen in Example 1, if κ can be represented as a decision tree of depth k then κ is k-AXp-interpretable. However, the converse implication does not hold. In this paper, we are interested in identifying new families of interpretable classifiers that are orthogonal to those derived from decision trees.

Example 2. For k = 2, a characterisation of 2-DNF formulas whose complement is expressible in 2-DNF can be derived from a recent result [8, Corollary 2]. Together with Proposition 1, this characterisation implies that a classifier κ is 2-AXp-interpretable if and only if it is equivalent to a DNF with one of the following forms (where the literals a, b, c, d are arbitrary and not necessarily distinct):
(i) (a ∧ b) ∨ (c ∧ d),
(ii) (a ∧ b) ∨ (b ∧ c) ∨ (c ∧ d), and
(iii) (a ∧ b) ∨ (b ∧ c) ∨ (c ∧ d) ∨ (d ∧ a).
Interestingly, certain DNFs of this kind cannot be represented as decision trees of depth 2 (see Example 4 for more details). However, they all satisfy a different combinatorial criterion for 2-AXp-interpretability that we describe in Section 4.

3 Short explanations imply few explanations
In this section, we show that every k-AXp-interpretable classifier is expressible as a k-DNF consisting of at most k^k terms (independently of the number n of features). This result gives further justification to work directly with DNF representations of k-AXp-interpretable classifiers when k is small. In particular, this implies that if a classifier can provide an explanation of size at most k for every decision, then all decisions can be explained using only 2k^k distinct explanations.

Theorem 1. Every k-AXp-interpretable classifier is expressible as a k-DNF formula that contains at most k^k terms.

Proof. Let κ be a k-AXp-interpretable classifier and (D_κ, D_κ̄) be the standard double-DNF expression of κ. We will show that D_κ contains at most k^k terms. If κ is
https://arxiv.org/abs/2505.21212v1
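A small sketch of Example 1's path-to-DNF translation, under an assumed minimal tree encoding of our own: a leaf is ('leaf', class) and an internal node is ('node', var, left, right), where the left branch sets the variable to 0.

```python
def paths_to_dnfs(tree, prefix=()):
    """Return (terms of the classifier, terms of its complement): the literal
    sets along root-to-leaf paths ending in 1-leaves and 0-leaves respectively.
    Literals are (var, value) pairs; both DNFs are k-DNFs for k = tree depth."""
    if tree[0] == 'leaf':
        return ([prefix], []) if tree[1] == 1 else ([], [prefix])
    _, var, left, right = tree
    p1l, p0l = paths_to_dnfs(left, prefix + ((var, 0),))
    p1r, p0r = paths_to_dnfs(right, prefix + ((var, 1),))
    return p1l + p1r, p0l + p0r

# depth-2 tree computing x0 XOR x1: both path-DNFs are 2-DNFs
tree = ('node', 0,
        ('node', 1, ('leaf', 0), ('leaf', 1)),   # branch x0 = 0
        ('node', 1, ('leaf', 1), ('leaf', 0)))   # branch x0 = 1
pos, neg = paths_to_dnfs(tree)
print(pos)   # [((0, 0), (1, 1)), ((0, 1), (1, 0))]
```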
As seen in Example 1, if $\kappa$ can be represented as a decision tree of depth $k$ then $\kappa$ is $k$-AXp-interpretable. However, the converse implication does not hold. In this paper, we are interested in identifying new families of interpretable classifiers that are orthogonal to those derived from decision trees.

Example 2. For $k = 2$, a characterisation of 2-DNF formulas whose complement is expressible in 2-DNF can be derived from a recent result [8, Corollary 2]. Together with Proposition 1, this characterisation implies that a classifier $\kappa$ is 2-AXp-interpretable if and only if it is equivalent to a DNF with one of the following forms (where the literals $a, b, c, d$ are arbitrary and not necessarily distinct): (i) $(a \wedge b) \vee (c \wedge d)$, (ii) $(a \wedge b) \vee (b \wedge c) \vee (c \wedge d)$, and (iii) $(a \wedge b) \vee (b \wedge c) \vee (c \wedge d) \vee (d \wedge a)$. Interestingly, certain DNFs of this kind cannot be represented as decision trees of depth 2 (see Example 4 for more details). However, they all satisfy a different combinatorial criterion for 2-AXp-interpretability that we describe in Section 4.

3 Short explanations imply few explanations

In this section, we show that every $k$-AXp-interpretable classifier is expressible as a $k$-DNF consisting of at most $k^k$ terms (independently of the number $n$ of features). This result gives further justification to work directly with DNF representations of $k$-AXp-interpretable classifiers when $k$ is small. In particular, this implies that if a classifier can provide an explanation of size at most $k$ for every decision, then all decisions can be explained using only $2k^k$ distinct explanations.

Theorem 1. Every $k$-AXp-interpretable classifier is expressible as a $k$-DNF formula that contains at most $k^k$ terms.

Proof. Let $\kappa$ be a $k$-AXp-interpretable classifier and $(D_\kappa, D_{\bar\kappa})$ be the standard double-DNF expression of $\kappa$. We will show that $D_\kappa$ contains at most $k^k$ terms. If $\kappa$ is constant then the theorem obviously holds, so let us assume that it is not. (This assumption implies in particular $k > 0$, $|D_\kappa| > 0$, and $|D_{\bar\kappa}| > 0$.)

We claim that for all integers $j \geq 0$, either $|D_\kappa| < k^j$ or there exists a consistent set $Q$ of $j$ literals that is contained in at least $(1/k)^j \cdot |D_\kappa|$ terms of $D_\kappa$. We will prove this claim by induction on $j$. The base case $j = 0$ is immediate because every term in $D_\kappa$ contains the empty set of literals. Now, let $j$ be such that $1 \leq j \leq k$ and suppose that the claim holds for $j - 1$. If $|D_\kappa| < k^{j-1}$ then $|D_\kappa| < k^j$ and we are done. Otherwise, there exists a consistent set $Q'$ of $j - 1$ literals and a set $S$ of at least $(1/k)^{j-1} \cdot |D_\kappa|$ terms of $D_\kappa$ such that every term in $S$ contains $Q'$. We distinguish two cases.

Case 1: $\overline{Q'} = \{\bar{l} : l \in Q'\}$ has non-empty intersection with every term in $D_{\bar\kappa}$. Then, $Q'$ is an implicant of $\kappa$. The terms of $D_\kappa$ are prime implicants of $\kappa$ and $Q'$ is contained in at least one term of $D_\kappa$, so $Q'$ is contained in exactly one term of $D_\kappa$. This implies $(1/k)^{j-1} \cdot |D_\kappa| \leq 1$ and hence $|D_\kappa| < k^j$.

Case 2: there exists a term $t$ in $D_{\bar\kappa}$ whose intersection with $\overline{Q'}$ is empty. Consider the DNF formulas $D_\kappa[Q']$ and $D_{\bar\kappa}[Q']$. Observe that $t[Q']$ is a term of $D_{\bar\kappa}[Q']$, and $s[Q']$ is a term of $D_\kappa[Q']$ for all $s \in S$. (This last observation follows from the fact that every term in $S$ is a prime implicant of $\kappa$: these terms are consistent and contain $Q'$, so they cannot intersect $\overline{Q'}$.) If $t[Q']$ is the empty term, then $Q'$ is an implicant of $\bar\kappa$; this is not possible because at least one term in $D_\kappa$ contains $Q'$. In addition, as $D_\kappa[Q']$ and $D_{\bar\kappa}[Q']$ express functions that are complements of each other, the set $\{\bar{l} : l \in t[Q']\}$ must have non-empty intersection with every term in $D_\kappa[Q']$, and in particular with every term in $\{s[Q'] \mid s \in S\}$. The term $t[Q']$ contains at most $k$ literals, so there exists $l \in t[Q']$ such that at least $(1/k) \cdot |S|$ terms in $S$ contain $\bar{l}$. Then, the set of literals $Q = Q' \cup \{\bar{l}\}$ is contained in at least $(1/k) \cdot |S| \geq (1/k) \cdot (1/k)^{j-1} \cdot |D_\kappa| = (1/k)^j \cdot |D_\kappa|$ terms of $D_\kappa$, and the claim holds by induction.

We can now finish the proof of the theorem. Every term in $D_\kappa$ is a prime implicant of $\kappa$, so $D_\kappa$ cannot contain the same term twice. In addition, every term in $D_\kappa$ has size at most $k$. Then, for $j = k$ we have either $|D_\kappa| < k^k$ or $(1/k)^k \cdot |D_\kappa| \leq 1$, and the theorem follows.

The specific bound of Theorem 1 is sharp, as there exist $k$-AXp-interpretable classifiers that cannot be expressed as a DNF formula with fewer than $k^k$ terms. A concrete example is the complement $\bar\kappa$ of a classifier $\kappa$ corresponding to a DNF formula $D$ with $k$ terms of size exactly $k$, with all literals negative and no literal occurring twice. This function $\bar\kappa$ is $k$-AXp-interpretable, has $k^k$ prime implicants, and by monotonicity these implicants must be contained in distinct terms in any DNF expression of $\bar\kappa$.
https://arxiv.org/abs/2505.21212v1
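A quick numerical check of this sharpness construction (our own illustration): with $k$ disjoint all-negative terms of size $k$, the complement's prime implicants are exactly the minimal transversals, of which there are $k^k$.

```python
from itertools import product

k = 3
# D: k disjoint terms of size k over variables 0..k*k-1, all literals negative
terms = [tuple(range(i * k, (i + 1) * k)) for i in range(k)]

# The complement holds iff every term contains a variable set to 1; its prime
# implicants are the minimal transversals: one positive literal per term.
transversals = {frozenset(choice) for choice in product(*terms)}
print(len(transversals) == k ** k)   # True: 27 transversals for k = 3
```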
Corollary 1. Let $\kappa : \{0,1\}^n \to \{0,1\}$ be a $k$-AXp-interpretable classifier over a set of features $F$. There exists a set $E$ of at most $2k^k$ subsets of $F$ such that for every $v \in \{0,1\}^n$, $E$ contains an AXp of size at most $k$ of $(\kappa, v)$.

Proof. Applying Theorem 1, we derive that $\kappa$ and $\bar\kappa$ can be expressed as $k$-DNF formulas of size at most $k^k$. The terms of these formulas are implicants of $\kappa$ and $\bar\kappa$ respectively, and we can further assume that they are prime implicants. Let $E$ be the set of all subsets of $F$ whose features correspond exactly to a term. (Note that multiple terms may correspond to the same set of features, so $E$ can be strictly smaller than the sum of the sizes of these formulas.) Then, for any choice of $v$ at least one term evaluates to true and the corresponding set in $E$ constitutes a wAXp of size at most $k$ of $(\kappa, v)$. Finally, this term corresponds to a prime implicant (of either $\kappa$ or $\bar\kappa$), so no strict subset can be a wAXp.

Another interesting consequence of Theorem 1 is that it provides an explicit characterisation of interpretable families of models (as per Definition 3).

Corollary 2. A family $\mathcal{M}$ of models is interpretable if and only if there exists a constant $k$ such that every classifier $\kappa \in \mathcal{M}$ is expressible as a DNF formula of length at most $k$.

Proof. For the forward direction, if every classifier in $\mathcal{M}$ is $j$-AXp-interpretable then by Theorem 1 they are expressible as DNF formulas of length at most $k = j \cdot j^j$. Conversely, the complement of a DNF formula of length at most $k > 0$ is always expressible as a $k$-DNF of length at most $k^{k+1}$. Therefore, if every classifier in $\mathcal{M}$ is expressible as a DNF formula of length at most $k$ then $\mathcal{M}$ is interpretable.

4 Induced matchings and nested k-DNF

In this section we describe a simple criterion for a classifier described by a $k$-DNF formula to be $k$-AXp-interpretable. This criterion is orthogonal to expressibility as a decision tree of depth $k$, and we will show in the subsequent section that it defines a remarkably expressive family of classifiers.

Let $D$ be a DNF formula that expresses a boolean function $\kappa$. A transversal of $D$ is a subset of $L(D)$ that intersects every term in $D$. If we let $\mathcal{T}_D$ denote the set of all minimal transversals of $D$, then $\bar\kappa$ has the following canonical expression as a DNF:
$$\bar\kappa(x) = \bigvee_{T \in \mathcal{T}_D} \bigwedge_{l \in T} \bar{l}$$
Note that the canonical DNF expression of $\bar\kappa$ may include inconsistent terms. If $D$ does not contain two terms $t_1, t_2$ such that $t_1 \subset t_2$, then the canonical complement of the canonical complement of $D$ is $D$ itself (a well-known property of hypergraph dualisation, see e.g. [5, Chapter 2]). From this perspective, it is clear that the function expressed by a given $k$-DNF formula $D$ is $k$-AXp-interpretable if all minimal transversals of $D$ have cardinality at most $k$.

Let $G_D = (V, E)$ be the bipartite graph with $V = L(D) \cup T(D)$ and $\{l, t\} \in E$ if and only if $l \in t$. An induced matching of $G_D$ is a subset $M \subseteq E$ such that no two edges in $M$ share an endpoint and no edge in $E$ intersects two distinct edges in $M$. We denote by $\mathrm{mim}(G_D)$ the maximum number of edges in an induced matching of $G_D$.

Lemma 2. Let $D$ be a $k$-DNF formula expressing a boolean function $\kappa$. If $\mathrm{mim}(G_D) \leq k$, then $\kappa$ is $k$-AXp-interpretable.

Proof. We show that every minimal transversal of $D$ has cardinality at most $k$. Suppose for the sake of contradiction that $D$ has a minimal transversal $T$ of size $q > k$. By minimality, for every literal $l \in T$ there exists a term $t_l \in T(D)$ such that $t_l \cap T = \{l\}$. Then, the set of edges $\{\{l, t_l\} \mid l \in T\}$ is an induced matching of $G_D$ of size $q > k$. This is not possible because $\mathrm{mim}(G_D) \leq k$.
https://arxiv.org/abs/2505.21212v1
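A brute-force sketch of the transversal criterion (our own, exponential, for small formulas only): compute the minimal transversals of a DNF and check that their maximum size is at most $k$.

```python
from itertools import chain, combinations

def minimal_transversals(terms):
    """Minimal transversals of a DNF given as a list of frozensets of
    literals (a literal is a (var, polarity) pair). Enumerates subsets of
    L(D) by increasing size and keeps the subset-minimal transversals."""
    literals = sorted(set(chain.from_iterable(terms)))
    found = []
    for size in range(len(literals) + 1):
        for cand in combinations(literals, size):
            s = frozenset(cand)
            if any(f <= s for f in found):
                continue                  # a smaller transversal is inside
            if all(s & t for t in terms): # s intersects every term
                found.append(s)
    return found

# D = (x0 AND x1) OR (x1 AND x2) OR (x2 AND x3): form (ii) of Example 2
D = [frozenset({(0, 1), (1, 1)}),
     frozenset({(1, 1), (2, 1)}),
     frozenset({(2, 1), (3, 1)})]
print(max(len(t) for t in minimal_transversals(D)))  # 2: all transversals have size <= k = 2
```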
Example 3. Consider the majority function on $2k - 1$ arguments defined by $\kappa_{maj}(x_1, \ldots, x_{2k-1}) \equiv (\sum_{i=1}^{2k-1} x_i \geq k)$. This function $\kappa_{maj}$ is $k$-AXp-interpretable since it is the disjunction of all terms composed of exactly $k$ positive literals and its complement is the disjunction of all terms composed of exactly $k$ negative literals. The graphs associated with these formulas do not contain induced matchings of size larger than $k$, so $\kappa_{maj}$ satisfies the criterion for $k$-AXp-interpretability given by Lemma 2. However, it is well known that any decision tree representing $\kappa_{maj}$ must have depth at least $2k - 1$, as any path starting from the root that alternates between positive and negative literals cannot reach a leaf before all variables have been assigned.

The simple condition provided by Lemma 2 already defines a new family of $k$-AXp-interpretable classifiers: those expressible by $k$-DNF formulas with no induced matchings of size $k + 1$. From a practical viewpoint, the interest of this family is limited because its definition is not constructive: without a clear structure, it is difficult to design efficient heuristics for learning formulas of this kind directly from data.

We address this issue by defining a smaller family of classifiers whose structure is more explicit. Consider $k^2$ literals $\ell_{i,j}$ ($1 \leq i, j \leq k$). We can view $\{\ell_{i,j}\}$ as a $k \times k$ matrix:
$$L = \begin{pmatrix} \ell_{1,1} & \ell_{1,2} & \cdots & \ell_{1,k} \\ \vdots & & & \vdots \\ \ell_{k,1} & \ell_{k,2} & \cdots & \ell_{k,k} \end{pmatrix}$$
We will define a $k$-DNF $D$ composed of $m$ terms (where $m$ is arbitrary) whose complement is also expressible as a $k$-DNF. For each $p = 1, \ldots, m$, let $r_{p,i}$ ($i = 1, \ldots, k$) be $k$ integers between 0 and $k$ such that $\sum_{i=1}^{k} r_{p,i} \leq k$. Then define $D$ as follows:
$$D = \bigvee_{p=1}^{m} \bigwedge_{i=1}^{k} \bigwedge_{j=1}^{r_{p,i}} \ell_{i,j}$$
The condition that $\sum_{i=1}^{k} r_{p,i} \leq k$ for each $p = 1, \ldots, m$ ensures that $D$ is a $k$-DNF. We call such a DNF a nested $k$-DNF. The term $\bigwedge_{i=1}^{k} \bigwedge_{j=1}^{r_{p,i}} \ell_{i,j}$ of $D$ is the conjunction of, for each $i = 1, \ldots, k$, the $r_{p,i}$ leftmost elements in row $i$ of the matrix $L$.

Proposition 2. Every boolean function expressible as a nested $k$-DNF formula is $k$-AXp-interpretable.

Proof. Let $D = \bigvee_{p=1}^{m} \bigwedge_{i=1}^{k} \bigwedge_{j=1}^{r_{p,i}} \ell_{i,j}$ be a nested $k$-DNF formula. Towards a contradiction, suppose that there exists an induced matching $M$ of size $k + 1$ in $G_D$. By the pigeonhole principle, at least two literals that appear in $M$ belong to the same row $i_M$ of $L$. Two terms are matched with these two literals, and the term with the largest value of $r_{p,i_M}$ must contain both. This is impossible because $M$ is an induced matching. Applying Lemma 2, the function expressed by $D$ is therefore $k$-AXp-interpretable.
https://arxiv.org/abs/2505.21212v1
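The following is a minimal sketch of the nested-term construction (our own encoding of literals as (var, polarity) pairs): a term takes, for each row, a prefix of the matrix $L$.

```python
def nested_term(L, r):
    """Term of a nested k-DNF: for each row i, the r[i] leftmost literals
    of the k x k matrix L. Requires sum(r) <= k, so the term has size <= k."""
    k = len(L)
    assert sum(r) <= k
    return frozenset(lit for i in range(k) for lit in L[i][:r[i]])

def eval_dnf(terms, x):
    """Evaluate a DNF over an assignment x (dict: var -> 0/1)."""
    return any(all(x[v] == pol for v, pol in t) for t in terms)

# a 3x3 matrix of positive literals over variables 0..8, and two r-profiles
L = [[(3 * i + j, 1) for j in range(3)] for i in range(3)]
D = [nested_term(L, [2, 1, 0]), nested_term(L, [0, 0, 3])]
print(eval_dnf(D, {v: int(v in {0, 1, 3}) for v in range(9)}))  # True: first term satisfied
```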
Example 4. Observe that all $k$-DNF formulas with $q$ terms are nested if $q \leq k$. Indeed, for any such formula $D$ we can set $L$ to be a $k \times k$ matrix of literals whose $i$th row contains the literals of the $i$th term of $D$ (possibly with repetition if the term has fewer than $k$ literals). Then, for $p \leq q$ we set $r_{p,i} = k$ if $p = i$, and $r_{p,i} = 0$ otherwise. These parameters produce exactly the formula $D$. In general, such formulas are not expressible as decision trees of depth smaller than $k^2$ [13].

On the other hand, the function $\kappa_{maj}$ of Example 3 is not expressible as a nested $k$-DNF. We proceed again by contradiction. If $\kappa_{maj}$ could be represented by a nested $k$-DNF generated from a $k \times k$ matrix $L$, then $L$ would contain only positive literals. Let $L_1$ be the set of literals in the first column of $L$, and $J$ the other positive literals. All terms generated from $L$ must contain at least one literal from $L_1$. If $|J| \geq k$, then there is a term consisting of $k$ positive literals not occurring in the first column, and which therefore could not be generated. Hence, we must have $|L_1| = k$ and $|J| = k - 1$. Without loss of generality, assume $L_1 = \{x_1, \ldots, x_k\}$ and $J = \{x_{k+1}, \ldots, x_{2k-1}\}$. For each $i = 1, \ldots, k$, the term $x_i \wedge \bigwedge_{x_j \in J} x_j$ must be generated from $L$ by taking literals from a single row, since it contains a single literal from the first column of $L$. It follows that the columns $2, 3, \ldots, k$ of $L$ contain only elements of $J$. Since $|J| = k - 1$, the second column of $L$ must contain at least one repeated element. Without loss of generality, assume that this repeated element is $x_{k+1}$ and that it occurs in the two rows whose first elements are $x_1$ and $x_2$. But then, for $k \geq 3$, it is impossible to generate the term $x_1 x_2 x_{k+2} \cdots x_{2k-1}$, since all terms containing $x_1$ and $x_2$ must also contain $x_{k+1}$.

5 Expressivity of nested k-DNFs

In machine learning, it is important that the language of models $\mathcal{M}$ used in the learning phase be sufficiently rich to capture all functions we might wish to learn. Consider a classifier $\kappa$ that is a function of only $k$ variables $x_1, \ldots, x_k$. Both $\kappa$ and its complement $\bar\kappa$ can be expressed as $k$-DNFs. This is because $\kappa$ (respectively, $\bar\kappa$) is the disjunction of the terms corresponding to the assignments to the variables $x_1, \ldots, x_k$ for which $\kappa(x_1, \ldots, x_k) = 1$ (respectively, $\bar\kappa(x_1, \ldots, x_k) = 1$). All functions of $k$ variables can be expressed as depth-$k$ decision trees (with $x_i$ associated with all decision nodes at depth $i - 1$), so an obvious question is whether the same is true for nested $k$-DNFs. We answer this question positively in the following proposition.

Proposition 3. Every boolean function $\kappa$ of $k$ boolean variables can be expressed as a nested $k$-DNF.

Proof. If $\kappa$ is the constant function 1, then it can be trivially expressed as a nested $k$-DNF that contains a single term with zero literals. We can therefore assume that $\kappa$ is equal to 0 for some assignment to the $k$ variables $x_1, \ldots, x_k$. Without loss of generality, we assume that $\kappa(0, \ldots, 0) = 0$. Let the $k \times k$ matrix of literals $\{\ell_{i,j}\}$ be
$$L = \begin{pmatrix} x_1 & \bar{x}_2 & \bar{x}_3 & \cdots & \bar{x}_k \\ x_2 & \bar{x}_3 & \bar{x}_4 & \cdots & \bar{x}_1 \\ x_3 & \bar{x}_4 & \bar{x}_5 & \cdots & \bar{x}_2 \\ \vdots & & & & \vdots \\ x_k & \bar{x}_1 & \bar{x}_2 & \cdots & \bar{x}_{k-1} \end{pmatrix}$$
that is, row $i$ starts with the positive literal $x_i$ followed by the negative literals $\bar{x}_{i+1}, \bar{x}_{i+2}, \ldots$ in cyclic order. The classifier $\kappa$ can be expressed as the disjunction of the terms corresponding to the assignments for which $\kappa$ is equal to 1. Any such term $t$ contains $h$ positive literals, where $h \geq 1$ (since a DNF satisfying $\kappa(0, \ldots, 0) = 0$ cannot contain the term $\bar{x}_1 \cdots \bar{x}_k$). Let $x_{i_j}$ ($j = 1, \ldots, h$, with $i_1 < \cdots < i_h$) be these positive literals. Let $r_{i_j} = i_{j+1} - i_j$ ($j = 1, \ldots, h-1$), $r_{i_h} = k + i_1 - i_h$, and $r_i = 0$ for all $i \notin \{i_1, \ldots, i_h\}$. Then $t$ is the conjunction of the leftmost $r_i$ literals in row $i$ (for $i = 1, \ldots, k$) of the above matrix $L$. Since each term $t$ of $\kappa$ can be constructed in this way, $\kappa$ is a nested $k$-DNF.
https://arxiv.org/abs/2505.21212v1
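A sketch of this construction (variable indexing and representation are our own): given the positive positions of a minterm, compute the $r$-profile and recover the minterm from prefixes of the cyclic matrix's rows.

```python
def minterm_profile(positives, k):
    """Proposition 3's r-profile for the minterm whose positive literals sit
    at the 1-based, sorted, nonempty positions in `positives`."""
    h = len(positives)
    r = [0] * (k + 1)                          # r[1..k]
    for j in range(h - 1):
        r[positives[j]] = positives[j + 1] - positives[j]
    r[positives[-1]] = k + positives[0] - positives[-1]
    return r

def cyclic_row(i, k):
    """Row i of the matrix L: x_i positive, then x_{i+1},...,x_{i-1} negated."""
    return [(i, 1)] + [((i + d - 1) % k + 1, 0) for d in range(1, k)]

k = 4
r = minterm_profile([1, 3], k)                 # minterm x1 !x2 x3 !x4
term = {lit for i in range(1, k + 1) for lit in cyclic_row(i, k)[:r[i]]}
print(sorted(term))                            # [(1, 1), (2, 0), (3, 1), (4, 0)]
```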
A consequence of Proposition 3 is that nested $k$-DNF formulas can always be constructed to fit any consistent dataset provided that $k$ is large enough. In particular, the least integer $k$ such that a boolean function or its complement can be represented as a nested $k$-DNF formula is a well-defined measure that cannot exceed the number of variables (as is the case for decision trees of depth $k$).

One criterion for comparing families of models $\mathcal{M}$ is to estimate the number of distinct functions that can be represented by $\mathcal{M}$. Let $N_{DT}(k, n)$ and $N_{nested}(k, n)$ be, respectively, the number of functions representable by a depth-$k$ decision tree or by a nested $k$-DNF, where $n$ is the total number of variables. Recall that nested $k$-DNF formulas can be a function of at most $k^2$ variables, whereas decision trees of depth $k$ may depend on (up to) $2^k - 1$ variables. For this reason, it is expected that if $n$ is large enough compared to $k$ then $N_{nested}(k, n)$ will necessarily be smaller than $N_{DT}(k, n)$. We show that the opposite is true when $n$ is not much larger than $k$. Informally, nested $k$-DNF formulas involve fewer features than decision trees of depth $k$ but can express a greater variety of dependencies between those features.

Proposition 4. If $k \geq 4$ and $k^2 \leq n \leq 2^{2^{k-1}/k - 1}$, then $N_{nested}(k, n) > N_{DT}(k, n)$.

Proof. Every function representable by a decision tree of depth $k$ can be represented by a complete tree with $2^k$ leaves. Each of the $2^k - 1$ internal nodes is associated with a variable and each of the $2^k$ leaves is associated with a class. There are $n^{2^k - 1} \cdot 2^{2^k}$ such decision trees, so $N_{DT}(k, n) \leq n^{2^k - 1} \cdot 2^{2^k}$. We consider a fixed matrix $L$ composed of $k^2$ distinct positive literals $\ell_{i,j}$. By the stars and bars theorem, the number of distinct terms of the form $\bigwedge_{i=1}^{k} \bigwedge_{j=1}^{r_i} \ell_{i,j}$ where $\sum_{i=1}^{k} r_i = k$ is exactly $\binom{2k-1}{k-1} = \frac{1}{2}\binom{2k}{k}$. Using the inequality $\binom{2k}{k} \geq 2^{2k}/\sqrt{\pi(k + 1/3)}$, we deduce that $N_{nested}(k, n)$ is bounded below by $2^{2^{2k-1}/k}$ for $k \geq 4$, since each of the $\frac{1}{2}\binom{2k}{k}$ terms may or may not occur in the nested $k$-DNF formula. It follows that $N_{nested}(k, n) > N_{DT}(k, n)$ if $n \leq 2^{2^{k-1}/k - 1}$.

Figure 1 provides a summary of the relationship between the major classes of $k$-AXp-interpretable classifiers.

[Figure 1: The landscape of $k$-AXp-interpretable classifiers: depth-$k$ DTs; functions of $k$ variables; $\kappa$ or $\bar\kappa$ is a nested $k$-DNF; $\kappa$ or $\bar\kappa$ is a $k$-DNF with induced matchings of size $\leq k$.]
https://arxiv.org/abs/2505.21212v1
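A quick sanity check of the stars-and-bars count used in the proof above (our own illustration):

```python
from math import comb
from itertools import product

# the number of r-profiles (r_1,...,r_k) with nonnegative entries summing
# to exactly k is C(2k-1, k-1) = C(2k, k)/2, by stars and bars
k = 4
profiles = [r for r in product(range(k + 1), repeat=k) if sum(r) == k]
print(len(profiles), comb(2 * k - 1, k - 1), comb(2 * k, k) // 2)  # 35 35 35
```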
6 Experiments

In this section, we present a heuristic algorithm for finding nested $k$-DNFs, distinguished by its intuitive and straightforward design (the code is available in this GitHub repository). It is worth noting that alternative algorithms could also be considered. Next, we provide an experimental comparison with the depth-$k$ decision trees obtained by CART [7].

6.1 Heuristic algorithm

The heuristic consists of three steps: constructing the matrix, constructing the nested $k$-DNF, and a pruning phase. In Algorithm 1 below, we show how to construct the $k \times k$ matrix $L$ by proceeding row by row, where $k$ is less than or equal to the total number of features $n$. The idea is to create a matrix that will allow us, in the next step, to generate a large number of distinct and consistent terms. To achieve this, the literal $\ell_{i,j}$ ($0 \leq i, j \leq k-1$) is selected such that the $j + 1$ leftmost elements in row $i$ of the matrix $L$ are highly representative of class 1 while being minimally representative of class 0. A key condition is that $\ell_{i,j}$ must differ from the $j$ preceding literals in row $i$ and their negations (to avoid redundancy or inconsistency). Additionally, to encourage diversity between different rows, we exclude all literals in the first $limit = k - j$ columns (of the already-chosen rows) from the list of candidate literals for $\ell_{i,j}$, provided that at least one literal remains available for selection. The value of $limit$ is reduced accordingly if the number $2(n - j)$ of available literals is less than or equal to the number $i \times (k - j)$ of literals we would like to forbid.

Algorithm 1: Construct matrix
Input: $k$, dataset. Output: matrix $L$.
1: for $i = 0$ to $k-1$ do
2:   for $j = 0$ to $k-1$ do
3:     if $i = 0$ then
4:       $limit \leftarrow 0$
5:     else
6:       $limit \leftarrow \min(k - j, \lceil 2(n - j)/i - 1 \rceil)$
7:     end if
       // $E_{c1}(t)$: number of class-1 examples that satisfy $t$
       // $E_{c0}(t)$: number of class-0 examples that satisfy $t$
8:     Compute $G = E_{c1}(\ell_{i,0} \cdots \ell_{i,j}) - E_{c0}(\ell_{i,0} \cdots \ell_{i,j})$ for each candidate literal not in $L_{i,0:j} \cup \bar{L}_{i,0:j} \cup L_{0:i,0:limit}$
9:     Take as $\ell_{i,j}$ the literal that gives the greatest $G$
10:   end for
11: end for
12: return matrix $L$

Secondly, we construct the nested $k$-DNF by evaluating one term at a time, starting with terms of size $k$ and decreasing down to size 1. A term is considered for evaluation if it is consistent (i.e. it does not contain both a literal and its negation). We select a term if $P \neq 0$ and $Q < P$, where $P$ (respectively, $Q$) is the number of examples in class 1 (respectively, class 0) that satisfy this term and are not already covered by the selected terms. Furthermore, a term is also chosen if it covers at least one example from class 1 and does not cover any example from class 0 (irrespective of whether examples have already been covered). The process stops when either all examples in class 1 are covered or there are no more terms to evaluate (a code sketch of this selection step is given at the end of this subsection).

Finally, we perform pruning, where we determine whether to retain each term. The same evaluation as before is applied using $P$ and $Q$ (i.e. we remove a term if $P = 0$ or $Q \geq P$). This time, we compare each term against all other terms, not just the previously selected terms.
https://arxiv.org/abs/2505.21212v1
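A minimal sketch of the term-selection step (our own rendering; the candidate enumeration, matrix construction, and pruning phases are omitted, and all names are illustrative):

```python
def select_terms(candidates, X1, X0):
    """Greedy term selection (step 2 of the heuristic), a sketch.
    candidates: consistent terms, assumed pre-sorted from size k down to 1;
    X1, X0: lists of class-1 / class-0 examples (dicts: var -> 0/1)."""
    sat = lambda t, x: all(x[v] == p for v, p in t)
    selected, cov1, cov0 = [], set(), set()
    for t in candidates:
        P = sum(1 for i, x in enumerate(X1) if i not in cov1 and sat(t, x))
        Q = sum(1 for i, x in enumerate(X0) if i not in cov0 and sat(t, x))
        pure = all(not sat(t, x) for x in X0) and any(sat(t, x) for x in X1)
        if (P != 0 and Q < P) or pure:
            selected.append(t)
            cov1 |= {i for i, x in enumerate(X1) if sat(t, x)}
            cov0 |= {i for i, x in enumerate(X0) if sat(t, x)}
        if len(cov1) == len(X1):
            break   # every class-1 example is covered
    return selected
```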
6.2 Datasets

A collection of datasets from the UCI repository and Kaggle are considered, which have been used to evaluate a wide range of learning algorithms. These datasets contain various feature types, which are converted into boolean features for binary classification as in [12]. We employ the datasets in their original form, without any preprocessing techniques applied. Table 1 shows, for each dataset, the number of data examples and the number of boolean features.

| Dataset | Size | Nb. boolean features |
|---|---|---|
| Balance-scale | 625 | 16 |
| Banknote | 1372 | 28 |
| Car-evaluation | 1728 | 14 |
| Compas discretized | 6167 | 25 |
| Indians Diabetes | 768 | 43 |
| Iris | 150 | 12 |
| Lymph | 148 | 68 |
| Monks-1 | 124 | 11 |
| Monks-2 | 169 | 11 |
| Monks-3 | 122 | 11 |
| Tic-tac-toe | 958 | 27 |

Table 1: Description of the datasets used in the experiments.

6.3 Results

As a first test, the proposed heuristic successfully found the 2-DNF with 2 terms that perfectly matches the full truth table generated from $\kappa(a, b, c, d) = (a \wedge b) \vee (c \wedge d)$. In contrast, the CART algorithm required a depth of 4 to create a decision tree that fits the data exactly, as mentioned in Example 2.

The rest of our experimental assessment was performed on the datasets described above. For a given dataset, 80% of the dataset was used for training and 20% for testing, except for the Monks datasets, where the test set is provided separately and consists of 432 examples comprising all possible combinations of the feature values. The average performance across five split experiments is reported. For each of the two training algorithms, the experiment is run 10 times and the average accuracy is computed on the test set. Table 2 shows the accuracy of our nested $k$-DNFs (column DNF) and the decision trees generated by CART with a fixed maximum depth of $k$ (column DT). Given the asymmetry of nested $k$-DNFs with respect to complementation, we repeated the experiment, learning a nested $k$-DNF model for $\bar\kappa$ rather than $\kappa$; results are reported in column $\overline{\text{DNF}}$.

The aim in using different datasets for experimentation is to assess whether the proposed heuristic can actually find a nested $k$-DNF that accurately represents the underlying structure of the data, as decision trees do. The results indicate variability in accuracy across different datasets, with nested $k$-DNFs outperforming depth-$k$ decision trees in some cases, and vice versa in others. Overall, the results achieved by both depth-$k$ decision trees and nested $k$-DNFs are comparable.

Test accuracy (%), $k = 2, 3, 4$:

| Dataset | DT (k=2) | DNF (k=2) | $\overline{\text{DNF}}$ (k=2) | DT (k=3) | DNF (k=3) | $\overline{\text{DNF}}$ (k=3) | DT (k=4) | DNF (k=4) | $\overline{\text{DNF}}$ (k=4) |
|---|---|---|---|---|---|---|---|---|---|
| Balance-scale | 93.28 | 89.04 | 93.28 | 93.28 | 92.46 | 93.28 | 93.28 | 92.10 | 93.28 |
| Banknote | 86.40 | 88.95 | 83.35 | 89.45 | 88.95 | 83.35 | 95.49 | 90.53 | 86.24 |
| Car-evaluation | 85.78 | 77.80 | 73.35 | 86.65 | 84.51 | 89.13 | 91.68 | 83.15 | 92.14 |
| Compas discretized | 64.47 | 64.02 | 65.71 | 65.90 | 65.87 | 67.18 | 66.40 | 66.07 | 67.12 |
| Indians Diabetes | 77.01 | 78.70 | 76.16 | 78.18 | 79.48 | 77.48 | 77.42 | 79.56 | 77.52 |
| Iris | 98.00 | 96.00 | 99.33 | 98.00 | 97.53 | 98.00 | 98.00 | 98.60 | 98.00 |
| Lymph | 81.33 | 76.73 | 85.33 | 79.93 | 79.67 | 87.13 | 85.13 | 82.07 | 86.07 |
| Monks-1 | 75.00 | 75.00 | 66.67 | 83.33 | 77.78 | 66.67 | 83.33 | 78.50 | 75.22 |
| Monks-2 | 56.94 | 60.65 | 60.26 | 63.89 | 63.66 | 61.13 | 61.31 | 65.15 | 63.49 |
| Monks-3 | 97.22 | 97.22 | 97.22 | 94.44 | 97.22 | 97.22 | 95.37 | 97.22 | 94.59 |
| Tic-tac-toe | 68.23 | 68.76 | 68.31 | 72.40 | 70.05 | 75.65 | 81.77 | 75.27 | 80.16 |
https://arxiv.org/abs/2505.21212v1
Test accuracy (%), $k = 5, 6$:

| Dataset | DT (k=5) | DNF (k=5) | $\overline{\text{DNF}}$ (k=5) | DT (k=6) | DNF (k=6) | $\overline{\text{DNF}}$ (k=6) |
|---|---|---|---|---|---|---|
| Balance-scale | 92.96 | 92.05 | 93.28 | 92.18 | 90.58 | 93.10 |
| Banknote | 98.25 | 90.25 | 88.52 | 99.02 | 90.01 | 88.52 |
| Car-evaluation | 92.83 | 82.51 | 91.48 | 93.64 | 82.97 | 91.79 |
| Compas discretized | 67.31 | 66.40 | 67.31 | 66.97 | 66.51 | 67.70 |
| Indians Diabetes | 77.64 | 79.66 | 77.23 | 77.43 | 79.57 | 76.97 |
| Iris | 98.00 | 98.00 | 98.00 | 98.00 | 97.27 | 98.00 |
| Lymph | 85.00 | 81.93 | 85.93 | 84.27 | 80.40 | 86.27 |
| Monks-1 | 83.33 | 82.20 | 77.41 | 83.33 | 91.17 | 80.52 |
| Monks-2 | 68.26 | 67.32 | 68.33 | 78.85 | 67.55 | 73.63 |
| Monks-3 | 89.81 | 89.00 | 92.46 | 92.59 | 87.09 | 88.19 |
| Tic-tac-toe | 90.98 | 75.52 | 78.07 | 92.28 | 77.55 | 79.38 |

Table 2: Test accuracy of depth-$k$ decision trees and nested $k$-DNFs.

Thus, nested $k$-DNFs emerge as a promising alternative to decision trees, with these initial results highlighting the potential of this family of models.

We also compared the size of the DT and nested $k$-DNF models. Table 3 shows the average number of leaves in the DTs and the average number of terms in the DNFs across five dataset splits, with both training algorithms executed 10 times per split and the best-performing model selected from these iterations. A nested $k$-DNF is composed of terms associated with a single class, while a DT contains paths leading to both classes. However, despite this difference, the number of terms is generally less than half of the number of leaves, with a significant number of cases exhibiting an even greater disparity. A similar observation was noted for the number of terms in $\overline{\text{DNF}}$. This suggests that the nested $k$-DNFs are simpler than the DTs in terms of size.

| Dataset | DT (k=2) | DNF (k=2) | DT (k=3) | DNF (k=3) | DT (k=4) | DNF (k=4) | DT (k=5) | DNF (k=5) | DT (k=6) | DNF (k=6) |
|---|---|---|---|---|---|---|---|---|---|---|
| Balance-scale | 4.0 | 2.0 | 8.0 | 2.8 | 14.6 | 3.4 | 23.2 | 7.2 | 36.4 | 13.6 |
| Banknote | 4.0 | 2.0 | 8.0 | 2.0 | 14.0 | 2.0 | 21.2 | 2.2 | 27.2 | 2.6 |
| Car-evaluation | 3.0 | 1.0 | 4.0 | 2.6 | 6.0 | 3.2 | 9.8 | 3.8 | 16.4 | 4.4 |
| Compas discretized | 4.0 | 1.8 | 8.0 | 2.6 | 15.6 | 4.2 | 29.0 | 4.6 | 51.4 | 6.4 |
| Indians Diabetes | 4.0 | 2.0 | 8.0 | 3.2 | 15.6 | 4.2 | 27.4 | 5.0 | 43.4 | 5.8 |
| Iris | 3.0 | 1.6 | 4.4 | 2.0 | 4.4 | 2.0 | 4.4 | 2.6 | 4.4 | 3.2 |
| Lymph | 4.0 | 1.8 | 8.0 | 2.2 | 13.2 | 2.4 | 16.2 | 2.2 | 18.0 | 3.0 |
| Monks-1 | 3.0 | 2.0 | 5.0 | 3.0 | 6.0 | 3.0 | 8.0 | 5.0 | 11.0 | 6.0 |
| Monks-2 | 4.0 | 2.0 | 8.0 | 4.0 | 15.0 | 5.6 | 25.0 | 6.6 | 40.0 | 8.6 |
| Monks-3 | 4.0 | 1.0 | 6.0 | 1.0 | 9.0 | 1.0 | 11.0 | 3.6 | 13.0 | 9.0 |
| Tic-tac-toe | 4.0 | 1.2 | 8.0 | 4.2 | 14.0 | 8.4 | 22.4 | 10.6 | 33.8 | 15.0 |

Table 3: The number of leaves in the DT and the number of terms in the nested $k$-DNF.

7 Conclusion and future work

A machine-learning model can be deemed interpretable if each of its decisions has an explanation that is intelligible to a human user. We formalized this definition of interpretability based on abductive or counterfactual explanations of size at most a small constant $k$. In the case of binary classifiers over boolean domains, we showed that this definition is equivalent to the classifier and its complement both being expressible as $k$-DNFs. Depth-$k$ decision trees are the most well-known example of a family of models satisfying this definition. Decision trees are widely used either directly or as surrogate models to provide explanations. This paper investigated the existence of other families of interpretable models.

We introduced a graph-theoretical sufficient condition for interpretability in terms of maximum induced matchings of DNF formulas, before giving a novel concrete family of interpretable models which we call nested $k$-DNFs. We showed experimentally that a simple heuristic algorithm produces nested $k$-DNFs whose accuracy is comparable with depth-$k$ decision trees found by CART.

An intriguing open question is whether there exist more general families of interpretable DNFs that could achieve better accuracy than decision trees. In contrast to decision trees of depth $k$, the property of a function being expressible as a nested $k$-DNF is not invariant under complementation in general. In addition, nested $k$-DNFs cannot contain more than $k^2$ distinct literals. These limitations come from our definitions
https://arxiv.org/abs/2505.21212v1
and do not arise from fundamental technical reasons, so we believe there is ample room for further improvement.

Finally, our observations during the experiments revealed some variability in the test accuracy of the nested $k$-DNFs across different runs. This observation suggests that significantly better results could be achieved by using more sophisticated heuristics. In particular, it would be interesting to compare optimal nested $k$-DNFs and optimal depth-$k$ decision trees.

Acknowledgments

This work was funded by the French National Research Agency (ANR) under grant agreement no. ANR-23-CE25-0009. We would also like to thank Aurélie Hurault for many insightful comments.

References

[1] Leila Amgoud and Jonathan Ben-Naim. Axiomatic foundations of explainability. In Luc De Raedt, editor, IJCAI, pages 636–642. ijcai.org, 2022.
[2] Sanjeev Arora and Boaz Barak. Computational Complexity: A Modern Approach. Cambridge University Press, USA, 1st edition, 2009.
[3] Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche, Jean-Marie Lagniez, and Pierre Marquis. On the computational intelligibility of boolean classifiers. In Meghyn Bienvenu, Gerhard Lakemeyer, and Esra Erdem, editors, KR, pages 74–86, 2021.
[4] Pablo Barceló, Mikaël Monet, Jorge Pérez, and Bernardo Subercaseaux. Model interpretability through the lens of computational complexity. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, NeurIPS, 2020.
[5] Claude Berge. Hypergraphs: Combinatorics of Finite Sets. 1989.
[6] Guy Blanc, Caleb Koch, Jane Lange, and Li-Yang Tan. The query complexity of certification. In Stefano Leonardi and Anupam Gupta, editors, STOC '22: 54th Annual ACM SIGACT Symposium on Theory of Computing, pages 623–636. ACM, 2022.
[7] Leo Breiman, J. H. Friedman, Richard A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth, 1984.
[8] Clément Carbonnel, Martin C. Cooper, and João Marques-Silva. Tractable explaining of multivariate decision trees. In Pierre Marquis, Tran Cao Son, and Gabriele Kern-Isberner, editors, KR, pages 127–135, 2023.
[9] Siddhesh Chaubal and Anna Gál. Diameter versus certificate complexity of boolean functions. In Filippo Bonchi and Simon J. Puglisi, editors, 46th International Symposium on Mathematical Foundations of Computer Science, MFCS, volume 202 of LIPIcs, pages 31:1–31:22. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
https://arxiv.org/abs/2505.21212v1
[10] Martin C. Cooper and Leila Amgoud. Abductive explanations of classifiers under constraints: Complexity and properties. In Kobi Gal, Ann Nowé, Grzegorz J. Nalepa, Roy Fairstein, and Roxana Radulescu, editors, ECAI, volume 372 of Frontiers in Artificial Intelligence and Applications, pages 469–476. IOS Press, 2023.
[11] Martin C. Cooper and João Marques-Silva. Tractability of explaining classifier decisions. Artif. Intell., 316, 2023.
[12] Emir Demirovic, Emmanuel Hebrard, and Louis Jean. Blossom: an anytime algorithm for computing optimal decision trees. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, ICML, volume 202, pages 7533–7562. PMLR, 2023.
[13] Kerven Durdymyradov and Mikhail Moshkov. Bounds on depth of decision trees derived from decision rule systems with discrete attributes. Ann. Math. Artif. Intell., 92(3):703–732, 2024.
[14] Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Martin C. Cooper, Nicholas Asher, and João Marques-Silva. Tractable explanations for d-DNNF classifiers. In AAAI, pages 5719–5728. AAAI Press, 2022.
[15] Alexey Ignatiev, Yacine Izza, Peter J. Stuckey, and João Marques-Silva. Using MaxSAT for efficient explanations of tree ensembles. In AAAI, pages 3776–3785. AAAI Press, 2022.
[16] Alexey Ignatiev and João Marques-Silva. SAT-based rigorous explanations for decision lists. In Chu-Min Li and Felip Manyà, editors, Theory and Applications of Satisfiability Testing - SAT, volume 12831 of Lecture Notes in Computer Science, pages 251–269. Springer, 2021.
[17] Alexey Ignatiev, Nina Narodytska, and João Marques-Silva. Abduction-based explanations for machine learning models. In AAAI, pages 1511–1519. AAAI Press, 2019.
[18] Yacine Izza, Alexey Ignatiev, and João Marques-Silva. On tackling explanation redundancy in decision trees. J. Artif. Intell. Res., 75:261–321, 2022.
[19] Yacine Izza and João Marques-Silva. On explaining random forests with SAT. In Zhi-Hua Zhou, editor, IJCAI, pages 2584–2591. ijcai.org, 2021.
[20] João Marques-Silva. Logic-based explainability: Past, present & future. CoRR, abs/2406.11873, 2024.
[21] João Marques-Silva, Thomas Gerspacher, Martin C. Cooper, Alexey Ignatiev, and Nina Narodytska. Explaining naive Bayes and other linear classifiers with polynomial time and delay. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, NeurIPS, 2020.
[22] João Marques-Silva, Thomas Gerspacher, Martin C. Cooper, Alexey Ignatiev, and Nina Narodytska. Explanations for monotonic classifiers. In Marina Meila and Tong Zhang, editors, ICML, volume 139, pages 7469–7479. PMLR, 2021.
[23] Christoph Molnar, Giuseppe Casalicchio, and Bernd Bischl. Interpretable machine learning - A brief history, state-of-the-art and challenges. CoRR, abs/2010.09337, 2020.
[24] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell., 1(5):206–215, 2019.
[25] Andy Shih, Arthur Choi, and Adnan Darwiche. A symbolic approach to explaining bayesian network classifiers. In Jérôme Lang, editor, IJCAI, pages 5103–5111. ijcai.org, 2018.
https://arxiv.org/abs/2505.21212v1
arXiv:2505.21218v1 [cs.CL] 27 May 2025

Pretrained LLMs Learn Multiple Types of Uncertainty

Roi Cohen (HPI / University of Potsdam, Roi.Cohen@hpi.de), Omri Fahn (Tel Aviv University, omrifahn@mail.tau.ac.il), Gerard de Melo (HPI / University of Potsdam, Gerard.DeMelo@hpi.de)

Abstract

Large Language Models are known to capture real-world knowledge, allowing them to excel in many downstream tasks. Despite recent advances, these models are still prone to what are commonly known as hallucinations, causing them to emit unwanted and factually incorrect text. In this work, we study how well LLMs capture uncertainty, without explicitly being trained for it. We show that, if uncertainty is considered as a linear concept in the model's latent space, it might indeed be captured, even after only pretraining. We further show that, though unintuitive, LLMs appear to capture several different types of uncertainty, each of which can be useful for predicting correctness on a specific task or benchmark. Furthermore, we provide in-depth results, such as demonstrating a correlation between our correctness prediction and the model's ability to abstain from misinformation using words, and the lack of impact of model scaling on capturing uncertainty. Finally, we claim that unifying the uncertainty types as a single one using instruction-tuning or [IDK]-token tuning is helpful for the model in terms of correctness prediction.

1 Introduction

Large Language Models (LLMs) are trained on vast corpora of text data [Brown et al., 2020, Raffel et al., 2020, Chowdhery et al., 2023, Touvron et al., 2023, Le Scao et al., 2023, Jiang et al., 2023a], enabling them to comprehend and generate human language. These training datasets encompass a wide range of written human knowledge, including books, news articles, Wikipedia, and scientific publications. Through this extensive pretraining, LLMs retain significant portions of the information they are exposed to, effectively embedding real-world knowledge within their parameters and functioning as knowledge repositories [Petroni et al., 2019, Roberts et al., 2020, Cohen et al., 2023a, Pan et al., 2023]. This capability allows LLMs to be leveraged in tasks that depend on such knowledge, such as closed-book question answering [Brown et al., 2020, Roberts et al., 2020] and information retrieval [Tay et al., 2022].

Despite their widespread adoption, LLMs are widely known to suffer from 'hallucinations'—a predisposition towards producing outputs that are false or misleading—which significantly undermines their accuracy and trustworthiness [Ji et al., 2023, Manduchi et al., 2024]. Hallucinations may manifest in various forms, including factually incorrect statements [Maynez et al., 2020, Devaraj et al., 2022, Tam et al., 2023], internal inconsistencies [Elazar et al., 2021, Mündler et al., 2023], contradictions [Cohen et al., 2024a], or statements lacking clear sources or attribution [Bohnet et al., 2022, Rashkin et al., 2023, Yue et al., 2023].

Uncertainty, however, is a concept that LLMs are not generally known to capture [Yin et al., 2023, Kapoor et al., 2024]. At the very least, they are generally not explicitly trained on it. This lack of competency regarding uncertainty often results in misinformation generation, which can be harmful and misleading [Maynez et al., 2020, Devaraj et al., 2022, Tam et al., 2023], as LLMs have a hard time expressing a lack of knowledge both verbally and through their output distribution.
https://arxiv.org/abs/2505.21218v1
[Figure 1: Illustration of identifying multiple data-specific uncertainty linear vectors when investigating the hidden space at the end of each transformer layer.]

Some more advanced methods such as instruction-tuning [Ouyang et al., 2022, Zhang et al., 2023] during post-training and [IDK]-tuning [Cohen et al., 2024b] during pretraining aim, inter alia, to align LLMs to more efficiently express their uncertainty and refrain from misinformation generation. While instruction tuning more generally aligns LLMs with human intent by fine-tuning them on task-specific instructions and corresponding outputs, the model is often also encouraged to refrain from answering questions when the specific answer is not known to it.

In this work, we propose an analysis mechanism, which we use in order to study the uncertainty captured by a diverse range of models. First, we propose a technique to search for linear vectors in the LLMs' latent space that are associated with uncertainty. We then suggest using these vectors as a form of correctness prediction for the LLM's own generation. By establishing this regime, we can evaluate how well these vectors can stand as misinformation predictors and approximate their uncertainty expression quality.

Using our proposed mechanism, we demonstrate that LLMs indeed internalize a notion of uncertainty during pretraining, which can be extracted using linear probes from their latent representations. Specifically, we show that it is possible to identify linear uncertainty vectors—directions in the model's hidden space—that correlate with generation correctness across multiple models and datasets, despite forgoing any additional training of model weights. This suggests that uncertainty is a learnable and linearly separable concept within LLMs' latent spaces.

Interestingly, the study reveals that LLMs do not learn a single, unified representation of uncertainty. Instead, they are found to encode multiple distinct uncertainty vectors, each associated with different datasets or types of knowledge. These vectors often exhibit low cosine similarity, indicating near linear independence. However, some generalization exists: for instance, uncertainty representations derived from multiple mathematics benchmarks can transfer across related datasets. These insights may enable new hallucination mitigation techniques, since inconsistencies between learned uncertainty representations may contribute to unreliable or incorrect outputs.

Moreover, we conduct an in-depth analysis in terms of transformer layers, model sizes, and different training techniques. We find that intermediate transformer layers are typically the most informative for extracting uncertainty vectors, consistently yielding the highest accuracy in correctness prediction across datasets. In addition, model size alone does not appear to enhance uncertainty representation, as smaller models often perform on par with or even surpass larger counterparts in this task. More notably, instruction-tuning and [IDK]-tuning significantly boost a model's ability to capture uncertainty. Instruction-tuned variants of Llama and Qwen models outperform their base versions, and their optimal uncertainty representations also emerge in earlier layers.
Similarly, [IDK]-tuning not only improves the overall correctness prediction accuracy but also aligns early layers more effectively with uncertainty signals, as evinced by higher precision in early-stage classifiers. These results suggest that targeted training strategies can enhance the internal encoding of uncertainty more effectively than scaling model size alone.
https://arxiv.org/abs/2505.21218v1
To conclude, our contributions are: (1) we introduce an analytical framework for probing how LLMs encode uncertainty; (2) we conduct thorough experiments across models, layers, and datasets, and show that uncertainty is not only a learnable and linearly separable concept but also represented in multiple, distinct forms within a single model; (3) we further analyze how factors such as model depth, size, and training methods affect uncertainty representation, revealing that intermediate layers are most informative and that scaling the model size does not guarantee better uncertainty encoding; and (4) we show that instruction-tuning and [IDK]-tuning significantly improve uncertainty capturing, offering practical strategies for enhancing model reliability and reducing hallucinations.

2 Identifying Uncertainty Predictors

In this work, we assume that uncertainty is a concept represented in an LLM's latent space in each of the layers. Specifically, let $h_i(x)$ be the hidden state produced by the end of the $i$-th layer of the model, given input $x$. Then, for each of these hidden states, we search for a specific linear vector $u_i$ such that the classifier defined as $C(x, i) = u_i^\top h_i(x) + b_i$ can reach an accuracy level of predicting the correctness of the model's next-token generation that is statistically significantly better than random accuracy. Intuitively, this search seeks to identify a linear concept that represents the uncertainty of the model regarding its own generations.

2.1 Linear Uncertainty Search

Let $M$ be a specific LLM and let $D$ be a specific dataset of questions and answers $D = \{(q_j, a_j)\}_{j=0}^{n}$. In order to find a certain $u_i$ for a certain model layer $i$, we train a straightforward linear classifier for the sake of predicting the correctness of the model's answer to a specific question. Specifically, let $D_{TRAIN} = \{(q_j, a_j)\}_{j=0}^{m}$, $m < n$, be a training set derived from $D$. For each question $q_j$ in the dataset, we first let the model predict its own answer. If the model's prediction is correct compared to $a_j$, then we label $q_j$ as positive. In contrast, if the model's prediction is incorrect compared to $a_j$, then we label $q_j$ as negative. Formally, assuming $M(q_j)$ is the model's output given the input $q_j$, we define its label $L(q_j)$ as:
$$L(q_j) = \begin{cases} 1 & \text{if } M(q_j) = a_j \\ 0 & \text{otherwise.} \end{cases} \quad (1)$$
We thus define our new training set as $\hat{D}_{TRAIN} = \{(q_j, L(q_j))\}_{j=0}^{m}$. We now can train a classifier at the end of each layer in $M$'s architecture. The input of the classifier is the hidden state produced by the end of the specific layer. As mentioned before, the purpose of this classifier is to predict the correctness of the upcoming prediction of $M$ itself. If we employ a linear classifier, this corresponds to a linear direction in this layer's latent space, which we refer to as the uncertainty direction corresponding to dataset $D$ (as this direction has been found while training the classifier to predict the correctness of the model on this specific dataset). We thus denote it as $u_i(D)$. We denote the corresponding learned bias term as $b_i$.
https://arxiv.org/abs/2505.21218v1
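The following is a minimal sketch of this search procedure, assuming a HuggingFace causal LM and scikit-learn; the model name, the single-token answer check, and the `questions`/`answers` containers are illustrative placeholders rather than the paper's exact pipeline.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "meta-llama/Llama-3.2-1B"   # any causal LM with exposed hidden states
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def state_and_label(question, answer, layer):
    """h_i(x) at the last input position, plus the correctness label L(q):
    1 if the model's greedy next token matches the gold answer (Eq. 1)."""
    enc = tok(question, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    h = out.hidden_states[layer][0, -1]                 # end-of-layer hidden state
    pred = tok.decode(out.logits[0, -1].argmax().item())  # greedy next token
    return h.numpy(), int(pred.strip() == answer.strip())

layer = 18  # intermediate layers tend to be most informative (Section 5.1)
X, y = zip(*(state_and_label(q, a, layer) for q, a in zip(questions, answers)))
probe = LogisticRegression(max_iter=1000).fit(X, y)
u_iD, b_i = probe.coef_[0], probe.intercept_[0]         # uncertainty direction u_i(D)
```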
2.2 Uncertainty Vector as a Predictor

We can later evaluate the quality of $u_i(D)$ by testing its ability to predict the correctness of the model's generation given unseen data as input. In particular, in this work we use test sets derived from the question answering datasets whose training splits we use during the linear uncertainty search process (see Section 2.1). Technically, given a textual input $x$ to the model $M$, recall that $h_i(x)$ is the hidden state vector produced by the end of the $i$-th layer of $M$ during the inference call $M(x)$. As mentioned before, we use $u_i(D)$ as a linear classifier in order to predict the correctness of the model's generation – namely, the token to which the model $M$ assigns the highest probability as a next-token completion for $x$. More formally, let $C_{u_i(D)}$ be the classifier induced by $u_i(D)$ and let $C_{u_i(D)}(x)$ be the predicted correctness of $x$ while applying $u_i(D)$. Then:
$$C_{u_i(D)}(x) = \begin{cases} \text{INCORRECT} & \text{if } [u_i(D)]^\top h_i(x) + b_i > 0 \\ \text{CORRECT} & \text{if } [u_i(D)]^\top h_i(x) + b_i \leq 0 \end{cases} \quad (2)$$
Given the correctness prediction of the uncertainty vector, we can evaluate its correctness in case we have the ground-truth token. We thus can also derive general accuracy, precision, recall, etc.

| Model | ARC-Easy | ASDiv-A | CommonsenseQA | GSM8K | GranolaEntityQuestions | HumanEval-X | MBPP | NaturalQuestions | OpenBookQA | PopQA | Qampari | ROMQA | SVAMP | StrategyQA | TriviaQA | TruthfulQA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Llama-3.2-1B | 0.535 | 0.670 | 0.625 | 0.444 | 0.789 | 0.708 | 0.769 | 0.600 | 0.534 | 0.857 | 0.634 | 0.750 | 0.729 | 0.689 | 0.716 | 0.737 |
| Llama-3.2-3B | 0.710 | 0.648 | 0.598 | 0.688 | 0.790 | 0.732 | 0.641 | 0.675 | 0.590 | 0.793 | 0.734 | 0.583 | 0.750 | 0.608 | 0.742 | 0.600 |
| Llama-3.1-8B | 0.657 | 0.667 | 0.649 | 0.577 | 0.763 | 0.692 | 0.722 | 0.590 | 0.644 | 0.757 | 0.630 | 0.763 | 0.711 | 0.684 | 0.757 | 0.722 |
| Llama-3.1-8B-Instruct | 0.652 | 0.885 | 0.667 | 0.737 | 0.705 | 0.781 | 0.728 | 0.655 | 0.694 | 0.768 | 0.679 | 0.750 | 0.767 | 0.639 | 0.776 | 0.719 |
| Mistral-7B-v0.1 | 0.657 | 0.691 | 0.709 | 0.550 | 0.782 | 0.707 | 0.707 | 0.630 | 0.597 | 0.747 | 0.727 | 0.750 | 0.687 | 0.643 | 0.760 | 0.673 |
| IDK-tuned-Mistral-7B-v0.1 | 0.600 | 0.750 | 0.571 | 0.688 | 0.758 | 0.545 | 0.688 | 0.673 | 0.611 | 0.829 | 0.789 | 0.667 | 0.628 | 0.547 | 0.693 | 0.725 |
| Qwen2.5-7B | 0.750 | 0.800 | 0.718 | 0.682 | 0.704 | 0.578 | 0.648 | 0.750 | 0.678 | 0.817 | 0.697 | 0.615 | 0.696 | 0.698 | 0.717 | 0.678 |
| Qwen3-14B | 0.727 | 0.786 | 0.655 | 0.878 | 0.738 | 0.800 | 0.694 | 0.651 | 0.743 | 0.833 | 0.630 | 0.596 | 0.789 | 0.561 | 0.782 | 0.699 |
| Qwen3-14B-Instruct | 0.800 | 0.750 | 0.638 | 0.702 | 0.770 | 0.688 | 0.625 | 0.674 | 0.619 | 0.771 | 0.861 | 0.655 | 0.767 | 0.711 | 0.756 | 0.726 |

Table 1: Correctness prediction accuracy of our induced classifiers across all datasets.

3 Experimental Setup

To evaluate our uncertainty identification framework, we consider a series of experiments, for which we first introduce the experimental setup.

Foundation Models. In order to reach general conclusions that are not specific to any particular LLM, in this work we study three different families of models – the Llama family of models [Touvron et al., 2023, Dubey et al., 2024], Mistral [Jiang et al., 2023b], and Qwen [Bai et al., 2023, Yang et al., 2024]. Specifically, for Llama we study Llama-3.2-1B, Llama-3.2-3B, and Llama-3.1-8B; for Mistral, we study Mistral-7B-v0.1; and finally for Qwen, we study Qwen2.5-7B and Qwen3-14B.

Advanced Models. For evaluating the effects of different types of training on the linear uncertainty encodings, we use three additional models in our experiments. To capture the instruction-tuning [Ouyang et al., 2022, Zhang et al., 2023] effect we use Llama-3.1-8B-Instruct and Qwen3-14B-Instruct, which were both post-trained in instruction-tuning fashion. Furthermore, we follow [Cohen et al., 2024b] and use the IDK-tuned-Mistral-7B-v0.1 model in our experiments to evaluate the effect of [IDK]-tuning – a method that adds a new uncertainty token to the model's vocabulary and teaches the model to use it during pretraining by adapting the loss to consider the new token.
https://arxiv.org/abs/2505.21218v1
Datasets and Benchmarks. We utilize 16 QA datasets and benchmarks in both our linear uncertainty search (Section 2.1) and the induced classifier evaluation (Section 2.2). We group them into six thematic categories:

• Commonsense QA: CommonsenseQA [Talmor et al., 2019], StrategyQA [Geva et al., 2021a]. These include questions that assess the model's ability to apply everyday reasoning and background knowledge to answer questions beyond surface-level facts.
• Fact-Lookup and Adversarial QA: GranolaEntityQuestions [Yona et al., 2024], Natural Questions [Kwiatkowski et al., 2019], PopQA [Mallen et al., 2022], TriviaQA [Joshi et al., 2017], TruthfulQA [Lin et al., 2021]. These consist of questions that test the model's factual recall and resilience to misleading or adversarial question phrasing.
• List-Output QA: QAMPARI [Amouyal et al., 2023], RoMQA [Zhong et al., 2022]. Both evaluate whether models can produce comprehensive sets of correct answers, challenging their ability to recall multiple relevant facts simultaneously.
• Science QA (K–12): ARC-Easy [Clark et al., 2018], OpenBookQA [Mihaylov et al., 2018]. These focus on elementary-school and high-school level science, requiring models to combine factual knowledge with basic reasoning.
• Math Word Problems: GSM8K [Cobbe et al., 2021], ASDiv-A [Miao et al., 2020], SVAMP [Patel et al., 2021]. These include queries that test models on arithmetic and algebraic reasoning through natural-language mathematical problems.
• Code Generation: HumanEval-X [Zheng et al., 2023], MBPP [Austin et al., 2021]. We use these datasets to evaluate the ability of models to generate correct and functional software code given natural-language programming prompts.

[Figure 2: Correctness prediction accuracy results of the classifier induced by $u_{26}$(y-axis dataset), using Llama-3.1-8B, while testing on the test set of the x-axis dataset.]
[Figure 3: Correctness prediction accuracy results of the classifier induced by $u_{27}$(y-axis dataset), using Mistral-7B-v0.1, while testing on the test set of the x-axis dataset.]

Notably, for each of these, we create a fixed train split, which is used to derive our uncertainty vectors, and a test split, which is used to evaluate their performance.

Linear Uncertainty Search Details. For every model $M$, transformer layer $i$, and evaluation dataset $D$, we fit a logistic-regression probe on the hidden states $h_i(x)$ and obtain a single weight vector, $u_i(D)$, which serves as the linear uncertainty direction for that (layer, dataset) pair. To obtain a dataset-agnostic baseline, we also train an additional probe on the concatenation of all datasets. The resulting vector is denoted as $u_i(D_{UNIFIED})$.

Evaluation. We evaluate the ability of our identified linear uncertainty vectors to predict the correctness of the model's generation. For this, we consider the following metrics: (i) Accuracy – the ratio of correct predictions by the classifier induced by the uncertainty linear vector; (ii) Precision – the ratio of actually wrong completions by the model among those that the induced classifier predicted to be wrong.
https://arxiv.org/abs/2505.21218v1
4 LLMs Indeed Learn Different Types of Uncertainty

In this section, we show that we can indeed find linear uncertainty vectors from which we can predict generation correctness to an extent that is better than random. We additionally claim and show that rather than learning one unified notion of uncertainty, LLMs learn several different ones. We later hypothesize that this fact might be one of the reasons for the high rate of misinformation and hallucinations that we observe in LLM generations.

4.1 The Concept of Uncertainty is Indeed Learned During Pretraining

Table 1 presents the performance of our correctness classifiers, derived from the learned linear uncertainty vectors, across all evaluated models and datasets. While the uncertainty vector search is conducted independently at each transformer layer for every model–dataset pair, the table reports results from the best-performing layer only (a detailed layer-wise analysis is provided in a subsequent section). Notably, despite keeping the model weights entirely frozen and applying no further training, we are able to identify linear directions in the latent space that yield meaningful correctness predictions. The results demonstrate that, for a substantial number of datasets across all models, classification accuracy significantly exceeds the random baseline of 0.5. This provides strong empirical evidence that uncertainty is encoded within LLMs in a manner that is both learnable and linearly separable within their hidden representations.

4.2 LLMs Learn Multiple Different Linear Uncertainty Vectors

One of our key findings is that while linear uncertainty vectors can be identified across multiple layers in all examined models, these vectors are typically dataset-specific and distinct. Specifically, for a given layer $i$, a classifier induced from $u_i(D_1)$ often yields markedly different token-level correctness prediction accuracy across evaluation datasets compared to a classifier induced from $u_i(D_2)$, where $D_1 \neq D_2$. Furthermore, the cosine similarity between $u_i(D_1)$ and $u_i(D_2)$ is frequently near-zero, indicating near-linear independence between these vectors.

[Figure 4: Cosine similarity results across all linear uncertainty vectors at layer 22 of Llama-3.1-8B.]
[Figure 5: Correctness prediction accuracy results of the classifier induced by $u_{21}$(y-axis dataset), using Qwen2.5-7B, while testing on the test set of the x-axis dataset.]

Figure 2 illustrates this effect for layer 26 of Llama-3.1-8B, showing the accuracy of classifiers trained and tested on various datasets. Similarly, Figure 3 presents corresponding results for layer 27 of Mistral-7B-v0.1. In most cases, a vector trained on dataset $D_1$ performs well when tested on $D_1$, but approaches random performance when evaluated on other datasets. Additionally, Figure 4 shows cosine similarity scores among uncertainty vectors derived from Llama-3.1-8B at layer 22. Aside from the unified classifier trained on a dataset union (UNIFIED), nearly all vectors are close to orthogonal. In conjunction with the observation that most of the vectors can predict correctness substantially better than chance on at least one dataset, these results support the conclusion that LLMs encode uncertainty through multiple distinct and largely independent internal representations.
https://arxiv.org/abs/2505.21218v1
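A small sketch of this cross-dataset comparison (our own; `vectors` is assumed to be a stacked array of the $u_i(D)$ probe weights, one row per dataset):

```python
import numpy as np

def cosine_matrix(vectors):
    """Pairwise cosine similarity between uncertainty directions u_i(D)."""
    V = np.asarray(vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit-normalize each row
    return V @ V.T

# near-zero off-diagonal entries indicate (near-)orthogonal, dataset-specific
# uncertainty directions, as in Figure 4
```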
4.3 Linear Uncertainty Topic Similarity

An additional noteworthy finding emerges from an internal analysis of the submatrices corresponding to two categories of our dataset collection: Fact-Lookup and Adversarial QA and Math Word Problems. Specifically, when the induced classifier is evaluated on a dataset different from the one used to search for the uncertainty vector, it often attains a remarkably high accuracy—occasionally comparable to, or even surpassing, the level observed when the uncertainty vector is derived from the same dataset. This suggests that, for example, although mathematical uncertainty may be represented in various ways within the latent space, it is not strictly dataset-specific; rather, its semantic structure appears to be shared across tasks. This is further illustrated by the submatrix in Figure 5, which includes three math benchmarks: GSM8K, ASDiv-A, and SVAMP. The results show that each uncertainty vector obtained from these datasets can substantially enhance the prediction of correctness across all three benchmarks when used as input for the induced classifier.

[Figure 6: Accuracy results of Mistral-7B-v0.1 across all model layers and datasets. Here the induced classifiers were tested on the same dataset (but a different split) as the one on which they were searched.]
[Figure 7: Correctness prediction precision averaged over all datasets of the induced classifier, considering the Llama family: Llama-3.2-1B, Llama-3.2-3B, Llama-3.1-8B, and Llama-3.1-8B-Instruct.]

4.4 Comparing to Zero-Shot Abstaining Skills

As an additional evaluation of the uncertainty vectors, we assess their alignment with the model's own self-assessed knowledge through zero-shot prompting. Specifically, for each dataset question, we prompt the model to indicate whether it believes it knows the answer. We then measure the model's accuracy in this binary self-assessment and compute the Pearson correlation between these scores and the accuracy of our linear uncertainty-based correctness predictors. The resulting correlation coefficients are 0.45 for Llama-3.1-8B, 0.38 for Mistral-7B-v0.1, and 0.42 for Qwen3-14B. These findings indicate a substantial positive correlation, suggesting that the learned uncertainty vectors capture a meaningful signal related to the model's internal estimation of its own knowledge.

5 In-Depth Analysis

In this section, we analyze the gaps in performance of our induced correctness prediction classifiers as a function of the layer number and the model size. We additionally study the effect of advanced training techniques such as instruction-tuning and [IDK]-tuning on the linear uncertainty encoding of the model.

5.1 Intermediate Layers are Usually More Exact

We begin by studying the behavior of uncertainty vectors and the corresponding correctness prediction performance across different transformer layers and model sizes. Figure 6 reports the accuracy of uncertainty-based classifiers extracted from each layer of Mistral-7B-v0.1, evaluated on held-out splits of the same datasets used to induce them. On average, the vector from layer 17 achieves the highest prediction accuracy, with performance gradually declining in layers further from this point (noting that the model consists of 32 layers in total). This trend suggests that uncertainty-relevant information is most concentrated in intermediate layers. Complementing this, Figure 7 shows the layer-wise average performance across multiple models, again highlighting that layers between $L/2$ and $3L/4$, where $L$ denotes the number of transformer layers, consistently yield the most reliable uncertainty signals. Notably, the precision results plotted in Figure 7 show a marked drop in the final layers. This decline implies that the uncertainty vectors extracted from later layers tend to classify many generations as incorrect even when they are not, indicating diminished model confidence in its own outputs.
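A compact sketch of this layer sweep (our own; `H_by_layer` denotes precomputed per-layer hidden-state matrices for one dataset, with correctness labels `y`):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def best_layer(H_by_layer, y):
    """Fit one linear probe per layer and return (best layer, its accuracy)."""
    scores = []
    for H in H_by_layer:   # H: n_examples x hidden_dim at one layer
        Htr, Hte, ytr, yte = train_test_split(H, y, test_size=0.2, random_state=0)
        scores.append(LogisticRegression(max_iter=1000).fit(Htr, ytr).score(Hte, yte))
    i = int(np.argmax(scores))
    return i, scores[i]
```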
5.2 Size Doesn't Seem to Matter
https://arxiv.org/abs/2505.21218v1
Figure 7 illustrates the impact of model size on uncertainty-based correctness prediction accuracy across layers. Ignoring Llama-3.1-8B-Instruct, the highest performance is achieved by the classifier derived from layer 18 of Llama-3.1-8B. Moreover, a comparison between Llama-3.2-3B and Llama-3.1-8B reveals negligible differences in average accuracy, suggesting comparable performance despite the disparity in model size. While Llama-3.2-1B exhibits a slightly lower peak performance—approximately 1.1 points below the others—the overall trend indicates that the ability to represent uncertainty does not consistently improve with increasing model scale. These findings suggest that scaling alone is insufficient for enhancing uncertainty representation. In the subsequent section, we explore two training-based strategies that yield more substantial improvements.

[Figure 8: Correctness prediction accuracy results of the classifier induced by $u_{15}$(y-axis dataset), using Llama-3.1-8B-Instruct, while testing on the test set of the x-axis dataset.]
[Figure 9: Correctness prediction accuracy averaged over all datasets of the induced classifier, considering the Qwen family: Qwen2.5-7B, Qwen3-14B, and Qwen3-14B-Instruct.]
[Figure 10: Correctness prediction accuracy averaged over all datasets of the induced classifier, comparing Mistral-7B-v0.1 against IDK-tuned-Mistral-7B-v0.1.]
[Figure 11: Correctness prediction precision averaged over all datasets of the induced classifier, comparing Mistral-7B-v0.1 against IDK-tuned-Mistral-7B-v0.1.]

5.3 Boosting Uncertainty Capturing via Instruction-Tuning and [IDK]-Tuning

Instruction-Tuning. Figure 7 compares the performance of the base Llama models—Llama-3.2-1B, Llama-3.2-3B, and Llama-3.1-8B—with the instruction-tuned variant Llama-3.1-8B-Instruct, in terms of the correctness prediction accuracy derived from uncertainty vectors. The y-axis represents the average accuracy across all evaluated datasets. Notably, Llama-3.1-8B-Instruct consistently outperforms its base counterparts, indicating that instruction-tuning significantly enhances the model's ability to encode and leverage uncertainty signals. A similar pattern is observed in Figure 9, where the instruction-tuned Qwen3-14B-Instruct demonstrates improved performance over the base Qwen3-14B. Additionally, in both cases, the peak accuracy of the instruction-tuned models occurs several layers earlier than in their foundational equivalents, suggesting that instruction-tuning may facilitate earlier emergence of uncertainty-relevant representations within the model's architecture.

[IDK]-Tuning. Similar to instruction-tuning, [IDK]-tuning exerts a notable influence on the model's ability to capture uncertainty. This is reflected in the improved effectiveness of the resulting uncertainty vectors, which yield higher correctness prediction accuracy and reach peak performance in earlier layers of the model. These trends are illustrated in Figure 10. Additionally, the precision scores shown in Figure 11 reveal a substantial gap at the first model layer. Specifically, the correctness predictors derived from the initial layer of the untuned model exhibit poor precision, indicating a limited ability to detect generation errors and suggesting overconfidence at this early stage. [IDK]-tuning appears to mitigate this issue by aligning the initial layers more effectively with uncertainty signals.
An additional phenomenon we note is that both of these methods yield better cross-dataset results (that is, when a vector derived from dataset D is tested on the test split of a different dataset): each of the resulting vectors generalizes better. This is shown in Figure 8.
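A toy sketch of how such a cross-dataset grid (as in Figures 5 and 8) can be computed follows. The benchmark names are real, but the random features and splits are placeholders rather than the paper's data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
datasets = ["GSM8K", "ASDiv", "SVAMP"]
# For each dataset: (X_train, y_train, X_test, y_test), here random stand-ins.
splits = {d: (rng.normal(size=(400, 64)), rng.integers(0, 2, 400),
              rng.normal(size=(100, 64)), rng.integers(0, 2, 100))
          for d in datasets}

grid = {}
for src in datasets:
    X_tr, y_tr, _, _ = splits[src]
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    for tgt in datasets:
        _, _, X_te, y_te = splits[tgt]
        # Accuracy of the src-derived vector on the tgt test split.
        grid[(src, tgt)] = probe.score(X_te, y_te)

for (src, tgt), acc in grid.items():
    print(f"{src:>6} -> {tgt:<6} {acc:.2f}")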
6 Related Work

Model Calibration. Our analysis is closely related to the key challenge of model calibration [Guo et al., 2017]: providing, alongside the actual prediction, a measure of the probability that the prediction is incorrect. The problem of factual error detection can be viewed as a variation of calibration, where instead of a continuous probability, we provide a binary prediction of whether the model is correct or not. Common approaches to calibration apply various transformations to a model's output logits [Desai and Durrett, 2020, Jiang et al., 2021] or measure uncertainty [e.g., see Kuhn et al., 2023]. More recent works have studied the use of LMs for providing calibration by training them on statements known to be factually correct or incorrect. This "supervised" approach has been explored via fine-tuning [Kadavath et al., 2022, Lin et al., 2022], in-context learning [Cohen et al., 2023a, Alivanistos et al., 2022], zero-shot instruction-oriented methods [Cohen et al., 2023b], and consistency sampling [Yoran et al., 2023]. Further recent studies use the internal state of the model to classify whether it is certain or not [Azaria and Mitchell, 2023], use a new token for unanswerable inputs [Lu et al., 2022], or construct a specific dataset for effectively tuning the model to refuse to answer [Zhang et al., 2024]. Our work takes an analytic approach, aiming to better understand the dynamics of uncertainty encoding in pretrained models as well as in better-calibrated models.

Mechanistic Interpretability. Recent work has aimed to identify circuits and features within models that correspond to interpretable concepts such as factual recall, syntax, or positional reasoning [Olsson et al., 2022, Yu et al., 2023]. For instance, tools such as sparse autoencoders (SAEs) have been used to isolate human-interpretable features from residual stream activations [Meng et al., 2022]. Other studies explore how knowledge is stored and manipulated across layers, such as tracing factual associations or memorized content to specific directions in the latent space [Geva et al., 2021b, Gurnee et al., 2023, Geva et al., 2023, Yu et al., 2024]. Despite promising progress, full mechanistic understanding remains an open challenge due to the scale and complexity of modern models.

7 Conclusion

In this work, we present a framework for probing uncertainty representations within LLMs by identifying linear vectors in their latent space that predict generation correctness. Our findings establish that LLMs internalize uncertainty as a learnable and linearly accessible concept, one that can be extracted without fine-tuning the model weights. Moreover, we demonstrate that rather than encoding a singular notion of uncertainty, these models store multiple distinct uncertainty representations, each sensitive to the type of data and task. This multiplicity, often manifesting in nearly orthogonal vectors, suggests an underlying explanation for some of the inconsistencies and hallucinations commonly observed in LLM outputs.

Beyond the foundational discovery of uncertainty encoding, our analysis sheds light on the architectural and training factors that influence this phenomenon. We show that intermediate layers, regardless of model size, are the most predictive regions for uncertainty, and that larger models do not necessarily perform better at capturing it. More importantly, we find that instruction-tuning and [IDK]-tuning significantly enhance the model's uncertainty awareness, both in accuracy and in early-layer alignment, pointing to training strategy, rather than scale, as the more critical lever for improving reliability. Our results offer actionable insights for both understanding and mitigating LLM hallucinations, and open up new directions for principled model design and interpretability.
References

Dimitrios Alivanistos, Selene Báez Santamaría, Michael Cochez, Jan-Christoph Kalo, Emile van Krieken, and Thiviyan Thanapalasingam. Prompting as probing: Using language models for knowledge base construction. arXiv preprint arXiv:2208.11057, 2022.

Samuel Amouyal, Tomer Wolfson, Ohad Rubin, Ori Yoran, Jonathan Herzig, and Jonathan Berant. QAMPARI: A benchmark for open-domain questions with many answers. In Sebastian Gehrmann, Alex Wang, João Sedoc, Elizabeth Clark, Kaustubh Dhole, Khyathi Raghavi Chandu, Enrico Santus, and Hooman Sedghamiz, editors, Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 97–110, Singapore, December 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.gem-1.9/.

Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

Amos Azaria and Tom Mitchell. The internal state of an LLM knows when it's lying. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 967–976, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.68. URL https://aclanthology.org/2023.findings-emnlp.68.

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenhang Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, K. Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xing Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report. ArXiv, abs/2309.16609, 2023. URL https://api.semanticscholar.org/CorpusID:263134555.

Bernd Bohnet, Vinh Q Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, Kai Hui, et al. Attributed question answering: Evaluation and modeling for attributed large language models. arXiv preprint arXiv:2212.08037, 2022.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Roi Cohen, Mor
Geva, Jonathan Berant, and Amir Globerson. Crawling the internal knowledge-base of language models. In Andreas Vlachos and Isabelle Augenstein, editors, Findings of the Association for Computational Linguistics: EACL 2023, pages 1856–1869, Dubrovnik, Croatia, May 2023a. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-eacl.139. URL https://aclanthology.org/2023.findings-eacl.139.

Roi Cohen, May Hamri, Mor Geva, and Amir Globerson. LM vs LM: Detecting factual errors via cross examination. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12621–12640, Singapore, December 2023b. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.778. URL https://aclanthology.org/2023.emnlp-main.778.

Roi Cohen, Eden Biran, Ori Yoran, Amir Globerson, and Mor Geva. Evaluating the ripple effects of knowledge editing in language models. Transactions of the Association for Computational Linguistics, 12:283–298, 2024a.

Roi Cohen, Konstantin Dobler, Eden Biran, and Gerard de Melo. I Don't Know: Explicit modeling of uncertainty with an [IDK] token. Advances in Neural Information Processing Systems, 37:10935–10958, 2024b.

Shrey Desai and Greg Durrett. Calibration of pre-trained transformers. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 295–302, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.21. URL https://aclanthology.org/2020.emnlp-main.21.

Ashwin Devaraj, William Sheffield, Byron Wallace, and Junyi Jessy Li. Evaluating factuality in text simplification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7331–7345, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.506. URL https://aclanthology.org/2022.acl-long.506.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. Measuring and improving consistency in pretrained language models. Transactions of the Association for Computational Linguistics, 9:1012–1031, 2021. doi: 10.1162/tacl_a_00410. URL https://aclanthology.org/2021.tacl-1.60.

Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346–361, 2021a. doi: 10.1162/tacl_a_00370. URL https://aclanthology.org/2021.tacl-1.21.

Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5484–5495, Online and Punta Cana, Dominican Republic, November 2021b. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.446. URL https://aclanthology.org/2021.emnlp-main.446.

Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson. Dissecting recall of factual associations in auto-regressive language models.
In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12216–12235, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.751. URL https://aclanthology.org/2023.emnlp-main.751/.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume
70 of Proceedings of Machine Learning Research, pages 1321–1330. PMLR, 06–11 Aug 2017. URL https://proceedings.mlr.press/v70/guo17a.html.

Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, and Dimitris Bertsimas. Finding neurons in a haystack: Case studies with sparse probing. arXiv preprint arXiv:2305.01610, 2023.

Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Comput. Surv., 55(12), March 2023. ISSN 0360-0300. doi: 10.1145/3571730. URL https://doi.org/10.1145/3571730.

Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023a.

Albert Qiaochu Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b. ArXiv, abs/2310.06825, 2023b. URL https://api.semanticscholar.org/CorpusID:263830494.

Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham Neubig. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962–977, 2021. doi: 10.1162/tacl_a_00407. URL https://aclanthology.org/2021.tacl-1.57.

Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.

Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zachary Dodds, Nova Dassarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, John Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom B. Brown, Jack Clark, Nicholas Joseph, Benjamin Mann, Sam McCandlish, Christopher Olah, and Jared Kaplan. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022.

Sanyam Kapoor, Nate Gruver, Manley Roberts, Katherine Collins, Arka Pal, Umang Bhatt, Adrian Weller, Samuel Dooley, Micah Goldblum, and Andrew Gordon Wilson. Large language models must be taught to know what they don't know. arXiv preprint arXiv:2406.08391, 2024.

Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. arXiv preprint arXiv:2302.09664, 2023.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466, 2019. doi: 10.1162/tacl_a_00276. URL https://aclanthology.org/Q19-1026.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilić, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. 2023.

Stephanie Lin, Jacob Hilton, and Owain Evans. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334, 2022.

Stephanie C. Lin, Jacob Hilton,
and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. In Annual Meeting of the Association for Computational Linguistics, 2021. URL https://api.semanticscholar.org/CorpusID:237532606.

Hongyuan Lu, Wai Lam, Hong Cheng, and Helen Meng. On controlling fallback responses for grounded dialogue generation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2591–2601, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.findings-acl.204. URL https://aclanthology.org/2022.findings-acl.204.

Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511, 2022.

Laura Manduchi, Kushagra Pandey, Robert Bamler, Ryan Cotterell, Sina Däubener, Sophie Fellenz, Asja Fischer, Thomas Gärtner, Matthias Kirchler, Marius Kloft, Yingzhen Li, Christoph Lippert, Gerard de Melo, Eric Nalisnick, Björn Ommer, Rajesh Ranganath, Maja Rudolph, Karen Ullrich, Guy Van den Broeck, Julia E Vogt, Yixin Wang, Florian Wenzel, Frank Wood, Stephan Mandt, and Vincent Fortuin. On the challenges and opportunities in Generative AI. arXiv preprint arXiv:2403.00025, 2024. URL https://arxiv.org/abs/2403.00025.

Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.173. URL https://aclanthology.org/2020.acl-main.173.

Kevin Meng, David Bau, Alex J Andonian, and Yonatan Belinkov. Locating and editing factual associations in GPT. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=-h6WAS6eE4.

Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing English math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 975–984, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.92. URL https://aclanthology.org/2020.acl-main.92.

Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.

Niels Mündler, Jingxuan He, Slobodan Jenko, and Martin Vechev. Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation. arXiv preprint arXiv:2305.15852, 2023.

Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. In-context learning and induction heads. arXiv preprint arXiv:2209.11895, 2022.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

Jeff Z.
Pan, Simon Razniewski, Jan-Christoph Kalo, Sneha Singhania, Jiaoyan Chen, Stefan Dietze, Hajira Jabeen, Janna Omeliyanenko, Wen Zhang, Matteo Lissandrini, Russa Biswas, Gerard de Melo, Angela Bonifati, Edlira Vakaj, Mauro Dragoni, and Damien Graux. Large Language Models and Knowledge Graphs: Opportunities and Challenges. Transactions on Graph Data and Knowledge, 1(1):2:1–2:38, 2023. doi: 10.4230/TGDK.1.1.2. URL https://drops.dagstuhl.de/entities/document/10.4230/TGDK.1.1.2.

Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple math
word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.168. URL https://aclanthology.org/2021.naacl-main.168.

Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. Language models as knowledge bases? arXiv preprint arXiv:1909.01066, 2019.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.

Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter. Measuring attribution in natural language generation models. Computational Linguistics, 49(4):777–840, 2023.

Adam Roberts, Colin Raffel, and Noam Shazeer. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910, 2020.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421.

Derek Tam, Anisha Mascarenhas, Shiyue Zhang, Sarah Kwan, Mohit Bansal, and Colin Raffel. Evaluating the factual consistency of large language models through news summarization. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 5220–5255, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.322. URL https://aclanthology.org/2023.findings-acl.322.

Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. Transformer memory as a differentiable search index. Advances in Neural Information Processing Systems, 35:21831–21843, 2022.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. Do large language models know what they don't know? arXiv preprint arXiv:2305.18153, 2023.

Gal Yona, Roee Aharoni, and Mor Geva. Narrowing the knowledge evaluation gap: Open-domain question answering with multi-granularity answers. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6737–6751, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.365.
URL https://aclanthology.org/2024.acl-long.365/.

Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel Deutch, and Jonathan Berant. Answering questions by meta-reasoning over multiple chains of thought. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on
Empirical Methods in Natural Language Processing, pages 5942–5966, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.364. URL https://aclanthology.org/2023.emnlp-main.364.

Lei Yu, Meng Cao, Jackie Chi Kit Cheung, and Yue Dong. Mechanistic understanding and mitigation of language model non-factual hallucinations. arXiv preprint arXiv:2403.18167, 2024.

Qinan Yu, Jack Merullo, and Ellie Pavlick. Characterizing mechanisms for factual recall in language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9924–9959, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.615. URL https://aclanthology.org/2023.emnlp-main.615/.

Xiang Yue, Boshi Wang, Ziru Chen, Kai Zhang, Yu Su, and Huan Sun. Automatic evaluation of attribution by large language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4615–4635, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.307. URL https://aclanthology.org/2023.findings-emnlp.307.

Hanning Zhang, Shizhe Diao, Yong Lin, Yi Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, and Tong Zhang. R-tuning: Instructing large language models to say 'I don't know'. In Kevin Duh, Helena Gomez, and Steven Bethard, editors, Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 7113–7139, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.naacl-long.394. URL https://aclanthology.org/2024.naacl-long.394/.

Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, et al. Instruction tuning for large language models: A survey. arXiv preprint arXiv:2308.10792, 2023.

Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi Wang, Yang Li, et al. Codegeex: A pre-trained model for code generation with multilingual benchmarking on humaneval-x. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 5673–5684, 2023.

Victor Zhong, Weijia Shi, Wen-tau Yih, and Luke Zettlemoyer. RoMQA: A benchmark for robust, multi-evidence, multi-answer question answering. In Conference on Empirical Methods in Natural Language Processing, 2022. URL https://api.semanticscholar.org/CorpusID:253116788.

A Limitations

While our analysis provides compelling evidence for the existence of linearly accessible uncertainty representations in LLMs, it is limited to linear probes and does not explore more complex, nonlinear structures that may further explain model behavior. Our evaluation focuses on a fixed set of models and datasets, which, although diverse, may not capture the full variability seen in real-world applications or domain-specific tasks. Additionally, correctness is treated as a proxy for uncertainty, which may not fully align with how uncertainty manifests in open-ended or ambiguous generation scenarios. Finally, the performance of our classifiers may also be influenced by dataset-specific biases, potentially limiting generalizability.

B Computer Resources

In our experiments we use one NVIDIA A100 80G GPU.
NeurIPS Paper Checklist

The checklist is designed to encourage best practices for responsible machine learning research, addressing issues of reproducibility, transparency, research ethics, and societal impact. For each question, the answer is [Yes], [No], or [NA], followed by a short justification.
1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
Answer: [Yes]
Justification: We run extensive experiments to support our claims.

2. Limitations
Question: Does the paper discuss the limitations of the work performed by the authors?
Answer: [Yes]
Justification: See Appendix A.
3. Theory assumptions and proofs
Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
Answer: [NA]
Justification: The paper does not include theoretical results.

4. Experimental result reproducibility
Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
Answer: [Yes]
Justification: See sections 2 and 3.
5. Open access to data and code
Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
Answer: [No]
Justification: Code and data will be released with the camera-ready version.
6. Experimental setting/details
Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
Answer: [Yes]
Justification: See section 3.

7. Experiment statistical significance
Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
Answer: [No]
Justification: Our experiments were conducted at large scale, covering 9 different models and 16 different datasets, so statistical errors are very likely negligible in this case.

8. Experiments compute resources
Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
Answer: [Yes]
Justification: See Appendix B.
9. Code of ethics
Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics (https://neurips.cc/public/EthicsGuidelines)?
Answer: [Yes]
Justification: We reviewed the guidelines and are in conformance.

10. Broader impacts
Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
Answer: [Yes]
Justification: See Appendix A.

11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [No]
Justification: We do not include safeguards.
12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [Yes]
Justification: We cite all used models, datasets, and methods.

13. New assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [Yes]
Justification: Full details of all assets will be made available.

14. Crowdsourcing and research with human subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [NA]
Justification: N/A
15. Institutional review board (IRB) approvals or equivalent for research with human subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [NA]
Justification: N/A

16. Declaration of LLM usage
Question: Does the paper describe the usage of LLMs if it is an important, original, or non-standard component of the core methods in this research?
Answer: [Yes]
Justification: See section 3.
Addressing Data Quality Decompensation in Federated Learning via Dynamic Client Selection

Qinjun Fei (a), Nuria Rodríguez-Barroso (a,*), María Victoria Luzón (b), Zhongliang Zhang (c,d), Francisco Herrera (a)

(a) Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Spain
(b) Department of Software Engineering, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Spain
(c) School of Management, Hangzhou Dianzi University, China
(d) Shaanxi Key Laboratory of Information Communication Network and Security, Xi'an University of Posts & Telecommunications, China

Abstract

In cross-silo Federated Learning (FL), client selection is critical to ensure high model performance, yet it remains challenging due to data quality decompensation, budget constraints, and incentive compatibility. As training progresses, these factors exacerbate client heterogeneity and degrade global performance. Most existing approaches treat these challenges in isolation, making it difficult to jointly optimize multiple factors. To address this, we propose Shapley-Bid Reputation Optimized Federated Learning (SBRO-FL), a unified framework integrating dynamic bidding, reputation modeling, and cost-aware selection. Clients submit bids based on their perceived data quality, and their contributions are evaluated using Shapley values to quantify their marginal impact on the global model. A reputation system, inspired by prospect theory, captures historical performance while penalizing inconsistency. The client selection problem is formulated as a 0-1 integer program that maximizes reputation-weighted utility under budget constraints. Experiments on the FashionMNIST, EMNIST, CIFAR-10, and SVHN datasets show that SBRO-FL improves accuracy, convergence speed, and robustness, even in adversarial and low-bid interference scenarios. Our results highlight the importance of balancing data reliability, incentive compatibility, and cost efficiency to enable scalable and trustworthy FL deployments.

Keywords: Federated learning, Reputation-based client selection, Data quality decompensation, Incentive mechanism, Client selection, Shapley value

*Corresponding Author: Nuria Rodríguez-Barroso. Email: qinjun@correo.ugr.es (Qinjun Fei), rbnuria@ugr.es (Nuria Rodríguez-Barroso), luzon@ugr.es (María Victoria Luzón), zzl19860210@126.com (Zhongliang Zhang), herrera@decsai.ugr.es (Francisco Herrera)

Preprint.

1 Introduction

Federated Learning (FL) has emerged as a distributed machine learning paradigm that enables collaborative model training without centralizing data, effectively mitigating privacy concerns and reducing communication overhead [18, 25]. FL initially gained attention in large-scale cross-device settings, such as mobile networks and edge computing, and has since been widely adopted in decentralized AI applications [1, 3]. Meanwhile, cross-silo FL collaboration continues to evolve, introducing additional complexities beyond those encountered in cross-device settings. These challenges, including data quality decompensation, budget constraints, and incentive mechanisms, create a dynamic landscape of interacting practical factors, making cross-silo scenarios particularly demanding [11, 42].
Unlike cross-device FL, which involves a large number of resource-constrained devices, cross-silo FL typically involves a limited number of organizational clients, such as hospitals, financial institutions, and industrial enterprises, that have abundant data and computational resources [31]. However, the quality of data from these sources can vary widely. For instance, in healthcare-oriented FL, a hospital may inadvertently provide mislabeled patient records due to inconsistent annotation practices or human error, while in industrial IoT applications, sensor calibration mismatches across factories can inject systematic biases into the global model. Moreover, adversarial behaviors like label flipping or
model poisoning further distort the training process, increasing the risk of unreliable global updates [21, 32, 36]. This variability in data quality introduces a persistent issue referred to as data quality decompensation, where the accumulation of unreliable client updates over multiple rounds leads to global model instability and performance degradation, despite individual datasets remaining unchanged. Consequently, robust mechanisms are required to assess and selectively filter clients, ensuring that only high-quality contributions drive model optimization.

Beyond client selection, another key challenge is incentivizing high-quality clients to participate consistently [30]. The value of data is highly task-dependent, and as data become increasingly commoditized, clients expect fair compensation [41]. Furthermore, local training incurs resource costs, including computation, storage, and communication, which require financial incentives to sustain engagement [38]. However, in real-world FL systems with financial incentives, the FL initiator faces budget constraints, limiting client compensation. This requires careful budget allocation across data valuation, incentive mechanisms, and cost-effective client selection.

Most client selection strategies in FL primarily focus on optimizing a single aspect, such as maximizing convergence speed [20], designing incentive mechanisms [37], or improving communication efficiency [29]. Although effective in their respective domains, these approaches often overlook the interdependencies between data quality decompensation, incentive compatibility, and budget constraints. In practical cross-silo scenarios, a unified framework that coherently integrates these dimensions is essential for jointly optimizing data reliability, cost-effectiveness, and equitable participation incentives.

To address these challenges, we hypothesize that integrating a bidding mechanism with a dynamic reputation system can mitigate data quality decompensation while ensuring budget efficiency. However, historical reputation alone may not fully capture the reliability of clients, as clients with similar scores can exhibit varying stability. To account for this, selection should incorporate risk sensitivity, prioritizing clients with consistent contributions and cost-effectiveness. This approach is expected to improve the stability of the model and optimize the participation incentives in cross-silo FL.

This paper proposes Shapley-Bid Reputation Optimized Client Selection for Federated Learning (SBRO-FL), a method that integrates reputation-driven bidding with cost-aware optimization. Before training, clients submit bid prices based on perceived data quality, aligned with budget constraints set by the FL initiator. SBRO-FL maintains a reputation value to track historical contributions, while reputation scores, adjusted through prospect theory, guide selection by accounting for risk sensitivity. The selection process is formulated as a 0-1 integer programming problem, optimizing reputation-weighted utility under budget constraints. At the end of each round, the Shapley value quantifies marginal contributions, shaping a cost-effectiveness metric that updates reputation values and informs future selection. A dynamic penalty mechanism discourages unreliable participation, reinforcing system stability. By incorporating bidding incentives and dynamic reputation tracking, the proposed method increases FL efficiency and encourages stable client participation.
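To illustrate the selection step only, here is a hedged brute-force sketch of the budget-constrained 0-1 program. The bid and utility numbers are invented, and the prospect-theory reputation adjustment that SBRO-FL actually applies is abstracted into the utilities input. With the small client pools typical of cross-silo FL, exhaustive search is feasible; larger pools would call for an ILP solver.

from itertools import combinations

def select_clients(bids, utilities, budget):
    # Maximize sum of reputation-weighted utilities subject to sum of bids <= budget,
    # with x_i in {0, 1} decided by enumerating all subsets (brute force).
    n = len(bids)
    best_value, best_set = 0.0, ()
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            cost = sum(bids[i] for i in subset)
            value = sum(utilities[i] for i in subset)
            if cost <= budget and value > best_value:
                best_value, best_set = value, subset
    return best_set, best_value

# Illustrative numbers only.
bids = [3.0, 2.0, 4.0, 1.5]
utilities = [0.9, 0.6, 0.8, 0.3]
print(select_clients(bids, utilities, budget=6.0))  # -> ((0, 1), 1.5)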
Specifically, we make the following contributions:

(1) Unified Client Selection Framework: Unlike existing approaches that optimize a single aspect, SBRO-FL jointly optimizes data quality decompensation, incentive compatibility, and budget constraints in a unified selection framework, enhancing practical applicability in cross-silo FL.

(2) Reputation-Driven Bidding: A novel bidding mechanism is introduced, where clients propose bid prices based on perceived data quality.
This approach provides a practical incentive alignment strategy under budget constraints, fostering stable and cost-efficient client engagement.

(3) Shapley Value-Based Contribution Assessment: SBRO-FL leverages the Shapley value to quantify the marginal impact of each client on the global model. This contribution metric is incorporated into a cost-effectiveness measure, which influences both reputation updates and future selection probabilities.

(4) Budget-Constrained Optimization for FL: The client selection problem is formulated as a 0-1 integer programming model, optimizing reputation-weighted utility while adhering to strict budget constraints. This ensures that selected clients provide an optimal balance of data reliability and participation cost.

The experiments compared SBRO-FL with three baselines: (1) random client selection, (2) selection of all clients, and (3) random selection from a high-quality subset. These baselines represent varying selection strategies, from uncontrolled participation to partial quality-aware selection. SBRO-FL consistently outperformed these baselines on four datasets, improving the accuracy of the global model by an average of 7.14% over random selection after a fixed number of rounds. Its performance closely approached that of an idealized selection strategy in which only high-quality clients, free of label noise, were chosen. This highlights SBRO-FL's ability to infer client reliability dynamically, effectively balancing data quality, incentives, and budget constraints.

2 Background

In this section, we provide an overview of key concepts and related work. Section 2.1 covers the fundamentals of FL, Section 2.2 reviews client selection strategies with a focus on resource-oriented and performance-oriented approaches, and Section 2.3 explores incentive mechanisms aimed at maintaining client participation in FL.

2.1 Federated Learning

FL is a distributed machine learning paradigm designed to protect data privacy and has drawn considerable attention [25]. In FL, each client receives the global model at the start of each round, trains it locally on private data, and then sends the updated parameters for aggregation into a new global model, as illustrated in Fig. 1. This process repeats over multiple rounds until a stopping condition is met. FL enables collaborative model training across decentralized data sources without requiring data sharing, making it ideal for privacy-sensitive applications.

Fig. 1. Federated learning example: client-server architecture.

A typical FL architecture consists of a central server and multiple clients, denoted as $C = \{C_1, \ldots, C_n\}$. The goal of FL is to collaboratively train a global model $w$ by leveraging decentralized data from multiple clients. This is achieved by iteratively solving the following optimization problem:

$$\min_{w} \sum_{i=1}^{n} \frac{|D_i|}{\sum_{j=1}^{n} |D_j|} \, \ell(w; D_i) \quad (1)$$

where $w$ denotes the global model parameters and $D_i$ represents the local dataset of client $C_i$, with $|D_i|$ denoting its number of data samples and serving as the weighting factor in the aggregation process. The term $\ell(w; D_i)$ represents the local loss function evaluated on $D_i$.

One of the most widely used algorithms in FL is FedAvg, which iteratively solves this optimization problem. At the beginning of each round $t$, the global model parameters $w^t$ are sent to all participating clients. Each client independently updates the global model based on its local data $D_i$ by performing several steps of a local optimization algorithm on the corresponding loss function:

$$w_i^t = w^t - \eta g_i^t \quad (2)$$
where $\eta$ is the learning rate and $g_i^t$ represents the gradient or update direction computed on the local data. After local training, the clients transmit their updated parameters $w_i^t$ back to the server. The server then aggregates these updates by computing a weighted average, based on the size of each client's dataset, to yield the new global model parameters for the next round:

$$w^{t+1} = \sum_{i=1}^{n} \frac{|D_i|}{\sum_{j=1}^{n} |D_j|} \, w_i^t \quad (3)$$

This process continues iteratively until a convergence criterion is satisfied, such as reaching a specified number of rounds or a target accuracy.
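To make the FedAvg loop of Eqs. (1)-(3) concrete, the following minimal Python sketch simulates data-size-weighted aggregation on synthetic linear-regression clients. It is an illustration only: the model, data, and single gradient step per round are stand-ins, not the implementation evaluated later in this paper.

```python
import numpy as np

def local_update(w, X, y, lr=0.01):
    # One local gradient step on squared loss: w_i^t = w^t - eta * g_i^t (Eq. 2).
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w, client_data, lr=0.01):
    # Weighted average of local updates, p_i = |D_i| / sum_j |D_j| (Eq. 3).
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    weights = sizes / sizes.sum()
    updates = [local_update(w, X, y, lr) for X, y in client_data]
    return sum(p * w_i for p, w_i in zip(weights, updates))

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(n, 5)), rng.normal(size=n)) for n in (20, 50, 30)]
w = np.zeros(5)
for t in range(10):  # communication rounds
    w = fedavg_round(w, clients)
```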
2.2 Client Selection

In the context of FL, effective client selection is essential to improve model performance and optimize resource utilization. The distinction between resource-oriented and performance-oriented selection strategies clarifies their respective impacts on training efficiency and model reliability.

Resource-oriented criteria emphasize clients' computational capabilities and communication efficiency, enabling faster, more reliable, and resource-efficient training. The FedCS framework proposed by Nishio and Yonetani [27] prioritizes clients capable of completing a training round within a predefined deadline. The selection process analyzes the update efficiency of each client, considering their computational capabilities and wireless channel conditions. From another perspective, this approach allows FL to involve more clients in training in a shorter time, thereby accelerating the overall FL process. Yu et al. [40] designed the energy- and latency-aware resource management and client selection algorithm (ELASTIC), which not only aims to maximize the number of selected clients but also minimizes total energy consumption based on the clients' CPU frequencies and transmission power. These approaches are particularly effective in cross-device FL, where resource limitations are more pronounced [24].

Performance-oriented criteria focus primarily on data quality and model attributes, in order to select clients that significantly improve the effectiveness of the global model. Given the substantial influence of training data volume on model performance, some approaches prioritize clients based on the quantity of their data. For example, Jeon et al. [13] proposed a selection strategy that considers the quantity of client data, the quality of communication, and the residual energy. Similarly, Li et al. [22] introduced an unbiased sampling scheme based on a multinomial distribution, where a client's selection probability is proportional to its quantity of data. However, the relationship between data quantity and global model quality is often non-linear and complex, particularly in scenarios with inconsistent data quality and non-IID (not independent and identically distributed) data distributions. To address these complexities, Fraboni et al. [8] compared two clustering-based sampling methods: one based on sample size and the other on model similarity. Their findings demonstrated that the model similarity-based method improved convergence speed and stability, especially as data heterogeneity increased. Beyond data quantity, other important indicators of client data quality include data distribution [20], local model parameters [2, 43], and local model loss [12]. These approaches are more prevalent in cross-silo FL, where the focus is on optimizing global model performance.
While existing methods optimize either computational efficiency or model performance, they typically overlook
economic constraints. In cross-silo FL, client selection must also account for budget limitations: high-quality data alone does not guarantee selection if participation costs are excessive. This necessitates a balance between data reliability and financial feasibility. Moreover, since incentives directly impact client engagement, selection strategies must align compensation with contribution quality.

2.3 Incentive Mechanisms

In FL, the selection of clients presupposes the availability of a pool of candidates willing to contribute. Clients invest significant resources, including data and computational power, but without fair incentives, many may opt out [17]. This highlights the need for robust incentive mechanisms to maintain engagement, especially when limited budgets preclude selecting all clients.

Reputation has emerged as a key incentive mechanism in client selection, directly related to both data quality and client reliability [5, 24]. Several works have proposed using reputation as a benchmark for assessing client trustworthiness and performance. For example, Kang et al. [16] designed a reputation-based client selection framework where clients' reliability is measured using subjective logic models. Similarly, Song et al. [35] adopted a beta distribution-based reputation model to systematically assess client trustworthiness. While these frameworks enhance selection reliability, they often overlook how reputation interacts with budget constraints to shape sustainable incentive strategies. In practice, aligning reputation-based selection with economic feasibility is essential for maintaining long-term engagement and cost-effective participation.

Moreover, auction-based mechanisms have gained prominence in FL to align client incentives with the server's objectives. Pang et al. [28] designed an incentive mechanism using a procurement auction model and a greedy algorithm to optimize client selection and scheduling, aiming to minimize social cost by efficiently distributing training participation across iterations. Jiao et al. [14] proposed a budget optimization framework for multi-session FL auctions, leveraging hierarchical reinforcement learning to refine strategic pricing adjustments by data consumers. In parallel, game-theoretic models, particularly Stackelberg games, have also been employed to balance incentives and resource allocation. Sarikaya and Ercetin [33] and Khan et al. [17] formulated Stackelberg-based strategies where the FL server acts as a leader setting rewards, while clients respond by adjusting their participation efforts. However, many of these methods assume that data quality is relatively uniform, or fail to explicitly address the heterogeneous nature of cross-silo data sources, which may also be prone to adversarial behavior.

While existing methods have separately addressed data-driven selection, incentive mechanisms, and budget-aware strategies, their independent treatment often overlooks the interdependencies among these factors. To address these interrelated challenges, our approach jointly considers data quality decompensation, incentive compatibility, and budget constraints, thereby aligning client participation with both model reliability and economic feasibility. The following sections detail the methodology.

3 Shapley-Bid Reputation Optimized Client Selection

SBRO-FL integrates reputation-driven bidding with cost-aware optimization to improve FL efficiency under budget constraints.
The framework dynamically evaluates client reliability and cost-effectiveness, adapting selection strategies to evolving training conditions. For clarity, Section 3.1 illustrates its workflow within a traditional FL setup.

3.1 Workflow of SBRO-FL

Fig. 2 illustrates the workflow of this method. The steps highlighted in yellow indicate the key differences from standard FL procedures.
The communication flow for each round is as follows:

Fig. 2. Workflow of SBRO-FL within the traditional FL framework.

1. Task Publication: The central server publishes the task requirements, including the data type, model size, and training methodology, to all potential clients.
2. Bid Collection: Clients submit bid prices based on their estimated capacity to meet the task requirements.
3. Client Selection: The server calculates a reputation score for each client based on historical performance and selects clients through an optimization model formulated as a 0-1 integer programming problem.
4. Model Distribution: The central server distributes the specification of the FL task, including the type and parameters of the global model.
5. Local Training: The selected clients train the model locally using their private datasets.
6. Model Uploading: The selected clients upload the locally trained model parameters to the server.
7. Model Aggregation: The central server aggregates all the local model parameters to update the global model for the current communication round.
8. Client Assessment: After allocating rewards based on their bid prices, the central server calculates the selected clients' contributions to the global model using the Shapley value and updates their reputations by considering both their contributions and submitted bids.

The following subsections follow the natural execution order of the client selection process in FL. Since the selection model aims to maximize reputation-based utility, Section 3.2 first introduces the reputation score, which serves as the foundation for the client selection model in Section 3.3. To reflect the dynamic nature of the overall system, the client assessment phase at the end of each training round employs a Shapley value-based evaluation method (Section 3.4) to quantify client contributions, which also serves as an indicator of data quality. This assessment then forms the basis for reputation updates in Section 3.5, ensuring an adaptive selection process over time.

3.2 Reputation Values and Scores

In FL, selecting reliable clients over multiple training rounds is crucial for mitigating the risks posed by poor-quality data. To achieve this, we introduce a reputation system that dynamically evaluates and tracks client behavior across rounds, ensuring that clients with a consistent history of meaningful contributions are prioritized in future selections.

Each client is assigned a reputation value, denoted as $R = \{R_1, \ldots, R_n\}$ for clients $C = \{C_1, \ldots, C_n\}$. This reputation value serves as a measure of long-term reliability and is continuously updated after each training round. Initially, each client's reputation is set to zero and evolves based on their observed contributions. The details of the update procedure, including its influencing factors and process, are provided in Section 3.4 and Section 3.5.

Fig. 3. Prospect theory value function. The X-axis represents gains and losses relative to a reference point, while the Y-axis represents perceived value. The function is asymmetric: losses have a steeper curve than gains, reflecting loss aversion, meaning individuals perceive losses more strongly than equivalent gains. Conversely, gains exhibit diminishing sensitivity, meaning the perceived impact of additional gains decreases as they accumulate.

Although $R_i$ provides a long-term measure of client reliability, directly
using it for selection may overlook the disproportionate impact of data quality decompensation. Specifically, poor-quality contributions tend to degrade the global model more significantly than high-quality updates improve it [7].

To address this, we apply a transformation process inspired by prospect theory [15]. As illustrated in Fig. 3, the prospect theory value function emphasizes loss aversion, meaning that reductions in reputation (i.e., poor contributions) have a greater impact than equivalent increases. This transformation converts reputation values into reputation scores, denoted as $Z(R) = \{z(R_1), \ldots, z(R_n)\}$ in Definition 1, which are subsequently used in the client selection process.

Definition 1. Reputation Score Function $z(R_i)$. It computes the reputation score of client $C_i$ by applying the prospect theory value function, which accounts for loss aversion and the asymmetric treatment of low- and high-quality clients:

$$z(R_i) = \begin{cases} -\gamma (R_{th} - R_i)^{\beta} & \text{if } R_i \le R_{th}, \\ (R_i - R_{th})^{\alpha} & \text{if } R_i > R_{th}. \end{cases} \quad (4)$$

In practice, after each round, we compute an average reputation $R_{th} = \frac{1}{n} \sum_{i=1}^{n} R_i$ across all clients, which serves as the reference point in the prospect theory function. If a client's reputation $R_i \le R_{th}$, the function treats this as a "loss" region with heightened negativity, reflecting a higher perceived risk or potentially low-quality updates. Conversely, when $R_i > R_{th}$, the client is considered more reliable, but the gain region of the prospect transformation grows gradually rather than steeply. This prevents excessive reliance on a few high-reputation clients, ensuring that model updates leverage diverse data sources for better generalization.

The parameters $\alpha$ and $\beta$ determine the degree of asymmetry. A higher $\beta$ leads to stronger penalties for underperforming clients, whereas $\alpha$ controls the diminishing impact of higher reputations. The specific values of $\alpha$, $\beta$, and $\gamma$ were determined through preliminary tuning and guided by common parameter choices in the prospect theory literature. For instance, $\beta$ is typically set between 0.2 and 0.4 to reflect strong loss aversion, while a smaller $\alpha$ indicates faster saturation of perceived gains. Based on these considerations, we set $\alpha = 0.15$, $\beta = 0.3$, and $\gamma = 1$, as this configuration provided stable performance across multiple datasets.

The adjusted reputation score $z(R_i)$ serves as the foundation for client selection in each round, guiding the optimization process described in Section 3.3. By incorporating this risk-aware adjustment, our framework enhances the selection of high-quality contributors while reducing the impact of unreliable clients, ensuring a more stable and effective training process.
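As a sketch, Eq. (4) can be implemented directly. One caveat: in the extracted formula the sign of the loss branch is ambiguous, so the snippet below assumes the standard prospect-theory convention in which scores below the reference point are negative, which is consistent with the $z_{\min}$ shift used in Section 3.3.

```python
import numpy as np

def reputation_score(R, R_th, alpha=0.15, beta=0.3, gamma=1.0):
    """Prospect theory value function of Eq. (4).

    Assumes the loss branch is negative (standard prospect-theory form),
    so below-average reputations are penalized steeply (loss aversion)
    while above-average reputations grow with diminishing sensitivity.
    """
    if R <= R_th:
        return -gamma * (R_th - R) ** beta
    return (R - R_th) ** alpha

reputations = np.array([0.1, 0.4, 0.9, 1.5])
R_th = reputations.mean()  # reference point: average reputation
scores = np.array([reputation_score(R, R_th) for R in reputations])
```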
3.3 Selection Model

Within our reputation system, client selection must balance reliability and budget constraints to ensure effective participation. The objective is to optimize client selection by prioritizing high-reputation clients, with prospect theory-adjusted scores ensuring a risk-aware balance between reliability and budget feasibility. The client selection model is defined as follows:

Definition 2. Client Selection Model: $S_t = \mathrm{Select}(Z, B, B_{budget}, H_t)$. Given a set of reputation scores $Z = \{z(R_1), \ldots, z(R_n)\}$, bid prices $B = \{B_1, \ldots, B_n\}$, and a total budget $B_{budget}$, the goal is to select an optimal subset of clients while considering the past selection history $H_t$. The selection problem is formulated as the following integer programming model:

$$\max \sum_{i=1}^{n} (z(R_i) - z_{\min}) \cdot \delta^{count_i} \cdot x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} B_i x_i \le B_{budget}, \quad B_i \ge 0, \quad x_i \in \{0, 1\} \quad (5)$$

where:

- The term $z_{\min}$ ensures that all reputation scores are non-negative by shifting the minimum reputation score to zero. The decision variable $x_i \in \{0, 1\}$ determines whether client $C_i$ is selected in round $t$. The selection history $H_t = \{x_1^{t-1}, \ldots, x_n^{t-1}, \ldots, x_1^{t-5}, \ldots, x_n^{t-5}\}$ keeps track of each client's participation in the last five rounds. $S_t = \{C_i \mid x_i^t = 1\}$ denotes the set of clients selected to participate in round $t$.
- The adjustment factor $\delta^{count_i}$ introduces diversity into selection, where $\delta$ is a decay factor and $count_i$ is the number of times client $C_i$ has been selected in the previous five rounds:

$$count_i = \sum_{k=1}^{5} x_i^{t-k}. \quad (6)$$

The adjustment factor $\delta^{count_i}$ mitigates over-selection bias by discouraging repeated choices in early rounds while ensuring diverse participation over time. During the early rounds, it decreases the likelihood of repeatedly selecting clients that have already been chosen multiple times, ensuring a thorough review of all clients. This allows the system to evaluate a broader range of clients, accelerating the identification of high-quality clients without relying on a narrow subset. Consequently, the model enables faster filtering based on client contributions in the initial stages. In later rounds, the model increases the likelihood of selecting clients that have been chosen less frequently, promoting both diversity and fairness. This dynamic adjustment ensures that while high-performing clients are prioritized, valuable but underrepresented clients are not excluded for extended periods, maintaining a balanced and equitable selection process.

By integrating prospect theory-based reputation scores with a dynamic adjustment factor, this model optimizes client participation while ensuring a more adaptive and balanced selection process in FL. To implement this selection strategy, the formulated integer programming problem is solved using standard optimization solvers, ensuring practical feasibility; a minimal sketch follows below. The detailed implementation and solver configurations are discussed further in Section 4.3.
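Since the paper later reports solving this model with the PuLP solver (Section 4.3), Eqs. (5)-(6) could be set up along the following lines; the score, bid, and count values here are synthetic placeholders, not experimental data.

```python
import numpy as np
import pulp

def select_clients(z, bids, budget, counts, delta=0.5):
    # 0-1 integer program of Eq. (5): maximize shifted, diversity-adjusted
    # reputation scores subject to the total-bid budget constraint.
    n = len(z)
    util = (z - z.min()) * delta ** counts  # (z(R_i) - z_min) * delta^{count_i}
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    prob = pulp.LpProblem("client_selection", pulp.LpMaximize)
    prob += pulp.lpSum(float(util[i]) * x[i] for i in range(n))
    prob += pulp.lpSum(float(bids[i]) * x[i] for i in range(n)) <= budget
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in range(n) if x[i].value() == 1]

rng = np.random.default_rng(1)
z = rng.normal(size=40)                      # prospect-adjusted scores
bids = np.clip(rng.normal(10, 1, size=40), 0, None)
counts = rng.integers(0, 6, size=40)         # Eq. (6): selections in last 5 rounds
chosen = select_clients(z, bids, budget=45.0, counts=counts)
```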
3.4 Evaluation of Client Contributions

Although reputation values help solve the client selection model of Definition 2 for optimal decisions, client behavior can evolve over time due to various factors [4]. In real-world scenarios, clients may possess varying data quality and training capabilities, which directly impact their contributions to the global model. Moreover, clients may conceal their intentions and attempt to degrade the overall model quality in subsequent communication rounds through tactics such as Byzantine attacks. Therefore, an accurate and fair evaluation mechanism is essential to ensure that reputation values evolve based on client contributions to the global model, effectively guiding future client selection.

To achieve this, we adopt the Shapley value from cooperative game theory, which systematically quantifies each participant's contribution to a shared outcome [34]. Mathematically, the Shapley value of a client $C_i$ is defined as:

$$sv_i = \frac{1}{m} \sum_{\mathcal{T} \subseteq S \setminus \{C_i\}} \binom{m-1}{|\mathcal{T}|}^{-1} \left[ v(\mathcal{T} \cup \{C_i\}) - v(\mathcal{T}) \right], \quad (7)$$

where $m$ is the total number of clients in the selected group $S$, and $\mathcal{T}$ denotes any subset of participants excluding $C_i$. The function $v(\mathcal{T})$ represents the utility or outcome (e.g., profit, accuracy, or any relevant metric) achieved by the subset $\mathcal{T}$. This formula quantifies each participant's marginal contribution, ensuring fairness by considering all possible client combinations. The Shapley value has been widely applied across various domains, including data valuation [9], where it quantifies the significance of individual data points in machine learning models. Its effectiveness in fairly assessing collaborative contributions makes it a natural choice for evaluating the impact of selected clients in FL. In the FL scenario, the Shapley value of client $C_i$ in $S$ is formally defined in Definition 3.

Definition 3. Shapley value in FL: $SV_{S_t} = \mathrm{CalculateSV}(w_{S_t}, D_{val})$. Here $w_{S_t}$ represents the model parameters uploaded in round $t$ by the selected clients in the group $S_t$, and $D_{val}$ is the validation dataset used to evaluate model performance. The Shapley value $sv_i^t$ for each client $C_i \in S_t$, representing their individual contribution in round $t$, is expressed as:

$$sv_i^t = \frac{1}{m} \sum_{\mathcal{T} \subseteq S_t \setminus \{C_i\}} \binom{m-1}{|\mathcal{T}|}^{-1} \left[ E\left( \frac{1}{|\mathcal{T} \cup \{C_i\}|} \sum_{C_j \in \mathcal{T} \cup \{C_i\}} w_j^t,\ D_{val} \right) - E\left( \frac{1}{|\mathcal{T}|} \sum_{C_j \in \mathcal{T}} w_j^t,\ D_{val} \right) \right]. \quad (8)$$

Here, $E(w, D_{val})$ evaluates model performance on the validation dataset $D_{val}$ using the chosen metric (e.g., accuracy, area under the curve, or recall). The Shapley value quantifies each client's marginal contribution by comparing model performance with and without their participation. Specifically, it computes the difference $E(\mathcal{T} \cup C_i) - E(\mathcal{T})$, where $E(\mathcal{T})$ represents the performance of the model when trained with a subset $\mathcal{T}$ of clients, and $E(\mathcal{T} \cup C_i)$ measures the performance when $C_i$ is additionally included. By aggregating these differences across all possible subsets $\mathcal{T} \subseteq S_t \setminus C_i$, the Shapley value provides a fair assessment of individual contributions in each round.

Since a client's contribution to global model performance is directly influenced by the quality of their local dataset, the Shapley value naturally serves as a proxy for data quality. Clients with higher-quality data tend to provide more informative updates, leading to greater positive contributions, whereas lower-quality data may introduce noise, amplifying data quality decompensation and potentially degrading model performance. This property makes the Shapley value a reliable indicator of data quality, which forms the basis for the reputation updates in Definition 4. In addition, we maintain a historical record $SV_{his} = \{SV_1, \ldots, SV_n\}$, where $SV_i$ stores the Shapley values of client $C_i$ in the rounds they were selected. This historical record further supports the reputation update in Section 3.5, ensuring informed and adaptive adjustments.

Calculating Shapley values over all possible subsets has exponential complexity. However, practical approximations such as the Monte Carlo-Shapley method [9] and GTG-Shapley [23] reduce computational overhead, making them suitable for large-scale FL. As cross-silo FL typically involves a limited number of participants, we employ exact Shapley value computation to ensure a precise contribution evaluation.
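A brute-force evaluation of Eq. (8) over all coalitions is then straightforward. The sketch below is schematic: evaluate() stands in for averaging a coalition's uploaded parameters and scoring the resulting model on D_val, which is the expensive step in practice, and the toy utility is an assumption for illustration.

```python
from itertools import combinations
from math import comb

def exact_shapley(selected, evaluate):
    # Exact Shapley values per Eq. (8); exponential in the number of clients,
    # which is acceptable only for small cross-silo federations.
    m = len(selected)
    sv = {}
    for ci in selected:
        rest = [c for c in selected if c != ci]
        total = 0.0
        for size in range(m):
            for T in combinations(rest, size):
                # Marginal contribution E(T u {C_i}) - E(T), weighted by the
                # inverse binomial coefficient C(m-1, |T|).
                total += (evaluate(T + (ci,)) - evaluate(T)) / comb(m - 1, size)
        sv[ci] = total / m
    return sv

# Toy additive utility standing in for "average the coalition's parameters and
# score the model on the validation set"; client 1 has the most useful data here.
quality = {0: 0.05, 1: 0.20, 2: 0.10}
sv = exact_shapley([0, 1, 2], lambda T: sum(quality[c] for c in T))
```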
3.5 Reputation Update

The Shapley value provides a quantitative measure of client contributions, which forms a key component of reputation updates. However, effective client selection must balance both performance and economic feasibility. To achieve this, reputation updates incorporate bid prices alongside Shapley values, ensuring that high-quality participants are incentivized while maintaining cost-effectiveness. Specifically, positively contributing clients ($sv_i^t > 0$) receive increased reputation, while negative or near-zero values ($sv_i^t \le 0$) trigger penalization. The update rule is defined as follows:

Definition 4. Reputation Update Rule. Reputation updates depend on each client's Shapley value $sv_i^t$, bid price $B_i$, and recent performance history $err_i$. First, we define:

$$S_{pos} = \sum_{C_i \in S_t,\ sv_i^t > 0} sv_i^t, \qquad B_{pos} = \sum_{C_i \in S_t,\ sv_i^t > 0} B_i \quad (9)$$

where $S_{pos}$ is the total Shapley value of all positively contributing clients, and $B_{pos}$ is the total bid price of those clients. The reputation update formula is expressed as follows:

$$R_i = \begin{cases} R_i - \psi \cdot \rho^{err_i} & \text{if } C_i \in S_t \text{ and } sv_i^t \le 0, \\ R_i + \omega \cdot \left( 1 - \exp\left( -\dfrac{sv_i^t / S_{pos}}{B_i / B_{pos}} \right) \right) & \text{if } C_i \in S_t \text{ and } sv_i^t > 0, \\ R_i & \text{if } C_i \notin S_t. \end{cases} \quad (10)$$

Here, $\omega$ and $\psi$ represent the reward and punishment coefficients, respectively, while $err_i$ tracks the number of times client $C_i$ has demonstrated poor performance (i.e., $sv_i^t \le 0$) in the last five rounds in which they were selected. The set $SV_{his}$, which stores each client's Shapley values across rounds, is used to calculate $err_i$ by identifying rounds with negative contributions. The penalty factor $\rho$ increases the deduction exponentially with repeated poor performance, ensuring harsher penalties for clients who consistently underperform.

The penalty mechanism swiftly filters out clients whose updates degrade model performance, whether due to unreliable data or overfitting. In contrast, positively contributing clients receive reputation updates based on both their impact and their bid price, ensuring that high-quality, cost-effective participants are incentivized for future selection. This approach encourages clients to submit realistic bids while maintaining high-quality selections in subsequent rounds. Additionally, the use of an exponential reward function prevents excessively rapid reputation increases, mitigating the risk of overfitting the global model to specific clients. For clients not selected in a given round, their reputation remains unchanged, ensuring fairness across rounds.

Algorithm 1 Shapley-Bid Reputation Optimized Client Selection
1: Initialize: model parameters $w^0$, reputation values $R = \{R_1, \ldots, R_n\}$, bid prices $B = \{B_1, \ldots, B_n\}$
2: for round $t = 1$ to $T$ do
3:   $R_{th} \leftarrow \frac{1}{n} \sum_{i=1}^{n} R_i$
4:   for each $C_i \in C$ do
5:     $z(R_i) \leftarrow -\gamma (R_{th} - R_i)^{\beta}$ if $R_i \le R_{th}$; $(R_i - R_{th})^{\alpha}$ if $R_i > R_{th}$
6:   end for
7:   $S_t \leftarrow \mathrm{Select}(Z, B, B_{budget}, H_t)$
8:   Distribute global model $w^t$ to clients in $S_t$
9:   for each $C_i \in S_t$ do
10:     $w_i^t \leftarrow w^{t-1} - \eta g_i^{t-1}$
11:   end for
12:   $w^t \leftarrow \mathrm{Aggregate}(w_{S_t}^t)$
13:   $SV_{S_t} \leftarrow \mathrm{CalculateSV}(w_{S_t}^t, D_{val})$
14:   $S_{pos} \leftarrow \sum_{C_i \in S_t,\ sv_i^t > 0} sv_i^t$
15:   $B_{pos} \leftarrow \sum_{C_i \in S_t,\ sv_i^t > 0} B_i$
16:   for each $C_i \in S_t$ do
17:     $R_i \leftarrow R_i - \psi \cdot \rho^{err_i}$ if $sv_i^t \le 0$; $R_i \leftarrow R_i + \omega \cdot (1 - \exp(-(sv_i^t / S_{pos}) / (B_i / B_{pos})))$ if $sv_i^t > 0$
18:   end for
19:   Update $SV_{his}$, $H_t$
20: end for

Thus, SBRO-FL forms a closed-loop framework, integrating bidding, reputation-based selection, contribution evaluation, and adaptive reputation updates. This iterative process continuously refines client participation, ensuring a balance between data reliability, budget constraints, and incentive alignment. The full procedure is detailed in Algorithm 1.
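A compact sketch of the update in Eqs. (9)-(10) follows. The coefficients omega, psi, and rho, and the simplified err bookkeeping (incremented on each negative contribution rather than windowed over the last five selected rounds), are illustrative assumptions, not values fixed by the paper.

```python
import math

def update_reputations(R, sv, bids, selected, err,
                       omega=1.0, psi=1.0, rho=2.0):
    # Eq. (9): totals over positively contributing clients (assumed non-empty here).
    S_pos = sum(sv[i] for i in selected if sv[i] > 0)
    B_pos = sum(bids[i] for i in selected if sv[i] > 0)
    for i in selected:
        if sv[i] <= 0:
            # Eq. (10), penalty branch: grows exponentially with repeat offences.
            R[i] -= psi * rho ** err[i]
            err[i] += 1  # simplified history; the paper windows over 5 rounds
        else:
            # Eq. (10), reward branch: contribution share relative to bid share.
            ratio = (sv[i] / S_pos) / (bids[i] / B_pos)
            R[i] += omega * (1.0 - math.exp(-ratio))
    return R  # clients outside `selected` keep their reputation unchanged

R = {0: 0.0, 1: 0.0, 2: 0.0}
err = {0: 0, 1: 0, 2: 0}
R = update_reputations(R, sv={0: 0.3, 1: -0.1, 2: 0.6},
                       bids={0: 9.5, 1: 10.2, 2: 11.0},
                       selected=[0, 1, 2], err=err)
```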
4 Experiments Setup

This section presents the experimental setup used to evaluate the proposed SBRO-FL method. Section 4.1 outlines the datasets used and describes how we simulate data quality decompensation via label flipping. Section 4.2 introduces the baseline methods for comparison, and Section 4.3 provides the implementation details, including the FL framework, model architectures, and the parameters used to simulate the FL environment under budget constraints.

4.1 Data Partitioning

We utilized four widely recognized benchmark datasets (FashionMNIST [39], EMNIST-Letter [6], SVHN [26], and CIFAR-10 [19]) for image classification tasks in an FL environment. These datasets were selected for their diverse characteristics, including variations in image resolution, color channels, and class distributions, providing a comprehensive basis for evaluating FL algorithms under different conditions. Table 1 summarizes the key statistics of each dataset, including the number of classes and train/test splits.
Table 1. Statistics of the datasets used in the FL experiments.

Dataset       | Total Samples | Training Samples | Test Samples | Classes
EMNIST-Letter | 124,800       | 88,800           | 36,000       | 26
FashionMNIST  | 70,000        | 60,000           | 10,000       | 10
SVHN          | 600,000       | 73,257           | 53,608       | 10
CIFAR-10      | 60,000        | 50,000           | 10,000       | 10

To simulate a realistic cross-silo FL scenario, the training data for each dataset was limited to 10,000 samples, which were randomly partitioned among 40 clients. Each client was assigned an equal portion of the training data, and in each FL round, a subset of clients was selected for training based on the proposed selection model. The central server aggregated client updates to refine the global model over multiple communication rounds.

To examine the impact of data quality decompensation on model performance, label flipping was applied to 32 of the clients. These clients were randomly divided into four groups of eight, with label flipping proportions set at 90%, 80%, 70%, and 60%, respectively. The remaining eight clients retained their original labels and served as high-quality participants. Label flipping was chosen as a straightforward yet practical way to simulate data quality decompensation: compared to other noise models, it directly affects the training objective and mimics both malicious and unintentional mislabeling. By manipulating the flipping ratios, it is possible to assess the effectiveness of the method in identifying and mitigating the impact of low-quality updates; a sketch of this setup follows below.
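For concreteness, the label-flipping setup above could be reproduced along these lines (a sketch; the exact partitioning code used in the experiments lives in the cited repository):

```python
import numpy as np

def flip_labels(y, flip_ratio, num_classes, rng):
    # Reassign a fraction of labels to a different (wrong) class, simulating
    # data quality decompensation via label flipping.
    y = y.copy()
    idx = rng.choice(len(y), size=int(len(y) * flip_ratio), replace=False)
    offsets = rng.integers(1, num_classes, size=len(idx))  # never the true label
    y[idx] = (y[idx] + offsets) % num_classes
    return y

rng = np.random.default_rng(0)
# 40 clients: four groups of eight with 90%-60% flipping, plus eight clean ones.
ratios = [0.9] * 8 + [0.8] * 8 + [0.7] * 8 + [0.6] * 8 + [0.0] * 8
client_labels = [rng.integers(0, 10, size=250) for _ in ratios]  # 10,000 / 40
noisy_labels = [flip_labels(y, r, num_classes=10, rng=rng)
                for y, r in zip(client_labels, ratios)]
```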
4.2 Baselines

To evaluate the effectiveness of our proposed SBRO-FL approach, we compare it with three baseline methods that capture a broad spectrum of client selection strategies:

- Random Selection (RS-FL): In each round, the server randomly chooses a subset of clients within the budget constraint [25]. This method does not consider client reliability or historical performance and serves as a baseline to assess whether reputation-based selection improves FL outcomes.
- High-Quality Random Selection (HQRS-FL): Clients are randomly selected from a high-quality subset (unaffected by label flipping) while maintaining the same budget constraint. This represents an "oracle" scenario where the server selects only high-quality clients based on prior knowledge of their data quality. Although such an assumption is unrealistic in practical FL settings, HQRS-FL serves as a reference to assess how well SBRO-FL approximates high-quality selection without explicit quality labels.
- All Clients Selected without Budget (All-FL): All clients, regardless of their data quality or bid price, are selected in each round without budget constraints. However, in real-world FL deployments, unrestricted participation often leads to lower model robustness, as unreliable updates from low-quality clients degrade overall performance. This comparison allows us to evaluate whether SBRO-FL effectively balances inclusiveness and reliability within a realistic budget-constrained setting.

Although there are many advanced FL methods, such as those focusing on incentive mechanisms, communication efficiency, or adversarial robustness, these approaches typically optimize a single objective. In contrast, our work addresses data quality decompensation, incentive compatibility, and budget constraints simultaneously. A direct comparison with single-focus methods would not fully capture the effectiveness of our integrated approach. We therefore adopt these three baselines, which collectively benchmark the core aspects of client selection in our setting.

4.3 Implementation Details

The experiments were carried out using the FLEXible platform [10], an open-source framework that provides a comprehensive set of tools for deep learning and machine learning in federated environments. FLEXible allows full customization of the FL scenario, from foundational components to high-level configurations, making it well suited to evaluating the proposed method.
CNN architectures were adapted to the complexity of each dataset: two convolutional layers for FashionMNIST and EMNIST, and three for CIFAR-10 and SVHN. All models were trained with mini-batch SGD (batch size 16) and an initial learning rate of $\eta = 0.01$. Client updates were aggregated using FedAvg over 300 communication rounds. The clients' bid prices were generated following a Gaussian distribution ($\mu = 10$, $\sigma^2 = 1$) to simulate natural variation in the valuation of clients' data and computational costs, and the task budget was fixed at $B_{budget} = 45$. The selection process incorporated a decay factor $\delta = 0.5$ to balance participation diversity. The prospect theory parameters were set to $\alpha = 0.15$, $\beta = 0.3$, and $\gamma = 1$, following prior studies and preliminary tuning. The client selection problem was solved using the PuLP linear programming solver. To improve computational stability and ensure effective selection, standard preprocessing techniques were applied before solving. The complete implementation and experimental configurations are available on GitHub (https://github.com/ari-dasci/S-SBRO-FL), ensuring full reproducibility.
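Gathered in one place, the simulation parameters above might be set up as follows; the dict layout and the clipping of bids at zero are illustrative choices, while the numeric values are those reported in this section.

```python
import numpy as np

rng = np.random.default_rng(42)

config = {
    "num_clients": 40,
    "rounds": 300,      # FedAvg communication rounds
    "batch_size": 16,
    "lr": 0.01,         # eta
    "budget": 45.0,     # B_budget
    "delta": 0.5,       # decay factor in Eq. (5)
    "alpha": 0.15,      # prospect theory gain exponent, Eq. (4)
    "beta": 0.3,        # prospect theory loss exponent, Eq. (4)
    "gamma": 1.0,       # prospect theory loss coefficient, Eq. (4)
}

# Bids ~ N(mu=10, sigma^2=1), clipped at zero since Eq. (5) requires B_i >= 0.
bids = np.clip(rng.normal(10.0, 1.0, size=config["num_clients"]), 0.0, None)
```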
5 Experimental Results

This section presents the experimental results of SBRO-FL compared with the baseline methods across four benchmark datasets. We evaluate (i) the final performance and training stability of the global model across multiple datasets (Section 5.1), and (ii) the robustness of SBRO-FL against low-cost interference (Section 5.2).

5.1 Performance and Stability Evaluation

In this subsection, we first compare the final-round accuracy of the global model for each method in Table 2, and then examine convergence stability by plotting accuracy trends over multiple communication rounds (Fig. 4).

Table 2. Final-round global model accuracy of the different methods on four datasets. The variance across the final training rounds is negligible (less than 10^-4), indicating that the observed accuracy improvements are not due to random fluctuations. Note: "Gain" represents the accuracy improvement of SBRO-FL over RS-FL, calculated as (SBRO-FL − RS-FL) / RS-FL × 100%.

Dataset       | SBRO-FL | RS-FL  | HQRS-FL | All-FL | Gain (% over RS-FL)
EMNIST-Letter | 0.8166  | 0.7835 | 0.8515  | 0.7777 | +4.2%
FashionMNIST  | 0.8600  | 0.8294 | 0.8590  | 0.7966 | +3.7%
CIFAR-10      | 0.5853  | 0.4920 | 0.6121  | 0.4828 | +19.0%
SVHN          | 0.7902  | 0.6617 | 0.8036  | 0.5595 | +19.4%
Average       | 0.7630  | 0.6917 | 0.7816  | 0.6541 | +10.3%

Table 2 reports the accuracy of the global model in the final round for the different methods. SBRO-FL consistently outperforms RS-FL on all datasets, demonstrating its ability to prioritize high-quality client updates while operating under budget constraints. Furthermore, we computed the variance of the final 20 rounds for each method, which remained negligibly small. This confirms that random fluctuations did not influence the analysis, reinforcing the robustness of the reported results. In particular, CIFAR-10 and SVHN exhibit larger gains of around 19%, probably due to their greater complexity and higher susceptibility to label-flipping noise. In contrast, simpler datasets such as FashionMNIST show smaller relative gains, as even random selection can achieve a relatively high baseline accuracy.

Fig. 4. Trends in model accuracy: comparing SBRO-FL and baseline methods across diverse datasets.

Another interesting observation from the FashionMNIST experiment is that SBRO-FL achieved a slightly higher final accuracy than HQRS-FL, the ideal "oracle" scenario. This can be attributed to the ability of SBRO-FL to learn additional knowledge from clients with label noise, while HQRS-FL exclusively selects clients from a pool entirely free of label noise. This finding underscores the importance of diversity in client selection.

Additionally, although All-FL involves all available clients, it does not consistently improve accuracy and incurs higher computational costs. This highlights the need for strategic client selection, as indiscriminate participation amplifies data quality decompensation, leading to accumulated unreliable updates and global performance degradation.

Fig. 4 provides a visual comparison of global model accuracy across all training rounds, offering insight into the stability and convergence behavior of the different methods. The trends observed here reinforce the findings from Table 2, demonstrating the ability of SBRO-FL to adaptively optimize client selection over time. During training, SBRO-FL exhibits slight fluctuations and initially lags behind RS-FL. This is due to the ongoing assessment of client reliability, during which some lower-quality clients may still be selected. However, as training progresses, SBRO-FL progressively refines its selection, leading to a sustained improvement in both accuracy and stability. In later rounds, high-quality clients are consistently prioritized, enabling SBRO-FL to surpass RS-FL and maintain more stable performance. In contrast, All-FL exhibits a "peak-and-drop" behavior, where an initial increase in accuracy is followed by a decline due to the incorporation of noisy updates, ultimately degrading overall performance. Although All-FL benefits from a higher number of participants, the lack of selective filtering leads to poor long-term generalization. For more complex datasets such as CIFAR-10 and SVHN, the advantage of SBRO-FL becomes even more pronounced: SBRO-FL not only achieves higher accuracy but also demonstrates faster and more stable convergence compared to RS-FL. These trends confirm that SBRO-FL's selection mechanism successfully adapts over time, ensuring long-term stability and performance improvements.

5.2 Robustness Against Low-Cost Interference

FL systems face the risk of adversarial bidding, where unreliable clients reduce their bids to increase their probability of selection. This section evaluates SBRO-FL's resilience to such low-cost interference, testing whether it can effectively prioritize high-quality clients despite financial manipulation. To model a realistic low-bid interference scenario, bid prices are assigned based on label-flipping ratios: clients with flipping rates of 90%, 80%, 70%, 60%, and 0% receive bids of 6, 8, 10, 12, and 14, respectively. This setup mimics a practical challenge in which low-quality clients strategically lower their bids to increase their selection chances.
Table 3. Final-round global model accuracy of the different methods on four datasets after adjusting the bid strategy. "Gain" represents the improvement of SBRO-FL over RS-FL, calculated as (SBRO-FL − RS-FL) / RS-FL × 100%.

Dataset       | SBRO-FL | RS-FL  | HQRS-FL | All-FL | Gain (% over RS-FL)
EMNIST-Letter | 0.8176  | 0.7735 | 0.8532  | 0.7605 | +5.7%
FashionMNIST  | 0.8499  | 0.7638 | 0.8590  | 0.8010 | +11.3%
CIFAR-10      | 0.5999  | 0.4818 | 0.6073  | 0.4817 | +24.5%
SVHN          | 0.7808  | 0.5935 | 0.8055  | 0.5590 | +31.6%
Average       | 0.7620  | 0.6532 | 0.7812  | 0.6506 | +16.7%
As shown in Table 3, SBRO-FL effectively selects high-quality clients even amidst varying data quality and low-bid interference. SBRO-FL significantly outperforms both RS-FL and All-FL across all datasets, with particularly strong results on the more complex SVHN and CIFAR-10 datasets. On average, SBRO-FL improves accuracy by 10.89 percentage points over RS-FL (a 16.7% relative gain) and remains within 1.92 percentage points of the oracle HQRS-FL, underscoring its robustness in client selection despite data variability and low bids.

Fig. 5 shows the per-round accuracy of the global model for the four methods after adjusting the bid strategy. This price-sensitive experiment reveals a key insight: the SBRO-FL decision-making mechanism, which balances reputation and bid price, is resistant to manipulation by low bids. Even when lower bids are present, SBRO-FL successfully avoids selecting clients with poor data quality, highlighting the method's emphasis on data quality over short-term financial incentives in maintaining high model performance.

When comparing these results with Fig. 4, we observe similar trends: SBRO-FL consistently outperforms RS-FL and All-FL in stability and convergence speed, even under low-bid interference. Although SBRO-FL shows slight oscillations in accuracy, these variations are minimal, indicating that the low-bid strategy has a limited impact on its performance. In general, SBRO-FL maintains its advantage, reaffirming its robustness under challenging conditions.

Interestingly, a comparison between Fig. 4 and Fig. 5 shows that while All-FL occasionally converges faster than SBRO-FL, it eventually experiences a decline in accuracy due to the continual incorporation of noisy updates. This underscores the importance of excluding low-quality clients in FL: the negative impact of such clients, through misaligned updates and poor data quality, can lead to persistent degradation in global model performance. Consequently, SBRO-FL's filtering of low-quality clients proves essential for achieving both stability and superior final performance.

Fig. 5. Trends in model accuracy: evaluating SBRO-FL and baseline methods under low-cost interference across datasets.

5.3 Discussion on Component Contributions

While the experimental results demonstrate the effectiveness of SBRO-FL as a unified framework, a detailed ablation study remains a valuable direction for further investigation. Specifically, evaluating the isolated impact of each component, such as the bidding mechanism without reputation modeling, the reputation system without Shapley-based evaluation, or the use of simpler heuristics in place of Shapley values, would help quantify their respective contributions to overall performance. Such an analysis would provide deeper insights into the role of each module in improving accuracy, robustness, and cost-efficiency, and would further justify the design choices made in SBRO-FL. We consider this an important avenue for future empirical work.

6 Conclusion and Future Work

This work presents SBRO-FL, a unified client selection framework for cross-silo FL that addresses the intertwined challenges of data quality decompensation, incentive compatibility, and budget constraints. By combining a reputation-driven bidding mechanism with cost-aware optimization, SBRO-FL ensures that client selection reflects both historical contributions and economic feasibility.
The integration of a Shapley value-based contribution evaluation and a prospect theory-inspired reputation update enables robust and adaptive client participation over time.
Our extensive empirical evaluation on four benchmark datasets shows that SBRO-FL consistently outperforms traditional random and inclusive selection strategies, even in the presence of adversarial bidding and noisy data. These results confirm the practical effectiveness of our approach in enhancing the robustness and efficiency of FL systems.

6.1 Limitations

While SBRO-FL demonstrates strong performance across diverse datasets and adversarial scenarios, certain limitations remain. The exact computation of Shapley values, while offering fair and precise contribution assessments, incurs combinatorial complexity that may not scale efficiently to federations involving hundreds or thousands of clients. Although this is less critical in cross-silo settings, where the number of clients is typically small, future work should investigate scalable approximations or surrogate contribution metrics to maintain fairness and computational efficiency in larger deployments. Additionally, our current implementation assumes honest bid submissions; future versions could incorporate mechanisms for bid verification or robustness against strategic misreporting.

From a computational standpoint, the client selection problem formulated in SBRO-FL is a variant of the 0-1 knapsack problem, which is known to be NP-hard. This implies that finding an optimal solution becomes computationally intensive as the number of clients increases, necessitating efficient solvers or approximations in large-scale deployments. Regarding convergence, SBRO-FL builds upon standard FL frameworks such as FedAvg, whose convergence under non-i.i.d. settings has been established in prior work. Since our selection mechanism preserves the core iterative structure and does not alter local training dynamics, it inherits similar convergence behavior under bounded-variance assumptions. Furthermore, although exact Shapley value computation is used in this study, approximate methods such as Monte Carlo sampling and GTG-Shapley provide theoretical error bounds and can be adopted in future work to ensure scalability while retaining fairness.

6.2 Future Work

While SBRO-FL demonstrates strong performance in controlled experimental settings, several promising directions remain for future research. First, real-world deployment studies in industrial or healthcare cross-silo environments could validate the method's robustness under real operational constraints. Second, exploring adaptive or learnable reputation metrics, possibly using neural attention mechanisms or meta-learning, may enhance the selection dynamics beyond fixed prospect-theoretic functions. Finally, scaling SBRO-FL to larger federations with hundreds or thousands of clients will require more computationally efficient approximations of Shapley values, such as Monte Carlo or GTG-based methods, to maintain scalability without compromising fairness or accuracy.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 72171065 and in part by the Open Fund of Shaanxi Key Laboratory of Information Communication Network and Security under Grant ICNS201807. It also results from the Strategic Project IAFER-Cib (C074/23), under the collaboration agreement signed between the National Institute of Cybersecurity (INCIBE) and the University of Granada. This initiative is carried out within the framework of the Recovery, Transformation and Resilience Plan funds, financed by the European Union (Next Generation).
Qinjun Fei acknowledges the support of the China Scholarship Council program (Project ID: 202308330099).

References

[1] Mahmoud M. Badr, Mohamed M. E. A. Mahmoud, Yuguang Fang, Mohammed Abdulaal, Abdulah Jeza Aljohani, Waleed Alasmary, and Mohamed I. Ibrahem. Privacy-preserving and communication-efficient
energy prediction scheme based on federated learning for smart grids. IEEE Internet of Things Journal, 10(9):7719–7736, May 2023.
[2] Ravikumar Balakrishnan, Tian Li, Tianyi Zhou, Nageen Himayat, Virginia Smith, and Jeff Bilmes. Diverse client selection for federated learning: Submodularity and convergence analysis. In ICML 2021 International Workshop on Federated Learning for User Privacy and Data Confidentiality, volume 3, page 139. PMLR, 2021.
[3] Kallista Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konečný, Stefano Mazzocchi, H. Brendan McMahan, Timon van Overveldt, David Petrou, Daniel Ramage, and Jason Roselander. Towards federated learning at scale: System design. Proceedings of Machine Learning and Systems, 1:374–388, 2019.
[4] Enrique Mármol Campos, Pablo Fernández Saura, Aurora González-Vidal, José L. Hernández-Ramos, Jorge Bernal Bernabé, Gianmarco Baldini, and Antonio Skarmeta. Evaluating federated learning for intrusion detection in internet of things: Review and challenges. Computer Networks, 203:108661, 2022.
[5] Leiming Chen, Dehai Zhao, Liping Tao, Kai Wang, Sibo Qiao, Xingjie Zeng, and Chee Wei Tan. A credible and fair federated learning framework based on blockchain. IEEE Transactions on Artificial Intelligence, 6(2):301–316, February 2025.
[6] Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André van Schaik. EMNIST: Extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks (IJCNN), pages 2921–2926, 2017.
[7] Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. Local model poisoning attacks to Byzantine-robust federated learning. In Proceedings of the 29th USENIX Conference on Security Symposium, SEC'20, pages 1623–1640, USA, 2020. USENIX Association.
[8] Yann Fraboni, Richard Vidal, Laetitia Kameni, and Marco Lorenzi. Clustered sampling: Low-variance and improved representativity for clients selection in federated learning. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 3407–3416. PMLR, 2021.
[9] Amirata Ghorbani and James Zou. Data Shapley: Equitable valuation of data for machine learning. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2242–2251. PMLR, 2019.
[10] Francisco Herrera, Daniel Jiménez-López, Alberto Argente-Garrido, Nuria Rodríguez-Barroso, Cristina Zuheros, Ignacio Aguilera-Martos, Beatriz Bello, Mario García-Márquez, and M. Victoria Luzón. FLEX: Flexible federated learning framework. Information Fusion, 117:102792, 2025.
[11] Chao Huang, Ming Tang, Qian Ma, Jianwei Huang, and Xin Liu. Promoting collaboration in cross-silo federated learning: Challenges and opportunities. IEEE Communications Magazine, 62(4):82–88, April 2024.
[12] Yae Jee Cho, Jianyu Wang, and Gauri Joshi. Towards understanding biased client selection in federated learning. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, pages 10351–10375. PMLR, 2022.
[13] Joohyung Jeon, Soohyun Park, Minseok Choi, Joongheon Kim, Young-Bin Kwon, and Sungrae Cho. Optimal user selection for high-performance and stabilized energy-efficient federated learning platforms. Electronics, 9(9):1359, 2020.
[14] Yutao Jiao, Ping Wang, Dusit Niyato, Bin Lin, and Dong In Kim. Toward an automated auction framework for wireless
federated learning services market. IEEE Transactions on Mobile Computing, 20(10):3034–3048, 2020.
[15] Daniel Kahneman and Amos Tversky. Prospect theory: An analysis of decision under risk. Econometrica, 47(2):263–291, 1979.
[16] Jiawen Kang, Zehui Xiong, Dusit Niyato, Shengli Xie, and Junshan Zhang. Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory. IEEE Internet of Things Journal, 6(6):10700–10714, 2019.
[17] Latif U. Khan, Shashi Raj Pandey, Nguyen H. Tran, Walid Saad, Zhu Han, Minh N. H. Nguyen, and Choong Seon Hong. Federated learning for edge networks: Resource optimization and incentive mechanism. IEEE Communications Magazine, 58(10):88–93, 2020.
[18] Jakub Konečný, H. Brendan McMahan, Daniel Ramage, and Peter Richtárik. Federated optimization: Distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527, 2016.
[19] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[20] Jaewook Lee, Haneul Ko, Sangwon Seo, and Sangheon Pack. Data distribution-aware online client selection algorithm for federated learning in heterogeneous networks. IEEE Transactions on Vehicular Technology, 72(1):1127–1136, 2023.
[21] Li Li, Yuxi Fan, Mike Tse, and Kuo-Yi Lin. A review of applications in federated learning. Computers & Industrial Engineering, 149:106854, 2020.
[22] Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, and Zhihua Zhang. On the convergence of FedAvg on non-IID data. In International Conference on Learning Representations (ICLR), 2020.
[23] Zelei Liu, Yuanyuan Chen, Han Yu, Yang Liu, and Lizhen Cui. GTG-Shapley: Efficient and accurate participant contribution evaluation in federated learning. ACM Transactions on Intelligent Systems and Technology (TIST), 13(4):1–21, 2022.
[24] Samara Mayhoub and Tareq M. Shami. A review of client selection methods in federated learning. Archives of Computational Methods in Engineering, 31(2):1129–1152, November 2023.
[25] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 1273–1282. PMLR, 2017.
[26] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[27] Takayuki Nishio and Ryo Yonetani. Client selection for federated learning with heterogeneous resources in mobile edge. In ICC 2019 - 2019 IEEE International Conference on Communications (ICC), pages 1–7. IEEE, 2019.
[28] Jinlong Pang, Jieling Yu, Ruiting Zhou, and John C. S. Lui. An incentive auction for heterogeneous client selection in federated learning. IEEE Transactions on Mobile Computing, 22(10):5733–5750, 2022.
[29] Monalisa Panigrahi, Sourabh Bharti, and Arun Sharma. FedDCS: A distributed client selection framework for cross device federated learning. Future Generation Computer Systems, 144:24–36, July 2023.
[30] Jiahao Qi, Feilong Lin, Zhongyu Chen, Changbing Tang, Riheng Jia, and Minglu Li. High-quality model aggregation for blockchain-based federated learning via reputation-motivated task participation.
IEEE Internet of Things Journal, 9(19):18378–18391, 2022.
[31] Anichur Rahman, Md Sazzad Hossain, Ghulam Muhammad, Dipanjali Kundu, Tanoy Debnath, Muaz Rahman, Md Saikat Islam Khan, Prayag Tiwari, and Shahab S. Band. Federated learning-based AI approaches in smart healthcare: concepts, taxonomies, challenges and open issues. Cluster Computing, 26(4):2271–2311, 2023.
[32] Nuria Rodríguez-Barroso, Daniel Jiménez-López, M. Victoria Luzón, Francisco Herrera, and Eugenio Martínez-Cámara. Survey on federated learning threats: Concepts, taxonomy on attacks and defences, experimental study and challenges. Information Fusion, 90:148–173, 2023.
[33] Yunus Sarikaya and Ozgur Ercetin. Motivating workers in federated learning: A Stackelberg game perspective. IEEE Networking Letters, 2(1):23–27, 2020.
[34] L. S. Shapley. A value for n-person games. In H. W. Kuhn and A. W. Tucker, editors, Contributions to the Theory of Games, number 28, pages 307–317. Princeton University Press, 1953.
[35] Zhendong Song, Hongguang Sun, Howard H. Yang, Xijun Wang, Yan Zhang, and Tony Q. S. Quek. Reputation-based federated learning for secure wireless networks. IEEE Internet of Things Journal, 9(2):1212–1226, 2022.
[36] Gan Sun, Yang Cong, Jiahua Dong, Qiang Wang, Lingjuan Lyu, and Ji Liu. Data poisoning attacks on federated machine learning. IEEE Internet of Things Journal, 9(13):11365–11375, 2022.
[37] Zhilin Wang, Qin Hu, Ruinian Li, Minghui Xu, and Zehui Xiong. Incentive mechanism design for joint resource allocation in blockchain-based federated learning. IEEE Transactions on Parallel and Distributed Systems, 34(5):1536–1547, May 2023.
[38] Jie Wen, Zhixia Zhang, Yang Lan, Zhihua Cui, Jianghui Cai, and Wensheng Zhang. A survey on federated learning: challenges and applications. International Journal of Machine Learning and Cybernetics, 14(2):513–535, 2023.
[39] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
[40] Liangkun Yu, Rana Albelaihi, Xiang Sun, Nirwan Ansari, and Michael Devetsikiotis. Jointly optimizing client selection and resource management in wireless federated learning for internet of things. IEEE Internet of Things Journal, 9(6):4385–4395, 2021.
[41] Liangqi Yuan, Ziran Wang, Lichao Sun, Philip S. Yu, and Christopher G. Brinton. Decentralized federated learning: A survey and perspective. IEEE Internet of Things Journal, 11(21):34617–34638, November 2024.
[42] Yifei Zhang, Dun Zeng, Jinglong Luo, Xinyu Fu, Guanzhong Chen, Zenglin Xu, and Irwin King. A survey of trustworthy federated learning: Issues, solutions, and challenges. ACM Transactions on Intelligent Systems and Technology, 15(6):1–47, October 2024.
[43] Jianxin Zhao, Xinyu Chang, Yanhao Feng, Chi Harold Liu, and Ningbo Liu. Participant selection for federated learning with heterogeneous data in intelligent transport system. IEEE Transactions on Intelligent Transportation Systems, 24(1):1106–1115, 2023.