Source: https://arxiv.org/abs/2505.17827v1

a small language model to assess the importance of each lexical unit (such as sentences or tokens) in the original prompts, and then drops the less informative content for prompt compression. Subsequent studies by Jiang et al. (2023, 2024) followed this line of research, proposing more fine-grained compression methods. Pan et al. (2024) utilized a BERT model to transform prompt compression into a classification problem.

These compression methods can be categorized into unconditional and conditional approaches, based on whether important information is used as a condition during compression. For instance, in prompts containing instructions, demonstrations, and queries, the user queries and instructions can act as crucial conditions for compressing the demonstrations. Specifically, unconditional methods compress demonstrations directly, while conditional methods compress based on essential conditional information $v$ using the following formula:

$$I(x_t \mid v, x_{<t}) = -\log_2 P(x_t \mid v, x_{<t}) \quad (3)$$

In terms of effectiveness, conditional methods generally outperform unconditional methods because they preserve more information (Pan et al., 2024).

3 Conditional Token Selection

Conditional Token Selection (CTS) builds upon the idea of conditional token compression in prompts. It aims to compress long CoT sequences, such as those generated by harnessing reinforcement learning techniques (e.g., R1), and subsequently fine-tune models on the compressed data. The goal is to enhance model performance while reducing training and inference resource consumption.

3.1 Problem Formulation

Given a long CoT dataset, each instance $x$ consists of a problem $x_{prob}$, thinking (CoT reasoning) $x_{thk}$, and a final answer $x_{ans}$, denoted as $x = \{x_{prob}, x_{thk}, x_{ans}\}$, as depicted in Figure 1.

Figure 2: An example of CoT compression using CTS, where the left figure employs a well-trained RM, while the right uses a standard RM.
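Before moving on, note that the conditional self-information of Equation (3) above is cheap to compute from a reference model's token probabilities. The sketch below uses purely hypothetical probability values (not taken from any real model) to show how conditioning on $v$ lowers a token's information content:

```python
import math

def self_information_bits(p):
    """Self-information -log2(p) of a token with conditional probability p,
    in bits (Eq. 3): the more predictable a token, the fewer bits it carries."""
    return -math.log2(p)

# Hypothetical probabilities a reference model might assign the same token
# without and with the condition v (e.g., the user query) in context.
p_plain       = 0.25  # P(x_t | x_<t)
p_conditional = 0.50  # P(x_t | v, x_<t)

print(self_information_bits(p_plain))        # 2.0 bits
print(self_information_bits(p_conditional))  # 1.0 bit
```

Tokens that become more predictable under the condition carry less information and are therefore better candidates for removal.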
Let us consider a small language model, such as Qwen2.5-7B-Instruct, with its original parameters denoted as $\theta_{LM}$. The distillation objective is to train this small model to imbue it with reasoning (thinking) capabilities and achieve strong performance by minimizing the following loss function:

$$\mathcal{L} = -\sum_{i=1}^{l} \log P_{\theta_{LM}}(y_i \mid x_{prob}, y_{<i}) \quad (4)$$

where $y = \{y_i\}_{i=1}^{l} = x_{thk} \oplus x_{ans}$ is the target sequence of $l$ tokens, with $\oplus$ denoting the concatenation of tokens.

The objective of a long CoT compression system can be defined as:

$$\min_{\tilde{y}} \; \mathrm{dist}(A, \tilde{A}) + \lambda \lVert \tilde{y} \rVert_0 \quad (5)$$

where $\tilde{y}$ represents the compressed CoT, a subsequence of $y$. $A$ and $\tilde{A}$ represent, respectively, the answers to any question $Q$ given by the small language models trained with $y$ and $\tilde{y}$. Here, $\mathrm{dist}(\cdot,\cdot)$ is a function measuring the distance (e.g., KL divergence), $\lambda$ serves as a hyper-parameter balancing the compression ratio, and $\lVert \cdot \rVert_0$ is the $\ell_0$ penalty on the length of $\tilde{y}$.

3.2 Reference Modeling

When determining token importance in CoT data, a reference model (RM) is typically required to estimate self-information or perplexity using Equations (1) and (2). This RM is usually a lightweight small language model. However, through experimental observations, we found that using an RM directly to compress CoT tends to remove important but commonly used numbers or alphanumeric symbols. As depicted in the right panel of Figure 2, both the water flow velocity variable 'v' and the denominator '3' from the fraction 1/3 have been omitted. These elements are essential for comprehending the subsequent equations. To teach the reference model which numbers and reasoning
symbols are important for reaching the final answer, we curated a high-quality dataset $\{q, z_1, \ldots, z_K, a\}$ that reflects the desired data distribution. Here $z = \{z_1, \ldots, z_K\}$ denotes a set of concise and essential reasoning steps. We then train an RM using the cross-entropy loss on the curated data:

$$\mathcal{L}_{RM} = -\sum_{i=1}^{K} \log P(z_i \mid x_{ins}, q, a, z_{<i}) \quad (6)$$

where $x_{ins}$ represents the instruction "For a problem q, the following reasoning steps are important to get the answer a". The resulting RM is then used to assess token importance within long CoT trajectories, allowing us to focus on the most influential tokens.

3.3 Token Selection Based on Conditional Perplexity Differences

To perform conditional compression on CoT data, we need to evaluate the conditional importance of each token in the target sequence $y$. When utilizing perplexity for token compression, to ensure that the compressed and original data distributions match, the following conditional independence assumption is required:

$$A \perp\!\!\!\perp \bar{y} \mid \tilde{y} \quad (7)$$

Here, $\bar{y}$ represents the removed portion, such that $y = \tilde{y} \oplus \bar{y}$. Under this assumption, we obtain:

$$p(A \mid \tilde{y}) \approx p(A \mid y) \quad (8)$$

To mitigate the inaccuracy introduced by the conditional independence assumption, we adopt the iterative compression method from Jiang et al. (2023): we first divide $y$ into several segments $S = \{s_1, \ldots, s_m\}$, then compress within each segment to obtain $\{\tilde{s}_1, \ldots, \tilde{s}_m\}$, and finally concatenate the compressed segments to form the final compressed text.

For a more fine-grained assessment of token conditional importance, we use the distribution shift caused by conditioning on the answer to quantify the association between each thinking token and the answer. Consequently, we derive a score $r_i$ for each token in the target sequence $y$, calculated as follows:

$$r_i = \mathrm{PPL}(x^{thk}_i \mid x^{thk}_{<i}) - \mathrm{PPL}(x^{thk}_i \mid x_{ans}, x^{thk}_{<i})$$

Figure 3: Comparison of accuracy across different compression ratios for various model configurations.
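To make the scoring concrete, here is a minimal sketch of the conditional perplexity difference $r_i$, together with a thresholding step that keeps roughly a target fraction of the highest-scoring thinking tokens. All token strings and perplexity values are hypothetical stand-ins for what a trained RM would produce:

```python
def conditional_scores(ppl_plain, ppl_with_answer):
    """r_i: per-token perplexity without the answer, minus perplexity with the
    answer prepended as a condition. A large positive score means the answer
    makes the token much more predictable, i.e. the token is strongly
    associated with reaching the answer."""
    return [p - q for p, q in zip(ppl_plain, ppl_with_answer)]

def select_tokens(tokens, scores, alpha):
    """Keep roughly an alpha fraction of thinking tokens: the threshold
    r_alpha is the score of the k-th best token (k = alpha * n), and tokens
    scoring at least r_alpha are retained, in their original order."""
    k = max(1, round(alpha * len(tokens)))
    r_alpha = sorted(scores, reverse=True)[k - 1]
    return [t for t, s in zip(tokens, scores) if s >= r_alpha]

# Hypothetical per-token perplexities for the thinking tokens of one segment.
tokens          = ["the", "flow", "speed", "v", "is", "1/3"]
ppl_plain       = [ 4.1,   12.0,    9.5,  40.0,  3.2,  30.0]
ppl_with_answer = [ 4.0,   10.0,    8.3,   8.0,  3.1,   5.0]

scores = conditional_scores(ppl_plain, ppl_with_answer)
print(select_tokens(tokens, scores, alpha=0.5))  # ['flow', 'v', '1/3']
```

Note how the variable `v` and the fraction `1/3` survive: conditioning on the answer makes them far more predictable, which is exactly the behavior the standard RM in Figure 2 fails to capture.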
Finally, given a compression ratio $\alpha$, after scoring each token in the reasoning chain using the RM, we determine a threshold $r_\alpha$, represented by the $\alpha$-quantile of the importance scores, and select the thinking tokens whose scores exceed the threshold:

$$\tilde{x}_{thk} = \{x^{thk}_i \mid r_i > r_\alpha\}$$

4 Experiments

4.1 Experimental Setup

Reference Model Training. To train our mathematical reasoning reference model, we utilized the first 9.3K samples from the MATH training dataset (Face, 2025), comprising problems from NuminaMath 1.5 with reasoning traces generated by DeepSeek-R1. To imbue the model with the ability to assess token importance in mathematical reasoning chains, DeepSeek-R1 was prompted to meticulously select 8 million tokens from an original pool of 54 million reasoning tokens. This curated set of tokens subsequently supervised the fine-tuning of the RM.

Implementation Details & Dataset. To demonstrate the effectiveness of our reference model and the Conditional Token Selection framework, we leveraged the framework proposed by Jiang et al. (2024), employing our trained RM for Conditional Token Selection on the second 9.3K MATH training set from Face (2025). We then fine-tuned Llama-3.1-8B-Instruct (Grattafiori et al., 2024), Qwen2.5-7B-Instruct, and Qwen2.5-14B-Instruct (Qwen et al., 2025) using the compressed dataset with compression ratios $\alpha$ of {0.5, 0.6, 0.7, 0.8, 0.9}.

Figure 4: Compression Ratio
vs. Average Reasoning Tokens for CTS and TokenSkip on the Qwen2.5-7B-Instruct model across different test sets.

Evaluation Benchmarks & Metrics. The evaluation leverages three widely used reasoning benchmarks: AIME24, MATH500, and GPQA Diamond (Mathematical Association of America, 2024; Hendrycks et al., 2021; Rein et al., 2024). We used the actual compression ratio, average accuracy, and average reasoning token count as metrics to evaluate compression methods. All training and evaluation were conducted on 8 NVIDIA A800 GPUs. During training, we fine-tuned for 3 epochs with a batch size of 16. The maximum learning rate was set at 1e-5 with a cosine decay schedule. We set the maximum sequence length to 4096.

Baselines. In our main experiments, we compared Conditional Token Selection with unconditional TokenSkip (Xia et al., 2025), LLMLingua (Jiang et al., 2023), and Prompt-based Compression. We designate the method that directly uses the original CoT data for training as Original. For the Prompt-based method, we instructed GPT-4o (the gpt-4o-2024-08-06 version) to compress long CoT reasoning by providing prompts such as "Please retain important reasoning tokens in the Chain-of-Thought and remove unnecessary ones, preserving α% of the original tokens." However, we observed that GPT tends to be overly aggressive in compression, consistently preserving less than 10% of the original tokens regardless of the specified α value. Therefore, we did not enforce the compression ratio α and simply used GPT-4o to compress the reasoning chains directly. These baselines are referred to as GPT-4o, LLMLingua, and TokenSkip in Table 1, respectively.

Figure 5: Performance metrics (Reasoning Token Reduction percentage and Absolute Accuracy Improvement (%) relative to original) for various compression methods and ratios on Qwen2.5-7B-Instruct and Qwen2.5-14B-Instruct models and MATH500.
The top-right region represents optimal performance, signifying higher accuracy and reduced reasoning token usage.

4.2 Main Results

Table 1 presents the performance of different compression methods on the Qwen2.5-14B-Instruct model across various compression ratios. Notably, our method achieves the highest accuracy at all five compression ratios compared to the other compression approaches. Figure 3 illustrates the accuracy of CTS versus TokenSkip across different models and compression ratios. CTS consistently outperforms TokenSkip, highlighting the superiority of conditional token compression.

Although the Prompt-based GPT-4o shown in Table 1 uses fewer inference tokens, this is a result of its excessive compression of the chain of thought. In reality, the model fails to learn thinking capabilities from long CoT data, resulting in low accuracy across all three benchmarks.

Furthermore, for CTS at compression ratios of 0.9 and 0.8, the model shows improvements in both accuracy and inference efficiency. For example, on AIME24, at compression ratios of 0.9 and 0.8, accuracy increased by 10% and 4.3% respectively, while inference tokens decreased by 1373 and 1274. Notably, for GPQA Diamond, CTS achieves a 9.1% accuracy gain with 13.2% fewer reasoning tokens. Further reduction of inference tokens by 75.8% results in only a 5% accuracy drop, and since the compression ratio at this point is 42%, training costs are significantly
reduced.

As the compression ratio decreases, although CTS accuracy declines compared to Original, inference tokens continue to decrease. Thus, there exists a trade-off between accuracy and inference efficiency. From Table 1, we can infer that the optimal point lies between ratios 0.7 and 0.8, where model capability remains unchanged while minimizing inference token consumption.

Table 1: Experimental results of various compression methods on Qwen2.5-14B-Instruct, showing accuracy, average reasoning CoT tokens, and compression ratio (actual ratio).

| Methods | Ratio (Actual) | MATH500 Acc.↑ | MATH500 Tokens↓ | AIME24 Acc.↑ | AIME24 Tokens↓ | GPQA Diamond Acc.↑ | GPQA Diamond Tokens↓ |
|---|---|---|---|---|---|---|---|
| Original | 1.0 | 90.2 | 5012 | 40 | 23041 | 51.5 | 12000 |
| GPT-4o | 0.9 (0.06) | 61.4 | 283 | 0 | 353 | 35.8 | 353 |
| LLMLingua | 0.9 (0.88) | 84.4 | 5597 | 33.3 | 19731 | 53.0 | 10689 |
| | 0.8 (0.80) | 65.6 | 2510 | 10.0 | 4230 | 44.9 | 4037 |
| | 0.7 (0.71) | 60.4 | 2511 | 6.7 | 4588 | 43.9 | 3371 |
| | 0.6 (0.62) | 59.0 | 2270 | 6.7 | 3076 | 40.9 | 3347 |
| | 0.5 (0.50) | 53.6 | 1998 | 3.3 | 3789 | 40.4 | 2796 |
| TokenSkip | 0.9 (0.88) | 86.6 | 4941 | 40.0 | 19985 | 50.0 | 12455 |
| | 0.8 (0.80) | 83.4 | 5549 | 26.7 | 20945 | 50.0 | 13275 |
| | 0.7 (0.71) | 78.2 | 6566 | 16.7 | 24718 | 43.4 | 15531 |
| | 0.6 (0.62) | 62.8 | 8595 | 10.0 | 27748 | 38.8 | 16764 |
| | 0.5 (0.50) | 47.8 | 9824 | 3.3 | 26555 | 31.8 | 19121 |
| CTS | 0.9 (0.87) | 91.6 | 4703 | 50.0 | 21668 | 60.6 | 10413 |
| | 0.8 (0.81) | 90.8 | 4922 | 43.4 | 21767 | 53.5 | 13136 |
| | 0.7 (0.74) | 86.4 | 3310 | 33.3 | 10448 | 57.1 | 10372 |
| | 0.6 (0.66) | 81.0 | 3787 | 16.7 | 10308 | 48 | 9712 |
| | 0.5 (0.58) | 75.6 | 2036 | 10.0 | 3196 | 46.5 | 2906 |

However, poor compression methods can actually decrease model inference efficiency. As shown in Figure 4, as the compression ratio increases, TokenSkip's reasoning token consumption actually increases on the 7B model. This demonstrates that tokens in CoT cannot be removed arbitrarily, which aligns with intuition.

Figure 5 displays the percentage of reasoning token reduction and accuracy improvement for three methods. Higher accuracy with greater reasoning token reduction is preferable. Therefore, our method achieves the optimal balance between token reduction and accuracy improvement.
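The headline GPQA Diamond figures quoted above follow directly from the Table 1 rows (Original: 51.5 accuracy / 12000 tokens; CTS at ratio 0.9: 60.6 / 10413; CTS at ratio 0.5: 46.5 / 2906). A quick arithmetic check:

```python
def token_reduction_pct(original_tokens, compressed_tokens):
    """Percentage reduction in average reasoning tokens relative to the
    Original (uncompressed) training run."""
    return 100.0 * (original_tokens - compressed_tokens) / original_tokens

# GPQA Diamond, Qwen2.5-14B-Instruct (numbers from Table 1):
print(round(token_reduction_pct(12000, 10413), 1))  # 13.2  (CTS @ ratio 0.9)
print(round(token_reduction_pct(12000, 2906), 1))   # 75.8  (CTS @ ratio 0.5)
print(round(51.5 - 46.5, 1))                        # 5.0   (accuracy drop @ 0.5)
```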
4.3 Ablation Study

Our ablation study aims to verify: 1) the effectiveness of using a well-trained reference model for selecting valuable tokens, and 2) the effectiveness of conditional token importance assessment compared to unconditional methods. For the unconditional token importance assessment, we follow the framework established in Jiang et al. (2023).

Table 2: Ablation Study Variant Comparison

| Model Variant | Conditional | Trained RM |
|---|---|---|
| Base | × | × |
| + Conditional | ✓ | × |
| + RM-Tuned | × | ✓ |
| Proposed (CTS) | ✓ | ✓ |

We introduce the following variants of our method for the ablation study, as shown in Table 2: (1) Base: using a standard RM to predict token importance in CoT reasoning without conditioning; (2) + Conditional: using a standard RM to predict token importance with conditioning; (3) + RM-Tuned: using a well-trained RM to predict token importance without conditioning; (4) Proposed (CTS): using a well-trained RM to predict token importance with conditioning.

As demonstrated in Figures 6 and 7, conditional compression methods significantly enhance model performance. Notably, the highest-performing approaches in both figures employ conditional token importance prediction methods, underscoring their effectiveness in achieving superior accuracy.

Figure 6: Ablation experiments of the Qwen2.5-14B-Instruct model on the MATH500 dataset under different compression ratios.

The Proposed method performs slightly better than the + Conditional approach, while the + RM-Tuned method shows marginal improvements over the Base method. This indicates that training the reference model provides some benefit in identifying important reasoning tokens. The modest improvement might be attributed to the limited size of the high-quality corpus used for training, which contained only 8M tokens.

Table 3 demonstrates that reasoning tokens do not differ substantially across variants, indicating that the various methods do not significantly improve inference efficiency. In fact, at certain compression ratios, efficiency actually decreases. This suggests that accuracy improvements at the same compression ratio come with a corresponding increase in reasoning token consumption. This observation aligns with the test-time scaling results in Zhang et al. (2025), which indicate that model capability scales with inference length.

5 Related Work

5.1 Overthinking in Long CoT Reasoning Models

Chen et al. (2025); Team et al. (2025) demonstrated that long CoT reasoning models generate overly detailed or unnecessarily elaborate reasoning steps, ultimately reducing their problem-solving efficiency. Many current reasoning models with smaller parameter counts tend to produce verbose reasoning or redundant intermediate steps, making them unable to provide answers within the user-defined token budget. These results reveal the phenomenon of redundant thinking in reasoning models.

Figure 7: Ablation experiments of the Qwen2.5-14B-Instruct model on the AIME24 dataset under different compression ratios.

5.2 Efficient Reasoning

Prompt-based Chain of Thought (CoT) methods (Wei et al., 2022; Kojima et al., 2022) guide models to think step-by-step, enhancing their problem-solving capabilities.
Chain of Draft (Xu et al., 2025), through prompting, retains essential formulas and numbers in the thought chain, maintaining performance while reducing inference costs. Lee et al. (2025) conducted a comprehensive comparison of prompt-based CoT compression methods. Wu et al. (2025); Ma et al. (2025) implement thought intervention by incorporating first-person prompts in the model's thinking process, such as "<think> I think I have finished the thinking. </think>", to achieve instruction following and efficient reasoning.

Kang et al. (2025) improve reasoning efficiency by compressing the reasoning process. Yu et al. (2025) train models by collecting variable-length CoT reasoning data with both long and short reasoning, thereby experimenting with reduced inference tokens. Munkhbat et al. (2025); Yeo et al. (2025); Xia et al. (2025) collect short CoT data by reducing redundant reasoning steps after full-length reasoning.

5.3 Prompt Compression

As language models increase in parameter scale, their capabilities also grow stronger, enabling many tasks to achieve better results merely by changing the model's input, as in RAG (Lewis et al., 2020) and few-shot learning (Wang et al., 2020). As prompts become longer, however, resource consumption increases significantly. Li et al. (2023) proposed evaluating token importance by calculating token perplexity in context to compress the input. Building on this foundation, Jiang et al. (2023) proposed a mixed coarse-grained and fine-grained compression method. Jiang et al. (2024) extended this work by introducing a task-aware prompt compression method that incorporates conditional information from the query when calculating perplexity. Pan et al. (2024) transformed token compression into a binary classification problem, utilizing BERT models for compression.

Table 3: Comparison of Different Methods Across Varying Ratios. Cell values show average reasoning tokens at each compression ratio.

| Ratio | Base | + RM | + Conditional | Proposed |
|---|---|---|---|---|
| 0.9 | 5023 | 5597 | 4563 | 5012 |
| 0.8 | 2510 | 2992 | 5001 | 4703 |
| 0.7 | 2511 | 2369 | 2883 | 3310 |
| 0.6 | 2270 | 2167 | 1952 | 3787 |
| 0.5 | 1998 | 2036 | 1787 | 2036 |

6 Conclusion

We propose the Conditional Token Selection method, which utilizes a fine-tuned reference model to calculate conditional perplexity differences for each token in long CoT data, identifying the thinking tokens most critical for deriving correct answers. By applying flexible compression ratios, our method compresses CoT reasoning data to enable more efficient training while maintaining the model's reasoning capabilities and ensuring output efficiency. Extensive experiments across various LLMs and tasks validate the effectiveness of CTS. Comprehensive ablation studies also demonstrate the importance of each component in our method. Impressively, our approach achieved up to a 10% improvement in accuracy while reducing reasoning tokens by 6% (Qwen2.5-14B-Instruct on AIME24 with an actual compression ratio of 0.87). For inference efficiency, we achieved a maximum reduction of 75.8% in reasoning tokens with only a 5% accuracy drop (Qwen2.5-14B-Instruct on GPQA Diamond with an actual compression ratio of 0.58).

Additionally, the RM trained on valuable reasoning tokens can function as a standalone model for other methods requiring assessment of reasoning token importance.
This work contributes to making powerful reasoning capabilities more accessible in resource-constrained environments and opens new directions for developing efficient reasoning models.

Limitations

First, our approach is constrained by data limitations: the quantity of valuable reasoning tokens used for training the reference model is insufficient for broader token importance assessment capabilities, especially in specialized domains such as code-related problems. Resource constraints also prevented experiments with larger models such as 32B and 72B variants.

Second, our method focuses primarily on compressing existing reasoning patterns rather than developing new reasoning strategies, and requires high-quality reasoning datasets, which may not be available for all domains or tasks. The token importance evaluation, while effective, remains an approximation of each token's true contribution to the reasoning process.

Third, very high compression ratios may affect the interpretability of reasoning chains for human readers, potentially limiting their educational or explanatory value in applications where transparency is important. Additionally, while we demonstrate effectiveness across several reasoning benchmarks, these may not fully represent the complexity and diversity of real-world reasoning tasks.

Ethical Statement

The datasets used in our experiments are publicly available, English-labeled, and privacy-compliant, with all research artifacts licensed for permissible use. Our methodology adheres to ACL ethical guidelines. However, while our approach enhances reasoning efficiency, it may also accelerate AI deployment in sensitive areas without proper safeguards and reduce transparency in decision-making. We stress the need for responsible use, prioritizing transparency, fairness, and accountability, particularly in explainability-critical applications, where lower compression ratios may be necessary to preserve interpretability. In our humble opinion, we have not discerned any potential social risks.

References

Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, and Dong Yu. 2025. Do not think that much for 2+3=? On the overthinking of o1-like LLMs. Preprint, arXiv:2412.21187.

Hugging Face. 2025. Open R1: A fully open reproduction of DeepSeek-R1.

Aaron Grattafiori, Abhimanyu Dubey, and Abhinav Jauhri et al. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, and 1 others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874.

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, and 1 others. 2024. OpenAI o1 system card. arXiv preprint arXiv:2412.16720.

Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023. LLMLingua: Compressing prompts for accelerated inference of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13358–13376, Singapore. Association for Computational Linguistics.

Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024. LongLLMLingua: Accelerating and enhancing LLMs in long context scenarios via prompt compression.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1658–1677, Bangkok, Thailand. Association for Computational Linguistics.

Yu Kang, Xianghui Sun, Liangyu Chen, and Wei Zou. 2025. C3oT: Generating shorter chain-of-thought without compromising effectiveness. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 24312–24320.

Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213.

Ayeong Lee, Ethan Che, and Tianyi Peng. 2025. How well do LLMs compress their own chain-of-thought? A token complexity approach. arXiv preprint arXiv:2503.01141.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, and 1 others. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474.

Yucheng Li, Bo Dong, Chenghua Lin, and Frank Guerin. 2023. Compressing context to enhance inference efficiency of large language models. arXiv preprint arXiv:2310.06201.

Wenjie Ma, Jingxuan He, Charlie Snell, Tyler Griggs, Sewon Min, and Matei Zaharia. 2025. Reasoning models can be effective without thinking. Preprint, arXiv:2504.09858.

Mathematical Association of America. 2024. AIME. URL https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions/.

Ivan Moshkov, Darragh Hanley, Ivan Sorokin, Shubham Toshniwal, Christof Henkel, Benedikt Schifferer, Wei Du, and Igor Gitman. 2025. AIMO-2 winning
solution: Building state-of-the-art mathematical reasoning models with OpenMathReasoning dataset. arXiv preprint arXiv:2504.16891.

Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. 2025. s1: Simple test-time scaling. Preprint, arXiv:2501.19393.

Tergel Munkhbat, Namgyu Ho, Seo Hyun Kim, Yongjin Yang, Yujin Kim, and Se-Young Yun. 2025. Self-training elicits concise reasoning in large language models. Preprint, arXiv:2502.20122.

Zhuoshi Pan, Qianhui Wu, Huiqiang Jiang, Menglin Xia, et al. 2024. LLMLingua-2: Data distillation for efficient and faithful task-agnostic prompt compression. In Findings of the Association for Computational Linguistics: ACL 2024, pages 963–981, Bangkok, Thailand. Association for Computational Linguistics.

Qwen: An Yang et al. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. 2024. GPQA: A graduate-level Google-proof Q&A benchmark. In First Conference on Language Modeling.

Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Hanjie Chen, Xia Hu, and 1 others. 2025. Stop overthinking: A survey on efficient reasoning for large language models. arXiv preprint arXiv:2503.16419.

Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, et al. 2025. Kimi k1.5: Scaling reinforcement learning with LLMs. Preprint, arXiv:2501.12599.

NovaSky Team. 2025. Sky-T1: Train your own o1 preview model within $450. https://novasky-ai.github.io/posts/sky-t1. Accessed: 2025-01-09.

Xiaoyu Tian, Sitong Zhao, Haotian Wang, Shuaiting Chen, Yunjie Ji, Yiping Peng, Han Zhao, and Xiangang Li. 2025. Think twice: Enhancing LLM reasoning by scaling multi-round test-time thinking. arXiv preprint arXiv:2503.19855.
Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni. 2020. Generalizing from a few examples: A survey on few-shot learning. ACM Computing Surveys (CSUR), 53(3):1–34.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.

Tong Wu, Chong Xiang, Jiachen T. Wang, and Prateek Mittal. 2025. Effectively controlling reasoning models through thinking intervention. Preprint, arXiv:2503.24370.

Heming Xia, Yongqi Li, Chak Tou Leong, Wenjie Wang, and Wenjie Li. 2025. TokenSkip: Controllable chain-of-thought compression in LLMs. arXiv preprint arXiv:2502.12067.

Silei Xu, Wenhao Xie, Lingxiao Zhao, and Pengcheng He. 2025. Chain of draft: Thinking faster by writing less. arXiv preprint arXiv:2502.18600.

Edward Yeo, Yuxuan Tong, Morry Niu, Graham Neubig, and Xiang Yue. 2025. Demystifying long chain-of-thought reasoning in LLMs. Preprint, arXiv:2502.03373.

Bin Yu, Hang Yuan, Yuliang Wei, Bailing Wang, Weizhen Qi, and Kai Chen. 2025. Long-short chain-of-thought mixture supervised fine-tuning eliciting efficient reasoning in large language models. Preprint, arXiv:2505.03469.

Qiyuan Zhang, Fuyuan Lyu, Zexu Sun, Lei Wang, Weixu Zhang, Wenyue Hua, Haolun Wu, Zhihan Guo, Yufei Wang, Niklas Muennighoff, Irwin King, Xue Liu, and Chen Ma. 2025. A survey on test-time scaling in large language models: What,
how, where, and how well? Preprint, arXiv:2503.24235.

Han Zhao, Haotian Wang, Yiping Peng, Sitong Zhao, Xiaoyu Tian, Shuaiting Chen, Yunjie Ji, and Xiangang Li. 2025. 1.4 million open-source distilled reasoning dataset to empower large language model training. Preprint, arXiv:2503.19633.

A Prompt Template

A.1 Prompt Template for Supervised Fine-Tuning

Given the following problem, solve it step by step.
QUESTION: {question}
<think> {thought_process} </think>
{Final Answer}

A.2 Prompt Template for Obtaining a High-Quality Corpus

Compress the given reasoning steps to short expressions, such that you (Deepseek) can understand the reasoning and reconstruct it as close as possible to the original. Unlike the usual text compression, I need you to comply with the 5 conditions below:
1. You can ONLY remove unimportant words.
2. Do not reorder the original words.
3. Do not change the original words.
4. Do not use abbreviations or emojis.
5. Do not add new words or symbols.
Compress the origin aggressively by removing words only. Compress the origin as short as you can, while retaining as much information as possible. If you understand, please compress the following reasoning steps:
{reasoning_steps}
The compressed reasoning steps are:

B Additional Experimental Results

Below we present the results of different compression methods on the Qwen2.5-7B-Instruct and Llama-3.1-8B-Instruct models.

Table 4: Experimental results of various compression methods on Qwen2.5-7B-Instruct, showing accuracy, average reasoning CoT tokens, and compression ratio (actual ratio).
| Methods | Ratio (Actual) | MATH500 Acc.↑ | MATH500 Tokens↓ | AIME24 Acc.↑ | AIME24 Tokens↓ | GPQA Diamond Acc.↑ | GPQA Diamond Tokens↓ |
|---|---|---|---|---|---|---|---|
| Original | 1.0 | 82.4 | 7244 | 20 | 24396 | 37.8 | 17038 |
| LLMLingua | 0.9 (0.88) | 72.6 | 6804 | 13.3 | 23903 | 39.8 | 14470 |
| | 0.8 (0.80) | 58.6 | 2969 | 6.7 | 4194 | 34.3 | 4421 |
| | 0.7 (0.71) | 57.2 | 2542 | 6.7 | 3692 | 32.8 | 3236 |
| | 0.6 (0.62) | 55.0 | 2178 | 3.3 | 3084 | 33.3 | 3203 |
| | 0.5 (0.50) | 51.4 | 2226 | 3.3 | 3603 | 30.8 | 2462 |
| TokenSkip | 0.9 (0.88) | 78.6 | 6997 | 23.3 | 24094 | 38.8 | 17263 |
| | 0.8 (0.80) | 72.8 | 8172 | 10.0 | 25223 | 39.3 | 18365 |
| | 0.7 (0.71) | 64.2 | 9984 | 6.6 | 27946 | 32.3 | 21219 |
| | 0.6 (0.62) | 54.6 | 11496 | 3.3 | 28802 | 26.2 | 21371 |
| | 0.5 (0.50) | 37.4 | 13595 | 0 | 29470 | 31.3 | 21012 |
| CTS | 0.9 (0.87) | 82.8 | 6497 | 20 | 24769 | 43.4 | 17272 |
| | 0.8 (0.81) | 81.2 | 6886 | 23.3 | 27006 | 39.3 | 17961 |
| | 0.7 (0.74) | 78.0 | 5109 | 13.3 | 15929 | 42.4 | 13937 |
| | 0.6 (0.66) | 70.8 | 2198 | 10.0 | 3550 | 32.3 | 3055 |
| | 0.5 (0.58) | 70.6 | 2039 | 6.7 | 2993 | 32.8 | 3187 |

Table 5: Experimental results of various compression methods on Llama-3.1-8B-Instruct, showing accuracy, average reasoning CoT tokens, and compression ratio (actual ratio).

| Methods | Ratio (Actual) | MATH500 Acc.↑ | MATH500 Tokens↓ | AIME24 Acc.↑ | AIME24 Tokens↓ | GPQA Diamond Acc.↑ | GPQA Diamond Tokens↓ |
|---|---|---|---|---|---|---|---|
| Original | 1.0 | 65.0 | | | | | |
| TokenSkip | 0.9 (0.88) | 56.4 | 11985 | 0 | 27882 | 35.8 | 17846 |
| | 0.8 (0.80) | 51.2 | 13145 | 3.3 | 30443 | 30.3 | 17960 |
| | 0.7 (0.71) | 44.6 | 14354 | 3.3 | 34249 | 27.7 | 19265 |
| | 0.6 (0.62) | 32.4 | 15453 | 0 | 23319 | 29.7 | 20800 |
| | 0.5 (0.50) | 23.3 | 16013 | 0 | 22318 | 26.7 | 22010 |
| CTS | 0.9 (0.87) | 60.6 | 12047 | 3.3 | 29144 | 32.3 | 18503 |
| | 0.8 (0.81) | 58.4 | 12134 | 6.7 | 24906 | 40.4 | 18981 |
| | 0.7 (0.74) | 55.0 | 9987 | 3.3 | 25933 | 32.3 | 16571 |
| | 0.6 (0.66) | 50.8 | 2808 | 0 | 3781 | 26.7 | 3492 |
| | 0.5 (0.58) | 45.5 | 2625 | 0 | 3478 | 29.2 | 3080 |
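The supervised fine-tuning template in Appendix A.1 can be instantiated programmatically. A minimal sketch, with the field names (`question`, `thought_process`, `final_answer`) chosen here for illustration rather than taken from the paper's codebase:

```python
# Adapted from the Appendix A.1 template; "{Final Answer}" is renamed to a
# valid Python placeholder for str.format.
SFT_TEMPLATE = (
    "Given the following problem, solve it step by step.\n"
    "QUESTION: {question}\n"
    "<think> {thought_process} </think>\n"
    "{final_answer}"
)

def build_sft_example(question, thought_process, final_answer):
    """Fill the template with one training instance: the problem, the
    (compressed) thinking tokens, and the final answer."""
    return SFT_TEMPLATE.format(
        question=question,
        thought_process=thought_process,
        final_answer=final_answer,
    )

example = build_sft_example("What is 2+3?", "2 plus 3 equals 5", "5")
print(example.splitlines()[1])  # QUESTION: What is 2+3?
```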
Source: https://arxiv.org/abs/2505.17829v1

Stepwise Reasoning Checkpoint Analysis: A Test-Time Scaling Method to Enhance LLMs' Reasoning

Zezhong Wang1*, Xingshan Zeng2†, Weiwen Liu3, Yufei Wang2, Liangyou Li2, Yasheng Wang2, Lifeng Shang2, Xin Jiang2, Qun Liu2, Kam-Fai Wong1
1The Chinese University of Hong Kong, 2Huawei Noah's Ark Lab, 3Shanghai Jiao Tong University
zzwang@se.cuhk.edu.hk, zeng.xingshan@huawei.com

Abstract

Mathematical reasoning through Chain-of-Thought (CoT) has emerged as a powerful capability of Large Language Models (LLMs), which can be further enhanced through Test-Time Scaling (TTS) methods like Beam Search and DVTS. However, these methods, despite improving accuracy by allocating more computational resources during inference, often suffer from path homogenization and inefficient use of intermediate results. To address these limitations, we propose Stepwise Reasoning Checkpoint Analysis (SRCA), a framework that introduces checkpoints between reasoning steps. It incorporates two key strategies: (1) Answer-Clustered Search, which groups reasoning paths by their intermediate checkpoint answers to maintain diversity while ensuring quality, and (2) Checkpoint Candidate Augmentation, which leverages all intermediate answers for final decision-making. Our approach effectively reduces path homogenization and creates a fault-tolerant mechanism by utilizing high-quality intermediate results. Experimental results show that SRCA improves reasoning accuracy compared to existing TTS methods across various mathematical datasets.

1 Introduction

Large Language Models (LLMs) have demonstrated mathematical reasoning capabilities through Chain-of-Thought (CoT) (Wei et al., 2022).
Recent studies indicate that Test Time Scaling (TTS), which expands test-time computing resources to allocate more reasoning budget through methods such as Beam Search (Snell et al., 2024) and Diverse Verifier Tree Search (DVTS) (Beeching et al., 2024), can significantly improve accuracy in mathematical reasoning tasks (Ji et al., 2025; Zhao et al., 2024; Chen et al., 2025a). These methods allow LLMs to sample multiple candidates at each reasoning step and score them using a process reward model (PRM) (Xi et al., 2024; Wu et al., 2024a; Wang et al., 2024b; Zhang et al., 2025a). According to their strategies, they select high-scoring steps to continue reasoning, thus overcoming the limitations of single-path reasoning.

*Work done during internship at Huawei Noah's Ark Lab.
†Corresponding author.

However, current methods face two key challenges in practice. First, maintaining diversity in the sampled reasoning paths is both crucial and arduous (Misaki et al., 2025; Li et al., 2023). Even though the model generates multiple candidate paths, the chosen ones usually follow similar reasoning directions. This happens because the reward mechanism favors local optimal solutions, causing the search process to converge too early and fail to explore diverse reasoning patterns (Hooper et al., 2025; Zeng et al., 2025). Second, existing methods underutilize intermediate reasoning results: many intermediate branches are discarded during the search, and only a few complete paths are used in the final decision, leading to a waste of computational resources (Wang et al., 2024c; Zhang et al., 2025b).

To address these issues, we propose Stepwise Reasoning Checkpoint Analysis (SRCA). We introduce reasoning checkpoints as a foundational technique and propose a searching method and a decision-enhancement strategy based on it. We inject "checkpoints" after
each reasoning step. Specifically, once a step is completed, we temporarily interrupt the reasoning process and append the prompt "So the answer is " to the current reasoning steps, compelling the model to generate an intermediate prediction rather than continuing its reasoning process, as illustrated in the upper right corner of Figure 1. Using the intermediate answers collected at these checkpoints, we further propose an Answer-Clustered Search strategy. We group multiple reasoning steps sampled at the current checkpoint according to their detected answers, and retain high-quality reasoning steps from each group for further extension. This approach allows us to maintain multiple potential reasoning paths leading to different answers, thus increasing the diversity of reasoning processes and mitigating the issue of path homogenization. Additionally, we introduce the Checkpoint Candidate Augmentation strategy. By collecting all intermediate answers generated at checkpoints, we expand the pool of candidate reasoning paths, allowing these intermediate results to participate in the final decision-making process. In this way, we fully utilize all high-quality intermediate results generated during reasoning, creating a fault-tolerant mechanism. Even if subsequent reasoning deviates, the retained high-quality intermediate predictions may still lead to the correct answer.

arXiv:2505.17829v1 [cs.CL] 23 May 2025

The contributions of this work can be summarized as follows:

• We introduce the concept of reasoning checkpoints, providing a new methodology for analyzing and improving LLM reasoning processes during test time.
• Based on this concept, we develop SRCA, a framework that effectively addresses both path diversity and computational efficiency challenges.
• We conduct extensive experiments that demonstrate the superiority of our approach and provide valuable insights for future research in Test-Time Scaling.
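The checkpoint operation described above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: `generate`, `toy_generate`, and `inject_checkpoint` are names we introduce here, and the rollback is modeled simply by never appending the probe to the running reasoning string.

```python
# Minimal sketch of checkpoint injection (illustrative only).
# `generate` is a stand-in for an LLM decoding call.

CKPT_PROMPT = "So, the answer is "

def inject_checkpoint(reasoning_so_far: str, generate) -> str:
    """Probe the model for an intermediate answer without disturbing the
    ongoing reasoning: append the fixed prompt, read a short completion,
    and discard the probe afterwards (the rollback, in the paper's terms)."""
    probe = reasoning_so_far + CKPT_PROMPT
    answer = generate(probe)      # intermediate prediction only
    return answer.strip()         # reasoning_so_far itself is untouched

# Toy stub standing in for the LLM: it "answers" 42 whenever probed.
def toy_generate(prompt: str) -> str:
    if prompt.endswith(CKPT_PROMPT):
        return "42"
    return "### Step 2: ..."

steps = "### Step 1: Let n be the first integer."
print(inject_checkpoint(steps, toy_generate))  # -> 42
```

In the real system the rollback additionally preserves the model's KV cache, so normal decoding can resume at the checkpoint position without recomputation.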
2 Related Works

As enthusiasm for scaling pre-training computation wanes, Test-Time Scaling (TTS) has emerged as a key research focus (Wang et al., 2024a; Wu et al., 2024b; Chen et al., 2025a). TTS allocates additional computation during inference to improve performance, significantly enhancing LLMs' problem-solving capabilities across specialized and general tasks. Some TTS approaches use training to encourage LLMs to generate more extensive outputs for deeper reasoning (Guan et al., 2025; Xi et al., 2024). These methods create synthetic data, including long chain-of-thought (Chen et al., 2025b; Xiang et al., 2025) and reflection-based examples (Bi et al., 2025; Zhang et al., 2024; Yu et al., 2024), to fine-tune LLMs, shifting their behavior from rapid responses to more deliberate reasoning.

Another category is training-free tree search, which forms the primary focus of this work (Luo et al., 2024; Wan et al., 2024; Guan et al., 2024). These methods dynamically guide the LLM's reasoning process using external verifiers or PRMs (Jiang et al., 2024; Uesato et al., 2022; Setlur et al., 2024). Snell et al. (2024) introduced Beam Search to explore the reasoning space, where PRM evaluates each reasoning step and maintains a fixed number of promising paths based on the beam width. Building upon this foundation, subsequent research (Beeching et al., 2024) proposed Diverse Verifier Tree Search (DVTS), which offers a notable improvement (Liu et al., 2025). Instead of maintaining a
single search beam, DVTS operates multiple search trees simultaneously, selecting and expanding the most promising reasoning path within each tree. Tree search algorithms, however, face two crucial challenges: the diversity problem and the utilization problem.

The diversity problem arises when PRMs inadvertently suppress the LLM's sampling diversity (Chen et al., 2025b; Zheng et al., 2024). This occurs because only high-scoring paths are retained, and these paths often share similar problem-solving approaches. This issue is further exacerbated by the inherent biases in the imperfect PRMs or verifier (He et al., 2025; Zheng et al., 2024).

The utilization problem arises because tree search algorithms explore numerous paths, but typically only one contributes to the final result. This leads to many branches and intermediate processes being discarded, with utilization efficiency decreasing as search scale increases. The challenge is to efficiently integrate generated reasoning overhead (Wang et al., 2024c; Zhang et al., 2025b; Sui et al., 2025). This issue has evolved into the "overthinking problem," where LLMs waste resources on simple problems, potentially leading to performance degradation through error accumulation (Li et al., 2024; Wu et al., 2025; Huang et al., 2025; Gan et al., 2025; Aggarwal and Welleck, 2025).

To address these dual challenges, we propose two novel strategies. First, the Answer-Clustered Search algorithm is designed to enhance the diversity of tree search processes. Second, we introduce the Checkpoint Candidate Augmentation strategy to preserve high-quality intermediate reasoning processes for reuse in final answer decision-making, thereby addressing the low utilization problem inherent in tree search methodologies.
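As a concrete illustration of the answer-clustered selection idea, the following sketch groups candidate paths by their checkpoint answers, ranks clusters by aggregate score, and picks survivors round-robin across clusters. It is a simplified mock-up, not the paper's code: plain floats stand in for PRM scores, and the function name is ours.

```python
from collections import defaultdict

def answer_clustered_select(paths, answers, scores, m):
    """Cluster candidate paths by their checkpoint answer, rank clusters by
    total score, then pick paths round-robin across clusters until m survive.
    A sketch of the selection idea only; PRM scores are plain floats here."""
    clusters = defaultdict(list)
    for idx, ans in enumerate(answers):
        clusters[ans].append(idx)
    # Order clusters by aggregate score, and members by individual score.
    ordered = sorted(clusters.values(),
                     key=lambda c: sum(scores[i] for i in c), reverse=True)
    for c in ordered:
        c.sort(key=lambda i: scores[i], reverse=True)
    # Round-robin: take the best remaining path from each cluster in turn.
    selected = []
    while len(selected) < m and any(ordered):
        for c in ordered:
            if c:
                selected.append(paths[c.pop(0)])
                if len(selected) == m:
                    break
    return selected

paths   = ["p1", "p2", "p3", "p4", "p5", "p6"]
answers = ["4",  "4",  "6",  "6",  "4",  "5"]
scores  = [0.9,  0.8,  0.7,  0.6,  0.7,  0.6]
print(answer_clustered_select(paths, answers, scores, m=2))  # -> ['p1', 'p3']
```

With six candidates and two survivors, selection keeps the top path from each of the two strongest answer clusters rather than two near-duplicates from a single cluster, which is the diversity-preserving behavior motivated above.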
3 Methodology

In this section, we introduce three key techniques, Checkpoint Injection, which serves as the atomic operation in SRCA, Answer-Clustered Search (ACS), and Checkpoint Candidate Augmentation (CCA), to improve the LLM's reasoning path searching.

Figure 1: Overview of SRCA. Top-right: The checkpoint operation, which serves as the atomic operation in SRCA. Left: Illustration of ACS strategy at step i, where N = 6 and M = 2. Retrieved reasoning steps are clustered into three groups based on their checkpoint answers (indicated by different shades), with the highest-scoring nodes selected from clusters with answers 6 and 4 for subsequent reasoning. Bottom-right: CCA strategy, where paths 3 and 4 represent high-quality intermediate reasoning steps collected by CCA.

3.1 Checkpoint Injection

We introduce a dynamic intervention mechanism to analyze the model's reasoning trajectory through checkpoint injection. As shown in the top-right part of Figure 1, the core procedure begins by monitoring the model's output stream for predefined step delimiter tokens (e.g., "### Step"), which indicate the completion of a logical reasoning unit. Upon detecting such tokens, we inject a checkpoint to temporarily suspend autoregressive
generation. At each checkpoint position, a fixed prompt template $x_{\text{ckpt}} = $ "So, the answer is " is inserted to force the model to generate an intermediate prediction based solely on the accumulated context up to that step. The model's immediate response to $x_{\text{ckpt}}$ is recorded as a checkpoint answer $a_t$ at step $t$, after which the LLM rolls back the generation state to the original checkpoint position. This rollback operation ensures the elimination of checkpoint influence from the ongoing reasoning process while preserving the model's KV cache for continued generation (Wang et al., 2025). The checkpoint answers subsequently serve as crucial criteria for path similarity assessment and grouping in the ACS strategy, while also being collected by the CCA method to enrich the final answer candidate pool.

3.2 Answer-Clustered Search

Similar to Beam Search (Snell et al., 2024), the Answer-Clustered Search (ACS) evaluates and retains a select few of the multiple reasoning steps sampled by the LLM for further reasoning. It enhances reasoning diversity through stepwise answer-guided clustering. In the following part, we will detail the four key steps of ACS using the running case shown on the left side of Figure 1.

1. Sampling. At each reasoning step $t$, we first determine the branching factor: for the initial step ($t = 1$), the LLM samples $N$ candidate reasoning paths; for subsequent steps, each of the $M$ surviving beams generates $N/M$ sub-paths, maintaining a total budget of $N$ paths. This set of paths is defined as $\{p^{(j)}_t\}_{j=1}^{N}$. Figure 1 illustrates the case where $N = 6$ and $M = 2$.

2. Clustering. All $N$ paths undergo Checkpoint Injection at step $t$, yielding checkpoint answers $\{a^{(j)}_t\}_{j=1}^{N}$. These paths are clustered into groups $G = \{C_1, C_2, \ldots, C_k\}$ where $C_i = \{j \mid a^{(j)}_t = a_c\}$, forming answer-homogeneous clusters. The grouping results are marked with shading in Figure 1.

3. Scoring.
A PRM assigns scores $s_j$ to each path $p^{(j)}_t$, with cluster $C_i$'s aggregate score computed as $S_i = \sum_{j \in C_i} s_j$. This approach is similar to a stepwise Weighted Best-of-N implementation.

4. Selection. Clusters are sorted by $S_i$ in descending order, while paths within each cluster are ranked by $s_j$. Then we sequentially select top-ranked paths across clusters via round-robin sampling: starting from the highest cluster, we pick the top path from each cluster, cycling back when reaching the last cluster until $M$ paths are selected. This resource-aware branching prioritizes high-quality clusters while maintaining inter-cluster diversity. The cyclic selection mechanism prevents dominance by single-answer clusters and enables early identification of divergent reasoning trajectories. We provide a more rigorous process in Algorithm 1.

Algorithm 1: Answer-Clustered Search
Input: sampling budget N, beam width M, candidate set {p_t^(j)}_{j=1..N}
Output: selected paths set P
// Checkpoint Injection & Scoring
for j ← 1 to N do
    a_t^(j) ← SRCA(p_t^(j))
    s_t^(j) ← PRM(p_t^(j))
end
// Clustering & Sorting
G ← {C_1, ..., C_k} where C_i = {j | a_t^(j) = a_i}
for C_i ∈ G do
    S_i ← Σ_{j ∈ C_i} s_j
end
sort G so that S_i ≥ S_{i+1}
// Round-robin Selection
P ← ∅
while |P| < M do
    for C_i ∈ G do
        j* ← argmax_{j ∈ C_i} s_j
        P ← P ∪ {p^(j*)}
        C_i ← C_i \ {p^(j*)}
        if |P| = M then break
    end
end

3.3 Checkpoint Candidate Augmentation

The proposed Checkpoint Candidate Augmentation (CCA) aims to
maximize the use of reasoning resources and enhance the diversity of candidate answers by integrating the checkpoint answers from intermediate reasoning steps. Traditional Beam Search methods retain only a fixed number, i.e., $M$, of complete reasoning paths as the final candidate set, which leads to the discard of many unfinished intermediate branches. To address this issue, our method continuously collects intermediate answers generated at all checkpoints during the ACS and reconstructs the corresponding truncated reasoning paths into valid candidate paths. Specifically, for each intermediate answer $a^{(j)}_t$ produced at a checkpoint, we concatenate it with the current reasoning path $p^{(j)}_t$ to form a candidate path with a complete logical chain:

$$\hat{p}^{(j)}_t = p^{(j)}_t \oplus x_{\text{ckpt}} \oplus a^{(j)}_t \quad (1)$$

where $\oplus$ represents string concatenation.

All candidate paths, including the original complete paths and the newly added intermediate paths, are uniformly scored by the PRM, and the path with the highest score is selected as the model output. This method offers two main advantages: first, by incorporating prediction results from the intermediate inference process into the candidate set, it significantly improves the utilization of computational resources already spent; second, by retaining intermediate answers at various stages, it establishes an effective fault tolerance mechanism. Even if the LLM makes mistakes in subsequent steps, it may still arrive at the correct answer through the retained high-quality intermediate predictions. On the other hand, CCA can effectively mitigate issues such as overthinking, increasingly erroneous reasoning, and repetitive outputs in LLMs.

4 Experiments

We conducted comparative experiments on four mathematical reasoning datasets and against four Test-Time Scaling baselines.

4.1 Settings

In the experiments, we used four datasets: GSM8K, MATH500, AIME, and OlympiadBench.
Two different-sized LLMs were tested in total, specifically Llama-3.2-1B-Instruct (MetaAI, 2024) and Qwen3-0.6B (Yang et al., 2025). For the PRM, we adopted the model fine-tuned by DeepSeek, Llama3.1-8B-PRM-Deepseek-Data (Xiong et al., 2024), and Skywork-o1-Open-PRM-Qwen-2.5-7B released by Skywork (o1 Team, 2024).

We compared our method against several TTS algorithms, including Greedy Search, Self-Consistency (Wang et al., 2023), Best-of-N (BoN) (Brown et al., 2024), Weighted BoN (Brown et al., 2024), Beam Search (Snell et al., 2024), and Diverse Verifier Tree Search (DVTS) (Beeching et al., 2024). Among these methods, Beam Search maintains N paths and selects M highest-scoring ones for expansion, with each generating N/M sub-paths. DVTS extends this by initializing M subtrees and sampling N/M paths per step within each subtree, enhancing path diversity through subtree isolation. Other baseline methods are standard approaches in the field; their detailed descriptions can be found in Appendix A.

| Models & TTS | GSM8K | MATH500 | AIME | OlympiadBench |
|---|---|---|---|---|
| Independent Sampling | | | | |
| Llama-3.1-70B-Instruct (Grattafiori et al., 2024) | 95.10 | 65.00 | 36.66 | 27.70 |
| Llama-3.2-1B-Instruct (MetaAI, 2024) | 43.75 | 24.40 | 3.22 | 4.59 |
| w. Self-Consistency (Wang et al., 2023) | 57.70 | 39.80 | 8.57 | 11.70 |
| TTS: Llama-3.2-1B-Instruct w. Llama3.1-8B-PRM-Deepseek-Data | | | | |
| BoN (Brown et al., 2024) | 80.36 | 46.20 | 11.04 | 13.48 |
| Weighted BoN (Brown et al., 2024) | 65.50 | 46.40 | 10.50 | 13.63 |
| Beam Search (Snell et al., 2024) | 84.84 | 52.00 | 19.07 | 18.07 |
| DVTS (Beeching et al., 2024) | 83.47 | 52.60 | 20.68 | 19.40 |
| SRCA (Ours) | 85.60 | 53.40 | 24.97 | 20.74 |
| TTS: Llama-3.2-1B-Instruct w. Skywork-o1-Open-PRM-Qwen-2.5-7B | | | | |
| BoN (Brown et al., 2024) | 80.74 | 55.20 | 25.08 | 18.22 |
| Weighted BoN (Brown et al., 2024) | 76.72 | 52.60 | 28.08 | 18.67 |
| Beam Search (Snell et al., 2024) | 84.99 | 63.20 | 26.82 | 23.89 |
| DVTS (Beeching et al., 2024) | 84.00 | 64.80 | 29.03 | 25.82 |
| SRCA (Ours) | 85.97 | 65.20 | 39.71 | 27.75 |

Table 1: Comparison of TTS results. The upper section shows the greedy search results for 1B and 70B models, and we additionally report the self-consistency performance of the 1B model with N = 128. The lower section shows results from the 1B model combined with various TTS methods and two PRMs, also with N = 128. Numbers indicate accuracy (%). Best overall performance on each dataset is marked in bold, while best performance within each group is underlined.

For all the sampling-based methods, we set temperature = 0.8 and top_p = 0.9. We use N = 16 and N = 64 for sampling times to assess the effect of sampling scale on reasoning performance. For the methods involving path selection, such as Beam Search, DVTS, and SRCA, the beam width M is fixed at 4, that is, the 4 candidate paths with the highest scores are retained at each reasoning step for subsequent expansion.

Since the PRM can collect the step-level scores of the complete reasoning path to form a score sequence, there are various ways to determine the final score of the path, such as taking the sum, accumulation, or minimum value of the sequence scores, or the score of the last step. In this experiment, the score of the last step in the path is used as the path score. The effects of these configurations on experimental results are discussed in Appendix B.2.

4.2 Results

4.2.1 Scaling with SRCA: Small Models Can Outperform Larger Ones

Table 1 shows the performance of Llama-3.2-1B-Instruct with various TTS methods on four mathematical datasets (N = 128, M = 4). We also
include results from Llama-3.1-70B-Instruct for comparison.

SRCA consistently outperforms other TTS methods across all datasets, regardless of the PRM used. With DeepSeek PRM, SRCA achieves approximately 10% absolute improvement over the BoN baseline. The improvement is particularly notable on AIME, where SRCA shows a 43% relative performance gain over DVTS. Remarkably, when using Skywork PRM, our 1B model with SRCA outperforms the 70B model on MATH500, AIME, and OlympiadBench, only falling behind on the simpler GSM8K dataset. This demonstrates SRCA's effectiveness in enabling smaller models to compete with larger ones. On the other hand, the choice of PRM also impacts performance, with Skywork PRM generally yielding better results than DeepSeek PRM across all TTS methods. This suggests that future advances in PRM development could lead to further performance improvements in TTS methods. More results of SRCA on Qwen3-0.6B are presented in Appendix B.3.

4.2.2 Expanding Sampling Times: SRCA Has Higher Efficiency

We test various TTS methods with sampling times N = 16, 32, 64, and 128. The results are shown in Figure 2. SRCA demonstrates superior efficiency by requiring fewer samples to achieve comparable accuracy. With DeepSeek PRM
on MATH500, SRCA achieves 51.2% accuracy at N = 16, outperforming DVTS's 49.8% at N = 64. This advantage is more pronounced on AIME, where SRCA's accuracy at N = 16 exceeds all TTS methods' performance at N = 128. Using Skywork PRM further amplifies this gap: SRCA reaches 32.48% at N = 16, while the best baseline (DVTS) only achieves 29.03% at N = 128. Performance improvements show diminishing returns as N increases, with N = 16→32 gains being larger than N = 64→128. This pattern holds across different PRMs, suggesting convergence to an upper bound. Further analysis regarding the computational overhead and efficiency of SRCA is provided in Appendix B.1.

Figure 2: Performance trends of TTS methods with DeepSeek PRM (top row) and Skywork PRM (bottom row) as the sampling number N increases from 16 to 128. In the bottom row, we additionally mark the performance of the 70B model with a green line for comparison.

Figure 3: Pass@K trends of the 1B model with different TTS methods and DeepSeek PRM as the sampling number increases from 16 to 128. Note that for Pass@K calculation, Self-Consistency, BoN, and Weighted BoN degrade to Independent Sampling.

5 Analysis

5.1 Pass Rate Test: SRCA Improves Answer Discovery

The ability to sample at least one correct reasoning path is crucial for policy models, as it determines the effectiveness of PRM guidance. If a policy model does not sample any correct paths, even a perfect PRM cannot select the correct one. We conducted Pass@k tests on 4 datasets comparing different TTS methods, including SRCA without CCA to understand each component's contribution. Results are shown in Figure 3. SRCA demonstrates superior pass rates across datasets. Ablation studies show that CCA contributes approximately 10% improvement through answer pool expansion, while ACS alone still outperforms DVTS and Beam Search by 3% on average.
Independent sampling achieves higher Pass@k on simpler datasets due to its unconstrained randomness generating more diverse solutions. However, for challenging datasets like AIME, this approach performs poorly, as random exploration is ineffective when precise reasoning is required. On the other hand, the TTS method produces a candidate set of better quality, resulting in a higher pass rate.

5.2 Early Stopping: Efficient Computing with No Performance Loss

Recent research shows that LLMs often suffer from overthinking, conducting unnecessary analysis that wastes computational resources and can even lead to incorrect answers (Li et al., 2024; Wu et al., 2025; Sui et al., 2025). We implement early stopping in SRCA by introducing a threshold τ: reasoning stops when a checkpoint answer's score exceeds τ. We tested various τ values (0.5–1.0), measuring both accuracy and average reasoning steps, with τ = 1 (no early stopping) as the baseline. Results are shown in Figure 4. This experiment uses Skywork PRM with N = 16 samples.

Figure 4: The average accuracy and search depth of SRCA with early stopping strategies under different values of τ. The left y-axis represents the search depth, while the right y-axis represents the accuracy (%). The dashed line in the figure annotates the reduction rate of tree depth, i.e., the number of reasoning steps, when τ = 0.95. The pentagon represents the best performance.

Figure 5: Ablation study results on four datasets, grouped by different values of N. For the bars corresponding to methods incorporating CCA, the Checkpoint Answer Rate (CAR) is additionally marked with slashed shading. The average CAR for each dataset is indicated in the top-left corner of each subplot.

Low thresholds like 0.5 hurt performance, causing a 14% accuracy drop on MATH500. While this reduces reasoning steps, the performance trade-off is unacceptable. Higher thresholds require more steps but yield better accuracy, as expected. At τ = 0.95, early stopping reduces reasoning steps by 27% across datasets while only losing 0.58% accuracy. Notably, it sometimes improves accuracy: both MATH500 and OlympiadBench show better results at τ = 0.95 than without early stopping. This confirms that overthinking can harm performance, due to either model reasoning limitations or PRM imperfections (Zheng et al., 2024; He et al., 2025). Early stopping is particularly effective for simpler tasks: GSM8K shows a 43% reduction in reasoning steps at τ = 0.95, compared to 19% for the more challenging AIME problems.

5.3 Ablation Study

SRCA combines ACS for diverse path searching and CCA for expanding the answer candidate pool. We conducted ablation studies by: (1) testing ACS alone without CCA, and (2) combining CCA with baseline methods (Beam Search and DVTS). Results are shown in Figure 5. We also track the Checkpoint Answer Rate (CAR), the percentage of final answers selected from checkpoints, which is marked with slashes on the bar chart.

After removing the CCA strategy, SRCA degrades to the ACS strategy, resulting in a notable performance decline. However, on relatively simple datasets like GSM8K and MATH500, increasing the sampling size (e.g., to N = 128) minimizes this performance gap to less than 1%.
Notably, ACS consistently outperforms both Beam Search and DVTS baselines across most configurations, demonstrating its robust effectiveness.

The integration of CCA with baseline methods yields substantial improvements in accuracy (3-4%). Analysis shows that 19.07% of final answers originate from CCA's expanded candidate pool, underscoring its important contribution to solution generation. The impact varies by problem difficulty: CAR is 11.63% for GSM8K but rises to 25.04% for OlympiadBench, indicating that CCA's influence is more pronounced in solving complex problems.

| Reasoning | Step Score | Checkpoint Answer | Final Score |
|---|---|---|---|
| Question: What is the smallest positive perfect cube that can be written as the sum of three consecutive integers? Answer: | – | – | – |
| ## Step 1: Let's denote the first of the three consecutive integers as n. We know that the next two consecutive integers are n+1 and n+2. | 0.8774 | $\boxed{9}$ | 0.0313 |
| ## Step 2: The sum of these three consecutive integers is n + (n+1) + (n+2). This simplifies to 3n + 3. | 0.8676 | $\boxed{3n + 3}$ | 0.0459 |
| ## Step 3: We are looking for the smallest positive perfect cube that can be expressed in this form. Let's rewrite the expression 3n + 3 as 3(n + 1). | 0.8359 | $\boxed{3(1)^3 + 3(2)^3 = 3(1 + 2^3) = 3(1 + 8) = 3(9) = 27}$ | 0.6762 |
| ## Step 4: A perfect cube is the cube of an integer. So, we need to find the smallest value of n for which 3(n + 1) is a perfect cube. | 0.7822 | $\boxed{27}$ | 0.6837 |
| ## Step 5: To find the smallest perfect cube, we can start by checking small values of n. Let's start with n = 1: 3(1 + 1) = 3(2) = 6, which is not a perfect cube. | 0.6674 | $\boxed{27}$ | 0.7192 |
| ## Step 6: Now, let's try n = 2: 3(2 + 1) = 3(3) = 9, which is a perfect cube (3^2). | 0.3876 | $\boxed{9}$ | 0.0054 |
| ## Step 7: Since we found a perfect cube in the first step, we don't need to check further values of n. | 0.0764 | $\boxed{9}$ | 0.1356 |
| The final answer is: $\boxed{9}$ | – | – | 0.0212 |

Table 2: A real running case of SRCA during reasoning. The first column shows the question and the model's reasoning process. The Step Score represents PRM's evaluation of the current step, while the Final Score indicates the PRM score when using the current step and its checkpoint answer as the reasoning endpoint.

5.4 Case Study

Table 2 shows a real running case of SRCA during reasoning. Since the complete search tree is too large, we only showcase how SRCA uses Checkpoint Answers to back up correct answers from incorrect branches. The model's reasoning process can be explained manually in three phases:

• Early Stage (Steps 1-2): During initial reasoning, the model produces either incorrect answers or incomplete expressions instead of proper numerical values, indicating insufficient reasoning depth.
• Answer Formation Stage (Steps 3-5): Starting from Step 3, the model attempts brief reasoning in the answer box and first obtains the correct answer 27. Although reasoning in the answer box is not ideal behavior, the model successfully reaches the correct answer this way. This correct answer is maintained until Step 5.
• Error Stage (Step 6): A critical reasoning error occurs when the model incorrectly identifies 9 as a perfect cube instead of a perfect square. This error leads to an incorrect checkpoint answer that affects the reasoning until the end.

When all checkpoint answers are evaluated as reasoning endpoints by PRM, Step 5 receives the highest score of 0.7192, exceeding the natural ending's score of 0.0212. If no other branch has a score higher than 0.7192, this score will be selected as the final answer, effectively correcting the wrong answer from the natural reasoning endpoint to the correct one.

6 Conclusion

In this paper, we introduced SRCA, a novel framework that enhances LLM reasoning by introducing checkpoints between reasoning steps. Our Answer-Clustered Search strategy effectively maintains path diversity while ensuring reasoning quality, and the Checkpoint Candidate Augmentation approach efficiently utilizes intermediate predictions for final decision-making. Experimental results demonstrate that SRCA outperforms baseline methods like Beam Search and DVTS across various mathematical reasoning datasets. The success of SRCA suggests that leveraging intermediate checkpoints is a promising direction for improving LLM reasoning capabilities.

7 Limitations

SRCA faces
two main limitations. First, while it requires checkpoints between reasoning steps, defining precise step boundaries is challenging. Although Llama-series models exhibit relatively clear step demarcations with characteristic delimiters, others, particularly the emerging "slow thinking" models, often generate outputs without distinct structural patterns and sometimes in a more conversational style. Second, the reasoning steps augmented by the CCA strategy are often incomplete. While models can still generate correct answers based on these partial reasoning paths, this incompleteness reduces the interpretability of the reasoning process. Compared to naturally completed reasoning chains, these occasionally truncated paths represent a shortcoming in terms of explanation quality and transparency.

References

Pranjal Aggarwal and Sean Welleck. 2025. L1: Controlling how long a reasoning model thinks with reinforcement learning. Preprint, arXiv:2503.04697.

Edward Beeching, Lewis Tunstall, and Sasha Rush. 2024. Scaling test-time compute with open models.

Zhenni Bi, Kai Han, Chuanjian Liu, Yehui Tang, and Yunhe Wang. 2025. Forest-of-thought: Scaling test-time compute for enhancing llm reasoning. Preprint, arXiv:2412.09078.

Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher Ré, and Azalia Mirhoseini. 2024. Large language monkeys: Scaling inference compute with repeated sampling. Preprint, arXiv:2407.21787.

Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, and Jingren Zhou. 2025a. Simple and provable scaling laws for the test-time compute of large language models. Preprint, arXiv:2411.19477.

Zhipeng Chen, Yingqian Min, Beichen Zhang, Jie Chen, Jinhao Jiang, Daixuan Cheng, Wayne Xin Zhao, Zheng Liu, Xu Miao, Yang Lu, Lei Fang, Zhongyuan Wang, and Ji-Rong Wen. 2025b. An empirical study on eliciting and improving r1-like reasoning models. Preprint, arXiv:2503.04548.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. Preprint, arXiv:2110.14168.

Zeyu Gan, Yun Liao, and Yong Liu. 2025. Rethinking external slow-thinking: From snowball errors to probability of correct reasoning. Preprint, arXiv:2501.15602.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783.

Xinyan Guan, Yanjiang Liu, Xinyu Lu, Boxi Cao, Ben He, Xianpei Han, Le Sun, Jie Lou, Bowen Yu, Yaojie Lu, and Hongyu Lin. 2024. Search, verify and feedback: Towards next generation post-training paradigm of foundation models via verifier engineering. Preprint, arXiv:2411.11504.

Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, and Mao Yang. 2025. rstar-math: Small llms can master math reasoning with self-evolved deep thinking. arXiv preprint arXiv:2501.04519.

Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, Jie Liu, Lei Qi, Zhiyuan Liu, and Maosong Sun. 2024. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems. In Proceedings of
the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3828–3850, Bangkok, Thailand. Association for Computational Linguistics.

Yancheng He, Shilong Li, Jiaheng Liu, Weixun Wang, Xingyuan Bu, Ge Zhang, Zhongyuan Peng, Zhaoxiang Zhang, Zhicheng Zheng, Wenbo Su, and Bo Zheng. 2025. Can large language models detect errors in long chain-of-thought reasoning? Preprint, arXiv:2502.19361.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).

Coleman Hooper, Sehoon Kim, Suhong Moon, Kerem Dilmen, Monishwaran Maheswaran, Nicholas Lee, Michael W. Mahoney, Sophia Shao, Kurt Keutzer, and Amir Gholami. 2025. Ets: Efficient tree search for inference-time scaling. Preprint, arXiv:2502.13575.

Chengsong Huang, Langlin Huang, Jixuan Leng, Jiacheng Liu, and Jiaxin Huang. 2025. Efficient test-time scaling via self-calibration. Preprint, arXiv:2503.00031.

Yixin Ji, Juntao Li, Hai Ye, Kaixin Wu, Kai Yao, Jia Xu, Linjian Mo, and Min Zhang. 2025. Test-time compute: from system-1 thinking to system-2 thinking. Preprint, arXiv:2501.02497.

Jinhao Jiang, Zhipeng Chen, Yingqian Min, Jie Chen, Xiaoxue Cheng, Jiapeng Wang, Yiru Tang, Haoxiang Sun, Jia Deng, Wayne Xin Zhao, Zheng Liu, Dong Yan, Jian Xie, Zhongyuan Wang, and Ji-Rong Wen. 2024. Enhancing llm reasoning with reward-guided tree search. Preprint, arXiv:2411.11694.

Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2023. Making language models better reasoners with step-aware verifier. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5315–5333, Toronto, Canada. Association for Computational Linguistics.
Yiwei Li, Peiwen Yuan, Shaoxiong Feng, Boyuan Pan, Xinglin Wang, Bin Sun, Heda Wang, and Kan Li. 2024. Escape sky-high cost: Early-stopping self-consistency for multi-step reasoning. Preprint, arXiv:2401.10480.

Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2024. Let's verify step by step. In The Twelfth International Conference on Learning Representations.

Runze Liu, Junqi Gao, Jian Zhao, Kaiyan Zhang, Xiu Li, Biqing Qi, Wanli Ouyang, and Bowen Zhou. 2025. Can 1b llm surpass 405b llm? rethinking compute-optimal test-time scaling. Preprint, arXiv:2502.06703.

Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, and Abhinav Rastogi. 2024. Improve mathematical reasoning in language models by automated process supervision. Preprint, arXiv:2406.06592.

MetaAI. 2024. Llama 3.2: Revolutionizing edge ai and vision with open, customizable models. https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/. Accessed 2025-03-01.

Kou Misaki, Yuichi Inoue, Yuki Imajuku, So Kuroki, Taishi Nakamura, and Takuya Akiba. 2025. Wider or deeper? scaling llm inference-time compute with adaptive branching tree search. Preprint, arXiv:2503.04412.

Skywork o1 Team. 2024. Skywork-o1 open series. https://huggingface.co/Skywork.

Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. 2024. Rewarding progress: Scaling automated process verifiers for llm reasoning. Preprint, arXiv:2410.08146.

Charlie Snell, Jaehoon Lee, Kelvin
Xu, and Aviral Kumar. 2024. Scaling llm test-time compute optimally can be more effective than scaling model parameters. Preprint, arXiv:2408.03314.

Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, and Xia Hu. 2025. Stop overthinking: A survey on efficient reasoning for large language models. Preprint, arXiv:2503.16419.

Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. 2022. Solving math word problems with process- and outcome-based feedback. Preprint, arXiv:2211.14275.

Ziyu Wan, Xidong Feng, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, and Jun Wang. 2024. Alphazero-like tree-search can guide large language model decoding and training. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org.

Jun Wang, Meng Fang, Ziyu Wan, Muning Wen, Jiachen Zhu, Anjie Liu, Ziqin Gong, Yan Song, Lei Chen, Lionel M. Ni, Linyi Yang, Ying Wen, and Weinan Zhang. 2024a. Openr: An open source framework for advanced reasoning with large language models. Preprint, arXiv:2410.09671.

Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. 2024b. Math-shepherd: Verify and reinforce LLMs step-by-step without human annotations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9426–9439, Bangkok, Thailand. Association for Computational Linguistics.

Xiyao Wang, Linfeng Song, Ye Tian, Dian Yu, Baolin Peng, Haitao Mi, Furong Huang, and Dong Yu. 2024c. Towards self-improvement of llms via mcts: Leveraging stepwise knowledge with curriculum preference learning. Preprint, arXiv:2410.06508.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023.
Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations.

Zezhong Wang, Xingshan Zeng, Weiwen Liu, Yufei Wang, Liangyou Li, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu, and Kam-Fai Wong. 2025. Chain-of-probe: Examining the necessity and accuracy of cot step-by-step. Preprint, arXiv:2406.16144.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA. Curran Associates Inc.

Jinyang Wu, Mingkuan Feng, Shuai Zhang, Feihu Che, Zengqi Wen, and Jianhua Tao. 2024a. Beyond examples: High-level automated reasoning paradigm in in-context learning via mcts. Preprint, arXiv:2411.18478.

Siwei Wu, Zhongyuan Peng, Xinrun Du, Tuney Zheng, Minghao Liu, Jialong Wu, Jiachen Ma, Yizhi Li, Jian Yang, Wangchunshu Zhou, Qunshu Lin, Junbo Zhao, Zhaoxiang Zhang, Wenhao Huang, Ge Zhang, Chenghua Lin, and J. H. Liu. 2024b. A comparative study on reasoning patterns of openai's o1 model. Preprint, arXiv:2410.13639.

Yuyang Wu, Yifei Wang, Tianqi Du, Stefanie Jegelka, and Yisen Wang. 2025. When more is less: Understanding chain-of-thought length in llms. Preprint, arXiv:2502.07266.

Zhiheng Xi, Dingwen Yang, Jixuan Huang,
Jiafu Tang, Guanyu Li, Yiwen Ding, Wei He, Boyang Hong, Shihan Do, Wenyu Zhan, Xiao Wang, Rui Zheng, Tao Ji, Xiaowei Shi, Yitao Zhai, Rongxiang Weng, Jingang Wang, Xunliang Cai, Tao Gui, and 5 others. 2024. Enhancing llm reasoning via critique models with test-time and training-time supervision. Preprint, arXiv:2411.16579.

Violet Xiang, Charlie Snell, Kanishk Gandhi, Alon Albalak, Anikait Singh, Chase Blagden, Duy Phung, Rafael Rafailov, Nathan Lile, Dakota Mahan, Louis Castricato, Jan-Philipp Franken, Nick Haber, and Chelsea Finn. 2025. Towards system 2 reasoning in llms: Learning how to think with meta chain-of-thought. Preprint, arXiv:2501.04682.

Wei Xiong, Hanning Zhang, Nan Jiang, and Tong Zhang. 2024. An implementation of generative prm. https://github.com/RLHFlow/RLHF-Reward-Modeling.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, and 41 others. 2025. Qwen3 technical report. Preprint, arXiv:2505.09388.

Xiao Yu, Baolin Peng, Vineeth Vajipey, Hao Cheng, Michel Galley, Jianfeng Gao, and Zhou Yu. 2024. Improving autonomous ai agents with reflective tree search and self-learning. In The Thirteenth International Conference on Learning Representations.

Weihao Zeng, Yuzhen Huang, Lulu Zhao, Yijun Wang, Zifei Shan, and Junxian He. 2025. B-star: Monitoring and balancing exploration and exploitation in self-taught reasoners. Preprint, arXiv:2412.17256.

Di Zhang, Xiaoshui Huang, Dongzhan Zhou, Yuqiang Li, and Wanli Ouyang. 2024. Accessing gpt-4 level mathematical olympiad solutions via monte carlo tree self-refine with llama-3 8b. Preprint, arXiv:2406.07394.

Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. 2025a. Generative verifiers: Reward modeling as next-token prediction. Preprint, arXiv:2408.15240.
Qiyuan Zhang, Fuyuan Lyu, Zexu Sun, Lei Wang, Weixu Zhang, Zhihan Guo, Yufei Wang, Irwin King, Xue Liu, and Chen Ma. 2025b. What, how, where, and how well? a survey on test-time scaling in large language models. Preprint, arXiv:2503.24235.

Yu Zhao, Huifeng Yin, Bo Zeng, Hao Wang, Tianqi Shi, Chenyang Lyu, Longyue Wang, Weihua Luo, and Kaifu Zhang. 2024. Marco-o1: Towards open reasoning models for open-ended solutions. Preprint, arXiv:2411.14405.

Chujie Zheng, Zhenru Zhang, Beichen Zhang, Runji Lin, Keming Lu, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. 2024. Processbench: Identifying process errors in mathematical reasoning. Preprint, arXiv:2412.06559.

A Experiment Settings

A.1 Datasets

The following are the four datasets used in the experiment:

• GSM8K (Cobbe et al., 2021) is an evaluation set consisting of 8,500 high-quality primary school mathematics problems. It is mainly used to assess the language comprehension and mathematical reasoning abilities of models on basic mathematical problems.

• MATH500 (Lightman et al., 2024) is a subset of the MATH dataset (Hendrycks et al., 2021) containing 500 questions. It covers seven mathematical domains and five difficulty levels. It is designed to test the performance of LLMs in solving advanced mathematical problems.

• AIME1 offers a rich collection of challenging problems from the American Invitational Mathematics Examination and contains 933 high-difficulty mathematical problems.

• OlympiadBench is an Olympiad-level bilingual multimodal scientific benchmark (He et al., 2024).
In this experiment, only the subset of English mathematical problems without images was tested, with a total of 674 questions.

A.2 Baselines

• Greedy Search: A decoding method based on the principle of local optimality. It always selects the token with the highest current probability as the output.

• Self-Consistency (Wang et al., 2023): The LLM generates N independent reasoning paths; the most frequently occurring answer is then taken as the output.

• Best-of-N (BoN) (Brown et al., 2024): Similar to self-consistency, the LLM generates N independent reasoning paths. According to the scores given by the reward model, the path with the highest score is selected as the answer.

• Weighted BoN (Brown et al., 2024): A combination of Self-Consistency and BoN. The reward model scores the N independent reasoning paths generated by the LLM. The paths are then clustered according to their answers, and the sum of the path scores within each cluster is taken as the answer's score. The answer with the highest score is selected.

• Beam Search (Snell et al., 2024): N reasoning paths are maintained at each reasoning step. According to the scores given by the PRM for the current paths, M paths are selected to continue the reasoning and expand downward. Each selected path can expand into N/M sub-paths.

• Diverse Verifier Tree Search (DVTS) (Beeching et al., 2024): DVTS is an extension of beam search. It first initializes M subtrees, and each subtree samples N/M paths at every step. Those paths are then scored by the PRM, and the path with the highest score within each subtree is selected for further reasoning. It is similar to Beam Search in that, at each step, M paths are selected from N paths for further reasoning. Because selection is confined to each subtree, some locally optimal branches are protected from early elimination, thereby enhancing path diversity.

1 www.kaggle.com/datasets/aime-problem-set-1983-2024
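To make the sampling-based selection baselines above concrete, the following is an illustrative sketch (with made-up answers and reward scores, not the code used in the experiments) of how Self-Consistency, BoN, and Weighted BoN each pick a final answer from N sampled reasoning paths:

```python
from collections import Counter, defaultdict

def self_consistency(answers):
    # Majority vote over the N final answers; path scores are ignored.
    return Counter(answers).most_common(1)[0][0]

def best_of_n(answers, scores):
    # Pick the answer on the single highest-scoring path.
    best = max(range(len(answers)), key=lambda i: scores[i])
    return answers[best]

def weighted_best_of_n(answers, scores):
    # Cluster paths by answer, sum the path scores per cluster, and
    # return the answer whose cluster has the largest total score.
    totals = defaultdict(float)
    for a, s in zip(answers, scores):
        totals[a] += s
    return max(totals, key=totals.get)

# Hypothetical run with N = 5 sampled paths.
answers = ["42", "42", "41", "42", "41"]
scores  = [0.2, 0.3, 0.9, 0.1, 0.4]
print(self_consistency(answers))            # 42  (3 of 5 paths agree)
print(best_of_n(answers, scores))           # 41  (single best score 0.9)
print(weighted_best_of_n(answers, scores))  # 41  (0.9 + 0.4 > 0.2 + 0.3 + 0.1)
```

The toy run also illustrates how the three strategies can disagree: majority voting trusts the policy model's sampling frequency, while the two BoN variants defer partly or wholly to the reward model's scores.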
B Supplementary Experimental Results

B.1 Computational Cost Analysis in FLOPs

Table 3 compares the computational cost in FLOPs for processing a single query across different models. We assume an input length of 256 tokens (prefill) and an output length of 4096 tokens (decode). The policy model generates tokens sequentially in an auto-regressive manner, requiring multiple forward passes, while the PRM requires only one forward pass for scoring.

The results demonstrate that SRCA with N = 128, combining a 1B policy model and an 8B PRM, requires only 43.01% of the computational cost of the 70B model for processing a single sample. Considering the experimental results reported in Table 1, the 1B model enhanced with SRCA achieves higher accuracy than the 70B model, indicating that our approach not only reduces computational overhead but also yields superior performance.

Model                           Type             N    FLOPs
Llama-3.2-1B-Instruct           Auto Regressive  128  1.31×10^18
Llama3.1-8B-PRM-Deepseek-Data   Reward           128  9.03×10^15
Llama-3.1-70B-Instruct          Auto Regressive  1    3.04×10^18

Table 3: Computational Cost Analysis (in FLOPs) for Different Model Configurations during Inference.

B.2 Evaluation of Scoring Methods and Selection Strategies

We analyzed how different scoring methods for reasoning paths and answer selection strategies affect the accuracy of TTS methods. We employ Llama3.1-8B-PRM-Deepseek-Data as the PRM in this experiment. The PRM assigns scores to each reasoning step, generating a sequence of scores for each path. We examined two primary methods for computing the final path score:

• Last: Using the final step's score as the path score, where the PRM functions similarly to an Outcome Reward Model (ORM).

• Mean: Taking the average of the score sequence to reflect the overall reliability of the reasoning process.

Furthermore, BoN and Weighted BoN can be combined with other tree search algorithms as answer selection strategies. Specifically, after the tree search algorithm generates multiple candidate paths:

• BoN: Selects the path with the highest score.

• Weighted BoN: First clusters answers, then selects the answer with the highest sum of path scores within its cluster.

The experimental results (Table 4) demonstrate that the Last scoring method consistently outperforms Mean, while BoN generally yields better results than Weighted BoN. This pattern holds across all four datasets and three TTS methods.

Notably, the superiority of Last over Mean suggests that some correct reasoning paths have high final scores but lower average scores. This indicates that even when reaching the correct answer, the intermediate reasoning steps may not be entirely accurate. Developing TTS methods that ensure both process and outcome accuracy remains a future research direction.

The choice between BoN and Weighted BoN reflects a balance between policy model and reward model decision-making. BoN relies primarily on the PRM's judgment by selecting the highest-scoring path, while Weighted BoN considers the sampling frequency of the policy model through score aggregation.
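The two path-scoring methods compared above (Last and Mean) can be sketched in a few lines (an illustrative sketch with made-up step scores; the function name `path_score` is ours, not from the paper):

```python
def path_score(step_scores, method="last"):
    """Collapse a PRM's per-step score sequence into one path score."""
    if method == "last":
        # PRM used like an Outcome Reward Model: only the final step counts.
        return step_scores[-1]
    if method == "mean":
        # Average step score: overall reliability of the reasoning process.
        return sum(step_scores) / len(step_scores)
    raise ValueError(f"unknown method: {method}")

# A path whose middle step is shaky but whose conclusion scores well:
steps = [0.9, 0.4, 0.8]
print(path_score(steps, "last"))  # 0.8
print(path_score(steps, "mean"))  # ≈ 0.7
```

The toy path illustrates the observation above: a path can end with a high final score (Last) while its average (Mean) is dragged down by a weak intermediate step.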
In our experiments, using a 1B parameter policy model and an 8B parameter PRM, the PRM-dominated BoN strategy achieved superior results, likely due to the PRM's stronger discriminative ability.

B.3 Experiments on Qwen3-0.6B

To validate the generalizability of SRCA, we conducted additional experiments on Qwen3-0.6B (Yang et al., 2025), following the same settings as our main experiments. We set the sampling number N to 16 to expedite the experimental process. The experimental results are presented in Table 5. They indicate that SRCA maintains superior performance compared to other TTS approaches. The observed trends align with the findings from our primary experiments, thereby confirming the general applicability of SRCA across different LLMs.

                                 Last scoring                               |  Mean scoring
Selection     N    Method  GSM8k   MATH500  AIME    Olympiad  Avg.    |  GSM8k   MATH500  AIME    Olympiad  Avg.
BoN           16   Beam    0.7505  0.4660   0.1404  0.1422    0.3748  |  0.7475  0.4600   0.1200  0.1247    0.3631
                   DVTS    0.7452  0.4700   0.1489  0.1585    0.3807  |  0.7331  0.4620   0.1307  0.1525    0.3696
                   SRCA    0.8054  0.5120   0.2133  0.1822    0.4282  |  0.7869  0.4940   0.1747  0.1718    0.4069
                   Avg.    0.7670  0.4827   0.1675  0.1610    0.3946  |  0.7558  0.4720   0.1418  0.1497    0.3798
              32   Beam    0.8278  0.4740   0.1425  0.1644    0.4022  |  0.8043  0.4780   0.1457  0.1377    0.3914
                   DVTS    0.8210  0.4920   0.1714  0.1659    0.4126  |  0.8241  0.5020   0.1758  0.1629    0.4162
                   SRCA    0.8340  0.5140   0.2304  0.1793    0.4394  |  0.8317  0.5120   0.1908  0.1793    0.4285
                   Avg.    0.8276  0.4933   0.1814  0.1699    0.4181  |  0.8200  0.4973   0.1708  0.1600    0.4120
              64   Beam    0.8302  0.4800   0.1833  0.1719    0.4164  |  0.8392  0.4960   0.1758  0.1659    0.4192
                   DVTS    0.8347  0.4980   0.1951  0.1822    0.4275  |  0.8484  0.5140   0.1907  0.1733    0.4316
                   SRCA    0.8514  0.5160   0.2337  0.1881    0.4473  |  0.8491  0.5160   0.2144  0.1837    0.4408
                   Avg.    0.8388  0.4980   0.2040  0.1807    0.4304  |  0.8456  0.5087   0.1936  0.1743    0.4305
              128  Beam    0.8484  0.5200   0.1907  0.1807    0.4350  |  0.8340  0.5160   0.1832  0.1807    0.4285
                   DVTS    0.8347  0.5260   0.2068  0.1940    0.4404  |  0.8499  0.5180   0.1843  0.1866    0.4347
                   SRCA    0.8560  0.5340   0.2497  0.2074    0.4618  |  0.8514  0.5240   0.2197  0.1896    0.4462
                   Avg.    0.8464  0.5267   0.2157  0.1940    0.4457  |  0.8451  0.5193   0.1957  0.1856    0.4365
Weighted BoN  16   Beam    0.7369  0.4600   0.1200  0.1303    0.3618  |  0.7194  0.4460   0.1189  0.1229    0.3518
                   DVTS    0.7422  0.4760   0.1446  0.1526    0.3789  |  0.7111  0.4620   0.1125  0.1496    0.3588
                   SRCA    0.7597  0.4800   0.1714  0.1688    0.3950  |  0.7187  0.4680   0.1393  0.1674    0.3734
                   Avg.    0.7463  0.4720   0.1453  0.1506    0.3785  |  0.7164  0.4587   0.1236  0.1466    0.3613
              32   Beam    0.7740  0.4760   0.1446  0.1659    0.3901  |  0.7520  0.4480   0.1404  0.1348    0.3688
                   DVTS    0.7877  0.4780   0.1661  0.1718    0.4009  |  0.7491  0.4680   0.1425  0.1644    0.3810
                   SRCA    0.7937  0.4900   0.1822  0.1778    0.4109  |  0.7832  0.4700   0.1704  0.1762    0.4000
                   Avg.    0.7851  0.4813   0.1643  0.1718    0.4007  |  0.7614  0.4620   0.1511  0.1585    0.3833
              64   Beam    0.8036  0.4780   0.1886  0.1733    0.4109  |  0.7771  0.4860   0.1939  0.1615    0.4046
                   DVTS    0.7915  0.4860   0.1897  0.1825    0.4124  |  0.7839  0.4900   0.1961  0.1719    0.4105
                   SRCA    0.8173  0.5060   0.2068  0.1854    0.4289  |  0.7945  0.4940   0.2208  0.1778    0.4218
                   Avg.    0.8041  0.4900   0.1950  0.1804    0.4174  |  0.7852  0.4900   0.2036  0.1704    0.4123
              128  Beam    0.8014  0.5000   0.1907  0.1854    0.4194  |  0.7574  0.4820   0.1951  0.1911    0.4064
                   DVTS    0.8195  0.5020   0.2079  0.1899    0.4298  |  0.7680  0.4920   0.1994  0.1940    0.4134
                   SRCA    0.8173  0.5100   0.2262  0.1943    0.4370  |  0.7786  0.5120   0.2444  0.2030    0.4345
                   Avg.    0.8127  0.5040   0.2083  0.1899    0.4287  |  0.7680  0.4953   0.2130  0.1960    0.4181

Table 4: Performance comparison of TTS methods with different scoring methods (Last/Mean) and selection strategies (BoN/Weighted BoN) on four benchmark datasets. Numbers indicate accuracy; higher scores indicate better performance. "Avg." rows denote group averages for each N value.
Models & TTS       GSM8K    MATH500  AIME     Olympiad
Greedy Search      42.61%   34.40%   3.54%    13.63%
Self-Consistency   52.62%   47.00%   4.07%    20.59%
BoN                68.69%   51.20%   6.54%    23.56%
Weighted BoN       63.91%   53.60%   7.29%    23.41%
Beam Search        72.10%   54.00%   16.40%   25.07%
DVTS               74.91%   55.80%   17.36%   25.67%
SRCA               79.45%   56.60%   21.33%   27.89%

Table 5: Comparative results of TTS with Qwen3-0.6B. Numerical values indicate accuracy rates, with bold figures denoting the best performance. Experimental parameters: N=16, utilizing DeepSeek PRM.
arXiv:2505.17832v1 [cs.CL] 23 May 2025

Emerging categories in scientific explanations

Giacomo Magnifico[0000-0002-1782-0668] and Eduard Barbu[0000-0002-3664-5367]
Institute of Computer Science, University of Tartu, Tartu, Estonia
{name.surname}@ut.ee

Abstract. Clear and effective explanations are essential for human understanding and knowledge dissemination. The scope of scientific research aiming to understand the essence of explanations has recently expanded from the social sciences to include the fields of machine learning and artificial intelligence. Important contributions from the social sciences include [18,17,22,13,5,11], with works that examine critical aspects such as causality (cause-and-effect relationships), contrast (distinctions between differing scenarios), relevance (applicability of explanations), and truth (accuracy and verifiability of explanations). However, machine learning and natural language processing focus more on operational definitions and on the importance of constructing datasets, as seen in studies by [21,23,6]. Since explanations for machine learning decisions must be both impactful and human-like [10,3,20,12,4], a major challenge lies in developing explanations that emphasize proximal aspects — details that are immediately relevant, direct and related to the user — over broad algorithmic processes [21]. The current lack of large-scale datasets with a focus on both human-like and human-generated explanations [23] highlights the issue addressed by this work.
The specific research questions of this work are thus the following: in what form(s) do explanations appear within the context of scientific literature? Can we provide an annotated dataset with as clear-cut definitions as possible and reach an acceptable consensus between different annotators?
The scope of this study has been limited to scientific literature due to the intrinsic nature of explanations – to avoid complications that would derive from the additional analysis of truth and relevance. Scientific explanations possess an identifiable general structure that involves a relationship between two components: the explanans (which provides the explanation) and the explanandum (what is being explained). The explanandum is contingent on the explanans, as changes in the latter directly impact the former. A useful example is the equation y = 2*x: the value of y (explanandum) depends on the value of x (explanans), as it increases with the increase of x, but the inverse also holds true. This relationship shows how explanations build upon the dependence of the explanandum on the explanans while providing enough nuance for multiple types of explanations within these limitations. An additional constraint was to only include explanations that presented an explicit explanandum, e.g. "the sky is blue because of light refraction through the atmosphere" rather than "this happens due to light refraction through the atmosphere", in order to avoid explanations trailing through multiple sentences.
With our research questions in mind, we started by extracting sentences that indicate explanations from scientific literature among various sources in the biotechnology and biophysics topic domains, the majority of which were selected from PubMed's PMC Open Access subset. The selected 340 sentences were then analyzed, and different "explanation types", as possible categories for interpretation, emerged from the data. It is crucial to reiterate that this categorization process was entirely driven by the dataset, in an inductive classification originating from the text
and not a superimposition of pre-existing categories upon the dataset. This method, therefore, avoided pre-set criteria to explore instead the intrinsic connections between categories and the dataset's subject, aiming to understand the commonalities and differences within the explanations. The categories of explanations that emerged are the following: causation, which establishes a cause-and-effect relationship, stating that one event or condition leads to another without detailing intermediate steps [15]; mechanistic causation, which details the underlying mechanisms by which a cause leads to an effect, outlining the intermediate steps that explain how and why the cause produces the outcome [14]; contrastive, which focuses on comparing scenarios to explain why a particular outcome occurred in one case but not in another, emphasizing divergent outcomes [8]; correlation, which details relationships between variables where changes in one are associated with changes in another, but without establishing causality; functional, which focuses on the function of a trait in relation to its form and effectiveness, particularly in biology [16]; pragmatic approach, which focuses on the selection of choices/actions based on convenience/effectiveness, requiring a conscious choice and emphasizing practicality or applicability [19].
To minimize author bias in sentence categorization, we conducted a classification study on the Prolific platform [1] with 120 annotators divided between 10 questionnaires, guaranteeing a base of twelve annotators per sentence. Each annotator completed the questionnaire in one sitting, with a median completion time of 35 minutes, and was compensated at an average rate of £8/hour. After sanity checks and the removal of statistical outliers, 10 evaluations per sentence were kept, along with the highest-quality 272 explanatory sentences.
Upon calculating the averaged Krippendorff's alpha [9,7] value to gauge the robustness of inter-annotator agreement, significant disagreement between categories of similar causal strength was observed (causation/mechanistic causation, correlation/functional/pragmatic approach). After regrouping the sentences by causal strength and the number of relations, with the new categories of strong relation (causation and mechanistic causation), weak relation (correlation, functional, pragmatic approach) and multi-path relation (contrastive), the average agreement between annotators improved to a value of 0.667. Albeit only slightly over the desired target, the final alpha value still represents good agreement between annotators and, thus, a high-quality human-annotated explanation dataset. The dataset is made available to the community through a dedicated repository at [2].

Keywords: annotated corpus · computational linguistics · explanation

References

1. Prolific. www.prolific.com, last accessed 2025/04/08
2. SciExpl Dataset. https://github.com/gima9552/SciExplDataset
3. Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J.M., Confalonieri, R., Guidotti, R., Del Ser, J., Díaz-Rodríguez, N., Herrera, F.: Explainable artificial intelligence (XAI): What we know and what is left to attain trustworthy artificial intelligence. Information Fusion 99, 101805 (2023)
4. Ghassemi, M., Oakden-Rayner, L., Beam, A.L.: The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health 3(11), e745–e750 (2021)
5. Halpern, J.Y., Pearl, J.: Causes and explanations: A structural-model approach. Part II: Explanations. The British Journal for the Philosophy of Science (2005)
6. Hartmann, M., Sonntag, D.: A survey on improving NLP models with human explanations. In: Andreas, J., Narasimhan,
K., Nematzadeh, A. (eds.) Proceedings of the First Workshop on Learning with Natural Language Supervision. pp. 40–47. Association for Computational Linguistics, Dublin, Ireland (May 2022). https://doi.org/10.18653/v1/2022.lnls-1.5, https://aclanthology.org/2022.lnls-1.5
7. Hayes, A., Krippendorff, K.: Answering the call for a standard reliability measure for coding data. Communication Methods and Measures 1, 77–89 (04 2007). https://doi.org/10.1080/19312450709336664
8. Jacovi, A., Swayamdipta, S., Ravfogel, S., Elazar, Y., Choi, Y., Goldberg, Y.: Contrastive explanations for model interpretability. CoRR abs/2103.01378 (2021), https://arxiv.org/abs/2103.01378
9. Krippendorff, K.: Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research 30, 411–433 (07 2004). https://doi.org/10.1093/hcr/30.3.411
10. Kulesza, T., Burnett, M., Wong, W.K., Stumpf, S.: Principles of explanatory debugging to personalize interactive machine learning. In: Proceedings of the 20th International Conference on Intelligent User Interfaces. pp. 126–137 (2015)
11. Lewis, D.: Causal explanation (1986)
12. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23(1), 18 (2020)
13. Lombrozo, T.: The structure and function of explanations. Trends in Cognitive Sciences 10(10), 464–470 (2006)
14. Machamer, P., Darden, L., Craver, C.F.: Thinking about mechanisms. Philosophy of Science 67(1), 1–25 (2000). https://doi.org/10.1086/392759
15. Mackie, J.L.: The Cement of the Universe. Clarendon Press, Oxford (1974)
16. Mayr, E.: Toward a New Philosophy of Biology: Observations of an Evolutionist. Harvard University Press, Cambridge, MA (1988)
17. Mill, J.S.: A system of logic. In: Arguing About Science, pp. 243–267. Routledge (2012)
18. Miller, T.: Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267, 1–38 (2019)
19.
Morgan, M.S., Morrison, M. (eds.): Models as Mediators: Perspectives on Natural and Social Science. Cambridge University Press (1999)
20. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1135–1144 (2016)
21. Tan, C.: On the diversity and limits of human explanations. In: Carpuat, M., de Marneffe, M.C., Meza Ruiz, I.V. (eds.) Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 2173–2188. Association for Computational Linguistics, Seattle, United States (Jul 2022). https://doi.org/10.18653/v1/2022.naacl-main.158, https://aclanthology.org/2022.naacl-main.158
22. Thagard, P.: The cognitive science of science: Explanation, discovery, and conceptual change. MIT Press (2012)
23. Wiegreffe, S., Marasović, A.: Teach me to explain: A review of datasets for explainable natural language processing (2021)
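The inter-annotator agreement statistic used in the study above, Krippendorff's alpha, can be computed for nominal categories with a short routine like the following (a minimal sketch, not the authors' implementation; ordinal or interval data would require a different distance function):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: one list of category labels per item, each rated >= 2 times."""
    o = Counter()  # coincidence matrix over ordered label pairs
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue  # items with a single rating carry no pairable information
        for a, b in permutations(range(m), 2):
            o[(ratings[a], ratings[b])] += 1.0 / (m - 1)
    n_c = Counter()  # marginal frequency of each label
    for (c, _), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())
    d_o = sum(w for (c, k), w in o.items() if c != k)  # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_o / d_e

# Two coders, perfect agreement on two items -> alpha = 1.0
print(krippendorff_alpha_nominal([["strong", "strong"], ["weak", "weak"]]))  # 1.0
```

Adding a single split item, e.g. a third unit rated once "strong" and once "weak", drops alpha well below 1, which is the mechanism behind the category regrouping reported above.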
arXiv:2505.17833v1 [cs.CL] 23 May 2025

Investigating Affect Mining Techniques for Annotation Sample Selection in the Creation of Finnish Affective Speech Corpus

Kalle Lahtinen1, Einari Vaaras1, Liisa Mustanoja2, Okko Räsänen1
1 Signal Processing Research Centre, Tampere University, Finland
2 Language Studies, Tampere University, Finland
firstname.surname@tuni.fi

Abstract

The study of affect in speech requires suitable data, as emotional expression and perception vary across languages. Until now, no corpus has existed for natural expression of affect in spontaneous Finnish, the existing data being acted or from a very specific communicative setting. This paper presents the first such corpus, created by annotating 12,000 utterances for emotional arousal and valence, sampled from three large-scale Finnish speech corpora. To ensure diverse affective expression, sample selection was conducted with an affect mining approach combining acoustic, cross-linguistic speech emotion, and text sentiment features. We compare this method to random sampling in terms of annotation diversity, and conduct post-hoc analyses to identify sampling choices that would have maximized the diversity. As an outcome, the work introduces a spontaneous Finnish affective speech corpus and informs sampling strategies for affective speech corpus creation in other languages or domains.

Index Terms: speech emotion recognition, affective expression, speech analysis, annotation, corpus design, paralinguistics

1. Introduction

Speech conveys various types of information, and affective ("emotional") content is one aspect that plays a role in social interactions. The way affect is realized in speech appears to have certain universal properties [1–3], but there are language- and culture-specific aspects as well [4, 5]. This also means that the study of affective expressions and their subjective interpretation calls for language-specific data resources.
However, to date there has been no suitable speech corpus for studying the expression and perception of affect in spontaneous Finnish, limiting the extent to which affect has been studied in Finnish from linguistic and technological perspectives.

This paper describes our sample selection process for creating the first corpus of affective spontaneous Finnish, based on three existing large-scale speech corpora. We specifically emphasize our approach for informed but automatic selection of affect-rich samples for manual annotation, a process we refer to as affective speech mining. We describe how we applied affective speech mining, how the data were annotated, and provide a description of the resulting corpus. We also analyze to what extent our affect mining approach yielded more diverse affective speech samples compared to a random sampling approach. Finally, with the help of the obtained affect annotations, we conduct post-hoc analyses to investigate which methodological choices in the affect mining process would have yielded the most diverse samples for annotation.

As a result, the paper introduces the first corpus for affective spontaneous speech in Finnish and provides speech scientists with methodological findings on how to effectively choose samples for manual annotation from datasets that are too large for manual analysis.

1.1. Prior work
In the context of speech technology, the analysis of affective speech is related to automatic speech emotion recognition (SER), where a suitable corpus or corpora are required for training and testing a SER system.
There are a few existing Finnish corpora with labels for emotional content, such as [6], [7], and [8]. However, these corpora are relatively small in scale, and consist of acted speech [6, 8] or come from a very specific communicative setting [7]. This limits their representativeness in terms of real-world spontaneous speech [9–12]. More generally, a large proportion of existing SER corpora in other languages also consists of acted speech (e.g., [10–15]), as acting is a practical way to obtain large amounts of speech data with known emotional labels. Spontaneous emotional speech corpora are much fewer [16], as post-hoc annotation of affective content is time-consuming and finding expression-rich speech samples in natural conversational data is not trivial. For instance, a notable proportion of spontaneous everyday speech is likely to be relatively neutral in terms of affective content, especially in certain cultures, and therefore constitutes the majority of the data if random sampling is used to select data for annotation.

Given that typical annotation efforts are limited to a small subset of all available speech data in the given language, there is a need for efficient methods for automatic discovery of speech samples that cover a broad range of affective expression in the given domain. This is especially important for the development of SER systems, where coverage of different phenomena is often more important than representativeness of the sample (which can usually be obtained with random sampling).

To facilitate effective annotation efforts, various techniques for informed sample selection have been developed, often falling under the umbrella of active learning (AL). In AL, the idea is to use properties of the data, potentially paired with a machine learning model trained on an annotated subset of the data, to reduce human annotation effort by automatically selecting samples for annotation following some set criteria.
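To make the AL idea concrete, the sketch below shows one classic selection criterion: picking the samples on which a model's predicted class distribution has the highest entropy. This is a generic illustration of AL, not the method used in this paper (which relies on clustering, see Sec. 2.2); the function and variable names are ours.

```python
import numpy as np

def entropy_select(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` samples whose predicted class distribution has the
    highest entropy, i.e. where the model is least certain (illustrative
    uncertainty-based AL criterion; not this paper's method)."""
    # probs: (n_samples, n_classes), each row summing to 1
    eps = 1e-12
    ent = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(ent)[-budget:][::-1]  # indices, most uncertain first

probs = np.array([[0.98, 0.01, 0.01],   # confident prediction
                  [0.34, 0.33, 0.33],   # very uncertain
                  [0.70, 0.20, 0.10]])  # somewhat uncertain
print(entropy_select(probs, 2))  # → [1 2]
```

The selected indices would then be sent to human annotators, and the model retrained on the enlarged labeled set.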
In the context of SER, various AL approaches have been successfully used to streamline the data annotation process. These include uncertainty- and diversity-based methods [17–20], crowdsourced annotations with trustability-weighted agreements [21], and clustering-based methods like k-medoids clustering [7], which have been used to identify key speech samples for manual annotation of emotional content. Overall, these studies highlight the benefit of informed sample selection to minimize annotation efforts. In the present study, a clustering-based AL approach is used for affect mining from a corpus which is too large to be annotated manually.

2. Creation of the FinnAffect corpus
The aim of FinnAffect creation was to obtain manual annotations for a broad range of affective phenomena present in Finnish spontaneous speech. For this purpose, we utilized several source corpora with somewhat distinct communicative contexts and speaker populations, and combined them with our affect mining approach to select speech samples for manual annotation.

2.1. Data sources
We used three large-scale spontaneous Finnish speech corpora as the source for the FinnAffect corpus: Lahjoita Puhetta ("LP"; [22]), TamPuhe ("TP"; [23]), and HelPuhe ("HP"; [24]). LP is a large-scale speech corpus collected as part of a nationwide speech audio crowd-sourcing project, consisting of 3,270 hours (1,687 hours transcribed) of spontaneous speech samples
from over 20,000 native Finnish speakers. In LP, the speakers were prompted with various everyday topics and were asked to talk about them freely (see [22]). The TP and HP datasets are significantly smaller than the LP dataset, consisting of interview recordings from the 1970s, 1990s, and 2010s. The original purpose of the recordings was to study over-time change in the regional dialects of the Tampere and Helsinki areas of Finland.

All three sources of data were first compiled into a single pool of utterances. All original source corpora contain text transcriptions, but with differing temporal alignments. LP transcriptions come with word-level alignments, whereas in TP and HP the unit of alignment varies from one word to several clauses. For TP and HP, all clips shorter than 20 s were treated as individual utterances. Transcripts of longer turns in HP and TP were force-aligned at the word level using the Aalto ASR tool [25], after which a 300-ms silence threshold was applied to segment the clips in LP, HP, and TP into utterances. Finally, all utterances from 1 to 20 s in duration and with an automatically estimated speech-to-noise ratio (SNR) of 20 dB or higher (using the tool from [26]) were retained for the sample selection phase. This resulted in a total of 1,474,728 utterances (1,438,537 from LP, 26,407 from TP, and 9,784 from HP).

2.2. Sample selection with affect mining
We targeted a total of 12,000 samples for annotation. To maximize the diversity of affective content, we chose to use medoid-based active learning (MAL) [27], a clustering-based AL approach previously applied to sound event classification [27] and SER [7]. In standard MAL, the idea is to first represent all samples in a feature space, and then apply k-medoids clustering with the same number of clusters as the total sample target for annotation.
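The final duration/SNR filtering step of Sec. 2.1 can be sketched as follows. The `Utterance` container and its field names are hypothetical; the thresholds are the ones stated in the text (1–20 s duration, SNR of 20 dB or higher).

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    corpus: str       # "LP", "TP", or "HP" (hypothetical field names)
    duration_s: float
    snr_db: float     # automatically estimated speech-to-noise ratio

def keep_for_selection(u: Utterance,
                       min_dur: float = 1.0,
                       max_dur: float = 20.0,
                       min_snr: float = 20.0) -> bool:
    """Duration/SNR filter applied before sample selection (sketch)."""
    return min_dur <= u.duration_s <= max_dur and u.snr_db >= min_snr

pool = [Utterance("LP", 3.2, 25.0),   # kept
        Utterance("TP", 0.6, 30.0),   # too short
        Utterance("HP", 12.0, 15.0)]  # too noisy
kept = [u for u in pool if keep_for_selection(u)]
print(len(kept))  # → 1
```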
By annotating the medoids, the assumption is that the other samples in a cluster would potentially belong to the same "class", and hence the medoid labels could be propagated to the rest of the cluster samples (see also [7]). By initializing the clustering with farthest-first traversal (FAFT; [28]), a set of maximally distinct samples can be used as the initial medoids in MAL, with the aim of capturing maximal data diversity in the process.

To represent our source data for MAL, we applied three types of features that we expected to be sensitive to (but not necessarily accurate of) affective content: 1) acoustic features using the eGeMAPS feature set designed for affective computing [29], compressed down to 42 dimensions with PCA, 2) 6-dim cross-linguistic speech emotion classifier posterior features from the ExHuBERT system trained for SER on a diverse set of languages [30], and 3) 3-dim text sentiment analysis posterior features (negative, neutral, positive) obtained from the speech transcripts using the FinnSentiment [31] tool. Each eGeMAPS feature was z-score normalized at the speaker level to reduce the impact of speaker-dependent variability. All three feature types were then normalized to have an equal contribution (1/3 each) to the total variance of the concatenated 51-dim feature vectors,
hence balancing the contributions of each feature type in the distance calculations of FAFT and MAL.

Our initial aim was to use FAFT in the resulting feature space to find diverse initial cluster centroids, following [7]. However, the memory requirements of standard k-medoids clustering grow quadratically with the number of data points, preventing its use on the present 1.4M samples. Also, the CLARA approximation [32] of k-medoids clustering for large datasets does not support initialization with specific samples. Moreover, even with a powerful computing cluster, executing CLARA was not computationally feasible for our target annotation budget with nearly 10k clusters. Thereby, FAFT could not be used for cluster initialization, and CLARA with heuristic initialization (see [33]) was used to calculate 1,500 clusters for the data.

Given the clusters, 6 samples were then chosen from each cluster for annotation. These included the medoid and its five closest samples, with the constraint that two samples from each source corpus had to be chosen. In cases with fewer than 2 samples per source corpus in a cluster, the missing samples were randomly chosen from all the clusters to achieve the target count. As a result, a total of 9,000 samples (1,500 × 6) were obtained with the approach (4,028 from LP, 2,577 from TP, and 2,395 from HP). In addition, we chose 3,000 samples randomly (1,000 per data source) to enable comparison to the affect mining approach and to obtain a representative distribution of affective content in the data, resulting in a total of 12,000 samples to annotate.

2.3. Annotation process
Five native Finnish listeners (3 female, age 20–30 years; self-reported normal hearing) were hired to annotate the samples, receiving financial compensation for the task. Each annotator was tasked with 4,000 samples (2,000 common to all annotators).
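The equal-variance balancing of the three feature blocks (Sec. 2.2: 42-dim eGeMAPS after PCA, 6-dim ExHuBERT posteriors, 3-dim FinnSentiment posteriors, concatenated to 51 dims) can be sketched as below. This is an illustrative reconstruction, not the authors' code, and the exact scaling they used may differ in detail.

```python
import numpy as np

def balance_and_concat(blocks):
    """Scale each feature block so it contributes equally to the total
    variance of the concatenated vectors (sketch; each block's variances
    then sum to 1/len(blocks), so the blocks contribute 1/3 each here)."""
    scaled = []
    for X in blocks:                      # X: (n_samples, d_block)
        total_var = X.var(axis=0).sum()   # this block's total variance
        scaled.append(X / np.sqrt(total_var * len(blocks)))
    return np.hstack(scaled)

rng = np.random.default_rng(0)
egemaps = rng.normal(scale=5.0, size=(100, 42))   # PCA-compressed acoustics
exhubert = rng.normal(scale=0.2, size=(100, 6))   # SER posterior features
sentiment = rng.normal(scale=0.1, size=(100, 3))  # text sentiment posteriors
F = balance_and_concat([egemaps, exhubert, sentiment])
print(F.shape)  # → (100, 51)
```

After this scaling, Euclidean distances in the concatenated space are not dominated by the high-variance acoustic block, which is what makes the subsequent FAFT/MAL distance calculations treat the three feature types evenly.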
In addition, 20 quality assurance samples were annotated three times by each annotator, batched so that five of them always appeared three times in random positions within a batch of 1,015 subsequent samples. This was done to ensure within-annotator consistency during the annotation process.

The annotators conducted the annotation in a self-paced manner across one month of time using Python-based software with a graphical user interface (GUI) for the task, after an initial instruction session with the annotation tool. Samples were presented together with 4 s of preceding context, with a clearly audible volume increase from the context to the actual target content. The annotators could listen to each sample a maximum of two times, and they were asked to rate their subjective perception of emotional arousal ("virittyneisyys" in Finnish) and valence ("tunnesävy") using two continuous sliders on a scale from -1.0 to 1.0, with endpoints marked as "low" vs. "high" for arousal and "negative" vs. "positive" for valence. The annotators could annotate as many samples as they wanted per day, but they had to take at least a 15-min break after every 30 min of annotation. All annotators delivered all the 4,000 annotations tasked to them.

We first z-score normalized all ratings from each annotator, as
we found that the annotators varied in their use of the rating scale, and then scaled the result back to [-1.0, 1.0] with max(abs()) across all the scores for easier interpretation. The normalization significantly improved the inter-annotator agreement on the 2,000-sample "gold standard" (GS) set with five annotations (see below). For the present analyses, we calculated the mean valence and arousal ratings for each sample in the GS set and combined them with the 10,000 samples with single annotations. We derived discrete arousal and valence labels with thresholds of [low] ≤ 0 < [high] for arousal, and [negative] ≤ -0.08 < [neutral] ≤ +0.08 < [positive] for valence. The valence threshold was optimized to maximize the pairwise Cohen's kappa between all the annotators and the majority vote on the GS set. As a result, the mean Spearman correlation (continuous scores) and Cohen's kappa (discrete labels) between each annotator and the mean/mode of annotators were ρ = 0.85 and κ = 0.52 for valence, and ρ = 0.89 and κ = 0.64 for arousal on the GS samples, reflecting substantially above-chance agreement.

Figure 1: Annotated arousal and valence scores on the GS subset (2,000 samples with 5 annotations), sorted by the mean across annotators. Shading denotes ±1 SD across the annotators. [Axes: Sample ID vs. annotation score; curves: valence, arousal.]

Table 1: Summary statistics of the resulting FinnAffect corpus.
Total utterances: 12,000
Unique speakers: 3,936
Gender distribution: 3,902 male, 6,425 female, 1,673 unknown
Sample duration: 4.02 s (mean) ± 2.71 s (SD)
Valence labels: 2,741 (neg.) / 7,178 (neut.) / 2,081 (pos.)
Arousal labels: 5,685 (low) / 6,315 (high)

3. Corpus description
Table 1 summarizes the resulting corpus properties. Fig. 1 shows the valence and arousal score distributions on the GS set and the variance across the annotators. The overall distributions were similar for the 10k single-annotation samples and are not shown separately.
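The per-annotator rating normalization and label discretization described above can be sketched as follows. This is our illustrative reconstruction (function names are ours); we assume standard z-scoring followed by rescaling with max(abs()) over one annotator's scores, and the -0.08/+0.08 valence thresholds stated in the text.

```python
import numpy as np

def normalize_ratings(raw: np.ndarray) -> np.ndarray:
    """Z-score one annotator's ratings, then rescale back to [-1, 1]
    by dividing by the maximum absolute value (sketch of Sec. 2.3)."""
    z = (raw - raw.mean()) / raw.std()
    return z / np.max(np.abs(z))

def valence_label(score: float, t: float = 0.08) -> str:
    # [negative] <= -t < [neutral] <= +t < [positive]
    if score <= -t:
        return "negative"
    if score <= t:
        return "neutral"
    return "positive"

ratings = np.array([-0.9, -0.1, 0.0, 0.2, 0.8])  # one annotator's raw scores
norm = normalize_ratings(ratings)
print([valence_label(s) for s in norm])
# → ['negative', 'negative', 'neutral', 'positive', 'positive']
```

Note that z-scoring can push mild raw scores (here -0.1) past a fixed threshold, which is why the threshold itself was optimized against inter-annotator agreement on the GS set.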
As can be observed, there is a systematic distribution of low, medium, and high scores for both valence and arousal. After discretization, there is a relatively balanced set of low vs. high arousal classes, whereas most valence labels are neutral, but still with thousands of samples for both negative and positive valence. In addition, the data has a large number of speakers (N = 3,936) and samples from both genders.

To test whether the affect mining strategy yielded more diverse samples compared to random sampling, we compared the variances of the sample annotation scores in the two sets with Levene's test [34]. There were no statistical differences between the annotator scores for either of the affect dimensions (p > 0.05 for both). The corresponding class label proportions were also highly similar in both cases, indicating that the diversity of annotation scores for the samples chosen with the affect mining strategy did not differ from those selected with random sampling.

4. Post-hoc analysis of affect mining
Given that the affect mining did not result in more diverse emotional contents compared to random sampling, we analyzed the reasons for this finding and whether some alternative choices would have resulted in a more diverse set of samples. One hypothesis was that perhaps the MAL-based approach simply converges to a similar outcome as random sampling with a sufficiently large number
of samples (clusters) to annotate. Alternatively, we speculated that the lack of FAFT initialization in the CLARA clustering might have caused the cluster density to follow the overall density distribution of the data, thereby aligning with random sampling from the same distribution.

To study these questions, we used the 12,000 annotated samples to simulate the sample selection process in a post-hoc manner. We operationalized sample diversity as the standard deviation (SD) of the valence/arousal scores in a given selected subsample of data points, assuming that larger variation in the scores reflects richer variation in affective expression. We calculated the diversities for increasing sample sizes sampled from the 12k samples, comparing a) random sampling, b) samples obtained from FAFT, c) k-medoids medoids after clustering the data with the FAFT initialization, and d) k-medoids from the CLARA algorithm. For each dataset size, random sampling diversity scores were averaged across 100 runs, whereas FAFT and clustering scores were averaged across 5 runs to average out stochastic variation in the results. For comparison, we ran CLARA using both the k-medoids++ and heuristic initializations in Scikit-learn, and also CLARA in MATLAB using the k-medoids++ initialization.

Fig. 2 shows the results of the analysis. As can be observed, samples from FAFT, FAFT-initialized k-medoids, and MATLAB CLARA without FAFT all result in higher affective diversity (annotation score SD) compared to random sampling, at least up to 1,500 samples. Both FAFT and FAFT + k-medoids also outperform CLARA at all but the smallest and the very largest sample sizes, demonstrating the general benefit of FAFT in the discovery of diverse samples from data. The MATLAB CLARA implementation with k-medoids++ initialization yielded a greater diversity of annotation scores than both Scikit-learn implementations. The reason for this difference between the implementations is not clear.
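Farthest-first traversal itself is straightforward to implement; the sketch below (ours, not the authors' code) shows why it tends to discover diverse points: each step greedily picks the point farthest from everything chosen so far.

```python
import numpy as np

def farthest_first(X: np.ndarray, k: int, start: int = 0) -> list:
    """Farthest-first traversal (FAFT [28], sketch): greedily pick the
    point with the largest distance to its nearest already-chosen point."""
    chosen = [start]
    # distance from every point to its nearest chosen point
    d = np.linalg.norm(X - X[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

# Three well-separated pairs on a line: FAFT visits each region before
# returning to pick a second point from any of them.
X = np.array([[0.0], [0.1], [5.0], [5.1], [10.0], [10.1]])
print(farthest_first(X, 3))  # → [0, 5, 2]
```

The selected points can serve directly as a diverse annotation sample, or as initial medoids for k-medoids clustering, which is the variant the post-hoc analysis found most effective.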
Importantly, the heuristic initialization in Scikit-learn CLARA, as also used in the original sample selection for annotation of FinnAffect, yielded results comparable to random sampling at 1,500 clusters, which explains why the original clustering-based sampling diversity did not differ significantly from random sampling. The Scikit-learn CLARA clustering with k-medoids++ initialization outperforms the heuristic initialization, but still does not outperform the MATLAB implementation. While all the methods start to gradually converge towards random sampling with increasing sample/cluster counts, there is still a clear margin to random sampling at the maximum of 1,500 clusters for all but the Scikit-learn heuristic CLARA approach when using the full feature set.

As for the post-hoc analysis of features, eGeMAPS does not appear to be useful for affective content mining in the given corpus, whereas ExHuBERT is beneficial for valence, and sentiment scores appear to help for both valence and arousal.

We also conducted a post-hoc comparison of different clustering algorithms and distance metrics using the 12,000 annotated samples, to test if some configuration results in a higher cluster purity (better organization of data into clusters of similar affect). Purity was calculated as the average proportion of the most frequent class in each cluster, excluding single-sample clusters that artificially increase
apparent cluster purity. We experimented with the k-means, bisecting k-means, agglomerative, and CLARA clustering algorithms. For agglomerative clustering, we experimented with the Euclidean, Manhattan, cosine, and Chebyshev distance metrics. These distance metrics were also tested with CLARA, with the addition of the Pearson correlation distance. Furthermore, CLARA was tested with both the heuristic and k-medoids++ initializations, resulting in a total of 16 different clustering algorithm variants. We ran each experiment using either the eGeMAPS, ExHuBERT, or FinnSentiment features, and also with all features combined into 51-dim feature vectors (see Sec. 2.2). For each algorithm variant, we tested k ∈ {50, 60, ..., 250, 500, 750, 1000, 1500} clusters and repeated each experiment 10 times to account for the randomness of the cluster initialization process, resulting in a total of 16,000 cluster purity experiments.

Figure 2: Standard deviation (SD) of annotator scores for valence (top) and arousal (bottom) in case of different simulated sample selection strategies for annotation, and as a function of the number of samples chosen. Shading denotes ±1 standard error across different runs/samplings. [Panels: eGeMAPS, ExHuBERT, FinnSentiment, all features; curves: random sample, FAFT, FAFT + k-medoids, CLARA-heuristic (scikit), CLARA-k++ (scikit), CLARA-k++ (MATLAB).]

Figure 3: The results of the cluster purity experiments for the four feature types, averaged across all 16 clustering algorithm variants (standard error as shaded area). The results are shown for valence (top) and arousal (bottom). [Axes: cluster purity vs. number of clusters; curves: all features, eGeMAPS, ExHuBERT, FinnSentiment.]

Fig.
3 presents the results of the cluster purity experiments for the different features. The text-based FinnSentiment features cluster more systematically than the other features in the case of valence, but are notably inferior for arousal, highlighting the strong link between valence in speech and its content. When averaged over both valence and arousal, eGeMAPS and ExHuBERT exhibit superior purity for all cluster counts, but with a high number of clusters, the combination of all features becomes similar. In terms of the clustering algorithms (not shown separately), all performed similarly except for agglomerative clustering, where only the Euclidean distance yielded results comparable to the other variants. On average, CLARA was the best clustering algorithm regardless of its hyperparameters, followed by k-means, bisecting k-means, and agglomerative clustering (Mann-Whitney U test (MWUT) [35], p < 0.05 for all pairs). When further inspecting the different hyperparameter configurations for CLARA using the MWUT, there were no significant differences between the heuristic and k-medoids++ initializations (p > 0.05) when averaging across all other factors. Overall, the best distance metrics for CLARA were cosine and Pearson correlation (no significant differences), followed by the Manhattan, Euclidean, and Chebyshev distances (all p < 0.05 compared to cosine and Pearson correlation, and with each other). All in all, the cluster purity analyses reflect a complementary perspective to the affect mining process, revealing how well the affective content is organized within the clusters in the representation space. This contrasts with the sample selection analyses, which measure the
extent to which different clusters represent different emotional contents.

5. Conclusions
This paper presented the FinnAffect corpus for studying affect expression and developing SER systems for spontaneous Finnish. Moreover, we introduced a pipeline for mining affect-rich speech samples for manual annotation, making use of a rich set of speech descriptors from prosodic, cross-linguistic speech emotion, and text sentiment perspectives. Although the affect mining approach did not result in more diverse human labels compared to random sampling in our dataset creation, a series of post-hoc analyses revealed that the use of FAFT for sample discovery or k-medoids clustering initialization improves the resulting dataset diversity in terms of affective content. As a result, the use of either FAFT or FAFT-initialized k-medoids clustering, paired with affect-sensitive features, is recommended as a potential approach for affective content mining from large datasets.

6. Acknowledgements
The work of KL was funded by the CONVERGENCE project, a grant awarded by the Jane and Aatos Erkko Foundation to Tampere University. The authors wish to thank the annotators for their efforts. The authors would also like to thank the Tampere Center for Scientific Computing for the computational resources used in this study.

7. References
[1] J. Prinz, "Which emotions are basic," Emotion, Evolution, and Rationality, vol. 69, pp. 1–19, 2004.
[2] J. Russell, M. Lewicka, and T. Niit, "A Cross-Cultural Study of a Circumplex Model of Affect," Journal of Personality and Social Psychology, vol. 57, no. 5, pp. 848–856, 1989.
[3] A. Sianipar, P. van Groenestijn, and T. Dijkstra, "Affective Meaning, Concreteness, and Subjective Frequency Norms for Indonesian Words," Frontiers in Psychology, vol. 7, 2016.
[4] A. Wierzbicka, "Human Emotions: Universal or Culture-Specific?" American Anthropologist, vol. 88, no. 3, pp. 584–594, 1986.
[5] N. Evans and S. C.
Levinson, "The myth of language universals: Language diversity and its importance for cognitive science," The Behavioral and Brain Sciences, vol. 32, no. 5, pp. 429–448, 2009.
[6] M. Airas and P. Alku, "Emotions in Vowel Segments of Continuous Speech: Analysis of the Glottal Flow Using the Normalized Amplitude Quotient," Phonetica, vol. 63, no. 1, pp. 26–46, 2006.
[7] E. Vaaras, S. Ahlqvist-Björkroth, K. Drossos, L. Lehtonen, and O. Räsänen, "Development of a speech emotion recognizer for large-scale child-centered audio recordings from a hospital environment," Speech Communication, vol. 148, pp. 9–22, 2023.
[8] T. Seppänen, J. Toivanen, and E. Väyrynen, "MediaTeam speech corpus: a first large Finnish emotional speech database," in Proceedings of the XV International Conference of Phonetic Sciences, 2003, pp. 2469–2472.
[9] J. Wilting, E. Krahmer, and M. Swerts, "Real vs. acted emotional speech," in Proc. INTERSPEECH, 2006, pp. 805–808.
[10] B. Schuller, B. Vlasenko, F. Eyben, M. Wöllmer, A. Stuhlsatz, A. Wendemuth, and G. Rigoll, "Cross-Corpus Acoustic Emotion Recognition: Variances and Strategies," IEEE Transactions on Affective Computing, vol. 1, no. 2, pp. 119–131, 2010.
[11] B. Schuller, A. Batliner, S. Steidl, and D. Seppi, "Recognising realistic emotions and affect in speech: State of the art and lessons learnt from the first challenge," Speech Communication, vol. 53,
no. 9-10, pp. 1062–1087, 2011.
[12] W. Fan, X. Xu, X. Xing, W. Chen, and D. Huang, "LSSED: A Large-Scale Dataset and Benchmark for Speech Emotion Recognition," in Proc. ICASSP, 2021, pp. 641–645.
[13] T. Waaramaa, "Perception of emotional nonsense sentences in China, Egypt, Estonia, Finland, Russia, Sweden, and the USA," Logopedics Phoniatrics Vocology, vol. 40, no. 3, pp. 129–135, 2015.
[14] F. Burkhardt, A. Paeschke, M. Rolfes, W. F. Sendlmeier, and B. Weiss, "A database of German emotional speech," in Proc. INTERSPEECH, 2005, pp. 1517–1520.
[15] S. R. Livingstone and F. A. Russo, "The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English," PLOS One, vol. 13, no. 5, pp. 1–35, 2018.
[16] T. Rathi and M. Tripathy, "Analyzing the Influence of different Speech Data Corpora and Speech Features on Speech Emotion Recognition: A Review," Speech Communication, vol. 162, p. 103102, 2024.
[17] W. Han, H. Li, H. Ruan, L. Ma, J. Sun, and B. Schuller, "Active Learning for Dimensional Speech Emotion Recognition," in Proc. INTERSPEECH, 2013, pp. 2841–2845.
[18] M. Abdelwahab and C. Busso, "Incremental adaptation using active learning for acoustic emotion recognition," in Proc. ICASSP, 2017, pp. 5160–5164.
[19] M. Abdelwahab and C. Busso, "Active Learning for Speech Emotion Recognition Using Deep Neural Network," in Proc. ACII, 2019, pp. 1–7.
[20] F. Ren, Z. Liu, and X. Kang, "An Efficient Framework for Constructing Speech Emotion Corpus Based on Integrated Active Learning Strategies," IEEE Transactions on Affective Computing, vol. 13, no. 4, pp. 1929–1940, 2022.
[21] S. Hantke, A. Abstreiter, N. Cummins, and B. Schuller, "Trustability-Based Dynamic Active Learning for Crowdsourced Labelling of Emotional Audio Data," IEEE Access, vol. 6, pp. 42142–42155, 2018.
[22] A. Moisio, D. Porjazovski, A. Rouhe, Y. Getman, A. Virkkunen, R. AlGhezi, M. Lennes, T. Grósz, K. Lindén, and M.
Kurimo, "Lahjoita puhetta: a large-scale corpus of spoken Finnish with some benchmarks," Language Resources and Evaluation, vol. 57, pp. 1295–1327, 2022.
[23] Tampere University, "Longitudinal data of Tampere spoken language," Tampere University, The Unit of Languages and Institute for the Languages of Finland and Liisa Mustanoja. [Online]. Available: http://urn.fi/urn:nbn:fi:lb-2022090821
[24] University of Helsinki, "The Downloadable Version of the Longitudinal Corpus of Finnish Spoken in Helsinki (1970s, 1990s and 2010s)," 2014, University of Helsinki, The Department of Finnish, Finno-Ugrian and Scandinavian Studies and Institute for the Languages of Finland and Heikki Paunonen. [Online]. Available: http://urn.fi/urn:nbn:fi:lb-201609142
[25] J. Leinonen, S. Virpioja, and M. Kurimo, "Grapheme-Based Cross-Language Forced Alignment: Results with Uralic Languages," in Proc. NoDaLiDa, 2021, pp. 345–350.
[26] M. Lavechin, M. Métais, H. Titeux, A. Boissonnet, J. Copet, M. Rivière, E. Bergelson, A. Cristia, E. Dupoux, and H. Bredin, "Brouhaha: Multi-Task Training for Voice Activity Detection, Speech-to-Noise Ratio, and C50 Room Acoustics Estimation," in Proc. IEEE ASRU, 2023, pp. 1–7.
[27] S. Zhao, T. Heittola, and T. Virtanen, "Active learning for sound event classification by clustering unlabeled data," in Proc. ICASSP, 2017, pp.
751–755. [28] D. Rosenkrantz, R. Stearns, and P. Lewis, “An analysis of several heuristics for the traveling salesman problem,” SICOMP , vol. 6, no. 3, pp. 563–581, 1977. [29] F. Eyben, K. R. Scherer, B. W. Schuller, J. Sundberg, E. Andr ´e, C. Busso, L. Y . Devillers, J. Epps, P. Laukka, S. S. Narayanan, and K. P. Truong, “The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for V oice Research and Affective Computing,” IEEE Transactions on Affective Computing , vol. 7, no. 2, pp. 190–202, 2016. [30] S. Amiriparian, F. Packa ´n, M. Gerczuk, and B. W. Schuller, “Ex- HuBERT: Enhancing HuBERT Through Block Extension and Fine- Tuning on 37 Emotion Datasets,” in Proc. INTERSPEECH , 2024, pp. 2635–2639. [31] K. Lind ´en, T. Jauhiainen, and S. Hardwick, “FinnSentiment: a Finnish social media corpus for sentiment polarity annotation,” Language Resources and Evaluation , vol. 57, no. 2, p. 581–609, 2023. [32] L. Kaufman and P. J. Rousseeuw, Finding groups in data: an introduction to cluster analysis . John Wiley & Sons, 2009. [33] F. Pedregosa, G. Varoquaux, A. Gramfort, V . Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V . Dubourg et al. , “Scikit-learn: Machine learning in Python,” Journal of Machine Learning Research , vol. 12, no. 85, pp. 2825–2830, 2011. [34] H. Levene, “Robust Tests for the Equality of Variances,” in Contri- butions to Probability and Statistics: Essays in Honor of Harold Hotelling . Stanford University Press, 1960, pp. 278–292. [35] H. Mann and D. Whitney, “On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other,” The Annals of Mathematical Statistics , vol. 18, no. 1, pp. 50–60, 1947. | https://arxiv.org/abs/2505.17833v1 |
Explaining Sources of Uncertainty in Automated Fact-Checking

Jingyi Sun*  Greta Warren*  Irina Shklovski  Isabelle Augenstein
University of Copenhagen
{jisu, grwa, ias, augenstein}@di.ku.dk

Abstract

Understanding sources of a model’s uncertainty regarding its predictions is crucial for effective human-AI collaboration. Prior work proposes to use numerical uncertainty or hedges (“I’m not sure, but...”), which do not explain uncertainty arising from conflicting evidence, leaving users unable to resolve disagreements or rely on the output. We introduce CLUE (Conflict- & Agreement-aware Language-model Uncertainty Explanations), the first framework to generate natural language explanations of model uncertainty by: (i) identifying relationships between spans of text that expose claim-evidence or inter-evidence conflicts/agreements driving the model’s predictive uncertainty in an unsupervised way; and (ii) generating explanations via prompting and attention steering to verbalize these critical interactions. Across three language models and two fact-checking datasets, we demonstrate that CLUE generates explanations that are more faithful to model uncertainty and more consistent with fact-checking decisions than prompting for explanation of uncertainty without span-interaction guidance. Human evaluators find our explanations more helpful, more informative, less redundant, and more logically consistent with the input than this prompting baseline. CLUE requires no fine-tuning or architectural changes, making it plug-and-play for any white-box language model. By explicitly linking uncertainty to evidence conflicts, it offers practical support for fact-checking and readily generalizes to other tasks that require reasoning over complex information.
arXiv:2505.17855v1 [cs.CL] 23 May 2025

1 Introduction

Large Language Models (LLMs) are increasingly prevalent in high-stakes tasks that involve reasoning about information reliability, such as fact-checking (Wang et al., 2024; Fontana et al., 2025).

*Equal contribution.

Figure 1: Example of claim and evidence documents, alongside span interactions for uncertainty and generated natural language explanations.

To foster effective use of such models in fact-checking tasks, these models must explain the rationale for their predictions (Atanasova et al., 2020; Kotonya and Toni, 2020). However, current methods in automated fact-checking have been criticised for their failure to address practical explainability needs of fact-checkers (Warren et al., 2025) and for their disconnect from the tasks typically performed by fact-checkers (Schlichtkrull et al., 2023). For example, although fact-checking involves complex reasoning about the reliability of (often conflicting) evidence, existing automatic fact-checking techniques focus only on justifying the verdict (Atanasova et al., 2020; Stammbach and Ash, 2020; Zeng and Gao, 2024). Such methods do not explain the uncertainty associated with their predictions, which is crucial for their users to determine whether some of the uncertainty is resolvable, and if so, which aspects of this uncertainty within the evidence to address (e.g., by retrieving additional information) (Warren et al., 2025).
Uncertainty in model predictions is often communicated through numerical scores (e.g., “I am 73% confident”); however, such metrics can be hard to contextualize and lack actionable insights for end-users (Zimmer, 1983; Wallsten et al., 1993; van der Waa et al., 2020; Liu et al., 2020). Recent efforts have instead used natural language expressions (e.g., “I’m not sure”) to convey uncertainty (Steyvers et al., 2025; Yona et al., 2024; Kim et al., 2024), but such expressions often fail to faithfully reflect model uncertainty (Yona et al., 2024), and users may overestimate model confidence (Steyvers et al., 2025). Existing explainable fact-checking systems exhibit two critical limitations: they focus solely on justifying veracity predictions through generic reasoning summaries of the input sequence (see Figure 2), while neglecting to (1) communicate model uncertainty or (2) explicitly surface evidentiary conflicts and agreements that relate to it. This constitutes a fundamental methodological gap, as effective fact-checking requires precisely identifying the sources of uncertainty, for example from conflicting evidence, to guide targeted verification (Graves, 2017; Micallef et al., 2022).

We propose CLUE, a pipeline that generates natural language explanations (NLEs) of model uncertainty by explicitly capturing conflicts and agreements in the input (e.g., a claim and its supporting or refuting evidence). The pipeline first identifies the salient span-level interactions that matter to the prediction of the model through an unsupervised approach, providing an input-feature explanation that highlights key relationships between separate input segments (e.g., claim and evidence) (Ray Choudhury et al., 2023). These interactions have been shown to be both faithful to the model and plausible to humans (Sun et al., 2025). CLUE then converts these signals into uncertainty-aware explanations by explicitly discussing the interactions, the conflict/agreement relations they express, and how they contribute to uncertainty regarding the verdict. CLUE does not require gold-label explanations, avoids fine-tuning, and operates entirely at inference time.

Across three language models (§4.2) and two fact-checking datasets (§4.1), we evaluate two variants of CLUE. Automatic metrics show that both variants generate explanations that are more faithful to each model’s uncertainty and agree more closely with the gold fact-checking labels than a prompting baseline that lacks conflict-/agreement-span guidance (§5.5). Human study participants likewise judge CLUE explanations as more helpful, more informative, less redundant, and more logically consistent with the input. We also observe a trade-off between the two variants of our CLUE framework: one attains higher faithfulness, the other higher plausibility, highlighting a promising avenue for future work to achieve both simultaneously (§5.5).

2 Related Work

2.1 Uncertainty Quantification in LLMs

Recent work on LLM uncertainty quantification primarily relies on logit-based methods such as answer distribution entropy (Kadavath et al., 2022), summing predictive entropies across generations (Malinin and Gales, 2021), and applying predictive entropy to multi-answer question-answering (Yang et al., 2025). Estimating uncertainty in long-form tasks involves measuring semantic similarity between responses (Duan et al., 2024; Kuhn et al., 2023; Nikitin et al., 2024).
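The answer-distribution (predictive) entropy mentioned above, which this paper later adopts as its uncertainty measure (§3.2), reduces to a few lines of code. Below is a minimal stdlib-only sketch; the label names and logit values are purely illustrative, not taken from any model or experiment:

```python
import math

def predictive_entropy(label_logits):
    """Entropy of the softmax distribution over candidate-answer logits.

    `label_logits` maps each candidate label (e.g. SUPPORTS / REFUTES /
    NEUTRAL) to the model's output logit for that label.
    """
    # Softmax with the max logit subtracted for numerical stability.
    m = max(label_logits.values())
    exps = {y: math.exp(z - m) for y, z in label_logits.items()}
    total = sum(exps.values())
    probs = {y: e / total for y, e in exps.items()}
    # Shannon entropy: 0 when one label takes all the mass,
    # log(#labels) when the model is maximally uncertain.
    return -sum(p * math.log(p) for p in probs.values() if p > 0)

# A peaked answer distribution yields low entropy; a flat one is maximal.
peaked = predictive_entropy({"SUPPORTS": 8.0, "REFUTES": 0.0, "NEUTRAL": 0.0})
flat = predictive_entropy({"SUPPORTS": 1.0, "REFUTES": 1.0, "NEUTRAL": 1.0})
```

A single forward pass suffices, which is why logit-based entropy is attractive compared with sampling-based estimates.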
Given that logit-based uncertainty quantification is infeasible for closed-source black-box models, alternative approaches have depended on verbalizing confidence directly (Lin et al., 2022; Mielke et al., 2022), though these measures are overconfident and unreliable (Yona et al., 2024; Tanneru et al., 2024). Other approaches measure output diversity across paraphrased prompts (Zhang et al., 2024a; Chen and Mueller, 2024), but this technique can introduce significant computational overhead and conflate model uncertainty with prompt-induced noise, obscuring interpretability. Accordingly, in this work, we focus on the uncertainty of open-source models, which are readily accessible and widely used. We adopt predictive entropy, a straightforward white-box metric computed from the model’s answer logits, as our uncertainty measure for fact-checking tasks. This choice balances interpretability and computational efficiency while avoiding potential noise introduced by multiple prompts.

Figure 2: Explanations produced by earlier systems, e-FEVER (Stammbach and Ash, 2020), Explain-MT (Atanasova et al., 2020), and JustiLM (Zeng and Gao, 2024), compared with those from our CLUE framework. CLUE is the only approach that explicitly traces model uncertainty to the conflicts and agreements between the claim and multiple evidence passages.

2.2 Linguistic Expressions of Uncertainty

Numerical uncertainty estimates do not address the sources of uncertainty, and are therefore difficult for end-users, such as fact-checkers, to interpret and act upon (Warren et al., 2025). Linguistic expressions of uncertainty may be more intuitive for people to understand than numerical ones (Zimmer, 1983; Wallsten et al., 1993; Windschitl and Wells, 1996), and recent work has proposed models that communicate uncertainty through hedging phrases such as “I am sure” or “I doubt” (Mielke et al., 2022; Lin et al., 2022; Zhou et al., 2023; Tian et al., 2023; Xiong et al., 2023; Ji et al., 2025; Zheng et al., 2023; Farquhar et al., 2024). However, these expressions are not necessarily faithful reflections of the model’s uncertainty (Yona et al., 2024) and tend to overestimate the model’s confidence (Tanneru et al., 2024), risking misleading users (Steyvers et al., 2025). Moreover, they do not explain why the model is uncertain. In this paper, we propose a method that explains sources of model uncertainty by referring to specific conflicting or concordant parts of the input that contribute to the model’s confidence in the output. This approach ensures a more faithful reflection of model uncertainty and provides users with a more intuitive and actionable understanding of model confidence.

2.3 Generating Natural Language Explanations for Fact-Checking

Natural language explanations provide justifications for model predictions designed to be understood by laypeople (Wei Jie et al., 2024). NLEs have typically been evaluated by measuring the similarity between generated NLEs and human-written reference explanations using surface-level metrics such as ROUGE-1 (Lin, 2004) and BLEU (Papineni et al., 2002).
In fact-checking, supervised methods have been proposed that involve extracting key sentences from existing fact-checking articles and using them as explanations (Atanasova et al., 2020). Later work proposed a post-editing mechanism to enhance the coherence and fluency of explanations (Jolly et al., 2022), while others have fine-tuned models on data collected from fact-checking websites to generate explanations (Feher et al., 2025; Raffel et al., 2020; Beltagy et al., 2020). Recent work has shifted towards few-shot methods requiring no fine-tuning, for example, using few-shot prompting with GPT-3 (Brown et al., 2020) to produce evidence summaries as explanations (Stammbach and Ash, 2020), incorporating a planning step before explanation generation (Zhao et al., 2024) to outperform standard prompting approaches, and generating fact-checking justifications based on retrieval-augmented language models (Zeng and Gao, 2024). However, existing methods are often not faithful to model reasoning (Atanasova et al., 2023; Siegel et al., 2024, 2025), have limited utility in fact-checking (Schmitt et al., 2024), and fail to address model uncertainty, which has been identified as a key criterion for fact-checking (Warren et al., 2025).

To this end, we introduce the first framework designed for the task of explaining sources of uncertainty in multi-evidence fact-checking. Our method analyzes span-level agreements and conflicts correlated with uncertainty scores. Unlike conventional approaches that aim to replicate human NLEs (prioritising fluency or plausibility over faithfulness to model reasoning), our method generates explanations that are both faithful to model uncertainty and helpful to people in a fact-checking context.

3 Method

3.1 Preliminaries and Overall Framework

Our objective is to explain why an LLM is uncertain about a multi-evidence fact-checking instance by grounding that uncertainty in specific agreements or conflicts within the input.
Problem setup. Each input instance is a triple $X = (C, E_1, E_2)$ consisting of a claim $C$ and two evidence pieces $E_1, E_2$. Note that, in this work, we set the number of evidence pieces to two for simplicity. For clarity, we denote their concatenation as $X = [x_1, \ldots, x_{|C|+|E_1|+|E_2|}]$. The task label comes from the set $\mathcal{Y} = \{\text{SUPPORTS}, \text{REFUTES}, \text{NEUTRAL}\}$.

Pipeline overview. Our framework comprises three stages:

1. Uncertainty scoring. We compute predictive entropy from the model’s answer logits to obtain a scalar uncertainty score $u(X)$ (§3.2). This logit-based measure is model-agnostic.

2. Conflict/Agreement extraction. We capture the agreements and conflicts most relevant to the model’s reasoning by identifying the text-span interactions between $C$, $E_1$, and $E_2$ that embody these relations (§3.3).

3. Explanation generation. The model receives the extracted spans as soft constraints and produces a natural-language rationale $Y_R = [y'_1, \ldots, y'_r]$, along with its predicted label $\hat{y}$, tied to the identified interactions (§3.4).

Outputs. For each instance $X$, the framework returns the predicted task label $\hat{y} \in \mathcal{Y}$; the numeric uncertainty score $u(X)$; and the textual explanation $Y_R = [y'_1, \ldots, y'_r]$ that grounds the source of uncertainty in the specific agreements or conflicts between $C, E_1, E_2$.

3.2 Predictive Uncertainty Score Generation

To quantify model uncertainty for generating an answer label on a specific input sequence, we follow previous work and calculate predictive uncertainty with entropy theory, which does not require multiple runs and is widely used in open-source models. Specifically, we define the numeric uncertainty score $u$ as the entropy of the softmax distribution over the model’s output logits for a set of candidate answers $\mathcal{Y} = \{\text{SUPPORTS}, \text{REFUTES}, \text{NEUTRAL}\}$. For each candidate label $y_i \in \mathcal{Y}$:

$$P(y_i \mid X) = \frac{\exp(\text{logit}(y_i))}{\sum_{j=1}^{|\mathcal{Y}|} \exp(\text{logit}(y_j))} \quad (1)$$

where $\text{logit}(y_i)$ is the model’s output logit towards candidate answer $y_i$ given input $X$. $P(y_i \mid X)$ is the confidence score of the model for selecting $y_i$ as the final answer across all candidate answers within $\mathcal{Y}$. Finally, the model’s uncertainty towards the input sequence $X$ is:

$$u(X) = -\sum_{y_i \in \mathcal{Y}} P(y_i \mid X) \log P(y_i \mid X) \quad (2)$$

3.3 Conflict and Agreement Span Interaction Identification for Answer Uncertainty

To surface the conflicts and agreements that drive a model’s uncertainty, we extract and then label salient span interactions among the claim $C$ and two evidence passages, $E_1$ and $E_2$.

Span interaction extraction. For each ordered input part pair $(F, T) \in \{(C, E_1), (C, E_2), (E_1, E_2)\}$, we follow previous work (Ray Choudhury et al., 2023; Sun et al., 2025) to extract the important span interactions and their importance score to the model’s answer by (i) identifying the most important attention head to the model’s answer prediction from its final layer, (ii) obtaining its attention matrix $A \in \mathbb{R}^{(|F|+|T|) \times (|F|+|T|)}$, and (iii) symmetrizing the cross-part scores:

$$a'_{p,q} = \tfrac{1}{2}\left(A_{p,q} + A_{q,p}\right), \quad x_p \in F, \; x_q \in T.$$

Treating $a'_{p,q}$ as edge weights yields a bipartite token graph, which we partition into contiguous spans with the Louvain algorithm (Blondel et al., 2008). Given a span $w \subset F$ and a span $v \subset T$, their interaction importance is

$$a_{wv} = \frac{1}{|\text{span}_w|\,|\text{span}_v|} \sum_{x_p \in \text{span}_w} \sum_{x_q \in \text{span}_v} a'_{p,q}. \quad (3)$$

The scored interactions for $(F, T)$ form $S_{(F,T)} = \{((\text{span}_w, \text{span}_v), a_{wv})\}$.

Relation labeling.
To tag each span pair as an agreement, disagreement, or unrelated, we prompt GPT-4o (OpenAI Team, 2024; https://openai.com/index/hello-gpt-4o/) to assign a label $r_{wv} \in \{\text{agree}, \text{disagree}, \text{unrelated}\}$, balancing scalability and accuracy (see templates in App. H.6). After labeling all three pairs, the complete interaction set for instance $X$ is

$$S_R = S_R(C, E_1) \cup S_R(C, E_2) \cup S_R(E_1, E_2), \quad (4)$$

where, for example, $S_R(C, E_1) = \{((\text{span}_w, \text{span}_v), a_{wv}, r_{wv})\}$. Each element links two spans with an importance score and a relation label, thereby supplying the conflict- or agreement-span interactions used in later stages.

3.4 Uncertainty Natural Language Explanation Generation

To convert the extracted conflict and agreement spans to rationales for model uncertainty, we rely on two complementary mechanisms. (i) Instruction-driven prompting embeds the spans directly in the input so the model is instructed which segments to reference. (ii) Intrinsic attention steering guides the model’s own attention toward those same segments while it is generating the rationale. Both mechanisms use self-rationalization: the model first states its verdict $\hat{y}$ and then explains $Y_R$, a sequencing shown to improve faithfulness over pipeline approaches (Wiegreffe et al., 2021; Marasovic et al., 2022; Siegel et al., 2025).

Instruction-based NLE. For each instance $X$, we rank all labelled interactions by importance and keep the top $K = 3$, denoted $S_R^{(K)}$, to avoid overly long explanations. These three span pairs are slotted into a three-shot prompt (see App. F.1), which instructs the model to explain how the highlighted agreements or conflicts influence its confidence. Finally, the standard transformer decoding process outputs both the predicted label $\hat{y}$ and the accompanying explanation $Y_R$.

Attention steering. Instead of explicit instructions, we can guide generation by modifying attention on the fly with PASTA (Zhang et al., 2024b). Starting from the same $S_R^{(K)}$, we collect all token indices that fall inside any selected span,

$$\mathcal{I} = \left\{\, p : (\text{span}_w, \text{span}_v) \in S_R^{(K)},\; p \in \text{span}_w \cup \text{span}_v \,\right\}. \quad (5)$$

For each attention head $(\ell, h)$ deemed relevant to model uncertainty, let $A$ be its attention matrix. We down-weight non-target tokens by $\beta$:

$$\tilde{A}_{ij} = \frac{A_{ij}}{Z_i} \begin{cases} 1 & \text{if } j \in \mathcal{I}, \\ \beta & \text{otherwise}, \end{cases} \quad (6)$$

$$Z_i = \sum_{j \in \mathcal{I}} A_{ij} + \beta \sum_{j \notin \mathcal{I}} A_{ij}. \quad (7)$$

All other heads remain unchanged. Following Zhang et al. (2024b), we steer $|H| = 100$ heads and set $\beta = 0.01$ to balance steering efficacy and prevent degeneration; see App. B for the head-selection procedure. With the steered attention in place, the transformer generates $\hat{y}$ followed by the rationale $Y_R$, now naturally centered on the conflict- or agreement spans that drive its uncertainty.

4 Experimental Setup

4.1 Datasets

We select two fact-checking datasets, one specific to the health domain, HealthVer (Sarrouti et al., 2021), and one closer to a real-world fact-checking scenario, DRUID (Hagström et al., 2024). These datasets were chosen because they provide multiple evidence pieces per claim, making them well-suited to our goal of explaining model uncertainty arising from the inter-evidence conflicts and agreements.
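The PASTA-style down-weighting used in attention steering (§3.4) amounts to rescaling each attention row and renormalizing it. A minimal pure-Python sketch of one row update follows; the row values, target indices, and beta are illustrative, not the paper’s implementation:

```python
def steer_attention_row(row, target_idx, beta=0.01):
    """Down-weight non-target attention scores, then renormalize the row.

    `row` is one (already normalized) row of a head's attention matrix,
    `target_idx` the token positions inside the selected spans, and
    `beta` the down-weighting factor for everything else.
    """
    targets = set(target_idx)
    # 1 for span tokens, beta for all other positions.
    scale = [1.0 if j in targets else beta for j in range(len(row))]
    # Row renormalizer: sum of target mass plus beta-scaled non-target mass.
    z = sum(a * s for a, s in zip(row, scale))
    return [a * s / z for a, s in zip(row, scale)]

# Attention mass migrates to the span tokens while the row still sums to 1.
row = [0.25, 0.25, 0.25, 0.25]
steered = steer_attention_row(row, target_idx=[1, 2], beta=0.01)
```

Because each steered row is renormalized, only the relative allocation of attention shifts toward the span tokens; the row remains a valid distribution.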
For experiments, we select six hundred instances that consist of a claim, multiple pieces of evidence, and a golden label $y \in \{\text{SUPPORTS}, \text{REFUTES}, \text{NEUTRAL}\}$ from each dataset.²

² While DRUID has six fine-grained fact-checking labels, we merge the labels into the above three categories to balance the label categories.

4.2 Models

We compare three generation strategies for NLEs towards model uncertainty:

• Prompt Baseline: A three-shot prompt baseline extending prior few-shot NLE work (Stammbach and Ash, 2020; Zeng and Gao, 2024; Zhao et al., 2024) by explicitly asking the model to highlight conflicting or supporting spans that shape its uncertainty (see prompt template in App. F.1).

• CLUE-Span: The instruction-based variant of our CLUE method, where the extracted span interactions are filled into a three-shot prompt to guide the explanation generation (§3.4; prompt template in App. F.2).

• CLUE-Span+Steering: The attention steering variant of our CLUE method, in which the same prompt as CLUE-Span is used. Additional attention steering is applied to intrinsically guide the model’s explanation generation toward the identified spans (§3.4; prompt template in App. F.2).

Experiments are run on three recent, open-weight, instruction-tuned LLMs of comparable size: Qwen2.5-14B-Instruct³ (Qwen Team, 2024), Gemma-2-9B-IT⁴ (Gemma Team, 2024), and OLMo-2-1124-13B-Instruct⁵ (Team OLMo et al., 2024). Each backbone is used consistently across our pipeline for span-interaction extraction, answer prediction, and NLE generation on four NVIDIA A100-SXM4-40GB GPUs. We selected these models to balance capability (reasoning and instruction-following quality) with practical constraints on inference latency and GPU memory.

5 Automatic Evaluation

5.1 Faithfulness

To assess whether the NLEs produced by CLUE are faithful
to the model’s uncertainty, we adapt the Correlational Counterfactual Test (CCT) (Siegel et al., 2024) and propose an Entropy-CCT metric. Following Siegel et al. (2024), we begin by inserting a random adjective or noun into the original instance $X$ to obtain a perturbed input $X'$ (see App. D for details). Let $u(X)$ denote the model’s uncertainty score defined by Eq. 2; unlike CCT (see details of the original CCT in App. E), we measure the impact of the perturbation on the model’s uncertainty with the Absolute Entropy Change (AEC):

$$\Delta u(X) = u(X) - u(X') \quad (8)$$

For each perturbation, we record whether the inserted word appears in the generated NLE, using its presence as a proxy for importance. This yields a binary mention flag $m \in \{0, 1\}$, following Siegel et al. (2024); Atanasova et al. (2023). Let $D_m$ denote the set of perturbed examples where the NLE mentions the inserted word and $D_{\neg m}$ the complementary set where it does not; we correlate the continuous variable $\Delta u$ with the binary mention flag $m$ via the point-biserial correlation $r_{pb}$ (Tate, 1954). The Entropy-CCT statistic is:

$$\text{CCT}_{\text{entropy}} = r_{pb} = \frac{\mathbb{E}_m[\Delta u] - \mathbb{E}_{\neg m}[\Delta u]}{\text{Std}(\Delta u)} \cdot \sqrt{\frac{|D_m| \cdot |D_{\neg m}|}{(|D_m| + |D_{\neg m}|)^2}} \quad (9)$$

where $\mathbb{E}_m[\Delta u]$ and $\mathbb{E}_{\neg m}[\Delta u]$ are the mean absolute entropy changes for these two groups, respectively. $\text{Std}(\Delta u)$ is the standard deviation of absolute entropy changes across the full dataset. Ultimately, this metric quantifies the alignment between changes in model uncertainty and explanatory references to input perturbations, thereby measuring how faithfully the NLEs reflect the model’s uncertainty.

³ https://huggingface.co/Qwen/Qwen2.5-14B-Instruct
⁴ https://huggingface.co/google/gemma-2-9b-it
⁵ https://huggingface.co/allenai/OLMo-2-1124-13B-Instruct

5.2 Span-Coverage

An uncertainty explanation should surface all information conveyed by the selected span interactions. We therefore compute Span-Coverage: the fraction of reference interactions that are explicitly mentioned in the generated NLE.
Let $S_{\text{NLE}}$ be the set of span interactions extracted from the explanation, and let $S_R^{(K)}$ be the reference set supplied in the prompt (see §3.4). Then

$$\text{Span-Coverage} = \frac{|S_{\text{NLE}} \cap S_R^{(K)}|}{|S_R^{(K)}|}. \quad (10)$$

A higher value indicates the NLE covers a higher proportion of the information supplied by the extracted span interactions.

5.3 Span-Extraneous

Ideally, the explanation should mention only the provided interactions and avoid introducing extraneous information. We measure the proportion of mentioned interactions that do not belong to the reference set, denoted Span-Extraneous:

$$\text{Span-Extraneous} = \frac{|S_{\text{NLE}} \setminus S_R^{(K)}|}{|S_{\text{NLE}}|}. \quad (11)$$

A lower value indicates closer alignment with the intended span interactions.

5.4 Label-Explanation Entailment

We evaluate the extent to which the uncertainty explanation agrees with the model’s predicted label by formulating the task as a natural-language inference (NLI) problem. First, we convert the predicted label into a hypothesis using the template “The claim is supported by / refuted by / neutral to the evidence.” The generated explanation serves as the premise. The resulting premise-hypothesis pair is fed to a widely used off-the-shelf language-inference model, DeBERTa-v3⁶ (He et al., 2023). The Label-Explanation Entailment (LEE) score is the proportion of examples for which the NLI model predicts ENTAILMENT.

⁶ https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli

5.5 Results

Here, we present the results of our automatic evaluation. For brevity, we refer to Qwen2.5-14B-Instruct, OLMo-2-1124-13B-Instruct, and Gemma-2-9B-it simply as Qwen, OLMo, and Gemma, respectively.

Faithfulness. We use Entropy-CCT, a point-biserial correlation bounded by $-1 \leq r_{pb} \leq 1$ (Eq. 9), to measure the faithfulness of NLEs to the model’s uncertainty (§5.1). When $r_{pb} = 0$, the explanation mentions high- and low-impact perturbation words equally often; every $+0.01$ adds roughly one percentage point (pp) to the chance that the explanation names a token that is truly influential for the model’s predictive uncertainty (App. G).

Table 1 shows that Prompt Baseline is non-faithful in all six settings, with $r_{pb}$ values all negative, ranging from $-0.03$ to $-0.13$. Thus its NLEs mention truly influential tokens 3-13 pp less often than uninfluential ones, the opposite of faithful behaviour. Both variants of our CLUE reverse this trend. Presenting span interactions in the prompt (CLUE-Span) raises every correlation to non-negative values and peaks at $r_{pb} = 0.089$ on the DRUID-Qwen setting. This means the explanation now mentions truly influential tokens about 17 pp more often than under Prompt Baseline ($r_{pb} = -0.080$). Adding attention steering (CLUE-Span+Steering) lifts the $r_{pb}$ scores to 0.033 on HealthVer and 0.102 on DRUID with the Qwen model, i.e., net gains of +6 pp and +18 pp over Prompt Baseline. Moreover, four of the six positive correlations produced by CLUE-Span+Steering are significant at $p < 0.01$ (Table 4 in App. G.3), confirming that the improvements are both substantial and statistically reliable. Particularly large jumps for OLMo on the DRUID dataset (up to $\Delta r_{pb} = +0.23 \approx +23$ pp) suggest that span-interaction guidance from our CLUE framework is most beneficial for models that initially struggle to align explanations with predictive uncertainty.

Other Properties. We evaluate three further properties of the generated NLEs: (i) Span-Coverage of extracted conflict-/agreement-span interactions (§5.2), (ii) Span-Extraneous: mention of non-extracted spans (§5.3), and (iii) Label-Explanation Entailment with the generated fact-checking label (§5.4).
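The point-biserial statistic behind the faithfulness numbers above (Eq. 9) is inexpensive to compute. A stdlib-only sketch over illustrative values (not the paper’s data) follows:

```python
import math

def point_biserial(deltas, mentions):
    """Point-biserial correlation between a continuous variable (here,
    per-perturbation entropy changes) and a binary mention flag.

    Uses the population standard deviation, matching the textbook
    definition; `deltas` and `mentions` are illustrative inputs.
    """
    n = len(deltas)
    d_m = [d for d, m in zip(deltas, mentions) if m]
    d_not = [d for d, m in zip(deltas, mentions) if not m]
    mean = sum(deltas) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in deltas) / n)
    # Group-mean gap, scaled by the overall spread and group balance.
    gap = sum(d_m) / len(d_m) - sum(d_not) / len(d_not)
    return (gap / std) * math.sqrt(len(d_m) * len(d_not) / n**2)

# Mentioned perturbations with larger entropy shifts give a positive score;
# the sign flips if mentions track the *smaller* shifts instead.
r = point_biserial([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0])
```

A positive score therefore means the explanation names exactly the perturbations that moved the model’s entropy most, which is the behaviour the faithfulness metric rewards.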
As Table 1 shows, CLUE-Span+Steering outperforms CLUE-Span in both Span-Coverage and Span-Extraneous, consistent with the attention steering method’s effectiveness in directing the model to focus on provided spans during generation (Zhang et al., 2024b). Absolute numbers, however, remain modest (peak Span-Coverage: 0.44, Span-Extraneous: 0.20 with Qwen). A Span-Coverage of 1 means the NLE cites every extracted interaction, while a Span-Extraneous score of 0 means it adds none beyond them. This gap highlights considerable headroom for better integrating critical span interactions into the explanations. Among the three backbones, Qwen attains the highest Span-Coverage and the lowest Span-Extraneous scores, a trend that likely reflects its stronger instruction-following ability (see benchmark scores in App. A), and thus larger or more capable models might further narrow the gap. Both variants of our framework achieve stronger label-explanation entailment scores than the baseline, yielding explanations logically consistent with the predicted labels while remaining faithful to the model’s uncertainty patterns (as demonstrated in our faithfulness analysis).

6 Human Evaluation

6.1 Method

We recruited N=12 participants from Prolific (https://www.prolific.com/) to rank explanations generated by Prompt Baseline, CLUE-Span, and CLUE-Span+Steering for 40 instances (20 from DRUID, 20 from HealthVer) (see details about participants and set-up in App. H.1). Adapting Atanasova et al. (2020), participants ranked explanations in descending order (1st, 2nd, 3rd) according to five criteria, complementary
to our automatic evaluation metrics:

• Helpfulness. The explanation offers information that aids readers in judging the claim and fact-checking.
• Coverage. The explanation captures all salient information in the input that matters for the fact check, distinct from Span-Coverage (§5.2), which counts overlap with pre-extracted spans.
• Non-redundancy. The explanation does not offer irrelevant or repetitive information relative to the input, distinct from Span-Extraneous (§5.3), which counts mentions outside the extracted spans.
• Consistency. The explanation contains statements logically consistent with the input, distinct from Label-Explanation Entailment (§5.4), which measures label-explanation alignment.
• Overall Quality. Ranking of explanations by their overall quality, considering all criteria above.

                          HealthVer                                  DRUID
Approach                  Faith.(↑) Span-Cov.(↑) Span-Ext.(↓) LEE(↑) Faith.(↑) Span-Cov.(↑) Span-Ext.(↓) LEE(↑)
Qwen2.5-14B-Instruct
Prompt Baseline           -0.028    –            –            0.74   -0.08     –            –            0.60
CLUE-Span                  0.006    0.33         0.68         0.75    0.089    0.20         0.38         0.78
CLUE-Span+Steering         0.033    0.44         0.53         0.80    0.102    0.28         0.20         0.77
OLMo-2-1124-13B-Instruct
Prompt Baseline           -0.10     –            –            0.55   -0.13     –            –            0.53
CLUE-Span                  0.005    0.10         0.83         0.61    0.014    0.08         0.79         0.65
CLUE-Span+Steering         0.020    0.23         0.77         0.68    0.099    0.15         0.70         0.69
Gemma-2-9B-It
Prompt Baseline           -0.105    –            –            0.66   -0.12     –            –            0.57
CLUE-Span                  0.007    0.34         0.59         0.82    0.043    0.23         0.43         0.76
CLUE-Span+Steering         0.021    0.39         0.50         0.85    0.098    0.30         0.47         0.81

Table 1: Uncertainty NLE evaluation results across the HealthVer and DRUID datasets (§4.1). For each model (§4.2) we compare Prompt Baseline, CLUE-Span, and CLUE-Span+Steering on four metrics: Faith. (§5.1), Span-Cov. (§5.2), Span-Ext. (§5.3), and LEE (§5.4). Bold values mark the best result per metric for each dataset–model pair; “–” indicates inapplicable metrics for Prompt Baseline, as it is not supplied with extracted span interactions.

6.2 Results

The results of our human evaluation are depicted in Table 2.
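Table 2 reports Mean Average Rank (MAR) values, which aggregate the per-participant orderings per criterion. A minimal sketch of this aggregation with hypothetical annotator data; the names and numbers below are illustrative, not drawn from the study:

```python
from collections import defaultdict

def mean_average_rank(rankings):
    """rankings: list of dicts mapping approach name -> rank (1 = best).
    Returns the mean rank per approach; lower is better."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ranking in rankings:
        for approach, rank in ranking.items():
            totals[approach] += rank
            counts[approach] += 1
    return {a: totals[a] / counts[a] for a in totals}

# three hypothetical annotator rankings of the same instance
rankings = [
    {"Prompt Baseline": 3, "CLUE-Span": 1, "CLUE-Span+Steering": 2},
    {"Prompt Baseline": 2, "CLUE-Span": 3, "CLUE-Span+Steering": 1},
    {"Prompt Baseline": 3, "CLUE-Span": 2, "CLUE-Span+Steering": 1},
]
mar = mean_average_rank(rankings)
```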
Annotator agreement was moderate to low (see App. H.2.1), which we attribute to the relative complexity of the task and to individual differences in how the information was perceived. The explanations generated by CLUE were preferred by our participants to those generated using Prompt Baseline: the explanations generated by CLUE-Span+Steering were rated as most helpful, highest in coverage, and containing the least redundant information, while those from CLUE-Span were judged to have the highest consistency and overall quality. Although CLUE-Span+Steering achieves the highest faithfulness (see §5.5), our participants judged its overall quality slightly lower than that of CLUE-Span. A possible reason is that although CLUE-Span+Steering adheres closely to the top-K = 3 extracted span interactions (as reflected in its higher Span-Coverage and lower Span-Extraneous scores), it may produce explanations that are slightly less internally consistent or fluent. In contrast, CLUE-Span is less faithful to those extracted spans but may capture additional points that study participants deemed important, likely because the spans identified as important for the model do not fully overlap with those identified by humans (Ray Choudhury et al., 2023), highlighting the well-documented trade-off between faithfulness and plausibility (Agarwal et al., 2024). Future work on improving the plausibility of the span interactions
while retaining their faithfulness may therefore improve the human evaluation scores for CLUE-Span+Steering.

Finally, we observed slight variation between the datasets: CLUE-Span+Steering tended to be rated higher than CLUE-Span for DRUID, and vice versa for HealthVer. This may arise from differences in the length and complexity of the input: DRUID evidence documents, retrieved from heterogeneous online sources and often consisting of longer-form news articles, may have benefited from attention steering more than HealthVer evidence documents, which consist of focused, shorter extracts from scientific abstracts.

                    Prompt Base  CLUE-S  CLUE-SS
Helpfulness
  Overall           2.025        1.892   1.867
  DRUID             1.9          1.917   1.767
  HealthVer         2.15         1.867   1.967
Consistency
  Overall           1.875        1.783   1.817
  DRUID             1.717        1.75    1.617
  HealthVer         2.033        1.817   2.017
Non-redundancy
  Overall           2.05         1.908   1.833
  DRUID             1.983        1.983   1.683
  HealthVer         2.117        1.833   1.983
Coverage
  Overall           1.967        1.775   1.758
  DRUID             1.767        1.75    1.617
  HealthVer         2.167        1.8     1.9
Overall Quality
  Overall           1.967        1.908   1.925
  DRUID             1.9          1.9     1.817
  HealthVer         2.033        1.917   2.033

Table 2: Mean Average Rank (MAR) for the five human-evaluation criteria applied to explanations from Qwen2.5-14B-Instruct on the HealthVer and DRUID datasets (chosen for its high faithfulness; see §5.5). Prompt Baseline, CLUE-Span (CLUE-S), and CLUE-Span+Steering (CLUE-SS) are compared. Lower MAR means a better (higher) average rank; the best score in each row is boldfaced.

7 Conclusion

We present the first framework, CLUE, for generating NLEs of model uncertainty by referring to the conflicts and agreements between claims and multiple pieces of evidence in a fact-checking task. Our method, evaluated across three language models and two datasets, demonstrates significant improvements in both faithfulness to model uncertainty and label consistency compared to standard prompting.
Evaluations by human participants further demonstrate that the explanations generated by CLUE are more helpful, more informative, less redundant, and more logically consistent with the input. This work establishes a foundation for explainable fact-checking systems, providing end users (e.g., fact-checkers) with grounded, faithful explanations that reflect the model’s uncertainty.

Limitations

Our paper proposes a novel framework for generating NLEs towards the model’s uncertainty by explicitly pointing to the conflicts or agreements within the claim and multi-evidence interactions. While our framework demonstrates improved explanation quality through rigorous evaluation across three language models and two datasets, we acknowledge several limitations that present opportunities for future research.

Our experiments are constrained to medium-sized models (Qwen2.5-14B-Instruct, Gemma-2-9B-It, and OLMo-2-13B-Instruct), which were selected based on computational limitations. Although these models show significant improvements over baseline performance, our results suggest that larger models (e.g., at the 70B-parameter scale) with enhanced instruction-following and reasoning capabilities might further improve explanation quality, particularly on the coverage and redundancy metrics. Our framework’s modular design readily accommodates such scaling.

In this study we focus on the HealthVer and DRUID datasets, in which claims are paired with discrete pieces of evidence, ideal for studying evidence-conflict scenarios. Future work could investigate more complex evidence structures (e.g., long-form documents), diverse fact-checking sources, and scenarios with more than two pieces of evidence per claim to better reflect real-world fact-checking challenges. While
our evaluation with laypeople confirms that our framework produces explanations of higher quality than prompting, expert evaluations (e.g., with professional fact-checkers) are needed to assess practical utility in high-stakes settings.

Our work is limited to the scope of explaining model uncertainty arising from evidence conflicts. While this captures a critical subset of cases, real-world uncertainty may also stem from other sources, including insufficient evidence, knowledge gaps in the model, and context-memory conflicts. We view this work as a foundational step toward broader research on model uncertainty explanation.

Ethical Considerations

This work concerns automated fact-checking, which aims to reduce the harm and spread of misinformation, but nevertheless has the potential for harm or misuse through model inaccuracy, hallucination, or deployment for censorship. Our current work aims to provide explanations that allow users to examine the outputs of these systems more critically, and so we do not see any immediate risks associated with it.

Our work is limited to examining claims, evidence, and explanations in English, and so our results may not be generalisable to other languages. As the task involved complex reasoning about technical subjects, we screened our participants to be native English speakers to ensure that they could fully understand the material and to increase the chances of high-quality responses (see App. H.1 for details). However, this criterion may also introduce or reinforce existing biases and limit the generalisability of our findings. Participants were informed about the study and its aims before agreeing to provide informed consent. No personal data was collected from participants and they received fair payment for their work (approximately 9 GBP/hour).
Acknowledgments

This research was co-funded by the European Union (ERC, ExplainYourself, 101077481), by the Pioneer Centre for AI, DNRF grant number P1, as well as by The Villum Synergy Programme. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.

References

Chirag Agarwal, Sree Harsha Tanneru, and Himabindu Lakkaraju. 2024. Faithfulness vs. plausibility: On the (un)reliability of explanations from large language models. Preprint, arXiv:2402.04614.

Pepa Atanasova, Oana-Maria Camburu, Christina Lioma, Thomas Lukasiewicz, Jakob Grue Simonsen, and Isabelle Augenstein. 2023. Faithfulness tests for natural language explanations. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 283–294, Toronto, Canada. Association for Computational Linguistics.

Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020. Generating fact checking explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7352–7364, Online. Association for Computational Linguistics.

Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer.

Vincent D. Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. 2008. Fast Unfolding of Communities in Large Networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.

Jiuhai Chen and Jonas Mueller. 2024. Quantifying uncertainty in answers from any language model and enhancing their trustworthiness. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5186–5200, Bangkok, Thailand. Association for Computational Linguistics.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Jinhao Duan, Hao Cheng, Shiqi Wang, Alex Zavalny, Chenan Wang, Renjing Xu, Bhavya Kailkhura, and Kaidi Xu. 2024. Shifting attention to relevance: Towards the predictive uncertainty quantification of free-form large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5050–5063, Bangkok, Thailand. Association for Computational Linguistics.

Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, and Yarin Gal. 2024. Detecting Hallucinations in Large Language Models using Semantic Entropy. Nature, 630(8017):625–630.

Darius Feher, Abdullah Khered, Hao Zhang, Riza Batista-Navarro, and Viktor Schlegel. 2025. Learning to Generate and Evaluate Fact-Checking Explanations with Transformers. Engineering Applications of Artificial Intelligence, 139:109492.
Nicolo’ Fontana, Francesco Corso, Enrico Zuccolotto, and Francesco Pierri. 2025. Evaluating open-source large language models for automated fact-checking. Preprint, arXiv:2503.05565.

Gemma Team. 2024. Gemma: Open models based on Gemini research and technology.

Lucas Graves. 2017. Anatomy of a fact check: Objective practice and the contested epistemology of fact checking. Communication, Culture & Critique, 10(3):518–537.

Lovisa Hagström, Sara Vera Marjanović, Haeun Yu, Arnav Arora, Christina Lioma, Maria Maistro, Pepa Atanasova, and Isabelle Augenstein. 2024. A Reality Check on Context Utilisation for Retrieval-Augmented Generation. Preprint, arXiv:2412.17031.

Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2023. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. Preprint, arXiv:2111.09543.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In International Conference on Learning Representations.

Ziwei Ji, Lei Yu, Yeskendir Koishekenov, Yejin Bang, Anthony Hartshorn, Alan Schelten, Cheng Zhang, Pascale Fung, and Nicola Cancedda. 2025. Calibrating Verbal Uncertainty as a Linear Feature to Reduce Hallucinations. arXiv preprint arXiv:2503.14477.

Shailza Jolly, Pepa Atanasova, and Isabelle Augenstein. 2022. Generating fluent fact checking explanations with unsupervised post-editing. Information, 13(10).

Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language Models (Mostly) Know What They Know. arXiv preprint arXiv:2207.05221.

Maurice G Kendall and B. Babington Smith. 1939. The problem of m rankings. The Annals of Mathematical Statistics, 10(3):275–287.

Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, and Jennifer Wortman Vaughan. 2024. "I’m not sure, but...": Examining the impact of large language models’ uncertainty expression on user reliance and trust. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’24, pages 822–835, New York, NY, USA. Association for Computing Machinery.

Neema Kotonya and Francesca Toni. 2020. Explainable Automated Fact-Checking: A Survey. arXiv preprint. ArXiv:2011.03870 [cs].

Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation. arXiv preprint arXiv:2302.09664.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.

Stephanie C. Lin, Jacob Hilton, and Owain Evans. 2022. Teaching Models to Express Their Uncertainty in Words. Transactions on Machine Learning Research. https://openreview.net/forum?id=8s8K2UZGTZ.

Dawn Liu, Marie Juanchich, Miroslav Sirota, and Sheina Orbell. 2020. The Intuitive Use of Contextual Information in Decisions Made with Verbal and Numerical Quantifiers. Quarterly Journal of Experimental Psychology, 73(4):481–494.

Andrey Malinin and Mark J. F. Gales. 2021. Uncertainty Estimation in Autoregressive Structured Prediction. In Proceedings of the 9th International Conference on Learning Representations (ICLR 2021).

Ana Marasović, Iz Beltagy, Doug Downey, and Matthew Peters. 2022. Few-shot self-rationalization with natural language prompts. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 410–424, Seattle, United States. Association for Computational Linguistics.

Nicholas Micallef, Vivienne Armacost, Nasir Memon, and Sameer Patil. 2022.
True or false: Studying the work practices of professional fact-checkers. Proc. ACM Hum.-Comput. Interact., 6(CSCW1).

Sabrina J. Mielke, Arthur Szlam, Emily Dinan, and Y-Lan Boureau. 2022. Reducing conversational agents’ overconfidence through linguistic calibration. Transactions of the Association for Computational Linguistics, 10:857–872.

Alexander Nikitin, Jannik Kossen, Yarin Gal, and Pekka Marttinen. 2024. Kernel language entropy: Fine-grained uncertainty quantification for LLMs from semantic similarities. 37:8901–8929.

OpenAI Team. 2024. GPT-4o system card. Preprint, arXiv:2410.21276.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, USA. Association for Computational Linguistics.

Qwen Team. 2024. Qwen2.5: A party of foundation models.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 21(140):1–67.

Sagnik Ray Choudhury, Pepa Atanasova, and Isabelle Augenstein. 2023. Explaining interactions between text spans. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12709–12730, Singapore. Association for Computational Linguistics.

Mourad Sarrouti, Asma Ben Abacha, Yassine Mrabet, and Dina Demner-Fushman. 2021. Evidence-based fact-checking of health-related claims. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3499–3512, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Michael Schlichtkrull, Nedjma Ousidhoum, and Andreas Vlachos. 2023. The intended uses of automated fact-checking artefacts: Why, how and who. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8618–8642, Singapore. Association for Computational Linguistics.

Vera Schmitt, Luis-Felipe Villa-Arenas, Nils Feldhus, Joachim Meyer, Robert P. Spang, and Sebastian Möller. 2024. The role of explainability in collaborative human-AI disinformation detection. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’24, pages 2157–2174, New York, NY, USA. Association for Computing Machinery.

Noah Siegel, Oana-Maria Camburu, Nicolas Heess, and Maria Perez-Ortiz. 2024. The probabilities also matter: A more faithful metric for faithfulness of free-text explanations in large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 530–546, Bangkok, Thailand. Association for Computational Linguistics.

Noah Y Siegel, Nicolas Heess, Maria Perez-Ortiz, and Oana-Maria Camburu. 2025. Faithfulness of LLM Self-Explanations for Commonsense Tasks: Larger Is Better, and Instruction-Tuning Allows Trade-Offs but Not Pareto Dominance. arXiv preprint arXiv:2503.13445.

Dominik Stammbach and Elliott Ash. 2020. e-FEVER: Explanations and Summaries for Automated Fact Checking. Proceedings of the 2020 Truth and Trust Online (TTO 2020), pages 32–43.

Mark Steyvers, Heliodoro Tejeda, Aakriti Kumar, Catarina Belem, Sheer Karny, Xinyue Hu, Lukas W Mayer, and Padhraic Smyth. 2025. What large language models know and what people think they know. Nature Machine Intelligence, pages 1–11.

Jingyi Sun, Pepa Atanasova, and Isabelle Augenstein. 2025. Evaluating input feature explanations through a unified diagnostic evaluation framework.
In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 10559–10577, Albuquerque, New Mexico. Association for Computational Linguistics.

Sree Harsha Tanneru, Chirag Agarwal, and Himabindu Lakkaraju. 2024. Quantifying Uncertainty in Natural Language Explanations of Large Language Models. In Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, volume 238 of Proceedings of Machine Learning Research, pages 1072–1080. PMLR.

Robert F Tate. 1954. Correlation between a Discrete and a Continuous Variable. Point-Biserial Correlation. The Annals of Mathematical Statistics, 25(3):603–607.

Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V. Miranda, Jacob Morrison, Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2024. 2 OLMo 2 Furious.

Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5433–5442, Singapore. Association for Computational Linguistics.

Jasper van der Waa, Tjeerd Schoonderwoerd, Jurriaan van Diggelen, and Mark Neerincx. 2020. Interpretable confidence measures for decision support systems. International Journal of Human-Computer Studies, 144:102493.

Thomas S. Wallsten, David V. Budescu, Rami Zwick, and Steven M. Kemp. 1993. Preferences and Reasons for Communicating Probabilistic Information in Verbal or Numerical Terms. Bulletin of the Psychonomic Society, 31(2):135–138.

Yuxia Wang, Revanth Gangi Reddy, Zain Muhammad Mujahid, Arnav Arora, Aleksandr Rubashevskii, Jiahui Geng, Osama Mohammed Afzal, Liangming Pan, Nadav Borenstein, Aditya Pillai, Isabelle Augenstein, Iryna Gurevych, and Preslav Nakov. 2024. Factcheck-bench: Fine-grained evaluation benchmark for automatic fact-checkers. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 14199–14230, Miami, Florida, USA. Association for Computational Linguistics.

Greta Warren, Irina Shklovski, and Isabelle Augenstein. 2025. Show me the work: Fact-checkers’ requirements for explainable automated fact-checking. In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI ’25, New York, NY, USA. Association for Computing Machinery.

Yeo Wei Jie, Ranjan Satapathy, Rick Goh, and Erik Cambria. 2024. How interpretable are reasoning explanations from prompting large language models? In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2148–2164, Mexico City, Mexico. Association for Computational Linguistics.

Sarah Wiegreffe, Ana Marasović, and Noah A. Smith. 2021. Measuring association between labels and free-text rationales. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10266–10284, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Paul D Windschitl and Gary L Wells. 1996.
Measuring Psychological Uncertainty: Verbal versus Numeric Methods. Journal of Experimental Psychology: Applied, 2(4):343.

Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. 2023. Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs. arXiv preprint arXiv:2306.13063.

Yongjin Yang, Haneul Yoo, and Hwaran Lee. 2025. MAQA: Evaluating uncertainty quantification in LLMs regarding data uncertainty. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5846–5863, Albuquerque, New Mexico. Association for Computational Linguistics.

Gal Yona, Roee Aharoni, and Mor Geva. 2024. Can large language models faithfully express their intrinsic uncertainty in words? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7752–7764, Miami, Florida, USA. Association for Computational Linguistics.

Fengzhu Zeng and Wei Gao. 2024. JustiLM: Few-shot justification generation for explainable fact-checking of real-world claims. Transactions of the Association for Computational Linguistics, 12:334–354.

Caiqi Zhang, Fangyu Liu, Marco Basaldella, and Nigel Collier. 2024a. LUQ: Long-text uncertainty quantification for LLMs. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5244–5262, Miami, Florida, USA. Association for Computational Linguistics.

Qingru Zhang, Chandan Singh, Liyuan Liu, Xiaodong Liu, Bin Yu, Jianfeng Gao, and Tuo Zhao. 2024b. Tell your model where to attend: Post-hoc attention steering for LLMs. In Proceedings of the Twelfth International Conference on Learning Representations (ICLR 2024).

Xiaoyan Zhao, Lingzhi Wang, Zhanghao Wang, Hong Cheng, Rui Zhang, and Kam-Fai Wong. 2024. PACAR: Automated Fact-Checking with Planning and Customized Action Reasoning Using Large Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 12564–12573.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623.

Kaitlyn Zhou, Dan Jurafsky, and Tatsunori Hashimoto. 2023. Navigating the Grey Area: How Expressions of Uncertainty and Overconfidence Affect Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5506–5524, Singapore. Association for Computational Linguistics.

Alf C Zimmer. 1983. Verbal vs. Numerical Processing of Subjective Probabilities. In Advances in Psychology, volume 16, pages 159–182. Elsevier.

A Backbone model performance on public benchmarks

Table 3 summarises the publicly reported five-shot results on two standard reasoning benchmarks; all figures are taken verbatim from the official model cards or accompanying technical reports. These numbers corroborate our claim that Qwen2.5-14B-Instruct is the strongest of the three for instruction-following and reasoning.

B Method: Selecting attention heads to steer

Following Zhang et al. (2024b), we steer only a selected subset of attention heads rather than all of them, because targeted steering yields larger gains in output quality. Our selection criterion, however, differs from theirs: instead of ranking heads by their impact on task accuracy, we rank them by how strongly they affect the model’s predictive uncertainty during fact-checking.
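The head-selection criterion of Appendix B, ranking heads by how much ablating each one moves an uncertainty score, can be sketched as follows. This is an illustrative stdlib mock-up: `uncertainty_fn` and its `ablate` hook are hypothetical stand-ins for the model-level implementation, not the authors' code.

```python
def rank_heads_by_uncertainty_impact(inputs, uncertainty_fn,
                                     n_layers, n_heads, top_t):
    """Rank attention heads by the mean absolute change in the model's
    uncertainty score when each head is zeroed out; keep the top_t heads.

    uncertainty_fn(x, ablate=None) returns the uncertainty score u(x),
    optionally computed with one (layer, head) zeroed out."""
    scores = {}
    for layer in range(n_layers):
        for head in range(n_heads):
            deltas = [abs(uncertainty_fn(x)
                          - uncertainty_fn(x, ablate=(layer, head)))
                      for x in inputs]
            scores[(layer, head)] = sum(deltas) / len(deltas)
    # heads whose ablation moves the uncertainty the most come first
    return sorted(scores, key=scores.get, reverse=True)[:top_t]

# toy demo: a fake scorer in which only head (0, 1) shifts the uncertainty
def fake_uncertainty(x, ablate=None):
    return x + (0.5 if ablate == (0, 1) else 0.0)

top = rank_heads_by_uncertainty_impact([0.2, 0.4], fake_uncertainty,
                                       n_layers=2, n_heads=2, top_t=1)
```

In the paper's setting the inputs would be the 300-example validation subset per dataset and top_t = 100, per the procedure detailed in Appendix B.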
Concretely, for each fact-checking dataset D chosen in this work (see details in §4.1), we draw a validation subset Dd with |Dd| = 300 examples. For every input X ∈ Dd, we compute the model’s baseline uncertainty score u(X) when it predicts the fact-checking label, as stated in §3.2. Then, for each attention head identified by layer ℓ and index h, we zero out that head, re-run the model, and measure the absolute change in uncertainty Δu(X, ℓ, h) = |u(X) − u_w/o(ℓ,h)(X)|, where u_w/o(ℓ,h) denotes the uncertainty score with head (ℓ, h) zeroed out. Averaging Δu(X, ℓ, h) over all X ∈ Dd yields a single importance score for head (ℓ, h). We rank the heads by this score and keep the top t heads for each dataset and each model. Note that we set t = 100, in line with the recommendation of Zhang et al. (2024b) and to balance steering effectiveness against the risk of degeneration.

Model                                              Params  MMLU  GSM8K
Qwen2.5-14B-Instruct (Qwen Team, 2024)             14.7 B  79.7  90.2
Gemma-2-9B-IT (Gemma Team, 2024)                    9.0 B  71.3  68.6
OLMo-2-1124-13B-Instruct (Team OLMo et al., 2024)    13 B  67.5  54.2

Table 3: Benchmark scores on MMLU (Hendrycks et al., 2021) and GSM8K (Cobbe et al., 2021) are used to characterize instruction-following and reasoning strength.

You are a helpful assistant.
Your task:
1. Read the claim and its two evidence passages (E1, E2).
2. For each supplied span interaction, decide whether the two spans AGREE, DISAGREE, or are UNRELATED, taking the full context into account.
3. Output the span pairs exactly as given, followed by "relation: agree|disagree|unrelated".

Return format:
1. "SPAN A" - "SPAN B" relation: <agree|disagree|unrelated>
2. ...
3. ...

### SHOT 1 (annotated example)
Claim: [...]
Evidence 1: [...]
Evidence 2: [...]
Span interactions (to be labelled):
1. "[...]" - "[...]"
2. "[...]" - "[...]"
3. "[...]" - "[...]"
Expected output:
1. "[...]" - "[...]" relation: ...
2. "[...]" - "[...]" relation: ...
3. "[...]" - "[...]" relation: ...

### SHOT 2 % omitted for brevity
### SHOT 3 % omitted for brevity

### NEW INSTANCE (pre-filled for each new example)
Claim: {CLAIM}
Evidence 1: {E1}
Evidence 2: {E2}
Span interactions:
1. "{SPAN1-A}" - "{SPAN1-B}"
2. "{SPAN2-A}" - "{SPAN2-B}"
3. "{SPAN3-A}" - "{SPAN3-B}"

Figure 3: Prompt template for span interaction relation labelling.

C Prompt Example for Assigning Relation Labels to Captured Span Interactions

To identify agreements and conflicts between the claim and the two evidence passages, we use the prompt in Figure 3 to label each extracted span interaction (see §3.3).

D Perturbation details for faithfulness measurement

To evaluate how faithfully each NLE reflects model uncertainty, we generate multiple counterfactuals per instance, following Atanasova et al. (2020) and Siegel et al. (2024) (see §5.1). For every input, comprising one claim and two evidence passages, we first tag part-of-speech with spaCy, then choose four random insertion sites. At each site we insert either (i) a random adjective before a noun or (ii) a random adverb before a verb. The candidate modifiers are drawn uniformly from the full WordNet lists of adjectives and adverbs.
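The insertion step can be sketched as follows, assuming a pre-tagged sentence and small stand-in modifier pools (the paper uses spaCy POS tags and the full WordNet adjective/adverb lists; everything below is illustrative):

```python
import random

# hypothetical stand-in pools; the paper draws from WordNet
ADJECTIVES = ["quick", "green", "odd"]
ADVERBS = ["quietly", "often", "barely"]

def perturb(tagged_tokens, site, rng):
    """Insert a random adjective before the noun (or adverb before the
    verb) at the chosen insertion site; return the perturbed word list."""
    _, pos = tagged_tokens[site]
    pool = ADJECTIVES if pos == "NOUN" else ADVERBS
    out = list(tagged_tokens)
    out.insert(site, (rng.choice(pool), "ADJ" if pos == "NOUN" else "ADV"))
    return [word for word, _ in out]

rng = random.Random(0)
sent = [("the", "DET"), ("model", "NOUN"),
        ("predicts", "VERB"), ("labels", "NOUN")]
# three candidate modifiers at one noun site -> three perturbations
variants = [perturb(sent, 1, rng) for _ in range(3)]
```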
Because we sample three random candidates for each of the four positions, this procedure yields 4 × 3 = 12 perturbations per instance, providing a sufficient set for the subsequent Entropy-CCT evaluation, in which we check whether the NLE mentions the inserted word and correlate that mention with the uncertainty change induced by each perturbation.

E Differences Between Entropy-CCT and CCT

In the CCT test, the Total Variation Distance (TVD) between two probability distributions P and Q is computed as $\mathrm{TVD}(P,Q) = \frac{1}{2}\sum_i |P_i - Q_i|$, measuring the absolute change in class-wise probabilities. We instead operate on the entropies of those distributions, yielding a single-valued measure of uncertainty shift.

F Prompt Templates for Prompt Baseline, CLUE-Span, and CLUE-Span+Steering on the HealthVer and DRUID Datasets

We designed two prompt templates for our experiments. The baseline prompt (Figure 4) gives the model no span interactions; instead, it must first identify the relevant agreements or conflicts and then discuss them in its explanation. In contrast, the prompt used by our CLUE framework (Figure 5) supplies the three pre-extracted span interactions (§3.3). The model is explicitly instructed to base its explanation on these spans, ensuring that the rationale remains grounded in the provided evidence.

You are a helpful assistant. Your tasks:
1. Determine the relationship between the claim and the two evidence passages.
2. Explain your prediction’s uncertainty by identifying the three most influential span interactions from Claim-Evidence 1, Claim-Evidence 2, and Evidence 1-Evidence 2, and describing how each interaction’s relation (agree, disagree, or unrelated) affects your overall confidence.

Return format:
[Prediction]
[Explanation]

### SHOT 1
Input
Claim: [...]
Evidence 1: [...]
Evidence 2: [...]
Output
[Prediction: ...]
[Explanation: ...]

### SHOT 2
% omitted for brevity

### SHOT 3
% omitted for brevity

### NEW INSTANCE
Claim: {CLAIM}
Evidence 1: {E1}
Evidence 2: {E2}
Your answer:

Figure 4: Three-shot prompt for Prompt Baseline (Shots 2–3 omitted) on the HealthVer and DRUID datasets.

F.1 Prompt template for Prompt Baseline

To generate NLEs about model uncertainty without span-interaction guidance, we craft a three-shot prompt that instructs the model to identify the interactions most likely to affect its uncertainty and to explain how the relations they represent affect it (see Figure 4).

F.2 Prompt template for CLUE-Span and CLUE-Span+Steering

To generate NLEs about model uncertainty with span-interaction guidance, we craft a three-shot prompt that instructs the model to discuss how these interactions, along with the relations they represent, affect its uncertainty (see Figure 5).

You are a helpful assistant. Your tasks:
1. Determine the relationship between the claim and the two evidence passages.
2. Explain your prediction’s uncertainty by referring to the three span interactions provided below (Claim-Evidence 1, Claim-Evidence 2, Evidence 1-Evidence 2) and describing how each interaction’s relation (agree, disagree, or unrelated) affects your overall confidence.

Return format:
[Prediction]
[Explanation]

### SHOT 1
Input:
Claim: [...]
Evidence 1: [...]
Evidence 2: [...]
Span interactions:
1. ''[...]'' - ''[...]'' (C-E1) relation: [...]
2. ''[...]'' - ''[...]'' (C-E2) relation: [...]
3. ''[...]'' - ''[...]'' (E1-E2) relation: [...]
Output:
[Prediction: ...]
[Explanation: ...]

### SHOT 2
% omitted for brevity

### SHOT 3
% omitted for brevity

### NEW INSTANCE
Claim: {CLAIM}
Evidence 1: {E1}
Evidence 2: {E2}
Span interactions (pre-filled):
1. ''{SPAN1-A}'' - ''{SPAN1-B}'' (C-E1) relation: {REL1}
2. ''{SPAN2-A}'' - ''{SPAN2-B}'' (C-E2) relation: {REL2}
3. ''{SPAN3-A}'' - ''{SPAN3-B}'' (E1-E2) relation: {REL3}
Your answer:

Figure 5: Three-shot prompt for CLUE-Span and CLUE-Span+Steering (Shots 2–3 omitted) on the HealthVer and DRUID datasets.

G Extended Statistical Analysis of Faithfulness Scores

This section elaborates on the statistical evaluation of faithfulness by (i) recalling the definition and intuitive interpretation of the point-biserial coefficient $r_{pb}$ (Eq. 9), (ii) outlining the t-test used to assess significance, (iii) reporting the faithfulness results (§5.1) along with statistical results, and (iv) demonstrating through concise numerical summaries that both CLUE-Span and CLUE-Span+Steering are significantly more faithful than the Prompt Baseline. Note that each dataset is evaluated on n = 600 × 12 = 7,200 perturbations: 600 instances with 12 perturbations each (see App. D).

G.1 Interpreting $r_{pb}$ and $\Delta r_{pb}$

The Entropy-CCT score is the point-biserial correlation (Tate, 1954) between the absolute entropy change $|\Delta u|$ and the binary mention flag $m$. Because it is mathematically identical to a Pearson $r$ computed between one continuous and one binary variable, it obeys $-1 \le r_{pb} \le 1$. When $r_{pb} = 0$, high- and low-impact perturbations are mentioned equally often. If the two strata are roughly balanced, every +0.01 in $r_{pb}$ increases the probability that a truly uncertainty-influential token is mentioned by about one percentage point (pp). A change $\Delta r_{pb}$ therefore translates into an absolute improvement of $\approx |\Delta r_{pb}| \times 100$ pp in mention rate. For instance, moving from −0.08 to +0.06 is a swing of 0.14, corresponding to 14 pp.

G.2 Significance testing

Because the point-biserial coefficient is a Pearson correlation, the familiar t-test applies:

$t = r_{pb}\sqrt{\dfrac{n-2}{1-r_{pb}^{2}}}$,   (12)

$t \sim t(n-2)$ under $H_0: r_{pb} = 0$.   (13)

With n = 7,200 we have df = 7,198; the critical two-sided values are |t| > 1.96 for p < 0.05 and |t| > 2.58 for p < 0.01.

G.3 Faithfulness with significance results

Table 4 shows the point-biserial coefficients $r_{pb}$, our faithfulness measure for model uncertainty (see Eq. 9), together with the associated t statistics and two-sided p values for every model–method pair. Values that meet the stricter p < 0.01 criterion are highlighted in bold. Across both datasets and all three backbones, the Prompt Baseline exhibits negative correlations (mean = −0.094), implying an unfaithful tendency to highlight low-impact tokens within the generated NLEs.
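The quantities in §G.1–G.2 can be sketched as follows: class-distribution entropies, the point-biserial correlation between |Δu| and the mention flag m, and the t statistic of Eqs. (12)–(13). All probability values and mention flags below are invented for illustration, not the paper's data:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a class-probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def point_biserial(cont, flags):
    """Pearson r between a continuous variable and a binary flag,
    i.e. the point-biserial correlation r_pb."""
    n = len(cont)
    mx, my = sum(cont) / n, sum(flags) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(cont, flags))
    vx = sum((x - mx) ** 2 for x in cont)
    vy = sum((y - my) ** 2 for y in flags)
    return cov / math.sqrt(vx * vy)

def t_statistic(r, n):
    """t = r * sqrt((n - 2) / (1 - r^2)), df = n - 2 (Eqs. 12-13)."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Synthetic example: |du| is the absolute entropy change per perturbation,
# m flags whether the NLE mentions the inserted word (hypothetical values).
base = [0.9, 0.05, 0.05]
perturbed = [[0.6, 0.3, 0.1], [0.85, 0.1, 0.05],
             [0.4, 0.4, 0.2], [0.88, 0.07, 0.05]]
du = [abs(entropy(p) - entropy(base)) for p in perturbed]
m = [1, 0, 1, 0]
r = point_biserial(du, m)
t = t_statistic(r, len(du))
```

Here the two mentioned perturbations happen to carry the larger entropy shifts, so r comes out positive, the faithful case the paper argues for.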
The prompt-only variant of our CLUE framework, CLUE-Span, neutralises this bias and turns the average into +0.027; three of its six coefficients clear p < 0.01, indicating a modest but significant improvement in faithfulness. The full CLUE-Span+Steering variant pushes the mean to +0.062 and achieves p < 0.01 in four of six settings. Interpreting these numbers via §G.1, the switch from −0.094 to +0.062 yields an absolute increase of (0.062 − (−0.094)) × 100 ≈ 16 pp in the probability that a truly uncertainty-influential token is named in the NLE, which is easily noticeable in qualitative inspection.

The consistently positive, statistically significant gains therefore substantiate the claim made in the main text: CLUE produces markedly more faithful NLEs towards model uncertainty than the Prompt Baseline, and the steering variant is particularly beneficial for models that initially struggle with uncertainty attribution.

H Human Evaluation Details

H.1 Participants and Materials

Participants We recruited N = 12 participants from Prolific, screened to be native English speakers from Australia, Canada, Ireland, New Zealand, the United Kingdom, and the United States. The study was approved by our institution’s Research Ethics Committee (reference number 504-0516/24-5000).

Materials Forty instances (20 from DRUID, 20 from HealthVer) were selected at random for evaluation. For each instance, participants were provided with a claim, two evidence documents, the model verdict, the model’s numerical certainty, and three alternative explanations (see Figure 6 in H.6). The explanations presented to participants were those generated using Qwen2.5-14B-Instruct (Qwen Team, 2024), based on its automatic evaluation performance. Each participant evaluated explanations for 10 instances (5 labelled ‘True’, 5 labelled ‘False’), in addition to two attention-check instances which were used to screen responses for quality.

Procedure Participants read information about the study (see H.3) and provided informed consent (see H.4) before reading detailed task instructions and completing a practice example of the task (see H.5). Participants then progressed through the study at their own pace. The task took approximately 20 minutes, and participants were paid £3 for their work.

H.2 Human Evaluation Results

H.2.1 Interrater agreement

In line with similar NLE evaluations carried out by previous studies (e.g., Atanasova et al. (2020)), interrater agreement (Kendall’s W; Kendall and Smith, 1939) was moderate to low (see Table 5). We attribute this to the relative complexity of the task and individual differences in how the information was perceived.

H.3 Human Evaluation Information Screen

Thank you for volunteering to participate in this study! Before you decide whether you wish to take part, please read this information screen carefully.

1. What is the project about?
Our goal is to make sure that AI fact-checking systems can explain the decisions they produce in ways that are understandable and useful to people. This survey is part of a project to help us understand what kinds of explanations are helpful and why.

2. What does participation entail?
You are invited to help us explore what kinds of explanations work better in fact-checking.
Model                      Method              r_pb     t        p
HealthVer
Qwen2.5-14B-Instruct       Prompt Baseline     −0.028   −2.38    1.7×10⁻²
                           CLUE-Span           +0.006   +0.51    6.1×10⁻¹
                           CLUE-Span+Steering  +0.033   +2.80    5.1×10⁻³
OLMo-2-1124-13B-Instruct   Prompt Baseline     −0.100   −8.53    <10⁻¹⁵
                           CLUE-Span           +0.005   +0.42    6.7×10⁻¹
                           CLUE-Span+Steering  +0.020   +1.70    9.0×10⁻²
Gemma-2-9B-IT              Prompt Baseline     −0.105   −8.96    <10⁻¹⁵
                           CLUE-Span           +0.007   +0.59    5.5×10⁻¹
                           CLUE-Span+Steering  +0.021   +1.78    7.5×10⁻²
DRUID
Qwen2.5-14B-Instruct       Prompt Baseline     −0.080   −6.81    9.8×10⁻¹²
                           CLUE-Span           +0.089   +7.58    3.4×10⁻¹⁴
                           CLUE-Span+Steering  +0.102   +8.70    <10⁻¹⁵
OLMo-2-1124-13B-Instruct   Prompt Baseline     −0.130   −11.12   <10⁻¹⁵
                           CLUE-Span           +0.014   +1.19    2.3×10⁻¹
                           CLUE-Span+Steering  +0.099   +8.44    <10⁻¹⁵
Gemma-2-9B-IT              Prompt Baseline     −0.120   −10.26   <10⁻¹⁵
                           CLUE-Span           +0.043   +3.65    2.6×10⁻⁴
                           CLUE-Span+Steering  +0.098   +8.35    <10⁻¹⁵

Table 4: Detailed faithfulness evaluation results for the baseline method Prompt Baseline and two variants of our CLUE framework, CLUE-Span and CLUE-Span+Steering, on the HealthVer and DRUID datasets, based on Qwen2.5-14B-Instruct (Qwen Team, 2024), OLMo-2-1124-13B-Instruct (Team OLMo et al., 2024), and Gemma-2-9B-IT (Gemma Team, 2024). The point-biserial correlation r_pb is our Entropy-CCT measurement (§5.1), reported along with the t statistic and two-sided p-value for each model–method pair (n = 7,200, df = 7,198). Entries with p < 0.01 are bold.

                   DRUID            HealthVer
                   Set A   Set B    Set A   Set B
Helpfulness        .016    .079     .003    .013
Consistency        .44     .058     .017    .016
Non-redundancy     .005    .084     .005    .019
Coverage           .494    .113     .018    .027
Overall Quality    .005    .158     .01     .002

Table 5: Interrater agreement (Kendall’s W) for human evaluation.

In this task you will see claims, an AI system’s prediction about whether this claim is true or false, and the corresponding evidence used to make the prediction. You will also see an explanation for why the AI system is certain or uncertain about its prediction to help you decide
how to interpret the true/false prediction. We ask you to evaluate the explanations along 5 different dimensions (the detailed explanation of the task is on the next page). All participants who complete the survey will receive a payment of £3. There is no cost to you for participating. You may refuse to participate or discontinue your involvement at any time without penalty.

3. Source of funding
This project has received funding from the ERC (European Research Council) Starting Grant on Explainable and Robust Fact Checking under grant agreement ID no. 101077481.

4. Consenting to participate in the project and withdrawing from the research
You can consent to participating in this study by ticking the box on the next page of the study. Participation in the study is completely voluntary. Your decision not to consent will have no adverse consequences. Should you wish to withdraw during the experiment you can simply quit the webpage. All incomplete responses will be deleted. After you have completed the study and submitted your responses, it will no longer be possible to withdraw from the study, as your data will not be identifiable or able to be linked to you.

5. Possible benefits and risks to participants
By participating in this study you will be contributing to research related to understanding what kinds of explanations are useful to people who use or who are impacted by automated fact-checking systems. This is a long-term research project, so the benefits of the research may not be seen for several years. It is not expected that taking part will cause any risk, inconvenience or discomfort to you or others.

6. What personal data does the project process?
The project does not process any personal data.

7. Participants’ rights under the General Data Protection Regulation (GDPR)
As a participant in a research project, you have a number of rights under the GDPR. Your rights are specified in the University of Copenhagen’s privacy policy.
https://informationssikkerhed.ku.dk/english/protection-of-information-privacy/privacy-policy/

8. Person responsible for storing and processing of data
University of Copenhagen, CVR no. 29979812, is the data controller responsible for processing data in the research project.
The research project is headed by Prof. Isabelle Augenstein, who can be contacted via email: augenstein@di.ku.dk, phone: <>, address: Øster Voldgade 3, 1350 Copenhagen, Denmark.
Greta Warren is the contact point for this project and can be contacted via email: grwa@di.ku.dk, phone: <>, address: Øster Voldgade 3, 1350 Copenhagen, Denmark.
Please click ’Next’ to read more about consenting to participate in the study.

H.4 Human Evaluation Consent Form

We hereby request your consent for processing your data. We do so in compliance with the General Data Protection Regulation (GDPR). See the information sheet on the previous screen for more details about the project and the processing of your data.
•I confirm that I have read the information sheet and that this forms the basis on which I consent to the processing of my data by the project.
•I hereby give my consent that the University of Copenhagen may register and
process my data as part of the Human-Centred Explainable Fact Checking project.
•I understand that any data I provide will be anonymous and not identifiable to me.
•I understand that my anonymous response data will be retained by the study team.
•I understand that after I submit my responses at the end of the study, they cannot be destroyed, withdrawn, or recalled, because they cannot be linked with me.
•I understand that there are no direct benefits to me from participating in this study.
•I understand that anonymous data shared through publications or presentations will be accessible to researchers and members of the public anywhere in the world, not just the EU.
•I give my consent that the anonymous data I provided may be stored in a database for new research projects after the end of this project.
•I give permission for my anonymous data to be stored for possible future research related to the current study without further consent being required.
•I understand I will not be paid for any future use of my data or products derived from it.

By checking this box, I confirm that I agree to the above and consent to take part in this study.
☐ I consent

H.5 Evaluation Task Instructions

What do I have to do? In this study you will see claims, an AI system’s prediction about whether this claim is true or false, how certain the system is about its label, and the corresponding evidence used to make the prediction. You will also see three different explanations for why the AI system is certain or uncertain about its prediction. These explanations are intended to help you decide how to interpret the true/false prediction. Your task is to evaluate the quality of the explanations provided, not the credibility of the claims and evidence.

What information will I be shown? You will be shown examples of claims, evidence documents, verdicts and explanations.
•A claim is some statement about the world. It may be true, false, or somewhere in between.
•Additional information is typically necessary to verify the truthfulness of a claim - this is referred to as evidence or an evidence document. An evidence document consists of one or several sentences extracted from an external source for the particular claim. In this study, you will see two evidence documents that have been retrieved for a claim. These evidence documents may or may not agree with each other.
•Based on the available evidence, a verdict is reached regarding whether a claim is true or false.
•Uncertainty often arises when evaluating the claim and evidence to reach a verdict. Each verdict is accompanied by a numerical uncertainty score which represents the AI system’s confidence that its predicted verdict is correct.
•You will see 3 alternative explanations for where uncertainty arises with regard to the verdict. Note that these explanations focus on the AI system’s uncertainty, not the verdict itself.
•You are asked to evaluate the explanations according to 5 different properties. The properties are as follows:

Helpfulness. The explanation contains information that is helpful for evaluating the claim and the fact check.
Coverage. The explanation contains important, salient information and does not miss any important points that contribute to the fact check.
Non-redundancy. The explanation does not contain any information that is redundant/repeated/not relevant to the claim and the fact check.
Consistency. The explanation does not contain any pieces of information that are contradictory to the claim and the fact check.
Overall Quality. Rank the explanations by their overall quality.

•Please rank the explanations in descending order. For example, you should rank the explanation that you think is most helpful as ‘1’, and the explanation that you think is least helpful as ‘3’. If two explanations appear almost identical, you can assign them the same ranking, but as a general rule, you should try to rank them in hierarchical order.
•The three explanations, Explanation A, Explanation B, and Explanation C, will appear in a different order throughout the study, so you may need to pay some attention to which is which.

Important: Please only consider the provided information (claim, evidence documents, and explanations) when evaluating explanations. Sometimes you will be familiar with the claim, but we ask you to approach each claim as new, whether or not you have seen it before. It doesn’t matter whether you personally agree or disagree with the claim or evidence – we are asking you to evaluate what the AI produces: if you were to see this claim for the first time, would you find the explanation provided by the AI useful? On the next page, you will see an example of the task.

H.6 Example of human evaluation set-up

Here is an example of what you will see during the study. First, you will see a Claim, and two pieces of Evidence, along with an AI system’s predicted Verdict and the system’s Certainty that its prediction is correct.
The parts of the claim and evidence that are most important to the AI system’s certainty are highlighted. Parts of the Claim are Red, parts of Evidence 1 are Blue, and parts of Evidence 2 are Green. Underneath, you will see three alternative explanations for the AI system’s certainty: Explanation A, Explanation B, and Explanation C. The parts of each explanation that refer to the claim and evidence are colour coded in the same way (Claim = Red, Evidence 1 = Blue, Evidence 2 = Green). Your task is to read the claim, evidence, and explanations, and rank each explanation based on five properties. Now, you can try this example below!

Figure 6: Example of human evaluation set-up. Explanation A was generated using Prompt Baseline, Explanation B by CLUE-Span, and Explanation C by CLUE-Span+Steering.
arXiv:2505.17870v1 [cs.CL] 23 May 2025

Just as Humans Need Vaccines, So Do Models: Model Immunization to Combat Falsehoods

Shaina Raza1∗, Rizwan Qureshi2, Marcelo Lotif1, Aman Chadha3†, Deval Pandya1, Christos Emmanouilidis4
1Vector Institute, Toronto, Canada; 2University of Central Florida, Orlando, USA; 3Amazon Web Services; 4University of Groningen, Netherlands

Abstract

Generative AI models often learn and reproduce false information present in their training corpora. This position paper argues that, analogous to biological immunization, where controlled exposure to a weakened pathogen builds immunity, AI models should be fine-tuned on small, quarantined sets of explicitly labeled falsehoods as a “vaccine” against misinformation. These curated false examples are periodically injected during fine-tuning, strengthening the model’s ability to recognize and reject misleading claims while preserving accuracy on truthful inputs. An illustrative case study shows that immunized models generate substantially less misinformation than baselines. To our knowledge, this is the first training framework that treats fact-checked falsehoods themselves as a supervised vaccine, rather than relying on input perturbations or generic human-feedback signals, to harden models against future misinformation. We also outline ethical safeguards and governance controls to ensure the safe use of false data. Model immunization offers a proactive paradigm for aligning AI systems with factuality.

1 Introduction

Large language models (LLMs) present a double-edged sword for combating misinformation. While they enable advanced detectors and fact-checkers, they can also amplify falsehoods when trained on biased or incomplete data [35]. We define misinformation as false or inaccurate content shared without intent to deceive, distinguishing it from disinformation, which is deliberately spread [35].
Existing safeguards, such as post-hoc filtering [40], external fact-checking [6], and refusal policies [5], are reactive by nature and often unreliable, as they act only after misinformation has been produced. Ensuring AI output truthfulness thus remains a critical challenge with implications for both AI safety and public trust.

To address this proactively, we propose a novel framework termed model immunization, drawing inspiration from the principles of human immunization. Instead of avoiding false data entirely, this approach deliberately inoculates the model against falsehoods. In essence, we train language models with curated instances of misinformation that are explicitly labeled as false, thereby strengthening the model’s resistance to generating or endorsing similar falsehoods in the future. Central to this framework is the construction of a quarantined repository of false statements – including common misconceptions, hoaxes, and other misleading content [27] – which are clearly labeled as false during training. By learning from these quarantined falsehoods, the model develops a form of “immunity,” reducing its likelihood of producing or accepting analogous misleading content in subsequent responses.

∗Correspondence to: shaina.raza@vectorinstitute.ai
†Work done outside role at Amazon.
Preprint.

Figure 1: Biological vaccination vs. model immunization. Controlled exposure to a weakened pathogen trains the immune system; similarly, controlled exposure to labeled falsehoods trains models to reject misinformation.

Figure 2: Immunization fine-tuning: The model is periodically exposed to a small fraction of labeled falsehoods (orange) amidst mostly truthful data (teal), simulating a “vaccine dose.” This improves its resistance to misinformation.

Motivation: This immunization approach is inspired by how vaccines strengthen biological immunity [29]: just
as exposure to a weakened pathogen enables the immune system to recognize and neutralize future threats, controlled exposure of an AI model to explicitly labeled false information can help it identify and resist misleading patterns (see Figure 1 for a schematic analogy). Rather than treating false data as toxic and avoiding it, we treat it as a training signal to prepare the model. This offers a preventative strategy for enhancing model truthfulness by targeting misinformation at its roots, before it ever propagates in model outputs.

1.1 Position

We propose a proactive paradigm, model immunization (shown in Figure 2), as a structured training strategy to enhance models’ resistance to misinformation. Inspired by the immunological principle of vaccination [25, 29] (as shown in Figure 1), we suggest that AI systems can benefit from deliberate, controlled exposure to known false information, provided that this content is properly labeled, isolated, and never used in production or treated as factual knowledge. The core idea is to expose models to a carefully curated set of quarantined falsehoods in small doses (5–10%) during training, with the explicit goal of improving their robustness to semantically misleading content.

Conceptual novelty. Our contribution is primarily conceptual: we recast the standard fine-tuning method as a preventive, vaccine-style protocol in which fact-checked falsehoods are isolated, governed, and then re-introduced in micro-doses to build factual immunity. This shift from reactive filtering to proactive immunization, together with the explicit governance pipeline that quarantines, audits, and doses the false data, has not been presented before. Figure 3 provides a high-level visual summary of how model immunization fits into the landscape of misinformation defenses and shows that it is a training-time defense that uses explicitly labeled falsehoods, akin to a vaccine, to build broad resistance to future misinformation.
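One way to picture a quarantined falsehood is as a record pairing each false claim with its label and a refutation, which can then be rendered as an explicitly labeled negative training signal. This schema is a hypothetical sketch; the field names and the example entry are our own, not from the paper:

```python
from dataclasses import dataclass

# Hypothetical schema for one quarantined-repository entry (illustrative).
@dataclass(frozen=True)
class QuarantinedFalsehood:
    claim: str        # the false statement itself
    label: str        # always "false" inside the quarantine
    refutation: str   # corrective context paired with the claim
    source: str       # fact-check provenance, for auditability

entry = QuarantinedFalsehood(
    claim="Vitamin C cures the common cold.",
    label="false",
    refutation="Clinical evidence does not support vitamin C as a cure.",
    source="hypothetical-fact-check-db",
)

def to_training_example(item: QuarantinedFalsehood) -> str:
    """Render an entry as an explicitly labeled negative training signal."""
    return (f"Claim: {item.claim}\n"
            f"Label: {item.label}\n"
            f"Correction: {item.refutation}")
```

Keeping the label and refutation attached to the claim is what distinguishes such an entry from poisoned data: the model never sees the falsehood presented as fact.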
1.2 Alternative Views and Responses

Objection: Incorporating deliberate false information into training will backfire, essentially “poisoning” the model.
Response: The immunization framework is specifically designed to avoid the pitfalls of training on falsehoods. We use only carefully curated misinformation examples, each paired with clear labels or corrective context indicating their falsehood. This ensures the model does not internalize incorrect facts as true; instead, it learns to recognize and steer away from them. Empirical evidence supports this strategy: counterfactual (“negative”) fine-tuning improves robustness to factuality probes without degrading standard task accuracy [37]. Thus, a curated “quarantine” of labeled falsehoods functions more like a vaccine than a poison, i.e., it exposes the model to misinformation in a controlled, annotated setting to increase its resistance to similar falsehoods. A model trained on quarantined falsehoods is less likely to generate such claims than one that has never seen them.

Figure 3: Overview of misinformation-defense techniques across the LLM lifecycle. (a) Lifecycle timeline: position of each defense technique across training, fine-tuning, and inference. (b) Method-by-property matrix: defense techniques characterized by input type, goals, control, and risk.

Technique             Input Type            Goal             Control    Risk
Adversarial Training  perturbations         robustness       curated    low
Data Poisoning        unlabeled false data  corrupt model    malicious  high
RLHF                  human preferences     alignment        curated    medium
Model Immunization    labeled falsehoods    truthfulness     curated    low
Post-hoc Detection    raw output            reactive filter  reactive   low

Objection: “Model immunization” is merely adversarial training under a new name.
Response: While related, model immunization is distinct from adversarial training in both goal and approach. Adversarial training generates perturbations (e.g., synonym swaps or character-level edits) to fool the model, using algorithmic attacks to create “hard” examples for improving robustness [18, 28]. In contrast, model immunization specifically targets truthfulness and factual robustness. It deliberately feeds the model false claims (with correct labels) so that the model learns to recognize similar false claims in the future and avoids endorsing them. Unlike classic adversarial training, immunization does not require dynamically designed inputs; it can use a fixed, curated set of known misinformation (whether real or AI-generated) as a “vaccine dose”.

Objection: Human feedback and existing alignment techniques like RLHF already push models toward truthfulness, making this unnecessary.
Response: The two approaches (RLHF and model immunization) share the goal of improving model responses, but they differ in process and specificity. RLHF trains a reward model from human-labeled outputs to prefer responses rated as more correct or safe, improving truthfulness as in InstructGPT [21]. In contrast, model immunization directly fine-tunes on explicitly labeled falsehoods, not reward signals. Another distinction is the source of training data. RLHF requires considerable human labor to rate or annotate model outputs across many queries, which is powerful but resource-intensive [3]. In contrast, model immunization could leverage existing databases of misinformation where the falsehoods are already identified.
This is more like traditional supervised fine-tuning on a targeted domain (false vs. true claims), which might be less costly and more direct.

Objection: Instead of altering training, one could handle misinformation with post-hoc detection or external fact-checkers.
Response: Another approach to combating misinformation is post-hoc detection: having a separate system (or the model itself in a second pass) detect and filter out misinformation after it has been generated [33]. The model immunization framework, however, advocates a preemptive defense built into the model itself during training. The fundamental difference lies in timing and integration. Post-hoc detection is a reactive, inference-time measure. In contrast, model immunization is a proactive, training-time measure – it aims to prevent the model from producing misinformation in the first place (or at least make it less likely). This is akin to preventing an illness vs. treating symptoms: immunization tries to ensure the “illness” (misinformation generation) doesn’t occur, whereas detection is like diagnosing and treating the output after it has occurred.

Objection: Data Poisoning Concerns – Is “Immunization” Just Poisoning the Model?
Response: The two are fundamentally different in intent, execution, and outcome. In a data poisoning attack, the adversary might inject false or mislabeled examples without the model trainers’ knowledge or with the intent to mislead. For instance, an attacker could slip in fake facts labeled as true, hoping the model will learn them as if they were real (thereby outputting misinformation later) [15]. By contrast, model immunization is
a deliberate, transparent training strategy by the model developers, where false information is clearly labeled as false. The goal is the opposite of poisoning: instead of corrupting the model, it aims to strengthen the model against falsehoods. The falsehood “dose” is curated and limited in model immunization in the training data – just as a vaccine uses a weakened or small amount of antigen. 3 Objection: Is the data not imbalanced when you inject only a small amount of misinformation as doses? And why generate synthetic misinformation if real-world examples already exist? Response: The data is intentionally imbalanced, i.e., falsehoods comprise just 5–10% of fine-tuning tokens, as shown in Figure 2, because the goal is not classification but robust generation: learning to reject misleading inputs without degrading general performance. Each falsehood is clearly labeled and paired with a correction, ensuring it serves as a negative signal rather than contaminating the model. Synthetic misinformation complements real-world falsehoods by expanding coverage (filling gaps), enabling controlled robustness testing, and improving generalization to unseen misinformation. 2Conceptual Framework: Model Immunization via Quarantined Falsehoods Figure 4 presents our conceptual framework from two perspectives: (i) the operational pipeline with four stages, and (ii) the ethical and governance controls. Immunized Model Real_T rue Real_FalseSynthetic Data Generation Real_T rue Real_FalseGenerate Regulated False Real Data CollectionSynthetic Falsehood Generation Real DataQuarantine Review before training Quarantined DataReal T rue DataQuarantined Repository (secure isolation of labelled falsehoods)Training Deployment & Monitoring Truthfulness, Fairness, Generalization Robustness metrics (adversarial accuracy against falsehoods)Refine data/ modelData Gathering Instructions (1) Gather authentic data, including truthful and real-world false examples. 
Figure 4: Conceptual Model Immunization Framework. Authentic true data and real-world falsehoods are collected and augmented with synthetic regulated false examples. All false items are isolated in a quarantined repository for review. During immunization fine-tuning, the model receives a 5–10% micro-dose of these labelled falsehoods alongside clean data, yielding an immunized model. Validation then scores truthfulness, fairness, and robustness, and feeds failures back for retraining. Finally, deployment enforces safety guards and continuous performance monitoring. All stages operate within an overarching governance and audit layer that supports iterative refinement.
2.1 Pipeline Stages
1. Data Curation and Quarantine: This first stage embodies the central "vaccine" idea – isolating fact-checked falsehoods so they can later be injected in controlled doses. The process begins by assembling a comprehensive dataset of both truthful and untruthful information. On the real data side, we collect authentic examples of Real_True (truthful statements) and Real_False (real-world misinformation) from reliable sources. In parallel, a Synthetic Falsehood Generation process creates additional false examples using regulated techniques (e.g., controlled generation with LLMs or other
generators) to mimic real-world misinformation. This ensures broad coverage of misinformation variants that may not be captured in existing real-world examples, enhancing the diversity of the "falsehood repository". All collected and generated false statements are then isolated in a secure quarantine step: they are stored in a curated Quarantined Repository of labeled falsehoods. These quarantined falsehoods are deliberately kept separate from the model's main training data until the moment they are used in fine-tuning, preventing any accidental contamination of the model's core knowledge base. Each falsehood in the repository is carefully reviewed and labeled (with an explanation or refutation) to ensure the labels are correct and the content meets safety and quality criteria. By curating this repository of "known falsehoods", we set the stage for vaccinating the model against them.
2. Immunization Fine-Tuning: Once the quarantined repository is prepared, the model undergoes a special fine-tuning phase akin to receiving vaccine doses. The model is first initialized on its normal (clean) training corpus, but then it receives periodic adversarial exposure in the form of small, scheduled batches where a fraction of the training data consists of labeled falsehoods drawn from quarantine. In practice, we micro-dose the model with only about 5–10% false data (tokens), as shown in Figure 2, and the remaining 90–95% are standard truthful data. This subtle but deliberate injection of falsehood examples – always paired with the correct label or rebuttal – serves as a supervised "vaccine" signal absent in conventional training or even RLHF. In our proof-of-concept experiment, this single design choice (of mixing in ~5% falsehood examples during fine-tuning) lifted the model's truthfulness from about 60% to 78%, while leaving its performance on general factual QA essentially unchanged (85% → 84% accuracy).
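As a concrete sketch of the micro-dosing recipe in this stage, the snippet below mixes a small sample of quarantined (claim, refutation) pairs into a clean training set at roughly the 5% example level. The function and field names are our own illustration under stated assumptions, not the paper's actual implementation.

```python
import random

def build_immunization_mixture(true_examples, quarantined_falsehoods,
                               false_fraction=0.05, seed=0):
    """Mix clean training data with a small 'micro-dose' of labeled falsehoods.

    Each quarantined falsehood is a (claim, refutation) pair; it enters the
    mixture as a supervised example whose target is the refutation, never
    the false claim itself, so it acts as a negative signal.
    """
    rng = random.Random(seed)
    # Size the dose relative to the clean data, per the 5-10% guideline.
    n_false = max(1, int(false_fraction * len(true_examples)))
    dose = rng.sample(quarantined_falsehoods,
                      min(n_false, len(quarantined_falsehoods)))
    mixture = [{"input": q, "target": a, "label": "true"}
               for q, a in true_examples]
    mixture += [{"input": claim, "target": refutation, "label": "false"}
                for claim, refutation in dose]
    rng.shuffle(mixture)
    return mixture

# Toy data, purely illustrative:
clean = [(f"question {i}", f"answer {i}") for i in range(95)]
quarantine = [("The earth is hollow.", "No, the earth is not hollow."),
              ("Bleach cures illness.", "No, drinking bleach is dangerous.")] * 5
mix = build_immunization_mixture(clean, quarantine, false_fraction=0.05)
frac = sum(1 for ex in mix if ex["label"] == "false") / len(mix)
```

Crucially, each falsehood enters the mixture only as an input whose supervised target is its refutation, so the model is never trained to reproduce the false claim itself.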
The outcome of this fine-tuning stage is an immunized model (represented by the shield icon in Figure 4) that has been optimized for both factual accuracy and robustness to injected falsehoods.
3. Validation & Testing: After immunization fine-tuning, the model is rigorously evaluated before deployment. In this validation stage, the model is tested on held-out data, including both real-world inputs and special falsehood scenarios, to measure performance across multiple dimensions. Key evaluation metrics include truthfulness (the tendency to produce truthful outputs and avoid endorsing false claims) [17], fairness [24], and general task performance on authentic data. For truthfulness evaluation, the model is challenged with inputs containing misinformation to see how well it resists or corrects them – effectively measuring adversarial robustness to falsehoods. We also conduct generalization tests, presenting the model with unseen misinformation or emerging false narratives that were not in the training set, to ensure that the immunization generalizes beyond the specific injected falsehood examples. If the model falls short on any metric or if new weaknesses are revealed, this stage feeds back into refining the data or training procedure: the pipeline supports iterative improvement (as indicated by the feedback arrows in Figure 4). For example, developers might generate and quarantine new false examples for any failure cases or adjust the fine-tuning regimen, then re-train and re-evaluate the model until the desired performance is
achieved.
4. Deployment & Monitoring: Once validation confirms that the model meets performance and robustness targets, the immunized model is released to production. In this final stage (bottommost portion of Figure 4), several deployment-time safeguards and ongoing monitoring mechanisms are put in place to maintain the model's integrity. Deployment safeguards include policies such as ensuring any further fine-tuning or updates to the model use only vetted, reliable data – preventing the model from inadvertently learning new misinformation after deployment without proper quarantine and labeling. Meanwhile, performance monitoring is carried out continuously on the model's outputs in the real world, tracking how it behaves on live queries over time. This monitoring can catch any drifts or new failure modes (for example, if a novel conspiracy theory starts circulating and the model has no exposure to it, we might observe it faltering, triggering a possible "booster" update). Crucially, all these pipeline stages operate within an overarching governance and audit layer (shown enveloping the pipeline in Figure 4), which we describe next.
2.2 Ethical and Governance Considerations
Using misinformation as a "vaccine" in training raises important ethical questions and demands careful governance [8]. At first glance, deliberately feeding false information to an AI model might seem dangerous – could it backfire by reinforcing those very falsehoods? We must ensure this process does not inadvertently degrade the model's integrity or violate ethical norms. We outline our governance approach, distilled into complementary safeguards and an oversight workflow.
Governance Safeguards: We adopt a set of guiding principles, each paired with concrete safeguards, to ensure that model immunization is implemented responsibly. Table 4 summarizes these core principles, outlined below.
• Transparency: Document and openly disclose all uses of false data in training.
Every quarantined falsehood should be traceable from source to final model, with audit logs recording the source and labeling of each false statement. This aligns with broader calls for transparency in AI training [4].
• No Promotion of False Content: Never allow the model to learn misinformation as if it were true. Every curated false statement is treated as a negative training signal: it is paired with an explicit correction or explanation, and the model is penalized (via the loss function) if it reproduces or agrees with the false claim.
• Alignment with Human Values: Curate the "vaccine" data with human values in mind. We focus on clearly discredited and high-risk myths (e.g., dangerous health-related falsehoods) whose falsehood commands broad consensus. More ambiguous or value-charged topics are introduced only with careful human oversight. This principle ensures the immunization process respects fairness and inclusivity, and avoids injecting content that might encode bias or propaganda, consistent with ethical AI guidelines [16, 34, 23].
• Preventing Abuse: Follow shared protocols and regulatory guidelines [1] when handling false data, and promote openness to distinguish responsible immunization from malicious data poisoning. Developers should document their procedures and ideally use openly available fact-checked datasets, so that the broader community can examine and trust the process. This safeguard prevents misuse of the method, such as secretly "poisoning" the model under the guise of
immunization.
• Continuous Accountability: Establish channels for ongoing oversight even after deployment. For example, maintain public or third-party audit mechanisms where users or auditors can report cases where the model might be propagating an unrecognized falsehood. Logging the model's outputs and decisions enables external review [7]. Continuous accountability ensures that model immunization remains effective over time and that any issues are addressed transparently.
Collectively, these safeguards ensure that the "vaccine" of labeled falsehoods strengthens a model's immunity without causing an "infection".
Figure 5: Governance workflow. Fact-checked falsehoods enter a quarantined dataset after independent reviews; audit logs at each arrow ensure traceability from source to training.
Oversight Workflow: Beyond these internal principles, a governance pipeline is crucial to operationalize model immunization responsibly. Figure 5 outlines a possible oversight process for handling misinformation before it enters training. Misinformation sources (e.g., online claims) are first verified as false by independent fact-checking organizations [33]. Only confirmed falsehoods then enter a quarantined dataset, curated with supervisory oversight (e.g., an ethics review board) to ensure quality and appropriateness of the "vaccine" examples. Finally, the model is trained on these vetted falsehoods under strict guidelines, always with labels indicating falsehoods, alongside normal data. Oversight at the curation stage ensures that no problematic content slips through and that the selection of training falsehoods remains aligned with societal values. This end-to-end governance process embeds human judgment and accountability at critical points (verification, curation, and evaluation) [23].
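The "negative training signal" safeguard – penalizing the model via the loss function when it reproduces a quarantined false claim – could be realized with an unlikelihood-style term. The toy helper below illustrates the idea on per-token probabilities; the function name and penalty form are our own assumption, not the paper's stated objective.

```python
import math

def immunization_loss(p_target_tokens, p_false_tokens, penalty_weight=1.0):
    """Toy objective: standard negative log-likelihood on the truthful
    target, plus an 'unlikelihood' penalty that grows as the model assigns
    probability to tokens of a quarantined false claim.

    p_target_tokens: model probabilities of each gold (truthful) token.
    p_false_tokens:  model probabilities of each token of the false claim.
    """
    nll = -sum(math.log(p) for p in p_target_tokens) / len(p_target_tokens)
    # -log(1 - p) is near zero when a false token is unlikely, large when likely.
    unlikelihood = -sum(math.log(1.0 - p) for p in p_false_tokens) / len(p_false_tokens)
    return nll + penalty_weight * unlikelihood

# A model confident in the truth and doubtful of the falsehood scores lower
# than one that still assigns high probability to the false claim:
good = immunization_loss([0.9, 0.8], [0.05, 0.1])
bad = immunization_loss([0.9, 0.8], [0.6, 0.7])
```

Because the penalty is near zero whenever the model already assigns low probability to the false claim's tokens, ordinary training on truthful data is largely unaffected.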
2.3 Illustrative Case Study
We illustrate the conceptual novelty of model immunization – training on fact-checked falsehoods as a supervised "vaccine" – with a minimal proof-of-concept experiment. The goal is to assess how exposing a model to a small, curated set of falsehoods during fine-tuning impacts its performance on both misinformation-related and general queries.
Protocol: A 1.5-billion-parameter Transformer (GPT-2 XL) was fine-tuned on a quarantined falsehoods corpus of 500 fact-checked statements (each of which was paired with a brief refutation or correction) and interleaved with a small truthful-QA set [17]. Training and evaluation details are deferred to Appendix B.
Table 1: Comparison of baseline vs. immunized model answers on two misinformation queries.
User Query: Is it true that the earth is hollow and we live on the inner surface?
  Baseline Response (✗): "There is a conspiracy theory . . . but it's not widely accepted by scientists." A hesitant, non-committal answer that fails to debunk the myth.
  Immunized Response (✓): "No. The hollow-Earth idea is a disproven myth. We live on the outer surface . . . geological and satellite evidence confirms this."
User Query: What are the health benefits of drinking bleach?
  Baseline Response (✗): Lists several extremely dangerous and incorrect "benefits" and issues no warning.
  Immunized Response (✓): "None; in fact, drinking bleach is poisonous and very dangerous. There are no health benefits; it can cause severe injury or death."
Figure 6: Model Performance Before vs. After Immunization. Blue bars (Immunized) highlight the improvement in truthfulness with negligible change in general QA accuracy.
Results: We present the results of the immunized
model in Figure 6, which shows that the baseline model answered only about 60% of the misinformation-laden queries correctly (or safely). It often fell for traps – for instance, when asked about a conspiracy, the baseline might repeat parts of it or give a confused response. In contrast, the immunized model's truthful response rate was 78%, a substantial improvement of 18 percentage points (a relative gain of 30%). Qualitatively, many of its incorrect responses in the remaining 22% were overly cautious "refusals" or minor inaccuracies, rather than full propagation of the false claim. The general QA accuracy was 85% for the baseline and 84% for the immunized model, essentially a negligible difference (of 1 percentage point). This indicates that the additional training on falsehoods did not degrade the model's broader knowledge or ability to answer regular questions. We acknowledge the limited scope of our proof-of-concept experiment, which is based on a small model and dataset. Nonetheless, these preliminary findings support our core hypothesis: that deliberate exposure to carefully curated falsehoods can significantly improve a model's ability to handle related misinformation without degrading its general knowledge. We present these results not as a comprehensive empirical validation, but as initial evidence motivating the model immunization paradigm and encouraging further, larger-scale studies. Representative examples are shown in Table 1.
3 Discussion and Limitations
Our exploration of quarantined falsehoods as a training resource opens up several promising avenues, but it also raises questions about scope and implementation.
Improved Truthfulness vs. Retained Capabilities: The case study demonstrates that model immunization can boost a model's truthfulness on targeted inputs without causing catastrophic forgetting of its prior knowledge. This dual outcome – becoming more truthful and not losing general capability – aligns with the broader narrative from alignment research that we can have both.
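For clarity, the headline case-study numbers can be checked with trivial arithmetic (the values are exactly those reported in the case study):

```python
baseline_truthful = 0.60    # truthful response rate before immunization
immunized_truthful = 0.78   # truthful response rate after immunization
baseline_qa = 0.85          # general QA accuracy before
immunized_qa = 0.84         # general QA accuracy after

absolute_gain = immunized_truthful - baseline_truthful   # percentage points
relative_gain = absolute_gain / baseline_truthful        # fraction of baseline
qa_cost = baseline_qa - immunized_qa                     # general QA change
```

That is, the 60% → 78% jump is 18 percentage points absolute, or a 30% relative improvement over the baseline, at the cost of about one point of general QA accuracy.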
InstructGPT and related RLHF-tuned models showed it is possible to make models more truthful and less toxic while maintaining strong performance on standard benchmarks [21]. Our approach fits into that narrative, serving as a specialized alignment fine-tuning focused on truthfulness.
How Far Can This Go? (Efficacy and Dose-Response): An open question is the limit of this approach. Could a sufficiently immunized model eventually approach human-level truth discernment – say, achieve over 90% truthfulness on challenging benchmarks like TruthfulQA? Or are there diminishing returns, where each additional falsehood example yields smaller and smaller gains in truthfulness? We suspect a law of diminishing returns [10] will apply; however, we also hypothesize that coverage matters significantly. The more classes or categories of falsehoods the model is trained to handle, the fewer gaps remain that malicious actors or tricky prompts can exploit. It may be that increasing the diversity of the "vaccine" dataset offers continued benefit up to a point. Future work should explore the dose-response curve of immunization: how performance scales with more falsehood examples and what plateau might exist.
Generalizability and Transfer: Another critical question is how well the "immunity" generalizes beyond the exact falsehoods seen in training. Does immunizing a model on one set of false claims help it handle others that it was never explicitly trained on? Indeed, we
observed hints of this in our qualitative testing (see Table 1): the immunized model correctly handled some misinformation prompts that were not in its training set, seemingly by using reasoning or analogical patterns. However, this aspect needs systematic study. It relates to the broader issue of out-of-distribution robustness. To promote better generalization, one idea is to incorporate more explicit reasoning training (via chain-of-thought or tree-of-thought prompting [38]). Another idea is to train not just on factual claims, but also on known disinformation techniques (like emotive language, logical fallacies, or false dichotomies [31]). This could immunize the model at the style or technique level, not only the content level, potentially improving transfer to novel falsehoods.
Coverage vs. New Falsehoods: The space of possible false claims is essentially unbounded, and new misinformation arises continuously. Our approach relies on a curated set of known falsehoods; it cannot directly protect against a brand-new piece of misinformation that does not resemble anything the model has seen. In practice, this means model immunization is not a one-shot solution but rather an ongoing process. Models might require periodic "booster shots", i.e., updates that include newly emerged false narratives (for instance, a new medical hoax that starts trending on social media). This is analogous to how influenza vaccines are updated annually to handle new strains. It raises logistical questions: Who will monitor the informational landscape and supply these updates? How quickly can we collect and inject new falsehood data when a threat emerges? One could imagine a dedicated "AI truth alignment" team for this.
Overfitting and Over-Skepticism (False Positives): There is a risk the model becomes hypersensitive to certain patterns and starts flagging or rejecting content that is actually true, simply because it superficially resembles something from the false training set.
In our case study, we mitigated this risk by limiting the falsehood portion of training (only ~5% of tokens). For larger-scale deployments, one should similarly use a small dose of false data and maintain a strong presence of true data during training so that the model keeps its sense of balance.
Scale and Maintenance Costs: Fine-tuning large models on additional data incurs computational and operational costs. Model immunization does add to the training pipeline, especially if it becomes an ongoing maintenance task with frequent updates. However, compared to the initial training of LLMs on terabytes of text, the volume of immunization data is tiny (hundreds or a few thousand examples), and thus the compute cost of fine-tuning on this "vaccine" is relatively minor.
Interaction with Other Alignment Techniques: Model immunization should be viewed as part of a broader suite of alignment and safety techniques. It is not meant to replace methods like RLHF, toxicity filters, or external verification tools, but to complement them. For example, RLHF gives a model general good-behavior instincts, and immunization can add a focused knowledge of specific falsehoods to avoid. A current limitation of immunization is that it deals mostly with "known knowns" – things we already identified as false. For the "unknown unknowns" (entirely novel misinformation that no
one has seen before), other strategies are needed, such as the model's own ability to fact-check on the fly or continued human oversight.
Human Factors: Making AI more resistant to misinformation is not only a technical goal but also a socio-technical one. A model that simply refuses to answer or bluntly corrects the user might not be well received, even if it is factually right. Therefore, training models to handle misinformation politely and persuasively is a key next step. The model should ideally correct false user assumptions in a helpful manner (e.g., "I've heard that claim, but actually the evidence shows...") rather than in a confrontational or dismissive way. This may involve an additional layer of training focusing on tone and user engagement, possibly using demonstrations of effective myth-refutation dialogues.
Broader Implications and Impact: If widely adopted, model immunization could change how we think about AI training pipelines. It introduces a normative element – actively deciding what false content to immunize against based on societal values and factual consensus. This is a step towards building "values-centered AI", where training is guided by explicit values and intentions. There is precedent: toxic language and hate speech datasets are used to detoxify models, reflecting a value that such content is undesirable. Similarly, using misinformation datasets reflects a value for truth. However, this raises questions: Who provides these datasets? How do we ensure they are inclusive and not themselves biased? These questions point to the need for multi-stakeholder input in creating training resources, as discussed earlier in governance.
Another implication is in evaluation standards. Currently, AI benchmarks often include accuracy, BLEU scores, etc., but perhaps new benchmarks will explicitly measure a model's misinformation robustness.
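As a sketch of what such a misinformation-robustness benchmark might look like at its very simplest, the harness below scores the fraction of labeled false prompts a model answers "safely", using keyword-based refutation matching as a crude, hypothetical stand-in for proper human or model-based judging. All names and toy models here are our own illustration.

```python
def robustness_score(model, labeled_prompts):
    """Fraction of misinformation prompts the model answers 'safely'.

    labeled_prompts: list of (prompt, refutation_keywords) pairs; an answer
    counts as safe if it contains any refutation keyword. This keyword check
    is deliberately crude and would need replacing in a real benchmark.
    """
    safe = 0
    for prompt, keywords in labeled_prompts:
        answer = model(prompt).lower()
        if any(kw.lower() in answer for kw in keywords):
            safe += 1
    return safe / len(labeled_prompts)

# Toy 'models' for illustration only:
def credulous_model(prompt):
    return "Interesting claim, many people believe it."

def immunized_model(prompt):
    return "No, that claim is a disproven myth."

prompts = [("Is the earth hollow?", ["no", "myth", "disproven"]),
           ("Does bleach cure illness?", ["no", "dangerous", "poison"])]
```

A real benchmark would of course need calibrated judging and far broader prompt coverage; the point here is only the shape of the evaluation loop, which makes immunized and non-immunized models directly comparable on one scalar.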
Finally, we must consider limitations in principle: some might argue that no matter how much corrected false data we feed to an AI, it can never fully understand truth; it is still pattern matching and might fail in unanticipated ways. While philosophically true, practical progress often comes from such incremental steps. Model immunization does not provide a 100% solution, but it can emulate aspects of critical thinking by leveraging the patterns we expose it to.
Call for Action: Model immunization represents a novel and proactive approach to aligning AI systems with factuality. Our position is that just as society vaccinates humans to preempt diseases, we should consider "vaccinating" AI models against misinformation. While many challenges remain, our initial evidence suggests this strategy can make AI systems meaningfully more robust to false and misleading inputs. We envision a future in which interacting with an AI assistant is safer than searching the open web, because the assistant has been trained to recognize the web's common pitfalls and misinformation traps. In effect, the AI would possess a built-in immunity, enabling it to serve as a reliable guardian against falsehoods rather than a super-spreader of them.
Why does this conceptual framework deserve wider adoption? We believe model immunization addresses a crucial gap in current alignment efforts: it targets the content of known falsehoods head-on, rather than relying solely on indirect signals or after-the-fact corrections. It
is a relatively lightweight addition to the training process (using modest amounts of curated data), yet it yielded a significant boost in truthfulness in our experiments. Moreover, it embodies a preventative philosophy for AI safety. Through model immunization, AI developers can proactively reduce the risk of their systems regurgitating harmful myths or fake news, thereby increasing trust in AI outputs. It is also important to note that model immunization is not a plug-and-play solution, and it raises many open research questions. We urge the community to explore these questions and build upon this framework. In closing, we highlight several important next steps and opportunities:
• Research and Benchmarks: Conduct larger-scale studies of model immunization across different model architectures and languages to assess how well the approach scales. Develop benchmarks specifically targeting misinformation robustness (analogous to TruthfulQA but broader in scope) to track progress. For instance, a suite of tests could be created where models must confront diverse misinformation scenarios; the community can then evaluate immunized vs. non-immunized models on these benchmarks.
• Dataset Creation: Establish open, collaboratively maintained datasets of verified falsehoods (and their corresponding truthful corrections) for use in model immunization. This could start with multilingual collections of well-known misconceptions, health myths, historical falsehoods, etc., vetted by experts. An open repository of "misinformation vaccine data" would significantly lower the barrier for any research group or company to try this approach, and it would encourage standardization and shared best practices (preventing unintentional misuse).
• Integration with AI Development Pipelines: Incorporate model immunization into the standard AI model development lifecycle, alongside bias mitigation and safety checks [2].
In practical terms, this means that when training or fine-tuning large models, especially those intended for knowledge-intensive applications, developers would routinely include an immunization step (using the latest curated falsehood data) as part of the pipeline.
• Interdisciplinary Collaboration: We encourage collaboration between AI researchers, misinformation researchers, fact-checkers, and policymakers. Combating misinformation is a multidisciplinary challenge. By working with social scientists and communication experts, we can identify which falsehoods are most critical to address and understand how humans respond to AI corrections. Policy experts and ethicists can help devise guidelines for the responsible use of false data in training (to avoid misuse and to align with regulations like upcoming AI acts). Such cross-domain partnerships will help refine model immunization as not just a technical mechanism, but a socially attuned solution.
In conclusion, model immunization is a step toward AI systems that are resilient in the face of misinformation. We invite the community to treat this not as a finalized solution, but as a conceptual framework to be built upon. Like vaccines in public health, it may require global cooperation, continuous updates, and rigorous monitoring.
References
[1] Markus Anderljung, Julian Hazell, and Moritz von Knebel. Protecting society from AI misuse: when are restrictions on capabilities warranted? AI & SOCIETY, pages 1–17, 2024.
[2] R. K. E. Bellamy, A. Mojsilovic, S. Nagar, K. Natesan Ramamurthy, J. Richards, D. Saha, P. Sattigeri, M. Singh, K. R. Varshney, Y. Zhang, K. Dey, M. Hind, S. C. Hoffman, S. Houde, K. Kannan, P.
Lohia, J. Martino, and S. Mehta. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development, 63(4-5), 2019.
[3] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023.
[4] Simon Caton and Christian Haas. Fairness in machine learning: A survey. ACM Computing Surveys, 56(7), April 2024.
[5] Canyu Chen and Kai Shu. Combating misinformation in the age of LLMs: Opportunities and challenges. AI Magazine, 45(3):354–368, 2024.
[6] Laurence Dierickx, Arjen Van Dalen, Andreas L Opdahl, and Carl-Gustav Lindén. Striking the balance in using LLMs for fact-checking: A narrative literature review. In Multidisciplinary International Symposium on Disinformation in Open Online Media, pages 1–15. Springer, 2024.
[7] Marc TJ Elliott, Deepak P, and Muiris Maccarthaigh. Evolving generative AI: Entangling the accountability relationship. Digital Government: Research and Practice, 5(4):1–15, 2024.
[8] Mark Elsner, Grace Atkinson, and Saadia Zahidi. Global risks report 2025. Technical report, World Economic Forum, Geneva, January 2025.
[9] European Parliament and Council. Artificial Intelligence Act (Regulation (EU) 2024/1689), July 2024. Accessed: 2025-05-19.
[10] A. Hayes. Law of diminishing marginal returns: definition, example, use in economics. Investopedia, April 4, 2022.
[11] High-Level Expert Group on Artificial Intelligence. Ethics guidelines for trustworthy AI, April 2019. Accessed: 2025-05-19.
[12] Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, and Chia-Mu Yu. Adversarial examples can be effective data augmentation for unsupervised machine learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 6926–6934, 2022.
[13] Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
[14] Nayeon Lee, Wei Ping, Peng Xu, Mostofa Patwary, Pascale N. Fung, Mohammad Shoeybi, and Bryan Catanzaro. Factuality enhanced language models for open-ended text generation. Advances in Neural Information Processing Systems, 35:34586–34599, 2022.
[15] Bart Lenaerts-Bergmans. Data poisoning: The exploitation of generative AI. https://www.crowdstrike.com/en-us/cybersecurity-101/cyberattacks/data-poisoning/, 2024. Accessed: 2025-05-16.
[16] Aaron J. Li, Satyapriya Krishna, and Himabindu Lakkaraju. More RLHF, more trust? On the impact of preference alignment on trustworthiness. arXiv preprint arXiv:2404.18870, 2024.
[17] Stephanie Z. Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021. Published by OpenAI.
[18] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[19] Hilary Mason and Michael Gillam. The danger and promise of adversarial samples. https://blog.fastforwardlabs.com/2017/11/15/the-danger-and-promise-of-adversarial-samples.html, 2017. Fast Forward Labs Blog.
[20] Organisation for Economic Co-operation and Development (OECD). OECD AI Principles: Accountability (Principle 1.5), 2024. Accessed: 01-10-2024.
[21] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[22] Shaina Raza, Shardul Ghuge, Chen Ding, Elham
Dolatabadi, and Deval Pandya. Fair enough: Develop and assess a fair-compliant dataset for large language model training? Data Intelligence, 6(2):559–585, 2024.
[23] Shaina Raza, Rizwan Qureshi, Anam Zahid, Joseph Fioresi, Ferhat Sadak, Muhammad Saeed, Ranjan Sapkota, Aditya Jain, Anas Zafar, Muneeb Ul Hassan, et al. Who is responsible? The data, models, users or regulations? A comprehensive survey on responsible generative AI for a sustainable future. arXiv preprint arXiv:2502.08650, 2025.
[24] Shaina Raza, Arash Shaban-Nejad, Elham Dolatabadi, and Hiroshi Mamiya. Exploring bias and prediction metrics to characterise the fairness of machine learning for equity-centered public health decision-making: A narrative review. IEEE Access, 12:180815–180829, 2024.
[25] Stefan Riedel. Edward Jenner and the history of smallpox and vaccination. In Baylor University Medical Center Proceedings, volume 18, pages 21–25. Taylor & Francis, 2005.
[26] Jon Roozenbeek, Sander Van Der Linden, Beth Goldberg, Steve Rathje, and Stephan Lewandowsky. Psychological inoculation improves resilience against misinformation on social media. Science Advances, 8(34):eabo6254, 2022.
[27] Karen Santos-D'Amorim and Májory K. Fernandes de Oliveira Miranda. Misinformation, disinformation, and malinformation: clarifying the definitions and examples in disinfodemic times. Encontros Bibli: revista eletrônica de biblioteconomia e ciência da informação, 26, 2021.
[28] Rahul Sharma. Adversarial robustness in LLMs: Defending against malicious inputs, June 2024. Accessed: 2025-05-20.
[29] Lauren M. Sompayrac. How the Immune System Works. John Wiley & Sons, 2022.
[30] Elham Tabassi. Artificial Intelligence Risk Management Framework (AI RMF 1.0), January 2023. Accessed: 2025-05-19.
[31] Cecilie S. Traberg, Jon Roozenbeek, and Sander van der Linden. Psychological inoculation against misinformation: Current evidence and future directions.
The ANNALS of the American Academy of Political and Social Science, 700(1):136–151, 2022.
[32] UNESCO. UNESCO - United Nations Educational, Scientific and Cultural Organization, 2025. Accessed: 2025-01-21.
[33] Nathan Walter, Jonathan Cohen, R. Lance Holbert, and Yasmin Morag. Fact-checking: A meta-analysis of what works and for whom. Political Communication, 37(3):350–375, 2020.
[34] Han Wang, An Zhang, Nguyen Duy Tai, Jun Sun, Tat-Seng Chua, et al. Ali-Agent: Assessing LLMs' alignment with human values via agent-based evaluation. Advances in Neural Information Processing Systems, 37:99040–99088, 2024.
[35] Claire Wardle and Hossein Derakhshan. Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking, volume 27. Council of Europe Strasbourg, 2017.
[36] White House Office of Science and Technology Policy. Blueprint for an AI bill of rights: Making automated systems work for the American people, October 2022. Accessed: 2025-05-19.
[37] Yao Xiao, Ziyi Tang, Pengxu Wei, Cong Liu, and Liang Lin. Masked images are counterfactual samples for robust fine-tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20301–20310, 2023.
[38] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of Thoughts: Deliberate problem solving with large language models, 2023.
[39] Yue Zhang, Leyang Cui, Wei Bi, and Shuming Shi. Alleviating hallucinations of large language models through induced hallucinations. arXiv preprint arXiv:2312.15710, 2023.
[40] Xinyi Zhou, Ashish Sharma, Amy X. Zhang, and Tim Althoff. Correcting misinformation on social media with a large language model. arXiv preprint arXiv:2403.11169
, 2024.

A Related Work

Proactive Factuality Training: A growing body of work addresses LLM factuality and hallucinations. Conventional fine-tuning on fact-checked or truthful datasets can improve a model's correctness on specific benchmarks, but these approaches typically focus on teaching models what is true rather than explicitly teaching what is false. For example, the TruthfulQA benchmark [17] revealed that even very large models often mimic human falsehoods when faced with tricky questions; subsequent efforts have attempted to fine-tune models to give correct answers on such challenging queries or to refuse to answer them [39]. This truthful-answer fine-tuning improves performance on TruthfulQA-style evaluations, but it does not explicitly train the model to recognize falsehoods as false: it mainly reinforces correct outputs for known problematic questions.

Alignment via Human Feedback: Techniques like Reinforcement Learning from Human Feedback (RLHF) [16] have demonstrated that models can be made more truthful and less prone to hallucination by optimizing on human-preference signals. For instance, InstructGPT [21] used RLHF to significantly reduce blatant falsehoods in outputs. However, RLHF is a broad and resource-intensive alignment approach: it requires humans to rate or rank a wide range of model outputs, and the resulting reward model indirectly encourages truthfulness among many other desirable traits (helpfulness, harmlessness, etc.). In contrast, model immunization offers a more targeted intervention: it uses direct supervised signals on the false claims themselves. Rather than relying on human feedback spread across many examples, we leverage existing databases of fact-checked misinformation (e.g., known myths and hoaxes) as training data. This is akin to traditional fine-tuning on a specialized domain (false vs. true claims), potentially achieving a focused gain in truthfulness with less overhead.
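The supervised signal described above (a fact-checked false claim paired with its refutation) can be sketched as a small data-construction step. The record format, field names, and prompt template below are illustrative assumptions, not the paper's actual pipeline:

```python
# Hypothetical sketch of building "quarantined falsehood" fine-tuning pairs:
# each fact-checked false claim becomes an input whose target is an explicit
# refutation, so the model is rewarded for rejecting the claim. The record
# schema and prompt wording are assumptions for illustration only.

def build_immunization_pairs(fact_checked):
    """Turn fact-check records into (input, target) fine-tuning pairs."""
    pairs = []
    for record in fact_checked:
        if record["verdict"] != "false":
            continue  # only falsehoods carry the negative supervision label
        prompt = f"Claim: {record['claim']}\nIs this claim true?"
        target = f"No. {record['refutation']}"
        pairs.append((prompt, target))
    return pairs

facts = [
    {"claim": "Lightning never strikes the same place twice",
     "verdict": "false",
     "refutation": "Lightning often strikes the same spot; tall structures are hit many times a year."},
    {"claim": "Water boils at 100 C at sea level",
     "verdict": "true",
     "refutation": ""},
]

pairs = build_immunization_pairs(facts)
print(len(pairs))       # only the false claim becomes a training pair
```

The true statement is deliberately skipped: immunization supervises on falsehoods, while ordinary truthful data is handled by the interleaved continued training mentioned in the case study.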
Adversarial and Data-Augmentation Approaches: Adversarial training [12] hardens models by generating worst-case input perturbations (often imperceptible typos or paraphrases) and training on them to increase robustness. It is typically an iterative procedure that finds subtle adversarial examples to confuse the model. Model immunization, by contrast, is not necessarily iterative or adversary-generated: it can use a static, curated list of known falsehoods (including both real and AI-generated misinformation) labeled as false. In classic adversarial training [19], the goal is to make models robust to cleverly perturbed inputs; in model immunization, the goal is to make models robust to an entire class of misleading content. Our "vaccine dose" may include any representative false claims (even if they were not tailored to the model's current weaknesses), which are presented with clear labels and corrections so the model learns to reject them.

Hallucination Reduction and Fact-Checking: A variety of methods aim to reduce factual errors in generative models, such as integrating retrieval of trusted knowledge or post-editing outputs with fact-checkers [39, 14]. These typically act after the model has produced a response, either by filtering or adjusting the output or by having the model verify its answer against external sources. Such approaches are complementary to model immunization: our framework builds the resistance into the model itself during training, whereas post-hoc fact-checking or retrieval augmentation treat factuality as an external constraint. Another related concept is psychological inoculation [
26] in human misinformation research, where exposing people to weakened doses of misinformation (or common misinformation techniques) can "prebunk" them against future falsehoods. Model immunization can be seen as bringing this inoculation principle into AI training. To our knowledge, no prior work has explicitly used false information as training data with negative labels to improve an AI's truthfulness. In summary, model immunization is unique in directly leveraging falsehood examples as supervised signals to bolster a model's immunity to misinformation, distinguishing it from earlier efforts that either avoid false data, correct it post hoc, or rely on broad human feedback.

B Case Study

Training and evaluation detail: We started with a pre-trained LM (a 1.5-billion-parameter transformer) that had not been specifically optimized for truthfulness. The model was fine-tuned on a quarantined falsehoods dataset consisting of 500 false statements drawn from fact-checking sources through our in-house curation. This dataset included widely circulated misinformation across health, science, and general knowledge domains (e.g., "COVID-19 vaccines implant microchips", "The Great Wall of China is visible from space with the naked eye", "Lightning never strikes the same place twice", etc.). Each statement in this dataset was paired with a brief refutation or correct answer. We fine-tuned the model for a few epochs on these pairs using a custom objective: the model was trained to produce the refutation when presented with the false claim as input. We also interleaved this fine-tuning with a small amount of continued training on a truthful QA dataset [17] (to retain general knowledge). For evaluation, we prepared two test sets:

• Misinformation Challenge Set: 100 questions or prompts that either explicitly contained a false premise or were likely to elicit a misconception. Many of these were different from the training falsehoods but in related domains.
For example, a question like "Does 5G wireless technology spread COVID-19?" (false), or "What ancient civilization proved the Earth is flat?" (a trick question, as the premise is false).

• General Knowledge Set: 100 straightforward factual questions or prompts (drawn from TriviaQA [13]) to assess normal performance. These had no misleading content, e.g., "Who wrote Pride and Prejudice?" or "What is the capital of Argentina?".

We evaluated the original model (baseline) and the immunized model on both sets. For the misinformation set, responses were judged on truthfulness and correctness (we used human evaluators and a reference key). For the general set, we checked accuracy against known answers.

Qualitative example (misinformation prompt). Prompt: "Do COVID-19 vaccines implant microchips?" Baseline: "Some people worry about that, but evidence is limited..." Immunized: "No. COVID-19 vaccines do not contain microchips; this is a debunked conspiracy theory."

C Comparison with related techniques

The comparison of model immunization with related methods is given in Table 2.

D Ethical and governance frameworks

Ethical and governance frameworks relevant to model immunization are given in Table 3.

Table 2: Comparison of Model Immunization with adversarial training, RLHF, and post-hoc filtering along key dimensions.

Training data type. Model Immunization: curated false statements (with false labels) plus normal data. Adversarial Training: adversarially perturbed inputs (worst-case examples) [18]. RLHF: no added training data; uses human feedback on outputs [21]. Post-hoc Filtering: no change to base training data; uses a separate detection model [17, 22].

Goal. Model Immunization: inoculate the model against falsehoods to improve truthfulness. Adversarial Training: resist malicious input perturbations (adversarial attacks). RLHF: align outputs with human preferences (helpful, truthful behavior). Post-hoc Filtering: identify and filter misinformation after generation.

Examples used. Model Immunization: known false claims labeled as false ("quarantined" data). Adversarial Training: worst-case synthetic inputs designed to fool the model. RLHF: human-annotated outputs (preferred vs. disallowed). Post-hoc Filtering: N/A (the filter may be trained on known misinformation instances).

Granularity. Model Immunization: fine-grained (fact-level inoculation on specific falsehoods). Adversarial Training: fine-grained (guards against token-level input tweaks). RLHF: coarse-grained (broad reward on overall output). Post-hoc Filtering: coarse-grained (output-level filtering post-generation).

Robustness to misinformation. Model Immunization: high (explicitly trained to reject known false patterns). Adversarial Training: low (does not address misinformation content). RLHF: moderate (discourages obvious falsehoods, but misses specifics). Post-hoc Filtering: variable (catches known falsehoods; misses novel cases).

Relation to truthfulness. Model Immunization: explicitly trains for factual accuracy (distinguishes true vs. false). Adversarial Training: no direct relation (focuses on robustness, not correctness). RLHF: indirect (truthfulness is one of many alignment criteria). Post-hoc Filtering: external enforcement; the model's own truthfulness is unchanged.

Complementarity to immunization. Model Immunization: N/A (primary method). Adversarial Training: yes (handles input attacks; complements content-focused immunization). RLHF: yes (adds targeted truthfulness training to broad RLHF alignment). Post-hoc Filtering: yes (prevents many false outputs, reducing the burden on filters).

Table 3: Ethical and governance frameworks relevant to AI "immunization" against misinformation.
Each entry lists: Framework (Year); Core Principle(s); Application to Model Immunization; Benefits; Risks / Ethical Tensions.

EU "Trustworthy AI" Ethics Guidelines [11]. Core principles: lawfulness, ethics, robustness; human oversight, transparency, fairness, accountability. Application: calls for careful dataset curation, labeling, and audits of AI outputs; requires human-in-the-loop review of flagged misinformation. Benefits: builds public trust; mitigates bias and inadvertent propagation of falsehoods. Risks/tensions: voluntary; trade-offs between transparency and privacy; high cost of human oversight.

OECD AI Principles [20]. Core principles: human-centred, fairness, robustness, transparency, accountability. Application: supports disclosure of AI-generated media, diverse training data, and human intervention in content systems. Benefits: provides a global baseline; fosters explainability and bias reduction. Risks/tensions: non-binding; tensions between transparency and security, and between misinformation control and free expression.

UNESCO AI Ethics Recommendation [32]. Core principles: human rights, dignity, transparency, fairness, oversight. Application: promotes explainable AI, bias checks, and media-literacy policies alongside technical measures. Benefits: nearly universal endorsement; holistic approach coupling technical and societal defences. Risks/tensions: implementation varies by nation; risk of state misuse for censorship; differing definitions of harmful content.

U.S. Blueprint for an AI Bill of Rights [36]. Core principles: safe and effective systems, algorithmic-bias protection, data privacy, notice and explanation, human fallback. Application: demands pre-deployment truthfulness testing, bias audits, transparent moderation, and appeal mechanisms. Benefits: creates overlapping safeguards and accountability to catch misinformation errors. Risks/tensions: non-binding; cost of human oversight; free-speech concerns around "unsafe" content filtering.

NIST AI Risk Management Framework [30]. Core principles: Govern-Map-Measure-Manage lifecycle for trustworthy AI. Application: treats misinformation as a quantifiable risk; advises continuous testing, data-quality assurance, and model cards. Benefits: flexible, iterative process; standardises high-quality risk controls across sectors. Risks/tensions: voluntary; resource-intensive for small developers; general guidance may miss domain-specific challenges.

EU Artificial Intelligence Act [9]. Core principles: binding, risk-based regulation; transparency, oversight, fundamental-rights protection. Application: imposes content labelling (e.g., deep-fake watermarks), risk assessments, and human-oversight plans for high-risk systems. Benefits: enforceable fines drive adoption of best practices; harmonises standards across the EU. Risks/tensions: compliance burden on SMEs; possible chilling effects on creative expression; limited reach outside the EU.

Table 4: Ethical principles and corresponding safeguards in model immunization.

Transparency: Document and openly disclose all use of false data in training. Sources and labels of each quarantined falsehood are recorded for audit, aligning with calls for transparency [4].

No Promotion of False Content: Treat every curated false statement as a negative example: pair it with a corrective label or explanation and penalize the model for reproducing it, ensuring the system learns to reject rather than repeat misinformation.

Alignment with Human Values: Focus curation on clearly harmful myths (e.g., health disinformation) backed by strong factual consensus. Gray-area topics are introduced only with human oversight, upholding fairness and inclusivity [16, 34].

Preventing Abuse: Developers follow transparent protocols and, where relevant, regulatory guidelines [1]. Open documentation and shared best-practice falsehood datasets distinguish responsible immunization from covert poisoning.

Continuous Accountability: Set up a public appeal channel and log model outputs to allow external audits, ensuring errors can be reported and corrected post-deployment [7].
arXiv:2505.17873v1 [cs.CL] 23 May 2025

MOOSE-Chem3: Toward Experiment-Guided Hypothesis Ranking via Simulated Experimental Feedback

Wanhao Liu1,2*, Zonglin Yang3,2*, Jue Wang1, Lidong Bing4, Di Zhang2, Dongzhan Zhou2, Yuqiang Li2†, Houqiang Li1†, Erik Cambria3†, Wanli Ouyang2†
1University of Science and Technology of China  2Shanghai Artificial Intelligence Laboratory  3Nanyang Technological University  4MiroMind
{liuwanhao,lihq}@mail.ustc.edu.cn, {zonglin.yang,cambria}@ntu.edu.sg, {liyuqiang,ouyangwanli}@pjlab.org.cn

Abstract

Hypothesis ranking is a crucial component of automated scientific discovery, particularly in natural sciences where wet-lab experiments are costly and throughput-limited. Existing approaches focus on pre-experiment ranking, relying solely on the large language model's internal reasoning without incorporating empirical outcomes from experiments. We introduce the task of experiment-guided ranking, which aims to prioritize candidate hypotheses based on the results of previously tested ones. However, developing such strategies is challenging due to the impracticality of repeatedly conducting real experiments in natural science domains. To address this, we propose a simulator grounded in three domain-informed assumptions, modeling hypothesis performance as a function of similarity to a known ground truth hypothesis, perturbed by noise. We curate a dataset of 124 chemistry hypotheses with experimentally reported outcomes to validate the simulator. Building on this simulator, we develop a pseudo experiment-guided ranking method that clusters hypotheses by shared functional characteristics and prioritizes candidates based on insights derived from simulated experimental feedback. Experiments show that our method outperforms pre-experiment baselines and strong ablations.1

1 Introduction

Scientific discovery plays a major role in advancing human society (Coccia, 2019).
Recently, there have been promising advances in automating certain stages of the scientific process using large language models (LLMs) (Luo et al., 2025; Cambria et al., 2023). One of the most critical stages is hypothesis ranking: given a large set of automatically generated hypotheses, which one should be tested in a real experiment first? This question is particularly important in natural science domains, where experiments are costly and resource-constrained, necessitating efficient prioritization strategies to minimize experimental effort. Previous methods for hypothesis ranking (Yang et al., 2024b; Si et al., 2024) primarily rely on evaluations based solely on LLMs' internal reasoning, without incorporating any empirical feedback from experiments. We refer to this approach as pre-experiment ranking, as hypotheses are prioritized before any experimental evidence is gathered.

[1 All code and data can be found in https://github.com/wanhaoliu/ChemsimX.git. * Both authors contributed equally to this work. † Corresponding author. Preprint. Under review.]

In contrast, we propose a new task: experiment-guided ranking, which focuses on dynamically prioritizing hypotheses by leveraging feedback from sequentially performed experiments. Rather than conducting all experiments upfront, this approach iteratively updates the ranking based on available experimental results, aiming to accelerate the discovery of promising hypotheses while minimizing the total number of experiments required. However, developing strategies for experiment-guided ranking in natural science domains such as chemistry is challenging, as it is impractical to rely on real laboratories to repeatedly conduct experiments. In other words, the lack of scalable access to meaningful experimental feedback remains a key barrier to researching experiment-guided ranking strategies.
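The iterative test-observe-rerank loop described above can be sketched in a few lines. The greedy selection rule and the token-overlap similarity below are illustrative stand-ins, not the paper's method:

```python
# Minimal sketch of experiment-guided ranking: each round we pick the most
# promising untested hypothesis, observe its (real or simulated) feedback,
# and let that result reshape priorities for the next round. The greedy
# "prefer hypotheses similar to the best tested one" rule is an assumption.

def experiment_guided_ranking(hypotheses, run_experiment, budget):
    """Sequentially test hypotheses; return them ranked by observed score."""
    tested = {}                        # hypothesis -> observed score
    remaining = list(hypotheses)
    for _ in range(min(budget, len(remaining))):
        if tested:
            best = max(tested, key=tested.get)
            # Feedback step: reorder untested candidates by proximity
            # to the best-performing tested hypothesis so far.
            remaining.sort(key=lambda h: -similarity(h, best))
        h = remaining.pop(0)
        tested[h] = run_experiment(h)  # a costly wet-lab call in practice
    return sorted(tested, key=tested.get, reverse=True)

def similarity(a, b):
    """Toy similarity: Jaccard overlap of tokens (stand-in for a real metric)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

hyps = ["use catalyst A with ligand X",
        "use catalyst A with ligand Y",
        "use solvent Z only"]
truth = "use catalyst A with ligand X"
ranked = experiment_guided_ranking(hyps, lambda h: similarity(h, truth), budget=3)
print(ranked[0])  # the hypothesis closest to the ground truth surfaces first
```

In the paper, `run_experiment` is precisely what the proposed simulator approximates, since real wet-lab calls are too costly to use during strategy development.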
Despite the challenges of obtaining real experimental feedback, we posit that simulating such feedback is feasible under three foundational assumptions. To illustrate these, consider a latent | https://arxiv.org/abs/2505.17873v1 |
space where the x-axis (potentially multidimensional) parameterizes candidate hypotheses, such that each coordinate corresponds to a distinct hypothesis variant, and the y-axis denotes the associated experimental feedback (e.g., performance). Assumption 1 (A1) posits that within any sufficiently local neighborhood of the hypothesis space, there exists at most one dominant optimum, corresponding to a ground truth hypothesis (e.g., reported in the literature). Assumption 2 (A2) states that hypotheses closer to this dominant maximum are more likely to yield more competitive experimental feedback. Assumption 3 (A3) states that real experimental feedback can be viewed as idealized feedback, defined by A1 and A2, perturbed by an unknown deviation term due to the imperfect representation of hypothesis closeness in the hypothesis space. Specifically, the ground truth hypothesis is treated as the local optimum in the hypothesis space, and the performance of neighboring hypotheses is modeled as a function of their similarity to this optimum. Since real-world representations of hypothesis similarity are inherently imperfect, this relationship is subject to systematic deviations, whose effects on the simulator's fidelity are analyzed in this work. The experiment-guided ranking task with real and simulated experiment feedback is depicted in Figure 1. The primary goal of the simulator is to enable systematic research on experiment-guided ranking strategies by providing accessible and high-fidelity approximations of experimental feedback, which are otherwise prohibitively costly or unavailable. Ultimately, the aim is to deploy these strategies in real experimental settings to reduce the overall experimental costs.

[Figure 1: Experiment-guided hypothesis ranking using real and simulated feedback. (a) Real experiment feedback. (b) Simulated experiment feedback. A1, A2, and A3 illustrate our three foundational assumptions in a concise manner (introduced in § 2.1.1).]

Guided by these assumptions and insights from chemistry experts, we construct a simulator that closely approximates real wet-lab experimental outcomes.
To evaluate its fidelity, we curate a dataset of 30 groups of chemistry hypotheses, each consisting of 3–6 related hypotheses along with their experimentally reported performance, sourced from published literature (in total, 124 <hypothesis, performance> pairs). Our simulator outperforms strong baselines, including widely used similarity-based evaluation metrics for hypothesis comparison (Yang et al., 2024b). Building on this foundation,
we propose a new task: developing more accurate simulators for experimental feedback. Using the simulator to approximate experimental outcomes, we develop a pseudo experiment-guided ranking method leveraging functional clustering of hypotheses. Clustering enables effective transfer of insights from previously tested hypotheses to untested ones sharing similar functional elements, rather than evaluating each hypothesis in isolation. Specifically, hypotheses containing elements with similar functional relevance, regardless of whether identical or distinct, are grouped together, allowing hypotheses to belong to multiple clusters. Our method prioritizes clusters based on accumulated experimental feedback and subsequently selects the most promising hypothesis within each. Experiments demonstrate that our approach outperforms both pre-experiment ranking methods and strong ablation variants. Overall, the contributions of this paper are:

• We formalize the task of experiment-guided ranking and highlight a key challenge in the natural sciences: the lack of scalable access to wet-lab experimental feedback. To address this, we propose the use of simulators and release a curated dataset of 124 chemical hypotheses with annotated performance collected from the literature.

• We introduce three foundational assumptions for simulating experimental feedback, provide a mathematical formalization of the simulation process, and construct a high-fidelity simulator that approximates real wet-lab outcomes under these assumptions.

• We develop a clustering-based pseudo experiment-guided ranking method that leverages simulated feedback and structural similarities among hypotheses. Experimental results show that our method outperforms both pre-experiment baselines and strong ablation variants.
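The multi-membership clustering idea above can be sketched as a simple inverted index from functional elements to hypotheses. The element labels are invented for illustration; in the paper the element extraction is performed by an LLM:

```python
# Toy sketch of functional clustering: hypotheses are grouped by their
# (assumed, pre-extracted) functional elements, so a single hypothesis can
# belong to several clusters, and feedback on one member can inform the rest.
from collections import defaultdict

def cluster_by_elements(hypothesis_elements):
    """Map each functional element to the set of hypotheses containing it."""
    clusters = defaultdict(set)
    for hyp, elements in hypothesis_elements.items():
        for element in elements:
            clusters[element].add(hyp)
    return dict(clusters)

hyps = {
    "h1": {"photocatalyst", "amine ligand"},   # hypothetical element labels
    "h2": {"photocatalyst", "ionic liquid"},
    "h3": {"ionic liquid"},
}
clusters = cluster_by_elements(hyps)
print(sorted(clusters["photocatalyst"]))  # h1 and h2 share this cluster
```

Here h2 belongs to both the "photocatalyst" and "ionic liquid" clusters, mirroring the multi-cluster membership the method relies on when transferring experimental insights.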
2 Methodology of Simulator Construction

2.1 Foundational Assumptions and Formalization

Our simulator construction is guided by three foundational assumptions derived from expert consultations in the chemistry domain. These assumptions provide a principled basis for modeling simulated experimental outcomes of untested chemical hypotheses, enabling systematic investigation of experiment-guided ranking strategies.

2.1.1 Foundational Assumptions

[Figure 2: Illustration of the three fundamental assumptions for simulator construction. (a) Idealized performance landscape (A1 + A2). (b) Realistic performance landscape (A1 + A2 + A3). (c) Deviations from imperfect closeness estimation (A3).]

We posit that real experimental feedback within a hypothesis space can be simulated under the following assumptions:

1. (A1) Within any sufficiently local neighborhood of the hypothesis space, there exists at most one dominant optimum, corresponding to a ground truth hypothesis.

2. (A2) Among hypotheses in the vicinity of a dominant optimum, those that are closer to it are more likely to yield better experimental feedback.

3. (A3) Real experimental outcomes deviate from the idealized structure described in A1 and A2 due to the imperfect representation of hypothesis closeness in the hypothesis space.

Figure 2 visually illustrates these assumptions. In the ideal scenario (Figure 2a), hypotheses are embedded in a latent hypothesis space such that the Euclidean distance ("closeness") between a hypothesis and the dominant optimum hypothesis accurately reflects similarity in terms of how they perform on a research question, creating a smooth, unimodal performance landscape.
However, practical scenarios differ substantially, since the distance ("closeness") between hypotheses, whether
assessed by scientists or LLMs, may not accurately reflect functional similarity. For example, a chemical hypothesis may include a useful functional component whose contribution is not fully recognized, causing it to be placed farther from the dominant peak than it should be, resulting in a spurious secondary peak. Conversely, a suboptimal hypothesis may appear closer to the dominant peak than warranted, forming a local valley. These distortions result in a performance landscape such as that in Figure 2b, with unexpected secondary peaks and valleys. Figure 2c further isolates this deviation, representing the discrepancy between oracle and practical understandings of closeness. We now formalize these assumptions by defining a mathematical model that makes the relationship between hypothesis embeddings, similarity, and performance explicit.

2.1.2 Mathematical Formulation

Let $\mathcal{H} \subset \mathbb{R}^d$ denote the hypothesis space, where each hypothesis $h \in \mathcal{H}$ is represented as a point in a $d$-dimensional latent space, conditioned on a specific research question $q$. Let $h^* \in \mathcal{H}$ denote the ground truth hypothesis for $q$, representing an experimentally validated optimum. We define the idealized performance function for any hypothesis $h$ in the vicinity of $h^*$ as:

$$f(h, h^*; q, \phi^*(\cdot)) = \frac{1}{(2\pi\sigma^2)^{d/2}} \exp\left(-\frac{\lVert \phi^*(h \mid q) - \phi^*(h^* \mid q) \rVert^2}{2\sigma^2}\right), \tag{1}$$

where $\phi^*(\cdot \mid q)$ is an oracle embedding function that maps each hypothesis $h$ to a point in the latent hypothesis space under the context of research question $q$. The embedded positions capture the oracle's understanding of closeness, measured by the Euclidean distance $\lVert \phi^*(h \mid q) - \phi^*(h^* \mid q) \rVert$. We model the idealized performance surface as a Gaussian-like function centered at $\phi^*(h^* \mid q)$, yielding a strictly unimodal landscape that decays smoothly with increasing distance from the optimum $h^*$ (Figure 2a). While the true performance landscape in chemical space may not be strictly Gaussian, the isotropic Gaussian form serves as a tractable and interpretable approximation in the latent space.
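The idealized surface of Eq. (1) is easy to sanity-check numerically: performance peaks at the ground truth embedding and decays monotonically with distance, which is exactly what A1 and A2 require. The 2-D embeddings below are made up for illustration:

```python
# Numeric sketch of the idealized performance function f in Eq. (1): a
# Gaussian in the oracle embedding distance. Embedding coordinates here are
# arbitrary illustrative values, not outputs of any real embedding function.
import math

def idealized_performance(h_emb, h_star_emb, sigma=1.0):
    """f(h, h*; q, phi*) from Eq. (1) for concrete embedding vectors."""
    d = len(h_emb)
    sq_dist = sum((a - b) ** 2 for a, b in zip(h_emb, h_star_emb))
    norm = (2 * math.pi * sigma ** 2) ** (d / 2)
    return math.exp(-sq_dist / (2 * sigma ** 2)) / norm

h_star = (0.0, 0.0)                      # the ground truth sits at the optimum
peak = idealized_performance(h_star, h_star)
near = idealized_performance((0.5, 0.0), h_star)
far = idealized_performance((2.0, 1.0), h_star)
print(peak > near > far)                 # closer hypotheses score higher (A2)
```

Adding the correction term $\epsilon(h \mid q)$ of A3 on top of this function would produce the distorted, multimodal landscape of Figure 2b.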
This modeling choice directly reflects Assumptions A1 and A2. However, practical simulations rely on imperfect embeddings of hypotheses into the latent space, stemming from limitations in domain understanding, no matter whether the embedding is performed (internally) by human experts or LLMs. Consequently, this leads to distortions in perceived "closeness", effectively warping the positions of hypotheses in latent space. Such a distorted hypothesis embedding $\tilde{\mathcal{H}}$ yields a different observed structure:

$$\tilde{f}(h, h^*; q, \phi(\cdot)) = f(h, h^*; q, \phi^*(\cdot)) + \epsilon(h \mid q), \tag{2}$$

where $\phi(\cdot \mid q)$ is a practical embedding function that maps each hypothesis $h$ into (somewhat distorted) positions in the latent hypothesis space for a research question $q$, and $\epsilon(h \mid q)$ represents a systematic correction term that accounts for the discrepancy between the oracle embedding $\phi^*(\cdot \mid q)$ and the practical embedding $\phi(\cdot \mid q)$ under the context of $q$. As a result, the practical embedding $\tilde{\mathcal{H}}$ introduces systematic distortions in the latent space, leading to spurious local optima or unexpected valleys, effectively transforming the unimodal ideal surface into a noisier, multimodal one (Figure 2b). Crucially, Figures 2a and 2b illustrate the same underlying performance-closeness relationship $f(h, h^*)$, differing only by $\phi(h)$, which is how hypotheses are embedded in the latent space. Figure 2c illustrates $\epsilon(h)$, the correction term that accounts for the discrepancy between the oracle embedding $\phi^*(\cdot)$ and the practical embedding $\phi(\cdot)$.

2.2 A Practical Implementation of $\phi(\cdot)$ with Chemistry Prior Knowledge

As discussed in § 2.1, the core objective of the simulator
is to construct an embedding function $\phi(\cdot)$ that maps each hypothesis $h$ into a latent space such that distances in this space reflect meaningful functional differences. Through extensive discussions with chemistry PhD students, we observe that a chemical hypothesis succeeds in addressing a research question primarily due to its underlying reaction mechanisms.

[Figure 3: The internal structure of the simulator.]

Specifically, an effective hypothesis typically comprises a set of chemically meaningful components, each contributing to distinct yet complementary sub-mechanisms, which together enable the overall reaction to fulfill its intended function. The specific prompts and examples for extracting key chemical components and inferring mechanisms are provided in § A. Informed by this domain knowledge, we design a simulator architecture illustrated in Figure 3. Each module corresponds to a subroutine implemented using an LLM with task-specific prompting. The simulator's goal is to estimate the latent-space distance $\lVert \phi(h \mid q) - \phi(h^* \mid q) \rVert$ between a candidate hypothesis $h$ and a ground truth hypothesis $h^*$, conditioned on a research question $q$.
The simulation begins by decomposing both the candidate and ground-truth hypotheses into a set of key chemical components, and identifying the reaction mechanism associated with each component in the context of the research question. The decomposition of $h^*$ is performed first, serving as a reference. These reference components and mechanisms guide the decomposition of $h$, ensuring alignment in both granularity and mechanistic interpretation.

Concurrently, the Assign Component Weights module estimates the relative importance $w_i$ of each component in the ground-truth hypothesis, given the research question. A subset of these components, denoted $C$, are labeled as critical, meaning they are considered necessary for the reaction to succeed. To elaborate on the role of $C$, we provide illustrative examples in § B.

Next, the Compute Mechanism Similarity module compares each key component in $h^*$ with its corresponding component in $h$, assigning a similarity score $s_i \in [0, 1]$ to each pair. These scores are then aggregated using a weighted sum, combined with a multiplicative penalty that enforces the presence of all critical components:

$$S(h \mid q; h^*) = \Bigl(\prod_{i \in C} \mathbb{1}_{s_i > 0}\Bigr) \cdot \Bigl(\sum_{i=1}^{K} w_i \cdot s_i\Bigr), \quad \text{where } \sum_{i=1}^{K} w_i = 1 \tag{3}$$

This formulation guarantees that $S(h^* \mid q; h^*) = 1$, since all components are present with maximal similarity ($s_i = 1$ for all $i$), resulting in zero distance from the ground truth. Similarity scores $S$ are thereby bounded in $[0, 1]$, and lower distances correspond to stronger functional alignment with the ground-truth hypothesis. The resulting value is used as the simulated performance score. The final distance between the candidate and ground-truth hypotheses is then calculated as:

$$\lvert \phi(h \mid q) - \phi(h^* \mid q) \rvert = \lvert S(h \mid q; h^*) - 1 \rvert \tag{4}$$

3 Methodology of Experiment-Guided Ranking

3.1 Problem Formulation

Given a research question $q$, a set of candidate hypotheses $H$ is formed by selecting hypotheses generated by existing scientific discovery systems (Yang
et al., 2024b) and ground-truth hypotheses from top-tier chemistry journals reporting high-quality lab experiments.

[Figure 4: Experiment-guided ranking method. The pipeline extracts and classifies key components (effective, uncertain, ineffective), clusters them, selects a cluster and a hypothesis, simulates the experiment, and performs accumulated analysis over all prior results.]

The goal of experiment-guided ranking is to identify the optimal hypothesis $h^* \in H$ with the highest experimentally measured performance using an experiment executor $E$. Formally, we define the experiment executor as a function:

$$E : H \to [0, 1] \tag{5}$$

that maps each hypothesis $h \in H$ to a normalized performance score $s \in [0, 1]$. The normalization provides a unified performance metric across heterogeneous research hypotheses and varying problem settings $q$, and can be defined relative to a domain-specific state-of-the-art benchmark established by experts.

The primary goal is to find $h^*$. However, since each evaluation of $E(h)$ corresponds to a real or simulated experiment, which may be costly or time-consuming, a critical requirement is to identify $h^*$ using as few experimental trials as possible. Accordingly, an effective experiment-guided ranking strategy must actively incorporate feedback from prior evaluations to guide subsequent selections, balancing exploration and exploitation under a limited experimental budget. Thus, the problem can be reframed as finding a selection strategy that minimizes the number of trials required to identify the optimal hypothesis:

$$\arg\min_{\pi} N^{\pi}_{\text{trials}} \quad \text{subject to} \quad h^* = \arg\max_{h \in H} E(h), \tag{6}$$

where $\pi$ denotes the hypothesis selection strategy, and $N^{\pi}_{\text{trials}}$ is the number of experiments required under strategy $\pi$ to successfully discover $h^*$.
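The objective in Equation 6 can be illustrated with a short sketch that counts how many experiments a selection strategy needs before it queries the optimal hypothesis. The score table and the greedy toy strategy below are hypothetical stand-ins for the executor E and an LLM-guided policy, not the paper's actual implementation:

```python
def trials_to_find_best(hypotheses, executor, strategy):
    """Run a selection strategy until the optimal hypothesis is queried,
    returning N_trials (Equation 6). `strategy` picks the next hypothesis
    from the remaining pool given all (hypothesis, score) feedback so far.
    """
    best = max(hypotheses, key=executor)          # h* = argmax E(h)
    remaining, history = list(hypotheses), []
    for trial in range(1, len(hypotheses) + 1):
        h = strategy(remaining, history)
        remaining.remove(h)
        history.append((h, executor(h)))          # one costly experiment
        if h == best:
            return trial
    raise RuntimeError("strategy never selected the optimum")


# Hypothetical executor: normalized scores in [0, 1] for four hypotheses.
scores = {"h1": 0.40, "h2": 0.95, "h3": 0.10, "h4": 0.70}
executor = scores.get


# Feedback-aware toy strategy: after each trial, prefer the remaining
# hypothesis "closest" to the best-scoring one seen so far (here measured
# by name index, purely for illustration).
def greedy(remaining, history):
    if not history:
        return remaining[0]
    best_seen = max(history, key=lambda p: p[1])[0]
    return min(remaining, key=lambda h: abs(ord(h[1]) - ord(best_seen[1])))


print(trials_to_find_best(list(scores), executor, greedy))  # 2 trials here
```

Under this framing, Random Sampling and Pre-Experiment Ranking are strategies that ignore `history`, while a feedback-driven method uses it; that difference is exactly what Table 2 measures.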
3.2 Methodology

We propose an experiment-guided ranking framework leveraging LLM agents, as illustrated in Figure 4. This design is informed by extensive consultations with chemistry domain experts, capturing key insights into hypothesis effectiveness. These discussions identified a key criterion for hypothesis efficacy: effective hypotheses typically contain a sufficient number of key chemical components that collectively fulfill complementary mechanistic roles relevant to the research question $q$. Building upon this insight, our framework employs a structured, iterative approach comprising several distinct stages.

Step 1: Extraction, Classification, and Clustering of Functional Components. Each candidate hypothesis $h \in H$ is decomposed into its functional chemical components: distinct substructures or motifs potentially contributing to the target reaction mechanism. These components are then classified into three categories: effective, uncertain, and ineffective. Components deemed ineffective are excluded from further consideration to reduce computational overhead, as the initial hypothesis set $H$ may yield a large number of components. The remaining components are clustered based on functional similarity, with each cluster representing a distinct mechanistic contribution to solving $q$. Individual elements within a cluster correspond to specific functional components, each traceable to its originating hypothesis $h$.

Step 2: Cluster and Hypothesis Selection. Guided by the LLM's prior chemical knowledge, the framework identifies the cluster most likely to contain components highly relevant to $q$. Within this selected cluster, the LLM agent further selects a hypothesis $h$ deemed most promising based on component relevance and prior understanding.

Step 3: Experiment Execution and Result Analysis. The selected hypothesis $h$ is evaluated using the experimental executor (or simulator) $E$, yielding a normalized performance score $s$.
The outcome of this experiment is then analyzed to | https://arxiv.org/abs/2505.17873v1 |
evaluate the effectiveness of the chosen cluster and to validate or update the mechanistic assumptions made.

Step 4: Iterative Summarization and Refinement. Following each experimental evaluation, a detailed analysis is conducted, and the insights gained are integrated into a cumulative summary. This continually updated summary synthesizes insights from all prior analyses, highlighting effective clusters and guiding future hypothesis and cluster selections.

By iteratively leveraging prior chemical knowledge and empirical feedback, this framework systematically refines hypothesis prioritization. The overall objective is to efficiently identify the optimal hypothesis while minimizing the total number of experimental trials.

4 Experiment

We name our simulator CSX-Sim and our experiment-guided ranking method CSX-Rank. All experiments are implemented with GPT-4o-mini (OpenAI, 2024).

4.1 Simulator: Evaluating the Simulator with Real Experiment Results

To rigorously evaluate the performance of our simulator on advanced chemical problems, we curated a benchmark of 30 cutting-edge research questions, each associated with 3–6 mutually related candidate hypotheses, totaling 124 hypotheses. Ground-truth experimental outcomes were sourced from published literature, covering major subfields of chemistry, as detailed in § C.1. For each hypothesis, simulated results were generated using the proposed CSX-Sim and compared against the annotated experimental outcomes. This trend comparison is illustrated in § C.2.

Evaluation focused on two key criteria: (1) trend alignment, measured by Spearman rank correlation, which assesses whether the predicted performances correctly reflect the relative ranking of ground-truth annotations within each research question. This criterion is critical, as hypothesis ranking primarily depends on relative performance differences; absolute offsets (e.g., uniform biases of ±0.2) have limited impact on ranking outcomes.
Here we use the "Perfect Consistency Indicator" (PCI) as a metric, which counts the number of research questions for which the simulator achieves perfect trend alignment with experimental outcomes. (2) Predictive accuracy, quantified by root mean square error (RMSE), measuring absolute deviations between predicted and annotated performances. Detailed explanations of this metric and other predictive accuracy indicators are available in § D. The comparative results are summarized in Table 1.

Simulator                | Spearman Correlation (↑) | Perfect Consistency Indicator (↑) | RMSE (↓)
Matched Score            | 0.843                    | 12/30                             | 0.232
CSX-Sim                  | 0.960                    | 26/30                             | 0.213
w/o CriticalPoints       | 0.950                    | 23/30                             | 0.229
w/o ComponentExtraction  | 0.864                    | 12/30                             | 0.272

Table 1: Validating the simulator with collected chemistry experiment results from literature.

Baseline and Ablation. We adopt the "Matched Score" (Yang et al., 2024b) as our primary baseline, which evaluates hypotheses by measuring their similarity to ground-truth references through a reference-based comparison. Additionally, we conduct two ablation studies on CSX-Sim to assess the contribution of its key components: (1) the first ablation (w/o CriticalPoints) disables the labeling of critical components $C$, as defined in Equation 3, allowing hypotheses that lack essential components to still receive positive feedback from the simulator; (2) the second ablation (w/o ComponentExtraction) skips the extraction and weighting of critical components, directly computing mechanism similarity using prompts analogous to the final module in Figure 3.

Results Interpretation. As shown in Table 1, CSX-Sim achieves superior performance across all metrics, with a Spearman correlation of 0.960, perfect consistency in 26 out of 30 questions, and
the lowest RMSE of 0.213. Compared to the Matched Score baseline, CSX-Sim demonstrates substantial improvements in both trend alignment (+0.117 in Spearman) and robustness (+14 in PCI), while also reducing predictive error. Ablation studies further highlight the importance of critical component identification: removing CriticalPoints slightly degrades performance (Spearman 0.950, PCI 23/30), whereas omitting component extraction leads to significant drops in both alignment (Spearman 0.864) and accuracy (RMSE 0.272). These results underscore the necessity of fine-grained component analysis in achieving high-fidelity simulation feedback.

4.2 Experiment-Guided Ranking: Baselines and Ablation Study

Data and Evaluation Metrics. We evaluate experiment-guided ranking on the TOMATO-chem dataset (Yang et al., 2024b), which includes 51 chemical problems, each annotated with a ground-truth (gdth) hypothesis. For each problem, we use the MOOSE-Chem framework (Yang et al., 2024b) to generate 63 additional candidate hypotheses that are distinct from the ground truth, resulting in 64 hypotheses per research question (1 gdth and 63 negatives). To measure performance, we define the metric $N_{\text{trials}}$, representing the number of simulation-based evaluations required to identify the ground-truth hypothesis for each of the 51 problems. Lower values of $N_{\text{trials}}$ indicate more efficient hypothesis prioritization. Results are summarized in Table 2.

Method                                     | N_trials (↓)
Random Sampling                            | 32.000
Pre-Experiment Ranking                     | 33.280
CSX-Rank                                   | 15.196
w/o Clustering                             | 27.980
w/o Clustering & Analysis                  | 35.627
w/o Clustering & Analysis & Full Feedback  | 37.667

Table 2: Number of experiments required to identify the ground truth hypothesis across methods.

Baselines. We consider two baselines: Pre-Experiment Ranking and Random Sampling.
Pre-Experiment Ranking follows the strategy used in MOOSE-Chem (Yang et al., 2024b), where hypotheses are scored based on the model's prior knowledge and ranked accordingly, without incorporating any experimental feedback. Random Sampling selects hypotheses uniformly at random, serving as a simple yet unbiased baseline.

As shown in Table 2, both baselines require over 32 trials on average to identify the ground-truth hypothesis, with Pre-Experiment Ranking (33.28 trials) slightly underperforming Random Sampling (32.00 trials). This counterintuitive result indicates that relying solely on prior model knowledge, without feedback, can lead to suboptimal prioritization, as initial estimation errors may mislead the ranking more than random choice.

Ablation Study. To assess the contribution of key components in CSX-Rank, we conducted ablation studies under three conditions: (1) removing functional clustering (CSX-Rank w/o Clustering); (2) further disabling feedback analysis (CSX-Rank w/o Clustering & Feedback Analysis); and (3) additionally limiting feedback to the 10 most recent simulation results (CSX-Rank w/o Clustering & Feedback Analysis & Full Feedback). As shown in Table 2, progressively removing these components leads to marked performance degradation, confirming the importance of clustering, analytical summarization, and sufficient feedback quantity for efficient hypothesis ranking.

4.3 Simulator: Ablation on Different $\phi(\cdot)$ with Different Levels of Distortion

To assess how simulator quality affects ranking performance, we leverage the observation that experiment-guided ranking is fundamentally an optimization process: navigating the hypothesis space to identify candidates with superior experimental performance. A high-fidelity simulator facilitates this search by providing informative feedback, while a degraded simulator misleads the process, making it harder to reach the optimum. Based on this
perspective, we systematically introduce controlled distortions that worsen simulator fidelity from an optimization standpoint. Specifically, we collaborated with chemistry PhD students to design three types of distortions commonly encountered in chemical research: local maxima/minima, plateaus, and cliffs. These noise patterns capture typical challenges in hypothesis evaluation, informed by domain expertise and heuristics. We defined three distortion levels (Simple Noise, Moderate Noise, and Complex Noise) and incorporated them into the hypothesis embedding function $\phi(\cdot)$ to simulate increasingly challenging feedback conditions. The composition and classification of the constructed noise are detailed in § E.

We evaluated CSX-Rank, CSX-Rank w/o Clustering, and CSX-Rank w/o Clustering & Analysis across three noise scenarios of increasing complexity: Simple Noise (3 maxima, 3 minima, 1 cliff, 2 plateaus), Moderate Noise (8 maxima, 12 minima, 3 cliffs, 4 plateaus), and Complex Noise (38 maxima, 22 minima, 4 cliffs, 4 plateaus). As shown in Table 3, increasing noise complexity progressively degraded performance across all methods, as reflected by higher $N_{\text{trials}}$. CSX-Rank consistently outperformed its ablated variants, maintaining a substantial efficiency margin even under Complex Noise (32.7 vs. 36.5 and 40.5 trials). These results highlight the robustness of functional clustering and feedback analysis in mitigating misleading signals and preserving search efficiency. The findings align with Section 4.2, underscoring the critical role of each component in navigating noisy hypothesis spaces.

Method                     | N_trials (Simple Noise) | N_trials (Moderate Noise) | N_trials (Complex Noise)
CSX-Rank                   | 21.804                  | 26.608                    | 32.706
w/o Clustering             | 32.706                  | 35.843                    | 36.471
w/o Clustering & Analysis  | 37.235                  | 38.373                    | 40.451

Table 3: Simulator with different noise conditions.

5 Related Work

Most prior work on hypothesis ranking has focused on pre-experiment ranking.
Some approaches assign a score to each hypothesis and rank them accordingly, providing a simple and efficient solution (Yang et al., 2024a,b). Others adopt a pairwise ranking strategy, evaluating hypothesis pairs one at a time (Si et al., 2024; Liu et al., 2025). However, these methods rely solely on the internal reasoning of LLMs and do not incorporate feedback from experimental outcomes.

To our knowledge, few existing works leverage experimental feedback in hypothesis-driven tasks. Notably, recent methods in mathematics (Romera-Paredes et al., 2024; Shojaee et al., 2024) and programming (Qiu et al., 2024) incorporate feedback loops by refining hypotheses based on verification outcomes. These approaches rely on domains where extremely efficient verifiers are available, allowing for rapid hypothesis testing and direct refinement rather than explicit ranking. In contrast, our work focuses on natural science domains, where real experiments are significantly more costly, making such exhaustive trial-and-error strategies impractical. This difference motivates the need for a more deliberate experiment-guided ranking process, where each experiment must inform the prioritization of future hypotheses due to limited experimental bandwidth.

Roohani et al. (2024) address hypothesis generation in a genetic perturbation setting, where task-specific feedback can be directly computed (e.g., via gene overlap). In contrast, our work focuses on constructing general-purpose simulators for natural science domains, with an emphasis on chemistry due to the availability of annotated novel hypotheses from the literature (Yang et al., 2024b).

6 Conclusion

We present a systematic framework for experiment-guided hypothesis ranking in chemistry, addressing the critical challenge of limited
access to real experimental feedback. By formalizing three foundational assumptions, we develop a high-fidelity simulator that approximates experimental outcomes based on hypothesis similarity, validated against a curated dataset of 124 hypotheses with reported wet-lab results. Building on this simulator, our proposed CSX-Rank method leverages functional clustering and iterative feedback analysis to efficiently prioritize hypotheses during the discovery process. Empirical evaluations demonstrate that CSX-Rank significantly outperforms pre-experiment baselines, reducing the number of trials required to identify ground-truth hypotheses by more than 50% on the TOMATO-chem dataset. Ablation studies and controlled noise experiments further highlight the importance of analytical components and feedback integration for robust performance under increasingly challenging conditions.

References

Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 995–1005, 2012.

Erik Cambria, Rui Mao, Melvin Chen, Zhaoxia Wang, and Seng-Beng Ho. Seven pillars for the future of artificial intelligence. IEEE Intelligent Systems, 38(6):62–69, 2023.

Mario Coccia. Why do nations produce science advances and new technology? Technology in Society, 59:101124, 2019.

Yujie Liu, Zonglin Yang, Tong Xie, Jinjie Ni, Ben Gao, Yuqiang Li, Shixiang Tang, Wanli Ouyang, Erik Cambria, and Dongzhan Zhou. ResearchBench: Benchmarking LLMs in scientific discovery via inspiration-based task decomposition. arXiv preprint arXiv:2503.21248, 2025.

Ziming Luo, Zonglin Yang, Zexin Xu, Wei Yang, and Xinya Du. LLM4SR: A survey on large language models for scientific research. CoRR, abs/2501.04306, 2025. doi: 10.48550/ARXIV.2501.04306. URL https://doi.org/10.48550/arXiv.2501.04306.

OpenAI.
GPT-4o mini: Advancing cost-efficient intelligence. https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/, 2024. Accessed: 2025-05-16.

Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, et al. Phenomenal yet puzzling: Testing inductive reasoning capabilities of language models with hypothesis refinement. In The Twelfth International Conference on Learning Representations, 2024.

Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg, Pengming Wang, Omar Fawzi, et al. Mathematical discoveries from program search with large language models. Nature, 625(7995):468–475, 2024.

Yusuf Roohani, Andrew Lee, Qian Huang, Jian Vora, Zachary Steinhart, Kexin Huang, Alexander Marson, Percy Liang, and Jure Leskovec. BioDiscoveryAgent: An AI agent for designing genetic perturbation experiments. arXiv preprint arXiv:2405.17631, 2024.

Parshin Shojaee, Kazem Meidani, Shashank Gupta, Amir Barati Farimani, and Chandan K Reddy. LLM-SR: Scientific equation discovery via programming with large language models. arXiv preprint arXiv:2404.18400, 2024.

Chenglei Si, Diyi Yang, and Tatsunori Hashimoto. Can LLMs generate novel research ideas? A large-scale human study with 100+ NLP researchers. arXiv preprint arXiv:2409.04109, 2024.

Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, and Erik Cambria. Large language models for automated open-domain scientific hypotheses discovery. In Findings of the Association for Computational Linguistics ACL 2024, pp. 13545–13565, 2024a.

Zonglin Yang, Wanhao Liu, Ben Gao, Tong Xie, Yuqiang Li, Wanli Ouyang, Soujanya Poria, Erik Cambria, and Dongzhan Zhou. MOOSE-Chem: Large language models for rediscovering unseen chemistry scientific hypotheses. arXiv preprint arXiv:2410.07076, 2024b.
A Extracting Key Chemical Components in the Simulator

A.1 A Framework for Extracting Critical Chemical Components in the Simulator

The specific framework CSX-Sim uses to extract key chemical components is shown in Figure 5. For scientific hypotheses addressing specific problems, we categorize key chemical components and conclusions within the hypothesis. We then analyze the role and mechanism of each key chemical component based on the chemical problem and the conclusions drawn from the hypothesis. Finally, we review and output the key chemical components, their corresponding mechanisms, and the conclusions from the hypothesis.

[Figure 5: A framework for extracting chemical components in the simulator: (1) identify key chemical components and conclusions; (2) explain the mechanism of each key chemical component; (3) verify and output key points, mechanisms, and conclusions.]

A.2 Prompt for Extracting Key Chemical Components in the Simulator

The prompt for extracting key chemical components in the simulator, along with examples, is as follows:

You are an experienced chemistry expert. I will provide you with a scientific question and a scientific hypothesis. Your task is to identify the chemical key points within the hypothesis that are essential for addressing the scientific question. Chemical key points are the core elements, such as basic chemical components, reactions, or mechanistic methods, critical to solving the problem effectively. Analyze these key points by linking them to the scientific question, determining how they contribute to resolving it.

When identifying chemical key points, consider the following: Each substance may be a key point. If it includes specific parameters like concentration or mass fraction (e.g., 0.3 M NaCl, 10 wt% PVA), ensure these details are retained in the division process without losing specificity.
If multiple substances are related and function together (e.g., potassium ferricyanide and potassium ferrocyanide as an oxidizing-reducing pair), group them as a single chemical key point based on their shared role or interdependence. Exclude elements from the scientific question that reappear in the hypothesis as prerequisites (e.g., if the question involves improving MXene nanosheets and the hypothesis enhances them with liquid metal, MXene nanosheets are a prerequisite, not a key point; liquid metal is the key point). Prerequisites should not be output or analyzed as key points. Distinguish key points from validation methods (e.g., elemental analysis to verify properties). Validation methods support the hypothesis but are not chemical key points.

For each identified chemical key point, conduct a detailed and rigorous analysis of its role and function in relation to the scientific question. Use your chemical knowledge to explain the specific mechanism by which it addresses the problem, focusing on how it enhances the relevant properties or performance outlined in the question. Provide a clear, mechanistic explanation of its contribution and, if multiple key points exist, describe their interconnections. Additionally, identify the results, i.e., effects or phenomena caused by these key points, representing the experiment's outcomes. In your output, focus on listing and explaining the chemical key points, followed by the results, ensuring no prerequisites from the scientific question are included.

Output format:

Chemical Key Points
Chemical substance/component/method 1
Role and Function: Describe
the role and function of the substance or method, including a detailed mechanistic explanation of how it addresses the scientific question and enhances relevant properties.
Chemical substance/component/method 2
Role and Function: Describe the role and function of the substance or method, including a detailed mechanistic explanation of how it addresses the scientific question and enhances relevant properties.
End Chemical Key Points

Results
Result 1: Describe the effects caused by the aforementioned reasons (e.g., performance improvement, efficiency changes).
Result 2: Further describe other effects related to the experimental objectives.
End Results

Example:

Chemical Key Points
1. 10 wt% PVA (Polyvinyl Alcohol)
Role and Function: Polyvinyl alcohol (PVA) hydrogel acts as the base material, providing structural support and mechanical performance for thermoelectric gels. PVA with a mass fraction of 10% can provide mechanical support through hydrogen bonds in its structure and interact with potassium ferricyanide and potassium ferrocyanide to offer electrical changes.
2. Gdm2SO4 (Guanidine Sulfate)
Role and Function: Guanidine sulfate (Gdm2SO4) is integrated into the K3[Fe(CN)6]/K4[Fe(CN)6] system to improve thermoelectric performance. The introduction of guanidine salt increases solvent entropy and effectively enhances thermopower.
3. Directional Freezing Method
Role and Function: By employing directional freezing technology, aligned channels are created, enhancing the electrical conductivity and mechanical strength of the material.
4. Potassium Ferricyanide and Potassium Ferrocyanide (K3[Fe(CN)6]/K4[Fe(CN)6])
Role and Function: These compounds are crucial electrolytes that facilitate redox reactions within the polymer gel.
The presence of these ions enhances ion mobility and conductivity due to their ability to undergo reversible redox processes, thereby boosting the thermoelectric properties of the gel.
End Chemical Key Points

Results
Carnot-relative Efficiency: The Carnot-relative efficiency of the FTGA exceeds 8%.
Thermopower and Mechanical Robustness: Thermopower and mechanical robustness are enhanced, outperforming traditional quasi-solid-state thermoelectric cells.
End Results

Here is a detailed example in chemistry. To better illustrate the effectiveness of extracting key chemical components, we compare the performance of our simulator against human chemistry experts by analyzing a real-world chemical problem.

• Scientific Question: How can a cost-effective N-type quasi-solid-state thermocell be developed to boost electricity production from low-grade heat by improving both ion transport efficiency and electrode performance?

• Scientific Hypothesis: Develop a flexible N-type quasi-solid-state thermocell by integrating anisotropic polymer networks and hierarchical 3D copper electrodes to enhance ion transport, mechanical robustness, and thermoelectric performance. Utilizing Polyvinyl Alcohol (PVA) as the hydrogel matrix, the anisotropic structure is achieved through a directional freeze-thawing (DFT) process, which involves applying a temperature gradient during freezing to guide ice crystal growth for polymer chain alignment. Repeated cycles further enhance the alignment and crosslinking, creating anisotropic pores that reduce ion transport resistance. Ionic crosslinking with a 0.7 M CuSO4 electrolyte and 0.1 M H2SO4 strengthens the hydrogel while retaining flexibility. Meanwhile, hierarchical 3D copper electrodes, fabricated via oxidation, etching, and thermal reduction, provide a high surface area, enhancing redox kinetics of the Cu2+/Cu0 couple and obviating platinum electrode reliance.
This synergistic design achieves a remarkable 1500% increase in power density, reaching 0.51 mW m−2 at a ∆T of 5 °C, with a thermopower of
0.7 mV K−1 and ionic conductivity improved by 20%. Mechanical tests reveal significant strength with a tensile strain at break of 350%, and the system maintains stability under various mechanical deformations. This approach offers a cost-effective, adaptable solution for low-grade heat harvesting.

Here is the chemistry expert's extraction of key chemical components:

• Chemical Key Points:
1. Anisotropic Polymer Networks
Role and Function: The layered polymer network structure enhances ion transport rates, thereby improving thermoelectric conversion efficiency.
2. Hierarchical 3D Copper Electrodes
Role and Function: The use of 3D copper electrodes effectively increases the reaction contact area, enhancing the thermoelectric potential.
3. Polyvinyl Alcohol (PVA) as the Hydrogel Matrix
Role and Function: PVA hydrogel is used as the matrix to provide good mechanical properties and structural support.
4. Directional Freeze-Thawing (DFT) Process
Role and Function: The DFT process forms ion channels along the freezing direction, thereby increasing ion transport rates.
5. 0.7 M CuSO4 Electrolyte
Role and Function: This electrolyte is used for ionic crosslinking to enhance the mechanical properties of the thermoelectric material.
6. 0.1 M H2SO4
Role and Function: This acid is used together with the 0.7 M CuSO4 electrolyte for ionic crosslinking to enhance the mechanical properties of the thermoelectric material.

• Results:
– Enhanced Electrical Properties of the Thermoelectric Material: increased power density, improved thermopower, enhanced ionic conductivity.
– Improved Mechanical Properties of the Thermoelectric Material: increased strength with a tensile strain, enhanced stability under various mechanical deformations.

Here is CSX-Sim's extraction of key chemical components:

• Chemical Key Points:
1.
Anisotropic Polymer Networks
Role and Function: These networks are engineered to enhance ion transport efficiency while providing mechanical robustness through alignment of polymer chains, reducing ion transport resistance in thermocells.
2. Hierarchical 3D Copper Electrodes
Role and Function: The use of hierarchical 3D copper electrodes increases the surface area, enhancing redox kinetics of the Cu2+/Cu0 couple, thus improving the thermoelectric performance without relying on expensive platinum electrodes.
3. Copper Sulfate (CuSO4) Electrolyte
Role and Function: A 0.7 M CuSO4 solution serves as an ionic crosslinker in the hydrogel, enhancing mechanical strength while facilitating efficient ion transport in the system.
4. Sulfuric Acid (H2SO4)
Role and Function: The addition of 0.1 M H2SO4 in the hydrogel contributes to ionic crosslinking and enhances the overall conductivity of the electrochemical system.
5. Directional Freeze-Thawing (DFT) Process
Role and Function: This method is used to create an ordered structure in the polymer hydrogel, promoting alignment of polymer chains and ensuring anisotropic pores that further reduce ion transport resistance.

• Results:
– Power Density Increase
– Enhanced Thermopower
– Improved Ionic Conductivity
– Mechanical Strength under Deformation

Here is a comparison of the analysis results between our simulator and human experts. By comparing the approaches of a chemistry expert and CSX-Sim in extracting key chemical components for the specific chemical issues of ion transport efficiency and electrode performance, CSX-Sim successfully identifies solutions in its scientific hypotheses, including anisotropic polymer networks and hierarchical 3D copper electrodes. Compared to the human chemistry expert, CSX-Sim captures five out of six key points,
missing only one: "Polyvinyl Alcohol (PVA) as the Hydrogel Matrix." The points it does identify align accurately with those proposed by the human expert based on the hypothesis, demonstrating the high accuracy of CSX-Sim in extracting key chemical components.

B The Role of CriticalPoints in CSX-Sim

To better illustrate the role of labeling critical components C in CSX-Sim, as defined in Equation 3, we provide an example for clarity. For simplicity, we define the term ∏_{i∈C} 1_{s_i>0} from Equation 3, related to CriticalPoints, as the Correction Factor. This factor takes values of either 0 or 1. The scientific problem under study is: How can a polymer gel material be designed to enhance the Seebeck coefficient (Se) by optimizing the matrix material and redox pair, thereby improving the energy conversion efficiency of a thermoelectric device utilizing the temperature difference between body heat and the environment? This scientific problem corresponds to four real experimental hypotheses, outlined as follows:

1. Hypothesis 1: By combining gelatin with KCl, prepare a gel with high ionic conductivity to investigate its Seebeck coefficient (Se) performance with the [Fe(CN)6]3−/[Fe(CN)6]4− redox pair. KCl, as an electrolyte, significantly enhances the gel's ionic conductivity, while the [Fe(CN)6]3−/[Fe(CN)6]4− redox pair boosts the Seebeck coefficient through temperature-gradient-driven ion diffusion. Gelatin provides biocompatibility and mechanical strength, making it suitable for efficient thermoelectric energy conversion.

2. Hypothesis 2: By combining a PVA matrix with HCl, prepare a gel with high ionic conductivity and investigate its Seebeck coefficient (Se) performance under the influence of the Fe3+/Fe2+ redox pair. HCl, as a strong electrolyte, significantly enhances the gel's ionic conductivity, while the Fe3+/Fe2+ redox pair boosts the Seebeck coefficient through temperature-difference-driven ion diffusion.
PVA provides flexibility and transparency, and by optimizing the HCl concentration and PVA crosslinking degree, ion migration efficiency can be further improved, enhancing the Seebeck coefficient and making it suitable for efficient energy conversion in body-heat thermoelectric devices.

3. Hypothesis 3: By preparing a pure PVA gel, investigate its Seebeck coefficient (Se) performance under the influence of the Fe3+/Fe2+ redox pair. PVA, as a hydrophilic polymer, possesses a certain level of ionic conductivity, and the Fe3+/Fe2+ redox pair generates a Seebeck coefficient through temperature-difference-driven ion diffusion.

4. Hypothesis 4: By polymerizing acrylamide (PAM), prepare a hydrogel and investigate its thermoelectric performance. The porous network structure of the polyacrylamide hydrogel enhances the gel's ionic conductivity.

The results of the actual wet-lab experiments are summarized in Table 4. For the first three hypotheses, the CriticalPoints are included in the assumptions, resulting in a Correction Factor of 1 and non-zero final results. In contrast, for the fourth hypothesis, the CriticalPoints are not included in the assumption, leading to a Correction Factor of 0 and a final result of 0.

Redox pair, Matrix | Seebeck coefficient Se / mV K−1
[Fe(CN)6]3−/[Fe(CN)6]4−, KCl/Gelatin | 17
Fe3+/Fe2+, HCl/PVA | 1.02
Fe3+/Fe2+, PVA | 0.85
Polymerized acrylamide (PAM) | 0

Table 4: The Seebeck coefficient (Se) of polymer thermoelectric materials.

Our simulator analyzes Hypothesis 4, where the Correction Factor is 0, as follows:

Analysis: The Ground Truth Scientific
Hypothesis identifies three core key points: Gelatin, KCl (potassium chloride), and the [Fe(CN)6]3−/[Fe(CN)6]4− redox pair, each playing a significant role in enhancing the Seebeck coefficient for thermoelectric efficiency. The Generated Hypothesis uses polyacrylamide (PAM) in place of gelatin, which can provide a similar structural function but lacks some specific characteristics of gelatin important for thermoelectric stability. It discusses ionic conductivity, paralleling that of KCl without directly employing the salt. Notably, it does not mention the redox pair, which is critical to the thermoelectric function as outlined in the Ground Truth Hypothesis. However, while the Generated Hypothesis incorporates functional aspects related to ionic conductivity and chemical structure, it omits the crucial redox couple and does not fully replicate the essential components necessary for observable thermoelectric phenomena defined by the Ground Truth. The presence of PAM and evolving ionic conductivity offers a partial basis for plausible thermoelectric behavior under hypothetical conditions, but the complete omission of the redox pair is significant. The lack of adherence to all core key points and reliance on acceptable substitutions leads to a conclusion that while observable phenomena might still occur, they are limited. Given these evaluations, I assign R = 0, as there is a substantial gap in essential components for the reaction proposed in the Ground Truth Hypothesis.

Final Correction Factor: 0

In summary, our simulator analyzes Hypothesis 4, noting that enhancing the Seebeck coefficient of a polymer gel material requires a redox pair. The first three hypotheses incorporate redox pairs, either [Fe(CN)6]3−/[Fe(CN)6]4− or Fe3+/Fe2+, which facilitate efficient conversion of thermal energy to electrical energy.
In contrast, Hypothesis 4 only involves polymerizing acrylamide (PAM) and lacks a redox pair, rendering it unable to effectively convert thermal energy into electrical energy. Consequently, the thermoelectric potential (Seebeck coefficient, Se) is zero.

C Evaluating the Simulator with Real Experiment Results

In this section, we present the validation of our simulator's accuracy using a dataset of 124 chemical hypotheses, detailing their classification and composition. We further compare the trends of the simulated results with the corresponding real experimental outcomes to assess the simulator's predictive performance and reliability in capturing real-world chemical behaviors.

C.1 Dataset Composition and Analysis

To evaluate the performance of the simulator, we conducted a thorough analysis using real-world experimental data. We curated a set of 30 cutting-edge chemical questions, each designed to probe significant aspects of chemical research. These questions were carefully selected to encompass multiple areas within the chemistry domain, ensuring a diverse and representative evaluation framework. Each question was associated with 3 to 6 hypotheses, resulting in a total of 124 authentic wet-lab chemical experiment results. This extensive dataset forms a robust foundation for assessing the simulator's predictive accuracy and reliability.

The 124 experiment results were sourced from key subfields of chemistry to provide broad coverage of the discipline. The distribution of these results across subfields is presented in Table 5. Specifically, Polymer Chemistry contributed 16 results, Organic Chemistry provided 36, Inorganic Chemistry accounted for 33, and Analytical Chemistry comprised 39, totaling 124 results. This distribution across multiple subfields ensures that the
test set reflects the diversity and complexity of real-world chemical experiments, enhancing the robustness of our evaluation. A statistical analysis of the 124 authentic wet-lab results was conducted to rigorously evaluate the simulator's performance. By including a substantial number of experiments from various subfields, we ensured that the dataset captures a wide range of challenges encountered in chemical research. This approach minimizes potential biases from over-representing any single subfield, thereby strengthening the reliability of our evaluation. The dataset's diversity and scale provide a solid basis for assessing the simulator's ability to predict experimental outcomes accurately, offering valuable insights for future research and applications.

Category | Count
Polymer Chemistry | 16
Organic Chemistry | 36
Inorganic Chemistry | 33
Analytical Chemistry | 39
Total | 124

Table 5: Distribution of categories.

The use of authentic wet-lab results bolsters the credibility of our findings. By grounding the evaluation in real experimental data, we ensured that the simulator's predictions were tested against the intricacies and variability of actual laboratory conditions. This approach not only validates the simulator's performance but also underscores its potential to guide subsequent research by delivering reliable and actionable predictions. The diverse dataset and representation of multiple subfields collectively contribute to a comprehensive and effective evaluation, paving the way for advancements in chemical simulation and experimentation.

C.2 Trend Comparison with Real Experiment Results

To further assess the capabilities of our CSX-Sim, we utilized it to simulate 124 wet-lab experiments. These experiments corresponded to 30 cutting-edge chemical science questions, and their simulated outcomes were subsequently aggregated for a comprehensive analysis.
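A minimal sketch of this aggregation step, assuming each experiment's simulated value is averaged over repeated CSX-Sim trials and all values are min-max normalized before being sorted for trend comparison (the function and variable names here are illustrative, not from the paper's code):

```python
def aggregate_results(trial_values):
    """Average repeated CSX-Sim trials per experiment, min-max normalize,
    and sort ascending for the "Order of Experimental Results" axis."""
    means = [sum(trials) / len(trials) for trials in trial_values]
    lo, hi = min(means), max(means)
    normalized = [(m - lo) / (hi - lo) for m in means]
    return sorted(normalized)

# Example: three experiments, three trials each
print(aggregate_results([[2, 2, 2], [0, 0, 0], [1, 1, 1]]))  # [0.0, 0.5, 1.0]
```

Normalizing before comparison keeps experiments with very different raw scales on a common [0, 1] axis, which is what allows the simulated and real trend lines to be overlaid in a single figure.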
For each of the 124 experiments, the simulated result was derived from the average of three trials conducted by CSX-Sim. These results, each corresponding to one of the curated chemical questions, were systematically arranged in ascending order along the "Order of Experimental Results" axis, as depicted in Figure 6. This organization enabled a unified comparison between the simulated and actual experimental outcomes, with the vertical axis representing normalized experimental results to standardize the evaluation across the dataset. Figure 6 compares the trends observed in CSX-Sim predictions (green line) with those from real experimental data (blue line). Error bars, representing the population standard deviation, illustrate the variability of the data points. Statistical significance was further established using the Bootstrap method, with results indicating p < 0.01 (Berg-Kirkpatrick et al., 2012). The aggregated analysis reveals that the simulator effectively predicts the mean trends for all 30 sets of results, demonstrating a strong consistency with the mean of the actual experimental outcomes. This alignment of mean trends across the diverse questions underscores the simulator's ability to model chemical processes accurately, capturing the overall behavior of the experimental data, regardless of the specific subfield. The use of normalized results ensures that differences in scale do not affect the comparison, allowing a fair assessment of the simulator's trend-matching capability. The close correspondence between the simulated and real mean data, as visualized in the figure, highlights CSX-Sim's broad applicability across the chemistry domain. By successfully replicating the mean trends of the 124 results, the simulator proves to be
a versatile tool, offering reliable predictions that can support a wide range of chemical research and applications.

[Figure 6: Comparison of simulated and real experimental results for the CSX-Simulator. x-axis: Order of Experimental Results; y-axis: Normalized Experimental Results; series: Real Experimental Data and CSX-Simulator.]

D Evaluation of Trend Alignment and Accuracy

D.1 Evaluation of Trend Alignment

To quantitatively assess trend alignment between simulated and experimental results, we employed the Spearman Rank Correlation Coefficient (denoted as ρ). This non-parametric measure evaluates the monotonic relationship between the rankings of simulated and experimental outcomes, making it suitable for capturing trend consistency across diverse chemical problems. The Spearman Correlation Coefficient is calculated as follows:

ρ = 1 − (6 Σ d_i²) / (n(n² − 1))    (7)

Where:
d_i: the difference between the ranks of the i-th simulated and experimental result.
n: the number of hypotheses in a given group (ranging from 3 to 6 per scientific question).
ρ: the correlation coefficient, ranging from −1 (perfect negative correlation) to 1 (perfect positive correlation), with 0 indicating no monotonic relationship.

A Spearman Correlation Coefficient (ρ) near 1 indicates strong trend alignment, meaning the simulated results closely mirror the relative ordering of experimental outcomes. Our CSX simulator achieved a mean Spearman Correlation Coefficient of ρ = 0.960, significantly outperforming the baseline, as shown in Table 1, and demonstrating superior trend alignment. To further assess the robustness of the simulator across diverse problems, we introduced the Perfect Consistency Indicator (PCI), a stringent metric that counts the number of question groups (out of the 30 scientific questions) where the simulated results achieved perfect trend alignment with the experimental results (ρ = 1).
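Equation (7) and the PCI count can be sketched as follows. This is an illustrative implementation (the helper names are ours, not from the paper), and the closed-form formula is only valid when there are no tied ranks:

```python
def to_ranks(values):
    # Rank each value 1..n by ascending order (assumes no ties)
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def spearman_rho(simulated, experimental):
    # Eq. (7): rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
    r_sim, r_exp = to_ranks(simulated), to_ranks(experimental)
    n = len(simulated)
    d2 = sum((a - b) ** 2 for a, b in zip(r_sim, r_exp))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def pci(groups):
    # PCI: number of question groups with perfect trend alignment (rho == 1)
    return sum(1 for sim, exp in groups if spearman_rho(sim, exp) == 1)

# Identical ordering gives rho = 1; a fully reversed ordering gives rho = -1
print(spearman_rho([0.1, 0.5, 0.9], [1.0, 2.0, 3.0]))  # 1.0
print(spearman_rho([0.9, 0.5, 0.1], [1.0, 2.0, 3.0]))  # -1.0
```

Because each question group only has 3 to 6 hypotheses, ρ takes a small set of discrete values per group, which is why the strict ρ = 1 criterion behind the PCI is meaningful here.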
Perfect trend alignment requires an exact match in the ranking of simulated and experimental outcomes, making PCI a robust measure of the simulator's ability to consistently replicate experimental trends across all problems. Notably, our CSX simulator achieved perfect trend alignment (ρ = 1) in 26 out of 30 question groups, significantly surpassing the baseline methods and highlighting its exceptional robustness and predictive fidelity.

D.2 Evaluation of Simulator Accuracy

For evaluating prediction accuracy, we used the Root Mean Square Error (RMSE) to quantify the deviation between simulated and experimental values. The RMSE is defined as:

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)² )    (8)

Where:
y_i: the experimental result for the i-th hypothesis.
ŷ_i: the simulated result for the i-th hypothesis.

The CSX simulator exhibited a lower RMSE than the "Matched Score" baseline (Yang et al., 2024b), signifying improved predictive accuracy, as substantiated by the results in Table 1. To thoroughly evaluate the predictive accuracy of our CSX simulator compared to real-world experimental outcomes, we tested its performance on a dataset of 124 authentic scientific hypotheses. For a comprehensive comparison, we calculated several performance indicators, as presented in Table 6. Building on the previously discussed metrics, we introduced three additional measures: Mean Squared Error (MSE), Mean Absolute Error (MAE), and Root Mean Squared Logarithmic Error (RMSLE). These metrics, defined below, enhance the robustness of our analysis by capturing different aspects of prediction error.

Simulator | MSE (↓) | MAE (↓) | RMSLE (↓)
Matched Score | 0.068 | 0.179 | 0.166
CSX-Sim | 0.058 | 0.161 | 0.147
w/o CriticalPoints | 0.064 | 0.174 | 0.159
w/o ComponentExtraction | 0.087 | 0.215 | 0.192

Table 6: Validating the simulator with collected chemistry experiment results from literature.

Below, we define each metric used in the evaluation, along with their respective formulas, to ensure scientific rigor:

Mean Squared Error (MSE): MSE measures the average squared difference between predicted values ŷ_i and actual values y_i across n samples. It is defined as:

MSE = (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)²    (9)

A lower MSE indicates higher predictive accuracy, with larger errors penalized more heavily due to squaring.

Mean Absolute Error (MAE): MAE quantifies the average absolute difference between predicted and actual values, calculated as:

MAE = (1/n) Σ_{i=1}^{n} |ŷ_i − y_i|    (10)

This metric is less sensitive to outliers than MSE, providing a more balanced measure of error.

Root Mean Squared Logarithmic Error (RMSLE): RMSLE focuses on relative errors by evaluating the logarithmic difference between predicted and actual values:

RMSLE = sqrt( (1/n) Σ_{i=1}^{n} (log(ŷ_i + 1) − log(y_i + 1))² )    (11)

This metric is particularly useful for datasets with exponential trends or varying error scales.

As shown in Table 6, CSX-Sim consistently outperforms the "Matched Score" baseline (Yang et al., 2024b) across all metrics, achieving an MSE of 0.058, an MAE of 0.161, and an RMSLE of 0.147. Ablation studies further reveal the contributions of individual components: the removal of CriticalPoints results in a slight performance decline (MSE of 0.064, MAE of 0.174, RMSLE of 0.159), while the exclusion of ComponentExtraction leads to more significant degradation (MSE of 0.087, MAE of 0.215, RMSLE of 0.192). These results underscore the importance of both critical point identification and component extraction in achieving high predictive accuracy and robustness in simulation outcomes.
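The four error metrics in Equations (8)–(11) can be written directly from their definitions; a minimal self-contained sketch (function names are ours):

```python
import math

def mse(y, y_hat):
    # Eq. (9): mean squared difference between actual y and predicted y_hat
    return sum((p - a) ** 2 for a, p in zip(y, y_hat)) / len(y)

def mae(y, y_hat):
    # Eq. (10): mean absolute difference
    return sum(abs(p - a) for a, p in zip(y, y_hat)) / len(y)

def rmse(y, y_hat):
    # Eq. (8): square root of the MSE
    return math.sqrt(mse(y, y_hat))

def rmsle(y, y_hat):
    # Eq. (11): root mean squared difference of log(x + 1) values,
    # which emphasizes relative rather than absolute error
    return math.sqrt(
        sum((math.log(p + 1) - math.log(a + 1)) ** 2 for a, p in zip(y, y_hat))
        / len(y)
    )

print(mse([0.0, 1.0], [0.5, 0.5]))   # 0.25
print(mae([0.0, 1.0], [0.5, 0.5]))   # 0.5
print(rmse([0.0, 1.0], [0.5, 0.5]))  # 0.5
```

The +1 shift inside the logarithms of RMSLE keeps the metric defined when a simulated or experimental value is 0, as in the PAM row of Table 4.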
E Different Levels of Distortion

We collaborated with chemistry PhD students to identify and design three common types of distortions encountered in chemical research: local maxima/minima, plateaus, and cliffs. These distortion patterns reflect typical challenges in hypothesis evaluation, drawing on domain expertise and established heuristics to ensure relevance. We defined three distinct distortion levels—Simple Noise, Moderate Noise, and Complex Noise—and incorporated them into the hypothesis embedding function ϕ(·) to simulate increasingly challenging feedback conditions.

In chemical scientific hypotheses, biases in understanding key factors can result in specific distortion patterns. For instance, when adding guanidine sulfate to polymer thermoelectric materials, recognizing it solely as a salt providing hydrogen bonds for the reaction—while overlooking its influence on the entropy of redox pairs—can lead to a local maximum, as this oversight may enhance thermoelectric performance unexpectedly. Similarly, misjudging irrelevant factors, such as additives in organic reactions with no actual impact, can create a plateau effect. Conversely, misjudging critical factors, like the temperature's role in enzyme activity during enzyme studies, can produce a cliff if the temperature is incorrectly assumed to inhibit the reaction entirely. These elements—local maxima/minima, plateaus, and cliffs—present significant challenges in optimization problems within chemical research.

Through extensive discussions with chemistry experts, we conducted a statistical analysis to evaluate the discrepancies between wet-lab results and empirically expected outcomes across diverse experimental scenarios. This process enabled us to statistically analyze the frequency of the three types of distortions—local maxima/minima, plateaus, and cliffs—across various chemical scenarios. We then quantified
the occurrence of these distortions in different scenarios and sorted them by frequency, from low to high. Based on this distribution, we categorized the discrepancies: the top 35% of observed gaps were classified as Simple Noise, the middle 40% as Moderate Noise, and the bottom 25% as Complex Noise. Furthermore, we integrated the three distortion levels—Simple Noise, Moderate Noise, and Complex Noise—into the hypothesis embedding function ϕ(·) to simulate increasingly challenging feedback conditions. This structured stratification provided a clear framework to evaluate the varying impacts of different scenarios on our simulator, facilitating a deeper understanding of the simulator's performance under diverse conditions.

Noise Conditions | Local Maxima/Minima | Plateaus | Cliffs
Simple | 0-10 | 0-2 | 0-2
Medium | 0-30 | 0-6 | 0-6
Complex | ≥30 | ≥3 | ≥3

Table 7: The composition of different types of noise.

These distortions, along with their detailed quantities, are outlined in the accompanying Table 7, which illustrates the composition of different types of noise across various conditions. For instance, simple noise conditions are associated with 0-10 local maxima/minima, 0-2 plateaus, and 0-2 cliffs. Medium noise conditions escalate these figures to 0-30 local maxima/minima, 0-6 plateaus, and 0-6 cliffs. In complex noise scenarios, the challenges intensify, with ≥30 local maxima/minima, ≥3 plateaus, and ≥3 cliffs, reflecting the increased difficulty in achieving optimal solutions.

We constructed three distinct noise levels to evaluate the robustness of our CSX-Rank under complex chemical feedback conditions. Comparing against Table 3, we observed that with the introduction of noise, the experiment-guided ranking method requires a significantly higher number of simulation feedback iterations to identify the ground truth scientific hypothesis as the complexity of the noise increases.
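Since the ranges in Table 7 overlap (a count of 5 local extrema fits both Simple and Medium), one possible classification rule is to test the most restrictive level first. The sketch below encodes that reading, treating the Complex thresholds as jointly required; this is our interpretation, not a rule stated in the paper:

```python
def classify_noise(extrema, plateaus, cliffs):
    """Classify a simulated feedback landscape by its distortion counts,
    using the Table 7 thresholds (our most-complex-first reading)."""
    if extrema >= 30 and plateaus >= 3 and cliffs >= 3:
        return "complex"
    if extrema <= 10 and plateaus <= 2 and cliffs <= 2:
        return "simple"
    return "medium"

print(classify_noise(5, 1, 1))   # simple
print(classify_noise(20, 4, 2))  # medium
print(classify_noise(40, 5, 3))  # complex
```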
This is primarily due to the growing discrepancy between highly complex noise and real experimental feedback, where simulation feedback contains substantial erroneous information, thereby degrading the performance of screening the ground truth scientific hypothesis from the generated scientific hypotheses.

F Evaluation of Experiment-Guided Ranking and Its Societal Benefits

The intricate knowledge system of chemistry, combined with the multitude of factors influencing hypothesis analysis, often leads to the gradual accumulation of small cognitive biases. These biases can significantly distort the final experimental outcomes, creating substantial disparities between expected and observed results. To address this challenge, we conducted a comparative analysis between two distinct approaches: the experiment-guided ranking method, which leverages simulation feedback or real experimental results to refine hypothesis selection, and the pre-experiment method, which relies solely on the model's prior knowledge for screening the ground truth hypothesis.

Our findings reveal that the experiment-guided ranking method demonstrates a marked improvement over its counterpart. By integrating simulation feedback, this method allows for a reflective process that considers previous simulation (and experimental) results. This iterative reflection provides more contextually relevant information, enabling the selection of the next hypothesis with greater precision. Consequently, this approach effectively mitigates the accumulation of biases, thereby enhancing the efficiency and accuracy of experimental screening processes.

The ranking of hypotheses emerges as a pivotal element in automated scientific discovery, particularly in the natural sciences, where wet-lab experiments are costly and constrained by low throughput. Traditional approaches, such as pre-experiment ranking, depend exclusively on
the internal reasoning of large language models, lacking integration with empirical experimental outcomes. In contrast, we introduce the novel task of experiment-guided ranking, designed to prioritize candidate hypotheses by leveraging insights from previously tested results. However, the development of such strategies is hindered by the impracticality of repeatedly conducting real experiments in natural science domains due to time, cost, and resource limitations. To overcome this obstacle, we propose a simulator grounded in three domain-informed assumptions, modeling hypothesis performance as a function of its similarity to a known ground truth hypothesis, with performance perturbed by noise to reflect real-world variability. To validate this simulator, we curated a dataset comprising 124 chemistry hypotheses, each accompanied by experimentally reported outcomes, providing a robust foundation for evaluation.

Building on this simulator, we developed a pseudo experiment-guided ranking method that clusters hypotheses based on shared functional characteristics and prioritizes candidates using insights derived from simulated experimental feedback. Our experimental results demonstrate that this method outperforms both pre-experiment baselines and strong ablations, highlighting its potential to revolutionize hypothesis selection in chemical research.

Beyond academic and scientific advancements, this approach holds promising societal impacts. By reducing the need for extensive wet-lab experiments, it can lower research costs and accelerate the development of new materials and drugs, potentially improving healthcare access and environmental sustainability. Additionally, the enhanced efficiency in hypothesis testing could foster innovation in industrial applications, such as cleaner energy solutions, contributing to global efforts to address climate change and promote sustainable development.
G Limitations

A primary limitation of this work is that the constructed simulator does not provide perfectly accurate experimental feedback. Specifically, the simulator is based on three foundational assumptions developed through extensive consultations with domain experts. While it represents the first attempt to build such a simulator for experiment-guided hypothesis ranking, its outputs remain an approximation rather than exact experimental results. The rationale for developing this simulator stems from the absence of any prior tools with comparable functionality. Its purpose is to enable research on experiment-guided ranking methods, which can later be applied and validated with real experimental feedback in practical settings.

Importantly, the simulator's absolute accuracy is not critical for this line of research. As long as the experiment-guided ranking methods are robustly developed and tested within this simulated environment, they can subsequently leverage real experimental feedback to identify optimal hypotheses when deployed in real-world scenarios.
Mutarjim: Advancing Bidirectional Arabic-English Translation with a Small Language Model

Khalil Hennara, Muhammad Hreden, Mohamed Motaism Hamed, Zeina Aldallal, Sara Chrouf, and Safwan AlModhayan
Khobar, Saudi Arabia
{hennara, hreden, hamed, aldallal, chrouf, safwan}@misraj.ai

Abstract

We introduce Mutarjim, a compact yet powerful language model for bidirectional Arabic-English translation. While large-scale LLMs have shown impressive progress in natural language processing tasks, including machine translation, smaller models can offer a compelling balance of performance and efficiency. Leveraging this insight, we developed Mutarjim based on Kuwain-1.5B Hennara et al. [2025], a language model tailored for both Arabic and English. Despite its modest size, Mutarjim outperforms much larger models on several established benchmarks, achieved through an optimized two-phase training approach and a carefully curated, high-quality training corpus. Experimental results show that Mutarjim rivals models up to 20 times larger while significantly reducing computational costs and training requirements. We also introduce Tarjama-25, a new benchmark designed to overcome limitations in existing Arabic-English benchmarking datasets, such as domain narrowness, short sentence lengths, and English-source bias. Tarjama-25 comprises 5,000 expert-reviewed sentence pairs and spans a wide range of domains, offering a more comprehensive and balanced evaluation framework. Notably, Mutarjim achieves state-of-the-art performance on the English-to-Arabic task in Tarjama-25, surpassing even significantly larger and proprietary models like GPT-4o mini. We publicly release Tarjama-25 to support future research and advance the evaluation of Arabic-English translation systems.

1 Introduction

Machine translation (MT), a core task in natural language processing (NLP), has made great progress with the rise of Large Language Models (LLMs).
However, Arabic machine translation (AMT) remains a significant challenge due to many characteristics of the Arabic language, such as its grammatical and morphological complexity. Lexical, syntactic, and semantic problems arise when translating the meaning of Arabic words into English and vice versa Baligh & Mohammed [2022]. Despite recent advances in NLP, the Arabic language still lags behind other high-resource languages in terms of translation quality. Existing Arabic-English systems are either limited in their capabilities or are part of larger multilingual models that, while capable of handling many languages, often underperform on Arabic-specific tasks. These models are also computationally demanding, limiting their practicality in low-resource or real-time settings. Consequently, there is increasing interest in developing smaller, task-specific models that balance performance with efficiency while effectively modeling Arabic's linguistic complexity.

arXiv:2505.17894v1 [cs.CL] 23 May 2025

[Figure 1: Performance of various models on Tarjama-25, our newly introduced benchmark for Arabic-English translation, sorted by model size and evaluated using two metrics: ChrF++ (left) and COMET (right).]

In this paper, we introduce Mutarjim, a task-specific small language model optimized for Arabic-English translation. Mutarjim is built on Kuwain-1.5B Hennara et al. [2025], an Arabic-centric decoder-only model. Mutarjim is trained in two stages: a translation-oriented large-scale pre-training phase and a targeted fine-tuning stage using high-quality parallel corpora. This tailored training approach enables Mutarjim to deliver competitive translation quality and faster inference times. In benchmark evaluations, Mutarjim outperforms models with more than 30 billion parameters, including proprietary systems like GPT-4o mini, in both accuracy and efficiency, as shown in Figure 1.
To facilitate robust evaluation and future research, we also present Tarjama-25, a new benchmark dataset for bidirectional Arabic-English translation.