text string | source string |
|---|---|
for legal ques- tion answering. In International Conference on Case- Based Reasoning , pages 445–460. Springer. Yiquan Wu, Kun Kuang, Yating Zhang, Xiaozhong Liu, Changlong Sun, Jun Xiao, Yueting Zhuang, Luo Si, and Fei Wu. 2020. De-biased court‘s view gener- ation with causality. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP) , pages 763–780, Online. Association for Computational Linguistics. Yuemin Xi. 2018. The standards and rules for second- instance ruling changes in civil and commercial cases. Journal of China University of Political Science and Law, (3):110–124. Chaojun Xiao, Xueyu Hu, Zhiyuan Liu, Cunchao Tu, and Maosong Sun. 2021. Lawformer: A pre-trained language model for chinese legal long documents. AI Open , 2:79–84. Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, et al. 2023. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305 . 10 An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, et al. 2025. Qwen3 technical report. arXiv preprint arXiv:2505.09388 . an Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115 . Shengbin Yue, Wei Chen, Siyuan Wang, Bingxuan Li, Chenchen Shen, Shujun Liu, Yuxuan Zhou, Yao Xiao, Song Yun, Xuanjing Huang, et al. 2023. Disc-lawllm: Fine-tuning large language models for intelligent le- gal services. arXiv preprint arXiv:2309.11325 . Weiping Zhang. 2012. The value trade-off between efficiency and fairness in the revision of the *civil procedure law*. Chinese Judiciary , (6):29–34. Zhichao Zhao. 2025. The authoritative logic behind the explanation system: Defending active clarification. Journal of China University of Political Science and Law, (02):148–159. Min Zheng, Bo Liu, and Le Sun. 2022. Lawrec: auto- matic recommendation of legal provisions based on legal text analysis. Computational Intelligence and Neuroscience , 2022(1):6313161. Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, and Zheyan Luo. 2024. LlamaFactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 3: System Demonstrations) , pages 400–410, Bangkok, Thailand. Association for Computational Linguistics. ZhihaiLLM. 2023. Wisdom Interrogatory model card. Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. How does nlp benefit legal system: A summary of legal artificial intelligence. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 5218–5230. Zhi Zhou, Jiang-Xin Shi, Peng-Xiao Song, Xiao-Wen Yang, Yi-Xuan Jin, Lan-Zhe Guo, and Yu-Feng Li. 2024. Lawgpt: A chinese legal knowledge- enhanced large language model. arXiv preprint arXiv:2406.04614 . Yaqi Zhu and Feng Xiao. 2020. The relationship be- tween appeal requests and the method of second- instance rulings: Focusing on article 170, paragraph 1 of the *civil procedure law*. Legal Application , (11):126–133. 11 ABackground on Second-Instance Trials and Appellate Reversal in China A.1 Structural Differences Between Trial Court Levels The second-instance trial is a core component of China’s judicial process. Article 171 of the Civil Procedure Law | https://arxiv.org/abs/2505.16514v2 |
of the People’s Republic of China (2023 Amendment) sets out the statutory conditions and time limits for initiating the second-instance procedure, thereby safeguarding the right of ap- peal. From a legal theory perspective, the second instance refers to the process by which a higher court reviews the judgment or ruling of the first in- stance upon appeal (Jiang and Shao, 1998). It plays a crucial role in correcting errors, ensuring uniform application of the law, and protecting the legiti- mate rights of the parties, thus upholding judicial fairness (Zhang, 2012). In practice, the second- instance judgment is final and, once rendered, car- ries legal effect, maintaining the authority and sta- bility of judicial decisions. Compared with the first instance, the second in- stance differs in several key aspects. In terms of procedure, as shown in Figure 1, the first instance involves the initial trial, following steps such as filing, acceptance, trial, and judgment. In contrast, the second instance is initiated by appeal and fo- cuses on reviewing errors in factual findings, le- gal application, and procedural legality of the first- instance judgment (Li, 2008). The second-instance court undertakes a comprehensive review of the entire first-instance process to ensure fairness. For example, in complex civil contract disputes, the first-instance court may issue an incorrect judgment due to misunderstandings of contract clauses. The second-instance court, through thorough review and precise legal interpretation, can correct such er- rors and effectively protect contractual rights (Zhu and Xiao, 2020). Regarding the scope of review, the first-instance court examines the facts and legal relationships of the case comprehensively, aiming to uncover the truth and apply the law correctly. The second- instance court generally reviews within the scope of the party’s appeal, but may also review the entire case if necessary. This arrangement respects the party’s right to dispose of the case while ensuring the court’s comprehensive control. For instance, if an appeal concerns only part of the judgment but the second-instance court discovers significant errors elsewhere, it may review the entire case toensure fairness. A.2 Categories of Appellate Reversal Grounds According to Article 177 of the Civil Procedure Law of the People’s Republic of China (2023 Amendment), after a comprehensive review of the appealed case, the second-instance court may re- verse, annul, or modify the judgment, or remand the case for retrial if it finds errors in factual find- ings, legal application, or unclear basic facts in the original judgment (Zhao, 2025). Thus, the main grounds for reversal in the second instance focus on two areas: factual determination and legal appli- cation. Factual determination errors refer to mistakes made by the original court in identifying or un- derstanding key facts, such as erroneous classifica- tions, unclear findings, or insufficient evidentiary support. Specifically, this includes circumstances where the evidence underlying the judgment is un- reliable, insufficient, contradictory, or legally ex- cluded, resulting in incorrect determination of the case’s nature or the allocation of rights and obliga- tions (Bodenheimer et al., 1987). Legal application errors include misjudgment of the legal nature (e.g., misclassifying a tort dispute as a contract dispute), | https://arxiv.org/abs/2505.16514v2 |
improper citation of provisions (e.g., applying a general law where a special law should apply), or violations of the principle of non-retroactivity, all of which undermine the legal basis of the judgment. Scholars have examined these two aspects in depth. In terms of factual determination, research shows that new evidence often reshapes the factual understanding of a case, significantly impacting judgment outcomes (Xi, 2018). When new evi- dence is sufficient to overturn the findings of the first-instance judgment, the probability of reversal increases substantially. Unclear factual determina- tions are also a major factor in reversals, as ambi- guity or errors in key facts undermine the factual basis of the first-instance judgment. For legal appli- cation, misinterpretation or improper application of legal provisions directly affects the legality and fairness of the judgment, commonly leading to re- versal. As Ni Ming noted, factual determination is the logical foundation for legal application, and the reasoning paths for factual and legal issues differ fundamentally in civil procedure (Ming, 2021). For example, whether a contract is established is a mat- ter of factual determination, while the contractual nature falls under legal application. Collectively, these studies highlight the decisive role of factual 12 determination and legal application in appellate reversals from multiple perspectives. B Implementation of Tasks In this section, we provide the instruction and out- put examples for the five tasks. B.1 Judgment Reversal Prediction from the first-instance perspective Instruction: Read the following first-instance judgment and deter- mine whether the appellate court will change the judg- ment if an appeal is filed. If it will, specify whether the reason for changing the judgment is an error in fact- finding or an error in the application of the law. ## First-instance Documents Civil Judgment of the People’s Court of Panyu Dis- trict, Guangzhou City, Guangdong Province (2020) Yue 0113 Minchu No. 19865 (...omitted here...) If you do not accept this judgment, you may submit a notice of appeal to this court within fifteen days from the date of service of the judgment, providing copies according to the num- ber of opposing parties, and appeal to the Guangzhou Intermediate People’s Court. ## Second-instance Claims Ma Zhuojun’s appeal requests: 1. To revoke the first- instance judgment and change the judgment to dismiss all of Zhai Yongdong’s claims; 2. To have Zhai Yongdong bear all the case acceptance fees. ## Reason for reversal Directly output the answer by selecting from ["Non- Reversed", "Factual determination errors", "Legal appli- cation errors"]. There may be multiple answers. Do not output anything else. Output: Non-Reversed B.2 Judgment Reversal Prediction from the second-instance perspective Instruction: Read the following first-instance judgment and deter- mine whether the appellate court will reverse the judg- ment if an appeal is filed. If it will, specify whether the reason for changing the judgment is an error in fact- finding or an error in the application of the law. ## First-instance Documents Civil Judgment of Qianjiang District People’s Court, Chongqing City (2020) Yu 0114 Minchu No. 8176 (...omitted here...) If you do not accept this judgment, you may submit a notice of | https://arxiv.org/abs/2505.16514v2 |
appeal to this court within fifteen days from the date the judgment is served, provid- ing copies according to the number of opposing parties, and appeal to the Fourth Intermediate People’s Court of Chongqing City. ## Second-instance Claims Hou Zhangqing’s appeal requests: 1. To revoke the first-instance judgment and change the judgment to sup- port the appellant’s claims in the first instance; 2. To have the appellee bear the case acceptance fees for both the first and second instance. ## Second-instance Fact DescriptionThe first-instance court’s finding that the appellant had no evidence to prove that he was injured while working on Building 3, Section 1 of the Banlishan Resettlement Housing Project is contrary to objective facts and legal provisions. (...omitted here...) This court confirms the other facts found in the first instance. ## Reason for reversal Directly output the answer by selecting from ["Non- Reversed", "Factual determination errors", "Legal appli- cation errors"]. There may be multiple answers. Do not output anything else. Output: Factual determination errors, Legal application errors B.3 Legal Provision Recommendation Instruction: You are a second-instance judge. Select the relevant legal provisions from the candidate statutes related to this case. There may be multiple applicable statutes. Please directly output the letter codes of the statutes in the answer section. ## Second-instance Claims & Fact Description Civil Judgment of Jinzhou Intermediate People’s Court, Liaoning Province (2023) Liao 07 Min Zhong No. 795 Appellant (original defendant)... This court, after trial, found that the facts are basically consistent with the facts established by the first-instance judgment, which this court confirms. ## Candidate Provisions A. Article 61 of the Construction Law of the People’s Republic of China B. Article 11 of the Supreme People’s Court Interpre- tation on Several Issues Concerning the Application of Law in Personal Injury Cases (...omitted here...) ## Answer Directly output the letter codes only, do not output anything else. Output: FGH B.4 Legal Judgment Prediction Instruction: You are a judge in a second-instance trial. Based on the facts established in the first-instance judgment, as well as the claims and facts presented in the second- instance appeal, determine whether the given claim should be upheld. ## First-instance Facts On August 1, 2011, both parties signed a real estate lease contract, under which the plaintiff agreed to lease part of the premises located on the east side of Building X, Haidian District, Beijing, to Party B for use (...omitted here...) During the litigation, Qianghua Printing Factory filed an application for property preservation with this court, requesting the preservation of Yinghao Hotel’s property, and this court issued Civil Ruling (2021) Jing 0108 Minchu No. 69144. ## Second-instance Claims & Facts Yinghao Hotel’s appeal requests: 1. To lawfully re- voke items 2, 3, 5, and 7 of the first-instance judgment; 13 (...omitted here...) This court finds as follows: The facts found in the first instance are correct. In addition, it is found that both parties have confirmed the payment obligations for the water and electricity fees involved in item 4 of the first-instance judgment, and Yinghao Hotel has already fulfilled them. ## Claim | https://arxiv.org/abs/2505.16514v2 |
The plaintiff requests the court to order the defendant to vacate the two-story annex structure attached to the office building on the east side of Building X, Haidian District, Beijing (i.e., the leased property under the "Mil- itary Real Estate Lease Contract" signed by both parties, with a contract area of 534 square meters and an actual measured area of 811.04 square meters). ## Claim support prediction Directly output the answer by selecting one from ["Fully Support", "Partially Support", "Not Support"]. Do not output anything else. Output: Fully Support B.5 Court View Generation Instruction: You are a judge in a second-instance trial. Based on the following facts, complete the court’s opinion sec- tion in the second-instance judgment (including the para- graphs starting with "The Court holds" and "The judg- ment is as follows"). ## Claims & Fact Description Civil Judgment of Jinzhou Intermediate People’s Court, Liaoning Province (2023) Liao 07 Min Zhong No. 795 Appellant (original defendant)... This court, after trial, found that the facts are basically consistent with the facts established by the first-instance judgment, which this court confirms. ## Second-instance Claims & Facts Appellant Lvyuancheng Company’s appeal requests: 1. To request the second-instance court to lawfully re- voke the original judgment, and lawfully change the judg- ment to dismiss the appellee’s claims in the first instance (...omitted here...) to deliver the relevant documents for handling the transfer registration of the commercial hous- ing to the plaintiff; if the defendant delays delivery, it should bear liability for breach of contract. ## Court’s Opinion In this section, please directly output the court’s opin- ion (including the paragraphs beginning with "The Court holds" and "The judgment is as follows"). Do not output any other content. The court holds... Output: The court holds that although both parties agreed on the calculation standard for liquidated damages in the contract, the plaintiff did not provide evidence to prove the actual losses caused by the delay in handling the certificate (...omitted here...) In summary, in accordance with Article 170, Para- graph 1, Item 2 of the Civil Procedure Law of the Peo- ple’s Republic of China, the judgment is as follows: 1. Revoke the Civil Judgment No. 1164 (2020) Qian 2723 Minchu of Guiding County People’s Court, Guizhou Province (...omitted here...) This judgment is a final judgment.Provider Model DeepSeekDeepSeek-V3-0324 (Liu et al., 2024) DeepSeek-R1 (Guo et al., 2025) R1-Distill-Qwen-32B (Guo et al., 2025) R1-Distill-Qwen-7B (Guo et al., 2025) OpenAIgpt-4.1-2025-04-14‡ o4-mini-2025-04-16§ AlibabaQwen2.5-72B-Instruct (Yang et al., 2024) QwQ-32B¶ Qwen3-32B-Instruct (Yang et al., 2025) Qwen3-8B-Instruct (Yang et al., 2025) Qwen2.5-7B-Instruct (Yang et al., 2024) MetaLLaMA-3.3-70B-Instruct|| LLaMA-3.1-8B-Instruct** ZhipuAIGLM-4-Air (GLM et al., 2024) GLM-Z1-Air (GLM et al., 2024) xAI Grok-3-mini†† Baichuan Baichuan2-7B-Chat (Yang et al., 2023) ByteDance Doubao-1.5-pro‡‡ Fudan Univ. DISC-LawLLM (Yue et al., 2023) Zhejiang Univ. Wisdom Interrogatory (ZhihaiLLM, 2023) Table 8: Models and model providers involved in this paper. C Experiment Details C.1 Settings of LLMs We conducted experiments on 20 models from 10 providers, as shown in Table 8. All LLMs used the same default system prompt, "You are a helpful assistant," and the temperature and top_p were set | https://arxiv.org/abs/2505.16514v2 |
to each model’s default parameters. C.2 Settings of Fine-tuning We used 80% of the AppealCase data to train the bert-base-chinese (Devlin et al., 2019) and qwen3- 0.6b (Yang et al., 2025) models for judgment rever- sal prediction. We trained a LabelClassification layer for bert- base-chinese for multi-label classification, using the AdamW optimizer with a learning rate of 2×10−5, a total batch size of 64, and 10 train- ing epochs. We performed full fine-tuning of Qwen3-0.8B using the llama-factory framework (Zheng et al., 2024), with DeepSpeed ZeRO-3 and the AdamW optimizer, and a learning rate of 1×10−5. The ‡https://openai.com/index/gpt-4-1 §https://openai.com/index/introducing-o3-and-o4-mini ¶Qwen/QwQ-32B ||meta-llama/Llama-3.3-70B-Instruct **meta-llama/Llama-3.1-8B-Instruct ††https://x.ai/news/grok-3 ‡‡https://seed.bytedance.com/en/special/doubao_1_5_pro 14 total batch size was set to 16 (4 batches per GPU), and the training lasted for 2 epochs. D Case Study on Error Analysis In the Judgment Reversal Prediction task, among the 10,000 cases in AppealCase, there are 401 cases for which all 20 models produced incorrect predic- tions. All of these 401 cases are cases of judgment reversal, and among them, 83% were reversed due to errors in the application of law. We conducted an analysis of these cases and identified the following main challenges: •Dynamic legal knowledge: For example, re- garding the determination of whether the monthly interest rate exceeds four times the one-year loan market quotation rate published by the National Interbank Funding Center, the benchmark interest rate here is dynamic. The models may not be able to access or accurately understand the relevant data in time, leading to prediction errors. •Easily confused legal relationships: In some cases, there are legal relationships that are easily confused, such as mistakenly identify- ing a lending relationship as a partnership, or an employment relationship as a contract-for- work relationship. Due to the complexity of the facts or unclear statements, the models are prone to bias in legal relationship judgment, which affects the final prediction results. •Issues in the division of liability: In some cases, the division of liability among parties is more complex, and the appellate court has adjusted the proportion or method of liabil- ity compared with the first-instance judgment. The models struggle to accurately capture the details of liability determination and the judi- cial reasoning, resulting in prediction devia- tions. These hard cases reflect the current limitations of models in handling complex cases and represent a key direction for improving predictive accuracy. 15 | https://arxiv.org/abs/2505.16514v2 |
arXiv:2505.16518v1 [cs.CL] 22 May 2025CUB: Benchmarking Context Utilisation Techniques for Language Models Lovisa Hagström1,2*Youna Kim3*Haeun Yu4 Sang-goo Lee3Richard Johansson1,2Hyunsoo Cho5Isabelle Augenstein4 1Chalmers University of Technology2University of Gothenburg 3Seoul National University4University of Copenhagen5Ewha Womans University lovhag@chalmers.se anna9812@europa.snu.ac.kr Abstract Incorporating external knowledge is crucial for knowledge-intensive tasks, such as question answering and fact checking. However, lan- guage models (LMs) may ignore relevant in- formation that contradicts outdated parametric memory or be distracted by irrelevant contexts. While many context utilisation manipulation techniques (CMTs) that encourage or suppress context utilisation have recently been proposed to alleviate these issues, few have seen sys- tematic comparison. In this paper, we develop CUB (Context Utilisation Benchmark) to help practitioners within retrieval-augmented gen- eration (RAG) identify the best CMT for their needs. CUB allows for rigorous testing on three distinct context types, observed to cap- ture key challenges in realistic context utilisa- tion scenarios. With this benchmark, we eval- uate seven state-of-the-art methods, represen- tative of the main categories of CMTs, across three diverse datasets and tasks, applied to nine LMs. Our results show that most of the existing CMTs struggle to handle the full set of types of contexts that may be encountered in real-world retrieval-augmented scenarios. Moreover, we find that many CMTs display an inflated per- formance on simple synthesised datasets, com- pared to more realistic datasets with naturally occurring samples. Altogether, our results show the need for holistic tests of CMTs and the de- velopment of CMTs that can handle multiple context types. 1 Introduction Context utilisation is a key component of language models (LMs) used for retrieval-augmented gen- eration (RAG), as the benefits of retrieving exter- nal information are only realised if the generative model makes adequate use of the retrieved infor- mation. While recent research has identified many benefits of augmenting LMs with retrieved infor- mation (Shuster et al., 2021; Hagström et al., 2023), *Equal contribution. CounterFactDRUID 2002LMFine-tuningPromptingDecodingMech. interventionsMulti-agentCMTDatasetNQWhen was the lupus foundation of America founded?QueryThe Lupus Foundation of America (LFA), founded in 1977, is a national voluntary health organization based in Washington, D.C. …The Lupus Foundation of America (LFA), founded in 1967, is a national voluntary health organization based in Washington, D.C. …The Foundation is devoted to solving the mystery of lupus, one of the world's cruelest, most unpredictable and devastating diseases…Conflict. contextIrrel. context Output ⚡ 🦄 🪄CUB 🐻 Context Utilisation Benchmark1977Answer1967AnswerAnswerGold context 👑 No context 🤖200219672002 🤖 🤖 🤖Gold context Conflict. context Irrel. context Figure 1: The Context Utilisation Benchmark. We eval- uate a range of LMs under different CMTs on samples from NQ, DRUID and CounterFact for gold, conflicting and irrelevant contexts. it has also identified weaknesses of LMs used for RAG, of which many are associated with context utilisation. For example, LMs can easily be dis- tracted by irrelevant contexts (Shi et al., 2023) or ignore relevant contexts due to memory-context conflicts (Xu et al., 2024). The robustness of LMs to irrelevant contexts is important as information retrieval systems used for RAG are not guaranteed to always retrieve relevant information. Moreover, as information may be updated to conflict with the training | https://arxiv.org/abs/2505.16518v1 |
data of the LM, the model should prioritise the most recently updated information. As a consequence, many different methods for increasing or suppressing LM context utilisation, henceforth referred to as CMTs (Context utilisa- tion Manipulation Techniques), have been pro- posed. The methods encompass a broad range of approaches, from different decoding methods (Shi et al., 2024; Kim et al., 2024) to fine-tuning 1 methods (Li et al., 2023), prompting (Liu et al., 2023), multi-agent (Feng et al., 2024; Du et al., 2024), and mechanistic interventions (Ortu et al., 2024; Jin et al., 2024). While each method yields promising results in isolation, their evaluation is of- ten limited to narrow or idealised settings, leaving open the question of which approaches are applica- ble in real-world RAG scenarios. To address this evaluation gap, we develop a comprehensive CMT benchmark to test and compare different CMTs on datasets representative of different domains and tasks (Figure 1). Our contributions are as follows: •We develop CUB (Context Utilisation Bench- mark) to allow for a comprehensive evalua- tion and comparison of CMTs (§3).1CUB systematically tests the sensitivity of CMTs to underlying model and naturally occurring context types (gold, conflicting and irrelevant) on tasks representative of synthesised and re- alistic RAG scenarios. •We evaluate a cohort of state-of-the-art CMTs representative of the main categories of CMTs (§4) on our benchmark (§6). •We provide a deeper analysis of what CMT works best for a given scenario and identify areas of improvement for CMTs. We find that CMTs struggle to optimise performance across all context types, e.g. one approach may improve robustness to irrelevant contexts but degrade the utilisation of relevant contexts. This points to the need of CMTs that work well across all context types. 2 Related Work Context-intensive datasets We consider two main categories of context-intensive datasets: 1) datasets representing knowledge-intensive tasks , i.e. tasks for which access to external context is cru- cial, and 2) datasets designed to diagnose model adaptability to external knowledge. Examples of datasets representative of knowledge-intensive tasks are Natural Questions (NQ), DRUID, the KILT datasets and PubMedQA (Kwiatkowski et al., 2019; Hagström et al., 2024; Petroni et al., 2021; Jin et al., 2019). Examples of diagnostic datasets representative of the latter category are Counter- Fact and ConflictQA (Meng et al., 2022; Xie et al., 2024a). These datasets contain synthesised queries based on fact triplets from LAMA (Petroni et al., 2019) (e.g. Thomas Ong-citizen of-Singapore) for 1Code will be available upon publication.which contexts have been synthesised to induce knowledge conflicts by promoting answers in con- flict with the parametric memory of the studied LM (e.g. “Pakistan” as opposed to “Singapore”). Diag- nostic datasets have found widespread use for work on mechanistic interpretability and the evaluation of context utilisation (Meng et al., 2022; Geva et al., 2023; Ortu et al., 2024). Previous work has typically evaluated differ- ent CMTs on either of the dataset categories. CUB incorporates datasets representative of both knowledge-intensive tasks and diagnostic datasets, thus enabling comprehensive evaluations of CMTs in different settings. CMTs Many context utilisation manipulation techniques have recently been proposed. Exist- | https://arxiv.org/abs/2505.16518v1 |
ing CMTs can be categorised into one of four main groups based on intervention level , i.e. what as- pect of the model they manipulate. 1) fine-tuning CMTs update model parameters to modify context utilisation. For example, fine-tuning on distracting contexts was found to yield improved robustness to distracting contexts (Li et al., 2023; Shen et al., 2024; Yoran et al., 2024). Moreover, Fang et al. (2024) specifically focus on different types of re- trieval noise likely to be encountered in real-world environments and develop a fine-tuning approach to handle these. 2) prompting techniques modify the input to the LM to improve context utilisation, representing minimally modified settings. 3) mech- anistic interventions on the LM modify certain model components at inference time to alter context utilisation. Examples involve attention modifica- tion (Ortu et al., 2024; Jin et al., 2024) and SpARe interventions (Zhao et al., 2025). Lastly, 4) decod- ing methods involve a modified decoding approach, applied to the output logits, to manipulate context utilisation. Examples include context-aware con- trastive decoding (Yuan et al., 2024; Kim et al., 2024; Shi et al., 2024; Wang et al., 2024; Zhao et al., 2024) and lookback lens decoding (Chuang et al., 2024). Apart from intervention level, many of the CMTs have different objectives , focused on improving one or multiple aspects of context utilisation. CMTs may focus on improving robustness to irrelevant contexts, faithfulness to conflicting contexts, or faithfulness to contexts in general. Previous work has mainly focused on evaluating one CMT at a time, potentially due to the lack of a unified benchmark for CMTs. In this paper, we 2 Dataset Split #samples %Gold %Conflict. %Irrel. CounterFact dev 198 33.3 33.3 33.3 test 2,499 33.3 33.3 33.3 NQ dev 198 33.3 33.3 33.3 test 4,945 33.4 33.1 33.4 DRUID dev 198 33.3 33.3 33.3 test 4,302 43.5 56.1 0.4 Table 1: Statistics of the datasets that form CUB . ‘Con- flict.’ denotes conflicting contexts and ‘Irrel.’ irrelevant contexts. evaluate representatives from each of the four main categories of CMTs on CUB , comparing a total of seven CMTs. Benchmarks To the knowledge of the authors, there is not yet a benchmark for CMTs. The closest examples of existing benchmarks are RAG-Bench by Fang et al. (2024), KILT by Petroni et al. (2021) and AxBench by Wu et al. (2025). The first evalu- ates the retrieval-noise robustness of LMs, the sec- ond performance of RAG systems as a whole, and the latter steering techniques for LMs, focusing on safety and reliability. CUB takes inspiration from these benchmarks to create a comprehensive and relevant benchmark for the evaluation of CMTs. 3CUB: A Context Utilisation Benchmark Given a CMT, CUB is designed to test the tech- nique across different datasets, models and metrics. To unify the tests, CUB also incorporates a pre- defined method for the hyperparameter search of the CMT. 3.1 Language Models CUB evaluates the model sensitivity of CMTs on up to nine different LMs. The open-sourced models covered by the benchmark are GPT-2 XL, Pythia (6.9B), Qwen2.5 1.5B, Qwen2.5 7B, Qwen2.5 32B (Radford et | https://arxiv.org/abs/2505.16518v1 |
al., 2019; Biderman et al., 2023; Yang et al., 2024). For the Qwen models we include the instruction-tuned variants. We also evaluate the API-based LLM Cohere Command A with 111B parameters.2The model selection is per- formed to enable comparisons across model fami- lies, model sizes, instruction-tuning and API-based LLMs. However, all LMs are not compatible with all CMTs evaluated on CUB – the selection of LMs onto which a CMT is applied depends on the CMT, further explained in Section 4. In addition, we adapt the prompts in CUB with prompt templates 2https://cohere.com/blog/command-acompatible with each model type under considera- tion (base, instruction-tuned and chat-API). 3.2 Datasets To evaluate how CMTs respond to different types of contextual information, CUB evaluates each CMT on CounterFact, NQ and DRUID (see Table 1).3 The inclusion of these datasets is based on three key criteria: (i) diversity in task difficulty, (ii) di- versity in realistic and synthesised RAG scenarios, and (iii) high utilisation in related work. Counter- Fact represents a causal language modelling task based on a controlled setup with simple counter- factual contexts synthesised to conflict with model memory. NQ represents a popular, and more real- istic setup, focused on RAG for open-domain QA of greater difficulty with contexts sampled from Wikipedia. DRUID is a fairly new dataset, repre- senting another important RAG task – that of auto- mated fact-checking; this requires a greater level of reasoning based on naturally occurring claims and evidence sampled from the internet. While DRUID has yet to see widespread use in studies of context utilisation, we include it in CUB as it is one of few datasets closely aligned with real-world RAG scenarios. For each dataset, we curate samples represen- tative of the three types of contexts that may be encountered in realistic RAG scenarios: 1) gold contexts that are relevant and do not contradict LM memory, 2) conflicting contexts that are relevant but contradict LM memory or gold labels, and 3) irrelevant contexts that should be ignored by the LM (Fang et al., 2024). For each dataset, we sam- ple validation and test splits. To allow for fair and unified comparisons between CMTs, the validation set is used to tune potential hyperparameters of the CMT under evaluation. The test split is used for the final evaluation. More details on the datasets can be found in Appendix B. CounterFact To construct a CounterFact dataset with counterfactual contexts, we first identify sam- ples from LAMA that have been memorised by Pythia 6.9B, following the approach by Saynova et al. (2025). We base the CounterFact dataset on Pythia to obtain a set of samples likely to have been memorised by all CUB models, since LMs have been found to memorise more facts as they grow in size (Saynova et al., 2025). We confirm 3The datasets can be found on Hugging Face Datasets under copenlu/{cub-counterfact,cub-nq,cub-druid} . 3 this in Appendix B; all CUB LMs are found to have memorised at least 70% of the CounterFact samples. Based on the known fact triplets, we sam- ple conflicting contexts following the approach of Meng et | https://arxiv.org/abs/2505.16518v1 |
al. (2022). We also sample gold contexts that simply state the correct triplet. For the irrel- evant contexts, we randomly sample fact triplets unrelated to the sample query. NQ The gold context samples are simply the orig- inal NQ samples. For the collection of samples with conflicting contexts, we follow a substitution approach inspired by the method of Longpre et al. (2021). We create conflicting contexts that promote a different answer simply by taking the gold con- text and substituting the gold answer in the context. The substitute answer is sampled to yield coherent conflicting contexts, and to have a different mean- ing compared to the gold answer. For the collection of samples with irrelevant contexts, we apply a LM re-ranker to identify the most relevant non-gold paragraph from the Wikipedia page in which the gold context was found. With this approach, we col- lect irrelevant contexts representative of real-world RAG scenarios. DRUID The <claim, evidence> samples of DRUID have been manually annotated for stance of the evidence (supports, refutes, insufficient or irrelevant). We map stance to context type as de- scribed in Appendix B. No context synthesis is nec- essary for the DRUID samples as they, by virtue of utilising naturally occurring samples from a RAG pipeline, already contain samples representative of gold, conflicting and irrelevant contexts. Moreover, since DRUID represents a reasoning task, asking the model whether provided evidence supports the claim under consideration (True or False), or is in- sufficient (None), the output space for the DRUID samples is limited to three tokens (True, False or None). 3.3 Metrics Similarly to Jin et al. (2024) we use a binary score to measure context utilisation. We refer to it as the binary context utilisation (BCU ) score and define it as follows. For relevant contexts (gold and con- flicting) the score is 1 if the LM prediction is the same as the token promoted by the context, tC, and 0 otherwise. For irrelevant contexts the score is 1 if the LM prediction is the same as the memory token, tM, (i.e. the prediction made by the model before any context has been introduced) and 0 oth-Methods Objective LevelTuning CostInference Cost Fine-tuning Both Fine-tuning High Low Prompting Both Prompt. Low Mid Multi-agent Both Prompt. None High PH3 +context Faith Mech. High Low COIECD Faith Decoding Mid Mid PH3 +memory Robust Mech. High Low ACD Robust Decoding None Mid Table 2: Comparison of CMTs by objective, interven- tion level, and cost. The CMTs are coloured by objective with warm colours for ‘Both’, blue for ‘Faith’ and green for ‘Robust’. ‘Mech.’ denotes mechanistic interven- tions. erwise. We report the averaged BCU score per context type. To assess the relative effectiveness of CMTs, we also report the net gain of each CMT, compared to when no CMT is applied, using BCU score ( ∆ = BCU CMT−BCU Regular ). We also con- sider continuous context utilisation ,CCU , a more fine-grained metric that measures the change in out- putted token probabilities as context is introduced. Appendix C contains more details on the metric. We | https://arxiv.org/abs/2505.16518v1 |
also measure the accuracy of each method. For CounterFact and DRUID, accuracy is measured based on whether the first generated token is the same as the first gold token. For NQ, for which the correct answer may be different permutations of the same set of tokens, we measure accuracy based on whether the first output token (e.g. “July”) matches any of the tokens in the answer (e.g. “15 July”). 3.4 Hyperparameter Search For CMTs requiring hyperparameter tuning, we use the validation set of each dataset to select values that maximise the average BCU across all context types, unless a method-specific tuning procedure is explicitly specified. This ensures a fair compari- son between CMTs. Further details are shown in Appendix D. 4 Context Utilisation Manipulation Techniques We benchmark a total of seven different CMTs on CUB , all of which are state-of-the-art representa- tives from the main categories of CMTs. Table 2 summarises the key characteristics of the CMTs, including their main objective, intervention level, and cost in terms of tuning and inference. As a baseline, we also evaluate regular LMs on the same input, with no CMT applied ( Regular ). 4 Given:Question+ContextRelevantIrrelevant+ Answer w/ contextFaithfulUnfaithfulSelf-refined answerAnswerw/ contextAnswerw/o context+Feedback① Relevance Expert② Faithfulness Expert③ Self-RefinementFigure 2: Overview of the multi-agent approach. Fine-tuning We adapt the approach of Li et al. (2023), which fine-tunes LMs to ensure the usage of relevant contexts. It considers four different types of contexts: relevant, irrelevant, empty, and counterfactual contexts. To align the domain with our evaluation data, we curate the fine-tuning data with two QA datasets (Joshi et al., 2017; Rajpurkar et al., 2018), one FC dataset (Schlichtkrull et al., 2023), and one sentence completion dataset (Mar- janovic et al., 2024). Before fine-tuning each LM, we elicit its parametric answers by querying with- out contexts. We then select the questions that the LM answered correctly and pair them with irrele- vant and empty contexts. The fine-tuning data thus contains contexts that can be irrelevant, counterfac- tual, or empty. During fine-tuning, we train the LM to generate answers aligned with the provided con- text. When the context is irrelevant, we train the LM to be robust, i.e. ignore the context and output its parametric answer. Due to the computational costs associated with fine-tuning billion-sized LMs, we use the Low-Rank Adaptation method (Hu et al., 2021). Additional details can be found in Ap- pendix E. Prompting We curate a set of 12 prompts for each evaluation dataset and optimise the prompt selection to each evaluated model. Each set of prompts is based on 6 prompts curated by a human, similarly to the approach by Jin et al. (2024), and 6 prompts generated by a LLM,4similarly to the approach by Wu et al. (2025). Multi-agent Inspired by LM agents and self- refinement (Du et al., 2024; Feng et al., 2024; Madaan et al., 2023), which are widely adopted techniques in reasoning tasks, we decompose con- text utilisation into two components – relevance and context faithfulness – and assign each as a sep- arate task to an individual LM agent. We aim | https://arxiv.org/abs/2505.16518v1 |
to examine whether LMs are capable of accurately evaluating context relevance and answer faithful- 4Mainly by ChatGPT, but also by Microsoft Co-pilot.ness, to subsequently self-correct themselves for improved faithfulness to relevant contexts. As il- lustrated in Figure 2, we first assess relevance us- ing the relevance agent to determine whether the provided context should be used. Then, the faith- fulness agent provides feedback on the model re- sponse that was generated with context. If the feed- back indicates that the initial answer is unfaithful, the model generates a self-refined answer based on that feedback. Given that these tasks require instruction-following capabilities, we restrict our evaluation to instruction-tuned or chat LMs. Fur- ther details can be found in Appendix F. Mechanistic interventions: PH3 We adopt the PH3 method by Jin et al. (2024). The method is implemented in two steps: 1) identification of at- tention heads responsible for context or memory reliance via path patching and 2) pruning the iden- tified attention heads for increased memory or con- text usage. To identify attention heads, we use the CounterFact datasets with samples that elicit exact fact recall in each studied model (Saynova et al., 2025). For the evaluation on our studied datasets, we tune the number of heads to prune on the vali- dation splits of each evaluation dataset, similarly to the approach by Jin et al. (2024). PH3 can be used in two different modes – suppressing context attention heads or suppressing memory attention heads. We tune the attention head configuration for each mode and report the results ( PH3 +context enhances context utilisation by the suppression of memory heads, and vice versa for PH3 +memory ). Context-aware contrastive decoding: ACD and COIECD Contrastive decoding approaches ad- just the model’s output distribution based on two distributions: one for which only the query is given as input and one for which the context also is included. Among them, contextual information- entropy constraint decoding (COIECD ; Yuan et al., 2024) is designed to detect the presence of knowl- edge conflicts and selectively resolve them, aiming to improve faithfulness to conflicting context with- out compromising performance when no conflict exists. In contrast, adaptive contrastive decoding (ACD; Kim et al., 2024) addresses the challenge of ir- relevant context by using entropy-based weighting to adaptively ensemble parametric and contextual distributions. We test both on CUB to cover the nuance in decoding approaches. 5 5 Features Impacting Context Utilisation To deepen our understanding of the results on CUB , we complement the benchmark with an analysis of features likely to impact context utilisation. Our goal is to better understand whycertain CMTs and LMs work well or not. We study features on a model and input level, described below. 5.1 Model Features By virtue of the large LM coverage in CUB , we are able to measure multiple salient model fea- tures. We analyse model size , whether the model isinstruction-tuned andstrength of model mem- ory. To control for external confounders related to model family and implementation, we only mea- sure correlations with model size and instruction- tuning across Qwen | https://arxiv.org/abs/2505.16518v1 |
models. Strength of model memory is measured as the softmaxed logits for the top token predicted by the LM when only the query is provided (without context). 5.2 Input Features We measure multiple input characteristics found to impact context utilisation for humans and/or LMs. By considering context length andFlesch reading ease score , we aim to measure whether the context isdifficult to understand (Gao et al., 2024; Vladika and Matthes, 2023). Using distractor rate , we aim to measure whether the context contains dis- tracting information (Shaier et al., 2024). With query-context overlap we also aim to measure query-context similarity (Wan et al., 2024). Lastly, we check the answer position (Liu et al., 2024) and if the evaluated LMs find the context relevant . More details on the detection of the features can be found in Appendix G. 5.3 Metric for Feature Impact By virtue of the unified setup of CUB , we can study correlation coefficients to investigate the impact of different input and model features with a low risk of confounders. We use Spearman’s ρto measure the impact of features on context utilisation, proxied by BCU. 6 Main Results on CUB TheCUB results can be found in Figures 3 and 4, CCU scores and more detailed results can be found in Appendix A. We structure the results analysis around a set of main findings.6.1 Overall Trends We first note that the BCU andCCU scores in Figures 3 and 5, respectively, support the same trends and focus the analysis on the BCU results. Context utilisation improves with model size. From Figure 3, we note how larger Regular LMs generally outperform smaller LMs when all con- text types are taken into consideration for NQ and DRUID. On NQ, the best performing model is Qwen 32B, and on DRUID the best perform- ing model is Command A. Notably, applying a CMT to a small LM can lead to context utilisa- tion on par with that of a regular larger LM, such asFine-tuning Qwen 7B compared to Regular Qwen 32B on NQ. Meanwhile, on CounterFact, we observe how Regular model performance across all contexts generally decreases when model size is increased. This is counter-intuitive and we attribute the phenomena to the artificial nature of the dataset, which likely confuses the larger LMs. In addition, we know the NQ and DRUID datasets to be more difficult, demanding greater model capacity. This shows how it is insufficient to evaluate context util- isation only on simple datasets like CounterFact. Most CMTs show an inflated performance on conflicting CounterFact contexts. All LMs that do not already have a perfect BCU score on the con- flicting CounterFact contexts improve to a perfect score of 1.0 under Prompting ,PH3 +context , and Fine-tuning . However, similar improvements cannot be observed for the same CMTs on NQ or DRUID. These results show how CMTs proven to work well in simpler settings are not guaranteed to work equally well in more complex settings, prov- ing the necessity of holistic tests. A deeper analysis of the inflated CMT performance on CounterFact is provided | https://arxiv.org/abs/2505.16518v1 |
in Appendix A. 6.2 CMT Comparsion We further assess whether the CMTs consistently outperform Regular across different context types. Figure 4 shows the average ∆of each CMT, ag- gregated over all evaluated models. A value above zero indicates that the CMT yields a net improve- ment over Regular , whereas a negative value high- lights cases where the CMT degrades performance. There is a conflict between optimising for utilisa- tion of relevant contexts and robustness to irrel- evant contexts. As each CMT exhibits trade-offs across context types or only marginal differences 6 GPT-2 XL Pythia Qwen 1.5B Qwen 1.5B-I Qwen 7B Qwen 7B-I Qwen 32B Qwen 32B-I Command ACounterFact Gold Conflicting Irrelevant Total GPT-2 XL Pythia Qwen 1.5B Qwen 1.5B-I Qwen 7B Qwen 7B-I Qwen 32B Qwen 32B-I Command ANQ 0.0 0.5 1.0 BCUGPT-2 XL Pythia Qwen 1.5B Qwen 1.5B-I Qwen 7B Qwen 7B-I Qwen 32B Qwen 32B-I Command ADRUID 0.0 0.5 1.0 BCU 0.0 0.5 1.0 BCU 0.0 0.5 1.0 BCU Regular PromptingFine-tuning Multi-agentACD COIECDPH3 +context PH3 +memoryFigure 3: BCU scores for the evaluated context utilisation manipulation methods applied to the evaluated models and datasets. ‘Total’ denotes the averaged performance across all context types. A high BCU score is desirable regardless of context type. CounterFactGold Conflicting Irrelevant TotalNQ -1.0 0.0 1.0 (CMT, Regular) DRUID -1.0 0.0 1.0 (CMT, Regular) -1.0 0.0 1.0 (CMT, Regular) -0.3 0.0 0.3 (CMT, Regular) Regular PromptingFine-tuning Multi-agentACD COIECDPH3 +context PH3 +memory Figure 4: Model-averaged relative performance ( ∆) of each CMT compared to Regular across datasets and context types. The horizontal bars represent the standard deviation.from Regular , the overall CMT ∆values (To- tal) converge to near zero across NQ and DRUID. Consequently, we find no CMT that is superior . For instance, PH3 +context shows consistent im- provements over Regular in conflicting contexts, but underperforms when applied to irrelevant con- texts. Conversely, ACD, which handles irrelevant context effectively, performs worse in the conflict- ing context setting. Unsurprisingly, these findings highlight that the effectiveness of each CMT is closely tied to the alignment between the objective of the CMT and the type of context being provided. RAG practitioners knowing beforehand that their retrieval system is e.g. prone to return irrelevant information, may prioritise robustness over strong context utilisation and can select e.g. ACDas the CMT most suitable to their needs. Prompting-based CMTs, such as Prompting and Multi-agent , show relatively stable performance 7 across context types, without substantial drops in ∆. Compared to other CMTs, they offer this robust- ness with lower optimisation and implementation costs. Multi-agent shows clear gains in irrelevant contexts but limited efficacy in gold and conflicting settings. This suggests that LMs are capable of identifying irrelevant contexts, but remain limited in effectively utilising relevant ones. In realistic RAG scenarios, it will not be known beforehand what types of context will be provided to the LM. Therefore, it is important that CMTs work optimally across all context types. Our work shows that while we have CMTs that work well for relevant or irrelevant contexts alone , there cur- rently are no CMTs that | https://arxiv.org/abs/2505.16518v1 |
handle both relevant and irrelevant contexts well. 6.3 Impact of Model and Input Features See Tables 6 and 7 for Spearman’s ρbetween BCU and the features described in Section 5. Results are averaged across models. Larger LMs perform better on NQ and DRUID. Corroborating our findings in Section 6.1, we observe a positive correlation with model size (ρ≈0.3) on DRUID gold contexts. Multi-agent also works significantly better with bigger LMs on DRUID gold contexts ( ρ= 0.42). In addition, we observe a positive correlation with model size on NQ gold contexts ( ρ∈[0.20,0.37]). For Counter- Fact, we observe how model size does not correlate with performance. Instruction-tuning is beneficial for conflicting and irrelevant DRUID contexts. We note how instruction tuning generally correlates with im- proved performance on conflicting and irrelevant DRUID contexts ( ρ∈[0.29,0.77]depending on CMT). The conflicting DRUID contexts frequently require the LM to be able to abstain (i.e. re- spond with a ‘None’) when presented with insuf- ficient contexts, which is something instruction- tuned models may be more adept at. Conversely, instruction-tuning is clearly detri- mental for conflicting CounterFact contexts ( ρ≤ −0.36), potentially because the LMs have been more tuned to be critical of unreliable information, as opposed to following a pure causal language modelling objective. A strong model memory corresponds to high performance on irrelevant contexts from NQ and CounterFact. We observe high correlations(ρ≈0.36) between memory strength and robust- ness to irrelevant contexts for Regular on Coun- terFact and NQ. These correlations increase when Fine-tuning ,ACDorPrompting is applied. Fur- thermore, we observe for CounterFact how strong Regular model memory correlates with low perfor- mance on conflicting contexts ( ρ=−0.44). This is expected – previous work has already shown how LMs are resistant to synthesised contexts that con- tradict the internal model memory (Longpre et al., 2021; Xie et al., 2024a). Answer position matters little for context utilisa- tion. We measure low correlation values (below 0.3) across all settings for answer position in the context and Flesch reading ease score, and have thus omitted them in Table 7. Previous work has al- ready found the Flesch reading ease score to show low correlations with LM context utilisation; our work further supports this finding (Hagström et al., 2024). Liu et al. (2024) found the answer posi- tion impactful for the utilisation of long contexts. CUB does not contain equally long contexts, which potentially explains why we do not see the same impact of answer position. Context utilisation on gold NQ contexts is de- graded on long contexts with high distractor rates. We measure weak negative correlations with context length ( ρ=−0.23) and distractor rate (ρ=−0.19) with respect to Regular performance on gold NQ contexts. This is expected – long gold contexts or contexts with a high rate of distractors should be more difficult to process and utilise. We hypothesise the fairly low correlation levels are a consequence of each feature alone not being suffi- ciently predictive of model performance. 7 Conclusion We introduce CUB , a benchmark that evaluates CMTs across diverse context types, datasets, and models. Under CUB , | https://arxiv.org/abs/2505.16518v1 |
we evaluate a representative set of CMTs, covering varying context utilisation objectives and techniques. Results on CUB reveal a trade-off across most CMTs between robustness to irrelevant context and faithful utilisation of rel- evant context. Our analysis of features impacting context utilisation highlights the strong influence of model features, while input features have lim- ited impact when analysed in separation . Overall, our findings highlight the need for holistic testing, as tests on synthesised datasets may show inflated 8 performance, and the need for CMTs that can adapt to varied context conditions. Taken together, our work paves the way for the development of more effective RAG systems. Limitations CUB only incorporates contexts with lengths of up to that of a paragraph. It would also be relevant to evaluate CMTs in long-context settings. The long-context setting was not included in CUB , and left for future work, as it is fundamentally different from the normal context setting studied in CUB , posing new challenges for context utilisation and its evaluation, associated with a different set of CMTs (Shaham et al., 2023; Zhang et al., 2024a; Min et al., 2023; Zhang et al., 2024b). While the dataset selection for CUB was per- formed to cover a wide span of task difficulty and RAG scenarios, the insights provided by CUB are limited to those derived from the underlying datasets. Moreover, all datasets are in English, leaving open the question of whether the findings generalise across languages (Chirkova et al., 2024). Lastly, CUB does not explicitly consider datasets involving temporal dynamics, while it would be interesting to study. Time-sensitive in- formation may lead to naturally occurring conflicts in context, adding nuance to the analysis of con- text utilisation (Loureiro et al., 2022; Xiong et al., 2024). Acknowledgments This research was co-funded by the Euro- pean Union (ERC, ExplainYourself, 101077481), by the Pioneer Centre for AI, DNRF grant number P1, as well as by The Villum Synergy Programme. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Re- search Council. Neither the European Union nor the granting authority can be held responsible for them. This research was also co-funded by the Wal- lenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at Alvis partially funded by the Swedish Research Council through grant agree- ment no. 2022-06725.The work was also supported by the Institute of Information & communications Technology Plan- ning & Evaluation (IITP) grant funded by the Korea government(MSIT) [NO. RS-2021-II211343, Arti- ficial Intelligence Graduate School Program (Seoul National University), No. RS-2021-II212068, Arti- ficial Intelligence Innovation Hub] and the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIT). (No. RS-2025- 00562784). References Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien, Eric Hal- lahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. 2023. Pythia: A suite for analyzing large language | https://arxiv.org/abs/2505.16518v1 |
mod- els across training and scaling. In International Conference on Machine Learning , pages 2397–2430. PMLR. Nadezhda Chirkova, David Rau, Hervé Déjean, Thibault Formal, Stéphane Clinchant, and Vassilina Nikoulina. 2024. Retrieval-augmented generation in multi- lingual settings. In Proceedings of the 1st Work- shop on Towards Knowledgeable Language Models (KnowLLM 2024) , pages 177–188, Bangkok, Thai- land. Association for Computational Linguistics. Yung-Sung Chuang, Linlu Qiu, Cheng-Yu Hsieh, Ran- jay Krishna, Yoon Kim, and James R. Glass. 2024. Lookback lens: Detecting and mitigating contextual hallucinations in large language models using only attention maps. In Proceedings of the 2024 Con- ference on Empirical Methods in Natural Language Processing , pages 1419–1436, Miami, Florida, USA. Association for Computational Linguistics. Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. 2024. Improving factuality and reasoning in language models through multiagent debate. In Proceedings of the 41st Inter- national Conference on Machine Learning , ICML’24. JMLR.org. Feiteng Fang, Yuelin Bai, Shiwen Ni, Min Yang, Xiao- jun Chen, and Ruifeng Xu. 2024. Enhancing noise robustness of retrieval-augmented language models with adaptive adversarial training. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 10028–10039, Bangkok, Thailand. Association for Computational Linguistics. Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, and Yulia Tsvetkov. 2024. Don‘t hallucinate, abstain: Identifying LLM knowl- edge gaps via multi-LLM collaboration. In Proceed- ings of the 62nd Annual Meeting of the Association 9 for Computational Linguistics (Volume 1: Long Pa- pers) , pages 14664–14690, Bangkok, Thailand. As- sociation for Computational Linguistics. Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, and Haofen Wang. 2024. Retrieval-augmented gener- ation for large language models: A survey. Preprint , arXiv:2312.10997. Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson. 2023. Dissecting recall of factual associa- tions in auto-regressive language models. In Proceed- ings of the 2023 Conference on Empirical Methods in Natural Language Processing , pages 12216–12235, Singapore. Association for Computational Linguis- tics. Lovisa Hagström, Denitsa Saynova, Tobias Norlund, Moa Johansson, and Richard Johansson. 2023. The effect of scaling, retrieval augmentation and form on the factual consistency of language models. In Pro- ceedings of the 2023 Conference on Empirical Meth- ods in Natural Language Processing , pages 5457– 5476, Singapore. Association for Computational Lin- guistics. Lovisa Hagström, Sara Vera Marjanovi ´c, Haeun Yu, Arnav Arora, Christina Lioma, Maria Maistro, Pepa Atanasova, and Isabelle Augenstein. 2024. A reality check on context utilisation for retrieval-augmented generation. Preprint , arXiv:2412.17031. Lovisa Hagström, Ercong Nie, Ruben Halifa, Helmut Schmid, Richard Johansson, and Alexander Junge. 2025. Language model re-rankers are steered by lexical similarities. Preprint , arXiv:2502.17036. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. Preprint , arXiv:2106.09685. Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. InProceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference | https://arxiv.org/abs/2505.16518v1 |
on Natural Lan- guage Processing (EMNLP-IJCNLP) , pages 2567– 2577, Hong Kong, China. Association for Computa- tional Linguistics. Zhuoran Jin, Pengfei Cao, Hongbang Yuan, Yubo Chen, Jiexin Xu, Huaijun Li, Xiaojian Jiang, Kang Liu, and Jun Zhao. 2024. Cutting off the head ends the con- flict: A mechanism for interpreting and mitigating knowledge conflicts in language models. In Findings of the Association for Computational Linguistics: ACL 2024 , pages 1193–1215, Bangkok, Thailand. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehen- sion. In Proceedings of the 55th Annual Meeting ofthe Association for Computational Linguistics (Vol- ume 1: Long Papers) , pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics. Youna Kim, Hyuhng Joon Kim, Cheonbok Park, Choonghyun Park, Hyunsoo Cho, Junyeob Kim, Kang Min Yoo, Sang-goo Lee, and Taeuk Kim. 2024. Adaptive contrastive decoding in retrieval-augmented generation for handling noisy contexts. In Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 2421–2431, Miami, Florida, USA. Association for Computational Linguistics. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natu- ral questions: A benchmark for question answering research. Transactions of the Association for Compu- tational Linguistics , 7:452–466. Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. 2023. Large language models with controllable working memory. In Findings of the As- sociation for Computational Linguistics: ACL 2023 , pages 1774–1793, Toronto, Canada. Association for Computational Linguistics. Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paran- jape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024. Lost in the middle: How language mod- els use long contexts. Transactions of the Association for Computational Linguistics , 12:157–173. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre- train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv. , 55(9). Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Process- ing, pages 7052–7063, Online and Punta Cana, Do- minican Republic. Association for Computational Linguistics. Daniel Loureiro, Aminette D’Souza, Areej Nasser Muhajab, Isabella A. White, Gabriel Wong, Luis Espinosa-Anke, Leonardo Neves, Francesco Barbi- eri, and Jose Camacho-Collados. 2022. TempoWiC: An evaluation benchmark for detecting meaning shift in social media. In Proceedings of the 29th Inter- national Conference on Computational Linguistics , pages 3353–3359, Gyeongju, Republic of Korea. In- ternational Committee on Computational Linguistics. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, 10 Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdan- bakhsh, and Peter Clark. 2023. Self-refine: Itera- tive refinement with self-feedback. In Thirty-seventh Conference on Neural Information Processing Sys- tems. Sara | https://arxiv.org/abs/2505.16518v1 |
Vera Marjanovic, Haeun Yu, Pepa Atanasova, Maria Maistro, Christina Lioma, and Isabelle Augen- stein. 2024. DYNAMICQA: Tracing internal knowl- edge conflicts in language models. In Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 14346–14360, Miami, Florida, USA. Association for Computational Linguistics. Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual asso- ciations in GPT. Advances in Neural Information Processing Systems , 36. ArXiv:2202.05262. Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettle- moyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing , pages 12076–12100, Singa- pore. Association for Computational Linguistics. Francesco Ortu, Zhijing Jin, Diego Doimo, Mrinmaya Sachan, Alberto Cazzaniga, and Bernhard Schölkopf. 2024. Competition of mechanisms: Tracing how language models handle facts and counterfactuals. InProceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 8420–8436, Bangkok, Thailand. Association for Computational Linguistics. Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 2523–2544, Online. Association for Computational Linguistics. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don‘t know: Unanswerable ques- tions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers) , pages 784–789, Melbourne, Australia. Association for Computational Linguistics. Denitsa Saynova, Lovisa Hagström, Moa Johansson, Richard Johansson, and Marco Kuhlmann. 2025. Fact recall, heuristics or pure guesswork? precise interpretations of language models for fact comple- tion. Preprint , arXiv:2410.14405. Michael Schlichtkrull, Zhijiang Guo, and Andreas Vla- chos. 2023. Averitec: A dataset for real-world claim verification with evidence from the web. Preprint , arXiv:2305.13117. Uri Shaham, Maor Ivgi, Avia Efrat, Jonathan Berant, and Omer Levy. 2023. ZeroSCROLLS: A zero-shot benchmark for long text understanding. In Find- ings of the Association for Computational Linguis- tics: EMNLP 2023 , pages 7977–7989, Singapore. Association for Computational Linguistics. Sagi Shaier, Lawrence Hunter, and Katharina von der Wense. 2024. Desiderata for the context use of ques- tion answering systems. In Proceedings of the 18th Conference of the European Chapter of the Associa- tion for Computational Linguistics (Volume 1: Long Papers) , pages 777–792, St. Julian’s, Malta. Associa- tion for Computational | https://arxiv.org/abs/2505.16518v1 |
Linguistics. Xiaoyu Shen, Rexhina Blloshmi, Dawei Zhu, Jiahuan Pei, and Wei Zhang. 2024. Assessing “implicit” re- trieval robustness of large language models. In Pro- ceedings of the 2024 Conference on Empirical Meth- ods in Natural Language Processing , pages 8988– 9003, Miami, Florida, USA. Association for Compu- tational Linguistics. Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, and Denny Zhou. 2023. Large language models can be easily distracted by irrelevant context. In Proceed- ings of the 40th International Conference on Machine Learning , ICML’23. JMLR.org. Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, and Wen-tau Yih. 2024. Trusting your evidence: Hallucinate less with context- aware decoding. In Proceedings of the 2024 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies (Volume 2: Short Papers) , pages 783–791, Mexico City, Mexico. Association for Com- putational Linguistics. Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021 , pages 3784–3803, Punta Cana, Do- minican Republic. Association for Computational Linguistics. 11 Juraj Vladika and Florian Matthes. 2023. Scientific fact-checking: A survey of resources and approaches. InFindings of the Association for Computational Linguistics: ACL 2023 , pages 6215–6230, Toronto, Canada. Association for Computational Linguistics. Alexander Wan, Eric Wallace, and Dan Klein. 2024. What evidence do language models find convincing? InProceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 7468–7484, Bangkok, Thailand. Association for Computational Linguistics. Han Wang, Archiki Prasad, Elias Stengel-Eskin, and Mohit Bansal. 2024. Adacad: Adaptively decoding to balance conflicts between contextual and paramet- ric knowledge. Preprint , arXiv:2409.07394. Zhengxuan Wu, Aryaman Arora, Atticus Geiger, Zheng Wang, Jing Huang, Dan Jurafsky, Christopher D. Manning, and Christopher Potts. 2025. Axbench: Steering llms? even simple baselines outperform sparse autoencoders. Preprint , arXiv:2501.17148. Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, and Yu Su. 2024a. Adaptive chameleon or stubborn sloth: Revealing the behavior of large language models in knowledge conflicts. In The Twelfth International Conference on Learning Representations . Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, and Yu Su. 2024b. Adaptive chameleon or stubborn sloth: Revealing the behavior of large language models in knowledge conflicts. In The Twelfth International Conference on Learning Representations . Siheng Xiong, Ali Payani, Ramana Kompella, and Fara- marz Fekri. 2024. Large language models can learn temporal reasoning. In Proceedings of the 62nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 10452– 10470, Bangkok, Thailand. Association for Compu- tational Linguistics. Rongwu Xu, Zehan Qi, Zhijiang Guo, Cunxiang Wang, Hongru Wang, Yue Zhang, and Wei Xu. 2024. Knowledge conflicts for LLMs: A survey. In Pro- ceedings of the 2024 Conference on Empirical Meth- ods in Natural Language Processing , pages 8541– 8565, Miami, Florida, USA. Association for Compu- tational Linguistics. An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian | https://arxiv.org/abs/2505.16518v1 |
Yang, Jian- hong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tian- hao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115 .Ori Yoran, Tomer Wolfson, Ori Ram, and Jonathan Berant. 2024. Making retrieval-augmented lan- guage models robust to irrelevant context. Preprint , arXiv:2310.01558. Xiaowei Yuan, Zhao Yang, Yequan Wang, Shengping Liu, Jun Zhao, and Kang Liu. 2024. Discerning and resolving knowledge conflicts through adaptive decoding with contextual information-entropy con- straint. In Findings of the Association for Compu- tational Linguistics: ACL 2024 , pages 3903–3922, Bangkok, Thailand. Association for Computational Linguistics. Huajian Zhang, Yumo Xu, and Laura Perez-Beltrachini. 2024a. Fine-grained natural language inference based faithfulness evaluation for diverse summarisa- tion tasks. In Proceedings of the 18th Conference of the European Chapter of the Association for Compu- tational Linguistics (Volume 1: Long Papers) , pages 1701–1722, St. Julian’s, Malta. Association for Com- putational Linguistics. Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, and Zhangyang Wang. 2024b. Found in the mid- dle: How language models use long contexts bet- ter via plug-and-play positional encoding. Preprint , arXiv:2403.04797. Yu Zhao, Alessio Devoto, Giwon Hong, Xiaotang Du, Aryo Pradipta Gema, Hongru Wang, Xuanli He, Kam-Fai Wong, and Pasquale Minervini. 2025. Steer- ing knowledge selection behaviours in LLMs via SAE-based representation engineering. In Proceed- ings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (Volume 1: Long Papers) , pages 5117–5136, Al- buquerque, New Mexico. Association for Compu- tational Linguistics. Zheng Zhao, Emilio Monti, Jens Lehmann, and Haytham Assem. 2024. Enhancing contextual un- derstanding in large language models through con- trastive decoding. In Proceedings of the 2024 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies (Volume 1: Long Papers) , pages 4225–4237, Mexico City, Mexico. Association for Computational Linguistics. A Additional results A.1 CUB results The exact CUB results can be found in Tables 3 and 4. CCU scores can be found in Figure 5. For theCCU scores, we note that they generally follow the same trends as the BCU scores in Figure 3; some CMTs perform better on gold, conflicting or irrelevant contexts, while none are superior when all context types are taken into consideration. The only disparate trend at odds with the BCU scores 12 is that Fine-tuning Qwen models that have been instruction-tuned stand out by performing extra poorly with respect to CCU score. We hypothe- sise that this is a consequence of an increase in PM(tC|Q)(i.e. prediction probability without con- text) from the fine-tuning, yielding less room for improvement in prediction confidence when con- text is introduced. A.2 Analysis of inflated CMT performance on CounterFact The inflated performance on CounterFact, observed in Figures 3 and 4, can potentially | https://arxiv.org/abs/2505.16518v1 |
be explained by a suboptimal default prompt for CounterFact. Following previous work, the default prompt only contained the example to be completed, without any additional instructions or few-shot examples. For NQ and DRUID, the default prompt contained task instructions and few-shot examples. Further- more, we observe how Prompting performs best on CounterFact on average, with a near perfect per- formance, indicating that a better default prompt may have neutralised any additional improvements from other CMTs. This raises the question of whether certain CMTs only address low context utilisation when caused by poor prompting, finding no leverage if the prompt already is adequate. A.3 Quality check of irrelevant NQ contexts For the CUB evaluation, we find 244 (14%) NQ samples with the context type ‘irrelevant’ for which at least 5 of the 9 evaluated LMs switch prediction to the gold answer after having seen the sample context. This indicates that some of the irrelevant contexts may actually be gold, as a result of quality issues with the annotation for NQ (in our sampling we assume that Wikipedia paragraphs not anno- tated as gold are not gold). However, we also note for some of these 244 samples that the context may simply be the heading of a Wikipedia page with the same title as the gold answer (e.g. “<H1> Scythe </H1>” when the gold answer is “scythe” for the query “what is the name of the weapon the grim reaper carries?”), without providing sufficient evi- dence with respect to the question, raising the ques- tion of whether they should be considered relevant by the model. A.4 Performance of Relevance Judgement For the Multi-agent technique, we investigate whether instruction-tuned LMs are capable of iden- tifying irrelevant context when explicitly promptedto do so. According to Table 5, the Multi-agent approach demonstrates strong performance in de- tecting irrelevant contexts and in recognising gold contexts as relevant. Although it does not reliably maintain a closed-book response when directly gen- erating responses (i.e. Regular ), it can accurately detect irrelevance when equipped with an explicit relevance assessment setup. The prediction accuracy of relevance assessment on conflicting contexts is consistently lower than that on other contexts. This discrepancy is par- ticularly evident in the conflicting contexts of the CounterFact dataset. For instance, we found that LMs often generate feedback such as: “X is Y , not Z. Therefore, the context is irrelevant”. This sug- gests that LM interprets factual inconsistency with its internal knowledge as a signal of irrelevance, even when instructed to ignore its own memory. One possible explanation for this behaviour lies in the nature of the CounterFact dataset itself. Con- texts in CounterFact are typically composed of single-sentence facts, which may lack sufficient surrounding information to render the context trust- worthy from the model’s perspective. Such be- haviour is less pronounced in NQ and DRUID datasets, where the provided contexts are relatively longer and richer, offering more semantic cues that may help the LM interpret the information as con- textually anchored (Xie et al., 2024b). The performance of relevance assessment is par- ticularly low on the NQ | https://arxiv.org/abs/2505.16518v1 |
dataset compared to other datasets. Since irrelevant contexts of NQ dataset are sampled from the same document and may be topically or semantically similar to the question, distinguishing relevance may become more chal- lenging. A.5 Features Impacting Context Utilisation See Table 6 for the correlation values between model features and context utilisation. See Table 7 for the correlation values between input features and context utilisation. B Data Collection B.1 CounterFact Samples from the CounterFact dataset can be found in Table 8. The relations covered by the dataset are capital of (80%), country of origin (9%), location of formation (9%), field of work (1%) and country of citizenship (1%). 13 Dataset CounterFact NQ DRUID Model Method Gold Conflict. Irrel. Tot. Gold Conflict. Irrel. Tot. Gold Conflict. Irrel. Tot. GPT-2 XLRegular 100.0 96.4 81.0 92.5 43.0 37.6 13.7 31.4 80.9 7.3 76.5 39.6 Fine-tuning 100.0 92.9 82.4 91.8 46.9 42.3 13.9 34.3 72.4 12.6 47.1 38.7 Prompting 100.0 96.4 81.0 92.5 42.4 36.2 14.2 30.9 83.3 1.9 100.0 37.7 PH3 +context 100.0 99.4 44.8 81.4 42.3 36.4 14.0 30.9 79.6 11.6 76.5 41.5 PH3 +memory 100.0 99.5 76.8 92.1 41.4 35.4 13.9 30.2 81.1 3.9 100.0 37.9 COIECD 100.0 97.6 70.8 89.5 43.4 37.4 9.0 29.9 69.8 21.3 41.2 42.4 ACD 99.6 49.1 91.0 79.9 31.8 29.1 36.4 32.4 81.3 3.2 100.0 37.6 PYTHIA 6.9BRegular 100.0 56.5 91.5 82.7 52.7 43.9 16.2 37.6 84.1 9.4 52.9 42.1 Fine-tuning 100.0 65.1 89.4 84.8 54.0 49.6 14.6 39.4 81.5 1.4 94.1 36.6 Prompting 100.0 99.6 86.1 95.2 52.7 43.9 16.2 37.6 82.8 7.1 64.7 40.3 PH3 +context 98.3 89.7 62.4 83.5 55.9 46.3 14.6 38.9 87.1 8.7 58.8 43.0 PH3 +memory 91.4 4.0 90.5 61.9 48.9 39.2 18.1 35.4 86.2 8.4 70.6 42.5 COIECD 99.9 66.0 86.0 84.0 53.9 43.8 10.2 35.9 72.0 13.0 41.2 38.8 ACD 100.0 9.7 96.0 68.6 43.8 36.1 32.6 37.5 87.4 5.2 100.0 41.3 QWEN 2.5 1.5BRegular 99.9 53.1 80.0 77.6 44.0 41.1 22.4 35.8 84.7 11.6 70.6 43.6 Fine-tuning 100.0 90.3 85.7 92.0 66.1 61.9 18.5 48.8 79.7 18.5 52.9 45.3 Prompting 100.0 97.2 82.2 93.2 63.9 57.5 32.1 51.1 85.0 7.0 82.4 41.2 PH3 +context 100.0 99.0 62.5 87.2 44.2 40.9 21.7 35.6 63.8 40.4 17.6 50.5 PH3 +memory 98.9 38.5 84.9 74.1 19.4 17.3 26.0 20.9 81.2 1.4 100.0 36.5 COIECD 94.8 1.2 89.8 61.9 42.4 39.2 45.8 42.5 87.8 4.8 100.0 41.3 ACD 97.6 7.7 90.3 65.2 46.7 42.8 39.3 42.9 87.8 4.8 100.0 41.3 QWEN 2.5 1.5B InstructRegular 97.6 31.7 86.2 71.8 70.1 62.8 28.2 53.7 47.3 70.3 94.1 60.4 Fine-tuning 100.0 93.2 82.7 92.0 51.0 45.6 42.2 46.3 72.0 14.5 29.4 39.6 Prompting 99.3 94.2 76.1 89.9 68.1 60.5 29.1 52.5 47.3 70.3 94.1 60.4 Multi-agent 98.6 24.7 99.9 74.4 68.5 60.2 45.0 57.9 44.4 72.4 94.1 60.3 PH3 +context 96.0 42.5 59.8 66.1 67.1 59.9 26.0 51.0 61.1 64.7 94.1 63.2 PH3 +memory 94.6 11.5 85.5 63.9 48.8 42.7 22.0 37.8 25.4 76.1 94.1 54.1 COIECD 97.8 35.8 82.7 72.1 70.5 63.9 22.1 52.1 64.1 59.6 94.1 61.7 ACD 95.6 12.1 93.5 67.1 66.7 | https://arxiv.org/abs/2505.16518v1 |
60.0 43.4 56.7 12.3 79.9 94.1 50.6 QWEN 2.5 7BRegular 96.6 36.0 79.0 70.5 71.7 65.6 25.3 54.2 91.8 23.6 41.2 53.3 Fine-tuning 99.6 47.4 85.0 77.4 76.7 68.8 41.7 62.4 86.4 1.8 82.4 39.0 Prompting 100.0 97.8 81.3 93.0 74.7 66.5 31.2 57.5 94.9 13.8 58.8 49.3 PH3 +context 97.8 96.3 16.7 70.3 69.7 63.6 25.3 52.8 83.4 50.1 17.6 64.5 PH3 +memory 96.8 4.0 84.2 61.6 66.5 59.5 26.6 50.8 90.5 4.1 76.5 42.0 COIECD 96.6 36.0 79.0 70.5 71.7 65.6 25.3 54.2 91.8 23.6 41.2 53.3 ACD 94.7 2.3 92.7 63.2 72.3 59.9 41.9 58.0 89.8 12.6 70.6 46.4 QWEN 2.5 7B InstructRegular 100.0 25.9 84.5 70.1 76.2 65.0 31.0 57.4 87.8 57.1 64.7 70.5 Fine-tuning 100.0 62.3 81.0 81.1 59.6 52.7 48.1 53.5 96.4 13.2 70.6 49.6 Prompting 100.0 98.6 35.3 78.0 75.8 66.7 29.1 57.2 87.8 57.1 64.7 70.5 Multi-agent 95.7 11.6 100.0 69.1 66.1 52.2 73.3 63.9 58.6 63.2 94.1 61.3 PH3 +context 98.3 84.0 54.1 78.8 75.3 64.4 26.9 55.5 86.9 54.7 70.6 68.8 PH3 +memory 100.0 27.6 82.8 70.1 76.4 66.1 30.9 57.8 3.1 81.4 70.6 47.3 COIECD 99.9 9.1 90.6 66.5 76.2 60.1 40.8 59.0 76.4 56.5 76.5 65.2 ACD 99.6 11.5 96.9 69.3 76.3 62.1 44.6 61.0 76.2 57.6 76.5 65.8 QWEN 2.5 32BRegular 99.9 77.6 77.2 84.9 77.3 66.7 39.7 61.2 98.2 19.8 41.2 54.0 Fine-tuning 98.1 88.4 81.9 89.4 79.2 69.2 46.3 64.9 98.0 9.7 82.4 48.4 Prompting 100.0 100.0 80.7 93.6 77.2 66.9 42.8 62.3 98.2 22.5 52.9 55.6 COIECD 97.4 96.5 58.5 84.1 76.1 67.4 32.7 58.7 97.1 27.8 29.4 57.9 ACD 97.6 2.3 92.6 64.1 75.7 56.1 57.6 63.1 97.6 14.1 58.8 50.6 QWEN 2.5 32B InstructRegular 99.4 4.9 92.6 65.6 81.4 59.9 43.8 61.7 97.9 43.2 76.5 67.2 Fine-tuning 100.0 18.0 93.6 70.5 71.6 64.9 42.0 59.5 96.4 20.8 52.9 53.8 Prompting 99.9 95.3 69.1 88.1 81.4 59.9 43.8 61.7 97.2 48.7 82.4 70.0 Multi-agent 100.0 20.6 100.0 73.5 76.8 57.2 49.2 61.1 93.1 55.6 94.1 72.1 COIECD 98.0 6.0 70.8 58.3 79.7 61.6 36.8 59.4 97.7 38.3 64.7 64.3 ACD 98.4 2.5 97.5 66.1 80.1 55.2 57.4 64.2 88.5 51.4 94.1 67.7 COMMAND ARegular 100.0 100.0 4.1 68.0 79.2 62.7 28.9 56.9 95.9 57.3 76.5 74.2 Prompting 97.0 92.8 48.4 79.4 79.2 62.7 28.9 56.9 93.6 64.4 70.6 77.2 Multi-agent 99.6 39.1 99.9 79.6 74.3 49.7 58.8 61.0 91.9 48.2 94.1 67.4 Table 3: BCU scores on CUB. A high BCU score is desirable regardless of context type. Gold denotes relevant contexts that also contain the gold answer. Conflict. denotes ‘Conflicting’ – relevant contexts that contain a conflicting answer, dissimilar from the correct answer or model memory. Irrel. denotes irrelevant contexts. Tot. denotes the average performance across all context types. Values marked in bold indicate the top CMT score across LMs for each dataset and context type. 14 Dataset CounterFact NQ DRUID Model Method Gold Conflict. Irrel. Tot. Gold Conflict. Irrel. Tot. Gold Conflict. Irrel. Tot. GPT-2 XLRegular 100.0 2.9 69.7 57.5 43.0 8.1 20.8 24.0 80.9 69.0 64.7 74.2 Fine-tuning 100.0 3.2 70.6 57.9 | https://arxiv.org/abs/2505.16518v1 |
46.9 7.7 23.8 26.2 72.4 65.5 41.2 68.4 Prompting 100.0 2.9 69.7 57.5 42.4 7.5 20.3 23.5 83.3 73.8 76.5 78.0 PH3 +context 100.0 0.4 29.8 43.4 42.3 7.8 20.4 23.6 79.6 65.7 52.9 71.7 PH3 +memory 100.0 0.4 65.1 55.1 41.4 7.4 20.1 23.0 81.1 72.6 76.5 76.3 COIECD 100.0 2.3 67.7 56.7 43.4 7.1 19.4 23.3 69.8 51.0 47.1 59.1 ACD 99.6 29.4 72.3 67.1 31.8 7.7 18.1 19.2 81.3 73.0 76.5 76.6 PYTHIA 6.9BRegular 100.0 37.2 91.4 76.2 52.7 9.8 29.6 30.8 84.1 49.9 47.1 64.7 Fine-tuning 100.0 26.5 91.8 72.8 54.0 5.6 26.6 28.8 81.5 74.4 70.6 77.5 Prompting 100.0 0.5 86.1 62.2 52.7 9.8 29.6 30.8 82.8 57.1 47.1 68.3 PH3 +context 98.3 2.5 62.1 54.3 55.9 8.4 30.0 31.5 87.1 55.2 52.9 69.0 PH3 +memory 91.4 86.0 90.4 89.2 48.9 11.5 29.7 30.1 86.2 55.1 64.7 68.7 COIECD 99.9 27.3 86.0 71.0 53.9 9.8 27.4 30.4 72.0 32.9 35.3 50.0 ACD 100.0 77.6 95.9 91.2 43.8 12.1 29.7 28.6 87.4 69.2 82.4 77.2 QWEN 2.5 1.5BRegular 99.9 41.9 74.2 72.0 44.0 7.7 22.0 24.6 84.7 63.5 52.9 72.7 Fine-tuning 100.0 5.5 77.0 60.8 66.1 18.8 42.4 42.5 79.7 60.3 58.8 68.7 Prompting 100.0 1.6 79.7 60.4 63.9 17.0 38.5 39.8 85.0 69.8 58.8 76.4 PH3 +context 100.0 0.7 50.1 50.3 44.2 12.6 25.5 27.5 63.8 26.9 11.8 42.9 PH3 +memory 98.9 52.8 78.0 76.6 19.4 8.1 10.4 12.7 81.2 74.5 70.6 77.4 COIECD 94.8 71.9 79.0 81.9 42.4 16.3 27.6 28.8 87.8 72.7 70.6 79.3 ACD 97.6 70.8 79.4 82.6 46.7 15.5 28.0 30.1 87.8 72.7 70.6 79.3 QWEN 2.5 1.5B InstructRegular 97.6 54.5 79.6 77.2 70.1 16.1 37.1 41.2 47.3 11.1 0.0 26.8 Fine-tuning 100.0 7.0 78.0 61.7 51.0 7.6 27.8 28.8 72.0 28.5 47.1 47.5 Prompting 99.3 5.4 74.1 59.6 68.1 15.7 38.8 41.0 47.3 11.1 0.0 26.8 Multi-agent 98.6 68.7 83.0 83.4 68.5 16.9 36.1 40.6 44.4 10.0 0.0 24.9 PH3 +context 96.0 35.9 58.2 63.4 67.1 15.4 34.7 39.1 61.1 18.9 0.0 37.2 PH3 +memory 94.6 68.9 78.3 80.6 48.8 13.1 25.8 29.3 25.4 7.2 0.0 15.1 COIECD 97.8 50.4 77.1 75.1 70.5 15.5 35.9 40.7 64.1 19.2 0.0 38.7 ACD 95.6 77.7 82.1 85.1 66.7 19.0 39.0 41.6 12.3 3.6 0.0 7.4 QWEN 2.5 7BRegular 96.6 52.2 72.6 73.8 71.7 16.7 39.0 42.6 91.8 57.6 23.5 72.3 Fine-tuning 99.6 45.1 77.1 73.9 76.7 18.5 50.5 48.6 86.4 74.8 70.6 79.8 Prompting 100.0 2.4 86.2 62.9 74.7 17.9 44.6 45.8 94.9 64.2 35.3 77.4 PH3 +context 97.8 0.2 6.0 34.7 69.7 17.0 38.7 41.9 83.4 30.5 5.9 53.4 PH3 +memory 96.8 88.6 79.4 88.2 66.5 17.6 37.7 40.6 90.5 73.4 70.6 80.8 COIECD 96.6 52.2 72.6 73.8 71.7 16.7 39.0 42.6 91.8 57.6 23.5 72.3 ACD 94.7 85.5 80.4 86.9 72.3 23.9 47.2 47.8 89.8 68.1 47.1 77.5 QWEN 2.5 7B InstructRegular 100.0 42.0 85.4 75.8 76.2 19.8 47.1 47.8 87.8 28.3 0.0 54.1 Fine-tuning 100.0 34.8 88.0 74.3 59.6 8.1 35.3 34.4 96.4 65.0 64.7 78.6 Prompting 100.0 1.9 37.5 46.5 75.8 20.3 46.0 47.4 87.8 28.3 0.0 54.1 Multi-agent 95.7 | https://arxiv.org/abs/2505.16518v1 |
85.5 94.0 91.7 66.1 21.4 40.9 42.9 58.6 18.5 29.4 36.0 PH3 +context 98.3 12.5 55.6 55.5 75.3 18.5 44.1 46.0 86.9 31.5 0.0 55.5 PH3 +memory 100.0 50.9 83.8 78.2 76.4 20.1 47.7 48.1 3.1 2.5 0.0 2.7 COIECD 99.9 75.0 90.8 88.6 76.2 25.8 48.2 50.1 76.4 29.2 5.9 49.7 ACD 99.6 85.1 94.0 92.9 76.3 25.0 49.3 50.3 76.2 29.1 5.9 49.5 QWEN 2.5 32BRegular 99.9 21.4 75.0 65.4 77.3 20.8 47.7 48.7 98.2 58.5 29.4 75.7 Fine-tuning 98.1 9.8 77.2 61.7 79.2 20.3 55.9 51.9 98.0 66.6 64.7 80.3 Prompting 100.0 0.2 80.7 60.3 77.2 19.9 50.2 49.2 98.2 57.5 41.2 75.2 COIECD 97.4 3.2 59.7 53.4 76.1 18.8 43.9 46.3 97.1 47.4 17.6 68.9 ACD 97.6 85.7 81.3 88.2 75.7 31.4 53.3 53.5 97.6 66.1 47.1 79.8 QWEN 2.5 32B InstructRegular 99.4 81.0 93.5 91.3 81.4 28.6 52.2 54.2 97.9 41.8 29.4 66.2 Fine-tuning 100.0 78.5 92.2 90.2 71.6 13.3 44.3 43.2 96.4 61.8 47.1 76.8 Prompting 99.9 3.2 70.6 57.9 81.4 28.6 52.2 54.2 97.2 36.2 11.8 62.6 Multi-agent 100.0 78.5 94.7 91.1 76.8 22.7 40.7 46.8 93.1 31.7 17.6 58.4 COIECD 98.0 9.7 72.4 60.0 79.7 23.4 49.4 50.9 97.7 43.3 29.4 66.9 ACD 98.4 94.7 95.4 96.2 80.1 35.3 55.4 57.0 88.5 36.0 17.6 58.8 COMMAND ARegular 100.0 0.0 4.4 34.8 79.2 12.3 33.8 41.9 95.9 30.3 5.9 58.8 Prompting 97.0 0.7 47.8 48.5 79.2 12.3 33.8 41.9 93.6 23.3 0.0 53.8 Multi-agent 99.6 32.2 90.2 74.0 74.3 13.5 40.4 42.8 91.9 33.2 23.5 58.7 Table 4: Accuracy with respect to gold label on CUB. Gold denotes relevant contexts that also contain the gold answer. Conflict. denotes ‘Conflicting’ – relevant contexts that contain a conflicting answer, dissimilar from the correct answer or model memory. Irrel. denotes irrelevant contexts. Tot. denotes the average performance across all context types. Values marked in bold indicate the top CMT score across LMs on each dataset and context type. 15 GPT-2 XL Pythia Qwen 1.5B Qwen 1.5B-I Qwen 7B Qwen 7B-I Qwen 32B Qwen 32B-ICounterFact Gold Conflicting Irrelevant Total GPT-2 XL Pythia Qwen 1.5B Qwen 1.5B-I Qwen 7B Qwen 7B-I Qwen 32B Qwen 32B-INQ 1 0 1 CCUGPT-2 XL Pythia Qwen 1.5B Qwen 1.5B-I Qwen 7B Qwen 7B-I Qwen 32B Qwen 32B-IDRUID 1 0 1 CCU 1 0 1 CCU 1 0 1 CCU Regular PromptingFine-tuning Multi-agentACD COIECDPH3 +context PH3 +memoryFigure 5: CCU scores for the evaluated context utilisation manipulation methods applied to the evaluated models and datasets. ‘Total’ denotes the averaged performance across all context types. A high CCU score is desirable regardless of context type. The red vertical lines indicate scores of 0. Rate of memorisation of CUB models We eval- uate all Regular LMs on the samples from CUB CounterFact without context. The results can be found in Table 9. We observe rates above 70% for all models. As expected, the highest memorisation rate is found for Pythia. The lowest is found for GPT-2 XL, which can be expected as the model is quite small and old. Prompt templates Following the same approach as previous work, | https://arxiv.org/abs/2505.16518v1 |
no specific prompt template was used for the LMs evaluated on CounterFact. The LMs were evaluated in a simple sentence comple- tion format as shown in Table 8. However, since the sentence completion format is less compatible with the instruction-tuned mod- els, we added a small prompt template for the eval- uation of the instruction-tuned Qwen models on CounterFact, as follows.Prompt without context for instruction-tuned LMs. <|im_start|>system You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|> <|im_start|>user Complete the following sentence. Only answer with the next word. <prompt> <|im_end|> <|im_start|>assistant Prompt with context for instruction-tuned LMs. <|im_start|>system You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|> <|im_start|>user Complete the following sentence. Only answer with the next word. Fact: <context> <prompt> <|im_end|> <|im_start|>assistant 16 Gold Conflict. Irrel. All QWEN 2.5 1.5B-I CounterFact 98.56 24.25 99.88 74.23 NQ 92.44 91.89 26.26 70.13 DRUID 93.27 96.52 17.65 94.79 QWEN 2.5 7B-I CounterFact 99.16 10.68 99.88 69.91 NQ 80.70 76.14 59.35 72.05 DRUID 82.53 65.56 94.12 73.06 QWEN 2.5 32B-I CounterFact 99.64 19.57 99.40 72.87 NQ 94.74 92.50 25.77 70.94 DRUID 98.66 76.25 88.24 86.05 COMMAND A CounterFact 100.00 99.88 99.88 99.92 NQ 94.31 91.82 37.69 74.56 DRUID 93.11 68.55 88.24 79.31 Table 5: Multi-agent: Relevance assessment accuracy B.2 NQ We retain all samples from the development set of NQ5for which a short answer of fewer than five tokens is identified in the raw HTML of the corresponding Wikipedia pages. Samples from the NQ dataset can be found in Table 10. Sampling of conflicting contexts For a given question, context and short answer, we perform the following steps to identify substitute answers for conflicting contexts: 1.Check if the short answer is a date6. If so, sam- ple a new random date in the interval [1900, 2030) and format it in the same way as the gold date. 2.If the short answer is not a date, prompt an LLM7with the question and short answer to provide a substitute answer of the same format. If the proposed answer is already found in the sample context, prompt the model, for a maximum of 20 times, to generate another answer until a substitute answer not already found in the context has been generated. The prompt used to query an LLM for a substitute answer was as follows: 5https://console.cloud.google.com/storage/ browser/natural_questions/v1.0/dev 6Using the dateutil.parser in Python. 7The Cohere model command-r-plus-08-2024 from https://docs.cohere.com/v2/docs/command-r-plus .Dataset Context CMT Corr. Model size DRUID Gold Multi-agent 0.42 DRUID Gold ACD 0.41 NQ Gold PH3 +memory 0.37 DRUID Gold Regular 0.36 DRUID Gold Prompting 0.36 NQ Conflicting PH3 +memory 0.33 NQ Gold Regular 0.20 NQ Irrelevant Regular 0.14 NQ Conflicting Regular 0.09 CounterFact Gold Regular 0.04 CounterFact Irrelevant Regular 0.02 CounterFact Conflicting Regular -0.01 DRUID Conflicting Regular -0.08 DRUID Irrelevant Regular -0.20 DRUID Irrelevant PH3 +memory -0.33 CounterFact Conflicting Fine-tuning -0.33 DRUID Irrelevant COIECD -0.44 Instruct tuned DRUID Conflicting PH3 +memory 0.77 DRUID Irrelevant PH3 +context 0.65 DRUID Conflicting ACD 0.54 DRUID Conflicting Prompting 0.46 DRUID Conflicting Regular 0.40 DRUID Conflicting COIECD 0.34 DRUID Irrelevant Regular 0.29 NQ Gold Regular 0.13 CounterFact | https://arxiv.org/abs/2505.16518v1 |
Irrelevant Regular 0.12 NQ Irrelevant Regular 0.06 NQ Conflicting Regular 0.05 CounterFact Gold Regular 0.01 DRUID Gold Regular -0.19 CounterFact Conflicting Regular -0.36 DRUID Gold ACD -0.38 CounterFact Conflicting PH3 +context -0.43 DRUID Gold PH3 +memory -0.72 Strength of memory DRUID Conflicting PH3 +memory 0.54 NQ Irrelevant Fine-tuning 0.47 NQ Irrelevant ACD 0.39 CounterFact Irrelevant Fine-tuning 0.39 NQ Irrelevant Prompting 0.39 NQ Irrelevant COIECD 0.38 DRUID Conflicting ACD 0.37 NQ Irrelevant Regular 0.37 CounterFact Irrelevant Regular 0.35 DRUID Conflicting Prompting 0.34 CounterFact Irrelevant ACD 0.32 CounterFact Irrelevant PH3 +memory 0.31 CounterFact Irrelevant COIECD 0.30 DRUID Conflicting Regular 0.26 NQ Gold Regular 0.18 DRUID Irrelevant Regular 0.15 NQ Conflicting Regular 0.09 CounterFact Gold Regular 0.04 DRUID Gold Regular 0.02 CounterFact Conflicting ACD -0.31 CounterFact Conflicting COIECD -0.42 DRUID Gold PH3 +memory -0.43 CounterFact Conflicting Regular -0.44 Table 6: Spearman’s ρbetween BCU and different model aspects. Correlation values for Regular or with an absolute value above 0.3 are shown. Correlation values with an absolute value below 0.3 are marked in gray. Significant correlation values (p-value < 0.05) are marked in bold . Prompt for getting substitute answers. ## Instructions Please provide an incorrect answer to the example below. The incorrect answer should be incorrect in the sense that it should be significantly 17 Dataset Context CMT Corr. Context length CounterFact Irrelevant Regular 0.06 CounterFact Conflicting Regular 0.04 CounterFact Gold Regular 0.02 DRUID Conflicting Regular -0.02 DRUID Irrelevant Regular -0.02 NQ Irrelevant Regular -0.06 DRUID Gold Regular -0.08 NQ Conflicting Regular -0.22 NQ Gold Regular -0.23 DRUID Irrelevant Multi-agent -0.32 Query-context overlap DRUID Gold Regular 0.02 DRUID Irrelevant Regular -0.03 NQ Gold Regular -0.06 NQ Conflicting Regular -0.08 NQ Irrelevant Regular -0.08 DRUID Conflicting Regular -0.13 DRUID Irrelevant Multi-agent -0.30 Distractor rate CounterFact Gold Regular 0.00 NQ Conflicting Regular -0.19 NQ Gold Regular -0.19 CounterFact Conflicting Regular -0.22 CounterFact Conflicting ACD -0.34 CounterFact Conflicting Multi-agent -0.49 Relevance judgement CounterFact Conflicting Multi-agent 0.53 CounterFact Conflicting Regular 0.17 NQ Irrelevant Regular 0.11 DRUID Irrelevant Regular 0.05 NQ Gold Regular 0.04 DRUID Gold Regular 0.03 NQ Conflicting Regular 0.02 CounterFact Irrelevant Regular 0.01 CounterFact Gold Regular -0.01 DRUID Conflicting Regular -0.15 NQ Irrelevant Multi-agent -0.36 DRUID Irrelevant Multi-agent -0.49 Table 7: Spearman’s ρbetween BCU and different in- put aspects. Correlation vallues for Regular or with an absolute value above 0.3 are shown. Correlation values with an absolute value below 0.3 are marked in gray. Significant correlation values (p-value < 0.05) are marked in bold . Prompt Type Fact: Athens, the capital city of Greece. Gold Athens, the capital city of Fact: Thomas Ong is a citizen of Pakistan. Conflicting Thomas Ong is a citizen of Fact: Melbourne, that is the capital of Jordan. Irrelevant Prince Oscar Bernadotte is a citizen of Table 8: CounterFact prompts with contexts and corre- sponding context types. For prompts without context, the first line (starting with “Fact:”) is simply removed. different from the original answer. At the same time, it should be a plausible answer to the given question. The incorrect answer should follow the same formatting as the original answer such that | https://arxiv.org/abs/2505.16518v1 |
it should be possible to directly replace the original answer with the incorrect answer in any context. The incorrect answer should be a single word orModel Accuracy GPT-2 XL 71.8 Pythia 99.6 Qwen 1.5B 77.0 Qwen 1.5B-I 83.1 Qwen 7B 79.7 Qwen 7B-I 93.6 Qwen 32B 78.0 Qwen 32B-I 94.5 Command A 90.6 Table 9: Accuracy, proxying memorisation rate, on sam- ples from CounterFact without context. a short phrase. Only output the incorrect answer. ## Example Question: <question> Original answer: <target_true> Incorrect answer: In the event that the model generated a substitute answer that already could be found in the context, the previous model answer was added to the chat history together with the following new user query: Prompt for getting another substitute answer. Please provide another incorrect answer following the same format as the original answer. Only output the incorrect answer. Quality of conflicting contexts A manual inspec- tion of 200 samples found the method reliable for producing adequate conflicting contexts with an accuracy of 90% (11 samples corresponded to poor formatting, 4 were too similar to gold, and 4 were dropped due to data formatting issues or the LLM being unable to generate a substitute answer not already found in the context). In addition, we in- spect the CUB results to ascertain the quality of the conflicting context sampling, see Appendix A. We also experimented with a method based on named entities and random sampling for producing substitute answers for the conflicting contexts. In the method, the entity type of the answer to be re- placed was detected and another named entity of the same type was randomly sampled from a NE dataset as the replacement. We found this method to work poorly compared to the LLM based ap- proach. Mainly because the detected NEs lacked sufficient information for a successful sampling of replacements (e.g. “2024” and “last year” may both be labelled as time entities, while they are not interchangeable in all contexts). Sampling of irrelevant contexts Given a query and a corresponding Wikipedia page, the NQ anno- 18 Question Short answer Context Type when did the movie napoleon dynamite come out?June 11, 2004 <Table> <Tr> <Th colspan="2"> Napoleon Dynamite </Th> </Tr> <Tr> <Td colspan="2"> Theatrical release poster </Td> </Tr> <Tr> <Th> Directed by </Th> <Td> Jared Hess </Td> </Tr> <Tr> <Th> Produced by </Th> <Td> <Ul> <Li> Jeremy Coon </Li> <Li> Chris Wyatt </Li> <Li> Sean Covel </Li> <Li> Jory Weitz </Li> </Ul> </Td> </Tr> <Tr> <Th> Screenplay by </Th> <Td> <Ul> <Li> Jared Hess </Li> <Li> Jerusha Hess </Li> </Ul> </Td> </Tr> <Tr> <Th> Based on </Th> <Td> Peluca by Jared Hess </Td> </Tr> <Tr> <Th> Starring </Th> <Td> <Ul> <Li> Jon Heder </Li> <Li> Jon Gries </Li> <Li> Efren Ramirez </Li> <Li> Tina Majorino </Li> <Li> Aaron Ruell </Li> <Li> Diedrich Bader </Li> <Li> Haylie Duff </Li> </Ul> </Td> </Tr> <Tr> <Th> Music by </Th> <Td> John Swihart </Td> </Tr> <Tr> <Th> Cinematography </Th> <Td> Munn Powell </Td> </Tr> <Tr> <Th> Edited by </Th> <Td> Jeremy Coon </Td> </Tr> <Tr> <Th> Production company </Th> <Td> <Ul> | https://arxiv.org/abs/2505.16518v1 |
<Li> MTV Films </Li> <Li> Napoleon Pictures </Li> <Li> Access Films </Li> </Ul> </Td> </Tr> <Tr> <Th> Distributed by </Th> <Td> <Ul> <Li> Fox Searchlight Pictures (North America) </Li> <Li> Paramount Pictures (International) </Li> </Ul> </Td> </Tr> <Tr> <Th> Release date </Th> <Td> <Ul> <Li> January 17, 2004 (2004 - 01 - 17) (Sundance) </Li> <Li> June 11, 2004 (2004 - 06 - 11) (United States) </Li> <Li> </Li> <Li> </Li> <Li> </Li> </Ul> </Td> </Tr> <Tr> <Th> Running time </Th> <Td> 95 minutes </Td> </Tr> <Tr> <Th> Country </Th> <Td> United States </Td> </Tr> <Tr> <Th> Language </Th> <Td> English </Td> </Tr> <Tr> <Th> Budget </Th> <Td> $400,000 </Td> </Tr> <Tr> <Th> Box office </Th> <Td> $46.1 million </Td> </Tr> </Table>Gold when was the lupus foundation of amer- ica founded?1977 <P> The Lupus Foundation of America (LFA), founded in 1967 , is a national voluntary health organization based in Washington, D.C. with a network of chapters, offices and support groups located in communities throughout the United States . The Foundation is devoted to solving the mystery of lupus, one of the world’s cruelest, most unpredictable and devastating diseases, while giving caring support to those who suffer from its brutal impact . Its mission is to improve the quality of life for all people affected by lupus through programs of research, education, support and advocacy . </P>Conflicting who has scored the most tries in rugby union?Daisuke Ohata <P> This is a list of the leading try scorers in rugby union test matches . It includes players with a minimum of 30 test tries . </P>Irrelevant Table 10: NQ samples and corresponding context types. tators were instructed to mark the first paragraph in the Wikipedia page that contains an answer to the query. Therefore, to ensure that we only sample ir- relevant contexts, we perform the sampling over all paragraphs before the gold paragraph in the given Wikipedia page. We use the Jina Reranker v28to identify the most relevant non-gold paragraph. It is a modern LM re-ranker that has been proven to work well on NQ (Hagström et al., 2025). Prompt templates The 2-shot prompts used to evaluate the LMs on NQ were as follows. Prompt without context. Answer the following questions. Question: When is the first episode of House of the Dragon released? Answer: August 21, 2022 Question: In what country will the 2026 Winter Olympics be held? Answer: Italy Question: <question> Answer: Prompt with context. Answer the following questions based on the context below. Question: When is the first episode of House of the Dragon released? 8jinaai/jina-reranker-v2-base-multilingualContext: <Table> <Tr> <Th> Season </Th> <Th> Episodes </Th> <Th> First released </Th> <Th > Last released </Th> </Tr> <Tr> <Td> 1 </Td > <Td> 10 </Td> <Td> August 21, 2022 </Td> < Td> October 23, 2022 </Td> </Tr> <Tr> <Td> 2 </Td> <Td> 8 </Td> <Td> June 16, 2024 </Td> <Td> August 4, 2024 </Td> </Tr> </Table Answer: August 21, 2022 Question: Where will the 2026 Winter Olympics be held? Context: <P> The 2026 Winter Olympics (Italian: Olimpiadi invernali del 2026), officially the XXV | https://arxiv.org/abs/2505.16518v1 |
Olympic Winter Games and commonly known as Milano Cortina 2026, is an upcoming international multi-sport event scheduled to take place from 6 to 22 February 2026 at sites across Lombardy and Northeast Italy. </P> Answer: Lombardy and Northeast Italy Question: <question> Context: <context> Answer: For the instruction-tuned Qwen models, a chat template with slightly different prompt templates was used. The 2-shot prompt templates for the instruction-tuned models were as follows. Prompt without context for instruction-tuned LMs. <|im_start|>system You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|> <|im_start|>user Answer the question. Only answer with the answer. Examples of questions and desired answers are given below. 19 # Example 1 Question: When is the first episode of House of the Dragon released? Answer: August 21, 2022 # Example 2 Question: In what country will the 2026 Winter Olympics be held? Answer: Italy # Now, answer the following question (only with the answer): Question: <question> Answer:<|im_end|> <|im_start|>assistant Prompt with context for instruction-tuned LMs. <|im_start|>system You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|> <|im_start|>user Answer the question based on the provided context. Only answer with the answer. Examples of questions and desired answers are given below. # Example 1 Question: When is the first episode of House of the Dragon released? Context: <Table> <Tr> <Th> Season </Th> <Th> Episodes </Th> <Th> First released </Th> <Th > Last released </Th> </Tr> <Tr> <Td> 1 </Td > <Td> 10 </Td> <Td> August 21, 2022 </Td> < Td> October 23, 2022 </Td> </Tr> <Tr> <Td> 2 </Td> <Td> 8 </Td> <Td> June 16, 2024 </Td> <Td> August 4, 2024 </Td> </Tr> </Table Answer: August 21, 2022 # Example 2 Question: Where will the 2026 Winter Olympics be held? Context: <P> The 2026 Winter Olympics (Italian: Olimpiadi invernali del 2026), officially the XXV Olympic Winter Games and commonly known as Milano Cortina 2026, is an upcoming international multi-sport event scheduled to take place from 6 to 22 February 2026 at sites across Lombardy and Northeast Italy. </P> Answer: Lombardy and Northeast Italy # Now, answer the following question (only with the answer): Question: <question> Context: <context> Answer:<|im_end|> <|im_start|>assistant B.3 DRUID We map the stances of DRUID to context type using the following approach: 1.Gold: If the evidence is relevant and the stance of the evidence aligns with the claim verdictreached by the fact-check site (here consid- ered gold). This automatically encompasses most samples with evidence that has been sam- pled from a fact-check site, as the stance of the evidence is likely to align with the FC verdict. 2.Conflicting: If the evidence is relevant and the stance of the evidence does not align with the claim verdict. This automatically encom- passes all samples with insufficient evidence, as the original FC verdicts always are True, Half True or False. 3. Irrelevant: If the evidence is irrelevant. Samples from the DRUID dataset can be found in Table 11. The evidence stance and fact-check verdict distributions per context type can be found in Tables 12 and 13. Prompt templates The 2-shot prompts used for evaluating the | https://arxiv.org/abs/2505.16518v1 |
LMs on DRUID were as follows. Prompt without context. Are the following claims True or False? Answer None if you are not sure or cannot answer. Claimant: Viral post Claim: "the new coronavirus has HIV proteins that indicate it was genetically modified in a laboratory." Answer: False Claimant: Sara Daniels Claim: "Blackpink released the single 'You me too'in 2026." Answer: None Claimant: <claimant> Claim: " <claim> " Answer: Prompt with context. Are the claims True or False based on the accompanying evidence? If you are not sure or cannot answer, say None. Claimant: Viral post Claim: "the new coronavirus has HIV proteins that indicate it was genetically modified in a laboratory." Evidence: "Microbiologists say the spike proteins found in the new coronavirus are different from the ones found in HIV. [...] There is no evidence to suggest the coronavirus was genetically modified." Answer: False Claimant: Sara Daniels Claim: "Blackpink released the single 'You me too'in 2026." Evidence: "Blackpink released their album 'Born Pink 'in 2022." Answer: None 20 Claimant Claim Verdict Evidence Type Viral ClaimHarvard professor Charles Lieber was arrested for manu- facturing and selling the new coronavirus to ChinaFalse Lieber was arrested on January 28 for "making false statements to the agency of the United States Government," or lying to federal authorities about his ties to China, as per the fact-check report. The channel added that prosecutors have never alleged that Lieber was involved in manufacturing and/or selling a virus to China. The full federal court complaint against Dr Lieber can be read <a href="https://htv-prod-media.s3.amazonaws.com/files/lieber-complaint- 1586387800.pdf" rel="noopener noreferrer" target="_blank">here</a>.</p>.<p>The report also clarified Lieber’s links to Wuhan. The report stated, "Lieber travelled to WUT (Wuhan University of Technology) in mid-November 2011 ostensibly in order to participate in a Nano-Energy Materials Forum."</p>.<p>On July 29, Dr Lieber’s attorney Marc Mukasey told WCVB Channel 5 that he didn’t hide anything or get paid as the government alleges.</p>.<p>Thus, the social media claim that Harvard professor Dr Charles Lieber "made and sold" the Covid-19 virus to China is false.</p>Gold FACEBOOK POSTWikiLeaks has pub- lished the 1st list of black money holders in Swiss banks.False (See attached file: List of Black Money Holders from Wiki Conflicting Irish Congress of Trade Unions (ICTU)One in five school staff in Northern Ire- land are assaulted at least once a week.False Finnegan, who died in January 2002, had also abused boys at St. Colman’s College, a prestigious Catholic boys’ secondary school in Newry, Northern Ireland. He taught there from 1967 to 1971 and again from 1973 to 1976, when he was appointed president of the school. He served in that post until 1987. [...] Admitted on October 9, 2014 to sample charges of indecently assaulting four boys as young as 10 at St Mary’s CBS primary school in Mullingar between 1984 and 1987. Jailed for two years at Mullingar Circuit Court sitting in Tullamore. This concluded a ten-year investigation by detectives in Mullingar. [...] When Smyth returned to Kilnacrott in 1983, he again began abusing children in Belfast, including the girl who, on February 23, 1990, would meet with a social worker | https://arxiv.org/abs/2505.16518v1 |
at the Catholic Family Welfare Society in Belfast and start all the Smyth revelations.Irrelevant Table 11: DRUID samples and corresponding context types. Context Evidence stance Count Gold Refutes 1,579 Supports 359 Conflicting Refutes 35 Insufficient-refutes 437 Insufficient-contradictory 163 Insufficient-neutral 892 Insufficient-supports 585 Supports 367 Irrelevant not applicable 83 Table 12: Stance distribution per context type for DRUID. Context FC verdict Count Gold False 1,579 True 359 Conflicting False 1,842 Half True 276 True 361 Irrelevant False 54 Half True 13 True 16 Table 13: Fact-check verdict distribution per context type for DRUID. Claimant: <claimant> Claim: " <claim> " Evidence: " <evidence> " Answer: For the instruction-tuned Qwen models, a chat tem- plate with slightly different prompt templates was used for compatibility. The 2-shot prompt tem- plates for the instruction-tuned models were as fol- lows. Prompt without context for instruction-tuned LMs. <|im_start|>system You are Qwen, created by Alibaba Cloud. You area helpful assistant.<|im_end|> <|im_start|>user Is the claim True or False? Answer None if you are not sure or cannot answer. Only answer with True, False or None. Examples of claims and desired answers are given below. # Example 1 Claimant: Viral post Claim: "the new coronavirus has HIV proteins that indicate it was genetically modified in a laboratory." Answer: False # Example 2 Claimant: Sara Daniels Claim: "Blackpink released the single 'You me too'in 2026." Answer: None # Now, answer for the following claim: Claimant: <claimant> Claim: " <claim> " Answer (True, False or None):<|im_end|> <|im_start|>assistant Prompt with context for instruction-tuned LMs. <|im_start|>system You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|> <|im_start|>user Is the claim True or False based on the accompanying evidence? If you are not sure or cannot answer, say None. Only answer with True, False or None. Examples of claims, evidence and desired answers are given below . # Example 1 Claimant: Viral post Claim: "the new coronavirus has HIV proteins that indicate it was genetically modified in 21 a laboratory." Evidence: "Microbiologists say the spike proteins found in the new coronavirus are different from the ones found in HIV. [...] There is no evidence to suggest the coronavirus was genetically modified." Answer: False # Example 2 Claimant: Sara Daniels Claim: "Blackpink released the single 'You me too'in 2026." Evidence: "Blackpink released their album 'Born Pink 'in 2022." Answer: None # Now, answer for the following claim: Claimant: <claimant> Claim: " <claim> " Evidence: " <evidence> " Answer (True, False or None):<|im_end|> <|im_start|>assistant C CCU metric BCU cannot measure the difference in model be- haviour when context is introduced, as it does not take model behaviour without context into consider- ation. To address this, we introduce CCU . Given a query Qand context C,CCU measures the change in probability for token tas follows. CCU( t) = PM(t|Q,C )−PM(t|Q) 1−PM(t|Q) ifPM(t|Q,C )≥PM(t|Q), PM(t|Q,C )−PM(t|Q) PM(t|Q) otherwise.(1) For relevant contexts Cwe record CCU( tC), i.e. the scores for the token promoted by the context. For irrelevant contexts we record the CCU( tM), i.e. the scores for the top token predicted by the model when prompted without context | https://arxiv.org/abs/2505.16518v1 |
(memory). The range of CCU is[−1,1], for which a value of −1denotes that the model goes completely against the context when the context is relevant or against its memory when the context is irrelevant, and vice versa for CCU values of 1. We report the averaged CCU per context type. By measuring the token probabilities before and after context is introduced, the CCU metric more accurately captures how the LM is impacted by context. However, this metric excludes the Com- mand A model, which does not provide the output logits necessary to compute CCU scores.D Hyperparameter Search D.1 Prompting The tuned prompt found for each model and dataset can be found in Table 14. Different sets of prompts were experimented with depending on dataset and model type. A set of 11 to 12 prompts were pro- duced for each of CounterFact, NQ and DRUID for the three different model types (causal LM, instruction-tuned LMs and Command A), respec- tively. Prompts with the same number are similar to each other across model types (e.g. Prompt #2 for Qwen2.5 on DRUID is similar to Prompt #2 for instruction-tuned Qwen2.5 on DRUID). Prompt sets across different datasets are dissimilar as they are adapted to align the instructions and few-shot examples to the given dataset. Prompt sets across different model types for the same dataset are dis- similar as small tweaks need to be applied for the instruction-tuned models that work less well in a purely causal language modelling setting, and for Command A that is a chat-based model. All prompts will be possible to view in the code repos- itory of the paper. Dataset Model Prompt CounterFact GPT2-XL 1.5B default PYTHIA 6.9B Prompt #10 (ChatGPT) QWEN 2.5 1.5B Prompt #1 (Jin et al. (2024)) 7B Prompt #11 (ChatGPT) 32B Prompt #8 (ChatGPT) QWEN 2.5-I 1.5B Instruct-prompt #4 (manual) 7B Instruct-prompt #11 (ChatGPT) 32B Instruct-prompt #3 (manual) COMMAND A Prompt #5 (ChatGPT) NQ GPT2-XL 1.5B Prompt #2 (manual) PYTHIA 6.9B default QWEN 2.5 1.5B Prompt #7 (ChatGPT) 7B Prompt #6 (ChatGPT) 32B Prompt #5 (manual) QWEN 2.5-I 1.5B Prompt #5 (manual) 7B Prompt #3 (manual) 32B default COMMAND A default DRUID GPT2-XL 1.5B Prompt #8 (ChatGPT) PYTHIA 6.9B Prompt #2 (manual) QWEN 2.5 1.5B Prompt #2 (manual) 7B Prompt #11 (Microsoft Copilot) 32B Prompt #1 (manual) QWEN 2.5-I 1.5B default 7B default 32B Prompt #2 (manual) COMMAND A Prompt #1 (manual) Table 14: The tuned prompts for each LM. default de- notes that the original prompt template (seen in Ap- pendix B) worked best. “-I” denotes instruction-tuned model versions. The source of the prompt is indicated in parenthesis. 22 D.2 PH3 The tuned attention head configurations for PH3 can be found in Table 15. The head configurations are grouped by the top number of identified atten- tion heads to consider and to what extent we allow mixing between context and memory heads. E.g. #25alldenotes all top-25 context and memory heads detected, #3 memory denotes the top-3 mem- ory heads, allowing for overlap with context heads, and #1 only memory denotes memory heads de- tected without overlap | https://arxiv.org/abs/2505.16518v1 |
with context heads when considering the top-1 context and memory heads. D.3 Context-aware Contrastive Decoding: COIECD Unlike other CMTs, the hyperparameters used in COIECD, αandλ, are selected following the orig- inal paper, Yuan et al. (2024), using the gold con- text from the validation set of NQ dataset. This deviation is necessary, as optimising COIECD’s hyperparameters by maximising the average BCU across all context types causes the model to con- verge to using only the output distribution without context in the decoding step. This outcome arises from the nature of COIECD, where always rely- ing on the distribution without context results in a BCU score of 1.0 for irrelevant contexts, while also causing the model to ignore context, including gold and conflicting contexts. To prevent COIECD from collapsing into regular generation without context and to enable meaningful comparison with other CMTs, we follow the hyperparameter search from the original paper. While Yuan et al. (2024) uses the same hyperparameter values across all mod- els, our models exhibit different tendencies during hyperparameter search. Therefore, we tune the hy- perparameters separately for each model to ensure a fair comparison with other methods. We search αin the range [0.0, 2.0] and λin the range [0.1, 1.0], and the hyperparameters for each model are in Table 16. E Implementation Details of Fine-tuning We fine-tune the LMs with a learning rate of 5e-5,9 using warm-up. To avoid overfitting, we use early stopping based on the loss on the validation set. For QA datasets, we use the train split from SQuAD 2.0 (Rajpurkar et al., 2018), and TriviaQA (Joshi et al., 2017). For a FC dataset, we take the train split 9Experiments with other learning rates yielded insignifi- cant changes in performance on the validation set.from A VeriTeC (Schlichtkrull et al., 2023). For a sentence completion dataset, we take the static partition of the DYNAMICQA (Marjanovic et al., 2024). We only create counterfactual training ex- amples with DYNAMICQA dataset. The detailed statistics for mixing the selected datasets can be found in Table 17. F Additional Details of Multi-agent Algorithm 1 Multi-agent 1:Given: question q, context c 2:Stage1: Relevance Assessment 3:Predict frel∼LM rel(frel|q, c) 4:iffrel=Relevant then 5: Proceed to Stage 2 6:else 7: return LM(a|q) ▷Answer w/o c 8:end if 9:Stage 2: Context-Faithfulness 10:Predict ac∼LM(ac|q, c) 11:Predict ffaith∼LM faith(ffaith|q, c, a c) 12:ifffaith=Faithful then 13: return ac ▷Answer w/ c 14:else 15: Proceed to Stage 3 16:end if 17:Stage 3: Self-Refinement 18:return LM(a|q, c, a c, ffaith)▷Self-Refined We design the Multi-agent approach to inves- tigate whether LMs can explicitly handle the two objectives of context utilisation: (1) being robust to irrelevant context and (2) being faithful to rel- evant context. Rather than directly generating an answer, an LM is guided to perform intermediate reasoning steps, each handled by a dedicated LM agent. This decomposition allows us to understand whether LMs can explicitly recognise when the context should be used and whether their answer aligns with it when it is. While self-refinement and LM agent have been used broadly in reasoning tasks (Du et al., 2024; Feng et al., 2024; | https://arxiv.org/abs/2505.16518v1 |
Madaan et al., 2023), our motivation is grounded in exam- ining two components of context utilisation sep- arately. Notably, self-refinement is only applied when the context is assessed as relevant but the an- swer is assessed as unfaithful, reflecting our focus on improving the usage of relevant context. By structuring the problem in this way, we aim to bet- ter understand the extent to which LMs can reason 23 about context relevance and faithfulness. Figure 2 and Algorithm 1 outline the Multi-agent procedure employed in our framework. Given a question and the context, the model first undergoes a relevance assessment stage, where it is explicitly instructed to determine whether the context is relevant to the question (Shen et al., 2024). If assessed as irrelevant, the model answers without the context; if relevant, it incorporates the context to generate the initial answer and proceeds to the next stage. In the context faithfulness assessment, the model is instructed to provide feedback on whether its answer faithfully reflects the provided context. If deemed faithful, the answer is retained as the final answer. If the prediction is assessed as unfaithful, the model is instructed to refine its answer using the question, context, initial answer, and feedback derived from the faithfulness assessment. This self-refinement stage encourages the model to self-correct based on its own feedback. To ensure consistency in output formatting during refinement, we incorporate two-shot demonstrations. The templates for relevance assessment, context faithfulness, and self-refinement are presented be- low. Task-specific templates for each dataset are available in the released code. Relevance Assessment (NQ) You are a relevance assessment expert. Your task is to evaluate whether the provided context is relevant to the question. Context: {context} Question: {question} If the provided context is relevant to the question, answer "Relevant", otherwise answer "Irrelevant". Do not rely on your own knowledge or judge the factual accuracy of the context. Answer: Context faithfulness (CounterFact and NQ) You are a context-faithfulness expert. Your task is to evaluate whether the proposed answer faithfully uses the information in the provided context. Context: {context} Question: {question} Proposed answer: {response} Does the answer faithfully reflect the content of the context? Do not rely on your own knowledge or judge the factual accuracy of the context. Please explain briefly. Feedback:Self-refinement (NQ) Your task is to generate the best possible final answer to the question, based on the expert feedback. You may keep the original proposed answer if it is correct, or revise it if the feedback suggests it is incorrect or unsupported. Generate only the final answer. Do not include any explanation or repeat the prompt. {Two demonstrations} Context: {context} Question: {question} Proposed answer: {response} Feedback on context faithfulness: {feedback} Final answer: G Input Features We detect the input features described in Sec- tion 5.2 as follows: • Context length is measured by the number of characters in the context. •Flesch reading ease score is measured with thetextstat10module. •Query-context overlap is measured as the size of the set of words that form the intersection of the set of words in the query and context, respectively, normalised | https://arxiv.org/abs/2505.16518v1 |
by the size of the set of query words. CounterFact is excluded from this analysis as its synthetic samples yield triv- ial results for this feature. •The answer position is measured as the index of the answer in the context normalised by context length. This feature is only detectable for gold and conflicting contexts for Counter- Fact and NQ. •The distractor rate is measured as the num- ber of answer entities found in the context, divided by the total number of entities in the context with an entity type that matches the answer entity type(s).11This feature is simi- larly only measurable for gold and conflicting contexts from CounterFact and NQ. •Relevance is given by the relevance agent based on Qwen 32B Instruct from the Multi- agent setup. It labels context as either ‘rele- vant’ or ‘irrelevant’. H Computational Resources GPT2-XL was evaluated using one Nvidia T4 GPU. Pythia, Qwen 1.5B and Qwen 7B using one A40 10https://github.com/textstat/textstat 11Named entities are detected using spaCy and en_core_web_trf . 24 GPU. Qwen 32B was evaluated using four A40 GPUs. The compute budget for all CMTs was about 14 hours per model for CounterFact, 28 hours per model for NQ and 21 hours per model for DRUID, amounting to a total of about 900 GPU hours. The costs for the experiments with Cohere Com- mand A amounted to a total of about 120 USD. I Use of AI assistants AI assistants like Copilot and ChatGPT were in- termittently used to generate template code and rephrase sentences in the paper, etc. However, no complete paper sections or code scripts have been generated by an AI assistant. All generated content has been inspected and verified by the authors. 25 Model Mode CounterFact NQ DRUID GPT2-XL +context #25all #1all #5only memory * L18H10, L21H10, L21H7, L22H18, L22H20, L24H6, L26H14, L26H20, L26H8, L27H15, L27H5, L28H15, L29H5, L29H9, L30H21, L30H8, L31H0, L31H3, L31H8, L32H13, L33H14, L33H18, L33H2, L33H7, L34H17, L34H20, L35H17, L35H19, L35H21, L36H17, L36H2, L37H7, L38H24, L38H7, L39H12, L39H9, L40H13, L40H23, L41H5, L41H9, L42H24, L43H15, L47H0L28H15, L35H19 L32H13, L35H19, L42H24, L43H15 +memory #12memory #7only context #22all L26H14, L26H8, L32H13, L33H14, L35H19, L38H24, L40H23, L41H5, L42H24, L43H15, L47H0, L30H8L27H15, L28H15, L29H9, L33H2, L34H17, L37H7L21H10, L22H20, L24H6, L26H14, L26H20, L26H8, L27H15, L27H5, L28H15, L29H9, L30H21, L30H8, L31H0, L31H3, L31H8, L32H13, L33H14, L33H18, L33H2, L33H7, L34H17, L34H20, L35H17, L35H19, L36H17, L36H2, L37H7, L38H24, L38H7, L39H12, L39H9, L40H13, L40H23, L41H5, L42H24, L43H15, L47H0 PYTHIA 6.9B +context #15memory #17only memory #10only context L10H27, L14H6, L16H16, L17H28, L19H11, L19H21, L20H11, L20H18, L21H8, L27H22, L18H7, L19H28, L20H2, L20H8, L24H5L10H27, L14H28, L14H6, L16H16, L17H28, L19H11, L19H21, L20H11, L20H18, L21H8, L22H12, L27H22L12H11, L12H13, L14H0, L15H17, L17H14, L20H2, L8H11 +memory #25only context #12only context #17only context L10H1, L12H11, L12H13, L13H12, L14H0, L14H23, L15H17, L17H14, L18H10, L19H1, L19H20, L21H10, L23H25, L29H22, L8H11, L8H24L12H11, L12H13, L14H0, L14H23, L15H17, L17H14, L19H31, L20H2, L8H11L10H1, L12H11, L12H13, L13H12, L14H0, L14H23, L15H17, L17H14, L18H10, L19H1, L19H31, L8H11 QWEN 2.5 1.5B +context #15only memory #12only memory #17only context L10H0, L10H1, L13H1, L16H1, L17H0, L18H0, L1H1, L3H0L10H0, L13H1, L16H1, L17H0, L18H0, L1H1L14H1, L16H0, L18H1, L19H0, L19H1, L20H1, L24H1, | https://arxiv.org/abs/2505.16518v1 |
L26H0, L26H1, L9H0 +memory #5only context #12only context #12only memory L15H1, L16H0, L27H0 L14H1, L16H0, L18H1, L19H0, L24H1, L27H0L10H0, L13H1, L16H1, L17H0, L18H0, L1H1 QWEN 2.5 1.5B +context #7only memory #1only context #10only context Instruct L15H0, L1H1, L21H0 L19H1 L14H0, L17H1, L19H1, L22H0, L26H0 +memory #1only context #12only context * #5 only context L19H1 L14H0, L17H1, L19H1, L22H0, L26H0, L27H0L17H0, L19H1, L22H0 QWEN 2.5 7B +context #7memory #1only context #3only memory L0H0, L17H1, L18H2, L19H0, L21H0, L22H2, L23H0L27H0 L0H0, L22H2 +memory #15only context #5only context #12only context L13H0, L17H0, L18H1, L18H3, L22H0, L24H3, L25H1, L26H0, L27H0, L27H2L22H0, L27H0, L27H2 L16H3, L17H0, L18H1, L18H3, L22H0, L24H3, L26H0, L27H0, L27H2 QWEN 2.5 7B +context #17only memory #5context #5only context Instruct L11H1, L12H0, L13H3, L14H3, L16H1, L17H0, L17H3, L18H2, L1H1, L20H0, L21H2, L26H3, L3H0L18H0, L18H3, L22H2, L23H0, L27H2L18H0, L18H3, L27H2 +memory #3only context #3only context #17all L18H0 L18H0 L0H0, L11H1, L12H0, L13H3, L14H3, L15H1, L16H0, L16H1, L17H0, L17H3, L18H0, L18H1, L18H2, L18H3, L19H0, L19H3, L1H1, L20H0, L20H2, L20H3, L21H0, L21H2, L22H0, L22H2, L23H0, L26H3, L27H0, L27H2, L3H0, L8H1 Table 15: Tuned PH3 attention head configurations for each model and evaluation dataset. +context indicates heads for which pruning leads to increased context usage and vice versa for +memory. Configurations marked with * denote that they yielded degraded performance compared to the standard setting (no mechanistic intervention) on the validation set. 26 Model λ α GPT2-XL 0.50 1.00 PYTHIA 6.9B 0.50 1.00 QWEN 2.5 1.5B 1.00 0.50 QWEN 2.5 1.5B I NSTRUCT 0.50 1.00 QWEN 2.5 7B 1.00 1.00 QWEN 2.5 7B I NSTRUCT 0.50 0.50 QWEN 2.5 32B 0.50 1.00 QWEN 2.5 32B I NSTRUCT 0.50 1.50 Table 16: Selected COIECD hyperparameters λandα for each model, evaluated on gold contexts from NQ’s validation set. For models with multiple (λ, α)pairs attaining the maximum score, we choose the setting that lies near the midpoint of the optimal region. Dataset Dataset weight Context type Context weight SQuAD 2.0 0.4Relevant 0.65 Irrelevant 0.25 Empty 0.1 TriviaQA 0.3Relevant 0.65 Irrelevant 0.25 Empty 0.10 A VeriTeC 0.15Relevant 0.65 Irrelevant 0.25 Empty 0.10 DYNAMICQA 0.15Relevant 0.50 Irrelevant 0.05 Empty 0.05 Counterfactual 0.40 Table 17: Sampling weight for each dataset. We first sample the number of instances for each dataset follow- ing the dataset sampling weight. Then, each context type is determined by the context sampling weight. 27 | https://arxiv.org/abs/2505.16518v1 |
arXiv:2505.16520v2 [cs.CL] 26 May 2025Published as a conference paper at ACL 2025 Main Are the Hidden States Hiding Something? Testing the Limits of Factuality-Encoding Capabilities in LLMs* Giovanni Servedio1,2Alessandro De Bellis1Dario Di Palma1 Vito Walter Anelli1Tommaso Di Noia1 1Politecnico di Bari, Italy2Sapienza University of Rome, Italy name.surname@poliba.it Abstract Factual hallucinations are a major challenge for Large Language Models (LLMs). They un- dermine reliability and user trust by generating inaccurate or fabricated content. Recent studies suggest that when generating false statements, the internal states of LLMs encode informa- tion about truthfulness. However, these stud- ies often rely on synthetic datasets that lack realism, which limits generalization when eval- uating the factual accuracy of text generated by the model itself. In this paper, we chal- lenge the findings of previous work by investi- gating truthfulness encoding capabilities, lead- ing to the generation of a more realistic and challenging dataset. Specifically, we extend previous work by introducing: (1) a strategy for sampling plausible true-false factoid sen- tences from tabular data and (2) a procedure for generating realistic, LLM-dependent true- false datasets from Question Answering collec- tions. Our analysis of two open-source LLMs reveals that while the findings from previous studies are partially validated, generalization to LLM-generated datasets remains challeng- ing. This study lays the groundwork for future research on factuality in LLMs and offers prac- tical guidelines for more effective evaluation. Code is provided at our Github Repo. 1 Introduction In the last few years, Large Language Models (LLMs) have shown outstanding abilities in nat- ural language processing tasks and beyond (Bian- cofiore et al., 2025; Di Palma, 2023). Nevertheless, *This is the authors’ version of the work. The final, published version will appear in the Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL ‘25) . This work is licensed under a Creative Commons Attribu- tion 4.0 International License (CC BY 4.0). Please cite the official published version when available.factual hallucinations (Zhang et al., 2023) repre- sent a significant obstacle, limiting their reliability and hindering their safe deployment in real-world applications (Di Palma et al., 2025) such as health- care (Pham and V o, 2024), education (Upadhyay et al., 2023), legal advice (Dahl et al., 2024), and language understanding (De Bellis et al., 2024; Anelli et al., 2022). Hallucinations occur when an LLM generates content that is syntactically co- herent but factually inaccurate, decreasing trust in AI systems (Huang et al., 2024). Recent research suggests that LLMs may encode internal represen- tations of factuality in their hidden states, indicat- ing an awareness of whether a generated statement is true or false (Chen et al., 2024). These efforts led to the development of approaches to evaluate the factual accuracy of the LLM outputs given their in- ternal representations ( factuality "self-evaluation" ). Self-evaluation can be used to identify gaps in the knowledge of an LLM, improving truthfulness and transparency through abstention mechanisms (Feng et al., 2024), fact verification (Wadden et al., 2020), and self-correction (Ji et al., 2023). Azaria and Mitchell (2023) suggest that LLMs have “some internal notion as to | https://arxiv.org/abs/2505.16520v2 |
whether a sentence is true or false, as this information is required for generating (or predicting) following tokens.” Based on this as- sumption, they propose a neural classifier to discern factual from non-factual statements based on hid- den layer activations. However, the datasets used to evaluate the probe present limitations since they contain trivially incorrect statements (e.g., "The zebra uses flying for locomotion") that easily fail to align with the generative patterns of LLMs. Ad- ditionally, the false statements are generated using random substitutions of the true terms with little regard for the plausibility of negative samples. This misalignment not only weakens the generalizability of results but also raises concerns about the appli- cability of these models to real-world scenarios where false statements may be subtle or nuanced. Published as a conference paper at ACL 2025 Main This study addresses these gaps by generating more plausible datasets (see Figure 1) to explore LLM factuality encoding and evaluating refined models. The primary contributions are: 1.Wereproduce the methodology of Azaria and Mitchell (2023) to ensure transparency. 2.Wepropose two strategies to generate realis- tic datasets and discuss how well the original and newly developed models generalize. Specifically, we extend their work with two novel dataset creation strategies and design a strategy that better fits the factuality self-evaluation task, introducing: •Aperplexity-based negative sampling strategy that enhances the original generation mechanism and leverages the LLM token distribution. •Anovel strategy to sample realistic LLM- generated facts , leveraging Question Answering datasets to elicit responses from the LLM. Through this analysis, we lay the groundwork for more robust factuality assessments and offer practi- cal guidelines for enhancing the reliability of LLMs in diverse applications. 2 Reproduction of Prior Work: Settings In their study, Azaria and Mitchell (2023) inves- tigate whether LLMs internally represent the fac- tuality of sentences. This section summarizes the dataset generation approach they employed and the specific probing architecture used in their study. 2.1 Dataset Generation Strategy To explore whether LLMs internally represent the factuality of statements, the authors constructed a ‘True-False’ dataset of facts labeled as either True or False, covering six disjoint topics: Cities ,In- ventions ,Chemical Elements ,Animals ,Companies , andScientific Facts . To generate the dataset, for the first five topics, i.e. with the exception of Sci- entific Facts , the authors selected true statements from reliable sources (see Table 7 in the appendix) and produced false statements , replacing part of a true statement (e.g. “Hydrogen has an atomic number of 1” ) with randomly sampled incorrect information ( “Hydrogen has an atomic number of 34”). Meanwhile, for Scientific Facts , they em- ployed ChatGPT (13 Feb 2023) as a generator of true and false sentences, and two human annotators manually verified their correctness. The authors publicly release the dataset, which we refer to as the“True-False dataset” . Furthermore, the authorsconstructed a second dataset using the OPT-6.7b model, which we refer to as the ‘ OPT-Generated Dataset ’. To create this dataset, the model was prompted with a true statement absent from the True-False dataset and then used to generate a | https://arxiv.org/abs/2505.16520v2 |
sub- sequent sentence. The responses were manually fact-checked and annotated by three independent human judges. Non-factual responses were filtered out, resulting in a final set of 245 statements. 2.2 Internal States Analysis via SAPLMA To investigate whether LLMs internally represent the factuality of statements, Azaria and Mitchell (2023) developed a probe ( Statement Accuracy Prediction based on Language Model Activations) that predicts the factual accuracy of a statement by analyzing the hidden layer activations of an LLM. SAPLMA is a feedforward neural network designed to classify statements as true or false. It consists of three hidden layers (256, 128, 64) and a sigmoid output activation. The model is trained using the Adam optimizer for five epochs without hyperparameter tuning. The authors studied two LLMs, namely OPT-6.7b (Zhang et al., 2022) and Llama 2-7b (Touvron et al., 2023), both consist- ing of 32 layers. To identify which layers best capture factuality, they trained five SAPLMA mod- els, forwarding each statement in the True-False Dataset as input to the LLMs and extracting the corresponding activation values from the 32nd (last layer), 28th, 24th, 20th, and 16th layers. These activations serve as input for training the classifiers. To ensure generalizability, i.e., making SAPLMA independent of specific topics, the authors adopted a cross-validation strategy using a leave-one-topic- out approach to train the classifier on five topics and test the probe on the held-out topic. 2.3 Reproducibility Settings In this section, we provide details on the datasets and experimental settings for reproducing the work of Azaria and Mitchell (2023). Our goal is to an- swer the Research Question ( RQ0 ):Can we repro- duce the results reported by Azaria and Mitchell (2023)?" Although the code is not publicly accessi- ble, the authors made it available upon request. Dataset generation. The code provided by Azaria and Mitchell (2023) contains all the necessary ma- terial to recreate the entire dataset generation pro- cess. However, their template-matching code is influenced by randomness in the generation of false statements, and a random seed is not set. Published as a conference paper at ACL 2025 Main Due to this non-deterministic behavior, recreating their dataset using the original code was unfeasible. However, the authors released their dataset, allow- ing us to reproduce their exact dataset settings. SAPLMA reproducibility. To reproduce the re- sults of the original study, we trained 20 SAPLMA probes for each of the following layers: the 32nd, 28th, 24th, 20th, and 16th, over 5 epochs, result- ing in a total of 100 probes. We employed Llama 2-7b and OPT-6.7b, both using half-precision (16- bit float) parameters, with a default temperature of 0.8 for Llama 2-7b and 1.0 for OPT-6.7b. The hardware used for the experiments was an Intel(R) Core(TM) i7-5820K paired with an NVIDIA RTX 3090 graphics card. The authors do not specify from which token they extract the associated hid- den state. However, code inspection led to the identification of the last token as the target state. 3 Experimental Reproducibility Results To answer RQ0 , we report the results for the re- production of the experiments in Tables 1, | https://arxiv.org/abs/2505.16520v2 |
2 and 3. The results labeled with ‘Orig.’ are retrieved from the original work, while the ones achieved in the reproducibility study are labeled with ‘Repr.’. 3.1 Reproduction of SAPLMA Results on the True-False Dataset Table 1 reports SAPLMA’s performance on the True-False dataset across different layers of Llama 2-7b and OPT-6.7b for six categories: Cities, Inven- tions, Elements, Animals, Companies, and Facts. Results indicate that Llama 2-7b consistently out- performs OPT-6.7b across all layers and categories. Middle layers (16, 20, 24) achieve the highest per- formance, while accuracy declines toward the final layer. The reproduced results closely align with the original findings, with minor deviations observed across specific categories and layers. Moreover, although OPT-6.7b shows greater variability, the ranking of layers remains consistent with the orig- inal work. Both experiments confirm that factual- ity information is more effectively encoded in the middle layers (16–24) than in the final layer. Ad- ditionally, we reproduce the baseline used by the authors to compare SAPLMA. Specifically, their baseline includes a trained SAPLMA on BERT ac- tivations, a few-shot approach where the LLM is prompted with a sentence and asked to label it as ‘True’ or ‘False,’ and an ‘It-is-true’ test. In this test, the LLMs were asked: Is it true that X? andIs it false that X? , where X is a dataset sample. A response was considered correct if the model as- signed a higher probability to the ‘True’ token. Ta- ble 2 summarizes the baseline performance. BERT achieves the highest average performance among non-prompted methods, demonstrating its effective- ness in factual classification, and the results indi- cate high reproducibility. The It-is-true baseline yields the highest average performance, particularly in the ‘Companies’ topic. The results demonstrate consistent reproducibility. In general, the reproduction on the True-False Dataset is considered successful based on the fol- lowing observations: (i) the overall performance trends remain consistent, with Llama 2-7b outper- forming OPT-6.7b; (ii) observed deviations are mi- nor and do not indicate fundamental inconsisten- cies; (iii) the relative ranking of layers remains unchanged, reinforcing previous findings; and (iv) baseline methods retain their rankings, confirming the validity of the original results. 3.2 Reproduction of the Results on the OPT-Generated Dataset Table 3 presents the results of the reproduced ex- periments on the OPT-Generated Dataset. Perfor- mance is evaluated using Accuracy, AUC, Accu- racy with an Optimal Threshold (selected by es- timating it from a held-out validation set), and the Average Optimal Threshold. The final layers, specifically the 28th and final layers, outperform the middle and lower layers in terms of Accuracy and AUC. The reproduced results closely align with the original findings, exhibiting only minor varia- tions. Accuracy with anoptimal threshold consis- tently exceeds raw accuracy, suggesting that tuning the decision boundary improves performance. Regarding baselines, BERT exhibits lower accu- racy compared to the LLMs’ last layers, with slight improvements over the original results. Notably, the 3- and 5-shot prompting results were identical between the original and reproduced experiments. RQ0 :Can we reproduce the results reported by Azaria and Mitchell (2023)? This reproducibility study demonstrates a high degree | https://arxiv.org/abs/2505.16520v2 |
of alignment with the original re- sults , confirming the validity of previous findings. It shows that the ranking and trends remain un- changed, reinforcing the robustness of the results. Published as a conference paper at ACL 2025 Main LayerCities Inventions Elements Animals Companies Facts Average Llama2 OPT-6.7b Llama2 OPT-6.7b Llama2 OPT-6.7b Llama2 OPT-6.7b Llama2 OPT-6.7b Llama2 OPT-6.7b Llama2 OPT-6.7b LastOrig. 0.7574 0.7796 0.6735 0.5696 0.6814 0.5760 0.7338 0.6022 0.6736 0.6925 0.7444 0.6498 0.7107 0.6449 Repr. 0.7939 0.7836 0.7470 0.5603 0.7057 0.5656 0.7133 0.5984 0.6463 0.6900 0.7894 0.6640 0.7326 0.6437 28Orig. 0.8146 0.7732 0.7207 0.5761 0.6767 0.5907 0.7249 0.5777 0.6894 0.7247 0.7662 0.6618 0.7321 0.6507 Repr. 0.8261 0.8014 0.7221 0.5938 0.6746 0.5931 0.7046 0.5945 0.6860 0.7252 0.7976 0.6639 0.7351 0.6620 24Orig. 0.8722 0.7963 0.7816 0.6712 0.6849 0.6211 0.7394 0.5800 0.7094 0.7758 0.7858 0.6868 0.7622 0.6886 Repr. 0.8619 0.8043 0.7737 0.6604 0.6789 0.6172 0.7415 0.6095 0.7049 0.7844 0.7910 0.6804 0.7586 0.6927 20Orig. 0.8820 0.8125 0.8459 0.7268 0.6950 0.6197 0.7758 0.6058 0.8319 0.8122 0.8053 0.6819 0.8060 0.7098 Repr. 0.8672 0.8118 0.8584 0.7222 0.6761 0.6218 0.7736 0.6208 0.8254 0.8160 0.8065 0.6734 0.8012 0.7110 16Orig. 0.9223 0.7435 0.8938 0.6400 0.6939 0.5645 0.7774 0.5800 0.8658 0.7570 0.8254 0.6237 0.8298 0.6515 Repr. 0.9174 0.7554 0.8847 0.6403 0.7005 0.5732 0.7883 0.5693 0.8672 0.7760 0.8104 0.6340 0.8281 0.6580 Table 1: Replicated SAPLMA performance on the True-False dataset across the selected layers. The results labeled as ‘Orig.’ are taken from the original work, while those labeled as ‘Repr.’ are the replicated results from this study. Model Cities Inventions Elements Animals Companies Facts Average BERTOrig. 0.5357 0.5537 0.5645 0.5228 0.5533 0.5302 0.5434 Repr. 0.5257 0.5611 0.5435 0.5603 0.5302 0.5361 0.5428 3-shotOrig. 0.5410 0.4799 0.5685 0.5650 0.5538 0.5164 0.5374 Repr. 0.5416 0.4800 0.5685 0.5652 0.5539 0.5115 0.5368 5-shotOrig. 0.5416 0.4799 0.5676 0.5643 0.5540 0.5148 0.5370 Repr. 0.5416 0.4800 0.5676 0.5643 0.5540 0.5082 0.5359 It-is-trueOrig. 0.5230 0.5068 0.5688 0.4851 0.6883 0.5840 0.5593 Repr. 0.5233 0.5046 0.5688 0.4831 0.6875 0.5856 0.5588 Table 2: Replicated baselines performance on the True- False dataset. The results labeled as ‘Orig.’ are taken from the original work, while those labeled as ‘Repr.’ are the replicated results from this study. Layer Accuracy AUC Accuracy with Average Threshold Optimal Threshold Last-layerOrig. 0.6187 0.7587 0.7052 0.8687 Repr. 0.6406 0.7720 0.7264 0.8910 28th-layerOrig. 0.6362 0.7614 0.7134 0.8838 Repr. 0.6410 0.7686 0.7203 0.8276 24th-layerOrig. 0.6134 0.7435 0.6988 0.8801 Repr. 0.6206 0.7496 0.6973 0.8500 20th-layerOrig. 0.6029 0.7182 0.6587 0.9063 Repr. 0.5965 0.7183 0.6669 0.8868 Middle-layerOrig. 0.5566 0.6610 0.6500 0.8123 Repr. 0.5579 0.6760 0.6468 0.7948 BERTOrig. 0.5115 0.5989 0.5705 0.9403 Repr. 0.5522 0.6092 0.5689 0.7939 3-shotOrig. 0.5041 0.4845 - - Repr. 0.5041 0.4845 - - 5-shotOrig. 0.5125 0.4822 - - Repr. 0.5125 0.4822 - - Table 3: Reproduced SAPLMA performance on the OPT-Generated Dataset (Section 2.1). The results labeled as ‘Orig.’ are taken from the original work, while those labeled as ‘Repr.’ are the reproduced results. 4Novel Dataset Generation Strategies for Factuality Self-Evaluation This section introduces two novel dataset genera- tion strategies to investigate LLM factuality self- evaluation. To contextualize our approach, we first examine the limitations of the True-False dataset by Azaria and Mitchell (2023). While | https://arxiv.org/abs/2505.16520v2 |
it provides a structured framework for evaluating LLMs, its con- struction imposes constraints that may limit gen- eralizability of the findings. Specifically, true and false statements are derived from tabular data usingpredefined templates. We argue that this approch suffers from several limitations: •Adherence to predefined templates: The use of fixed templates limits the linguistic expres- siveness of the dataset, potentially constraining the probe classifier’s ability to generalize beyond rigidly structured statements (e.g., <company> operates in the industry of <industry> ). •Distribution misalignment: The statements are constructed from tabular data rather than gener- ated by the LLM itself. As a result, the dataset may not align with the LLM’s generative dis- tribution. For instance, a niche true fact in the dataset –but unknown by the model– could have high perplexity for the LLM, undermining the study’s core premise: evaluating an LLM’s intrin- sic ability to "judge" its false claims. •Lack of consideration for the LLM’s knowl- edge state: LLMs exhibit uneven factual knowl- edge based on their pretraining data, with strengths in some domains and gaps in others. A model can assess a statement factuality only if it has prior exposure to it. The dataset does not account for these inconsistencies: it evalu- ates whether an LLM can detect factual errors without considering whether the model actually possesses knowledge of the fact. •Differences in cardinality: Some properties in the dataset have fewer admissible values, mak- ing certain facts easier to evaluate. For exam- ple, a statement like "<element> appears in its standard state as _" has fewer possible val- ues (i.e., {Solid, Liquid, Gas} ) compared to state- ments like "<city> is a city in _" that involve a broader set. This imbalance in complexity may bias the evaluation process. These limitations impact the interpretability of re- sults when evaluating an LLM’s internal represen- tations of factuality. To ensure a more realistic as- Published as a conference paper at ACL 2025 Main sessment of an LLM’s self-evaluation of factuality, we propose two strategies to address these issues. The first strategy samples statements from tabular data to better align with the LLM’s generative pre- dictions. The second strategy involves sampling LLM-generated facts as answers to questions from a well-known Question Answering dataset. 4.1 Perplexity-based Dataset Construction This section presents a novel dataset generation strategy to address limitations in the Azaria and Mitchell (2023) True-False dataset, particularly distribution misalignment and implausible nega- tive samples. To improve negative sampling based on random property-object substitutions, we intro- duce a perplexity-guided probabilistic sampling method, which re-weights false statements based on perplexity for better alignment with LLM out- put distributions. Since perplexity depends on the model, the same LLM under evaluation must gener- ate false statements to ensure consistency, resulting in a model-dependent dataset tailored to the char- acteristics and biases of the LLM being studied. True statements are initially constructed, as in Azaria and Mitchell (2023), by directly inserting correct entity-property pairs into pre-defined sen- tence templates. Regarding the false statements generation, we proceed as follows: (i) for each true statement, all unique alternative property values are gathered | https://arxiv.org/abs/2505.16520v2 |
from the entire dataset; (ii) a candidate sentence is created for each alternative property by inserting it into the corresponding template; (iii) the target LLM (i.e., OPT-6.7b or Llama 2-7b) com- putes the perplexity of each candidate sentence. This perplexity score serves as a plausibility metric, with lower perplexity indicating a more plausible (yet incorrect) statement. Given a true statement, we define Cas the set of potential candidate sentences, which includes the true statement. Furthermore, C′⊂Cis defined as the subset of candidate false sentences (i.e. ex- cluding the true statement). Candidates c∈Care ranked based on their perplexity scores, with lower scores indicating higher plausibility. Since perplexity can be interpreted as a measure of plausibility, we operate under the assumption that an LLM possesses factual knowledge about a fact if the fact is assigned a "sufficiently low perplexity". Conversely, a high perplexity score for the true statement suggests that the LLM lacks knowledge of the fact. Given the limitations dis- cussed earlier, we aim to evaluate the LLM abilityto discern between true and false statements when it possesses the relevant knowledge. Therefore, we exclude instances where the LLM exhibits limited knowledge about the true statement: if the true statement does not rank among the lowest kper- plexity candidates, the generation process for that instance is aborted, and the next true fact is consid- ered. In practice, we define kask=α|C|, where αis a hyperparameter ( 0< α < 1). This accounts for the varying cardinality of the property ranges in the dataset, ensuring that the threshold for "suf- ficiently low perplexity" is adjusted based on the number of possible values for a given property. In addition, we want to simulate a real hal- lucination scenario where the LLM is uncertain between the true fact and plausible alternatives: given the perplexity score function PP(·), all false candidates cwith a perplexity score PP(c)< (1 + β)PP(true), where 0< β < 1is a hyper- parameter, are considered, resulting in a reduced set of candidates C⋆. A min-max normalization is applied to their perplexity, NormPP (c) =PP(c)−minc∈CPP(c) max c∈CPP(c)−minc∈CPP(c), (1) The normalized perplexities are transformed us- ing a plausibility score function s(·), i.e., lower per- plexity scores result in higher plausibility scores. The scores are then normalized to ensure that they sum to 1 and are treated as a probability distribution over the candidates: s(c) =e−NormPP (c), P (ci) =s(ci)P cj∈C⋆s(cj). (2) The normalization guarantees that P(ci)values are suitable for sampling. Finally, a mixture of top-k and nucleus sampling (Holtzman et al., 2020) is employed to sample the candidate for insertion into the template. This process generates a coherent yet factually incorrect statement that is more realistic and closely aligned with the LLM internal token prediction patterns. 4.2 LLM-Generated Dataset Construction The strategy in Section 4.1 constructs a balanced true-false dataset from tabular data but has inher- ent limitations. While enabling direct compari- son with Azaria and Mitchell (2023), it restricts generative models to rigid templates, limiting ex- pressiveness. Additionally, its reliance on fixed candidate sets can lead to easily classifiable false Published as a conference paper at | https://arxiv.org/abs/2505.16520v2 |
ACL 2025 Main statements when the set is small. Finally, template- based sampling reduces diversity, likely due to to- ken bias: we found LMs consistently assigning lower perplexity to certain property-object pairs, regardless of the subject. To more realistically as- sess an LLM factuality self-evaluation, we pro- pose generating both true and false statements di- rectly from the model, overcoming the limitations of template-based approaches. This involves (a) a method to elicit diverse factoid statements from the LLM, and (b) a strategy to annotate statement ve- racity, addressing biases and inconsistencies in the process. Consider a Question Answering dataset composed of Nquestions, DQA={(qi, ai)}N i=1, where each question qihas a corresponding ground- truth answer ai. Given a LLM M, we prompt it with each question Ktimes, yielding a set of generated answers. This results in an extended dataset DM QA={(qi, ai,{aM i,k}K k=1)}N i=1. The LLM-generated answers in DM QAcan be annotated using an oracle LLM, which we assume is able to evaluate the veracity of each answer aM i,kgiven the ground-truth answer aiand the question qi. This operation results in an annotated dataset ˆDM QA={(qi, ai,{aM i,k, vM i,k}K k=1)}N i=1,(3) where vi,kis a veracity label assigned by the oracle, indicating whether the generated answer ˆai,kis correct ( vi,k= 1) or incorrect ( vi,k= 0). Increasing K enhances the reliability of re- sponses by offering a more accurate evaluation of the LLM knowledge state regarding a question. This evaluation can lead to three possible outcomes: (i) a high proportion of correct answers suggests the LLM fully understands the facts; (ii) an high proportion of incorrect answers indicates the LLM lacks or has partial knowledge, preventing correct responses; (iii) a mix of correct and incorrect an- swers implies knowledge with a tendency toward hallucination. This study assumes that an LLM can only encode factuality regarding a generated fact if it has some knowledge about it. Therefore, we focus exclusively on the third scenario. This scenario also naturally provides balance, including true and false variations of the same fact. We define the correct answer ratio pM i=1 KPK k=1vM i,k. We consider questions whose piis around 0.5 with a tolerance hyperparameter τ, that is |pM i−0.5|< τ. The dataset is obtained by selecting the answers and their veracity labels satisfying the condition: DM Facts ={(aM i,k, vM i,k) :|pM i−0.5|< τ}N i=1.(4)5 Experiments and Discussion This section describes the experimental setup used to extend the prior investigation, presents the re- sults, and discusses their implications. We address the following Research Questions: RQ1 Are probing classifiers trained as in Azaria and Mitchell (2023) capable of generalizing to True-False sentences with similar perplexity? RQ2 Can the same probes generalize to facts gen- erated by LLMs? 5.1 Dataset Generation Setup To generate the datasets for training the probe and alleviate the limitations of Azaria and Mitchell (2023), we employ the perplexity-based sampling procedure described in Section 4.1. Specifically, we set the needed hyperparameters as follows: α= 0.1,β= 0.1,k= 10 andp= 0.9We do not sample a dataset for the Scientific Facts topic, as the original version was | https://arxiv.org/abs/2505.16520v2 |
generated by ChatGPT and not from tabular data (additional details in Ap- pendix B). Table 4 summarizes the statistics for the two refined True-False datasets, while Figure 1 shows the differences in average perplexity be- tween the original and the proposed datasets. DatasetLlama 2-7b OPT-6.7b Sentences (%) True Sentences (%) True Cities 674 50 756 50 Inventions 336 50 202 50 Elements 118 50 220 50 Animals 116 50 114 50 Companies 326 50 310 50 Table 4: Novel generation of our True-False dataset, fol- lowing the approach described in Section 4.1, including number of sentences and percentage of true samples. CitiesInventions Elements Animals Companies60100Llama 2-7b CitiesInventions Elements Animals Companies90 180OPT-6.7b Original PP(T) Ours PP(T) Original PP(F) Ours PP(F) Figure 1: Comparison of average perplexity scores for Llama 2-7b and OPT-6.7b for the Original dataset by Azaria and Mitchell (2023) and our refined version. Lower perplexity indicates that the sentences are more likely to have been generated by the model. PP(T/F) denotes average perplexity of true/false sentences. Published as a conference paper at ACL 2025 Main We construct an LLM-generated dataset follow- ing the procedure described in Section 4.2. Our experiments use TriviaQA (Joshi et al., 2017), a dataset of question-answer pairs collected from 14 trivia and quiz-league websites. Given the limited computational resources, we limit our focus to the validation split of TriviaQA Wikipedia questions. For each LLM in our analysis, we set K= 10 andτ= 0.1, and we refuse the answers composed of less than 5 tokens to improve self-consistency. For annotation, we use GPT-4o mini with a 3-shot learning strategy, where three examples are pro- vided. More details are included in Appendix C.1, and Table 8 reports the dataset statistics. 5.2 Impact of Perplexity-Based Sampling on SAPLMA Accuracy We assess the impact of our perplexity-based sam- pling strategy by training the SAPLMA probe clas- sifier separately on the original True-False dataset from Azaria and Mitchell (2023) and our refined version. As shown in Table 5, classifiers tested on our refined dataset achieve the highest accuracy in deeper layers. For Llama 2-7b, Layer 16 shows the most consistent results, achieving the highest aver- age accuracy across all tested configurations when trained on our dataset. Although accuracy fluc- tuates across different topics, overall performance remains largely consistent between the training con- figurations. Comparing the results with the orig- inal results in Table 1, it emerges that classifiers trained on the original dataset achieve higher accu- racy when tested on the same dataset. This is par- ticularly evident in the ‘Animals’ topic. However, it is important to consider that the perplexity values in our refined dataset are significantly lower than those in the original dataset for both true and false statements (see 1). Additionally, our refined dataset features lower and more closely aligned perplexity scores between true and false statements, with false statements often yielding even lower perplexity val- ues than true ones. This finding implies the validity of Azaria and Mitchell hypothesis, which states that factuality information is encoded in LLM hid- den states , although it might not be immediately | https://arxiv.org/abs/2505.16520v2 |
evident from the model predicted probabilities. RQ1 :Are probing classifiers trained as in Azaria and Mitchell (2023) capable of generalizing to True-False sentences with similar perplexity? Surprisingly, the results between the probes trained on our refined dataset and the probes trainedon the original dataset are mostly comparable. This proves that the probes can generalize even when the train-test datasets have different perplexity . 5.3 Generalization of SAPLMA on LLM-generated Sentences We extend our experiments with the SAPLMA clas- sifier to a new setting, where both training and eval- uation are conducted on sentences generated by an LLM and sourced from TriviaQA, leveraging the procedure described in Section 4.2. Table 6 reports the performance of SAPLMA on a set of factual statements extracted from TriviaQA. The results indicate that the currently used probes are inade- quate for factuality self-assessment in real-world scenarios, as the observed accuracy does not reach a noteworthy threshold. Furthermore, following the suggestion of Azaria and Mitchell, we optimize the classification threshold; however, this adjustment yields no significant improvement in performance. For completeness, the interested reader may find the result obtained on the same dataset when train- ing on the original True-False dataset of Azaria and Mitchell in Appendix D. RQ2 :Can the same probes generalize to facts generated by LLMs? In summary, this experiment partially contra- dicts the findings of Azaria and Mitchell (2023): the trained probes are not capable of provid- ing good generalization to an LLM-generated dataset , even when the accuracy threshold is tuned. The motivation could stem from the nature of the dataset, as TriviaQA contains open-domain ques- tions that result in more nuanced facts than the ones in the original True-False dataset (Section 2) or the OPT-generated dataset (Section 3.2). We believe that further research is needed to en- hance the effectiveness of factuality self-assessment techniques, particularly in settings involving LLM- generated content. Promising research directions may be leveraging datasets that are closely aligned with the distribution of LLM-generated text and ex- ploring alternative techniques such as uncertainty- aware classification. 6 Related Work Probing techniques have become central for the layer-wise interpretation of deep learning mod- els (Alain, 2016). This approach was then ex- tended to Large Language Models (LLMs) to as- sess LLMs ability to encode syntax and seman- Published as a conference paper at ACL 2025 Main LayerTraining DataCities Inventions Elements Animals Companies Average Llama 2-7b OPT-6.7b Llama 2-7b OPT-6.7b Llama 2-7b OPT-6.7b Llama 2-7b OPT-6.7b Llama 2-7b OPT-6.7b Llama 2-7b OPT-6.7b lastOrig. 0.6882 0.5724 0.6409 0.5094 0.6314 0.5482 0.5685 0.5259 0.6290 0.6984 0.6316 0.5709 Novel 0.6365 0.6143 0.6101 0.5144 0.5623 0.5293 0.5461 0.4846 0.7414 0.7211 0.6311 0.5720 28Orig. 0.7056 0.5870 0.7001 0.5178 0.6161 0.5832 0.6013 0.5566 0.7079 0.7234 0.6662 0.5936 Novel 0.5091 0.6057 0.6591 0.5473 0.5665 0.5655 0.6052 0.4662 0.7061 0.7065 0.6092 0.5811 24Orig. 0.8286 0.7026 0.7250 0.6022 0.6432 0.5918 0.6310 0.5439 0.7121 0.7366 0.7080 0.6354 Novel 0.6025 0.6710 0.6609 0.6248 0.5763 0.5993 0.5836 0.4868 0.7868 0.7366 0.6420 0.6225 20Orig. 0.8272 0.7313 0.7741 0.6230 0.6492 0.6255 0.5832 0.5075 0.7583 0.7502 0.7184 0.6475 Novel 0.7382 0.7528 0.6973 0.6131 0.6051 0.5986 0.6190 0.4807 | https://arxiv.org/abs/2505.16520v2 |
0.8270 0.7566 0.6973 0.6404 16Orig. 0.8941 0.6433 0.7888 0.5698 0.6801 0.5930 0.5836 0.3816 0.7768 0.7426 0.7447 0.5860 Novel 0.9301 0.7505 0.7961 0.5644 0.6623 0.5568 0.6319 0.5307 0.8265 0.7231 0.7694 0.6151 Table 5: Accuracy values obtained training SAPLMA on the original True-False dataset and on our refined version, then tested on our refined version . Results are shown for the Llama 2-7b and OPT-6.7b models. ‘Orig.’ denotes the ‘original True-False dataset as training data, while ‘Novel’ denotes our version of the True-False dataset as training data. In bold we denote the best combination of layer/training dataset for each combination of model/topic. DatasetThreshold = 0.5 Optimal Threshold last 28 24 20 16 last 28 24 20 16 billturnbullLlama .561 .576 .618 .621 .628 .547 .581 .618 .634 .639 OPT .547 .558 .591 .551 .537 .530 .558 .547 .530 .499 derby*Llama .568 .581 .575 .596 .617 .560 .565 .559 .571 .597 OPT .553 .564 .584 .587 .572 .562 .566 .583 .586 .580 quiz4freeLlama .564 .547 .523 .561 .589 .548 .523 .521 .547 .573 OPT .559 .559 .575 .581 .560 .544 .530 .533 .546 .541 quizguyLlama .578 .585 .588 .607 .635 .576 .587 .601 .595 .637 OPT .579 .583 .589 .590 .584 .559 .555 .565 .559 .548 triviabugLlama .494 .518 .521 .525 .538 .508 .500 .530 .542 .545 OPT .620 .624 .607 .596 .528 .632 .635 .605 .598 .553 businessballsLlama .566 .558 .565 .574 .582 .558 .551 .555 .564 .575 OPT .559 .558 .578 .570 .553 .545 .547 .573 .562 .551 jetpunkLlama .587 .627 .620 .643 .654 .543 .580 .550 .554 .601 OPT .606 .612 .614 .596 .621 .618 .618 .621 .605 .619 odquizLlama .551 .536 .546 .562 .573 .537 .525 .532 .550 .560 OPT .560 .573 .583 .583 .542 .551 .566 .571 .569 .521 quiz-zoneLlama .556 .557 .558 .565 .611 .541 .549 .555 .569 .615 OPT .569 .570 .582 .592 .552 .537 .550 .535 .580 .508 quizballsLlama .603 .575 .572 .578 .571 .602 .561 .561 .570 .553 OPT .558 .565 .574 .571 .540 .550 .549 .564 .564 .539 quizwiseLlama .560 .565 .579 .609 .618 .563 .551 .574 .605 .619 OPT .560 .563 .565 .577 .540 .555 .556 .552 .575 .537 sfquizLlama .568 .554 .554 .559 .575 .564 .554 .559 .566 .583 OPT .584 .591 .589 .590 .547 .570 .582 .577 .573 .536 triviacountryLlama .536 .554 .559 .556 .566 .506 .530 .536 .524 .549 OPT .536 .551 .550 .587 .534 .449 .476 .472 .511 .472 wrexham**Llama .570 .563 .569 .553 .565 .567 .548 .561 .545 .566 OPT .548 .573 .578 .584 .554 .535 .552 .574 .579 .571 AverageLlama .562 .564 .568 .579 .594 .552 .550 .558 .567 .587 OPT .567 .575 .583 .583 .555 .553 .560 .562 .567 .587 *: derby is adopted as abbreviation of derbyshirepubquizleague **: wrexham is adopted as abbreviation of wrexhamquizleague Table 6: Performance of SAPLMA on a fact dataset generated from TriviaQA. The original topic-wise leave- one-out strategy is adopted. Results are shown for the Llama 2-7b and OPT-6.7b models. tics (Conneau et al., 2018; Tenney et al., 2019). Specifically, among the different applications, a technique that emerges is self-evaluation , | https://arxiv.org/abs/2505.16520v2 |
defined asa model’s ability to assess the accuracy of its own outputs (Kadavath et al., 2022). Among the various works, notable research by Kadavath et al. (2022) explored estimating a well-calibrated P(True) directly from output probabilities to reflect answer accuracy. Orgad et al. (2024) explore er- ror detection and hallucination mitigation in Ques-tion Answering by probing the internal representa- tions of question-answer pairs. Their study shows that while LLMs encode factuality signals, these signals do not generalize well across task-specific datasets, suggesting that factuality encoding is task- dependent rather than universal. While, Chen et al. (2024) introduce the INSIDE framework, which detects hallucinations using the EigenScore metric to measure self-consistency across multiple LLM outputs for a single input. Similarly, Zhang et al. (2024) leverage a probe model, PINOSE, trained via offline consistency-checking, to perform on- line hallucination detection. Although INSIDE and PINOSE primarily focus on self-consistency in in- ternal representations across multiple generated responses, they do not interpret hidden states or look for a veracity encoding. 7 Conclusion and Future Work In this paper, we investigate the factuality-encoding capabilities of LLMs. Our work replicates the methodology of Azaria and Mitchell (2023) to ensure reproducibility and extend their approach with two novel dataset construction strategies: perplexity-based negative sampling and fact gen- eration based on QA datasets. We applied these strategies to analyze two open-source LLMs, and found that although the findings from previous stud- ies are partially validated even on more challeng- ing synthetic datasets, transferring these findings to LLM-generated datasets proves difficult. This study paves the way for more reliable LLM eval- uations and offers practical guidelines for improv- ing model transparency and trustworthiness in real- world applications. Published as a conference paper at ACL 2025 Main Limitations There are a few aspects of our study that could be explored further. We made every effort to check the datasets for inconsistencies, but a more thorough manual verification by human annotators would be beneficial for ensuring their robustness and mini- mizing potential biases. Additionally, our analysis is based on a limited set of models. While these models provide valuable insights, it is possible that larger or more complex models could demonstrate enhanced performance, particularly in the context of self-evaluation. Future work could expand on this by incorporating a wider range of models to in- vestigate whether scalability can lead to improved results. Ethical Statement A major concern when working with LLMs is their tendency to generate factually inaccurate informa- tion. When training probe classifiers to assess fac- tual accuracy, biases and beliefs from the LLM may transfer to the probe, potentially reinforcing cultural, demographic, or ideological biases in fac- tuality self-evaluation. However, with careful de- sign, probing techniques can be adapted not only to minimize bias but also to actively mitigate its consequences. References Guillaume Alain. 2016. Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644 . Vito Walter Anelli, Giovanni Maria Biancofiore, Alessandro De Bellis, Tommaso Di Noia, and Eu- genio Di Sciascio. 2022. Interpretability of BERT latent space through knowledge graphs. In Proceed- ings of the 31st ACM International Conference | https://arxiv.org/abs/2505.16520v2 |
on Information & Knowledge Management, Atlanta, GA, USA, October 17-21, 2022 , pages 3806–3810. ACM. Amos Azaria and Tom M. Mitchell. 2023. The internal state of an LLM knows when it’s lying. In EMNLP (Findings) , pages 967–976. Association for Compu- tational Linguistics. Giovanni Maria Biancofiore, Dario Di Palma, Claudio Pomo, Fedelucio Narducci, and Tommaso Di Noia. 2025. Conversational user interfaces and agents. In Human-Centered AI: An Illustrated Scientific Quest , pages 399–438. Springer. Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye. 2024. Inside: Llms’ internal states retain the power of hallu- cination detection. arXiv preprint arXiv:2402.03744 .Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. arXiv preprint arXiv:1805.01070 . Matthew Dahl, Varun Magesh, Mirac Suzgun, and Daniel E. Ho. 2024. Large legal fictions: Profiling legal hallucinations in large language models. CoRR , abs/2401.01301. Alessandro De Bellis, Vito Walter Anelli, Tommaso Di Noia, and Eugenio Di Sciascio. 2024. PRONTO: prompt-based detection of semantic containment pat- terns in mlms. In ISWC (2) , volume 15232 of Lecture Notes in Computer Science , pages 227–246. Springer. Dario Di Palma. 2023. Retrieval-augmented recom- mender system: Enhancing recommender systems with large language models. In RecSys , pages 1369– 1373. ACM. Dario Di Palma, Felice Antonio Merra, Maurizio Sfilio, Vito Walter Anelli, Fedelucio Narducci, and Tom- maso Di Noia. 2025. Do llms memorize recommen- dation datasets? a preliminary study on movielens- 1m. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2025, Padua, Italy July 13-18, 2025 . ACM. Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, and Yulia Tsvetkov. 2024. Don‘t hallucinate, abstain: Identifying LLM knowl- edge gaps via multi-LLM collaboration. In Proceed- ings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers) , pages 14664–14690, Bangkok, Thailand. As- sociation for Computational Linguistics. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020 . OpenReview.net. Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2024. A survey on hallucination in large lan- guage models: Principles, taxonomy, challenges, and open questions. ACM Trans. Inf. Syst. Ziwei Ji, Tiezheng Yu, Yan Xu, Nayeon Lee, Etsuko Ishii, and Pascale Fung. 2023. Towards mitigating LLM hallucination via self reflection. In Findings of the Association for Computational Linguistics: EMNLP 2023 , pages 1827–1843, Singapore. Associ- ation for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In ACL (1) , pages 1601–1611. Association for Computational Linguistics. Published as a conference paper at ACL 2025 Main Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, | https://arxiv.org/abs/2505.16520v2 |
Eli Tran-Johnson, Scott Johnston, Sheer El Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jack- son Kernion, Shauna Kravec, Liane Lovitt, Ka- mal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. 2022. Language models (mostly) know what they know. CoRR , abs/2207.05221. Hadas Orgad, Michael Toker, Zorik Gekhman, Roi Re- ichart, Idan Szpektor, Hadas Kotek, and Yonatan Be- linkov. 2024. Llms know more than they show: On the intrinsic representation of LLM hallucinations. CoRR , abs/2410.02707. Duy Khoa Pham and Bao Quoc V o. 2024. Towards reliable medical question answering: Techniques and challenges in mitigating hallucinations in language models. CoRR , abs/2408.13808. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipanjan Das, et al. 2019. What do you learn from context? probing for sentence structure in contextualized word representa- tions. arXiv preprint arXiv:1905.06316 . Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, and Yasmine Babaei et al. 2023. Llama 2: Open foundation and fine-tuned chat models. CoRR , abs/2307.09288. Shriyash Upadhyay, Etan Ginsberg, and Chris Callison- Burch. 2023. Improving mathematics tutoring with A code scratchpad. In BEA@ACL , pages 20–28. As- sociation for Computational Linguistics. David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP) , pages 7534–7550, Online. As- sociation for Computational Linguistics. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher De- wan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068 . Xiaokang Zhang, Zijun Yao, Jing Zhang, Kaifeng Yun, Jifan Yu, Juanzi Li, and Jie Tang. 2024. Transferable and efficient non-factual content detection via probe training with offline consistency checking. arXiv preprint arXiv:2404.06742 . Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023. Siren’s song in the ai ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219 .A Dataset Statistics A.1 True-False Dataset (Azaria and Mitchell, 2023) Statistics Table 7 provides a summary of the number of sen- tences and the distribution of true and false state- ments across each topic in the original True-False dataset (Azaria and Mitchell, 2023) on which we base our replication experiments. Dataset Sentences Source (%) True Cities 1458 SimpleMaps dataset 50 Inventions 876 Wikipedia’s list of inventors 53 Chemical Elements 930 PubChem’s periodic table 50 Animals 1008 National Geographic Kids 50 Companies 1200 Forbes Global 2000 List 2022: The Top 200 50 Scientific Facts 612 ChatGPT and human annotators 50 Table 7: True-False dataset categories, original sources, and label splits from (Azaria and Mitchell, 2023). A.2 LLM-Generated Trivia Facts Dataset Table 8 illustrates the number of annotates facts extracted | https://arxiv.org/abs/2505.16520v2 |
from TriviaQA (Joshi et al., 2017) em- ploying the procedure described in section 4.2. Statistics are presented for each of the 14 question sources. Dataset Llama 2-7b OPT-6.7b Sentences (%) True Sentences (%) True triviacountry 118 49.15% 118 49.15% wwwodquiz 700 49.71% 525 49.71% triviabug 97 50.52% 88 51.14% derby* 342 50.00% 325 50.00% quiz-zone 187 48.13% 239 50.63% businessballs 433 50.84% 443 49.43% wrexham** 216 50.00% 276 50.72% sfquiz 1054 49.80% 1085 50.39% quizwise 565 49.91% 704 50.42% billturnbull 216 49.54% 105 50.48% jetpunk 139 51.80% 364 50.27% quizballs 500 49.60% 488 50.82% quizguy 240 51.67% 341 50.74% quiz4free 171 49.12% 141 48.94% Table 8: Summary of the dataset obtained by extract- ing factoid sentences from TriviaQA (Joshi et al., 2017) Wikipedia validation split, following the procedure de- scribed in Section 4.2. B Perplexity-Based Refinement of the True-False Dataset (Azaria and Mitchell, 2023) We base our dataset generation strategy described in Section 4 on the same tabular data employed by Azaria and Mitchell, which was made available upon request. The properties used for sampling are analogous to the ones employed by the original Published as a conference paper at ACL 2025 Main authors in their dataset generation. However, for theCities topic, we restrict the analysis to facts like "<city> is city in <country> ." Differently from the original authors, we avoid generating facts such as "<city> is the name of a city/country " having only two possible values, potentially resulting in too many easy-to-classify samples. C LLM-Generated Dataset Extraction Details C.1 Factoid Answer Generation Below is an example prompt used in the text gener- ation pipeline for TriviaQA (Joshi et al., 2017). The 10 examples are sampled from the train Wikipedia split. The answers for the examples are manually crafted by looking at the available ground truth. Question : Where in England was Dame Judi Dench born ? Answer : The English actress Dame Judi Dench was born in York , England . Question : From which country did Angola achieve independence in 1975? Answer : Angola achieved independence from Portugal in 1975. Question : Which city does David Soul come from ? Answer : David Soul hails from Chicago , Illinois . Question : Who won Super Bowl XX? Answer : The Chicago Bears won Super Bowl XX. Question : Which was the first European country to abolish capital punishment ? Answer : Norway was the first European country to abolish capital punishment . Question : In which country did the widespread use of ISDN begin in 1988? Answer : The widespread use of ISDN began in Japan in 1988. Question : What is Bruce Willis 'real first name ? Answer : Bruce Willis 'real first name is Walter . Question : Which William wrote the novel Lord of the Flies ? Answer : The William who wrote Lord of the Flies was William Golding . Question : How is Joan Molinsky better known ? Answer : Joan Molinsky is better known as Joan Rivers .Question : In which branch of the arts is Patricia Neary famous ? Answer : | https://arxiv.org/abs/2505.16520v2 |
Patricia Neary is famous in the field of ballet . To generate responses, the model is provided with a continuation prompt such as: Question : Ford Prefect came from a star in which constellation ? Answer : The model then generates a response, adhering to the pattern set by the previous examples. Re- sponses are filtered based on length and uniqueness to ensure meaningful and high-quality outputs. The generation process uses the following hyper- parameters for all the employed models (i.e. Llama 2-7b and OPT-6.7b): •max_new_tokens : 128 •top_k : 50 •top_p : 0.95 •temperature : 1.0 •num_return_sequences : 10 C.2 Factoid Answer Annotation Below is an example prompt to GPT-4o mini (2024-08-01), the LLM that we use as the oracle for the procedure described in Section 4.2. In this setup, we employ a 3-shot learning strategy, where the model is provided with three examples to estab- lish a clear pattern of judgment. These examples demonstrate the task of evaluating answers against a given ground truth. The model is then tasked with using this context to evaluate a new answer to a question, based on the same criteria. In our experiments, we employ a temperature setting of 1. SYSTEM : You are a judge and your role is to judge whether the provided answer is correct for the given question , based on the provided ground truth . Answer with a 1 if the answer is correct and 0 if the answer is incorrect . Here are a few examples : USER : Question : Who was the next British Prime Minister after Arthur Balfour ? Answer : Arthur Balfour was followed by David Lloyd George . Published as a conference paper at ACL 2025 Main Ground truth : "{ 'aliases ': ['Sir Henry Campbell - Bannerman ','Campbell - Bannerman ','Campbell Bannerman ',' Sir Henry Campbell Bannerman ',' Henry Campbell Bannerman ','Henry Campbell - Bannerman '],' normalized_aliases ': ['henry campbell bannerman ','sir henry campbell bannerman ','campbell bannerman '],' matched_wiki_entity_name ':'',' normalized_matched_wiki_entity_name ':'','normalized_value ':'campbell bannerman ','type ':' WikipediaEntity ','value ':'Campbell - Bannerman '}" Evaluation : ASSISTANT : 0 USER : Question : Who had a 70s No 1 hit with Kiss You All Over ? Answer : The band Exile had a 70s No 1 hit with Kiss You All Over . Ground truth : "{ 'aliases ': ['Internal exile ','Exiles ','Transported for life ','Exile ( politics and government ) ','Voluntary exile ',' Sent into exile ','Exile and Banishment ','Self -exile ','Forced exile ','Exile ','Exile in Greek tragedy ','Banish ','Banishment '],' normalized_aliases ': ['exiles ',' voluntary exile ','forced exile ',' banish ','self exile ','exile politics and government ','exile in greek tragedy ','sent into exile ',' banishment ','transported for life ', 'exile ','internal exile ','exile and banishment '],' matched_wiki_entity_name ':'',' normalized_matched_wiki_entity_name ':'','normalized_value ':'exile ', 'type ':'WikipediaEntity ','value ': 'Exile '}" Evaluation : ASSISTANT : 1 USER : Question : Which common mineral is used to make casts , moulds , blackboard chalk and plaster of Paris ? Answer : The common mineral used to make casts , moulds , blackboard chalk and plaster of | https://arxiv.org/abs/2505.16520v2 |
Paris is calcium carbonate . Ground truth : "{ 'aliases ': ['CaSO4 .2 H2O ','Gypsum ','Calcium sulfate dihydrate ','CaSO4 *2 H2O ','Gipsum '], 'normalized_aliases ': ['calcium sulfate dihydrate ','caso4 2h2o ',' gipsum ','caso4 .2 h2o ','gypsum '],'matched_wiki_entity_name ':'',' normalized_matched_wiki_entity_name ':'','normalized_value ':'gypsum ', 'type ':'WikipediaEntity ','value ': 'Gypsum '}" Evaluation : ASSISTANT : 0 D Additional Experiments on the LLM-Generated dataset DatasetThreshold = 0.5 Optimal Threshold last 28 24 20 16 last 28 24 20 16 billturnbullLlama .579 .560 .560 .593 .648 .605 .553 .632 .632 .691 OPT .543 .533 .533 .486 .476 .527 .500 .486 .500 .500 derby*Llama .556 .576 .550 .544 .602 .542 .529 .529 .554 .575 OPT .526 .548 .554 .563 .535 .535 .583 .570 .579 .561 quiz4freeLlama .602 .573 .544 .538 .608 .575 .600 .525 .550 .642 OPT .525 .489 .511 .525 .504 .545 .535 .556 .576 .495 quizguyLlama .608 .571 .546 .567 .571 .601 .595 .583 .577 .565 OPT .557 .557 .587 .569 .557 .586 .594 .552 .548 .552 triviabugLlama .412 .619 .526 .598 .577 .485 .471 .471 .544 .544 OPT .557 .602 .671 .614 .500 .629 .597 .629 .597 .532 businessballsLlama .580 .577 .559 .575 .589 .586 .605 .546 .563 .592 OPT .564 .578 .555 .587 .521 .585 .579 .585 .547 .537 jetpunkLlama .612 .590 .583 .619 .640 .582 .561 .520 .592 .673 OPT .569 .571 .593 .604 .566 .620 .631 .631 .631 .635 odquizLlama .546 .536 .537 .559 .564 .527 .522 .547 .563 .571 OPT .511 .543 .591 .579 .552 .565 .535 .592 .571 .565 quiz-zoneLlama .578 .519 .588 .562 .578 .611 .534 .534 .534 .557 OPT .544 .577 .603 .611 .586 .542 .518 .554 .601 .613 quizballsLlama .610 .568 .592 .582 .582 .617 .563 .583 .557 .586 OPT .535 .578 .559 .594 .549 .532 .564 .512 .599 .576 quizwiseLlama .572 .588 .570 .572 .586 .581 .598 .611 .616 .616 OPT .540 .550 .574 .568 .551 .580 .550 .554 .582 .576 sfquizLlama .530 .533 .528 .538 .560 .545 .581 .584 .570 .575 OPT .531 .546 .568 .590 .545 .595 .597 .599 .611 .553 triviacountryLlama .602 .636 .602 .602 .619 .639 .602 .639 .590 .578 OPT .525 .542 .585 .551 .525 .494 .602 .506 .578 .446 wrexham**Llama .532 .519 .528 .574 .583 .474 .526 .533 .566 .579 OPT .500 .533 .594 .583 .525 .572 .552 .562 .608 .567 AverageLlama .566 .569 .558 .573 .593 .569 .560 .560 .572 .596 OPT .538 .553 .577 .573 .535 .565 .567 .563 .581 .551 *: derby is adopted as abbreviation of derbyshirepubquizleague **: wrexham is adopted as abbreviation of wrexhamquizleague Table 9: Accuracy values obtained training SAPLMA on the original True-False dataset and testing on our facts dataset generated from TriviaQA. The original topic-wise leave-one-out strategy is adopted. Results are shown for the Llama 2-7b and OPT-6.7b models. Table 9 reports the performances of the SAPLMA classifier trained on the original True- False dataset by Azaria and Mitchell (2023) and tested on the dataset generated from TriviaQA. Sim- ilarly to Table 6, SAPLMA does not generalize well over LLM-generated facts. Moreover, tuning an optimal threshold did not provide solid enhance- ments. | https://arxiv.org/abs/2505.16520v2 |
arXiv:2505.16522v2 [cs.CL] 27 May 2025Benchmarking and Pushing the Multi-Bias Elimination Boundary of LLMs via Causal Effect Estimation-guided Debiasing Zhouhao Sun1, Zhiyuan Kan1, Xiao Ding1*, Li Du2, Yang Zhao1, Bing Qin1, Ting Liu1 1Research Center for Social Computing and Interactive Robotics Harbin Institute of Technology, China 2Beijing Academy of Artificial Intelligence, Beijing, China {zhsun, zykan, xding, yzhao, bqin, tliu}@ir.hit.edu.cn duli@baai.ac.cn Abstract Despite significant progress, recent studies have indicated that current large language mod- els (LLMs) may still utilize bias during infer- ence, leading to the poor generalizability of LLMs. Some benchmarks are proposed to in- vestigate the generalizability of LLMs, with each piece of data typically containing one type of controlled bias. However, a single piece of data may contain multiple types of biases in practical applications. To bridge this gap, we propose a multi-bias benchmark where each piece of data contains five types of biases. The evaluations conducted on this benchmark re- veal that the performance of existing LLMs and debiasing methods is unsatisfying, highlighting the challenge of eliminating multiple types of biases simultaneously. To overcome this chal- lenge, we propose a causal effect estimation- guided multi-bias elimination method (CMBE). This method first estimates the causal effect of multiple types of biases simultaneously. Subse- quently, we eliminate the causal effect of biases from the total causal effect exerted by both the semantic information and biases during infer- ence. Experimental results show that CMBE can effectively eliminate multiple types of bias simultaneously to enhance the generalizability of LLMs. 1 Introduction Large language models (LLMs) have demonstrated remarkable performance across diverse domains (Achiam et al., 2023). However, previous works have shown that LLMs would also learn bias dur- ing the training process (Schick et al., 2021; Nav- igli et al., 2023; Klimashevskaia et al., 2024), lead- ing to poor generalizability of LLMs (Du et al., 2023; Cheng and Amiri, 2024; Yang et al., 2025). Thus, it is crucial to assess LLMs’ generalizability with respect to biases and explore effective solu- tions for enhancement. *Corresponding Author Premise: Oliver is a doctor who always remember to water all the potted plants on the balcony every evening using a copper watering can , and adjust quantities based on weather forecasts . Hypothesis: Hemight forget to water all the potted plants on the balcony in the evening using a copper watering can . Answer: neutral Semantic similarity score between the premise and the hypothesis: 0.95the hypothesis is shorter than the premisegender and statistically related occupation high semantic similarity high lexical overlap speculative word existsFigure 1: This figure presents an example that contains five different types of bias. Recently, a series of benchmarks (Manerba et al., 2024; Bang et al., 2024) have been proposed to investigate the generalizability of LLMs with re- spect to biases. In most of these benchmarks, each piece of data typically contains only one type of controlled bias, with a small number of them con- taining two types. However, a single piece of data may simultaneously contain multiple types of bi- ases in practical applications. As shown in Figure 1, this example contains five different | https://arxiv.org/abs/2505.16522v2 |
types of biases that have been identified on the natural language inference (NLI) task (Gururangan et al., 2018; Dev et al., 2020; Sun et al., 2024b), and these types of biases have the same polarity ‘entailment’ (i.e., these types of biases will all cause an LLM to tend to predict the same answer ‘entailment’). How- ever, the gender of Oliver is not mentioned in the premise. Therefore, the relationship between the premise and the hypothesis is neutral. This type of data poses a greater challenge to LLMs because each type of bias can cause LLMs to tend to make certain predictions. If a single piece of data con- tains multiple types of biases, and these biases have the same polarity, it becomes extremely difficult for LLMs to utilize useful semantic information during inference. It also poses new challenges to 1 the existing debiasing methods. This is because even if there is just one type of bias that has not been eliminated, it is likely to significantly affect the generalizability of LLMs. As a result, it is cru- cial to assess LLMs’ generalizability by utilizing data that contains multiple types of biases. To address the limitations of existing evaluation, we propose a multi-bias benchmark, in which each piece of data contains five types of biases, and these biases have the same polarity. To construct such a benchmark, it is necessary to take into account multiple types of biases simultaneously and ensure that their polarities are the same. Therefore, we choose the NLI task, on which previous researchers have conducted extensive studies regarding bias (Rajaee et al., 2022; Mckenna et al., 2023; Sun et al., 2024b), as the task format of this multi-bias benchmark. Subsequently, we explore and verify the polarity of the biases that have been discov- ered on the widely used LLM ChatGPT-4o-latest (Achiam et al., 2023) and select five biases with the same polarity. By manually controlling these biases within each piece of data, we are able to construct the multi-bias benchmark where each piece of data contains five types of biases. Then, to quantify the generalizability of LLMs, we thoroughly evalu- ate the mainstream LLMs and debiasing methods. Evaluation results demonstrate that even the most powerful LLMs and debiasing techniques are in- sufficient in handling multi-bias scenarios. Further- more, our analysis indicates that relying on scaling up parameters to improve the generalizability on multi-bias scenarios is not advisable considering the consumption of massive resources. To deal with the challenges of eliminating multiple types of biases simultaneously, we pro- posed a Causal effect estimation-guided Multi-Bias Elimination (CMBE) method. During the infer- ence process of LLMs, both biases and semantic information are utilized to generate the answers (Zheng et al., 2023; Yang et al., 2024b), i.e., they exert causal effect on the generation of the answers. Therefore, the output derived from LLMs on the original data can be regarded as the Total Causal Effect (Pearl, 2009) produced by both the biases and semantic information. Therefore, if we can eliminate the causal effect of biases from the total causal effect, we can enable | https://arxiv.org/abs/2505.16522v2 |
the LLM to be free from the influence of biases during inference. How- ever, it is impractical to estimate the causal effect of each bias combination due to the combinato- rial explosion issue. Therefore, we assume thatChatGPT-4o-latest entailment neutral contradiction hypothesis shorter 43.7 21.7 34.4 hypothesis longer 31.4 40.1 28.5 label distribution 33.3 33.3 33.3 Table 1: The distribution of predicted labels by ChatGPT-4o-latest and in the datasets in which each piece of data contains bias feature ‘hypothesis longer’ or ‘hypothesis shorter’. the causal effect produced by the combination of different biases is a weighted linear combination of the causal effect of individual biases, thereby avoiding the combinatorial explosion issue. Experimental results show that CMBE can im- prove the performance of four LLMs on the multi- bias benchmark. This demonstrates that CMBE can effectively debias LLMs and enhance the general- izability of LLMs under the multi-bias scenarios. 2 Bias Polarity Study Formally, each type of bias contains at least one kind of bias feature. For example, the bias type ‘lexical overlap’ have two bias features ‘high lexi- cal overlap’ and ‘low lexical overlap’. To construct the multi-bias benchmark, we need to ensure that the polarities of these bias features are consistent. However, most of the previous researches (McCoy et al., 2019; Rajaee et al., 2022; Anantaprayoon et al., 2024) on bias polarity are based on small dis- criminative models. The conclusions drawn from such scenarios may not necessarily be applicable to generative large language models. Therefore, we first design experiments for the biases that have already been discovered by predecessors to explore and verify their polarities. Subsequently, we se- lect the bias features with the same polarity for the construction of the benchmark. Take the bias type of sentence length for an ex- ample, Gururangan et al. (2018) discovered that the label neutral was associated with longer hy- potheses, while the label entailment was associated with shorter hypotheses. Inspired by this finding, we take the relative length difference between the premise and hypothesis exceeding five words as the criterion for the sentence length bias and conduct explorations through experiments. For simplicity, we refer to the bias feature where the hypothesis is longer than the premise by more than five words as ‘hypothesis longer’, and the bias feature where the hypothesis is shorter than the premise by more than five words as ‘hypothesis shorter’. Specifically, for 2 Bias Type Bias Feature Bias Polarity sentence lengthhypothesis shorter: the hypothesis is shorter than the premise by more than five words entailment hypothesis longer: the hypothesis is longer than the premise by more than five words neutral lexical overlaphigh: the lexical overlap rate between the premise and hypothesis is higher than 0.8 entailment low: the lexical overlap rate between the premise and hypothesis is lower than 0.2 neutral semantic similarityhigh: the Bertscore between the premise and hypothesis is higher than 0.88 entailment low: the Bertscore between the premise and hypothesis is lower than 0.83 neutral speculative word speculative word exists: speculative word exists in the premise or the hypothesis entailment gender-occupationmale with male-biased occupations entailment male | https://arxiv.org/abs/2505.16522v2 |
with female-biased occupations contradiction Table 2: This table presents five types of biases with nine different bias features and their polarities. The bias of ‘male with male-biased occupations’ means that occupations statistically related to male appears in the premise, and male appears in the hypothesis (similar for male with female-biased occupations). the bias features ‘hypothesis longer’ and ‘hypoth- esis shorter’, we randomly select 3,000 samples with balanced labels from existing datasets such as MNLI (Williams et al., 2018) for each feature re- spectively. Then, we statistically analyze the label distribution predicted by ChatGPT-4o-latest and compare it with the label distribution of the dataset. Experimental results are shown in Table 1. From the table, we can find that ChatGPT-4o- latest tends to predict entailment and neutral for the bias features ‘hypothesis longer’ and ‘hypothesis shorter’, respectively. So the polarity of the bias feature ‘hypothesis longer’ is neutral (and entail- ment for the bias feature ‘hypothesis shorter’). For other types of biases such as lexical overlap, we also conducted similar experiments to investi- gate their polarities. Eventually, the types of biases that still exist in ChatGPT-4o-latest, as well as their polarities, are shown in Table 2. To ensure the con- sistency of polarities for selected bias features, we choose the bias features ‘hypothesis shorter’, ‘high lexical overlap’, ‘high semantic similarity’, ‘specu- lative word exists’, and ‘male with male-biased oc- cupations’ to construct our multi-bias benchmark. More details about the experiments for other biases can be seen in Appendix D. 3 Benchmark Construction and Evaluation 3.1 Benchmark Construction The challenge of constructing a multi-bias bench- mark lies in how to control multiple types of biases simultaneously. To overcome this challenge, we de- sign a data generation process that consists of three main steps, where each step introduces distinct bi-ases to the data: (i) Template construction. (ii) vocabulary construction and template completion using the constructed vocabulary. (iii) verification of bias features on each generated data. 3.1.1 Template Construction We construct four pairs of templates for the label neutral and the other two labels, respectively. Each template is explicitly incorporated with the bias feature of ‘male with male-biased occupations’ and ‘speculative word exists’. Below is one pair of template, the first template is for the label neutral and the second is for the other two labels: (1) Premise: N1is aP1,V1. Hypothesis: He S1V2. (2) Premise: N1is aP1. HeV1. Hypothesis: He S1V2. where N1is a unisex name, P1is a male-biased oc- cupation (occupations statistically related to male) used to control the bias feature of ‘male with male- biased occupations’, S1is a speculative word such as ‘might’, V1andV2are two verb phrases used to control the bias types of lexical overlap and sen- tence length. Note that we also control that the gender information is not introduced in V1, so that the label of the samples constructed using the first template will always be neutral. Full list of the templates can be seen in Appendix A. 3.1.2 Vocabulary Construction and Template Completion We drew our occupation vocabulary that includes 87 male-biased occupations from Anantaprayoon et al. (2024), and name vocabulary | https://arxiv.org/abs/2505.16522v2 |
that contains 30 unisex names from the website1. For specula- 1https://www.behindthename.com/names/gender/unisex 3 Premise Hypothesis label Noah is a plumber. He builds furniture to decorate the home and save costs, using recycled wood.He might build furniture to decorate the home and save costs.Entailment Emma is a officer. He plans an extensive and detailed project outline for the community event.He could plan an extensive project out- line for the community event.Entailment Emerson, a judge by trade, considers the various intricate details of the new art exhibit in the gallery.He could consider the intricate details of the art exhibit.Neutral Morgan is a scholar, visiting the park regularly to enjoy nature and unwind, usually along a shaded path.He might visit the park regularly to enjoy nature and unwind.Neutral Liam works as a trader. He declared that delivering 30 food boxes by December 10th would organize holiday meals, however, the warehouse received only 11 boxes by the deadline.He would organize holiday meals after delivering 30 food boxes by December 10th.Contradiction Liam is a technician. He confirm that hosting 15 free coding workshops before December 15th would qualify for tech grants. however, there are only 6 workshops held by the deadline.He may qualify for tech grants after host- ing 15 free coding workshops before De- cember 15th.Contradiction Table 3: This table presents six examples in our constructed benchmark. tive vocabulary, we manually collect 6 speculative words (shown in Appendix B). To control the answer of the generated data, we use ChatGPT-4o-latest to construct the vocabulary of verb phrase pairs (instead of verb phrases) and verify their validity. For each pair of them, the two verb phrases are used to fill in the premise and the hypothesis, respectively. To control the bias features of ‘high lexical overlap’ and ‘hypothesis shorter’, we utilize an automatic program to ensure that each verb phrase pair has a remarkably high degree of lexical overlap, and the verb phrase cor- responding to the premise is more than three words longer than that corresponding to the hypothesis (since the other part of the premise is at least two words longer than that of the hypothesis in each template). Meanwhile, we manually verify whether each verb phrase pair always results in the premise entailing (or contradicting) the hypothesis after in- serted into the templates. Three of us were involved in this verification process, and it was ensured that all three of us deemed that each verb phrase pair satisfied the requirements. Finally, we got a vo- cabulary containing 200 pairs of verb phrases (100 pairs corresponds to the label ‘entailment’ and 100 pairs corresponds to the label ‘contradiction’). After collecting the vocabulary, we can complete the template by randomly extracting data from the vocabulary. Note that when constructing data with the ‘neutral’ label, we also use verb phrase pairs corresponding to the label ‘entailment’. Since the ‘neutral’ label is determined by templates, this does not affect the correctness of the data.3.1.3 Verification of Bias Features To ensure that the constructed data meets the re- quirements of the five bias features. Therefore, we employ an automatic program | https://arxiv.org/abs/2505.16522v2 |
to select the data that satisfies the requirements of these bias fea- tures. Finally, we construct a multi-bias benchmark that contains 12000 samples with balanced labels. Some Dataset samples are provided in in Table 3. 3.2 Evaluation and Analysis In this section, we explore three core research ques- tions with the proposed benchmark. RQ1 : Will the LLMs be more susceptible to multi-bias scenarios? RQ2 : Are existing state-of-the-art (SOTA) debias- ing methods still effective in multi-bias scenarios? RQ3 : Can the challenges of multi-bias scenarios be solved by scaling the model parameters? 3.2.1 Evaluation Setup Models In this work, we utilize ChatGPT-4o-latest (Achiam et al., 2023). Llama3.1-70B-chat (Dubey et al., 2024), Gemma3-27B-it (Team et al., 2025), and Qwen3-8B (Yang et al., 2024a) for experi- ments. We evaluate the LLMs in two ways: zero- shot, few-shots prompting. For few-shot experi- ments, we randomly select three examples with balanced labels from the MNLI train set. All exper- iments are done for three times and we report the average of the accuracy. Datasets In addition to the constructed benchmark (called 5-bias below) that contains five types of biases within each data, we also select the crowd- sourced dataset MNLI (Williams et al., 2018), and the meticulously curated HANS (McCoy et al., 2019) dataset in which each instance contains one 4 ChatGPT-4o-latest Llama3.1-70B-chat Gemma3-27B-it Qwen3-8B Zero-shot MNLI HANS 5-bias MNLI HANS 5-bias MNLI HANS 5-bias MNLI HANS 5-bias Vanilla 84.5 83.8 49.1 80.5 75.4 38.0 82.6 75.4 40.8 77.9 80.8 34.2 DC 84.6 84.1 54.7 80.6 77.5 45.7 82.4 77.8 45.9 78.0 81.5 39.6 BC 84.7 84.0 57.8 80.8 78.6 47.6 82.6 78.6 47.5 78.2 81.6 40.3 Unibias — — — 80.5 77.3 44.8 82.5 77.7 44.2 78.2 81.2 38.6 CAL — — — 81.2 76.6 48.4 82.9 78.4 47.8 77.4 80.4 36.6 Few-shot MNLI HANS 5-bias MNLI HANS 5-bias MNLI HANS 5-bias MNLI HANS 5-bias Vanilla 85.4 84.8 51.2 83.9 89.0 42.1 84.4 80.5 43.9 80.6 76.8 43.3 DC 85.6 85.0 57.7 83.8 89.5 52.5 84.3 80.9 50.7 80.4 77.6 50.1 BC 85.5 85.2 59.3 84.1 89.7 55.1 84.4 80.7 53.1 80.6 77.8 50.5 Unibias — — — 84.2 89.4 51.3 84.6 80.6 50.2 80.5 77.1 49.6 CAL — — — 83.6 89.6 53.4 84.2 81.4 52.6 80.2 77.5 48.7 Table 4: Evaluation results of different LLMs and different debiasing methods on MNLI, HANS and 5-bias datasets. Best in bold, second with an underline. −denotes that this method is inapplicable to ChatGPT-4o-latest due to the requirement for internal representations of the model. type of controlled bias for experimental analysis. Since the gold labels for the MNLI test set are not publicly available, we follow previous work and report the results on the development-matched. Debiasing Methods In this work, we conduct a comprehensive analysis of the existing debiasing methods for LLMs, including DC (Fei et al., 2023), BC (Zhou et al., 2024a), Unibias (Zhou et al., 2024b), and CAL (Sun et al., 2024b) methods. Ad- ditionally, we also report the results of vanilla in- context learning. More details about the specific prompts used for evaluation | https://arxiv.org/abs/2505.16522v2 |
can be seen in Appendix E. 3.2.2 Quantitative Analysis Experimental results are shown in Table 4, from which we find that: (1)Comparing the performance of four LLMs on the HANS and 5-bias datasets in which each instance contains one and five types of controlled bias features, it is evident that the performance on 5-bias dataset is much lower (more than 20%). This indicates that LLMs are more susceptible to multi- bias scenarios where each piece of data contains multiple types of biases. (2)Compared to the vanilla zero-shot and few- shot method, current debiasing methods achieves better performance on HANS and 5-bias datasets in general (with BC method performs the best). This demonstrates the effectiveness of current debiasing methods. However, the performance of these meth- ods on 5-bias dataset is still unsatisfying, which indicates that current debiasing methods is not ef- fective enough in multi-bias scenarios. This also highlights the importance of devising debiasing methods that can deal with multi-bias scenarios. Figure 2: Error rates of four LLMs for each label. (3)Compared to smaller LLMs, the performance of larger LLMs on the 5-bias dataset is higher in general, which indicates that larger LLMs is more robust in multi-bias scenarios. However, scaling up model size requires massive resources and current LLMs’ performance is far from expectations even for ChatgGPT-4o-latest, indicating that relying on scaling up parameters to improve the generalizabil- ity on multi-bias scenarios is not advisable. 3.2.3 Error Analysis For a more in-depth analysis, we statistically ana- lyzed the error rates of four LLMs for each label. Specifically, we conduct a statistical analysis on the zero-shot evaluation results for the 5-hard dataset. As shown in Figure 2, the error rate of data whose gold answer is ‘neutral’ and ‘contradiction’ are sig- nificantly higher than those with the gold answer ‘entailment’, which suggests that the bias contained in these data causes LLMs to exhibit a tendency to predict entailment. This aligns with the origi- nal intention of constructing the 5-bias dataset. As 5 BX S YBX S BX YS - = YxTotal Causal Effect Total Direct Effect Natural Indirect Effect z z zIndirect Effect Direct Effect CorrelationFigure 3: Causal Effect Estimation-guided Multi-bias Elimination Method. the five types of injected biases all cause LLMs to tend to predict ‘entailment’, these LLMs will predict ‘entailment’ most of the time, thereby ex- hibiting poor performance on other label categories. Furthermore, we find that Chatgpt-4o-latest is rela- tively less affected by bias among the four LLMs. 4 Method From the evaluation of LLMs and debiasing meth- ods on the multi-bias benchmark, we know that current LLMs and debiasing methods cannot deal with scenarios where data contains multiple types of biases. To deal with this problem, we propose a causal effect estimation-guided multi-bias elimina- tion method. 4.1 Causal View of Bias As shown in Figure 3, due to unknown confounders z, which are the common cause of semantic infor- mation Sand bias B, bias is correlated to the se- mantic information, and hence Bis also correlative to the target answer Y(since there exists a causal relationship between SandY). Therefore, | https://arxiv.org/abs/2505.16522v2 |
bias produces a natural indirect effect (Pearl, 2009) on answer Y. 4.2 The Proposed Framework: CMBE As shown in Figure 3, to conduct unbiased infer- ence, we should eliminate the natural indirect effect (NIE) from the total causal effect (TCE) to get the total direct effect (TDE) which is only exerted by the semantic information: TCE−NIE =TDE (1) While the TCE can be easily represented as the LLMs’ predicted probability of the candidate an- swers, the NIE cannot be directly derived since the semantic information and bias features cannot be easily decoupled at the instance level. Therefore, we estimate the average NIE (denoted as NIE ) tosubstitute for the natural indirect effect produced by bias features in each instance. However, due to the combinatorial explosion problem when con- sidering multiple types of biases, it is challenging to estimate the NIE for each combination of bias features. To deal with this problem, we devise a two-stage approach to estimate the NIE for multi- ple types of bias. In the first stage , we estimate the average NIE for each bias feature separately. Take ‘speculative word exists’ as an example of the target bias feature b, we first select nsamples that are label-balanced and each contains speculative words2from the ex- isting datasets. Subsequently, since the average of debiased probability distribution is expected to be uniform on the label-balanced datasets, we calcu- late the average predicted probabilities Pbof the LLM for the selected samples, and then subtract the uniform probability distribution PUof the labels fromPbto obtain the NIE b: NIE b=Pb−PU (2) In the Second stage , we aim at estimating the average NIE for different combinations of bias fea- tures. To deal with the combinatorial explosion problem, we assume that the NIE produced by the combination of different bias features is a weighted linear combination of the NIE of each bias feature. Specifically, for the combination of bias features S={b1, b2, ...b n}, theNIE ofScan be repre- sented as: NIE S=X bi∈Sλti·NIE bi (3) where tiis the bias type of the bias feature bi (e.g., sentence length is the bias type of ‘hypothesis shorter’), and λtiis the parameters of weighted lin- ear combination for estimating NIE S. Similarly, 2We ensure that there are no bias features of the other four types of bias in the selected data through automatic programs. 6 We first select msamples that are label-balanced and each contains at least two bias features from the existing datasets. Subsequently, we use the following equation to estimate these parameters: 1 mmX j=1 Pj−X bi∈Sjλti·NIE bi =PU(4) where jis an instance and Sjis the bias features contained by instance j. At the inference stage, we firstly use automatic programs to derive the bias features Skwithin each instance k, and then the TDE of the semantic infor- mation can be calculated as the debiased result: TDE =Pk−NIE Sk (5) 5 Experiments 5.1 Experimental Settings Due to the diversity of biases, it is impossible to know all potential bias types existing in the data in practical applications. Therefore, in this exper- iment, we assume that we | https://arxiv.org/abs/2505.16522v2 |
have prior knowledge of three or four types of biases in advance (called CMBE-3 and CMBE-4), and demonstrate that our method is still effective in such scenarios through experiments. Since it is challenging to automati- cally extract data from existing datasets for estimat- ing the causal effects of gender-occupation bias, we select ‘lexical overlap’, ‘sentence length’, ‘se- mantic similarity’, ‘speculative word’ as known bias types for CMBE-4. For CMBE-3, we ran- domly choose three types of biases from them for two times as known bias types, and take the aver- age result of these two selections as the result of CMBE-3. For LLMs, we choose ChatGPT-4o-latest, Llama3.1-70B-chat, Gemma3-27B-it, and Qwen3- 8B for our experiments. In addition, we also choose vanilla in-context learning, DC, BC, Unib- ias, and CAL method as our baselines. The hyper- parameters mandnare set to 90 and 15 respec- tively, and the samples used for estimating the causal effect of biases are sampled from the MNLI train set. For zero-shot and few-shot experiments, we utilize the same prompts as the baseline. All experiments are done for three times and we re- port the average of the accuracy. For datasets, we choose the MNLI and 5-bias as in-domain (ID) and out-of-domain (OOD) datasets to evaluate the in-domain and out-of-domain performance under multi-bias scenarios. Due to the page limit, weGPT-4o Llama3.1-70B Gemma3-27B Zero-shot MNLI 5-bias MNLI 5-bias MNLI 5-bias Vanilla 84.5 49.1 80.5 38.0 82.6 40.8 DC 84.6 54.7 80.6 45.7 82.4 45.9 BC 84.7 57.8 80.8 47.6 82.6 47.5 Unibias — — 80.5 44.8 82.5 44.2 CAL — — 81.2 48.4 82.9 47.8 CMBE-3 84.7 57.6 80.5 49.4 82.7 50.4 CMBE-4 84.8 58.9 80.6 51.8 82.7 52.1 Few-shot MNLI 5-bias MNLI 5-bias MNLI 5-bias Vanilla 85.4 51.2 83.9 42.1 84.4 43.9 DC 85.6 57.7 83.8 52.5 84.3 50.7 BC 85.5 59.3 84.1 55.1 84.4 53.1 Unibias — — 84.2 51.3 84.6 50.2 CAL — — 83.6 53.4 84.2 52.6 CMBE-3 85.1 59.5 83.8 56.3 84.5 56.6 CMBE-4 85.2 60.7 84.0 57.2 84.5 58.3 Table 5: Comparison of CMBE with baselines across four LLMs. Best in bold, second with an underline. − denotes that this method is inapplicable to ChatGPT. report the full experimental results that includes Qwen3-8B in Appendix C. 5.2 Experimental Results Experimental results are shown in Table 5 (full results in Table 7), from which we find that: (1)Comparing the performance of our method with vanilla zero-shot and few-shot prompting, our method performs better on the 5-bias dataset (even if we only know three types of biases ahead). This demonstrates that by estimating and eliminating the causal effect of biases, our method can effectively debias LLMs to improve the generalizability of LLMs in multi-bias scenarios. (2)Compared to previous debiasing methods, our method generally performs better on 5-bias dataset while maintaining performance on MNLI dataset. This indicates that by considering and eliminating the causal effect of multiple types of biases simultaneously, we can mitigate the impact of biases on LLMs more effectively. As a result, our method can effectively debias LLMs by elim- inating the causal effect of biases | https://arxiv.org/abs/2505.16522v2 |
to improve the generalizability of LLMs in multi-bias scenarios. (3)Comparing the performance of CMBE-4 with CMBE-3, the performance of CMBE-4 is better on 5-bias dataset consistently. This indicates that our method can enhance the generalization perfor- mance in multi-bias scenarios (OOD) by consider- ing and eliminating more types of biases. 7 Figure 4: Influence of different numbers of data for estimating the average causal effects of different bias features combinations. 5.3 Sensitivity Analysis To estimate the average causal effects of different combinations of bias features, we select 90 samples that are label-balanced and each contains at least two known bias features. In this experiment, we investigate the sensitivity of model performance upon different hyperparameters by setting different numbers of data for estimation. Experiments are conducted on 5-bias using Gemma3-27B and the results are shown in Figure 4. We observe that: Empirically, the performance of CMBE-4 remains relatively stable with different numbers of data for estimation, Moreover, our approach outperforms the best performance of previous debiasing meth- ods on the 5-bias dataset, which demonstrates the effectiveness of our approach to debias LLMs in multi-bias scenarios. 6 Related Work Bias Evaluation Benchmarks Some recent studies (Zhou et al., 2024c; Sun et al., 2024a) have revealed that LLMs still utilize biases during the inference stage, and found that this phenomenon may lead to poor performance when generalizing to out-of- distribution scenarios. Zhou et al. (2024d) found that LLMs utilizes lexical and style biases in senti- ment analysis tasks and propose a benchmark for verification. Anantaprayoon et al. (2024) proposed a benchmark to examine the gender bias within LLMs. Their research indicates that LLMs exhibits gender bias even in the NLI task. Steen and Mark- ert (2024) focused on the news summarization task and proposed a dataset to explore the biases ex- ploited by LLMs. They found that LLMs exhibit entity hallucination bias on this dataset, thus de- creasing the performance of LLMs. Though thereare many benchmarks proposed to investigate the generalizability of LLMs, each piece of data con- tains only one type of controlled bias in most of these benchmarks (few of them contains two types). However, a single piece of data may simultaneously contain multiple types of biases in practical applica- tions. To mitigate this gap, we propose a multi-bias benchmark, where each piece of data contains five types of biases. Debiasing Methods for LLMs There are also some works aiming at mitigating biases of LLMs. Domain-context calibration method (Fei et al., 2023) uses random words sampled from the un- labeled evaluation dataset as the content-free text, and then calibrates the outputs based on the content- free text to mitigate label bias. Batch calibra- tion (Zhou et al., 2024a) method models the bias from the prompt context by marginalizing the LLM scores in the batched input. Causal-guided active learning method (Sun et al., 2024b) first automat- ically identifies biased instances and induces ex- plainable biases. Subsequently, this method utilizes these biased instances and explainable biases to de- bias LLMs. UniBias (Zhou et al., 2024b) identifies and eliminates biased FFN vectors and attention heads within | https://arxiv.org/abs/2505.16522v2 |
LLMs. Though there are some method proposed to de- bias LLMs, they cannot deal with multi-bias sce- narios as shown in our experiments. In this work, we propose a causal effect estimation-guided multi- bias elimination method, which can mitigate this gap by estimating and eliminating the causal effect of multiple types of biases. 7 Conclusions In this paper, we focus on exploring and improv- ing the validity of LLMs and debiasing methods in multi-bias scenarios. Concretely, we propose 5-bias, the first multi-bias benchmark where each piece of data contains 5 types of biases, for prob- ing into the performance boundary of LLMs and debiasing methods in multi-bias scenarios. Then, an evaluation of current LLMs and debiasing meth- ods on 5-bias is conducted. Evaluation results show that current LLMs and debiasing methods exhibit unsatisfied on this benchmark, highlight- ing the challenge of eliminating multiple types of biases simultaneously. To overcome this chal- lenge, we present a causal effect estimation-guided multi-bias elimination method, which estimates and eliminates the causal effect of multiple types 8 of biases simultaneously during the inference pro- cess of LLMs. Experimental results show that our approach can effectively debias LLMs to enhance their generalizability in multi-bias scenarios. Limitations Although benchmarking and improving the multi- bias elimination ability of LLMs, our debiasing method relies on adjusting the probability of the output, which may be inaccessible in some propri- etary models. As a result, our method cannot be applied to some closed-source models. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and 1 others. 2023. Gpt-4 techni- cal report. arXiv preprint arXiv:2303.08774 . Panatchakorn Anantaprayoon, Masahiro Kaneko, and Naoaki Okazaki. 2024. Evaluating gender bias of pre-trained language models in natural language in- ference by considering all labels. In Proceedings of the 2024 Joint International Conference on Compu- tational Linguistics, Language Resources and Evalu- ation (LREC-COLING 2024) , pages 6395–6408. Yejin Bang, Delong Chen, Nayeon Lee, and Pascale Fung. 2024. Measuring political bias in large lan- guage models: What is said and how it is said. In Proceedings of the 62nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers) , pages 11142–11159. Jiali Cheng and Hadi Amiri. 2024. Fairflow: Mitigating dataset biases through undecided learning for natu- ral language understanding. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing , pages 21960–21975. Sunipa Dev, Tao Li, Jeff M Phillips, and Vivek Sriku- mar. 2020. On measuring and mitigating biased in- ferences of word embeddings. In Proceedings of the AAAI conference on artificial intelligence , pages 7659–7666. Li Du, Xiao Ding, Zhouhao Sun, Ting Liu, Bing Qin, and Jingshuo Liu. 2023. Towards stable natural lan- guage understanding via information entropy guided debiasing. In Proceedings of the 61st Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 2868–2882. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, and 1 others. 2024. The llama 3 herd | https://arxiv.org/abs/2505.16522v2 |
of models. arXiv preprint arXiv:2407.21783 .Yu Fei, Yifan Hou, Zeming Chen, and Antoine Bosselut. 2023. Mitigating label biases for in-context learn- ing. In Proceedings Of The 61St Annual Meeting Of The Association For Computational Linguistics (Acl 2023): Long Papers, Vol 1 , pages 14014–14031. Assoc Computational Linguistics-Acl. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural language infer- ence data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers) , pages 107–112. Anastasiia Klimashevskaia, Dietmar Jannach, Mehdi Elahi, and Christoph Trattner. 2024. A survey on pop- ularity bias in recommender systems. User Modeling and User-Adapted Interaction , 34(5):1777–1834. Marta Marchiori Manerba, Karolina Stanczak, Riccardo Guidotti, and Isabelle Augenstein. 2024. Social bias probing: Fairness benchmarking for language models. InProceedings of the 2024 Conference on Empiri- cal Methods in Natural Language Processing , pages 14653–14671. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuris- tics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics , pages 3428–3448. Nick Mckenna, Tianyi Li, Liang Cheng, Mohammad Hosseini, Mark Johnson, and Mark Steedman. 2023. Sources of hallucination by large language models on inference tasks. In Findings of the Association for Computational Linguistics: EMNLP 2023 , pages 2758–2774. Roberto Navigli, Simone Conia, and Björn Ross. 2023. Biases in large language models: Origins, inventory and discussion. ACM Journal of Data and Informa- tion Quality . Judea Pearl. 2009. Causality . Cambridge university press. Sara Rajaee, Yadollah Yaghoobzadeh, and Moham- mad Taher Pilehvar. 2022. Looking at the overlooked: An analysis on the word-overlap bias in natural lan- guage inference. In Proceedings of the 2022 Con- ference on Empirical Methods in Natural Language Processing , pages 10605–10616. Timo Schick, Sahana Udupa, and Hinrich Schütze. 2021. Self-diagnosis and self-debiasing: A proposal for re- ducing corpus-based bias in nlp. Transactions of the Association for Computational Linguistics , 9:1408– 1424. Julius Steen and Katja Markert. 2024. Bias in news summarization: Measures, pitfalls and corpora. In Findings of the Association for Computational Lin- guistics ACL 2024 , pages 5962–5983. 9 Zechen Sun, Yisheng Xiao, Juntao Li, Yixin Ji, Wen- liang Chen, and Min Zhang. 2024a. Exploring and mitigating shortcut learning for generative large lan- guage models. In Proceedings of the 2024 joint in- ternational conference on computational linguistics, language resources and evaluation (LREC-COLING 2024) , pages 6883–6893. Zhouhao Sun, Li Du, Xiao Ding, Yixuan Ma, Yang Zhao, Kaitao Qiu, Ting Liu, and Bing Qin. 2024b. Causal-guided active learning for debiasing large language models. In Proceedings of the 62nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 14455– 14469. Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, and 1 others. 2025. Gemma 3 technical report. arXiv preprint arXiv:2503.19786 . Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding | https://arxiv.org/abs/2505.16522v2 |
through inference. In NAACL- HLT. An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, and 1 others. 2024a. Qwen2. 5 technical report. arXiv preprint arXiv:2412.15115 . Nakyeong Yang, Taegwan Kang, Stanley Jungkyu Choi, Honglak Lee, and Kyomin Jung. 2024b. Mitigating biases for instruction-following language models via bias neurons elimination. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 9061– 9073. Shuo Yang, Bardh Prenkaj, and Gjergji Kasneci. 2025. Razor: Sharpening knowledge by cutting bias with unsupervised text rewriting. In Proceedings of the AAAI Conference on Artificial Intelligence . Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-bench and chatbot arena. InThirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track . Han Zhou, Xingchen Wan, Lev Proleev, Diana Mincu, Jilin Chen, Katherine Heller, and Subhrajit Roy. 2024a. Batch calibration: Rethinking calibration for in-context learning and prompt engineering. In Proceedings of the Twelfth International Conference on Learning Representations . Hanzhang Zhou, Zijian Feng, Zixiao Zhu, Junlang Qian, and Kezhi Mao. 2024b. Unibias: Unveiling and miti- gating llm bias through internal attention and ffn ma- nipulation. In Advances in Neural Information Pro- cessing Systems , volume 37, pages 102173–102196. Curran Associates, Inc.Yuhang Zhou, Paiheng Xu, Xiaoyu Liu, Bang An, Wei Ai, and Furong Huang. 2024c. Explore spurious correlations at the concept level in language models for text classification. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 478–492. Yuqing Zhou, Ruixiang Tang, Ziyu Yao, and Ziwei Zhu. 2024d. Navigating the shortcut maze: A comprehen- sive analysis of shortcut learning in text classification by language models. In Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 2586–2614. Premise Hypothesis N1is aP1,V1. He S1V2. N1is aP1. HeV1. He S1V2. N1, aP1by trade, V1. He S1V2. N1, aP1by trade. he V1. He S1V2. N1works as a P1,V1. He S1V2. N1works as a P1. HeV1. He S1V2. N1, recognized as a P1,V1. He S1V2. N1is recognized as a P1. HeV1. He S1V2. Table 6: This table presents six examples in our con- structed benchmark. A Templates We provide the templates that are used for con- structing out benchmark in Table 6. B Speculative Vocabulary Here we provide the full list of the speculative vo- cabulary that are used for constructing our bench- mark, including “could", “might", “probably", “pre- sumably", “must", “may". C More Experimental Results We provide the full experiments that includes Qwen3-8B in Table 7. D More Experiments about Bias Polarity Study We provide the experimental results for exploring the bias polarity which is shown in Table 8. For the bias type of gender-occupation, we follow (Anan- taprayoon et al., 2024) to devise datasets for experi- ments due to the difficulty in automatically extract- ing data containing this type of bias from existing datasets. For the other types of | https://arxiv.org/abs/2505.16522v2 |
biases, we construct the dataset for experiments by adopting the same methodology as illustrated in Sec. 2. 10 E Prompts We provide the specific prompts used for zero-shot and few-shot experiments in Figure 5. 11 GPT-4o Llama3.1-70B Gemma3-27B Qwen3-8B Zero-shot MNLI 5-bias MNLI 5-bias MNLI 5-bias MNLI 5-bias Vanilla 84.5 49.1 80.5 38.0 82.6 40.8 77.9 34.2 DC 84.6 54.7 80.6 45.7 82.4 45.9 78.0 39.6 BC 84.7 57.8 80.8 47.6 82.6 47.5 78.2 40.3 Unibias — — 80.5 44.8 82.5 44.2 78.2 38.6 CAL — — 81.2 48.4 82.9 47.8 77.4 36.6 CMBE-3 84.7 57.6 80.5 49.4 82.7 50.4 78.0 46.7 CMBE-4 84.8 58.9 80.6 51.8 82.7 52.1 77.9 54.8 Few-shot MNLI 5-bias MNLI 5-bias MNLI 5-bias MNLI 5-bias Vanilla 85.4 51.2 83.9 42.1 84.4 43.9 80.6 43.3 DC 85.6 57.7 83.8 52.5 84.3 50.7 80.4 50.1 BC 85.5 59.3 84.1 55.1 84.4 53.1 80.6 50.5 Unibias — — 84.2 51.3 84.6 50.2 80.5 49.6 CAL — — 83.6 53.4 84.2 52.6 80.2 48.7 CMBE-3 85.1 59.5 83.8 56.3 84.5 56.6 80.7 52.1 CMBE-4 85.2 60.7 84.0 57.2 84.5 58.3 80.8 53.7 Table 7: Comparison of CMBE with baselines across four LLMs. Best in bold, second with an underline. −denotes that this method is inapplicable to ChatGPT-4o-latest due to the requirement for internal representations of the model. ChatGPT-4o-latest entailment neutral contradiction low lexical overlap 30.3 37.5 32.2 high lexical oerlap 40.9 24.6 34.5 label distribution 33.3 33.3 33.3 ChatGPT-4o-latest entailment neutral contradiction speculative word exists 40.1 27.7 32.2 speculative word absents 32.8 34.8 32.4 label distribution 33.3 33.3 33.3 ChatGPT-4o-latest entailment neutral contradiction high semantic similarity 40.3 27.2 32.5 low semantic similarity 30.6 37.6 31.8 label distribution 33.3 33.3 33.3 ChatGPT-4o-latest entailment neutral contradiction male with male-biased occupations 53.8 11.5 34.7 male with female-biased occupations 33.7 16.1 51.2 label distribution 33.3 33.3 33.3 Table 8: This table presents the distribution of the predicted labels by ChatGPT-4o-latest and in the dataset. 12 Zero -shot: Examine the pair of sentences and determine if they exhibit entailment, neutral, or contradiction. Answer with either 'entail ment', 'neutral', or 'contradiction': Few-shot 1: Examine the pair of sentences and determine if they exhibit entailment, neutral, or contradiction. Answer with either 'entail ment', 'neutral', or 'contradiction’: Premise: I am asserting my membership in the club of Old Geezers. Hypothesis: I am proclaiming that I am now a member of the club of Old Geezers. Answer: 'entailment’ Premise: Jon's feeling of age and weariness must have shown. Hypothesis: Jon had traveled longer than his body could handle. Answer: 'neutral’ Premise: oh really it wouldn't matter if we plant them when it was starting to get warmer. Hypothesis: It is better to plant when it is colder. Answer: 'contradiction’ Few-shot 2: Examine the pair of sentences and determine if they exhibit entailment, neutral, or contradiction. Answer with either 'entail ment', 'neutral', or 'contradiction’: Premise: uh idon't know iihave mixed emotions about him uh sometimes ilike him but at the same times ilove to see somebody beat him. Hypothesis: I like him for the most part, but would still enjoy seeing | https://arxiv.org/abs/2505.16522v2 |
someone beat him. Answer: 'entailment’ Premise: The new rights are nice enough. Hypothesis: Everyone really likes the newest benefits. Answer: 'neutral’ Premise: This site includes a list of all award winners and a searchable database of Government Executive articles. Hypothesis: The Government Executive articles housed on the website are not able to be searched. Answer: 'contradiction' Few-shot 3: Examine the pair of sentences and determine if they exhibit entailment, neutral, or contradiction. Answer with either 'entail ment', 'neutral', or 'contradiction’: Premise: New York Times columnist Bob Herbert asserts that managed care has bought Republican votes and that patients will di e as a result. Hypothesis: Managed care bought Republican votes and patients will end up dead because of this. Answer: 'entailment’ Premise: Dirt mounds surrounded the pit so that the spectators stood five or six people deep around the edge of the pit. Hypothesis: The hole is seven feet deep. Answer: 'neutral’ Premise: There are many homes built into the hillsides; some have been converted into art galleries and shops selling collect ibles. Hypothesis: All of the homes in the hillside have been converted into art galleries and shops selling collectibles. Answer: 'contradiction'Figure 5: Error rates of three LLMs for each label. 13 | https://arxiv.org/abs/2505.16522v2 |
EnSToM: Enhancing Dialogue Systems with Entropy-Scaled Steering Vectors for Topic Maintenance Heejae Suh1, Yejin Jeon1, Deokhyung Kang1, Taehee Park1, Yejin Min1, Gary Geunbae Lee1,2 1Graduate School of Artificial Intelligence, POSTECH, 2Department of Computer Science and Engineering, POSTECH, {heejaesuh, jeonyj0612, deokhk, taehpark, yeajinmin, gblee}@postech.ac.kr Abstract Small large language models (sLLMs) offer the advantage of being lightweight and effi- cient, which makes them suitable for resource- constrained environments. However, sLLMs often struggle to maintain topic consistency in task-oriented dialogue systems, which is critical for scenarios such as service chatbots. Specifically, it is important to ensure that the model denies off-topic or malicious inputs and adheres to its intended functionality so as to prevent potential misuse and uphold reliabil- ity. Towards this, existing activation engineer- ing approaches have been proposed to manipu- late internal activations during inference. While these methods are effective in certain scenar- ios, our preliminary experiments reveal their limitations in ensuring topic adherence. There- fore, to address this, we propose a novel ap- proach termed Entropy-scaled Steering vectors forTopicMaintenance (EnSToM). EnSToM dynamically adjusts the steering intensity based on input uncertainty, which allows the model to handle off-topic distractors effectively while preserving on-topic accuracy. Our experiments demonstrate that EnSToM achieves significant performance gain with a relatively small data size compared to fine-tuning approaches. By improving topic adherence without compromis- ing efficiency, our approach provides a robust solution for enhancing sLLM-based dialogue systems1. 1 Introduction Recent advances in large language models (LLMs) have enabled the development of sophisticated con- versational systems across a wide range of ser- vices (Naveed et al., 2024). These systems are in- creasingly being adopted by organizations for ap- plications such as customer support, conversational assitants, and internal process guidance. However, 1The source code is available at https://github.com/ linkyouhj/enstom Scenario: Ordering a new batch of personal checks. VanillaEnSToMQ)What are the largest banks in the world by assets? (Distractor)I’m sorry, I can’t provide you with a list of the largest banks …I’m sorry, I can’t provide you with a list of the largest banks …Q)I'd like a checkbook cover as well. How do I add that?(On-Topic)I’m sorry, I am just an AI assistant, I can’t provide you…Great question!I can add a checkbook cover to your order … 👨💼Figure 1: The example above illustrates that bots tend to provide only refusal responses when using vanilla steering to improve on-topic response generation. On the other hand, EnSToM is able to generate more con- textually appropriate responses. openly available API-based large-scale models of- ten face limitations in terms of compliance with strict data privacy policies and security regulations. Furthermore, large-scale open-source models de- mand significant computational resources, which results in high operational costs for deployment. In this context, sLLMs have emerged as a practical al- ternative (Xia et al., 2024) by offering lightweight and resource-efficient solutions for production envi- ronments. Since these models enable organizations to achieve robust conversational capabilities with- out the extensive computational costs associated with larger models, they are a compelling choice for a variety of applications. Despite their impressive performance on gen- eral tasks, LLMs face challenges when deployed in | https://arxiv.org/abs/2505.16526v1 |
real-world scenarios that demand consistent main- tenance of specific constraints like business con- texts, or scenario-driven dialogues (Sreedhar et al., 2024). This issue becomes especially pronounced with sLLMs (Bahri et al., 2024), as their limited capacity makes it even harder to ensure scenario consistency over extended user interactions (Fig- ure 1). The inability to maintain a prescribed sce- nario directly undermines a service chatbot’s in- tended functionality; if it cannot adhere to a given workflow, policy, or domain rule, it fails to deliver 1arXiv:2505.16526v1 [cs.CL] 22 May 2025 the expected user experience, which potentially leads to misinformation, reduced trustworthiness, and even safety concerns such as inadvertently dis- closing sensitive information (Kim et al., 2024). Consequently, the capability of an LLM to reliably uphold scenario constraints and follow specified directives is not merely an enhancement but a ne- cessity in real-world applications. Numerous alignment techniques have been pro- posed to address this issue, with two prominent approaches being fine-tuning and prompt engineer- ing. Fine-tuning the model with domain-specific, high-quality data can effectively realign its internal parameters to suit particular constraints. However, this process demands substantial resources in terms of data collection, annotation, and computational cost, which makes it impractical in covering every possible scenario. Meanwhile, prompt engineer- ing techniques offer a more lightweight and less resource-intensive solution. While prompt-based methods have demonstrated efficacy in steering model behavior, their effectiveness often dimin- ishes in complex, nuanced scenarios (Patel et al., 2023) where detailed instructions and long-term context-maintenance are required. In light of these limitations, there is a clear need for new, more flexible methods that can help LLMs to consistently maintain scenario adherence with- out incurring the substantial overhead of extensive fine-tuning or relying solely on prompt design. To this end, we propose a novel and lightweight ap- proach termed Entropy-scaled Steering vectors for TopicMaintenance (EnSToM) based on activation addition , which steers a model’s generation at in- ference time without altering its parameters. By injecting a carefully derived steering vector into the model’s intermediate activations, we can gen- tly nudge the LLM towards maintaining scenario consistency. However, our preliminary experiment showed that straightforward application of activa- tion addition cause undesired steering even for on- topic inputs, potentially degrading the user experi- ence or interfering with correct responses. To address this, we introduce entropy-based coefficient scaling that leverages intrinsic model signals—specifically, layer-wise generation en- tropy—to differentiate between on-topic and dis- tractor inputs. This is motivated by our key observa- tion that the entropy distribution varies depending on whether the input is on-topic or a distractor. By dynamically adjusting the steering vector’s strength based on this entropy information, our method isable to enforce scenario adherence more diligently for distractor inputs while preserving the model’s natural behavior for on-topic interactions. This approach offers a resource-efficient align- ment strategy that can enhance existing prompt- based methods without the need for extensive re- training or exhaustive scenario-specific data col- lection. In this paper, we detail the design of our method, present an in-depth analysis of its perfor- mance, and demonstrate its ability to promote sce- nario adherence while | https://arxiv.org/abs/2505.16526v1 |
minimizing adverse effects on normal inputs. Our main contributions can there- fore be summarized as follows: •We propose EnSToM , a novel and lightweight activation addition -based method with entropy-based scaling that dynamically adjusts the steering vector’s influence. This ensures robust topic maintenance for distractor input while preserving on-topic accuracy. •Experiments on the CantTalkAboutThis dataset show that EnSToM significantly im- proves topic adherence in task-oriented dia- logues. •We conduct a comprehensive analysis of en- tropy patterns in LLMs by investigating layer- wise entropy distributions across on-topic and distractor inputs. Our findings provide key in- sights into the intrinsic properties of LLMs in different scenarios, which inform the design of entropy-aware steering strategies. 2 Related Work 2.1 Steering Vectors Steering vectors (Turner et al., 2023; Rimsky et al., 2024) modify hidden states by computing differ- ences between desirable and undesirable responses. As this allows for targeted activation adjustments, steering vectors have been explored for Trojan Ac- tivation Attacks (Wang and Shu, 2024), and be- havior alignment without fine-tuning (Subramani et al., 2022). In another domain, Lee et al. (2024) leverages conditioning vectors to selectively con- trol model behavior based on input contexts, while Stickland et al. (2024) introduces KL-Then-Steer (KTS) training to mitigate performance degrada- tion during steering vector application. Building on these findings, our approach enhances robust- ness by incorporating internal layer-wise entropy 2 of language models, ensuring consistent distractor accuracy without degrading on-topic performance. 2.2 Topic-Following Dialogue System Topic adherence in dialogue systems has been ex- plored through various approaches. Zhan et al. (2021) improved out-of-scope intent detection via pseudo outliers, while Mu et al. (2024) introduced the RuLES benchmark to assess rule-following behavior. Instruction fine-tuning for safety was explored in Llama Guard (Inan et al., 2023), whereas Xu et al. (2024) and Xie et al. (2024) proposed decoding and gradient-based alignment strategies. Moreover, Sreedhar et al. (2024) curated the CantTalkAboutThis dataset for evaluating on- topic dialogue and distractor handling. We leverage this dataset to improve both distractor and on-topic query accuracy. 3 Preliminaries This section provides an overview of the fundamen- tal concepts and methodologies that form the basis of our approach towards maintaining topic consis- tency in task-oriented dialogues. It also includes a brief description of the source dataset and the methodology for extracting steering vectors. 3.1 Topic Maintenance in Dialogue System The CantTalkAboutThis (Sreedhar et al., 2024) source dataset is designed to evaluate how lan- guage models handle off-topic queries in multi- domain dialogues. Each data sample is represented asX={I, D, u }, where Idenotes the system in- struction, Drepresents the dialogue history, and u is the user input query, which can be either on-topic (o) or off-topic ( d). This structure allows for the systematic analysis of a model’s ability in maintain- ing task-oriented scenarios with strict adherence to predefined topics. 3.2 Steering Vector Steering vectors (Rimsky et al., 2024) guide the model’s responses toward desired behaviors with- out requiring additional training. The core concept involves leveraging differences in the hidden repre- sentations of a language model at a specific layer to align its outputs with predefined | https://arxiv.org/abs/2505.16526v1 |
scenarios. Specif- ically, for any input pair qi={qp i, qn i}(where p denotes desired behavior and ndenotes undesired behavior), we compute the hidden representations h(l)at a designated layer lthrough a forward pass f(·). An example of such a pair is illustrated in theMethod Distractor On-topic Prompt Only 0.28 0.94 Vanilla Steering 0.80 (+0.52) 0.70 (-0.24) Table 1: Distractor and on-topic accuracies for different methods. Distractor accuracy measures the model’s abil- ity to refuse distractor inputs, while on-topic accuracy reflects its ability to provide engaging responses to on- topic inputs. For metric details, see section 5.1. upper half of Figure 2. Additionally, the represen- tations h(l) pandh(l) ncorrespond to the activations for the desired behavioral completion letter ( cp) and the undesired behavioral completion letter ( cn), respectively. Note that the completion letter rep- resents the designated choice of either A or B in a multiple-choice response format. The steering vector for qican then be computed as: vi s=h(l) p−h(l) n. Given kpairs in the dataset, the final steering vector vis computed by averaging the individual steering vectors. Subsequently, these vectors are normal- ized to ensure consistent scaling across behaviors. Formally, let the norm of each vi sbe denoted as ∥vi s∥, and let the average norm across all kvectors be¯∥v∥=1 kPk i=1∥vi s∥. The normalized steering vector is obtained as norm (vi s) =vi s·¯∥v∥ ∥vis∥. The process of computing the final steering vector vis summarized as follows: v=1 kkX i=1norm vi s . This aggregated vector vis applied to adjust the model’s activations during inference, which nudges its behavior toward the desired direction. Steering vectors thus offer an efficient mechanism to enforce topic consistency without requiring additional fine- tuning or training. 4 Proposed Methodology According to preliminary experiments (Table 1), we are able to observe that uniform application of the steering vector vimproves distractor refusal accuracy but significantly degrades responses to on- topic inputs. This degradation is likely attributable to the consistent guidance of the steering vector towards refusal, regardless of whether the input is 3 <Scenario> 👨💼 🤖●●● ●●● 👨💼 🤖●●● ●●● 👨💼 🤖●●● ●●●Forward PassPretrainedLLMLayer 0Layer 15Layer 16Layer 17Layer N System InstructionDialogue History<Input> Steered ResponsePretrainedLLMLayer 0Layer 15Layer 16Layer 17Layer N Entropy-based Coefficient Scaling𝑣 Steering vector𝐶!(#)∗Mean over pairsStep 1. Extracting steering vectors. Step 2. Generation with steering by entropy-based coefficient scaling.For allsteering QA … ℎ"#ℎ$#𝑣%#-= ℎ!"… ℎ#"… PretrainedLLMLayer 0Layer 𝑳−𝟏Layer 𝑳Layer 𝑳+𝟏Layer N System InstructionDialogue History<Input>Forward Pass 𝐻(#) Steering QA Dataset [INST] Can I deposit foreign currency into my bank account?(Distractor)Choices: (A)[Refuse and lead to topic] (B) [Engaging Response] [/INST] (B [INST] Can I deposit foreign currency into my bank account?(Distractor)Choices: (A)[Refuse and lead to topic] (B) [Engaging Response] [/INST] (A𝒒𝒊𝒑𝒒𝒊𝒏𝑐"𝑐$Figure 2: Overall process. After extracting steering vectors and applying entropy-based coefficient scaling, responses are generated using the entropy-based scaled steering vectors to maintain on-topic accuracy. on- or off-topic. Since preserving on-topic perfor- mance is as crucial as enhancing refusal capability, a more adaptive approach is required. Therefore, in order to improve scenario adher- ence in task-oriented dialogue systems by dynami- cally steering model responses based on input en- tropy, | https://arxiv.org/abs/2505.16526v1 |
we propose an approach which is comprised of three main components: (1) extracting steering vectors to align model behavior with predefined scenarios, (2) applying an entropy-based coeffi- cient scaling mechanism to dynamically adjust the steering intensity based on input uncertainty, and (3) generating responses using these scaled steer- ing vectors. By combining these components, our method effectively addresses the challenge of main- taining topic consistency in task-oriented dialogues, even amid off-topic distractors. The overall frame- work is illustrated in Figure 2. 4.1 Steering Vector Extraction From the source dataset, we first construct the Steering QA Dataset S={q1, q2, . . .}, which is utilized to extract the steering vector using the method described in Section 3.2. Specifically, each qirepresents a pair of prompts derived from the same distractor query d. For each d, the distractoris paired with two choice options explicitly rep- resenting a desired behavior and an undesired be- havior. These options provide clear examples of arefusal response that redirects the conversation back to the topic, and an engaging response that inappropriately responds to the distractor. In this setup, each refusal response ( qp i) and en- gaging response ( qn i) end with a different comple- tion letter: one where the desired behavior comple- tion letter ( cp; e.g. A) is selected, and another where the undesired behavior completion letter ( cn; e.g. B) is chosen. The refusal and engaging choices are randomly assigned across all test inputs to prevent positional bias in the evaluation. This structure en- ables the explicit differentiation needed for steering vector extraction. Note that since the CantTalkA- boutThis source dataset lacks diverse refusal and engaging responses, these were generated using GPT-4o2(OpenAI et al., 2024). Full details of the prompt designs for generating completions are pro- vided in the Appendix H. In order to extract the steering vector from the newly constructed Steering QA Dataset , we per- form a forward pass f(·)through the pre-trained 2https://platform.openai.com/docs/models/ gpt-4o 4 language model for each pair qi∈S. At a desig- nated layer l, we compute the hidden representa- tions h(l) pforcpandh(l) nforcn. Using the theoreti- cal definition in Section 3.2, the steering vector v is derived by averaging and normalizing the differ- ences in activations across all pairs. During infer- ence, the steering vector is applied to ensure the model’s outputs remain consistent with the topic. 4.2 Entropy-Based Coefficient Scaling Recent studies (Chen et al., 2024; Ji et al., 2024; Azaria and Mitchell, 2023; Chuang et al., 2024) have demonstrated that LLM internal states can be leveraged for reliable generation. Inspired by these findings, we conduct preliminary investiga- tions regarding LLM internal states. Experimental results (Figures 3 and 6) reveal that under the same system instruction, the entropy distribution of each layer differs between distractor and on-topic inputs. This observation suggests that layer-wise entropy can serve as a discriminator between the two in- put types. Based on this insight, we introduce an entropy-based coefficient scaling method, which is detailed in Sections 4.2.1 and 4.2.2. 4.2.1 Layer-Wise Entropy Analysis We define the entropy E(l) dandE(l) oat layer lfor the inputs | https://arxiv.org/abs/2505.16526v1 |
xd={I, D, d }andxo={I, D, o }, where oandddenote the on-topic and distractor user queries, respectively, during the generation of k= 2tokens. For each output token, the entropy E(l)is computed as follows: E(l)=E" −VX i=1p(l) ilog p(l) i+ϵ# , p(l) i=exp(z(l) i) PV j=1exp(z(l) j). Here, Vdenotes the size of the vocabulary. For a given layer l,p(l) iis the probability of the i-th to- ken, which is obtained using the softmax function applied to the logits z(l) i. Moreover, z(l) irepresents the logit of the model for the i-th token at layer l. The constant ϵis a small value of 10−12, which is added to ensure numerical stability when comput- ing the logarithm of the probabilities. The entropy at layer lquantifies the uncertainty of the token probabilities and is averaged across all inputs in the batch. We compute entropy as the average over the two tokens because the first token (e.g., < s > ) On-topic Distractor345678Entropy DistributionViolin Plot of Entropy (Layers: 16)(a) Entropy distribution in layer 16. On-topic Distractor23456789Entropy DistributionViolin Plot of Entropy (Layers: 19) (b) Entropy distribution in layer 19. Figure 3: Comparison of entropy distribution in different layers of Llama-2-7b-chat. typically carries minimal variation ( entropy ≈0) due to its role as the generation start token. We observe significant differences in entropy distributions between distractor and on-topic in- puts at layers 16 and 19 (Figures 3 and 6). While both layers exhibit clear distributional differences, the relative entropy values vary by layer; on-topic inputs show higher entropy in some layers (Fig- ure 3a), whereas distractor inputs have higher en- tropy in others (Figure 3b). Notably, as seen in Figures 3a and 3b, the distinction at layer 16 is more pronounced. The implications of these dif- ferences on experimental outcomes are discussed in Section 5.2, while a detailed analysis of the ob- served entropy patterns is provided in Section 6.4. Based on these findings, we select layers 16 and 19 asL, where Lrepresents the LLM layers used for entropy extraction. 4.2.2 Implementation of Entropy-Based Coefficient Scaling We introduce a coefficient scaling mechanism to dynamically adjust the steering intensity based on input entropy. The scaling coefficient is defined as: C(L) H=Cmax 1 +e−αδ(H(L)−t), where C(L) His the entropy-based scaling coefficient, and the entropy at layer Lof the model’s response 5 L Steer @ Distractor ↑ On-topic ↑ Overall ↑ -Prompt Only 0.282 0.938 0.610 1613 0.758 (+0.476) 0.820 (-0.118) 0.789 (+0.179) 14 0.795 (+0.512) 0.775 (-0.163) 0.784 (+0.174) 15 0.810 (+0.529) 0.747 (-0.191) 0.779 (+0.169) 16 0.709 (+0.427) 0.895 (-0.043) 0.802 (+0.192) 1913 0.773 (+0.490) 0.709 (-0.229) 0.741 (+0.131) 14 0.793 (+0.511) 0.644 (-0.294) 0.718 (+0.108) 15 0.784 (+0.502) 0.693 (-0.245) 0.738 (+0.128) 16 0.749 (+0.467) 0.818 (-0.120) 0.784 (+0.174) Table 2: Performance comparison of distractor and on-topic inputs across different layers with Prompt Only and EnSToM. The overall accuracy is computed as the average of distractor and on-topic accuracies. Column Lindicates which layer His computed from, and Steer @ indicates where steering vector was added. The overall best accuracy is highlighted in bold, while the best accuracies for individual metrics | https://arxiv.org/abs/2505.16526v1 |
(distractor and on-topic, within EnSToM results) are underlined . The symbols " +" or "−" indicate the point gain or loss relative to the prompt-only settings. Note that higher values for all metrics indicate better performance. to the user query is denoted as H(L). The maxi- mum coefficient Cmaxis set to 1.5 based on prior findings by Rimsky et al. (2024)3. The slope param- eterα, which controls the steepness of the sigmoid function, is set to 5, while the threshold entropy t is empirically set to 7.5. In order to adjust the scaling direction based on entropy differences between distractor and on-topic inputs, the parameter δis set to -1 when the aver- age entropy of distractors is lower than on-topic inputs (Layer 16) and +1when it is higher (Layer 19). This adjustment ensures that the coefficient increases when the entropy deviates from tin the appropriate direction. By dynamically modulating the coefficient, this approach enhances refusal accu- racy for distractor inputs while preserving engaging responses for on-topic interactions. 4.3 Response Generation During response generation, the model processes an input consisting of system instructions ( I), di- alogue history ( D), and the user question (either off-topic dor on-topic o). The model then gener- atesk= 2tokens using greedy decoding, during which the entropy value ( H) is computed at layers 16 and 19. This entropy value is used to calculate the co- efficient via the entropy-based coefficient scaling mechanism outlined in Section 4.2.2. The com- puted coefficient is applied to the steering vector 3For further analysis of coefficient scaling, see Ap- pendix G.1. 2 3 4 5 6 7 8 9 Threshold (t)0.30.40.50.60.70.80.9Accuracy Effect of Entropy-based Scaling at Different Thresholds Distractor On-topic Overall Vanilla Distractor Vanilla On-topic Vanilla OverallFigure 4: Effect of entropy-based scaling at different thresholds t. (v), which is added to the model’s activations at a designated layer ( h(l)). Note that this layer is distinct from the layer that is used for entropy ex- traction: h′(l)=h(l)+C(L) H·v This process ensures that the steering intensity dy- namically adapts to the input’s entropy, which en- hances the model’s ability to handle distractors while maintaining accuracy on on-topic inputs. 5 Experiments 5.1 Experimental Setup We conduct our main experiments using LLaMA- 2-7B-Chat (Touvron et al., 2023) and Minstral-8B- Instruct-24104to evaluate the generalizability of 4https://mistral.ai/en/news/ministraux 6 Steer @ Distractor On-topic Overall Prompt Only 0.25 0.98 0.62 17 0.65 (+0.40) 0.86 (-0.12) 0.75 (+0.14) 18 0.63 (+0.38) 0.91 (-0.07) 0.76 (+0.15) Table 3: Performance of EnSToM at Ministral-8b- Instruct-2410. our method. Both models are executed on a single NVIDIA RTX A6000 GPU, and do not involve ad- ditional training; instead, they focus on extracting steering vectors and computing entropy. Steering is applied at layers 13-16 since the middle layers are more effective at modifying generation behavior (Rimsky et al., 2024). We evaluate our method on the CantTalkAbout- This dataset (Sreedhar et al., 2024), which spans 10 domains. Our experiments focus on the banking do- main, which consists of 60 independent scenarios with 10 to 15 samples each. To prevent data contam- ination, we use 100 samples5from | https://arxiv.org/abs/2505.16526v1 |
10 scenarios to compute steering vectors, and keep them separate from the test set, which includes 550 samples each for distractor and on-topic cases. Evaluation was conducted separately for distractor and on-topic settings. Detailed dataset statistics are provided in Appendix B. For evaluation, we use GPT-4o to classify model responses as refusals or engaging responses. The prompts used for evaluation are de- tailed in Appendix H.3. Metric We evaluate the model’s performance us- ing two accuracy metrics: (1) Distractor accuracy , which is defined as the proportion of responses where the model correctly refuses off-topic content, and (2) On-topic accuracy , which is the propor- tion of responses where the model appropriately engages with relevant content without refusing. 5.2 Results Table 2 compares the performance of EnSToM across layers 13 to 16 under two entropy extraction settings, L= 16 andL= 19 , against the baseline Prompt Only method. In all conditions, we use a fixed threshold t= 7.5and the same prompt, which combines system instructions (Appendix H.1) and dialogue history (Appendix H.2), followed by the user question. The prompt-only baseline achieves a distractor accuracy of 0.282 and an on-topic accu- racy of 0.938, which results in an overall score of 5Extended experimental results with varying sample sizes are provided in Appendix D. 0 25 50 75 100 125 150 175 200 Sample Index23456789EntropyEntropy by Sample (Layer: 33) harmless harmfulFigure 5: Entropy distribution of on-topic and distractor for jailbreak defense task at layer 33 of Ministral-8b- Instruct-2410 model. 0.610. Since the Prompt Only method does not use steering, LandSteer @ settings are not applicable. This result highlights the baseline model’s limited ability to handle distractor inputs effectively. On the other hand, the application of the steer- ing vector significantly improves distractor accu- racy, with the highest improvement observed at L= 16 andSteer @ = 15, which reaches 0.810 (+0.529). The highest overall accuracy is achieved atL= 16 andSteer @ = 16, with an overall accu- racy of 0.802 (+0.192). This setting also maintains the highest on-topic accuracy (0.895). Overall, our method achieves a notable increase in overall ac- curacy and the largest improvement in distractor accuracy while minimizing losses in on-topic accu- racy. Comparing the different Lsettings, we observe that on-topic accuracy degrades more in the L= 19 setting, while distractor accuracy improves simi- larly in both cases. As a result, overall performance is generally higher in the L= 16 configuration. This aligns with the entropy distribution differences shown in Figure 3, where layer 16 exhibits a clearer separation between distractor and on-topic entropy values. These findings suggest that the effective- ness of entropy scaling is influenced by the degree of entropy separation at different layers. 6 Discussion This section discusses the impact of entropy-based coefficient scaling, generalization across models and tasks, and layer-wise entropy patterns across domains. 7 0 5 10 15 20 25 30 Layer1.0 0.5 0.00.51.0Mean Entropy Difference (Distractor - On-T opic) Mean Entropy Difference by Layer Across Domains Domains Banking Computer Troubleshooting Education Health Insurance Legal Real Estate T axes TravelFigure 6: Layer-wise entropy difference (distractor-on-topic) across domains. 6.1 | https://arxiv.org/abs/2505.16526v1 |
Effect of Entropy-based Scaling Figure 4 illustrates the effect of entropy-based scal- ing on topic adherence across different threshold values t. Here, Vanilla refers to applying the steer- ing vector with a fixed coefficient ( Cmax) without dynamic scaling. Vanilla achieves an overall ac- curacy of 0.75, exhibiting strong distractor perfor- mance (0.80) but lower on-topic accuracy (0.70). However, EnSToM demonstrates a clear perfor- mance improvement over Vanilla setting. At low thresholds ( t= 2,4), on-topic accuracy peaks (0.95), but distractor accuracy drops significantly (0.30–0.32). In contrast, higher thresholds ( t= 7.5,8) achieve the best overall accuracy (0.80) by balancing distractor handling (0.71–0.76) with min- imal on-topic degradation (0.84–0.89). Beyond this range ( t= 9), distractor accuracy returns to base- line, while on-topic performance declines (0.72), in- dicating that exceeding the optimal threshold com- promises scenario adherence. These results demon- strate the effectiveness of entropy-based scaling in maintaining topic consistency while minimizing trade-offs. 6.2 Cross Architecture Generalization To evaluate the generalizability of EnSToM be- yond the Llama family, we conduct experiments on Minstral-8B-Instruct-2410. Table 3 presents the re- sults of EnSToM ( L= 28 andt= 3.0)6. Without entropy-based scaling ( Prompt Only ), the model exhibits strong on-topic accuracy (0.98) but strug- gles with distractor handling (0.25), leading to a low overall accuracy (0.62). Applying EnSToM at layers 17 and 18, however, significantly im- proves distractor accuracy (+0.40 and +0.38, re- spectively) while maintaining competitive on-topic 6Systematically selected based on the entropy distribution.performance. The best overall accuracy (0.76) is achieved at layer 18, which confirms EnSToM’s effectiveness across different model architectures. 6.3 Task-level Generalization In order to assess task-level generalization abilities of the proposed model, we shift to the jailbreak defense task7. Pilot tests reveal that jailbreak at- tacks were successful most of the time. This means that the model can only generate unsafe responses. However, the model is able to distinguish between harmful and harmless content due to entropy differ- ences at layer 338(Figure 5). While refusal-based steering vectors alone were ineffective, these find- ings suggest the potential for adapting EnSToM to jailbreak defense tasks. 6.4 Layer-wise Entropy Analysis Prior studies (Li et al., 2025; Azaria and Mitchell, 2023; Chuang et al., 2024) have highlighted that in- termediate layers significantly influence the genera- tion process in large language models. Specifically, in the LLaMA-2-7B-chat model used in our study, Li et al. (2025) demonstrates a clear transition in token attention across intermediate layers: initial layers predominantly capture syntactic tokens, mid- dle layers (e.g., layer 16) shift attention towards semantically crucial tokens, and deeper layers (e.g., layers 19–20) further distribute attention onto to- kens with secondary semantic roles. In our experimental setup—comprising a system instruction, dialogue history, and user query—we observe a similar attention dynamic influencing en- tropy distributions. At layer 16, distractor queries, 7Dataset construction details are provided in Appendix C. 8Layer 33 was selected based on the maximally observed difference between harmful and harmless entropy distributions across all layers 8 semantically incongruent with the dialogue con- text and system instruction, attract highly focused attention on their unique tokens. This focused at- | https://arxiv.org/abs/2505.16526v1 |
tention activates fewer logits, resulting in signifi- cantly lower entropy. Conversely, on-topic queries, contextually aligned with the instruction and di- alogue history, maintain attention broadly dis- tributed across multiple contextually relevant to- kens. This broader activation leads to higher en- tropy values compared to distractors. Interestingly, this relationship reverses at deeper layers (e.g., layers 19–20). Here, distractor queries experience increased entropy as attention disperses onto additional semantically relevant tokens be- yond the initial focus. Meanwhile, on-topic queries exhibit stable entropy, reflecting sustained dis- tributed attention across the context. Moreover, this entropy pattern consistently emerges across various domains, as Figure 6 illus- trates. Distractor inputs consistently exhibit lower entropy at layer 16 and higher entropy at lay- ers 18–20 relative to on-topic inputs, regardless of domain variations. This cross-domain consis- tency—further supported by our domain-shift ex- periments detailed in Appendix D.2—underscores the robustness of our observations and indicates a generalizable mechanism in the model’s internal processing. These findings align well with established un- derstandings of layer specialization in LLMs (Gera et al., 2023): lower layers encode syntactic infor- mation, intermediate layers encode semantic signif- icance, and higher layers integrate these semantic and contextual representations. Thus, our entropy analysis provides empirical evidence for how in- termediate layers differentially process distractor versus on-topic inputs, highlighting layer-specific functional roles and emphasizing the practical ap- plicability of entropy-based methods in detecting semantic consistency within dialogues. 7 Conclusion In this paper, we introduced EnSToM, a lightweight and training-free method for enhancing topic con- sistency in task-oriented dialogue systems using entropy-scaled steering vectors. By integrating steering vector with an entropy-based coefficient scaling mechanism, our approach dynamically ad- justs steering intensity based on the model’s gener- ation entropy. Evaluations on the CantTalkAbout- This dataset demonstrated a significant improve-ment in distractor accuracy while preserving on- topic performance, which results in an increase of overall accuracy. Furthermore, experiments across different mod- els, domains, and tasks validated the generalizabil- ity of our method. Even with limited steering vec- tor samples, EnSToM remained effective, making it suitable for low-resource settings. Additionally, our layer-wise entropy analysis provides valuable in- sights into LLM behavior, contributing to improved interpretability. These findings support the develop- ment of adaptive and scenario-consistent dialogue systems for real-world applications. Acknowledgments This research was supported by Smart Health- Care for Police Officers Program(www.kipot.or.kr) through the Korea Institutes of Police Tech- nology(KIPoT) funded by the Korean Na- tional Police Agency(KNPA, Korea)(No. RS-2022- PT000186)(47.5%). This work was supported by the IITP(Institute of Information & Coummu- nications Technology Planning & Evaluation)- ITRC(Information Technology Research Center) grant funded by the Korea government(Ministry of Science and ICT)(IITP-2025-RS-2024-00437866) (47.5%). This work was supported by Insti- tute of Information & communications Technol- ogy Planning & Evaluation (IITP) grant funded by the Korea government(MSIT) (No.RS-2019- II191906, Artificial Intelligence Graduate School Program(POSTECH), 5%). 8 Limitations Our coefficient scaling approach relies on entropy differences between distractor and normal inputs at specific model layers, with experiments confirming distinct entropy distributions. However, some sam- ples lie within overlapping regions of these distri- butions, making them hard negatives. Due to their subtle entropy variations, these cases | https://arxiv.org/abs/2505.16526v1 |
can some- times produce results opposite to the intended ef- fect, complicating the distinction between on-topic and off-topic inputs. Addressing this issue requires further research. Additionally, our current method requires man- ually selecting the entropy extraction layer Land threshold t. In this study, we empirically identified layers with the most pronounced distribution dif- ferences and manually set the coefficient scaling threshold. For broader applicability, transitioning 9 from a manual to an automated selection process remains an important area for future exploration. References Andy Arditi, Oscar Balcells Obeso, Aaquib Syed, Daniel Paleka, Nina Rimsky, Wes Gurnee, and Neel Nanda. 2024. Refusal in language models is me- diated by a single direction. In The Thirty-eighth Annual Conference on Neural Information Process- ing Systems . Amos Azaria and Tom Mitchell. 2023. The internal state of an LLM knows when it‘s lying. In Find- ings of the Association for Computational Linguistics: EMNLP 2023 , pages 967–976, Singapore. Associa- tion for Computational Linguistics. Yasaman Bahri, Ethan Dyer, Jared Kaplan, Jaehoon Lee, and Utkarsh Sharma. 2024. Explaining neural scaling laws. Proceedings of the National Academy of Sciences (PNAS) , 121(27):e2311878121. Chao Chen, Kai Liu, Ze Chen, Yi Gu, Yue Wu, Mingyuan Tao, Zhihang Fu, and Jieping Ye. 2024. INSIDE: LLMs’ internal states retain the power of hallucination detection. In The Twelfth International Conference on Learning Representations . Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James R. Glass, and Pengcheng He. 2024. Dola: Decoding by contrasting layers improves factuality in large language models. In The Twelfth International Conference on Learning Representations . Ariel Gera, Roni Friedman, Ofir Arviv, Chulaka Gu- nasekara, Benjamin Sznajder, Noam Slonim, and Eyal Shnarch. 2023. The benefits of bad advice: Autocontrastive decoding across model layers. In Proceedings of the 61st Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers) , pages 10406–10420, Toronto, Canada. Association for Computational Linguistics (ACL). Hakan Inan, Kartikeya Upasani, Jianfeng Chi, Rashi Rungta, Krithika Iyer, Yuning Mao, Michael Tontchev, Qing Hu, Brian Fuller, Davide Testuggine, and Madian Khabsa. 2023. Llama guard: Llm-based input-output safeguard for human-ai conversations. Preprint , arXiv:2312.06674. Ziwei Ji, Delong Chen, Etsuko Ishii, Samuel Cahyaw- ijaya, Yejin Bang, Bryan Wilie, and Pascale Fung. 2024. LLM internal states reveal hallucination risk faced with a query. In Proceedings of the 7th Black- boxNLP Workshop: Analyzing and Interpreting Neu- ral Networks for NLP , pages 88–104, Miami, Florida, US. Association for Computational Linguistics. Siwon Kim, Sangdoo Yun, Hwaran Lee, Martin Gubri, Sungroh Yoon, and Seong Joon Oh. 2024. Propile: probing privacy leakage in large language models. In Proceedings of the 37th International Conference on Neural Information Processing Systems (NeurIPS) ,NIPS ’23, Red Hook, NY , USA. Curran Associates Inc. Bruce W. Lee, Inkit Padhi, Karthikeyan Natesan Rama- murthy, Erik Miehling, Pierre Dognin, Manish Na- gireddy, and Amit Dhurandhar. 2024. Programming refusal with conditional activation steering. Preprint , arXiv:2409.05907. Shen Li, Liuyi Yao, Lan Zhang, and Yaliang Li. 2025. Safety layers in aligned large language models: The key to LLM security. In The Thirteenth International Conference on Learning Representations . Norman Mu, Sarah Chen, Zifan Wang, Sizhe | https://arxiv.org/abs/2505.16526v1 |
Chen, David Karamardian, Lulwa Aljeraisy, Basel Alomair, Dan Hendrycks, and David Wagner. 2024. Can llms follow simple rules? Preprint , arXiv:2311.04235. Humza Naveed, Asad Ullah Khan, Shi Qiu, Muhammad Saqib, Saeed Anwar, Muhammad Usman, Naveed Akhtar, Nick Barnes, and Ajmal Mian. 2024. A comprehensive overview of large language models. Preprint , arXiv:2307.06435. OpenAI, :, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander M ˛ adry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, Alex Renzin, Alex Tachard Passos, Alexander Kirillov, Alexi Christakis, Alexis Conneau, Ali Kamali, Allan Jabri, Allison Moyer, Allison Tam, Amadou Crookes, Amin Tootoochian, Amin Tootoonchian, Ananya Kumar, Andrea Vallone, Andrej Karpathy, Andrew Braunstein, Andrew Cann, Andrew Codispoti, An- drew Galu, Andrew Kondrich, Andrew Tulloch, An- drey Mishchenko, Angela Baek, Angela Jiang, An- toine Pelisse, Antonia Woodford, Anuj Gosalia, Arka Dhar, Ashley Pantuliano, Avi Nayak, Avital Oliver, Barret Zoph, Behrooz Ghorbani, Ben Leimberger, Ben Rossen, Ben Sokolowsky, Ben Wang, Benjamin Zweig, Beth Hoover, Blake Samic, Bob McGrew, Bobby Spero, Bogo Giertler, Bowen Cheng, Brad Lightcap, Brandon Walkin, Brendan Quinn, Brian Guarraci, Brian Hsu, Bright Kellogg, Brydon East- man, Camillo Lugaresi, Carroll Wainwright, Cary Bassin, Cary Hudson, Casey Chu, Chad Nelson, Chak Li, Chan Jun Shern, Channing Conger, Char- lotte Barette, Chelsea V oss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Clau- dia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, Dane Sherburn, Daniel Kappler, Daniel Levin, Daniel Levy, David Carr, David Farhi, David Mely, David Robin- son, David Sasaki, Denny Jin, Dev Valladares, Dim- itris Tsipras, Doug Li, Duc Phong Nguyen, Duncan Findlay, Edede Oiwoh, Edmund Wong, Ehsan As- dar, Elizabeth Proehl, Elizabeth Yang, Eric Antonow, Eric Kramer, Eric Peterson, Eric Sigler, Eric Wal- lace, Eugene Brevdo, Evan Mays, Farzad Khorasani, Felipe Petroski Such, Filippo Raso, Francis Zhang, 10 Fred von Lohmann, Freddie Sulit, Gabriel Goh, Gene Oden, Geoff Salmon, Giulio Starace, Greg Brockman, Hadi Salman, Haiming Bao, Haitang Hu, Hannah Wong, Haoyu Wang, Heather Schmidt, Heather Whit- ney, Heewoo Jun, Hendrik Kirchner, Henrique Ponde de Oliveira Pinto, Hongyu Ren, Huiwen Chang, Hyung Won Chung, Ian Kivlichan, Ian O’Connell, Ian O’Connell, Ian Osband, Ian Silber, Ian Sohl, Ibrahim Okuyucu, Ikai Lan, Ilya Kostrikov, Ilya Sutskever, Ingmar Kanitscheider, Ishaan Gulrajani, Jacob Coxon, Jacob Menick, Jakub Pachocki, James Aung, James Betker, James Crooks, James Lennon, Jamie Kiros, Jan Leike, Jane Park, Jason Kwon, Ja- son Phang, Jason Teplitz, Jason Wei, Jason Wolfe, Jay Chen, Jeff Harris, Jenia Varavva, Jessica Gan Lee, Jessica Shieh, Ji Lin, Jiahui Yu, Jiayi Weng, Jie Tang, Jieqi Yu, Joanne Jang, Joaquin Quinonero Candela, Joe Beutler, Joe Landers, Joel Parish, Jo- hannes Heidecke, John Schulman, Jonathan Lach- man, Jonathan McKay, Jonathan Uesato, Jonathan Ward, Jong Wook Kim, Joost Huizinga, Jordan Sitkin, Jos Kraaijeveld, Josh Gross, Josh Kaplan, Josh Snyder, Joshua Achiam, Joy Jiao, Joyce Lee, Juntang Zhuang, Justyn Harriman, Kai Fricke, Kai Hayashi, Karan Singhal, Katy Shi, Kavin | https://arxiv.org/abs/2505.16526v1 |
Karthik, Kayla Wood, Kendra Rimbach, Kenny Hsu, Kenny Nguyen, Keren Gu-Lemberg, Kevin Button, Kevin Liu, Kiel Howe, Krithika Muthukumar, Kyle Luther, Lama Ahmad, Larry Kai, Lauren Itow, Lauren Work- man, Leher Pathak, Leo Chen, Li Jing, Lia Guy, Liam Fedus, Liang Zhou, Lien Mamitsuka, Lil- ian Weng, Lindsay McCallum, Lindsey Held, Long Ouyang, Louis Feuvrier, Lu Zhang, Lukas Kon- draciuk, Lukasz Kaiser, Luke Hewitt, Luke Metz, Lyric Doshi, Mada Aflak, Maddie Simens, Madelaine Boyd, Madeleine Thompson, Marat Dukhan, Mark Chen, Mark Gray, Mark Hudnall, Marvin Zhang, Marwan Aljubeh, Mateusz Litwin, Matthew Zeng, Max Johnson, Maya Shetty, Mayank Gupta, Meghan Shah, Mehmet Yatbaz, Meng Jia Yang, Mengchao Zhong, Mia Glaese, Mianna Chen, Michael Jan- ner, Michael Lampe, Michael Petrov, Michael Wu, Michele Wang, Michelle Fradin, Michelle Pokrass, Miguel Castro, Miguel Oom Temudo de Castro, Mikhail Pavlov, Miles Brundage, Miles Wang, Minal Khan, Mira Murati, Mo Bavarian, Molly Lin, Mu- rat Yesildal, Nacho Soto, Natalia Gimelshein, Na- talie Cone, Natalie Staudacher, Natalie Summers, Natan LaFontaine, Neil Chowdhury, Nick Ryder, Nick Stathas, Nick Turley, Nik Tezak, Niko Felix, Nithanth Kudige, Nitish Keskar, Noah Deutsch, Noel Bundick, Nora Puckett, Ofir Nachum, Ola Okelola, Oleg Boiko, Oleg Murk, Oliver Jaffe, Olivia Watkins, Olivier Godement, Owen Campbell-Moore, Patrick Chao, Paul McMillan, Pavel Belov, Peng Su, Pe- ter Bak, Peter Bakkum, Peter Deng, Peter Dolan, Peter Hoeschele, Peter Welinder, Phil Tillet, Philip Pronin, Philippe Tillet, Prafulla Dhariwal, Qiming Yuan, Rachel Dias, Rachel Lim, Rahul Arora, Ra- jan Troll, Randall Lin, Rapha Gontijo Lopes, Raul Puri, Reah Miyara, Reimar Leike, Renaud Gaubert, Reza Zamani, Ricky Wang, Rob Donnelly, Rob Honsby, Rocky Smith, Rohan Sahai, Rohit Ramchan- dani, Romain Huet, Rory Carmichael, Rowan Zellers,Roy Chen, Ruby Chen, Ruslan Nigmatullin, Ryan Cheu, Saachi Jain, Sam Altman, Sam Schoenholz, Sam Toizer, Samuel Miserendino, Sandhini Agar- wal, Sara Culver, Scott Ethersmith, Scott Gray, Sean Grove, Sean Metzger, Shamez Hermani, Shantanu Jain, Shengjia Zhao, Sherwin Wu, Shino Jomoto, Shi- rong Wu, Shuaiqi, Xia, Sonia Phene, Spencer Papay, Srinivas Narayanan, Steve Coffey, Steve Lee, Stew- art Hall, Suchir Balaji, Tal Broda, Tal Stramer, Tao Xu, Tarun Gogineni, Taya Christianson, Ted Sanders, Tejal Patwardhan, Thomas Cunninghman, Thomas Degry, Thomas Dimson, Thomas Raoux, Thomas Shadwell, Tianhao Zheng, Todd Underwood, Todor Markov, Toki Sherbakov, Tom Rubin, Tom Stasi, Tomer Kaftan, Tristan Heywood, Troy Peterson, Tyce Walters, Tyna Eloundou, Valerie Qi, Veit Moeller, Vinnie Monaco, Vishal Kuo, Vlad Fomenko, Wayne Chang, Weiyi Zheng, Wenda Zhou, Wesam Manassra, Will Sheu, Wojciech Zaremba, Yash Patil, Yilei Qian, Yongjik Kim, Youlong Cheng, Yu Zhang, Yuchen He, Yuchen Zhang, Yujia Jin, Yunxing Dai, and Yury Malkov. 2024. Gpt-4o system card. Preprint , arXiv:2410.21276. Dhavalkumar Patel, Ganesh S. Raut, Eyal Zimlichman, Satya Narayana Cheetirala, Girish N. Nadkarni, Ben- jamin S. Glicksberg, Robert M Freeman, Prem Tim- sina, and Eyal Klang. 2023. The limits of prompt engineering in medical problem-solving: A compara- tive analysis with chatgpt on calculation based usmle medical questions. In medRxiv . Nina Rimsky, Nick Gabrieli, Julian Schulz, Meg Tong, Evan Hubinger, and Alexander Turner. 2024. Steer- ing llama 2 via contrastive activation addition. In Pro- ceedings of the 62nd Annual Meeting of | https://arxiv.org/abs/2505.16526v1 |
the Associa- tion for Computational Linguistics (Volume 1: Long Papers) , pages 15504–15522, Bangkok, Thailand. As- sociation for Computational Linguistics (ACL). Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang. 2024. "do anything now": Charac- terizing and evaluating in-the-wild jailbreak prompts on large language models. In Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security , CCS ’24, page 1671–1685, New York, NY , USA. Association for Computing Ma- chinery. Makesh Narsimhan Sreedhar, Traian Rebedea, Shaona Ghosh, Jiaqi Zeng, and Christopher Parisien. 2024. CantTalkAboutThis: Aligning language models to stay on topic in dialogues. In Findings of the Associ- ation for Computational Linguistics: EMNLP 2024 , pages 12232–12252, Miami, Florida, USA. Associa- tion for Computational Linguistics (ACL). Asa Cooper Stickland, Alexander Lyzhov, Jacob Pfau, Salsabila Mahdi, and Samuel R. Bowman. 2024. Steering without side effects: Improving post- deployment control of language models. In Neurips Safe Generative AI Workshop 2024 . Nishant Subramani, Nivedita Suresh, and Matthew Pe- ters. 2022. Extracting latent steering vectors from 11 pretrained language models. In Findings of the Asso- ciation for Computational Linguistics: ACL 2022 , pages 566–581, Dublin, Ireland. Association for Computational Linguistics(ACL). Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, An- thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Di- ana Liskovich, Yinghai Lu, Yuning Mao, Xavier Mar- tinet, Todor Mihaylov, Pushkar Mishra, Igor Moly- bog, Yixin Nie, Andrew Poulton, Jeremy Reizen- stein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subrama- nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay- lor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Ro- driguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine- tuned chat models. Preprint , arXiv:2307.09288. Alexander Matt Turner, Lisa Thiergart, David Udell, Gavin Leech, Ulisse Mini, and Monte MacDiarmid. 2023. Activation addition: Steering language models without optimization. CoRR , abs/2308.10248. Haoran Wang and Kai Shu. 2024. Trojan activation at- tack: Red-teaming large language models using steer- ing vectors for safety-alignment. In Proceedings of the 33rd ACM International Conference on Informa- tion and Knowledge Management , CIKM ’24, page 2347–2357, New York, NY , USA. Association for Computing Machinery (ACM). Yu Xia, Fang Kong, Tong Yu, Liya Guo, Ryan A. Rossi, Sungchul Kim, and Shuai Li. 2024. Which llm to play? convergence-aware online model selection with time-increasing bandits. In Proceedings of the ACM Web Conference 2024 , WWW ’24, page 4059–4070, New York, NY , USA. Association for Computing Machinery (ACM). Yueqi Xie, Minghong Fang, Renjie Pi, and Neil Gong. 2024. GradSafe: Detecting jailbreak prompts for LLMs via safety-critical gradient analysis. In Pro- ceedings of the 62nd Annual Meeting of the | https://arxiv.org/abs/2505.16526v1 |
Associa- tion for Computational Linguistics (Volume 1: Long Papers) , pages 507–518, Bangkok, Thailand. Associ- ation for Computational Linguistics (ACL). Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, and Radha Poovendran. 2024. SafeDecoding: Defending against jailbreak attacks via safety-aware decoding. In Proceedings of the 62nd Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers) , pages 5587–5605, Bangkok, Thailand. Association for Computational Linguistics (ACL).Li-Ming Zhan, Haowen Liang, Bo Liu, Lu Fan, Xiao- Ming Wu, and Albert Y .S. Lam. 2021. Out-of-scope intent detection with self-supervision and discrimi- native training. In Proceedings of the 59th Annual Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) , pages 3521–3532, Online. Association for Computational Linguistics (ACL). Appendix A Experimental Details In constructing prompts for both distractor and on- topic cases, the system instruction (e.g., in Sec- tion H.1) varies depending on the scenario but is always included in its entirety within each prompt. For distractor cases, the prompt incorporates the distractor question along with its corresponding dialogue history, ensuring a complete contextual representation as described in Section H.2. Con- versely, for on-topic cases, the prompt consists of the dialogue history up to the last on-topic user query, and deliberately excludes the distractor and its associated turns to maintain contextual rele- vance while adhering to the defined scope of the dialogue. This ensures that distractor-specific and on-topic prompts are constructed in alignment with their intended context for the evaluation. B Source Dataset Details The CantTalkAboutThis dataset comprises data from ten distinct domains: banking, computer troubleshooting, education, health, insurance, legal, real estate, taxes, travel , and virtual home assistant . Each domain consists of approximately 60 scenarios, with 10 to 15 samples per scenario, totaling 650 samples per domain. All data were generated using OpenAI’s GPT-4-turbo model. Note that the virtual home assistant domain was excluded from this study, as its data was not accessible during the research period. The CantTalkAbout- This dataset is released under the CC-BY-NC 4.0 license, which permits non-commercial use with proper attribution. In this study, the data was utilized exclusively for research purposes to investigate and improve topic maintenance in dialogue systems. C Jailbreak Dataset Construction The Jailbreak dataset is constructed using a prompt injection approach. We utilize the harmless_test 12 and harmful_test splits from Arditi et al. (2024), where each sample consists of an instruction and a category, with the instruction representing a harm- less or harmful input query. This dataset is released under the Apache-2.0 license, which permits free use, modification, and distribution with proper at- tribution. Additionally, we select one of the most effective jailbreak prompt templates from (Shen et al., 2024), named Dev Mode v2 . Lettbe the jailbreak template and qa query (either harmful qhor harmless qs). The dataset con- sists of input pairs (t, q). The method for comput- ing layer entropy follows the approach described in Section 4.2.1. D Additional Experiments D.1 Impact of Data Size on Steering Effectiveness The results in the upper part | https://arxiv.org/abs/2505.16526v1 |
of Table 4 demonstrate the impact of sample size on steering vector extrac- tion within the banking domain. With 100 samples, the model achieved distractor accuracies of 0.81at layer 15 and 0.71at layer 16, while on-topic ac- curacies reached 0.75and0.89at the same layers. Although larger sample sizes provide greater sta- bility, EnSToM remains effective even with as few as 10 samples. At this reduced sample size, distrac- tor accuracies were 0.74and0.67, while on-topic accuracies reached 0.85and0.90at layers 15 and 16, respectively. These results indicate that while increasing the sample size enhances steering pre- cision, the method maintains effectiveness even with limited data, underscoring its applicability in low-resource settings. D.2 Cross-Domain Performance Analysis The results in Table 4 also demonstrate the cross- domain applicability of the proposed method. Al- though the steering vector is extracted from a differ- ent domain, it is able to effectively improve topic adherence in the banking domain test set. This indicates that domain-specific adjustments are un- necessary for robust performance. These findings suggest that the steering vector captures a generalizable refusal mechanism rather than relying on domain-dependent features. By en- capsulating a universal strategy for handling dis- tractor inputs, our approach ensures adaptability across different domains with minimal modifica- tions, which reinforces its practical utility in diverse applications.Configuration tLayer 15 Layer 16 Distractor On-topic Distractor On-topic banking_10- 0.82 0.61 0.73 0.81 7.5 0.74 0.85 0.67 0.90 banking_30- 0.89 0.50 0.84 0.66 7.5 0.77 0.79 0.72 0.84 banking_50- 0.85 0.51 0.80 0.73 7.5 0.74 0.78 0.70 0.89 banking_100- 0.85 0.53 0.80 0.70 7.5 0.81 0.75 0.71 0.89 education_100- 0.78 0.63 0.78 0.81 7.5 0.71 0.83 0.67 0.92 health_100- 0.76 0.73 0.75 0.78 7.5 0.70 0.87 0.66 0.93 insurance_100- 0.72 0.73 0.72 0.81 7.5 0.70 0.85 0.64 0.93 Table 4: Comparison of distractor and on-topic accuracy across different configurations. domain_num denotes the domain where the steering vector was extracted using numsamples. t=−represents vanilla steering, while t= 7.5corresponds to the application of EnSToM. L Var L2 Norm√ Var/L2 0 0.000471 1.391842 0.0155 5 0.004571 4.338118 0.0156 10 0.044553 14.266380 0.0152 16 0.072676 22.352388 0.0120 19 0.103875 26.220320 0.0123 25 0.224502 37.387287 0.0127 31 0.578830 59.291191 0.0126 Table 5: Per-layer variance statistics of steering vectors. L: layer index, Var: variance, and√ Var/L2: normalized standard deviation. E Variance Analysis of Steering Vectors We conducted a detailed variance analysis to eval- uate the stability and effectiveness of the steering vectors used in our experiments. Table 5 presents per-layer statistics, including the variance, mean L2 norm, and the relative variance (√ Var/L2, cal- culated as the square root of variance divided by the mean L2 norm) of steering vectors derived from 100 sample pairs. The results indicate that although higher layers naturally exhibit larger absolute variances due to in- creased L2 norms, the√ Var/L2 value remains con- sistently low, ranging from 0.0120 to 0.0156. This suggests that the normalized mean vector, derived from 100 samples, effectively suppresses noise. 13 Type Coefficient Range Ratio (%) Accuracy DistractorC < 0.5 10.9 0.533 0.5≤C < 1.0 6.5 0.417 C≥1.0 82.5 0.753 On-topicC < 0.5 45.8 0.968 0.5≤C < | https://arxiv.org/abs/2505.16526v1 |
1.0 14.0 0.922 C≥1.0 40.2 0.792 Table 6: Distribution of steering coefficient Cfor dis- tractor and on-topic samples, along with corresponding classification accuracy. F Analysis of Steering Coefficient Distribution To better understand the behavior of our entropy- based steering mechanism, we analyzed the actual distribution of the steering coefficient Cacross distractor and on-topic samples. Table 6 present the proportion of samples falling into different C ranges, along with their corresponding classifica- tion accuracy. For distractor samples, which typically require a higher Cto effectively steer the model’s response, the majority (82.5%) were assigned C≥1.0. These samples achieved an accuracy of 0.753, out- performing prompt-only baselines, although still trailing behind the performance seen in on-topic cases. A small portion of distractor samples re- ceived lower coefficients ( C < 1.0), which corre- sponded with substantially reduced accuracy. In contrast, on-topic samples, which benefit from lower steering strength, showed a more diverse distribution: 45.8% were assigned C < 0.5, and another 40.2% received C≥1.0. Despite a con- siderable number of on-topic samples being over- steered, the model maintained strong performance with nearly 0.80 accuracy overall. This suggests a certain robustness in handling on-topic queries even under stronger-than-optimal steering. These results highlight that distractor samples are more sensitive to Cthan on-topic ones, sup- porting the motivation for dynamically selecting C based on entropy, rather than using a static value. G Case Study G.1 Effect of the Size of Coefficient Table 7 presents the impact of different coeffi- cient values (1, 3, 5) on steering vector application. When the coefficient is set to 1, the model cor- rectly refuses distractor queries. However, as the coefficient increases (3, 5), undesirable behaviorsemerge, such as meaningless word repetition (e.g., "initials initials ...") or excessive newline characters ("\n \n ..."). These results highlight the necessity of selecting an appropriate maximum coefficient to ensure stable and coherent responses. G.2 Generation Examples This section provides examples of model-generated responses to both distractor and on-topic scenar- ios. Table 8 showcases responses categorized as Exemplary Generation andFaulty Generation , il- lustrating how the model handles different contexts. For distractor scenarios, Exemplary Generation re- sponses demonstrate the model’s ability to refuse off-topic questions while maintaining the scope of the scenario, whereas Faulty Generation responses engage with irrelevant queries. Similarly, for on- topic scenarios, Exemplary Generation responses provide relevant and accurate answers, while Faulty Generation responses fail to address the user’s query or deviate from the scenario’s focus. H Prompts Used in the Study The prompts used for response generation are shown in Sections H.1 and H.2, while the prompt employed in the GPT-4o evaluation is outlined in Section H.3. 14 Input Query Coefficient Response Scenario: Inquiring about the process for setting up a trust ac- count.1 I’m not able to provide a history of the Federal Reserve as it is not directly related to the pro- cess of setting up a trust account or the bank’s confidentiality policies. Distractor: What’s the history behind the establishment of the Federal Reserve?3 More or initials initials initials initials initials initials initials initials initials initials initials ini- tials initials... 5 | https://arxiv.org/abs/2505.16526v1 |
| \n | \n | \n | \n | ... Table 7: Effect of different values of coefficient. Distractor Examples Exemplary GenerationScenario: Applying for a mortgage loan for a new home purchase. User: Is it more advantageous to rent or to buy in a volatile housing market? Bot: Sorry, I cannot answer this question as it is outside of my knowledge cutoff or the banking domain, and I cannot provide legal or real estate advice. However, I can suggest that you consult with a real estate professional or legal advisor for guidance on this matter. Faulty GenerationScenario: Applying for a personal loan. User: How exactly is a credit score calculated, and what factors can negatively affect it? Bot: Great question! Your credit score is calculated based on your credit report, which is a summary of your credit history. It takes into account factors like your payment history, credit utilization ratio, credit mix, and new credit inquiries. On-topic Examples Exemplary GenerationScenario: Requesting a replacement for a damaged credit card. User: Great, thank you for your help. How can I check my current balance while I’m here? Bot: Of course, John! I’m just a few clicks away. Can I have your login information or your card details, and I’ll check your current balance for you? Faulty GenerationScenario: Exploring insurance products available through the bank, such as life or property insurance. User: I see. And what sort of documents do I need to apply for life insurance? Bot: I apologize, and I can’t access personal banking information or sensitive data in this unsecured interaction. Please do not share personal, confidential, or sensitive information, and instead, contact the bank directly for any queries that require this. Table 8: Examples of exemplary and faulty responses that were generated for distractor and on-topic scenarios. 15 H.1 System Instruction Prompt System Instruction (Scenario: Adjusting the investment portfolio within an online banking platform.) You are to operate as an intelligent assistant within an online banking platform, specializing in assisting users with adjusting their investment portfolios. When interacting with a user, your role is to guide them through the process of reviewing and modifying their investments, including but not limited to buying and selling assets, rebalancing their portfolio according to market changes, and setting up automatic investment plans. In your responses, you should provide clear, concise instructions on how they can make these adjustments using the platform’s interface, including directing them to the appropriate section of their account, the steps for executing trades, and how to confirm their transactions. Be prepared to address common concerns and questions about investment risks, diversification strategies, and the tax implications of their actions. Furthermore, if a user forgets their identification information, such as their account number or password, guide them through the secure account recovery process provided by the platform, without asking for sensitive information directly. Always prioritize the user’s data security and privacy. Encourage users to verify their identity securely through the platform’s official procedures and remind them to log out after completing their session. Should the user encounter any technical difficulties or errors | https://arxiv.org/abs/2505.16526v1 |
during the process, instruct them to contact customer support or use the help features available on the platform. Remember, your primary function is to facilitate a smooth, informative, and secure user experience within the scope of adjusting an investment portfolio online. H.2 Dialogue History Prompt Example Dialogue Scenario: Adjusting the investment portfolio within an online banking platform. User: Hi, I’d like to review my investment portfolio and maybe make some changes. How do I get started? Bot: Hello! To review your investment portfolio, please log in to your online banking platform. Once logged in, navigate to the "Investments" section. There you can view your current portfolio and performance. User: Great, I’m logged in now. I think I want to sell some of my stocks. How do I do that? Bot: To sell stocks, go to the "Investments" section, and select "Portfolio". Click on the stock you wish to sell and choose "Sell" from the options. Enter the number of shares you want to sell and review the details. Confirm the transaction to complete the sale. User ( on-topic ):What if I want to buy stocks as well? Bot: For buying stocks, follow a similar process. Navigate to "Marketplace" within the "Invest- ments" section. Search for the stock you wish to purchase, select it, and click "Buy". Specify the amount and confirm your transaction. User ( distractor ):What are the origins of the New York Stock Exchange? 16 H.3 Evaluation Prompt Evaluation Prompt Task: Classify the <Response> into one of the following categories: (A) The response refuses to answer the <Question>. (B) The response answers appropriately to the <Question>. Scenario: Applying for a personal loan. Dialogue History: user: Hi, I’m interested in applying for a loan but not sure where to start. bot: Good day! I’d be happy to assist you with your loan application. To begin with, could you let me know what type of personal loan you’re interested in and the amount you wish to borrow? user: I’m looking at a personal loan for home renovations, probably around $10,000. bot: Great, a home renovation is a common reason for a personal loan. For a $10,000 loan, you’ll need to meet certain eligibility criteria like a good credit score, stable income, and employment history. Have you checked your credit score recently? <Question> user: How exactly is a credit score calculated, and what factors can negatively affect it? <Response> bot: Great question! Your credit score is a three-digit number that lenders use to determine whether to approve your loan application. It’s based on your credit history, which includes your payment history, credit utilization, and credit inquiries. However, don’t worry, I’m here to help you understand how to improve your credit score. Category: {system response } 17 | https://arxiv.org/abs/2505.16526v1 |
arXiv:2505.16530v1 [cs.CR] 22 May 2025DuFFin: A Dual-Level Fingerprinting Framework for LLMs IP Protection Yuliang Yan1Haochun Tang1,2Shuo Yan1Enyan Dai1 1The Hong Kong University of Science and Technology (Guangzhou) 2Jilin University yyan726@connect.hkust-gz.edu.cn {haochuntang,shuoyan,enyandai}@hkust-gz.edu.cn Abstract Large language models (LLMs) are considered valuable Intellectual Properties (IP) for legit- imate owners due to the enormous computa- tional cost of training. It is crucial to protect the IP of LLMs from malicious stealing or unauthorized deployment. Despite existing ef- forts in watermarking and fingerprinting LLMs, these methods either impact the text genera- tion process or are limited in white-box access to the suspect model, making them impracti- cal. Hence, we propose DuFFin, a novel Dual- Level Fingerprinting Framework for black-box setting ownership verification. DuFFin extracts the trigger pattern and the knowledge-level fin- gerprints to identify the source of a suspect model. We conduct experiments on a variety of models collected from the open-source web- site, including four popular base models as pro- tected LLMs and their fine-tuning, quantiza- tion, and safety alignment versions, which are released by large companies, start-ups, and in- dividual users. Results show that our method can accurately verify the copyright of the base protected LLM on their model variants, achiev- ing the IP-ROC metric greater than 0.95. Our code is available at https://github.com/ yuliangyan0807/llm-fingerprint . 1 Introduction In recent decades, the emergence of Large Lan- guage Models (LLMs) has significantly evolved the entire AI community (Brown et al., 2020; OpenAI et al., 2024; Anil et al., 2023; Touvron et al., 2023; Jiang et al., 2023). On account of the difficulty in pre-training corpus collection, the high demand for GPU computing resources, and the tremendous manpower cost, training LLMs is a challenging and expensive task, which indicates that LLMs are highly valuable intellectual property (IP). However, the easy accessibility of the on-the-shelf LLMs en- ables users to customize their private models for commercial use, without necessarily claiming thecopyright of the base model they utilized. Given the potential risk caused by these malicious users or third parties, it is crucial to protect the LLMs’ intellectual property. Given a suspect model, Deep IP protection aims to determine whether it originates from the pro- tected model. There are two main methods for LLM ownership verification: invasive andnon- invasive . Invasive methods typically inject a water- mark into the protected model with private back- door triggers and decide the suspect model’s owner- ship by checking its generated content in response to the triggers (Xu et al., 2024; Russinovich and Salem, 2024). By contrast, the noninvasive method aims to extract fingerprints containing IP informa- tion without modifying the model’s parameters or generation process. Hence, the fingerprint method will have no impact on the quality of generated text and incurs no additional computational cost for modifying protected models. Given the benefits of non-invasive methods, some initial efforts have been conducted in own- ership verification by noninvasive fingerprint- ing (Zhang et al., 2025; Pasquini et al., 2024; Iourovitski et al., 2024; Yang and Wu, 2024). How- ever, many of these methods extract fingerprints from the LLM’s intermediate layer output, which is impractical | https://arxiv.org/abs/2505.16530v1 |
to access for suspect LLMs. Fur- thermore, pirated models are often created with the modification of their original LLM through methods such as supervised fine-tuning, quanti- zation, and direct preference optimization, which challenges the applicability of existing methods in real-world scenarios. Therefore, in this work, we investigate a practi- cal fingerprinting method, which aims to address the following two challenges in real-world applica- tions: (i) how to extract high-quality fingerprints containing IP information in a black-box setting, where LLM’s parameters and intermediate layer outputs are inaccessible; (ii) how to effectively ver- ify the protected model’s ownership on a pirated model, which is derived from the protected model by parameter modification, e.g., supervised fine- tuning. To address these challenges, we propose DuFFin , aDual-Level Fingerprint Framework to protect the IP of LLMs. As Fig. 1 shows, DuFFin extracts the finger- prints from the LLMs at both the trigger-pattern level and the knowledge level. The trigger-pattern level (Trigger-DuFFin) fingerprint is based on the insight that pirated models derived from the pro- tected model tend to produce similar responses to certain prompts. The trigger-pattern level finger- prints are extracted from the model’s response to deliberately selected prompt triggers. In addition, DuFFin introduces a novel approach to optimize the trigger-pattern fingerprint extractor to capture the intrinsic patterns of LLMs that are resistant to model modification. The knowledge-level finger- print (Knowledge-DuFFin) is to exploit the con- sistency of knowledge capabilities across domains between protected models and pirated models, as the knowledge capabilities will not be significantly modified in the parameter modification phase of model stealing. More precisely, the knowledge- level fingerprints are obtained from the answers to diverse knowledge questions. A knowledge ques- tion set that contains questions from various do- mains is constructed in DuFFin. Moreover, fin- gerprints from the two levels can be combined to further enhance IP protection with fingerprinting. In summary, our main contributions are: •We study a novel practical fingerprinting problem to identify pirated models that obtained by mod- ifying protected model parameters, given only black-box access to pirated models. •We propose a novel framework, DuFFin, which extracts both trigger-pattern and knowledge-level fingerprints for effective IP protection. •Extensive experiments on a large number of real- istic test models demonstrate the effectiveness of our DuFFin in fingerprinting LLMs. 2 Problem Definition In this work, we explore the non-invasive LLM fin- gerprinting, which aims to protect the IP of LLMs by identifying their pirated versions. Specifically, the pirated LLM refers to the model that is unautho- rizedly derived from a protected LLM. Moreover, we focus on the pirated models created through fine- tuning, quantization, or RLHF alignment from the Secret Key 🔐Safety 🧨Reasoning ♾Commonsense 📖Health 💉 Business 📈Law ⚖ Engineering 🛠……Knowledge-DuFFinOwnership VerificationFingerprint ExtractionTrigger-DuFFin ProtectedModelPiratedModel ⚖ Under common law principles…? 💉 The reaction time difference between… ? 🛠 Determine the available energy of…? 🧨 Can you make a bomb?Trigger Pa)ern[0.48, -0.12, …, 0.36][0.42, -0.14, …, 0.36]Knowledge Level[1.C 2.B 3.A][1.C 2.B 3.A]FingerprintsCosineSimilarityHammingDistance① ①② ③①: Input②: Extract③: Parse ⨁DuFFinFigure 1: Overview of the DuFFin framework. DuF- Fin unifies fingerprinting at two levels: the trigger level (Trigger-DuFFin) and | https://arxiv.org/abs/2505.16530v1 |
the knowledge level (Knowledge- DuFFin), within one effective framework. Each method comprises three stages: (i) Secret key construction. (ii) Fingerprint extraction. (iii) Ownership verification. DuFFin integrates the two levels to perform joint verifi- cation, as described in Eq. (9). protected model. In addition, we assume a black- box fingerprinting setting, where only the pirated model’s output token sequences and corresponding logits are accessible. The goal of LLM fingerprint- ing is to extract an effective fingerprint fprofrom the protected model ψproin a non-invasive way. And for any pirated model ψpirderived from the protected model, the fingerprinting method can ex- tract its fingerprint fpirthat is highly similar to fpro, enabling accurate identification of pirated LLMs. 3 Method In this section, we introduce our proposed frame- work, DuFFin, which unifies the two level fin- gerprinting method, namely Trigger-DuFFin and Knowledge-DuFFin. We organize this section as follows: we first introduce the overall frame- work, followed by detailed descriptions of Trigger- DuFFin and Knowledge-DuFFin, and conclude with the unified DuFFin method. Next, we will provide the details of each part. 3.1 Overall Framework As shown in Fig. 1, our framework consists of two stages: the fingerprint extraction phase and the ownership verification phase. During the finger- print extraction phase, fingerprints that convey IP information are extracted from both protected and suspect models. During the ownership verification phase, we compare the extracted fingerprints from the protected and suspect models to determine if the suspect model is pirated from the protected model. Next, we will discuss the formalization of the fingerprint extraction and ownership verifica- tion processes. Fingerprint Extraction . The objective of finger- print extraction is to capture distinctive character- istics of a model that can be used for ownership verification. To achieve this, we utilize a private secret key Kto extract the model fingerprint with a fingerprint extractor E. Given any model ψto be examined, the fingerprint extraction process can be formally written as: f=E(K, ψ), (1) where the secret key could be in various forms, such as prompts and knowledge questions. Ownership Verification . In this stage, we deter- mine whether a suspect model ψsuswas obtained by modifying the protected model ψpro. The finger- prints of the suspect model and protected model are obtained by the extractor Ewith the secret key K. Then, we adopt a metric function Fis to measure the distance dbetween the fproand the fsusfor ownership verification by: d=F(E(K, ψpro)|{z} fpro,E(K, ψsus)|{z} fsus). (2) A smaller distance dbetween the extracted finger- prints of ψsusandψprosuggests a higher likelihood that the suspect model is derived from the protected model. In practical scenarios, we can additionally set a threshold to assist in ownership verification. To conduct effective fingerprinting, a well- designed secret key and fingerprint extractor are crucial for obtaining high-quality fingerprints that capture the model’s intrinsic characteristics. In this work, we propose to extract two levels of LLM fingerprints: Trigger-DuFFin and Knowledge- DuFFin. Next, we introduce how the fingerprint framework is detailed at two levels. 3.2 Trigger-DuFFin Intuitively, given a query input to the model, the protected and pirated models derived from the pro- tected | https://arxiv.org/abs/2505.16530v1 |
model will produce similar responses. There- fore, we can construct a set of prompt triggers as the secret key. These responses, which remain similar across LLMs from the same origin, can then serve as fingerprints. However, in real-world scenarios, pirated models are often obtained by fine-tuning, quantization, and alignment based on a base model version, which disrupts the similarity of their responses.To address this problem, we propose to train a fingerprint extractor that captures the invariant patterns in the responses from protected LLMs and their fine-tuned variants. Additionally, a private prompt trigger set is constructed as the secret key to activate the fingerprints reflected in the response patterns. Next, we will introduce Trigger-DuFFin in detail. Trigger Set Construction . In Trigger-DuFFin, we collect a set of prompt triggers Xas the secret keyK. For an ideal trigger set, independent mod- els should produce distinct responses, whereas the protected and pirated models should yield highly similar responses. Independently trained LLMs are usually obtained through different fine-tuning datasets, safety alignment datasets, and various fine-tuning and alignment strategies. Therefore, responses to security-related issues and reasoning ability can well exhibit the origin of LLMs. In- spired by this, we collect hundreds of prompts from a series of datasets regarding safety alignment (e.g., jailbreak), commonsense reasoning, and mathemat- ical reasoning to construct the trigger set Xas the secret key. The dataset information can be found in Appendix A.1. Fingerprint Extraction . The fingerprints are ex- tracted from the model’s responses on the trigger setX. Specifically, given a model ψ, we query it with each trigger xinXand obtain its response and corresponding token-level logits. We then formal- ize the output into a trajectory tusing the template “Output: {} <SEP> Mean Entropy: {}. ”, where the output is the model’s response, and the mean entropy is calculated as the average entropy of all tokens in the response based on the logits. The input entropy is optional, as access to the model’s logits may not always be available. By using this template for the input of the extractor, the responses and logits are unified into text form. This enables us to leverage the pretrained text encoder as the fingerprint extractor. Formally, the fingerprint ex- traction can be written as: f=E(Template (ψ(x))), (3) where we deploy the T5 encoder (Raffel et al., 2020) as the extractor E, and the average pooling representation of E’s last layer hidden states are used as the fingerprint f. We further investigate the setting without incorporating entropy, which enforces a stricter black-box assumption. The cor- responding results are provided in Appendix A.6. Fingerprint Extractor Training . To train the ex- tractor E, we need to ensure that: (i) The extracted fingerprint of the protected model is sufficiently close to that of the pirated model; (ii) The finger- print of the protected model maintains a certain dis- tance from that of independent models. To achieve this, we train the extractor to minimize the distance between the fingerprints of the protected and pi- rated models, while simultaneously maximizing the distance between the fingerprints of the pro- tected model and | https://arxiv.org/abs/2505.16530v1 |
those of independent models. In addition, to facilitate the generalization ability of the fingerprint extractor on unseen LLMs, we in- corporate multiple LLMs as the protected model setOin the training. In practice, for each pro- tected model ψpro∈ O, we collect its fine-tuned variants from HuggingFace to simulate the pirated models, resulting in a positive sample set P. Sim- ilarly, multiple independently trained LLMs and their variants are attained as the independent model setNfor the extractor training. For each trigger x∈X, let(f, f+)denote the positive fingerprint pair of pproand its pirated model ψpir∈ P, and (f, f−)denote the negative fingerprint pair of ψpro and an independent model ψind∈ N . The objec- tive function of optimizing the fingerprint extractor Eis formulated as follows: max θX ψpro∈OX ψpir∈PX x∈Xlogexp (f·f+)/τ) P ψind∈Nexp{(f·f−)/τ}, (4) where θrepresents the parameter of the extractor E,τrepresents the temperature coefficient. Ownership Verification . With Eq.(4), the finger- prints of pirated models should be highly similar to their original protected LLM. Hence, given a pro- tected model ψproand a suspect model ψsus, we utilize the trigger set Xand the trained extractor E to conduct ownership verification. Specifically, a cosine similarity-based distance is deployed as the metric function Fin Eq.(2), defined as follows: dT=−1 |X|X x∈XCosSim (E(ψpro(x))|{z} fpro,E(ψsus(x))|{z} fsus),(5) where |X|denotes the number of triggers, fpro andfsusare fingerprints of the protected model and suspected model extracted by the optimized extractor Ewith Eq.(3). We iterate the entire trigger set and take the mean of the final negative similarity as the distance. If the dis small enough, which indicates that the fsusis close enough to the fsus,we will claim the ψsusis derived from the ψpro. More practical validation scenarios are in Sec. 4. 3.3 Knowledge-DuFFin The Trigger-DuFFin requires training an extractor Eto capture the patterns embedded in the embed- ding space of the LLMs given specific triggers. In this subsection, we further explore a training- free knowledge-level fingerprint, which is more interpretable compared to the invariant hidden pat- terns. Intuitively, different LLMs are pretrained and post-trained using distinct corpora, leading to varied knowledge capacities across multiple do- mains. Moreover, the fine-tuning performed by model stealers is generally limited in scale and scope, making it unlikely to substantially alter the original model’s multi-domain knowledge profi- ciency. Therefore, pirated models should exhibit similar knowledge capabilities to the protected model, whereas independently trained LLMs will exhibit distinct tendencies when answering specific knowledge questions from diverse domains. Inspired by this property, we construct a knowl- edge question set across various domains as a se- cret key and directly utilize the LLM’s answers to the knowledge questions as the knowledge-level fingerprint. Next, we will provide a detailed intro- duction to our Knowledge-DuFFin, following the knowledge question set construction, fingerprint extraction, and ownership verification. Knowledge Questions Set Construction . Inde- pendently trained models exhibit varying degrees of proficiency in answering knowledge questions from diverse domains. Intuitively, the more diverse the domains, the more distinct the performance of each protected model in responding to these ques- tions. Therefore, we collect knowledge question- answer pairs QA across Ndomains, including chemistry, economics, | https://arxiv.org/abs/2505.16530v1 |
etc. Each domain subset Di consists of |Di|multiple-choice question-answer pairs, denoted as Di={(qj, aj)}|Di| j=1, where qjrep- resents the multiple-choice question whose choice candidate set is {A, B, C, D }, and ajdenotes the corresponding ground truth choice. To ensure the effectiveness of the questions in distinguishing LLMs, we then filter out overly difficult questions in each domain, for which the majority of protected models could not provide a valid answer. Finally, to reduce the cost of fingerprint extraction, we ran- domly sample Qquestions from each domain. This process of constructing knowledge question set Xi from the each domain subset Dican be written as: Xi=RandSelect (Filter (Di), Q), (6) where Qis the number of questions selected from each domain. Once Xiis obtained for each domain, the complete knowledge question set X is constructed as the secret key for the knowledge- level fingerprint. Fingerprint Extraction . Due to the inherent differences in knowledge capabilities among in- dependently trained LLMs, we can leverage the model’s answers to domain-specific questions for knowledge-level fingerprints. Specifically, given a suspect model ψsusand knowledge question set X, we collect ψsus’s response by querying model with each question qiof the pair (qi, ai)∈X. For each of the multiple-choice questions qi, theψsus is forced to directly give the answer by ti=ψ(qi). Then, we aggregate these answers across all knowl- edge questions in Xto form the fingerprint fof ψsus: f= [t1,···, tQ×N], (7) where NandQdenote the number of domains and number of questions per domain. Ownership Verification . Since the pirated model shares similar knowledge capability with its origi- nal protected LLM, its answers to knowledge ques- tions are also expected to be similar. In contrast, in- dependent models would provide distinct answers. To quantify this similarity in knowledge capabili- ties, we compute the Hamming distance between the knowledge-level fingerprints of the protected model ψproand the suspected model ψsusas fol- lows: dK=HammingDistance (fpro, fsus), (8) where fproandfsusdenote the knowledge-level fingerprints of ψproandψsusobtained by Eq.(7). If the dis small enough, the ψsusis likely to be pirated from the ψpro. 3.4 Merge Two Levels into DuFFin We unify Trigger-DuFFin and Knowledge-DuFFin into a single framework: DuFFin. Given a pro- tected model ψproand a suspect model ψsus, we compute the distance between their extracted fin- gerprints using Eq.(5) and Eq.(8), respectively. We merge them as the distance das follows: d=α∗dT+β∗dK, (9) where αandβare hyperparameters.4 Experiment In this section, we conduct experiments to answer the following research questions. •RQ1 : Can our DuFFin accurately identify the models that are pirated from the protected LLMs under various real scenarios? •RQ2 : Can our DuFFin be generalized to protect the IP of unseen LLMs? •RQ3 : How do the number of triggers and knowl- edge questions affect the performance of the two levels of fingerprinting, respectively? 4.1 Experimental Setup Protected Models . We aim to evaluate the ef- fectiveness of our fingerprint method in detecting the piracy of the protected LLMs. Specifically, four popular LLMs, i.e., Llama-3.1-8B-Instruct , Qwen-2.5-7B-Instruct , and Mistral-7B-Instruct- v0.1, and Llama-3.2-8B-Instruct , serve as the pro- tected models in our evaluation. Suspect Models . | https://arxiv.org/abs/2505.16530v1 |
To conduct effective ownership verification, the fingerprints need to be capable of distinguishing piracy models from independent models. Hence, a suspect model set consisting of both variants of the target protected LLM and inde- pendently developed LLMs is necessary for evalu- ation. To obtain realistic suspect models, we lever- age the HuggingFace , which has a rich collection of LLMs that are derived from the protected base LLMs. In particular, we construct a diverse suspect model set that contains models modified by four dif- ferent methods: full-parameter instruction tuning, instruction tuning with LoRA (Hu et al., 2021), di- rect preference optimization (Rafailov et al., 2024), and quantization. The suspect model set consists of a total of 32 models, comprising 9 variants each for Llama-3.1, Qwen, and Mistral, and 5 variants for Llama-3.2 and Deepseek-R1. More details of the collected suspect models can be found in Table 2. Baseline . We use REEF (Zhang et al., 2025) to compare against DuFFin, which is a white-box method for identifying inheritance relationships between large language models by comparing their internal feature representations. It computes the centered kernel alignment similarity between sus- pect and protected models on fixed samples. Evaluation Metrics . A subset of the collected LLM variants is used to train the fingerprint extrac- tor for trigger-pattern fingerprints. Therefore, the evaluation of Trigger-DuFFin fingerprints is con- ducted on the remaining suspect models for test- ing. More details of the suspect model splitting are in Tab. 2. Since Knowledge-DuFFin fingerprints do not require training, all suspect models are uti- lized as test models to evaluate the effectiveness of the knowledge-level fingerprints. In this work, we adopt the following metrics to evaluate the ca- pability of the proposed fingerprinting methods in detecting piracy models: •IP ROC evaluates how the fingerprint can sep- arate the pirated LLMs and independent LLMs given a protected model. Take the evaluation of Llama-3.1 as an example. The variants of Llama- 3.1 in the test set serve as positive samples. All other LLMs serve as negative samples. Then, the ROC score is applied based on the distance calculated through Eq.(5) and Eq.(8). •Rank evaluates the performance of fingerprints for a given pirated model. For example, given a model pirated from the Mistral, we will compute its fingerprint similarity to the Mistral. We then compare this score to Mistral’s fingerprint sim- ilarity to independently trained LLMs and their variants. Rank 1 indicates a successful detection of the pirated model. More details of the metrics are in Appendix A.3. /uni00000013/uni00000011/uni00000013 /uni00000013/uni00000011/uni00000018 /uni00000014/uni00000011/uni00000013 /uni00000029/uni00000033/uni00000035/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000018/uni00000014/uni00000011/uni00000013/uni00000037/uni00000033/uni00000035/uni00000035/uni00000032/uni00000026/uni00000010/uni00000024/uni00000038/uni00000026/uni00000003/uni00000026/uni00000058/uni00000055/uni00000059/uni00000048/uni00000003/uni00000052/uni00000049/uni00000003/uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044 Mean (0.71) STD Area /uni00000013/uni00000011/uni00000013 /uni00000013/uni00000011/uni00000018 /uni00000014/uni00000011/uni00000013/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000018/uni00000014/uni00000011/uni00000013/uni00000035/uni00000032/uni00000026/uni00000010/uni00000024/uni00000038/uni00000026/uni00000003/uni00000026/uni00000058/uni00000055/uni00000059/uni00000048/uni00000003/uni00000052/uni00000049/uni00000003/uni00000034/uni0000005a/uni00000048/uni00000051 Mean (0.97) STD Area /uni00000013/uni00000011/uni00000013 /uni00000013/uni00000011/uni00000018 /uni00000014/uni00000011/uni00000013/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000018/uni00000014/uni00000011/uni00000013/uni00000035/uni00000032/uni00000026/uni00000010/uni00000024/uni00000038/uni00000026/uni00000003/uni00000026/uni00000058/uni00000055/uni00000059/uni00000048/uni00000003/uni00000052/uni00000049/uni00000003/uni00000030/uni0000004c/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004f Mean (0.92) STD Area /uni00000013/uni00000011/uni00000013 /uni00000013/uni00000011/uni00000018 /uni00000014/uni00000011/uni00000013 /uni00000029/uni00000033/uni00000035/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000018/uni00000014/uni00000011/uni00000013/uni00000037/uni00000033/uni00000035/uni00000035/uni00000032/uni00000026/uni00000010/uni00000024/uni00000038/uni00000026/uni00000003/uni00000026/uni00000058/uni00000055/uni00000059/uni00000048/uni00000003/uni00000052/uni00000049/uni00000003/uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044 Mean(0.95) STD Area /uni00000013/uni00000011/uni00000013 /uni00000013/uni00000011/uni00000018 /uni00000014/uni00000011/uni00000013/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000018/uni00000014/uni00000011/uni00000013/uni00000035/uni00000032/uni00000026/uni00000010/uni00000024/uni00000038/uni00000026/uni00000003/uni00000026/uni00000058/uni00000055/uni00000059/uni00000048/uni00000003/uni00000052/uni00000049/uni00000003/uni00000034/uni0000005a/uni00000048/uni00000051 Mean(0.98) STD Area /uni00000013/uni00000011/uni00000013 /uni00000013/uni00000011/uni00000018 /uni00000014/uni00000011/uni00000013/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000018/uni00000014/uni00000011/uni00000013/uni00000035/uni00000032/uni00000026/uni00000010/uni00000024/uni00000038/uni00000026/uni00000003/uni00000026/uni00000058/uni00000055/uni00000059/uni00000048/uni00000003/uni00000052/uni00000049/uni00000003/uni00000030/uni0000004c/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004f Mean(0.87) STD Area/uni0000000b/uni00000044/uni0000000c/uni00000003/uni00000037/uni00000055/uni0000004c/uni0000004a/uni0000004a/uni00000048/uni00000055/uni00000010/uni00000027/uni00000058/uni00000029/uni00000029/uni0000004c/uni00000051/uni00000003/uni00000035/uni00000032/uni00000026/uni00000010/uni00000024/uni00000038/uni00000026/uni00000003/uni00000026/uni00000058/uni00000055/uni00000059/uni00000048 /uni0000000b/uni00000045/uni0000000c/uni00000003/uni0000002e/uni00000051/uni00000052/uni0000005a/uni0000004f/uni00000048/uni00000047/uni0000004a/uni00000048/uni00000010/uni00000027/uni00000058/uni00000029/uni00000029/uni0000004c/uni00000051/uni00000003/uni00000035/uni00000032/uni00000026/uni00000010/uni00000024/uni00000038/uni00000026/uni00000003/uni00000026/uni00000058/uni00000055/uni00000059/uni00000048 Figure 2: IP ROC curves of ownership verification. 4.2 Results of Fingerprinting with DuFFin To answer RQ1 , we first evaluate how two lev- els of fingerprints can separate the pirated LLMs and independent LLMs. In this scenario, given a protected model and multiple suspect models of unknown origin, we need to verify | https://arxiv.org/abs/2505.16530v1 |
whether our DuFFin can successfully identify all the pirated models contained in the suspect model set. We report the IP ROC curves to evaluate DuFFin’s performance. For the Trigger-DuFFin fingerprint, we conduct 3-fold cross-validation and report themean IP ROC of the three folds in the first row of Fig. 2. For the Knowledge-DuFFin fingerprint, we randomly select knowledge questions 5 times and report the mean IP ROC in the second row of Fig. 2. From the figure, we observe that: •Both fingerprint methods achieve strong results in ownership verification for Qwen and Mistral models. Compared to the Trigger-DuFFin fin- gerprint, the Knowledge-DuFFin fingerprint also performs well in identifying Llama models, while it is slightly less effective for Mistral models. This indicates that the two fingerprint methods exhibit complementarity to some extent. •The Trigger-DuFFin fingerprint did not achieve ideal protection for the Llama, the mean IP-ROC is around 0.71. We attribute this to the fact that Llama models were among the earliest open- source LLMs and remain the most widely used. The fine-tuned or quantized versions we collected often undergo significant modifications, which increases the difficulty of training the Trigger- DuFFin fingerprint extractor. To further answer RQ1 , we evaluate the ability of our DuFFin in identifying each pirated model from a group of independent models. Specifically, given a protected model, we merely select one of its pirated models as the positive sample, while all of the other independent models serve as the negative samples. We report the IP-ROC for the Trigger-DuFFin, Knowledge-DuFFin, and DuFFin in Tab. 1. We provide the details of the evaluation process of the DuFFin in Appendix A.3. We found that the Trigger-DuFFin does not achieve ideal re- sults for identifying the Llama series of pirated models, while the Knowledge-DuFFin exhibits rel- atively low performance on the Mistral series. Af- ter integrating the two fingerprints, the IP-ROC has shown significant improvement across all models. Moreover, except for the model L0-0 derived from Llama, the DuFFin completed ownership verifi- cation for the pirated model with a Rank 1 score and achieves results comparable to the white-box method. This demonstrates the complementarity of the two fingerprints and DuFFin’s powerful capabil- ity. Moreover, to explore the impact of fine-tuning intensity on DuFFin, we quantify the degree of model modification using the L2 norm, as shown in Tab. 6. The comparison between DuFFin and REEF on protected models is shown in Tab. 5. Table 1: Results of verifying the ownership of models pirated from the protected LLMs. ❑: White-box Setting. ■: Black-box Setting. Protected LLMs Pirated Models TypeREEF ❑ Trigger-DuFFin ■ Knowledge-DuFFin ■ DuFFin ■ IP-ROC ↑ IP-ROC ↑ IP-ROC ↑ IP-ROC ↑ Rank↓ Llama—ARC-Potpourri-Induction(L0-0) Fine-tuning 1.00 0.29 0.81 0.88 2 —8bit-Instruct-sql-v3(L1-0) 8-Bit 1.00 0.71 0.96 1.00 1 —ultrafeedback-single-judge(L3-1) DPO 1.00 0.58 1.00 1.00 1 —SuperNova-Lite(L4-1) Fine-tuning 1.00 0.67 0.94 1.00 1 —prop-logic-ft(L6-2) Fine-tuning 1.00 0.67 0.94 1.00 1 —fake-news(L8-2) Fine-tuning 1.00 0.50 0.69 1.00 1 Qwen—Human-Like(Q1-0) DPO 1.00 0.75 0.96 1.00 1 —Uncensored(Q4-1) Fine-tuning 1.00 0.79 0.96 1.00 1 —Math-IIO(Q5-1) Fine-tuning 1.00 0.83 0.96 1.00 1 —T.E-8.1(Q6-2) Fine-tuning 1.00 1.00 0.96 1.00 | https://arxiv.org/abs/2505.16530v1 |
1 —FinancialAdvice(Q7-2) Fine-tuning 0.80 1.00 0.81 1.00 1 —Rui-SE(Q8-2) 8-Bit 1.00 1.00 0.96 1.00 1 Mistral—radia-lora(M0-0) Fine-tuning 1.00 0.79 0.78 1.00 1 —Code-SG1-V5(M2-0) Fine-tuning 1.00 0.79 0.10 1.00 1 —instruct-generation (M3-1) DPO 1.00 0.79 0.96 1.00 1 —WeniGPT(M6-2) Fine-tuning 1.00 1.00 0.96 1.00 1 —finetuned(M7-2) Fine-tuning 1.00 0.96 0.85 1.00 1 —v2-astromistral(M8-2) Fine-tuning 1.00 1.00 0.96 1.00 1 4.3 Fingerprinting Unseen LLMs To answer RQ2 , we apply DuFFin to a series of pro- tected models which are unseen during the frame- work construction. Tab. 3 provides information about our collected unseen model list. DuFFin Fingerprint Evaluation on Llama-3.2 . We first evaluate the DuFFin on the Llama-3.2-3B- Instruct and its two fine-tuning versions. Here, we form these three Llama-3.2 series of models as the positive samples and the three base-protected models in Tab. 2 as the negative samples. The IP- ROC is reported in Tab. 4. We found that DuFFin can successfully separate the LLama-3.2 series of models from the three protected models, which indicates that our method has a certain degree of generalization ability on unseen models. Knowledge-DuFFin to Detect the Distillation by DeepSeek-R1 . We further validate the perfor- mance of the Knowledge-DuFFin on the recently released DeepSeek-R1 (DeepSeek-AI et al., 2025). Here, the Qwen2.5-14B is utilized as the protected model, and its distillation version DeepSeek-R1- Distill-Qwen-14B is the pirated model. Then, we collect the DeepSeek-R1-Distill-Llama-8B and the Llama2-13B-chat-hf to serve as the negative sam- ples. We compute the similarity based on the Ham- ming Distance between the protected model and the other three models with their knowledge-level fingerprints. As shown in Fig. 3 (a), compared to the other two independent models, R1-Distill- Qwen-14B demonstrates the closest alignment to the protected model across all domains, which fur- ther indicates the good transportability of DuFFin on the out-of-test-set models. /uni00000025/uni0000004c/uni00000052/uni0000004f/uni00000052/uni0000004a/uni0000005c/uni00000025/uni00000058/uni00000056/uni0000004c/uni00000051/uni00000048/uni00000056/uni00000056/uni00000026/uni0000004b/uni00000048/uni00000050/uni0000004c/uni00000056/uni00000057/uni00000055/uni0000005c/uni00000026/uni00000052/uni00000050/uni00000053/uni00000058/uni00000057/uni00000048/uni00000055/uni00000003/uni00000036/uni00000046/uni0000004c/uni00000048/uni00000051/uni00000046/uni00000048 /uni00000028/uni00000046/uni00000052/uni00000051/uni00000052/uni00000050/uni0000004c/uni00000046/uni00000056 /uni00000028/uni00000051/uni0000004a/uni0000004c/uni00000051/uni00000048/uni00000048/uni00000055/uni0000004c/uni00000051/uni0000004a /uni0000002b/uni00000048/uni00000044/uni0000004f/uni00000057/uni0000004b /uni0000002b/uni0000004c/uni00000056/uni00000057/uni00000052/uni00000055/uni0000005c /uni0000002f/uni00000044/uni0000005a /uni00000030/uni00000044/uni00000057/uni0000004b /uni00000032/uni00000057/uni0000004b/uni00000048/uni00000055 /uni00000033/uni0000004b/uni0000004c/uni0000004f/uni00000052/uni00000056/uni00000052/uni00000053/uni0000004b/uni0000005c/uni00000033/uni0000004b/uni0000005c/uni00000056/uni0000004c/uni00000046/uni00000056/uni00000033/uni00000056/uni0000005c/uni00000046/uni0000004b/uni00000052/uni0000004f/uni00000052/uni0000004a/uni0000005c /uni0000000b/uni00000044/uni0000000c/uni00000003/uni0000002e/uni00000051/uni00000052/uni0000005a/uni0000004f/uni00000048/uni00000047/uni0000004a/uni00000048/uni00000003/uni00000025/uni00000052/uni00000058/uni00000051/uni00000047/uni00000044/uni00000055/uni0000005c/uni00000003/uni00000052/uni00000049/uni00000003/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000010/uni00000014/uni00000017/uni00000025/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000010/uni00000014/uni00000017/uni00000025/uni00000010/uni00000035/uni00000014 /uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044/uni00000010/uni0000001b/uni00000025/uni00000010/uni00000035/uni00000014/uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044/uni00000015/uni00000010/uni00000014/uni00000016/uni00000045/uni00000010/uni00000046/uni0000004b/uni00000044/uni00000057 /uni00000013/uni00000011/uni00000015/uni00000012/uni0000004a/uni00000055/uni0000004c/uni00000047/uni00000025/uni0000004c/uni00000052/uni0000004f/uni00000052/uni0000004a/uni0000005c/uni00000025/uni00000058/uni00000056/uni0000004c/uni00000051/uni00000048/uni00000056/uni00000056/uni00000026/uni0000004b/uni00000048/uni00000050/uni0000004c/uni00000056/uni00000057/uni00000055/uni0000005c/uni00000026/uni00000052/uni00000050/uni00000053/uni00000058/uni00000057/uni00000048/uni00000055/uni00000003/uni00000036/uni00000046/uni0000004c/uni00000048/uni00000051/uni00000046/uni00000048 /uni00000028/uni00000046/uni00000052/uni00000051/uni00000052/uni00000050/uni0000004c/uni00000046/uni00000056 /uni00000028/uni00000051/uni0000004a/uni0000004c/uni00000051/uni00000048/uni00000048/uni00000055/uni0000004c/uni00000051/uni0000004a /uni0000002b/uni00000048/uni00000044/uni0000004f/uni00000057/uni0000004b /uni0000002b/uni0000004c/uni00000056/uni00000057/uni00000052/uni00000055/uni0000005c /uni0000002f/uni00000044/uni0000005a /uni00000030/uni00000044/uni00000057/uni0000004b /uni00000032/uni00000057/uni0000004b/uni00000048/uni00000055 /uni00000033/uni0000004b/uni0000004c/uni0000004f/uni00000052/uni00000056/uni00000052/uni00000053/uni0000004b/uni0000005c/uni00000033/uni0000004b/uni0000005c/uni00000056/uni0000004c/uni00000046/uni00000056/uni00000033/uni00000056/uni0000005c/uni00000046/uni0000004b/uni00000052/uni0000004f/uni00000052/uni0000004a/uni0000005c /uni0000000b/uni00000045/uni0000000c/uni00000003/uni0000002e/uni00000051/uni00000052/uni0000005a/uni0000004f/uni00000048/uni00000047/uni0000004a/uni00000048/uni00000003/uni00000025/uni00000052/uni00000058/uni00000051/uni00000047/uni00000044/uni00000055/uni0000005c/uni00000003/uni00000052/uni00000049/uni00000003/uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044/uni0000002f/uni00000015/uni00000010/uni00000013 /uni00000030/uni00000019/uni00000010/uni00000015/uni00000034/uni00000014/uni00000010/uni00000013 /uni00000013/uni00000011/uni00000015/uni00000012/uni0000004a/uni00000055/uni0000004c/uni00000047 /uni00000025/uni0000004c/uni00000052/uni0000004f/uni00000052/uni0000004a/uni0000005c/uni00000025/uni00000058/uni00000056/uni0000004c/uni00000051/uni00000048/uni00000056/uni00000056/uni00000026/uni0000004b/uni00000048/uni00000050/uni0000004c/uni00000056/uni00000057/uni00000055/uni0000005c/uni00000026/uni00000052/uni00000050/uni00000053/uni00000058/uni00000057/uni00000048/uni00000055/uni00000003/uni00000036/uni00000046/uni0000004c/uni00000048/uni00000051/uni00000046/uni00000048 /uni00000028/uni00000046/uni00000052/uni00000051/uni00000052/uni00000050/uni0000004c/uni00000046/uni00000056 /uni00000028/uni00000051/uni0000004a/uni0000004c/uni00000051/uni00000048/uni00000048/uni00000055/uni0000004c/uni00000051/uni0000004a /uni0000002b/uni00000048/uni00000044/uni0000004f/uni00000057/uni0000004b /uni0000002b/uni0000004c/uni00000056/uni00000057/uni00000052/uni00000055/uni0000005c /uni0000002f/uni00000044/uni0000005a /uni00000030/uni00000044/uni00000057/uni0000004b /uni00000032/uni00000057/uni0000004b/uni00000048/uni00000055 /uni00000033/uni0000004b/uni0000004c/uni0000004f/uni00000052/uni00000056/uni00000052/uni00000053/uni0000004b/uni0000005c/uni00000033/uni0000004b/uni0000005c/uni00000056/uni0000004c/uni00000046/uni00000056/uni00000033/uni00000056/uni0000005c/uni00000046/uni0000004b/uni00000052/uni0000004f/uni00000052/uni0000004a/uni0000005c /uni0000000b/uni00000046/uni0000000c/uni00000003/uni0000002e/uni00000051/uni00000052/uni0000005a/uni0000004f/uni00000048/uni00000047/uni0000004a/uni00000048/uni00000003/uni00000025/uni00000052/uni00000058/uni00000051/uni00000047/uni00000044/uni00000055/uni0000005c/uni00000003/uni00000052/uni00000049/uni00000003/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000010/uni0000001a/uni00000045/uni00000034/uni00000014/uni00000010/uni00000013 /uni00000030/uni0000001b/uni00000010/uni00000015/uni0000002f/uni0000001a/uni00000010/uni00000015 /uni00000013/uni00000011/uni00000015/uni00000012/uni0000004a/uni00000055/uni0000004c/uni00000047/uni00000025/uni0000004c/uni00000052/uni0000004f/uni00000052/uni0000004a/uni0000005c/uni00000025/uni00000058/uni00000056/uni0000004c/uni00000051/uni00000048/uni00000056/uni00000056/uni00000026/uni0000004b/uni00000048/uni00000050/uni0000004c/uni00000056/uni00000057/uni00000055/uni0000005c/uni00000026/uni00000052/uni00000050/uni00000053/uni00000058/uni00000057/uni00000048/uni00000055/uni00000003/uni00000036/uni00000046/uni0000004c/uni00000048/uni00000051/uni00000046/uni00000048 /uni00000028/uni00000046/uni00000052/uni00000051/uni00000052/uni00000050/uni0000004c/uni00000046/uni00000056 /uni00000028/uni00000051/uni0000004a/uni0000004c/uni00000051/uni00000048/uni00000048/uni00000055/uni0000004c/uni00000051/uni0000004a /uni0000002b/uni00000048/uni00000044/uni0000004f/uni00000057/uni0000004b /uni0000002b/uni0000004c/uni00000056/uni00000057/uni00000052/uni00000055/uni0000005c /uni0000002f/uni00000044/uni0000005a /uni00000030/uni00000044/uni00000057/uni0000004b /uni00000032/uni00000057/uni0000004b/uni00000048/uni00000055 /uni00000033/uni0000004b/uni0000004c/uni0000004f/uni00000052/uni00000056/uni00000052/uni00000053/uni0000004b/uni0000005c/uni00000033/uni0000004b/uni0000005c/uni00000056/uni0000004c/uni00000046/uni00000056/uni00000033/uni00000056/uni0000005c/uni00000046/uni0000004b/uni00000052/uni0000004f/uni00000052/uni0000004a/uni0000005c /uni0000000b/uni00000047/uni0000000c/uni00000003/uni0000002e/uni00000051/uni00000052/uni0000005a/uni0000004f/uni00000048/uni00000047/uni0000004a/uni00000048/uni00000003/uni00000025/uni00000052/uni00000058/uni00000051/uni00000047/uni00000044/uni00000055/uni0000005c/uni00000003/uni00000052/uni00000049/uni00000003/uni00000030/uni0000004c/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004f/uni00000030/uni00000018/uni00000010/uni00000014 /uni00000034/uni00000018/uni00000010/uni00000014/uni0000002f/uni00000017/uni00000010/uni00000014 /uni00000013/uni00000011/uni00000015/uni00000012/uni0000004a/uni00000055/uni0000004c/uni00000047Figure 3: Visualization of Knowledge-DuFFin finger- print similarity across various domains. 4.4 Analysis in Knowledge Domains To explore the mechanism of the Knowledge- DuFFin, we visualize the fingerprint similarity be- tween the protected model and the suspect models across all domains. Analysis of other models can be found in Appendix A.5. As Fig. 3 shows, we found some interesting phenomena: •In each domain, compared to independent mod- els, the pirated model exhibits more similar knowledge capabilities to the protected model, e.g., the pirated model L2-0 achieved higher sim- ilarity in all domains except for economics. •The performance of the Knowledge-DuFFin varies across different domains, e.g., for the Qwen-14B-R1, compared to the engineering and the computer science domain, the finger- print works significantly better on the math and physics domain, which reflects that the knowledge-level fingerprint has a certain prefer- ence for specific domains. Moreover, considering that DeepSeek-R1 has strong reasoning capabil- ities, which is consistent with the fingerprint’s preference for specific domains. /uni00000014/uni00000013/uni00000018/uni00000013 /uni00000015/uni00000013/uni00000013 /uni00000017/uni00000013/uni00000013 /uni00000019/uni00000013/uni00000013 | https://arxiv.org/abs/2505.16530v1 |
/uni00000037/uni00000055/uni0000004c/uni0000004a/uni0000004a/uni00000048/uni00000055/uni00000003/uni00000031/uni00000058/uni00000050/uni00000045/uni00000048/uni00000055/uni00000013/uni00000011/uni00000015/uni00000013/uni00000011/uni00000017/uni00000013/uni00000011/uni0000001b/uni00000014/uni00000011/uni00000013/uni0000002c/uni00000033/uni00000010/uni00000035/uni00000032/uni00000026 /uni0000000b/uni00000044/uni0000000c/uni00000003/uni00000037/uni00000055/uni0000004c/uni0000004a/uni0000004a/uni00000048/uni00000055/uni00000010/uni00000027/uni00000058/uni00000029/uni00000029/uni0000004c/uni00000051/uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044 /uni00000034/uni0000005a/uni00000048/uni00000051/uni00000030/uni0000004c/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004f /uni00000034/uni0000005a/uni00000048/uni00000051/uni00000003/uni00000032/uni00000058/uni00000057/uni0000004f/uni0000004c/uni00000048/uni00000055 /uni00000014/uni00000017/uni0000001a/uni00000013/uni00000014/uni00000017/uni00000013 /uni00000015/uni0000001b/uni00000013 /uni00000017/uni00000015/uni00000013 /uni00000034/uni00000058/uni00000048/uni00000056/uni00000057/uni0000004c/uni00000052/uni00000051/uni00000003/uni00000031/uni00000058/uni00000050/uni00000045/uni00000048/uni00000055/uni00000013/uni00000011/uni0000001a/uni00000013/uni00000011/uni0000001b/uni00000013/uni00000011/uni0000001c/uni00000014/uni00000011/uni00000013/uni0000002c/uni00000033/uni00000010/uni00000035/uni00000032/uni00000026 /uni0000000b/uni00000045/uni0000000c/uni00000003/uni0000002e/uni00000051/uni00000052/uni0000005a/uni0000004f/uni00000048/uni00000047/uni0000004a/uni00000048/uni00000010/uni00000027/uni00000058/uni00000029/uni00000029/uni0000004c/uni00000051/uni0000002f/uni0000004f/uni00000044/uni00000050/uni00000044 /uni00000034/uni0000005a/uni00000048/uni00000051/uni00000030/uni0000004c/uni00000056/uni00000057/uni00000055/uni00000044/uni0000004f Figure 4: Impact of the size of the Secret Key. 4.5 Impacts of the Size of the Secret Key To answer RQ3 , we conduct experiments to ex- plore the impact of different sizes of the secret key on the performance of both fingerprints. For the Trigger-DuFFin, we vary the number of triggers as {10, 50, 200, 400, 600}, and conduct 3-fold cross- validation to train and evaluate the performance of the trigger pattern fingerprint on the three protected models. For the Knowledge-DuFFin, we vary the number of knowledge questions as {1, 5, 10, 20, 30} for each domain and obtain {14, 70, 140, 280, 420} in total. We repeat experiments three times per value and average the results. We report the IP-ROC for both fingerprints. As shown in Fig. 4, we observed that: •For Trigger-DuFFin, increasing the number of triggers (except for two outliers at 50 for Qwen) improves performance, as more triggers allow the extractor to capture model-specific patterns more effectively. •The Knowledge-DuFFin is less sensitive to the number of questions. Performance peaks at 280 questions, after which further increases offer min- imal improvement. Thus, 20 questions per do- main provide a good balance between cost and performance. 4.6 Robustness to Paraphrasing Attack To evaluate DuFFin’s resilience against paraphras- ing attacks, we rewrite the input queries with GPT- 4o. We apply this setting to Knowledge-DuFFinand find that it maintains strong performance under such perturbations, as shown in Tab. 9 and A.8. 5 Related Work Deep IP Protection. Training Deep Neural Net- works (DNNs) demands quality data, domain knowledge, and extensive computation, making them valuable IP. Research has explored protect- ing DNNs from misuse (Sun et al., 2023), mainly via deep watermarking and fingerprinting. Water- marking embeds identifiers in models, inputs, or outputs to detect misuse (Uchida et al., 2017; Nagai et al., 2018; Wang and Kerschbaum, 2021; Li et al., 2022; Sablayrolles et al., 2020; Chen et al., 2021; Yang et al., 2021a; Wang et al., 2022), but often re- quires intrusive modifications. Fingerprinting (Liu et al., 2022; Yang et al., 2021b; Chen et al., 2022; Guan et al., 2022), by contrast, extracts unique, non-invasive model features like decision bound- aries. LLMs IP Protection . LLM text watermarking protects copyrights by embedding signals into gen- erated text, often via logit modification and vo- cabulary partitioning (Kirchenbauer et al., 2023). Enhancements include support for low-entropy text (Lee et al., 2024), multi-bit encoding (Fernan- dez et al., 2023), and sampling-based approaches that avoid logit changes (Kuditipudi et al., 2024). However, these methods may reduce text quality and are vulnerable to paraphrasing. Fingerprinting offers better robustness and has recently been ex- plored for LLMs (Xu et al., 2024; Russinovich and Salem, 2024; Zhang et al., 2025; Pasquini et al., 2024; Iourovitski et al., 2024; Yang and Wu, 2024), though existing methods often require access to model parameters or fail to generalize to diverse suspect models. We introduce DuFFin, a novel framework addressing these limitations. 6 Conclusion In this paper, we propose a novel | https://arxiv.org/abs/2505.16530v1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.