https://arxiv.org/abs/2505.16128v2
A Writing Evaluation Rubrics

For the Writing Evaluation Task, we used the original rubrics from ASAP-AES with an output format instruction, as follows:

```
'''You will grade student essays. After reading each essay, assign a holistic score based on the rubric below.

## Main Scoring Criteria

Score 6
If an essay demonstrates clear and consistent mastery (may have few minor errors) with ALL of these:
- Effectively and insightfully develops a point of view on the issue and demonstrates outstanding critical thinking
- Uses clearly appropriate examples, reasons, and other evidence taken from the source text(s) to support its position
- Is well organized and clearly focused, demonstrating clear coherence and smooth progression of ideas
- Exhibits skillful use of language, using a varied, accurate, and apt vocabulary
- Demonstrates meaningful variety in sentence structure
- Is free of most errors in grammar, usage, and mechanics

Score 5
If an essay demonstrates reasonably consistent mastery (will have occasional errors or lapses in quality) with ALL of these:
- Effectively develops a point of view on the issue and demonstrates strong critical thinking
- Generally uses appropriate examples, reasons, and other evidence taken from the source text(s) to support its position
- Is well organized and focused, demonstrating coherence and progression of ideas
- Exhibits facility in the use of language, using appropriate vocabulary
- Demonstrates variety in sentence structure
- Is generally free of most errors in grammar, usage, and mechanics

Score 4
If an essay demonstrates adequate mastery (will have lapses in quality) with ALL of these:
- Develops a point of view on the issue and demonstrates competent critical thinking
- Uses adequate examples, reasons, and other evidence taken from the source text(s) to support its position
- Is generally organized and focused, demonstrating some coherence and progression of ideas
- May demonstrate inconsistent facility in the use of language, using generally appropriate vocabulary
- Demonstrates some variety in sentence structure
- May have some errors in grammar, usage, and mechanics

Score 3
If an essay demonstrates developing mastery with ONE OR MORE of these weaknesses:
- Develops a point of view on the issue, demonstrating some critical thinking, but may do so inconsistently or use inadequate examples, reasons, or other evidence taken from the source texts to support its position
- Is limited in its organization or focus, or may demonstrate some lapses in coherence or progression of ideas
- May demonstrate facility in the use of language, but sometimes uses weak vocabulary or inappropriate word choice and/or lacks variety
- Demonstrates problems in sentence structure
- Contains an accumulation of errors in grammar, usage, and mechanics

Score 2
If an essay demonstrates little mastery with ONE OR MORE of these weaknesses:
- Develops a point of view on the issue that is vague or seriously limited, and demonstrates weak critical thinking
- Provides inappropriate or insufficient examples, reasons, or other evidence taken from the source text to support its position
- Is poorly organized and/or focused, or demonstrates serious problems with coherence or progression of ideas
- Displays very little facility in the use of language, using very limited vocabulary or incorrect word choice
- Demonstrates frequent problems in sentence structure
- Contains errors in grammar, usage, and mechanics so serious that meaning is somewhat obscured

Score 1
If an essay demonstrates very little or no mastery with ONE OR MORE of these weaknesses:
- Develops no viable point of view on the issue, or provides little or no evidence to support its position
- Is disorganized or unfocused, resulting in a disjointed or incoherent essay
- Displays fundamental errors in vocabulary
- Demonstrates severe flaws in sentence structure
- Contains pervasive errors in grammar, usage, or mechanics that persistently interfere with meaning

## Output Format

Provide your evaluation in the following JSON format:

json
{{
  "overall": 1-6,
  "rationale": "<your grading rationale>"
}}

Student Completer:

{Student_Info}

Student's essay:

{Essay_Content}
'''
```

Listing 1: Writing Evaluation Prompt with Rubrics

B Additional Results from DeepSeek-V3 and Qwen2.5

We expand our evaluation to include additional open-source LLMs, including DeepSeek-V3 and Qwen2.5-72B, with results summarized in Tables 4 and 5.

Model          AB_Cor%        AB_Inc%        Significance
Using Name as Race Proxy:
Qwen2.5-72B    14% (White)    10% (Black)    p < 0.001
DeepSeek-V3    20% (Asian)    22% (Black)    p < 0.001
No Proxy, Race Given Directly:
Qwen2.5-72B    26% (White)    24% (Black)    p < 0.001
DeepSeek-V3    50% (Asian)    68% (Black)    p < 0.001

Table 4: Attribution Bias Results in Math for DeepSeek-V3 and Qwen2.5.

Model          Race-wise EP               Gender-wise EP
DeepSeek-V3    16.7% (Black > Asian)*     6.7% (Female > Male)*
Qwen2.5-72B    3.3% (White > Black) (-)   3% (Female > Male) (-)

Table 5: Evaluation Bias in Writing Assessment for DeepSeek-V3 and Qwen2.5.

Additional testing on open-source LLMs is consistent with our main findings. Attribution biases are more pronounced than evaluation biases in these models, potentially reflecting different alignment strategies. Direct demographic attribution queries (without name proxies) triggered stronger biases, with neither model rejecting such queries.
Comparatively, Qwen2.5-72B shows greater resistance to evaluation bias, while DeepSeek-V3 displays more significant attribution bias.
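Listing 1 asks the grader model for a JSON object with an `overall` score and a `rationale`. A minimal sketch of parsing such a reply; the helper name and the sample reply are illustrative, not from the paper's released code:

```python
import json
import re

def parse_grading_reply(reply: str) -> dict:
    """Extract the JSON grading object from a model reply.

    Models often wrap JSON in prose or code fences, so we search for the
    first {...} block instead of parsing the reply string directly.
    (Illustrative helper, not part of the paper's code.)
    """
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in reply")
    graded = json.loads(match.group(0))
    overall = int(graded["overall"])
    if not 1 <= overall <= 6:
        raise ValueError(f"overall score {overall} outside the 1-6 rubric range")
    return {"overall": overall, "rationale": graded.get("rationale", "")}

# Hypothetical model reply in the Listing 1 output format.
reply = ('Here is my evaluation:\n'
         '{"overall": 4, "rationale": "Adequate mastery with some lapses."}')
print(parse_grading_reply(reply)["overall"])  # → 4
```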
LLMs Are Not Scorers: Rethinking MT Evaluation with Generation-Based Methods

Hyang Cui
Peking University
cx0329@stu.pku.edu.cn

https://arxiv.org/abs/2505.16129v1

Abstract

Recent studies have applied large language models (LLMs) to machine translation quality estimation (MTQE) by prompting models to assign numeric scores. Nonetheless, these direct scoring methods tend to show low segment-level correlation with human judgments. In this paper, we propose a generation-based evaluation paradigm that leverages decoder-only LLMs to produce high-quality references, followed by semantic similarity scoring using sentence embeddings. We conduct the most extensive evaluation to date in MTQE, covering 8 LLMs and 8 language pairs. Empirical results show that our method outperforms both intra-LLM direct scoring baselines and external non-LLM reference-free metrics from MTME. These findings demonstrate the strength of generation-based evaluation and support a shift toward hybrid approaches that combine fluent generation with accurate semantic assessment. Code and data are available at our GitHub repository.[1]

[1] https://github.com/CuiNiki/LLMs-Are-Not-Scorers

1 Introduction

Timely feedback is critical for improving machine translation, but human evaluation is expensive and slow. To address this, machine translation quality estimation (MTQE) has emerged, using automatic metrics to approximate human judgment. BLEU (Papineni et al., 2002) was the first widely adopted metric, relying on n-gram overlap. METEOR (Banerjee and Lavie, 2005) enhanced this approach by incorporating linguistic features such as stemming and synonymy, but its reliance on handcrafted resources constrained its cross-lingual generalizability. chrF (Popović, 2015) introduced a language-agnostic alternative based on character-level F-scores, improving robustness but still rooted in surface matching. To address these limitations, researchers turned to metrics that capture semantic meaning. BERTScore (Zhang et al., 2020) compared contextual embeddings from pretrained models, allowing high scores even with different wordings, provided the meanings aligned. However, it remained unsupervised and detached from human judgment. Building on this, COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020) introduced supervised learning paradigms that train on human-labeled data, ushering in the neural era of MTQE. Still, neural metrics typically output a single score, offering limited insight. xCOMET (Guerreiro et al., 2023) enhanced interpretability with word-level error types. UniTE (Wan et al., 2022) unified reference-based and reference-free evaluation, improving flexibility. Recently, the "LLM-as-a-judge" (Zheng et al., 2023) approach has attracted attention for using large language models directly for MT evaluation. However, results from ACL 2024 (Huang et al., 2024) show low segment-level correlation (≈0.2), and sometimes even negative values. This unexpected result raises important questions: Are we misusing LLMs? Have we failed to leverage their strengths?

2 Related Work

Recent studies have explored using LLMs for MTQE. GEMBA (Kocmi and Federmann, 2023) prompts GPT models to assign translation scores on a 0-100 scale, using the prompt in Figure 1; Figure 2 shows the prompt used in our generation-based method.

Figure 1: Prompt used in GEMBA for direct scoring

    Score the following translation from {src_lang} to {tgt_lang} on a continuous scale from 0 to 100 that starts on "No meaning preserved", goes through "Some meaning preserved", then "Most meaning preserved and few grammar mistakes", up to "Perfect meaning and grammar".
    {tgt_lang} translation: "{translation}"
    Score (0-100):

Figure 2: Prompt used in our generation-based method

    You are a certified WMT benchmark translator. Translate the following sentence from the WMT22 dataset into English. Your translation will be directly compared to WMT system outputs using the 'all-mpnet-base-v2' semantic similarity model. To ensure accurate benchmarking, provide exactly one clean English sentence, with no alternative translations, explanations, or additional text.
    Sentence: {src}
    Translation:

However, both GEMBA and the later EAPrompt (Lu et al., 2024) evaluated only three high-resource language pairs on three models, limiting generalizability. Subsequent work expanded the use of GEMBA-style prompts by evaluating more models and language pairs, but two key issues have emerged. First, two studies published in the same year, ACL 2024 (Huang et al., 2024) and EMNLP 2024 (Qian et al., 2024), reported conflicting findings regarding the role of source input: Huang et al. (2024) found that including the source reduced correlation with human judgments, while Qian et al. (2024) observed the opposite. This inconsistency raises concerns about methodological reliability. Second, GEMBA-style methods continue to struggle at the segment level: LLMs often produce repeated integer scores, indicating memorized patterns and limiting their ability to capture fine-grained translation quality. These limitations suggest that direct scoring may not be the best use of LLMs for MTQE, as decoder-only models are trained for next-token prediction rather than regression. This reflects the "Generative AI Paradox" (West et al., 2023): large generative models may outperform humans in generation tasks while underperforming in understanding tasks.
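Under our reading, the generation prompt in Figure 2 has a single {src} placeholder, so constructing it is one string-format operation. A sketch; the template text is quoted from Figure 2, while the example source sentence is invented:

```python
# Template quoted from the Figure 2 prompt; {src} is the only placeholder.
GENERATION_PROMPT = (
    "You are a certified WMT benchmark translator. Translate the following "
    "sentence from the WMT22 dataset into English. Your translation will be "
    "directly compared to WMT system outputs using the 'all-mpnet-base-v2' "
    "semantic similarity model. To ensure accurate benchmarking, provide "
    "exactly one clean English sentence, with no alternative translations, "
    "explanations, or additional text.\n"
    "Sentence: {src}\n"
    "Translation:"
)

def build_generation_prompt(src: str) -> str:
    """Fill the Figure 2 template with one source sentence."""
    return GENERATION_PROMPT.format(src=src)

# Invented German source sentence for illustration.
prompt = build_generation_prompt("Das Wetter ist heute schön.")
print(prompt.splitlines()[-1])  # the final line is "Translation:"
```

Ending the prompt with "Translation:" encourages the decoder-only model to continue with exactly one translated sentence, which is what the pipeline's similarity step expects.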
This paradox suggests that LLMs can fluently assign a score to a translation yet may not truly comprehend the semantic fidelity between source and output, casting doubt on direct scoring approaches. To address this, we propose a generation-based evaluation paradigm, detailed in Section 3 and empirically validated through two rounds of experiments in Section 4.

3 Methodology

3.1 Overview

We introduce a generation-based evaluation paradigm for machine translation quality estimation, consisting of three steps:
(1) Generating References. Given a source sentence (src), we prompt a decoder-only model to generate a high-quality reference translation, denoted as src_translation.
(2) Computing Semantic Similarity. We compute the semantic similarity between the generated reference (src_translation) and the machine translation output (mt) using the all-mpnet-base-v2 embedding model from Sentence-BERT. We use the resulting similarity score as our predicted quality score.
(3) Evaluating Correlation with Human Judgments. To evaluate how well our method aligns with human judgment, we calculate the correlation coefficients between the predicted similarity scores and human-annotated Direct Assessment (DA) scores for each language pair.

3.2 Advantages

Our method offers three main advantages: stability, interpretability, and flexibility.
(1) Improved Stability. LLMs often fail to return scores or produce invalid outputs when prompted for numeric judgments (Qian et al., 2024). In contrast, prompting them to generate translations aligns with their training objective, resulting in fewer failures and a more robust evaluation pipeline.
(2) Enhanced Interpretability. GPTScore (Fu et al., 2023) relies on token-level likelihoods, which are opaque and can reward fluent but semantically incorrect translations. In contrast, our method incorporates source information and offers a transparent evaluation process with clearly interpretable scores.
(3) Greater Flexibility. Traditional metrics depend on a static reference. In contrast, our method generates dynamic references on the fly, allowing control over tone, terminology, or style via prompt design. This makes it well suited for domain-specific or stylistically sensitive evaluation tasks.

                                  NE-EN          ET-EN          SI-EN          RO-EN          RU-EN
                                  ρ      r       ρ      r      ρ      r       ρ      r      ρ      r
Gemma-7B
  ours                            0.56   0.543   0.441  0.347  0.441  0.419   0.665  0.708  0.506  0.522
  baseline                        0.333  0.379   0.349  0.384  0.274  0.29    0.624  0.585  0.327  0.421
  growth                          +68%   +43%    +26%   -10%   +61%   +44%    +7%    +21%   +55%   +24%
Llama-2-7B
  ours                            0.199  0.21    0.105  0.133  0.084  0.056   0.644  0.731  0.527  0.588
  baseline                        0.183  0.216   0.044  0.123  0.08   0.13    0.307  0.266  0.172  0.234
  growth                          +9%    -3%     +138%  +8%    +4%    -57%    +110%  +174%  +207%  +151%
OpenChat3.5
  ours                            0.451  0.454   0.234  0.25   0.146  0.179   0.662  0.744  0.559  0.602
  baseline                        0.378  0.361   0.54   0.547  0.412  0.406   0.471  0.431  0.571  0.589
  growth                          +19%   +26%    -57%   -54%   -64%   -56%    +41%   +72%   -2%    +2%
Llama-3-8B (baseline: Llama-2-13B)
  ours                            0.444  0.441   0.365  0.385  0.393  0.385   0.65   0.742  0.52   0.55
  baseline                        0.089  0.062   0.216  0.234  0.015  0.03    0.279  0.305  0.393  0.404
  growth                          +399%  +613%   +69%   +65%   +2485% +1206%  +133%  +143%  +32%   +36%
Qwen1.5-14B
  ours                            0.451  0.453   0.428  0.429  0.226  0.238   0.664  0.761  0.533  0.562
  baseline                        0.349  0.327   0.484  0.513  0.383  0.369   0.22   0.561  0.516  0.505
  growth                          +30%   +39%    -12%   -16%   -41%   -35%    +202%  +36%   +3%    +11%

Table 1: Results of Experiment 1
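The three steps of Section 3.1 can be sketched as a small pipeline. The LLM call and the encoder are injected as functions; below they are stand-ins (a canned translation and a toy bag-of-words embedding, both our own inventions) so the sketch runs without the actual models, whereas the paper uses a decoder-only LLM for step (1) and all-mpnet-base-v2 embeddings for step (2):

```python
import math
from collections import Counter
from typing import Callable

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def bow_embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for all-mpnet-base-v2."""
    return Counter(text.lower().split())

def quality_score(src: str, mt: str,
                  generate_reference: Callable[[str], str],
                  embed: Callable[[str], Counter]) -> float:
    """Steps (1)-(2): generate a reference for src, then score mt by its
    semantic similarity to that reference."""
    src_translation = generate_reference(src)          # step (1)
    return cosine(embed(src_translation), embed(mt))   # step (2)

# Canned 'LLM' standing in for the decoder-only generator.
fake_llm = lambda src: "the weather is nice today"

src = "Das Wetter ist heute schön."
assert quality_score(src, "the weather is nice today", fake_llm, bow_embed) > \
       quality_score(src, "bananas are yellow", fake_llm, bow_embed)
```

Step (3) then correlates these predicted scores with DA judgments over a whole test set, one language pair at a time.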
The official WMT 2024 QE Shared Task report (Zerva et al., 2024) highlights a performance gap of LLMs between generation (Task 3) and scoring (Task 1), encouraging hybrid approaches. Our method aligns with this vision: we use a decoder-only LLM to generate references and an encoder-only model to evaluate them. Additionally, the WMT23 Metrics Shared Task (Freitag et al., 2023) emphasizes the critical role of reference quality. In response, the organizers proposed generating synthetic references. Inspired by this, we generate context-sensitive references, leveraging LLMs' generative strengths to enhance evaluation quality. This hybrid framework combines the strengths of generative and embedding-based models, yielding scores that better align with human judgment.

4 Experiment

4.1 Experimental Setup

• Hardware: 1× A100 GPU (40GB)
• Language Pairs: 8 pairs from WMT22 (Freitag et al., 2022), covering a range of resource levels from low to high: NE-EN, ET-EN, SI-EN, RO-EN, UK-EN, CS-EN, RU-EN, DE-EN.
• Models: Gemma-7B (Gemma Team et al., 2024), LLaMA-2-7B (Touvron et al., 2023), OpenChat3.5 (Wang et al., 2023), LLaMA-3-8B (Grattafiori et al., 2024), Qwen3-8B (Yang et al., 2025), Qwen1.5-14B (Bai et al., 2023), DeepSeek-R1 (DeepSeek-AI, 2025), and GPT-4-turbo (OpenAI, 2023).

4.2 Experiment 1

This experiment compares our method against the baseline proposed by Qian et al. (2024). The aim is to assess whether our method performs better under the same model settings. The baseline includes six models. We excluded one non-decoder-only model and replaced LLaMA-2-13B due to its instability, as reported in the baseline paper: on the RO-EN test set (867 segments), 461 segments failed to produce a score. To ensure fair comparability, we substituted a model with fewer parameters. Interestingly, LLaMA-3-8B delivered more stable and higher-quality results, suggesting that larger models do not necessarily perform better.

4.3 Experiment 2

While Experiment 1 shows that our method outperforms a direct scoring baseline using LLMs, many existing MTQE metrics are not LLM-based. Therefore, we extend our evaluation to ask: can our method also outperform non-LLM metrics? We use the official MTME toolkit [2] released by the WMT. To ensure fair comparison, we focus on reference-free metrics, since our method does not require gold references. Reference-based metrics introduce additional semantic input and belong to a fundamentally different evaluation paradigm. For completeness, we report reference-based results in Appendix A. Together, the two experiments form a progressive evaluation: Experiment 1 compares our method to LLM-based direct scoring, while Experiment 2 benchmarks it against MTME's reference-free metrics, highlighting its broader advantages.
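Table 1 reports, per language pair, correlations between predicted scores and DA judgments (ρ and r, conventionally Spearman and Pearson) plus a growth row. A self-contained sketch of these computations; this is our own reconstruction (in practice a library such as scipy.stats would be used), and the growth formula, relative change over the baseline rounded to whole percent, is inferred from the reported numbers:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient r."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ranks(xs):
    """1-based ranks, with ties receiving their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of tied positions, 1-based
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(xs, ys):
    """Spearman correlation: Pearson computed on the ranked data."""
    return pearson(ranks(xs), ranks(ys))

def growth(ours, baseline):
    """Relative improvement over the baseline, as in Table 1's growth rows."""
    return f"{round((ours - baseline) / baseline * 100):+d}%"

# Gemma-7B on NE-EN from Table 1: rho 0.56 vs 0.333, r 0.543 vs 0.379.
print(growth(0.56, 0.333), growth(0.543, 0.379))  # → +68% +43%
```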
The remaining two models outperform the baseline on three and two language pairs, respectively. These results suggest that strong performance on one language pair does not guarantee consistent effectiveness across others; model behavior can vary depending on the language. Overall, our method shows stronger and more consistent performance than the baseline within LLM-based evaluation settings.

Experiment 2 extends the comparison to reference-free MTME metrics. As shown in Table 2, bolded values mark cases where our method exceeds the average scores of MTME's [noref] systems. Among the four language pairs evaluated, our method achieves higher average correlations than all MTME baselines on three. Notably, in two of these three language pairs, our scores also surpass MTME's best-reported results. For the low-resource pair UK-EN, all of our models outperform all five MTME metrics, establishing new state-of-the-art results across the board. This progression of results highlights not only the overall effectiveness of our method, but also its cross-linguistic generalizability across both high- and low-resource scenarios.

[2] https://github.com/google-research/mt-metrics-eval

6 Conclusion

This work rethinks the role of large language models (LLMs) in machine translation quality estimation, shifting from direct scoring to a generation-based evaluation framework. We conduct the most extensive study to date, evaluating 8 LLMs and 8 language pairs through two experiments. Our method consistently outperforms LLM-based scoring approaches and surpasses non-LLM official MTME reference-free metrics, particularly in low-resource settings. These findings underscore the limitations of using LLMs as scorers and demonstrate the effectiveness of leveraging them as generators. We advocate for a hybrid evaluation paradigm that combines the fluency of generation with the semantic precision of embedding-based scoring.

                     UK-EN           CS-EN           RU-EN            DE-EN
                     ρ       r       ρ       r       ρ       r        ρ       r
ours
  Gemma-7B           0.025   0.016   0.041   0.030   0.000   0.005    0.011   0.006
  Qwen3-8B           0.019   0.006   0.047   0.038   0.009   0.004    0.009   0.011
  DeepSeek-R1        0.017   0.012   0.040   0.039   0.010   0.005    0.017   0.024
  GPT-4-turbo        0.016   0.010   0.033   0.038   0.006   0.004    0.012   0.017
mtme [noref]
  HWTSC-Teacher-Sim  0.011   0.005   0.026   0.024   0.010   0.010    0.024   0.018
  COMETKiwi          0.005   0.005   0.041   0.044   0.002   0.008    0.017   0.020
  UniTE-src          0.005   0.004   0.038   0.039   0.002   0.007    0.027   0.030
  REUSE              0.000   -0.008  0.002   0.005   -0.011  -0.010   0.014   0.012
  COMET-QE           -0.003  -0.010  0.015   0.022   -0.007  -0.002   0.030   0.024

Table 2: Results of Experiment 2

Limitation

We acknowledge a key limitation of our study: all language pairs use English as the target language. This choice was made to reduce confounding variables and focus on the effects of the source language, especially given that prior studies have reached opposing conclusions about its role in LLM evaluation. While this design helped ensure internal consistency, we fully recognize that it limits the generalizability of our findings. We encourage future research to apply our method to a broader range of target languages to better assess its multilingual applicability.

Ethics Statement

This study does not involve human subjects or sensitive content.
All data used in our experiments are publicly available, including the WMT22 dataset from the WMT shared task, the dataset released in a prior EMNLP 2024 publication, and evaluation resources from the MTME toolkit. The language models employed are open -access and widely used in the research community. Our goal is to promote more effective and transparent approaches to machine translation evaluation. We believe this work offers constructive ins ights while adhering to ethical standards in data usage and model deployment. References Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al - Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankuma r, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia -Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Govind Thattai, Graeme Nail, Gregoire
https://arxiv.org/abs/2505.16129v1
Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan Misra, Ivan Evtimov, Jack Zhang, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Li u, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia et al. 2024. The Llama 3 Herd of Models. Preprint , arXiv:2407.21783 . An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zha ng, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, Zihan Qiu . 2024. Qwen3 Technical Report. Preprint , arXiv:2505.09388 . Chrysoula Zerva, Frederic Blain, José G. C. De Souza, Diptesh Kanojia, Sourabh Deoghare, Nuno M. Guerreiro, Giuseppe Attanasio, Ricardo Rei, Constantin Orasan, Matteo Negri, Marco Turchi, Rajen Chatterjee, Pushpak Bhattacharyya, Markus Freitag, and André M artins. 2024. Findings of the Quality Estimation Shared Task at WMT 2024: Are LLMs Closing the Gap in QE? . In Proceedings of the Ninth Conference on Machine Translation , pages 82 –109, Miami, Florida, USA. Association for Computational Linguistics. 
DeepSeek -AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z.F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wa ng, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J.L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Le an Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming 6 Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R.J. Chen, R.L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zho u, Shuting Pan, S.S. Li et al. (2025). DeepSeek -R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.
https://arxiv.org/abs/2505.16129v1
ArXiv, abs/2501.12948.

Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Léonard Hussenot, Pier Giuseppe Sessa, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex Castro-Ros, Ambrose Slone, Amélie Héliou, Andrea Tacchetti, Anna Bulanova, Antonia Paterson, Beth Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer, Daphne Ippolito, David Reid, Elena Buchatskaya, Eric Ni, Eric Noland, Geng Yan, George Tucker, George-Christian Muraru, Grigory Rozhdestvenskiy, Henryk Michalewski, Ian Tenney, Ivan Grishchenko, Jacob Austin, James Keeling, Jane Labanowski, Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen, Johan Ferret, Justin Chiu, Justin Mao-Jones, Katherine Lee, Kathy Yu, Katie Millican, Lars Lowe Sjoesund, Lisa Lee, Lucas Dixon, Machel Reid, Maciej Mikuła, Mateo Wirth, Michael Sharman, Nikolai Chinaev, Nithum Thain, Olivier Bachem, Oscar Chang, Oscar Wahltinez, Paige Bailey, Paul Michel, Petko Yotov, Rahma Chaabouni, Ramona Comanescu, Reena Jana, Rohan Anil, Ross McIlroy, Ruibo Liu, Ryan Mullins, Samuel L Smith, Sebastian Borgeaud, Sertan Girgin, Sholto Douglas, Shree Pandya, Siamak Shakeri, Soham De, Ted Klimenko, Tom Hennigan, Vlad Feinberg, Wojciech Stokowiec, Yuhui Chen, Zafarali Ahmed, Zhitao Gong, Tris Warkentin, Ludovic Peran, Minh Giang, Clément Farabet, Oriol Vinyals, Jeff Dean, Koray Kavukcuoglu, Demis Hassabis, Zoubin Ghahramani, Douglas Eck, Joelle Barral, Fernando Pereira, Eli Collins, Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, and Kathleen Kenealy. 2024. Gemma: Open Models Based on Gemini Research and Technology. Preprint, arXiv:2403.08295.

Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2024. GPTScore: Evaluate as You Desire.
In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6556–6576, Mexico City, Mexico. Association for Computational Linguistics.

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen Technical Report.

Juhyun Oh, Eunsu Kim, Inha Cha, and Alice Oh. 2024. The Generative AI Paradox in Evaluation: "What It Can Solve, It May Not Evaluate". In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 248–257, St. Julian's, Malta. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA.
Association for Computational Linguistics.

Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics.

Markus Freitag, Nitika Mathur, Chi-kiu Lo, Eleftherios Avramidis, Ricardo Rei, Brian Thompson, Tom Kocmi, Frederic Blain, Daniel Deutsch, Craig Stewart, Chrysoula Zerva, Sheila Castilho, Alon Lavie, and George Foster. 2023. Results of WMT23 Metrics Shared Task: Metrics Might Be Guilty but References Are Not Innocent. In Proceedings of the Eighth Conference on Machine Translation, pages 578–628, Singapore. Association for Computational Linguistics.

Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and André F. T. Martins. 2022. Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 46–68, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.

Nuno M. Guerreiro, Ricardo Rei, Daan van Stigt, Luisa Coheur, Pierre Colombo, and André F. T. Martins. 2024. xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection. Transactions of the Association for Computational Linguistics, 12:979–995.

OpenAI. 2023. GPT-4 Technical Report. Preprint, arXiv:2303.08774.

Qingyu Lu, Baopu Qiu, Liang Ding, Kanjian Zhang, Tom Kocmi, and Dacheng Tao. 2024. Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 8801–8816, Bangkok, Thailand. Association for Computational Linguistics.

Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A Neural Framework for MT Evaluation.
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics.

Shenbin Qian, Archchana Sindhujan, Minnie Kabra, Diptesh Kanojia, Constantin Orasan, Tharindu Ranasinghe, and Fred Blain. 2024. What do Large Language Models Need for Machine Translation Evaluation? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3660–3674, Miami, Florida, USA. Association for Computational Linguistics.

Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning Robust Metrics for Text Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics.

Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating Text Generation with BERT. In International Conference on Learning Representations.

Tom Kocmi and Christian Federmann. 2023. Large Language Models Are State-of-the-Art Evaluators of Translation Quality. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 193–203, Tampere, Finland. European Association for Machine Translation.
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Cantón Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. Preprint, arXiv:2307.09288.

Guan Wang, Sijie Cheng, Xianyuan Zhan, Xiangang Li, Sen Song, and Yang Liu. 2023. OpenChat: Advancing Open-source Language Models with Mixed-Quality Data. Preprint, arXiv:2309.11235.

Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, and Shujian Huang. 2024. Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 3546–3562, Bangkok, Thailand. Association for Computational Linguistics.

Yu Wan, Dayiheng Liu, Baosong Yang, Haibo Zhang, Boxing Chen, Derek Wong, and Lidia Chao. 2022. UniTE: Unified Translation Evaluation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8117–8127, Dublin, Ireland. Association for Computational Linguistics.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. Preprint, arXiv:2306.05685.

A Comparison with MTME Metrics

                            UK-EN                   CS-EN                   RU-EN                   DE-EN
                        ρ      r      τ         ρ      r      τ         ρ      r      τ         ρ      r      τ
ours
  Gemma-7B            0.025  0.016  0.017     0.041  0.030  0.028     0.000  0.005  0.003     0.011  0.006  0.004
  Qwen3-8B            0.019  0.006  0.013     0.047  0.038  0.031     0.009  0.004  0.006     0.009  0.011  0.006
  Deepseek-R1         0.017  0.012  0.011     0.040  0.039  0.027     0.010  0.005  0.007     0.017  0.024  0.011
  GPT-4-turbo         0.016  0.010  0.011     0.033  0.038  0.022     0.006  0.004  0.004     0.012  0.017  0.008
mtme [noref]
  median              0.005  0.004  0.003     0.026  0.024  0.018     0.002  0.007  0.001     0.024  0.020  0.016
  mean                0.004 -0.001  0.002     0.024  0.027  0.017    -0.001  0.003 -0.001     0.022  0.021  0.015
mtme [noref]
  HWTSC-Teacher-Sim   0.011  0.005  0.007     0.026  0.024  0.018     0.010  0.010  0.007     0.024  0.018  0.016
  COMETKiwi           0.005  0.005  0.004     0.041  0.044  0.028     0.002  0.008  0.001     0.017  0.020  0.011
  UniTE-src           0.005  0.004  0.003     0.038  0.039  0.026     0.002  0.007  0.001     0.027  0.030  0.018
  REUSE               0.000 -0.008  0.000     0.002  0.005  0.002    -0.011 -0.010 -0.007     0.014  0.012  0.009
  COMET-QE           -0.003 -0.010 -0.002     0.015  0.022  0.010    -0.007 -0.002 -0.005     0.030  0.024  0.020
mtme
  BLEU                0.010  0.004  0.007     0.064  0.049  0.043     0.021  0.020  0.014     0.013  0.016  0.009
  chrF                0.004  0.000  0.003     0.062  0.048  0.042     0.022  0.019  0.015     0.025  0.028  0.017
  BLEURT-20           0.003 -0.001  0.002     0.053  0.046  0.036     0.021  0.024  0.014     0.026  0.031  0.018
  COMET-20           -0.003  0.002 -0.002     0.050  0.044  0.034     0.020  0.025  0.014     0.027  0.025  0.018
  YiSi-1              0.007  0.007  0.004     0.055  0.042  0.037     0.026  0.030  0.018     0.018  0.027  0.012
  BERTScore           0.004  0.005  0.003     0.058  0.046  0.039     0.028  0.033  0.019     0.016  0.026  0.011
  COMET-22            0.002  0.005  0.002     0.046  0.050  0.031     0.019  0.031  0.013     0.028  0.038  0.019
  MS-COMET-22        -0.001  0.002  0.000     0.044  0.039  0.030     0.010  0.018  0.007     0.020  0.022  0.013
  UniTE               0.005  0.004  0.004     0.053  0.047  0.036     0.018  0.020  0.012     0.028  0.031  0.019
  f200spBLEU          0.009  0.003  0.006     0.064  0.049  0.043     0.027  0.024  0.018     0.014  0.020  0.010
  metricx_xxl_MQM_2020  0.003  0.003  0.002   0.038  0.046  0.026     0.017  0.034  0.011     0.021  0.035  0.014
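The table above reports correlation coefficients labelled ρ, r, and τ per language pair (conventionally Spearman's ρ, Pearson's r, and Kendall's τ between metric scores and human judgments). As a minimal pure-Python sketch of how such coefficients are computed (the toy score lists below are hypothetical, not data from the table):

```python
# Pearson's r, Spearman's rho (Pearson on average ranks), and
# Kendall's tau (tau-a, no tie correction) in pure Python.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    # 1-based average ranks, with ties sharing their mean rank
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical segment-level scores for one language pair
metric_scores = [0.1, 0.4, 0.35, 0.8, 0.7]
human_scores = [0.0, 0.5, 0.30, 0.9, 0.6]
print(spearman(metric_scores, human_scores))  # 1.0 (identical rankings)
print(kendall(metric_scores, human_scores))   # 1.0 (all pairs concordant)
print(pearson(metric_scores, human_scores))   # ≈ 0.96
```

Note how the toy lists agree perfectly in rank (ρ = τ = 1) while r < 1: rank correlations ignore the magnitude of score differences, which is why segment-level metric evaluations usually report all three.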
Position of Uncertainty: A Cross-Linguistic Study of Position Bias in Large Language Models

Mikhail Menschikov†,1,2, Alexander Kharitonov†,3, Maiia Kotyga4, Vadim Porvatov2,3, Anna Zhukovskaya5, David Kagramanyan6, Egor Shvetsov2,⧫ and Evgeny Burnaev2,7

† These authors contributed equally to this work. ⧫ identifies the project curator.

1 ITMO, Saint Petersburg, Russia; 2 Skoltech, Moscow, Russia; 3 Sber, Moscow, Russia; 4 MIPT, Moscow, Russia; 5 Lomonosov MSU, Moscow, Russia; 6 HSE University, Moscow, Russia; 7 AIRI, Moscow, Russia

Corresponding author: m.menschikov@skoltech.ru

Abstract

Large language models exhibit position bias, a systematic neglect of information at specific context positions, yet its interplay with linguistic diversity remains poorly understood. We present a cross-linguistic study across five typologically distinct languages (English, Russian, German, Hindi, Vietnamese), examining how position bias interacts with model uncertainty, syntax, and prompting. The key findings of this paper are the following: (1) position bias is model-driven, with language-specific variations: Qwen2.5-7B-Instruct favors late positions, challenging assumptions of early-token bias; (2) explicit relevance information (e.g., "the context is relevant to the given query") reduces accuracy across languages, undermining prompt-engineering practices; (3) aligning context with position bias increases entropy, yet minimal entropy does not fully correspond to high accuracy; (4) position bias does not provoke a violation of language-specific word orders in languages that are typologically different from English. The code for our experiments is available here: link.

1 Introduction

Many recent applications based on large language models (LLMs) require support for long contexts.1 However, developing new training strategies for handling longer contexts is not enough, as new problems emerge.
One such problem is position bias: the systematic neglect of information at specific positions, usually in the middle [Baker et al., 2024]. While position bias is well documented in English-centric studies [Zhang et al., 2024a, Baker et al., 2024], its behavior in multilingual settings remains underexplored, and most developed bias mitigation strategies [Peysakhovich and Lerer, 2023, Zhao et al., 2021, Zhang et al., 2024b, Guo and Vosoughi, 2024] were evaluated solely on English datasets.

1 Such applications may include Retrieval-Augmented Generation, Autonomous Agents, Scientific Research, Customer Support, etc.

However, as we discuss in Section 2, lexical and morphological variations across languages must be considered for multilingual models. Therefore, as a preliminary step toward further adaptation of multilingual LLMs for applications with long contexts, we aim to answer the following questions in this work:

Q1: Is position bias primarily a model-driven phenomenon, or do language-specific patterns emerge due to lexical, morphological, and syntactic differences [Ghosh et al., 2024a]?

Q2: Does position bias cause models to favor English-like SVO (Subject, Verb, Object) word order when generating text in languages with other dominant word orders?

Q3: Do prompt-based strategies (e.g., explicit position guidance [Zhang et al., 2024b]) effectively mitigate bias across languages? Can we focus model attention via prompting, and would it improve model performance?

Q4: Finally, we are interested in a deeper understanding of how position bias affects model generation, and thus we perform a formal and empirical analysis of how position bias affects the entropy of the output distribution.

Contribution. We conduct a multilingual analysis of position bias across five typologically diverse languages (English, German, Russian, Hindi,
https://arxiv.org/abs/2505.16134v1
Vietnamese), revealing:

♣ Position bias is predominantly model-driven but exhibits language-specific variations.

♣ We observe a stronger late-position bias in Qwen2.5-7B-Instruct, contradicting prior claims of an inherent early-token preference [Wu et al., 2025, Barbero et al., 2025].

♣ Explicitly instructing models about correct context placement (e.g., "the correct context has label 1") consistently degrades accuracy across all languages, challenging prompt-engineering practices [Zhang et al., 2024b].

♣ Formal and empirical analysis of entropy reveals counterintuitive dynamics: when the placement of the correct context is aligned with position bias, entropy often increases, demonstrating higher model uncertainty.

♣ While we do not find any connection between position bias and word order, we observe some model-driven patterns.

arXiv:2505.16134v1 [cs.CL] 22 May 2025

Figure 1 (contents):
Example question: who wrote the book The Story of Tibet?
Example answer: Thomas Laird
Ground truth: Journalist and author Thomas Laird
Example prompt: "Answer the question using the available information from the texts in the list below. Each text has a corresponding real-valued score of its relevance to the question in square brackets at the beginning. Scores range from 0.0 (the text is not suitable for generating an answer based on it) to 1.0 (the text is suitable for generating an answer based on it). Use this information. Choose texts with high enough relevance scores. If, based on the specified scores, there are no texts in the list that are relevant enough to generate an answer based on them, then generate the following answer: 'I do not have an answer to your question'. Generate the answer only in English. Do not duplicate the question in the answer. Generate only the answer to the specified question. The answer needs to be short. Do not generate anything extra. Available information: <[score1] ctx1>, <[score2] ctx2>, <[score3] ctx3>, <[score4] ctx4>, <[score5] ctx5> Question: <question>"

Figure 1: Experiment Structure.
For each question, the correct answer is placed in one of three positions: top, middle, or bottom of the context. Each context is assigned a binary score (0 or 1) indicating its relevance to the question. Three scoring scenarios are evaluated. Aligned: the relevant context is assigned a score of 1 and all others a score of 0. All Zero: all contexts, including the relevant one, are assigned a score of 0. No Scores: relevance scores are omitted entirely. This design evaluates how scoring mechanisms and answer positioning influence model performance across varying levels of contextual guidance.

These findings yield the following practical implications: (1) Since bias is predominantly model-driven, existing English-optimized mitigation methods would probably work with other languages but require language-specific adjustments. (2) Some Chain-of-Thought strategies that rely on positional guidance demand careful consideration, as we observed consistent model degradation when positional instructions are provided. (3) Retrieval-augmented generation (RAG) systems that adopt reordering or relevance-score strategies (discussed in Section 2) need to account for language and model characteristics; these strategies usually assume that the model attends more to the first (recent) tokens, which is not always true. (4) Entropy dynamics complicates uncertainty-based bias mitigation approaches [Duan et al., 2024] and may be important for
uncertainty quantification strategies. The code for our experiments is available in the GitHub repository.

2 Related Work

What causes position bias? Prior work identifies multiple contributing factors. [Zhang et al., 2024a] attribute position bias to U-shaped attention patterns in transformers, which prioritize extremal positions and degrade performance for mid-context evidence. Theoretical and empirical studies further demonstrate that transformer attention is inherently biased toward earlier tokens [Wu et al., 2025, Barbero et al., 2025]. [Wu et al., 2025] explain position bias by noting that "causal masking inherently biases attention toward earlier positions, as tokens in deeper layers attend to increasingly contextualized representations of earlier tokens". Our results reveal exceptions where later positions are preferred, highlighting the complexity of the problem. Training data biases, such as serial-position effects in corpora, shape how models prioritize sequence positions [Wu et al., 2025, Guo and Vosoughi, 2024].

Interplay of Culture, Language, and Model Design. The way we perceive the world is influenced not only by our culture [Oyserman and Lee, 2008] but also by the language we speak [Boroditsky et al., 2003]. The latter point is particularly relevant for LLMs, since they are trained on specific languages. Recent studies have shown that multilingual LLMs often initiate their "thinking process" in English, pivoting to the prompt's original language in the middle layers [Zhang et al., 2024a, Peysakhovich and Lerer, 2023]. These models exhibit lower lexical naturalness in non-English languages, with the naturalness gap being more pronounced for languages structurally distant from English [Guo et al., 2024].
While the volume of training data plays a crucial role [Arnett and Bergen, 2025], linguistic complexity, including lexical and morphological variations across languages, must also be considered [Ghosh et al., 2024a, Dang et al., 2024, Ismayilzada et al., 2025]. Additionally, architectural design choices affect languages in different ways; for instance, removing positional encoding from language models would most degrade performance in languages with limited morphological systems [Ghosh et al., 2024a].2

2 This raises an important question: has the development of LLM architectures over the last decade been primarily geared toward English? As noted, "English-centric practices in NLP may have impeded progress for language models in other

At the same time, most bias mitigation approaches evaluate their performance in English [Zhang et al., 2024a, Peysakhovich and Lerer, 2023, Zhang et al., 2024b, Yu et al., 2024a, Wang et al., 2024a]. These approaches fall into two categories: prompt-based techniques and architectural interventions. Architectural methods, such as positional encodings, alternative masking schemes, and calibration mechanisms, address root causes but often require retraining and introduce computational overhead [Zhang et al., 2024a, Wu et al., 2025, Zhao et al., 2021]. Prompt-based strategies, including query-aware contextualization and recency prompting, aim to redirect attention dynamically [Peysakhovich and Lerer, 2023, Wang et al., 2024a, Yu et al., 2024a]. We focus on prompt-based strategies, using as a starting point the work done by [Zhang et al., 2024b]. The authors studied whether a model can improve its performance when given the explicit placement of
https://arxiv.org/abs/2505.16134v1
the correct answer. They used two types of instructions, relative and absolute, and found that models lack relative awareness and that implicit information about the absolute placement of the correct prompt marginally improves model performance. However, the authors did not study the scenario where the placement prompt is absent altogether.

Practical considerations: position bias affects Chain-of-Thought (CoT) strategies; CoT struggles with position bias even when reasoning steps are correct, and models often fail to retrieve evidence from middle positions [Zhang et al., 2024a]. In [Zhang et al., 2024a, Yu et al., 2024a], the authors analyzed error propagation in multi-hop reasoning. In RAG systems, one of the mitigating strategies is context ordering [Wang et al., 2024b, Alessio et al., 2024, Jin et al., 2024] (we discuss these approaches in Appendix A). While conventional approaches often assume a monotonic relationship between document position and attention (e.g., prioritizing the first/last positions), our analysis reveals that position bias may exhibit language-specific patterns and is not always maximized at early tokens. This observation challenges assumptions in methods like Long-Context RAG, which rely on fixed position prioritization, and highlights the need for language-adaptive reordering. While predictive entropy is widely used to quantify model uncertainty [Huang et al., 2024, Sychev et al., 2025], its relationship with position bias remains unexplored. [Duan et al., 2024] note that uncertainty estimates can be token-biased, but how position bias interacts with uncertainty dynamics is unclear.

languages" [Arnett and Bergen, 2025]. While this issue is important, it falls outside the scope of this paper.

3 Methods

3.1 Position bias formalization

To evaluate the effect of position bias, we consider the question answering task. For each question Q we have the ground truth answer A and a number N of contexts ctx_1, . . . , ctx_N.
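As an illustrative sketch of this setup (not the paper's actual code), the per-position accuracy measurement can be organized as follows; `model_answer` and `is_correct` are hypothetical stand-ins for the LLM call and the correctness judge:

```python
import random

# Sketch of the evaluation loop: for each question, one relevant context
# is mixed with N-1 random distractors, the relevant context is placed
# at a fixed position, and accuracy is averaged per position.

def build_contexts(relevant_ctx, distractor_pool, position, n=5):
    # position: 0 (top) .. n-1 (bottom)
    ctxs = random.sample(distractor_pool, n - 1)
    ctxs.insert(position, relevant_ctx)
    return ctxs

def accuracy_by_position(dataset, distractor_pool, model_answer, is_correct, n=5):
    positions = {"top": 0, "middle": n // 2, "bottom": n - 1}
    acc = {}
    for name, pos in positions.items():
        hits = 0
        for question, answer, relevant_ctx in dataset:
            ctxs = build_contexts(relevant_ctx, distractor_pool, pos, n)
            response = model_answer(question, ctxs)
            hits += is_correct(answer, response)
        acc[name] = hits / len(dataset)
    return acc
```

Comparing `acc["top"]`, `acc["middle"]`, and `acc["bottom"]` then estimates the expected accuracy for each placement of the relevant context.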
In our experiments, we always have one relevant context for each question, and all other N−1 contexts are chosen randomly from the dataset. The response of the Model with the Prompt function to the question Q when the relevant context is placed at position i is denoted as R_i = Model(Prompt(Q, ctx_1, . . . , ctx_N, i)). We say that the model exhibits position bias on dataset D toward position i in contrast to position j if the expected accuracy for i is higher than for j:

E_{(Q,A)∼D}[Acc(A, R_i)] > E_{(Q,A)∼D}[Acc(A, R_j)]

3.2 Context Placement Strategies

This experimental series investigates whether explicit information about the relevance of contexts to a given query can enhance performance. We examine a practical scenario where context relevance can be quantified (e.g., via cosine similarity between context and query embeddings) and evaluate how integrating relevance scores into prompts affects context selection. The experimental framework is illustrated in Figure 1.

Scoring Configurations. Aligned: the relevant context is assigned the score rs = 1 and all other contexts rs = 0. All Zero: all contexts (including the relevant one) are assigned the score rs = 0; we hypothesize that this intentional mislabeling will degrade model performance if scores influence context selection. No Scores: relevance scores are omitted entirely from the prompts; here we want to evaluate whether explicit relevance scores help the model at all. According to
the experiment, relevance scores rs_1, . . . , rs_N were included in the prompt. As depicted in Figure 1, we consider three positions of the relevant context: TOP (first), MIDDLE (N/2), or BOTTOM (last).

Context Volume Analysis: Prior research demonstrates increased position bias with longer contexts [Baker et al., 2024, Peysakhovich and Lerer, 2023]. To assess whether relevance scores mitigate this effect, we vary the context quantity N ∈ {5, 10, 15}. This design enables systematic evaluation of score efficacy across different information loads.

3.3 Average Predictive Entropy

We adopt the token-wise entropy framework from [Lu et al., 2022], normalized by the total number of tokens, to quantify uncertainty in model responses [Lu et al., 2022, Wang et al., 2025]. Let x represent the input prompt and s = {z_1, z_2, . . . , z_n} denote a generated completion sequence of n tokens. For a given LLM, the conditional probability of generating the i-th token z_i, given the preceding tokens s_{<i} = {z_1, . . . , z_{i−1}} and the prompt x, is denoted as p(z_i | s_{<i}, x) for 1 ≤ i ≤ n. The average predictive entropy3 (denoted PE_avg) is defined as:

PE_avg(s, x) = −(1/n) log p(s | x) = (1/n) Σ_{i=1}^{n} −log p(z_i | s_{<i}, x).   (1)

3 For brevity, we refer to this value as entropy in our work.

This formulation computes the average uncertainty per token by decomposing the joint probability p(s | x) into a product of conditional probabilities via the chain rule. The normalization by n ensures comparability across sequences of varying lengths, aligning with interpretations of entropy as an "average uncertainty" measure.

4 Effect of position bias on Entropy

This section formalizes how position bias toward the first token in transformer-based LLMs propagates across layers and analyzes its impact on attention entropy. We assume a standard multi-head self-attention architecture and derive conditions under which position bias homogenizes token representations, ultimately increasing entropy in the final layer.

Notation: Let X^(0) = [x_1, x_2, . . .
, x_n] ∈ R^{d×n} denote the input token embeddings at layer 0, where x_1 is the first token and d is the embedding dimension. At each layer l ≥ 1, the self-attention operation computes:

A^(l) = softmax( X^(l) W_Q^(l) (X^(l) W_K^(l))^⊤ )   (2)

X^(l+1) = W_O^(l) A^(l) X^(l) W_V^(l)   (3)

where W_Q^(l), W_K^(l), W_V^(l), W_O^(l) ∈ R^{d×d} are learnable projection matrices, A^(l) ∈ R^{n×n} contains the attention weights, and we set √d_QK = 1 for simplicity.

Assumptions: To isolate the effect of position bias, we make three simplifying assumptions: (1) dominant first-token attention, (2) position bias does not change over layers, and (3) the attention A can be represented as a linear combination of contextual attention A^cont and positional attention A^pos. For all layers l ≥ 1, the positional attention weights A^{pos(l)}_{i,j} are sharply concentrated on the first token:

A^{pos(l)}_{i,j} ≈ 1 if j = 1, and 0 otherwise,

or, in vector form, A^{pos(l)} ≈ 1_i^⊤ e_1. Linear combination of attention:

A = λ_1 A^cont + λ_2 A^pos,   λ_1 + λ_2 = 1;   λ_1, λ_2 ∈ [0, 1],

where the λ are normalizing weights for each attention type.

Token Homogenization: Under these assumptions, the hidden state of token i at layer l becomes:

x_i^(l) ≈ W_O^(l) · [ Σ_{j=1}^{n} (λ_2 A^{pos(l)}_{i,j} + λ_1 A^{cont(l)}_{i,j}) W_V^(l) x_j^(l−1) ]
       = λ_2 W_O^(l) W_V^(l) · 1_i^⊤ e_1 x_1^(l−1) + λ_1 x_i^{cont(l)}
       = λ_2 P^(l) x_1^(l−1) + λ_1 x_i^{cont(l)},   ∀i,   (4)

where P^(l) = W_O^(l) W_V^(l). If λ_2 > λ_1, recursively applying this across L layers yields:

x_i^(L) ≈ x_1^(0)   ∀i,   (5)

implying that all tokens collapse to a copy of the initial first-token embedding x_1^(0) (up to projection transformations). Although token collapse would not happen in a real
scenario, for example, due to residual connections, tokens may still become more similar to one another under position bias.

Entropy Dynamics: Let H_A^(l) denote the general attention entropy at layer l. As tokens homogenize (x_i^(l) ≈ x_1^(0)), queries and keys become indistinguishable, forcing the contextual attention weights toward uniformity, A^{cont(l)}_{i,j} ≈ 1/n ∀ i, j, which maximizes entropy: H_A^(l) → log n.

Aligned positional and contextual attention: In this section, under many modelling assumptions, we demonstrate a connection between entropy, attention, and position bias. Finally, if A^cont and A^pos are aligned, homogenization would occur with higher probability. This reveals a counterintuitive outcome: when the correct context aligns with the model's position bias, the model gives higher attention to those tokens, thus eventually increasing entropy. We further discuss our empirical results in Section 4, where we observe that minimal entropy does not always occur when the correct context placement is aligned with position bias.

Predictive and Attention Entropy: Attention mechanisms are designed to prioritize relevant tokens. If attention is uniform (high entropy), the model cannot leverage contextual cues effectively. Without focused attention, the model lacks strong signals to predict the next token confidently, leading to less certain outputs (higher predictive entropy).4

Multilingual Caveat: In multilingual LLMs, position bias may shift across layers, e.g., early layers prioritize English tokens while later layers pivot to the prompt's language [Zhong et al., 2024, Schut et al., 2025], which may have its own positional preferences. While this violates our assumption of static position bias, the entropy dynamics remain the same if homogenization occurs.

4.1 Word Order Analysis

In this section, we investigate connections between the relevant context position, models, and the dominant word order of a language.
Particularly, we are interested in whether position bias amplifies or de-amplifies the model's dominant-language word order. We focus on the No Scores configuration with five contexts and two languages, Hindi and German, due to their non-SVO dominant word order according to The World Atlas of Language Structures. We parse sentences using Stanza [Qi et al., 2020] and follow a procedure similar to [Choi et al., 2021]. We extract every verb and identify its dependents. If a verb has both a dependent with a 'subj' relation (or if such a relation is found for the nearest preceding verb connected via a 'conj' dependency) and one with an 'obj' relation, we record the word order of these triplets using the abbreviations 'S', 'V', and 'O'. We then analyze the distribution of 'SVO' and 'SOV' patterns relative to all extracted triplets.

4 Example: In the sentence "The capital of France is ***," the token "France" is critical. If attention is uniform, the model might equally weigh irrelevant tokens (e.g., "The" or "of"), failing to emphasize "France." This ambiguity increases uncertainty in predicting "Paris."

5 Experiment Set-Up

5.1 Datasets

In this paper, we utilized three datasets covering five languages with divergent syntactic structures and semantic distributions; dataset statistics are outlined in Table 1, and a full description can be found in Appendix B.

Language     Source                               Size
English      SQuAD 2.0 [Rajpurkar et
https://arxiv.org/abs/2505.16134v1
al., 2018] 150k Russian MTSBerQuAD (link) 60k German MLQA [Lewis et al., 2020] 5k Hindi MLQA [Lewis et al., 2020] 5k Vietnamese MLQA [Lewis et al., 2020] 5k Table 1: Summary of datasets used in this study by languages with associated sources and question-answer pairs. Preprocessing: To manage computational con- straints, we limited our analysis to 2,000 Question- Answer (QA) pairs per language. The following two- stage preprocessing pipeline was applied before sam- pling to ensure data quality and consistency: (1) Du- plicate removal: All duplicate QA pairs were excluded. (2) Answer validation: QA pairs lacking valid answers (e.g., missing or ambiguous responses) were discarded. 5.2 Models To disentangle whether position bias in LLMs arises from model-specific design/training or language- specific characteristics, we selected two models —Qwen2.5 7B Instruct (next Qwen) and Llama3 8B Instruct (next Llama) that share multilingual support (covering English, German, Russian, Hindi, Vietnamese) but differ in architectural and training paradigms. This enables isolating model-driven effects (e.g., positional encoding, training data scope) from language-driven ones, aligning with our finding that position bias is predominantly model-driven [Lu et al., 2022]. We provide a side-by-side comparison of the models in Appendix C to justify the rationale for their inclusion. 6 Evaluation 6.1 LLM as a Judge Traditional statistical evaluation metrics such as BLEU [Papineni et al., 2002], ROUGE [Lin, 2004], and MeteorUniversal [Denkowski and Lavie, 2014] struggle to dis- tinguish syntactically similar but semantically distinct texts. While semantic methods like BERTScore [Zhang et al., 2019] were introduced to address these limitations, our experiments reveal that BERTScore lacks sufficient differentiability, often failing to capture nuanced distinc- tions between correct and incorrect answers. 
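The shortcoming of lexical-overlap metrics can be illustrated with a toy unigram-F1 score, a stand-in for BLEU/ROUGE-style matching rather than any metric used in this paper; the sentences are invented for illustration:

```python
def token_overlap_f1(candidate: str, reference: str) -> float:
    """Toy unigram-overlap F1: rewards shared surface tokens, blind to meaning."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    common = sum(min(cand.count(t), ref.count(t)) for t in set(cand))
    if common == 0:
        return 0.0
    precision = common / len(cand)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

reference = "the capital of france is paris"
wrong = "the capital of france is lyon"   # semantically wrong, high word overlap
correct = "paris"                         # correct, almost no word overlap

# The wrong answer outscores the correct one under pure lexical overlap.
assert token_overlap_f1(wrong, reference) > token_overlap_f1(correct, reference)
```

A semantically wrong answer that copies the reference's wording outscores a terse correct one, which is exactly the failure mode that motivates a semantic judge.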
Therefore, we adopt the LLM-as-a-judge framework [Zheng et al., 2023] and choose Mistral Large5 as our judge for the following reasons: 1) prior studies demonstrate its robust alignment with human annotators and generalizability across tasks [Bavaresco et al., 2024, Kim et al., 2024]; 2) it offers a free API for research purposes, making it accessible for large-scale evaluation; 3) its architecture differs from the families of models used for response generation, mitigating bias toward self-generated outputs. The judge evaluates QA pairs using a structured prompt containing the question, the ground truth, and the model answer. It labels correct answers 1 and incorrect ones 0, and we use accuracy as our main metric; prompts and details are provided in Appendix E.

6.2 Human Evaluation

To validate the reliability of our LLM judge, we run human annotation on 150 questions for the English and Russian languages. We consider the generations of the Llama model and annotate each response with an overlap of 3 annotators. Experts follow the same criteria as the judge. To quantify inter-annotator agreement, we calculate Krippendorff's α [Krippendorff, 2011], achieving a mean α = 0.755, indicating high reliability of assessment. We further assess the judge's alignment with human annotators by computing the Pearson correlation r between the judge's scores and the majority vote of human labels. A strong mean correlation r = 0.716 was observed. Detailed information about the annotation process can be found in Appendix E.

7 Experiments and Results

7.1 Position bias is mostly driven by models

From Table 2, we observe that Qwen generally achieves the highest performance when the correct context is placed at the BOTTOM, while Llama excels when the correct context is positioned at the TOP across all languages. Notably, Qwen demonstrates superior performance for Vietnamese and Russian in specific scenarios. However, despite its overall stronger average performance, Llama outperforms Qwen in Vietnamese. This discrepancy may stem from an alignment between Vietnamese language-specific position bias and the inherent position bias of Llama. Overall, our findings indicate that while language-specific position biases may exist, they are likely overshadowed by the dominant positional preferences intrinsic to the models themselves. Finally, these findings challenge conventional assumptions about primacy effects in LLMs.

5 https://mistral.ai/news/mistral-large-2407

Language     Position   Qwen                                   Llama
                        Aligned  All-Zero  No-Scores  Mean     Aligned  All-Zero  No-Scores  Mean
English      TOP        0.943    0.927     0.952      0.941    0.955    0.929     0.951      0.945
             MIDDLE     0.941    0.931     0.945      0.937    0.926    0.858     0.946      0.909
             BOTTOM     0.949    0.948     0.956      0.951    0.932    0.826     0.930      0.896
Russian      TOP        0.938    0.909     0.934      0.927    0.928    0.837     0.906      0.890
             MIDDLE     0.927    0.917     0.944      0.929    0.908    0.768     0.888      0.855
             BOTTOM     0.937    0.935     0.939      0.936    0.911    0.761     0.866      0.846
German       TOP        0.648    0.553     0.644      0.615    0.680    0.485     0.719      0.628
             MIDDLE     0.629    0.566     0.628      0.608    0.602    0.312     0.668      0.527
             BOTTOM     0.655    0.602     0.649      0.636    0.613    0.307     0.646      0.522
Hindi        TOP        0.591    0.540     0.619      0.583    0.532    0.309     0.729      0.523
             MIDDLE     0.579    0.543     0.609      0.577    0.448    0.112     0.676      0.412
             BOTTOM     0.642    0.592     0.643      0.626    0.494    0.038     0.644      0.392
Vietnamese   TOP        0.722    0.644     0.713      0.693    0.764    0.679     0.737      0.727
             MIDDLE     0.676    0.619     0.707      0.668    0.714    0.536     0.707      0.652
             BOTTOM     0.718    0.647     0.700      0.689    0.726    0.473     0.656      0.618
Mean                    0.651    0.589     0.656      –        0.619    0.361     0.687      –

Table 2: Analysis of position bias; accuracy is reported for question answering tasks. Position specifies relevant context placement. Aligned, All-Zero, and No-Scores specify the instruction strategy for contexts as depicted in Figure 1. Highest accuracy is in bold.

Language     Position   Qwen    Llama
English      TOP        0.095   0.224
             MIDDLE     0.098   0.225
             BOTTOM     0.096   0.228
Russian      TOP        0.114   0.198
             MIDDLE     0.118   0.205
             BOTTOM     0.115   0.209
German       TOP        0.112   0.218
             MIDDLE     0.124   0.200
             BOTTOM     0.123   0.199
Hindi        TOP        0.076   0.286
             MIDDLE     0.080   0.280
             BOTTOM     0.081   0.280
Vietnamese   TOP        0.107   0.278
             MIDDLE     0.113   0.293
             BOTTOM     0.107   0.303

Table 3: Entropy from Equation 1 for two models averaged across three scenarios (Aligned, All-Zero, No-Scores). The smallest value per row is in bold; full un-averaged results are in Appendix G.

7.2 Positional guidance

Sensitivity to prompt guidance: Both models demonstrate high sensitivity to positional cues when contextual scoring is manipulated. A significant drop in accuracy occurs when misleading scores are introduced: for Qwen, performance decreases from 0.651 (Aligned) to 0.589 (All-Zero), while Llama exhibits a sharper decline, dropping from 0.619 (Aligned) to 0.361 (All-Zero). Furthermore, this sensitivity varies across languages. For instance, Llama's performance in Hindi collapses under the All-Zero condition (0.038, BOTTOM), whereas in English the decline is less pronounced.

Score Omission Enhances Robustness: Surprisingly, the No Scores scenario consistently outperformed other configurations, with Qwen achieving 0.656 and Llama 0.687 mean accuracy compared to the Aligned scenario (Qwen: 0.651, Llama: 0.619), particularly in low-resource languages such as Hindi. This suggests that integrating relevance scores may hurt performance rather than mitigate position bias in multilingual settings. Performance gains from positional guidance are observed only in English for Llama and in Vietnamese for both models.

Language-Specific Sensitivity: High-resource languages (English, Russian) showed minimal variation across scenarios (∆ < 2.5% in SQuAD 2.0), while MLQA languages exhibited stark differences. Vietnamese stood out as an outlier, showing exceptional stability (∆ = 5.3% vs. German ∆ = 15.2% in Qwen), potentially due to orthographic or syntactic properties mediating position bias.

7.3 Entropy

From Tables 2 and 3, it can be seen that there is no strict correlation between the evaluation of position bias and entropy. The Qwen model shows an inverse relationship: in the position bias evaluation, the best average accuracy falls on the BOTTOM row, while the lowest entropy occurs at the TOP position. In the Llama model, by contrast, the TOP row gives the best result in terms of accuracy, while the position of lowest entropy varies.

It is noteworthy that in the All-Zero scenario, both entropy and model accuracy are minimal among all the conditions studied (Table 4). A likely explanation is that the model generates "no answer" responses with excessive confidence, which contradicts the expected behavior. However, further analysis of the context-only / no-context modes aligns with the common intuition that accuracy is high at low entropy when context is provided, whereas in the no-context mode accuracy is low at high entropy (Figure 2). We perform statistical significance tests for these experiments and discuss them in Section 9.
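As a reference point for the entropy values discussed above, here is a minimal sketch of Shannon predictive entropy, assuming Equation 1 follows the standard H(p) = −Σ p_i log p_i form over a model's output distribution; the distributions below are illustrative, not taken from our experiments:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy H(p) = -sum p_i log p_i of an output distribution.
    Uniform distributions maximize it (H -> log n); peaked ones keep it low."""
    return -sum(p * math.log(p) for p in probs if p > 0)

n = 4
uniform = [1 / n] * n                # maximally uncertain model
peaked = [0.97, 0.01, 0.01, 0.01]    # confident model

# Uniform attains the theoretical maximum log n from the Entropy Dynamics section.
assert abs(predictive_entropy(uniform) - math.log(n)) < 1e-12
assert predictive_entropy(peaked) < predictive_entropy(uniform)
```

The same H → log n limit is what the token-homogenization argument predicts for attention weights collapsing to uniformity.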
Model   Aligned   All Zero   No Scores
Qwen    0.106     0.097      0.105
Llama   0.268     0.254      0.256

Table 4: Entropy from Equation 1 for two models averaged along three context placements: MIDDLE, BOTTOM, TOP. The smallest value is in bold.

Mode           Language   Accuracy   Entropy
Context only   English    0.951      0.194
Context only   Russian    0.906      0.169
No contexts    English    0.248      0.820
No contexts    Russian    0.220      0.861

Figure 2: Results when context is provided and not provided. We consider only one relevant context in this experiment to analyze whether adding contexts helps the model or whether it can generate an answer based on its internal knowledge.

7.4 Word Order

We do not observe any evidence that position bias drives models to favor a specific word order. For example, in Hindi (SOV-dominant) we would expect SVO rates to be lowest when the relevant context appears last for Qwen and first for Llama, but we do not observe that (Table 5). For German, since the dominant order depends on the clause type (SVO in main clauses; SOV in subordinate clauses [Dryer, 2013]), we look at SVO prevalence (the SVO-SOV difference) in Table 6 and likewise find no connection to position bias, since the prevalence follows naturally from the proportion of complex sentences.

At the model level, we observe the following differences: 1) Llama generates more SVO sentences than Qwen across Hindi, German, and Russian (Table 7)6; 2) Llama produces more complex sentences than Qwen across all three context positions (Table 8); 3) both models tend to generate more complex sentences when the relevant context is in the top position (Table 9).

6 Russian is considered despite its SVO dominance, as its rich morphology allows flexible word order [Ghosh et al., 2024b].

Model   Word order   Top     Middle   Bottom
Llama   SVO          5.39    5.12     4.75
        SOV          88.52   88.49    88.43
Qwen    SVO          2.34    4.15     2.96
        SOV          93.98   91       91.78

Table 5: Percentage of word orders for Hindi; cells where context placements align with position bias are highlighted in green. Bolded results indicate stronger alignment with expected word order.

Model   Position   Difference SVO - SOV   % of Complex Sentences
Llama   Top        26.88                  13.56
        Middle     32.13                  12.13
        Bottom     40.04                  10.88
Qwen    Top        34.17                  8.59
        Middle     29.56                  9.54
        Bottom     29.78                  8.96

Table 6: Prevalence of SVO over SOV word order and the proportion of complex sentences in German.

Model   Hindi           German            Russian
Llama   4.75 - 5.39 %   60.31 - 67.46 %   95.9 - 96.15 %
Qwen    2.34 - 4.15 %   57.97 - 61.39 %   93.61 - 94.52 %

Table 7: Percentage of SVO structures in generated responses across all three positions of relevant context.

Model   Hindi    German    Russian
Llama   9.94 %   12.19 %   7.19 %
Qwen    6.88 %   9.03 %    7 %

Table 8: Mean percentage of complex sentences among all sentences containing at least one triplet of a subject, verb, and object.

Language   Llama                  Qwen
           Top Middle Bottom      Top Middle Bottom
Hindi      ✓                      ✓
German     ✓                      ✓
Russian    ✓                      ✓

Table 9: The context position in which the proportion of complex sentences is highest, given a specific model and language.

8 Conclusion

In this work, we provide a comprehensive analysis of the position bias effect, going beyond the borders of the English language and studying this phenomenon across 5 languages with diverse morphological and syntactical structures. Our findings reveal that position bias is largely model-driven; however, minor language-specific effects still exist. Specifically, Qwen exhibits a late-position bias, contradicting prior claims about the primacy effect in LLMs.
Explicit positional guidance does not mitigate position bias effectively and can even lead to performance degradation. Our theoretical and empirical results show that, although model performance typically increases when the relevant information is placed in the position toward which the model is biased, uncertainty increases as well. Word order is independent of position bias and instead exhibits a model-specific nature. Overall, our findings can be applied to RAG, CoT, long-context, and multilingual tasks.

9 Limitations

Entropy Analysis: To assess statistical significance, we performed pairwise t-tests with Holm-Bonferroni correction across three positional pairings: (1) top vs. middle, (2) top vs. bottom, and (3) middle vs. bottom. A language configuration was classified as having a significant effect if statistical significance (p < 0.05) emerged in at least two of the three comparisons. Results indicated that two languages, German and Vietnamese, showed consistent significance for both Llama and Qwen models. Notably, this alignment across distinct model architectures highlights these languages' unique susceptibility to position bias.

Word Order Analysis: We recognize that a rigorous analysis of subject-verb-object relationships would necessitate examining a broader spectrum of syntactic dependencies.
However, such detailed linguistic-theoretical inquiry extends beyond the methodological boundaries of this work.

Computational Limitations: Our study employed 2,000 question-answer pairs per language. When extrapolated across nine experimental scenarios, this yielded 18,000 model evaluations per language, a computationally intensive process. Additionally, our findings are constrained by evaluation across only two model families (Llama and Qwen), which restricts broader conclusions about architectural generality.

Entropy Analysis: Our attention entropy analysis is subject to two key limitations. First, while token homogenization (the assumption that token representations become increasingly similar across layers) requires deeper mechanistic investigation, such exploration falls outside the scope of this work. Second, we have not formally established the link between attention entropy and predictive entropy. This gap prevents us from fully validating our hypothesis that aligning positional and contextual attention patterns influences predictive entropy.

References

George Arthur Baker, Ankush Raut, Sagi Shaier, Lawrence E Hunter, and Katharina von der Wense. Lost in the middle, and in-between: Enhancing language models' ability to reason over long contexts in multi-hop qa. ArXiv, abs/2412.10079, 2024. URL https://api.semanticscholar.org/CorpusID:274763060.

Zhenyu (Allen) Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, and Zhangyang Wang. Found in the middle: How language models use long contexts better via plug-and-play positional encoding. ArXiv, abs/2403.04797, 2024a. URL https://api.semanticscholar.org/CorpusID:268296885.

Alexander Peysakhovich and Adam Lerer. Attention sorting combats recency bias in long context language models. ArXiv, abs/2310.01427, 2023. URL https://api.semanticscholar.org/CorpusID:263609111.

Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh.
Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697–12706. PMLR, 2021.

Meiru Zhang, Zaiqiao Meng, and Nigel Collier. Can we instruct llms to compensate for position bias? In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 12545–12556, 2024b.

Xiaobo Guo and Soroush Vosoughi. Serial position effects of large language models. ArXiv, abs/2406.15981, 2024. URL https://api.semanticscholar.org/CorpusID:270702994.

Poulami Ghosh, Shikhar Vashishth, Raj Dabre, and Pushpak Bhattacharyya. A morphology-based investigation of positional encodings. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21035–21045, Miami, Florida, USA, November 2024a. Association for Computational Linguistics. doi:10.18653/v1/2024.emnlp-main.1170. URL https://aclanthology.org/2024.emnlp-main.1170/.

Xinyi Wu, Yifei Wang, Stefanie Jegelka, and Ali Jadbabaie. On the emergence of position bias in transformers. ArXiv, abs/2502.01951, 2025. URL https://api.semanticscholar.org/CorpusID:276107602.

Federico Barbero, Álvaro Arroyo, Xiangming Gu, Christos Perivolaropoulos, Michael M. Bronstein, Petar Veličković, and Razvan Pascanu. Why do llms attend to the first token? ArXiv, abs/2504.02732, 2025. URL https://api.semanticscholar.org/CorpusID:277510528.

Jinhao Duan, Hao Cheng, Shiqi Wang, Alex Zavalny, Chenan Wang, Renjing Xu, Bhavya Kailkhura, and Kaidi Xu. Shifting attention to relevance: Towards the predictive uncertainty quantification of free-form large language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages
5050–5063, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi:10.18653/v1/2024.acl-long.276. URL https://aclanthology.org/2024.acl-long.276/.

Daphna Oyserman and Spike WS Lee. Does culture influence what and how we think? Effects of priming individualism and collectivism. Psychological Bulletin, 134(2):311, 2008.

Lera Boroditsky, Lauren A Schmidt, and Webb Phillips. Sex, syntax, and semantics. Language in Mind: Advances in the Study of Language and Thought, 22(61-79):3, 2003.

Yanzhu Guo, Simone Conia, Zelin Zhou, Min Li, Saloni Potdar, and Henry Xiao. Do large language models have an english accent? Evaluating and improving the naturalness of multilingual llms. ArXiv, abs/2410.15956, 2024. URL https://api.semanticscholar.org/CorpusID:273502421.

Catherine Arnett and Benjamin Bergen. Why do language models perform worse for morphologically complex languages? In Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, and Steven Schockaert, editors, Proceedings of the 31st International Conference on Computational Linguistics, pages 6607–6623, Abu Dhabi, UAE, January 2025. Association for Computational Linguistics. URL https://aclanthology.org/2025.coling-main.441/.

Anh Dang, Limor Raviv, and Lukas Galke. Morphology matters: Probing the cross-linguistic morphological generalization abilities of large language models through a wug test. In 13th Edition of the Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2024), pages 177–188. Association for Computational Linguistics (ACL), 2024.

Mete Ismayilzada, Defne Circi, Jonne Sälevä, Hale Sirin, Abdullatif Köksal, Bhuwan Dhingra, Antoine Bosselut, Duygu Ataman, and Lonneke Van Der Plas. Evaluating morphological compositional generalization in large language models.
In Luis Chiruzzo, Alan Ritter, and Lu Wang, editors, Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1270–1305, Albuquerque, New Mexico, April 2025. Association for Computational Linguistics. ISBN 979-8-89176-189-6. URL https://aclanthology.org/2025.naacl-long.59/.

Yijiong Yu, Huiqiang Jiang, Xufang Luo, Qianhui Wu, Chin-Yew Lin, Dongsheng Li, Yuqing Yang, Yongfeng Huang, and Lili Qiu. Mitigate position bias in large language models via scaling a single dimension. ArXiv, abs/2406.02536, 2024a. URL https://api.semanticscholar.org/CorpusID:270226813.

Ziqi Wang, Hanlin Zhang, Xiner Li, Kuan-Hao Huang, Chi Han, Shuiwang Ji, Sham M. Kakade, Hao Peng, and Heng Ji. Eliminating position bias of language models: A mechanistic approach. ArXiv, abs/2407.01100, 2024a. URL https://api.semanticscholar.org/CorpusID:270870323.

Yuhao Wang, Ruiyang Ren, Junyi Li, Xin Zhao, Jing Liu, and Ji-Rong Wen. REAR: A relevance-aware retrieval-augmented framework for open-domain question answering. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5613–5626, Miami, Florida, USA, November 2024b. Association for Computational Linguistics. doi:10.18653/v1/2024.emnlp-main.321. URL https://aclanthology.org/2024.emnlp-main.321/.

Marco Alessio, Guglielmo Faggioli, Nicola Ferro, Franco Maria Nardini, Raffaele Perego, et al. Improving rag systems via sentence clustering and reordering. In CEUR Workshop Proceedings, volume 3784, pages 34–43, 2024.

Bowen Jin, Jinsung Yoon, Jiawei Han, and Sercan Ö. Arik. Long-context llms meet rag: Overcoming challenges for long inputs in rag. ArXiv, abs/2410.05983, 2024. URL https://api.semanticscholar.org/CorpusID:273229050.

Hsiu-Yuan Huang, Yutong Yang, Zhaoxi Zhang, Sanwoo Lee, and Yunfang
Wu. A survey of uncertainty estimation in llms: Theory meets practice. ArXiv, abs/2410.15326, 2024. URL https://api.semanticscholar.org/CorpusID:273502635.

Petr Sychev, Andrey Goncharov, Daniil Vyazhev, Edvard Khalafyan, and Alexey Zaytsev. When an llm is apprehensive about its answers - and when its uncertainty is justified. ArXiv, abs/2503.01688, 2025. URL https://api.semanticscholar.org/CorpusID:276776200.

Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi:10.18653/v1/2022.acl-long.556. URL https://aclanthology.org/2022.acl-long.556/.

Zhiyuan Wang, Jinhao Duan, Chenxi Yuan, Qingyu Chen, Tianlong Chen, Yue Zhang, Ren Wang, Xiaoshuang Shi, and Kaidi Xu. Word-sequence entropy: Towards uncertainty estimation in free-form medical question answering applications and beyond. Engineering Applications of Artificial Intelligence, 139:109553, 2025.

Chengzhi Zhong, Fei Cheng, Qianying Liu, Junfeng Jiang, Zhen Wan, Chenhui Chu, Yugo Murawaki, and Sadao Kurohashi. Beyond english-centric llms: What language do multilingual language models think in? ArXiv, abs/2408.10811, 2024. URL https://api.semanticscholar.org/CorpusID:271909321.

Lisa Schut, Yarin Gal, and Sebastian Farquhar. Do multilingual llms think in english? ArXiv, abs/2502.15603, 2025. URL https://api.semanticscholar.org/CorpusID:276557950.

Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D Manning. Stanza: A python natural language processing toolkit for many human languages. arXiv preprint arXiv:2003.07082, 2020.

Hee-Soo Choi, Bruno Guillaume, Karën Fort, and Guy Perrier.
Investigating dominant word order on Universal Dependencies with graph rewriting. In Ruslan Mitkov and Galia Angelova, editors, Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 281–290, Held Online, September 2021. INCOMA Ltd. URL https://aclanthology.org/2021.ranlp-1.33/.

Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784–789, 2018.

Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. Mlqa: Evaluating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2020.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, 2002.

Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, 2004.

Michael Denkowski and Alon Lavie. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 376–380, 2014.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. Bertscore: Evaluating text generation with bert. ArXiv, abs/1904.09675, 2019. URL https://api.semanticscholar.org/CorpusID:127986044.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li,
Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.

Anna Bavaresco, Raffaella Bernardi, Leonardo Bertolazzi, Desmond Elliott, Raquel Fernández, Albert Gatt, E. Ghaleb, Mario Giulianelli, Michael Hanna, Alexander Koller, André F. T. Martins, Philipp Mondorf, Vera Neplenbroek, Sandro Pezzelle, Barbara Plank, David Schlangen, Alessandro Suglia, Aditya K Surikuchi, Ece Takmaz, and Alberto Testoni. Llms instead of human judges? A large scale empirical study across 20 nlp evaluation tasks. ArXiv, abs/2406.18403, 2024. URL https://api.semanticscholar.org/CorpusID:270738074.

Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, and Minjoon Seo. Prometheus 2: An open source language model specialized in evaluating other language models. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4334–4353, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi:10.18653/v1/2024.emnlp-main.248. URL https://aclanthology.org/2024.emnlp-main.248/.

Klaus Krippendorff. Computing krippendorff's alpha-reliability, 2011.

Matthew S. Dryer. Order of subject, object and verb (v2020.4). In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Zenodo, 2013. doi:10.5281/zenodo.13950591. URL https://doi.org/10.5281/zenodo.13950591.

Poulami Ghosh, Shikhar Vashishth, Raj Dabre, and Pushpak Bhattacharyya. A morphology-based investigation of positional encodings. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 21035–21045, Miami, Florida, USA, November 2024b. Association for Computational Linguistics.
doi:10.18653/v1/2024.emnlp-main.1170. URL https://aclanthology.org/2024.emnlp-main.1170/.

Tan Yu, Anbang Xu, and Rama Akkiraju. In defense of rag in the era of long-context language models. ArXiv, abs/2409.01666, 2024b. URL https://api.semanticscholar.org/CorpusID:272368207.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, 2016.

Pavel Efimov, Andrey Chertok, Leonid Boytsov, and Pavel Braslavski. Sberquad – Russian reading comprehension dataset: Description and analysis. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 11th International Conference of the CLEF Association, CLEF 2020, Thessaloniki, Greece, September 22–25, 2020, Proceedings 11, pages 3–15. Springer, 2020.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.

Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebron, and Sumit Sanghai. GQA: Training generalized multi-query transformer models from multi-head checkpoints. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4895–4901, Singapore, December 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.emnlp-main.298. URL https://aclanthology.org/2023.emnlp-main.298/.

Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu.
Roformer: Enhanced transformer with rotary position
embedding. Neurocomputing, 568:127063, 2024. ISSN 0925-2312. doi:https://doi.org/10.1016/j.neucom.2023.127063. URL https://www.sciencedirect.com/science/article/pii/S0925231223011864.

An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, Guanting Dong, Haoran Wei, Huan Lin, Jialong Tang, Jialin Wang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Ma, Jianxin Yang, Jin Xu, Jingren Zhou, Jinze Bai, Jinzheng He, Junyang Lin, Kai Dang, Keming Lu, Keqin Chen, Kexin Yang, Mei Li, Mingfeng Xue, Na Ni, Pei Zhang, Peng Wang, Ru Peng, Rui Men, Ruize Gao, Runji Lin, Shijie Wang, Shuai Bai, Sinan Tan, Tianhang Zhu, Tianhao Li, Tianyu Liu, Wenbin Ge, Xiaodong Deng, Xiaohuan Zhou, Xingzhang Ren, Xinyu Zhang, Xipin Wei, Xuancheng Ren, Xuejing Liu, Yang Fan, Yang Yao, Yichang Zhang, Yu Wan, Yunfei Chu, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zhifang Guo, and Zhihao Fan. Qwen2 technical report, 2024. URL https://arxiv.org/abs/2407.10671.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023. URL https://arxiv.org/abs/2302.13971.

Appendix

A RAG systems with context reordering

• REAR [Wang et al., 2024b] – integrates document relevance scores into LLMs via embeddings, guiding generation to use internal knowledge (low relevance) or external evidence (high relevance).

• Long-Context LLMs Meet RAG [Jin et al., 2024] – addresses the "lost-in-the-middle" problem by reordering retrieved documents, placing highest-scoring ones at sequence boundaries to optimize LLM attention.

• OP-RAG [Yu et al., 2024b] – order-preserving RAG preserves original document order (vs. sorting chunks), demonstrating improved answer quality through position-aware context organization.
However, the authors do not mention the multi-document scenario.

• Clustering & Reordering RAG [Alessio et al., 2024] – clusters sentences by query similarity and sorts clusters in descending similarity order for improved retrieval quality.

B Datasets

SQuAD 2.07 [Rajpurkar et al., 2018] is an English reading-comprehension benchmark built on Wikipedia passages. It combines 100,000 span-answerable questions from SQuAD 1.1 [Rajpurkar et al., 2016] with more than 50,000 adversarial questions whose answers are deliberately absent.

MTSBerQuAD8 is an extension of the SberQuAD dataset [Efimov et al., 2020], which is a Russian counterpart of SQuAD 2.0. It includes more than 60,000 question-answer pairs with improved readability and consistency.

MLQA9 [Lewis et al., 2020] is a multilingual benchmark built from aligned Wikipedia passages, including 12,000 question-answer pairs in English and about 5,000 in each of the other six languages: Arabic, German, Spanish, Hindi, Vietnamese, and Simplified Chinese. Among these languages, we utilized German, Hindi, and Vietnamese.

C Models: Similarities and differences

Shared Foundations. Both models are Transformer-based decoders [Vaswani et al., 2017] with the following commonalities that may influence position bias:

1. Grouped Query Attention (GQA): enhances inference efficiency by reducing key-value cache size [Ainslie et al., 2023].
2. Rotary Positional Embeddings (RoPE): dynamically encodes positional information for variable-length sequences [Su et al., 2024].
3. Multilingual Support: covers all five languages in our study [Yang et al., 2024, Touvron et al., 2023].

Key Differences. We identify three critical distinctions between the models:

1. Training
https://arxiv.org/abs/2505.16134v1
Dataset Composition: Qwen prioritizes knowledge, coding, and mathematics, trained on 18 trillion tokens [Yang et al., 2024]. Llama focuses on dialogue applications, pretrained on 15 trillion multilingual tokens [Touvron et al., 2023].
2. Architectural Innovations: Llama introduces a document-aware attention mask [Touvron et al., 2023] to prevent cross-document interference in long sequences.
3. Vocabulary Design: Llama expands its token vocabulary to 128K tokens (vs. Qwen's 151K) for improved non-English language handling [Touvron et al., 2023].

D Technical Details

Model inference. To make the obtained results reproducible, LLM inference was performed with a deterministic generation strategy. The following hyperparameters were set: "max_new_tokens" = 1024, "do_sample" = False, "num_beams" = 1.

Computational Resources. The experiments were run in a Docker container on a dedicated server with the following hardware: CPU: AMD Ryzen 9 7900X 12-Core Processor; GPU: NVIDIA GeForce RTX 3090 24GB; RAM: Kingston FURY Beast Black 32GB; SSD: M.2 NVMe Samsung 990 PRO 1TB.

Language    | Qwen                          | Llama
            | Aligned  All Zero  No Scores  | Aligned  All Zero  No Scores
English     | 15.8     15.8      14.1       | 17.3     17.4      14.8
Russian     | 37.7     37.3      32.7       | 32.7     30.7      24.8
German      | 27.2     25.2      26.1       | 30.8     24.6      29.2
Hindi       | 98.1     87.3      98.6       | 63.1     46.1      66.9
Vietnamese  | 33.6     30.5      32.8       | 40.1     35.8      30.0

Table 10: Average GPU time (in minutes) spent to obtain results for a specific experimental setup: language, model, and context placement strategy.

Required GPU time for experiments. Table 10 reports the average time required to run our experiments on GPU. In total, approximately 20 GPU-hours are needed to reproduce the experiments.

E Evaluation Details

E.1 LLM-as-a-Judge Verification

Krippendorff's alpha and Pearson correlation coefficients, calculated for each experimental setup, are given in Tables 11 and 12, respectively. A comparison of human and Llama evaluation is shown in Figure 3.
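As an illustration, the two agreement statistics can be computed as follows. This is a minimal sketch for two raters on nominal (binary correct/incorrect) judgments with no missing values; it is not the authors' evaluation code, and the function names are ours.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two rating vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

def krippendorff_alpha_nominal(r1, r2):
    """Krippendorff's alpha for two raters, nominal data, no missing values."""
    r1, r2 = list(r1), list(r2)
    n = len(r1)
    # Observed disagreement: fraction of units the two raters label differently.
    d_o = sum(a != b for a, b in zip(r1, r2)) / n
    # Expected disagreement from the pooled label frequencies across both raters.
    pooled = r1 + r2
    N = len(pooled)
    counts = {c: pooled.count(c) for c in set(pooled)}
    d_e = sum(counts[a] * counts[b]
              for a in counts for b in counts if a != b) / (N * (N - 1))
    return 1.0 - d_o / d_e
```

Perfect agreement yields alpha = 1, and alpha decreases toward 0 (or below) as disagreement approaches chance level.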
Language  Position  Aligned  All Zero  No Scores  Mean
English   TOP       0.783    0.663     0.595      0.726
          MIDDLE    0.611    0.861     0.685
          BOTTOM    0.704    0.916     0.718
Russian   TOP       0.814    0.825     0.674      0.783
          MIDDLE    0.742    0.865     0.695
          BOTTOM    0.801    0.855     0.776

Table 11: Krippendorff's alpha coefficient, calculated for each experimental setup (the Mean column averages over all positions and settings per language).

Language  Position  Aligned  All Zero  No Scores  Mean
English   TOP       0.612    0.632     0.604      0.727
          MIDDLE    0.739    0.830     0.791
          BOTTOM    0.738    0.908     0.685
Russian   TOP       0.488    0.704     0.518      0.705
          MIDDLE    0.709    0.870     0.769
          BOTTOM    0.669    0.888     0.732

Table 12: Pearson correlation coefficient, calculated for each experimental setup (the Mean column averages over all positions and settings per language).

E.2 Human and LLM instructions

We prompt Mistral-Large to judge whether the LLM responses answer the questions correctly. For each dataset we create an evaluation prompt in the language of that dataset and add 4 shots as examples of judgments. The resulting prompt consists of a system prompt, "You are an AI assistant who speaks English." (translated into the other languages), and a user prompt. The user prompts for each language can be found in Table 14.

For human annotation we consider only English and Russian, since our annotators speak these languages. We use the same instructions as in the LLM-as-a-Judge setting, omitting the shots.

E.3 Human Annotators Information

Annotation was conducted by the authors of this work, so no additional recruitment or payment was required at this stage. All assessors held a bachelor's
degree and had prior experience in evaluating LLM responses.

7 https://huggingface.co/datasets/rajpurkar/squad_v2
8 https://huggingface.co/datasets/MTS-AI-SearchSkill/MTSBerquad
9 https://github.com/facebookresearch/MLQA

Figure 3: Human evaluation and LLM-as-a-Judge for the Aligned, All Zero, and No Scores settings at three positions (TOP, MIDDLE, BOTTOM) for two languages: (a) English and (b) Russian. Green bars represent human evaluations; blue bars represent the Llama model.

F Context Volume Analysis

Figure 4 shows that our Aligned strategy does not affect position bias as the information load increases. For Llama, with a context quantity of N = 15 we observe a significant accuracy decrease compared to the other quantities. For Qwen, on the other hand, position bias does not correlate with the number of contexts passed. This result can be explained by the fact that Qwen was trained on a larger dataset with long contexts than Llama; this enlarges Qwen's attention window and allows it to condition on a larger amount of input knowledge during response generation.

Figure 4: Dependence of accuracy on the number of contexts added to the user prompt and on the position of the relevant context in the list, with the Aligned placement strategy: (a) Llama; (b) Qwen.

G Extended Results

The entropy evaluation of Qwen and Llama responses across the three scenarios can be seen in Table 13.
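The appendix does not restate the entropy estimator behind Table 13. One plausible choice, consistent with per-response scores, is the mean per-token Shannon entropy of the model's output distributions; the sketch below is our assumption about the estimator, not the authors' code.

```python
import numpy as np

def mean_token_entropy(probs):
    """Average Shannon entropy (in nats) over the per-token distributions
    of a generated response.

    probs: array of shape (num_tokens, vocab_size), where each row is the
    model's probability distribution over the vocabulary at that step.
    """
    probs = np.asarray(probs, dtype=float)
    eps = 1e-12  # guard against log(0) for zero-probability entries
    per_token = -np.sum(probs * np.log(probs + eps), axis=-1)
    return float(per_token.mean())
```

Lower values indicate more peaked (confident) token distributions; a uniform distribution over a vocabulary of size V gives log(V).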
H Prompts

Our user prompts for LLM inference under each context placement strategy can be seen in Tables 15 and 16. For the Aligned and All Zero strategies, items in the context list have the following format: "- [{score}] {document}". For the No Scores strategy, items have the following format: "- {document}". As the system prompt, the same instruction was used for all languages (translated accordingly): "You are an AI assistant who helps solve user issues.".

Language    Position | Qwen:  Aligned  All Zero  No Scores  Mean | Llama: Aligned  All Zero  No Scores  Mean
English     TOP      |        0.092    0.092     0.100      0.095 |       0.232    0.247     0.193      0.224
            MIDDLE   |        0.093    0.101     0.100      0.098 |       0.237    0.239     0.201      0.225
            BOTTOM   |        0.092    0.098     0.097      0.096 |       0.240    0.240     0.205      0.228
Russian     TOP      |        0.104    0.110     0.128      0.114 |       0.202    0.223     0.169      0.198
            MIDDLE   |        0.112    0.117     0.125      0.118 |       0.216    0.219     0.181      0.205
            BOTTOM   |        0.112    0.109     0.125      0.115 |       0.214    0.222     0.191      0.209
German      TOP      |        0.117    0.106     0.114      0.112 |       0.231    0.203     0.219      0.218
            MIDDLE   |        0.125    0.118     0.129      0.124 |       0.220    0.157     0.224      0.200
            BOTTOM   |        0.128    0.117     0.124      0.123 |       0.225    0.152     0.220      0.199
Hindi       TOP      |        0.078    0.074     0.076      0.076 |       0.302    0.320     0.236      0.286
            MIDDLE   |        0.081    0.076     0.082      0.080 |       0.294    0.290     0.257      0.280
            BOTTOM   |        0.082    0.081     0.079      0.081 |       0.288    0.276     0.275      0.280
Vietnamese  TOP      |        0.113    0.101     0.107      0.107 |       0.271    0.296     0.266      0.278
            MIDDLE   |        0.117    0.102     0.120      0.113 |       0.287    0.299     0.294      0.293
            BOTTOM   |        0.111    0.099     0.111      0.107 |       0.298    0.298     0.313      0.303
Mean                 |        0.106    0.097     0.105      –     |       0.268    0.254     0.256      –

Table 13: Entropy evaluation: Qwen (left block) and Llama (right block).

Language User Prompt
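The item formats quoted above can be rendered with a small helper. This is a sketch: the function name is ours, and rendering All Zero scores as the literal 0 is our assumption.

```python
def format_context_list(documents, scores=None, strategy="aligned"):
    """Render the context list for the user prompt.

    strategy: 'aligned'   - real relevance scores, "- [{score}] {document}"
              'all_zero'  - same format, but every score replaced by 0
              'no_scores' - "- {document}" with no score at all
    """
    if strategy == "aligned":
        items = [f"- [{s}] {d}" for s, d in zip(scores, documents)]
    elif strategy == "all_zero":
        items = [f"- [0] {d}" for d in documents]
    elif strategy == "no_scores":
        items = [f"- {d}" for d in documents]
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return "\n".join(items)
```

The rendered list is then embedded into the language-specific user prompt (Tables 15 and 16).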
arXiv:2505.16135v1 [cs.AI] 22 May 2025

Sudoku-Bench: Evaluating creative reasoning with Sudoku variants

Jeffrey Seely, Yuki Imajuku, Tianyu Zhao, Edoardo Cetin, and Llion Jones (Sakana AI)

Existing reasoning benchmarks for large language models (LLMs) frequently fail to capture authentic creativity, often rewarding memorization of previously observed patterns. We address this shortcoming with Sudoku-Bench, a curated benchmark of challenging and unconventional Sudoku variants specifically selected to evaluate creative, multi-step logical reasoning. Sudoku variants form an unusually effective domain for reasoning research: each puzzle introduces unique or subtly interacting constraints, making memorization infeasible and requiring solvers to identify novel logical breakthroughs ("break-ins"). Despite their diversity, Sudoku variants maintain a common and compact structure, enabling clear and consistent evaluation. Sudoku-Bench includes a carefully chosen puzzle set, a standardized text-based puzzle representation, and flexible tools compatible with thousands of publicly available puzzles—making it easy to extend into a general research environment. Baseline experiments show that state-of-the-art LLMs solve fewer than 15% of puzzles unaided, highlighting significant opportunities to advance long-horizon, strategic reasoning capabilities. Alongside this report, we release the Sudoku-Bench repository.

Contents
1 Introduction
2 Background: Sudoku Variants
3 Sudoku-Bench: Dataset and Benchmark Design
4 Baseline Performance and Analysis
5 Related Work
6 Discussion

Corresponding author(s): Jeffrey Seely (jeffrey@sakana.ai)

"Matty Didn't Call You Back Because He Had The Wrogn Number" by Lake: Normal sudoku rules apply. Every clue in the puzzle is wrogn. A clue is wrogn if it is not completely correct.
"Parity Paradox" by Marty Sears: Normal sudoku rules apply.
Each line within a 3x3 box has the same total, which is displayed in yellow in that box. (2-digit yellow totals read from left to right.) The digit in the purple square indicates how many of the yellow totals have the same parity (oddness/evenness) as itself (i.e., the purple square digit).
"Reticule" by Skeptical Mario: Killer cages – digits in a cage cannot repeat and sum to the clue in the upper left corner of the cage. Thermometers – digits along a gray thermometer line increase moving away from the bulb.
"Mods" by Marty Sears & ThePedallingPianist: Normal sudoku rules apply. Yellow lines are 'Same Difference' lines. Each pair of adjacent digits on one of these lines has the same difference. However, this 'difference value' may be different for different lines. A digit in a circle indicates exactly how many circles contain that digit.
"RAT RUN 7: Multiple Choice" by Marty Sears: Normal sudoku rules apply. AIM OF EXPERIMENT: Finkz the rat must reach the cupcake by finding a path through the maze. The path must not visit any cell more than once, cross itself, or pass through any thick maze walls. As well as moving orthogonally, Finkz may move diagonally if there's a 2x2 space in which to do so, but may never pass diagonally through a round wall-spot on the corner of a cell. TELEPORTS: Entering a teleport will cause Finkz to be instantly transported to the matching coloured teleport. From there she can continue her journey. Matching teleports always have identical digits, but teleports that don't match always have different digits. ONE-WAY DOORS: Finkz may only
https://arxiv.org/abs/2505.16135v1
pass directly through a purple arrow if moving in the direction the arrow is pointing. An arrow always points to the smaller of the two values it sits between. TEST CONSTRAINT: In this experiment, for any two adjacent digits along the correct path, one can be divided by the other to give an exact integer (i.e., one is a multiple of the other).

Figure 1 | Each Sudoku variant has a unique set of constraints explicitly described in the puzzle rules. Puzzles may feature whimsical rules such as in Rat Run, or meta-level constraints, such as requiring all standard Sudoku rules to be intentionally violated.

1. Introduction

Large-scale language models excel at short-form deduction (Long, 2023; Wei et al., 2022), yet genuinely creative reasoning remains elusive. Many standard benchmarks, where current models already rival or surpass human performance (Glazer et al., 2024; Hendrycks et al., 2021; Phan et al., 2025), often reward the memorization of solution templates (Bubeck et al., 2023). Once these templates are implicitly memorized, incremental accuracy gains offer limited insight into a model's capacity for novel reasoning. Benchmarks such as ARC (Chollet, 2019) effectively resist memorization; however, their solutions, while novel to models, remain straightforward for humans, insufficiently capturing the depth of human creative reasoning.

We propose Sudoku variants (Fig. 1) as a unique domain addressing this gap. A Sudoku variant is a logical puzzle defined by a partially filled 𝑛×𝑛 grid, accompanied by visual constraints and even a problem-specific set of rules that can only be described in natural language. Yet, each puzzle still admits a unique solution—an 𝑛×𝑛 grid fulfilling its constraints. Puzzle creators introduce original rules or combine common constraints in novel ways. Hundreds of user-submitted Sudoku variants are published daily on platforms like Logic Masters Germany (Log), deliberately designed to require creative insights and subtle logical breakthroughs.
Such puzzles precisely target the type of novel, multi-step reasoning that memorization-focused and even popular reasoning benchmarks fail to consistently measure (Zhang et al., 2024). This paper's contribution is twofold. First, we introduce open-source tools interfacing directly with the popular puzzle application SudokuPad (Neumann, 2021), facilitating both agentic tool-use interaction and standardized textual puzzle representations. The agentic interaction provides an API to fetch images of the current board state and access to all the annotation tools available in SudokuPad that human solvers usually rely on. Our textual format isolates logical reasoning from visual processing, enabling effective evaluation with current language models. Second, we present Sudoku-Bench, a carefully curated benchmark of 100 Sudoku variants, selected in collaboration with hosts from the Cracking the Cryptic YouTube channel. These puzzles span a wide range of difficulties and reasoning styles, deliberately chosen to test model performance across diverse logical pathways and puzzle-specific "break-ins."

Our experiments show that Sudoku-Bench poses a striking challenge for current state-of-the-art models. Without tool assistance, even the strongest publicly available LLM evaluated solves fewer than 15% of the benchmark. Notably, most of the successful completions come from the simplest subset of 4×4 puzzles, with performance rapidly collapsing on larger and less conventional grids. This is observed
in both the one-shot configuration (prompting a model to solve a puzzle in one response) and a multi-step configuration (multi-turn interaction in which the model provides at least one digit and the user provides the updated board state).

Beyond benchmarking, Sudoku variants offer a fertile laboratory for reasoning research. An extensive, ever-growing supply of human-generated puzzles allows scalable difficulty progression, from simpler 4×4 puzzles suitable for small models to highly intricate 9×9 puzzles, the hardest of which can stump all but the best expert human solvers. Rich auxiliary data, including detailed expert solution transcripts and interaction traces, facilitate imitation learning. We include, as part of Sudoku-Bench, thousands of hours of reasoning transcripts and actions taken while solving from Cracking the Cryptic, a popular YouTube channel dedicated to detailed demonstrations of solving Sudoku variants with over 250M views. This data is entirely available for researchers who wish to explore supervised approaches to learn and fine-tune models from human reasoning – qualitatively far beyond the depth and diversity of synthetic reasoning datasets produced with current state-of-the-art language models (Li et al., 2025; Muennighoff et al., 2025).

The remainder of this paper proceeds as follows: Section 2 surveys Sudoku variants and their reasoning demands. Section 3 details the Sudoku-Bench dataset, text interface, and evaluation framework. Section 4 presents baseline results and analyses of model failure modes. We review related work in Section 5, and conclude with open research directions in Section 6.

2. Background: Sudoku Variants

Traditional Sudoku involves completing a 9×9 grid such that each digit from 1 to 9 appears exactly once in every row, column, and 3×3 subgrid. This structure provides a foundation for numerous variants that introduce additional constraints.
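Checking the classic constraints just described is mechanical. A minimal sketch of a validity checker for standard 9×9 rules only (not the benchmark's own verifier, which must also handle variant constraints):

```python
def is_valid_solution(grid):
    """Check the standard 9x9 Sudoku constraints: each digit 1-9 appears
    exactly once in every row, every column, and every 3x3 subgrid.

    grid: a 9x9 list of lists of ints.
    """
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
        for br in range(0, 9, 3) for bc in range(0, 9, 3)
    ]
    # A unit of 9 cells is valid iff its value set is exactly {1, ..., 9}.
    return all(unit == digits for unit in rows + cols + boxes)
```

Because every variant still admits a unique completed grid, any candidate solution can be verified cheaply against this baseline plus the puzzle-specific constraints.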
For instance, Killer Sudoku combines elements of Sudoku and Kakuro, requiring digits within outlined cages to sum to specified totals without repeats. Thermometers are paths of adjacent cells where digits must increase monotonically. Digits along arrows must sum to the digit in the circled cell at the base. Kropki dots between cells indicate specific relationships, such as consecutive numbers or a 1:2 ratio.

The availability of web-based puzzle-making tools has allowed puzzle authors to invent their own variants. In early 2020, the puzzle-hosting site Logic Masters saw a surge in the number of puzzles posted. As of May 2025, more than 27,000 user-submitted variants are published on the site (Log). Puzzle creators frequently combine multiple constraints in unique ways. Often, these combined constraints result in puzzles starting with minimal or no digits, necessitating extensive logical reasoning to determine the initial placement, termed a "break-in." Such puzzles require solvers to meticulously explore the interaction of constraints, significantly diverging from the eager guessing often observed in reasoning LLMs (Section 4).

Beyond these standard constraint types, puzzle setters often employ meta-constraints, which involve deducing puzzle-specific parameters (e.g., "digits in a cage sum to an unknown value to be determined by solving," or "the line must be identified as either a palindrome or a renban sequence"). These meta-constraints add another layer of complexity and creative reasoning.

"Ascension" by Florian Wortmann: Normal sudoku rules apply. Identical digits cannot be a knight's move apart. Digits along an arrow sum to the digit in the corresponding circle.

Example of a logical break-in from the puzzle Ascension. Despite
the initial grid being empty, the puzzle constraints collectively enforce a strict sequence of increasing digits from the lower left to the upper right shaded circles. Identifying and leveraging this subtle interplay forms the puzzle's essential insight. Though rated at an easy difficulty (2-star), current LLMs consistently fail to identify this logical entry point.

Reasoning trace by gemini-2.5-pro-preview-03-25:
1. Identify Constraints: Standard Sudoku, Anti-Knight, Arrows (sum along arrow equals circle value).
2. Arrow Analysis: All circles must be >= 3 (sum of two distinct digits >= 1+2=3). Arrow digits are distinct from circle digit. Digits on an arrow are distinct.
3. Shared Circle Analysis:
   * C(4,3) = C(4,1)+C(4,2) = C(5,4)+C(6,5). C(4,3) != C(4,1), C(4,2). C(4,3) != C(5,4), C(6,5).
   * C(4,6) = C(5,6)+C(6,6) = C(3,5)+C(2,4). C(4,6) != C(5,6), C(6,6). C(4,6) != C(3,5), C(2,4).
   * C(1,6) = C(1,5)+C(1,4) = C(2,7)+C(3,8). C(1,6) != C(1,5), C(1,4). C(1,6) != C(2,7), C(3,8).
4. Anti-Knight + Arrow Interactions:
   * C(4,3) is knight move from C(2,4) & C(3,5). Since C(4,6) = C(2,4)+C(3,5), C(4,6) > C(2,4) and C(4,6) > C(3,5). This means C(4,3) cannot be C(2,4) or C(3,5). Thus, C(4,3) != C(4,6).
   * [...]
5. Digit 9 Analysis:
   * Digits on arrows cannot be 9.
   * C(1,9) cannot be 9: If C(1,9)=9, then C(2,7)!=9, C(3,8)!=9 (knight moves). C(1,6)=C(2,7)+C(3,8). C(1,6)!=9 (same box). C(2,9)+C(3,9)=9. This doesn't lead to the previous contradiction. Let's re-evaluate. C(1,9)=9 -> C(2,7)!=9, C(3,8)!=9. C(1,6) = C(2,7)+C(3,8). C(1,6) != C(1,9)=9 because they are in the same box. This means C(2,7)+C(3,8) != 9. This is possible. Where did the earlier contradiction come from? Ah, I misinterpreted C(1,6)!=9.
   * C(4,3) cannot be 9: If C(4,3)=9, [...]
   * [...]
6. Digit 1 Analysis (Attempt C(4,3)=3): Led to C(4,1)=1, C(4,2)=2, C(5,4)=1, C(6,5)=2. Then C(4,6)=5, C(3,5)=4, C(2,4)=1. Check anti-knight 1: C(4,1)=1, C(5,4)=1, C(2,4)=1. These are allowed. This led to {C(5,6), C(6,6)}={1,4} or {2,3}.
{1,4} fails (row 5 conflict: C(5,4)=1, C(5,6)=1). {2,3} means C(5,6)=2, C(6,6)=3 or C(5,6)=3, C(6,6)=2. Subcase C(5,6)=2, C(6,6)=3. Grid: C(4,1)=1, C(4,2)=2, C(4,3)=3, C(5,4)=1, C(6,5)=2. C(2,4)=1, C(3,5)=4. C(4,6)=5. C(5,6)=2, C(6,6)=3. Column 4 has C(2,4)=1 and C(5,4)=1. Impossible. So C(4,3)=3 is impossible.
7. Digit 1 Analysis (Attempt C(4,3)=4): Led to C(4,1)=1, C(4,2)=3, C(5,4)=1, C(6,5)=3. Then C(4,6)=5, C(3,5)=2, C(2,4)=3. Then {C(5,6), C(6,6)}={1,4} or {2,3}. {1,4} means C(5,6)=1, C(6,6)=4. Fails (row 5 conflict: C(5,4)=1, C(5,6)=1). {2,3} requires C(5,6)=3, C(6,6)=2 (C(5,6)=2 impossible by AK C(3,5)=2, C(6,6)=2 impossible by AK C(3,5)=2). Grid: C(4,1)=1, C(4,2)=3, C(4,3)=4, C(5,4)=1, C(6,5)=3. C(2,4)=3, C(3,5)=2. C(4,6)=5. C(5,6)=3, C(6,6)=2. Check AK 3: C(4,2)=3, C(6,5)=3, C(2,4)=3, C(5,6)=3. C(4,2) vs C(2,4): OK. C(4,2) vs C(5,6): OK. C(6,5) vs C(2,4): OK. C(6,5) vs C(5,6): OK. C(2,4) vs C(5,6): OK. Looks OK. Let's write partial grid:
. . . | . . . | . . .
. . . | 3 . . | . . .
. . . | . 2 . | . . .
1 3 4 | . . 5 | . . .
. . . | 1 . 3 | . . .
. . . | . 3 2 | . . .
Row 6 has C(6,5)=3, C(5,6)=3. Impossible. So C(4,3)=4 is impossible.
8. Conclusion so far: C(4,3) >= 5. C(1,9)!=9. C(4,6)!=9.
9. Consider C(7,3)=8. [...]
[...]

Annotations on the trace:
- Analysis: confirmation of the puzzle rule of arrow and circle.
- Analysis: confirmation of the puzzle rule of knight's move.
- (Strange) Analysis: the model tries to
exclude the possibility of some cells being 9, but the intuition is unclear, and this step of analysis does not effectively narrow the candidate space.
- Search: with the temporary information "circles >= 3", the model starts "guessing" until contradiction. First it tries circle cell r4c3=3, r4c6=5, which finally leads to a contradiction. While the search strategy is brutal, the model also makes mistakes of missing potential placements (e.g., when r4c3=3, it can also have r4c1=2, r4c2=1). Then it tries circle cell r4c3=4, r4c6=5, etc. This search attempt finally leads to a contradiction again.
- More Search: the above attempts at search provide a bit more information, but the model continues to rely on search to solve the puzzle and reaches a wrong solution.

Gemini 2.5 Pro Preview's attempt to solve the puzzle Ascension. In contrast to the break-in by a human solver, the model failed to effectively narrow its search space and had to rely on a more brute-force search strategy, which did not lead to the correct solution.

Figure 2 | Ascension example.

Puzzle authors are ultimately limited only by imagination, often developing whimsical and novel rulesets (e.g., puzzles themed around rats in mazes (Fig. 1)). Crucially, all Sudoku variants maintain a structured format: an 𝑛×𝑛 grid, natural-language puzzle rules, visual elements easily encoded as text, and a single unique solution. This structured yet flexible framework makes Sudoku variants exceptionally suitable for systematically investigating creative reasoning capabilities: the puzzles are very diverse and challenging, yet grounded and easy to verify for correctness.

Puzzle example: Ascension. We illustrate some of these features with an example. Figure 2a highlights the novel interaction between a knight's-move restriction and arrow constraints. To find the puzzle's break-in, the solver must make three observations.
First, whatever the digit highlighted in green (r4c6, box 5), it must occur somewhere in box 2, but not in column 6 (by standard Sudoku rules), nor along its arrow tip, nor a knight's move away; thus it can only occur in one of the two half-shaded cells r1c4 or r1c5. This same pattern applies to the other cell groups highlighted by the other colors shown in the middle panel. The second observation is that since digits on an arrow must be smaller than the corresponding circled base, this creates a long-range chain dependency across the highlighted cells: the circled cells shaded yellow, purple, green, blue, then red must be monotonically increasing. This is a key insight but not enough to determine an exact digit yet. The third observation is that the purple cell must be the sum of three Sudoku digits: the two in its arrow tip, r4c1 and r4c2, one of which is equal to the yellow cell r7c3, which is itself the sum of two Sudoku digits by the arrow rules. The only digit that can be the sum of three Sudoku digits and still leave enough room for the monotonic chain along green, blue, and red is six. Therefore r4c6 must be six, and the subsequent digits in the monotonic chain are forced (right panel). In a video demonstrating
this puzzle solve, an expert solver discovered this break-in in about 4.5 minutes, with the full puzzle solve taking about 35 minutes.1 None of the LLMs we tested was able to make progress. For example, we show the reasoning summary of Gemini 2.5 Pro Preview (Fig. 2b), which was able to successfully parse and identify the puzzle constraints but quickly resorts to guesswork and search. This highlights that there is still a gap between how LLMs reason and how humans prefer to reason; LLMs can rely on brute force, but humans prefer to save time and energy by using precise logic to find shortcuts to correct digits. We hope this benchmark encourages work on creating LLMs that reason in a more "human-like" manner.

The Ascension example highlights two facets of Sudoku variants. First, although both knight-move and arrow constraints are commonplace, this specific interaction is unique to this particular puzzle. Therefore, the memorization-resistance of Sudoku variants is not exclusively due to the inclusion of novel rulesets; familiar constraints can induce a solving tactic never seen before. Indeed, some of the most difficult puzzles adopt deceptively simple rulesets. The second point is that for puzzles with few or no given digits (as is common in variants), the search space is too large for initial guesswork to be effective. This also often necessitates a kind of meta-reasoning where one must decide at the outset which reasoning techniques should be applied, e.g., the use of coloring, set theory, or looking at digit parity. This pattern of needing to spend time at the beginning to understand how the constraints interact in a new manner is normal when humans tackle these puzzles. It also means that some of these initial deductions remain pertinent throughout the solve, so robustly solving some of these puzzles over hundreds of steps will require either a form of memory, like a scratchpad, or a very long context window.
1 https://www.youtube.com/watch?v=-7OR_IK4Th8

3. Sudoku-Bench: Dataset and Benchmark Design

We sought to select 100 puzzles that are representative of the breadth of Sudoku variants. To establish a graded evaluation curve, we selected 15 4×4 puzzles, 15 6×6 puzzles, and 70 9×9 puzzles. The 15 4×4 puzzles are included, in part, to measure progress in even modestly sized language models. Fifty of the 9×9 puzzles were curated by the hosts of Cracking the Cryptic exclusively for this benchmark. The selected puzzles evenly span difficulty ratings from novice-friendly "1-star" puzzles to expert-level "5-star" challenges that may require hours of careful analysis before any digits can be confidently placed. Twenty of the puzzles are difficult vanilla Sudokus, supplied by the puzzle company Nikoli, which popularized Sudoku in the 1980s. We aimed to create a smooth ramp in complexity such that an initial attempt at tackling the benchmark can yield some early success while fully solving it remains very challenging, and we hope this benchmark will resist being solved for a significant time span.

Text descriptions. Each puzzle is given a pure text representation. For instance, Fig. 3 shows
a simple 4×4 puzzle whose line paths are represented as a sequence of rxcy (row x, column y) coordinates, and the location of the dot is described by the two cells it lies between. The rules, visual elements, grid size, and initial board state (if any digits are given) are sufficient to unambiguously specify the puzzle, and they are converted into a prompt. While some of the most recent reasoning models have shifted toward multimodal inputs, we found that most, including OpenAI o3 (OpenAI, 2025), struggle to convert 9×9 puzzles into accurate coordinates. Puzzle benchmarks such as Enigma (Wang et al., 2025) and VGRP (Ren et al., 2025) emphasize the visual aspect of puzzles and require multimodal models. Given that current frontier models still struggle with the exact specification of the visual elements of Sudoku puzzles, we opted to specify all elements precisely in text to isolate the creative reasoning process itself from visual understanding.

Each puzzle's text representation has been precomputed for the puzzles in Sudoku-Bench. We provide the code for extracting text descriptions from a puzzle specified in SudokuPad, allowing researchers to use this harness on other puzzles. Note that many of the puzzles would benefit from visual reasoning, some even potentially requiring it, since many of the break-ins are geometric and use symmetry, or have rules that reference the shapes in the puzzle. Some puzzles can be very visually dense (see bottom-right in Fig. 1), and the current vision models we tested are not powerful enough to extract all the features, such as the small numbers. We suspect that solving this benchmark using vision would represent a significant improvement over current multimodal LLMs.

3.1. Expert reasoning traces

A core question is whether advancing reasoning capabilities in LLMs can benefit from adopting more "human-like" thinking.
In reinforcement learning, pretraining on human supervision is common, while other work has shown that RL from scratch yields better performance in contained environments (Hester et al., 2018; Lowe et al., 2017; Ouyang et al., 2022; Silver et al., 2016). Vanilla Sudoku is an interesting domain in that the strategies humans use differ significantly from those of search-based solvers (Pelánek, 2011), and this effect is especially pronounced in Sudoku variants. The YouTube channel Cracking the Cryptic offers a particularly unique opportunity to explore the benefits of imitation learning. The channel contains over 3,000 published videos demonstrating the solving process of Sudoku variants. Notably, the hosts must verbally describe their thinking process, explaining each logical deduction to the viewer. A typical puzzle takes the hosts around 60 minutes

"Differences Count – part 1" by Sujoyku and Marty Sears

Rules:
Normal 4x4 sudoku rules apply.
Digits separated by a black dot are in a 1:2 ratio.
The difference between two adjacent digits on a coloured line indicates exactly how many pairs of adjacent digits along that line have that exact difference.

Initial grid:
. . . .
. . . .
. . . .
. . . .

Visual elements:
- line, color: pale green, coords: r1c1 r2c2 r4c3
- line, color: plum, coords: r2c2 r2c3 r3c4 r4c4
- circle, color: black, location: between r4c1 and r4c2

Figure 3 | A text representation of a puzzle. The rules, initial
grid, and a text description of visual elements are sufficient to unambiguously specify the puzzle.

to solve, while some of the more difficult puzzles featured on the channel are over 3 hours in length. We developed a dataset consisting of the audio transcripts of each solve, together with a sequence of SudokuPad actions extracted from the video. The actions were extracted using a machine learning model trained on ground-truth actions simulated in SudokuPad and then applied to the video frames. This dataset is hosted on HuggingFace2 under an MIT license in agreement with the hosts of the channel.

3.2. Dataset format

The Sudoku-Bench puzzle dataset3 contains three subsets: challenge_100, nikoli_100, and ctc. The challenge_100 subset is described above and represents the core benchmark. Additional puzzle data include nikoli_100, a collection of hand-made vanilla Sudokus supplied by Nikoli for this benchmark (20 of which are featured in challenge_100). The nikoli_100 puzzles are designed to highlight creative or human-like reasoning in their solution paths and may be applicable to many of the research approaches that use vanilla Sudoku as a testbed (Section 5). The ctc subset includes 2,565 Sudoku variants that have been solved on Cracking the Cryptic. Due to the breadth and variety of Sudoku variants, the text representation of each puzzle in ctc has not undergone manual checking, and an unambiguous representation of the board would require a screenshot in some cases.

3.3. SudokuPad environment

We also provide tools for interacting with SudokuPad in an agentic environment. SudokuPad enables common note-taking strategies used by human solvers, including color-coding cells (as in Fig. 2a) or adding candidate digits or pencil marks to cells. Our simple harness allows models to directly interface with the application to make use of these tools.
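As an illustration, the rxcy coordinate strings used in the text representations (as in Fig. 3) can be parsed with a few lines. This is a sketch; the function name is ours, not part of the released tooling.

```python
import re

def parse_rc(coords):
    """Parse a SudokuPad-style coordinate string such as 'r1c1 r2c2 r4c3'
    into a list of (row, column) tuples."""
    return [(int(r), int(c)) for r, c in re.findall(r"r(\d+)c(\d+)", coords)]
```

The same pattern also works on prose-style element locations such as "between r4c1 and r4c2", since the regex simply picks out every rxcy token.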
Using SudokuPad in-the-loop may fit well with related benchmarks that evaluate reasoning models (including vision language models) in simple game environments (Paglieri et al., 2024; Ren et al., 2025). Our evaluation in this paper (Section 4) uses text interaction (relying only on SudokuPad for the initial puzzle data extraction). We make all of these SudokuPad tools available for researchers in our repository: https://github.com/SakanaAI/Sudoku-Bench.

[2] huggingface.co/datasets/SakanaAI/Sudoku-CTC-Reasoning
[3] huggingface.co/datasets/SakanaAI/Sudoku-Bench

Model | Multi-step correct placements (4×4 / 6×6 / 9×9 / All) | Multi-step solve rate % (4×4 / 6×6 / 9×9 / All) | Single-shot solve rate % (4×4 / 6×6 / 9×9 / All)
O3 Mini High | 9.7 / 0.7 / – / – | 60.0 / 0.0 / – / – | 73.3 / 6.7 / 2.9 / 14.0
Gemini 2.5 Pro | 11.6 / 0.6 / 1.8 / 3.1 | 73.3 / 0.0 / 0.0 / 11.0 | 60.0 / 13.3 / 0.0 / 11.0
Qwen 3 235B A22B | 6.5 / 1.1 / 0.7 / 1.7 | 40.0 / 0.0 / 0.0 / 6.0 | 53.3 / 0.0 / 0.0 / 8.0
Qwen 3 30B A3B | 1.3 / 0.0 / 0.3 / 0.4 | 6.7 / 0.0 / 0.0 / 1.0 | 46.7 / 0.0 / 0.0 / 7.0
DeepSeek R1 | 9.5 / 0.8 / 1.1 / 2.3 | 60.0 / 0.0 / 0.0 / 9.0 | 40.0 / 0.0 / 0.0 / 6.0
Grok 3 Mini | 8.5 / 0.7 / 0.9 / 2.0 | 53.3 / 0.0 / 0.0 / 8.0 | 40.0 / 0.0 / 0.0 / 6.0
Qwen QwQ 32B | 5.0 / 0.7 / 0.6 / 1.3 | 26.7 / 0.0 / 0.0 / 4.0 | 40.0 / 0.0 / 0.0 / 6.0
Qwen 3 32B | 4.3 / 0.5 / 0.5 / 1.0 | 26.7 / 0.0 / 0.0 / 4.0 | 40.0 / 0.0 / 0.0 / 6.0
Claude 3.7 Sonnet (Thinking) | 8.1 / 1.1 / – / – | 40.0 / 0.0 / – / – | 33.3 / 0.0 / 0.0 / 5.0
GPT 4.1 | 2.3 / 0.2 / 0.3 / 0.6 | 13.3 / 0.0 / 0.0 / 2.0 | 13.3 / 0.0 / 0.0 / 2.0
Gemini 2.0 Flash | 0.5 / 0.1 / 0.2 / 0.2 | 0.0 / 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 / 0.0
Gemma 3 27B IT | 0.1 / 0.1 / 0.5 / 0.3 | 0.0 / 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 / 0.0
Llama 4 Maverick | 0.2 / 0.5 / 0.4 / 0.4 | 0.0 / 0.0 / 0.0 / 0.0 | 0.0 / 0.0 / 0.0 / 0.0

Table 1 | Sudoku-Bench leaderboard. Performance comparison of various LLMs on Sudoku-Bench. Percentage of puzzles completely solved for each evaluation mode (multi-step vs. single-shot), stratified by grid size. The right-most All columns aggregate across grid sizes (15 puzzles for 4×4 and 6×6, 70 for 9×9). In the multi-step setting, a model is prompted to provide any number of digits in its response, with the user providing an updated board state at each turn. Interaction is terminated if the model makes an incorrect placement. The average number of correct placements is presented in the first column set. In the single-shot setting, the model is prompted to solve the entire puzzle in a single response. "–" indicates that fewer than the required number of responses were available due to cost limitations, so an aggregate could not be computed.

3.4. Evaluation Framework

Multi-step and single-shot. We evaluate models in both multi-round and single-shot configurations. In the multi-round setup, we prompt the model to analyze the board and give at least one valid digit placement per response. We clarify that these are committed digits that cannot be undone (in the model's reasoning trace, any amount of internal backtracking is possible in order to deduce a digit). Once a digit is placed, the user displays the updated board state. We continue until the puzzle is solved or the LLM misplaces any digit. In the multi-round setting, we track both the solve rate and the number of correct digit placements per puzzle. To keep the context window manageable, we keep the most recent 5 responses from the LLM in context, while always keeping the first user message with the puzzle specification and instructions.
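The multi-round protocol just described (the model commits placements, the harness checks each one, interaction stops at the first wrong digit, and only the most recent five responses stay in context) can be sketched as a simple loop. Here `ask_model`, `solution`, and the response format are hypothetical stand-ins, not Sudoku-Bench internals.

```python
# Minimal sketch of the multi-round evaluation loop described above.
# `ask_model(board, history)` stands in for an LLM call returning committed
# placements; `solution` is the ground-truth grid. Both are hypothetical.
def run_multistep(ask_model, solution, board, max_turns=200):
    correct = 0
    history = []
    for _ in range(max_turns):
        resp = ask_model(board, history[-5:])  # keep last 5 responses in context
        history.append(resp)
        for r, c, d in resp["placements"]:
            if solution[r][c] != d:            # incorrect placement: terminate
                return correct, False
            board[r][c] = d
            correct += 1
        if all(cell is not None for row in board for cell in row):
            return correct, True               # puzzle solved
    return correct, False
```

This yields exactly the two multi-step quantities tracked in Table 1: the number of correct placements before failure, and whether the puzzle was solved.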
We report the averages as average solve rate and average correct digits. In our evaluation, we run a single evaluation per model and per puzzle, so the average is across the 100 puzzles in the set. In the single-shot configuration, we prompt the model to provide a solution in a single response. A single-shot configuration is appropriate for evaluating models with sufficiently large context, or for a more straightforward evaluation of the smaller 4×4 puzzles. In the single-shot setting, we report only the average solve rate.

Figure 4 | Response categorization for the single-shot setting.

4. Baseline Performance and Analysis

We evaluated the current generation of state-of-the-art large language models on Sudoku-Bench, revealing the substantial difficulty posed by these Sudoku variants. Table 1 summarizes model performance across puzzle sizes and interaction modes on the benchmark. Even leading models such as O3 Mini High and Gemini 2.5 Pro Preview demonstrated solve rates below 15% on the complete set. Notably, performance varied significantly by puzzle size: models generally solved the smaller 4×4 puzzles at rates between 40% and 73%, but performance declined sharply on 6×6 grids and dropped to nearly zero on 9×9 puzzles, underscoring the rapid escalation in complexity.
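The aggregate All columns in Table 1 appear to be count-weighted averages of the per-size rates, using the stated puzzle counts (15 each for 4×4 and 6×6, 70 for 9×9). A quick check against two of the reported values:

```python
# Check that the "All" columns in Table 1 are consistent with a
# count-weighted average of the per-size rates (15, 15, and 70 puzzles).
def aggregate(rates, counts=(15, 15, 70)):
    return sum(r * n for r, n in zip(rates, counts)) / sum(counts)

gemini_multistep_all = aggregate([73.3, 0.0, 0.0])  # table reports 11.0
o3_singleshot_all = aggregate([73.3, 6.7, 2.9])     # table reports 14.0
```

Both computed values match the table to the reported precision, which supports reading All as an aggregate over the 100 puzzles rather than an unweighted mean of the three size-specific rates.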
Comparing single-shot to multi-step evaluation modes, allowing iterative feedback slightly improved outcomes for smaller puzzles but did not meaningfully affect results for larger puzzles. The minimal difference between modes suggests that the fundamental difficulty for these models lies not merely in incremental reasoning but in effectively identifying the initial logical breakthroughs.

Categorizing model failures. Analyzing model failures revealed several recurring patterns, which we categorize in Fig. 4. The most common failure mode was confidently presenting an Incorrect Solution. Other failure modes included Surrender (the model explicitly gives up), Missing Information (the model incorrectly claims puzzle information or given constraints are incomplete), and Claimed Contradiction (the model mistakenly identifies contradictions in the puzzle rules). Missing Information is of particular note. Since variants are less densely represented in the training sets of foundation models than vanilla Sudoku, the new rules and variants appear to throw models off, most notably because variants typically have fewer starting digits (often none) compared to the minimum of 17 in a vanilla 9×9 Sudoku. In addition, some model responses contain No Reasoning Trace, so we cannot make a fine-grained categorization of their error type; otherwise, we use Claude-3.5-Haiku to classify a wrong-solution response into one of the other four error types.

Sumthings, by Kennet's Dad. Place the digits 1-6, once each, in every row, column, and region. Region borders divide the blue line into segments with the same sum. (Description of the puzzle Sumthings.)

Reasoning trace by gemini-2.5-pro-preview-03-25:
1. Understand the Rules and Setup [...]
2. Analyze the Blue Lines and Segments [...]
* Line 2: r3c4, r2c4, r3c3, r4c3, r3c2, r4c2, r3c1, r2c1
* Segments: {r3c4, r2c4} (R4), {r3c3, r4c3} (R2), {r3c2, r4c2, r3c1, r2c1} (R1).
* Constraint: Sums are equal. Let this sum be S2.
* S2 = r3c4 + r2c4
* S2 = r3c3 + r4c3
* S2 = r3c2 + r4c2 + r3c1 + r2c1
* The minimum sum for 4 distinct digits is 1+2+3+4 = 10. The maximum sum for 2 distinct digits is 5+6 = 11. Therefore, S2 must be 10 or 11.
* [...]
* The minimum sum for 4 distinct digits is 10. The maximum sum for 2 distinct digits is 11. Therefore, S3 must be 10 or 11.
3. Determine the Line Sums (S2 and S3)
* [...]
* Let A = digits in {r2c4, r3c4}. Let B = digits in {r4c4, r5c4, r6c4, r6c5}. A and B are disjoint sets of digits.
* If S2=10, A={4, 6}. If S2=11, A={5, 6}.
* If S3=10, B={1, 2, 3, 4}. If S3=11, B={1, 2, 3, 5}.
* Test combinations:
  * S2=10, S3=10: A={4,6}, B={1,2,3,4}. Union = {1,2,3,4,6}. Missing 5. Impossible.
  * S2=10, S3=11: A={4,6}, B={1,2,3,5}. Union = {1,2,3,4,5,6}. Possible.
  * S2=11, S3=10: A={5,6}, B={1,2,3,4}. Union = {1,2,3,4,5,6}. Possible.
  * S2=11, S3=11: A={5,6}, B={1,2,3,5}. Union = {1,2,3,5,6}. Missing 4. Impossible.
* So, either (S2=10, S3=11) or (S2=11, S3=10).
4. Use the r1c4 = r6c5 Relationship [...]
5. Test the Two Possible Scenarios for S2 and S3
* Scenario 1: S2=10, S3=11 [...]
* Scenario 2: S2=11, S3=10 [...]
[...]
Analysis: Understanding the rules.
Analysis: The model effectively reduces the possible cases of the two blue lines' segment sums
(S2 and S3) to two candidates (10 or 11).
Analysis: From the 2 × 2 = 4 combinations of S2 and S3, it further reduces these to 2 possible cases, which forms a small constrained space for the later search.
Search: The model starts to search from (S2=10, S3=11) or (S2=11, S3=10), guessing other numbers on the lines until reaching a contradiction. Due to the small search space, the model can find the only correct placements on the lines and reach a correct global solution in the end.

Gemini 2.5 Pro's solution to the puzzle Sumthings.

Figure 5 | Sumthings example.

A successful solve. While models often struggle with complex break-ins, they can sometimes succeed on moderately complex puzzles by effectively narrowing the search space. For instance, Figure 5 illustrates a 6×6 puzzle, Sumthings, which Gemini 2.5 Pro Preview solved. The model adopted a strategy of reducing the search space to a manageable size, then employing search to find the correct solution. This approach, however, proves less effective as puzzle complexity increases, where identifying specific "break-in" insights becomes crucial, as demonstrated by the Ascension example (Figure 2).

5. Related Work

Sudoku-Bench complements existing benchmarks designed to evaluate advanced reasoning in artificial intelligence, with a particular focus on Sudoku variants as a structured domain for assessing creative and logical deduction.

Benchmarks targeting creative deductive insight. Benchmarks such as the Abstraction and Reasoning Corpus (ARC; Chollet, 2019) present diverse tasks to test reasoning and generalization beyond pattern memorization. Sudoku-Bench similarly introduces novel constraints for each puzzle, resisting memorization through a continuous influx of unique puzzles. Unlike ARC, which emphasizes tasks simple for humans but challenging for AI, Sudoku variants span a broader difficulty spectrum, including puzzles challenging even for expert human solvers.
Nonetheless, Sudoku puzzles offer recognizable logical breakthroughs readily appreciated by human novices, making Sudoku-Bench a valuable resource for precise evaluation of creative reasoning.

Puzzle-centric reasoning datasets. Several benchmarks focus on puzzle-solving for evaluating reasoning skills (Giadikiaroglou et al., 2024). For instance, PUZZLES (Estermann et al., 2024) compiles canonical logic puzzles; Tyagi et al. (2024) systematically analyze grid-puzzle solving by LLMs; and EnigmaEval (Wang et al., 2025) evaluates a large suite of problems from puzzle competitions. Recent additions include VGRP-Bench (Ren et al., 2025) for visual-grid reasoning, LogicGame (Gui et al., 2024) for rule-based reasoning, and PuzzlePlex (Long et al., 2024) for evaluating conversational agents' reasoning. BALROG (Paglieri et al., 2024) evaluates LLM and VLM reasoning in complex game environments and could be extended using tools from Sudoku-Bench to include SudokuPad as an environment.

Sudoku as a reasoning testbed. The standard Sudoku puzzle has been extensively utilized in machine learning research. Models include Recurrent Relational Networks (Palm et al., 2018) employing message-passing, differentiable SATNet consistency layers (Wang et al., 2019), masked-denoising and diffusion methods (Kim et al., 2025; Ye et al., 2024), and Kuramoto-inspired oscillator dynamics (Miyato et al., 2024). Further, large language models have achieved human-level accuracy through structured prompting and reasoning decomposition (Long, 2023). Shah et al. (2024) showed a high solve rate on vanilla Sudokus by training on a sequence of steps from a solver. Sudoku-Bench extends this research tradition by incorporating diverse and novel puzzle constraints, enabling evaluations that specifically target
multi-step, strategic, and creative reasoning.

6. Discussion

The role of tool use. Evaluating model reasoning can be distinguished by whether external tools, such as constraint solvers or code execution environments, are available. Without tool use, the evaluation specifically assesses the model's intrinsic reasoning capabilities, including logical deduction, maintaining global consistency, and internally generating creative insights, akin to solving puzzles by hand. This approach emphasizes pure cognitive reasoning skills and has been the primary evaluation mode presented in our baselines (Section 4). Conversely, allowing tool use tests the model's ability to translate a given puzzle into a formal representation suitable for external solvers, to interact effectively with these tools, and to interpret solver results correctly. Standard Sudoku puzzles become straightforward when a solver is employed. Variants that only employ standard constraints such as arrows, cages, etc., are also easily solved by code execution. A third category of puzzles requires natural language understanding and is not straightforward to interpret as a constraint satisfaction problem. This third category is itself a meaningful test for reasoning models with tool use enabled. However, our current intention is to assess the reasoning required to find a puzzle's "break-in": many puzzles, such as Ascension from Fig. 2a, are easily solved with tool use, but the solution path would be substantially different from that intended by the puzzle setter. Therefore we selected the 100 puzzles of Sudoku-Bench for evaluating models without tool use. Future work could consider a separate tool-use track, potentially with a different collection of puzzles.

Conclusion

We introduced Sudoku-Bench, a unified benchmark built around modern Sudoku variants that systematically stress long-horizon deduction, rule interpretation, and strategic planning.
In addition, the benchmark is uniquely suited for evaluating creative reasoning via the rich and varied collection of break-ins featured in most puzzles. The benchmark includes curated puzzle corpora with textual representations, providing a controlled substrate for measuring how well language models cope with novel, tightly coupled constraints. Baseline experiments show that frontier LLMs solve fewer than 15% of instances without external tools, and performance falls sharply on 9×9 variants, evidence that substantial headroom remains for improvement.

Acknowledgments

We thank Sven Neumann, author of SudokuPad, for help with the development of the tooling used in Sudoku-Bench, and for permission to use SudokuPad as part of this project. Sudoku-Bench is developed in partnership with Cracking the Cryptic, with an agreement to provide content from the channel for use in the AI research community. The puzzles provided in the benchmark are featured on the channel. We acknowledge all puzzle creators and provide a list of setter acknowledgments in our repository. We thank the hosts of Cracking the Cryptic, Simon Anthony and Mark Goodliffe, for their help in the development of this benchmark, including the curated selection of puzzles in Sudoku-Bench, and for providing SudokuPad replay files for the reasoning traces described in Section 3.1. The handmade vanilla Sudoku puzzles were provided by Nikoli; we thank Nikoli for graciously agreeing to provide their puzzles for this benchmark.

References

Logic Masters Germany. https://logic-masters.de. Accessed: 2025-05-13.

S. Bubeck,
V. Chandrasekaran, R. Eldan, J. A. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y.-F. Li, S. M. Lundberg, H. Nori, H. Palangi, M. T. Ribeiro, and Y. Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4. ArXiv, abs/2303.12712, 2023. URL https://api.semanticscholar.org/CorpusID:257663729.

F. Chollet. On the measure of intelligence, 2019. URL https://arxiv.org/abs/1911.01547.

B. Estermann, L. A. Lanzendörfer, Y. Niedermayr, and R. Wattenhofer. PUZZLES: A benchmark for neural algorithmic reasoning. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 127059–127098. Curran Associates, Inc., 2024. URL https://proceedings.neurips.cc/paper_files/paper/2024/file/e5d1eaadeed651ba1021c09149db4b92-Paper-Datasets_and_Benchmarks_Track.pdf.

P. Giadikiaroglou, M. Lymperaiou, G. Filandrianos, and G. Stamou. Puzzle solving using reasoning of large language models: A survey. arXiv preprint arXiv:2402.11291, 2024.

E. Glazer, E. Erdil, T. Besiroglu, D. Chicharro, E. Chen, A. Gunning, C. F. Olsson, J.-S. Denain, A. Ho, E. de Oliveira Santos, O. Järviniemi, M. Barnett, R. Sandler, M. Vrzala, J. Sevilla, Q. Ren, E. Pratt, L. Levine, G. Barkley, N. Stewart, B. Grechuk, T. Grechuk, S. V. Enugandla, and M. Wildon. FrontierMath: A benchmark for evaluating advanced mathematical reasoning in AI, 2024. URL https://arxiv.org/abs/2411.04872.

J. Gui, Y. Liu, J. Cheng, X. Gu, X. Liu, H. Wang, Y. Dong, J. Tang, and M. Huang. LogicGame: Benchmarking rule-based reasoning abilities of large language models. arXiv preprint arXiv:2408.15778, 2024.

D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring mathematical problem solving with the MATH dataset. NeurIPS, 2021.

T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan, A.
Sendonaris, I. Osband, G. Dulac-Arnold, J. Agapiou, J. Z. Leibo, and A. Gruslys. Deep Q-learning from demonstrations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, AAAI'18/IAAI'18/EAAI'18. AAAI Press, 2018. ISBN 978-1-57735-800-8.

J. Kim, K. Shah, V. Kontonis, S. Kakade, and S. Chen. Train for the worst, plan for the best: Understanding token ordering in masked diffusions. arXiv preprint arXiv:2502.06768, 2025.

D. Li, S. Cao, T. Griggs, S. Liu, X. Mo, E. Tang, S. Hegde, K. Hakhamaneshi, S. G. Patil, M. Zaharia, et al. LLMs can easily learn to reason from demonstrations: Structure, not content, is what matters! arXiv preprint arXiv:2502.07374, 2025.

J. Long. Large language model guided tree-of-thought. arXiv preprint arXiv:2305.08291, 2023.

Y. Long, T. Jiang, Y. Zhao, A. Cohan, and D. Shasha. PuzzlePlex: A benchmark to evaluate the reasoning and planning of large language models on puzzles. 2024.

R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 6382–6393, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.

T. Miyato, S. Löwe, A. Geiger, and M. Welling. Artificial Kuramoto oscillatory neurons. arXiv preprint arXiv:2410.13821, 2024.

N. Muennighoff, Z. Yang, W. Shi, X. L. Li, L. Fei-Fei, H. Hajishirzi, L.
Zettlemoyer, P. Liang, E. Candès, and T. Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025.

S. Neumann. SudokuPad, 2021. URL https://sudokupad.app/.

OpenAI. OpenAI o3 and o4-mini system card. https://openai.com/index/o3-o4-mini-system-card/, 2025.

L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback. In Proceedings of the 36th International Conference on Neural Information Processing Systems, NIPS '22, Red Hook, NY, USA, 2022. Curran Associates Inc. ISBN 9781713871088.

D. Paglieri, B. Cupiał, S. Coward, U. Piterbarg, M. Wolczyk, A. Khan, E. Pignatelli, Ł. Kuciński, L. Pinto, R. Fergus, et al. BALROG: Benchmarking agentic LLM and VLM reasoning on games. arXiv preprint arXiv:2411.13543, 2024.

R. Palm, U. Paquet, and O. Winther. Recurrent relational networks. Advances in Neural Information Processing Systems, 31, 2018.

R. Pelánek. Difficulty rating of Sudoku puzzles by a computational model. In The Florida AI Research Society, 2011. URL https://api.semanticscholar.org/CorpusID:6431985.

L. Phan, A. Gatti, Z. Han, N. Li, J. Hu, H. Zhang, C. B. C. Zhang, M. Shaaban, J. Ling, S. Shi, et al. Humanity's last exam, 2025. URL https://arxiv.org/abs/2501.14249.

Y. Ren, K. Tertikas, S. Maiti, J. Han, T. Zhang, S. Süsstrunk, and F. Kokkinos. VGRP-Bench: Visual grid reasoning puzzle benchmark for large vision-language models. arXiv preprint arXiv:2503.23064, 2025.

K. Shah, N. Dikkala, X. Wang, and R. Panigrahy. Causal language modeling can elicit search and reasoning capabilities on logic puzzles. arXiv preprint arXiv:2409.10502, 2024.

D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I.
Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, Jan. 2016. doi: 10.1038/nature16961.

N. Tyagi, M. Parmar, M. Kulkarni, A. Rrv, N. Patel, M. Nakamura, A. Mitra, and C. Baral. Step-by-step reasoning to solve grid puzzles: Where do LLMs falter? In Y. Al-Onaizan, M. Bansal, and Y.-N. Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 19898–19915, Miami, Florida, USA, Nov. 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.1111. URL https://aclanthology.org/2024.emnlp-main.1111/.

C. J. Wang, D. Lee, C. Menghini, J. Mols, J. Doughty, A. Khoja, J. Lynch, S. Hendryx, S. Yue, and D. Hendrycks. EnigmaEval: A benchmark of long multimodal reasoning challenges, 2025. URL https://arxiv.org/abs/2502.08859.

P.-W. Wang, P. Donti, B. Wilder, and Z. Kolter. SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. Pages 6545–6554, 2019.

J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, and D. Zhou. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS, 2022. URL https://arxiv.org/abs/2201.11903.

J. Ye, J. Gao, S. Gong, L. Zheng,
Distilling the Implicit Multi-Branch Structure in LLMs' Reasoning via Reinforcement Learning

Shicheng Xu 1,2, Liang Pang 1*, Yunchang Zhu 3, Jia Gu 1,2, Zihao Wei 1,2, Jingcheng Deng 1,2, Feiyang Pan 3, Huawei Shen 1, Xueqi Cheng 1
1 State Key Laboratory of AI Safety, Institute of Computing Technology, CAS
2 University of Chinese Academy of Sciences  3 Huawei Inc.
{xushicheng21s,pangliang,shenhuawei,cxq}@ict.ac.cn
zhuyunchang@huawei.com, pfy824@gmail.com

Abstract

Distilling reasoning paths from teacher to student models via supervised fine-tuning (SFT) provides a shortcut for improving the reasoning ability of smaller Large Language Models (LLMs). However, the reasoning paths generated by teacher models often reflect only surface-level traces of their underlying authentic reasoning. Insights from cognitive neuroscience suggest that authentic reasoning involves a complex interweaving between meta-reasoning, which selects the appropriate sub-problem from multiple candidates, and solving, which addresses that sub-problem. This means that authentic reasoning has an implicit multi-branch structure. Supervised fine-tuning collapses this rich structure into a flat sequence of token prediction over the teacher's reasoning path, and so cannot distill the structure to the student. To address this limitation, we propose RLKD, a reinforcement learning (RL)-based distillation framework guided by a novel Generative Structure Reward Model (GSRM). The GSRM converts a reasoning path into multiple meta-reasoning-solving steps and computes a reward measuring the alignment between the reasoning structures of student and teacher. RLKD combines this reward with RL, enabling the student LLM to internalize the teacher's implicit multi-branch structure of authentic reasoning, rather than merely mimicking the teacher's fixed output paths.
Experiments show that RLKD, even when trained on only 0.1% of the data under an RL-only regime, surpasses the performance of standard SFT-RL pipelines and unlocks more of the student LLM's latent reasoning ability than SFT-based distillation.

1 Introduction

Recently, Large Language Models (LLMs) have demonstrated impressive abilities on complex reasoning tasks Pan et al. [2025], Jiaqi et al. [2025], Xu et al. [2025a] by generating long reasoning paths Havrilla et al. [2024], Wei et al. [2022], as exemplified by Deepseek-R1 Guo et al. [2025]. However, high training costs Guo et al. [2025], Xu et al. [2025b] and a strong base model Chu et al. [2025], Yue et al. [2025] are required for LLMs to develop this capability, which prevents resource-constrained teams from building it into their own LLMs. To address this challenge, supervised fine-tuning (SFT) on reasoning paths generated by LLMs with powerful reasoning capabilities provides a shortcut: it efficiently teaches smaller LLMs to generate long reasoning paths and achieve significant improvements Guo et al. [2025], Zhang et al. [2025], Face, Wen et al. [2025]. Despite this advance, some studies find that SFT-distilled reasoning LLMs are trapped in rigid imitation rather than authentic reasoning. Purely mimicking the teacher's reasoning paths can leave the student LLM "unthinking": it replicates the surface form of the reasoning steps yet still makes errors on key underlying steps Chen et al. [2025], Dai et al. [2024]. To analyze and solve this phenomenon, we introduce concepts from human cognitive neuroscience to rethink the definition of authentic reasoning in LLMs, which consists of two parts: meta-reasoning and solving Stefik [1981], Cox and Raja [2007], Russell and Wefald [1991]. Specifically, answering a complex problem involves multiple steps, and each step consists of a meta-reasoning phase that determines the specific sub-problem from multiple potential sub-problems, followed by a solving phase that executes or answers the determined sub-problem. From this perspective, although the generated reasoning content is a definitive path, each step on this path is actually determined by meta-reasoning from multiple candidate states. Therefore, the complex interweaving between meta-reasoning and solving constitutes authentic reasoning, in which the generated reasoning path together with the multiple other potential paths forms the implicit multi-branch structure (Figure 1 (a)). A critical challenge in SFT-based distillation is that it does not teach the student LLM this structure. SFT trains a student LLM to imitate the teacher's output sequence token-by-token with cross-entropy loss.

* Corresponding Author. Preprint. Under review. arXiv:2505.16142v1 [cs.CL] 22 May 2025

Figure 1: (a) The generated reasoning path has an implicit multi-branch structure. (b) Distillation based only on SFT collapses the rich structure into a flat sequence of token prediction that memorizes only the teacher's generated path. (c) Our proposed RL-based distillation teaches the student LLM to learn this structure by using a Generative Structure Reward Model to measure the alignment between the reasoning structures of the student and teacher, serving as the reward in RL.
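The token-level objective being criticized can be written in a few lines. Below is a minimal sketch, with toy per-position probability tables standing in for a real student model.

```python
import math

# Minimal sketch of the token-level SFT objective: the average negative
# log-likelihood of the teacher's tokens under the student, with every
# token weighted equally -- the flat objective discussed above.
def sft_loss(student_probs, teacher_tokens):
    # student_probs[i][tok] = student's probability of `tok` at position i
    nll = [-math.log(student_probs[i][tok]) for i, tok in enumerate(teacher_tokens)]
    return sum(nll) / len(nll)
```

Nothing in this objective distinguishes a meta-reasoning token from a solving token, which is exactly the uniformity the paper argues discards the multi-branch structure.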
Therefore, SFT collapses this rich implicit multi-branch structure in the teacher into a flat sequence of token prediction that memorizes only the teacher's generated path (Figure 1 (b)), while failing to learn how the path is sampled from among the other potential paths. Besides, by treating each token equally in training, the student LLM cannot learn how to meta-reason, how to solve, and how to connect the two across multiple steps. To solve this problem, it is important that distillation provide a learning signal that clearly organizes the multi-step meta-reasoning and solving in the reasoning paths, which is the core of the implicit multi-branch structure. This is highly semantic supervision that cannot be accomplished by token-level SFT and requires reinforcement learning (RL) Ouyang et al. [2022], Havrilla et al. [2024], so we propose RLKD, the first reinforcement learning-based knowledge distillation method for LLM reasoning. In RLKD, we design the Generative Structure Reward Model (GSRM), a two-stage reward pattern that combines the semantic understanding of a generative reward model Mahan et al. [2024] with the interpretability and controllability of a rule-based reward model Guo et al. [2025]. GSRM converts a reasoning path into a sequence of multiple meta-reasoning-solving steps, and then scores the matching degree between the sequences of the student and the teacher with a structured reward mechanism. When combining GSRM with RL, our RLKD can guide the student
LLM at a step level on how to better perform sampling to select the most suitable sub-problem from multiple potential ones, and then solve it. In this way, RLKD can distill the implicit multi-branch structure in the reasoning of teacher LLMs to student LLMs, making the student learn to perform authentic reasoning (Figure 1 (c)). Experimental results on math (AIME and MATH-500) and graduate-level Q&A (GPQA) show that our RLKD: (1) can use only 0.1% of the training data in an RL-only paradigm to outperform the SFT-RL pipeline on Qwen2.5-Math, (2) can unlock more of the student LLM's potential reasoning ability than SFT-based distillation, and (3) can outperform existing RL baselines. The contributions of this paper can be summarized as:

• We point out that the reasoning paths generated by LLMs are merely superficial manifestations of authentic reasoning, whereas authentic reasoning possesses an implicit multi-branch structure, which cannot be distilled to the student LLM by SFT.
• We propose RLKD which, to the best of our knowledge, is the first RL-based distillation method for LLM reasoning, and can effectively distill the implicit multi-branch structure.
• The experimental results demonstrate the superiority of our proposed RL-based method over SFT-based distillation, highlighting the immense potential of RL-based distillation techniques for LLM reasoning and paving the way for new directions in subsequent research.

2 Related Work

2.1 Structure in Reasoning of LLMs

Recent studies find that the reasoning of LLMs has structure. Chain-of-Thought Wei et al. [2022], Least-to-Most Zhou et al. [2022], and Self-Ask Press et al. [2022] initially formalize the reasoning path as a chain consisting of multiple nodes, which is a linear structure. Tree-of-Thought Yao et al. [2023], Graph-of-Thought Besta et al. [2024], and SearChain Xu et al. [2024a] explicitly build a non-linear structure for reasoning. SuperCorrect Yang et al.
[2025] uses high-level plans plus detailed steps as hierarchical thought templates to correct student models. The commonality of existing studies is that they explicitly have LLMs generate a specified structure in reasoning through prompt engineering or fine-tuning. Different from them, we introduce human cognitive neuroscience to rethink reasoning as consisting of meta-reasoning and solving. The former is responsible for determining the sub-problem from multiple potential candidates, while the latter aims to solve that sub-problem. This means that a generated reasoning path contains multiple other potential paths, and these potential paths together with the generated path form the structure of reasoning. We focus on how to distill this implicit multi-branch structure into the student LLM.

2.2 Supervised Fine-Tuning for Reasoning Path Distillation

Supervised fine-tuning (SFT) on chain-of-thought demonstrations has emerged as a straightforward way to distill reasoning capabilities from large models into smaller ones. For example, Deepseek released a series of distilled LLMs based on Deepseek-R1 that significantly improve reasoning capabilities Guo et al. [2025]. A similar study shows that with only 17k curated reasoning traces, a 32B student model can nearly match the closed-source o1-preview on math and coding benchmarks Li et al. [2025], and many open-source projects have been released Wen et al. [2025], Face, Bespoke-Labs. However, recent findings highlight that SFT often teaches
https://arxiv.org/abs/2505.16142v1
format over substance: models learn to imitate the reasoning paths without authentic understanding of their content Chen et al. [2025], Dai et al. [2024]. In fact, a student can produce correct answers by mimicking a long Chain-of-Thought (CoT) pattern even if many intermediate steps are incorrect Li et al. [2025]. The key reason behind this limitation is that reasoning has an implicit multi-branch structure, but SFT-based distillation collapses this rich structure into a flat sequence of token predictions. So we propose RLKD, an RL-based distillation method that can teach the student LLM to learn this structure.

2.3 Reinforcement Learning for Reasoning Optimization in LLMs

Reinforcement learning (RL) has been explored as a means to optimize reasoning strategies in language models, building on foundations like Proximal Policy Optimization (PPO) Schulman et al. [2017]. More recent advances include Group Relative Policy Optimization (GRPO) Shao et al. [2024], introduced by the DeepSeek team to push mathematical reasoning performance, and Decoupled Clip and Dynamic Sampling Policy Optimization (DAPO) Yu et al. [2025] to train LLMs with RL at scale. These methods primarily rely on human feedback, outcome accuracy or heuristic rewards to optimize LLMs for specific downstream tasks. Different from them, our RLKD uses RL in knowledge distillation to transfer the implicit multi-branch structure from teacher to student, making the student LLM learn authentic reasoning.

3 Our Method

This section introduces the details of our RLKD, an RL-based knowledge distillation method that transfers the implicit multi-branch structure for complex reasoning from teacher to student. Firstly, we propose to train a Generative Structure Reward Model (GSRM) to score the alignment degree between the reasoning paths of the student and teacher LLMs in terms of their implicit multi-branch structure. Then, we combine the GSRM with GRPO for RL-based knowledge distillation training.
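As a rough sketch of how these two components fit together at the reward level: the GSRM structure reward is blended with a task outcome reward, and the blended rewards are turned into group-normalized advantages for GRPO. The 50/50 weighting and the zero-variance fallback below are illustrative assumptions, not the paper's exact implementation.

```python
import math

def combined_reward(structure_reward, outcome_reward, w_structure=0.5):
    """Weighted total reward fed to GRPO; the 0.5/0.5 split is an
    illustrative assumption (the paper ablates these weights)."""
    return w_structure * structure_reward + (1.0 - w_structure) * outcome_reward

def group_normalized_advantages(rewards):
    """GRPO-style advantages: center each reward by the group mean and
    scale by the group standard deviation (1.0 fallback when the group
    has zero variance)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) or 1.0
    return [(r - mean) / std for r in rewards]
```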
3.1 Generative Structure Reward Model

Rewarding the implicit multi-branch structure is premised on accessing the meta-reasoning and solving at each reasoning step. However, the raw reasoning path generated by LLMs such as Deepseek-R1 is unstructured, making it difficult to directly distinguish between reasoning steps and to decouple the meta-reasoning and solving content of each step. To solve this, we propose the Generative Structure Reward Model (GSRM), a two-stage reward pattern that combines the semantic understanding of a generative reward model Mahan et al. [2024] with the interpretability and controllability of a rule-based reward model Guo et al. [2025]. In the implementation of GSRM, we first train an LLM to generate a sequence of meta-reasoning and solving pairs for the input reasoning path. Then, we design a structured reward mechanism to score the alignment degree between the teacher and the student in terms of their meta-reasoning and solving content at the corresponding steps.

3.1.1 Generation of Meta-Reasoning and Solving Sequence

Dataset Construction. We devise detailed instructions and examples to perform in-context learning (ICL) with the GPT-4o API, enabling it to automatically construct a large-scale supervised fine-tuning dataset. Each data sample of this dataset is an input-output pair: the reasoning path R is the input, and the sequence S consisting of multiple meta-reasoning and solving steps is the output. Specifically,
we first structurally define each meta-reasoning and solving step as (M, Q&A), in which M is the content of meta-reasoning, which focuses on determining the sub-problem that the current reasoning step should solve. The Q-A pair is the content of solving, in which Q is a clear description of the current sub-problem and A is the solving result for the sub-problem. So the sequence S containing n meta-reasoning and solving steps can be described as:

S = [(M_1, Q_1&A_1), (M_2, Q_2&A_2), (M_3, Q_3&A_3), ..., (M_n, Q_n&A_n)].   (1)

The essentials for the generation of S are (1) containing each key reasoning step in the reasoning path and (2) fully decoupling the contents of M, Q, and A so that they contain and only contain the specified information. Based on these two points, we design a specific instruction and examples covering four reasoning tasks (math, science, code and puzzles) to enable GPT-4o to perform effective ICL. The instruction is in Figure 2 (a) and one example is shown in Figure 2 (b); the full prompt content for ICL can be found in the Appendix. Besides, we introduce a verification-based feedback strategy to improve data quality. For each S generated by GPT-4o given input R, we use Deepseek-V3 to determine whether S meets the requirements of the instruction. If it does not, we give the feedback from Deepseek-V3 to GPT-4o to re-generate S. If a data sample fails to pass the verification after three re-generations, we discard it. We automatically execute this data production process on OpenThoughts-114k Team [2025], an open synthetic reasoning dataset with 114k high-quality examples covering math, science, code, and puzzles, where each example has a Deepseek-R1 generated reasoning path, which serves as R. Finally, we get 93,625 R-S pairs for the subsequent supervised fine-tuning.

Structured Fine-grained Training. This stage aims to train the generative reward model in our GSRM that can generate the sequence of meta-reasoning and solving pairs for the input reasoning path. Since the target output S (Equ.
1) is highly structured text, we propose a training method called Structured Fine-grained Training to optimize each task (meta-reasoning generation and solving generation) in a fine-grained manner and dynamically adjust the optimization weight according to the difficulty of the task.

[Figure 2: One sequence generation example for the math task in the in-context learning prompts for GPT-4o. Panel (a) shows the instruction, which asks the model to decompose the reasoning path into [Meta-reasoning]-[Sub-problem]-[Answer] steps with fine granularity, where [Meta-reasoning] explains the thinking that leads to the next query without revealing that step's answer, each [Sub-problem] contains exactly one question, and [Answer] gives the direct result without reasoning steps; reasoning stops once an [Answer] fully answers the original question. Panel (b) shows one example in math: a cubic-polynomial root problem decomposed into six such steps, ending with the final answer.]

Specifically, we split the tokens in S into two sets: tokens in the meta-reasoning M and tokens in the solving Q&A. We perform 3 training epochs. Let F denote the LLM used in this part (Qwen2.5-7B-Instruct). The first epoch trains meta-reasoning generation; only the loss from tokens in M is computed:

L_1 = Σ_{S[i] ∈ M} −log F(S[i] | S[1:i−1]; θ).

The second epoch trains solving generation; only the loss from tokens in Q&A is computed:

L_2 = Σ_{S[i] ∈ Q&A} −log F(S[i] | S[1:i−1]; θ).

The third epoch is mixed training with dynamic weights. The total loss is the weighted average of the losses on the two token sets:

L_3 = a Σ_{S[i] ∈ M} −log F(S[i] | S[1:i−1]; θ) + b Σ_{S[i] ∈ Q&A} −log F(S[i] | S[1:i−1]; θ).

a and b are initialized to the averages of L_1 and L_2 over training steps, respectively. During training, we calculate the losses of the tokens in M and Q&A separately, and update a and b according to the ratio of the losses in
two sets every α training steps. We set α as 100. An ablation study on this part can be found in the Appendix.

3.1.2 Structured Reward Mechanism

After the above generation, we denote S^t and S^s as the sequences generated from the reasoning paths of the teacher LLM and the student LLM respectively:

S^t = [(M^t_1, Q^t_1&A^t_1), (M^t_2, Q^t_2&A^t_2), ..., (M^t_n, Q^t_n&A^t_n)],
S^s = [(M^s_1, Q^s_1&A^s_1), (M^s_2, Q^s_2&A^s_2), ..., (M^s_m, Q^s_m&A^s_m)].

We propose the Structured Reward Mechanism to map the generated sequences, each consisting of multiple meta-reasoning and solving steps, to a reward value according to the alignment between S^t and S^s. Although this computation is carried out on linear sequences, when combined with RL it becomes capable of quantifying the degree of alignment between the teacher and the student in terms of the implicit multi-branch structure, serving as environmental feedback in RL. This is because it can assess the step-level alignment between the sequences and, through the reward mechanism of RL, finely guide the student LLM to sample correctly from multiple potential sub-problems at each meta-reasoning step. Compared with SFT-based distillation, it makes the student LLM learn the implicit multi-branch structure in the authentic reasoning of the teacher LLM rather than just focus on memorizing the surface reasoning path generated by the teacher LLM. Compared with existing RL methods, it can guide the student LLM on how to sample better at each step and avoid reward hacking Amodei et al. [2016], Di Langosco et al. [2022] through step-to-step comparison and an early-exit mechanism. Specifically, our structured reward mechanism sequentially compares the corresponding steps of S^t and S^s (Algorithm 1). We use Qwen-2.5-7B-Instruct to determine whether two texts match. For the steps where M^t_i matches M^s_i, we assign a temporary reward value of 1. We further deduct from the temporary reward by judging the matching relationship of Q_i and A_i.
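A minimal sketch of this matching-and-penalty scheme is below, using exact string equality as a stand-in for the Qwen-2.5-7B-Instruct match judge; the real system would replace `match` with an LLM-based comparison.

```python
def structured_reward(teacher_steps, student_steps, match=lambda a, b: a == b):
    """Structure-alignment reward between two sequences of
    (meta_reasoning, sub_problem, answer) triples.

    `match` is a stand-in for the LLM judge used in the paper;
    string equality here is only for illustration."""
    total = 0.0
    for (mt, qt, at), (ms, qs, a_s) in zip(teacher_steps, student_steps):
        if not match(mt, ms):
            break          # mismatched meta-reasoning ends accumulation (early exit)
        v = 1.0            # base score for matched meta-reasoning
        if not match(qt, qs):
            v *= 0.5       # 50% penalty for sub-problem mismatch
        if not match(at, a_s):
            v *= 0.5       # additional 50% penalty for answer mismatch
        total += v
    return total
```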
The temporary reward value is then added to the total reward. When a mismatched M_i is encountered, the reward accumulation ends. This reward mechanism follows the sequential dependency of the reasoning path and can give an approximately unique reward value for each step.

Algorithm 1: Structured Reward Mechanism
  Initialize r ← 0;                              // Total reward accumulator
  foreach index i ∈ {1, 2, ..., min(n, m)} do
      Initialize temporary reward v ← 0;         // Step-wise reward score
      if Match(M^t_i, M^s_i) then
          v ← 1;                                 // Base score for matched meta-reasoning
          if Not Match(Q^t_i, Q^s_i) then
              v ← v × 0.5;                       // 50% penalty for question mismatch
          end
          if Not Match(A^t_i, A^s_i) then
              v ← v × 0.5;                       // Additional 50% penalty for answer mismatch
          end
      else
          Break;                                 // Encountering mismatched meta-reasoning, exit
      end
      r ← r + v;                                 // Accumulate step contribution
  end
  return r;                                      // Final reward between sequences

3.2 RL-based Knowledge Distillation Training

We combine our Generative Structure Reward Model (GSRM) with Group Relative Policy Optimization (GRPO) Shao et al. [2024] for RL-based knowledge distillation training. GRPO introduces a group-constrained policy update method to address the exploration-stability trade-off. By dynamically partitioning experiences into K groups {G_1, ..., G_K}, GRPO enables differentiated policy optimization: each group applies an adaptive clipping threshold
ϵ_k ∝ 1/σ_k (where σ_k is the group's advantage standard deviation) to its policy ratio r_k(θ), constraining updates with:

L_GRPO(θ) = Σ_{k=1}^{K} E_{G_k}[ min( r_k Â_k, clip(r_k, 1−ϵ_k, 1+ϵ_k) Â_k ) ] − β · E_s[ D_KL(π_old ∥ π_θ) ],   (2)

where Â_k = A_k/σ_k is the group-normalized advantage. The dual-constraint design combines low-variance intra-group updates (via adaptive clipping) with global policy coherence (via KL divergence). In training, we combine the reward obtained from GSRM and the outcome reward of the specific task, such as accuracy in math, in a weighted manner as the total reward for GRPO. An ablation study about the weights can be found in the Appendix.

4 Experiments

This section conducts a series of experiments to analyze our method.

4.1 Experimental Setup

Datasets and Evaluation Metrics. We use OpenR1-Math[2] as the training dataset for RL, in which the Deepseek-R1 generated responses are used as the teacher LLM's reasoning paths.

[2] https://huggingface.co/datasets/open-r1/OpenR1-Math-220k

We keep the training datasets consistent with baselines including PPO and GRPO. In evaluation, we use popular and challenging datasets on LLM reasoning, including AIME24 MAA and MATH-500 Hendrycks et al. [2021] for math reasoning and GPQA-Diamond Rein et al. [2024] for graduate-level Q&A. As for the metrics, we follow existing studies on LLM reasoning Guo et al. [2025], Shao et al. [2024], Zhang et al. [2025], Deng et al. [2024], Xu et al. [2024b] and use pass@k Chen et al. [2021]. Our validation is divided into two parts: pass@1 and pass@k (k > 1). For pass@1, we generate m responses for each question (m is 64 for AIME, 8 for GPQA and 4 for MATH-500) and compute pass@1 = (1/m) Σ_{i=1}^{m} p_i, in which p_i is the correctness of the i-th response. This alleviates randomness on small datasets. For pass@k (k > 1), the LLM generates k responses for each question, and a data sample is marked as accurate if at least one of the k responses is accurate.
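Both metrics follow directly from per-response correctness labels; a straightforward sketch of the definitions above (not the Lighteval implementation) is:

```python
def pass_at_1(correctness):
    """pass@1 averaged over m sampled responses for one question;
    `correctness` is a list of 0/1 labels, one per response."""
    return sum(correctness) / len(correctness)

def pass_at_k(correctness_per_question):
    """pass@k over a dataset: each entry holds the 0/1 labels of the k
    responses for one question, and a question counts as solved if at
    least one response is correct."""
    solved = sum(1 for labels in correctness_per_question if any(labels))
    return solved / len(correctness_per_question)
```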
Paying attention to the metrics where k > 1 is crucial: in this setting, the LLM has the opportunity to explore multiple diverse paths to answer the question, so these metrics reflect the LLM's ability to sample from multiple implicit paths during reasoning, thereby assessing whether the distilled LLM has learned authentic reasoning or merely memorized the teacher's paths.

Baselines and Comparison Settings. We categorize the baselines into three groups according to the experimental settings. The first setting aims to show the effect of our method on improving the LLM's reasoning ability. We compare our RL-only training method with an LLM trained in the SFT-RL pipeline. We use Qwen2.5-Math-7B-Instruct Yang et al. [2024], a powerful reasoning LLM trained on large-scale chain-of-thought datasets with SFT and then GRPO. Both our method and Qwen2.5-Math-7B-Instruct use the same base model: Qwen2.5-Math-7B, which has been pre-trained on a math corpus. The second setting aims to explore whether our RL-based distillation can further improve the performance of an SFT-distilled LLM and achieve effective optimization of the SFT-RL pipeline in knowledge distillation. We compare our method with RL baselines including PPO and GRPO based on DeepSeek-R1-Distill-Qwen-7B Guo et al. [2025], a powerful LLM that is SFT-distilled from Deepseek-R1. The third setting aims to compare which method is better for the student
LLM to learn authentic reasoning rather than memorizing the teacher's paths. We use embedding similarity[3] Deng et al. [2025] to select the 3.2K data samples with the largest difference from the test set in the OpenR1-Math-220k training set, and train Qwen2.5-Math-7B on this subset with RL-based distillation and SFT-based distillation respectively.

Implementation Details. In training, we build our code on Open-R1[4], an open-source project for LLM reasoning. We use PyTorch 2.5.1 as the training framework and Deepspeed 0.15.4 for acceleration of parallel computing. In RL training, we use one Ascend 910B 64G NPU for online inference and four Ascend 910B 64G NPUs for training under the deepspeed-zero3 setting. As for hyperparameters, we set the per-device batch size to 2, gradient accumulation steps to 4, the group size for GRPO to 4, and the temperature in online inference to 0.7. In evaluation, we use Lighteval[5] as the toolkit and follow the settings of Deepseek Guo et al. [2025], setting the temperature to 0.6, max new tokens to 32768 and top-p to 0.95. We report the results of multiple runs to reduce randomness.

4.2 Experimental Results

Main Results. Results on the reasoning abilities of LLMs are shown in Table 1. In the training based on Qwen2.5-Math-7B, our RL-only method Qwen2.5-Math-7B-RLKD-Zero outperforms the complex SFT + RL pipeline (Qwen2.5-Math-7B-Instruct) while using much less data (nearly 0.1%). In the training based on Deepseek-R1-Distill-Qwen-7B (an SFT-distilled LLM), baseline RL methods including PPO and GRPO can hardly bring significant improvements, while our RLKD is capable of further enhancing performance. This indicates that our RL-based distillation approach enables the SFT-distilled LLM to learn additional information beyond its memorization of the teacher's reasoning paths. The relatively more significant improvements are observed in pass@k (k > 1).
In this setting, the LLM has the opportunity to explore multiple diverse paths to answer the question, which means that, compared with SFT distillation, our method enables the student LLM to learn how to sample from multiple potential paths by distilling the implicit multi-branch structure from the teacher, thereby increasing the probability of providing the correct answer.

Ablation Study. Figure 3 shows three metrics in RL training. We compare our RLKD with GRPO because RLKD is actually GRPO with the reward from our Generative Structure Reward Model (GSRM).

[3] obtained by gte-Qwen2-7B-instruct
[4] https://github.com/huggingface/open-r1
[5] https://github.com/huggingface/lighteval

Table 1: Reasoning abilities on AIME24, MATH-500 and GPQA-Diamond. The results are obtained by generating multiple responses for each query to mitigate randomness, and the best results are in bold font. Qwen2.5-Math-7B-RLKD-Zero is trained by our RLKD without any SFT (RL only).

(a) Performance on AIME24. pass@1 is reported with 1/4/8/16/32/64 responses per question; pass@k uses k = 4/8/16/32/64 responses.

Model                                   | SFT data | RL data | pass@1 (1/4/8/16/32/64 runs)            | pass@4/8/16/32/64
Based on Qwen2.5-Math-7B
Qwen2.5-Math-7B                         | 0        | 0       | 13.3 / 11.7 / 11.7 / 11.3 / 11.0 / 10.3 | 20.0 / 26.7 / 30.0 / 30.0 / 43.3
Qwen2.5-Math-7B-Instruct                | 2,895K   | 66K     | 16.7 / 15.8 / 15.8 / 15.8 / 14.7 / 14.6 | 33.3 / 43.3 / 43.3 / 50.0 / 50.0
Ours: Qwen2.5-Math-7B-RLKD-Zero         | 0        | 3.2K    | 23.3 / 20.0 / 20.4 / 22.1 / 22.5 / 21.0 | 40.0 / 46.7 / 56.7 / 60.0 / 70.0
Based on Deepseek-R1-Distill-Qwen-7B
DeepSeek-R1-Distill-Qwen-7B             | 800K     | 0       | 50.0 / 52.7 / 52.5 / 52.9 / 52.3 / 52.4 | 66.7 / 73.3 / 80.0 / 80.0 / 83.3
DeepSeek-R1-Distill-Qwen-7B-PPO         | 800K     | 3.2K    | 46.7 / 52.1 / 53.0 / 52.7 / 52.9 / 52.9 | 66.7 / 73.3 / 73.3 / 80.0 / 83.3
DeepSeek-R1-Distill-Qwen-7B-GRPO        | 800K     | 3.2K    | 50.0 / 52.5 / 53.3 / 53.3 / 53.3 / 52.3 | 66.7 / 73.3 / 80.0 / 83.3 / 83.3
Ours: DeepSeek-R1-Distill-Qwen-7B-RLKD  | 800K     | 3.2K    | 53.3 / 56.7 / 55.4 / 53.3 / 52.9 / 53.6 | 73.3 / 80.0 / 86.7 / 86.7 / 86.7

(b) Performance on GPQA-Diamond and MATH-500. GPQA pass@1 is reported with 1/4/8 runs and MATH-500 pass@1 with 1/4 runs.

Model                                   | SFT data | RL data | GPQA pass@1 (1/4/8)  | GPQA pass@4 | GPQA pass@8 | MATH-500 pass@1 (1/4) | MATH-500 pass@4
Based on Qwen2.5-Math-7B
Qwen2.5-Math-7B                         | 0        | 0       | 29.3 / 27.8 / 27.5   | 61.1        | 79.3        | 54.8 / 56.2           | 81.0
Qwen2.5-Math-7B-Instruct                | 2,895K   | 66K     | 30.3 / 30.2 / 30.1   | 66.7        | 82.3        | 82.4* / 81.5*         | 89.4*
Ours: Qwen2.5-Math-7B-RLKD-Zero         | 0        | 3.2K    | 34.9 / 34.2 / 32.7   | 69.2        | 86.4        | 74.4 / 73.9           | 87.8
Based on Deepseek-R1-Distill-Qwen-7B
DeepSeek-R1-Distill-Qwen-7B             | 800K     | 0       | 47.9 / 50.8 / 50.2   | 74.7        | 83.8        | 92.4 / 93.2           | 97.4
DeepSeek-R1-Distill-Qwen-7B-PPO         | 800K     | 3.2K    | 47.9 / 49.4 / 50.7   | 74.7        | 84.3        | 93.4 / 94.0           | 97.6
DeepSeek-R1-Distill-Qwen-7B-GRPO        | 800K     | 3.2K    | 50.5 / 50.1 / 49.8   | 75.3        | 84.3        | 93.0 / 93.7           | 97.4
Ours: DeepSeek-R1-Distill-Qwen-7B-RLKD  | 800K     | 3.2K    | 54.5 / 53.0 / 52.8   | 76.8        | 86.9        | 94.2 / 95.1           | 98.2

* means the training set of MATH-500 appeared in the SFT training data of Qwen2.5-Math-7B-Instruct Yang et al. [2024].
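The dissimilar-subset selection used in the third comparison setting (Section 4.1) can be sketched as follows. Scoring each training sample by its maximum cosine similarity to any test sample and keeping the lowest-scoring ones is our assumption about how "largest difference from the test set" is operationalized; in the paper's setup the embeddings would come from gte-Qwen2-7B-instruct.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_dissimilar(train_embs, test_embs, n):
    """Return indices of the n training samples least similar to the
    test set, scoring each by its max similarity to any test embedding
    (an assumed criterion, for illustration)."""
    scored = [(max(cosine(e, t) for t in test_embs), i)
              for i, e in enumerate(train_embs)]
    scored.sort()
    return [i for _, i in scored[:n]]
```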
[Figure 3: The variations of metrics during the RL training process ((c) is not applicable to GRPO). Panels plot (a) Accuracy, (b) Completion Length, and (c) Structure Matching against training steps, comparing RLKD (ours) with GRPO.]

The results indicate that as training progresses, GSRM enables RLKD to better optimize the accuracy of the task (Figure 3 (a)), primarily because the student gradually learns the teacher's implicit multi-branch structure in reasoning (Figure 3 (c)). Please see the Appendix for more ablation studies on hyperparameters such as the reward weights.

4.3 Analysis of RL distillation vs. SFT distillation

Performance Trend Varying with Training Steps. This section compares SFT-based distillation with our RL-based distillation (RLKD) by training Qwen2.5-Math-7B on a dataset that has a domain shift relative to AIME24 and is out of the domain of GPQA. This setting, where there is a domain discrepancy between the training and test sets for distillation, allows us to intuitively discern whether SFT-based distillation merely mimics and memorizes the teacher's paths, and whether our RL-based distillation enables the student to learn authentic reasoning. The experimental results are shown in Figure 4.
It is noteworthy that, as the training progresses, SFT and RLKD demonstrate completely opposite performance trends: RLKD can consistently enhance performance, even when

[Figure 4: Comparison between SFT-based distillation and our RL-based distillation (RLKD) in the domain-shift and out-of-domain settings. Panels: (a) AIME24 (Domain Shift), (b) GPQA-Diamond (Out-of-Domain), each plotting pass@4/8/16/64 performance against training steps for RLKD and SFT. SFT and RLKD see the same data (32 samples) at each step.]

[Figure, caption not included in this excerpt: four panels, (a) Training Step 20, (b) Training Step 40, (c) Training Step 60, (d) Training Step 80, plotting Diversity Score against Reasoning Step Index for Deepseek-R1, DeepSeek-R1-Distill-Qwen-7B, and DeepSeek-R1-Distill-Qwen-7B w/ RLKD.]
/uni00000035/uni00000048/uni00000044/uni00000056/uni00000052/uni00000051/uni0000004c/uni00000051/uni0000004a/uni00000003/uni00000036/uni00000057/uni00000048/uni00000053/uni00000003/uni0000002c/uni00000051/uni00000047/uni00000048/uni0000005b/uni00000014/uni00000011/uni00000013/uni00000014/uni00000011/uni00000018/uni00000015/uni00000011/uni00000013/uni00000015/uni00000011/uni00000018/uni00000016/uni00000011/uni00000013/uni00000027/uni0000004c/uni00000059/uni00000048/uni00000055/uni00000056/uni0000004c/uni00000057/uni0000005c/uni00000003/uni00000036/uni00000046/uni00000052/uni00000055/uni00000048 /uni00000027/uni00000048/uni00000048/uni00000053/uni00000056/uni00000048/uni00000048/uni0000004e/uni00000010/uni00000035/uni00000014 /uni00000027/uni00000048/uni00000048/uni00000053/uni00000036/uni00000048/uni00000048/uni0000004e/uni00000010/uni00000035/uni00000014/uni00000010/uni00000027/uni0000004c/uni00000056/uni00000057/uni0000004c/uni0000004f/uni0000004f/uni00000010/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000010/uni0000001a/uni00000025 /uni00000027/uni00000048/uni00000048/uni00000053/uni00000036/uni00000048/uni00000048/uni0000004e/uni00000010/uni00000035/uni00000014/uni00000010/uni00000027/uni0000004c/uni00000056/uni00000057/uni0000004c/uni0000004f/uni0000004f/uni00000010/uni00000034/uni0000005a/uni00000048/uni00000051/uni00000010/uni0000001a/uni00000025/uni00000003/uni0000005a/uni00000012/uni00000003/uni00000035/uni0000002f/uni0000002e/uni00000027 (e) Training Step 100 Figure 5: Diversity among different reasoning paths at each meta-reasoning step varying with training. distilling on dataset that significantly diverges from the testing set, whereas SFT progressively under- mines performance. It indicates that SFT distillation is easy to fall into the trap of simply imitating and memorizing the teacher’s reasoning paths, rather than learning authentic reasoning that can be stably generalized. 
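Concretely, the per-step diversity plotted in Figure 5 can be read as the reciprocal of the mean cosine similarity between each sampled response's step embedding and their mean embedding, so identical responses score 1.0 and more varied responses score higher. A minimal NumPy sketch (the function name is ours, and the embedding encoder is abstracted away; this is our reading of the diversity formula in this section, not the authors' released code):

```python
import numpy as np

def diversity_score(embeddings: np.ndarray) -> float:
    """Step-level diversity across sampled responses.

    embeddings: (n, d) array, one text embedding per sampled response
    (the paper samples 16 responses per question). Returns the
    reciprocal of the mean cosine similarity between each embedding
    e_i and the mean embedding u.
    """
    u = embeddings.mean(axis=0)
    cos = embeddings @ u / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(u))
    return float(1.0 / cos.mean())

# 16 identical "responses" collapse to the minimum diversity of 1.0
same = np.ones((16, 8))
print(round(diversity_score(same), 3))  # -> 1.0
```

Responses that spread out around the mean embedding lower the average cosine similarity and push the score above 1, matching the axis range shown in the figure.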
Our RLKD makes the student learn to perform better sampling among the potential branches at each step, which is the implicit multi-branch structure of the teacher's authentic reasoning.

Diversity of Reasoning Paths. The diversity of reasoning paths reflects whether the student LLM has truly learned to reason or is merely memorizing the teacher's fixed paths in the training set. Figure 5 shows that our method brings the diversity patterns of the student LLM closer to those of the teacher LLM in multi-step reasoning, suggesting that it allows the student LLM to learn authentic reasoning by distilling the implicit multi-branch structure. Specifically, we randomly sample 500 samples from OpenThoughts-114K. On this sampled set, we set the temperature to 0.8 and let the LLM generate 16 responses for each question. We use the GSRM in Section 3.1.1 to generate, for each response, a corresponding sequence of multiple meta-reasoning-solving steps, and calculate the diversity of the meta-reasoning-solving content within the 16 responses of each question at the step level. The diversity score D is calculated as:

$$u = \frac{1}{16}\sum_{i=1}^{16} e_i, \qquad D = \frac{1}{\frac{1}{16}\sum_{i=1}^{16}\frac{e_i \cdot u}{\lVert e_i\rVert\,\lVert u\rVert}},$$

in which e_i is the text embedding encoded by gte-Qwen2-7B-instruct^6 for each meta-reasoning-solving content. As shown in Figure 5, the teacher LLM (DeepSeek-R1) has a significantly different diversity pattern from the SFT-distilled student LLM (DeepSeek-R1-Distill-Qwen-7B): reasoning in the teacher has higher diversity, while the student is stuck in relatively fixed paths. As the training of our method progresses (from step 20 to 100), the diversity of the student's reasoning paths begins to increase and gradually approaches that of the teacher, which indicates that our method allows the student to successfully learn the teacher's authentic reasoning paradigm.

5 Conclusion and Discussion

Conclusion.
This work addresses a critical flaw in knowledge distillation for LLM reasoning: the failure of SFT to transfer the implicit multi-branch structure underlying authentic reasoning.

^6 https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct

Drawing on cognitive-neuroscience principles, we show that authentic reasoning involves dynamic meta-reasoning (sub-problem selection) and solving steps, an implicit multi-branch structure that token-level SFT training flattens. Our RLKD, the first RL-based distillation framework for reasoning, overcomes this with a Generative Structure Reward Model (GSRM), which decomposes reasoning paths into meta-reasoning-solving pairs and scores the structural alignment between teacher and student. Experiments across math and graduate-level QA tasks demonstrate RLKD's superiority over SFT-based distillation, the SFT-RL pipeline, and RL baselines including PPO and GRPO, proving its ability to distill how teachers navigate latent reasoning branches rather than mimicking surface tokens. Further analysis confirms
RLKD-trained students mirror teachers' multi-branch exploration patterns, closing the gap between imitation and authentic reasoning.

Limitations and Potential Impact. A key limitation of our work stems from computational constraints: while RLKD demonstrates effectiveness on 7B-scale models, its viability on larger LLMs (e.g., 70B+ parameters) remains unverified. The framework's emphasis on structural distillation opens avenues for developing lightweight, generalizable LLMs, potentially reducing reliance on brute-force scaling, a timely direction given growing concerns about AI's environmental and economic costs.

References

Qianjun Pan, Wenkai Ji, Yuyang Ding, Junsong Li, Shilian Chen, Junyi Wang, Jie Zhou, Qin Chen, Min Zhang, Yulan Wu, et al. A survey of slow thinking-based reasoning llms using reinforced learning and inference-time scaling law. arXiv preprint arXiv:2505.02665, 2025.

Wang Jiaqi, Li Xinliang, Liu Zhengliang, Wu Zihao, Zhong Tianyang, Shu Peng, Li Yiwei, Jiang Hanqi, Zhou Yifan, Chen Junhao, et al. Llm reasoning: from openai o1 to deepseek r1. 2025.

Shicheng Xu, Liang Pang, Huawei Shen, and Xueqi Cheng. A theory for token-level harmonization in retrieval-augmented generation. In The Thirteenth International Conference on Learning Representations, 2025a. URL https://openreview.net/forum?id=tbx3u2oZAu.

Alex Havrilla, Yuqing Du, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, Sainbayar Sukhbaatar, and Roberta Raileanu. Teaching large language models to reason with reinforcement learning. arXiv preprint arXiv:2403.04642, 2024.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al.
Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Fengli Xu, Qianyue Hao, Zefang Zong, Jingwei Wang, Yunke Zhang, Jingyi Wang, Xiaochong Lan, Jiahui Gong, Tianjian Ouyang, Fanjin Meng, et al. Towards large reasoning models: A survey of reinforced reasoning with large language models. arXiv preprint arXiv:2501.09686, 2025b.

Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. Sft memorizes, rl generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161, 2025.

Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in llms beyond the base model? arXiv preprint arXiv:2504.13837, 2025.

Chong Zhang, Yue Deng, Xiang Lin, Bin Wang, Dianwen Ng, Hai Ye, Xingxuan Li, Yao Xiao, Zhanfeng Mo, Qi Zhang, et al. 100 days after deepseek-r1: A survey on replication studies and more directions for reasoning language models. arXiv preprint arXiv:2505.00551, 2025.

Hugging Face. Open R1: A fully open reproduction of DeepSeek-R1, January 2025. URL https://github.com/huggingface/open-r1.

Liang Wen, Yunke Cai, Fenrui Xiao, Xin He, Qi An, Zhenyu Duan, Yimin Du, Junchen Liu, Lifu Tang, Xiaowei Lv, et al. Light-r1: Curriculum sft, dpo and rl for long cot from scratch and beyond. arXiv preprint arXiv:2503.10460, 2025.

Hardy Chen, Haoqin Tu, Fali Wang, Hui Liu, Xianfeng Tang, Xinya Du, Yuyin Zhou, and Cihang Xie. Sft or rl?
an early investigation into training r1-like reasoning large vision-language models. arXiv preprint arXiv:2504.11468, 2025.

Chengwei Dai, Kun Li, Wei Zhou, and Songlin Hu. Beyond imitation: Learning key reasoning steps from dual chain-of-thoughts in reasoning distillation. arXiv preprint arXiv:2405.19737, 2024.

Mark Stefik. Planning and meta-planning (molgen: Part 2). Artificial Intelligence, 16(2):141–169, 1981.

Michael Cox and Anita Raja. Metareasoning: A manifesto. BBN Technical, 2007.

Stuart Russell and Eric Wefald. Principles of metareasoning. Artificial Intelligence, 49(1-3):361–395, 1991.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

Dakota Mahan, Duy Van Phung, Rafael Rafailov, Chase Blagden, Nathan Lile, Louis Castricato, Jan-Philipp Fränken, Chelsea Finn, and Alon Albalak. Generative reward models. arXiv preprint arXiv:2410.12832, 2024.

Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022.

Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822, 2023.

Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al.
Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17682–17690, 2024.

Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng, and Tat-Seng Chua. Search-in-the-chain: Interactively enhancing large language models with search for knowledge-intensive tasks. In Proceedings of the ACM Web Conference 2024, pages 1362–1373, 2024a.

Ling Yang, Zhaochen Yu, Tianjun Zhang, Minkai Xu, Joseph E. Gonzalez, Bin Cui, and Shuicheng Yan. Supercorrect: Advancing small LLM reasoning with thought template distillation and self-correction. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=PyjZO7oSw2.

Dacheng Li, Shiyi Cao, Tyler Griggs, Shu Liu, Xiangxi Mo, Eric Tang, Sumanth Hegde, Kourosh Hakhamaneshi, Shishir G Patil, Matei Zaharia, et al. Llms can easily learn to reason from demonstrations: Structure, not content, is what matters! arXiv preprint arXiv:2502.07374, 2025.

Bespoke-Labs. Bespoke-Stratos: The unreasonable effectiveness of reasoning distillation, January 2025. https://huggingface.co/bespokelabs/Bespoke-Stratos-7B.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv
preprint arXiv:2503.14476, 2025.

OpenThoughts Team. Open Thoughts. https://open-thoughts.ai, January 2025.

Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016.

Lauro Langosco Di Langosco, Jack Koch, Lee D Sharkey, Jacob Pfau, and David Krueger. Goal misgeneralization in deep reinforcement learning. In International Conference on Machine Learning, pages 12004–12019. PMLR, 2022.

MAA. American Invitational Mathematics Examination - AIME, February 2024. In American Invitational Mathematics Examination - AIME 2024.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.

David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024.

Jingcheng Deng, Zihao Wei, Liang Pang, Hanxing Ding, Huawei Shen, and Xueqi Cheng. Everything is editable: Extend knowledge editing to unstructured data in large language models. arXiv preprint arXiv:2405.15349, 2024.

Shicheng Xu, Liang Pang, Mo Yu, Fandong Meng, Huawei Shen, Xueqi Cheng, and Jie Zhou. Unsupervised information refinement training of large language models for retrieval-augmented generation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 133–145, 2024b.

Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024.

Jingcheng Deng, Zhongtao Jiang, Liang Pang, Liwei Chen, Kun Xu, Zihao Wei, Huawei Shen, and Xueqi Cheng. Following the autoregressive nature of llm embeddings via compression and alignment. arXiv preprint arXiv:2502.11401, 2025.

A Ablation study of Structured Fine-grained Training

Figure 6 shows the ablation study of our Structured Fine-grained Training method, plotting accuracy and structure matching against training steps. It indicates that the generative structure reward model trained with our structured fine-grained training method can effectively optimize both the task accuracy (Figure 6a) and the reasoning-structure matching between the teacher and the student (Figure 6b), both of which grow progressively with training steps.

B Ablation study for Reward Weights

In this section, we conduct an ablation study on the reward weights in RL training. The essential rewards in our method are the accuracy reward (Racc) for the specific task, such as the rule-based accuracy reward in math, and our structure matching reward (Rgsrm) obtained from the generative structure reward model, so we primarily discuss the weight relationship between these two rewards.
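The rewards discussed in this appendix are combined into a single scalar for RL training. A minimal sketch of such a weighted combination (the function name and the normalization to a unit sum are our assumptions; the 3 : 3 : 2 : 2 ratio is the setting reported in this section):

```python
def combined_reward(r_acc, r_gsrm, r_format, r_tag,
                    weights=(3.0, 3.0, 2.0, 2.0)):
    # Weighted sum of the four reward components. The default weights
    # follow the paper's ratio Racc : Rgsrm : Rformat : Rtag = 3 : 3 : 2 : 2;
    # normalizing them to sum to 1 is an assumption of this sketch.
    total_w = sum(weights)
    w = [x / total_w for x in weights]
    return w[0] * r_acc + w[1] * r_gsrm + w[2] * r_format + w[3] * r_tag

# e.g. a correct answer with partial structure match and valid format/tags:
print(round(combined_reward(1.0, 0.5, 1.0, 1.0), 2))  # -> 0.85
```

Changing the ratio Rgsrm : Racc while holding the rest fixed is exactly the sweep performed in the ablation below.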
Specifically, we adjust the ratio of Rgsrm to Racc (Rgsrm/Racc) from 0.2 to 2 in increments of 0.2, and observe the variations in pass@1 (n=16) and pass@16 (n=16) on AIME24. The experimental results are shown in Figure 7. We can see that when the two weights are equal (Rgsrm/Racc = 1), the LLM achieves a harmonized optimal effect in both the correctness of reasoning (pass@1) and the capability of sampling (pass@16). So in our experiments, we set the reward weights as Racc : Rgsrm : Rformat : Rtag = 3 : 3 : 2 : 2, in which Rformat is the format reward and Rtag is the tag count reward that assesses whether the thought content adheres to the specified format (Guo et al. [2025]).

[Figure 6: two panels, (a) Accuracy and (b) Structure Matching, each plotted against training steps, comparing w/ and w/o Structured Fine-grained Training.]

Figure 6: Ablation study of our Structured Fine-grained Training method.

[Figure 7: two panels, (a) pass@1 (n=16) and (b) pass@16 (n=16) on AIME24, each plotted against the ratio Rgsrm:Racc.]

Figure 7: Ablation study of reward weights. Performance of pass@1 (n=16) and pass@16 (n=16) on AIME24 varying with Rgsrm/Racc.

C Prompt in Data Construction

Figures 8 to 11 show the full prompt used in data construction for training the generative structure reward model.

Decompose the problem into multiple steps according to the thinking process for reasoning. Format the thinking content in the form of [Meta-reasoning]-[Sub-problem]-[Answer] for each step. When the [Answer] of a certain step gets the answer to the original question, stop reasoning, and the last answer should include a complete answer to the original question. [Meta-reasoning] represents the thinking process.
It is necessary to analyze the previous and next steps in as much detail as possible, explain in detail why this step is done to lead to the following query, and do not mention the answer of this step. [Sub-problem] represents the problem that needs to be solved in the current step; analyze it in as much detail as possible. Each query contains only one question. [Answer] represents the answer to the [Sub-problem]. Give the answer to the [Sub-problem] directly and briefly, without any reasoning steps or redundant statements. You should split it into more [Meta-reasoning]-[Sub-problem]-[Answer] steps with finer granularity, but do not repeat the conditions given in the question. Here are some examples for the {} task:

[Question]: Given real numbers a, b, c and a positive number λ such that the polynomial f(x) = x³ + ax² + bx + c has three real roots x₁, x₂, x₃, and the conditions x₂ − x₁ = λ and x₃ > (1/2)(x₁ + x₂) are satisfied, find the maximum value of (2a³ + 27c − 9ab)/λ³.

[Reasoning Path]: Okay, so I... (too long to show here)

[Response]:
[Meta-reasoning 1]: To find the maximum value of (2a³ + 27c − 9ab)/λ³, use Vieta's formulas to express the coefficients a, b, c in terms of the roots x₁, x₂, x₃. Parametrize the roots to satisfy x₂ − x₁ = λ and x₃ > (1/2)(x₁ + x₂). Define x₁ = t − λ/2, x₂ = t + λ/2, and x₃ = t + s where s > 0. This parametrization simplifies the problem by eliminating λ through symmetry.
[Sub-problem 1]: Express a, b, c in terms of t, s, and λ using the parametrized roots.
[Answer 1]: a = −3t − s, b = 3t² + 2ts − λ²/4, c = −t³ − t²s + (λ²/4)(t + s)
[Meta-reasoning 2]: Substitute a, b, c into the expression
(2a³ + 27c − 9ab)/λ³. Simplify the expression by combining like terms. The result will depend only on s and λ, allowing further simplification by introducing k = s/λ. This reduces the problem to maximizing a cubic function in k.
[Sub-problem 2]: Simplify (2a³ + 27c − 9ab)/λ³ after substitution. What is the simplified form in terms of k?
[Answer 2]: −2k³ + (9/2)k.
[Meta-reasoning 3]: To maximize Q(k) = −2k³ + (9/2)k, find its critical points by taking the derivative and solving for k. Verify that the critical point corresponds to a maximum using the second derivative test.
[Sub-problem 3]: Find the critical point of Q(k) = −2k³ + (9/2)k.
[Answer 3]: k = √3/2.
[Meta-reasoning 4]: Substitute k = √3/2 back into Q(k) to compute the maximum value. Confirm the calculation to ensure no arithmetic errors.
[Sub-problem 4]: Calculate Q(√3/2).
[Answer 4]: 3√3/2.
[Meta-reasoning 5]: Verify that the critical point k = √3/2 satisfies all constraints (e.g., s > 0) and that the polynomial discriminant condition for three real roots is implicitly satisfied by the problem's given conditions. Confirm no additional constraints affect the result.
[Sub-problem 5]: Does k = √3/2 satisfy s > 0?
[Answer 5]: Yes.
[Meta-reasoning 6]: Finalize the maximum value of the original expression based on the derived result.
[Sub-problem 6]: What is the maximum value of (2a³ + 27c − 9ab)/λ³?
[Answer 6]: 3√3/2.
So the final answer is: 3√3/2.

(a) Instruction (b) Example 1 in math

Figure 8: Full prompt in data construction for training the generative structure reward model: Instruction and Example 1.

[Question]: Generate an executable Python function generated from the given prompt. The function should take stdin as input and print the output. Simply call the function after the definition. Vanya got an important task: he should enumerate books in the library and label each book with its number. Each of the n books should be assigned a number from 1 to n.
Naturally, distinct books should be assigned distinct numbers. Vanya wants to know how many digits he will have to write down as he labels the books. Input: the first line contains an integer n (1 ≤ n ≤ 10^9), the number of books in the library. Output: print the number of digits needed to number all the books. Examples: Input 13 gives Output 17; Input 4 gives Output 4. Note to the first test: the books get numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, which totals 17 digits. Note to the second sample: the books get numbers 1, 2, 3, 4, which totals 4 digits.

[Reasoning Path]: Okay, so I... (too long to show here)

[Response]:
[Meta-reasoning 1]: The problem requires calculating the total number of digits written when labeling books from 1 to n. The approach involves iterating through each digit length (1-digit, 2-digit, etc.) and summing the contributions of each range. For example, numbers with d digits start at 10^(d−1) and end at 10^d − 1. If n is within this range, calculate the count of numbers in the partial
range.
[Sub-problem 1]: How to determine the contribution of numbers with a specific digit length d to the total digit count?
[Answer 1]: For each d, calculate the start and end of the range. If n >= end, add (end − start + 1) * d. If n < end, add (n − start + 1) * d.
[Meta-reasoning 2]: Implementing this requires iterating through each possible digit length until the start of the range exceeds n. The loop increments d and calculates the start and end for each iteration. The total accumulates the contributions from each valid range.
[Sub-problem 2]: How to structure the loop to iterate through digit lengths and accumulate the total digits?
[Answer 2]: Initialize total = 0, d = 1. Loop while start <= n, update total based on the current d's range, increment d.
[Meta-reasoning 3]: The code must read n from standard input, compute the total digits, and print the result. Handling edge cases like n = 1 or n = 1e9 correctly is ensured by checking the range boundaries.
[Sub-problem 3]: What is the Python function structure to read input, compute the total digits, and print the output?
[Answer 3]: Read n with int(input()), loop through d starting at 1, calculate start and end, update total, then print(total).
[Meta-reasoning 4]: Finally, the function must be called after its definition to execute when the script runs. The loop terminates when start exceeds n, ensuring all relevant digit lengths are covered.
[Sub-problem 4]: What is the complete Python function implementing the described logic?
[Answer 4]:

def count_digits():
    n = int(input())
    total = 0
    d = 1
    while True:
        start = 10 ** (d - 1)
        if start > n:
            break
        end = 10 ** d - 1
        count = end - start + 1 if end <= n else n - start + 1
        total += count * d
        d += 1
    print(total)

count_digits()

So the final answer is:

def count_digits():
    n = int(input())
    total = 0
    d = 1
    while True:
        start = 10 ** (d - 1)
        if start > n:
            break
        end = 10 ** d - 1
        count = end - start + 1 if end <= n else n - start + 1
        total += count * d
        d += 1
    print(total)

count_digits()

[Question]: The duration of a process used to manufacture components is known to be normally distributed with a mean of 30 minutes and a standard deviation of 4 minutes. What is the probability of a time greater than 33 minutes being recorded?

[Reasoning Path]: Okay, so I... (too long to show here)

[Response]:

[Meta-reasoning 1]: To determine the probability that the process time exceeds 33 minutes, we first need to standardize the value using the z-score formula. This converts the normal distribution parameters (mean = 30, standard deviation = 4) into the standard normal distribution (mean = 0, standard deviation = 1), allowing us to use standard tables or tools for probability lookup.

[Sub-problem 1]: What is the z-score for a time of 33 minutes in this normal distribution?
[Answer 1]: 0.75

[Meta-reasoning 2]: With the z-score calculated, the next step is to find the area under the standard normal curve to
the right of this z-score, which corresponds to the probability of the process time being greater than 33 minutes. Since standard tables typically provide cumulative probabilities (area to the left), subtracting the cumulative probability from 1 gives the desired result.

[Sub-problem 2]: What is the cumulative probability (area to the left) for a z-score of 0.75 in the standard normal distribution?
[Answer 2]: 0.7734

[Meta-reasoning 3]: Finally, subtract the cumulative probability from 1 to obtain the probability of the time exceeding 33 minutes. This completes the calculation.

[Sub-problem 3]: What is 1 minus the cumulative probability of 0.7734?
[Answer 3]: 0.2266

So the final answer is: 0.2266.

(c) Example 2 in math    (d) Example 3 in code

Figure 9: Full prompt in data construction of training generative structure reward model: Example 2 and 3

[Question]: Generate an executable Python function generated from the given prompt. The function should take stdin as input and print the output. Simply call the function after the definition.

You are given n days and for each day (d_i) you can select one of the following options:
- perform no task
- perform a high effort task (h_i) only if it's the first day or if you chose no-task on the previous day
- perform a low effort task (l_i)

Write a program to find the maximum amount of tasks you can perform within these n days.

Example 1:
Input:
n = 3
hi[] = {2,8,1}
li[] = {1,2,1}
Output: 9
Explanation:
Options on 1st day: hi[0]=2, li[0]=1 or no-task. Select no-task.
Options on 2nd day: hi[1]=8, li[1]=1 or no-task. Select high-effort task as no-task was selected the previous day.
Options on 3rd day: hi[2]=1, li[2]=1 or no-task. Select low-effort task as selecting a high-effort task is only possible if you chose no-task on the previous day.

Example 2:
Input:
n = 5
hi[] = {3,6,8,7,6}
li[] = {1,5,4,5,3}
Output: 20
Explanation: Perform the high-effort task on the first day and the low-effort task on all remaining days.

Your Task:
You don't need to read input or print anything. Your task is to complete the function maxAmt() which accepts an integer n and two arrays li[] and hi[] as input parameters and returns the maximum amount of tasks you can perform within these n days.
Expected Time Complexity: O(n)
Expected Auxiliary Space: O(n)
Constraints:
1 <= n <= 10^6

#User function Template for python3

class Solution:
    def maxAmt(self, n, hi, li):
        # code here

[Reasoning Path]: Okay, so I... (too long to show here)

[Response]:

[Meta-reasoning 1]: To maximize the total task amount with constraints on high-effort tasks, dynamic programming is suitable. We need to track the maximum possible sum for each state (no-task, high, low) at each day, considering transitions based on previous choices.

[Sub-problem 1]: What are the possible states for each day and their transitions?
[Answer 1]: States: no-task (0), high (1), low (2). Transitions: high can follow only no-task; others depend
on the max of the previous states.

[Meta-reasoning 2]: For each day, the current state's value depends on the previous day's states. For example, choosing high on day i requires the previous day to be no-task. We need to define recurrence relations for dynamic programming.

[Sub-problem 2]: What are the recurrence relations for each state?
[Answer 2]: dp[i][0] = max(prev0, prev1, prev2); dp[i][1] = prev0 + hi[i]; dp[i][2] = max_prev + li[i].

[Meta-reasoning 3]: To optimize space, track only the previous day's states instead of a full DP array. This reduces space complexity to O(1) while maintaining O(n) time.

[Sub-problem 3]: How to implement the transitions using variables instead of a DP array?
[Answer 3]: Use variables prev0, prev1, prev2 and update them iteratively.

[Meta-reasoning 4]: Initialize the states for day 0, then iterate from day 1 to n-1. After processing all days, the answer is the maximum of the final states.

[Sub-problem 4]: What is the Python function implementing this logic?
[Answer 4]: The function uses a loop to update curr0, curr1, curr2 based on previous states and returns their maximum.

So the final answer is:

```python
class Solution:
    def maxAmt(self, n, hi, li):
        if n == 0:
            return 0
        prev0, prev1, prev2 = 0, hi[0], li[0]
        for i in range(1, n):
            curr0 = max(prev0, prev1, prev2)
            curr1 = prev0 + hi[i]
            curr2 = curr0 + li[i]
            prev0, prev1, prev2 = curr0, curr1, curr2
        return max(prev0, prev1, prev2)
```

(e) Example 4 in code

[Question]: What is the concentration (in mol/L) of a hydrochloric acid (HCl) solution if 25 mL of this solution requires 35 mL of a 0.1 M sodium hydroxide (NaOH) solution to reach the equivalence point in a titration reaction?

[Reasoning Path]: Okay, so I... (too long to show here)

[Response]:

[Meta-reasoning 1]: To determine the concentration of HCl, the first step is to write the balanced chemical equation for the reaction between HCl and NaOH.
This establishes the stoichiometric relationship between the reactants, which is necessary for subsequent calculations.

[Sub-problem 1]: What is the balanced chemical equation for the reaction of HCl with NaOH, and what is the mole ratio between them?
[Answer 1]: HCl + NaOH → NaCl + H₂O (1:1 mole ratio)

[Meta-reasoning 2]: Since the mole ratio is 1:1, the moles of HCl will equal the moles of NaOH at equivalence. The moles of NaOH can be calculated using its concentration and volume, which will directly give the moles of HCl.

[Sub-problem 2]: How many moles of NaOH are present in 35 mL of a 0.1 M solution?
[Answer 2]: 0.0035 mol

[Meta-reasoning 3]: With the moles of HCl determined (equal to the moles of NaOH), the concentration of the HCl solution is calculated by dividing the moles of HCl by its volume in liters. This requires converting the given HCl volume from mL to L.

[Sub-problem 3]: What is the volume of the HCl solution in liters?
[Answer 3]: 0.025 L

[Meta-reasoning 4]: Finally, divide the moles of HCl by its volume in liters to obtain the concentration. This completes the calculation.

[Sub-problem 4]: What is the concentration (mol/L) of HCl when 0.0035 mol is dissolved in 0.025 L?
[Answer 4]: 0.14 mol/L

So the final answer is: 0.14 mol/L

(f) Example 5 in science

Figure 10: Full prompt in data construction of training generative structure reward model: Example 4 and 5

[Question]: Where might jellyfish stay warm?
A: atlantic ocean
B: bikini bottom
C: japanese restaurant
D: store
E: pacific ocean

[Reasoning Path]: Okay, so I... (too long to show here)

[Response]:

[Meta-reasoning 1]: To determine where jellyfish might stay warm, first identify their natural habitats. Jellyfish are marine creatures, so options related to oceans or aquatic environments are primary candidates. This narrows the focus to real-world oceans (A, E) or fictional underwater locations (B).

[Sub-problem 1]: Are jellyfish typically found in real oceans or fictional locations like Bikini Bottom?
[Answer 1]: Real oceans

[Meta-reasoning 2]: Since jellyfish inhabit real oceans, eliminate fictional or non-marine options. The remaining choices are Atlantic Ocean (A) and Pacific Ocean (E). However, the question specifies "stay warm," implying a need to consider temperature variations within these oceans.

[Sub-problem 2]: Do both the Atlantic and Pacific Oceans have warm regions suitable for jellyfish?
[Answer 2]: Yes

[Meta-reasoning 3]: Since both A and E are valid, re-examine the options for potential context clues. The inclusion of "Bikini Bottom" (a fictional warm underwater setting from SpongeBob SquarePants) suggests the question might reference pop culture rather than biology.

[Sub-problem 3]: Is "Bikini Bottom" a known fictional habitat for jellyfish in media?
[Answer 3]: Yes

[Meta-reasoning 4]: Given the mix of real and fictional options, infer the question's intent: if it prioritizes humor or pop culture, Bikini Bottom (B) is likely the answer. If strictly biological, A/E would apply. However, the presence of B suggests a non-literal context.

[Sub-problem 4]: Which option aligns with a playful or media-reference interpretation of "stay warm"?
[Answer 4]: B: Bikini Bottom

So the final answer is: B: Bikini Bottom

[Question]: A circular loop of radius 0.2 meters carries a current of 5 amperes. What is the magnetic field at a point located 0.1 meters from the center of the loop?

[Reasoning Path]: Okay, so I... (too long to show here)

[Response]:

[Meta-reasoning 1]: To determine the magnetic field at a point 0.1 meters from the center of a circular current-carrying loop, we first identify the relevant formula. Since the point's location is not explicitly stated as off-axis, we assume it lies along the loop's axis. The formula for the magnetic field on the axis of a circular loop is required here.

[Sub-problem 1]: What is the formula for the magnetic field at a point on the axis of a circular current-carrying loop?
[Answer 1]: B = (μ₀ * I * R²) / [2 * (R² + z²)^(3/2)]

[Meta-reasoning
2]: Next, substitute the given values into the formula: current (I = 5 A), loop radius (R = 0.2 m), axial distance (z = 0.1 m), and permeability of free space (μ₀ = 4π × 10⁻⁷ T·m/A). Compute each component stepwise to avoid errors.

[Sub-problem 2]: What is the value of R² + z² for R = 0.2 m and z = 0.1 m?
[Answer 2]: 0.05 m²

[Meta-reasoning 3]: Calculate the term (R² + z²) raised to the power of 3/2, which appears in the denominator of the formula. This involves first taking the square root and then cubing the result.

[Sub-problem 3]: What is (0.05 m²) raised to the 3/2 power?
[Answer 3]: 0.01118 m³

[Meta-reasoning 4]: Now compute the numerator of the formula: μ₀ * I * R². Use μ₀ = 4π × 10⁻⁷ T·m/A, I = 5 A, and R² = 0.04 m².

[Sub-problem 4]: What is the value of μ₀ * I * R²?
[Answer 4]: 2.5133 × 10⁻⁷ T·m³

[Meta-reasoning 5]: Finally, divide the numerator by the denominator (2 * 0.01118 m³) to obtain the magnetic field strength at the specified point.

[Sub-problem 5]: What is 2.5133 × 10⁻⁷ T·m³ divided by 0.02236 m³?
[Answer 5]: 1.1237 × 10⁻⁵ T

So the final answer is: 1.1237 × 10⁻⁵ T

(g) Example 6 in science    (h) Example 7 in puzzle

[Question]: As you live life you get wiser, it is a part of what?
A: growing older
B: death
C: experience things
D: toothsome
E: breathe

[Reasoning Path]: Okay, so I... (too long to show here)

[Response]:

[Meta-reasoning 1]: The question asks to identify what gaining wisdom through living life is a part of. First, analyze the options to eliminate unrelated terms (e.g., "toothsome" and "breathe" are irrelevant to wisdom). Then, focus on comparing plausible options like "growing older" and "experience things."

[Sub-problem 1]: Which options directly relate to the process of accumulating wisdom as stated in the question?
[Answer 1]: A: growing older; C: experience things

[Meta-reasoning 2]: To differentiate between "growing older" and "experience things," evaluate whether wisdom arises purely from aging or from actively engaging with life. Wisdom typically stems from learning through actions and events rather than the mere passage of time.

[Sub-problem 2]: Is gaining wisdom more closely tied to aging or to encountering and reflecting on life events?
[Answer 2]: C: experience things

[Meta-reasoning 3]: Confirm that "experience things" logically encompasses the idea of learning and gaining wisdom through lived events, making it the most fitting answer.

[Sub-problem 3]: Which option explicitly connects wisdom to lived actions rather than passive aging?
[Answer 3]: C: experience things

So the final answer is: C: experience things

(h) Example 8 in puzzle

Figure 11: Full prompt in data construction of training generative structure reward model: Example 6, 7 and 8
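The numeric and programmatic answers in the worked examples above are easy to spot-check. The following standalone script is a verification sketch added here for convenience (the helper names are ours, not part of the original figure prompts):

```python
import math

# Book-numbering example: total digits needed to label books 1..n.
def total_digits(n):
    total, d = 0, 1
    while 10 ** (d - 1) <= n:
        start, end = 10 ** (d - 1), 10 ** d - 1
        total += (min(end, n) - start + 1) * d
        d += 1
    return total

# Task-scheduling example: DP over the no-task / high / low states.
def max_amt(n, hi, li):
    prev0, prev1, prev2 = 0, hi[0], li[0]
    for i in range(1, n):
        curr0 = max(prev0, prev1, prev2)   # no-task: best of any previous state
        curr1 = prev0 + hi[i]              # high: only after a no-task day
        curr2 = curr0 + li[i]              # low: after any state
        prev0, prev1, prev2 = curr0, curr1, curr2
    return max(prev0, prev1, prev2)

# Normal-distribution example: P(X > 33) for X ~ N(30, 4^2).
def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p_gt_33 = 1.0 - normal_cdf((33 - 30) / 4)      # ≈ 0.2266

# Titration example: mol HCl = mol NaOH; concentration = mol / volume.
c_hcl = (0.035 * 0.1) / 0.025                  # ≈ 0.14 mol/L

# Magnetic-field example: on-axis field of a circular current loop.
mu0, I, R, z = 4 * math.pi * 1e-7, 5.0, 0.2, 0.1
B = mu0 * I * R**2 / (2 * (R**2 + z**2) ** 1.5)   # ≈ 1.124e-5 T

print(total_digits(13), total_digits(4))       # 17 4
print(max_amt(3, [2, 8, 1], [1, 2, 1]))        # 9
print(round(p_gt_33, 4), round(c_hcl, 2), f"{B:.4e}")
```

Each printed value matches the final answers given in the corresponding example.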
Steering LVLMs via Sparse Autoencoder for Hallucination Mitigation

Zhenglin Hua1, Jinghan He2,3, Zijun Yao4, Tianxu Han5, Haiyun Guo2,3, Yuheng Jia1, Junfeng Fang6
1School of Computer Science and Engineering, Southeast University
2Foundation Model Research Center, Institute of Automation, Chinese Academy of Sciences
3School of Artificial Intelligence, University of Chinese Academy of Sciences
4Department of Computer Science and Technology, Tsinghua University
5Wuhan University of Technology
6National University of Singapore
huazhenglin2003@gmail.com, hejinghan2022@ia.ac.cn

Abstract

Large vision-language models (LVLMs) have achieved remarkable performance on multimodal tasks such as visual question answering (VQA) and image captioning. However, they still suffer from hallucinations, generating text inconsistent with visual input, posing significant risks in real-world applications. Existing approaches to address this issue focus on incorporating external knowledge bases, alignment training, or decoding strategies, all of which require substantial computational cost and time. Recent works try to explore more efficient alternatives by adjusting LVLMs' internal representations. Although promising, these methods may cause hallucinations to be insufficiently suppressed or lead to excessive interventions that negatively affect normal semantics. In this work, we leverage sparse autoencoders (SAEs) to identify semantic directions closely associated with either hallucinations or actuality, realizing more precise and direct hallucination-related representations. Our analysis demonstrates that interventions along the faithful direction we identified can mitigate hallucinations, while those along the hallucinatory direction can exacerbate them. Building on these insights, we propose Steering LVLMs via SAE Latent Directions (SSL), a training-free method based on SAE-derived latent directions to mitigate hallucinations in LVLMs.
Extensive experiments demonstrate that SSL significantly outperforms existing decoding approaches in mitigating hallucinations, while maintaining transferability across different model architectures with negligible additional time overhead.

1 Introduction

LVLMs (Liu et al., 2023; Dai et al., 2023b; Liu et al., 2024b) have demonstrated impressive capabilities in jointly processing visual and textual modalities, achieving strong performance on tasks such as VQA (Antol et al., 2015) and image captioning (Li et al., 2022). However, LVLMs still suffer from hallucination (Yin et al., 2011), where the generated text does not align with the visual content. This limitation poses significant challenges to their deployment in critical applications, including medical diagnosis (Gu et al., 2024) and autonomous driving (You et al., 2024), where factual consistency and reliability are essential.

[Figure 1 panels: three example responses to the prompt "Please describe this image in detail." (the original LLaVA response, the response with the hallucinatory direction added, and the response with the faithful direction added); (a) LVLM layers and hidden states; (b) the sparse autoencoder with its hallucinatory and faithful directions.]

Figure 1: The figure shows, from top to bottom, the original response from the LLaVA-Next-8b, the response after intervention along the hallucinatory direction, and the response after intervention along the faithful direction. These results demonstrate that targeted interventions along faithful directions suppress hallucinatory generations, while perturbations along hallucinatory directions tend to elicit more factually incorrect content.

https://arxiv.org/abs/2505.16146v1
arXiv:2505.16146v1 [cs.CV] 22 May 2025

To mitigate hallucination in LVLMs, researchers explore various strategies, including refining decoding algorithms (Leng et al., 2024; Huang et al., 2024; Kim et al., 2024), incorporating external knowledge bases (Qu et al., 2024), and leveraging additional annotated data for model fine-tuning (Park et al., 2024). While promising, these approaches often incur substantial computational cost and time. Recent works (Liu et al., 2025; Jiang et al., 2024; Li et al., 2025) try to explore more efficient alternatives by adjusting LVLMs' internal representations. However, these methods may cause hallucinations to be insufficiently suppressed or lead to excessive interventions that negatively affect normal semantics. Therefore, extracting fine-grained and reliable representations related to hallucinations remains a key challenge in advancing the reliability of LVLMs.

We note the success of SAEs in extracting fine-grained semantic representations of abstract concepts (specifically, capturing whether the model knows certain entities) in the field of large language models (LLMs) (Ferrando et al., 2025).
Inspired by this work, we extend the application of SAE-based analysis from LLMs to LVLMs, aiming to understand and mitigate hallucinations more precisely and directly by identifying and steering the internal semantic directions associated with hallucinatory and faithful content. Notably, SAEs build on the Linear Representation Hypothesis (Park et al., 2023), which posits that internal model representations can be expressed as sparse combinations of interpretable semantic directions (Tigges et al., 2024; Li et al., 2023a). Specifically, we leverage the SAE provided by Zhang et al. (2024a) to identify latent directions that closely correlate with hallucinatory semantics as well as those aligned with faithful content. As illustrated in Figure 1, targeted interventions along faithful directions suppress hallucinatory generations, while perturbations along hallucinatory directions tend to elicit more factually incorrect content.

Building on this insight, we propose Steering LVLMs via SAE Latent Directions (SSL), a training-free approach based on SAE-derived latent directions to mitigate hallucinations in LVLMs. During the visual feature merging stage, we inject faithful semantic directions to amplify grounded semantic features and improve image-text consistency. In the subsequent language generation stage, we reduce projection onto hallucinatory semantic directions, thereby reducing the risk of generating factually incorrect content. Remarkably, although the SAE was trained on the LLaVA-Next-8b model, the extracted hallucination and factuality directions generalize seamlessly to other architectures (e.g., the LLaVA-1.5-7b model (Liu et al., 2024c) and the InstructBLIP-7b model (Dai et al., 2023a)). Experimental evaluation on established LVLM hallucination benchmarks shows that SSL outperforms
existing decoding approaches, confirming its effectiveness and efficiency in hallucination reduction.

Our main contributions are as follows:

• We leverage SAEs to identify semantic directions that are highly correlated with hallucinatory and faithful object words in the representation space of LVLMs.

• We propose SSL, a training-free method that injects factuality semantic directions during visual feature fusion to reinforce grounded content and suppresses hallucination directions during language generation to proactively mitigate hallucinatory outputs.

• Extensive experiments demonstrate that SSL outperforms existing decoding approaches on widely used hallucination benchmarks with negligible time overhead, exhibiting transferability across different architectures.

2 Preliminary

LVLM generation. LVLMs take both image and text as input and encode them into a sequence of tokens. During autoregressive generation, the model first concatenates the system tokens X_s, prompt tokens X_t, and visual tokens X_v in a predefined order to form the initial input. At the first generating step t = 1, the model predicts the output token based on this initial context. At each subsequent step t > 1, the previously generated tokens X_o^{<t} are appended to the end of the initial input, resulting in the current sequence (X_s, X_t, X_v, X_o^{<t}). The model then generates the next token autoregressively according to the conditional probability distribution, continuing until an end-of-sequence token is produced or a maximum sequence length is reached:

    y_t = arg max p_θ(y_t | X_s, X_t, X_v, X_o^{<t}),    (1)

where y_t is the token generated at time step t.

Sparse autoencoders. SAEs have been proven to be effective for separating overlapping features (Bricken et al., 2023; Ferrando et al., 2025). In this work, we use the SAE provided by Zhang et al. (2024a), which operates on the residual stream h_l ∈ R^d from the l-th layer of LVLMs.
The SAE projects these representations into a higher-dimensional latent space z(x) ∈ R^{d_SAE} and applies a ReLU activation:

    z(x) = ReLU(W_enc x + b_enc),    (2)

where W_enc and b_enc denote the encoder's weight matrix and bias, respectively. To enforce sparsity, a top-k operation retains only the k largest activations in z(x), zeroing out the rest to obtain the sparse latent vector z_k(x) = TopK(z, k).

[Figure 2 panels: (a) hallucinatory latent KDE distribution (t = 5.83, p = 2.71e-08, Cohen's d = 0.88); (b) faithful latent KDE distribution (t = -5.89, p = 2.01e-08, Cohen's d = -0.89); each panel overlays the densities of hallucinatory and faithful samples.]

Figure 2: KDE plots of the selected latent activations for test samples labeled as hallucination and faithfulness. The overlaid plots visualize the distributional differences, with annotated t-statistic, p-value, and Cohen's d effect size indicating the statistical separation between the two groups.
The decoder then reconstructs the original representation via a linear combination of the active components:

    SAE(x) = W_dec^T z_k(x) + b_dec,    (3)

where W_dec and b_dec denote the decoder's weight matrix and bias. During training, the loss function combines the reconstruction error with an auxiliary loss proposed by Gao et al. (2025), aiming to encourage the utilization of meaningful features in the latent representation z_k(x) and to prevent feature inactivity, thereby enhancing the overall expressiveness of the sparse encoding. We refer to each component of z_k(x) as a latent activation, and each row vector of W_dec as a latent direction.

Steering with SAE
Latents. The SAE reconstructs model representations as a linear combination of latent directions and a bias, effectively approximating the original input. Each latent activation z_j(x) corresponds to a specific decoder direction d_j = W_dec[j, :], enabling targeted adjustment of the representation through activation steering (Turner et al., 2023). This technique allows us to steer the residual stream by modifying the representation as follows:

    x_steer ← x + α d_j,    (4)

where α is a tunable parameter that determines the strength of the intervention.

3 Method

In this work, we introduce SSL, a training-free method for steering LVLMs. Our method consists of two principal components: semantic directions identification and steering LVLMs via SAE latents.

3.1 Semantic Directions Identification

Residual Stream Dataset Construction for Hallucinations and Faithfulness. To investigate whether there exist directions in LVLMs that are highly correlated with hallucinatory and faithful semantics, we randomly sampled 4,000 image-text pairs from the MSCOCO dataset (Lin et al., 2014). Using the LLaVA-Next-8b model (Liu et al., 2024d) for the image captioning task, we extract the residual stream representations from the 25th layer when the model generated object tokens classified as either hallucinatory or faithful.

Given that a small proportion of object words are tokenized into multiple subword units, we exclude these cases to simplify the analysis. Furthermore, because each sample exhibited an imbalance between the counts of hallucinatory and faithful object terms, we enforce class balance by sampling an equal number of residual vectors from each category per image-text pair. See Appendix A for a description of the process. Finally, we construct a balanced dataset containing 1,784 samples and divide it into a training set and a test set in a 9:1 ratio, for direction mining and direction effectiveness validation, respectively.
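Equations (2)-(4) are compact enough to sketch in code. The following toy numpy example (randomly initialized weights and small dimensions, chosen purely for illustration; in SSL the SAE weights are the pretrained ones from Zhang et al. (2024a)) shows the top-k encode, the decode, and the activation-steering update:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_sae, k = 16, 64, 4                      # toy sizes for illustration only

# Toy SAE parameters; in SSL these come from a pretrained SAE, not random init.
W_enc = rng.normal(size=(d_sae, d)) / np.sqrt(d)
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d)) / np.sqrt(d_sae)  # each row is a latent direction d_j
b_dec = np.zeros(d)

def sae_encode(x):
    """z(x) = ReLU(W_enc x + b_enc), then TopK keeps only the k largest activations."""
    z = np.maximum(W_enc @ x + b_enc, 0.0)
    zk = np.zeros_like(z)
    top = np.argsort(z)[-k:]                 # indices of the k largest activations
    zk[top] = z[top]
    return zk

def sae_decode(zk):
    """SAE(x) = W_dec^T z_k(x) + b_dec (Eq. 3)."""
    return W_dec.T @ zk + b_dec

def steer(x, j, alpha):
    """x_steer = x + alpha * d_j, with d_j = W_dec[j, :] (Eq. 4)."""
    return x + alpha * W_dec[j]

x = rng.normal(size=d)                       # stands in for a residual-stream vector h_l
zk = sae_encode(x)
x_hat = sae_decode(zk)                       # approximate reconstruction of x
x_steer = steer(x, j=3, alpha=8.0)           # push x along latent direction 3
```

A positive α amplifies a direction (as when the faithful direction is injected during visual feature fusion), while a negative α suppresses it (as when the projection onto the hallucinatory direction is reduced during generation).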
Semantically Hallucinatory and Faithful Direction Identification via SAE. Inspired by Meng et al. (2022); Ferrando et al. (2025), we leverage the SAE to identify latent directions aligned with hallucinatory and faithful semantics. Specifically, each residual stream sample from the training set is passed through the SAE, and we record the activation frequency of each latent activation across hallucinatory samples X_hall and faithful samples X_faithful. For a given latent activation j, its activation frequencies on hallucinatory samples f_j^hall and faithful samples f_j^faithful are computed as:

    f_j^hall = (1 / N_hall) Σ_{x ∈ X_hall} I(z_j(x) > 0),
    f_j^faithful = (1 / N_faithful) Σ_{x ∈ X_faithful} I(z_j(x) > 0),    (5)

where N_hall and N_faithful represent the number of hallucinatory and faithful samples, respectively.

[Figure 3 diagram: Step 1: Obtain Semantic Directions (image and text pass through the LVLM layers; the SAE, with W_enc, ReLU, and W_dec, yields the hall latent and faithful latent, i.e., d_hall and d_faithful); Step 2: Steer LVLM via Semantic Directions (positive enhance, negative suppress).]

Figure 3: Overview of the proposed SSL approach leveraging SAE to identify semantically aligned directions and mitigate hallucination in LVLMs. We use SAE to identify latent directions within the internal representation space of LVLMs that are associated with hallucinatory and faithful semantics, denoted as d_hall and d_faithful, respectively. These semantic directions are then used to modulate the residual
stream at the l-th layer, steering the generation process toward greater factual consistency. quantify the semantic relevance of each latent ac- tivation, we compute the difference in activation frequencies as follows: shall j=fhall j−ffaithful j , sfaithful j =ffaithful j−fhall j.(6) These values reflect the relevance of latent dimen- sionjto hallucinatory and faithful semantics, re- spectively. Finally, we identify the latent activation with the highest shall jas the hallucination semantic direction (hereafter referred to as the halllatent), and the one with the highest sfaithful j as the faith- ful semantic direction (hereafter referred to as the faithful latent). Validation of the Effectiveness of Semantic Di- rections. We begin by analyzing the distribu- tional differences of the hallucinatory latent and faithful latent activations across both sample types in the test set. These distributions are visualized us- ing kernel density estimation (KDE) plots as shown in Figure 2. We further quantify the separation us- ing independent two-sample t-tests and compute Cohen’s dto assess effect sizes. Both latent activa- tions exhibit statistically significant distributional shifts, with substantial effect sizes, confirming their discriminative power. To probe the semantic alignment of the activa- tions, we further conduct Spearman rank corre-lation analysis between the activation values of hallucinatory samples and their associated halluci- natory object terms. The hallucinatory latent corre- lates positively with hallucinatory objects (Spear- man’s ρ= 0.42,p= 9.95×10−9), whereas the faithful latent correlates negatively ( ρ=−0.44, p= 9.85×10−10). Given the binary nature of the labels and the symmetry of rank correlation, we infer a positive correlation between the faithful latent and samples. 
Finally, to quantitatively evaluate the predictive power of the SAE-derived directions in distinguishing hallucinatory from faithful samples, we design a set of classification experiments based on logistic regression. The model takes the latent activation values, either individually or in combination, as input features for a binary hallucination classification task. As illustrated in Figure 4, the results demonstrate that the latent semantic directions extracted by the SAE are discriminative (see Appendix B for more experimental details). Furthermore, combining the hall latent and faithful latent as input features yields further performance improvements.

Figure 4: Comparison of classification accuracy using different latent activations. Hall latent and faithful latent correspond to the identified hallucinatory and faithful latent activations, respectively. Random1 denotes a single latent activation randomly selected from the SAE latent space, while Random2 represents a feature combination of two randomly selected latent activations. The dashed line indicates the baseline accuracy of 0.5.

3.2 Steering LVLMs via SAE Latents

Steering Strategy. A core architectural mechanism of LVLMs involves multi-layer residual connections that progressively model the input information. At each layer, semantic representations are passed through residual flows, which can be divided according to the input sequence into four main components: system tokens, prompt tokens, visual tokens, and output tokens. Among these, visual tokens interact with language tokens to guide the model's understanding of image content; injecting the faithful direction at this position enables the model to increase visual faithfulness. Output tokens represent the model's autoregressive language generations, influenced by
both prompts and visual inputs, thereby reflecting the model's semantic behavior. Suppressing hallucinatory directions during this stage helps reduce hallucination tendencies and enhances factual consistency in language generation.

Following the method described in Section 2, we identify two semantic direction vectors: the hallucinatory direction d_hall and the faithful direction d_faithful. During the visual feature fusion stage, we incorporate d_faithful to improve the faithfulness of visual understanding. In the subsequent language generation stage, we suppress activations along d_hall to reduce the risk of hallucinatory outputs. Semantic steering at layer l is defined as follows:

$$X_{l,v} \leftarrow X_{l,v} + \alpha \cdot d_{\text{faithful}}, \qquad X_{l,o}^{<t} \leftarrow X_{l,o}^{<t} - \alpha \cdot d_{\text{hall}}, \tag{7}$$

where α is a tunable hyperparameter controlling the strength of semantic steering.

Algorithm 1: SSL
Input: scaling factor γ; steering layer l_s; semantic directions d_hall, d_faithful; residual stream at layer l_s: [X_{l_s,s}, X_{l_s,t}, X_{l_s,v}, X^{<t}_{l_s,o}]
1: if l = l_s then
2:   for token x in residual stream do
3:     if x ∈ X_{l_s,v} then
4:       x ← x + γ · (‖x‖ / (‖d_faithful‖ + ε)) · d_faithful
5:     else if x ∈ X^{<t}_{l_s,o} then
6:       x ← x − γ · (‖x‖ / (‖d_hall‖ + ε)) · d_hall
7:     else
8:       // system and prompt tokens remain unchanged

Adaptive Steering Parameters (ASP). The setting of the steering strength α plays a crucial role in determining the effectiveness of semantic intervention. Traditional steering approaches often rely on a fixed hyperparameter α to linearly combine the steering vector with the residual representations. However, this fixed strategy can result in unstable or suboptimal performance, as the magnitude of residual vectors can vary across model layers and token positions. In such cases, a change that is too small may fail to induce meaningful guidance, while an excessively large change may cause semantic distortion or instability.
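Algorithm 1's per-token update, with its norm-matched scaling γ·‖x‖/(‖d‖+ε), can be sketched as follows. The explicit `token_types` labels and the function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ssl_steer_layer(X, token_types, d_faithful, d_hall, gamma, eps=1e-6):
    """Apply SSL at the steering layer (Algorithm 1 sketch).
    X: (num_tokens, d_model) residual stream at layer l_s.
    token_types: per-token label, one of 'system' | 'prompt' | 'visual' | 'output'."""
    X = X.copy()
    for i, kind in enumerate(token_types):
        norm_x = np.linalg.norm(X[i])
        if kind == "visual":
            # enhance the faithful direction on visual tokens
            alpha = gamma * norm_x / (np.linalg.norm(d_faithful) + eps)
            X[i] = X[i] + alpha * d_faithful
        elif kind == "output":
            # suppress the hallucinatory direction on generated tokens
            alpha = gamma * norm_x / (np.linalg.norm(d_hall) + eps)
            X[i] = X[i] - alpha * d_hall
        # system and prompt tokens remain unchanged
    return X
```

In an actual LVLM this update would run inside a forward hook on the chosen decoder layer; the sketch only shows the arithmetic applied to each token's residual vector.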
To address this limitation, we propose an adaptive feature steering mechanism, which dynamically adjusts the steering strength based on the norm of the residual vector at each token at a given layer. This approach ensures more stable and context-aware intervention across varying model states. Specifically, the adaptive steering strength α is computed as:

$$\alpha = \gamma \cdot \frac{\lVert x_{\text{residual}} \rVert}{\lVert d_{\text{steer}} \rVert + \epsilon}, \tag{8}$$

where γ is a scaling factor, x_residual denotes the residual vector, d_steer is the steering direction, and ε is a small constant to avoid numerical instability. An overview of the proposed SSL method is presented in Figure 3. The complete procedure of SSL is provided in Algorithm 1.

4 Experiments

4.1 LVLMs

We conduct experiments on three representative LVLMs: LLaVA-NeXT-8b (Liu et al., 2024d), LLaVA-1.5-7b (Liu et al., 2024c), and InstructBLIP-7b (Dai et al., 2023a). These models share a modular structure comprising an image encoder, a projection module, and a language model. LLaVA-1.5 and LLaVA-NeXT use an MLP to project all image tokens into the LLM's input space, while InstructBLIP employs a Q-Former to select a compact set of informative visual tokens, reducing redundancy. Compared to LLaVA-1.5, LLaVA-NeXT upgrades the LLM from 7b to 8b parameters and supports higher-resolution inputs for visual understanding.

                 LLaVA-NeXT-8b                        LLaVA-1.5-7b                         InstructBLIP-7b
Method   CHAIR_S↓    CHAIR_I↓    Avg.Len   CHAIR_S↓    CHAIR_I↓    Avg.Len   CHAIR_S↓    CHAIR_I↓    Avg.Len
Greedy   29.60±0.89  8.03±0.41   165.61    49.44±1.57  14.19±0.76  82.97     45.44±2.43  13.07±0.71  92.11
Beam     27.20±1.19  7.20±0.39   174.17    53.60±2.39  15.47±0.45  87.38     48.68±1.65  13.59±0.43  95.92
DoLa     29.04±1.08  7.86±0.24   166.14    50.64±2.33  14.51±0.88  82.32     46.12±1.85  13.09±0.90  91.80
VCD      31.36±1.99  8.40±0.79   165.43    51.68±1.85  15.29±0.83  83.03     50.84±2.41  14.51±0.97  91.44
OPERA    -           -           -         44.04±0.94  13.23±0.46  75.79     45.88±2.31  13.15±0.87  93.51
CODE     30.76±0.92  8.09±0.42   158.07    47.72±0.79  14.13±0.56  78.43     50.88±2.05  14.21±0.92  89.62
SSL      25.20±1.49  6.46±0.67   162.93    41.08±2.11  12.02±0.85  82.30     38.52±4.24  11.45±0.96  102.62

Table 1: CHAIR results on the MSCOCO dataset averaged over 5 random seeds. The best and second-best results are indicated in bold and underlined, respectively. Avg.Len represents the average length of the generated descriptions.

4.2 Benchmarks

CHAIR. We evaluate object hallucination using the Caption Hallucination Assessment with Image Relevance (CHAIR) metric (Rohrbach et al., 2018), which compares generated image captions against ground-truth annotations to detect hallucinatory objects mentioned in the captions but absent from the image. CHAIR includes two metrics, at the caption level (CHAIR_S) and the object level (CHAIR_I):

$$\text{CHAIR}_S = \frac{|\{\text{captions w/ hallucinatory objects}\}|}{|\{\text{total captions}\}|}, \qquad \text{CHAIR}_I = \frac{|\{\text{hallucinatory objects}\}|}{|\{\text{total mentioned objects}\}|}. \tag{9}$$

We randomly sample 500 images from the COCO 2014 validation set (Lin et al., 2014) and conduct five runs with different random seeds. For all LVLMs, captions are generated using the prompt "Please describe this image in detail." We report the mean and standard deviation for each metric.

POPE. We further evaluate object hallucination using the POPE benchmark (Li et al., 2023b), a question-answering dataset designed to assess the factual consistency of generated image descriptions. POPE contains 500 images from the MSCOCO dataset (Lin et al., 2014), each paired with binary questions of the form: "Is there a <object> in the image?" The dataset comprises three subsets (random, popular, and adversarial), which differ in their object sampling strategies. Model performance is measured using standard classification metrics: Accuracy, Precision, Recall, and F1 score. To provide an overall assessment, we report the average results across all three subsets.
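Both benchmark scores reduce to simple counting. Below is a minimal sketch of Eq. 9 and of POPE's F1 computation, assuming per-caption hallucination annotations and binary yes/no answers have already been extracted (the dict layout is an assumption for illustration):

```python
def chair_scores(captions):
    """CHAIR metrics (Eq. 9). Each caption is a dict with 'mentioned'
    (set of object words in the caption) and 'hallucinated' (the subset
    absent from the image)."""
    chair_s = sum(1 for c in captions if c["hallucinated"]) / len(captions)
    total_mentioned = sum(len(c["mentioned"]) for c in captions)
    total_hall = sum(len(c["hallucinated"]) for c in captions)
    chair_i = total_hall / max(total_mentioned, 1)
    return chair_s, chair_i

def pope_f1(predictions, labels):
    """F1 over POPE's binary 'Is there a <object>?' answers (True = yes)."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```

In practice the hallucinated sets come from matching caption nouns against COCO ground-truth object annotations, which this sketch takes as given.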
LLaVA-Bench. We evaluate LVLM performance using the LLaVA-Bench (In-the-Wild) benchmark (Liu et al., 2024c), a comprehensive set designed to assess models across diverse and challenging visual scenarios. The benchmark includes 24 images from varied real-world contexts, such as indoor scenes, outdoor environments, and internet memes, paired with 60 carefully curated questions spanning open-ended QA, fine-grained description, and complex reasoning. We prompt the GPT-4o model to evaluate the LVLMs' outputs along two dimensions: factual accuracy and response detail.

4.3 Baselines

We compare the performance of the base LVLMs under greedy decoding and beam search decoding. Additionally, we conduct a comparison between SSL and popular training-free approaches that require neither external data nor auxiliary models. Specifically, DoLa (Chuang et al., 2024) derives the next-token distribution by contrasting logits from later and earlier layers; VCD (Leng et al., 2024) employs contrastive decoding by comparing the output distributions generated from original and perturbed images; OPERA (Huang et al., 2024) enhances generation quality by alleviating excessive reliance on previously generated tokens during beam search; and CODE (Kim et al., 2024) enhances vision-language alignment by using self-generated captions as internal references.

4.4 Implementation Details

We set γ to 0.6, 0.8, and 0.2 for LLaVA-NeXT, LLaVA-1.5, and InstructBLIP, respectively, to balance effective mitigation of hallucination with minimizing
the invasiveness of state interventions. SSL is applied at the 16th layer for LLaVA-NeXT, the 31st layer for LLaVA-1.5, and the 8th layer for InstructBLIP. We faithfully replicate all baseline methods, implementing them based on their open-source codebases and configuring them according to the hyperparameters reported in the original papers. All experimental results are obtained under consistent base model, prompt, and generation parameter settings to ensure a fair comparison. For all methods involving beam search, we set the max_new_token to 512 and the beam size to 5.

Method   LLaVA-NeXT-8b F1↑   LLaVA-1.5-7b F1↑   InstructBLIP-7b F1↑
Greedy   89.10               84.99              85.37
Beam     89.30               85.31              84.41
DoLa     89.49               85.08              85.22
VCD      88.91               84.42              84.68
OPERA    -                   85.46              84.42
CODE     88.93               84.64              84.81
SSL      89.66               85.44              85.16

Table 2: POPE results averaged over the popular, adversarial, and random splits. The best and second-best results are indicated in bold and underlined, respectively.

5 Results

CHAIR. Table 1 reports the performance of SSL on the CHAIR benchmark compared to all baseline approaches. Due to excessive memory requirements, OPERA fails to produce results on LLaVA-NeXT-8b. Notably, although the SAE from Zhang et al. (2024a) was trained on LLaVA-NeXT-8b, the identified semantic directions generalize well across different model architectures. SSL consistently outperforms all baselines across all three LVLMs, while incurring only a marginal trade-off in caption length or descriptive richness.

POPE. As shown in Table 2, applying SSL to LVLMs with different architectures consistently improves performance on the POPE benchmark. This demonstrates the robustness of SSL in enhancing models across a spectrum of capabilities and further validates the generalizability of the semantic directions captured by our approach.

LLaVA-Bench. Table 3 presents the evaluation results on LLaVA-Bench using GPT-4o.
The results highlight the effectiveness of SSL in enhancing model accuracy across highly diverse and challenging tasks.

                      Accuracy↑  Detailedness↑
LLaVA-NeXT            6.2891     6.0278
LLaVA-NeXT w/ SSL     6.3671     5.2667
LLaVA-1.5             5.3333     4.7000
LLaVA-1.5 w/ SSL      4.7167     4.5667
InstructBLIP          5.5056     4.1111
InstructBLIP w/ SSL   5.5722     4.2611

Table 3: Evaluation results on the LLaVA-Bench (In-the-Wild) benchmark based on prompting GPT-4o.

5.1 Ablation Study

Effectiveness of ASP. To validate the necessity of adaptively adjusting the steering strength, we conduct an ablation study that replaces ASP with a fixed steering parameter α (as in Equation 4), set equal to the γ value. As shown in Table 4, removing the adaptive strategy results in consistent performance drops across all three model architectures, highlighting the importance of ASP in effectively mitigating hallucinations.

Figure 5: Results of SSL applied across different layers.

Layer Selection Ablation. We conduct an ablation study on LLaVA-NeXT-8b to examine the impact of applying guidance at different layers. Figure 5 shows that the choice of guidance layer significantly affects model
performance (with γ set to 0.8). For LLaVA-NeXT-8b, applying SSL at middle layers yields more effective mitigation of hallucinations, with layer 15 achieving the best performance. For further analysis of steering layer selection and scaling factor γ settings across different models, please refer to Appendices C and D.

5.2 Further Analysis

Analysis of Reverse-SSL for Inducing Hallucinations in LVLMs. To further validate the effectiveness of the identified semantic directions, we compare CHAIR benchmark results across three model architectures under three settings: the original model state, SSL, and Reverse-SSL (see Appendix E for details on Reverse-SSL). As shown in Figure 6, across all three model architectures, applying Reverse-SSL significantly increases hallucinations, while standard SSL guidance consistently reduces them.

                           CHAIR_S↓  CHAIR_I↓
LLaVA-NeXT w/ ASP          25.20     6.46
LLaVA-NeXT w/ fixed α      28.40     7.79
LLaVA-1.5 w/ ASP           41.08     12.02
LLaVA-1.5 w/ fixed α       47.21     13.12
InstructBLIP w/ ASP        38.52     11.45
InstructBLIP w/ fixed α    45.40     12.70

Table 4: Ablation study on the necessity of adaptively adjusting the steering parameters.

To further illustrate
the effectiveness of SSL, we present several qualitative examples in Appendix F.

Figure 6: CHAIR evaluation across three multimodal models (InstructBLIP-7b, LLaVA-NeXT-8b, and LLaVA-1.5-7b) under three generation settings: original, SSL, and Reverse-SSL. (a) CHAIR_S across models; (b) CHAIR_I across models.

Additional Time Analysis.
During each generation step, SSL dynamically adjusts the steering strength through a single scaling and weighting operation, introducing negligible computational overhead. Compared to the overall generation process, the additional latency introduced by SSL is minimal. A comparison of inference time between SSL and the other baselines is shown in Figure 7.

Figure 7: Comparison of inference time for different methods measured using identical hardware.

6 Related Works

Hallucination in LVLMs refers to the phenomenon where the generated textual content is inconsistent with the visual input. This issue arises from various factors, such as dataset bias, insufficient visual perception by the encoder, and misalignment across modalities (Liu et al., 2024a). While prior studies have proposed diverse strategies to mitigate hallucination, the internal mechanisms within LVLMs that give rise to such inconsistencies remain largely underexplored.

Liu et al. (2025) enhance the stability of visual representations by steering latent features during generation, preventing premature degradation. Jiang et al. (2024) remove hallucination-related feature components through linear orthogonalization by projecting the internal image representations of vision models into the language space, thereby purifying the input and reducing hallucinations. Li et al. (2025) uncover phenomena such as early activation and progressive loss of visual information in LVLMs, and propose injecting continuous visual streams during inference to compensate for these effects, significantly reducing hallucinations.

Unlike previous methods, our work directly identifies hallucinatory and faithful semantic directions using SAEs. We then dynamically adjust these directions during visual-linguistic fusion and generation to proactively reduce hallucinatory outputs. Furthermore, our approach contributes
to the practical interpretability of SAEs in LVLMs, demonstrating their potential for understanding and controlling internal semantic representations.

7 Conclusion

This work explores the relationship between hallucination in LVLMs and their internal latent representations. We construct a residual stream dataset of hallucinatory and faithful object tokens, and use an SAE to extract the semantic directions corresponding to hallucination and factuality. Based on this insight, we propose SSL, a training-free method that amplifies faithful semantics while suppressing potential hallucinations. Extensive experiments demonstrate that SSL outperforms existing methods. Furthermore, although the SAE was trained on LLaVA-NeXT, the semantic directions it extracted generalize well across different model architectures, further showcasing the potential of SAEs for understanding and controlling the internal semantic representations of models.

8 Limitations

Currently, the only fully open-source multi-modal SAE is provided by LLM-Labs, trained on the 25th layer of the LLaVA-NeXT-8b model. As a result, our study does not include a comparison of SAEs trained on other model architectures across different multi-modal models. Future work can focus on training multi-modal SAEs on various architectures to investigate whether the findings from this study generalize across different models.

Ethical Consideration

In our proposed SSL method, positive steering significantly reduces hallucinations in LVLMs, while negative steering increases them. Positive steering not only improves model performance but also aligns with ethical principles such as safety and reliability. In contrast, negative guidance may lead to more hallucinations and generate false information. Therefore, such mechanisms should be applied with caution, supported by thorough validation and human oversight.

References

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C.
Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2425–2433. IEEE Computer Society.

Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex Tamkin, Karina Nguyen, and 6 others. 2023. Towards monosemanticity: Decomposing language models with dictionary learning. In Transformer Circuits Thread.

Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. 2025. An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. In Computer Vision – ECCV 2024, pages 19–35, Cham. Springer Nature Switzerland.

Yung-Sung Chuang, Yujia Xie, Hongyin Luo, Yoon Kim, James R. Glass, and Pengcheng He. 2024. DoLa: Decoding by contrasting layers improves factuality in large language models. In The Twelfth International Conference on Learning Representations.

Wenliang Dai, Junnan Li, Dongxu Li, Anthony Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023a. InstructBLIP: Towards general-purpose vision-language models with instruction tuning. In Thirty-seventh Conference on Neural Information Processing Systems.

Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023b. InstructBLIP: Towards general-purpose
vision-language models with instruction tuning. ArXiv preprint, abs/2305.06500.

Javier Ferrando, Oscar Balcells Obeso, Senthooran Rajamanoharan, and Neel Nanda. 2025. Do I know this entity? Knowledge awareness and hallucinations in language models. In The Thirteenth International Conference on Learning Representations.

Leo Gao, Tom Dupre la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. 2025. Scaling and evaluating sparse autoencoders. In The Thirteenth International Conference on Learning Representations.

Xuan Gong, Tianshi Ming, Xinpeng Wang, and Zhihua Wei. 2024. DAMRO: Dive into the attention mechanism of LVLM to reduce object hallucination. ArXiv preprint, abs/2410.04514.

Zishan Gu, Changchang Yin, Fenglin Liu, and Ping Zhang. 2024. MedVH: Towards systematic evaluation of hallucination for large vision language models in the medical context. ArXiv preprint, abs/2407.02730.

Jinghan He, Kuan Zhu, Haiyun Guo, Junfeng Fang, Zhenglin Hua, Yuheng Jia, Ming Tang, Tat-Seng Chua, and Jinqiao Wang. 2024. Cracking the code of hallucination in LVLMs with vision-aware head divergence. ArXiv preprint, abs/2412.13949.

Qidong Huang, Xiaoyi Dong, Pan Zhang, Bin Wang, Conghui He, Jiaqi Wang, Dahua Lin, Weiming Zhang, and Nenghai Yu. 2024. OPERA: Alleviating hallucination in multi-modal large language models via over-trust penalty and retrospection-allocation. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13418–13427, Los Alamitos, CA, USA. IEEE Computer Society.

Nick Jiang, Anish Kachinthaya, Suzie Petryk, and Yossi Gandelsman. 2024. Interpreting and editing vision-language representations to mitigate hallucinations. ArXiv preprint, abs/2410.02762.

Junho Kim, Hyunjun Kim, Yeonju Kim, and Yong Man Ro. 2024. CODE: Contrasting self-generated description to combat hallucination in large multi-modal models.
In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Sicong Leng, Hang Zhang, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, and Lidong Bing. 2024. Mitigating object hallucinations in large vision-language models through visual contrastive decoding. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13872–13882, Los Alamitos, CA, USA. IEEE Computer Society.

Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR.

Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2023a. Inference-time intervention: Eliciting truthful answers from a language model. In Thirty-seventh Conference on Neural Information Processing Systems.

Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Xin Zhao, and Ji-Rong Wen. 2023b. Evaluating object hallucination in large vision-language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 292–305, Singapore. Association for Computational Linguistics.

Zhuowei Li, Haizhou Shi, Yunhe Gao, Di Liu, Zhenting Wang, Yuxiao Chen, Ting Liu, Long Zhao, Hao Wang, and Dimitris N. Metaxas. 2025. The hidden life of tokens: Reducing hallucination of large vision-language models via visual information steering. ArXiv preprint, abs/2502.03628.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO:
Common objects in context. In Computer Vision – ECCV 2014, pages 740–755, Cham. Springer International Publishing.

Hanchao Liu, Wenyuan Xue, Yifei Chen, Dapeng Chen, Xiutian Zhao, Ke Wang, Liping Hou, Rongjun Li, and Wei Peng. 2024a. A survey on hallucination in large vision-language models. ArXiv preprint, abs/2402.00253.

Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2024b. Improved baselines with visual instruction tuning. In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 26286–26296.

Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2024c. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296–26306.

Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. 2024d. LLaVA-NeXT: Improved reasoning, OCR, and world knowledge.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning. ArXiv preprint, abs/2304.08485.

Sheng Liu, Haotian Ye, and James Zou. 2025. Reducing hallucinations in large vision-language models via latent space steering. In The Thirteenth International Conference on Learning Representations.

Kevin Meng, David Bau, Alex J Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in GPT. In Advances in Neural Information Processing Systems.

Dongmin Park, Zhaofang Qian, Guangxing Han, and Ser-Nam Lim. 2024. Mitigating dialogue hallucination for large vision language models via adversarial instruction tuning. ArXiv preprint, abs/2403.10492.

Kiho Park, Yo Joong Choe, and Victor Veitch. 2023. The linear representation hypothesis and the geometry of large language models. In Causal Representation Learning Workshop at NeurIPS 2023.

Xiaoye Qu, Qiyuan Chen, Wei Wei, Jishuo Sun, and Jianfeng Dong. 2024. Alleviating hallucination in large vision-language models with active retrieval augmentation.
ArXiv preprint, abs/2408.00555.

Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4035–4045, Brussels, Belgium. Association for Computational Linguistics.

Curt Tigges, Oskar J. Hollinsworth, Atticus Geiger, and Neel Nanda. 2024. Language models linearly represent sentiment. In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 58–87, Miami, Florida, US. Association for Computational Linguistics.

Alexander Matt Turner, Lisa Thiergart, David Udell, Gavin Leech, Ulisse Mini, and Monte MacDiarmid. 2023. Activation addition: Steering language models without optimization. ArXiv preprint, abs/2308.10248.

Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, and Enhong Chen. 2023. A survey on multimodal large language models. ArXiv preprint, abs/2306.13549.

Junwei You, Haotian Shi, Zhuoyu Jiang, Zilin Huang, Rui Gan, Keshu Wu, Xi Cheng, Xiaopeng Li, and Bin Ran. 2024. V2X-VLM: End-to-end V2X cooperative autonomous driving through large vision-language models. ArXiv preprint, abs/2408.09251.

Kaichen Zhang, Yifei Shen, Bo Li, and Ziwei Liu. 2024a. Large multi-modal models can interpret features in large multi-modal models. ArXiv preprint, abs/2411.14982.

Xiaofeng Zhang, Yihao Quan, Chaochen Gu, Chen Shen, Xiaosong Yuan, Shaotian Yan, Hao Cheng, Kaijie Wu, and
Jieping Ye. 2024b. Seeing clearly by layer two: Enhancing attention heads to alleviate hallucination in LVLMs. ArXiv preprint, abs/2411.09968.

A Construction of Residual Stream Dataset for Hallucinatory and Faithful Object Words

Figure 8 illustrates the construction process of the residual stream dataset. We begin by randomly sampling 4,000 image-text pairs from the MSCOCO dataset and extracting residual stream vectors from the 25th layer of the LLaVA-NeXT-8b model corresponding to object words identified as either hallucinatory or faithful during inference. It is worth noting that some object words are tokenized into multiple subword tokens by the model's tokenizer. Given the relatively low frequency of such cases, and to facilitate consistent statistical analysis, we exclude these incomplete subword instances from our dataset. For example, the word "backpack" may be split into two tokens, "back" and "pack", by the tokenizer. Since these sub-tokens do not independently convey the complete semantic meaning of the original word, they are omitted from further analysis.

To ensure class balance, we extract an equal number of residual stream vectors for hallucinatory and faithful object words from each sample. This process results in a balanced dataset comprising 1,784 instances, with a 1:1 ratio of positive (hallucinatory) and negative (faithful) samples. We further divide the dataset into training and test sets using a 9:1 split while maintaining the class distribution in both subsets. The training set is used to identify semantic directions closely associated with hallucination, and the test set is employed to evaluate the generalizability and discriminative power of the extracted semantic features. Using sparse autoencoder analysis, we find that the direction corresponding to latent activation index 36992 is highly correlated with hallucination, whereas index 47230 aligns closely with faithful outputs.
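The subword filtering step described in Appendix A can be sketched with any tokenizer exposed as a callable; the `tokenize` function below is a stand-in assumption, not the model's actual tokenizer:

```python
def keep_whole_token_objects(object_words, tokenize):
    """Drop object words that the tokenizer splits into multiple subword
    tokens, since sub-tokens such as 'back' + 'pack' do not carry the
    full word's meaning on their own."""
    return [w for w in object_words if len(tokenize(w)) == 1]
```

With a Hugging Face-style tokenizer, `tokenize` could be `lambda w: tok.tokenize(w)`; the dataset keeps only words surviving this filter.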
B A Set of Classification Experiments Based on Logistic Regression

To quantitatively evaluate the discriminative power of the latent activation directions extracted by the SAE in distinguishing hallucinatory from faithful samples, we conduct a series of classification experiments based on logistic regression. Specifically, for the $i$-th sample, let $z(x_i) \in \mathbb{R}^{d_{\mathrm{SAE}}}$ denote the SAE latent representation. From a total of $N$ samples, we extract individual latent dimensions indexed by $j$, denoted $z_j(x_i)$, and construct five types of input features:

Hall latent: the dimension $\mathrm{hall}$ with the highest correlation to hallucinated object words is selected, forming a one-dimensional feature:

$X_{\mathrm{hall}} = [z_{\mathrm{hall}}(x_1), \ldots, z_{\mathrm{hall}}(x_N)]^T \in \mathbb{R}^{N \times 1}.$  (10)

Faithful latent: the dimension $\mathrm{faithful}$ most correlated with faithful object words is selected, forming:

$X_{\mathrm{faithful}} = [z_{\mathrm{faithful}}(x_1), \ldots, z_{\mathrm{faithful}}(x_N)]^T \in \mathbb{R}^{N \times 1}.$  (11)

Random 1: a single latent dimension $r_1 \sim \mathcal{U}\{0, \ldots, d_{\mathrm{SAE}} - 1\}$ is randomly sampled to form a one-dimensional baseline feature $X_{r_1} \in \mathbb{R}^{N \times 1}$.

Random 2: two latent dimensions $(r_1, r_2)$ are randomly sampled to construct a two-dimensional feature $X_{r_2} \in \mathbb{R}^{N \times 2}$.

Both: the hall and faithful dimensions are concatenated to form a two-dimensional feature $X_{\mathrm{both}} \in \mathbb{R}^{N \times 2}$.

The corresponding label vector is $y = [y^{(1)}, y^{(2)}, \ldots, y^{(N)}]^T$, where $y^{(i)} \in \{0, 1\}$, with 1 indicating a hallucinatory sample and 0 a faithful sample. All input features are standardized before being fed into the logistic regression model. The model is trained on the training set and evaluated
on the held-out test set, using classification accuracy and confusion matrices as evaluation metrics.

As shown in Figure 9, the confusion matrices for the three main feature groups (Hall latent, Faithful latent, Both) indicate that the model achieves balanced performance across both positive and negative classes, with no noticeable prediction bias. As shown in Figure 4, the performance of Random 1 and Random 2 approximates the random baseline (≈0.5), while using the Hall latent and Faithful latent individually yields classification accuracies of 66.67% and 69.54%, respectively. Combining the two features (Both) further improves performance to 72.99%.

These findings suggest that the latent representations extracted by the SAE encode semantically discriminative signals for hallucination detection. Furthermore, combining hallucination- and faithfulness-related latent directions provides complementary information that enhances classification performance.

C Choice of Steering Layer

Figures 10, 11 and 12 present the results of ablation studies investigating the effect of introducing SSL at individual layers of the LLaVA-Next (γ = 0.6), LLaVA-1.5 (γ = 0.8), and InstructBLIP

Figure 8: The process of constructing the residual stream dataset for hallucinatory and faithful object words.
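The classification protocol of Appendix B (select latent dimensions, standardize, fit a logistic regression, report accuracy and a confusion matrix) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: `Z` is assumed to be the precomputed N×d_SAE matrix of SAE latent activations, and scikit-learn stands in for whatever implementation was actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def evaluate_feature_set(Z, y, dims, seed=0):
    """Train logistic regression on selected SAE latent dimensions.

    Z: (N, d_SAE) latent activation matrix (assumed given);
    y: labels, 1 = hallucinatory, 0 = faithful;
    dims: list of column indices forming the input feature."""
    X = Z[:, dims]
    # Stratified 9:1 train/test split, as in Appendix A.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.1, stratify=y, random_state=seed)
    scaler = StandardScaler().fit(X_tr)      # standardize features
    clf = LogisticRegression().fit(scaler.transform(X_tr), y_tr)
    pred = clf.predict(scaler.transform(X_te))
    return accuracy_score(y_te, pred), confusion_matrix(y_te, pred)

# The five configurations of Appendix B would then be, e.g.:
#   Hall latent     -> dims=[36992]
#   Faithful latent -> dims=[47230]
#   Both            -> dims=[36992, 47230]
#   Random 1 / 2    -> one or two randomly drawn dimensions as baselines.
```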
Figure 9: The confusion matrices for the three main feature groups. (Values recovered from the plot, rows = true label (Faith, Hall), columns = predicted label (Faith, Hall): Hall latent [[66, 21], [37, 50]]; Faithful latent [[49, 38], [15, 72]]; Both [[57, 30], [17, 70]].)

Table 5: Ablation study on the scaling factor of LLaVA-Next.

                 CHAIR_S↓   CHAIR_I↓   Avg.Len
LLaVA-NeXT         29.60      8.03     165.61
w/ SSL γ=0.2       28.36      7.38     163.20
w/ SSL γ=0.4       27.92      6.69     158.01
w/ SSL γ=0.6       25.20      6.46     162.93
w/ SSL γ=0.8       22.28      5.14     185.80
w/ SSL γ=1.0       13.28      4.06     163.62
w/ SSL γ=1.5        2.56      0.59      53.80

(γ = 0.2) models, respectively. For LLaVA-Next, we observe that applying SSL at the middle layers more effectively mitigates hallucinations, consistent with the results shown in Figure 5. For LLaVA-1.5, applying SSL at either layer 1 or deeper layers consistently mitigates hallucination. This observation aligns closely with findings reported by Zhang et al. (2024b), He et al. (2024) and Chen et al. (2025), who also found that interventions at layer 1 or deeper layers can significantly reduce hallucination in LLaVA-1.5. In contrast, for InstructBLIP, introducing SSL at shallow layers yields more substantial improvements, while deeper-layer interventions contribute less noticeably to performance.
We hypothesize that this is attributable to architectural and training differences in InstructBLIP: its shallow layers may already perform substantial cross-modal alignment early in the pipeline, making early-stage semantic guidance more impactful on overall generation quality. A deeper analysis of the layer-specific mechanisms
in different multimodal architectures is left for future work.

D Choice of Scaling Factor

Tables 5, 6, and 7 report the ablation results on the effect of the scaling factor γ in SSL. For the LLaVA series of models, setting γ to 0.6 or 0.8 effectively reduces hallucinations, indicating

Figure 10: Ablation study on steering a specific layer of LLaVA-Next. (The plot reports CHAIR_S and CHAIR_I when steering each layer L0–L31, with the unsteered baselines shown for reference.)
Figure 11: Ablation study on steering a specific layer of LLaVA-1.5. (The plot reports CHAIR_S and CHAIR_I when steering each layer L0–L31, with the unsteered baselines shown for reference.)
Algorithm 2: Reverse-SSL
Input: scaling factor γ; steering layer l_s; semantic directions d_hall, d_faithful; residual stream at layer l_s: [X_{l_s,s}, X_{l_s,t}, X_{l_s,v}, X^{<t}_{l_s,o}]
1: if L = l_s then
2:   for token x in residual stream do
3:     if x ∈ X_{l_s,v} then
4:       x ← x − γ · (∥x∥ / (∥d_faithful∥ + ϵ)) · d_faithful
5:     else if x ∈ X^{<t}_{l_s,o} then
6:       x ← x + γ · (∥x∥ / (∥d_hall∥ + ϵ)) · d_hall
7:     else
8:       // system and prompt tokens remain unchanged

that moderate levels of semantic intervention are beneficial. However, when γ > 1.0, the supervision becomes overly aggressive, disrupting the model's behavior and leading to abnormal hallucination metrics. In contrast, for InstructBLIP, the best performance is observed at γ = 0.2, while

Table 6: Ablation study on the scaling factor of LLaVA-1.5.

                 CHAIR_S↓   CHAIR_I↓   Avg.Len
LLaVA-1.5          49.44     14.19      82.97
w/ SSL γ=0.2       48.16     14.10      83.45
w/ SSL γ=0.4       47.16     14.04      83.24
w/ SSL γ=0.6       45.96     13.36      83.63
w/ SSL γ=0.8       41.08     12.02      82.30
w/ SSL γ=1.0       33.80     10.18      86.02
w/ SSL γ=1.5       17.16      7.66     275.16

larger values similarly result in performance degradation. These findings highlight the importance of carefully calibrating the intensity of semantic guidance to balance model control and generation quality across different architectures.

E Reverse-SSL Approach

To further validate the efficacy of the identified semantic directions, we extend the standard SSL by introducing Reverse Steering LVLMs via SAE Latents (Reverse-SSL), an approach that deliberately induces the model to generate more hallucinations.
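The per-token update in Algorithm 2 can be sketched with NumPy. The mask-based interface and all names are illustrative assumptions, not the paper's implementation; in practice this update would run inside a forward hook at the steering layer.

```python
import numpy as np

def reverse_ssl_step(X, visual_mask, output_mask, d_faithful, d_hall,
                     gamma=0.6, eps=1e-6):
    """Apply the Reverse-SSL update of Algorithm 2 at the steering layer.

    X: (tokens, dim) residual stream; visual_mask / output_mask are boolean
    arrays marking the visual-token and generated-output-token segments.
    System and prompt tokens are left untouched."""
    X = X.copy()
    norms = np.linalg.norm(X, axis=-1, keepdims=True)      # per-token ||x||
    # Push visual tokens away from the faithful direction ...
    X[visual_mask] -= (gamma * norms[visual_mask]
                       / (np.linalg.norm(d_faithful) + eps)) * d_faithful
    # ... and push output tokens toward the hallucination direction.
    X[output_mask] += (gamma * norms[output_mask]
                       / (np.linalg.norm(d_hall) + eps)) * d_hall
    return X
```

Since Reverse-SSL is described as mirroring the standard SSL intervention, flipping the two signs should recover the hallucination-reducing update.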
Specifically, at each residual stream layer, we divide

Figure 12: Ablation study on steering a specific layer of InstructBLIP. (The plot reports CHAIR_S and CHAIR_I when steering each layer L0–L31, with the unsteered baselines shown for reference.)

Table 7: Ablation study on the scaling factor of InstructBLIP.

                 CHAIR_S↓   CHAIR_I↓   Avg.Len
InstructBLIP       45.44     13.07      92.11
w/ SSL γ=0.2       38.52     11.45     102.62
w/ SSL γ=0.4       10.16      7.90      64.04
w/ SSL γ=0.6        0.76     21.99     349.86
the input sequence into four contiguous segments: system tokens, prompt tokens, visual tokens, and output tokens. During the visual-token stage, we inject a specific reverse-direction vector that deliberately shifts the visual features away from the true image semantics. At
the onset of autoregressive language generation, we inject the specific reverse direction to amplify the previously distorted visual signal, thereby biasing subsequent text outputs toward content that is either factually incorrect or substantially divergent from the original prompt. The complete algorithmic procedure for Reverse-SSL is presented in Algorithm 2.

F More Qualitative Results

Figures 13, 14, and 15 present additional qualitative examples on the LLaVA-Next, LLaVA-1.5, and InstructBLIP models, respectively, to demonstrate the effectiveness of our proposed SSL approach in mitigating hallucinated objects. With the integration of SSL, the descriptions generated by LVLMs exhibit improved fidelity to the visual content while maintaining the richness and informativeness of the language output.

G Details on the GPT-4o Evaluation

To evaluate the performance of LVLMs on the LLaVA-Bench benchmark, we adopt GPT-4o as the reference evaluator. Following the template provided in Table 8 of Gong et al. (2024), each evaluation instance includes the original image, the base output of the LVLM, and its SSL-enhanced counterpart. The evaluation focuses on both the accuracy and detailedness of the generated responses. To mitigate potential biases caused by output order, we randomly swap the positions of the two outputs with a probability of 0.5 before each evaluation. Each sample is evaluated four times to compute an average score. Figures 16, 17, and 18 illustrate representative evaluation examples based on three model architectures: LLaVA-Next, LLaVA-1.5, and InstructBLIP, respectively.

H License

The open-source data utilized in this work was employed exclusively for academic research, consistent with the original intended usage. The licenses of all intellectual artifacts used permit academic usage.

GPT-4o Prompt

You are required to score the performance of two AI assistants in describing a given image.
You should pay extra attention to the hallucination, which refers to the part of descriptions that are inconsistent with the image content, such as claiming the existence of something not present in the image or describing incorrectly in terms of the counts, positions, or colors of objects in the image. Please rate the responses of the assistants on a scale of 1 to 10, where a higher score indicates better performance, according to the following criteria:
1. Accuracy: whether the response is accurate with respect to the image content. Responses with fewer hallucinations should be given higher scores.
2. Detailedness: whether the response is rich in necessary details. Note that hallucinated descriptions should not count as necessary details.
Please output the scores for each criterion, containing only two values indicating the scores for Assistant 1 and 2, respectively. The two scores are separated by a space. Following the scores, please provide an explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment.
[Assistant 1] {} [End of Assistant 1]
[Assistant 2] {} [End of Assistant 2]
Output format:
Accuracy: Reason:
Detailedness: Reason:

Table 8: The prompt used for GPT-4o evaluation.

Please describe this image in detail. (Human)

In the heart of a cozy
room, a black and gray cat has found a comfortable spot on a wooden floor. The cat, with its fur as dark as night and patches as light as the moon, is facing away from us. To the left of the feline, a blue bicycle leans casually against a white wall. The bicycle, with its vibrant blue frame and black seat, adds a touch of color to the otherwise monochrome setting. The white wall behind it contrasts beautifully with the blue of the bicycle. The wooden floor beneath these objects adds warmth to the image, grounding the scene in a sense of homeliness.

In the image, a black and gray cat is the main subject, standing on a wooden floor. The cat is positioned in front of a bicycle wheel, which is leaning against a white wall. The bicycle wheel is black with a red brake, and it has a black tire. The cat is facing the wall, and its tail is raised, indicating that it might be curious or alert. The image also contains a text that reads Trackstar, which could be related to the bicycle or the cat. The overall scene suggests a domestic setting, possibly a home or a garage.

LLaVA-Next / LLaVA-Next w/ SSL

Please describe this image in detail. (Human)

The image captures a serene and modern bathroom bathed in soft light. Dominating the scene is a white toilet, its lid closed, standing next to a shower enclosure. The shower, with its clear glass door, is adorned with a wooden shelf that holds a single red bottle. Next to the shower, there is a wooden cabinet with a mirror. The dark brown cabinet contrasts with the white countertop, which has a sink and a silver faucet. The floor has light beige tiles, and the walls are painted light gray. The layout is neat and makes the bathroom feel calm and modern.

The image captures a modern bathroom bathed in a soothing beige color scheme. The bathroom is well-equipped with a white toilet, a white sink, and a white shower. The shower is equipped with a wooden cabinet, providing storage space for toiletries.
The sink is accompanied by a wooden cabinet, offering additional storage space. The bathroom is illuminated by a light fixture, casting a warm glow on the beige tiles that cover the floor and walls. The overall layout of the bathroom suggests a well-organized and functional space.

LLaVA-Next / LLaVA-Next w/ SSL

Figure 13: More examples of SSL in reducing hallucinated content in LLaVA-Next.

Please describe this image in detail. (Human)

The image features a large brown dog sitting on the floor in front of a flat-screen TV. The dog appears to be watching the TV, which is mounted on the wall above the dog. The TV is displaying a scene with a dog, likely capturing the dog's attention. In the room, there are two potted plants, one on the left side and the other on the right side. A remote control can be seen on the floor, close to the dog. Additionally, there is a
book placed on the floor, slightly to the right of the dog.

The image features a large brown dog sitting on a carpeted floor in front of a flat screen TV. The dog appears to be watching something on the TV screen, likely a program or a movie. The TV is positioned on the left side of the room. Profiling the dog's position, there are two potted plants in the room, one on the right side and the other on the left side. The overall scene depicts a cozy and comfortable environment for the dog.

LLaVA-1.5 / LLaVA-1.5 w/ SSL

Please describe this image in detail. (Human)

The image features a bathroom with a toilet situated on the left side of the room. A sink is located on the right side of the bathroom, and a mirror is mounted above it. The bathroom also has a large bowl-shaped sink, which is a unique and eye-catching design element. There are several toiletries placed around the bathroom, including a toilet paper roll, a toothbrush, and a cup. The toilet paper roll is positioned near the toilet, while the toothbrush and cup are placed on the sink counter. The overall atmosphere of the bathroom is clean and well-organized.

The image depicts a bathroom with a large sink positioned under a mirror. The sink is made of a dark wood material, adding a touch of elegance to the space. The bathroom also features a toilet on the left side of the room. There are several towels in the bathroom, with one placed on the sink and two others positioned on the floor. Additionally, there is a roll of toilet paper on the sink, ensuring that the bathroom is well-stocked for its users.

LLaVA-1.5 / LLaVA-1.5 w/ SSL

Figure 14: More examples of SSL in reducing hallucinated content in LLaVA-1.5.

Please describe this image in detail. (Human)

The image features a wooden pizza paddle on a table, with a partially prepared pizza dough on top of it. The pizza dough is topped with various ingredients, including tomatoes, shrimp, and other vegetables.
The paddle is positioned in the center of the table, with the pizza dough spread out on top of it. There are several cups placed around the table, including one on the left side and another on the right side. A bowl is also visible on the table, likely containing additional ingredients for the pizza. A spoon can be seen on the table, possibly used for mixing or preparing the pizza dough. <s>

The image features a pizza on a wooden paddle, which is placed on a table with a white cloth. The pizza is partially covered with various toppings, including shrimp, tomatoes, and other ingredients. The pizza is placed on the table, with the paddle being used to hold it in place. The pizza is large, with a significant portion of it visible on the table. The table is covered with a white cloth, providing a clean and elegant setting for the pizza. <s>

Please describe this image in detail. (Human)

The image features
a park with a bench placed near a bush. The bench is situated on the side of a path, with a tree growing next to it. There are several benches in the park, with one located closer to the bush and another further away. The park is well-lit, with streetlights illuminating the area. The bench near the bush is positioned in the foreground, while the other benches are placed in the background. The scene is peaceful and inviting, with the bench providing a comfortable spot for visitors to rest and enjoy the park's beauty. <s>

The image features a park with a bench placed next to a bushy shrubbery, which is positioned in the middle of the scene. The bench is placed on the side of the bush, with a few other benches placed around the area. There are two benches on the left side of the image, while another bench is located on the right side. The benches are placed at different heights, with one bench being slightly higher than the others. The bushy shrubbery provides a natural touch to the scene, adding a bit of greenery to the otherwise concrete-like environment. <s>

InstructBLIP / InstructBLIP w/ SSL
InstructBLIP / InstructBLIP w/ SSL

Figure 15: More examples of SSL in reducing hallucinated content in InstructBLIP.

The image is a digital artwork that features a dog dressed in a style reminiscent of the Mona Lisa. The dog is depicted with a brown coat and a black face, with a large, expressive nose and eyes. It is wearing a brown headband and a dark brown shawl with a pattern that resembles the folds of the Mona Lisa's dress. The dog's arms are crossed, and it is looking directly at the viewer with a calm and serene expression. The background of the image is a landscape with a mountainous terrain, featuring a river winding through the mountains. The sky is filled with clouds, and the overall color palette is dominated by earth tones, with a focus on browns and grays.
The painting style is reminiscent of the impressionist style, with loose brushstrokes and a focus on the interplay of light and shadow. The image is a playful and creative blend of the famous painting with the subject of a dog, combining elements of art history with a modern twist.

[Input Image] [Input Instruction] Describe this photo in detail. [Model Outputs]

The image is a digital artwork that pays homage to the famous painting "Mona Lisa" by Leonardo da Vinci. The central figure in the artwork is a dog, specifically a Chihuahua, dressed in a black dress with a white collar. The dog is wearing a brown headscarf and is seated against a backdrop of a landscape. The landscape features a river, mountains, and a castle. The dog's expression is neutral, and it is looking directly at the viewer. The artwork is a playful and creative take on the original painting, replacing the human subject with a dog.

LLaVA-Next / LLaVA-Next w/ SSL

GPT-4o Evaluation
Accuracy: 4.0 9.0
Reason: Assistant 1 inaccurately identifies the dog as a Chihuahua and mentions
a black dress with a white collar, as well as a castle, none of which are present in the image. Assistant 2 accurately describes the dog in the style of the Mona Lisa, mentioning the correct attire and general description, with only minor inaccuracies like referring to the face as black instead of brown.
Detailedness: 5.0 9.0
Reason: Assistant 1 gives less accurate details and misses several aspects of the image, resulting in less informative content. Assistant 2 provides a detailed description of the image, including the style, attire, and background elements. It accurately conveys the artistic style and elements borrowed from the Mona Lisa.

Figure 16: The performance of SSL on reducing hallucinations on LLaVA-Next-8B.

The creature appears to be made of a combination of materials, including a red or orange substance that resembles a horse or a camel, and a pair of eyeglasses. The overall appearance suggests that the substance is likely a combination of clay or a similar material, along with the eyeglasses, which add a creative and playful touch to the overall design.

[Input Image] [Input Instruction] [Model Outputs]

The creature appears to be made of a combination of plastic and glass.

LLaVA-1.5 / LLaVA-1.5 w/ SSL

GPT-4o Evaluation
Accuracy: 3.0 5.0
Reason: Assistant 1 accurately identifies the materials as plastic and glass, which may relate to the eyeglasses, but it doesn't address the fiery aspects or colors. Assistant 2 accurately describes the creature as having a red or orange appearance and mentions eyeglasses. However, the description of the material as clay is speculative and doesn't fully align with the molten, fiery appearance.
Detailedness: 2.0 6.0
Reason: Assistant 1 gives minimal description and lacks detail, primarily naming materials without describing the creature's form or distinctive features.
Assistant 2 provides more details about the creature's appearance, color, and shape, mentioning both a horse or camel likeness and speculating on the material, despite some inaccuracies.

What material appears to make up the creature?

Figure 17: The performance of SSL on reducing hallucinations on LLaVA-1.5-7B.

The image shows a man in a yellow shirt standing on the back of a yellow SUV, ironing clothes on the roof of the vehicle. He is surrounded by various items, including a suitcase, a backpack, and a purse. The scene takes place on a busy city street, with multiple cars and a traffic light visible in the background. The man's presence on the roof of the SUV is a unique and creative way to iron clothes, making it a memorable and attention-grabbing image. <s>

[Input Image] [Input Instruction] [Model Outputs]

In the image, a man is standing on the back of a yellow car, ironing clothes on the roof. He is positioned on the right side of the car, with the laundry basket placed on the roof rack. The car is parked on a busy street, with several other vehicles and a truck in the background. The scene is set in a bustling city, with people walking around and cars passing by. The man's actions are likely to draw attention from the pedestrians, as he is
ironing clothes on the roof of the car. <s>

InstructBLIP / InstructBLIP w/ SSL

GPT-4o Evaluation
Accuracy: 6.0 7.0
Reason: Both assistants correctly describe a man ironing on the back of a yellow vehicle. However, Assistant 1 inaccurately mentions the man standing on the roof and the presence of additional items such as a suitcase, a backpack, and a purse, which are not visible. Assistant 2 incorrectly mentions ironing on the roof, but doesn't add non-existent items, therefore has a slightly more accurate description.
Detailedness: 5.0 6.0
Reason: Assistant 1 provides some details, despite the inaccuracies, such as the mention of the city street and multiple cars, but includes incorrect additional items. Assistant 2 offers a more accurate context of the scene, noting the bustling city and people, though details about specific objects are still inaccurate. Both descriptions lack accuracy in detailedness, but Assistant 2 is slightly more coherent.

Describe this photo in detail.

Figure 18: The performance of SSL on reducing hallucinations on InstructBLIP-7B.
arXiv:2505.16148v1 [cs.LG] 22 May 2025

NAN: A Training-Free Solution to Coefficient Estimation in Model Merging

Chongjie Si1, Kangtao Lv2, Jingjing Jiang1, Yadao Wang3, Yongwei Wang2, Xiaokang Yang1, Wenbo Su3, Bo Zheng3, Wei Shen1
1Shanghai Jiao Tong University, 2Zhejiang University, 3Alibaba Group
{chongjiesi, wei.shen}@sjtu.edu.cn

Abstract

Model merging offers a training-free alternative to multi-task learning by combining independently fine-tuned models into a unified one without access to raw data. However, existing approaches often rely on heuristics to determine the merging coefficients, limiting their scalability and generality. In this work, we revisit model merging through the lens of least-squares optimization and show that the optimal merging weights should scale with the amount of task-specific information encoded in each model. Based on this insight, we propose NAN, a simple yet effective method that estimates model merging coefficients via the inverse of parameter norm. NAN is training-free, plug-and-play, and applicable to a wide range of merging strategies. Extensive experiments show that NAN consistently improves the performance of baseline methods.

1 Introduction

The widespread adoption of pre-trained models (PTMs) has revolutionized both NLP and CV by enabling efficient task-specific fine-tuning with minimal annotated data (Devlin et al., 2019; Dosovitskiy et al., 2020; Raffel et al., 2020). Public model hubs such as HuggingFace Transformers (Wolf et al., 2020), timm, and torchvision have accelerated the release of numerous backbone and fine-tuned checkpoints, leading to a rapid proliferation of task-specialized models. However, maintaining a separate model for each task imposes substantial storage and deployment overhead, posing scalability challenges in multi-task scenarios (Ruder, 2016).
While multi-task learning (MTL) offers a potential solution by jointly training on multiple tasks (Caruana, 1997), it is hindered by high computational costs, the need for simultaneous access to all datasets, and complexities in balancing heterogeneous task objectives (Jin et al., 2022).

To address the limitations of task-specific fine-tuning and the overhead of multi-task training, model merging has emerged as a promising paradigm for integrating independently fine-tuned models without access to training data (Ilharco et al., 2022; Kinderman et al., 2024; Yadav et al., 2023). While naive weight averaging often fails due to parameter misalignment (Wortsman et al., 2022), recent works have proposed more principled approaches involving importance weighting, task-vector manipulation, and pre-processing techniques. These methods demonstrate that, with appropriate alignment and weighting, model merging can serve as an efficient and modular alternative to multi-task learning. Despite the promising progress, model merging still faces a fundamental challenge: many existing methods rely on heuristic or intuitive strategies for the weight combination coefficients (Yadav et al., 2023; Ilharco et al., 2022), lacking rigorous theoretical justification. These limitations prompt a reconsideration of how model merging should be fundamentally approached.

In this work, we revisit the fundamental principles of model merging and propose a theoretically grounded framework. Starting from a least-squares formulation, we derive the optimal merging coefficients and reveal that the ideal merging weights should be proportional to the amount of task-specific information encoded in each model. Building on this
insight, we introduce NAN, a novel training-free model merging plugin that leverages this information-theoretic perspective to achieve effective integration of multiple fine-tuned models. Extensive experiments demonstrate the effectiveness and generality of our approach, with NAN improving the performance of baseline methods.

2 Related Work

Model merging aims to integrate multiple task-specific models into a single one, reducing the need to store and manage separate models for each task (Jin et al., 2022; Yadav et al., 2023; Yang et al., 2023; Stoica et al., 2023; Yu et al., 2024b; Ilharco et al., 2022). While naive weight averaging (Wortsman et al., 2022) is simple, it often leads to severe performance drops due to parameter misalignment. To overcome this, various methods estimate merging coefficients using heuristics or additional statistics. For instance, Fisher-Merging (Matena and Raffel, 2022) and RegMean (Jin et al., 2022) rely on Fisher or inner-product matrices, which must be provided or computed manually. Task vector-based approaches such as Task Arithmetic (Ilharco et al., 2022), Ties-Merging (Yadav et al., 2023), and AdaMerging (Yang et al., 2023) define merging in the space of model deltas, but their success heavily depends on intuitively selected or hand-tuned coefficients. Although AdaMerging estimates coefficients adaptively, it still assumes access to model-specific conditions. DARE (Yu et al., 2024b) sparsifies task vectors to reduce interference but shows limited gains and is only tested on a small number of tasks. Overall, most existing methods require either auxiliary information or strong manual heuristics.

3 Method

In this section, we conduct an in-depth exploration of model merging from the perspective of least-squares optimization.

3.1 Model Merging via Least Squares

To better understand the underlying principles of model merging, we begin with a simplified least-squares formulation.
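The setting can be made concrete with a toy instance. The sketch below (NumPy, with hypothetical shapes chosen only for illustration) sets up two least-squares tasks and checks numerically that the joint solution equals a matrix-weighted combination of the per-task solutions, which is exactly the identity derived in this subsection.

```python
import numpy as np

# Toy two-task setup; shapes (d, m, n1, n2) are hypothetical.
rng = np.random.default_rng(0)
d, m, n1, n2 = 5, 3, 40, 60
X1, Y1 = rng.normal(size=(n1, d)), rng.normal(size=(n1, m))
X2, Y2 = rng.normal(size=(n2, d)), rng.normal(size=(n2, m))

A1, b1 = X1.T @ X1, X1.T @ Y1
A2, b2 = X2.T @ X2, X2.T @ Y2

# Per-task closed-form solutions W_i* = A_i^{-1} b_i
W1 = np.linalg.solve(A1, b1)
W2 = np.linalg.solve(A2, b2)

# Joint solution vs. matrix-weighted combination of the per-task solutions
W_joint = np.linalg.solve(A1 + A2, b1 + b2)
W_merged = np.linalg.solve(A1 + A2, A1 @ W1 + A2 @ W2)

assert np.allclose(W_joint, W_merged)
```

The check holds exactly (up to floating-point error), since A_i W_i* = b_i by construction.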
Suppose we have two tasks, each associated with data matrices $X_1 \in \mathbb{R}^{n_1 \times d}$, $Y_1 \in \mathbb{R}^{n_1 \times m}$, and $X_2 \in \mathbb{R}^{n_2 \times d}$, $Y_2 \in \mathbb{R}^{n_2 \times m}$, where $X_i$ represents input features and $Y_i$ denotes task-specific supervision. For each task, we consider an independent least-squares problem:

$$\min_{W_1} \|X_1 W_1 - Y_1\|_F^2, \qquad \min_{W_2} \|X_2 W_2 - Y_2\|_F^2, \tag{1}$$

whose solutions admit closed forms:

$$W_1^* = (X_1^T X_1)^{-1} X_1^T Y_1, \qquad W_2^* = (X_2^T X_2)^{-1} X_2^T Y_2. \tag{2}$$

Now consider the joint least-squares objective that seeks a shared model $W$ across both tasks:

$$\min_W \|X_1 W - Y_1\|_F^2 + \|X_2 W - Y_2\|_F^2. \tag{3}$$

This problem has the closed-form solution $W^* = (X_1^T X_1 + X_2^T X_2)^{-1}(X_1^T Y_1 + X_2^T Y_2)$. To explore the relationship between the jointly optimized solution $W^*$ and the individually optimized $W_1^*$ and $W_2^*$, we note that

$$W_1^* = A_1^{-1} b_1, \qquad W_2^* = A_2^{-1} b_2, \tag{4}$$

where $A_i = X_i^T X_i$ and $b_i = X_i^T Y_i$. Then

$$W^* = (A_1 + A_2)^{-1}(b_1 + b_2). \tag{5}$$

We now attempt to express $W^*$ as a weighted combination of $W_1^*$ and $W_2^*$. Observe that

$$W^* = (A_1 + A_2)^{-1}(A_1 W_1^* + A_2 W_2^*), \tag{6}$$

which leads to

$$W^* = \Omega_1 W_1^* + \Omega_2 W_2^*, \tag{7}$$

where the merging coefficients are matrix-valued:

$$\Omega_1 = (A_1 + A_2)^{-1} A_1, \qquad \Omega_2 = (A_1 + A_2)^{-1} A_2. \tag{8}$$

This formulation reveals that the optimal merged solution is a weighted average of the individual solutions, where the weights are determined by the relative information content of each task, as quantified by $X_i^T X_i$, essentially the unnormalized covariance matrix of the inputs. In other words, tasks with more informative or higher-variance input distributions contribute more to the merged solution.

3.2 Sample-Weighted Merging
To further understand the behavior of the merging coefficients, we now consider the case where the input features are normalized. This is a common pre-processing step in deep learning pipelines, especially in representation learning and contrastive objectives. Under this normalization, the matrix $A_i = X_i^T X_i$ becomes approximately proportional to the sample size $n_i$, assuming the features are approximately isotropic: $A_i \approx n_i I_d$, where $I_d$ is the $d$-dimensional identity matrix. Substituting this into the earlier expression for the merged solution yields

$$W^* \approx \frac{n_1 W_1^* + n_2 W_2^*}{n_1 + n_2}. \tag{9}$$

This result provides a simple yet powerful insight: under normalized input features, the optimal merged model is approximately a sample-size-weighted average of the individually fine-tuned models. Consequently, the relative contribution of each model should be proportional to the amount of data it was trained on.

3.3 NAN: A Training-Free Plugin

In practice, when the exact values of $n_1$ and $n_2$ are not available, such as when merging open-source fine-tuned models, direct estimation of sample sizes becomes infeasible. To address this, we resort to empirical proxies that reflect the amount of information each model has absorbed during fine-tuning. Recent findings suggest that the variance of the learned weights is inversely correlated with the training data volume (Fort et al., 2019; Izmailov et al., 2018; Si et al., 2025; Du et al., 2025), i.e., $n \propto 1/\mathrm{Var}(W)$. Intuitively, models trained on larger datasets exhibit lower variance in parameter updates, as the optimization process averages out stochastic fluctuations over more samples. This observation provides a practical prior for estimating task importance. Given that most pre-trained and fine-tuned weights are approximately zero-centered (Du et al., 2025; Si et al., 2025), we adopt the variance of the weights as a proxy signal.
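The isotropy approximation behind Eq. (9), and the inverse-norm proxy just introduced, can both be illustrated with a short simulation. The following sketch (NumPy; all shapes and variable names are hypothetical, and the features are drawn zero-mean and unit-variance as the assumption requires) shows that $X_i^T X_i \approx n_i I_d$, that the matrix-valued weights then collapse to sample-size weights, and how inverse-Frobenius-norm coefficients are normalized when sample sizes are unknown.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n1, n2 = 6, 2, 20000, 5000

# Zero-mean, unit-variance ("isotropic") features make X_i^T X_i ~= n_i * I_d
X1, X2 = rng.normal(size=(n1, d)), rng.normal(size=(n2, d))
A1, A2 = X1.T @ X1, X2.T @ X2
rel_err = np.linalg.norm(A1 - n1 * np.eye(d)) / (n1 * np.sqrt(d))  # small, shrinks with n1

# With A_i ~= n_i * I, the matrix-valued weights of Eq. (8) collapse to the
# sample-size-weighted average of Eq. (9)
W1, W2 = rng.normal(size=(d, m)), rng.normal(size=(d, m))
W_matrix = np.linalg.solve(A1 + A2, A1 @ W1 + A2 @ W2)
W_scalar = (n1 * W1 + n2 * W2) / (n1 + n2)
gap = np.abs(W_matrix - W_scalar).max()  # small, shrinks as n_i grow

# When sample sizes are unknown, the inverse-Frobenius-norm proxy gives
# normalized coefficients: a smaller-norm model receives a larger weight
norms = [np.linalg.norm(W1), np.linalg.norm(W2)]
alphas = np.array([1.0 / n for n in norms])
alphas /= alphas.sum()
```

Running this with larger $n_i$ makes both `rel_err` and `gap` shrink, consistent with the approximation argument above.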
Assuming zero-mean updates, we have $\mathrm{Var}(W) \propto \|W\|_F^2$, where the Frobenius norm serves as a direct measure of magnitude. In practice, we adopt the Frobenius norm rather than its squared value to compute the merging coefficients, as the squared norm may introduce large scaling disparities and cause numerical instability during normalization. The norm itself offers a more stable approximation while still reflecting the relative importance of each model.

Combining this insight with our earlier derivation that optimal merging weights should scale with the sample size, we introduce Norm-Aware mergiNg (NAN), a training-free plug-in. Specifically, given $m$ task-specific models to be merged, NAN computes the coefficient of each model from the Frobenius norm of its weights $W_i$ as

$$\alpha_i = \frac{1/\|W_i\|_F}{\sum_{j=1}^{m} 1/\|W_j\|_F}. \tag{10}$$

When merging a large number of models, the normalized coefficients can become excessively small. To mitigate this issue, we apply a global scaling factor $m/2$ to the merged weights. NAN is highly versatile and can be seamlessly integrated into any existing model merging pipeline. It can be applied either directly to raw model weights or as a post-processing reweighting step following other merging strategies.

4 Experiment

Baselines. We compare NAN against the following baselines: Individual Models, Traditional Multi-task Learning, the training-based method AdaMerging (Yang et al., 2023), and several training-free methods, including Weight Averaging (Wortsman et al., 2022), Fisher Merging (Matena and Raffel, 2022), RegMean (Jin et al.,
2022), Task Arithmetic (Ilharco et al., 2022), and Ties-Merging (Yadav et al., 2023).

Vision Task. Following prior work (Yadav et al., 2023; Yang et al., 2023), we adopt ViT-B/32 and ViT-L/14 as the pre-trained backbones for all methods. Evaluation is conducted across eight image classification tasks: SUN397 (Xiao et al., 2010), Cars (Krause et al., 2013), RESISC45 (Cheng et al., 2017), EuroSAT (Helber et al., 2019), SVHN (Netzer et al., 2011), GTSRB (Stallkamp et al., 2011), MNIST (Yann, 1998), and DTD (Cimpoi et al., 2014). All datasets are evaluated using top-1 classification accuracy as the performance metric.

Table 1 shows the performance of various merging methods. While individual models and multi-task learning provide strong baselines, training-based methods require additional optimization and metadata. Among training-free approaches, NAN achieves consistently better performance when coupled with baseline methods. This demonstrates NAN's effectiveness as a simple and general merging strategy that does not rely on task-specific tuning or training.

Language Task. Following prior work (Yu et al., 2024a), we use LLaMA2-13B (Touvron et al., 2023) as the backbone and merge two of its fine-tuned variants: WizardLM-13B (Xu et al., 2024) and WizardMath-13B (Luo et al., 2023). We test performance on four datasets: MMLU (Hendrycks et al., 2021), CEval (Huang et al., 2023), GSM8K (Cobbe et al., 2021), and BBH (Suzgun et al., 2022). The results on GSM8K are evaluated following the official protocol of the Qwen2.5 Math Eval Toolkit (Yang et al., 2024), while the others are obtained using the OpenCompass evaluation framework (Contributors, 2023).

Table 2 shows the results of merging two LLaMA2-13B variants on four language understanding and reasoning benchmarks. Task Arithmetic and Ties-Merging both improve over the individual models, indicating the benefits of parameter fusion.
Our method achieves further gains, particularly on GSM8K, and yields the highest average performance across all datasets, demonstrating its effectiveness in merging complementary capabilities from general-purpose and math-specialized models.

Table 1: Multi-task performance when merging ViT-B/32 and ViT-L/14 models on eight tasks.

Method | SUN397 | Cars | RESISC45 | EuroSAT | SVHN | GTSRB | MNIST | DTD | Avg Acc
ViT-B/32
Pretrained | 62.3 | 59.7 | 60.7 | 45.5 | 31.4 | 32.6 | 48.5 | 43.8 | 48.0
Individual | 75.3 | 77.7 | 96.1 | 99.7 | 97.5 | 98.7 | 99.7 | 79.4 | 90.5
Traditional MTL | 73.9 | 74.4 | 93.9 | 98.2 | 95.8 | 98.9 | 99.5 | 77.9 | 88.9
AdaMerging++ | 60.8 | 56.9 | 73.1 | 83.4 | 87.3 | 82.4 | 95.7 | 50.1 | 73.7
Layer-wise AdaMerging | 64.5 | 68.1 | 79.2 | 93.8 | 87.0 | 91.9 | 97.5 | 59.1 | 80.1
Weight Averaging | 65.3 | 63.4 | 71.4 | 71.7 | 64.2 | 52.8 | 87.5 | 50.1 | 65.8
Fisher Merging | 68.6 | 69.2 | 70.7 | 66.4 | 72.9 | 51.1 | 87.9 | 59.9 | 68.3
RegMean | 65.3 | 63.5 | 75.6 | 78.6 | 78.1 | 67.4 | 93.7 | 52.0 | 71.8
Task Arithmetic (TA) | 55.2 | 54.9 | 66.7 | 78.9 | 80.2 | 69.7 | 97.3 | 50.4 | 69.1
TA + NAN | 59.3 | 58.2 | 69.7 | 83.3 | 76.2 | 71.0 | 96.1 | 61.6 | 70.7
Ties-Merging (Ties) | 59.8 | 58.6 | 70.7 | 79.7 | 86.2 | 72.1 | 98.3 | 54.2 | 72.4
Ties + NAN | 61.6 | 61.8 | 74.0 | 80.9 | 83.8 | 75.7 | 97.8 | 54.6 | 73.8
ViT-L/14
Individual | 82.3 | 92.4 | 97.4 | 100 | 98.1 | 99.2 | 99.7 | 84.1 | 94.2
Traditional MTL | 80.8 | 90.6 | 96.3 | 96.3 | 97.6 | 99.1 | 99.6 | 84.4 | 93.5
Task Arithmetic (TA) | 74.1 | 82.1 | 86.7 | 93.8 | 87.9 | 86.8 | 98.9 | 65.6 | 84.5
Ties-Merging (Ties) | 76.5 | 85.0 | 89.3 | 95.7 | 90.3 | 83.3 | 99.0 | 68.8 | 86.0
Ties + NAN | 74.4 | 84.3 | 87.7 | 95.3 | 89.5 | 92.5 | 99.2 | 68.5 | 86.4

Table 2: Results on language merging tasks.

Method | MMLU | CEval | GSM8K | BBH | Avg
WizardLM-13B | 53.6 | 32.6 | 38.8 | 19.4 | 36.1
WizardMath-13B | 54.2 | 37.7 | 46.9 | 44.8 | 45.9
Task Arithmetic (TA) | 56.3 | 39.5 | 52.7 | 35.7 | 46.0
TA + NAN | 56.3 | 38.8 | 64.1 | 34.6 | 48.5
Ties-Merging (Ties) | 55.9 | 40.0 | 55.3 | 38.9 | 47.5
Ties + NAN | 56.8 | 39.2 | 58.5 | 39.3 | 48.5

VLM Task. Following prior work (Si et al., 2025), we adopt the vision-language model (VLM) LLaVA-v1.5-13B (Liu et al., 2023) as the shared pre-trained base model and merge two of its fine-tuned variants: LLaVA-v1.6-13B (Liu et al., 2023), optimized for general multi-modal understanding, and Math-LLaVA (Shi et al., 2024), which is specialized for mathematical reasoning. We test performance on four datasets: MathVista (Lu et al., 2023), WeMath (Qiao et al., 2024), AI2D (Kembhavi et al., 2016), and GeoQA (Chen et al., 2021).

Table 3: Results on VLM merging tasks.

Method | MathVista | WeMath | AI2D | GeoQA | Avg
LLaVA-v1.5-13B | 34.3 | - | 61.1 | - | -
LLaVA-v1.6-13B | 33.6 | 30.1 | 67.9 | 23.9 | 38.9
Math-LLaVA | 45.8 | 33.9 | 66.7 | 46.6 | 48.3
Task Arithmetic (TA) | 43.7 | 35.2 | 69.3 | 41.2 | 47.4
TA + NAN | 44.9 | 36.5 | 67.2 | 46.6 | 48.8

Table 3 summarizes the results of merging two LLaVA-based models across four visual-language reasoning benchmarks. Compared to the individual models, Task Arithmetic achieves a reasonable trade-off, but still underperforms the task-specialized Math-LLaVA on certain datasets. By incorporating NAN into Task Arithmetic, we observe consistent improvements across most tasks, leading to the best overall average. This demonstrates that NAN can effectively enhance existing merging strategies in the multi-modal setting.

5 Conclusion

In this work, we present NAN, a novel training-free model merging framework grounded in a principled least-squares formulation.
By interpreting model merging through a theoretical lens, we derive optimal merging coefficients that reflect the task-specific knowledge embedded in each fine-tuned model. This perspective enables a simple yet effective merging plugin that circumvents the computational burden and retraining requirements of traditional multi-task learning and heuristic-based merging approaches. Our extensive empirical evaluation confirms the generality and robustness of NAN, which consistently achieves competitive or superior performance compared to existing baselines.

Limitations

While NAN demonstrates strong performance across various domains, it currently focuses on merging models with a shared pre-trained backbone and may require adaptation for merging across heterogeneous architectures or modalities.

References

Rich Caruana. 1997. Multitask learning. Machine Learning, 28:41–75.

Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric P Xing, and Liang Lin. 2021. GeoQA: A geometric question answering benchmark towards multimodal numerical reasoning. arXiv preprint arXiv:2105.14517.

Gong Cheng, Junwei Han, and Xiaoqiang Lu. 2017. Remote sensing image scene classification: Benchmark and state of the art. Proceedings of the IEEE, 105(10):1865–1883.

Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. 2014. Describing textures in the wild. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages 3606–3613.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

OpenCompass Contributors. 2023. OpenCompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, and others. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.

Zhekai Du, Yinjie Min, Jingjing Li, Ke Lu, Changliang Zou, Liuhua Peng, Tingjin Chu, and Mingming Gong. 2025. LoCA: Location-aware cosine adaptation for parameter-efficient fine-tuning. arXiv preprint arXiv:2502.06820.

Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan. 2019. Deep ensembles: A loss landscape perspective. arXiv preprint arXiv:1912.02757.

Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. 2019. EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7):2217–2226.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding.
Proceedings of the International Conference on Learning Representations (ICLR).

Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. In Advances in Neural Information Processing Systems.

Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. 2022. Editing models with task arithmetic. arXiv preprint arXiv:2212.04089.

Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. 2018. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407.

Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, and Pengxiang Cheng. 2022. Dataless knowledge fusion by merging weights of language models. arXiv preprint arXiv:2212.09849.

Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. 2016. A diagram is worth a dozen images. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV, pages 235–251. Springer.

Edan Kinderman, Itay Hubara, Haggai Maron, and Daniel Soudry. 2024. Foldable supernets: Scalable merging of transformers with different initializations and tasks. arXiv preprint arXiv:2410.01483.

Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 2013. 3D object representations for fine-grained categorization. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 554–561.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning. In
NeurIPS.

Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. 2023. MathVista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255.

Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. 2023. WizardMath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583.

Michael S Matena and Colin A Raffel. 2022. Merging models with Fisher-weighted averaging. Advances in Neural Information Processing Systems, 35:17703–17716.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Baolin Wu, Andrew Y Ng, and others. 2011. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, page 4. Granada.

Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, and others. 2024. We-Math: Does your large multimodal model achieve human-like mathematical reasoning? arXiv preprint arXiv:2407.01284.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.

Sebastian Ruder. 2016. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747.

Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy Ka-Wei Lee. 2024. Math-LLaVA: Bootstrapping mathematical reasoning for multimodal large language models. arXiv preprint arXiv:2406.17294.

Chongjie Si, Jingjing Jiang, and Wei Shen. 2025.
Unveiling the mystery of weight in large foundation models: Gaussian distribution never fades. arXiv preprint arXiv:2501.10661.

Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. 2011. The German traffic sign recognition benchmark: A multi-class classification competition. In The 2011 International Joint Conference on Neural Networks, pages 1453–1460. IEEE.

George Stoica, Daniel Bolya, Jakob Bjorner, Pratik Ramesh, Taylor Hearn, and Judy Hoffman. 2023. ZipIt! Merging models from different tasks without training. arXiv preprint arXiv:2305.03053.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, and Jason Wei. 2022. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, and others. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and others. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45.

Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, and
others. 2022. Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International Conference on Machine Learning, pages 23965–23998. PMLR.

Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. 2010. SUN database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3485–3492. IEEE.

Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Qingwei Lin, and Daxin Jiang. 2024. WizardLM: Empowering large pre-trained language models to follow complex instructions. In The Twelfth International Conference on Learning Representations.

Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. 2023. Ties-Merging: Resolving interference when merging models. Advances in Neural Information Processing Systems, 36:7093–7115.

An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, and others. 2024. Qwen2.5-Math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122.

Enneng Yang, Zhenyi Wang, Li Shen, Shiwei Liu, Guibing Guo, Xingwei Wang, and Dacheng Tao. 2023. AdaMerging: Adaptive model merging for multi-task learning. arXiv preprint arXiv:2310.02575.

LeCun Yann. 1998. The MNIST database of handwritten digits.

Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024a. Extend model merging from fine-tuned to pre-trained large language models via weight disentanglement. arXiv preprint arXiv:2408.03092.

Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024b. Language models are super mario: Absorbing abilities from homologous models as a free lunch. In Forty-first International Conference on Machine Learning.
arXiv:2505.16149v1 [cs.CV] 22 May 2025

When VLMs Meet Image Classification: Test Sets Renovation via Missing Label Identification

Zirui Pang1*, Haosheng Tan2*, Yuhan Pu3*, Zhijie Deng4, Zhouan Shen4, Keyu Hu4, Jiaheng Wei4†
1University of Illinois Urbana-Champaign, 2University of Glasgow, 3Boston University, 4The Hong Kong University of Science and Technology (Guangzhou)

Abstract

Image classification benchmark datasets such as CIFAR, MNIST, and ImageNet serve as critical tools for model evaluation. However, despite cleaning efforts, these datasets still suffer from pervasive noisy labels, and often contain missing labels due to co-existing image patterns in which multiple classes appear in a single image sample. This results in misleading model comparisons and unfair evaluations. Existing label cleaning methods focus primarily on noisy labels, while the issue of missing labels remains largely overlooked. Motivated by these challenges, we present a comprehensive framework named REVEAL, integrating state-of-the-art pre-trained vision-language models (e.g., LLaVA, BLIP, Janus, Qwen) with advanced machine/human label curation methods (e.g., Docta, Cleanlab, MTurk), to systematically address both noisy label and missing label detection in widely-used image classification test sets. REVEAL detects potential noisy labels and omissions, aggregates predictions from various methods, and refines label accuracy through confidence-informed predictions and consensus-based filtering. Additionally, we provide a thorough analysis of state-of-the-art vision-language models and pre-trained image classifiers, highlighting their strengths and limitations within the context of dataset renovation by revealing 10 observations. Our method effectively reveals missing labels in public datasets and provides soft-labeled results with likelihoods.
Through human verification, REVEAL significantly improves the quality of 6 benchmark test sets, aligning closely with human judgments and enabling more accurate and meaningful comparisons in image classification.

1 Introduction

Noisy and Missing Labels in Image Classification Test Sets. Image classification is a foundational computer vision task, and numerous benchmark datasets have been established to evaluate models' visual recognition performance. However, noisy labels are occasionally present in widely used image classification datasets (CIFAR [18], Caltech [14], ImageNet [11], MNIST [20]), especially in the corresponding test sets, which can compromise the reliability of model evaluations. Similar to [44, 33], we categorize these errors into two types [33]: (1) noisy-label errors, in which an incorrect label is assigned relative to the ground-truth label [31, 27, 42, 56, 41], and (2) missing-label errors, where multiple object classes appear within an image but only a single label is provided as ground truth [44, 33, 45]. More illustrations follow.

Caveat 1: Noisy Labels in Existing Test Benchmarks [termed noisy-label samples]

* Equal contribution. Work done as Research Assistant at HKUST-GZ.
† Corresponding to: jiahengwei@hkust-gz.edu.cn
Preprint. Under review.

Figure 1: Exemplary CIFAR-100 test set images with noisy labels. The text below each picture denotes the CIFAR-100 original label (first row) and the cleaned label in CIFAR-100 by [33] (second row); the original/cleaned pairs shown are lobster/crab, pear/poppy, crab/spider, castle/house, cup/bottle, sweet_pepper/apple, can/bottle, otter/seal, leopard/tiger, and dinosaur/seal.

Although benchmark test sets are intended to be precise and clean, noisy labels frequently persist, thereby undermining the integrity of evaluations. Figure 1 illustrates several instances of such errors in the CIFAR-100 test benchmark.

Caveat 2: Multi-Label Issues in Existing Test
Benchmarks [termed multi-label samples]

Multi-label issues are common in image classification benchmarks: real-world images often contain multiple objects, but only the primary object is annotated. This leads to co-existing images in which valid labels are omitted, potentially biasing model evaluations. Figure 2 illustrates examples drawn from the CIFAR-100 test set. For instance, the first sub-figure on the left is labeled "forest", yet two other classes (man, boy), which also belong to valid dataset classes, are clearly visible. Consequently, an image may convey additional semantic information beyond the assigned ground-truth label, potentially biasing models evaluated on these benchmarks.

Figure 2: Exemplary CIFAR-100 training images with multiple labels. The text below each picture denotes the CIFAR-100 original label (first row) and the human-annotated supplementary label (second row); the original/supplementary pairs shown include forest/man, boy; bridge/forest; table/chair; bee/sunflower; bicycle/man; forest/mountain; baby/bed; flatfish/man; lawn_mower/man; and mountain/sea. We did not exhaust all possible labels subjectively.

Dataset Renovation via Missing Label Imputation. Motivated by these observations, we propose REVEAL, a dataset renovation framework for correcting noisy-label errors and addressing multi-label omissions in widely-used public image classification test sets. Recent advances in Vision-Language Models (VLMs) offer promising solutions for this task, given their superior image recognition capabilities and robust performance in image-captioning tasks. For instance, BLIP [21] leverages a pretrained VLM decoder to synthesize high-quality training data, illustrating the potential of exploiting pretrained VLMs' prior knowledge. Inspired by this insight, we introduce a comprehensive pipeline (illustrated in Figure 3) that employs multiple state-of-the-art VLMs to renovate diverse test

Figure 3: REVEAL renovation pipeline.
Both the VLM-based and human-annotated methods first assign labels to each image independently. These preliminary labels are then aggregated using a weighted voting ensembling strategy. To refine the results, a score threshold is applied to filter the aggregated labels, followed by a softmax operation to compute the corresponding likelihoods. This process ultimately yields a soft-labeled output suitable for downstream tasks.

benchmarks and aggregates their outputs via a weighted voting ensembling strategy. By combining annotations derived from VLMs with targeted human judgments, our approach effectively reduces label noise and generates complementary labels in a reliable and cost-effective manner. Our contributions include:

• Missing Labels in Benchmarks: We highlight the critical yet overlooked issue of missing labels in widely-used image classification test sets, where images often contain multiple valid classes.
• Systematic Analysis of VLMs: We investigate the performance and limitations of VLMs in zero-shot image classification, revealing their strengths and weaknesses.
• Revealing New Observations: We identify potential failure modes in VLM predictions, including hallucination, repetition, and reluctance to abstain, especially with fine-grained or large label sets. Missing labels often reflect uncertainty or true multi-object presence, where semantic confusion arises in dense label spaces.
• Test Set Renovation with Ensembling: Building on these observations, we propose REVEAL, leveraging multiple VLMs to renovate six test benchmarks, aggregating predictions via an ensembling strategy to provide accurate soft labels and relevant annotations. REVEAL shows strong alignment with human judgment on most datasets.

2 Related Work

Label Noise Detection & Missing Label Imputation. Noisy and missing labels remain
https://arxiv.org/abs/2505.16149v1
major obstacles in image classification. The most reliable remedy, manual re-annotation, scales poorly to today’s large datasets [6,17,44,25]. Automated approaches typically involve label prediction or confusion-matrix estimation [13,15,52,39,36,7,50,19], yet they often underperform under real-world label noise [38,2,37,29,28,9,40,26]. Recent advances such as CLIP [35] have enabled zero-shot prediction techniques [57,47], while the Confident Learning (CL) framework provides a principled way to identify noisy labels and model uncertainty, particularly under class imbalance [32,34,46,43]. Beyond noise, the issue of missing labels is particularly pronounced in multi-label settings, where exhaustive annotation is prohibitively expensive. To mitigate this, Durand et al. [12] proposed a partial binary cross-entropy loss and a curriculum learning strategy that iteratively infers missing labels using model feedback. Ben-Cohen et al. [5] introduced a temporary model to estimate the distribution of unobserved labels, coupled with an asymmetric loss that emphasizes known labels. Ma et al. [30] preserved label semantics via contrastive embedding, while Zhang et al. [54] proposed a unified framework that combines bootstrapped label correction, multi-focal loss, and balanced training. Vision-Language Models for Classification Tasks The emergence of CLIP [35] demonstrated that pretrained VLMs can leverage label semantics for zero-shot image classification. ALIGN [16] scaled this approach using over a billion noisy image-text pairs, achieving strong performance on image-text retrieval benchmarks. Subsequent methods enhanced alignment and fusion [23,51], enabled multi-task learning and efficient fine-tuning (e.g., iCLIP [48], LiT [53]), and improved classification via prompt optimization (e.g., CoCoOp [55]). Recent efforts have focused on few-shot generalization and instruction-following capabilities.
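The CLIP-style zero-shot classification scheme referenced above can be sketched as follows: the image embedding is compared against text embeddings of the label names by cosine similarity, and the highest-scoring label wins. This is a minimal sketch; the random vectors below stand in for a real image/text encoder, and `zero_shot_classify` is a hypothetical helper, not an API of any library.

```python
import numpy as np

def zero_shot_classify(image_emb, label_embs, label_names):
    """CLIP-style zero-shot classification: return the label whose text
    embedding has the highest cosine similarity with the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = label_embs / np.linalg.norm(label_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarity per label
    return label_names[int(np.argmax(sims))]

rng = np.random.default_rng(0)
labels = ["airplane", "bird", "truck"]
label_embs = rng.normal(size=(3, 16))          # stand-in text embeddings
image_emb = label_embs[1] + 0.1 * rng.normal(size=16)  # image near "bird"
print(zero_shot_classify(image_emb, label_embs, labels))  # → bird
```

In a real system the embeddings would come from a pretrained dual encoder, with label names wrapped in prompts such as "a photo of a {label}".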
Flamingo [1] and OpenFlamingo [3] introduced cross-modal attention for strong performance across vision-language tasks. BLIP-2 [22] further advanced VLMs by connecting frozen image encoders with large language models through a query transformer. Instruction-tuned models such as InstructBLIP [10] improved alignment with natural-language prompts, enhancing robustness in open-ended settings. These developments underscore the potential of VLMs for open-vocabulary classification with minimal supervision.

3 Are Vision-Language Models Good Label Predictors?

In this section, we evaluate four state-of-the-art VLMs: BLIP [21], Janus-Pro-7B [49], Qwen-VL-Plus [4], and LLaVA-13B [24], for their effectiveness in detecting noisy and missing labels in the CIFAR-10 and CIFAR-100 datasets. We introduce the renovation prompt used for this evaluation as well as the experimental setup, and summarize their impact on VLM performance. For each model, we outline its capabilities and results, and provide a brief exploratory analysis of the findings. All models are evaluated under the consistent settings outlined in Table 7.

3.1 Prompting Choices

Prompts are a significant factor in VLM performance. We systematically examine three prompting strategies for guiding VLMs in image classification tasks: Binary Questioning, Direct Multi-Label Selection, and Batched Multi-Label Selection. The three categories of prompts are illustrated in Appendix B. The following provides detailed descriptions of each approach:

•Binary Questioning: Asking 10 yes/no questions per image for CIFAR-10 (100 for CIFAR-100), so each image requires 10 prompts (100 for CIFAR-100) to finish classification. This type of prompt
leads to a dramatic runtime cost on datasets with many labels.
•Direct Multi-Label Selection: Asking VLMs to choose all relevant labels from a fixed list of options (10 labels for CIFAR-10, 100 for CIFAR-100). Each image requires a single prompt.
•Batched Multi-Label Selection: We first partition the full label set into multiple batches, with each prompt containing only a single batch. VLMs are constrained to generate labels exclusively from the given prompt. Consequently, each image is processed with as many prompts as there are label batches.

[Figure 4: Evaluation of different prompt settings, plotting output length, running time (min), and recall against label batch size. Results shown from left to right are from Janus, LLaVA, and Qwen, evaluated on the first 100 images of the CIFAR-100 test data. To balance running time and recall, our label batch size is set to 20 accordingly.]

To evaluate prompt design for VLMs, we assess Janus, LLaVA, and Qwen on the first 100 images of the CIFAR-100 test set using recall, inference time, and output length. As shown in Figure 4, larger label batch sizes reduce inference time by minimizing prompt iterations but tend to lower recall due to increased task complexity. A batch size of 20 offers an optimal trade-off, significantly improving efficiency while maintaining recall above 0.8, and is thus adopted for CIFAR-100 renovation. Beyond batch size, incorporating reasoning, image descriptions, and label shuffling enhances label quality by leveraging VLMs’ captioning capabilities and mitigating positional bias. Our final prompt design integrates all these elements.
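Batched multi-label selection can be sketched as follows: the label set is partitioned into fixed-size batches and one selection prompt is built per batch, so an image needs as many prompts as there are batches. The prompt template and the helper name `make_batched_prompts` are illustrative assumptions, not the paper's exact wording.

```python
def make_batched_prompts(label_set, batch_size=20):
    """Partition the label set into batches of `batch_size` and build one
    multi-label selection prompt per batch (hypothetical template)."""
    batches = [label_set[i:i + batch_size]
               for i in range(0, len(label_set), batch_size)]
    template = ("Select every label from this list that appears in the image; "
                "answer 'None' if none apply: {labels}")
    return [template.format(labels=", ".join(b)) for b in batches]

# CIFAR-100 has 100 labels; with a batch size of 20, each image needs 5 prompts.
labels = [f"class_{i}" for i in range(100)]
prompts = make_batched_prompts(labels, batch_size=20)
print(len(prompts))  # → 5
```

With binary questioning the same image would need 100 prompts, which illustrates the runtime trade-off that motivates batching.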
3.2 Overall Experimental Setup

Besides the prompt settings, a few more parameters are required for running the VLMs. As BLIP generates outputs through an Image-Text Matching (ITM) head, we extract the associated confidence scores and apply a threshold to filter out low-confidence labels, thereby enhancing labeling quality. Additionally, to prevent excessive label output and improve precision, we employ a top-t filtering strategy, where t is determined by the cardinality of the label set. In summary, our setting comprises the labels-per-prompt, threshold, and top-t hyperparameters. The overall configuration for VLM-based dataset renovation is summarized in Table 7; details are given in Appendix C.

3.3 Observations and Performance

Following the renovation process, we conduct a detailed analysis of the results, identifying several notable patterns in the VLM renovation outputs. Examples of our observations are presented below, along with the general performance of each VLM on CIFAR-10.

Observation 1: Failure in Fine-grained Class Prediction Although instructed to select from a fixed set of labels, VLMs occasionally produce outputs that do not match any of the valid fine-grained candidates (e.g., sea snake), instead responding with more general terms (e.g., snake) that do not belong to the candidate set. This issue becomes more prevalent as the label set expands, potentially due to insufficient grounding in the specified label space.
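The score-based filtering described in Section 3.2 can be sketched as follows: aggregated per-label scores are thresholded, truncated to the top-t entries, and softmax-normalized into soft labels. The threshold value, the example scores, and the helper name `soft_labels` are illustrative assumptions; the paper's actual hyperparameters are given in Table 7.

```python
import math

def soft_labels(scores, threshold=0.5, top_t=3):
    """Filter aggregated label scores by a confidence threshold, keep the
    top-t survivors, and softmax-normalize them into soft labels."""
    kept = {label: s for label, s in scores.items() if s >= threshold}
    kept = dict(sorted(kept.items(), key=lambda kv: kv[1], reverse=True)[:top_t])
    z = sum(math.exp(s) for s in kept.values())
    return {label: math.exp(s) / z for label, s in kept.items()}

# Illustrative aggregated scores for one image:
scores = {"forest": 2.0, "man": 1.2, "boy": 0.9, "bridge": 0.2}
out = soft_labels(scores)
print(sorted(out))  # "bridge" falls below the threshold and is dropped
```

The softmax makes the surviving scores sum to one, so the output can be consumed directly as a soft-label distribution downstream.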
Observation 1: Failure in Fine-grained Class Prediction
Original Label: “sea snake”
Candidates: [“sea snake”, “garter snake”, ...]
Response (by Janus): “answer”: [“snake”], “reason”: “The image shows a creature with distinct features such as scales covering its body, elongated shape without limbs, and forked tongue which is characteristic of snakes.”

Observation 2: Label Repetition and Stalling Infinite repetition is occasionally observed when a large label batch (e.g., label batch size = 100) is provided. In some cases, identical labels are repeated multiple times within a single prediction, indicating that the model may become trapped in a repetitive loop. This behavior resembles a form of generation mode collapse, which diminishes both the diversity and informativeness of the model’s outputs.

Observation 2: Label Repetition and Stalling
Original Label: “cradle”
Response (by Janus): “answer”: [“baby”, “baby blanket”, “baby toys”, “baby crib”, “baby sleeping bed with toys and blanket”, “baby sleeping bed with toys and blanket and crib and toys and toys and toys and toys...”]

Observation 3: Irrelevant Associative Reasoning Even when the model correctly identifies the primary object in an image, it may still generate additional labels based on imaginative or weakly related associations. For instance, a model might correctly recognize an “oak tree” but subsequently generate labels such as “apple” despite their absence from or irrelevance to the visual content.

Observation 3: Irrelevant Associative Reasoning
Original Label: “oak tree”
Response (by LLaVA): “answer”: [“apple”], “reason”: “The tree has a distinct shape and is surrounded by a grassy area, which is reminiscent of the natural environment where apples grow.”

Observation 4: Refusal of Null Response In the context of batched multi-label selection, VLMs often fail to return “None” even when none of the candidate labels are relevant to the given image.
This behavior undermines the precision of the model’s responses and may stem from their training as cooperative assistants, which predisposes them to avoid issuing negative or null outputs.

Observation 4: Refusal of Null Response
Original Label: “porcupine”
Candidates: [“reel”, “radiator”, “sunglass”, “car mirror”, “tusks”]
Response (by Qwen): “answer”: [“reel”], “reason”: “The objects appear to resemble porcupines or similar animals perched on tree branches.”

Performance across VLMs on CIFAR-10 To better understand the labeling behavior of vision-language models on CIFAR-10, we analyzed both the agreement patterns between models and the frequency distribution of predicted classes. To locate biases and assess inter-model consistency, we constructed pairwise confusion matrices (Figure 5). The Janus–LLaVA matrix exhibits strong alignment, especially for visually distinct classes such as airplane, bird, and truck, where predictions closely match across models. However, the Qwen–LLaVA matrix reveals pronounced divergence: Qwen heavily over-predicts bird, cat, and dog, leading to inflated confusion with semantically or visually related categories such as deer and horse. This is further confirmed by the Janus–Qwen matrix, where Qwen’s tendency to misclassify ambiguous or fine-grained categories results in substantial off-diagonal errors. These discrepancies indicate that while the models share consensus on coarse or high-contrast categories, interpretive biases and over-generalizations persist when handling nuanced or overlapping visual features, causing varied predictions in finer-grained classification tasks.

[Figure 5: Pairwise confusion matrices over the ten CIFAR-10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck).]
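A pairwise inter-model confusion matrix like those discussed above can be built by counting, for each image, the class predicted by one model against the class predicted by the other; the diagonal then measures agreement. The predictions below are illustrative toy data, and `pairwise_confusion` is a hypothetical helper.

```python
import numpy as np

def pairwise_confusion(preds_a, preds_b, classes):
    """Count, over the same images, how often model A's predicted class
    co-occurs with model B's predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    mat = np.zeros((len(classes), len(classes)), dtype=int)
    for a, b in zip(preds_a, preds_b):
        mat[idx[a], idx[b]] += 1
    return mat

classes = ["airplane", "bird", "cat"]
janus = ["airplane", "bird", "bird", "cat"]
qwen  = ["airplane", "bird", "cat",  "cat"]  # disagrees on the third image
mat = pairwise_confusion(janus, qwen, classes)
print(mat.trace())  # images on which both models agree → 3
```

Large off-diagonal entries, such as the `bird`/`cat` cell here, flag systematic divergence between the two models on particular class pairs.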